Why Placement? Why Directed Self-Placement?

By Dan Royer and Roger Gilles
Grand Valley State University

Why Placement?

In our view, there are only two good reasons to enforce the “placement” of students into different levels of first-year writing: to give under-prepared or otherwise disadvantaged students a better chance of succeeding in a first-year program; or to separate students of differing abilities so teachers can design reading and writing activities for students of roughly equal abilities. These are the reasons given by Edward M. White, probably the best single source on writing placement in general, and we agree with him.

To justify a placement system based on the first reason, you must be able to articulate what it means to succeed in your writing program and demonstrate that a significant number of your students would not succeed without a course or two of developmental work.

To justify a placement system based on the second reason, you must assess your entering students’ writing and determine a significant range of writing ability, and you must be convinced that the pedagogy of your program demands the segregation of students into groups of similar abilities.

What does it mean to succeed in a first-year writing program?

Passing a first-year writing class ought to be a challenge to average entering students, and the final writing of passing students ought to be solid, competent writing, whatever that means at a particular college or university. Placement is about helping students pass, and passing is about writing at a certain level. Crucially, then, a program’s placement system is really about that program’s outcomes.

It’s important not to let the term “succeed” transform into “do very well.” We all want our students to do very well, but most colleges and universities demand only that students do passing work or perhaps “C” level work to satisfy the writing requirement. Placing students in a basic writing course to help them get an A or a B instead of a C in the regular course is patronizing to the students. If a C is not good enough, it should not be a C. If it is good enough, we should allow students to make do with it. Forcing C students to become B or A students is at best a way to prop up a potentially unnecessary developmental course or program, and at worst a way to wring out extra tuition dollars from already cash-strapped students.

If students writing at a C level in your “regular” composition course are, by the standards of your college or university, poor writers, then you’ve got a grading problem, which may be quite different from a placement problem.

When do students of differing abilities need to be segregated for pedagogical purposes?

Mixing students who are all capable of at least passing a course in one semester is neither unreasonable nor unmanageable. In other words, a standard first-year writing course should be designed to “work” for those who are capable of distinguishing themselves with A work; those who are capable in a single semester of achieving B work, but probably no better; and those who, with diligence and helpful guidance, are capable of reaching the basic level of competence demanded of C work. College writing teachers will undoubtedly recognize that this represents quite a range of ability, but this is the range of ability that the grading system demands that a program group together. It’s the students who, even with hard work and careful guidance from a solid writing teacher, are unlikely to achieve C writing in a single semester who need to be placed into a developmental course or curriculum.

Some schools segregate students who appear likely to earn an A in a regular course and challenge them with an “honors” composition course or sequence. We stopped doing this at our school for two reasons: first, we observed that these students often got the best teachers, the “honors” teachers, and we wondered about the ethics of withholding the best teachers from the students who need them most, the poor students; and second, we realized that we value the modeling effect that strong student writers can have on other students. This is not to say that the “regular” course should not challenge the strongest students; it should. Indeed, the strongest students should set the standard for the regular course; that’s how we recognize the writing of B, C, and D students. B writers are those who, when challenged to excel, respond with “good” rather than “excellent” writing. We know it’s good rather than excellent because other students, the A students, have responded with excellent writing!

In our view, taking a process-based approach to writing instruction allows for productive work with students across a relatively wide range of abilities. Like most others, we advocate a workshop-like classroom setting involving reading and writing for ideas, drafting, and revising within a community of writers working toward the shared goals of the program. The point of the class is to become familiar with the basic forms of academic writing and to improve from beginning to end. Mixing writers of varying abilities in a context like this does not strike us as a problem.

Still, most schools do have some entering students who, for whatever reason, are simply not capable of achieving C work, as defined by that institution, in a given semester. These are the students for whom a developmental writing program needs to be designed. We suggest that you begin by asking the teachers of your standard first-semester course about the students who fail. Who fails the course, and why? The developmental writing course should be designed to help those students.

Why Directed Self-Placement?

Effective placement must be understood as “effective” for a particular program at a particular institution. Effective means that students who need extra support are receiving it, and that those who do not need it are not being given such support, whether they like it or not. Effective also means that students and teachers feel they are in the right place, and that the environment is right for learning.

There are many ways to channel students into different courses. Some schools use an indirect, external indicator such as SAT/ACT score, the TSWE, or high school GPA. These are actually pretty good at predicting success in college writing courses, but they do little in the way of convincing students and teachers that everyone is in the right place for the right reasons. Some schools use a diagnostic essay and/or a portfolio of writing from high school. Most scholars and administrators view these as preferable to test scores, but such writing samples are often incomplete “snapshots” of student writing or, in the case of portfolios, difficult to collect and assess with the limited time and budget most schools have to work with.

And some schools are now using directed self-placement: explaining to students the differences among the available courses and guiding them in their decisions about which course to take. DSP challenges two key assumptions: that college writing ability can be effectively measured outside the rich context of classroom assessment practices, and that writing ability alone best predicts success in the college writing classroom.

How do we measure college writing ability?

It seems simple enough, but our belief is that the best way to measure students’ college writing ability is to put them in a semester-long course, engage them in reading and writing activities, and see how they do. To determine how they do, we consider many behaviors and documents. At the very least, we look at several pieces of their finished writing. At our school, these include essays of a couple thousand words initially drafted over several weeks and revised and edited over the entire semester. The essays generally cite outside sources as well as the students’ own ideas and experiences. They are rich documents arising from a tremendously rich environment: a semester-long college classroom.

Most placement contexts and the resulting written documents pale by comparison. We don’t think many people in our field disagree with this. But we assert that they are unacceptably pale, especially when used to determine whether or not a student should take an extra college course or two, or three.

To put it in assessment terms, if the goal of a writing assessment is to measure how well students are likely to do in a college writing class, there is nothing more “valid” than a college writing class! Why settle for less?

But what about “reliability”? Over the past decades, scholars have developed ways to ensure a high degree of inter-rater reliability of holistically scored placement essays. A similar effort has not been put forth, however, to ensure the reliability of our courses. How much “inter-rater reliability” has your program achieved when it comes to final grades? How much agreement is there over what constitutes A, B, C, and D writing within your first-year program? Granted, it is not easy to determine how well students do in a semester-long course. It takes a tremendous amount of discussion, practice, and discipline to grade writing fairly and consistently across sections of a required course. But of all the “reliability” efforts we might invest ourselves in, this one seems to us the most important.

As it is, many schools around the country achieve admirable “reliability” in their placement testing, only to send students into courses whose teachers aren’t even expected to develop common grading criteria and standards.

So our method is to establish clear program expectations, communicate those to students as they enter the university, and then let them decide, based on their experiences as students and writers, how many semesters they need to meet those expectations.

Is success only about writing ability?

Some scholars claim that most placement methods (the scoring of timed-writing samples, for example) are good enough to determine where students fall in the continuum of entering students. We’re thinking here of Edward M. White, who was an early and persuasive proponent of the use of timed-writing samples for placement, and William L. Smith, who developed and wrote about probably the most rigorous placement program using timed-writing samples. In their view, placement is ultimately about determining which students “fit” a particular course. Experienced teachers read entrance essays, sort them, and send some students here and other students there. And then the program administrators ask the teachers, during the first week, if most or all of the students seem to “fit” in the class. Smith even goes so far as to ask teachers during the fourth or fifth week of the term which students now do and don’t seem to “fit” with the others. Those who don’t are called “misplaced,” and the administrators go about trying to revise their placement methods to minimize the number of “misplaced” students the next year.

As interested as these scholars and administrators are in how well students “fit” in their courses, they are uncomfortable talking about how these “misplaced” students actually do in the courses. Final grades don’t interest them very much. Why? Because, they say, so many factors go into final grades, beginning with the quality of the instruction, but also including such things as the students’ attendance, their motivation, the number of credits the students are taking, the part-time jobs they may hold, unexpected personal problems or illnesses that arise during the semester, and, we would add, the inconsistency of the grading itself.

And of course they’re right: placement cannot anticipate the real-life factors that largely determine which students actually do well or poorly in our classes. When we ask our teachers why their students fail, they rarely cite the students’ lack of writing ability. And yet assessing that “writing ability” is at the core of all traditional placement methods!

The theory behind traditional placement seems to be to segregate students by ability at the start (standardized placement, based on “we know what grade you ought to get”), and then to see how hard they work, or how well they handle their lives’ exigencies, over the course of the semester (relative grading).

DSP turns this around. The theory behind DSP is to explain to students ahead of time what will be expected of them in the required courses (standardized grading), and then to let them decide if they are ready to work hard enough, or able to handle their lives well enough, to succeed in the course (relative placement).

Obviously, then, DSP requires that a program understand its own goals and expectations, and that the teachers of the program actually follow those goals and adhere to those expectations. At our school, that is the real work of writing-program administration, and it is difficult work. Explaining the program to students and letting them decide which course is right for them: that’s the easy part.

Some Beginning Questions:

  1. Assuming for a moment that you are dissatisfied with your current placement practices, what do you think most needs to change? That is, what is the biggest problem with your current practice that you are trying to solve?
  2. How much of your sense of needing to change things has to do with financial or other “resource” exigencies? If these are your main problems, it may be helpful to think of this as something other than a “placement” problem. Would implementing DSP (or any other new placement method) respond to these conditions?
  3. In general, how can decisions about placement into first-year writing courses include the larger contexts of program pedagogy, program goals, and complex student lives?
  4. Are your program goals currently clear enough that “placement” has specific meaning in your college or university context? In what ways are you trying to ensure that all of your teachers are pursuing the same goals?
  5. Finally, what does it mean to “succeed” in your first-year writing program? Given what you know about what it takes to pass through your program, what kinds of students really need a developmental course?

Works Cited

  1. Smith, William L. “Assessing the Reliability and Adequacy of Using Holistic Scoring of Essays as a College Placement Technique.” Validating Holistic Scoring for Writing Assessment: Theoretical and Empirical Foundations. Ed. Michael M. Williamson and Brian A. Huot. Cresskill, NJ: Hampton, 1993. 142-205.
  2. White, Edward M. “An Apologia for the Timed Impromptu Essay Test.” College Composition and Communication 46 (1995): 30-45.
  3. ---. Developing Successful College Writing Programs. San Francisco: Jossey-Bass, 1989.

Recommended Texts

Besides the book listed above, two other Edward M. White books are worth reading in order to gain a wide perspective on writing assessment and placement: Teaching and Assessing Writing, 2nd ed. San Francisco: Jossey-Bass, 1994; and Assessment of Writing: Politics, Policies, Practices. Eds. Edward M. White, William D. Lutz, and Sandra Kamusikiri. New York: MLA, 1996.

For a fuller discussion of our own views on placement, along with many valuable articles on running writing programs, we recommend The Writing Program Administrator’s Resource: A Guide to Reflective Institutional Practice. Eds. Stuart C. Brown and Theresa Enos. Mahwah, NJ: Erlbaum, 2002.

For a fuller discussion of directed self-placement, see

  • Directed Self-Placement: Principles and Practices. Ed. Daniel J. Royer and Roger Gilles. Cresskill, NJ: Hampton, 2003.
  • Royer, Daniel J., and Roger Gilles. "Directed Self-Placement: An Attitude of Orientation." College Composition and Communication 50.1 (1998): 54-70.

If you are particularly interested in "basic writers" and the importance of directed self-placement, see "Basic Writing and Directed Self-Placement," also by Royer and Gilles, in Basic Writing e-Journal.


