I’ve been focusing on educational research for the past while, and I promise to return to it, but let’s take a few minutes to talk about what the public calls marking or grading, and educational insiders call student assessment.
Ever tried to figure out what student grades actually mean? If your child gets a B+ or a score of 78% on her report card, what does that tell you? Well, at a minimum, you can infer that the teacher gave some tests and assignments, perhaps evaluated some classroom activities, combined the numbers with some weighting formula or other, and calculated a number that’s supposed to represent something about what your child learned. Based on your experience as a student, you know that B+ and 78% are pretty good, but not likely the top of the class. The sad news is that much of the time, the teacher couldn’t tell you much more than this.
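To make the “weighting formula or other” concrete, here is a minimal sketch of how such a grade is typically assembled. The categories, weights, and scores are invented for illustration; no particular teacher’s scheme is implied.

```python
# Hypothetical grade categories and weights -- illustrative assumptions only.
weights = {"tests": 0.5, "assignments": 0.3, "classroom": 0.2}
scores = {"tests": 74.0, "assignments": 85.0, "classroom": 78.0}

# The report-card number is just a weighted average of category scores.
grade = sum(weights[c] * scores[c] for c in weights)
print(round(grade, 1))  # → 78.1
```

Note what the single number hides: a 78 could come from strong tests and weak assignments, or the reverse, and the parent reading the report card cannot tell which.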
In recent years, educators have struggled with the notion of student grades and have come to the unsettling if unsurprising conclusion that student grades roughly align with our intuitive notion of “how the child is doing in school” but are utterly lacking in precision. Worse yet, we’re not sure what “how the child is doing in school” actually means beyond gut-level instinct. And it doesn’t take too much extra effort to begin to suspect that gut-level instincts are probably tainted by our unspoken assumptions about boys and girls, about minority and majority class students, by general likeability, and—as has been demonstrated several times in the research literature—by grooming and attractiveness[i]. Something is deeply wrong with working at an intuitive level.
Before thinking about how we might get it right, I’ll point out a few of the obstacles to meaningful reform. First, teachers believe that they are doing a good job, have a grip on the perils and pitfalls of their assessment, and would rather not have bureaucrats tell them how to grade, thank you very much. Second, parents have the comfort of repetition. If student report cards are similar enough to what they received as children, this is good. Alternative assessment and reporting are confusing to parents, who do not have the benefit of working inside the system. Politics is the third big enemy of reform. Politicians responsible for education do not like to see public uncertainty. Nor do they like to see teachers and their professional associations (or unions) unhappy. And they definitely do not want anyone to think that traditional standards are at risk. I’ll deal with the practicalities and politics another day.
The most sensible of current reforms, at least to my mind, is the reconceptualizing of student assessment as a measurement. Of course, this raises the question: a measurement of what? There are a few candidates: performance, understanding, and learning come to mind. Let’s look at these one at a time.
Performance is a tempting thing to measure. This is good old behavioural psychology at work. If we can’t get inside our students’ minds to figure out what they know, we’ll give them tasks. We measure performance on those tasks (let’s ignore the details for now) and report on task-performance. This is very tempting. I assigned 10 math problems; the student completed 9, 7 of which were correct. The student didn’t hand in the assignment, and therefore completed 0% of the assigned tasks, and so on. The cleanness and neatness of this approach begin to fade when you start to wonder what the point of education might be. If the desired outcome of the class is to perform tasks (say, accurately calculate 2-digit by 2-digit multiplication), then this approach makes sense. If the desire is to report only on what the child did, this approach makes sense. If your desired outcomes are more complex, or more intellectual (say, interpret source documents to make a case for or against the invasion of Iraq), then the behavioural approach seems woefully inadequate. Worse yet, if you’d like to make any kind of inference about student understanding or learning, it is not clear how counting correct responses to closed questions brings you anywhere close to your goal.
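The 10-problem example above can be tallied several ways, and each tally tells a different story. A quick sketch, using the numbers from the paragraph:

```python
# Task-performance tallies for the example above: 10 problems assigned,
# 9 completed, 7 correct. The metric names are my own labels.
assigned = 10
completed = 9
correct = 7

completion_rate = completed / assigned  # did the student do the work?
accuracy = correct / completed          # of what was done, how much was right?
score = correct / assigned              # the conventional "7 out of 10"

print(completion_rate)        # → 0.9
print(round(accuracy, 2))     # → 0.78
print(score)                  # → 0.7
```

Three defensible numbers from one small assignment, and none of them says anything about whether the student understands multiplication or merely executes a memorized procedure.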
This doesn’t rule out the performance/behavioural approach; but it does point out some severe limitations to its usefulness, especially as students move beyond rudiments.
What about measuring understanding? We do have a slight problem in that understanding is a fairly difficult thing to measure. At a first approximation, a person understands something insofar as she can use language to bring another person to agreement about the conditions under which this understanding is seen to be true, or justified. This is why teachers and students need to talk. An oral case is far more persuasive than a multiple-choice examination. Prose is perhaps the most significant human technology for expressing understanding; this appears to be well understood in some teachers’ practice, but little understood by the education community as a whole. If we go this route, we immediately lose the sense of percentage grades, and some of the sense of the more coarse distinctions letter grades make.
Can we measure learning? By this I mean: can we measure student progress as the acquisition of new skills, abilities, and understanding? I think so. This is a radical shift from current practice, where we try to measure against a standard. Every student in the same classroom is expected to meet the same standard, regardless of what they knew when they walked in the door. If we measure learning, then we are still free to measure performance and/or understanding; we are making a commitment to measure change, rather than current standing.
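The difference between measuring current standing and measuring change can be shown in a few lines. The student names and scores are invented; the point is that the two views rank the same students differently.

```python
# Two views of the same classroom: standing (exit score alone) versus
# learning-as-change (exit minus entry). All data invented for illustration.
entry_scores = {"Ana": 40, "Ben": 85}
exit_scores = {"Ana": 70, "Ben": 90}

standing = dict(exit_scores)  # measuring against the standard
growth = {name: exit_scores[name] - entry_scores[name] for name in exit_scores}

print(standing)  # → {'Ana': 70, 'Ben': 90}
print(growth)    # → {'Ana': 30, 'Ben': 5}
```

By standing, Ben is the stronger student; by growth, Ana learned far more this term. A report card built on the second view would tell a very different story than one built on the first.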
As I so often do, I’ve only scratched the surface. But I hope you can see where this is going. I want to talk about student assessment in a number of ways. First, I want to speak of assessment as a measurement. This will take us into the strange world of “no zero” policies and beyond. I want to speak of assessment as a combination of measurement of performance and of understanding, and seek ways to meaningfully communicate this to students, parents and the public. I want to reframe assessment (and, dare I say it, instruction) to more closely reflect each individual student’s current abilities and trajectories, rather than trying to move a class, as one, to a fixed goal. Finally, I want to address wise and foolish use of assessment information.
It’s coming. I promise.
[i] The report incorrectly asserts that this “study is the first to demonstrate that non-cognitive traits play an important role in the assignment of grades in high school”. This is only the latest in a number of such studies, dating back at least to the 1960s.