From the dean
SOE faculty helping answer pressing questions about assessment and student learning
By Bill McDiarmid
Oct. 12, 2010
Improving measures of student learning ranks as one of the most critical issues we face. Recognition that we don’t yet have the types of measures we need to drive policy, curriculum, and teaching in the right direction comes from all quarters – including the White House.
Somewhat lost amid the fanfare surrounding recent announcements of states, including North Carolina, winning Race-to-the-Top (RttT) grant competitions was the announcement that the U.S. Department of Education was funding two large consortia of states to develop new student assessments.
In the long run, the work of these consortia may have greater impact on teachers’ classroom practices than the RttT state grants.
Fundamentally, assessments are tools to determine the effects of instruction. The policy context of the past couple of decades has focused time, attention, and resources on “summative” assessments – that is, tests given at the end of a period of instruction. In North Carolina, the End-of-Grade and End-of-Year tests are examples of such assessments.
A major limitation of summative tests is that educators don’t receive their students’ test results until it’s too late to change their instruction. Although the results may be helpful to the teacher for the next year’s group of students, teachers will tell you that what worked or didn’t with one group may or may not work well with the next.
As the SOE’s Professor Greg Cizek has argued, educators need ongoing assessments of student progress. Greg was recently quoted in a New York Times article noting that “Research has long shown that more frequent testing is beneficial to kids, but educators have resisted this finding.”
This is a challenge for us as we prepare educators: How do we help them not merely understand the value of frequent assessments but also learn how to incorporate them into their instruction?
Teachers need immediate feedback on whether their students are learning what they are being taught. For decades, teachers have used a variety of ways to determine whether their charges are learning – quizzes, question-and-response, scanning faces and body language, and so on.
Sometimes these methods provide valid information but often they don’t. For most students, when the teacher asks if everyone understands, nodding is the path of least resistance. In addition, many teachers, confronted with, say, results from a unit test, don’t feel they have time to re-teach the materials their students apparently did not learn.
We have learned from research that students who have frequent opportunities to demonstrate their understanding of the content learn considerably more than students in classrooms lacking such opportunities. Termed “formative assessments,” these short, frequent measures of student learning provide teachers with immediate feedback to inform their teaching moves.
Student learning increases even more when students learn to monitor and assess their own learning – a focus of Assistant Professor Jeff Greene’s research on “self-regulated learning.”
Moreover, we are learning more about creating such assessment opportunities.
Since the 1990s, scholars have promoted the idea of “authentic assessments” – that is, classroom tasks that reveal not merely students’ grasp of the content but their ability to apply what they are learning to “real-world” situations.
For instance, during a lesson on the Bill of Rights, a teacher might ask students to write letters to the editor about a local book-banning controversy, basing their arguments on the First Amendment.
The new consortia will undoubtedly also investigate “performance assessments” in which students are asked to complete a task – individually or collectively – that reveals their understanding of the underlying knowledge and skills. Writing and math portfolios – both used in various state assessments over the past couple of decades – are examples of performance tasks.
Before “No Child Left Behind,” some states, notably Kentucky, used such performance tasks as part of the state assessment. Although testing experts have concerns about the reliability of such performances, they are much more authentic – and expensive – than more conventional measures. As always, there are tradeoffs with any type of assessment.
We feel very fortunate to have both Greg Cizek and Jeff Greene on the SOE faculty. Their research provides a basis for improving our preparation of educators. Good preparation programs are grounded in research that is not only well designed and carefully conducted but that addresses the most pressing problems of practice that we face.
Click the links in the accompanying box for more of Greg’s and Jeff’s ideas and work.