Our district, like most if not all in California, gives multiple assessments throughout the year. These tests are patterned after the California Standards Tests (CSTs) in both format and content. The purpose of the tests is to inform the district (oh yeah, and the teachers) about students' progress toward mastery of the grade-level standards that will be tested on the CST at the end of the year. We are able to disaggregate the data to see who needs help in which areas. In other words, it is data to help us determine what to teach and to whom.
The latest test is called the "Blueprint" test because it most closely matches the actual test "Blueprints," the guidelines that determine the number and type of questions that will appear on each CST. It is supposed to provide data for driving instruction during the final days before the actual CSTs.
Before starting the district "Blueprint" test, a student asked, "If I don't know how to do a question, do I just leave it blank?" I instinctively launched into the "You-never-leave-a-question-blank" spiel, also known as the "If-you-don't-know-it-just-guess" refrain. Midway through the speech, it occurred to me that I was directly contradicting what I had told them not two minutes earlier. This test is to let me know which concepts we need to practice. If they guess at each problem they don't know how to solve, and they guess correctly, I may end up not covering a concept they actually need! I stopped myself and explained to the class that it was a really good question. I told them I needed to think about it and discuss it with other teachers; for now, they should guess if they had to, but try to solve the problems nonetheless.
This dilemma reminded me of a grading/scoring technique I used once a few years back. In my far-ranging interweb travels, I once stumbled across a website that encouraged something called Knowledge and Judgment Scoring. This is not a widely used model (yet), and Googling it will give you lots of hits, though virtually all of them point back to a single source: Professor Richard Hart and his grading program Power Up Plus (now in version 5.20).
In a nutshell, Professor Hart's scoring system gives students credit for getting an answer right and for wisely choosing to omit answers to questions they do not know how to answer. In doing so, teachers will get a better picture of what students know (the questions they answered and got right), where they are mistaken (answered but incorrect), and what students really don't know (omitted). By giving actual point credit for omitting answers, students are encouraged to think critically and metacognitively about their own learning. In other words, students become active monitors of their education rather than passively waiting for the teacher to tell them what they know and don't know.
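For the curious, here is a minimal sketch of how such a scoring scheme might be implemented. The point values are purely illustrative assumptions on my part (full credit for a right answer, partial credit for an honest omit, nothing for a wrong answer); the actual weights and reporting in Power Up Plus may differ.

```python
# Illustrative sketch of Knowledge and Judgment-style scoring.
# The weights here are assumptions for demonstration only; they
# are not necessarily the values Power Up Plus uses.

OMIT = None  # marks a question the student deliberately skipped


def kj_score(answers, key, right=1.0, omit=0.5, wrong=0.0):
    """Score one student's answers against an answer key.

    Returns (score, report), where the report tallies what the
    student knows (right), is mistaken about (wrong), and knows
    they don't know (omitted).
    """
    report = {"right": 0, "wrong": 0, "omitted": 0}
    score = 0.0
    for given, correct in zip(answers, key):
        if given is OMIT:
            report["omitted"] += 1
            score += omit
        elif given == correct:
            report["right"] += 1
            score += right
        else:
            report["wrong"] += 1
            score += wrong
    return score, report


key = ["B", "C", "A", "D", "B"]
student = ["B", OMIT, "A", "C", OMIT]  # two honest omits, one wrong guess
score, report = kj_score(student, key)
print(score, report)  # 3.0 {'right': 2, 'wrong': 1, 'omitted': 2}
```

The key design point is that the "omitted" column is information, not just a lost point: it is the student telling the teacher, in advance, exactly which concepts still need instruction.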
This was a really attractive notion to me. I downloaded the "source code", a macro-embedded Excel spreadsheet (direct link to zipped spreadsheet), and used it a number of times that year. It was a bit cumbersome in that I had to enter each student's answers by hand, but the feedback it provided was great! I was really able to target the areas where my students had the most trouble. They made great progress that year, and by the end I could see in many of them a maturity in their attitudes toward their education that they hadn't shown before. (These students were 5th and 6th grade remedial students.)
The following year, I was placed in a first grade teaching position and had much less opportunity, and less inclination, to use the technique: my students didn't have the maturity to benefit from it, and the concepts were simple enough that there was basically no need to test for discrete knowledge of their various aspects. But the point is this: knowing what they know they don't know is highly valuable! I believe we need to pursue it.
Currently, I am struggling to keep my head above water in this new grade level. I had forgotten how much energy one expends working with lower-grade at-risk students. (I went from an upper-grade remediation teacher to a second grade teacher this year.) Until now, I hadn't had much time or opportunity to ponder assessment in the second grade; for all intents and purposes, I was a noob, so I just rolled with the program already set up by returning teachers in previous years, though occasionally I would (of course) question techniques, practices, or specific assessments.
So, I am embarrassed that I had not considered our assessment practices more actively before now. And I am both humbled and grateful that this 7-year-old student asked me the real question about testing! If I remain in second grade, or indeed wherever I end up teaching, I believe I (we all?) should attempt to use the technique of "showing what you don't know" along with the traditional "showing what you know"!
I welcome any thoughts or comments.