The true university of these days is a collection of books.
--Thomas Carlyle (1795 - 1881)

What Do (Don't) You Know?


Every day I ask my students to "show me what you know." It may be in the form of an answer on a dry-erase lapboard they hold over their heads (we are low-tech/low-funds at our school, i.e., no cordless clickers). They may raise their hands and give an oral response. They may share with a neighbor and report orally afterward. About every 5-10 school days, they take an actual paper-and-pencil quiz or test. Each of these activities is a form of assessment that I use to inform and guide my instruction. In these days of high-stakes testing, I feel that the purpose of testing has mutated. End-of-year assessments have become less about guiding instruction and more about "accountability," which for many teachers carries sinister implications of unfair judgment. (There's a whole hugely important discussion about THAT for which we have no room here.) Recently, a 7-year-old asked me a question about testing that gave me much pause.

Our district, like most if not all in California, gives multiple assessments throughout the year. These tests are patterned after the California Standards Tests (CSTs) in both format and content. Their purpose is to inform the district (oh yeah, and the teachers) about each tested grade level's progress toward mastery of the standards before the CST at the end of the year. We are able to disaggregate the data to see who needs help in which areas. In other words, it is data that helps us determine what to teach and to whom.

The latest test is called the "Blueprint" test because it most closely matches the actual CST "Blueprints," the guidelines that determine the number and type of questions that will appear on each CST. It is supposed to provide data to drive instruction during the final days before the actual CSTs.

Before starting the district "Blueprint" test, a student asked, "If I don't know how to do a question, do I just leave it blank?" I instinctively launched into the "You-never-leave-a-question-blank" spiel, also known as the "If-you-don't-know-it-just-guess" refrain. Midway through the speech, it occurred to me that I was directly contradicting what I had told them not two minutes earlier: this test is to let me know which concepts we need to practice. If they guess at each problem they don't know how to solve, and they guess correctly, I may end up not covering a concept they actually need! I stopped myself and explained to the class that it was a really good question. I told them I needed to think about it and discuss it with other teachers, so for now they should try to solve every problem, guessing if they had to.

This dilemma reminded me of a grading/scoring technique I had used once a few years back. In my far-ranging interweb travels, I once stumbled across a website that encouraged something called Knowledge and Judgment Scoring. This is not a widely used model (yet), and Googling it will give you lots of hits, though virtually all of them point back to a single source: Professor Richard Hart and his grading program Power Up Plus (now in version 5.20).

In a nutshell, Professor Hart's scoring system gives students credit for getting an answer right and for wisely choosing to omit answers to questions they do not know how to answer. In doing so, teachers will get a better picture of what students know (the questions they answered and got right), where they are mistaken (answered but incorrect), and what students really don't know (omitted). By giving actual point credit for omitting answers, students are encouraged to think critically and metacognitively about their own learning. In other words, students become active monitors of their education rather than passively waiting for the teacher to tell them what they know and don't know.
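
To make the arithmetic concrete, here is a rough sketch (in Python) of how such a scoring scheme could be computed. To be clear, this is my own illustration, not Professor Hart's actual Power Up Plus program; it assumes the technique's 2:1 weighting (see my reply in the comments): two points for a correct answer, one point for a wise omission, and nothing for a wrong answer.

# Sketch of Knowledge and Judgment Scoring (my own illustration, not
# Professor Hart's Power Up Plus). Assumed weighting: 2 points for a
# correct answer, 1 for an omission, 0 for a wrong answer.
def kj_score(responses, answer_key):
    """Score one student's test and sort the items into three buckets:
    'knows' (answered correctly), 'mistaken' (answered incorrectly),
    and 'unsure' (wisely omitted). Omissions are recorded as None."""
    knows, mistaken, unsure = [], [], []
    points = 0
    for item, key in enumerate(answer_key):
        given = responses[item]
        if given is None:        # the student chose to omit
            unsure.append(item)
            points += 1
        elif given == key:       # correct answer
            knows.append(item)
            points += 2
        else:                    # wrong answer earns nothing
            mistaken.append(item)
    return points, 2 * len(answer_key), knows, mistaken, unsure

# Example: a five-question test; the student omits item 2 and gets
# item 3 wrong (items are numbered from 0).
key     = ["B", "C", "A", "D", "B"]
student = ["B", "C", None, "A", "B"]
score, possible, knows, mistaken, unsure = kj_score(student, key)
print(f"{score}/{possible}")                    # 7/10
print("reteach:", mistaken, "review:", unsure)  # reteach: [3] review: [2]

Notice what the weighting does: omitting earns half as much as answering correctly, so a student is never better off skipping something they actually know, yet a blind guess (a one-in-four chance at two points on a four-choice item) is expected to earn less than the sure point for omitting. And the "mistaken" and "unsure" buckets tell me exactly which concepts to reteach and to whom.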

This was a really attractive notion to me. I downloaded the "source code," a macro-embedded Excel spreadsheet (direct link to zipped spreadsheet), and used it a number of times that year. It was a bit cumbersome in that I had to enter each student's answers, but the feedback it provided was great! I was really able to target the areas my students had the most trouble with! They made great progress that year, and by the end I could really see in many of them a maturity in their attitudes toward their education that they hadn't shown before. (These students were 5th and 6th grade remedial students.)

The following year, I was placed in a first grade teaching position and had much less opportunity, and less inclination, to use the technique: my students didn't have the maturity to benefit from it, and the concepts were simple enough that there was basically no need to test for discrete knowledge of their various aspects. But the point is this: knowing what they know they don't know is highly valuable! I believe we need to pursue it.

Currently, I am struggling to keep my head above water in this new grade level. I had forgotten how much energy one expends working with lower-grade at-risk students. (I went from being an upper-grade remediation teacher to a second grade teacher this year.) Until now, I hadn't had much time or opportunity to ponder assessment in the second grade; for all intents and purposes, I was a noob, so I just rolled with the program already set up by returning teachers in previous years, though occasionally I would (of course) question techniques, practices, or specific assessments.

So, I am embarrassed that I had not more actively considered our assessment practices until now. And I am both humbled and grateful that this 7-year-old student asked me the essential question about testing! Whether I remain in second grade or end up teaching elsewhere, I believe I (we all?) should attempt to use the technique of "showing what you don't know" along with the traditional "showing what you know"!

I welcome any thoughts or comments.

(Last modified: 02/26/2013)

Comments

Joanne - 03/03/2013
I agree with you. If they don't know it, we need to know. I do wonder if some of them, due to maturity levels, would just look at a question and think, "I don't know it" and move on instead of concentrating on the question and trying to figure it out. I notice with my groups that they often do know how to do something, but they read the question too quickly or answer the question they think it is asking rather than what it is really asking. Having permission to skip it if they don't know it could be used as an excuse to complete very few questions. There is also the question of the validity of multiple choice tests. They are the easiest to score, but not the best indicator of what a child really can do. All things to ponder...

Mr. Flores - 03/04/2013
Of course, letting them use it as an excuse needs to be discouraged. Part of that is related to maturity. Part of it is related to how we present the assessment: as an opportunity rather than an inquisition. One of the characteristics of the technique is to give twice as many points for correct answers as for skipped questions, which can serve to mitigate lazy skipping for most students. I think this technique is valuable, but like all tools, it is not appropriate for every situation.
