How do bad ideas about assessment lead to workload problems?

This is part 7 of a series of blogs on my new book, Making Good Progress?: The future of Assessment for Learning. Click here to read the introduction to the series.

Bad ideas can cause workload problems. If you have a flawed understanding of how a system works, the temptation is to work harder to try and make the system work, rather than to look at the deeper reasons why it isn’t working.

The DfE runs a regular teacher diary survey. In the 2010 survey, primary teachers recorded spending 5 hours per week on assessment; by 2013, they were spending 10 hours per week on it. Confusion and misconceptions around assessment are creating a lot of extra work – and there is no evidence they are providing any real benefit.

So what are the bad assessment ideas that create workload without generating any improvement? Here are a few.

Over reliance on prose descriptors when grading work
Like a lot of teachers, I used to really dislike marking. But when I stopped to think about it, I realised that I actually really liked reading pupils’ work. What I disliked was sitting there with the mark scheme, trying to work out a grade and construct feedback from its descriptors. And it turns out there is a good reason for that: the human mind is not good at making these kinds of absolute judgement. The result is miserable teachers and not very accurate grades. There is a better way: comparative judgement.
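Comparative judgement works because it replaces absolute grading with many quick 'which of these two pieces is better?' decisions, which judges find much easier to make reliably. As a rough illustration only – not the method of any particular CJ tool – pairwise judgements can be combined into a single quality scale with a Bradley–Terry model. The scripts, judgements and function name below are invented for the example:

```python
# Hypothetical sketch: turn pairwise "winner beats loser" judgements
# into a quality score per script, via a simple Bradley-Terry fit.

def bradley_terry(scripts, judgements, iterations=100):
    """Estimate a strength for each script from (winner, loser) pairs."""
    strength = {s: 1.0 for s in scripts}
    wins = {s: 0 for s in scripts}
    for winner, _ in judgements:
        wins[winner] += 1
    for _ in range(iterations):
        new = {}
        for s in scripts:
            # Sum of 1/(s's strength + opponent's strength) over s's comparisons
            denom = 0.0
            for winner, loser in judgements:
                if s in (winner, loser):
                    other = loser if s == winner else winner
                    denom += 1.0 / (strength[s] + strength[other])
            new[s] = wins[s] / denom if denom else strength[s]
        # Rescale so strengths stay on a comparable scale across iterations
        total = sum(new.values())
        strength = {s: v * len(scripts) / total for s, v in new.items()}
    return strength

# Judges only ever answer: which of these two pieces is better?
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
scores = bradley_terry(["A", "B", "C"], judgements)
ranking = sorted(scores, key=scores.get, reverse=True)  # best first
```

Each individual judgement is fast and intuitive; the statistics, not the teacher, do the work of placing every script on a scale.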

Over reliance on prose descriptors when giving feedback
Prose descriptors are equally unhelpful for giving feedback. A lot of the guidance that comes with descriptors recommends using the language of the descriptors with pupils, or at least using ‘pupil-friendly’ variations of them. The result is that teachers end up writing out whole paragraphs at the end of a pupil’s piece of work: ‘Well done: you’ve displayed an emerging knowledge of the past, but in order to improve, you need to develop your knowledge of the past.’

These kinds of comments are not very useful as feedback because, whilst they may be accurate, they are not helpful. How is a pupil supposed to respond to such feedback? As Dylan Wiliam says, feedback like this is like telling an unsuccessful comedian that they need to be funnier.

I like the approach being pioneered by a few schools which involves reading a class’s responses, identifying the aspects they all struggled with, and reteaching those in the next lesson. If this response is recorded on a simple proforma, that can hopefully suffice for accountability purposes too.

Mistrust of short answer questions and MCQs
Short-answer questions and multiple-choice questions (MCQs) clearly can’t assess everything. But they can do some things really well, and they have the bonus of being very easy to mark. A good multiple-choice question is not easy to write, to be fair. But once you have written one, you can use it again and again with limited effort, and you can use MCQs that have been created by others too. Unlike feedback based on prose descriptors, feedback based on MCQs gives pupils something concrete they can actively do in response.
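To illustrate how cheap MCQ marking is – and how it can point directly at what to reteach – here is a minimal sketch. The question IDs, answer key and pupil responses are invented for the example, and the 50% threshold is an arbitrary assumption:

```python
# Hypothetical illustration: marking MCQs is a lookup, not a judgement call,
# and the same data shows which topics the class as a whole struggled with.

answer_key = {"q1": "B", "q2": "D", "q3": "A"}

responses = {
    "Amy":  {"q1": "B", "q2": "C", "q3": "A"},
    "Ben":  {"q1": "B", "q2": "D", "q3": "C"},
    "Cara": {"q1": "A", "q2": "C", "q3": "A"},
}

# Per-pupil scores: one correct-or-not check per question.
scores = {
    pupil: sum(chosen[q] == answer_key[q] for q in answer_key)
    for pupil, chosen in responses.items()
}

# Per-question facility: the proportion of pupils who got it right.
facility = {
    q: sum(responses[p][q] == answer_key[q] for p in responses) / len(responses)
    for q in answer_key
}

# Questions most pupils got wrong are the topics to reteach next lesson.
to_reteach = [q for q, f in facility.items() if f < 0.5]
```

The same responses that produce the scores also produce the reteaching list, which connects back to the proforma-and-reteach approach described above.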


2 thoughts on “How do bad ideas about assessment lead to workload problems?”

  1. Tom Burkard

    MCQs are useful for high-stakes testing, as they eliminate any element of subjectivity. In a recent paper – http://parliamentstreet.org/research/2017/free-schools-free-society/ – we argue that they should be the exclusive element of GCSEs in STEM subjects, and the major element in humanities. We agree that CJ is vastly preferable to mark schemes. The quality of information such testing provides would make it possible to dismantle most of the cumbersome accountability measures which now cripple the profession.

    However, for routine assessment, I think that short-answer tests are much better. Not only are they far easier to write–which compensates for the additional time spent marking–but they do a better job of securing learning in long-term memory. The effort of constructing a response and writing it down is considerably greater than merely recognising the correct response and differentiating it from the incorrect alternatives.

  2. literacyforpleasure

    I like the approach being pioneered by a few schools which involves reading a class’s responses, identifying the aspects they all struggled with, and reteaching those in the next lesson. If this response is recorded on a simple proforma, that can hopefully suffice for accountability purposes too. – This is EXACTLY what we are now doing in my class – It’s working a treat so far!

    https://literacyforpleasure.wordpress.com/2016/12/22/my-marking-routine-how-ive-organised-my-english-marking-workload-for-the-better/

