Category Archives: Assessment

Global Education and Skills Forum 2018

Last weekend I spoke at the Global Education and Skills Forum in Dubai. I spoke for the motion in the following debate: ‘”I can just Google it” is making us stupid.’ You can see the video here. I’ve put a transcript of my speech below, together with references.

In a letter to a friend, the ancient philosopher Seneca recounted the story of a rich Roman merchant who wanted to appear as though he was a very well-read man. This merchant decided that instead of actually reading books himself, he would instead hire a team of slaves to do it for him. “He spent an enormous amount of money on slaves: one of them to know Homer by heart, another to know Hesiod, while he assigned one apiece to each of the nine lyric poets. Then, he used these slaves to give his dinner guests nightmares: He would have these fellows at his elbow so that he could continually be turning to them for quotations from these poets which he might repeat to the company.”[1]

Of course, no one nowadays has slaves to remember things for them. But we do all feel very comfortable with the idea that we can outsource our memories to Google. In my book, Seven Myths about Education, I devoted a chapter to collecting examples of technologists and educationalists telling us that remembering things just isn’t necessary in a world with ubiquitous smartphones.[2]

These people are wrong, and they are dangerously wrong. And it is not just ancient writers like Seneca who tell us they are wrong. There is a whole body of modern scientific literature which makes the same point. Somewhat ironically, a great deal of this research derives from the work of Herbert Simon, one of the pioneers of artificial intelligence and modern computing. What we know from this research about how the brain works is that memory and attention are two vital parts of our intellectual equipment.[3] We also know that memory and attention are under siege from modern technology like never before.[4] Let us consider these two vital components in turn: why do they matter, and why are they under threat from technology?

First, memory. Our memories matter because we need facts stored in long-term memory in order to be able to think. This is because our working memory – what you might think of as consciousness – is extremely limited and can handle only about four to seven new items of information. That isn’t nearly enough to do anything complex like driving a car, or reading a book. But we can cheat working memory’s limitations – not by hiring a bunch of slaves or using Google, but by committing facts to long-term memory. This is why memorising times tables matters. When you solve a complex real-world maths problem, you have to process a lot of information in working memory. If you also have to stop every second to type the times tables into your smartphone, your working memory will quickly be overwhelmed, and you will not be able to solve the problem. You’ll forget what the start of the problem was by the time you get to the end.[5] As one group of researchers has said, long-term memory is ‘the seat of human intellectual skill’.[6] What we know influences how we see the world, how we think and how we reason. Intuition and creativity are the product of large, well-memorised bodies of knowledge clashing against each other.[7] We can’t outsource this stuff.

If memory is so important, how do we make memories? The simplest answer is that we remember what we pay attention to – and that brings me to the second thing I want to talk about: attention.[8] If we pay attention to something, we are more likely to remember it. Our attention determines our memories. And nearly all of the major technology companies make their money by harvesting our attention and selling it to advertisers.[9] These companies have invented increasingly sophisticated methods of grabbing our attention, even if it involves distorting the truth, manufacturing outrage, and exploiting loneliness.[10] In the process, they don’t just distract our attention: they degrade its quality. Think how hard it is to concentrate on a book after spending an hour or so on social media.[11] Recent research shows that even the sight of a switched-off phone makes it harder to focus.[12] Given the vital importance of attention for forming memories, a system that is built on stealing and degrading our attention cannot make us smarter.

At this point, people might typically say: but what about the good uses of technology? What about the Khan Academys, the Duolingos, the Courseras? What about Andrew’s platform Cerego, which uses the science of learning to design educational content that really will stick in long-term memory? And I agree that these kinds of websites are fantastic. They give billions of people access to quality educational content at low or even no cost, which is amazing. We on this side of the house are absolutely not opposed to educational technology. I work for an ed tech company. In my previous job as an English teacher I was always experimenting with different methods of online learning. What we are opposed to are misconceptions like the one in the title of this debate: that you can just Google it. Or, as one Google executive said recently, ‘I don’t know why children are learning the quadratic equation. I don’t know why they can’t just ask Google for the answer.’ (See footnote 2.) And in fact, the reason we are so particularly opposed to misconceptions like this one is that such misconceptions damage good education technology. They make it harder for the really powerful and effective methods of education technology to fulfil their potential, because the really effective education technology is not about outsourcing memory, but about making the process of memorisation as effective, efficient and fun as possible.

Not only that, but good forms of education technology are also being damaged by the tech companies’ insatiable appetite for attention. Online education courses have a phenomenally high drop-out rate. One study from 2014 showed that just 13% of people who enrol on an online course complete it.[13] Why is this? Plenty of reasons have been put forward, but I would like to suggest that one important reason is that because these courses are delivered online, they are competing with everything else the internet has to offer – the instant social updates, the flash shopping discounts, the cat videos, Donald Trump’s Twitter feed. It isn’t enough to create fantastic educational content for free.[14] In order for it to make people smarter, people have to pay attention to it. And large numbers of them simply aren’t.

Of course one could imagine a world in which technology was used to make us smarter. I would happily sketch for you the outlines of a world where technology did make us smarter.[15] The point is that that is not the world we currently live in. The technology we use prioritises entertainment, outrage, distraction and convenience ahead of learning. By and large, the big money in technology is not going towards helping children to learn their times tables in the most efficient and fun way possible. It is going towards encouraging children to take another selfie, and to forget about the times tables because there’s a robot who will do it for them.

Seneca concluded his story of the Roman merchant with the following moral: “A sound mind can neither be bought nor borrowed.”[16] I would add the following modern updating. “A sound mind can neither be bought, nor borrowed, nor outsourced to the cloud.” And until we recognise that truth, Google will continue to make us stupider.

[1] Seneca. Letters from a Stoic. Ed. Robin Campbell. Penguin, 1969, Letter XXVII.

[2] Christodoulou, Daisy. Seven Myths About Education. Routledge, 2014, chapter 4. Seven Myths was published in 2014; plenty of similar claims have been made since then, including, for example, this one here by Jonathan Rochelle, Google’s director of education apps: “Referring to his own children, he said: ‘I cannot answer for them what they are going to do with the quadratic equation. I don’t know why they are learning it.’ He added, ‘And I don’t know why they can’t ask Google for the answer if the answer is right there.’”

[3] E.g., see Frantz, R. “Herbert Simon. Artificial intelligence as a framework for understanding intuition.” Journal of Economic Psychology 2003; 24: 265–277. Simon also wrote explicitly about education here: Anderson, J.R., Reder, L.M. and Simon, H.A. “Applications and misapplications of cognitive psychology to mathematics education.” Texas Education Review 2000; 1: 29–49. I discuss this paper in my blog post here.

[4] E.g., see Wu, Tim. The Attention Merchants: The Epic Scramble to Get Inside Our Heads. Vintage, 2017; also Teixeira, Thales S. “The rising cost of consumer attention: why you should care, and what you can do about it.” (2014). Simon also commented on the economics of attention here: Simon, Herbert A. “Designing organizations for an information-rich world.” (1971): 37–72. “In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

[5] Cowan N. “The magical number 4 in short-term memory: A reconsideration of mental storage capacity.” Behavioral and Brain Sciences 2001; 24: 87–114; Cowan N. Working Memory Capacity: Essays in Cognitive Psychology. Hove: Taylor and Francis, 2005. See also Miller G.A. “The magical number seven, plus or minus two: Some limits on our capacity for processing information.” Psychological Review 1956; 63: 81–97; More recently, Professor Daniel Willingham has written this New York Times article about this exact issue.

[6] Sweller J., van Merriënboer J.J.G. and Paas F.G.W.C. Cognitive architecture and instructional design. Educational Psychology Review 1998; 10: 251–296.

[7] Larkin, J., McDermott, J., Simon, D. P., & Simon, H. A. “Expert and novice performance in solving physics problems.” Science, 1980; 208(4450), 1335-1342, p.1335.

[8] Willingham D.T. Why Don’t Students Like School? San Francisco: Jossey-Bass, 2009, p. 53. William James also discusses attention in chapter 11 of The principles of psychology: ‘My experience is what I agree to attend to.’

[9] As Tristan Harris argues, the advertising model which underpins the modern technology economy means that companies ‘have an unbounded interest in getting more of people’s time on a screen’.

[10] See for example this article from the Guardian, which investigates YouTube’s ‘Most Recommended’ algorithm, and this one on how Facebook uses information on users’ emotional states. See also Jean Twenge, in this article in the Atlantic and in iGen: Why Today’s Super-connected Kids are Growing Up Less Rebellious, More Tolerant, Less Happy–and Completely Unprepared for Adulthood–and What That Means for the Rest of Us. Simon and Schuster, 2017: “The more time teens spend looking at screens, the more likely they are to report symptoms of depression.” See also Tromholt, Morten. “The Facebook experiment: Quitting Facebook leads to higher levels of well-being.” Cyberpsychology, Behavior, and Social Networking 19.11 (2016): 661–666.

[11] One small-scale study showed that undergraduates switch windows on their computers every 11 seconds on average. Yeykelis, Leo, James J. Cummings, and Byron Reeves. “The Fragmentation of Work, Entertainment, E-Mail, and News on a Personal Computer: Motivational Predictors of Switching Between Media Content.” Media Psychology (2017): 1-26.

[12] Ward, Adrian F., et al. “Brain drain: the mere presence of one’s own smartphone reduces available cognitive capacity.” Journal of the Association for Consumer Research 2.2 (2017): 140-154.

[13] Onah, Daniel FO, Jane Sinclair, and Russell Boyatt. “Dropout rates of massive open online courses: behavioural patterns.” EDULEARN14 proceedings (2014): 5825-5834.

[14] It should also be pointed out that whilst there is a lot of brilliant educational content on the internet, there are also a lot of educational claims made for websites, activities and games that are unlikely to lead to real learning. In his book Deep Work, Cal Newport points out the ‘absurdity of the now common idea that exposure to simplistic, consumer-facing products—especially in schools—somehow prepares people to succeed in a high-tech economy. Giving students iPads or allowing them to film homework assignments on YouTube prepares them for a high-tech economy about as much as playing with Hot Wheels would prepare them to thrive as auto mechanics.’ Newport also argues for the importance of attention, seeing uninterrupted ‘deep work’ as one of the main creators of value in the modern economy. Newport, Cal. Deep work: Rules for focused success in a distracted world. Hachette UK, 2016.

[15] For some suggestions, see the final chapter of my second book, Making Good Progress?: The future of Assessment for Learning. Oxford University Press, 2017.

[16] Seneca, ibid.


Research Ed 2017

This was the fifth national Research Ed conference, and in my mind they’ve started becoming a bit like FA Cup Finals or Christmas – recurring events that start to blur into one. “Oh, South Hampstead – was that the one where Ben Riley from Deans for Impact visited and it all kicked off about grammars?” “No, that was Capital City 2016. South Hampstead 2015 was the one where Eric Kalenze visited and where James Murphy taught us the Maori word for green.” Etc. Looking back at my notes from 2013, I find that Ben Goldacre warned then against the ‘energy-zappers’ who will criticise everything you do – too true.

  • The title of my talk was: Improving assessment: the key to education reform.
  • You can download my slides here: Research Ed 2017
  • The livestream is here.
  • If you’re interested in finding out more about comparative judgement, one of the things I talked about, then there are still a few places left on our London training day later this week.

As ever, it is inspiring to meet so many people who are so committed and excited about the cause of research in education, and to be able to talk and share ideas with them. I always come away from these conferences with my mind buzzing with new ideas. Research Ed has only been around for four years, but I cannot imagine the world of education without it. Here’s to many more brilliant conferences.

Feedback and English mocks

You can also read this post on the No More Marking blog.

In the previous few posts, I’ve looked at the workload generated by traditional English mock marking and at its low reliability, and I’ve suggested that comparative judgement can produce more reliable results and take less time. However, one question I frequently get about comparative judgement is: what about the feedback? Traditional marking may be time-consuming, but it often results in pupils getting personalised comments on their work. Surely this makes it all worthwhile? And beyond a grade, what kind of feedback can comparative judgement give you? This post is a response to those questions.

First, there’s a limit to the amount of formative feedback you can get from any summative assessment. That’s because summative assessments are not designed with formative feedback in mind: they are instead designed to give an accurate grade. So for the most useful kind of formative feedback, I think you need to set non-exam tasks. I write about this more in Making Good Progress.

Still, whilst formative feedback from summative assessments is limited, it does exist. When you read a set of exam scripts, there are obviously insights you’ll want to share with your pupils, and similarly it’s always helpful to read examiners’ reports to get an idea of the misconceptions pupils commonly have. I think we need to do fewer mock exams, because their usefulness is limited, but clearly when we do do them, we want to get whatever use we can from them.

So what is the best way for a teacher to give feedback on mock performance? The dominant method at the minute seems to be written comments at the bottom of an exam script. This is extraordinarily time-consuming, as we’ve documented here, and as other bloggers have noted here, here and here. What I want to suggest in this post is that these kinds of comments are also very unhelpful. Dylan Wiliam sums up why perfectly:

‘I remember talking to a middle school student who was looking at the feedback his teacher had given him on a science assignment. The teacher had written, “You need to be more systematic in planning your scientific inquiries.” I asked the student what that meant to him, and he said, “I don’t know. If I knew how to be more systematic, I would have been more systematic the first time.” This kind of feedback is accurate — it is describing what needs to happen — but it is not helpful because the learner does not know how to use the feedback to improve. It is rather like telling an unsuccessful comedian to be funnier — accurate, but not particularly helpful, advice.’

Wiliam, Dylan. Embedded Formative Assessment. Bloomington, Indiana: Solution Tree Press, 2011, p. 120.

This might seem like a funny and slightly flippant comment, but actually it expresses a profound philosophical point put forward in the work of philosophers such as Michael Polanyi and Thomas Kuhn: words are not always very good at explaining new concepts to novices. Often, part of what a novice needs to learn is what words like ‘systematic’ – or, to use an example from Kuhn, ‘energy’ – really mean. If pupils don’t know what these words mean, they can get stuck in a circular loop, like the one you might have experienced as a child when you looked up an unfamiliar word in a dictionary, only to find you didn’t know the words in its definition, so you looked those up too, only to find you didn’t understand those definitions either, and so on…

Much more helpful than written comments are actions: things that a pupil has to do next in order to improve their performance. These do not have to be individual to every pupil, and they do not have to be laboriously written at the bottom of every script. They can be communicated verbally in the next lesson, and they can be acted on in that lesson too.

How does all this fit in with comparative judgement? One objection people have to comparative judgement is that whilst it may give an accurate grade, it doesn’t give pupils a comment at the bottom of their script. We’ve heard of a couple of schools where after judging a set of scripts, they’ve then required staff to go back and write comments on the scripts too. This is totally unnecessary and unhelpful! Instead, we’d recommend combining comparative judgement with whole-class marking. Whole-class marking is a concept I first came across on blogs by Joe Kirby and Jo Facer at Michaela Community School. Instead of writing comments on a set of books, you can jot down the feedback you want to give on a single piece of paper. You can formalise this a bit more by developing a one-page marking proforma, which gives you a structure to record your insights as you mark or judge a set of scripts, and to help you plan a lesson in response to the scripts. Here’s an example we’ve put together based on some year 7 narrative writing. The parts in red are the parts that involve teacher and/or pupil actions.

Caveat: this is written out far more neatly and coherently than is necessary — we’ve only done this to illustrate how it works. These proformas can be much more messy, as in Toby French’s example here. What’s important is the thought process they support, and the record they will provide over time of actions and improvements. In short, combining comparative judgement with one-page marking proformas will drastically reduce the time it takes to mark a set of scripts, and will give your pupils far more useful feedback than a series of written comments.

Our aim with our Progress to GCSE English project is to use tools like the one above to allow schools to replace traditional mock marking with comparative judgement. We ran our first training days in July, and will be running more in the autumn term. To find out more, sign up to our mailing list here. Our primary project, Sharing Standards, takes a similar approach, and you can read more about it here.

Life after Levels: Five years on

Exactly five years ago, the government announced that national curriculum levels would be removed – and not replaced.

Here’s a quick guide to some of my life after levels blog posts from the last five years.

It was definitely a good thing to abolish levels. As I argued here, here and here, they didn’t give us a shared language. Instead, they provided us with the illusion of a common language, which is actually very misleading. This is because they were based on prose performance descriptors, which can be interpreted in many different ways. Unfortunately, many replacements for NC levels were based around the same flawed prose descriptor model.

If prose descriptors don’t work, what does? One good idea is to define your standards really clearly as questions. For example, instead of saying ‘Pupils can compare fractions to see which is larger’, actually ask them: ‘What’s bigger: 4/7 or 6/7? 2/3 or 3/4? 5/7 or 5/9?’ And don’t expect that if they get one of those questions right they will get them all right!
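To make the contrast concrete, here is a minimal sketch of what a question-based standard might look like if you wanted to track it systematically. The data structure and names are purely illustrative; the fraction questions are the ones above.

```python
# A standard expressed as concrete questions rather than a prose descriptor.
# Each question is tracked individually: one right answer does not imply
# mastery of the whole standard. (Illustrative sketch only.)
standard = {
    "compare fractions": [
        ("What's bigger: 4/7 or 6/7?", "6/7"),
        ("What's bigger: 2/3 or 3/4?", "3/4"),
        ("What's bigger: 5/7 or 5/9?", "5/7"),
    ],
}

pupil_answers = ["6/7", "3/4", "5/9"]  # a made-up pupil's responses

for (question, correct), answer in zip(standard["compare fractions"], pupil_answers):
    # Record performance per question, not as a single tick for the standard
    print(question, "->", "right" if answer == correct else "wrong")
```

This pupil gets two questions right and one wrong – exactly the mixed profile that a single prose descriptor would flatten into one judgement.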

This works well for maths, but what about things like essays? How do you mark those without a descriptor or a rubric? Another great idea is to use comparative judgement. I first wrote about this back in November 2015. It is basically the most exciting thing to happen to assessment ever. I am so excited about it that I am going to work for No More Marking who provide an online comparative judgement engine. If you haven’t read about it already, do! You can also watch this video of me talking about one of our pilot projects at Research Ed in 2016.
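For readers curious about the mechanics, the idea behind comparative judgement engines is roughly this: judges make many quick pairwise ‘which script is better?’ decisions, and a statistical model turns those decisions into a scaled score for every script. Below is a toy sketch assuming a simple Bradley-Terry model fitted by gradient ascent; No More Marking’s actual engine is of course more sophisticated than this.

```python
# Toy sketch: turn pairwise "script A beat script B" judgements into a scale
# using a Bradley-Terry model fitted by gradient ascent. Real comparative
# judgement engines use more robust fitting and add regularisation
# (e.g. a script that wins every comparison would otherwise drift upward).
import math
from collections import defaultdict

def fit_bradley_terry(judgements, n_iters=500, lr=0.05):
    """judgements: list of (winner_id, loser_id) pairs from judges."""
    scores = defaultdict(float)  # every script starts at ability 0
    for _ in range(n_iters):
        grads = defaultdict(float)
        for winner, loser in judgements:
            # Probability the current scores assign to the observed outcome
            p_win = 1 / (1 + math.exp(scores[loser] - scores[winner]))
            grads[winner] += 1 - p_win  # nudge the winner up
            grads[loser] -= 1 - p_win   # nudge the loser down
        for script, g in grads.items():
            scores[script] += lr * g
    return dict(scores)

# Five invented judgements over three scripts
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
for script, score in sorted(fit_bradley_terry(judgements).items(),
                            key=lambda kv: -kv[1]):
    print(f"{script}: {score:+.2f}")
```

The resulting scores put every script on a single scale, which is what lets comparative judgement produce reliable grades without a rubric.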

The two books I’ve found most helpful in thinking about assessment are Measuring Up by Daniel Koretz, and Principled Assessment Design by Dylan Wiliam. My review of Wiliam’s book is here. My review of Koretz’s book is in three parts: part one is How useful are tests?, part two is Validity and reliability, and part three is Why teaching to the test is so bad.

In February 2017, Oxford University Press published my own book on assessment, Making Good Progress?: The Future of Assessment for Learning. You can read more about it here. At the Wellington Festival of Education in 2016, I gave a talk which summarised the book’s thesis – you can see the video of this here.

I think the abolition of levels has given teachers the chance to take control of assessment, and has sparked debate, discussion and innovation around assessment which has been hugely valuable. Of course, things still aren’t perfect. National primary assessment has had a number of setbacks, and there are still lots of examples of ‘new’ assessment systems which are essentially rehashed levels. But overall I am really excited, both about the work that has happened in the last five years, and the potential for even further improvements in the next few years.


Five ways you can make the primary writing moderation process less stressful

The primary interim frameworks are now in their second year, and their inconsistencies have been well-documented. Education Datalab have shown that last year there were inconsistencies between local authorities, while more recently the TES published an article revealing that many writing moderators were unable to correctly assess specimen portfolios. Here are five ways to help deal with the uncertainty.

1. Look outside your school or network
Teachers are great judges of their pupils’ work, but find it much harder to place those judgements on a national scale. So wherever possible, try to get exposure to work outside your school to get a clearer idea of where the national standard is.

2. Use what we know about results last year
The interim frameworks were used for the first time last year and, as noted, there are plenty of inconsistencies in how they were applied. However, we do now know that last year, nationally, 74% of pupils were awarded the expected standard or above (EXS+) in writing, and 15% greater depth (GDS). This compares to 66% and 19% respectively in reading.

3. Check your greater depth (especially if you’re a school in a disadvantaged area)
There is particular evidence that greater depth is being applied inconsistently, and that schools with below average attainment overall are reluctant to award greater depth.

4. Remember that all achievement is on a continuum
Like all grades, ‘greater depth’ and ‘expected standard’ are just arbitrary lines. A pupil who just scrapes ‘expected standard’ actually has more in common with a pupil at the top of ‘working towards’ than with a pupil at the top of ‘expected standard’. Not everyone in the same grade will have exactly the same profile, and sometimes the differences between pupils with the same grade will be greater than the differences between pupils with different grades.

5. Use the Sharing Standards results
In March, 199 schools and 8512 pupils took part in Sharing Standards: a trial using comparative judgement to assess Year 6 writing. The results are available here, together with exemplar portfolios. The results offer all four of the benefits above: they involve teacher judgement from across the country; they use information from last year’s results to set this year’s standard; this means they avoid the problem of school-level bias; and they allow you to see the distribution of scripts, not just the grade.

Some people have expressed surprise at the quality of the work at the greater depth threshold. But as we’ve seen, there is no national agreement about what greater depth is. It is true that the comparative judgement process does not use the interim frameworks, but it does have the same intention: to support professionals in assessing writing quality. In our follow-up survey with schools, 98% of respondents said they are planning to use their results in their moderation process, as they felt the results supported their internal assessment of writing standards. The Sharing Standards results are the only nationally standardised scale of Key Stage 2 writing, so it can’t hurt to take a look and see how thousands of pupils nationally are doing.

Four and a half things you need to know about new GCSE grades

Last week I had a dream that I was explaining the new GCSE number grades to a class of year 11s. No matter how many times I explained it, they kept saying: ‘So 1 is the top grade, right, miss? And 3 is a good pass? And if I get 25 marks I am guaranteed a grade 3?’

Here are the four and a half things I think you need to know about the new GCSE number grades:

ONE: The new grading system will provide more information than the old one
When I taught in the 6th form, I felt that there were lots of pupils who had received the same grade in their English GCSE but who nevertheless coped very differently with the academic challenge of A-level. There are lots of reasons for this, but I think one is that grades C and B in particular are awarded to so many pupils. Nearly 30% of pupils receive a grade C in English and Maths, and there are clearly big differences between a pupil at the top of that grade and one at the bottom. With the new system, it looks as though the most common grade will be a 4, which only about 20% of pupils will get. With the old letter system, things had got a bit lop-sided: half the grades available were used to distinguish the top two-thirds of candidates. In the new system, two-thirds of the available grades will be awarded to the top two-thirds of candidates, which is fairer, provides more information, and will help 6th forms and employers distinguish between candidates.

TWO: We don’t know what the grade boundaries will be
Even with an established specification, it is really hard to predict in advance the relative difficulty of different questions, which is why grade boundaries can never be set in advance. This is even more the case with a new specification. We just don’t know how many marks will be needed to get a certain grade.

THREE: We do know roughly what the grade distribution will be like
Whilst we don’t know the number of marks needed to get a certain grade, we do know how many pupils will get a grade 4 and above (70%), and how many will get a grade 7 and above (16% in English, 20% in Maths). The new 4 grade is linked to the old C grade, and the new 7 to the old A. I’ve heard some people say that the new standards are a ‘complete unknown’. This isn’t the case. We know a lot about where the new standards will be, and this approach lets us know a lot more than other approaches which could have been taken (see below).
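To see how we can know the distribution without knowing the boundaries, here is a toy illustration (the marks and code are invented; this is not Ofqual’s actual procedure). Once you fix the proportion of pupils who must get a grade 4 or above, the mark boundary simply falls out of wherever this year’s marks happen to land:

```python
# Toy illustration of statistically anchored grading: if 70% of pupils must
# get grade 4 or above, the grade 4 boundary is whatever mark sits at the
# 30th percentile of this year's marks, however hard the paper turned out.
# Invented numbers and an illustrative function -- not Ofqual's real method.
def boundary_for(marks, proportion_at_or_above):
    ranked = sorted(marks)
    cut_index = int(len(ranked) * (1 - proportion_at_or_above))
    return ranked[cut_index]

marks = [12, 18, 25, 31, 33, 40, 44, 47, 52, 60]  # made-up exam marks
print(boundary_for(marks, 0.70))  # -> 31: the grade 4 boundary on this data
```

This is why the boundary marks cannot be announced in advance, even though the proportion of pupils at each grade can be.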

FOUR: There’s an ‘ethical imperative’ behind this process
The ‘ethical imperative’ is the idea that no pupil will be disadvantaged by the fact that they were the first to take these new exams (see pages 16-17 here). That’s why Ofqual have created a link between the last year of letter grades and the first year of number grades. Suppose these new specs really are so fiendishly hard that all the pupils struggle dramatically on them. 70% of pupils will still get a grade 4+. They are not going to be disadvantaged by the introduction of new and harder exams.

AND A HALF: Secondary teachers: if you don’t like this approach, just talk to a primary colleague about what they went through last year!
At Ark, I’ve been involved with the changes to Sats that happened last year, and the changes to GCSE grading that are happening this year. There was no ‘ethical imperative’ at primary last year, meaning we didn’t know until the results were published what the standard would be. Whereas we know in advance with the new GCSE that about 70% of pupils will get a 4 or above, at primary we were left wondering whether 80% would pass, or 60%, or 20%! We didn’t have a clue! In the event, the proportion of pupils reaching the expected standard in reading fell sharply compared to previous years. Not only did this lead to a very stressful year for primary teachers, it also means that it is extremely hard to compare results from before and after 2016. One might argue that this matters less at primary, as pupils do not carry the results with them through life or get compared to pupils from previous years. But of course, the results of schools are compared over time, and a great deal depends on these comparisons. So I think an ethical imperative would have been welcome at primary too, and that the new GCSE grades have been designed in the fairest possible way for both schools and pupils.

What makes a good formative assessment?

This is part 5 of a series of blogs on my new book, Making Good Progress?: The future of Assessment for Learning. Click here to read the introduction to the series.

In the last two blog posts – here and here – I’ve spoken about the importance of breaking down complex skills into smaller pieces. This has huge implications for formative assessments, where the aim is to improve a pupil’s performance, not just to measure it.

Although we typically speak of ‘formative assessment’ and ‘summative assessment’, actually, the same assessment can be used for both formative and summative purposes. What matters is how the information from an assessment is used. A test can be designed to give a pupil a grade, but a teacher can use the information from individual questions on the test paper to diagnose a pupil’s weaknesses and decide what work to give them next. In this case, the teacher is taking an assessment that has been designed for summative purposes, but using it formatively.
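As a concrete sketch of this summative-to-formative move, imagine grouping a pupil’s question-level marks by topic to find the weakest areas to reteach. The topics, marks and code below are invented for illustration:

```python
# Using a summative test formatively (illustrative sketch): aggregate a
# pupil's question-level marks by topic, then surface the weakest topics.
from collections import defaultdict

# (topic, marks gained, marks available) for each question on the paper
results = [("fractions", 1, 3), ("fractions", 0, 2),
           ("ratio", 4, 4), ("algebra", 2, 5)]

totals = defaultdict(lambda: [0, 0])
for topic, gained, available in results:
    totals[topic][0] += gained
    totals[topic][1] += available

# Weakest topics first: these are the ones to reteach next
for topic, (gained, available) in sorted(totals.items(),
                                         key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{topic}: {gained}/{available} marks")
```

The grade the test produces is the summative use; the topic breakdown is the formative use of exactly the same information.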

Whilst it is possible to reuse assessments in this way, it is also true that some types of assessment are simply better suited for formative purposes than others. Because complex skills can be broken down into smaller pieces, there is great value in designing assessments which try to capture progress against these smaller units.

Too often, however, formative assessments are simply mini-summative assessments – tasks that are really similar in style and substance to the final summative task, with the only difference being that they have been slightly reduced in size. So, for example, if the final assessment is a full essay on the causes of the First World War, the formative assessment is one paragraph on how the assassination of Franz Ferdinand contributed to the start of the war. If the final summative assessment is an essay analysing the character of Bill Sikes, the formative assessment is an essay analysing Fagin. The idea is that the comments and improvements a teacher gives pupils on the formative essay will help them improve for the summative essay.

But I would argue that in order to improve at a complex task, sometimes we need to practise other types of task. Here is Dylan Wiliam commenting on this, in the context of baseball.

The coach has to design a series of activities that will move athletes from their current state to the goal state. Often coaches will take a complex activity, such as the double play in baseball, and break it down into a series of components, each of which needs to be practised until fluency is reached, and then the components are assembled together. Not only does the coach have a clear notion of quality (the well-executed double play), he also understands the anatomy of quality; he is able to see the high-quality performance as being composed of a series of elements that can be broken down into a developmental sequence for the athlete. (Embedded Formative Assessment, p.122)

Wiliam calls this series of activities ‘a model of progression’. When you break a complex activity down into a series of components, what you end up with often doesn’t look like the final activity. When you break down the skill of writing an essay into its constituent parts, what you end up with doesn’t look like an essay. I wrote about this five years ago, setting out some of the activities that I felt could help pupils become good writers.

Once we’ve established a model of progression in a subject, then we can think about how to measure progress – and measuring progress is what the next post will be about.