Research Ed 2016: evidence-fuelled optimism

One of the great things about the Research Ed conferences is that whilst their aim is to promote a sceptical, dispassionate and evidence-based approach to education, at the end of them I always end up feeling irrationally excited and optimistic. The conferences bring together so many great people and ideas that it’s easy to think educational nirvana is just around the corner. Of course, I also know from the many Research Ed sessions on statistics that this is sampling bias: the 750+ people at Capital City Academy yesterday are an entirely unrepresentative sample of just about anything, and educational change is a slow and hard slog, not a skip into the sunlit uplands. Still, I am pretty sure there must be some research that says if you can’t feel optimistic at the start of September, you will never make it through November and February.

And there was some evidence that the community of people brought together by Research Ed really are making a difference, not just in England but in other parts of the world too. One of my favourite sessions of the day was the last one by Ben Riley of the US organisation Deans for Impact, who produced the brilliant The Science of Learning report. Ben thinks that English teachers are in the vanguard of the evidence-based education movement, and that we are way ahead of the US on this score.   One small piece of evidence for this is that a quarter of the downloads of The Science of Learning are from the UK. There clearly is a big appetite for this kind of stuff here. In the next few years, I am really hopeful that we will start to see more and more of the results and the impact of these new approaches.

Here’s a quick summary of my session yesterday, plus two others I attended.

My session

For the first time, I actually presented some original research at Research Ed, rather than talking about other people’s work. Over the last few months, I have been working with Dr Chris Wheadon of No More Marking on a pilot of comparative judgment of KS2 writing. We found that the current method of moderation using the interim frameworks has some significant flaws, and that comparative judgment delivers more reliable results with fewer distortions of teaching and learning. I will blog in more depth about this soon: it was only a small pilot, but it shows CJ has a lot of promise!

Heather Fearn

Heather (blog here) presented some research she has been working on about historical aptitude. What kinds of skills and knowledge do pupils need to be able to analyse historical texts they have never seen before, or comment on historical eras they have never studied? The Oxford History Aptitude Test (HAT) asks pupils to do just that, and I have blogged about it here before. In short, I think it is a great test with some bad advice, because it constantly tells pupils that they don’t have to know anything about history to be able to answer questions on the paper. Heather’s research showed just how misleading this advice is. She got some of her pupils to answer questions on the HAT, then analysed their answers and looked at the other historical eras they had referred to in order to make sense of the new ones they encountered. Pupils were much better at analysing eras, like Mao’s China, where comparisons to Nazi Germany were appropriate or helpful. When asked to analyse eras like 16th century Germany, they fell back on anachronisms such as talking about ‘the inner city’, because they didn’t really have a frame of reference for such eras.

This is a very, very brief summary of some complex research, but I took two implications from it, one for history teachers, and one for everyone. First, the more historical knowledge pupils have, the more sophisticated their analysis can be and the more easily they can understand new eras of history. Second, there are profound and worrying consequences of the relentless focus in history lessons on the Nazis. Heather noted that her pupils were great at talking about dictatorships and fascism in their work, but when they had to talk about democracy, they struggled because they just didn’t understand it – even though it was the political system they had grown up with. This seems to me to offer a potential explanation of Godwin’s Law: we understand new things by comparing them to old things; if we don’t know many ‘old things’ we will always be forcing the ‘new things’ into inappropriate boxes; if all we are taught is the Nazis, we will therefore end up comparing everything to them. I think this kind of research shows we need to teach the historical roots of democracy more explicitly – perhaps by focussing more on eras such as the ancient Greeks and the neglected Anglo-Saxons.

Ben Riley

Ben is the founder of Deans for Impact, a US teacher training organisation. The Science of Learning, referenced above, is their report on the key scientific knowledge teachers need in order to understand how pupils learn. In this session, Ben presented some of their current thinking, which is more about how teachers learn. Their big idea is that ‘deliberate practice’ is just as valuable for teachers as it is for pupils. However, deliberate practice is a tricky concept, and one that requires a clear understanding of goals and methods. We might have a clear idea of how pupils make progress in mathematics. We have less of an idea of how they make progress in history (as Heather’s research above shows). And we probably have even less of a clear idea of how teachers make progress. Can we use deliberate practice in the absence of such understanding? Deans for Impact have been working with K. Anders Ericsson, the world expert on expertise, to try to answer this question. I’ve been reading and writing a lot about deliberate practice over the last few months as part of the research for my new book, Making Good Progress?, which will be out in January. In that book, I focus on using it with pupils. I haven’t thought as much about its application to teacher education, but there is no doubt that deliberate practice is an enormously powerful technique which can lead to dramatic improvements in performance – so if we can make it work for teachers, we should.

Comparative judgment: practical tips for in-school use

I have blogged a bit before about comparative judgment and how it could help make marking more efficient and more reliable, and help to free the teaching of writing from tick box approaches. I think CJ has the potential to be used for national assessments – that’s why I’m working with Dr Chris Wheadon and the No More Marking team on a national experiment to moderate Key Stage 2 writing assessments using comparative judgment. However, whatever happens nationally, there are ways you can use CJ within your own school. The No More Marking website allows you to use comparative judgment for free. The website is very easy to use and it is definitely worth a go if you are interested. Here are some suggestions based on what we have done at Ark, plus some practical tips.

Primary writing assessments

There are no interim frameworks outside Y2 and Y6, so using CJ for writing in Y1, 3, 4 and 5 feels as good a solution as any. If you want to try to measure progress across these years, you could get pupils to do a task now and judge it now. Then keep the scripts and include them in another judging session at Christmas or this time next year, and see how they compare to the same pupils’ work at that point in time. We haven’t done this yet but it feels like a really powerful way of showing pupils the progress they are making. I know lots of schools already keep portfolios of pupils’ work across time, so you could use these to start the process.

KS3 English assessments

I’ve personally found it easier to judge writing assessments than to judge literature essays. Others have said the same. We recently got some very high reliability scores when judging a set of allegories that had been written by our Y8 pupils.

KS4 English exams

We haven’t used CJ for KS4 tasks yet, but it would certainly be possible. You could try and judge entire exams, or just pick out individual questions. I feel that individual questions would be easier to judge, and that you would get more accurate results for them. I think you would also get more interesting discussions and feedback afterwards when sharing the results.

KS3 history assessments

CJ has worked just as well for us in history, although again, I found judging history essays to be harder and slightly more time-consuming than judging writing tasks. We judged a set of essays on the Battle of Hastings. This is a classic Y7 task and it was interesting for me to see the different ways different teachers had approached it.

And here are some general practical tips

  • Go to No More Marking, set up an account for free, and then, on the dashboard, create a new task.
  • You will need to upload all the scripts from your pupils. You can upload them as JPGs or PDFs. If the pupil work is on paper, you’ll need to scan it – if you have a copier with this facility that shouldn’t be too difficult. The slightly fiddly bit is making sure every separate PDF or JPG has a pupil name or identifier in the title. This means that when you get the results, you will easily be able to see which pupil has which mark. Alternatively, you can use the QR-coded answer sheets that nomoremarking.com provides: the sheets automatically recognise which pupil is which from the code on the scan and match them to their results.
  • How many judgments do you want per script? If reliability is very important, you will want 10 judgments per script. If it is less important, you can get away with 5. It can feel nice to aim high and try to get a reliability score of 0.85 or more, but there are a couple of things to consider. First, what level of reliability are you getting at the moment without comparative judgment? You probably don’t even know, or have a way of finding out. So if you can get a reliability score of 0.75 from doing 5 judgments, that’s more than likely to be an improvement on what you are doing currently. You might be able to get up to 0.9 by doubling the judgments, but you will need to consider whether it is worth doubling the amount of time; it will depend on what you are using the results for (there is a rough simulation of this trade-off after these tips). I am starting to think that in some cases, doing 5 judgments per script as a quick sift and then meeting as a team to discuss the results and set standards might be the best way forward.
  • Do you want to do the judging together as a group, or send out the links to people? To begin with, I’ve found it quite powerful to have people in the same room doing the judging. Whatever you choose to do, I also think it is worth having a group follow-up session where you discuss the scripts and think about why certain scripts were better than others, and what the teaching and learning implications are. As I have said before, the two immediate benefits of CJ are that it saves time and it is more reliable. But the longer-term benefit is freeing teaching from the tick box. If you don’t meet afterwards to discuss the results and implications, then you are making it harder to achieve that.
  • Do you want to include exemplars or not? These can make it easier to apply standards or grades once you have the results, but I would only include exemplars if you are confident of their standard. Also, be careful – if you are putting in a script that you feel sure is a C grade, is it a top or a bottom C? A C grade covers a wide range, so you need to be sure.
  • I would recommend trying to get scripts from more than one class to begin with (if possible even more than one school). One of the nice things about the CJ tasks we have done is how they make it quick and easy for teachers to see how other teachers and pupils have attempted similar tasks.
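
If you are wondering why more judgments per script buys you more reliability, here is a rough simulation in Python. To be clear, this is not how No More Marking calculates its reliability statistic: it is a toy Bradley-Terry-style model I have sketched for illustration, the function names and numbers are all invented, and the correlation between the ‘true’ and estimated script qualities is only a crude stand-in for a proper reliability figure. But it does give a feel for the trade-off between judgments per script and the stability of the results.

```python
import numpy as np

rng = np.random.default_rng(0)


def simulate_judging(n_scripts=50, judgments_per_script=5, judge_noise=1.0):
    """Simulate pairwise comparative judgments and check how well a
    fitted quality scale recovers the 'true' script qualities."""
    true_quality = rng.normal(size=n_scripts)

    # Random pairs until each script appears in roughly the requested
    # number of judgments (each judgment involves two scripts).
    n_pairs = n_scripts * judgments_per_script // 2
    pairs = [rng.choice(n_scripts, size=2, replace=False) for _ in range(n_pairs)]

    # A judge prefers the script whose quality looks higher, with some noise.
    outcomes = []
    for a, b in pairs:
        p_a_wins = 1 / (1 + np.exp(-(true_quality[a] - true_quality[b]) / judge_noise))
        outcomes.append((a, b) if rng.random() < p_a_wins else (b, a))

    # Crude Bradley-Terry-style fit by gradient ascent (a stand-in for the
    # proper estimation a real comparative judgment engine would use).
    est = np.zeros(n_scripts)
    for _ in range(300):
        grad = np.zeros(n_scripts)
        for winner, loser in outcomes:
            p = 1 / (1 + np.exp(-(est[winner] - est[loser])))
            grad[winner] += 1 - p
            grad[loser] -= 1 - p
        est += 0.05 * grad

    # Correlation with the true qualities as a rough proxy for reliability.
    return float(np.corrcoef(true_quality, est)[0, 1])


for k in (5, 10, 20):
    print(f"{k} judgments per script -> correlation {simulate_judging(judgments_per_script=k):.2f}")
```

Running something like this shows the recovered ranking getting steadily closer to the ‘true’ one as the number of judgments per script rises – and also how quickly the gains flatten off, which is exactly the ‘is it worth doubling the time?’ question above.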

A new blog you need to follow

A good friend of mine, Maria Egan, has just set up a new education blog. It’s called the Razor Blade in the Candy Floss.

Maria has been an enormous influence on my thinking and writing so I am really pleased she has set up this blog, although it does mean I won’t be able to pass off her ideas as my own any more. I first met Maria when she was a teacher at my school in my final year at 6th form. I actually gave her the nickname ‘Razor Blade in the Candy Floss’ because of the habit she had of saying things that seemed completely innocuous but turned out to contain a bit of a sting in the tail – often a sting you only realised a couple of hours later while you were trying to avoid doing your homework.

Maria is also a bit of a dark horse because it turns out she has been running this blog on her school’s intranet for a while but I never knew about it, even though we have long discussions about education (this may have something to do with the fact that a lot of our discussions degenerate into monologues by me). The most recent post is a review of David Brooks’s The Road to Character. I recommended this book to Maria because I really liked it and thought it was quite insightful. However, as ever, Maria has a way of pointing out some of the flaws in it in a way that makes me feel slightly credulous.

And although I agree with Brooks that the ‘me me me’ culture is deplorable, I think that the underlying premise of the book is a bit pessimistic.  Are all the world’s greatest people really dead?  Is there an ever dwindling number of people with personality traits we could aspire to emulate?  Am I really going to start attending funerals where the nicest thing anyone can think to say about the deceased is that she had a growth mindset?  Life is different from before; people are different from before, but I’m not despairing about the current generation or future ones.  I couldn’t teach if I were.

So, in short: follow that blog!

“Best fit” is not the problem

I can remember taking part in marking moderation sessions using the Assessing Pupil Progress grids. We marked using ‘best fit’ judgments. At their worst, such ‘best fit’ judgments were really flawed. A pupil might produce a very inaccurate piece of writing that everyone agreed was a level 2 on Assessment Focus 6 – write with technical accuracy of syntax and punctuation in phrases, clauses and sentences. But then someone would point out how imaginative it was, and say that it deserved a much higher level for Assessment Focus 1 – write imaginative, interesting and thoughtful texts. Using a best fit judgment, therefore, a pupil would end up with the national average level even though they had produced work that had serious technical flaws. Another pupil might get the same level, but produce work that was much more accurate. Given this, it is easy to see why ‘best fit’ judgments have fallen out of favour. On the new primary interim writing frameworks, to get a certain grade you have to be ‘secure’ at all of the statements. So, we’ve moved from a best fit framework to a secure fit one. Another way of saying this is that we have moved from a ‘compensatory’ approach, where weaknesses in one area can be made up with strengths in another, to a ‘mastery’ approach, where pupils have to master everything to succeed.
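
To make the contrast concrete, here is a toy sketch of the two approaches in Python. The criteria names, the 1–5 scores and the two pupils are all invented for the example – they are not the real framework statements – but they show how a best fit (compensatory) judgment and a secure fit (mastery) judgment treat the same work very differently.

```python
# Toy contrast between 'best fit' (compensatory) and 'secure fit' (mastery)
# marking. Criteria names and scores are invented purely for illustration.

def best_fit_level(scores):
    """Compensatory: strengths in one area can offset weaknesses in another."""
    return sum(scores.values()) / len(scores)

def secure_fit_level(scores):
    """Mastery: the overall level is capped by the weakest criterion."""
    return min(scores.values())

# An imaginative but technically weak piece vs. a plain but accurate one.
pupils = {
    "imaginative but inaccurate": {"composition": 5, "vocabulary": 5, "punctuation": 2, "spelling": 2},
    "plain but accurate":         {"composition": 3, "vocabulary": 3, "punctuation": 4, "spelling": 4},
}

for name, scores in pupils.items():
    print(f"{name}: best fit {best_fit_level(scores):.1f}, secure fit {secure_fit_level(scores)}")
# Under best fit both pupils land on the same overall level (3.5);
# under secure fit each is dragged down to their weakest criterion (2 vs 3).
```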

The problem with the ‘secure’ or ‘mastery’ approach to assessment is that when it is combined with open tasks like extended writing it leads to tick box approaches to teaching. However good an essay is, if it doesn’t tick a particular box, it can’t get a particular grade. However bad an essay is, if it ticks all the boxes, it gets a top grade. It is much harder than you might think to construct the tick boxes so that good essays are never penalised, and that bad essays are never rewarded. I’ve written about this problem before, here. This approach penalises ambitious and original writers. For example, if a pupil knows that to achieve a certain grade they have to spell every word correctly and can’t misspell one, then the tactical thing to do is to only use very very basic words. Similarly with sentence structure, punctuation, grammar, etc. Thus, the pupil who writes a simple, boring but accurate story does better than the pupil who writes an interesting, thoughtful and ambitious story with a couple of errors. Teachers realise this is what is happening and adapt their teaching in response, focussing not just on the basics, but also, more damagingly, on actively eliminating anything that is even slightly more ambitious than the basics.

Regular readers might be surprised to hear me say this, since I have always made a point of the importance of accuracy, and of the importance of pupils learning to walk before they can run. I have also been very enthusiastic about mastery approaches to learning. So is this me changing my mind? No. I still think accuracy is extremely important, and that it enables creativity rather than stifling it. I still also think that mastery curriculums are the best type. My issue is not about the curriculum and teaching, but about assessment. Open tasks like extended writing are not the best way to assess accuracy or mastery. This is because, crucially, open tasks do not ask pupils to do the same thing in the same way. They introduce an element of discretion into the task. Pupil 1 might spell a lot of words wrong in her extended task, but that might be because she has attempted to use more difficult words. Pupil 2 might spell hardly any words wrong, but she may have used much easier words. Had you asked Pupil 2 to spell the words that Pupil 1 got wrong, she may not have been able to. So she isn’t a better speller, but she is credited as such. If you insist on marking open tasks in a secure fit way, this becomes a serious problem as it leads to huge distortions of both assessment and teaching. Essentially, what you are doing is giving the candidate discretion to respond to the task in different ways, but denying the marker similar discretion. Better spellers are marked as though they are weaker spellers, because they have essentially set themselves a higher standard thanks to having a more ambitious vocabulary. Lessons focus on how to avoid difficult words rather than use them, how to avoid the apostrophe rather than use it correctly. If the secure fit approach to marking open tasks really did reward accuracy, I would be in favour of it. But it doesn’t reward accuracy. It rewards gaming. Michael Tidd’s great article in the TES here shows exactly how this process works. James Bowen also has an excellent article here looking at some of these problems.

It is obviously important that pupils can spell correctly. But open tasks are not the best way of assessing this. The best and fairest way of checking that pupils can spell correctly is to give them a test on just that. If all pupils are asked to spell the same set of words in the same conditions, you can genuinely tell who the better spellers are, and the test will also have a positive impact on teaching and learning as the only way to do well on the test is to learn the spellings.

One final point is to note that the problems I outlined at the start about the flaws with ‘best fit’ judgments actually had less to do with ‘best fit’ and more to do with (drum roll) vague prose descriptors. The fundamental problem is getting any set of descriptors with whatever kind of ‘fit’ to adequately represent quality.

The prose descriptors allowed for pupils to be overmarked on particularly vague areas like AF1 – write imaginative texts – when in actual fact they were probably not doing all that well in those areas. I don’t think there are hundreds of thousands of pupils out there who write wonderfully imaginatively but can’t construct a sentence, or vice versa. It’s precisely because accuracy enables creativity that there aren’t millions of pupils out there struggling with the mechanics of writing but producing masterpieces nonetheless. I have made this point again and again with reference to evidence from cognitive psychology, but let me now give you a recent piece of assessment evidence that appears to point the same way. We have held quite a few comparative judgment sessions at Ark primary schools. You can read more about comparative judgment here, but essentially, it relies on best fit comparisons of pupils’ work, rather than ticks against criteria. You rely on groups of teachers making independent, almost instinctive judgments about which is the better piece of work. At the start of one of our comparative judgment sessions, one of the teachers said to me that he didn’t think we would get agreement at the end because we would all have different ideas of what good writing was. For him, good writing was all about creativity, and he was prepared to overlook flaws with technical accuracy in favour of really imaginative and creative writing. OK, I said, for today, I will judge as though the only thing that matters is technical accuracy. I will look solely for that, and disregard all else. At the end of the judging session, we both had a high level of agreement with the rest of the judging group. This is of course just one small data point, but as I say, I think it supports something which has been very well evidenced in cognitive psychology. The high level of agreement between all teachers at this comparative judgment session (and on all the others we have run) also shows us that judging writing and even judging creativity are perhaps not as subjective as we might think. It is not the judgments themselves that are subjective, but the prose descriptors we have created to rationalise the judgments.

Similarly, if the problem with best fit judgments wasn’t actually the best fit, but the prose descriptors, then keeping the descriptors but moving to secure fit judgments won’t solve the fundamental problem. And again, we have some evidence that this is the case too. Michael Tidd has collected new writing framework results from hundreds of schools nationally. The results are, in Michael’s words, ‘erratic’. They don’t follow any kind of typical or expected pattern, and they don’t even correlate with schools’ previous results.  Whatever the precise reason for this, it is surely some evidence that introducing a secure fit model based on prose descriptors is not going to solve our concerns around the validity and reliability of writing judgments.

In conclusion

  • If you want to assess a specific and precise concept and ensure that pupils have learned it to mastery, test that concept itself in the most specific and precise way possible and mark for mastery – expect pupils to get 90% or 100% of questions correct.
  • If you want to assess performance on more open, real world tasks where pupils have significant discretion in how they respond to the task, you cannot mark it in a ‘secure fit’ or ‘mastery’ way without risking serious distortions of both assessment accuracy and teaching quality. You have to mark it in a ‘best fit’ way. If the pupil has discretion in how they respond, so should the marker.
  • Prose descriptors will be inaccurate and distort teaching whether they are used in a best fit or secure fit way. To avoid these inaccuracies and distortions, use something like comparative judgment which allows for performance on open tasks to be assessed in a best fit way without prose descriptors.

Ouroboros by Greg Ashman

I’m a bit late to this, but I just wanted to write about how much I enjoyed Ouroboros by Greg Ashman. It’s a very elegantly and sparely written account of Greg’s experiences of teaching in England and Australia, and of the education research which is relevant to his experiences. The central organising metaphor is the ouroboros, ‘an ancient symbol of a snake or dragon that is consuming its own tail.’ Ouroboros can be ‘a vicious metaphor to represent the antithesis of progress – we cannot move forward if we are going round and round. Moreover, Ouroboros adds something to the cycle. It represents the reinvention of old ideas as new ideas. Again and again.’

I found this metaphor very helpful when thinking about modern education. It is so demoralising to see the number of fads that get warmed over and served up as new. And the great fear is not only that bad ideas persist. Even worse, the constant recycling of bad ideas prevents the adoption of new ones, and makes teachers understandably cynical and mistrusting of innovation in general, even though real innovation is what we desperately need to break out of this cycle.

But ouroboros can be a more positive metaphor. ‘We can also view Ouroboros as a virtuous metaphor; a feed-back loop with information flowing from the effect back to the cause. When we teach, we do not speak into the void.’

Greg thinks that this kind of feedback loop is at the heart of good teaching. However, he also notes that attempts to promote the use of feedback over the last decade or so, under the name of Assessment for Learning, have led to disillusion. ‘In U.K. schools, formative assessment followed an unfortunate trajectory that hollowed-out much of the original purpose and has therefore left many teachers quite jaded.’ However, as he notes, ‘the basic principle is sound.’ And there is much good advice in this book about how to rescue the sound principles of formative assessment from the ‘bureaucratic barnacles’ that have grown up around it.

Highly recommended.

How to crack the Oxford History Aptitude Test

Recently, a friend of mine sent me a link to Oxford University’s History Aptitude Tests (HAT). These tests are designed for 18-year-olds applying for admission to Oxford. I really liked the look of them – the one I saw was interesting and challenging, covered a broad range of historical eras, and I can imagine it would be a good test to discuss in class too. However, I did also think that some of the advice that came with the tests wasn’t as helpful as it could have been. For example, here:

“The HAT is a test of skills, not substantive historical knowledge. It is designed so that candidates should find it equally challenging, regardless of what period(s) they have studied or what school examinations they are taking.”

I am not sure this is the case. The HAT does require substantive historical knowledge, and a candidate with knowledge of the eras on the test paper would not find it as challenging as a candidate with no such knowledge. Let’s have a look at some of the questions from this paper.

Question one
The first question features an extract from a book about the Comanche empire. The test advises that ‘you do not need to know anything about the subject to answer the questions below.’ I suppose that is true in the loosest sense, in that I do not need to know anything about physics in order to take an A-level in it. But of course, that isn’t the sense most examinees will be interested in. I think you do need to know something about North American colonisation to do well on the questions below. There are two questions. One of them is ‘In your own words, write a single sentence identifying the main argument of the first paragraph’, and the second is ‘What does the author argue in this passage about recent attempts made by historians to integrate Native Americans into the history of colonialism in North America?’

At first glance, it may seem as if these really aren’t testing prior knowledge, but are instead testing an abstract skill of ‘summarising’, or ‘argument recognition’. However, in actual fact, even these questions are testing substantive historical knowledge. The passage and question from HAT are actually uncannily similar to one of the classic experiments used to show why knowledge is so important for cognition.  In 1978, E.D. Hirsch asked groups of students to read two passages of equal difficulty in terms of vocabulary and syntax. One was about friendship, and one was about Grant and Lee and the end of the Civil War. University students understood both passages equally well. Poorer students at the community college did just as well on the passage about friendship, but struggled on the one about the Civil War. Hirsch theorised that their weakness on this second passage was down to their lack of knowledge about the Civil War, not any lack in some innate ‘passage comprehension’ ability. Similar research has been carried out again and again, to the extent that researchers in this field say that reading is not a ‘formal skill’: it is dependent on background knowledge. Recently, Kate Hammond’s articles in Teaching History about the power of ‘substantive historical knowledge’ also speak to the importance of background knowledge for historical understanding. She shows how pupils who have historical knowledge that goes beyond the exam rubric and even the era being studied are often able to deploy such knowledge in a way that leads to better analysis. For example, if a pupil has knowledge of how minority parties operate within a democracy, this can lead to better analysis of the challenges that faced the Nazi party in the 1920s.

In the case of the HAT extract about the Comanche empire, students with knowledge of western colonialism and the nature of indigenous societies will understand the passage better, read it more quickly, and summarise it more acutely. Pupils without that knowledge will not be able to employ their generic ‘summarising’ or ‘historical analysis’ mental muscles, because such muscles don’t exist. Instead, they will be puzzling over what a ‘Euro-American’ is or what the ‘colonization of the Americas’ entailed.

Question two
The next question is: Write an essay of 1.5 to 3 sides assessing and explaining who were the ‘winners’ and ‘losers’ in any historical event, process or movement. You may answer with reference to any society, period or place with which you are familiar.

Obviously, you will need ‘substantive historical knowledge’ to answer this question. The more knowledge of different eras you have, the more likely you are to find one that fits the bill for the question, and the more detailed knowledge you have of each individual era, the more likely you are to have something worthwhile to say about it.

Question three
The final question is a source from 16th century Germany. It says, “You do not need to know anything about Germany in the sixteenth century to answer the question below, nor should you draw on any information from outside the source.”

As regards the first piece of advice, again, it’s true that you may not need to know anything to answer the question, but it will certainly help you if you do. But that’s better than the second piece of advice, which is actually cognitively impossible. The modern research on reading and cognition shows us that when we read, we make sense of the content by…drawing on information from outside the source.  No written text contains all the information we need to make sense of it. All texts depend to some extent on the reader supplying certain bits of information themselves.

When we look at the source itself, we can find plenty of examples of how knowledge from outside the text is impossible to avoid using, and is extremely helpful. First of all, there’s vocabulary: knowing the historical meanings of alms, peasants, lodgings and bathhouse would definitely be helpful. Second, there are references to concepts that have a particular meaning in medieval Europe: the phrase ‘put out of the city’, for example, makes a lot more sense if you understand something about medieval European cities, their rights and freedoms, and their geographical limits and defences. Similarly, there is a reference to epilepsy, which is now understood as a physical illness, but in 16th century Europe was seen as a sign of madness. All of this ‘information from outside the source’ would be hugely valuable in answering the question, and those students who have this information will do better than those who don’t.

I can see that this advice is well-meaning. I can see that it might be trying to ensure that candidates are not intimidated if they haven’t heard of a particular period of history, and perhaps also to demonstrate that this admissions test is fair to all pupils regardless of educational background – that even if you are a state school pupil who has only studied the Nazis and Tudors, you won’t be at a disadvantage to pupils from independent schools who have studied more historical topics. The test is attempting to uncover some kind of innate ‘historical aptitude’ which exists regardless of the number of history books you’ve read or historical ideas you have been exposed to. The only problem, of course, is that such innate historical aptitude doesn’t exist. Like many concepts we mistakenly describe as a ‘skill’, the ability to analyse historical problems and sources is not something innate and discrete which resides mysteriously within us. It is learnt, and depends to a large degree on the amount of knowledge we have in long-term memory. Actually, in the case of history, this should be even easier to appreciate than in other fields of life. For example, there is no such thing as innate chess skill, but it does at least feel plausible that there might be a part of the brain devoted to the logic necessary for chess. There is no such thing as innate historical skill either, and it feels even less plausible that there is a part of the brain devoted to analysing the causes of the Second World War. The concept of ‘historical aptitude’ reminds me of GK Chesterton’s famous quotation:

 Education, they say, is the Latin for leading out or drawing out the dormant faculties of each person. Somewhere far down in the dim boyish soul is a primordial yearning to learn Greek accents or to wear clean collars; and the schoolmaster only gently and tenderly liberates this imprisoned purpose. Sealed up in the newborn babe are the intrinsic secrets of how to eat asparagus and what was the date of Bannockburn. The educator only draws out the child’s own unapparent love of long division; only leads out the child’s slightly veiled preference for milk pudding to tarts.

I spoke to a couple of friends who teach at independent schools and frequently prepare students for this assessment. They disagreed with the ideas that a) you couldn’t prepare students for it and b) it didn’t depend on knowledge. They said that you could prepare students for it by getting them to read lots about lots of different historical eras, and that the students who knew more history generally did better on it. Interestingly, however, they also said that it was because of these reasons that, like me, they quite liked the test. It wasn’t possible to game it in any way, and preparing students for this test generally involved activities which made them better historians, not just better test takers. And they felt the results generally did distinguish between candidates who were and were not good at history. I suspect in many cases, therefore, the advice on this paper is not the end of the world, as plenty of people are probably ignoring it.  Still, both the friends I spoke to were at independent schools who put a lot of time and effort into cracking the Oxbridge admissions code. What about teachers at schools who don’t have a tradition of Oxbridge entry, or can’t devote as much time to reading the runes of these tests? Aren’t they more likely to take this advice at face value – and aren’t their pupils therefore more likely to do badly on such a test? Improving the advice on how to prepare for this test could help all students become better historians, but it could particularly help students from disadvantaged backgrounds.

What can teachers learn from high-performance sport? Plan for injury!

Yesterday I went to a brilliant day of professional development at Ark Globe Academy called Teach Like a Top Athlete: Coaching and Mastery Methods. I went to a workshop run by the amazing Jo Facer on Mastery Planning, and one by the equally amazing Dan Lavipour and Michael Slavinsky called What Can Teachers Learn From High Performance Sport? Dan is a former Olympic coach who now works in youth sports performance; Michael is a former French teacher and the Teaching and Learning Director at Researchers in Schools. Dan and Michael formed a great double act, as Dan talked us through some principles of high performance sport, and Michael drew out some of the analogies for the classroom. And there were tons of analogies. I think a lot about the links between sport and teaching, but these two took it to another level. There were dozens of things I could have chosen to blog about – deliberate practice, the theory of self-determination, the links between the conjugate periodization of training and linear exam courses – but the one that I am going to restrict myself to for now is what Dan and Michael had to say about planning for injury. In sport, injury happens. Netballers get ankle and knee problems; fast bowlers get stress fractures; footballers get hamstring issues. When you plan for injury, you work out what the common injuries are in your discipline and set up training plans that attempt to prevent such injuries.

So is there an analogy with teaching here? Obviously it’s not perfect, but I think there is. In our subjects, we can work out what the top 10 most common misconceptions or errors are, and set up our schemes of work to try and anticipate and prevent them. Here’s an example: I once did an analysis of recalled GCSE scripts in English which showed that ambiguous pronouns were a  major weakness and a real impediment to understanding. Pupils used ‘it’ and ‘they’ a lot, without always being clear who or what those pronouns were referring to.  Some targeted work on pronouns and antecedents could have helped improve clarity.

How can we identify such common misconceptions? In many subjects, we’ll already have a good idea, and in maths and the sciences, there are plenty of great resources out there listing them. But Michael suggested another profitable method: analysing examiners’ reports to see what issues seem to crop up again and again. This is something I started doing when I was researching my book, Seven Myths about Education. I included one example in the book: an examiners’ report which explained that many pupils thought a glacier was a wild tribe of humans from the north. In the report’s words:

Given the current interest in environmental issues, and the popularity of a particular type of film and television programme, it was surprising that a number of candidates seemed unaware of what a glacier is and some seemed to be convinced that the glaciers were some sort of tribe, presumably advancing from somewhere in the north.

There are other examiners’ reports which helpfully list the common writing errors made by pupils. This one, for example, from OCR:

Common errors included not marking sentence divisions, confusion over its and it’s, homophone errors (there/their/they’re and to/too), writing one word instead of two (infact, aswell, alot, incase, eachother) and writing two words instead of one (some one, no where, country side, your self, any thing, neighbour hood). A surprising number of candidates used capitals erratically: for example, they did not feature at the beginning of names but did appear randomly in the middle of words.
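
One way to ‘plan for injury’ off the back of a report like this is to scan a batch of pupils’ writing for the listed errors before they turn up in an exam. Here is a minimal sketch in Python: it only catches the word-boundary errors the report names that can be spotted without context (its/it’s and the homophones need a human or a proper grammar checker), and the function name and sample sentence are invented for the example.

```python
import re

# Scan pupil writing for the word-boundary errors listed in the report.
ONE_WORD_ERRORS = {"infact", "aswell", "alot", "incase", "eachother"}
TWO_WORD_ERRORS = {"some one", "no where", "country side", "your self",
                   "any thing", "neighbour hood"}

def flag_errors(text):
    """Return a list of the listed errors found in a piece of writing."""
    lowered = text.lower()
    found = [w for w in re.findall(r"[a-z']+", lowered) if w in ONE_WORD_ERRORS]
    found += [phrase for phrase in TWO_WORD_ERRORS if phrase in lowered]
    return found

# A made-up sample sentence containing three of the listed errors.
sample = "Infact we go there alot, and no where else compares."
print(flag_errors(sample))  # ['infact', 'alot', 'no where']
```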

These reports also have interesting things to say about the use of PEE paragraphs, and mnemonic techniques like AFOREST. But my favourite examiners’ reports are the ones on unseen reading and writing exams. The unseen reading texts can be on any topic, and often the examiners’ report ends up lamenting the students’ lack of knowledge about some crucial aspect of the text. They provide perfect examples of how reading is not a skill, and why background knowledge is crucial for comprehension. Here are just a few examples of what I mean.

Most candidates were able to gain a mark for the next part of the question stating that the whale shark eats plankton. However a number of candidates offered no answer, perhaps they did not recognise plankton to be food, although the context should have made this clear.
(WJEC)

The first part of the question simply asked candidates to note the distance Mike Perham had travelled on his round-world voyage. Most selected the correct distance: 30,000 miles, though some over-complicated the response by confusing the distance the report said Perham still had to cover on the final leg of his journey with the total distance. This led to some candidates saying the whole journey was 30,300, whilst others reported the voyage to be just 300 miles. (WJEC)

I thought that I might be apologising for how embarrassingly straightforward this question was but it proved to be inexplicably difficult as many of the candidates just could not focus their minds on the reasons why the Grand National is such a dangerous race. I know that comparison has always been difficult but this question was set up to make things as straightforward as possible. Still it seemed like an insurmountable hurdle, the examining equivalent of Becher’s Brook, at which large numbers fell dramatically. I cannot really explain why so many candidates got themselves into such a tangle with this question. Many of them went round in circles, asserting that the race was dangerous because it was dangerous. (WJEC)

However, what was very noticeable was that many candidates had very little idea of what was in these places or why someone might want to visit (except for Alton Towers of course). Specific attractions were often in very short supply and usually were just mentioned in passing before the article got to the serious business of shopping and eating. I have to admit that the idea of making a day trip to London or Manchester to shop in Primark or eat in KFC did not appeal massively, although it is true that teenagers may find such things irresistible. More seriously, I think a better sense of audience might have helped here, although the lack of knowledge about places is not easy to remedy. (WJEC)

Lack of knowledge in general is certainly not easy to remedy, especially in the short term when you are preparing for an exam. But if we took it back a couple of steps, and started to ‘plan for injury’ in schools, not just on the sports field, how might we try to address this lack of knowledge? What would we need to change? When and where would we need to begin? When you think about it like this, the advantages of a coherent and sequenced knowledge-based curriculum become very obvious.