New report by the Sutton Trust: What Makes Great Teaching?

Today the Sutton Trust and the University of Durham have published a fascinating new report called What Makes Great Teaching? It sets out to answer that title question, as well as to look at ways we can measure great teaching and how that measurement could be used to promote better learning. Here is my short summary of some key points from the report.

1. What is effective teaching? This report is very honest about the fact that we don’t have as clear an idea of what good teaching is as we think we do. I think this is an important point to make. Too often, reports like this one start from the assumption that everyone knows what good teaching is, and that the challenge is finding the time/money/will/methodology to implement changes. This report is saying that actually, there are a lot of misconceptions about what good teaching is, and as such, reform efforts could end up doing more harm than good. We need to think more clearly and critically about what good teaching is – and this report does that. As well as listing effective teaching practices, it also lists ineffective ones. This list has already received some media attention (including a Guardian article with a bit from me), as it says that some popular practices, such as learning styles and discovery learning, are not backed up by evidence. The report draws its evidence from a wide range of sources, including cognitive psychology. It cites Dan Willingham quite a lot, and quotes his wonderful line that memory is the residue of thought. As regular readers will know, I think cognitive psychology has a lot to offer education, so it is great to see it getting so much publicity in this report.

2. How can we measure good teaching? According to this report, the focus should always be on student outcomes (not necessarily just academic ones). This can also be a bit of a hard truth. If a group of teachers work really hard at mastering a particular technique or teaching approach, and they do master it and use it in all their lessons, it can be tempting to define this as success. But this report says – no. The focus has to be on student outcomes. Although we can devise proxy measures which stand in for student outcomes, we need to keep checking back against the student outcomes themselves to see if those assumptions still hold true. The report is also honest about the fact that a lot of the current ways we measure teaching are flawed. That’s why we need to use more than one measure, to check the measures against each other, and to be very careful about the purposes we put these measurements to. The report suggests that our current measures are probably only suitable for low-stakes purposes, and that they certainly can’t be used for both formative and summative purposes at the same time (or ‘fixing’ and ‘firing’, as the report calls them).
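To make that idea of ‘checking back’ a bit more concrete, here is a tiny, purely hypothetical sketch. It generates synthetic lesson-observation scores and value-added outcomes for a group of teachers and checks whether the proxy still correlates with the outcomes. None of the numbers or measure names come from the report; they are stand-ins to show the shape of the check.

```python
# Purely illustrative: synthetic data standing in for a proxy measure
# (lesson-observation scores) and a student-outcome measure (value-added).
import numpy as np

rng = np.random.default_rng(1)
n_teachers = 50

observation_score = rng.normal(3.0, 0.5, n_teachers)   # hypothetical proxy measure
value_added = 0.4 * (observation_score - 3.0) + rng.normal(0.0, 0.3, n_teachers)

r = np.corrcoef(observation_score, value_added)[0, 1]
print(f"Correlation between proxy and student outcomes: {r:.2f}")

# If this correlation drifts towards zero over time, the assumption that the
# proxy stands in for student outcomes no longer holds, and the proxy should
# not be trusted for anything high-stakes.
```

The point is not the particular statistic, but the habit: any proxy measure of teaching needs to be re-validated against student outcomes on a regular cycle rather than trusted indefinitely.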

3. How can we improve measurement? Although the report is very cautious about the current state of measurement tools, it offers some useful thoughts about how we could improve this state of affairs. First, school leaders need to understand the strengths and limitations of all these various data sources. According to the report, there is ‘the need for a high level of assessment and data skills among school leaders. The ability to identify and source ‘high-quality’ assessments, to integrate multiple sources of information, applying appropriate weight and caution to each, and to interpret the various measures validly, is a non-trivial demand.’ Second, student assessment needs to be improved. If we want to keep checking the effect of our practices on student outcomes, we need a better way of measuring those outcomes. The report gives this tantalising suggestion: that the profession could create ‘a system of crowd-sourced assessments, peer-reviewed by teachers, calibrated and quality assured using psychometric models, and using a range of item formats’. It would be great to hear more details about this proposal, and perhaps about how CEM or the Sutton Trust could provide the infrastructure and/or training to get such a system off the ground.
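For readers curious what ‘calibrated using psychometric models’ might involve, here is a minimal sketch of one such model, a simple Rasch model fitted to synthetic item responses. Everything in it (the fake data, the joint maximum-likelihood shortcut, the NumPy/SciPy tooling) is my own illustrative assumption rather than anything specified in the report; a real crowd-sourced system would also need linking designs, item-fit statistics and quality assurance on top.

```python
# A minimal, assumption-laden sketch: estimating item difficulties for a
# small item bank with a Rasch model, using synthetic responses.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_students, n_items = 200, 15

# Synthetic "true" parameters, used only to generate fake responses.
true_ability = rng.normal(0.0, 1.0, n_students)
true_difficulty = rng.normal(0.0, 1.0, n_items)
p_correct = 1 / (1 + np.exp(-(true_ability[:, None] - true_difficulty[None, :])))
responses = (rng.random((n_students, n_items)) < p_correct).astype(float)

def neg_log_likelihood(params):
    """Joint likelihood of the Rasch model (a real system would prefer
    conditional or marginal estimation, which is less biased)."""
    ability, difficulty = params[:n_students], params[n_students:]
    logits = ability[:, None] - difficulty[None, :]
    log_p = -np.logaddexp(0, -logits)   # log P(correct)
    log_q = -np.logaddexp(0, logits)    # log P(incorrect)
    return -(responses * log_p + (1 - responses) * log_q).sum()

def grad(params):
    ability, difficulty = params[:n_students], params[n_students:]
    p = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
    resid = responses - p
    return np.concatenate([-resid.sum(axis=1), resid.sum(axis=0)])

start = np.zeros(n_students + n_items)
result = minimize(neg_log_likelihood, start, jac=grad, method="L-BFGS-B")
est_difficulty = result.x[n_students:]
est_difficulty -= est_difficulty.mean()   # fix the scale's origin

print("Correlation with true difficulties:",
      round(float(np.corrcoef(true_difficulty, est_difficulty)[0, 1]), 2))
```

Calibration of this kind is what would let items written by many different teachers be placed on a common scale, so that results from different crowd-sourced tests could be compared.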

One of the authors of the paper is Rob Coe, and I think this report builds on his 2013 Durham Lecture, Improving Education: A Triumph of Hope over Experience. This lecture was also sceptical about a lot of recent attempts to measure and define good teaching, as can be seen in the following two slides from the lecture.

[Two slides from the lecture: Fig 6, ‘Mistaking School Improvement’, and Fig 8, ‘Poor Proxies’.]

I recommended this lecture to a friend who said something along the lines of ‘yes, this is great – but it’s so depressing! All it says is that we have got everything wrong for the last 20 years and that education research is really hard. Where are the solutions?’ I think this paper offers some of those solutions, and I would recommend it to anyone interested in improving their practice or their school.

6 thoughts on “New report by the Sutton Trust: What Makes Great Teaching?”

  1. logicalincrementalism

    “All it says is that we have got everything wrong for the last 20 years and that education research is really hard.”

    A more accurate summary would be that we have got a lot of things wrong for the past 150 years and that education research is complex.

  2. Pingback: Sutton Trust 2014 coverage. | @mrocallaghan_edu

  3. gwarner99

A good example of the way that findings shift with the ideological climate. In some ways, the article highlights some reasons for this: it’s very difficult to show conclusively which teaching methods work, for a number of reasons. There are so many confounding factors in most studies; the cohorts tend to be quite small; and there probably isn’t any one method that is most effective; rather, good teachers can make a number of different approaches work, while no method will survive poor implementation.

The truth is that the distinction between traditional and progressive methods is largely meaningless, except as an ideological banner. To quote the article, “more traditional styles that reward effort, use class time efficiently and insist on clear rules to manage pupil behaviour” are not incompatible with “techniques such as ‘discovery learning,’ where pupils are meant to uncover key ideas for themselves” – they describe different aspects of the learning process. Clear, organised structure in a classroom helps rather than hinders active engagement and discovery, but that does not justify the overwhelming use of a single, highly didactic, top-down learning style. There is a great deal of snake oil in the marketing of “learning styles” or “brain-based learning”, but that doesn’t mean we can ignore the knowledge we do have about the neurobiology and varieties of learning.
    The picture of “progressive” or “traditional” methods is largely a straw man. I’ve believed in a social constructivist approach to learning theory for decades precisely because it breaks down such a simplistic false distinction.

Meanwhile, the findings of this kind of study are being used (not necessarily by their authors) as an ideological weapon rather than a source of useful knowledge. Ironically, this is precisely the kind of “fad” or “trendy thinking” that its proponents denounce.

  4. Crispin Weston

    Dear Daisy,

So the key to understanding what constitutes good teaching is how we measure learning. Many in the progressive lobby claim that we can’t – or that the attempt to measure learning is necessarily reductive. I disagree – I think you can measure any learning outcome (creativity, originality, confidence), even if you only start with a teacher judgement on a scale of A-E. The secret is to quantify and correlate.

This is where I am cautious about Daniel Koretz’s warning about teaching to the test. You can define learning as what gets laid down in long-term memory, but you can never measure the brain directly. You can only infer the mental dispositions of the learner by observing his performances. I would go further and call those dispositions “capability”, defining capabilities as the kinds of things that you are able to do in certain types of situation. In other words, the test is not just a proxy for “the domain” – it is the definition of the domain.

    I would not hold my breath waiting for school leaders to emerge who are able “to identify and source ‘high-quality’ assessments, to integrate multiple sources of information, applying appropriate weight and caution to each, and to interpret the various measures validly”. That is indeed a non-trivial task! Apart from anything else, when we talk about the integration of multiple sources of information, we are talking about a very great quantity of data, as well as the possession of the kind of statistical expertise that is very rare in the profession. And at the moment, nearly all of the performance data that gets circulated in schools is from summative assessments, which arrive too late for anyone to do anything useful with them. The only real-time data that we have, in general, is from the morning registration.

The only way to quantify and correlate data about learning outcomes in ways that will provide reasonably reliable measures is, in my view, by digital analytics systems encapsulating the statistical expertise of non-teachers. But those digital systems require digitized data as their raw material. The importance of the bar-code reader to the development of supermarket logistics systems illustrates a recurring pattern: that the Achilles heel of most fancy data analytics systems is data entry.

    Learning analytics systems will therefore not work until learning outcome data is captured more-or-less automatically from digitally-mediated learning activities. I say “digitally mediated” because the activity itself may be substantially non-digital – Eric Mazur’s peer instruction provides a good example of this – http://www.youtube.com/watch?v=y5qRyf34v3Q#t=1562.

    The second benefit of digitally-mediated learning activities is that they can encapsulate particular approaches to pedagogy, allowing these to be replicated consistently through the education system. Thanks for the link to Rob Coe’s paper, which I have not seen before. It is great that he prefaces his “what is to be done?” section with the caveat that these recommendations are not based on research. And it seems to me that although the research shows that what matters is good teaching, it does not follow that this means more training to create better teachers. The quality of output of any technician represents a combination of innate skill and extrinsic technology (i.e. having the right “tools of the trade”). We had a discussion on Friday about the current lack of good textbooks in schools. It seems to me that any plausible solution to that problem will be based on digital publishing, not paper-based publishing, and the digital textbooks of the future will be substantially based on the distribution of activity, not information.

My proposal is that the sort of ed-tech that will provide heterogeneous, digital textbooks will:
    a. provide a medium through which we can disseminate pedagogies;
    b. provide the “tools of the trade” to help teachers manage appropriate progression and feedback at scale;
c. simultaneously harvest the learning outcome data that will help teachers manage learning and provide evidence for what works (both through formal research and through informal optimization of learning activities and designs).

    I suggest that this represents a much more plausible approach to the problems of under-performance than more CPD, which has so often been tried, so often failed, and is so easily hijacked by existing prejudices about what constitutes good teaching.

    Sorry to leave such a long comment – but it seemed to me that some of these thoughts were finding an echo in your own thinking about the potential for multiple choice and adaptive learning systems – and that once the summary of research contained in Rob Coe’s paper has been accepted, we need to stimulate a wider debate about the more uncertain question of what is to be done.

    Best wishes, Crispin.
