In my most recent blogs about assessment, I’ve looked at some of the practical problems with assessment criteria. I think these practical problems are related to two theoretical issues: the nature of human judgment, which I’ve written about here, and tacit knowledge, which is what this post is about. In Michael Polanyi’s phrase, ‘we know more than we can tell’, and we certainly know more than we can tell in APP grids.
Take writing as an example. Teachers know what quality writing is, and when given examples of writing, teachers tend to agree on the relative quality of the examples. But it is fiendishly difficult to articulate exactly what makes one piece of writing better quality than another, and still harder to generate a set of rules which will allow a novice to identify or create quality. Sets of rules for creating quality writing or quality anything can descend into absurdity. Dylan Wiliam is fond of quoting the following from Michael Polanyi:
Maxims are rules, the correct application of which is part of the art which they govern. The true maxims of golfing or of poetry increase our insight into golfing or poetry and may even give valuable guidance to golfers and poets; but these maxims would instantly condemn themselves to absurdity if they tried to replace the golfer’s skill or the poet’s art. Maxims cannot be understood, still less applied by anyone not already possessing a good practical knowledge of the art. They derive their interest from our appreciation of the art and cannot themselves either replace or establish that appreciation.
‘These maxims would instantly condemn themselves to absurdity.’ This phrase goes through my mind whenever I read essays that have been self-consciously written to the rules of a mark scheme, rubric, or other kind of maxim. For example, I often read essays which have been quite obviously written to the rules of a PEE paragraph structure.
In this poem, the poet is angry. I know he is angry because it says the word ‘anger’. This shows me that he is angry.
In this extract, Dickens shows us that Magwitch is frightening. I know this because it says ‘bleak’ and this word shows me that Magwitch is very intimidating.
Or, at GCSE (and this is derived from an examiners’ report, here):
This article tells us that horse-racing is dangerous. We know it is dangerous because it is dangerous.
Or, at A-level, I have read essays where pupils repeat chunks of the assessment objectives, as if to flag up to the examiner that they are ticking this particular objective.
In The Darkling Thrush, Hardy uses an unusual form to shape meaning. He also uses a different structure and his language is very interesting, and overall, the form, structure and language shape meaning in this literary text.
Or, more commonly at primary, writing where every sentence begins with an adverbial word or phrase which barely makes any sense.
Forgettably, he crept through the darkness.
I think the absurdity here results from pupils having been given a rule or maxim which is of some help but which will not on its own create quality. Generally, it is a good idea to use evidence and explain your reasoning, to comment on form, structure and language, and to use adverbial sentence openers. But without concrete examples of how such rules operate in practice, they are of very limited value. And this is the problem with criteria and rubrics: they are full of prose descriptions of what quality is, but they will not actually help anyone who doesn’t already know what quality is to acquire it. Or, in Rob Coe’s words, criteria ‘are not meaningful unless you know what they already mean.’
I’ve argued before that over-reliance on criteria leads to confusion and inaccuracies with grading. But what we see here is even worse: reliance on criteria also leads to confusion in the classroom. Prose criteria are only helpful if you already understand the subject, so using them as a method to inculcate understanding is futile. And yet, we’re often recommended to share ‘success criteria’ with pupils, and rubrics are often rewritten in ‘pupil-friendly language’ which may be pupil-friendly in that pupils can pronounce them, but is certainly not pupil-friendly in that they can understand what they mean. This approach also leads to a ‘tick-box’ mentality, where pupils and teachers look to make sure that pupils have ticked off everything on the mark scheme. But again, this is unhelpful, because for something like writing, the question is not whether a pupil has used an adverbial opener or referred to historical context, but how well they have used the adverbial opener, and how appropriate and insightful their reference to context is. The people for whom rubrics and criteria will be least helpful are novices: pupils and new teachers. And yet the people who end up relying on them the most, and who are often encouraged to rely on them as a means to acquire expertise, are pupils and new teachers. Rubrics on their own will not help them to acquire expertise, and in many cases, I worry that they may even inhibit the development of expertise.
Polanyi’s student Thomas Kuhn wrote about the problem of tacit knowledge in The Structure of Scientific Revolutions.
A phenomenon familiar to both students of science and historians of science provides a clue. The former regularly report that they have read through a chapter of their text, understood it perfectly, but nonetheless had difficulty solving a number of the problems at the chapter’s end…learning is not acquired by exclusively verbal means. Rather it comes as one is given words together with concrete examples of how they function in use; nature and words are learned together.
Kuhn is talking about science here; to adapt this for writing, we might say that examples and words are learned together. As I’ve argued here, it is not enough to provide descriptors of quality writing: descriptors need to be accompanied by examples of what essays of this particular standard look like.
This idea of tacit knowledge can sometimes be interpreted to mean that pupils can never learn something explicitly and must just pick up expertise implicitly. That is not my interpretation at all, nor do I think it is borne out by Kuhn’s or Polanyi’s work. Kuhn does say that rules and prose descriptions on their own cannot bring understanding, but he does not suggest replacing them with aimless discovery. His suggestion is that it is the problem sets at the end of the prose textbook chapter which really bring meaning. The types of problem sets he is referring to are often quite artificial examples of the natural world, deliberately isolated and selected to prove the textbook’s point. Polanyi also talks quite extensively about how the expert has spent hours focussing their attention on tiny details, and learning to recognise differences that completely elude the casual observer. This is not achieved through discovery, but through direction. It is not achieved quickly, but through thousands of hours of deliberate practice.
Similarly, to go back to the example of writing, I don’t think that we can just expect pupils to pick up notions of quality writing through discovery. What we need are examples of quality writing where the salient features are isolated and discussed, and where pupils have to respond in some way to them, just as the problem sets in a science textbook require certain responses.
If we want to explain what quality is, we need more than just prose descriptors. We need annotated examples, problem sets, and plenty of time.