You could be forgiven for thinking that a statement such as ‘The deceptive allure of clarity’ must have come straight from the mouth of Iain McGilchrist, author of ‘The Master and His Emissary: The Divided Brain and the Making of the Western World’. McGilchrist would align this statement with the way in which the left hemisphere attends to the world. In his book he explains that we are living in a left-hemisphere-dominated world. For the left hemisphere, the parts are more important than the whole. The left hemisphere values the known, familiar, certain, distinct, fragmentary, isolated and unchanging. It abstracts ideas from body and context, seeing things as inanimate and representational. In the left hemisphere’s view of the world, quality is replaced by quantity, and unique cases are replaced with categories.
But this statement, ‘the deceptive allure of clarity’, did not come from McGilchrist, but from a Lancaster University online Department of Education Research Seminar that I attended this week, presented by Jan McArthur and Joanne Wood. The full title of their talk was ‘Towards Wicked Marking Criteria: the deceptive allure of clarity’, which is what drew me, and many others, in (the session was very well attended). This was how the session was advertised:
In this seminar we consider the dissonance between two major themes in the scholarship of teaching, learning and assessment in higher education: the engagement with complex and structured forms of knowledge and the development of increasingly precise marking criteria for assessment. We question what is lost when we aim to make assessment a more and more precise practice? We argue that academic knowledge cannot always be broken into manageable “bits” but often should be evaluated holistically. Finally we propose that students who perform “badly” in assessments have often not done this by accident or neglect but rather through diligent and conscientious following of implicit messages we send out as teachers, often in the name of clarity.
They started the session by asking the question: ‘What if the pursuit of clarity is part of the problem?’ By this they were making reference to what they called ‘The Monster Rubric’, which is so detailed and atomised that it loses all sense of what it is trying to achieve.
What follows is my reaction to this seminar and should not be attributed to either of the speakers.
It is easy to find examples of these rubrics online, through a simple search for rubric images. For example, here is one with an excessive level of granularity. I can’t imagine how much time it must have taken to develop this rubric – time that perhaps could have been better spent in the service of students?
Most institutions use rubrics for marking students’ work. Why? Well, principally for quality assurance reasons. The institution/tutor has to demonstrate that the marking is fair and equitable. But in reality, my experience is that for experienced tutors/markers the rubric is not helpful, and so they make the rubric fit their marking rather than the other way round. The rubric does not inform the marking. An experienced marker knows that the whole is greater than the sum of the parts. An experienced marker knows that complex knowledge can’t be broken down into bits. An experienced marker knows that there are qualities in assignments – qualities that make the whole greater than the sum of the parts – which simply can’t be measured, but which nevertheless contribute to the mark. An experienced marker can pick up an assignment, flip through it and know straight away roughly what mark it will receive. The marker then reads the assignment carefully to check this initial assessment and give critical feedback. Only finally does the marker make sure (for quality assurance purposes) that the rubric fits the given mark.
We do students a disservice by misleading them into thinking that their achievements can be broken into bits and that each bit is worth a certain percentage. Complex knowledge cannot be defined in these terms. A rubric cannot cross all the t’s and dot all the i’s. The rubric should not be so atomised that there is no room for students to move in. As Iain McGilchrist says:
‘… the gaps in the structure are where the light gets in. If you tighten everything up, then you get total darkness’. (https://youtu.be/0Zld-MX11lA).
If we must have rubrics, then they should be guides rather than prescriptive, and students and staff should be encouraged to move beyond them.
Just last night, I listened to the new talk McGilchrist did with Jordan Peterson. I don’t like the latter, but it actually turned out well. Peterson seems to be a better interviewer than interviewee. He both allowed McGilchrist to talk and pushed him with probing questions when necessary. My only disappointment was what felt like a dismissive view of ‘equality’ that McGilchrist voiced. That was only a small part of an otherwise great discussion. Your post here does remind me of some discussion they had.
They talked much about knowing and not knowing, focus and attention, processes/becoming and thingification, etc. It’s familiar territory, but I enjoyed it and no doubt it was a useful introduction to McGilchrist’s work. It’s always good to reach out to other audiences. It makes me think of Jimmy Dore going on Fox News, an act that some on the left criticized for whatever reason. We do have to be careful, though, about who we legitimize, such as the filmmaker who helped popularize Alex Jones by putting him in two major films in the ’90s and Aughts.
Anyway, your commentary on the “deceptive allure of clarity” made me think of a related topic. Humans need something to latch onto. Even McGilchrist’s success probably has much to do with his focus on brain hemispheres, which is by itself a powerful and popular visual image. A picture of brain imaging, or maybe even simply the brain as a symbol in discussion, makes something seem more scientific. The modern mind demands something concrete, in order to reify abstractions or what can appear abstract. Here is something from a post:
The near cosmic morality tale of ideological conflict is itself a symbolic conflation. There is always a story being told and its narrative force has deep roots. Wherever a symbolic conflation takes hold, a visceral embodiment is to be found nearby. Our obsession with ideology is unsurprisingly matched by our obsession with the human brain. The symbolic conflation, through moral imagination, gets overlaid onto the brain, for there is no greater bodily symbol of the modern self. We fight over the meaning of human nature by wielding the scientific facts of neurocognition and brain scans. It’s the same reason the culture wars obsess over the visceral physicality of sexuality: same sex marriage, abortion, etc. But the hidden mysteries of the brain make it particularly fertile soil. As Robert Burton explained in A Skeptic’s Guide to the Mind (Kindle Locations 2459-2465):
“…our logic is influenced by a sense of beauty and symmetry. Even the elegance of brain imaging can greatly shape our sense of what is correct. In a series of experiments by psychologists David McCabe and Alan Castel, it was shown that “presenting brain images with an article summarizing cognitive neuroscience research resulted in higher ratings of scientific reasoning for arguments made in those articles, as compared to other articles that did not contain similar images. These data lend support to the notion that part of the fascination and credibility of brain imaging research lies in the persuasive power of the actual brain images.” The authors’ conclusion: “Brain images are influential because they provide a physical basis for abstract cognitive processes, appealing to people’s affinity for reductionistic explanations of cognitive phenomena.” ” *
The body is always the symbolic field of battle. Yet the material form occludes what exactly the battle is being fought over. The embodied imagination is the body politic. We are the fear we project outward. And that very fear keeps us from looking inward, instead always drawing us onward. We moderns are driven by anxiety, even as we can never quite pinpoint what is agitating us. We are stuck in a holding pattern of the mind, waiting for something we don’t know and are afraid to know. Even as we are constantly on the move, we aren’t sure we are getting anywhere, like a dog trotting along the fenceline of its yard.
* D. McCabe and A. Castel, “Seeing Is Believing: The Effect of Brain Images on Judgments of Scientific Reasoning,” Cognition, 107(1), April 2008, 345–52.
(For criticisms, see: The Not So Seductive Allure of Colorful Brain Images, The Neurocritic. But for more recent corroboration, see: People Think Research is More Credible When It Includes “Extraneous” Brain Images, Peter Simons, Mad In America)
Agreed. The allure of clarity is nonsense.
What ‘rubricisation’ (and yes, the activity it describes is as clumsy as this name for it) is created for is the convenience of the bureaucrat – it’s an administrative matrix, not a creative, engaging, inquisitive, enquiring, exploratory one. It delivers simplistic, context-stripped metrics, for the purpose of circulation – the academic equivalent of ‘monetisation’.
Even calling it ‘left brain’ is unnecessarily flattering. And the more successful it becomes in creating useful ‘data’ for decontextualised circulation, the more it undermines a collegiate-based culture of enquiry.
There’s a lot to unpack here, but I do think that the opportunity to develop one’s own rubric may help alleviate some of the problem. I was able to create my rubric based on how I actually grade (the process you describe), breaking down the things I look for into three areas so that students could see in which areas they were strong and in which weak.
I have also recently had to use a college-wide rubric which was horribly detailed like the one you display. Using it on the same assignments I had already graded with my own rubric, I was depressed to find that the students scored much lower on the college-wide detailed rubric. Now I see that was partly because it was disaggregating the elements so much that the whole was being completely missed. I’m being asked to help evaluate the new rubric process, so I appreciate that you’ve helped me bring a different perspective to that conversation.
So I’m not sure I agree that the breaking down into parts is the problem. Perhaps it’s more about *how* we break down the parts to develop a general and useful rubric.
“I think this focus on justice of assessments is derived from laudable motives but has now utterly gone awry. When the measure is mistaken for the thing being measured (the grades for the abilities), then the abilities are eventually harmed. … In my opinion, much less justifiable exactness, and more discretionary and perhaps slightly biased judgement by seasoned teachers, could indeed be ethically superior, if I understand John Rawls correctly: even those disadvantaged by the inexactness would eventually be better off.” (Source)
Thanks Benjamin, Roy, Lisa and Matthias for all your comments.
Benjamin – you have said that people need something to latch on to. I think this may be what Lisa is getting at when she says she created her own rubric. Markers of course need to think about what it is about the assignment that merits a given mark (i.e. the bits), especially if it is an assignment that is being given for the first time, and it is new to the marker. The problem arises if the marker (and student) loses sight of the whole.
McGilchrist relates this to playing a piece of music. In order to do this we need to first practise it by breaking it down into parts and focussing on the individual notes, phrases, etc. But when we have practised sufficiently and can play the piece of music, we are no longer focussing on the parts, the individual notes. We only hear the whole piece and it is the whole piece that has meaning. The individual notes on their own have none.
I have been thinking more about the difference between a rubric/assessment based on general principles and one based on specific skills or content. To continue with the music example, I could assess a piece of music by discussing the level at which I believe it achieves a steady rhythm, uses instruments that complement each other, and features a melody. Although these criteria are subjective, they are general. If instead I created a rubric that assessed the level at which the music was in 3/4 time, consistently used trombone, and featured notes that ranged only between C and F, I would be assessing at a level of specificity that would be too much about bits and not much about music.
Hi Lisa – thanks for your interesting comment. I think we are talking at cross purposes. Entirely my fault I see, as I wrote ‘McGilchrist relates this to playing a piece of music’ which I can see now would be interpreted as McGilchrist relates this to marking … Whereas what McGilchrist was talking about, was nothing to do with marking, but the fact that if we focus on the bits (in his example the notes in a piece of music) then this bears no relation to the meaning of the whole piece of music and how it makes you feel when you listen to it. So I was, in my clumsy way, trying to say that what a rubric can’t do is give you that sense of the whole, something that is beyond the bits, i.e. the rubric, but which the ‘experienced’ marker knows. Maybe even this doesn’t make sense??
My fault entirely — I was only using music as an example, not continuing from your use of music. I should have chosen something else. How about History? A general rubric (like mine) would assess the sophistication of the thesis, the use of sources, and the skill in getting the point across. This may not give me the sense of the whole when I mark, but it does provide me with pegs on which I can hang my marker hat, and communicate with the student. A specific rubric (and I’ve seen them) would instead assess the inclusion of specific terms or events, the depth of analysis of World War II, and the application of the argument from page 94 of the textbook. Perhaps rather than general and specific, a better framework would be skills and content. Rubrics that assess content completely miss the whole, while those that assess skills can still see it.