Thursday, 12 November 2020

Missed opportunity

In what looks like only a minor variation on the customary song that Wales should use its devolved power any way it wishes as long as it does the same as England, the Secretary of State has been complaining that the Welsh government has axed next year’s school examinations. Apparently, he sees some sort of strange equivalence between a devolved body acting entirely within its own powers on a completely devolved issue and a UK government which ignores the devolution boundaries altogether: both, in his view, should be subject to what he euphemistically calls ‘consultation’. It’s another illustration, not that one was needed, of the fundamental problem with ‘devolution’: it doesn’t recognise Welsh sovereignty at all, treating devolved power as nothing more than a temporary loan from London.

Whether the actual decision of the Welsh government is the right one or not is another question, and a legitimate subject for debate, even if it’s a debate in which the Secretary of State has no legitimate role. The problem is that it’s an issue clouded by ideology and prejudice rather than led by facts and evidence; views on whether exams are the ‘right’ way to assess pupils seem to be highly correlated with political outlook. There’s no doubt that some children are well-served by an examination process, but neither is there any doubt that others are not – for a variety of reasons, exam performance doesn’t always reflect the progress and ability seen by teachers in classrooms. On the other hand, there is more scope for a subjective element to creep into teacher assessments, no matter how hard teachers strive to avoid it. There is no perfect system.

In the limited circumstances of the pandemic, it is probably better to do as the Welsh government has done and take the decision early, giving itself plenty of time to think through a proper and robust alternative assessment process rather than repeat the chaos we saw last year. It is probably reasonable to assume that the Westminster government’s approach of leaving the decision until the last minute will lead to more chaos in England again next year, unless it gets lucky in controlling the virus. (And lucky is the right word, given the obvious lack of any planned approach to anything.) The problem remains, though, that this still looks like a one-off decision to deal with a particular anticipated situation next year, rather than an opportunity for a thorough review of what Wales needs from a system of pupil assessment and how such a system can be made fairer for all. It’s in danger of being an opportunity missed.

Monday, 17 August 2020

Locking disadvantage into the system


Last week’s ‘A’ Level results fiasco reminded me of my own experience of ‘O’ Level results back in 1967. Shortly after we got the results, my French teacher stood in front of the whole class and said that she really didn’t understand how I’d managed to get a grade 3 in French whilst a fellow pupil (whom she also named) only got a grade 6 pass. “I’d have understood it better if the results had been the other way round”, she told us. Being charitable, she was probably trying to boost the confidence of my disappointed peer (and it may well have been fair comment anyway!) but my gratitude for her faith in my ability wasn’t exactly unbounded. Perhaps my fellow student had a bad day and I had a good one; perhaps the questions on the day were simply a better match for what I’d remembered than what he’d remembered; perhaps I was just better at sitting exams – I’ll never know how it happened, only that had my grade been based on teacher assessment rather than examination it would have been lower. (And, had that been repeated in other subjects, life could have turned out very differently.) The point is that teacher assessment is no more perfect a method of assessing ability in a subject than an examination. Both can create anomalies and the two methods will always produce different results for at least some of the pupils. Which result is the fairest is an open question, and our faith in the reliability and accuracy of the system as it impacts individuals is seriously misplaced even in a normal year.
We know for certain that examination performance varies between schools, and if results based on teacher assessment reduce or eliminate those differences, then they are not reflecting accurately what would have happened had the exams been held. (I should note, in passing, the implicit assumption in that statement that the exam results are the accurate ones, itself an assumption open to serious question.) That, in effect, is the justification for making ‘adjustments’ to teacher assessments. They are an attempt to reflect historic differences in performance between schools and, in fairness, they may well have more-or-less achieved that at an overall level. But, by being based on a statistical approach, we can be absolutely certain that the individual pupils whose scores were thus ‘adjusted’ from teacher assessments would not always be the same pupils whose scores would have differed from those same assessments had the exams been held. Reducing it to a mathematical exercise might produce the ‘right’ averages, but it can never produce the ‘right’ results for individuals. It’s an approach which is fundamentally flawed.
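By way of illustration only, here is a deliberately simplified sketch of that kind of statistical moderation. The data, the grading scale and the moderation rule below are all invented for the purpose (this is not the actual algorithm used by any exam regulator); it simply shows how re-fitting teacher assessments to a school’s historical grade distribution can produce the ‘right’ cohort averages while still moving individual pupils away from the grades their teachers judged they deserved.

```python
# A toy illustration (hypothetical data and a made-up moderation rule, not any
# regulator's actual algorithm) of fitting teacher assessments to a school's
# historical grade distribution.

from typing import Dict, List

def moderate_to_history(teacher_grades: List[str],
                        historical_distribution: Dict[str, float]) -> List[str]:
    """Reassign grades so the cohort matches the school's historical distribution.

    teacher_grades: grades proposed by teachers, one per pupil.
    historical_distribution: e.g. {"A": 0.2, "B": 0.3, "C": 0.5} from past years.
    """
    n = len(teacher_grades)
    # Rank pupils from strongest to weakest according to teacher assessment
    # (alphabetical order works here because 'A' is the best grade).
    order = sorted(range(n), key=lambda i: teacher_grades[i])
    # Build the list of grades the school is 'allowed', in its historical proportions.
    quota: List[str] = []
    for grade in sorted(historical_distribution):
        quota += [grade] * round(historical_distribution[grade] * n)
    quota = (quota + [quota[-1]] * n)[:n]     # pad or trim to the cohort size
    # Hand the quota out top-down, regardless of what teachers proposed for
    # each individual pupil.
    moderated = [""] * n
    for rank, pupil in enumerate(order):
        moderated[pupil] = quota[rank]
    return moderated

# Six pupils whose teachers judged them stronger than the school's past results.
teachers = ["A", "A", "B", "B", "B", "C"]
history = {"A": 1/6, "B": 2/6, "C": 3/6}
print(moderate_to_history(teachers, history))
# ['A', 'B', 'B', 'C', 'C', 'C'] - the cohort now matches history, but three
# pupils have been marked down purely by statistical adjustment.
```

In this toy example the cohort’s grades are pulled back exactly into line with the school’s past results, but which pupils lose a grade is determined purely by their rank order within the school – which is precisely why producing the ‘right’ averages tells us nothing about whether any individual’s adjusted grade is the right one.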
We also know the main reason that some schools regularly see lower results on average: they serve poorer communities. We have known for decades that there is a very strong correlation between academic performance as measured by examinations on the one hand and parental income on the other, and none of the actions taken to try to address that have been particularly successful. (I suspect that is primarily because they all set out to address the symptoms rather than the cause in an attempt at some sort of short-term fix, but that’s a subject for another time.) That difference in performance may be well-known and well-established, but it isn’t, and never has been, fair. There is something particularly grotesque about a Labour government here in Wales trying to ensure that historical disadvantage based on relative affluence is properly reflected in this year’s results. They are effectively locking in that household-income-based disadvantage for another whole cohort of young people, based on the historical results of the schools they attend, without the individual members of that cohort having even the limited opportunity to buck the trend which the examination system provides, and from which at least some would have benefited had the exams gone ahead.
There is no perfect solution to this year’s issues (and even less is there an easy fix for the real underlying long-term problem), but applying an algorithm specifically designed to perpetuate the injustices of the past is about the worst solution which anyone could devise. The (eventual and belated) Scottish decision to simply accept the teacher assessments isn’t a perfect one either (the idea that hundreds of teachers in hundreds of schools could ever be grading students precisely and consistently is laughable), but, coupled with a robust appeals process, it’s probably the least worst option in the circumstances. I really don’t understand why ‘Welsh’ Labour prefers to follow an only slightly adapted version of the approach of the English Tories instead.

Monday, 24 September 2012

It's not about the ologies

One of the core tenets of Thatcherism was that competition is inherently a good thing, because competitive markets drive costs down and efficiency up.  That was the underlying argument for the marketisation of the health service, for instance – and of course the examination boards.

In the latter case the expectation was that instead of each examination board having a monopoly within its own geographical area as was previously the case, they would compete for ‘customers’ (or ‘schools’, as they are more usually known).  It was always understood that the probable result would be fewer boards each having more customers; but the aim was that the overall cost would be less.
I’m not in a position to state definitively whether it worked; but I suspect that it did - in that narrow economic sense at least.  There are two problems though.
The first is that whilst the purists regard the way in which costs are reduced and efficiency increased as irrelevant, so long as it actually happens, the ‘how’ is far from irrelevant in terms of its other effects.
And the second – as could and should have easily been foreseen – is that marketplace competition doesn’t operate solely on price.  Once they’ve done all they can to reduce costs, competing organisations start looking for other differentiating factors.
In the case of the examination boards, the obvious differentiating factor was always going to be pass rates.  When schools are being judged on league tables of exam results, choosing the exam board most likely to help them climb the rankings becomes more important than the cost comparison.
At first sight, the surprise is not so much that that system is now unravelling, but that it’s taken so long to reach that point.  But if we consider the motivations of all the different stakeholders, it’s no surprise at all.
Governments, of all parties, want to demonstrate that their policies are working.  What better way to do that than regular increases in pass rates?
Schools want to demonstrate that they are improving their performance and climbing the league tables.  What better way to do that than regular increases in pass rates?
The examination boards want to grow their ‘business’ and attract more ‘customers’.  What better way to do that than regular increases in pass rates?
Pupils, of course, have always wanted to get the best results possible, and their parents (otherwise known as ‘voters’) want the same thing as well.  What better way to demonstrate that than regular increases in pass rates?
‘Teaching to the exams’ is nothing particularly new, but all the incentives have been for schools to do more of it.
Effectively, there’s been a collusion by consensus in which all of those stakeholders’ aspirations have apparently been met, with no stakeholder having any real incentive to ask too loudly the difficult questions about whether the inexorable rise in results actually reflected any real underlying improvement in knowledge and skill.
In principle, it is surely right - indeed overdue - for the UK Education Minister to challenge this process.  And recognising what he called the ‘malign’ impact of Thatcher’s reforms is equally overdue.  But changing the rules part way through an examination cycle was a spectacularly cack-handed way of trying to address the issue, and incredibly unjust to those pupils in England who don’t have a Welsh Government to reverse the decision.
It now looks inevitable that Wales and England will be taking different examination routes in future.  Doing that in a rush as a result of a spat doesn’t seem the right basis for such a major decision, but we are, as they say, where we are.
What I hope will not get lost – but greatly fear will indeed get lost – in this debate is a more detailed consideration of that issue of ‘teaching to the exams’.  There still seems to be far too much emphasis placed on the rigour of the exams, and far too little on whether, and to what extent, examination results actually tell us much about the knowledge and skills of the examinees.
And in so far as employers and others are complaining about the output of the education system, it is surely about knowledge and skills, not the number of passes in ologies.