By Newman Burdett
Tuesday 30 July 2013
Reading the recent maelstrom surrounding the PISA rankings, the prophetic words of an NFER colleague echoed in my head:
Probably the greatest risk in the use of large-scale international datasets is the ease with which it is possible to draw overly simplistic – or erroneous – conclusions.
The PISA results were never designed to give accurate rankings and should be interpreted as broad categories. The OECD itself describes the purpose of PISA thus:
Parents, students, teachers, governments and the general public – all stakeholders – need to know how well their education systems prepare students for real-life situations. Many countries monitor learning to evaluate this. Comparative international assessments can extend and enrich the national picture by providing a larger context within which to interpret national performance.
It was never intended to allow politicians to say ‘my education system is bigger than yours’, but in the realpolitik of education the OECD will always have to appeal to policymakers to fund the surveys, so the idea of ranking is too attractive to downplay. Even if the OECD didn’t publish rankings, just as the Department for Education does not publish school league tables, somebody would – and indeed they do. At least half a dozen different education rankings already exist – you can pay your money and take your choice. And they all give different answers: the Legatum Prosperity Index, the UN Human Development Index and the Pearson Education Index (Learning Curve) all produce different rankings and ‘mean’ different things.
The problem with the current debate is that all this arguing about whether (or not) the rankings are accurate overlooks a more significant point – they can never be meaningful. How can you say whether one education system is better than another? It depends on the needs of that country, not some illusory competition, and there are many different measures – so what does ‘better’ mean anyway? The real problem with rankings is that they convert complex data into a single measurement; it is the equivalent of trying to judge who is best for a job by adding up candidates’ height, weight, IQ, age and previous salary and ranking the result, while ignoring that different jobs require different skills. We just end up losing useful information – and what does it really tell us of any use? To use a formative assessment analogy (which is what the surveys are really for), we might know we are 7th (or 28th), but that does not tell us how to improve (other than ‘do what the test demands’ or ‘have a smaller and more uniform population’).
This is not to say the international surveys do not give us a great deal of useful information. Our research presented at the IEA conference in Singapore in June shows there is good evidence that the PISA (and TIMSS) tests are appropriate for English students and do provide a good match with the national curriculum. Beyond the achievement measurements, the surveys contain a huge amount of detail that tells us much that is useful to the educational debates going on in this country. But we need to stop looking through the window of PISA in a simplistic way, trying to gain an international gold star. We need to look at what we need to improve, what we are doing well, what our learners (and by implication society, industry and so on) need, and how best to deliver on that.
If we get seduced by the political rhetoric and drawn into a mindset where we simplistically judge our system against contexts that bear no relation to the needs, aspirations or reality of our own education system, then we will end up genuinely failing and, ironically, falling down the rankings (not that meaningful rankings exist!).
We should not be surprised that people misuse the data for their own agendas, or that it gets misinterpreted. But rather than arguing about what the international surveys cannot tell us, we should be looking at what they can tell us, and framing meaningful questions that will allow us to plan curriculum and assessment systems fit for the future needs of today’s young people.
The PISA results are a useful research tool but they are only one piece of the picture – they give us an indication of areas we should be looking at in more detail – they are not an outcome or an aim in themselves.