RCTs in schools: making the case for ‘fair tests’
Friday 11 October 2013
When we were at school we learned the concept of a ‘fair test’: the notion that if we wanted to discern the effect of something, we needed to keep everything constant apart from the thing we were testing. Historically, much of the research into the effectiveness of classroom interventions has not adhered to this simple idea. Groups of students who received an intervention have been compared to others who cannot be regarded as equivalent, or in some cases there has been no comparison group at all.
In 2011, the Education Endowment Foundation started to fund a series of fair tests in education research, better known as randomised controlled trials (RCTs). Those of us who are concerned about evidence in education were pleased. In February 2013, Ben Goldacre joined forces with the Department for Education to launch another programme of RCTs. By now, we were beginning to think the dark days of unfair tests of classroom initiatives were over. Unfortunately, the education research community still contains dissenting voices such as Professor Frank Furedi (TES, 4 October). Teachers are the most qualified to make decisions about which interventions to use in school, and they should be provided with reliable evidence on which to base their judgements. The present drive for rigorous evidence in education will only bear fruit if teachers understand its philosophy. Last week’s article contained misunderstandings that risk damaging this ultimate goal.
Before unpicking some of the arguments against the ‘medicalisation’ of evaluation in education, let us remember that we do not live in a world where an individual teacher’s knowledge and professionalism are the only driving force towards excellence in education. Schools routinely embrace new interventions, spending considerable sums on programmes often deemed to help struggling students. On the surface, these ideas seem harmless. Indeed, in many cases they are surely beneficial. However, most of these programmes have been poorly evaluated, and teachers have to choose between them on the basis of limited evidence of effectiveness. In the words of Professor Furedi, teachers need to have “the intellectual space where they can reflect and discuss their experience”. However, without external input and yardsticks against which to measure effectiveness, there is a danger that this reflection becomes introspection, is vulnerable to social factors and confirmation biases, and does not provide an environment where new ideas and innovations can be introduced in a disciplined, objective manner. In the most extreme cases (for example, Brain Gym) teachers largely see through the pseudoscience and realise it is not for their school. In many instances, however, teachers will select a particular intervention without detailed information about its effectiveness being available. Some practices could even be damaging to children’s education when compared to normal classroom experience, and there just is not the evidence to show it.
Professor Furedi highlights the complexity of the variables influencing teaching and learning and claims this renders RCTs pointless. In fact, this is one of the most important reasons why RCTs are necessary in education. The human body and its interactions with its environment are hugely complex and have necessitated RCTs to ensure fair testing in medicine. Interactions between pupils and teachers are at least as complicated and also require randomisation to ensure fair comparisons are made. Neuroscientists and educationalists are far from understanding the reasons why certain practices work in the classroom. Experiments show us whether ideas work by ensuring all these complex interactions are evened out between those experiencing the intervention and the control group of children.
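The balancing effect of random allocation described above can be sketched in a short simulation. The numbers here are invented purely for illustration: each student’s outcome is assumed to depend on many unmeasured factors, summarised as a single ‘prior attainment’ score, and allocation to the two groups is made by coin flip.

```python
import random
import statistics

# Hypothetical cohort: 1,000 students with a hidden 'prior attainment'
# score (an invented stand-in for all the complex, unmeasured factors
# that influence how a pupil responds to teaching).
random.seed(42)
prior = [random.gauss(100, 15) for _ in range(1000)]

# Random allocation: each student is equally likely to join either group,
# regardless of their characteristics.
groups = [random.choice(("intervention", "control")) for _ in prior]

intervention = [p for p, g in zip(prior, groups) if g == "intervention"]
control = [p for p, g in zip(prior, groups) if g == "control"]

# Because allocation ignored prior attainment, the two groups end up
# well balanced on it - and, by the same logic, on every other
# characteristic, measured or unmeasured.
print(round(statistics.mean(intervention), 1))
print(round(statistics.mean(control), 1))
```

With a large enough sample, the two group means come out very close, which is why any later difference in outcomes can be attributed to the intervention rather than to pre-existing differences between the groups.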
Professor Furedi also highlights the education secretary Michael Gove’s commissioning of the recent Department for Education report calling for a new age of RCTs. His wording may imply to some readers that this move towards evidence in education is somehow political. In fact, the very essence of running experiments in education settings is about as far from politics as you can get. Ideas that have been carefully constructed by experienced professionals are tested in as impartial a way as humanity has yet devised. The results give rise to evidence about a plethora of programmes, from which teachers can choose on the basis of potential size of impact, an understanding of what will work in their context, and likely cost. Contrast this with a government (any government) imposing a national programme on schools that has not been adequately tested. Indeed, Professor Furedi highlights the fact that only a handful of initiatives have been identified in which the evidence base met the rigorous criteria of the ‘what works’ movement in the US. The education research landscape is now changing rapidly in the UK, and the result will be many interventions that have been evaluated to these exacting standards.
If you are a teacher who would like to spend your interventions budget on the basis of how effective programmes are rather than how heavily they are marketed, now is a great time to get involved. Firstly, consider the quality of the evidence for effectiveness; one excellent way of doing this is through the Sutton Trust-EEF Teaching and Learning Toolkit. Secondly, if your school is invited to be part of a trial, why not embrace the opportunity, provided the demands are not too extreme? In this way, you will play a crucial part in weeding out ineffective classroom interventions and discovering the best ones.
Ben Styles is a Research Director at the National Foundation for Educational Research (NFER) and author of ‘A Guide to Running Randomised Controlled Trials for Educational Researchers’, published by NFER.