The importance of not making a difference

By Anneka Dawson

Monday 7 July 2014

Schools beware – trials showing no statistically significant impact on pupils’ achievement get little media coverage.

In May, the Education Endowment Foundation (EEF) published the latest results from four of its 20 randomised controlled trials (RCTs) of interventions that are believed to improve literacy levels in the transition from primary to secondary school. The headline findings reported by most mainstream news outlets focused on the ‘Improving Writing Quality’ trial, which boosted children’s reading by nine months, but made no mention of the other three trials, which found no significant differences.

These reports highlight two interesting misconceptions about the use of randomised controlled trials in educational research: firstly, that an intervention should be abandoned immediately if the results of one trial show that it is not having a positive impact on learning; and secondly, that results which do not show a statistically significant improvement are unimportant. There is also a related concern that randomised controlled trials are unethical, because one group (the control group) is ‘denied’ the intervention. This blog aims to address all three.

To address the first misconception, the EEF blogged on the ‘significance of no significance’, challenging the view that we should give up immediately on interventions that have not shown a statistically significant effect in a trial, without setting the findings in context. They point out that such a finding may be only part of a fuller story: perhaps the intervention would work with a different group of students (for example, younger students), or perhaps it needs fine-tuning to take account of the practical realities of running a trial in schools (including leaving plenty of time for schools to plan for an intervention so that they can meet its requirements fully).

Further research into many educational interventions is needed to build a full body of evidence that clearly demonstrates which interventions work, which do not, and in what contexts. This is what practitioners need in order to make the most informed choices about which interventions to use in their schools, and it is what the EEF and the Sutton Trust aim to offer through the Sutton Trust-EEF Teaching and Learning Toolkit. The toolkit brings together the findings from many different research projects in a given area (such as early intervention) and translates the results into an estimate of the additional months of progress that pupils receiving the intervention would make over a year.

To address the second misconception, it is important to challenge the persistent view that studies showing no significant differences are showing ‘no results’. These findings are just as valuable to the schools and pupils the programmes are trying to help as those showing a positive impact: knowing what doesn’t work is just as important as knowing what does, because it stops schools wasting time implementing interventions that are unlikely to make a difference. In the past, studies showing that an intervention made no statistically significant difference may never have been published, so knowledge about what doesn’t work remained hidden. Thanks to the increasing use of trial registries such as http://controlled-trials.com/, we are increasingly able to find the results of trials whether or not the findings were positive. This is essential knowledge for senior leaders deciding how precious school budgets should be spent, as outlined in a previous blog by my colleague Ben Styles.
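As an aside for the statistically curious, the comparison at the heart of such a trial can be sketched in a few lines of Python. The sketch below uses simulated pupil scores only (nothing from the EEF trials) and a simple pupil-level t-test for illustration; a real school-based trial would use a clustered analysis, but the point stands: a p-value above the conventional 0.05 threshold is a result in its own right.

# Illustrative sketch only: simulated data and a simple pupil-level analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Suppose the intervention truly adds nothing: both groups are drawn from
# the same attainment distribution (mean 100, standard deviation 15).
control = rng.normal(loc=100, scale=15, size=200)
intervention = rng.normal(loc=100, scale=15, size=200)

t_stat, p_value = stats.ttest_ind(intervention, control)
pooled_sd = np.sqrt((intervention.var(ddof=1) + control.var(ddof=1)) / 2)
effect_size = (intervention.mean() - control.mean()) / pooled_sd  # Cohen's d

print(f"difference in means:     {intervention.mean() - control.mean():+.2f}")
print(f"effect size (Cohen's d): {effect_size:+.2f}")
print(f"p-value:                 {p_value:.3f}")

# A p-value above 0.05 here is not 'no result': it tells a school that the
# intervention is unlikely to be making a worthwhile difference, which is
# exactly what it needs to know before committing time and budget.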

Finally, I would like to debunk the common myth that participation in an RCT is unethical because some of the schools (or pupils) are denied the intervention. As the current EEF example shows, only one of the four literacy interventions made a statistically significant difference to pupils’ achievement, so in three out of four cases the pupils who were not allocated to the intervention were at no disadvantage compared with those who were. More importantly, we only discovered this by running randomised controlled trials in which a group of schools or pupils acted as a control (i.e. did not have access to the intervention). Given the value of these findings to learners, it could be argued that RCTs are in fact the most ethical way to evaluate interventions! All groups in a trial are vital to its ability to determine whether an intervention is worth a school’s precious time, effort and budget.
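For readers who like to see the mechanics, the random allocation step that makes the control group a fair comparison can also be sketched very simply. The school names below are invented, and real EEF trials use more careful designs (for example, stratified randomisation at school level), but the principle is the same.

# Illustrative sketch only: invented school names, unstratified allocation.
import random

schools = [f"School {chr(65 + i)}" for i in range(12)]  # School A ... School L

random.seed(7)
random.shuffle(schools)
half = len(schools) // 2
intervention_group = sorted(schools[:half])
control_group = sorted(schools[half:])

print("Intervention group:", intervention_group)
print("Control group:     ", control_group)

# Because chance alone decides which schools receive the intervention, the
# two groups are comparable on average (in prior attainment, intake, funding
# and so on), so any later difference in outcomes can reasonably be
# attributed to the intervention itself.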

The increasing use of RCTs in educational research, demonstrated once again by the publication of these EEF results, is raising a number of questions about the best approach to collecting evidence on what works in schools. As more trials are undertaken, the benefits of RCTs are becoming increasingly apparent, as are the practical issues we are working through to make them as valuable as possible.