By Pippa Lord
Monday 18 November 2019
I’ve been lucky enough to have had a really interesting journey throughout my career in education research, and especially in my current role as a Trials Director in NFER’s Education Trials Unit, where I now design and manage randomised controlled trials (RCTs). Starting as a research assistant over 20 years ago, I’ve worked on literature reviews, student voice, mixed methods research, teachers’ CPD, consultations, quasi-experimental comparison group designs, and RCTs – and indeed in that order, although that was not by any particular design. My career path has been absolutely vital to how I approach RCTs in education – I bring a holistic approach, thinking about feasibility and practicality, staying intrigued by what makes an intervention work (or not), and wanting to find out what difference interventions make to teachers and their teaching, and to children and young people and their learning.
A year of celebration
And so it was at the recent RCT Centenary seminar – an event hosted by NFER and the Royal Statistical Society to mark 100 years since the first published RCT (or near-RCT) in education. We heard a fascinating journey through the history of RCTs in education as Professor Carole Torgerson talked us through the ups and downs of education trials: reporting random selection (in 1919), acknowledging uncertainty (in 1931), running economic and process evaluations alongside trials (in 1931), and adjusting for clustering (apparently evident in early education trials, and adopted by health trials later). Amongst all the ups and downs, adherence to the randomisation seems key; and amongst all the complexities of running trials in schools, it was so encouraging to hear that schools are keen to be part of this approach and understand its importance.
The scientific method is vital in a researcher’s armoury. But so too is the creativity to design a trial that fits neatly around an intervention, engages participants, and collects relevant data robustly and sensitively. Many RCTs in education do not find evidence of a positive effect of an intervention, and we need to understand better why this is and where improvements can be made, both to trial design and to the stage at which interventions are put to trial. We heard about the need for RCTs to be more selective – to be conducted only when an intervention is ready, relevant and replicable (the 3 Rs). No-one said it quite like this, but the presentation by Dr Ben Styles, Head of NFER’s Education Trials Unit, particularly highlighted the need for Replicability; Alex Quigley, former English teacher and now National Content Manager at the Education Endowment Foundation (EEF), highlighted aspects of Relevance to the classroom and to school leaders; and several speakers, including Camilla Nevill, Head of Evaluation at the EEF, talked about Readiness (or not) for trial.
There were also three (or four) Ps:
- Parameters – we researchers and commissioners need more evidence at our disposal to help us calculate more realistically what effect sizes we can expect to see, and the parameters for intra-cluster correlations and pre–post correlations need (much) more testing and piloting. A session on effect sizes by Dr Hugues Lortie-Forgues (Lecturer in Education at the University of York) really highlighted this. So too Precision – let’s be more precise about what we are testing and why, as Camilla Nevill attested.
- Pilots – several speakers and audience members commented that designers of interventions should develop their work through robust pilot evaluation before researchers put it through an RCT.
- Professional development – speakers highlighted the need for the training of education practitioners to include a more robust focus on evidence, including RCTs. Teachers and school leaders are our partners in educational research; we can’t do it without their willingness to give their time and contribute to finding out if and how an intervention works. So I was very struck to hear from Paul Connelly (Professor of Education at Queen’s University Belfast) that a common research methods textbook used in initial teacher training really doesn’t do RCTs much justice. I am pleased to say that many schools we approach to take part in RCTs seem to understand their role and importance, and Camilla Nevill reported that around half of all schools in England have to date taken part in an EEF RCT. But perhaps there is a skillset that needs improving – we need a stronger link between research design and how an RCT design is implemented in practice.
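To make the Parameters point concrete: in a cluster-randomised trial, the assumed intra-cluster correlation (ICC) and pre–post correlation feed directly into the sample-size calculation, which is why poor estimates of them are so costly. Below is a minimal sketch of that arithmetic using the standard design-effect formula; the function name and illustrative values are my own, not from the seminar.

```python
from math import ceil
from statistics import NormalDist


def clusters_per_arm(effect_size, icc, pupils_per_school,
                     pre_post_corr=0.0, alpha=0.05, power=0.8):
    """Approximate schools needed per arm in a two-arm cluster RCT.

    A textbook-style sketch: the simple two-sample formula, with
    variance reduced by a pre-test covariate and inflated by the
    design effect for clustering.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    # pupils per arm for a simple (non-clustered) individually
    # randomised trial, with a pre-test covariate soaking up variance
    n_simple = 2 * (z / effect_size) ** 2 * (1 - pre_post_corr ** 2)
    # inflate for clustering, then convert pupils to whole schools
    design_effect = 1 + (pupils_per_school - 1) * icc
    return ceil(n_simple * design_effect / pupils_per_school)


# e.g. detecting d = 0.2 with ICC 0.15, 25 pupils per school,
# and a pre-post correlation of 0.7
print(clusters_per_arm(0.2, 0.15, 25, pre_post_corr=0.7))
```

Even small changes to the assumed ICC or pre–post correlation move the required number of schools substantially, which is exactly why these parameters need more testing and piloting before trials are commissioned.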
Three is the magic number
Finally there were the three Is from the conference. Implementation and the need for better measurement of implementation fidelity and understanding of what’s worked (highlighted by Steven Higgins, Professor of Education at Durham University); improving impact measures including piloting those measures; and investment in training and trials infrastructure including building ‘innovation risk’ into trials.
If you are a researcher planning an RCT in education, you might want to think about these 3Ps, 3Rs and 3Is – the PRIs, as I call them… The prize is to keep to your random allocation in a high-quality RCT, so that the results can be used to make a difference to teaching and learning. And so, full circle back to the address from our Chief Executive Carole Willis that started the whole event: ‘randomly assigning is important’ – it helps us get closer to understanding the impact that interventions, programmes and approaches make on children’s learning and other outcomes. It is not without challenges, but as researchers using RCTs, we need to continually support and improve how RCTs are designed, implemented and used.
I am looking forward to continuing to work in this area, with and for policy-makers, schools, teachers and pupils.
Later this week, we’ll be publishing a teacher’s perspective from the RCT Centenary seminar on the blog. To ensure you don’t miss it, sign up here to receive blog notifications.
You can also explore more about the use of RCTs in education and how we marked the centenary of the first education trial here.