It worked for me

By Ben Durbin

Friday 15 November 2013

Love it or hate it, Twitter is the place to go if you want a good argument. Tristram Hunt seems not to be a fan, criticising in yesterday’s Guardian the “Twitter-fuelled orthodoxies of left and right, with both sides displaying decreasing interest in evidence-based policymaking”. One clash last night centred on the merits of a particular teaching intervention, Mantle of the Expert. In the blue corner, @andrewolduk led the charge against, with his and his team-mates’ arguments ranging from “it looks mad” through to a more reasoned “where’s the evidence?” And in the red corner, @debrakidd and others understandably bridled at the first of these arguments (perhaps the shadow education secretary has a point). It’s the response to the second argument I’m particularly interested in, though: “It worked for me”.

Now, you might think you already know what I’m going to say next: after all, I work for the National Foundation for Educational Research, and have written before in support of a more evidence-informed teaching profession. And I do indeed treat comments such as “it worked for me” with a measure of suspicion. However, at the risk of incurring the wrath of colleagues and roughly half of the Twitter community, I’d like to make a controversial suggestion…

Perhaps Mantle of the Expert (MoE) did genuinely work for those people making this claim. By which I mean perhaps it really did make a meaningful contribution to improving the lives, learning, and academic performance of the children involved. After all, what reason do we have to think otherwise? Surely no one is suggesting a conspiracy, a mass cover-up, or that we’re falling foul of a crafty online brand-ambassador campaign.

So where’s the problem, and how can this be reconciled with my view that teaching would still benefit from greater use of evidence, including RCT evidence, wherever feasible? In short:

  1. Perceptions of effectiveness can sometimes be misleading: objective evidence should be used to support/strengthen such arguments.
  2. Anecdotal success in one setting may be valid, but it is not enough to justify large-scale adoption elsewhere (just as failure in one setting is not always reason enough to rule out success elsewhere – though it does lower the odds).
  3. Simply finding an intervention that works is not enough – not when there may be an easily accessible and even more effective alternative.

I’ll now elaborate a little more on each of these three points:

How do you know it worked for you? Even in the absence of a big shiny RCT demonstrating effectiveness, have you adopted the new approach in a critical, evaluative manner? It’s not enough to say that children enjoyed it (important as this is); it’s not even necessarily enough to say that a particular outcome improved over time. What you really need to ask is: can you demonstrate that the outcomes of interest improved more than they would have done otherwise? Indeed, were you clear before you started what the intended outcome might be? This is where small-scale teacher-led enquiry research can come into its own, as I’ve discussed in another recent post.

Does that mean it will work for someone else? Large-scale quantitative impact studies, including RCTs, typically report on the average impact of an intervention. But in reality this average hides a range of outcomes across individual children and schools (David Weston gave a good explanation of this in his Research Ed presentation). This variability may be wholly due to chance, in which case implementing an intervention with a zero average effect will always be a gamble. Or it may be associated with some other factor not controlled for in the study (e.g. it worked for all the rural schools and not the urban ones) – in which case there will be schools where the intervention will usually be effective regardless of what the RCT said (and others where it usually won’t). Only by conducting further evaluation of both the processes and the impact of the intervention can this be unpicked, and well-informed decisions made about whether “it worked for your class” translates into “it will work for my class”.

Is good the enemy of great? If there’s any such thing as a universal truth, then “teachers are short of time and resources” must be one. Every moment of time or portion of budget spent on one thing is no longer available for something else. It is therefore not enough just to settle on something that seems to be working – we should be striving to find the approaches that make the best possible use of the resources we have. Innovation and new ideas are risky: some will succeed, others will not (and because some will succeed, the status quo can be pretty risky too). The only way to ensure the best for children is for interventions such as MoE to be subjected to scrutiny and compared with the best available alternatives.

“It worked for me” is fine as a starting point, but not as a final destination.