Are more free schools a good idea?
Thursday 12 March 2015
Earlier this week David Cameron announced plans to open 500 new free schools in the next Parliament. In doing so, he adopted a familiar template used by politicians announcing new policy initiatives: our previous policy has been successful; if you vote for us we’ll do it some more, and it’ll continue to be successful.
This approach relies on being able to credibly claim success for the policy so far. Enter stage right – on the same day – the think tank Policy Exchange published a report claiming that free schools are driving up standards in other schools in the areas where they have opened, so vindicating the policy’s twin rationales of greater choice and competition. It’s still too early to judge the success of the free schools themselves, hence the focus on their effect on the wider local school landscape.
What are we to make of these plans and accompanying claims? There’s already been much discussion, a lot of it predictably based on established political, philosophical or professional standpoints. So rather than offering yet another set of perspectives, I would like to suggest a template of my own – a sense check if you will – for how we should scrutinise announcements like the Prime Minister’s.
There are three key questions to ask:
1. Do we know it worked before?
2. Do we understand why?
3. Is it likely to work again?
So how does this week’s free school announcement fare?
1) Do we know it worked?
In other words, are we confident that free schools have genuinely caused an overall improvement in outcomes in the areas where they’ve opened (particularly in lower performing schools, as the report claims)? Becky Allen and Dave Thomson have already provided a detailed critique of the evidence in the Policy Exchange report, so I won’t seek to replicate this. But, in short, here are the sorts of issues that need to be considered:
• Is there a valid counterfactual – i.e. an estimate of what would have happened to the other schools in the area in the absence of the policy? If this is estimated by observing changes over time, are we taking into account any pre-existing trends (e.g. underperforming local schools that were already improving anyway)? And if it relies on a comparison group of similar schools, how similar are they really – is there any form of selection bias?
• Especially when claims are made about under-performing schools in the surrounding area improving, is regression to the mean a possibility? Some schools underperform in a given year purely because of chance variation, in which case we’d expect them to perform better in the following year regardless of any policy intervention (a simple simulation of this is sketched after this list).
• Have we observed large enough differences in large enough samples to conclude that any observed effects are down to more than just chance? This is a particular issue in evaluating free schools, given the small number of schools and pupils involved and the few years of data available.
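To make the regression to the mean point concrete, here is a minimal simulation sketch in Python. The numbers are entirely made up for illustration and bear no relation to real school data: 100 hypothetical schools of identical underlying quality are ranked on one noisy year of results, and the apparent ‘underperformers’ then seem to improve the following year with no intervention whatsoever.

```python
import random

random.seed(1)

# Purely illustrative: 100 hypothetical schools with identical underlying
# quality, whose measured results vary from year to year only by chance.
TRUE_SCORE = 50.0   # shared "true" performance level (arbitrary units)
NOISE = 5.0         # year-to-year chance variation

year1 = [random.gauss(TRUE_SCORE, NOISE) for _ in range(100)]
year2 = [random.gauss(TRUE_SCORE, NOISE) for _ in range(100)]

# Identify the 20 schools that look weakest on year 1 results alone...
bottom20 = sorted(range(100), key=lambda i: year1[i])[:20]

# ...and compare that same group's average score across the two years.
avg_y1 = sum(year1[i] for i in bottom20) / len(bottom20)
avg_y2 = sum(year2[i] for i in bottom20) / len(bottom20)

print(f"'Underperforming' group average, year 1: {avg_y1:.1f}")
print(f"'Underperforming' group average, year 2: {avg_y2:.1f}")
# The year 2 average drifts back towards 50 even though nothing has changed:
# an apparent 'improvement' produced entirely by regression to the mean.
```

Any evaluation claiming that schools near a new free school have improved needs to rule this effect out, along with the small-sample chance variation it feeds on, before crediting the policy.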
2) Do we understand why?
What are the mechanisms by which the policy led to the outcomes observed? In the case of Policy Exchange’s ‘rising tide’ argument, to what extent is this a feature specific to free schools? Would a similar effect be observed if a new local authority school had opened in the area, if an existing school had been taken over by a new leadership team (parent-led or otherwise), or if there had been a new injection of funding, infrastructure or in-kind support from the Government?
Without a decent, well-tested understanding of how and why a policy has worked, there are no guarantees that its benefits could be sustained or replicated elsewhere. Which leads neatly on to the third question.
3) Is it likely to work again?
Are we confident that past performance will be an accurate predictor of the future? Would we expect all free schools to perform the same as the first few, or is it possible that the best-equipped, most highly motivated groups are already involved in establishing the first batch of free schools? And is it practical to scale – e.g. is there sufficient capacity in the system to accommodate a larger-scale change? Does the policy represent good value for money, when considering all of the potential costs and savings across the system? And finally, what’s the bigger picture – are there any unintended consequences that could emerge as free schools grow in number to become a significant feature of our schools landscape?
The London Challenge provides another example of where this set of questions can usefully be applied. Did the London Challenge genuinely cause better outcomes in London, or can these be better explained in other ways – for example through demographic and/or labour market effects? And if the London Challenge did make a contribution, what were its key ingredients or mechanisms (leadership, collaboration, rigorous use of data)? Given all of this, would we expect it to work elsewhere – in Manchester, the Black Country, or Wales, all areas which have sought to replicate its success?
As these examples illustrate, the three questions are often very difficult to answer, even many years later.
Given what’s at stake, we might nevertheless expect a serious attempt to do so from anyone committing substantial public funds, and with the education of the nation in their hands.