By Julie Nelson
Thursday 3 March 2016
Last month, Professor Carol Campbell (University of Toronto and Knowledge Network for Applied Educational Research) and I co-hosted a round-table discussion at the International Congress for School Effectiveness and Improvement, where we also issued a call for papers for a special issue of Educational Research on evidence-informed practice (EiP) in education. We were joined by researchers, policymakers and teaching professionals from many countries.
The round table gave us an opportunity to explore the issues we plan to cover in the journal.
At one level, the discussion didn’t present many surprises (EiP is challenging, there are structural and cultural barriers, different people understand it in different ways, and so on). But at another, it generated some fresh perspectives that we will draw on as we edit the journal.
1. What is EiP?
We anticipated that the conversation might focus on issues of research production, application and use. In the event, it took a different turn:
Performance data and professional judgement
Performance data was judged to be the main source of evidence for most schools. But as one participant noted: ‘data alone should never be used to make decisions. Professional judgement should use data to make decisions’. This reinforced a model I presented in a recent blog post, in which EiP represents the intersection of performance data, professional judgement and research evidence. I still believe, though, that research evidence is the overlooked piece of that jigsaw!
Values and attributes
The group also viewed EiP as a way of thinking: a willingness to challenge existing practices and to be reflective – prepared to use externally- and internally-generated sources to disrupt pre-held beliefs. Coupled with this, teachers need to be critical consumers of evidence – not unquestioning users. It seems to me that insufficient attention is given to teacher values and attributes. In England, the Professional standards for teachers and trainers of post-16 learning treat these as equal in importance to knowledge and skills. In many respects, this thinking is more advanced than in the mainstream education sector.
2. What does EiP ‘look like’?
The group focused on preconditions for evidence engagement rather than on particular models of EiP, and particularly on the issue of trust:
- system trust – space for school leaders to lead schools according to professional expertise without fear of reprisal
- institutional trust – leaders allowing teachers to experiment, innovate, and learn from different sources in a non-judgemental environment, and modelling behaviours
- inter-personal trust – space for teachers to learn from each other, and others outside of schools, through peer learning and collaboration.
It is interesting that preconditions emerged as a strong group response to this question. The Coalition for Evidence-based Education is currently developing ‘research readiness’ guidance for schools in England. Similarly, NFER’s Self-Review Tool helps schools to review their research readiness in relation to issues like leadership, resources and collaboration. These preconditions are often overlooked in discussions about EiP, but without them, arguably it is unlikely to flourish.
3. What enables and hinders knowledge mobilisation?
The term ‘knowledge mobilisation’ recognises that there is a bridge to cross between the worlds of research and practice. In education (where there are few formal structures supporting connection between the two), we need to think creatively about how to bridge the gap.
What types of evidence are needed (what traffic should pass over the bridge)?
Contributors queried how often researchers ask schools what evidence they need. In a previous NFER Thinks piece, a colleague and I discussed systemic challenges around achieving better alignment between research and practice. There is still a way to go.
Who should produce the evidence (should the traffic be single lane or two-way)?
Ideally we need ‘practice into research’ and ‘research into practice’. Research should support the scale-up of teacher innovations as well as generating information for schools on what works.
What are the best ways of exchanging evidence (which vehicles do the best job)?
Some contributors were involved in strategies exploring how to embed research evidence. Right now we don’t know which work best, but the evidence base is growing. A recent NFER review found that socially interactive approaches may be more effective than passive dissemination approaches.
A key learning point for me was that evidence is only as useful as its application. But there is no point ‘telling teachers what to do’. Teachers need to believe in and own the knowledge if they are to utilise it.
Our expert discussant, Professor Louise Stoll, suggested an alternative way of thinking about utilisation: ‘accommodation’. This allows for professional judgement and knowledge of context. She also used the term ‘knowledge animation’ rather than ‘knowledge mobilisation’. This implies that someone, somewhere (hopefully the teacher) owns the evidence and is bringing it to life!