Education does not have an evidence problem – it has an implementation problem.
In recent years we’ve seen a rising awareness of the importance of research-based evidence in education. This is, of course, to be welcomed, not least by those who have been campaigning for a greater focus on evidence for decades. There is a lot of evidence: there are now hundreds of education research journals, with hundreds of thousands of articles. For example:
The National Reading Panel (2000) selected research from the approximately 100,000 reading research studies that have been published since 1966, and another 15,000 that had been published before that time. (Hempenstall, 2006)
And that was over two decades ago.
Curiously, despite this abundance of research, we have not seen significant, system-wide advances in teaching practice or school operations. Improvements tend to be localised, closely tied to specific individuals, and temporary, fading when those individuals move on. Yet where even simple research-based strategies have been applied system-wide, such as the Year 1 Phonics Screening Check, the data suggest that significant improvements in performance have followed.
So why, despite the decline in resistance to research, are we not seeing greater impact in schools, particularly in reading? Why are the groups of children who were doing badly ten, fifteen or twenty years ago still doing badly? After all, we don’t seem to lack evidence. Researchers such as Keith Stanovich have noted that the scientific consensus about how we learn to read is one of the strongest in any field of science. Yet we still seem to be in the midst of reading wars, with no apparent end in sight.
Essentially, the same factors that cause educators to resist or ignore research can also cause us to implement research badly.
- One of the strongest factors holding our systems back from responding to research in a scientific manner is confirmation bias. We tend to dismiss statements, findings or theories that are at odds with what we already believe. Likewise, we give far greater weight to ‘evidence’ that supports our position than to information that does not.
- Cognitive dissonance is an extension of this problem. Once we have invested in an idea in a significant way, in terms of our ego, our reputation, our professional standing, or our political career, it becomes very difficult to accept evidence to the contrary. This is why the reading wars rumble on: no amount of research will persuade some teacher educators to repudiate what they have been promoting in their initial teacher training lectures, so new teachers continue to join the profession knowing little or nothing of the research on systematic phonics teaching. Likewise, teachers who have spent many years using practices that have since been discredited are unlikely to receive the bad news with grace.
- Within the profession, we tend to oversimplify research and to adopt ideas that ‘feel right’ rather than those that have been evaluated critically. This is because most teachers have little or no training in how to evaluate research, and are thus unable to benefit from it, or (worse) are at the mercy of ‘gatekeepers’ who filter the research according to their own narrative.
- For the same reason, there is a lack of understanding of the rigour required to produce reliable scientific evidence in education, especially in ‘real world’ (as opposed to clinical or laboratory) settings.
- This lack of rigour means that, when schools do attempt to implement initiatives, we often ‘adjust for context’ by making changes to the procedure, programme or method without realising the importance of the features we are altering. We also tend to underestimate the impact of these ‘context-based’ changes on student learning, and the result is poor fidelity.
- Without fidelity, an initiative cannot be evaluated or replicated, and so it is not possible to determine why it succeeded or failed.
- Because the results are then dubious, ambiguous or imprecise, those responsible for the initiative develop narratives about what happened that fit their own interpretations.
- The school then moves on to the ‘next new thing’, and the overall effect is that confidence in research begins to wane.
In the next post, we will look at some examples of research studies where implementation issues undermined the outcomes.
You may also be interested in:
- I tried that and it didn’t work
- Teaching Reading is Rocket Science
- From novice to expert: seven signs your school is dealing with reading effectively