Low Fidelity

Apr 11, 2023 | Effective Practice

Without fidelity, we can’t expect reliable outcomes – and crucially, we won’t know what went wrong.

In our last post, we looked at the reasons why teachers may be resistant to some educational research, and why schools so often implement research-based initiatives poorly. Briefly, these were:

  • Confirmation bias and cognitive dissonance;
  • A lack of training in evaluating research;
  • A lack of understanding of the importance of fidelity in both conducting and implementing research.

To illustrate this last point, consider some published evaluations by the Education Endowment Foundation (EEF) that raise the issue of fidelity of implementation. An obvious example is the report on the Read Write Inc. (RWI) Fresh Start programme, published in 2022. The study aimed to determine the impact of this phonics-based reading and writing programme on low-performing pupils at the transition from primary to secondary school. However, the evaluators found it extremely difficult to draw conclusions, commenting:

Fresh Start was not implemented as intended in a significant proportion of intervention schools, with 35% of schools not delivering Fresh Start at all, 29% delivering Fresh Start to some but not all eligible pupils and 12% not providing enough data for the evaluators to know whether or not they were delivering Fresh Start [at all].

In other words, 76% of the schools in the study either failed to deliver the programme as intended or could not demonstrate that they had, rendering the research all but valueless.

Staff from two schools in the study were interviewed about implementation. Some felt that the training and materials were insufficient; they also noted that schools lacked space and staffing time, and often did not prepare teaching assistants to manage behaviour when working with classes or groups. In the light of such fidelity problems, it is hardly surprising that Fresh Start pupils were found to make three months' less progress than the control group (some of whom, it turned out, were also being taught using Fresh Start).

In the EEF’s Switch-On Reading evaluation, the authors noted that there were issues with training and implementation, and that these might be responsible for the zero effect size the study found. In the Embedding Formative Assessment study, the authors noted that “implementation varied significantly”, with no observed effect on English or mathematics performance (pp. 4–5). And in Oxford University’s Fit to Study programme, which also achieved a zero effect size, the EEF noted that “schools struggled to implement Fit to Study and attendance at the training was poor”.

The lesson from these studies? Fidelity is essential if we are to be, as Hattie puts it, evaluators of our own impact. This is true of research studies, and of schools attempting to implement research.

For a contrasting example, consider the well-known Response to Intervention study in secondary schools conducted by Vaughn and Fletcher (2012):

Fidelity of implementation is an issue of high importance, as is the extent to which the time we allocated to treatment corresponded with the actual time students received treatment. As we report in more detail in other papers (Vaughn, Cirino, et al., 2010; Vaughn, Wanzek, et al., 2010; Vaughn et al., 2011), fidelity of treatment was high largely because the research team hired and supervised all of the treatment teachers and actual treatment time was documented as corresponding with expected treatment time.

The strong emphasis on fidelity in this study not only enabled the authors to declare their results with confidence, but also to know which questions their research had answered, and which it had not. In the same way, schools need to understand which needs they are meeting and which they are not, so that they can continually improve.


Of course, it’s still possible for schools to invest in bad ideas, but we have plenty of evidence to help us avoid that trap. The larger problem is that it is possible to invest in a good idea and still get poor results, simply because we didn’t observe a high standard of fidelity.

Maintaining fidelity in the complexity of secondary school environments will be the subject of the last post in this series.



