Education has a reputation for being subject to fads, where new ideas are adopted and then dropped. It seems to me that this is not so much because teachers are lazy, but because we are so enthusiastic, and always eager for new ways to help our students. Approaches that we think ‘work’, we keep in our arsenal, while we discard those that ‘don’t work’.
There is always the next new thing. We had Brain Gym, VAK, and NLP. We had versions of AfL that reduced it to lolly sticks and endless ‘dialogue’ marking. More recently we’ve had grit, growth mindset, and mindfulness. We have cold calling, interleaved practice, and worked examples. These approaches range from having no evidence, to misinterpreted evidence, to quite sound evidence. Sometimes it’s our intuition, rather than the evidence, that has made an approach appealing. Responding to superficial features rather than checking the evidence for ourselves can lead to a lot of trouble later on.
But there is another problem. It’s the comment in the title of this post: “I tried that, but . . . .” In the hectic pace of school life, it’s so easy to approach new strategies – or interventions – superficially. We cut corners from the original approach, make a few ‘adjustments’ and ‘adapt to our context’ (which often means, ‘our timetable’). Then we wonder why it’s ‘not working’ – why students don’t seem engaged, why there doesn’t seem to be much progress. Well, if it’s not the original version, why would it work?
Another explanation for why an approach isn’t working could be that the expected outcomes were overstated. This is why it is so important to understand the evidence (including the theoretical framework) underlying the strategy. If the underlying practices employed in an intervention are not supported by decent empirical research, why would we expect it to have impact in the classroom? The same goes for statements about the programme’s outcomes. I read of one reading intervention that initially claimed an effect size of 1 – about a year’s progress over a ten-week period. The actual impact dropped to a quarter of that in an EEF pilot, and then to zero in a wider study.
Understanding the underlying methodology also matters because it gives us a better chance of understanding why certain elements are arranged as they are. Does it really matter if two components are switched? I recall observing one reading tutor who had changed the order of the lesson plan she had been trained to use. When asked why she did this, she simply said that she preferred to start the lesson at a different point. She had forgotten that it mattered to start the lesson by revisiting previous learning. (For the record, this supports revision, early success, and motivation.) Fidelity of delivery is essential if we are to expect good outcomes.
One of the biggest threats to fidelity of delivery is that much-discussed phenomenon, the Dunning-Kruger Effect. We feel confident that we have mastered something only because we are such novices that we don’t know how much we still have to learn. Early success can convince us that we know enough to tinker, and that can ultimately have negative consequences. If we persevere with learning more about an approach, we often find that we have skills to learn which, when we set out, we didn’t even know existed.
So concluding that “it didn’t work” is not the end of the process. If we find that a strategy or intervention isn’t working, we should ask ourselves: is it to do with the way we have implemented it, or were the expected outcomes overstated to begin with? If research shows that the latter is the case, it is best to let it go and focus on something more evidence-based in future. If, on the other hand, honest self-review shows that we are not delivering it as intended, then we need to focus on fidelity of delivery and, if necessary, upskill.
The key is being humble enough to reflect and change if we need to.