How to find out what works in ‘What Works?’

Nov 11, 2017 | Effective Practice

Choosing an effective intervention may not be as difficult as you think.

For school leaders looking for evidence on the effectiveness of literacy interventions, the go-to source is Professor Greg Brooks’ What works for children and young people with literacy difficulties? Published by the SpLD-Dyslexia Trust, this work compiles the available evidence on currently available interventions in reading, spelling and writing. Greg Brooks invites submissions, evaluates the data and collates the information into a form that enables reasonable comparisons to be made. Pre-dating the EEF’s “Toolkit”, and much more precisely described, What Works is now in its fifth edition.

This blog post is prompted, however, by conversations with hard-pressed senior leaders and SENDCOs who find the sheer wealth of information too much to wade through. This is a step-by-step guide for secondary school leaders to simplify what may seem like a daunting process.

Step 1: Identify the relevant age group

The report is split up into sections covering primary, Key Stage 3 and above, and young adults.

Step 2: Identify the relevant learning domain

There are three domains covered in the compilation: reading, spelling and writing.

Step 3: Compare the ratings

Greg Brooks has classified the results under three broad headings: useful, substantial, and remarkable. If you are looking for serious progress, you should focus only on interventions with ‘remarkable’ impact. (After all, if a gain of a few months meets your students’ needs, they probably don’t need intervention at all.)

Step 4: Compare the Ratio Gains and the total gains

Look at the number of months gained for each month in the programme. Greg Brooks has calculated this as the Ratio Gain (RG) for each intervention. This means that the impact of interventions is roughly comparable. However, you should also note the total months gained. A Ratio Gain of three months for every month on the programme might seem like a good result, but if the programme can only offer two months’ worth of lessons, you can only expect a six-month gain. Is that enough? If your aim is for struggling readers to catch up quickly, you will need a programme that can deliver a total gain of at least three years, preferably more, and do so within an academic year.
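The arithmetic behind this step can be sketched in a few lines of Python. The figures below, and the assumption of roughly nine teaching months in an academic year, are illustrative only and are not values taken from the report:

```python
def total_gain_months(ratio_gain, programme_months):
    """Expected total gain in months: the Ratio Gain (months gained per
    month on the programme) multiplied by the programme's length."""
    return ratio_gain * programme_months

# A Ratio Gain of 3 sounds impressive, but over a 2-month programme it
# still delivers only 6 months' total gain:
print(total_gain_months(3, 2))  # 6

# To deliver a 3-year (36-month) gain within one academic year of roughly
# 9 teaching months, a programme would need a Ratio Gain of at least:
print(36 / 9)  # 4.0
```

In other words, a high Ratio Gain is only half the picture: the programme must also run long enough to turn that rate into the total gain your students need.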

Step 5: Analyse how much impact the intervention had on the weakest students

For each set of results, Greg Brooks has identified the average level at pre-intervention and the average level at post-intervention. This helps to show us how far behind the students were to begin with. A data set may show a significant gain, but if this gain occurs mainly with students who are already performing at the average for their age or above, is this a catch-up intervention? An intervention that produces significant gains for students who are well behind their peers is a much more worthwhile investment.

The five steps above should help you to draw your own conclusions very quickly.

Spoiler alert

If you want to do your own research, please don’t read any further! However, if you’d like a preview, I’ve summarised the results on reading at secondary school for you below.

Cheat sheet

In summary, there are five interventions in the secondary school section for reading considered to show ‘remarkable’ results. Of these, four showed average gains of 10–27 months. In these interventions, better results were shown for students who were already reading close to their chronological age. In the one remaining intervention, students were reading on average five years behind when starting the programme, and gained more than five years (60+ months) in their reading by the time they had completed it. These gains were maintained at follow-up testing.

You should, of course, do your own checking!

What about the EEF’s toolkit?

Greg Brooks has included some of the Education Endowment Foundation’s randomised controlled trials, but ruled out 14 of the 24 EEF literacy studies: “The reasons for not mentioning 14 of the RCT evaluations in this report varied: non-significant findings, implementation or sampling problems, small samples, high drop-out . . .” Others, like Greg Ashman, have expressed their reservations about the lack of clarity in the way the Toolkit presents information. For clear descriptions of the interventions and specific, accessible data (including the results of RCTs), we would recommend What Works every time.
