Choosing an Intervention: Who Does It Help?

Oct 15, 2017 | Effective Practice, Teaching Struggling Readers, Whole School Literacy

To know if an intervention is effective, we need to know who it helps most. 

Schools are rightly making more of an effort to evaluate the evidence for interventions before investing in them. This is a good thing, not least because poor interventions waste students’ time, the most finite but least appreciated commodity in the education system.

However, such evaluation requires looking past the headline averages. Let’s say an intervention is reported as enabling students to make 24 months’ progress in a few weeks. Unless we know the characteristics of these students, we really can’t tell if this intervention is likely to be of benefit to the pupils about whom we are concerned. There are two main questions to address: how were students selected for the intervention, and how far behind expectations were they to begin with?

Question 1: How were students selected for the intervention?

Was it just a one-shot test? The fact that a standardised test has been used does not automatically mean that the student’s score is a true indication of their performance. Apart from the standard error of measurement implicit in all standardised tests, low test motivation can play a much larger role within a specific school’s population than it does in the general population. In one of my previous schools, when we re-tested the Year 10s with a different test (having been unimpressed by some of their apparently low scores), we halved the number of students who appeared to be in need of reading support. When we then tested the remaining students one-to-one (with a third test), we halved the numbers again. In other words, 75% of the students with low scores on the first standardised test turned out to be reading at levels that did not require intervention after all. This may not be the case in all schools, but if only one test is used, we can’t be sure.
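To make the mechanics concrete, here is a minimal simulation in Python. All the numbers are made up for illustration (cohort size, cutoff, and the size and frequency of the motivation dips are assumptions, not data from the school above). It shows how a single noisy test can flag far more students than an independent re-test confirms.

```python
import random

random.seed(1)

# A minimal sketch with invented numbers. Each student has a "true"
# reading age; the first test adds measurement error, and an assumed
# one-in-five minority also underperform badly through low motivation.
N = 200                      # hypothetical Year 10 cohort
CUTOFF = 12.0                # flag if tested reading age falls below this

students = []
for _ in range(N):
    true_age = random.gauss(14.0, 1.5)                 # true reading age
    dip = random.choice([0.0] * 4 + [random.uniform(2.0, 5.0)])
    test1 = true_age - dip + random.gauss(0, 0.5)      # one-shot group test
    students.append((true_age, test1))

flagged = [s for s in students if s[1] < CUTOFF]

# Re-test the flagged students with an independent test: fresh error,
# and assume the one-to-one setting removes the motivation dip.
confirmed = [s for s in flagged if s[0] + random.gauss(0, 0.5) < CUTOFF]

print(f"flagged by test 1: {len(flagged)}")
print(f"still below cutoff on re-test: {len(confirmed)}")
print(f"false alarms: {1 - len(confirmed) / len(flagged):.0%}")
```

The exact false-alarm rate depends entirely on the assumed parameters; the point is only that one noisy measurement inflates the apparent need for intervention.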


You can see how relying on a single score at pre-test could make progress appear more positive than it is, because the students were not starting as far behind as their scores suggested. (For example, one of the students in the school above improved between tests by four and a half years.) In theory, larger numbers in a sample will compensate for individual variations in motivation. In a real setting, however, many of the students scoring poorly because of low motivation would end up in the intervention group. Not only does this waste resources on students who don’t need them, but it can also make the intervention look more effective than it really is. Higher scores at post-test for these students look like progress, when in fact they may simply reflect a more representative performance than the pre-test scores did.
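This is regression to the mean, and it can be shown with another toy simulation (again with assumed numbers, and deliberately no real growth built in). The students selected on the lowest pre-test scores still show an apparent gain at post-test, purely because their pre-test scores were unrepresentatively low.

```python
import random

random.seed(2)

# Every student's true reading age is fixed: there is NO real growth and
# NO intervention effect anywhere in this simulation.
N = 500
true_ages = [random.gauss(13.0, 1.5) for _ in range(N)]
noise = lambda: random.gauss(0, 1.0)   # test error incl. motivation effects

pre = [t + noise() for t in true_ages]
selected = sorted(range(N), key=lambda i: pre[i])[:50]   # lowest 50 scores

post = [true_ages[i] + noise() for i in selected]
gain = sum(post[k] - pre[i] for k, i in enumerate(selected)) / len(selected)

print(f"mean apparent gain with zero true growth: {gain:+.2f} years")
```

Any genuine intervention effect would sit on top of this spurious gain, which is why selecting on a single low pre-test score flatters the headline average.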

In summary, beware of a single test score being used to identify students in need of intervention.

Question 2: How far behind were the students in the sample?

In one study I read recently, for example, the students had a mean starting reading age of 12 years 3 months – hardly behind at all, since the mean chronological age of the group was estimated at 12 years 6 months. Of course, if 12 years 3 months was the mean, then some students were above that level and some below. Leaving aside the question of why you would intervene with students who are hardly (or perhaps not at all) behind, the important question for schools is: how much growth was achieved by students who were reading significantly behind, e.g. three years or more? In the case above, based on information on the intervention’s website, it turns out that greater progress was made by those who started out as average or above-average readers. Much less progress was made by those who were well behind to begin with. How useful is this if we are looking to support the weakest readers?
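If a study (or the intervention’s website) publishes per-pupil starting and finishing scores, the check is straightforward. The sketch below uses invented (pre, post) reading ages and an assumed chronological age of 12.5 years purely to show the calculation; none of the figures come from the study mentioned above.

```python
# Hypothetical (pre, post) reading ages in years; substitute the study's
# actual per-pupil data.
results = [
    (12.4, 13.9), (12.8, 14.5), (13.1, 14.8),   # started at/above average
    (9.0, 9.6), (8.7, 9.1), (9.4, 10.0),        # started 3+ years behind
]

CHRONOLOGICAL_AGE = 12.5
WELL_BEHIND = CHRONOLOGICAL_AGE - 3.0

behind = [post - pre for pre, post in results if pre <= WELL_BEHIND]
rest   = [post - pre for pre, post in results if pre > WELL_BEHIND]

print(f"mean gain, 3+ years behind: {sum(behind)/len(behind):.2f} years")
print(f"mean gain, others:          {sum(rest)/len(rest):.2f} years")
```

The numbers here are constructed to mirror the pattern described above; the point is the grouping by starting point, not the particular values.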


In summary, check the starting points of the pupils in the study of the intervention under investigation, and compare these with the students who are of concern in your school.

Anything that improves reading is good news – but if we are investing in staffing and resources, we should look past the headlines and slogans to ensure that we give our students the outcomes they need.


You may also be interested in:

What Works? 

How to Save Time and Money Through Screening

Building on the Evidence

Beware the Reading Traps

I tried that and it didn’t work . . . 
