10/10/16

A step towards better science for education policy

Image credit: Atul Loke / Panos

Evidence-based policy. It's one of those phrases that has become ubiquitous. But what exactly does it mean? And perhaps more importantly, where is that evidence coming from, and can it be trusted as a basis for formulating policy?

The International Initiative for Impact Evaluation (3ie), an NGO, has been wrestling with such questions in the field of education, and its most recent study, ‘The impact of education programmes on learning and school participation in low- and middle-income countries’, was launched last week (September 27) at the What Works summit in London, UK.

The researchers synthesised evidence from 216 studies, reaching 16 million children across 52 low- and middle-income countries. They say there are no ‘magic bullets’ to ensure high-quality education for all, but there are lessons to be learned for improving future education programmes — about how cash gifts can boost school attendance, for example.

Because the systematic review process was a central element of this study, much of the meeting focussed on its methodology.

Peering in from the outside, it struck me that 216 studies sounds like a large enough sample to support robust conclusions. But how did the researchers assess this? And what about the wide variation in the methodologies used across the studies they examined?

Then there’s the dark side of science, including social science: the lack of transparency in data and methodology, for example, or a culture of publication bias that favours positive results over negative ones, or the poor use of statistics in analysing results. And that’s before mentioning the lack of reproducibility across huge swathes of research.

How did 3ie evaluation specialist Birte Snilstveit navigate these problems in the systematic review when, as she noted at the launch, some of the primary data is flawed?

“We tried to do the best we could with the data we have,” she said. “We applied study inclusion criteria based on scientific research to make sure we captured studies with designs that minimise bias. So we excluded the most problematic studies.”

This seems reasonable, but it doesn’t go far enough. A more promising, if small, step towards making education research more robust is 3ie’s launch of a registry covering both randomised controlled trials and quasi-experimental studies conducted in low- and middle-income countries.

The registry was modelled on medicine’s AllTrials campaign, which has been widely welcomed. Its aim is to have all medical research projects registered before any analysis takes place, so that those producing negative results cannot go ‘missing in action’.

Snilstveit called for more funding for high-quality studies on the effects of education programmes. “Such studies should target substantive gaps: promising interventions, innovations, areas where effects are unknown, and geographical contexts where there is a lack of evidence — for example, West and North Africa, the Middle East, and large, populous countries like Nigeria, Bangladesh and Indonesia,” she said.

She added to the list studies that incorporate equity, target the hardest-to-reach children, report on intervention costs and allow for cost-effectiveness analysis.

Uptake of the registry has been low, with only 91 studies registered so far. But at least it’s a step in the right direction.