“Transparency International has something called the Corruption Perceptions Index — and that’s subjective, because there is no objective measure of corruption. And I don’t see any improvement on that in the near term.”
These words, spoken by Michael Hershman, co-founder of the anti-corruption NGO Transparency International, were perhaps the ones that struck me most at this year’s Warwick International Development Summit held at the University of Warwick, United Kingdom.
The meeting, which was held this weekend (21-23 November), was a refreshing chance to hear from experts who see the big picture of development. Neil Buhne, director of the UN Development Programme office in Geneva, Switzerland, for example, was there.
Several speakers discussed the need to ‘restructure’ development aid. I confess that, at times, their exhortations to take simple and powerful actions — such as cutting off aid to corrupt regimes — made some of the technological and scientific aspects of development covered by SciDev.Net feel peripheral.
But then again, maybe not.
An hour or so into a panel discussion about restructuring aid I began to feel slightly weary of hearing the expert opinions about what aid was working, what wasn’t, what should be prioritised and what left to simmer. Conditioned by my PhD training, I was eager to hear if the experts thought there was enough evidence about what development measures really work.
When I asked that question, I got a range of responses. Hershman seemed to think that impact evaluations were mostly sufficient, while Bryony Everett, a development consultant who has worked for the UK Department for International Development, bemoaned the fact that evaluations were too often “standalone” and didn’t capture big-picture changes. Warwick University philosopher David Axelsen raised the issue of evaluations dealing in rather coarse units: how many people are malnourished, rather than how much nourishment has improved.
It’s not surprising that development is hard to measure. Corruption, for example, is inherently difficult to quantify: officials who take bribes do not tend to mention it in surveys. Another example, pointed out by SciDev.Net’s columnist Henrietta Miers, is the challenge of measuring progress on women’s empowerment — how would you put a number on that?
But I felt that the range of opinions I heard during the panel discussions betrayed a wider lack of attention to measuring development outcomes in the sector. Just because these concepts are tricky to measure, does that mean we shouldn’t try? Surely being confident of what progress has been made should be a prerequisite for good development planning?
I think development would benefit from more scientifically minded people helping to thrash out these measurement challenges. For example, Miers has suggested that one option for measuring women’s empowerment might be anthropological observation: embedding researchers with communities and recording changes in their dynamics over long periods.
This sounds unusual and would be expensive. But giving more thought to ideas like this could be worth it if the result is a better understanding of where aid and development are succeeding, and where they aren’t.