27/02/14

Aid agencies urged to raise the bar on data collection

[Image: humanitarian aid. Copyright: Flickr/isafmedia]

Speed read

  • Agencies need to improve how they measure the effectiveness of their assistance
  • To do this they must develop more robust ways to collect and analyse data
  • Transparency, independent reviews and collaboration are all crucial

[LONDON] Humanitarian agencies need to improve how they measure the effectiveness of their assistance, but to do so they must develop clearer data collection methods and analyse results more systematically, according to a report by a humanitarian research network.
 
They will also need to open up their data to outside scrutiny and treat them more scientifically, experts said at the report’s launch at the Overseas Development Institute, in London, United Kingdom, last week (21 February).
 
The report by ALNAP (the Active Learning Network for Accountability and Performance in Humanitarian Action) recommends ways of improving the quality and use of evidence of effectiveness. It also sets out criteria for assessing how good that evidence is.

Paul Knox Clarke, the report’s joint author and head of research and communications at ALNAP, stressed that evidence is “something you need to do the job”. 
 
[Embedded clip: Paul Knox Clarke on evidence collection methodologies, SciDev.Net]

Evidence collection and analysis is a “significant input” for programme design, he said, enabling agencies to assess whether humanitarian support is needed, what type would be most effective and how much is required.

To ensure the accuracy of evidence, data that are collected need to “faithfully mirror conditions on the ground”, Knox Clarke said, and be representative to ensure that different voices from affected communities are heard and reflected in programme design.

More should also be done to ensure the inclusion of those receiving aid or in crisis, who rarely call the shots about the information being collected, he said.

A failure to clearly explain how evidence was collected was a key stumbling block to delivering useful evidence about interventions and impact, Knox Clarke told SciDev.Net.

“While there are many notable and noble exceptions,” he said, “generally reports [assessing humanitarian work] do not make any formal statement of the methodology that they used either to collect the information, to establish the accuracy of the information or to establish the degree to which the information is representative of the larger population.”

[Embedded clip: Paul Knox Clarke on how questions for evidence collection are designed, SciDev.Net]

John Seaman, director of research at Evidence for Development, a UK charity that develops methods for collecting and analysing information on household income and livelihoods, stressed that the culture and business models of humanitarian agencies were often geared more towards perpetuating the organisations themselves than towards assessing whether their work was worthwhile and effective.

“It has shifted a bit, but the reality is that that dominates business,” he said.

Seaman called for independent reviews of the existing data that are “removed from all the pressures and biases that are intrinsic to any agency’s operations”. To do this, he added, agencies needed to break out of their silos, start collaborating, and treat data collection and analysis as they would be treated in any other branch of science.

Knox Clarke said that forging collaborations between agencies, academics and other actors would be crucial to improving evidence collection and analysis.

“Working with academics on these methodology issues is an obvious win,” he said.

Joanna Macrae, humanitarian adviser at the UK Department for International Development, agreed there was a need for more collaboration among humanitarian agencies and with a “broader array of actors”, including academics, to help improve the quality of evidence and allow data to be compared over time.

“Huge quantities of data are held by individual agencies at individual project level, but the problem is they tend to be locked in at that level,” she said. “Wouldn’t it be interesting if we could open up a lot of that data so that different people could be mining it at different times for different purposes?”

Encouraging a cultural shift towards embedding evidence-based approaches into humanitarian work should involve incentives, she said, including making collaborations between academics and agencies a condition of funding.

And science academics would also need incentives to get involved as, historically, their rate of engagement has not been particularly high, she said.

Link to Insufficient evidence? The quality and use of evidence in humanitarian action