04/05/15

Science groups told to monitor impact of advice

testing an upcoming Practitioner's Guide
Copyright: Flickr/C. Schubert/CCAFS

Speed read

  • Routine consultations usually end as soon as the report is handed over
  • This may be particularly prevalent in developing countries
  • After providing information, advisory groups should evaluate its effects

Scientific expertise is increasingly prominent in international policymaking but little effort is made to judge how effective this input is, according to a report from the Organisation for Economic Co-operation and Development (OECD).

Simply providing information is not enough — advisory groups and researchers must evaluate and monitor the impacts of advice to improve their credibility and effectiveness, it says.

The authors of the report, published last month (20 April), tell SciDev.Net that their call to action is a response to what they see as a lack of responsibility among scientists.

“If development studies have taught us one thing, it’s that you need to get out there and get your hands dirty.”

James Wilsdon, University of Sussex, United Kingdom

Science advisory bodies “usually consider that their role ends when the advice is provided, do not comment on policy decisions and only intervene and communicate when the advice is misinterpreted”, the report says.

Carthage Smith, the OECD's lead analyst for the report, says advisory bodies must become more self-critical and should assess their own effectiveness as a matter of course.

It’s not difficult, he says, as the methods needed can be found in any standard impact assessment toolkit. Reflection should begin by examining the quality of the evidence being provided. But it is equally important to judge how the information is taken up by stakeholders and whether it helps to achieve any policy goals in the long term, Smith adds.

In the report, a German research body, the Commission of Experts for Research and Innovation, is held up as a shining example of what can be achieved. Despite lacking any role in implementing its advice, it still routinely conducts follow-up surveys and tracks the political uptake of its work, the report finds.

Unfortunately, this type of introspection is rare, it says. Scientific advice given on high-profile issues — like the Ebola crisis or the Fukushima nuclear disaster — does get dissected. But more routine consultations usually end the moment the report is handed over. At this point, ad hoc groups formed to answer specific policy questions disband, leaving no one accountable for the information provided.

Smith believes this is particularly prevalent in developing countries. Where stable institutions with a defined, long-term advisory remit are lacking, there is rarely the capacity to look beyond the report itself.

Furthermore, the majority of scientific assessments relating to developing countries are conducted by a handful of institutions in richer nations, says Frank Biermann, a political scientist at VU University Amsterdam in the Netherlands. Decision-makers are less likely to take assessments on board if they come from external sources, he says, especially those addressing social and political issues.

But James Wilsdon, professor of science and democracy at the University of Sussex, United Kingdom, says the development sector offers a tried and tested template for conducting the kind of real-time project assessments that science advisors sorely lack.

One central pillar of this approach is moving out of the laboratories and meeting rooms to discuss impacts with the people on the ground who are affected most. “If development studies have taught us one thing, it’s that you need to get out there and get your hands dirty,” he says.