Policymaking is only as good as the evidence used to formulate it. So when developing countries boost their national research effort to underpin policymaking and nurture innovation, they must strive for high-quality research.
Simply increasing the quantity of research — by training more scientists or doling out more research grants — does not necessarily improve its quality.
How far Pakistan's radical higher education reforms have improved the country's research quality, for example, is hotly debated (see Debating Pakistan's higher education overhaul).
The same is true in Africa. Academics at Ethiopia's Addis Ababa University have complained that the government's drive to train 5,000 new PhDs in a decade, and the breakneck speed at which new universities are being built, are diluting excellence.
And in South Africa, where universities are being forced to accept more students from poorer backgrounds, lecturers complain of falling standards.
African governments setting ambitious targets must build into their funding systems ways to identify, promote and support high-quality research.
But they must first understand how research quality is measured.
In developed countries, such quality is traditionally judged by whether a research paper is published in a reputable journal and whether it becomes widely cited.
By those standards, African science fares badly. The continent's share of articles published in internationally indexed journals fell between 1980 and 2004 from 1.4 per cent to 1.2 per cent.
But such figures exclude a lot of African research. A 2007 study by Dutch science analyst Robert Tijssen, of Leiden University, reported that more than half of African research is published in local journals that are not listed in international indices.
Many African journals are not indexed because of the continent's reputation for poor quality control, says Zimbabwean science metrics researcher Taurai Imbayarwo. "Anything from Africa has always been looked down upon," he says.
Some still shine
But Imbayarwo argues that not all African journals deserve such treatment. And African researchers may publish locally because of time pressures or the nature of their subject matter, not because their work is too poor to get into top journals, he says.
Although many journals publish erratically and some have poor peer review or editorial independence, there are those that use good practice. A 2006 report by the Academy of Science of South Africa found that more than 200 journals in South Africa alone publish regularly, have functioning editorial boards and use independent peer review to assess submitted papers.
Meanwhile, Tijssen has analysed 2005–2008 citation rates for this column. He finds that some of Africa's biggest research spenders — Egypt, Nigeria and South Africa — scored below the world average.
He also shows that six African countries achieved higher than world-average citation rates in the four-year period — Botswana, the Gambia, Kenya, Malawi, Mozambique, and Uganda. Only two countries managed the same feat between 2001 and 2004.
The list of countries achieving high impact rates says much about how the traditional definition of quality research fails Africa. For a start, there is a bias against countries that do not publish research in English.
Arab nations also fare badly. Algeria, Libya, Morocco and Tunisia are all far below the world average, but few would dispute that Algeria has a stronger research base than the Gambia.
Must quality be useful?
One alternative interpretation of quality is the extent to which the research is fit for purpose — i.e. whether it serves a local social or economic need.
For example, research carried out using the Large Hadron Collider — an astronomically expensive European particle accelerator — may be widely cited, but is unlikely to be important for reducing poverty in Ouagadougou.
On such a measure, African science does better. Indeed, local journals' regional focus can offer policymakers a much richer source of relevant science and case studies than the international core literature.
But relevance is not the same as scientific quality. Judging quality simply on the potential value of the research outcomes does not guarantee that the methods used are scientifically robust or that the findings are accurate.
Perhaps the most incontestable measure of research quality is how academically rigorous it is.
Romain Murenzi, former science minister of Rwanda and visiting professor at the University of Maryland in the United States, agrees that academic strength — guaranteed through peer review — is the key.
This definition is easier to work with than one based on how research might be used. And as well as research on local problems, it encompasses basic research that expands knowledge for the joy of discovery.
This is the kind of research quality African governments must encourage, by improving local journals and rewarding good research practice in universities and institutes.
That does not mean downplaying research that is relevant to local problems — but relevance and quality should not be confused.
Of course, monitoring such quality requires extensive groundwork. There are initiatives that will help, such as Imbayarwo's African Science Trackers, a South Africa-based company that is compiling an index of quality-assured African journals, or African Journals Online, which offers online access to articles published in hundreds of African peer-reviewed journals.
There is no doubt that introducing stringent quality control in African science will be costly. But the alternative, that African research remains second class with limited potential for social and economic impact, would be a higher price to pay.
Journalist Linda Nordling, based in Cape Town, South Africa, specialises in African science policy, education and development. She was the founding editor of Research Africa and writes for SciDev.Net, the Guardian, Nature and others.