A new international study of scientific publications shows that the gap in scientific output between developed and developing countries is even greater than the gap in inputs (i.e. research spending). The implications need careful consideration.

For many years, the standard way of describing the strength of a country's research efforts has been to calculate the amount of money that it spends on research and development (R&D), and then to express this figure as a proportion of its gross national product (GNP). Pioneered by bodies such as the Organisation for Economic Co-operation and Development (OECD), the resulting figure has acted as a key bearing by which developed and developing countries alike have sought to chart their position relative to other countries, and in some cases their steady climb up (or slide down) the scientific league table.
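Expressed as a formula, the calculation is straightforward (the worked figures below are purely illustrative, not drawn from any particular country's accounts):

$$\text{R\&D intensity} = \frac{\text{national expenditure on R\&D}}{\text{GNP}} \times 100$$

So a country spending US$200 million on research out of a GNP of US$100 billion would record an intensity of 0.2 per cent.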

Such figures can have an important symbolic value. Last year, for example, African science ministers attending a meeting organised by the New Partnership for Africa's Development (NEPAD) promised to seek to persuade their economic counterparts to increase each country's spending on R&D to 1.0 per cent of GNP (see African nations agree on science spending targets). This was put forward less as a realistic goal than as an aspiration intended to highlight the extra political commitment required to secure an effective scientific infrastructure (most advanced nations spend between 2.0 and 3.0 per cent of GNP on R&D; in Africa, the figure tends to be 0.2 to 0.3 per cent).

In recent years, however, it has become increasingly clear that measuring scientific strength in terms of spending alone is not only relatively crude, but also misleading. For merely adding up the amount of money allocated to research provides no indication of the effectiveness with which it is being spent. The focus has therefore shifted to looking at the results — or outputs — of scientific research. In particular, growing attention has been paid to the number of papers produced by scientists in internationally recognised journals (usually taken to be those whose contents are recorded by the Institute for Scientific Information (ISI) in Philadelphia, now known as Thomson ISI).

Sadly, the picture here for most developing countries is even worse than that given by crude spending figures alone. This was confirmed last week by a new study carried out by David King, the chief scientific adviser to the British government. It showed that the publication records of even the most scientifically active developing nations remain far behind those of their counterparts in the developed world (see China, Brazil and India lead southern science output). Overall, researchers in eight countries alone — headed by the United States, the United Kingdom, Germany and Japan — produce almost 85 per cent of the world's leading science; 163 countries, including most of the developing world, account for less than 2.5 per cent.

Limitations on citation statistics

There are, of course, serious limitations on the use of citations alone as a measure of scientific productivity, particularly when it comes to research carried out in the developing world. One is the fact that much of this research is aimed primarily at solving problems, rather than at increasing the global store of knowledge. And even if such activity does not lead to a scientific publication, that does not mean it has failed; rather, its key measure of success may be in terms of saving lives (if it is medically oriented) or increasing local food production (if it is in the agricultural sciences), neither of which is reflected in citation statistics.

Secondly, many scientists in developing countries complain — with some justification — that regardless of the quality of their research, other factors make it much more difficult for them than for their colleagues in the developed world to get papers accepted by the type of internationally recognised journals whose contents are measured by the ISI. These factors range from the institutions in which they work — various studies have shown that papers submitted under the names of recognised authors are more likely to be accepted for publication than those whose authors are unknown — to a relative lack of familiarity with English, the language that has become the convention for scientific publication.

This leads directly to the third complaint about conclusions based on citation rates, namely that there is an overall bias within ISI statistics towards journals that are published in English — and largely in the developed world. The organisation itself has done much to reduce this bias, and can justifiably point to the growing number of developing world publications included in its statistical coverage. Nevertheless, there remains a legitimate concern that even the work of developing country scientists that does get published receives insufficient recognition within the international scientific community if it appears in relatively small, local-language journals (a complaint frequently voiced, for example, in Latin America). As a result, such work is under-represented in the statistics that track the output of that community.

Focusing on effectiveness

For all of these reasons, one needs to be cautious not to read too much into absolute figures such as those produced by King. Having said that, the figures do reveal that there is little room for complacency; the fact that, for all its investment in research over the past decade, the scientific output of China remains comparable to that of a country the size of Belgium provides food for thought. Furthermore, there are important lessons to be learnt by developed and developing countries alike about the reasons for these relative differences in performance.

Perhaps the most revealing comparison in King's analysis is the strength of science in the United Kingdom relative to that of other European countries. Overall publication rates alone demonstrate this; during the period 1997-2001, for example, British scientists were responsible for 9.43 per cent of the world's output of scientific papers (as registered by the ISI), compared to 8.76 per cent for Germany and 6.39 per cent for France, even though expenditure on science in the latter two countries was significantly higher.

The difference is even more marked when looking at the quality of scientific publications, measured by relative contributions to the top one per cent of highly cited papers. Here Britain scores 12.78 per cent, compared to 10.4 per cent for Germany and 6.85 per cent for France (in comparison, China, India and Brazil, the leading developing countries, are far behind in terms of their contributions to the top papers, with contributions of 0.99, 0.54 and 0.5 per cent respectively, and that is even without taking the relative size of their populations into account).
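A rough calculation shows how much population matters here. Using approximate populations of around 60 million for the United Kingdom and 1,300 million for China (period figures introduced here for illustration, not taken from King's study):

$$\frac{12.78}{60} \approx 0.21 \qquad \text{versus} \qquad \frac{0.99}{1300} \approx 0.0008 \qquad \text{(share of top papers per million people)}$$

On this crude per-capita basis, the gap between Britain and China widens from a factor of about thirteen to a factor of more than 250.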

King attributes Britain's scientific strength to reforms of its higher education system during the 1980s. This was a time of significant pain for both UK universities and the scientists who worked in them, as cuts in public spending imposed by the Thatcher government led to job losses and departmental closures. But it also prompted a significant restructuring of the country's overall research efforts, reflected in a decision to reward the most productive university research departments with extra funding (and to remove funding from those that were failing to perform). The result, as King's analysis shows, epitomises the recipe of "short-term pain for long-term gain". Or, as King himself puts it, "although many UK scientists campaigned against these cuts, they encouraged a level of resourcefulness among researchers, and approaches to industry… that are now bearing fruit".

It is perhaps here, rather than in the absolute figures, that developing countries have most to learn from this study. Pumping money into science is not enough, as many such countries have discovered to their cost. Indeed, a single-minded pursuit of increased expenditure on research and development as a proportion of GNP is not the Holy Grail that many pretend (if it were, France and Germany would be way ahead of Britain in the research race).

What counts is the level of transparency and accountability with which the money is spent, and the measures introduced to ensure that it is used to promote and reward scientific creativity (even on relatively small projects), rather than institution-building and career politics. The more this lesson can be built into the science policies of the developing world, the more rapidly those countries are likely to bridge the 'output gap' that, at present, continues to fuel the knowledge divide between rich and poor nations.

Link to full article by David King in Nature