
Citation analysis can be a valuable tool for allocating scarce scientific resources. But the potential for error, bias and misuse means that it must be applied transparently and subjected to close scrutiny.

Achieving the optimal use of scarce resources has long been a fundamental human concern; indeed, it is the basis of one definition of the discipline of economics. But, at least until relatively recently, it is not one that has preoccupied the scientific community. In many countries, for example, spending on university research has increased steadily as a result of political pressure to broaden access to higher education, not of focussed pressure to increase scientific productivity. At the same time, additional funding for research has tended to be approved on a discipline-by-discipline basis, not as a lump sum to be divided up in the most cost-effective way.

In recent years, however, much has changed. Under pressure to reduce public expenditure — often as part of a commitment to neo-liberal economic policies — governments have become increasingly stringent in their demands that spending on science be focussed on the most productive scientists. And this has stimulated growing controversy over a pursuit that started as little more than an intellectual pastime: the identification of such individuals through citation analysis, the assessment of an individual's scientific prowess by measuring the extent to which his or her publications are cited by other researchers.

Today, citation analysis is widely used by research managers and others seeking to maximise the return on their investment in research. Properly used, it has undoubted value. But the scope for misuse is wide, ranging from the way that the crude application of citation rates can overlook talented individuals whose intellectual skills do not fit easily into the normative practices of science, to the fact that much current citation analysis lacks the rigour and accuracy that its neatly quantified conclusions suggest.

Two items posted on SciDev.Net reflect this ambivalence and its consequences. The first describes a current dispute over a decision by the government of Colombia to introduce a rigorous system for basing university salaries on academic productivity (see Colombian academics dispute new salary rules). Although it has attracted little public attention, the dispute has virtually brought the university system in that country to a halt. At the root of the protest is a distrust of the government's motives. Many university staff argue that the new strategy has as much to do with cutting public funding for universities as it does with increasing the efficiency with which those universities operate.

The second is a complaint from two scientists in China that, for various reasons — such as the fact that a single journal may be published under a variety of names — respected journals in developing countries may not be given sufficient recognition in calculating citation statistics. As a result, the scientists claim, those who publish in such journals rather than more widely known journals may, whatever the value of their work, appear to receive less international recognition (see Statistics hide impact of non-English journals).

In all such discussions, four factors should be borne in mind. Firstly, in too many countries, resources for science are distributed on a basis that takes insufficient account of scientific productivity and places excessive emphasis on inappropriate criteria (such as political connections), a situation regrettably as familiar in the United States as it is in developing countries. The further such countries move towards a merit-based system for allocating scientific resources (one that includes the use of citation analysis), the healthier their science is likely to be.

Secondly, all citation analyses should carry the warning 'handle with care'. Although they can provide a reasonable way of comparing relative performance within a particular scientific discipline, such analyses are often far less useful for making comparisons between disciplines, as the illustration below suggests. And the underrating of certain non-English journals described by the Chinese authors provides another warning.
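To see why cross-disciplinary comparisons can mislead, consider a minimal sketch in Python. The fields, citation figures and the simple field-normalisation used here are illustrative assumptions, not data drawn from the articles discussed:

    # Hypothetical figures: average citations per paper vary widely by field.
    field_average = {
        "molecular biology": 25.0,   # assumed field norm
        "mathematics": 4.0,          # assumed field norm
    }

    # (name, field, citations per paper) -- illustrative researchers only.
    researchers = [
        ("Biologist A", "molecular biology", 30.0),
        ("Mathematician B", "mathematics", 8.0),
    ]

    for name, field, rate in researchers:
        normalised = rate / field_average[field]   # compare against the field norm
        print(f"{name}: raw rate = {rate}, field-normalised = {normalised:.1f}")

On raw counts the biologist appears the stronger performer; once each rate is set against the citation norms of its own field, the mathematician is the clearer outlier. Comparing raw citation rates across disciplines simply rewards fields that cite more heavily.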

Thirdly, while the use of citation analysis to allocate resources does not in itself imply a desire to cut spending on science, the one can certainly disguise the other. Such was the case, for example, in Britain in the 1980s under Prime Minister Margaret Thatcher. The lesson here is the need for transparency in the way the procedures are applied — and vigilance on the part of those to whom they are being applied.

Finally, the more a single international measure is adopted as the norm for evaluating scientific productivity, the greater the need to ensure that measures such as citation rates are both linguistically and culturally neutral. Developing countries, where English is frequently not the standard language of scientific communication, suffer even more than developed countries from this bias, making this an issue in urgent need of attention.

© SciDev.Net 2002