
Over the past decades, various quantitative measures have been developed to evaluate the productivity of individual scientists or scientific groups, most based on the extent to which their published papers are cited by other researchers. One of the most widely used of these is the so-called ‘impact factor’ – essentially a measure of scientific impact that is based on citation rates, but also takes into account the significance of the journals in which results have been published.


The use of such quantitative measurements has been widely criticised in the scientific community. Many point out that they fail to take into account factors other than scientific publications that should be used to evaluate the work of an individual scientist. Others resent the extent to which impact factors and similar measures, despite their weaknesses, have come to play a dominant role in allocating research funds.


In this letter to Nature, Adam Łomnicki from Jagiellonian University in Kraków, Poland, admits that the system is “wrong and unjust”. But he argues that, just as with the market economy, the alternatives are worse. Furthermore, he suggests that the use of impact factors by developing countries is essential if they are to build an effective scientific community. Abandoning such “objective” measures of science evaluation, he argues, “would remove a tool for rewarding researchers who attempt to do good science and for eliminating those who do not”.


Link to Łomnicki’s letter in Nature

Reference: Nature 424, 487 (2003)