
Over recent decades, various quantitative measures have been developed to evaluate the productivity of individual scientists or scientific groups, most based on the extent to which their published papers are cited by other researchers. One of the most widely used of these is the so-called 'impact factor' — essentially a measure of scientific impact based on citation rates, which also takes into account the significance of the journals in which results have been published.

The use of such quantitative measures has been widely criticised in the scientific community. Many point out that they fail to take into account factors other than scientific publications that should be weighed when evaluating the work of an individual scientist. Others resent the extent to which impact factors and similar measures, despite their weaknesses, have come to play a dominant role in the allocation of research funds.

In this letter to Nature, Adam Łomnicki of the Jagiellonian University in Kraków, Poland, admits that the system is "wrong and unjust". But he argues that, like the market economy, it is still better than the alternatives. Furthermore, he suggests that the use of impact factors by developing countries is essential if they are to build effective scientific communities. Abandoning such "objective" measures of science evaluation, he argues, "would remove a tool for rewarding researchers who attempt to do good science and for eliminating those who do not".

Link to Łomnicki's letter in Nature

Reference: Nature 424, 487 (2003)