14/01/11

Science evaluation needs a rethink

Counting publications is one way to evaluate science.


Traditionally, science is evaluated according to researchers’ outputs — the number of articles published in science journals or patents registered, for example.

But the real value of science lies in how those products benefit society or the environment. The speed with which new knowledge reaches citizens also matters.

Evaluating the impact of science may sound simple, but it is not. As experience in Latin America has shown, even scientists do not agree on how to do it, and countries measure output in different ways.

What should be measured?

Colombia evaluates its science community using information about the new knowledge produced in a given year — the number of publications, books or software, for example. This web-based system was developed in Brazil.

Scientists in both countries agree that this approach provides useful insight into national trends, and also that it has weaknesses. Yet they have different ideas about how to improve it.

Brazilian researchers have suggested (see Brazil scientists criticise evaluation criteria, in Spanish) that publishing in national journals and presenting work at conferences should count for more than they do at present.

But in Colombia (see Colombia: controversy over evaluation of science groups, in Spanish), scientists feel an article published in a national journal should not carry as much weight as one published in a well-respected international publication such as Science or Nature.

Some researchers in Mexico have also objected to how their Sistema Nacional de Investigadores (National Research System) measures the scientific output of individuals. They say patents should be judged not only on the number registered but also on how they are used to boost the economy. That, they point out, would make Mexican scientists more focused on protecting the rights to the knowledge they produce.  

Adding innovation

The criteria used to evaluate science become even more problematic as countries in Latin America shift from seeing science and technology as an end in itself, to seeing it as the way to achieve the innovation necessary for a knowledge-based economy and society.

Innovation cannot be measured simply by counting articles in science journals. Nor is the number of patents registered the only indicator of innovation. And, even if it were, does a patent carry the same weight as a published article? Are they even comparable?

To take another example: if a research group seeking solutions to an industrial problem signs a million-dollar contract with a company, is that an indicator of success? And, if such a group comes up with an idea that leads to a profitable spin-off, how should its impact be measured?

As we enter what some are calling the century of innovation, our evaluation systems need to change. They need criteria that reflect the objectives of S&T and innovation: to generate products for national markets and economies, to create practical solutions to social challenges, and to add value to existing knowledge.

And they must provide incentives for a country’s own researchers to be innovative. In most developing countries, fewer patent applications come from national researchers than from international companies.

Rethinking evaluation

Evaluation systems for science and technology, and now for innovation, are themselves constantly being evaluated in Latin America.

One criticism of our current evaluation systems is that, because the criteria were largely designed by academics, they show little understanding of the productive sector and overlook the value of work produced by groups not dedicated to basic research. 

Evaluation criteria should be flexible enough to differ from one discipline to another — as already happens in Mexico.

And what about publications produced for citizens rather than the scientific community — for those who participated in research as patients or survey respondents, for example, or community members affected by a contaminated river? In general, such ‘social products’ are not valued highly, and scientists do not make much effort to produce them.

But the impact of a well-targeted communication strategy can be far greater than that of a publication in a journal.

An epidemiologist once pointed out to me that sharing results with the people affected by the research does more to save lives than accumulating international publications, however valuable those may be; researchers, therefore, should be evaluated according to the social impact of their work.

This view suits public health research perfectly, but it would not be adequate for other disciplines. It is time to think of different types of evaluation for different types of research. I would go so far as to say that these should be organised by results rather than by scientific field.

A good start would be to set up a state-of-the-art, interdisciplinary advisory group to stimulate a broad and innovative discussion.

Lisbeth Fog is a regional consultant for SciDev.Net, based in Colombia.