South Africa's rating system for researchers belongs to the past, but its administrators are reluctant to change, says Michael Cherry.

In 1984, during the isolation of the apartheid era, South Africa introduced a rating system for university researchers. The system aimed to keep researchers from leaving the country by offering basic science funding and, until 1996, freeing scientists from the need to write grant proposals.

But over the past decade there has been a spate of highly critical reports on the system, and most researchers want change.

In 2005, a review panel comprising experts from the United States, New Zealand and South Africa — and chaired by the former president of the Academy of Sciences of South Africa, Wieland Gevers — reported widespread dissatisfaction with the system, and recommended it be reconsidered.

In November 2007, an exhaustive review of the rating system commissioned by Higher Education South Africa (HESA, the statutory body of the country's 23 universities) found "a growing scepticism and even disillusionment" with the system. The review comprised five reports by separate authors and a synthesis report. It is highly critical of the National Research Foundation (NRF), saying it had "neglected to engage with … concerns and criticisms … in most cases only responding to technical and procedural issues". Yet, ignoring the recommendations of its own review, HESA has recently endorsed the NRF's assertion that there is "no evidence" to suggest that the rating system "should be done away with".

A failing system

Barely 11 per cent of scholars in South Africa's higher education sector are rated and the ratings themselves are highly subjective, relying on terms like 'international recognition', 'leader in a field' and 'proven track record'.

Ratings purport to reflect a researcher's status within a particular research field, penalising multi-disciplinary work. Abuses are also common.

HESA's review cites a 1996 report that found researchers rated as 'established' were more productive than those termed 'leader in a field' or enjoying 'international recognition'. And young researchers 'showing potential' outperformed those with 'exceptional promise'. This was despite the latter receiving almost ten times more funding, on average, during the review period than the former.

The report recommended replacing the evaluation process with a simpler accreditation for proven or able researchers, and allocating funding based on the quality, feasibility and relevance of research proposals.

The NRF system is unusual in rating individual researchers. And uniquely, the NRF employs two different assessment systems: the standard project review process (comparable to most research councils worldwide) and the individual rating system (which is not). So researchers have to submit both research and rating proposals — and they are getting tired of it.

Another constituent report of the HESA review notes a steady and significant increase in the number of researchers letting their rating lapse. This is because although some researchers do get money from their institutions because of their rating, most receive little or no benefit from the system.

Time for change

The NRF has not yet responded officially to the HESA reports and recommendations. But it has announced, through its website, that it will finance a new category of funding for all rated researchers — in which individuals will receive funding commensurate with their ratings — as soon as it can afford to do so. The indications are that it decided to do this long before the HESA reports were submitted.

The NRF's denial of evidence suggesting the rating system should be scrapped may reflect its own vested interest in the system. More astonishing is that the HESA review committee, which is intended to represent the interests of the academic sector, has endorsed the system in defiance of the recommendations of its own review.

The committee, chaired by University of the Witwatersrand vice-chancellor Loyiso Nongxa, comprised mostly senior university administrators. Tellingly, one of the constituent reports in the HESA review remarks that "managers/administrators and researchers have different perceptions of the merit and impact of the rating system".

Barring a few outsized egos, researchers would welcome change: the resistance is coming from university administrators. Are they just too idle to assess staff performance properly? Is it that they dare not depart from criteria that, over the past quarter-century, have assumed a local canonical status? Or have they failed to escape the apartheid-era mentality of having to do things our own way on account of international isolation?

As long as their heads remain in the sand, research in South Africa will be the loser. Creative endeavour can only really flourish in a collegial environment — the antithesis of the current one, in which researchers are, in the words of Stellenbosch chemist Andrew Crouch, "graded like meat". It will be a great pity if HESA's endorsement of the rating system once again kills off the impetus to abandon it without delay.

M. I. Cherry is professor in the Department of Botany and Zoology at Stellenbosch University, South Africa.

See Letter to the editor.