[SAO PAULO] Quality, not quantity — that's the motto of a revolution set to take place in Brazil's postgraduate education system. By the end of the year, the education ministry will be putting the system that evaluates every postgraduate programme in the country through a radical reassessment process.

The move comes after a decade of calls for more efficiency in the postgraduate system — increasing the rate of PhD production, for instance. Those calls were certainly effective. In 2003, Brazil awarded about 8,000 doctorates, up from 5,335 in 2000. The time taken to complete a doctorate fell from 59 months to 51 months between 2000 and 2002.

But Renato Janine Ribeiro, the director of evaluation at CAPES — the body in the education ministry responsible for evaluating postgraduate programmes — is determined to fundamentally change the postgraduate system's priorities.

CAPES has described its evaluation system as "the most impressive in the Southern Hemisphere, better than many in European countries". Forty-six commissions and 550 consultants take part in the assessment marathon. As the consultants are all academics, the process amounts to a large-scale external peer review. The scores it produces serve as guidelines for students choosing a PhD programme, as well as criteria for funding distribution. 

Although the CAPES system is considered a model compared with its controversial counterparts monitoring undergraduate and high school programmes — widely criticised as too narrowly quantitative — Janine Ribeiro will launch his revolution later this year, when the triennial evaluation process begins again.

The results of the 2001–2003 evaluation were announced on 4 October. The CAPES team scrutinised 1,819 graduate programmes with 1,020 PhD and 1,726 masters courses. Of these, only 61 programmes (3.4 per cent) received the highest score — what CAPES considers 'international quality'.

Ninety per cent of the top-scoring graduate programmes were in south-eastern Brazil, with many of them being at just two universities — the University of São Paulo with seven, and the Federal University of Rio de Janeiro with eight.

At the other end of the scale, the low scores of 55 programmes mean they face exclusion from the accredited graduate system. They remain anonymous during the 60-day appeals period.

According to Janine Ribeiro, the first priority is to make evaluation more comprehensive. He says the current system is limited by its emphasis on numerically measurable criteria such as the numbers of students in the programme, and the time taken to complete courses.

Instead, Janine Ribeiro aims to counter charges of 'productivism' in the evaluation system by focusing on the quality of PhD programmes' output — not just the number of doctorates they issue.

He and his team began the task modestly by recommending that all the programmes create websites to promote "transparency and efficiency" in the system. These will showcase programmes' achievements and goals, and will include CAPES evaluations.

CAPES's own website currently provides only contact details for each programme's coordinator. Soon, prospective students should also be able to find links there for the PhD and masters programmes' homepages. Janine Ribeiro says this will enable both prospective students and the wider research community to check the quality of evaluation itself, as they will have access to the evaluators' judgements and reasons for the grades.

This aspect of the plan has already provoked outcry. The first reactions from academics in charge of postgraduate programmes were complaints about the lack of time to fulfil yet another task, and demands for financial support from CAPES for setting up the websites. No such funds will be forthcoming, warns Janine Ribeiro.

Other proposed changes could be even tougher to implement. The current evaluation forms — described by Janine Ribeiro as "unfriendly" — stress quantitative aspects such as the number of publications and conference presentations by staff and students, without taking into account the quality of academic output.

To remedy this, CAPES is considering asking each researcher to pinpoint the five most significant items from their entire list of publications and presentations. Evaluators could then spend longer judging content instead of checking the fine details of data provided.

"With fewer factors involved, it should be possible to give more attention to quality," says Janine Ribeiro.

The proposed 'top five' idea would also correct the system's bias towards scientists. As scientific researchers tend to publish articles in peer-reviewed journals rather than books, their output generally looks much larger than that of philosophers, historians and sociologists. Concentrating on important achievements in publishing rather than the sheer number of publications could give a less skewed picture.

The approach should also help all programmes get a fairer share of scarce federal funds, based on the value of their work rather than on output alone.

Janine Ribeiro does, however, say that academics in the humanities tend to be more resistant to a process in which excellence (or the lack of it) is rewarded (or punished) with resources. Humanities scholars have long resisted external evaluation systems in Brazil, seeing them as instruments for controlling academia and a threat to research autonomy.

In an article published in April, Janine Ribeiro argued that in recent years academic evaluation systems — both those of CAPES and of individual programmes — have been used to justify the federal government's neglect of higher education in Brazil, including the widespread cuts made in public universities to counter the country's chronic debt crisis.

Janine Ribeiro claims, however, that this is no reason to refuse all forms of external evaluation. Universities and research institutions funded with public monies ought to be accountable to society at large, he says, and this means they must demonstrate their contribution to the country's socio-economic development.

Another problem that the overhaul is addressing is the individual approaches to evaluation developed by the 46 CAPES committees overseeing different fields of study. They resist moves to make evaluation uniform, which would make comparisons between them easier, but would considerably reduce their influence over entire fields of knowledge.

Janine Ribeiro says his planned defence of quality is not intended simply to address the focus on quantity, but also to defuse this exertion of what he calls "academic power".

"Without academic quality, [this power] is nothing," he says.