The problem with university rankings
Our obsession with top-rated universities is denying us a 'world-class' global higher education system, says Ellen Hazelkorn.
Few people in higher education (HE) are unaware of university rankings. They measure a university's ability to attract talent and produce new knowledge — usually using the number of publications or citations to determine research quality.
The US News and World Report (USNWR) began providing information about US universities in 1983. Since then, national rankings have been created in over 40 countries. Global rankings may be more recent but they have become more influential; the Shanghai Jiao Tong Academic Ranking of World Universities (SJT) began in 2003, followed by Webometrics and Times Higher Education QS World University Ranking (THE–QS) in 2004, the Taiwan Performance Ranking of Scientific Papers for Research Universities in 2007, and USNWR's World's Best Colleges and Universities in 2008 (which uses Times QS data). The European Union has announced a 'new multi-dimensional university ranking system with global outreach' to be piloted in 2010.
Race for the top
Although there are over 17,000 HE institutions worldwide, there is a near-obsession with the status of the top 100 universities. None of these are in Africa or South America.
Rankings were initially aimed at undergraduate students and their parents. Indeed, international research shows that high-achieving students believe a high university rank carries special benefits, positively affecting their career opportunities and quality of life. Universities that do well in the rankings often see rising student applications, while those that slip can suffer a decline.
Yet rankings today influence the opinions and decisions of a wide range of stakeholders. And universities themselves use rankings in many ways, some positive and some perverse.
Who uses rankings?
Rankings affect universities' decisions about their international partnerships. Such partnerships have become strategically important for research, academic programmes, and student/faculty exchanges. According to an international survey, 57 per cent of respondents said their institution's ranking was influencing whether researchers in other HE institutions partnered with them, and 34 per cent felt rankings were influencing whether academic or professional organisations would accept their membership.
Universities are also using rankings internally to inform decisions about which institutions to partner with. For example, Ian Gow, former provost of The University of Nottingham, Ningbo, China, has suggested that government authorities are urging local institutions to limit partnerships to the top 20 foreign institutions. Academics elsewhere have also confirmed they are unlikely to consider research partnerships with a lower-ranked university unless the person or team is exceptional. This could pose significant disadvantages to new HE institutions, or to institutions in developing countries.
Donors also refer to rankings when considering which university offers the best brand image and return on investment. Deutsche Telekom admits it used rankings to inform its decisions about professorial chairs, while Boeing said it will be using performance data to determine "which colleges...share in the US$100 million that [it] spends...on course work and supplemental training."
Universities are setting priorities and allocating resources to academic disciplines and research fields which can help improve their rank. Many governments use rankings when deciding resource allocation and institutional accreditation.
Rankings can also affect students seeking government sponsorship to study abroad — in Mongolia and Qatar, scholarships are restricted to students admitted to highly ranked international universities.
And they can decide whether governments recognise foreign qualifications — Macedonia automatically recognises qualifications from the top 500 universities listed in the THE–QS, SJT or USNWR.
Employers, too, often use rankings as a proxy for likely graduate success, making them less inclined to recruit graduates from universities that are not well placed.
Because of these effects, not being ranked can mean a university becomes invisible to international PhD students, 'world-class' researchers, academic partners, philanthropists and donors.
Citation-based rankings best capture the biosciences, leaving the arts, humanities and social sciences vulnerable. Professional disciplines such as engineering, business and education, which do not have a strong tradition of peer-reviewed publication, are also under pressure.
Rankings have placed a new premium on status and reputation, with a strong bias towards long-established and well-endowed institutions, usually with medical schools, in developed countries. This system makes it impossible for developing country universities to compete with the big players in the United States or Europe. The gap between elite and mass education and between universities in the developed and developing world is likely to widen.
One particular problem is that rankings perpetuate a single definition of quality at a time when HE institutions, and their missions, are diversifying. Because rankings focus primarily on research intensity, other dimensions, such as teaching and learning, community engagement, the 'third mission' and innovation, and social and economic impact, are ignored.
In addition, HE institutions are complex organisations with strengths and weaknesses across various departments and activities. Excellence can be defined differently depending upon the criteria, indicators and weightings used. By aggregating scores across the various indicators, rankings reduce the complexity of higher education to a single composite score, and exaggerate differences between institutions.
Despite these criticisms, governments in countries such as China, India, Japan and Korea are seeking to build their own world-class universities.
Of course, rankings can help to reform and modernise higher education, encouraging universities to professionalise services and management, and improve the quality of their programmes and facilities for students and faculty.
But rather than concentrating resources in a small number of elite universities, the aim should be a world-class HE system. Governments should aim to develop a diverse range of universities each with specialist world-class expertise, to attract high-achieving students and high-skilled labour. Building such a world-class HE system would enable countries to mobilise and leverage the potential of the whole system for the benefit of society at large.
Ellen Hazelkorn is Director of Research and Enterprise, and Dean of the Graduate Research School at the Dublin Institute of Technology, Ireland. She also leads the Higher Education Policy Research Unit.