Bold ideas and critical thoughts on science.

Ellen Hazelkorn

The dubious practice of university rankings

Ellen Hazelkorn takes a look at the accuracy of university rankings from an international perspective.
13 March 2019

Global rankings have decisively shifted the nature of the conversation around higher education to emphasise universities’ performance in knowledge economies. How did that happen and why are rankings becoming increasingly important?

The emergence of global rankings coincided with the acceleration of globalisation at the turn of the millennium. This is no accident: higher education is a global game. Its success (or failure) is integral to, and a powerful indicator of, the knowledge-producing and talent-attracting capacity of nations. But the landscape in which higher education operates today has become extremely complex; there are many more demands and many more constituencies which have an impact on, and a voice in shaping, higher education’s role and purpose. While rankings have been around since the early 1900s, global rankings represented a significant transformation in their range and influence.

The arrival of the Shanghai Academic Ranking of World Universities (ARWU) in 2003 set off an immediate chain reaction that speaks to pent-up demand in the political system, and arguably more widely, for more meaningful and internationally comparative measures of quality and performance. In other words, once the number of people participating in and served by higher education expands so as to comprise and affect the whole of society rather than a narrow elite, matters of higher education governance and management, performance and productivity necessarily come to the fore.

Over recent years, rankings have become a significant actor and influencer on the higher education landscape and in society more broadly, used around the world by policymakers and decision-makers at government and higher education institution (HEI) level, as well as by students and parents, investors, local communities, the media, and others. There are over 18,000 university-level institutions worldwide; those ranked within the top 500 are therefore within the top 3% worldwide. Yet, by a perverse logic, rankings have generated a perception amongst the public, policymakers and stakeholders that only those within the top 20, 50 or 100 are worthy of being called excellent. Rankings are seen to be driving a resource-intensive competition worldwide. But it is also apparent that putting the spotlight on the top universities, usually referred to as world-class, has limited benefit for society overall or for all students. Despite much criticism of their methodology, data sources and the very concept of rankings, their significance and influence extends far beyond higher education – and is linked to what rankings tell us (or are perceived to tell us) about national and institutional competitiveness and the changing geopolitical and knowledge landscape. Their persistence is tied to their celebration of “elites” in an environment where the pursuit of mobile talent, finance and business is a critical success factor for individuals and societies.

How accurate are university rankings, in your opinion? In what sense do indicators vary among different university rankings?

Rankings’ popularity is largely related to their simplicity – but this is also the main source of criticism. The choice of indicators and weightings reflects the priorities or value judgements of the producers. There is no such thing as an objective ranking, nor any compelling reason why indicators should be weighted (or given the particular weights they are given) or aggregated at all. Although rankings purport to measure higher education quality, they focus on a limited set of attributes for which (internationally) comparable data is available. They are handicapped especially by the absence of internationally comparable data for teaching and learning, student and societal engagement, the third mission, etc. This means that most global rankings focus unduly on research and reputation.
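To make the weighting point concrete, here is a minimal, purely illustrative Python sketch. The institutions, indicator names, scores and weights are invented for demonstration and are not drawn from any real ranking; it simply shows how the same two institutions can swap places depending on how the producer chooses to weight the indicators.

    # Purely hypothetical illustration: a composite ranking score is a weighted
    # sum of normalised indicator scores, so the rank order depends on the
    # weights the producer chooses. All names and numbers are invented.
    indicators = {
        "University A": {"research": 0.9, "reputation": 0.6, "staff_student_ratio": 0.5},
        "University B": {"research": 0.6, "reputation": 0.9, "staff_student_ratio": 0.8},
    }

    def composite_score(scores, weights):
        # Weighted sum of indicator scores already normalised to a 0-1 scale.
        return sum(weights[k] * scores[k] for k in weights)

    research_heavy = {"research": 0.6, "reputation": 0.3, "staff_student_ratio": 0.1}
    teaching_heavy = {"research": 0.2, "reputation": 0.3, "staff_student_ratio": 0.5}

    for label, weights in [("research-heavy", research_heavy), ("teaching-heavy", teaching_heavy)]:
        ranked = sorted(indicators, key=lambda u: composite_score(indicators[u], weights), reverse=True)
        print(label, "->", ranked)

    # research-heavy -> ['University A', 'University B']
    # teaching-heavy -> ['University B', 'University A']

Neither ordering is more “correct” than the other; the weights, not the underlying data, determine which institution comes out “best”.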

Many of the issues being measured are important for institutional strategic planning and public policy but annual comparisons are misguided because institutions do not and cannot change significantly from year to year. In addition, many of the indicators or their proxies have at best an indirect relationship to faculty or educational quality and could actually be counter-productive. Their influence derives from the appearance of scientific objectivity.

Rankings are both consistent and inconsistent. Despite a common nomenclature, they differ considerably from each other. “Which university is best?” is answered differently depending upon which ranking is asking the question; there are methodological differences in how similar aspects are counted (e.g. definitions of staff and students, or of international staff and students). And because much of the data relies on institutional submissions, there is room for error and “malpractice”. This presents a problem for users who perceive the final results as directly comparable. Nonetheless, the same institutions tend to appear at or near the top; differences are greatest beyond the top 100 or so places.

Over the years, ranking organisations have become very adept at expanding what has become a profitable business model. Recent years have seen an increasingly close corporate relationship, including mergers, between publishing, data/data analytics and rankings companies.

There is a proliferation of different types of rankings, for different types of institutions, world regions, and aspects of higher education. Most recently, global rankings have sought to focus on teaching, engagement and the UN’s SDGs. Despite these “improvements”, the same problems which afflict earlier formats and methodologies remain. Many of the indicators focus on inputs which are strongly correlated with wealth (e.g. institutional age, tuition fees or endowments/philanthropy) as a proxy for educational quality. Both QS and THE use reputational surveys as a means of assessing how an institution is valued by its faculty peers and key stakeholders. This methodology is widely criticised as overly subjective, self-referential and self-perpetuating in circumstances where a respondent’s knowledge is limited to that which they already know, and reputation is conflated with quality or institutional age.

What is the difference between global and national rankings? What are the strengths and weaknesses of these two different assessments?

Global rankings rely on internationally comparable information. However, at the global level there are greater differences in context, data accuracy and reliability, and data definitions. The ranking organisations do not audit the data, and even if they did, context would remain important.

Despite claims to “compare the world’s top universities” (Quacquarelli Symonds World University Rankings, 2019) or “provide the definitive list of the world’s best universities evaluated across teaching, research, international outlook, reputation and more” (Times Higher Education, 2018), in truth, global rankings measure a very small sub-set of the total 18,000 higher education institutions worldwide.

National rankings, in contrast, have access to a wider array of data, as well as having the capacity to quality-control it – albeit problems still persist. Today, there are national rankings in more than 70 countries; they can be commercial or government rankings. They may be a manifestation of other evaluation processes, such as research assessment, whereby the results are put into a rankings format either by media organisations or by the government itself. National rankings are used by students and parents, but also by governments to drive policy outcomes such as classification, quality assurance, improving research and driving performance more broadly.

It is claimed that universities learn how to apply resources in exactly the right place and submit data in exactly the right way, and are thus able to manipulate rankings. A prominent example was the Trinity College Dublin data scandal, where researchers were allegedly instructed in how to answer ranking questionnaires. How easy is it to manipulate such rankings?

Higher education leaders and policymakers alike claim that rankings are not a significant factor in decision-making, yet few are unaware of the rank of their own university or that of national and international peers. The increasing hype surrounding the now multi-annual publication of rankings is treated with a mixture of growing alarm, scepticism and, in an increasing number of instances, meaningful engagement with ranking organisations around the process of collecting the necessary data and responding to the results.

This reaction is widespread. There is a strong belief that rankings help maintain and build institutional position and reputation, that good students use rankings to shortlist university choices, especially at the postgraduate level, and that governments and stakeholders use rankings to influence their own decisions about funding, sponsorship and employee recruitment.

As such, rankings often form an explicit institutional goal, are incorporated implicitly into strategic objectives, are used to set actual targets, or are used as a measure of achievement or success. In a survey conducted by this author, more than half of international HE leaders said they had taken strategic, organisational, managerial or faculty action in response to rankings; only 8% had taken no action. Many universities maintain vigilance about their peers’ performance, nationally and internationally.

In global rankings, the most common approach is to increase the selectivity index – that is, the proportion of high-performing students admitted. It is no coincidence that many of the indicators of quality are associated with student outcomes such as employment, salary, etc., which are in turn strongly correlated with the entry characteristics of the students admitted. The effect is to reinforce elite student entry.

The practice of managing student recruitment has received considerable attention in the USA but is not confined to that country. Similar patterns are found elsewhere, even where equity and open recruitment are the norm. The practice of urging “supporters” to respond positively to reputational surveys is also used by many universities.

Given the significance and potential impact of rankings, and the fact that high-achieving students – especially international students – are heavily influenced by them, these responses are both understandable and rational. However, the really big changes in ranking performance derive from the methodological changes that the ranking organisations themselves make. Some of these changes are introduced to improve the rankings. However, one cannot dismiss the more cynical view that many of the changes are aimed at generating publicity, and consultancy business, for the ranking organisations themselves!

What steps are necessary in order to improve global university rankings?

Rankings are not an appropriate method for assessing or comparing quality, nor a sound basis for strategic decisions by countries or universities. If a country or institution wishes to improve performance, there are alternative methodologies and processes. Beware of the unintended consequences of simplistic approaches.

These are some suggested don’ts and do’s:

Don’t:

  • Change your institution’s mission to conform with rankings;
  • Use rankings as the only/primary set of indicators to frame goals or assess performance;
  • Use rankings to inform policy or resource allocation decisions;
  • Manipulate public information and data in order to rise in the rankings.

Do:

  • Ensure your university has an appropriate/realistic strategy and performance framework;
  • Use rankings only as part of an overall quality assurance, assessment or benchmarking system;
  • Be accountable and provide good quality public information about learning outcomes, impact and benefit to students and society;
  • Engage in an information campaign to broaden media and public understanding of the limitations of rankings.

Author info

Ellen Hazelkorn is a partner in BH Associates, an international consultancy firm focusing on education, and Emeritus Professor at Technological University Dublin (Ireland). She is the author of Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence (2nd ed., 2015) and editor of Global Rankings and the Geopolitics of Higher Education (2017). She was a Board Member and Policy Adviser to the Higher Education Authority (HEA), 2011-2017, and President of EAIR (European Higher Education Society), 2013-2016. She is Joint Editor of Policy Reviews in Higher Education and a 2018-19 NAFSA Senior Fellow. She was awarded the Tony Adams Award for Excellence in Research, 2018, by the European Association for International Education (EAIE) for her research on global rankings.

Digital Object Identifier (DOI)

https://doi.org/10.5281/zenodo.2592196

Cite as

Hazelkorn, E. (2019). University Rankings: there is room for error and “malpractice”. Elephant in the Lab. DOI: 10.5281/zenodo.2592196

References

Fyfe, A., et al. (2017). Untangling Academic Publishing: a history of the relationship between commercial interests, academic prestige and the circulation of research. LINK (accessed 13 March).

Times Higher Education (2018). LINK (accessed 13 March).

Quacquarelli Symonds World University Rankings (2019). LINK (accessed 13 March).

4 Comments

  1. Great article and sound advice.
    To me, the present ecosystem for ranking of universities, and Business schools is so pervasive and entrenched that it requires more than one approach to improve the situation.

    I am very new to this field, but as a senior advisor in strategy, and these days a PhD student, I have published an outline – https://www.johanschlasberg.com/en/group-ranking-of-business-schools/ – for a new ranking concept. I refer to it as Group rankings with the goal of reducing the yearly media attention. Universities and Business schools will have to think hard if they want to continue other’s commercial models or invent and market something better.

  2. A nice and useful article for those who want to pursue higher education. Nowadays the education system has already become a business. Most high-ranking schools charge tens of thousands in tuition fees. As the article suggests, the ranking system should be used for reference only.

  3. This is an excellent article. It is a good reminder to realize that there are over 18,000 global higher ed institutions, and fewer than 2,000 are generally ranked.

    The Don’ts & Do’s list is helpful. I want to add something to the Do list. Make sure that your data is correct in the various databases on which the Rankings Organizations rely. You can read more about that in this guide: https://www.elsevier.com/research-intelligence/university-rankings-guide.

  4. This is an excellent analysis. Sadly, the “don’t” list is being widely violated within many universities in Australia. There is an unquestioning mantra about improving performance in global league tables, research assessment exercises and publication in highly-rated journals, to name a few. KPIs are commonly cascaded down to the individual academic level based on achieving outcomes focused on publication in specific journals based purely on their rating, and targets set for disciplines to achieve certain scores on the government research assessment exercise. Lofty university goals about mission and purpose are decoupled from the reality of what is really focused on by university leaders. For example, a university strategic goal to “conduct applied research that helps solve society’s wicked problems” is operationalised at the discipline/individual level to be: “publish in highly ranked journals that are rated A or A*.” This KPI seems to have little to do with achieving the original goal.
