Use of indicators and rankings in the evaluation of academic research
July 10, 2024

Stop using rankings to evaluate research, study recommends

A review of the literature on the use of university rankings in research evaluation and excellence initiatives points to an agreement that rankings should not be used in research assessment or to guide policy initiatives.

However, converting this academic consensus into a move away from rankings involves a cultural shift and is therefore likely to be challenging.

The study titled “University rankings in the context of research evaluation: A state-of-the-art review”, published on 7 June in the open-access SocArXiv archive, involved a systematic review of both academic and grey literature (material produced outside of traditional academic publishing).

The review was based largely on English-language sources indexed in the Web of Science and a review of academic literature in Russian.

According to its author, Dmitry Kochetkov, associate professor at RUDN University in Russia and PhD candidate at Leiden University in the Netherlands, the focus on Russia was intentional because university rankings played a “key role” in the implementation of the country’s Project 5–100 flagship excellence initiative (2013-2020).

Rankings most frequently mentioned in the study include ARWU, QS World University Rankings, Times Higher Education (THE) World University Rankings, Leiden Ranking and U-Multirank.

Kochetkov told University World News the paper concerned “rankings that use a composite indicator, also known as league tables.

“At the same time, rankings without composite indicators, such as the Leiden Ranking or U-Multirank, can be used for monitoring and benchmarking (but not for assessment)”.

Academic consensus

The study finds that most academic articles on university rankings and their use in research evaluation are critical of rankings, highlighting five primary areas of criticism.

They include: technical errors in methodology and a lack of transparency and reproducibility of rankings; incomplete coverage of university performance indicators and a shift towards research; territorial, linguistic and geographic biases; risks of losing the national and organisational identity of higher education; and conflicts of interest.

As an example of conflict of interest, the study notes that “in the case of most global university rankings … the league tables are compiled by profit-seeking organisations that generate revenue by selling to universities additional data, services (consulting), and other types of subscription based content”.

It said a study analysing the ranking positions of 28 Russian universities that contracted Quacquarelli Symonds for consulting services in 2016-2021 showed an “anomalous increase” in ranking positions, unsupported by changes in the characteristics of the universities in national statistics.

“Thus, it can be assumed that QS acts as a bias for itself,” the study notes.

Although in the minority, a few articles either favoured the use of rankings in research evaluation or were neutral. One was a study that considered the ARWU ranking to be a transparent tool for assessing research performance since the ranking was based solely on objective data.

Another study considered the choice of ARWU methodology and indicators to be optimal given the regressive nature of research performance measures.

Withdrawal from rankings

In a section discussing the complexity of change, Kochetkov writes: “Rankings are uncritically taken by too many as an indicator of, if not quality, then reputation. Therefore, the refusal of a university to be present in the rankings carries serious reputational risks.”

Referencing the past withdrawal of some prestigious US colleges from the US News & World Report, the study notes: “This, of course, is not the end of the ‘ranking power’ yet, but such steps, coupled with the ongoing criticism from the academic community, are already forcing ranking compilers to make climb downs in terms of improving the methodology.”

Kochetkov argues that the “marketing function of rankings makes them hard to refuse. In addition, most of the design and interpretation problems are associated with the use of composite indicators, which combine various aspects of a university’s performance in an arbitrary way. Rankings free from this issue can be used for benchmarking purposes (for example, Leiden Ranking and U-Multirank)”.
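To make that criticism concrete, the following is a minimal sketch of how a composite indicator typically works: a weighted sum of normalised sub-scores, where a different but equally defensible choice of weights can reverse the resulting league table. The indicator names, scores and weights below are hypothetical and for illustration only; they are not taken from the study or from any actual ranking’s methodology.

```python
# Illustrative sketch of a composite ranking score: a weighted sum of
# sub-indicator scores. All names, numbers and weights are hypothetical.

universities = {
    "University A": {"research": 90, "teaching": 60, "international": 40},
    "University B": {"research": 70, "teaching": 85, "international": 75},
}

def composite_score(indicators, weights):
    """Weighted sum of sub-indicator scores (each assumed on a 0-100 scale)."""
    return sum(indicators[name] * w for name, w in weights.items())

# Two equally defensible weightings produce different league tables.
weights_research_heavy = {"research": 0.6, "teaching": 0.3, "international": 0.1}
weights_balanced = {"research": 0.34, "teaching": 0.33, "international": 0.33}

for label, weights in [("research-heavy", weights_research_heavy),
                       ("balanced", weights_balanced)]:
    ranked = sorted(universities,
                    key=lambda u: composite_score(universities[u], weights),
                    reverse=True)
    print(label, ranked)
```

Under the first weighting University A comes out on top; under the second, University B does. This is the arbitrariness the paper objects to in league tables, and it is avoided by tools such as the Leiden Ranking or U-Multirank, which report indicators separately rather than aggregating them into a single score.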

The paper suggests that a discussion of rankings and their use, especially in the context of research evaluation and policy initiatives, should involve not only ranking compilers and universities, but also the general academic public and governments. However, as Kochetkov notes, the issue of rejection is “complex”.

“The rejection of rankings goes beyond legal or political aspects; it is a cultural change. Cultural change refers to the transformation and evolution of beliefs, values, customs, and practices within a society or a particular group over time,” the study notes.

Based on this assessment, Kochetkov argues that the process of rejecting rankings must be “evolutionary, not revolutionary”.

In an email interview, Kochetkov told University World News: “University missions are unique, and we should assess universities based on them,” urging readers to explore More Than Our Rank, a global initiative developed in response to some of the “problematic features and effects” of global university rankings.

He added: “We do need a framework for collective action on all the levels: individual, institutional, national, and global.

“At the moment, we observe such developments in Europe [with the] CoARA Agreement (the Coalition for Advancing Research Assessment) … I hope this movement will spread to other parts of the world very soon.”

The study explicitly calls for collective action “at all levels”.

“If we all stop perceiving rankings in a certain way, they will no longer determine the evaluation of research,” it states, before making the following recommendations:

• Stop evaluating academics based on university ranking indicators and start rewarding the contributions of faculty and researchers in all areas of university activity.

• Stop constructing university strategies based on university rankings. Do not use ranking tiers in analytical reports for management decision making; instead, focus on the actual contributions made by a university (scientific, educational, and societal).

• Stop evaluating universities based on ranking indicators. Every university has a unique mission, and only fulfilment of this mission really matters.

• Stop using ranking information in national strategies and other kinds of ambitions. Only universities’ contributions to national and global goals should be considered.

• Stop paying ranking compilers for consulting services. This is a pure conflict of interest.

Rethinking performance evaluations

Kochetkov’s thinking is shared by a range of rankings and higher education experts.

Professor Ellen Hazelkorn, joint managing partner at BH Associates education consultants, told University World News that rankings had “no place in research evaluation” and indicated there is “a large move away” from using rankings for this purpose.

“They [rankings] have many known methodological flaws, including reducing evaluation to the dominance of a narrow set of quantitative journal- and publication-based metrics,” said Hazelkorn, who is also a professor emeritus at Technological University Dublin in Ireland.

“This leads to many inappropriate consequences and uses, including as a proxy for quality, undermining the breadth and diversity of universities and research, criteria for recruitment/promotion or visa applications, etcetera.”

In addition to CoARA, Hazelkorn referred to the San Francisco Declaration on Research Assessment and the Leiden Manifesto for Research Metrics as examples of alternative approaches.

“Many universities are moving away from rankings. These efforts should be supported,” she said.

Andy Pacino, academic director at education consultancy ELT Central, told University World News he believed that university rankings do not always correctly reflect the standards, strength, research or even teaching capabilities of a university – “and there are a number of factors that make me feel so”, he said, going on to list institutional bias, research funding and the pressure to publish.

“A report that is accepted merely on the status of the researcher’s university does not qualify it in any way whatsoever,” Pacino said.

“Having said that, medical journals such as The Lancet or The BMJ are both widely recognised as leaders in their field, and so there are exceptions to my own rule.

“As far as the recommendations of the study go, I believe that we should reject rankings when evaluating research: perhaps there should be a heavier focus on the researcher’s subject authority.

“If it is the case, as it seems to be, that the higher ranked the institution, the better received the research is, then policy makers ought to have a rethink on performance evaluations,” he said.

Dr Stephen Wilkinson, director of research at the University of Wollongong in the UAE, told University World News he also agreed with aspects of the study.

“There are many instances where specific areas of research focus at low-ranked universities are found to be world leading, and even in the highest-ranking universities there are subjects where research is underdeveloped,” he said.

“Institutional rankings roll together many measurements across a university, and these are very separate from the excellence, or lack thereof, of individual researchers.

“Let’s say, for example, that a professor moves from a high-ranked to a low-ranked university (perhaps for family reasons). Does this change the quality of research that this professor is capable of? Of course not,” he added.

“In my opinion, if a researcher is pushing forward the boundaries of understanding of their respective fields, then their physical location, or the name of their employer, contributes very little to their individual achievements.

“This then means that researchers should be assessed as individuals based on their own impact and contributions to society, not solely based on a measurement-ranking of their respective institutions,” explained Wilkinson.

In search of balance

Professor Atta-ur-Rahman, a UNESCO Science Prize laureate and former coordinator general of the Standing Committee on Scientific and Technological Cooperation of the 57-country Organisation of Islamic Cooperation, had a slightly different take, describing Kochetkov’s study as “unbalanced” and saying it did not take into consideration the advantages associated with rankings.

“They [rankings] provide prospective students with a comprehensive evaluation of universities worldwide and offer insights into the quality of education, faculty, research output, and overall reputation.

“This helps students make informed decisions about where to apply, ensuring they select institutions that align with their academic and career aspirations,” he explained.

He said university ranking systems enable institutions to benchmark their performance against peers both locally and globally, thereby “creating healthy competition, fostering a culture of continuous improvement and strategic planning”.

He also pointed to the fact that rankings can enhance a university’s reputation, leading to increased funding opportunities and collaborations.

However, he conceded that rankings “do have some disadvantages and limitations”.

“One significant disadvantage is their heavy emphasis on research output and quality. This focus can overshadow other essential aspects of education, such as teaching quality, student satisfaction, and holistic development,” he said.

He also pointed to the fact that ranking methodologies often rely on a limited set of criteria and data sources, which can “result in an incomplete and sometimes biased evaluation”.

Not intended as a solution

University World News reached out to ARWU, QS World University Rankings, Times Higher Education World University Rankings and U-Multirank for their views, but only QS responded.

Ben Sowter, senior vice-president of QS, told University World News: “Few, if any, university rankings were designed as a research assessment tool, and QS would agree that they are far from a complete solution to that problem.

“Indeed, in our experience they are rarely used in isolation in that context. That institutions and nations should concern themselves with competitiveness and seek out multiple data sources to evaluate it, seems uncontroversial.

“It is then their duty to understand the value, flaws, and constraints of the data that they ingest. Responsible leaders can extract the value from the data they have, without losing sight of factors that are less easily measured, and without compromising their distinctive identity or the pursuit of their mission.”
