International university rankings
December 10, 2018

December 6th, 2018 – Alex Usher

Cast your minds back, if you will, by about 15 years. Paul Martin had yet to show us why great finance ministers make lousy Prime Ministers. The ghastly Crocs fad was still three years away. And in China, Professor Nian Cai Liu had just released the inaugural Academic Ranking of World Universities, known more colloquially as the Shanghai Rankings.
While national rankings were old hat, the Shanghai Rankings’ global nature was something genuinely new. The sadly defunct magazine Asiaweek had tried a regional ranking in the late 1990s, but it failed because – like US News and Maclean’s – it relied on a methodology that required institutional participation, and once a few big institutions said no, the whole thing fell apart. But Professor Liu, who had developed the rankings to answer the question “how will we know if Chinese universities are world-class?”, chose a different methodology (Liu’s home university was one of the 30-plus institutions in the Chinese government’s Project 985, which aimed to develop “world-class universities”, so it was a pressing question at the time). Using only bibliometric measures and the presence of major awards (Nobels, Fields Medals, etc.), he sidestepped the institutional veto while still directly comparing institutions around the globe.
This idea was so novel it was immediately copied by QS and Times Higher Education (which at the time worked together on a single set of rankings), though the latter chose to ask institutions for data and ran their own survey to measure reputation. But the results of the two exercises were broadly similar: the US looked really good, most of continental Europe was a long way behind, and non-OECD countries were essentially invisible. These results created enormous political waves, leading a number of countries to launch government initiatives that ploughed significant new money into research. These initiatives were often executed in ways that allegedly increased institutional stratification within those countries.
What’s difficult to remember, 15 years out, is why on earth anyone took these things seriously (which, initially, most people outside the Americas did). There were very good reasons to reject them. Specifically: why are universities the right unit of analysis? Why not individual programs within an institution? Or, to go the other way, national systems of research? France and Germany have argued the latter point quite effectively, since the Shanghai approach eliminates from view the work of CNRS, Max Planck, etc. Or, alternatively, why would we accept an exercise that ranked universities solely on their research? (Yes, the QS/THE rankings had some other indicators in there, but they were highly correlated with research intensity, so it amounted to the same thing.)
The reason these objections were not made was that, at a gut level, a lot of people believed the rankings. These rankings had external validity of a sort: back in 2003-04, the American economy was considered the envy of the world. Dotcom bust? Major terrorist attacks and two wars in the Middle East? These had but little effect on the astonishing growth/prosperity engine that was the America of 2004. And so, people around the world looked at these rankings and said “hey… maybe their economy is great… because they have great universities?” As you can imagine, this was an argument that appealed to universities and perhaps dulled their critical faculties a bit. And in Europe, where the Lisbon agenda (“make Europe the World’s Most Innovative Economy by 2010”) had been agreed in 2000, spending money on universities seemed like a magic way of reaching that goal without doing all the tedious things needed to actually make their economies more innovative, like implementing thoroughgoing competition reform and (in some places anyway) shrinking the state from gargantuan to merely comprehensive. So there was a coalition of forces, particularly in Europe, which saw some benefit in the policy implications of the rankings, if not in the rankings themselves.
Now, what if Professor Liu had done this five years later? What if he had released his findings in the fall of 2008 instead of 2003? Those key pieces of external validation wouldn’t have been there. No one would have said to themselves “research universities are the cause of American prosperity”; they would have said “American prosperity is built on a whole bunch of bad cheques and the worst type of casino capitalism” (which is not true, but in 2008 it was hard to see beyond that). They might well have cast a more skeptical eye on rankings that placed so many American schools – even some mediocre ones – in the world top 500.
My point here isn’t that rankings are incorrect or that they have only pernicious effects. Yes, I wish those early rankings had been more precise and called themselves global research rankings rather than global university rankings – we’d have saved ourselves a lot of nonsense on all sides of the debate, frankly. And yes, rankings have had some pernicious effects, but they also sent a lot of extra money into basic research and in some countries they have measurably improved the quality of education (Russia springs to mind here).
My point, rather, is that the near-total acceptance of these rankings in the policy world and the deluge of national-level policy initiatives that followed were by no means a given. They were both a product of a particular moment in the global political economy, one which largely disappeared within a few years. At another moment they might easily have been ignored. On vagaries such as this, entire policy fields may pivot.

December 7th, 2018 – Alex Usher
I’ve been in Europe for most of the past two weeks on a number of rankings-related projects. And as a result of these travels, I’m more optimistic about international rankings than I have been for a long time. Here’s why.
First of all, we are getting a lot of new data at the international level. There are two primary sources for this. The first is the THE rankings – in particular their new European Teaching Rankings, which use surveys to look at student engagement and the student experience. This is excellent. It’s not the first time this has been attempted – U-Multirank has done this for a couple of years now – but THE taking up this kind of data brings the approach to some of the world’s largest universities (THE also did something similar in its US rankings). I think there’s some calibration required to do this properly at a global level – for instance, surveys critiquing teachers may be a different experience, culturally, for western students vs. ones in Confucian-influenced systems – but man, if THE ever starts attaching this kind of survey to its global rankings, things could get interesting. (THE is also exploring institutional indicators based on the UN Sustainable Development Goals, which I am more skeptical about, but their desire to explore and innovate is highly commendable.)
The second is U-Multirank, which has been slower to gain acceptance than its sponsors originally hoped but is now getting some reasonably high-quality data on a whole bunch of topics from over 1,400 institutions worldwide. In a sense, U-Multirank is getting institutions across the globe to up their game on internal data collection and – very slowly – fostering the emergence of international standards around data collection on things like on-time graduation rates, student internships, and study-abroad opportunities. This, too, is excellent.
Another excellent thing is the increasing amount of discourse in non-OECD countries critiquing the mainly research-based nature of rankings. Obviously, that discourse has been there from the start in places like Latin America, and there have been attempts to create alternatives such as Universitas Indonesia’s “GreenMetric” University Sustainability Rankings. More recently, there is the new Moscow University Rankings, which focus (in part) on the “Third Mission” of universities (a European term for what we in North America call “service” or “outreach”, and equally amorphous/multifaceted). In the United States, the Washington Monthly rankings have long fulfilled much the same role, but there are increasing calls to mainstream this approach. This past Monday, six Democratic senators, including possible presidential aspirants Cory Booker and Kamala Harris, wrote a public letter to US News & World Report asking that they include indicators around social mobility and inclusion (of which there are now quite a lot – see last week’s blog on this subject here) in their overall rankings.
(Aside: I’m not convinced that folding new inclusion/third mission/student experience indicators into conventional ranking systems that tend to privilege research intensity and selectivity is the way to go. If you do that, Harvard still always wins, and the impact of the new indicators is lost. I think it’s probably better in the long run just to have mission-specific rankings: rank everyone separately based on research, experience, third mission, etc. Clearer that way.)
More broadly, what I’m starting to see is a refocussing of the whole discourse around rankings. It’s not really about ranking qua ranking anymore. To an ever-greater extent it’s about benchmarking and, more importantly, about data availability and comparability. There is a large and growing constituency for genuinely comparable institutional data in areas beyond bibliometrics. This, too, is excellent.
The end goal here is laudable: that any institution in the world should be able to get high-quality comparable data about similar institutions around the globe which can help it benchmark and improve its performance. We’re still decades away from this. Developing this kind of data on topics other than research takes a lot of time and a lot of conversations. But the trend is now moving in this direction much more quickly than it was even a couple of years ago.
Reaching this final goal will mean jumping one last hurdle: making the data more or less open. There’s an obvious case for doing so: right now, institutions just give THE their data, which THE then turns around and sells back to institutions for a hefty fee. I don’t think this monopoly will last forever, and I suspect that institutions outside the OECD, which can’t afford THE’s fees, may lead the way in creating some kind of open repository of ranking data. (Equally, some kind of open, common data set such as the one that exists in the US may also arise, because too many rankers want the same data and institutions will get tired of dealing with them all.) This may not happen overnight, and if THE’s business model is ever destroyed this way we’ll lose a major innovator in the field, but I do believe it will happen in the long run.
Bottom line: globally, the rankings discussion is finally reaching a level of sophistication which makes more interesting discussions possible. In the past, rankings have had a lot of pernicious effects on higher education; I’m a lot more optimistic about the role they will play in the future.
