The Economics of Rankings
November 27, 2017 – Alex Usher
One of the greatest misapprehensions about rankings – and there are a lot, believe me – is that rankers are “just doing it for the money”. For the most part, this is wrong. It’s really hard to make money at rankings.
To start with, at a rough guess, only about half of all rankings are done for commercial reasons. Many are carried out by academic institutions or bodies affiliated with them, and these have no intention whatsoever of making money. Maybe the most prominent of these worldwide is the Webometrics Ranking, which is done by Spain’s Consejo Superior de Investigaciones Científicas, but there are loads of others. Middle East Technical University puts out the University Ranking by Academic Performance, and in Taiwan the Higher Education Evaluation and Accreditation Council of Taiwan (HEEACT) used to do something similar (though it later got out of the business by transferring that unit to National Taiwan University). All of these are non-commercial, money-losing exercises.
How do they lose money? Well, rankings are reasonably labour-intensive. Data for a ranking comes from three potential sources. You can use third-party data – that is, data that has already been produced and processed (to some degree). Sometimes that data is free; other times it needs to be purchased – for instance, any use of bibliometric data usually comes with costs unless one of the big science publishers agrees to give you a deal (the big ranking outfits all have deals with either Clarivate or Elsevier). You can use surveys of various kinds, whether of students, graduates, professors or employers. The marginal cost of an email is zero, so this option is cheaper than it was 20 years ago, but gathering a useful sample and creating filters to ensure only legitimate users reply is expensive (the main methodological argument against the QS rankings is that its survey sample and quality filters are significantly weaker than those used by the Times Higher, and I’m pretty sure that’s down to cost pressures). Or you can ask institutions for data – which, apart from having methodological issues (are they all interpreting questions the same way, is anybody gaming the data, etc.), carries significant transaction costs in terms of managing the interactions. Those few rankers with significant market power can shift that burden by requiring institutions to upload data to a portal themselves, but such rankers are pretty rare.
Meanwhile, you need an income source, and mostly that needs to come in the form of sales and advertising. Magazines have an edge over newspapers here. Newspapers that publish rankings see no boost in sales for their rankings issues. In fact, almost nothing boosts daily newspaper sales on a one-day basis – even an event like 9/11 only boosts the numbers by a percentage point or two. People either subscribe or they don’t. Newspapers that do rankings do it because they want to signal to subscribers that they cover topics those readers care about. If they’re lucky, they can get a bit of money back through dedicated advertising sales. But usually that only works if there’s a dedicated print product or supplement to go with it, which of course creates new costs.
Magazines, on the other hand, do get boosts from single-issue events – and doubly so if they create dedicated print runs. Unlike newspapers, they can occupy real estate (so to speak) in newsagents and magazine stores for months on end, meaning their dedicated print runs can be larger. This is how Maclean’s makes its money, for instance. In the case of US News and World Report, the approach was so successful that the lucrative rankings unit has outlived the parent print magazine, which ended its run in 2010.
A few rankings offset some of their costs through consultancy work – for instance, the Leiden Ranking and the Shanghai Rankings (though in neither case do revenues amount to very much). But some of the biggest rankings are simply loss leaders for other products. QS loses money on its rankings, but they are great advertising for its business of organizing education fairs in various parts of the world, and it also sells consultancy services on rankings and general institutional improvement through its QS Stars service. The Times Higher provides similar consultancy services and has a relatively lucrative conference business; unlike QS, it also has its own publication platform, which drives income, and it creates bespoke rankings for national markets in conjunction with media partners like the Wall Street Journal. These big guys are essentially data companies, and rankings are just a form of advertising for that data.
(Some of you may recall that the Measuring Academic Research in Canada rankings, which we here at HESA Towers produced five years ago, used a similar model. The initial cost of the research was in the high five figures, which we more or less recouped through bespoke data sales to institutions.)
There are a few outliers. U-Multirank, for instance, relies on public funding from the EU to keep itself going (a point which sticks in the craw of many other rankers). It’s not clear what will happen to this ranking once its funding runs out in a year or so. And then there’s the Washington Monthly ranking (ably compiled by my colleague Robert Kelchen – if you’re in higher education and not following him on Twitter, you’re doing it wrong), which cleverly relies on third-party data – some of it fairly quirky – that is already publicly available, thereby reducing costs enormously. But since most higher education systems have terrible data transparency, this option doesn’t really work anywhere but the United States.
Bottom line: very few rankings make money on their own through advertising revenues; for the most part they act as loss leaders for other products.