Performance-Based Funding (Part 1)
I was reading the Ontario Confederation of University Faculty Associations (OCUFA)’s position statement on a new funding formula for the province. Two things caught my eye: one, they want money to make sure Ontario universities can do world-class research and teaching; and two, they are strictly opposed to any kind of performance-based funding (PBF) formula. Put differently: OCUFA wants great teaching and research to be funded, but is adamantly opposed to rewarding anyone for actually doing it.
Except that’s slightly uncharitable. OCUFA’s larger point seems to be that performance-based funding formulae (also known as output-based funding) “don’t actually achieve their goals”, pointing to work on the topic by University of Wisconsin professor Nicholas Hillman and Florida State’s David Tandberg. From a government-spending efficacy point of view, this objection is fair enough, but it’s a bit peculiar from an institutional or faculty standpoint: the Hillman/Tandberg evidence doesn’t indicate that institutions were actually harmed in any way by the introduction of such arrangements, so what’s the problem?
Anyway, last week HCM Associates in Washington put out a paper taking a contrary view to Hillman/Tandberg, so we now have a live controversy to talk about. Tomorrow, I’ll examine the Hillman/Tandberg and HCM evidence to evaluate the claims of each; today I want to go through what output-based funding mechanisms can actually look like, and in the process show how difficult it is for meta-analyses – such as Hillman’s and HCM’s – to calculate potential impact.
At one level, PBF is simple: you pay for what comes out of universities rather than what goes in. So: don’t pay for bums in seats, pay for graduates; don’t pay based on research grants earned, pay based on articles published in top journals; and so on. But the way these payments get structured can vary widely, so their impacts are not all the same.
Take the number of graduates, which happens to be the simplest and most common indicator used in PBFs. A government could literally pay a certain amount per graduate – or per “weighted graduate”, to take account of differing costs by field of study. It could pay each institution based on its share of total graduates or weighted graduates. It could give each institution a target number of graduates (based, perhaps, on size and current degree of selectivity) and pay out 100% of an allotment if it hits the target, and 0% if it does not. Or it could set a target and then pay a pro-rated amount based on how close the institution came to that target. And so on, and so forth. (A rough sketch of these four payout rules follows below.)
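To make the differences concrete, here is a minimal sketch of the four payout rules just described. All of the figures – the size of the envelope, the per-graduate rate, the institutions and their targets – are invented for illustration, not drawn from any actual funding formula.

```python
# Illustrative sketch: four hypothetical ways a government might pay out a
# graduation-based PBF envelope. All numbers are invented for illustration.

POOL = 100_000_000  # total PBF envelope ($), hypothetical

# weighted graduates produced by each (hypothetical) institution, and targets
weighted_grads = {"U1": 9_000, "U2": 3_000, "U3": 1_500}
targets = {"U1": 10_000, "U2": 2_500, "U3": 2_000}

# 1. Flat rate per weighted graduate
rate = 7_000  # $ per weighted graduate, assumed
per_grad = {u: g * rate for u, g in weighted_grads.items()}

# 2. Share of a fixed pool, proportional to share of all weighted graduates
total = sum(weighted_grads.values())
share_of_pool = {u: POOL * g / total for u, g in weighted_grads.items()}

# 3. All-or-nothing against a target: full allotment if met, otherwise zero
allotment = POOL / len(weighted_grads)
all_or_nothing = {u: (allotment if weighted_grads[u] >= targets[u] else 0.0)
                  for u in weighted_grads}

# 4. Pro-rated against the target, capped at 100% of the allotment
pro_rated = {u: allotment * min(weighted_grads[u] / targets[u], 1.0)
             for u in weighted_grads}

for u in weighted_grads:
    print(u, per_grad[u], share_of_pool[u], all_or_nothing[u], pro_rated[u])
```

Even with identical graduate counts, the four rules hand different institutions very different cheques, which is the point: “PBF” is not one treatment.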
Each of these methods of paying out PBF money plainly has different distributional consequences. However, if you’re trying to work out whether output-based funding actually affects institutional outcomes, the distributional consequences themselves are of secondary importance. What matters more is how different the new distribution is from whatever distribution existed under the previous funding formula.
So, say the province of Saskatchewan moves from its current mix of historical and formula grants to a fully PBF system, in which 100% of funding is based on the number of (field-weighted) graduates produced. Currently, the University of Saskatchewan gets around three times as much in total operating grants as the University of Regina. If USask also produced three times as many (field-weighted) graduates as URegina, even the shift to a 100% PBF model wouldn’t change anything in terms of distribution, and hence would have limited consequences in terms of policy and (presumably) outputs.
In effect, the real question is: how much funding, which was formerly “locked-in”, becomes “at-risk” during the shift to PBF? If the answer is zero, then it’s not much of a surprise that institutional behaviour doesn’t change either.
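Here is a minimal sketch of that “at-risk” calculation, using invented grant and graduate figures that loosely echo the USask/URegina example above (a 3:1 ratio on both sides); none of the numbers are real.

```python
# Hypothetical illustration: how much funding actually moves when an operating
# grant is redistributed on the basis of (field-weighted) graduate shares.
# All figures are invented.

old_grant = {"USask": 300.0, "URegina": 100.0}        # $M under the old formula
weighted_grads = {"USask": 6_000, "URegina": 2_000}   # field-weighted graduates

pool = sum(old_grant.values())
grad_total = sum(weighted_grads.values())

# 100% PBF: each institution's grant is its share of weighted graduates
new_grant = {u: pool * g / grad_total for u, g in weighted_grads.items()}

# Money that changes hands = half the sum of absolute differences
moved = sum(abs(new_grant[u] - old_grant[u]) for u in old_grant) / 2

print(new_grant)  # {'USask': 300.0, 'URegina': 100.0}
print(moved)      # 0.0 -- identical shares, so nothing is actually at risk
```

In this toy case the “100% performance-based” formula moves exactly zero dollars, which is why the headline share of funding labelled PBF tells you very little on its own.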
Tomorrow: a look at the duelling American research papers on PBF.
———————————————————————————-
Performance-Based Funding (Part 2)
So, as we noted yesterday, there are two schools of thought in the US about performance-based funding (where, it should be noted, about 30 states either have some kind of PBF criteria built into their overall funding system or are planning to introduce them). Basically, one side says PBF works, and the other says it doesn’t.
Let’s start with the “doesn’t” camp, led by Nicholas Hillman and David Tandberg, whose key paper can be found here. To determine whether PBFs affect institutional outcomes, they look mostly at a single output – degree completion. This makes a certain amount of sense, since it’s the one most states try to incentivize, and they use a nice little quasi-experimental research design comparing changes in completion rates between states with PBF and those without. Their findings, briefly, are: 1) no systematic benefits to PBF – in some places, results were better than in non-PBF systems, in other places they were worse; and 2) where PBF is correlated with positive results, said results can take several years to kick in.
Given the methodology, there’s no real arguing with the findings here. Where Hillman and Tandberg can be knocked, however, is that their methodology treats all PBF schemes as identical – as the same “treatment”. But as we noted yesterday, the mere existence of PBF is only one dimension of the issue; the size of the PBF component, and the extent to which it drives overall funding, must matter as well. On this, Hillman and Tandberg are silent.
The HCM paper does in fact give this issue some space. It turns out that in 18 of the 26 PBF states examined, the performance-based component accounts for less than 5% of overall public funding. Throw in tuition and other revenues, and the share of total institutional revenue accounted for by PBF drops by 50% or more, which suggests there are a lot of PBF states where it would simply be unrealistic to expect much in the way of effects. Of the remainder, three are under 10%, and then there are five huge outliers: Mississippi at just under 55%, Ohio at just under 70%, Tennessee at 85%, Nevada at 96%, and North Dakota at 100% (note: Nevada essentially has one public university and North Dakota has two, so whatever PBF arrangements exist there likely aren’t changing the distribution of funds very much). The authors then point to a number of advances made in some of these states on a variety of metrics, such as “learning gains” (unclear what that means), greater persistence for at-risk students, shorter times-to-completion, and so forth.
But while the HCM report has a good summary of sensible design principles for performance-based funding, there is little that is scientific about it when it comes to linking policy to outcomes. There’s nothing like Hillman and Tandberg’s quasi-experimental design at work here; instead, what you have is a collection of anecdotes about positive things that have occurred in places with PBF. So as far as advancing the debate about what works in performance-based funding, it’s not up to much.
So what should we believe here? The Hillman/Tandberg result is solid enough – but if most American PBF systems don’t change funding patterns much, then it shouldn’t be a surprise to anyone that institutional outcomes don’t change much either. What we need is a much narrower focus on systems where a lot of institutional money is in fact at risk, to see if increasing incentives actually does matter.
Such places do exist – but oddly enough neither of these reports actually looks at them. That’s because they’re not in the United States, they’re in Europe. More on that tomorrow.
———————————————————————-
Performance-Based Funding (Part 3)
As I noted yesterday, the American debate on PBF has more or less ignored evidence from beyond its shores; and yet, in Europe, there are several jurisdictions with very high levels of performance-based funding. Denmark has had what it calls a “taximeter” system – which pays institutions on the basis of student progression and completion – for over 20 years now, and it currently makes up about 30% of all university income. Most German Länder have some element of incentive-based funding tied to student completion or time-to-completion; in some cases, institutions are also paid on the basis of the number of international students they attract (international students pay no tuition in Germany). In the Netherlands, graduation-based funding makes up over 60% of institutional operating grants (or, near as I can tell, about 30% of total institutional income). The Czech Republic now allocates 20% of institutional funding on a quite bewildering array of indicators, including internationalization, research, and student employment outcomes.
Given this, you’d think there would be a copious literature on whether the introduction of these measures actually “worked” in terms of changing outcomes on the indicators in question. But you’d be wrong: there’s actually almost nothing. That’s not to say these programs haven’t been evaluated. The Danish taximeter system appears to have been evaluated four times (I haven’t actually read these – Danish is fairly difficult), but the issue of dropouts doesn’t seem to have been at the core of any of them (for the record, Danish universities have relatively low dropout rates compared to other European countries, though it’s not clear whether this was always the case or whether it was a result of the taximeter policy). Rather, what gets evaluated is the quite different question of whether universities are operating more efficiently.
This is key to understanding performance indicators in Europe. In many European countries, public funding makes up as close to 100% of institutional income as makes no odds. PBF has therefore often been a way of trying to introduce a quasi-market among institutions so as to induce competition and efficiency (and on this score, it usually gets fairly high marks). In North America, where pressures for efficiency are exerted through a competitive market for students, the need for this is – in theory at least – somewhat less. This largely explains the difference in the size of performance-based funding allocations; in Europe, these funds are often the only quasi-competitive mechanism in the system, and so (it is felt) they need to be on the scale of what tuition is in North America in order to achieve similar competitive effects.
Intriguingly, performance-based funding in Europe is at least as common with respect to research as it is to student-based indicators (a good country-by-country summary from the OECD is here). Quite often, a portion of institutional operating funding will be based on the value of competitive research funding won, a situation made possible by the fact that many countries in Europe separate their institutional grants into funding for teaching and funding for research in a way that would give North American universities the screaming heebie-jeebies. Basically: imagine if the provinces awarded a portion of their university grants on the same basis that Ottawa hands out the indirect research grants, only with less of the questionable favouritism towards smaller universities. Again, this is less about “improving overall results” than it is about keeping institutions in a competitive mindset.
So, how to interpret the evidence of the past three days? Tune in tomorrow.
—————————————————————————————-
Performance-Based Funding (Part 4)
I’ve been talking about performance-based funding all week; today, I’ll try to summarize what I think the research and experience actually says.
Let’s return for a second to a point I made Tuesday. When determining whether PBF “works”, what matters is being able to show that incentivizing particular outcomes actually changes institutional behaviour and leads to improvements in those outcomes. However, no study to date has actually linked quantifiable changes in funding to any policy outcomes. Hillman and Tandberg – who found little to no positive effects – came closest to doing this, but they looked only at the incidence of PBF, not its size; as such, their results can easily be read to suggest that the problem with PBF is that it needs to be bigger in order to work properly. And indeed, that’s very likely: in over half of US states with PBFs, the proportion of operating income held for PBF purposes is around 2.5%; in practice, the size of the redistribution of funds produced by PBF (that is, the difference between how that 2.5% is distributed now versus how it was distributed before PBF was introduced) is probably a couple of orders of magnitude smaller still.
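As a back-of-the-envelope illustration of that last claim: the system size and the share of the PBF slice that actually changes hands are assumptions of mine, not figures from either study.

```python
# Back-of-the-envelope illustration with assumed numbers: if roughly 2.5% of
# operating income runs through a PBF formula, and that formula allocates its
# slice only slightly differently from the old formula (say 1% of the slice
# actually changes hands), very little money moves relative to total income.

operating_income = 1_000_000_000   # $1B system, hypothetical
pbf_share = 0.025                  # ~2.5% of operating income is performance-based
reallocated_within_pbf = 0.01      # assume 1% of the PBF slice shifts between institutions

money_that_moves = operating_income * pbf_share * reallocated_within_pbf
print(money_that_moves / operating_income)  # 0.00025, i.e. 0.025% of income --
                                            # two orders of magnitude below 2.5%
```

Under those (assumed) conditions, it would be surprising if institutional behaviour changed at all.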
I would argue that there’s a pretty simple reason why most PBFs in North America don’t actually change the distribution of funds: big and politically powerful universities tend to oppose changes that might “damage” them. To the extent that any funding formula produces something too far from the status quo (which tends to reward big universities for their size), they will oppose it; the more money that is suddenly at risk, the louder the big universities scream. The political logic of PBFs is therefore that, to have a chance of implementation, they have to be relatively small and not disturb the status quo too much.
Ah, you say: but what about Europe? Surely the large size of PBF incentives must have caused outrage when they were introduced? That’s a good question, and I don’t really have an answer. It’s possible that, despite their size, European PBF schemes did not actually change distributions much more than their American counterparts did. I can come up with a few country-specific hypotheses about why that might be: the Danish taximeter system was introduced at a time when universities were still considered part of government (and academics part of the civil service), the Polish system was introduced at a time of increasing government funding, and so on. But those are just guesses. In any case, such literature as I can find on the subject certainly doesn’t mention much in the way of opposition.
So, I think we’re kind of back to square one. I think the Hillman/Tandberg evidence tells us that simply having a PBF doesn’t mean much, and I think the European evidence suggests that at a sizeable enough scale, PBFs can incentivize greater institutional efficiency. But beyond that, I don’t think we’ve got much solid to go on.
For what it’s worth, I’d add one more thing, based on work I did last year looking at the effect of private income on universities in nine countries: only incentivize things that don’t already carry prestige incentives. Canadian universities are already biased towards activities like research; incentivizing them further through performance funding is like giving lighter fluid to a pyromaniac.
No, what you want to incentivize is the deeply unsexy stuff that’s hard to do. Pay for Aboriginal completions in STEM subjects. Pay for female engineering graduates. Pay big money to the institution that shows the greatest improvement on the National Survey of Student Engagement (NSSE) every two years. Offer a $20 million prize to the institution that comes up with the best plan for measuring – and then improving – learning, payable in installments to make sure it actually follows through (OK, that’s competitive funding rather than performance-based funding, but you get the idea).
Neither the pro- nor anti-camp can point to very much genuinely empirical evidence about efficacy; in the end it all comes down to whether one thinks institutions will respond to incentives. I think it’s pretty likely that they do; the trick is selecting the right targets, and structuring the incentives in an intelligent way. And that’s probably as much art as science.