|AU: But what’s the difference between the countries that want to use contracts versus performance formulas? Is there something that distinguishes the ones that specifically use performance-based contracts?
BJ: It’s difficult to say because you have such a wide variety of higher education systems and also funding systems. So the distinction is in the mix. There are quite a few well-resourced higher education systems that have performance contracts because they have institutions with a good capacity to negotiate; they have strategic capacities and they can undertake forward-looking exercises. So, they can run those performance agreements perhaps more easily than countries that are not so well resourced, which may want to use a simpler, mechanistic funding formula where every institution is treated on the basis of its score on a number of indicators. We were not looking into the exact reasons why one country or another chose a particular mix.
AU: Right, but that sounds like a strong hypothesis to me: that it’s the institutions’ ability and strategic capacity to execute projects that makes the difference. The report makes the point that the most common form of government funding in Europe is a combination of formula funding and performance-based funding. But within this group, the balance between formula and performance funding seems to vary widely. I mean, you’ve got countries where it’s maybe under 10% and you’ve got countries where it’s over 50%. So, which countries rely most heavily on performance-based funding, and why?
BJ: Like I said, it’s a bit difficult to say why, but the countries that attach a really large share of their core funding to measures or agreements on performance are situated in Scandinavia: let’s say Norway, Sweden, Denmark. My own country, the Netherlands, also has quite a high degree of performance-based funding, and Finland and Bulgaria do as well. Why do they do this? I think they have good systems of information in place that monitor and track performance and progress over time. They have institutions that, like you said, are well organized and have the capacity to handle systems that focus on performance. There’s a wide variety, and that’s interesting, but as our project shows, there are also countries that change the mix and try to tweak or reform their systems over time because they feel they have to stress a particular objective and then attach funding to it. So, it’s a dynamic picture, and what we have taken are snapshots of it.
AU: Performance-based funding is based on indicators, right? As you said, it’s about measurement. You have to be able to measure how institutions are doing on indicators in order to make the system work. In the United States, performance-based funding systems are equally common: I think in most years somewhere between 25 and 30 states use them, and there’s another 10 or 15 that use them occasionally. But in the US, typically the indicators are related to student performance. They might be a multidimensional picture of student performance, but it is completion rates or staying-in-school rates, and sometimes they’re calculated in very complicated ways to meet equity goals. I don’t get the impression that in Europe you’re quite as narrowly focused. So, what are the kinds of indicators most commonly in use in Europe, and does the use vary much by country?
BJ: It’s not that different from the US examples you mentioned. In Europe, we also have a strong focus on students getting degrees as the outcome of their time in university or in college. So, completion rates are equally important, and there are quite big, let’s say, problems and challenges when it comes to students completing, and completing in time. Students often take more time than the stipulated number of years they are expected to study, and that’s quite common in Europe. But what is different from the US is that we also have a strong focus on research performance, or indicators of research performance, like numbers of publications, citation rates, or even scores in research assessment exercises that try to place a number on the quality of research in any university. So, there’s also a strong research performance element in those European systems, and I guess in the US that’s not necessary because it’s a system where research is funded mostly through all kinds of competitions and there’s no need to stress it even further through these performance-based funding systems. The most common indicators in Europe are, like I said, degree completions, but also external income generated through all kinds of research contracts won by universities or university departments. Doctorates/PhDs are also emphasized in performance-based funding. There are different mixes and different weights, and they vary by country depending on what countries have in terms of ambitions, or what they see as the most important gap to fill.
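[Editor’s note: the idea of "different mixes, different weights" can be made concrete with a small sketch. This is purely illustrative: the indicator names, weights, and envelope size below are invented for the example and are not taken from any country’s actual formula. It shows one common mechanistic pattern, in which each institution receives a share of a fixed funding envelope in proportion to its weighted share of the system-wide total on each indicator.]

```python
# Illustrative sketch of a mechanistic, indicator-based funding formula.
# All names, weights, and figures are hypothetical, not from any real system.

ENVELOPE = 100_000_000  # assumed total performance-based pot, in euros

WEIGHTS = {  # assumed weights; real systems vary widely by country
    "degrees_awarded": 0.5,
    "publications": 0.3,
    "external_research_income": 0.2,  # in millions of euros
}

institutions = {
    "Uni A": {"degrees_awarded": 3000, "publications": 1200, "external_research_income": 40.0},
    "Uni B": {"degrees_awarded": 1500, "publications": 2000, "external_research_income": 70.0},
}

def allocate(institutions, weights, envelope):
    """Distribute the envelope in proportion to each institution's
    weighted share of the system-wide total on each indicator."""
    # System-wide total for each indicator, used to normalize raw scores.
    totals = {k: sum(inst[k] for inst in institutions.values()) for k in weights}
    # Each institution's overall share: a weighted sum of its indicator shares.
    shares = {
        name: sum(w * inst[k] / totals[k] for k, w in weights.items())
        for name, inst in institutions.items()
    }
    return {name: round(envelope * s, 2) for name, s in shares.items()}

print(allocate(institutions, WEIGHTS, ENVELOPE))
```

Because the weights sum to 1 and each indicator is normalized by the system-wide total, the allocations always exhaust the envelope exactly; shifting weight toward, say, publications would move money toward publication-heavy institutions without changing the total, which is why the choice of mix is itself a policy lever.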
AU: Ben, moving over from performance-based funding to performance contracts or performance agreements. It seems to me that there are different things being incentivized in these two areas. So, teaching and learning seems to be an area which gets attention in performance agreements or performance contracts, but not in performance-based funding formulas. I’m kind of intrigued by this, like, what’s the content of these agreements? How does an institution deliver on a teaching and learning commitment in a performance contract? What kinds of enforcement mechanisms do you see here?
BJ: That’s more than one question. In the contract, institutions are usually expected to make reference to the national goals, or the goals of the Ministry of Education, which has particular ambitions for the country as a whole, like internationalization or strengthening the quality of education or the quality of research. Those are the kinds of goals that institutions then try to respond to in light of their own institutional strengths and opportunities, but also weaknesses, to cover areas where they can do better. So those are the kinds of things included in a contract: the institutions are responding to the national ambitions and the national strategy. But on top of that, institutions can also bring in their own ambitions. So, they can pay attention to regional needs or particular areas of research they feel they can stress because there’s an opportunity or a future technology they want to work on. The content is a mix of national goals and institutional goals, and the institutions are then expected to deliver on those goals in the next three or four or five years. The way they do that is by putting in place new institutes, hiring new staff, getting new technology in place, or innovating their teaching and learning in one way or another by introducing new pedagogical models and new types of teaching. How is it checked whether they indeed deliver on those things? By showing results in the mandatory annual reports they have to write for their funding agency or ministry. The annual reports have paragraphs on this, mostly in a qualitative way, but supported by numbers that show what institutions have been doing on all the things they promised to work on.
AU: Let me ask you a question about the differences between your analysis of performance-based funding and the kinds of analysis that have been taking place in North America. A big theme in North America is that performance-based funding doesn’t work, in the sense of increasing completion rates. Now, your review of case studies indicated a number of positive effects, in the sense that the things intended to be incentivized were in fact being incentivized. They work, and they were getting results. Does that mean, in your opinion, that performance-based funding works, and how strong would you say the evidence is?
BJ: We did not do a very sophisticated quantitative analysis of this. Simply, we felt that the data to do that kind of thing over 27 countries, or even a subset of them, is not there. The countries are so different and so diverse, so we relied on more qualitative information. We asked a lot of experts and we looked at existing evaluations that had been carried out in the countries on the question of whether those systems of performance-based funding have delivered on their promises. What we noted is that those funding systems did not exist in isolation. They were often part of a whole set of tools that governments implemented together with funding reforms to get to a particular result and a higher level of performance one way or another. That combination of tools also makes it very difficult to identify the exact impact of the funding in this toolbox.
So, we did not do a highbrow, sophisticated quantitative study, but we relied on qualitative information from a whole set of different sources. And that leads us to believe that those systems indeed stress the things that governments would like to see the institutions work on. It gives institutions a signal of what governments would like to see. When the institutions work on these particular things, both the priorities and the results are made much more transparent. So it’s partly an accountability tool and partly a learning instrument. That leads us to believe that these systems, in Europe at least, show some positive results.
AU: You have also noticed some negative effects associated with performance-based funding, though. Could you tell us what those effects are?
BJ: We looked at a number of countries and asked about those negative effects. It’s usually the fact that some institutions feel that the performance-based funding system doesn’t treat them favorably, doesn’t treat them fairly, because they can never deliver on particular indicators: they happen to be in a place or a situation where the conditions are not as well resourced, as well organized, or as rich as at other institutions. There are large regional differences in some of those countries. In Italy, for example, there are big differences between the northern part, where the wealthy part of the country is located, where there are more opportunities to work with business and students come from better backgrounds, and the south of Italy and, let’s say, Sardinia, where there’s more poverty and less opportunity to generate research funds. So, the institutions located in the south feel they are treated unfairly. That’s one of the negative effects: the rich institutions become richer and the poor remain poor. Another effect, which we saw in Poland, is that when institutions are funded based on their research publications, they tend to really focus on publications in English-language journals. That comes at the expense of publications in Polish journals, because it’s those in English-language journals that are rewarded. So, the inequalities between institutions, and even the inequalities between disciplines, are sometimes seen as negative side effects of the system.
AU: Finally, let’s talk a little bit about why the results of your study differ from the conclusions of the major US studies. Obviously one factor is the tools used for assessment, right? The American studies of performance-based funding tend to be based on some fairly finicky and, in my humble opinion, not very well designed difference-in-differences techniques, whereas yours is a much more qualitative methodology, as you suggested. But there’s another one, and that’s the amount of money associated with European performance-based funding systems: it’s a lot bigger. In American systems, often it’s only a few tens of thousands or a few hundreds of thousands of dollars at stake, whereas in Europe, it seems to me the sums are worth a lot more. That might concentrate institutions’ minds a little bit. What’s your hypothesis? How big a factor are those two things?
BJ: Indeed, it has to do with the things you mentioned. The amount of public funding in Europe is in general much higher than in the US or North America. Fees are, with a few exceptions, not very high in Europe, and so the higher education institutions depend very much on public funding. So, governments try to use that as a lever or a tool to get institutions to deliver on particular priorities or results. Another thing is that in North America, research funding is not part of the core. There is some research funding that is perhaps part of the recurrent funding, but most research funding in North America is based on competition. So, there are already quite a few performance incentives in place in the system. Europe tries to increase the performance orientation by also including those research output indicators in the funding formula, so that institutions become more focused on research performance. So, there’s more at stake in Europe. Then, there are lower fees, and because the fees are smaller, students in Europe also have less of an incentive to complete on time. They don’t have to pay that much for an extra year in college, apart from housing and other things. In the US, I think that’s probably much different, certainly in the more prestigious institutions.
AU: That’s a good point about context and structural factors in evaluating these things. Ben, thank you so much for being with us today.
BJ: Okay. Thank you, Alex. It was a pleasure.
AU: It remains for me to thank the show’s excellent producers, Tiffany MacLennan and Sam Pufek, and of course, you the listener, for tuning in. If you have any comments or suggestions for future episodes, please drop us a line at [email protected]. Next week, the podcast turns its spotlight back on Canada for the first time since the relaunch in January, and our guest will be Dr. Julia Eastman, one of the authors of the quite excellent book, University Governance in Canada: Navigating Complexity. Bye for now.
*This transcript has been auto-generated with limited editorial review; suggested edits can be made to [email protected].