Are you sitting comfortably? Then I’ll begin.
According to some briefing papers handed out by the Ontario Ministry of Training, Colleges and Universities last week (which someone very kindly provided me, you know who you are, many thanks), the new PBF is supposed to be based on ten indicators: six related to “skills and job outlooks” and four related to “economic and community impact”. One of the latter set of indicators is meant to be designed and measured individually by each university and college (in consultation with the ministry), which is a continuation of practices adopted in previous strategic mandate agreements. One of the “economic” indicators is a dual indicator: a research metric for universities and an “apprenticeship-related” metric for colleges which is “under development” (i.e. the government has no clue what to do about this), so I won’t touch the college one for now. And finally, one of the skills indicators is some kind of direct measurement of skills, presumably using something like the tests HEQCO performed as part of its Postsecondary and Workplace Skills project (which I wrote about
back here); I will deal with that one separately tomorrow because it’s a huge topic all on its own.
So that leaves us with eight indicators, which are:
Graduation Rate. Assuming they stick with current practice, this will be defined as the “percentage of first-time, full-time undergraduate university students who commenced their study in a given Fall term and graduated from the same institution within 6 years”. Obvious problems include what to do about transfer students (aren’t we supposed to be promoting pathways?).
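To make the arithmetic concrete, here is a toy sketch of the cohort calculation with entirely invented student records; note how incoming transfer students simply vanish from the metric under that definition.

```python
# Toy illustration of the 6-year cohort graduation rate (invented records).
# Assumes the current definition: first-time, full-time entrants in a given
# Fall term who graduate from the same institution within six years.

cohort = [
    # (student_type, graduated_within_6_years_at_same_institution)
    ("first_time_full_time", True),
    ("first_time_full_time", False),
    ("first_time_full_time", True),
    ("transfer_in", True),   # invisible to the metric under the current definition
    ("transfer_in", False),
]

eligible = [s for s in cohort if s[0] == "first_time_full_time"]
grad_rate = sum(1 for s in eligible if s[1]) / len(eligible)

print(f"6-year graduation rate (first-time, full-time only): {grad_rate:.0%}")
```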
Graduate Employment. The government is suggesting measuring not employment rates but the “proportion of graduates employed full-time in fields closely or partly related to one’s field of study”. This question is currently asked on both the university and college versions of the Ontario graduate surveys but is not currently published at an institutional level. It is a self-report question, which means the government does not have to define what a “related” or “partly related” field actually means.
Graduate Earnings. This is currently tracked by the ministry through the Ontario graduate surveys, but the government appears to want to switch to using Statistics Canada’s new Education and Labour Market Longitudinal Platform (ELMLP), through which graduate incomes can be tracked via the tax system. This is mostly to the good, in the sense that coverage is close to universal (no reliance on survey response rates) and graduates can be tracked for longer (though it is not clear what the preferred time frame is here), but the ministry will lose the ability to exclude graduates who are enrolled in school for a second degree.
Experiential Learning. The government’s briefing document indicates it wants to use the “number and proportion” of graduates in programs with experiential learning (confusing, because that is actually two indicators) for colleges, but substitutes the word “courses” for “programs” when it comes to universities. I have no idea what this means and suspect they may not either. Possibly, this is a complicated way of saying they want to know what proportion of graduates have had a work-integrated learning experience.
Institutional Strength/Focus. This is a weird one. The government says it wants to measure “the proportion of students in an area of strength” at each institution. I can’t see how any institution looking at this metric is going to name anything other than its largest faculty (usually Arts) as its area of strength. Or how OCAD isn’t just going to say “art/design” and get a 100% rating. Maybe there’s some subtlety here that I’m missing, but this just seems pointless to me.
Research Funding and Capacity (universities only). Straight up, this is just how much tri-council research funding each institution receives, meaning it could be seen as a form of indirect provincial support to cover overhead on federal research. That seems clear enough, but presumably there will be quite a bit of jostling over definitions, in particular: how is everyone supposed to count money for projects with investigators at multiple institutions? Should it use the same method as the federal indirect research support program, or some other method? And over how many years will the calculation be made? A multi-year rolling average seems best, since in any given year the number can be quite volatile at smaller institutions.
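Just to illustrate what sensible definitions might look like, here is a rough sketch of one way to do the counting: pro-rate multi-institution grants by investigator share and take a three-year rolling average. Both rules, and all the numbers, are my assumptions for illustration, not anything in the briefing papers.

```python
# Sketch of one possible counting rule for tri-council income: pro-rate
# multi-institution grants by each institution's share of investigators,
# then take a three-year rolling average. All data are invented.

from collections import defaultdict

# (year, award, {institution: number_of_investigators}) -- invented data
grants = [
    (2017, 900_000, {"University A": 2, "University B": 1}),
    (2018, 300_000, {"University A": 1}),
    (2019, 600_000, {"University B": 3}),
]

income = defaultdict(lambda: defaultdict(float))
for year, award, investigators in grants:
    total_investigators = sum(investigators.values())
    for inst, n in investigators.items():
        income[inst][year] += award * n / total_investigators  # pro-rated share

window = [2017, 2018, 2019]
for inst in sorted(income):
    rolling_avg = sum(income[inst].get(y, 0.0) for y in window) / len(window)
    print(f"{inst}: 3-year average tri-council income = ${rolling_avg:,.0f}")
```

A multi-year window like this is exactly what smooths out the single big grant that can double a small institution’s number in one year and disappear the next.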
“Innovation”. Put simply, this means funding from industry sources (for universities, it is specified as “research income”). The government claims it can get this data from the Statscan/CAUBO Financial Information of Universities and Colleges Survey, although I’m 99% sure that’s not something that gets tracked specifically. Also, important question: do non-profits count as “industry”? Because, particularly in the medical field, that’s a heck of a big chunk of the research pie.
“Community/Local Impact”. OK, hold on to your hats. Someone clearly told the government it should have a community impact indicator to make this look like “not just a business thing”, but of course community impact is complex, diffuse, and difficult to measure consistently. So, in their desperation to find a “community” metric which was easy to measure, they settled on… are you ready?… institution size… divided by… community size. No, you’re not misreading that, and yes, it’s asinine. I mean, first of all, it’s not a measure of performance. Secondly, it’s not clear how you measure community: Confederation College has campuses in five communities, Waterloo has three, etc., so what do you use as a denominator? Third: What? WHAT? Are you KIDDING ME? Set up a battle of wits between this idea and a bag of hammers and the blunt instruments win every time. This idea needs to die in a fire before this process goes any further because it completely undermines the idea of performance indicators. If the province needs a way to quietly funnel money to small-town schools (helloooo, Nipissing!), then do it through the rest of the grant, not the performance envelope.
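Just to drive home the denominator point: here is a toy calculation, with entirely invented numbers, showing how much the “score” swings depending on which definition of “community” you pick.

```python
# Toy calculation showing how the institution-size / community-size "metric"
# depends almost entirely on how you define the denominator. All numbers are
# invented; none of this reflects any real institution.

enrolment = 30_000

denominators = {
    "city proper": 110_000,
    "census metropolitan area": 550_000,
    "all communities with a campus": 700_000,
}

for label, population in denominators.items():
    print(f"{label:>30}: {enrolment / population:.2f}")
# The "score" swings by a factor of six without the institution doing anything
# differently -- which is exactly why this is not a performance measure.
```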
OK, so that is eight indicators. Two of these (community impact, institutional strength) are irretrievably stupid and should be jettisoned at the first opportunity. The “research” and “innovation” measures are reasonable provided sensible definitions are used (multi-year averages, the indirect funding method of counting tri-council income, inclusion of non-profits) and would be non-controversial in most European countries. The experiential learning one is probably OK, but again much depends on the actual definitions chosen.
That leaves the three graduation/employment metrics. There are technical issues with all of them. The graduation rate definition is one-dimensional; in most US states, a simple grad rate is now usually accompanied by other metrics of progress beyond completion (e.g. indicators for successfully bringing transfer students to a degree, or for getting students to complete 30/60/90 credits). The “employment in a related field” measure is going to make people scream (it might be useful for professional programs, but most degrees aren’t designed to map onto occupations, and even where they are, people shift occupations after a few years anyway), and in any case it is to be measured through a survey with notoriously low response rates, which will matter at small institutions. The graduate income measure is technically OK but doesn’t work well as an indicator in some types of PBF systems because it does not scale with institution size (I’ll deal more with this in Thursday’s blog).
But the bigger issue with all three of these is that they conceivably set up some very bad incentives for institutions. On all three measures, institutions could juice their scores by dumping humanities or fine arts programs and admitting only white dudes, because that’s who does best in the labour market. I’m not saying they would do this (institutions do have ethical compasses), but it is quite clearly a dynamic that could be in play at the margin. As it stands, there is a strong argument that these measures have the potential to be anti-diversity and anti-access.
There is, I think, a way to counter this argument. Let’s say the folks in TCU do the right thing and consign those two ridiculous indicators to the dustbin: why not replace them with indicators which encourage broadening participation? For instance, awarding points to institutions which are particularly good at enrolling students with disabilities, Indigenous students, low-income students, etc. The first two are measured already through the current SMA process; the third could be measured through student aid files, if necessary. That way, any institution which tries to win points by being more restrictive in its intake would lose points on another (hopefully equally weighted) indicator, and the institutions which do best would be those that are both open access and have great graduation/employment outcomes. Which, frankly, is as it should be.
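Here is a rough sketch of that offsetting logic, with weights, indicator names, and scores all invented for illustration: if an access indicator carries the same weight as an outcomes indicator, an institution that boosts its employment numbers by restricting intake gains a little on one measure and loses more on the other.

```python
# Rough sketch of the offsetting logic, with invented weights and scores: an
# access indicator weighted equally with an outcomes indicator means that
# gaming employment numbers by restricting intake is a losing move overall.

weights = {"graduate_employment": 0.5, "access_enrolment": 0.5}

open_access       = {"graduate_employment": 0.80, "access_enrolment": 0.90}
restricted_intake = {"graduate_employment": 0.90, "access_enrolment": 0.50}

def composite(scores):
    return sum(weights[k] * scores[k] for k in weights)

print("open access:      ", composite(open_access))
print("restricted intake:", composite(restricted_intake))
# 0.85 vs 0.70: the open-access institution comes out ahead, which is the
# incentive a blended indicator set like this is meant to create.
```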
Tomorrow: Measuring skills.