Most people inside universities understand this logic; almost nobody outside the university does. A lot of what you see in university branding and university strategic plans is, to some degree, covering up this problem: the external world (governments especially, but also philanthropists) assumes that institutions are unitary entities with agency, and so it is important that the university present itself this way. If it presents itself as a set of cats fighting in a bag – arguably nearer the truth – things would be unlikely to go well on the whole “go get more money” front. But this accentuates the problem from an accountability perspective. The more institutions paint themselves as having agency (as opposed to the bagged cats they contain), the more external bodies – governments, rankers, whatever – are going to focus on institutions rather than programs or departments as the locus of accountability. This is arguably both unfair and unhelpful.
‘Twas ever thus. And to some extent the way you deal with this issue is by having external quality assurance procedures focused on individual disciplines or programs, which in North America happens primarily in the professional disciplines. But if,
as I suggested a couple of days ago, academic research is becoming less focussed on disciplines and more focussed on challenges, that makes judgement about what constitutes excellence somewhat harder. And, let’s just add to that: as academic units respond to public pressure to work jointly and collaboratively on these issues (and you’d better believe this pressure is only going to grow over the next decade or so), it’s going to get harder by several orders of magnitude to separate out and evaluate the work of any individual or unit.
Put simply: public authorities think evaluating universities is a good thing; in some cases, they want to tie money to some of those evaluations. They also think universities should work jointly on “big societal programs”. The latter complicates the former enormously because it becomes ever less clear where the real locus of activity lies. Certainly, it will become harder to measure through pure metrics and algorithms. More nuanced analyses, some even containing things like (shock horror) expert judgement, might be required.
Additionally, it would probably help if we could complement assessments of programs and institutions with evaluations of how well our system(s) of higher education are working. This might be difficult in the short term, since very few Canadian governments can articulate what they actually want out of the system, particularly on research (Quebec probably could, the rest would struggle). We used to do these pretty regularly, albeit on an ad hoc basis. My colleague Yves Pelletier and I did one for
Manitoba colleges a few years ago, but I don’t believe there have been any since then. It may have been ten years since the last one for universities anywhere in Canada (the
O’Neill Report in Nova Scotia). We need more of these, with specific mandates to look at how academics are contributing to overall growth in human knowledge through networked research – because otherwise that will never get considered.
(Yes, I know there is a post-secondary review going on in Newfoundland, but it was announced over two years ago, and I am starting to doubt that it will ever actually report. One rumour I heard recently is that the panel wants to punt on the issue of whether or not to end the 20-year-long freeze on tuition fees, which is bananas if true because the whole point of the panel was to give weak-kneed politicians cover for doing what everyone knows needs to be done.)
In fact, it might even be more cost-effective and generally useful for provinces to contribute to a genuine, pan-Canadian exploration of how Canadian universities are collectively advancing knowledge. Run it through the Council of Ministers of Education, Canada. Difficult, I know, because provincial education ministries dislike working with each other almost as much as they dislike working with the feds, but again: if it’s the only way to get at a problem, why not take it?
Or even: why wouldn’t Canadian universities themselves commission periodic reports by credible independent observers on the state of system contributions? There is a precedent: the
Commission of Inquiry on Canadian University Education (I realize this is now going back 30 years, but it is a precedent nonetheless). After all, the biggest barrier to re-organizing research around interdisciplinary challenges is common to all universities: disciplines have permanence in the form of departmental bureaucracies and tend to outlast occasional initiatives aimed at greater trans-disciplinary coherence. It might be worth a collective examination of how to overcome that.