The End of College? (Part 1)
Over the next couple of days, I want to talk a bit about a new book called The End of College, written by the New America Foundation’s Kevin Carey. It’s an important book not because it’s been excerpted repeatedly in major publications, or because its conclusions are correct (in my view, they’re not), but because it offers an unerringly precise diagnosis of how higher education came to its present malaise, and of the economic and institutional forces that impede change.
Carey’s narrative starts by tracing the origins of universities’ current problems back to the 19th century, when America had three competing types of universities. First were the small liberal arts colleges devoted to Cardinal Newman’s ideals, to training clergy, or to both; second were the Land Grant institutions, created by the Morrill Act and devoted to the “practical arts”; and third were those that wanted to emulate German universities and become what we now call “research universities”. Faced with three different types of institutions from which to choose, America chose not to choose at all – in effect, it asked universities to embody all three ideals at once.
On top of that, American universities made another fateful decision: to adopt what is known as the Elective model (I prefer the term “Smorgasbord model”, and wrote about it back here). Starting at Harvard under President Charles Eliot, this move did away with programs consisting of a standardized set of courses in a standard curriculum, and replaced them with professors teaching more or less what they felt like and students choosing the courses they liked. This mix of specialization and scholarly freedom was one of the things that allowed institutions to accommodate both the liberal and the practical arts within the same faculties. In Carey’s words: “the American university emerged as an institution that was designed like a research university, charged with practical training and immersed in the spirit of liberal education”.
The problem is that this hybrid university simply didn’t work very well as far as teaching was concerned. The research end of the university began demanding PhDs – research degrees – as the minimum criterion for hiring. So hiring came to center on research expertise, even though this was no guarantee of either teaching quality or ability in the practical arts. And over time, universities largely ceded responsibility for teaching to people who were experts in research but amateurs at teaching. No one checked up on teaching effectiveness or learning outcomes. Degrees came to be a function of time spent in seats rather than actual measures of competence, proficiency, or mastery of a subject.
Because no one could check up on actual outputs or outcomes – not only are our research-crazy institutions remarkably incurious about applying their talents to the actual process of learning, they actively resist outsiders’ attempts to measure it, too (see: AHELO) – competition between universities was fought solely on prestige. Older universities had a head start on prestige; unless lavishly funded by the public (as the University of California was, for a time), the only way to compete with age was with money – often students’ money. Hence, George Washington University, New York University, the University of Southern California, and (to a lesser extent) Washington University in St. Louis all rose in the rankings by charging students exorbitant fees and ploughing that money into the areas that bring prestige: research, ivy, nicer quads, etc. (Similarly, Canadian institutions have poured an unholy percentage of the extra billions they have received in tuition and government grants since the late 90s into becoming more research-intensive; in Australia, the Group of Eight universities are shameless in saying that the proceeds of deregulated tuition will be ploughed into research.) The idea that all those student dollars might actually be used to – you know – improve instruction rarely gets much of a look-in.
Maybe if we were cruising along at full employment, no one would care much about all this. But the last six years have seen slow growth and (in the US at least) unprecedented declines in disposable middle-class incomes, as well as in graduates’ post-school incomes. So now you’ve got a system that is increasingly expensive (again, more so in the US than in Canada), that doesn’t attempt to set standards for outcomes or impose them on its professors, and that doesn’t do much in terms of working out “what works”.
Carey – rightly, I think – sees this as unsustainable: something has to give. The question is, what? Tomorrow, I’ll discuss Carey’s views on the subject, and on Friday I’ll provide some thoughts of my own.
———————————————-
The End of College? (Part 2)
As discussed yesterday, Kevin Carey’s The End of College pinpoints higher education’s key ill as its inability (or unwillingness) to provide students with any real signal about the quality of their work. This serves students badly in a couple of ways: first, it makes finding job matches harder; second, it means institutions can mis-sell themselves by investing in the accoutrements of excellence (ivy, quads, expensive residences) without the substance.
Essentially, Carey believes that technology will solve these problems. He’s not a blind MOOC-hypester; in fact, his chapter on Coursera is reasonably astute as to the reasons the current generation of MOOCs have yet to set the world alight. But he is utterly certain that the forces of technology will eventually provide high-quality, low-price solutions, which will overwhelm the current model. The ability to learn without the need for physical classrooms or libraries, the ability to get tutorial and peer assistance online, and the ability to test and certify at a distance will largely do away with the need for current (expensive) physical universities, and usher in the age of “The University of Everywhere”. Cue the usual stuff about “disruption”.
Carey provides readers with a useful overview of some of the ed-tech companies whose products are trying to provide the basis of this revolution, with a particular emphasis on technologies that can capture and measure learning progress, and use that information both to improve student performance immediately and to give instructors and institutions feedback with which to improve courses. He also spends a chapter looking at the issue of credentials. He correctly recognizes that the main reason universities have been able to maintain their position for so long is the strength of the Bachelor’s degree, a credential over which they hold a near-monopoly. And yet, he notes, credentials don’t actually tell you much about a graduate’s capabilities. And so he spends an entire chapter talking about alternatives to Bachelor’s degrees, such as digital “badges” – open-source, machine-readable, competency-based credentials which, in theory at least, are better at communicating actual skills to potential employers.
The problem is that this argument somewhat misses the mark. To measure learning in the way the techno-optimists wish, the “learning” has to be machine-readable. That is to say, student capabilities at a point in time have to be captured via clicks or keystrokes, and those clicks and keystrokes have to be interpretable as capabilities. The first is trivially easy (although implementing it in a classroom setting in a disciplined way may end up being a form of torture); the second ranges from easy to unimaginably difficult depending on the discipline.
A lot of the promise people see in machine-measured learning is based on things like Sebastian Thrun’s early MOOCs, which were in some ways quite intriguing. But these were in computer science, where answering a question rightly or wrongly is a pretty good indication of mastery of the underlying concepts, which in turn is probably a reasonable measure of “competence” in the field. Extrapolating from computer science is less helpful: most disciplines – and indeed, all of business and the social sciences – are not susceptible to capture this way. The fact that a history student doesn’t know the “correct” answer to a question (e.g. “in what year was the Magna Carta signed?”) doesn’t tell you how well that student has mastered skills like interpreting sources. In the humanities and social sciences (here including Law, Education, and Business), you can capture information, but it tells you very little about underlying skills.
With badges, the problem is roughly the same. Provided you are in a field of study where discrete skills are what matters, badges make sense. But by and large, those fields of study aren’t where the problem is in higher education. What problems do badges solve in humanities and social sciences? If the skills you want to signal to employers are integrative thinking or teamwork (i.e. skills the majority of employers say they most desperately need), how do badges solve any of the problems associated with the current Bachelor’s degree?
Two final points. First, I think Carey is too optimistic about learners, and insufficiently mindful that universities have roles beyond teaching. One justified criticism of much of the “disruption” crowd is that their alternative vision implies a high degree of autodidacticism among learners: put all these resources online, and people will take advantage of them on their own. In fact, that’s likely true only for a minority of learners. A University of Everywhere will – in the early years at least, and quite possibly much longer – impose significant penalties on learners who need a bit more assistance, who need a level of human contact and interaction greater than what can be provided over the internet.
Finally, one of the main reasons people go to universities is the social aspect. They meet people who will remain friends, and with whom they’ll associate for the rest of their lives. They learn many skills from each other via extra-curricular activities. Basically, they learn to become adults – and that’s a hugely important function. Sure, most universities do a half-assed job (at best) of communicating and executing this function, but Carey’s alternative is no improvement on it. This is why I’m fairly sure that even if most students could go to the University of Everywhere, they would choose not to: even if it were practical, I’m not sure it passes the market test.
So if Carey’s diagnosis of universities’ weaknesses is accurate but his predictions are incorrect, what are the real alternatives? I’ll tackle that tomorrow.
—————————————————
The Alternative to the End of College (Part 3)
So, if Kevin Carey is pretty much dead on about the weaknesses of current universities, and mostly wrong about where things go from here, how else might universities change over the next couple of decades?
Let’s start with the key points:
- Money pressures aren’t going to ease up. The cost disease will always be with us;
- Professors want to do research, and they don’t want to do it in Soviet-style academies, divorced from teaching. They’ll fight hard for the present system;
- Higher education is, to a significant extent, a Veblen Good. It is thus, to a considerable degree, impervious to disruption;
- Students don’t go to school just for the teaching. They go for the experience. And the networks. And the personal contact. And the occasional piece of praise. Some of this can be had online, but it tends to be more meaningful and lasting if accompanied by something face-to-face;
- The value of an established credential is that it relieves employers of the need to think too hard about the worth of an applicant. For this reason, it’s really hard for a new credential to displace an established credential;
- Employers are looking for universities to produce graduates who have more soft skills – mainly relating to teamwork and customer-facing skills. Students know this – and they want an education that will help provide this.
Any future one can imagine will need to meet these parameters. So, let’s extrapolate a little bit from here.
- Students will pay more for university if asked. They may not like it, but they will do it, and this will eventually ease some of the cost pressure. As a result, the status quo in day-to-day practices will be easier to maintain, barring some kind of blow-out event;
- That said, absent a frontal assault by government (which I think unlikely), tenured, research-track faculty are likely to hang around and get more expensive. So there will still be cost pressure for change;
- Professional pressures around research output mean that professors will, by and large, abandon lower-year courses (to the extent they haven’t already). Something has to replace them;
- MOOCs – or something like them – are an obvious way to cut costs here. Carey notes that although there are hundreds of thousands of different courses offered across the United States, the majority of credits actually awarded come from just 5,000 or so courses, which are pretty standard across institutions (e.g. American History 100, Accounting 200, etc.). To a significant degree, these can be standardized. That’s not to say there need only be a single course in each of these 5,000 areas: monocultures are bad. But in the words of one Harvard professor Carey interviewed, there probably doesn’t need to be more than half a dozen, either. Delivered at sufficient volume, these future-MOOCs will not just feature top lecturers; they will also have massively better support packages and learning design. Institutions could still localize and personalize them by offering their own tutorial support and testing of the material covered, and then award their own credit. It’s not obvious that the outcomes of this kind of arrangement would be worse than they are now: the lectures would likely be better, the scope for improvement in inter-institutional mobility and credit transfer is enormous, and the more nightmarish scenarios around MOOCs could be avoided;
- Pressure from students and employers is going to lead to significant re-designs of programs around learning outcomes – and specifically around teamwork and problem-solving. The key change will be working out how to integrate credible assessments of these qualities into existing structures of courses and degrees. There will likely be a lot of experimentation; certainly, I think we’re on the verge of the most serious re-think of the structure of credits and degrees since the 1960s;
- In tandem, various forms of work-based learning are going to keep expanding. Co-ops and internships will grow. Practical upper-year courses where students get to tackle real-world problems will become much more common. Some new types of validation – maybe not badges but something different from a simple diploma – will arise to help document achievement in these areas.
In other words, there will likely be some big changes in undergraduate programming – some due to technology, some due to cost pressures, and some due to demands from students and employers. These changes will weaken the importance of the credit hour, reduce the centrality of the academic discipline in academic life, and make university-based learning less reliant on classroom teaching as we currently know it.
But it will not be the End of College.
*Note: I’ll be in South Africa next week, and to keep myself sane, I’ll be taking a one-week hiatus from the blog. See you all again on March 23rd.