You’re reading the latest issue of The Edge, a weekly newsletter by Goldie Blumenstyk. Sign up here to get her insights on the people, trends, and ideas that are reshaping higher education.
I’m Goldie Blumenstyk, a senior writer at The Chronicle of Higher Education, covering innovation in and around academe. Here’s what I’m thinking about this week.
An author of ‘Academically Adrift’ strikes again.
Few higher-ed books of the last decade were as influential as Academically Adrift, the 2011 indictment of modern college culture by Richard Arum and Josipa Roksa. It documented how little time students were actually spending on their college coursework — and how little they were learning.
But the book also had its critics, primarily people who argued that Arum and Roksa were drawing too many conclusions from a limited set of data. Arum says those critics were right. “I really took that critique to heart,” he told me when we spoke last week.
Ever since, Arum says, he’s been trying to find ways to give institutions “a firmer and more empirical basis” to measure the value of college and especially liberal-arts education — and “not just through ideology and rhetoric.”
Now, joined by a cadre of other leading education researchers from an array of universities, Arum is setting out to correct that failure. The new project aims to comprehensively measure the complexity of learning inside the classroom and beyond, using vast amounts of student data. The effort kicked off last week at the University of California at Irvine, where Arum is now dean of the School of Education.
Call me jaded, but as interesting, and maybe even noble, as Irvine’s Next Generation Undergraduate Success Measurement Project sounds, I still find myself wondering if they can pull it off.
Reason 1 is that I’ve been hearing versions of this idea ever since I started working for The Chronicle. My second big story as a reporter here was on how a new movement was going to bring holistic assessments to higher ed. That was so long ago, I couldn’t even find the article in our electronic archive, though I did find this follow-up story from two years later. Such assessments have yet to materialize, although I certainly realize that the technologies for collecting and wrangling data are a lot more sophisticated today than they were in 1988.
Reason 2: For the project to have the impact Arum is hoping for, it will need to dislodge the growing primacy of a data-analytics culture already taking root at hundreds of colleges — much of which is driven by “student success” systems powered by companies like EAB, Civitas Learning, and Starfish.
I asked Arum about that, and he seemed unfazed. “This is of a whole different order,” he told me.
As he sees it, those student-success systems are “designed primarily around student persistence and retention. They have not been focused on student learning and a broader set of outcomes that are motivating students and their families.”
Those commercial systems are good for what they do, he said, “but it’s not enough.” Nor, he argued, noting the many academic collaborators involved from Irvine and beyond, are they even comparable to what he and his colleagues are trying to do. This isn’t a product, he said, it’s “a scientific effort.”
Hmm. That should sit well with the vendor crowd.
To be sure, this new project does have a much broader scope than what’s available in existing products. You may recall many of the project’s essentials from our earlier coverage, which described its goals and plans to eventually go national. About 1,000 UC-Irvine freshmen and juniors will be in the first test group. Over the next two years they’ll take periodic assessments designed to measure what motivates them. They’ll also be tested on their progress in developing skills such as recognizing confirmation bias and engaging in collaborative problem-solving. Arum calls those “core competencies of the 21st century” that college graduates need to succeed in the labor market and participate effectively in civic life.
The project will also use data on the 1,000-plus students drawn from their activities in the learning-management systems of their courses. And it will mine historical student data from a larger set of students.
Which brings me to Reason 3: Yes, all those inputs could make for more-sophisticated and more-nuanced measures, but they could lead to information overload. The project has $1.1-million from the Andrew W. Mellon Foundation, and a collaboration with the Educational Testing Service, which has developed some of the assessments.
Irvine’s campus leaders have gone all in on the project, too. They even took part in an assembly on campus last week with the project’s student volunteers, before the students took their first tests, to thank them for being part of an experiment that could, in the chancellor’s words, “have a profound impact on higher education in the United States.”
Arum told me he hopes to have some preliminary findings as early as the spring of 2020. I wondered if he planned to eventually use the findings as the basis for another book. He didn’t rule that out.
But he said his primary goal is to use the data to help the field of higher education by identifying a finite set of factors to measure how things like students’ choice of courses, or their reliance on peer and adult mentors, affect what they learn, and even the value of a liberal-arts education.
“You need a much richer set of data to be able to answer that question,” he said. “It’s unconscionable that we don’t know the answers to these things.”
Fair enough. But here’s hoping it doesn’t take another 30 years to get there.