How much do university students learn?
February 19, 2011

On a previous occasion we discussed a recent study published in the United States on students' low levels of learning during their college years. Below, more on the same study: criticisms aimed at its empirical findings and interpretations, and the authors' defense.
Additional resources
Excerpts from the book Academically Adrift: see here.
Commentary on the quantitative aspects of the study.
Criticisms that appeared in the press.
A conversation with the authors.
Scholars Question New Book’s Gloom on Education: Doubts are raised about study behind ‘Academically Adrift’
By David Glenn, The Chronicle of Higher Education, February 13, 2011
It has been a busy month for Richard Arum and Josipa Roksa. In mid-January, the University of Chicago Press published their gloomy account of the quality of undergraduate education, Academically Adrift: Limited Learning on College Campuses.
Since then the two sociologists have been through a torrent of radio interviews and public lectures. In the first days after the book’s release, they had to handle a certain amount of breathless reaction, both pro and con, from people who hadn’t actually read it. But now that more people in higher education have had time to digest their arguments, sophisticated conversations are developing about the study’s lessons and about its limitations.
Many college leaders are praising the ambition of Mr. Arum and Ms. Roksa's project, and some say they hope the book will focus new attention on the quality of undergraduate instruction. When the authors spoke last month at the annual meeting of the Association of American Colleges and Universities, in San Francisco, the ballroom was filled far beyond its capacity, and they were introduced as "rock stars."
But three lines of skepticism have also emerged.
First, some scholars say that Academically Adrift's heavy reliance on the Collegiate Learning Assessment, a widely used essay test that measures reasoning and writing skills, limits the value of the study. Second, some people believe the authors have not paid enough attention to the deprofessionalization of faculty work and the economic strains on colleges, factors that the critics say have played significant roles in the erosion of instructional quality. Third, some readers challenge the authors' position that the federal government should provide far more money to study the quality of college learning, but should not otherwise do much to regulate colleges.
Testing Questions
The heart of Mr. Arum and Ms. Roksa’s argument is that too many students fail to improve their writing and reasoning skills while they are in college. Academically Adrift is not concerned with how much expertise students gain in their majors, but with broad skills that the authors believe all college students should acquire.
The two scholars’ study has tracked more than 2,000 students who enrolled at 24 institutions in the fall of 2005. Academically Adrift covers the students’ first two years at college, and a separate report, released last month by the Social Science Research Council, follows them through 2009. (The authors, who are continuing to follow the students as they enter the work force, hope to publish a sequel to Academically Adrift within three years.) The book’s bottom line: Forty-five percent of students in the study failed to significantly improve their writing and critical-thinking skills, as measured by the Collegiate Learning Assessment, during their first two years of college. The news in the updated report is not much sunnier: Through their senior years, 36 percent of students failed to improve.
But for these purposes, how reliable is the Collegiate Learning Assessment? The 10-year-old nationally normed test, commonly known as the CLA, is widely admired because it uses a pure essay format. There are no multiple-choice questions, and its creators say the format provides an unusually sophisticated window into students’ skills.
The test has also been criticized, however. The most common criticism—that colleges test disconnected cross-sections of freshmen and seniors, and therefore lack true apples-to-apples comparisons—does not apply to Academically Adrift. In Mr. Arum and Ms. Roksa's study, the same students took the test three times during their college careers.
But other concerns about the CLA might suggest real limitations in the new study.
One such criticism is that students don't have much motivation to take the CLA seriously. If some students, especially seniors with their eyes on graduation, float through the test without much effort, how valid are their scores? The authors reply that when they take into account questionnaire data about the attention students gave to the test and their feelings about its importance, the basic analysis does not change. Motivated students did perform better on the test, but motivated and unmotivated students were more or less evenly distributed across all majors and sociodemographic groups, so the fundamental patterns Mr. Arum and Ms. Roksa found hold even after accounting for motivation.
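To make that robustness check concrete: it amounts to fitting the same model of CLA score gains with and without the motivation measure, then checking whether the other estimates move. A minimal sketch in Python, with invented variable names and data (this is not the authors' actual analysis):

import pandas as pd
import statsmodels.formula.api as smf

# Invented data: CLA score gain, major, and self-reported test motivation.
students = pd.DataFrame({
    "cla_gain":   [0.21, 0.05, 0.30, -0.02, 0.15, 0.11],
    "major":      ["sci", "edu", "sci", "bus", "hum", "edu"],
    "motivation": [4, 2, 5, 1, 3, 3],
})

base = smf.ols("cla_gain ~ C(major)", data=students).fit()
ctrl = smf.ols("cla_gain ~ C(major) + motivation", data=students).fit()

# If the major coefficients barely change between the two fits, motivation
# is not driving the group differences -- the pattern the authors report.
print(base.params)
print(ctrl.params)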
Imperfect Instrument
Another concern is that because the CLA includes only one or two questions, the potential measurement error in scoring the tests is very high. Because so few students take the test on a given campus (typically about 100), the CLA is at best "a crude measure" for assessing differences across groups, says John Aubrey Douglass, a senior research fellow at the Center for Studies in Higher Education at the University of California at Berkeley.
Mr. Douglass says that while Mr. Arum and Ms. Roksa “point to a real problem in higher education,” the CLA is “not a very good baseline for this or other studies.”
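Mr. Douglass's point about sample size can be made with simple arithmetic: the standard error of a campus mean shrinks only with the square root of the number of test-takers, so with roughly 100 students per campus the uncertainty around each campus average is large relative to the score gains at issue. A back-of-the-envelope sketch (the student-level standard deviation of 1.0 is an illustrative assumption, not a figure from the study):

import math

student_sd = 1.0     # assume scores are standardized so the SD is 1
n_per_campus = 100   # roughly the per-campus sample Douglass cites

# The standard error of the campus mean shrinks only with sqrt(n).
se_mean = student_sd / math.sqrt(n_per_campus)   # 0.10 SD

# A 95% confidence interval on one campus mean then spans about
# +/- 0.20 SD, comparable in size to the average score gains discussed
# below -- which is why per-campus comparisons at this sample size are crude.
print(f"SE of campus mean: {se_mean:.2f} SD")
print(f"95% CI half-width: {1.96 * se_mean:.2f} SD")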
Mr. Arum responded to such criticisms during his talk at the San Francisco meeting. He called the CLA a “solid instrument” but conceded that it is imperfect. He and Ms. Roksa say they are confident in their general findings, though, because their results parallel those of the recently concluded Wabash National Study of Liberal Arts Education.
That study used the ACT’s Collegiate Assessment of Academic Proficiency, or CAAP, a test that attempts to measure skills similar to those covered by the CLA. Unlike the CLA, the CAAP is in a multiple-choice format, which means that there are fewer concerns about measurement error. The results of the two studies were similar: In the Wabash study, students’ CAAP scores improved by 0.11 standard deviations, on average, between their freshman and sophomore years. In the Academically Adrift study, students’ CLA scores improved, on average, by 0.18 standard deviations. (Scholars often use standard deviations, a statistical measure of spread, to compare performance on different normed tests.)
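For readers unfamiliar with the convention: expressing a gain in standard-deviation units divides the average score change by the spread of baseline scores, which is what lets differently scaled tests such as the CLA and CAAP be compared. A worked example with made-up scores (the numbers illustrate only the arithmetic):

import statistics

freshman  = [1020, 1100, 980, 1150, 1060]   # made-up baseline scores
sophomore = [1050, 1130, 1000, 1180, 1080]  # same students, a year later

mean_gain = statistics.mean(sophomore) - statistics.mean(freshman)
spread = statistics.stdev(freshman)          # SD of the baseline scores

# Gains like the 0.11 (CAAP) and 0.18 (CLA) reported above are on this
# standardized scale, so they can be compared across different tests.
print(f"{mean_gain / spread:.2f} standard deviations")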
Some skeptics have a more fundamental problem with the CLA. They say its entire purpose—measuring reasoning and writing abilities in the abstract—is misbegotten. Critical-thinking skills are deeply entwined with discipline-specific knowledge, they say, so it makes no sense to use the same test to measure the writing and reasoning abilities of an engineering major, a biology major, a history major, and an education major. If that critique is valid, then Academically Adrift’s argument might be not only obvious but circular: Students do better on the CLA if they major in departments where they are asked to do a lot of CLA-like tasks.
But the students who had the strongest CLA-score gains during college in the Academically Adrift study were actually those who majored in science and mathematics, departments where they are not necessarily required to write many essays. Mr. Arum and Ms. Roksa suggest that time on task is the biggest factor here. Students in those departments do a lot of homework, and their relatively heavy engagement with their schoolwork seems to lead to broad improvements in their reasoning skills.
Scarce Resources?
In Academically Adrift, Mr. Arum and Ms. Roksa devote several pages to faculty members’ incentives and how those might affect the quality of education. Because they want to improve their national rankings, four-year colleges generally push professors to improve their research productivity above all else. Insofar as they assess faculty members’ teaching skills at all, they often use student course-evaluation tools that Mr. Arum and Ms. Roksa regard as deeply flawed.
“The incentives in higher education are completely misaligned for academic rigor,” Mr. Arum said in San Francisco. If students choose easy departments, for example, administrators reward those departments for their enrollment growth.
But some readers argue that Mr. Arum and Ms. Roksa have underplayed a crucial part of the context of faculty life. Fewer and fewer instructors are on the tenure track, they point out, and faculty members’ professional roles in shaping the curriculum are eroding.
Meanwhile, resource gaps among colleges are growing. Elite private universities, for example, spend vastly more on each student than community colleges do. The question of institutional inequality has been “put behind a cloak of invisibility,” said Vicki W. Legion, an instructor of health education at the City College of San Francisco, during the question-and-answer session after Mr. Arum’s talk.
Mr. Arum replied that he did not believe there was any simple connection between resource allocation and the quality of student learning.
“Higher education is complicated because there are so many different types of institutions,” he said. “We have some colleges where tuition has been rising at twice the rate of inflation during the last decade. It’s hard to believe that a lack of resources there is responsible for this problem.
“Now, it might be that there are problems of resource allocation, where colleges and universities are not investing in instruction and full-time faculty,” he continued. “Then there are other institutions that really are being starved of resources, and where the ranks of full-time faculty members have been decimated. So it’s complicated. Resources matter, but they’re not the only part of the puzzle.”
Federal Role
In Mr. Arum’s public appearances, his voice grows most passionate when he attacks the federal government for not spending enough on research in college-student learning. The major federal longitudinal studies of college life have not included any measures of actual learning outcomes.
The Academically Adrift study was conducted entirely with private money. “It is a shame and a disgrace,” Mr. Arum says, “that the federal government has not made that kind of data available for social scientists.” It would be an easy matter, he says, for the government to include a test like the CLA or the CAAP the next time it does a major national longitudinal study.
But Clifford Adelman, a senior associate at the Institute for Higher Education Policy who once oversaw the U.S. Department of Education’s longitudinal college studies, says it would be unwise and infeasible to do that.
“As I understand the proposal for a uniform higher-education assessment,” he says in an e-mail message, “it would be given to a sample-of-the-sample (just as the CLA does), consisting of paid volunteers (this, as any psychometrician will tell you, will contaminate results) who have either SAT or ACT scores as reference points on which exam scores can be regressed (thus enshrining our ‘beloved’ SAT and ACT).
“The ironies and contradictions here,” Mr. Adelman concluded, “are enough to cloud the whole enterprise.”
Mr. Arum replies that whatever the imperfections of the CLA or similar tests, learning more about the effects of college classrooms is still worth a significant investment of public dollars. “There is no reason,” he says, “why we should have this kind of data for K-12 education but not for higher education.”
Provosts and faculty leaders, too, are still wrestling with the book. One person who is ambivalent is Victor Anand Coelho, associate provost for undergraduate education at Boston University.
“Academically Adrift is a sort of veiled qualitative critique of higher education, even though it’s standing on a huge amount of quantitative data,” he says. “It doesn’t move into some of the more interesting things that are presenting real results.”
The book does not plumb deeply enough, Mr. Coelho says, into emerging kinds of team-based instruction that have proved their effectiveness in the health sciences, among other fields.
Ralph A. Wolff, president of the Western Association of Schools and Colleges, is among those who see substantial limitations in the CLA. But those limitations are no reason for colleges to ignore Academically Adrift, he says. “The authors pulled together a whole variety of data, not just the CLA. I think we have to take it seriously. It’s a challenge to us to understand what kinds of practices make a real difference in learning.”
Mr. Arum looks forward to exploring such questions for a long time to come. “We’re just grateful,” he says, “that so many people in higher education have been open to this kind of conversation.”
