Graduation Rates: The Debate in the U.S.
March 5, 2012


An important article that shows the need for, and the difficulties and limitations of, measuring graduation rates.
The Rise and Fall of the Graduation Rate
By Jeff Selingo, The Chronicle of Higher Education, March 2, 2012
A college’s graduation rate is such a basic consumer fact for would-be students these days that it’s difficult to imagine that the federal government didn’t even collect the information as recently as the early 1990s.
If not for two former Olympic basketball players who made their way to Congress and wanted college athletes to know about their chances of graduating, we might still be in the dark about how well a college does in graduating the students it enrolls.
In the late 1980s, the two basketball players, Rep. Tom McMillen and Sen. Bill Bradley, wanted to force colleges to publish the graduation rates of their athletes. Although the NCAA eventually agreed to publish the information on its own, the idea still made its way into a broader disclosure bill in 1990 that also required colleges to publish crime rates at their institutions. By then, the graduation-rate provision was expanded to include all students. The thought was that you can’t compare athletes without knowing the rate for everyone else on the campus.
Because federal regulators debated exactly how to collect the information, it would be another five years before colleges actually started reporting their graduation rates.
More than 15 years later, the debate over how to measure college graduation rates and what they measure rages on.
Even before the Education Department settled on regulations in the mid-1990s, the definition it was working from in the law was quickly becoming outdated. The law defined the rate as the percentage of full-time, first-time students who enrolled in the fall and completed their degree within “150 percent of normal time”—six years for students seeking a bachelor’s degree.
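To make the arithmetic concrete, here is a minimal Python sketch of that federal definition, assuming a simplified cohort record (the Student fields are illustrative, not an actual federal data layout). Note that a student who transfers and finishes elsewhere shows up here as a non-completer, which is precisely the limitation discussed below.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Student:
    full_time: bool                   # enrolled full time in the entering fall term
    first_time: bool                  # no prior college enrollment
    years_to_degree: Optional[float]  # None if no degree from this institution

def federal_grad_rate(cohort: List[Student], normal_time_years: float = 4.0) -> float:
    """Share of the full-time, first-time fall cohort that finishes within
    150 percent of normal time (six years for a bachelor's degree)."""
    cutoff = 1.5 * normal_time_years
    tracked = [s for s in cohort if s.full_time and s.first_time]
    finished = [s for s in tracked
                if s.years_to_degree is not None and s.years_to_degree <= cutoff]
    return len(finished) / len(tracked) if tracked else 0.0
```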
“The definition made sense for what higher education was in the late 1980s, full-time and residential,” says Terry W. Hartle, who at the time was working for Sen. Edward M. Kennedy, a major figure in higher-education policy making.
During the 1990s, higher education was in a state of rapid transition, with more students going to college part time and transferring between institutions, and more adults returning for their degrees. What were called nontraditional students then are today’s traditional students. But very few of them are captured by the federal definition of the graduation rate.
“What everyone underestimated was how challenging of an idea this was going to be to implement and how quickly it was going to be obsolete,” says Mr. Hartle, now senior vice president for government relations at the American Council on Education.
Every year, the method by which the government measures the graduation rate gets further and further from what’s actually happening on campuses. For example, about one-third of students now transfer from the college where they started, according to a recent report from the National Student Clearinghouse Research Center.
A better way of collecting graduation rates is already well known: a “unit-record” tracking system that would follow students from institution to institution for the full length of their college careers. Attempts to create such a system were defeated in Congress several years ago, in part by lobbyists for private colleges who worried about the privacy of student records.
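What unit-record tracking would change can be sketched under the same caveat: the data below are hypothetical, with enrollment and completion events keyed by a persistent student identifier and pooled across institutions instead of held by a single campus.

```python
# Hypothetical event logs keyed by a persistent student ID; in a real
# unit-record system, every institution a student attends would report in.
enrollments = [("s1", "College A", 2005),   # (student, institution, entry year)
               ("s1", "College B", 2007),   # s1 transfers
               ("s2", "College A", 2005)]
completions = [("s1", "College B", 2010)]   # (student, institution, degree year)

def completers_anywhere(start_year: int, window: int = 6) -> set:
    """Entering-cohort students who earned a degree at any institution
    within the window, the completions a single-campus rate misses
    whenever the degree follows a transfer."""
    cohort = {sid for sid, _, yr in enrollments if yr == start_year}
    return {sid for sid, _, yr in completions
            if sid in cohort and yr - start_year <= window}
```

Under the current definition, s1 counts as a dropout at College A and never enters College B's cohort; pooled records count the 2010 degree once.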
“We have a battle between two competing goods, better data on one hand and privacy on the other,” says Sarah A. Flanagan, vice president for government relations at the National Association of Independent Colleges and Universities.
What’s puzzling about the opposition by some in the higher-education establishment to collecting more accurate graduation data through a unit-record system is that under such a method the rates at most institutions would probably improve.
One theory about why some private-college presidents oppose an improved system is that it would most likely raise the bottom-line numbers at public colleges. That would close the gap between the two groups and raise questions about the argument perpetuated by less-selective private colleges that they cost more but at least their students get their degrees on time.
Input vs. Output
But even if improved data could be published, the question would remain whether the rate actually measures the quality of an institution. Colleges that continue to do poorly, even under a better measurement, will still offer other excuses. A likely one is that the rate depends too much on the types of students an institution admits.
Indeed, several higher-education experts have long argued that while the graduation rate is often hyped as an output measure, it’s really an input measure, like SAT scores and class rank. But Vincent Tinto, a professor of education at Syracuse University and one of the nation’s leading experts on college completion, says that even institutions with similar selectivity in admissions have substantial differences in graduation rates.
“There is a lot more to the ability of colleges to graduate their students than is reflected in the students they admit,” says Mr. Tinto, the author of a forthcoming book on the subject, Completing College: Rethinking Institutional Action (University of Chicago Press).
When giving advice to prospective college students and their parents, Mr. Tinto tells them to seek out an institution’s graduation rate and “then ask what is it for students like them.” That’s an important figure for colleges to supply to applicants, Mr. Tinto says, given that rates differ within institutions based on factors like gender, race and ethnicity, and major. And one dimension the rate doesn’t capture, he adds, is student intention. Some students enter an institution, especially two-year colleges, planning to transfer.
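Mr. Tinto's "students like them" figure is the same rate disaggregated by a student attribute. A minimal sketch, assuming each cohort record carries a subgroup label (major, gender, and so on) and an on-time completion flag:

```python
from collections import defaultdict

def rate_by_group(records):
    """records: (subgroup label, finished-on-time flag) pairs for one cohort.
    Returns the completion rate per subgroup, e.g. per major."""
    totals = defaultdict(lambda: [0, 0])   # label -> [finished, total]
    for group, finished in records:
        totals[group][0] += int(finished)
        totals[group][1] += 1
    return {g: done / n for g, (done, n) in totals.items()}

# rate_by_group([("engineering", True), ("engineering", False),
#                ("history", True)])  ->  {"engineering": 0.5, "history": 1.0}
```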
One place you sometimes see wide variances between graduation rates is within state public-college systems, although they often have comparable admissions standards. Take Pennsylvania, for example, where graduation rates of the 14-campus public-college system range from 24 percent (Cheyney University) to 65 percent (West Chester) and many points in between. To the system’s chancellor, John C. Cavanaugh, the rates are nothing more than “efficiency of throughput.” They might tell him that some campuses provide more academic support services to students, but they don’t “answer the question: What do students know, and how well do they know it?”
True, graduation rates don’t determine the quality of a degree. Yet students who start college but don’t finish are typically no better off than those who never started, and in some cases, if they took on debt, might be worse off. Given the subsidies federal and state governments provide to colleges, both have a stake in making sure that students finish what they started. And a college credential remains one of the only signals to the job market that a potential employee is ready.
“In a society that cares about the credential, finishing college matters,” maintains Mark Schneider, a former U.S. commissioner for education statistics and now vice president at the American Institutes for Research. “Employers don’t advertise they want six years of college. They want a degree.”
Policy Pressure
A major fear on the part of higher-education leaders who play down the impact of the graduation data is how, and by whom, the rate is used. At least some college officials support using a graduation measure as a comparative consumer tool. For example, more than 300 public four-year colleges have joined the Voluntary System of Accountability program, which has devised a new completion metric that includes transfer students, using data from the National Student Clearinghouse.
What worries some higher-education officials is that measurements adopted as useful tools for consumers could turn into an accountability stick for the government.
“Most of the student unit-record systems,” Ms. Flanagan says, “are being built for policy, not consumer information.” Translation: Politicians don’t just want transparency for consumers; they also want to reward institutions that do well and punish those that don’t measure up.
While Mr. Cavanaugh of the Pennsylvania state system supports a unit-record system, he too worries about the policy implications, especially for graduation rates. “You don’t want to end up with the higher-ed version of No Child Left Behind,” he says, in which the jobs and salaries of individual faculty members are dependent on the academic success of their students.
“Most faculty members went through grad school learning that rigor is how many students you fail, not how many you graduate.”



