The latest issue of Education Week, published today, carries the article "Education Research Could Improve Schools, But Probably Won't" by Ronald A. Wolk, chairman of the board of Editorial Projects in Education, the nonprofit organization that publishes Education Week.
It offers a skeptical view of the influence that education research could have on improving schools.
Many of Wolk's arguments about education research and its vicissitudes are on target and should concern this community of specialists.
The full text appears below.
Education Research Could Improve Schools, But Probably Won’t
By Ronald A. Wolk
In my idealistic days 25 years ago, I believed that education research would lead us to the promised land of successful schools and high student achievement.
Many folks still believe that, including the president of the United States, who insists he is determined to make education an “evidence-based field,” “a scientifically based practice.” (Despite the fact that he has long denied global warming, opposes embryonic-stem-cell research, and wants to include the teaching of “intelligent design” alongside evolution in schools.)
As much as I hate to say it—and I truly hope I am wrong—I no longer believe it, and here’s why:
Research is not readily accessible—either physically or intellectually—to the potential users. Summaries of major studies appear in periodicals like Education Week, but the detailed results (usually written for other researchers in academic-speak) are usually available only in separate reports or in relatively low-circulation journals that don’t reach those who most need to know.
Even if research findings were widely available and written in clear prose that even a dimwit like me could understand, the reports would not be widely read. Most teachers are not consumers of research, nor are most principals or superintendents.
And even if educators and policymakers did read all the studies in a timely fashion, schools and education practice would not change very much, mainly because making significant changes means altering value structures, disrupting routines, and teaching old dogs new tricks.
Moreover, researchers seem to delight in neutralizing each other. That's easier to do in the social sciences than in the physical sciences because there are so many uncontrollable variables. And the bigger the question addressed, the more vulnerable the findings.
When one study claims small classes boost student achievement, another insists they do not. One study finds social promotion harmful; another says retention hurts children more. Money matters; no it does not. Vouchers work; no they do not. And on and on.
This makes it easy for policymakers and practitioners to get off the hook, because they can always find research results to rebut those they don’t agree with. And it makes it tougher on foundations trying to decide where their grants will make the most positive difference.
When some entrepreneurial soul proposes trying something different from what we have been doing in traditional schools for a century, naysayers immediately warn that there is not enough research to justify such an experiment and remind us that “it is immoral to use other people’s children as guinea pigs.”
By some perverted logic, we are told that we do not have enough research to justify trying something, but if we do not try it, how will we ever get any data to assess whether it works?
Research rarely leads to significant change because it is often expensive to apply or is a threat to the status quo. Good professional development may really improve teaching, but it can be terribly costly. Small classes may boost student achievement, but they increase costs.
If a major study found that public charter schools were outperforming traditional public schools by a country mile, the teachers’ unions would still fight them to the death and use all of their influence in state legislatures to help snuff them out.
In rare cases where research findings are neither too costly nor too controversial, and are therefore embraced by policymakers, they are often applied so ineptly that they are ineffective—or worse, they wind up doing more harm than good.
The textbook example in recent years is the proposal of then-Gov. Gray Davis of California to extend the limited class-size-reduction measure enacted by his predecessor, former Gov. Pete Wilson, to cover all students.
I have often tried to picture how the governor and his aides reached that decision. The only uncynical explanation I can come up with is that they must have been smoking something. Was there nobody in the room who raised crucial questions such as whether there were enough teachers or classrooms available, or whether this was the best use of limited resources?
The federal No Child Left Behind Act is a more recent and powerful example. Based to a fair degree on research and conventional wisdom, the law’s good intentions have been undermined by its heavy-handed implementation.
I find much education research suspect because it depends so heavily on the flawed measure of standardized-test scores. In most important studies I have seen over the years, the research findings are based solely on student test scores. The limitations in the metric devalue the findings.
I have listened to the liturgy of psychometricians enough to understand why researchers rely so heavily on test results. But scores on standardized tests are not a true or reliable measure of student learning. They do not measure many of the things we hope schooling will produce in children, like good habits of mind and behavior, and they do not measure Howard Gardner’s other “intelligences,” like artistic talent, kinesthetic ability, and social skills.
Finally, efforts to apply research findings are not likely to produce the desired outcomes because the educational system, like a combustion engine, will not work efficiently if any of its critical parts are broken. Most would agree, for example, that schools will not succeed without good teachers. But you need good salaries, good working conditions, and radically improved teacher-preparation programs to attract smart students and produce good teachers. You cannot get those conditions, however, without having adequate resources, altering practices in higher education, and making basic changes in the structure and operations of schools. In short, the broken components of the system have to be addressed simultaneously.
Deborah J. Stipek, the dean of Stanford University’s graduate school of education, published an essay on education research in these pages several years ago that made some of the points I make here ("Scientifically Based Practice," March 23, 2005). But one statement in her essay boggled my mind. She wrote: “[B]asing decisions on research and data is a new concept. Both the desire to consult research and the skills to interpret it will need to be developed within the teaching community.”
If the dean is correct, and she probably is, one wonders what educators and teacher-preparation programs have been doing for the past century.
It is easier to criticize than to offer remedies, but Dean Stipek’s comment suggests at least one: Researchers could do more to create an audience for their work. The people who conduct education research and follow it are often the same people who prepare teachers in education schools and departments. What better context for preparing teachers than the most important and timely research on the field they are about to enter? What better opportunity to cultivate in aspiring teachers an interest in research?
Another improvement might be more emphasis on longitudinal studies. These are expensive and time-consuming, but they also can be powerful. Researchers are still feasting off data from the National Education Longitudinal Study (NELS) and the High/Scope Perry Preschool study. Wouldn’t it be helpful to have data on what has happened to the graduates of alternative schools during the past 20 years, and to follow the graduates of charter schools for the next 20 years, instead of relying on standardized-test scores that are usually incompatible with these schools’ educational philosophies and methods?
In the mid-1990s, I was a member of the National Research Council committee that produced SERP—the Strategic Education Research Partnership program. It was an attempt to deal with education’s systemic challenge. Could we identify the highest-priority questions, those whose answers would lead to better schools and improved learning, and get the education and policy community to agree? Could a carefully constructed program of strategic research priorities lead to an integrated assault on education’s systemic problems? Could government and foundations be persuaded to provide long-term funding for such an effort?
If those questions were ever to be answered affirmatively, maybe education research could improve education. Maybe, if there were more of a consensus in the research community, there would be more positive outcomes, both in legislatures and in schools.
Ronald A. Wolk is the chairman of the board of Editorial Projects in Education, the nonprofit corporation that publishes Education Week, and was the newspaper’s founding editor. He also is the chairman of the board of the Big Picture Company, in Providence, R.I. The views expressed here are his own.
Vol. 26, Issue 42, Pages 38-39