One of the leading voices and analysts in the debate over global university rankings, Ellen Hazelkorn (her most recent book is pictured), vice president for research and enterprise and head of the Higher Education Policy Research Unit at the Dublin Institute of Technology, argues in the following column that the perverse effects of these rankings persist, but that there are early signs of greater concern for the capacity of national systems rather than for isolated universities alone.
Focusing on the Total Quality Experience
The Chronicle of Higher Education, May 15, 2012, 3:28 pm
By Guest Writer
The following is a guest post by Ellen Hazelkorn, vice president for research and enterprise and head of the Higher Education Policy Research Unit at the Dublin Institute of Technology. She is the author of Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence (Palgrave Macmillan).
----------------------------------------
The university rankings debate is heating up – again. Hopefully, this time it will be different and with better outcomes for everyone.
At a time when many nations are experiencing high levels of public and private debt and higher education is in great demand, university rankings have encouraged a preoccupation with the trials and tribulations of a handful of "world class" universities. This is having a profound, and perverse, effect on higher-education policy making, universities, and public opinion.
Rankings privilege the most resource-intensive and expensive universities on the assumption that such universities are the surest route to success in the global economy and world science. Thus, governments worry their institutions are not elite or selective enough, while university leaders say too much attention has been directed at widening participation. As a result, many governments are making the insidious connection between excellence and exclusiveness. They are busy reshaping their systems and institutions, their educational priorities and societal values to conform to indicators designed by others for commercial or other purposes. The public's interest has become confused with self-interest.
There are, however, some small signs that the pendulum is beginning to swing.
I have argued many times, in these columns and elsewhere, for the importance of focusing on the capacity of "the system as a whole" rather than simply on the performance of a few elite institutions. I have posed the policy challenge in terms of promoting a "world class system" rather than "world class universities."
The Australian Review of Higher Education pinned its colors clearly to this mast, saying “we must address the rights of all citizens to share” the benefits of higher education. The Irish minister of education and skills said similarly in April of this year: “We need to maintain a clear focus on system performance overall rather than a narrower focus on individual institutional performance.”
A new ranking developed by the Melbourne Institute of Applied Economic and Social Research follows a path previously furrowed by Quacquarelli Symonds (QS), the Lisbon Council, and Jamil Salmi, former tertiary education coordinator of the World Bank. In their different ways, these initiatives are attempting to assess the quality, impact and benefit of the higher education system as a whole.
In 2008, the Lisbon Council, an independent think tank based in Brussels, created the "University Systems Ranking: Citizens and Society in the Age of Knowledge," and QS developed its "National System Strength Rankings"; both have been one-off ventures. The former measured the performance of 17 OECD countries against six criteria: inclusiveness, access, effectiveness, attractiveness, age-range, and responsiveness, while the QS ranking used four broad sets of indicators: system, access, flagship, and economic. Both sought to measure participation and government investment levels.
Salmi pointedly devised a benchmarking tool rather than a ranking in 2011. His aim was to evaluate how well a tertiary education system produces expected outcomes, and the key inputs, processes, and enabling factors required to bring about the favorable outcomes. He used two broad sets of indicators: "system performance" (attainment; learning achievement; equity; research; knowledge and technology transfer; values, behavior, and attitudes), and "system health" (macro environment; leadership at the national level; governance and regulatory framework; quality assurance framework; financial resources and incentives; articulation and information mechanisms; location; digital and telecommunications infrastructure).
The new "U21 Ranking of National Higher Education Systems" is more ambitious than any of these models. It has 20 criteria grouped under four main headings, each weighted differently in the final aggregate score:
• Resources, 25 percent (investment by government and private sector on teaching and research);
• Output, 40 percent (research and its impact, and ability of system to produce an educated workforce which meets labor market needs);
• Connectivity, 10 percent (international students and proportion of articles co-authored with international collaborators);
• Environment, 25 percent (government policy and regulation, institutional and socio-economic diversity and participation opportunities).
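The weighted-aggregate arithmetic behind the headings above can be sketched in a few lines. The four headings and their weights come from the article; the per-country scores below are purely hypothetical illustration data, not real U21 figures.

```python
# Sketch of a U21-style weighted aggregate score.
# Weights are taken from the article; country scores are made up for illustration.

WEIGHTS = {
    "resources": 0.25,     # investment by government and private sector
    "output": 0.40,        # research impact, educated workforce
    "connectivity": 0.10,  # international students, co-authored articles
    "environment": 0.25,   # policy, regulation, diversity, participation
}

def aggregate_score(scores):
    """Combine per-heading scores (0-100) into a single weighted total."""
    return sum(WEIGHTS[heading] * scores[heading] for heading in WEIGHTS)

# Hypothetical countries: strong resources vs. strong output.
country_a = {"resources": 80, "output": 70, "connectivity": 90, "environment": 85}
country_b = {"resources": 60, "output": 90, "connectivity": 50, "environment": 70}

print(aggregate_score(country_a))
print(aggregate_score(country_b))
```

The sketch makes the article's later point concrete: whatever the individual indicator values, the scheme collapses them into one number per country, which is what allows a sequential ranking in the first place.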
It hopes to overcome problems of insufficient data in future editions; this should allow more countries to be included beyond the initial forty-eight.
With the exception of Salmi's, these initiatives are rankings rather than benchmarking exercises. What's the difference, and does it matter? Benchmarking uses comparison as a strategic tool, helping governments, university leaders, and others to systematically compare practice and performance with peer institutions or countries. It can also be used as a diagnostic tool underpinning a program of continuous improvement. In contrast, rankings measure higher-education quality through quantification; by aggregating the scores and ordering them sequentially, they establish a hierarchy of performance.
System-ranking is certainly better than concentrating on individual institutions, but it still reduces quality and excellence to a single number, and de-contextualizes national circumstances. We still don't have sufficient understanding of how these different factors work over time to improve the student experience or overall quality, or what policy choices work best in different circumstances.
What are we trying to accomplish? I've defined the goal as "making the system world-class," with the following characteristics:
• Open and competitive education, offering the widest chance to the broadest number of students;
• Coherent portfolio of horizontally differentiated high-performing and actively engaged institutions – providing a breadth of educational, research, and student experiences;
• Developing knowledge and skills that citizens need to contribute to society throughout their lives, while attracting international talent;
• Graduates able to succeed in the labor market, fuel and sustain personal, social and economic development, and underpin civil society; and
• Operating successfully in the global market, international in perspective, and responsive to change.
Without a doubt, it is important that governments and the public can compare national performance. These initiatives are focusing our attention on the capacity of the higher-education system to educate all students and deliver benefits to the whole of society–in other words, to provide a total quality experience. They are a step in the right direction.