The latest Times Higher Education Supplement (THES) – QS world university ranking (2007) includes only 11 Latin American universities among the top 500.
175: Universidade de São Paulo, Brazil
177: Universidade Estadual de Campinas, Brazil
192: Universidad Nacional Autónoma de México
239: Pontificia Universidad Católica de Chile
264: Universidad de Buenos Aires
312: Universidad de Chile
338: Universidade Federal do Rio de Janeiro, Brazil
Further down, between places 400 and 500, the following universities appear, listed in alphabetical order by their names (in English):
Austral University, Argentina
Belgrano University, Argentina
Pontificia Universidad Católica del Perú
Pontifícia Universidade Católica do Rio de Janeiro, Brazil
Top 50 universities in the world in social sciences, THES – QS Ranking, 2007
Top 50 universities in the world in arts and humanities, THES – QS Ranking, 2007
Top 50 universities in the world in natural sciences, THES – QS Ranking, 2007
Top 50 universities in the world in life sciences and biomedicine, THES – QS Ranking, 2007
Top 50 universities in the world in technology and engineering, THES – QS Ranking, 2007
See below for the text explaining the methodology employed, which, among other changes, introduces the novelty of replacing the count of scientific publications provided by Thomson-ISI with that of Scopus.
Related resources
Links to national and international rankings
Top universities by field of knowledge, 2007, Institute of Higher Education, Shanghai Jiao Tong University
Methodology applied in 2007
Background
The THES – QS World University Rankings began in 2004 and have attracted a great deal of comment, reaction and feedback since their first publication in October of that year. Since then the project has assimilated a great many new ideas and has evolved into a stronger, more robust measure of comparative international university quality. The inclusion of the employer review in 2005 and the increased response to the Peer Review questionnaire are examples of these enhancements. The process continues: aside from improved response rates in both survey elements, there are four key developments in 2007 that have had an impact on the results. In the simplest possible terms, this document outlines those changes.
DEVELOPMENT: Peer reviewers prevented from promoting their own university.
Since the inception of the rankings, the Peer Review has been their centrepiece, so even the smallest alteration to its compilation can have a major effect on the overall performance of institutions. In the first three years of the rankings, no restrictions were placed on which universities peer reviewers could identify as excellent, meaning that universities could potentially encourage their own academics to sign up and complete the questionnaire in their favour. Whilst there has been no evidence to suggest a deliberate effort by any single institution in this respect, as awareness of both the ranking and the science behind it has become more widespread, it has become necessary to remove a reviewer's own university from the list they are presented with in the questionnaire.
EFFECT: The effect of this development will be most profound where the peer review has received a particularly impressive volume of responses from one country in comparison with other countries in the same region. The effect is likely to be further exaggerated if that country has only a small number of institutions in the original list. An academic from the University of Arkansas is perhaps less likely to select their own institution (one of more than 60 US institutions on the list) than an academic from Nanyang Technological University (one of just two from Singapore). The sketch below illustrates the filtering step.
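The following is a minimal Python sketch of the filtering step described above; the function name and the simple case-insensitive name matching are assumptions made for illustration, not the actual THES – QS survey implementation.

def institutions_for_reviewer(reviewer_affiliation, candidate_institutions):
    """Return the questionnaire list with the reviewer's own
    institution removed (simple case-insensitive name match)."""
    return [inst for inst in candidate_institutions
            if inst.lower() != reviewer_affiliation.lower()]

# Illustrative only: a reviewer in a two-institution system loses
# proportionally more local options than one in a sixty-institution system.
singapore = ["Nanyang Technological University",
             "National University of Singapore"]
print(institutions_for_reviewer("Nanyang Technological University", singapore))
# -> ['National University of Singapore']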
DEVELOPMENT: Switch to Scopus from ESI (Thomson) for citation data
For the 2006 results the time period for citation counts was slashed from 10 years to 5 years in response to feedback suggesting that the rankings ought to be a more contemporary measure of university strength. In 2004, when the rankings began, the only reputable source of citation data was Thomson’s Web of Science – the ESI is an associated, simplified product that provides an indication of research strength by university and seemed the most appropriate basis for our citation indicators at the time.
Coincidentally, Scopus was also born in 2004 and has evolved rapidly since that time. In 2007, Scopus has been able to answer many of the questions left unanswered in three years of working with ESI: we have been able to find data for many institutions that were not represented in this indicator in the past, and we have been able to query the entire Scopus database rather than just the slices of Web of Science represented by ESI. The general consensus in published reviews of both systems (e.g. Fingerman 2006) seems to be that they both have their merits and can be used to complement one another. The vast majority of criticism of Scopus seems to relate to its tracking of research, and in particular of citations, from before 1996; but since we are only concerned with the most recent complete 5-year window, any weaknesses in this respect have no bearing here.
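As a rough illustration of the 5-year window mentioned above, the Python sketch below counts citations only for papers published in the most recent complete five-year span. The record format is an invented simplification for the example, not a real Scopus export or API response.

def citations_in_window(records, end_year, window=5):
    """Sum citations for papers published within the most recent
    complete `window`-year span ending at `end_year` (inclusive)."""
    start_year = end_year - window + 1
    return sum(r["citations"] for r in records
               if start_year <= r["year"] <= end_year)

papers = [
    {"year": 2001, "citations": 40},  # outside the 2002-2006 window
    {"year": 2003, "citations": 25},
    {"year": 2006, "citations": 10},
]
# For a 2007 ranking the most recent complete window is 2002-2006,
# so the 2001 paper is excluded from the count.
print(citations_in_window(papers, end_year=2006))  # -> 35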
EFFECT
The Scopus database has a less pronounced bias towards the US, resulting in a reduced advantage in their favour in this indicator.
Scopus covers a larger number of papers and journals overall, leading to greater representation of lesser-known universities and of institutions from academic systems with less emphasis on publication.
Scopus covers more sources in languages other than English, resulting in better numbers for institutions with large volumes of high-quality research in their own language.