Evaluating research: the European debate
April 2, 2022

Target talk splits European research assessment reformers

 Culture change becomes a numbers game as agreement drafters try to pin down progress without spooking major players
March 30, 2022

European efforts to reform research assessment have hit the thorny issue of targets and timelines, with at least one major player threatening to walk away if progress is tied to numbers.

About 300 organisations told the European Commission they would join a coalition to reform research assessment, with a separate group of around 20 invited to actively draft a reform agreement.

There is broad consensus that the judgement of research contributions is still too dependent on counting journal papers. More nuanced alternatives exist, but assessors and institutions are often cautious about adopting changes that will materially affect thousands of careers.

“It’s a thing we think the science and the research funders should be free to decide: what tools or instruments would fit them best to do their job,” said Tobias Grimm, who represents the German Research Foundation (DFG) in the smaller group of drafting organisations.

“The goal of the initiative is to get away from quantitative targets for the science community; we want to get away from counting impact factors and so on. If you replace it with a new set of targets for funders and for science, that’s not the road we would follow.”

The commission suggested developing broader assessment criteria, with recognising peer review work and allocating resources for reform among the actions to which signatory institutions could commit.

Dr Grimm said that the DFG would walk away from the agreement if those commitments came with concrete targets. “We don’t need to align anything with anything. We need common principles and we need to confirm ourselves there are better and not so good ways of doing research assessment, but we won’t prescribe it to someone,” he said.

That opposition is at odds with some of those responsible for drafting the agreement, which is designed to go beyond previous declarations on research assessment, such as those named for San Francisco, Leiden and Hong Kong.

“We do want to link it to some kind of target or timeline,” said Karen Stroobants, a consultant who is helping draft the agreement. “We are realistic that that probably will mean that we will not from the very start get everyone on board. That’s the balance, of course: if you want to have a certain level of ambition it will be difficult to have absolutely everyone on board from day one.”

Science Europe’s secretary general, Lidia Borrell-Damian, said that while it was “ideal” to keep all organisations on board, the important thing was to make sure universities, research organisations and funders from across Europe were represented. “Probably there will be a timeline suggested, but it will just be indicative,” she said.

The agreement is also due to sketch out a monitoring system, but this must account for roadblocks that institutions themselves cannot remove, such as where recruitment rules are bound by national regulations.

Bert Overlaet, who represents the League of European Research Universities in the drafting group, said that the agreement would be “dead before it has started” if European Union governments do not back it. “The institutions in general have a fear that they will engage in a process with the commission and then the member states won’t follow,” he said.

Dr Borrell-Damian said an initial plan to firm up the agreement before July had been shelved and that it would probably not be ready for signatures until autumn 2022.



Banning journal impact factors is bad for Dutch science

 Abandoning measurable evaluation criteria will make judgements more political and more random, say Raymond Poot and Willem Mulder
August 3, 2021

Recently, Utrecht University announced that it will ban journal impact factors from its procedures for evaluating scientists. Such measurable performance indices are to be abandoned in favour of an “open science” system, which prioritises the collective (the team) at the expense of individual scientists.

However, we are concerned that Utrecht’s new “recognition and rewards” system will lead to randomness and a compromising of scientific quality, which will have serious consequences for the recognition and evaluation of Dutch scientists. In particular, it will have negative consequences for young scientists, who will no longer be able to compete internationally.

Utrecht’s assertion that the journal impact factor plays a disproportionately large role in the evaluation of researchers is misleading. For a considerable number of research fields, impact factors are not that relevant. To account for field-specific cultures, the field-weighted citation impact score was developed, which compares the total citations actually received by a scientist with the average for the subject field. For example, research groups in medical imaging typically publish their results in technical journals with relatively low impact factors. The development of faster MRI methods may not be groundbreaking, but it is very important. The Dutch Research Council (NWO) takes this into account in its awarding policies. Accordingly, many personal grants have been awarded by the NWO’s career development programme to medical imaging researchers who never publish in high impact factor journals.
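
As a rough illustration of how such a field-weighted score works (a simplified sketch, not the exact methodology of any particular database; the function name and figures are invented for the example), the idea is to divide the citations a researcher’s papers actually received by the citations expected for comparable papers in the same field, year and document type:

```python
def field_weighted_citation_impact(papers):
    """papers: list of (actual_citations, expected_citations_for_same_field_year_and_type)."""
    actual = sum(received for received, _ in papers)
    expected = sum(baseline for _, baseline in papers)
    return actual / expected if expected else 0.0

# Two MRI-methods papers, each cited 12 times where comparable papers average 8 citations:
# the score is 1.5, i.e. 50 per cent above the field average despite modest absolute counts.
print(field_weighted_citation_impact([(12, 8), (12, 8)]))
```

A value above 1 signals above-average influence within the field, which is why publishing in low-impact-factor venues need not count against medical imaging researchers under such a measure.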

A second misconception is that a journal’s impact factor does not correlate with the quality of its publications. An average paper in a top journal, such as Nature, Science or Cell, requires much more work than an average paper in a technical journal. Top journals get assistance from world experts and thereby safeguard high impact and quality. This does not mean that every paper in Nature is by definition better than a publication in a technical journal, but, by and large, new technologies and concepts that overthrow dogmas are published in the top journals.

For the NWO’s “Veni, Vidi, Vici” talent programmes, the application format has changed radically over the past few years. The curriculum vitae, with objective information on publications, citations, lectures and so on, has been replaced by a “narrative”. Reviewers no longer grade the proposal and are instead required to fill out lists of strengths and weaknesses, irrespective of their overall opinion of it. For some NWO competitions, CVs have been removed altogether because of the emphasis on “team science”.

The feedback from the assessment committees is disturbing. Members do not have a clue how to compare candidates, and googling their performance numbers is banned. Reviewers, often recruited from outside the Netherlands, complain about the time-consuming format and sometimes simply refuse to judge the narrative.

We believe that the NWO has the duty to allocate public funds in a way that supports the best and most talented scientists to discover new insights and innovations. We strongly support “recognition and rewards” for academics who are not exclusively science-oriented, but consider this the responsibility of the universities, not of the NWO. University HR policies must offer different career tracks for academics who excel in non-science competencies.

Quantitatively analysing a problem is an important feature of scientific practice, particularly in the medical, life and exact sciences. In these disciplines, creative solutions are sought for problems worldwide. Scientific success is therefore more easily measurable and comparable. For more qualitative sciences, it is understandable that other ways to assess success can be used. We strongly support diverse ways to evaluate different science disciplines and suggest that fields themselves determine how scientists in their discipline are assessed.

Utrecht’s policy puts a strong emphasis on open science, level of public engagement, public accessibility of data, composition of the research team and demonstrated leadership. These criteria are not scientific: they are political. Moreover, it is extremely difficult to measure them, let alone use them to conduct a fair comparison of different scientists. They should therefore not be the dominant criteria in the assessment of scientists. In particular, for the research track of the medical, life and exact sciences, internationally recognised and measurable criteria must be paramount.

The US, the world’s science powerhouse, is on a completely different trajectory from the Netherlands. Big public funders such as the National Institutes of Health (NIH) and the National Science Foundation (NSF) focus solely on scientific excellence and have not signed the Declaration on Research Assessment (Dora; also known as the San Francisco Declaration), which calls for the impact factor to be abandoned as an evaluation index.

We believe that the NWO and the Dutch universities should maintain objective and measurable standards for academics who primarily focus on research. We prefer scientists who are optimised for generating the best science, not for writing the prettiest narrative. This will be the best way both to benefit society and to safeguard the Netherlands’ favourable position in international rankings.

Raymond Poot is an associate professor (UHD) at the Erasmus University Medical Center, Rotterdam. Willem Mulder is professor of precision medicine at the Radboud University Medical Center and the Eindhoven University of Technology. This is a translated and edited version of an article that was first published in the Dutch journal Science Guide, which was signed by another 172 academics.


Most European campuses ‘use journal impact factor to judge staff’

Preliminary results of EUA survey suggest three-quarters of responding institutions draw on much-criticised metric

October 3, 2019

European universities continue to rely heavily on publication metrics – in particular, the much-criticised journal impact factor – when assessing academic performance, a study suggests.

However, preliminary results of a survey of about 200 institutions by the European University Association indicate that one of the main obstacles to reforming research assessment is resistance to change from academics themselves.

The survey, to be published in full later this month, says that research publications and attracting external research funding were rated most highly when universities were asked which type of work mattered most for academic careers, selected as being important or very important by 90 per cent and 81 per cent of respondents respectively.

Asked how academic work was evaluated for career progression decisions, publication and citation metrics ranked top, selected as important or very important by 82 per cent of respondents.

And, despite criticism of the journal impact factor – a citation-based evaluation of the periodical in which an academic’s work is published, not an assessment of the impact of the paper itself – three-quarters of institutions said that they used it to evaluate staff performance, more than any other metric. Academics argue that the journal impact factor is an unfair metric and can be open to manipulation.
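
For context, the standard two-year impact factor is a property of the journal rather than of any single paper: citations received in a given year to items the journal published in the previous two years, divided by the number of citable items it published in those years. A minimal sketch (the function name and figures are invented for illustration):

```python
def journal_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year impact factor: average recent citations per citable item, a journal-level average."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A journal whose 400 citable items from the previous two years drew 2,000 citations this year
# has an impact factor of 5.0, whatever the citation count of any individual article in it.
print(journal_impact_factor(2000, 400))
```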

Seventy per cent of respondents to the EUA survey said that they used academics’ h-index, a measure of productivity and citation impact, as part of their assessments.
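
The h-index, by contrast, attaches to the individual researcher: it is the largest number h such that the researcher has h papers with at least h citations each. A minimal sketch, with invented citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Citation counts [10, 8, 5, 4, 3] give an h-index of 4: four papers have at least four citations each.
print(h_index([10, 8, 5, 4, 3]))
```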

Bregt Saenen, the EUA’s policy and project officer, said that the widespread use of journal impact factors was “one of the most disappointing results from the survey”.

“The quality of a journal article should be assessed based on the merit of the research/article itself, not on the reputation of the journal in which the article is published,” he said.

Dr Saenen said that universities needed to move towards “a less limited set of evaluation practices to assess a wider range of academic work”.

With this in mind, Dr Saenen said that it was “encouraging” to see other areas being regarded as important or very important by respondents, such as research impact and knowledge transfer (68 per cent), supervision (63 per cent) and teaching (62 per cent).

Seventy-four per cent of respondents said that qualitative peer review assessment was an important or very important factor in career progression decisions.

However, asked what the main barriers to reviewing research assessment were, 33 per cent cited resistance to reform from academics themselves. This was one of the most popular responses, alongside the complexity of reform (46 per cent), lack of institutional capacity (38 per cent) and concern over increased costs (33 per cent).

Dr Saenen said that key barriers for universities were likely to be “accountability to research funding organisations and governments in their approach to research assessment, as well as the influence of the competitive environment in research and innovation”.

Reviewing research assessment procedures “is a shared responsibility and will require a concerted approach”, he said.

