Global drive for more open, rigorous research is growing
There is growing pushback against research systems driven by financial rewards for exciting findings, at the expense of rigour and integrity. The Open Science Framework for transparent research has now been adopted by thousands of journals and dozens of funders. A new National Institutes of Health data sharing policy applies to all grants from February this year.
“The European Union is further ahead, but the United States is making a big leap and there’s activity all around the world for these sorts of changes,” says Brian Nosek, professor of psychology at the University of Virginia in the US and executive director of the non-profit Center for Open Science.
The centre administers the Open Science Framework, an open source infrastructure that supports researchers in being more rigorous and transparent and in sharing the entire research process. It has “grown exponentially” to 600,000 users and more than 100,000 registered studies, he says.
The National Institutes of Health data sharing policy, Nosek told University World News, “is assertive and a sea change for biomedical research and being more open. The US is also undergoing a full transformation because of an Office of Science and Technology Policy (OSTP) memo requiring similar policies for data sharing across all federal agencies”.
The OSTP memo, released in August 2022, requires the results of taxpayer-supported research to be made immediately available to the American public at no cost, and directs all federal departments and agencies to implement the policy by the end of 2025.
“We won’t ever get rid of rewards for being innovative or for new discoveries, and arguably we shouldn’t. That is an important thing to do; it advances scholarship,” Nosek says. “A key goal is how to diversify the things that are rewarded so that we have a better balance between the different activities that would make research as efficient and effective as possible.”
Funders are key, providing grants for research that leads to publication – the currency of career advancement at universities, especially articles in prestigious journals. Funding is also more likely if a researcher has a good publication record. “So these are tightly linked, as is getting a job and keeping the job – the incentives at the institutional level.”
Regardless of whom they choose to fund, if funders require rigorous methods such as preregistration of research, data sharing and more open policies, they will help to shift norms in research communities toward greater transparency.
Universities are behind the times, with little being done to change practices, Nosek observes.
However, attention is emerging. In the EU, for example, the Coalition for Advancing Research Assessment (COARA) is developing new metrics and ways for researchers to be evaluated for jobs. On 29 March, COARA announced that more than 500 organisations had signed its Agreement on Research Assessment.
A complementary effort in the US is the Higher Education Leadership Initiative for Open Scholarship (HELIOS), which emerged from the National Academies of Sciences, Engineering and Medicine and aims for “a more transparent, inclusive, and trustworthy research ecosystem”, according to its website.
With commitment from more than 90 colleges and universities, HELIOS says it “represents the largest, most carefully coordinated effort to align higher education practices with open scholarship values”.
The rise of reproducibility
A core problem for research systems is that the massive increase in research globally has not generated a commensurate volume of useful new knowledge, as systems have skewed towards rewarding exciting findings and away from the hard work of verifying them, says Nosek, a world leader in the study of replicability.
For scholars concerned about the integrity of research, the past decade has been a “reckoning of self-examination across a variety of fields” to discover the extent to which published findings are reliable.
“Multiple large-scale replication efforts provided an opportunity to assess the state of replicability of evidence. Each of those projects provided evidence that has been surprising – in the sense that a surprising number of published claims that are cited, applied and used in different ways failed to replicate in good-faith, high-powered attempts to replicate those results.”
That has occurred in every field that has bothered to look, Nosek told University World News. The problems that replication efforts revealed “prompted examination of how reward systems might need to be adapted so that we can value verification in conjunction with innovation, which is a critical driver of new discovery”.
Research replication has been seen as critical to science for a long time. Researchers know that scientific claims become part of the body of knowledge by demonstrating replicability, with an independent team able to repeat the methodology and observe the same evidence.
But in practice, replication is neither reported nor rewarded much, Nosek says. Findings that are not reproducible may introduce waste into a research system. “Of course we’re going to get lots of ideas wrong initially. We’re venturing into the unknown, so it’s no surprise that error occurs.
“That’s not a problem. The problem emerges when we think we’ve discovered something, but we actually haven’t and we fail to notice.”
The Center for Open Science
Nosek co-founded the Center for Open Science 10 years ago to coordinate projects that produce evidence of the replication challenge, then to find ways to shift the reward system and to provide new methodologies, tools and training that help researchers raise the quality and rigour of their work and the likelihood that their findings will be replicable.
There were initially two projects. The Reproducibility Project: Psychology tried to replicate 100 findings from prominent psychology journals.
The second project was to build the Open Science Framework, which among other things is a research registry. As with most of the centre’s projects, it was developed with a variety of stakeholders.
“We have 50 staff at the centre but our projects are much bigger than that, because they involve so many collaborations with other stakeholders in the field. We’re trying to co-create solutions. We are the administrative drivers, but if we don’t have community involvement, they won’t be successful,” says Nosek.
“The Open Science Framework is an open source infrastructure that supports researchers across the entire lifecycle of research to make their research more rigorous, more transparent, and to share not just the final report but all of the parts of the research process that happened along the way – research plans, data produced, materials generated, analysis and outcomes.
“We created this framework for journals and funders and institutions to say to researchers, ‘this is what’s required to do transparent open research, to be published in our journal or to get our grant funding or to be a researcher at our institution’.” Having been adopted by thousands of journals and a few dozen funders, the framework is “just entering into universities’ consciousness”.
There are now many research registries worldwide, mostly for clinical trials. In the US, preregistration of clinical trials has been required by law since 2000. The importance of preregistration has long been recognised, for instance to protect against financial conflicts of interest and to make sure medicines work.
“We are trying to extend that basic concept across all kinds of research,” Nosek says. The Framework helps researchers and research teams to manage their materials and data, and improve their research process. “We make it super-simple for researchers to make as much research public as they are willing and able to do.”
The next step is to make preregistration normative in research communities.
The imperative to change practices
Over time, the Center for Open Science has added further areas of support for researchers based on its theory of change – “how can we move from a culture that values innovation at the expense of verification to one that accelerates discovery by promoting innovation but following up with verification practices”, says Nosek.
It is critical for rigour and transparency that the data and materials underlying a final paper are shared. “It’s kind of shocking that hasn’t been normal in the past,” he adds.
Understanding research requires access to the full methodology: the protocols conducted, the materials generated, how the data was analysed and so on. With that access, different analyses can be applied to the same data to see if the findings are robust, replications can be conducted, and the research can be built on and extended.
Says Nosek: “Researchers have not been doing these things because they’re not rewarded for doing them. That’s extra work. Why would I expose myself to you attacking me? This thing feels risky and certainly if nobody else is doing it, why would I bother?”
Preregistration tackles key problems. The first is publication bias. Researchers are more likely to get published with positive results – when they find evidence for what they are investigating – than with negative results. By preregistering all studies, all of a person’s or lab’s research is discoverable. “That’s important for the overall health of the research system.”
Further, preregistration makes it clear what was planned in research before data was observed, and what was reported that was unplanned and discovered after the fact.
“We have all kinds of stuff that happens unexpectedly, and it is important,” explains Nosek. “But that kind of discovery is by definition more uncertain than planned confrontations of ideas that we had beforehand. And so it needs to be really clear which is which.”
As part of efforts to encourage cultural change, the centre works with research communities. “Grassroots community building provides visibility in the research community to say, ‘Oh, preregistration is a thing that maybe we should consider doing. Oh, I see how they’re improving practice, maybe I should try it.’”
However, that alone is insufficient to address problems around the research reward system. To get into the mainstream, Nosek and colleagues also work with journals, funders and institutions.
Peer review of research proposals
One key intervention is a publishing model promoted by the centre called Registered Reports, which emphasises research quality by conducting peer review before data collection.
The idea, says Nosek, is that instead of doing research and then writing up a report and sending it to a journal for peer review, a researcher does preliminary work, arrives at the critical studies to be undertaken for a paper, and sends a proposal to a journal.
The journal sees the exploratory work, designs, plans and questions, and that work is peer reviewed to assess whether the questions are important and the research methodology can effectively test them. If the peer review is positive, the journal commits to publishing the results, regardless of outcome.
“That builds preregistration into the journal review process and fundamentally changes the rewards,” Nosek explains. “In the standard model at journals, I’m rewarded for publishing exciting novel results. In registered reports nobody knows what the results are, so I can’t be rewarded for that. My reward is for asking important questions and designing rigorous methods to test them.”
More than 300 journals have now adopted Registered Reports. “Nature is the latest adopter, so it’s hit the mainstream and the most prestigious outlets as an option,” says Nosek.
“The evidence we have today suggests that the registered reports model addresses publication bias. Negative results are more likely to get published than in the traditional model. And it is associated with higher rigour and higher quality of the research itself.” Nosek and colleagues are now conducting a randomised trial to see if they can find causal evidence for that claim.
Tools for better transparency
Another example of how the rewards and requirements for researchers can be changed is embodied in the Transparency and Openness Promotion (TOP) guidelines, which provide tools to help implement better, more transparent research.
The TOP guidelines were co-organised by the centre and Science Magazine, developed together with an array of journals, funders and societies, and published in 2015.
“Collectively, all of those behaviours are driving culture change efforts in different fields,” Nosek says.
Nosek says one project of the research reform movement, just getting going, asks: now that more researchers are adopting behaviours like preregistration, “can we start to make that data more aggregated and more visible so it could be incorporated as a complementary way to evaluate how institutions are doing, so that university rankings can evolve?”