As a format it’s slow, encourages hype, and is difficult to correct. A radical overhaul of publishing could make science better
When was the last time you saw a scientific paper? A physical one, I mean. An older academic in my previous university department used to keep all his scientific journals in recycled cornflakes boxes. On entering his office, you’d be greeted by a wall of Kellogg’s roosters, occupying shelf upon shelf, on packets containing various issues of Journal of Experimental Psychology, Psychophysiology, Journal of Neuropsychology, and the like. It was an odd sight, but there was method to it: if you didn’t keep your journals organised, how could you be expected to find the particular paper you were looking for?
The time for cornflakes boxes has passed: now we have the internet. Having been printed on paper since the very first scientific journal was inaugurated in 1665, the overwhelming majority of research is now submitted, reviewed and read online. During the pandemic, it was often devoured on social media, an essential part of the unfolding story of Covid-19. Hard copies of journals are increasingly viewed as curiosities – or not viewed at all.
But although the internet has transformed the way we read it, the overall system for how we publish science remains largely unchanged. We still have scientific papers; we still send them off to peer reviewers; we still have editors who give the ultimate thumbs up or down as to whether a paper is published in their journal.
This system comes with big problems. Chief among them is the issue of publication bias: reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results. So scientists go to great lengths to hype up their studies, lean on their analyses so they produce “better” results, and sometimes even commit fraud in order to impress those all-important gatekeepers. This drastically distorts our view of what really went on.
There are some possible fixes that change the way journals work. Maybe the decision to publish could be made based only on the methodology of a study, rather than on its results (this is already happening to a modest extent in a few journals). Maybe scientists could just publish all their research by default, and journals would curate, rather than decide, which results get out into the world. But maybe we could go a step further, and get rid of scientific papers altogether.
Scientists are obsessed with papers – specifically, with having more papers published under their name, extending the crucial “publications” section of their CV. So it might sound outrageous to suggest we could do without them. But that obsession is the problem. Paradoxically, the sacred status of a published, peer-reviewed paper makes it harder to get the contents of those papers right.
Consider the messy reality of scientific research. Studies almost always throw up weird, unexpected numbers that complicate any simple interpretation. But a traditional paper – word count and all – pretty well forces you to dumb things down. If what you’re working towards is a big, milestone goal of a published paper, the temptation is ever-present to file away a few of the jagged edges of your results, to help “tell a better story”. Many scientists admit, in surveys, to doing just that – making their results into unambiguous, attractive-looking papers, but distorting the science along the way.
And consider corrections. We know that scientific papers regularly contain errors. One algorithm that ran through thousands of psychology papers found that, at worst, more than 50% had one specific statistical error, and more than 15% had an error serious enough to overturn the results. With papers, correcting this kind of mistake is a slog: you have to write in to the journal, get the attention of the busy editor, and get them to issue a new, short paper that formally details the correction. Many scientists who request corrections find themselves stonewalled or otherwise ignored by journals. Imagine how many errors litter the scientific literature, uncorrected simply because correcting them is too much hassle.
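To make the idea concrete, here is a minimal sketch of the kind of consistency check such an algorithm performs: recompute a p-value from the reported test statistic and see whether it matches the p-value the paper claims. The numbers below are invented, and the sketch uses a normal approximation to the t distribution, which is only reasonable for large samples; a real checker would use the exact distribution and parse the statistics out of the paper's text.

```python
import math

def two_tailed_p(t_stat: float) -> float:
    # Normal approximation to the t distribution: adequate for
    # large degrees of freedom, which is all this sketch assumes.
    return math.erfc(abs(t_stat) / math.sqrt(2))

def check_reported(t_stat: float, reported_p: float, tol: float = 0.005) -> bool:
    # True if the reported p-value is consistent with the test statistic.
    return abs(two_tailed_p(t_stat) - reported_p) <= tol

# A paper reports "t = 2.20, p = .01". Recomputing gives p of roughly .028,
# so the reported value is flagged as inconsistent.
print(check_reported(2.20, 0.01))   # → False
print(check_reported(2.20, 0.028))  # → True
```

Run over thousands of papers, even a check this crude turns up mismatches at scale; the serious cases are the ones where the recomputed p-value crosses the significance threshold the authors claimed.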
Finally, consider data. Back in the day, sharing the raw data that formed the basis of a paper with that paper’s readers was more or less impossible. Now it can be done in a few clicks, by uploading the data to an open repository. And yet, we act as if we live in the world of yesteryear: papers still hardly ever have the data attached, preventing reviewers and readers from seeing the full picture.
The solution to all these problems is the same as the answer to “How do I organise my journals if I don’t use cornflakes boxes?” Use the internet. We can change papers into mini-websites (sometimes called “notebooks”) that openly report the results of a given study. This would give everyone a view of the full process from data to analysis to write-up: the dataset would be appended to the website along with all the statistical code used to analyse it, and anyone could reproduce the full analysis and check they get the same numbers. And any corrections could be made swiftly and efficiently, with the date and time of all updates publicly logged.
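As a sketch of what the script behind such a notebook might look like, the toy example below bundles the raw data, the analysis code, a fingerprint of the data so readers can confirm they are reanalysing exactly what the authors used, and a timestamp on every rerun, mimicking the public update log. Everything here (the dataset, the field names, the analysis itself) is invented for illustration, not a description of any real system.

```python
import csv
import hashlib
import io
import statistics
from datetime import datetime, timezone

# Hypothetical raw data, as it might sit in an open repository
# alongside the write-up (participants and scores are invented).
RAW_CSV = """participant,score
1,14
2,17
3,12
4,19
5,16
"""

def dataset_fingerprint(raw: str) -> str:
    # A hash of the exact data file lets any reader confirm they are
    # rerunning the analysis on the same numbers the authors used.
    return hashlib.sha256(raw.encode()).hexdigest()[:12]

def analyse(raw: str) -> dict:
    # The full analysis, from raw data to summary statistics,
    # reproducible by anyone with a click.
    rows = list(csv.DictReader(io.StringIO(raw)))
    scores = [float(r["score"]) for r in rows]
    return {"n": len(scores),
            "mean": statistics.mean(scores),
            "sd": statistics.stdev(scores)}

result = analyse(RAW_CSV)
# Every rerun is stamped, standing in for the public log of updates.
print(f"{datetime.now(timezone.utc).isoformat()} "
      f"data={dataset_fingerprint(RAW_CSV)} result={result}")
```

The point is not the particular analysis but the packaging: data, code and output travel together, so a correction is just a new, timestamped run rather than a letter to a busy editor.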
This would be a major improvement on the status quo, where the analysis and writing of papers goes on entirely in private, with scientists then choosing on a whim whether to make their results public. Sure, throwing sunlight on the whole process might reveal ambiguities or hard-to-explain contradictions in the results – but that’s how science really is. There are also other potential benefits of this hi-tech way of publishing science: for example, if you were running a long-term study on the climate or on child development, it would be a breeze to add in new data as it appears.
There are barriers to big changes like this. Some are to do with skills: it’s easy to write a Word document with your results and send it in to a journal, as we do now; it’s harder to make a notebook website that weaves together the data, code and interpretation. More importantly, how would peer review operate in this scenario? It’s been suggested that scientists could hire “red teams” – people whose job is to pick holes in your findings – to dig into their notebook sites and test them to destruction. But who would pay, and exactly how the system would work, are still up for debate.
We’ve made astonishing progress in so many areas of science, and yet we’re still stuck with the old, flawed model of publishing research. Indeed, even the name “paper” harks back to a bygone age. Some fields of science are already moving in the direction I’ve described here, using online notebooks instead of journals – living documents instead of living fossils. It’s time for the rest of science to follow suit.