Back in 2013 Barack Obama worked out a compromise between advocates for free and open research and the scientific publishing companies who feared the end of their business model. The decision reached was to allow for a year during which federally funded research could sit behind the paywalls of the various scientific journals before the content was made freely available to the world. This was seen as a way to keep publishers, who do coordinate the peer review process, in the game, while still allowing the public to read the research on a relatively short timeline.

Well, now President Biden has completely removed the one-year grace period, requiring federally funded peer-reviewed research to be published without a paywall, and requiring that it be machine readable and include metadata, something that, while also required in Obama’s compromise, hasn’t yet resulted in any fully searchable, easy index of available scientific articles. Most sites, like Google Scholar, mix paywalled results alongside freely published ones.
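What “machine readable with metadata” means in practice is that each article ships with a structured record a crawler can parse, rather than text that must be scraped. A minimal sketch of what such a record might look like, with entirely hypothetical field names and values (the actual schema is set by each funding agency and is not specified here):

```python
import json

# A hypothetical metadata record for a federally funded article.
# All field names and values below are illustrative placeholders,
# not an official agency schema.
record = {
    "title": "An Example Open-Access Article",
    "authors": ["A. Researcher", "B. Scientist"],
    "funder": "NIH",
    "license": "CC-BY-4.0",
    "published": "2022-08-25",
}

# Machine readability simply means a program can parse the record
# directly instead of scraping a web page:
serialized = json.dumps(record, indent=2)
parsed = json.loads(serialized)
print(parsed["funder"])
```

A shared, consistently populated record like this is what would let a search index separate open articles from paywalled ones automatically.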

Part of this is because, while the US federal government is the largest funder of scientific research, it funds far short of a majority of it. The $62.5B NIH budget, the $10.5B NSF budget, and the $3.86B DARPA budget, combined with the various smaller federal agencies, only made up about 7-9% of total scientific research spending this year. But because it is so influential, this is a little like the University of California system going test optional: other institutions will likely follow.

The back and forth around free access to scientific research has a long history. For years the argument against it was that the scientific journals coordinated peer review and needed to cover their publication costs. Now that so much has moved online, publication costs are little more than server costs, but peer review is still a keystone of the scientific process.

However, peer review as a “best practice” has long been questioned. It was already in doubt back in the 1980s, when Douglas Peters and Stephen Ceci took on the challenge of proving that its results were flawed. They collected 12 articles, changed the institution names to fictitious ones, and resubmitted the articles to the same publications that had published them between 18 and 32 months earlier. The articles were reviewed by 38 editors and scientific “referees” (the scientists an editor chooses to give recommendations on a paper). Only three of the 38 noticed that an article had been submitted before, which still allowed nine of the articles to go through the full review process, and most received recommendations against publication.

Papers that had appeared in these very publications less than three years earlier were now judged by reviewers to have “serious methodological flaws.” This was not because science had advanced that much in the short time since original publication, but because the underlying assumption, that scientists can consistently recognize what is good science and what is problematic science, is flawed. So flawed, in fact, that it has been debated whether peer review is any better than chance alone.

And before we get up in arms at the inconsistent scientists, it is important to remember that peer review is done for free. It has run on goodwill and a sort of tradition, without any compensation or real improvement in personal outcomes for those doing the work. But that too is changing. Peer review is in crisis because, ever since COVID, more and more scientists have been declining to do it. Without any clear benefit, and with the ever-increasing burden of tasks and hats that professors are expected to take on, this one has become even less popular. To be clear, there are still editors who are paid by the publication, but they rely on expertise from the increasingly niche worlds that many papers exist within, and then park the content behind a paywall.

It is important to look at that paywall. For Nature, one of the two preeminent interdisciplinary journals, an individual subscription seems reasonable at $200 (Science is priced similarly, but it also allows researchers to post their articles on institutional or personal websites). The institutional prices, however, are all hidden and negotiated. For large research universities the cost is negligible: they will often subscribe to several hundred journals without worrying much about the expense. But for researchers at institutions that cannot subscribe to every major publication, and for the public, the research is locked away.

So we have publications that provide a service of dubious utility, mostly on the backs of unpaid researchers, charging a fee to view articles that is mostly paid only by larger institutions. This makes journals pretty unpopular, and many scientists support people getting articles any way they can. In a tweet that almost immediately went viral, Dr. Holly Witteman encouraged people to simply email an article’s authors and ask them to send a copy.

In fact, it is only the open publication of these articles that has been legally difficult. And that is where Sci-Hub comes in. Started by a Kazakhstani computer programmer named Alexandra Elbakyan in 2011, Sci-Hub provides free (pirated) access to “nearly all scientific literature.” It has, naturally, been sued and ordered to shut down by a number of courts around the world, but, much like The Pirate Bay, it has managed to keep providing access by shuffling through domains. And it is popular and widely used. Unlike media piracy, which tends to happen more in lower-income countries, pirating scientific articles seems to happen everywhere.

However, one thing that neither Sci-Hub nor any other online scientific search engine has been able to do is create a really compelling, easy way to look for articles on a specific topic. Sci-Hub doesn’t have a search function at all, and the most widely used option, Google Scholar, does an OK job but requires a fair bit of ongoing effort from scientists to keep their publication lists and metadata up to date. It seems possible that, with the new requirements, there will be room to dramatically improve our ability to find the best, newest science out there and share it with the world.

And maybe this will force everyone to ask what benefit publications provide at all. Because if the peer review process is not especially useful, why do we need them? The truth is that the main difficulty with getting rid of them is the same as the difficulty with removing a college degree requirement from a job posting. Whether it actually selects good science or good candidates has been less important than the fact that it cuts down on the total number of options to choose from. Restricting who gets published both reduces how much must be read to stay current with a discipline, and gives universities and other scientists a method for ranking and rewarding those who manage to navigate the labyrinthine system. Even if somewhat random in terms of quality, it is a gameable system that can be worked by people with enough determination and time.

That being said, paralysis of choice has been solved by search engines before. It has been harder for Google to do with scientific articles, but judging an article by how many other articles link to it is something that both Google and the scientific system do commonly. It may take more eyes and more demand than exist now, but it will likely change. Which of the publishers will weather that change remains to be seen.
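The link-counting idea above is essentially PageRank applied to a citation graph. A minimal sketch on a tiny hypothetical graph (the paper names, damping factor, and iteration count are all illustrative choices, not from any real dataset or Google's actual implementation):

```python
# PageRank on a citation graph: a paper cited by well-ranked papers
# accumulates rank itself. This is a toy sketch, not a real index.
def pagerank(citations, damping=0.85, iterations=50):
    """citations maps each paper to the list of papers it cites."""
    papers = list(citations)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in papers}
        for citing, cited in citations.items():
            if not cited:
                # A paper with no references spreads its rank evenly.
                for p in papers:
                    new_rank[p] += damping * rank[citing] / n
            else:
                share = damping * rank[citing] / len(cited)
                for c in cited:
                    new_rank[c] += share
        rank = new_rank
    return rank

# Three hypothetical papers where A and B both cite C,
# so C should come out ranked highest.
graph = {"A": ["C"], "B": ["C"], "C": []}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # prints "C"
```

The scientific world's informal version of this, citation counts and impact factors, is a cruder form of the same signal, which is part of why a better index over newly open articles seems plausible.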
