Bringing discussion back to science

Opportunities for debating failure have gradually been removed from the academic literature. The Journal of Trial and Error (JOTE) aims to become a platform where scientific failure can be openly discussed. Not only will it publish articles on research containing methodological and/or conceptual errors, but JOTE will also invite subject specialists and scholars of science to discuss what these failures teach us about knowledge creation. I sympathize with this attempt to bring discussion back to science. In this blog I argue that the disappearance of opportunities for discussion is related to a more fundamental shift in how scholars, editors and other stakeholders perceive the academic literature.

Originally, the academic publishing system arose as a logbook of attempts and discussions aimed at obtaining knowledge. In the last few decades, however, there has been a shift towards viewing the scientific literature as a database of facts. [1] This shift is reflected in new methodologies such as meta-analysis, used by (mainly) biomedical scholars to combine data from various studies into a single overarching analysis. The shift in how the academic literature is perceived is moreover, and arguably best, reflected in the emergence of the phenomenon of retractions. If the literature is a database of verified facts, then ‘inaccurate knowledge’ should be deleted from it. Retractions are a prime way of doing this. [2]

Retracting academic errors

First appearing in the 1980s, the phenomenon really took shape after the turn of the millennium, and the number of retractions has been rising sharply for several years now. Retractions are passionately tracked and archived by RetractionWatch, a group of American science journalists who highlight why articles were retracted and how stakeholders respond to those retractions. As the number of retractions keeps rising, some argue that fraud or misconduct in science is becoming more common. Although it’s true that retractions may signal a transgression of norms, they also signal the social reaction to such a transgression: an editor or publisher has decided, often after strong community pressure, to take the relatively severe measure of retraction in order to repair the scientific record. Hence, rather than a sign of more misconduct in science, retraction rates are mainly an indicator of a research community taking a very particular action to redress its apparent instances of failure.

Retractions occur for a variety of reasons. Traditionally they were used as a way to tackle misconduct and filter out outright fraudulent articles, and the majority of retractions still appear to occur for this reason. [3] They have also been used in response to honest mistakes by authors or other actors in the publishing process, for instance when an article was inadvertently published twice, the wrong version of a manuscript was published, or some of the authors were unaware of the manuscript’s submission. [4]

Retractions, reinvented

In recent years, however, journal authors and editors have increasingly been using retractions for different purposes and under different conditions. By now, we have seen retractions for errors in mathematical proofs, misidentification of research materials, and errors in methods or analyses coming to light after publication. The use of retractions for these purposes is somewhat concerning and potentially counterproductive.

Retractions silence a host of interesting questions about an article’s ‘failure’

Using retractions for these purposes reflects an expectation that every published article should contain valid or ‘true’ knowledge, and that if it does not, it should be deleted. By deleting an article from the academic literature, we potentially obstruct the start of a proper debate about the underlying reasons and mechanisms that led to the article’s apparent flaws. Whereas it was once common practice to write a letter to the editor, an editorial or a new piece indicating and discussing an article’s shortcomings, this debate is now closed before it has even started by simply removing the article – often accompanied by a succinct ‘Notification of Retraction’ giving only brief details on the reason for retracting it. [5]

Such retractions silence a host of interesting questions about an article’s ‘failure’: What made the author and reviewer initially believe this (mathematical) proof? Why were these flaws not detected during review? Under what circumstances would the current analyses still be valid? And how can similar errors be prevented in the future? JOTE’s laudable initiative to re-establish a platform for discussing these aspects within the scientific discourse fills a gap that once wasn’t there.

Alternative platforms

But does that mean that academic discussion has vanished completely? Certainly not. However, I do think we are seeing a significant shift in where and how such discussions take place, with them increasingly being outsourced to alternative platforms outside the core academic literature. In fact, JOTE can be considered part of this trend.

Although the practice of retraction has slowly but surely driven some opportunities for discussion out of the pages of academic journals, other platforms have emerged where research articles are discussed. These range from popular social media platforms, such as Twitter, to platforms tailored to the discussion of academic literature. PubPeer, for example, launched in 2012, aims to ‘improve the quality of scientific research by enabling innovative approaches for community interaction’, and the weblog RetractionWatch, launched in 2010, has become the online place to be for daily updates on the latest retractions. RetractionWatch always provides background and context to these cases and has archived 20,000 retracted articles in its RetractionWatch Database.

These initiatives have not been without success. By now, PubPeer has sparked debate about multiple controversial research articles. Unhindered by the reputational considerations that might deter journal editors from discussing failure in their journal’s articles, PubPeer has brought attention to numerous problematic articles that would otherwise have passed unnoticed. In addition, the option to comment anonymously has most likely allowed people to post concerns they might otherwise not have dared to express.

A middle ground

Unfortunately, all these valuable endeavours cannot be expected to fully replace the discussions and debates taking place within the academic literature itself. Whereas conversations on the pages of academic journals, through articles, letters, editorials, responses and rebuttals, are moderated and coordinated by the journal’s management and editors, discussions on newly emerging platforms often are not. While this may provide opportunities in terms of transparency, inclusivity and the democratization of science, it certainly also comes with drawbacks. Among other things, open discussion platforms are regularly accused of being a sanctuary for complainants who publicly pillory scientists with personal attacks rather than commenting carefully on the content of their work.

Alternative platforms cannot be expected to replace discussions within the academic literature

JOTE might therefore turn out to occupy a suitable middle ground between traditional academic publishing and these open discussion platforms: providing an outlet for (discussions about) failure while staying close to the academic format and customs of debate.

Altogether, I strongly support the initiative of the Journal of Trial and Error to create a space for academic contemplation and mutual learning. Whereas other outlets are increasingly banning errors from their pages without much open debate, let alone actively accepting or inviting them into their journals, JOTE has the potential to grow into an important institution in the academic literature. Acknowledging the central role of errors in science, I genuinely hope JOTE will be able to live up to its aim of discussing failure without fear or shame.


Author affiliations: Radboud University, Institute for Science in Society & Leiden University, Centre for Science and Technology Studies

[1] Horbach, S.P.J.M. and W. Halffman, The changing forms and expectations of peer review. Research Integrity and Peer Review, 2018. 3(1): p. 8.

[2] Horbach, S.P.J.M. and W. Halffman, The ability of different peer review procedures to flag problematic publications. Scientometrics, 2019. 118(1): p. 339-373.

[3] Fang, F.C., R.G. Steen, and A. Casadevall, Misconduct accounts for the majority of retracted scientific publications. Proceedings of the National Academy of Sciences of the United States of America, 2012. 109(42): p. 17028-17033.

[4] Andersen, L.E. and K.B. Wray, Detecting errors that result in retractions. Social Studies of Science, 2019. 49(6): p. 942-954.

[5] Ibidem