Peer Review and Public Trust

Originally published on October 18, 2018

///

The public acceptance of research findings is built on trust. Anything that undermines that trust plays into the hands of those who want to discredit academic research and scholarship, in fields ranging from climate change to gender studies, often for political reasons. Some even go as far as submitting fraudulent articles with the sole goal of discrediting an entire field they disagree with, as the recent “grievance studies” hoax shows.[i] Such stunts make for great headlines in the popular press, but they offer no solutions to a problem we have known about for years and only exacerbate the loss of public trust.


While there are many reasons that public faith in research is decreasing, one contributing factor is that the current peer review system is failing. Peer review is considered the gold standard for assessing the worthiness of newly created knowledge. If a manuscript passes the critical eyes of experts in the field, it is added to the canon of knowledge and becomes available for others to build upon. We believe a viable way to enhance public trust in science is to change the peer review system.


Peer review, however, is a surprisingly recent phenomenon. According to Spier[ii], scientific works of the 17th century were debated in societies and academies, and it wasn’t until the 18th century that editors took over the review process and developed a more formal method of having experts in academic societies make recommendations on the publication of manuscripts. It took even longer for manuscripts to be sent out to external experts, a practice that required the ability to make copies of manuscripts. According to Spier[iii], “it was not until the 1890s, when the typewriter became available, that carbon papers could be used to make replicate copies, 3-5 at the most.”


The research enterprise, and with it the number of publications, expanded significantly after the Second World War: the annual growth rate of scientific output rose from about 2-3% between 1750 and 1950 to 8-9% after 1950.[iv] Fortunately, the Xerox photocopier became commercially available in 1959[v], which made the distribution of manuscripts much easier, and today’s peer review system became the standard. The current system is thus built on a technology that allowed us to make copies of manuscripts and mail them to reviewers, a revolutionary step in 1959. Technology has moved on: manuscripts are no longer mailed to reviewers as paper copies, and publications are online (even if there are still printed volumes). Our approach to vouching for new discoveries should adapt to these changes in how we disseminate knowledge.


What has not changed is the time it takes to carefully review a manuscript. Reviewing a manuscript is recognized as a professional duty, but there is no reward system in place that allows researchers to devote sufficient time to ensure that the methodology is rigorous and the results are accurate. The pressure to win grants and publish has overtaken our attention to the professional duty of carefully reviewing what others have done, a duty with little immediate reward. As more and more research is interdisciplinary and co-authored by multiple researchers from diverse fields, the resulting manuscripts may exceed the expertise of any single reviewer, who may then resort to checking whether the results make sense overall but can no longer vouch for their accuracy. In addition, it is often impossible to check the details of an analysis, either because of lack of access to the original data or because the code developed for the analysis cannot be inspected. This has contributed to the “reproducibility crisis” we now find ourselves in the middle of.


While outright fraud is rare, it is not unheard of, as Retraction Watch[vi] shows, and many studies cannot be replicated. Even if we take the data from a study and repeat the analysis, we may not be able to reproduce the same result, since analysis methods change, databases used for the analysis are not frozen in time, and commercial software packages are updated. There are numerous reasons for the resulting lack of computational reproducibility that are beyond the control of the investigator and would require packaging one’s analysis in containers,[vii] a task that is often beyond the capability of investigators.
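To make the burden concrete, here is a minimal sketch, in Python, of just the first step a container would automate: recording which interpreter, operating system, and library versions produced a result, so that a later rerun can at least detect when the environment has drifted. The file name and package list are illustrative assumptions, not part of any standard, and this records the environment without freezing it, which is what containers add on top.

```python
import json
import platform
import sys
from importlib import metadata

# Hypothetical list of packages an analysis depends on; in practice
# this would be derived from the project's actual imports.
PACKAGES = ["numpy", "pandas", "scipy"]


def snapshot_environment(path="environment_snapshot.json"):
    """Write interpreter, OS, and package versions next to the results.

    Without a record like this (or a full container image), a later
    rerun may silently use different library versions and produce
    different numbers.
    """
    env = {
        "python": sys.version,
        "platform": platform.platform(),
        "packages": {},
    }
    for name in PACKAGES:
        try:
            env["packages"][name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            env["packages"][name] = "not installed"
    with open(path, "w") as f:
        json.dump(env, f, indent=2)
    return env


if __name__ == "__main__":
    print(json.dumps(snapshot_environment(), indent=2))
```

Even this modest bookkeeping is rarely done, which illustrates why full containerization remains out of reach for many labs.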

An additional stress on the peer review system comes from the proliferation of online journals that promise a quicker publication process. If one journal rejects a paper, publishers may even offer an alternative, less prestigious outlet, sometimes with the added incentive of open access to the public at an increased financial burden to the authors. These additional outlets benefit publishers and authors alike, with additional revenue for the former and higher publication counts for the latter, but all to the detriment of an already overburdened review process.


While peer review will remain the sine qua non of establishing the validity of claims to new knowledge, there is no reason why peer review has to be done the way we are currently doing it. We need to develop a system that rewards those who spend time reviewing manuscripts and that increases public trust in new claims. Here is a way to accomplish both: When a paper is accepted, the reviewers should write a short summary of why they recommended publication, and that summary, along with the identity of the reviewers, should be published alongside the original article. The summary should count toward promotion and tenure for the reviewer and should be part of the annual merit review. This would not only change the reward system but also explain to others why the publication is worthwhile. It would increase the likelihood that reviews are done thoroughly and that the results in the paper are trustworthy, at least to the best of the reviewers’ knowledge. If a manuscript requires the expertise of multiple fields, reviewers can be brought together to collaborate on reviews. The reviewers’ identities need to be made public only if the paper is accepted for publication; if a paper is rejected, the authors would never learn who reviewed it, preserving the anonymity of reviewers of rejected manuscripts.


We can go even further and enlist a broader audience to provide comments and establish the validity of results. The more than four thousand colleges and universities in this country could take it upon themselves to create courses in which undergraduate students and beginning graduate students review clusters of papers. Funding agencies could fund the development of course materials that would guide students through the research process with data and materials acquired by other labs. Faculty would get credit for developing research-based course materials and for teaching the courses. Students could experience the discovery process hands-on. They could write up their findings, which could be linked to the original articles, and thus contribute to a more robust knowledge enterprise. There would be an incentive to revisit previously published results, work through the papers while learning different methods, and discuss whether those methods support the claims in the publications. This process would identify research results that are robust and could help weed out non-reproducible research.


We believe that these approaches could make what is published more accurate and reproducible. Because there would be an ongoing discussion and reworking of previously published work, the academy could then replace its current metrics of publication counts, citation counts, and impact factors for promotion, tenure, and merit review with more meaningful metrics that favor quality over quantity, giving researchers an incentive to improve the trustworthiness of published work.


[i] Pluckrose, H., J.A. Lindsay, and P. Boghossian. “Academic Grievance Studies and the Corruption of Scholarship.” Areo (2018). Online access: https://areomagazine.com/2018/10/02/academic-grievance-studies-and-the-corruption-of-scholarship/ (accessed on October 14, 2018)

[ii] Spier, Ray. “The history of the peer-review process.” Trends in Biotechnology 20.8 (2002): 357-358.

[iii] Ibid.

[iv] Van Noorden, Richard. “Global scientific output doubles every nine years.” Nature News Blog (2014).

[v] Spier, 2002.

[vi] Retraction Watch. Online access: http://retractionwatch.com/

[vii] Docker and Reproducibility. Chapter 9. Online access: https://reproducible-analysis-workshop.readthedocs.io/en/latest/8.Intro-Docker.html (accessed on October 14, 2018)