
Peer review for biomedical publications: we can improve the system


The lack of formal training programs for peer reviewers places the scientific quality of biomedical publications at risk, as ‘hidden’ bias introduced during review may not be easily recognized by the reader. The exponential increase in the number of manuscripts submitted for publication worldwide, estimated at several million annually, overburdens the pool of available qualified referees. Indeed, the workload imposed on individual reviewers appears to be reaching a ‘breaking point’ that may no longer be sustainable. Some journals have made efforts to improve peer review via structured guidelines, courses for referees, and the employment of biostatisticians to ensure appropriate study design and analyses. Further strategies designed to incentivize and reward peer review include providing continuing medical education (CME) credits to referees who meet defined criteria for timely and high-quality evaluations. Alternative options to supplement the current peer review process include ‘post-publication peer review’, ‘decoupled peer review’, ‘collaborative peer review’, and ‘portable peer review’. This article outlines the shortcomings and flaws of the current peer review system and discusses innovative options on the horizon.



We read with enthusiasm the recent opinion article by Jigisha Patel in BMC Medicine [1]. Dr. Patel provides a critical analysis of the shortcomings and ‘hidden dangers’ of the established peer review process in biomedical publishing, with a focus on peer review for randomized controlled trials (RCTs) [1]. The lack of coherent training and specialization of peer reviewers appears to jeopardize the scientific quality of published manuscripts [2]. Once published, articles of ‘hidden’ substandard quality will negatively affect the relevance of meta-analyses, clinical guidelines, and evidence-based treatment recommendations (“Garbage in, garbage out!”) [3]. This notion is aptly illustrated by a quote from Dr. Patel’s article:

“Treatment decisions are based on evidence which is itself determined by a system for which there is no evidence of effectiveness” [1].

The peer review process ‘left behind’

Although the quality of evidence-based medicine (EBM) has evolved over the years with the provision of defined uniform criteria for reporting trials (the Consolidated Standards of Reporting Trials (CONSORT) statement [4]) and meta-analyses (the Quality of Reporting of Meta-analyses (QUOROM) statement [5]), we have not observed a similar evolution of the peer review process, and the current modalities of peer review warrant reconsideration. This is analogous to a modern 21st-century information technology company running its operations on first-generation 4 kB Apple computers from 1976.

The exponential increase in the number of manuscripts submitted for publication worldwide overburdens the capability of available qualified referees to keep up with reviewing requests and to ensure the timeliness and quality of their evaluations. In 2006, the estimated number of published peer-reviewed articles reached 1.4 million per year [6]. As the rejection rate for average journals ranges from 20% to 50% (and is much higher for more prestigious journals), the number of submitted manuscripts undergoing formal peer review more likely approaches 2 to 3 million per year. These conservative estimates, originating from 2006, must be adjusted to the present day, given the dramatic ‘inflation’ of new open-access journals sprouting like mushrooms all over the globe.
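The arithmetic behind this extrapolation can be made explicit. As a rough sketch (assuming, for illustration only, that each published article corresponds to one reviewed submission and that the cited 2006 figures hold):

```python
# Back-of-the-envelope estimate of annual peer-reviewed submissions,
# based on the figures cited above: ~1.4 million published articles per
# year (2006) and rejection rates of roughly 20-50% for average journals.
# Caveat: rejected manuscripts resubmitted elsewhere are reviewed again,
# so the true review workload is even higher than this estimate.

published_per_year = 1_400_000

for rejection_rate in (0.20, 0.50):
    acceptance_rate = 1 - rejection_rate
    submitted = published_per_year / acceptance_rate
    print(f"rejection rate {rejection_rate:.0%}: "
          f"~{submitted / 1e6:.2f} million submissions/year")
```

With 20% rejection this yields about 1.75 million submissions per year, and with 50% rejection about 2.8 million, consistent with the 2 to 3 million range quoted above once resubmissions and higher-rejection journals are factored in.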

The ever-increasing competitiveness in research (“publish or perish”) in these current times of limited grant funding opportunities incentivizes researchers to ‘fragment’ results from a single study into multiple publications, or to publish identical data sets redundantly [7]. This effect contributes to the ever-increasing ‘flood’ of biomedical manuscripts submitted for publication globally.

The burden on reviewers

The burden placed on peer reviewers to assess an increasing number of submitted manuscripts, a large proportion of which are of questionable scientific quality, appears to be reaching a ‘breaking point’ that is no longer sustainable. Increasing numbers of reports on unethical research conduct, including the publication of fraudulent and fabricated data and of plagiarized or redundant publications, represent an additional dilemma for editors and reviewers [7]-[9]. Selected papers that are officially retracted tend to receive wide public attention [10],[11]; however, such reports likely represent just the ‘tip of the iceberg’ of an unrecognized problem for the scientific community. Indeed, a highly provocative interpretation of biomedical publications claimed that most published research findings are misleading, and the result of an unjustified “chase for statistical significance” [12].

This raises the following questions:

How are peer reviewers supposed to cope with the sheer number of increasing reviewing requests and assignments?

How are untrained ‘lay’ referees expected to recognize and scrutinize flaws in study design, methodology, and the validity of interpretation of data?

How are qualified ‘expert’ referees expected to recognize research misconduct and to stratify apparently ‘good papers’ from unethical submissions, including redundant publications and fabricated data?

The burden on editors

As editors of two peer-reviewed journals, representing both an open-access model (Patient Safety in Surgery [13]) and a traditional print journal (Journal of Trauma and Acute Care Surgery [14]), we are exposed to the daily challenge of identifying and commissioning suitable referees who are willing to accept a requested assignment and to return a quality report in a timely fashion. Indeed, ensuring a streamlined, fast-track, and high-quality peer review process remains the editors’ ultimate responsibility and duty to the scientific community. Any flaw in the peer review of submitted manuscripts will ultimately jeopardize the quality of evidence-based recommendations, which rely on the assumption that the quality of the published science is impeccable.

Extrapolated to the court of law, would anybody accept a verdict from a poorly qualified judge, purely on the grounds that this individual was available to complete the assigned task? Clearly, the editorial process carries great responsibility and is highly challenging. Most editors spend a significant amount of time investigating the suitability of potential reviewers by matching their publication record to the topic of interest, and by cross-checking potential referees for co-authorships with the submitting authors. An editor’s ‘favorite’ type of reviewer is the candidate who readily accepts requested assignments and returns a high-quality, comprehensive evaluation before the deadline expires. Despite diligent scrutiny of the process, as editors, we frequently remain uncertain as to the true qualifications of individual assigned referees.

The ‘ideal’ peer reviewer

In a perfect world, the ideal peer reviewer would be an active scientist working in the same subspecialty ‘niche’ of research as the submitted paper, but without any current collaboration or professional liaison with the submitting authors, in order to avoid a conflict of interest. On the other hand, such expert ‘peers’ may easily be direct competitors for grant awards in the same field of research. This bias can be the root cause of unjustified adverse reports leading to rejection of a submitted paper, or of significant delays in publication through requests for additional cumbersome experiments. This type of ‘hidden’ conflict of interest may not be detectable by a journal’s managing editors.

Flaws and fraud in the system

Recent worrisome reports describe a new pattern of peer review fraud, in which submitting authors falsify the contact information of suggested referees, with the goal of diverting the peer review request to their own email account under a falsified name. A recent report in the New York Times described a peer review fraud scheme run by a researcher in Taiwan, which led to a journal’s retraction of 60 publications [15]. The uncovered operation was designated a “peer review and citation ring” consisting of fake researchers and real ones whose identities were assumed by the author, who created 130 fraudulent e-mail accounts used in the forged peer review process [15]. As most biomedical journals rely on online submission and review systems to assess submitted manuscripts, the ‘gray zone’ of online peer review fraud may be larger than assumed.

In light of all the shortcomings related to the current peer review process and its impact on the quality and practice of EBM, many critical voices have questioned the validity and sustainability of our current approach to scientific publishing [2],[3],[16],[17]. A provocative recommendation by the forefront science group “The Edge” suggests completely abolishing EBM per se as an outdated scientific tenet, in answer to the annual question for 2014, “What scientific idea is ready for retirement?” [18],[19].

‘Journal survival’ versus rigorous peer review

A recent in-house editorial analysis (2012 to 2013) of the ‘fate’ of manuscripts rejected by the Journal of Trauma and Acute Care Surgery revealed that 42% of rejected papers were subsequently published in open-access journals within an average of 10 months after rejection (Crebs and Moore; unpublished observations). The interpretation of this finding is ambiguous. On the one hand, it is very possible that the scrutiny of the initial peer review process helps improve the overall quality of a rejected paper after revision, and thus makes it more appealing and suitable for publication in a second-tier target journal. On the other hand, some open-access online journals appear to commission articles with a purely business incentive, without regard to the scientific merit and quality of the research.

Provocatively speaking, many of the new generation of open-access journals may accept a lower threshold of peer review quality, or imply that in-house editorial decision-making constitutes formal ‘peer review’, as a trade-off to sustain their financial viability [20]. This is particularly important because the revenue stream in the ‘author pays’ model depends on the high publication fee ($2,000 or more) charged to authors upon acceptance of their article for publication. For this reason, many scientists consider open-access peer review in general to be intrinsically biased. A journal’s overall rejection rate may serve as a proxy or surrogate marker for the quality of peer review, in conjunction with the number of peer review cycles, the number of referees assigned to an individual manuscript, and the commissioning of re-reviews and application of editorial changes prior to acceptance. These metrics could be transparently incorporated into a peer review ‘quality mark’ included with each publication, as suggested in Dr. Patel’s article [1].
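As a purely hypothetical illustration of such a ‘quality mark’ (the weights, caps, and 0-100 scale below are invented for this sketch and do not come from Dr. Patel’s article or any journal’s actual practice), the metrics listed above could be combined into a single transparent score:

```python
# Hypothetical sketch of a composite peer-review "quality mark".
# The inputs mirror the metrics named in the text (journal rejection
# rate, number of review cycles, number of referees, re-review of the
# revised manuscript); the weights and scale are illustrative only.

def quality_mark(rejection_rate: float,
                 review_cycles: int,
                 referees: int,
                 re_reviewed: bool) -> float:
    """Return an illustrative 0-100 peer-review quality score."""
    score = 0.0
    score += 40 * min(rejection_rate, 1.0)   # journal selectivity
    score += 10 * min(review_cycles, 3)      # credit up to 3 review cycles
    score += 10 * min(referees, 2)           # credit up to 2 referees
    score += 10 if re_reviewed else 0        # revised version re-reviewed
    return score

# Example: 60% rejection rate, two review cycles, three referees,
# and a formal re-review of the revised manuscript.
print(quality_mark(0.60, 2, 3, True))  # 24 + 20 + 20 + 10 = 74.0
```

The point of such a score would be transparency rather than precision: publishing the inputs alongside the mark would let readers judge how much scrutiny an article actually received.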

New models on the horizon

Despite the negative headlines and acknowledged deficiencies in the system, there have been significant efforts to improve the quality of the current modality of biomedical peer review. For example, the Journal of Trauma and Acute Care Surgery 1) selects reviewers based on their publication record; 2) assigns reviewers from a list of recognized experts in the topic; 3) provides continuing medical education (CME) credits for high-quality reviews and timeliness of completion; 4) provides formal annual education sessions on how to conduct peer review; and 5) employs an MD/PhD biostatistician to review all provisionally accepted manuscripts. The Journal furthermore provides uniform guidelines for reviewers (see Additional file 1: Appendix 1), which appear particularly helpful for younger and less experienced scientists at an early stage of their careers. Other journals, including the Journal of Bone and Joint Surgery, recently adopted a new grading system for the quality of peer review, termed the “peer review evaluation” (PRE) score, which is based on defined objective metrics including the overall number of review cycles. The PRE score is designed to measure the quality of peer review under the assumption that a more engaged peer review process will result in a higher-quality final publication. Additional new concepts that have recently been advocated as alternatives include ‘post-publication peer review’, ‘collaborative peer review’, and ‘decoupled peer review’ [1]. Finally, third-party evaluations managed by for-profit companies have recently been offered as an independent ‘portable peer review’, paid for by the author and moved between journals until a final editorial decision is made [21].


In summary, we applaud Dr. Patel’s important contribution, which identifies the multiple shortcomings of the current peer review process in biomedical publishing and offers specific, pertinent solutions to improve the system [1]. It is ultimately our duty as editors and scientists to move the field forward, as we can no longer accept the standard excuse that peer review is a “broken system, but still the best we have”. We can improve the system.

Authors' contributions

Both authors contributed to the design and writing of this article. Both authors read and approved the final manuscript.

Additional file


References

  1. Patel J: Why training and specialization is needed for peer review: a case study of peer review for randomized controlled trials. BMC Med. 2014, 12: 128. doi:10.1186/s12916-014-0128-z.


  2. Smith R: Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006, 99: 178-182. doi:10.1258/jrsm.99.4.178.


  3. Stahel PF, Mauffrey C: Evidence-based medicine: A ‘hidden threat’ for patient safety and surgical innovation? Bone Joint J. 2014, 96-B: 997-999. doi:10.1302/0301-620X.96B8.34117.


  4. Schulz KF, Altman DG, Moher D, CONSORT Group: CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010, 340: c332. doi:10.1136/bmj.c332.


  5. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF: Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet. 1999, 354: 1896-1900. doi:10.1016/S0140-6736(99)04149-5.


  6. Björk BC, Roos A, Lauri M: Scientific journal publishing: yearly volume and open access availability. Inform Res. 2009, 14: 391.


  7. Stahel PF, Clavien PA, Smith WR, Moore EE: Redundant publications in surgery: a threat to patient safety? Patient Saf Surg. 2008, 2: 6. doi:10.1186/1754-9493-2-6.


  8. Bhutta ZA, Crane J: Should research fraud be a crime? BMJ. 2014, 349: g4532. doi:10.1136/bmj.g4532.


  9. Fanelli D: How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One. 2009, 4: e5738. doi:10.1371/journal.pone.0005738.


  10. Obokata H, Wakayama T, Sasai Y, Kojima K, Vacanti MP, Niwa H, Yamato M, Vacanti CA: Retraction: Stimulus-triggered fate conversion of somatic cells into pluripotency. Nature. 2014, 511: 112.


  11. Obokata H, Sasai Y, Niwa H, Kadota M, Andrabi M, Takata N, Tokoro M, Terashita Y, Yonemura S, Vacanti CA, Wakayama T: Retraction: Bidirectional developmental potential in reprogrammed cells with acquired pluripotency. Nature. 2014, 511: 112.


  12. Ioannidis JPA: Why most published research findings are false. PLoS Med. 2005, 2: e124. doi:10.1371/journal.pmed.0020124.


  13. Patient Safety in Surgery.

  14. Journal of Trauma and Acute Care Surgery.

  15. Fountain H: Science journal pulls 60 papers in peer-review fraud. The New York Times. 2014.


  16. Spence D: Evidence based medicine is broken. BMJ. 2014, 348: g22. doi:10.1136/bmj.g22.


  17. Steinberg EP, Luce BR: Evidence based? Caveat emptor! Health Aff. 2005, 24: 80-92. doi:10.1377/hlthaff.24.1.80.


  18. Overbye D: Over the side with old scientific tenets. The New York Times. 2014.


  19. The Edge.

  20. OMICS open-access journals.

  21. Van Noorden R: Company offers portable peer review. Nature. 2013, 494: 161. doi:10.1038/494161a.



Author information



Corresponding authors

Correspondence to Philip F Stahel or Ernest E Moore.

Additional information

Competing interests

Both authors are editors of the two peer-reviewed journals discussed in this article (Patient Safety in Surgery and Journal of Trauma and Acute Care Surgery). The authors declare no other conflict of interest related to this manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Stahel, P.F., Moore, E.E. Peer review for biomedical publications: we can improve the system. BMC Med 12, 179 (2014).
