Why training and specialization is needed for peer review: a case study of peer review for randomized controlled trials
© Patel; licensee BioMed Central Ltd. 2014
Received: 22 May 2014
Accepted: 14 July 2014
Published: 30 July 2014
The purpose and effectiveness of peer review are currently subjects of hot debate, as is the need for greater openness and transparency in the conduct of clinical trials. Yet innovations in peer review have focused on the process of peer review rather than its quality.
The aims of peer review are poorly defined, there is no evidence that it works, and there is no established way to provide training for it. Despite this lack of evidence of effectiveness, evidence-based medicine, which directly informs patient care, depends on the system of peer review. The current system applies the same process to all fields of research and all study designs. While the volume of available health-related information is vast, there is no consistent means for the lay person to judge its quality or trustworthiness. Some types of research, such as randomized controlled trials (RCTs), may lend themselves to a more specialized form of peer review, in which training and ongoing appraisal and revalidation are provided to the individuals who peer review them. Any randomized controlled trial peer reviewed by such a trained reviewer could then have a searchable ‘quality assurance’ symbol attached to the published article and any published peer reviewer reports, thereby providing some guidance to the lay person seeking to inform themselves about their own health or medical treatment.
Specialization, training and ongoing appraisal and revalidation in peer review, coupled with a quality assurance symbol for the lay person, could address some of the current limitations of peer review for randomized controlled trials.
Keywords: Peer review; Evidence-based medicine (EBM); Randomized controlled trials (RCT); Clinical training; Medical education; Reporting guidelines; CONSORT
A brief history of trial reporting and peer review
‘Better have them all removed now.’ That was the advice I received in the early 1990s when my pain-free, unerupted wisdom teeth first came to the notice of a surgeon. He was emphatic that I would suffer complications in the future if I did not have all four teeth removed under a general anesthetic. This seemed drastic to me, but I was given the same advice by two health professionals and it was with trepidation that I questioned their advice. At the time, ‘Evidence-Based Medicine’, which proposed the use of scientific evidence to inform clinical decision making, was still a novel idea [], and the Cochrane Collaboration [], aimed at facilitating up-to-date systematic reviews of randomized controlled trials, had recently been founded.
I decided to search for the evidence. My only source of information was a medical library, where I could identify and photocopy relevant-looking articles or get copies via an ‘inter-library loan’. I did not find any useful information, but I decided against the procedure on the basis that the risks of a general anesthetic and a stay in hospital seemed to me to completely outweigh any benefit of having four perfectly healthy, pain-free teeth removed.
A short time later, when I was a junior doctor, a subgroup analysis of the diabetic patients who took part in the original ‘4S study’ [] reported that simvastatin treatment improved morbidity and mortality in patients with diabetes []. At the time, my peers and I took it for granted that the editors of the journals where the studies were published must have chosen the people best qualified to peer review, and that the peer reviewers must have done a competent job. The reported findings were compelling enough to have a profound effect on the care received by patients with diabetes.
These experiences illustrate not only the barriers to information I faced as a patient, but also the power of individual clinical trials to directly influence treatment decisions for individual patients, and the blind faith my peers and I had in a system whereby publication in a peer reviewed journal gave the reported results the status of ‘the evidence’ and, therefore, the ‘Truth’.
While my faith in the publication process was naïve and misplaced, flaws in the way RCTs were conducted and reported were recognized and initiatives were underway to address these concerns. These culminated in the Consolidated Standards of Reporting Trials (CONSORT) statement [] which aims to specify in detail how RCTs should be reported to improve transparency and help peer reviewers and readers make informed judgments about clinical trials. Since then a number of reporting guidelines for other types of clinical studies have been developed [].
While reporting guidelines aimed to address how individual trials were reported, there were also concerns about the extent to which only positive or favorable findings were published while those with less exciting, less favorable, or inconclusive findings were not (publication bias). In 2005, the International Committee of Medical Journal Editors (ICMJE) published a statement announcing that its member journals would adopt compulsory trial registration as journal policy []. The aim was to register the existence of all clinical trials so that they became part of the public record.
Recently, in light of ongoing concerns about publication bias and the suppression of unfavorable results, the All Trials campaign [] was launched which calls for the registering of all clinical trials and availability of all data for treatments in current use.
Meanwhile, running in parallel with this, the world of peer review was undergoing a revolution. Most definitions of peer review include a description of a process of scrutiny by independent experts or peers in the same field [,]. For peer-reviewed journals, this process involves sending submitted manuscripts to two or more people deemed knowledgeable enough in the field of the manuscript to judge its suitability for publication in that journal.
Flaws in the common single blind peer review system (where the reviewers know who the authors are, but the authors do not know who the reviewers are) were recognized [], and there were experiments with double blind peer review to attempt to address this, as well as with open peer review, where the identity of reviewers and authors is known to all. While blinding reviewers did not appear to improve the quality of peer review [], open peer review did appear to be feasible without undermining the quality of peer reviewer reports [] and was first adopted by the British Medical Journal (BMJ) in 1999 [].
The novel idea of an ‘Open Access’ journal, where all published research is freely available without subscription, began to emerge and, although it was met by ferocious opposition from publishers [], BioMed Central [], the first completely online open access publisher, was founded in 2000, followed, in 2006, by the launch of PLoS One [].
Table 1. Models of peer review

Model: Single blind peer review
Description: Reviewers know who the authors are, but authors do not know who the reviewers are.
Journal examples: The majority of biomedical journals.
Selection criteria: Varies from journal to journal; the journal editors select peer reviewers according to their own criteria.

Model: Double blind peer review
Description: Both the reviewers and authors remain anonymous.

Model: Open peer review
Description: Both reviewers and authors are known to each other.
Journal examples: First introduced by the BMJ []; BMC series medical journals [].

Model: Re-review opt out
Description: Authors are able to ‘opt out’ of re-review after revisions if reviewers deem the research to be sound.
Journal examples: BMC Biology [].
Selection criteria: As above, but one referee will usually be selected from those nominated by the author.

Model: Collaborative peer review
Description: Peer review includes a stage where the peer reviewers, with or without the editor or authors, take part in a real-time interactive discussion about the manuscript and agree a single set of revisions.
Selection criteria: A member of a ‘Board of Reviewing Editors’ oversees peer review and usually peer reviews themselves; or members of the Editorial Board peer review and use a formal evaluation system.

Model: Portable peer review
Description: Manuscripts which are peer reviewed by one journal, but rejected on grounds of threshold or interest, are transferred together with their peer review reports to other journals whose scope and threshold match the manuscript. This can occur within a publisher or between a consortium of publishers.
Journal examples: BioMed Central [].
Selection criteria: The criteria for selecting peer reviewers are those of the original journal.

Model: Decoupled peer review
Description: Manuscripts are submitted to a peer reviewing service which organizes peer review and provides advice on appropriate journals based on the peer review reports. Journals can also select manuscripts based on the peer review reports.
Journal examples: Axios Review []; Peerage of Science [].
Selection criteria: Criteria can vary. For example, Rubriq: peer reviewers must have a terminal degree in the area of interest, be employed full time in an accredited research university at the level of professor, instructor, postdoctoral fellow or faculty research associate, must have published as first or corresponding author in a peer reviewed academic journal within the last four years, and have prior experience as a journal peer reviewer; a standardized scorecard is used. Peerage of Science: peer reviewers select the manuscripts they wish to review and must be scientists to qualify; peer review reports are themselves reviewed by fellow reviewers; only scientists who have published a peer reviewed scientific article in an established international journal as first or corresponding author are validated as Peers.

Model: Post publication peer review
Description: Manuscripts undergo initial checks and are published. Peer reviewers are then invited. Authors can revise their manuscripts and the revisions are published. If the manuscript ‘passes’ peer review, the article is indexed in databases such as PubMed and Scopus.
Selection criteria: F1000Research: authors are asked to identify five potential referees, who might be from the peer review panel. Author-suggested referees should not have collaborated with the authors in the past five years, should not be from their own institution, and should not be too senior to be likely to undertake such refereeing (they should ideally have authored at least one article in the field as the lead author).
The impetus behind these recent initiatives has been to reduce delays for authors and the burden on reviewers. Their focus is on the process of peer review, in terms of how and when it is done, rather than on the substance and quality of peer review itself or the expertise of the peer reviewer.
The problem with peer review in medicine
Recent innovations in peer review seem to be driven by biologists, with medical research ‘tagging along’. However, systems which might help biological research to thrive might not be appropriate for research that directly influences patient care. There is no agreement on who a ‘peer’ is or what ‘peer review’ actually is []. It is not clear what peer review aims to achieve [], and there is no evidence that peer review works []. Journal instructions for peer reviewers [] and the criteria for eligibility to peer review are variable (Table 1). There has been little evaluation of any of the more recent innovations in peer review against any outcomes. Furthermore, the whole system is based on honesty and trust and, as a consequence, is not designed to detect fraud.
Despite this, peer review is still seen by researchers as important and necessary for scientific communication [], and publication in a peer reviewed medical journal is still the only valid or legitimate route to disseminating clinical research. In 2006, Richard Smith of the BMJ commented that it was ‘odd that science should be rooted in belief’ []. In the world of evidence based medicine, it is astonishing that the evidence on which medical treatment is based rests on such precarious foundations, with so many untested assumptions. Today, a junior doctor still relies on faith in the peer review system when judging a clinical trial, and a patient searching ‘Should I have my wisdom teeth removed if they don’t hurt?’ would get more than a million results on Google (search date 12 May 2014), with no guidance on the relevance or trustworthiness of any of them, leaving them as much in the dark as I was when I first asked that question. The difference is that then, information was simply not available or accessible, whereas now there is so much information of varying quality that it is impossible to make sense of it all without some specialist knowledge. For example, if the lay person knows what to search for (prophylactic extraction of third molar) and which sources they can trust (the Cochrane Library), the relevant information can be found easily. According to a Cochrane review I found [], there is no evidence either way of the benefit of having asymptomatic wisdom teeth removed. I feel reassured that I made the right decision all those years ago. However, not all clinical questions can be answered so easily or can afford the luxury of waiting for a Cochrane systematic review to be done.
When there is no ready-made Cochrane review, a system that provides some sort of quality check for individual studies could be an important aid for patients (and doctors) who need to weigh up, using the available evidence, the risks and benefits of a course of action and make definitive, time-dependent decisions that could be life changing.
A UK Parliamentary enquiry on peer review in 2011 [] concluded that different types of peer review are suitable for different disciplines and encouraged increased recognition that peer-review quality is independent of journal business model. With this in mind, is there a need to redesign peer review specifically for clinical research and ensure that this is driven by the clinical community?
Training and specialization in peer review
With peer review such a vague and undefined process, it is not surprising that, in a survey of peer review conducted by Sense about Science, 56% of reviewers said there was a lack of guidance on how to review and 68% thought formal training would help []. Training and mentoring schemes for peer review have shown little impact [-], and peer reviewer performance has even been found to decline with time []. It may be that by the time a researcher has reached the stage in their career when they start to peer review, it is too late to teach peer review.
Although reporting guidelines have been available for two decades, many researchers and reviewers still do not understand what they are or the need for them. This is further compounded by inconsistent guidance from journals for authors on how to use reporting guidelines [] and a lack of awareness of how they can improve the reporting of RCTs [] and, thereby, aid peer review. There are misunderstandings about trial registration and even what constitutes an RCT. There is evidence that reviewers fail to detect deliberately introduced errors [,] and do not detect deficiencies in reporting methods, sometimes even suggesting inappropriate revisions []. Manuscripts reporting poorly conducted clinical research get published in peer reviewed journals and their findings inform systematic reviews, which in turn could also be poorly conducted and reported. These systematic reviews have the potential to inform clinical judgments.
The need for a concerted effort across disciplines to investigate the effects of peer review has been recognized [], but before the effects can be investigated, the aims of peer review need to be defined. This is a daunting challenge if one aim, or a small number of aims, is intended to fulfill all peer review needs for all fields, specialties and study designs. A more manageable way may be to introduce specialization into peer review, so that specific fields can define the purpose and aims of peer review to suit their own needs and design training to meet those aims.
Since the methodology for conducting and reporting RCTs has been defined by the CONSORT statement [], which improves the reporting of RCTs [] and thereby aids the peer review process, peer review of RCTs lends itself to such specialization. CONSORT could form the framework for the content of a training program and help to define the knowledge and skills that a given individual needs to appraise an RCT critically. Peer reviewers could be taught to spot fundamental flaws and be periodically evaluated to make sure they do, in the same way as for any other knowledge or skill that affects patient care.
To achieve this, major organizations including medical schools, medical regulatory and accreditation organizations (such as the General Medical Council and Royal Colleges in the UK), funding bodies, publishers and journal editors and lay people need to come to a consensus on the definition, purpose, standards and training requirements of peer review of RCTs. Training should begin in medical schools and be ongoing.
By recognizing peer review as a professional skill with measurable standards, separate from the journal, publisher or peer review model, peer review is removed from commercial considerations, peer reviewers get recognition for their work, and researchers, clinicians and patients get some indication of quality on which to base their judgments. Publishers and journals are then free to innovate while still maintaining consistency of peer review for RCTs, editors have clear criteria on which to base their choice of peer reviewer for a given manuscript, and a baseline is set that allows for future research into the effectiveness of peer review per se and comparative studies on the effectiveness and quality of emerging innovations.
While innovations in trial reporting and the peer review process have increased transparency, there has been little progress in defining the aims and effects or improving the quality of peer review itself. There is a vast volume of health information available to the lay person with little or no guidance on its quality or trustworthiness.
Treatment decisions are based on evidence which is itself determined by a system for which there is no evidence of effectiveness. Innovations in peer review that specifically address the quality of peer review and the expertise of the peer reviewer and provide guidance for lay people seeking to inform themselves about their own health related decisions are urgently needed. Formal professional training for peer review of RCTs coupled with a means of identifying RCTs peer reviewed by such trained experts could address these needs.
The focus of this article has been on peer review of evidence-based medicine and RCTs in particular because the consequences of an ill-defined system of peer review are easily understandable by the scientist and the lay person alike. However, the purpose of peer review and a method of training and evaluating peer reviewers could be defined in a similar way for any other type of study design or any other field.
Abbreviations: CONSORT: Consolidated Standards of Reporting Trials; RCT: randomized controlled trial
The author would like to acknowledge Elizabeth Moylan, Biology Editor at BioMed Central, for her detailed comments and suggestions on this manuscript, and the whole Biology and Medical Editors team at BioMed Central for their general advice and comments.
- Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA. 1992, 268: 2420-2425. 10.1001/jama.1992.03490170092032.
- Bero L, Rennie D: The Cochrane Collaboration. Preparing, maintaining, and disseminating systematic reviews of the effects of health care. JAMA. 1995, 274: 1935-1938. 10.1001/jama.1995.03530240045039.
- Randomised trial of cholesterol lowering in 4444 patients with coronary heart disease: the Scandinavian Simvastatin Survival Study (4S). Lancet. 1994, 344: 1383-1389.
- Pyörälä K, Pedersen TR, Kjekshus J, Faergeman O, Olsson AG, Thorgeirsson G: Cholesterol lowering with simvastatin improves prognosis of diabetic patients with coronary heart disease. A subgroup analysis of the Scandinavian Simvastatin Survival Study (4S). Diabetes Care. 1997, 20: 614-620. 10.2337/diacare.20.4.614.
- Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, Pitkin R, Rennie D, Schulz KF, Simel D, Stroup DF: Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA. 1996, 276: 637-639. 10.1001/jama.1996.03540080059030.
- The EQUATOR Network, [http://www.equator-network.org/]
- De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJ, Schroeder TV, Sox HC, Van Der Weyden MB: Clinical trial registration: a statement from the International Committee of Medical Journal Editors. N Engl J Med. 2004, 351: 1250-1251. 10.1056/NEJMe048225.
- AllTrials, [http://www.alltrials.net/]
- Wager L, Godlee F, Jefferson T: What is peer review? In How to Survive Peer Review. UK: BMJ Books; 2002. Chapter 2.
- Weller AC: Introduction to the editorial review process. In Editorial Peer Review: Its Strengths and Weaknesses. 2nd edition. USA: Information Today Inc; 2002: 15. Chapter 1.
- Smith R: Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006, 99: 178-182. 10.1258/jrsm.99.4.178.
- van Rooyen S, Godlee F, Evans S, Smith R, Black N: Effect of blinding and unmasking on the quality of peer review: a randomized trial. JAMA. 1998, 280: 234-237. 10.1001/jama.280.3.234.
- van Rooyen S, Godlee F, Evans S, Black N, Smith R: Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial. BMJ. 1999, 318: 23-27. 10.1136/bmj.318.7175.23.
- Smith R: Opening up BMJ peer review. BMJ. 1999, 318: 4. 10.1136/bmj.318.7175.4.
- Anthes E: Former NIH Director on Open Access: Harold Varmus: Public Library of Science. [http://seedmagazine.com/content/article/harold_varmus_public_library_of_science/]
- BioMed Central, [http://www.biomedcentral.com/]
- PLoS One, [http://www.plosone.org/]
- Ware M, Mabe M: The STM Report: An Overview of Scientific and Scholarly Journal Publishing. UK: International Association of Scientific, Technical and Medical Publishers; 2012.
- Is Peer Review Broken?, [http://www.biomedcentral.com/biome/video-is-peer-review-broken/]
- Robertson M: Re-review opt out and painless publishing. BMC Biology. 2013, 11: 18. 10.1186/1741-7007-11-18.
- F1000Research, [http://f1000research.com/author-guidelines]
- Peerage of Science, [http://www.peerageofscience.org/]
- Axios Review, [http://axiosreview.org/the-process/]
- Rubriq, [http://www.rubriq.com/]
- BioMed Central transfers, [http://www.biomedcentral.com/authors/transferfaq]
- eLife, [http://elifesciences.org/about#process]
- Frontiers, [http://www.frontiersin.org/about/reviewsystem]
- Jefferson TO, Alderson P, Wager E, Davidoff F: Effects of editorial peer review: a systematic review. JAMA. 2002, 287: 2784-2786. 10.1001/jama.287.21.2784.
- Jefferson T, Rudin M, Brodney Folse S, Davidoff F: Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev. 2007, 18: MR000016.
- Hirst A, Altman DG: Are peer reviewers encouraged to use reporting guidelines? A survey of 116 health research journals. PLoS One. 2012, 7: e35621.
- Mulligan A, Hall L, Raphael E: Peer review in a changing world: an international study measuring the attitudes of researchers. JASIST. 2013, 64: 132-161. 10.1002/asi.22798.
- Mettes TD, Ghaeminia H, Nienhuijs ME, Perry J, van der Sanden WJ, Plasschaert A: Surgical removal versus retention for the management of asymptomatic impacted wisdom teeth. Cochrane Database Syst Rev. 2012, 13: CD003879.
- House of Commons Science and Technology Committee: Peer review in scientific publications. Eighth Report of Session 2010-12. UK.
- Schroter S, Black N, Evans S, Carpenter J, Godlee F, Smith R: Effects of training on quality of peer review: randomised controlled trial. BMJ. 2004, 328: 673. 10.1136/bmj.38023.700775.AE.
- Houry D, Green S, Callaham ML: Does mentoring new peer reviewers improve review quality? A randomized trial. BMC Med Educ. 2012, 12: 83. 10.1186/1472-6920-12-83.
- Callaham ML, Tercier J: The relationship of previous training and experience of journal peer reviewers to subsequent review quality. PLoS Med. 2007, 4: e40. 10.1371/journal.pmed.0040040.
- Schroter S, Black N, Evans S, Godlee F, Osorio L, Smith R: What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med. 2008, 101: 507-514. 10.1258/jrsm.2008.080062.
- Callaham M, McCulloch C: Longitudinal trends in the performance of scientific peer reviewers. Ann Emerg Med. 2011, 57: 141-148. 10.1016/j.annemergmed.2010.07.027.
- Turner L, Shamseer L, Altman DG, Schulz KF, Moher D: Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Syst Rev. 2012, 1: 60. 10.1186/2046-4053-1-60.
- Hopewell S, Collins GS, Boutron I, Yu LM, Cook J, Shanyinde M, Wharton R, Shamseer L, Altman DG: Impact of peer review on reports of randomised trials published in open peer review journals: retrospective before and after study. BMJ. 2014, 349: g4145. 10.1136/bmj.g4145.
- Schulz KF, Altman DG, Moher D: CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med. 2010, 152: 726-732. 10.7326/0003-4819-152-11-201006010-00232.
- BSI Kitemark, [http://www.bsigroup.com/en-GB/our-services/product-certification/kitemark/]
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.