This article has Open Peer Review reports available.
Increasing the evidence base in journalology: creating an international best practice journal research network
© The Author(s). 2016
Received: 14 July 2016
Accepted: 29 September 2016
Published: 10 October 2016
Biomedical journals continue to be the single most important conduit for disseminating biomedical knowledge. Unlike clinical medicine, where evidence is considered fundamental to practice, journals still operate largely in a ‘black box’ mode without sufficient evidence to drive their practice. We believe there is an immediate need to substantially increase the amount and quality of research by journals to ensure their practice is as evidence based as possible. To achieve this goal, we are proposing the development of an international ‘best practice journal research network’. We invite journals and others to join the network. Such a network is likely to improve the quality of journals and to provide robust, generalizable answers to many unanswered questions in publication science, including questions about peer review.
Biomedical journals remain the central conduit through which biomedical knowledge is disseminated; this is likely to continue for the foreseeable future regardless of the publishing models used for dissemination. However, even a cursory scan of what journals publish raises concern – the quality of reporting of peer-reviewed and editor-approved clinical and preclinical research is appalling [1–5]. If reports of clinical research are unusable, this substantially reduces the confidence clinicians can have in using best evidence to inform best practice, thereby directly affecting patient care.
The reasons behind the poor completeness of reporting of biomedical research are complex and involve various players – publishers, journals, and scientific editors are partially responsible. For example, editors do not explicitly recommend the use of reporting guidelines as part of the review process, despite emerging evidence indicating that their use is associated with more complete reporting [7, 8]. Further, there appears to be a knowledge gap for scientific editors and peer reviewers; in one study, more than a third of manuscripts submitted to 46 journals for publication consideration were inappropriately classified as randomized trials by the editorial offices. Additionally, there is still no agreement regarding the effectiveness of peer review. Unlike clinical medicine, where evidence is considered fundamental to practice, journals still operate largely in a ‘black box’ mode without sufficient evidence to drive their practice.
Innovative editors have not stood still. In 1989, championed by Drummond Rennie and JAMA, the first International Congress on Peer Review and Biomedical Publication was held, its remit being to investigate and better understand the scientific process of publication. The congress, held every 4 years and jointly coordinated with the BMJ, is an important forum for sharing journalology research. The eighth congress will be held in Chicago in 2017. Likewise, in 1994, Richard Smith, editor-in-chief of the BMJ, established Locknet, with a remit to focus on research issues related to the peer review process. Locknet is not currently active (Richard Smith, personal communication), although the BMJ and JAMA both have a long tradition of collaborative journalology research.
The evidence base underlying journalology is too small, consisting largely of descriptive studies with few observational and quasi-experimental studies. An underwhelming number of randomized trials have addressed ‘what works’ questions in journalology and in a related field of investigation, research on research (i.e., meta-research). Therefore, we are interested in finding ways to substantially increase the amount of research conducted by journals in journalology and meta-research. We believe there are models in clinical medicine that could be successfully adapted for journals to develop and sustain an international biomedical journal research network (http://or.org/pdf/NIH_Roadmap-ClinicalResearch.pdf). Similarly, the Army of Women (https://www.armyofwomen.org/) has proven to be an effective model for increasing participation in breast cancer trials and might be a useful model to consider in order to ensure the participation of sufficient journals in journalology research.
It is likely that many journal editors have limited experience in participating in research studies and may feel they lack the expertise to conduct and participate in such studies. Therefore, we think that any journal wanting to join the network should ‘buddy’ with an established research team involved in this domain. Journals are best placed to consider how such an arrangement might work optimally, and arrangements might differ across journals; one option to consider is appointing a member of the research team as a member of a journal’s scientific editorial team (http://eyes.cochrane.org/partnerships-eyes-and-vision-journals).
Journals might be interested in joining the network, and in conducting and participating in research, for a variety of reasons – it is likely the best way of improving the quality of their journal (clinical centres participating in research provide better care, and their participants have better outcomes); it would enhance journal personnel’s expertise and experience in research, including clinical trials, which in turn might also help them review similar submissions to their journals; and it would help to more reliably answer relevant questions about journalology.
A journal research network can play an important role in enhancing the science of journalology and meta-research. Many researchers have addressed questions around whether peer review is effective; to date, there is no clear, definitive answer. Too few studies have been conducted and, among those published, there is little agreement about optimal designs and best primary outcome(s). Our community might benefit from experiences in clinical medicine by developing core outcome sets across a range of journalology topics, including peer review, and by participating in the COMET initiative. Similarly, we might benefit from involving members of the public and other advocacy groups in our research, following similar developments in clinical medicine.
We have discussed the idea of developing a network of journals for conducting research with a few colleagues in leadership positions at several journals (see Acknowledgements). These journals have welcomed this development and have agreed to help establish the initiative. While editors are crucial to help ensure the development and success of the network, other groups are also needed. Perhaps editors can promote the network to their publishers. Publishers bring an important cultural and policy authority to the table; they can be a powerful voice in enabling new perspectives such as data sharing. Similarly, they will be an important voice in taking new evidence generated from the network and implementing best practices in their journal(s). Funders of biomedical research are also important to include in the network. Their investment in research has often resulted in substantial waste in the form of unusable publications. It is safe to assume their interests lie in increasing the research value of their investments, ensuring the public’s funding of research is being used optimally.
By joining the network, journals commit to engage in discussions about ways to improve the quality of published research. Since this initiative is new, it is difficult to be more precise as to how the network will evolve and mature over time. However, for the immediate future, the practical goals are to meet virtually, gauge interest in developing audit and feedback mechanisms to monitor the quality of journal publications, and provide opportunities for journals to participate in various research projects. The primary remit of the network is to substantially increase the amount of research, including clinical trials, addressing relevant questions in journalology. We invite journals to join the “Best Practice Journal Research Network” by completing a simple registration form (http://www.bpjrn.com/). An important outcome of this increased activity will be to provide more data and robust research to inform an evidence-based approach to running the scientific aspects of journals.
There are still many unanswered questions to which research can provide robust and generalizable answers (Box 1). We know there are many other questions that need to be answered, and we invite others to submit questions via the network website. This can provide a venue to refine the research question and proposed methodology, and to conduct and report such research optimally. Similarly, in clinical medicine, multi-centre research often provides greater generalizability of results. While conducting research at a single journal is a good starting point, we believe such research is more generalizable when conducted across multiple journals. We think a network of journals working together will increase the number of multi-centre studies in journalology.
Recognizing the importance of access to data for biomedical research, editorial groups are now advocating for this in clinical medicine [20, 21]. Editors and publishers also need to strongly advocate for data sharing to help promote research within their own community. For example, several questions concerning peer review can only be meaningfully answered with access to journal peer review reports. By analogy with clinical data, we think that peer review data – much of which concerns publicly funded research – are a public good, produced in the public interest, and should be openly available to the maximum extent possible. While access is already possible in some open access journals, all journals need to develop a mechanism to ensure access to all of their peer review reports. It would be unfortunate to advocate for data sharing in clinical medicine and neglect similar needs of journalology researchers.
How might a journal research network move forward? One possibility is to use a template similar to that successfully used by the Committee on Publication Ethics. We have started the network with the development of a website. We hope this will facilitate international outreach, such as to the Council of Science Editors, the Asia Pacific Association of Medical Journal Editors, and other groups and interested individuals. A few of the network’s ideas possibly overlap with those of the Responsible Research and Innovation movement. Links to this group and others are also important to pursue.
We hope publishers, editors, funders, and others will join the network. The small group of editors initially approached will meet shortly and will reach out more broadly thereafter. Like any new initiative, we will have to work hard to keep the network practical and of interest to a very broad spectrum of editors and publishers, and funding opportunities will need to be sought to facilitate the initial development of the network and to sustain it. Journals participating in research will also need intellectual and fiscal support to participate in research studies.
Journals need best evidence to inform their best practice, which will increase confidence in how journals function as well as help reduce the enormous waste in the system. We invite interested journals and their publishers to join us on this adventure.
Box 1 Examples of unanswered questions in journalology

Does scientific editor training in core competencies result in more use of reporting guidelines by journal editors and peer reviewers?
Do reporting guidelines improve the completeness of reporting observational studies?
Does a formal assessment of clinical trial protocols during peer review of completed trial reports reduce switched outcomes when these reports are published?
Does better peer review reduce the number of retractions?
We would like to acknowledge feedback to earlier versions of this commentary from Drs. Howard Bauchner, Trish Groves, Drummond Rennie, and Richard Smith. We acknowledge the editors of the following journals who have agreed to discuss the development of the network: Annals of Internal Medicine (Drs. Jaya Rao and Catherine Stack), BMC Medicine (Dr. Sabina Alam), BMJ and BMJ Open (Drs. Adrian Aldcroft and Sara Schroter), The Cochrane Database of Systematic Reviews (Drs. David Tovey and Harriet MacLehose), JAMA (Dr. Howard Bauchner), The Lancet (Dr. Jocalyn Clark), and PLoS Medicine (Dr. Larry Peiperl).
DM and PR conceived of the idea. DM wrote the first draft. DM and PR edited the draft and both read and approved the final version.
DM directs a centre for journalology (publication science). He is interested in a broad spectrum of methods (e.g., training editors and peer reviewers, generating evidence to inform best practice in journals, reporting guidelines) to improve the completeness and transparency of published biomedical research. PR is a professor of epidemiology at Paris Descartes University and adjunct professor at the Mailman School of Public Health (Columbia University). He is director of the INSERM Epidemiology and Biostatistics Research Center (Sorbonne Paris Cité), the French Cochrane Centre, and the French EQUATOR center.
DM is a member of BMC Medicine’s editorial board. He is also co-editor-in-chief of Systematic Reviews, a BioMed Central (BMC) journal. Finally, DM has also received financial support from BMC for an unrelated project.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
1. Yordanov Y, Dechartres A, Porcher R, et al. Avoidable waste of research related to inadequate methods in clinical trials. BMJ. 2015;350:h809.
2. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349(6251):aac4716.
3. Macleod MR, Lawson MA, Kyriakopoulou A, et al. Risk of bias in reports of in vivo research: a focus for improvement. PLoS Biology. 2015;13(10):e1002273.
4. Dwan K, Altman DG, Clarke M, et al. Evidence for the selective reporting of analyses and discrepancies in clinical trials: a systematic review of cohort studies of clinical trials. PLoS Medicine. 2014;11(6):e1001666.
5. Page MJ, Shamseer L, Altman DG, et al. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Medicine. 2016;13(5):e1002028.
6. Hirst A, Altman DG. Are peer reviewers encouraged to use reporting guidelines? A survey of 116 health research journals. PLoS One. 2012;7(4):e35621.
7. Cobo E, Cortes J, Ribera JM, et al. Effect of using reporting guidelines during peer review on quality of final manuscripts submitted to a biomedical journal: masked randomised trial. BMJ. 2011;343:d6783.
8. Turner L, Shamseer L, Altman DG, et al. Consolidated standards of reporting trials (CONSORT) and the completeness of reporting of randomised controlled trials (RCTs) published in medical journals. Cochrane Database Syst Rev. 2012;11:MR000030.
9. Galipeau J, Moher D, Campbell C, et al. A systematic review highlights a knowledge gap regarding the effectiveness of health-related training programs in journalology. Journal of Clinical Epidemiology. 2015;68(3):257–65.
10. Ravaud P. Web-based Tool to Improve Reporting of Randomized Controlled Trials (WebCONSORT). 2016. https://clinicaltrials.gov/ct2/show/NCT01891448. Accessed 9 June 2016.
11. Garfield E. Stephen P. Lock on “Journalology”. Curr Comment. 1990;3:19–24.
12. Rennie D, Flanagin A, Godlee F, et al. Eighth international congress on peer review in biomedical publication. BMJ. 2015;350:h2411.
13. van Rooyen S. A critical examination of the peer review process. Learned Publish. 1998;11(3):185–91.
14. Malicki M, von Elm E, Marusic A. Study design, publication outcome, and funding of research presented at international congresses on peer review and biomedical publication. JAMA. 2014;311(10):1065–7.
15. Kousta S, Ferguson C, Ganley E. Meta-research: broadening the scope of PLOS Biology. PLoS Biology. 2016;14(1):e1002334.
16. Lannon CM, Peterson LE. Pediatric collaborative networks for quality improvement and research. Academic Pediatrics. 2013;13(6 Suppl):S69–74.
17. Bruce R, Chauvin A, Trinquart L, Ravaud P, Boutron I. Impact of interventions to improve the quality of peer review of biomedical journals: a systematic review and meta-analysis. BMC Medicine. 2016;14:85.
18. Williamson PR, Altman DG, Blazeby JM, et al. Developing core outcome sets for clinical trials: issues to consider. Trials. 2012;13:132.
19. Kleinert S, Horton R. How should medical science change? Lancet. 2014;383(9913):197–8.
20. Taichman DB, Backus J, Baethge C, et al. Sharing clinical trial data: a proposal from the International Committee of Medical Journal Editors. PLoS Medicine. 2016;13(1):e1001950.
21. The European Association of Science Editors (EASE). EASE statement on data sharing, 4 April 2016. http://www.ease.org.uk/wp-content/uploads/2016/05/data-sharing-statement.pdf. Accessed 9 June 2016.
22. Owen R, Macnaghten P, Stilgoe J. Responsible research and innovation: from science in society to science for society, with society. Science and Public Policy. 2012;39:751–60.