
Designs of trials assessing interventions to improve the peer review process: a vignette-based survey

Abstract

Background

We aimed to determine the best study designs for assessing interventions to improve the peer review process according to experts’ opinions. Furthermore, for interventions previously evaluated, we determined whether the study designs actually used were rated as the best study designs.

Methods

Study design: A series of six vignette-based surveys exploring the best study designs for six different interventions (training peer reviewers, adding an expert to the peer review process, use of reporting guidelines checklists, blinding peer reviewers to the results (i.e., results-free peer review), giving incentives to peer reviewers, and post-publication peer review).

Vignette construction: Vignettes were case scenarios of trials assessing interventions aimed at improving the quality of peer review. For each intervention, the vignette included the study type (e.g., randomized controlled trial [RCT]), setting (e.g., single biomedical journal), and type of manuscript assessed (e.g., actual manuscripts received by the journal); each of these three features varied between vignettes.

Participants: Researchers with expertise in peer review or methodology of clinical trials.

Outcome: Participants were presented with two vignettes describing two different study designs to assess the same intervention and had to indicate which study design they preferred on a scale from −5 (preference for study A) to 5 (preference for study B), with 0 indicating no preference between the suggested designs (primary outcome). Secondary outcomes were trust in the results and feasibility of the designs.

Results

A total of 204 experts assessed 1044 paired comparisons. The preferred study type was RCTs with randomization of manuscripts for four interventions (adding an expert, use of reporting guidelines checklist, results-free peer review, post-publication peer review) and RCTs with randomization of peer reviewers for two interventions (training peer reviewers and using incentives). The preferred setting was mainly several biomedical journals from different publishers, and the preferred type of manuscript was actual manuscripts submitted to journals. However, the most feasible designs were often cluster RCTs and interrupted time series analysis set in a single biomedical journal, with the assessment of a fabricated manuscript. Three interventions were previously assessed: none used the design rated first in preference by experts.

Conclusion

The vignette-based survey allowed us to identify the best study designs for assessing different interventions to improve peer review according to experts’ opinion. There is a gap between the preferred study designs and the designs actually used.


Background

The peer review process is the cornerstone of research [1,2,3]. This process aims to provide a method for rational, fair, and objective decision-making and to raise the quality of publications. However, it is increasingly being questioned [4]. The primary functions of peer reviewers are poorly defined, and editors and peer reviewers often have differing expectations of manuscripts [5]. Peer review frequently fails to be objective, rational, and free of prejudice [6]. Flawed and misleading articles are still being published [7]. Less than half of biomedical academics think that the peer review process is fair, scientific, or transparent [8]. Studies have highlighted several limitations of peer review [9,10,11], including its limited ability to detect errors and fraud, to improve the completeness of reporting [12], or to reduce the distortion of study results [13].

Some interventions developed and implemented by editors to improve the quality of peer review include blinding the peer reviewer to the author’s identity, using open peer review, or training peer reviewers [14]. However, research evaluating these interventions with an experimental design is scarce [15]. Furthermore, assessing these interventions can raise important methodological issues related to the choice of study type, setting, and type of manuscript being evaluated [15].

Here, we used a vignette-based survey of experts to determine the best study designs for assessing interventions to improve the peer review process according to experts’ opinions. Furthermore, for interventions that were previously evaluated [15], we determined whether the study designs actually used were the study designs experts preferred.

Methods

Study design

We performed a series of vignette-based surveys. A vignette can be defined as a hypothetical situation for which research participants are asked a set of directed questions to reveal their values and perceptions. The vignette-based survey has been found useful in different biomedical fields. It is frequently used to examine judgments and decision-making processes and to evaluate clinical practices [16, 17]. The method has also been used to identify the best trial designs for methodological questions [18, 19]. In this study, vignettes were case scenarios of trials assessing different interventions aimed at improving the quality of peer review.

Vignette construction

To build the vignettes, we performed a methodological review to identify a variety of interventions for improving peer review.

Methodological review

We searched MEDLINE (via PubMed), with no restriction on language or date of publication. Our search strategy relied on the search terms “peer review,” “peer reviews,” “peer reviewer,” or “peer reviewers” in the title. We included all experimental designs evaluating any intervention aiming to improve the quality of the peer review process in biomedical journals. We also included all articles (including editorials and comments) highlighting an intervention to improve the peer review process. The titles and abstracts of papers were screened for eligibility by one researcher (AH).
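For illustration, the title-restricted search could be expressed as the following PubMed query; the exact syntax of the search string is not reported in the article, so this form is an assumption based on the terms listed above.

```
"peer review"[Title] OR "peer reviews"[Title] OR "peer reviewer"[Title] OR "peer reviewers"[Title]
```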

A total of 12 interventions were identified. Interventions were classified according to their goal (Fig. 1): (1) to improve the accuracy of peer review (i.e., training; adding a specialist to peer review; using checklists); (2) to avoid bias and increase transparency (i.e., blinding; open peer review); (3) to reduce the duration of the peer review process (i.e., using communication media; early screening; use of incentives such as payment); and (4) to make peer review a team effort (i.e., using the wisdom of the crowd, such as post-publication peer review and expert collaboration).

Fig. 1 Interventions for peer review identified and classified. Interventions selected to be explored in the vignette-based survey are highlighted in a white box

Six different interventions were selected: training peer reviewers, adding an expert to the peer review process, use of reporting guidelines checklists, blinding peer reviewers to the results (i.e., results-free peer review), giving incentives to peer reviewers, and post-publication peer review. These interventions are described in Table 1.

Table 1 Interventions included in the vignette-based survey

The choice of these interventions took into account the following factors: having at least one intervention within each group and making sure that the interventions’ assessment raised different methodological issues and consequently required different types of study design. For this purpose, we selected interventions that targeted the peer reviewers (e.g., training, incentives) or the manuscript (e.g., adding a specialist) or involved important changes in the process (e.g., post-publication peer review). Furthermore, we favored interventions that we believed were important in terms of their goal (improving the accuracy of peer review and avoiding bias), were implemented but never tested (blinding peer reviewers to results; post-publication peer review), or were frequently suggested (use of incentives).

More specifically, we decided to consider three interventions aimed at improving the accuracy of peer review (i.e., training, adding a specialist to peer review, using checklists), which we believe is a very important goal of peer review. The intervention “results-free peer review” was selected because of clear evidence of outcome bias in the peer review process [20], and some editors (e.g., BMC Psychology) have implemented this new form of review. Nevertheless, the intervention has never been evaluated. Use of incentives is regularly highlighted as being essential to improve the peer review process, and some initiatives, such as Publons, are being implemented. Finally, post-publication peer review is widely implemented in some fields and is increasingly being used in biomedical research with specific publishers such as F1000. However, this new process has never been evaluated.

Vignette content

The vignettes were structured in two parts as shown in Fig. 2. The first part described the study objective. It included the description of the intervention, the comparator (i.e., usual process of peer review), and the main outcome measure (i.e., quality of the peer review report or quality of the manuscript revised by the authors according to the type of intervention assessed) and remained unchanged for all vignettes.

Fig. 2 Template of the vignette and survey questions

The second part of the vignette described the study design considering three different features: the study type, setting, and type of manuscript assessed by the peer reviewer when appropriate; each of these three features varied among the vignettes (Fig. 2). The study type could be an RCT randomizing manuscripts, an RCT randomizing peer reviewers, a cluster RCT randomizing journals, an interrupted time series analysis, a pairwise comparison, or a stepped wedge cluster RCT with randomization of journals (Table 2). The setting could be a single biomedical journal, several biomedical journals from a single publisher, or several biomedical journals from several publishers. The type of manuscript assessed by the peer reviewer could be the actual manuscripts received by the journal(s) or a fabricated manuscript that purposely included methodological issues, errors, and poorly reported items.

Table 2 Study types

All possible combinations of designs were generated, and two methodologists assessed each design to exclude implausible and contradictory ones. In particular, we considered a single type of manuscript (i.e., actual manuscripts received by the journal[s]) for the following interventions: adding an expert to the peer review process, use of reporting guidelines checklists, use of incentives, and post-publication peer review.
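As a sketch of this step, the following R code (R being the software used for the analyses) enumerates the full factorial set of design features and removes implausible combinations; the feature labels are taken from the vignette description above, but the exclusion rule shown is only a hypothetical example, not the actual judgment made by the two methodologists.

```r
# Enumerate candidate vignette designs for one intervention (illustrative sketch)
study_types <- c("RCT randomizing manuscripts", "RCT randomizing peer reviewers",
                 "Cluster RCT randomizing journals", "Interrupted time series",
                 "Pairwise comparison", "Stepped wedge cluster RCT of journals")
settings <- c("Single journal", "Several journals, one publisher",
              "Several journals, several publishers")
manuscripts <- c("Actual manuscripts received by the journal(s)",
                 "Fabricated manuscript")

designs <- expand.grid(study_type = study_types, setting = settings,
                       manuscript = manuscripts, stringsAsFactors = FALSE)

# Hypothetical exclusion rule: designs that randomize journals are
# incompatible with a single-journal setting (the real exclusions were
# decided case by case by two methodologists, not by a rule like this one).
implausible <- designs$setting == "Single journal" &
  grepl("journals", designs$study_type)
designs <- designs[!implausible, ]
nrow(designs)  # number of plausible combinations retained
```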

Participants

Our target population consisted of researchers with expertise in the field of peer review or the methodology of clinical trials. To recruit such participants, we searched for the email addresses of all the authors of the papers included in our review. We also identified and searched for the email addresses of participants of the 2013 Peer Review Congress; members of the Editorial Boards of the five journals with the highest impact factors, the Journal of Clinical Epidemiology, and Public Library of Science (PLOS) Medicine; and members of the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network, the REduce research Waste And Reward Diligence (REWARD) Alliance, the METHODS Cochrane Group, the Methods in Research on Research (MiRoR) project, Trial Forge, and the Meta-Research Innovation Center at Stanford (METRICS). The full list is available in Additional file 1: Appendix 1.

Surveys

A total of 94 vignettes were included in the study: 24 for training, 24 for results-free peer review, 13 for the use of reporting guidelines checklist, 10 for adding an expert to the process, 13 for the use of incentives, and 10 for post-publication peer review. Participants received an invitation via email with a personalized link to the survey. On the home page of the website, participants were informed that the data collected were anonymous and were asked to give their informed consent before starting the questionnaire. A maximum of three reminders were sent to participants, and no incentive was used to maximize the response rate. Participants were presented with two vignettes describing two different study designs to assess the same intervention and had to indicate which study design they preferred (Fig. 2). Each participant was invited to evaluate six pairs of vignettes for a given intervention.

Sample size

From a pragmatic point of view, we wanted each pair of vignettes to be assessed by participants at least once. For the interventions with fewer than 20 vignettes, we planned for each pair of vignettes to be assessed twice, to increase the number of evaluations per vignette. Therefore, to assess all pairs of vignettes (n = 1044 in total: 276 each for training and results-free peer review, 156 each for the use of reporting guidelines checklist and the use of incentives, and 90 each for adding an expert to the process and for post-publication peer review), and assuming each participant would assess six pairs of vignettes, we needed a minimum of 174 participants. If participants could not evaluate six pairs of vignettes, other participants were recruited.
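The total of 1044 pairs and the minimum of 174 participants follow directly from the numbers of vignettes per intervention; a short check in R (variable names are ours):

```r
# Pairs per intervention: choose(n, 2), doubled when an intervention has
# fewer than 20 vignettes (each pair is then assessed twice)
n_vignettes <- c(training = 24, results_free = 24, checklist = 13,
                 incentives = 13, expert = 10, post_publication = 10)
pairs <- choose(n_vignettes, 2) * ifelse(n_vignettes < 20, 2, 1)
pairs           # 276 276 156 156 90 90
sum(pairs)      # 1044 pairs in total
sum(pairs) / 6  # 174 participants if each assesses six pairs
```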

Ranking of the study designs actually implemented

Using the results of our methodological review, we determined how the study designs actually used were ranked in our survey. For this purpose, we extracted the study type, setting, and type of manuscript used to assess these interventions in the review.

Outcomes

Our main outcome was the overall preference for a study design. Participants had to answer the following question: “If you had to conduct this trial, which study would you choose?” on a semantic differential scale from −5 (preference for study A) to 5 (preference for study B), with 0 indicating no preference between the suggested designs.

Other outcomes were the rankings for trust in the results and feasibility, measured by using the same scale. The questions asked were as follows:

  • “If you read the results of this study, which study would you trust most?”

  • “Which protocol is logistically simpler to set up?”

Participants had the opportunity to leave comments if they wished to.

Statistical analysis

Answers to the online questionnaire were collected through the website. The results were recorded in a .csv file and analyzed with R v3.2.2 (http://www.R-project.org, the R Foundation for Statistical Computing, Vienna, Austria) and SAS 9.4 (SAS Institute Inc., Cary, NC). For each intervention and each outcome (overall preference, trust in results, and feasibility), the mean score for each vignette was calculated for each combination of designs in order to obtain a ranking. For each intervention, we used a linear mixed model to assess the association between each outcome and the following three fixed effects: study type, setting, and type of manuscript. The reading order of the two vignettes of a pair was added as a fourth fixed effect. To account for correlation between vignettes, an intercept term that randomly varied at the level of the vignette effect was included in the model. To account for correlation within vignette pairs (at each comparison, the two vignettes have exactly opposite scores), we bootstrapped pairs, with 1000 replications of the original sample, to estimate the parameters (and 95% confidence intervals) of the model. Correlation due to respondents was found to be null, so it was not modeled.
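A minimal sketch of this analysis in R, using the lme4 package, is shown below; the article does not report the packages or variable names used, so both are assumptions, and the toy data only illustrate the expected data layout.

```r
library(lme4)

# Toy data standing in for the survey answers: one row per vignette within a
# pair, with the preference score (-5..5), the three design features, the
# reading order, and pair/vignette identifiers (all names are assumed).
set.seed(1)
dat <- data.frame(
  pair_id     = rep(1:100, each = 2),
  vignette_id = factor(sample(1:24, 200, replace = TRUE)),
  study_type  = factor(sample(c("RCT manuscripts", "RCT reviewers", "Cluster RCT"),
                              200, replace = TRUE)),
  setting     = factor(sample(c("Single journal", "Several publishers"),
                              200, replace = TRUE)),
  manuscript  = factor(sample(c("Actual", "Fabricated"), 200, replace = TRUE)),
  read_order  = factor(rep(c("first", "second"), 100)),
  score       = sample(-5:5, 200, replace = TRUE)
)

# Linear mixed model: the three design features plus reading order as fixed
# effects, and a random intercept for the vignette.
fit_fixef <- function(d) {
  m <- lmer(score ~ study_type + setting + manuscript + read_order +
              (1 | vignette_id), data = d)
  fixef(m)  # mean differences vs. the reference level of each feature
}

# Bootstrap whole pairs (the paper used 1000 replications) and take
# percentile 95% confidence intervals of the fixed effects.
boot_fits <- replicate(200, {
  ids <- sample(unique(dat$pair_id), replace = TRUE)
  fit_fixef(do.call(rbind, lapply(ids, function(p) dat[dat$pair_id == p, ])))
})
apply(boot_fits, 1, quantile, probs = c(0.025, 0.975))
```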

Results

Participants

Between May 11, 2017, and July 31, 2017, 1037 people were contacted in waves until all pairs of vignettes had been evaluated. Of the 331 people who clicked on the link, 210 gave their consent, and 204 completed the survey (Table 3). Participants were located mainly in Europe (n = 114, 56%) and North America (n = 72, 35%). Two thirds worked as methodologists (n = 135, 66%), and about half were trialists (n = 99, 49%) or editors (n = 102, 50%).

Table 3 Baseline demographics and other characteristics of participants (n = 204)

Vignette-based surveys

Additional file 1: Appendix 2 summarizes the results in a spider diagram of mean vignette scores per intervention in terms of overall preference, trust in the results, and feasibility.

Preferred study designs

Additional file 1: Appendix 3 provides the mean score for each vignette for each combination of features (i.e., study type, setting, manuscript type). Table 4 reports the factors associated with overall preference for each study design feature (study type, setting, type of manuscript). For each feature, we arbitrarily identified a reference (stepped wedge cluster RCT with randomization of journals for the study type, several biomedical journals from different publishers for the setting, and one fabricated manuscript for the type of manuscript). The parameter reported is the mean difference in overall preference associated with each category of independent variable as compared with the reference (after adjusting for all other variables).

Table 4 Results—factors associated with overall preference for each study design feature: parameter estimates [and 95% confidence intervals]. For each independent variable, parameter estimates represent mean difference in overall preference associated with each category of independent variable as compared with the reference (after adjusting for all other variables in the table and after taking into account the reading order of the 2 vignettes of the pair)

Overall, the preferred study type was RCTs with randomization of manuscripts for four interventions (adding an expert, use of reporting guidelines checklist, results-free peer review, post-publication peer review) and RCTs with randomization of peer reviewers for two interventions (training peer reviewers and using incentives), with adjustment for all other variables. The preferred setting was mainly several biomedical journals from different publishers, and the preferred type of manuscript was actual manuscripts submitted to journals.

Other designs, such as the stepped wedge cluster RCT of journals or the interrupted time series analysis, scored low.

Trust and feasibility

Additional file 1: Appendices 4 and 5 provide the mean score for each vignette for each combination of features (i.e., study type, setting, manuscript type) for trust and feasibility. After adjustment for all other variables, the most trusted study designs were consistent with the preferred study designs for all interventions (Additional file 1: Appendix 6). In contrast, the study designs rated first in terms of feasibility were not the preferred study designs (Additional file 1: Appendix 7). The study types rated most feasible were a pairwise comparison for training peer reviewers (rated third in overall preference), a cluster RCT with randomization of journals for results-free peer review and use of reporting guidelines checklists (rated fourth and third in overall preference, respectively), and an interrupted time series analysis for adding an expert to the peer review process, using incentives, and post-publication peer review (rated last, third, and third in overall preference, respectively). The most feasible setting and type of manuscript were mainly a single biomedical journal and use of a fabricated manuscript.

Ranking of the study designs actually implemented

The ranking of the study designs actually implemented is reported in Table 5. Our review identified no studies assessing results-free peer review, use of incentives, or post-publication peer review; five RCTs and one cross-sectional study assessing training; two RCTs assessing use of reporting guidelines checklists; and two RCTs assessing adding an expert. None used the designs rated first by experts in terms of preference, and none were ranked in the top quarter. This low ranking was mainly related to the choice of setting.

Table 5 Ranking of the study designs of the RCTs identified in the methodological review of interventions to improve the peer review process, according to experts

Discussion

The peer review process is central to the publication of scientific articles. Our series of vignette-based surveys attempted to overcome the methodological problems of performing research on research by assembling a panel of experts on this research question and using their collective wisdom to identify the best designs. We created 94 vignettes of different study designs for 6 different interventions. Overall, 204 experts in peer review or the methodology of clinical trials assessed 1044 paired comparisons of designs, rating each pair in terms of overall preference, trust in the results, and feasibility of the study. We thus identified the study designs preferred by experts. We did not specify what should be considered the “best” study design because we wanted to give full freedom to the experts and let them balance the different features of the designs in terms of internal validity, external validity, and feasibility.

Our study has important strengths. We performed a methodological review to identify interventions for improving peer review and to classify them according to their effect on the peer review process. Participants, with expertise as methodologists, editors, or trialists or with involvement in research on peer review, were well suited to compare and score the vignettes. The vignette-based survey we used is an innovative study design [18], which, to our knowledge, has never been used in the context of peer review. This method also allowed experts to discuss the pros and cons of each design. Table 6 provides the notable characteristics of the preferred study designs for each intervention.

Table 6 Notable characteristics of the preferred designs for each intervention

Our results revealed that the preferred designs were often very similar to the most trusted designs but very different from the most feasible ones. Preferred settings generally involved several biomedical journals from one or more publishers, and the preferred type of manuscript assessed by the peer reviewer was always an actual manuscript submitted to the journal. In contrast, the most feasible designs were often set in a single biomedical journal, with assessment of a fabricated manuscript. Some designs, such as RCTs with randomization of manuscripts or peer reviewers, were usually ranked high. Other designs, such as the stepped wedge cluster RCT of journals or the interrupted time series analysis, regularly scored low.

This preference for trust in the results rather than feasibility could be explained by the fact that the most trusted study designs do not raise important feasibility issues and should be easy to implement. Indeed, there are no major barriers to the randomization of manuscripts or peer reviewers. Opt-out consent and blinding procedures are usually easy to implement: authors and reviewers are informed that studies of peer review are being conducted within a journal but are not informed of the specific study, to avoid any change in behavior. Outcomes (the quality of the peer review report or of the manuscript) can be assessed by blinded outcome assessors. However, the need to coordinate between journals and publishers and to achieve the required sample size could be considered a major barrier.

Our results also highlighted that the designs actually implemented were never the preferred study designs. In particular, all studies performed involved a single journal, whereas the preferred study designs were set in several medical journals from different publishers, which provides high external validity because it is close to the real-world situation, including many types of journals, manuscripts, and reviewers. This inconsistency between the implemented studies and the preferred study designs may reflect the fact that these trials were the first performed in this field and that the investigators, who were pioneers, favored ensuring feasibility. Furthermore, investigators and researchers in this field have probably learned a lot from these trials and would improve the design of future trials by taking these previous experiences into account.

The following limitations should be acknowledged. We focused on 6 of the 12 interventions identified and on the assessment of a single intervention per study, even though the synergistic use of interventions could improve the quality of peer review. Because of the restrictive format of the vignettes, not all elements of study design could be addressed. No indication of the sample size was included, which could have affected both feasibility and trust in the results. The number of vignettes we could include in the questionnaire was also limited, which restricted the number of interventions, comparators, and outcomes. Our study focused solely on interventions improving the quality of peer review and thus of manuscripts; other innovations such as re-review opt-out and portable or cascade peer review were not included [21]. The participation rate was about 20%, which could have biased our results; however, the level of expertise of participants was appropriate. Finally, we cannot exclude that participants could have been influenced by ideological or other preferences for a study design for a given intervention.

Conclusion

Well-performed trials are needed to assess interventions proposed to improve the peer review process. We encourage editors and other investigators to pursue research on peer review and to plan their studies in light of the findings of this vignette-based survey. We hope the evaluation of study designs with a vignette-based survey, drawing on international expertise, will help standardize practices and thereby improve the comparability and quality of future studies.

Abbreviations

95% CI: 95% confidence interval

RCT: Randomized controlled trial

References

  1. Smith R. Peer review: reform or revolution? BMJ. 1997;315(7111):759–60.

  2. Rennie D. Suspended judgment. Editorial peer review: let us put it on trial. Control Clin Trials. 1992;13(6):443–5.

  3. Kronick DA. Peer review in 18th-century scientific journalism. JAMA. 1990;263(10):1321–2.

  4. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev. 2007;2:MR000016.

  5. Chauvin A, Ravaud P, Baron G, Barnes C, Boutron I. The most important tasks for peer reviewers evaluating a randomized controlled trial are not congruent with the tasks most often requested by journal editors. BMC Med. 2015;13:158.

  6. Mahoney MJ. Publication prejudices: an experimental study of confirmatory bias in the peer review system. Cogn Ther Res. 1977;1(2):161–75.

  7. The Editors of The Lancet. Retraction—Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet. 2010;375(9713):445.

  8. Ho RC, Mak KK, Tao R, Lu Y, Day JR, Pan F. Views on the peer review system of biomedical journals: an online survey of academics from high-ranking universities. BMC Med Res Methodol. 2013;13:74.

  9. Wager E, Jefferson T. Shortcomings of peer review in biomedical journals. Learned Publishing. 2001;14(4):257–63.

  10. Rennie D, editor. Misconduct and journal peer review; 1999.

  11. Henderson M. Problems with peer review. BMJ. 2010;340:c1409.

  12. Hopewell S, Collins GS, Boutron I, Yu LM, Cook J, Shanyinde M, Wharton R, Shamseer L, Altman DG. Impact of peer review on reports of randomised trials published in open peer review journals: retrospective before and after study. BMJ. 2014;349:g4145.

  13. Lazarus C, Haneef R, Ravaud P, Boutron I. Classification and prevalence of spin in abstracts of non-randomized studies evaluating an intervention. BMC Med Res Methodol. 2015;15:85.

  14. Galipeau J, Moher D, Skidmore B, Campbell C, Hendry P, Cameron DW, Hebert PC, Palepu A. Systematic review of the effectiveness of training programs in writing for scholarly publication, journal editing, and manuscript peer review (protocol). Syst Rev. 2013;2:41.

  15. Bruce R, Chauvin A, Trinquart L, Ravaud P, Boutron I. Impact of interventions to improve the quality of peer review of biomedical journals: a systematic review and meta-analysis. BMC Med. 2016;14(1):85.

  16. Hughes R, Huby M. The application of vignettes in social and nursing research. J Adv Nurs. 2002;37(4):382–6.

  17. Bachmann LM, Mühleisen A, Bock A, ter Riet G, Held U, Kessels AG. Vignette studies of medical choice and judgement to study caregivers’ medical decision behaviour: systematic review. BMC Med Res Methodol. 2008;8(1):50.

  18. Do-Pham G, Le Cleach L, Giraudeau B, Maruani A, Chosidow O, Ravaud P. Designing randomized-controlled trials to improve head-louse treatment: systematic review using a vignette-based method. J Invest Dermatol. 2014;134(3):628–34.

  19. Gould D. Using vignettes to collect data for nursing research studies: how valid are the findings? J Clin Nurs. 1996;5(4):207–12.

  20. Emerson GB, Warme WJ, Wolf FM, Heckman JD, Brand RA, Leopold SS. Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial. Arch Intern Med. 2010;170(21):1934–9.

  21. Kovanis M, Trinquart L, Ravaud P, Porcher R. Evaluating alternative systems of peer review: a large-scale agent-based modelling approach to scientific publication. Scientometrics. 2017;113(1):651–71.


Acknowledgements

We thank Elise Diard for designing and managing the survey website.

We thank Iosief Abraha, Sabina Alam, Loai Albarqouni, Doug Altman, Joseph Ana, Jose Anglez, Liz Bal, Vicki Barber, Ginny Barbour, Adrian Barnett, Beatriz Barros, Hilda Bastian, Jesse Berlin, Theodora Bloom, Charles Boachie, Peter Bower, Matthias Briel, William Cameron, Patrice Capers, Viswas Chhapola, Anna Chiumento, Oriana Ciani, Anna Clark, Mike Clarke, Erik Cobo, Peter Craig, Rafael Dal-Ré, Simon Day, Diana Elbourne, Caitlyn Ellerbe, Zen Faulkes, Padhrag Fleming, Robert H. Fletcher, Rachael Frost, Marcelo Gama de Abreu, Chantelle Garritty, Julie Glanville, Robert Goldberg, Robert Golub, Ole Haagen Nielse, Gergö Hadlaczky, Barbara Hawkins, Brian Haynes, Jerome Richard Hoffman, Virginia Howard, Haley Hutchings, Philip Jones, Roger Jones, Kathryn Kaiser, Veronique Kiermer, Maria Kowalczuk, Yannick Lemanach, Alex Levis, Dandan Liu, Andreas Lundh, Herve Maisonneuve, Mario Malicki, Maura Marcucci, Evan Mayo-Wilson, Lawrence Mbuagbaw, Elaine McColl, Joanne McKenzie, Bahar Mehmani, John Moran, Tim Morris, Elizabeth Moylan, Cynthia Mulrow, Christelle Nguyen, Leslie Nicoll, John Norrie, David Ofori-Adjei, Matthew Page, Nikolaos Pandis, Spyridon N. Papageorgiou, Nathalie Percie du Sert, Morten Petersen, Patrick Phillips, Dawid Pieper, Raphael Porcher, Jonas Ranstam, Jean Raymond, Barney Reeves, Melissa Rethlefsen, Ludovic Reveiz, Daniel Riddle, Yves Rosenberg, Timothy Rowe, Roberta W. Scherer, David Schoenfeld, David L. Schriger, Sara Schroter, Larissa Shamseer, Richard Smith, Ines Steffens, Philipp Storz-Pfennig, Caroline Struthers, Brett D. Thombs, Shaun Treweek, Margaret Twinker, Cornelis H. Van Werkhoven, Roderick P. Venekamp, Alexandre Vivot, Sunita Vohra, Liz Wager, Ellen Weber, Wim Weber, Matthew Westmore, Ian White, Sankey Williams for their participation in our vignette-based questionnaire. 


Funding

This study required no particular funding.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Author information

Authors and Affiliations

Authors

Contributions

AH, PR, GB, and IB made substantial contributions to conception and design, acquisition of data, analysis, and interpretation of data. AH, PR, GB, and IB were involved in drafting the manuscript or revising it critically for important intellectual content. AH, PR, GB, and IB gave the final approval of the version to be published. Each author has participated sufficiently in the work to take public responsibility for appropriate portions of the content and has agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Corresponding author

Correspondence to Isabelle Boutron.

Ethics declarations

Ethics approval and consent to participate

All information collected with the questionnaire was confidential. Personal data collected were age group, sex, and location of participants, and all information was anonymous. Authorization was obtained from the Commission Nationale de l’Informatique et des Libertés (CNIL) whose authority is to protect participants’ personal data (no. 2044356). The protocol was approved by the Institutional Review Board of the Institut National de la Santé et de la Recherche Médicale (INSERM) (IRB00003888).

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Appendix 1. List of participants. Appendix 2. Results—Spider diagrams of mean vignette scores per intervention in terms of overall preference, trust in the results and feasibility. Appendix 3. Results—Mean score for each combination of features for the preferred study design (primary outcome). Appendix 4. Results—Mean score for each combination of features for trust in results (secondary outcomes). Appendix 5. Results—Mean score for each combination of features for feasibility (secondary outcomes). Appendix 6. Results—Parameter estimates for trust in the results model. Appendix 7. Results—Parameter estimates for feasibility model. (DOCX 1461 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Heim, A., Ravaud, P., Baron, G. et al. Designs of trials assessing interventions to improve the peer review process: a vignette-based survey. BMC Med 16, 191 (2018). https://doi.org/10.1186/s12916-018-1167-7
