
Conduct and reporting of individual participant data network meta-analyses need improvement

The Original Article was published on 01 June 2020

Background

Network meta-analysis (NMA) of healthcare interventions is increasingly used in medical research to address a key question for clinical decision making: which interventions work best for a given disease? The popularity of NMA builds on three main characteristics that make it a unique tool in the field of evidence synthesis: (a) it allows inference for comparisons that have never been evaluated in individual studies, (b) it usually gives relative effect estimates with the highest precision, and (c) it allows the ranking of interventions to be estimated with respect to the outcomes of interest [1, 2]. Despite these benefits, a major limitation of most NMAs to date is that they primarily consider aggregate data, that is, data at the study level or, at best, at the arm level.
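
As an illustrative sketch, not taken from the original commentary or from Gao et al., the logic behind characteristic (a) can be written for a simple three-treatment network. If trials provide a direct estimate of B versus A, $\hat{d}_{AB}$, and of C versus A, $\hat{d}_{AC}$, an indirect estimate of C versus B is obtained as

$$\hat{d}^{\,\mathrm{ind}}_{BC} = \hat{d}_{AC} - \hat{d}_{AB}, \qquad \mathrm{Var}\big(\hat{d}^{\,\mathrm{ind}}_{BC}\big) = \mathrm{Var}\big(\hat{d}_{AC}\big) + \mathrm{Var}\big(\hat{d}_{AB}\big),$$

under the transitivity assumption, that is, that the A–B and A–C trials are sufficiently similar with respect to effect modifiers. When direct B–C evidence also exists, NMA combines the direct and indirect estimates, which is why it typically yields the most precise relative effects (characteristic (b)).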

The use of individual participant data (IPD) in meta-analyses is known to have several advantages, such as allowing missing data to be handled properly, associations between outcomes and participants’ characteristics to be investigated, and case-mix heterogeneity to be explored [3]. According to the article by Gao et al., though, only 21 NMAs of randomized controlled trials (RCTs) with available IPD had been published by June 2019 [4]. By contrast, a collection of NMAs based on aggregate RCT data already included 450 publications in 2015 [5]; this number has probably tripled by now. The main explanation for this imbalance is probably the difficulty of obtaining IPD from trial investigators. Generally, IPD sharing is a rather time- and resource-consuming procedure and, until recently, was restricted to small research communities. On top of data availability issues, conducting NMA with IPD requires knowledge of advanced statistical modeling approaches and specific expertise within the review team. This increased complexity is sometimes an obstacle for NMA investigators, leading them to refrain from an IPD analysis plan.
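
A minimal sketch may clarify why IPD enables the investigation of participant-level effect modifiers; the notation below is illustrative and not drawn from Gao et al. or from any specific published model. For a continuous outcome $y_{ij}$ of participant $i$ in trial $j$, with treatment indicator $t_{ij}$ and a participant-level covariate $x_{ij}$ (centered on its trial-specific mean so that the interaction reflects within-trial information only), a one-stage IPD model could take the form

$$y_{ij} = \alpha_j + \delta_j t_{ij} + \beta_j x_{ij} + \gamma\, t_{ij} x_{ij} + \varepsilon_{ij}, \qquad \delta_j \sim N(d, \tau^2),$$

where $\gamma$ quantifies how the treatment effect varies with the participant characteristic. With aggregate data, such an interaction can only be approximated through across-trial associations between mean covariate values and trial-level effects, which is prone to ecological bias. In the NMA setting the basic parameters $d$ are indexed by treatment comparison, but the same within-trial logic applies.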

Recent advances

Interestingly, the study by Gao et al. does not show a consistent increase in the publication of IPD NMAs over the years, as would be expected given the overall steep increase of NMAs in the literature [5]. However, new initiatives that aim to promote large-scale IPD sharing, such as YODA (https://yoda.yale.edu/) and Clinical Study Data Request (https://www.clinicalstudydatarequest.com/), may facilitate meta-analysts’ access to such data. At the same time, researchers are becoming more familiar with methods for incorporating IPD into NMA, owing to the greater focus recently placed on this area, including new methodological developments [6, 7] and several training events (e.g., Cochrane webinars). Considering these important advances, a larger number of IPD NMAs is anticipated to be published in the next few years. It is of great importance, though, to ensure that they follow high-quality standards, since they are usually considered superior to NMAs of aggregate data and their conclusions may have a stronger impact.

Empirical results

Gao et al. found important deficiencies in the 21 IPD NMAs they identified. Specifically, they evaluated the methodological and reporting quality of these NMAs using three tools, the PRISMA-IPD [8], PRISMA-NMA [9], and AMSTAR-2 [10] checklists (after removing overlapping items), and concluded that the overall quality of the publications was suboptimal. One of the most important limitations was the lack of an assessment of the NMA assumptions. None of the articles reported an assessment of the fundamental conceptual assumption of the synthesis (transitivity), and less than half assessed the corresponding statistical assumption (consistency). Further, most of the NMAs ignored missing data or used naïve imputation methods, none described how IPD and aggregate data were combined when both were used, and only 10% reported registration of a protocol and evaluation of across-study biases. Based on AMSTAR-2, the majority of these NMAs were rated as being of critically low quality. It is encouraging that the involvement of a statistician or epidemiologist in the review team was associated with better overall methodological and reporting quality.

These findings raise concerns about the validity of the results of published IPD NMAs. Of course, some of these articles were published before the appraisal tools used in the study were developed, and full compliance could not be expected. Deficiencies, though, did not seem to diminish after the checklists became available. For example, PRISMA-NMA clearly states that evaluation of the synthesis assumptions is crucial, yet even subsequent IPD NMAs did not follow this recommendation. It is also possible that some of the identified deficiencies are the result of poor reporting and do not necessarily reflect poor methodological quality.

Conclusions

In conclusion, the study by Gao et al. poses an important question: whether existing guidelines for conducting and reporting NMAs in the presence of IPD are sufficient. Although a combination of PRISMA-IPD and PRISMA-NMA would cover most of the information that needs to be reported, it seems reasonable to move towards the development of specific guidance for IPD NMAs. A comprehensive list of items that any IPD NMA should report, such as an extension of PRISMA-NMA, would be helpful not only to NMA authors but also to journal editors and reviewers, who would then be able to judge more easily the reporting and methodological completeness of these articles.

Availability of data and materials

Not applicable

Abbreviations

AMSTAR: A MeaSurement Tool to Assess systematic Reviews

IPD: Individual participant data

NMA: Network meta-analysis

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

RCT: Randomized controlled trial

YODA: Yale University Open Data Access

References

  1. Salanti G. Indirect and mixed-treatment comparison, network, or multiple-treatments meta-analysis: many names, many benefits, many concerns for the next generation evidence synthesis tool. Res Synth Methods. 2012;3(2):80–97.

  2. Chaimani A, Caldwell DM, Li T, Higgins JP, Salanti G. Undertaking network meta-analyses. In: Cochrane Handbook for Systematic Reviews of Interventions. Wiley; 2019:285–320. https://doi.org/10.1002/9781119536604.ch11.

  3. Riley RD, Lambert PC, Abo-Zaid G. Meta-analysis of individual participant data: rationale, conduct, and reporting. BMJ. 2010;340. https://doi.org/10.1136/bmj.c221.

  4. Gao Y, Shi S, Li M, Luo X, Liu M, Yang K, Zhang J, Song F, Tian J. Statistical analyses and quality of individual participant data network meta-analyses were suboptimal: a cross-sectional study. BMC Med. 2020. https://doi.org/10.1186/s12916-020-01591-0.

  5. Petropoulou M, Nikolakopoulou A, Veroniki A-A, et al. Bibliographic study showed improving statistical methodology of network meta-analyses published between 1999 and 2015. J Clin Epidemiol. https://doi.org/10.1016/j.jclinepi.2016.11.002.

  6. Debray TP, Schuit E, Efthimiou O, et al. An overview of methods for network meta-analysis using individual participant data: when do benefits arise? Stat Methods Med Res. 2018;27(5):1351–64. https://doi.org/10.1177/0962280216660741.

  7. Leahy J, O’Leary A, Afdhal N, et al. The impact of individual patient data in a network meta-analysis: an investigation into parameter estimation and model selection. Res Synth Methods. 2018;9(3):441–69. https://doi.org/10.1002/jrsm.1305.

  8. Stewart LA, Clarke M, Rovers M, et al. Preferred Reporting Items for Systematic Review and Meta-Analyses of individual participant data: the PRISMA-IPD Statement. JAMA. 2015;313(16):1657–65. https://doi.org/10.1001/jama.2015.3656.

  9. Hutton B, Salanti G, Caldwell DM, et al. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Ann Intern Med. 2015;162(11):777–84. https://doi.org/10.7326/M14-2385.

  10. Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358. https://doi.org/10.1136/bmj.j4008.

Acknowledgements

Not applicable

Funding

Not applicable

Author information

Contributions

AC drafted the first version of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Anna Chaimani.

Ethics declarations

Ethics approval and consent to participate

Not applicable

Consent for publication

Not applicable

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Chaimani, A. Conduct and reporting of individual participant data network meta-analyses need improvement. BMC Med 18, 156 (2020). https://doi.org/10.1186/s12916-020-01630-w
