Is network meta-analysis as valid as standard pairwise meta-analysis? It all depends on the distribution of effect modifiers
© Jansen and Naci; licensee BioMed Central Ltd. 2013
Received: 22 February 2013
Accepted: 30 May 2013
Published: 4 July 2013
In the last decade, network meta-analysis of randomized controlled trials has been introduced as an extension of pairwise meta-analysis. The advantage of network meta-analysis over standard pairwise meta-analysis is that it facilitates indirect comparisons of multiple interventions that have not been studied in a head-to-head fashion. Although assumptions underlying pairwise meta-analyses are well understood, those concerning network meta-analyses are perceived to be more complex and prone to misinterpretation.
In this paper, we aim to provide a basic explanation of when network meta-analysis is as valid as pairwise meta-analysis. We focus on the primary role of effect modifiers, which are study and patient characteristics associated with treatment effects. Because network meta-analysis includes different trials comparing different interventions, the distribution of effect modifiers can vary not only across studies for a particular comparison (as with standard pairwise meta-analysis, causing heterogeneity), but also between comparisons (causing inconsistency). If there is an imbalance in the distribution of effect modifiers between different types of direct comparisons, the related indirect comparisons will be biased. If it can be assumed that this is not the case, network meta-analysis is as valid as pairwise meta-analysis.
The validity of network meta-analysis is based on the underlying assumption that there is no imbalance in the distribution of effect modifiers across the different types of direct treatment comparisons, regardless of the structure of the evidence network.
Keywords: Bias, Comparative effectiveness, Confounding, Effect modification, Indirect comparison, Meta-analysis, Mixed treatment comparison, Network meta-analysis, Randomized controlled trial, Systematic review
Randomized controlled trials (RCTs) are considered the gold standard for determining whether a health intervention works and whether it is better than another. Although often placed at the top of evidence hierarchies, single RCTs rarely provide adequate information to address the evidence demands of patients, clinicians and policymakers. Instead, each trial provides a piece of evidence that, when taken together with others, addresses important questions for patients, clinicians, and other healthcare decision-makers. Traditional pairwise meta-analyses of RCTs are increasingly used to synthesize the results of different trials evaluating the same intervention(s) to obtain an overall estimate of the treatment effect of one intervention relative to the control.
In the last decade, network meta-analysis has been introduced as a generalization of pairwise meta-analysis. When the available RCTs of interest do not all compare the same interventions but each trial compares only a subset of the interventions of interest, it is possible to develop a network of RCTs where all trials have at least one intervention in common with another. Such a network allows for indirect comparisons of interventions not studied in a head-to-head fashion. For example, the treatment effects from trials comparing treatments B relative to A (AB trials) and trials comparing treatments C relative to A (AC trials) can be pooled to obtain an indirect estimate for the comparison between treatments B and C [3–5]. Even when a trial comparing treatments C and B (BC trial) exists, combining the direct estimates with the results of indirect comparisons can result in refined estimates as a broader evidence base is considered [6, 7]. In general, if the available evidence base consists of a network of interlinked multiple RCTs involving treatments compared directly, indirectly, or both, the entire body of evidence can be synthesized by means of network meta-analysis.
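The adjusted indirect comparison described above (often attributed to Bucher and colleagues, reference [3]) can be sketched in a few lines. The sketch below is illustrative only; the function name and the numeric inputs are ours, not from the paper, and the effects are assumed to be on a scale where they combine additively (for example, log odds ratios).

```python
import math

def bucher_indirect(d_ab, se_ab, d_ac, se_ac):
    """Adjusted indirect estimate of C versus B anchored on common comparator A.

    d_ab, d_ac: pooled effects of B vs A and C vs A (e.g. log odds ratios);
    se_ab, se_ac: their standard errors. Validity rests on the AB and AC
    trials being balanced with respect to effect modifiers (transitivity).
    """
    d_bc = d_ac - d_ab                      # indirect effect of C vs B
    se_bc = math.sqrt(se_ab**2 + se_ac**2)  # variances of independent estimates add
    ci = (d_bc - 1.96 * se_bc, d_bc + 1.96 * se_bc)
    return d_bc, se_bc, ci

# Purely illustrative numbers on the log odds ratio scale:
effect, se, (lo, hi) = bucher_indirect(-0.4, 0.15, -0.7, 0.20)
```

Note that the standard error of the indirect estimate is larger than either direct standard error, which is why indirect evidence alone is typically less precise than a head-to-head trial of the same size.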
Although assumptions underlying standard pairwise meta-analyses of direct comparisons are well understood, those concerning network meta-analysis for both direct and indirect comparisons might be perceived to be more complex, and might be prone to misinterpretation [9–11]. In this paper, we aim to compare pairwise meta-analysis with network meta-analysis with a specific focus on the primary role of effect modifiers as the common cause of heterogeneity and bias. We discuss effect modification first within individual trials, then in standard pairwise meta-analyses of multiple randomized trials, and finally in network meta-analyses.
Effect modification and within-study variation of treatment effects
Within an RCT, different groups of participants can respond to treatments differently. Hence, it is possible to have subgroups of participants with different treatment effects. This variation in true treatment effects is called heterogeneity and is caused by patient characteristics within a trial that act as effect modifiers (Figure 1). When heterogeneity occurs within an individual trial, it is referred to as within-study heterogeneity. Within-study heterogeneity occurs particularly in trials without strict entry criteria. For instance, RCTs evaluating the efficacy of cholesterol-lowering statins often include a mixture of patients with and without a history of coronary artery disease. As these subgroups of patients respond to statin therapy differently (that is, individuals with a history of coronary artery disease tend to derive a greater relative mortality reduction than patients without such a history), disease history is an effect modifier and results in within-study heterogeneity of treatment effects.
Effect modification and between-study variation of treatment effects
Pooling different studies in the presence of extreme between-study heterogeneity does not introduce bias, but may render the results of the meta-analysis irrelevant. In the scenario presented in Figure 2b, the pooled result is not applicable to a moderate-only population or a severe-only population. In this situation, an alternative approach would be to perform separate meta-analyses for the studies with severe and moderate populations. In Figure 2c a more realistic scenario is presented, in which there is both within-study and between-study variation in the distribution of the effect modifier. Because published treatment effects are used, we can only observe between-study heterogeneity in treatment effects. Combining the results of these four heterogeneous studies is in essence similar to pooling the treatment effect across subgroups of one trial characterized by different values of the effect modifier.
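One standard way to pool heterogeneous studies while quantifying between-study variation is a random-effects model; the DerSimonian–Laird estimator is the most common choice. The sketch below is a minimal illustration of that estimator, not a method prescribed by this paper, and the example numbers are invented.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian–Laird tau^2 estimator.

    effects: per-study treatment effects; variances: their within-study
    variances. Returns the pooled effect, the estimated between-study
    variance tau^2 (a summary of heterogeneity), and the standard error.
    """
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    pooled_fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled_fe) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)        # truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, tau2, se

# Three hypothetical studies whose effects vary more than chance alone explains:
pooled, tau2, se = dersimonian_laird([0.2, 0.5, 0.8], [0.04, 0.04, 0.04])
```

A positive tau^2 widens the confidence interval around the pooled effect, but, as the paragraph above notes, it does not by itself tell us whether that pooled average is relevant to any particular population.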
Effect modification and between-comparison variation of treatment effects
In a standard pairwise meta-analysis where each trial compares the same interventions with the same control (say only AB studies) the only source of variation in the treatment effects between trials can be due to the presence of effect modifiers that are different from one trial to the next: between-study heterogeneity. In a network meta-analysis, studies concern different treatment comparisons (for example, AB studies, AC studies). Hence, there is an additional source of variability of treatment effects between trials, which is the treatment comparison itself. In a network meta-analysis or indirect comparison of RCTs there can be three types of variation of treatment effects: (1) true within-study variation of treatment effects (which is only observable with individual patient-level data or reporting of subgroups), (2) true between-study variation in treatment effects for a particular treatment comparison, and (3) true between-comparison variation in treatment effects.
In Figure 3b a network meta-analysis is presented with variation in the distribution of the effect modifier across the AB studies resulting in between-study heterogeneity. The same is observed for the AC comparison. Since the distribution of severity across the four AB studies is the same as for the four AC studies, the difference between the pooled estimates of AB and AC is only due to the actual difference in the interventions compared. The indirect estimate for the BC comparison is again unbiased.
A similar scenario is presented in Figure 4b. Here, there is an imbalance (or between-comparison variation) in the distribution of the effect modifier. In addition, there is heterogeneity across the AB studies as well as the AC studies due to variation in the effect modifier between studies within the comparisons. This variation results in biased indirect comparison estimates.
An imbalance in the distribution of effect modifiers across the different comparisons, sometimes referred to as a violation of the similarity or consistency assumptions, results in a violation of transitivity. Transitivity means that if C is more efficacious than B, and B is more efficacious than A, then C has to be more efficacious than A. It is important to acknowledge that there is always the risk of unknown imbalances in effect modifiers, and accordingly the risk of residual confounding bias, even if all observed effect modifiers are balanced. However, this does not imply that network meta-analyses are as prone to bias as observational studies. In non-randomized comparative studies the relative treatment effect between two interventions is affected by confounding bias if either prognostic factors of the outcome or modifiers of treatment effects are not balanced across the intervention groups. Given the randomized nature of the individual trials included in network meta-analyses, and given that we only compare treatment effects of interventions that are part of the same network of RCTs, we only have to worry about effect modifiers as a source of confounding bias.
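The role of effect-modifier imbalance can be made concrete with a stylized calculation (the notation is ours, not taken from the paper). Suppose a single effect modifier x shifts the true B-versus-A and C-versus-A effects linearly, with slopes β<sub>AB</sub> and β<sub>AC</sub>, so a set of trials with mean modifier level x̄ estimates d + βx̄. The expectation of the indirect estimate is then:

```latex
\operatorname{E}\!\left[\hat d_{BC}^{\,\mathrm{indirect}}\right]
  = \left(d_{AC} + \beta_{AC}\,\bar{x}_{AC}\right)
  - \left(d_{AB} + \beta_{AB}\,\bar{x}_{AB}\right)
```

When the AB and AC trials have the same mean modifier level, \(\bar{x}_{AB} = \bar{x}_{AC} = \bar{x}\), this reduces to the true C-versus-B effect in a population with modifier level \(\bar{x}\). When \(\bar{x}_{AB} \neq \bar{x}_{AC}\), the two bias terms no longer cancel and the indirect estimate corresponds to no single population; this is exactly the transitivity violation described above.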
Heterogeneity and inconsistency as the two sides of the same effect-modification coin
If a network meta-analysis consists of an evidence base where for some interventions there is both direct and indirect evidence, inconsistency can be evaluated by comparing the treatment effect estimates obtained from the direct comparison with those obtained from the indirect comparisons for the same contrast [18–21]. For example, in a network of RCTs that consists of AB, AC, and BC studies, inconsistency can be evaluated by comparing the direct comparison BC with the indirect estimate for BC obtained from the AB and AC studies. For comparisons where only indirect evidence is available, say the BC comparison in a network of only AB and AC studies, inconsistency cannot be assessed this way, and can only be explored by comparing the average distribution of effect modifiers between AB and AC studies.
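The comparison of direct and indirect estimates for the same contrast is often summarized with a simple z-statistic, in the spirit of node-splitting approaches [19, 20]. The sketch below is a minimal version of that check, with invented numbers; it assumes the two estimates are statistically independent, which holds when they are built from disjoint sets of trials.

```python
import math

def inconsistency_z(d_direct, se_direct, d_indirect, se_indirect):
    """Z-statistic for the difference between direct and indirect estimates.

    A large |z| (e.g. > 1.96) flags statistical inconsistency between the
    two sources of evidence for the same treatment contrast. Assumes the
    estimates are independent, so their variances add.
    """
    diff = d_direct - d_indirect
    se_diff = math.sqrt(se_direct**2 + se_indirect**2)
    return diff / se_diff

# Illustrative numbers: a direct BC estimate vs an indirect one from AB and AC:
z = inconsistency_z(-0.2, 0.10, -0.5, 0.25)
```

A non-significant z does not prove consistency: with few studies the test has low power, which is one reason the paper emphasizes examining the distribution of effect modifiers directly rather than relying on statistical tests alone.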
In network meta-analysis, consistency is sometimes referred to as an assumption separate from the similarity assumption, suggesting that the similarity assumption relates to indirect comparisons and the consistency assumption applies only to situations where there is both direct and indirect evidence for a certain treatment comparison. However, portraying similarity and consistency as separate assumptions is not very useful given that any valid network meta-analysis is based on the assumption that there is no imbalance in the distribution of effect modifiers across the different types of treatment comparisons (that is, transitivity), regardless of the structure of the evidence network.
In an attempt to bridge the gap between the conceptual considerations and realities of performing network meta-analysis, a brief discussion on practical implications is warranted. Frequently, there are several observed differences in trial and patient characteristics across the different direct comparisons. Deciding which covariates are effect modifiers based on observed differences in results across trials can be challenging and potentially lead to false conclusions regarding the sources of inconsistency. We recommend that researchers first generate a list of potential treatment effect modifiers for the interventions of interest based on prior knowledge or subgroup results of individual studies before comparing results between studies. Next, the distribution of study and patient characteristics that are determined to be likely effect modifiers should be compared across studies to identify any potential imbalances between different types of direct comparisons.
If there are a sufficient number of studies included in the network meta-analysis, it may be possible to perform a meta-regression analysis where the treatment effect of each study is not only a function of the treatment comparison of that study but also related to an effect modifier. This allows indirect comparisons with adjustment for confounding bias due to differences in the measured effect modifiers between studies, provided the estimated relationship between effect modifier and treatment effect is not greatly affected by bias [23, 24]. Network meta-analysis is typically based on study-level data extracted from published reports of trials. Adjusting for imbalances in patient-level effect modifiers based on study-level data can be prone to ecological bias [24–26]. Having access to patient-level data (at least for a subset of studies) can improve parameter estimation of network meta-analysis models with adjustment for differences in patient-level covariates across comparisons. Hence, it is recommended to use patient-level data where available [25, 26].
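The essence of such a meta-regression can be illustrated with a stripped-down sketch: regressing study-level treatment effects on a study-level effect modifier with inverse-variance weights. This is a simplification of the models cited above (it handles a single comparison and ignores residual between-study heterogeneity), and all names and numbers are ours.

```python
def wls_meta_regression(effects, variances, covariate):
    """Weighted least squares fit of effect_i = a + b * x_i.

    Weights are inverse within-study variances. The slope b estimates how
    the treatment effect changes with the study-level effect modifier x;
    because x is a study-level summary, b is prone to ecological bias.
    """
    w = [1.0 / v for v in variances]
    sw = sum(w)
    sx = sum(wi * x for wi, x in zip(w, covariate))
    sy = sum(wi * y for wi, y in zip(w, effects))
    sxx = sum(wi * x * x for wi, x in zip(w, covariate))
    sxy = sum(wi * x * y for wi, x, y in zip(w, covariate, effects))
    b = (sxy - sx * sy / sw) / (sxx - sx * sx / sw)  # slope: effect modification
    a = (sy - b * sx) / sw                           # intercept: effect at x = 0
    return a, b

# Four hypothetical studies whose effects track the modifier exactly:
a, b = wls_meta_regression([0.1, 0.2, 0.3, 0.4], [0.01] * 4, [1, 2, 3, 4])
```

In a full network meta-regression, the model would additionally include treatment-comparison terms and a between-study variance component; the point here is only that the adjustment amounts to estimating and subtracting the modifier's contribution to the observed effects.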
Even in cases where relative treatment effect modifiers are identified in advance, practical challenges remain regarding their availability in published reports, thereby limiting meta-regression analysis. Nevertheless, we recommend that network meta-analysis reports include a discussion of known effect modifiers, their availability in the published body of evidence, and how their distribution across studies may affect the findings.
Network meta-analysis differs from pairwise meta-analysis in that there is not just one type of treatment comparison but multiple treatment comparisons. As a result, the distribution of effect modifiers can vary not only across studies for a particular comparison (as with pairwise meta-analysis), but also between comparisons. If there is an imbalance in the distribution of effect modifiers between different types of comparisons, indirect comparisons will be biased and the validity of the network meta-analysis is compromised. In Additional file 1, this key requirement for transitivity is also demonstrated with mathematical equations. If the assumption that there are no imbalances in effect modifiers between different types of direct comparisons can be defended or seems appropriate given the available RCTs, then network meta-analysis is as valid as pairwise meta-analysis. If there are sources of bias that affect the direct comparisons of the individual studies (for example, information bias, publication bias, or selective outcome reporting bias), then the pooled results of both pairwise meta-analysis and network meta-analysis are affected. However, when indirect evidence can wash out trial-specific biases that are sometimes not identifiable in a head-to-head meta-analysis, indirect estimates obtained with a network meta-analysis might be preferable [19, 20]. Network meta-analysis has the advantage that it allows for indirect comparisons and incorporates more data, addressing the bigger picture, whereas a single pairwise meta-analysis offers a more fragmented view.
The authors thank Alexander Rowe from the U.S. Centers for Disease Control and Prevention (Atlanta, GA, USA), and Tom Trikalinos from Brown University (Providence, RI, USA) for their comments on the initial draft.
- Inthout J, Ioannidis JP, Borm GF: Obtaining evidence by a single well-powered trial or several modestly powered trials. Stat Methods Med Res. In press
- Caldwell DM, Ades AE, Higgins JPT: Simultaneous comparison of multiple treatments: combining direct and indirect evidence. BMJ. 2005, 331: 897-900. 10.1136/bmj.331.7521.897
- Sutton A, Ades AE, Cooper N, Abrams K: Use of indirect and mixed treatment comparisons for technology assessment. PharmacoEconomics. 2008, 26: 753-767. 10.2165/00019053-200826090-00006
- Bucher HC, Guyatt GH, Griffith LE, Walter SD: The results of direct and indirect treatment comparisons in meta-analysis of randomized controlled trials. J Clin Epidemiol. 1997, 50: 683-691.
- Lumley T: Network meta-analysis for indirect treatment comparisons. Stat Med. 2002, 21: 2313-2324. 10.1002/sim.1201
- Madan J, Stevenson MD, Cooper KL, Ades AE, Whyte S, Akehurst R: Consistency between direct and indirect trial evidence: is direct evidence always more reliable?. Value Health. 2011, 14: 953-960. 10.1016/j.jval.2011.05.042
- Song F, Harvey I, Lilford R: Adjusted indirect comparison may be less biased than direct comparison for evaluating new pharmaceutical interventions. J Clin Epidemiol. 2008, 61: 455-463.
- Salanti G, Higgins JP, Ades AE, Ioannidis JP: Evaluation of networks of randomized trials. Stat Methods Med Res. 2008, 17: 279-301.
- Mills EJ, Ioannidis JP, Thorlund K, Schunemann HJ, Puhan MA, Guyatt GH: How to use an article reporting a multiple treatment comparison meta-analysis. JAMA. 2012, 308: 1246-1253. 10.1001/2012.jama.11228
- Li T, Puhan MA, Vedula SS, Singh S, Dickersin K: Network meta-analysis-highly attractive but more methodological research is needed. BMC Med. 2011, 9: 79. 10.1186/1741-7015-9-79
- Naci H, Fleurence R: Using indirect evidence to determine the comparative effectiveness of prescription drugs: do benefits outweigh risks?. Health Outcomes Res Med. 2011, 2: e241-e249. 10.1016/j.ehrm.2011.10.001
- Savović J, Jones HE, Altman DG, Harris RJ, Jüni P, Pildal J, Als-Nielsen B, Balk EM, Gluud C, Gluud LL, Ioannidis JP, Schulz KF, Beynon R, Welton NJ, Wood L, Moher D, Deeks JJ, Sterne JA: Influence of reported study design characteristics on intervention effect estimates from randomized, controlled trials. Ann Intern Med. 2012, 157: 429-438.
- Higgins JP, Thompson SG: Quantifying heterogeneity in a meta-analysis. Stat Med. 2002, 21: 1539-1558. 10.1002/sim.1186
- Sutton AJ, Abrams KR, Jones DR, Sheldon TA, Song F: Methods for Meta-Analysis in Medical Research. 2000, London, UK: Wiley
- Thompson SG: Systematic review: why sources of heterogeneity in meta-analysis should be investigated. BMJ. 1994, 309: 1351-1355. 10.1136/bmj.309.6965.1351
- Jansen JP, Schmid CH, Salanti G: Directed acyclic graphs can help understand bias in indirect and mixed treatment comparisons. J Clin Epidemiol. 2012, 65: 798-807.
- Song F, Loke YK, Walsh T, Glenny A-M, Eastwood AJ, Altman DG: Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews. BMJ. 2009, 338: b1147.
- Lu G, Ades AE: Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med. 2004, 23: 3105-3124. 10.1002/sim.1875
- Lu G, Ades AE: Assessing evidence inconsistency in mixed treatment comparisons. J Am Stat Assoc. 2006, 101: 447-459. 10.1198/016214505000001302
- Higgins JPT, Jackson D, Barrett JK, Lu G, Ades AE, White IR: Consistency and inconsistency in network meta-analysis: concepts and models for multi-arm studies. Res Synth Methods. 2012, 3: 98-110. 10.1002/jrsm.1044
- Jansen JP, Fleurence R, Devine B, Itzler R, Barrett A, Hawkins N, Lee K, Boersma C, Annemans L, Cappelleri JC: Interpreting indirect treatment comparisons and network meta-analysis for health-care decision making: report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: Part 1. Value Health. 2011, 14: 417-428. 10.1016/j.jval.2011.04.002
- Thompson SG, Higgins JPT: How should meta-regression analyses be undertaken and interpreted?. Stat Med. 2002, 21: 1559-1573. 10.1002/sim.1187
- Cooper NJ, Sutton AJ, Morris D, Ades AE, Welton NJ: Addressing between-study heterogeneity and inconsistency in mixed treatment comparisons: application to stroke prevention treatments in individuals with non-rheumatic atrial fibrillation. Stat Med. 2009, 28: 1861-1881. 10.1002/sim.3594
- Salanti G, Marinho V, Higgins JP: A case study of multiple-treatments meta-analysis demonstrates that covariates should be considered. J Clin Epidemiol. 2009, 62: 857-864.
- Jansen JP: Network meta-analysis of individual and aggregate level data. Res Synth Methods. 2012, 3: 177-190. 10.1002/jrsm.1048
- Saramago P, Sutton AJ, Cooper NJ, Manca A: Mixed treatment comparisons using aggregate and individual participant level data. Stat Med. 2012, 31: 3516-3536. 10.1002/sim.5442
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1741-7015/11/159/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.