Comparative effectiveness and safety of pharmaceuticals assessed in observational studies compared with randomized controlled trials

Background: There have been ongoing efforts to understand when and how data from observational studies can be applied to clinical and regulatory decision making. The objective of this review was to assess the comparability of relative treatment effects of pharmaceuticals from observational studies and randomized controlled trials (RCTs).

Methods: We searched PubMed and Embase for systematic literature reviews published between January 1, 1990, and January 31, 2020, that reported relative treatment effects of pharmaceuticals from both observational studies and RCTs. We extracted pooled relative effect estimates from observational studies and RCTs for each outcome, intervention-comparator, or indication assessed in the reviews. We calculated the ratio of the relative effect estimate from observational studies over that from RCTs, along with the corresponding 95% confidence interval (CI), for each pair of pooled RCT and observational study estimates, and we evaluated the consistency in relative treatment effects.

Results: Thirty systematic reviews across 7 therapeutic areas were identified from the literature. We analyzed 74 pairs of pooled relative effect estimates from RCTs and observational studies from 29 reviews. There was no statistically significant difference (based on the 95% CI) in relative effect estimates between RCTs and observational studies in 79.7% of pairs. There was an extreme difference (ratio < 0.70 or > 1.43) in 43.2% of pairs, and, in 17.6% of pairs, there was a significant difference with estimates pointing in opposite directions.

Conclusions: Overall, our review shows that while there is no significant difference in the relative risk ratios between the majority of RCT and observational study pairs compared, there is significant variation in about 20% of comparisons. The source of this variation should be the subject of further inquiry to elucidate how much of it is due to differences in patient populations versus biased estimates arising from issues with study design or analytical/statistical methods.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12916-021-02176-1.


Background
Health care decision makers, particularly regulators but also health technology assessment agencies, have depended upon evidence from randomized controlled trials (RCTs) to assess drug effectiveness and to make comparisons among treatment options. Widespread adoption of the RCT was a hallmark of progress in clinical research in the twentieth century and accelerated the development and approval of new therapeutics; confidence in RCTs derives from their experimental nature, designs that minimize bias, rigorous data quality, and analytic approaches that support causal inference.
In the last 30 years, we have witnessed an explosion of observational real-world data (RWD) and of real-world evidence (RWE) derived from those data, which has supplemented our understanding of the benefits and risks of treatments in broader populations of patients. Regulators have largely leveraged RWE to assess the safety of marketed products and to support new drug approvals when RCTs are infeasible, such as in rare diseases, in oncology, or for long-term adverse effects. RCTs often lack sufficient sample size to detect rare adverse events or long enough follow-up to detect long-term adverse effects; in such cases, regulatory decisions are often supplemented by RWE. However, RWE has been embraced much more slowly than RCTs were, for a variety of reasons. Causal inference is less certain in the absence of randomization, and RWD can be much sparser, often requiring extensive curation before analysis. Thus, skepticism about the robustness of observational RWD studies has made decision makers, particularly regulatory bodies, cautious about relying solely upon them to render judgments about the availability and appropriate use of new therapeutics.
Moreover, observational studies examining the effectiveness of treatments in similar populations have not always produced results consistent with RCTs. Although many studies have found similar treatment effect estimates from RCTs and RWD analyses [1-3], other analyses have documented wide variation in results from RWD analyses within the same therapeutic areas [4], including analyses using propensity score-based methods [5]. Nonetheless, public interest has grown in routinely leveraging RWD to promote the creation of a learning healthcare system, and regulatory bodies and other decision makers are exploring ways to expand their use of RWE. This is partly due to increasing acknowledgement of the value of RWE, such as its ability to better reflect the actual environments in which interventions are used.
One promising approach to understanding the sources of variability between RCT and observational study results is to compare estimates obtained from RWD analyses that attempt to emulate the eligibility criteria, endpoints, and other features of trials as closely as possible. A small number of RWD analyses have generated findings similar to previous RCTs [6, 7], and the findings of other RWD analyses have been consistent with subsequent RCTs [8]. In a small number of cases, RCTs and RWD studies have been published simultaneously [9]; this has the advantage that the RCT estimate is not known while the RWD study is conducted. Some disagreements between observational RWD analyses and RCTs have been traced to avoidable errors in the design of the RWD analysis [7, 10], which has led to a focus on the importance of research design in observational RWD analyses attempting to draw causal inferences about treatment effects [11-13]. Emulation studies can improve understanding of when observational studies may reliably generate results consistent with RCTs; however, not all RCTs can feasibly be emulated using RWD due to limitations in observational datasets. Existing sources of observational data, such as health insurance claims and electronic health records (EHRs), may not routinely capture the intervention, indication, inclusion and exclusion criteria, and/or endpoints used in RCTs [14].
The objective of this paper is to provide further evidence on the comparability of RCTs and observational studies when the latter use a range of study designs and were not designed to emulate RCTs. We aim to quantify the extent of the difference in treatment effect estimates between RCTs and observational studies. We go beyond previous comparisons of RCTs and observational studies by focusing purely on pharmaceuticals and by providing a systematic landscape review of the (in)consistency between RCT and observational study treatment effect estimates. The reasons for the variation in relative treatment effects are not assessed in this review but should be the subject of further study.

Methods

Search strategy
Our search strategy was adapted from previously published work [15] and restricted to pharmaceuticals only. PubMed and Embase were searched for the following concepts: pharmaceuticals, study methodology, and comparisons (filters: Humans and English language). The PubMed search strategy, which was adapted for use in Embase, can be found in Additional File 1.

Study selection
After removing duplicate references, three authors (JG, YH, and LO) screened the titles and abstracts to identify relevant reviews. Once screening was complete, LO verified it for accuracy. Following the title and abstract screen, full text articles were obtained for all potentially relevant reviews and assessed to determine whether they met the selection criteria for final inclusion in the review.

Data extraction
A pilot extraction was first performed by two authors (JG and YH) on a sample of three articles to test the standardized extraction table and to ensure consistency between the authors performing the data extraction. JG and YH then independently extracted information from each review using the standardized extraction table. A third author (LO) verified the extraction for accuracy and flagged any discrepancies, which were discussed until resolved. We focused on primary outcomes reported in the reviews and extracted information summarizing the scope of each identified systematic review, including the review objective, population, disease/therapeutic area, interventions, outcome(s), number of included RCTs and observational studies, pooled relative treatment effect estimates for RCTs and observational studies along with their 95% confidence intervals (95% CIs), and measures of heterogeneity.
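To make the extracted fields concrete, the sketch below shows one plausible row of such an extraction table in R (the language used for our analyses). The column names are our own illustrative choices, not the authors' actual table; the numeric values are taken from the Gandhi et al. (2015) review discussed later in this paper.

```r
# Illustrative extraction-table row (hypothetical column names; values from
# the Gandhi et al. 2015 review of DAPT vs. MAPT discussed in the Discussion).
extraction <- data.frame(
  review_id      = "Gandhi2015",
  intervention   = "DAPT",   # dual-antiplatelet therapy
  comparator     = "MAPT",   # mono-antiplatelet therapy
  effect_measure = "OR",     # RR, OR, or HR
  n_rct          = 2,        # number of included RCTs
  n_obs          = 2,        # number of included observational studies
  est_rct = 0.98, lcl_rct = 0.46, ucl_rct = 2.11,  # pooled RCT OR and 95% CI
  est_obs = 3.02, lcl_obs = 1.91, ucl_obs = 4.76   # pooled observational OR and 95% CI
)
```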

Analysis
Based on the extracted information, we calculated the ratio of the relative treatment effect estimate from observational studies over the relative treatment effect estimate from RCTs (e.g., RR_obs/RR_RCT), along with the corresponding 95% CI obtained via a Monte Carlo simulation, for each pair of pooled RCT and observational study estimates. Outcomes for which the relative treatment effect was not expressed as a relative risk (RR), odds ratio (OR), or hazard ratio (HR) were excluded from our analysis.
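The simulation code is not published with the paper; the snippet below is a minimal sketch of one common way to implement such a Monte Carlo CI in R, assuming each pooled log-effect is approximately normal, with its standard error recovered from the reported 95% CI. It is our reconstruction, not the authors' code.

```r
# Minimal sketch: simulate the ratio of two pooled relative effects and take
# empirical 95% limits.
set.seed(1)
ratio_ci <- function(est_obs, lcl_obs, ucl_obs,
                     est_rct, lcl_rct, ucl_rct, n_sim = 1e5) {
  # Recover log-scale standard errors from the reported 95% CIs
  se_obs <- (log(ucl_obs) - log(lcl_obs)) / (2 * qnorm(0.975))
  se_rct <- (log(ucl_rct) - log(lcl_rct)) / (2 * qnorm(0.975))
  # Draw log-effects and form the ratio, e.g., RR_obs/RR_RCT
  log_obs <- rnorm(n_sim, log(est_obs), se_obs)
  log_rct <- rnorm(n_sim, log(est_rct), se_rct)
  ratio   <- exp(log_obs - log_rct)
  c(ratio = est_obs / est_rct, quantile(ratio, c(0.025, 0.975)))
}

# Example with the pooled ORs from Gandhi et al. (2015), discussed below:
ratio_ci(3.02, 1.91, 4.76, 0.98, 0.46, 2.11)
```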
We expressed differences in pooled effect estimates with the following measures: ratios < 1, > 1, or = 1; ratios indicating an "extreme difference" (< 0.70 or > 1.43) [16]; and absence of an extreme difference. We evaluated (in)consistency between pooled RCT and observational study estimates with the following measures: opposite direction of effect; RCT effect estimate outside the 95% CI of the observational study estimate, and vice versa; statistically significant difference between RCT and observational study estimates; and statistically significant difference together with opposite direction of effect. A statistically significant difference was determined by examining the 95% CI of the ratio of the relative treatment effect estimates from observational studies and RCTs derived from the Monte Carlo simulation. We examined differences in relative effect measures from observational studies and RCTs by outcome type and therapeutic area.
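Continuing the sketch above (again our own reconstruction under the same assumptions), these consistency measures could be computed for each pair roughly as follows, with ratio_lcl and ratio_ucl being the Monte Carlo limits from the previous snippet.

```r
# Hedged sketch: classify one RCT/observational pair on the measures above.
classify_pair <- function(est_obs, lcl_obs, ucl_obs,
                          est_rct, lcl_rct, ucl_rct,
                          ratio_lcl, ratio_ucl) {
  ratio <- est_obs / est_rct
  sig   <- ratio_lcl > 1 | ratio_ucl < 1       # ratio CI excludes 1
  opp   <- (est_obs - 1) * (est_rct - 1) < 0   # effects straddle the null
  list(
    extreme_difference = ratio < 0.70 | ratio > 1.43,  # threshold from [16]
    opposite_direction = opp,
    rct_outside_obs_ci = est_rct < lcl_obs | est_rct > ucl_obs,
    obs_outside_rct_ci = est_obs < lcl_rct | est_obs > ucl_rct,
    significant_diff   = sig,
    sig_and_opposite   = sig & opp
  )
}
```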
To test the robustness of our findings, we conducted two sensitivity analyses. As some reviews assessed more than one endpoint and therefore contributed more than one pair of pooled relative treatment effects from RCTs and observational studies to our analysis, we repeated the analysis with one endpoint per review, i.e., a single pair of pooled relative treatment effects from each review, selecting the most frequently used endpoints for inclusion whenever possible. Additionally, as some studies were included in more than one review, we repeated the analysis ensuring that there was no overlap of data between the included reviews, i.e., that each study contributed to only one review in our analysis. Details on the sensitivity analyses are included in Additional File 2. All analyses were conducted using RStudio, version 1.3.1073 (© 2009-2020 RStudio, PBC).
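The first sensitivity analysis could be implemented along the following lines. This is a sketch under the assumption of a hypothetical data frame pairs_df, with one row per pooled pair and columns review_id and endpoint_freq (how often that endpoint is used across reviews); none of these names appear in the paper.

```r
# Sketch: keep a single pair per review, preferring the most frequently
# used endpoint (ties resolved arbitrarily by which.max).
one_per_review <- do.call(rbind, lapply(
  split(pairs_df, pairs_df$review_id),
  function(d) d[which.max(d$endpoint_freq), , drop = FALSE]
))
```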

Results

Literature search
Our search on PubMed and Embase yielded 3798 unique citations after removing duplicates. After screening titles and abstracts, we identified 93 full text articles for further review. Of these, 30 reviews met our inclusion criteria (Fig. 1).

Included systematic reviews
The characteristics of the included reviews and the pairs of pooled relative treatment effects from RCTs and observational studies reported in the reviews are summarized in Table 1. Thirty systematic reviews across 7 therapeutic areas were identified from the literature. These reviews included 519 RCTs and observational studies and provided 79 pairs of pooled relative treatment effects from RCTs and observational studies across multiple interventions, comparators, and outcomes. Five pairs were excluded from our assessment because they concerned continuous outcomes (n = 1) or because no pooled effect estimate was reported for observational studies (n = 4). As a result, 74 pairs of pooled relative treatment effects from RCTs and observational studies from 29 reviews were available for assessment of consistency.

There was no statistically significant difference in relative effect estimates between RCTs and observational studies in 79.7% of pairs; there was an extreme difference (ratio < 0.70 or > 1.43) in 43.2% of pairs; and, in 17.6% of pairs, there was a statistically significant difference with estimates pointing in opposite directions (Table 2). Sensitivity analyses including only one endpoint from each review and ensuring no overlap of data between the included reviews resulted in similar findings (Table 2). Scatterplots of relative effect measures from observational studies and RCTs by outcome type and therapeutic area can be found in Additional File 3: Figures S1 and S2. Differences by outcome type and therapeutic area are summarized in Table 3; the results remained fairly consistent when the sensitivity analyses were conducted (Table 3).

Discussion
Our analysis of 29 reviews comparing results of RCTs and observational studies of pharmaceuticals showed, on average, no significant differences in their relative risk ratios across all studies, but also considerable study-by-study variability. The median ratio of the relative effect measure from observational studies to that from RCTs was 0.92, indicating slightly lower effectiveness/safety estimates in observational studies than in corresponding RCTs. This is in fact somewhat higher than the 0.80 ratio recently found in meta-research comparing effect estimates of randomized clinical trials that use routinely collected data (i.e., from traditional observational study sources such as registries, electronic health records, or administrative claims) for outcome ascertainment with those of traditional trials not using routinely collected data [47]. However, whether judging by the frequency of "extreme" differences (43.2%) or of statistically significant differences in opposite directions (17.6%), one could not claim that observational study results consistently replicated RCT results on a study-by-study basis in our sample.
There are a number of reasons why any given observational study result may not replicate an RCT comparing the same treatments. First, it may not have been the intent of the observational study researchers to match a specific clinical trial: they may have intentionally studied a different treatment population, setting, or protocol in order to complement or test the RCT findings. In such cases, there would be variation in effect estimates due to estimating a different causal effect. Even if the researchers do attempt to match a specific RCT, the data may not be available to match it closely, since the patient histories, test results, etc., used for RCT inclusion criteria may not be observed, or outcomes may not be captured in the same way. Even given similar data, nonrandomized studies have the potential for selection/channeling bias into treatment determined by factors unobservable in either type of study, and analytic attempts to correct for such confounding may have limited success. In some cases, treatment conditions may differ enough between the RCT and real-world practice that replication of results should not be expected, e.g., due to careful safety monitoring that affects subsequent treatment in RCTs. Finally, it is possible that other pharmacoepidemiologic principles, beyond the study design considerations already mentioned, were violated in individual RWD studies, which could have caused disagreement between their results and the RCTs. While variation in treatment effect estimates due to estimating a different causal effect in a different study population is expected and valid, biased estimates arising from issues with study design or analytical methods may be problematic.
Details in these reviews were typically insufficient to distinguish among these possible explanations without detailed review of the individual studies, which we did not attempt here. However, some reviews did attempt to explain the differences they found. For example, in the review by Gandhi et al. (2015) [24], which compared dual-antiplatelet therapy (DAPT) to mono-antiplatelet therapy (MAPT) following transcatheter aortic valve implantation, there was a statistically significant difference in pooled relative treatment effect estimates from observational studies and RCTs. The primary outcome was more likely to occur in the DAPT group than in the MAPT group in the observational studies (OR 3.02; 95% CI 1.91-4.76), whereas no statistically significant difference was found between DAPT and MAPT in the RCTs (OR 0.98; 95% CI 0.46-2.11); the ratio of the two pooled estimates (3.02/0.98 ≈ 3.1) is well above the 1.43 threshold for an extreme difference, and the estimates point in opposite directions. The authors explained that the RCTs (n = 2) and observational studies (n = 2) included in this review had variable patient inclusion/exclusion criteria and that there were differences in the type of prosthetic aortic valve used, which may have introduced selection bias [24].
To allow for better use of individual observational studies to inform decision making, their ability to replicate RCT results needs to become more reliable, and the "target trial" approach seems to be a path forward. Several systematic efforts using sophisticated observational research designs to emulate multiple RCTs are underway [48, 49]. These efforts are intended to provide regulatory bodies and other decision makers with empirical evidence to support the development of a framework for assessing when and under what circumstances observational RWE can be used to support a wider range of regulatory decisions. RCT DUPLICATE, a collaboration between the Food and Drug Administration (FDA) and the Division of Pharmacoepidemiology at Brigham and Women's Hospital and Harvard Medical School, aims to replicate 30 completed Phase III or IV trials and to predict the results of seven ongoing Phase IV trials using Medicare and commercial claims data [50]. The RCT DUPLICATE team has recently reported results for its first 10 trials [51]: hazard ratio estimates were within the 95% CI of the corresponding trial for 8 of the 10 emulations.
The Multi-Regional Clinical Trials Center and OptumLabs are leading another effort, Observational Patient Evidence for Regulatory Approval and Understanding Disease (OPERAND), which extends the trial emulation activity by relaxing the inclusion/exclusion criteria of the trials to examine treatment effects in the broader patient population treated in routine care [52]. The FDA has also funded the Yale University-Mayo Clinic Center of Excellence in Regulatory Science and Innovation to predict the results of three to four ongoing safety trials using OptumLabs claims data [53].
It is important to understand that clinical trial emulation efforts are being conducted solely to improve understanding of when observational studies may be expected to produce robust results. Bartlett and colleagues [14] found that, of 220 clinical trials published in high-impact medical journals in 2017, 15% could potentially be emulated using data available from medical claims or EHRs. For example, the inclusion/exclusion criteria for many oncology trials require data on genetic markers and on progression-free survival that are unavailable in EHRs. The estimate by Bartlett and colleagues may prove to be an underestimate as the ability to link different types of observational data continues to improve. Nevertheless, it is reasonable to assume that most trials cannot be emulated with existing observational datasets.
These efforts are critical to advancing our understanding of the strengths and limitations of observational RWE, identifying issues with study design, endpoint definition, data quality, and analytical methodology that may affect the consistency of findings between RWE and RCTs. While much attention has focused on differences in study populations between observational studies and RCTs as the reason for inconsistency in effect estimates, emerging evidence suggests that issues with study design (e.g., establishing time zero of exposure) may be equally if not more important [7]. The results of these efforts will not provide definitive guidance to decision makers, but they emphasize how even subtle differences in study design and endpoint definition can impact estimates of treatment effect. Moreover, RWE studies answer a different question than RCTs, i.e., "Does it work?" versus "Can it work?"; the former is important to a variety of stakeholders beyond regulators. Hence, RWE studies should not be expected to provide results identical to those of RCTs.

Conclusions
In conclusion, although our review shows no significant difference, on average, in the relative risk ratios between published RCTs and observational studies, there is substantial study-to-study variation. It was impractical to review all of the individual observational study designs and examine their potential biases here, but future work should elucidate how much of the variation is due to differences in study populations versus biased estimates arising from issues with study design or analytical methods. As more target trial replication attempts are conducted and published, more systematic evidence will emerge on the reliability of this approach and on the potential for observational studies to more routinely inform healthcare decisions.