Influence of lack of blinding on the estimation of medication-related harms: a retrospective cohort study of randomized controlled trials

Abstract

Background

Empirical evidence suggests that lack of blinding may be associated with biased estimates of treatment benefit in randomized controlled trials, but the influence on medication-related harms is not well-recognized. We aimed to investigate the association between blinding and clinical trial estimates of medication-related harms.

Methods

We searched PubMed from January 1, 2015, to January 1, 2020, for systematic reviews with meta-analyses of medication-related harms. Eligible meta-analyses must have contained trials both with and without blinding. Potential covariates that may confound effect estimates were addressed by restricting trials within each comparison or by hierarchical analysis of harmonized groups of trials (thereby harmonizing drug type, control, dosage, and registration status) across eligible meta-analyses. Weighted hierarchical linear regression was then used to estimate the differences in harm estimates (odds ratio, OR) between trials that lacked blinding and those that were blinded. The results were reported as the ratio of ORs (ROR) with 95% confidence intervals (CIs).

Results

We identified 629 meta-analyses of harms with 10,069 trials. We estimated a weighted average ROR of 0.68 (95% CI: 0.53 to 0.88, P < 0.01) among 82 trials in 20 meta-analyses where blinding of participants was lacking. With regard to lack of blinding of healthcare providers or outcome assessors, the RORs were 0.68 (95% CI: 0.53 to 0.87, P < 0.01, from 81 trials in 22 meta-analyses) and 1.00 (95% CI: 0.94 to 1.07, P = 0.94, from 858 trials in 155 meta-analyses), respectively. Sensitivity analyses indicated that these findings applied to both objective and subjective outcomes.

Conclusions

Lack of blinding of participants and health care providers in randomized controlled trials may lead to underestimation of medication-related harms. Adequate blinding in randomized trials, when feasible, may help safeguard against potential bias in estimating harm effects.

Background

The randomized controlled trial is the preferred and most rigorous study design in clinical research for assessment of medication efficacy [1]. In a randomized controlled trial, blinding is a vital procedure to mitigate bias. However, blinding may not always be achievable due to practical and/or ethical reasons. In many cases, blinding increases the difficulty of participant recruitment, complexity of implementation (e.g., preparing packaging of the interventions), and total costs of a trial [2]. In addition, blinding is difficult for non-pharmaceutical interventions. Lack of blinding results in knowledge of intervention assignment and may affect adherence and attrition or influence recording of outcomes, resulting in performance bias and measurement bias [3].

Empirical and/or meta-epidemiological studies are valuable sources of evidence that can help us examine the relationship between methodological weaknesses and their potential impact on research findings [4]. For example, empirical studies have demonstrated that a lack of blinding of participants, care providers, or outcome assessors may lead to exaggerated treatment effects [5,6,7,8,9,10,11,12,13]. However, existing empirical studies have focused mainly on efficacy or effectiveness, while few have addressed related questions on harms, including medication-related harms. This underemphasis on harms perpetuates the gap between evidence generation, evidence synthesis, and informed decision-making. As highlighted in the Cochrane Handbook, harms are considered just as important as effectiveness/efficacy in the evaluation of healthcare interventions [14].

Harm outcomes (especially those that are serious in nature) typically involve lower event rates than benefit outcomes, and the measurement of such harm outcomes can be substantially affected by random error [15, 16]. The occurrence of some readily identifiable adverse reactions may defeat attempts to maintain blinding, increasing the possibility that participants, health care providers, and investigators correctly discern the intervention [17,18,19]. Moreover, harm outcomes often rely on composite outcomes, which may result in selective reporting bias [20]. As a result, lack of blinding may have a differential impact on estimates of harms as compared with benefits. The potential impact of lack of blinding remains an important gap in research and clearly needs to be addressed, as it may have important implications for evidence-based practice, policy formulation, and informed decision-making.

In this large-scale meta-epidemiological study, we compared effect estimates of harm from blinded randomized trials with those from trials without blinding that were otherwise comparable with regard to interventions, controls, and key methodological features.

Methods

Protocol and reporting

The present study is part of a large research program designed to investigate potential methodological factors that influence reporting of harms in randomized controlled trials. The protocol for this research program has been reported elsewhere [21]. We have formatted and reported our study in accordance with the Preferred Reporting Items for Overviews of Reviews (PRIOR) checklist where applicable, as this is the most up-to-date of the related reporting guidelines [22].

Data source

The study is based on our recently constructed large empirical dataset, known as SMART Safety [23, 24]. The foundations of this dataset stem from a PubMed literature search conducted on July 28, 2020, by an information specialist, with the aim of retrieving systematic reviews of medication harms that were published (including online first) between January 1, 2015, and January 1, 2020 [25]. The representativeness of the search has been verified previously, with sensitivity ranging from 93.85 to 99.30% [21]. The full search strategy is reported in Additional file 1.

Inclusion criteria

Systematic reviews of medication-related harms with harms as the exclusive outcome and with at least one meta-analysis were considered for eligibility. This means we did not consider systematic reviews that included efficacy/effectiveness outcomes, regardless of whether harms were treated as primary or secondary outcomes. For inclusion in the final analysis, the meta-analyses must have included at least five randomized controlled trials with two-by-two tabular data (comparison group and harm outcome) available for trials both with and without blinding. We classified an article as a systematic review or meta-analysis on the basis of the article title as stipulated by the review authors. We defined harm outcomes as “any untoward medical occurrence in a patient or subject in clinical practice,” including risks, complications, adverse effects, or adverse reactions, based on the PRISMA harms checklist [26].

We recognize that the restriction to a minimum of five studies may lead to a slight loss of the representativeness of the data in the current study. However, we also note that meta-analyses that contain only a few studies are less likely to be able to meet our eligibility requirement that both blinded and unblinded studies be available for harms outcome analysis [27].

Two authors (XQ, CX) independently screened the titles, abstracts (stage 1), and full-texts (stage 2) of the records using Rayyan (https://www.rayyan.ai/). Only those excluded by both authors were excluded during stage 1, and the remaining records were screened again in stage 2, with disagreements resolved through consensus.

Data collection

Data collection was conducted using independent duplicate extraction (CX, TQ, FZ, XY, RZ, YT, XX, YZ, XZ, LFK, YY, HD); see details in Additional file 1 (Table S1 and Table S2). Three levels of data were collected: systematic review level, meta-analysis level, and trial level. For the systematic review level, the name of the review author, region of the review author, number of trials, and registration information were collected. At the meta-analysis level, we collected information on the outcome of interest. The following items were extracted at the study level: first author name, year of publication, journal, number of participants and number of events in each group (metadata), details of interventions and controls (e.g., type of intervention, dosage, duration), funding source (e.g., academic, industry), registration (Yes, No), average population age status (child, adult), trial centers and regions involved, and bias assessment information. All the study-level items, except for the metadata (i.e., 2 by 2 table data), were taken from the original trials. For the metadata (events, group size of each arm), we first extracted the information from the meta-analyses, either via forest plot or table. In order to avoid potential data extraction errors, we checked all data by referring to the original trials; any errors identified were further recorded and corrected [21].

We used an adaptation of the RoB 2 tool by selecting applicable components and domains for our assessment, without going through the entire algorithm and signaling questions [28]. The parameters of specific interest were as follows: (1) random sequence generation; (2) allocation concealment; (3) blinding of participants; (4) blinding of healthcare providers; and (5) blinding of outcome assessors. To avoid potential confusion, we did not use the recommended “response options” of RoB 2; instead, we recorded “Yes” or “Probably Yes” for studies that implemented or probably implemented blinding and, similarly, “No” or “Probably No” for those that did not or probably did not implement blinding. The assessment of the risk of bias information was based on what was reported in the original trials and was carried out independently in duplicate, with any disagreements resolved by discussion (Additional file 1: Tables S1 and S2).

We further categorized outcomes from each meta-analysis as objective or subjective. This was done independently by two senior methodologists (LFK, CX), and their decisions were compared by a third author (RZ) in a blinded manner. Further online discussion was employed for disagreements until consensus was achieved. The criteria for the judgment of the type of outcomes were based on the explanatory file of RoB 2 [28].

All data collected were double-checked to minimize errors in data extraction. The details of the contributors to data extraction are recorded in Additional file 1 (Tables S1 and S2).

Outcomes

We pre-defined the primary outcome in this investigation as the ratio of the harm estimates in trials with and without blinding (of participants, healthcare providers, and trial outcome assessors). Based on the RoB assessment, we dichotomized the blinding status of trials as follows: trials judged as clearly implementing blinding (“Yes,” see above) or probably implementing blinding (“Probably Yes”) were classified as blinded, while the rest (“No,” “Probably No,” or “No information”) were considered to be without blinding. No secondary outcomes were defined.
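
Written out in our own notation (a summary of the definition above, not a formula taken from the protocol), the primary outcome is:

```latex
\mathrm{ROR} \;=\; \frac{\mathrm{OR}_{\text{trials without blinding}}}{\mathrm{OR}_{\text{blinded trials}}},
\qquad
\mathrm{ROR} < 1 \;\Rightarrow\; \text{harm estimates are smaller (underestimated) when blinding is lacking.}
```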

Control of confounding

We recognize that trials with blinding may not share exactly the same characteristics as trials without blinding. As such, “third factors” or covariates that may have a confounding impact on our comparative evaluation of effect estimates from trials with and without blinding were identified and accounted for. From our review of the relevant literature [9, 29], we identified the following potential covariates that may influence estimates of harms: (1) specific features of the interventions; (2) nature of the controls; (3) variation in dosage of the intervention (mean dose per week); (4) treatment duration; (5) average age of the trial population; (6) source of funding (e.g., academic, industry, not reported); (7) role of funder; (8) number of centers; (9) trial registration; and (10) analytic protocol (e.g., intention-to-treat, per-protocol). We further conducted a causal path analysis via directed acyclic graphs (http://dagitty.net/) to identify which of these covariates may confound the association between blinding status and effect estimates for harm in randomized trials [30].
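
The causal-path analysis itself was carried out with the DAGitty web tool [30]; purely to illustrate the logic, the Python sketch below encodes a simplified, hypothetical graph and flags candidate confounders naively as common causes (ancestors) of both blinding status and the harm estimate. The node names and edges are ours and do not reproduce the study DAG in Additional file 1.

```python
import networkx as nx

# A simplified, hypothetical DAG; edges point from cause to effect.
dag = nx.DiGraph([
    ("intervention_type", "blinding"),        # e.g., some formulations are harder to blind
    ("intervention_type", "harm_estimate"),
    ("control_type", "blinding"),
    ("control_type", "harm_estimate"),
    ("dose", "harm_estimate"),                # affects harms but not blinding in this toy graph
    ("trial_registration", "blinding"),
    ("trial_registration", "harm_estimate"),
    ("blinding", "harm_estimate"),            # the direct path of interest
])

exposure, outcome = "blinding", "harm_estimate"

# Naive confounder check: variables that are causes (ancestors) of both the
# exposure and the outcome are candidates for restriction/stratification.
confounders = nx.ancestors(dag, exposure) & nx.ancestors(dag, outcome)
print(sorted(confounders))   # ['control_type', 'intervention_type', 'trial_registration']
```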

In order to reduce confounding and to isolate the direct effect of the absence of blinding, we implemented restriction and stratification of selected important covariates to harmonize the sets of trials being compared. For example, with regard to intervention dose, only trials with the same dose (e.g., 50 mg/day) could be grouped together in meta-analyses where trials with and without blinding were being compared. Through restriction and stratification of trials on reported values of these important covariates, we were able to conduct an analysis harmonized across groups of trials that shared similar attributes. We believe that this analytic approach (based on comparisons of blinded and unblinded trials within each harmonized group) leads to less confounded estimates of the relative differences between trials. See Additional file 1: Fig. S1 for more details.

Potential confounders were addressed through the covariate-harmonization process between trials in the comparisons of blinding status. Restriction was used to limit trials such that those that were included had similar pharmaceutical formulation, daily dose, and type of control within each meta-analysis. Stratification was also used across trials to create a covariate for harmonized groups by age category (child or adult participants), analytic protocol (e.g., intention-to-treat, ITT), trial registration, and allocation concealment. See Additional file 1: Figs. S2 and S3.
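
As a rough sketch of how this covariate harmonization can be operationalized (a minimal example with hypothetical column names and values, not the actual fields of the SMART Safety dataset), trials can be grouped on the restricted and stratified covariates within each meta-analysis, keeping only groups that contain both blinded and unblinded trials:

```python
import pandas as pd

# Hypothetical trial-level table; column names and values are illustrative only.
trials = pd.DataFrame({
    "meta_id":    [1, 1, 1, 1, 2, 2],
    "drug":       ["A", "A", "A", "A", "B", "B"],
    "daily_dose": [50, 50, 50, 50, 10, 10],
    "control":    ["placebo"] * 4 + ["active"] * 2,
    "age_group":  ["adult"] * 6,
    "itt":        [1, 1, 1, 1, 1, 0],
    "registered": [1, 1, 0, 0, 1, 1],
    "blinded":    [1, 0, 1, 0, 1, 1],
})

# Restriction/stratification: trials are comparable only if they match on
# formulation, dose, control, age group, analytic protocol, and registration
# within the same meta-analysis.
keys = ["meta_id", "drug", "daily_dose", "control", "age_group", "itt", "registered"]
trials["group_id"] = trials.groupby(keys).ngroup()

# Keep only harmonized groups containing both blinded and unblinded trials,
# so the blinding contrast is made within otherwise comparable trials.
eligible = trials.groupby("group_id").filter(lambda g: g["blinded"].nunique() == 2)
print(eligible[["meta_id", "group_id", "blinded"]])
```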

Statistical analysis

Baseline characteristics were summarized as proportions or median and interquartile ranges (IQR). We first calculated the log odds ratio (OR) of each eligible trial for harm estimates of the intervention compared to control. A weighted hierarchical linear regression was then employed to estimate the ratio of OR (ROR) of trials with and without blinding by treating the trial as level one and the variable for covariate-harmonized groups as level two, with cluster robust standard errors to account for potential within-topic correlation of the groups [31]. When zero events occurred, we applied a continuity correction by adding 0.5 to each cell to estimate the OR within a trial [17].
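
The regression itself was run in Stata (the authors' code is provided in Additional file 1). As a minimal sketch of the idea only, the Python snippet below computes trial-level log ORs with the 0.5 continuity correction and regresses them on blinding status using inverse-variance weights, with harmonized-group fixed effects standing in for the second level of the hierarchical model and cluster-robust standard errors over meta-analysis topics; the data and column names are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def log_or(e1, n1, e0, n0):
    """Trial-level log odds ratio; 0.5 is added to every cell if any cell is zero."""
    a, b, c, d = e1, n1 - e1, e0, n0 - e0
    if min(a, b, c, d) == 0:
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return np.log(a * d / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d   # estimate, variance

# Toy data: one row per trial; column names are illustrative only.
trials = pd.DataFrame({
    "meta_id":   [1, 1, 2, 2, 3, 3, 4, 4],
    "group_id":  [1, 1, 2, 2, 3, 3, 4, 4],
    "unblinded": [0, 1, 0, 1, 0, 1, 0, 1],                 # 1 = blinding lacking
    "e1": [12, 8, 5, 3, 20, 15, 0, 2], "n1": [100] * 8,    # intervention arm
    "e0": [10, 10, 4, 4, 18, 18, 1, 1], "n0": [100] * 8,   # control arm
})
trials[["lor", "var"]] = trials.apply(
    lambda r: pd.Series(log_or(r.e1, r.n1, r.e0, r.n0)), axis=1)

# Inverse-variance weighted regression of the log OR on blinding status, with
# harmonized-group fixed effects (a simplification of the two-level model) and
# cluster-robust standard errors over meta-analysis topics.
fit = smf.wls("lor ~ unblinded + C(group_id)", data=trials,
              weights=1 / trials["var"]).fit(
    cov_type="cluster", cov_kwds={"groups": trials["meta_id"]})

print("ROR:", np.exp(fit.params["unblinded"]))                  # ratio of odds ratios
print("95% CI:", np.exp(fit.conf_int().loc["unblinded"]).values)
```

With real data, the exponentiated coefficient on the unblinded indicator corresponds to the reported ROR, and the sensitivity analysis excluding zero-event studies amounts to dropping those rows before refitting.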

We conducted sensitivity analyses according to the aforementioned pre-defined categorization, i.e., objective and subjective outcomes. The rationale for this approach was that previous studies have shown that objective outcomes are less susceptible to methodological issues involving blinding [32]. A post hoc sensitivity analysis was conducted by excluding studies with zero events [33]. Since we observed some imbalance in four trial characteristics between blinded and unblinded trials, additional post hoc sensitivity analyses were employed.

Missing data occurred in 19 variables in the SMART Safety dataset, with proportions ranging from 3.08 to 27.54%, mainly due to insufficient reporting and, for a small minority, to inability to access full-text versions of trial reports (Additional file 1: Table S3). For the 15 variables used in this study, the missing proportions ranged from 3.08 to 14.26%, and only two exceeded 10% (treatment duration in the intervention and control groups). We judged that the proportion of missing data among the trials remaining after the covariate-harmonization process would be small, and we therefore removed trials with missing data with the expectation that this would have little impact on our results [34]. All data analyses were run in Stata/SE 16.0 (StataCorp LLC, College Station, TX), with a two-sided alpha of 0.05 as the significance level. The code for the analysis is presented in Additional file 1.

Results

The search identified 18,636 records. After removing 1,967 duplicates (searched separately before and after January 1, 2018) and 15,339 records that were obviously out of scope based on titles and abstracts, 1,330 records remained to be assessed for eligibility via full texts. Among these, 151 systematic reviews with 629 meta-analyses involving 10,069 studies were identified as eligible (Fig. 1). The list of included and excluded systematic reviews (with reasons) can be accessed in Additional file 1 (Table S4). Table 1 presents the baseline characteristics of our dataset, and Additional file 1: Fig. S4 presents word clouds of the related harm outcomes.

Fig. 1 Flow diagram of literature screening

Table 1 Basic characteristics of eligible systematic reviews and trials

After removing trials with missing data, 7693 (76.40%) studies from 607 meta-analyses remained for analysis. From the latter, we carried out restriction on trials to harmonize covariates, resulting in 82 trials within 25 covariate-harmonized groups (in 20 meta-analyses) being eligible for analysis of lack of blinding of participants on harm estimates, 81 trials within 26 covariate-harmonized groups (in 22 meta-analyses) being eligible for analysis of lack of blinding of care providers on harm estimates, and 858 trials within 268 covariate-harmonized groups (in 155 meta-analyses) being eligible for analysis of lack of blinding of outcome assessors on harm estimates. Characteristics of included trials within these covariate-harmonized groups are presented in Table 2.

Table 2 Trial characteristics of the comparisons

Lack of blinding of participants on harm effects

Based on 82 trials within 25 covariate-harmonized groups, our regression analysis showed that for overall harms, the ROR for trials lacking blinding was 0.68 (95% CI: 0.53 to 0.88, P < 0.01) compared to trials blinded for participants.

When stratified by type of outcome, the ROR for trials lacking blinding was 0.69 (95% CI: 0.51 to 0.92, P = 0.01, n = 51) for objective outcomes and 0.66 (95% CI: 0.45 to 0.98, P = 0.04, n = 31) for subjective outcomes when compared to trials blinded for participants (Fig. 2).

Fig. 2 Influence of lack of blinding on harm effects

Lack of blinding of health care providers on harm effects

Based on 81 trials within the 26 covariate-harmonized groups, our regression analysis showed that, for overall harm, the ROR for trials lacking blinding was 0.68 (95% CI: 0.53 to 0.87, P < 0.01) compared to trials blinded for health care providers.

When stratified by type of outcome, the ROR for trials lacking blinding was 0.69 (95% CI: 0.51 to 0.92, P = 0.01, n = 51) for objective outcomes and 0.66 (95% CI: 0.47 to 0.93, P = 0.02, n = 30) for subjective outcomes compared to trials blinded for health care providers; see Fig. 2.

Lack of blinding of trial outcome assessors on harm effects

Based on 858 trials within the 268 covariate-harmonized groups, our regression analysis showed that for overall harm, the ROR for trials lacking blinding was 1.00 (95% CI: 0.94 to 1.07, P = 0.94) compared to trials blinded for outcome assessors.

When stratified by type of outcome, the ROR for trials lacking blinding was 0.99 (95% CI: 0.91 to 1.09, P = 0.89, n = 340) for objective outcomes and 1.01 (95% CI: 0.92 to 1.11, P = 0.84, n = 508) for subjective outcomes compared to trials with blinded outcome assessors; see Fig. 2.

Sensitivity analyses

Sensitivity analysis by removing studies with zero events showed no substantial changes, with a ROR for lack of participant blinding of 0.64 (95% CI: 0.42 to 0.97, P = 0.04), ROR for lack of health care provider blinding of 0.68 (95% CI: 0.53 to 0.87, P < 0.01), and ROR for lack of outcome assessor blinding of 1.01 (95% CI: 0.94 to 1.08, P = 0.84). Additional post hoc sensitivity analyses found the impact of blinding to be consistent under different sub-settings (Table 3).

Table 3 Post hoc sensitivity analyses

Discussion

In this study, we used a large empirical dataset to investigate the influence of blinding on estimates of medication-related harms after addressing known covariates that could have been potential confounders. Our results suggest that lack of blinding of participants and health care providers in randomized controlled trials may substantially influence estimates of medication-related harms, regardless of whether outcomes are objective or subjective. We found that, on average, lack of blinding was associated with underestimation of harm effects by 32%. These findings highlight that blinding is as important for harm outcomes in randomized controlled trials as it is for efficacy outcomes. Nevertheless, blinding of trial outcome assessors may have less or no influence on estimates of harms that are directly recorded by participants and health care personnel without requiring any additional input or adjustment by trial assessors.
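
For clarity, the 32% figure follows directly from the pooled ratio of odds ratios reported in the Results:

```latex
\text{relative underestimation} \;=\; 1 - \mathrm{ROR} \;=\; 1 - 0.68 \;=\; 0.32,
```

that is, on the odds-ratio scale, harm estimates from trials lacking blinding of participants or health care providers were on average about 32% lower than those from otherwise comparable blinded trials.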

Our findings differ substantially from those of previous empirical investigations. In the study by Savović et al. in 2012, trials lacking blinding of participants and health care providers showed significantly exaggerated treatment effects (effectiveness/efficacy) for subjective outcomes, but not for objective outcomes [6]. Similar results were observed in their subsequent study in 2018 [35]. The MetaBLIND study found no impact of lack of blinding on either subjective or objective efficacy outcomes [12]. However, in our study, evidence of a significant impact of blinding on both objective and subjective harm outcomes was observed. We postulate that for harm outcomes, lack of blinding of participants and care providers may be associated with performance bias [3], which would result in deviations from the intended intervention, regardless of whether the outcome is objective or subjective.

The directed acyclic graphs (see Additional file 1: Figs. S2 and S3) may help us to further interpret our findings. There are several causal paths from blinding of participants and/or health care providers to harms, namely, (1) the direct path and (2) “indirect” paths via the interventions, controls, or dosage. The current study focused on the direct effect of lack of blinding on the estimation of medication-related harms by restricting the intervention, dosage, and control to be identical within each meta-analysis, but it is still possible that the “indirect” paths partially explain the underestimation of harms due to lack of blinding. For example, participants who became aware of the intervention they received might not have adhered to it or might have switched to another intervention, distorting the intervention actually received and thereby influencing the assessment of harm effects. Similarly, it is possible that health care providers applied additional interventions to participants if they were aware of treatment assignment.

In the directed acyclic graphs, there is only one path from blinding of trial outcome assessors to harm effects, namely, the direct effect. It may be anticipated that measurement of objective outcomes is largely independent of outcome assessors, as little subjective judgment is involved. For subjective outcomes, there was also no difference in harm effects between blinded and unblinded trial assessors. It is possible that blinding of outcome assessors may not have been applied to all outcomes; for example, blinding may have been applied only for efficacy outcomes, not for harm outcomes. In addition, many harm outcomes were reported by patients or health care providers (e.g., diarrhea), and blinding of other parties involved in trial outcome assessment (e.g., a safety monitoring panel) may have played no role in such subjective outcomes. In such a situation, blinding of the safety panel may prevent further bias from creeping into the data, but it cannot easily remove bias that has already occurred earlier at the source. Considering the differential impact of blinding on harm effects, further research is warranted to verify our findings and explore the potential mechanism(s).

The findings of the current study have important implications for future evidence synthesis research. Currently, evidence synthesis researchers may not always give detailed consideration to potential methodological weaknesses in the harms reported in included trials, thus possibly ignoring the potential impact of such weaknesses on the validity of the final result. Based on the evidence from our current study, it would be sensible to carefully consider the potential impact that lack of blinding may have, and effect estimates affected by such methodological weaknesses should perhaps be examined in sensitivity analyses to inform evidence users [36].

Strengths and limitations

To the best of our knowledge, this is the first study to investigate the influence of lack of blinding on the estimation of harms. Our large-scale dataset ensures a sufficient number of “observations” to achieve a valid estimation of results. Data accuracy in this study was checked multiple times, and the data collection process was carefully recorded, thus providing greater safeguards against potential bias due to data errors or non-transparency. In the data analysis, we identified potential confounders and addressed them via harmonization procedures, in an effort to obtain the direct effect of lack of blinding on estimates of harms. All of these steps serve to increase the robustness and reliability of our study findings.

Some limitations should be highlighted. First, owing to the observational design of our study, we are unable to determine a causal relationship. Although we employed directed acyclic graphs to detect potential confounders, it is not possible to control for all confounders. Several unmeasured methodological issues could influence our results. For example, dropouts from randomized trials may result in missing-data bias for harm effects. There is also a possibility that blinding could be compromised if trial participants or health care providers successfully guessed the study intervention, and this could further influence the reporting or recording of harms. In our database, we identified 11 randomized controlled trials that reported the proportion of correct guesses of intervention allocation by participants or health care providers, with proportions ranging from 10.6 to 85.7% (median: 59.0%) for the intervention group and from 31.9 to 78.4% (median: 49.6%) for the control group. Second, we were unable to account for potential differences in trial settings, varying definitions of harms across trials, and the biological nature of the harms, which may contribute to some heterogeneity in the results [37,38,39]. Third, missing data may have had an impact on the results. Even though the missing rate was low for each variable used in the current study, in total, missing data resulted in the loss of 23.60% of studies, which could affect the validity of our results. The integrity of such information relies largely on comprehensive reporting in the included trials, an issue that can only be addressed through strict adherence to reporting guidelines. Fourth, poor reporting of harms may affect the representativeness of the current study, as empirical evidence shows that only 43% of published trials reported harms data [40]. The release of the new CONSORT Harms statement [41] is expected to help promote harms reporting in future randomized trials.

Conclusions

In summary, our study demonstrates that lack of blinding of participants and health care providers in randomized controlled trials may lead to underestimation of medication-related harm effects, regardless of whether the outcomes are objective or subjective. However, lack of blinding of trial outcome assessors may not necessarily influence estimates of harm effects. Implementing blinding in randomized trials, when feasible, may help safeguard against potential bias in estimating harm effects.

Availability of data and materials

Data can be found at https://osf.io/g3mdu/.

Abbreviations

CI: Confidence interval

CONSORT: Consolidated Standards of Reporting Trials

IQR: Interquartile range

ITT: Intention-to-treat

OR: Odds ratio

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

RoB: Risk of bias

ROR: Ratio of odds ratios

References

  1. Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332.

  2. Anand R, Norrie J, Bradley JM, McAuley DF, Clarke M. Fool’s gold? Why blinded trials are not always best. BMJ. 2020;368:l6228.

  3. Higgins JPT, Savović J, Page MJ, Elbers RG, Sterne JAC. Chapter 8: Assessing risk of bias in a randomized trial. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane, 2022. Available from www.training.cochrane.org/handbook.

  4. Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evid Based Med. 2017;22(4):139–42.

  5. Hróbjartsson A, Thomsen AS, Emanuelsson F, Tendal B, Hilden J, Boutron I, et al. Observer bias in randomised clinical trials with binary outcomes: systematic review of trials with both blinded and non-blinded outcome assessors. BMJ. 2012;344:e1119.

  6. Savović J, Jones HE, Altman DG, Harris RJ, Jüni P, Pildal J, et al. Influence of reported study design characteristics on intervention effect estimates from randomized, controlled trials. Ann Intern Med. 2012;157(6):429–38.

  7. Moher D, Pham B, Jones A, Cook DJ, Jadad AR, Moher M, et al. Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet. 1998;352(9128):609–13.

  8. Kunz R, Vist G, Oxman AD. Randomisation to protect against selection bias in healthcare trials. Cochrane Database Syst Rev. 2007;(2):MR000012.

  9. Savovic J, Turner RM, Mawdsley D, Jones HE, Beynon R, Higgins JPT, et al. Association between risk-of-bias assessments and results of randomized trials in Cochrane reviews: the ROBES meta-epidemiologic study. Am J Epidemiol. 2018;187(5):1113–22.

  10. Wood L, Egger M, Gluud LL, Schulz KF, Jüni P, Altman DG, et al. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ. 2008;336(7644):601–5.

  11. Stadelmaier J, Roux I, Petropoulou M, Schwingshackl L. Empirical evidence of study design biases in nutrition randomised controlled trials: a meta-epidemiological study. BMC Med. 2022;20(1):330.

  12. Moustgaard H, Clayton GL, Jones HE, Boutron I, Jørgensen L, Laursen DRT, et al. Impact of blinding on estimated treatment effects in randomised clinical trials: meta-epidemiological study. BMJ. 2020;368:l6802.

  13. Pitre T, Kirsh S, Jassal T, Anderson M, Padoan A, Xiang A, et al. The impact of blinding on trial results: a systematic review and meta-analysis. Cochrane Ev Synth. 2023:e12015.

  14. Peryer G, Golder S, Junqueira D, Vohra S, Loke YK. Chapter 19: Adverse effects. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane; 2022. Available from www.training.cochrane.org/handbook.

  15. Golder S, Loke YK, Bland M. Meta-analyses of adverse effects data derived from randomised controlled trials as compared to observational studies: methodological overview. PLoS Med. 2011;8(5):e1001026.

  16. Xu C, Furuya-Kanamori L, Zorzela L, Lin L, Vohra S. A proposed framework to guide evidence synthesis practice for meta-analysis with zero-events studies. J Clin Epidemiol. 2021;135:70–8.

  17. Fergusson D, Glass KC, Waring D, Shapiro S. Turning a blind eye: the success of blinding reported in a random sample of randomised, placebo controlled trials. BMJ. 2004;328(7437):432.

  18. Hróbjartsson A, Forfang E, Haahr MT, Als-Nielsen B, Brorson S. Blinded trials taken to the test: an analysis of randomized clinical trials that report tests for the success of blinding. Int J Epidemiol. 2007;36(3):654–63.

  19. Lin YH, Sahker E, Shinohara K, Horinouchi N, Ito M, Lelliott M, et al. Assessment of blinding in randomized controlled trials of antidepressants for depressive disorders 2000–2020: a systematic review and meta-analysis. EClinicalMedicine. 2022;50:101505.

  20. Montori VM, Permanyer-Miralda G, Ferreira-González I, Busse JW, Pacheco-Huergo V, Bryant D, et al. Validity of composite end points in clinical trials. BMJ. 2005;330(7491):594–6.

  21. Xu C, Yu T, Furuya-Kanamori L, Lin L, Zorzela L, Zhou X, et al. Validity of data extraction in evidence synthesis practice of adverse events: reproducibility study. BMJ. 2022;377:e069155.

  22. Gates M, Gates A, Pieper D, Fernandes RM, Tricco AC, Moher D, et al. Reporting guideline for overviews of reviews of healthcare interventions: development of the PRIOR statement. BMJ. 2022;378:e070849.

  23. Fan S, Yu T, Yang X, Zhang R, Furuya-Kanamori L, Xu C. The SMART Safety: an empirical dataset for evidence synthesis of adverse events. Data Brief. 2023;51:109639. https://doi.org/10.17605/OSF.IO/M7U3C.

  24. Xu C. SMART Safety: a large empirical database for systematic reviews of adverse events. 2023. OSF Storage: https://osf.io/m7u3c/.

  25. Xu C, Zhou X, Zorzela L, Ju K, Furuya-Kanamori L, Lin L, et al. Utilization of the evidence from studies with no events in meta-analyses of adverse events: an empirical investigation. BMC Med. 2021;19(1):141.

  26. Zorzela L, Loke YK, Ioannidis JP, Golder S, Santaguida P, Altman DG, et al. PRISMA harms checklist: improving harms reporting in systematic reviews. BMJ. 2016;352:i157.

  27. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273(5):408–12.

  28. Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898.

  29. Odutayo A, Emdin CA, Hsiao AJ, Shakir M, Copsey B, Dutton S, et al. Association between trial registration and positive study findings: cross sectional study (Epidemiological Study of Randomized Trials-ESORT). BMJ. 2017;356:j917.

  30. Tennant PWG, Murray EJ, Arnold KF, Berrie L, Fox MP, Gadd SC, et al. Use of directed acyclic graphs (DAGs) to identify confounders in applied health research: review and recommendations. Int J Epidemiol. 2021;50(2):620–32.

  31. Xu C, Doi SAR. The robust error meta-regression method for dose-response meta-analysis. Int J Evid Based Healthc. 2018;16(3):138–44.

  32. Page MJ, Higgins JP, Clayton G, Sterne JA, Hróbjartsson A, Savović J. Empirical evidence of study design biases in randomized trials: systematic review of meta-epidemiological studies. PLoS One. 2016;11(7):e0159267.

  33. Xu C, Furuya-Kanamori L, Islam N, Doi SA. Should studies with no events in both arms be excluded in evidence synthesis? Contemp Clin Trials. 2022;122:106962.

  34. Kahale LA, Khamis AM, Diab B, Chang Y, Lopes LC, Agarwal A, et al. Potential impact of missing outcome data on treatment effects in systematic reviews: imputation study. BMJ. 2020;370:m2898.

  35. Savovic J, Turner RM, Mawdsley D, Jones HE, Beynon R, Higgins JPT, et al. Association between risk-of-bias assessments and results of randomized trials in Cochrane reviews: the ROBES meta-epidemiologic study. Am J Epidemiol. 2018;187(5):1113–22.

  36. Doi SA, Barendregt JJ, Khan S, Thalib L, Williams GM. Advances in the meta-analysis of heterogeneous clinical trials II: the quality effects model. Contemp Clin Trials. 2015;45(Pt A):123–9.

  37. Qureshi R, Mayo-Wilson E, Li T. Harms in Systematic Reviews Paper 1: an introduction to research on harms. J Clin Epidemiol. 2022;143:186–96.

  38. Qureshi R, Mayo-Wilson E, Rittiphairoj T, McAdams-DeMarco M, Guallar E, Li T. Harms in Systematic Reviews Paper 2: methods used to assess harms are neglected in systematic reviews of gabapentin. J Clin Epidemiol. 2022;143:212–23.

  39. Qureshi R, Mayo-Wilson E, Rittiphairoj T, McAdams-DeMarco M, Guallar E, Li T. Harms in Systematic Reviews Paper 3: given the same data sources, systematic reviews of gabapentin have different results for harms. J Clin Epidemiol. 2022;143:224–41.

  40. Golder S, Loke YK, Wright K, Norman G. Reporting of adverse events in published and unpublished studies of health care interventions: a systematic review. PLoS Med. 2016;13(9):e1002127.

  41. Junqueira DR, Zorzela L, Golder S, Loke Y, Gagnier JJ, Julious SA, et al. CONSORT Harms 2022 statement, explanation, and elaboration: updated guideline for the reporting of harms in randomised trials. BMJ. 2023;381:e073725.

Acknowledgements

We thank Mr. Lu Cuncun from Lanzhou University for developing the search strategy for the whole project. We also thank Rui Zhang, Xing Xing, Yuan Tian, and Yi Zhu from Anhui Medical University and Xiaoqin Zhou, Minghan Dai, and Tianqi Yu from Sichuan University of West China Hospital for helping with the data collection and data checking of the whole project.

Funding

The current study was supported by the National Natural Science Foundation of China (72204003). Luis Furuya-Kanamori was supported by an Australian National Health and Medical Research Council Fellowship (APP1158469). Suhail Doi was supported by Program Grant #NPRP-BSRA01-0406–210030 from the Qatar National Research Fund. The funding bodies had no role in any process of the study (i.e., study design, analysis, interpretation of data, writing of the report, and decision to submit the article for publication).

Author information

Contributions

Conception and design: CX; manuscript drafting: CX; data collection: CX, FY, TQ, XY; data analysis and result interpretation: CX; statistical guidance: LL, HC; methodology guidance: SD, LFK, LZ, SV, YL, SG, SL; manuscript editing: LL, LFK, LZ, SV, YL, HC, SG, SL. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. All authors read and approved the final version to be published.

Corresponding author

Correspondence to Chang Xu.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Fig. S1.

The process of moderator harmonization. Fig. S2. The DAG plot for identifying potential effect modifiers (Blind for participants and health care providers). Fig. S3. The DAG plot for identifying potential effect modifiers (Blind for outcome assessors). Fig. S4. The word cloud of harm outcomes of the SMART Safety dataset.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Xu, C., Zhang, F., Doi, S.A.R. et al. Influence of lack of blinding on the estimation of medication-related harms: a retrospective cohort study of randomized controlled trials. BMC Med 22, 83 (2024). https://doi.org/10.1186/s12916-024-03300-7

  • DOI: https://doi.org/10.1186/s12916-024-03300-7
