Herein, we identified a random sample of trials with SAEs posted at ClinicalTrials.gov to assess whether these safety results were reported in published articles and, if so, whether there were discrepancies between the publication and the registry data. Our results highlight that the reporting of SAEs in published articles remains a major problem. Of the 300 trials with SAEs posted at ClinicalTrials.gov in our sample, 202 had a matching publication; for 30 of these (15 %), the corresponding publication did not mention SAEs or reported that none had occurred. The number of SAEs per group was frequently not reported in the published articles, and when it was reported, discrepancies with the numbers posted at ClinicalTrials.gov were common, often with more SAEs reported at ClinicalTrials.gov than in the published article.
Restricted space in articles is a frequently cited reason for incomplete reporting of harms [4, 18]. However, the assessment of the balance between benefits and risks should be at the core of trial reports. Failure to report SAEs may lead to a biased safety profile and to erroneous decision-making, with major consequences for patients. Despite the extension of the Consolidated Standards of Reporting Trials (CONSORT) statement published in 2004, which provides guidelines for reporting harms-related data [7], the reporting of safety data in published articles of clinical trials remains suboptimal [5, 16, 19, 20], with poor adherence to the statement [21–24]. According to a recent study, only 63 % of published articles reported the total number of SAEs by group [16].
In a previous article focusing on completeness of reporting, we found that SAEs were significantly more completely reported at ClinicalTrials.gov than in the published articles (99 % vs. 63 %, P < 0.0001) [16]. This result was particularly troubling, but one possible explanation was that SAEs were not reported in some published articles simply because none had occurred.
Our results identified trials whose published articles did not mention SAEs, or reported that none occurred, even though SAEs were posted at ClinicalTrials.gov. Furthermore, when SAEs were reported in published articles, discrepancies with the numbers posted at ClinicalTrials.gov were common, often with more SAEs reported at ClinicalTrials.gov than in the published article. Although we do not know which results are the ‘true’ ones, we believe that these discrepancies clearly point to problems in the reporting of SAEs. Previous studies comparing results posted at ClinicalTrials.gov with those in peer-reviewed publications also found discrepancies in the number of SAEs [18, 25, 26]. The originality of our approach lay in identifying trials for which we knew that SAEs had occurred, which allowed us to assess whether and how these safety results were reported in published articles.
Our results have important implications. They highlight that ClinicalTrials.gov provides more information on serious harms, whereas these events are frequently underreported in published articles.

For systematic reviewers, our findings underline the value of searching ClinicalTrials.gov for safety results not yet published in journals and, for trials with SAEs both posted and published, of comparing the numbers of SAEs. In case of discrepancies, we recommend systematically contacting authors for clarification and, if they do not respond, performing sensitivity analyses to assess to what extent these discrepancies may affect the meta-analysis results.

For journals, our findings raise questions about the peer-review process: assessing the data recorded in registries, including results and harms when available, should be part of the process so that any discrepancies that could bias the results are identified, and investigators should be contacted for clarification when discrepancies are found. Our findings also raise questions about how reporting guidelines, especially the CONSORT extension for harms, are implemented by journals, with a need for more active endorsement. Templates with mandatory reporting of critical elements, such as the one used at ClinicalTrials.gov [14], could improve the reporting of safety results in journals.

For policymakers, our results support extending the mandatory posting of trial results to all countries. Besides limiting publication bias and selective outcome reporting, public registries may help improve the transparency of clinical trial results. Accordingly, in April 2014, the European Union voted to adopt the Clinical Trials Regulation, which requires the registration of all clinical trials conducted in Europe and the posting of trial summary results in the European Clinical Trials Database (EudraCT) within 1 year after trial completion [27, 28]. Nevertheless, compliance with the legal requirement in the United States remains low [16, 29–33], despite civil monetary penalties (up to $10,000 a day) and, for federally funded studies, the withholding of grant funds in cases of non-compliance [14]. Therefore, compliance must be improved. A recent study showed that sending emails to the responsible parties of completed trials that did not comply with the FDAAA legal requirement to post results significantly improved the posting rate at 6 months [34].
Limitations
We may not have identified all published articles because we searched only MEDLINE. Furthermore, for trials without a matching publication, results could still appear at a future date, because publication in journals may take time, notably because of multiple submissions. Some trials may have multiple publications reporting different results; in such cases, we did not include all reports resulting from the trial but only those that included safety data and matched the time frame reported at ClinicalTrials.gov. Finally, this study focused on trials assessing pharmacological treatments, but non-pharmacological treatments can also be associated with SAEs.