Dissemination of Registered COVID-19 Clinical Trials (DIRECCT): a cross-sectional study

Abstract

Background

The results of clinical trials should be completely and rapidly reported during public health emergencies such as COVID-19. This study aimed to examine when, and where, the results of COVID-19 clinical trials were disseminated throughout the first 18 months of the pandemic.

Methods

Clinical trials for COVID-19 treatment or prevention were identified from the WHO ICTRP database. All interventional trials with a registered completion date ≤ 30 June 2021 were included. Trial results, published as preprints, journal articles, or registry results, were located using automated and manual techniques across PubMed, Google Scholar, Google, EuropePMC, CORD-19, the Cochrane COVID-19 Study Register, and clinical trial registries. Our main analysis reports the rate of dissemination overall and per route, and the time from registered completion to results using Kaplan–Meier methods, with additional subgroup and sensitivity analyses reported.

Results

Overall, 1643 trials with completion dates ranging from 46 to 561 days prior to the start of results searches were included. The cumulative probability of reporting was 12.5% at 3 months from completion, 21.6% at 6 months, and 32.8% at 12 months. Trial results were most commonly disseminated in journals (n = 278 trials, 69.2%); preprints were available for 194 trials (48.3%), 86 (44.3%) of which converted to a full journal article. Trials completed earlier in the pandemic were reported more rapidly than those later in the pandemic, and those involving ivermectin were more rapidly reported than other common interventions. Results were robust to various sensitivity analyses except when considering only trials in a “completed” status on the registry, which substantially increased reporting rates. Poor trial registry data on completion status and dates limits the precision of estimates.

Conclusions

COVID-19 trials saw marginal increases in reporting rates compared to standard practice; most registered trials failed to meet even the 12-month non-pandemic standard. Preprints were common, complementing journal publication; however, registries were underutilized for rapid reporting. Maintaining registry data enables accurate representation of clinical research; failing to do so undermines these registries’ use for public accountability and analysis. Addressing rapid reporting and registry data quality must be emphasized at global, national, and institutional levels.


Background

Complete and timely reporting of trials allows for evidence to rapidly translate into clinical practice — this need is more acute during public health emergencies. The World Health Organization (WHO) recommends that dissemination of trial results in a global health emergency should be “greatly shortened” from the usual expectation of 12 months [1], although this has not occurred in past pandemics with many trials failing to meet even non-pandemic expectations [2]. Since 2020, the number of registered clinical trials addressing COVID-19 grew alongside the pandemic: the WHO International Clinical Trials Registry Platform (ICTRP) database went from 240 COVID-19 trial registrations in February 2020 to 20,076 on 5 May 2023 [3].

Clinical trial registration and results reporting are moral, ethical, pragmatic, and often legal requirements [4,5,6,7] and should ensure a public accounting of planned, ongoing, and completed clinical trials. Due to these various mandates, trial registries have become an essential piece of public health infrastructure and a valuable tool for research. They provide an avenue for transparency and accountability by making the planned methods, timeline, and outcomes of a trial public. Expectations are then set for when results should be available and what they should contain. Some registries also have the ability to directly host results. This robust system of registration and reporting complements and informs dissemination in other fora like journals and preprints [8] and can aid in evidence synthesis [9], landscape assessments [10], and planning future research [11].

The primary objective of this study was to evaluate results reporting of COVID-19 trials completed in the first 18 months of the pandemic (i.e., January 2020 through June 2021). It leverages clinical trial registry data to examine a comprehensive population of COVID-19 research. This allows tracking of where and when the dissemination of clinical trials occurred during the pandemic. These results can help inform guidance on how research should be managed, reported, coordinated, and synthesized during future emergency situations. These findings extend on our previously published interim results on trials completed during the first 6 months of the pandemic, in which we found that 14% (41/285) of trials rapidly reported results and that preprints were most commonly used [12].

Methods

The DIRECCT project is a meta cross-sectional study that examined the availability of results of trials completed through the first 18 months of the COVID-19 pandemic (i.e., January 2020–June 2021). Further details on the methods are available on the Open Science Framework in a preregistered protocol (https://doi.org/10.17605/OSF.IO/8FR9T) and an updated protocol inclusive of all amendments (https://doi.org/10.17605/OSF.IO/5F8J2). This study was reported according to the STROBE checklist for cross-sectional studies (Additional File 1) [13].

Trial population

We used the WHO ICTRP list of registered COVID-19 trials [14], updated through 1 July 2021, as our primary data source. This was supplemented with additional data collected directly from clinical trial registries. This allowed access to information not present in the ICTRP dataset and better management of duplicate registrations. Cross-registrations (i.e., a trial with multiple registrations) were identified in registry data and results publications and collapsed into a single record prior to final analysis.
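The collapsing of cross-registrations can be sketched as a connected-components grouping over linked registry IDs. This is a minimal illustration, not the actual DIRECCT pipeline, and the IDs below are hypothetical:

```python
def collapse_cross_registrations(all_ids, linked_pairs):
    """Group registry IDs into single trial records: any two IDs linked
    by a cross-registration (identified in registry data or a results
    publication) belong to the same trial. Union-find sketch."""
    parent = {tid: tid for tid in all_ids}

    def find(tid):
        # Walk up to the root representative, compressing the path.
        while parent[tid] != tid:
            parent[tid] = parent[parent[tid]]
            tid = parent[tid]
        return tid

    for a, b in linked_pairs:
        parent[find(a)] = find(b)

    trials = {}
    for tid in all_ids:
        trials.setdefault(find(tid), set()).add(tid)
    return sorted(trials.values(), key=lambda s: sorted(s))
```

Linking, say, a ClinicalTrials.gov record to its EUCTR counterpart collapses them into one record, while unlinked registrations remain separate trials.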

Inclusion and exclusion criteria

We aimed to include all trials, of any design, examining an intervention for the treatment or prevention of COVID-19 infection and acute disease. The inclusion and exclusion criteria remained consistent with those in the interim report of the study [12], including the previously declared post hoc additions (Table 1). Existing determinations on exclusions from our preliminary analysis were carried forward into this final analysis.

Table 1 Inclusion and exclusion criteria

Inclusion and exclusion criteria were applied first automatically using data extracted from the ICTRP and individual registries. This screened out observational studies, those in a “withdrawn” status, and those registered prior to 2020 or completed after the initiation of our searches. For all trials passing this automated step, inclusion and exclusion criteria were then manually assessed prior to manual searches for trial results within the Numbat Systematic Review Manager.

Data extraction

For automated trial results searches, PubMed metadata was extracted for all records matching a PRESS Peer Reviewed search strategy for COVID clinical trials from the COVID-evidence project [15]. Trial IDs from the WHO ICTRP list of registered COVID-19 trials were text searched in the PubMed metadata as well as in the free-text of the CORD-19 database of open access coronavirus-related literature. Potential hits were presented to searchers and reviewed during the manual search process.
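The trial ID text search can be illustrated with regular expressions for a few common registry ID formats. These patterns are simplified assumptions for illustration; the actual search code may differ:

```python
import re

# Simplified patterns for a few common registry ID formats (illustrative).
TRIAL_ID_PATTERNS = [
    re.compile(r"NCT\d{8}"),      # ClinicalTrials.gov
    re.compile(r"ISRCTN\d{8}"),   # ISRCTN
    re.compile(r"ChiCTR\d{10}"),  # Chinese Clinical Trial Registry
]

def find_trial_ids(free_text):
    """Return all registry IDs mentioned in a block of free text,
    e.g., PubMed metadata or CORD-19 full text."""
    hits = set()
    for pattern in TRIAL_ID_PATTERNS:
        hits.update(pattern.findall(free_text))
    return hits
```

Any hit links the document to a registration and is queued for manual review, since an ID mention alone does not confirm the document reports that trial's results.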

The interventions studied in included trials were extracted from trial registries. A “unique” study arm was recorded if it differed from other arms in either the intervention(s) used or the dosing, but not if it differed only in the population (e.g., age cohorts) receiving the same intervention. Interventions under study were separated from controls and standards of care given in all arms. Interventions were then manually reviewed and normalized, grouped, and deduplicated, regardless of dose, to arrive at the unique interventions used in each trial.
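A minimal sketch of the normalization and deduplication step, using a hypothetical synonym map (the real DIRECCT vocabulary was curated manually and is far more extensive):

```python
import re

# Hypothetical synonym map; the curated vocabulary was larger.
SYNONYMS = {
    "hcq": "hydroxychloroquine",
    "hydroxychloroquine sulfate": "hydroxychloroquine",
    "plaquenil": "hydroxychloroquine",
}

DOSE = re.compile(r"\b\d+(\.\d+)?\s*(mg|g|ml|mcg)\b")

def normalize_interventions(arm_descriptions):
    """Collapse free-text arm descriptions to a deduplicated set of
    intervention names, ignoring dose annotations like '200 mg'."""
    unique = set()
    for arm in arm_descriptions:
        name = DOSE.sub("", arm.lower()).strip()
        unique.add(SYNONYMS.get(name, name))
    return unique
```

Three differently worded hydroxychloroquine arms would thus count as a single unique intervention for that trial.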

Search strategy

Following manual inclusion screening, trials progressed to manual searches. First, the trial’s registrations and any automatically identified potential publications from PubMed or CORD-19 were screened for any published results. Keyword searches were then conducted in the Cochrane COVID-19 study register, PubMed, Europe PMC, Google Scholar, and Google. Search terms covered the trial IDs, interventions, investigators, sponsor, and any other relevant or distinct keywords at the discretion of the searcher.

We had planned to separately examine reporting for the first 12 and 18 months of the pandemic. However, due to the large volume of COVID-19 trials, both analyses were collapsed into a final 18-month analysis. Manual searches took place between March 2021 and January 2022; each trial was searched at least once after 15 August 2021 to ensure at least 6 weeks had passed from any given trial’s registered completion date. Additional validations and checks were made to the dataset during cleaning, preparation, and analysis through January 2023.

Data validation

As the number of COVID-19 trials was much larger than our initial expectations, searches in duplicate for all trials were not possible with available resources. In light of this, a number of strategies were adopted to minimize bias and extraction errors. All trials completed through 31 December 2020 (i.e., the first 12 months) were searched independently, in duplicate, with at least one search occurring after the final results cut-off date of 15 August 2021. For the remaining trials, any issues experienced by extractors during solo-extraction were flagged and triggered an independent second search by another team member. All trials manually screened for inclusion and searched in duplicate were reconciled by consensus between extractors, and any remaining issues were referred to the study leadership team (MSH, NJD) for final adjudication. Notable edge cases in our population are detailed in Additional File 2: Appendix A. Following the completion of data extraction, preprint-journal article matches were validated using the Bio/MedRxiv API [16] when possible, and the leadership team manually reviewed all remaining combinations to ensure correct matching and categorization.

Outcomes

Outcome definitions

Searchers recorded any trial results disseminated as a journal publication, preprint publication, or stand-alone results on a registry. In addition, journal and preprint publications were assessed as being “interim” or “full.” Full trial results were those that contained the complete follow-up for at least one primary outcome. Publications were matched to registrations, and to each other (i.e., preprints to publications) by comparing titles, interventions, investigators, and basic design characteristics with ambiguities referred for adjudication. Registry results had to be hosted on the registry, rather than simply link to an external results document or paper, and meet the minimum ICTRP standard for summary trial results to be counted (i.e., contain baseline characteristics, participant flow, adverse events, and outcome measures) [17].
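The ICTRP minimum-standard check for registry results can be expressed as a completeness test over the four required components. This is a sketch; the field names are hypothetical:

```python
# The four components required by the ICTRP minimum standard for
# summary results (field names here are hypothetical).
REQUIRED = {"participant_flow", "baseline_characteristics",
            "outcome_measures", "adverse_events"}

def meets_ictrp_minimum(registry_entry):
    """True if a registry-hosted result contains all four required
    components. Results that merely link out to an external document
    or paper would be screened out before this check."""
    present = {key for key, value in registry_entry.items() if value}
    return REQUIRED <= present
```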

Trial results reporting

Per protocol, our primary analysis is based on registry data as it stood when collected in early July 2021. When a trial was cross-registered on multiple databases, we took key information, like completion dates, from the registry with the most recent update, or from the EU Clinical Trial Register (EUCTR), which only includes completion dates after a trial completes. We report summary statistics on trial results availability across any route, and for each individual route. Additionally, we generated cumulative incidence curves using Kaplan–Meier methods with unreported trials censored on 15 August 2021. For the time to preprint publication model, the cumulative incidence curve was fit using the Aalen-Johansen method with journal publication as a competing risk and ties broken by nominal offsets [18]. For time to registry results, we limited our population to trials with a registration on ClinicalTrials.gov, the EUCTR, or the ISRCTN, as these registries have the most mature processes for hosting results meeting our definition. Trials with results published prior to the registered completion date were considered reported at time zero. We additionally investigated time from preprint to journal publication using Kaplan–Meier methods.
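The Kaplan–Meier step can be sketched with a plain product-limit estimator. This is illustrative only, assuming per-trial durations and reporting flags; the published analysis used standard survival analysis packages:

```python
def km_cumulative_incidence(days, reported, eval_times):
    """Cumulative probability of reporting, 1 - S(t), via the
    Kaplan-Meier product-limit estimator. `days` is time from
    registered completion to first result or censoring (trials
    reporting before completion would enter at day 0); `reported`
    flags a located result (True) vs censoring (False)."""
    event_times = sorted({d for d, r in zip(days, reported) if r})
    survival, out, i = 1.0, [], 0
    for t in eval_times:
        # Fold in every event time up to and including t.
        while i < len(event_times) and event_times[i] <= t:
            et = event_times[i]
            at_risk = sum(1 for d in days if d >= et)
            events = sum(1 for d, r in zip(days, reported) if r and d == et)
            survival *= 1 - events / at_risk
            i += 1
        out.append(1 - survival)
    return out
```

Censored trials contribute to the at-risk denominator until their censoring time without ever counting as an event, which is what distinguishes this from a naive reported/total proportion.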

Subgroup analyses

Time-to-report comparisons, using cumulative incidence curves, were also generated for various sub-populations. These additional analyses were modified from, or added post hoc to, the original protocol. First, we examined how trial reporting timelines changed over time. Trials were stratified into three 6-month periods covering the first 18 months of the pandemic (i.e., January 2020–June 2021) based on when they completed. These 6-month periods align with the original prespecified analysis plan. Next, cumulative incidence curves for studies containing each of the five most common interventions were fit. Lastly, we restricted our sample to those meeting certain design characteristics and enrollment standards, as a proxy for those most likely to influence clinical practice. We defined these as late-phase (i.e., ≥ Phase 2), randomized trials enrolling at least 100 participants. Trial characteristics were extracted from the ICTRP dataset using previously validated automated methods [19].
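The stratification into 6-month periods can be sketched as a simple date bucketing, with boundary dates following the study window described above:

```python
from datetime import date

def pandemic_period(completion):
    """Assign a trial to one of three 6-month periods of the first
    18 months of the pandemic by its registered completion date."""
    if completion < date(2020, 1, 1) or completion > date(2021, 6, 30):
        return None  # outside the study window
    if completion <= date(2020, 6, 30):
        return "Early"
    if completion <= date(2020, 12, 31):
        return "Mid"
    return "Late"
```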

Sensitivity analyses

Four sensitivity analyses, all post hoc, were conducted in order to check whether reporting rates were sensitive to changes in methods. Each sensitivity analysis was applied independently, not cumulatively. First, only full study completion dates were used, rather than primary completion dates when available. Next, the final population was restricted to only those that had both reached their registered completion date and updated their trial registration to a “completed” status, indicating proactive acknowledgement the trial took place and completed. We then expanded our definition of “first results” to include interim results that did not include complete follow-up of a primary outcome. Lastly, we re-extracted data from all registries in April 2022 and applied these updated completion and trial status data retrospectively to our sample for re-analysis. For each sensitivity analysis, we examined the raw reporting rates and cumulative incidence curves, and compared them to our primary findings.

Software, data, and code

Data analysis was conducted in R V.4.3.0 (R Foundation for Statistical Computing, Vienna, Austria) and Python V.3.8.1 (Python Software Foundation, Wilmington, Delaware USA). Manual data extraction and reconciliation was conducted in Numbat Systematic Review Manager [20]. Code and data are available on Github [21, 22] and Zenodo [23].

Results

As of 30 June 2021, the ICTRP COVID-19 database contained 10,396 interventional and observational clinical study registrations. After automated screening, and accounting for cross-registrations, 2372 completed interventional trials remained. After manual screening, 1643 completed interventional trials meeting our inclusion criteria were manually searched for our final analysis; 68% (n = 1100) were searched by at least two investigators. A flow-chart detailing all exclusions is available in Fig. 1. Characteristics of the 1643 trials are included in Table 2.

Fig. 1 Flow-chart for trial inclusion. ICTRP, International Clinical Trials Registry Platform. Cross-registrations include those identified in automated and manual screening of registries and publications

Table 2 Characteristics of included trials, overall and subsetted by trials with and without results reported by the start of result searches 15 August 2021. Cross-registrations refer to registrations in 2 or more registries; multiple registrations of the same trial in the same registry are not counted as cross-registrations. Top interventions refer to most common individual interventions. The “Other” trial status includes trials in “Ongoing” and any other non-completed statuses

Trial results reporting

Registration of COVID-19 clinical trials

Trials were most commonly registered in ClinicalTrials.gov, followed by the Chinese Clinical Trial Registry. Overall, 210 trials (13%) had two or more registry entries that required deduplication. Reporting rates across registries were comparable, with trials on the ISRCTN most likely to have results across all dissemination routes (62%). Additional File 2: Figure S1 shows an upset plot of cross-registrations, and Additional File 2: Table S1 shows the reporting rate per registry without deduplication of records.

Dissemination of COVID-19 clinical trials

The cumulative probability of reporting under pandemic conditions was 12.5% at 3 months from completion, 21.6% at 6 months, and 32.8% at 12 months — in other words, just under a third of trials met the WHO’s normal standard for first dissemination of results, ideally on a trial registry, in non-emergency situations. The minimum time from completion was 46 days (i.e., six full weeks through 15 August 2021) and the maximum was 561 days; the median time from completion to searches was 250 days (IQR 138–369), and the median time to reporting was undefined for all models. Figure 2A–D shows cumulative incidence plots for time-to-publication for (A) first publication across any dissemination route, (B) earliest journal publication, (C) earliest preprint publication, and (D) summary results limited to the three registries with mature structured summary results reporting formats (i.e., EUCTR, ClinicalTrials.gov, ISRCTN; see Additional File 2: Figure S2 for data across all registries). Overall, 402 trials (24%) had a result available prior to the start of searches, spread across 545 individual results publications.

Fig. 2 A–D Time to trial results reporting across dissemination routes. Summary results in the registry are limited to the three registries with structured summary results reporting formats (EU Clinical Trial Register, ClinicalTrials.gov, ISRCTN). The Aalen-Johansen plot of time to preprint publication (C) used nominal offsets to break ties at 0. Trials with a publication date prior to the available completion date (across all results, n = 71 of 402 trials, 18%) were considered reported at time 0

Dissemination routes for COVID-19 results

Of the 402 trials with results, journal articles were the most common dissemination route, with 278 trials (69.2%) having a peer-reviewed publication reporting a primary endpoint. Reporting across all dissemination routes is detailed in Fig. 3. A primary result was available in a preprint for 194 trials (48.3%); matching preprint-journal pairs were located for 86 trials (21.4%). One trial was excluded from this count because its preprint and journal article reported different results from the same trial and were therefore not matched. When multiple preprint-article pairs (n = 1) or multiple preprints for a single article (n = 4) were located, we used the earliest preprint date in all analyses. Figure 4 shows the delay from preprint to full journal publication for all preprints; journal articles published prior to preprints were set to time 0, and preprints without a journal publication were censored at the start of result searches. The median time from preprint publication to journal publication was 198 days.

Fig. 3 COVID-19 clinical trials with results by dissemination route. Trials reporting both a preprint and a full article include some non-matches, i.e., a preprint reporting full results of one primary outcome measure and an article reporting full results of a different primary outcome measure, which are both counted as full results

Fig. 4 Time from preprint to journal publication

Subgroup analyses

Figure 5a shows time to reporting stratified by whether trials completed in the early, middle, or late periods of the first 18 months of the COVID-19 pandemic. Trials completed in the first 6 months of the pandemic were consistently reported sooner than those in the later parts of the pandemic at both 100 (Early: 17.0%; Mid: 13.6%; Late: 11.9%) and 200 days (Early: 28.0%; Mid: 21.4%; Late: 20.8%) from trial completion.

Fig. 5 A–C Time to dissemination by subgroups. A 6-month phase of the pandemic. B Top 5 common interventions. C Based on trial design standards. HCQ: Hydroxychloroquine; Con. Plasma: Convalescent Plasma

To investigate whether registered trial design characteristics influenced reporting, we separated all trials that were Phase 2 or higher, randomized, and enrolled at least 100 participants from those that did not have these characteristics. Trials were included if any registration indicated these characteristics were met. Of the 598 trials with these characteristics, 138 (23%) reported. Figure 5c shows how this compares to trials without these characteristics.

Sensitivity analyses

First, limiting our sample to trials that had reached full completion by 30 June 2021 showed 374 of 1420 (26%) trials reported by the start of searches. Next, restricting our sample to trials with a “Completed” status (e.g., Completed, Terminated) reduced our sample to 553 trials, of which 230 (42%) reported by the start of searches. Then, expanding our results to interim findings added 68 results publications for a reporting percentage of 27% (n = 439 trials). Finally, using the completion dates available on the registries in April 2022 changed the completion date for 272 trials of the 1648 (17%) trials that passed manual screening. Of these, 215 trials’ completion dates were made later whereas 57 trials’ completion dates were moved earlier. An additional 7 trials had a withdrawn status on a registry in April 2022 and were excluded. The final dataset using completion dates and trial status available in April 2022 comprised 1486 trials of which 392 (26%) trials reported by the start of our searches. Additional File 2: Figure S3 shows cumulative incidence curves for each sensitivity analysis compared to the primary analysis.

Discussion

Summary of results

Examining all trials registered as completed during the first 18 months of the COVID-19 pandemic yielded a 32.8% cumulative probability of reporting at 12 months, with just over two-thirds of trials failing to meet the WHO’s non-pandemic standard for first dissemination of trial results. The median time from trial completion to results searches was 250 days (range 46–561 days). Despite a rise in the use and popularity of preprints, especially early in the pandemic, the most common dissemination route was to publish only in a journal article. Clinical trial registries were, comparatively, rarely used for rapid dissemination. The overall reporting rate was robust to a number of sensitivity analyses; however, trials with a completed status on a registry, in addition to having passed their listed completion date, had a notably higher reporting rate. Reporting was most rapid during the first 6 months of the pandemic compared to the subsequent two 6-month periods. Ivermectin showed notably different reporting patterns compared to other top interventions (i.e., hydroxychloroquine, convalescent plasma, azithromycin, and stem cells).

Findings in context

This study builds on our interim findings, and studies tracking COVID-19 trials from other groups [12, 24,25,26]. Accelerated trial result reporting was consistent in this expanded population, as 13.5% of studies continued to report within 100 days of completion. As would be expected, more time to report led to an increase in overall trial results availability from 14 to 24%. While our preliminary findings showed a slight preference for preprints at the start of the pandemic, by the start of our searches on 15 August 2021, journal publications were the most common dissemination route. Still, the rise in the use of preprints remains substantial and notable, with 57% of reported trials in our population having a preprint available. However, the majority of preprints in our cohort remained unconverted into journal articles (55%, 111/202). Other research has shown concordance in reporting characteristics among COVID preprints that do convert to journal articles [27,28,29].

While the raw reporting rate of 24% is low, it does appear that results dissemination of completed trials was accelerated during the COVID-19 pandemic, both compared to prior pandemics and standard practice. Jones and colleagues examined reporting of trials for Ebola, H1N1, and Zika virus, with time from completion ranging from ~ 18 months to ~ 72 months [2]. Only Ebola saw a journal publication rate exceeding 20% within a year from completion; the journal reporting rate for COVID-19 exceeded 20% within 300 days, and for any dissemination route in under 200 days. The delayed reporting found by Jones and colleagues is consistent with other findings from the H1N1 pandemic [30, 31]. Similar to our COVID-19 analysis, there was substantially lower dissemination on registries throughout these pandemics. Only five of 333 (1.5%) trials met the non-emergency WHO standard of having results on a registry within 12 months, and in a journal within 24 months; while 32.8% of COVID-19 trials had disseminated results within 12 months, only 7.2% reported on the registry, even when restricting the population to only the registries most likely to contain results. Based on the low usage of registries for rapid dissemination during the COVID-19 pandemic to date, compliance with this standard has not improved. In contrast with our findings, Jones and colleagues’ analysis did not find a noticeable change in overall reporting for trials in a completed status.

As in our interim findings, reporting of COVID-19 clinical trials appeared accelerated compared to standard practice. Other large studies examining the time to dissemination for clinical trials in non-pandemic situations show rates of dissemination within the first year far below the 32.8% seen in our findings [32, 33]. Even legally mandated reporting to ClinicalTrials.gov under US law leads to just 41% of trials reported within a year of primary completion [34]. These non-pandemic analyses, however, typically only cover journal articles and registry results; the rise of preprints may impact future analyses of time-to-publication should they continue to be used in non-COVID-19 contexts. However, even having one quarter of trials published in journals at 1 year would represent an improvement compared with recently documented practice [35,36,37].

In our assessment of common interventions, trials containing arms assessing convalescent plasma, hydroxychloroquine, and azithromycin showed reporting patterns similar to trials examining all other interventions outside of the top five most common. Stem cells also followed the same general trend though with slightly slower reporting. However, trials with an ivermectin treatment arm showed persistently more rapid reporting. This is notable given the serious concerns raised around both fraud and overall trial quality within ivermectin COVID-19 research [38, 39]. Also notable is the relatively low reporting rate of stem cell trials. Ivermectin and hydroxychloroquine, including its usage in combination with azithromycin, were the focus of intense attention, debate, and controversy during the pandemic [40,41,42,43]. While receiving less attention, convalescent plasma also garnered serious consideration as a potential treatment, including an emergency approval from the US Food and Drug Administration, before it was shown to be largely ineffective [44, 45]. However, stem cells were never elevated to similar levels of public, political, and media attention despite high apparent interest from the research community. This mismatch translating to the lowest, and slowest, reporting trends is a notable finding worthy of additional investigation.

Strengths and limitations

This analysis presents a thorough overview of dissemination of clinical trial results during the COVID-19 pandemic. We are not aware of any other analysis that comprehensively examines the link between registration and publication of COVID-19 clinical trials across all ICTRP primary and data provider registries. We made efforts to limit duplication in our dataset through extensive checks for cross-registrations. Given that 13% of the trials in our final sample had multiple registry entries, failure to take this step would likely have impacted our conclusions. Our detailed documentation of these links between registrations and results across multiple dissemination routes could be a boon to future research examining COVID-19 clinical trials. As this is, to our knowledge, the largest comprehensive assessment of the reporting of COVID-19 clinical trials to date, our curated, open dataset can aid in making future metaresearch on the pandemic more efficient and complete.

We included all registered trials, not only randomized controlled trials, in this analysis as a reflection of the full scope of the COVID-19 research landscape. Other major COVID-landscape projects tended to focus on randomized trials, as they aimed to support evidence synthesis efforts [25, 46, 47]. Non-randomized studies, such as early research on hydroxychloroquine [48], were influential to the course of the pandemic despite their design limitations. A subgroup analysis examining only late-phase, large, randomized studies showed a reporting rate nearly identical to the overall rate (23% vs. 24%). These smaller, early-phase and non-randomized trials, though perhaps less influential for evidence synthesis and medical guidelines, represent the majority of our sample (64%, 1045/1643), collectively enrolled thousands of participants at substantial overall cost, and thus carry the same moral imperative to share timely results and avoid research waste.

While we could only search roughly two-thirds of our sample in duplicate and could not conduct outreach to investigators due to resource constraints, our comprehensive search strategy ensured all trials underwent a thorough process for results discovery. Registries, COVID-19-specific study databases, and numerous bibliographic databases were searched using both automated and manual methods. In our efforts to be as inclusive as possible, we included non-English-language results if we could reasonably translate or otherwise validate the connection to a given registration, though we recognize that the study team was not necessarily well positioned to locate results outside of their native languages and may have missed some results as a consequence. Publications in non-English languages should still ideally include the trial registration ID in the abstract and full text, which can help mitigate these discovery issues. Searchers were also encouraged to flag trials for adjudication and duplicate coding when faced with any doubts or questions.

This study aimed to examine the rapid dissemination of trial results within pandemic conditions, leading to a shorter time from completion to results searches than is typical for similar studies of trial non-publication. This approach allowed for feedback on pandemic trial reporting trends faster than typical retrospective analyses, which usually occur years later. However, some studies crucial to the pandemic response, but with very long follow-up time, such as adaptive trials and vaccine trials, were not included in our population, as they remain ongoing with only interim results potentially available. We hope future research will build on our dataset of COVID-19 registration and publication through expanded and updated searches, to further understand how dissemination practices may have influenced clinical decision-making during the entirety of the COVID-19 pandemic.

The main limitation of this work is that poor data quality on clinical trial registries may have influenced our findings. Given existing concerns about the reliability of trial information across registries [49], we made efforts to use the most recent and complete data from multiple registries where possible. We also attempted to examine the impact of data quality and found that using more recent data did not improve reporting statistics. However, registry entries with more accurate upkeep, in the form of proactive updates to the trial status, did show markedly higher dissemination: the overall reporting rate nearly doubled (24% vs. 42%) when the sample was limited to trials that had been proactively updated to a “completed” status in addition to having passed their completion date.

Poor registry data could impact this analysis in a number of ways. First, the status of trials may be incorrect, resulting in the misclassification of ongoing, completed, terminated, and withdrawn trials. Trials that terminate early with partial enrollment are still expected to update their registrations and indicate whether they were (1) withdrawn prior to enrolling participants, in which case no results could exist, or (2) terminated early after enrolling some participants. Terminated trials are still expected to report in some form, though reporting rates for these trials are known to be low [49,50,51,52]. Next, completion dates could be incorrect, leading to imprecision in reporting timelines and the potential misclassification of “ongoing” studies. Our study showed such misspecification of registered completion dates: 71 trials, 18% of all results located, had results published on or before the registered completion date. Refreshing our completion date data 10 months later made no appreciable difference to the overall reporting rate or trends, suggesting that registry data quality does not improve with increased time from trial completion. Lastly, proper maintenance of registry records is likely a positive predictor of trial reporting, which could be investigated in future research. While each of these mechanisms may play a role, better data on which trials actually occurred and when they completed would lead to more precise estimates of publication bias. We hope our open data can provide a starting point to further examine the impact of registry data quality on the validity of analyses of publication bias.
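The cumulative reporting probabilities and time-to-results estimates discussed here rest on Kaplan–Meier methods, in which unreported trials are right-censored at the end of follow-up. A minimal sketch of this estimator, using entirely hypothetical trial data rather than the study dataset, illustrates why misspecified completion dates distort the timeline:

```python
# Minimal Kaplan-Meier estimator for time-to-results-reporting.
# All trial records below are hypothetical, for illustration only.

def km_reporting_curve(trials):
    """trials: list of (days_from_completion, reported: bool).
    Unreported trials are right-censored at their follow-up time.
    Returns [(time, survival_prob)], where "survival" is the
    probability of remaining *unreported* at each reporting event."""
    n_at_risk = len(trials)
    surv = 1.0
    curve = []
    # Walk through times in order: each report shrinks the survival
    # probability; a censored (unreported) trial only shrinks the
    # risk set without producing a step in the curve.
    for t, reported in sorted(trials):
        if reported:
            surv *= (n_at_risk - 1) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= 1
    return curve

# Hypothetical cohort: three trials reported results; two were still
# unreported (censored) when results searches ended.
trials = [(90, True), (120, False), (180, True), (300, True), (400, False)]
curve = km_reporting_curve(trials)

# Cumulative probability of reporting at time t is 1 - survival(t).
reporting_at_180 = 1 - dict(curve)[180]
```

If a registered completion date is too early, the measured days-from-completion inflate, shifting the whole curve; if a trial's status is never updated, it persists as censored mass, which is one way poor registry upkeep depresses apparent reporting rates.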

Implications for policy and practice

Despite recommendations for accelerated reporting during public health emergencies, overall reporting remained low, with most trials failing to meet even the non-pandemic 12-month standard for results dissemination on a clinical trial registry. The slight increase in reporting compared to standard practice, especially early in the pandemic, should not obscure the fact that more than two-thirds of all pandemic-relevant trials did not publish results within 12 months of the registered study end date. This is despite the rise in preprints to aid faster dissemination [50], the availability of registries to rapidly host results [51], and efforts by many journals and publishers to fast-track review of COVID-19 research [52]. Whether this lack of reporting is due to publication bias, a high number of aborted studies, or poor registration data, it is cause for concern.

Clinical trial registries cease to represent the current clinical trial landscape when they fail to present timely, accurate, and complete data. Evidence synthesis and research planning [9, 11] rely on registries for information on planned, ongoing, and completed trials. Neglecting registry data reduces the accuracy and efficiency of this work and threatens the quality of the resulting clinical guidelines and medical decision-making. COVID-19 was a unique global phenomenon and dominated the focus of new research. Unfortunately, it appears that in the rush to initiate new studies, many trials failed to start, ended early, or struggled with enrollment, and their registry entries were simply abandoned [25]. As the high proportion of results discordant with registered completion dates shows, even when studies unambiguously did occur, registries could not necessarily be counted on as accurate reflections of reality.

Similarly disappointing is that registries remain substantially underutilized as a rapid dissemination platform. Registries like ClinicalTrials.gov and the EUCTR have standard reporting formats that allow results to be published in parallel to preprint and journal publication. While these results must meet some quality standards, there is no peer review and no lead time for writing and formatting manuscripts, which should allow for more rapid dissemination. With new minimum standards for registry-hosted results under consultation at the ICTRP [53], registries will need to invest in encouraging and facilitating reporting, while researchers and their institutions should treat reporting to registries as a routine aspect of results dissemination, especially during public health emergencies. Journal editors could also make registry maintenance and the posting of summary results a condition of publication, in much the same way they require prospective registration [54]. They could further be more explicit that the publication of summary results on a registry does not count as prior publication, so as to encourage the use of registries as a complementary dissemination route.

While faster dissemination via preprints or registries raises concerns about unvetted or low-quality results entering the public domain, it also allows high-impact results to be adopted into care more quickly [55]. Evidence has shown that COVID-19 preprints that convert to publications are typically concordant in their main characteristics [27, 28, 56], while those that remain unpublished tend to have more issues [29]. “Hot” topics like COVID-19 also likely draw more intense scrutiny during the pre-publication review process, leading to public discussion of controversial or low-quality preprinted results [57]. The quality of results posted to ClinicalTrials.gov has consistently been shown to be high when compared with journal publications for the same study [58,59,60,61].

Our results show that many COVID-19 studies remain unpublished, with unclear registry data that hides their true status. Stakeholders involved in clinical trials, including researchers, funders, registries, research institutions, ethics committees, and regulators, need to work together to facilitate timely publication and to ensure that registered data reflect a trial’s true status. Better coordination of emergency research among stakeholders can help reduce the number of trials terminated early due to false starts or failure to recruit [62, 63]. However, given low reporting rates and high uncertainty about the status of unreported trials, evidence synthesis efforts around COVID-19 treatments should routinely check for publication bias and make additional efforts to confirm the status of registered trials with investigators.

Governments and international bodies like the WHO should refine their guidance and laws around when and where results should be published, especially in public health emergencies. This will provide clear criteria that stakeholders should aim to achieve and that can be tracked and audited. Individual registries and coordinating bodies like the ICTRP should improve standards and processes for routine follow-up with trial sponsors to ensure data are updated and results are posted on, or clearly linked to, the registry. These efforts will reduce confusion and burden for future research planning, evidence synthesis, and metaresearch efforts. Improving these standards now will help ensure that the knowledge infrastructure around future public health emergencies is better managed.

Conclusion

Expectations to more rapidly report trial results during the COVID-19 public health emergency have not consistently led to more rapid reporting, as the vast majority of registered clinical trials still failed to meet this standard. Preprints were common during the pandemic, complementing journal publication as a method for dissemination; however, registries were not routinely used for rapid reporting. The importance of maintaining registry data, in order to provide an accurate representation of the research landscape, is a key issue that must be emphasized at the global, national, and institutional levels. Poor data quality also undermines the public purpose of clinical trial registration for public audit and analysis.

Availability of data and materials

The datasets generated and/or analyzed during the current study are available in the Zenodo repository (https://doi.org/10.5281/zenodo.8181415) with code and additional data available via GitHub (https://github.com/maia-sh/direcct; https://github.com/ebmdatalab/direcct-phase2-python).

Abbreviations

EUCTR:

EU Clinical Trials Register

ICTRP:

International Clinical Trials Registry Platform

WHO:

World Health Organization

References

  1. Modjarrad K, Moorthy VS, Millett P, Gsell P-S, Roth C, Kieny M-P. Developing Global Norms for Sharing Data and Results during Public Health Emergencies. PLoS Med. 2016;13:e1001935.

  2. Jones CW, Adams AC, Murphy E, King RP, Saracco B, Stesis KR, et al. Delays in reporting and publishing trial results during pandemics: cross sectional analysis of 2009 H1N1, 2014 Ebola, and 2016 Zika clinical trials. BMC Med Res Methodol. 2021;21:120.

  3. Karam G. Number of COVID-19 Clinical Studies in the ICTRP Database. Twitter.com. 2023. https://twitter.com/GhassanKaram/status/1654468011271618561. Accessed 1 Jun 2023.

  4. World Medical Association. World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA. 2013;310:2191–4.

  5. European Parliament and the Council of the European Union. Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16 April 2014 on Clinical Trials on Medicinal Products for Human Use, and Repealing Directive 2001/20/EC. Official J Eur Union. 2014;57 L 158:1–76.

  6. National Institutes of Health, Department of Health and Human Services. Clinical Trials Registration and Results Information Submission Final rule. Fed Regist. 2016;81:64981–5157.

  7. De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. N Engl J Med. 2004;351:1250–1.

  8. Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, et al. A systematic review of comparisons between protocols or registrations and full reports in primary biomedical research. BMC Med Res Methodol. 2018;18:9.

  9. Hunter KE, Webster AC, Page MJ, Willson M, McDonald S, Berber S, et al. Searching clinical trials registers: guide for systematic reviewers. BMJ. 2022;377:e068791.

  10. Karlsen APH, Wiberg S, Laigaard J, Pedersen C, Rokamp KZ, Mathiesen O. A systematic review of trial registry entries for randomized clinical trials investigating COVID-19 medical prevention and treatment. PLoS ONE. 2020;15:e0237903.

  11. Sadek J, Inskip A, Woltmann J, Wilkins G, Marshall C, Pokora M, et al. ScanMedicine: An online search system for medical innovation. Contemp Clin Trials. 2023;125:107042.

  12. Salholz-Hillel M, Grabitz P, Pugh-Jones M, Strech D, DeVito NJ. Results availability and timeliness of registered COVID-19 clinical trials: interim cross-sectional results from the DIRECCT study. BMJ Open. 2021;11:e053096.

  13. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet. 2007;370:1453–7.

  14. ICTRP. ICTRP Registry Network. World Health Organization. https://www.who.int/clinical-trials-registry-platform/network. Accessed 6 May 2021.

  15. Janiaud P, Axfors C, Saccilotto R, Hemkens LG, Schmitt A, Hirt J. COVID-evidence: a living database of trials on interventions for COVID-19. 2021.

  16. bioRxiv/medRxiv API Summary. biorxiv.org. https://api.biorxiv.org/pubs/help. Accessed 1 Jun 2023.

  17. World Health Organization. International Standards for Clinical Trial Registries - Version 3.0. Geneva: WHO; 2018.

  18. Huebner M, Wolkewitz M, Enriquez-Sarano M, Schumacher M. Competing risks need to be considered in survival analysis models for cardiovascular outcomes. J Thorac Cardiovasc Surg. 2017;153:1427–31.

  19. Smith JA, DeVito N, Lee H, Tiplady C, Abhari RE, Kartsonaki C. Estimating the effect of COVID-19 on trial design characteristics: a registered report. Royal Soc Open Sci. 2023;10:201543.

  20. Carlisle BG. Numbat Systematic Review Manager. Berlin, Germany: The Grey Literature; 2014.

  21. DeVito NJ. direcct-phase2-python. GitHub. 2023. https://github.com/ebmdatalab/direcct-phase2-python. Accessed 2023.

  22. Salholz-Hillel M. direcct. GitHub. 2023. https://github.com/maia-sh/direcct. Accessed 2023.

  23. Salholz-Hillel M, Pugh-Jones M, Hildebrand N, Schult TA, Schwietering J, Grabitz P, et al. Final Dataset for the DIssemination of REgistered COVID-19 Clinical Trials (DIRECCT) Study. 2023.

  24. Dillman A, Park JJH, Zoratti MJ, Zannat N-E, Lee Z, Dron L, et al. Reporting and design of randomized controlled trials for COVID-19: A systematic review. Contemp Clin Trials. 2021;101:106239.

  25. Janiaud P, Axfors C, Ioannidis JPA, Hemkens LG. Recruitment and Results Reporting of COVID-19 Randomized Clinical Trials Registered in the First 100 Days of the Pandemic. JAMA Netw Open. 2021;4:e210330–e210330.

  26. Sevryugina YV, Dicks AJ. Publication practices during the COVID-19 pandemic: Expedited publishing or simply an early bird effect? Learn Publ. 2022. https://doi.org/10.1002/leap.1483.

  27. Janda G, Khetpal V, Shi X, Ross JS, Wallach JD. Comparison of Clinical Study Results Reported in medRxiv Preprints vs Peer-reviewed Journal Articles. JAMA Netw Open. 2022;5:e2245847.

  28. Kapp P, Esmail L, Ghosn L, Ravaud P, Boutron I. Transparency and reporting characteristics of COVID-19 randomized controlled trials. BMC Med. 2022;20:363.

  29. Spungen H, Burton J, Schenkel S, Schriger DL. Completeness and Spin of medRxiv Preprint and Associated Published Abstracts of COVID-19 Randomized Clinical Trials. JAMA. 2023;329:1310–2.

  30. Ioannidis JPA, Manzoli L, De Vito C, D’Addario M, Villari P. Publication delay of randomized trials on 2009 influenza A (H1N1) vaccination. PLoS ONE. 2011;6:e28346.

  31. Manzoli L, Flacco ME, D’Addario M, Capasso L, De Vito C, Marzuillo C, et al. Non-publication and delayed publication of randomized trials on vaccines: survey. BMJ. 2014;348:g3058.

  32. Chen R, Desai NR, Ross JS, Zhang W, Chau KH, Wayda B, et al. Publication and reporting of clinical trial results: cross sectional analysis across academic medical centers. BMJ. 2016;352:i637.

  33. Riedel N, Wieschowski S, Bruckner T, Holst MR, Kahrass H, Nury E, et al. Results dissemination from completed clinical trials conducted at German university medical centers remained delayed and incomplete. The 2014–2017 cohort. J Clin Epidemiol. 2021;144:1–7.

  34. DeVito NJ, Bacon S, Goldacre B. Compliance with legal requirement to report clinical trial results on ClinicalTrials.gov: a cohort study. Lancet. 2020;395:361–9.

  35. Nelson JT, Tse T, Puplampu-Dove Y, Golfinopoulos E, Zarin DA. Comparison of Availability of Trial Results in ClinicalTrials.gov and PubMed by Data Source and Funder Type. JAMA. 2023. https://doi.org/10.1001/jama.2023.2351.

  36. Pedersen C, Tai S, Valley E, Henry K, Duarte-García A, Singla S, et al. Unpublished clinical trials of common rheumatic diseases. Rheumatology. 2023. https://doi.org/10.1093/rheumatology/kead141.

  37. Rees CA, Narang C, Westbrook A, Bourgeois FT. Dissemination of the Results of Pediatric Clinical Trials Funded by the US National Institutes of Health. JAMA. 2023;329:590–2.

  38. Hill A, Mirchandani M, Pilkington V. Ivermectin for COVID-19: Addressing Potential Bias and Medical Fraud. Open Forum Infect Dis. 2022;9:ofab645.

  39. Garegnani LI, Madrid E, Meza N. Misleading clinical evidence and systematic reviews on ivermectin for COVID-19. BMJ Evid Based Med. 2022;27:156–8.

  40. Hua Y, Jiang H, Lin S, Yang J, Plasek JM, Bates DW, et al. Using Twitter data to understand public perceptions of approved versus off-label use for COVID-19-related medications. J Am Med Inform Assoc. 2022;29:1668–78.

  41. Schellack N, Strydom M, Pepper MS, Herd CL, Hendricks CL, Bronkhorst E, et al. Social Media and COVID-19-Perceptions and Public Deceptions of Ivermectin, Colchicine and Hydroxychloroquine: Lessons for Future Pandemics. Antibiotics (Basel). 2022;11.

  42. Taccone FS, Hites M, Dauby N. From hydroxychloroquine to ivermectin: how unproven “cures” can go viral. Clin Microbiol Infect. 2022;28:472–4.

  43. Baker SA, Maddox A. From COVID-19 Treatment to Miracle Cure. M/C J. 2022;25.

  44. Tanne JH. Covid-19: FDA approves use of convalescent plasma to treat critically ill patients. BMJ. 2020;368:m1256.

  45. Piechotta V, Iannizzi C, Chai KL, Valk SJ, Kimber C, Dorando E, et al. Convalescent plasma or hyperimmune immunoglobulin for people with COVID-19: a living systematic review. Cochrane Database Syst Rev. 2021. https://doi.org/10.1002/14651858.CD013600.pub4.

  46. Boutron I, Chaimani A, Meerpohl JJ, Hróbjartsson A, Devane D, Rada G, et al. The COVID-NMA Project: Building an Evidence Ecosystem for the COVID-19 Pandemic. Ann Intern Med. 2020;173:1015–7.

  47. Verdugo-Paiva F, Vergara C, Ávila C, Castro-Guevara JA, Cid J, Contreras V, et al. COVID-19 Living Overview of Evidence repository is highly comprehensive and can be used as a single source for COVID-19 studies. J Clin Epidemiol. 2022;149:195–202.

  48. Gautret P, Lagier J-C, Parola P, Hoang VT, Meddeb L, Mailhe M, et al. Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial. Int J Antimicrob Agents. 2020;56:105949.

  49. Speich B, Gloy VL, Klatte K, Gryaznov D, Taji Heravi A, Ghosh N, et al. Reliability of Trial Information Across Registries for Trials With Multiple Registrations: A Systematic Review. JAMA Netw Open. 2021;4:e2128898.

  50. Krumholz HM, Bloom T, Sever R, Rawlinson C, Inglis JR, Ross JS. Submissions and Downloads of Preprints in the First Year of medRxiv. JAMA. 2020;324:1903–5.

  51. Collins FS. NIH calls on clinical researchers to swiftly share COVID-19 results. National Institutes of Health. 2020. https://www.nih.gov/about-nih/who-we-are/nih-director/statements/nih-calls-clinical-researchers-swiftly-share-covid-19-results. Accessed 24 Mar 2021.

  52. Hurst P, Greaves S. COVID-19 Rapid Review cross-publisher initiative: What we have learned and what we are going to do next. Learn Publ. 2021;34:450–3.

  53. Bruckner T. ICTRP consultation: How should clinical trial results be reported in registries? TranspariMED. 2022. https://www.transparimed.org/single-post/ictrp-consultation. Accessed 24 Feb 2023.

  54. FAQs: Clinical Trials Registration. ICMJE. http://www.icmje.org/about-icmje/faqs/clinical-trials-registration/. Accessed 11 Aug 2020.

  55. Smart P. The evolution, benefits, and challenges of preprints and their interaction with journals. Sci Educ. 2022. https://doi.org/10.6087/kcse.269.

  56. Brierley L, Nanni F, Polka JK, Dey G, Pálfy M, Fraser N, et al. Tracking changes between preprint posting and journal publication during a pandemic. PLoS Biol. 2022;20:e3001285.

  57. Fraser N, Brierley L, Dey G, Polka JK, Pálfy M, Nanni F, et al. The evolving role of preprints in the dissemination of COVID-19 research and their impact on the science communication landscape. PLoS Biol. 2021;19:e3000959.

  58. Hartung DM, Zarin DA, Guise J-M, McDonagh M, Paynter R, Helfand M. Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications. Ann Intern Med. 2014;160:477–83.

  59. Tang E, Ravaud P, Riveros C, Perrodeau E, Dechartres A. Comparison of serious adverse events posted at ClinicalTrials.gov and published in corresponding journal articles. BMC Med. 2015;13:189.

  60. Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P. Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals. PLoS Med. 2013;10:e1001566 (discussion e1001566).

  61. Talebi R, Redberg RF, Ross JS. Consistency of trial reporting between ClinicalTrials.gov and corresponding publications: one decade after FDAAA. Trials. 2020;21:675.

  62. Talpos S. Is Hydroxychloroquine Making Covid-19 Clinical Trials Harder? Cambridge: Undark; 2020.

  63. Herper M, Riglin E. Data show panic, disorganization dominate the study of Covid-19 drugs. Boston: STAT News; 2020.

Acknowledgements

We would like to thank Samruddhi Yerunkar, Susanne Schorr, and Lana Saksone for their supporting contributions to the project. We also thank our colleagues who provided useful comments on our study protocol and/or manuscript: members of the DataLab at the University of Oxford and QUEST Center for Responsible Research at the Berlin Institute of Health at Charité Universitätsmedizin Berlin.

Funding

The DIRECCT project is part of the CEOsys (COVID-19 Evidence Ecosystem) project funded within the Network of University Medicine (Nationales Forschungsnetzwerk der Universitätsmedizin—NUM) by the Federal Ministry of Education and Research of Germany (Bundesministerium für Bildung und Forschung—BMBF). FKZ: 01KX2021. During the course of this project NJD was also employed on a grant from The Good Thinking Society to work on trials transparency research. The funders had no involvement in the study design, analysis, or writing of the manuscript nor the decision to submit.

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization: MS-H, PG, BG, DS, and NJD. Data curation: MS-H, and NJD. Formal analysis: MS-H, and NJD. Funding acquisition: BG and DS. Investigation: MS-H, MP-J, NH, TAS, JS, PG, BGC, and NJD. Methodology: MS-H, PG, and NJD. Project administration: MS-H and NJD. Resources: BGC. Software: MS-H, BGC, and NJD. Supervision: BG, DS, and NJD. Validation: MS-H and NJD. Visualization: MS-H and NJD. Writing—original draft: MS-H and NJD. Writing—review & editing: MS-H, MP-J, NH, TAS, JS, PG, BCG, BG, DS, NJD. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Nicholas J. DeVito.

Ethics declarations

Ethics approval and consent to participate

This study used only publicly available, non-sensitive data with no human participants and was therefore not required to undergo ethical review at the University of Oxford.

Consent for publication

Not applicable.

Competing interests

The authors report no competing interests related to the submitted work. Outside of the submitted work, BG reports receiving research funding from the Laura and John Arnold Foundation, the NHS National Institute for Health Research (NIHR), the NIHR School of Primary Care Research, the NIHR Oxford Biomedical Research Centre, the Mohn- Westlake Foundation, NIHR Applied Research Collaboration Oxford and Thames Valley, the Wellcome Trust, the Good Thinking Foundation, Health Data Research UK, the Health Foundation, the World Health Organization, UKRI, Asthma UK, the British Lung Foundation, Elsevier, and the Longitudinal Health and Wellbeing strand of the National Core Studies program; he also receives personal income from speaking and writing for lay audiences on the misuse of science and is a co-founder of the AllTrials Campaign; NJD reports funding from the Naji Foundation, EU Horizon WIDERA work program, World Health Organization, and Fetzer Franklin Memorial Fund.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

STROBE Statement—Checklist of items that should be included in reports of cross-sectional studies.

Additional file 2.

Appendix A which details notable edge cases in our assessments for this project and Appendix B which contains supplementary figures and tables.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Salholz-Hillel, M., Pugh-Jones, M., Hildebrand, N. et al. Dissemination of Registered COVID-19 Clinical Trials (DIRECCT): a cross-sectional study. BMC Med 21, 475 (2023). https://doi.org/10.1186/s12916-023-03161-6

Keywords