Public availability of results of observational studies evaluating an intervention registered at ClinicalTrials.gov
© Baudart et al. 2016
Received: 12 August 2015
Accepted: 5 January 2016
Published: 28 January 2016
Observational studies are essential for assessing safety. The aims of this study were to evaluate whether results of observational studies evaluating an intervention with safety outcome(s) registered at ClinicalTrials.gov were published and, if not, whether they were available through posting on ClinicalTrials.gov or the sponsor website.
We identified a cohort of observational studies with safety outcome(s) registered on ClinicalTrials.gov after October 1, 2007, and completed between October 1, 2007, and December 31, 2011. We systematically searched PubMed for a publication, as well as ClinicalTrials.gov and the sponsor website for results. The main outcomes were the time to the first publication in journals and to the first public availability of the study results (i.e. published or posted on ClinicalTrials.gov or the sponsor website). For all studies with results publicly available, we evaluated the completeness of reporting (i.e. reported with the number of events per arm) of safety outcomes.
We identified 489 studies; 334 (68 %) were partially or completely funded by industry. Results for only 189 (39 %; i.e. 65 % of the total target number of participants) were published, although at least 30 months had elapsed since study completion. When searching other data sources, we obtained the results for 53 % (n = 158; i.e. 93 % of the total target number of participants) of unpublished studies; 31 % (n = 94) were posted on ClinicalTrials.gov and 21 % (n = 64) on the sponsor website. As compared with non-industry-funded studies, industry-funded study results were less likely to be published but not less likely to be publicly available. Of the 242 studies with a primary outcome recorded as a safety issue, all these outcomes were adequately reported for 86 % (114/133) of studies with results available in a publication, 91 % (62/68) with results on ClinicalTrials.gov, and 80 % (33/41) with results on the sponsor website.
Only 39 % of observational studies evaluating an intervention with safety outcome(s) registered at ClinicalTrials.gov had their results published, although at least 30 months had elapsed since study completion. The registration of these observational studies allowed searching other sources (results posted at ClinicalTrials.gov and on sponsor websites) and obtaining results for half of the unpublished studies and 93 % of the total target number of participants.
Keywords: Observational studies; Trial registration; Waste in research
Failure to provide access to research results is a key source of wasted research. The results of more than 50 % of clinical trials are never published, and publication is more likely for clinical trials with statistically significant (positive) results than with negative results [2–6]. Lack of availability of research findings has serious consequences; it affects the results of systematic reviews and meta-analyses and distorts the evidence used for the prioritization of research questions and for clinical and policy decision-making [7–9]. In response to this waste, in 2005, the International Committee of Medical Journal Editors required the registration of all clinical trials, before study inception, in a publicly accessible register such as ClinicalTrials.gov [10, 11]. In 2007, the US Food and Drug Administration Amendments Act also required the posting at ClinicalTrials.gov of results for all phase II to IV trials of drugs, biologic treatments and devices having at least one site in the United States, no later than 1 year after the date of final collection of data for the pre-specified primary outcome. In Europe, a new law to be implemented in 2016 will require that all clinical trials be registered on a publicly accessible European Union clinical-trials register before they can begin, with a summary of trial results posted within a year after the end of the trial.
These policies have had an important impact and have increased research value. Indeed, thanks to trial registration, unpublished studies can be identified, and their results can be made available through posting. However, registration is currently mandatory only for clinical trials.
Observational studies such as cohort and case–control studies are important for assessing intervention effects [13–16]. These designs are particularly useful when randomized controlled trials (RCTs) are not feasible or when assessing rare adverse events and long-term effectiveness. Such studies represent a large part of the published literature and outnumber published RCTs. Nevertheless, prospective registration of observational studies is not currently required. Despite not being mandatory, more than 35,000 observational studies are registered at ClinicalTrials.gov.
Our hypothesis was that registration of observational studies evaluating an intervention at ClinicalTrials.gov is important for increasing research value because it allows for identifying unpublished studies and obtaining unpublished results.
The main objectives of this study were (1) to evaluate whether results of observational studies evaluating an intervention with safety outcome(s) registered at ClinicalTrials.gov were published and, if not, whether they were available through posting on ClinicalTrials.gov or the sponsor website; (2) to evaluate and compare the time to publication and to public availability of results after searching other sources by study funding source; and (3) to evaluate the completeness of reporting of the outcomes designated as safety issues.
We identified a cohort of observational studies evaluating an intervention with safety outcome(s) registered at ClinicalTrials.gov.
Search for relevant studies
We searched ClinicalTrials.gov on April 14, 2014, by using “completed” for recruitment, “observational studies” for study type, a date of first registration between October 1, 2007, and December 31, 2011, and “has an outcome measure designated as a safety issue” in the safety issue field of ClinicalTrials.gov. We chose October 2007 because modifications were made at that time to the design-specific data elements used for registering observational studies on ClinicalTrials.gov. These changes were strongly influenced by protocol-related items in the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement.
Observational studies are defined by ClinicalTrials.gov as studies in which “investigators assess health outcomes in groups of participants according to a protocol or research plan. Participants may receive interventions, which can include medical products, such as drugs or devices, or procedures as part of their routine medical care, but participants are not assigned to specific interventions by the investigator (as in a clinical trial)” (https://clinicaltrials.gov/ct2/about-studies/glossary#O).
Identification of relevant observational studies
Among the studies retrieved, we identified all prospective observational studies assessing interventions that had a primary completion date (i.e. the date when the final patient was examined or received an intervention for the purposes of final collection of data for the primary outcome) between October 1, 2007, and December 31, 2011. We defined interventions as all pharmacologic and non-pharmacologic treatments (pharmaceutical drugs, surgery, education, rehabilitation, etc.) aimed at improving the participants’ health. We chose this time period to be able to investigate public availability (i.e. at least 24 months after the study primary completion). We excluded studies with a primary completion date after December 2011; studies assessing genetics, predictors, or risk factors rather than interventions; studies assessing the pharmacodynamics or pharmacokinetics of drugs; phase 0, I, II, I/II and II/III studies; studies of healthy volunteers; and retrospective and randomized studies. We also excluded studies for which the primary completion date was not reported at ClinicalTrials.gov. The selection process was performed by one researcher, and all records included were independently verified by a second researcher.
Extraction of data from ClinicalTrials.gov
We downloaded from ClinicalTrials.gov the following data concerning the characteristics of the studies: clinical trial number (NCT), title, study design (defined with the observational model: cohort, case–control, case-only, case crossover or other; and time perspective: prospective, cross-sectional or other), enrollment (i.e. sample size), first received date, primary completion date, results first received date, condition and intervention under study, outcome measures, locations of recruitment and funding source. The funding source was classified at ClinicalTrials.gov as “NIH” (National Institutes of Health), “US federal”, “industry”, and other non-industry organizations (universities, hospitals, foundations, and other government and other non-industry organizations). We secondarily categorized funding sources as non-industry (i.e. funded by NIH, US federal, other non-industry organizations) or industry (i.e. partially or totally funded by industry).
One researcher classified the following information from the full ClinicalTrials.gov record: medical field, type of intervention (drug, device, procedure/surgery, other), location of recruitment (Europe, North America, South America, Africa, Asia, Oceania) and study purpose as safety, efficacy or both. As a quality assurance procedure, a second researcher independently verified 50 % of the data.
Publication of study results in journals
For each observational study identified, we systematically searched for a publication reporting the study results (search date June 2014). First, we examined the “publication” field at ClinicalTrials.gov to search for a citation for an article that described the study results. If no citation was reported, we searched MEDLINE via PubMed by using the ClinicalTrials.gov identification number (NCT). If no publication was identified, we searched MEDLINE via PubMed by using keywords for the intervention under study and the condition. One researcher screened all citations retrieved up to the primary completion date registered at ClinicalTrials.gov and selected all citations corresponding to the selected study. A second researcher independently performed the search on PubMed for all studies for which no publication was identified; any discrepancies were discussed until consensus was reached.
Finally, if no publication was identified, we contacted the sponsor or the principal investigator. We searched the “additional information” field of ClinicalTrials.gov for a link to the sponsor website for contacting the sponsor. If no link was available, we recorded the principal investigator’s name from the “contacts and locations” fields of ClinicalTrials.gov and searched PubMed and Google to identify their email address. The email reminded the recipient of the study NCT number and inquired about the study publication, presentation at a congress, and plans for future publication (Additional file 1: Appendix 1). We systematically sent two reminders. If no answer was received, the study findings were considered unpublished. When the study results were reported only as an abstract or poster presented at a scientific meeting, we considered that the results were not available; previous evidence has shown that the quality of abstracts presented at meetings is suboptimal and that they frequently include non-final results [19, 20].
To determine whether the publication(s) corresponded to the registered observational study, we retrieved the full-text article for all citations selected and assessed from the abstract and the full text, if needed, whether a combination of information, including description of interventions and conditions, population, location, responsible party, number of participants, primary outcome measures, and primary funding sponsor partly or completely matched the information at ClinicalTrials.gov. We only selected articles that reported the results of the study. All cases were assessed by a second independent researcher, and disagreements were resolved by consensus.
If the publication revealed that the study did not actually fulfill the inclusion criteria but this could not be detected from the ClinicalTrials.gov record, the study was still included in the analysis and considered published, to avoid bias. This occurred for eight studies (randomized, n = 6; retrospective, n = 1; phase 0/I/II, n = 1).
Search of results posted on ClinicalTrials.gov and the sponsor website
For each observational study identified, we assessed whether unpublished results were publicly available from other sources. For this purpose, (1) we searched whether results were posted at ClinicalTrials.gov and (2) systematically searched the sponsor website to identify whether the results were available. For this, we searched ClinicalTrials.gov for a link to the sponsor website. If no link was available, we searched Google using the sponsor name to identify the sponsor website. Then, we searched the website for a section dedicated to access to study results and used the study NCT number to find the study results.
The main outcomes were the time to the first publication in journals, and the time to the first public availability of the study results.
The time to the first publication in journals was the time (in months) that elapsed between the primary completion date of the study and the publication. The study primary completion date was obtained from ClinicalTrials.gov. The publication date was the first date an article was made available online ahead of print (i.e. epub date indexed on PubMed) or published in a paper-printed version.
The time to the first public availability of the study results was the time (in months) between the primary completion of the study and the first public availability of the study results by publication or posting on ClinicalTrials.gov or the sponsor website. When results were available in different sources, we used the first date.
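As an illustration only, the outcome definition above can be sketched in a few lines of Python. All names, dates, and the helper functions below are hypothetical (the paper does not describe its code); the sketch shows the key logic: take the earliest date across the three sources if any exists, otherwise censor at the search date.

```python
from datetime import date

def months_between(start, end):
    """Whole calendar months elapsed from start to end (day of month ignored)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def first_availability(primary_completion, pub=None, ctgov=None, sponsor=None,
                       censor=date(2014, 6, 1)):
    """Return (months, event): event=1 if results became publicly available in
    any source (publication, ClinicalTrials.gov posting, sponsor website),
    taking the earliest date; otherwise censor at the search date."""
    dates = [d for d in (pub, ctgov, sponsor) if d is not None]
    if dates:
        return months_between(primary_completion, min(dates)), 1
    return months_between(primary_completion, censor), 0

# Hypothetical study: completed Dec 2010, results posted on ClinicalTrials.gov
# in Mar 2012 and published in Aug 2012 -> first availability is the posting.
print(first_availability(date(2010, 12, 15),
                         pub=date(2012, 8, 1), ctgov=date(2012, 3, 10)))
```

A study with no results in any source would instead be censored at June 1, 2014, the search date used in the paper.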
Secondary outcomes were the proportion of study results publicly available (via publication or posting on ClinicalTrials.gov or the sponsor website) at 12 and 24 months after primary completion.
Completeness of reporting of outcomes designated as safety issues
For all studies with results publicly available (via publication or posting on ClinicalTrials.gov or the sponsor website), we evaluated the completeness of reporting of all primary outcome measure(s) designated as a “safety issue” and, when not available, all secondary outcome measure(s) designated as a safety issue, as recorded at ClinicalTrials.gov. Then, for each outcome recorded, we systematically checked in the available report (i.e. the publication or, for unpublished studies, the results posted on ClinicalTrials.gov or the sponsor website) whether the results were adequately reported (i.e. reported with the number of events per arm), partially reported (i.e. reported with the number of events pooled or only mentioned), or not reported. When several publications were available, we selected the publication with the results most completely reported.
For all studies with results publicly available, we determined the proportion of studies with all primary outcomes designated as safety issues adequately reported and when not available, the proportion of studies with all secondary outcomes adequately reported.
For this analysis, we excluded studies when the results were not published or available in English.
Quantitative variables are described with median (quartile 1–3; Q1–Q3) and qualitative variables with number and percentage. For each outcome, we assessed Kaplan–Meier estimates of the cumulative incidence of studies (with 95 % confidence intervals (CIs)) at 12 and 24 months. All studies without results available in the different sources were censored on June 1, 2014 (i.e. the search date). Cumulative incidence curves estimated by the Kaplan–Meier method are displayed globally and by funding source (partially or completely industry-funded; non-industry-funded). Univariate and multivariate Cox proportional hazards regression analyses were used to calculate adjusted hazard ratios (HRs) by funding source (with 95 % CIs and P values from the Wald test). The following confounding variables were entered in the multivariate Cox model: type of intervention (drug, device, procedure/surgery, or other), location of recruitment (Africa/Asia/South America or Europe/Australia/North America), objective of the study (safety or both efficacy and safety, or only efficacy), sample size, and registration period (before the start date of the study/between the start date of the study and the primary study completion/after the primary study completion date).
Statistical analysis involved use of SAS v9.4 (SAS Inst. Inc., Cary, NC) and R software (v3.1.2) (http://www.R-project.org, the R Foundation for Statistical Computing, Vienna, Austria).
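The Kaplan–Meier estimate of the cumulative incidence used above can be illustrated with a minimal hand-rolled estimator. The paper's analyses were run in SAS and R; this Python sketch and its six-study dataset are purely hypothetical and serve only to show how censored follow-up times produce the cumulative incidence 1 − S(t):

```python
def km_cumulative_incidence(times, events):
    """Kaplan-Meier sketch: return (time, cumulative incidence) pairs,
    i.e. 1 - S(t) evaluated at each event time. events: 1 = result became
    available (publication/posting), 0 = censored at the search date."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = c = 0  # events and censorings at time t
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                d += 1
            else:
                c += 1
            i += 1
        if d:
            surv *= 1 - d / n_at_risk  # KM product-limit step
            curve.append((t, 1 - surv))
        n_at_risk -= d + c
    return curve

# Hypothetical cohort: 4 studies with results at 10/14/20/30 months,
# 2 still unavailable and censored at 36 and 49 months.
months = [10, 14, 20, 30, 36, 49]
available = [1, 1, 1, 1, 0, 0]
for t, inc in km_cumulative_incidence(months, available):
    print(t, round(inc, 3))
```

With no censoring before the last event, the curve reduces to the simple proportion of studies with results available; censoring is what makes the Kaplan–Meier machinery (and the Cox models for the funding-source comparison) necessary here.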
Study selection and characteristics
Characteristics of the selected studies (n = 489) by funding source: industry funded (totally or partially; n = 334) versus non-industry funded (n = 155). Reported characteristics include the registration period (before the study start date; between the start date and the study primary completion date; after the study primary completion date) and study purpose (safety, or both safety and efficacy).
Publication of study results
The median time between the study primary completion date and our search was 49.0 (Q1–Q3, 41.0–62.0) months; for all studies, at least 30 months had elapsed since the study primary completion date. Among the 489 studies, we identified a publication via the citation reported at ClinicalTrials.gov for 75 studies and via a systematic search of PubMed for 99 studies. For the 314 remaining studies without an identified publication, we obtained an email address for and contacted the sponsor or principal investigator of 241 studies; 52 responded and 15 provided an article (Additional file 1: Appendix 2).
Posting of study results on ClinicalTrials.gov and on the sponsor website
When searching the results section of ClinicalTrials.gov, we obtained the results for 31 % (n = 94) of the unpublished studies (i.e. 147,593 cumulative target participants). When searching sponsor websites, we obtained the results for 21 % (n = 64) of the unpublished studies (i.e. 195,291 cumulative target participants), of which 39 could be obtained through a link to the sponsor website available in ClinicalTrials.gov. All study results obtained from sponsor websites came from Bayer (n = 16), GSK (n = 12), or Novo Nordisk (n = 36).
Overall public availability of study results
Completeness of reporting (Table 2)
Table 2 shows the completeness of reporting of primary and secondary outcomes designated as a “safety issue”, by source of results: published (primary outcomes, n = 133; secondary outcomes, n = 47), not published but posted on ClinicalTrials.gov (n = 68; n = 26), not published but posted on the sponsor website (n = 41; n = 23), and total (n = 242; n = 96). For each source, studies were classified as having all outcomes adequately reported, at least one outcome partially reported or not reported, or no outcome reported. Secondary outcomes were assessed only for studies with no primary outcome designated as a safety issue.
We evaluated the public availability of study results in a cohort of 481 observational studies evaluating an intervention with safety outcome(s) registered at ClinicalTrials.gov and completed more than 30 months earlier. Only 39 % (n = 189) had results published, corresponding to 65 % of the total target number of participants. The cumulative percentages of studies with results published at 12 and 24 months after primary completion were 8.2 % (95 % CI, 5.7–10.6) and 21.3 % (17.6–24.9), respectively. However, when searching other data sources (results posted on ClinicalTrials.gov and sponsor websites), we obtained the results for about half of the studies with unpublished results (n = 158, 53 %), corresponding to 93 % of the total target number of participants. Further, the median sample size of studies with unpublished results posted on ClinicalTrials.gov and sponsor websites was high.
To our knowledge, this is the first large study evaluating the public availability of the results of observational studies registered at ClinicalTrials.gov, in terms of publication or posting on ClinicalTrials.gov or the sponsor website. Most evidence on the lack of availability of research results has focused on the publication of clinical trials and the posting of results on ClinicalTrials.gov [5, 6, 21–24]. Large cohorts of registered clinical trials showed that the results of only 46–63 % are published [21, 23]. In the field of diagnostic studies, results for 54 % of studies completed at least 18 months earlier were published.
Our study has several important implications. Our findings clearly illustrate the need for a change in policy, with a request to also prospectively register observational studies. In many situations, observational studies are the only data available because RCTs are not appropriate or feasible. The number of published meta-analyses including observational studies in health has increased substantially, and these meta-analyses can be used to inform clinical decision-making and public health policy [18, 26]. Registration of observational studies is debated, as shown by recent editorials published in major medical journals [17, 27–32]. However, much of the rationale for the prospective registration of clinical trials [18, 33] also applies to the registration of observational studies. Although registration at ClinicalTrials.gov does not guarantee that trial results will be published in a timely manner [22, 34–36], it makes the existence of the study known and allows for searching for unpublished data and for exploring publication bias, outcome reporting bias, and fidelity to the protocol [37, 38].
Our results also highlight the need to reconsider the strategy used to identify research findings. Indeed, searching the results section at ClinicalTrials.gov as well as sponsor websites doubled the number of studies with results available and allowed access to the data of 93 % of the total target number of participants. It is consequently very important that systematic reviewers search for these data in trial registries and on sponsor websites. Previous studies comparing posted and published results for clinical trials showed that results are more completely reported in registries than in publications [21, 23] and that discrepancies between the ClinicalTrials.gov results database and matching publications are common. However, it is unclear which source is more accurate.
Publication of research findings in peer-reviewed journals is considered essential for disseminating research results. However, the publication process is long and complicated and requires a substantial investment from the sponsor and investigator. Lack of or delayed publication could be related to the lack of incentives to disseminate negative results, time constraints, limited resources, changing interests, or difficulties and failure in having results published. Sponsors may prefer posting results on a website to investing in publication in a peer-reviewed journal. However, we question why sponsors post results on their own websites and not on ClinicalTrials.gov: ClinicalTrials.gov performs quality control of posted results, which is not the case for sponsor websites.
Finally, ClinicalTrials.gov offers researchers the opportunity to provide access to their data if they decide not to publish them in a peer-reviewed journal. For clinical trials, the posting of results is a requirement. The World Health Organization is calling for a strict timeline for public disclosure of clinical trial results and published a new Statement on the Public Disclosure of Clinical Trial Results, which specifies that study results be reported at least 30 months after a study is completed. This requirement should also be extended to observational studies.
Our study has some potential limitations. First, because registration of observational studies is not mandatory, we could explore the public availability of results only for observational studies registered at ClinicalTrials.gov. However, the publication rate and public availability of results are not likely to be higher for observational studies that were not registered. Second, our search was performed only in MEDLINE via PubMed, so we may have missed some publications. However, MEDLINE is the largest database of biomedical journals and is the source that nearly all physicians and policymakers use to access clinical trial findings. Further, a recent study showed that searching Embase has a modest impact on the results of systematic reviews, and most studies evaluating RCT publication did not search Embase [21, 34]. Additionally, we systematically contacted the sponsor or investigator to check whether the study was published. Finally, we used the data recorded at ClinicalTrials.gov, but this information is not always accurate, and ClinicalTrials.gov added a database of summary results allowing for reporting results of observational studies according to the STROBE statement only in September 2008.
In conclusion, about 39 % of observational studies evaluating an intervention with a safety outcome and registered at ClinicalTrials.gov had their results published. Searching for unpublished data allowed access to more than two-thirds of the study results and to 93 % of the total target number of participants. Given the potentially important benefit of requiring the registration of observational studies, this practice should be mandated by research regulations.
Availability of data and materials
The data will be made available on Dryad.
No source of funding was obtained for this study.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
1. Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, et al. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.
2. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010;14(8):iii, ix–xi, 1–193.
3. Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K. Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev. 2009;1:MR000006.
4. Rising K, Bacchetti P, Bero L. Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation. PLoS Med. 2008;5(11):e217.
5. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008;3(8):e3081.
6. Dwan K, Gamble C, Williamson PR, Kirkham JJ. Systematic review of the empirical evidence of study publication bias and outcome reporting bias: an updated review. PLoS One. 2013;8(7).
7. Liberati A. Need to realign patient-oriented and commercial and academic research. Lancet. 2011;378(9805):1777–8.
8. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358(3):252–60.
9. Hart B, Lundh A, Bero L. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ. 2012;344:d7202.
10. DeAngelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. JAMA. 2004;292(11):1363–4.
11. De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. Lancet. 2004;364(9438):911–2.
12. Zarin DA, Tse T. Moving toward transparency of clinical trials. Science. 2008;319(5868):1340–2.
13. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet. 2007;370(9596):1453–7.
14. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA. 2000;283(15):2008–12.
15. Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ. 1996;312(7040):1215–8.
16. Psaty BM, Vandenbroucke JP. Opportunities for enhancing the FDA guidance on pharmacovigilance. JAMA. 2008;300(8):952–4.
17. Dal-Re R, Ioannidis JP, Bracken MB, Buffler PA, Chan AW, Franco EL, et al. Making prospective registration of observational research a reality. Sci Transl Med. 2014;6(224):224cm221.
18. Williams RJ, Tse T, Harlan WR, Zarin DA. Registration of observational studies: is it time? CMAJ. 2010;182(15):1638–42.
19. Krzyzanowska MK, Pintilie M, Brezden-Masley C, Dent R, Tannock IF. Quality of abstracts describing randomized trials in the proceedings of American Society of Clinical Oncology meetings: guidelines for improved reporting. J Clin Oncol. 2004;22(10):1993–9.
20. Booth CM, Le Maitre A, Ding K, Farn K, Fralick M, Phillips C, et al. Presentation of nonfinal results of randomized controlled trials at major oncology meetings. J Clin Oncol. 2009;27(24):3938–44.
21. Ross JS, Tse T, Zarin DA, Xu H, Zhou L, Krumholz HM. Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. BMJ. 2012;344:d7292.
22. Nguyen TA, Dechartres A, Belgherbi S, Ravaud P. Public availability of results of trials assessing cancer drugs in the United States. J Clin Oncol. 2013;31(24):2998–3003.
23. Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P. Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals. PLoS Med. 2013;10(12):e1001566.
24. Gill CJ. How often do US-based human subjects research studies register on time, and how often do they post their results? A statistical analysis of the ClinicalTrials.gov database. BMJ Open. 2012;2:4.
25. Korevaar DA, Bossuyt PM, Hooft L. Infrequent and incomplete registration of test accuracy studies: analysis of recent study reports. BMJ Open. 2014;4(1):e004596.
26. Shrier I, Boivin JF, Steele RJ, Platt RW, Furlan A, Kakuma R, et al. Should meta-analyses of interventions include observational studies in addition to randomized controlled trials? A critical examination of underlying principles. Am J Epidemiol. 2007;166(10):1203–9.
27. Should protocols for observational research be registered? Lancet. 2010;375(9712):348.
28. Peat G, Riley RD, Croft P, Morley KI, Kyzas PA, Moons KG, et al. Improving the transparency of prognosis research: the role of reporting, data sharing, registration, and protocols. PLoS Med. 2014;11(7):e1001671.
29. Ioannidis JP. The importance of potential studies that have not existed and registration of observational data sets. JAMA. 2012;308(6):575–6.
30. Loder E, Groves T, Macauley D. Registration of observational studies. BMJ. 2010;340:c950.
31. Meyer RM. Evolution of clinical trials registries. J Clin Oncol. 2012;30(2):131–3.
32. PLOS Medicine Editors. Observational studies: getting clear about transparency. PLoS Med. 2014;11(8):e1001711.
33. Dickersin K, Min YI, Meinert CL. Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards. JAMA. 1992;267(3):374–8.
34. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.Gov: a cross-sectional analysis. PLoS Med. 2009;6(9):e1000144.
35. Maruani A, Boutron I, Baron G, Ravaud P. Impact of sending email reminders of the legal requirement for posting results on ClinicalTrials.gov: cohort embedded pragmatic randomized controlled trial. BMJ. 2014;349:g5579.
36. Rasmussen N, Lee K, Bero L. Association of trial registration with the results and conclusions of published trials of new oncology drugs. Trials. 2009;10:116.
37. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA. 2009;302(9):977–84.
38. Hartung DM, Zarin DA, Guise JM, McDonagh M, Paynter R, Helfand M. Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications. Ann Intern Med. 2014;160(7):477–83.
39. Halladay CW, Trikalinos TA, Schmid IT, Schmid CH, Dahabreh IJ. Using data sources beyond PubMed has a modest impact on the results of systematic reviews of therapeutic interventions. J Clin Epidemiol. 2015;68(9):1076–84.