Clinical trial registration and reporting: a survey of academic organizations in the United States

Background Many clinical trials conducted by academic organizations are not published, or are not published completely. Following the US Food and Drug Administration Amendments Act of 2007, “The Final Rule” (compliance date April 18, 2017) and a National Institutes of Health policy clarified and expanded trial registration and results reporting requirements. We sought to identify policies, procedures, and resources to support trial registration and reporting at academic organizations. Methods We conducted an online survey from November 21, 2016 to March 1, 2017, before organizations were expected to comply with The Final Rule. We included active Protocol Registration and Results System (PRS) accounts classified by ClinicalTrials.gov as a “University/Organization” in the USA. PRS administrators manage information on ClinicalTrials.gov. We invited one PRS administrator to complete the survey for each organization account, which was the unit of analysis. Results Eligible organization accounts (N = 783) included 47,701 records (e.g., studies) in August 2016. Participating organizations (366/783; 47%) included 40,351/47,701 (85%) records. Compared with other organizations, Clinical and Translational Science Award (CTSA) holders, cancer centers, and large organizations were more likely to participate. A minority of accounts have a registration (156/366; 43%) or results reporting policy (129/366; 35%). Of those with policies, 15/156 (11%) and 49/156 (35%) reported that trials must be registered before institutional review board approval is granted or before beginning enrollment, respectively. Few organizations use computer software to monitor compliance (68/366; 19%). One organization had penalized an investigator for non-compliance. Among the 287/366 (78%) accounts reporting that they allocate staff to fulfill ClinicalTrials.gov registration and reporting requirements, the median number of full-time equivalent staff is 0.08 (interquartile range = 0.02–0.25). 
Because of non-response and social desirability, this could be a “best case” scenario. Conclusions Before the compliance date for The Final Rule, some academic organizations had policies and resources that facilitate clinical trial registration and reporting. Most organizations appear to be unprepared to meet the new requirements. Organizations could enact the following: adopt policies that require trial registration and reporting, allocate resources (e.g., staff, software) to support registration and reporting, and ensure there are consequences for investigators who do not follow standards for clinical research. Electronic supplementary material The online version of this article (10.1186/s12916-018-1042-6) contains supplementary material, which is available to authorized users.

To help participants enroll in trials, improve access to information, and reduce bias, authors have long proposed registering all trials prospectively [22][23][24][25][26][27]. The Food and Drug Administration Modernization Act of 1997 led to the creation of ClinicalTrials.gov, a publicly accessible database maintained by the National Library of Medicine (NLM), which launched in 2000 [28]. In 2004, the International Committee of Medical Journal Editors (ICMJE) announced that trials initiated from 2005 would have to be registered to be considered for publication [29,30]. Title VIII of the Food and Drug Administration Amendments Act of 2007 (FDAAA) [31] required that certain trials of drugs, biologics, and medical devices be registered and that results for trials of approved products be posted on ClinicalTrials.gov. The FDAAA also authorized the Food and Drug Administration (FDA) to issue fines for non-compliance, currently up to $11,569 per trial per day [32]. "The Final Rule," which took effect on January 18, 2017, clarified and expanded requirements for registration and reporting (Box 1) [33,34]; organizations were expected to be in compliance by April 18, 2017. In a complementary policy, the National Institutes of Health (NIH) issued broader requirements that apply to all trials funded by the NIH, including early trials and trials of behavioral interventions [35,36].

Box 1. Definitions

• Application programming interface (API): A set of methods to facilitate communication among software components, as described in Section 10 of the PRS User's Guide (https://prsinfo.clinicaltrials.gov/prs-users-guide.html#section10)
• Cancer center: An organization that specializes in the diagnosis and treatment of cancer, including organizations designated by the National Cancer Institute (see "National Cancer Institute cancer center")
• Clinical trial ("trial"): A study in which human participants are assigned prospectively to receive one or more interventions (i.e., diagnostic, therapeutic, or other types) to evaluate the effects of the intervention(s) on health-related outcomes. For example, see [34,36]
• Clinical and Translational Science Award (CTSA)
• Institutional review board (IRB): A group of persons with responsibility for ensuring the protection of human subjects involved in research. For example, see [58-
There is little evidence about how academic organizations support trial registration and reporting, but some evidence suggests that they are unprepared to meet these requirements. For example, academic organizations have performed worse than industry in registering trials prospectively [37,38] and reporting results [39][40][41][42][43][44][45][46].

Methods
Between November 21, 2016 and March 1, 2017, we conducted an online survey of academic organizations in the USA. We surveyed administrators who are responsible for maintaining organization accounts on ClinicalTrials.gov. For each eligible ClinicalTrials.gov account, we asked one administrator to describe the policies and procedures and the available resources to support trial registration and reporting at their organization (Box 2).

Identifying eligible PRS accounts
The online system used to enter information in the ClinicalTrials.gov database is called the Protocol Registration and Results System (PRS). Each study registered on ClinicalTrials.gov is associated with a "record" of that study, and each record is assigned to one PRS organization account. A record may or may not include study results. A single organization, such as a university or health system, might register trials using one or many accounts. For example, "Yale University" is one account; by comparison, "Harvard Medical School" and "Harvard School of Dental Medicine" are each separate accounts.
We used the PRS account as the unit of analysis because accounts related to the same organization often represent schools or departments that have separate policies and procedures related to trial registration and reporting. Furthermore, we are not aware of a reliable method to associate individual accounts with organizations. For example, the "Johns Hopkins University" account includes mostly records from the Johns Hopkins University School of Medicine. Investigators at Johns Hopkins University also register trials using the accounts "Johns Hopkins Bloomberg School of Public Health," "Johns Hopkins Children's Hospital," and "Sidney Kimmel Comprehensive Cancer Center." Schools and hospitals related to Johns Hopkins University have distinct policies, faculties, administrative staff, and institutional review boards (IRBs).

Box 1 (continued)
• International Committee of Medical Journal Editors (ICMJE) policy: To be considered for publication, clinical trials must be registered in a public registry before enrolling participants [29,30]. Reports of clinical trials must include a data sharing statement [57].

Box 2. Survey questions

Policies
• Which trials are covered by these policies?
• When did these policies come into effect?
• Do these policies describe processes for investigators joining and leaving the organization?
• Are there penalties for investigators who do not register their trials or report their results?

Staffing and support
• Which functions do staff members perform (e.g., entering results, checking records, educating investigators)?
• How many staff members are assigned to support trial registration and results reporting? How much time do they spend on these activities?
• Are there plans to hire more staff in the future?

Monitoring systems
• Does the organization have a system for monitoring trial registration and results reporting? For notifying investigators when results are due?
• Does an IRB check whether trials are registered and reported?
We included all "active" accounts categorized by ClinicalTrials.gov as a "University/Organization" in the USA. We received a spreadsheet from the NLM with the number of records in each eligible account on August 4, 2016, and we received PRS administrator contact information from the NLM on September 28, 2016 and December 12, 2016.

Survey design
We developed a draft survey based on investigators' content knowledge and evidence from studies that were known to us at the time. We organized questions into three domains: (1) organization characteristics, (2) registration and results policies and practices, and (3) staff and resources. We also invited participants to describe any compliance efforts that our questions did not cover. We then piloted the survey among 14 members of the National Clinical Trials Registration and Results Reporting Taskforce. The final survey used skip logic so that participants saw only those questions that were relevant based on their previous answers. Responses were saved automatically, and participants could return to the survey at any time; this allowed participants to discuss their answers with organizational colleagues before submitting. We conducted the survey using Qualtrics software (www.qualtrics.com/); a copy is available as a Word document (Additional file 1) and on the Qualtrics website (http://bit.ly/2tCSqyl).

Participant recruitment
One or more persons, called "PRS administrators" by ClinicalTrials.gov, may add or modify records in each account. Some PRS administrators are employed specifically to work on ClinicalTrials.gov, but many PRS administrators have little or no time budgeted by their organizations to work on ClinicalTrials.gov.
For each eligible account, we created a unique internet address (URL) which we emailed in an invitation letter to one administrator. For accounts with more than one administrator, we first contacted all administrators and asked them to select the appropriate administrator to complete this survey; then, we sent the survey to the nominated administrator. If an administrator did not complete the survey, EMW sent at least two reminders from his university email account after approximately 2 weeks and 4 weeks. For accounts with multiple administrators, we emailed all administrators if the designated administrator did not respond after two reminders. We instructed administrators associated with multiple accounts to complete a separate survey for each account.
Participants indicated their consent by continuing past the opening page and by completing the survey.

Analyses
To analyze the results, we first excluded accounts that did not complete three required questions about whether they had: (1) a registration policy, (2) a results reporting policy, and (3) computer software to manage their records. We then calculated descriptive statistics using SPSS 24. For categorical data, we calculated the number and proportion of accounts that selected each response. For continuous data, we calculated the median and interquartile range (IQR) depending on the distribution of responses.
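The study computed these statistics in SPSS; as a minimal illustration with hypothetical values (not study data), the same median and IQR can be reproduced with Python's standard library:

```python
import statistics

# Hypothetical FTE allocations reported by accounts (not study data)
fte = [0.02, 0.05, 0.08, 0.10, 0.25, 0.50, 1.00]

median = statistics.median(fte)

# quantiles() returns the three quartile cut points Q1, Q2, Q3;
# the "inclusive" method includes the minimum and maximum as data points
q1, q2, q3 = statistics.quantiles(fte, n=4, method="inclusive")

print(f"median = {median}, IQR = {q1}-{q3}")
```

Reporting the median and IQR rather than the mean and standard deviation is appropriate here because staffing data of this kind are typically right-skewed, as the authors' own figures (median 0.08 FTE, mean 0.3) suggest.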
We conducted subgroup analyses to determine whether organization characteristics might be related to policies and resources. We compared:

1. Accounts affiliated with a Clinical and Translational Science Award (CTSA) versus other accounts
2. Accounts affiliated with a cancer center versus other accounts
3. Accounts with < 20 records, 20-99 records, and ≥ 100 records

We conducted a sensitivity analysis to determine whether the results might be sensitive to non-response bias by comparing accounts that responded before the effective date for The Final Rule (January 18, 2017) with accounts that responded on or after The Final Rule took effect.
The median number of administrators per account was 1 (IQR = 1-3), and one organization had 182 registered administrators.

Survey participation
Of 783 eligible accounts, we found no contact details for 16 (2%) and attempted to contact 767 (98%). In four cases (< 1%), we were unable to identify a usable email address. Of eligible accounts, 10/783 (1%) emailed us to decline, 306/783 (39%) did not participate in the survey, and 81/783 (10%) did not provide sufficient information to be included in the analysis (Fig. 1). Two accounts reported that they had multiple policies related to the same account; we asked them to complete questions about their account characteristics but not to complete questions about their specific policies and resources.
Included accounts were responsible for 40,351/47,701 (85%) records registered by eligible accounts. We received a partial (43) …

Responsibility for registering trials is most often assigned to principal investigators (72/129; 56%). Responsibility for monitoring whether results are reported on time is most often assigned to principal investigators (54/115; 47%) and administrators (68/115; 59%).
Some policies allow organizations to penalize investigators who fail to register trials (27/115; 18%) or fail to report results (21/114; 18%). One account (< 1%) reported that their organization had penalized an investigator for non-compliance.

Resources
Few accounts use computer software to manage their records (68/366; 19%). Of those that use computer software, two use the application programming interface (API) to connect with ClinicalTrials.gov (Table 3).

Table notes:
a. An answer to this question was required for an account to be included in the analysis; accounts that did not see or skipped this question were excluded from all analyses.
b. Of the 68 accounts that use an electronic management system ("computer software"), 2 (3%) use an application programming interface (API) to communicate with ClinicalTrials.gov.
c. Because participants could "check all that apply," the sum of all categories exceeds the number of participants who responded (i.e., some participants selected multiple responses).
d. The number of possible responses (i.e., the denominator) includes the accounts with a relevant policy that viewed this question. The number of accounts that viewed each question is less than the total number of accounts in the study because (1) participants did not see all questions because of skip logic, and (2) some participants discontinued the survey before viewing all questions.
e. Higher degrees include JD (N = 21, 7%), PhD (N = 69, 22%), and MD (N = 32, 10%); 13 accounts selected 2 higher degrees (8 both PhD and JD, 5 both PhD and MD).
f. The number of possible responses was limited to the accounts that reported monitoring compliance with their results reporting policy.
g. Of the 11 accounts reporting that IRBs monitor trial registration, 4 indicated that the IRB requires registration for approval for some (N = 3) or all trials (N = 1).
h. Results are the median and interquartile range. We also calculated mean = 0.3, standard deviation = 0.6.
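As a sketch of what such monitoring software might do (the record structure below is a simplified assumption for illustration, not the actual PRS or ClinicalTrials.gov schema), an organization could flag records whose results remain unposted long after the primary completion date:

```python
from datetime import date

# Simplified, hypothetical record structure for illustration only
records = [
    {"nct_id": "NCT00000001", "primary_completion": date(2016, 1, 15), "results_posted": False},
    {"nct_id": "NCT00000002", "primary_completion": date(2016, 6, 1), "results_posted": True},
]

def overdue(record, today, grace_days=365):
    """Flag a record whose results are unposted more than grace_days
    after its primary completion date (FDAAA generally allows ~12 months)."""
    if record["results_posted"] or record["primary_completion"] is None:
        return False
    return (today - record["primary_completion"]).days > grace_days

compliance_date = date(2017, 4, 18)  # compliance date for The Final Rule
flagged = [r["nct_id"] for r in records if overdue(r, compliance_date)]
print(flagged)  # ['NCT00000001']
```

A production system would pull these fields through the PRS interface described in the PRS User's Guide rather than from a hand-built list, but the core check is this simple.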
Among the 287/366 (78%) accounts that allocate staff to fulfill ClinicalTrials.gov registration and reporting requirements, the median number of full-time equivalent (FTE) staff is 0.08 (IQR = 0.02-0.25). Among the staff who support ClinicalTrials.gov registration and reporting requirements, the staff member with the highest level of education has a graduate degree (232/311; 75%) more often than a bachelor's degree (68/311; 22%) or a high school diploma (11/311; 3%). At the time of this survey, 34/338 (10%) planned to hire more staff, while 217/338 (64%) and 87/338 (26%) did not plan to hire more staff or did not know, respectively. Among accounts affiliated with a CTSA, 24/109 (22%) receive support for ClinicalTrials.gov compliance from the CTSA.

Subgroup analyses
Registration and reporting policies are more common among the following accounts: (1) those with many records, (2) those affiliated with CTSAs, and (3) those affiliated with cancer centers (Table 4). For example, most cancer centers have a registration policy (61/97; 63%) and a reporting policy (52/97; 54%); a minority of other accounts have a registration policy (94/267; 35%) or a reporting policy (77/267; 28%).

Table note: Results are for accounts that responded to this question. In our initial analysis, we found potentially invalid data; for example, some participants entered "0.5" rather than "50%". This occurred because a software bug prevented us from enforcing a data validation rule in the survey. To verify these results, we emailed administrators who indicated that staff spent ≤ 1% of their time on trial registration and reporting. Post hoc, we excluded two outliers because they appeared to report the total number of staff employed at the organization rather than the number of staff who support trial registration and results reporting.

Non-response bias
We found direct and indirect evidence of non-response bias, which suggests that our results might overestimate the amount of support available at academic organizations. For example, one administrator who declined to participate replied that their organization "does not have any central staff managing clinicaltrials.gov and does not utilize an institutional account." Account size was related to survey participation, and many participating accounts were large (Table 5). Of those accounts we invited to complete the survey that included < 20 records, 171/532 (32%) participated. By comparison, 98/113 (87%) accounts with ≥ 100 records participated.
Participation might have been related to organization resources. Nearly all CTSAs (62/64; 97%) and most National Cancer Institute (NCI) cancer centers (55/69; 80%) participated in the survey (Table 5), including 48 accounts affiliated with both a cancer center and a CTSA. Furthermore, some included accounts were related; for example, 107 accounts were affiliated with one of the 62 CTSAs.
In a sensitivity analysis (Additional file 5), we found no clear differences in policies and computer software when comparing early and late responders. Most participants completed the survey before the effective date, so late responders included only 31/366 (8%) accounts.

Summary of findings
To our knowledge, this is the largest and most comprehensive survey of organizations that register and report clinical trials on ClinicalTrials.gov. We had a high participation rate, and accounts that completed the survey conduct the overwhelming majority of clinical trials registered by academic organizations in the USA. We found that some organizations were prepared to meet trial registration and reporting requirements before The Final Rule took effect, but there is wide variation in practice. Most organizations do not have policies for trial registration and reporting. Most existing policies are consistent with FDAAA; however, most are not consistent with the ICMJE registration policy. Nearly half of existing policies do not require registration of all NIH-funded trials, though organizations could adapt their policies in response to the new NIH requirements. Few policies include penalties for investigators who do not register or report their trials. Although some organizations use computer software to monitor trial registration and reporting, only two have systems that connect directly with ClinicalTrials.gov (i.e., using an API). Most staff who support trial registration and reporting have other responsibilities, and most organizations do not plan to hire more staff to support trial registration and reporting.

Implications
Our results suggest that most organizations assign responsibility for trial registration and reporting to individual investigators and provide little oversight. Previous studies indicate that senior investigators often delegate this responsibility to their junior colleagues [47]. To our knowledge, the FDA has never assessed a civil monetary penalty for failing to register or report a trial, and the NIH has never penalized an organization for failing to meet their requirements. The ICMJE policy is not applied uniformly [48], and many published trials are still not registered prospectively and completely [37,[49][50][51][52]. Organizations may be more likely to comply with these requirements if they are held accountable for doing so by journals, the FDA, and funders (see, e.g., http://www.who.int/ictrp/results/jointstatement/en).

Table notes:
Records include studies for which the organization was listed as the "lead sponsor" and the study was conducted in the USA; that is, we excluded records for which the principal investigator (PI) was the "lead sponsor," and we excluded studies done outside the USA.
c. Two accounts selected both an "NCI cancer center" and an "Other cancer center"; thus, 97 accounts were affiliated with a cancer center.
d. "Other schools" include: school of public health (N = 59, 16%), school of social work (N = 41, 11%), school of arts and sciences (N = 56, 15%), school of nursing (N = 72, 20%), and school of dentistry (N = 40, 11%).
Improving research transparency in the long term will require changes in norms and culture. Organizations could take four immediate steps to improve trial registration and reporting. First, organizations could offer education to help investigators understand these requirements. Second, organizations could implement policies and procedures to support trial registration and reporting. For example, organizations could require that investigators answer questions on IRB applications to identify clinical trials that require registration. Organizations could also require that investigators provide trial registration numbers before allowing trials to commence. Third, organizations could identify trials that do not meet trial registration and reporting requirements and help individual investigators bring those trials into compliance. Notably, software could provide automatic reminders when trial information needs to be updated [53] or when results will be due, and software could help organizations identify problems that require attention from leaders. Prospective reminders would allow administrators and investigators to update information before they become non-compliant with reporting requirements. Finally, organizations could ensure there are consequences for investigators who fail to meet trial registration and reporting requirements. For example, organizations could stop enrollment in ongoing trials or stop investigators from obtaining new grants [54].
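The prospective reminders described above can be sketched in a few lines. This is an illustration only: the one-year window approximates FDAAA's general results deadline (simplified here to 365 days), and the reminder intervals are arbitrary choices:

```python
from datetime import date, timedelta

def results_due_date(primary_completion):
    """Approximate the results-submission deadline as one year
    (simplified to 365 days) after the primary completion date."""
    return primary_completion + timedelta(days=365)

def reminder_dates(due, lead_days=(90, 30, 7)):
    """Schedule reminders at fixed intervals before the deadline."""
    return [due - timedelta(days=d) for d in lead_days]

due = results_due_date(date(2017, 1, 18))
print(due)  # 2018-01-18
for reminder in reminder_dates(due):
    print("send reminder on", reminder)
```

Dates computed this way could drive both the automatic reminders to investigators and the summary reports that help organizational leaders spot problems before records become non-compliant.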

Limitations
Although we sent multiple reminders and gave participants months to respond, our results might be influenced by non-response and social desirability. However, such biases would lead us to overestimate support for research trial registration and reporting. Participating accounts conduct more trials than non-participating accounts, and they appear to be most likely to have policies and resources to support transparency.
Because we analyzed results by account, our results are not directly comparable with studies that grouped trials using the data fields "funder" [39,40,43], "sponsor" [41,44], "collaborator" [41], or "affiliation" [42]. We analyzed results by account because (1) the account should usually represent the "responsible party," which is the person or organization legally responsible for fulfilling trial registration and reporting requirements, and (2) because we were not aware of another method to identify all trials, or even all accounts, associated with each organization.
We could not always determine which trials were associated with specific organizations, and organizations might not know which accounts their investigators use. Organizations could work with ClinicalTrials.gov to identify non-working email addresses, update administrators' contact information, assign and identify an administrator responsible for overseeing each account, and create a one-to-one relationship between each account and organization. For example, ClinicalTrials.gov could identify multiple accounts managed by administrators at the same organization and help organizations move information into a single account. Organizations would need to prepare before centralizing their records; centralized administration could reduce trial registration and reporting if administrators lack the time, training, and resources to manage these tasks effectively.
We requested information from one administrator at each organization, and administrators might have been unaware of policies and practices that affect other parts of their organizations (e.g., IRBs, grant management). Finally, some organizations were misclassified on ClinicalTrials.gov (e.g., non-US organizations); we do not know how many organizations were inadvertently included or excluded because of misclassification.

Future research
Further research is needed to determine how to support trial registration and reporting at different types of organizations. Some large organizations register several trials each week, while other organizations register a few trials each year. For small organizations, hiring staff to support trial registration and reporting could be prohibitively expensive. Further qualitative research could explore how different types of organizations are responding to these requirements.
Future surveys could examine predictors of compliance with trial registration and reporting requirements. Although there are important variations in policy and practice, additional quantitative analyses would have little immediate value because most organizations have low compliance [37][38][39][40][41][42][43][44][45]. Instead, detailed case studies might be most useful for identifying best practices. For example, Duke Medicine developed a centralized approach [55], and the US Department of Veterans Affairs (VA) described multiple efforts to support transparency, including an "internal web-based portal system" [54]. The National Clinical Trials Registration and Results Reporting Taskforce is a network of administrators who meet monthly by teleconference, share resources (e.g., educational materials), and provide informal peer education. As industry appears to be doing better than academia [37,[39][40][41][42][43][44], it might be useful for academic organizations to understand the methods industry uses to monitor and report compliance (see, e.g., [56]).
We surveyed organizations after the publication of The Final Rule, and most accounts completed the survey before The Final Rule took effect, several months before the compliance date [34]. Our results should be considered a "baseline" for future studies investigating whether organizations adopt new policies and procedures, and whether they allocate new resources, to fulfill registration and reporting requirements. The federal government estimates compliance costs for organizations will be $70,287,277 per year [34]. This survey, and future updates, could be used to improve estimates of the costs of compliance.

Conclusions
To support clinical trial registration and results reporting, organizations should strongly consider adopting appropriate policies, allocating resources to implement those policies, and ensuring there are consequences for investigators who do not register and report the results of their research.