Design
We conducted a multi-journal, two-arm parallel group, randomised trial to assess the impact of the WebCONSORT tool compared to a control intervention on the completeness of reporting of randomised trials submitted to biomedical journals. The study obtained ethics approval from the University of Oxford Central Research Ethics Committee, Oxford, UK (MSD-IDREC-C1-2012-89) and is registered on ClinicalTrials.gov (NCT01891448).
Journal participants
To be eligible for inclusion, journals were required to (1) endorse the CONSORT Statement (assessed via the journal's Instructions to Authors and as listed on the CONSORT website: www.consort-statement.org); (2) not actively implement the CONSORT Statement (defined as requiring authors to submit a completed CONSORT checklist alongside their manuscript at the time of article submission); and (3) publish reports of randomised trials (criteria assessed February 2013). All journals that met the above inclusion criteria were sent an email (February 2013) from the WebCONSORT study scientific committee inviting them to participate in the study. The description of requirements for participation was included in the email and study information sheet (Appendix 1), and editors were asked to verify that they complied with these criteria and that, while they endorsed the CONSORT Statement, they did not actively implement it.
If a journal agreed to participate, and confirmed that it met the eligibility criteria, the journal editor was asked (Appendix 2) to include a link to the WebCONSORT study website in their request-for-revision letter to authors of any manuscript identified by the journal as reporting the results of a randomised trial. We did this by asking the journal to include the following standard sentence in their revision letter to authors:
“As part of the process of revising your manuscript we would like you to use the WebCONSORT tool which is designed to help you improve the reporting of your randomised trial. You can access the tool by clicking on the following link: [link to WebCONSORT study site]. Please be aware that by submitting your manuscript to our journal it may be part of a research study, any participation will not impact on any future acceptance or rejection of your manuscript”.
Participating journals were also informed that we would require access to the revised manuscript to assess reporting quality irrespective of whether it was published or not.
Random assignment
Authors registering on the WebCONSORT study website were asked to provide some basic information about their randomised trial. This included the name of the journal where the manuscript was submitted, the manuscript number and title, name of submitting author, trial design (e.g. parallel, cluster, non-inferiority, pragmatic), type of intervention (e.g. non-pharmacologic, herbal, acupuncture), and number of study groups (arms). Registered manuscripts were then randomised into two groups (i.e. WebCONSORT tool or control). The sequence of randomisation was computer generated and stratified by whether or not a CONSORT extension was relevant. The assignment was centralised using a web-based system. Authors and journal editors were blinded to allocation of the intervention.
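The allocation procedure described above can be sketched as follows. This is a minimal, hypothetical illustration of a computer-generated sequence stratified by extension relevance, using permuted blocks for concreteness; the paper does not state the block structure, and all function names and parameters here are assumptions rather than the trial's actual web-based system:

```python
import random

def make_sequence(n_blocks, block_size=4, seed=None):
    """Computer-generate a permuted-block randomisation sequence with a
    1:1 ratio of WebCONSORT tool to control within each block.
    (Block size is illustrative; the trial does not report one.)"""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["WebCONSORT", "control"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

def allocate(manuscript_id, extension_relevant, sequences):
    """Draw the next allocation from the pre-generated sequence for the
    manuscript's stratum (whether or not a CONSORT extension is relevant)."""
    stratum = "extension" if extension_relevant else "no_extension"
    return manuscript_id, sequences[stratum].pop(0)

# One independent sequence per stratum, mirroring the stratified design.
sequences = {
    "extension": make_sequence(n_blocks=50, seed=1),
    "no_extension": make_sequence(n_blocks=50, seed=2),
}
manuscript, arm = allocate("MS-001", extension_relevant=True, sequences=sequences)
```

Centralising the sequence on a server, as the trial did, keeps authors and editors blind to the next assignment.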
Interventions
Construction of the WebCONSORT tool
To construct the WebCONSORT tool (Fig. 1) we first combined the different CONSORT extensions, grouping items of a similar nature and adapting some items to the 2010 version of the CONSORT Statement. Secondly, we designed and built a computerised tool allowing authors to produce a list of items that must be included in the report of their results and a flowchart specific to their trial. The tool combined the main CONSORT checklist and extension checklists for different trial designs (e.g. non-inferiority [18], cluster randomised [19], and pragmatic trials [20]) and for specific types of interventions (e.g. non-pharmacological treatments [21], acupuncture [22], and herbal therapy [23]). The checklist extensions for Abstracts [24] and Harms [25] were not included because they are applicable to all trials. The tool automatically generated a unique list of items customised to a specific trial, combining the items from the main CONSORT checklist with the items from all relevant extensions (e.g. for a pragmatic trial evaluating a non-pharmacological treatment with cluster randomisation, the main CONSORT checklist was combined with three extensions: pragmatic trial, cluster trial, and non-pharmacological extensions). This list was generated based on the description of the trial made by the author (i.e. type of design and interventions).
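The checklist-assembly logic can be sketched as follows. The checklists and item wordings below are abbreviated, illustrative fragments only (the real tool held the full 2010 CONSORT items and each extension's modified items), and the function name is hypothetical:

```python
# Illustrative fragments only; the real tool stored the complete checklists.
MAIN_CONSORT = {
    "1a": "Identification as a randomised trial in the title",
    "8a": "Method used to generate the random allocation sequence",
}
EXTENSIONS = {
    "cluster": {"8a": "Method used to generate the random allocation sequence for clusters"},
    "pragmatic": {"P1": "How the trial addresses applicability to usual-care settings"},
    "non-pharmacological": {"N1": "Details of how the interventions were standardised"},
}

def build_checklist(design_features):
    """Combine the main CONSORT checklist with every relevant extension;
    an extension item replaces or supplements the main item it modifies."""
    checklist = dict(MAIN_CONSORT)
    for feature in design_features:
        checklist.update(EXTENSIONS.get(feature, {}))
    return checklist

# e.g. a pragmatic, cluster-randomised trial of a non-pharmacological treatment
items = build_checklist(["pragmatic", "cluster", "non-pharmacological"])
```

Driving the merge from the author's description of the trial (design and intervention type) is what lets one tool serve every combination of extensions.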
A website (Appendix 3: Figure 6) was created where authors were able to log on and register. Using a drop-down menu, they could select their precise type of trial, taking into account the methodological characteristics. Authors were unaware that they were randomised by the software to the WebCONSORT or control intervention.
Experimental intervention
Authors randomised to the WebCONSORT arm were directed to a list of CONSORT items specific to their trial which they could print out. They could also obtain an automatic flowchart adapted to the design of their trial. Authors were told that the items generated by the WebCONSORT tool should be reported in the revised manuscript and that the completed checklist and flow diagram should be submitted to the editor. The content of the WebCONSORT tool was validated by members of the study team; this was done by performing a number of “dummy” randomisations to ensure the correctly formatted customised checklist was generated based on different numbers and types of CONSORT extensions being selected. The WebCONSORT tool website was also tested by the scientific committee of the study and by external experts with experience in designing and conducting clinical trials to ensure the website was clear and well understood.
Control intervention
Authors randomised to the control group were directed to a dummy version of the WebCONSORT tool website which included the customised flow diagram generator part of the tool but not the main checklist generator or elements relating to CONSORT extensions.
Outcomes
Our primary outcome was the proportion of the most important and poorly reported CONSORT Statement checklist items (main CONSORT and extensions), pertaining to a given study, reported in the revised manuscript. For the main CONSORT Statement, a group of experts, from within the CONSORT Group, identified the 10 most important and poorly reported CONSORT checklist items to be assessed for each manuscript, based on their expert opinion and supported by empirical evidence where this was available. In addition, the lead authors of each extension were asked to define the five most important and poorly reported modified items specific to their extension (Appendix 4: Table 3). As the number of items differed across trials because the number of relevant extensions varied, we calculated the percentage of possible items that were reported for each article.
The secondary outcomes were the mean proportion of adequately reported items from the main CONSORT Statement (based on the 10 items for the primary outcome above), and the mean proportion of adequately reported items for each of the relevant CONSORT extensions (based on the five items for the primary outcome above). We also collected data on the rejection rate of studies. We had planned to assess the compliance rate of authors submitting a CONSORT checklist to the journal and to obtain feedback from authors and journal editors on the review process; however, these proved difficult to implement in practice and hence were not assessed.
The evaluation of revised manuscripts was conducted by a team of 10 reviewers (based at the Centre for Statistics in Medicine, University of Oxford), with statistical expertise in the design and reporting of clinical trials, working in pairs who were blinded to the nature of the study and allocation of the interventions. Each pair independently extracted data from the manuscripts; any differences between reviewers were resolved by discussion, with the involvement of an arbitrator if necessary. To ensure consistency between reviewers, we first piloted the data extraction form. We discussed any disparities in the interpretation and modified the data extraction form accordingly.
Sample size
The expected average proportion of adequately reported items in the control arm was 0.60, and our hypothesis was that the proportion of adequately reported items would increase by 25% in relative terms (15 percentage points in absolute terms), thus attaining 0.75 in the experimental arm. Assuming a common standard deviation of 0.40, 151 articles per arm were required to demonstrate a significant difference with a power of 90% (two-sided type 1 error set at 5%), for a total of 302 articles. This sample size calculation was based on the assumption that the mean absolute difference is similar in each stratum (whether or not a CONSORT extension is relevant). We also hypothesized that clustering by journal would have a limited impact because we anticipated the number of journals would be high. Consequently, we did not take clustering by journal into account in the sample size calculation. We did not anticipate that journals would enroll manuscripts that were not in fact reports of randomised trials.
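The figure above can be approximately reproduced with the standard two-sample normal-approximation formula. The paper does not state the exact calculation method; the sketch below gives roughly 150 per arm, and the one-article difference from the reported 151 is consistent with a t-distribution or rounding adjustment:

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.90):
    """Per-arm sample size for comparing two means, normal approximation:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sd^2 / delta^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for a 5% two-sided error
    z_beta = z.inv_cdf(power)           # 1.28 for 90% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# Absolute difference 0.75 - 0.60 = 0.15, common SD 0.40
n = n_per_arm(delta=0.15, sd=0.40)  # ~150 per arm; the paper reports 151
```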
Statistical analysis
The main population for analysis was all manuscripts resubmitted to journals after the intervention occurred, which was during the revision process of the manuscript. Statistical analysis was undertaken using Stata IC (version 13). All outcomes were quantitative and described using proportions, means, standard deviations, and minimum and maximum values. Quantitative variables with asymmetric distributions were presented as medians and interquartile ranges. For the primary and secondary outcomes, we estimated the difference between means in the two groups with 95% confidence intervals. The analysis was also stratified according to those articles which required the inclusion of one or more CONSORT extensions and those which did not. Due to the much larger than anticipated number of incorrectly specified extensions, we also performed a post hoc sensitivity analysis for both primary and secondary outcomes, excluding an extension from the analysis of a manuscript if it had been wrongly selected by the authors.
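The primary comparison, a difference in mean proportions with a 95% confidence interval, can be sketched as follows. This is a normal-approximation illustration rather than the Stata routine the trial used, and the per-article proportions are made-up example values:

```python
import math
from statistics import NormalDist, mean, stdev

def mean_diff_ci(x, y, level=0.95):
    """Difference in group means with a normal-approximation confidence
    interval, using a Welch-style (unpooled) standard error."""
    diff = mean(x) - mean(y)
    se = math.sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return diff, (diff - z * se, diff + z * se)

# Illustrative per-article proportions of adequately reported items (made up)
webconsort = [0.70, 0.60, 0.80, 0.75, 0.65]
control = [0.55, 0.60, 0.50, 0.65, 0.60]
diff, (lo, hi) = mean_diff_ci(webconsort, control)
```

Expressing each article's outcome as a proportion of its own applicable items, as the trial did, is what allows articles with different numbers of relevant extensions to be averaged together.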