The broad methods underlying the START programme have been published [5].
Recruitment of host trials
As part of the START programme, chief investigators on trials recently funded by the National Institute for Health Research (NIHR) Health Technology Assessment Programme or on the Primary Care Research Network portfolio were invited to participate in START. Interested trials were selected on the basis of sample size (at least 800 participants to be approached) and design (using a recruitment method amenable to the START recruitment strategies). Although a variety of recruitment methods could be adopted for studies included in the programme (such as postal or face-to-face recruitment), all participating studies used postal recruitment. The minimum sample size of 800 participants to be approached in each trial was based on an indicative sample size calculation, although the expectation was always that the primary analysis would involve pooling of results across trials in a meta-analysis [5]. Host trials were offered access to one of two strategies (participant information sheets optimised through user-testing or multimedia information), both intended to improve communication of trial information to potential participants, an approach with the potential to increase research participation rates [11]. We aimed to recruit six 'host' trials to each strategy, a target based on practical considerations and a desire to test each strategy in a reasonable range of contexts, rather than on a formal sample size calculation.
Development of the intervention—participant information sheets optimised through user-testing
For user-testing, we recruited healthy members of the public, who had a similar socio-demographic profile (age and education) to the participants eligible for host trials. We excluded people who had taken part in any medicines trial or readability testing in the previous 6 months.
An independent groups design was used, with each participant seeing only one version of the information. We conducted three rounds of user-testing, with 10 participants in each round. The first round tested the original trial materials (PIS and cover letter), after which the optimised versions were developed using information design principles and plain English. Although the information sheet and letter varied from trial to trial, the revisions always included: plain English; short sentences and paragraphs; use of colour for contrast and impact, and bold text for highlighting; a reduced number of sub-sections; a contents list; and clear trial contact details. This approach has been shown to increase levels of understanding and approval [12,13,14].
The second and third rounds tested the revised versions, with minor changes made to wording and layout in response to the findings of each round of testing. In user-testing, each participant was shown a version of the information sheet and cover letter and asked to respond to 20 factual questions: three related to the cover letter and 17 to the information sheet. The questions were drawn from four categories of information that would apply to any trial: the nature and purpose of the trial (three questions); the process and meaning of consent (four questions); trial procedures (10 questions); and safety, efficacy and nature of the tested intervention (three questions). For each question, participants were asked to locate the answer (testing navigation and organisation of the information), then give the answer in their own words (testing clarity of wording) [15].
Methods of the SWAT
In each SWAT, participants being approached to take part in the host trial were randomised to receive either the optimised information materials or the routine materials. Individual randomisation was preferred for the SWATs, as the methods used were highly amenable to randomisation at that level, which would generally increase power and precision and be less vulnerable to selection bias. We adopted site-level randomisation only where that was preferred for logistical reasons (e.g. where there was insufficient resource to conduct individual randomisation, or where individual randomisation might disrupt the host trial).
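As an illustration only, a simple 1:1 individual allocation could be sketched in Stata as below; the variable names are hypothetical, and the actual allocation procedures were those of the individual SWATs:

    * Sketch only: simple 1:1 randomisation of invitees to optimised
    * vs. routine information materials. Assumes one record per invitee.
    set seed 12345
    generate double u = runiform()
    generate byte arm = (u < 0.5)    // 1 = optimised PIS, 0 = routine PIS
    label define armlbl 0 "Routine" 1 "Optimised"
    label values arm armlbl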
Outcome measures
The primary outcome was recruitment, defined as the proportion of participants recruited and randomised to a host trial following an invitation to take part. The denominator for the outcome was the total number of potentially eligible participants offered entry to the host trial. Depending on the particular trial, this would include a mix of eligible and ineligible patients according to the formal inclusion and exclusion criteria. All trials were able to provide reliable data on the numbers offered participation.
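For example (illustrative figures only), if 1000 potentially eligible participants in one arm were offered entry to the host trial and 40 were subsequently randomised to it, the recruitment proportion for that arm would be 40/1000 = 4%.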
Secondary outcomes were:
- Acceptance, defined as the proportion of potentially eligible participants who expressed interest in participating, either by posting a reply or attending a recruitment appointment. We anticipated that in some SWATs the number of participants recruited to the host trial could differ from the number responding positively to the invitation, because of the eligibility criteria used in the host trial.
Research ethics approval
The START programme was approved by the National Research Ethics Service (NRES) Committee Yorkshire and the Humber – South Yorkshire (Ref: 11/YH/0271) on 5 August 2011. Each host trial had its own ethics approval and registration.
Data analysis
For each individual SWAT, analyses of recruitment were conducted in line with the statistical analysis plan developed by SE and VM. Outcomes were first described separately by study arm and then compared using logistic regression to estimate the between-group odds ratio and corresponding 95% confidence interval.
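For a SWAT with one record per invitee, a minimal Stata sketch of this comparison is given below; the variable names (recruited, arm, site) are hypothetical rather than taken from the analysis plan:

    * Sketch: describe recruitment by arm, then compare via logistic regression.
    tabulate arm recruited, row
    logistic recruited i.arm                      // odds ratio with 95% CI
    * For a site-randomised SWAT, allow for clustering of invitees within sites:
    logistic recruited i.arm, vce(cluster site)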
For the pooled analysis, we used a two-stage strategy. First, each individual SWAT was analysed using methods appropriate to its design (i.e. taking into account whether it was individually or cluster randomised) to generate a trial-level summary statistic (an odds ratio). Second, these trial-level results were combined across SWATs using the metan command in Stata (version 14.2). Random-effects meta-analysis models were used on the assumption that clinical and methodological heterogeneity was likely to affect the results, and statistical inconsistency was quantified using the I² statistic.
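A minimal sketch of the second stage, assuming a dataset with one row per SWAT holding hypothetical stage-one estimates on the log odds scale (logor and its standard error selogor, with a trial identifier); metan is a user-written command (ssc install metan):

    * Sketch: pool trial-level log odds ratios and their standard errors
    * under a random-effects model; eform displays pooled odds ratios.
    metan logor selogor, random eform label(namevar=trial)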
Regardless of the observed statistical heterogeneity, we performed a pre-specified subgroup analysis investigating differences between studies by underlying recruitment rate (low, defined as a recruitment rate of 5% or below in the control group, vs. higher rates). We hypothesised that when the baseline recruitment rate is low, the increase in the absolute recruitment rate associated with a recruitment intervention is likely to be greater. A second planned subgroup analysis, comparing patients with a known diagnosis versus participants 'at risk', was not conducted because it proved difficult to assign trials to these categories reliably. In a post hoc sensitivity analysis, we assessed the impact on the overall pooled effect estimate of including one SWAT (ISDR) that faced particular design challenges [9], by re-estimating the pooled odds ratio with this study excluded.
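Under the same assumed dataset, these analyses might be sketched as follows, where lowrate is a hypothetical 0/1 indicator for a control-group recruitment rate of 5% or below:

    * Sketch: pre-specified subgroup analysis by baseline recruitment rate.
    metan logor selogor, random eform by(lowrate)
    * Post hoc sensitivity analysis excluding the ISDR SWAT.
    metan logor selogor if trial != "ISDR", random eform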