
Table 1 Principles and good practice statements for multi-model comparisons

From: Guidelines for multi-model comparisons of the impact of infectious disease interventions

Principle 1. Policy and research question: The model comparison should address a relevant, clearly defined policy question

Good practice:

• The policy question should be refined, operationalised and converted into a research question through an iterative process

• Process and timelines should be defined in agreement with the policy question

Principle 2. Model identification and selection: The identification and selection of models for inclusion in the model comparison should be transparent and minimise selection bias

Good practice:

• All models that can (be adapted to) answer the research question should be systematically identified, preferably through a combination of a systematic literature review and an open call

• Models should be selected using pre-specified inclusion and exclusion criteria, and models identified as potentially suitable but not included should be reported alongside their reason for non-participation

• Models used and changes made as part of the comparison process should be well documented

• If an internal or external validation was used to limit the model selection, it should be reported

Principle 3. Harmonisation: Standardisation of input and output data should be determined by the research question and by the value of the effort needed for this step

Good practice:

• Developing a pre-specified protocol may be useful; if so, it could be published with the comparison results

• Modellers should consider fitting models to a common setting or settings

• Harmonisation of parameters governing the setting, disease, population and interventions should be considered, whilst avoiding changes to fundamental model structures that would cause the models to converge (see the sketch after this list)
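To make the harmonisation step concrete, here is a minimal, purely illustrative Python sketch: a single agreed set of setting, disease, population and intervention parameters is passed to two independently structured toy models. All names, model forms and values (HarmonisedInputs, model_a, model_b and every number) are hypothetical assumptions for illustration, not part of the guidelines.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HarmonisedInputs:
    """Hypothetical harmonised parameters agreed by all modelling teams."""
    population: int = 1_000_000    # common population size
    r0: float = 2.5                # common basic reproduction number
    vaccine_efficacy: float = 0.7  # common intervention effect
    coverage: float = 0.6          # common intervention coverage

def model_a(p: HarmonisedInputs) -> float:
    """Toy model A: attack rate reduced by effective vaccine coverage."""
    attack_rate = 1 - 1 / p.r0          # crude final-size-style term
    return attack_rate * (1 - p.vaccine_efficacy * p.coverage)

def model_b(p: HarmonisedInputs) -> float:
    """Toy model B: same harmonised inputs, different structural assumptions."""
    attack_rate = 0.9 * (1 - 1 / p.r0)  # e.g. assumes some prior immunity
    return attack_rate * (1 - p.vaccine_efficacy * p.coverage)

inputs = HarmonisedInputs()
for model in (model_a, model_b):
    print(model.__name__, round(model(inputs), 3))
```

Only the inputs are shared; each model keeps its own internal structure, which is the point of harmonising without forcing the models to converge.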

Principle 4. Exploring variability: Between- and within-model variability and uncertainty should be explored

Good practice:

• Multiple scenarios should be explored to understand the drivers of the model results

• Sensitivity analyses and what-if analyses (examining extreme scenarios) should be carried out, as sketched after this list
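A hedged sketch of what exploring variability could look like in code, reusing the hypothetical toy models above; the scenario values, including the deliberately extreme what-if setting, are invented purely for illustration.

```python
def model_a(r0: float, efficacy: float) -> float:
    """Toy model A: attack rate scaled by the intervention effect."""
    return (1 - 1 / r0) * (1 - efficacy)

def model_b(r0: float, efficacy: float) -> float:
    """Toy model B: different structural assumptions, same inputs."""
    return 0.9 * (1 - 1 / r0) * (1 - 0.8 * efficacy)

# Shared scenarios, including a deliberately extreme what-if setting.
scenarios = {
    "baseline":         {"r0": 2.5, "efficacy": 0.7},
    "low_transmission": {"r0": 1.5, "efficacy": 0.7},
    "what_if_extreme":  {"r0": 6.0, "efficacy": 0.2},
}

for name, pars in scenarios.items():
    results = {m.__name__: round(m(**pars), 3) for m in (model_a, model_b)}
    print(name, results)

# One-at-a-time sensitivity analysis on r0, efficacy held at baseline.
for r0 in (1.5, 2.0, 2.5, 3.0, 4.0):
    spread = abs(model_a(r0, 0.7) - model_b(r0, 0.7))
    print(f"r0={r0}: between-model spread = {spread:.3f}")
```

Running the same scenario grid through every model separates within-model sensitivity (how each model reacts to an input) from between-model variability (how much the models disagree at the same input).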

Principle 5. Presenting and pooling results: Results should be presented in an appropriate way to support decision-making

Good practice:

• The results for the individual models should be presented, along with within-model uncertainty ranges

• Summary measures that combine outcomes of models should only be used if all outcomes support the same policy; it should be clearly communicated whether summary ranges include within-model uncertainty or between-model uncertainty (i.e. the range of point estimates across the models), as illustrated after this list
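The distinction between within-model uncertainty and the between-model range of point estimates can be shown with a small sketch; the model names and all numbers below are fabricated purely to illustrate the reporting format.

```python
# Point estimate plus within-model 95% uncertainty interval, per model.
results = {
    "model_a": (0.35, (0.30, 0.41)),
    "model_b": (0.29, (0.25, 0.34)),
    "model_c": (0.44, (0.37, 0.52)),
}

# Report each model individually, with its within-model uncertainty range.
for model, (point, (lo, hi)) in results.items():
    print(f"{model}: {point:.2f} (within-model 95% interval {lo:.2f}-{hi:.2f})")

# Between-model uncertainty: the range of point estimates across the models.
points = [point for point, _ in results.values()]
print(f"between-model range of point estimates: "
      f"{min(points):.2f}-{max(points):.2f}")

# A wider envelope that also folds in each model's within-model uncertainty;
# any pooled summary should state which of these two ranges it reports.
lows = [lo for _, (lo, _) in results.values()]
highs = [hi for _, (_, hi) in results.values()]
print(f"envelope including within-model uncertainty: "
      f"{min(lows):.2f}-{max(highs):.2f}")
```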

Principle 6. Interpretation: Results should be interpreted to inform the policy question

Good practice:

• Key results and their interpretation in relation to the policy question should be discussed

• Key strengths and limitations of the model comparison process and results should be addressed

• Key recommendations for next steps should be reported