Reporting guidelines for realist evaluations seek to improve clarity and transparency
BMC Medicine volume 14, Article number: 109 (2016)
An increasing number of realist evaluations are being conducted from a wide range of disciplinary perspectives and with diverse, fit-for-purpose methods. This commentary discusses the recent BMC Medicine publication of RAMESES II reporting guidelines for realist evaluations. Knowledge users such as program implementers and decision-makers will benefit from the increased transparency of reporting and interpretation in light of the totality of evidence encouraged by this guidance. It is hoped that these reporting guidelines will eventually lead to improved knowledge synthesis and contribute to the cumulative science regarding realist evaluation.
Please see related article: https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-016-0643-1
Realist evaluations are theory-driven evaluations that seek to understand how complex interventions work, for whom they work, and how programs and their effects are influenced by context and setting; the approach is rooted in the philosophy of scientific realism [1, 2]. Realist evaluations are carried out from different disciplinary perspectives and use a plurality of fit-for-purpose methods. They can be useful in understanding how a program or policy works, in which settings, and for whom. Realist evaluations can be used to hypothesize whether a program works in different settings and for different participants, including “program designers, implementers, and recipients”. These hypotheses are tested and refined during the evaluation of the program, and the evaluation can be understood using the context-mechanism-outcome (CMO) configuration. Because of this, realist evaluations are appreciated by implementers and decision-makers who, when designing or funding programs, seek to understand how a program or policy works and in which circumstances.
In a recent article in BMC Medicine, Wong and colleagues seek to improve the transparency of reporting of realist evaluations by developing consensus- and evidence-based reporting guidelines. They use transparent and accepted methods endorsed by the EQUATOR Network and outlined by Moher and colleagues to develop this guidance. Their protocol was published in BMJ Open, and 35 experts with diverse disciplinary backgrounds and experience from six different countries participated in three rounds of a Delphi survey to develop the guidance. A high response rate was achieved across all rounds of the Delphi (range 76–90%).
The tool consists of 20 items, broken down into six sections: Title, Summary or Abstract, Introduction, Methods, Results, and Discussion. Each of the 20 items includes a detailed rationale and exemplars of good practice. The authors have also built in flexibility in the order of reporting, and they strongly encourage authors to document a justification for any variance from the reporting items, including omissions. The first item relates to identifying the study as a realist evaluation in the title, which will aid in identifying these types of studies in the future. Items 3 (rationale for evaluation), 4 (program theory), 7 (rationale for realist evaluation), 8 (environment and context), and 10 (evaluation design) are particularly important because, based on evaluations of the reporting of realist reviews, these items are more likely to be poorly reported. Items 11 through 13 relate to data collection, the recruitment process, and data analysis. Item 16, on the summary of findings, encourages authors to rate the strength of the evidence from the evaluation, which is extremely important for stakeholders who seek to use this information. Item 20 relates to the source of funding and the declaration of any potential conflicts of interest.
One major advance of this reporting guideline is the encouragement to situate the realist evaluation in the totality of the evidence (item 18), which will help program implementers to interpret the findings in light of other relevant evidence while considering the contribution of differences in settings and populations. This is in keeping with other global initiatives to consider the entirety of the evidence when reporting results of primary studies, such as The Lancet guidelines and the CONSORT Statement. We agree with the authors that situating findings in the light of relevant evidence will contribute to the cumulative evidence base and science regarding other similar programs and policies.
The authors are already promoting the uptake of these reporting standards through the RAMESES listserv and through training workshops and materials, which will assist their adoption by program evaluators. They also plan to evaluate the usefulness and impact of the guidelines in the future. An additional activity that could enhance their impact is registration with the EQUATOR Network, which also provides tools and resources for journal editors to facilitate the use of reporting guidelines by authors.
The RAMESES II reporting guidelines for realist evaluations are an important initiative to ensure these evaluations are reported in sufficient detail, in the context of existing evidence, and with a rating of the strength of evidence for the main findings, which will greatly assist users of the evaluations. Because reviews are only as good as the studies they include, in our opinion this could also eventually improve realist syntheses that include realist evaluations. We look forward to the upcoming development of methodological quality standards for realist evaluations, which will further advance the science of this type of study and likely improve realist synthesis.
Pawson R, Tilley N. Realistic evaluation. London: Sage; 1997.
Pawson R. The science of evaluation: a realist manifesto. London: Sage; 2013.
Wong G, Westhorp G, Manzano A, Greenhalgh J, Jagosh J, Greenhalgh T. RAMESES II reporting standards for realist evaluations. BMC Med. 2016;14(1):96.
EQUATOR Network [http://www.equator-network.org/]. Accessed 18 Jul 2016.
Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.
Greenhalgh T, Wong G, Jagosh J, Greenhalgh J, Manzano A, Westhorp G, Pawson R. Protocol: the RAMESES II study: developing guidance and reporting standards for realist evaluation. BMJ Open. 2015;5(8):e008567.
Tricco AC, Soobiah C, Antony J, Cogo E, MacDonald H, Lillie E, Tran J, D’Souza J, Hui W, Perrier L. A scoping review identifies multiple emerging knowledge synthesis methods, but few studies operationalize the method. J Clin Epidemiol. 2016;73:19–28.
Clark S, Horton R. Putting research into context-revisited. Lancet. 2010;376(9734):10–1.
Schulz KF, Altman DG, Moher D. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol. 2010;63(8):834–40.
Wong G, Greenhalgh T, Westhorp G, Buckingham J, Pawson R. RAMESES publication standards: realist syntheses. BMC Med. 2013;11:21.
We would like to thank Manosila Yoganathan for her help formatting this commentary.
VW and ACT drafted the initial manuscript. Both authors critically reviewed and approved the final manuscript.
VW and ACT declare that they have no competing interests.
Welch, V.A., Tricco, A.C. Reporting guidelines for realist evaluations seek to improve clarity and transparency. BMC Med 14, 109 (2016). https://doi.org/10.1186/s12916-016-0658-7
- Realist evaluation
- Reporting guidelines
- Knowledge synthesis