- Research article
Recent meta-analyses neglect previous systematic reviews and meta-analyses about the same topic: a systematic examination
BMC Medicine, volume 13, Article number: 82 (2015)
As the number of systematic reviews is growing rapidly, we systematically investigate whether meta-analyses published in leading medical journals present an outline of available evidence by referring to previous meta-analyses and systematic reviews.
We searched PubMed for recent meta-analyses of pharmacological treatments published in high impact factor journals. Previous systematic reviews and meta-analyses were identified with electronic searches of keywords and by searching reference sections. We analyzed the number of meta-analyses and systematic reviews that were cited, described and discussed in each recent meta-analysis. Moreover, we investigated publication characteristics that potentially influence the referencing practices.
We identified 52 recent meta-analyses and 242 previous meta-analyses on the same topics. Of these, 66% of identified previous meta-analyses were cited, 36% described, and only 20% discussed by recent meta-analyses. The probability of citing a previous meta-analysis was positively associated with its publication in a journal with a higher impact factor (odds ratio, 1.49; 95% confidence interval, 1.06 to 2.10) and more recent publication year (odds ratio, 1.19; 95% confidence interval 1.03 to 1.37). Additionally, the probability of a previous study being described by the recent meta-analysis was inversely associated with the concordance of results (odds ratio, 0.38; 95% confidence interval, 0.17 to 0.88), and the probability of being discussed was increased for previous studies that employed meta-analytic methods (odds ratio, 32.36; 95% confidence interval, 2.00 to 522.85).
Meta-analyses on pharmacological treatments do not consistently refer to and discuss findings of previous meta-analyses on the same topic. Such neglect can lead to research waste and be confusing for readers. Journals should make the discussion of related meta-analyses mandatory.
Systematic reviews and meta-analyses represent a high level of evidence and are invaluable to health professionals in synthesizing the results of medical research. The number of systematic reviews is growing rapidly - in 2010 approximately 11 such studies were published per day, which corresponds to the number of randomized controlled trials (RCTs) published three decades ago. Because of this exponential growth in publication rates, many meta-analysis authors may not discuss the results of previous meta-analyses and systematic reviews on the same topic - in a manner analogous to authors of RCTs not referring to a substantial portion of other relevant RCTs or systematic reviews [4,5]. This can be very confusing for readers and cause waste in research resources, including waste in study planning, design, and conduct, as well as leading to unnecessary duplications and incomplete reporting [10,11].
To grasp the importance of such neglect, imagine clinicians seeking a treatment solution for a patient’s specific medical problem. They find two similar meta-analyses with discordant results. If the newer article does not refer to the older one, the readers are given no explanation of the possible reasons for this discrepancy. Which article should they trust more? Their level of uncertainty is higher than before reading these authoritative articles, and an evidence-based decision regarding their patients’ treatment becomes even more difficult. Not referring to important related research also runs counter to the principles of evidence-based medicine, because meta-analysts agree that all available evidence should be systematically searched and reviewed in an unbiased manner.
Moreover, as for all types of research, the question a meta-analysis is trying to answer should be relevant [7-9]. If the question has already been answered in a previous meta-analysis, the authors should clearly justify why they decided to perform a similar analysis again. Is it a replication, an update, or maybe just an unnecessary duplication?
To provide patients, clinicians, and policymakers with the most useful information about a clinical question, meta-analyses should not neglect previous systematic reviews about the same topic. This will not only help to provide a more complete understanding of the clinical problem, but also to avoid research waste and biased results.
We report a systematic investigation of whether recent meta-analyses published in the leading medical journals cite, describe, and discuss previous meta-analyses and systematic reviews on the same topic. We also analyze factors that are likely to be associated with this phenomenon.
First, we identified a sample of recent meta-analyses; then, for each included recent article, we performed a separate systematic search to find similar previous meta-analyses and systematic reviews. Our goal was to estimate what proportion of the previous meta-analyses and systematic reviews was cited, described, and discussed by the recent meta-analyses. We also investigated potential predictors of citing, describing, and discussing. We initially published a protocol at our institutional website.
Selection of the recent meta-analyses
We searched PubMed, combining the names of the six general medical journals with the highest impact factors according to Journal Citation Reports, 2013 edition (New England Journal of Medicine, The Lancet, JAMA: The Journal of the American Medical Association, Annals of Internal Medicine, PLOS Medicine, British Medical Journal) with ‘meta-analysis’ as publication type (see Additional file 1). To produce a more homogeneous sample we only included meta-analyses of pharmacological treatments. The original search was completed in March 2013. Because we aimed to include at least 50 published meta-analyses, we expanded the search back to January 2012 to meet this criterion.
We then systematically assessed citation habits of these recent meta-analyses towards previous meta-analyses and systematic reviews on the same topic.
Selection of the previous meta-analyses and systematic reviews
For each recent meta-analysis we searched PubMed for previous meta-analyses and systematic reviews on the same topic (unlike the recent articles, previous studies also included systematic reviews without meta-analysis), combining the keywords provided by the recent articles with ‘meta-analysis’ or ‘systematic review’ as publication type (see Additional file 2). The keywords were based on the characteristics of the participating population and the intervention(s) used. The reference lists of all included studies were also screened. We compared PICO questions between the recent and the previous articles to make sure that they focused on a similar group of participants (P) and used similar interventions (I), comparators (C), and outcomes (O). Previous articles in which any of the PICO questions was completely different from the corresponding question in the recent article were excluded. Additionally, for each included previous article we calculated a ‘similarity score’: for each of the four PICO questions, one point was given if the question was identical to the corresponding question from the recent article, and zero points if the two questions only overlapped, that is, when the criteria were only partially similar - for example, when multiple outcomes were used and only some of them were employed by the previous study (in such a case zero points were given, but the study was not excluded). For more details and examples of this similarity score see Additional file 3. We also excluded articles published more than 10 years before or less than 1 year before the publication of the recent meta-analysis, unless they were cited in the recent meta-analysis. This criterion ensured that we did not analyze outdated material, and it also does justice to the fact that the publication process can take a long time.
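The scoring rule described above can be sketched in code. This is a minimal illustration of the principle, not the authors' actual procedure; the function name and the ‘identical’/‘overlapping’ labels are assumptions made for the example.

```python
# Illustrative sketch of the PICO-based similarity score: one point per
# PICO element that is identical between the recent and the previous
# article, zero points when the element merely overlaps.
# Function name and labels are hypothetical, not from the study protocol.

def similarity_score(pico_comparison):
    """Return 0-4: the number of PICO elements judged 'identical'."""
    elements = ("participants", "intervention", "comparator", "outcome")
    return sum(1 for e in elements if pico_comparison[e] == "identical")

example = {
    "participants": "identical",
    "intervention": "identical",
    "comparator": "overlapping",  # e.g. only some comparators are shared
    "outcome": "identical",
}
print(similarity_score(example))  # -> 3 (out of a maximum of 4)
```

A score of four out of four thus flags a previous review asking essentially the same question, which is the situation in which a justification for the new analysis is most clearly needed.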
Statistical analysis of predictors
Our primary aim was to estimate what proportion of the previous meta-analyses and systematic reviews was cited (that is, whether a reference to the previous article was provided by the recent study), described (that is, whether any information about the results of the previous article was given), and discussed (that is, whether the results of the previous article were related to the results or conclusions of the recent study) by the recent meta-analysis. Table 1 provides specific examples for each definition.
We also investigated potential predictors of citing, describing, and discussing previous meta-analyses and systematic reviews by recent meta-analyses using mixed-effects logistic regression analysis in R.
Recent article-specific predictors included: journal title, medical discipline, journal impact factor (based on Journal Citation Reports, 2013 edition), and quality of the systematic review as measured with the AMSTAR score (a measurement tool for the assessment of the methodological quality of systematic reviews).
Previous article-specific predictors included: level of similarity of the review question (based on a comparison of PICO questions between the recent and previous article), journal impact factor, publication year, article type (systematic review using meta-analytic methods versus not), and concordance of results between the recent and previous articles (similar results versus different results). Results were judged as ‘similar’ when the direction of the effect was the same, irrespective of the effect size. In general, concordance of results was based on the major findings of the study (the primary outcome, if possible) and, if necessary, particular results and conclusions were compared, including strength of evidence. In the case of previous systematic reviews without meta-analysis, concordance was based on the main message of the paper, that is, the authors’ summary and/or conclusions (illustrative examples are presented in Additional file 4). To avoid confusion, we emphasize that the term ‘similarity’ refers to a comparison of the recent and previous articles in terms of their review questions, whereas the term ‘concordance’ refers to a comparison of their results.
In a sensitivity analysis we excluded previous articles published before 2010 to check whether the general pattern of results changed in the newer papers.
BH piloted the analysis on a sample of 10 studies, selecting and extracting all the data. AP independently extracted a random sample of 25%. An inter-rater reliability analysis using the Kappa coefficient was performed to determine the consistency among raters . Conflicts were resolved by discussion between BH and AP; if necessary, SL was involved. Results of the regression analyses are presented as odds ratios (OR) and associated 95% confidence intervals (CI).
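For readers unfamiliar with the statistic, Cohen's Kappa contrasts the observed agreement between two raters with the agreement expected by chance from each rater's marginal frequencies. A minimal sketch follows; the rating data are illustrative, not taken from the study.

```python
# Minimal sketch of Cohen's kappa for two raters, as used to check
# consistency on the 25% double-extracted sample. Data are made up.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # observed proportion of agreement
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    # chance agreement from each rater's marginal frequencies
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

a = ["cited", "cited", "not", "cited", "not", "not", "cited", "cited"]
b = ["cited", "cited", "not", "not",   "not", "cited", "cited", "cited"]
print(round(cohens_kappa(a, b), 3))  # -> 0.467
```

Values around 0.6 to 0.8, like the 0.664 reported below, are conventionally read as substantial agreement.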
We identified 52 recent meta-analyses and 242 previous meta-analyses and systematic reviews (including 24 previous systematic reviews without meta-analysis), covering a wide range of drugs and medical specialties. Table 2 shows summary characteristics of included studies, whereas Additional files 5 and 6 provide detailed information on the individual meta-analyses and systematic reviews.
Out of 52 recent meta-analyses there were only four without previously published meta-analyses or systematic reviews. These four articles were excluded from the regression analysis. For the remaining 48 articles there were, on average, five (range 1 to 28, SD 4.6) previous meta-analyses or systematic reviews per paper.
Out of 242 previous meta-analyses and systematic reviews, approximately two-thirds were cited (159 out of 242, 66%), one-third described (86 out of 242, 36%), and only one-fifth discussed in the recent meta-analyses (49 out of 242, 20%). This pattern of results did not change when the previous articles published before 2010 (that is, older than two years) were excluded (see Figure 1).
Citing a previous meta-analysis or systematic review by a recent meta-analysis was positively associated with publication of the previous article in a journal with a higher impact factor (OR, 1.49; 95% CI, 1.06 to 2.10) and more recent publication year (OR, 1.19; 95% CI, 1.03 to 1.37). Similar results were found for describing (higher impact factor: OR, 1.83; 95% CI, 1.27 to 2.62; more recent publication year: OR, 1.29; 95% CI, 1.08 to 1.55) as well as for discussing (higher impact factor: OR, 1.72; 95% CI, 1.16 to 2.55; more recent publication year: OR, 1.55; 95% CI, 1.17 to 2.06). Additionally, the probability of describing the previous article was inversely associated with the concordance of results (OR, 0.38; 95% CI, 0.17 to 0.88) and the probability of being discussed was increased for previous articles that employed meta-analytic methods (OR, 32.36; 95% CI, 2.00 to 522.85). The AMSTAR score of the recent meta-analysis as well as the similarity score were not significantly associated with any of the outcomes (see Table 3) and the nominal variables journal title and medical discipline were excluded from the regression analysis.
The inter-rater reliability for the independent raters was found to be Kappa = 0.664 (P < 0.001; 95% CI, 0.607 to 0.721).
We found that in recent meta-analyses on pharmacological interventions published in leading medical journals, the proportion citing, describing, or discussing previous meta-analyses and systematic reviews on the same topic was low. Specifically, we found that only two-thirds of previous meta-analyses and systematic reviews were cited, one-third described, and only one in five of the previous articles’ results was discussed in light of the recent meta-analysis’ findings.
For individual RCTs it has been pointed out that most new trials are not interpreted, planned, and designed in the context of existing systematic reviews and other relevant evidence [6,23]. Our findings suggest that this statement applies also to otherwise methodologically sound meta-analyses. A fundamental principle of meta-analyses and systematic reviews is that all relevant clinical trials should be considered. We believe that they should also outline previous meta-analyses and systematic reviews about the same topic. Understanding the existing literature is central to any new project. In case of a meta-analysis, not referring to the results of previous meta-analyses and systematic reviews is especially problematic because it is likely to lead to confusion and disinformation among clinicians, patients, and policymakers, which is exactly the opposite of what any effort aiming at synthesizing scientific findings should be.
One could argue that citing 66% of previous relevant meta-analyses and systematic reviews is not a bad result, but we believe that simply providing a reference to another review is not enough. Systematic reviewers should place their results in the context of previous reviews, that is, provide a meaningful comment, comparison, or explanation of existing differences.
Moreover, this neglect is an example of inadequate study planning, suggesting that many authors do not perform the necessary literature search before initiating their own project [7,24]. This might very well be one of the reasons behind unnecessary duplication of effort in the health sciences. As trenchantly expressed by Terry and colleagues, ‘The issue of knowing what research is currently being undertaken … is a black hole in the public health landscape’. Especially worrisome is the fact that authors who refer to similar previous papers rarely justify why their own project was undertaken, given that similar work was recently performed. In our sample we found 10 recent meta-analyses referring to a very similar previous work (‘similarity score’ of four out of four). Only six of them justified why the same analysis was performed again (the most common reasons being that the previous paper needed to be updated or that discordant results required clarification). None of them mentioned rigorous replication as a reason, suggesting that an ‘efficient culture for replication of research’ has yet to emerge in the health sciences.
We found that this neglect to refer to previously published systematic reviews and meta-analyses was predicted by a number of variables. According to our model, previous meta-analyses and systematic reviews were more likely to be cited, described, or discussed by a recent meta-analysis if they were recently published in a journal with a high impact factor, if their results were different, and if they used meta-analytic methods.
More recent meta-analyses are simply more up to date and usually include more RCTs. However, this does not necessarily mean that an older meta-analysis should be neglected - depending, among other factors, on how many new studies have been published since, an older meta-analysis can still serve as a valuable source of information that should be included in the literature review. Importantly, when we excluded all previous articles published before 2010, the general pattern of our main result did not change (see Figure 1), showing that the neglect to refer to and discuss previously published systematic reviews and meta-analyses persisted even for the most recent material.
Our results show that for an increase of one unit in impact factor, the odds of being cited by a recent meta-analysis increased by 49%. Although criticized, the impact factor is an important criterion for readers to assess the importance of scientific literature. However, authors of systematic reviews should be especially careful not to miss important insights published outside high-impact-factor journals, and should select evidence on grounds of methodological validity rather than mere visibility.
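The arithmetic behind such statements is straightforward: a logistic-regression coefficient beta translates into an odds ratio via exp(beta), and its 95% confidence limits via exp(beta ± 1.96 × SE). The sketch below uses hypothetical coefficient values chosen only to roughly reproduce the impact-factor result reported above; they are not the study's actual model output.

```python
# Converting a logistic-regression coefficient into an odds ratio with a
# 95% confidence interval: OR = exp(beta), CI = exp(beta +/- 1.96 * SE).
import math

def or_with_ci(beta, se):
    """Return (odds ratio, lower 95% limit, upper 95% limit)."""
    return (math.exp(beta),
            math.exp(beta - 1.96 * se),
            math.exp(beta + 1.96 * se))

# Hypothetical coefficient and standard error, chosen for illustration to
# approximate the reported impact-factor result (OR 1.49; 95% CI 1.06-2.10).
beta, se = 0.399, 0.174
or_, lo, hi = or_with_ci(beta, se)
print(f"OR {or_:.2f}; 95% CI {lo:.2f} to {hi:.2f}")
```

An odds ratio of 1.49 per unit of impact factor is what the text summarizes as a 49% increase in the odds of being cited.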
We hypothesize that omitting similar findings from previous papers may constitute an (un)conscious strategy performed by authors to artificially create ‘novelty value’ to win an advantage during the peer-review and publication process. This is because journals demand novel, ground-breaking results to qualify for acceptance  - revealing that another article, using similar methodology, has obtained the same results likely decreases the novelty of the submitted paper. Such acts distort readers’ understanding of the true landscape of the medical evidence.
Although it is generally acknowledged that meta-analysis can be an important and reliable source of information, we would like to emphasize that the methodology itself is not a synonym for scientific quality, and authors should be aware of both the strengths and the weaknesses of this method.
Our analysis has limitations. We decided to focus only on clinical journals with the highest impact factors, because they usually publish papers of high scientific quality. Nevertheless, our sample may not be representative of all medical meta-analyses. Because we wanted to be systematic in our approach, we included the New England Journal of Medicine, although it does not publish many systematic reviews. We also restricted ourselves to pharmacological interventions. Therefore, our results do not necessarily generalize to other forms of treatment or other journals, although we do not see any obvious reason why the situation there should differ. We only used PubMed to identify the previous articles, so we might have missed some relevant meta-analyses or systematic reviews on a given topic. However, because we always included all previous meta-analyses and systematic reviews cited by the recent article (that is, all previous systematic reviews and meta-analyses that were on the reference list of a given recent article), our results represent a rather conservative estimate of the proportion of previous meta-analyses and systematic reviews that were cited, described, and discussed by the recent meta-analyses. Selection by a single reviewer with only 25% double extraction was a further limitation of our study, but the level of agreement between reviewers was good according to the Kappa coefficient. Moreover, this is not a review where exact accuracy is essential - our primary result is very robust, and our conclusions would not change even if the number of described and discussed papers were to double. Finally, our detailed description of the results in the Additional files allows verification and replication (see Additional file 7 for a list of references to all included meta-analyses and systematic reviews).
As we are not aware of any other research that could have guided our selection of predictors, we chose them based on our own expertise. Because of that, some of the measures we used have not been previously validated (similarity score of the review question, concordance of results), but we made sure they were as simple as possible and well-operationalized (including a priori definitions wherever possible). Moreover, the similarity score was based on the PICO questions that are considered essential in defining which studies to include and exclude  and constitute a well-recognized procedure .
Conclusions and policy implications
Upcoming systematic reviews and meta-analyses should include an outline of previous systematic work on the same topic. Such an outline should be recommended by evidence-based medicine guidelines and officially implemented in editorial policies. Currently, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement recommends that authors of systematic reviews explain in the introduction how their work adds to what is already known and whether it is a new review or an update (see item 3: Rationale). This is not sufficient. In this respect the Consolidated Standards of Reporting Trials (CONSORT) Statement seems more demanding, recommending that each new trial include a reference to a systematic review of previous similar trials, or a note on the absence of such trials (see item 2a: Scientific background and explanation of rationale). We see no reason why systematic reviews should not follow an analogous procedure. The Cochrane Collaboration has already acknowledged this problem and includes an obligatory section ‘Agreements and disagreements with other studies or reviews’ in its software Review Manager.
To reduce unnecessary duplication of research effort and adequately determine whether there is a need to undertake a new project, all systematic reviews and meta-analyses should be prospectively registered  using international registries of protocols, like PROSPERO .
Limiting the failure to refer to what is already known would make systematic reviews and meta-analyses a more useful, transparent, and valuable source of information for clinicians, researchers, policymakers, and patients. This simple step towards clarity and informativeness would enhance evidence-based practice as well as reduce waste in research resources [6-8,10,37] and reduce human suffering .
AMSTAR: a measurement tool for the assessment of the methodological quality of systematic reviews
CONSORT: Consolidated Standards of Reporting Trials
PICO: participants, intervention, comparator, outcome
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PROSPERO: international prospective register of systematic reviews
RCT: randomized controlled trial
Oxman AD, Guyatt GH. The science of reviewing research. Ann N Y Acad Sci. 1993;703:125–33.
Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7:e1000326.
Robinson KA, Goodman SN. A systematic examination of the citation of prior research in reports of randomized, controlled trials. Ann Intern Med. 2011;154:50–5.
Clarke M, Hopewell S, Chalmers I. Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting. Lancet. 2010;376:20–1.
Clarke M, Hopewell S. Many reports of randomised trials still don’t begin or end with a systematic review of the relevant evidence. J Bahrain Med Soc. 2013;24:145–8.
Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374:86–9.
Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, et al. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383:156–65.
Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383:166–75.
Chang SM, Carey T, Kato EU, Guise J-M, Sanders GD. Identifying research needs for improving health care. Ann Intern Med. 2012;157:439–45.
Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383:267–76.
Greenberg SA. How citation distortions create unfounded authority: analysis of citation network. BMJ. 2009;339:b2680.
Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535.
Centrum für Disease Management [http://www.cfdm.de/media/doc/Protocol%20Quoting%20Habits%20of%20Meta-analyses.doc].
Schardt C, Adams MB, Owens T, Keitz S, Fontelo P. Utilization of the PICO framework to improve searching PubMed for clinical questions. BMC Med Inform Decis Mak. 2007;7:16.
Hempel S, Newberry SJ, Maher AR, Wang Z, Miles JN, Shanman R, et al. Probiotics for the prevention and treatment of antibiotic-associated diarrhea: a systematic review and meta-analysis. JAMA. 2012;307:1959–69.
D’Souza AL, Rajkumar C, Cooke J, Bulpitt CJ. Probiotics in prevention of antibiotic associated diarrhoea: meta-analysis. BMJ. 2002;324:1361.
Makani H, Bangalore S, Desouza KA, Shah A, Messerli FH. Efficacy and safety of dual blockade of the renin-angiotensin system: meta-analysis of randomised trials. BMJ. 2013;346:f360.
Kunz R, Friedrich C, Wolbers M, Mann JF. Meta-analysis: effect of monotherapy and combination therapy with inhibitors of the renin angiotensin system on proteinuria in renal disease. Ann Intern Med. 2008;148:30–48.
Fox BD, Kahn SR, Langleben D, Eisenberg MJ, Shimony A. Efficacy and safety of novel oral anticoagulants for treatment of acute venous thromboembolism: direct and adjusted indirect meta-analysis of randomised controlled trials. BMJ. 2012;345:e7498.
Loke YK, Kwok CS. Dabigatran and rivaroxaban for prevention of venous thromboembolism - systematic review and adjusted indirect comparison. J Clin Pharm Ther. 2011;36:111–24.
Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.
Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.
Jones AP, Conroy E, Williamson PR, Clarke M, Gamble C. The use of systematic reviews in the planning, design and conduct of randomised trials: a retrospective cohort of NIHR HTA funded trials. BMC Med Res Methodol. 2013;13:50.
Cooper N, Jones D, Sutton A. The use of systematic reviews when designing studies. Clin Trials. 2005;2:260–4.
Terry RF, Salm JF, Nannei C, Dye C. Creating a global observatory for health R&D. Science. 2014;345:1302–4.
Editorial. The impact factor game. PLoS Med. 2006;3:e291.
Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314:498–502.
Bertamini M, Munafo MR. Bite-size science and its undesired side effects. Perspect Psychol Sci. 2012;7:67–71.
Guyatt GH, Sackett DL, Sinclair JC, Hayward R, Cook DJ, Cook RJ. Users’ guides to the medical literature IX: a method for grading health care recommendations. JAMA. 1995;274:1800–4.
Garg AX, Hackam D, Tonelli M. Systematic review and meta-analysis: when one study is just not enough. Clin J Am Soc Nephrol. 2008;3:253–60.
Saha S, Saint S, Christakis DA. Impact factor: a valid measure of journal quality? J Med Libr Assoc. 2003;91:42–6.
Huang X, Lin J, Demner-Fushman D. Evaluation of PICO as a knowledge representation for clinical questions. AMIA Annu Symp Proc. 2006;2006:359–63.
Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.
Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c869.
The Nordic Cochrane Centre. Review Manager (RevMan). Version 5.2. Copenhagen: The Nordic Cochrane Centre, The Cochrane Collaboration; 2012.
Booth A, Clarke M, Dooley G, Ghersi D, Moher D, Petticrew M, et al. The nuts and bolts of PROSPERO: an international prospective register of systematic reviews. Syst Rev. 2012;1:2.
Siontis KC, Hernandez-Boussard T, Ioannidis JP. Overlapping meta-analyses on the same topic: survey of published studies. BMJ. 2013;347:f4501.
Chalmers I. The Lethal Consequences of Failing to Make Use of All Relevant Evidence about the Effects of Medical Treatments: The Need for Systematic Reviews. In: Rothwell P, editor. Treating Individuals: from Randomised Trials to Personalised Medicine. London: Lancet; 2007. p. 37–58.
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. AC acknowledges support from the NIHR Oxford cognitive health Clinical Research Facility. JRG is an NIHR Senior Investigator.
The authors declare that they have no competing interests.
BH and SL together with JRG, AC and JMD conceived and designed the study. BH identified the eligible meta-analyses and, together with AP, extracted the data. GS, DM, SL, MTS, and BH performed the statistical analyses and interpreted the data. BH wrote the manuscript, and all authors revised it critically for content and approved the final version.
BH is a researcher in evidence-based medicine at the Department of Psychiatry and Psychotherapy, Technical University Munich, Germany. AP is a research analyst at the Centre for Addiction and Mental Health, Toronto, Canada. MTS is a resident in psychiatry and researcher in evidence-based psychiatry at the Department of Psychiatry and Psychotherapy, Technical University Munich, Germany. JRG is Head of Department of Psychiatry; Professor of Epidemiological Psychiatry at the University of Oxford, UK; and Director of the Oxford Clinical Trials Unit for Mental Illness. AC is an Associate Professor and Senior Clinical Researcher at the Department of Psychiatry at the University of Oxford, UK; Editor in Chief of Evidence-Based Mental Health; and Editor of the Cochrane Depression, Anxiety and Neurosis Group. JMD is Gilman Professor of Psychiatry and Research Professor of Medicine at University of Illinois at Chicago, USA; and Editor of the Cochrane Schizophrenia Group. DM is a lecturer in Statistics at the Department of Primary Education, University of Ioannina, Greece. GS is Assistant Professor in Epidemiology at the Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Greece; and the convener of the Cochrane Collaboration’s Statistical Methods Group and the Comparing Multiple Interventions Methods Group. SL is a Professor and Vice-chairman of the Department of Psychiatry and Psychotherapy, Technical University Munich, Germany; Honorary Professor of Evidence-based Psychopharmacological Treatment at University of Aarhus, Denmark; and Editor of the Cochrane Schizophrenia Group.
PRISMA flow diagram for recent meta-analyses.
PRISMA flow diagrams for previous systematic reviews and meta-analyses.
Similarity score of the review question between recent and previous review: definitions and examples.
Concordance of results: example of similar and different results.
Summary characteristics of recent and previous studies.
Detailed characteristics of recent meta-analyses.
References to all included meta-analyses and systematic reviews.