Generally, the visibility and recognition of scientific research activities benefit from increased collaboration [18]. Research collaboration plays an important role in science, policy and medicine [19, 20]. Research collaboration in the ‘Big Science’ era involves addressing important and relevant research questions that require complex multi-disciplinary teams of scientists and researchers, large-scale scientific infrastructures, budgets of an unprecedented scale, and widespread sharing of scientific knowledge and data. Meta-analysis can thus be considered a good example of ‘Big Science’ in medicine and clinical epidemiology [21], as quantitative evidence synthesis is the application, in practice, of the principle that science is cumulative [22, 23]. An obvious manifestation of this is the observed trend of 75 new randomized trials and 11 new systematic reviews published daily, with no plateau in this growth yet in sight [24]. Promoting research collaboration in evidence synthesis can therefore strengthen research activity, productivity and impact.
In general, we found a strong clustering of papers in two British journals (BMJ and The Lancet accounted for 57% of meta-analyses), in contrast to other general medical journals (for example, the NEJM represented less than 2%). We hypothesized that these differences between journals may reflect editorial policy and/or preference, with the BMJ, The Lancet and JAMA in particular being more interested in, and/or actively promoting, the publication of high-quality quantitative evidence synthesis.
Perhaps a relevant finding is that collaborative networks are expanding in multiple regions, revealing a discernible and well-established scientific community, with the most prolific authors and institutions holding a substantial number of collaborations. As might be expected, the scientific community captured by the networks is centered on a nucleus of scientists and researchers from academia, medical centers and health research institutes in western high-income countries (North America, Western Europe and Australia/Oceania). Specifically, the most intense global collaborations took place between authors and institutions from the USA, the UK and Canada. However, although these three countries lead in the number of published high-impact meta-analyses, the effort during the study period was global, with publications from authors and institutions in more than 50 different countries. Cultural links may have historically benefited some countries through alliances with nations and regions that speak the same language (as may be the case for the UK and English-speaking Commonwealth countries) and have adopted similar scientific and research structures [25]. However, there is a clear over-representation of scientists based in western high-income countries, and the limited participation of researchers based in low- and middle-income countries warrants further pragmatic action. Given that research resources and funding are often restricted, it is the responsibility of the scientific community to use the available resources as efficiently as possible when exploring research priorities that address the health needs of the population, stimulating north–south and west–east collaborations where possible. In fact, these results are consistent with those reported by Uthman et al. [26], who assessed the characteristics of the 100 most frequently cited meta-analysis-related articles.
Although the scope of our research is clearly different from that of their study, those authors also showed that the USA, the UK and Canada have taken the lead in producing highly cited papers, and that no first author from a low- or middle-income country led any of the most cited papers.
The maps of scientific partnership show that authors who are ‘leaders’, and who may thus drive collaboration, collaborate more frequently and intensely with other authors and institutions from different countries. The study also identifies highly cohesive cluster networks and provides considerable information on their structure that can be put to various purposes, such as funding agencies designing strategies for future scientific collaboration, agencies such as the World Health Organization promoting a globally coordinated agenda for perceived high-priority clinical topics, and the sharing of reliable and innovative methodologies that can be linked to world-class educational and training opportunities.
There are several possible explanations for our findings. The use of modern communication and information technologies, especially the Internet, has diminished the role of geographical and territorial boundaries in the access to and transmission of information [27]. This has enabled scientists, and particularly systematic reviewers, to internationalize their research and collaboration. Similarly, the creation of some international collaborations, including those conducting clinical trials, may have laid the groundwork for the subsequent realization of collaborative meta-analyses with a clear scientific and clinical impact. For example, according to SCI-E, the most cited meta-analysis article has received more than 2,500 citations; this was a paper by the Antithrombotic Trialists’ Collaboration [28] that helped to establish the protective effects of anti-platelet therapy (such as low-dose aspirin) for patients at high risk of occlusive vascular events. Analyses by the Oxford-based Early Breast Cancer Trialists’ Collaborative Group (EBCTCG) provided breakthrough examples of complete pictures of the evidence on the long-term effects of various therapies for early breast cancer [29–31].
Collaborative networks, an important subject of social network analysis, have been intensively studied in many scientific disciplines, including biology, physics, medicine and economics [13, 16, 32–36]. To our knowledge, no study has previously described and characterized the global collaborative patterns and networks of published meta-analyses of randomized trials. Very few studies have reviewed evidence synthesis for decision-making using a research collaboration approach [35, 36], and although they are not directly comparable with our analysis, some aspects are worthy of comment. A recent paper by Wagstaff and Culyer [35] examined four decades of health economics research, comparing authors, institutions, countries and journals in terms of publication volume; the US, the UK and Canada among countries, and Harvard University, the World Bank and MIT among institutions, emerged at the top on a variety of measures. Previously, Greenberg et al. [36] reviewed the cost-effectiveness analyses among English-language articles indexed in PubMed since 2006, and observed that the most prolific authors were affiliated with renowned US institutions (for example, Harvard University, Stanford University and Tufts University, and their affiliated hospitals).
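To make the kind of analysis discussed above concrete, the following is a minimal, hedged sketch of how a weighted co-authorship network can be derived from publication records; the author names and records are invented for illustration and do not come from the study, and weighted degree is used here only as one simple proxy for ‘leadership’:

```python
from collections import Counter
from itertools import combinations

# Hypothetical publication records: each paper lists its authors.
# (Illustrative names only; not data from the study.)
papers = [
    ["Smith J", "Jones A", "Lee K"],
    ["Smith J", "Jones A"],
    ["Lee K", "Patel R"],
    ["Patel R", "Garcia M", "Smith J"],
]

edge_weight = Counter()   # co-authorship ties, weighted by shared papers
strength = Counter()      # each author's total collaboration intensity
for authors in papers:
    for a, b in combinations(sorted(set(authors)), 2):
        edge_weight[(a, b)] += 1
        strength[a] += 1
        strength[b] += 1

# 'Leaders' approximated by weighted degree; the most intense tie is the
# author pair with the most co-authored papers.
leaders = [name for name, _ in strength.most_common()]
strongest_tie = edge_weight.most_common(1)[0][0]
```

On these toy records, `leaders[0]` is the author with the highest total collaboration intensity, and `strongest_tie` is the pair sharing the most papers; richer analyses (cohesive clusters, centrality) would typically build on the same edge list with a graph library.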
There are several limitations to our study. First, although the scientific production analysed was drawn from an exhaustive search of the literature, it is possible that the search missed some relevant articles. Furthermore, some reports were published in journals without being indexed as meta-analyses, making them difficult to identify. The analyses inevitably represent an initial investigation, and more detailed exploration is needed. In addition, we restricted our analysis to meta-analyses that considered randomized trials as the primary source of clinical evidence; there may therefore be scientists and researchers (or institutions) who do not appear because their papers are not reflected in the collaborative networks (for example, in genetic epidemiology). It would be interesting to explore whether the use of alternative sources (such as observational research, descriptive epidemiology, molecular genetics and non-clinical studies) would yield results similar to those reported here. Our analysis was also limited in scope, focusing only on original research and review articles. Undoubtedly, there are other important reports (for example, methodological [37–39] and conceptual papers [6, 40, 41]) that also merit consideration. Second, we excluded the Cochrane Library, specifically the Cochrane Database of Systematic Reviews, a major source of systematic reviews; however, to date its impact factor is smaller than that of any of the included journals. Given the dynamic nature of the field, other opportunities for further research include examining the evolution of the identified networks over time (for example, by means of longitudinal analysis), also considering papers published in multi-disciplinary journals or in journals belonging to other categories.
Additionally, we made no further inquiries or attempts to verify the quality of reporting of the meta-analyses in our sample. Previous research [42] has addressed this issue, pointing out that some meta-analyses published in leading medical journals have important methodological limitations. Third, as in many bibliometric analyses, normalizing the names of scientists, researchers and their institutions is fundamental to avoiding errors caused by variations in the name of a single author. We therefore conducted a careful manual validation of the bibliographic references. Where two or more variants of a name or surname occurred, the criterion was to check whether the different variants coincided in workplace. As discussed elsewhere [34], this procedure does not ensure complete certainty: it does not take into account possible changes in an author’s workplace, nor does it resolve cases where the same bibliographic name covers the scientific production of two or more authors, although the fact that a single field and a short chronological period were analyzed helped to minimize this kind of error. For institutional names, the main problem is that the same name frequently applies to two or more institutions, which is common for authors who work in institutes or hospitals connected to universities. In such cases, we opted to assign as many macroinstitution names as could be identified. Although this multiplied the number of institutions in the count, it was necessary to avoid losing information on macroinstitutions appearing in second place or later in the list of names. The same criterion of multiplying names was applied to institutes and other research organizations that are sometimes administratively dependent on one macroinstitution, with the result that a ‘fictitious’ inter-institutional collaboration may occasionally have been recorded.
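The workplace-based disambiguation criterion described above can be sketched in code. This is a simplified illustration under stated assumptions: the records, names and affiliations are invented, the manual validation step is omitted, and keying the merged author on the shortest name variant is a choice made here for the sketch, not the study's actual rule:

```python
from collections import defaultdict

# Hypothetical bibliographic records: (author name as printed, workplace).
records = [
    ("Garcia M", "Univ. of Oxford"),
    ("Garcia ML", "Univ. of Oxford"),   # variant of the same author
    ("Garcia M", "McMaster Univ."),     # same printed name, different author
]

def normalize(records):
    """Merge name variants that share a workplace; keep identical printed
    names apart when the workplaces differ."""
    by_workplace = defaultdict(set)
    for name, workplace in records:
        by_workplace[workplace].add(name)
    canonical = {}
    for workplace, names in by_workplace.items():
        # Within one workplace, treat all variants as one author,
        # keyed here (arbitrarily) by the shortest form of the name.
        key = min(names, key=len)
        for name in names:
            canonical[(name, workplace)] = (key, workplace)
    return canonical

canonical = normalize(records)
```

As the text notes, a scheme of this kind cannot detect an author who changes workplace, and it cannot split two authors who share both a printed name and a workplace; those cases require the manual checking described above.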