Arquivos Brasileiros de Neurocirurgia: Brazilian Neurosurgery 2016; 35(03): 179-184
DOI: 10.1055/s-0036-1583528
Original Article | Artigo Original
Thieme Publicações Ltda Rio de Janeiro, Brazil

Guidelines for Integration of Systematic Reviews using Primary Studies

Métodos para integração de revisões sistemáticas utilizando revisões publicadas e estudos primários
Ricardo Vieira Botelho
1   Spine Surgery Group, Neurosurgery Service, Hospital do Servidor Público Estadual (HSPE) de São Paulo e do Conjunto Hospitalar do Mandaqui, São Paulo, SP, Brazil
2   Instituto de Assistência Médica ao Servidor Público Estadual (IAMSPE) de São Paulo, São Paulo, SP, Brazil
Matheus Fernandes Oliveira
1   Spine Surgery Group, Neurosurgery Service, Hospital do Servidor Público Estadual (HSPE) de São Paulo e do Conjunto Hospitalar do Mandaqui, São Paulo, SP, Brazil

Address for correspondence

Matheus Fernandes Oliveira
Instituto de Assistência Médica ao Servidor Público Estadual
Av. Ibirapuera, 981 - Indianópolis, São Paulo, SP
Brazil   

Publication History

27 January 2016

28 March 2016

Publication Date:
17 May 2016 (online)

 

Abstract

Objectives Due to the growing number of published systematic reviews and the need to update existing reviews, the Agency for Healthcare Research and Quality (AHRQ) – Evidence-based Practice Centers (EPC) – published in 2008 a preliminary guide for integrating data from primary studies into data from previously published systematic reviews when developing new systematic reviews. This study is an effort to translate the American agency's guidelines in order to support review authors in our setting.

Methods A study group of experts in systematic reviews was assembled to identify the methodological needs that require clarification and guidance for authors of reviews that use existing reviews. In addition, experienced researchers able to provide guidance on conducting systematic reviews were identified and consulted: the key informants (KIs).

Results No evidence was found in the literature on which to base recommendations for integrating primary studies into existing systematic reviews. The recommendations were based on expert opinion.

Conclusion The literature lacks guidelines for integration of systematic reviews and primary studies.



Resumo

Objetivos Devido ao crescimento do número de revisões sistemáticas publicadas e a necessidade de atualização das revisões existentes, a Agency for Healthcare Research and Quality (AHRQ) – Evidence-based Practice Centers (EPC) – publicou em 2008 um guia preliminar para integrar os dados dos estudos primários aos dados das revisões sistemáticas já publicadas, no desenvolvimento de novas revisões sistemáticas. Este estudo é um esforço de tradução das orientações da agência americana, com o intuito de fornecer subsídios para as revisões em nosso meio.

Métodos Um grupo de estudos com especialistas em revisão sistemática foi reunido para identificar as eventuais necessidades metodológicas que precisariam de esclarecimento e orientação para elaboradores de revisões que utilizassem revisões existentes. Adicionalmente, foram identificados e consultados pesquisadores experientes que fornecessem orientações sobre a realização de revisões sistemáticas: informantes-chave (IC).

Resultados Não foram encontradas evidências na literatura para basear recomendações na condução da integração de ensaios primários às revisões sistemáticas existentes. As recomendações foram baseadas em opiniões de especialistas.

Conclusão A literatura carece de orientações para integração de estudos primários às revisões sistemáticas.



Introduction

Because of the growing number of published systematic reviews and the necessity to update existing reviews[1] [2], the Agency for Healthcare Research and Quality (AHRQ) – Evidence-based Practice Centers (EPC) – published in 2008 a preliminary guide to integrating data from primary studies into data from systematic reviews previously published for the development of new systematic reviews.

This subject is of great importance to the authors of systematic reviews, given the lack of related publications.[3] [4] This study is the result of an effort to translate the American agency's guidelines in order to support the conduct of reviews in our setting.



Methods

A study group of specialists in systematic reviews was assembled to identify the methodological needs that require clarification and guidance for authors of reviews that use existing reviews. These needs are[1] [2] [3] [4]:

  • definition of criteria to identify when a new review adds value to the existing reviews;

  • organization of principles to integrate primary and secondary evidence into the new systematic reviews (including models for evidence tables);

  • guidelines for the clear description of the methods used to identify, select, and decide how to better use the systematic reviews;

  • methods to minimize bias in the selection of previous reviews to use or for the integration of existing ones;

  • methods to minimize bias in the incorporation of selected parts of existing reviews;

  • qualitative and quantitative methods to summarize the bodies of evidence that include a systematic review as the only source of evidence;

  • production of an analysis tool for assessment of the methodological quality (in addition to Assessing the Methodological Quality of Systematic Reviews – AMSTAR[5]);

  • methods to establish the systematic review as the only reference or as a reference for the evidence.

Steps Used in the Production of Recommendations for the Selection and Evaluation of Systematic Reviews When Multiple Reviews Exist

The steps used for the selection were the electronic search, the use of knowledge from key-informants (not obtained in the electronic search), and the analysis of bias in existing publications.

Recommendations were created for the initial evaluation of existing systematic reviews by checking their reference lists, introduction, and discussion sections.[3] [4]



Electronic Search

A review was performed of databases containing nearly ten thousand citations on the methodology of systematic reviews and comparative effectiveness research[6] [7] [8] [9] [10] [11] [12] [13] ([Table 1]).

Table 1 Search filters for systematic reviews in the different databases (ISSG Search Filters Resource)[13]

CINAHL
  • Lee E, Dobbins M, DeCorby K, McRae L, Tirilis D, Husson H. An optimal search filter for retrieving systematic reviews and meta-analyses. BMC Medical Research Methodology 2012, 12:51 doi:10.1186/1471-2288-12-51
  • University of Texas School of Public Health. Search filters for systematic reviews and meta-analyses. Accessed 06 Dec 2013. [EBSCO]
  • Wong SS, Wilczynski NL, Haynes RB. Optimal CINAHL search strategies for identifying therapy studies and review articles. Journal of Nursing Scholarship 2006;38(2):194–9.
  • SIGN strategy [undated] [Ovid]

EMBASE
  • Lee E, Dobbins M, DeCorby K, McRae L, Tirilis D, Husson H. An optimal search filter for retrieving systematic reviews and meta-analyses. BMC Medical Research Methodology 2012, 12:51 doi:10.1186/1471-2288-12-51
  • Wilczynski NL, Haynes RB, Hedges Team. EMBASE search strategies achieved high sensitivity and specificity for retrieving methodologically sound systematic reviews. Journal of Clinical Epidemiology 2007;60(1):29–33. [Ovid]
  • BMJ Clinical Evidence strategy [undated] [Ovid]
  • CADTH strategy [2014] [Ovid]
  • SIGN strategy [undated] [Ovid]

MEDLINE
  • Lee E, Dobbins M, DeCorby K, McRae L, Tirilis D, Husson H. An optimal search filter for retrieving systematic reviews and meta-analyses. BMC Medical Research Methodology 2012, 12:51 doi:10.1186/1471-2288-12-51
  • University of Texas School of Public Health. Search filters for systematic reviews and meta-analyses. Accessed 06 Dec 2013. [Ovid]
  • National Library of Medicine: systematic reviews PubMed subset strategy [2008] [PubMed]
  • Montori VM, Wilczynski NL, Morgan D, Haynes RB. Optimal search strategies for retrieving systematic reviews from MEDLINE: analytical survey. BMJ 2005;330(7482):68. [Ovid/PubMed] Also at http://hiru.mcmaster.ca/hiru/HIRU_Hedges_MEDLINE_Strategies.aspx
  • Alberta Research Centre for Child Health Evidence: systematic reviews filter [2003] [Ovid]
  • Shojania KG, Bero LA. Taking advantage of the explosion of systematic reviews: an efficient MEDLINE search strategy. Effective Clinical Practice 2001;4(4):157–62. [PubMed]
  • White VJ, Glanville JM, Lefebvre C, Sheldon TA. A statistical approach to designing search filters to find systematic reviews: objectivity enhances accuracy. Journal of Information Science 2001;27(6):357–70. [Ovid]
  • Boynton J, Glanville J, McDaid D, Lefebvre C. Identifying systematic reviews in MEDLINE: developing an objective approach to search strategy design. Journal of Information Science 1998;24(3):137–54. [Ovid]
  • BMJ Clinical Evidence strategy [undated] [Ovid]
  • CADTH strategy [2014] [Ovid, PubMed]
  • Health Evidence Bulletins - Wales strategy [undated] [Ovid]
  • PubMed systematic reviews subset [modified February 2014] [PubMed]
  • SIGN strategy [undated] [Ovid]
  • University of Alberta strategy [undated] [Ovid]

PsycINFO
  • University of Texas School of Public Health. Search filters for systematic reviews and meta-analyses. Accessed 06 Dec 2013. [Ovid]
  • Eady AM, Wilczynski NL, Haynes RB. PsycINFO search strategies identified methodologically sound therapy studies and review articles for use by clinicians and researchers. Journal of Clinical Epidemiology 2008;61(1):34–40. [Ovid] Also at http://hiru.mcmaster.ca/hiru/HIRU_Hedges_PsycINFO_Strategies.aspx
  • CADTH strategy [2014] [Ovid]
  • Health Evidence Bulletins - Wales strategy [undated] [Ovid]

The search targeted publications providing guidance on the integration of existing systematic reviews into new systematic reviews. There were no strict eligibility criteria; all articles that discussed the integration of reviews were used.

An electronic search using the following terms in the title, abstract, or description was performed: overview; umbrella; review of review; use of secondary studies; discordant review; incorporating review; multiple systematic review; review of systematic review; relevant review; synthesis of systematic review; secondary evidence; synopsis of systematic; synopsis of review.
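For illustration only (this code is not part of the original method), the sketch below shows how the terms above could be combined into a single boolean query and submitted to PubMed through the NCBI E-utilities esearch endpoint; the endpoint and parameters belong to the public E-utilities interface, and the term list is copied from the preceding paragraph.

```python
import urllib.parse
import urllib.request

# Terms listed in the text, restricted to title/abstract and combined with OR.
TERMS = [
    "overview", "umbrella", "review of review", "use of secondary studies",
    "discordant review", "incorporating review", "multiple systematic review",
    "review of systematic review", "relevant review",
    "synthesis of systematic review", "secondary evidence",
    "synopsis of systematic", "synopsis of review",
]

def build_query(terms):
    """Combine the terms into one boolean PubMed query on title/abstract."""
    return " OR ".join(f'"{t}"[Title/Abstract]' for t in terms)

def pubmed_count(query):
    """Return the number of PubMed records matching the query (E-utilities esearch)."""
    params = urllib.parse.urlencode({"db": "pubmed", "term": query, "retmax": 0})
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"
    with urllib.request.urlopen(url) as resp:
        xml = resp.read().decode()
    # Minimal parsing: extract the first <Count> element of the XML response.
    return int(xml.split("<Count>")[1].split("</Count>")[0])

if __name__ == "__main__":
    q = build_query(TERMS)
    print(q)
    print("records:", pubmed_count(q))
```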



Use of Key-Informants

Additionally, experienced researchers who could provide guidance on the conduct of systematic reviews were identified and consulted: the key-informants (KIs). These were representatives of organizations, or regular collaborators, in the production of systematic reviews, and developers of methodologies.



Analysis of the Risk of Bias of Primary Studies and the Description of the Summary-Effect

The use of systematic reviews (when there were multiple reviews) was based on the analysis of the risk of bias (RB) of the primary studies, the description of the summary-effect, and the strength of the bodies of evidence of existing systematic reviews.[3] [4]

Considering the possible outcomes, five scenarios were defined (a decision sketch follows the list):

  1. using the review without changing or adding new studies;

  2. using the review and adding new studies;

  3. using the review with new or modified analysis;

  4. using selected parts of the review;

  5. not using the review.
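The decision sketch referenced above is purely illustrative: the appraisal questions, their ordering, and the mapping to a scenario are assumptions, not part of the AHRQ guidance.

```python
from enum import Enum

class Scenario(Enum):
    USE_AS_IS = 1              # use the review without changes or new studies
    USE_AND_ADD = 2            # use the review and add new studies
    USE_WITH_NEW_ANALYSIS = 3  # use the review with a new or modified analysis
    USE_SELECTED_PARTS = 4     # use only selected parts of the review
    DO_NOT_USE = 5             # do not use the review

def choose_scenario(relevant, acceptable_quality, new_studies_found,
                    new_studies_change_conclusions, only_parts_usable):
    """Illustrative mapping from yes/no appraisal answers to a scenario."""
    if not relevant or not acceptable_quality:
        return Scenario.DO_NOT_USE
    if only_parts_usable:
        return Scenario.USE_SELECTED_PARTS
    if not new_studies_found:
        return Scenario.USE_AS_IS
    if new_studies_change_conclusions:
        return Scenario.USE_WITH_NEW_ANALYSIS
    return Scenario.USE_AND_ADD

# Example: a relevant, good-quality review with new but confirmatory studies.
print(choose_scenario(True, True, True, False, False))  # Scenario.USE_AND_ADD
```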



Results

Development of Recommendations

All KIs reviewed the subjects. The recommendations were developed iteratively until a consensus was reached. Little evidence was found to support the recommendations. When there was consensus on minimum standards, recommendations were provided.

The literature search yielded 470 citations from the methodology group's database.[1] [2] [3] [4]



Synthesis of the Interviews with Key-Informants

Eleven KIs from various institutions that conduct systematic reviews participated. One institution chose not to include any systematic review, only primary studies, as the basis of its reviews. Most institutions chose to include systematic reviews in their reviews, but none of them had published a guide for the process. One institution mentioned using a previously published method in this area.[1]

The key subjects of consensus among the KIs were organized into three general topics.

Use of Multiple Existing Reviews

The KIs reported that it is common to use the best systematic review instead of including all existing reviews. The decision on the best systematic review was based on[1] [2] [3] [4] [11] [12] [14] [15] [16]:

  • the one that most closely matched the current review, including its scope, populations, interventions, outcomes, inclusion/exclusion criteria, and methods;

  • the assumption that if a review had only some characteristics of interest, elements of it could be incorporated, or it could be supplemented with additional studies; meanwhile, empty reviews with little evidence, regardless of their relevance and quality, would not be used by some organizations;

  • the quality and updates of the search;

  • the AMSTAR tool, the most frequently used tool to grade the quality of a review, although its limitations were recognized (a scoring sketch follows this list);

  • in some institutions, the most recent review among those of highest quality; in others, the AMSTAR score, with a cutoff of 8;

  • the reputation of the review origin, with no use of reviews with a clear bias or conflict of interest, for example, reviews coming from the industry;

  • the transparency and level of detail;

  • the analysis of the statistical methods;

  • the principle that if a review does not provide enough detail, KIs could conduct their own critical review;

  • considerations on the existence of wide or unexplained disagreement between reviews, and the inclusion of these in the new review, even if they do not formally add new evidence.
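The scoring sketch referenced above is shown below. The 11 items follow the published AMSTAR instrument,[5] and the cutoff of 8 reflects the institutional practice mentioned in the list; the yes/no simplification (AMSTAR also allows "can't answer" and "not applicable") and the helper functions are illustrative assumptions.

```python
# The 11 AMSTAR items (Shea et al., 2007)[5], each answered yes/no here.
AMSTAR_ITEMS = [
    "a priori design provided",
    "duplicate study selection and data extraction",
    "comprehensive literature search",
    "publication status not used as an inclusion criterion",
    "list of included and excluded studies provided",
    "characteristics of included studies provided",
    "scientific quality of included studies assessed and documented",
    "scientific quality used appropriately in formulating conclusions",
    "appropriate methods used to combine study findings",
    "likelihood of publication bias assessed",
    "conflict of interest stated",
]

def amstar_score(answers):
    """Count 'yes' answers; answers maps item text -> bool."""
    return sum(1 for item in AMSTAR_ITEMS if answers.get(item, False))

def usable_review(answers, cutoff=8):
    """Illustrative rule: keep the review if its score meets the cutoff of 8
    mentioned by some institutions."""
    return amstar_score(answers) >= cutoff

# Example: 'yes' to every item except the publication-bias assessment.
answers = {item: True for item in AMSTAR_ITEMS}
answers["likelihood of publication bias assessed"] = False
print(amstar_score(answers), usable_review(answers))  # 10 True
```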



Assessment of the Risk of Bias

The assessment of the RB of individual studies is a crucial stage of the review. Most KIs noted that the tools to assess the RB of existing systematic reviews were not appropriate to determine whether the RB of individual studies can be used in the current review.[1] [2] [3] [4] [11] [12] [13] [14] [15]

The most important criterion, according to the KIs, is the type of tool used to evaluate the RB and transparency of the study description. Reviews from presumably reliable sources, such as The Cochrane Collaboration (EPC Program), were preferred.[1] [2] [3] [4] [11] [12] [13] [14] [15]

The KIs agreed that an existing review should not necessarily use the same tool to evaluate the RB that would be used in the current review, but the evaluation of the former needs to be performed with an accepted and appropriate tool. In addition, the tools to evaluate the RB should be cited in the study methods.[1] [2] [3] [4]

The combination of an appropriate tool, sufficient detail, transparency, and agreement in the evaluation of the RB of the study sample is sufficient for acceptance by the KIs. However, lack of confidence and the need to repeat the RB assessment raise questions about whether the review should be redone or can be used in the current review.[1] [2] [3] [4] [11] [12] [14] [15] [16]



Minimum Criteria for the Definition of a Good Systematic Review[8] [9]

  • Existence of an adequate and explicit search

  • Well-defined eligibility criteria

  • Consideration of the quality of the included studies or assessment of the RB

  • Adequate synthesis, or attempt to synthesize the findings, quantitatively or qualitatively



Quality Evaluation

The relevance of a review should be assessed using the PICOT framework (population, intervention, comparison, outcome, time). Reviews are evaluated according to their design, quality, tools, and how up to date they are. The review closest to the current one in scope, inclusion/exclusion criteria, and methods is generally prioritized.[3] However, a review of dubious quality, or with few characteristics of interest, may be used only in part, or not at all, in the synthesis of the current review. The KIs most often used the AMSTAR tool to grade the quality of a review, although they recognized its limitations.[8] [9] [10] [11] [12]
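As a minimal illustration of a PICOT-based relevance check (the data structure, example values, and exact-match scoring are assumptions; in practice this comparison is a judgement, not a string comparison), consider:

```python
from dataclasses import dataclass

@dataclass
class PICOT:
    """PICOT elements used to compare an existing review with the current question."""
    population: str
    intervention: str
    comparison: str
    outcome: str
    time: str

def relevance_overlap(existing: PICOT, current: PICOT) -> float:
    """Illustrative score: fraction of PICOT elements that match exactly."""
    fields = ["population", "intervention", "comparison", "outcome", "time"]
    matches = sum(getattr(existing, f) == getattr(current, f) for f in fields)
    return matches / len(fields)

existing = PICOT("adults with lumbar stenosis", "decompression",
                 "conservative care", "pain", "12 months")
current = PICOT("adults with lumbar stenosis", "decompression",
                "conservative care", "function", "12 months")
print(relevance_overlap(existing, current))  # 0.8
```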



Summary Tables

Summary tables should include sufficient information, showing the overall strength of the evidence and its domains: study limitations, consistency, precision, directness, and reporting bias. The strength-of-evidence grades must be based on the underlying primary evidence, not on the number or quality of the existing systematic reviews. There are no clear rules for when a new quantitative synthesis needs to be performed or when a synthesis, qualitative or quantitative, may be reused from a previous review.[1] [2] [3] [4]
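A minimal sketch of one such summary-table row follows; the field names mirror the strength-of-evidence domains listed above, while the example comparison, outcome, and ratings are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class EvidenceRow:
    """One summary-table row for a comparison/outcome pair (illustrative structure)."""
    comparison: str
    outcome: str
    studies_from_existing_review: int
    newly_identified_studies: int
    study_limitations: str   # e.g. "low", "medium", "high"
    consistency: str         # e.g. "consistent", "inconsistent", "unknown"
    precision: str           # e.g. "precise", "imprecise"
    directness: str          # e.g. "direct", "indirect"
    reporting_bias: str      # e.g. "undetected", "suspected"
    overall_strength: str    # e.g. "high", "moderate", "low", "insufficient"

row = EvidenceRow(
    comparison="surgery vs. conservative care",
    outcome="pain at 12 months",
    studies_from_existing_review=6,
    newly_identified_studies=2,
    study_limitations="medium",
    consistency="consistent",
    precision="precise",
    directness="direct",
    reporting_bias="undetected",
    overall_strength="moderate",
)
print(row)
```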

If new studies are consistent with previous syntheses and are unlikely to change the conclusion of the review, the evaluators may choose not to carry out an updated synthesis.[1] [2] [3] [4]



Summary and Evaluation of the Strength of Evidence

The considerations on the use of the strength of evidence of an existing review are similar to those on the use of the RB.[17] [18] [19]

The existing review should have used an appropriate grading system that covers the domains of the AHRQ EPC Program Methods Guide: briefly, the strengths and limitations of the primary studies, directness, consistency, precision, and reporting bias. Compatible grading systems include the EPC SOE (Strength of Evidence) approach, GRADE (Grading of Recommendations Assessment, Development, and Evaluation), and the USPSTF (United States Preventive Services Task Force) system.[17] [18] [19]



Discussion

Summary of Recommendations

Existing reviews should be confirmed as systematic reviews using a minimum set of eligibility criteria. It is proposed that this minimum set include an explicit and appropriate search, the application of predefined criteria for study selection, assessment of the RB of the included studies, and synthesis of the results.[1] [2] [3] [4]

The quality of existing systematic reviews should be evaluated explicitly against minimum quality criteria, which include searching multiple sources, using a generally accepted tool to assess the RB, and reporting sufficient information to evaluate the strength of the body of evidence across most domains: RB, directness, consistency, precision, and reporting bias.[1] [2] [3] [4]

The assessment of the RB of existing reviews can be used when the review describes an explicit process, including the use of a tool or method that is compatible with the approach of the current review and that evaluates the key sources of potential bias.[5] [6] [7] [8]

It is suggested that the RB be re-evaluated in a sample of studies from an existing review under consideration for inclusion in a new review, to confirm agreement with the approach of the current review team.[1] [2] [3] [4]

It is recommended that reviews should at least narratively describe the findings of previous reviews, including the number and type of studies included, and the summary of the findings. It is also recommended that the newly identified studies are clearly distinguished from studies in the existing reviews when presented in the narrative and in the tables.[1] [2] [3] [4]

Summary tables should include sufficient information to support the grading of the strength of evidence, including the grading of the individual strength-of-evidence domains (study limitations, consistency, precision, directness, reporting bias).[1] [2] [3] [4]

The grading of the strength of evidence should be based on primary evidence and not on the number or quality of existing systematic reviews. Using the strength-of-evidence domains as a model (study limitations, consistency, precision, directness, reporting bias), the authors of the review should consider how the new evidence would change the estimate of effect or the grading of the strength of evidence. A new quantitative synthesis is necessary if the new studies change the evaluation of the strength of evidence, or if it is needed to obtain a more precise or more up-to-date estimate.[17] [18] [19]
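When a new quantitative synthesis is warranted, one common choice is a random-effects pooling of the estimates from the existing review together with the newly identified studies. The sketch below implements the DerSimonian-Laird estimator discussed in reference [18]; the effect sizes and variances are invented purely for illustration.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird) from per-study
    effect sizes and their variances (e.g. log odds ratios)."""
    w = [1.0 / v for v in variances]                                  # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))     # Cochran's Q
    k = len(effects)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                                # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]                    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Studies carried over from the existing review plus two new studies (invented data).
effects = [-0.40, -0.25, -0.55, -0.30, -0.10, -0.45]
variances = [0.04, 0.06, 0.09, 0.05, 0.07, 0.08]
pooled, se, tau2 = dersimonian_laird(effects, variances)
print(f"pooled effect = {pooled:.2f} ± {1.96 * se:.2f} (tau² = {tau2:.3f})")
```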

When existing reviews do not provide a grading of the strength of evidence for the comparison and outcome of interest, the strength of evidence should be evaluated for the whole body of evidence, considering the primary studies of previous reviews and the newly identified studies. When no new study is incorporated, the strength-of-evidence evaluation of an existing systematic review may be used, provided it employs an acceptable SOE grading consistent with the context of the current review. In these cases, it is suggested that the overall strength-of-evidence assessment be reviewed, domain by domain, to confirm consistency with the current review. When new studies are added to the body of evidence, the strength of evidence may require re-evaluation based on all studies.[17] [18] [19]



Conclusion

No evidence was found in the literature on which to base recommendations for the integration of existing systematic reviews and primary studies. The recommendations were based on expert opinion, and this work should be considered an outline for the integration of existing reviews and primary studies.



References

  • 1 Whitlock EP, Lin JS, Chou R, Shekelle P, Robinson KA. Using existing systematic reviews in complex systematic reviews. Ann Intern Med 2008; 148 (10) 776-782
  • 2 White CM, Ip S, McPheeters M, et al. Using Existing Systematic Reviews to Replace De Novo Processes in Conducting Comparative Effectiveness Reviews. In: Agency for Healthcare Research and Quality. Methods Guide for Comparative Effectiveness Reviews. Rockville, MD: Agency for Healthcare Research and Quality; September 2009. http://effectivehealthcare.ahrq.gov
  • 3 Robinson KA, Whitlock EP, Oneil ME, et al. Integration of Existing Systematic Reviews. AHRQ Publication No. 14–EHC016-EF. Rockville, MD: Agency for Healthcare Research and Quality; June 23, 2014. http://effectivehealthcare.ahrq.gov
  • 4 Robinson KA, Whitlock EP, Oneil ME, et al. Integration of existing systematic reviews into new reviews: identification of guidance needs. Syst Rev 2014; 3: 60
  • 5 Shea BJ, Grimshaw JM, Wells GA, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol 2007; 7 (1) 10
  • 6 Effective Health Care Scientific Resource Center. Methods Article Alert. Portland: Scientific Resource Center. www.epcsrc.org/methods_library/index.cfm. Accessed May 2014
  • 7 Filters to identify systematic reviews. InterTASC Information Specialists' Sub-Group. https://sites.google.com/a/york.ac.uk/issgsearch-filters-resource/filters-to-identifysystematic-reviews. Accessed October 2014
  • 8 Systematic reviews: CRD's guidance for undertaking reviews in health care. Centre for Reviews and Dissemination; 2009
  • 9 Higgins J, Green S. Cochrane handbook for systematic reviews of interventions. The Cochrane Collaboration; 2011
  • 10 Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med 2009; 151 (4) 264-269, W64
  • 11 Relevo R, Paynter R. Peer review of search strategies. Methods Research Reports. AHRQ Publication No. 12–EHC068-EF. Rockville, MD: Agency for Healthcare Research and Quality; June 2012
  • 12 Ahmadzai N, Newberry SJ, Maglione MA, et al. A surveillance system to assess the need for updating systematic reviews. Syst Rev 2013; 2 (1) 104
  • 13 InterTASC Information Specialists' Sub-Group (ISSG). ISSG Search Filters Resource. Available at: https://sites.google.com/a/york.ac.uk/issg-search-filters-resource/filters-to-identify-systematic-reviews
  • 14 Chung M, Newberry SJ, Ansari MT, et al. Two methods provide similar signals for the need to update systematic reviews. J Clin Epidemiol 2012; 65 (6) 660-668
  • 15 Shekelle PG, Newberry SJ, Wu H, et al. Identifying Signals for Updating Systematic Reviews: A Comparison of Two Methods. Methods Research Report. AHRQ Publication No. 11–EHC042-EF. Rockville, MD: Agency for Healthcare Research and Quality; June 2011. http://effectivehealthcare.ahrq.gov
  • 16 Cornell JE, Mulrow CD, Localio R, et al. Random-effects meta-analysis of inconsistent effects: a time for change. Ann Intern Med 2014; 160 (4) 267-270
  • 17 Berkman ND, Lohr K, Ansari MT, et al. Grading the Strength of a Body of Evidence When Assessing Health Care Interventions for the Effective Health Care Program of the Agency for Healthcare Research and Quality: An Update. AHRQ Publication No. 13(14)-EHC130-EF. Rockville, MD: Agency for Healthcare Research and Quality; November 2013. http://effectivehealthcare.ahrq.gov
  • 18 DerSimonian R, Kacker R. Random-effects model for meta-analysis of clinical trials: an update. Contemp Clin Trials 2007; 28 (2) 105-114
  • 19 Robinson KA, Chou R, Berkman ND, et al. Integrating Bodies of Evidence: Existing Systematic Reviews and Primary Studies. Methods Guide for Comparative Effectiveness Reviews (Prepared by the Scientific Resource Center under Contract No. 290–2012–00004-C). AHRQ Publication No. 15–EHC007-EF. Rockville, MD: Agency for Healthcare Research and Quality; February 2015
