
Systematic Reviews: Types of Literature Reviews

What Makes a Systematic Review Different from Other Types of Reviews?


[Table of 14 review types and associated methodologies reproduced from Grant, M. J., & Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91–108. doi:10.1111/j.1471-1842.2009.00848.x]


Systematic Reviews: Types of literature review, methods, & resources


Analytical reviews

GUIDELINES FOR HOW TO CARRY OUT AN ANALYTICAL REVIEW OF QUANTITATIVE RESEARCH

Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network. (Tracks and lists over 550 reporting guidelines for a wide range of study types, including randomised trials, systematic reviews, study protocols, diagnostic/prognostic studies, case reports, clinical practice guidelines, animal pre-clinical studies, etc.) http://www.equator-network.org/resource-centre/library-of-health-research-reporting/

When comparing therapies:

PRISMA (Guideline on how to perform and write up a systematic review and/or meta-analysis of the outcomes reported in multiple clinical trials of therapeutic interventions. PRISMA replaces the earlier QUOROM statement guidelines): Liberati, A., Altman, D., Moher, D., et al. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Medicine, 6(7): e1000100. doi:10.1371/journal.pmed.1000100

When comparing diagnostic methods:

Checklist for Artificial Intelligence in Medical Imaging (CLAIM). CLAIM is modeled after the STARD guideline and has been extended to address applications of AI in medical imaging that include classification, image reconstruction, text analysis, and workflow optimization. The elements described here should be viewed as a “best practice” to guide authors in presenting their research. Reported in Mongan, J., Moy, L., & Kahn, C. E., Jr (2020). Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers. Radiology: Artificial Intelligence, 2(2), e200029. https://doi.org/10.1148/ryai.2020200029

STAndards for the Reporting of Diagnostic accuracy studies (STARD) Statement. (Reporting guidelines for writing up a study comparing the accuracy of competing diagnostic methods)  http://www.stard-statement.org/

When evaluating clinical practice guidelines:

AGREE Research Trust (ART) (2013). Appraisal of Guidelines for Research & Evaluation (AGREE-II). (A 23-item instrument for assessing the quality of clinical practice guidelines. Used internationally for evaluating or deciding which guidelines could be recommended for use in practice or to inform health policy decisions.)

National Guideline Clearinghouse Extent of Adherence to Trustworthy Standards (NEATS) Instrument (2019). (A 15-item instrument using scales of 1-5 to evaluate a guideline's adherence to the Institute of Medicine's standard for trustworthy guidelines. It has good external validity among guideline developers and good interrater reliability across trained reviewers.)

When reviewing genetics studies:

Human genetics review reporting guidelines. Little, J., & Higgins, J. P. T. (Eds.). The HuGENet™ HuGE Review Handbook, version 1.0.

When you need to re-analyze individual participant data:

If you wish to collect, check, and re-analyze individual participant data (IPD) from clinical trials addressing a particular research question, you should follow the PRISMA-IPD guidelines as reported in Stewart, L. A., Clarke, M., Rovers, M., et al. (2015). Preferred Reporting Items for a Systematic Review and Meta-analysis of Individual Participant Data: The PRISMA-IPD Statement. JAMA, 313(16): 1657-1665. doi:10.1001/jama.2015.3656.

When comparing Randomized studies involving animals, livestock, or food:

O’Connor AM, et al. (2010).  The REFLECT statement: methods and processes of creating reporting guidelines for randomized controlled trials for livestock and food safety by modifying the CONSORT statement.  Zoonoses Public Health. 57(2):95-104. Epub 2010/01/15. doi: 10.1111/j.1863-2378.2009.01311.x. PubMed PMID: 20070653.

Sargeant JM, et al. (2010). The REFLECT Statement: Reporting Guidelines for Randomized Controlled Trials in Livestock and Food Safety: Explanation and Elaboration. Zoonoses Public Health. 57(2):105-36. Epub 2010/01/15. doi: 10.1111/j.1863-2378.2009.01312.x. PubMed PMID: 20070652.

GUIDELINES FOR HOW TO WRITE UP FOR PUBLICATION THE RESULTS OF ONE QUANTITATIVE CLINICAL TRIAL

When reporting the results of a Randomized Controlled Trial:

Consolidated Standards of Reporting Trials (CONSORT) Statement. (2010 reporting guideline for writing up a Randomized Controlled Clinical Trial). http://www.consort-statement.org . A 2022 extension addresses outcome reporting; see Butcher, N. J., et al. (2022). Guidelines for Reporting Outcomes in Trial Reports: The CONSORT-Outcomes 2022 Extension. JAMA: the Journal of the American Medical Association, 328(22), 2252–2264. https://doi.org/10.1001/jama.2022.21022

Kilkenny, C., Browne, W. J., Cuthill, I. C., Emerson, M., & Altman, D. G. (2010). Improving bioscience research reporting: The ARRIVE guidelines for reporting animal research. PLoS Biology, 8(6), e1000412–e1000412. https://doi.org/10.1371/journal.pbio.1000412 (A 20-item checklist, following the CONSORT approach, listing the information that published articles reporting research using animals should include, such as the number and specific characteristics of animals used; details of housing and husbandry; and the experimental, statistical, and analytical methods used to reduce bias.)

Narrative reviews

GUIDELINES FOR HOW TO CARRY OUT A NARRATIVE REVIEW / QUALITATIVE RESEARCH / OBSERVATIONAL STUDIES

Campbell, M., et al. (2020). Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ, 368: l6890. doi: https://doi.org/10.1136/bmj.l6890 (guideline on how to analyse evidence for a narrative review and provide a recommendation based on heterogeneous study types).

Community Preventive Services Task Force (2021).  The Methods Manual for Community Guide Systematic Reviews . (Public Health Prevention systematic review guidelines)

Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network. (Tracks and lists over 550 reporting guidelines for a wide range of study types, including observational studies, qualitative research, quality improvement studies, and economic evaluations.) http://www.equator-network.org/resource-centre/library-of-health-research-reporting/

Cochrane Qualitative & Implementation Methods Group. (2019). Training resources. Retrieved from  https://methods.cochrane.org/qi/training-resources . (Training materials for how to do a meta-synthesis, or qualitative evidence synthesis). 

Cornell University Library (2019). Planning worksheet for structured literature reviews. Retrieved 4/8/22 from  https://osf.io/tnfm7/  (offers a framework for a narrative literature review).

Green, B. N., Johnson, C. D., & Adams, A. (2006). Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine, 5(3): 101-117. DOI: 10.1016/S0899-3467(07)60142-6. This is a very good article about what to take into consideration when writing any type of narrative review.

When reviewing observational studies/qualitative research:

STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) statement. (Reporting guidelines for various types of health sciences observational studies).  http://www.strobe-statement.org 

Meta-analysis of Observational Studies in Epidemiology (MOOSE)  http://jama.jamanetwork.com/article.aspx?articleid=192614

RATS Qualitative research systematic review guidelines.  https://www.equator-network.org/reporting-guidelines/qualitative-research-review-guidelines-rats/

Methods/Guidance

Right Review: a decision-support website providing an algorithm to help reviewers choose a review methodology from among 41 knowledge synthesis methods.

The Systematic Review Toolbox , an online catalogue of tools that support various tasks within the systematic review and wider evidence synthesis process. Maintained by the UK University of York Health Economics Consortium, Newcastle University NIHR Innovation Observatory, and University of Sheffield School of Health and Related Research.

Institute of Medicine. (2011).  Finding What Works in Health Care: Standards for Systematic Reviews . Washington, DC: National Academies  (Systematic review guidelines from the Health and Medicine Division (HMD) of the U.S. National Academies of Sciences, Engineering, and Medicine (formerly called the Institute of Medicine)).

International Committee of Medical Journal Editors (2022).  Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly work in Medical Journals . Guidance on how to prepare a manuscript for submission to a Medical journal.

Cochrane Handbook for Systematic Reviews of Interventions (International Cochrane Collaboration systematic review guidelines). The various Cochrane review groups comprise around 30,000 physicians around the world working on reviews of interventions, with very detailed methods for verifying the validity of the research methods and analyses performed in screened-in randomized controlled clinical trials. Published Cochrane Reviews are typically the most exhaustive reviews of the evidence of effectiveness of a particular drug or intervention, and include a statistical meta-analysis. Similar to practice guidelines, Cochrane Reviews are periodically revised and updated.

Joanna Briggs Institute (JBI) Manual for Evidence Synthesis. (International systematic review guidelines.) Based at the University of Adelaide, South Australia, and collaborating with around 80 academic and medical entities around the world. Unlike Cochrane Reviews, which focus strictly on the efficacy of interventions, JBI offers a broader, more inclusive approach to evidence to accommodate a range of diverse questions and study designs. The JBI manual provides guidance on how to analyse and include both quantitative and qualitative research.

Cochrane Methods Support Unit, webinar recordings on methodological support questions 


Center for Reviews and Dissemination (University of York, England) (2009).  Systematic Reviews: CRD's guidance for undertaking systematic reviews in health care . (British systematic review guidelines). 

Agency for Healthcare Research and Quality (AHRQ) (2013). Methods guide for effectiveness and comparative effectiveness reviews. (U.S. comparative effectiveness review guidelines)

Hunter, K. E., et al. (2022). Searching clinical trials registers: guide for systematic reviewers.  BMJ (Clinical research ed.) ,  377 , e068791. https://doi.org/10.1136/bmj-2021-068791

Patient-Centered Outcomes Research Institute (PCORI). The PCORI Methodology Report. (A 47-item methodology checklist for U.S. patient-centered outcomes research. Established under the Patient Protection and Affordable Care Act, PCORI funds the development of guidance on the comparative effectiveness of clinical healthcare, similar to the UK National Institute for Health and Care Excellence (NICE) but without reporting cost-effectiveness QALY metrics.)

Canadian Agency for Drugs and Technologies in Health (CADTH) (2019). Grey Matters: a practical tool for searching health-related grey literature. Retrieved from https://www.cadth.ca/resources/finding-evidence/grey-matters . A checklist of North American and international online databases and websites you can use to search for unpublished reports, posters, and policy briefs on topics including general medicine and nursing, public and mental health, health technology assessment, and drug and device regulatory approvals, warnings, and advisories.

Hempel, S., Xenakis, L., & Danz, M. (2016). Systematic Reviews for Occupational Safety and Health Questions: Resources for Evidence Synthesis. Retrieved 8/15/16 from http://www.rand.org/pubs/research_reports/RR1463.html . NIOSH guidelines for how to carry out a systematic review in the occupational safety and health domain.

A good source for reporting guidelines is the NLM's Research Reporting Guidelines and Initiatives.

Grading of Recommendations Assessment, Development and Evaluation (GRADE). (An international group of academics/clinicians working to promote a common approach to grading the quality of evidence and strength of recommendations.) 

Phillips, B., Ball, C., Sackett, D., et al. (2009). Oxford Centre for Evidence Based Medicine: Levels of Evidence. Retrieved 3/20/17 from https://www.cebm.net/wp-content/uploads/2014/06/CEBM-Levels-of-Evidence-2.1.pdf . (Another commonly used criteria for grading the quality of evidence and strength of recommendations, developed in part by EBM guru David Sackett.) 

Systematic Reviews for Animals & Food  (guidelines including the REFLECT statement for carrying out a systematic review on animal health, animal welfare, food safety, livestock, and agriculture)

Grant, M. J., & Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies . Health Information & Libraries Journal, 26(2), 91-108. doi:10.1111/j.1471-1842.2009.00848.x. (Describes 14 different types of literature and systematic review, useful for thinking at the outset about what sort of literature review you want to do.)

Sutton, A., Clowes, M., Preston, L., & Booth, A. (2019). Meeting the review family: exploring review types and associated information retrieval requirements . Health information and libraries journal, 36(3), 202–222. doi:10.1111/hir.12276  (An updated look at different types of literature review, expands on the Grant & Booth 2009 article listed above).

Garrard, J. (2007).  Health Sciences Literature Review Made Easy: The Matrix Method  (2nd Ed.).   Sudbury, MA:  Jones & Bartlett Publishers. (Textbook of health sciences literature search methods).

Zilberberg, M. (2012).  Between the lines: Finding the truth in medical literature . Goshen, MA: Evimed Research Press. (Concise book on foundational concepts of evidence-based medicine).

Lang, T. (2009). The Value of Systematic Reviews as Research Activities in Medical Education . In: Lang, T. How to write, publish, & present in the health sciences : a guide for clinicians & laboratory researchers. Philadelphia : American College of Physicians.  (This book chapter has a helpful bibliography on systematic review and meta-analysis methods)

Brown, S., Martin, E., Garcia, T., Winter, M., García, A., Brown, A., Cuevas H.,  & Sumlin, L. (2013). Managing complex research datasets using electronic tools: a meta-analysis exemplar . Computers, Informatics, Nursing: CIN, 31(6), 257-265. doi:10.1097/NXN.0b013e318295e69c. (This article advocates for the programming of electronic fillable forms in Adobe Acrobat Pro to feed data into Excel or SPSS for analysis, and to use cloud based file sharing systems such as Blackboard, RefWorks, or EverNote to facilitate sharing knowledge about the decision-making process and keep data secure. Of particular note are the flowchart describing this process, and their example screening form used for the initial screening of abstracts).

Brown, S., Upchurch, S., & Acton, G. (2003). A framework for developing a coding scheme for meta-analysis . Western Journal Of Nursing Research, 25(2), 205-222. (This article describes the process of how to design a coded data extraction form and codebook, Table 1 is an example of a coded data extraction form that can then be used to program a fillable form in Adobe Acrobat or Microsoft Access).

Elamin, M. B., Flynn, D. N., Bassler, D., Briel, M., Alonso-Coello, P., Karanicolas, P., & ... Montori, V. M. (2009). Choice of data extraction tools for systematic reviews depends on resources and review complexity .  Journal Of Clinical Epidemiology ,  62 (5), 506-510. doi:10.1016/j.jclinepi.2008.10.016  (This article offers advice on how to decide what tools to use to extract data for analytical systematic reviews).

Riegelman R.   Studying a Study and Testing a Test: Reading Evidence-based Health Research , 6th Edition.  Lippincott Williams & Wilkins, 2012. (Textbook of quantitative statistical methods used in health sciences research).

Rathbone, J., Hoffmann, T., & Glasziou, P. (2015). Faster title and abstract screening? Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers. Systematic Reviews, 4: 80. doi:10.1186/s13643-015-0067-6

Guyatt, G., Rennie, D., Meade, M., & Cook, D. (2015). Users' guides to the medical literature (3rd ed.). New York: McGraw-Hill Education Medical.  (This is a foundational textbook on evidence-based medicine and of particular use to the reviewer who wants to learn about the different types of published research article e.g. "what is a case report?" and to understand what types of study design best answer what types of clinical question).

Glanville, J., Duffy, S., Mccool, R., & Varley, D. (2014). Searching ClinicalTrials.gov and the International Clinical Trials Registry Platform to inform systematic reviews: what are the optimal search approaches? Journal of the Medical Library Association : JMLA, 102(3), 177–183. https://doi.org/10.3163/1536-5050.102.3.007

Ouzzani, M., Hammady, H., Fedorowicz, Z., & Elmagarmid, A. (2016). Rayyan a web and mobile app for systematic reviews.  Systematic Reviews, 5 : 210, DOI: 10.1186/s13643-016-0384-4. http://rdcu.be/nzDM

Kwon Y, Lemieux M, McTavish J, Wathen N. (2015). Identifying and removing duplicate records from systematic review searches. J Med Libr Assoc. 103 (4): 184-8. doi: 10.3163/1536-5050.103.4.004. https://www.ncbi.nlm.nih.gov/pubmed/26512216

Bramer WM, Giustini D, de Jonge GB, Holland L, Bekhuis T. (2016). De-duplication of database search results for systematic reviews in EndNote. J Med Libr Assoc. 104 (3):240-3. doi: 10.3163/1536-5050.104.3.014. Erratum in: J Med Libr Assoc. 2017 Jan;105(1):111. https://www.ncbi.nlm.nih.gov/pubmed/27366130
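
The two articles above describe de-duplication workflows built around EndNote. As a rough illustration of the same idea outside any particular reference manager, the Python sketch below removes duplicates from a combined database export, first by DOI and then by normalized title plus year. The file name and column headings ("Title", "Year", "DOI") are assumptions to be adjusted to your own export; this is not the method of either cited article.

```python
# A rough illustration of de-duplicating records exported from several databases:
# drop exact DOI matches first, then records whose normalized title + year repeat.
# File name and column headings are assumptions; adapt them to your own export.
import csv
import re

def norm_title(title: str) -> str:
    """Lowercase and strip punctuation/whitespace so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

seen_dois, seen_keys, unique = set(), set(), []
with open("combined_search_results.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        doi = (row.get("DOI") or "").strip().lower()
        key = (norm_title(row.get("Title", "")), (row.get("Year") or "").strip())
        if doi and doi in seen_dois:
            continue  # exact DOI already kept
        if key in seen_keys:
            continue  # same normalized title and year already kept
        if doi:
            seen_dois.add(doi)
        seen_keys.add(key)
        unique.append(row)

print(f"Kept {len(unique)} unique records")
```

Automated matching like this is deliberately conservative; borderline pairs should still be checked by hand before screening.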

McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. J Clin Epidemiol. 2016;75:40–46. doi: 10.1016/j.jclinepi.2016.01.021 . PRESS is a guideline with a checklist for librarians to critically appraise the search strategy for a systematic review literature search.

Clark, JM, Sanders, S, Carter, M, Honeyman, D, Cleo, G, Auld, Y, Booth, D, Condron, P, Dalais, C, Bateup, S, Linthwaite, B, May, N, Munn, J, Ramsay, L, Rickett, K, Rutter, C, Smith, A, Sondergeld, P, Wallin, M, Jones, M & Beller, E 2020, 'Improving the translation of search strategies using the Polyglot Search Translator: a randomized controlled trial',  Journal of the Medical Library Association , vol. 108, no. 2, pp. 195-207.

Journal articles describing systematic review methods can be searched for in PubMed using this search string in the PubMed search box: sysrev_methods [sb] . 
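
The same subset filter can also be combined with a topic of your own and run programmatically against PubMed's E-utilities API. The sketch below is a minimal illustration, not an official NLM tool; the topic term is invented, and for heavier use NCBI recommends registering an API key.

```python
# Minimal sketch: query PubMed's systematic review methods subset via E-utilities.
# The topic term is only an example; combine sysrev_methods[sb] with your own terms.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
query = "sysrev_methods[sb] AND (grey literature)"  # illustrative topic term

resp = requests.get(
    ESEARCH,
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 20},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["esearchresult"]
print(result["count"], "matching records; first PMIDs:", ", ".join(result["idlist"]))
```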

Software tools for systematic reviews

  • Covidence: In 2019 GW bought a subscription to this cloud-based tool, used by the Cochrane Collaboration, for facilitating screening decisions. Register for an account.
  • NVivo for analysis of qualitative research: NVivo is used for coding interview data to identify common themes emerging from interviews with several participants. GW faculty, staff, and students may download NVivo software.
  • REDCap: Software that can be used to create survey forms for research, data collection, or data extraction. It has very detailed functionality to enable data exchange with electronic health record systems and to integrate with study workflows, such as scheduling follow-up reminders for study participants.
  • SRDR tool from AHRQ: Free, web-based, with a training environment, tutorials, and example templates of systematic review data extraction forms.
  • RevMan 5: The desktop version of the software used by Cochrane systematic review teams. It is free for academic use and can be downloaded and configured to run as stand-alone software that does not connect with the Cochrane server if you follow the instructions at https://training.cochrane.org/online-learning/core-software-cochrane-reviews/revman/revman-5-download/non-cochrane-reviews
  • Rayyan: Free, web-based tool for collecting and screening citations. It has options for multiple people to screen while masked from each other's decisions.
  • GRADEpro: Free web application to create, manage, and share summaries of research evidence (called Evidence Profiles and Summary of Findings tables) for reviews or guidelines; uses the GRADE criteria to evaluate each paper under review.
  • DistillerSR: Needs subscription. Create coded data extraction forms from templates.
  • EPPI-Reviewer: Needs subscription. Like DistillerSR, a tool for text mining, data clustering, classification, and term extraction.
  • SUMARI: Needs subscription. Qualitative data analysis.
  • Dedoose: Needs subscription. Qualitative data analysis; similar to NVivo in that it can be used to code interview transcripts and identify word co-occurrence. Cloud-based.
  • Meta-analysis software for statistical analysis of data for quantitative reviews: SPSS, SAS, and Stata are popular statistical packages that include macros for carrying out meta-analysis. Himmelfarb has SPSS on some 3rd-floor computers, and GW affiliates may download SAS to their own laptops from the Division of IT website. To perform mathematical analysis of big data sets there are statistical analysis libraries for the R programming language, available through GitHub and RStudio, but this requires advanced knowledge of R and Python and of data wrangling/cleaning.
  • PRISMA 2020 flow diagram generator: The PRISMA Statement website has a page listing example flow diagram templates and a link to software for creating PRISMA 2020 flow diagrams using R.

GW researchers may want to consider using RefWorks to manage citations and GW Box to store the full-text PDFs of review articles. You can also use online survey tools such as Qualtrics, REDCap, or SurveyMonkey to design and create your own coded fillable forms, and export the data to Excel or to one of the qualitative analysis tools listed above.
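
As a minimal sketch of what leaves such a coded form, each included study becomes one row of coded fields that Excel, SPSS, or R can read directly. The field names and values below are invented for illustration only; base a real form on a piloted codebook such as the kind described by Brown, Upchurch & Acton (2003) above.

```python
# Illustrative coded data-extraction record written to CSV for analysis software.
# Field names and the example row are hypothetical, not a prescribed codebook.
import csv

FIELDS = ["study_id", "first_author", "year", "design", "n_randomized",
          "intervention_code", "comparator_code", "outcome", "effect_size", "notes"]

records = [
    {"study_id": 1, "first_author": "Example", "year": 2020, "design": "RCT",
     "n_randomized": 120, "intervention_code": 2, "comparator_code": 0,
     "outcome": "HbA1c", "effect_size": -0.4, "notes": "pilot row"},
]

# Write to CSV so the coded data can be opened in Excel or imported into SPSS/R.
with open("extraction_data.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)
```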

Forest Plot Generators

  • RevMan 5: The desktop version of the software used by Cochrane systematic review teams (described above) also generates forest plots. It is free for academic use and can be configured to run as stand-alone software that does not connect with the Cochrane server if you follow the instructions at https://training.cochrane.org/online-learning/core-software-cochrane-reviews/revman/revman-5-download/non-cochrane-reviews.
  • Meta-Essentials: A free set of workbooks for Microsoft Excel that, based on your input, automatically produce meta-analyses including forest plots. Produced for the Erasmus University Rotterdam joint research institute.
  • Neyeloff, Fuchs & Moreira: Another set of Excel worksheets and instructions for generating a forest plot. Published as Neyeloff, J. L., Fuchs, S. C., & Moreira, L. B. (2012). Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis. BMC Research Notes, 5, 52. https://doi-org.proxygw.wrlc.org/10.1186/1756-0500-5-52
  • For R programmers: Instructions are at https://cran.r-project.org/web/packages/forestplot/vignettes/forestplot.html and the R code package can be downloaded from https://github.com/gforge/forestplot
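
To make the underlying calculation concrete, here is a rough Python sketch (using matplotlib) of what these tools automate: an inverse-variance, fixed-effect pooled estimate and a basic forest plot. The study data are made up purely for illustration; for a real review, use one of the tools above (or an established R package) rather than hand-rolled code.

```python
# Minimal illustration only: fixed-effect, inverse-variance pooling of study
# effect sizes and a basic forest plot. The study data below are invented.
import math
import matplotlib.pyplot as plt

# (study label, effect estimate, standard error) -- e.g. log odds ratios
studies = [
    ("Study A", 0.42, 0.21),
    ("Study B", 0.10, 0.15),
    ("Study C", 0.35, 0.30),
    ("Study D", -0.05, 0.18),
]

# Inverse-variance weights: w_i = 1 / se_i^2
weights = [1.0 / se ** 2 for _, _, se in studies]
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
ci = 1.96 * pooled_se  # 95% confidence interval half-width

fig, ax = plt.subplots()
ys = range(len(studies), 0, -1)  # plot studies top to bottom
for (label, est, se), y in zip(studies, ys):
    ax.errorbar(est, y, xerr=1.96 * se, fmt="s", color="black", capsize=3)
    ax.text(-1.6, y, label, va="center")
# Pooled estimate plotted on its own row at the bottom
ax.errorbar(pooled, 0, xerr=ci, fmt="D", color="blue", capsize=3)
ax.text(-1.6, 0, "Pooled (fixed effect)", va="center")
ax.axvline(0, linestyle="--", color="grey")  # line of no effect
ax.set_xlabel("Effect size (log odds ratio)")
ax.set_yticks([])
ax.set_xlim(-1.7, 1.5)
plt.tight_layout()
plt.show()
```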

Correspondence | Open access | Published: 10 January 2018

What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences

Zachary Munn (ORCID: orcid.org/0000-0002-7091-5842), Cindy Stern, Edoardo Aromataris, Craig Lockwood & Zoe Jordan

BMC Medical Research Methodology, volume 18, Article number: 5 (2018)

Abstract

Systematic reviews have been considered as the pillar on which evidence-based healthcare rests. Systematic review methodology has evolved and been modified over the years to accommodate the range of questions that may arise in the health and medical sciences. This paper explores a concept still rarely considered by novice authors and in the literature: determining the type of systematic review to undertake based on a research question or priority.

Within the framework of the evidence-based healthcare paradigm, defining the question and type of systematic review to conduct is a pivotal first step that will guide the rest of the process and has the potential to impact on other aspects of the evidence-based healthcare cycle (evidence generation, transfer and implementation). It is something that novice reviewers (and others not familiar with the range of review types available) need to take account of but frequently overlook. Our aim is to provide a typology of review types and describe key elements that need to be addressed during question development for each type.

Conclusions

In this paper a typology is proposed of various systematic review methodologies. The review types are defined and situated with regard to establishing corresponding questions and inclusion criteria. The ultimate objective is to provide clarified guidance for both novice and experienced reviewers and a unified typology with respect to review types.


Introduction

Systematic reviews are the gold standard to search for, collate, critique and summarize the best available evidence regarding a clinical question [ 1 , 2 ]. The results of systematic reviews provide the most valid evidence base to inform the development of trustworthy clinical guidelines (and their recommendations) and clinical decision making [ 2 ]. They follow a structured research process that requires rigorous methods to ensure that the results are both reliable and meaningful to end users. Systematic reviews are therefore seen as the pillar of evidence-based healthcare [ 3 , 4 , 5 , 6 ]. However, systematic review methodology, and the language used to express that methodology, have progressed significantly since their appearance in healthcare in the 1970s and 80s [ 7 , 8 ]. The diachronic nature of this evolution has caused, and continues to cause, great confusion for both novice and experienced researchers seeking to synthesise various forms of evidence. Indeed, it has already been argued that the current proliferation of review types is creating challenges for the terminology for describing such reviews [ 9 ]. These fundamental issues primarily relate to a) the types of questions being asked and b) the types of evidence used to answer those questions.

Traditionally, systematic reviews have been predominantly conducted to assess the effectiveness of health interventions by critically examining and summarizing the results of randomized controlled trials (RCTs) (using meta-analysis where feasible) [ 4 , 10 ]. However, health professionals are concerned with questions other than whether an intervention or therapy is effective, and this is reflected in the wide range of research approaches utilized in the health field to generate knowledge for practice. As such, Pearson and colleagues have argued for a pluralistic approach when considering what counts as evidence in health care; suggesting that not all questions can be answered from studies measuring effectiveness alone [ 4 , 11 ]. As the methods to conduct systematic reviews have evolved and advanced, so too has the thinking around the types of questions we want and need to answer in order to provide the best possible, evidence-based care [ 4 , 11 ].

Even though most systematic reviews conducted today still focus on questions relating to the effectiveness of medical interventions, many other review types which adhere to the principles and nomenclature of a systematic review have emerged to address the diverse information needs of healthcare professionals and policy makers. This increasing array of systematic review options may be confusing for the novice systematic reviewer, and in our experience as educators, peer reviewers and editors we find that many beginner reviewers struggle to achieve conceptual clarity when planning for a systematic review on an issue other than effectiveness. For example, reviewers regularly try to force their question into the PICO format (population, intervention, comparator and outcome), even though their question may be an issue of diagnostic test accuracy or prognosis; attempting to define all the elements of PICO can confound the remainder of the review process. The aim of this article is to propose a typology of systematic review types aligned to review questions to assist and guide the novice systematic reviewer and editors, peer-reviewers and policy makers. To our knowledge, this is the first classification of the types of systematic review foci conducted in the medical and health sciences into one central typology.

Review typology

For the purpose of this typology a systematic review is defined as a robust, reproducible, structured critical synthesis of existing research. While other approaches to the synthesis of evidence exist (including but not limited to literature reviews, evidence maps, rapid reviews, integrative reviews, scoping and umbrella reviews), this paper seeks only to include approaches that subscribe to the above definition. As such, ten different types of systematic review foci are listed below and in Table  1 . In this proposed typology, we provide the key elements for formulating a question for each of the 10 review types.

Effectiveness reviews [ 12 ]

Experiential (Qualitative) reviews [ 13 ]

Costs/Economic Evaluation reviews [ 14 ]

Prevalence and/or Incidence reviews [ 15 ]

Diagnostic Test Accuracy reviews [ 16 ]

Etiology and/or Risk reviews [ 17 ]

Expert opinion/policy reviews [ 18 ]

Psychometric reviews [ 19 ]

Prognostic reviews [ 20 ]

Methodological systematic reviews [ 21 , 22 ]

Effectiveness reviews

Systematic reviews assessing the effectiveness of an intervention or therapy are by far the most common. Essentially effectiveness is the extent to which an intervention, when used appropriately, achieves the intended effect [ 11 ]. The PICO approach (see Table 1 ) to question development is well known [ 23 ] and comprehensive guidance for these types of reviews is available [ 24 ]. Characteristics regarding the population (e.g. demographic and socioeconomic factors and setting), intervention (e.g. variations in dosage/intensity, delivery mode, and frequency/duration/timing of delivery), comparator (active or passive) and outcomes (primary and secondary including benefits and harms, how outcomes will be measured including the timing of measurement) need to be carefully considered and appropriately justified.

Experiential (qualitative) reviews

Experiential (qualitative) reviews focus on analyzing human experiences and cultural and social phenomena. Reviews including qualitative evidence may focus on the engagement between the participant and the intervention, as such a qualitative review may describe an intervention, but its question focuses on the perspective of the individuals experiencing it as part of a larger phenomenon. They can be important in exploring and explaining why interventions are or are not effective from a person-centered perspective. Similarly, this type of review can explain and explore why an intervention is not adopted in spite of evidence of its effectiveness [ 4 , 13 , 25 ]. They are important in providing information on the patient’s experience, which can enable the health professional to better understand and interact with patients. The mnemonic PICo can be used to guide question development (see Table 1 ). With qualitative evidence there is no outcome or comparator to be considered. A phenomenon of interest is the experience, event or process occurring that is under study, such as response to pain or coping with breast cancer; it differs from an intervention in its focus. Context will vary depending on the objective of the review; it may include consideration of cultural factors such as geographic location, specific racial or gender based interests, and details about the setting such as acute care, primary healthcare, or the community [ 4 , 13 , 25 ]. Reviews assessing the experience of a phenomenon may opt to use a mixed methods approach and also include quantitative data, such as that from surveys. There are reporting guidelines available for qualitative reviews, including the ‘Enhancing transparency in reporting the synthesis of qualitative research’ (ENTREQ) statement [ 26 ] and the newly proposed meta-ethnography reporting guidelines (eMERGe) [ 27 ].

Costs/economic evaluation reviews

Costs/Economics reviews assess the costs of a certain intervention, process, or procedure. In any society, resources available (including dollars) have alternative uses. In order to make the best decisions about alternative courses of action evidence is needed on the health benefits and also on the types and amount of resources needed for these courses of action. Health economic evaluations are particularly useful to inform health policy decisions attempting to achieve equality in healthcare provision to all members of society and are commonly used to justify the existence and development of health services, new health technologies and also, clinical guideline development [ 14 ]. Issues of cost and resource use may be standalone reviews or components of effectiveness reviews [ 28 ]. Cost/Economic evaluations are examples of a quantitative review and as such can follow the PICO mnemonic (see Table 1 ). Consideration should be given to whether the entire world/international population is to be considered or only a population (or sub-population) of a particular country. Details of the intervention and comparator should include the nature of services/care delivered, time period of delivery, dosage/intensity, co-interventions, and personnel undertaking delivery. Consider if outcomes will only focus on resource usage and costs of the intervention and its comparator(s) or additionally on cost-effectiveness. Context (including perspective) can also be considered in these types of questions e.g. health setting(s).

Prevalence and/or incidence reviews

Essentially prevalence or incidence reviews measure disease burden (whether at a local, national or global level). Prevalence refers to the proportion of a population who have a certain disease whereas incidence relates to how often a disease occurs. These types of reviews enable governments, policy makers, health professionals and the general population to inform the development and delivery of health services and evaluate changes and trends in diseases over time [ 15 , 29 ]. Prevalence or incidence reviews are important in the description of geographical distribution of a variable and the variation between subgroups (such as gender or socioeconomic status), and for informing health care planning and resource allocation. The CoCoPop framework can be used for reviews addressing a question relevant to prevalence or incidence (see Table 1 ). Condition refers to the variable of interest and can be a health condition, disease, symptom, event or factor. Information regarding how the condition will be measured, diagnosed or confirmed should be provided. Environmental factors can have a substantial impact on the prevalence or incidence of a condition so it is important that authors define the context or specific setting relevant to their review question [ 15 , 29 ]. The population or study subjects should be clearly defined and described in detail.

Diagnostic test accuracy reviews

Systematic reviews assessing diagnostic test accuracy provide a summary of test performance and are important for clinicians and other healthcare practitioners in order to determine the accuracy of the diagnostic tests they use or are considering using [ 16 ]. Diagnostic tests are used by clinicians to identify the presence or absence of a condition in a patient for the purpose of developing an appropriate treatment plan. Often there are several tests available for diagnosis. The mnemonic PIRD is recommended for question development for these types of systematic reviews (see Table 1 ). The population is all participants who will undergo the diagnostic test while the index test(s) is the diagnostic test whose accuracy is being investigated in the review. Consider if multiple iterations of a test exist and who carries out or interprets the test, the conditions the test is conducted under and specific details regarding how the test will be conducted. The reference test is the ‘gold standard’ test to which the results of the index test will be compared. It should be the best test currently available for the diagnosis of the condition of interest. Diagnosis of interest relates to what diagnosis is being investigated in the systematic review. This may be a disease, injury, disability or any other pathological condition [ 16 ].

Etiology and/or risk reviews

Systematic reviews of etiology and risk are important for informing healthcare planning and resource allocation, and are particularly valuable for decision makers when making decisions regarding health policy and prevention of adverse health outcomes. The common objective of many of these types of reviews is to determine whether and to what degree a relationship exists between an exposure and a health outcome. Use of the PEO mnemonic is recommended (see Table 1 ). The review question should outline the exposure, disease, symptom or health condition of interest, the population or groups at risk, as well as the context/location, the time period and the length of time where relevant [ 17 ]. The exposure of interest refers to a particular risk factor or several risk factors associated with a disease/condition of interest in a population, group or cohort who have been exposed to them. It should be clearly reported what the exposure or risk factor is, and how it may be measured/identified including the dose and nature of exposure and the duration of exposure, if relevant. Important outcomes of interest relevant to the health issue and important to key stakeholders (e.g. knowledge users, consumers, policy makers, payers etc.) must be specified. Guidance now exists for conducting these types of reviews [ 17 ]. As these reviews rely heavily on observational studies, the Meta-analysis Of Observational Studies in Epidemiology (MOOSE) [ 30 ] reporting guidelines should be referred to in addition to the PRISMA guidelines.

Expert opinion/policy reviews

Expert opinion and policy analysis systematic reviews focus on the synthesis of narrative text and/or policy. Expert opinion has a role to play in evidence-based healthcare, as it can be used to either complement empirical evidence or, in the absence of research studies, stand alone as the best available evidence. The synthesis of findings from expert opinion within the systematic review process is not well recognized in mainstream evidence-based practice. However, in the absence of research studies, the use of a transparent systematic process to identify the best available evidence drawn from text and opinion can provide practical guidance to practitioners and policy makers [ 18 ]. While a number of mnemonics have been discussed previously that can be used for opinion and text, not all elements necessarily apply to every text or opinion-based review, and use of mnemonics should be considered a guide rather than a policy. Broadly PICo can be used where I can refer to either the intervention or a phenomenon of interest (see Table 1 ). Reviewers will need to describe the population, giving attention to whether specific characteristics of interest, such as age, gender, level of education or professional qualification are important to the question. As with other types of reviews, interventions may be broad areas of practice management, or specific, singular interventions. However, reviews of text or opinion may also reflect an interest in opinions around power, politics or other aspects of health care other than direct interventions, in which case, these should be described in detail. The use of a comparator and specific outcome statement is not necessarily required for a review of text and opinion based literature. In circumstances where they are considered appropriate, the nature and characteristics of the comparator and outcomes should be described [ 18 ].

Psychometric reviews

Psychometric systematic reviews (or systematic reviews of measurement properties) are conducted to assess the quality/characteristics of health measurement instruments to determine the best tool for use (in terms of its validity, reliability, responsiveness etc.) in practice for a certain condition or factor [ 31 , 32 , 33 ]. A psychometric systematic review may be undertaken on a) the measurement properties of one measurement instrument, b) the measurement properties of the most commonly utilized measurement instruments measuring a specific construct, c) the measurement properties of all available measurement instruments to measure a specific construct in a specific population or d) the measurement properties of all available measurement instruments in a specific population that does not specify the construct to be measured. The COnsensus-based Standards for the selection of health Measurement Instruments (COSMIN) group have developed guidance for conducting these types of reviews [ 19 , 31 ]. They recommend firstly defining the type of review to be conducted as well as the construct or the name(s) of the outcome measurement instrument(s) of interest, the target population, the type of measurement instrument of interest (e.g. questionnaires, imaging tests) and the measurement properties that the review investigates (see Table 1 ).

Prognostic reviews

Prognostic research is of high value as it provides clinicians and patients with information regarding the course of a disease and potential outcomes, in addition to potentially providing useful information to deliver targeted therapy relating to specific prognostic factors [ 20 , 34 , 35 ]. Prognostic reviews are complex and methodology for these types of reviews is still under development, although a Cochrane methods group exists to support this approach [ 20 ]. Potential systematic reviewers wishing to conduct a prognostic review may be interested in determining the overall prognosis for a condition, the link between specific prognostic factors and an outcome and/or prognostic/prediction models and prognostic tests [ 20 , 34 , 35 , 36 , 37 ]. Currently there is little information available to guide the development of a well-defined review question however the Quality in Prognosis Studies (QUIPS) tool [ 34 ] and the Checklist for critical appraisal and data extraction for systematic reviews of prediction modelling studies (CHARMS Checklist) [ 38 ] have been developed to assist in this process (see Table 1 ).

Methodology systematic reviews

Systematic reviews can be conducted for methodological purposes [ 39 ], and examples of these reviews are available in the Cochrane Database [ 40 , 41 ] and elsewhere [ 21 ]. These reviews can be performed to examine any methodological issues relating to the design, conduct and review of research studies and also evidence syntheses. There is limited guidance for conducting these reviews, although there does exist an appendix in the Cochrane Handbook focusing specifically on methodological reviews [ 39 ]. They suggest following the SDMO approach where the types of studies should define all eligible study designs as well as any thresholds for inclusion (e.g. RCTs and quasi-RCTs). Types of data should detail the raw material for the methodology studies (e.g. original research submitted to biomedical journals) and the comparisons of interest should be described under types of methods (e.g. blinded peer review versus unblinded peer review) (see Table 1 ). Lastly both primary and secondary outcome measures should be listed (e.g. quality of published report) [ 39 ].

The need to establish a specific, focussed question that can be utilized to define search terms, inclusion and exclusion criteria and interpretation of data within a systematic review is an ongoing issue [ 42 ]. This paper provides an up-to-date typology for systematic reviews which reflects the current state of systematic review conduct. It is now possible that almost any question can be subjected to the process of systematic review. However, it can be daunting and difficult for the novice researcher to determine what type of review they require and how they should conceptualize and phrase their review question, inclusion criteria and the appropriate methods for analysis and synthesis [ 23 ]. Ensuring that the review question is well formed is of the utmost importance as question design has the most significant impact on the conduct of a systematic review as the subsequent inclusion criteria are drawn from the question and provide the operational framework for the review [ 23 ]. In this proposed typology, we provide the key elements for formulating a question for each of the 10 review types.

When structuring a systematic review question some of these key elements are universally agreed (such as PICO for effectiveness reviews) whilst others are more novel. For example, the use of PIRD for diagnostic reviews contrasts with other mnemonics, such as PITR [ 43 ], PPP-ICP-TR [ 44 ] or PIRATE [ 45 ]. Qualitative reviews have sometimes been guided by the mnemonic SPIDER, however this has been recommended against for guiding searching due to it not identifying papers that are relevant [ 46 ]. Variations on our guidance exist, with the additional question elements of ‘time’ (PICOT) and study types (PICOS) also existing. Reviewers are advised to consider these elements when crafting their question to determine if they are relevant for their topic. We believe that based on the guidance included in this typology, constructing a well-built question for a systematic review is a skill that can be mastered even for the novice reviewer.

Related to this discussion of a typology for systematic reviews is the issue of how to distinguish a systematic review from a literature review. When searching the literature, you may come across papers referred to as ‘systematic reviews,’ however, in reality they do not necessarily fit this description [ 21 ]. This is of significant concern given the common acceptance of systematic reviews as ‘level 1’ evidence and the best study design to inform practice. However, many of these reviews are simply literature reviews masquerading as the ideal product. It is therefore important to have a critical eye when assessing publications identified as systematic reviews. Today, the methodology of systematic reviews continues to evolve. However, there is general acceptance of certain steps being required in a systematic review of any evidence type [ 2 ] and these should be used to distinguish between a literature review and a systematic review. The following can be viewed as the defining features of a systematic review and its conduct [ 1 , 2 ]:

Clearly articulated objectives and questions to be addressed

Inclusion and exclusion criteria, stipulated a priori (in a protocol), that determine the eligibility of studies

A comprehensive search to identify all relevant studies, both published and unpublished

A process of study screening and selection

Appraisal of the quality of included studies/ papers (risk of bias) and assessment of the validity of their results/findings/ conclusions

Analysis of data extracted from the included research

Presentation and synthesis of the results/ findings extracted

Interpretation of the results, potentially establishing the certainty of the results, and implications for practice and research

Transparent reporting of the methodology and methods used to conduct the review

Prior to deciding what type of review to conduct, the reviewer should be clear that a systematic review is the best approach. A systematic review may be undertaken to confirm whether current practice is based on evidence (or not) and to address any uncertainty or variation in practice that may be occurring. Conducting a systematic review also identifies where evidence is not available and can help categorize future research in the area. Most importantly, they are used to produce statements to guide decision-making. Indications for systematic reviews:

uncover the international evidence

confirm current practice/ address any variation

identify areas for future research

investigate conflicting results

produce statements to guide decision-making

The popularity of systematic reviews has resulted in the creation of various evidence review processes over the last 30 years. These include integrative reviews, scoping reviews [ 47 ], evidence maps [ 48 ], realist syntheses [ 49 ], rapid reviews [ 50 ], umbrella reviews (systematic reviews of reviews) [ 51 ], mixed methods reviews [ 52 ], concept analyses [ 53 ] and others. Useful typologies of these diverse review types can be used as reference for researchers, policy makers and funders when discussing a review approach [ 54 , 55 ]. It was not the purpose of this article to describe and define each of these diverse evidence synthesis methods as our focus was purely on systematic review questions. Depending on the researcher, their question/s and their resources at hand, one of these approaches may be the best fit for answering a particular question.

Gough and colleagues [ 9 ] provided clarification between different review designs and methods but stopped short of providing a taxonomy of review types. The rationale for this was that in the field of evidence synthesis ‘the rate of development of new approaches to reviewing is too fast and the overlap of approaches too great for that to be helpful.’ [ 9 ] They instead provide a useful description of how reviews may differ and more importantly why this may be the case. It is also our view that evidence synthesis methodology is a rapidly developing field, and that even within the review types classified here (such as effectiveness [ 56 ] or experiential [qualitative [ 57 ]]) there may be many different subsets and complexities that need to be addressed. Essentially, the classifications listed above may be just the initial level of a much larger family tree. We believe that this typology will provide a useful contribution to efforts to sort and classify evidence review approaches and understand the need for this to be updated over time. A useful next step might be the development of a comprehensive taxonomy to further guide reviewers in making a determination about the most appropriate evidence synthesis product to undertake for a particular purpose or question.

Systematic reviews of animal studies (or preclinical systematic reviews) have not been common practice in the past (when comparing to clinical research) although this is changing [ 58 , 59 , 60 , 61 ]. Systematic reviews of these types of studies can be useful to inform the design of future experiments (both preclinical and clinical) [ 59 ] and address an important gap in translation science [ 5 , 60 ]. Guidance for these types of reviews is now emerging [ 58 , 60 , 62 , 63 , 64 ]. These review types, which are often hypothesis generating, were excluded from our typology as they are only very rarely used to answer a clinical question.

Systematic reviews are clearly an indispensable component in the chain of scientific enquiry in a much broader sense than simply to inform policy and practice and therefore ensuring that they are designed in a rigorous manner, addressing appropriate questions driven by clinical and policy needs is essential. With the ever-increasing global investment in health research it is imperative that the needs of health service providers and end users are met. It has been suggested that one way to ensure this occurs is to precede any research investment with a systematic review of existing research [ 65 ]. However, the only way that such a strategy would be effective would be if all reviews conducted are done so with due rigour.

It has been argued recently that there is mass production of reviews that are often unnecessary, misleading and conflicted with most having weak or insufficient evidence to inform decision making [ 66 ]. Indeed, asking has been identified as a core functional competency associated with obtaining and applying the best available evidence [ 67 ]. Fundamental to the tenets of evidence-based healthcare and, in particular evidence implementation, is the ability to formulate a question that is amenable to obtaining evidence and “structured thinking” around question development is critical to its success [ 67 ]. The application of evidence can be significantly hampered when existing evidence does not correspond to the situations that practitioners (or guideline developers) are faced with. Hence, determination of appropriate review types that respond to relevant clinical and policy questions is essential.

The revised JBI Model of Evidence-Based Healthcare clarifies the conceptual integration of evidence generation, synthesis, transfer and implementation, “linking how these occur with the necessarily challenging dynamics that contribute to whether translation of evidence into policy and practice is successful” [ 68 ]. Fundamental to this approach is the recognition that the process of evidence-based healthcare is not prescriptive or linear, but bi-directional, with each component having the potential to affect what occurs on either side of it. Thus, a systematic review can impact upon the types of primary research that are generated as a result of recommendations produced in the review (evidence generation) but also on the success of their uptake in policy and practice (evidence implementation). It is therefore critical for those undertaking systematic reviews to have a solid understanding of the type of review required to respond to their question.

For novice reviewers, or those unfamiliar with the broad range of review types now available, access to a typology to inform their question development is timely. The typology described above provides a framework that indicates the antecedents and determinants of undertaking a systematic review. There are several factors that may lead an author to conduct a review and these may or may not start with a clearly articulated clinical or policy question. Having a better understanding of the review types available and the questions that these reviews types lend themselves to answering is critical to the success or otherwise of a review. Given the significant resource required to undertake a review this first step is critical as it will impact upon what occurs in both evidence generation and evidence implementation. Thus, enabling novice and experienced reviewers to ensure that they are undertaking the “right” review to respond to a clinical or policy question appropriately has strategic implications from a broader evidence-based healthcare perspective.

Systematic reviews are the ideal method to rigorously collate, examine and synthesize a body of literature. Systematic review methods now exist for most questions that may arise in healthcare. This article provides a typology for systematic reviewers when deciding on their approach in addition to guidance on structuring their review question. This proposed typology provides the first known attempt to sort and classify systematic review types and their question development frameworks and therefore it can be a useful tool for researchers, policy makers and funders when deciding on an appropriate approach.

Abbreviations

  • CHARMS: CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies
  • CoCoPop: Condition, Context, Population
  • COSMIN: COnsensus-based Standards for the selection of health Measurement Instruments
  • EBHC: Evidence-based healthcare
  • eMERGe: Meta-ethnography reporting guidelines
  • ENTREQ: Enhancing transparency in reporting the synthesis of qualitative research
  • JBI: Joanna Briggs Institute
  • MOOSE: Meta-analysis Of Observational Studies in Epidemiology
  • PEO: Population, Exposure, Outcome
  • PFO: Population, Prognostic Factors (or models of interest), Outcome
  • PICO: Population, Intervention, Comparator, Outcome
  • PICo: Population, Phenomena of Interest, Context
  • PICOC: Population, Intervention, Comparator/s, Outcomes, Context
  • PIRD: Population, Index Test, Reference Test, Diagnosis of Interest
  • QUIPS: Quality in Prognosis Studies
  • RCT: Randomised controlled trial
  • SDMO: Studies, Data, Methods, Outcomes

Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ (Clinical research ed). 2009;339:b2700.


Aromataris E, Pearson A. The systematic review: an overview. AJN. Am J Nurs. 2014;114(3):53–8.


Munn Z, Porritt K, Lockwood C, Aromataris E, Pearson A. Establishing confidence in the output of qualitative research synthesis: the ConQual approach. BMC Med Res Methodol. 2014;14:108.


Pearson A. Balancing the evidence: incorporating the synthesis of qualitative data into systematic reviews. JBI Reports. 2004;2:45–64.

Pearson A, Jordan Z, Munn Z. Translational science and evidence-based healthcare: a clarification and reconceptualization of how knowledge is generated and used in healthcare. Nursing research and practice. 2012;2012:792519.

Steinberg E, Greenfield S, Mancher M, Wolman DM, Graham R. Clinical practice guidelines we can trust. National Academies Press; 2011.

Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7(9):e1000326.

Chalmers I, Hedges LV, Cooper HA. Brief history of research synthesis. Eval Health Prof. 2002;25(1):12–37.

Gough D, Thomas J, Oliver S. Clarifying differences between review designs and methods. Systematic Reviews. 2012;1:28.

Munn Z, Tufanaru C, Aromataris E. JBI's systematic reviews: data extraction and synthesis. Am J Nurs. 2014;114(7):49–54.

Pearson A, Wiechula R, Court A, Lockwood C. The JBI model of evidence-based healthcare. International Journal of Evidence-Based Healthcare. 2005;3(8):207–15.


Tufanaru C, Munn Z, Stephenson M, Aromataris E. Fixed or random effects meta-analysis? Common methodological issues in systematic reviews of effectiveness. Int J Evid Based Healthc. 2015;13(3):196–207.

Lockwood C, Munn Z, Porritt K. Qualitative research synthesis: methodological guidance for systematic reviewers utilizing meta-aggregation. Int J Evid Based Healthc. 2015;13(3):179–87.

Gomersall JS, Jadotte YT, Xue Y, Lockwood S, Riddle D, Preda A. Conducting systematic reviews of economic evaluations. Int J Evid Based Healthc. 2015;13(3):170–8.

Munn Z, Moola S, Lisy K, Riitano D, Tufanaru C. Methodological guidance for systematic reviews of observational epidemiological studies reporting prevalence and cumulative incidence data. Int J Evid Based Healthc. 2015;13(3):147–53.

Campbell JM, Klugar M, Ding S, et al. Diagnostic test accuracy: methods for systematic review and meta-analysis. Int J Evid Based Healthc. 2015;13(3):154–62.

Moola S, Munn Z, Sears K, et al. Conducting systematic reviews of association (etiology): the Joanna Briggs Institute's approach. Int J Evid Based Healthc. 2015;13(3):163–9.

McArthur A, Klugarova J, Yan H, Florescu S. Innovations in the systematic review of text and opinion. Int J Evid Based Healthc. 2015;13(3):188–95.

Mokkink LB, Terwee CB, Patrick DL, et al. The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study. Qual Life Res. 2010;19(4):539–49.

Dretzke J, Ensor J, Bayliss S, et al. Methodological issues and recommendations for systematic reviews of prognostic studies: an example from cardiovascular disease. Systematic reviews. 2014;3(1):1.

Campbell JM, Kavanagh S, Kurmis R, Munn Z. Systematic reviews in burns care: poor quality and getting worse. J Burn Care Res. Published ahead of print.

France EF, Ring N, Thomas R, Noyes J, Maxwell M, Jepson R. A methodological systematic review of what's wrong with meta-ethnography reporting. BMC Med Res Methodol. 2014;14(1):1.

Stern C, Jordan Z, McArthur A. Developing the review question and inclusion criteria. Am J Nurs. 2014;114(4):53–6.

Higgins J, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011.

Hannes K, Lockwood C, Pearson A. A comparative analysis of three online appraisal instruments' ability to assess validity in qualitative research. Qual Health Res. 2010;20(12):1736–43.

Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012;12:181.

France EF, Ring N, Noyes J, et al. Protocol-developing meta-ethnography reporting guidelines (eMERGe). BMC Med Res Methodol. 2015;15:103.


Shemilt I, Mugford M, Byford S, et al. Chapter 15: Incorporating economics evidence. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. The Cochrane Collaboration; 2011.


Munn Z, Moola S, Riitano D, Lisy K. The development of a critical appraisal tool for use in systematic reviews addressing questions of prevalence. Int J Health Policy Manag. 2014;3(3):123–8.

Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis of observational studies in epidemiology (MOOSE) group. JAMA. 2000;283(15):2008–12.


COSMIN: COnsensus-based Standards for the selection of health Measurement INstruments. Systematic reviews of measurement properties. [cited 8th December 2016]; Available from: http://www.cosmin.nl/Systematic%20reviews%20of%20measurement%20properties.html

Terwee CB, de Vet HCW, Prinsen CAC, Mokkink LB. Protocol for systematic reviews of measurement properties. COSMIN: Knowledgecenter Measurement Instruments; 2011.

Mokkink LB, Terwee CB, Stratford PW, et al. Evaluation of the methodological quality of systematic reviews of health status measurement instruments. Qual Life Res. 2009;18(3):313–33.

Hayden JA, van der Windt DA, Cartwright JL, Côté P, Bombardier C. Assessing bias in studies of prognostic factors. Ann Intern Med. 2013;158(4):280–6.

The Cochrane Collaboration. Cochrane Methods Prognosis. 2016 [cited 7th December 2016]; Available from: http://methods.cochrane.org/prognosis/scope-our-work .

Rector TS, Taylor BC, Wilt TJ. Chapter 12: systematic review of prognostic tests. J Gen Intern Med. 2012;27(Suppl 1):S94–101.

Peters S, Johnston V, Hines S, Ross M, Coppieters M. Prognostic factors for return-to-work following surgery for carpal tunnel syndrome: a systematic review. JBI Database of Systematic Reviews and Implementation Reports. 2016;14(9):135–216.

Moons KG, de Groot JA, Bouwmeester W, et al. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: the CHARMS checklist. PLoS Med. 2014;11(10):e1001744.

Clarke M, Oxman AD, Paulsen E, Higgins JP, Green S. Appendix A: Guide to the contents of a Cochrane Methodology protocol and review. In: Higgins JP, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1.0. The Cochrane Collaboration; 2011.

Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev. 2007;2:MR000016.

Djulbegovic B, Kumar A, Glasziou PP, et al. New treatments compared to established treatments in randomized trials. Cochrane Database Syst Rev. 2012;10:MR000024.


Thoma A, Eaves FF 3rd. What is wrong with systematic reviews and meta-analyses: if you want the right answer, ask the right question! Aesthet Surg J. 2016;36(10):1198–201.

Deeks JJ, Wisniewski S, Davenport C. Chapter 4: Guide to the contents of a Cochrane diagnostic test accuracy protocol. In: Deeks JJ, Bossuyt PM, Gatsonis C, editors. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy. The Cochrane Collaboration; 2013.

Bae J-M. An overview of systematic reviews of diagnostic tests accuracy. Epidemiology and Health. 2014;36:e2014016.

White S, Schultz T, Enuameh YAK. Synthesizing evidence of diagnostic accuracy. Lippincott Williams & Wilkins; 2011.

Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res. 2014;14:579.

Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. International journal of evidence-based healthcare. 2015;13(3):141–6.

Hetrick SE, Parker AG, Callahan P, Purcell R. Evidence mapping: illustrating an emerging methodology to improve evidence-based practice in youth mental health. J Eval Clin Pract. 2010;16(6):1025–30.

Wong G, Greenhalgh T, Westhorp G, Pawson R. Development of methodological guidance, publication standards and training materials for realist and meta-narrative reviews: the RAMESES (Realist And Meta-narrative Evidence Syntheses - Evolving Standards) project. Southampton (UK): Queen's Printer and Controller of HMSO; 2014.

Munn Z, Lockwood C, Moola S. The development and use of evidence summaries for point of care information systems: a streamlined rapid review approach. Worldviews Evid-Based Nurs. 2015;12(3):131–8.

Aromataris E, Fernandez R, Godfrey CM, Holly C, Khalil H, Tungpunkom P. Summarizing systematic reviews: methodological development, conduct and reporting of an umbrella review approach. Int J Evid Based Healthc. 2015;13(3):132–40.

Pearson A, White H, Bath-Hextall F, Salmond S, Apostolo J, Kirkpatrick P. A mixed-methods approach to systematic reviews. Int J Evid Based Healthc. 2015;13(3):121–31.

Draper P. A critique of concept analysis. J Adv Nurs. 2014;70(6):1207–8.

Grant MJ, Booth A. A Typology of reviews: an analysis of 14 review types and associated methodologies. Health Inf Libr J. 2009;26(2):91–108.

Tricco AC, Tetzlaff J, Moher D. The art and science of knowledge synthesis. J Clin Epidemiol. 2011;64(1):11–20.

Bender R. A practical taxonomy proposal for systematic reviews of therapeutic interventions. 21st Cochrane Colloquium Quebec, Canada 2013.

Kastner M, Tricco AC, Soobiah C, et al. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review. BMC Med Res Methodol. 2012;12:114.

Leenaars M, Hooijmans CR, van Veggel N, et al. A step-by-step guide to systematically identify all relevant animal studies. Lab Anim. 2012;46(1):24–31.

de Vries RB, Wever KE, Avey MT, Stephens ML, Sena ES, Leenaars M. The usefulness of systematic reviews of animal experiments for the design of preclinical and clinical studies. ILAR J. 2014;55(3):427–37.

Hooijmans CR, Ritskes-Hoitinga M. Progress in using systematic reviews of animal studies to improve translational research. PLoS Med. 2013;10(7):e1001482.

Mignini LE, Khan KS. Methodological quality of systematic reviews of animal studies: a survey of reviews of basic research. BMC Med Res Methodol. 2006;6:10.

van Luijk J, Bakker B, Rovers MM, Ritskes-Hoitinga M, de Vries RB, Leenaars M. Systematic reviews of animal studies; missing link in translational research? PLoS One. 2014;9(3):e89981.

Vesterinen HM, Sena ES, Egan KJ, et al. Meta-analysis of data from animal studies: a practical guide. J Neurosci Methods. 2014;221:92–102.

CAMARADES. Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies. 2014 [cited 8th December 2016]; Available from: http://www.dcn.ed.ac.uk/camarades/default.htm#about

Moher D, Glasziou P, Chalmers I, et al. Increasing value and reducing waste in biomedical research: who's listening? Lancet. 2016;387(10027):1573–86.

Ioannidis J. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. The Milbank Quarterly. 2016;94(3):485–514.

Rousseau DM, Gunia BC. Evidence-based practice: the psychology of EBP implementation. Annu Rev Psychol. 2016;67:667–92.

Jordan Z, Lockwood C, Aromataris E, Munn Z. The updated JBI model for evidence-based healthcare. The Joanna Briggs Institute; 2016.

Cooney GM, Dwan K, Greig CA, et al. Exercise for depression. Cochrane Database Syst Rev. 2013;9:CD004366.

Munn Z, Jordan Z. The patient experience of high technology medical imaging: a systematic review of the qualitative evidence. JBI Libr. Syst Rev. 2011;9(19):631–78.

de Verteuil R, Tan WS. Self-monitoring of blood glucose in type 2 diabetes mellitus: systematic review of economic evidence. JBI Libr. Syst Rev. 2010;8(7):302–42.

Munn Z, Moola S, Lisy K, Riitano D, Murphy F. Claustrophobia in magnetic resonance imaging: a systematic review and meta-analysis. Radiography. 2015;21(2):e59–63.

Hakonsen SJ, Pedersen PU, Bath-Hextall F, Kirkpatrick P. Diagnostic test accuracy of nutritional tools used to identify undernutrition in patients with colorectal cancer: a systematic review. JBI Database System Rev Implement Rep. 2015;13(4):141–87.

Cancer Australia. Risk factors for lung cancer: a systematic review. Surry Hills, NSW; 2014.

McArthur A, Lockwood C. Maternal mortality in Cambodia, Thailand, Malaysia and Sri Lanka: a systematic review of local and national policy and practice initiatives. JBI Libr Syst Rev. 2010;8(16 Suppl):1–10.

Peek K. Muscle strength in adults with spinal cord injury: a systematic review of manual muscle testing, isokinetic and hand held dynamometry clinimetrics. JBI Database of Systematic Reviews and Implementation Reports. 2014;12(5):349–429.

Hayden JA, Tougas ME, Riley R, Iles R, Pincus T. Individual recovery expectations and prognosis of outcomes in non-specific low back pain: prognostic factor exemplar review. Cochrane Libr. 2014. http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD011284/full .


Acknowledgements

No funding was provided for this paper.

Availability of data and materials

Not applicable

Author information

Authors and Affiliations

The Joanna Briggs Institute, The University of Adelaide, 55 King William Road, North Adelaide, South Australia, 5005, Australia

Zachary Munn, Cindy Stern, Edoardo Aromataris, Craig Lockwood & Zoe Jordan


Contributions

ZM: Led the development of this paper and conceptualised the idea for a systematic review typology. Provided final approval for submission. CS: Contributed conceptually to the paper and wrote sections of the paper. Provided final approval for submission. EA: Contributed conceptually to the paper and reviewed and provided feedback on all drafts. Provided final approval for submission. CL: Contributed conceptually to the paper and reviewed and provided feedback on all drafts. Provided final approval for submission. ZJ: Contributed conceptually to the paper and reviewed and provided feedback on all drafts. Provided approval and encouragement for the work to proceed. Provided final approval for submission.

Corresponding author

Correspondence to Zachary Munn .

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

All the authors are members of the Joanna Briggs Institute, an evidence-based healthcare research institute which provides formal guidance regarding evidence synthesis, transfer and implementation.

The authors have no other competing interests to declare.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

Reprints and permissions

About this article

Cite this article.

Munn, Z., Stern, C., Aromataris, E. et al. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol 18 , 5 (2018). https://doi.org/10.1186/s12874-017-0468-4


Received : 29 May 2017

Accepted : 28 December 2017

Published : 10 January 2018

DOI : https://doi.org/10.1186/s12874-017-0468-4


Keywords: Systematic reviews; Question development




Systematic Reviews

  • Introduction to Systematic Reviews

Traditional Systematic Reviews | Meta-analyses | Scoping Reviews | Rapid Reviews | Umbrella Reviews | Selecting a Review Type

  • Reading Systematic Reviews
  • Resources for Conducting Systematic Reviews
  • Getting Help with Systematic Reviews from the Library
  • History of Systematic Reviews
  • Acknowledgements

Systematic reviews are a family of review types that includes traditional systematic reviews, meta-analyses, scoping reviews, rapid reviews, and umbrella reviews.

This page provides information about the most common types of systematic reviews, important resources and references for conducting them, and some tools for choosing the best type for your research question.

Additional Information

  • A typology of reviews: an analysis of 14 review types and associated methodologies This classic article is a valuable reference point for those commissioning, conducting, supporting or interpreting reviews.
  • Traditional Systematic Reviews follow a rigorous and well-defined methodology to identify, select, and critically appraise relevant research articles on a specific topic and within a specified population of subjects
  • The primary goal of this type of study is to comprehensively find the empirical data available on a topic, identify relevant articles, synthesize their findings and draw evidence-based conclusions to answer a clinical question
  • Cochrane Handbook for Systematic Reviews of Interventions The Cochrane Handbook for Systematic Reviews of Interventions provides direction on the standard methods involved in conducting a systematic review. It is the official guide to the process involved in preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.
  • JBI Manual for Evidence Synthesis The JBI Manual for Evidence Synthesis is designed to provide authors with a comprehensive guide to conducting JBI systematic reviews. It describes in detail the process of planning, undertaking and writing up a systematic review using JBI methods. The JBI Manual for Evidence Synthesis should be used in conjunction with the support and tutorials offered at the JBI SUMARI Knowledge Base.

These are some places where protocols for systematic reviews might be published.

  • PROSPERO: International prospective register of systematic reviews PROSPERO is an international database of prospectively registered systematic reviews in health and social care, welfare, public health, education, crime, justice, and international development, where there is a health related outcome. Key features from the review protocol are recorded and maintained as a permanent record. PROSPERO aims to provide a comprehensive listing of systematic reviews registered at inception to help avoid duplication and reduce opportunity for reporting bias by enabling comparison of the completed review with what was planned in the protocol.
  • Guidance Notes for Registering A Systematic Review Protocol with PROSPERO
  • OSF Registries Open Science Framework (OSF) Registries is an open network of study registrations and pre-registrations. It can be used to pre-register a systematic review protocol. Note that OSF pre-registrations are not reviewed.
  • OSF Preregistration Initiative This page explains the motivation behind preregistrations and best practices for doing so.
  • Protocols.io A secure platform for developing and sharing reproducible methods, including protocols for systematic reviews.
  • PRISMA 2020 Statement The PRISMA 2020 Statement was published in 2021. It consists of a checklist and a flow diagram, and is intended to be accompanied by the PRISMA 2020 Explanation and Elaboration document.
  • Meta-analysis is a statistical method that can be applied during a systematic review to extract and combine the results from multiple studies
  • This pooling of data from compatible studies increases the statistical power and precision of the conclusions made by the systematic review
  • Systematic reviews can be done without doing a meta-analysis, but a meta-analysis must be done in connection with a systematic review
  • Scoping reviews identify the existing literature available on a topic to help identify key concepts, the type and amount of evidence available on a subject, and what research gaps exist in a specific area of study
  • They are particularly useful when a research question is broad and the goal is to provide an understanding of the available evidence on a topic rather than providing a focused synthesis on a narrow question
  • JBI Manual Chapter 11: Scoping Reviews
  • Updated methodological guidance for the conduct of scoping reviews The objective of this paper is to describe the updated methodological guidance for conducting a JBI scoping review, with a focus on new updates to the approach and development of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (the PRISMA-ScR).
  • Steps for Conducting a Scoping Review This article in the Journal of Graduate Medical Education provides a comprehensive yet brief overview of the scoping review process.

Note: Protocols for scoping reviews can be published in all the same places as traditional systematic reviews except PROSPERO.

  • Best practice guidance and reporting items for the development of scoping review protocols The purpose of this article is to clearly describe how to develop a robust and detailed scoping review protocol, which is the first stage of the scoping review process. This paper provides detailed guidance and a checklist for prospective authors to ensure that their protocols adequately inform both the conduct of the ensuing review and their readership.
  • PRISMA for Scoping Reviews (PRISMA-ScR) The PRISMA extension for scoping reviews was published in 2018. The checklist contains 20 essential reporting items and 2 optional items to include when completing a scoping review. Scoping reviews serve to synthesize evidence and assess the scope of literature on a topic. Among other objectives, scoping reviews help determine whether a systematic review of the literature is warranted.
  • Touro College: What is a Scoping Review? This page describes scoping reviews, including their limitations, alternate names, and how they differ from traditional systematic reviews.
  • What are scoping reviews? Providing a formal definition of scoping reviews as a type of evidence synthesis This article from JBI Evidence Synthesis provides a thorough definition of what scoping reviews are and what they are for.
  • The role of scoping reviews in reducing research waste This article from the Journal of Clinical Epidemiology looks at how scoping reviews can reduce research waste.
  • Rapid reviews streamline the systematic review process by omitting certain steps or accelerating the timeline
  • They are useful when there is a need for timely evidence synthesis, such as in response to questions concerning an urgent policy or clinical situation such as the COVID-19 pandemic
  • Rapid Review Guidebook This document provides guidance on the process of conducting rapid reviews to use evidence to inform policy and program decision making.
  • Rapid reviews to strengthen health policy and systems: a practical guide This guide from the World Health Organization offers guidance on how to plan, conduct, and promote the use of rapid reviews to strengthen health policy and systems decisions. The Guide explores different approaches and methods for expedited synthesis of health policy and systems research, and highlights key challenges for this emerging field, including its application in low- and middle-income countries. It touches on the utility of rapid reviews of health systems evidence, and gives insights into applied methods to swiftly conduct knowledge syntheses and foster their use in policy and practice.
  • Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews The Cochrane Rapid Reviews Methods Group offers new, interim guidance to support the conduct of Rapid Reviews.
  • Touro College: What is a Rapid Review? This page describes rapid reviews, including their limitations, alternate names, and how they differ from traditional systematic reviews.
  • Umbrella reviews synthesize evidence from multiple systematic reviews and meta-analyses on a specific topic
  • They provide a next-generation level of evidence synthesis, analyzing evidence taken from multiple systematic reviews to offer a broader perspective on a given subject
  • JBI Manual Chapter 10: Umbrella reviews
  • Preferred Reporting Items for Overviews of Reviews (PRIOR) Overviews of reviews (i.e., overviews) compile information from multiple systematic reviews to provide a single synthesis of relevant evidence for healthcare decision-making. Despite their increasing popularity, there are currently no systematically developed reporting guidelines for overviews. This is problematic because the reporting of published overviews varies considerably and is often substandard. Our objective is to use explicit, systematic, and transparent methods to develop an evidence-based and agreement-based reporting guideline for overviews of reviews of healthcare interventions (PRIOR, Preferred Reporting Items for Overviews of Reviews).
  • Touro College: What is an Overview of Reviews? This page describes umbrella reviews, including their limitations, alternate names, and how they differ from traditional systematic reviews.
  • Cornell University Systematic Review Decision Tree This decision tree is designed to assist researchers in choosing a review type.
  • Right Review This tool is designed to provide guidance and supporting material to reviewers on methods for the conduct and reporting of knowledge synthesis.
  • << Previous: Introduction to Systematic Reviews
  • Next: Reading Systematic Reviews >>
  • Last Updated: Mar 27, 2024 4:35 PM
  • URL: https://libguides.ohsu.edu/systematic-reviews
  • University of Wisconsin–Madison
  • Research Guides
  • Evidence Synthesis, Systematic Review Services
  • Literature Review Types, Taxonomies

Evidence Synthesis, Systematic Review Services : Literature Review Types, Taxonomies

  • Develop a Protocol
  • Develop Your Research Question
  • Select Databases
  • Select Gray Literature Sources
  • Write a Search Strategy
  • Manage Your Search Process
  • Register Your Protocol
  • Citation Management
  • Article Screening
  • Risk of Bias Assessment
  • Synthesize, Map, or Describe the Results
  • Find Guidance by Discipline
  • Manage Your Research Data
  • Browse Evidence Portals by Discipline
  • Automate the Process, Tools & Technologies
  • Additional Resources

Choosing a Literature Review Methodology

Growing interest in evidence-based practice has driven an increase in review methodologies. Your choice of review methodology (or literature review type) will be informed by the intent (purpose, function) of your research project and the time and resources of your team. 

  • Decision Tree (What Type of Review is Right for You?) Developed by Cornell University Library staff, this "decision-tree" guides the user to a handful of review guides given time and intent.

Types of Evidence Synthesis*

Critical Review - Aims to demonstrate writer has extensively researched literature and critically evaluated its quality. Goes beyond mere description to include degree of analysis and conceptual innovation. Typically results in hypothesis or model.

Mapping Review (Systematic Map) - Map out and categorize existing literature from which to commission further reviews and/or primary research by identifying gaps in research literature.

Meta-Analysis - Technique that statistically combines the results of quantitative studies to provide a more precise effect of the results.

Mixed Studies Review (Mixed Methods Review) - Refers to any combination of methods where one significant component is a literature review (usually systematic). Within a review context it refers to a combination of review approaches for example combining quantitative with qualitative research or outcome with process studies.

Narrative (Literature) Review - Generic term: published materials that provide examination of recent or current literature. Can cover wide range of subjects at various levels of completeness and comprehensiveness.

Overview - Generic term: summary of the [medical] literature that attempts to survey the literature and describe its characteristics.

Qualitative Systematic Review or Qualitative Evidence Synthesis - Method for integrating or comparing the findings from qualitative studies. It looks for ‘themes’ or ‘constructs’ that lie in or across individual qualitative studies.

Rapid Review - Assessment of what is already known about a policy or practice issue, by using systematic review methods to search and critically appraise existing research.

Scoping Review or Evidence Map - Preliminary assessment of potential size and scope of available research literature. Aims to identify nature and extent of research.

State-of-the-art Review - Tend to address more current matters in contrast to other combined retrospective and current approaches. May offer new perspectives on issue or point out area for further research.

Systematic Review - Seeks to systematically search for, appraise and synthesise research evidence, often adhering to guidelines on the conduct of a review. (An emerging subset includes Living Reviews or Living Systematic Reviews - a systematic review which is continually updated, incorporating relevant new evidence as it becomes available.)

Systematic Search and Review - Combines strengths of critical review with a comprehensive search process. Typically addresses broad questions to produce ‘best evidence synthesis.’

Umbrella Review - Specifically refers to review compiling evidence from multiple reviews into one accessible and usable document. Focuses on broad condition or problem for which there are competing interventions and highlights reviews that address these interventions and their results.

*These definitions are in Grant & Booth's "A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies."

Literature Review Types/Typologies, Taxonomies

Grant, M. J., and A. Booth. "A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies."  Health Information and Libraries Journal  26.2 (2009): 91-108.  DOI: 10.1111/j.1471-1842.2009.00848.x  Link

Munn, Zachary, et al. “Systematic Review or Scoping Review? Guidance for Authors When Choosing between a Systematic or Scoping Review Approach.” BMC Medical Research Methodology , vol. 18, no. 1, Nov. 2018, p. 143. DOI: 10.1186/s12874-018-0611-x. Link

Sutton, A., et al. "Meeting the Review Family: Exploring Review Types and Associated Information Retrieval Requirements."  Health Information and Libraries Journal  36.3 (2019): 202-22.  DOI: 10.1111/hir.12276  Link

Dissertation Research (Capstones, Theses)

While a full systematic review may not necessarily satisfy criteria for dissertation research in a discipline (as independent scholarship), the methods described in this guide, from developing a protocol to searching and synthesizing the literature, can help to ensure that your review of the literature is comprehensive, transparent, and reproducible.

In this context, your review type, then, may be better described as a 'structured literature review', a 'systematized search and review', or a 'systematized scoping (or integrative or mapping) review'.

  • Planning Worksheet for Structured, Systematized Literature Reviews
  • << Previous: Home
  • Next: The Systematic Review Process >>
  • Last Updated: Apr 12, 2024 11:56 AM
  • URL: https://researchguides.library.wisc.edu/literature_review


Literature Reviews: Systematic, Scoping, Integrative

Characteristics of review types, choosing a review type.

Steps in a Systematic/Scoping/Integrative Review

Confirming the Knowledge Gap

Standards and reporting guidelines.

  • Creating a Search Strategy
  • Limits and Inclusion Criteria
  • Review Protocols
  • Elements of a Systematic Review
  • Review Tools and Applications

Additional Resources

  • JBI Manual for Evidence Synthesis Process outlines for multiple types of evidence reviews. A great source to cite in your methods section.
  • PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses. Guidance for authors and peer reviewers on best practices in reporting for evidence reviews. Includes extensions for different types of reviews, including scoping reviews

Not sure which review type is right for your research question? Check out the links below for help choosing.

  • What Review is Right for You? v2 14 page PDF survey to help you determine which review type might work best for you. Very thorough!
  • Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. Munn, Z., Peters, M. D. J., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18(1), 143. https://doi.org/10.1186/s12874-018-0611-x

Creating an effective search for a systematic review means walking a tightrope between comprehensiveness and manageability. You want to include all of the studies that could possibly be relevant while simultaneously getting your search results down to a number of articles that you can realistically review.

The Basic Process:

  • Develop a research question.
  • Consult with a librarian for help with steps 3-16.
  • Search databases to see if a review has already been published on your topic. 
  • Search protocol repositories to see if a review on your topic is planned.
  • Select the type of review (systematic, scoping, integrative). This will require running some test searches to see if there is enough literature to merit a systematic review.
  • Select databases.
  • Select grey literature sources (if applicable). Read this article for helpful suggestions on systematically searching for grey literature.
  • Formulate an initial search for one of your selected databases. For tips on searching, consult our Mastering Keyword Searching guide.
  • Review results from the initial search, scanning titles, abstracts, and subject headings to identify additional terms. You may also want to consult the subject heading thesaurus available within each database.
  • Run the search again. Continue to add relevant terms and adjust the scope of your question (which may require eliminating terms) until results are a reasonable size and predominantly relevant to your question.
  • When you think your search is nearly final, gather 2-3 of your most relevant articles and test their reference lists against your search results. If your search contains a large majority of the relevant articles from those reference lists, you have your final search (remember, no search is ever perfect, and you will nearly always add articles you find via reference lists, recommendations, etc. that did not appear in your search results).
  • Translate your search to your other databases. Generally your keywords will stay the same across databases, but you will most likely need to adjust your subject headings, because those can vary from database to database.
  • Ask a librarian to peer review your search. Try the PRESS checklist.
  • Develop inclusion and exclusion criteria in preparation for reviewing articles (this step may come later for a scoping review)
  • Write a protocol.
  • Document each database search (a simple way to record this is sketched after this list), including:
  • Database name (be as specific as possible, including the full title, especially for databases that are offered in multiple formats, e.g. Ovid MEDLINE) and dates of coverage.
  • Search terms, indicating which are subject headings and which are keywords, plus any limits on where the keywords were searched, if relevant.
  • Database limits/filters applied to the results (e.g. publication year, language, etc.).
  • Date of your search.
  • Number of results.
  • Begin title/abstract screening. Two reviewers for each item is best practice.
  • Begin full-text review of the articles still remaining. Again, two reviewers for each item is best practice. 
  • Conduct citation mining for the articles that make it through full-text review. That means looking at reference lists (backwards searching) and searching for articles that cite back to the article you have (forward searching). You might also consider setting aside all of the systematic and scoping reviews that came up with your search (generally those are excluded from your review) and mining their reference lists as well. Repeat the title/abstract screening and full-text reviews for the articles identified through citation mining.
  • Check all articles that made it through the full-text review for retractions, and remove any articles that have been retracted. 
  • If doing a systematic review, conduct a critical appraisal of included articles (aka Risk of Bias Assessment).
  • Extract data from the included articles. Helpful guidance includes:
  • Covidence. (2024). A practical guide to data extraction for intervention systematic reviews .
  • Pollock et al. (2023). Recommendations for the extraction, analysis, and presentation of results in scoping reviews . JBI Evidence Synthesis, 21 (3), 520-532. 
  • Prepare your manuscript (for information on writing each section of your manuscript, see our guide to Writing up Your Own Research ). 
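To make the search-documentation step above concrete, here is a minimal Python sketch of one way a team might record each database search and total the results for a PRISMA-style flow count. All field names, search strings, and numbers are illustrative assumptions, not prescribed by any guideline; real counts would come from your own searches and reference manager.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SearchRecord:
    """One documented search in one database."""
    database: str      # full database name, e.g. "Ovid MEDLINE"
    search_terms: str  # subject headings and keywords exactly as run
    limits: str        # filters applied (publication years, language, etc.)
    run_date: date     # date the search was executed
    n_results: int     # number of records retrieved

# Illustrative records only; substitute your own searches and counts.
searches = [
    SearchRecord("Ovid MEDLINE", "exp Diabetes Mellitus/ AND self-monitoring.mp.",
                 "2010-2024; English", date(2024, 4, 1), 512),
    SearchRecord("Embase", "'diabetes mellitus'/exp AND 'self monitoring'",
                 "2010-2024; English", date(2024, 4, 1), 847),
]

total_identified = sum(s.n_results for s in searches)
duplicates_removed = 215  # illustrative figure from a reference manager
records_screened = total_identified - duplicates_removed

print(f"Records identified through database searching: {total_identified}")
print(f"Records remaining after de-duplication: {records_screened}")
```

A plain spreadsheet serves the same purpose; the point is simply that every element listed above is captured for each database at the time the search is run.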

Before beginning your review, you need to be sure that no other reviews with the same research question as yours already exist or are in progress. This is easily done by searching research databases and protocol registries.

Databases to Check


Protocol Registries

  • PROSPERO PROSPERO accepts registrations for systematic reviews, rapid reviews and umbrella reviews. PROSPERO does not accept scoping reviews or literature scans. Sibling PROSPERO sites register systematic reviews of human studies and systematic reviews of animal studies.


It is a good idea to familiarize yourself with the standards and reporting guidelines for the type of review you are planning to do. Following the standards/guidelines as you plan and execute your review will help ensure that you minimize bias and maximize your chances of getting published.

Systematic Reviews

  • PRISMA Statement The PRISMA statement is currently the standards and guidelines of choice for systematic reviews. At the link you will find the statement as well as explanations of each element, a checklist of elements, a PRISMA flow diagram template, and more.


  • IOM Finding What Works in Healthcare: Standards for Systematic Reviews Standards from the National Academy of Medicine and National Academies Press. The free download link is all the way over on the right.

Scoping Reviews

  • PRISMA-SCR Extension for Scoping Reviews A PRISMA statement, explanation and checklist specifically for scoping reviews.
  • Updated methodological guidance for the conduct of scoping reviews While the PRISMA-SCR provides reporting guidelines, these guidelines from JBI are for how to actually plan and do your review. This is the explanation for updates made to the manual linked below. You can skip this article and go directly to the JBI manual if you prefer.

Integrative Reviews

  • Whittemore, R., & Knafl, K. (2005). The integrative review: updated methodology. Journal of Advanced Nursing, 52 (5), 546–553. This article is the current standard for designing an integrative review. https://doi.org/10.1111/j.1365-2648.2005.03621.x
  • Tavares de Souza, M., Dias da Silva, M., & de Carvalho, R. (2010). Integrative review: What is it? How to do it? Einstein, 8 (1). https://doi.org/10.1590/s1679-45082010rw1134
  • Next: Creating a Search Strategy >>
  • Last Updated: Apr 11, 2024 9:51 AM
  • URL: https://libguides.massgeneral.org/reviews



Types of Literature Review — A Guide for Researchers

Sumalatha G


Researchers often face challenges when choosing the appropriate type of literature review for their study. Regardless of the research design and the topic of the research problem, they encounter numerous questions, including:

  • What is the right type of literature review my study demands?

  • How do we gather the data?
  • How to conduct one?
  • How reliable are the review findings?
  • How do we employ them in our research? And the list goes on.

If you are grappling with questions like these, this article can help. Read through this guide to get a thorough understanding of the different types of literature reviews, their step-by-step methodologies, and their respective pros and cons.

Let's start from scratch.

What is a Literature Review?

A literature review provides a comprehensive overview of existing knowledge on a particular topic and is essential to any research project. Researchers employ various types of literature reviews based on their research goals and methodologies. The review process involves assembling, critically evaluating, and synthesizing existing scientific publications relevant to the research question at hand. It serves multiple purposes, including identifying gaps in existing literature, providing theoretical background, and supporting the rationale for a research study.

What is the importance of a Literature review in research?

Literature review in research serves several key purposes, including:

  • Background of the study: Provides proper context for the research. It helps researchers understand the historical development, theoretical perspectives, and key debates related to their research topic.
  • Identification of research gaps: By reviewing existing literature, researchers can identify gaps or inconsistencies in knowledge, paving the way for new research questions and hypotheses relevant to their study.
  • Theoretical framework development: Facilitates the development of theoretical frameworks by cultivating diverse perspectives and empirical findings. It helps researchers refine their conceptualizations and theoretical models.
  • Methodological guidance: Offers methodological guidance by highlighting the documented research methods and techniques used in previous studies. It assists researchers in selecting appropriate research designs, data collection methods, and analytical tools.
  • Quality assurance and upholding academic integrity: Conducting a thorough literature review demonstrates the rigor and scholarly integrity of the research. It ensures that researchers are aware of relevant studies and can accurately attribute ideas and findings to their original sources.

Types of Literature Review

A literature review plays a crucial role throughout the research process, from providing the background of the study to supporting research dissemination and the synthesis of the latest findings in academia.

However, not all types of literature reviews are the same; they vary in terms of methodology, approach, and purpose. Let's have a look at the various types of literature reviews to gain a deeper understanding of their applications.

1. Narrative Literature Review

A narrative literature review, also known as a traditional literature review, involves analyzing and summarizing existing literature without adhering to a structured methodology. It typically provides a descriptive overview of key concepts, theories, and relevant findings of the research topic.

Unlike other types of literature reviews, narrative reviews follow a more traditional approach, emphasizing the interpretation and discussion of research findings rather than strict adherence to methodological criteria. They help researchers explore diverse perspectives and insights on the research topic and act as preliminary work for further investigation.

Steps to Conduct a Narrative Literature Review

[Figure: Steps of writing a narrative review. Source: https://www.researchgate.net/figure/Steps-of-writing-a-narrative-review_fig1_354466408 ]

Define the research question or topic:

The first step in conducting a narrative literature review is to clearly define the research question or topic of interest. Defining the scope and purpose of the review means asking: What specific aspect of the topic do you want to explore? What are the main objectives of the research? Refine your research question based on the specific area you want to explore.

Conduct a thorough literature search

Once the research question is defined, you can conduct a comprehensive literature search. Explore and use relevant databases and search engines like SciSpace Discover to identify credible, pertinent scholarly articles and publications.

Select relevant studies

Before choosing the right set of studies, it’s vital to determine inclusion (studies that should possess the required factors) and exclusion criteria for the literature and then carefully select papers. For example — Which studies or sources will be included based on relevance, quality, and publication date?

*Important (applies to all the reviews): Inclusion criteria are the factors a study must meet to be included (for example: only peer-reviewed articles published between 2022 and 2023). Exclusion criteria are the factors that rule a study out of your review (for example: irrelevant papers, preprints, or articles not written in English).
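To make these criteria concrete, here is a minimal, hypothetical Python sketch of screening a handful of records against criteria like those in the example above (peer-reviewed, published 2022-2023, written in English, not a preprint). The record fields and values are invented purely for illustration.

```python
records = [
    {"title": "Study A", "year": 2022, "peer_reviewed": True,  "language": "en", "preprint": False},
    {"title": "Study B", "year": 2019, "peer_reviewed": True,  "language": "en", "preprint": False},
    {"title": "Study C", "year": 2023, "peer_reviewed": False, "language": "en", "preprint": True},
]

def meets_criteria(record):
    # Inclusion: peer-reviewed, published 2022-2023, written in English.
    # Exclusion: preprints.
    return (record["peer_reviewed"]
            and 2022 <= record["year"] <= 2023
            and record["language"] == "en"
            and not record["preprint"])

included = [r["title"] for r in records if meets_criteria(r)]
excluded = [r["title"] for r in records if not meets_criteria(r)]
print("Included:", included)  # ['Study A']
print("Excluded:", excluded)  # ['Study B', 'Study C']
```

In practice the same logic is usually applied by human screeners in a screening tool rather than in code, but writing the criteria out this explicitly is a useful test of whether they are unambiguous.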

Critically analyze the literature

Once the relevant studies are shortlisted, evaluate the methodology, findings, and limitations of each source and jot down key themes, patterns, and contradictions. You can use efficient AI tools to conduct a thorough literature review and analyze all the required information.

Synthesize and integrate the findings

Now you can weave the reviewed studies together, underscoring significant findings, emerging frameworks, contrasting viewpoints, and identified knowledge gaps.

Discussion and conclusion

This is an important step before crafting a narrative review: summarize the main findings of the review and discuss their implications for the relevant field. For example: What are the practical implications for practitioners? What directions should future research take?

Write a cohesive narrative review

Organize the review into coherent sections and structure your review logically, guiding the reader through the research landscape and offering valuable insights. Use clear and concise language to convey key points effectively.

Structure of Narrative Literature Review

A well-structured, narrative analysis or literature review typically includes the following components:

  • Introduction: Provides an overview of the topic, the objectives of the study, and the rationale for the review.
  • Background: Highlights relevant background information and establishes the context for the review.
  • Main Body: Organizes the literature into thematic sections or categories, discussing key findings, methodologies, and theoretical frameworks.
  • Discussion: Analyzes and synthesizes the findings of the reviewed studies, highlighting similarities, differences, and any gaps in the literature.
  • Conclusion: Summarizes the main findings of the review, identifies implications for future research, and offers concluding remarks.

Pros and Cons of Narrative Literature Review

Pros:

  • Flexibility in methodology; does not rely on structured methodologies
  • Follows a traditional approach and provides valuable, contextualized insights
  • Suitable for exploring complex or interdisciplinary topics, for example climate change and human health, or cybersecurity and privacy in the digital age

Cons:

  • Subjectivity in data selection and interpretation
  • Potential for bias in the review process
  • Lack of rigor compared to systematic reviews

Example of Well-Executed Narrative Literature Reviews

Paper title:  Examining Moral Injury in Clinical Practice: A Narrative Literature Review

[Figure: Example of a narrative literature review. Source: SciSpace]

While narrative reviews offer flexibility, academic integrity remains paramount. Ensure proper citation of all sources and maintain a transparent and factual approach throughout your narrative review.

2. Systematic Review

A systematic literature review is a comprehensive type of literature review that follows a structured approach to assembling, analyzing, and synthesizing existing research relevant to a particular topic or question. It involves clearly defined criteria for identifying and selecting studies, as well as rigorous methods for evaluating their quality.

It plays a prominent role in evidence-based practice and decision-making across various domains, including healthcare, the social sciences, and education. By systematically investigating the available literature, researchers can identify gaps in knowledge, evaluate the strength of the evidence, and point to future research directions.

Steps to Conduct Systematic Reviews

[Figure: Steps of a systematic literature review. Source: https://www.researchgate.net/figure/Steps-of-Systematic-Literature-Review_fig1_321422320 ]

Here are the key steps involved in conducting a systematic literature review:

Formulate a clear and focused research question

Clearly define the research question or objective of the review. It helps to centralize the literature search strategy and determine inclusion criteria for relevant studies.

Develop a thorough literature search strategy

Design a comprehensive search strategy to identify relevant studies. This involves searching scientific databases and relevant journals, seeking suggestions from domain experts, and reviewing the reference lists of relevant review articles.

Screening and selecting studies

Employ predefined inclusion and exclusion criteria to systematically screen the identified studies. This screening process also typically involves multiple reviewers independently assessing the eligibility of each study.
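Because this step typically involves two independent reviewers, many teams also report inter-rater agreement on screening decisions. The sketch below is a hedged illustration (not part of the original steps) of computing Cohen's kappa from two reviewers' include/exclude decisions; the decisions themselves are made up.

```python
def cohens_kappa(reviewer_a, reviewer_b):
    """Cohen's kappa for two equal-length lists of binary include/exclude decisions."""
    assert reviewer_a and len(reviewer_a) == len(reviewer_b)
    n = len(reviewer_a)
    observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n
    p_a_include = sum(reviewer_a) / n
    p_b_include = sum(reviewer_b) / n
    expected = p_a_include * p_b_include + (1 - p_a_include) * (1 - p_b_include)
    return (observed - expected) / (1 - expected)

# 1 = include, 0 = exclude; illustrative decisions for eight records.
reviewer_a = [1, 1, 0, 0, 1, 0, 1, 0]
reviewer_b = [1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(reviewer_a, reviewer_b), 2))  # 0.5 for these made-up decisions
```

Disagreements (reflected in a kappa well below 1) are normally resolved by discussion or by a third reviewer before moving on to full-text review.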

Data extraction

Extract key information from selected studies using standardized forms or protocols. It includes study characteristics, methods, results, and conclusions.

Critical appraisal

Evaluate the methodological quality and potential biases of the included studies. Various appraisal tools and criteria can be applied, depending on the study design and research questions.

Data synthesis

Analyze and synthesize the findings from individual studies to draw overall conclusions, identify overarching patterns, and explore heterogeneity among studies.
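When the included studies are similar enough, this synthesis step can be quantitative. The sketch below pools some invented effect estimates with a fixed-effect inverse-variance model and reports Cochran's Q and I² as a simple heterogeneity check. It is a minimal illustration under assumed data; real reviews would normally use a dedicated meta-analysis package and consider random-effects models as well.

```python
import math

# Hypothetical study effects (e.g., log odds ratios) with their standard errors.
effects  = [0.10, 0.55, 0.05, 0.40]
std_errs = [0.10, 0.12, 0.11, 0.09]

weights = [1 / se ** 2 for se in std_errs]  # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q and I-squared as a rough heterogeneity summary.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.1f}%")
```

A high I² (as in this made-up example) signals substantial heterogeneity and would prompt subgroup analysis, a random-effects model, or a purely narrative synthesis.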

Interpretation and conclusion

Interpret the findings in relation to the research question, considering the strengths and limitations of the evidence. Draw conclusions and implications for further research.

The final step: report writing

Craft a detailed report of the systematic literature review adhering to the established guidelines of PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). This ensures transparency and reproducibility of the review process.

By following these steps, a systematic literature review aims to provide a comprehensive and unbiased summary of existing evidence, help make informed decisions, and advance knowledge in the respective domain or field.

Structure of a systematic literature review

A well-structured systematic literature review typically consists of the following sections:

  • Introduction: Provides background information on the research topic, outlines the review objectives, and enunciates the scope of the study.
  • Methodology: Describes the literature search strategy, selection criteria, data extraction process, and the methods used for data synthesis and analysis.
  • Results: Presents the review findings, including a summary of the incorporated studies and their key findings.
  • Discussion: Interprets the findings in light of the review objectives, discusses their implications, and identifies limitations or promising areas for future research.
  • Conclusion: Summarizes the main review findings and provides recommendations based on the evidence presented.
*Important (applies to all the reviews): Remember, the specific structure of your literature review may vary depending on your topic, research question, and intended audience. However, adhering to a clear and logical structure ensures your review effectively analyzes and synthesizes knowledge and contributes valuable insights for readers.

Pros and Cons of Systematic Literature Review

Pros:

  • Adopts a rigorous and transparent methodology
  • Minimizes bias and enhances the reliability of the findings
  • Provides evidence-based insights

Cons:

  • Time- and resource-intensive
  • Highly dependent on the quality of the available literature (the search strategy must be accurate)
  • Potential for publication bias

Example of Well-Executed Systematic Literature Review

Paper title: Systematic Reviews: Understanding the Best Evidence For Clinical Decision-making in Health Care: Pros and Cons.


Read this detailed article on how to use AI tools to conduct a systematic review for your research!

3. Scoping Literature Review

A scoping literature review is a type of literature review that adopts an iterative approach to systematically map the existing literature on a particular topic or research area. It involves identifying, selecting, and synthesizing relevant papers to provide an overview of the size and scope of available evidence. Scoping reviews are broader in scope than systematic reviews and include a diverse range of study designs and methodologies; they are especially common in health services research.

The main purpose of a scoping literature review is to examine the extent, range, and nature of existing studies on a topic, thereby identifying gaps in research, inconsistencies, and areas for further investigation. Additionally, scoping reviews can help researchers identify suitable methodologies and formulate clinical recommendations. They can also serve as frameworks for future systematic reviews or primary research studies.

Scoping reviews are primarily focused on —

  • Emerging or evolving topics — where the research landscape is still growing or budding. Example — Whole Systems Approaches to Diet and Healthy Weight: A Scoping Review of Reviews .
  • Broad and complex topics : With a vast amount of existing literature.
  • Scenarios where a systematic review is not feasible: Due to limited resources or time constraints.

Steps to Conduct a Scoping Literature Review

Although scoping reviews are not as rigorous as systematic reviews, they still follow a structured approach. Here are the steps:

Identify the research question: Define the broad topic you want to explore.

Identify Relevant Studies: Conduct a comprehensive search of relevant literature using appropriate databases, keywords, and search strategies.

Select studies to be included in the review: Based on the inclusion and exclusion criteria, determine the appropriate studies to be included in the review.

Data extraction and charting: Extract relevant information from selected studies, such as year, author, study characteristics, main results, key findings, and methodological approaches. The exact data charted will vary depending on the research question.

Collate, summarize, and report the results: Analyze and summarize the extracted data to identify key themes and trends. Then, present the findings of the scoping review in a clear and structured manner, following established guidelines and frameworks .

Structure of a Scoping Literature Review

A scoping literature review typically follows a structured format similar to a systematic review. It includes the following sections:

  • Introduction: Introduce the research topic and objectives of the review, providing the historical context and rationale for the study.
  • Methods : Describe the methods used to conduct the review, including search strategies, study selection criteria, and data extraction procedures.
  • Results: Present the findings of the review, including key themes, concepts, and patterns identified in the literature review.
  • Discussion: Examine the implications of the findings, including strengths, limitations, and areas for further examination.
  • Conclusion: Recapitulate the main findings of the review and their implications for future research, policy, or practice.

Pros and Cons of Scoping Literature Review

  • Provides a comprehensive overview of existing literature
  • Helps to identify gaps and areas for further research
  • Suitable for exploring broad or complex research questions
  • Doesn’t provide the depth of analysis offered by systematic reviews
  • Subject to researcher bias in study selection and data extraction
  • Requires careful consideration of literature search strategies and inclusion criteria to ensure comprehensiveness and validity.

In short, a scoping review helps map the literature on developing or emerging topics and identify gaps. It is often undertaken as a step before another type of review, such as a systematic review, essentially acting as a precursor to other literature reviews.

Example of a Well-Executed Scoping Literature Review

Paper title: Health Chatbots in Africa Literature: A Scoping Review

Check out the key differences between Systematic and Scoping reviews — Evaluating literature review: systematic vs. scoping reviews

4. Integrative Literature Review

Integrative Literature Review (ILR) is a type of literature review that proposes a distinctive way to analyze and synthesize existing literature on a specific topic, providing a thorough understanding of research and identifying potential gaps for future research.

Unlike a systematic review, which emphasizes quantitative studies and follows strict inclusion criteria, an ILR takes a more flexible approach. It goes beyond simply summarizing findings: it critically analyzes, integrates, and interprets research from various methodologies (qualitative, quantitative, mixed methods) to provide a deeper understanding of the research landscape. ILRs provide a holistic and systematic overview of existing research, integrating findings from various methodologies. ILRs are ideal for exploring intricate research issues, examining manifold perspectives, and developing new research questions.

Steps to Conduct an Integrative Literature Review

  • Identify the research question: Clearly define the research question or topic of interest as formulating a clear and focused research question is critical to leading the entire review process.
  • Literature search strategy: Employ systematic search techniques to locate relevant literature across various databases and sources.
  • Evaluate the quality of the included studies : Critically assess the methodology, rigor, and validity of each study by applying inclusion and exclusion criteria to filter and select studies aligned with the research objectives.
  • Data Extraction: Extract relevant data from selected studies using a structured approach.
  • Synthesize the findings : Thoroughly analyze the selected literature, identify key themes, and synthesize findings to derive noteworthy insights.
  • Critical appraisal: Critically evaluate the quality and validity of the included studies using appraisal tools appropriate to each study design (for example, tools published in BMC Medical Research Methodology).
  • Interpret and present your findings: Discuss the purpose and implications of your analysis, spotlighting key insights and limitations. Organize and present the findings coherently and systematically.

Structure of an Integrative Literature Review

  • Introduction : Provide an overview of the research topic and the purpose of the integrative review.
  • Methods: Describe the opted literature search strategy, selection criteria, and data extraction process.
  • Results: Present the synthesized findings, including key themes, patterns, and contradictions.
  • Discussion: Interpret the findings about the research question, emphasizing implications for theory, practice, and prospective research.
  • Conclusion: Summarize the main findings, limitations, and contributions of the integrative review.

Pros and Cons of Integrative Literature Review

  • Informs evidence-based practice and policy for the relevant stakeholders of the research.
  • Contributes to theory development and methodological advancement, especially in the healthcare arena.
  • Integrates diverse perspectives and findings
  • Time-consuming process due to the extensive literature search and synthesis
  • Requires advanced analytical and critical thinking skills
  • Potential for bias in study selection and interpretation
  • The quality of included studies may vary, affecting the validity of the review

Example of Integrative Literature Reviews

Paper Title: An Integrative Literature Review: The Dual Impact of Technological Tools on Health and Technostress Among Older Workers

5. Rapid Literature Review

A Rapid Literature Review (RLR) is the fastest type of literature review; it uses a streamlined approach to synthesizing the literature, offering a quicker and more focused alternative to traditional systematic reviews. Although it draws on the same research methods, it often simplifies or omits specific steps to expedite the process. This allows researchers to gain valuable insights into current research trends and identify key findings within a shorter timeframe, often a few days to a few weeks, whereas traditional reviews may take months or even years to complete.

When to Consider a Rapid Literature Review?

  • When time constraints demand a swift summary of existing research
  • For emerging topics where the latest literature requires quick evaluation
  • To report pilot studies or preliminary research before embarking on a comprehensive systematic review

Steps to Conduct a Rapid Literature Review

  • Define the research question or topic of interest. A well-defined question guides the search process and helps researchers focus on relevant studies.
  • Determine key databases and sources of relevant literature to ensure comprehensive coverage.
  • Develop literature search strategies using appropriate keywords and filters to fetch a pool of potential scientific articles.
  • Screen search results based on predefined inclusion and exclusion criteria.
  • Extract and summarize relevant information from the selected studies.
  • Synthesize findings to identify key themes, patterns, or gaps in the literature.
  • Prepare a concise report or a summary of the RLR findings.

Structure of a Rapid Literature Review

An effective structure of an RLR typically includes the following sections:

  • Introduction: Briefly introduce the research topic and objectives of the RLR.
  • Methodology: Describe the search strategy, inclusion and exclusion criteria, and data extraction process.
  • Results: Present a summary of the findings, including key themes or patterns identified.
  • Discussion: Interpret the findings, discuss implications, and highlight any limitations or areas for further research
  • Conclusion: Summarize the key findings and their implications for practice or future research

Pros and Cons of Rapid Literature Review

  • RLRs can be completed quickly, enabling timely decision-making
  • RLRs are cost-effective, requiring fewer resources than traditional literature reviews
  • RLRs offer good accessibility, providing stakeholders with prompt access to synthesized evidence
  • RLRs are flexible and can be adapted to various research contexts and objectives
  • RLR reports are not as in-depth as systematic reviews and do not provide comprehensive coverage of the literature
  • The expedited process makes RLRs more susceptible to bias, increasing the chance of overlooking relevant studies or introducing bias in the selection process
  • Due to time constraints, RLR findings may not be as robust as those of systematic reviews

Example of a Well-Executed Rapid Literature Review

Paper Title: What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature

A Summary of Literature Review Types

Tools and resources for conducting different types of literature reviews

Online scientific databases

Platforms such as SciSpace , PubMed , Scopus , Elsevier , and Web of Science provide access to a vast array of scholarly literature, facilitating the search and data retrieval process.

Reference management software

Tools like SciSpace Citation Generator , EndNote, Zotero , and Mendeley assist researchers in organizing, annotating, and citing relevant literature, streamlining the review process altogether.

Automate Literature Review with AI tools

Automate the literature review process by using tools like SciSpace literature review which helps you compare and contrast multiple papers all on one screen in an easy-to-read matrix format. You can effortlessly analyze and interpret the review findings tailored to your study. It also supports the review in 75+ languages, making it more manageable even for non-English speakers.

It goes without saying that literature reviews play a pivotal role in academic research: they identify current trends and provide insights that pave the way for future research endeavors. Each type of literature review has its own strengths and limitations, making it suitable for different research designs and contexts. Whether conducting a narrative review, systematic review, scoping review, integrative review, or rapid literature review, researchers must carefully consider the objectives, resources, and the nature of the research topic.

If you’re currently working on a literature review and still adopting a manual and traditional approach, switch to the automated AI literature review workspace and transform your traditional literature review into a rapid one by extracting all the latest and relevant data for your research!

There you go!

Frequently Asked Questions

How is a narrative review different from a systematic review?

Narrative reviews give a general overview of a topic based on the author's knowledge. They may lack clear criteria and can be biased. On the other hand, systematic reviews aim to answer specific research questions by following strict methods. They're thorough but time-consuming.

How is a systematic review different from a meta-analysis?

A systematic review collects and analyzes existing research to provide an overview of a topic, while a meta-analysis statistically combines data from multiple studies to draw conclusions about the overall effect of an intervention or relationship between variables.

How is a systematic review different from a scoping review?

A systematic review thoroughly analyzes existing research on a specific topic using strict methods. In contrast, a scoping review offers a broader overview of the literature without evaluating individual studies in depth.

How is a systematic review different from a rapid review?

A systematic review thoroughly examines existing research using a rigorous process, while a rapid review provides a quicker summary of evidence, often by simplifying some of the systematic review steps to meet shorter timelines.

How is a systematic review different from an integrative review?

A systematic review carefully examines many studies on a single topic using specific guidelines. Conversely, an integrative review blends various types of research to provide a more comprehensive understanding of the topic.


University Libraries      University of Nevada, Reno

  • Skill Guides
  • Subject Guides

Systematic, Scoping, and Other Literature Reviews: Overview

  • Project Planning

What Is a Systematic Review?

Regular literature reviews are simply summaries of the literature on a particular topic. A systematic review, however, is a comprehensive literature review conducted to answer a specific research question. Authors of a systematic review aim to find, code, appraise, and synthesize all of the previous research on their question in an unbiased and well-documented manner. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) outline the minimum amount of information that needs to be reported at the conclusion of a systematic review project. 

Other types of what are known as "evidence syntheses," such as scoping, rapid, and integrative reviews, have varying methodologies. While systematic reviews originated with and continue to be a popular publication type in medicine and other health sciences fields, more and more researchers in other disciplines are choosing to conduct evidence syntheses. 

This guide will walk you through the major steps of a systematic review and point you to key resources including Covidence, a systematic review project management tool. For help with systematic reviews and other major literature review projects, please send us an email at  [email protected] .

Getting Help with Reviews

Organizations such as the Institute of Medicine recommend that you consult a librarian when conducting a systematic review. Librarians at the University of Nevada, Reno can help you:

  • Understand best practices for conducting systematic reviews and other evidence syntheses in your discipline
  • Choose and formulate a research question
  • Decide which review type (e.g., systematic, scoping, rapid, etc.) is the best fit for your project
  • Determine what to include and where to register a systematic review protocol
  • Select search terms and develop a search strategy
  • Identify databases and platforms to search
  • Find the full text of articles and other sources
  • Become familiar with free citation management (e.g., EndNote, Zotero)
  • Get access to and help using Covidence, a systematic review project management tool

Doing a Systematic Review

  • Plan - This is the project planning stage. You and your team will need to develop a good research question, determine the type of review you will conduct (systematic, scoping, rapid, etc.), and establish the inclusion and exclusion criteria (e.g., you're only going to look at studies that use a certain methodology). All of this information needs to be included in your protocol. You'll also need to ensure that the project is viable - has someone already done a systematic review on this topic? Do some searches and check the various protocol registries to find out. 
  • Identify - Next, a comprehensive search of the literature is undertaken to ensure all studies that meet the predetermined criteria are identified. Each research question is different, so the number and types of databases you'll search - as well as other online publication venues - will vary. Some standards and guidelines specify that certain databases (e.g., MEDLINE, EMBASE) should be searched regardless. Your subject librarian can help you select appropriate databases to search and develop search strings for each of those databases.  
  • Evaluate - In this step, retrieved articles are screened and sorted using the predetermined inclusion and exclusion criteria. The risk of bias for each included study is also assessed around this time. It's best if you import search results into a citation management tool (see below) to clean up the citations and remove any duplicates; a minimal deduplication sketch appears below, after this list. You can then use a tool like Rayyan (see below) to screen the results. You should begin by screening titles and abstracts only, and then you'll examine the full text of any remaining articles. Each study should be reviewed by a minimum of two people on the project team.
  • Collect - Each included study is coded and the quantitative or qualitative data contained in these studies is then synthesized. You'll have to either find or develop a coding strategy or form that meets your needs. 
  • Explain - The synthesized results are articulated and contextualized. What do the results mean? How have they answered your research question?
  • Summarize - The final report provides a complete description of the methods and results in a clear, transparent fashion. 

Adapted from
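As noted in the Evaluate step above, duplicate records are usually removed before screening begins. A minimal sketch of one way to do this, assuming exported records have been loaded as dictionaries with (possibly missing) DOI and title fields; the field names and sample records are illustrative only:

```python
# Minimal sketch: deduplicate exported records by DOI, falling back to a normalized title.
# Record structure, field names, and sample values are assumptions for illustration.
import re

def dedup_key(record):
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    title = re.sub(r"[^a-z0-9]+", " ", (record.get("title") or "").lower()).strip()
    return ("title", title)

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = dedup_key(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Exercise and Sleep: A Trial", "doi": "10.1000/xyz123"},
    {"title": "Exercise and sleep: a trial.", "doi": "10.1000/XYZ123"},  # same DOI, different case
    {"title": "An Unrelated Study", "doi": ""},
]
print(len(deduplicate(records)), "unique records")  # 2 unique records
```

Dedicated tools (citation managers, Covidence, Rayyan) do this more robustly; the point here is only to show the idea of keying records on a stable identifier.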

Types of Reviews

Systematic Review

These types of studies employ a systematic method to analyze and synthesize the results of numerous studies. "Systematic" in this case means following a strict set of steps - as outlined by entities like PRISMA and the Institute of Medicine - so as to make the review more reproducible and less biased. Consistent, thorough documentation is also key. Reviews of this type are not meant to be conducted by an individual but rather a (small) team of researchers. Systematic reviews are widely used in the health sciences, often to find a generalized conclusion from multiple evidence-based studies. 

Meta-Analysis

A systematic method that uses statistics to analyze the data from numerous studies. The researchers combine the data from studies with similar data types and analyze them as a single, expanded dataset. Meta-analyses are a type of systematic review.
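In the simplest (fixed-effect) case, this statistical combination is an inverse-variance weighted average of the individual study estimates. The standard textbook formulation, not specific to this guide, is:

```latex
% Fixed-effect (inverse-variance) pooling of k study estimates \hat{\theta}_i with variances v_i
w_i = \frac{1}{v_i}, \qquad
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad
\mathrm{SE}(\hat{\theta}) = \sqrt{\frac{1}{\sum_{i=1}^{k} w_i}}, \qquad
95\%\ \mathrm{CI} = \hat{\theta} \pm 1.96\,\mathrm{SE}(\hat{\theta})
```

Random-effects models extend this by adding an estimate of between-study variance to each study's variance before weighting.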

Scoping Review

A scoping review employs the systematic review methodology to explore a broader topic or question rather than a specific and answerable one, as is generally the case with a systematic review. Authors of these types of reviews seek to collect and categorize the existing literature so as to identify any gaps.

Rapid Review

Rapid reviews are systematic reviews conducted under a time constraint. Researchers make use of workarounds to complete the review quickly (e.g., only looking at English-language publications), which can lead to a less thorough and more biased review. 

Narrative Review

A traditional literature review that summarizes and synthesizes the findings of numerous original research articles. The purpose and scope of narrative literature reviews vary widely and do not follow a set protocol. Most literature reviews are narrative reviews. 

Umbrella Review

Umbrella reviews are, essentially, systematic reviews of systematic reviews. These compile evidence from multiple review studies into one usable document. 

Grant, Maria J., and Andrew Booth. “A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies.” Health Information & Libraries Journal , vol. 26, no. 2, 2009, pp. 91-108. doi: 10.1111/j.1471-1842.2009.00848.x .

  • Next: Project Planning >>
  • University of Michigan Library
  • Research Guides

Systematic Reviews

  • Types of Reviews
  • Work with a Search Expert
  • Covidence Review Software

Choosing a Review Type

Types of literature reviews.

  • Evidence in a Systematic Review
  • Information Sources
  • Search Strategy
  • Managing Records
  • Selection Process
  • Data Collection Process
  • Study Risk of Bias Assessment
  • Reporting Results
  • For Search Professionals

This guide focuses on the methodology for systematic reviews (SRs), but an SR may not be the best methodology to use to meet your project's goals. Use the articles listed here or in the Types of Literature Reviews box below for information about additional methodologies that could better fit your project. 

  • Haddaway NR, Lotfi T, Mbuagbaw L. Systematic reviews: A glossary for public health . Scand J Public Health. 2022 Feb 9:14034948221074998. doi: 10.1177/14034948221074998. Epub ahead of print. PMID: 35139715.
  • Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies . Health Info Libr J. 2009 Jun;26(2):91-108. Defines 14 types of reviews and provides a helpful summary table on pp. 94-95.
  • Sutton A, Clowes M, Preston L, Booth A. Meeting the review family: exploring review types and associated information retrieval requirements . Health Info Libr J . 2019;36(3):202–222. doi:10.1111/hir.12276
  • What Review is Right for You? (https://whatreviewisrightforyou.knowledgetranslation.net). If you're not sure what type of review is right for your project, use this tool to find the best methodology.

Systematic Reviews

  • systematically and transparently searches for a broad range of information to synthesize, in order to find the effect of an intervention.
  • uses a protocol 
  • has a clear data extraction and management plan.
  • Time-intensive and often take months to a year or more to complete, even with a multi-person team. 

NOTE: The term "systematic review" is also used incorrectly as a blanket term for other types of reviews.

Methodological Guidance

  • Finding What Works in Health Care: Standards for Systematic Reviews. 2011. Institute of Medicine. http://books.nap.edu/openbook.php?record_id=13059
  • Cochrane Handbook of Systematic Reviews of Interventions, v. 6. 2019. https://training.cochrane.org/handbook
  • The Joanna Briggs Reviewers Manual. 2024. https://jbi-global-wiki.refined.site/space/MANUAL
  • The Community Guide/Methods/Systematic Review Methods. 2014. The Community Preventive Services Task Force. http://www.thecommunityguide.org/about/methods.html

For issues in systematic reviews, especially in social science or other qualitative research: 

  • Some Potential "Pitfalls" in the Construction of Educational Systematic Reviews. https://doi.org/10.1007/s40596-017-0675-7
  • Lescoat, A., Murphy, S. L., Roofeh, D., et al. (2021). Considerations for a combined index for limited cutaneous systemic sclerosis to support drug development and improve outcomes. https://doi.org/10.1177/2397198320961967
  • DeLong, M. R., Tandon, V. J., Bertrand, A. A. (2021). Review of Outcomes in Prepectoral Prosthetic Breast Reconstruction with and without Surgical Mesh Assistance.  https://pubmed.ncbi.nlm.nih.gov/33177453/
  • Carey, M. R., Vaughn, V. M., Mann, J. (2020). Is Non-Steroidal Anti-Inflammatory Therapy Non-Inferior to Antibiotic Therapy in Uncomplicated Urinary Tract Infections: a Systematic Review.  https://pubmed.ncbi.nlm.nih.gov/32270403/
Meta-Analyses

  • Statistical technique for combining the findings from disparate quantitative studies.
  • Uses statistical methods to objectively evaluate, synthesize, and summarize results.
  • May be conducted independently or as part of a systematic review.
  • Cochrane Handbook, Ch 10: Analysing data and undertaking meta-analyses https://training.cochrane.org/handbook/current/chapter-10
  • Bauer, M. E., Toledano, R. D., Houle, T., et al. (2020). Lumbar neuraxial procedures in thrombocytopenic patients across populations: A systematic review and meta-analysis. https://pubmed.ncbi.nlm.nih.gov/31810860/
  • Mailoa J, Lin GH, Khoshkam V, MacEachern M, et al. Long-Term Effect of Four Surgical Periodontal Therapies and One Non-Surgical Therapy: A Systematic Review and Meta-Analysis. https://pubmed.ncbi.nlm.nih.gov/26110453/

Umbrella Reviews

  • Reviews other systematic reviews on a topic. 
  • Often defines a broader question than is typical of a traditional systematic review.
  • Most useful when there are competing interventions to consider.
  • Ioannidis JP. Integration of evidence from multiple meta-analyses: a primer on umbrella reviews, treatment networks and multiple treatments meta-analyses .  https://pubmed.ncbi.nlm.nih.gov/35081993
  • Aromataris, E., Fernandez, R., Godfrey, C. M., Holly, C., Khalil, H., & Tungpunkom, P.  2015 Methodology for JBI Umbrella Reviews. https://ro.uow.edu.au/cgi/viewcontent.cgi?articl.
  • Gastaldon, C., Solmi, M., Correll, C. U., et al. (2022). Risk factors of postpartum depression and depressive symptoms: umbrella review of current evidence from systematic reviews and meta-analyses of observational studies. https://pubmed.ncbi.nlm.nih.gov/35081993/
  • Blodgett, T. J., & Blodgett, N. P. (2021). Melatonin and melatonin-receptor agonists to prevent delirium in hospitalized older adults: An umbrella review.   https://pubmed.ncbi.nlm.nih.gov/34749057/

Comparative effectiveness 

  • Systematic reviews of existing research on the effectiveness, comparative effectiveness, and comparative harms of different health care interventions.
  •  Intended to provide relevant evidence to inform real-world health care decisions for patients, providers, and policymakers.
  • Methods Guide for Effectiveness and Comparative Effectiveness Reviews. https://effectivehealthcare.ahrq.gov/products/collections/cer-methods-guide
  • Main document of the above guide: https://effectivehealthcare.ahrq.gov/sites/default/files/pdf/cer-methods-guide_overview.pdf
  • Tanni KA, Truong CB, Johnson BS, Qian J. Comparative effectiveness and safety of eribulin in advanced or metastatic breast cancer: a systematic review and meta-analysis. Crit Rev Oncol Hematol. 2021 Jul;163:103375. doi: 10.1016/j.critrevonc.2021.103375. Epub 2021 Jun 2. PMID: 34087344.
  • Rice D, Corace K, Wolfe D, Esmaeilisaraji L, Michaud A, Grima A, Austin B, Douma R, Barbeau P, Butler C, Willows M, Poulin PA, Sproule BA, Porath A, Garber G, Taha S, Garner G, Skidmore B, Moher D, Thavorn K, Hutton B. Evaluating comparative effectiveness of psychosocial interventions adjunctive to opioid agonist therapy for opioid use disorder: A systematic review with network meta-analyses. PLoS One. 2020 Dec 28;15(12):e0244401. doi: 10.1371/journal.pone.0244401. PMID: 33370393; PMCID: PMC7769275.

​ Scoping Review or Evidence Map

Systematically and transparently collect and  categorize  existing evidence on a broad question of  policy or management importance.

Seeks to identify research gaps and opportunities for evidence synthesis rather than searching for the effect of an intervention. 

May critically evaluate existing evidence, but does not attempt to synthesize the results in the way a systematic review would. (see  EE Journal  and  CIFOR )

May take longer than a systematic review.

  • For useful guidance on whether or not to conduct a scoping review, see Figure 1 in this article: Pollock, D., Davies, E. L., Peters, M. D. J., et al. Undertaking a scoping review: A practical guide for nursing and midwifery students, clinicians, researchers, and academics. J Adv Nurs. 2021;77:2102-2113. https://doi.org/10.1111/jan.14743

Hilary Arksey & Lisa O'Malley (2005). Scoping studies: towards a methodological framework. https://doi.org/10.1080/1364557032000119616

Aromataris E, Munn Z, eds. (2020) . JBI Manual for Evidence Synthesis.  JBI. Chapter 11: Scoping Reviews. https://wiki.jbi.global/display/MANUAL/Chapter+11%3A+Scoping+reviews

Munn Z, Peters MD, Stern C, et al. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. https://pubmed.ncbi.nlm.nih.gov/30453902/

Tricco AC, Lillie E, Zarin W, et al.. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018 Oct 2;169(7):467-473. doi: 10.7326/M18-0850. Epub 2018 Sep 4. PMID: 30178033.  https://www.acpjournals.org/doi/epdf/10.7326/M18-0850

Bouldin E, Patel SR, Tey CS, et al. Bullying and Children who are Deaf or Hard-of-hearing: A Scoping Review. https://pubmed.ncbi.nlm.nih.gov/33438758

Finn M, Gilmore B, Sheaf G, Vallières F. What do we mean by individual capacity strengthening for primary health care in low- and middle-income countries? A systematic scoping review to improve conceptual clarity. https://pubmed.ncbi.nlm.nih.gov/33407554/

Hirt J, Nordhausen T, Meichlinger J, Braun V, Zeller A, Meyer G. Educational interventions to improve literature searching skills in the health sciences: a scoping review.  https://pubmed.ncbi.nlm.nih.gov/33013210/

​ Rapid Review

Useful for addressing issues needing timely decisions, such as developing policy recommendations. 

Applies systematic review methodology within a time-constrained setting.

Employs intentional, methodological "shortcuts" (limiting search terms for example) at the risk of introducing bias.

Defining characteristic is the transparency of team methodological choices.

Garritty, Chantelle, Gerald Gartlehner, Barbara Nussbaumer-Streit, Valerie J. King, Candyce Hamel, Chris Kamel, Lisa Affengruber, and Adrienne Stevens. “Cochrane Rapid Reviews Methods Group Offers Evidence-Informed Guidance to Conduct Rapid Reviews.” Journal of Clinical Epidemiology 130 (February 2021): 13–22. https://doi.org/10.1016/j.jclinepi.2020.10.007 .

Klerings I, Robalino S, Booth A, et al. Rapid reviews methods series: Guidance on literature search. BMJ Evidence-Based Medicine. 19 April 2023. https://doi.org/10.1136/bmjebm-2022-112079

WHO. “WHO | Rapid Reviews to Strengthen Health Policy and Systems: A Practical Guide.” World Health Organization. Accessed February 11, 2022. http://www.who.int/alliance-hpsr/resources/publications/rapid-review-guide/en/ .

Dobbins, Maureen. “Steps for Conducting a Rapid Review,” 2017, 25.  https://www.nccmt.ca/uploads/media/media/0001/01/a816af720e4d587e13da6bb307df8c907a5dff9a.pdf

Norris HC, Richardson HM, Benoit MC, et al. (2021) Utilization Impact of Cost-Sharing Elimination for Preventive Care Services: A Rapid Review.   https://pubmed.ncbi.nlm.nih.gov/34157906/

Marcus N, Stergiopoulos V. Re-examining mental health crisis intervention: A rapid review comparing outcomes across police, co-responder and non-police models. Health Soc Care Community. 2022 Feb 1. doi: 10.1111/hsc.13731. Epub ahead of print. PMID: 35103364.

Narrative (Literature) Review

A broad term referring to reviews with a wide scope and non-standardized methodology.

See Baethge et al. (2019) below for a method of quality assessment.

Search strategies, comprehensiveness, and time range covered will vary and do not follow an established protocol.

It provides insight into a particular topic by critically examining sources, generally over a particular period of time.

Greenhalgh, T., Thorne, S., & Malterud, K. (2018). Time to challenge the spurious hierarchy of systematic over narrative reviews?. https://pubmed.ncbi.nlm.nih.gov/29578574/

  • Baethge, C., Goldbeck-Wood, S. & Mertens, S. (2019). SANRA—a scale for the quality assessment of narrative review articles. https://doi.org/10.1186/s41073-019-0064-8
  • Czypionka, T., Greenhalgh, T., Bassler, D., & Bryant, M. B. (2021). Masks and Face Coverings for the Lay Public : A Narrative Update. https://pubmed.ncbi.nlm.nih.gov/33370173/
  • Gardiner, F. W., Nwose, E. U., Bwititi, P. T., et al.. (2017). Services aimed at achieving desirable clinical outcomes in patients with chronic kidney disease and diabetes mellitus: A narrative review. https://pubmed.ncbi.nlm.nih.gov/29201367/
  •  Dickerson, S. S., Connors, L. M., Fayad, A., & Dean, G. E. (2014). Sleep-wake disturbances in cancer patients: narrative review of literature focusing on improving quality of life outcomes.  https://pubmed.ncbi.nlm.nih.gov/25050080/

University of Texas

  • University of Texas Libraries
  • UT Libraries

Systematic Reviews & Evidence Synthesis Methods

Types of reviews.

  • Formulate Question
  • Find Existing Reviews & Protocols
  • Register a Protocol
  • Searching Systematically
  • Supplementary Searching
  • Managing Results
  • Deduplication
  • Critical Appraisal
  • Glossary of terms
  • Librarian Support
  • Video tutorials This link opens in a new window
  • Systematic Review & Evidence Synthesis Boot Camp

Not sure what type of review you want to conduct?

There are many types of reviews --- narrative reviews, scoping reviews, systematic reviews, integrative reviews, umbrella reviews, rapid reviews, and others --- and it's not always straightforward to choose which type of review to conduct. These Review Navigator tools (see below) ask a series of questions to guide you through the various kinds of reviews and to help you determine the best choice for your research needs.

  • Which review is right for you? (Univ. of Manitoba)
  • What type of review is right for you? (Cornell)
  • Review Ready Reckoner - Assessment Tool (RRRsAT)
  • A typology of reviews: an analysis of 14 review types and associated methodologies. by Grant & Booth
  • Meeting the review family: exploring review types and associated information retrieval requirements | Health Info Libr J, 2019

Reproduced from Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies . Health Info Libr J. 2009 Jun;26(2):91-108. doi: 10.1111/j.1471-1842.2009.00848.x

  • Last Updated: Apr 9, 2024 8:57 PM
  • URL: https://guides.lib.utexas.edu/systematicreviews

Advocate Health - Midwest Library Homepage

Systematic Review Process: Types of Reviews

  • Definitions of a Systematic Review

Types of Reviews

  • Systematic Review Planning Process
  • Resources Needed to Conduct a Review
  • Reporting Guidelines
  • Where to Search
  • How to Search
  • Screening and Study Selection
  • Data Extraction
  • Appraisal and Analysis
  • Citation Management
  • Additional Resources: Guides and Books
  • Using Covidence for Your Systematic Review
  • Librarian Collaboration

Narrative vs. Systematic Reviews

People often confuse systematic and literature (narrative) reviews. Both are used to provide a summary of the existing literature or research on a specific topic.

A narrative or traditional literature review is a comprehensive, critical, and objective analysis of the current knowledge on a topic. They are an essential part of the research process and help to establish a theoretical framework and focus or context for your research. A literature review will help you to identify patterns and trends in the literature so that you can identify gaps or inconsistencies in a body of knowledge. This should lead you to a sufficiently focused research question that justifies your research.

A systematic review is comprehensive and has minimal bias. It is based on a specific question and uses eligibility criteria and a pre-planned protocol. This type of study evaluates the quality of evidence. 

A systematic review can be either quantitative or qualitative:

  • If quantitative, the review will include studies that have numerical data.
  • If qualitative, the review derives data from observation, interviews, or verbal interactions and focuses on the meanings and interpretations of the participants. It will include focus groups, interviews, observations and diaries.

Narrative reviews, in comparison, provide a perspective on a topic (like a textbook chapter), may have no specified search strategy, might have significant bias issues, and may not evaluate the quality of evidence.

This table provides a detailed comparison of systematic and literature (narrative) reviews.

Tools to Help You Choose a Review Type

There are other comprehensive literature reviews of similar methodology to the systematic review. These tools can help you determine which type of review you may want to conduct. 

  • The Review Ready Reckoner - Assessment Tool (RRRsAT) is a chart created as an adaptation of Andrew Booth's article on review typology. The chart describes the features of multiple review types, listing the characteristics that distinguish each type and including a sample of each type of review.
  • The What Review is Right for You tool asks five short questions to help you identify the most appropriate method for a review.

Use this chart  to determine the type of review you are interested in writing and to learn the differences in the stages and processes of various reviews compared to systematic reviews.

Source: Yale University

The type of review you conduct will depend on the purpose of the review, your question, your resources, expertise, and type of data.

Here are two suggested articles to consult if you want to know more about review types:

Grant, M. J., & Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies.   Health information & libraries journal ,  26 (2), 91-108. This article defines 14 types of reviews. There is a helpful summary table on pp.94-95

Sutton A, Clowes M, Preston L, Booth A.  Meeting the review family: exploring review types and associated information retrieval requirements.   Health information & libraries journal . 2019;36(3):202–222. doi:10.1111/hir.12276

This Comparison table is derived from a guide which is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license , and was originally included in a workbook by Amanda Wanner at Plymouth University for Systematic Reviews and Scoping Reviews. Stephanie Roth at Temple University remixed the original version. Many thanks and much appreciation to Amanda Wanner and Stephanie Roth for allowing me to create a derivative of their work.

Funaro, M., Nyhan, K., & Brackett, A. (n.d.).   What type of review could you write?  Yale Harvey Cushing/John Hay Whitney Medical Library.

  • << Previous: Definitions of a Systematic Review
  • Next: Systematic Review Planning Process >>
  • Last Updated: Dec 27, 2023 12:48 PM
  • URL: https://library.aah.org/guides/systematicreview


  • UOW Library
  • Key guides for researchers

Systematic Review

  • Five other types of systematic review
  • What is a systematic review?
  • How is a literature review different?
  • Search tips for systematic reviews
  • Controlled vocabularies
  • Grey literature
  • Transferring your search
  • Documenting your results
  • Support & contact

Five other types of systematic reviews

1. Scoping review

A scoping review is a preliminary assessment of the potential size and scope of the available research literature. It aims to identify the nature and extent of research evidence (usually including ongoing research).

Scoping reviews provide an understanding of the size and scope of the available literature and can inform whether a full systematic review should be undertaken. 

If you're not sure whether you should conduct a systematic review or a scoping review, this article outlines the differences between these review types and could help your decision making.

2. Rapid review

Rapid reviews are an assessment of what is already known about a policy or practice issue by using systematic review methods to search and critically appraise existing research. 

This methodology utilises several legitimate techniques to shorten the process – careful focusing of the research question, using broad or less sophisticated search strategies, conducting a review of reviews, restricting the amount of grey literature, extracting only key variables and performing simpler quality appraisals.

Rapid reviews have an increased risk of potential bias due to their short timeframe. Documenting the methodology and highlighting its limitations is one way to mitigate bias. 

3. Narrative review

Also called a literature review.  

A narrative, or literature, review synthesises primary studies and explores them through description rather than statistics. Library support for literature reviews can be found in this guide.

4. Meta-analysis

A meta-analysis statistically combines the results of quantitative studies to provide a more precise estimate of an effect. This type of study examines data from multiple studies on the same subject to determine trends.

Outcomes from a meta-analysis may include a more precise estimate of the effect of treatment or risk factor for disease, or other outcomes, than any individual study contributing to the combined studies being analysed.

5. Mixed methods/mixed studies

Refers to any combination of methods where one significant component is a literature review (usually systematic review). For example, a mixed methods study might include a systematic review accompanied by interviews or by a stakeholder consultation. 

Within a review context, mixed methods studies refer to a combination of review approaches, for example combining quantitative with qualitative research, or outcome with process studies.

Further reading:

  • Duke University, Types of Reviews
  • Systematic review types: meet the family  (Covidence)
  • Systematic reviews and other types from Temple University
  • A typology of reviews: an analysis of 14 review types and associated methodologies  (Grant & Booth, 2009).
  • Previous: What is a systematic review?
  • Next: How is a literature review different?
  • Last Updated: Apr 22, 2024 3:02 PM
  • URL: https://uow.libguides.com/systematic-review

  • What is a Systematic Review?

Types of Reviews

  • Manuals and Reporting Guidelines
  • Our Service
  • 1. Assemble Your Team
  • 2. Develop a Research Question
  • 3. Write and Register a Protocol
  • 4. Search the Evidence
  • 5. Screen Results
  • 6. Assess for Quality and Bias
  • 7. Extract the Data
  • 8. Write the Review
  • Additional Resources
  • Finding Full-Text Articles

Review Typologies

There are many types of evidence synthesis projects, including systematic reviews as well as others. The selection of review type is wholly dependent on the research question. Not all research questions are well-suited for systematic reviews.

  • Review Typologies (from LITR-EX) This site explores different review methodologies such as systematic, scoping, realist, narrative, state of the art, meta-ethnography, critical, and integrative reviews. The LITR-EX site has a health professions education focus, but the advice and information is widely applicable.

Review the table to peruse review types and associated methodologies. Librarians can also help your team determine which review type might be appropriate for your project. 

Reproduced from Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91-108.  doi:10.1111/j.1471-1842.2009.00848.x

  • << Previous: What is a Systematic Review?
  • Next: Manuals and Reporting Guidelines >>
  • Last Updated: Mar 20, 2024 2:21 PM
  • URL: https://guides.mclibrary.duke.edu/sysreview

Duke University Libraries

Literature Reviews

  • Types of reviews
  • Getting started

Types of reviews and examples

Choosing a review type.

  • 1. Define your research question
  • 2. Plan your search
  • 3. Search the literature
  • 4. Organize your results
  • 5. Synthesize your findings
  • 6. Write the review
  • Artificial intelligence (AI) tools
  • Thompson Writing Studio This link opens in a new window
  • Need to write a systematic review? This link opens in a new window

Narrative review

Definition:

"A term used to describe a conventional overview of the literature, particularly when contrasted with a systematic review (Booth et al., 2012, p. 265).

Characteristics:

  • Provides examination of recent or current literature on a wide range of subjects
  • Varying levels of completeness / comprehensiveness, non-standardized methodology
  • May or may not include comprehensive searching, quality assessment or critical appraisal

Mitchell, L. E., & Zajchowski, C. A. (2022). The history of air quality in Utah: A narrative review.  Sustainability ,  14 (15), 9653.  doi.org/10.3390/su14159653

Booth, A., Papaioannou, D., & Sutton, A. (2012). Systematic approaches to a successful literature review. London: SAGE Publications Ltd.

"An assessment of what is already known about a policy or practice issue...using systematic review methods to search and critically appraise existing research" (Grant & Booth, 2009, p. 100).

  • Assessment of what is already known about an issue
  • Similar to a systematic review but within a time-constrained setting
  • Typically employs methodological shortcuts, increasing risk of introducing bias, includes basic level of quality assessment
  • Best suited for issues needing quick decisions and solutions (i.e., policy recommendations)

Learn more about the method:

Khangura, S., Konnyu, K., Cushman, R., Grimshaw, J., & Moher, D. (2012). Evidence summaries: the evolution of a rapid review approach.  Systematic reviews, 1 (1), 1-9.  https://doi.org/10.1186/2046-4053-1-10

Virginia Commonwealth University Libraries. (2021). Rapid Review Protocol .

Quarmby, S., Santos, G., & Mathias, M. (2019). Air quality strategies and technologies: A rapid review of the international evidence.  Sustainability, 11 (10), 2757.  https://doi.org/10.3390/su11102757

Grant, M.J. & Booth, A. (2009). A typology of reviews: an analysis of the 14 review types and associated methodologies.  Health Information & Libraries Journal , 26(2), 91-108. https://www.doi.org/10.1111/j.1471-1842.2009.00848.x

Mapping review

Developed and refined by the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre), this review "map[s] out and categorize[s] existing literature on a particular topic, identifying gaps in research literature from which to commission further reviews and/or primary research" (Grant & Booth, 2009, p. 97).

Although mapping reviews are sometimes called scoping reviews, the key difference is that mapping reviews focus on a review question, rather than a topic

Mapping reviews are "best used where a clear target for a more focused evidence product has not yet been identified" (Booth, 2016, p. 14)

Mapping review searches are often quick and are intended to provide a broad overview

Mapping reviews can take different approaches in what types of literature is focused on in the search

Cooper I. D. (2016). What is a "mapping study?".  Journal of the Medical Library Association: JMLA ,  104 (1), 76–78. https://doi.org/10.3163/1536-5050.104.1.013

Miake-Lye, I. M., Hempel, S., Shanman, R., & Shekelle, P. G. (2016). What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products.  Systematic reviews, 5 (1), 1-21.  https://doi.org/10.1186/s13643-016-0204-x

Tainio, M., Andersen, Z. J., Nieuwenhuijsen, M. J., Hu, L., De Nazelle, A., An, R., ... & de Sá, T. H. (2021). Air pollution, physical activity and health: A mapping review of the evidence.  Environment international ,  147 , 105954.  https://doi.org/10.1016/j.envint.2020.105954

Booth, A. (2016). EVIDENT Guidance for Reviewing the Evidence: a compendium of methodological literature and websites . ResearchGate. https://doi.org/10.13140/RG.2.1.1562.9842 . 

Grant, M.J. & Booth, A. (2009). A typology of reviews: an analysis of the 14 review types and associated methodologies.  Health Information & Libraries Journal , 26(2), 91-108.  https://www.doi.org/10.1111/j.1471-1842.2009.00848.x

"A type of review that has as its primary objective the identification of the size and quality of research in a topic area in order to inform subsequent review" (Booth et al., 2012, p. 269).

  • Main purpose is to map out and categorize existing literature, identify gaps in literature—great for informing policy-making
  • Search comprehensiveness determined by time/scope constraints, could take longer than a systematic review
  • No formal quality assessment or critical appraisal

Learn more about the methods :

Arksey, H., & O'Malley, L. (2005) Scoping studies: towards a methodological framework.  International Journal of Social Research Methodology ,  8 (1), 19-32.  https://doi.org/10.1080/1364557032000119616

Levac, D., Colquhoun, H., & O’Brien, K. K. (2010). Scoping studies: Advancing the methodology. Implementation Science: IS, 5, 69. https://doi.org/10.1186/1748-5908-5-69

Example : 

Rahman, A., Sarkar, A., Yadav, O. P., Achari, G., & Slobodnik, J. (2021). Potential human health risks due to environmental exposure to nano-and microplastics and knowledge gaps: A scoping review.  Science of the Total Environment, 757 , 143872.  https://doi.org/10.1016/j.scitotenv.2020.143872

Umbrella review

A review that "[compiles] evidence from multiple...reviews into one accessible and usable document" (Grant & Booth, 2009, p. 103). While originally intended to be a compilation of Cochrane reviews, it now generally refers to any kind of evidence synthesis.

  • Compiles evidence from multiple reviews into one document
  • Often defines a broader question than is typical of a traditional systematic review

Choi, G. J., & Kang, H. (2022). The umbrella review: a useful strategy in the rain of evidence.  The Korean Journal of Pain ,  35 (2), 127–128.  https://doi.org/10.3344/kjp.2022.35.2.127

Aromataris, E., Fernandez, R., Godfrey, C. M., Holly, C., Khalil, H., & Tungpunkom, P. (2015). Summarizing systematic reviews: Methodological development, conduct and reporting of an umbrella review approach. International Journal of Evidence-Based Healthcare , 13(3), 132–140. https://doi.org/10.1097/XEB.0000000000000055

Rojas-Rueda, D., Morales-Zamora, E., Alsufyani, W. A., Herbst, C. H., Al Balawi, S. M., Alsukait, R., & Alomran, M. (2021). Environmental risk factors and health: An umbrella review of meta-analyses.  International Journal of Environmental Research and Public Dealth ,  18 (2), 704.  https://doi.org/10.3390/ijerph18020704

Meta-analysis

A meta-analysis is a "technique that statistically combines the results of quantitative studies to provide a more precise effect of the result" (Grant & Booth, 2009, p. 98).

  • Statistical technique for combining results of quantitative studies to provide more precise effect of results
  • Aims for exhaustive, comprehensive searching
  • Quality assessment may determine inclusion/exclusion criteria
  • May be conducted independently or as part of a systematic review

Berman, N. G., & Parker, R. A. (2002). Meta-analysis: Neither quick nor easy. BMC Medical Research Methodology , 2(1), 10. https://doi.org/10.1186/1471-2288-2-10

Hites R. A. (2004). Polybrominated diphenyl ethers in the environment and in people: a meta-analysis of concentrations.  Environmental Science & Technology ,  38 (4), 945–956.  https://doi.org/10.1021/es035082g

Systematic review

A systematic review "seeks to systematically search for, appraise, and [synthesize] research evidence, often adhering to the guidelines on the conduct of a review" provided by discipline-specific organizations, such as the Cochrane Collaboration (Grant & Booth, 2009, p. 102).

  • Aims to compile and synthesize all known knowledge on a given topic
  • Adheres to strict guidelines, protocols, and frameworks
  • Time-intensive and often takes months to a year or more to complete
  • The most commonly referred to type of evidence synthesis. Sometimes confused as a blanket term for other types of reviews

Gascon, M., Triguero-Mas, M., Martínez, D., Dadvand, P., Forns, J., Plasència, A., & Nieuwenhuijsen, M. J. (2015). Mental health benefits of long-term exposure to residential green and blue spaces: a systematic review.  International Journal of Environmental Research and Public Health ,  12 (4), 4354–4379.  https://doi.org/10.3390/ijerph120404354

"Systematized reviews attempt to include one or more elements of the systematic review process while stopping short of claiming that the resultant output is a systematic review" (Grant & Booth, 2009, p. 102). When a systematic review approach is adapted to produce a more manageable scope, while still retaining the rigor of a systematic review such as risk of bias assessment and the use of a protocol, this is often referred to as a  structured review  (Huelin et al., 2015).

  • Typically conducted by postgraduate or graduate students
  • Often assigned by instructors to students who don't have the resources to conduct a full systematic review

Salvo, G., Lashewicz, B. M., Doyle-Baker, P. K., & McCormack, G. R. (2018). Neighbourhood built environment influences on physical activity among adults: A systematized review of qualitative evidence.  International Journal of Environmental Research and Public Health ,  15 (5), 897.  https://doi.org/10.3390/ijerph15050897

Huelin, R., Iheanacho, I., Payne, K., & Sandman, K. (2015). What’s in a name? Systematic and non-systematic literature reviews, and why the distinction matters. https://www.evidera.com/resource/whats-in-a-name-systematic-and-non-systematic-literature-reviews-and-why-the-distinction-matters/

Flowchart of review types

  • Review Decision Tree (Cornell University): for more information, check out Cornell's review methodology decision tree.
  • LitR-Ex.com – Eight literature review methodologies: learn more about eight different review types (including systematic reviews and scoping reviews), with practical tips about the strengths and weaknesses of different methods.

Charles Sturt University

Literature Review: Types of literature reviews

  • Traditional or narrative literature reviews
  • Scoping Reviews
  • Systematic literature reviews
  • Annotated bibliography
  • Keeping up to date with literature
  • Finding a thesis
  • Evaluating sources and critical appraisal of literature
  • Managing and analysing your literature
  • Further reading and resources

Types of literature reviews


The type of literature review you write will depend on your discipline and whether you are a researcher writing your PhD, publishing a study in a journal or completing an assessment task in your undergraduate study.

A literature review for a subject in an undergraduate degree will not be as comprehensive as the literature review required for a PhD thesis.

An undergraduate literature review may be in the form of an annotated bibliography or a narrative review of a small selection of literature, for example ten relevant articles. If you are asked to write a literature review, and you are an undergraduate student, be guided by your subject coordinator or lecturer.

The common types of literature reviews will be explained in the pages of this section.

  • Narrative or traditional literature reviews
  • Critically Appraised Topic (CAT)
  • Scoping reviews
  • Annotated bibliographies

These are not the only types of literature review that can be conducted. The terms "review" and "literature review" are often confused and used in the wrong context. Grant and Booth (2009) attempt to clear up this confusion by discussing 14 review types, the methodology associated with each, and their advantages and disadvantages.

Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies . Health Information & Libraries Journal, 26 , 91–108. doi:10.1111/j.1471-1842.2009.00848.x

What's the difference between reviews?

Researchers, academics, and librarians all use various terms to describe different types of literature reviews, and there is often inconsistency in the ways the types are discussed. Here are a couple of simple explanations.

  • The image below describes common review types in terms of speed, detail, risk of bias, and comprehensiveness:

Description of the differences between review types in image form

"Schematic of the main differences between the types of literature review" by Brennan, M. L., Arlt, S. P., Belshaw, Z., Buckley, L., Corah, L., Doit, H., Fajt, V. R., Grindlay, D., Moberly, H. K., Morrow, L. D., Stavisky, J., & White, C. (2020). Critically Appraised Topics (CATs) in veterinary medicine: Applying evidence in clinical practice. Frontiers in Veterinary Science, 7 , 314. https://doi.org/10.3389/fvets.2020.00314 is licensed under CC BY 3.0

  • The table below lists four of the most common types of review, as adapted from a widely used typology of fourteen types of reviews (Grant & Booth, 2009).

Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

See also the Library's  Literature Review guide.

Critically Appraised Topic (CAT)

For information on conducting a Critically Appraised Topic (CAT), see:

Callander, J., Anstey, A. V., Ingram, J. R., Limpens, J., Flohr, C., & Spuls, P. I. (2017).  How to write a Critically Appraised Topic: evidence to underpin routine clinical practice.  British Journal of Dermatology (1951), 177(4), 1007-1013. https://doi.org/10.1111/bjd.15873 




Literature Reviews

Literature reviews summarize and analyze what has been written on a particular topic and identify gaps or disagreements in the scholarly work on that topic.

Within a scholarly work, the literature review situates the current work within the larger scholarly conversation and emphasizes how that particular scholarly work contributes to the conversation on the topic. The literature review portion may be as brief as a few paragraphs focusing on a narrow topic area.

When writing this type of literature review, it's helpful to start by identifying sources most relevant to your research question. A citation tracking database such as Web of Science can also help you locate seminal articles on a topic and find out who has more recently cited them. See "Your Literature Search" for more details.

A literature review may itself be a scholarly publication and provide an analysis of what has been written on a particular topic without contributing original research. These types of literature reviews can serve to help keep people updated on a field as well as helping scholars choose a research topic to fill gaps in the knowledge on that topic. Common types include:

Systematic Review

Systematic literature reviews follow specific procedures, in some ways similar to setting up an experiment, so that future scholars can replicate the same steps. They are also helpful for evaluating data published over multiple studies. Thus, these are common in the medical field and may be used by healthcare providers to help guide diagnosis and treatment decisions. Cochrane Reviews are one example of this type of literature review.

Semi-Systematic Review

When a systematic review is not feasible, a semi-systematic review can help synthesize research on a topic or how a topic has been studied in different fields (Snyder 2019). Rather than focusing on quantitative data, this review type identifies themes, theoretical perspectives, and other qualitative information related to the topic. These types of reviews can be particularly helpful for a historical topic overview, for developing a theoretical model, and for creating a research agenda for a field (Snyder 2019). As with systematic reviews, a search strategy must be developed before conducting the review.

Integrative Review

An integrative review is less systematic and can be helpful for developing a theoretical model or for reconceptualizing a topic. As Snyder (2019) notes, "This type of review often requires a more creative collection of data, as the purpose is usually not to cover all articles ever published on the topic but rather to combine perspectives and insights from different fields or research traditions" (p. 336).

Source: Snyder, H. (2019). Literature review as a research methodology: An overview and guidelines. Journal of Business Research, 104, 333–339. https://doi.org/10.1016/j.jbusres.2019.07.039


Systematic Reviews - the process: Review Types


Before you begin

This is a guide to conducting reviews. Systematic reviews are not the only review type. This guide will help you:

  • Find the best review type for your purpose
  • Understand the steps of the review process
  • Conduct a thorough, structured search of the literature.  

Review types

Systematic Review Pyramid

Systematic Review

Cochrane defines a systematic review as an attempt “to identify, appraise and synthesize all the empirical evidence that meets pre-specified eligibility criteria to answer a given research question". They should be exhaustive and use transparent methods. This minimises bias and ensures they are reproducible.


Meta Analysis

A meta-analysis is a "technique that statistically combines the results of quantitative studies to provide a more precise effect of the results" ( Grant & Booth, 2009 ). It is normally performed on studies identified during a systematic review.


Scoping Review




An overview of methodological approaches in systematic reviews

Prabhakar Veginadu

1 Department of Rural Clinical Sciences, La Trobe Rural Health School, La Trobe University, Bendigo Victoria, Australia

Hanny Calache

2 Lincoln International Institute for Rural Health, University of Lincoln, Brayford Pool, Lincoln UK

Akshaya Pandian

3 Department of Orthodontics, Saveetha Dental College, Chennai Tamil Nadu, India

Mohd Masood

Associated Data

APPENDIX B: List of excluded studies with detailed reasons for exclusion

APPENDIX C: Quality assessment of included reviews using AMSTAR 2

The aim of this overview is to identify and collate evidence from existing published systematic review (SR) articles evaluating various methodological approaches used at each stage of an SR.

The search was conducted in five electronic databases from inception to November 2020 and updated in February 2022: MEDLINE, Embase, Web of Science Core Collection, Cochrane Database of Systematic Reviews, and APA PsycINFO. Title and abstract screening were performed in two stages by one reviewer, supported by a second reviewer. Full‐text screening, data extraction, and quality appraisal were performed by two reviewers independently. The quality of the included SRs was assessed using the AMSTAR 2 checklist.

The search retrieved 41,556 unique citations, of which 9 SRs were deemed eligible for inclusion in final synthesis. Included SRs evaluated 24 unique methodological approaches used for defining the review scope and eligibility, literature search, screening, data extraction, and quality appraisal in the SR process. Limited evidence supports the following (a) searching multiple resources (electronic databases, handsearching, and reference lists) to identify relevant literature; (b) excluding non‐English, gray, and unpublished literature, and (c) use of text‐mining approaches during title and abstract screening.

The overview identified limited SR‐level evidence on various methodological approaches currently employed during five of the seven fundamental steps in the SR process, as well as some methodological modifications currently used in expedited SRs. Overall, findings of this overview highlight the dearth of published SRs focused on SR methodologies and this warrants future work in this area.

1. INTRODUCTION

Evidence synthesis is a prerequisite for knowledge translation. 1 A well conducted systematic review (SR), often in conjunction with meta‐analyses (MA) when appropriate, is considered the “gold standard” of methods for synthesizing evidence related to a topic of interest. 2 The central strength of an SR is the transparency of the methods used to systematically search, appraise, and synthesize the available evidence. 3 Several guidelines, developed by various organizations, are available for the conduct of an SR; 4 , 5 , 6 , 7 among these, Cochrane is considered a pioneer in developing rigorous and highly structured methodology for the conduct of SRs. 8 The guidelines developed by these organizations outline seven fundamental steps required in SR process: defining the scope of the review and eligibility criteria, literature searching and retrieval, selecting eligible studies, extracting relevant data, assessing risk of bias (RoB) in included studies, synthesizing results, and assessing certainty of evidence (CoE) and presenting findings. 4 , 5 , 6 , 7

The methodological rigor involved in an SR can require a significant amount of time and resources, which may not always be available. 9 As a result, there has been a proliferation of modifications made to the traditional SR process, such as refining, shortening, bypassing, or omitting one or more steps, 10 , 11 for example, limits on the number and type of databases searched, limits on publication date, language, and types of studies included, and limiting to one reviewer for screening and selection of studies, as opposed to two or more reviewers. 10 , 11 These methodological modifications are made to accommodate the needs and resource constraints of the reviewers and stakeholders (e.g., organizations, policymakers, health care professionals, and other knowledge users). While such modifications are considered time and resource efficient, they may introduce bias in the review process, reducing their usefulness. 5

Substantial research has been conducted examining various approaches used in the standardized SR methodology and their impact on the validity of SR results. There are a number of published reviews examining the approaches or modifications corresponding to single 12 , 13 or multiple steps 14 involved in an SR. However, there is yet to be a comprehensive summary of the SR‐level evidence for all the seven fundamental steps in an SR. Such a holistic evidence synthesis will provide an empirical basis to confirm the validity of current accepted practices in the conduct of SRs. Furthermore, sometimes there is a balance that needs to be achieved between the resource availability and the need to synthesize the evidence in the best way possible, given the constraints. This evidence base will also inform the choice of modifications to be made to the SR methods, as well as the potential impact of these modifications on the SR results. An overview is considered the choice of approach for summarizing existing evidence on a broad topic, directing the reader to evidence, or highlighting the gaps in evidence, where the evidence is derived exclusively from SRs. 15 Therefore, for this review, an overview approach was used to (a) identify and collate evidence from existing published SR articles evaluating various methodological approaches employed in each of the seven fundamental steps of an SR and (b) highlight both the gaps in the current research and the potential areas for future research on the methods employed in SRs.

2. METHODS

An a priori protocol was developed for this overview but was not registered with the International Prospective Register of Systematic Reviews (PROSPERO), as the review was primarily methodological in nature and did not meet PROSPERO eligibility criteria for registration. The protocol is available from the corresponding author upon reasonable request. This overview was conducted based on the guidelines for the conduct of overviews as outlined in The Cochrane Handbook. 15 Reporting followed the Preferred Reporting Items for Systematic reviews and Meta‐analyses (PRISMA) statement. 3

2.1. Eligibility criteria

Only published SRs, with or without associated MA, were included in this overview. We adopted the defining characteristics of SRs from The Cochrane Handbook. 5 According to The Cochrane Handbook, a review was considered systematic if it satisfied the following criteria: (a) clearly states the objectives and eligibility criteria for study inclusion; (b) provides reproducible methodology; (c) includes a systematic search to identify all eligible studies; (d) reports assessment of validity of findings of included studies (e.g., RoB assessment of the included studies); (e) systematically presents all the characteristics or findings of the included studies. 5 Reviews that did not meet all of the above criteria were not considered a SR for this study and were excluded. MA‐only articles were included if it was mentioned that the MA was based on an SR.

SRs and/or MA of primary studies evaluating methodological approaches used in defining review scope and study eligibility, literature search, study selection, data extraction, RoB assessment, data synthesis, and CoE assessment and reporting were included. The methodological approaches examined in these SRs and/or MA can also be related to the substeps or elements of these steps; for example, applying limits on date or type of publication are the elements of literature search. Included SRs examined or compared various aspects of a method or methods, and the associated factors, including but not limited to: precision or effectiveness; accuracy or reliability; impact on the SR and/or MA results; reproducibility of an SR steps or bias occurred; time and/or resource efficiency. SRs assessing the methodological quality of SRs (e.g., adherence to reporting guidelines), evaluating techniques for building search strategies or the use of specific database filters (e.g., use of Boolean operators or search filters for randomized controlled trials), examining various tools used for RoB or CoE assessment (e.g., ROBINS vs. Cochrane RoB tool), or evaluating statistical techniques used in meta‐analyses were excluded. 14

2.2. Search

The search for published SRs was performed on the following scientific databases, initially from inception to the third week of November 2020 and updated in the last week of February 2022: MEDLINE (via Ovid), Embase (via Ovid), Web of Science Core Collection, Cochrane Database of Systematic Reviews, and American Psychological Association (APA) PsycINFO. The search was restricted to English-language publications. Following the objectives of this study, study design filters within databases were used to restrict the search to SRs and MA, where available. The reference lists of included SRs were also searched for potentially relevant publications.

The search terms included keywords, truncations, and subject headings for the key concepts in the review question: SRs and/or MA, methods, and evaluation. Some of the terms were adopted from the search strategy used in a previous review by Robson et al., which reviewed primary studies on methodological approaches used in study selection, data extraction, and quality appraisal steps of SR process. 14 Individual search strategies were developed for respective databases by combining the search terms using appropriate proximity and Boolean operators, along with the related subject headings in order to identify SRs and/or MA. 16 , 17 A senior librarian was consulted in the design of the search terms and strategy. Appendix A presents the detailed search strategies for all five databases.
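As a rough illustration of how concept groups are combined with Boolean operators (the authors' actual, database-specific strategies are in Appendix A), the hypothetical sketch below ORs synonyms within each concept and then ANDs the concept blocks together; real strategies would additionally use subject headings, field tags, and proximity operators tuned to each database.

    # Hypothetical concept groups for a methods-focused search (illustrative only)
    concepts = {
        "evidence synthesis": ["systematic review*", "meta-analys*"],
        "methods": ["method*", "approach*", "technique*"],
        "evaluation": ["evaluat*", "compar*", "assess*"],
    }

    def build_query(concept_groups):
        """OR terms within each concept, then AND the concept blocks together."""
        blocks = ["(" + " OR ".join(terms) + ")" for terms in concept_groups.values()]
        return " AND ".join(blocks)

    print(build_query(concepts))
    # (systematic review* OR meta-analys*) AND (method* OR approach* OR technique*) AND (evaluat* OR compar* OR assess*)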

2.3. Study selection and data extraction

Title and abstract screening of references were performed in three steps. First, one reviewer (PV) screened all the titles and excluded obviously irrelevant citations, for example, articles on topics not related to SRs, non‐SR publications (such as randomized controlled trials, observational studies, scoping reviews, etc.). Next, from the remaining citations, a random sample of 200 titles and abstracts were screened against the predefined eligibility criteria by two reviewers (PV and MM), independently, in duplicate. Discrepancies were discussed and resolved by consensus. This step ensured that the responses of the two reviewers were calibrated for consistency in the application of the eligibility criteria in the screening process. Finally, all the remaining titles and abstracts were reviewed by a single “calibrated” reviewer (PV) to identify potential full‐text records. Full‐text screening was performed by at least two authors independently (PV screened all the records, and duplicate assessment was conducted by MM, HC, or MG), with discrepancies resolved via discussions or by consulting a third reviewer.

Data related to review characteristics, results, key findings, and conclusions were extracted by at least two reviewers independently (PV performed data extraction for all the reviews and duplicate extraction was performed by AP, HC, or MG).

2.4. Quality assessment of included reviews

The quality assessment of the included SRs was performed using the AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews). The tool consists of a 16‐item checklist addressing critical and noncritical domains. 18 For the purpose of this study, the domain related to MA was reclassified from critical to noncritical, as SRs with and without MA were included. The other six critical domains were used according to the tool guidelines. 18 Two reviewers (PV and AP) independently responded to each of the 16 items in the checklist with either “yes,” “partial yes,” or “no.” Based on the interpretations of the critical and noncritical domains, the overall quality of the review was rated as high, moderate, low, or critically low. 18 Disagreements were resolved through discussion or by consulting a third reviewer.
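The snippet below is a simplified sketch of how checklist responses can be turned into an overall confidence rating, following the published AMSTAR 2 guidance (high, moderate, low, or critically low depending on critical flaws and non-critical weaknesses) and this overview's reclassification of the meta-analysis domain as non-critical. It is illustrative only, not the authors' actual scoring procedure, and the critical-item numbers should be checked against the tool itself.

    # Item numbers commonly treated as critical in AMSTAR 2 (per the tool guidance);
    # the meta-analysis domain (item 11) is treated as non-critical here, as in this overview.
    CRITICAL_ITEMS = {2, 4, 7, 9, 13, 15}

    def amstar2_rating(responses):
        """responses: dict mapping item number (1-16) to 'yes', 'partial yes', or 'no'."""
        critical_flaws = sum(1 for item, r in responses.items()
                             if item in CRITICAL_ITEMS and r == "no")
        other_weaknesses = sum(1 for item, r in responses.items()
                               if item not in CRITICAL_ITEMS and r != "yes")
        if critical_flaws > 1:
            return "critically low"
        if critical_flaws == 1:
            return "low"
        return "high" if other_weaknesses <= 1 else "moderate"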

2.5. Data synthesis

To provide an understandable summary of existing evidence syntheses, characteristics of the methods evaluated in the included SRs were examined and key findings were categorized and presented based on the corresponding step in the SR process. The categories of key elements within each step were discussed and agreed by the authors. Results of the included reviews were tabulated and summarized descriptively, along with a discussion on any overlap in the primary studies. 15 No quantitative analyses of the data were performed.

3. RESULTS

From 41,556 unique citations identified through literature search, 50 full‐text records were reviewed, and nine systematic reviews 14 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 were deemed eligible for inclusion. The flow of studies through the screening process is presented in Figure 1. A list of excluded studies with reasons can be found in Appendix B.

Figure 1. Study selection flowchart.

3.1. Characteristics of included reviews

Table  1 summarizes the characteristics of included SRs. The majority of the included reviews (six of nine) were published after 2010. 14 , 22 , 23 , 24 , 25 , 26 Four of the nine included SRs were Cochrane reviews. 20 , 21 , 22 , 23 The number of databases searched in the reviews ranged from 2 to 14, 2 reviews searched gray literature sources, 24 , 25 and 7 reviews included a supplementary search strategy to identify relevant literature. 14 , 19 , 20 , 21 , 22 , 23 , 26 Three of the included SRs (all Cochrane reviews) included an integrated MA. 20 , 21 , 23

Table 1. Characteristics of included studies (SR = systematic review; MA = meta‐analysis; RCT = randomized controlled trial; CCT = controlled clinical trial; N/R = not reported).

The included SRs evaluated 24 unique methodological approaches (26 in total) used across five steps in the SR process; 8 SRs evaluated 6 approaches, 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 while 1 review evaluated 18 approaches. 14 Exclusion of gray or unpublished literature 21 , 26 and blinding of reviewers for RoB assessment 14 , 23 were evaluated in two reviews each. Included SRs evaluated methods used in five different steps in the SR process, including methods used in defining the scope of review ( n  = 3), literature search ( n  = 3), study selection ( n  = 2), data extraction ( n  = 1), and RoB assessment ( n  = 2) (Table  2 ).

Table 2. Summary of findings from reviews evaluating systematic review methods.

There was some overlap in the primary studies evaluated in the included SRs on the same topics: Schmucker et al. 26 and Hopewell et al. 21 ( n  = 4), Hopewell et al. 20 and Crumley et al. 19 ( n  = 30), and Robson et al. 14 and Morissette et al. 23 ( n  = 4). There were no conflicting results between any of the identified SRs on the same topic.

3.2. Methodological quality of included reviews

Overall, the quality of the included reviews was assessed as moderate at best (Table  2 ). The most common critical weakness in the reviews was failure to provide justification for excluding individual studies (four reviews). Detailed quality assessment is provided in Appendix C .

3.3. Evidence on systematic review methods

3.3.1. Methods for defining review scope and eligibility

Two SRs investigated the effect of excluding data obtained from gray or unpublished sources on the pooled effect estimates of MA. 21 , 26 Hopewell et al. 21 reviewed five studies that compared the impact of gray literature on the results of a cohort of MA of RCTs in health care interventions. Gray literature was defined as information published in “print or electronic sources not controlled by commercial or academic publishers.” Findings showed an overall greater treatment effect for published trials than for trials reported in gray literature. In a more recent review, Schmucker et al. 26 addressed similar objectives by investigating gray and unpublished data in medicine. In addition to gray literature, defined similarly to the previous review by Hopewell et al., the authors also evaluated unpublished data—defined as “supplemental unpublished data related to published trials, data obtained from the Food and Drug Administration or other regulatory websites or postmarketing analyses hidden from the public.” The review found that in the majority of the MA, excluding gray literature had little or no effect on the pooled effect estimates. However, the evidence was too limited to conclude whether data from gray and unpublished literature had an impact on the conclusions of MA. 26

Morrison et al. 24 examined five studies measuring the effect of excluding non‐English language RCTs on the summary treatment effects of SR‐based MA in various fields of conventional medicine. Although none of the included studies reported major difference in the treatment effect estimates between English only and non‐English inclusive MA, the review found inconsistent evidence regarding the methodological and reporting quality of English and non‐English trials. 24 As such, there might be a risk of introducing “language bias” when excluding non‐English language RCTs. The authors also noted that the numbers of non‐English trials vary across medical specialties, as does the impact of these trials on MA results. Based on these findings, Morrison et al. 24 conclude that literature searches must include non‐English studies when resources and time are available to minimize the risk of introducing “language bias.”

3.3.2. Methods for searching studies

Crumley et al. 19 analyzed recall (also referred to as “sensitivity” by some researchers; defined as “percentage of relevant studies identified by the search”) and precision (defined as “percentage of studies identified by the search that were relevant”) when searching a single resource to identify randomized controlled trials and controlled clinical trials, as opposed to searching multiple resources. The studies included in their review frequently compared a MEDLINE only search with the search involving a combination of other resources. The review found low median recall estimates (median values between 24% and 92%) and very low median precisions (median values between 0% and 49%) for most of the electronic databases when searched singularly. 19 A between‐database comparison, based on the type of search strategy used, showed better recall and precision for complex and Cochrane Highly Sensitive search strategies (CHSSS). In conclusion, the authors emphasize that literature searches for trials in SRs must include multiple sources. 19
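Recall and precision are straightforward to compute once a reference ("gold standard") set of relevant studies is known; the minimal sketch below, using hypothetical record IDs, simply illustrates the definitions used by Crumley et al.

    def recall_and_precision(retrieved, relevant):
        """Recall: share of relevant studies the search found.
        Precision: share of retrieved records that are relevant."""
        retrieved, relevant = set(retrieved), set(relevant)
        hits = retrieved & relevant
        recall = len(hits) / len(relevant) if relevant else 0.0
        precision = len(hits) / len(retrieved) if retrieved else 0.0
        return recall, precision

    # e.g., a search returning 200 records that include 30 of 40 known relevant trials
    # gives recall = 0.75 and precision = 0.15
    print(recall_and_precision(retrieved=range(200), relevant=range(170, 210)))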

In an SR comparing handsearching and electronic database searching, Hopewell et al. 20 found that handsearching retrieved more relevant RCTs (retrieval rate of 92%−100%) than searching in a single electronic database (retrieval rates of 67% for PsycINFO/PsycLIT, 55% for MEDLINE, and 49% for Embase). The retrieval rates varied depending on the quality of handsearching, type of electronic search strategy used (e.g., simple, complex or CHSSS), and type of trial reports searched (e.g., full reports, conference abstracts, etc.). The authors concluded that handsearching was particularly important in identifying full trials published in nonindexed journals and in languages other than English, as well as those published as abstracts and letters. 20

The effectiveness of checking reference lists to retrieve additional relevant studies for an SR was investigated by Horsley et al. 22 The review reported that checking reference lists yielded 2.5%–40% more studies depending on the quality and comprehensiveness of the electronic search used. The authors conclude that there is some evidence, although from poor quality studies, to support use of checking reference lists to supplement database searching. 22

3.3.3. Methods for selecting studies

Three approaches relevant to reviewer characteristics, including number, experience, and blinding of reviewers involved in the screening process were highlighted in an SR by Robson et al. 14 Based on the retrieved evidence, the authors recommended that two independent, experienced, and unblinded reviewers be involved in study selection. 14 A modified approach has also been suggested by the review authors, where one reviewer screens and the other reviewer verifies the list of excluded studies, when the resources are limited. It should be noted however this suggestion is likely based on the authors’ opinion, as there was no evidence related to this from the studies included in the review.

Robson et al. 14 also reported two methods describing the use of technology for screening studies: use of Google Translate for translating languages (for example, German language articles to English) to facilitate screening was considered a viable method, while using two computer monitors for screening did not increase the screening efficiency in SR. Title‐first screening was found to be more efficient than simultaneous screening of titles and abstracts, although the gain in time with the former method was lesser than the latter. Therefore, considering that the search results are routinely exported as titles and abstracts, Robson et al. 14 recommend screening titles and abstracts simultaneously. However, the authors note that these conclusions were based on very limited number (in most instances one study per method) of low‐quality studies. 14

3.3.4. Methods for data extraction

Robson et al. 14 examined three approaches for data extraction relevant to reviewer characteristics, including number, experience, and blinding of reviewers (similar to the study selection step). Although based on limited evidence from a small number of studies, the authors recommended use of two experienced and unblinded reviewers for data extraction. The experience of the reviewers was suggested to be especially important when extracting continuous outcomes (or quantitative) data. However, when the resources are limited, data extraction by one reviewer and a verification of the outcomes data by a second reviewer was recommended.

As for the methods involving use of technology, Robson et al. 14 identified limited evidence on the use of two monitors to improve the data extraction efficiency and computer‐assisted programs for graphical data extraction. However, use of Google Translate for data extraction in non‐English articles was not considered to be viable. 14 In the same review, Robson et al. 14 identified evidence supporting contacting authors for obtaining additional relevant data.

3.3.5. Methods for RoB assessment

Two SRs examined the impact of blinding of reviewers for RoB assessments. 14 , 23 Morissette et al. 23 investigated the mean differences between the blinded and unblinded RoB assessment scores and found inconsistent differences among the included studies providing no definitive conclusions. Similar conclusions were drawn in a more recent review by Robson et al., 14 which included four studies on reviewer blinding for RoB assessment that completely overlapped with Morissette et al. 23

Use of experienced reviewers and provision of additional guidance for RoB assessment were examined by Robson et al. 14 The review concluded that providing intensive training and guidance on assessing studies reporting insufficient data to the reviewers improves RoB assessments. 14 Obtaining additional data related to quality assessment by contacting study authors was also found to help the RoB assessments, although based on limited evidence. When assessing the qualitative or mixed method reviews, Robson et al. 14 recommends the use of a structured RoB tool as opposed to an unstructured tool. No SRs were identified on data synthesis and CoE assessment and reporting steps.

4. DISCUSSION

4.1. Summary of findings

Nine SRs examining 24 unique methods used across five steps in the SR process were identified in this overview. The collective evidence supports some current traditional and modified SR practices, while challenging other approaches. However, the quality of the included reviews was assessed to be moderate at best and in the majority of the included SRs, evidence related to the evaluated methods was obtained from very limited numbers of primary studies. As such, the interpretations from these SRs should be made cautiously.

The evidence gathered from the included SRs corroborate a few current SR approaches. 5 For example, it is important to search multiple resources for identifying relevant trials (RCTs and/or CCTs). The resources must include a combination of electronic database searching, handsearching, and reference lists of retrieved articles. 5 However, no SRs have been identified that evaluated the impact of the number of electronic databases searched. A recent study by Halladay et al. 27 found that articles on therapeutic intervention, retrieved by searching databases other than PubMed (including Embase), contributed only a small amount of information to the MA and also had a minimal impact on the MA results. The authors concluded that when the resources are limited and when large number of studies are expected to be retrieved for the SR or MA, PubMed‐only search can yield reliable results. 27

Findings from the included SRs also reiterate some methodological modifications currently employed to “expedite” the SR process. 10 , 11 For example, excluding non‐English language trials and gray/unpublished trials from MA have been shown to have minimal or no impact on the results of MA. 24 , 26 However, the efficiency of these SR methods, in terms of time and the resources used, have not been evaluated in the included SRs. 24 , 26 Of the SRs included, only two have focused on the aspect of efficiency 14 , 25 ; O'Mara‐Eves et al. 25 report some evidence to support the use of text‐mining approaches for title and abstract screening in order to increase the rate of screening. Moreover, only one included SR 14 considered primary studies that evaluated reliability (inter‐ or intra‐reviewer consistency) and accuracy (validity when compared against a “gold standard” method) of the SR methods. This can be attributed to the limited number of primary studies that evaluated these outcomes when evaluating the SR methods. 14 Lack of outcome measures related to reliability, accuracy, and efficiency precludes making definitive recommendations on the use of these methods/modifications. Future research studies must focus on these outcomes.
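As a rough sketch of what text-mining-assisted screening can look like in practice (not the specific tools reviewed by O'Mara-Eves et al.), the snippet below trains a simple classifier on records already screened by reviewers and ranks the remaining titles and abstracts so that the records most likely to be relevant are screened first; it assumes scikit-learn is available and that the screened set contains both included and excluded records.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def rank_unscreened(screened_texts, screened_labels, unscreened_texts):
        """Rank unscreened records by predicted relevance (most likely relevant first).

        screened_labels: 1 = included at title/abstract stage, 0 = excluded.
        """
        vectorizer = TfidfVectorizer(stop_words="english")
        X_train = vectorizer.fit_transform(screened_texts)
        model = LogisticRegression(max_iter=1000).fit(X_train, screened_labels)
        scores = model.predict_proba(vectorizer.transform(unscreened_texts))[:, 1]
        return sorted(range(len(unscreened_texts)), key=lambda i: scores[i], reverse=True)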

Some evaluated methods may be relevant to multiple steps; for example, exclusions based on publication status (gray/unpublished literature) and language of publication (non‐English language studies) can be outlined in the a priori eligibility criteria or can be incorporated as search limits in the search strategy. SRs included in this overview focused on the effect of study exclusions on pooled treatment effect estimates or MA conclusions. Excluding studies from the search results, after conducting a comprehensive search, based on different eligibility criteria may yield different results when compared to the results obtained when limiting the search itself. 28 Further studies are required to examine this aspect.

Although we acknowledge the lack of standardized quality assessment tools for methodological study designs, we adhered to the Cochrane criteria for identifying SRs in this overview. This was done to ensure consistency in the quality of the included evidence. As a result, we excluded three reviews that did not provide any form of discussion on the quality of the included studies. The methods investigated in these reviews concern supplementary search, 29 data extraction, 12 and screening. 13 However, methods reported in two of these three reviews, by Mathes et al. 12 and Waffenschmidt et al., 13 have also been examined in the SR by Robson et al., 14 which was included in this overview; in most instances (with the exception of one study included in Mathes et al. 12 and Waffenschmidt et al. 13 each), the studies examined in these excluded reviews overlapped with those in the SR by Robson et al. 14

One of the key gaps in the knowledge observed in this overview was the dearth of SRs on the methods used in the data synthesis component of SR. Narrative and quantitative syntheses are the two most commonly used approaches for synthesizing data in evidence synthesis. 5 There are some published studies on the proposed indications and implications of these two approaches. 30 , 31 These studies found that both data synthesis methods produced comparable results and have their own advantages, suggesting that the choice of the method must be based on the purpose of the review. 31 With increasing number of “expedited” SR approaches (so called “rapid reviews”) avoiding MA, 10 , 11 further research studies are warranted in this area to determine the impact of the type of data synthesis on the results of the SR.

4.2. Implications for future research

The findings of this overview highlight several areas of paucity in primary research and evidence synthesis on SR methods. First, no SRs were identified on methods used in two important components of the SR process, including data synthesis and CoE and reporting. As for the included SRs, a limited number of evaluation studies have been identified for several methods. This indicates that further research is required to corroborate many of the methods recommended in current SR guidelines. 4 , 5 , 6 , 7 Second, some SRs evaluated the impact of methods on the results of quantitative synthesis and MA conclusions. Future research studies must also focus on the interpretations of SR results. 28 , 32 Finally, most of the included SRs were conducted on specific topics related to the field of health care, limiting the generalizability of the findings to other areas. It is important that future research studies evaluating evidence syntheses broaden the objectives and include studies on different topics within the field of health care.

4.3. Strengths and limitations

To our knowledge, this is the first overview summarizing current evidence from SRs and MA on different methodological approaches used in several fundamental steps in SR conduct. The overview methodology followed well established guidelines and strict criteria defined for the inclusion of SRs.

There are several limitations related to the nature of the included reviews. Evidence for most of the methods investigated in the included reviews was derived from a limited number of primary studies. Also, the majority of the included SRs may be considered outdated as they were published (or last updated) more than 5 years ago 33 ; only three of the nine SRs have been published in the last 5 years. 14 , 25 , 26 Therefore, important and recent evidence related to these topics may not have been included. Substantial numbers of included SRs were conducted in the field of health, which may limit the generalizability of the findings. Some method evaluations in the included SRs focused on quantitative analyses components and MA conclusions only. As such, the applicability of these findings to SR more broadly is still unclear. 28 Considering the methodological nature of our overview, limiting the inclusion of SRs according to the Cochrane criteria might have resulted in missing some relevant evidence from those reviews without a quality assessment component. 12 , 13 , 29 Although the included SRs performed some form of quality appraisal of the included studies, most of them did not use a standardized RoB tool, which may impact the confidence in their conclusions. Due to the type of outcome measures used for the method evaluations in the primary studies and the included SRs, some of the identified methods have not been validated against a reference standard.

Some limitations in the overview process must be noted. While our literature search was exhaustive covering five bibliographic databases and supplementary search of reference lists, no gray sources or other evidence resources were searched. Also, the search was primarily conducted in health databases, which might have resulted in missing SRs published in other fields. Moreover, only English language SRs were included for feasibility. As the literature search retrieved large number of citations (i.e., 41,556), the title and abstract screening was performed by a single reviewer, calibrated for consistency in the screening process by another reviewer, owing to time and resource limitations. These might have potentially resulted in some errors when retrieving and selecting relevant SRs. The SR methods were grouped based on key elements of each recommended SR step, as agreed by the authors. This categorization pertains to the identified set of methods and should be considered subjective.

5. CONCLUSIONS

This overview identified limited SR‐level evidence on various methodological approaches currently employed during five of the seven fundamental steps in the SR process. Limited evidence was also identified on some methodological modifications currently used to expedite the SR process. Overall, findings highlight the dearth of SRs on SR methodologies, warranting further work to confirm several current recommendations on conventional and expedited SR processes.

CONFLICT OF INTEREST

The authors declare no conflicts of interest.

Supporting information

APPENDIX A: Detailed search strategies

ACKNOWLEDGMENTS

The first author is supported by a La Trobe University Full Fee Research Scholarship and a Graduate Research Scholarship.

Open Access Funding provided by La Trobe University.

Veginadu P, Calache H, Gussy M, Pandian A, Masood M. An overview of methodological approaches in systematic reviews. J Evid Based Med. 2022;15:39–54. https://doi.org/10.1111/jebm.12468

Unriddle

Detailed Comparison: Systematic Review vs Literature Review

Systematic review vs literature review. Discover the unique characteristics of each type of review and determine which suits your research needs.


Key components of a systematic review include:

  • Protocol development
  • Study selection
  • Data extraction
  • Quality assessment

Defining features of a systematic review include:

  • Pre-specified eligibility criteria
  • Systematic search strategy
  • Assessment of the validity of findings
  • Interpretation and presentation of results

Typical structural elements of a literature review include:

  • Introduction
  • Reference list

Review questions are often structured using the PICO framework:

  • Population (or patients)
  • Intervention
  • Comparison
  • Outcome



UWF Libraries

What are the types of reviews?


As you begin searching through the literature for evidence, you will come across different types of publications. Below are examples of the most common types and explanations of what they are. Although systematic reviews and meta-analyses are considered the highest quality of evidence, not every topic will have an SR or MA.


Remember, a literature review provides an overview of a topic. There may or may not be a method for how studies are collected or interpreted. Lit reviews aren't always obviously labeled "literature review"; they may be embedded within sections such as the introduction or background. You can figure this out by reading the article. 

  • Pandemics Through History. Full citation: Sampath, S., Khedr, A., Qmar, S., Tekin, A., Singh, R., Green, R., & Kashyap, R. (2021). Pandemics Through History. Cureus, 13(9).
  • The Evolution of Public Health Genomics: Exploring Its Past, Present, and Future. Full citation: Molster, C. M., et al. (2018). The Evolution of Public Health Genomics: Exploring Its Past, Present, and Future. Frontiers in Public Health, 6(247).

Systematic reviews address a clinical question. Studies are gathered using a specific, defined set of criteria.

  • Selection criteria is defined
  • The words "systematic review" may appear in the title or abstract
  • Note that Cochrane Reviews are systematic reviews
  • Additional reviews can be found by using a systematic review limit 
  • A systematic review of the mental health changes of children and young people before and during the COVID‑19 pandemic. Full citation: Kauhanen, L., et al. (2023). A systematic review of the mental health changes of children and young people before and during the COVID‑19 pandemic. European Child & Adolescent Psychiatry, 32, 995–1013.
  • Protocol for a systematic review to understand the long-term mental-health effects of influenza pandemics in the pre-COVID-19 era. Full citation: Dinka, J., et al. (2023). Protocol for a systematic review to understand the long-term mental-health effects of influenza pandemics in the pre-COVID-19 era. Scandinavian Journal of Public Health.
  • Cochrane Library (Wiley) This link opens in a new window Over 5000 reviews of research on medical treatments, practices, and diagnostic tests are provided in this database. Cochrane Reviews is the premier resource for Evidence Based Practice.
  • PubMed (NLM) This link opens in a new window PubMed comprises more than 22 million citations for biomedical literature from MEDLINE, life science journals, and online books.

A meta-analysis is a study that combines data from OTHER studies. Results from multiple studies are pooled and analyzed statistically to determine whether a clinical intervention has a significant effect. For example, suppose you want to examine a specific headache intervention without running a clinical trial. You can look at other articles that studied that intervention, combine the participants from those articles, and run a statistical analysis to test whether your results are significant. Guess what? There's a lot of math.

  • Include the words "meta-analysis" or "meta analysis" in your keywords
  • Meta-analyses will always be accompanied by a systematic review, but a systematic review may not have a meta-analysis
  • See if the abstract or results section mention a meta-analysis
  • Use databases like Cochrane or PubMed
  • Effect of the COVID-19 pandemic on the proportion of physically active children and adults worldwide: A systematic review and meta-analysis. Full citation: Chaabna, K., et al. (2022). Effect of the COVID-19 pandemic on the proportion of physically active children and adults worldwide: A systematic review and meta-analysis. Frontiers in Public Health, 10, 1–14.
  • Effects of training and competition on the sleep of elite athletes: a systematic review and meta-analysis. Full citation: Zhang, C. C. (2020). Utilization of public health care by people with private health insurance: a systematic review and meta-analysis. BMC Public Health, 20(1), 1–12.

Systematic Reviews & Literature Reviews

Evidence synthesis: part 1.

This blog post is the first in a series exploring Evidence Synthesis. We're going to start by looking at two types of evidence synthesis: literature reviews and systematic reviews. To help me with this topic I looked at a number of research guides from other institutions, e.g., Cornell University Libraries.

The Key Differences Between a Literature Review and a Systematic Review

Overall, while both literature reviews and systematic reviews involve reviewing existing research literature, systematic reviews adhere to more rigorous and transparent methods to minimize bias and provide robust evidence to inform decision-making in education and other fields. If you are interested in learning about other types of evidence synthesis, this decision tree created by Cornell Libraries (Robinson, n.d.) is a nice visual introduction.

Along with exploring evidence synthesis, I am also interested in generative A.I. I want to be transparent about how I used A.I. to create the table above. I fed this prompt into ChatGPT:

“ List the differences between a literature review and a systemic review for a graduate student of education “

I wanted to see what it would produce. I reformatted the list into a table so that it would be easier to compare and contrast these two reviews, much like the one created by Cornell University Libraries (Kibbee, 2024). I think ChatGPT did a pretty good job. I did have to do quite a bit of editing and make sure that what was created matched what I already knew. There are things ChatGPT left out, for example time frames and how many people are needed for a systematic review, but we can revisit that in a later post.

Kibbee, M. (2024, April 10). Libguides: A guide to evidence synthesis: Cornell University Library Evidence Synthesis Service. Cornell University Library. https://guides.library.cornell.edu/evidence-synthesis/intro


  • Open access
  • Published: 19 April 2024

Person-centered care assessment tool with a focus on quality healthcare: a systematic review of psychometric properties

  • Lluna Maria Bru-Luna 1 ,
  • Manuel Martí-Vilar 2 ,
  • César Merino-Soto 3 ,
  • José Livia-Segovia 4 ,
  • Juan Garduño-Espinosa 5 &
  • Filiberto Toledano-Toledano 5 , 6 , 7  

BMC Psychology, volume 12, Article number: 217 (2024)


The person-centered care (PCC) approach plays a fundamental role in ensuring quality healthcare. The Person-Centered Care Assessment Tool (P-CAT) is one of the shortest and simplest tools currently available for measuring PCC. The objective of this study was to conduct a systematic review of the evidence in validation studies of the P-CAT, taking the “Standards” as a frame of reference.

First, a systematic literature review was conducted following the PRISMA method. Second, a systematic descriptive literature review of validity tests was conducted following the “Standards” framework. The search strategy and information sources were obtained from the Cochrane, Web of Science (WoS), Scopus and PubMed databases. With regard to the eligibility criteria and selection process, a protocol was registered in PROSPERO (CRD42022335866), and articles had to meet criteria for inclusion in the systematic review.

A total of seven articles were included. Empirical evidence indicates that these validations offer a high number of sources related to test content, internal structure for dimensionality and internal consistency. A moderate number of sources pertain to internal structure in terms of test-retest reliability and the relationship with other variables. There is little evidence of response processes, internal structure in measurement invariance terms, and test consequences.

The various validations of the P-CAT are not framed in a structured, valid, theory-based procedural framework like the “Standards” are. This can affect clinical practice because people’s health may depend on it. The findings of this study show that validation studies continue to focus on the types of validity traditionally studied and overlook interpretation of the scores in terms of their intended use.


Person-centered care (PCC)

Quality care for people with chronic diseases, functional limitations, or both has become one of the main objectives of medical and care services. The person-centered care (PCC) approach is an essential element not only in achieving this goal but also in providing high-quality health maintenance and medical care [ 1 , 2 , 3 ]. In addition to guaranteeing human rights, PCC provides numerous benefits to both the recipient and the provider [ 4 , 5 ]. Additionally, PCC includes a set of necessary competencies for healthcare professionals to address ongoing challenges in this area [ 6 ]. PCC includes the following elements [ 7 ]: an individualized, goal-oriented care plan based on individuals’ preferences; an ongoing review of the plan and the individual’s goals; support from an interprofessional team; active coordination among all medical and care providers and support services; ongoing information exchange, education and training for providers; and quality improvement through feedback from the individual and caregivers.

There is currently a growing body of literature on the application of PCC. A good example of this is McCormack’s widely known mid-range theory [ 8 ], an internationally recognized theoretical framework for PCC and how it is operationalized in practice. This framework serves as a guide for care practitioners and researchers in hospital settings. Within it, PCC is conceived of as “an approach to practice that is established through the formation and fostering of therapeutic relationships between all care providers, service users, and others significant to them, underpinned by values of respect for persons, [the] individual right to self-determination, mutual respect, and understanding” [ 9 ].

Thus, as established by PCC, it is important to emphasize that reference to the person who is the focus of care refers not only to the recipient but also to everyone involved in a care interaction [ 10 , 11 ]. PCC ensures that professionals are trained in relevant skills and methodology since, as discussed above, carers are among the agents who have the greatest impact on the quality of life of the person in need of care [ 12 , 13 , 14 ]. Furthermore, due to the high burden of caregiving, it is essential to account for caregivers’ well-being. In this regard, studies on professional caregivers are beginning to suggest that the provision of PCC can produce multiple benefits for both the care recipient and the caregiver [ 15 ].

Despite a considerable body of literature and the frequent inclusion of the term in health policy and research [ 16 ], PCC involves several complications. There is no standard consensus on the definition of this concept [ 17 ], which includes problematic areas such as efficacy assessment [ 18 , 19 ]. In addition, the difficulty of measuring the subjectivity involved in identifying the dimensions of PCC and the infrequent use of standardized measures are acute issues [ 20 ]. These limitations motivated the creation of the Person-Centered Care Assessment Tool (P-CAT; [ 21 ]), which emerged from the need for a brief, economical, easily applied, versatile and comprehensive assessment instrument to provide valid and reliable measures of PCC for research purposes [ 21 ].

Person-centered care assessment tool (P-CAT)

There are several instruments that can measure PCC from different perspectives (i.e., the caregiver or the care recipient) and in different contexts (e.g., hospitals and nursing homes). However, from a practical point of view, the P-CAT is one of the shortest and simplest tools and contains all the essential elements of PCC described in the literature. It was developed in Australia to measure the approach of long-term residential settings to older people with dementia, although it is increasingly used in other healthcare settings, such as oncology units [ 22 ] and psychiatric hospitals [ 23 ].

Due to the brevity and simplicity of its application, the versatility of its use in different medical and care contexts, and its potential emic characteristics (i.e., constructs that can be cross-culturally applicable with reasonable and similar structure and interpretation; [ 24 ]), the P-CAT is one of the tests most widely used by professionals to measure PCC [ 25 , 26 ]. Since its creation, it has been adapted in countries separated by wide cultural and linguistic differences, such as Norway [ 27 ], Sweden [ 28 ], China [ 29 ], South Korea [ 30 ], Spain [ 25 ], and Italy [ 31 ].

The P-CAT comprises 13 items rated on a 5-point ordinal scale (from “strongly disagree” to “strongly agree”), with high scores indicating a high degree of person-centeredness. The scale consists of three dimensions: person-centered care (7 items), organizational support (4 items) and environmental accessibility (2 items). In the original study (n = 220; [ 21 ]), the internal consistency of the instrument yielded satisfactory values for the total scale (α = 0.84) and good test-retest reliability (r = .66) at one-week intervals. A reliability generalization study conducted in 2021 [ 32 ] that estimated the internal consistency of the P-CAT and analyzed possible factors that could affect it revealed that the mean α value for the 25 meta-analysis samples (some of which were part of the validations included in this study) was 0.81, and the only variable that had a statistically significant relationship with the reliability coefficient was the mean age of the sample. With respect to internal structure validity, three factors (56% of the total variance) were obtained, and content validity was assessed by experts, literature reviews and stakeholders [ 33 ].
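
As a concrete illustration of the internal-consistency figure reported above, the following sketch computes Cronbach’s α from a response matrix. The data here are randomly generated placeholder responses for a 13-item, 5-point scale like the P-CAT, not actual study data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
# Placeholder data: 220 respondents x 13 items rated 1-5 (random, for illustration only).
responses = rng.integers(1, 6, size=(220, 13))

def cronbach_alpha(x: np.ndarray) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total scores)."""
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1)
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Random, uncorrelated items give an alpha near 0; a coherent scale yields much higher values.
print(f"alpha = {cronbach_alpha(responses):.2f}")
```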

Although not explicitly stated, the apparent commonality between validation studies of different versions of the P-CAT may reflect a decades-old and still influential validity framework that differentiates three categories: content validity, construct validity, and criterion validity [ 34 , 35 ]. However, a reformulation of the validity of the P-CAT within a modern framework, which would provide a different definition of validity, has not been performed.

Scale validity

Traditionally, validation is a process focused on the psychometric properties of a measurement instrument [ 36 ]. In the early 20th century, with the frequent use of standardized measurement tests in education and psychology, two definitions emerged: the first defined validity as the degree to which a test measures what it intends to measure, while the second described the validity of an instrument in terms of the correlation it presents with a variable [ 35 ].

However, over the past century, validity theory has evolved, leading to the understanding that validity should be based on specific interpretations for an intended purpose. It should not be limited to empirically obtained psychometric properties but should also be supported by the theory underlying the construct measured. Thus, speaking of classical or modern validity theory reflects an evolution in how the concept of validity is understood. A classical approach (called classical test theory, CTT) is therefore specifically differentiated from a modern approach. In general, recent concepts associated with a modern view of validity are based on (a) a unitary conception of validity and (b) validity judgments based on inferences and interpretations of the scores of a measure [ 37 , 38 ]. This conceptual advance in the concept of validity led to the creation of a guiding framework for obtaining evidence to support the use and interpretation of the scores obtained by a measure [ 39 ].

This purpose is addressed by the Standards for Educational and Psychological Testing (“Standards”), a guide created by the American Educational Research Association (AERA), the American Psychological Association (APA) and the National Council on Measurement in Education (NCME) in 2014 with the aim of providing guidelines to assess the validity of the interpretations of scores of an instrument based on their intended use. Two conceptual aspects stand out in this modern view of validity: first, validity is a unitary concept centered on the construct; second, validity is defined as “the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests” [ 37 ]. Thus, the “Standards” propose several sources that serve as a reference for assessing different aspects of validity. The five sources of validity evidence are as follows [ 37 ]: test content, response processes, internal structure, relations to other variables and consequences of testing. According to AERA et al. [ 37 ], test content validity refers to the relationship of the administration process, subject matter, wording and format of test items to the construct they are intended to measure. It is measured predominantly with qualitative methods but without excluding quantitative approaches. The validity of the responses is based on analysis of the cognitive processes and interpretation of the items by respondents and is measured with qualitative methods. Internal structure validity is based on the interrelationship between the items and the construct and is measured by quantitative methods. Validity in terms of the relationship with other variables is based on comparison between the variable that the instrument intends to measure and other theoretically relevant external variables and is measured by quantitative methods. Finally, validity based on the consequences of testing analyzes the consequences, both intended and unintended, that may be due to a source of invalidity. It is measured mainly by qualitative methods.

Thus, although validity plays a fundamental role in providing a strong scientific basis for interpretations of test scores, validation studies in the health field have traditionally focused on content validity, criterion validity and construct validity and have overlooked the interpretation and use of scores [ 34 ].

The “Standards” are considered a suitable validity theory-based procedural framework for reviewing the validity of questionnaires due to their ability to analyze sources of validity from both qualitative and quantitative approaches and their evidence-based method [ 35 ]. Nevertheless, due to a lack of knowledge or the lack of a systematic description protocol, very few instruments to date have been reviewed within the framework of the “Standards” [ 39 ].

Current study

Although the P-CAT is one of the most widely used instruments by professionals and has seven validations [ 25 , 27 , 28 , 29 , 30 , 31 , 40 ], no analysis has been conducted of its validity within the framework of the “Standards”. That is, empirical evidence of the validity of the P-CAT has not been obtained in a way that helps to develop a judgment based on a synthesis of the available information.

A review of this type is critical given that some methodological issues seem to have not been resolved in the P-CAT. For example, although the multidimensionality of the P-CAT was identified in the study that introduced it, Bru-Luna et al. [ 32 ] recently stated that in adaptations of the P-CAT [ 25 , 27 , 28 , 29 , 30 , 40 ], the total score is used for interpretation and multidimensionality is disregarded. Thus, the multidimensionality of the original study was apparently not replicated. Bru-Luna et al. [ 32 ] also indicated that the internal structure validity of the P-CAT is usually underreported due to a lack of sufficiently rigorous approaches to establish with certainty how its scores are calculated.

The validity of the P-CAT, specifically its internal structure, appears to be unresolved. Nevertheless, substantive research and professional practice point to this measure as relevant to assessing PCC. This perception is contestable and judgment-based and may not be sufficient to assess the validity of the P-CAT from a cumulative and synthetic angle based on preceding validation studies. An adequate assessment of validity requires a model to conceptualize validity followed by a review of previous studies of the validity of the P-CAT using this model.

Therefore, the main purpose of this study was to conduct a systematic review of the evidence provided by P-CAT validation studies while taking the “Standards” as a framework.

The present study comprises two distinct but interconnected procedures. First, a systematic literature review was conducted following the PRISMA method ( [ 41 ]; Additional file 1; Additional file 2) with the aim of collecting all validations of the P-CAT that have been developed. Second, a systematic description of the validity evidence for each of the P-CAT validations found in the systematic review was developed following the “Standards” framework [ 37 ]. The work of Hawkins et al. [ 39 ], the first study to review validity sources according to the guidelines proposed by the “Standards”, was also used as a reference. Both provided conceptual and pragmatic guidance for organizing and classifying validity evidence for the P-CAT.

The procedure conducted in the systematic review is described below, followed by the procedure for examining the validity studies.

Systematic review

Search strategy and information sources.

Initially, the Cochrane database was searched with the aim of identifying systematic reviews of the P-CAT. When no such reviews were found, subsequent preliminary searches were performed in the Web of Science (WoS), Scopus and PubMed databases. These databases play a fundamental role in recent scientific literature since they are the main sources of published articles that undergo high-quality content and editorial review processes [ 42 ]. The search formula was as follows. The original P-CAT article [ 21 ] was located, after which all articles that cited it through 2021 were identified and analyzed. This approach ensured the inclusion of all validations. No articles were excluded on the basis of language to avoid language bias [ 43 ]. Moreover, to reduce the effects of publication bias, a complementary search in Google Scholar was also performed to allow the inclusion of “gray” literature [ 44 ]. Finally, a manual search was performed through a review of the references of the included articles to identify other articles that met the search criteria but were not present in any of the aforementioned databases.

This process was conducted by one of the authors and corroborated by another using the Covidence tool [ 45 ]. A third author was consulted in case of doubt.

Eligibility criteria and selection process

The protocol was registered in PROSPERO (identification code CRD42022335866), and the search was conducted according to the criteria described below.

The articles had to meet the following criteria for inclusion in the systematic review: (a) a methodological approach to P-CAT validation, (b) experimental or quasi-experimental studies, (c) studies with any type of sample, and (d) studies in any language. We discarded studies that met at least one of the following exclusion criteria: (a) systematic reviews, bibliometric reviews of the instrument, or meta-analyses, or (b) studies published after 2021.

Data collection process

After the articles were selected, the most relevant information was extracted from each article. Fundamental data were recorded in an Excel spreadsheet for each of the sections: introduction, methodology, results and discussion. Information was also recorded about the limitations mentioned in each article as well as the practical implications and suggestions for future research.

Given the aim of the study, information was collected about the sources of validity of each study, including test content (judges’ evaluation, literature review and translation), response processes, internal structure (factor analysis, design, estimator, factor extraction method, factors and items, interfactor R, internal replication, effect of the method, and factor loadings), and relationships with other variables (convergent, divergent, concurrent and predictive validity) and consequences of measurement.

Description of the validity study

To assess the validity of the studies, an Excel table was used. Information was recorded for the seven articles included in the systematic review. The data were extracted directly from the texts of the articles and included information about the authors, the year of publication, the country where each P-CAT validation was produced and each of the five standards proposed in the “Standards” [ 37 ].

The validity source related to internal structure was divided into three sections to record information about dimensionality (e.g., factor analysis, design, estimator, factor extraction method, factors and items, interfactor R, internal replication, effect of the method, and factor loadings), reliability expression (i.e., internal consistency and test-retest) and the study of factorial invariance according to the groups into which it was divided (e.g., sex, age, profession) and the level of study (i.e., metric, intercepts). This approach allowed much more information to be obtained than relying solely on source validity based on internal structure. This division was performed by the same researcher who performed the previous processes.

Study selection and study characteristics

The systematic review process was developed according to the PRISMA methodology [ 41 ].

The WoS, Scopus, PubMed and Google Scholar databases were searched on February 12, 2022, and yielded a total of 485 articles. Of these, 111 were found in WoS, 114 in Scopus, 43 in PubMed and 217 in Google Scholar. In the first phase, the titles and abstracts of all the articles were read. In this first screening, 457 articles were eliminated because they did not include studies with a methodological approach to P-CAT validation, and one article was excluded because it was the original P-CAT article. This resulted in a total of 27 articles, 19 of which were duplicated in different databases and, in the case of Google Scholar, within the same database. This process yielded a total of eight articles that were evaluated for eligibility by a complete reading of the text. In this step, one of the articles was excluded due to a lack of access to the full text of the study [ 31 ] (although the original manuscript was found, it was impossible to access the complete content; in addition, the authors of the manuscript were contacted, but no reply was received). Finally, a manual search was performed by reviewing the references of the seven studies, but none were considered suitable for inclusion. Thus, the review was conducted with a total of seven articles.

Of the seven studies, six were original validations in other languages. These included Norwegian [ 27 ], Swedish [ 28 ], Chinese (which has two validations [ 29 , 40 ]), Spanish [ 25 ], and Korean [ 30 ]. The study by Selan et al. [ 46 ] included a modification of the Swedish version of the P-CAT and explored the psychometric properties of both versions (i.e., the original Swedish version and the modified version).

The study selection and screening process is illustrated in detail in Fig. 1.

Figure 1. PRISMA 2020 flow diagram for new systematic reviews including database searches

Validity analysis

To provide a clear overview of the validity analyses, Table  1 descriptively shows the percentages of items that provide information about the five standards proposed by the “Standards” guide [ 37 ].

The table shows a high number of validity sources related to test content and internal structure in relation to dimensionality and internal consistency, followed by a moderate number of sources for test-retest and relationship with other variables. A rate of 0% is observed for validity sources related to response processes, invariance and test consequences. Below, different sections related to each of the standards are shown, and the information is presented in more detail.

Evidence based on test content

The first standard, which focused on test content, was met by all articles (100%). Translation, which refers to the equivalence of content between the original language and the target language, was met in the six articles that conducted validation in another language and/or culture. These studies reported that the validations were translated by bilingual experts and/or experts in the area of care. In addition, three studies [ 25 , 29 , 40 ] reported that the translation process followed International Test Commission guidelines, such as those of Beaton et al. [ 47 ], Guillemin [ 48 ], Hambleton et al. [ 49 ], and Muñiz et al. [ 50 ]. Evaluation by judges, which referred to the relevance, clarity and importance of the content, was divided into two categories: expert evaluation (a panel of expert judges for each of the areas to consider in the evaluation instrument) and experiential evaluation (potential participants testing the test). The first type of evaluation occurred in three of the articles [ 28 , 29 , 46 ], while the other occurred in two [ 25 , 40 ]. Only one of the articles [ 29 ] reported that the scale contained items that reflected the dimension described in the literature. The validity evidence related to the test content presented in each article can be found in Table 2.

Evidence based on response processes

The second standard, related to the validity of the response process, was obtained according to the “Standards” from the analysis of individual responses: “questioning test takers about their performance strategies or response to particular items (…), maintaining records that monitor the development of a response to a writing task (…), documentation of other aspects of performance, like eye movement or response times…” [ 37 ] (p. 15). According to the analysis of the validity of the response processes, none of the articles complied with this evidence.

Evidence based on internal structure

The third standard, validity related to internal structure, was divided into three sections. First, the dimensionality of each study was examined in terms of factor analysis, design, estimator, factor extraction method, factors and items, interfactor R, internal replication, effect of the method, and factor loadings. Le et al. [ 40 ] conducted an exploratory-confirmatory design, while Sjögren et al. [ 28 ] conducted a confirmatory-exploratory design to assess construct validity using confirmatory factor analysis (CFA) and investigated it further using exploratory factor analysis (EFA). The remaining articles employed only a single form of factor analysis: three employed EFA, and two employed CFA. Regarding the factor extraction method, only three of the articles reported the method used, including Kaiser’s eigenvalue criterion, the scree plot test, parallel analysis and Velicer’s MAP test. Instrument validations yielded a total of two factors in five of the seven articles, while one yielded a single dimension [ 25 ] and the other yielded three dimensions [ 29 ], as in the original instrument. The interfactor R was reported only in the study by Zhong and Lou [ 29 ], whereas in the study by Martínez et al. [ 25 ], it could be easily obtained since it consisted of only one dimension. Internal replication was also calculated in the Spanish validation by randomly splitting the sample into two to test the correlations between factors. The effect of the method was not reported in any of the articles. This information is presented in Table 3 in addition to a summary of the factor loadings.
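
To illustrate one of the factor-extraction criteria mentioned above, the sketch below applies the Kaiser eigenvalue-greater-than-one rule to the correlation matrix of a simulated item set. The data are synthetic placeholders and do not come from any P-CAT sample.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
# Simulate 200 respondents answering 13 items driven by two latent factors (synthetic example).
factors = rng.normal(size=(200, 2))
loadings = rng.uniform(0.4, 0.8, size=(2, 13))
items = factors @ loadings + rng.normal(scale=0.7, size=(200, 13))

# Kaiser criterion: retain as many factors as there are eigenvalues of the
# item correlation matrix greater than 1.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted in descending order
n_factors = int((eigenvalues > 1).sum())

print("Eigenvalues:", np.round(eigenvalues, 2))
print("Factors retained by the Kaiser criterion:", n_factors)
```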

The second section examined reliability. All the studies presented measures of internal consistency, calculated in every case with Cronbach’s α coefficient for both the total scale and the subscales. McDonald’s ω coefficient was not used in any case. Four of the seven articles performed a test-retest analysis. Martínez et al. [ 25 ] conducted a test-retest after a period of seven days, while Le et al. [ 40 ] and Rokstad et al. [ 27 ] performed it between one and two weeks later and Sjögren et al. [ 28 ] allowed approximately two weeks to pass after the initial test.

The third section examined the calculation of invariance, which was not reported in any of the studies.

Evidence based on relationships with other variables

In the fourth standard, based on validity according to the relationship with other variables, the articles that reported it used only convergent validity (i.e., it was hypothesized that the variables related to the construct measured by the test—in this case, person-centeredness—were positively or negatively related to another construct). Discriminant validity hypothesizes that the variables related to the PCC construct are not correlated in any way with any other variable studied. No article (0%) measured discriminant evidence, while four (57%) measured convergent evidence [ 25 , 29 , 30 , 46 ]. Convergent validity was obtained through comparisons with instruments such as the Person-Centered Climate Questionnaire–Staff Version (PCQ-S), the Staff-Based Measures of Individualized Care for Institutionalized Persons with Dementia (IC), the Caregiver Psychological Elder Abuse Behavior Scale (CPEAB), the Organizational Climate (CLIOR) and the Maslach Burnout Inventory (MBI). In the case of Selan et al. [ 46 ], convergent validity was assessed on two items considered by the authors as “crude measures of person-centered care (i.e., external constructs) giving an indication of the instruments’ ability to measure PCC” (p. 4). Concurrent validity, which measures the degree to which the results of one test are or are not similar to those of another test conducted at more or less the same time with the same participants, and predictive validity, which allows predictions to be established regarding behavior based on comparison between the values of the instrument and the criterion, were not reported in any of the studies.

Evidence based on the consequences of testing

The fifth and final standard was related to the consequences of the test. It analyzed the consequences, both intended and unintended, of applying the test to a given sample. None of the articles presented explicit or implicit evidence of this.

The last two sources of validity can be seen in Table  4 .

Table  5 shows the results of the set of validity tests for each study according to the described standards.

The main purpose of this article is to analyze the evidence of validity in different validation studies of the P-CAT. To gather all existing validations, a systematic review of all literature citing this instrument was conducted.

The publication of validation studies of the P-CAT has been constant over the years. Since the publication of the original instrument in 2010, seven validations have been published in other languages (taking into account the Italian version by Brugnolli et al. [ 31 ], which could not be included in this study) as well as a modification of one of these versions. The very unequal distribution of validations between languages and countries is striking. A recent systematic review [ 51 ] revealed that in Europe, the countries where the PCC approach is most widely used are the United Kingdom, Sweden, the Netherlands, Northern Ireland, and Norway. It has also been shown that the neighboring countries seem to exert an influence on each other due to proximity [ 52 ] such that they tend to organize healthcare in a similar way, as is the case for Scandinavian countries. This favors the expansion of PCC and explains the numerous validations we found in this geographical area.

Although this approach is conceived as an essential element of healthcare for most governments [ 53 ], PCC varies according to the different definitions and interpretations attributed to it, which can cause confusion in its application (e.g., between Norway and the United Kingdom [ 54 ]). Moreover, facilitators of or barriers to implementation depend on the context and level of development of each country, and financial support remains one of the main factors in this regard [ 53 ]. This fact explains why PCC is not globally widespread among all territories. In countries where access to healthcare for all remains out of reach for economic reasons, the application of this approach takes a back seat, as does the validation of its assessment tools. In contrast, in a large part of Europe or in countries such as China or South Korea that have experienced decades of rapid economic development, patients are willing to be involved in their medical treatment and enjoy more satisfying and efficient medical experiences and environments [ 55 ], which facilitates the expansion of validations of instruments such as the P-CAT.

Regarding validity testing, the guidelines proposed by the “Standards” [ 37 ] were followed. According to the analysis of the different validations of the P-CAT instrument, none of the studies used a structured validity theory-based procedural framework for conducting validation. The most frequently reported validity tests were on the content of the test and two of the sections into which the internal structure was divided (i.e., dimensionality and internal consistency).

In the present article, the most cited source of validity in the studies was the content of the test because most of the articles were validations of the P-CAT in other languages, and the authors reported that the translation procedure was conducted by experts in all cases. In addition, several of the studies employed International Test Commission guidelines, such as those by Beaton et al. [ 47 ], Guillemin [ 48 ], Hambleton et al. [ 49 ], and Muñiz et al. [ 50 ]. Several studies also assessed the relevance, clarity and importance of the content.

The third source of validity, internal structure, was the next most often reported, although it appeared unevenly among the three sections into which this evidence was divided. Dimensionality and internal consistency were reported in all studies, followed by test-retest consistency. In relation to the first section, factor analysis, a total of five EFAs and four CFAs were presented in the validations. Traditionally, EFA has been used in research to assess dimensionality and identify key psychological constructs, although this approach involves a number of inconveniences, such as difficulty testing measurement invariance and incorporating latent factors into subsequent analyses [ 56 ] or the major problem of factor loading matrix rotation [ 57 ]. Studies eventually began to employ CFA, a technique that overcame some of these obstacles [ 56 ] but had other drawbacks; for example, the strict requirement of zero cross-loadings often does not fit the data well, and misspecification of zero loadings tends to produce distorted factors [ 57 ]. Recently, exploratory structural equation modeling (ESEM) has been proposed. This technique is widely recommended both conceptually and empirically to assess the internal structure of psychological tools [ 58 ] since it overcomes the limitations of EFA and CFA in estimating their parameters [ 56 , 57 ].

The next section addresses reliability, which the included articles reported using Cronbach’s α reliability coefficient. Reliability is defined as a combination of systematic and random influences that determine the observed scores on a psychological test. Reporting the reliability measure ensures that item-based scores are consistent, that the tool’s responses are replicable and that they are not modified solely by random noise [ 59 , 60 ]. Currently, the most commonly employed reliability coefficient in studies with a multi-item measurement scale (MIMS) is Cronbach’s α [ 60 , 61 ].

Cronbach’s α [ 62 ] is based on numerous strict assumptions (e.g., the test must be unidimensional, factor loadings must be equal for all items and item errors should not covary) to estimate internal consistency. These assumptions are difficult to meet, and their violation may produce small reliability estimates [ 60 ]. One of the alternative measures to α that is increasingly recommended by the scientific literature is McDonald’s ω [ 63 ], a composite reliability measure. This coefficient is recommended for congeneric scales in which tau equivalence is not assumed. It has several advantages. For example, estimates of ω are usually robust when the estimated model contains more factors than the true model, even with small samples, or when skewness in univariate item distributions produces lower biases than those found when using α [ 59 ].
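
As a worked illustration of the composite-reliability coefficient discussed here, the sketch below computes McDonald’s ω from a set of standardized factor loadings, using the common formula ω = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] for a unidimensional congeneric scale with uncorrelated errors. The loadings are invented for illustration, not estimates from any P-CAT study.

```python
def mcdonald_omega(loadings):
    """McDonald's omega for a unidimensional congeneric scale.

    omega = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances],
    where each standardized item's error variance is 1 - loading^2.
    """
    sum_loadings = sum(loadings)
    error_variances = sum(1 - l**2 for l in loadings)
    return sum_loadings**2 / (sum_loadings**2 + error_variances)

# Invented standardized loadings for a hypothetical 7-item subscale.
example_loadings = [0.72, 0.65, 0.58, 0.70, 0.61, 0.55, 0.68]
print(f"omega = {mcdonald_omega(example_loadings):.2f}")
```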

The test-retest method was the next most commonly reported internal structure section in these studies. This type of reliability considers the consistency of the scores of a test between two measurements separated by a period [ 64 ]. It is striking that test-retest consistency does not have a prevalence similar to that of internal consistency since, unlike internal consistency, test-retest consistency can be assessed for practically all types of patient-reported outcomes. Some measurement experts even consider it a more relevant expression of reliability than internal consistency, since it plays a fundamental role in the calculation of parameters for health measures [ 64 ]. However, the literature provides little guidance regarding the assessment of this type of reliability.
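
In its simplest form, test-retest reliability boils down to correlating the scores from two administrations of the same instrument. Here is a minimal sketch using Pearson’s correlation on invented total scores from two hypothetical administrations about a week apart; real studies often report an intraclass correlation coefficient instead.

```python
import numpy as np

# Invented total scores for the same 10 respondents at time 1 and roughly one week later (time 2).
time1 = np.array([48, 52, 44, 60, 39, 55, 47, 50, 58, 42])
time2 = np.array([50, 51, 45, 58, 41, 54, 49, 48, 57, 44])

# Test-retest reliability estimated as the Pearson correlation between the two occasions.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")
```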

The internal structure section that was least frequently reported in the studies in this review was invariance. A lack of invariance refers to a difference between scores on a test that is not explained by group differences in the structure it is intended to measure [ 65 ]. The invariance of the measure should be emphasized as a prerequisite in comparisons between groups since “if scale invariance is not examined, item bias may not be fully recognized and this may lead to a distorted interpretation of the bias in a particular psychological measure” [ 65 ].
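
To make the idea of measurement invariance more concrete, the usual sequence of nested multi-group factor models can be written as follows. This is a generic textbook formulation added for illustration, not notation taken from any of the reviewed studies.

```latex
% Multi-group factor model for respondent i in group g:
\[
x_{ig} = \tau_g + \Lambda_g \,\xi_{ig} + \delta_{ig}
\]
% Configural invariance: the same factor pattern holds in every group (parameters free).
% Metric (weak) invariance: \Lambda_1 = \Lambda_2 = \cdots = \Lambda_G (equal loadings).
% Scalar (strong) invariance: additionally \tau_1 = \tau_2 = \cdots = \tau_G (equal intercepts),
% which is the level usually required before comparing group means.
```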

Evidence related to other variables was the next most reported source of validity in the studies included in this review. Specifically, the four studies that reported this evidence did so according to convergent validity and cited several instruments. None of the studies included evidence of discriminant validity, although this may be because there are currently several obstacles related to the measurement of this type of validity [ 66 ]. On the one hand, different definitions are used in the applied literature, which makes its evaluation difficult; on the other hand, the literature on discriminant validity focuses on techniques that require the use of multiple measurement methods, which often seem to have been introduced without sufficient evidence or are applied randomly.

Validity related to response processes was not reported by any of the studies. There are several methods to analyze this validity. These methods can be divided into two groups: “those that directly access the psychological processes or cognitive operations (think aloud, focus group, and interviews), compared to those which provide indirect indicators which in turn require additional inference (eye tracking and response times)” [ 38 ]. However, this validity evidence has traditionally been reported less frequently than others in most studies, perhaps because there are fewer clear and accepted practices on how to design or report these studies [ 67 ].

Finally, the consequences of testing were not reported in any of the studies. There is debate regarding this source of validity, with two main opposing streams of thought. On the one hand, some authors [ 68 , 69 ] suggest that consequences that appear after the application of a test should not derive from any source of test invalidity and that “adverse consequences only undermine the validity of an assessment if they can be attributed to a problem of fit between the test and the construct” (p. 6). In contrast, Cronbach [ 69 , 70 ] notes that adverse social consequences that may result from the application of a test may call into question the validity of the test. However, the potential risks that may arise from the application of a test should be minimized in any case, especially in regard to health assessments. To this end, it is essential that this aspect be assessed by instrument developers and that the experiences of respondents be protected through the development of comprehensive and informed practices [ 39 ].

This work is not without limitations. First, not all published validation studies of the P-CAT, such as the Italian version by Brugnolli et al. [ 31 ], were available. These studies could have provided relevant information. Second, many sources of validity could not be analyzed because the studies provided scant or no data, such as response processes [ 25 , 27 , 28 , 29 , 30 , 40 , 46 ], relationships with other variables [ 27 , 28 , 40 ], consequences of testing [ 25 , 27 , 28 , 29 , 30 , 40 , 46 ], or invariance [ 25 , 27 , 28 , 29 , 30 , 40 , 46 ] in the case of internal structure and interfactor R [ 27 , 28 , 30 , 40 , 46 ], internal replication [ 27 , 28 , 29 , 30 , 40 , 46 ] or the effect of the method [ 25 , 27 , 28 , 29 , 30 , 40 , 46 ] in the case of dimensionality. In the future, it is hoped that authors will become aware of the importance of validity, as shown in this article and many others, and provide data on unreported sources so that comprehensive validity studies can be performed.

The present work also has several strengths. The search was extensive, and many studies were obtained using three different databases, including WoS, one of the most widely used and authoritative databases in the world. This database includes a large number and variety of articles and is not fully automated due to its human team [ 71 , 72 , 73 ]. In addition, to prevent publication bias, gray literature search engines such as Google Scholar were used to avoid the exclusion of unpublished research [ 44 ]. Finally, linguistic bias was prevented by not limiting the search to articles published in only one or two languages, thus avoiding the overrepresentation of studies in one language and underrepresentation in others [ 43 ].

Conclusions

Validity is understood as the degree to which tests and theory support the interpretations of instrument scores for their intended use [ 37 ]. From this perspective, the various validations of the P-CAT are not presented in a structured, valid, theory-based procedural framework like the “Standards” are. After integration and analysis of the results, it was observed that these validation reports offer a high number of sources of validity related to test content, internal structure in dimensionality and internal consistency, a moderate number of sources for internal structure in terms of test-retest reliability and the relationship with other variables, and a very low number of sources for response processes, internal structure in terms of invariance, and test consequences.

Validity plays a fundamental role in ensuring a sound scientific basis for test interpretations because it provides evidence of the extent to which the data provided by the test are valid for the intended purpose. This can affect clinical practice as people’s health may depend on it. In this sense, the “Standards” are considered a suitable and valid theory-based procedural framework for studying this modern conception of questionnaire validity, which should be taken into account in future research in this area.

Although the P-CAT is one of the most widely used instruments for assessing PCC, as shown in this study, its validity has rarely been studied within a comprehensive framework. The developers of measurement tests applied to the health care setting, on which the health and quality of life of many people may depend, should use this validity framework to reflect the clear purpose of the measurement. This approach is important because the equity of decision making by healthcare professionals in daily clinical practice may depend on the source of validity. Through a more extensive study of validity that includes the interpretation of scores in terms of their intended use, the applicability of the P-CAT, an instrument that was initially developed for long-term care homes for elderly people, could be expanded to other care settings. However, the findings of this study show that validation studies continue to focus on traditionally studied types of validity and overlook the interpretation of scores in terms of their intended use.

Data availability

All data relevant to the study were included in the article or uploaded as additional files. Additional template data extraction forms are available from the corresponding author upon reasonable request.

Abbreviations

AERA: American Educational Research Association
APA: American Psychological Association
CFA: Confirmatory factor analysis
CLIOR: Organizational Climate
CPEAB: Caregiver Psychological Elder Abuse Behavior Scale
EFA: Exploratory factor analysis
ESEM: Exploratory structural equation modeling
IC: Staff-Based Measures of Individualized Care for Institutionalized Persons with Dementia
MBI: Maslach Burnout Inventory
MIMS: Multi-item measurement scale
ML: Maximum likelihood
NCME: National Council on Measurement in Education
P-CAT: Person-Centered Care Assessment Tool
PCC: Person-centered care
PCQ-S: Person-Centered Climate Questionnaire–Staff Version
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PROSPERO: International Register of Systematic Review Protocols
“Standards”: Standards for Educational and Psychological Testing
WLSMV: Weighted least square mean and variance adjusted
WoS: Web of Science

Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy; 2001.


International Alliance of Patients’ Organizations. What is patient-centred healthcare? A review of definitions and principles. 2nd ed. London, UK: International Alliance of Patients’ Organizations; 2007.

World Health Organization. WHO global strategy on people-centred and integrated health services: interim report. Geneva, Switzerland: World Health Organization; 2015.

Britten N, Ekman I, Naldemirci Ö, Javinger M, Hedman H, Wolf A. Learning from Gothenburg model of person centred healthcare. BMJ. 2020;370:m2738.


Van Diepen C, Fors A, Ekman I, Hensing G. Association between person-centred care and healthcare providers’ job satisfaction and work-related health: a scoping review. BMJ Open. 2020;10:e042658.


Ekman N, Taft C, Moons P, Mäkitalo Å, Boström E, Fors A. A state-of-the-art review of direct observation tools for assessing competency in person-centred care. Int J Nurs Stud. 2020;109:103634.

American Geriatrics Society Expert Panel on Person-Centered Care. Person-centered care: a definition and essential elements. J Am Geriatr Soc. 2016;64:15–8.


McCormack B, McCance TV. Development of a framework for person-centred nursing. J Adv Nurs. 2006;56:472–9.

McCormack B, McCance T. Person-centred practice in nursing and health care: theory and practice. Chichester, England: Wiley; 2016.

Nolan MR, Davies S, Brown J, Keady J, Nolan J. Beyond person-centred care: a new vision for gerontological nursing. J Clin Nurs. 2004;13:45–53.

McCormack B, McCance T. Person-centred nursing: theory, models and methods. Oxford, UK: Wiley-Blackwell; 2010.


Abraha I, Rimland JM, Trotta FM, Dell’Aquila G, Cruz-Jentoft A, Petrovic M, et al. Systematic review of systematic reviews of non-pharmacological interventions to treat behavioural disturbances in older patients with dementia. The SENATOR-OnTop series. BMJ Open. 2017;7:e012759.

Anderson K, Blair A. Why we need to care about the care: a longitudinal study linking the quality of residential dementia care to residents’ quality of life. Arch Gerontol Geriatr. 2020;91:104226.

Bauer M, Fetherstonhaugh D, Haesler E, Beattie E, Hill KD, Poulos CJ. The impact of nurse and care staff education on the functional ability and quality of life of people living with dementia in aged care: a systematic review. Nurse Educ Today. 2018;67:27–45.

Smythe A, Jenkins C, Galant-Miecznikowska M, Dyer J, Downs M, Bentham P, et al. A qualitative study exploring nursing home nurses’ experiences of training in person centred dementia care on burnout. Nurse Educ Pract. 2020;44:102745.

McCormack B, Borg M, Cardiff S, Dewing J, Jacobs G, Janes N, et al. Person-centredness– the ‘state’ of the art. Int Pract Dev J. 2015;5:1–15.

Wilberforce M, Challis D, Davies L, Kelly MP, Roberts C, Loynes N. Person-centredness in the care of older adults: a systematic review of questionnaire-based scales and their measurement properties. BMC Geriatr. 2016;16:63.

Rathert C, Wyrwich MD, Boren SA. Patient-centered care and outcomes: a systematic review of the literature. Med Care Res Rev. 2013;70:351–79.

Sharma T, Bamford M, Dodman D. Person-centred care: an overview of reviews. Contemp Nurse. 2016;51:107–20.

Ahmed S, Djurkovic A, Manalili K, Sahota B, Santana MJ. A qualitative study on measuring patient-centered care: perspectives from clinician-scientists and quality improvement experts. Health Sci Rep. 2019;2:e140.

Edvardsson D, Fetherstonhaugh D, Nay R, Gibson S. Development and initial testing of the person-centered Care Assessment Tool (P-CAT). Int Psychogeriatr. 2010;22:101–8.

Tamagawa R, Groff S, Anderson J, Champ S, Deiure A, Looyis J, et al. Effects of a provincial-wide implementation of screening for distress on healthcare professionals’ confidence and understanding of person-centered care in oncology. J Natl Compr Canc Netw. 2016;14:1259–66.

Degl’ Innocenti A, Wijk H, Kullgren A, Alexiou E. The influence of evidence-based design on staff perceptions of a supportive environment for person-centered care in forensic psychiatry. J Forensic Nurs. 2020;16:E23–30.

Hulin CL. A psychometric theory of evaluations of item and scale translations: fidelity across languages. J Cross Cult Psychol. 1987;18:115–42.

Martínez T, Suárez-Álvarez J, Yanguas J, Muñiz J. Spanish validation of the person-centered Care Assessment Tool (P-CAT). Aging Ment Health. 2016;20:550–8.

Martínez T, Martínez-Loredo V, Cuesta M, Muñiz J. Assessment of person-centered care in gerontology services: a new tool for healthcare professionals. Int J Clin Health Psychol. 2020;20:62–70.

Rokstad AM, Engedal K, Edvardsson D, Selbaek G. Psychometric evaluation of the Norwegian version of the person-centred Care Assessment Tool. Int J Nurs Pract. 2012;18:99–105.

Sjögren K, Lindkvist M, Sandman PO, Zingmark K, Edvardsson D. Psychometric evaluation of the Swedish version of the person-centered Care Assessment Tool (P-CAT). Int Psychogeriatr. 2012;24:406–15.

Zhong XB, Lou VW. Person-centered care in Chinese residential care facilities: a preliminary measure. Aging Ment Health. 2013;17:952–8.

Tak YR, Woo HY, You SY, Kim JH. Validity and reliability of the person-centered Care Assessment Tool in long-term care facilities in Korea. J Korean Acad Nurs. 2015;45:412–9.

Brugnolli A, Debiasi M, Zenere A, Zanolin ME, Baggia M. The person-centered Care Assessment Tool in nursing homes: psychometric evaluation of the Italian version. J Nurs Meas. 2020;28:555–63.

Bru-Luna LM, Martí-Vilar M, Merino-Soto C, Livia J. Reliability generalization study of the person-centered Care Assessment Tool. Front Psychol. 2021;12:712582.

Edvardsson D, Innes A. Measuring person-centered care: a critical comparative review of published tools. Gerontologist. 2010;50:834–46.

Hawkins M, Elsworth GR, Nolte S, Osborne RH. Validity arguments for patient-reported outcomes: justifying the intended interpretation and use of data. J Patient Rep Outcomes. 2021;5:64.

Sireci SG. On the validity of useless tests. Assess Educ Princ Policy Pract. 2016;23:226–35.

Hawkins M, Elsworth GR, Osborne RH. Questionnaire validation practice: a protocol for a systematic descriptive literature review of health literacy assessments. BMJ Open. 2019;9:e030753.

American Educational Research Association, American Psychological Association. National Council on Measurement in Education. Standards for educational and psychological testing. Washington, DC: American Educational Research Association; 2014.

Padilla JL, Benítez I. Validity evidence based on response processes. Psicothema. 2014;26:136–44.


Hawkins M, Elsworth GR, Hoban E, Osborne RH. Questionnaire validation practice within a theoretical framework: a systematic descriptive literature review of health literacy assessments. BMJ Open. 2020;10:e035974.

Le C, Ma K, Tang P, Edvardsson D, Behm L, Zhang J, et al. Psychometric evaluation of the Chinese version of the person-centred Care Assessment Tool. BMJ Open. 2020;10:e031580.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int J Surg. 2021;88:105906.

Falagas ME, Pitsouni EI, Malietzis GA, Pappas G. Comparison of PubMed, Scopus, web of Science, and Google Scholar: strengths and weaknesses. FASEB J. 2008;22:338–42.

Grégoire G, Derderian F, Le Lorier J. Selecting the language of the publications included in a meta-analysis: is there a tower of Babel bias? J Clin Epidemiol. 1995;48:159–63.

Arias MM. Aspectos metodológicos Del metaanálisis (1). Pediatr Aten Primaria. 2018;20:297–302.

Covidence. Covidence systematic review software. Veritas Health Innovation, Australia. 2014. https://www.covidence.org/ . Accessed 28 Feb 2022.

Selan D, Jakobsson U, Condelius A. The Swedish P-CAT: modification and exploration of psychometric properties of two different versions. Scand J Caring Sci. 2017;31:527–35.

Beaton DE, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine (Phila Pa 1976). 2000;25:3186–91.

Guillemin F. Cross-cultural adaptation and validation of health status measures. Scand J Rheumatol. 1995;24:61–3.

Hambleton R, Merenda P, Spielberger C. Adapting educational and psychological tests for cross-cultural assessment. Mahwah, NJ: Lawrence Erlbaum Associates; 2005.

Muñiz J, Elosua P, Hambleton RK. International test commission guidelines for test translation and adaptation: second edition. Psicothema. 2013;25:151–7.

Rosengren K, Brannefors P, Carlstrom E. Adoption of the concept of person-centred care into discourse in Europe: a systematic literature review. J Health Organ Manag. 2021;35:265–80.

Alharbi T, Olsson LE, Ekman I, Carlström E. The impact of organizational culture on the outcome of hospital care: after the implementation of person-centred care. Scand J Public Health. 2014;42:104–10.

Bensbih S, Souadka A, Diez AG, Bouksour O. Patient centered care: focus on low and middle income countries and proposition of new conceptual model. J Med Surg Res. 2020;7:755–63.

Stranz A, Sörensdotter R. Interpretations of person-centered dementia care: same rhetoric, different practices? A comparative study of nursing homes in England and Sweden. J Aging Stud. 2016;38:70–80.

Zhou LM, Xu RH, Xu YH, Chang JH, Wang D. Inpatients’ perception of patient-centered care in Guangdong province, China: a cross-sectional study. Inquiry. 2021. https://doi.org/10.1177/00469580211059482 .

Marsh HW, Morin AJ, Parker PD, Kaur G. Exploratory structural equation modeling: an integration of the best features of exploratory and confirmatory factor analysis. Annu Rev Clin Psychol. 2014;10:85–110.

Asparouhov T, Muthén B. Exploratory structural equation modeling. Struct Equ Model Multidiscip J. 2009;16:397–438.

Cabedo-Peris J, Martí-Vilar M, Merino-Soto C, Ortiz-Morán M. Basic empathy scale: a systematic review and reliability generalization meta-analysis. Healthc (Basel). 2022;10:29–62.

Flora DB. Your coefficient alpha is probably wrong, but which coefficient omega is right? A tutorial on using R to obtain better reliability estimates. Adv Methods Pract Psychol Sci. 2020;3:484–501.

McNeish D. Thanks coefficient alpha, we’ll take it from here. Psychol Methods. 2018;23:412–33.

Hayes AF, Coutts JJ. Use omega rather than Cronbach’s alpha for estimating reliability. But… Commun Methods Meas. 2020;14:1–24.

Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.

McDonald R. Test theory: a unified approach. Mahwah, NJ: Erlbaum; 1999.

Polit DF. Getting serious about test-retest reliability: a critique of retest research and some recommendations. Qual Life Res. 2014;23:1713–20.

Ceylan D, Çizel B, Karakaş H. Testing destination image scale invariance for intergroup comparison. Tour Anal. 2020;25:239–51.

Rönkkö M, Cho E. An updated guideline for assessing discriminant validity. Organ Res Methods. 2022;25:6–14.

Hubley A, Zumbo B. Response processes in the context of validity: setting the stage. In: Zumbo B, Hubley A, editors. Understanding and investigating response processes in validation research. Cham, Switzerland: Springer; 2017. pp. 1–12.

Messick S. Validity of performance assessments. In: Philips G, editor. Technical issues in large-scale performance assessment. Washington, DC: Department of Education, National Center for Education Statistics; 1996. pp. 1–18.

Moss PA. The role of consequences in validity theory. Educ Meas Issues Pract. 1998;17:6–12.

Cronbach L. Five perspectives on validity argument. In: Wainer H, editor. Test validity. Hillsdale, MI: Erlbaum; 1988. pp. 3–17.

Birkle C, Pendlebury DA, Schnell J, Adams J. Web of Science as a data source for research on scientific and scholarly activity. Quant Sci Stud. 2020;1:363–76.

Bramer WM, Rethlefsen ML, Kleijnen J, Franco OH. Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study. Syst Rev. 2017;6:245.

Web of Science Group. Editorial selection process. Clarivate. 2024. https://clarivate.com/webofsciencegroup/solutions/%20editorial-selection-process/ . Accessed 12 Sept 2022.


Acknowledgements

The authors thank the casual helpers for their aid in information processing and searching.

This work is one of the results of research project HIM/2015/017/SSA.1207, “Effects of mindfulness training on psychological distress and quality of life of the family caregiver”. Main researcher: Filiberto Toledano-Toledano Ph.D. The present research was funded by federal funds for health research and was approved by the Commissions of Research, Ethics and Biosafety (Comisiones de Investigación, Ética y Bioseguridad), Hospital Infantil de México Federico Gómez, National Institute of Health. The source of federal funds did not control the study design, data collection, analysis, or interpretation, or decisions regarding publication.

Author information

Authors and affiliations.

Departamento de Educación, Facultad de Ciencias Sociales, Universidad Europea de Valencia, 46010, Valencia, Spain

Lluna Maria Bru-Luna

Departamento de Psicología Básica, Universitat de València, Blasco Ibáñez Avenue, 21, 46010, Valencia, Spain

Manuel Martí-Vilar

Departamento de Psicología, Instituto de Investigación de Psicología, Universidad de San Martín de Porres, Tomás Marsano Avenue 242, Lima 34, Perú

César Merino-Soto

Instituto Central de Gestión de la Investigación, Universidad Nacional Federico Villarreal, Carlos Gonzalez Avenue 285, 15088, San Miguel, Perú

José Livia-Segovia

Unidad de Investigación en Medicina Basada en Evidencias, Hospital Infantil de México Federico Gómez, Instituto Nacional de Salud, Dr. Márquez 162, 06720, Doctores, Cuauhtémoc, Mexico

Juan Garduño-Espinosa & Filiberto Toledano-Toledano

Unidad de Investigación Multidisciplinaria en Salud, Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra, México-Xochimilco 289, Arenal de Guadalupe, 14389, Tlalpan, Mexico City, Mexico

Filiberto Toledano-Toledano

Dirección de Investigación y Diseminación del Conocimiento, Instituto Nacional de Ciencias e Innovación para la Formación de Comunidad Científica, INDEHUS, Periférico Sur 4860, Arenal de Guadalupe, 14389, Tlalpan, Mexico City, Mexico


Contributions

L.M.B.L. conceptualized the study, collected the data, performed the formal analysis, wrote the original draft, and reviewed and edited the subsequent drafts. M.M.V. collected the data and reviewed and edited the subsequent drafts. C.M.S. collected the data, performed the formal analysis, wrote the original draft, and reviewed and edited the subsequent drafts. J.L.S. collected the data, wrote the original draft, and reviewed and edited the subsequent drafts. J.G.E. collected the data and reviewed and edited the subsequent drafts. F.T.T. conceptualized the study and reviewed and edited the subsequent drafts. L.M.B.L. conceptualized the study and reviewed and edited the subsequent drafts. M.M.V. conceptualized the study and reviewed and edited the subsequent drafts. C.M.S. reviewed and edited the subsequent drafts. J.G.E. reviewed and edited the subsequent drafts. F.T.T. conceptualized the study; provided resources, software, and supervision; wrote the original draft; and reviewed and edited the subsequent drafts.

Corresponding author

Correspondence to Filiberto Toledano-Toledano.

Ethics declarations

Ethics approval and consent to participate

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Commissions of Research, Ethics and Biosafety (Comisiones de Investigación, Ética y Bioseguridad), Hospital Infantil de México Federico Gómez, National Institute of Health, under project HIM/2015/017/SSA.1207, “Effects of mindfulness training on psychological distress and quality of life of the family caregiver”.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Bru-Luna, L.M., Martí-Vilar, M., Merino-Soto, C. et al. Person-centered care assessment tool with a focus on quality healthcare: a systematic review of psychometric properties. BMC Psychol 12, 217 (2024). https://doi.org/10.1186/s40359-024-01716-7


Received: 17 May 2023

Accepted: 07 April 2024

Published: 19 April 2024

DOI: https://doi.org/10.1186/s40359-024-01716-7


Exploring data sources and mathematical approaches for estimating human mobility rates and implications for understanding COVID-19 dynamics: a systematic literature review

Affiliations

  • 1 Department of Mathematics, Central University of Rajasthan, Kishangarh, Ajmer, 305817, India.
  • 2 Department of Mathematics, Central University of Rajasthan, Kishangarh, Ajmer, 305817, India. [email protected].
  • 3 Intercollegiate Biomathematics Alliance, Illinois State University, Normal, USA.
  • 4 Kalam Institute of Health Technology, Visakhapatnam, India.
  • PMID: 38641762
  • DOI: 10.1007/s00285-024-02082-z

Human mobility, which refers to the movement of people from one location to another, is believed to be one of the key factors shaping the dynamics of the COVID-19 pandemic. Multiple factors can change human mobility patterns, such as fear of infection, control measures restricting movement, economic opportunities, and political instability. Human mobility rates are complex to estimate because movement occurs on various time scales, depending on the context and the factors driving it. For example, short-term movements are influenced by the daily work schedule, whereas long-term trends can be due to seasonal employment opportunities. The goal of the study is to perform a literature review to: (i) identify relevant data sources that can be used to estimate human mobility rates at different time scales, (ii) understand how a variety of data have been used to measure movement trends under different contexts of mobility change, and (iii) unravel the associations between human mobility rates and the social determinants of health affecting COVID-19 disease dynamics. A systematic review of the literature was carried out to collect relevant articles on human mobility. Our study highlights the use of three major sources of mobility data: public transit, mobile phones, and social surveys. The results also provide an analysis of how mobility metrics are estimated from these diverse data sources. All major factors that directly and indirectly influenced human mobility during the spread of COVID-19 are explored. Our study recommends that (a) a significant balance between primitive and newly estimated mobility parameters needs to be maintained, (b) the accuracy and applicability of mobility data sources should be improved, and (c) broader interdisciplinary collaboration in movement-based research should be encouraged, as it is crucial for advancing the study of COVID-19 dynamics among scholars from various disciplines.
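To make the idea of a mobility metric more concrete, the sketch below shows one common way such a rate is summarized: the relative change in daily trip counts against a pre-pandemic baseline. This is an illustrative example added for this guide rather than the method of the study above; the trip counts, the baseline window, and the percent-change definition are all assumptions.

```python
# Illustrative only: a minimal mobility-rate calculation. Assumes daily trip
# counts (e.g., transit gate entries or mobile-phone trip estimates) and a
# pre-pandemic baseline week. All numbers are made up for demonstration.

baseline_trips = [1050, 980, 1120, 1010, 995, 1080, 1030]  # hypothetical baseline week
observed_trips = [610, 580, 650, 590, 560, 630, 600]       # hypothetical lockdown week

baseline_mean = sum(baseline_trips) / len(baseline_trips)

# Mobility expressed as percent change from the baseline mean, one value per day.
mobility_change = [100 * (day - baseline_mean) / baseline_mean for day in observed_trips]

print([round(x, 1) for x in mobility_change])
# Values around -40 mean daily trips fell to roughly 60% of the baseline level.
```

Public mobility dashboards released during the pandemic reported essentially this kind of percent-change-from-baseline figure, which is one reason it works as a common denominator across transit, mobile-phone, and survey data sources.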

Keywords: COVID-19 infection; Human mobility data; Mobility metrics; Mobility rates.

© 2024. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.

Publication types

  • Systematic Review

MeSH terms

  • COVID-19* / epidemiology
  • Information Sources

Grants and funding

  • MTR/2022/001028 / Science and Engineering Research Board, India

COMMENTS

  1. Types of Literature Reviews

    Rapid review. Assessment of what is already known about a policy or practice issue, by using systematic review methods to search and critically appraise existing research. Completeness of searching determined by time constraints. Time-limited formal quality assessment. Typically narrative and tabular.

  2. Types of literature review, methods, & resources

    (Describes 14 different types of literature and systematic review, useful for thinking at the outset about what sort of literature review you want to do.) Sutton, A., Clowes, M., Preston, L., & Booth, A. (2019). Meeting the review family: exploring review types and associated information retrieval requirements. Health information and libraries ...

  3. Systematic reviews: Structure, form and content

    A systematic review collects secondary data, and is a synthesis of all available, relevant evidence which brings together all existing primary studies for review (Cochrane 2016). A systematic review differs from other types of literature review in several major ways.

  4. Guidance on Conducting a Systematic Literature Review

    Literature reviews establish the foundation of academic inquiries. However, in the planning field, we lack rigorous systematic reviews. In this article, through a systematic search on the methodology of literature review, we categorize a typology of literature reviews, discuss steps in conducting a systematic literature review, and provide suggestions on how to enhance rigor in literature ...

  5. Systematic Review

    Systematic review vs. literature review. A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

  6. What kind of systematic review should I conduct? A proposed typology

    Systematic reviews have been considered as the pillar on which evidence-based healthcare rests. Systematic review methodology has evolved and been modified over the years to accommodate the range of questions that may arise in the health and medical sciences. This paper explores a concept still rarely considered by novice authors and in the literature: determining the type of systematic review ...

  7. Systematic and other reviews: criteria and complexities

    The type of systematic review, according to the Cochrane Collaboration, ... Literature reviews include peer-reviewed original research, systematic reviews, and meta-analyses, but also may include conference abstracts, books, graduate degree theses, and other non-peer reviewed publications. The methods used to identify and evaluate studies ...

  8. LibGuides: Systematic Reviews: Types of Systematic Reviews

    Traditional Systematic Reviews follow a rigorous and well-defined methodology to identify, select, and critically appraise relevant research articles on a specific topic and within a specified population of subjects. The primary goal of this type of study is to comprehensively find the empirical data available on a topic, identify relevant ...

  9. Literature Review Types, Taxonomies

    Mapping Review (Systematic Map) - Map out and categorize existing literature from which to commission further reviews and/or primary research by identifying gaps in research literature. Meta-Analysis - Technique that statistically combines the results of quantitative studies to provide a more precise estimate of the overall effect (a minimal worked example of this pooling appears after this list).

  10. Literature Reviews: Systematic, Scoping, Integrative

    Select the type of review (systematic, scoping, integrative). This will require running some test searches to see if there is enough literature to merit a systematic review. Select databases. Select grey literature sources (if applicable). Read this article for helpful suggestions on systematically searching for grey literature.

  11. Types of Literature Review

    1. Narrative Literature Review. A narrative literature review, also known as a traditional literature review, involves analyzing and summarizing existing literature without adhering to a structured methodology. It typically provides a descriptive overview of key concepts, theories, and relevant findings of the research topic.

  12. Systematic, Scoping, and Other Literature Reviews: Overview

    A scoping review employs the systematic review methodology to explore a broader topic or question rather than a specific and answerable one, as is generally the case with a systematic review. Authors of these types of reviews seek to collect and categorize the existing literature so as to identify any gaps. Rapid Review

  13. Introduction to systematic review and meta-analysis

    A systematic review collects all possible studies related to a given topic and design, and reviews and analyzes their results [1]. During the systematic review process, the quality of studies is evaluated, and a statistical meta-analysis of the study results is conducted on the basis of their quality. A meta-analysis is a valid, objective ...

  14. Research Guides: Systematic Reviews: Types of Reviews

    Systematic Reviews. With a clearly defined question, systematically and transparently searches for a broad range of information to synthesize, in order to find the effect of an intervention. uses a protocol. has a clear data extraction and management plan. Time-intensive and often take months to a year or more to complete, even with a multi ...

  15. Types of Reviews

    There are many types of reviews --- narrative reviews, scoping reviews, systematic reviews, integrative reviews, umbrella reviews, rapid reviews and others --- and it's not always straightforward to choose which type of review to conduct. These Review Navigator tools (see below) ask a series of questions to guide you through the various kinds of reviews and to help you determine the best choice ...

  16. Systematic Review Process: Types of Reviews

    A literature review will help you to identify patterns and trends in the literature so that you can identify gaps or inconsistencies in a body of knowledge. This should lead you to a sufficiently focused research question that justifies your research. A systematic review is comprehensive and has minimal bias. It is based on a specific question ...

  17. Five other types of systematic review

    Scoping reviews provide an understanding of the size and scope of the available literature and can inform whether a full systematic review should be undertaken. If you're not sure you should conduct a systematic review or a scoping review, this article outlines the differences between these review types and could help your decision making. 2.

  18. How-to conduct a systematic literature review: A quick guide for

    Method details: Overview. A Systematic Literature Review (SLR) is a research methodology to collect, identify, and critically analyze the available research studies (e.g., articles, conference proceedings, books, dissertations) through a systematic procedure [12]. An SLR updates the reader with current literature about a subject [6]. The goal is to review critical points of current knowledge on a ...

  19. Types of Reviews

    The selection of review type is wholly dependent on the research question. Not all research questions are well-suited for systematic reviews. Review Typologies (from LITR-EX) ... Refers to any combination of methods where one significant component is a literature review (usually systematic). Within a review context it refers to a combination of ...

  20. Types of reviews

    Types of reviews and examples. Definition: "A term used to describe a conventional overview of the literature, particularly when contrasted with a systematic review" (Booth et al., 2012, p. 265). Characteristics: Example: Mitchell, L. E., & Zajchowski, C. A. (2022). The history of air quality in Utah: A narrative review.

  21. Literature Review: Types of literature reviews

    The common types of literature reviews will be explained in the pages of this section. Narrative or traditional literature reviews. Critically Appraised Topic (CAT) Scoping reviews. Systematic literature reviews. Annotated bibliographies. These are not the only types of reviews of literature that can be conducted.

  22. What is a Literature Review?

    A literature review may itself be a scholarly publication and provide an analysis of what has been written on a particular topic without contributing original research. These types of literature reviews can serve to help keep people updated on a field as well as helping scholars choose a research topic to fill gaps in the knowledge on that topic.

  23. LibGuides: Systematic Reviews

    This is a guide on conducting reviews. Systematic Reviews are not the only review type. This guide will help you: find the best review type for your purpose; understand the steps of the review process; and conduct a thorough, structured search of the literature.

  24. An overview of methodological approaches in systematic reviews

    There is some evidence to support the use of checking reference lists to complement literature searches in systematic reviews. [Excerpt continues with an evidence table covering additional search yield, study-selection steps, and single versus double reviewer screening (Robson, 2019).]

  25. Detailed Comparison: Systematic Review vs Literature Review

    A literature review is a general overview of existing knowledge on a specific research topic, which may or may not involve a systematic approach. In contrast, a systematic review is a specific type of literature review that follows a well-defined methodology to collect, evaluate, and synthesize evidence from multiple studies.

  26. Types of Reviews

    What are the types of reviews? As you begin searching through the literature for evidence, you will come across different types of publications. Below are examples of the most common types and explanations of what they are. Although systematic reviews and meta-analyses are considered the highest quality of evidence, not every topic will have an ...

  27. Systematic Reviews & Literature Reviews

    The scope can be broader and more flexible, allowing for the inclusion of various types of studies, such as empirical research, theoretical papers, and conceptual frameworks. A systematic review has a narrower scope, focusing on empirical research studies that meet predefined criteria for inclusion. It typically excludes non-peer-reviewed ...

  28. Person-centered care assessment tool with a focus on quality healthcare

    The present study comprises two distinct but interconnected procedures. First, a systematic literature review was conducted following the PRISMA method (Additional file 1; Additional file 2) with the aim of collecting all validations of the P-CAT that have been developed. Second, a systematic description of the validity evidence for each of the P-CAT validations found in the systematic ...

  29. Exploring data sources and mathematical approaches for ...

    The systematic review of the literature was carried out to collect relevant articles on human mobility. Our study highlights the use of three major sources of mobility data: public transit, mobile phones, and social surveys. The results also provide an analysis of the data to estimate mobility metrics from the diverse data sources.

  30. Indigenous conflict management practices in Ethiopia: a systematic

    It is the first of its type to use a systematic review analysis of the dynamics of indigenous conflict management practices in Ethiopia to demonstrate how these concepts relate to conventional conflict management, by carefully reviewing a large body of research in this field. ... By evaluating each article downloaded using a systematic literature ...
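To make the phrase "statistically combines the results of quantitative studies" in items 9 and 13 above more concrete, the sketch below applies fixed-effect, inverse-variance pooling to made-up effect sizes. It is an illustration added for this guide; the numbers and variable names are hypothetical and are not drawn from any study or guide cited above.

```python
# Illustrative fixed-effect meta-analysis: inverse-variance weighting of
# hypothetical per-study effect sizes (e.g., standardized mean differences).

effects = [0.42, 0.30, 0.55, 0.25]      # made-up effect estimates from four studies
std_errors = [0.10, 0.15, 0.20, 0.12]   # made-up standard errors

weights = [1 / se ** 2 for se in std_errors]  # more precise studies get more weight
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
# The pooled standard error is smaller than any single study's standard error,
# which is what "a more precise estimate of the overall effect" means above.
```

Random-effects models add a between-study variance term to these weights, but the underlying idea of weighting each study by its precision is the same.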