Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, Boyle and colleagues conducted a systematic review of probiotics as a treatment for eczema. They answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

What is a systematic review?

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic review vs. meta-analysis

Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size.
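To make “combining the results of two or more studies” concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis. The effect sizes and standard errors are invented for illustration; in a real review you would extract them from the included studies.

```python
import math

# Hypothetical per-study effect sizes (e.g., log risk ratios) and standard errors.
effects = [-0.25, -0.10, -0.40]
std_errors = [0.12, 0.20, 0.15]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2.
weights = [1 / se**2 for se in std_errors]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```

Larger studies (smaller standard errors) get more weight, which is what makes the pooled estimate more precise than any single study’s result.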

Systematic review vs. literature review

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Systematic review vs. scoping review

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.


When to conduct a systematic review

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Pros and cons of systematic reviews

Systematic reviews have many pros:

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent, so they can be scrutinized by others.
  • They’re thorough: they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons:

  • They’re time-consuming.
  • They’re narrow in scope: they only answer the precise research question.

Step-by-step example of a systematic review

The seven steps for conducting a systematic review are explained below, using Boyle and colleagues’ review of probiotics for eczema as a running example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?
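For illustration only, here is a tiny helper that fills in that template. The function name and example values are ours, not part of any standard PICO tool:

```python
def pico_question(population: str, intervention: str, comparison: str, outcome: str) -> str:
    """Fill the standard PICO template: effectiveness of I versus C for O in P."""
    return (f"What is the effectiveness of {intervention} versus {comparison} "
            f"for {outcome} in {population}?")

print(pico_question(
    population="patients with eczema",
    intervention="probiotics",
    comparison="no treatment, placebo, or non-probiotic treatment",
    outcome="reducing eczema symptoms and improving quality of life",
))
```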

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

In the example review, the PICOT components were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information: Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee. This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov.

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant (see the query sketch after this list).
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
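As referenced in the Databases item above, here is a minimal sketch of how synonyms and Boolean operators combine into a search string. The synonym lists are invented, and the exact syntax (wildcards, field tags) varies by database:

```python
# Each concept is a list of synonyms joined with OR; concepts are joined with AND.
concepts = [
    ["probiotic*", "lactobacillus", "bifidobacterium"],   # intervention
    ["eczema", "atopic dermatitis"],                      # population/problem
]

query = " AND ".join(
    "(" + " OR ".join(f'"{term}"' if " " in term else term for term in synonyms) + ")"
    for synonyms in concepts
)
print(query)
# (probiotic* OR lactobacillus OR bifidobacterium) AND (eczema OR "atopic dermatitis")
```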

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator.

In the example review, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person’s job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.
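Inter-rater reliability between the two screeners is often reported as Cohen’s kappa. Here is a minimal sketch of the calculation for two raters making include/exclude decisions; the decision lists are invented:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e) for two raters over the same items."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

a = ["include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "include", "include", "exclude"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.62: agreement beyond chance
```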

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram.
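One lightweight way to keep that record is a structured screening log, from which the counts for a PRISMA flow diagram can be tallied. A minimal sketch, with field names and entries of our own invention:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ScreeningRecord:
    citation_id: str
    phase: str            # "title_abstract" or "full_text"
    decision: str         # "include" or "exclude"
    reason: str = ""      # required when excluding at the full-text phase

log = [
    ScreeningRecord("smith2019", "title_abstract", "exclude", "not an intervention study"),
    ScreeningRecord("lee2020", "title_abstract", "include"),
    ScreeningRecord("lee2020", "full_text", "exclude", "no control group"),
    ScreeningRecord("cho2021", "title_abstract", "include"),
    ScreeningRecord("cho2021", "full_text", "include"),
]

# Tally decisions per phase, e.g., to fill in a PRISMA flow diagram.
counts = Counter((r.phase, r.decision) for r in log)
for (phase, decision), n in sorted(counts.items()):
    print(f"{phase}: {decision} = {n}")
```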

In the example review, Boyle and Tang first screened titles and abstracts against the selection criteria. Next, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in the Registry of Methods and Tools for Evidence-Informed Decision Making and from the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) Working Group.
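A data extraction form can be as simple as one structured record per study. Here is a hypothetical sketch of such a form; the fields follow the two categories above but are our own choices, not a standard instrument:

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionForm:
    # Information about the study's methods and results.
    study_id: str
    year: int
    design: str
    sample_size: int
    context: str
    findings: str
    # The reviewer's judgment of evidence quality.
    risk_of_bias: str               # e.g., "low", "some concerns", "high"
    bias_notes: list[str] = field(default_factory=list)

form = ExtractionForm(
    study_id="lee2020", year=2020, design="randomized controlled trial",
    sample_size=112, context="pediatric outpatient clinic",
    findings="no significant difference in symptom scores",
    risk_of_bias="some concerns",
    bias_notes=["randomization method not described"],
)
print(form)
```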

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the example review, the researchers also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
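Whether a quantitative synthesis is sensible depends partly on how consistent the study results are. A common check is Cochran’s Q and the I² statistic; here is a minimal sketch using the same invented effect sizes as in the earlier meta-analysis example:

```python
import math

effects = [-0.25, -0.10, -0.40]
std_errors = [0.12, 0.20, 0.15]

weights = [1 / se**2 for se in std_errors]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of study effects from the pooled effect.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
df = len(effects) - 1

# I^2: the share of variability beyond what chance alone would produce.
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```

A high I² suggests the studies disagree substantially, which may argue for subgroup analysis (as in the example below) or for a narrative approach.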

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

Frequently asked questions about systematic reviews

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question. It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Cite this Scribbr article

Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved March 11, 2024, from https://www.scribbr.com/methodology/systematic-review/


Literature Review vs. Systematic Review

Definitions

It’s common to confuse systematic and literature reviews because both are used to provide a summary of the existing literature or research on a specific topic. Despite this commonality, the two types of review differ significantly. The resource cited below explains each type in detail and tabulates the differences between systematic and literature reviews.

Kysh, Lynn (2013): Difference between a systematic review and a literature review. [figshare]. Available at: http://dx.doi.org/10.6084/m9.figshare.766364



Review Typologies

There are many types of evidence synthesis projects, of which the systematic review is only one. The choice of review type depends entirely on the research question; not all research questions are well suited to systematic reviews.

  • Review Typologies (from LITR-EX): This site explores different review methodologies, such as systematic, scoping, realist, narrative, state-of-the-art, meta-ethnography, critical, and integrative reviews. The LITR-EX site has a health professions education focus, but the advice and information are widely applicable.

Grant and Booth’s typology of 14 review types and their associated methodologies, cited below, is a useful guide to choosing among them. Librarians can also help your team determine which review type might be appropriate for your project.

Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91-108. doi:10.1111/j.1471-1842.2009.00848.x


How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

Affiliations.

  • 1 Behavioural Science Centre, Stirling Management School, University of Stirling, Stirling FK9 4LA, United Kingdom; email: [email protected].
  • 2 Department of Psychological and Behavioural Science, London School of Economics and Political Science, London WC2A 2AE, United Kingdom.
  • 3 Department of Statistics, Northwestern University, Evanston, Illinois 60208, USA; email: [email protected].
  • PMID: 30089228
  • DOI: 10.1146/annurev-psych-010418-102803

Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.

Keywords: evidence; guide; meta-analysis; meta-synthesis; narrative; systematic review; theory.

MeSH terms:

  • Guidelines as Topic
  • Meta-Analysis as Topic*
  • Publication Bias
  • Review Literature as Topic
  • Systematic Reviews as Topic*



The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

  • Matthew J Page , senior research fellow 1 ,
  • Joanne E McKenzie , associate professor 1 ,
  • Patrick M Bossuyt , professor 2 ,
  • Isabelle Boutron , professor 3 ,
  • Tammy C Hoffmann , professor 4 ,
  • Cynthia D Mulrow , professor 5 ,
  • Larissa Shamseer , doctoral student 6 ,
  • Jennifer M Tetzlaff , research product specialist 7 ,
  • Elie A Akl , professor 8 ,
  • Sue E Brennan , senior research fellow 1 ,
  • Roger Chou , professor 9 ,
  • Julie Glanville , associate director 10 ,
  • Jeremy M Grimshaw , professor 11 ,
  • Asbjørn Hróbjartsson , professor 12 ,
  • Manoj M Lalu , associate scientist and assistant professor 13 ,
  • Tianjing Li , associate professor 14 ,
  • Elizabeth W Loder , professor 15 ,
  • Evan Mayo-Wilson , associate professor 16 ,
  • Steve McDonald , senior research fellow 1 ,
  • Luke A McGuinness , research associate 17 ,
  • Lesley A Stewart , professor and director 18 ,
  • James Thomas , professor 19 ,
  • Andrea C Tricco , scientist and associate professor 20 ,
  • Vivian A Welch , associate professor 21 ,
  • Penny Whiting , associate professor 17 ,
  • David Moher , director and professor 22
  • 1 School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
  • 2 Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam University Medical Centres, University of Amsterdam, Amsterdam, Netherlands
  • 3 Université de Paris, Centre of Epidemiology and Statistics (CRESS), Inserm, F 75004 Paris, France
  • 4 Institute for Evidence-Based Healthcare, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Australia
  • 5 University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA; Annals of Internal Medicine
  • 6 Knowledge Translation Program, Li Ka Shing Knowledge Institute, Toronto, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 7 Evidence Partners, Ottawa, Canada
  • 8 Clinical Research Institute, American University of Beirut, Beirut, Lebanon; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
  • 9 Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, USA
  • 10 York Health Economics Consortium (YHEC Ltd), University of York, York, UK
  • 11 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada; Department of Medicine, University of Ottawa, Ottawa, Canada
  • 12 Centre for Evidence-Based Medicine Odense (CEBMO) and Cochrane Denmark, Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Open Patient data Exploratory Network (OPEN), Odense University Hospital, Odense, Denmark
  • 13 Department of Anesthesiology and Pain Medicine, The Ottawa Hospital, Ottawa, Canada; Clinical Epidemiology Program, Blueprint Translational Research Group, Ottawa Hospital Research Institute, Ottawa, Canada; Regenerative Medicine Program, Ottawa Hospital Research Institute, Ottawa, Canada
  • 14 Department of Ophthalmology, School of Medicine, University of Colorado Denver, Denver, Colorado, United States; Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
  • 15 Division of Headache, Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA; Head of Research, The BMJ , London, UK
  • 16 Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, Bloomington, Indiana, USA
  • 17 Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
  • 18 Centre for Reviews and Dissemination, University of York, York, UK
  • 19 EPPI-Centre, UCL Social Research Institute, University College London, London, UK
  • 20 Li Ka Shing Knowledge Institute of St. Michael's Hospital, Unity Health Toronto, Toronto, Canada; Epidemiology Division of the Dalla Lana School of Public Health and the Institute of Health Management, Policy, and Evaluation, University of Toronto, Toronto, Canada; Queen's Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, Queen's University, Kingston, Canada
  • 21 Methods Centre, Bruyère Research Institute, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 22 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • Correspondence to: M J Page matthew.page{at}monash.edu
  • Accepted 4 January 2021

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

Systematic reviews serve many critical roles. They can provide syntheses of the state of knowledge in a field, from which future research priorities can be identified; they can address questions that otherwise could not be answered by individual studies; they can identify problems in primary research that should be rectified in future studies; and they can generate or evaluate theories about how or why phenomena occur. Systematic reviews therefore generate various types of knowledge for different users of reviews (such as patients, healthcare providers, researchers, and policy makers). 1 2 To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected) and what they found (such as characteristics of contributing studies and results of meta-analyses). Up-to-date reporting guidance facilitates authors achieving this. 3

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement published in 2009 (hereafter referred to as PRISMA 2009) 4 5 6 7 8 9 10 is a reporting guideline designed to address poor reporting of systematic reviews. 11 The PRISMA 2009 statement comprised a checklist of 27 items recommended for reporting in systematic reviews and an “explanation and elaboration” paper 12 13 14 15 16 providing additional reporting guidance for each item, along with exemplars of reporting. The recommendations have been widely endorsed and adopted, as evidenced by the statement’s co-publication in multiple journals, citation in over 60 000 reports (Scopus, August 2020), endorsement from almost 200 journals and systematic review organisations, and adoption in various disciplines. Evidence from observational studies suggests that use of the PRISMA 2009 statement is associated with more complete reporting of systematic reviews, 17 18 19 20 although more could be done to improve adherence to the guideline. 21

Many innovations in the conduct of systematic reviews have occurred since publication of the PRISMA 2009 statement. For example, technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence, 22 23 24 methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate, 25 26 27 and new methods have been developed to assess the risk of bias in results of included studies. 28 29 Evidence on sources of bias in systematic reviews has accrued, culminating in the development of new tools to appraise the conduct of systematic reviews. 30 31 Terminology used to describe particular review processes has also evolved, as in the shift from assessing “quality” to assessing “certainty” in the body of evidence. 32 In addition, the publishing landscape has transformed, with multiple avenues now available for registering and disseminating systematic review protocols, 33 34 disseminating reports of systematic reviews, and sharing data and materials, such as preprint servers and publicly accessible repositories. Capturing these advances in the reporting of systematic reviews necessitated an update to the PRISMA 2009 statement.

Summary points

To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did, and what they found

The PRISMA 2020 statement provides updated reporting guidance for systematic reviews that reflects advances in methods to identify, select, appraise, and synthesise studies

The PRISMA 2020 statement consists of a 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and revised flow diagrams for original and updated reviews

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders

Development of PRISMA 2020

A complete description of the methods used to develop PRISMA 2020 is available elsewhere. 35 We identified PRISMA 2009 items that were often reported incompletely by examining the results of studies investigating the transparency of reporting of published reviews. 17 21 36 37 We identified possible modifications to the PRISMA 2009 statement by reviewing 60 documents providing reporting guidance for systematic reviews (including reporting guidelines, handbooks, tools, and meta-research studies). 38 These reviews of the literature were used to inform the content of a survey with suggested possible modifications to the 27 items in PRISMA 2009 and possible additional items. Respondents were asked whether they believed we should keep each PRISMA 2009 item as is, modify it, or remove it, and whether we should add each additional item. Systematic review methodologists and journal editors were invited to complete the online survey (110 of 220 invited responded). We discussed proposed content and wording of the PRISMA 2020 statement, as informed by the review and survey results, at a 21-member, two-day, in-person meeting in September 2018 in Edinburgh, Scotland. Throughout 2019 and 2020, we circulated an initial draft and five revisions of the checklist and explanation and elaboration paper to co-authors for feedback. In April 2020, we invited 22 systematic reviewers who had expressed interest in providing feedback on the PRISMA 2020 checklist to share their views (via an online survey) on the layout and terminology used in a preliminary version of the checklist. Feedback was received from 15 individuals and considered by the first author, and any revisions deemed necessary were incorporated before the final version was approved and endorsed by all co-authors.

The PRISMA 2020 statement

Scope of the guideline

The PRISMA 2020 statement has been designed primarily for systematic reviews of studies that evaluate the effects of health interventions, irrespective of the design of the included studies. However, the checklist items are applicable to reports of systematic reviews evaluating other interventions (such as social or educational interventions), and many items are applicable to systematic reviews with objectives other than evaluating interventions (such as evaluating aetiology, prevalence, or prognosis). PRISMA 2020 is intended for use in systematic reviews that include synthesis (such as pairwise meta-analysis or other statistical synthesis methods) or do not include synthesis (for example, because only one eligible study is identified). The PRISMA 2020 items are relevant for mixed-methods systematic reviews (which include quantitative and qualitative studies), but reporting guidelines addressing the presentation and synthesis of qualitative data should also be consulted. 39 40 PRISMA 2020 can be used for original systematic reviews, updated systematic reviews, or continually updated (“living”) systematic reviews. However, for updated and living systematic reviews, there may be some additional considerations that need to be addressed. Where there is relevant content from other reporting guidelines, we reference these guidelines within the items in the explanation and elaboration paper 41 (such as PRISMA-Search 42 in items 6 and 7, Synthesis without meta-analysis (SWiM) reporting guideline 27 in item 13d). Box 1 includes a glossary of terms used throughout the PRISMA 2020 statement.

Glossary of terms

Systematic review —A review that uses explicit, systematic methods to collate and synthesise findings of studies that address a clearly formulated question 43

Statistical synthesis —The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates (described below) and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect (see McKenzie and Brennan 25 for a description of each method)

Meta-analysis of effect estimates —A statistical technique used to synthesise results when study effect estimates and their variances are available, yielding a quantitative summary of results 25

Outcome —An event or measurement collected for participants in a study (such as quality of life, mortality)

Result —The combination of a point estimate (such as a mean difference, risk ratio, or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome

Report —A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information

Record —The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.

Study —An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses
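As a concrete instance of one of the “other methods” named in the Statistical synthesis entry above (combining P values), here is a minimal sketch of Fisher’s method for independent studies. The P values are invented, and the sketch assumes SciPy is available:

```python
import math
from scipy.stats import chi2

p_values = [0.04, 0.20, 0.01]  # hypothetical P values from independent studies

# Fisher's method: -2 * sum(ln p_i) follows a chi-squared distribution
# with 2k degrees of freedom under the joint null hypothesis.
statistic = -2 * sum(math.log(p) for p in p_values)
combined_p = chi2.sf(statistic, df=2 * len(p_values))
print(f"chi2 = {statistic:.2f}, combined P = {combined_p:.4f}")
```

Unlike a meta-analysis of effect estimates, this yields only an overall significance statement, not a quantitative summary of effect size.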

PRISMA 2020 is not intended to guide systematic review conduct, for which comprehensive resources are available. 43 44 45 46 However, familiarity with PRISMA 2020 is useful when planning and conducting systematic reviews to ensure that all recommended information is captured. PRISMA 2020 should not be used to assess the conduct or methodological quality of systematic reviews; other tools exist for this purpose. 30 31 Furthermore, PRISMA 2020 is not intended to inform the reporting of systematic review protocols, for which a separate statement is available (PRISMA for Protocols (PRISMA-P) 2015 statement 47 48 ). Finally, extensions to the PRISMA 2009 statement have been developed to guide reporting of network meta-analyses, 49 meta-analyses of individual participant data, 50 systematic reviews of harms, 51 systematic reviews of diagnostic test accuracy studies, 52 and scoping reviews 53 ; for these types of reviews we recommend authors report their review in accordance with the recommendations in PRISMA 2020 along with the guidance specific to the extension.

How to use PRISMA 2020

The PRISMA 2020 statement (including the checklists, explanation and elaboration, and flow diagram) replaces the PRISMA 2009 statement, which should no longer be used. Box 2 summarises noteworthy changes from the PRISMA 2009 statement. The PRISMA 2020 checklist includes seven sections with 27 items, some of which include sub-items ( table 1 ). A checklist for journal and conference abstracts for systematic reviews is included in PRISMA 2020. This abstract checklist is an update of the 2013 PRISMA for Abstracts statement, 54 reflecting new and modified content in PRISMA 2020 ( table 2 ). A template PRISMA flow diagram is provided, which can be modified depending on whether the systematic review is original or updated ( fig 1 ).

Noteworthy changes to the PRISMA 2009 statement

Inclusion of the abstract reporting checklist within PRISMA 2020 (see item #2 and table 2 ).

Movement of the ‘Protocol and registration’ item from the start of the Methods section of the checklist to a new Other section, with addition of a sub-item recommending authors describe amendments to information provided at registration or in the protocol (see item #24a-24c).

Modification of the ‘Search’ item to recommend authors present full search strategies for all databases, registers and websites searched, not just at least one database (see item #7).

Modification of the ‘Study selection’ item in the Methods section to emphasise the reporting of how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process (see item #8).

Addition of a sub-item to the ‘Data items’ item recommending authors report how outcomes were defined, which results were sought, and methods for selecting a subset of results from included studies (see item #10a).

Splitting of the ‘Synthesis of results’ item in the Methods section into six sub-items recommending authors describe: the processes used to decide which studies were eligible for each synthesis; any methods required to prepare the data for synthesis; any methods used to tabulate or visually display results of individual studies and syntheses; any methods used to synthesise results; any methods used to explore possible causes of heterogeneity among study results (such as subgroup analysis, meta-regression); and any sensitivity analyses used to assess robustness of the synthesised results (see item #13a-13f).

Addition of a sub-item to the ‘Study selection’ item in the Results section recommending authors cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded (see item #16b).

Splitting of the ‘Synthesis of results’ item in the Results section into four sub-items recommending authors: briefly summarise the characteristics and risk of bias among studies contributing to the synthesis; present results of all statistical syntheses conducted; present results of any investigations of possible causes of heterogeneity among study results; and present results of any sensitivity analyses (see item #20a-20d).

Addition of new items recommending authors report methods for and results of an assessment of certainty (or confidence) in the body of evidence for an outcome (see items #15 and #22).

Addition of a new item recommending authors declare any competing interests (see item #26).

Addition of a new item recommending authors indicate whether data, analytic code and other materials used in the review are publicly available and if so, where they can be found (see item #27).

Table 1: PRISMA 2020 item checklist

Table 2: PRISMA 2020 for Abstracts checklist

Fig 1: PRISMA 2020 flow diagram template for systematic reviews. The new design is adapted from flow diagrams proposed by Boers, 55 Mayo-Wilson et al. 56 and Stovold et al. 57 The boxes in grey should only be completed if applicable; otherwise they should be removed from the flow diagram. Note that a “report” could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report or any other document providing relevant information.

We recommend authors refer to PRISMA 2020 early in the writing process, because prospective consideration of the items may help to ensure that all the items are addressed. To help keep track of which items have been reported, the PRISMA statement website ( http://www.prisma-statement.org/ ) includes fillable templates of the checklists to download and complete (also available in the data supplement on bmj.com). We have also created a web application that allows users to complete the checklist via a user-friendly interface 58 (available at https://prisma.shinyapps.io/checklist/ and adapted from the Transparency Checklist app 59 ). The completed checklist can be exported to Word or PDF. Editable templates of the flow diagram can also be downloaded from the PRISMA statement website.

We have prepared an updated explanation and elaboration paper, in which we explain why reporting of each item is recommended and present bullet points that detail the reporting recommendations (which we refer to as elements). 41 The bullet-point structure is new to PRISMA 2020 and has been adopted to facilitate implementation of the guidance. 60 61 An expanded checklist, which comprises an abridged version of the elements presented in the explanation and elaboration paper, with references and some examples removed, is available in the data supplement on bmj.com. Consulting the explanation and elaboration paper is recommended if further clarity or information is required.

Journals and publishers might impose word and section limits, and limits on the number of tables and figures allowed in the main report. In such cases, if the relevant information for some items already appears in a publicly accessible review protocol, referring to the protocol may suffice. Alternatively, placing detailed descriptions of the methods used or additional results (such as for less critical outcomes) in supplementary files is recommended. Ideally, supplementary files should be deposited to a general-purpose or institutional open-access repository that provides free and permanent access to the material (such as Open Science Framework, Dryad, figshare). A reference or link to the additional information should be included in the main report. Finally, although PRISMA 2020 provides a template for where information might be located, the suggested location should not be seen as prescriptive; the guiding principle is to ensure the information is reported.

Use of PRISMA 2020 has the potential to benefit many stakeholders. Complete reporting allows readers to assess the appropriateness of the methods, and therefore the trustworthiness of the findings. Presenting and summarising characteristics of studies contributing to a synthesis allows healthcare providers and policy makers to evaluate the applicability of the findings to their setting. Describing the certainty in the body of evidence for an outcome and the implications of findings should help policy makers, managers, and other decision makers formulate appropriate recommendations for practice or policy. Complete reporting of all PRISMA 2020 items also facilitates replication and review updates, as well as inclusion of systematic reviews in overviews (of systematic reviews) and guidelines, so teams can leverage work that is already done and decrease research waste. 36 62 63

We updated the PRISMA 2009 statement by adapting the EQUATOR Network’s guidance for developing health research reporting guidelines. 64 We evaluated the reporting completeness of published systematic reviews, 17 21 36 37 reviewed the items included in other documents providing guidance for systematic reviews, 38 surveyed systematic review methodologists and journal editors for their views on how to revise the original PRISMA statement, 35 discussed the findings at an in-person meeting, and prepared this document through an iterative process. Our recommendations are informed by the reviews and survey conducted before the in-person meeting, theoretical considerations about which items facilitate replication and help users assess the risk of bias and applicability of systematic reviews, and co-authors’ experience with authoring and using systematic reviews.

Various strategies to increase the use of reporting guidelines and improve reporting have been proposed. They include educators introducing reporting guidelines into graduate curricula to promote good reporting habits of early career scientists 65 ; journal editors and regulators endorsing use of reporting guidelines 18 ; peer reviewers evaluating adherence to reporting guidelines 61 66 ; journals requiring authors to indicate where in their manuscript they have adhered to each reporting item 67 ; and authors using online writing tools that prompt complete reporting at the writing stage. 60 Multi-pronged interventions, where more than one of these strategies are combined, may be more effective (such as completion of checklists coupled with editorial checks). 68 However, of 31 interventions proposed to increase adherence to reporting guidelines, the effects of only 11 have been evaluated, mostly in observational studies at high risk of bias due to confounding. 69 It is therefore unclear which strategies should be used. Future research might explore barriers and facilitators to the use of PRISMA 2020 by authors, editors, and peer reviewers, design interventions that address the identified barriers, and evaluate those interventions using randomised trials. 70 To inform possible revisions to the guideline, it would also be valuable to conduct think-aloud studies to understand how systematic reviewers interpret the items, and reliability studies to identify items that are interpreted inconsistently.

We encourage readers to submit evidence that informs any of the recommendations in PRISMA 2020 (via the PRISMA statement website: http://www.prisma-statement.org/ ). To enhance accessibility of PRISMA 2020, several translations of the guideline are under way (see available translations at the PRISMA statement website). We encourage journal editors and publishers to raise awareness of PRISMA 2020 (for example, by referring to it in journal “Instructions to authors”), endorsing its use, advising editors and peer reviewers to evaluate submitted systematic reviews against the PRISMA 2020 checklists, and making changes to journal policies to accommodate the new reporting recommendations. We recommend existing PRISMA extensions 47 49 50 51 52 53 71 72 be updated to reflect PRISMA 2020 and advise developers of new PRISMA extensions to use PRISMA 2020 as the foundation document.

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders. Ultimately, we hope that uptake of the guideline will lead to more transparent, complete, and accurate reporting of systematic reviews, thus facilitating evidence based decision making.

Acknowledgments

We dedicate this paper to the late Douglas G Altman and Alessandro Liberati, whose contributions were fundamental to the development and implementation of the original PRISMA statement.

We thank the following contributors who completed the survey to inform discussions at the development meeting: Xavier Armoiry, Edoardo Aromataris, Ana Patricia Ayala, Ethan M Balk, Virginia Barbour, Elaine Beller, Jesse A Berlin, Lisa Bero, Zhao-Xiang Bian, Jean Joel Bigna, Ferrán Catalá-López, Anna Chaimani, Mike Clarke, Tammy Clifford, Ioana A Cristea, Miranda Cumpston, Sofia Dias, Corinna Dressler, Ivan D Florez, Joel J Gagnier, Chantelle Garritty, Long Ge, Davina Ghersi, Sean Grant, Gordon Guyatt, Neal R Haddaway, Julian PT Higgins, Sally Hopewell, Brian Hutton, Jamie J Kirkham, Jos Kleijnen, Julia Koricheva, Joey SW Kwong, Toby J Lasserson, Julia H Littell, Yoon K Loke, Malcolm R Macleod, Chris G Maher, Ana Marušic, Dimitris Mavridis, Jessie McGowan, Matthew DF McInnes, Philippa Middleton, Karel G Moons, Zachary Munn, Jane Noyes, Barbara Nußbaumer-Streit, Donald L Patrick, Tatiana Pereira-Cenci, Ba’ Pham, Bob Phillips, Dawid Pieper, Michelle Pollock, Daniel S Quintana, Drummond Rennie, Melissa L Rethlefsen, Hannah R Rothstein, Maroeska M Rovers, Rebecca Ryan, Georgia Salanti, Ian J Saldanha, Margaret Sampson, Nancy Santesso, Rafael Sarkis-Onofre, Jelena Savović, Christopher H Schmid, Kenneth F Schulz, Guido Schwarzer, Beverley J Shea, Paul G Shekelle, Farhad Shokraneh, Mark Simmonds, Nicole Skoetz, Sharon E Straus, Anneliese Synnot, Emily E Tanner-Smith, Brett D Thombs, Hilary Thomson, Alexander Tsertsvadze, Peter Tugwell, Tari Turner, Lesley Uttley, Jeffrey C Valentine, Matt Vassar, Areti Angeliki Veroniki, Meera Viswanathan, Cole Wayant, Paul Whaley, and Kehu Yang. We thank the following contributors who provided feedback on a preliminary version of the PRISMA 2020 checklist: Jo Abbott, Fionn Büttner, Patricia Correia-Santos, Victoria Freeman, Emily A Hennessy, Rakibul Islam, Amalia (Emily) Karahalios, Kasper Krommes, Andreas Lundh, Dafne Port Nascimento, Davina Robson, Catherine Schenck-Yglesias, Mary M Scott, Sarah Tanveer and Pavel Zhelnov. We thank Abigail H Goben, Melissa L Rethlefsen, Tanja Rombey, Anna Scott, and Farhad Shokraneh for their helpful comments on the preprints of the PRISMA 2020 papers. We thank Edoardo Aromataris, Stephanie Chang, Toby Lasserson and David Schriger for their helpful peer review comments on the PRISMA 2020 papers.

Contributors: JEM and DM are joint senior authors. MJP, JEM, PMB, IB, TCH, CDM, LS, and DM conceived this paper and designed the literature review and survey conducted to inform the guideline content. MJP conducted the literature review, administered the survey and analysed the data for both. MJP prepared all materials for the development meeting. MJP and JEM presented proposals at the development meeting. All authors except for TCH, JMT, EAA, SEB, and LAM attended the development meeting. MJP and JEM took and consolidated notes from the development meeting. MJP and JEM led the drafting and editing of the article. JEM, PMB, IB, TCH, LS, JMT, EAA, SEB, RC, JG, AH, TL, EMW, SM, LAM, LAS, JT, ACT, PW, and DM drafted particular sections of the article. All authors were involved in revising the article critically for important intellectual content. All authors approved the final version of the article. MJP is the guarantor of this work. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: There was no direct funding for this research. MJP is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200101618) and was previously supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535) during the conduct of this research. JEM is supported by an Australian NHMRC Career Development Fellowship (1143429). TCH is supported by an Australian NHMRC Senior Research Fellowship (1154607). JMT is supported by Evidence Partners Inc. JMG is supported by a Tier 1 Canada Research Chair in Health Knowledge Transfer and Uptake. MML is supported by The Ottawa Hospital Anaesthesia Alternate Funds Association and a Faculty of Medicine Junior Research Chair. TL is supported by funding from the National Eye Institute (UG1EY020522), National Institutes of Health, United States. LAM is supported by a National Institute for Health Research Doctoral Research Fellowship (DRF-2018-11-ST2-048). ACT is supported by a Tier 2 Canada Research Chair in Knowledge Synthesis. DM is supported in part by a University Research Chair, University of Ottawa. The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

Competing interests: All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/conflicts-of-interest/ and declare: EL is head of research for the BMJ ; MJP is an editorial board member for PLOS Medicine ; ACT is an associate editor and MJP, TL, EMW, and DM are editorial board members for the Journal of Clinical Epidemiology ; DM and LAS were editors in chief, LS, JMT, and ACT are associate editors, and JG is an editorial board member for Systematic Reviews . None of these authors were involved in the peer review process or decision to publish. TCH has received personal fees from Elsevier outside the submitted work. EMW has received personal fees from the American Journal for Public Health , for which he is the editor for systematic reviews. VW is editor in chief of the Campbell Collaboration, which produces systematic reviews, and co-convenor of the Campbell and Cochrane equity methods group. DM is chair of the EQUATOR Network, IB is adjunct director of the French EQUATOR Centre and TCH is co-director of the Australasian EQUATOR Centre, which advocates for the use of reporting guidelines to improve the quality of reporting in research articles. JMT received salary from Evidence Partners, creator of DistillerSR software for systematic reviews; Evidence Partners was not involved in the design or outcomes of the statement, and the views expressed solely represent those of the author.

Provenance and peer review: Not commissioned; externally peer reviewed.

Patient and public involvement: Patients and the public were not involved in this methodological research. We plan to disseminate the research widely, including to community participants in evidence synthesis organisations.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/ .



Understanding and Evaluating Systematic Reviews and Meta-analyses

Michael Bigby

From the Department of Dermatology, Harvard Medical School, Beth Israel Deaconess Medical Center, Boston, MA 02215, USA

Indian J Dermatol. 2014 Mar-Apr;59(2)

A systematic review is a summary of existing evidence that answers a specific clinical question, contains a thorough, unbiased search of the relevant literature, explicit criteria for assessing studies and structured presentation of the results. A systematic review that incorporates quantitative pooling of similar studies to produce an overall summary of treatment effects is a meta-analysis. A systematic review should have clear, focused clinical objectives containing four elements expressed through the acronym PICO (Patient, group of patients, or problem, an Intervention, a Comparison intervention and specific Outcomes). An explicit and thorough search of the literature is a prerequisite of any good systematic review. Reviews should have pre-defined explicit criteria for which studies will be included, and the analysis should include only those studies that fit the inclusion criteria. The quality (risk of bias) of the primary studies should be critically appraised. In particular, the role of publication and language bias should be acknowledged and, whenever possible, addressed by the review. Structured reporting of the results, with quantitative pooling of the data where appropriate, must be attempted. The review should include interpretation of the data, including implications for clinical practice and further research. Overall, the current quality of reporting of systematic reviews remains highly variable.

Introduction

A systematic review is a summary of existing evidence that answers a specific clinical question, contains a thorough, unbiased search of the relevant literature, explicit criteria for assessing studies and structured presentation of the results. A systematic review can be distinguished from a narrative review because it will have explicitly stated objectives (the focused clinical question), materials (the relevant medical literature) and methods (the way in which studies are assessed and summarized).[ 1 , 2 ] A systematic review that incorporates quantitative pooling of similar studies to produce an overall summary of treatment effects is a meta-analysis.[ 1 , 2 ] Meta-analysis may allow recognition of important treatment effects by combining the results of small trials that individually might lack the power to consistently demonstrate differences among treatments.[ 1 ]

With over 200 specialty dermatology journals in publication, the amount of data in the dermatologic literature alone exceeds any clinician's ability to read it.[ 3 ] Keeping up with the literature by reading journals is therefore an impossible task. Systematic reviews offer practicing physicians a solution to this information overload.

Criteria for reporting systematic reviews have been developed by a consensus panel first published as Quality of Reporting of Meta-analyses (QUOROM) and later refined as Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA).[ 4 , 5 ] This detailed, 27-item checklist contains items that should be included and reported in high quality systematic reviews and meta-analyses. The methods for understanding and appraising systematic reviews and meta-analyses presented in this paper are a subset of the PRISMA criteria.

The items that are the essential features of a systematic review include having clear objectives, explicit criteria for study selection, an assessment of the quality of included studies, criteria for which studies can be combined, appropriate analysis and presentation of results and practical conclusions that are based on the evidence evaluated [ Table 1 ]. Meta-analysis is only appropriate if the included studies are conceptually similar. Meta-analyses should only be conducted after a systematic review.[ 1 , 6 ]

Table 1: Criteria for evaluating a systematic review or meta-analysis

A Systematic Review Should Have Clear, Focused Clinical Objectives

A focused clinical question for a systematic review should contain the same four elements used to formulate well-built clinical questions for individual studies, namely a Patient, group of patients, or problem, an Intervention, a Comparison intervention and specific Outcomes.[ 7 ] These features can be remembered by the acronym PICO. The interventions and comparison interventions should be adequately described so that what was done can be reproduced in future studies and in practice. For diseases with established effective treatments, comparisons of new treatments or regimens to established treatments provide the most useful information. The outcomes reported should be those that are most relevant to physicians and patients.[ 1 ]
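To make the PICO structure concrete, the four elements can be captured as a simple structured record. The following minimal Python sketch is illustrative only; the example question, names, and values are invented rather than drawn from any published review.

from dataclasses import dataclass

@dataclass
class PicoQuestion:
    patient: str        # P: the patient, group of patients, or problem
    intervention: str   # I: the intervention being evaluated
    comparison: str     # C: the comparison intervention
    outcomes: str       # O: the specific outcomes of interest

# A hypothetical focused question for a dermatology review:
question = PicoQuestion(
    patient="adults with chronic plaque psoriasis",
    intervention="narrowband UVB phototherapy",
    comparison="topical corticosteroid therapy",
    outcomes="clearance of plaques at 12 weeks",
)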

Explicit and Thorough Search of the Literature

A key question to ask of a systematic review is: “Is it unlikely that important, relevant studies were missed?” A sound systematic review can be performed only if most or all of the available data are examined. An explicit and thorough search of the literature should be performed. It should include searching several electronic bibliographic databases including the Cochrane Controlled Trials Registry, which is part of the Cochrane Library, Medline, Embase and Literatura Latino Americana em Ciências da Saúde. Bibliographies of retrieved studies, review articles and textbooks should be examined for studies fitting inclusion criteria. There should be no language restrictions. Additional sources of data include scrutiny of citation lists in retrieved articles, hand-searching for conference reports, prospective trial registers (e.g., clinicaltrials.gov for the USA and clinicaltrialsregister.eu for the European Union) and contacting key researchers, authors and drug companies.[ 1 , 8 ]

Reviews should have Pre-defined Explicit Criteria for what Studies would be Included and the Analysis should Include Only those Studies that Fit the Inclusion Criteria

The overwhelming majority of systematic reviews involve therapy. Randomized, controlled clinical trials should therefore be used for systematic reviews of therapy if they are available, because they are generally less susceptible to selection and information bias in comparison with other study designs.[ 1 , 9 ]

Systematic reviews of diagnostic studies and harmful effects of interventions are increasingly being performed and published. Ideally, diagnostic studies included in systematic reviews should be cohort studies of representative populations. The studies should include a criterion (gold) standard test used to establish a diagnosis that is applied uniformly and blinded to the results of the test(s) being studied.[ 1 , 9 ]

Randomized controlled trials can be included in systematic reviews of studies of adverse effects of interventions if the events are common. For rare adverse effects, case-control studies, post-marketing surveillance studies and case reports are more appropriate.[ 1 , 9 ]

The Quality (Risk of Bias) of the Primary Studies should be Critically Appraised

The risk of bias of included therapeutic trials is assessed using the criteria that are used to evaluate individual randomized controlled clinical trials. The quality criteria commonly used include concealed, random allocation; groups similar in terms of known prognostic factors; equal treatment of groups; blinding of patients, researchers and analyzers of the data to treatment allocation and accounting for all patients entered into the trial when analyzing the results (intention-to-treat design).[ 1 ] Absence of these items has been demonstrated to increase the risk of bias of systematic reviews and to exaggerate the treatment effects in individual studies.[ 10 ]

Structured Reporting of the Results with Quantitative Pooling of the Data, if Appropriate

Systematic reviews whose included studies have results of similar magnitude and direction provide the findings most likely to be true and useful. It may be impossible to draw firm conclusions from systematic reviews in which studies have results of widely different magnitude and direction.[ 1 , 9 ]

Meta-analysis should only be performed to synthesize results from different trials if the trials have conceptual homogeneity.[ 1 , 6 , 9 ] The trials must involve similar patient populations, have used similar treatments and have measured results in a similar fashion at a similar point in time.

Once conceptual homogeneity is established and the decision to combine results is made, there are two main statistical methods by which results are combined: random-effects models (e.g., DerSimonian and Laird) and fixed-effects models (e.g., Peto or Mantel-Haenszel).[ 11 ] Random-effects models assume that the results of the different studies may come from different populations with varying responses to treatment. Fixed-effects models assume that each trial represents a random sample of a single population with a single response to treatment [ Figure 1 ]. In general, random-effects models are more conservative (i.e., random-effects models are less likely to show statistically significant results than fixed-effects models). When the combined studies have statistical homogeneity (i.e., when the studies are reasonably similar in direction, magnitude and variability), random-effects and fixed-effects models give similar results.

Figure 1: Fixed-effects models (a) assume that each trial represents a random sample (colored curves) of a single population with a single response to treatment. Random-effects models (b) assume that the different trials’ results (colored curves) may come from different populations with varying responses to treatment.
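To make the distinction concrete, the following minimal sketch computes both pooled estimates on the log odds ratio scale, using inverse-variance weighting for the fixed-effects model and the DerSimonian and Laird estimator for the random-effects model. The six effect estimates and variances are invented for illustration.

import math

# Invented log odds ratios and within-study variances for six trials;
# the three small trials (large variances) show larger effects.
log_or = [0.90, 0.80, 0.70, 0.10, 0.05, 0.10]
var = [0.25, 0.20, 0.16, 0.02, 0.03, 0.02]

# Fixed-effects model: inverse-variance weights, assuming every trial
# estimates one common treatment effect.
w = [1 / v for v in var]
pooled_fixed = sum(wi * y for wi, y in zip(w, log_or)) / sum(w)

# DerSimonian-Laird random-effects model: estimate the between-study
# variance (tau^2) from Cochran's Q and add it to each trial's variance.
q = sum(wi * (y - pooled_fixed) ** 2 for wi, y in zip(w, log_or))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(log_or) - 1)) / c)
w_re = [1 / (v + tau2) for v in var]
pooled_random = sum(wi * y for wi, y in zip(w_re, log_or)) / sum(w_re)

print(f"fixed-effects pooled OR:  {math.exp(pooled_fixed):.2f}")
print(f"random-effects pooled OR: {math.exp(pooled_random):.2f}")

Because tau^2 inflates every trial's variance, the random-effects weights are more even across small and large trials and the resulting confidence interval is wider, which is why random-effects models are described above as the more conservative choice.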

The point estimates and confidence intervals of the individual trials and the synthesis of all trials in meta-analysis are typically displayed graphically in a forest plot [ Figure 2 ].[ 12 ] Results are most commonly expressed as the odds ratio (OR) of the treatment effect (i.e., the odds of achieving a good outcome in the treated group divided by the odds of achieving a good result in the control group) but can be expressed as risk differences (i.e., difference in response rate) or relative risk (probability of achieving a good outcome in the treated group divided by the probability in the control group). An OR of 1 (null) indicates no difference between treatment and control and is usually represented by a vertical line passing through 1 on the x-axis. An OR greater than or less than 1 implies that the treatment is superior or inferior to the control, respectively.

Figure 2: Annotated results of a meta-analysis of six studies, using random-effects models reported as odds ratios using MIX version 1.7 (Bax L, Yu LM, Ikeda N, Tsuruta H, Moons KGM. Development and validation of MIX: comprehensive free software for meta-analysis of causal research data. BMC Med Res Methodol. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1626481/ ). The central graph is a typical forest plot.

The point estimate of individual trials is indicated by a square whose size is proportional to the size of the trial (i.e., number of patients analyzed). The precision of the trial is represented by the 95% confidence interval, which appears in forest plots as the brackets surrounding the point estimate. If the 95% confidence interval (brackets) does not cross null (OR of 1), then the individual trial is statistically significant at the P = 0.05 level.[ 12 ] The summary value for all trials is shown graphically as a parallelogram whose size is proportional to the total number of patients analyzed from all trials. The lateral tips of the parallelogram represent the 95% confidence interval, and if they do not cross null (OR of 1), then the summary value of the meta-analysis is statistically significant at the P = 0.05 level. ORs can be converted to risk differences and numbers needed to treat (NNTs) if the event rate in the control group is known [ Table 2 ].[ 13 , 14 ]

Table 2: Deriving numbers needed to treat from a treatment's odds ratio and the observed or expected event rates of untreated groups or individuals

The difference in response rate and its reciprocal, the NNT, are the most easily understood measures of the magnitude of the treatment effect.[ 1 , 9 ] The NNT represents the number of patients one would need to treat in order to achieve one additional cure. Whereas the interpretation of NNT might be straightforward within one trial, interpretation of NNT requires some caution within a systematic review, as this statistic is highly sensitive to baseline event rates.[ 1 ]

For example, if a treatment A is 30% more effective than treatment B for clearing psoriasis and 50% of people on treatment B are cleared with therapy, then 65% will clear with treatment A. These results correspond to a rate difference of 15% (65-50) and an NNT of 7 (1/0.15). This difference sounds quite worthwhile clinically. However, if the baseline clearance rate for treatment B in another trial or setting is only 30%, the rate difference will be only 9% and the NNT now becomes 11, and if the baseline clearance rate is 10%, then the NNT for treatment A will be 33, which is perhaps less worthwhile.[ 1 ]

Therefore, NNT summary measures within a systematic review should be interpreted with caution because “control” or baseline event rates usually differ considerably between studies.[ 1 , 15 ] Instead, a range of NNTs for a range of plausible control event rates that occur in different clinical settings should be given, along with their 95% confidence intervals.[ 1 , 16 ]
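The conversion from an OR and a control event rate to an NNT [ Table 2 ] can be sketched as a short function. The OR and the baseline rates below are invented; the point is simply that the same OR yields very different NNTs at different control event rates, which is why a range should be presented.

def nnt_from_or(odds_ratio, control_event_rate):
    # Convert the control event rate to odds, apply the treatment OR,
    # convert back to a rate, and take the reciprocal of the risk difference.
    control_odds = control_event_rate / (1 - control_event_rate)
    treated_odds = odds_ratio * control_odds
    treated_rate = treated_odds / (1 + treated_odds)
    return 1 / (treated_rate - control_event_rate)

# Present NNTs over a range of plausible baseline (control) event rates:
for cer in (0.10, 0.30, 0.50):
    print(f"control event rate {cer:.0%}: NNT = {nnt_from_or(1.9, cer):.0f}")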

The data used in a meta-analysis can be tested for statistical heterogeneity. Methods to test for statistical heterogeneity include the χ² and I² statistics.[ 11 , 17 ] Tests for statistical heterogeneity are typically of low power, so failing to detect statistical heterogeneity does not establish clinical homogeneity. When there is evidence of heterogeneity, reasons for heterogeneity between studies – such as different disease subgroups, intervention dosage, or study quality – should be sought.[ 11 , 17 ] Detecting the source of heterogeneity generally requires sub-group analysis, which is only possible when data from many or large trials are available.[ 1 , 9 ]
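As a minimal sketch (reusing the invented data from the pooling example above), Cochran's Q and the I² statistic can be computed directly from the fixed-effects weights. I² estimates the percentage of the variability across studies that is due to heterogeneity rather than chance.

def heterogeneity(log_effects, variances):
    w = [1 / v for v in variances]
    pooled = sum(wi * y for wi, y in zip(w, log_effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the pooled estimate.
    q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, log_effects))
    df = len(log_effects) - 1
    # I^2: the excess of Q over its degrees of freedom, as a share of Q.
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity([0.90, 0.80, 0.70, 0.10, 0.05, 0.10],
                      [0.25, 0.20, 0.16, 0.02, 0.03, 0.02])
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%")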

In some systematic reviews in which a large number of trials have been performed, it is possible to evaluate whether certain subgroups (e.g. children versus adults) are more likely to benefit than others. Subgroup analysis is rarely possible in dermatology, because few trials are available. Subgroup analyses should always be pre-specified in a systematic review protocol in order to avoid spurious post hoc claims.[ 1 , 9 ]

The Importance of Publication Bias

Publication bias is the tendency for studies that show positive effects to be more likely to be published and easier to find.[ 1 , 18 ] It results from allowing factors other than the quality of the study to influence its acceptability for publication. Factors such as the sample size, the direction and statistical significance of findings, or the investigators’ perception of whether the findings are “interesting,” are related to the likelihood of publication.[ 1 , 19 , 20 ] Negative studies with small sample sizes are less likely to be published.[ 1 , 19 , 20 ]

For many diseases, the studies published are dominated by drug company-sponsored trials of new, expensive treatments. Such studies are almost always “positive.”[ 1 , 21 , 22 ] This bias in publication can result in data-driven systematic reviews that draw more attention to those medicines. Systematic reviews that have been sponsored directly or indirectly by industry are also prone to bias through over-inclusion of unpublished “positive” studies that are kept “on file” by that company and by not including or not finishing registered trials whose results are negative.[ 1 , 23 ] The creation of study registers (e.g. http://clinicaltrials.gov ) and advance publication of research designs have been proposed as ways to prevent publication bias.[ 1 , 24 , 25 ] Many dermatology journals now require all their published trials to have been registered beforehand, but this policy is not well policed.[ 1 ]

Language bias is the tendency for studies that are “positive” to be published in an English-language journal and be more quickly found than inconclusive or negative studies.[ 1 , 26 ] A thorough systematic review should therefore not restrict itself to journals published in English.[ 1 ]

Publication bias can be detected by using a simple graphic test (funnel plot), by calculating the fail-safe N, or by methods such as Begg's rank correlation and Egger's regression.[ 1 , 9 , 11 , 27 , 28 ] These techniques are of limited value when fewer than 10 randomized controlled trials are included. Testing for publication bias is often not possible in systematic reviews of skin diseases, due to the limited number and sizes of trials.[ 1 , 9 ]
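As an illustration of one of these methods, Egger's regression test regresses each trial's standardized effect on its precision; an intercept that differs from zero suggests funnel plot asymmetry. The sketch below uses invented effect estimates and standard errors, and, per the caveat above, such a test should not be relied on with fewer than about ten trials.

import numpy as np
from scipy import stats

def egger_test(effects, standard_errors):
    y = np.asarray(effects) / np.asarray(standard_errors)  # standardized effects
    x = 1.0 / np.asarray(standard_errors)                  # precision
    n = len(y)
    slope, intercept = np.polyfit(x, y, 1)
    # Standard error of the intercept from ordinary least squares.
    resid = y - (intercept + slope * x)
    s2 = resid @ resid / (n - 2)
    sxx = ((x - x.mean()) ** 2).sum()
    se_int = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / sxx))
    t = intercept / se_int
    p_value = 2 * stats.t.sf(abs(t), df=n - 2)
    return intercept, p_value

# Ten hypothetical trials (log odds ratios and their standard errors):
intercept, p = egger_test(
    [0.42, 0.31, 0.55, 0.10, 0.48, 0.27, 0.60, 0.15, 0.38, 0.22],
    [0.30, 0.35, 0.50, 0.20, 0.40, 0.25, 0.55, 0.22, 0.33, 0.28],
)
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")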

Question-driven systematic reviews answer the clinical questions of most concern to practitioners. In many cases, studies that are of most relevance to doctors and patients have not been done in the field of dermatology, due to inadequate sources of independent funding.[ 1 , 9 ]

The Quality of Reporting of Systematic Reviews

The quality of reporting of systematic reviews is highly variable.[ 1 ] One cross-sectional study of 300 systematic reviews indexed in Medline showed that over 90% were reported in specialty journals. Funding sources were not reported in 40% of reviews. Only two-thirds reported the range of years over which the literature was searched for trials. Around a third of reviews failed to provide a quality assessment of the included studies, and only half of the reviews included the term “systematic review” or “meta-analysis” in the title.[ 1 , 29 ]

The Review should Include Interpretation of the Data, Including Implications for Clinical Practice and Further Research

The conclusions in the discussion section of a systematic review should closely reflect the data that have been presented within that review. Clinical recommendations can be made when conclusive evidence is found, analyzed and presented. The authors should make it clear which of the treatment recommendations are based on the review data and which reflect their own judgments.[ 1 , 9 ]

Many reviews in dermatology, however, find little evidence to address the questions posed. The review may still be of value even if it lacks conclusive evidence, especially if the question addressed is an important one.[ 1 , 30 ] For example, the systematic review may provide the authors with the opportunity to call for primary research in an area and to make recommendations on study design and outcomes that might help future researchers.[ 1 , 31 ]

Source of Support: Nil

Conflict of Interest: Nil.


Understanding the Differences Between a Systematic Review vs Literature Review


Let’s look at these differences in further detail.

Goal of the Review

The objective of a literature review is to provide context or background information about a topic of interest. Hence the methodology is less comprehensive and not exhaustive. The aim is to provide an overview of a subject as an introduction to a paper or report. This overview is obtained firstly through evaluation of existing research, theories, and evidence, and secondly through individual critical evaluation and discussion of this content.

A systematic review attempts to answer specific clinical questions (for example, the effectiveness of a drug in treating an illness). Answering such questions comes with a responsibility to be comprehensive and accurate. Failure to do so could have life-threatening consequences. The need to be precise then calls for a systematic approach. The aim of a systematic review is to establish authoritative findings from an account of existing evidence using objective, thorough, reliable, and reproducible research approaches, and frameworks.

Level of Planning Required

The methodology involved in a literature review is less complicated and requires a lower degree of planning. For a systematic review, the planning is extensive and requires defining robust pre-specified protocols. Planning starts with formulating the research question and the scope of the research. The PICO approach (population, intervention, comparison, and outcomes) is used in designing the research question. Planning also involves establishing strict eligibility criteria for inclusion and exclusion of the primary resources to be included in the study. Every stage of the systematic review methodology is pre-specified to the last detail, even before starting the review process. It is recommended to register the protocol of your systematic review to avoid duplication. Journal publishers now look for registration in order to ensure the reviews meet predefined criteria for conducting a systematic review [1].

Search Strategy for Sourcing Primary Resources

A systematic review demands an exhaustive, pre-specified search of multiple databases and trial registers, documented in enough detail that another researcher could reproduce it. A literature review, by contrast, typically relies on a more selective search chosen at the author's discretion, which is one reason it is more prone to bias.

Quality Assessment of the Collected Resources

A rigorous appraisal of collected resources for the quality and relevance of the data they provide is a crucial part of the systematic review methodology. A systematic review usually employs a dual independent review process, which involves two reviewers evaluating the collected resources based on pre-defined inclusion and exclusion criteria. The idea is to limit bias in selecting the primary studies. Such a strict review system is generally not a part of a literature review.

Presentation of Results

Most literature reviews present their findings in narrative or discussion form. These are textual summaries of the results used to critique or analyze a body of literature about a topic serving as an introduction. Due to this reason, literature reviews are sometimes also called narrative reviews. To know more about the differences between narrative reviews and systematic reviews , click here.

A systematic review requires a higher level of rigor, transparency, and often peer review. The results of a systematic review can be interpreted as numeric effect estimates using statistical methods or as a textual summary of all the evidence collected. Meta-analysis is employed to provide the necessary statistical support to evidence outcomes. Meta-analyses are usually conducted to examine the evidence present on a condition and its treatment. The aims of a meta-analysis are to determine whether an effect exists, whether the effect is positive or negative, and to establish a conclusive estimate of the effect [2].

Using statistical methods in generating the review results increases confidence in the review. Results of a systematic review are then used by clinicians to prescribe treatment or for pharmacovigilance purposes. The results of the review can also be presented as a qualitative assessment when the end goal is issuing recommendations or guidelines.

Risk of Bias

Literature reviews are mostly used by authors to provide background information with the intended purpose of introducing their own research later. Since the search for included primary resources is also less exhaustive, a literature review is more prone to bias.

One of the main objectives for conducting a systematic review is to reduce bias in the evidence outcome. Extensive planning, strict eligibility criteria for inclusion and exclusion, and a statistical approach for computing the result reduce the risk of bias.

Intervention studies consider risk of bias as the “likelihood of inaccuracy in the estimate of causal effect in that study.” In systematic reviews, assessing the risk of bias is critical in providing accurate assessments of overall intervention effect [3].

With numerous review methods available for analyzing, synthesizing, and presenting existing scientific evidence, it is important for researchers to understand the differences between the review methods. Choosing the right method for a review is crucial in achieving the objectives of the research.

[1] “Systematic Review Protocols and Protocol Registries | NIH Library,” www.nihlibrary.nih.gov . https://www.nihlibrary.nih.gov/services/systematic-review-service/systematic-review-protocols-and-protocol-registries

[2] A. B. Haidich, “Meta-analysis in medical research,” Hippokratia, vol. 14, no. Suppl 1, pp. 29–37, Dec. 2010. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3049418/


Systematic Literature Review or Literature Review?


As a researcher, you may be required to conduct a literature review. But what kind of review do you need to complete? Is it a systematic literature review or a standard literature review? In this article, we’ll outline the purpose of a systematic literature review, the difference between literature review and systematic review, and other important aspects of systematic literature reviews.

What is a Systematic Literature Review?

The purpose of systematic literature reviews is simple. Essentially, it is to provide a high-level synthesis of the evidence on a particular research question. This question is highly focused, so that the review of the literature matches the topic at hand. For example, a focused question related to medical or clinical outcomes.

The components of a systematic literature review are quite different from the standard literature review research theses that most of us are used to (more on this below). And because of the specificity of the research question, typically a systematic literature review involves more than one primary author. There’s more work related to a systematic literature review, so it makes sense to divide the work among two or three (or even more) researchers.

Your systematic literature review will follow very clear and defined protocols that are decided on prior to any review. This involves extensive planning, and a deliberately designed search strategy that is in tune with the specific research question. Every aspect of a systematic literature review, including the research protocols, which databases are used, and dates of each search, must be transparent so that other researchers can be assured that the systematic literature review is comprehensive and focused.

Most systematic literature reviews originated in the world of medical science. Now, they extend to any evidence-based research question. In addition to the focus and transparency of these types of reviews, additional aspects of a quality systematic literature review include:

  • Clear and concise review and summary
  • Comprehensive coverage of the topic
  • Accessibility and equality of the research reviewed

Systematic Review vs Literature Review

The difference between literature review and systematic review comes back to the initial research question. Whereas the systematic review is very specific and focused, the standard literature review is much more general. The components of a literature review, for example, are similar to any other research paper. That is, it includes an introduction, description of the methods used, a discussion and conclusion, as well as a reference list or bibliography.

A systematic review, however, includes entirely different components that reflect the specificity of its research question, and the requirement for transparency and inclusion. For instance, the systematic review will include:

  • Eligibility criteria for included research
  • A description of the systematic research search strategy
  • An assessment of the validity of reviewed research
  • Interpretations of the results of research included in the review

As you can see, contrary to the general overview or summary of a topic, the systematic literature review includes much more detail and work to compile than a standard literature review. Indeed, it can take years to conduct and write a systematic literature review. But the information that practitioners and other researchers can glean from a systematic literature review is, by its very nature, exceptionally valuable.

This is not to diminish the value of the standard literature review. The importance of literature reviews in research writing is discussed in this article . It’s just that the two types of research reviews answer different questions, and, therefore, have different purposes and roles in the world of research and evidence-based writing.

Systematic Literature Review vs Meta Analysis

It would be understandable to think that a systematic literature review is similar to a meta-analysis. But, whereas a systematic review can include several research studies to answer a specific question, typically a meta-analysis includes a comparison of different studies to suss out any inconsistencies or discrepancies. For more about this topic, check out our Systematic Review vs Meta-Analysis article.



Systematic reviews vs meta-analysis: what’s the difference?

Posted on 24th July 2023 by Verónica Tanco Tellechea

""

You may hear the terms ‘systematic review’ and ‘meta-analysis’ being used interchangeably. Although they are related, they are distinctly different. Learn more in this blog for beginners.

What is a systematic review?

According to Cochrane (1), a systematic review attempts to identify, appraise and synthesize all the empirical evidence to answer a specific research question. Thus, a systematic review is where you might find the most relevant, adequate, and current information regarding a specific topic. In the levels of evidence pyramid , systematic reviews are only surpassed by meta-analyses. 

To conduct a systematic review, you will need, among other things: 

  • A specific research question, usually in the form of a PICO question.
  • Pre-specified eligibility criteria, to decide which articles will be included or discarded from the review. 
  • To follow a systematic method that will minimize bias.

You can find protocols that will guide you from both Cochrane and the Equator Network , among other places, and if you are a beginner to the topic then have a read of an overview about systematic reviews.

What is a meta-analysis?

A meta-analysis is a quantitative, epidemiological study design used to systematically assess the results of previous research (2) . Usually, they are based on randomized controlled trials, though not always. This means that a meta-analysis is a mathematical tool that allows researchers to mathematically combine outcomes from multiple studies.

When can a meta-analysis be implemented?

There is always the possibility of conducting a meta-analysis; yet, for it to yield the best possible results, it should be performed when the studies included in the systematic review are of good quality, have similar designs, and use similar outcome measures.

Why are meta-analyses important?

Outcomes from a meta-analysis may provide more precise information regarding the estimate of the effect of what is being studied, because it merges outcomes from multiple studies. In a meta-analysis, data from various trials are combined to generate an average result (1), which is portrayed in a forest plot diagram. Moreover, meta-analyses also include a funnel plot diagram to visually detect publication bias.

Conclusions

A systematic review is an article that synthesizes available evidence on a certain topic utilizing a specific research question, pre-specified eligibility criteria for including articles, and a systematic method for its production. A meta-analysis, by contrast, is a quantitative, epidemiological study design used to assess the results of the articles included in a systematic review.

Remember: All meta-analyses involve a systematic review, but not all systematic reviews involve a meta-analysis.

If you would like some further reading on this topic, we suggest the following:

The systematic review – a S4BE blog article

Meta-analysis: what, why, and how – a S4BE blog article

The difference between a systematic review and a meta-analysis – a blog article via Covidence

Systematic review vs meta-analysis: what’s the difference? A 5-minute video from Research Masterminds.

References:

  • About Cochrane reviews [Internet]. Cochranelibrary.com. [cited 2023 Apr 30]. Available from: https://www.cochranelibrary.com/about/about-cochrane-reviews
  • Haidich AB. Meta-analysis in medical research. Hippokratia. 2010;14(Suppl 1):29–37.



  • Open access
  • Published: 19 November 2018

Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach

  • Zachary Munn (ORCID: orcid.org/0000-0002-7091-5842) 1,
  • Micah D. J. Peters 1,
  • Cindy Stern 1,
  • Catalin Tufanaru 1,
  • Alexa McArthur 1 &
  • Edoardo Aromataris 1

BMC Medical Research Methodology, volume 18, Article number: 143 (2018)


Scoping reviews are a relatively new approach to evidence synthesis and currently there exists little guidance regarding the decision to choose between a systematic review or scoping review approach when synthesising evidence. The purpose of this article is to clearly describe the differences in indications between scoping reviews and systematic reviews and to provide guidance for when a scoping review is (and is not) appropriate.

Researchers may conduct scoping reviews instead of systematic reviews where the purpose of the review is to identify knowledge gaps, scope a body of literature, clarify concepts or to investigate research conduct. While useful in their own right, scoping reviews may also be helpful precursors to systematic reviews and can be used to confirm the relevance of inclusion criteria and potential questions.

Conclusions

Scoping reviews are a useful tool in the ever-increasing arsenal of evidence synthesis approaches. Although conducted for different purposes compared to systematic reviews, scoping reviews still require rigorous and transparent methods in their conduct to ensure that the results are trustworthy. Our hope is that with clear guidance available regarding whether to conduct a scoping review or a systematic review, fewer scoping reviews will be performed for inappropriate indications better served by a systematic review, and vice versa.

Background

Systematic reviews in healthcare began to appear in publication in the 1970s and 1980s [1, 2]. With the emergence of groups such as Cochrane and the Joanna Briggs Institute (JBI) in the 1990s [3], reviews have exploded in popularity both in terms of the number conducted [1] and their uptake to inform policy and practice. Today, systematic reviews are conducted for a wide range of purposes across diverse fields of inquiry, different evidence types and for different questions [4]. More recently, the field of evidence synthesis has seen the emergence of scoping reviews, which are similar to systematic reviews in that they follow a structured process; however, they are performed for different reasons and have some key methodological differences [5, 6, 7, 8]. Scoping reviews are now seen as a valid approach in those circumstances where systematic reviews are unable to meet the necessary objectives or requirements of knowledge users. There now exists clear guidance regarding the definition of scoping reviews, how to conduct scoping reviews and the steps involved in the scoping review process [6, 8]. However, the guidance regarding the key indications or reasons why reviewers may choose to follow a scoping review approach is not as straightforward, with scoping reviews often conducted for purposes that do not align with the original indications as proposed by Arksey and O’Malley [5, 6, 7, 8, 9, 10]. As editors and peer reviewers for various journals, we have noticed that there is inconsistency and confusion regarding the indications for scoping reviews and a lack of clarity for authors regarding when a scoping review should be performed as opposed to a systematic review. The purpose of this article is to provide practical guidance for reviewers on when to perform a systematic review or a scoping review, supported with some key examples.

Indications for systematic reviews

Systematic reviews can be broadly defined as a type of research synthesis that is conducted by review groups with specialized skills, who set out to identify and retrieve international evidence that is relevant to a particular question or questions and to appraise and synthesize the results of this search to inform practice, policy and, in some cases, further research [11, 12, 13]. According to the Cochrane handbook, a systematic review ‘uses explicit, systematic methods that are selected with a view to minimizing bias, thus providing more reliable findings from which conclusions can be drawn and decisions made.’ [14] Systematic reviews follow a structured and pre-defined process that requires rigorous methods to ensure that the results are both reliable and meaningful to end users. These reviews may be considered the pillar of evidence-based healthcare [15] and are widely used to inform the development of trustworthy clinical guidelines [11, 16, 17].

A systematic review may be undertaken to confirm or refute whether or not current practice is based on relevant evidence, to establish the quality of that evidence, and to address any uncertainty or variation in practice that may be occurring. Such variations in practice may be due to conflicting evidence and undertaking a systematic review should (hopefully) resolve such conflicts. Conducting a systematic review may also identify gaps, deficiencies, and trends in the current evidence and can help underpin and inform future research in the area. Systematic reviews can be used to produce statements to guide clinical decision-making, the delivery of care, as well as policy development [ 12 ]. Broadly, indications for systematic reviews are as follows [ 4 ]:

Uncover the international evidence

Confirm current practice/ address any variation/ identify new practices

Identify and inform areas for future research

Identify and investigate conflicting results

Produce statements to guide decision-making

Despite the utility of systematic reviews to address the above indications, there are cases where systematic reviews are unable to meet the necessary objectives or requirements of knowledge users or where a methodologically robust and structured preliminary searching and scoping activity may be useful to inform the conduct of the systematic reviews. As such, scoping reviews (which are also sometimes called scoping exercises/scoping studies) [ 8 ] have emerged as a valid approach with rather different indications to those for systematic reviews. It is important to note here that other approaches to evidence synthesis have also emerged, including realist reviews, mixed methods reviews, concept analyses and others [ 4 , 18 , 19 , 20 ]. This article focuses specifically on the choice between a systematic review or scoping review approach.

Indications for scoping reviews

True to their name, scoping reviews are an ideal tool to determine the scope or coverage of a body of literature on a given topic and give a clear indication of the volume of literature and studies available as well as an overview (broad or detailed) of its focus. Scoping reviews are useful for examining emerging evidence when it is still unclear what other, more specific questions can be posed and valuably addressed by a more precise systematic review [21]. They can report on the types of evidence that address and inform practice in the field and the way the research has been conducted.

The general purpose for conducting scoping reviews is to identify and map the available evidence [5, 22]. Arksey and O’Malley, authors of the seminal paper describing a framework for scoping reviews, provided four specific reasons why a scoping review may be conducted [5, 6, 7, 22]. Soon after, Levac, Colquhoun and O’Brien further clarified and extended this original framework [7]. These authors acknowledged that at the time, there was no universally recognized definition of scoping reviews nor a commonly acknowledged purpose or indication for conducting them. In 2015, a methodological working group of the JBI produced formal guidance for conducting scoping reviews [6]. However, we have not previously addressed and expanded upon the indications for scoping reviews. Below, we build upon previously described indications and suggest the following purposes for conducting a scoping review:

To identify the types of available evidence in a given field

To clarify key concepts/ definitions in the literature

To examine how research is conducted on a certain topic or field

To identify key characteristics or factors related to a concept

As a precursor to a systematic review

To identify and analyse knowledge gaps

Deciding between a systematic review and a scoping review approach

Authors deciding between the systematic review or scoping review approach should carefully consider the indications discussed above for each synthesis type and determine exactly what question they are asking and what purpose they are trying to achieve with their review. We propose that the most important consideration is whether or not the authors wish to use the results of their review to answer a clinically meaningful question or provide evidence to inform practice. If the authors have a question addressing the feasibility, appropriateness, meaningfulness or effectiveness of a certain treatment or practice, then a systematic review is likely the most valid approach [ 11 , 23 ]. However, authors do not always wish to ask such single or precise questions, and may be more interested in the identification of certain characteristics/concepts in papers or studies, and in the mapping, reporting or discussion of these characteristics/concepts. In these cases, a scoping review is the better choice.

Scoping reviews do not aim to produce a critically appraised and synthesised result/answer to a particular question; rather, they aim to provide an overview or map of the evidence. Because of this, an assessment of methodological limitations or risk of bias of the evidence included within a scoping review is generally not performed (unless there is a specific requirement due to the nature of the scoping review aim) [6]. Given that this assessment of bias is not conducted, the implications for practice (from a clinical or policy-making point of view) that arise from a scoping review are quite different from those of a systematic review. In some cases, there may be no need or impetus to make implications for practice, and where there is, these implications may be significantly limited in terms of providing concrete guidance from a clinical or policy-making point of view. Conversely, the provision of implications for practice is a key feature of systematic reviews and is recommended in reporting guidelines for systematic reviews [13].

Exemplars for different scoping review indications

In the following section, we elaborate on each of the indications listed for scoping reviews and provide a number of examples for authors considering a scoping review approach.

To identify the types of available evidence in a given field

Scoping reviews that seek to identify the types of evidence in a given field share similarities with evidence mapping activities as explained by Bragge and colleagues in a paper on conducting scoping research in broad topic areas [24]. Chambers and colleagues [25] conducted a scoping review in order to identify current knowledge translation resources (and any evaluations of them) that use, adapt and present findings from systematic reviews to suit the needs of policy makers. Following a comprehensive search across a range of databases, organizational websites and conference abstract repositories based upon predetermined inclusion criteria, the authors identified 20 knowledge translation resources, which they classified into three different types (overviews, summaries and policy briefs), as well as seven published and unpublished evaluations. The authors concluded that evidence synthesists produce a range of resources to assist policy makers to transfer and utilize the findings of systematic reviews and that focussed summaries are the most common. Similarly, a scoping review was conducted by Challen and colleagues [26] in order to determine the types of available evidence identifying the source and quality of publications and grey literature for emergency planning. A comprehensive set of databases and websites were investigated and 1603 relevant sources of evidence were identified, mainly addressing emergency planning and response, with fewer sources concerned with hazard analysis, mitigation and capability assessment. Based on the results of the review, the authors concluded that while there is a large body of evidence in the field, issues with its generalizability and validity are as yet largely unknown and that the exact type and form of evidence that would be valuable to knowledge users in the field is not yet understood.

To clarify key concepts/definitions in the literature

Scoping reviews are often performed to examine and clarify definitions that are used in the literature. A scoping review by Schaink and colleagues was performed to investigate how the notion of “patient complexity” had been defined, classified, and understood in the existing literature. A systematic search of healthcare databases was conducted. Articles were assessed to determine whether they met the inclusion criteria, and the findings of included articles were grouped into five health dimensions. An overview of how complexity has been described was presented, including the varying definitions and interpretations of the term. The results of the scoping review enabled the authors to then develop a complexity framework or model to assist in defining and understanding patient complexity [27].

Hines et al. [ 28 ] provide a further example where a scoping review has been conducted to define a concept, in this case the condition bronchopulmonary dysplasia. The authors revealed significant variation in how the condition was defined across the literature, prompting the authors to call for a ‘comprehensive and evidence-based definition’. [ 28 ]

To examine how research is conducted on a certain topic

Scoping reviews can be useful tools to investigate the design and conduct of research on a particular topic. A scoping review by Callary and colleagues investigated the methodological design of studies assessing wear of a certain type of hip replacement (highly crosslinked polyethylene acetabular components) [29]. The aim of the scoping review was to survey the literature to determine how data pertinent to the measurement of hip replacement wear had been reported in primary studies and whether the methods were similar enough to allow for comparison across studies. The scoping review revealed that the methods to assess wear (radiostereometric analysis) varied significantly, with many different approaches being employed amongst the investigators. The results of the scoping review led to the authors recommending enhanced standardization in measurements and methods for future research in this field [29].

There are other examples of scoping reviews investigating research methodology, with perhaps the most pertinent examples being two recent scoping reviews of scoping review methods [9, 10]. Both of these scoping reviews investigated how scoping reviews had been reported and conducted, with both advocating for a need for clear guidance to improve standardization of methods [9, 10]. Similarly, a scoping review investigating methodology was conducted by Tricco and colleagues on rapid review methods that have been evaluated, compared, used or described in the literature [43]. A variety of rapid review approaches were identified, along with many instances of poor reporting. The authors called for prospective studies to compare results presented by rapid reviews versus systematic reviews.

To identify key characteristics or factors related to a concept

Scoping reviews can be conducted to identify and examine characteristics or factors related to a particular concept. Harfield and colleagues (2015) conducted a scoping review to identify the characteristics of indigenous primary healthcare service delivery models [30, 31, 32]. A systematic search was conducted, followed by screening and study selection. Once relevant studies had been identified, a process of data extraction commenced to extract characteristics referred to in the included papers. Over 1000 findings were eventually grouped into eight key factors (accessible health services, community participation, culturally appropriate and skilled workforce, culture, continuous quality improvement, flexible approaches to care, holistic health care, and self-determination and empowerment). The results of this scoping review have been able to inform a best practice model for indigenous primary healthcare services.

As a precursor to a systematic review

Scoping reviews conducted as precursors to systematic reviews may enable authors to identify the nature of a broad field of evidence so that ensuing reviews can be assured of locating adequate numbers of relevant studies for inclusion. They also enable the relevant outcomes and target group or population (for example, for a particular intervention) to be identified. This can have particular practical benefits for review teams undertaking reviews on less familiar topics and can assist the team to avoid undertaking an “empty” review [33]. Scoping reviews of this kind may help reviewers to develop and confirm their a priori inclusion criteria and ensure that the questions to be posed by their subsequent systematic review are able to be answered by available, relevant evidence. In this way, systematic reviews are able to be underpinned by a preliminary and evidence-based scoping stage.

A scoping review commissioned by the United Kingdom Department for International Development was undertaken to determine the scope and nature of literature on people’s experiences of microfinance. The results of this scoping review were used to inform the development of targeted systematic review questions that focussed upon areas of particular interest [ 34 ].

In their recent scoping review on the conduct and reporting of scoping reviews, Tricco and colleagues [10] reveal that only 12% of scoping reviews contained recommendations for the development of ensuing systematic reviews, suggesting that the majority of scoping review authors do not conduct scoping reviews as a precursor to future systematic reviews.

To identify and analyze gaps in the knowledge base

Scoping reviews are rarely conducted solely to identify and analyze gaps in a given knowledge base, as the examination and presentation of what has not been investigated or reported generally requires exhaustive examination of everything that is available. In any case, because scoping reviews tend to be a useful approach for reviewing evidence rapidly in emerging fields or topics, the identification and analysis of knowledge gaps is a common and valuable indication for conducting a scoping review. A scoping review was recently conducted to review current research and identify knowledge gaps on the topic of “occupational balance”, or the balance of work, rest, sleep, and play [35]. Following a systematic search across a range of relevant databases, included studies were selected in line with predetermined inclusion criteria and were described and mapped to provide both an overall picture of the current state of the evidence in the field and to identify and highlight knowledge gaps in the area. The results of the scoping review allowed the authors to illustrate several research ‘gaps’, including the absence of studies conducted outside of western societies, the lack of knowledge around people’s levels of occupational balance, and a dearth of evidence regarding how occupational balance may be enhanced. As with other scoping reviews focussed upon identifying and analyzing knowledge gaps, results such as these allow for the identification of future research initiatives.

Discussion

Scoping reviews are now seen as a valid review approach for certain indications. A key difference between scoping reviews and systematic reviews is that, in terms of a review question, a scoping review will have a broader “scope” than traditional systematic reviews, with correspondingly more expansive inclusion criteria. In addition, scoping reviews differ from systematic reviews in their overriding purpose. We have previously recommended the use of the PCC mnemonic (Population, Concept and Context) to guide question development [36]. The importance of clearly defining the key questions and objectives of a scoping review has been discussed previously by one of the authors, as a lack of clarity can result in difficulties encountered later on in the review process [36].

Considering their differences from systematic reviews, scoping reviews should still not be confused with traditional literature reviews. Traditional literature reviews have been used for many years as a means to summarise various publications or research on a particular topic. In these traditional reviews, authors examine research reports in addition to conceptual or theoretical literature that focuses on the history, importance, and collective thinking around a topic, issue or concept. These types of reviews can be considered subjective, due to their substantial reliance on the author’s pre-existing knowledge and experience, and because they do not normally present an unbiased, exhaustive and systematic summary of a topic [12]. Despite these limitations, traditional literature reviews may still have some use in terms of providing an overview of a topic or issue. Scoping reviews provide a useful alternative to literature reviews when clarification around a concept or theory is required. If traditional literature reviews are contrasted with scoping reviews, the latter [6]:

Are informed by an a priori protocol

Are systematic and often include exhaustive searching for information

Aim to be transparent and reproducible

Include steps to reduce error and increase reliability (such as the inclusion of multiple reviewers)

Ensure data is extracted and presented in a structured way

Another approach to evidence synthesis that has emerged recently is the production of evidence maps [37]. The purpose of these evidence maps is similar to that of scoping reviews: to identify and analyse gaps in the knowledge base [37, 38]. In fact, most evidence mapping articles cite seminal scoping review guidance for their methods [38]. The two approaches therefore have many similarities, with perhaps the most prominent difference being the production of a visual database or schematic (i.e. a map) that assists the user in interpreting where evidence exists and where there are gaps [38]. As Miake-Lye states, at this stage ‘it is difficult to determine where one method ends and the other begins.’ [38] Both approaches may be valid when the indication is for determining the extent of evidence on a particular topic, particularly when highlighting gaps in the research.

A further popular method to define and scope concepts, particularly in nursing, is the conduct of a concept analysis [39, 40, 41, 42]. Formal concept analysis is ‘a process whereby concepts are logically and systematically investigated to form clear and rigorously constructed conceptual definitions,’ [42] which is similar to scoping reviews where the indication is to clarify concepts in the literature. There is limited methodological guidance on how to conduct a concept analysis, and concept analyses have recently been critiqued for having no impact on practice [39]. In our opinion, scoping reviews (where the purpose is to systematically investigate a concept in the literature) offer a methodologically rigorous alternative to concept analysis, with their results perhaps being more useful to inform practice.

Comparing and contrasting the characteristics of traditional literature reviews, scoping reviews and systematic reviews may help clarify the true essence of these different types of reviews (see Table 1 ).

Rapid reviews are another emerging type of evidence synthesis, and a substantial amount of literature has addressed these types of reviews [43, 44, 45, 46, 47]. There are various definitions for rapid reviews; for simplification purposes, we define these review types as ‘systematic reviews with shortcuts.’ In this paper, we have not discussed the choice between a rapid or systematic review approach, as we are of the opinion that perhaps the major consideration for conducting a rapid review (as compared to a systematic or scoping review) is not the purpose/question itself, but the feasibility of conducting a full review given financial/resource limitations and time pressures. As such, a rapid review could potentially be conducted for any of the indications listed above for the scoping or systematic review, whilst shortening or skipping entirely some steps in the standard systematic or scoping review process.

There is some overlap across the six listed purposes for conducting a scoping review described in this paper. For example, it is logical to presume that if a review group were aiming to identify the types of available evidence in a field, they would also be interested in identifying and analysing gaps in the knowledge base. Other combinations of purposes for scoping reviews would also make sense for certain questions/aims. However, we have chosen to list them as discrete reasons in this paper in an effort to provide some much-needed clarity on the appropriate purposes for conducting scoping reviews. As such, scoping review authors should not interpret our list of indications as a discrete list where only one purpose can be identified.

It is important to mention some potential abuses of scoping reviews. Reviewers may conduct a scoping review as an alternative to a systematic review in order to avoid the critical appraisal stage of the review and expedite the process, thinking that a scoping review may be easier than a systematic review to conduct. Other reviewers may conduct a scoping review in order to ‘map’ the literature when there is no obvious need for ‘mapping’ in this particular subject area. Others may conduct a scoping review with very broad questions as an alternative to investing the time and effort required to craft the necessary specific questions required for undertaking a systematic review. In these cases, scoping reviews are not appropriate and authors should refer to our guidance regarding whether they should be conducting a systematic review instead.

This article provides some clarification on when to conduct a scoping review as compared to a systematic review and clear guidance on the purposes for conducting a scoping review. We hope that this paper will provide a useful addition to this evolving methodology and encourage others to review, modify and build upon these indications as the approach matures. Further work in scoping review methods is required, with perhaps the most important advancement being the recent development of an extension to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for scoping reviews [ 48 ] and the development of software and training programs to support these reviews [ 49 , 50 ]. As the methodology advances, guidance for scoping reviews (such as that included in the Joanna Briggs Institute Reviewer’s Manual) will require revision, refining and updating.

Conclusions

Scoping reviews are a useful tool in the ever-increasing arsenal of evidence synthesis approaches. Researchers may prefer to conduct a scoping review rather than a systematic review where the purpose of the review is to identify knowledge gaps, scope a body of literature, clarify concepts, investigate research conduct, or to inform a systematic review. Although conducted for different purposes compared to systematic reviews, scoping reviews still require rigorous and transparent methods in their conduct to ensure that the results are trustworthy. Our hope is that with clear guidance available regarding whether to conduct a scoping review or a systematic review, fewer scoping reviews will be performed for inappropriate indications better served by a systematic review, and vice versa.

Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7(9):e1000326.


Chalmers I, Hedges LV, Cooper H. A brief history of research synthesis. Eval Health Prof. 2002;25(1):12–37.

Jordan Z, Munn Z, Aromataris E, Lockwood C. Now that we're here, where are we? The JBI approach to evidence-based healthcare 20 years on. Int J Evid Based Healthc. 2015;13(3):117–20.

Munn Z, Stern C, Aromataris E, Lockwood C, Jordan Z. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol. 2018;18(1):5.

Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc. 2015;13(3):141–6.

Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5(1):1.

Colquhoun HL, Levac D, O'Brien KK, et al. Scoping reviews: time for clarity in definition, methods, and reporting. J Clin Epidemiol. 2014;67(12):1291–4.

Pham MT, Rajić A, Greig JD, Sargeant JM, Papadopoulos A, McEwen SA. A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Res Synth Methods. 2014;5(4):371–85.

Tricco AC, Lillie E, Zarin W, et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16:15.

Pearson A. Balancing the evidence: incorporating the synthesis of qualitative data into systematic reviews. JBI Reports. 2004;2:45–64.

Aromataris E, Pearson A. The systematic review: an overview. AJN The American Journal of Nursing. 2014;114(3):53–8.

Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ (Clinical research ed). 2009;339:b2700.

Higgins J, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011.

Munn Z, Porritt K, Lockwood C, Aromataris E, Pearson A. Establishing confidence in the output of qualitative research synthesis: the ConQual approach. BMC Med Res Methodol. 2014;14:108.

Pearson A, Jordan Z, Munn Z. Translational science and evidence-based healthcare: a clarification and reconceptualization of how knowledge is generated and used in healthcare. Nursing research and practice. 2012;2012:792519.

Steinberg E, Greenfield S, Mancher M, Wolman DM, Graham R. Clinical practice guidelines we can trust. Institute of Medicine. Washington, DC: National Academies Press; 2011.

Gough D, Thomas J, Oliver S. Clarifying differences between review designs and methods. Systematic Reviews. 2012;1:28.

Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Inf Libr J. 2009;26(2):91–108.

Tricco AC, Tetzlaff J, Moher D. The art and science of knowledge synthesis. J Clin Epidemiol. 2011;64(1):11–20.

Armstrong R, Hall BJ, Doyle J, Waters E. ‘Scoping the scope’ of a cochrane review. J Public Health. 2011;33(1):147–50.

Anderson S, Allen P, Peckham S, Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Research Policy and Systems. 2008;6(1):1.

Pearson A, Wiechula R, Court A, Lockwood C. The JBI model of evidence-based healthcare. International Journal of Evidence-Based Healthcare. 2005;3(8):207–15.


Bragge P, Clavisi O, Turner T, Tavender E, Collie A, Gruen RL. The global evidence mapping initiative: scoping research in broad topic areas. BMC Med Res Methodol. 2011;11:92.

Chambers D, Wilson PM, Thompson CA, Hanbury A, Farley K, Light K. Maximizing the impact of systematic reviews in health care decision making: a systematic scoping review of knowledge-translation resources. Milbank Q. 2011;89(1):131–56.

Challen K, Lee AC, Booth A, Gardois P, Woods HB, Goodacre SW. Where is the evidence for emergency planning: a scoping review. BMC Public Health. 2012;12:542.

Schaink AK, Kuluski K, Lyons RF, et al. A scoping review and thematic classification of patient complexity: offering a unifying framework. Journal of comorbidity. 2012;2(1):1–9.

Hines D, Modi N, Lee SK, Isayama T, Sjörs G, Gagliardi L, Lehtonen L, Vento M, Kusuda S, Bassler D, Mori R. Scoping review shows wide variation in the definitions of bronchopulmonary dysplasia in preterm infants and calls for a consensus. Acta Paediatr. 2017;106(3):366–74.

Callary SA, Solomon LB, Holubowycz OT, Campbell DG, Munn Z, Howie DW. Wear of highly crosslinked polyethylene acetabular components. Acta Orthop. 2015;86(2):159–68.

Davy C, Harfield S, McArthur A, Munn Z, Brown A. Access to primary health care services for indigenous peoples: a framework synthesis. Int J Equity Health. 2016;15(1):163.

Harfield S, Davy C, Kite E, et al. Characteristics of indigenous primary health care models of service delivery: a scoping review protocol. JBI Database System Rev Implement Rep. 2015;13(11):43–51.

Harfield SG, Davy C, McArthur A, Munn Z, Brown A, Brown N. Characteristics of indigenous primary health care service delivery models: a systematic scoping review. Glob Health. 2018;14(1):12.

Peters MDJ, Lockwood C, Munn Z, Moola S, Mishra RK. What are people’s views and experiences of delivering and participating in microfinance interventions? A systematic review of qualitative evidence from South Asia (protocol). Adelaide: The Joanna Briggs Institute, University of Adelaide; 2015.

Peters MDJ, Lockwood C, Munn Z, Moola S, Mishra RK. People’s views and experiences of participating in microfinance interventions: a systematic review of qualitative evidence. London: EPPI-Centre, Social Science Research Unit, UCL Institute of Education, University College London; 2016.

Wagman P, Håkansson C, Jonsson H. Occupational balance: a scoping review of current research and identified knowledge gaps. J Occup Sci. 2015;22(2):160–9.

Peters MD. In no uncertain terms: the importance of a defined objective in scoping reviews. JBI Database System Rev Implement Rep. 2016;14(2):1–4.

Hetrick SE, Parker AG, Callahan P, Purcell R. Evidence mapping: illustrating an emerging methodology to improve evidence-based practice in youth mental health. J Eval Clin Pract. 2010;16(6):1025–30.

Miake-Lye IM, Hempel S, Shanman R, Shekelle PG. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Systematic reviews. 2016;5(1):1.

Draper P. A critique of concept analysis. J Adv Nurs. 2014;70(6):1207–8.

Gibson CH. A concept analysis of empowerment. J Adv Nurs. 1991;16(3):354–61.


Meeberg GA. Quality of life: a concept analysis. J Adv Nurs. 1993;18(1):32–8.

Ream E, Richardson A. Fatigue: a concept analysis. Int J Nurs Stud. 1996;33(5):519–29.

Tricco AC, Antony J, Zarin W, et al. A scoping review of rapid review methods. BMC Med. 2015;13:224.

Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56.

Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in health technology assessments. Int J Evid Based Healthc. 2012;10(4):397–410.

Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1:10.

Munn Z, Lockwood C, Moola S. The development and use of evidence summaries for point of care information systems: a streamlined rapid review approach. Worldviews Evid-Based Nurs. 2015;12(3):131–8.

Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

Munn Z, Aromataris E, Tufanaru C, Stern C, Porritt K, Farrow J, Lockwood C, Stephenson M, Moola S, Lizarondo L, McArthur A. The development of software to support multiple systematic review types: the Joanna Briggs institute system for the unified management, assessment and review of information (JBI SUMARI). Int J Evid Based Healthc. 2018. (in press)

Stern C, Munn Z, Porritt K, et al. An international educational training course for conducting systematic reviews in health care: the Joanna Briggs Institute's comprehensive systematic review training program. Worldviews Evid-Based Nurs. 2018;15(5):401–8.


Acknowledgements

No funding was provided for this paper.

Availability of data and materials

Not applicable.

Author information

Authors and Affiliations

The Joanna Briggs Institute, The University of Adelaide, 55 King William Road, North Adelaide, 5005, South Australia

Zachary Munn, Micah D. J. Peters, Cindy Stern, Catalin Tufanaru, Alexa McArthur & Edoardo Aromataris


Contributions

ZM: Led the development of this paper and conceptualised the idea for a paper on indications for scoping reviews. Provided final approval for submission. MP: Contributed conceptually to the paper and wrote sections of the paper. Provided final approval for submission. CS: Contributed conceptually to the paper and wrote sections of the paper. Provided final approval for submission. CT: Contributed conceptually to the paper and wrote sections of the paper. Provided final approval for submission. AM: Contributed conceptually to the paper and reviewed and provided feedback on all drafts. Provided final approval for submission. EA: Contributed conceptually to the paper and reviewed and provided feedback on all drafts. Provided approval and encouragement for the work to proceed. Provided final approval for submission.

Corresponding author

Correspondence to Zachary Munn .

Ethics declarations

Competing interests

All the authors are members of the Joanna Briggs Institute, an evidence-based healthcare research institute which provides formal guidance regarding evidence synthesis, transfer and implementation. Zachary Munn is a member of the editorial board of this journal. The authors have no other competing interests to declare.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article.

Munn, Z., Peters, M.D.J., Stern, C. et al. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol 18 , 143 (2018). https://doi.org/10.1186/s12874-018-0611-x


Received : 21 February 2018

Accepted : 06 November 2018

Published : 19 November 2018

DOI : https://doi.org/10.1186/s12874-018-0611-x


  • Systematic review
  • Scoping review
  • Evidence-based healthcare

BMC Medical Research Methodology

ISSN: 1471-2288



What is the difference between a systematic review and a systematic literature review?

By Carol Hollier on 07-Jan-2020 14:23:00


For those not immersed in systematic reviews, understanding the difference between a systematic review and a systematic literature review can be confusing.  It helps to realise that a “systematic review” is a clearly defined thing, but ambiguity creeps in around the phrase “systematic literature review” because people can and do use it in a variety of ways. 

A systematic review is a research study of research studies.  To qualify as a systematic review, a review needs to adhere to standards of transparency and reproducibility.  It will use explicit methods to identify, select, appraise, and synthesise empirical results from different but similar studies.  The study will be done in stages:  

  • In stage one, the question, which must be answerable, is framed
  • Stage two is a comprehensive literature search to identify relevant studies
  • In stage three the identified literature’s quality is scrutinised and decisions made on whether or not to include each article in the review
  • In stage four the evidence is summarised and, if the review includes a meta-analysis, the data are extracted; in the final stage, findings are interpreted. [1]

Some reviews also state what degree of confidence can be placed on that answer, using the GRADE scale.  By going through these steps, a systematic review provides a broad evidence base on which to make decisions about medical interventions, regulatory policy, safety, or whatever question is analysed.   By documenting each step explicitly, the review is not only reproducible, but can be updated as more evidence on the question is generated.
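As a rough illustration of the bookkeeping behind these stages (the counts below are hypothetical, and this sketch is not part of any formal guideline, though it mirrors the PRISMA style of flow reporting), a review team’s record-keeping reduces to a simple tally at each stage:

# Minimal sketch of tracking records through the staged review process.
# All counts are hypothetical, for illustration only.
stages = [
    ("Records identified (databases + grey literature)", 1285),
    ("Records after duplicates removed", 970),
    ("Records retained after title/abstract screening", 140),
    ("Studies included after full-text quality scrutiny", 32),
]

previous = None
for label, count in stages:
    # Report how many records were dropped at each successive stage.
    note = f" (excluded: {previous - count})" if previous is not None else ""
    print(f"{label}: {count}{note}")
    previous = count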

Sometimes when people talk about a “systematic literature review”, they are using the phrase interchangeably with “systematic review”.  However, people can also use the phrase systematic literature review to refer to a literature review that is done in a fairly systematic way, but without the full rigor of a systematic review. 

For instance, for a systematic review, reviewers would strive to locate relevant unpublished studies in grey literature and possibly by contacting researchers directly.  Doing this is important for combatting publication bias, which is the tendency for studies with positive results to be published at a higher rate than studies with null results.  It is easy to understand how this well-documented tendency can skew a review’s findings, but someone conducting a systematic literature review in the loose sense of the phrase might, for lack of resource or capacity, forgo that step. 

Another difference might be in who is doing the research for the review. A systematic review is generally conducted by a team including an information professional for searches and a statistician for meta-analysis, along with subject experts.  Team members independently evaluate the studies being considered for inclusion in the review and compare results, adjudicating any differences of opinion.   In contrast, a systematic literature review might be conducted by one person. 

Overall, while a systematic review must comply with set standards, you would expect any review called a systematic literature review to strive to be quite comprehensive.  A systematic literature review would contrast with what is sometimes called a narrative or journalistic literature review, where the reviewer’s search strategy is not made explicit, and evidence may be cherry-picked to support an argument.

FSTA is a key tool for systematic reviews and systematic literature reviews in the sciences of food and health.


The patents indexed help find results of research not otherwise publicly available because the research has been done for commercial purposes.

The FSTA thesaurus will surface results that would be missed with keyword searching alone. Since the thesaurus is designed for the sciences of food and health, it is the most comprehensive for the field. 

All indexing and abstracting in FSTA is in English, so you can do your searching in English yet pick up non-English language results, and get those results translated if they meet the criteria for inclusion in a systematic review.

FSTA includes grey literature (conference proceedings) which can be difficult to find, but is important to include in comprehensive searches.

FSTA content has a deep archive. It goes back to 1969 for farm-to-fork research, and back to the late 1990s for food-related human nutrition literature—systematic reviews (and any literature review) should include not just the latest research but all relevant research on a question.

You can also use FSTA to find literature reviews.

FSTA allows you to easily search for review articles (both narrative and systematic reviews) by using the subject heading or thesaurus term "REVIEWS" and an appropriate free-text keyword.

On the Web of Science or EBSCO platform, an FSTA search for reviews about cassava would look like this: DE "REVIEWS" AND cassava.

On the Ovid platform using the multi-field search option, the search would look like this: reviews.sh. AND cassava.af.

In 2011 FSTA introduced the descriptor META-ANALYSIS, making it easy to search specifically for systematic reviews that include a meta-analysis published from that year onwards.

On the EBSCO or Web of Science platform, an FSTA search for systematic reviews with meta-analyses about Staphylococcus aureus would look like this: DE "META-ANALYSIS" AND staphylococcus aureus.

On the Ovid platform using the multi-field search option, the search would look like this: meta-analysis.sh. AND staphylococcus aureus.af.

Systematic reviews with meta-analyses published before 2011 are included in the REVIEWS controlled vocabulary term in the thesaurus.

An easy way to locate pre-2011 systematic reviews with meta-analyses is to search the subject heading or thesaurus term "REVIEWS" AND meta-analysis as a free-text keyword AND another appropriate free-text keyword.

On the Web of Science or EBSCO platform, the FSTA search would look like this: DE "REVIEWS" AND meta-analysis AND carbohydrate*

On the Ovid platform using the multi-field search option, the search would look like this: reviews.sh. AND meta-analysis.af. AND carbohydrate*.af.

Related resources:

  • Literature Searching Best Practise Guide
  • Predatory publishing: Investigating researchers’ knowledge & attitudes
  • The IFIS Expert Guide to Journal Publishing





Published on 11.3.2024 in Vol 26 (2024)

The Impact of Digital Hospitals on Patient and Clinician Experience: Systematic Review and Qualitative Evidence Synthesis

Authors of this article:


  • Oliver J Canfell 1,2,3,4*, PhD;
  • Leanna Woods 1,2*, PhD;
  • Yasaman Meshkat 5, MBBS;
  • Jenna Krivit 5, BSc;
  • Brinda Gunashanhar 5, BSc;
  • Christine Slade 6, PhD;
  • Andrew Burton-Jones 4, PhD;
  • Clair Sullivan 1,2,7, MBBS, MD

1 Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Brisbane, Australia

2 Queensland Digital Health Centre, Faculty of Medicine, The University of Queensland, Brisbane, Australia

3 Digital Health Cooperative Research Centre, Australian Government, Sydney, Australia

4 UQ Business School, Faculty of Business, Economics and Law, The University of Queensland, Brisbane, Australia

5 School of Clinical Medicine, Faculty of Medicine, The University of Queensland, Brisbane, Australia

6 Institute for Teaching and Learning Innovation, The University of Queensland, Brisbane, Australia

7 Metro North Hospital and Health Service, Department of Health, Queensland Government, Brisbane, Australia

*these authors contributed equally

Corresponding Author:

Oliver J Canfell, PhD

Centre for Health Services Research

Faculty of Medicine

The University of Queensland

Level 5 Health Sciences Building

Central, Fig Tree Cres

Brisbane, 4006

Phone: 61 731765530

Email: [email protected]

Background: The digital transformation of health care is advancing rapidly. A well-accepted framework for health care improvement is the Quadruple Aim: improved clinician experience, improved patient experience, improved population health, and reduced health care costs. Hospitals are attempting to improve care by using digital technologies, but the effectiveness of these technologies is often only measured against cost and quality indicators, and less is known about the clinician and patient experience.

Objective: This study aims to conduct a systematic review and qualitative evidence synthesis to assess the clinician and patient experience of digital hospitals.

Methods: The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and ENTREQ (Enhancing the Transparency in Reporting the Synthesis of Qualitative Research) guidelines were followed. The PubMed, Embase, Scopus, CINAHL, and PsycINFO databases were searched from January 2010 to June 2022. Studies that explored multidisciplinary clinician or adult inpatient experiences of digital hospitals (with a full electronic medical record) were included. Study quality was assessed using the Mixed Methods Appraisal Tool. Data synthesis was performed narratively for quantitative studies. Qualitative evidence synthesis was performed via (1) automated machine learning text analytics using Leximancer (Leximancer Pty Ltd) and (2) researcher-led inductive synthesis to generate themes.

Results: A total of 61 studies (n=39, 64% quantitative; n=15, 25% qualitative; and n=7, 11% mixed methods) were included. Most studies (55/61, 90%) investigated clinician experiences, whereas few (10/61, 16%) investigated patient experiences. The study populations ranged from 8 to 3610 clinicians, 11 to 34,425 patients, and 5 to 2836 hospitals. Quantitative outcomes indicated that clinicians had a positive overall satisfaction (17/24, 71% of the studies) with digital hospitals, and most studies (11/19, 58%) reported a positive sentiment toward usability. Data accessibility was reported positively, whereas adaptation, clinician-patient interaction, and workload burnout were reported negatively. The effects of digital hospitals on patient safety and clinicians’ ability to deliver patient care were mixed. The qualitative evidence synthesis of clinician experience studies (18/61, 30%) generated 7 themes: inefficient digital documentation, inconsistent data quality, disruptions to conventional health care relationships, acceptance, safety versus risk, reliance on hybrid (digital and paper) workflows, and patient data privacy. There was weak evidence of a positive association between digital hospitals and patient satisfaction scores.

Conclusions: Clinicians’ experience of digital hospitals appears positive according to high-level indicators (eg, overall satisfaction and data accessibility), but the qualitative evidence synthesis revealed substantive tensions. There is insufficient evidence to draw a definitive conclusion on the patient experience within digital hospitals, but indications appear positive or agnostic. Future research must prioritize equitable investigation and definition of the digital clinician and patient experience to achieve the Quadruple Aim of health care.

Introduction

Investment in digital health is advancing rapidly. In 2020, the total global funding for digital health was the highest recorded at US $26.5 billion [ 1 ]. A global appetite for digital health, fueled recently by the COVID-19 pandemic and the associated rapid adoption of point-of-care technological solutions [ 2 ], including telehealth [ 3 ], has driven the digital disruption of health care. A core pillar of digital health investment is the digital transformation of hospitals [ 4 ].

The World Health Organization Global Strategy on Digital Health (2020-2025) recommends the implementation of a national digital health architecture, including digital hospitals [ 5 ]. Digital hospitals represent significant jurisdictional investments to improve the quality and safety of acute care [ 6 ]. A digital hospital uses a comprehensive electronic medical record (EMR) to achieve its clinical goals [ 7 ] and is becoming the predominant method of care delivery worldwide. These new digital hospital environments radically disrupt well-rehearsed clinical workflows and create unfamiliar environments for patients and clinicians, potentially affecting quality, safety, and experience of care [ 8 - 10 ].

It has been difficult to determine the value of digital hospital implementations as what is considered valuable changes over time and place and from person to person [ 11 ]. Previous studies evaluating digital health implementations have focused on three domains: (1) improving patient or hospital outcomes using quantitative evaluation [ 12 ]; (2) exploring patient [ 13 ] and clinician behavior, workflows, and attitudes toward EMRs or digital hospital transformations [ 10 ]; and (3) quantifying value using economic evaluations [ 4 ]. Evidence to date demonstrates conflicting impacts of EMRs on hospital practice, with positive indications for medication safety, guideline adherence, and decision support [ 12 , 14 ] and negative indications for physician-patient communication, staff attitude, and workflow disruption [ 15 , 16 ]. Focusing on the narrow aspects of digital health implementations has resulted in patchy assessments of the value of digital health technologies.

The Quadruple Aim, the overarching goal of a learning health care system, comprises enhancing patient experience, improving population health, reducing health care costs, and improving the provider experience (Figure 1) [17]. The Quadruple Aim of health care has been proposed as a strategic compass to guide digital health investment planning, decision-making, and evaluation [11] and has been used in the health care workforce [18, 19], innovation implementation [20], and COVID-19 pandemic [21] contexts to identify current trends and research gaps.

[Figure 1. The Quadruple Aim of health care.]

The experiences of patients and clinicians have yet to be explored as important contributors to the Quadruple Aim. Previous evaluations of clinician experience have focused on individual retrospective recalls of attitudes, perceptions of EMR implementation [10], and observational time and motion studies [22]. Existing patient experience research has focused on bespoke digital systems for patient use (eg, internet-based care technologies, web-based patient platforms, and mobile health apps) or emerging trends (eg, COVID-19 impacts, effects of specific technologies, research methods, and new technologies) [23, 24]. Traditional evaluations of technology in health care are selective in the outcomes they measure, with an overwhelming focus on clinical outcomes and efficiency.

To address this research gap, our research question was as follows: what is the clinician and patient experience of digital hospitals? We hypothesized that clinicians and patients would report digital hospital experiences positively (eg, patient safety benefits because of digital safeguards [ 25 ]), negatively (eg, productivity loss because of documentation burden [ 15 ]), and ambivalently (eg, no observed impact of the digital environment on patient experience [ 10 ]). The study aim was to conduct a systematic literature review and qualitative evidence synthesis to assess the clinician and patient experience of digital hospitals.

Search Strategy and Identification of Included Articles

This review adhered to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement checklist [ 26 ] ( Multimedia Appendix 1 [ 26 ]) and the ENTREQ (Enhancing the Transparency in Reporting the Synthesis of Qualitative Research) guidelines [ 27 ] ( Multimedia Appendix 2 ). The protocol for this review was registered in PROSPERO (CRD42021258719).

The PubMed, Embase, Scopus, CINAHL, and PsycINFO databases were searched twice—on June 24, 2021 (version [V1]), and subsequently on June 23, 2022 (version [V2])—using the same search strategy restricted by year of publication (2010 to the present) because of the relative novelty of the digital transformation of acute care.

The search strategy was initially developed in PubMed using the population, intervention, comparison, and outcome structure ( Table 1 ) [ 28 ] and translated to other databases ( Multimedia Appendix 3 ). We define experience as an individual’s perception of events, incorporating themes of expectation and satisfaction. Thus, synonyms for “experience” were used to characterize the outcome. A combination of indexed terms (eg, Medical Subject Headings) and keywords identified after consultation with a research librarian and subject matter experts was used. Truncations, synonyms, and terminological variations (eg, “EMR” vs “EHR”) were also used.

a PICO: population, intervention, comparison, and outcome.

b MeSH: Medical Subject Headings.

c tiab: title and abstract.

d EMR: electronic medical record.

e EHR: electronic health record.

f ti: title.
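
To make the structure of such a strategy concrete, the following minimal Python sketch composes a PICO-style PubMed query. It is illustrative only: the field tags ([MeSH], [tiab], [ti]) mirror those in Table 1, but the search terms are hypothetical placeholders, not the review's actual strategy (reported in Multimedia Appendix 3).

```python
# Illustrative sketch: compose a PICO-structured PubMed query combining
# MeSH headings with title/abstract (tiab) and title (ti) keywords.
# Terms are placeholders, not the review's final search strategy.

population = '("Hospitals"[MeSH] OR hospital*[tiab] OR inpatient*[tiab])'
intervention = ('("Electronic Health Records"[MeSH] OR EMR[tiab] OR EHR[tiab] '
                'OR "digital hospital*"[ti])')
outcome = '(experience*[tiab] OR satisfaction[tiab] OR attitude*[tiab])'

# The PICO components are intersected; the comparison component is omitted,
# as is common for experience-focused questions without a comparator.
query = " AND ".join([population, intervention, outcome])
print(query)
```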

In total, 2 reviewers (YM and OJC) performed title and abstract screening. Full-text review was then performed based on the eligibility criteria. Backward citation tracking (snowballing) was used to identify additional articles in the reference lists of included articles and relevant reviews. Decision conflicts were resolved through internal discussion or by involving a third reviewer’s opinion (C Slade) when required.

Eligibility Criteria

Studies were included if they described a quantitative, qualitative, or mixed methods investigation of clinician or adult inpatient “experience” in a digital hospital ( Table 2 ). This review focused on multidisciplinary clinicians and adult inpatients as a first step in synthesizing evidence of digital hospital experience. Pediatric inpatients were excluded because of possible environmental factors that could confound the digital hospital experience (eg, patient entertainment systems). Our study setting prioritized EMRs—real-time patient health records that collect, store, and display clinical information in a tertiary setting [ 29 ] (eg, Cerner Millennium and Epic)—as the foundation of a digital hospital [ 30 ], as opposed to an electronic health record (or personal health record), which displays summarized patient information to the consumer in the community and across multiple health care providers [ 29 ] (eg, My Health Record [Australia], the National Health Service app [United Kingdom], and personal health record [Ministry of National Guard Health Affairs, Kingdom of Saudi Arabia]). The terms “EMR” and “EHR” may be used interchangeably in some countries, so both terms were adopted in the search strategy. The stage of digital hospital implementation (eg, EMR maturity [ 31 ]) was not considered.

a EMR: electronic medical record.

b IoT: Internet of Things.

Data Extraction

The Covidence software (Veritas Health Innovation) and Excel (Microsoft Corp) facilitated data extraction of study details ( Multimedia Appendix 4 ). For studies eligible for the qualitative evidence synthesis (eg, presence of verbatim participant quotes), additional data that explained the qualitative findings were extracted, including the primary and main themes; secondary and subthemes; minor and unexpected themes; participant quotations; and any text labeled within the “results” or “findings” (ie, narrative) sections, including data-driven discoveries, judgments, or explanations the researchers offered about their phenomena [ 32 , 33 ]. All extracted data were cross-checked by a second reviewer for accuracy, and any discrepancies were resolved through discussion. Owing to the heterogeneity in study design, populations, and outcome measures for quantitative studies, a meta-analysis was inappropriate, and a narrative synthesis of quantitative study results was conducted.
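
To illustrate the shape of the per-study record implied by this process, the sketch below defines a hypothetical container for the qualitative data categories named above. The field names are assumptions for illustration; the authors' actual template is in Multimedia Appendix 4.

```python
# Hypothetical extraction record mirroring the categories described above;
# the real template is reported in Multimedia Appendix 4.
from dataclasses import dataclass, field

@dataclass
class QualitativeExtraction:
    study_id: str
    primary_themes: list[str] = field(default_factory=list)
    subthemes: list[str] = field(default_factory=list)
    minor_themes: list[str] = field(default_factory=list)
    participant_quotes: list[str] = field(default_factory=list)
    narrative_text: str = ""  # text labeled "results" or "findings"

record = QualitativeExtraction(
    study_id="study-01",  # hypothetical identifier
    primary_themes=["documentation burden"],
    participant_quotes=["example verbatim quote"],
)
print(record.study_id, record.primary_themes)
```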

Quality Assessment

The quality of the included studies was assessed using the Mixed Methods Appraisal Tool (MMAT) [ 34 ]. Each study’s methodology was evaluated against 5 criteria ( Yes , No , or Can’t tell ) that differed between study designs (qualitative, quantitative randomized controlled trial, quantitative nonrandomized, quantitative descriptive, and mixed methods). The included studies were divided equally among 5 reviewers (LW, YM, C Slade, JK, and OJC). An MMAT star rating was generated for each article ( Yes =1 star; up to 5 stars in total), a method adapted from a recent systematic review by Freire et al [ 35 ], and the scores were cross-checked by a second reviewer. Discrepancies were discussed and resolved within the research team. No study was excluded from the review based on its MMAT score.
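
The adapted star-rating rule is simple enough to express as code. The sketch below assumes only what this paragraph describes: 5 design-specific criteria answered Yes, No, or Can't tell, with each Yes earning 1 star.

```python
# Minimal sketch of the adapted MMAT star rating: each of the 5
# design-specific criteria earns 1 star only when answered "Yes".

def mmat_star_rating(answers: list[str]) -> int:
    """Return the star rating (0-5) for one appraised study."""
    if len(answers) != 5:
        raise ValueError("MMAT applies exactly 5 criteria per study design")
    return sum(1 for answer in answers if answer == "Yes")

# Example: a study meeting 4 of the 5 criteria receives 4 stars
print(mmat_star_rating(["Yes", "Yes", "Can't tell", "Yes", "Yes"]))  # 4
```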

Data Synthesis

Data synthesis was conducted in 2 stages in accordance with our study aims.

Systematic Review: Narrative Synthesis

A narrative (qualitative) synthesis of quantitative studies was conducted to summarize and compare key findings [ 36 ]. We first developed a preliminary synthesis based on extracted data and then explored the relationships within and between studies to identify and explain any heterogeneity. Identified experience outcomes were inductively grouped together based on similarity (eg, ease of use and user-friendliness), and the group was given a descriptor (eg, usability) that would accurately reflect the experience outcomes within the group.
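
This grouping step can be pictured as a mapping from raw experience outcomes to descriptors. In the sketch below, the "usability" grouping follows the example in the text, whereas the "workload" entry is a hypothetical illustration.

```python
# Sketch of the inductive grouping of experience outcomes. The "usability"
# entries follow the example given in the text; "workload" is hypothetical.

outcome_groups = {
    "usability": ["ease of use", "user-friendliness"],
    "workload": ["documentation burden", "productivity"],  # hypothetical
}

def descriptor_for(outcome, groups):
    """Return the group descriptor for a raw experience outcome, if any."""
    for descriptor, members in groups.items():
        if outcome in members:
            return descriptor
    return None

print(descriptor_for("ease of use", outcome_groups))  # "usability"
```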

Qualitative Evidence Synthesis

The qualitative evidence synthesis [ 37 ] was conducted in 2 steps.

Step 1: Automated Text Analytics Using Machine Learning

Step 1 was undertaken using the text analytics tool Leximancer (version 4.5; Leximancer Pty Ltd) [ 38 ], an increasingly adopted [ 39 ] approach to qualitative analysis that is 74% effective at mapping complex concepts from matched qualitative data and >3 times faster than manual thematic analysis [ 40 ]. Leximancer applies an unsupervised machine learning algorithm and an inbuilt thesaurus to uncover networks or patterns of wordlike and namelike terms in a body of text [ 41 , 42 ]. Leximancer then generates interconnections, structures, and patterns among terms to develop “concepts”—collections of words that are linked together within the text—and groups them into “themes”—concepts that are highly connected. The interrelationships between concepts and themes are visualized on a map. Advantages include expediting the early stages of qualitative analysis and providing a first impression of meaning within qualitative data that limits researcher bias.
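
Leximancer's algorithm is proprietary, so the following is only a conceptual sketch of the kind of co-occurrence analysis it automates: counting how often terms appear near each other so that highly connected terms can later be clustered into concepts and themes. It is a toy stand-in, not the tool's implementation.

```python
# Conceptual sketch of co-occurrence-based concept mapping: count unordered
# term pairs appearing within a sliding window of tokens. Highly connected
# terms are candidates for "concepts"; clusters of concepts form "themes".
from collections import Counter

def cooccurrence_counts(tokens: list[str], window: int = 5) -> Counter:
    """Count unordered term pairs co-occurring within a token window."""
    pairs = Counter()
    for i, term in enumerate(tokens):
        for other in tokens[i + 1 : i + window]:
            if other != term:
                pairs[tuple(sorted((term, other)))] += 1
    return pairs

text = "documentation time patient care documentation time workflow patient"
print(cooccurrence_counts(text.split()).most_common(3))
```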

After data extraction, qualitative data from the included articles were synthesized into 3 data sets—“themes” (primary, secondary, and minor), “quotes” (from participants), and “narrative” (any text under the “results” or “discussion” sections)—ready for separate Leximancer analysis. We chose to analyze each data set separately to account for any significant (but unknown at the time) heterogeneity across the studies. Each of the 3 researchers was allocated 1 data set and used Leximancer to create an initial concept map without altering any settings. Initial concepts were reviewed for meaning, and redundant conversational words were removed where appropriate (eg, study , doing , and participants ). Concept variations of EMR or electronic health record were removed as these represented the independent variable and the target context already under analysis. Concept variations (eg, patient and patients ) were merged where necessary. All other software settings were kept as the default values.

Step 2: Researcher-Led Thematic Analysis

The preliminary themes and concepts identified via text analytics underwent validation and researcher-led thematic analysis in accordance with a modified version of the method by Thomas and Harden [ 33 ]. First, each of the top 5 Leximancer-identified concepts (eg, “patient”) was connected with its 2 most related concepts (eg, “patient” AND “documentation” or “patient” AND “time”) to create a concept grouping. In total, 3 researchers (OJC, LW, and JK) validated each Leximancer concept grouping by extracting relevant text and generating a preliminary interpretation of the meaning of each concept grouping, which was cross-checked between researchers. Researchers (OJC, LW, JK, and BG) then worked collaboratively across all 3 data sets to conduct a rapid thematic analysis using a cluster and name technique to generate a working thematic framework [ 43 ]. Through an iterative and interpretive process, researchers then grouped similar concepts into parent themes. Discrepancies were resolved through discussion until the final themes were decided and approved by consensus.
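
The first move of this step (pairing each top concept with its 2 most related concepts) might look like the sketch below, where the frequencies and relatedness scores are hypothetical stand-ins for Leximancer's outputs.

```python
# Sketch of forming concept groupings: take the top concepts by frequency
# and attach each one's 2 most related concepts. All values are
# hypothetical stand-ins for Leximancer's outputs.
from collections import Counter

concept_freq = Counter({"patient": 120, "documentation": 90, "time": 85})
relatedness = {  # co-occurrence strength between concept pairs
    "patient": {"documentation": 40, "time": 35, "care": 30},
    "documentation": {"time": 50, "patient": 40, "system": 10},
    "time": {"documentation": 50, "patient": 35},
}

def concept_groupings(freq, related, top_n=5, n_related=2):
    """Pair each top concept with its most related concepts."""
    groupings = []
    for concept, _ in freq.most_common(top_n):
        neighbours = sorted(related.get(concept, {}).items(),
                            key=lambda kv: kv[1], reverse=True)[:n_related]
        groupings.append((concept, [name for name, _ in neighbours]))
    return groupings

print(concept_groupings(concept_freq, relatedness))
```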

Identification of Included Articles

In total, 2059 studies were identified from the first search (V1), and an additional 462 studies were identified from the second search (V2; Figure 2 ). Following duplicate removal and title and abstract screening, a total of 109 studies (V1: n=84, 77.1%; V2: n=25, 22.9%) remained and underwent full-text review. Of these 109 studies, 61 (56%) met our inclusion criteria and were included in this review, comprising quantitative (n=39, 64%), qualitative (n=15, 25%), and mixed methods (n=7, 11%) designs.
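
The internal consistency of these counts can be verified with simple arithmetic; the sketch below uses only the figures reported in this paragraph (Figure 2 presents the full PRISMA flow diagram).

```python
# Consistency check of the screening counts reported above.
identified = 2059 + 462   # records from searches V1 and V2
full_text = 84 + 25       # studies reaching full-text review (V1 + V2)
included = 39 + 15 + 7    # quantitative + qualitative + mixed methods

assert full_text == 109 and included == 61
print(f"{identified} records identified across both searches")
print(f"{included}/{full_text} = {included / full_text:.0%} of full texts included")
```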

In total, 52% (32/61) of the included studies met all 5 MMAT quality criteria ( Multimedia Appendix 5 [ 44 - 104 ]). An additional 20% (12/61) of the studies met 4 out of 5 of the quality criteria. Only 7% (4/61) of the studies met ≤2 of the 5 quality criteria, of which 75% (3/4) were mixed methods studies and 25% (1/4) had a quantitative descriptive design. For these studies, a score of ≤2 indicated inadequate sampling, and in the case of the mixed methods studies, the integration of and inconsistencies between quantitative and qualitative elements were not adequately described.

Characteristics of the Included Studies

Study Design

Of the 61 included articles ( Multimedia Appendix 6 [ 44 - 104 ]), most (n=39, 64%) adopted quantitative methods to assess clinician and patient experiences. Most quantitative studies (31/39, 79%) conducted descriptive cross-sectional surveys to assess experience at a single point in time. A total of 21% (8/39) of the studies assessed clinician and patient experience through quantitative nonrandomized methods in the pre– and post–EMR implementation periods. A minority of the included studies (15/61, 25%) qualitatively assessed experience through interviews, focus groups, and ethnographic observations. A total of 11% (7/61) of the studies used both quantitative and qualitative methods (mixed methods).

The most common country of study was the United States (21/61, 34%), followed by Australia (6/61, 10%), Saudi Arabia (5/61, 8%), and Canada (4/61, 7%). More than half (32/61, 52%) of the included studies were published after 2018. Study settings were diverse, spanning large tertiary academic hospitals and private hospitals in both rural and metropolitan areas. The number of participating hospitals ranged from 1 to 2836.

Participants

In total, 90% (55/61) of the studies investigated the clinician experience using quantitative, qualitative, or mixed methods. Within the clinician experience group, 35% (19/55) of the studies included all EMR users, followed by nursing staff only (17/55, 31%) and physicians only (15/55, 27%). Sample sizes across the clinician experience studies ranged from 8 to 3610 participants. Only 16% (10/61) of the included studies investigated the patient experience. In total, 60% (6/10) of these studies focused exclusively on the patient experience with the EMR, and 40% (4/10) included perspectives from both stakeholder groups. Patient sample sizes ranged from 11 to 34,425.

Quantitative Results: Clinician and Patient Experience in Digital Hospitals

Table 3 reports the outcome measures of the digital hospital experience identified in the studies with quantitative components (46/61, 75%; 39/46, 85% quantitative and 7/46, 15% mixed methods).

Patient Experience in a Digital Hospital

Of the 9 quantitative or mixed methods studies reporting the patient perspective, 7 (78%) [ 48 , 51 , 57 , 69 , 84 , 96 , 102 ] used different survey methods (eg, the Hospital Consumer Assessment of Healthcare Providers and Systems [HCAHPS] survey) to quantify “patient experience” using various satisfaction metrics. Of these studies, 57% (4/7) reported a positive association [ 48 , 57 , 69 , 84 ] between EMR and patient satisfaction scores and 43% (3/7) reported no substantial change [ 51 , 96 , 102 ]. In total, 22% (2/9) of the studies [ 67 , 92 ] used hospital outcomes rather than patient feedback, which did not meet our definition of “experience.”

A total of 33% (3/9) of the studies [ 57 , 96 , 102 ] reported patient experience before and after EMR implementation (or transition between EMR systems). Tian et al [ 96 ] surveyed 34,425 patients using the standardized HCAHPS survey and found a significant decreasing trend in patient experience scores for the 6 months after implementation followed by a return to baseline, with no significant changes overall. Monturo et al [ 102 ] surveyed 55 patients and found no significant changes in overall patient satisfaction.

Of the 9 studies, 3 (33%) cross-sectional studies [ 48 , 51 , 84 ] compared patient experiences in hospitals with and without an advanced EMR. Hu et al [ 84 ], using the HCAHPS survey at 1006 hospitals, and Kazley et al [ 48 ], using Hospital Compare data at 2836 hospitals, both found a positive association between EMR adoption and overall hospital rating and discharge information. Jarvis et al [ 51 ] found no significant difference in HCAHPS scores in advanced EMR versus non–advanced EMR hospitals.

Clinician Satisfaction With the EMR

Of the 41 quantitative studies that investigated the clinician experience, 24 (59%) included an overall EMR satisfaction metric, and 71% (17/24) of these studies reported a positive sentiment [ 47 , 49 , 59 , 60 , 62 , 70 , 72 , 73 , 76 , 79 , 80 , 85 , 87 - 89 , 98 , 101 ]. For instance, Kutney-Lee et al [ 76 ] administered the registered nurse forecasting study (RN4CAST-US) nursing survey to 12,377 nurses across 353 hospitals and found that 74.9% reported “satisfaction with current EMR.” In total, 25% (6/24) of the studies [ 64 , 68 , 72 , 75 , 89 , 94 ] used features of the technology acceptance model, equating “perceived usefulness” or “perceived value” with overall satisfaction. Evidence of increasing satisfaction with increased digitization was found in a study that stratified results by level of EMR adoption, including groups for basic EMR (71.3% satisfaction) and comprehensive EMR (78.4% satisfaction) [ 76 ].

Of the 24 studies that reported overall EMR satisfaction as an outcome of clinician experience, 7 (29%) [ 46 , 54 , 56 , 61 , 77 , 97 , 104 ] reported negative sentiment with the EMR. For instance, Tilahun and Fritz [ 61 ] surveyed 406 clinicians and found that 64.4% were dissatisfied with the use of the EMR system; however, only 22.8% strongly disagreed with the following statement: “I prefer EMR than the paper record.” One study found that only 15.6% of respondents (n=141 physicians) felt that the EMR was an “effective tool” [ 46 ], and another found that only 38.9% of users (n=262 nurses and physicians using the National Usability-Focused Health Information System Scale) rated the EMR system as “high quality” [ 104 ].

Usability of the EMR by Clinicians

Of the 41 studies that included the EMR user perspective, 19 (46%) reported a usability metric, and 11 (58%) of these reported a positive sentiment [ 50 , 54 , 68 , 70 , 71 , 72 , 76 , 82 , 83 , 89 , 99 ]. Survey components that investigated usability used outcomes such as “ease of use,” “user friendly,” and “technical quality.”

Positive sentiment was considered to be >3.5/5 on a 5-point Likert scale or >50% agreement with usability statements. Of those using a Likert scale, statements such as “the health record I am working with is user-friendly” scored 3.62 (n=667 nurses) [ 82 ], and “perceived ease of use” scored 3.7 (n=1539 nurses in 15 hospitals) [ 72 ] and 3.78 (n=223 nurses) [ 89 ]. Aldosari et al [ 68 ] surveyed 153 nurses and found a 79.7% agreement with the following statement—“It is easy to use the EMR”—and 70.5% agreement with the following statement: “I find the EMR system interface to be user friendly.”
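
Stated as code, the classification rule above reduces to a two-condition test; the sketch below applies it to two of the results just cited.

```python
# Sketch of the sentiment rule above: a usability result is coded positive
# if the mean Likert score exceeds 3.5/5 or if >50% of respondents agree.

def usability_sentiment(mean_likert=None, pct_agree=None):
    positive = ((mean_likert is not None and mean_likert > 3.5)
                or (pct_agree is not None and pct_agree > 50))
    return "positive" if positive else "not positive"

print(usability_sentiment(mean_likert=3.62))  # positive (nurse survey above)
print(usability_sentiment(pct_agree=79.7))    # positive (Aldosari et al)
```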

A negative sentiment regarding usability was found in 42% (8/19) of the studies [ 49 , 53 , 66 , 77 , 87 , 93 , 98 , 104 ]. In the study using the National Usability-Focused Health Information System Scale (n=3013 physicians), most participants (60.15%) disagreed with the following statement: “routine tasks can be performed in a straightforward manner without the need for extra steps using the systems” [ 66 ]. Comparatively fewer nurses (n=3560) in the same study disagreed with this statement [ 66 ].

Adaptation to New Systems

A total of 34% (14/41) of the studies discussed the experience of adapting existing workflows to integrate the new digital interface and of transitioning to a digital environment on the wards [ 47 , 50 , 56 , 60 , 64 , 66 , 70 , 71 , 75 , 77 , 93 , 98 , 99 , 101 ]. Generally, the adaptation outcome carried a negative sentiment from EMR users. One survey of 285 nurses 8 to 13 months after EMR implementation found that users felt that the EMR provided a “holistic view of the patient, but fragmentation and complexity introduce workflow challenges” [ 64 ]. Another study found that 35.1% of physicians (n=317) agreed with the following statement: “EMR does not disrupt workflow” [ 60 ]. A third study found that 48.7% of physicians (n=3013) and 62.3% of nurses (n=3560) disagreed with the following statement: “learning the EHR did not require a lot of training” [ 66 ].

Data Accessibility and Clinician-Patient Interaction

Data accessibility in digital hospitals was reported in 44% (18/41) of the quantitative studies [ 49 , 50 , 53 , 54 , 66 , 68 , 70 , 71 , 75 , 76 , 80 , 82 , 83 , 87 , 93 , 98 , 99 , 104 ]. Much of the sentiment was positive, with clinicians agreeing that the EMR allowed users to access information when and where they needed it. One survey of 153 nurses in a Saudi Arabian hospital found that 85.6% of respondents agreed with the following statement—“I have access to the information where I need it”—and 83.6% agreed with the following statement: “I have access to the information when I need it” [ 68 ]. One large cross-sectional study of 2684 clinicians found a 50.3% agreement with the following statement: “EMR provides precise information I need” [ 98 ]. Neutral sentiment was indicated in one study by an average of 3.5 (5-point Likert scale) in response to the following statement—“it is easy to find the information I need”—from 244 clinicians 2 months after EMR implementation [ 50 ]. The survey by Lloyd et al [ 93 ] found that 49% of physicians (n=224) and 59.4% of nurses (n=72) agreed that “it is easy to obtain necessary patient information using the EMR system.”

A total of 7% (3/41) of the studies [ 67 , 98 , 101 ] reported on the impact of the EMR on clinician-patient interaction, with all 3 agreeing that the EMR reduced this communication. Czernik et al [ 101 ] found that 39% of 126 physicians agreed (7-point Likert scale) that the EMR causes a “lack of proper patient-doctor communication.” A total of 43% (3/7) of the patient experience studies [ 57 , 69 , 102 ] reported on the impact of the EMR on patient-provider interaction. Migdal et al [ 57 ] focused more specifically on physician-patient communication, surveying patient participants with a CICARE survey (17-question Likert scale) designed by the University of California, Los Angeles, Health system to assess resident physician performance. Across 3417 patient surveys, 9 of 16 relevant questions showed statistically significant improvements after EMR implementation, suggesting improved communication between patients and providers.

Workload and Burnout

Many studies (19/41, 46%) reported on the impact of EMR on clinical workload, including symptoms of burnout and subjective productivity. One cross-sectional study in Canada surveyed 208 physicians and found that 68.2% of respondents felt that the EMR “added to daily frustration”; 24.5% of respondents had one or more symptoms of burnout; and, of those with burnout, nearly 75% “identified EMR as contributor to burnout symptoms” [ 88 ]. Another study across 343 hospitals including 12,004 nurses compared EMR usability (as per the RN4CAST-US usability survey) with symptoms of burnout and found that lower EMR usability scores were associated with higher odds of burnout (odds ratio 1.41, 95% CI 1.21-1.64) [ 92 ]. Often, studies assessed workload in terms of productivity. In 5 low-resource hospitals 3 years after EMR implementation (n=405 physicians and nurses), 82.4% of physicians disagreed that “EMR improves productivity,” whereas 61% of nurses agreed [ 61 ].
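
As a brief aside for readers less familiar with odds ratios, the reported interval can be checked against the standard log-scale construction; in the sketch below, the standard error is back-derived from the published bounds (an assumption for illustration, as the study's own standard error is not reported here).

```latex
% 95% CI for an odds ratio on the log scale; SE back-derived from the
% published bounds as an illustrative check, not the study's calculation.
\mathrm{SE} \approx \frac{\ln 1.64 - \ln 1.21}{2 \times 1.96} \approx 0.078,
\qquad
\exp\!\left(\ln 1.41 \pm 1.96 \times 0.078\right) \approx (1.21,\ 1.64)
```

Because the interval excludes 1, lower usability scores are significantly associated with higher odds of burnout.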

Patient Safety and Delivery of Care

There was a mixed sentiment across 34% (14/41) of the quantitative studies [ 50 , 53 , 56 , 66 , 73 , 75 , 79 , 80 , 85 , 87 , 88 , 93 , 94 , 98 ], which included survey items on the impact of the EMR on patient safety. In total, 43% (6/14) of these studies [ 53 , 56 , 66 , 73 , 93 , 98 ] included survey items about EMR preventing errors in patient care, especially mistakes associated with medications. One study in a large specialist hospital in Nigeria (n=35 health care workers) found that the EMR made clinicians “more prone to errors” [ 80 ]. Similarly, Kaipio et al [ 66 ] found that less than half of the surveyed physicians (44.7%) and nurses (40.2%) agreed with the following statement: “IT systems help in preventing errors and mistakes associated with medication.” Conversely, Al Otaybi et al [ 98 ] (n=2684 health care workers) found that only 15.5% agreed with the following statement: “EMR increases the risk of making errors.” One study investigated the change in user experience over time and found that agreement with the statement that the EMR “improves prevention in errors and mistakes associated with medications” increased by 13% from 2010 to 2014 [ 85 ].

There was also a mixed sentiment across 29% (12/41) of the studies, which included outcomes on the impact of the EMR on clinicians’ ability to deliver care to their patients. In a large study of >12,000 US nurses, more than half (55.4%) reported agreement with the statement that the EMR “systems interfere with the provision of care” [ 76 ], and in a smaller study, 84.2% of participants disagreed with the statement that the EMR “system has positive impact on quality of care” [ 61 ]. Conversely, in the Netherlands, nurses were more likely to agree with the following statement: “the information in the health records supports my activities during the provision of care” [ 82 ].

Qualitative Results: Clinician and Patient Experience in Digital Hospitals

A total of 18 studies (n=14, 78% qualitative and n=4, 22% mixed methods) with qualitative components were included in the qualitative evidence synthesis. Only 7% (1/15) of the qualitative studies in our review explored the patient experience in a digital hospital [ 52 ]; however, this study was excluded from the qualitative evidence synthesis and is reported narratively in the following paragraph. A total of 29% (2/7) of the mixed methods studies were also excluded for lacking direct participant quotes as per the exclusion criteria [ 50 , 75 ]; however, the quantitative results are reported in the previous sections.

Strauss [ 52 ] interviewed 11 patients about the dynamics with their nurses and the EMR. Similar to the qualitative evidence synthesis, participants described a positive perception of the EMR when the nurses acknowledged the participants before using the electronic device; however, many “expressed concerns [for] the privacy of their health record information.” Interestingly, participants’ expectations of the “clinical knowledge and competency of the nurse, within the technological arena, have increased with the implementation of the [EMR].”

Qualitative Evidence Synthesis: Clinician Experience Only

Step 1: Automated Text Analytics Using Leximancer

Multimedia Appendix 7 presents the results of the automated text analytics using Leximancer. Figures S1 to S3 in Multimedia Appendix 7 present the intertopic concept maps derived from the themes, quotes, and narrative qualitative data, respectively.

Table S1 in Multimedia Appendix 7 compares the top 5 concepts and their 2 most related concepts identified by Leximancer across each qualitative group. Owing to the relative homogeneity in the top 5 concepts identified among the themes, quotes, and narrative qualitative data, it was decided to perform researcher-led thematic analysis (step 2) collectively instead of individually for each data group. This decision was not a predetermined method and was made organically during data analysis.

Textbox 1 presents 7 themes that describe the clinician experience in digital hospitals derived from the qualitative evidence synthesis (18/61, 30% of the studies).

  • Slow and inefficient digital documentation detracts from other clinical priorities.
  • Inconsistent data quality and discoverability challenge clinician trust in making data-driven decisions.
  • Digital technology creates new tensions that disrupt conventional health care relationships.
  • Acceptance of digital hospitals is a value-based spectrum that changes over time.
  • Clinicians value patient safety benefits while acknowledging concerns about new digital risks.
  • Clinicians feel reliant on hybrid (digital and paper) workflows to maintain the standard of care.
  • Clinicians worry about compromising patient data privacy to improve care efficiency.

Theme 1: Slow and Inefficient Digital Documentation Detracts From Other Clinical Priorities

Documentation was a time burden for clinicians with a slow and inefficient workflow [ 95 ] in which it was difficult to find information [ 95 ], there were “too many steps to accomplish simple tasks” [ 44 ], and users were required to re-enter the same data repeatedly [ 63 ].

Clinicians felt challenged by the requirement for accurate and complete documentation during the provision of patient care. Staff reported wanting to provide care but needing to complete medical records:

A lot of the time we’re having to say [to patients], “Oh look, I’ll have to come back to you, I’ve got to do my documentation.” [ 81 ]

The study by Schenk et al [ 64 ] concluded that “EHR implementation was disruptive to nursing care and adversely influenced nursing attitudes,” reporting “too many steps to find and chart information, information that is fragmented and overly complex, leading to workflow challenges and interruptions of care.” Shortcuts and quick orders saved time [ 95 ] and were used as workarounds to improve efficiency.

Theme 2: Inconsistent Data Quality and Discoverability Challenge Trust in Decisions

Clinicians found that data quality improved in digital hospitals. Studies highlighted various benefits of digitization, including improved documentation of data [ 78 ], efficient display of information [ 95 ], improved data completeness [ 78 ], and improved documentation readability [ 63 ]. This was summarized by the following participant quote:

...the availability of data in the EHR is a good thing. [ 101 ]

“Note bloat” was reported as a theme regarding difficulties finding information in the digital system [ 95 ]. Discoverability challenged clinicians and negatively affected their trust in making data-driven decisions. It was difficult to find information [ 95 ] and easy to miss information [ 95 ], as described by one participant:

...you’ve got to look through 100 documents to find the information you are looking for. [ 65 ]

Inefficiencies in the EMR design may lead to inappropriate care:

A wrong decision happens based on the missing information. [ 45 ]

Theme 3: Digital Technology Creates New Tensions That Disrupt Conventional Health Care Relationships

EMRs “inserted” between the patient and clinician reduced patient contact [ 55 ] and deteriorated the personal relationship [ 44 ]. There was the potential to lose focus on the patient, undermine rapport [ 86 ], and communicate with the computer in lieu of direct bedside patient communication [ 81 ]. The effect of digital documentation on trust in the psychiatrist-patient relationship was also noted; preserving this trust required open communication between the psychiatrist and patient to promote transparency about what was documented [ 100 ].

Managing disrupted communication [ 81 ] to preserve the patient’s personhood in the digital environment [ 81 ] created notable tension [ 69 ]. Conversely, the EMR could improve information accuracy [ 44 ], for example, when clinicians viewed a patient history [ 103 ] or used the computer to facilitate conversation [ 86 ]. Preparing notes before consultations, minimizing screen use and explaining computer use, taking paper notes, sharing screens with patients, and viewing results digitally together “seems to counterbalance the negative effects of computer use” [ 86 ].

Interprofessional communication between clinicians was also affected by digital notes: “only when the data entered by different roles in the healthcare system are accurate, the clinicians can make timely and correct decisions” [ 103 ].

Theme 4: Acceptance of Digital Hospitals Is a Value-Based Spectrum That Changes Over Time

Individual beliefs about digital hospitals trended negatively (eg, threats to patient safety, waste of human resources, and perceived inefficiency), but objective experiences trended positively (eg, access to records, optimized treatment, efficiency and health system coordination [ 45 ], cost savings, improved productivity, and quality of care). Acceptance fluctuated between perception and reality—a mismatch between “work as imagined” and “work as done.”

Negative experiences with “work as imagined” were related to perceived inefficiencies [ 45 ]. Digital hospitals created their own unique time pressures with improved productivity in some cases [ 45 ], with participants viewing technology as “both time saving and time consuming” [ 74 ].

Positive experiences with “work as done” coincided with a longer time since implementation, when adoption had progressed, disruption to work processes had eased, and workflows had been integrated [ 58 ]. Initially, clinicians reported a negative first impression of the EMR, especially regarding its perceived complexity and ease of use [ 91 ].

Theme 5: Clinicians Value Patient Safety Benefits While Acknowledging Concerns About New Digital Risks

Digital hospitals generate patient safety benefits while creating new ways to make errors and increase risks. New errors may negatively affect patient safety. These include wrong patient errors, alert fatigue, inappropriate alerts, data entry errors, technical problems [ 78 ], field auto-population or auto-refresh errors, and the absence of aids for dose calculations [ 95 ]. Concerns regarding clinician overreliance on the system, ignoring correct alerts, and prioritizing system compliance over clinical accuracy were raised in some cases [ 45 ].

Reduced medication errors were the primary reported patient safety benefit, including awareness of known adverse reactions to medications [ 103 ] and improved legibility [ 78 ]. Clinical decision support enabled by the EMR was seen as a safety benefit to alert staff for prompt intervention [ 78 ]:

There is a pop up in the system which questions are you supposed to give that medication right now? [ 103 ]

The safety benefits of sharing medical information [ 100 ] and applying regulatory frameworks to the workflow [ 100 ] were noted. Entering clinical data into structured mandatory fields and managerial-level review improved thoroughness and was considered to enhance patient safety [ 78 ].

Theme 6: Clinicians Feel Reliant on Hybrid (Digital and Paper) Workflows to Maintain Standard of Care

Digital transformation caused workflow disruption. New digital workflows were time consuming, with a one-size-only user interface and limited ability to adapt to individual patient characteristics or change information once documented [ 86 ]. The digital interface was insufficient to meet clinical workflow needs. Often, EMR workflows were supplemented with paper workflows [ 81 ]. Paper enabled total customization to fit with workflow conventions and was a “cognitive support” to supplement personal workflows to plan and prioritize in a flexible, convenient, comfortable, and trusted way [ 58 ]:

I go to my [paper] notes and I make little boxes, and if I do those tasks I tick them. [ 81 ]

Theme 7: Clinicians Worry About Compromising Patient Data Privacy to Improve Care Efficiency

The improved documentation captured by the EMR created a perceived privacy risk for the patient. Perceptions of patient preferences to protect the disclosure of medical information involved elements such as diagnoses [ 100 ], mental health–related stigma [ 100 ], and patient distrust of the EMR system or its users [ 86 ]. Strategies reported by health care professionals included limiting clinical documentation to general content, avoiding labels (eg, “mood disorder” instead of “depression”) [ 100 ], prioritizing content based on clinical relevance [ 100 ], and restricting documentation to critical medication-related information considered essential knowledge [ 86 ].

Principal Findings and Comparison With Prior Work

Our findings revealed mixed and complex clinician and patient experiences of digital hospitals ( Figure 3 ). Generally, clinicians reported positive overall satisfaction with digital hospitals in quantitative measures (17/24, 71% of the studies); however, qualitative studies reported negative clinician experiences, including compromised clinician-patient interactions, inefficient data workflows, and patient data privacy concerns. For example, acceptance of digital hospitals fluctuated over time and trended negatively when grounded in individual perceptions and beliefs (ie, “work as imagined”) yet trended positively when based on objective measures of practice (ie, “work as done”) [ 105 ]. These inconsistencies likely reflect the various contextual factors that influence experience, such as intervention design or stage of implementation. The quantitative finding of mixed EMR usability likely explains the clinician reliance on hybrid (digital and paper) workflows revealed in the qualitative evidence synthesis: clinicians seek to maintain their clinical workflow standard and validate data against additional sources of truth, such as paper [ 10 ].

The effects of digital hospitals on clinician-reported patient safety and on clinicians’ ability to deliver care were mixed; there was acknowledgment that digitization primarily reduces medication error risk but creates new risks driven by questionable data quality. In the small proportion of studies that explored patient experience (10/61, 16%), there was weak evidence supporting a positive association between digital hospitals and patient satisfaction scores. To our knowledge, this is the first review to systematically evaluate clinician and patient experience in digital hospitals and use qualitative evidence synthesis with machine learning (Leximancer) to consolidate identified themes in previous qualitative research into an empirical “umbrella” view of digital hospital experience.

Previous studies have evaluated the adoption of health information technology and identified enablers of and barriers to the routine use of EMRs in practice. The perceived value of the EMR to clinical workflows and data accessibility are key adoption facilitators, whereas cost and time consumption are barriers to adoption [ 106 ]. Our findings revealed that clinicians reported high satisfaction with digital hospitals and positively viewed data accessibility in quantitative measures; however, our qualitative evidence synthesis revealed themes of frustration with slow digital workflows and inconsistent data discoverability. Evidence of EMR adoption in low-income countries also highlights clinician perception of the EMR as a key facilitator and interoperability and clinician burnout as barriers, similar to our findings on the impact of the EMR on workload and burnout symptoms [ 107 ].

Our review found that the patient experience of digital hospitals was reported disproportionately less frequently than the clinician experience. Evidence of patient satisfaction with EMRs was systematically reviewed in 2013, and it was found that a small number of studies (n=8) indicated positive patient satisfaction with the EMR in mixed settings across primary care, emergency, and outpatient departments [ 13 ]. This evidence is consistent with our findings of a positive or neutral association but remains grounded in cross-sectional methods that warrant rigorous trial evaluation. Beyond the clinician-facing EMR, patient-centered digital health records have emerged as a mechanism to engage and empower consumers living with chronic conditions. Patient portals within EMRs can contribute positively to health care quality and safety by improving medication adherence and clinician-patient communication [ 24 ] and have been shown to improve patient care navigation and disease knowledge without adverse effects and with high patient satisfaction [ 108 ]. Measuring satisfaction with digital hospitals using a simple quantitative scale is unlikely to capture the complexity and heterogeneity of digital hospital environments. There was dissonance in clinician perspectives on data accessibility (or discoverability) between quantitative and qualitative studies. Clinicians reported objective satisfaction with data accessibility and positive attitudes toward data quality; however, they were dissatisfied with the inefficient workflows required to generate high-quality data (ie, input) and with the limited ability to leverage these data for secondary use (ie, output).

Although patient experience is consistently positively associated with patient safety, clinical effectiveness, and self-rated and objectively measured health outcomes [ 109 ], there remains a paucity of empirical research that directly investigates patient experience in digital hospital environments. Our results revealed that, in a small number of studies, patient satisfaction scores were positively associated with digital hospitals (4/61, 7%) or remained unchanged (3/61, 5%). Patients and clinicians shared positive overall impressions of EMRs and negative attitudes toward data privacy in a digital hospital environment; however, this result must be considered in light of the disproportionate reporting of patient versus clinician experience data across the included studies (10/61, 16% vs 55/61, 90%, respectively).

Patient experience in a digital hospital was sometimes inferred from surrogate measures, including hospital recommendations and discharge information quality, that do not capture the complexity of personal experience. For patients, crude quantitative measurement of satisfaction with digital hospitals neglects the complex nature of the digital patient experience [ 110 ] or emerging consumer digital health themes (eg, ethical implications, security, choice, privacy, transparency, accuracy, user-friendliness, and equity of access) [ 111 , 112 ]. The authors support the call by Viitanen et al [ 23 ] to develop a framework to describe the different aspects of patient experience and correlate them with appropriate methods for studying patient experience in this context.

The effect of digital hospitals on the clinician-patient relationship was consistently reported by clinicians as a negative outcome of digitization. These results align with the well-researched relationship between EMRs and the clinician-patient dynamic, with evidence supporting negative communication outcomes (eg, rapport, quality of interaction, and time) [ 113 ] from a clinician perspective. Patient perceptions of clinician-patient communication when using EMRs are relatively stable in previous systematic reviews despite objective studies describing potentially negative (eg, interrupted speech) and positive (eg, facilitating questions) effects [ 16 ]. Patient portals within EMRs can improve clinician-patient communication and should be considered a necessary infrastructure for health services implementing EMRs to mitigate potential negative effects [ 24 ].

Implications for Practice

“Improved patient experience” and “improved clinician experience” are 2 quadrants of the Quadruple Aim of health care that warrant significant health service investment amidst the widespread digital transformation of health care. Our findings highlight the need to address pervasive barriers to positive clinician and patient experiences in digital hospitals. To tackle the issues of clinician-patient interaction, inefficient documentation, workload, and burnout identified in this review, health services can invest in feasible and cost-effective solutions such as clinical education and training that are tailored to each clinical discipline as each discipline has unique EMR needs [ 10 ]. Prioritizing investment in patient-facing digital solutions such as patient portals can democratize clinical knowledge and empower patients on their unique health care journey [ 24 ].

Investment in optimizing EMR infrastructure will build a strong foundation for new clinical applications in descriptive, predictive (ie, artificial intelligence [AI]), and prescriptive (ie, causal AI and decision support) analytics that can benefit clinical workflows and patient outcomes [ 114 ]. COVID-19 initiated the rapid adoption of virtual health care [ 115 ], and clinical applications of AI are rapidly emerging as the future primary disruptors of health care [ 116 ]. Global health services are building machinery to shift from reactive (treat-manage) to proactive (predict-prevent) models of care, with evidence of success for acute clinical problems such as sepsis, in which early identification has reduced mortality and organ failure [ 117 ]. Stakeholder perspectives on implementing clinical AI have been recently consolidated in a qualitative evidence synthesis [ 118 ]; however, similarly to our review, patients, carers, and consumers were an underrepresented group compared with clinicians (11.4% vs 70% of data, respectively).

Our approach of using AI (machine learning) via Leximancer to perform the qualitative evidence synthesis is novel compared with recent manual evidence synthesis methods [ 118 - 120 ]. The use of semiautomated content analysis tools such as Leximancer can accelerate progress toward a learning health system. These tools offer an accelerated pipeline for analyzing “big” qualitative data that would otherwise require burdensome manual analytic workflows. Key health care use cases of applying digital tools to routine analytical workflows are patient-reported experience measures [ 121 ], unstructured clinical notes in EMRs [ 122 ], and social media [ 123 ]. Natural language processing and machine learning have been tested to analyze free-text comments from patient experience feedback [ 124 ]. Applications of Leximancer can be extended by investigating how its algorithms can be used in real-world health care to drive continuous cycles of quality improvement with greater speed and efficiency than manual analysis. Leximancer offers an impartial starting point for content analysis by automating the identification of key concepts and themes that warrant further qualitative refinement by the research team.

Limitations

The scope of this review was limited to experiences in a digital hospital environment, and thus, the experiences of specific digital systems (eg, telehealth and patient portals) were not considered. The complex, interacting factors that influence experience; the stage of digital hospital implementation; and the differences among settings were not explored in this review and offer important foci for future research. By not including gray literature and articles not published in English, our search strategy may have missed informal evaluations of clinician and patient experience of digital hospitals (eg, within health service annual reports) and geographical variation in digital hospital evaluations. The heterogeneity of digital health environments is reflected in the heterogeneity of studies included in this review, meaning that it is difficult to draw definitive conclusions agnostic to time and place. One limitation of using Leximancer for the qualitative evidence synthesis is that it does not automatically identify emotive concepts; these are identified by researchers when interpreting the results. Although Leximancer reduces the potential for human bias when compared with manual analysis, researcher interpretation of Leximancer results remains a gateway for introducing bias [ 40 ]. A qualitative evidence synthesis for patient experience studies was not possible as we only identified one eligible study. The patient experience results should be interpreted with caution because of the relatively limited patient experience data in studies (10/61, 16%) compared with clinician experience data in studies (55/61, 90%).

Conclusions

The clinician experience of digital hospitals appeared positive according to high-level indicators (eg, overall satisfaction and data accessibility); however, the qualitative evidence synthesis revealed substantive tensions between digital hospitals and overall experience, such as weakening clinician-patient interaction, change burden, and inefficient data workflows. There is insufficient evidence to draw a definitive conclusion on the patient experience of digital hospitals, but quantitative indications of satisfaction appear positive or agnostic to digitization. Future research must prioritize investigating the patient experience in digital hospitals and measuring the link between exposure (digital hospital) and outcome (experience) in carefully designed pragmatic trials. Areas of interest include examining the interacting factors that influence experience, the stage of digital hospital implementation, and the differences among settings. Equitable investigation of the patient (including pediatric patients) and clinician digital hospital experience must be prioritized in future research. Worldwide, as digital health becomes inseparable from hospitals and general health care, understanding how to optimize the clinician and patient experience in digital hospital environments will be critical to achieving the Quadruple Aim of (digital) health care.

Acknowledgments

OJC and LW are funded by the Digital Health Cooperative Research Centre (DHCRC). DHCRC is funded under the Commonwealth Government Cooperative Research Centres Program, Australia. OJC was also funded by the Australian Research Council Linkage Program Grant (ARC LP170101154). The funder had no role in study design, data collection, data analysis, data interpretation, or writing of the report.

Data Availability

The data supporting the findings of this study are available from the corresponding author (OJC) upon reasonable request.

Authors' Contributions

OJC and YM conceptualized the study. YM designed the search strategy with input from OJC and C Slade. OJC and YM executed the search. OJC, YM, JK, and BG performed article screening. OJC, LW, JK, and BG performed data extraction and quality appraisal. OJC and LW validated the article screening and data extraction. C Slade assisted with designing the search strategy and article screening. OJC, LW, JK, and BG performed data analysis. OJC and LW wrote the first draft of the paper with input from JK and BG. C Sullivan and ABJ critically reviewed the manuscript and provided expert input. OJC, LW, and JK revised the manuscript. All authors have read and approved the final version of the manuscript.

Conflicts of Interest

None declared.

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 checklist.

ENTREQ (Enhancing the Transparency in Reporting the Synthesis of Qualitative Research) guideline checklist.

Final search strategies for all included databases.

Data extraction template.

Mixed Methods Appraisal Tool results.

Characteristics of the studies included in the systematic review and qualitative evidence synthesis on clinician and patient experience in digital hospitals (N=61).

Leximancer results—qualitative evidence synthesis.

  • Digital health and the trends healthcare investors are following. Healthcare Information and Management Systems Society. URL: https://www.himss.org/resources/digital-health-and-trends-healthcare-investors-are-following [accessed 2024-11-01]
  • Salway RJ, Silvestri D, Wei EK, Bouton M. Using information technology to improve COVID-19 care at New York City health + hospitals. Health Aff (Millwood). Sep 01, 2020;39(9):1601-1604. [ CrossRef ] [ Medline ]
  • Thomas EE, Haydon HM, Mehrotra A, Caffery LJ, Snoswell CL, Banbury A, et al. Building on the momentum: sustaining telehealth beyond COVID-19. J Telemed Telecare. Sep 26, 2020;28(4):301-308. [ CrossRef ]
  • Nguyen KH, Wright C, Simpson D, Woods L, Comans T, Sullivan C. Economic evaluation and analyses of hospital-based electronic medical records (EMRs): a scoping review of international literature. NPJ Digit Med. Mar 08, 2022;5(1):29. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Global strategy on digital health 2020-2025. World Health Organization. 2021. URL: https://tinyurl.com/58d488cu [accessed 2024-01-29]
  • Barnett A, Winning M, Canaris S, Cleary M, Staib A, Sullivan C. Digital transformation of hospital quality and safety: real-time data for real-time action. Aust Health Rev. Jan 2019;43(6):656-661. [ CrossRef ] [ Medline ]
  • Sullivan C, Staib A, Ayre S, Daly M, Collins R, Draheim M, et al. Pioneering digital disruption: Australia's first integrated digital tertiary hospital. Med J Aust. Nov 07, 2016;205(9):386-389. [ CrossRef ] [ Medline ]
  • Sullivan C, Staib A. Digital disruption 'syndromes' in a hospital: important considerations for the quality and safety of patient care during rapid digital transformation. Aust Health Rev. Jun 2018;42(3):294-298. [ CrossRef ] [ Medline ]
  • Robertson ST, Rosbergen IC, Burton-Jones A, Grimley RS, Brauer SG. The effect of the electronic health record on interprofessional practice: a systematic review. Appl Clin Inform. May 01, 2022;13(3):541-559. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Canfell OJ, Meshkat Y, Kodiyattu Z, Engstrom T, Chan W, Mifsud J, et al. Understanding the digital disruption of health care: an ethnographic study of real-time multidisciplinary clinical behavior in a new digital hospital. Appl Clin Inform. Oct 09, 2022;13(5):1079-1091. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Woods L, Eden R, Canfell OJ, Nguyen KH, Comans T, Sullivan C. Show me the money: how do we justify spending health care dollars on digital health? Med J Aust. Feb 06, 2023;218(2):53-57. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Eden R, Burton-Jones A, Scott I, Staib A, Sullivan C. Effects of eHealth on hospital practice: synthesis of the current literature. Aust Health Rev. Sep 2018;42(5):568-578. [ CrossRef ] [ Medline ]
  • Liu J, Luo L, Zhang R, Huang T. Patient satisfaction with electronic medical/health record: a systematic review. Scand J Caring Sci. Dec 26, 2013;27(4):785-791. [ CrossRef ] [ Medline ]
  • Campanella P, Lovato E, Marone C, Fallacara L, Mancuso A, Ricciardi W, et al. The impact of electronic health records on healthcare quality: a systematic review and meta-analysis. Eur J Public Health. Feb 30, 2016;26(1):60-64. [ CrossRef ] [ Medline ]


Edited by T Leung; submitted 29.03.23; peer-reviewed by W Tam, E Pittman; comments to author 07.09.23; revised version received 08.11.23; accepted 31.01.24; published 11.03.24.

©Oliver J Canfell, Leanna Woods, Yasaman Meshkat, Jenna Krivit, Brinda Gunashanhar, Christine Slade, Andrew Burton-Jones, Clair Sullivan. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 11.03.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

  • Open access
  • Published: 15 April 2023

A beginner’s guide on the use of brain organoids for neuroscientists: a systematic review

  • Lance A. Mulder,
  • Josse A. Depla,
  • Adithya Sridhar,
  • Katja Wolthers,
  • Dasja Pajkrt &
  • Renata Vieira de Sá

Stem Cell Research & Therapy, volume 14, Article number: 87 (2023)


Abstract

Background

The first human brain organoid protocol was presented at the beginning of the previous decade, and since then the field has witnessed the development of many new brain region-specific models, along with subsequent protocol adaptations and modifications. The vast amount of data available on brain organoid technology may be overwhelming for scientists new to the field and may consequently decrease its accessibility. Here, we aimed to provide a practical guide for new researchers in the field by systematically reviewing human brain organoid publications.

Methods

Articles published between 2010 and 2020 were selected and categorised by brain organoid application. Those describing neurodevelopmental studies or protocols for novel organoid models were further analysed for the culture duration of the brain organoids, for protocol comparisons of key aspects of organoid generation, and for the functional characterisation assays performed. We then summarised the approaches taken for the different models and analysed the application of small molecules and growth factors used to achieve organoid regionalisation. Finally, we analysed the articles for organoid cell type compositions, the reported time points per cell type, and the immunofluorescence markers used to characterise different cell types.

Results

Calcium imaging and patch clamp analysis were the most frequently used neuronal activity assays in brain organoids. Neural activity was shown in all analysed models, yet network activity was age, model, and assay dependent. Induction of dorsal forebrain organoids was primarily achieved through combined (dual) SMAD and Wnt signalling inhibition. Ventral forebrain organoid induction was performed with dual SMAD and Wnt signalling inhibition together with additional activation of the Shh pathway. Cerebral organoids and the dorsal forebrain model presented the most cell types between days 35 and 60. At 84 days, dorsal forebrain organoids contain astrocytes and potentially oligodendrocytes. Immunofluorescence analysis showed cell type-specific application of non-exclusive markers for multiple cell types.

Conclusions

We provide an easily accessible overview of human brain organoid cultures, which may help those working with brain organoids to define their choice of model, culture time, functional assay, differentiation, and characterisation strategies.

Background

Human brain development starts in the third week post-conception and continues until early adulthood. Early human brain development progresses through several stages, including the formation of the neural tube (neurulation), the formation of the brain vesicles (ventral induction), and the organisation and structuring of different brain regions. Much of our knowledge of human brain development has been extrapolated from animal studies, mainly Drosophila and rodents [ 1 , 2 ]. Although some developmental features and principles are evolutionarily conserved across species, many features are species specific, including the presence of specific cell populations or broad morphological features. For instance, outer radial glia (oRG), a population of basal unipolar precursor cells [ 3 ] directly related to the multiple waves of cortical neurogenesis, is only present in higher primates [ 4 , 5 ]. Another noteworthy difference is the gyrification of the brain, which is present in higher primates but absent in rodents [ 6 ]. Expansion of the cortical surface through the formation of gyri and sulci is observed in several mammal species [ 7 , 8 ], but is strongest in higher primates and particularly in humans [ 9 ]. As a result of these crucial differences, animal models frequently fail to translate to human pathology.

Until the last decade, the available models for studying human brain development included post-mortem material at different stages of development, extrapolations from animal models [ 1 , 10 , 11 ], and in vitro mono- or co-culture models of cell types present in the brain [ 12 ], each presenting its own advantages and limitations (Table 1). In the past decade, the quest for more complex and physiologically relevant human in vitro models for disease modelling and drug discovery [ 13 ] culminated in the development of brain organoids (Fig. 1). Considering these characteristic human differences, this review will solely discuss human brain organoids.

Figure 1: Schematic overview of the currently available brain organoid models representing different regions of the developing human central nervous system. The CNS is represented by the forebrain (in dark and light brown), midbrain (green), hindbrain (orange), and spinal cord (pink). Below each region, the available organoid models are listed with bullet points. Forebrain organoid protocols are subcategorised under telencephalon (dark brown) and diencephalon (light brown) based on the origins of their respective structures. In the forebrain coronal section, the hippocampus is bilaterally depicted with dashed lines in the telencephalon hemispheres. Lining the ventricles is the choroid plexus epithelium (grey line). In the diencephalon, the thalamus and hypothalamus are indicated by dashed lines.

Human brain organoids are self-assembled three-dimensional (3D) tissue models derived from pluripotent stem cells (PSC) that recapitulate certain aspects of human brain development and physiology [ 14 , 15 , 16 ], including specific cell types and brain regions. As such, cells can communicate with other cell types and with the extracellular matrix, creating a physiological microenvironment [ 17 , 18 , 19 ]. Their gene expression profiles resemble those of the human foetal brain, up to the last trimester of gestation [ 20 , 21 ]. Additionally, they can provide insights into the migratory trajectories of certain cell types in vivo, for example, the migration of interneurons from the ventral forebrain into the dorsal forebrain [ 22 ]. Brain organoids have been shown to be a suitable model for studying human neurodevelopment and have been widely used to answer biological questions. They have allowed researchers to gain a better understanding of multiple topics, such as the genetic mechanisms driving human brain evolution [ 23 , 24 , 25 , 26 ], the effects of pollutants on brain development [ 27 , 28 , 29 ], and the neurotrophic effects of multiple drugs [ 30 , 31 , 32 ]. Additionally, human brain organoids have helped to understand the neurological impact that a variety of viruses can have on the brain (reviewed by Depla et al. [ 33 ]), including the recent SARS-CoV-2 virus [ 34 , 35 , 36 ].

Brain organoids can be obtained using guided or non-guided approaches [ 37 , 38 , 39 ]. In both approaches, hPSCs are first cultured in 3D spheres called embryoid bodies (EB), which have the ability to differentiate into the three embryonic germ layers: endoderm, mesoderm, and ectoderm. EB are guided towards an ectodermal fate and further differentiated into neural ectoderm, which gives rise to neural precursor cells (NPC; neural stem cells and neural progenitors). NPC further differentiate into the diverse neuronal and glial cell types (e.g. neurons, astrocytes, and oligodendrocytes) over the span of organoid maturation, while these cell types organise into region-specific structures mimicking different regions of the human brain. Given their ectodermal origin, organoids generally lack non-ectodermal cell types such as microglia [ 40 ] and vasculature [ 41 , 42 ]. Non-guided protocols typically rely solely on self-organisation and cell-to-cell interactions to generate cerebral organoids. Cerebral organoids mainly display a dorsal forebrain identity, but can also contain cells from other brain regions, such as the hippocampus or retina [ 37 ]. Guided approaches make use of patterning factors to mimic in vivo development and generate region-specific brain organoids. Generally, these protocols make use of dual Suppressor of Mothers against Decapentaplegic (SMAD) inhibition (targeting the bone morphogenetic protein (BMP) and transforming growth factor beta (TGFβ) pathways) to generate neural ectoderm and then further guide the EB towards the desired identity [ 43 ]. During brain development, the anterior–posterior orientation is established by high concentrations of wingless/integrated (WNT) at the posterior side and by anterior inhibition of Wnt signalling by secreted frizzled-related protein 1 (Sfrp1). The dorsal–ventral axis is determined by a high BMP concentration dorsally and a high sonic hedgehog (SHH) concentration ventrally. Accordingly, to generate dorsal forebrain organoids, EB are treated with SMAD inhibitors along with Wnt and Shh inhibitors to achieve the desired dorsal anterior identity. Conversely, the generation of ventral forebrain organoids relies on SHH agonists and Wnt signalling inhibition to obtain ventral anterior-oriented neural ectoderm.

Since the first reports of cerebral and cortical (dorsal forebrain) organoids by Lancaster et al. [ 37 ] and Kadoshima et al. [ 38 ], respectively, many protocols have been published that make use of patterning factors to generate region-specific organoids [ 22 , 39 , 44 , 45 , 46 , 47 ]. This surge in new models led to the appearance of new terms associated with these models, such as brain organoids, cerebral organoids, cortical spheroids, and cortical organoids. Additionally, due to the broad application of these models, multiple modifications and adaptations have been introduced to the original protocols.

Brain organoids are complex models, and given the number of different models available, choosing the appropriate one and correctly reporting the results obtained from it can be challenging. Even though many valuable reviews have been published on how brain organoids recapitulate brain development and how they can be applied to a myriad of research topics [ 48 , 49 , 50 , 51 , 52 ], to date there is no systematic review available that focuses on the practical aspects of brain organoid technology. Such an overview, including a categorical report of the cell types described in each model and their cellular markers, may be valuable for researchers in the field. Here, we present an overview of the available models and their applications. We further focus on articles studying neurodevelopment using brain organoids to assess which functional characterisation assays are described and most commonly used. We also provide a protocol comparison of the major organoid models, quantitatively describe the reported small molecules used for forebrain identity induction, and report the cell type compositions described at different stages of organoid maturation. Lastly, we provide an analysis of which immunofluorescence (IF) markers are used to identify each of the cell types. This review does not focus on comparing the specific culture steps for brain organoid generation and differentiation described in the included research and protocol articles. Rather, it serves as a practical guide to better understand the available brain organoid models for neurodevelopmental studies with regard to their culture duration, the cell types present, and IF characterisation.

Protocol and search strategy

The setup of this systematic review was based on the Preferred Reporting Items for Systematic Review and Meta-Analysis for Protocol 2015 [ 53 ] (Fig. 2). PubMed and Ovid Embase (Embase classic and Embase) were used to construct the article base. Articles were obtained from January 1st, 2010 up until December 31st, 2020 using the search terms described in Table 2. Articles published in 2021 but already available in the online databases in 2020 were also included. PubMed and Ovid Embase require different search strategies. For PubMed, the first search (#1) looked for articles associated with the "organoid" MeSH term and synonyms found in the title (ti) or abstract (ab) section. The second search (#2) looked for articles associated with the "Brain" MeSH term and brain-related identities in the title or abstract. Then (#3), using the Boolean operator AND, articles containing both of these terms were filtered. For Ovid Embase, the first search established the articles listed under "Organoids"; the second search determined the articles with all valuable keywords present in the title, abstract, or keywords (kw) section. Then, using the Boolean operator OR, articles obtained through either search were collected. Search results were filtered for peer-reviewed and English-written articles before export to the Rayyan QCRI Review tool [ 54 ]. Duplicates were removed by the program upon import. A few articles that were not captured in either search were later included through cross-referencing. The systematic review was not pre-registered.
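
To illustrate how such a search can be scripted, the snippet below sketches the PubMed arm using Biopython's Entrez E-utilities wrapper. This is a minimal sketch, not the authors' pipeline: the query string only approximates the strategy described in Table 2, and the email address is a placeholder.

```python
# Hedged sketch: run a PubMed search comparable to searches #1-#3 above.
# The query approximates, but does not reproduce, the Table 2 strategy.
from Bio import Entrez  # pip install biopython

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

query = (
    '("Organoids"[Mesh] OR organoid*[tiab]) AND '        # search #1
    '("Brain"[Mesh] OR brain[tiab] OR cerebral[tiab])'   # search #2, combined via AND (#3)
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",        # filter on publication date
    mindate="2010/01/01",
    maxdate="2020/12/31",
    retmax=100,
)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records; first PMIDs:", record["IdList"][:5])
```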

Figure 2: Preferred Reporting Items for Systematic Review and Meta-Analysis for Protocol 2015 article inclusion flow chart.

Selection criteria

Articles were included based on the following criteria: articles on brain organoids generated from (induced) hPSC through a fully 3D EB differentiation protocol; original articles and protocol articles; and articles using in-house generated organoids. Articles in the following categories were excluded: articles describing organoids with intermediate two-dimensional differentiation steps; articles using organoids differentiated from primary or cancer cell lines; reviews, commentaries, news articles, and conference articles; in silico studies and articles receiving organoids from other groups or solely describing other datasets; reports describing cell aggregates originating from one cell type (e.g. NPCs or neurons); and articles describing the generation of retinal or inner ear organoid models. If an article could be excluded on the basis of multiple criteria, one criterion was chosen, since the criteria did not follow hierarchical rules. Two authors (LM and JD) independently assessed the eligibility of each article according to the inclusion and exclusion criteria. Articles were screened on title and abstract and subsequently on full text using the Rayyan QCRI Review tool. Conflicts in inclusion were discussed and resolved through consensus.

Data extraction

The following data were extracted: author, year of publication, field of application, stem cell type, the protocol used for organoid generation, brain organoid identity, age of the brain organoid, functional characterisation, the small molecules and growth factors used, reported cell types, the time points reported for each cell type, and the reported IF markers for each cell type. Certain articles described the generation and/or application of multiple brain organoid models. In these cases, data were extracted for each organoid model separately.

A quantitative study was performed on full-text articles to categorise the included articles based on the organoid application (Additional file 1 ). Articles could be eligible for more than one category, and in these instances the defining factor for categorisation was determined by the aim of the study.

Within the ‘Protocol development & Neurodevelopmental studies’ category, articles were quantitatively analysed for organoid model identity, culture duration, functional assays, small molecule use to guide identity, cell type composition of the models, and the IF markers used to characterise each cell type.

Analysis of organoid model identity was based on the terminology used by the authors for the organoid and on the protocols used to generate it. This analysis allowed for grouping of the articles per organoid identity, on which successive analyses were performed. Organoids were categorised as 'cerebral organoids' based on author nomenclature in combination with the use of embedment in extracellular matrix (ECM), irrespective of the use of small molecules. Similarly, organoids were categorised as 'dorsal forebrain organoids', 'midbrain', 'thalamus', etc., based on the terminology used by the authors; ECM embedment or administration was not taken into consideration in this grouping.
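
A minimal sketch of this grouping rule, with hypothetical field names standing in for the extracted data, could look as follows; it encodes the asymmetry that 'cerebral organoid' requires both the authors' nomenclature and ECM embedment, while region-specific labels follow nomenclature alone.

```python
# Sketch of the identity-grouping rule (hypothetical record fields).
REGIONS = ("dorsal forebrain", "ventral forebrain", "midbrain", "thalamus",
           "hypothalamus", "cerebellum", "brain stem", "spinal cord")

def group_identity(record: dict) -> str:
    name = record["author_term"].lower()
    # 'Cerebral organoid' requires nomenclature plus ECM embedment,
    # irrespective of small molecule use.
    if "cerebral" in name and record["ecm_embedded"]:
        return "cerebral organoid"
    # Region-specific identities follow author terminology regardless of ECM.
    for region in REGIONS:
        if region in name:
            return f"{region} organoid"
    return "unclassified"

print(group_identity({"author_term": "Cerebral organoid", "ecm_embedded": True}))
# -> cerebral organoid
```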

A timeline was constructed to present the development of different region-specific organoid protocols over time, using only the first publication of each model.

Articles grouped by organoid identity were further analysed for the culture duration of each organoid identity (ages in days) and for functional characterisation assessment. For culture duration assessment, the latest reported culture time points were extracted per organoid model. Median culture durations per model were then determined.
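
As a sketch of this step, the snippet below groups per-article latest time points by model and takes the median, as plotted in Fig. 4; the input values are illustrative placeholders, not the extracted data.

```python
# Sketch: per-model median of the latest reported culture day per article.
from collections import defaultdict
from statistics import median

reports = [  # (model, latest reported day in culture) -- made-up values
    ("cerebral organoid", 270), ("cerebral organoid", 90),
    ("dorsal forebrain organoid", 595), ("dorsal forebrain organoid", 84),
    ("thalamic organoid", 300),
]

by_model = defaultdict(list)
for model, day in reports:
    by_model[model].append(day)

for model, days in sorted(by_model.items()):
    print(f"{model}: median {median(days)} days (n={len(days)})")
```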

For functional characterisation assessment, articles that performed either calcium imaging, whole cell patch clamp, or extracellular field potential measurements were analysed. Assays performed on dissociated organoids and on cells outgrown from plated organoids were excluded, as these assays no longer obtain information from 3D tissues. For each included article, the type of functional assay and the respective method of sample preparation (whole mount, sections, dissociated organoids, or plated organoids) were determined. Then, the age range of organoids used for each assay was determined per model. For some articles and models, functional data were available in the context of assembloids; in these cases, the age was determined as the sum of the organoid age at the time of fusion plus the time in assembloid culture (for example, organoids fused at day 60 and cultured as assembloids for a further 30 days would be counted as day 90). Although the following analyses of the models included in this review were only performed on separate organoid models, assembloid data were considered for the qualitative assessment of the functional assays specifically.

Next, article analyses of organoid culture protocols, small molecule and growth factor use for identity guiding, cell type compositions, and reported markers were performed on organoid models demonstrating a telencephalic forebrain identity. These models had to be described in more than four articles. The thalamic & pituitary, midbrain, cerebellum, brain stem, and spinal cord organoid models were not included in these analyses, as these brain structures do not originate from the telencephalon. The medial pallial/hippocampal organoid model was assigned to dorsal forebrain organoids, and the striatal and GE organoid models were assigned to ventral forebrain (subpallium) organoids, as these brain structures originate from the telencephalon during human neurodevelopment.

To summarise the approaches taken for the generation of cerebral, dorsal, and ventral forebrain organoids, protocols were analysed for initial cell seeding density, timing of EB formation, the use of small molecules and ECM, and the usage of static versus rotational culture conditions. Articles describing the generation of EBs as single cells in suspension (e.g. in a flask or plate) without a definite number of cells per EB were not included in the EB seeding density analysis. For this analysis, EB formation and neural induction were considered as two separate steps. When both steps were performed simultaneously (e.g. small molecule administration at the time of single cell seeding), it was classified as neural induction without an EB formation phase.

Cerebral, dorsal forebrain organoid, and ventral forebrain organoid articles were analysed for the use of small molecules and growth factors to guide the organoid models to their respective identities. The individual molecules were scored by the number of articles describing their use. Certain articles described the use of the same small molecule(s) and/or growth factor(s) in multiple culture steps. If the steps were performed with the same intention (e.g. induction of the EB involving multiple steps using dorsomorphin), their use was scored once, as the molecule served the same purpose. If the culture steps involved different aims (e.g. FGF2 for both EB induction and neural tissue proliferation), the use of that molecule was scored separately, since the purpose of application was different. No analysis was performed on cerebral organoid articles as generation of these organoids does not involve the use of identity-guiding molecules.
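
Because this de-duplication rule is easy to misread, the sketch below encodes it directly: within an article, repeated use of a molecule for the same purpose is scored once, while use for a different purpose is scored separately. The records are hypothetical.

```python
# Sketch of the small molecule / growth factor scoring rule.
from collections import Counter

uses = [  # (article, molecule, purpose of the culture step) -- illustrative
    ("art1", "dorsomorphin", "induction"),
    ("art1", "dorsomorphin", "induction"),  # second induction step: same aim
    ("art1", "FGF2", "induction"),
    ("art1", "FGF2", "proliferation"),      # different aim: scored separately
    ("art2", "FGF2", "proliferation"),
]

# set() collapses repeat (article, molecule, purpose) uses into a single score.
scores = Counter((mol, purpose) for _, mol, purpose in set(uses))

print(scores[("dorsomorphin", "induction")])  # -> 1
print(scores[("FGF2", "proliferation")])      # -> 2 (one per article)
```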

Cerebral, dorsal forebrain, and ventral forebrain organoid articles were then analysed for reported cell types and their first reported time points in days. In order to examine the cell type compositions and reporting times described in different models, articles were further grouped by identity. Cell types were only included when the authors also stated their respective reporting time points. The resulting list of cell types was used for the analysis of cell type composition and IF marker characterisation. For the IF marker characterisation analysis, articles were extracted per cell type and no longer by model. In order to improve the readability of the cell type composition table and the IF characterisation figure, cell types were grouped where possible. Grouping was performed by the nomenclature used by the authors of the articles and/or by their marker expression (when IF marker profiles overlapped fully). The resulting groups were as follows: 'Precursors' included all precursor cells that were mentioned as such by the original authors without further specification. 'Neural precursor cells' included neural precursor cells (NPCs), neuro-epithelial cells, and neural stem cells [ 55 ]. 'Radial glia cells' included both radial glial (RG) cells and apical progenitors (identical markers: PAX6, SOX2, GFAP). 'Intermediate progenitor cells' included both intermediate progenitor cells and basal progenitors if the latter were specified as "basal intermediate progenitors" [ 56 ]. Basal progenitors specified to localise in the oSVZ were assigned to the group 'oRG'. Glutamatergic and GABAergic neurons reported in the models were grouped together with neurons described as excitatory and inhibitory, respectively. Articles that reported on cell types without clarifying which IF markers were used were excluded from the IF analysis. Marker characterisation was performed by cumulative scoring of the articles describing the same marker in relation to the same cell type. However, markers reported only once to characterise a specific cell type were removed. If this left a cell type without any markers to characterise it, that cell type was removed from the list as a whole (including from the list of cell type compositions and time points) to improve cohesion.
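
The marker scoring and pruning rules can likewise be summarised in a short sketch; the report tuples below are invented for illustration only.

```python
# Sketch: score markers by the number of distinct articles per cell type,
# drop single-report markers, then drop cell types left without markers.
from collections import defaultdict

reports = [  # (article, cell type, marker) -- illustrative values
    ("a1", "radial glia", "SOX2"), ("a2", "radial glia", "SOX2"),
    ("a1", "radial glia", "PAX6"),
    ("a3", "neuron", "MAP2"),
]

articles_per_pair = defaultdict(set)
for article, cell_type, marker in reports:
    articles_per_pair[(cell_type, marker)].add(article)

# Keep only markers reported by more than one article.
kept = {pair: len(arts) for pair, arts in articles_per_pair.items() if len(arts) > 1}
remaining_cell_types = {cell for cell, _ in kept}

print(kept)                  # {('radial glia', 'SOX2'): 2}
print(remaining_cell_types)  # {'radial glia'} -- 'neuron' is removed entirely
```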

Database search results and categorisation

Search of the PubMed and Embase databases generated a total of 5160 article entries after removal of duplicates. After exclusion based on title and abstract, 378 articles remained and were assessed on a full-text basis. Full-text analysis resulted in the further exclusion of 76 articles. The resulting 302 articles were included for qualitative analysis (Fig. 2; Additional file 1).

Since the initial development of the cerebral and forebrain organoids in 2013, other protocols rapidly followed (Fig. 3). In the first following years (2014–2017), protocols focused on the generation of novel dorsal and ventral forebrain models, including brain regions such as the choroid plexus, hippocampus, and ganglionic eminence (GE). There was an additional focus on midbrain protocols [ 57 ] and on diencephalon-derived identities such as the hypothalamus, thalamus, and pituitary gland. In more recent years (2018–present), novel organoid models were published exhibiting non-forebrain identities, such as the spinal cord [ 58 ] and brain stem. Besides the first publications for each model described here, other unique protocols on previously reported regions have been published. One example is the publication on cortical spheroids by Paşca et al. [ 39 ], which demonstrates dorsal forebrain identity and is a widely used protocol in the field of brain organoid development.

Figure 3: Timeline of the first published protocols of different organoid identities. Only the first published article for each brain organoid identity is presented.

Next, we categorised the articles to better understand the applications for which organoids are currently used (Supplementary Table 1). The resulting 302 articles were assigned to one of twelve categories (Additional file 2). Most articles (125/302; 41.4%) were categorised as 'Protocol development and Neurodevelopment' studies (from here on, 'Neurodevelopmental studies'). These articles are either the first publication describing a protocol for a certain brain region organoid or articles that use brain organoids to study human neurodevelopment. The category 'Protocol optimisation' (48/302; 15.9%) included articles describing optimisations to brain organoid protocols or using brain organoids to optimise experimental conditions [ 59 , 60 , 61 ]. The category 'Immunology & Infection' studies (44/302; 14.6%) included articles in which organoids were used for infection studies and/or the subsequent immunological response. The smallest categories, with five or fewer articles assigned, included 'vascularisation' studies (5/302; <2%), 'transplantation' studies (4/302; <2%), 'gene therapy' studies (2/302; <1%), and 'axonal regeneration' studies (1/302; <1%). Further quantitative analysis was performed on the 125 articles categorised under 'Neurodevelopmental studies', as this category included the most studies as well as publications describing new protocols, which generally contain extensive characterisation.
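
The category shares are simple proportions of the 302 included articles; the quick check below (using the numerators reported above) reproduces the cited percentages.

```python
# Arithmetic check of the category shares out of 302 included articles.
total = 302
counts = {
    "Protocol development and Neurodevelopment": 125,
    "Protocol optimisation": 48,
    "Immunology & Infection": 44,
    "Vascularisation": 5,
    "Transplantation": 4,
    "Gene therapy": 2,
    "Axonal regeneration": 1,
}

for category, n in counts.items():
    print(f"{category}: {n}/{total} = {100 * n / total:.1f}%")
# -> 41.4%, 15.9%, 14.6%, then 1.7%, 1.3%, 0.7%, and 0.3%
```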

Neurodevelopmental models and culture duration

To further understand the type of models being used within the ‘Neurodevelopmental studies’ category, we quantified the number of studies using each organoid model (Additional file 2 ). The most prominent models were cerebral organoids (n = 72), mostly based on the protocol by Lancaster et al. [ 37 , 62 ], followed by dorsal forebrain organoids (n = 48), based on Paşca et al. [ 39 ], thalamic organoids (n = 5), and ventral forebrain organoids (n = 4) (Fig.  4 ). Certain brain organoid models were only reported once, namely brain stem [ 63 ], striatum [ 47 ], medial pallium/hippocampus [ 17 ], and hypothalamic organoids [ 64 ]. One brain organoid protocol was not included for further analyses because no clear identity was reported by the authors [ 65 ].

Figure 4: Brain organoid models and their reported days in culture within the neurodevelopmental category. Box plots depict the 25th and 75th percentiles of the individual reports of days in culture, per organoid model. Each report is plotted as a single point. The median days in culture is shown behind each model for readability. p: pallium; mp: medial pallium; sp: subpallium. The number within brackets depicts the number of articles included per model.

Organoids can be cultured for up to several months or even years, which impacts their cellular composition and maturation stage. To determine the culture durations most commonly used in the literature, we extracted the latest reported time point in culture from the articles for each model (Fig. 4).

Several articles report on the long-term culture of organoids, with the following maximum times reported for different organoid models:

  • cerebral organoids: 270 days [ 66 ]
  • dorsal forebrain organoids: 595 days [ 67 ]
  • medial pallium/hippocampal organoids: 100 days [ 17 ]
  • ventral forebrain organoids: 595 days [ 67 ]
  • thalamus: 300 days [ 68 ]
  • hypothalamus: 40 days [ 64 ]
  • striatum: 170 days [ 47 ]
  • GE: 81 days [ 69 ]
  • choroid plexus: 75 days [ 70 ]
  • midbrain: 146 days [ 18 ]
  • cerebellum: 80 days [ 71 ]
  • brain stem: 28 days [ 63 ]
  • spinal cord: 75 days [ 46 ]

To understand the effects of long-term culture on the organoids, we analysed articles that cultured them for over 100 days. For cerebral, dorsal forebrain, ventral forebrain, and thalamic organoids, further maturation of the organoids resulted in increased complexity of cell type compositions and layer formations [ 46 , 66 , 67 , 72 , 73 , 74 , 75 , 76 ] compared to earlier time points. Extensive astrogenesis (appearing around 60 days of culture and intensifying after 100 days) is prominently mentioned for cerebral organoids and dorsal forebrain organoids [ 39 , 69 , 77 , 78 , 79 , 80 ]. The appearance of oligodendrocytes and their subsequent myelination of neurons was described to start between 60 and 100 days of culture [ 72 , 73 , 79 , 81 ]. A good example of the relationship between maturation and function is illustrated in thalamic-pituitary organoids where, after 100 days, more hormone-producing cells were observed but only became hormonally active around 200 days [ 68 ]. Similarly, long-term culture of cerebral organoids (270 days) leads to the presence of electrophysiologically active neurons exhibiting functional synapses and dendritic spines [ 66 ]. Astrocytes from dorsal forebrain organoids were shown to express more mature gene expression profiles after 299+ days [ 79 ], and dorsal forebrain organoids were reported to demonstrate electrophysiological profiles with nested oscillations after 180 days [ 74 ].

Analysis of neuronal function in brain organoids

Neuronal activity is a key determinant of function in brain organoids. To further understand how this aspect has been analysed in organoid studies, we examined the number of articles describing analyses of neuronal activity using whole cell patch clamp, calcium imaging, or extracellular activity measurements. Overall, we found that only 32 of the 124 articles described the performance of activity assays (Additional file 2). Calcium imaging and patch clamp were the most frequently used assays (19 articles each), whereas measurements of extracellular activity were only performed in 6 articles (Additional file 3). To analyse which organoid preparation method was used for each assay, we cross-referenced the method of organoid preparation with the assay performed (Additional file 3). Our analysis shows that whole mount organoids are the most used approach for calcium imaging (11/19) and extracellular activity (4/6), whereas patch clamp is mainly performed on organoid sections of 250 to 350 microns (11/19). Subsequent analysis focused on studies using whole mount organoids and organoid sections, as data collected with these approaches come from cells within the 3D structure. Consequently, this led to the exclusion of some models, including cerebellum, hippocampus, and medial pallium. Lastly, we quantified the number of times each assay was used per organoid model and the range of ages at which assays were performed (Table 3). Overall, neuronal activity was shown in all models. In dorsal forebrain organoids, synchronised calcium transients have been described at 45 days up to 175 days in culture. However, one study reported a lack of synchronisation and maturation in day 76 organoids [ 75 ]. In cerebral organoids, electrophysiological properties have been detected at 34 days, and the electrophysiological signature of mature neurons was described at day 62. Calcium imaging data showed the presence of functional cortical networks in day 85 organoids. In contrast, time series studies measuring extracellular field potentials specifically reported cortical networks to only be present after 120 days in culture, which correlated with increased expression of pre- and post-synaptic markers, as well as of other genes involved in synaptic maturation. The presence of functional neural networks was described in thalamus (day 49) and ganglionic eminence organoids (day 40–50). Spinal cord assembloids were shown to generate functional neuromuscular junctions and elicit both spontaneous and evoked activity in muscle cells. Additionally, they were shown to receive input from the dorsal forebrain organoids and form functional circuits. In midbrain organoids, electrophysiological properties characteristic of dopaminergic (DA) neurons have been described and could be inhibited by D2/D3 agonists. Lastly, the only report on brain stem organoids found that at day 30 most cells displayed no action or membrane potentials and only a few cells were responsive.
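
The assay-by-preparation cross-reference amounts to a contingency table; a pandas sketch with placeholder rows is shown below.

```python
# Sketch: cross-tabulate assay type against organoid preparation method.
import pandas as pd

df = pd.DataFrame(  # one row per reported assay -- placeholder values
    {
        "assay": ["calcium imaging", "calcium imaging",
                  "patch clamp", "extracellular"],
        "preparation": ["whole mount", "whole mount",
                        "section (250-350 um)", "whole mount"],
    }
)

print(pd.crosstab(df["assay"], df["preparation"]))
```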

Cerebral, dorsal forebrain, and ventral forebrain organoid culture protocols

Next, we focussed on the articles describing cerebral, dorsal forebrain, and ventral forebrain organoids, as these were the most used models. The hippocampal organoid model was assigned to dorsal forebrain organoids, and striatal organoid and GE organoid models were assigned to ventral forebrain organoids, as these brain structures originate from the telencephalon during human neurodevelopment.

To compare the different protocols available to generate cerebral, dorsal, and ventral forebrain organoids, we analysed and summarised key aspects of the protocols, including cell seeding density, neural induction duration, use of small molecules, use of ECM, and the presence of rotational culture (Table 4). Cerebral organoids were generally generated as described by Lancaster and colleagues [ 37 , 62 ], without the use of small molecules and making use of embedment in ECM (Matrigel or Geltrex) and rotational culture systems (orbital shaker or spinning flasks). Some articles, based on protocols by Kadoshima et al. [ 38 ], Qian et al. [ 91 ], and Coulter et al. [ 92 ], described the generation of cerebral organoids with the use of dorsalising or ventralising small molecules. A subset of protocols did not describe the use of ECM or rotation conditions. Protocols for the generation of dorsal and ventral forebrain organoids used small molecules to achieve regionalisation of the models. Most of the dorsal forebrain organoids were created without the addition of a supporting ECM; others were either embedded or received liquid ECM added into the culture medium (Matrigel or Geltrex, 0.5–2%). Additionally, a subset of these articles reports the mechanical removal of the ECM several days after embedding, by either cutting off the ECM or pipetting the organoid up and down. None of the ventral forebrain organoid protocols reported ECM use. Dorsal and ventral forebrain organoids were primarily cultured under static conditions, whereas the majority of cerebral organoids were cultured under rotation conditions. Rotation culture was strongly linked to the type of ECM administration, with all protocols describing embedment of the organoids also describing rotation culture conditions. The EB starting cell number did not seem to influence this choice of culture condition, which is interesting, as EB size is a more defining factor for nutrient diffusion. For all models, we found a large range of initial cell seeding densities per EB, and therefore of EB starting sizes, between protocols.

Small molecule use to guide brain region identities

To elaborate on the guiding principles used to generate dorsal and ventral forebrain identities, we analysed and scored the different small molecules and growth factors used in each publication (Fig.  5 ). During data extraction, distinct stages of organoid generation became apparent: generation of the EBs and their subsequent neuroectoderm induction, followed by an optional proliferation step, and lastly differentiation and maturation. In most dorsal forebrain protocols, dual SMAD inhibition is restricted to EB formation and induction (21/50), since continuous BMP inhibition blocks dorsalisation of the tissue [ 93 ]. Some articles described the additional use of Wnt inhibitors alone (11/50) or combined with Shh activators (2/50). One article described the timed use of Wnt inhibitors and activators to specifically generate medial pallium/hippocampal tissue [ 17 ]. Single SMAD inhibition was also described, in combination with Wnt inhibition (14/50) or Shh inhibition (2/50); in all cases of single SMAD inhibition, the TGFβ pathway was the one inhibited.

Fig. 5

Small molecules and growth factors used in guided dorsal forebrain and ventral forebrain protocols. Molecules and factors are scored by their use in EB formation and induction of neuroectoderm, proliferation of the neural tissue, or differentiation and maturation of the organoids. Molecules are grouped by their pathways and marked as exerting an inhibitory (blue) or stimulating (red) effect on each pathway. Abbreviations (top to bottom, left to right): form: formation, Induc: induction, Prolif: proliferation, Diff: differentiation, Mat: maturation, SB431: SB-431542/3, Activin A: Recombinant Human/Mouse/Rat Activin A, Dorso: dorsomorphin, LDN: LDN-193189, IWR-1e: IWR-1(endo), CHIR: CHIR99021, Cyclo: cyclopamine, SAG: smoothened agonist, Purmor: purmorphamine, SHH: recombinant SHH, RA: retinoic acid, Allepreg: allopregnanolone, Ketoco: ketoconazole, Clema: clemastine, GSK: GSK2656157, HGF: hepatocyte growth factor, IGF: insulin-like growth factor, PDGF: PDGF-AA, AA: ascorbic acid, Doco: docosahexaenoic acid

Dual SMAD inhibition was also described for ventral forebrain organoid induction, with the additional administration of Wnt inhibitors alone (1/8) or of Wnt inhibitors and Shh activators combined (8/8). The retinoic acid receptor is highly expressed in the striatum during brain development [ 94 ]. Two of the nine articles describing dual SMAD inhibition together with Wnt inhibition and Shh activation described the activation of RA signalling with RA for the differentiation of subpallial organoids [ 22 , 67 ], and one article described the use of the RA receptor agonist SR11237 for striatal organoids [ 47 ].

The use of molecules for neural tissue proliferation was similar for dorsal forebrain organoids (FGF2 (21/50) and epidermal growth factor (EGF) (21/50)) and ventral forebrain organoids (FGF2 (2/8) and EGF (2/8)). Once regional identity was achieved, dorsal and ventral forebrain organoid media were supplemented with growth factors to support neuronal differentiation and maturation. These included brain-derived neurotrophic factor (BDNF), glial cell-derived neurotrophic factor (GDNF), cyclic AMP (cAMP), neurotrophin 3 (NT3), and ascorbic acid (AA). Additionally, the Notch signalling pathway (gamma-secretase) inhibitor DAPT (1/8) was used for differentiation and maturation of ventral forebrain organoids.

To generate oligodendrocyte-containing dorsal forebrain organoids, the additional use of thyroid hormone 3 (T3) (2/50), insulin-like growth factor (IGF) (2/50), platelet-derived growth factor AA (PDGF-AA) (2/50), biotin (1/50), hepatocyte growth factor (HGF) (1/50), the cytochrome-P450 inhibitor ketoconazole (1/50), the EBP inhibitor clemastine (1/50), and the PERK inhibitor GSK2656157 (1/50) was described during differentiation and maturation [ 73 , 81 ]. The small molecules ketoconazole, clemastine, and GSK2656157, along with T3 and PDGF-AA, were used to stimulate myelination and oligodendrocyte maturation [ 73 ].

A few publications on cerebral organoids (12/72) reported the use of patterning factors. Of these, most reported the use of dual SMAD inhibition alone (5/12), whereas some used it in combination with Wnt inhibitors (3/12) or with Wnt and Shh inhibitors (1/12). Single SMAD inhibition, targeting the TGFβ pathway, in combination with Wnt inhibition was described once. One article reported the use of Wnt inhibitors alone and one the use of Shh inhibition alone, both without SMAD inhibition. Lastly, one article reported ventralisation using only Wnt inhibitors and Shh activators, without SMAD inhibition. Regarding cerebral organoid maturation, some protocols described the (continued) use of TGFβ (1/72), IWR-1e (1/72), cyclopamine (1/72), BDNF (7/72), GDNF (2/72), cAMP (2/72), and AA (1/72). One article described the use of an SHH-producing regionaliser made from modified iPSCs [ 19 ] to induce a ventral identity.

Next, we categorised whether the molecules used were of synthetic or natural origin. For SMAD inhibition, the predominant choice to block the TGFβ signalling pathway in all three models was the synthetic molecule SB-431542/3 (65/68 total articles) or A83-01 (10/68 total articles). In contrast, the choice of BMP inhibitor differed between cerebral, dorsal, and ventral protocols. In dorsal forebrain protocols, dorsomorphin was the preferred choice (25/36), followed by the synthetic molecule LDN-193189 (9/36) and NOGGIN (2/36). Conversely, in ventral and cerebral organoid protocols, LDN-193189 (5/8 and 7/10, respectively) was preferred over dorsomorphin (3/8 and 3/10, respectively). Activation of the BMP pathway was only described in one dorsal forebrain article, using BMP4.

The use of Wnt inhibitors also differed across organoid identities, with dorsal forebrain protocols describing the use of the synthetic inhibitors IWR-1(endo) (17/26), XAV939 (6/26), and IWP-2 (1/26), or the natural inhibitor DKK1 (2/26). Similarly, cerebral organoid protocols solely mentioned the synthetic inhibitors IWR-1e (3/7), XAV939 (3/7), and IWP-2 (1/7). In contrast, protocols for ventral identities reported the use of XAV939 (3/7) and IWP-2 (4/7) only. Wnt activation was only described in cerebral and dorsal forebrain organoids and was achieved using the synthetic molecule CHIR99021 (2/3 and 5/7, respectively) or the naturally occurring WNT-3A (1/3 and 2/7, respectively). Lastly, Shh inhibition for dorsalisation was exclusively achieved using cyclopamine, in cerebral and dorsal forebrain organoid models (2/2 and 3/3, respectively). One cerebral organoid and one dorsal forebrain article described smoothened agonist (SAG) treatment for Shh activation. For ventral forebrain organoids, Shh activation was primarily achieved using SAG (4/8), followed by recombinant SHH (3/8) or the synthetic agonist purmorphamine (3/8).

Notably, when analysing the dorsal and ventral forebrain organoid protocols, we observed clear differences in the concentrations of the same molecule and in the number of days it was administered to obtain similar effects in terms of pathway inhibition or activation.

Categorisation of present cell types and their reported time points

We then set out to investigate the cellular composition of the different forebrain organoid models. From the articles grouped by identity, we extracted the reported cell types and the first time points at which each was reported (Table 5). As the dorsal forebrain is the most prominent brain region present in cerebral organoids, a large overlap in neuronal subtypes was observed between cerebral and dorsal forebrain models. Given their unguided nature, cerebral organoids have also been described to contain ventral forebrain, midbrain, and choroid plexus (cuboidal epithelium) identities, as well as microglia. Dorsal and ventral forebrain organoids contain only region-specific cell types; one exception is the report of MGE precursors in dorsal forebrain organoids [ 69 ]. Neural precursor cells were reported at similar time points in cerebral and dorsal forebrain organoids (median 30 and 35 days, respectively), and earlier in ventral forebrain organoids (median 20 days). Neurons were reported in cerebral organoids as early as seven days of culture [ 95 ], and mature neurons at 28 days. In dorsal forebrain organoids, neurons were reported from 14 days onwards, although one article described the presence of mature neurons after twelve days [ 96 ]. In ventral forebrain organoids, neurons were reported after 25 days. First reports of astrocytes started at 30, 76, and 80 days in cerebral, dorsal forebrain, and ventral forebrain organoids, respectively. Mature astrocytes were reported in dorsal forebrain organoids from 120 days onwards and have not been reported in the other two models. Oligodendrocytes were reported in 28-day-old cerebral organoids and mature oligodendrocytes after 39 days. In dorsal and ventral forebrain organoids, oligodendrocytes have been reported at 98 and 80 days, respectively.

Markers used for characterisation of different cell types

To provide an overview of the most used cell markers, we summarised the markers reported for cell type characterisation (Fig.  6 ). Markers used to determine organoid identity were grouped under regional identity. FOXG1 was the most commonly used marker to visualise forebrain identity, both dorsal and ventral. Dorsal forebrain identity was specified by staining for PAX6, LHX2, and EMX1/2. Ventral identities were mostly visualised by NKX2.1, LHX6, and DLX2 expression, which also marked the presence of hypothalamic, GE, and MGE precursors, as well as GABAergic inhibitory interneurons and their precursors (Fig.  6 ). OTX1 and OTX2 are broadly expressed during development, both in ectoderm and mesoderm [ 97 ]. In the developing brain, they are mostly expressed in the mesencephalon and in the forebrain, midbrain, and hindbrain regions. In brain organoids, OTX1 and OTX2 were used to determine forebrain, dorsal forebrain, and ventral forebrain identities.

Fig. 6

Cell types and their reported markers. Individual markers are depicted in relation to the cell type that they were reported to characterise. The ‘Regional identity’ column depicts markers not often used for specific cell type characterisation, but more generally used to determine the identity of the organoid model. Strong marker overlap is evident between precursor cells. NPC: neural precursor cells; RG: radial glia; oRG: outer radial glia; IPC: intermediate progenitor cells; GE: ganglionic eminence; MGE: medial ganglionic eminence; IN: interneuron; DA: dopaminergic; ChP: choroid plexus

For visualising neural precursors (NPCs and RG), SOX2 and Nestin were the most used markers. Dorsal forebrain progenitors were mostly visualised with PAX6, GFAP, and phosphorylated vimentin (pVimentin), whereas ventral forebrain precursors can be characterised by NKX2.1. oRG, a cortex-specific population localised in the oSVZ, was usually visualised using HOPX, and intermediate progenitor cells (IPCs) with TBR2. The markers NEUN and bTUB3 were the most used to visualise neurons in general, while DCX and NeuroD1 were used to distinguish immature neurons from MAP2-positive mature neurons. Cortical neurons were distinguished using CTIP2 and TBR1 for deep-layer neurons (layers V–VI) and SATB2 and BRN2 for superficial-layer neurons (layers II–IV). Cajal-Retzius cells, located in the marginal zone above the cortical plate, were shown specifically using REELIN. Ventral interneuron subtypes were visualised and distinguished by their subtype-specific markers parvalbumin (PV), calretinin, calbindin, or somatostatin (SST). Glutamatergic excitatory neurons were shown by staining for vGLUT1 (SLC17A7), and GABAergic inhibitory (inter)neurons by GABA, GAD65 and GAD67, and vGAT (SLC32A1). For characterisation of astrocytes, the markers GFAP and S100b were the most used. Oligodendrocytes were reported using MBP and O4, and their progenitors with OLIG2 and SOX10.

Marker use followed a general cell type-specific pattern, with dedicated markers for precursors, neurons, astrocytes, and oligodendrocytes. However, the IF markers used were rarely exclusive to one cell type and could often visualise multiple cell types; pVimentin, for example, is regularly used to characterise NPCs, yet this marker is also expressed in non-neuronal fibroblasts. It is therefore important to keep the aim of the study in mind when selecting markers: whether it is only necessary to indicate neuronal presence (MAP2, bTUB3, NEUN) or to identify a specific neuronal subtype (e.g. neuroblasts or Cajal-Retzius cells). The overlap in markers was especially evident among precursor cell types: NPCs, RG, neural stem cells, and neuroepithelial cells were all reported by staining against PAX6, SOX2, and Nestin.

Discussion

This review aimed to systematically summarise brain organoid model applications, usage, and cell composition. We found that most articles using brain organoids fall under neurodevelopmental studies, protocol optimisation studies, or immunology and infection studies. However, brain organoid technology has been used for many other applications, including gene therapy [ 98 , 99 ], psychiatric disorders [ 100 – 108 ], transplantation studies [ 109 – 112 ], cancer studies [ 113 – 122 ], and brain therapeutic studies [ 30 – 32 , 123 – 129 ]. This illustrates their broad applicability in neuroscience research.

Within the neurodevelopmental field, cerebral and dorsal forebrain organoids are the most used models. As cerebral and dorsal forebrain organoids were the first models to be reported, researchers may simply be most acquainted with them. Organoids have been cultured for considerably long periods (> 100 days), resulting in models that exhibit diverse and mature cell types and cell–cell interactions (e.g. myelination), extensive cellular organisation, layer formation, and mature gene expression profiles. Nevertheless, even with long culture times, brain organoids overall display a foetal to early postnatal phenotype [ 130 , 131 ]. Transcriptome analysis of brain organoids and primary material demonstrates increased metabolic stress in brain organoids, which is proposed to contribute to impaired molecular subtype specification of individual cell types [ 132 ]. Currently, brain organoids are therefore of limited use for modelling adult stages, as opposed to the early gestational period. Nevertheless, brain organoids have been extensively applied to study human neurodegenerative disorders [ 133 – 135 ].

The culture durations and maturation stages of the organoids are intimately related to their cellular composition; however, in this review we did not analyse this relationship in detail. Relating cell type occurrence to the median culture durations of the three models, we can conclude that cerebral organoids are generally cultured up to the point at which most cell types are present (35–60 days). Dorsal forebrain organoids have a median culture duration of 84 days, at which stage they contain astrocytes and potentially (immature) oligodendrocytes. Ventral forebrain organoids are cultured for rather long periods (median 125 days), at which stage all cell types are present. Even though reports on long-term cultures provide an estimate of how long organoids can be kept in culture, it would be interesting to determine their long-term viability. However, most studies included in our analysis do not state why cultures were terminated or why culture times were not extended, so this information could not be extracted.

Regarding the assessment of neuronal function, studies using different methods report different outcomes with respect to the maturation of neuronal networks. As assays and sample preparation methods differ across studies, even within the same model, it is difficult to draw general conclusions. Notably, most studies included in our analysis used functional assays as a proof of principle for the presence of functional neurons. Further in-depth studies using different assays to characterise the electrophysiological properties of organoid models over time would be crucial to precisely determine the emergence and maturation of active, and thus functional, neuronal networks. Nonetheless, we found that neuronal activity is present in all models evaluated and that most studies report maturation of neuronal networks over time, indicating that organoid maturation is accompanied by increased connectivity.

With regard to the application of small molecules for organoid regionalisation, it would be important to further examine their individual and combined efficacies and to standardise their use. Even though molecules may be functionally interchangeable, differences in half-life and stability may exert different influences on seemingly similar protocols, especially considering that the same molecule is described in different protocols at different concentrations and/or over different spans of time. Furthermore, the morphogen pathways manipulated at the initial stages of organoid differentiation are crucial for determining regional identity; changes at this stage may lead to different positional identities within the same brain region. This is particularly relevant for dorsal forebrain organoids, where some protocols use only dual SMAD inhibition and others use additional dorsalisation and anteriorisation cues, such as Wnt and Shh inhibition. Further studies elaborating on the efficiency and efficacy of different combinations of synthetic and natural molecules, their concentrations, and their application periods would be highly relevant for the standardisation of brain organoid models.

Overall, we provided a summary of the principles and critical steps used to generate cerebral, dorsal, and ventral forebrain organoids. In performing this analysis, we encountered several inconsistencies that may account for overall differences across protocols. For instance, across the three models we observed large variation in initial EB size (seeding density). Another key determinant of organoid differentiation is the use of ECM support. This varies mostly in cerebral organoids, where specimens are either embedded in ECM droplets, supplied with liquid ECM in the medium, or cultured without ECM. Each of these approaches has advantages and disadvantages: droplet ECM provides an immediate physical matrix environment for the organoid to expand into, whereas liquid ECM added to the medium does not. It is important to mention that Matrigel- and Geltrex-based ECM show large inter-batch variability, which impacts protocol outcomes [ 136 ]; approaches lacking ECM support are not subject to such variability. The use of rotational culture is advantageous as it increases the diffusion of nutrients and oxygen to the core of the organoid [ 62 , 137 , 138 ]. Despite this, organoids are still reported to develop a necrotic core after they reach a certain size [ 37 ]. Additionally, rotational culture requires specialised equipment, which may limit its use in some laboratories. In summary, each model and approach has its own advantages and disadvantages (Table 6 ) that should be considered when choosing the appropriate model.

Considering the reported first occurrences of the individual cell types, all cell types should be present after 80 days of culture in cerebral and dorsal forebrain organoids. For ventral forebrain organoids, it is difficult to draw such a conclusion, as fewer articles reported on this model. In our final analysis, we examined the IF markers used to characterise each cell type in cerebral, dorsal, and ventral forebrain organoids, with the intent of providing an easy-to-use template for researchers to select markers. Given the lack of standardisation in the terminology used to report cell types, comparison between articles was in some cases difficult. As an example, several reports refer to “dorsal telencephalon precursors” and “cortical progenitors” using identical IF marker characterisation. As such, it would be helpful to clarify and standardise how each cell type is referred to, as well as which markers are used for its characterisation.

Although this is the first extensive report on the practical aspects of human brain organoid culture and reporting, this review has some limitations. In addition to the PubMed and Ovid Embase database searches, a few articles had to be added later via cross-referencing, indicating a possible underrepresentation of articles in the searches. We focussed on a specific category of studies, lowering the number of included articles and possibly limiting the downstream analysis. We did not aim to perform a quality assessment of each protocol regarding its validity, as each protocol can have its own advantages and limitations in the context of its use and application; this review's aim was to report and summarise what is currently described. Per organoid model, we described the culture ranges in days as well as the reported cell types. Our analysis did not allow us to determine important aspects of organoid cultures, such as the presence of non-neuronal cell types or the long-term viability of the models. These aspects would be important for a critical assessment of the protocols but unfortunately often go unreported. Lastly, IF was chosen as the reference for characterisation because it is a generally accessible technique used by most laboratories for tissue characterisation; however, not every article included in our analysis applied this technique.

Future expansion and optimisation of brain organoid technology can be expected in the coming years to address several outstanding limitations in the field, including the integration of vascularisation [ 42 , 139 , 140 ] and of additional cell types such as microglia [ 40 , 84 , 141 ]. Additionally, we can expect a continued focus on tissue organisation to increasingly mimic brain development, through the elegant use of assembloids as described by Birey et al. [ 22 ] and Xiang et al. [ 69 ], or through the integration of regional organisers as shown by Cederquist and colleagues [ 19 ]. Lastly, an emphasis on extending the phenotypic state of brain organoids from foetal to more mature phenotypes [ 131 ] may also be expected. As the field progresses to address these and other topics, attention should be given to improving intra-model standardisation and to standardising the nomenclature used for cell types.

Conclusions

The dynamic development of new approaches and the optimisation of protocols to generate brain organoids have produced a large body of articles and information. In this review, we provide a systematic overview of the culture durations, functional activity assays, key protocol aspects, small molecule and growth factor applications, cell type compositions, and IF marker usage of the most used models, to serve as a practical guide for researchers in the field of human brain organoid research.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

3D: Three-dimensional
AA: Ascorbic acid
BDNF: Brain-derived neurotrophic factor
BMP: Bone morphogenetic protein
EB: Embryoid body
ECM: Extracellular matrix
EGF: Epidermal growth factor
FGF2: Basic fibroblast growth factor
GDNF: Glial cell-derived neurotrophic factor
GE: Ganglionic eminence
HGF: Hepatocyte growth factor
hiPSC: Human-induced pluripotent stem cells
IF: Immunofluorescence
IGF: Insulin-like growth factor
IPC: Intermediate progenitor cells
NPC: Neural precursor cells
NT3: Neurotrophin 3
PDGF-AA: Platelet-derived growth factor AA
PV: Parvalbumin
pVimentin: Phosphorylated vimentin
(o)RG: (Outer) radial glia
RA: Retinoic acid
SAG: Smoothened agonist
SHH: Sonic hedgehog
SMAD: Suppressor of Mothers against Decapentaplegic
SST: Somatostatin
(i/o)SVZ: (Inner/outer) subventricular zone
T3: Thyroid hormone 3
TGFβ: Transforming growth factor beta
Wnt: Wingless/integrated

Clancy B, Finlay BL, Darlington RB, Anand KJ. Extrapolating brain development from experimental species to humans. Neurotoxicology. 2007;28(5):931–7.

Mellert DJ, Williamson WR, Shirangi TR, Card GM, Truman JW. Genetic and environmental control of neurodevelopmental robustness in drosophila. PLoS ONE. 2016;11(5):e0155957.

Hansen DV, Lui JH, Parker PR, Kriegstein AR. Neurogenic radial glia in the outer subventricular zone of human neocortex. Nature. 2010;464(7288):554–61.

Lui JH, Hansen DV, Kriegstein AR. Development and evolution of the human neocortex. Cell. 2011;146(1):18–36.

Ostrem BE, Lui JH, Gertz CC, Kriegstein AR. Control of outer radial glial stem cell mitosis in the human brain. Cell Rep. 2014;8(3):656–64.

Rice D, Barone S Jr. Critical periods of vulnerability for the developing nervous system: evidence from humans and animal models. Environ Health Perspect. 2000;108(Suppl 3):511–33.

Reillo I, de Juan RC, Garcia-Cabezas MA, Borrell V. A role for intermediate radial glia in the tangential expansion of the mammalian cerebral cortex. Cereb Cortex. 2011;21(7):1674–94.

Hevner RF, Haydar TF. The (not necessarily) convoluted role of basal radial glia in cortical neurogenesis. Cereb Cortex. 2012;22(2):465–8.

Rogers J, Kochunov P, Zilles K, Shelledy W, Lancaster J, Thompson P, et al. On the genetic architecture of cortical folding and brain volume in primates. Neuroimage. 2010;53(3):1103–8.

Semple BD, Blomgren K, Gimlin K, Ferriero DM, Noble-Haeusslein LJ. Brain development in rodents and humans: identifying benchmarks of maturation and vulnerability to injury across species. Prog Neurobiol. 2013;106–107:1–16.

Rakic P. Mode of cell migration to the superficial layers of fetal monkey neocortex. J Comp Neurol. 1972;145(1):61–83.

Tao Y, Zhang SC. Neural subtype specification from human pluripotent stem cells. Cell Stem Cell. 2016;19(5):573–86.

Zhao X, Bhattacharyya A. Human models are needed for studying human neurodevelopmental disorders. Am J Hum Genet. 2018;103(6):829–57.

Trujillo CA, Muotri AR. Brain organoids and the study of neurodevelopment. Trends Mol Med. 2018;24(12):982–90.

Qian X, Song H, Ming GL. Brain organoids: advances, applications and challenges. Development. 2019;146(8):21.

Marton RM, Pasca SP. Organoid and assembloid technologies for investigating cellular crosstalk in human brain development and disease. Trends Cell Biol. 2020;30(2):133–43.

Sakaguchi H, Kadoshima T, Soen M, Narii N, Ishida Y, Ohgushi M, et al. Generation of functional hippocampal neurons from self-organizing human embryonic stem cell-derived dorsomedial telencephalic tissue. Nat Commun. 2015;6:8896.

Jo J, Xiao Y, Sun AX, Cukuroglu E, Tran HD, Goke J, et al. Midbrain-like organoids from human pluripotent stem cells contain functional dopaminergic and neuromelanin-producing neurons. Cell Stem Cell. 2016;19(2):248–57.

Cederquist GY, Asciolla JJ, Tchieu J, Walsh RM, Cornacchia D, Resh MD, et al. Specification of positional identity in forebrain organoids. Nat Biotechnol. 2019;37(4):436–44.

Camp JG, Badsha F, Florio M, Kanton S, Gerber T, Wilsch-Brauninger M, et al. Human cerebral organoids recapitulate gene expression programs of fetal neocortex development. Proc Natl Acad Sci USA. 2015;112(51):15672–7.

Luo C, Lancaster MA, Castanon R, Nery JR, Knoblich JA, Ecker JR. Cerebral organoids recapitulate epigenomic signatures of the human fetal brain. Cell Rep. 2016;17(12):3369–84.

Birey F, Andersen J, Makinson CD, Islam S, Wei W, Huber N, et al. Assembly of functionally integrated human forebrain spheroids. Nature. 2017;545(7652):54–9.

Kanton S, Boyle MJ, He Z, Santel M, Weigert A, Sanchis-Calleja F, et al. Organoid single-cell genomic atlas uncovers human-specific features of brain development. Nature. 2019;574(7778):418–22.

Pollen AA, Bhaduri A, Andrews MG, Nowakowski TJ, Meyerson OS, Mostajo-Radji MA, et al. Establishing cerebral organoids as models of human-specific brain evolution. Cell. 2019;176(4):743-56.e17.

Baldassari S, Musante I, Iacomino M, Zara F, Salpietro V, Scudieri P. Brain organoids as model systems for genetic neurodevelopmental disorders. Front Cell Dev Biol. 2020;8:590119.

Wang YW, Hu N, Li XH. Genetic and epigenetic regulation of brain organoids. Front Cell Dev Biol. 2022;10:948818.

Ao Z, Cai H, Havert DJ, Wu Z, Gong Z, Beggs JM, et al. One-stop microfluidic assembly of human brain organoids to model prenatal cannabis exposure. Anal Chem. 2020;92(6):4630–8.

Dang J, Tiwari SK, Agrawal K, Hui H, Qin Y, Rana TM. Glial cell diversity and methamphetamine-induced neuroinflammation in human cerebral organoids. Mol Psychiatry. 2021;26(4):1194–207.

Fan P, Wang Y, Xu M, Han X, Liu Y. The application of brain organoids in assessing neural toxicity. Front Mol Neurosci. 2022;15:799397.

Dakic V, Minardi Nascimento J, Costa Sartore R, Maciel RM, de Araujo DB, Ribeiro S, et al. Short term changes in the proteome of human cerebral organoids induced by 5-MeO-DMT. Sci Rep. 2017;7(1):12863.

Shakhbazau A, Danilkovich N, Seviaryn I, Ermilova T, Kosmacheva S. Effects of minocycline and rapamycin in gamma-irradiated human embryonic stem cells-derived cerebral organoids. Mol Biol Rep. 2019;46(1):1343–8.

Zheng X, Zhang L, Kuang Y, Venkataramani V, Jin F, Hein K, et al. Extracellular vesicles derived from neural progenitor cells–a preclinical evaluation for stroke treatment in mice. Transl Stroke Res. 2021;12(1):185–203.

Depla JA, Mulder LA, de Sá RV, Wartel M, Sridhar A, Evers MM, et al. Human brain organoids as models for central nervous system viral infection. Viruses. 2022;14(3):634.

Pellegrini L, Albecka A, Mallery DL, Kellner MJ, Paul D, Carter AP, et al. SARS-CoV-2 infects the brain choroid plexus and disrupts the blood-csf barrier in human brain organoids. Cell Stem Cell. 2020;27(6):951–61 e5.

Ramani A, Pranty AI, Gopalakrishnan J. Neurotropic effects of SARS-CoV-2 modeled by the human brain organoids. Stem Cell Rep. 2021;16(3):373–84.

Han Y, Yang L, Lacko LA, Chen S. Human organoid models to study SARS-CoV-2 infection. Nat Methods. 2022;19(4):418–28.

Lancaster MA, Renner M, Martin CA, Wenzel D, Bicknell LS, Hurles ME, et al. Cerebral organoids model human brain development and microcephaly. Nature. 2013;501(7467):373–9.

Kadoshima T, Sakaguchi H, Nakano T, Soen M, Ando S, Eiraku M, et al. Self-organization of axial polarity, inside-out layer pattern, and species-specific progenitor dynamics in human ES cell-derived neocortex. Proc Natl Acad Sci USA. 2013;110(50):20284–9.

Pasca AM, Sloan SA, Clarke LE, Tian Y, Makinson CD, Huber N, et al. Functional cortical neurons and astrocytes from human pluripotent stem cells in 3D culture. Nat Methods. 2015;12(7):671–8.

Abud EM, Ramirez RN, Martinez ES, Healy LM, Nguyen CHH, Newman SA, et al. iPSC-derived human microglia-like cells to study neurological diseases. Neuron. 2017;94(2):278–93 e9.

Heide M, Huttner WB, Mora-Bermudez F. Brain organoids as models to study human neocortex development and evolution. Curr Opin Cell Biol. 2018;55:8–16.

Cakir B, Xiang Y, Tanaka Y, Kural MH, Parent M, Kang YJ, et al. Engineering of human brain organoids with a functional vascular-like system. Nat Methods. 2019;16(11):1169–75.

Chambers SM, Fasano CA, Papapetrou EP, Tomishima M, Sadelain M, Studer L. Highly efficient neural conversion of human ES and iPS cells by dual inhibition of SMAD signaling. Nat Biotechnol. 2009;27(3):275–80.

Ozone C, Suga H, Eiraku M, Kadoshima T, Yonemura S, Takata N, et al. Functional anterior pituitary generated in self-organizing culture of human embryonic stem cells. Nat Commun. 2016;7:10351.

Bagley JA, Reumann D, Bian S, Levi-Strauss J, Knoblich JA. Fused cerebral organoids model interactions between brain regions. Nat Methods. 2017;14(7):743–51.

Andersen J, Revah O, Miura Y, Thom N, Amin ND, Kelley KW, et al. Generation of functional human 3D cortico-motor assembloids. Cell. 2020;183(7):1913–29.e26.

Miura Y, Li MY, Birey F, Ikeda K, Revah O, Thete MV, et al. Generation of human striatal organoids and cortico-striatal assembloids from human pluripotent stem cells. Nat Biotechnol. 2020;38(12):1421–30.

Seto Y, Eiraku M. Human brain development and its in vitro recapitulation. Neurosci Res. 2019;138:33–42.

Yakoub AM, Sadek M. Analysis of synapses in cerebral organoids. Cell Transpl. 2019;28(9–10):1173–82.

Benito-Kwiecinski S, Lancaster MA. Brain organoids: human neurodevelopment in a dish. Cold Spring Harb Perspect Biol. 2020;12(8).

Jeong HJ, Jimenez Z, Mukhambetiyar K, Seo M, Choi JW, Park TE. Engineering human brain organoids: from basic research to tissue regeneration. Tissue Eng Regener Med. 2020;17(6):747–57.

Xiang Y, Cakir B, Park IH. Deconstructing and reconstructing the human brain with regionally specified brain organoids. Semin Cell Dev Biol. 2021;111:40–51.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1.

Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan–a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):1–10.

Martinez-Cerdeno V, Noctor SC. Neural progenitor cell terminology. Front Neuroanat. 2018;12:104.

Arai Y, Taverna E. Neural progenitor cell polarity and cortical development. Front Cell Neurosci. 2017;11:384.

Tieng V, Stoppini L, Villy S, Fathi M, Dubois-Dauphin M, Krause KH. Engineering of midbrain organoids containing long-lived dopaminergic neurons. Stem Cells Dev. 2014;23(13):1535–47.

Hor JH, Soh ES, Tan LY, Lim VJW, Santosa MM, Winanto, et al. Cell cycle inhibitors protect motor neurons in an organoid model of spinal muscular atrophy. Cell Death Dis. 2018;9(11):1100.

McMurtrey RJ. Analytic models of oxygen and nutrient diffusion, metabolism dynamics, and architecture optimization in three-dimensional tissue constructs with applications and insights in cerebral organoids. Tissue Eng Part C Methods. 2016;22(3):221–49.

Rakotoson I, Delhomme B, Djian P, Deeg A, Brunstein M, Seebacher C, et al. Fast 3-D imaging of brain organoids with a new single-objective planar-illumination two-photon microscope. Front Neuroanat. 2019;13:77.

Skardal A, Aleman J, Forsythe S, Rajan S, Murphy S, Devarasetty M, et al. Drug compound screening in single and integrated multi-organoid body-on-a-chip systems. Biofabrication. 2020;12(2):025017.

Lancaster MA, Knoblich JA. Generation of cerebral organoids from human pluripotent stem cells. Nat Protoc. 2014;9(10):2329–40.

Eura N, Matsui TK, Luginbuhl J, Matsubayashi M, Nanaura H, Shiota T, et al. Brainstem organoids from human pluripotent stem cells. Front Neurosci. 2020;14:538.

Qian X, Nguyen HN, Song MM, Hadiono C, Ogden SC, Hammack C, et al. Brain-region-specific organoids using mini-bioreactors for modeling ZIKV exposure. Cell. 2016;165(5):1238–54.

Zafeiriou MP, Bao G, Hudson J, Halder R, Blenkle A, Schreiber MK, et al. Developmental GABA polarity switch and neuronal plasticity in bioengineered neuronal organoids. Nat Commun. 2020;11(1):3791.

Quadrato G, Nguyen T, Macosko EZ, Sherwood JL, Min Yang S, Berger DR, et al. Cell diversity and network dynamics in photosensitive human brain organoids. Nature. 2017;545(7652):48–53.

Trevino AE, Sinnott-Armstrong N, Andersen J, Yoon SJ, Huber N, Pritchard JK, et al. Chromatin accessibility dynamics in a model of human forebrain development. Science. 2020;367(6476):215.

Kasai T, Suga H, Sakakibara M, Ozone C, Matsumoto R, Kano M, et al. Hypothalamic contribution to pituitary functions is recapitulated in vitro using 3D-cultured human iPS cells. Cell Rep. 2020;30(1):18-24.e5.

Xiang Y, Tanaka Y, Patterson B, Kang YJ, Govindaiah G, Roselaar N, et al. Fusion of regionally specified hpsc-derived organoids models human brain development and interneuron migration. Cell Stem Cell. 2017;21(3):383–98 e7.

Pellegrini L, Bonfio C, Chadwick J, Begum F, Skehel M, Lancaster MA. Human CNS barrier-forming organoids with cerebrospinal fluid production. Science. 2020;369(6500).

Hua TT, Bejoy J, Song L, Wang Z, Zeng Z, Zhou Y, et al. Cerebellar differentiation from human stem cells through retinoid, wnt, and sonic hedgehog pathways. Tissue Eng Part A. 2021;27(13–14):881–93.

Matsui TK, Matsubayashi M, Sakaguchi YM, Hayashi RK, Zheng C, Sugie K, et al. Six-month cultured cerebral organoids from human ES cells contain matured neural cells. Neurosci Lett. 2018;670:75–82.

Madhavan M, Nevin ZS, Shick HE, Garrison E, Clarkson-Paredes C, Karl M, et al. Induction of myelinating oligodendrocytes in human cortical spheroids. Nat Methods. 2018;15(9):700–6.

Trujillo CA, Gao R, Negraes PD, Gu J, Buchanan J, Preissl S, et al. Complex oscillatory waves emerging from cortical organoids model early human brain network development. Cell Stem Cell. 2019;25(4):558–69 e7.

Sakaguchi H, Ozaki Y, Ashida T, Matsubara T, Oishi N, Kihara S, et al. Self-organized synchronous calcium transients in a cultured human neural network derived from cerebral organoids. Stem Cell Rep. 2019;13(3):458–73.

Bershteyn M, Nowakowski TJ, Pollen AA, Di Lullo E, Nene A, Wynshaw-Boris A, et al. Human iPSC-derived cerebral organoids model cellular features of lissencephaly and reveal prolonged mitosis of outer radial glia. Cell Stem Cell. 2017;20(4):435–49 e4.

Renner M, Lancaster MA, Bian S, Choi H, Ku T, Peer A, et al. Self-organized developmental patterning and differentiation in cerebral organoids. EMBO J. 2017;36(10):1316–29.

Velasco S, Kedaigle AJ, Simmons SK, Nash A, Rocha M, Quadrato G, et al. Individual brain organoids reproducibly form cell diversity of the human cerebral cortex. Nature. 2019;570(7762):523–7.

Sloan SA, Darmanis S, Huber N, Khan TA, Birey F, Caneda C, et al. Human astrocyte maturation captured in 3D cerebral cortical spheroids derived from pluripotent stem cells. Neuron. 2017;95(4):779–90 e6.

Blair JD, Hockemeyer D, Bateup HS. Genetically engineered human cortical spheroid models of tuberous sclerosis. Nat Med. 2018;24(10):1568–78.

Marton RM, Miura Y, Sloan SA, Li Q, Revah O, Levy RJ, et al. Differentiation and maturation of oligodendrocytes in human three-dimensional neural cultures. Nat Neurosci. 2019;22(3):484–91.

Kirihara T, Luo Z, Chow SYA, Misawa R, Kawada J, Shibata S, et al. A human induced pluripotent stem cell-derived tissue model of a cerebral tract connecting two cortical regions. iScience. 2019;14:301–11.

Logan S, Arzua T, Yan Y, Jiang C, Liu X, Yu LK, et al. Dynamic characterization of structural, molecular, and electrophysiological phenotypes of human-induced pluripotent stem cell-derived cerebral organoids, and comparison with fetal and adult gene profiles. Cells. 2020;9(5):1301.

Ormel PR, Vieira de Sa R, van Bodegraven EJ, Karst H, Harschnitz O, Sneeboer MAM, et al. Microglia innately develop within cerebral organoids. Nat Commun. 2018;9(1):4167.

Fair SR, Julian D, Hartlaub AM, Pusuluri ST, Malik G, Summerfied TL, et al. Electrophysiological maturation of cerebral organoids correlates with dynamic morphological and cellular development. Stem Cell Rep. 2020;15(4):855–68.

Li R, Sun L, Fang A, Li P, Wu Q, Wang X. Recapitulating cortical development with organoid culture in vitro and modeling abnormal spindle-like (ASPM Related Primary) microcephaly disease. Protein Cell. 2017;8(11):823–33.

Sloan SA, Andersen J, Pasca AM, Birey F, Pasca SP. Generation and assembly of human brain region-specific three-dimensional cultures. Nat Protoc. 2018;13(9):2062–85.

Sun AX, Yuan Q, Fukuda M, Yu W, Yan H, Lim GGY, et al. Potassium channel dysfunction in human neuronal models of angelman syndrome. Science. 2019;366(6472):1486–92.

Qian X, Su Y, Adam CD, Deutschmann AU, Pather SR, Goldberg EM, et al. Sliced human cortical organoids for modeling distinct cortical layer formation. Cell Stem Cell. 2020;26(5):766–81 e9.

Xiang Y, Tanaka Y, Cakir B, Patterson B, Kim KY, Sun P, et al. hESC-derived thalamic organoids form reciprocal projections when fused with cortical organoids. Cell Stem Cell. 2019;24(3):487–97 e7.

Qian X, Jacob F, Song MM, Nguyen HN, Song H, Ming GL. Generation of human brain region-specific organoids using a miniaturized spinning bioreactor. Nat Protoc. 2018;13(3):565–80.

Coulter ME, Dorobantu CM, Lodewijk GA, Delalande F, Cianferani S, Ganesh VS, et al. The ESCRT-III protein CHMP1A mediates secretion of sonic hedgehog on a distinctive subtype of extracellular vesicles. Cell Rep. 2018;24(4):973–86 e8.

Meyers EA, Kessler JA. TGF-beta family signaling in neural and neuronal differentiation, development, and function. Cold Spring Harb Perspect Biol. 2017;9(8).

Kang HJ, Kawasawa YI, Cheng F, Zhu Y, Xu X, Li M, et al. Spatio-temporal transcriptome of the human brain. Nature. 2011;478(7370):483–9.

Sen D, Voulgaropoulos A, Drobna Z, Keung AJ. Human cerebral organoids reveal early spatiotemporal dynamics and pharmacological responses of UBE3A. Stem Cell Reports. 2020;15(4):845–54.

Akamine S, Okuzono S, Yamamoto H, Setoyama D, Sagata N, Ohgidani M, et al. GNAO1 organizes the cytoskeletal remodeling and firing of developing neurons. FASEB J. 2020;34(12):16601–21.

Simeone A. Otx1 and Otx2 in the development and evolution of the mammalian brain. EMBO J. 1998;17(23):6790–8.

Depla JA, Sogorb-Gonzalez M, Mulder LA, Heine VM, Konstantinova P, van Deventer SJ, et al. Cerebral organoids: a human model for AAV capsid selection and therapeutic transgene efficacy in the brain. Mol Ther Methods Clin Dev. 2020;18:167–75.

Latour YL, Yoon R, Thomas SE, Grant C, Li C, Sena-Esteves M, et al. Human GLB1 knockout cerebral organoids: a model system for testing AAV9-mediated GLB1 gene therapy for reducing GM1 ganglioside storage in GM1 gangliosidosis. Mol Genet Metabolism Rep. 2019;21:100513.

Kathuria A, Lopez-Lengowski K, Jagtap SS, McPhie D, Perlis RH, Cohen BM, et al. Transcriptomic landscape and functional characterization of induced pluripotent stem cell-derived cerebral organoids in Schizophrenia. JAMA Psychiat. 2020;77(7):745–54.

Kathuria A, Lopez-Lengowski K, Vater M, McPhie D, Cohen BM, Karmacharya R. Transcriptome analysis and functional characterization of cerebral organoids in bipolar disorder. Genome Med. 2020;12(1):34.

Khan TA, Revah O, Gordon A, Yoon SJ, Krawisz AK, Goold C, et al. Neuronal defects in a human cellular model of 22q11.2 deletion syndrome. Nat Med. 2020;26(12):1888–98.

Meng Q, Wang L, Dai R, Wang J, Ren Z, Liu S, et al. Integrative analyses prioritize GNL3 as a risk gene for bipolar disorder. Mol Psychiatry. 2020;25(11):2672–84.

Qin L, Tiwari AK, Zai CC, Freeman N, Zhai D, Liu F, et al. Regulation of melanocortin-4-receptor (MC4R) expression by SNP rs17066842 is dependent on glucose concentration. Eur Neuropsychopharmacol. 2020;37:39–48.

Sawada T, Chater TE, Sasagawa Y, Yoshimura M, Fujimori-Tonou N, Tanaka K, et al. Developmental excitation-inhibition imbalance underlying psychoses revealed by single-cell analyses of discordant twins-derived cerebral organoids. Mol Psychiatry. 2020;25(11):2695–711.

Stachowiak EK, Benson CA, Narla ST, Dimitri A, Chuye LEB, Dhiman S, et al. Cerebral organoids reveal early cortical maldevelopment in schizophrenia-computational anatomy and genomics, role of FGFR1. Transl Psychiatry. 2017;7(11):6.

Wang Q, Dong X, Hu T, Qu C, Lu J, Zhou Y, et al. Constitutive activity of serotonin receptor 6 regulates human cerebral organoids formation and depression-like behaviors. Stem Cell Rep. 2021;16(1):75–88.

Ye F, Kang E, Yu C, Qian X, Jacob F, Yu C, et al. DISC1 Regulates neurogenesis via modulating kinetochore attachment of Ndel1/Nde1 during Mitosis. Neuron. 2017;96(5):1041–54 e5.

Daviaud N, Friedel RH, Zou H. Vascularization and engraftment of transplanted human cerebral organoids in mouse cortex. eNeuro. 2018;5(6).

Kitahara T, Sakaguchi H, Morizane A, Kikuchi T, Miyamoto S, Takahashi J. Axonal extensions along corticospinal tracts from transplanted human cerebral organoids. Stem Cell Rep. 2020;15(2):467–81.

Wang SN, Wang Z, Xu TY, Cheng MH, Li WL, Miao CY. Cerebral organoids repair ischemic stroke brain injury. Transl Stroke Res. 2020;11(5):983–1000.

Wang Z, Wang SN, Xu TY, Hong C, Cheng MH, Zhu PX, et al. Cerebral organoids transplantation improves neurological motor function in rat brain injury. CNS Neurosci Ther. 2020;26(7):682–97.

Ballabio C, Anderle M, Gianesello M, Lago C, Miele E, Cardano M, et al. Modeling medulloblastoma in vivo and with human cerebellar organoids. Nat Commun. 2020;11(1):583.

Bian S, Repic M, Guo Z, Kavirayani A, Burkard T, Bagley JA, et al. Genetically engineered cerebral organoids model brain tumor formation. Nat Methods. 2018;15(9):748.

Choe MS, Kim JS, Yeo HC, Bae CM, Han HJ, Baek K, et al. A simple metastatic brain cancer model using human embryonic stem cell-derived cerebral organoids. FASEB J. 2020;34(12):16464–75.

Cosset E, Locatelli M, Marteyn A, Lescuyer P, Dall Antonia F, Mor FM, et al. Human neural organoids for studying brain cancer and neurodegenerative diseases. J Vis Exp. 2019;(148).

Goranci-Buzhala G, Mariappan A, Gabriel E, Ramani A, Ricci-Vitiani L, Buccarelli M, et al. Rapid and efficient invasion assay of glioblastoma in human brain organoids. Cell Rep. 2020;31(10):107738.

Hwang JW, Loisel-Duwattez J, Desterke C, Latsis T, Pagliaro S, Griscelli F, et al. A novel neuronal organoid model mimicking glioblastoma (GBM) features from induced pluripotent stem cells (iPSC). Biochim Biophys Acta. 2020;1864(4):129540.

Kim HM, Lee SH, Lim J, Yoo J, Hwang DY. The epidermal growth factor receptor variant type III mutation frequently found in gliomas induces astrogenesis in human cerebral organoids. Cell Prolif. 2021;54(2):e12965.

Krieger TG, Tirier SM, Park J, Jechow K, Eisemann T, Peterziel H, et al. Modeling glioblastoma invasion using human brain organoids and single-cell transcriptomics. Neuro Oncol. 2020;22(8):1138–49.

Ogawa J, Pao GM, Shokhirev MN, Verma IM. Glioblastoma model using human cerebral organoids. Cell Rep. 2018;23(4):1220–9.

Parisian AD, Koga T, Miki S, Johann PD, Kool M, Crawford JR, et al. SMARCB1 loss interacts with neuronal differentiation state to block maturation and impact cell stability. Genes Dev. 2020;34(19–20):1316–29.

Ghatak S, Dolatabadi N, Gao R, Wu Y, Scott H, Trudler D, et al. NitroSynapsin ameliorates hypersynchronous neural network activity in Alzheimer hiPSC models. Mol Psychiatry. 2021;26(10):5751–65.

Huang J, Liu F, Tang H, Wu H, Li L, Wu R, et al. Tranylcypromine causes neurotoxicity and represses BHC110/LSD1 in human-induced pluripotent stem cell-derived cerebral organoids model. Front Neurol. 2017;8:626.

Liu F, Huang J, Liu Z. Vincristine impairs microtubules and causes neurotoxicity in cerebral organoids. Neuroscience. 2019;404:530–40.

Tournier N, Goutal S, Mairinger S, Hernandez-Lozano I, Filip T, Sauberer M, et al. Complete inhibition of ABCB1 and ABCG2 at the blood-brain barrier by co-infusion of Erlotinib and Tariquidar to improve brain delivery of the model ABCB1/ABCG2 substrate [(11)C]Erlotinib. J Cereb Blood Flow Metab. 2021;41(7):1634–46.

Trujillo CA, Adams JW, Negraes PD, Carromeu C, Tejwani L, Acab A, et al. Pharmacological reversal of synaptic and network pathology in human MECP2-KO neurons and cortical organoids. EMBO Mol Med. 2021;13(1):e12523.

Zhao J, Ye Z, Yang J, Zhang Q, Shan W, Wang X, et al. Nanocage encapsulation improves antiepileptic efficiency of phenytoin. Biomaterials. 2020;240:119849.

Zhang I, Lepine P, Han C, Lacalle-Aurioles M, Chen CX, Haag R, et al. Nanotherapeutic modulation of human neural cells and glioblastoma in organoids and monocultures. Cells. 2020;9(11).

Nascimento JM, Saia-Cereda VM, Sartore RC, da Costa RM, Schitine CS, Freitas HR, et al. Human cerebral organoids and fetal brain tissue share proteomic similarities. Front Cell Dev Biol. 2019;7:303.

Gordon A, Yoon SJ, Tran SS, Makinson CD, Park JY, Andersen J, et al. Long-term maturation of human cortical organoids matches key early postnatal transitions. Nat Neurosci. 2021;24(3):331–42.

Bhaduri A, Andrews MG, Mancia Leon W, Jung D, Shin D, Allen D, et al. Cell stress in cortical organoids impairs molecular subtype specification. Nature. 2020;578(7793):142–8.

Raja WK, Mungenast AE, Lin YT, Ko T, Abdurrob F, Seo J, et al. Self-organizing 3D human neural tissue derived from induced pluripotent stem cells recapitulate Alzheimer’s disease phenotypes. PLoS ONE. 2016;11(9):e0161969.

Seo J, Kritskiy O, Watson LA, Barker SJ, Dey D, Raja WK, et al. Inhibition of p25/Cdk5 attenuates tauopathy in mouse and iPSC models of frontotemporal dementia. J Neurosci. 2017;37(41):9917–24.

Alic I, Goh PA, Murray A, Portelius E, Gkanatsiou E, Gough G, et al. Patient-specific Alzheimer-like pathology in trisomy 21 cerebral organoids reveals BACE2 as a gene dose-sensitive AD suppressor in human brain. Mol Psychiatry. 2021;26(10):5766–88.

Tian A, Muffat J, Li Y. Studying human neurodevelopment and diseases using 3D brain organoids. J Neurosci. 2020;40(6):1186–93.

Xiang Y, Yoshiaki T, Patterson B, Cakir B, Kim KY, Cho YS, et al. Generation and fusion of human cortical and medial ganglionic eminence brain organoids. Curr Protoc Stem Cell Biol. 2018;47(1):e61.

Yin X, Mead BE, Safaee H, Langer R, Karp JM, Levy O. Engineering stem cell organoids. Cell Stem Cell. 2016;18(1):25–38.

Ham O, Jin YB, Kim J, Lee MO. Blood vessel formation in cerebral organoids formed from human embryonic stem cells. Biochem Biophys Res Commun. 2020;521(1):84–90.

Shi Y, Sun L, Wang M, Liu J, Zhong S, Li R, et al. Vascularized human cortical organoids (vOrganoids) model cortical development in vivo. PLoS Biol. 2020;18(5):e3000705.

Xu R, Boreland AJ, Li X, Erickson C, Jin M, Atkins C, et al. Developing human pluripotent stem cell-based cerebral organoids with a controllable microglia ratio for modeling brain development and pathology. Stem Cell Rep. 2021;16(8):1923–37.

Acknowledgements

The authors want to thank the members of the OrganoVIR Labs for their insightful comments and the fruitful discussions.

Funding

This work was supported by the Van Herk group through a donation to Amsterdam UMC, location Academic Medical Center. The funders had no role in the design of the study, in the collection, analysis, and interpretation of the data, or in the writing of the manuscript.

Author information

Authors and Affiliations

Department of Paediatric Infectious Diseases, Amsterdam UMC Location University of Amsterdam, Amsterdam, The Netherlands

Lance A. Mulder, Josse A. Depla, Adithya Sridhar & Dasja Pajkrt

Department of Medical Microbiology, OrganoVIR Labs, Amsterdam UMC Location University of Amsterdam, Amsterdam, The Netherlands

Lance A. Mulder, Josse A. Depla, Adithya Sridhar, Katja Wolthers, Dasja Pajkrt & Renata Vieira de Sá

Amsterdam Institute for Infection and Immunity, Infectious Diseases, Amsterdam, The Netherlands

uniQure Biopharma B.V., Amsterdam, The Netherlands

Josse A. Depla & Renata Vieira de Sá

Contributions

Conceptualisation was performed by L.M., J.D., R.S., A.S., K.W., D.P.; methodology by L.M., J.D., R.S., A.S., K.W., D.P.; investigation by L.M., J.D., R.S.; formal analysis by L.M., R.S.; funding acquisition by K.W., D.P.; resources by K.W., D.P.; supervision by R.S., A.S., K.W., D.P.; project administration by K.W., D.P.; visualisation by L.M., R.S.; writing—original draft by L.M., J.D., R.S., A.S., K.W., D.P.; writing—review & editing by L.M., J.D., R.S., A.S., K.W., D.P. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Lance A. Mulder.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

J.D. & R.S. are employees of uniQure B.V. Other authors (L.M., A.S., K.W. & D.P.) have no competing interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Categorisation of included articles. This file contains a table of the 302 articles included in the study, categorised into research fields based on the stated aim of each article.

Additional file 2: Neurodevelopmental and protocol generation articles. This file contains a table of the 124 articles categorised under ‘Neurodevelopmental studies and Protocol generation’. Each article was analysed for the name(s) and identity of the organoid model(s) reported, the culture durations (age) of each model, and whether functional assays were performed (yes/no).

Additional file 3: Functional assays described in different brain organoid models. This file contains a table of the articles included in the functional assay analysis. Articles were analysed for the type of assay described and for the method of organoid preparation. Articles marked with an asterisk (*) are assembloid studies.

Additional file 4: The reference list of the articles described in Additional files 1–3.

Additional file 5: Cell types and their reported markers. Individual markers are depicted in relation to the cell type that they were reported to characterise. The ‘Regional identity’ column depicts markers not often used for specific cell type characterisation, but more generally used to determine the identity of the organoid model. Strong marker overlap is evident between precursor cells. NPC: neural precursor cells; RG: radial glia; oRG: outer radial glia; IPC: intermediate progenitor cells; GE: ganglionic eminence; MGE: medial ganglionic eminence; IN: interneuron; DA: dopaminergic; ChP: choroid plexus.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Mulder, L.A., Depla, J.A., Sridhar, A. et al. A beginner’s guide on the use of brain organoids for neuroscientists: a systematic review. Stem Cell Res Ther 14 , 87 (2023). https://doi.org/10.1186/s13287-023-03302-x

Received: 01 June 2022

Accepted: 27 March 2023

Published: 15 April 2023

DOI: https://doi.org/10.1186/s13287-023-03302-x


Keywords

  • Human brain organoids
  • Neurodevelopment
  • Pluripotent stem cells
  • Cell type characterisation

Stem Cell Research & Therapy

ISSN: 1757-6512

  • Submission enquiries: Access here and click Contact Us
  • General enquiries: [email protected]


COMMENTS

  1. Introduction to systematic review and meta-analysis

    It is easy to confuse systematic reviews and meta-analyses. A systematic review is an objective, reproducible method to find answers to a certain research question, by collecting all available studies related to that question and reviewing and analyzing their results. A meta-analysis differs from a systematic review in that it uses statistical ... A worked example of this statistical pooling is sketched after this list.

  2. Systematic reviews: Structure, form and content

    Topic selection and planning. In recent years, there has been an explosion in the number of systematic reviews conducted and published (Chalmers & Fox 2016, Fontelo & Liu 2018, Page et al 2015), although a systematic review may be an inappropriate or unnecessary research methodology for answering many research questions. Systematic reviews can be inadvisable for a variety of reasons.

  3. SJSU Research Guides: Literature Review vs Systematic Review

    This guide will help you identify the basic differences between a literature review and a systematic review. ... to confuse systematic and literature reviews because both are used to provide a summary of the existing literature or research on a specific topic. ... Lynn (2013): Difference between a systematic review and a literature review ...

  4. Types of Reviews

    Not all research questions are well-suited for systematic reviews. Review Typologies (from LITR-EX): this site explores different review methodologies such as systematic, scoping, realist, narrative, state-of-the-art, meta-ethnography, critical, and integrative reviews. The LITR-EX site has a health professions education focus, but the advice ...

  5. How to Do a Systematic Review: A Best Practice Guide for Conducting and

    Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question.

  6. Traditional reviews vs. systematic reviews

    They aim to summarise the best available evidence on a particular research topic. The main differences between traditional reviews and systematic reviews are summarised below in terms of the following characteristics: authors, study protocol, research question, search strategy, sources of literature, selection criteria, critical appraisal ...

  7. Systematic Reviews and Meta-analysis: Understanding the Best Evidence

    With a view to addressing this challenge, the systematic review method was developed. Systematic reviews aim to inform and facilitate this process through research synthesis of multiple studies, enabling increased and efficient access to evidence.[1,3,4] Systematic reviews and meta-analyses have become increasingly important in healthcare settings.

  8. The PRISMA 2020 statement: an updated guideline for reporting ...

    The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement ...

  9. Understanding and Evaluating Systematic Reviews and Meta-analyses

    Abstract. A systematic review is a summary of existing evidence that answers a specific clinical question, contains a thorough, unbiased search of the relevant literature, explicit criteria for assessing studies, and a structured presentation of the results. A systematic review that incorporates quantitative pooling of similar studies to produce ...

  10. Understanding the Differences Between a Systematic Review ...

    The methodology involved in a literature review is less complicated and requires a lower degree of planning. For a systematic review, the planning is extensive and requires defining robust pre-specified protocols. It starts with formulating the research question and scope of the research. The PICO approach (population, intervention ...

  11. Conducting systematic literature reviews and bibliometric analyses

    Irrespective of the approach (author- or theme-centric), many authors review only selected publications in their literature review section and create a ‘narrative’ rather than a systematic review. Systematic reviews require the collection of a representative or comprehensive dataset of available research (Tranfield et al., 2003), and ...

  12. Systematic Review VS Meta-Analysis

    A systematic review is a form of research that collects, appraises, and synthesizes evidence to answer a particular question in a very transparent and systematic way. The data (or evidence) used in systematic reviews originate in the scholarly literature, published or unpublished, so findings are typically very reliable.

  13. Systematic Literature Review or Literature Review

    The difference between a literature review and a systematic review comes back to the initial research question. Whereas the systematic review is very specific and focused, the standard literature review is much more general. The components of a literature review, for example, are similar to any other research paper.

  14. Systematic reviews vs meta-analysis: what's the difference?

    A systematic review is an article that synthesizes available evidence on a certain topic utilizing a specific research question, pre-specified eligibility criteria for including articles, and a systematic method for its production, whereas a meta-analysis is a quantitative, epidemiological study design used to assess the results of articles ...

  15. Comparing Integrative and Systematic Literature Reviews

    A literature review is a systematic way of collecting and synthesizing previous research (Snyder, 2019). An integrative literature review provides an integration of the current state of knowledge as a way of generating new knowledge (Holton, 2002). HRDR labels the Integrative Literature Review as one of the journal's four non-empirical research article types, as in theory and conceptual ...

  16. Systematic review or scoping review? Guidance for authors when choosing

    Background. Scoping reviews are a relatively new approach to evidence synthesis, and currently there exists little guidance regarding the decision to choose between a systematic review or scoping review approach when synthesising evidence. The purpose of this article is to clearly describe the differences in indications between scoping reviews and systematic reviews and to provide guidance for ...

  17. 5 Differences between a research paper and a review paper

    Scholarly literature can be of different types; some require that researchers conduct an original study, whereas others can be based on existing research. One of the most popular Q&As led us to conclude that, of all the types of scholarly literature, researchers are most confused by the differences between a research paper and a review paper. This infographic explains the five main ...

  18. What is the difference between a systematic review and a ...

    In contrast, a systematic literature review might be conducted by one person. Overall, while a systematic review must comply with set standards, you would expect any review called a systematic literature review to strive to be quite comprehensive. A systematic literature review would contrast with what is sometimes called a narrative or ...

  19. What is the difference between a survey paper and a systematic review

    It focuses on collecting and presenting technical information, often to describe the history of discoveries about a given topic. A survey article, therefore, is typically shorter than a review article. A literature review (sometimes called a narrative review) also involves the collection of all the relevant literature on a topic.

  20. What is the difference between review article and systematic review

    Most recent answer: a systematic review is a review performed according to several chosen criteria, with consideration of accuracy and adequacy. Also, the word systematic might refer to the ...

  21. What is the difference between a research paper and a review paper

    The research paper will be based on the analysis and interpretation of this data. A review article or review paper is based on other published articles; it does not report original research. Review articles generally summarize the existing literature on a topic in an attempt to explain the current state of understanding on the topic.

  22. Journal of Medical Internet Research

    Objective: This study aims to conduct a systematic review and qualitative evidence synthesis to assess the clinician and patient experience of digital hospitals. Methods: The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and ENTREQ (Enhancing the Transparency in Reporting the Synthesis of Qualitative Research ...

  23. A beginner's guide on the use of brain organoids for neuroscientists: a

    In this review, we provide a systematic overview of culture durations, functional activity assays, protocol key aspect comparisons, small molecule and growth factor application, cell type composition, and IF marker usage of the most used models, to be used as a practical guide for researchers in the field of human brain organoid research.

  24. A systematic review on microplastic pollution in water, sediments, and

    Coastal lagoons are transitional environments between continental and marine aquatic systems. Globally, coastal lagoons are of great ecological and socioeconomic importance as providers of valuable ecosystem services. However, these fragile environments are subject to several human pressures, including pollution by microplastics (MPs). The aim of this review was to identify and summarize ...

  25. Phthalate exposure and risk of metabolic syndrome components: A

    Systematic review. Metabolic syndrome is a cluster of conditions that increase the risk of cardiovascular disease, i.e. obesity, insulin resistance, hypertriglyceridemia, low high-density lipoprotein cholesterol (HDL-c) levels, and arterial hypertension. Phthalates are environmental chemicals which might influence the risk of the aforementioned ...
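
Several of the entries above describe a meta-analysis as a statistical combination of study results. As a minimal textbook sketch, not drawn from any of the listed articles, the fixed-effect inverse-variance method pools k study estimates \hat{\theta}_i with variances v_i as follows (LaTeX notation):

w_i = \frac{1}{v_i}, \qquad
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad
\operatorname{Var}(\hat{\theta}) = \frac{1}{\sum_{i=1}^{k} w_i}

Heterogeneity is then commonly quantified with Cochran's Q = \sum_{i=1}^{k} w_i (\hat{\theta}_i - \hat{\theta})^2 and I^2 = \max\{0, (Q - (k-1))/Q\} \times 100\%; random-effects models instead use weights w_i^* = 1/(v_i + \tau^2), where \tau^2 is the estimated between-study variance.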