Korean J Anesthesiol. 2018 Apr; 71(2).

Introduction to systematic review and meta-analysis

1 Department of Anesthesiology and Pain Medicine, Inje University Seoul Paik Hospital, Seoul, Korea

2 Department of Anesthesiology and Pain Medicine, Chung-Ang University College of Medicine, Seoul, Korea

Systematic reviews and meta-analyses present results by combining and analyzing data from different studies conducted on similar research topics. In recent years, systematic reviews and meta-analyses have been actively performed in various fields including anesthesiology. These research methods are powerful tools that can overcome the difficulties in performing large-scale randomized controlled trials. However, if biased studies are included or the quality of evidence is assessed improperly, systematic reviews and meta-analyses can yield misleading results. Therefore, various guidelines have been suggested for conducting systematic reviews and meta-analyses to help standardize them and improve their quality. Nonetheless, accepting the conclusions of many studies without understanding the meta-analysis can be dangerous. Therefore, this article provides clinicians with an accessible introduction to performing and understanding meta-analyses.

Introduction

A systematic review collects all possible studies related to a given topic and design, and reviews and analyzes their results [ 1 ]. During the systematic review process, the quality of studies is evaluated, and a statistical meta-analysis of the study results is conducted on the basis of their quality. A meta-analysis is a valid, objective, and scientific method of analyzing and combining different results. Usually, in order to obtain more reliable results, a meta-analysis is mainly conducted on randomized controlled trials (RCTs), which have a high level of evidence [ 2 ] ( Fig. 1 ). Since 1999, various papers have presented guidelines for reporting meta-analyses of RCTs. Following the Quality of Reporting of Meta-analyses (QUORUM) statement [ 3 ], and the appearance of registers such as Cochrane Library’s Methodology Register, a large number of systematic literature reviews have been registered. In 2009, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [ 4 ] was published, and it greatly helped standardize and improve the quality of systematic reviews and meta-analyses [ 5 ].

Fig. 1. Levels of evidence.

In anesthesiology, the importance of systematic reviews and meta-analyses has been highlighted, and they provide diagnostic and therapeutic value to various areas, including not only perioperative management but also intensive care and outpatient anesthesia [6–13]. Systematic reviews and meta-analyses include various topics, such as comparing various treatments of postoperative nausea and vomiting [ 14 , 15 ], comparing general anesthesia and regional anesthesia [ 16 – 18 ], comparing airway maintenance devices [ 8 , 19 ], comparing various methods of postoperative pain control (e.g., patient-controlled analgesia pumps, nerve block, or analgesics) [ 20 – 23 ], comparing the precision of various monitoring instruments [ 7 ], and meta-analysis of dose-response in various drugs [ 12 ].

Thus, systematic reviews and meta-analyses are being conducted in diverse medical fields, and their importance is highlighted because they help extract accurate, good-quality evidence from the flood of data being produced. However, a lack of understanding of systematic reviews and meta-analyses can lead to incorrect outcomes being derived from the review and analysis processes. If readers indiscriminately accept the results of the many meta-analyses that are published, they may draw incorrect conclusions. Therefore, in this review, we aim to describe the contents and methods of systematic reviews and meta-analyses in a way that is easy to understand for future authors and readers.

Study Planning

It is easy to confuse systematic reviews and meta-analyses. A systematic review is an objective, reproducible method to find answers to a certain research question, by collecting all available studies related to that question and reviewing and analyzing their results. A meta-analysis differs from a systematic review in that it uses statistical methods on estimates from two or more different studies to form a pooled estimate [ 1 ]. Following a systematic review, if it is not possible to form a pooled estimate, it can be published as is without progressing to a meta-analysis; however, if it is possible to form a pooled estimate from the extracted data, a meta-analysis can be attempted. Systematic reviews and meta-analyses usually proceed according to the flowchart presented in Fig. 2 . We explain each of the stages below.

Fig. 2. Flowchart illustrating a systematic review.

Formulating research questions

A systematic review attempts to gather all available empirical research by using clearly defined, systematic methods to obtain answers to a specific question. A meta-analysis is the statistical process of analyzing and combining results from several similar studies. Here, the definition of the word “similar” is not made clear, but when selecting a topic for the meta-analysis, it is essential to ensure that the different studies present data that can be combined. If the studies contain data on the same topic that can be combined, a meta-analysis can even be performed using data from only two studies. However, study selection via a systematic review is a precondition for performing a meta-analysis, and it is important to clearly define the Population, Intervention, Comparison, Outcomes (PICO) parameters that are central to evidence-based research. In addition, the research topic should be selected on the basis of a logical rationale, and it is preferable to choose a topic that is familiar to readers but for which the evidence has not yet been clearly established [ 24 ].
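
As a minimal illustration, the PICO elements of a review question can be recorded in a simple structured form; the question and field values below are hypothetical and serve only to show the four components.

```python
# Hypothetical PICO record for a review question; the values are illustrative only.
pico = {
    "population":   "adults undergoing general anesthesia",
    "intervention": "prophylactic antiemetic drug A",
    "comparison":   "placebo or no prophylaxis",
    "outcomes":     ["postoperative nausea", "postoperative vomiting"],
}

for element, value in pico.items():
    print(f"{element:>12}: {value}")
```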

Protocols and registration

In systematic reviews, prior registration of a detailed research plan is very important. In order to make the research process transparent, the primary and secondary outcomes and the methods are set in advance, and in the event of changes to the method, other researchers and readers are informed of when, how, and why the changes were made. Many studies are registered with an organization like PROSPERO ( http://www.crd.york.ac.uk/PROSPERO/ ), and the registration number is recorded when reporting the study, in order to share the protocol at the time of planning.

Defining inclusion and exclusion criteria

Information is included on the study design, patient characteristics, publication status (published or unpublished), language used, and research period. If there is a discrepancy between the number of patients included in the study and the number of patients included in the analysis, this needs to be clearly explained while describing the patient characteristics, to avoid confusing the reader.

Literature search and study selection

In order to secure a proper basis for evidence-based research, it is essential to perform a broad search that includes as many studies as possible that meet the inclusion and exclusion criteria. Typically, the three bibliographic databases Medline, Embase, and Cochrane Central Register of Controlled Trials (CENTRAL) are used. In domestic studies, the Korean databases KoreaMed, KMBASE, and RISS4U may be included. Effort is required to identify not only published studies but also abstracts, ongoing studies, and studies awaiting publication. Among the studies retrieved in the search, the researchers remove duplicate studies, select studies that meet the inclusion/exclusion criteria based on the abstracts, and then make the final selection of studies based on their full text. In order to maintain transparency and objectivity throughout this process, study selection is conducted independently by at least two investigators. When opinions differ, the disagreement is resolved through discussion or by a third reviewer. The methods for this process also need to be planned in advance. It is essential to ensure the reproducibility of the literature selection process [ 25 ].

Quality of evidence

However well planned the systematic review or meta-analysis is, if the quality of evidence in the studies is low, the quality of the meta-analysis decreases and incorrect results can be obtained [ 26 ]. Even when using randomized studies with a high quality of evidence, evaluating the quality of evidence precisely helps determine the strength of recommendations in the meta-analysis. One method of evaluating the quality of evidence in non-randomized studies is the Newcastle-Ottawa Scale, provided by the Ottawa Hospital Research Institute 1) . However, we are mostly focusing on meta-analyses that use randomized studies.

If the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) system ( http://www.gradeworkinggroup.org/ ) is used, the quality of evidence is evaluated on the basis of the study limitations, inaccuracies, incompleteness of outcome data, indirectness of evidence, and risk of publication bias, and this is used to determine the strength of recommendations [ 27 ]. As shown in Table 1 , the study limitations are evaluated using the “risk of bias” method proposed by Cochrane 2) . This method classifies bias in randomized studies as “low,” “high,” or “unclear” on the basis of the presence or absence of six processes (random sequence generation, allocation concealment, blinding participants or investigators, incomplete outcome data, selective reporting, and other biases) [ 28 ].

Table 1. The Cochrane Collaboration’s Tool for Assessing the Risk of Bias [ 28 ]

Data extraction

Two different investigators extract data based on the objectives and form of the study; thereafter, the extracted data are reviewed. Since the size and format of each variable differ, the size and format of the outcomes also differ, and slight changes may be required when combining the data [ 29 ]. If there are differences in the size and format of the outcome variables that cause difficulties combining the data, such as the use of different evaluation instruments or different evaluation timepoints, the analysis may be limited to a systematic review. The investigators resolve differences of opinion by debate, and if they fail to reach a consensus, a third reviewer is consulted.

Data Analysis

The aim of a meta-analysis is to derive a conclusion with greater power and accuracy than could be achieved in the individual studies. Therefore, before analysis, it is crucial to evaluate the direction of effect, size of effect, homogeneity of effects among studies, and strength of evidence [ 30 ]. Thereafter, the data are reviewed qualitatively and quantitatively. If it is determined that the different research outcomes cannot be combined, all the results and characteristics of the individual studies are displayed in a table or in a descriptive form; this is referred to as a qualitative review. A meta-analysis is a quantitative review, in which the clinical effectiveness is evaluated by calculating the weighted pooled estimate for the interventions in at least two separate studies.

The pooled estimate is the outcome of the meta-analysis, and is typically explained using a forest plot ( Figs. 3 and 4 ). The black squares and their horizontal lines in the forest plot represent the odds ratio (OR) and 95% confidence interval of each study. The area of each square represents the weight given to that study in the meta-analysis. The black diamond represents the OR and 95% confidence interval calculated across all the included studies. The bold vertical line represents a lack of therapeutic effect (OR = 1); if the confidence interval includes OR = 1, it means no significant difference was found between the treatment and control groups.
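
As a brief sketch of where those per-study values come from, the OR and its 95% confidence interval can be computed from a study’s 2 × 2 counts; the counts below are hypothetical.

```python
import math

# Hypothetical 2 x 2 counts for a single study:
# a = events in the treatment group, b = non-events in the treatment group,
# c = events in the control group,   d = non-events in the control group.
a, b, c, d = 12, 88, 24, 76

odds_ratio = (a * d) / (b * c)                  # odds ratio for this study
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)    # standard error of log(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
# If the interval includes 1, the study shows no significant treatment-control difference.
```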

Fig. 3. Forest plot analyzed by two different models using the same data. (A) Fixed-effect model. (B) Random-effect model. Individual trials are depicted as filled squares proportional to their relative sample size, with the solid horizontal line showing the 95% confidence interval of the difference. The diamond indicates the pooled estimate and its uncertainty for the combined effect. The vertical line indicates no treatment effect (OR = 1); if a confidence interval includes 1, the result shows no evidence of a difference between the treatment and control groups.

Fig. 4. Forest plot representing homogeneous data.

Dichotomous variables and continuous variables

In data analysis, outcome variables can be considered broadly in terms of dichotomous variables and continuous variables. When combining data from continuous variables, the mean difference (MD) and standardized mean difference (SMD) are used ( Table 2 ).

Table 2. Summary of Meta-analysis Methods Available in RevMan [ 28 ]

The MD is the absolute difference in mean values between the groups, and the SMD is the mean difference between groups divided by the standard deviation. When results are presented in the same units, the MD can be used, but when results are presented in different units, the SMD should be used. When the MD is used, the combined units must be shown. A value of “0” for the MD or SMD indicates that the effects of the new treatment method and the existing treatment method are the same. A value lower than “0” means the new treatment method is less effective than the existing method, and a value greater than “0” means the new treatment is more effective than the existing method.
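
The MD and SMD are defined above only in words; the sketch below shows one common way to compute them (standardizing by a pooled standard deviation, as in Cohen’s d) from hypothetical group summaries.

```python
import math

# Hypothetical summary statistics: mean, SD, and sample size for each group.
mean_trt, sd_trt, n_trt = 3.2, 1.1, 40   # treatment group
mean_ctl, sd_ctl, n_ctl = 4.0, 1.3, 42   # control group

md = mean_trt - mean_ctl                 # mean difference, in the outcome's own units
pooled_sd = math.sqrt(((n_trt - 1) * sd_trt**2 + (n_ctl - 1) * sd_ctl**2)
                      / (n_trt + n_ctl - 2))
smd = md / pooled_sd                     # standardized mean difference, unit-free
print(f"MD = {md:.2f}, SMD = {smd:.2f}")
```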

When combining data for dichotomous variables, the OR, risk ratio (RR), or risk difference (RD) can be used. The RR and RD can be used for RCTs, quasi-experimental studies, or cohort studies, and the OR can be used for other case-control studies or cross-sectional studies. However, because the OR is difficult to interpret, using the RR and RD, if possible, is recommended. If the outcome variable is a dichotomous variable, it can be presented as the number needed to treat (NNT), which is the minimum number of patients who need to be treated in the intervention group, compared to the control group, for a given event to occur in at least one patient. Based on Table 3 , in an RCT, if x is the probability of the event occurring in the control group and y is the probability of the event occurring in the intervention group, then x = c/(c + d), y = a/(a + b), and the absolute risk reduction (ARR) = x − y. NNT can be obtained as the reciprocal, 1/ARR.
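
Following the notation just described (and Table 3), a small worked example with hypothetical counts illustrates the ARR and NNT calculation.

```python
import math

# Hypothetical 2 x 2 counts:
# a = events in the intervention group, b = non-events in the intervention group,
# c = events in the control group,      d = non-events in the control group.
a, b, c, d = 10, 90, 25, 75

y = a / (a + b)        # event probability in the intervention group
x = c / (c + d)        # event probability in the control group
arr = x - y            # absolute risk reduction
nnt = 1 / arr          # number needed to treat
print(f"ARR = {arr:.3f}, NNT = {nnt:.1f} (in practice rounded up to {math.ceil(nnt)})")
```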

Table 3. Calculation of the Number Needed to Treat in the Dichotomous Table

Fixed-effect models and random-effect models

In order to analyze effect size, two types of models can be used: a fixed-effect model or a random-effect model. A fixed-effect model assumes that the effect of treatment is the same, and that variation between results in different studies is due to random error. Thus, a fixed-effect model can be used when the studies are considered to have the same design and methodology, or when the variability in results within a study is small, and the variance is thought to be due to random error. Three common methods are used for weighted estimation in a fixed-effect model: 1) inverse variance-weighted estimation 3) , 2) Mantel-Haenszel estimation 4) , and 3) Peto estimation 5) .
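
As a minimal sketch of the first of these approaches, an inverse variance-weighted pooled estimate can be computed from hypothetical study-level effects (for example, log odds ratios) and their variances.

```python
import numpy as np

# Hypothetical study-level effect estimates (e.g., log odds ratios) and their variances.
effects = np.array([0.30, 0.10, 0.45, 0.25])
variances = np.array([0.04, 0.09, 0.16, 0.06])

weights = 1.0 / variances                             # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)  # fixed-effect pooled estimate
se_pooled = np.sqrt(1.0 / np.sum(weights))            # standard error of the pooled estimate
print(f"Pooled effect = {pooled:.3f}, "
      f"95% CI {pooled - 1.96 * se_pooled:.3f} to {pooled + 1.96 * se_pooled:.3f}")
```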

A random-effect model assumes heterogeneity between the studies being combined, and these models are used when the studies are assumed different, even if a heterogeneity test does not show a significant result. Unlike a fixed-effect model, a random-effect model assumes that the size of the effect of treatment differs among studies. Thus, differences in variation among studies are thought to be due to not only random error but also between-study variability in results. Therefore, weight does not decrease greatly for studies with a small number of patients. Among methods for weighted estimation in a random-effect model, the DerSimonian and Laird method 6) is mostly used for dichotomous variables, as the simplest method, while inverse variance-weighted estimation is used for continuous variables, as with fixed-effect models. These four methods are all used in Review Manager software (The Cochrane Collaboration, UK), and are described in a study by Deeks et al. [ 31 ] ( Table 2 ). However, when the number of studies included in the analysis is less than 10, the Hartung-Knapp-Sidik-Jonkman method 7) can better reduce the risk of type 1 error than does the DerSimonian and Laird method [ 32 ].
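
The DerSimonian and Laird approach can be sketched in a few lines: Cochran's Q from the fixed-effect fit yields an estimate of the between-study variance tau², which is added to each study's within-study variance before re-weighting. The effect estimates and variances below are hypothetical.

```python
import numpy as np

# Hypothetical study-level effects (e.g., log odds ratios) and within-study variances.
effects = np.array([0.30, 0.10, 0.45, 0.25, 0.60])
variances = np.array([0.04, 0.09, 0.16, 0.06, 0.20])

# Fixed-effect quantities needed for Cochran's Q.
w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2 (truncated at zero).
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effect weights and pooled estimate.
w_star = 1.0 / (variances + tau2)
pooled = np.sum(w_star * effects) / np.sum(w_star)
se_pooled = np.sqrt(1.0 / np.sum(w_star))
print(f"tau^2 = {tau2:.4f}, pooled effect = {pooled:.3f} (SE {se_pooled:.3f})")
```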

Fig. 3 shows the results of analyzing outcome data using a fixed-effect model (A) and a random-effect model (B). As shown in Fig. 3 , while the results from large studies are weighted more heavily in the fixed-effect model, studies are given relatively similar weights irrespective of study size in the random-effect model. Although identical data were being analyzed, as shown in Fig. 3 , the significant result in the fixed-effect model was no longer significant in the random-effect model. One representative example of the small study effect in a random-effect model is the meta-analysis by Li et al. [ 33 ]. In a large-scale study, intravenous injection of magnesium was unrelated to acute myocardial infarction, but in the random-effect model, which included numerous small studies, the small study effect resulted in an association being found between intravenous injection of magnesium and myocardial infarction. This small study effect can be controlled for by using a sensitivity analysis, which is performed to examine the contribution of each of the included studies to the final meta-analysis result. In particular, when heterogeneity is suspected in the study methods or results, by changing certain data or analytical methods, this method makes it possible to verify whether the changes affect the robustness of the results, and to examine the causes of such effects [ 34 ].

Heterogeneity

A homogeneity test examines whether the degree of heterogeneity among studies is greater than would be expected to occur by chance, that is, whether the variation in the effect sizes calculated from the individual studies exceeds the sampling error. This makes it possible to test whether the effect sizes calculated from several studies are the same. Three methods can be used to assess homogeneity: 1) the forest plot, 2) Cochran’s Q test (chi-squared), and 3) Higgins’ I² statistic. In the forest plot, as shown in Fig. 4 , greater overlap between the confidence intervals indicates greater homogeneity. For the Q statistic, when the P value of the chi-squared test, calculated from the forest plot in Fig. 4 , is less than 0.1, it is considered to show statistical heterogeneity and a random-effect model can be used. Finally, I² can be used [ 35 ].

I², calculated as I² = 100% × (Q − df)/Q, where Q is Cochran’s heterogeneity statistic and df is its degrees of freedom (the number of studies minus 1), returns a value between 0 and 100% (negative values are set to zero). A value less than 25% is considered to indicate strong homogeneity, a value of around 50% moderate heterogeneity, and a value greater than 75% strong heterogeneity.
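
A short sketch of this calculation, using hypothetical values of Q and df:

```python
def i_squared(q: float, df: int) -> float:
    """Higgins' I^2 (in percent) from Cochran's Q and its degrees of freedom."""
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Hypothetical example: Q = 9.8 across 5 studies, so df = 4.
print(f"I^2 = {i_squared(9.8, 4):.1f}%")  # about 59%: between the 50% and 75% thresholds
```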

Even when the data cannot be shown to be homogeneous, a fixed-effect model can be used, ignoring the heterogeneity, and all the study results can be presented individually, without combining them. However, in many cases, a random-effect model is applied, as described above, and a subgroup analysis or meta-regression analysis is performed to explain the heterogeneity. In a subgroup analysis, the data are divided into subgroups that are expected to be homogeneous, and these subgroups are analyzed. This needs to be planned in the predetermined protocol before starting the meta-analysis. A meta-regression analysis is similar to a normal regression analysis, except that the heterogeneity between studies is modeled. This process involves regressing the study-level effect estimates on study-level covariates, and so it is usually not considered when the number of studies is less than 10. Here, univariate and multivariate regression analyses can both be considered.

Publication bias

Publication bias is the most common type of reporting bias in meta-analyses. This refers to the distortion of meta-analysis outcomes due to the higher likelihood of publication of statistically significant studies rather than non-significant studies. In order to test the presence or absence of publication bias, first, a funnel plot can be used ( Fig. 5 ). Studies are plotted on a scatter plot with effect size on the x-axis and precision or total sample size on the y-axis. If the points form an upside-down funnel shape, with a broad base that narrows towards the top of the plot, this indicates the absence of a publication bias ( Fig. 5A ) [ 29 , 36 ]. On the other hand, if the plot shows an asymmetric shape, with no points on one side of the graph, then publication bias can be suspected ( Fig. 5B ). Second, to test publication bias statistically, Begg and Mazumdar’s rank correlation test 8) [ 37 ] or Egger’s test 9) [ 29 ] can be used. If publication bias is detected, the trim-and-fill method 10) can be used to correct the bias [ 38 ]. Fig. 6 displays results that showed publication bias in Egger’s test and were then corrected with the trim-and-fill method using Comprehensive Meta-Analysis software (Biostat, USA).
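
Egger's test, as described in footnote 9), regresses each study's standard normal deviate (effect divided by its standard error) against its precision (the reciprocal of the standard error); an intercept far from zero suggests funnel-plot asymmetry. The sketch below uses hypothetical log odds ratios and standard errors.

```python
import numpy as np

# Hypothetical per-study log odds ratios and their standard errors.
log_or = np.array([0.35, 0.10, 0.62, 0.28, -0.05, 0.90, 0.45])
se = np.array([0.12, 0.20, 0.35, 0.18, 0.25, 0.40, 0.15])

# Egger's regression: standard normal deviate against precision.
y = log_or / se
x = 1.0 / se
X = np.column_stack([np.ones_like(x), x])      # intercept + precision
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least-squares fit

# Standard error of the intercept from the usual OLS covariance matrix.
resid = y - X @ beta
sigma2 = resid @ resid / (len(y) - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)
intercept, intercept_se = beta[0], np.sqrt(cov[0, 0])
print(f"Egger intercept = {intercept:.3f} (SE {intercept_se:.3f}), "
      f"t = {intercept / intercept_se:.2f} on {len(y) - 2} df")
# A two-sided t-test of the intercept against zero is the usual test of asymmetry.
```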

Fig. 5. Funnel plot showing the effect size on the x-axis and sample size on the y-axis as a scatter plot. (A) Funnel plot without publication bias: the points are spread broadly at the bottom and narrow towards the top. (B) Funnel plot with publication bias: the points are distributed asymmetrically.

Fig. 6. Funnel plot adjusted using the trim-and-fill method. White circles: comparisons included. Black circles: comparisons imputed by the trim-and-fill method. White diamond: pooled observed log risk ratio. Black diamond: pooled imputed log risk ratio.

Result Presentation

When reporting the results of a systematic review or meta-analysis, the analytical content and methods should be described in detail. First, a flowchart is displayed with the literature search and selection process according to the inclusion/exclusion criteria. Second, a table is shown with the characteristics of the included studies. A table should also be included with information related to the quality of evidence, such as GRADE ( Table 4 ). Third, the results of data analysis are shown in a forest plot and funnel plot. Fourth, if the results use dichotomous data, the NNT values can be reported, as described above.

Table 4. The GRADE Evidence Quality for Each Outcome

N: number of studies, ROB: risk of bias, PON: postoperative nausea, POV: postoperative vomiting, PONV: postoperative nausea and vomiting, CI: confidence interval, RR: risk ratio, AR: absolute risk.

When Review Manager software (The Cochrane Collaboration, UK) is used for the analysis, two types of P values are given. The first is the P value from the z-test, which tests the null hypothesis that the intervention has no effect. The second P value is from the chi-squared test, which tests the null hypothesis for a lack of heterogeneity. The statistical result for the intervention effect, which is generally considered the most important result in meta-analyses, is the z-test P value.

A common mistake when reporting results is, given a z-test P value greater than 0.05, to say there was “no statistical significance” or “no difference.” When evaluating statistical significance in a meta-analysis, a P value lower than 0.05 can be interpreted as indicating “a significant difference in the effects of the two treatment methods.” However, the P value may appear non-significant whether or not there is a difference between the two treatment methods. In such a situation, it is better to state that “there was no strong evidence for an effect,” and to present the P value and confidence intervals. Another common mistake is to think that a smaller P value is indicative of a more significant effect. In meta-analyses of large-scale studies, the P value is affected more by the number of studies and patients included than by the magnitude of the effect; therefore, care should be taken when interpreting the results of a meta-analysis.

When performing a systematic literature review or meta-analysis, if the quality of studies is not properly evaluated or if proper methodology is not strictly applied, the results can be biased and the outcomes can be incorrect. However, when systematic reviews and meta-analyses are properly implemented, they can yield powerful results that could usually only be achieved using large-scale RCTs, which are difficult to perform in individual studies. As our understanding of evidence-based medicine increases and its importance is better appreciated, the number of systematic reviews and meta-analyses will keep increasing. However, indiscriminate acceptance of the results of all these meta-analyses can be dangerous, and hence, we recommend that their results be received critically on the basis of a more accurate understanding.

1) http://www.ohri.ca .

2) http://methods.cochrane.org/bias/assessing-risk-bias-included-studies .

3) The inverse variance-weighted estimation method is useful if the number of studies is small with large sample sizes.

4) The Mantel-Haenszel estimation method is useful if the number of studies is large with small sample sizes.

5) The Peto estimation method is useful if the event rate is low or one of the two groups shows zero incidence.

6) The most popular and simplest statistical method used in Review Manager and Comprehensive Meta-analysis software.

7) Alternative random-effect model meta-analysis that has more adequate error rates than does the common DerSimonian and Laird method, especially when the number of studies is small. However, even with the Hartung-Knapp-Sidik-Jonkman method, when there are fewer than five studies with very unequal sizes, extra caution is needed.

8) The Begg and Mazumdar rank correlation test uses the correlation between the ranks of effect sizes and the ranks of their variances [ 37 ].

9) The degree of funnel plot asymmetry as measured by the intercept from the regression of standard normal deviates against precision [ 29 ].

10) If there are more small studies on one side, we expect the suppression of studies on the other side. Trimming yields the adjusted effect size and reduces the variance of the effects by adding the original studies back into the analysis as a mirror image of each study.

Literature Review vs Systematic Review


Definitions

It’s common to confuse systematic and literature reviews because both are used to provide a summary of the existing literature or research on a specific topic. Despite this commonality, the two types of review differ significantly. The following table provides a detailed explanation of each as well as the differences between systematic and literature reviews.

Kysh, Lynn (2013): Difference between a systematic review and a literature review. [figshare]. Available at:  http://dx.doi.org/10.6084/m9.figshare.766364

  • URL: https://libguides.sjsu.edu/LitRevVSSysRev


What is the difference between a systematic review and a systematic literature review?

By Carol Hollier on 07-Jan-2020 12:42:03


For those not immersed in systematic reviews, understanding the difference between a systematic review and a systematic literature review can be confusing.  It helps to realise that a “systematic review” is a clearly defined thing, but ambiguity creeps in around the phrase “systematic literature review” because people can and do use it in a variety of ways. 

A systematic review is a research study of research studies.  To qualify as a systematic review, a review needs to adhere to standards of transparency and reproducibility.  It will use explicit methods to identify, select, appraise, and synthesise empirical results from different but similar studies.  The study will be done in stages:  

  • In stage one, the question, which must be answerable, is framed
  • Stage two is a comprehensive literature search to identify relevant studies
  • In stage three the identified literature’s quality is scrutinised and decisions made on whether or not to include each article in the review
  • In stage four the evidence is summarised and, if the review includes a meta-analysis, the data extracted; in the final stage, findings are interpreted. [1]

Some reviews also state what degree of confidence can be placed on that answer, using the GRADE scale.  By going through these steps, a systematic review provides a broad evidence base on which to make decisions about medical interventions, regulatory policy, safety, or whatever question is analysed.   By documenting each step explicitly, the review is not only reproducible, but can be updated as more evidence on the question is generated.

Sometimes when people talk about a “systematic literature review”, they are using the phrase interchangeably with “systematic review”.  However, people can also use the phrase systematic literature review to refer to a literature review that is done in a fairly systematic way, but without the full rigor of a systematic review. 

For instance, for a systematic review, reviewers would strive to locate relevant unpublished studies in grey literature and possibly by contacting researchers directly.  Doing this is important for combatting publication bias, which is the tendency for studies with positive results to be published at a higher rate than studies with null results.  It is easy to understand how this well-documented tendency can skew a review’s findings, but someone conducting a systematic literature review in the loose sense of the phrase might, for lack of resource or capacity, forgo that step. 

Another difference might be in who is doing the research for the review. A systematic review is generally conducted by a team including an information professional for searches and a statistician for meta-analysis, along with subject experts.  Team members independently evaluate the studies being considered for inclusion in the review and compare results, adjudicating any differences of opinion.   In contrast, a systematic literature review might be conducted by one person. 

Overall, while a systematic review must comply with set standards, you would expect any review called a systematic literature review to strive to be quite comprehensive.  A systematic literature review would contrast with what is sometimes called a narrative or journalistic literature review, where the reviewer’s search strategy is not made explicit, and evidence may be cherry-picked to support an argument.

FSTA is a key tool for systematic reviews and systematic literature reviews in the sciences of food and health.


The patents indexed in FSTA help you find the results of research that are not otherwise publicly available because the work was done for commercial purposes.

The FSTA thesaurus will surface results that would be missed with keyword searching alone. Since the thesaurus is designed for the sciences of food and health, it is the most comprehensive for the field. 

All indexing and abstracting in FSTA is in English, so you can do your searching in English yet pick up non-English language results, and get those results translated if they meet the criteria for inclusion in a systematic review.

FSTA includes grey literature (conference proceedings) which can be difficult to find, but is important to include in comprehensive searches.

FSTA content has a deep archive. It goes back to 1969 for farm to fork research, and back to the late 1990s for food-related human nutrition literature—systematic reviews (and any literature review) should include not just the latest research but all relevant research on a question. 

You can also use FSTA to find literature reviews.

FSTA allows you to easily search for review articles (both narrative and systematic reviews) by using the subject heading or thesaurus term “REVIEWS" and an appropriate free-text keyword.

On the Web of Science or EBSCO platform, an FSTA search for reviews about cassava would look like this: DE "REVIEWS" AND cassava.

On the Ovid platform using the multi-field search option, the search would look like this: reviews.sh. AND cassava.af.

In 2011 FSTA introduced the descriptor META-ANALYSIS, making it easy to search specifically for systematic reviews that include a meta-analysis published from that year onwards.

On the EBSCO or Web of Science platform, an FSTA search for systematic reviews with meta-analyses about staphylococcus aureus would look like this: DE "META-ANALYSIS" AND staphylococcus aureus.

On the Ovid platform using the multi-field search option, the search would look like this: meta-analysis.sh. AND staphylococcus aureus.af.

Systematic reviews with meta-analyses published before 2011 are included in the REVIEWS controlled vocabulary term in the thesaurus.

An easy way to locate pre-2011 systematic reviews with meta-analyses is to search the subject heading or thesaurus term "REVIEWS" AND meta-analysis as a free-text keyword AND another appropriate free-text keyword.

On the Web of Science or EBSCO platform, the FSTA search would look like this: DE "REVIEWS" AND meta-analysis AND carbohydrate*

On the Ovid platform using the multi-field search option, the search would look like this: reviews.sh. AND meta-analysis.af. AND carbohydrate*.af.



University Libraries, University of Nevada, Reno


Systematic, Scoping, and Other Literature Reviews: Overview


What Is a Systematic Review?

Regular literature reviews are simply summaries of the literature on a particular topic. A systematic review, however, is a comprehensive literature review conducted to answer a specific research question. Authors of a systematic review aim to find, code, appraise, and synthesize all of the previous research on their question in an unbiased and well-documented manner. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) outline the minimum amount of information that needs to be reported at the conclusion of a systematic review project. 

Other types of what are known as "evidence syntheses," such as scoping, rapid, and integrative reviews, have varying methodologies. While systematic reviews originated with and continue to be a popular publication type in medicine and other health sciences fields, more and more researchers in other disciplines are choosing to conduct evidence syntheses. 

This guide will walk you through the major steps of a systematic review and point you to key resources including Covidence, a systematic review project management tool. For help with systematic reviews and other major literature review projects, please send us an email at  [email protected] .

Getting Help with Reviews

Organizations such as the Institute of Medicine recommend that you consult a librarian when conducting a systematic review. Librarians at the University of Nevada, Reno can help you:

  • Understand best practices for conducting systematic reviews and other evidence syntheses in your discipline
  • Choose and formulate a research question
  • Decide which review type (e.g., systematic, scoping, rapid, etc.) is the best fit for your project
  • Determine what to include and where to register a systematic review protocol
  • Select search terms and develop a search strategy
  • Identify databases and platforms to search
  • Find the full text of articles and other sources
  • Become familiar with free citation management (e.g., EndNote, Zotero)
  • Get access to and help using Covidence, a systematic review project management tool

Doing a Systematic Review

  • Plan - This is the project planning stage. You and your team will need to develop a good research question, determine the type of review you will conduct (systematic, scoping, rapid, etc.), and establish the inclusion and exclusion criteria (e.g., you're only going to look at studies that use a certain methodology). All of this information needs to be included in your protocol. You'll also need to ensure that the project is viable - has someone already done a systematic review on this topic? Do some searches and check the various protocol registries to find out. 
  • Identify - Next, a comprehensive search of the literature is undertaken to ensure all studies that meet the predetermined criteria are identified. Each research question is different, so the number and types of databases you'll search - as well as other online publication venues - will vary. Some standards and guidelines specify that certain databases (e.g., MEDLINE, EMBASE) should be searched regardless. Your subject librarian can help you select appropriate databases to search and develop search strings for each of those databases.  
  • Evaluate - In this step, retrieved articles are screened and sorted using the predetermined inclusion and exclusion criteria. The risk of bias for each included study is also assessed around this time. It's best if you import search results into a citation management tool (see below) to clean up the citations and remove any duplicates. You can then use a tool like Rayyan (see below) to screen the results. You should begin by screening titles and abstracts only, and then you'll examine the full text of any remaining articles. Each study should be reviewed by a minimum of two people on the project team. 
  • Collect - Each included study is coded and the quantitative or qualitative data contained in these studies is then synthesized. You'll have to either find or develop a coding strategy or form that meets your needs. 
  • Explain - The synthesized results are articulated and contextualized. What do the results mean? How have they answered your research question?
  • Summarize - The final report provides a complete description of the methods and results in a clear, transparent fashion. 

Adapted from

Types of Reviews

Systematic Review

These types of studies employ a systematic method to analyze and synthesize the results of numerous studies. "Systematic" in this case means following a strict set of steps - as outlined by entities like PRISMA and the Institute of Medicine - so as to make the review more reproducible and less biased. Consistent, thorough documentation is also key. Reviews of this type are not meant to be conducted by an individual but rather a (small) team of researchers. Systematic reviews are widely used in the health sciences, often to find a generalized conclusion from multiple evidence-based studies. 

Meta-Analysis

A systematic method that uses statistics to analyze the data from numerous studies. The researchers combine the data from studies with similar data types and analyze them as a single, expanded dataset. Meta-analyses are a type of systematic review.

Scoping Review

A scoping review employs the systematic review methodology to explore a broader topic or question rather than a specific and answerable one, as is generally the case with a systematic review. Authors of these types of reviews seek to collect and categorize the existing literature so as to identify any gaps.

Rapid Review

Rapid reviews are systematic reviews conducted under a time constraint. Researchers make use of workarounds to complete the review quickly (e.g., only looking at English-language publications), which can lead to a less thorough and more biased review. 

Narrative Review

A traditional literature review that summarizes and synthesizes the findings of numerous original research articles. The purpose and scope of narrative literature reviews vary widely and do not follow a set protocol. Most literature reviews are narrative reviews. 

Umbrella Review

Umbrella reviews are, essentially, systematic reviews of systematic reviews. These compile evidence from multiple review studies into one usable document. 

Grant, Maria J., and Andrew Booth. “A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies.” Health Information & Libraries Journal , vol. 26, no. 2, 2009, pp. 91-108. doi: 10.1111/j.1471-1842.2009.00848.x .


Penn State University Libraries


Know the Difference! Systematic Review vs. Literature Review

It is common to confuse systematic and literature reviews, as both are used to provide a summary of the existing literature or research on a specific topic. Even with this common ground, the two types vary significantly. Please review the following chart (and its corresponding poster linked below) for a detailed explanation of each as well as the differences between the two types of review.

  • What's in a name? The difference between a Systematic Review and a Literature Review, and why it matters by Lynn Kysh, MLIS, University of Southern California - Norris Medical Library
  • URL: https://guides.libraries.psu.edu/nursing

Study Protocol Article

A Protocol for the Use of Case Reports/Studies and Case Series in Systematic Reviews for Clinical Toxicology


  • 1 Univ Angers, CHU Angers, Univ Rennes, INSERM, EHESP, Institut de Recherche en Santé, Environnement et Travail-UMR_S 1085, Angers, France
  • 2 Department of Occupational Medicine, Epidemiology and Prevention, Donald and Barbara Zucker School of Medicine, Northwell Health, Feinstein Institutes for Medical Research, Hofstra University, Great Neck, NY, United States
  • 3 Department of Health Sciences, University of California, San Francisco and California State University, Hayward, CA, United States
  • 4 Program on Reproductive Health and the Environment, University of California, San Francisco, San Francisco, CA, United States
  • 5 Cesare Maltoni Cancer Research Center, Ramazzini Institute, Bologna, Italy
  • 6 Department of Research and Public Health, Reims Teaching Hospitals, Robert Debré Hospital, Reims, France
  • 7 CHU Angers, Univ Angers, Poisoning Control Center, Clinical Data Center, Angers, France

Introduction: Systematic reviews are routinely used to synthesize current science and evaluate the evidential strength and quality of resulting recommendations. For specific events, such as rare acute poisonings or preliminary reports of new drugs, we posit that case reports/studies and case series (human subjects research with no control group) may provide important evidence for systematic reviews. Our aim, therefore, is to present a protocol that uses rigorous selection criteria, to distinguish high quality case reports/studies and case series for inclusion in systematic reviews.

Methods: This protocol will adapt the existing Navigation Guide methodology for specific inclusion of case studies. The usual procedure for systematic reviews will be followed. Case reports/studies and case series will be specified in the search strategy and included in separate sections. Data from these sources will be extracted and, where possible, quantitatively synthesized. Criteria for integrating case reports/studies and case series into the overall body of evidence are that these studies will need to be well-documented, scientifically rigorous, and follow ethical practices. The instructions and standards for evaluating risk of bias will be based on the Navigation Guide. The risk of bias, quality of evidence, and the strength of recommendations will be assessed by two independent review teams that are blinded to each other.

Conclusion: This is a protocol specified for systematic reviews that use case reports/studies and case series to evaluate the quality of evidence and strength of recommendations in disciplines like clinical toxicology, where case reports/studies are the norm.

Introduction

Systematic reviews are routinely relied upon to qualitatively synthesize current knowledge in a subject area. These reviews are often paired with a meta-analysis for quantitative syntheses. These qualitative and quantitative summaries of pooled data collectively evaluate the quality of the evidence and the strength of the resulting research recommendations.

There currently exist several guidance documents to instruct on the rigors of systematic review methodology: (i) the Cochrane Collaboration, Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and PRISMA-P (for protocols) that offer directives on data synthesis; and (ii) the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) guidelines that establish rules for the development of scientific recommendations ( 1 – 5 ). This systematic review guidance is based predominantly on clinical studies, where randomized controlled trials (RCTs) are the gold standard. For that reason, a separate group of researchers has designed the Navigation Guide, specific to environmental health studies that are often observational ( 6 , 7 ). To date, systematic review guidelines (GRADE, PRISMA, PRISMA-P, and Navigation Guide) remove case reports/studies and case series (human subjects research with no control group) from consideration in systematic reviews, in part due to the challenges in evaluating the internal validity of these kinds of study designs. We hypothesize, however, that under certain circumstances, such as in rare acute poisonings, or preliminary reports of new drugs, some case reports and case series may contribute relevant knowledge that would be informative to systematic review recommendations. This is particularly important in clinical settings, where such evidence could potentially change our understanding of the screening, presentation, and potential treatment of rare conditions, such as poisoning from obscure toxins. The Cochrane Collaboration handbook states that “ for some rare or delayed adverse outcomes only case series or case-control studies may be available. Non-randomized studies of interventions with some study design features that are more susceptible to bias may be acceptable for evaluation of serious adverse events in the absence of better evidence, but the risk of bias must still be assessed and reported ” ( 8 ). In addition, the Cochrane Adverse Effects group has shown that case studies may be the best settings in which to observe adverse effects, especially when they are rare and acute ( 9 ). We believe that there may be an effective way to consider case reports/studies and case series in systematic reviews, specifically by developing specific criteria for their inclusion and accounting for their inherent bias.

We propose here a systematic review protocol that has been specifically developed to consider the inclusion and integration of case reports/studies and case series. Our main objective is to create a protocol that is an adaptation of the Navigation Guide ( 6 , 10 ) that presents methodology to examine high quality case reports/studies and case series through cogent inclusion and exclusion criteria. This methodology is in concordance with the Cochrane Methods for Adverse Effects for scoping reviews ( 11 ).

This protocol was prepared in accordance with the usual structured methodology for systematic reviews (PRISMA, PRISMA-P, and Navigation guide) ( 3 – 7 , 10 ). The protocol will be registered on an appropriate website, such as one of the following:

(i) The International Prospective Register of Systematic Reviews (PROSPERO) database ( https://www.crd.york.ac.uk/PROSPERO/ ) is an international database of prospectively registered systematic reviews in health and social welfare, public health, education, crime, justice, and international development, where there is a health-related outcome. It aims to provide a comprehensive listing of systematic reviews registered at inception to help avoid duplication and reduce opportunity for reporting bias by enabling comparison of the completed review with what was planned in the protocol. PROSPERO accepts registrations for systematic reviews, rapid reviews, and umbrella reviews. Key elements of the review protocol are permanently recorded and stored.

(ii) The Open Science Framework (OSF) platform ( https://osf.io/ ) is a free, open, and integrated platform that facilitates open collaboration in research science. It allows for the management and sharing of research projects at all stages of research for broad dissemination. It also enables capture of different aspects and products of the research lifecycle, from the development of a research idea, through the design of a study, the storage and analysis of collected data, to the writing and publication of reports or research articles.

(iii) The Research Registry (RR) database ( https://www.researchregistry.com/ ) is a one-stop repository for the registration of all types of research studies, from “first-in-man” case reports/studies to observational/interventional studies to systematic reviews and meta-analyses. The goal is to ensure that every study involving human participants is registered in accordance with the 2013 Declaration of Helsinki. The RR enables prospective or retrospective registrations of studies, including those types of studies that cannot be registered in existing registries. It specifically publishes systematic reviews and meta-analyses and does not register case reports/studies that are not first-in-man or animal studies.

Any significant future changes to the protocol resulting from knowledge gained during the development stages of this project will be documented in detail and a rationale for all changes will be proposed and reported in PROSPERO, OSF, or RR.

The overall protocol will differentiate itself from other known methodologies, by defining two independent teams of reviewers: a classical team and a case team. The classical team will review studies with control groups and an acceptable comparison group (case reports/studies and case series will be excluded). In effect, this team will conduct a more traditional systematic review where evidence from case reports/studies and case series are not considered. The case team will review classical studies, case reports, and case series. This case team will act as a comparison group to identify differences in systematic review conclusions due to the inclusion of evidence from case reports/studies and case series. Both teams will identify studies that meet specified inclusion criteria, conduct separate analyses and risk of bias evaluations, along with overall quality assessments, and syntheses of strengths of evidence. Each team will be blinded to the results of the other team throughout the process. Upon completion of the systematic review, results from each team will be presented, evaluated, and compared.

Patient and Public Involvement

No patients were involved.

Eligibility Criteria

Studies will be selected according to the criteria outlined below.

Study Designs

Studies of any design, reported in any language translatable to English by online programs (e.g., Google Translate), will be included at the beginning. These studies will span interventional studies with control groups (Randomized Controlled Trials: RCTs), as well as observational studies with and without exposed groups. All observational studies will be eligible for inclusion in accordance with the objectives of this systematic review. Thereafter, only the case team will include case reports/studies and case series, as specified in their search strategy. The case team will include a separate section for human subjects research that has been conducted with no control groups.

Type of Population

All types of studies examining the general adult human population or healthy adult humans will be included. Studies that involve both adults and children will also be included if data for adults are reported separately. Animal studies will be excluded for the methodological purpose of this (case reports/studies and case series) protocol given that the framework for systematic reviews in toxicology already adequately retrieves this type of toxin data on animals.

Inclusion/Exclusion Criteria

Studies of any design will be included if they fulfill all the eligibility criteria. To be integrated into the overall body of evidence, case reports/studies and case series must meet pre-defined criteria indicating that they are well-documented, scientifically rigorous, and follow ethical practices, under the CARE guidelines (for CAse REports) ( 12 , 13 ) and the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Case Reports/Studies and for Case Series ( 14 , 15 ) that classify case reports/studies in terms of completeness, transparency, and data analysis. Studies that were conducted using unethical practices will be excluded.

Type of Exposure/Intervention

Either the prescribed treatment or described exposure to a chemical substance (toxin/toxicant) will be detailed here.

Type of Comparators

In this protocol we plan to compare two review methodologies: one will include and the other will exclude high quality case reports/studies and case series. The comparator will be (the presence or absence of) an available control group that has been specified and is acceptable scientifically and ethically.

Type of Outcomes

The outcome, mortality or morbidity related to the toxicological exposure, will be detailed here.

Information Sources and Search Strategy

There will be no design, date or language limitations applied to the search strategy. A systematic search in electronic academic databases, electronic grey literature, organizational websites, and internet search engines will be performed. We will search at least the following major databases:

- Electronic academic databases : Pubmed, Web of Sciences, Toxline, Poisondex, and databases specific to case reports/studies and case series (e.g., PMC, Scopus, Medline) ( 13 )

- Electronic grey literature databases: OpenGrey (http://www.opengrey.eu/), Grey Literature Report (http://greylit.org/)

- Organizational websites: AHRQ Patient Safety Network (https://psnet.ahrq.gov/webmm), World Health Organization (www.who.int)

- Internet search engines: Google (https://www.google.com/), Google Scholar (https://scholar.google.com/).
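
For reproducibility, the scripted portion of a database search can be logged together with the query string and the search date. The sketch below is purely illustrative and not part of this protocol: it assumes Python with the requests package and uses NCBI's public E-utilities esearch endpoint; the PubMed query itself is a placeholder, not the protocol's actual search strategy.

```python
# Illustrative only: a reproducible PubMed query via NCBI E-utilities.
# The search terms below are placeholders, not the protocol's actual strategy.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = (
    '("case reports"[Publication Type] OR "case series"[Title/Abstract]) '
    'AND (poisoning[MeSH Terms] OR toxicity[Title/Abstract])'
)

params = {
    "db": "pubmed",
    "term": query,
    "retmax": 200,      # number of PMIDs to return per request
    "retmode": "json",
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print("Records found:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```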

Study Records

Following the systematic search of all the databases above, each of the two independent review teams (the classical team and the case team) will separately upload its literature search results, in accordance with the eligibility criteria, to the systematic review management software Covidence, a primary screening and data extraction tool (16).

All study records identified during the search will be downloaded, and duplicate records will be identified and deleted. Thereafter, two research team members will independently screen the titles and abstracts (step 1) and then the full texts (step 2) of potentially relevant studies for inclusion. If necessary, information will be requested from the publication authors to resolve questions about eligibility. Finally, any disagreements between the two research team members will be resolved first by discussion and then, if needed, by consulting a third research team member for arbitration.
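
The de-duplication step described above can also be scripted so that the number of records removed is documented alongside the Covidence workflow. The following is a minimal sketch only, assuming Python and a list of records with hypothetical title and doi fields exported from the search; it is not part of the protocol.

```python
# Minimal de-duplication sketch (illustrative; the field names are hypothetical).
# Records are matched on DOI when available and on a normalized title otherwise.
import re

def normalize_title(title):
    """Lowercase a title and collapse punctuation/whitespace for matching."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Return the unique records and the number of duplicates removed."""
    seen_dois, seen_titles = set(), set()
    unique = []
    for record in records:
        doi = (record.get("doi") or "").lower()
        title_key = normalize_title(record.get("title", ""))
        if (doi and doi in seen_dois) or (title_key and title_key in seen_titles):
            continue  # duplicate of an earlier record
        if doi:
            seen_dois.add(doi)
        if title_key:
            seen_titles.add(title_key)
        unique.append(record)
    return unique, len(records) - len(unique)

records = [
    {"title": "Severe toxin X poisoning: a case report", "doi": "10.1000/example1"},
    {"title": "Severe Toxin X Poisoning: A Case Report.", "doi": ""},  # same paper, no DOI
]
unique_records, n_removed = deduplicate(records)
print(f"Kept {len(unique_records)} records, removed {n_removed} duplicate(s)")
```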

If a study record identified during the search was authored by a reviewing research team member, or that team member participated in the identified study, that study record will be re-assigned to another reviewing team member.

Data Collection Process, Items Included, and Prioritization if Needed

All reviewing team members will use standardized forms or software (e.g., Covidence), and each review member will independently extract the data from included studies. If possible, the extracted data will be synthesized numerically. To ensure consistency across reviewers, calibration exercises (reviewer training) will be conducted prior to starting the reviews. Extracted information will include the minimum study characteristics (study authors, study year, study country, participants, intervention/exposure, outcome), study design (summary of study design, comparator, models used, and effect estimate measure) and study context (e.g., data on simultaneous exposure to other risk factors that would be relevant contributors to morbidity or mortality). As specified in the section on study records, a third review team member will resolve any conflicts that arise during data extraction that are not resolved by consensus between the two initial data extractors.
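
One way to keep the minimum data items consistent between the two independent extractors is to encode the extraction form as a structured record. The dataclass below is a sketch based on the items listed above, assuming Python; the field names are illustrative choices, not a prescribed schema.

```python
# Illustrative extraction form mirroring the minimum data items listed above.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExtractionRecord:
    # Study characteristics
    authors: str
    year: int
    country: str
    participants: str              # e.g., "3 adults, occupational exposure"
    intervention_exposure: str     # toxin/toxicant or treatment described
    outcome: str                   # mortality or morbidity endpoint
    # Study design
    design: str                    # e.g., "case series", "RCT", "cohort"
    comparator: Optional[str] = None
    effect_estimate_measure: Optional[str] = None  # e.g., "OR", "RR", "MD"
    # Study context
    co_exposures: list[str] = field(default_factory=list)
    funding_or_conflicts: Optional[str] = None

# Hypothetical example record (invented for illustration):
record = ExtractionRecord(
    authors="Doe et al.",
    year=2020,
    country="France",
    participants="3 adults presenting to a poison control centre",
    intervention_exposure="Accidental ingestion of toxin X",
    outcome="Morbidity (hepatotoxicity)",
    design="case series",
)
print(record)
```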

Data on potential conflict of interest for included studies, as well as financial disclosures and funding sources, will also be extracted. If no financial statement or conflict of interest declaration is available, the names of the authors will be searched in other studies published within the previous 36 months and in other publicly available declarations of interests, for funding information ( 17 , 18 ).

Risk of Bias Assessment

To assess the risk of bias within included studies, the internal validity of potential studies will be assessed using the Navigation Guide tool (6, 19), which covers nine domains of bias for human studies: (a) source population representation; (b) blinding; (c) exposure or intervention assessment; (d) outcome assessment; (e) confounding; (f) incomplete outcome data; (g) selective outcome reporting; (h) conflict of interest; and (i) other sources of bias. For each section of the tool, the procedures undertaken for each study will be described and the risk of bias will be rated as “low risk,” “probably low risk,” “probably high risk,” “high risk,” or “not applicable.” Risk of bias will be assessed at the level of the individual study and of the entire body of evidence. Most of the text of these instructions and criteria for judging risk of bias has been adopted verbatim or adapted from one of the latest Navigation Guide systematic reviews used by the WHO/ILO (6, 19, 20).
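
To keep the two teams' judgements comparable, the nine Navigation Guide domains and the permitted ratings could be encoded once and reused for every study. This is a minimal bookkeeping sketch, assuming Python; the domain labels follow the list above and the rating scale follows the protocol text, but the code itself is illustrative and not part of the protocol.

```python
# Bookkeeping sketch for per-study risk of bias ratings (illustrative only).
DOMAINS = [
    "source population representation",
    "blinding",
    "exposure or intervention assessment",
    "outcome assessment",
    "confounding",
    "incomplete outcome data",
    "selective outcome reporting",
    "conflict of interest",
    "other sources of bias",
]

RATINGS = {"low risk", "probably low risk", "probably high risk", "high risk", "not applicable"}

def rate_study(judgements: dict[str, str]) -> dict[str, str]:
    """Validate that every domain is rated with one of the permitted ratings."""
    missing = [d for d in DOMAINS if d not in judgements]
    invalid = {d: r for d, r in judgements.items() if r not in RATINGS}
    if missing or invalid:
        raise ValueError(f"missing domains: {missing}; invalid ratings: {invalid}")
    return judgements

# Example usage for a hypothetical case series:
example = rate_study({d: "probably low risk" for d in DOMAINS} | {"blinding": "not applicable"})
print(example["blinding"])
```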

For case reports/studies and case series, the text from these instructions and criteria for judging risk of bias has been adopted verbatim or adapted from one of the latest Navigation Guide systematic reviews ( 21 ), and is given in Supplementary Material . Specific criteria are listed below. To ensure consistency across reviewers, calibration exercises (reviewer training) will be conducted prior to starting the risk of bias assessments for case reports/studies and case series.

Are the Study Groups at Risk of Not Representing Their Source Populations in a Manner That Might Introduce Selection Bias?

The source population is viewed as the population for which study investigators are targeting their study question of interest.

Examples of considerations for this risk of bias domain include: (1) the context of the case report; (2) the level of detail reported for participant inclusion/exclusion (including details from previously published papers referenced in the article), with inclusion of all relevant consecutive patients in the period considered (14, 15); and (3) exclusion rates, attrition rates, and the reasons for them.

Were Exposure/Intervention (Toxic, Treatment) Assessment Methods Lacking Accuracy?

The following list of considerations represents a collection of factors, proposed by experts in various fields, that may potentially influence the internal validity of the exposure assessment in a systematic manner (not those that may randomly affect overall study results). These should be interpreted only as suggested considerations and should not be viewed as a score or a checklist. Considering that there are no controls in such designs, this should be evaluated carefully to be sure the report genuinely contributes to existing knowledge.

List of Considerations:

Possible sources of exposure assessment metrics:

1) Identification of the exposure

2) Dose evaluation

3) Toxicological values

4) Clinical effects *

5) Biological effects *

6) Treatments given (dose, timing, route)

* Some clinical and biological effects might be related to exposure

For each, overall considerations include:

1) What is the quality of the source of the metric being used?

2) Is the exposure measured in the study a surrogate for the exposure?

3) What was the temporal coverage (i.e., short or long-term exposure)?

4) Did the analysis account for prediction uncertainty?

5) How were missing data accounted for, and were any data imputations incorporated?

6) Were sensitivity analyses performed?

Were Outcome Assessment Methods Lacking Accuracy?

This item is similar to the corresponding Navigation Guide instructions, which require an assessment of the accuracy of the measured outcome.

Was Potential Confounding Inadequately Incorporated?

This is a very important issue for case reports/studies and case series. Case reports/studies and case series do not include controls; therefore, to be considered in a systematic review, these types of studies will need to be well-documented with respect to treatment or other contextual factors that may explain or influence the outcome. Prior to initiating study screening, review team members should collectively generate a list of potential confounders based on expert opinion and knowledge gathered from the scientific literature:

Tier I: Important confounders

• Other associated treatment (i.e., intoxication, insufficient dose, history, or context)

• Medical history

Tier II: Other potentially important confounders and effect modifiers:

• Age, sex, country.

Were Incomplete Outcome Data Inadequately Addressed?

This item is similar to actual Navigation Guide instructions, though it may be very unlikely that outcome data would be incomplete in published case reports/studies and case series.

Does the Study Report Appear to Have Selective Outcome Reporting?

This item is similar to actual Navigation Guide instructions, though it may be very unlikely that there would be selective outcome reporting in published case reports/studies and case series.

Did the Study Receive Any Support From a Company, Study Author, or Other Entity Having a Financial Interest?

This item is similar to actual Navigation Guide instructions.

Did the Study Appear to Have Other Problems That Could Put It at a Risk of Bias?

Data Synthesis Criteria and Summary Measures, if Feasible

Meta-analyses will be conducted using a random-effects model if studies are sufficiently homogeneous in terms of design and comparator. For dichotomous outcomes, effects of associations will be determined using risk ratios (RR) or odds ratios (OR) with 95% confidence intervals (CI). Continuous outcomes will be analyzed using weighted mean differences (with 95% CI) or standardized mean differences (with 95% CI) if different measurement scales are used. Skewed data and non-quantitative data will be presented descriptively. Where data are missing, a request will be made to the original authors of the study to obtain the relevant missing data. If these data cannot be obtained, an imputation method will be applied. The statistical heterogeneity of the studies will be assessed using the chi-squared test (significance level: 0.1) and the I² statistic (0–40%: might not be important; 30–60%: may represent moderate heterogeneity; 50–90%: may represent substantial heterogeneity; 75–100%: considerable heterogeneity). If heterogeneity is present, an attempt will be made to explain its source through subgroup or sensitivity analyses.
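
To make the planned quantitative synthesis concrete, the sketch below computes a fixed-effect (inverse-variance) pooled log odds ratio, Cochran's Q, the I² statistic, and a DerSimonian-Laird random-effects estimate from made-up study data. It is a methodological illustration only, assuming Python with NumPy and SciPy; the actual analyses will be run in RevMan as described below.

```python
# Illustrative pooling of log odds ratios (made-up data, not study results).
import numpy as np
from scipy import stats

# Hypothetical per-study effect estimates: log odds ratios and their variances.
log_or = np.array([0.45, 0.10, 0.68, 0.30])
var_log_or = np.array([0.09, 0.12, 0.20, 0.07])

# Fixed-effect (inverse-variance) pooled estimate.
w = 1.0 / var_log_or
pooled_fixed = np.sum(w * log_or) / np.sum(w)

# Heterogeneity: Cochran's Q, its p-value, and the I^2 statistic.
q = np.sum(w * (log_or - pooled_fixed) ** 2)
df = len(log_or) - 1
p_het = stats.chi2.sf(q, df)
i_squared = max(0.0, (q - df) / q) * 100

# DerSimonian-Laird between-study variance and random-effects estimate.
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (var_log_or + tau2)
pooled_random = np.sum(w_re * log_or) / np.sum(w_re)
se_random = np.sqrt(1.0 / np.sum(w_re))
ci_low, ci_high = pooled_random - 1.96 * se_random, pooled_random + 1.96 * se_random

print(f"I^2 = {i_squared:.1f}%, Q p-value = {p_het:.3f}")
print(f"Random-effects OR = {np.exp(pooled_random):.2f} "
      f"(95% CI {np.exp(ci_low):.2f} to {np.exp(ci_high):.2f})")
```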

Finally, the meta-analysis will be conducted in the latest version of the statistical software RevMan. The Mantel-Haenszel method will be used for the fixed-effects model if tests of heterogeneity are not significant. If statistical heterogeneity is observed (I² ≥ 50% or p < 0.1), the random-effects model will be chosen. If quantitative synthesis is not feasible (e.g., because of considerable heterogeneity), a meta-analysis will not be performed and a narrative, qualitative summary of the study findings will be provided.
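
For dichotomous outcomes arranged as 2×2 tables, the Mantel-Haenszel pooled odds ratio mentioned above can be written down directly. The sketch below uses made-up counts and assumes Python with NumPy; it is included only to make the formula concrete.

```python
# Mantel-Haenszel pooled odds ratio from 2x2 tables (made-up counts).
import numpy as np

# Each row: events_exposed (a), non_events_exposed (b),
#           events_control (c), non_events_control (d)
tables = np.array([
    [12, 38, 6, 44],
    [20, 80, 10, 90],
    [5, 15, 3, 17],
], dtype=float)

a, b, c, d = tables.T
n = a + b + c + d

# OR_MH = sum(a*d/n) / sum(b*c/n)
or_mh = np.sum(a * d / n) / np.sum(b * c / n)
print(f"Mantel-Haenszel pooled OR = {or_mh:.2f}")
```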

Separate analyses will be conducted for the studies that contain control groups using expected mortality/morbidity, in order to include them in the quantitative synthesis of case reports/studies and case series.

If quantitative synthesis is not appropriate, a systematic narrative synthesis will be provided with information presented in the text and tables to summarize and explain the characteristics and findings of the included studies. The narrative synthesis will explore the relationship and findings both within and between the included studies.

Possible Additional Analyses

If feasible, subgroup analyses will be used to explore possible sources of heterogeneity if there is evidence for differences in effect estimates by country, study design, or patient characteristics (e.g., sex and age). In addition, sensitivity analyses will be performed to explore sources of heterogeneity, for example, published vs. unpublished data, full-text publications vs. abstracts, and risk of bias (by omitting studies judged to be at high risk of bias).

Overall Quality of Evidence Assessment

The quality of evidence will be assessed using an adapted version of the Evidence Quality Assessment Tool in the Navigation Guide. This tool is based on the GRADE approach (1). The assessment will be conducted by the two teams, again blinded to each other: one with the results of the case reports/studies and case series synthesis, and one without.

Data synthesis will be conducted independently by the classical and case teams. Evidence ratings will start at “high” for randomized controlled studies, “moderate” for observational studies, and “low” for case reports/studies and case series. It is important to be clear that sufficient levels of evidence cannot be achieved without study comparators. With regard to case reports/studies and case series, we classify these as starting at the lowest level of evidence, and therefore we cannot consider evidence higher than “low” for these kinds of studies. Complete instructions for making quality of evidence judgments are presented in the Supplementary Material.

Synthesis of Strength of Evidence

The standard Navigation Guide methodology will be applied to rate the strength of recommendations. The classical and case teams, blinded to each other’s results throughout the process, will independently assess the strength of evidence. The evidence quality ratings will be translated into strength of evidence for each population based on a combination of four criteria: (a) quality of the body of evidence; (b) direction of effect; (c) confidence in the effect; and (d) other compelling attributes of the data that may influence certainty. The ratings for strength of evidence will be “sufficient evidence of harmfulness,” “limited evidence of harmfulness,” “inadequate evidence of harmfulness,” and “evidence of lack of harmfulness.”

Once we complete the synthesis of case reports/studies and case series, findings of this separate evidence stream will only be considered if RCTs and observational studies are not available. They will not be used to upgrade or downgrade the strength of other evidence streams.

To the best of our knowledge, this protocol is one of the first to specifically address the incorporation of case reports/studies and case series in a systematic review (9). The protocol was adapted from the Navigation Guide with the intent of integrating case reports/studies and case series into systematic review recommendations, while following traditional systematic review methodology to the greatest extent possible. To be included, these case reports/studies and case series will need to be well-documented, scientifically rigorous, and follow ethical practices. In addition, we believe that some case reports/studies and case series may bring relevant knowledge that should be considered in systematic review recommendations when data from RCTs and observational studies are not available, especially when even a small number of studies report an important and possibly causal association in an epidemic or a side effect of a newly marketed medicine. Our methodology will be the first to effectively incorporate case reports/studies and case series into systematic reviews that synthesize evidence for clinicians, researchers, and drug developers. These types of studies will be incorporated mostly through paper selection and risk of bias assessments. In addition, we will conduct meta-analyses if the eligible studies provide sufficient data.

This protocol has limitations related primarily to the constraints of case reports/studies and case series, which are descriptive studies. In addition, a case series is subject to selection bias because the clinician or researcher selects the cases themselves, and the cases may represent outliers in clinical practice. Furthermore, this kind of study does not have a control group, so it is not possible to compare what happens to other people who do not have the disease or do not receive the treatment. These sources of bias mean that reported results may not be generalizable to a larger patient population and cannot generate information on incidence or prevalence rates and ratios (22, 23). However, it is important to note that promoting the need to synthesize these types of studies (case reports/studies and case series) in a formal systematic review should not deter or delay immediate action when a few small studies report a plausible causal association between exposure and disease, such as in the event of an epidemic or a side effect of a newly marketed medicine (23). In this study protocol, we will not consider animal studies that might provide relevant toxicological information, because we are focusing on study areas where a paucity of information exists. Finally, we must note that case reports/studies and case series do not provide independent proof; therefore, the findings of this separate evidence stream (case reports/studies and case series) will only be considered if evidence from RCTs and observational studies is not available. Case reports/studies and case series will not be used to upgrade or downgrade the strength of other evidence streams. In any case, it is very important to remember that these kinds of studies (case reports/studies and case series) are there to quickly alert agencies to the need to take immediate action to prevent further harm.

Despite these limitations, case reports/studies and case series are a first line of evidence because they are where new issues and ideas emerge (hypothesis-generating) and can contribute to a change in clinical practice ( 23 – 25 ). We therefore believe that data from case reports/studies and case series, when synthesized and presented with completeness and transparency, may provide important details that are relevant to systematic review recommendations.

Author Contributions

AD and GS designed the protocol study. JL, TW, and DM reviewed it. MF, ALG, RV, NC, CB, GLR, MD, ML, and AN made significant improvements. AN and AD wrote the manuscript. GS improved the language. All authors reviewed and commented on the final manuscript, and read and approved the final manuscript for publication.

Funding

This project was supported by the French Pays de la Loire region and Angers Loire Métropole, the University of Angers, and the Centre Hospitalo-Universitaire CHU Angers. The project is entitled TEC-TOP (no award/grant number).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmed.2021.708380/full#supplementary-material

1. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. (2008) 336:924–6. doi: 10.1136/bmj.39489.470347.AD

2. Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al. (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.1 (updated September 2020) . Cochrane (2020). Available online at: www.training.cochrane.org/handbook

3. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. (2009) 62:e1–34. doi: 10.1016/j.jclinepi.2009.06.006

4. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. (2015) 4:1. doi: 10.1186/2046-4053-4-1

5. Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. (2015) 350:g7647. doi: 10.1136/bmj.g7647

6. Woodruff TJ, Sutton P, Navigation Guide Work Group. An evidence-based medicine methodology to bridge the gap between clinical and environmental health sciences. Health Aff (Millwood). (2011) 30:931–7. doi: 10.1377/hlthaff.2010.1219

7. Woodruff TJ, Sutton P. The Navigation Guide systematic review methodology: a rigorous and transparent method for translating environmental health science into better health outcomes. Environ Health Perspect. (2014) 122:1007–14. doi: 10.1289/ehp.1307175

8. Reeves BC, Deeks JJ, Higgins JPT, Shea B, Tugwell P, Wells GA. Chapter 24: Including non-randomized studies on intervention effects. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.1 (updated September 2020). Cochrane (2020). Available online at: www.training.cochrane.org/handbook

9. Loke YK, Price D, Herxheimer A, the Cochrane Adverse Effects Methods Group. Systematic reviews of adverse effects: framework for a structured approach. BMC Med Res Methodol. (2007) 7:32. doi: 10.1186/1471-2288-7-32

10. Lam J, Koustas E, Sutton P, Johnson PI, Atchley DS, Sen S, et al. The Navigation Guide - evidence-based medicine meets environmental health: integration of animal and human evidence for PFOA effects on fetal growth. Environ Health Perspect. (2014) 122:1040–51. doi: 10.1289/ehp.1307923

11. Peryer G, Golder S, Junqueira DR, Vohra S, Loke YK. Chapter 19: Adverse effects. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.1 (updated September 2020) . Cochrane (2020). Available online at: www.training.cochrane.org/handbook

12. Gagnier JJ, Kienle G, Altman DG, Moher D, Sox H, Riley D, et al. The CARE guidelines: consensus-based clinical case reporting guideline development. J Med Case Rep. (2013) 7:223. doi: 10.1186/1752-1947-7-223

13. Riley DS, Barber MS, Kienle GS, Aronson JK, von Schoen-Angerer T, Tugwell P, et al. CARE guidelines for case reports: explanation and elaboration document. J Clin Epidemiol. (2017) 89:218–35. doi: 10.1016/j.jclinepi.2017.04.026

14. Moola S, Munn Z, Tufanaru C, Aromataris E, Sears K, Sfetcu R, et al. Chapter 7: Systematic reviews of etiology and risk. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI (2020). doi: 10.46658/JBIMES-20-08. Available online at: https://synthesismanual.jbi.global

15. Munn Z, Barker TH, Moola S, Tufanaru C, Stern C, McArthur A, et al. Methodological quality of case series studies: an introduction to the JBI critical appraisal tool. JBI Evidence Synthesis. (2020) 18:2127–33. doi: 10.11124/JBISRIR-D-19-00099

16. Covidence systematic review software. Veritas Health Innovation, Melbourne, Australia. Available online at: www.covidence.org; https://support.covidence.org/help/how-can-i-cite-covidence

17. Drazen JM, de Leeuw PW, Laine C, Mulrow C, DeAngelis CD, Frizelle FA, et al. Toward More Uniform Conflict Disclosures: The Updated ICMJE Conflict of Interest Reporting Form. JAMA. (2010) 304:212. doi: 10.1001/jama.2010.918

18. Drazen JM, Weyden MBVD, Sahni P, Rosenberg J, Marusic A, Laine C, et al. Uniform Format for Disclosure of Competing Interests in ICMJE Journals. N Engl J Med. (2009) 361:1896–7. doi: 10.1056/NEJMe0909052

19. Johnson PI, Sutton P, Atchley DS, Koustas E, Lam J, Sen S, et al. The navigation guide—evidence-based medicine meets environmental health: systematic review of human evidence for PFOA effects on fetal growth. Environ Health Perspect. (2014) 122:1028–39. doi: 10.1289/ehp.1307893

20. Descatha A, Sembajwe G, Baer M, Boccuni F, Di Tecco C, Duret C, et al. WHO/ILO work-related burden of disease and injury: protocol for systematic reviews of exposure to long working hours and of the effect of exposure to long working hours on stroke. Environ Int. (2018) 119:366–78. doi: 10.1016/j.envint.2018.06.016

21. Lam J, Lanphear BP, Bellinger D, Axelrad DA, McPartland J, Sutton P, et al. Developmental PBDE exposure and IQ/ADHD in childhood: a systematic review and meta-analysis. Environ Health Perspect. (2017) 125:086001. doi: 10.1289/EHP1632

22. Hay JE, Wiesner RH, Shorter RG, LaRusso NF, Baldus WP. Primary sclerosing cholangitis and celiac disease. Ann Intern Med. (1988) 109:713–7. doi: 10.7326/0003-4819-109-9-713

23. Nissen T, Wynn R. The clinical case report: a review of its merits and limitations. BMC Res Notes. (2014) 7:264. doi: 10.1186/1756-0500-7-264

24. Buonfrate D, Requena-Mendez A, Angheben A, Muñoz J, Gobbi F, Van Den Ende J, et al. Severe strongyloidiasis: a systematic review of case reports. BMC Infect Dis. (2013) 13:78. doi: 10.1186/1471-2334-13-78

25. Graham R, Mancher M, Wolman DM, Greenfield S, Steinberg E, Committee on Standards for Developing Trustworthy Clinical Practice Guidelines, et al. Clinical Practice Guidelines We Can Trust . Washington, D.C.: National Academies Press (2011).

Keywords: toxicology, epidemiology, public health, protocol, systematic review, case reports/studies, case series

Citation: Nambiema A, Sembajwe G, Lam J, Woodruff T, Mandrioli D, Chartres N, Fadel M, Le Guillou A, Valter R, Deguigne M, Legeay M, Bruneau C, Le Roux G and Descatha A (2021) A Protocol for the Use of Case Reports/Studies and Case Series in Systematic Reviews for Clinical Toxicology. Front. Med. 8:708380. doi: 10.3389/fmed.2021.708380

Received: 19 May 2021; Accepted: 11 August 2021; Published: 06 September 2021.

Copyright © 2021 Nambiema, Sembajwe, Lam, Woodruff, Mandrioli, Chartres, Fadel, Le Guillou, Valter, Deguigne, Legeay, Bruneau, Le Roux and Descatha. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Aboubakari Nambiema, aboubakari.nambiema@univ-angers.fr ; orcid.org/0000-0002-4258-3764

Systematic Literature Review or Literature Review?


As a researcher, you may be required to conduct a literature review. But what kind of review do you need to complete? Is it a systematic literature review or a standard literature review? In this article, we’ll outline the purpose of a systematic literature review, the difference between literature review and systematic review, and other important aspects of systematic literature reviews.

What is a Systematic Literature Review?

The purpose of a systematic literature review is simple: essentially, it is to provide a high-level synthesis of the evidence on a particular research question. This question, in and of itself, is highly focused to match the review of the literature related to the topic at hand, for example, a focused question related to medical or clinical outcomes.

The components of a systematic literature review are quite different from the standard literature review research theses that most of us are used to (more on this below). And because of the specificity of the research question, typically a systematic literature review involves more than one primary author. There’s more work related to a systematic literature review, so it makes sense to divide the work among two or three (or even more) researchers.

Your systematic literature review will follow very clear and defined protocols that are decided on prior to any review. This involves extensive planning, and a deliberately designed search strategy that is in tune with the specific research question. Every aspect of a systematic literature review, including the research protocols, which databases are used, and dates of each search, must be transparent so that other researchers can be assured that the systematic literature review is comprehensive and focused.

Most systematic literature reviews originated in the world of medical science. Now, they also address any evidence-based research question. In addition to the focus and transparency of these types of reviews, additional aspects of a quality systematic literature review include:

  • Clear and concise review and summary
  • Comprehensive coverage of the topic
  • Accessibility and equality of the research reviewed

Systematic Review vs Literature Review

The difference between literature review and systematic review comes back to the initial research question. Whereas the systematic review is very specific and focused, the standard literature review is much more general. The components of a literature review, for example, are similar to any other research paper. That is, it includes an introduction, description of the methods used, a discussion and conclusion, as well as a reference list or bibliography.

A systematic review, however, includes entirely different components that reflect the specificity of its research question, and the requirement for transparency and inclusion. For instance, the systematic review will include:

  • Eligibility criteria for included research
  • A description of the systematic research search strategy
  • An assessment of the validity of reviewed research
  • Interpretations of the results of research included in the review

As you can see, contrary to the general overview or summary of a topic, the systematic literature review includes much more detail and work to compile than a standard literature review. Indeed, it can take years to conduct and write a systematic literature review. But the information that practitioners and other researchers can glean from a systematic literature review is, by its very nature, exceptionally valuable.

This is not to diminish the value of the standard literature review. The importance of literature reviews in research writing is discussed in this article . It’s just that the two types of research reviews answer different questions, and, therefore, have different purposes and roles in the world of research and evidence-based writing.

Systematic Literature Review vs Meta Analysis

It would be understandable to think that a systematic literature review is similar to a meta-analysis. But, whereas a systematic review can include several research studies to answer a specific question, a meta-analysis statistically combines the results of different studies to produce a single pooled estimate and to examine any inconsistencies or discrepancies between them. For more about this topic, check out the Systematic Review vs. Meta-Analysis article.




Traditional reviews vs. systematic reviews

Posted on 3rd February 2016 by Weyinmi Demeyin

Millions of articles are published yearly (1), making it difficult for clinicians to keep abreast of the literature. Reviews of the literature are necessary in order to provide clinicians with accurate, up-to-date information to ensure appropriate management of their patients. Reviews usually involve summaries and synthesis of primary research findings on a particular topic of interest and can be grouped into two main categories: the ‘traditional’ review and the ‘systematic’ review, with major differences between them.

Traditional reviews provide a broad overview of a research topic with no clear methodological approach (2). Information is collected and interpreted unsystematically, with subjective summaries of findings. Authors aim to describe and discuss the literature from a contextual or theoretical point of view. Although such reviews may be conducted by topic experts, they can be subject to bias due to preconceived ideas or conclusions.

Systematic reviews are overviews of the literature undertaken by identifying, critically appraising and synthesising the results of primary research studies using an explicit, methodological approach (3). They aim to summarise the best available evidence on a particular research topic.

The main differences between traditional reviews and systematic reviews are summarised below in terms of the following characteristics: Authors, Study protocol, Research question, Search strategy, Sources of literature, Selection criteria, Critical appraisal, Synthesis, Conclusions, Reproducibility, and Update.

Traditional reviews

  • Authors: One or more authors usually experts in the topic of interest
  • Study protocol: No study protocol
  • Research question: Broad to specific question, hypothesis not stated
  • Search strategy: No detailed search strategy, search is probably conducted using keywords
  • Sources of literature: Not usually stated and non-exhaustive, usually well-known articles. Prone to publication bias
  • Selection criteria: No specific selection criteria, usually subjective. Prone to selection bias
  • Critical appraisal: Variable evaluation of study quality or method
  • Synthesis: Often qualitative synthesis of evidence
  • Conclusions: Sometimes evidence based but can be influenced by author’s personal belief
  • Reproducibility: Findings cannot be reproduced independently as conclusions may be subjective
  • Update: Cannot be continuously updated

Systematic reviews

  • Authors: Two or more authors are involved in good quality systematic reviews, may comprise experts in the different stages of the review
  • Study protocol: Written study protocol which includes details of the methods to be used
  • Research question: Specific question which may have all or some of PICO components (Population, Intervention, Comparator, and Outcome). Hypothesis is stated
  • Search strategy: Detailed and comprehensive search strategy is developed
  • Sources of literature: List of databases, websites and other sources of included studies are listed. Both published and unpublished literature are considered
  • Selection criteria: Specific inclusion and exclusion criteria
  • Critical appraisal: Rigorous appraisal of study quality
  • Synthesis: Narrative, quantitative or qualitative synthesis
  • Conclusions: Conclusions drawn are evidence based
  • Reproducibility: Accurate documentation of method means results can be reproduced
  • Update: Systematic reviews can be periodically updated to include new evidence

Decisions and health policies about patient care should be evidence based in order to provide the best treatment for patients. Systematic reviews provide a means of systematically identifying and synthesising the evidence, making it easier for policy makers and practitioners to assess such relevant information and hopefully improve patient outcomes.

  • Fletcher RH, Fletcher SW. Evidence-Based Approach to the Medical Literature. Journal of General Internal Medicine. 1997; 12(Suppl 2):S5-S14. doi:10.1046/j.1525-1497.12.s2.1.x. Available from:  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1497222/
  • Rother ET. Systematic literature review X narrative review. Acta paul. enferm. [Internet]. 2007 June [cited 2015 Dec 25]; 20(2): v-vi. Available from: http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-21002007000200001&lng=en. http://dx.doi.org/10.1590/S0103-21002007000200001
  • Khan KS, Ter Riet G, Glanville J, Sowden AJ, Kleijnen J. Undertaking systematic reviews of research on effectiveness: CRD’s guidance for carrying out or commissioning reviews. NHS Centre for Reviews and Dissemination; 2001.



Comments on Traditional reviews vs. systematic reviews

The information is very much valuable; a lot is indeed expected in order to master systematic review.

Thank you very much for the information here. My question is: is it possible for me to do a systematic review which is not directed toward patients but just a specific population? To be specific, can I do a systematic review on the mental health needs of students?

Hi Rosemary, I wonder whether it would be useful for you to look at Module 1 of the Cochrane Interactive Learning modules. This is a free module, open to everyone (you will just need to register for a Cochrane account if you don’t already have one). This guides you through conducting a systematic review, with a section specifically around defining your research question, which I feel will help you in understanding your question further. Head to this link for more details: https://training.cochrane.org/interactivelearning

I wonder if you have had a search on the Cochrane Library as yet, to see what Cochrane systematic reviews already exist? There is one review, titled “Psychological interventions to foster resilience in healthcare students” which may be of interest: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD013684/full You can run searches on the library by the population and intervention you are interested in.

I hope these help you start in your investigations. Best wishes. Emma.

Is a systematic review valid if there is only one author?

Hi Alex, so sorry for the delay in replying to you. Yes, that is a very good point. I have copied a paragraph from the Cochrane Handbook here, which does say that, for a Cochrane Review, you should have more than one author.

“Cochrane Reviews should be undertaken by more than one person. In putting together a team, authors should consider the need for clinical and methodological expertise for the review, as well as the perspectives of stakeholders. Cochrane author teams are encouraged to seek and incorporate the views of users, including consumers, clinicians and those from varying regions and settings to develop protocols and reviews. Author teams for reviews relevant to particular settings (e.g. neglected tropical diseases) should involve contributors experienced in those settings”.

Thank you for the discussion point, much appreciated.

Hello, I’d like to ask you a question: what’s the difference between a systematic review and a systematized review? In addition, if the screening process of the review was carried out by only one author, is it still a systematic review or is it a systematized review? Thanks

Hi. This article from Grant & Booth is a really good one to look at explaining different types of reviews: https://onlinelibrary.wiley.com/doi/10.1111/j.1471-1842.2009.00848.x It includes Systematic Reviews and Systematized Reviews. In answer to your second question, have a look at this Chapter from the Cochrane handbook. It covers the question about ‘Who should do a systematic review’. https://training.cochrane.org/handbook/current/chapter-01

A really relevant part of this chapter is this: “Systematic reviews should be undertaken by a team. Indeed, Cochrane will not publish a review that is proposed to be undertaken by a single person. Working as a team not only spreads the effort, but ensures that tasks such as the selection of studies for eligibility, data extraction and rating the certainty of the evidence will be performed by at least two people independently, minimizing the likelihood of errors.”

I hope this helps with the question. Best wishes. Emma.




Systematic Review vs. Literature Review: Some Essential Differences

Most budding researchers are confused about the difference between a systematic review and a literature review. As a PhD student or early career researcher, you must by now be well versed with the fact that the literature review is the most important aspect of any scientific research, without which a study cannot be commenced. However, ‘literature review’ is in itself an umbrella term, and there are several types of reviews, such as systematic literature reviews, that you may need to perform during your academic publishing journey, based upon their specific relevance to each study.

Your research goal, approach, and design will ultimately influence your choice between a systematic review and a literature review. Apart from the systematic literature review, some other common types of literature review are (1):

  • Narrative literature review – used to identify gaps in the existing knowledge base  
  • Scoping literature review – used to identify the scope of a particular study  
  • Integrative literature review – used to generate secondary data that upon integration can be used to define new frameworks and perspectives  
  • Theoretical literature review – used to pool all kinds of theories associated with a particular concept  

The most commonly used form of review, however, is the systematic literature review. Compared to the other types of literature reviews described above, this one requires a more rigorous and well-defined approach. The systematic literature review can be divided into two main categories: meta-analysis and meta-synthesis. Meta-analysis involves identifying patterns and relationships within the data by using statistical procedures. Meta-synthesis, on the other hand, is concerned with integrating the findings of multiple qualitative research studies and does not necessarily need statistical procedures.


Difference between systematic review and literature review

In spite of this basic understanding, however, there might still be a lot of confusion when it comes to choosing between a systematic review and a literature review of any other kind. Since these two types of reviews serve a similar purpose, they are often used interchangeably, and the difference between a systematic review and a literature review is overlooked. In order to ease this confusion and smooth the decision-making process, it is essential to have a closer look at a systematic review vs. literature review and the differences between them (2, 3):

Tips to keep in mind when performing a literature review

While the similarities and differences between a systematic review and a literature review illustrated above might be helpful as an overview, here are some additional pointers that you can keep in mind while performing a review for your research study (4):

  • Check the authenticity of the source thoroughly while using an article in your review.
  • Regardless of the type of review that you intend to perform, it is important to ensure that the landmark literature, the work that first spoke about your topic of interest, is given prominence in your review. Such work can be identified with a simple Google Scholar search by checking the most cited articles.
  • Make sure to include all the latest literature that focuses on your research question.
  • Avoid including irrelevant data by revisiting your aims, objectives, and research questions as often as possible during the review process.
  • If you intend to submit your review to any peer-reviewed journal, make sure to have a defined structure based upon your selected type of review.
  • If it is a systematic literature review, make sure that the research question is clear and crisp and framed in a manner that lends itself to quantitative analysis.
  • If it is a literature review of any other kind, make sure that you include enough checkpoints to minimize biases in your conclusions. You can use an integrative approach to show how different data points fit together; however, it is also essential to mention and describe data that do not fit together in order to produce a balanced review. This can also help identify gaps and pave the way for designing future studies on the topic.

We hope that the above article was helpful for you in understanding the basics of a literature review and the use of a systematic review vs. a literature review.

Q: When to do a systematic review?

A systematic review is conducted to synthesize and analyze existing research on a specific question. It’s valuable when a comprehensive assessment of available evidence is required to answer a well-defined research question. Systematic reviews follow a predefined protocol, rigorous methodology, and aim to minimize bias. They’re especially useful for informing evidence-based decisions in healthcare and policy-making.

Q: When to do a literature review?

A literature review surveys existing literature on a topic, providing an overview of key concepts and findings. It’s conducted when exploring a subject, identifying gaps, and contextualizing research. Literature reviews are valuable at the beginning of a study to establish the research landscape and justify the need for new research.

Q: What is the difference between a literature review and a scoping review?

A literature review summarizes existing research on a topic, while a scoping review maps the literature to identify research gaps and areas for further investigation. While both assess existing literature, a scoping review tends to have broader inclusion criteria and aims to provide an overview of the available research, helping researchers understand the breadth of a topic before narrowing down a research question.

Q: What is the difference between a systematic literature review and a meta-analysis?

A systematic literature review aims to comprehensively identify, select, and analyze all relevant studies on a specific research question using a rigorous methodology. It summarizes findings qualitatively. On the other hand, a meta-analysis is a statistical technique applied within a systematic review. It involves pooling and analyzing quantitative data from multiple studies to provide a more precise estimate of an effect size. In essence, a meta-analysis is a quantitative synthesis that goes beyond the qualitative summary of a systematic literature review.
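
To make the pooling step concrete, the fixed-effect inverse-variance estimate that underlies many meta-analyses can be written as follows; this is a standard textbook formula included here purely as an illustration, not a statement about any particular review:

```latex
% Fixed-effect (inverse-variance) pooled estimate of K study effects
\hat{\theta}_{\mathrm{pooled}} \;=\; \frac{\sum_{i=1}^{K} w_i \,\hat{\theta}_i}{\sum_{i=1}^{K} w_i},
\qquad w_i = \frac{1}{\widehat{\operatorname{Var}}(\hat{\theta}_i)},
\qquad \operatorname{SE}(\hat{\theta}_{\mathrm{pooled}}) = \frac{1}{\sqrt{\sum_{i=1}^{K} w_i}}
```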

References:  

  • Types of Literature Review – Business Research Methodology. https://research-methodology.net/research-methodology/types-literature-review/  
  • Mellor, L. The difference between a systematic review and a literature review. Covidence. https://www.covidence.org/blog/the-difference-between-a-systematic-review-and-a-literature-review
  • Basu, G. SJSU Research Guides – Literature Review vs Systematic Review.  https://libguides.sjsu.edu/LitRevVSSysRev/definitions  
  • Jansen, D., Phair, D. Writing A Literature Review: 7 Common (And Costly) Mistakes To Avoid. Grad Coach, June 2021. https://gradcoach.com/literature-review-mistakes/  


The Difference Between Narrative Review and Systematic Review


Reviews in scientific research are tools that help synthesize the literature on a topic of interest and describe its current state. Different types of reviews are conducted depending on the research question and the scope of the review. A systematic review is one such review that is robust, reproducible, and transparent. It involves collating evidence by using all of the eligible and critically appraised literature available on a certain topic. To know more about how to do a systematic review, you can check out our article at the link. The primary aim of a systematic review is to recommend best practices and inform policy development. Hence, there is a need for high-quality, focused, and precise methods and reporting. For more exploratory research questions, methods such as a scoping review are employed. Be sure you understand the difference between a systematic review and a scoping review; if you don’t, check out the link to learn more.

When the word “review” alone is used to describe a research paper, the first thing that should come to mind is that it is a literature review. Almost every researcher starts off their career with literature reviews. To know the difference between a systematic review and a literature review, read on here. Traditional literature reviews are also sometimes referred to as narrative reviews, since they use narrative analysis to synthesize data. In this article, we will explore the differences between a systematic review and a narrative review in further detail.


Narrative Review vs Systematic Review

Both systematic and narrative reviews are classified as secondary research studies, since they both use existing primary research studies (e.g., case studies). Despite this similarity, there are key differences between them in their objectives, methodology, and application areas.

Differences In Objective

The main objective of a systematic review is to formulate a well-defined research question and use qualitative and quantitative methods to analyze all the available evidence attempting to answer that question. In contrast, narrative reviews can address one or more questions with a much broader scope. The efficacy of narrative reviews is irreplaceable in tracking the development of a scientific principle or a clinical concept, and this ability to conduct a wider exploration could be lost in the restrictive framework of a systematic review.

Differences in Methodology

For systematic reviews, there are guidelines provided by the Cochrane Handbook, ROSES, and the PRISMA statement that can help determine the protocol and methodology to be used. For narrative reviews, however, such standard guidelines do not exist, although recommendations are available.

Systematic reviews comprise an explicit, transparent, and pre-specified methodology. The methodology followed in a systematic review is as follows:

  • Formulating the clinical research question to answer (PICO approach; a minimal example follows this list)
  • Developing a protocol (with strict inclusion and exclusion criteria for the selection of primary studies)
  • Performing a detailed and broad literature search
  • Critical appraisal of the selected studies
  • Data extraction from the primary studies included in the review
  • Data synthesis and analysis using qualitative or quantitative methods [3].
  • Reporting and discussing results of data synthesis.
  • Developing conclusions based on the findings.
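
As a minimal illustration of the PICO framing in the first step above, the review question can be recorded as a small structured object before the protocol is written. The example below is hypothetical (the clinical scenario is invented for illustration) and assumes Python.

```python
# Hypothetical PICO-framed review question (illustrative only).
pico_question = {
    "Population":   "Adults hospitalized after acute paracetamol overdose",
    "Intervention": "Intravenous N-acetylcysteine started within 8 hours",
    "Comparator":   "Delayed or no N-acetylcysteine",
    "Outcome":      "Hepatotoxicity or death",
}

question = (
    f"In {pico_question['Population'].lower()}, does {pico_question['Intervention'].lower()} "
    f"compared with {pico_question['Comparator'].lower()} reduce {pico_question['Outcome'].lower()}?"
)
print(question)
```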

A narrative review on the other hand does not have a strict protocol to be followed. The design of the review depends on its author and the objectives of the review. As yet, there is no consensus on the standard structure of a narrative review. The preferred approach is the IMRAD (Introduction, Methods, Results, and Discussion) [2]. Apart from the author’s preferences, a narrative review structure must respect the journal style and conventions followed in the respective field.

Differences in Application areas

Narrative reviews are aimed at identifying and summarizing what has previously been published. Their general applications include exploring existing debates, appraising previous studies conducted on a certain topic, identifying knowledge gaps, and speculating on the latest interventions available. They are also used to track and report on changes that have occurred in an existing field of research. Their main purpose is to deepen the understanding of a certain research area. The results of a systematic review, by contrast, provide the most valid evidence to guide clinical decision-making and inform policy development [1]. They have now become the gold standard in evidence-based medicine [1].

Although both types of reviews come with their own benefits and limitations, researchers should carefully consider the differences between them before making a decision on which review type to use.

  • Aromataris E, Pearson A. The systematic review: an overview. AJN. Am J Nurs. 2014;114(3):53–8.
  • Green BN, Johnson CD, Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. J Chiropratic Medicine 2006;5:101–117.
  • Linares-Espinós E, Hernández V, Domínguez-Escrig JL, Fernández-Pello S, Hevia V, Mayor J, et al. Metodología de una revisión sistemática. Actas Urol Esp. 2018;42:499–506.



  16. Systematic review vs literature review: Some essential differences

    Apart from systematic literature review, some other common types of literature review are1: Narrative literature review - used to identify gaps in the existing knowledge base. Scoping literature review - used to identify the scope of a particular study. Integrative literature review - used to generate secondary data that upon integration ...

  17. The Differences Between a Randomized-Controlled Trial vs Systematic Review

    Therefore, you have to review as many studies as possible to gather enough data to support crucial decisions. A failure to review all the relevant eligible studies may lead to inconsistent results. This is where different types of reviews, including systematic reviews and integrated reviews (among others), become essential.

  18. The Difference Between Narrative Review and Systematic Review

    Both systematic and narrative reviews are classified as secondary research studies since they both use existing primary research studies e.g. case studies. Despite this similarity, there are key differences in their methodology and scope. The major differences between them lie in their objectives, methodology, and application areas.

  19. Case Studies: A Systematic Review of the Evidence

    This study aimed to determine the extent, range and nature of research about case studies in higher education. Method A systematic review was conducted using a wide ranging search strategy ...