AIS Electronic Library (AISeL)


ECIS 2002 Proceedings

Critical appraisal guidelines for single case study research.

Clare Atkins, Nelson Marlborough Institute of Technology; Jennifer Sampson, University of Melbourne

The use of critical appraisal guidelines to assess the validity of research findings has become an established technique in those disciplines, such as healthcare and medicine, that encourage the use of evidence-based practice. Critical appraisal guidelines provide a rigorous set of criteria, often in the form of a checklist, against which a piece of research can be assessed. Although well-established criteria exist for many forms of quantitative research, such as clinical trials and cohort studies, qualitative research is less well served. Through a synthesis of existing best practices in interpretative research, this paper provides comprehensive guidelines for the conduct of single case study research and extrapolates from them a set of critical appraisal guidelines to assist in the evaluation of such work.

Recommended Citation

Atkins, Clare and Sampson, Jennifer, "Critical Appraisal Guidelines for Single Case Study Research" (2002). ECIS 2002 Proceedings. 15. https://aisel.aisnet.org/ecis2002/15


Mayo Clinic Libraries

Systematic Reviews: Critical Appraisal by Study Design


Tools for Critical Appraisal of Studies


“The purpose of critical appraisal is to determine the scientific merit of a research report and its applicability to clinical decision making.” 1 Conducting a critical appraisal of a study is imperative to any well-executed evidence review, but the process can be time-consuming and difficult. 2 As Fowkes and Fulton put it, “a methodological approach coupled with the right tools and skills to match these methods is essential for finding meaningful results.” 3 In short, critical appraisal is a method of differentiating good research from bad research.

Critical Appraisal by Study Design (featured tools)

  • Systematic Reviews
  • Randomized Controlled Trials
  • Non-RCTs or Observational Studies
  • Diagnostic Accuracy
  • Animal Studies
  • Qualitative Research
  • Tool Repository
  • AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews): The original AMSTAR was developed to assess the risk of bias in systematic reviews that included only randomized controlled trials. AMSTAR 2, published in 2017, allows researchers to “identify high quality systematic reviews, including those based on non-randomised studies of healthcare interventions.” 4
  • ROBIS (Risk of Bias in Systematic Reviews): A tool designed specifically to assess the risk of bias in systematic reviews. “The tool is completed in three phases: (1) assess relevance (optional), (2) identify concerns with the review process, and (3) judge risk of bias in the review. Signaling questions are included to help assess specific concerns about potential biases with the review.” 5
  • BMJ Framework for Assessing Systematic Reviews: This framework provides a checklist used to evaluate the quality of a systematic review.
  • CASP Checklist for Systematic Reviews (Critical Appraisal Skills Programme): This CASP checklist is not a scoring system but a method of appraising systematic reviews by considering: 1. Are the results of the study valid? 2. What are the results? 3. Will the results help locally?
  • CEBM Systematic Reviews Critical Appraisal Sheet (Centre for Evidence-Based Medicine): The CEBM’s critical appraisal sheets are designed to help you appraise the reliability, importance, and applicability of clinical evidence.
  • JBI Checklist for Systematic Reviews: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct, and analysis.
  • NHLBI Study Quality Assessment of Systematic Reviews and Meta-Analyses (National Heart, Lung, and Blood Institute): The NHLBI’s quality assessment tools were designed to help reviewers focus on concepts that are key to appraising the internal validity of a study.
  • RoB 2 (revised tool to assess Risk of Bias in randomized trials): RoB 2 “provides a framework for assessing the risk of bias in a single estimate of an intervention effect reported from a randomized trial,” rather than the entire trial. 6
  • CASP Randomised Controlled Trials Checklist: This CASP checklist considers various aspects of an RCT that require critical appraisal: 1. Is the basic study design valid for a randomized controlled trial? 2. Was the study methodologically sound? 3. What are the results? 4. Will the results help locally?
  • CONSORT Statement (Consolidated Standards of Reporting Trials): The CONSORT checklist includes 25 items for determining the quality of randomized controlled trials. “Critical appraisal of the quality of clinical trials is possible only if the design, conduct, and analysis of RCTs are thoroughly and accurately described in the report.” 7
  • NHLBI Study Quality Assessment of Controlled Intervention Studies: The NHLBI’s quality assessment tools were designed to help reviewers focus on concepts that are key to appraising the internal validity of a study.
  • JBI Checklist for Randomized Controlled Trials: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct, and analysis.
  • ROBINS-I (Risk Of Bias in Non-randomized Studies – of Interventions): ROBINS-I is a “tool for evaluating risk of bias in estimates of the comparative effectiveness… of interventions from studies that did not use randomization to allocate units… to comparison groups.” 8
  • NOS (Newcastle-Ottawa Scale): Used primarily to evaluate and appraise case-control and cohort studies.
  • AXIS (Appraisal tool for Cross-Sectional Studies): Cross-sectional studies are frequently used as an evidence base for diagnostic testing, risk factors for disease, and prevalence studies. “The AXIS tool focuses mainly on the presented [study] methods and results.” 9
  • NHLBI Study Quality Assessment Tools for Non-Randomized Studies: The NHLBI’s quality assessment tools were designed to help reviewers focus on concepts that are key to appraising the internal validity of a study. These include the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies, the Quality Assessment of Case-Control Studies, the Quality Assessment Tool for Before-After (Pre-Post) Studies With No Control Group, and the Quality Assessment Tool for Case Series Studies.
  • Case Series Studies Quality Appraisal Checklist: Developed by the Institute of Health Economics (Canada), this checklist comprises 20 questions to assess “the robustness of the evidence of uncontrolled, [case series] studies.” 10
  • Methodological Quality and Synthesis of Case Series and Case Reports: In this paper, Dr. Murad and colleagues “present a framework for appraisal, synthesis and application of evidence derived from case reports and case series.” 11
  • MINORS (Methodological Index for Non-Randomized Studies): The MINORS instrument contains 12 items and was developed for evaluating the quality of observational or non-randomized studies. 12 It may be of particular interest to researchers who wish to critically appraise surgical studies.
  • JBI Checklists for Non-Randomized Studies: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct, and analysis. These include checklists for analytical cross-sectional studies, case-control studies, case reports, case series, and cohort studies.
  • QUADAS-2 (a revised tool for the Quality Assessment of Diagnostic Accuracy Studies): The QUADAS-2 tool “is designed to assess the quality of primary diagnostic accuracy studies… [it] consists of 4 key domains that discuss patient selection, index test, reference standard, and flow of patients through the study and timing of the index tests and reference standard.” 13
  • JBI Checklist for Diagnostic Test Accuracy Studies: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct, and analysis.
  • STARD 2015 (Standards for the Reporting of Diagnostic Accuracy Studies): The authors of the standards note that “[e]ssential elements of [diagnostic accuracy] study methods are often poorly described and sometimes completely omitted, making both critical appraisal and replication difficult, if not impossible.” The standards were developed “to help… improve completeness and transparency in reporting of diagnostic accuracy studies.” 14
  • CASP Diagnostic Study Checklist: This CASP checklist considers various aspects of diagnostic test studies, including: 1. Are the results of the study valid? 2. What were the results? 3. Will the results help locally?
  • CEBM Diagnostic Critical Appraisal Sheet: The CEBM’s critical appraisal sheets are designed to help you appraise the reliability, importance, and applicability of clinical evidence.
  • SYRCLE’s RoB (SYstematic Review Center for Laboratory animal Experimentation’s Risk of Bias): “[I]mplementation of [SYRCLE’s RoB tool] will facilitate and improve critical appraisal of evidence from animal studies. This may… enhance the efficiency of translating animal research into clinical practice and increase awareness of the necessity of improving the methodological quality of animal studies.” 15
  • ARRIVE 2.0 (Animal Research: Reporting of In Vivo Experiments): “The [ARRIVE 2.0] guidelines are a checklist of information to include in a manuscript to ensure that publications [on in vivo animal studies] contain enough information to add to the knowledge base.” 16
  • Critical Appraisal of Studies Using Laboratory Animal Models: This article provides “an approach to critically appraising papers based on the results of laboratory animal experiments,” and discusses various “bias domains” in the literature that critical appraisal can identify. 17
  • Quality Assessment and Risk of Bias Tool Repository: Created by librarians at Duke University, this extensive listing contains over 100 commonly used risk of bias tools, sortable by study type.
  • Latitudes Network: A library of risk of bias tools for use in evidence syntheses, with selection help and training videos.

References & Recommended Reading

1. Kolaski K, Logan LR, Ioannidis JP. Guidance to best tools and practices for systematic reviews. British Journal of Pharmacology. 2024;181(1):180-210.

2. Portney LG. Foundations of Clinical Research: Applications to Evidence-Based Practice. 4th ed. Philadelphia: F.A. Davis; 2020.

3. Fowkes FG, Fulton PM. Critical appraisal of published research: introductory guidelines. BMJ (Clinical research ed). 1991;302(6785):1136-1140.

4. Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ (Clinical research ed). 2017;358:j4008.

5. Whiting P, Savovic J, Higgins JPT, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. Journal of Clinical Epidemiology. 2016;69:225-234.

6. Sterne JAC, Savovic J, Page MJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ (Clinical research ed). 2019;366:l4898.

7. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. Journal of Clinical Epidemiology. 2010;63(8):e1-37.

8. Sterne JA, Hernan MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ (Clinical research ed). 2016;355:i4919.

9. Downes MJ, Brennan ML, Williams HC, Dean RS. Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ Open. 2016;6(12):e011458.

10. Guo B, Moga C, Harstall C, Schopflocher D. A principal component analysis is conducted for a case series quality appraisal checklist. Journal of Clinical Epidemiology. 2016;69:199-207.e192.

11. Murad MH, Sultan S, Haffar S, Bazerbachi F. Methodological quality and synthesis of case series and case reports. BMJ Evidence-Based Medicine. 2018;23(2):60-63.

12. Slim K, Nini E, Forestier D, Kwiatkowski F, Panis Y, Chipponi J. Methodological index for non-randomized studies (MINORS): development and validation of a new instrument. ANZ Journal of Surgery. 2003;73(9):712-716.

13. Whiting PF, Rutjes AWS, Westwood ME, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Annals of Internal Medicine. 2011;155(8):529-536.

14. Bossuyt PM, Reitsma JB, Bruns DE, et al. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. BMJ (Clinical research ed). 2015;351:h5527.

15. Hooijmans CR, Rovers MM, de Vries RBM, Leenaars M, Ritskes-Hoitinga M, Langendam MW. SYRCLE's risk of bias tool for animal studies. BMC Medical Research Methodology. 2014;14:43.

16. Percie du Sert N, Ahluwalia A, Alam S, et al. Reporting animal research: explanation and elaboration for the ARRIVE guidelines 2.0. PLoS Biology. 2020;18(7):e3000411.

17. O'Connor AM, Sargeant JM. Critical appraisal of studies using laboratory animal models. ILAR Journal. 2014;55(3):405-417.

Recommended reading: Singh S. Critical appraisal skills programme. Journal of Pharmacology and Pharmacotherapeutics. 2013;4(1):76-77.

  • Last Updated: Jan 16, 2024 9:04 AM
  • URL: https://libraryguides.mayo.edu/systematicreviewprocess

CASP Checklists


Critical Appraisal Checklists

We offer a number of free downloadable checklists to help you more easily and accurately perform critical appraisal across a number of different study types.

The CASP checklists are easy to understand, but if you need further guidance on how they are structured, take a look at our guide on how to use our CASP checklists.

  • CASP Randomised Controlled Trial Checklist
  • CASP Systematic Review Checklist
  • CASP Qualitative Studies Checklist
  • CASP Cohort Study Checklist
  • CASP Diagnostic Study Checklist
  • CASP Case Control Study Checklist
  • CASP Economic Evaluation Checklist
  • CASP Clinical Prediction Rule Checklist

Checklist Archive

  • CASP Randomised Controlled Trial Checklist 2018 (fillable form)
  • CASP Randomised Controlled Trial Checklist 2018



Related Papers

This paper focuses on the question of sampling (or selection of cases) in qualitative research. Although the literature includes some very useful discussions of qualitative sampling strategies, the question of sampling often seems to receive less attention in methodological discussion than questions of how data are collected or analysed. Decisions about sampling are likely to be important in many qualitative studies (although sampling may not be an issue in some research). There are varying accounts of the principles applicable to sampling or case selection. Those who espouse 'theoretical sampling', based on a 'grounded theory' approach, are in some ways opposed to those who promote forms of 'purposive sampling' suitable for research informed by an existing body of social theory. Diversity also results from the many different methods for drawing purposive samples which are applicable to qualitative research. We explore the value of a framework suggested by Miles and Huberman [Miles, M., Huberman, A., 1994. Qualitative Data Analysis, Sage, London] to evaluate the sampling strategies employed in three examples of research by the authors. Our examples comprise three studies which respectively involve selection of: 'healing places'; rural places which incorporated national anti-malarial policies; and young male interviewees, identified as either chronically ill or disabled. The examples are used to show how in these three studies the (sometimes conflicting) requirements of the different criteria were resolved, as well as the potential and constraints placed on the research by the selection decisions which were made. We also consider how far the criteria Miles and Huberman suggest seem helpful for planning 'sample' selection in qualitative research.

PURPOSE: We wanted to review and synthesize published criteria for good qualitative research and develop a cogent set of evaluative criteria. METHODS: We identified published journal articles discussing criteria for rigorous research using standard search strategies, then examined the reference sections of relevant journal articles to identify books and book chapters on this topic. A cross-publication content analysis allowed us to identify criteria and understand the beliefs that shape them. RESULTS: Seven criteria for good qualitative research emerged: (1) carrying out ethical research; (2) importance of the research; (3) clarity and coherence of the research report; (4) use of appropriate and rigorous methods; (5) importance of reflexivity, or attending to researcher bias; (6) importance of establishing validity or credibility; and (7) importance of verification or reliability. General agreement was observed across publications on the first 4 quality dimensions. On the last 3, important divergent perspectives were observed in how these criteria should be applied to qualitative research, with differences based on the paradigm embraced by the authors. CONCLUSION: Qualitative research is not a unified field. Most manuscript and grant reviewers are not qualitative experts and are likely to embrace a generic set of criteria rather than those relevant to the particular qualitative approach proposed or reported. Reviewers and researchers need to be aware of this tendency and educate health care researchers about the criteria appropriate for evaluating qualitative research from within the theoretical and methodological framework from which it emerges.


A hand search of the original papers in seven medical journals over 5 years was conducted in order to identify those reporting qualitative research. A total of 210 papers were initially identified, of which 70 used qualitative methods of both data collection and analysis. These papers were evaluated by the researchers using a checklist which specified criteria of good practice. Overall, 2% of the original papers published in the journals reported qualitative studies. Papers were more frequently positively assessed in terms of having clear aims, reporting research for which a qualitative approach was appropriate, and describing their methods of data collection. Papers were less frequently positively assessed in relation to issues of data analysis such as validity, reliability, and providing representative supporting evidence. It is concluded that the full potential of qualitative research has yet to be realized in the field of health care.


Mil Med Res

Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: what are they and which is better?

Yun-Yun Wang, Zhi-Hua Yang, Xian-Tao Zeng

1 Center for Evidence-Based and Translational Medicine, Zhongnan Hospital, Wuhan University, 169 Donghu Road, Wuchang District, Wuhan, 430071, Hubei, China

2 Department of Evidence-Based Medicine and Clinical Epidemiology, The Second Clinical College, Wuhan University, Wuhan, 430071, China

3 Center for Evidence-Based and Translational Medicine, Wuhan University, Wuhan, 430071, China

4 Global Health Institute, Wuhan University, Wuhan, 430072, China
Associated Data

The data and materials used during the current review are all available in this review.

Methodological quality (risk of bias) assessment is an important step before a study is put to use. Accurately judging the study type is therefore the first priority, and choosing the proper tool is also important. In this review, we introduce methodological quality assessment tools for randomized controlled trials (individual and cluster), animal studies, non-randomized interventional studies (follow-up study, controlled before-and-after study, before-after/pre-post study, uncontrolled longitudinal study, interrupted time series study), cohort studies, case-control studies, cross-sectional studies (analytical and descriptive), observational case series and case reports, comparative effectiveness research, diagnostic studies, health economic evaluations, prediction studies (predictor finding study, prediction model impact study, prognostic prediction model study), qualitative studies, outcome measurement instruments (patient-reported outcome measure development, content validity, structural validity, internal consistency, cross-cultural validity/measurement invariance, reliability, measurement error, criterion validity, hypotheses testing for construct validity, and responsiveness), systematic reviews and meta-analyses, and clinical practice guidelines. Readers of this review can thus distinguish the types of medical studies and choose appropriate tools. In short, comprehensively mastering the relevant knowledge and accumulating practice are basic requirements for correctly assessing methodological quality.

In the twentieth century, pioneering work by distinguished professors Cochrane A [ 1 ], Guyatt GH [ 2 ], and Chalmers IG [ 3 ] led us into the evidence-based medicine (EBM) era. In this era, knowing how to search for, critically appraise, and use the best evidence is essential. Moreover, systematic review and meta-analysis is the most widely used method for summarizing primary data scientifically [ 4 – 6 ] and is also the basis for developing clinical practice guidelines according to the Institute of Medicine (IOM) [ 7 ]. Hence, when performing a systematic review and/or meta-analysis, assessing the methodological quality of the included primary studies is important; naturally, it is also key to assess the review's own methodological quality before it is used. Quality includes internal and external validity, while methodological quality usually refers to internal validity [ 8 , 9 ]. Internal validity is also referred to as “risk of bias (RoB)” by the Cochrane Collaboration [ 9 ].

There are three types of tools: scales, checklists, and items [ 10 , 11 ]. In 2015, Zeng et al. [ 11 ] investigated methodological quality tools for randomized controlled trials (RCTs), non-randomized clinical intervention studies, cohort studies, case-control studies, cross-sectional studies, case series, diagnostic accuracy studies (also called “diagnostic test accuracy (DTA)” studies), animal studies, systematic reviews and meta-analyses, and clinical practice guidelines (CPGs). Since then, pre-existing tools may have changed and new tools may have emerged; moreover, research methods themselves have developed in recent years. Hence, it is necessary to systematically survey commonly used tools for assessing methodological quality, especially those for economic evaluations, clinical prediction rules/models, and qualitative studies. Therefore, this narrative review presents methodological quality (including “RoB”) assessment tools for primary and secondary medical studies up to December 2019, and Table 1 presents their basic characteristics. We hope this review can help the producers, users, and researchers of evidence.

The basic characteristics of the included methodological quality (risk of bias) assessment tools

Abbreviations: AMSTAR, A MeaSurement Tool to Assess systematic Reviews; AHRQ, Agency for Healthcare Research and Quality; AXIS, Appraisal tool for Cross-Sectional Studies; CAMARADES, the Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies; CASP, Critical Appraisal Skills Programme; COSMIN, COnsensus-based Standards for the selection of health Measurement INstruments; DSU, Decision Support Unit; EPOC, the Effective Practice and Organisation of Care group; GRACE, the Good Research for Comparative Effectiveness initiative; IHE, Institute of Health Economics (Canada); JBI, Joanna Briggs Institute; MINORS, Methodological Index for Non-Randomized Studies; NICE, National Institute for Clinical Excellence; NIH, National Institutes of Health; NMA, network meta-analysis; NOS, Newcastle-Ottawa Scale; PEDro, Physiotherapy Evidence Database; PROBAST, Prediction model Risk Of Bias ASsessment Tool; QUADAS, Quality Assessment of Diagnostic Accuracy Studies; QUIPS, QUality In Prognosis Studies; RoB, Risk of Bias; ROBINS-I, Risk Of Bias In Non-randomised Studies - of Interventions; ROBIS, Risk of Bias In Systematic reviews; SIGN, the Scottish Intercollegiate Guidelines Network; STAIR, Stroke Therapy Academic Industry Roundtable; SYRCLE, SYstematic Review Center for Laboratory animal Experimentation
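The review's central advice is to match the assessment tool to the study type before appraising. As an illustrative aid only, that matching can be sketched as a simple lookup; the dictionary, function name, and the particular tool selections below are our assumptions drawn from tools named in this review, not an official mapping.

```python
# Hypothetical helper (not from the review itself): map a study design to
# quality-assessment tools this review discusses for that design.
# Tool names come from the review's text; the structure is illustrative.

TOOLS_BY_DESIGN = {
    "randomized controlled trial": ["Cochrane RoB 2.0", "PEDro", "EPOC RoB tool",
                                    "CASP RCT checklist", "NIH", "JBI", "SIGN"],
    "animal study": ["STAIR", "CAMARADES", "SYRCLE's RoB"],
    "systematic review": ["AMSTAR 2", "ROBIS"],
    "diagnostic accuracy study": ["QUADAS-2"],
    "cohort study": ["NOS"],
    "case-control study": ["NOS"],
    "cross-sectional study": ["AXIS"],
}

def suggest_tools(design: str) -> list[str]:
    """Return candidate methodological quality tools for a study design."""
    return TOOLS_BY_DESIGN.get(design.strip().lower(), [])
```

An unrecognized design returns an empty list, signalling that the reader should consult Table 1 directly rather than force a fit.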

Tools for intervention studies

Randomized controlled trial (individual or cluster)

The first RCT was designed by Hill BA (1897–1991), and the RCT has remained the “gold standard” of experimental study design [ 12 , 13 ] ever since. Nowadays, the Cochrane risk of bias tool for randomized trials (introduced in 2008 and last edited on March 20, 2011) is the most commonly recommended tool for RCTs [ 9 , 14 ]; it is known as “RoB”. A revised version of the tool, RoB 2.0 (first introduced in 2016), was published on August 22, 2019 [ 15 ]. The RoB 2.0 tool is suitable for individually-randomized, parallel-group, and cluster-randomized trials, and can be found on the dedicated website https://www.riskofbias.info/welcome/rob-2-0-tool . The RoB 2.0 tool consists of five bias domains and shows major changes compared to the original Cochrane RoB tool (Table S 1 A-B presents the major items of both versions).

The Physiotherapy Evidence Database (PEDro) scale is a specialized methodological assessment tool for RCTs in physiotherapy [ 16 , 17 ]; it can be found at http://www.pedro.org.au/english/downloads/pedro-scale/ and covers 11 items (Table S 1 C). The Effective Practice and Organisation of Care (EPOC) Group is a Cochrane Review Group that has also developed a tool (the “EPOC RoB tool”) for randomized trials of complex interventions. This tool has 9 items (Table S 1 D) and can be found at https://epoc.cochrane.org/resources/epoc-resources-review-authors . The Critical Appraisal Skills Programme (CASP) is part of the Oxford Centre for Triple Value Healthcare Ltd. (3V) portfolio, which provides resources and learning and development opportunities to support the development of critical appraisal skills in the UK ( http://www.casp-uk.net/ ) [ 18 – 20 ]. The CASP checklist for RCTs consists of three sections involving 11 items (Table S 1 E). The National Institutes of Health (NIH) also develops quality assessment tools for controlled intervention studies (Table S 1 F) to assess the methodological quality of RCTs ( https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools ).

The Joanna Briggs Institute (JBI) is an independent, international, not-for-profit research and development organization based in the Faculty of Health and Medical Sciences at the University of Adelaide, South Australia ( https://joannabriggs.org/ ). It develops many critical appraisal checklists covering the feasibility, appropriateness, meaningfulness and effectiveness of healthcare interventions. Table S 1 G presents the JBI critical appraisal checklist for RCTs, which includes 13 items.

The Scottish Intercollegiate Guidelines Network (SIGN) was established in 1993 ( https://www.sign.ac.uk/ ). Its objective is to improve the quality of health care for patients in Scotland by reducing variation in practice and outcomes, through developing and disseminating national clinical guidelines containing recommendations for effective practice based on current evidence. SIGN also develops critical appraisal checklists for assessing the methodological quality of different study types, including RCTs (Table S 1 H).

In addition, the Jadad Scale [ 21 ], Modified Jadad Scale [ 22 , 23 ], Delphi List [ 24 ], Chalmers Scale [ 25 ], National Institute for Clinical Excellence (NICE) methodology checklist [ 11 ], Downs & Black checklist [ 26 ], and other tools summarized by West et al. in 2002 [ 27 ] are not commonly used or recommended nowadays.

Animal study

Before clinical trials begin, the safety and effectiveness of new drugs are usually tested in animal models [ 28 ], so animal studies are considered preclinical research of important significance [ 29 , 30 ]. Likewise, the methodological quality of animal studies needs to be assessed [ 30 ]. In 1999, the initial “Stroke Therapy Academic Industry Roundtable (STAIR)” recommended criteria for assessing the quality of stroke animal studies [ 31 ]; this tool is also called “STAIR”. In 2009, the STAIR Group updated their criteria and developed the “Recommendations for Ensuring Good Scientific Inquiry” [ 32 ]. In addition, Macleod et al. [ 33 ] proposed a 10-point tool based on STAIR in 2004 to assess the methodological quality of animal studies, known as “CAMARADES (The Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies)”; the “S” originally stood for “Stroke” and now stands for “Studies” ( http://www.camarades.info/ ). In the CAMARADES tool, each item is worth at most one point, for a maximum total score of 10 points (Table S 1 J).
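The CAMARADES scoring rule just described (one point per item, maximum 10) can be sketched as follows; the function name and the one-boolean-per-item representation are illustrative, not part of the tool itself:

```python
# CAMARADES-style scoring: each of the 10 checklist items earns at most
# one point, so the total quality score ranges from 0 to 10.
CAMARADES_ITEM_COUNT = 10

def camarades_score(items_met):
    """items_met: one boolean per checklist item (True = criterion met)."""
    items = list(items_met)
    if len(items) != CAMARADES_ITEM_COUNT:
        raise ValueError(f"expected {CAMARADES_ITEM_COUNT} items, got {len(items)}")
    return sum(items)

print(camarades_score([True] * 7 + [False] * 3))  # 7
```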

In 2008, the Systematic Review Center for Laboratory animal Experimentation (SYRCLE) was established in the Netherlands; in 2014 this team developed and released an RoB tool for animal intervention studies, SYRCLE’s RoB tool, based on the original Cochrane RoB tool [ 34 ]. This new tool contains 10 items and has become the most recommended tool for assessing the methodological quality of animal intervention studies (Table S 1 I).

Non-randomised studies

In clinical research, an RCT is not always feasible [ 35 ]; therefore, non-randomized designs remain important. In non-randomised studies (also called quasi-experimental studies), including follow-up studies, investigators control the allocation of participants into groups but do not use randomization [ 36 ]. Depending on whether a comparison group is present, non-randomized clinical intervention studies can be divided into comparative and non-comparative sub-types. The Risk Of Bias In Non-randomised Studies - of Interventions (ROBINS-I) tool [ 37 ] is the preferentially recommended tool. It was developed to evaluate risk of bias in estimating the comparative effectiveness (harm or benefit) of interventions in studies that do not use randomization to allocate units (individuals or clusters of individuals) into comparison groups. The JBI critical appraisal checklist for quasi-experimental studies (non-randomized experimental studies), which includes 9 items, is also suitable. Moreover, the methodological index for non-randomized studies (MINORS) [ 38 ] can be used; it contains 12 methodological items, of which the first 8 apply to both non-comparative and comparative studies, while the last 4 apply only to studies with two or more groups. Each item is scored from 0 to 2, giving an ideal global score of 16 for non-comparative studies and 24 for comparative studies. Table S 1 K-L-M presents the major items of these three tools.
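The MINORS scoring rule can be made concrete with a short sketch. Names are illustrative; the 0/1/2 item score meanings follow the MINORS publication (not reported / reported but inadequate / reported and adequate):

```python
# MINORS scoring: 12 items scored 0-2; non-comparative studies use only
# the first 8 items (ideal global score 16), comparative studies use all
# 12 (ideal global score 24).
def minors_score(item_scores, comparative):
    expected_items = 12 if comparative else 8
    scores = list(item_scores)
    if len(scores) != expected_items:
        raise ValueError(f"expected {expected_items} item scores, got {len(scores)}")
    if any(s not in (0, 1, 2) for s in scores):
        raise ValueError("each MINORS item is scored 0, 1 or 2")
    return sum(scores), 2 * expected_items  # (total, ideal global score)

print(minors_score([2] * 8, comparative=False))  # (16, 16)
```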

A non-randomized study with a separate control group may also be called a clinical controlled trial or a controlled before-and-after study. For this design, the EPOC RoB tool is suitable (see Table S 1 D). When using it, “random sequence generation” and “allocation concealment” should be scored as “High risk”, while the other items can be graded as for a randomized trial.

A non-randomized study without a separate control group could be a before-after (pre-post) study, a case series (uncontrolled longitudinal study), or an interrupted time series study. A case series describes a series of individuals, who usually receive the same intervention, and has no control group [ 9 ]. There are several tools for assessing the methodological quality of case series studies. The latest was developed in 2012 by Moga C et al. [ 39 ] at the Canadian Institute of Health Economics (IHE) using a modified Delphi technique; hence, it is also called the “IHE Quality Appraisal Tool” (Table S 1 N). The NIH has also developed a quality assessment tool for case series studies, which includes 9 items (Table S 1 O). For interrupted time series studies, the “EPOC RoB tool for interrupted time series studies” is recommended (Table S 1 P). For before-after studies, we recommend the NIH quality assessment tool for before-after (pre-post) studies without a control group (Table S 1 Q).

In addition, for non-randomized intervention studies, the Reisch tool (Check List for Assessing Therapeutic Studies) [ 11 , 40 ], the Downs & Black checklist [ 26 ], and other tools summarized by Deeks et al. [ 36 ] are no longer commonly used or recommended.

Tools for observational and diagnostic studies

Observational studies include cohort studies, case-control studies, cross-sectional studies, case series, case reports, and comparative effectiveness research [ 41 ], and can be divided into analytical and descriptive studies [ 42 ].

Cohort study

Cohort studies include prospective, retrospective, and ambidirectional cohort studies [ 43 ]. Several tools are available for assessing the quality of cohort studies, such as the CASP cohort study checklist (Table S 2 A), the SIGN critical appraisal checklist for cohort studies (Table S 2 B), the NIH quality assessment tool for observational cohort and cross-sectional studies (Table S 2 C), the Newcastle-Ottawa Scale (NOS; Table S 2 D) for cohort studies, and the JBI critical appraisal checklist for cohort studies (Table S 2 E). However, the Downs & Black checklist [ 26 ] and the NICE methodology checklist for cohort studies [ 11 ] are no longer commonly used or recommended.

The NOS [ 44 , 45 ] arose from an ongoing collaboration between the Universities of Newcastle, Australia, and Ottawa, Canada. Among the tools mentioned above, the NOS is currently the most commonly used, and it may be modified to suit a specific subject.

Case-control study

A case-control study selects participants based on the presence of a specific disease or condition, and looks back for earlier exposures that may have led to the disease or outcome [ 42 ]. It has an advantage over the cohort study: the “drop out” or “loss to follow-up” of participants seen in cohort studies does not arise. Several acceptable tools exist for assessing the methodological quality of case-control studies, including the CASP case-control study checklist (Table S 2 F), the SIGN critical appraisal checklist for case-control studies (Table S 2 G), the NIH quality assessment tool for case-control studies (Table S 2 H), the JBI critical appraisal checklist for case-control studies (Table S 2 I), and the NOS for case-control studies (Table S 2 J). Among them, the NOS for case-control studies is the most frequently used and may be modified by users.

In addition, the Downs & Black checklist [ 26 ] and the NICE methodology checklist for case-control study [ 11 ] are also not commonly used or recommended nowadays.

Cross-sectional study (analytical or descriptive)

A cross-sectional study provides a snapshot of a disease and other variables in a defined population at a single time point. It can be divided into analytical and purely descriptive types. A descriptive cross-sectional study merely describes the number of cases or events in a particular population at a time point or during a period of time, whereas an analytical cross-sectional study can be used to infer relationships between a disease and other variables [ 46 ].

For assessing the quality of analytical cross-sectional studies, the NIH quality assessment tool for observational cohort and cross-sectional studies (Table S 2 C), the JBI critical appraisal checklist for analytical cross-sectional studies (Table S 2 K), and the Appraisal tool for Cross-Sectional Studies (AXIS tool; Table S 2 L) [ 47 ] are the recommended tools. The AXIS tool, developed in 2016, addresses study design and reporting quality as well as the risk of bias in cross-sectional studies, and contains 20 items. Among these three tools, the JBI checklist is the most preferred.

A purely descriptive cross-sectional study is usually used to measure disease prevalence and incidence, so the critical appraisal tools for analytical cross-sectional studies are not appropriate here. Only a few quality assessment tools suit descriptive cross-sectional studies, such as the JBI critical appraisal checklist for studies reporting prevalence data [ 48 ] (Table S 2 M), the Agency for Healthcare Research and Quality (AHRQ) methodology checklist for assessing the quality of cross-sectional/prevalence studies (Table S 2 N), and Crombie’s items for assessing the quality of cross-sectional studies [ 49 ] (Table S 2 O). Among them, the JBI tool is the newest.

Case series and case reports

Unlike the interventional case series mentioned above, case reports and case series are used to report novel occurrences of a disease or a unique finding [ 50 ]; hence, they belong to descriptive studies. There is only one suitable tool – the JBI critical appraisal checklist for case reports (Table S 2 P).

Comparative effectiveness research

Comparative effectiveness research (CER) compares real-world outcomes [ 51 ] of the alternative treatment options available for a given medical condition. Its key elements are the study of effectiveness (effect in the real world) rather than efficacy (effect under ideal conditions), and comparison among alternative strategies [ 52 ]. In 2010, the Good Research for Comparative Effectiveness (GRACE) Initiative was established; it developed principles to help healthcare providers, researchers, journal readers, and editors evaluate the inherent quality of observational CER studies [ 41 ]. In 2016, a validated assessment tool, the GRACE Checklist v5.0 (Table S 2 Q), was released for assessing the quality of CER.

Diagnostic study

Diagnostic test accuracy (DTA) studies evaluate the tests clinicians use to identify whether a condition exists in a patient, so as to develop an appropriate treatment plan [ 53 ]. DTA studies have several unique design features that differ from standard interventional and observational evaluations. In 2003, Whiting et al. [ 53 , 54 ] developed a tool for assessing the quality of DTA studies, namely the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. In 2011, a revised “QUADAS-2” tool (Table S 2 R) was launched [ 55 , 56 ]. Besides, the CASP diagnostic checklist (Table S 2 S), the SIGN critical appraisal checklist for diagnostic studies (Table S 2 T), the JBI critical appraisal checklist for diagnostic test accuracy studies (Table S 2 U), and the Cochrane risk of bias tool for diagnostic test accuracy (Table S 2 V) are also commonly used in this field.

Of these, the Cochrane risk of bias tool ( https://methods.cochrane.org/sdt/ ) is based on the QUADAS tool, while the SIGN and JBI tools are based on QUADAS-2. QUADAS-2 itself is the first-recommended tool. Other relevant tools reviewed by Whiting et al. [ 53 ] in 2004 are no longer used.

Tools for other primary medical studies

Health economic evaluation

Health economic evaluation research comparatively analyses alternative interventions with regard to their resource use, costs and health effects [ 57 ]. It focuses on identifying, measuring, valuing and comparing resource use, costs and benefit/effect consequences of two or more alternative intervention options [ 58 ]. Health economic studies are increasingly popular, and their methodological quality also needs to be assessed. The first tool for such assessment was developed by Drummond and Jefferson in 1996 [ 59 ], and many tools have since been developed based on Drummond’s items or their revision [ 60 ], such as the SIGN critical appraisal checklist for economic evaluations (Table S 3 A), the CASP economic evaluation checklist (Table S 3 B), and the JBI critical appraisal checklist for economic evaluations (Table S 3 C). NICE now retains only one methodology checklist, for economic evaluation (Table S 3 D).

However, we regard the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement [ 61 ] as a reporting tool rather than a methodological quality assessment tool, so we do not recommend it for assessing the methodological quality of health economic evaluations.

Qualitative study

In healthcare, qualitative research aims to understand and interpret individual experiences, behaviours, interactions, and social contexts, so as to explain phenomena of interest, such as the attitudes, beliefs, and perspectives of patients and clinicians; the interpersonal nature of caregiver-patient relationships; the illness experience; and the impact of human suffering [ 62 ]. Compared with quantitative studies, there are fewer assessment tools for qualitative studies. The CASP qualitative research checklist (Table S 3 E) is currently the most frequently recommended tool for this purpose. The JBI critical appraisal checklist for qualitative research [ 63 , 64 ] (Table S 3 F) and the Quality Framework: Cabinet Office checklist for social research [ 65 ] (Table S 3 G) are also suitable.

Prediction studies

Clinical prediction studies include predictor finding (prognostic factor) studies, prediction model studies (development, validation, and extension or updating), and prediction model impact studies [ 66 ]. For predictor finding studies, the Quality In Prognosis Studies (QIPS) tool [ 67 ] can be used to assess methodological quality (Table S 3 H). For prediction model impact studies, if a randomized comparative design is used, the tools for RCTs apply, especially the RoB 2.0 tool; if a non-randomized comparative design is used, the tools for non-randomized studies apply, especially the ROBINS-I tool. For diagnostic and prognostic prediction model studies, the Prediction model Risk Of Bias Assessment Tool (PROBAST; Table S 3 I) [ 68 ] and the CASP clinical prediction rule checklist (Table S 3 J) are suitable.

Text and expert opinion papers

Text and expert opinion-based evidence (also called “non-research evidence”) comes from expert opinions, consensus, current discourse, comments, and assumptions or assertions appearing in various journals, magazines, monographs and reports [ 69 – 71 ]. At present, only the JBI provides a critical appraisal checklist for assessing text and expert opinion papers (Table S 3 K).

Outcome measurement instruments

An outcome measurement instrument is a “device” used to collect a measurement. The range embraced by the term “instrument” is broad: it can refer to a questionnaire (e.g. a patient-reported outcome such as quality of life), an observation (e.g. the result of a clinical examination), a scale (e.g. a visual analogue scale), a laboratory test (e.g. a blood test) or images (e.g. ultrasound or other medical imaging) [ 72 , 73 ]. Measurements can be subjective or objective, and either unidimensional (e.g. attitude) or multidimensional. Currently, only one tool, the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) Risk of Bias checklist [ 74 – 76 ] ( www.cosmin.nl/ ), is appropriate for assessing the methodological quality of outcome measurement instruments. Table S 3 L presents its major items, including patient-reported outcome measure (PROM) development (Table S 3 LA), content validity (Table S 3 LB), structural validity (Table S 3 LC), internal consistency (Table S 3 LD), cross-cultural validity/measurement invariance (Table S 3 LE), reliability (Table S 3 LF), measurement error (Table S 3 LG), criterion validity (Table S 3 LH), hypotheses testing for construct validity (Table S 3 LI), and responsiveness (Table S 3 LJ).

Tools for secondary medical studies

Systematic review and meta-analysis

Systematic reviews and meta-analyses are popular methods for keeping up with the current medical literature [ 4 – 6 ]; their ultimate purpose and value lie in promoting healthcare [ 6 , 77 , 78 ]. A meta-analysis is a statistical process of combining results from several studies, commonly as part of a systematic review [ 11 ]. Of course, critical appraisal is necessary before using a systematic review or meta-analysis.

In 1988, Sacks et al. developed the first tool for assessing the quality of meta-analyses of RCTs, the Sacks Quality Assessment Checklist (SQAC) [ 79 ], and in 1991 Oxman and Guyatt developed another, the Overview Quality Assessment Questionnaire (OQAQ) [ 80 , 81 ]. To overcome the shortcomings of these two tools, A MeaSurement Tool to Assess systematic Reviews (AMSTAR) was developed from them in 2007 [ 82 ] ( http://www.amstar.ca/ ). However, the original AMSTAR instrument did not include an assessment of the risk of bias in non-randomised studies, and the expert group thought revisions should address all aspects of the conduct of a systematic review. Hence, a new instrument for systematic reviews of randomised or non-randomised healthcare interventions, AMSTAR 2, was released in 2017 [ 83 ]; Table S 4 A presents its major items.
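AMSTAR 2 is not intended to yield a summed score; instead, its published guidance derives an overall confidence rating from the number of critical flaws and non-critical weaknesses. A minimal sketch of that rating scheme (the function name is illustrative):

```python
# AMSTAR 2 overall confidence per the published guidance: "critically low"
# with more than one critical flaw, "low" with exactly one; with none,
# "high" if there is at most one non-critical weakness, else "moderate".
def amstar2_confidence(critical_flaws, noncritical_weaknesses):
    if critical_flaws > 1:
        return "critically low"
    if critical_flaws == 1:
        return "low"
    return "high" if noncritical_weaknesses <= 1 else "moderate"

print(amstar2_confidence(0, 0))  # high
```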

Besides, the CASP systematic review checklist (Table S 4 B), the SIGN critical appraisal checklist for systematic reviews and meta-analyses (Table S 4 C), the JBI critical appraisal checklist for systematic reviews and research syntheses (Table S 4 D), the NIH quality assessment tool for systematic reviews and meta-analyses (Table S 4 E), the Decision Support Unit (DSU) network meta-analysis (NMA) methodology checklist (Table S 4 F), and the Risk of Bias in Systematic Reviews (ROBIS) [ 84 ] tool (Table S 4 G) are all suitable. Among them, AMSTAR 2 is the most commonly used and ROBIS is the most frequently recommended.

Among these tools, AMSTAR 2 is suitable for assessing systematic reviews and meta-analyses based on randomised or non-randomised interventional studies, and the DSU NMA methodology checklist for network meta-analyses, while ROBIS suits meta-analyses based on interventional, diagnostic test accuracy, clinical prediction, and prognostic studies.

Clinical practice guidelines

Clinical practice guidelines (CPGs) are well integrated into the thinking of practicing clinicians and professional clinical organizations [ 85 – 87 ], and they incorporate scientific evidence into clinical practice [ 88 ]. However, not all CPGs are evidence-based [ 89 , 90 ] and their quality is uneven [ 91 – 93 ]. To date, more than 20 appraisal tools have been developed [ 94 ]. Among them, the Appraisal of Guidelines for Research and Evaluation (AGREE) instrument has the greatest potential to serve as a basis for developing an appraisal tool for clinical pathways [ 94 ]. The AGREE instrument was first released in 2003 [ 95 ] and updated to the AGREE II instrument in 2009 [ 96 ] ( www.agreetrust.org/ ). The AGREE II instrument is now the most recommended tool for CPGs (Table S 4 H).

In addition, based on AGREE II, the AGREE Global Rating Scale (AGREE GRS) instrument [ 97 ] was developed as a short-item tool for evaluating the quality and reporting of CPGs.

Discussion and conclusions

Currently, EBM is widely accepted, and the major attention of healthcare workers lies in “going from evidence to recommendations” [ 98 , 99 ]. Critical appraisal of evidence before use is therefore a key step in this process [ 100 , 101 ]. In 1987, Mulrow CD [ 102 ] pointed out that medical reviews need to routinely use scientific methods to identify, assess, and synthesize information; performing a methodological quality assessment is thus necessary before using a study. However, although more than 20 years have passed since the first tools emerged, many users still confuse methodological quality with reporting quality. Some have used reporting checklists to assess methodological quality, for example using the Consolidated Standards of Reporting Trials (CONSORT) statement [ 103 ] to assess the methodological quality of RCTs, or the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement [ 104 ] to assess the methodological quality of cohort studies. This phenomenon indicates that more universal education in clinical epidemiology is needed for medical students and professionals.

The development of a methodological quality tool should accord with the characteristics of different study types. In this review, we used “methodological quality”, “risk of bias”, “critical appraisal”, “checklist”, “scale”, “items”, and “assessment tool” to search the NICE, SIGN, Cochrane Library and JBI websites, and, building on these, added “systematic review”, “meta-analysis”, “overview” and “clinical practice guideline” to search PubMed. Compared with our previous systematic review [ 11 ], we found that some tools are recommended and remain in use, some are used without recommendation, and some have been eliminated [ 10 , 29 , 30 , 36 , 53 , 94 , 105 – 107 ]. These tools provide a significant impetus for clinical practice [ 108 , 109 ].
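The two groups of search terms above can be illustrated as a boolean query builder. The grouping into two AND-ed OR-clauses is an assumption about how the listed terms might be combined for PubMed, not a reproduction of the authors' exact search string:

```python
# Combining the methodological-quality terms with the evidence-type terms
# into a single boolean query string of the kind accepted by PubMed.
METHOD_TERMS = ["methodological quality", "risk of bias", "critical appraisal",
                "checklist", "scale", "items", "assessment tool"]
EVIDENCE_TERMS = ["systematic review", "meta-analysis", "overview",
                  "clinical practice guideline"]

def build_query(group_a, group_b):
    clause_a = " OR ".join(f'"{term}"' for term in group_a)
    clause_b = " OR ".join(f'"{term}"' for term in group_b)
    return f"({clause_a}) AND ({clause_b})"

query = build_query(METHOD_TERMS, EVIDENCE_TERMS)
print(query[:60])
```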

In addition, compared with our previous systematic review [ 11 ], this review covers more tools, especially those developed after 2014, together with the latest revisions, and we adjusted the method of classifying study types. First, in 2014 NICE provided 7 methodology checklists, but it now retains and updates only the checklist for economic evaluation. The Cochrane RoB 2.0 tool, AMSTAR 2 tool, CASP checklists, and most of the JBI critical appraisal checklists are all the newest revisions; the NIH quality assessment tools, ROBINS-I tool, EPOC RoB tool, AXIS tool, GRACE Checklist, PROBAST, COSMIN Risk of Bias checklist, and ROBIS tool are all newly released tools. Second, we introduced tools for network meta-analyses, outcome measurement instruments, text and expert opinion papers, prediction studies, qualitative studies, health economic evaluations, and CER. Third, we classified interventional studies into randomized and non-randomized sub-types, and further classified non-randomized studies into those with and without a control group. Moreover, we classified cross-sectional studies into analytical and purely descriptive sub-types, and case series into interventional and observational sub-types. This approach is more objective and comprehensive.

Obviously, the number of appropriate tools is largest for RCTs, followed by cohort studies; the applicable range of the JBI checklists is the widest [ 63 , 64 ], with CASP following closely. However, further efforts to develop appraisal tools remain necessary. For some study types, such as CER, outcome measurement instruments, text and expert opinion papers, case reports, and CPGs, only one assessment tool is suitable. For many other study types, such as overviews, genetic association studies, and cell studies, there is no proper assessment tool at all. Moreover, existing tools have not been fully accepted. How to develop well-accepted tools remains significant and important work for the future [ 11 ].

Our review can help systematic review, meta-analysis and guideline developers, as well as evidence users, choose the best tool when producing or using evidence; methodologists may also find research topics for developing new tools. Most importantly, we must remember that all assessment tools are subjective, and that the actual yield of applying them is influenced by the user’s skill and knowledge. Therefore, users must receive formal training (relevant epidemiological knowledge is necessary) and maintain a rigorous academic attitude, and at least two independent reviewers should be involved in evaluation and cross-checking to avoid performance bias [ 110 ].
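The two-independent-reviewer recommendation above can be checked quantitatively. Cohen's kappa, a standard chance-corrected agreement statistic (our choice for illustration, not something the text prescribes), quantifies how well two reviewers' judgements agree before cross-checking:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two reviewers' judgements."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items rated identically.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement by chance, from each reviewer's rating frequencies.
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(count_a[k] * count_b[k]
                   for k in set(count_a) | set(count_b)) / (n * n)
    if expected == 1.0:  # both reviewers used a single category throughout
        return 1.0
    return (observed - expected) / (1 - expected)

# Two reviewers rating six items (e.g. risk-of-bias judgements):
reviewer_1 = ["low", "high", "low", "low", "some concerns", "low"]
reviewer_2 = ["low", "high", "low", "high", "some concerns", "low"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # 0.71
```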

Supplementary information

Acknowledgements.

The authors thank all the authors and technicians for their hard work in developing methodological quality assessment tools.

Abbreviations

Authors’ contributions.

XTZ is responsible for the design of the study and review of the manuscript; LLM, ZHY, YYW, and DH contributed to the data collection; LLM, YYW, and HW contributed to the preparation of the article. All authors read and approved the final manuscript.

This work was supported (in part) by the Entrusted Project of the National Health Commission of China (No. [2019]099), the National Key Research and Development Plan of China (2016YFC0106300), and the Nature Science Foundation of Hubei Province (2019FFB03902). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors declare that there are no conflicts of interest in this study.

Availability of data and materials

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

The authors declare that they have no competing interests.

Contributor Information

Lin-Lu Ma, Email: 13598615285@163.com.

Yun-Yun Wang, Email: 13545027094@163.com.

Zhi-Hua Yang, Email: yangzhihuaxx@126.com.

Di Huang, Email: 13163248347@163.com.

Hong Weng, Email: wengh92@163.com.

Xian-Tao Zeng, Email: zengxiantao1128@163.com, Email: zengxiantao@whucebtm.com.

Supplementary information accompanies this paper at 10.1186/s40779-020-00238-8.

  • Research article
  • Open access
  • Published: 18 April 2015

Can shared decision-making reduce medical malpractice litigation? A systematic review

  • Marie-Anne Durand 1 , 2 ,
  • Benjamin Moulton 3 , 4 , 5 ,
  • Elizabeth Cockle 2 ,
  • Mala Mann 6 &
  • Glyn Elwyn 1 , 7  

BMC Health Services Research volume  15 , Article number:  167 ( 2015 )


To explore the likely influence and impact of shared decision-making on medical malpractice litigation and patients’ intentions to initiate litigation.

We included all observational, interventional and qualitative studies published in any language that assessed the effect or likely influence of shared decision-making or shared decision-making interventions on medical malpractice litigation or on patients’ intentions to litigate. The following databases were searched from inception until January 2014: CINAHL, Cochrane Register of Controlled Trials, Cochrane Database of Systematic Reviews, EMBASE, HMIC, Lexis library, MEDLINE, NHS Economic Evaluation Database, Open SIGLE, PsycINFO and Web of Knowledge. We also hand searched reference lists of included studies and contacted experts in the field. The Downs & Black quality assessment checklist, the Critical Appraisal Skills Programme qualitative tool, and the Critical Appraisal Guidelines for single case study research were used to assess the quality of included studies.

6562 records were screened and 19 articles were retrieved for full-text review. Five studies were included in the review. Due to the number and heterogeneity of included studies, we conducted a narrative synthesis adapted from the ESRC guidance for narrative synthesis. Four themes emerged. The analysis confirms the absence of the empirical data necessary to determine whether or not shared decision-making promoted in the clinical encounter can reduce litigation. Three of the five included studies provide retrospective and simulated data suggesting that ignoring or failing to diagnose patient preferences, particularly when no effort has been made to inform and support understanding of possible harms and benefits, puts clinicians at a higher risk of litigation. Simulated scenarios suggest that documenting the use of decision support interventions in patients’ notes could offer some level of medico-legal protection. Our analysis also indicated that a sizeable proportion of clinicians prefer ordering more tests and procedures, irrespective of patients’ informed preferences, as protection against litigation.

Conclusions

Given the lack of empirical data, there is insufficient evidence to determine whether or not shared decision-making and the use of decision support interventions can reduce medical malpractice litigation. Further investigation is required.

Trial registration

This review was registered on PROSPERO. Registration number: CRD42012002367 .


While policies are evolving to reflect a progressive shift in medical practice towards patient-centered care [ 1 - 4 ], the approach known as shared decision-making has yet to become incorporated as usual care. King and Moulton have argued that current standards of informed consent are unfit for the rapidly evolving medical landscape [ 5 ], where approximately 47% of all medical treatments are “preference-sensitive” [ 6 ]. They advocate that adopting shared decision-making would lead to important and necessary reforms in the area of informed consent. In situations of clinical equipoise, also known as preference-sensitive decisions, it is widely argued that individual patient preferences [ 7 ] should become the guiding principle for patients making informed decisions together with their healthcare providers [ 8 ].

Poor communication and lack of information are the most commonly reported sources of patient dissatisfaction in healthcare [ 9 , 10 ]. Extensive evidence has confirmed that communication failures are strongly correlated with medical malpractice litigation [ 11 - 15 ]. Physicians’ inability to clearly communicate with their patients, to disclose risks and benefits, and to answer their questions, are common predictors of medical malpractice claims [ 1 , 16 , 17 ]. Levinson analyzed communication behaviors between physicians who had never experienced malpractice litigation and those who had previously been sued, and found that the latter tended to demonstrate poorer communication skills. They were also less likely to form helpful interactions with patients: “relationships matter to both patients and physicians and the relationship itself may be the most powerful antidote to the malpractice crisis that medicine can provide” [ 11 , 17 ].

There is considerable hope that sharing decisions with patients, using good communication skills and tools that improve provider-patient communication and understanding of harm-versus-benefit trade-offs, would lead to lower litigation levels. However, there is as yet no evidence to confirm that this is the case. Shared decision-making is defined as a patient and health care provider working together to deliberate about the harms and benefits of two or more reasonable options, in order to choose a course of care that is ideally aligned with the patient’s preferences [ 18 ]. Evidence from controlled contexts suggests that shared decision-making can improve patient outcomes by increasing knowledge, realistic expectations and participation in decision-making, and by reducing post-intervention indecision, compared with usual practice [ 19 ]. There is uncertainty around its impact on costs and litigation rates [ 19 , 20 ]. Notwithstanding, researchers, policy makers and key stakeholders in this area often speculate that shared decision-making, facilitated by the use of decision support interventions, may reduce litigation rates or limit physicians’ liability in lawsuits [ 5 , 21 ]. In the United States, things are evolving rapidly: several states (Maine, Vermont, Massachusetts, Minnesota, and Washington) have adopted legislation to promote Shared Decision-Making (SDM) [ 1 ]. However, there is no widespread adoption of SDM as an alternative to traditional means of obtaining informed consent, and no evidence that SDM may lead to reduced litigation.

It is thus important to examine whether SDM (patient participation in decision-making and/or elicitation of patient preferences) and related interventions might reduce preventable litigation. Our aim is to explore the likely impact and influence of shared decision-making and shared decision-making interventions on medical malpractice litigation, and patients’ intentions to initiate litigation.

The systematic review protocol was registered on PROSPERO in May 2012 (Registration number CRD42012002367). We planned and reported the review in accordance with the preferred reporting items for systematic reviews and meta-analyses (PRISMA) [ 22 ] (see protocol in Additional file 1 and PRISMA checklist in Additional file 2 ).

Study selection and inclusion criteria

After duplicates and irrelevant records were removed, three researchers independently screened the titles and abstracts of the retrieved records. Disagreements were resolved by discussion. Two researchers independently screened full-text articles.

We included all observational, interventional and qualitative studies, published in any language, which assessed the effect or likely influence of shared decision-making (patient participation in decision-making and/or elicitation of patient preferences) or shared decision-making interventions on medical malpractice litigation or on patients’ intentions to litigate. Shared decision-making interventions were defined as the use of tools or strategies designed to engage patients in medical decision-making and/or facilitate shared decision-making and patient activation in the medical encounter, by providing information about the options and associated outcomes, together with implicit methods to clarify values [ 19 ]. Interventions designed to promote informed consent or communication were included in the review if the standard process of consenting and informing patients was complemented by an effort to involve patients in the decision-making process and elicit their preferences. We included all study outcomes related to litigation.

We excluded studies that exclusively examined the impact of communication skills, provision of information or informed consent on medical malpractice litigation, without considering the influence of patient participation in decision-making and/or elicitation of patient preferences.

Search methods

The search strategy was developed with an Information Specialist and piloted in OVID MEDLINE (see Additional file 3 ). We combined keywords and Medical Subject Heading terms for shared decision-making, decision-making, patient participation, doctor-patient relationship, informed decision, decision support, decision support techniques, litigation, medical malpractice, liability, medical negligence claim and legal proceedings (see full list in Additional file 3 ). The following electronic databases were searched from inception until January 2014: CINAHL, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, EMBASE, HMIC, Lexis Library, MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, NHSEED, Open SIGLE, PsycINFO and Web of Knowledge. Conference proceedings and the reference list of all primary and review articles were hand searched. A “cited by” search and “related articles” search were also performed on PubMed. We used social media lists to contact 378 individuals registered as having special interests in this area.

Quality assessment and data extraction

Independent dual data extraction was performed using a piloted pre-designed form. We extracted information about 1) author(s)/publication year, 2) type of publication, 3) country, 4) source of funding, 5) study purpose, 6) duration, 7) study type, 8) methodological approach, 9) recruitment procedure, 10) theoretical framework, 11) participant characteristics, 12) sample size, 13) setting, 14) type of intervention (if applicable), 15) duration of intervention, 16) follow-up, 17) control condition, 18) methods of analysis, 19) number of participants enrolled, included in analysis, withdrawn and lost to follow-up for both intervention and control groups, and 20) outcome measures (type of medical malpractice litigation, outcome of the litigation and factors affecting the outcome, duration of the litigation, litigation cost, accessibility, and usability of the intervention).

We used the Downs & Black quality assessment checklist [ 23 ] to assess the quality of observational studies. This checklist was selected for its psychometric properties and relevance for assessing non-randomized studies [ 23 ]. Qualitative studies were assessed using the qualitative appraisal tool developed by the Critical Appraisal Skills Programme (CASP) [ 24 ]. The quality of the case studies was independently assessed by two researchers, using the critical appraisal guidelines for single case study research [ 25 ].

Evidence synthesis

The Economic and Social Research Council (ESRC) guidance for narrative synthesis [ 26 ], which we followed and adapted, stipulates that a narrative analysis should be driven by a theoretical model, should include a preliminary synthesis of included study findings, an assessment of the principal trends and relationships in the data, and an examination of the robustness of the findings. For this review, we hypothesized that taking active steps to involve patients in sharing preference-sensitive decisions about their care, such as eliciting individual preferences, would reduce the risk of medical malpractice litigation, actual malpractice suits and related costs (see Figure  1 ). The ESRC guidance recommends that before undertaking a review, the authors first develop a ‘theory of change’, that describes how the intervention or concept works. The theory of change outlined in Figure  1 was developed prior to data extraction to inform decisions about the review questions and the type of studies to include. A preliminary synthesis was subsequently undertaken using the extracted data organized in a tabular form. The study quality was examined and the relationships and patterns in the data were thematically analyzed and synthesized.

“Theory of change” underlying the narrative synthesis. *Decision coaching involves preparing and facilitating patient participation in medical decision-making in a non-directive manner.

A total of 8803 citations were identified from the database searches and 40 from other sources. After duplicates and irrelevant hits had been removed, 6562 records were screened and 19 articles were retrieved for full-text review. Fourteen studies were excluded upon full-text review for the following reasons: 1) the study did not evaluate the impact of shared decision-making on litigation or intention to litigate (n = 9); 2) the study did not include any data (e.g. editorial/opinion article) (n = 5). Five articles met our inclusion criteria [ 27 - 31 ] (see Figure 2). We note that 14 editorials and opinion articles, published between 1980 and 2009, hypothesized about the potential for shared decision-making to reduce medical malpractice claims; they were not based on empirical data and were therefore excluded. A much greater number of citations explored the relationship between poor communication or inadequate informed consent and medical malpractice litigation but failed to consider patient involvement in decision-making, and were thus excluded.
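The screening flow reported above can be checked for arithmetic consistency with a short sketch. Only the counts stated in the text are used; the split between duplicates and irrelevant hits removed before screening is not reported, so it appears only as a derived total.

```python
# Illustrative consistency check of the screening flow counts reported above.
# The breakdown of records removed before screening (duplicates vs. irrelevant
# hits) is not stated in the text, so only the derived total is computed.

identified_db = 8803      # records identified from database searches
identified_other = 40     # records identified from other sources
screened = 6562           # records screened on title/abstract
full_text = 19            # articles retrieved for full-text review
excluded_no_sdm = 9       # did not evaluate SDM's impact on litigation
excluded_no_data = 5      # editorials/opinion articles without data
included = 5              # studies meeting the inclusion criteria

total_identified = identified_db + identified_other
removed_before_screening = total_identified - screened
total_excluded_full_text = excluded_no_sdm + excluded_no_data

assert total_identified == 8843
assert removed_before_screening == 2281   # duplicates + irrelevant hits
assert full_text - total_excluded_full_text == included

print(total_identified, removed_before_screening, included)  # 8843 2281 5
```

The numbers reconcile: 19 full-text articles minus 14 exclusions leaves the 5 included studies.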

PRISMA flow diagram.

The five included studies report data collected in three countries (United States, United Kingdom and Korea) and were published between 1994 and 2008. They encompass the following study designs: four qualitative studies (including two case studies) and one quasi-experimental design. The sample sizes were generally small, ranging from 1 (the case studies) to 886 participants. Two studies involved an intervention designed to enhance informed patient choice and shared decision-making: a video-based decision aid for PSA testing [ 28 ] and a series of evidence-based leaflets for maternity care [ 30 ] (see Table 1). Four studies were conducted with patients from primary and secondary care settings; subjects taking part in the simulated scenario-based study were recruited from the general population [ 28 ]. All participants were adults, mostly white and well educated. None of the studies explicitly reported the use of theoretical models or frameworks.

Robustness of the synthesis

The qualitative studies (n = 2, all designs except case studies) that were rated against CASP had satisfactory quality ratings (see Table 1) [ 27 , 30 ]. For both studies, the research design, sampling and data collection procedures were deemed appropriate; reflexivity, ethical issues and analysis could have been improved. The quality of the case studies (n = 2), which were exclusively based on documented legal cases, was low [ 29 , 31 ]. Areas of concern for both case studies were the absence of dual analysis involving an independent researcher or of triangulation, the lack of clearly formulated questions and of a conceptual framework, and the lack of information about the data collection and data analysis procedures. Their findings should therefore be interpreted with caution. The quality of the quasi-experimental study was low, but consistent with Downs and Black’s average ratings for non-randomized studies [ 28 ]. Given that this study was based on a simulated court case, in which lay people were asked to behave as hypothetical jurors, external validity was poor and internal validity was low, with a high risk of selection bias and other confounding.

Narrative synthesis

The relationships and patterns occurring across the five included studies were thematically analyzed and synthesized. Given the heterogeneity of the included outcomes, it was not possible to synthesize the data according to their outcome measures (see Table 1). The following themes, closely linked to the study outcomes, emerged: 1) Respecting patient preferences; 2) Documenting shared decision-making to meet the standard of care; 3) Can a decision aid offer medico-legal protection? 4) Using “defensive medicine” to minimize malpractice litigation. Three of the four themes are closely aligned with the theory of change, which we had developed before undertaking this analysis and which is represented in Figure 1. Theme 4, however, emerged naturally from the data analysis; we had neither anticipated it nor identified it as prominent.

Theme 1: Respecting patient preferences

One might assume that medical malpractice claims are primarily triggered by an unexpected significant adverse outcome, such as death. Beckman’s descriptive case review [ 27 ] and Um’s case study [ 31 ] suggest that other factors, such as the quality of the doctor-patient relationship, the consideration of and respect for patient preferences, poor communication, and patients’ involvement (or lack of involvement) in the care and decision-making processes, influence, or may even determine, the initiation of malpractice claims. In a qualitative analysis of 45 plaintiff depositions from settled cases, Beckman et al. extracted information about the reason(s) motivating the claim, all information pertaining to the relationship between the claimant and the health provider, and whether a health professional had suggested maloccurrence (i.e. a negative outcome that is not imputable to the quality of care provided by the medical team). The authors independently coded the verbatim transcripts and identified 15 issues and their respective frequencies. Their analysis suggested that problematic patient-provider relationship issues had occurred in 71% of all depositions (inter-rater reliability of 93.3%). The following four categories emerged: not understanding the patient and/or family perspective (13.1%), dysfunctional delivery of information (26.4%), devaluing patient and/or family views (28.9%), and patient abandonment (31.6%). Three of these categories (not understanding the patient and/or family perspective, devaluing patient and/or family views, and patient abandonment) are likely to be associated with the physician’s inability to promote and support the patient and family’s involvement in shared decision-making, and to consider their concerns and preferences.
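As a quick sanity check on the figures quoted above, the four category percentages can be verified to account for all coded relationship issues within rounding. This is a minimal sketch using only the published percentages; the underlying raw counts are not reported in the text and are not assumed here.

```python
# Category shares of problematic relationship issues from Beckman's analysis,
# as quoted in the text; only the published percentages are used.
categories = {
    "not understanding the patient/family perspective": 13.1,
    "dysfunctional delivery of information": 26.4,
    "devaluing patient/family views": 28.9,
    "patient abandonment": 31.6,
}

total = sum(categories.values())
# The four categories together account for all coded issues (100% within rounding).
assert abs(total - 100.0) < 0.1

print(round(total, 1))  # 100.0
```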

Recurrent in Beckman’s analysis was the clinicians’ tendency to ignore patients’ views, thus infringing on their autonomy and interfering with their preferences. Examples of specific issues relating to shared decision-making included: failure to solicit patient and/or family opinion (2.6%), discounting a patient and/or family opinion (5.3%), discounting a family’s attempt to advocate (5.3%), not listening (5.3%), failure to keep a patient and/or family up to date (5.3%).

Um’s case study of a pregnant woman’s struggle to receive care and procedures aligned with her preferences [ 31 ] illustrates the same theme. The lack of maternal involvement in deciding about prenatal testing led her to sue her physician after her newborn baby was diagnosed with Down’s syndrome. Over the course of the pregnancy, the mother, who was aware of a family history of chromosomal abnormality, had repeatedly requested an amniocentesis. Despite her concerns and repeated requests for further invasive testing, the clinician refused to arrange the procedure. The obstetrician was sued on the grounds of negligence and found liable for interfering with the “mother’s right to self-determination”, and thus with her preferences [ 31 ]. Her views and preferences had been clearly expressed, but were overruled by the health care provider. In this instance, a lawsuit could have been avoided if the provider had respected her explicit and informed preferences.

Using data from actual malpractice lawsuits in different contexts and clinical areas, both studies demonstrate that, over and above the importance of good communication, the inability to involve patients in decision-making and to consider their concerns and preferences can incite patients to commence litigation. In Beckman’s study, the authors emphasize the impact of “devaluing a patient’s or family views” on medical malpractice intentions, referring to it as a “particularly risk-laden form of sharing information”.

While the above analysis indicates that discounting patients’ views might increase the risk of litigation, one of the five studies included in this review examined the implications of discussing the pros and cons of PSA testing [ 29 ]. In 1999, when advising a 53-year-old patient, Dr Merenstein reports testifying that he followed the principles of shared decision-making as well as US national guidelines for prostate cancer screening (guidelines of the American Academy of Family Physicians, the American Urological Association and the American Cancer Society). He discussed the pros and cons of screening for a disease that, if left undiagnosed, may not be life threatening, and described the poor accuracy and potential harms of the PSA test. The patient subsequently declined PSA screening. The discussion and the patient’s decision were documented by Merenstein. A few years later, the patient saw another physician, who ordered a PSA test without discussion. This test led to a diagnosis of prostate cancer. The plaintiff denied that an informed discussion about the risks and benefits of the PSA test had occurred. Two physicians testified on behalf of the plaintiff that the standard of care in Virginia was not to discuss the uncertainties of testing, but rather to perform the test. Dr Merenstein’s residency program was found negligent because of the previous failure to perform a PSA test. Dr Merenstein and a defense expert testified that Merenstein had nevertheless followed published clinical practice guidelines, had discussed the harms and uncertainties of the test, and had promoted shared decision-making and his patient’s autonomy. The jury and the plaintiff’s lawyers did not recognize this process as being consistent with the standard of care. While Dr Merenstein was not found negligent, his residency program was found liable and the case was settled without further appellate review. It is important to recognize that many variables may have influenced the jury’s decision.

Theme 2: Documenting shared decision-making to meet the standard of care

In response to the Merenstein trial, Barry et al. conducted a simulation study [ 28 ] to investigate whether involving patients in PSA screening decisions would influence a jury’s verdict, and to examine whether the Merenstein outcome had been atypical. Lay participants were divided into six focus groups and instructed to behave as hypothetical jurors in considering two variations of the Merenstein case. In the first variation, narrated to the first three groups, the physician’s notes did not mention a discussion about the risks and benefits of PSA screening (“no pros and cons note” scenario). In the second variation, presented to the remaining three groups, the physician had clearly documented a discussion about the harms and benefits of PSA screening, resulting in the patient’s informed decision to decline the test (“pros and cons note” scenario). All potential jurors were asked to decide whether the physician had met the standard of care and, if not, whether any harm had been caused. The majority of participants (83%) in the first three groups considered that there was deviation from the standard of care, i.e. negligence, and no informed consent. As one participant commented: “Not documented, not done.”

Further, 61% of the participants in the first three groups also believed that harm had been caused. Although there is no clear evidence that ordering the PSA test would have affected the cancer prognosis, the majority of mock jurors in the first three groups (61%) believed that ordering the test would have saved the patient’s life or significantly improved the outcome. In their words: “Because of the severity of the disease, the doctor should have done the test as a standard process. Even if he explained the pros and cons, I don’t think there should be a question of him not doing the test. He should do it as a standard process with no discussion.” This reflects a trend to use tests and invasive procedures, even if the benefits are unclear and some individuals might thus prefer to decline the test when informed. The participants’ view was reminiscent of the Merenstein judgment.

However, when presented with the “pros and cons” scenario, 72% of all participants considered that promoting patient choice and facilitating shared decision-making met the standard of care, provided the discussion had been documented in the patient’s record. Contrary to the outcome of the Merenstein trial, these findings indicate that embedding and documenting shared decision-making in routine clinical practice could provide a higher degree of medico-legal protection and lead to better informed consent. This should, however, be interpreted in light of contextual factors and study limitations. First, the vote for the second (pros and cons) scenario was not unanimous: 28% believed that the standard of care had not been met, and 23% felt that harm had been caused. Although a minority vote, it mirrors the outcome of the Merenstein case. Second, given the simulated nature of the study, it is difficult to infer whether this outcome would be representative of similar malpractice verdicts. Nevertheless, these findings indicate that shared decision-making documented in the patient’s record could provide what King & Moulton call ‘perfected informed consent’ and, in many instances, may prevent litigation on a failure-to-inform claim (see Figure 3).

Liability risk according to patient involvement in decision-making.

Theme 3: Can a decision aid offer medico-legal protection?

As part of Barry’s study, and after the initial set of votes, all participants were shown a video-based patient decision aid for PSA screening. They were asked to imagine that the decision aid had been provided to the patient prior to his informed refusal, and that this had been documented in the notes. The vote was almost unanimous (94%) that the standard of care would have been met without any harm caused. One participant said: “The tape tells you as much as you can possibly tell a patient.” Another commented: “Let me tell you something, after watching that video, there’s no way that you could not know what it is, the pros and cons, the risks, the quantity, the quality of life, the incontinence, impotence…. Honestly that’s even better than the doctor just saying it to you.” After being shown the decision support intervention, the majority of participants implicitly conceded that offering a choice and discussing the pros and cons were justified, and they eventually gained an understanding of the harms, benefits and controversies of the test. Although this finding is based on a simulation study, it suggests that documenting the use of decision support interventions could be considered the highest standard of care and could reduce the risk of liability.

Theme 4: Using “defensive medicine” to minimize malpractice litigation

Two of the five included studies [ 29 , 30 ] highlight that a number of health professionals believe that promoting the use of technological interventions offers the best protection against litigation. In a maternity clinic, Stapleton and colleagues observed and interviewed clinicians about their use of evidence-based information leaflets, their opinions of them, and their perceived potential impact on litigation. According to their observations, obstetricians tended to minimize, or not mention, the risks of interventions, treatments or screening procedures. Some obstetricians explicitly favored ordering tests and treatments and showed little interest in the views of their patients. They strongly believed that their own clinical recommendation offered better medico-legal protection than an approach based on informing patients and sharing decisions. This view was not shared by the midwives, who, by and large, promoted the use of the decision support interventions. Patients in this study reported feeling ‘bullied’ into undergoing tests and procedures.

The assertion that involving patients in decision-making leads to less litigation has not been extensively studied and cannot yet be confirmed. No empirical research conducted in clinical settings has assessed the impact of shared decision-making and related interventions on medical malpractice litigation. We are thus unable to determine whether or not shared decision-making and related interventions can reduce malpractice litigation. Three of the five studies analyzed here nevertheless provide retrospective and simulated data suggesting that not paying attention to patient preferences, particularly when no effort has been made to inform patients and support their understanding of possible harms and benefits, may put clinicians at a higher risk of litigation. Given the number, heterogeneity and quality of the included studies, however, these findings should be interpreted with caution. Simulated malpractice scenarios indicate that supporting and documenting shared decision-making in patients’ notes, as well as using decision support interventions to support patients’ deliberation, would meet the standard of care and, as such, could offer ‘perfected informed consent’ and might reduce litigation. Nonetheless, two studies also emphasize that health professionals are still wary of promoting and respecting patient preferences. Many continue to believe that ordering more tests and procedures, irrespective of patients’ informed preferences, is a better defense against litigation than the promotion of patient autonomy and informed preferences. This is a complex issue, and one that might discourage clinicians from practicing SDM. Further, it is worth noting that adopting shared decision-making without accurately documenting the process, even when decisions are aligned with the recommendations in clinical guidelines, would not be recognized from a medico-legal perspective as dispositive of the issue (see Figure 3).

Strengths and weaknesses of the study

Several limitations need to be considered. First, studies that met our inclusion criteria were rare and highly heterogeneous; two were court-based case studies from different countries. We decided not to restrict the type of studies included, as we were aware of the lack of data in this area and wanted as comprehensive a perspective as possible. Synthesizing results of such diversity can be problematic and unreliable. In order to limit interpretation biases, we closely followed the guidelines provided by the ESRC framework. We also considered the impact of contextual factors, particularly for the included case studies. For instance, the Merenstein verdict [ 29 ] contradicted the findings of Beckman’s and Um’s studies [ 27 , 31 ]; these apparently contradictory rulings were cautiously interpreted in light of contextual factors. The Merenstein case study is a self-reported case study published in the Journal of the American Medical Association. Its scientific quality and validity are therefore limited, as demonstrated by the quality assessment, but it met the inclusion criteria and provided a unique account of the possible consequences of promoting shared decision-making in naturalistic settings. We do not know whether the Merenstein case is an outlier or whether its outcome may have been typical of similar malpractice litigation cases. It is also important to bear in mind that, despite the impact that the case and self-reported commentary have had over the past eight years, the case has no precedential value as case law. Finally, in relation to Barry’s study, it is worth noting that, in medical malpractice, whether or not the ‘standard of care’ has been met is typically established by medical experts, not by lay people, although the jury makes the final decision. In a real malpractice case, the jury’s decision might have been influenced by the experts’ discussions and opinions as to whether or not the standard of care had been met, which was not the case in Barry’s study. The findings of Barry’s study thus need to be interpreted carefully.

Comparison with other studies

Ample evidence suggests that patients are less likely to sue their physician when the relationship is satisfactory and the provider displays person-centered communication skills [ 17 , 32 - 34 ]. However, as demonstrated by this review, no studies have empirically examined the effect of shared decision-making on lawsuits. This may be due to methodological challenges: litigation is a relatively rare occurrence, and a very large sample size and long-term follow-up would be required to evaluate, in a controlled setting, the exact impact of shared decision-making on litigation incidence and costs. Nonetheless, for over two decades, many have advocated for shared decision-making’s role in informed consent procedures [ 21 , 35 - 38 ]. Green, in 1988, urged physicians and patients to engage in collaborations and agreements about the optimal course of care, to document it in the patient’s chart, and to use “questionnaires that elicit values, preferences and needs”, which we now refer to as decision support interventions [ 35 ].

More recently, King and Moulton examined the principles underpinning informed consent law in the United States and showed that existing medical consent procedures (i.e. the physician-based and patient-based standards) are not aligned with advances in medicine, easy access to information, growing expectations, and recent policy developments promoting patient autonomy and involvement in health care [ 5 ]. The authors suggest a revision of the informed consent doctrine and propose shared decision-making as a prerequisite and adjunct to the informed consent process between provider and patient for preference-sensitive care or situations of clinical equipoise.

This proposition is supported by empirical evidence on the effectiveness of informed consent procedures [ 38 - 41 ], suggesting that informed consent is practiced today through the “ritualistic recitation of risks and benefits” [ 42 ], enumerated on a written form with little or no engagement of the patient and little or no evaluation of whether the patient understands any of the risks. There is no bidirectional discourse. Many clinicians regard informed consent as the patient’s signature on a piece of paper and do not recognize that it is a process of communication between provider and patient, of which the discussion of risks, benefits and alternatives is an essential part. In a content analysis of hospital informed consent forms, Bottrell et al. [ 38 ] established that risks were not systematically portrayed and that the majority of consent forms were used as mere treatment authorizations, regardless of the quality of the consent process undergone. Patients cannot appreciate risks and make an informed decision if the harms, benefits and alternative options are not presented in a comprehensive and accessible manner, and they may later regret having consented to treatments they had very little knowledge of. The shortcomings of current consent procedures could be remedied by promoting shared decision-making in the clinical encounter and the use of decision support interventions, thus ensuring that patients are thoroughly aware of all aspects of the treatment or screening options available before engaging in a course of care [ 38 , 43 - 46 ]. Promoting shared decision-making and decision support interventions would help ensure that patients’ preferences and self-determination are respected and that patients are fully informed of all possible outcomes, harms, benefits and alternative health options. Had such a process occurred in Um’s case, the lawsuit could have been avoided and the patient would have received the desired course of care.

However, a sizeable proportion of health professionals continue to practice defensive medicine and believe in the virtue of ordering more tests and procedures to avoid or reduce litigation risks [ 47 - 49 ]. Gattellari et al. confirmed the common tendency to opt for optional tests when people possess poor knowledge of the harms and benefits of the available options [ 50 ]. When presented with hypothetical scenarios without explanations of the harms and benefits of PSA screening or mention of national guidelines, the majority of participants felt that the test should have been ordered and that the GP should be found liable for the patient’s negative outcomes. Further, a study of the impact of national guidelines on physicians’ perceptions of medico-legal risk for PSA screening found that 46% of surveyed GPs still perceived a medico-legal risk if the test was not performed, despite clear national guidelines [ 47 ].

The desirability of patient involvement in medical decision-making and its role in informed consent procedures have been advocated for decades. Many have assumed that promoting shared decision-making would reduce preventable litigation. However, there is to date no clear evidence to confirm that shared decision-making can indeed reduce medical malpractice litigation. Given the number, heterogeneity and quality of included studies, the review provides insufficient evidence to draw firm conclusions. Data from retrospective and simulated studies seem to indicate that shared decision-making, and the use of good quality decision support interventions, could offer some level of medico-legal protection. It also highlights some clinicians’ reticence to consider that a patient who has understood the risks and implications of various tests, treatments or surgical procedures, might be less likely to sue. Many clinicians believe that their own medical opinion, combined with more tests and procedures, remains the best protection against litigation, as illustrated by the jurors’ verdict in the Merenstein judgment.

While most countries are yet to embed shared decision-making in legal reforms of informed consent, Washington State passed legislation in 2007 amending its informed consent law to offer physicians who practice shared decision-making with a certified decision aid a higher degree of protection against failure-to-inform lawsuits. Five states have now promoted shared decision-making in state law. In Massachusetts, for example, an entity must be certified by the state to operate as a medical home or an accountable care organization, and must in turn encourage shared decision-making for certain preference-sensitive conditions in order to qualify.

Shared decision-making, and the use of decision support interventions, will only offer an effective alternative to current informed consent procedures if they become embedded in legislative health policy reforms and in common law. Shared decision-making constitutes an overwhelming ethical imperative in the context of preference-sensitive care, where the failure to elicit and act on patients' preferences is tantamount to operating on the wrong patient [ 51 ]. Nevertheless, more empirical data are needed to determine the impact of shared decision-making on preventable litigation and potentially overcome clinicians' reticence and the illusory protection that defensive medicine seems to provide.

Abbreviations

ESRC: Economic and Social Research Council

PSA: Prostate-specific antigen (test)

SDM: Shared decision-making

References

Senate and House of Representatives. Patient Protection and Affordable Care Act. Washington: 2010.

Arterburn D, Wellman R, Westbrook E, Rutter C, Ross T, McCulloch D, et al. Introducing decision aids at Group Health was linked to sharply lower hip and knee surgery rates and costs. Health Aff (Millwood). 2012;31(9):2094–104.

Department of Health. Equity and excellence: liberating the NHS. NHS White Paper. 2010.

Senate and House of Representatives, 111th Congress. Patient Protection and Affordable Care Act. Washington: 2010. HR 3590.

King JS, Moulton BW. Rethinking informed consent: the case for Shared Medical Decision-making. Am J Law Med. 2006;32(4):429–501.

Godlee F. Clinical Evidence. London: BMJ Publishing Group; 2005.

Feldman-Stewart D, Brundage MD, McConnell BA, MacKillop WJ. Practical issues in assisting shared decision-making. Health Expect. 2000;3(1):46–54. Epub 2001/04/03.

Oshima Lee E, Emanuel EJ. Shared decision-making to improve care and reduce costs. N Engl J Med. 2013;368:6–8.

Coulter A, Cleary PD. Patients’ experiences with hospital care in five countries. Health Aff (Millwood). 2001;20(3):244–52. Epub 2001/10/05.

Grol R, Wensing M, Mainz J, Jung HP, Ferreira P, Hearnshaw H, et al. Patients in Europe evaluate general practice care: an international comparison. Br J Gen Pract. 2000;50(460):882–7. Epub 2001/01/06.

Roter D. The patient-physician relationship and its implications for malpractice litigation. J Health Care L & Pol'y. 2006;9:304–14.

Hickson GB, Clayton EW, Entman SS, Miller CS, Githens PB, Whetten-Goldstein K, et al. Obstetricians’ prior malpractice experience and patients’ satisfaction with care. JAMA. 1994;272(20):1583–7.

Stelfox HT, Gandhi TK, Orav EJ, Gustafson ML. The relation of patient satisfaction with complaints against physicians and malpractice lawsuits. Am J Med. 2005;118(10):1126–33. Epub 2005/10/01.

Vincent C, Young M, Phillips A. Why do people sue doctors? A study of patients and relatives taking legal action. Lancet. 1994;343(8913):1609–13. Epub 1994/06/25.

Taylor DM, Wolfe RS, Cameron PA. Analysis of complaints lodged by patients attending Victorian hospitals, 1997–2001. Med J Aust. 2004;181(1):31–5. Epub 2004/07/06.

Wofford MM, Wofford JL, Bothra J, Kendrick SB, Smith A, Lichstein PR. Patient complaints about physician behaviors: a qualitative study. Acad Med. 2004;79(2):134–8. Epub 2004/01/28.

Levinson W, Roter DL, Mullooly JP, Dull VT, Frankel RM. Physician-patient communication: the relationship with malpractice claims among primary care physicians and surgeons. JAMA. 1997;277(7):553–9. Epub 1997/02/19.

Elwyn G, Laitner S, Coulter A, Walker E, Watson P, Thomson R. Implementing shared decision-making in the NHS. BMJ. 2010;341:c5146. Epub 2010/10/16.

Stacey D, Bennett CL, Barry MJ, Col NF, Eden KB, Holmes-Rovner M, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2014;(1).

Arterburn D, Wellman R, Westbrook E, Rutter C, Ross T, McCulloch D, et al. Introducing decision aids at Group Health was linked to sharply lower hip and knee surgery rates and costs. Health Aff (Millwood). 2012;31(9):2094–104. Epub 2012/09/06.

Kamal P, Dixon-Woods M, Kurinczuk JJ, Oppenheimer C, Squire P, Waugh J. Factors influencing repeat caesarean section: qualitative exploratory study of obstetricians’ and midwives’ accounts. BJOG. 2005;112(8):1054–60. Epub 2005/07/28.

Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097. Epub 2009/07/22.

Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies in health care interventions. J Epidemiol Community Health. 1998;52:377–84.

Public Health Resource Unit. Critical Appraisal Skills Programme (CASP). 2007. Available at: http://www.casp-uk.net.

Atkins C, Sampson J. Critical appraisal guidelines for single case study research. ECIS. 2002;6(8):100–9.

Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the conduct of narrative synthesis in systematic reviews: A product from the ESRC Methods Programme. 2006.

Beckman HB, Markakis KM, Suchman AL, Frankel RM. The doctor-patient relationship and malpractice: lessons from plaintiff depositions. Arch Intern Med. 1994;154(12):1365–70. Epub 1994/06/27.

Barry MJ, Wescott PH, Reifler EJ, Chang Y, Moulton BW. Reactions of potential jurors to a hypothetical malpractice suit alleging failure to perform a prostate-specific antigen test. J Law Med Ethics. 2008;36(2):396–402. Epub 2008/06/13.

Merenstein D. A piece of my mind. Winners and losers. JAMA. 2004;291(1):15–6. Epub 2004/01/08.

Stapleton H, Kirkham M, Thomas G. Qualitative study of evidence based leaflets in maternity care. BMJ. 2002;324(7338):639.

Um YR. A critique of a ‘wrongful life’ lawsuit in Korea. Nurs Ethics. 2000;7(3):250–61. Epub 2000/09/15.

Flocke SA, Miller WL, Crabtree BF. Relationships between physician practice style, patient satisfaction, and attributes of primary care. J Fam Pract. 2002;51(10):835–40. Epub 2002/10/29.

Moore PJ, Adler NE, Robertson PA. Medical malpractice: the effect of doctor-patient relations on medical patient perceptions and malpractice intentions. West J Med. 2000;173(4):244–50. Epub 2000/10/06.

Roter DL, Stewart M, Putnam SM, Lipkin Jr M, Stiles W, Inui TS. Communication patterns of primary care physicians. JAMA. 1997;277(4):350–6.

Green JA. Minimizing malpractice risks by role clarification: the confusing transition from tort to contract. Ann Intern Med. 1988;109(3):234–41.

Monico EP. Ramifications of shared decision-making: changing medical malpractice from tort to contract. Conn Med. 2009;73(4):233–4.

Monico EP, Calise A, Calabro J. Torts to contract? Moving from informed consent to shared decision-making. J Healthc Risk Manag. 2008;28(4):7–12.

Bottrell MM, Alpert H, Fischbach RL, Emanuel LL. Hospital informed consent for procedure forms: facilitating quality patient-physician interaction. Arch Surg. 2000;135(1):26–33. Epub 2000/01/15.

Mein E, Alaani A, Jones RV. Consent for mastoidectomy: a patient’s perspective. Auris Nasus Larynx. 2007;34(4):505–9.

Braddock CH, Fihn SD, Levinson W, Jonsen AR, Pearlman RA. How doctors and patients discuss routine clinical decisions: informed decision-making in the outpatient setting. J Gen Intern Med. 1997;12(6):339–45. Epub 1997/06/01.

Mazur DJ. What should patients be told prior to a medical procedure? Ethical and legal perspectives on medical informed consent. Am J Med. 1986;81(6):1051–4. Epub 1986/12/01.

Abram M. The ethical and legal implications of informed consent in the patient-practitioner relationship. Washington D.C: President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research; 1982.

O'Connell K. Two arms, two choices: if only I'd known then what I know now. Health Aff (Millwood). 2012;31(8):1895–9. Epub 2012/08/08.

Bernat JL, Peterson LM. Patient-centered informed consent in surgical practice. Arch Surg. 2006;141(1):86–92. Epub 2006/01/18.

Karlawish JH. Shared decision-making in critical care: a clinical reality and an ethical necessity. Am J Crit Care. 1996;5(6):391–6. Epub 1996/11/01.

Widdershoven GAM, Verheggen FWSM. Improving informed consent by implementing shared decision-making in health care. IRB. 1998;21(4):1–5.

Girgis S, Ward JE, Thomson CJ. General practitioners' perceptions of medicolegal risk: using case scenarios to assess the potential impact of prostate cancer screening guidelines. Med J Aust. 1999;171(7):362–6. Epub 1999/12/11.

Austin OJ, Valente S, Hasse LA, Kues JR. Determinants of prostate-specific antigen test use in prostate cancer screening by primary care physicians. Arch Fam Med. 1997;6(5):453–8. Epub 1997/09/26.

Asher E, Greenberg-Dotan S, Halevy J, Glick S, Reuveni H. Defensive medicine in Israel - a nationwide survey. PLoS One. 2012;7(8):e42613. Epub 2012/08/24.

Gattellari M, Ward JE. Will men attribute fault to their GP for adverse effects arising from controversial screening tests? An Australian study using scenarios about PSA screening. J Med Screen. 2004;11(4):165–9. Epub 2004/11/27.

Mulley Jr. AG, Trimble C, Elwyn G. Patients’ preferences matter: Stop the silent misdiagnosis. London: 2012.

Acknowledgments

We would like to thank Dr Frances Bunn for her advice in the early stages of the systematic review process.

This systematic review is the personal work of the authors and was undertaken without dedicated funding.

Author information

Authors and affiliations

The Dartmouth Institute for Health Policy and Clinical Practice, Dartmouth College, Hanover, USA

Marie-Anne Durand & Glyn Elwyn

Department of Psychology, University of Hertfordshire, Hatfield, UK

Marie-Anne Durand & Elizabeth Cockle

Informed Medical Decisions Foundation, Boston, USA

Benjamin Moulton

Harvard School of Public Health and Boston University Law School, Boston, USA

Boston University Law School, Boston, USA

Support Unit for Research Evidence, Cardiff University, Cardiff, UK

The Dartmouth Center for Health Care Delivery Science, Hanover, USA

Corresponding author

Correspondence to Marie-Anne Durand .

Additional information

Competing interests

BM has an employment relationship with the Informed Medical Decisions Foundation and a royalties relationship with Health Dialog.

Authors’ contributions

MM developed the search strategy. M-AD carried out the search. MM, M-AD and EC screened all retrieved citations. M-AD and EC extracted data and undertook the quality assessment. M-AD carried out the narrative synthesis. BM and GE moderated discussions about inclusion and provided guidance and critical appraisal at all stages of the review process. MM, BM and GE reviewed all successive drafts of the systematic review. All authors read and approved the final manuscript.

Additional files

Additional file 1:

Systematic review protocol.

Additional file 2:

PRISMA checklist.

Additional file 3:

Search strategy.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/4.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

Reprints and permissions

About this article

Cite this article

Durand, MA., Moulton, B., Cockle, E. et al. Can shared decision-making reduce medical malpractice litigation? A systematic review. BMC Health Serv Res 15 , 167 (2015). https://doi.org/10.1186/s12913-015-0823-2

Download citation

Received : 29 May 2014

Accepted : 26 March 2015

Published : 18 April 2015

DOI : https://doi.org/10.1186/s12913-015-0823-2

Keywords

  • Decision-making
  • Informed consent
  • Malpractice
  • Decision support techniques

BMC Health Services Research

ISSN: 1472-6963

Critical Appraisal of Research Articles: Clinical Practice Guidelines

What is a Clinical Practice Guideline?

A clinical practice guideline is a systematically developed statement designed to help health care professionals make decisions about appropriate health care for specific clinical circumstances.

Guidelines attempt to do this by:

  • Describing a range of generally accepted approaches for the diagnosis, management, or prevention of specific diseases or conditions.
  • Defining practices that meet the needs of most patients in most circumstances.

Find Clinical Practice Guidelines:

1. Go to ClinicalKey and limit the search to "Guidelines".

2. When searching MEDLINE, click on "Additional Limits", and limit by publication type (Practice Guideline).
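The publication-type limit in step 2 can also be applied programmatically via PubMed's `[pt]` search-field tag. As an illustrative sketch (the E-utilities ESearch endpoint is NCBI's public API; the function name and example topic are our own), a query URL limited to practice guidelines might be built like this:

```python
from urllib.parse import urlencode

# Public NCBI E-utilities search endpoint (assumed current; see NCBI docs).
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def guideline_query_url(topic: str, retmax: int = 20) -> str:
    """Return an ESearch URL restricted to practice guidelines on `topic`."""
    # [pt] is PubMed's publication-type field tag; "guideline[pt]" mirrors
    # the "Practice Guideline" limit described in step 2 above.
    term = f"({topic}) AND guideline[pt]"
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax}
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

# Hypothetical example topic; fetching the URL would return matching PMIDs.
print(guideline_query_url("prostate cancer screening"))
```

This only constructs the query; retrieving and parsing the JSON response (e.g. with `urllib.request`) would follow the usual E-utilities workflow.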

Questions to Ask

Scope and Purpose

  • Are the overall objectives of the guideline specifically described?
  • Is the clinical question(s) covered by the guideline specifically described?
  • Are the patients to whom the guideline is meant to apply specifically described?

Stakeholder Involvement

  • Does the guideline development group include individuals from all of the relevant professional groups?
  • Have the patients' views and preferences been sought?
  • Have the target users of the guideline been clearly defined?
  • Has the guideline been piloted among target users?

Rigour of Development

  • Were systematic methods used to search for evidence?
  • Are the criteria for selecting the evidence clearly described?
  • Were the methods used for formulating the recommendations clearly described?
  • Were the health benefits, side effects, and risks considered in formulating the recommendations?
  • Is there an explicit link between the recommendations and the supporting evidence?
  • Has the guideline been externally reviewed by experts prior to its publication?
  • Is a procedure for updating the guideline provided?

Clarity and Presentation

  • Are the recommendations specific and unambiguous?
  • Are different options for management of the condition clearly presented?
  • Are key recommendations clearly identifiable?
  • Is the guideline supported with tools for application?

Applicability

  • Have the potential organizational barriers to applying the recommendations been discussed?
  • Have the potential cost implications of applying the recommendations been considered?
  • Does the guideline present key review criteria for monitoring and/or auditing purposes?

Editorial Independence

  • Is the guideline editorially independent from the funding body?
  • Have conflicts of interest of guideline development members been recorded?

Appraisal of Clinical Practice Guidelines

The AGREE (Appraisal of Guidelines for Research & Evaluation) Instrument is an internationally used tool that assesses the methodological rigour and transparency with which a guideline is developed.

The purpose of the AGREE Instrument is to provide a framework for assessing the quality of clinical practice guidelines.

The AGREE Instrument is designed to assess guidelines developed by local, regional, national, or international groups or affiliated government organizations. These include:

  • New guidelines
  • Existing guidelines
  • Updates of existing guidelines

PDF of the latest AGREE Instrument

  • Last Updated: Jan 12, 2024 8:26 AM
  • URL: https://guides.himmelfarb.gwu.edu/CriticalAppraisal

Can shared decision-making reduce medical malpractice litigation? A systematic review

  • PMID: 25927953
  • PMCID: PMC4409730
  • DOI: 10.1186/s12913-015-0823-2

Background: To explore the likely influence and impact of shared decision-making on medical malpractice litigation and patients' intentions to initiate litigation.

Methods: We included all observational, interventional and qualitative studies published in all languages, which assessed the effect or likely influence of shared decision-making or shared decision-making interventions on medical malpractice litigation or on patients' intentions to litigate. The following databases were searched from inception until January 2014: CINAHL, Cochrane Register of Controlled Trials, Cochrane Database of Systematic Reviews, EMBASE, HMIC, Lexis library, MEDLINE, NHS Economic Evaluation Database, Open SIGLE, PsycINFO and Web of Knowledge. We also hand searched reference lists of included studies and contacted experts in the field. Downs & Black quality assessment checklist, the Critical Appraisal Skills Programme qualitative tool, and the Critical Appraisal Guidelines for single case study research were used to assess the quality of included studies.

Results: 6562 records were screened and 19 articles were retrieved for full-text review. Five studies were included in the review. Due to the number and heterogeneity of included studies, we conducted a narrative synthesis adapted from the ESRC guidance for narrative synthesis. Four themes emerged. The analysis confirms the absence of empirical data necessary to determine whether or not shared decision-making promoted in the clinical encounter can reduce litigation. Three out of five included studies provide retrospective and simulated data suggesting that ignoring or failing to diagnose patient preferences, particularly when no effort has been made to inform and support understanding of possible harms and benefits, puts clinicians at a higher risk of litigation. Simulated scenarios suggest that documenting the use of decision support interventions in patients' notes could offer some level of medico-legal protection. Our analysis also indicated that a sizeable proportion of clinicians prefer ordering more tests and procedures, irrespective of patient informed preferences, as protection against litigation.

Conclusions: Given the lack of empirical data, there is insufficient evidence to determine whether or not shared decision-making and the use of decision support interventions can reduce medical malpractice litigation. Further investigation is required.

Trial registration: This review was registered on PROSPERO.

Registration number: CRD42012002367.

Publication types

  • Systematic Review

MeSH terms

  • Aged, 80 and over
  • Decision Making*
  • Informed Consent
  • Malpractice / legislation & jurisprudence*
  • Middle Aged
  • Retrospective Studies
  • Young Adult