
Qualitative Research Resources: Assessing Qualitative Research

Created by health science librarians.



About this Page

  • LEGEND (Let Evidence Guide Every New Decision) assessment tools: Cincinnati Children's Hospital
  • EQUATOR Network: Enhancing the Quality and Transparency of Health Research
  • Other tools for assessing qualitative research


Why is this information important?

  • Qualitative research typically focuses on collecting very detailed information on a few cases and often addresses meaning, rather than objectively identifiable factors.
  • This means that typical markers of research quality for quantitative studies, such as validity and reliability, cannot be used to assess qualitative research.

On this page you'll find:

The resources on this page will guide you to some of the alternative measures/tools you can use to assess qualitative research.

Evidence Evaluation Tools and Resources

The Cincinnati Children's Hospital LEGEND (Let Evidence Guide Every New Decision) site has a number of resources for evaluating health sciences research across a variety of designs/study types, including an Evidence Appraisal form for qualitative research (in the table), as well as forms for mixed methods studies from a variety of clinical question domains. The site includes information on the following:

  • Evaluating the Evidence Algorithm (pdf download)
  • Evidence Appraisal Forms ( see Domain of Clinical Questions Table )
  • Table of Evidence Levels (pdf download)
  • Grading a Body of Evidence (pdf download)
  • Judging the Strength of a Recommendation (pdf download)
  • LEGEND Glossary (pdf download)
EQUATOR Network: Enhancing the Quality and Transparency of Health Research

  • EQUATOR: Qualitative Research Reporting Guidelines
  • EQUATOR Network Home

The EQUATOR Network is an ‘umbrella’ organisation that brings together researchers, medical journal editors, peer reviewers, developers of reporting guidelines, research funding bodies and other collaborators with mutual interest in improving the quality of research publications and of research itself. 

The EQUATOR Library contains a comprehensive searchable database of reporting guidelines for many study types, including qualitative research, and also links to other resources relevant to research reporting:

  • Library for health research reporting:  provides an up-to-date collection of guidelines and policy documents related to health research reporting. These are aimed mainly at authors of research articles, journal editors, peer reviewers and reporting guideline developers.
  • Toolkits to support writing up research, using reporting guidelines, teaching research skills, and selecting the appropriate reporting guideline
  • Courses and events
  • Librarian Network

Also see the Articles section below; some of the articles include checklists or tools.

Most checklists and tools are meant to help you think critically and systematically when appraising research. To use them appropriately, consult any accompanying materials, such as manuals, handbooks, and the cited literature. A broad understanding of the variety and complexity of qualitative research is generally necessary, along with an understanding of the underlying philosophical perspectives and knowledge of specific qualitative research methods and their implementation.

  • CASP/Critical Assessment Skills Programme Tool for Evaluating Qualitative Research 2018
  • CASP Knowledge Hub Includes critical appraisal checklists for key study designs; glossary of key research terms; key links related to evidence based healthcare, statistics, and research; a bibliography of articles and research papers about CASP and other critical appraisal tools and approaches 1993-2012.
  • JBI (Joanna Briggs Institute) Manual for Evidence Synthesis (2020). See Chapter 2: Systematic reviews of qualitative evidence, with its appendices (Appendix 2.1: JBI Critical Appraisal Checklist for Qualitative Research; Appendix 2.2: Discussion of JBI qualitative critical appraisal criteria; Appendix 2.3: JBI qualitative data extraction tool), and Chapter 8: Mixed methods systematic reviews. Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020. Available from https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-01
  • McGill Mixed Methods Appraisal Tool (MMAT) Front Page Public wiki site for the MMAT: The MMAT is intended to be used as a checklist for concomitantly appraising and/or describing studies included in systematic mixed studies reviews (reviews including original qualitative, quantitative and mixed methods studies). The MMAT was first published in 2009. Since then, it has been validated in several studies testing its interrater reliability, usability and content validity. The latest version of the MMAT was updated in 2018.
  • McGill Mixed Methods Appraisal Tool (MMAT) 2018 User Guide See full site (public wiki link above) for additional information, including FAQ's, references and resources, earlier versions, and more.
  • McMaster University Critical Review Form & Guidelines for Qualitative Studies v2.0 Includes links to the Qualitative Review Form (v2.0) and accompanying guidelines from the Evidence Based Practice Research Group of McMaster University's School of Rehabilitation Science. Links are also provided for Spanish, German, and French versions.
  • NICE Quality Appraisal Checklist-Qualitative Studies, 3rd ed, 2012, from UK National Institute for Health and Care Excellence Includes checklist and notes on its use. From Methods for the Development of NICE Public Health Guidance, 3rd edition. © Copyright National Institute for Health and Clinical Excellence, 2006 (updated 2012). All rights reserved. This material may be freely reproduced for educational and not-for-profit purposes. No reproduction by or for commercial organisations, or for commercial purposes, is allowed without the express written permission of the Institute.
  • NICE Quality Appraisal Checklist-Qualitative Studies, 3rd ed. (.pdf download) Appendix H Checklist and Notes download.
  • Qualitative Research Review Guidelines, RATS
  • SBU Swedish Agency for Health Technology Assessment and Assessment of Social Services Evaluation and synthesis of studies using qualitative methods of analysis, 2016. Appendix 2 of this document (at the end) contains a checklist for evaluating qualitative research. SBU. Evaluation and synthesis of studies using qualitative methods of analysis. Stockholm: Swedish Agency for Health Technology Assessment and Assessment of Social Services (SBU); 2016.
  • Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, 3rd ed (JAMA Evidence) Chapter 13.5 Qualitative Research
  • Slides: Appraising Qualitative Research from Users' Guide to the Medical Literature, 3rd edition Click on the 'Related Content' tab to find the link to download the Appraising Qualitative Research slides.

Articles

These articles address a range of issues related to understanding and evaluating qualitative research; some include checklists or tools.

Clissett, P. (2008) "Evaluating Qualitative Research." Journal of Orthopaedic Nursing 12: 99-105.

Cohen, Deborah J. and Benjamin F. Crabtree. (2008) "Evidence for Qualitative Research in Health Care: Controversies and Recommendations." Annals of Family Medicine 6(4): 331-339.

  • Supplemental Appendix 1. Search Strategy for Criteria for Qualitative Research in Health Care
  • Supplemental Appendix 2. Publications Analyzed: Health Care Journals and Frequently Referenced Books and Book Chapters (1980-2005) That Posited Criteria for "Good" Qualitative Research.

Dixon-Woods, M., R.L. Shaw, S. Agarwal, and J.A. Smith. (2004) "The Problem of Appraising Qualitative Research." Qual Saf Health Care 13: 223-225.

Fossey, E., C. Harvey, F. McDermott, and L. Davidson. (2002) "Understanding and Evaluating Qualitative Research." Australian and New Zealand Journal of Psychiatry 36(6): 717-732.

Hammarberg, K., M. Kirkman, S. de Lacey. (2016) "Qualitative Research Methods: When to Use and How to Judge them." Human Reproduction 31 (3): 498-501.

Lee, J. (2014) "Genre-Appropriate Judgments of Qualitative Research." Philosophy of the Social Sciences 44(3): 316-348. (The author discusses three strategies for evaluating qualitative research, arguing that one is more appropriate and accurate than the other two.)

Majid, Umair and Meredith Vanstone. (2018) "Appraising Qualitative Research for Evidence Syntheses: A Compendium of Quality Appraisal Tools." Qualitative Health Research 28(13): 2115-2131. PMID: 30047306. DOI: 10.1177/1049732318785358

Meyrick, Jane. (2006) "What is Good Qualitative Research? A First Step towards a Comprehensive Approach to Judging Rigour/Quality." Journal of Health Psychology 11(5): 799-808.

Miles, MB, AM Huberman, and J Saldana. (2014) Qualitative Data Analysis. Thousand Oaks, California: SAGE Publications, Inc. Chapter 11: Drawing and Verifying Conclusions. Check availability of the print book.

Morse, JM. (1997) "'Perfectly Healthy, but Dead': The Myth of Inter-Rater Reliability." Qualitative Health Research 7(4): 445-447.

O’Brien BC, Harris IB, Beckman TJ, et al. (2014) "Standards for Reporting Qualitative Research: A Synthesis of Recommendations." Acad Med 89(9): 1245-1251. DOI: 10.1097/ACM.0000000000000388 PMID: 24979285

The Standards for Reporting Qualitative Research (SRQR) consists of 21 items. The authors define and explain key elements of each item and provide examples from recently published articles to illustrate ways in which the standards can be met. The SRQR aims to improve the transparency of all aspects of qualitative research by providing clear standards for reporting qualitative research. These standards will assist authors during manuscript preparation, editors and reviewers in evaluating a manuscript for potential publication, and readers when critically appraising, applying, and synthesizing study findings.

Ryan, Frances, Michael Coughlin, and Patricia Cronin. (2007) "Step by Step Guide to Critiquing Research: Part 2, Qualitative Research." British Journal of Nursing 16(12): 738-744.

Stige, B, K. Malterud, and T. Midtgarden. (2009) "Toward an Agenda for Evaluation of Qualitative Research." Qualitative Health Research 19(10): 1504-1516.

Tong, Allison and Mary Amanda Dew. (2016) "Qualitative Research in Transplantation: Ensuring Relevance and Rigor." Transplantation. (Epub ahead of print.)

Tong, Allison, Peter Sainsbury, and Jonathan Craig. (2007) "Consolidated Criteria for Reporting Qualitative Research (COREQ): A 32-Item Checklist for Interviews and Focus Groups." International Journal for Quality in Health Care 19(6): 349-357. https://doi.org/10.1093/intqhc/mzm042

The criteria included in COREQ, a 32-item checklist, can help researchers to report important aspects of the research team, study methods, context of the study, findings, analysis and interpretations. Items most frequently included in the checklists related to sampling method, setting for data collection, method of data collection, respondent validation of findings, method of recording data, description of the derivation of themes and inclusion of supporting quotations. We grouped all items into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and reporting.

Tracy, Sarah. (2010) "Qualitative Quality: Eight 'Big-Tent' Criteria for Excellent Qualitative Research." Qualitative Inquiry 16(10): 837-851.

  • Critical Appraisal Skills Programme
  • IMPSCI (Implementation Science) Tutorials
  • Johns Hopkins: Why Mixed Methods?
  • Measuring, Learning, and Evaluation Project for the Urban Reproductive Health Initiative This project ran 2010-2015. Some project resources are still available.
  • NIH OBSSR (Office of Behavioral & Social Sciences Research) Best Practices for Mixed Methods Research in Health Sciences, 2011 The OBSSR commissioned a team in 2010 to develop a resource that would provide guidance to NIH investigators on how to rigorously develop and evaluate mixed methods research applications. Authors: John W. Creswell, Ph.D., University of Nebraska-Lincoln; Ann Carroll Klassen, Ph.D., Drexel University; Vicki L. Plano Clark, Ph.D., University of Nebraska-Lincoln; Katherine Clegg Smith, Ph.D., Johns Hopkins University; with the assistance of a specially appointed working group.
  • NIH OBSSR Qualitative Methods in Health Research Legacy Resource: The Office of Behavioral and Social Sciences Research/OBSSR sponsored a workshop in 1999 entitled Qualitative Methods in Health Research: Opportunities and Considerations in Application and Review. The workshop brought together 12 researchers who served on NIH review committees or had been successful in obtaining funding from NIH. Note: the original link (formerly https://obssr-archive.od.nih.gov/pdf/Qualitative.PDF) is no longer working on the OBSSR website; see https://obssr.od.nih.gov/about-us/publications/
  • NSF Workshop on Interdisciplinary Standards for Systematic Qualitative Research On May 19-20, 2005, a workshop on Interdisciplinary Standards for Systematic Qualitative Research was held at the National Science Foundation (NSF) in Arlington, Virginia. The workshop was cofunded by a grant from four NSF Programs—Cultural Anthropology, Law and Social Science, Political Science, and Sociology… It is well recognized that each of the four disciplines has different research design and evaluation cultures, as well as considerable variability in the emphasis on interpretation and explanation, commitment to constructivist and positivist epistemologies, and the degree of perceived consensus about the value and prominence of qualitative research methods. Within this multidisciplinary and multimethods context, twenty-four scholars from the four disciplines were charged to (1) articulate the standards used in their particular field to ensure rigor across the range of qualitative methodological approaches; (2) identify common criteria shared across the four disciplines for designing and evaluating research proposals and fostering multidisciplinary collaborations; and (3) develop an agenda for strengthening the tools, training, data, research design, and infrastructure for research using qualitative approaches.
  • Qualitative Research for Improved Programs
  • Qualitative Research Methods: A Data Collector's Field Guide (2005) From FHI 360/Family Health International with support from USAID. Natasha Mack, Cynthia Woodsong, Kathleen M. MacQueen, Greg Guest, and Emily Namey. The guide is divided into five modules covering the following topics: Module 1 – Qualitative Research Methods Overview Module 2 – Participant Observation Module 3 – In-Depth Interviews Module 4 – Focus Groups Module 5 – Data Documentation and Management
  • Robert Wood Johnson Foundation Guidelines for Designing, Analyzing, and Reporting Qualitative Research
  • Robert Wood Johnson Foundation: Qualitative Research Guidelines Project
  • Last Updated: Feb 19, 2024 12:47 PM
  • URL: https://guides.lib.unc.edu/qual



Qualitative evidence synthesis to improve implementation of clinical guidelines


Christopher Carroll, reader in systematic review and evidence synthesis
School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
c.carroll{at}shef.ac.uk

Christopher Carroll argues that generic advice to share decision making is insufficient and that successful clinical guidelines need to reflect disease specific insights into patients’ experiences, views, beliefs, and priorities

As Sackett and colleagues wrote 20 years ago, evidence based practice involves the use of the “best external evidence” to inform clinical decision making. 1 The published evidence used to underpin clinical guidelines, including those produced by the National Institute for Health and Care Excellence (NICE) in the UK, is almost exclusively quantitative. This is understandable as the principal focus is efficacy and safety: the aim is to establish what works. However, Sackett and colleagues were also clear that clinical practice should take account of patients’ preferences.

This is currently achieved by patient involvement in the process 2 and by using primary qualitative research, which uses techniques such as interviews to explore how and why patients make the decisions they do. 3 4 But a synthesis of such qualitative research studies paints a rich, subtle, and useful picture of patients’ experience, views, beliefs, and priorities, and could improve the implementation of clinical guidelines.

What is qualitative evidence synthesis?

Synthesis of quantitative studies using techniques such as meta-analysis promises greater power, more precise results, and the possibility of generalising from statistically representative samples. 5 The basic rationale behind the synthesis of qualitative evidence is similar: to make the most of relevant studies for the purposes of policy and practice. 4 The synthesis of several relevant qualitative studies can offer multiple perspectives as well as providing evidence of contradictory viewpoints that might otherwise be missed when considering a single study alone. 4 Qualitative evidence synthesis also enables researchers to “go beyond” the findings of such primary research studies and produce something that is more than their simple sum. 6

Because of its small sample sizes, qualitative research is often criticised for lack of generalisability. However, this assumes that qualitative studies have the same purpose and measure the same outcomes as their quantitative equivalents. They do not. Rather, they fill some of the gaps left by the quantitative evidence, and the type of generalisability they offer is different. For example, qualitative study samples can be “informationally” rather than statistically representative, in the sense that they can offer information that is applicable to many other people with a similar condition or receiving similar treatments. 4 Rather than seeking to offer an alternative method of measuring efficacy and safety, qualitative evidence and its synthesis aim mainly to provide something that the quantitative evidence often does not, such as identifying and explaining patient behaviours. It could be argued that such evidence is essential for true evidence based practice.

Standards already exist for the conduct and reporting of qualitative evidence synthesis. 7 Approaches to synthesis can be aggregative (such as narrative or framework synthesis or meta-aggregation, which summarise studies’ findings), interpretive (such as meta-ethnography or critical interpretive synthesis, which seek to generate completely original conceptualisations and theories based on the evidence), or a combination of the two (such as thematic synthesis). 6 8 Framework, narrative, and thematic synthesis are particularly useful for answering questions about the uptake of interventions and for integrating quantitative and qualitative findings. 6 8 9 These methods are therefore potentially the most appropriate for use in developing clinical guidelines. In the UK, NICE public health guidance already often uses a form of thematic synthesis and integrates quantitative and qualitative evidence using a narrative approach. 10 Below, I will show how it could also be useful in clinical guidelines.

Enhanced clinical guidelines

Qualitative evidence synthesis has several potential benefits for clinical guidelines, 4 but I will focus on patient preferences, in particular shared decision making or the principle of “nothing about me without me.” 11 This principle requires that clinical decisions be consistent with the elicited preferences and values of the patient. It is an end in itself. 11 Failure to take account of a patient’s needs and views contributes to lower levels of adherence to treatments and poorer clinical outcomes, 12 whereas well conducted shared decision making improves patient satisfaction and willingness to follow treatment plans. 13 These are key outcomes for any policy maker who wants to see research having its intended effect in practice.

Although NICE has a quality standard and clinical guideline on involving patients and, where appropriate, their family or other representatives in treatment decisions, 14 15 this guidance is quite generic. By contrast, a qualitative evidence synthesis of relevant studies can provide specific information about the many issues that need to be taken into account during shared decision making with particular groups of patients. This type of synthesis can therefore potentially offer a valuable supplement to the experiences of patient representatives on guideline panels, as the recent update of NICE guidelines for stroke rehabilitation shows. 16

Long term management of stroke

The full guideline 17 identified a relevant synthesis of qualitative and quantitative evidence 18 but essentially contained its own thematic synthesis of 17 qualitative studies, identifying relevant themes with supporting evidence listed. This published evidence suggested that patients and family members thought that health professionals viewed goal setting as relatively unimportant and as solely the professionals’ responsibility: decision making on this aspect of care was not being shared.

Together with the relevant quantitative evidence, these findings informed a series of evidence statements (section 6.2.3 17 ), which in turn informed the recommendations (6.2.5). These recommendations then appeared in the final guideline (CG162), 16 which required that goal setting be conducted at specific meetings and be meaningful, relevant, challenging but achievable, time sensitive (reviewed regularly), and involve input from patients and their family or carers (box 1). The influence of the qualitative evidence and its synthesis is quite clear: the quantitative evidence only noted that standard procedures were not conducive to shared decision making; the qualitative evidence emphasised the importance of shared decision making and the specifics of how it should be achieved, and these were integrated in detail into the recommendations.

Box 1: Incorporation of qualitative evidence synthesis in NICE guidelines on long term management of stroke 16 17

Evidence from qualitative and quantitative studies (section 6.2.1).

Inhibitory factors such as limited time, presiding professional routines and the single opportunity to meet clinicians post discharge for secondary risk management (three qualitative studies: low to moderate confidence in studies)

Standard goal setting meeting, which is held away from the patient and with standard documentation, is not conducive to patient centred goal setting (quantitative study: low to moderate confidence)

Summary of challenges to patient participation in goal setting (6.2.3) 17

Five studies highlighted factors inhibiting patients from participating in goal settings. These factors include: limited time, presiding professional routines, goal setting meeting which is held away from the patient, single opportunity to meet clinicians post discharge for secondary risk management, stroke pathology with its highly unpredictable recovery prognosis and its effects such as aphasia

Translation to clinical guideline 16

1.2.8 Ensure that people with stroke have goals for their rehabilitation that:

Are meaningful and relevant to them

Focus on activity and participation

Are challenging but achievable

Include both short and long term elements

1.2.9 Ensure that goal setting meetings during stroke rehabilitation:

Are timetabled into the working week

Include the person with stroke and, where appropriate, their family or carer in the discussion

1.2.10 Ensure that during goal setting meetings, people with stroke are provided with:

An explanation of the goal setting process

The information they need in a format that is accessible to them

The support they need to make decisions and take an active part in setting goals

Unfortunately, such practice is rare in NICE clinical guidelines. Below, I use the examples of diabetes and cardiac rehabilitation to show how guidelines could be improved by including qualitative evidence synthesis.

Type 2 diabetes

The section on diet in the recently published NICE clinical guideline on type 2 diabetes (NG28) repeatedly recommends that health professionals, “provide … advice,” “emphasise advice,” and “discourage” or “encourage” certain actions. 19 Strategies therefore emanate from the relevant health professional alone. Yet a recent qualitative evidence synthesis of 37 studies emphasised the importance of shared decision making because patients with type 2 diabetes and their families felt that communication with health professionals was often difficult and their opinions were not acknowledged. 20 The evidence synthesis indicated that a shift from “advice” to negotiation was needed; effort should be made to elicit the concerns, needs, and preferences of patients and their families. Box 2 gives a possible revised recommendation for the section on nutritional advice.

Box 2: How NICE recommendations for type 2 diabetes 19 could be enhanced by findings from qualitative evidence synthesis 20

Current guideline recommendation.

Provide individualised and ongoing nutritional advice …

Provide dietary advice in a form sensitive to the person’s needs, culture and beliefs …

Emphasise advice on healthy balanced eating that is applicable to the general population …

Themes from qualitative evidence synthesis

Difficulty communicating with healthcare provider— Individual has difficulty communicating needs, questions, and concerns with healthcare provider

Respectful communication— Transferring of information in a way that is understood by the sender and receiver with consideration for feelings, rights, wishes, or traditions. It is the acknowledgment that both parties and their opinions have value

Possible enhanced recommendation

Ensure that the person with type 2 diabetes:

Is given an explanation of why healthy balanced eating is important

Is given an explanation of how an agreed dietary plan will have benefits for them

Ensure that meetings regarding nutrition are ongoing and individualised, and the person with type 2 diabetes is given the information and support they need to make decisions and take an active part in identifying the best diet for them and that the information is in a format that is accessible to them

Ensure that the person with type 2 diabetes

Is asked about their concerns and needs

Is asked about any restrictions to their diet governed by their feelings, rights, wishes, religion, or traditions

Participates in deciding what dietary changes are appropriate and achievable for them

Long term cardiac rehabilitation

As with type 2 diabetes, evidence on patients’ views and experiences is limited in the NICE clinical guideline on cardiac rehabilitation (CG172). 21 Two of the key findings from a qualitative evidence synthesis of 90 studies looking at patients’ views of cardiac rehabilitation are: patients’ sense of lacking any control over their condition and being unconvinced that the interventions on offer would produce positive outcomes. 22

Several of the included studies reported that some cardiac rehabilitation patients focus only on the avoidance of stress to reduce the chance of another heart attack (for example) rather than modifying diet, physical activity, or smoking—three key points in the guidance (CG172). Given that attendance at rehabilitation programmes is a known problem, 22 a decision making process that explicitly addresses patients’ potential mindsets might lead to greater patient satisfaction with agreed treatment plans and improved clinical outcomes. The guideline can be made more specific (box 3).

Box 3: Using qualitative evidence to enhance NICE recommendations for cardiac rehabilitation 21

Current guideline recommendation on encouraging people to attend.

Establish people’s health beliefs and their specific illness perceptions before offering appropriate lifestyle advice and to encourage attendance at a cardiac rehabilitation programme.

Theme from qualitative evidence synthesis

Patients perceived heart disease as defying any attempts to reduce risk. For example, risk of acute myocardial infarction was perceived to be unpredictable, inevitable, and uncontrollable, irrespective of whether the underlying heart condition was seen as low or high severity. Likewise, participants expressed a low sense of control over their future health.

Possible enhanced recommendation

Involve patients in decision making by establishing their health beliefs and specific illness perceptions:

Ask patients why they think they had a heart attack

Ask patients whether they think their condition can be controlled

Explain the relation between lifestyle and heart disease

Explain how appropriate lifestyle behaviours and attendance at a cardiac rehabilitation programme can give the patient greater control over their condition

Agree appropriate and achievable lifestyle changes and rehabilitation programme attendance that is meaningful to the patient

Putting evidence into practice

Clinical guidelines and quality standards might stress the need for decision making to be shared, but it is the synthesis of qualitative evidence that details what this negotiation should involve for any particular condition and its treatment. 14 By accessing and using evidence on patients’ anxieties, beliefs, and preferences, which can be highly condition specific, recommendations in clinical guidelines can be tailored and enhanced. Treatment plans might have a greater chance of being followed if they are the result of a negotiation that seeks to cover and address topics of known importance to particular patient groups. 13

However, the use of this evidence is not without problems. 4 Although there are many hundreds of published qualitative evidence syntheses that can be used for clinical guidelines, they might not be available for every indication. Fortunately, there are pragmatic and relatively rapid methods of qualitative evidence synthesis that guideline developers could use to fill that gap. 9 Also, generic qualitative evidence synthesis reporting guidelines exist, 7 others are being developed for particular methods, 23 and standards are evolving to establish the level of confidence users can ascribe to the findings of such syntheses. 24

Despite the availability of methods for integrating quantitative and qualitative evidence, 6 8 there is no ready made toolkit for doing so. The NICE stroke guideline and public health programme 10 both offer relevant templates, but future work should seek to identify the most appropriate approach for clinical guidelines. The qualitative evidence might also come from settings that are not directly applicable to the NHS, so this needs to be taken into account, though the same problem can apply to quantitative evidence. Nevertheless, as the examples described above suggest, such evidence, carefully considered and integrated with the quantitative evidence, can offer a highly useful addition to the expert and patient opinion currently used in the guideline development process.

Key messages

Simply recommending the general principle of shared decision making in clinical guidelines does not mean issues important to patients will be addressed

Qualitative evidence synthesis can help guideline developers identify these issues and include specific recommendations

By making use of this evidence, clinical guidelines can be more informed, richer, and context specific

This has potential benefits for patient satisfaction and clinical outcomes

Contributors and sources: CC is a member of the Sheffield Technology Assessment Group, conducting systematic reviews for NICE, and the codeveloper of the qualitative evidence synthesis method, “best fit” framework synthesis. He is a member of the NICE Interventional Procedures Advisory Committee.

Competing interests: I have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.

Provenance and peer review: Not commissioned; externally peer reviewed.

  • Sackett DL, Rosenberg WM, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ 1996;312:71-2. doi:10.1136/bmj.312.7023.71 pmid:8555924
  • Boivin A, Currie K, Fervers B, et al; G-I-N PUBLIC. Patient and public involvement in clinical guidelines: international experiences and future perspectives. Qual Saf Health Care 2010;19:e22. pmid:20427302
  • Tan TP, Stokes T, Shaw EJ. Use of qualitative research as evidence in the clinical guideline program of the National Institute for Health and Clinical Excellence. Int J Evid Based Healthc 2009;7:169-72. doi:10.1111/j.1744-1609.2009.00135.x pmid:21631857
  • Sandelowski M, Barroso J. Handbook for synthesizing qualitative research. Springer, 2007.
  • Egger M, Smith GD, Altman D. Systematic reviews in health care: meta-analysis in context. 2nd ed. BMJ, 2001. doi:10.1002/9780470693926
  • Pope C, Mays N, Popay J. Informing policy making and management in healthcare: the place for synthesis. Healthc Policy 2006;1:43-8. pmid:19305652
  • Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol 2012;12:181. doi:10.1186/1471-2288-12-181 pmid:23185978
  • Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol 2009;9:59. doi:10.1186/1471-2288-9-59 pmid:19671152
  • Dixon-Woods M. Using framework-based synthesis for conducting reviews of qualitative studies. BMC Med 2011;9:39. doi:10.1186/1741-7015-9-39 pmid:21492447
  • NICE. Methods for the development of NICE public health guidance. 3rd ed. National Institute for Health and Care Excellence, 2012.
  • Barry MJ, Edgman-Levitan S. Shared decision making: pinnacle of patient-centered care. N Engl J Med 2012;366:780-1. doi:10.1056/NEJMp1109283 pmid:22375967
  • Osterberg L, Blaschke T. Adherence to medication. N Engl J Med 2005;353:487-97. doi:10.1056/NEJMra050100 pmid:16079372
  • Wilson SR, Strub P, Buist AS, et al; Better Outcomes of Asthma Treatment (BOAT) Study Group. Shared treatment decision making improves adherence and outcomes in poorly controlled asthma. Am J Respir Crit Care Med 2010;181:566-77. doi:10.1164/rccm.200906-0907OC pmid:20019345
  • NICE. Quality standard 6: shared decision making. National Institute for Health and Care Excellence, 2012.
  • NICE. Patient experience in adult NHS services: improving the experience of care for people using adult NHS services. Clinical guideline 138. National Institute for Health and Care Excellence, 2012.
  • NICE. Stroke rehabilitation: long-term rehabilitation after stroke. Clinical guideline 162. 2013. https://www.nice.org.uk/guidance/cg162
  • National Clinical Guideline Centre. Stroke rehabilitation: long-term rehabilitation after stroke. Final full guideline. Clinical guideline 162. Methods, evidence and recommendations. National Clinical Guideline Centre, 2013.
  • Rosewilliam S, Roskell CA, Pandyan AD. A systematic review and synthesis of the quantitative and qualitative evidence behind patient-centred goal setting in stroke rehabilitation. Clin Rehabil 2011;25:501-14. doi:10.1177/0269215510394467 pmid:21441308
  • NICE. Type 2 diabetes: the management of type 2 diabetes. Clinical guideline 87. 2014. https://www.nice.org.uk/guidance/cg87
  • Wilkinson A, Whitehead L, Ritchie L. Factors influencing the ability to self-manage diabetes for adults living with type 1 or 2 diabetes. Int J Nurs Stud 2014;51:111-22. doi:10.1016/j.ijnurstu.2013.01.006 pmid:23473390
  • NICE. MI - secondary prevention: secondary prevention in primary and secondary care for patients following a myocardial infarction. NICE clinical guideline 172. 2013. https://www.nice.org.uk/guidance/cg172
  • Clark AM, King-Shier KM, Thompson DR, et al. A qualitative systematic review of influences on attendance at cardiac rehabilitation programs after referral. Am Heart J 2012;164:835-45.e2. doi:10.1016/j.ahj.2012.08.020 pmid:23194483
  • France EF, Ring N, Noyes J, et al. Protocol-developing meta-ethnography reporting guidelines (eMERGe). BMC Med Res Methodol 2015;15:103. doi:10.1186/s12874-015-0068-0 pmid:26606922
  • Lewin S, Glenton C, Munthe-Kaas H, et al. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med 2015;12:e1001895. doi:10.1371/journal.pmed.1001895 pmid:26506244

  • Open access
  • Published: 12 October 2022

The conduct and reporting of qualitative evidence syntheses in health and social care guidelines: a content analysis

  • Chris Carmona,
  • Susan Baxter &
  • Christopher Carroll

BMC Medical Research Methodology, volume 22, Article number: 267 (2022)


Background:

This paper is part of a broader investigation into the ways in which health and social care guideline producers are using qualitative evidence syntheses (QESs) alongside more established methods of guideline development such as systematic reviews and meta-analyses of quantitative data. This study is a content analysis of QESs produced over a 5-year period by a leading provider of guidelines for the National Health Service in the UK (the National Institute for Health and Care Excellence) to explore how closely they match a reporting framework for QES.

Methods:

Guidelines published or updated between January 2015 and December 2019 were identified via searches of the National Institute for Health and Care Excellence (NICE) website. These guidelines were searched to identify any QES conducted during the development of the guideline. Data on the compliance of these syntheses with a reporting framework for QES (ENTREQ) were extracted and compiled, and descriptive statistics were used to provide an analysis of QES conduct, reporting and use by this major international guideline producer.

Results:

QES contributed, in part, to 54 out of a total of 192 guidelines over the five-year period. Although methods for producing and reporting QES have changed substantially over the past decade, this study found that there has been little change in the number or quality of NICE QESs over time. The largest predictor of quality was the centre or team which undertook the synthesis. Analysis indicated that elements of review methods which were similar to those used in quantitative systematic reviews tended to be carried out well and mostly matched the criteria in the reporting framework, but review methods which were more specific to a QES tended to be carried out less well, with fewer examples of criteria in the reporting framework being achieved.

Conclusion:

The study suggests that the use, conduct and reporting of optimal QES methods require development: over time, the quality of QES reporting, both overall and by specific centres, has not improved in spite of clearer reporting frameworks and important methodological developments. Further staff training in QES methods may be helpful for reviewers who are more familiar with conventional forms of systematic review if the highest standards of QES are to be achieved. There is also potential for greater use of evidence from qualitative research during guideline development.


Introduction

Evidence-based health and social care guidelines (including clinical, public health and social care guidelines) are part of the landscape of evidence-based health and social care in many countries. These guidelines are normally based on one or more analyses of relevant evidence, often in the form of systematic reviews of effectiveness data, and often interpreted by an expert committee.

Even though methods for synthesising qualitative research have existed for many years, interest in using qualitative evidence to inform the development of these guidelines has grown considerably. This is partly because of key developments such as more robust methods of synthesis, tools like GRADE CERQual and better frameworks for reporting qualitative studies [1], and partly because qualitative data can answer particular types of questions better than quantitative data. Quantitative data remain key for questions of efficacy, but are less able to answer questions relating to the effects of patient preference, feasibility and acceptability on the broader effectiveness of a treatment or intervention. These questions are best answered by qualitative studies [2].

The World Health Organization (WHO) handbook [3] affirms that qualitative evidence should be used in the process of guideline development, and the Cochrane Qualitative and Implementation Methods group are planning to publish a manual for qualitative evidence synthesis in 2023. Other leading international guideline producers, such as the UK National Institute for Health and Care Excellence (NICE), are using qualitative evidence syntheses, both alone and as part of mixed methods reviews, to present evidence to their guideline committees, and this is supported by initiatives such as GRADE CERQual [4] that have been developed with guideline committees specifically in mind. This surge of interest led Lewin and Glenton to declare “a new era” for qualitative research [1]. A recent paper exploring how developers use qualitative evidence searched internationally for guidelines that used qualitative research and appraised their quality [5]. The authors rated the guidelines using the AGREE II criteria, finding that most were of high quality. However, the AGREE criteria are intended to assess the methodological quality of the guideline itself, and the authors did not investigate the reporting of the evidence reviews that informed the guideline.

A short paper published by Tan and colleagues in 2009 [10] explored the use of qualitative evidence by NICE between 2002 (when NICE produced its first guidelines) and 2007. The authors reported that almost 50% of NICE guidelines produced in that period made use of qualitative studies, although they did not report whether these were single qualitative studies or whether any qualitative evidence synthesis was undertaken. The paper noted a growing trend by year in the number of qualitative studies used in guidelines, rising from nine studies in 2003 to 41 in 2004, 60 in 2005 and 139 in 2006. The authors attributed this growth to a combination of two factors: firstly, a shift toward producing more guidelines on chronic conditions, where they argued that patient needs constituted an important part of the guideline, and secondly, NICE’s developing policy emphasis on patient and carer involvement, which led to more attention being paid to patient and carer perspectives.

They further noted that only five of the 22 guidelines which drew on qualitative research used (or documented) specific search strategies for qualitative literature over and above the searches done for quantitative studies. Only four of the guidelines documented key methodological details such as inclusion/exclusion criteria for qualitative studies.

This study also highlighted a gap in the reporting of the reviews: only half (11/22) of the guidelines reported how critical appraisal of qualitative studies was carried out, and only three of the 22 reported how data were synthesised.

The study concluded that “there is no consistency in how qualitative evidence is utilised in the development of NICE clinical guidelines. There are also clear training needs for NICE’s guideline developers in terms of how best to identify, quality appraise and synthesise qualitative evidence” (p.172).

The work reported in this paper updates the study by Tan and colleagues by exploring whether methodological changes within NICE, or developments in methodological standards for QES, have led to a change in their use in NICE guidelines. It also builds on a review of methodological literature by the current authors [6]. The study examines all qualitative evidence syntheses used in guideline documents published between 2015 and the end of 2019 by a leading producer of guidelines for clinical, public health and social care in the UK. NICE was chosen as an appropriate exemplar because of its international reputation as a leading guideline producer. The study aimed to explore where and how QES are used in the development of health and social care guidelines, and how the methodologies used compare with international standards of good practice.

Methods

The study used a content analysis method to analyse textual data [7]. Berelson described content analysis as “a research technique for the objective, systematic and quantitative description of the manifest content of communication” (p. 18) [8]. Content analysis incorporates both quantitative approaches that convert the textual data to numerical data, for example by counting occurrences of the content of interest, and more qualitative approaches that analyse the way the content of interest is presented or discussed. The process followed in this study was based on the method outlined by Bengtsson [9] (see Table 1).

Source documents

In order to compare recent NICE guidelines with the sample included by Tan et al. [10], and to reflect current practice, we scrutinised guidelines from a 5-year period (the beginning of 2015 until the end of 2019).

Using inbuilt functionality on the NICE website, a search was conducted for guidelines published between January 2015 and December 2019. This search encompassed the three types of evidence-based guideline produced by the guideline development centres at NICE, classified on the website as ‘public health’, ‘social care’ or ‘clinical’. It does not include guidelines where the method of development differed, that is, antimicrobial guidelines, cancer service guidelines, COVID-19 guidelines and medicines practice guidelines (fewer than 40 guidelines in total). The resulting list of guidelines was copied to the clipboard (using the website functionality) and pasted into an Excel spreadsheet (Microsoft Office Professional Plus 2019).

For each included guideline, the individual evidence reviews (systematic reviews and qualitative evidence syntheses) were explored using the ‘evidence’ tab on the guideline webpage.

Each evidence review was examined to evaluate whether or not a qualitative evidence synthesis (defined as two or more qualitative studies combined to answer the same review question) had been undertaken by the technical team (or a contractor) responsible for the development of the guideline. Evidence reviews that did not report the use of qualitative evidence synthesis (or mixed methods synthesis with a qualitative component) were excluded from the sample. Any qualitative reviews and mixed methods reviews identified were downloaded and saved. These formed the sample for the content analysis.

Data collection

Included QES were copied to a new Excel spreadsheet and rationalised so that the unit of analysis was the qualitative evidence synthesis rather than the guideline (some guidelines were supported by multiple qualitative evidence syntheses). The coding framework (described below) was added to the spreadsheet to create a data extraction tool.

The coding framework used was intended to provide two sets of data – descriptive data and content data.

Descriptive data

This included key data from each QES: guideline number, year of publication, author (by guideline producing centre rather than individual authors) and number of qualitative studies included in the analysis. The use of GRADE CERQual [4] to assess confidence in the findings was also noted.

Content data

The ENTREQ statement [11] is the most commonly used reporting framework for QES, and it was therefore selected for examining the content of the QES included in this study (see Table 2 and Additional File 1). There are alternative reporting standards for specific types of QES, for example the eMERGe reporting guidance for meta-ethnography [12], but since NICE has not produced any of these types of QES they were not used in this analysis.

Data analysis

Each QES was read, and descriptive and content data were coded into an Excel spreadsheet according to the framework described above and in Additional File 1. Coding was binary, indicating whether or not the QES reported on the criterion in the reporting framework. For example, did the QES report its aim? Did it report the synthesis methodology underpinning it? This approach did not allow for any judgment about the adequacy of reporting against each criterion, only whether it was present or not; it was taken to allow the coding to be analysed quantitatively.
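The binary coding scheme can be sketched in a few lines. This is an illustrative stdlib Python sketch (the study itself used Excel and R); the abbreviated criterion labels and the example record are hypothetical, not the study's extraction data.

```python
# Sketch of binary coding against the 21 ENTREQ items (hypothetical data).
# Each QES is coded True/False per criterion; the per-QES score is simply
# the number of criteria reported.

ENTREQ_CRITERIA = [  # abbreviated labels, for illustration only
    "aim", "synthesis_methodology", "approach_to_searching",
    "inclusion_criteria", "data_sources", "electronic_search_strategy",
    "study_screening_methods", "study_characteristics",
    "study_selection_results", "rationale_for_appraisal",
    "appraisal_items", "appraisal_process", "appraisal_results",
    "data_extraction", "software", "number_of_reviewers",
    "coding", "study_comparison", "derivation_of_themes",
    "quotations", "synthesis_output",
]

def entreq_score(coding: dict) -> int:
    """Count how many of the 21 criteria a QES reports (binary coding)."""
    return sum(bool(coding.get(c, False)) for c in ENTREQ_CRITERIA)

# Hypothetical QES reporting only its aim, searching and synthesis output:
example = {"aim": True, "approach_to_searching": True, "synthesis_output": True}
print(entreq_score(example))  # 3
```

Summing booleans in this way is what makes the later per-QES scores (out of 21) possible, at the cost of ignoring how adequately each criterion was reported.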

Resulting data are presented predominantly as descriptive statistics to show trends, consistencies and inconsistencies in the data. Data were visualised using Microsoft Excel or were imported into R [13], using the ‘tidyverse’ package [14] to manage the data and the ‘ggplot2’ package [15] (also part of the tidyverse) for data visualisation. The R code used to generate the figures can be found in Additional File 1.

Number and size of QES undertaken

Between January 2015 and December 2019, NICE published 192 clinical, public health and social care guidelines. The website categorises these as 156 clinical, 30 public health and 48 social care guidelines; however, this includes some guidelines listed in more than one category, hence the discrepancy in numbers. For the purposes of this analysis, pragmatic decisions were made about the main topic area of each guideline to assign it to a single category, resulting in a breakdown of 143 clinically focussed, 25 public health focussed and 24 social care focussed guidelines.

Each of these guidelines is based on multiple sources of evidence: most often systematic reviews of quantitative evidence, but also prognostic and diagnostic reviews (of the predictive or diagnostic accuracy of tests or indicators), epidemiological studies (of prevalence and incidence) and, more rarely, qualitative evidence syntheses. The total number of reviews (both quantitative and qualitative) conducted for a guideline can range from one review for an update of a single clinical question to around 40 reviews for a large guideline with multiple questions. The reviews are conducted by expert review teams who present them to the guideline committee. The committee then undertakes a structured discussion of the evidence contained in the reviews (although not using a formal evidence-to-decision framework), and of their confidence in that evidence if GRADE CERQual was used, alongside any other evidence, and contextualises it using their expertise and experience of the UK health and social care system to make guideline recommendations. When a guideline is published, all of the evidence considered by the committee is published alongside it.

Of the 192 guidelines referred to above, 54 (28%) had one or more QES as part of their evidence base (a qualitative evidence synthesis being defined as a synthesis of more than one qualitative study). Overall, out of a total of approximately 1,500 reviews/research questions, 90 were QES (approximately 6%).

Of the 54 guidelines with one or more QES, 36 (out of a total of 143 [25%]) were clinically focussed, 13 (out of 25 [52%]) were public health focussed, and 5 (out of 24 [21%]) were social care focussed. This shows that social care and clinically focussed guidelines are roughly half as likely to use qualitative evidence synthesis as public health focussed guidelines.

The number of QES used per included guideline ranged from 1 to 6 (a mean of 1.67 per guideline containing a QES, and fewer than 0.5 QES per guideline published between January 2015 and December 2019).
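As a quick arithmetic check, the headline proportions follow directly from the counts reported in the text:

```python
# Reproducing the reported proportions from the counts stated above.
guidelines_total = 192       # guidelines published 2015-2019
guidelines_with_qes = 54     # guidelines with one or more QES
qes_total = 90               # QES overall
reviews_total_approx = 1500  # approximate total reviews/research questions

print(round(guidelines_with_qes / guidelines_total * 100))  # 28 (%)
print(round(qes_total / reviews_total_approx * 100))        # 6 (%)
print(round(qes_total / guidelines_with_qes, 2))            # 1.67 QES per guideline with a QES
```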

In terms of the number of papers included in the QES, there was a large amount of variation: the largest QES contained 69 papers and the smallest contained two. The distribution of QES by number of included papers is shown in Fig. 1. Reasons for the variation were not explored as part of this analysis but may be related to the size of the evidence base, or to the formulation of the review protocol.

figure 1

Frequency of QES by number of included papers

Overall, 65% (58 out of 90) of QES included fewer than 12 papers, with a mode of four and a median of 10 papers. The four QES with more than 42 papers came from two guidelines [16, 17]; in both cases a single set of included papers was identified through searching and sifting, and data were extracted from that single set to develop two QES with different review questions.
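The summary statistics quoted above (mode, median, share of small QES) are straightforward to compute. The per-QES paper counts below are hypothetical, for illustration only, since the study's raw counts are not reproduced in this text.

```python
import statistics

# Hypothetical per-QES included-paper counts (illustration only; the
# study's raw data are not reproduced here).
paper_counts = [2, 4, 4, 4, 8, 10, 10, 12, 15, 21, 42, 69]

print(statistics.mode(paper_counts))    # 4
print(statistics.median(paper_counts))  # 10.0

# Share of QES including fewer than 12 papers:
share_small = sum(c < 12 for c in paper_counts) / len(paper_counts)
```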

Figure 2 shows the number of QES conducted by year for the period 2015–2019. The graph does not indicate any meaningful trend toward producing more QES, in spite of the growing acceptability of QES in evidence-based health and social care and the development of more rigorous methods (see methodological review). The large variations in 2017 and 2019 might be at least partly explained by the lifecycle of a guideline: in most cases guidelines take longer than a year to develop and publish, and the number of guidelines published per year is somewhat variable, depending on the length of development (guidelines with more review questions, usually addressed sequentially, tend to have longer development times). This analysis found no evidence to indicate why fewer QES were published in 2017 and 2019.

figure 2

Number of QES published by year (2015–2019)

Purpose of QES undertaken

There is a range of QES methodologies, varying widely on the epistemological spectrum and in level of complexity, from aggregative approaches to more configurative/interpretive approaches. QES undertaken for NICE guidelines all use the simpler descriptive or aggregative approaches. These syntheses can be used to address a range of issues concerning the views, beliefs and lived experiences of people (both patients and healthcare professionals). While quantitative evidence is best for addressing questions of efficacy (does treatment A have an effect on condition B?), qualitative evidence can be useful for bridging the gap between efficacy and real-life effectiveness, for example in understanding why people do not take their medicines as prescribed, how the medicines affect their lives and how things could be improved. In spite of this, guidelines produced by NICE in the period 2015–2019 address a much more limited range of question types using QES. Almost half of the QES undertaken answer one of two types of question:

What are the barriers and/or facilitators to……?

What are the information (and support) needs of ……?

Many of the remaining questions deal with similar question types, often about support and care needs. This may indicate a limited understanding within the NICE guideline development centres of the potential remit of QES and their flexibility with regard to issues such as service configuration and professional support. Occasional innovative questions do appear, however: for example, one QES for guideline NG77 (management of cataracts in adults) [18] explored how lens implant errors happen, through qualitative analysis of physician reports and case studies.

Quality of reporting

The 90 QES published by NICE between January 2015 and December 2019 were assessed against the ENTREQ reporting criteria as described in Table 2 (above) and in more detail in Additional File 1.

The number of QES meeting each of the ENTREQ criteria is shown in Fig. 3, with an additional column indicating whether the QES used GRADE CERQual to assess confidence in the qualitative findings.

figure 3

Number of QES (out of 90) meeting each ENTREQ reporting criterion

ENTREQ criteria relating to setting out the aim of the review, and to the systematic searching and sifting of studies to generate a pool of included studies, were generally met and described adequately in the included QES. The exception was the synthesis methodology criterion (described by the ENTREQ statement as “Identify the synthesis methodology or theoretical framework which underpins the synthesis, and describe the rationale for choice of methodology”). Many QES (40/90) were marked down on this criterion because they either provided only a brief statement of the data synthesis methods used, for example “We undertook thematic synthesis”, with no methodological detail, or gave inadequate descriptions of methodology, often not specifying an approach to synthesis at all.

Derivation of themes (described by the ENTREQ statement as “Explain whether the process of deriving the themes or constructs was inductive or deductive”) was demonstrated in a third of QES, and these were mostly undertaken by a particular guideline developer who presents a ‘theme map’ as a standard part of their QES.

In 70 of the reviews, synthesis output (described by the ENTREQ statement as “Present rich, compelling and useful results that go beyond a summary of the primary studies”) was reported. This was mostly in the form of NICE evidence statements, although some evidence statements made no attempt at synthesis and simply listed the themes identified by individual studies. Some QES used a Cochrane-style ‘Summary of qualitative findings’ table to present synthesised themes and sub-themes along with their CERQual confidence rating. Otherwise, CERQual was not often used; this does not appear to depend on the age of the review (as might be expected given the introduction of CERQual in 2015) but rather on the guideline developer.

Variation over time

It might be expected that adherence to reporting frameworks improves over time as methods for undertaking QES become more robust and more widely known. It might also be expected that guideline developers would develop their methods for QES (and train their staff in those methods), and that more recent iterations of the NICE guideline methods manual might give clearer direction on its expectations from QES.

Figure 4 explores how well QES from different centres match the criteria in the ENTREQ reporting framework over time. For years in which a centre produced more than one QES, the mean number of criteria met (out of 21) across the QES produced that year is used. It is important to note that using a mean number of reporting criteria is somewhat arbitrary, since it assumes that each of the 21 criteria in the framework is of equal importance to the reporting of a QES.
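The per-centre, per-year averaging described above can be sketched with standard-library Python (the authors used Excel and R/tidyverse for their figures); the centre labels and scores below are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# (centre, year, ENTREQ criteria reported out of 21) - hypothetical records
qes_records = [
    ("Centre 6", 2016, 16), ("Centre 6", 2016, 15), ("Centre 6", 2018, 11),
    ("Centre 7", 2016, 12), ("Centre 7", 2016, 13), ("Centre 7", 2018, 11),
]

# Group scores by (centre, year), then take the mean where a centre
# produced more than one QES in a year, as described in the text.
by_centre_year = defaultdict(list)
for centre, year, score in qes_records:
    by_centre_year[(centre, year)].append(score)

means = {key: mean(scores) for key, scores in by_centre_year.items()}
print(means[("Centre 6", 2016)])  # 15.5
print(means[("Centre 7", 2016)])  # 12.5
```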

figure 4

ENTREQ criteria (out of a maximum of 21) reported by year and authoring centre

The data suggest that there is in fact little variation over time, and that the main determinant of the number of ENTREQ criteria reported is the guideline developer that authored the review. Of the two guideline developers who authored the majority of the QES in the past five years, one reasonably consistently reports around 11–13 criteria (Centre 7), whereas the other performs better in 2016 and 2017 but drops to a similar level in 2018 and 2019 (Centre 6). It is unclear what drove the drop; two possible confounding factors are the publication of the new NICE methods manual [19] in 2018, or simply a change in staff or senior staff from someone more familiar with QES to someone less familiar.

To explore this further, the median number of ENTREQ criteria reported across all years (2015–2019) was calculated by guideline developer. Figure 5 presents these data along with the point values for each QES.

figure 5

Median number of criteria in the ENTREQ framework met (dots represent individual QES)

The data in Fig. 5 broadly support the hypothesis that the different producers of QES account for most of the variation in the number of reporting criteria met. Centres that do less well tend to have produced only two or three QES over the five-year period, so their staff are likely to be less familiar with QES methods, having used them rarely. The Centre 8 team do not fit this pattern: their QES perform poorly against the ENTREQ framework even though the team produced 11 QES in the five-year time frame, including the lowest and second-lowest scoring.

The widest variation in meeting the criteria is seen in the contractor group, but this is to be expected since it is a heterogeneous group comprising various organisations and academic teams. Because these QES were contracted out, it is unsurprising that the highest-ranking QES fall into this group: competitive tendering would lead to the syntheses being undertaken by specialist teams familiar with QES.

Centres 6 and 7 are the most prolific producers of QES, with Centre 7 demonstrating a wide range of reporting quality across their QES. Centre 6’s reporting quality appears to be dichotomous, with a cluster of QES scoring 10 or 11 and a larger cluster scoring 15 or 16. It is unclear what the cause of this dichotomy might be.

The number of QES undertaken by NICE (including its contractors) over the 5-year period up to the end of 2019 formed only a fraction of the total number of reviews undertaken in the period. Although it is difficult to ascertain why this is the case, there are plausible explanations that at least partially account for this lack of attention to qualitative evidence.

The majority of the guidelines produced in the period were clinical guidelines (143 out of 192), and clinical guidelines most often concern the relative efficacy of different treatment modalities. For questions of efficacy, the gold standard is the randomised controlled trial, or a systematic review of randomised controlled trials. QES could, in principle, be used to bridge the efficacy–effectiveness gap (that is, the difference between the biological or medicinal effect of the medicine itself on the body and its observed effectiveness in a particular population) by addressing issues such as the acceptability of a treatment, compliance with regimens and attitudes towards the medicine. In practice, however, in the majority of cases there is unlikely to be published qualitative evidence available for synthesis that directly addresses the efficacy question. For example, while there might be substantial research into people’s lived experiences of particular illnesses, there is less likely to be evidence on people’s experiences of undergoing treatment A specifically. The most obvious exception is in long-term conditions, or conditions with a notable impact on quality of life, where there is potentially substantial qualitative research – for example, cancer care or kidney dialysis. There is also a growing recognition among producers of clinical guidelines of the importance of qualitative evidence as a tool in implementation research, because qualitative methods “generate opportunities to examine complexity and include a diversity of perspectives” [ 20 ].

Arguably, QES could be more routinely useful in public health and social care topics, where interventions tend to be interpersonal or sociopsychological rather than biological, and where evaluations of views, perceptions and lived experiences (traditionally the domain of qualitative research) are more likely to be qualitative than in clinical medicine.

The line of argument about the likely availability of qualitative data is to a large extent borne out by the size of the QES that were carried out. With a modal number of four papers per QES, they are, on average, relatively small. Themes from QES that contain so few studies may not score highly in a CERQual assessment (they are likely to be downgraded for adequacy unless the data from the studies are very rich), and this may restrict their usefulness as part of a decision-making process. Of the four large (> 50 papers) QES, two were part of the workplace health guideline [ 17 ], a non-clinical, public health guideline, and two related to the attention deficit hyperactivity disorder: diagnosis and management guideline [ 16 ], which fits the model of a long-term condition with a notable impact on quality of life.

It is also plausible that the lack of relevant studies identified for most of the QES was due to either inappropriate research questions or insufficient searching. Technical staff and information specialists producing QES within NICE are usually quantitative systematic reviewers and have little training in searching for or assessing qualitative evidence. Added to this, qualitative studies are notoriously poorly indexed in databases [ 21 ], qualitative study filters are still quite primitive in comparison with quantitative ones [ 21 ], [ 22 ], and qualitative literature searches are often quite specific (as opposed to sensitive) to limit the number of irrelevant papers that must be excluded during the sifting process.

The number of QES published per year does not show the incremental increase that would be expected given the development of QES methods over the 5 years in question; however, this could simply be because the time period is too short to demonstrate a trend. It is also likely due to the varying patterns of NICE guideline publication: guidelines take varying amounts of time to complete, depending on a variety of factors, so there is no consistent background rate of guideline publication against which the number of QES can easily be measured. The Tan paper, however, reports that almost 50% of guidelines published in 2002–2007 ‘made use of qualitative studies’ (a slightly different measure from ‘undertaking a QES’, the inclusion criterion for the current study; see below). During 2015–2019 that figure was 28%, so a more detailed examination of the numbers over the lifetime of NICE could potentially reveal a year-on-year decrease in the number of guidelines using QES. A caveat is that the Tan paper refers to ‘making use of qualitative studies’ but does not define this. There are guidelines from that period that report single or small numbers of qualitative studies but make no attempt at synthesis, and these would not have been considered for this study: the current content analysis counted only syntheses of two or more qualitative studies and did not count incidental use of single qualitative studies. This is likely to account for much of the discrepancy.

Almost half of the QES undertaken in 2015–2019 were carried out to address generic questions about barriers and facilitators to accessing a service or treatment, or about information needs relating to a condition. A substantial number of the remainder were about the care and support needs of people with a specific condition. There seems, in general, to be little appetite to address more creative questions through QES, even though the NICE manual [ 19 ] gives a broader list of examples, including:

What elements of care on the general ward are viewed as important by patients following their discharge from critical care areas?

How does culture affect the need for and content of information and support for bottle or breastfeeding?

What are the perceived risks and benefits of immunisation among parents, carers or young people? Is there a difference in perceived benefits and risks between groups whose children are partially immunised and those who have not been immunised?

What information and support should be offered to children with atopic eczema and their families and carers?

What are the views and experiences of health, social care and other practitioners about home-based intermediate care?

Occasional forays are made into more novel uses of QES. For example, in the Cataracts in adults: management guideline [ 18 ], a QES was undertaken to inform recommendations on wrong lens implant errors, specifically the questions “What are the procedural causes of wrong lens implant errors?” and “What strategies should be adopted to reduce the risk of wrong lens implant errors?”.

An avenue that does not seem to have been routinely explored by NICE is the use of QES as contextual grounding for guidelines. For example, a guideline about diabetes might usefully be underpinned as a whole by a QES exploring people’s experiences of living with diabetes, or of caring for people with the condition. Even where qualitative data to inform a QES on a specific question within the guideline are not available, such context would enable a guideline committee to frame its recommendations in people’s lived experience of the condition.

It is clear from Fig. 3 that there is good consistency within the ENTREQ criteria as to whether each criterion is reported well or poorly in NICE QES. Most criteria are reported either by more than 80 (out of 90) QES or by fewer than 45; very few criteria fall between these brackets.

Closer examination of the reporting criteria reveals that the criteria with very high reporting rates are all criteria that duplicate steps in quantitative systematic reviews, and are therefore familiar to staff who are predominantly quantitative systematic reviewers. ENTREQ criteria relating to documenting the searching and sifting process, and to the creation of evidence tables of study characteristics, are invariably done well, as is the presentation of the results of the methodological critical appraisal of the papers. Almost all of the criteria that duplicate steps in the quantitative systematic review process were reported by 85 or more of the 90 QES.
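This bimodal pattern can be made concrete with a small tally. The sketch below is purely illustrative (the item names and counts are invented; the real per-QES data are in Supplementary Material 2): it sorts ENTREQ items into the "reported by more than 80 QES" and "reported by fewer than 45 QES" brackets described above.

```python
# Hypothetical counts of how many of the 90 QES reported each ENTREQ item.
# Item names and numbers are illustrative, not the study's actual results.
reported_counts = {
    "search documentation": 88,   # duplicates a quantitative SR step
    "study characteristics": 87,  # duplicates a quantitative SR step
    "critical appraisal": 85,     # duplicates a quantitative SR step
    "synthesis method": 44,       # QES-specific step
    "coding": 7,                  # QES-specific step
    "software": 5,                # QES-specific step
}

# Classify items by the brackets observed in the paper: >80 vs <45 of 90 QES.
well_reported = sorted(i for i, n in reported_counts.items() if n > 80)
poorly_reported = sorted(i for i, n in reported_counts.items() if n < 45)
```

With these invented numbers, the "well reported" bucket contains only items shared with quantitative systematic reviewing, and the QES-specific steps land in the "poorly reported" bucket, mirroring the paper's finding.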

Steps that are unique to QES, or where QES methods differ from quantitative systematic review methods, fare less well, and this is particularly the case for the criteria in the framework that require specific skills in QES methods: data extraction, coding, use of software, and study comparison all fare poorly, with fewer than 10% of the included QES reporting how (or if) they undertook these steps. Description of the methods of qualitative synthesis also fared poorly, with only around half of the QES reporting a synthesis approach in any detail.

Variation over time and centre undertaking QES

The data presented here for different guideline-producing centres are, at best, only indicative. The picture they present of static guideline-producing centres is potentially a misleading one. In the period under scrutiny (2015–2019), major changes were made to the way in which NICE contracts out work for guideline production. In the early stages of this time period, NICE had contracts with several external collaborating centres, mostly associated with academic units, as well as an internal clinical guidelines team and a public health team. The external teams were responsible for specific areas of guideline production (for example, the National Collaborating Centre for Mental Health, or the National Collaborating Centre for Women’s and Children’s Health). The collaborating centres were later replaced with two generic bodies, the National Guidelines Alliance and the National Guidelines Centre, which absorbed the functions, and in many cases the staff, of the collaborating centres. It is likely that the changing membership of review teams over that time has had an impact on the systematic review and QES processes that underpin the guidelines [ 23 ].

Despite this, there seem to be two general trends in the data contained in Figs. 4 and 5 that are important for this analysis. Firstly, over time the quality of reporting of QES, both overall and by specific centres, has not improved in spite of clearer reporting frameworks and important methodological developments in QES. Secondly, the quality of reporting seems (in most cases) to be related to the centre producing the QES, with clear clusters of reviews of similar quality within centres. The exceptions, as discussed above, are the generic ‘contractor’ category and the public health team.

Limitations

While we believe that the findings are robust, we acknowledge that the way reviews are reported by NICE changed several times during the 5-year period under consideration. At various times, multiple questions could be subsumed into single reviews or split across different review questions. This makes accurate counting difficult, and some numbers are approximations based on counting and pragmatic decisions. Where numbers are uncertain, this is reported.

The ENTREQ framework was not intended to be used for ‘scoring’ QES, and arguably not all ENTREQ reporting domains are equal in importance. Nor was it designed as a formal reporting standard: it is a general statement containing 21 items, or criteria, that can be broadly applied to common types of QES methodology. As a framework, it is not well suited to more complex methodologies, but it is useful for the simpler descriptive/aggregative methods used in the QES described here.

The main purpose of this analysis was to better understand the quality of reporting of QES rather than why QES were or were not undertaken for specific guidelines. QES are relevant to a very specific range of research questions, and not all NICE guidelines could have benefitted from a QES. Further research would need to be undertaken to establish whether QES had been used appropriately in guideline development.

As with any documentary appraisal, it is unclear whether the issues identified in this paper relate to the conduct of the reviews themselves or only to their reporting.

Along with its international peers, including Cochrane and the World Health Organization (WHO), NICE is developing methods for the use of QES in producing health guidelines [ 6 ], [ 19 ]. To date, this seems to have involved only relatively small numbers of QES, addressing a very limited range of questions, primarily those about barriers and facilitators to service use and about people’s information and support needs when diagnosed with, or living with, a health condition. There is potential to better understand the range of questions on which qualitative evidence might shed light, and this in turn might make QES more common as part of guideline production.

The focus of health guideline producing bodies on the use of systematic reviews of quantitative evidence and the relatively small amount of QES means that there is no noticeable improvement over time in the quality of QES produced. QES that are not produced by contractors who specialise in qualitative methods often lack transparent reporting of those aspects of the qualitative evidence synthesis that differ from the stages of a quantitative systematic review.

The clearest factor in the quality of a QES seems to be the team that undertook it. Teams that produce well-reported QES seem to do so consistently, and we can speculate that this may be because they have staff with a particular interest or skill set in this area. Solutions might include ensuring that staff undertaking QES have appropriate skills and supervision, and providing clearer guidance about how a QES should be undertaken [ 6 ].

Data availability

The datasets used and analysed during the current study are included in Supplementary Material 2. The source documents are freely available on the NICE website ( www.nice.org.uk ).

It is not possible to accurately count the number of review questions due to changes in the way that these are reported.

Lewin S, Glenton C. Are we entering a new era for qualitative research? Using qualitative evidence to support guidance and guideline development by the World Health Organization. Int J Equity Health. 2018;17(1).

Glenton C, Lewin S, Lawrie TA, et al. Qualitative Evidence Synthesis (QES) for Guidelines: Paper 3 – Using qualitative evidence syntheses to develop implementation considerations and inform implementation processes. Health Res Policy Syst. 2019;17(1):74. https://doi.org/10.1186/s12961-019-0450-1.


World Health Organization. WHO handbook for guideline development. 2nd ed. World Health Organization; 2014.

GRADE CERQual. Confidence in the Evidence from Reviews of Qualitative Research. (Accessed 14/09/2019, https://www.cerqual.org/ ).

Wang Y-Y, Liang D-D, Lu C, et al. An exploration of how developers use qualitative evidence: content analysis and critical appraisal of guidelines. BMC Med Res Methodol. 2020;20(1):160. https://doi.org/10.1186/s12874-020-01041-8.

Carmona C, Baxter S, Carroll C. Systematic review of the methodological literature for integrating qualitative evidence syntheses into health guideline development. Res Synth Methods. 2021. https://doi.org/10.1002/jrsm.1483.

Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88. https://doi.org/10.1177/1049732305276687.

Berelson B. Content analysis in communication research. Free Press; 1952. 220 p.

Bengtsson M. How to plan and perform a qualitative study using content analysis. NursingPlus open. 2016;2:8–14. https://doi.org/10.1016/j.npls.2016.01.001 .

Tan TPY, Stokes T, Shaw EJ. Use of qualitative research as evidence in the clinical guideline program of the National Institute for Health and Clinical Excellence. Int J Evid Based Healthc. 2009;7(3):169–72. https://doi.org/10.1111/j.1744-1609.2009.00135.x .


Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012;12:181. https://doi.org/10.1186/1471-2288-12-181.

Flemming K, Noyes J. Qualitative Evidence Synthesis: Where Are We at? Int J Qualitative Methods. 2021. https://doi.org/10.1177/1609406921993276 .

R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing; 2020. https://www.R-project.org/.

Wickham H, Averick M, Bryan J, et al. Welcome to the tidyverse. J Open Source Softw. 2019;4(43). https://doi.org/10.21105/joss.01686.

Wickham H. ggplot2: elegant graphics for data analysis. Springer-Verlag New York; 2016. https://ggplot2.tidyverse.org.

National Institute for Health and Care Excellence (NICE). Attention deficit hyperactivity disorder: diagnosis and management. (2018). (Accessed 14/09/2019, https://www.nice.org.uk/guidance/ng87 ).

National Institute for Health and Care Excellence (NICE). Workplace health: management practices. (2016). (Accessed 14/09/2019, https://www.nice.org.uk/guidance/ng13 ).

National Institute for Health and Care Excellence (NICE). Cataracts in adults: management. (2017). (Accessed 14/09/2019, https://www.nice.org.uk/guidance/ng77/) .

National Institute for Health and Care Excellence (NICE). Developing NICE guidelines: the manual . (2018). (Accessed 14/09/2019, https://www.nice.org.uk/process/pmg20/) .

Ramanadhan S, Revette AC, Lee RM, Aveling EL. Pragmatic approaches to analyzing qualitative data for implementation science: an introduction. Implement Sci Commun. 2021;2(1):70. https://doi.org/10.1186/s43058-021-00174-1.

Cooke A, Smith D, Booth A. Beyond PICO. Qual Health Res. 2012;22(10):1435–43. https://doi.org/10.1177/1049732312452938.

Rosumeck S, Wagner M, Wallraf S, Euler U. A validation study revealed differences in design and performance of search filters for qualitative research in PsycINFO and CINAHL. J Clin Epidemiol. 2020;128:101–8. https://doi.org/10.1016/j.jclinepi.2020.09.031.

Uttley L, Montgomery P. The influence of the team in conducting a systematic review. Syst Rev. 2017;6(1). https://doi.org/10.1186/s13643-017-0548-x.


Acknowledgements

Not applicable.

No funding was received for this research. Open access publication of this paper was funded by the University of Sheffield Institutional Open Access Fund. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.

Author information

Authors and affiliations.

School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

Chris Carmona, Susan Baxter & Christopher Carroll


Contributions

Chris Carmona undertook data extraction and analysis and was the main author of the paper. Susan Baxter and Christopher Carroll participated in methodological decisions, contributed to the preparation of the analyses and provided supervision for the overall study. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Chris Carmona .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

Chris Carmona is an employee of the UK National Institute for Health and Care Excellence. Christopher Carroll and Susan Baxter have no competing interests to declare.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Carmona, C., Baxter, S. & Carroll, C. The conduct and reporting of qualitative evidence syntheses in health and social care guidelines: a content analysis. BMC Med Res Methodol 22 , 267 (2022). https://doi.org/10.1186/s12874-022-01743-1


Received : 13 May 2022

Revised : 06 September 2022

Accepted : 26 September 2022

Published : 12 October 2022

DOI : https://doi.org/10.1186/s12874-022-01743-1


  • Qualitative evidence synthesis
  • Reporting frameworks
  • Guideline development

BMC Medical Research Methodology

ISSN: 1471-2288


Use of qualitative research as evidence in the clinical guideline program of the National Institute for Health and Clinical Excellence

Affiliation.

  • 1 National Institute for Health and Clinical Excellence, Manchester, UK. [email protected]
  • PMID: 21631857
  • DOI: 10.1111/j.1744-1609.2009.00135.x

Aim: To describe the use of qualitative research as evidence in a national clinical guideline program (National Institute for Health and Clinical Excellence - NICE, UK) and to identify training needs for guideline developers.

Methods: All published NICE clinical guidelines from December 2002 to June 2007 were reviewed to determine whether qualitative studies were considered as evidence in the development of recommendations and how this type of evidence had been used. Developers of clinical guidelines due to be published between July 2007 and March 2008 were asked to describe their training needs regarding the use of qualitative research in clinical guidelines. Data were summarised using simple descriptive statistics.

Results: Of the 49 clinical guidelines published by NICE within the study period, nearly half (45%, 22/49) used qualitative studies as an evidence base for developing recommendations for clinical practice. The number of qualitative studies used in these clinical guidelines increased from 2003 to 2006: 9 studies in 2003; 41 studies in 2004; 60 studies in 2005; 139 studies in 2006. In terms of how qualitative evidence was used in the guidelines, the study identified the following main issues: inconsistencies in the terminology used to describe types of qualitative study design; lack of standardised search strategies and/or targeted processes to select studies from databases; lack of a standardised approach to quality appraisal and poor reporting of how the identified evidence was used to inform the relevant guideline recommendations. Of the 17 clinical guidelines in development during the study period, the questionnaire was returned by approximately half of the guideline developers (response rate 47%, 8/17). A wide range of training needs was identified, chiefly training in the identification, quality appraisal and synthesis of qualitative studies and guidance as to the guideline areas where qualitative studies should be considered as evidence.

Conclusion: Qualitative research is increasingly being used by NICE's clinical guideline developers as an evidence base to generate clinical practice recommendations. There are, however, clear training needs for NICE's guideline developers in terms of how best to identify, quality appraise and synthesise qualitative evidence for use in evidence-based clinical guidelines.

© 2009 The Authors. Journal Compilation © Blackwell Publishing Asia Pty Ltd.


Neurol Res Pract

How to use and assess qualitative research methods

Loraine Busetto

1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany

Wolfgang Wick

2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Christoph Gumbinger

Associated data.

Not applicable.

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-)participant observations, semi-structured interviews and focus groups. For data analysis, field notes and audio recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative designs in addition to quantitative ones will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived”, but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in the form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in “research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)” [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a “lesser” design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – “questions before methods” [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as “a complex, multicomponent intervention – essentially a process of social change” susceptible to a range of different context factors including leadership or organisation history. According to him, “[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect” [ 8 ]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for “these specific applications, (...) are not compromises in learning how to improve; they are superior” [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaption and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

[Figure 1: Iterative research process]

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterised by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews, the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, in order to identify possible causes of delay and/or other causes of sub-optimal treatment outcomes. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed.
In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

[Figure 2: Possible combination of data collection methods]

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].

[Figure 3: From data collection to data analysis]

Attributions for icons: see Fig. 2; also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods […] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design, the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig. 4.

[Figure 4: Three common mixed methods designs]

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study were used to understand where and why these occurred, and how they could be improved. In the exploratory sequential design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings on the topics about which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, varying in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and the reader must therefore be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].
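The collect–analyse–collect cycle that ends in saturation can be sketched schematically. The batches of codes below are invented for illustration; in a real study each “batch” would be, say, five interviews, and “codes” the relevant new information found in them.

```python
# Schematic sketch of iterative sampling until saturation: collect a
# batch of data, analyse it, and continue until a batch yields no
# relevant new information. All code lists are hypothetical examples.

batches = [
    ["delay", "equipment failure"],     # interviews 1-5
    ["delay", "patient reassurance"],   # interviews 6-10: one new code
    ["equipment failure", "delay"],     # interviews 11-15: nothing new
]

def collect_until_saturation(batches):
    known = set()
    n_batches = 0
    for batch in batches:
        n_batches += 1
        new = set(batch) - known
        if not new:          # no new codes found: saturation reached
            break
        known |= new         # extend the analysis and keep sampling
    return known, n_batches

final_codes, batches_needed = collect_until_saturation(batches)
```

The end point is thus determined by the data rather than fixed in advance: in this toy example, sampling stops after the third batch, which adds nothing new.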

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “ purposive sampling” , in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this are pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or which is the best length of an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups which can also be piloted.

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding approach overlaps between co-coders. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that these scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but it is not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate those who recruited the study participants from those who collected and analysed the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This might be helpful in providing additional context information for the interpretation of data, e.g. on whether something might have been meant as a joke [ 18 ].
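Where such a score is reported anyway, Cohen’s kappa is one commonly used measure for two coders. A minimal computation is sketched below; the code assignments are toy data, purely illustrative.

```python
# Cohen's kappa for two coders who each assign one code per segment:
# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
# and p_e the agreement expected by chance given each coder's code
# frequencies. The assignments below are invented toy data.
from collections import Counter

coder_a = ["delay", "delay", "equipment", "reassurance", "delay", "equipment"]
coder_b = ["delay", "equipment", "equipment", "reassurance", "delay", "equipment"]

def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n              # observed agreement
    freq_a, freq_b = Counter(a), Counter(b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

kappa = cohens_kappa(coder_a, coder_b)
```

Whether the resulting number signals a “good” analysis is, as noted above, unclear: the score measures overlap between coders, not whether the codes themselves are meaningful.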

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Take-away-points

Acknowledgements

Abbreviations

Authors’ contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Funding

No external funding.

Availability of data and materials

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Social and Political Philosophy
  • Browse content in Religion
  • Biblical Studies
  • Christianity
  • East Asian Religions
  • History of Religion
  • Judaism and Jewish Studies
  • Qumran Studies
  • Religion and Education
  • Religion and Health
  • Religion and Politics
  • Religion and Science
  • Religion and Law
  • Religion and Art, Literature, and Music
  • Religious Studies
  • Browse content in Society and Culture
  • Cookery, Food, and Drink
  • Cultural Studies
  • Customs and Traditions
  • Ethical Issues and Debates
  • Hobbies, Games, Arts and Crafts
  • Lifestyle, Home, and Garden
  • Natural world, Country Life, and Pets
  • Popular Beliefs and Controversial Knowledge
  • Sports and Outdoor Recreation
  • Technology and Society
  • Travel and Holiday
  • Visual Culture
  • Browse content in Law
  • Arbitration
  • Browse content in Company and Commercial Law
  • Commercial Law
  • Company Law
  • Browse content in Comparative Law
  • Systems of Law
  • Competition Law
  • Browse content in Constitutional and Administrative Law
  • Government Powers
  • Judicial Review
  • Local Government Law
  • Military and Defence Law
  • Parliamentary and Legislative Practice
  • Construction Law
  • Contract Law
  • Browse content in Criminal Law
  • Criminal Procedure
  • Criminal Evidence Law
  • Sentencing and Punishment
  • Employment and Labour Law
  • Environment and Energy Law
  • Browse content in Financial Law
  • Banking Law
  • Insolvency Law
  • History of Law
  • Human Rights and Immigration
  • Intellectual Property Law
  • Browse content in International Law
  • Private International Law and Conflict of Laws
  • Public International Law
  • IT and Communications Law
  • Jurisprudence and Philosophy of Law
  • Law and Politics
  • Law and Society
  • Browse content in Legal System and Practice
  • Courts and Procedure
  • Legal Skills and Practice
  • Primary Sources of Law
  • Regulation of Legal Profession
  • Medical and Healthcare Law
  • Browse content in Policing
  • Criminal Investigation and Detection
  • Police and Security Services
  • Police Procedure and Law
  • Police Regional Planning
  • Browse content in Property Law
  • Personal Property Law
  • Study and Revision
  • Terrorism and National Security Law
  • Browse content in Trusts Law
  • Wills and Probate or Succession
  • Browse content in Medicine and Health
  • Browse content in Allied Health Professions
  • Arts Therapies
  • Clinical Science
  • Dietetics and Nutrition
  • Occupational Therapy
  • Operating Department Practice
  • Physiotherapy
  • Radiography
  • Speech and Language Therapy
  • Browse content in Anaesthetics
  • General Anaesthesia
  • Neuroanaesthesia
  • Clinical Neuroscience
  • Browse content in Clinical Medicine
  • Acute Medicine
  • Cardiovascular Medicine
  • Clinical Genetics
  • Clinical Pharmacology and Therapeutics
  • Dermatology
  • Endocrinology and Diabetes
  • Gastroenterology
  • Genito-urinary Medicine
  • Geriatric Medicine
  • Infectious Diseases
  • Medical Toxicology
  • Medical Oncology
  • Pain Medicine
  • Palliative Medicine
  • Rehabilitation Medicine
  • Respiratory Medicine and Pulmonology
  • Rheumatology
  • Sleep Medicine
  • Sports and Exercise Medicine
  • Community Medical Services
  • Critical Care
  • Emergency Medicine
  • Forensic Medicine
  • Haematology
  • History of Medicine
  • Browse content in Medical Skills
  • Clinical Skills
  • Communication Skills
  • Nursing Skills
  • Surgical Skills
  • Browse content in Medical Dentistry
  • Oral and Maxillofacial Surgery
  • Paediatric Dentistry
  • Restorative Dentistry and Orthodontics
  • Surgical Dentistry
  • Medical Ethics
  • Medical Statistics and Methodology
  • Browse content in Neurology
  • Clinical Neurophysiology
  • Neuropathology
  • Nursing Studies
  • Browse content in Obstetrics and Gynaecology
  • Gynaecology
  • Occupational Medicine
  • Ophthalmology
  • Otolaryngology (ENT)
  • Browse content in Paediatrics
  • Neonatology
  • Browse content in Pathology
  • Chemical Pathology
  • Clinical Cytogenetics and Molecular Genetics
  • Histopathology
  • Medical Microbiology and Virology
  • Patient Education and Information
  • Browse content in Pharmacology
  • Psychopharmacology
  • Browse content in Popular Health
  • Caring for Others
  • Complementary and Alternative Medicine
  • Self-help and Personal Development
  • Browse content in Preclinical Medicine
  • Cell Biology
  • Molecular Biology and Genetics
  • Reproduction, Growth and Development
  • Primary Care
  • Professional Development in Medicine
  • Browse content in Psychiatry
  • Addiction Medicine
  • Child and Adolescent Psychiatry
  • Forensic Psychiatry
  • Learning Disabilities
  • Old Age Psychiatry
  • Psychotherapy
  • Browse content in Public Health and Epidemiology
  • Epidemiology
  • Public Health
  • Browse content in Radiology
  • Clinical Radiology
  • Interventional Radiology
  • Nuclear Medicine
  • Radiation Oncology
  • Reproductive Medicine
  • Browse content in Surgery
  • Cardiothoracic Surgery
  • Gastro-intestinal and Colorectal Surgery
  • General Surgery
  • Neurosurgery
  • Paediatric Surgery
  • Peri-operative Care
  • Plastic and Reconstructive Surgery
  • Surgical Oncology
  • Transplant Surgery
  • Trauma and Orthopaedic Surgery
  • Vascular Surgery
  • Browse content in Science and Mathematics
  • Browse content in Biological Sciences
  • Aquatic Biology
  • Biochemistry
  • Bioinformatics and Computational Biology
  • Developmental Biology
  • Ecology and Conservation
  • Evolutionary Biology
  • Genetics and Genomics
  • Microbiology
  • Molecular and Cell Biology
  • Natural History
  • Plant Sciences and Forestry
  • Research Methods in Life Sciences
  • Structural Biology
  • Systems Biology
  • Zoology and Animal Sciences
  • Browse content in Chemistry
  • Analytical Chemistry
  • Computational Chemistry
  • Crystallography
  • Environmental Chemistry
  • Industrial Chemistry
  • Inorganic Chemistry
  • Materials Chemistry
  • Medicinal Chemistry
  • Mineralogy and Gems
  • Organic Chemistry
  • Physical Chemistry
  • Polymer Chemistry
  • Study and Communication Skills in Chemistry
  • Theoretical Chemistry
  • Browse content in Computer Science
  • Artificial Intelligence
  • Computer Architecture and Logic Design
  • Game Studies
  • Human-Computer Interaction
  • Mathematical Theory of Computation
  • Programming Languages
  • Software Engineering
  • Systems Analysis and Design
  • Virtual Reality
  • Browse content in Computing
  • Business Applications
  • Computer Security
  • Computer Games
  • Computer Networking and Communications
  • Digital Lifestyle
  • Graphical and Digital Media Applications
  • Operating Systems
  • Browse content in Earth Sciences and Geography
  • Atmospheric Sciences
  • Environmental Geography
  • Geology and the Lithosphere
  • Maps and Map-making
  • Meteorology and Climatology
  • Oceanography and Hydrology
  • Palaeontology
  • Physical Geography and Topography
  • Regional Geography
  • Soil Science
  • Urban Geography
  • Browse content in Engineering and Technology
  • Agriculture and Farming
  • Biological Engineering
  • Civil Engineering, Surveying, and Building
  • Electronics and Communications Engineering
  • Energy Technology
  • Engineering (General)
  • Environmental Science, Engineering, and Technology
  • History of Engineering and Technology
  • Mechanical Engineering and Materials
  • Technology of Industrial Chemistry
  • Transport Technology and Trades
  • Browse content in Environmental Science
  • Applied Ecology (Environmental Science)
  • Conservation of the Environment (Environmental Science)
  • Environmental Sustainability
  • Environmentalist Thought and Ideology (Environmental Science)
  • Management of Land and Natural Resources (Environmental Science)
  • Natural Disasters (Environmental Science)
  • Nuclear Issues (Environmental Science)
  • Pollution and Threats to the Environment (Environmental Science)
  • Social Impact of Environmental Issues (Environmental Science)
  • History of Science and Technology
  • Browse content in Materials Science
  • Ceramics and Glasses
  • Composite Materials
  • Metals, Alloying, and Corrosion
  • Nanotechnology
  • Browse content in Mathematics
  • Applied Mathematics
  • Biomathematics and Statistics
  • History of Mathematics
  • Mathematical Education
  • Mathematical Finance
  • Mathematical Analysis
  • Numerical and Computational Mathematics
  • Probability and Statistics
  • Pure Mathematics
  • Browse content in Neuroscience
  • Cognition and Behavioural Neuroscience
  • Development of the Nervous System
  • Disorders of the Nervous System
  • History of Neuroscience
  • Invertebrate Neurobiology
  • Molecular and Cellular Systems
  • Neuroendocrinology and Autonomic Nervous System
  • Neuroscientific Techniques
  • Sensory and Motor Systems
  • Browse content in Physics
  • Astronomy and Astrophysics
  • Atomic, Molecular, and Optical Physics
  • Biological and Medical Physics
  • Classical Mechanics
  • Computational Physics
  • Condensed Matter Physics
  • Electromagnetism, Optics, and Acoustics
  • History of Physics
  • Mathematical and Statistical Physics
  • Measurement Science
  • Nuclear Physics
  • Particles and Fields
  • Plasma Physics
  • Quantum Physics
  • Relativity and Gravitation
  • Semiconductor and Mesoscopic Physics
  • Browse content in Psychology
  • Affective Sciences
  • Clinical Psychology
  • Cognitive Psychology
  • Cognitive Neuroscience
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business Ethics
  • Business Strategy
  • Business History
  • Business and Technology
  • Business and Government
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic History
  • Economic Systems
  • Economic Methodology
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Natural Disasters (Environment)
  • Social Impact of Environmental Issues (Social Science)
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • International Political Economy
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Theory
  • Politics and Law
  • Public Policy
  • Public Administration
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Developmental and Physical Disabilities Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

The Oxford Handbook of Qualitative Research (2nd edn)

Patricia Leavy, Independent Scholar, Kennebunk, ME, USA

The Oxford Handbook of Qualitative Research, second edition, presents a comprehensive retrospective and prospective review of the field of qualitative research. Original, accessible chapters written by interdisciplinary leaders in the field make this a critical reference work. Filled with robust examples from real-world research; ample discussion of the historical, theoretical, and methodological foundations of the field; and coverage of key issues including data collection, interpretation, representation, assessment, and teaching, this handbook aims to be a valuable text for students, professors, and researchers. This newly revised and expanded edition features up-to-date examples and topics, including seven new chapters on duoethnography, team research, writing ethnographically, creative approaches to writing, writing for performance, writing for the public, and teaching qualitative research.

Developing NICE guidelines: the manual

NICE process and methods [PMG20] Published: 31 October 2014 Last updated: 17 January 2024

6 Reviewing evidence

  • 6.1 Identifying and selecting relevant evidence
  • 6.2 Assessing evidence: critical appraisal, analysis, and certainty in the findings
  • 6.3 Equality and diversity considerations
  • 6.4 Health inequalities
  • 6.5 Summarising evidence
  • 6.6 Presenting evidence for reviews other than reviews of primary studies
  • 6.7 References and further reading

Reviewing evidence is an explicit, systematic and transparent process that can be applied to both quantitative (experimental and observational) and qualitative evidence (see the chapter on developing review questions and planning the evidence review). The key aim of any review is to provide a summary of the relevant evidence to ensure that the committee can make fully informed decisions about its recommendations. This chapter describes how evidence is reviewed in the development of guidelines.

Evidence reviews for NICE guidelines summarise the evidence and its limitations so that the committee can interpret the evidence and make appropriate recommendations, even where there is uncertainty.

Most of the evidence reviews for NICE guidelines will present syntheses of evidence from systematic literature searches for primary research studies. Evidence identified during these literature searches and from other sources (see the chapter on identifying the evidence: literature searching and evidence submission) should be reviewed against the review protocol to identify the most appropriate information to answer the review questions. The evidence review process used to inform guidelines must be explicit and transparent, and involves 8 main steps:

  • writing the review protocol (see the section on planning the evidence review in the chapter on developing review questions and planning the evidence review)
  • identifying and selecting relevant evidence (including a list of excluded studies with reasons for exclusion)
  • critical appraisal (assessing the study design and its methods)
  • extracting relevant data
  • synthesising the results (including statistical analyses such as meta-analysis)
  • assessing quality and certainty in the evidence
  • interpreting the results
  • considering health inequalities.

Any substantial deviations from these steps need to be agreed, in advance, with staff with responsibility for quality assurance. Additional considerations for reviews using alternative methods not based primarily on literature reviews of primary studies (such as formal consensus methods, adapting recommendations from other guidelines or primary analyses of real-world data) are discussed in the section on presenting evidence for reviews other than reviews of primary studies.

For all evidence reviews and data synthesis, it is important that the method used to report and evaluate the evidence is easy to follow. It should be written up in clear English and any analytical decisions should be clearly justified.

Updating previous NICE reviews

In many cases, the evidence reviews will be an update of a previous review we've done on the same or a similar topic, to include more recently published evidence. In these cases, a judgement should be made on what elements of the previous review can be reused, and which need to be redone, based on the level of similarity between the original and new review questions, protocols and methods. Examples of elements that can be considered for reuse include:

  • literature searches and literature search results
  • evidence tables for included studies
  • critical appraisal of included studies
  • data extraction and meta-analysis
  • previously identified information on equalities and health inequalities.

The process of selecting relevant evidence is common to all evidence reviews based on systematic literature searches; the other steps are discussed in relation to the main types of review question. The same rigour should be applied to reviewing all data, whether fully or partially published studies or unpublished data supplied by stakeholders. Care should be taken to identify and remove multiple reports of the same study to prevent double-counting.
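The de-duplication step above is often automated before screening begins. A minimal sketch of the idea follows, matching first on DOI and then on a normalised title; the record fields and helper names are illustrative assumptions, not part of the NICE manual.

```python
import re

def normalise_title(title: str) -> str:
    """Lower-case a title and strip punctuation/whitespace for matching."""
    return re.sub(r"[^a-z0-9]", "", title.lower())

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first occurrence of each study, keyed by DOI when present,
    otherwise by normalised title."""
    seen: set[str] = set()
    unique = []
    for rec in records:
        key = rec.get("doi") or normalise_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Invented example records: two pairs report the same study twice
records = [
    {"doi": "10.1000/abc", "title": "Trial A"},
    {"doi": "10.1000/abc", "title": "Trial A (reprint)"},    # duplicate DOI
    {"doi": None, "title": "Qualitative Study B"},
    {"doi": None, "title": "Qualitative  Study B."},         # duplicate title
]
print(len(deduplicate(records)))  # 2 unique studies remain
```

A real pipeline would also compare authors, year, and journal, since titles and DOIs alone miss some multiple reports.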

Published studies

Titles and abstracts of the retrieved citations should be screened against the inclusion criteria defined in the review protocol, and those that do not meet these should be excluded. A percentage should be screened independently by 2 reviewers (that is, titles and abstracts should be double-screened). The percentage of records to be double-screened for each review should be specified in the review protocol.

If reviewers disagree about a study's relevance, this should be resolved by discussion or by recourse to a third reviewer. If, after discussion, there is still doubt about whether or not the study meets the inclusion criteria, it should be retained. If there are concerns about the level of disagreement between reviewers, the reasons should be explored, and a course of action agreed to ensure a rigorous selection process. A further proportion of studies should then be double-screened to validate this new process until appropriate agreement is achieved.
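One common way to judge the level of disagreement between two reviewers is an inter-rater agreement statistic such as Cohen's kappa; the manual does not prescribe a specific statistic, so this is a hedged sketch with invented screening decisions.

```python
def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(a) == len(b)
    n = len(a)
    labels = set(a) | set(b)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    p_expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

# 10 abstracts screened independently by two reviewers (invented data)
r1 = ["in", "in", "out", "out", "out", "in", "out", "out", "in", "out"]
r2 = ["in", "out", "out", "out", "out", "in", "out", "out", "in", "out"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.78
```

A low kappa would be one trigger for exploring the reasons for disagreement and double-screening a further proportion of studies, as described above.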

Once the screening of titles and abstracts is complete, full versions of the selected studies should be obtained for assessment. As with title and abstract screening, a percentage of full studies should be checked independently by 2 reviewers, with any differences being resolved and additional studies being assessed by multiple reviewers if sufficient agreement is not achieved. Studies that fail to meet the inclusion criteria once the full version has been checked should be excluded at this stage.

The study selection process should be clearly documented and include full details of the inclusion and exclusion criteria. A flow chart should be used to summarise the number of papers included and excluded at each stage and this should be presented in the evidence review document (see the PRISMA statement). Each study excluded after checking the full version should be listed, along with the reason for its exclusion. Reasons for study exclusion need to be sufficiently detailed for people to be able to understand the reason without needing to read the original paper (for example, avoid stating only that 'the study population did not meet that specified in the review protocol', but also include why it did not match the protocol population).
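The flow-chart counts and the list of full-text exclusions with reasons can both be derived from a single screening log. A minimal sketch, assuming an illustrative log structure (the stage names and fields are not from the manual):

```python
from collections import Counter

# Invented screening log: one row per decision
screening_log = [
    {"id": 1, "stage": "title/abstract", "decision": "exclude",
     "reason": "population was adults; protocol specifies under 19s"},
    {"id": 2, "stage": "title/abstract", "decision": "include", "reason": ""},
    {"id": 3, "stage": "full text", "decision": "exclude",
     "reason": "conference abstract only, no full study report"},
    {"id": 4, "stage": "full text", "decision": "include", "reason": ""},
]

# Counts per stage, for the PRISMA-style flow chart
flow = Counter((r["stage"], r["decision"]) for r in screening_log)
print(flow[("full text", "include")])  # 1

# Full-text exclusions listed with their specific reasons
excluded_full_text = [(r["id"], r["reason"]) for r in screening_log
                      if r["stage"] == "full text" and r["decision"] == "exclude"]
print(excluded_full_text)
```

Keeping the reason free-text but specific, as in row 1, satisfies the requirement that a reader can understand the exclusion without the original paper.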

Priority screening

Priority screening refers to any technique that uses a machine learning algorithm to enhance the efficiency of screening. Usually, this involves taking information on previously included or excluded papers, and using this to order the unscreened papers from those most likely to be included to those least likely. This can be used to identify a higher proportion of relevant papers earlier in the screening process, or to set a cut‑off for manual screening, beyond which it is unlikely that additional relevant studies will be identified.
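The ordering idea behind priority screening can be sketched as follows. A real pipeline would use a trained text classifier; here a crude word-overlap score against already-included titles stands in for the machine learning model, and all titles are invented for illustration.

```python
def score(title: str, included_titles: list[str]) -> int:
    """Count words the candidate title shares with any included title."""
    words = set(title.lower().split())
    included_words: set[str] = set()
    for t in included_titles:
        included_words |= set(t.lower().split())
    return len(words & included_words)

included = ["qualitative interviews on diabetes self-management"]
unscreened = [
    "cost model of hospital car parking",
    "focus groups on diabetes self-management in teenagers",
    "qualitative study of nurse interviews",
]

# Rank unscreened papers from most to least likely to be included
ranked = sorted(unscreened, key=lambda t: -score(t, included))
print(ranked[0])  # the diabetes focus-group paper is screened first
```

After each batch of decisions the model is retrained and the remaining papers re-ranked, which is what concentrates relevant papers early in the screening process.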

There is currently no published guidance on setting thresholds for stopping screening where priority screening has been used. Any methods used should be documented in the review protocol and agreed in advance with the team with responsibility for quality assurance. Any thresholds set should, at minimum, consider the following:

  • the number of references identified so far through the search, and how this identification rate has changed over the review (for example, how many candidate papers were found in each 1,000 screened)
  • the overall number of studies expected, which may be based on a previous version of the guideline (if it is an update), published systematic reviews, or the experience of the guideline committee
  • the ratio of relevant/irrelevant records found at the random sampling stage (if undertaken) before priority screening.

The actual thresholds used for each review question should be clearly documented, either in the guideline methods chapter or in the evidence review documents. Examples of how this has been implemented can be found in NICE's guidelines on autism spectrum disorders in under 19s and prostate cancer .

Ensuring relevant records are not missed

Regardless of the level of double-screening, and whether or not priority screening was used, additional checks should always be made to reduce the risk that relevant studies are not identified. These should include, at minimum:

checking reference lists of identified systematic reviews, even if these reviews are not used as a source of primary data

checking with the guideline committee that they are not aware of any relevant studies that have been missed

looking for published papers associated with any key trial registry entries or published protocols that have been identified.

It may be useful to test the sensitivity of the search by checking that it picks up known studies of relevance.

Conference abstracts

Conference abstracts seldom contain enough information to allow confident judgements about the quality and results of a study. It can be difficult to trace the original studies or additional data, and the information found may not always be useful. Also, full papers of good-quality studies are often published after the conference abstract, and these will be identified by routine searches. Conference abstracts should therefore not routinely be included in the search strategy and review, unless there are good reasons for doing so. If a decision is made to include conference abstracts for a particular review, the justification for doing so should be clearly documented in the review protocol. If conference abstracts are searched for, the investigators may be contacted if additional information is needed to complete the assessment for inclusion.

National policy, legislation and medicines safety advice

Relevant national policy, legislation or medicines safety advice may be identified in the literature search and used to inform guidelines (such as drug safety updates from the Medicines and Healthcare products Regulatory Agency [MHRA]). This evidence does not need critical appraisal in the same way as other evidence, given the nature of the source. National policy, legislation or medicines safety advice can be quoted verbatim as evidence (for example, the Health and Social Care Act [2012]), where needed, and a summary of any relevant medicines safety advice identified should be included in the evidence review document.

Unpublished data and studies in progress

Any unpublished data should be quality assessed in the same way as published studies (see the section on assessing evidence: critical appraisal, analysis, and certainty in the findings ). If additional information is needed to complete the quality assessment, the investigators may be contacted. Similarly, if data from in-progress studies are included, they should be quality assessed in the same way as published studies. Confidential information should be kept to a minimum, and a structured abstract of the study must be made available for public disclosure during consultation on the guideline. Additional considerations for reviews using primary analyses of real-world data are discussed in the section on presenting evidence for reviews other than reviews of primary studies .

Grey literature

Grey literature may be quality assessed in the same way as published literature, although because of its nature, such an assessment may be more difficult. Consideration should therefore be given to the elements of quality that are most likely to be important (for example, elements of the study methodology that are less clearly described than in a published article, because of the lack of need to go through the peer-review process, or conflicts of interest in the study).

Introduction

Assessing the quality of the evidence for a review question is critical. It requires a systematic process of assessing both the appropriateness of the study design and the methods of the study (critical appraisal), as well as the certainty of the findings (using an approach such as GRADE).

Options for assessing the quality of the evidence should be considered by the development team. The chosen approach should be discussed and agreed with staff with responsibility for quality assurance, where the approach deviates from the standard (described in critical appraisal of individual studies). The agreed approach should be documented in the review protocol (see the appendix on review protocol templates ) together with the reasons for the choice. If additional information is needed to complete the data extraction or quality assessment, study investigators may be contacted, although this is not something that is done routinely.

Critical appraisal of individual studies

Every study should be appraised using a checklist appropriate for the study design (see the appendix on appraisal checklists, evidence tables, GRADE and economic profiles ). If a checklist other than those listed is needed, or the one recommended as the preferred option is not used, the planned approach should be discussed and agreed with staff with responsibility for quality assurance and documented in the review protocol.

The ROBINS-I checklist is currently only validated and recommended for use with non-randomised controlled trials and cohort studies. However, there may be situations where a mix of non-randomised study types is included within a review. It can then be helpful to use this checklist across all included study types to maintain consistency of assessment. If this is done, additional care should be taken to ensure all relevant risks of bias for study designs for which ROBINS-I is not currently validated (such as case–control studies) are assessed.

In some evidence reviews, it may be possible to identify particular risk of bias criteria that are likely to be the most important indicators of biases for the review question (for example, conflicts of interest or study funding, if it is an area where there is known to be concern about the sponsorship of studies). If any such criteria are identified, these should then be used to guide decisions about the overall risk of bias of each individual study.

Sometimes, a decision might be made to exclude certain studies at particularly high risk of bias, or to explore any impact of bias through sensitivity analysis . If so, the approach should be specified in the review protocol and agreed with staff with responsibility for quality assurance.

Criteria relating to key areas of bias may also be useful when summarising and presenting the evidence (see the section on summarising evidence ). Topic-specific input (for example, from committee members) may be needed to identify the most appropriate criteria to define subgroup analyses, or to define inclusion in a review, for example, the minimum biopsy protocol for identifying the relevant population in cancer studies.

For each criterion that might be explored in sensitivity analysis, the decision on whether it has been met or not (for example, which population subgroup the study has been categorised as), and the information used to arrive at the decision (for example, the study inclusion criteria, or the actual population recruited into the study), should be recorded in a standard template for inclusion in an evidence table (see the appendix on appraisal checklists, evidence tables, GRADE and economic profiles ).

Each study included in an evidence review should be critically appraised by 1 reviewer and a proportion of these checked by another reviewer. Any differences in critical appraisal should be resolved by discussion or involving a third reviewer.

Data extraction

Study characteristics should be extracted to a standard template for inclusion in an evidence table (see the appendix on appraisal checklists, evidence tables, GRADE and economic profiles ). Care should be taken to ensure that newly identified studies are cross-checked against existing studies to avoid double-counting. This is particularly important where there may be multiple reports of the same study.

If complex data extraction is done for a review question (for example, situations where a large number of transformations or adjustments are made to the raw data from the included studies), data extraction should be checked by a second reviewer to avoid data errors, which are time-consuming to fix. This may be more common in reviews using more complex analysis methods (for example, network meta-analyses or meta-regressions) but decisions around dual data extraction should be based on the complexity of the extraction, not the complexity of the analysis.

Analysing and presenting results for studies on the effectiveness of interventions

Meta-analysis may be appropriate if treatment estimates of the same outcome from more than 1 study are available. Recognised approaches to meta-analysis should be used, as described in the handbook from Cochrane, in Higgins et al. (2021) and in documents developed by the NICE Guidelines Technical Support Unit.

There are several ways of summarising and illustrating the strength and direction of quantitative evidence about the effectiveness of an intervention, even if a meta-analysis is not done. Forest plots can be used to show effect estimates and confidence intervals for each study (when available, or when it is possible to calculate them). They can also be used to provide a graphical representation when it is not appropriate to do a meta-analysis and present a pooled estimate. However, the homogeneity of the outcomes and measures in the studies needs to be carefully considered: a forest plot needs data derived from the same (or justifiably similar) population, interventions, outcomes and measures.
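The pooled estimate shown at the bottom of a forest plot is typically an inverse-variance weighted average. A minimal fixed-effect sketch with hypothetical study data follows (in practice, recognised meta-analysis software should be used, and random-effects models considered where heterogeneity is present):

```python
import math

# Illustrative inverse-variance fixed-effect pooling of log odds ratios,
# as used to produce the pooled estimate on a forest plot.
# The study data below are hypothetical.

studies = [  # (log odds ratio, standard error)
    (-0.40, 0.20),
    (-0.25, 0.15),
    (-0.55, 0.30),
]

weights = [1 / se**2 for _, se in studies]          # larger studies weigh more
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
```

Each study's estimate and confidence interval would appear as a row on the forest plot, with the pooled result and its interval as the summary diamond.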

Head-to-head data comparing the effectiveness of interventions are useful for comparing 2 active management options. A network meta-analysis (NMA) is a method that can include trials that compare the interventions of interest head-to-head, as well as trials that allow indirect comparisons via other interventions.
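The indirect comparison that an NMA generalises can be sketched in its simplest form (sometimes called the Bucher method) using hypothetical log odds ratios: the indirect A versus B effect is the difference of the two direct effects against a common comparator C, and its variance is the sum of their variances, which is why indirect evidence is less precise than direct evidence:

```python
import math

# Simplest indirect comparison underlying network meta-analysis (the Bucher
# method). All numbers below are hypothetical.

d_AC, se_AC = -0.50, 0.15   # direct log odds ratio, A versus common comparator C
d_BC, se_BC = -0.20, 0.20   # direct log odds ratio, B versus C

d_AB = d_AC - d_BC                      # indirect A versus B estimate
se_AB = math.sqrt(se_AC**2 + se_BC**2)  # variances add: less precise than direct
```

A full NMA extends this logic across a whole network of comparisons, combining direct and indirect evidence in a single model.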

The same principles of good practice for evidence reviews and meta-analyses should be applied when conducting network meta-analyses. The reasons for identifying and selecting the randomised controlled trials (RCTs) should be explained. This includes the reasons for selecting the treatment comparisons, and whether any interventions that are not being considered as options for recommendations will be included within the network to allow for indirect comparisons between interventions of interest. The methods of synthesis should be described clearly either in the methods section of the evidence review document or the guideline methods chapter.

When multiple competing options are being appraised, network meta-analysis is the preferred approach to use, and should be considered in such cases. The data from individual trials should also be documented (usually as an appendix). If there is doubt about the inclusion of particular trials (for example, because of concerns about limitations or applicability ), a sensitivity analysis in which these trials are excluded may also be presented. The level of consistency between the direct and indirect evidence on the interventions should be reported, including consideration of model fit and comparison statistics such as the total residual deviance, and the deviance information criterion (DIC). Results of any further inconsistency tests done, such as deviance plots or those based on node-splitting, should also be reported.

In addition to the inconsistency checks described above, which compare the direct and indirect evidence within a network meta-analysis model, results from direct comparisons may also be presented for comparison with the results from a network meta-analysis (thus comparing the direct and overall network meta-analysis results to aid validity checks and interpretation, rather than direct and indirect to check consistency). These may be the results from the direct evidence within the network meta-analysis, or from direct pairwise comparisons done outside the network meta-analysis, depending on which is considered more informative.

When evidence is combined using network meta-analyses, trial randomisation should typically be preserved. If this is not appropriate, the planned approach should be discussed and agreed with staff with responsibility for quality assurance. A comparison of the results from single treatment arms from different RCTs is not acceptable unless the data are treated as observational and analysed as such.

Further information on complex methods for evidence synthesis is provided by the documents developed by the NICE Guidelines Technical Support Unit . The methods described in these documents should be used as the basis for analysis, and any deviations from these methods clearly described and justified, and agreed with staff who have responsibility for quality assurance.

To promote transparency of health research reporting (as endorsed by the EQUATOR network ), evidence from a network meta-analysis should usually be reported according to the criteria in the modified PRISMA‑NMA checklist in the appendix on network meta-analysis reporting standards .

Evidence from a network meta-analysis can be presented in a variety of ways. The network should be presented diagrammatically with the available treatment comparisons clearly identified, and show the number of trials in each comparison. Further information on how to present the results of network meta-analyses is provided by the documents developed by the NICE Guidelines Technical Support Unit .

There is no NICE-endorsed approach for assessing the quality or certainty of outputs derived from network meta-analysis. At a minimum, a narrative description of the confidence in the results of the network meta-analysis should be presented, considering all the areas in a standard GRADE profile (risk of bias, indirectness, inconsistency and imprecision). Several other approaches have been suggested in the literature that may be relevant in particular circumstances (Phillippo et al. 2019, Phillippo et al. 2017, Caldwell et al. 2016, Puhan et al. 2014, Salanti et al. 2014). The approach to assessing confidence in results should take into account the particular questions the network meta-analysis is trying to address. For example, the approach to imprecision may be different if a network meta-analysis is trying to identify the single most effective treatment, compared to creating a ranking of all possible treatments.

Dealing with complex interventions

Analysing quantitative evidence on complex interventions may involve considering factors other than effectiveness. This includes:

whether there are particular circumstances in which the interventions work

whether there is interaction, synergy or mediation between intervention components

which factors affect implementation

whether the intervention is feasible and acceptable in different contexts

how these factors might enhance or reduce the intervention's effect in different circumstances (see sections 17.2 and 17.5 in the Cochrane Handbook for Systematic Reviews of Interventions).

Different analytical approaches are relevant to different types of complexity and question (see table 1 in Higgins et al. 2019 ). The appropriate choice of technique will depend on the review question, available evidence, time needed to do the approach and likely impact on guideline recommendations. The approach should be discussed and agreed with staff who have responsibility for quality assurance.

Further information on complex methods for evidence synthesis is provided by the documents developed by the NICE Guidelines Technical Support Unit and NICE's Decision Support Unit .

Additional information is available from:

Agency for Healthcare Research and Quality (AHRQ) series on complex intervention systematic reviews (2017)

Viswanathan et al. (2017) AHRQ series: paper 4

BMJ series on complex health interventions in complex systems: concepts and methods for evidence-informed health decisions (Higgins, 2019)

chapter 17 of the Cochrane Handbook for Systematic Reviews of Interventions .

Analysing and presenting results of studies of diagnostic test accuracy

Information on methods of presenting and synthesising results from studies of diagnostic test accuracy is available in the Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy. When meta-analyses of paired accuracy measures (such as sensitivity and specificity) are done, bivariate analysis should be used where possible, to preserve correlations between outcomes. Univariate analyses can still be used if there are insufficient studies for a bivariate analysis.

Meta-analyses should not normally be done on positive and negative predictive values, unless the analysis takes account of differences in prevalence. Instead, analyses can be done on sensitivity and specificity and these results applied to separate prevalence estimates to obtain positive and negative predictive values, if these are outcomes specified in the review protocol.
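The recommended calculation can be sketched as follows, applying (hypothetical) pooled sensitivity and specificity to separate prevalence estimates via Bayes' theorem, rather than pooling predictive values directly:

```python
# Deriving predictive values from pooled sensitivity and specificity
# applied to a separate prevalence estimate. Values are hypothetical.

def predictive_values(sens, spec, prev):
    """Positive and negative predictive values at a given prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# The same test performs very differently at 20% and 2% prevalence:
ppv_high, _ = predictive_values(0.90, 0.95, 0.20)
ppv_low, _ = predictive_values(0.90, 0.95, 0.02)
```

This prevalence dependence is exactly why predictive values from studies with different prevalences should not normally be pooled directly.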

If meta-analysis is not possible or appropriate (for example, if the differences between populations, reference standards or index test thresholds are too large), there should be a narrative summary of the results that are considered most important for the review question.

Analysing and presenting results of studies of prognosis, or prediction models for a diagnosis or prognosis

There is currently no consensus on approaches for synthesising evidence from studies on prognosis , or prediction models for diagnosis or prognosis. The approach chosen should be based on the types of data included (for example, prognostic accuracy data, prediction models, or associative studies presenting odds ratios or hazard ratios). For prognostic accuracy data, the same approach for synthesis can be taken as with diagnostic accuracy data, with the addition of the need to consider length of follow-up as part of the analysis. When considering meta-analysis, reviewers should consider how similar the prognostic factors or predictors and confounding factors are across all studies reporting the same outcome measure. It is important to explore whether all likely confounding factors have been accounted for, and whether the metrics used to measure exposure (or outcome) are universal. When studies cannot be pooled, results should be presented consistently across studies. For more information on prognostic reviews, see Collins 2015 and Moons 2015.

Analysing, synthesising and presenting results of qualitative evidence

Qualitative evidence occurs in many forms and formats and so different methods may be used for synthesis and presentation (such as those described by the Cochrane Qualitative & Implementation Methods Group ).

Qualitative evidence should be synthesised and then summarised using GRADE-CERQual (see GRADE-CERQual Implementation series ). If synthesis of the evidence is not appropriate, a narrative summary may be adequate; this should be agreed with staff with responsibility for quality assurance. The approach used may depend on the volume of the evidence. If the qualitative evidence is extensive, then a recognised method of synthesis is preferable (normally aggregative, thematic or framework synthesis type approaches). If the evidence is disparate and sparse, a narrative summary may be appropriate.

The simplest approach to synthesising qualitative data in a meaningful way is to group the findings in the evidence tables (comprising 'first order' participant quotes and participant observations, as well as 'second order' interpretations by study authors), and then to write third-order interpretations based on the reviewers' interpretation of the first- and second-order constructs synthesised across studies. These third-order interpretations become the themes and subthemes, or 'review findings'. This synthesis can be carried out if enough data are found, and if the papers and research reports cover the same (or a similar) context or use similar methods. These should be relevant to the review questions and could, for example, include intervention, age, population or setting.

Synthesis can be carried out in several ways (as noted above), and each may be appropriate depending on the question type, and the evidence identified. Papers reporting on the same findings can be grouped together to compare and contrast themes, focusing not just on consistency but also on any differences. The narrative should be based on these themes.

A more complex but useful approach is 'conceptual mapping' (see Johnson et al. 2000). This involves identifying the key themes and concepts across all the evidence tables and grouping them into first level (major), second level (associated) and third level (subthemes) themes. Results are presented in schematic form as a conceptual diagram and the narrative is based on the structure of the diagram.

Integrating and presenting results of mixed methods reviews

If a mixed methods approach has been identified as needed (see the chapter on developing review questions and planning the evidence review), the approach to integration needs consideration. Integration refers to either A) how quantitative and qualitative evidence are combined following separate synthesis (convergent-segregated), or B) how quantitative and qualitative data that have been transformed are merged (convergent-integrated).

A) The convergent-segregated approach consists of doing separate quantitative and qualitative syntheses (as usual), followed by integration of the results derived from each of the syntheses. Integrating the quantitative and qualitative synthesised findings gives a greater depth of understanding of the phenomena of interest compared to doing 2 separate component syntheses without formally linking the 2 sets of evidence.

All qualitative evidence from a convergent-segregated mixed methods review should be synthesised and then summarised using GRADE-CERQual. If appropriate, all quantitative data (for example, for intervention studies) should be presented using GRADE. An overall summary of how the quantitative and qualitative evidence are linked should ideally be presented in matrices or thematic diagrams. It should also be summarised in the review, using the questions in the section on integration of quantitative and qualitative evidence to frame the integration evidence summary (JBI manual for evidence synthesis).

Integration of quantitative and qualitative evidence

The integration section should provide a summary that represents the configured analysis of the quantitative and qualitative evidence. This can include matrices, look-up tables or thematic maps, but as a minimum should include statements that address all of the following questions:

Are the results and findings from individual syntheses supportive or contradictory?

Does the qualitative evidence explain why the intervention is or is not effective?

Does the qualitative evidence explain differences in the direction and size of effect across the included quantitative studies?

Which aspects of the quantitative evidence were or were not explored in the qualitative studies?

Which aspects of the qualitative evidence were or were not tested in the quantitative studies?

'All of the questions above should be answered, but dependent on the evidence included in the review it is acknowledged that some responses will be more detailed than others' ( JBI manual for evidence synthesis ).

This should be reported as a summary of the mixed findings after reporting on the effectiveness and qualitative evidence synthesis.

The convergent-integrated approach refers to a process of combining extracted data from quantitative studies (including data from the quantitative component of mixed methods studies) and qualitative studies (including data from the qualitative component of mixed methods studies), and involves data transformation.

The convergent-segregated approach is the standard approach to adopt in most of our mixed methods reviews. If convergent-segregated is not the planned approach, data transformation methods and outcome reporting should be discussed and agreed with staff who have responsibility for quality assurance, and documented in the review protocol.

Certainty or confidence in the findings of analysis

Once critical appraisal of the studies and data analysis are complete, the certainty or confidence in the findings should be presented (for individual or synthesised studies) at outcome level using GRADE or GRADE-CERQual. Although GRADE has not been formally validated for all quantitative review types (such as prognostic reviews), GRADE principles can be applied and adapted to other types of questions. Any substantial changes made by the development team to GRADE should be agreed with staff with responsibility for quality assurance before use.

If using GRADE or GRADE-CERQual is not appropriate, the planned approach should be discussed and agreed with staff with responsibility for quality assurance. It should be documented in the review protocol (see the appendix on review protocol templates ) together with the reasons for the choice.

Certainty or confidence in the findings by outcome

Before starting an evidence review, the outcomes of interest which are important to people using services and the public for the purpose of decision making should be identified. The reasons for prioritising outcomes should be stated in the evidence review document. This should be done before starting the evidence review and clearly separated from discussion of the evidence, because there is potential to introduce bias if outcomes are selected when the results are known. An example of this would be choosing only outcomes for which there were statistically significant results.

The committee discussion section should also explain how the importance of outcomes was considered when discussing the evidence. For example, the committee may want to categorise prioritised outcomes as 'critical' or 'important'. Alternatively, they may consider all prioritised outcomes crucial for decision making, in which case no distinction is made between 'critical' and 'important' outcomes. The impact of this on the final recommendations should be clear.

GRADE and GRADE-CERQual assess the certainty or confidence in the review findings by looking at features of the evidence found for each outcome or theme. GRADE is summarised in box 6.1, and GRADE-CERQual in box 6.2.

GRADE assesses the following features for the evidence found for each outcome:

study limitations (risk of bias) – the internal validity of the evidence

inconsistency – the heterogeneity or variability in the estimates of treatment effect across studies

indirectness – the extent to which the population, intervention, comparator and outcome of interest in the studies differ from those specified in the review protocol

imprecision – the level of certainty in the effect estimate

other considerations – publication bias, the degree of selective publication of studies.

In a standard GRADE approach, the certainty or confidence of evidence is classified as high, moderate, low or very low. In the context of NICE guidelines, it can be interpreted as follows:

High – further research is very unlikely to change our recommendation.

Moderate – further research may have an important impact on our confidence in the estimate of effect and may change the strength of our recommendation.

Low – further research is likely to have an important impact on our confidence in the estimate of effect and is likely to change the recommendation.

Very low – any estimate of effect is very uncertain and further research will probably change the recommendation.
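Operationally, the standard GRADE classification can be thought of as starting at 'high' for randomised evidence and dropping one level for each serious concern (or two for a very serious concern) across the domains listed above. The following is a schematic sketch of that logic only; actual GRADE ratings require judgement, not mechanical counting:

```python
# Schematic sketch of GRADE rating logic: start at 'high' for randomised
# evidence and move down one level per serious concern (two per very
# serious concern). An illustration of the classification, not a
# replacement for the judgement the GRADE approach requires.

LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(start="high", concerns=()):
    """concerns: iterable of 1 (serious) or 2 (very serious), one per domain."""
    index = LEVELS.index(start) - sum(concerns)
    return LEVELS[max(index, 0)]  # cannot drop below 'very low'

# For example, serious risk of bias plus serious imprecision:
rating = grade_certainty(concerns=[1, 1])
```

Non-randomised evidence is conventionally started at a lower level, and GRADE also allows rating up in defined circumstances (for example, a large magnitude of effect).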

GRADE-CERQual assesses the following features for the evidence found for each finding:

methodological limitations – the internal validity of the evidence

relevance – the extent to which the evidence is applicable to the context in the review question

coherence – the extent of the similarities and differences within the evidence

adequacy of data – the extent of richness and quantity of the evidence.

In a standard GRADE-CERQual approach, the certainty or confidence of evidence is classified as high, moderate, low or very low. In the context of NICE guidelines, it can be interpreted as follows:

High – it is highly likely that the review finding is a reasonable representation of the phenomenon of interest.

Moderate – it is likely that the review finding is a reasonable representation of the phenomenon of interest.

Low – it is possible that the review finding is a reasonable representation of the phenomenon of interest.

Very low – it is unclear whether the review finding is a reasonable representation of the phenomenon of interest.

The approach we take differs from the standard GRADE and GRADE-CERQual system in 2 ways:

it also integrates a review of the quality of cost-effectiveness studies (see the chapter on incorporating economic evaluation )

it does not use 'overall summary' labels for the quality of the evidence across all outcomes, or for the strength of a recommendation, but uses the wording of recommendations to reflect the strength of the evidence (see the chapter on interpreting the evidence and writing the guideline ).

GRADE or GRADE-CERQual tables summarise the certainty in the evidence and data for each critical and each important outcome or theme and include a limited description of the certainty in the evidence. GRADE or GRADE-CERQual tables should be available (in an appendix) for each review question.

For mixed methods findings there is no recognised approach to combining the certainty of evidence from GRADE and GRADE-CERQual. The certainty and confidence ratings should be reported for both evidence types within the evidence summary of integrated findings and their impact on decision making described in the relevant section of the review.

Alternative approaches to assessing imprecision in GRADE

When assessing imprecision, the standard GRADE approach can be used. If this approach is not used, the alternative should be agreed with staff who have responsibility for quality assurance.

Equality and health inequalities

Our equality and diversity duties are expressed in a single public sector equality duty ('the equality duty'; see the section on key principles for developing NICE guideline recommendations in the introduction chapter). The equality duty supports good decision making by encouraging public bodies to understand how different people will be affected by their activities. As much of our work involves developing advice for others on what to do, this includes thinking about how people will be affected by our recommendations when they are implemented (for example, by health and social care practitioners).

In addition to meeting our legal obligations, we are committed to going beyond compliance, particularly in terms of tackling health inequalities . Specifically, we consider that we should also take account of the 4 dimensions of health inequalities – socioeconomic status and deprivation, protected characteristics (defined in the Equality Act 2010), inclusion health groups (such as people experiencing homelessness and young people leaving care), and geography. Wherever possible, our guidance aims to reduce and not increase identified health inequalities.

Ensuring inclusivity of the evidence review criteria

Any equality criteria specified in the review protocol should be included in the evidence tables. At the data extraction stage, reviewers should refer to the health inequalities framework criteria (including age, gender/sex, sexual orientation, gender reassignment, disability, ethnicity, religion, place of residence, occupation, education, socioeconomic position and social capital; Gough et al. 2012) and any other relevant protected characteristics, and record these where reported, if specified in the review protocol. See the section on reducing health inequalities in the introduction chapter . Review inclusion and exclusion criteria should also take the relevant groups into account, as specified in the review protocol.

Equalities and health inequalities should be considered during the drafting of the evidence reviews, including any issues documented in the equality and health inequalities assessment. Equality and health inequality considerations should be included in the data extraction process and should be recorded in the committee discussion section. Equalities and health inequalities are also considered during surveillance and updating. See chapters on ensuring that published guidelines are current and accurate and updating guideline recommendations for more information.

Presenting evidence

The following sections should be included in the evidence review document:

an introduction to the evidence review

a description of the studies or other evidence identified, in either table or narrative format

evidence tables (usually presented in an appendix)

full GRADE or GRADE-CERQual profiles (in an appendix)

evidence summaries (of the results or conclusions of the evidence)

an overall summary of merged quantitative and qualitative evidence (either using matrices or thematic diagrams) and the integration questions for mixed methods reviews

results from other analyses of evidence, such as forest plots, area under the curve graphs, and network meta-analyses (usually presented in an appendix; see the appendix on network meta-analysis reporting standards).

The evidence should usually be presented separately for each review question; however, alternative methods of presentation may be needed for some evidence reviews (for example, where review questions are closely linked and need to be interpreted together).

Any substantial deviations in presentation need to be agreed, in advance, with staff with responsibility for quality assurance.

Describing the included evidence

A description of the evidence identified should be produced. The content of this will depend on the type of question and the type of evidence. It should also identify and describe any gaps in the evidence, and cover at minimum:

the volume of information for the review question(s), that is, the number of studies identified, included, and excluded (with a link to a PRISMA selection flowchart, in an appendix)

the study types, populations, interventions, settings or outcomes for each study related to a particular review question.

Evidence tables

Evidence tables help to identify the similarities and differences between studies, including the key characteristics of the study population and interventions or outcome measures.

Data from identified studies are extracted to standard templates for inclusion in evidence tables. The type of data and study information that should be included depends on the type of study and review question, and should be concise and consistently reported.

The types of information that could be included for quantitative studies are:

bibliography (authors, date)

study aim, study design (for example, RCT, case–control study) and setting (for example, country)

funding details (if known)

population (for example, source and eligibility, and which population subgroup of the protocol the study has been mapped to, if relevant)

intervention, if applicable (for example, content, who delivers the intervention, duration, method, dose, mode or timing of delivery, and which intervention subgroup of the protocol the study has been mapped to, if relevant)

comparator, if applicable (for example, content, who delivers the intervention, duration, method, dose, mode or timing of delivery)

method of allocation to study groups (if applicable)

outcomes (for example, primary and secondary and whether measures were objective, subjective or otherwise validated, and the timepoint at which these outcomes were measured)

key findings (for example, effect sizes and confidence intervals for all relevant outcomes, and where appropriate, other information such as numbers needed to treat and considerations of heterogeneity if summarising a systematic review or meta-analysis)

inadequately reported data, missing data or if data have been imputed (include method of imputation or if transformation is used)

overall comments on quality, based on the critical appraisal and what checklist was used to make this assessment. When study details are inadequately reported, or absent, this should be clearly stated.
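Some of the derived statistics listed above follow directly from the reported event rates. For example, a number needed to treat is the reciprocal of the absolute risk reduction. The sketch below is a minimal illustration using hypothetical event rates; it is not part of the NICE guidance itself:

```python
def nnt(control_event_rate, intervention_event_rate):
    """Number needed to treat: the reciprocal of the absolute
    risk reduction (ARR) between control and intervention groups."""
    arr = control_event_rate - intervention_event_rate
    if arr <= 0:
        raise ValueError("intervention shows no absolute benefit")
    return 1 / arr

# Hypothetical rates: the outcome occurs in 20% of the control group
# and 15% of the intervention group, so roughly 20 people need to be
# treated to prevent one additional event.
print(round(nnt(0.20, 0.15)))
```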

If data are not being used in any further statistical analysis, or are not reported in GRADE tables, effect sizes (point estimates) with confidence intervals should be reported, or back-calculated from the published evidence where possible. If confidence intervals are not reported, exact p values (whether or not significant) should be given, together with the test from which they were obtained. When confidence intervals or p values are inadequately reported or not given, this should be stated. Any descriptive statistics (including mean values and measures of spread such as ranges) indicating the direction of the difference between intervention and comparator should be presented. If no further statistical information is available, this should be clearly stated.
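Where a study reports only a point estimate and a two-sided p value, a confidence interval can often be back-calculated under a normal approximation, following the approach described by Altman and Bland. The sketch below is illustrative only (the function name and inputs are ours, not NICE's) and assumes a ratio measure analysed on the log scale:

```python
from math import exp, log
from statistics import NormalDist

def ci_from_p(ratio_estimate, p_value, alpha=0.05):
    """Back-calculate an approximate confidence interval for a ratio
    measure (for example, a risk ratio or odds ratio) from its point
    estimate and a two-sided p value, assuming normality on the log scale."""
    z_p = NormalDist().inv_cdf(1 - p_value / 2)   # z score implied by the p value
    se = abs(log(ratio_estimate)) / z_p           # implied standard error of log(estimate)
    z_ci = NormalDist().inv_cdf(1 - alpha / 2)    # about 1.96 for a 95% interval
    return (exp(log(ratio_estimate) - z_ci * se),
            exp(log(ratio_estimate) + z_ci * se))

# A risk ratio of 0.81 with p = 0.03 implies a 95% CI of about 0.67 to 0.98.
low, high = ci_from_p(0.81, 0.03)
```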

The type of data that could be reported in evidence tables for qualitative studies includes:

study aim, study design and setting (for example, country)

population or participants

theoretical perspective adopted (such as grounded theory)

key objectives and research questions; methods (including analytical and data collection technique)

key themes/findings (including quotes from participants that illustrate these themes or findings, if appropriate)

gaps and limitations

Evidence summaries

Full GRADE or GRADE-CERQual tables that present both the results of the analysis and describe the confidence in the evidence should normally be provided (in an appendix).

Additionally, whether or not GRADE or GRADE-CERQual is used, a summary of the evidence should be included within the evidence review document. This summary can be in any format (narrative, tabular or pictorial) but should contain sufficient detail to explain the key findings of the review without the reader needing to refer to the full results in the appendices.

Evidence summaries are structured and written to help committees formulate recommendations, and to help stakeholders and users of the guidance understand why those recommendations were made. They are separate from the committee's interpretation of the evidence, which should be covered in the committee discussion section. They can help to establish:

whether or not there is sufficient evidence (in terms of strength and applicability) to form a judgement

whether (on balance) the evidence demonstrates that an intervention, approach or programme is effective or ineffective, or is inconclusive

the size of effect and associated measure of uncertainty

whether the evidence is applicable to people affected by the guideline and contexts covered by the guideline.

Structure and content of evidence summaries

Evidence summaries do not need to repeat every finding from an evidence review, but should contain sufficient information to understand the key findings of the review, including:

Sufficient descriptions of the interventions, tests or factors being reported on to enable interpretation of the results reported.

The volume of and confidence in the evidence, as well as the magnitude and direction of effects.

Key strengths and limitations of the evidence that may not be obvious from overall confidence ratings (for example, the countries evidence came from, if that is expected to have a meaningful impact on the results).

For findings not showing a meaningful benefit or harm between multiple options, it should be clear whether these have been interpreted as demonstrating equivalence, or simply that it is not possible to tell whether there is a difference or not from the available evidence.

Any outcomes where evidence was searched for but no or insufficient evidence was found.

These summaries can be done in a variety of formats (for example, evidence statement, narrative summaries, tables) provided they cover the relevant information. 'Vote counting' (merely reporting on the number or proportion of studies showing a particular positive or negative finding) is not an acceptable summary of the evidence.

Context- or topic-specific terms (for example, 'an increase in HIV incidence', 'a reduction in injecting drug use' and 'smoking cessation') may be used. Any such terms should be used consistently in each review and their definitions reported.

The principles described above remain relevant when reporting evidence that is not based on systematic reviews of primary studies done by NICE. A description of some of these alternative approaches, and when they may be appropriate, is given in the chapter on developing review questions and planning the evidence review. However, additional factors need to be considered in many of these situations and are described in this section. When reviews have used multiple options described in this section, or have combined one of these options with a systematic review of primary studies, the different approaches should be reported separately, following the appropriate reporting approach outlined in this chapter. A description of how these sources of evidence were combined or interpreted together by the committee should also be given.

Reporting reviews based on a published systematic review or qualitative evidence synthesis

In some cases, evidence reviews may be based on previously published systematic reviews or qualitative evidence syntheses done outside of NICE, rather than an original review. In such cases, where that review is publicly available, presentation of review content in NICE evidence review documents should be limited to those sections where additional material or analysis has been undertaken. If a published and free to access review has been used with no adaptation, it should be cited in the relevant sections and appendices of the NICE evidence review document and a hyperlink to the original review provided, with no reproduction of the review content. If the review used is not free to access, then the relevant content should be summarised within the guideline.

Examples of additions that may be made to published reviews include adding new data to an out-of-date review, including additional outcomes or subgroups, re-analysing data using different statistical strategies, re-evaluating GRADE quality assessments, and combining separate reviews in a network meta-analysis. If we have updated a review to include additional material or analysis, a link should be provided to the relevant original review with a full citation in line with the NICE style guide on referencing and citations. Only the relevant updated sections should be written up in the NICE evidence review document.

An evidence summary should still be provided in the evidence review. It should make clear which parts of the cited reviews were used as evidence within the guideline, and summarise any changes or additional analyses undertaken, if relevant. When considering the confidence we have in the findings of a published review, both the quality of the overall review (as assessed using the checklists recommended in the appendix on appraisal checklists, evidence tables, GRADE and economic profiles) and the quality of the studies within that review should be taken into account.

Reporting reviews based on a published individual participant data meta-analysis

Evidence reviews based on a published individual participant data (IPD) meta-analysis should follow the same principles as reviews based on other published systematic reviews. Reviewers can use the PRISMA-IPD checklist to assess the reporting standards of published IPD analyses, and Wang et al. (2021) includes a checklist that can be used for quality assessment of IPD meta-analyses.

In most cases it is not possible to update an IPD meta-analysis within a guideline, so an approach must be agreed for handling relevant studies not included in the analysis (for example, studies published after the searches in the published review). A number of possible approaches can be followed:

Only include the IPD meta-analysis in the review, and exclude any additional studies.

Include the IPD meta-analysis review, and additionally report aggregated results for the studies not included in the IPD analysis.

Include the IPD meta-analysis review, and additionally report aggregated results for all studies within the review, both those included within the IPD meta-analysis and those not included.

The approach taken should be described and justified within the review. It should take into account the number and proportion of studies not included in the IPD meta-analysis, whether those studies are systematically different to the studies included, and whether the studies not included would be likely to lead to different overall conclusions.

Reporting reviews based on multiple published systematic reviews or qualitative evidence syntheses

Sometimes an evidence review may report the results of multiple systematic reviews, either as a result of a review of reviews being done, or because multiple relevant reviews are otherwise identified. Each review should be reported following the advice in the section on reporting reviews based on a published systematic review or qualitative evidence synthesis.

Additionally, the evidence review should report on any overlaps between the included reviews (for example, where multiple included reviews cover the same intervention or include some of the same studies), or any important differences between the methodologies of the included reviews. How these overlaps or differences were dealt with when assessing evidence and making recommendations should be reported.

Reporting reviews based on formal consensus methods

When formal consensus methods, such as Delphi panels or nominal group technique, are used as a way of generating or interpreting evidence, at minimum the following information should be reported in the evidence review document:

How the participants involved in the formal consensus exercise were selected.

How the initial evidence or statements presented as part of the formal consensus exercise were derived.

The methodology used for the formal consensus exercises, including any thresholds used for retaining or discarding statements.

The results of each round or iteration of the formal consensus exercise.

How the results of the formal consensus exercise were then used to inform the recommendations made.

Reporting reviews or using recommendations from previously published guidance from other organisations

If systematic reviews or qualitative evidence syntheses done as part of a published non-NICE guideline are used as evidence within a NICE guideline, those reviews should be assessed following the advice in the section above on reporting reviews based on a published systematic review or qualitative evidence synthesis. No assessment of other aspects of the guideline is needed, because only the evidence from the reviews is being used, not any other part of the non-NICE guideline.

If parts of the non-NICE guideline other than evidence reviews are used (for example, if the recommendations made are themselves used as evidence, not just the underlying reviews) then the guideline should be assessed for quality using the AGREE II instrument. There is no cut-off point for accepting or rejecting a guideline, and each committee needs to set its own parameters. These should be documented in the methods of the guideline, and the full results of the assessment included in the evidence review document. In addition to the assessment of the quality of the guideline, the following should also be included in the review at a minimum:

A summary of the content from the non-NICE guideline used to inform the NICE guideline (for example, the recommendations considered).

A description of the justifications presented in the non-NICE guideline (for example, why those recommendations were made).

A description of how the NICE committee interpreted that content, including any concerns about quality and applicability, and how it informed their own discussions and recommendations.

A clear link between which parts of the non-NICE guideline informed the final recommendations in the NICE guideline.

Reporting reviews or using recommendations from previously published NICE guidelines

If systematic reviews or qualitative evidence syntheses done as part of published NICE guidelines are considered relevant and appropriate, they can be used as evidence within a different NICE guideline. These reviews can be included as part of the evidence when:

the review question in the guideline in development is sufficiently similar to the question addressed in the published guideline

the evidence is unlikely to have changed significantly since the publication of the related published NICE evidence review.

When evidence reviews from another guideline are used to develop new recommendations, the decision should be made clear in the methods section of the guideline in development, and the committee's independent interpretation and discussion of the evidence should be documented in the discussion section. The evidence reviews from the published guideline (including review protocol, search strategy, evidence tables and full evidence profiles [if available]) should be included in the guideline in development. They then become part of the evidence for the new guideline and are updated as needed in future updates of the guideline.

If parts of a published NICE guideline (or multiple guidelines) other than evidence reviews are used (for example, if recommendations made are themselves used as evidence, not just the underlying reviews) and new recommendations are formulated, the committee's discussion and decision should be documented clearly in the review. This should include areas of agreement and difference with the committee for the published guideline (for example, in terms of key considerations – balance of benefits and harms or costs, and interpretation of the evidence).

The following should be included in the review at a minimum:

A summary of the content from the published NICE guideline used to inform the guideline in development (for example, the recommendations considered).

A description of the justifications presented in the published NICE guideline (for example, why those recommendations were made).

A description of how the committee interpreted that content, including any concerns about applicability, and how it informed their own discussions and recommendations, including how the recommendations from the published NICE guideline were extrapolated to the guideline in development. It is not routinely necessary to do an assessment of the published NICE guideline using the AGREE II instrument. However, in certain circumstances such an assessment may be useful (for example, if it is an older NICE guideline that used different methods to those currently in use), and if an assessment is undertaken the results should be reported in the review.

A clear link between which parts of the published NICE guideline informed the final recommendations in the guideline in development and why new recommendations were needed (including why the original recommendations could not be adopted without change).

Reporting reviews using primary analysis of real-world data

Reviewers should follow the advice in the NICE real-world evidence framework when reporting primary analyses of real-world data done by NICE. At a minimum, the level of detail provided should match that which would be provided in a published research article. It should also be enough to enable an independent researcher with access to the data to reproduce the study, interpret the results, and to fully understand the strengths and limitations of the study.

More information on what is required and links to relevant reporting tools are provided in the NICE real-world evidence framework.

Reporting reviews using calls for evidence or expert witnesses

If evidence for a review has been obtained using either a call for evidence or an expert witness, follow the reporting advice in the appendix on calls for evidence and expert witnesses.

Reporting reviews using additional consultation or commissioned primary research

If evidence for a review has been obtained using either additional consultation or commissioned primary research, follow the reporting advice in the appendix on approaches to additional consultation and commissioned primary research.

AGREE Collaboration (2003) Development and validation of an international appraisal instrument for assessing the quality of clinical practice guidelines: the AGREE project. Quality and Safety in Health Care 12: 18–23

Booth A, Lewin S, Glenton C et al. (2018) Applying GRADE-CERQual to qualitative evidence synthesis findings, paper 7: understanding the potential impacts of dissemination bias. Implementation Science 13: 12

Brouwers M, Kho M, Browman G et al. for the AGREE Next Steps Consortium (2010) AGREE II: advancing guideline development, reporting and evaluation in healthcare. Canadian Medical Association Journal 182: E839–42

Caldwell D, Ades A, Dias S et al. (2016) A threshold analysis assessed the credibility of conclusions from network meta-analysis. Journal of Clinical Epidemiology 80: 68–76

Caldwell D, Welton N (2016) Approaches for synthesising complex mental health interventions in meta-analysis. Evidence-Based Mental Health 19: 16

Collins G, Reitsma J, Altman D et al. (2015) Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. Annals of Internal Medicine 162: 55–63

Colvin C, Garside R, Wainwright M et al. (2018) Applying GRADE-CERQual to qualitative evidence synthesis findings, paper 4: how to assess coherence. Implementation Science 13: 13

Glenton C, Carlsen B, Lewin S et al. (2018) Applying GRADE-CERQual to qualitative evidence synthesis findings, paper 5: how to assess adequacy of data. Implementation Science 13: 14

GRADE Working Group (2004) Grading quality of evidence and strength of recommendations. British Medical Journal 328: 1490–4

The GRADE series in the Journal of Clinical Epidemiology

Guyatt G, Oxman A, Schünemann H et al. (2011) GRADE guidelines: a new series of articles in the Journal of Clinical Epidemiology. Journal of Clinical Epidemiology 64: 380–2

Higgins J, Thomas J, Chandler J et al., editors (2021) Cochrane Handbook for Systematic Reviews of Interventions, version 6.2

Higgins J, López-López J, Becker B et al. (2019) Synthesising quantitative evidence in systematic reviews of complex health interventions. BMJ Global Health 4: e000858

Johnsen J, Biegel D, Shafran R (2000) Concept mapping in mental health: uses and adaptations. Evaluation and Programme Planning 23: 67–75

Lewin S, Bohren M, Rashidian A et al. (2018) Applying GRADE-CERQual to qualitative evidence synthesis findings, paper 2: how to make an overall CERQual assessment of confidence and create a Summary of Qualitative Findings table. Implementation Science 13: 10

Lewin S, Booth A, Glenton C et al. (2018) Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implementation Science 13: 2

Lizarondo L, Stern C, Carrier J et al. (2020) Chapter 8: mixed methods systematic reviews. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI

Moons K, Altman D, Reitsma J et al. (2015) Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): explanation and elaboration. Annals of Internal Medicine 162: W1–W73

Munthe-Kaas H, Bohren M, Glenton C et al. (2018) Applying GRADE-CERQual to qualitative evidence synthesis findings, paper 3: how to assess methodological limitations. Implementation Science 13: 9

NICE Decision Support Unit (2020) Sources and synthesis of evidence: update to evidence synthesis methods [online; accessed 31 March 2022]

NICE Decision Support Unit. Evidence synthesis TSD series [online; accessed 31 August 2018]

Noyes J, Booth A, Lewin S et al. (2018) Applying GRADE-CERQual to qualitative evidence synthesis findings, paper 6: how to assess relevance of the data. Implementation Science 13: 4

Phillippo D, Dias S, Ades A et al. (2017) Sensitivity of treatment recommendations to bias in network meta-analysis. Journal of the Royal Statistical Society: Series A

Phillippo D, Dias S, Welton N et al. (2019) Threshold analysis as an alternative to GRADE for assessing confidence in guideline recommendations based on network meta-analyses. Annals of Internal Medicine 170(8): 538–46

Puhan M, Schünemann H, Murad M et al. (2014) A GRADE working group approach for rating the quality of treatment effect estimates from network meta-analysis. British Medical Journal 349: g5630

Salanti G, Del Giovane C, Chaimani A et al. (2014) Evaluating the quality of evidence from a network meta-analysis. PLoS ONE 9(7): e99682

Thomas J, O'Mara-Eves A, Brunton G (2014) Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Systematic Reviews 3: 67

Thomas J, Petticrew M, Noyes J et al. (2022) Chapter 17: intervention complexity. In: Higgins JPT, Thomas J, Chandler J et al., editors. Cochrane Handbook for Systematic Reviews of Interventions, version 6.3 (updated February 2022) [online; accessed 31 March 2022]

Viswanathan M, McPheeters M, Murad M et al. (2017) AHRQ series on complex intervention systematic reviews, paper 4: selecting analytic approaches. Journal of Clinical Epidemiology 90: 28

Wang H, Chen Y, Lin T et al. (2021) The methodological quality of individual participant data meta-analysis on intervention effects: systematic review. BMJ 373: n736

Welton N, Caldwell D, Adamopoulos E et al. (2009) Mixed treatment comparison meta-analysis of complex interventions: psychological interventions in coronary heart disease. American Journal of Epidemiology 169: 1158

Whiting P, Rutjes A, Westwood M et al. and the QUADAS-2 group (2011) QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Annals of Internal Medicine 155: 529

IMAGES

  1. Guidelines for Qualitative Papers, how to write a qualitative research

    qualitative research nice guidelines

  2. Qualitative Research: Definition, Types, Methods and Examples

    qualitative research nice guidelines

  3. 6 Types of Qualitative Research Methods

    qualitative research nice guidelines

  4. Guidelines for Qualitative Papers, how to write a qualitative research

    qualitative research nice guidelines

  5. COREQ (Consolidated criteria for reporting qualitative research

    qualitative research nice guidelines

  6. Qualitative Research Methods

    qualitative research nice guidelines

VIDEO

  1. What is research

  2. What is Research??

  3. Quantitative Vs Qualitative Research| Part 2

  4. WHAT IS RESEARCH?

  5. Qualitative and Quantitative Research

  6. Research Methods

COMMENTS

  1. Appendix H Quality appraisal checklist

    Methods for the development of NICE public health guidance (third edition) NICE process and methods [PMG4] Published: 26 September 2012. 1 Introduction; 2 Topic selection and scoping the guidance ... Quality in qualitative research can be assessed using the same broad concepts of validity (or trustworthiness) used for quantitative research, but ...

  2. Appendix G Methodology checklist: qualitative studies

    Other checklists can found in the NICE clinical guidelines manual and Methods for the development of NICE public health guidance.. This checklist is based on checklists from: Spencer L. Ritchie J, Lewis J et al. (2003) Quality in qualitative evaluation: a framework for assessing research evidence .

  3. PDF National Institute for Health and

    vary between NICE guidance-producing centres and should be developed in the context of their process/methods manual(s). 1.6 The Medical Research Council (MRC) has funded a research project which ... qualitative research, for example, formative and summative evaluations, trials, longitudinal studies, secondary analysis, systematic reviews, and

  4. Qualitative Research Resources: Assessing Qualitative Research

    How to search for and evaluate qualitative research, integrate qualitative research into systematic reviews, report/publish qualitative research. Includes some Mixed Methods resources. ... From Methods for the Development of NICE Public Health Guidance, 3rd edition.

  5. Criteria for Good Qualitative Research: A Comprehensive Review

    Fundamental Criteria: General Research Quality. Various researchers have put forward criteria for evaluating qualitative research, which have been summarized in Table 3.Also, the criteria outlined in Table 4 effectively deliver the various approaches to evaluate and assess the quality of qualitative work. The entries in Table 4 are based on Tracy's "Eight big‐tent criteria for excellent ...

  6. An exploration of how developers use qualitative evidence: content

    Clinical practice guidelines have become increasingly widely used to guide quality improvement of clinical practice. Qualitative research may be a useful way to improve the quality and implementation of guidelines. The methodology for qualitative evidence used in guidelines development is worthy of further research. A comprehensive search was made of WHO, NICE, SIGN, NGC, RNAO, PubMed, Embase ...

  7. Qualitative evidence synthesis to improve implementation of clinical

    Enhanced clinical guidelines. Qualitative evidence synthesis has several potential benefits for clinical guidelines,4 but I will focus on patient preferences, in particular shared decision making or the principle of "nothing about me without me."11 This principle requires that clinical decisions be consistent with the elicited preferences and values of the patient.

  8. PDF Developing NICE guidelines: the manual

    NICE guidelines cover health and care in England. Decisions on how they apply in other UK countries are made by ministers in theWelsh Government,Scottish Government, andNorthern Ireland Executive. 1.2 Information about this manual This manual explains the processes and methods NICE uses for developing, maintaining and updating NICE guidelines.

  9. The conduct and reporting of qualitative evidence syntheses in health

    This paper is part of a broader investigation into the ways in which health and social care guideline producers are using qualitative evidence syntheses (QESs) alongside more established methods of guideline development such as systematic reviews and meta-analyses of quantitative data. This study is a content analysis of QESs produced over a 5-year period by a leading provider of guidelines ...

  10. PDF Criteria for Good Qualitative Research: A Comprehensive Review

    makes for good qualitative research'' (Tracy, 2010, p. 837) To decide what represents good qualitative research is highly debatable. There are numerous methods that are contained within qualitative research and that are estab-lished on diverse philosophical perspectives. Bryman et al., (2008, p. 262) suggest that ''It is widely assumed that

  11. Systematic review of the methodological literature for integrating

    The process manual used by NICE to produce clinical guidelines. The NICE manual includes details of synthesis for all the types of evidence it uses, not just qualitative evidence. Ring 2010: Guidance: Guidance from NHS Quality Improvement Scotland about the various methods of QES that could be used in HTA. Ring 2011: Research article

  12. Use of qualitative research as evidence in the clinical guideline

    Aim: To describe the use of qualitative research as evidence in a national clinical guideline program (National Institute for Health and Clinical Excellence - NICE, UK) and to identify training needs for guideline developers. Methods: All published NICE clinical guidelines from December 2002 to June 2007 were reviewed to determine whether qualitative studies were considered as evidence in the ...

  13. Qualitative evidence

    There are two main designs for synthesizing qualitative evidence with evidence of the effects of interventions: sequential reviews; and convergent mixed-methods review. The Cochrane Qualitative and Implementation Methods Group website provides links to practical guidance and key steps for authors who are considering a qualitative evidence ...

  14. Planning Qualitative Research: Design and Decision Making for New

    Qualitative research draws from interpretivist and constructivist paradigms, seeking to deeply understand a research subject rather than predict outcomes, as in the positivist paradigm (Denzin & Lincoln, 2011). Interpretivism seeks to build knowledge from understanding individuals' unique viewpoints and the meaning attached to those viewpoints (Creswell & Poth, 2018).

  15. COREQ (Consolidated Criteria for Reporting Qualitative Studies)

    It is the only reporting guidance for qualitative research to have received more than isolated endorsement, although it applies to only a few of the many qualitative methods in use. The COREQ checklist was developed to promote explicit and comprehensive reporting of interviews and focus groups. The COREQ checklist consists of 32 criteria, with ...

  16. 6 Reviewing the evidence

    Chapters include: 6 Reviewing the evidence; 7 Assessing cost effectiveness; 8 Linking clinical guidelines to other NICE guidance; 9 Developing and wording guideline recommendations; 10 Writing the clinical guideline and the role of the NICE editors; 11 The consultation process and dealing with stakeholder comments.

  17. 31 Writing Up Qualitative Research

    Abstract. This chapter provides guidelines for writing journal articles based on qualitative approaches. The guidelines are part of the tradition of the Chicago School of Sociology and the author's experience as a writer and reviewer. The guidelines include understanding experiences in context, immersion, interpretations grounded in accounts ...

  18. Standards for Reporting Qualitative Research

    ... flexibility to accommodate various paradigms, approaches, and methods. Method: The authors identified guidelines, reporting standards, and critical appraisal criteria for qualitative research by searching PubMed, Web of Science, and Google through July 2013; reviewing the reference lists of retrieved sources; and contacting experts. Specifically, two authors reviewed a sample of sources to generate ...

  19. Appendix H: Methodology checklist: qualitative studies

    Table 1 is an example table from the full guideline for Management of stable angina (NICE clinical guideline 126) for the review question 'What are the information needs of people with stable angina?'. In order to answer this question, three qualitative studies were included and assessed for quality. ... Qualitative research is not experimental ...

  20. Developing NICE guidelines: the manual

    The systematic identification of evidence is an essential step in developing NICE guideline recommendations. This chapter sets out how evidence is identified at each stage of the guideline development cycle. It provides details of the systematic literature searching methods used to identify the best available evidence for NICE guidelines.

  21. How to use and assess qualitative research methods

    Abstract. This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions ...

  22. The Oxford Handbook of Qualitative Research

    Abstract. The Oxford Handbook of Qualitative Research, second edition, presents a comprehensive retrospective and prospective review of the field of qualitative research. Original, accessible chapters written by interdisciplinary leaders in the field make this a critical reference work. Filled with robust examples from real-world research ...

  23. 6 Reviewing evidence

    6 Reviewing evidence. Reviewing evidence is an explicit, systematic and transparent process that can be applied to both quantitative (experimental and observational) and qualitative evidence (see the chapter on developing review questions and planning the evidence review).The key aim of any review is to provide a summary of the relevant evidence to ensure that the committee can make fully ...