Conducting a Literature Review

  • Literature Review
  • Developing a Topic
  • Planning Your Literature Review
  • Developing a Search Strategy
  • Managing Citations
  • Critical Appraisal Tools
  • Writing a Literature Review

Appraise Your Research Articles

The structure of a literature review should include the following:

  • An overview of the subject, issue, or theory under consideration, along with the objectives of the literature review;
  • Division of the works under review into themes or categories (e.g., works that support a particular position, those against, and those offering entirely alternative approaches);
  • An explanation of how each work is similar to, and how it differs from, the others;
  • Conclusions about which works make the strongest case, are most convincing in their arguments, and contribute most to the understanding and development of their area of research.

The critical evaluation of each work should consider:

  • Provenance -- What are the author's credentials? Are the author's arguments supported by evidence (e.g., primary historical material, case studies, narratives, statistics, recent scientific findings)?
  • Methodology -- Were the techniques used to identify, gather, and analyze the data appropriate to addressing the research problem? Was the sample size appropriate? Were the results effectively interpreted and reported?
  • Objectivity -- Is the author's perspective even-handed or prejudicial? Is contrary data considered, or is pertinent information ignored to prove the author's point?
  • Persuasiveness -- Which of the author's theses are most convincing, and which are least convincing?
  • Value -- Are the author's arguments and conclusions convincing? Does the work ultimately contribute in any significant way to an understanding of the subject?

Reviewing the Literature

While conducting a review of the literature, make the most of the time you devote to writing this part of your paper by thinking broadly about what you should be looking for and evaluating. Review not just what the articles are saying, but how they are saying it.

Some questions to ask:

  • How are they organizing their ideas?
  • What methods have they used to study the problem?
  • What theories have been used to explain, predict, or understand their research problem?
  • What sources have they cited to support their conclusions?
  • How have they used non-textual elements (e.g., charts, graphs, figures) to illustrate key points?

When you begin to write your literature review section, you'll be glad you dug deeper into how the research was designed and conducted, because that groundwork establishes a means for developing a more substantial analysis and interpretation of the research problem.

Tools for Critical Appraisal

Now that you have found articles based on your research question, you can appraise their quality. The following resources can help you appraise different study designs.

  • Centre for Evidence-Based Medicine (Oxford)
  • University of Glasgow

"AFP uses the Strength-of-Recommendation Taxonomy (SORT), to label key recommendations in clinical review articles."

  • SORT: Rating the Strength of Evidence -- American Family Physician and other family medicine journals use the Strength of Recommendation Taxonomy (SORT) system for rating bodies of evidence for key clinical recommendations.
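
As a rough illustration only, the three SORT ratings can be expressed as a simple lookup table. The level descriptions below are paraphrased from the SORT taxonomy rather than quoted from AFP, so verify them against the source above before relying on them:

```python
# A minimal sketch, assuming paraphrased SORT definitions (not AFP's wording).
SORT_RATINGS = {
    "A": "Consistent, good-quality patient-oriented evidence",
    "B": "Inconsistent or limited-quality patient-oriented evidence",
    "C": "Consensus, usual practice, expert opinion, or disease-oriented evidence",
}

def describe_recommendation(level: str) -> str:
    """Return the (paraphrased) strength description for a SORT level."""
    return SORT_RATINGS.get(level.upper(), "Unknown SORT level")

print(describe_recommendation("A"))  # Consistent, good-quality patient-oriented evidence
```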

Research article | Open access | Published: 04 June 2019

Systematic mapping of existing tools to appraise methodological strengths and limitations of qualitative research: first stage in the development of the CAMELOT tool

Heather Menzies Munthe-Kaas, Claire Glenton, Andrew Booth, Jane Noyes & Simon Lewin

BMC Medical Research Methodology, volume 19, Article number: 113 (2019)

Abstract

Background

Qualitative evidence synthesis is increasingly used alongside reviews of effectiveness to inform guidelines and other decisions. To support this use, the GRADE-CERQual approach was developed to assess and communicate the confidence we have in findings from reviews of qualitative research. One component of this approach requires an appraisal of the methodological limitations of studies contributing data to a review finding. Diverse critical appraisal tools for qualitative research are currently being used. However, it is unclear which tool is most appropriate for informing a GRADE-CERQual assessment of confidence.

Methodology

We searched for tools that were explicitly intended for critically appraising the methodological quality of qualitative research. We searched the reference lists of existing methodological reviews for critical appraisal tools, and also conducted a systematic search in June 2016 for tools published in health science and social science databases. Two reviewers screened identified titles and abstracts, and then screened the full text of potentially relevant articles. One reviewer extracted data from each article and a second reviewer checked the extraction. We used a best-fit framework synthesis approach to code checklist criteria from each identified tool and to organise these into themes.

Results

We identified 102 critical appraisal tools: 71 tools had previously been included in methodological reviews, and 31 tools were identified from our systematic search. Almost half of the tools were published after 2010. Few authors described how their tool was developed, or why a new tool was needed. After coding all criteria, we developed a framework that included 22 themes. None of the tools included all 22 themes. Some themes were included in up to 95 of the tools.

Conclusions

It is problematic that researchers continue to develop new tools without adequately examining the many tools that already exist. Furthermore, the plethora of tools, old and new, indicates a lack of consensus regarding the best tool to use, and an absence of empirical evidence about the most important criteria for assessing the methodological limitations of qualitative research, including in the context of use with GRADE-CERQual.

Background

Qualitative evidence syntheses (also called systematic reviews of qualitative evidence) are becoming increasingly common and are used for diverse purposes [1]. One such purpose is their use, alongside reviews of effectiveness, to inform guidelines and other decisions, with the first Cochrane qualitative evidence synthesis published in 2013 [2]. However, there are challenges in using qualitative synthesis findings to inform decision making because methods to assess how much confidence to place in these findings are poorly developed [3]. The ‘Confidence in the Evidence from Reviews of Qualitative research’ (GRADE-CERQual) approach aims to transparently and systematically assess how much confidence to place in individual findings from qualitative evidence syntheses [3]. Confidence here is defined as “an assessment of the extent to which the review finding is a reasonable representation of the phenomenon of interest” ([3] p.5). GRADE-CERQual draws on the conceptual approach used by the GRADE tool for assessing certainty in evidence from systematic reviews of effectiveness [4]. However, GRADE-CERQual is designed specifically for findings from qualitative evidence syntheses and is informed by the principles and methods of qualitative research [3, 5].

The GRADE-CERQual approach bases its assessment of confidence on four components: the methodological limitations of the individual studies contributing to a review finding; the adequacy of data supporting a review finding; the coherence of each review finding; and the relevance of a review finding [5]. In order to assess the methodological limitations of the studies contributing data to a review finding, a critical appraisal tool is necessary. Critical appraisal tools “provide analytical evaluations of the quality of the study, in particular the methods applied to minimise biases in a research project” [6]. Debate continues over whether or not one should critically appraise qualitative research [7, 8, 9, 10, 11, 12, 13, 14, 15]. Arguments against using criteria to appraise qualitative research have centred on the idea that “research paradigms in the qualitative tradition are philosophically based on relativism, which is fundamentally at odds with the purpose of criteria to help establish ‘truth’” [16]. The starting point in this paper, however, is that it is both possible and desirable to establish a set of criteria for critically appraising the methodological strengths and limitations of qualitative research. End users of findings from primary qualitative research and from syntheses of qualitative research often make judgements regarding the quality of the research they are reading, and this is often done in an ad hoc manner [3]. Within a decision making context, such as formulating clinical guideline recommendations, the implicit nature of such judgements limits the ability of other users to understand or critique them. A set of criteria to appraise methodological limitations allows such judgements to be conducted, and presented, in a more systematic and transparent manner. We understand and accept that these judgements are likely to differ between end users; explicit criteria help to make these differences more transparent.

The terms “qualitative research” and “qualitative evidence synthesis” refer to an ever-growing multitude of research and synthesis methods [17, 18, 19, 20]. Thus far, the GRADE-CERQual approach has mostly been applied to syntheses producing a primarily descriptive rather than theoretical type of finding [5]. Consequently, it is primarily this descriptive standpoint from which the analysis presented in the current paper is conducted. The authors acknowledge, however, the potential need for different criteria when appraising the methodological strengths and limitations of different types of primary qualitative research. While accepting that there is probably no universal set of critical appraisal criteria for qualitative research, we maintain that some general principles of good practice by which qualitative research should be conducted do exist. We hope that our work in this area, and the work of others, will help us to develop a better understanding of this important area.

In health science environments, there is now widespread acceptance of the use of tools to critically appraise individual studies, and as Hannes and Macaitis have observed, “it becomes more important to shift the academic debate from whether or not to make an appraisal to what criteria to use” [21]. This shift is paramount because a plethora of critical appraisal tools and checklists [22, 23, 24] exists and yet there is little, if any, agreement on the best approach for assessing the methodological limitations of qualitative studies [25]. To the best of our knowledge, few tools have been designed for appraising qualitative studies in the context of qualitative synthesis [26, 27]. Furthermore, there is a paucity of tools designed to critically appraise qualitative research to inform a practical decision or recommendation, as opposed to critical appraisal as an academic exercise by researchers or students.

In the absence of consensus, the Cochrane Qualitative & Implementation Methods Group (QIMG) provides a set of criteria that can be used to select an appraisal tool, noting that review authors can potentially apply critical appraisal tools specific to the methods used in the studies being assessed, and that the chosen critical appraisal tool should focus on methodological strengths and limitations (and not reporting standards) [11]. A recent review of qualitative evidence syntheses found that the majority of identified syntheses (92%; 133/145) reported appraising the quality of included studies. However, a wide range of tools were used (30 different tools), and some reviews reported using multiple critical appraisal tools [28]. So far, authors of Cochrane qualitative evidence syntheses have adopted different approaches, including adapting existing appraisal tools and using tools that are familiar to the review team.

This lack of a uniform approach mirrors the situation for systematic reviews of effectiveness over a decade ago, when over 30 checklists were being used to assess the quality of randomised trials [29]. To address this lack of consistency and to reach consensus, a working group of methodologists, editors and review authors developed the risk of bias tool that is now used for Cochrane intervention reviews and is a key component of the GRADE approach [4, 30, 31]. The Cochrane risk of bias tool encourages review authors to be transparent and systematic in how they appraise the methodological limitations of primary studies. Assessments using this tool are based on objective goals and on a judgement of whether failure to meet these goals raises any concerns for the particular research question or review finding. Similar efforts are needed to develop a critical appraisal tool to assess methodological limitations of primary qualitative studies in the context of qualitative evidence syntheses (Fig. 1).

Fig. 1 PRISMA flow chart showing the results of the systematic mapping review described in this article.

Previous reviews

While at least five methodological reviews of critical appraisal tools for qualitative research have been published since 2003, we assessed that these did not adequately address the aims of this project [22, 23, 24, 32, 33]. Most of the existing reviews focused only on critical appraisal tools in the health sciences [22, 23, 24, 32]. One review focused on reporting standards for qualitative research [23], one review did not use a systematic approach to searching the literature [24], one review included critical appraisal tools for any study design (quantitative or qualitative) [32], and one review only included tools defined as “‘high-utility tools’ […] that are some combination of available, familiar, authoritative and easy to use tools that produce valuable results and offer guidance for their use” [33]. In the one review that most closely resembles the aims of the current review, the search was conducted in 2010, tools used in the social sciences were not included, and the review was not conducted from the perspective of the GRADE-CERQual approach (see discussion below) [22].

Current review

We conducted this review of critical appraisal tools for qualitative research within the context of the GRADE-CERQual approach. This reflects our specific interest in identifying (or developing, if need be) a critical appraisal tool to assess the methodological strengths and limitations of a body of evidence that contributes to a review finding and, ultimately, to contribute to an assessment of how much confidence we have in review findings based on these primary studies [3]. Our focus is thus not on assessing the overall quality of an individual study, but rather on assessing how any identified methodological limitations of a study could influence our confidence in an individual review finding. This particular perspective may not have exerted a large influence on the conduct of our current mapping review. However, it will likely influence how we interpret our results, reflecting our thinking on methodological limitations both at the individual study level and at the level of a review finding. Our team is also guided by how potential concepts found in existing checklists may overlap with the other components of the GRADE-CERQual approach, namely relevance, adequacy and coherence (see Table 1 for definitions).

The aim of this review was to systematically map existing critical appraisal tools for primary qualitative studies, and identify common criteria across these tools.

Eligibility criteria

For the purposes of this review, we defined a critical appraisal tool as a tool, checklist or set of criteria that provides guidance on how to appraise the methodological strengths and limitations of qualitative research. This could include, for instance, instructions for authors of scientific journals; articles aimed at improving qualitative research and targeting authors and peer reviewers; and chapters from qualitative methodology manuals that discuss critical appraisal.

We included critical appraisal tools if they were explicitly intended to be applicable to qualitative research. We included tools designed for mixed methods research if it was clearly stated that their approach included qualitative methods. We included tools with clear criteria or questions intended to guide the user through an assessment of the study. However, we did not include publications in which the author discussed issues related to the methodological rigour of qualitative research but did not provide a list or set of questions or criteria to support the end user in assessing the methodological strengths and limitations of qualitative research. These assessments were sometimes challenging, and we have sought to make our judgements as transparent as possible. We did not exclude tools based on how their final critical appraisal assessments were determined (e.g., whether the tool used numeric quality scores, a summary of elements, or weighting of criteria).

We included published or unpublished papers that were available in full text, and that were written in any language, but with an English abstract.
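
As a minimal sketch (not the authors' actual screening instrument), the eligibility rules stated above can be expressed as boolean checks. The class and field names here are hypothetical illustrations of the criteria:

```python
# A minimal sketch, assuming hypothetical field names for the stated criteria.
from dataclasses import dataclass

@dataclass
class CandidateTool:
    explicitly_qualitative: bool     # explicitly intended for qualitative research
    mixed_methods_incl_qual: bool    # mixed-methods tool clearly covering qualitative methods
    has_explicit_criteria: bool      # provides criteria/questions, not just a discussion of rigour
    full_text_available: bool
    english_abstract: bool

def is_eligible(tool: CandidateTool) -> bool:
    """Apply the review's stated inclusion criteria to one candidate tool."""
    applicable = tool.explicitly_qualitative or tool.mixed_methods_incl_qual
    return (applicable
            and tool.has_explicit_criteria
            and tool.full_text_available
            and tool.english_abstract)
```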

Search strategy

We began by conducting a broad scoping search of existing reviews of critical appraisal tools for qualitative research in Google Scholar using the terms “critical appraisal OR quality AND qualitative”. We identified four reviews, the most recent of which focussed on checklists used within health sciences and was published in 2016 (search conducted in 2010) [34]. We included critical appraisal tools identified by these four previous reviews if they met the inclusion criteria described above [22, 23, 24, 32]. We proceeded to search systematically in health and medical databases for checklists published after 2010 (so as not to duplicate the most recent review described above). Since we were not aware of any review which searched specifically for checklists used in the social sciences, we extended our search in social sciences databases backwards to 2006. We chose this date as our initial reading had suggested that development of critical appraisal within the social science field was insufficiently mature before 2006, and considered that any exceptions would be identified through searching reference lists of identified studies. We also searched references of identified relevant papers and contacted methodological experts to identify any unpublished tools.

In June 2016, we conducted a systematic literature search of Pubmed/MEDLINE, PsycInfo, CINAHL, ERIC, ScienceDirect, Social services abstracts and Web of Science databases using variations of the following search strategy: (“Qualitative research” OR “qualitative health research” OR “qualitative study” OR “qualitative studies” OR “qualitative paper” OR “qualitative papers”) AND (“Quality Assessment” OR “critical appraisal” or “internal validity” or “external validity” OR rigor or rigour) AND (Checklist or checklists or guidelines or criteria or standards) (see Additional file 1 for the complete search strategy). A Google Scholar alert for frequently cited articles and checklists was created to identify any tools published since June 2016.
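
As a hedged sketch (not the authors' actual workflow, which covered seven databases), the Boolean string above could be run against PubMed/MEDLINE through the NCBI E-utilities esearch endpoint; the retmax value is an arbitrary choice for the example:

```python
# Sketch: running the published Boolean string against PubMed via NCBI E-utilities.
from urllib.parse import urlencode
from urllib.request import urlopen

QUERY = (
    '("Qualitative research" OR "qualitative health research" OR "qualitative study" '
    'OR "qualitative studies" OR "qualitative paper" OR "qualitative papers") '
    'AND ("Quality Assessment" OR "critical appraisal" OR "internal validity" '
    'OR "external validity" OR rigor OR rigour) '
    'AND (checklist OR checklists OR guidelines OR criteria OR standards)'
)

params = urlencode({"db": "pubmed", "term": QUERY, "retmode": "json", "retmax": 100})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urlopen(url) as resp:
    print(resp.read().decode()[:400])  # JSON: total hit count plus a list of PMIDs
```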

Study selection

Using the Covidence web-based tool [35], two authors independently assessed titles and abstracts and then assessed the full text versions of potentially relevant checklists using the inclusion criteria described above. A third author mediated in cases of disagreement.
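
The dual-screening workflow described above can be sketched as a small decision function. This is purely an illustration of the logic, not Covidence's API:

```python
# A minimal sketch of dual screening with third-party mediation.
from typing import Optional

def screening_decision(reviewer_a: bool, reviewer_b: bool,
                       mediator: Optional[bool] = None) -> bool:
    """Final include/exclude decision for one record under dual screening."""
    if reviewer_a == reviewer_b:
        return reviewer_a              # the two reviewers agree
    if mediator is None:
        raise ValueError("reviewers disagree: a third author must mediate")
    return mediator                    # the third author's decision is final
```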

Data extraction

We extracted data from every included checklist related to study characteristics (title, author details, year, type of publication) and checklist characteristics (intended end user (e.g. practitioner, guideline panel, review author, primary researcher, peer reviewer), discipline (e.g. health sciences, social sciences), and details regarding how the checklist was developed or how specific checklist criteria were justified). We also extracted the checklist criteria intended to be assessed within each identified checklist, along with any prompts, supporting questions, etc. Each checklist item/question (and supporting question/prompt) was treated as a separate data item. The data extraction form is available in Additional file 2.
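
The extraction record implied by the fields listed above can be sketched as a data structure. The class and field names are illustrative, not the authors' actual extraction form:

```python
# A sketch of one extraction record, assuming hypothetical field names.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChecklistExtraction:
    # study characteristics
    title: str
    authors: str
    year: int
    publication_type: str                # e.g. journal article, book chapter
    # checklist characteristics
    intended_end_user: Optional[str]     # e.g. practitioner, review author; often unclear
    discipline: Optional[str]            # e.g. health sciences, social sciences
    development_details: Optional[str]   # how the checklist was developed or justified
    # each checklist item/question (with prompts) is a separate data item
    criteria: List[str] = field(default_factory=list)
```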

Synthesis methods

We analysed the criteria included in the identified checklists using the best fit framework analysis approach [36]. We developed a framework using the ten items from the Critical Appraisal Skills Programme (CASP) Qualitative Research Checklist. We used this checklist because it is frequently used in qualitative evidence syntheses [28]. We then extracted the criteria from the identified checklists and charted each checklist question or criterion into one of the themes in the framework. We expanded the initial framework to accommodate any coded criteria that did not fit into an existing framework theme. Finally, we tabulated the frequency of each theme across the identified checklists (the number of checklists for which a theme was mentioned as a checklist criterion). The themes, which are derived from the expanded CASP framework, could be viewed as a set of overarching criterion statements based on synthesis of the multiple criteria found in the included tools. However, for simplicity we use the term ‘theme’ to describe each of these analytic groups.
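
A minimal sketch of the tabulation described above, with invented data: each checklist's criteria are coded to framework themes, and a theme's frequency is the number of checklists with at least one criterion coded to it:

```python
# Sketch of theme-frequency tabulation; checklist ids and codings are invented.
from collections import Counter

# checklist id -> set of framework themes its criteria were coded into
coded = {
    "checklist-001": {"Was the data analysis sufficiently rigorous?",
                      "Is there a clear statement of findings?"},
    "checklist-002": {"Was the data analysis sufficiently rigorous?",
                      "Have ethical issues been taken into consideration?"},
    "checklist-003": {"Is there a clear statement of findings?"},
}

# using sets collapses duplicate codings within one checklist, so the count
# below is "number of checklists mentioning the theme"
frequency = Counter(theme for themes in coded.values() for theme in themes)
for theme, n in frequency.most_common():
    print(f"{n}/{len(coded)} checklists: {theme}")
```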

In this paper, we use the terms “checklist” and “critical appraisal tool” interchangeably. The term “guidance”, however, is defined differently within the context of this review and is discussed in the discussion section below. The term “checklist criteria” refers to criteria that authors have included in their critical appraisal tools. The term “theme” refers to the 22 framework themes that we have developed in this synthesis and into which the criteria from the individual checklists were sorted. The term “cod(e)/ing” refers to the process of sorting the checklist criteria within the framework themes.

Results

Our systematic search resulted in 7199 unique references. We read the full papers for 310 of these and included 31 checklists that met the inclusion criteria. We also included 71 checklists from previous reviews that met our inclusion criteria. A total of 102 checklists were described in 100 documents [22, 23, 24, 26, 37–132] (see Fig. 1). A list of the checklists is included in Additional file 3. One publication described three checklists (Silverman 2008; [119]).

Characteristics of the included checklists

The incidence of new critical appraisal tools appears to be increasing (see Fig. 2). Approximately 80% of the identified tools have been published since 2000.

Fig. 2 Identified critical appraisal tools, sorted by publication year.

Critical appraisal tool development

Approximately half of the articles describing critical appraisal tools did not report how the tools were developed, or this was unclear (N = 53). Approximately one third of the tools were based on a review and synthesis of existing checklists (N = 33) or were adapted directly from one or more existing checklists (N = 10). The other checklists were developed using a Delphi survey method or consultation with methodologists or practitioners (N = 4), a review of criteria used by journal peer reviewers (N = 1), or a theoretical approach (N = 1).

Health or social welfare field

We attempted to sort the checklists according to the source discipline (field) in which they were developed (e.g. health services or social welfare services). In some cases this was apparent from the accompanying article or from the checklist criteria, but in many cases we based our assessment on the authors’ affiliations and the journal in which the checklist was published. The majority of checklists were developed by researchers in the field of health care (N = 60). The remaining checklists appear to have been developed within health and/or social care (N = 2), education (N = 2), social care (N = 4), or other fields (N = 8). Many publications either did not specify any field, or it was unclear within which field the checklist was developed (N = 26).

Intended end user

It was unclear who the intended end user was (e.g., policy maker, clinician/practitioner, primary researcher, systematic review author, or peer reviewer) for many of the checklists (N = 34). Of the checklists where the intended end user was implied or discussed, ten were intended for primary authors and peer reviewers, and ten were intended for peer reviewers alone. Seventeen checklists were intended to support practitioners in reading and assessing the quality of qualitative research, and seventeen were intended for use by primary researchers to improve their qualitative research. Ten checklists were intended for use by systematic review authors, two for use by both primary research authors and systematic review authors, and two were intended for students appraising qualitative research.
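
As a quick arithmetic check, each of the three categorical breakdowns reported above (development method, source field, intended end user) sums to the 102 included checklists. The category labels below are shorthand for the descriptions in the text:

```python
# Arithmetic check: each tally should account for all 102 included checklists.
breakdowns = {
    "development method": {"unreported/unclear": 53, "synthesis of existing checklists": 33,
                           "adapted from existing checklists": 10, "Delphi/consultation": 4,
                           "peer-review criteria": 1, "theoretical approach": 1},
    "source field": {"health care": 60, "health and/or social care": 2, "education": 2,
                     "social care": 4, "other": 8, "unspecified/unclear": 26},
    "intended end user": {"unclear": 34, "primary authors + peer reviewers": 10,
                          "peer reviewers only": 10, "practitioners": 17,
                          "primary researchers": 17, "review authors": 10,
                          "primary + review authors": 2, "students": 2},
}

for name, tally in breakdowns.items():
    total = sum(tally.values())
    assert total == 102, f"{name} sums to {total}, not 102"
    print(f"{name}: all 102 checklists accounted for")
```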

Checklist versus guidance

The critical appraisal tools that we identified appeared to vary greatly in how explicit the included criteria were and the extent of accompanying guidance and supporting questions for the end user. Below we discuss the differences between checklists and guidance with examples from the identified tools.

Using the typology described by Hammersley (2007), the term “checklist” is used to describe a tool where the user is provided with observable indicators to establish (along with other criteria) whether or not the findings of a study are valid, or are of value. Such tools tend to be quite explicit and comprehensive; furthermore, the checklist criteria are usually related to research conduct and may be intended for people unfamiliar with critically appraising qualitative research [8]. The tool described in Sandelowski (2007) is an example of such a checklist [115].

Other tools may be intended to be used as guidance, with a list of considerations or reminders that are open to revision when being applied [8]. Such tools are less explicit. The tool described by Carter (2007) is such an example, where the focus on a fundamental appraisal of methods and methodology seems directed at experienced researchers [48].

Results of the framework synthesis

Through our framework synthesis we have categorised the criteria included in the 102 identified critical appraisal tools into 22 themes. The themes represent a best effort at translating many criteria, worded in different ways, into themes. Given the diversity in how critical appraisal tools are organized (e.g. broad versus narrow questions), not all of the themes are mutually exclusive (e.g. some criteria are included in more than one theme if they address two different themes), and some themes are broad and include a wide range of criteria from the included critical appraisal tools (e.g. “Was the data collected in a way that addressed the research issue?” represents any criterion from an included critical appraisal tool that discussed data collection methods). In Table 2, we present the number of criteria from critical appraisal tools that relate to each theme. None of the included tools contributed criteria to all 22 themes.

Framework themes: design and/or conduct of qualitative research

The majority of the framework themes relate to the design and conduct of a qualitative research study. However, some themes overlap with, or relate to, what are conventionally considered to be reporting standards. The first reporting standards for primary qualitative research were not published until 2007, and many of the appraisal tools predate this and include a mix of methodological quality criteria and reporting standards [23]. The current project did not aim to distinguish or discuss which criteria relate to critical appraisal versus reporting standards. However, we discuss the ramifications of this blurry distinction below.

Breadth of framework themes

Some themes represent a wide range of critical appraisal criteria. For example, the theme “Was the data analysis sufficiently rigorous?” includes checklist criteria related to several different aspects of data analysis: (a) whether the researchers provide in-depth description of the analysis process, (b) whether the researchers discuss how data were selected for presentation, (c) if data were presented to support the finding, and (d) whether or not disconfirming cases are discussed. On the other hand, some of the themes cover a narrower breadth of criteria. For example, the theme “Have ethical issues been taken into consideration?” only includes checklist criteria related to whether the researchers have sought ethical approval, informed participants about their rights, or considered the needs of vulnerable participants. The themes differ in terms of breadth mainly because of how the original coding framework was structured. Some of the themes from the original framework were very specific and could be addressed by seeking one or two pieces of information from a qualitative study (e.g., Is this a qualitative study?). Other themes from the original framework were broad and a reader would need to seek multiple pieces of information in order to make a clear assessment (e.g., Was the data collected in a way that addressed the research issue?).

Scope of existing critical appraisal tools

We coded many of the checklist criteria as relevant to multiple themes. For example, one checklist criterion was: “Criticality - Does the research process demonstrate evidence of critical appraisal?” [128]. We interpreted and coded this criterion as relevant to two themes: “Was the data analysis sufficiently rigorous?” and “Is there a clear statement of findings?”. On the other hand, several checklists also contained multiple criteria related to one theme. For instance, one checklist (Waterman 2010; [127]) included two separate questions related to the theme “Was the data collected in a way that addressed the research issue?” (Question 5: Was consideration given to the local context while implementing change? Is it clear which context was selected, and why, for each phase of the project? Was the context appropriate for this type of study? And Question 11: Were data collected in a way that addressed the research issue? Is it clear how data were collected, and why, for each phase of the project? Were data collection and record-keeping systematic? If methods were modified during data collection, is an explanation provided?) [127]. A further example relates to reflexivity. The majority of critical appraisal tools include at least one criterion or question related to reflexivity (N = 71). Reflexivity was discussed with respect to the researcher’s relationship with participants, their potential influence on data collection methods and the setting, as well as the influence of their epistemological or theoretical perspective on data analysis. We grouped all criteria that discussed reflexivity into one theme.
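
The coding described above is many-to-many: one criterion may feed several themes, and one theme may collect several criteria from the same checklist. An illustrative sketch, using the examples quoted in the text:

```python
# Sketch of the many-to-many criterion/theme coding; data are the quoted examples.
from collections import defaultdict

# one criterion coded to two themes [128]
criterion_to_themes = {
    "Criticality: does the research process demonstrate evidence of critical appraisal?":
        {"Was the data analysis sufficiently rigorous?",
         "Is there a clear statement of findings?"},
}

# one theme collecting two questions from the same checklist (Waterman 2010 [127])
theme_to_criteria = {
    "Was the data collected in a way that addressed the research issue?":
        ["Q5: Was consideration given to the local context while implementing change?",
         "Q11: Were data collected in a way that addressed the research issue?"],
}

# inverting the first mapping recovers per-theme criterion counts
theme_counts = defaultdict(int)
for themes in criterion_to_themes.values():
    for theme in themes:
        theme_counts[theme] += 1
print(dict(theme_counts))
```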

Discussion

The growing number of critical appraisal tools for qualitative research reflects increasing recognition of qualitative research methods and of their value in informing decision making. More checklists have been published in the last six years than in the preceding decade. However, upon closer inspection, many recent checklists are published adaptations of existing checklists, possibly tailored to a specific research question, but without any clear indication of how they improve upon the original. Below we discuss the framework themes developed from this synthesis, specifically which themes are most appropriate for critically appraising qualitative research and why, especially within the context of conducting a qualitative evidence synthesis. We also discuss differences between checklists and guidance for critical appraisal, and the unclear boundaries between critical appraisal criteria and reporting standards.

Are these the best criteria to be assessing?

The framework themes we present in this paper vary greatly in terms of how well they are covered by existing tools. However, a theme’s frequency is not necessarily indicative of the perceived or real importance of the group of criteria it encapsulates. Some themes appear more frequently than others in existing checklists simply because of the number of checklists that adapt or synthesise one or more existing tools. Some themes, such as “Was there disclosure of funding sources?” and “Were end users involved in the development of the research study?”, were only present in a small number of tools. These themes may be as important as more commonly covered themes when assessing the methodological strengths and limitations of qualitative research. It is unclear whether some of the identified themes were included in many different tools because they actually represent important issues to consider when assessing whether elements of qualitative research design or conduct could weaken our trust in the study findings, or whether the frequency of a theme simply reflects a shared familiarity with concepts and assumptions about what constitutes or leads to rigour in qualitative research.

Only four of the identified critical appraisal tools were developed with input from stakeholders using consensus methods, although it is unclear how consensus was reached or what it was based on. In more than half of the studies there was no discussion of how the tool was developed. None of the identified critical appraisal tools appear to be based on empirical evidence or explicit hypotheses regarding the relationships between components of qualitative study design and conduct and the trustworthiness of the study findings. This is in direct contrast to Whiting and colleagues’ (2017) discussion of how to develop quality assessment tools: “[r]obust tools are usually developed based on empirical evidence refined by expert consensus” [133]. A concerted and collaborative effort is needed in the field to begin thinking about why some criteria are included in critical appraisal tools, what is currently known about how the absence of these criteria can weaken the rigour of qualitative research, and whether there are specific approaches that strengthen data collection and analysis processes.

Methodological limitations: assessing individual studies versus individual findings

Thus far, critical appraisal tools have focused on assessing the methodological strengths and limitations of individual studies, and the reviews of critical appraisal tools that we identified took the same approach. This mapping review is the first phase of a larger research project to consider how best to assess methodological limitations in the context of qualitative evidence syntheses. In this context, review authors need to assess the methodological “quality” of all studies contributing to a review finding, and also whether specific limitations are of concern for a particular finding, as “individual features of study design may have implications for some of those review findings, but not necessarily other review findings” [134]. The ultimate aim of this research project is to identify, or develop if necessary, a critical appraisal tool to systematically and transparently support the assessment of the methodological limitations component of the GRADE-CERQual approach (see Fig. 3), which focuses on how much confidence can be placed in individual qualitative evidence synthesis findings.

Fig. 3 Process of identifying/developing a tool to support assessment of the GRADE-CERQual methodological limitations component (Cochrane qualitative Methodological Limitations Tool; CAMELOT). The research described in this article addresses phase 1 of this project.

Critical appraisal versus reporting standards

While differences exist between criteria for assessing methodological strengths and limitations and criteria for assessing the reporting of research, the difference between these two aims, and the tools used to assess them, is not always clear. As Moher and colleagues (2014) point out, “[t]his distinction is, however, less straightforward for systematic reviews than for assessments of the reporting of an individual study, because the reporting and conduct of systematic reviews are, by nature, closely intertwined” [135]. Review authors are sometimes unable to differentiate poor reporting from poor design or conduct of a study. Although current guidance recommends a focus on criteria related to assessing methodological strengths and limitations when choosing a critical appraisal tool (see discussion in the introduction), deciding what is a methodological issue versus a reporting issue is not always straightforward: “without a clear understanding of how a study was done, readers are unable to judge whether the findings are reliable” [135]. The themes identified in the current framework synthesis illustrate this point: while many themes clearly relate to the design and conduct of qualitative research, some themes could also be interpreted as relating to reporting standards (e.g., Was there disclosure of funding sources? Is there an audit trail?). At least one theme, ‘Reporting standards (including demographic characteristics of the study)’, would not be considered key to an assessment of the methodological strengths and limitations of qualitative research.

Finally, the unclear distinction between critical appraisal and reporting standards can be demonstrated by the description of one of the tools included in this synthesis [96]. This tool is called the Standards for Reporting Qualitative Research (SRQR); however, the tool is based on a review of critical appraisal criteria from previously published instruments, and its authors conclude that the proposed standards will provide “clear standards for reporting qualitative research” and assist “readers when critically appraising […] study findings” ([96] p.1245).

Reporting standards are being developed separately, and discussion of these is beyond the remit of this paper [136]. However, when developing critical appraisal tools, one must be aware that some criteria or questions may also relate to reporting, and ensure that such criteria are not used to assess both the methodological strengths and limitations and the reporting quality of a publication.

Intended audience

This review included any critical appraisal tool intended for application to qualitative research, regardless of the intended end user. The type of end user targeted by a critical appraisal tool could have implications for the tool’s content and form. For instance, tools designed for practitioners who are applying the findings from an individual study to their specific setting may focus on different criteria than tools designed for primary researchers undertaking qualitative research. However, since many of the included critical appraisal tools did not identify the intended end user, it is difficult to establish any clear patterns between the content of the critical appraisal tools and the audience for which the tool was intended. It is also unclear whether or not separate critical appraisal tools are needed for different audiences, or whether one flexible appraisal tool would suffice. Further research and user testing is needed with existing critical appraisal tools, including those under development.

Tools or guidance intended to support primary researchers undertaking qualitative research in establishing rigour were not included in this mapping and analysis. This is because guidance for primary research authors on how to design and conduct high quality qualitative research focuses on how to apply methods in the best and most appropriate manner. Critical appraisal tools, however, are instruments used to fairly and rapidly assess the methodological strengths and limitations of a study post hoc. For these reasons, those critical appraisal tools we identified and included that appear to target primary researchers as end users may be less relevant than other identified tools for the aims of this project.

Lessons from the development of quantitative research tools on risk of bias

While the fundamental purposes and principles of qualitative and quantitative research may differ, many principles from the development of the Cochrane Risk of Bias tool transfer to developing a tool for the critical appraisal of qualitative research. These principles include avoiding quality scales (e.g. summary scores), focusing on internal validity, considering limitations as they relate to individual results (findings), the need to use judgement in making assessments, choosing domains that combine theoretical and empirical considerations, and a focus on limitations as represented in the research (as opposed to quality of reporting) [31]. Further development of a tool in the context of qualitative evidence synthesis and GRADE-CERQual needs to take these principles into account, and lessons learned during this process may be valuable for the development of future critical appraisal or Risk of Bias tools.

Further research

As discussed earlier, CERQual is intended to be applied to individual findings from qualitative evidence syntheses with a view to informing decision making, including in the context of guidelines and health systems guidance [137]. Our framework synthesis has uncovered three important issues to consider when critically appraising qualitative research in order to support an assessment of confidence in review findings from qualitative evidence syntheses. First, since no existing critical appraisal tool describes an empirical basis for including specific criteria, we need to begin to identify and explore the empirical and theoretical evidence for the framework themes developed in this review. Second, we need to consider whether the identified themes are appropriate for critical appraisal within the specific context of the findings of qualitative evidence syntheses. Third, some of the themes from the framework synthesis relate more to research reporting standards than to research conduct. As we plan to focus only on themes related to research conduct, we need to reach consensus on which themes relate to research conduct and which relate to reporting (see Fig. 2).

Conclusions

Currently, more than 100 critical appraisal tools exist for qualitative research. This reflects an increasing recognition of the value of qualitative research. However, none of the identified critical appraisal tools appear to be based on empirical evidence or clear hypotheses related to how specific elements of qualitative study design or conduct influence the trustworthiness of study findings. Furthermore, the target audience for many of the checklists is unclear (e.g., practitioners or review authors), and many identified tools also include checklist criteria related to the reporting quality of primary qualitative research. Existing critical appraisal tools for qualitative studies are thus not fully fit for purpose in supporting the methodological limitations component of the GRADE-CERQual approach. Given the number of tools adapted from previously produced tools, the frequency count for framework concepts in this framework synthesis does not necessarily indicate the perceived or real importance of each concept. More work is needed to prioritise checklist criteria for assessing the methodological strengths and limitations of primary qualitative research, and to explore the theoretical and empirical basis for the inclusion of criteria.

Abbreviations

CASP: Critical Appraisal Skills Programme

CINAHL: The Cumulative Index to Nursing and Allied Health Literature database

ERIC: Education Resources Information Center

GRADE: Grading of Recommendations Assessment, Development, and Evaluation

GRADE-CERQual: Confidence in the Evidence from Reviews of Qualitative research

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

QIMG: Cochrane Qualitative & Implementation Methods Group

References

Gough D, Thomas J, Oliver S. Clarifying differences between review designs and methods. Syst Rev. 2012;1:28.

Glenton C, Colvin CJ, Carlsen B, Swartz A, Lewin S, Noyes J, Rashidian A. Barriers and facilitators to the implementation of lay health worker programmes to improve access to maternal and child health: qualitative evidence synthesis. Cochrane Database Syst Rev. 2013.

Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin C, Gülmezoglu M, Noyes J, Booth A, Garside R, Rashidian A. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med. 2015;12(10):e1001895.

Guyatt G, Oxman A, Kunz R, Vist G, Falck-Ytter Y, Schunemann H. For the GRADE working group: what is "quality of evidence" and why is it important to clinicians? BMJ. 2008;336:995–8.

Lewin S, Booth A, Bohren M, Glenton C, Munthe-Kaas HM, Carlsen B, Colvin CJ, Tuncalp Ö, Noyes J, Garside R, et al. Applying the GRADE-CERQual approach (1): introduction to the series. Implement Sci. 2018.

Katrak P, Bialocerkowski A, Massy-Westropp N, Kumar V, Grimmer K. A systematic review of the content of critical appraisal tools. BMC Med Res Methodol. 2004;4:22.

Denzin N. Qualitative inquiry under fire: toward a new paradigm dialogue. USA: Left Coast Press; 2009.

Hammersley M. The issue of quality in qualitative research. International Journal of Research & Method in Education. 2007;30(3):287–305.

Smith J. The problem of criteria for judging interpretive inquiry. Educ Eval Policy Anal. 1984;6(4):379–91.

Smith J, Deemer D. The problem of criteria in the age of relativism. In: Densin N, Lincoln Y, editors. Handbook of Qualitative Research. London: Sage Publication; 2000.

Noyes J, Booth A, Flemming K, Garside R, Harden A, Lewin S, Pantoja T, Hannes K, Cargo M, Thomas J. Cochrane qualitative and implementation methods group guidance series—paper 3: methods for assessing methodological limitations, data extraction and synthesis, and confidence in synthesized qualitative findings. J Clin Epidemiol. 2018;97:49–58.

Soilemezi D, Linceviciute S. Synthesizing qualitative research: reflections and lessons learnt by two new reviewers. Int J Qual Methods. 2018;17(1):160940691876801.

Carroll C, Booth A. Quality assessment of qualitative evidence for systematic review and synthesis: is it meaningful, and if so, how should it be performed? Res Synth Methods. 2015;6(2):149–54.

Sandelowski M. A matter of taste: evaluating the quality of qualitative research. Nurs Inq. 2015;22(2):86–94.

Garside R. Should we appraise the quality of qualitative research reports for systematic reviews, and if so, how? Innovation: The European Journal of Social Science Research. 2013;27(1):67–79.

Barusch A, Gringeri C, George M. Rigor in qualitative social work research: a review of strategies used in published articles. Soc Work Res. 2011;35(1):11–19.

Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy. 2005;10:45–53.

Green J, Thorogood N. Qualitative methodology in health research. In: Seaman J, editor. Qualitative methods for health research. 4th ed. London, UK: Sage Publications; 2018.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9:59.

Gough D, Oliver S, Thomas J. An introduction to systematic reviews. London, UK: Sage; 2017.

Hannes K, Macaitis K. A move to more transparent and systematic approaches of qualitative evidence synthesis: update of a review on published papers. Qual Res. 2012;12:402–42.

Santiago-Delefosse M, Gavin A, Bruchez C, Roux P, Stephen SL. Quality of qualitative research in the health sciences: analysis of the common criteria present in 58 assessment guidelines by expert users. Soc Sci Med. 2016;148:142–51.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Walsh D, Downe S. Appraising the quality of qualitative research. Midwifery. 2006;22(2):108–19.

Dixon-Woods M, Sutton M, Shaw RL, Miller T, Smith J, Young B, Bonas S, Booth A, Jones D. Appraising qualitative research for inclusion in systematic reviews: a quantitative and qualitative comparison of three methods. Journal of Health Services Research & Policy. 2007;12(1):42–7.

Long AF, Godfrey M. An evaluation tool to assess the quality of qualitative research studies. Int J Soc Res Methodol. 2004;7(2):181–96.

Popay J, Rogers A, Williams G. Rationale and standards for the systematic review of qualitative literature in health services research. Qual Health Res. 1998;8(3).

Dalton J, Booth A, Noyes J, Sowden A. Potential value of systematic reviews of qualitative evidence in informing user-centered health and social care: findings from a descriptive overview. J Clin Epidemiol. 2017;88:37–46.

Lundh A, Gøtzsche P. Recommendations by Cochrane Review Groups for assessment of the risk of bias in studies. BMC Med Res Methodol. 2008;8:22.

Higgins J, Sterne J, Savović J, Page M, Hróbjartsson A, Boutron I, Reeves B, Eldridge S. A revised tool for assessing risk of bias in randomized trials. In: Chandler J, McKenzie J, Boutron I, Welch V, editors. Cochrane Methods. Cochrane Database Syst Rev. 2016.

Higgins J, Altman D, Gøtzsche P, Jüni P, Moher D, Oxman A, Savović J, Schulz K, Weeks L, Sterne J. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Crowe M, Sheppard L. A review of critical appraisal tools show they lack rigor: alternative tool structure is proposed. J Clin Epidemiol. 2011;64(1):79–89.

Majid U, Vanstone M. Appraising qualitative research for evidence syntheses: a compendium of quality appraisal tools. Qual Health Res. 2018.

Santiago-Delefosse M, Bruchez C, Gavin A, Stephen SL. Quality criteria for qualitative research in health sciences. A comparative analysis of eight grids of quality criteria in psychiatry/psychology and medicine. Evolution Psychiatrique. 2015;80(2):375–99.

Covidence systematic review software.

Carroll C, Booth A, Leaviss J, Rick J. "best fit" framework synthesis: refining the method. BMC Med Res Methodol. 2013;13:37.

Methods for the development of NICE public health guidance (third edition): process and methods. UK: National Institute for Health and Care Excellence; 2012.

Anderson C. Presenting and evaluating qualitative research. Am J Pharm Educ. 2010;74(8):141.

Baillie L. Promoting and evaluating scientific rigour in qualitative research. Nurs Stand. 2015;29(46):36–42.

Ballinger C. Demonstrating rigour and quality? In: Finlay L, Ballinger C, editors. Qualitative research for allied health professionals: challenging choices. Chichester, England: J. Wiley & Sons; 2006. p. 235–46.

Bleijenbergh I, Korzilius H, Verschuren P. Methodological criteria for the internal validity and utility of practice oriented research. Qual Quant. 2011;45(1):145–56.

Boeije HR, van Wesel F, Alisic E. Making a difference: towards a method for weighing the evidence in a qualitative synthesis. J Eval Clin Pract. 2011;17(4):657–63.

Boulton M, Fitzpatrick R, Swinburn C. Qualitative research in health care: II. A structured review and evaluation of studies. J Eval Clin Pract. 1996;2(3):171–9.

Britton N, Jones R, Murphy E, Stacy R. Qualitative research methods in general practice and primary care. Fam Pract. 1995;12(1):104–14.

Burns N. Standards for qualitative research. Nurs Sci Q. 1989;2(1):44–52.

Caldwell K, Henshaw L, Taylor G. Developing a framework for critiquing health research: an early evaluation. Nurse Educ Today. 2011;31(8):e1–7.

Campbell R, Pound P, Pope C, Britten N, Pill R, Morgan M, Donovan J. Evaluating meta-ethnography: a synthesis of qualitative research on lay experiences of diabetes and diabetes care. Soc Sci Med. 2003;56(4):671–84.

Carter S, Little M. Justifying knowledge, justifying method, taking action: epistemologies, methodologies, and methods in qualitative research. Qual Health Res. 2007;17(10):1316–28.

Cesario S, Morin K, Santa-Donato A. Evaluating the level of evidence of qualitative research. J Obstet Gynecol Neonatal Nurs. 2002;31(6):708–14.

Cobb AN, Hagemaster JN. Ten criteria for evaluating qualitative research proposals. J Nurs Educ. 1987;26(4):138–43.

Cohen D, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. The Annals of Family Medicine. 2008;6(4):331–9.

Cooney A. Rigour and grounded theory. Nurse Res. 2011;18(4):17–22.

Côté L, Turgeon J. Appraising qualitative research articles in medicine and medical education. Medical Teacher. 2005;27(1):71–5.

Creswell JW. Qualitative procedures. Research design: qualitative, quantitative, and mixed method approaches (2nd ed.). Thousand Oaks, CA: Sage Publications; 2003.

10 questions to help you make sense of qualitative research.

Crowe M, Sheppard L. A general critical appraisal tool: an evaluation of construct validity. Int J Nurs Stud. 2011;48(12):1505–16.

Currie G, McCuaig C, Di Prospero L. Systematically Reviewing a Journal Manuscript: A Guideline for Health Reviewers. Journal of Medical Imaging and Radiation Sciences. 2016;47(2):129–138.e123.

Curtin M, Fossey E. Appraising the trustworthiness of qualitative studies: guidelines for occupational therapists. Aust Occup Ther J. 2007;54:88–94.

Cyr J. The pitfalls and promise of focus groups as a data collection method. Sociol Methods Res. 2016;45(2):231–59.

Dixon-Woods M, Shaw RL, Agarwal S, Smith JA. The problem of appraising qualitative research. Quality and Safety in Health Care. 2004;13(3):223–5.

El Hussein M, Jakubec SL, Osuji J. Assessing the FACTS: a mnemonic for teaching and learning the rapid assessment of rigor in qualitative research studies. Qual Rep. 2015;20(8):1182–4.

Elder NC, Miller WL. Reading and evaluating qualitative research studies. J Fam Pract. 1995;41(3):279–85.

Elliott R, Fischer CT, Rennie DL. Evolving guidelines for publication of qualitative research studies in psychology and related fields. Br J Clin Psychol. 1999;38(3):215–29.

Farrell SE, Kuhn GJ, Coates WC, Shayne PH, Fisher J, Maggio LA, Lin M. Critical appraisal of emergency medicine education research: the best publications of 2013. Acad Emerg Med Off J Soc Acad Emerg Med. 2014;21(11):1274–83.

Fawkes C, Ward E, Carnes D. What evidence is good evidence? A masterclass in critical appraisal. International Journal of Osteopathic Medicine. 2015;18(2):116–29.

Forchuk C, Roberts J. How to critique qualitative research articles. Can J Nurs Res. 1993;25(4):47–56.

Forman J, Creswell J, Damschroder L, Kowalski C, Krein S. Qualitative research methods: key features and insights gained from use in infection prevention research. Am J Infect Control. 2008;36(10):764–71.

Fossey E, Harvey C, McDermott F, Davidson L. Understanding and evaluating qualitative research. Aust N Z J Psychiatry. 2002;36(6):717–32.

Fujiura GT. Perspectives on the publication of qualitative research. Intellectual and Developmental Disabilities. 2015;53(5):323–8.

Greenhalgh T, Taylor R. How to read a paper: papers that go beyond numbers (qualitative research). BMJ. 1997;315(7110):740–3.

Greenhalgh T, Wengraf T. Collecting stories: is it research? Is it good research? Preliminary guidance based on a Delphi study. Med Educ. 2008;42(3):242–7.

Gringeri C, Barusch A, Cambron C. Examining foundations of qualitative research: a review of social work dissertations, 2008-2010. J Soc Work Educ. 2013;49(4):760–73.

Hoddinott P, Pill R. A review of recently published qualitative research in general practice. More methodological questions than answers? Fam Pract. 1997;14(4):313–9.

Inui T, Frankel R. Evaluating the quality of qualitative research: a proposal pro tem. J Gen Intern Med. 1991;6(5):485–6.

Jeanfreau SG, Jack L Jr. Appraising qualitative research in health education: guidelines for public health educators. Health Promot Pract. 2010;11(5):612–7.

Kitto SC, Chesters J, Grbich C. Quality in qualitative research: criteria for authors and assessors in the submission and assessment of qualitative research articles for the medical journal of Australia. Med. J. Aust. 2008;188(4):243–6.

PubMed   Google Scholar  

Kneale J, Santry J. Critiquing qualitative research. J Orthop Nurs. 1999;3(1):24–32.

Kuper A, Lingard L, Levinson W. Critically appraising qualitative research. BMJ. 2008;337:687–92.

Lane S, Arnold E. Qualitative research: a valuable tool for transfusion medicine. Transfusion. 2011;51(6):1150–3.

Lee E, Mishna F, Brennenstuhl S. How to critically evaluate case studies in social work. Res Soc Work Pract. 2010;20(6):682–9.

Leininger M: Evaluation criteria and critique of qualitative research studies. In: Critical issues in qualitative research methods. edn. Edited by (Ed.) JM. Thousand Oaks, CA.: Sage Publications; 1993: 95–115.

Leonidaki V. Critical appraisal in the context of integrations of qualitative evidence in applied psychology: the introduction of a new appraisal tool for interview studies. Qual Res Psychol. 2015;12(4):435–52.

Critical review form - Qualitative studies (Version 2.0).

Lincoln Y, Guba E. Establishing trustworthiness. In: YLEG, editor. Naturalistic inquiry. Newbury Park, CA: Sage Publications; 1985. p. 289–331.

Long A, Godfrey M, Randall T, Brettle A, Grant M. Developing evidence based social care policy and practic. Part 3: Feasibility of undertaking systematic reviews in social care. In: University of Leeds (Nuffield Institute for Health) and University of Salford (Health Care Practice R&D Unit); 2002.

Malterud K. Qualitative research: standards, challenges, and guidelines. Lancet. 2001;358(9280):483–8.

Manuj I, Pohlen TL. A reviewer's guide to the grounded theory methodology in logistics and supply chain management research. International Journal of Physical Distribution & Logistics Management. 2012;42(8–9):784–803.

Marshall C, Rossman GB. Defending the value and logic of qualitative research. In: Designing qualitative research. Newbury Park, CA: Sage Publications; 1989.

Mays N, Pope C. Qualitative research: Rigour and qualitative research. BMJ. 1995;311:109–12.

Mays N, Pope C. Qualitative research in health care: Assessing quality in qualitative research. BMJ. 2000;320(50–52).

Meyrick J. What is good qualitative research? A first step towards a comprehensive approach to judging rigour/quality. J Health Psychol. 2006;11(5):799–808.

Miles MB, Huberman AM: Drawing and verifying conclusions. In: Qualitative data analysis: An expanded sourcebook (2nd ed). edn. Thousand Oaks, CA: Sage Publications; 1997: 277–280.

Morse JM. A review committee's guide for evaluating qualitative proposals. Qual Health Res. 2003;13(6):833–51.

Nelson A. Addressing the threat of evidence-based practice to qualitative inquiry through increasing attention to quality: a discussion paper. Int J Nurs Stud. 2008;45:316–22.

Norena ALP, Alcaraz-Moreno N, Guillermo Rojas J, Rebolledo Malpica D. Applicability of the criteria of rigor and ethics in qualitative research. Aquichan. 2012;12(3):263–74.

O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Academic medicine : journal of the Association of American Medical Colleges. 2014;89(9):1245–51.

O'Cathain A, Murphy E, Nicholl J. The quality of mixed methods studies in health services research. Journal of Health Services Research & Policy. 2008;13(2):92–8.

O'HEocha C, Wang X, Conboy K. The use of focus groups in complex and pressurised IS studies and evaluation using Klein & Myers principles for interpretive research. Inf Syst J. 2012;22(3):235–56.

Oliver DP. Rigor in Qualitative Research. Research on Aging, 2011;33(4):359–360 352p.

Pearson A, Jordan Z, Lockwood C, Aromataris E. Notions of quality and standards for qualitative research reporting. Int J Nurs Pract. 2015;21(5):670–6.

Peters S. Qualitative Research Methods in Mental Health. Evidence Based Mental Health. 2010;13(2):35–40 36p.

Guidelines for Articles. Canadian Family Physician.

Plochg T. Van Zwieten M (eds.): guidelines for quality assurance in health and health care research: qualitative research. Qualitative Research Network AMCUvA: Amsterdam, NL; 2002.

Proposal: A mixed methods appraisal tool for systematic mixed studies reviews.

Poortman CL, Schildkamp K. Alternative quality standards in qualitative research? Quality & Quantity: International Journal of Methodology. 2012;46(6):1727–51.

Popay J, Williams G. Qualitative research and evidence-based healthcare. J R Soc Med. 1998;91(35):32–7.

Ravenek MJ, Rudman DL. Bridging conceptions of quality in moments of qualitative research. Int J Qual Methods. 2013;12:436–56.

Rice-Lively ML. Research proposal evaluation form: qualitative methodology. In., vol. 2016. https://www.ischool.utexas.edu/~marylynn/qreval.html UT School of. Information. 1995.

Rocco T. Criteria for evaluating qualitative studies. Human Research Development International. 2010;13(4):375–8.

Rogers A, Popay J, Williams G, Latham M: Part II: setting standards for qualitative research: the development of markers. In: Inequalities in health and health promotion: insights from the qualitative research literature edn. London: Health Education Authority; 1997: 35–52.

Rowan M, Huston P. Qualitative research articles: information for authors and peer reviewers. Canadian Meidcal Association Journal. 1997;157(10):1442–6.

CAS   Google Scholar  

Russell CK, Gregory DM. Evaluation of qualitative research studies. Evid Based Nurs. 2003;6(2):36–40.

Ryan F, Coughlan M, Cronin P. Step-by-step guide to critiquing research. Part 2: qualitative research. Br J Nurs. 2007;16(12):738–44.

Salmon P. Assessing the quality of qualitative research. Patient Educ Couns. 2013;90(1):1–3.

Sandelowski M, Barroso J. Appraising reports of qualitative studies. In: Handbook for synthesizing qualitative research. New York: Springer; 2007. p. 75–101.

Savall H, Zardet V, Bonnet M, Péron M. The emergence of implicit criteria actualy used by reviewers of qualitative research articles. Organ Res Methods. 2008;11(3):510–40.

Schou L, Hostrup H, Lyngso EE, Larsen S, Poulsen I. Validation of a new assessment tool for qualitative research articles. J Adv Nurs. 2012;68(9):2086–94.

Shortell S. The emergence of qualitative methods in health services research. Health Serv Res. 1999;34(5 Pt 2):1083–90.

CAS   PubMed   PubMed Central   Google Scholar  

Silverman D, Marvasti A. Quality in Qualitative Research. In: Doing Qualitative Research: A Comprehensive Guide. Thousand Oaks, CA: Sage Publications; 2008. p. 257–76.

Sirriyeh R, Lawton R, Gardner P, Armitage G. Reviewing studies with diverse designs: the development and evaluation of a new tool. J Eval Clin Pract. 2012;18(4):746–52.

Spencer L, Ritchie J, Lewis JR, Dillon L. Quality in qualitative evaluation: a framework for assessing research evidence. In. London: Government Chief Social Researcher's Office; 2003.

Stige B, Malterud K, Midtgarden T. Toward an agenda for evaluation of qualitative research. Qual Health Res. 2009;19(10):1504–16.

Stiles W. Evaluating qualitative research. Evidence-Based Mental Health. 1999;4(2):99–101.

Storberg-Walker J. Instructor's corner: tips for publishing and reviewing qualitative studies in applied disciplines. Hum Resour Dev Rev. 2012;11(2):254–61.

Tracy SJ. Qualitative quality: eight "big-tent" criteria for excellent qualitative research. Qual Inq. 2010;16(10):837–51.

Treloar C, Champness S, Simpson PL, Higginbotham N. Critical appraisal checklist for qualitative research studies. Indian J Pediatr. 2000;67(5):347–51.

Waterman H, Tillen D, Dickson R, De Konig K. Action research: a systematic review and guidance for assessment. Health Technol Assess. 2001;5(23):43–50.

Whittemore R, Chase SK, Mandle CL. Validity in qualitative research. Qual Health Res. 2001;11(4):522–37.

Yardley L. Dilemmas in qualitative health research. Psychol Health. 2000;15(2):215–28.

Yarris LM, Juve AM, Coates WC, Fisher J, Heitz C, Shayne P, Farrell SE. Critical appraisal of emergency medicine education research: the best publications of 2014. Acad Emerg Med Off J Soc Acad Emerg Med. 2015;22(11):1327–36.

Zingg W, Castro-Sanchez E, Secci FV, Edwards R, Drumright LN, Sevdalis N, Holmes AH. Innovative tools for quality assessment: integrated quality criteria for review of multiple study designs (ICROMS). Public Health. 2016;133:19–37.

Zitomer MR, Goodwin D. Gauging the quality of qualitative research in adapted physical activity. Adapt Phys Act Q. 2014;31(3):193–218.

Whiting P, Wolff R, Mallett S, Simera I, Savović J. A proposed framework for developing quality assessment tools. Systematic Reviews. 2017:6(204).

Munthe-Kaas H, Bohren M, Glenton C, Lewin S, Noyes J, Tuncalp Ö, Booth A, Garside R, Colvin C, Wainwright M, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings - paper 3: how to assess methodological limitations. Implementation Science In press.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Steward L, Group. TP-P. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews. 2014:4(1).

Hannes K, Heyvært M, Slegers K, Vandenbrande S, Van Nuland M. Exploring the Potential for a Consolidated Standard for Reporting Guidelines for Qualitative Research: An Argument Delphi Approach. International Journal of Qualitative Methods. 2015;14(4):1–16.

Bosch-Caplanch X, Lavis J, Lewin S, Atun R, Røttingen J-A, al. e: Guidance for evidence-informed policies about health systems: Rationale for and challenges of guidance development. PloS Medicine 2012, 9(3):e1001185.

Download references



Reading and Critiquing Research

  • Resources for Critiquing
  • Structure of a Research Article
  • Tips for Reading a Journal Article
  • At a loss for words?


Reading & Critiquing Research Articles

Reading and critiquing scholarly research articles is a skill developed with time and practice. As you read more within your discipline, you'll likely discover patterns in the structure of journal articles, and you'll become more adept at distinguishing strong articles from weak ones.

Critique is a synonym for evaluation. A critique is a critical analysis or evaluation of a subject, situation, literary work, or other type of evaluand. It is critical in the sense of being characterized by careful analysis and judgment and analytic in the sense of a separating or breaking up of a whole into its parts, especially for examination of these parts to find their nature, proportion, function, interrelationship, and so on. A common fallacy is equating critique with critical or negative, neither of which is implied. Source: Mathison, S. (2005). Encyclopedia of evaluation. Thousand Oaks, CA: SAGE Publications. doi:10.4135/9781412950558

Critical appraisal is a crucial part of evidence-based medicine, yet reading and critiquing a journal article can seem like a daunting and complex task. Breaking the process down into steps should enable you to build up the necessary skills, such as:

  • Skimming the article in the first instance to look for the author's main points and conclusions
  • Being familiar with the way that many journal articles are structured (abstract, methods, results, discussion, etc.)
  • Reflecting on and being critical of what you are reading

A checklist or toolkit, such as those found in this research starter, will guide you through this process in a structured way. This research starter will also point you to articles, web pages, online guides, and books to help you appraise scientific articles effectively.

If you have any questions, or suggestions for helpful books, links, or resources about critical appraisal, please email [email protected]

So You've Found an Article! Great! Now What?

  • Questions to Ask
  • Toolkits and checklists for critical appraisal

The following questions may be helpful in determining whether you are reading a good scholarly article; a short sketch after the list shows one way to record your answers:

  • Is the research question clearly stated? Does it seem significant?
  • Has the new research been framed well within the existing research? In other words, is there evidence of a literature review and does it seem complete?
  • Is the researcher's methodology clearly laid out? Does it seem appropriate for the research problem?
  • Do the researcher's conclusions make sense, given the results reported or the evidence presented? Are there any inconsistencies? Any apparent biases in the data or evidence?
  • Have limitations to the research or argument been identified?
  • Does the References list appear accurate and complete?
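If you like to keep a written record as you read, the questions above can also be captured in a short script. The sketch below is purely illustrative and is not part of any published appraisal tool: the question wording is adapted from the list above, and the function and variable names are invented for this example.

```python
# Illustrative sketch only: record yes/no answers to the screening
# questions above. All names here are hypothetical, not a standard tool.

QUESTIONS = [
    "Is the research question clearly stated, and does it seem significant?",
    "Is the new research framed well within the existing literature?",
    "Is the methodology clearly laid out and appropriate for the problem?",
    "Do the conclusions make sense given the results or evidence presented?",
    "Have limitations to the research or argument been identified?",
    "Does the reference list appear accurate and complete?",
]

def screen_article(title: str) -> None:
    """Ask each question in turn, then print a short summary."""
    answers = []
    for question in QUESTIONS:
        reply = input(f"{question} [y/n] ").strip().lower()
        answers.append(reply.startswith("y"))
    score = sum(answers)
    print(f"\n{title}: {score}/{len(QUESTIONS)} questions answered 'yes'.")
    for question, ok in zip(QUESTIONS, answers):
        if not ok:
            print(f"  Revisit: {question}")

if __name__ == "__main__":
    screen_article("Example article")
```

A "no" answer is not an automatic disqualification; it simply flags a point to revisit before relying on the article.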


The Ultimate Guide to Critiquing Research Articles


Critiquing research articles is a fundamental skill for any scientist or researcher. It allows us to evaluate the validity and reliability of the findings, identify potential biases or limitations, and contribute to the advancement of knowledge in our respective fields. But why is critiquing research articles so important?

The Importance of Critiquing Research Articles

There are several reasons why critiquing research articles is crucial:

Ensuring Accuracy and Integrity: By critically analyzing the methods, results, and conclusions of a study, we can identify any flaws or inconsistencies that may undermine the credibility of the research. This helps maintain the high standards of scientific inquiry and prevents the dissemination of misleading or erroneous information.

Facilitating Scientific Progress: By identifying gaps in existing knowledge or weaknesses in previous studies, we can propose new research questions and design more robust experiments. This iterative process of critique and improvement is essential for advancing our understanding of the world and finding solutions to complex problems.

Nurturing a Culture of Intellectual Rigor: Critiquing research articles encourages researchers to question assumptions, challenge established theories, and explore alternative explanations. This fosters healthy debate, drives innovation, and pushes the boundaries of scientific inquiry.

In this blog post, we will delve deeper into the importance and relevance of critiquing research articles. We will explore effective strategies and provide valuable insights to help you enhance your critique skills. So, whether you’re a seasoned researcher or just starting your scientific journey, join us as we embark on this intellectual adventure of critiquing research articles.


Understanding Research Articles

Research articles are a fundamental component of the academic and scientific community. They serve as a means for researchers to communicate their findings, share knowledge, and contribute to the advancement of their respective fields. In this section, we will delve into the purpose and structure of research articles, as well as explore the different types of research articles that exist.

Purpose and Structure of Research Articles

The purpose of a research article is to present the results of a study or experiment in a clear and organized manner. These articles typically follow a specific structure, which allows readers to navigate through the information easily. Understanding this structure is crucial for researchers who want to effectively communicate their work.

The structure of a research article usually consists of several sections, each serving a specific purpose. The most common sections include:

  • Introduction: Sets the stage for the research, providing background information and stating the research question or hypothesis. This section helps readers understand the context and significance of the study.
  • Methods: Outlines the procedures and techniques used in the research, including the sample size, data collection methods, and statistical analyses. This section allows other researchers to replicate the study and verify the results.
  • Results: Presents the findings of the research in a concise and objective manner. It often includes tables, graphs, and figures to illustrate the data. This section should be focused on presenting the facts without interpretation or bias.
  • Discussion: Analyzes and interprets the results of the study. Researchers may compare their findings to previous research, discuss limitations, and propose future directions. This section demonstrates their understanding of the implications of their work.
  • Conclusion: Summarizes the main findings of the study and reiterates their significance. It may also include recommendations for further research or practical applications of the findings.

Different Types of Research Articles

Research articles can take various forms depending on the nature of the study and the intended audience. The three main types of research articles are:

  • Empirical Research Articles: Present the results of original studies or experiments. These articles follow the structure we discussed earlier, with a focus on presenting data and analysis. They are the most common type in scientific and academic journals.
  • Review Articles: Provide a comprehensive analysis and synthesis of existing research on a particular topic. They summarize the findings of multiple studies and offer a broader perspective on the subject. Review articles are valuable resources for researchers looking to gain a deeper understanding of a specific field or topic.
  • Theoretical Research Articles: Focus on developing new theories or frameworks. They propose conceptual models, hypotheses, or theoretical explanations for phenomena. These articles are often found in disciplines such as philosophy, sociology, and psychology.

Research articles play a critical role in the dissemination of knowledge within the academic and scientific communities. Understanding the purpose and structure of these articles is essential for researchers to effectively communicate their findings. By following a clear and organized structure, researchers can ensure that their work is accessible and impactful to their peers and the broader scientific community.

Importance of Critiquing Research Articles

Critiquing research articles plays a vital role in the academic and research community. It not only benefits researchers and academics but also contributes to the overall advancement of knowledge. In this section, we will explore the benefits of critiquing research articles, how it improves critical thinking skills and enhances research abilities, and the importance of identifying strengths, weaknesses, and gaps in existing research.

Benefits of Critiquing Research Articles for Researchers and Academics

Critiquing research articles provides researchers and academics with several important benefits:

  • Staying up-to-date: By critically analyzing existing research, researchers can identify gaps in the literature and areas that require further exploration. This helps them shape their own research questions and contribute to the existing body of knowledge.
  • Improving research methodologies: By closely examining the methods and techniques used in published studies, researchers can gain insights into best practices and avoid potential pitfalls. This enhances the quality and rigor of their own research, leading to more accurate and reliable results.
  • Fostering collaboration and intellectual discussion: By engaging in critical analysis and providing constructive feedback, researchers can contribute to the ongoing dialogue and debate surrounding a particular topic. This not only enriches the academic discourse but also promotes the refinement and advancement of ideas.

Improving Critical Thinking Skills and Enhancing Research Abilities through Critiquing

Critiquing research articles is an excellent way to develop and improve critical thinking skills. By evaluating the strengths and weaknesses of published studies, researchers are challenged to think critically and objectively. This process cultivates a critical mindset that is essential for conducting high-quality research.

Moreover, critiquing research articles strengthens researchers' own abilities. Through the analysis of existing studies, researchers gain a deeper understanding of the methodologies and approaches that have been used successfully in the past. This knowledge can be applied to their own work, allowing them to make informed decisions and design studies that are more likely to yield meaningful results.

Identifying Strengths, Weaknesses, and Gaps in Existing Research

One of the key benefits of critiquing research articles is the ability to identify strengths, weaknesses, and gaps in existing research. By critically evaluating published studies, researchers can:

  • Assess the strengths of the research design, the validity of the findings, and the relevance of the conclusions.
  • Avoid repeating mistakes in their own work by recognizing limitations in methodology, sample size, or data analysis.
  • Identify areas that have not been adequately explored or where conflicting results exist, providing opportunities for further research and the potential to make significant contributions to the field.

The Key Elements of Critiquing Research Articles

When critiquing research articles, it is important to consider several key elements. These elements can help you analyze and evaluate the quality and validity of the research. In this section, we will explore some of these key components and provide tips for effectively critiquing each one.

  • The Abstract: The abstract is a concise summary of the entire research article, providing an overview of the study’s purpose, methodology, results, and conclusions. When critiquing the abstract, pay attention to whether it accurately reflects the content of the article and effectively conveys the main points. Look for clarity, coherence, and relevance in the abstract.
  • The Introduction: The introduction sets the stage for the research by providing background information, stating the research problem, and outlining the objectives and hypotheses of the study. When evaluating the introduction, consider whether it effectively contextualizes the research and justifies its significance. Look for logical progression of ideas and clear articulation of the research question or problem.
  • The Methodology: The methodology section describes the research design, sample size, data collection methods, and statistical analysis used in the study. This section is crucial for assessing the quality and rigor of the research. When critiquing the methodology, consider whether the chosen research design is appropriate for the research question and objectives. Evaluate the sample size and whether it is representative of the target population. Assess the data collection methods for reliability and validity. Finally, examine the statistical analysis to determine if it is appropriate and accurately reflects the data.
  • The Results: The results section presents the findings of the study, often using tables, graphs, or statistical analyses. When evaluating the results, look for clarity and coherence in the presentation of the data. Consider whether the results are relevant to the research question and objectives. Assess the statistical significance of the findings and whether they support or contradict previous research in the field.
  • The Discussion: The discussion section is where the researchers interpret the results, relate them to previous research, and discuss the implications of the findings. When critiquing the discussion, consider whether the interpretation of the results is supported by the data presented. Look for logical connections between the results and the research question. Assess whether the authors acknowledge any limitations of the study and suggest directions for future research.
  • The References: The references section provides a list of the sources cited in the research article. When critiquing the references, consider whether they are relevant, reputable, and up-to-date. Look for a variety of sources to support the research claims and ensure that proper citation formats are used.

To effectively critique research articles, it is essential to analyze each component thoroughly and consider its individual strengths and weaknesses. By paying attention to the key elements, such as the abstract, introduction, methodology, results, discussion, and references, you can develop a comprehensive understanding of the research and evaluate its quality. Remember to use the tips provided in this section to guide your analysis and critique; the sketch below shows one way to record such an assessment.
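As a complement to the list above, here is a minimal sketch of how a component-by-component critique might be recorded in structured form. It is an illustration only, under the assumption that you want running notes per section: the six component names come from this guide, while the class and field names are invented.

```python
# Minimal sketch of a structured critique record (Python 3.9+).
# All class, field and variable names are hypothetical.
from dataclasses import dataclass, field

COMPONENTS = ["abstract", "introduction", "methodology",
              "results", "discussion", "references"]

@dataclass
class ComponentNotes:
    strengths: list[str] = field(default_factory=list)
    weaknesses: list[str] = field(default_factory=list)

@dataclass
class ArticleCritique:
    citation: str
    notes: dict[str, ComponentNotes] = field(
        default_factory=lambda: {name: ComponentNotes() for name in COMPONENTS})

    def summary(self) -> str:
        """One line per component: how many strengths and weaknesses noted."""
        lines = [self.citation]
        for name, n in self.notes.items():
            lines.append(f"  {name}: {len(n.strengths)} strength(s), "
                         f"{len(n.weaknesses)} weakness(es)")
        return "\n".join(lines)

# Usage: add observations as you work through each section of the paper.
critique = ArticleCritique("Author A, et al. Example study. Journal; 2020.")
critique.notes["methodology"].weaknesses.append(
    "Sample size not justified for the research question.")
critique.notes["results"].strengths.append(
    "Data presented objectively with clear tables and figures.")
print(critique.summary())
```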

Enhance Your Research Skills with Avidnote

If you want to learn more about research article critique and other valuable insights for academics and researchers, be sure to check out the Avidnote Blog. It offers a wealth of information and tips to enhance your research writing, reading, and analysis processes. Additionally, Avidnote, an AI platform recommended by universities, provides features tailored for researchers, such as summarizing text, analyzing research data, and organizing reading lists. Don’t forget to explore the Avidnote Premium options, including a free plan for Karlstad Studentkår members. Start improving your research workflow with Avidnote today!

The Pitfalls of Critiquing Research Articles

In the world of research, critiquing research articles is an essential skill. It allows researchers to evaluate the quality and validity of published studies, identify potential biases, and contribute to the advancement of knowledge in their field. However, there are common pitfalls that researchers should avoid when critiquing research articles. Let’s explore some of these pitfalls and how to overcome them.

Failing to Understand the Study Design and Methodology

One common mistake researchers make when critiquing research articles is failing to fully understand the study design and methodology. It is crucial to have a thorough understanding of the research design, including the sampling methods, data collection procedures, and statistical analyses employed. Without this understanding, it becomes challenging to assess the study’s strengths and weaknesses accurately.

To overcome this pitfall, researchers can start by carefully reading the methods section of the article. This section provides details about the study’s design, participants, data collection instruments, and analysis methods. By familiarizing themselves with the study’s methodology, researchers can better evaluate its appropriateness for addressing the research question and drawing valid conclusions.

Biases in the Critique Process

Another common pitfall is the presence of biases in the critique process. Biases can manifest in various ways, such as personal beliefs, professional affiliations, or even unconscious biases. These biases can influence the interpretation of the research findings and compromise the objectivity of the critique.

To mitigate biases, researchers should strive to maintain objectivity and impartiality throughout the critique process. One way to achieve this is by critically evaluating the evidence presented in the research article and considering alternative explanations for the findings. It is also essential to be aware of one’s own biases and consciously challenge them to ensure a fair and balanced evaluation.

Emotional Reactions

Researchers should be cautious of their emotional reactions when critiquing research articles. It is natural to have preferences or opinions, but it is crucial to separate personal beliefs from the evaluation of the study’s scientific merit. By focusing on the evidence and logical reasoning, researchers can avoid being swayed by emotional biases and provide a more objective critique.

Maintaining objectivity also involves being open to different perspectives and interpretations. It is essential to consider the limitations of the study and acknowledge areas where further research is needed. Constructive criticism can contribute to the development of robust scientific knowledge, and researchers should approach the critique process with a mindset of continuous improvement.

Critiquing research articles is a valuable skill for researchers, but it is not without its pitfalls. To avoid these pitfalls, researchers should strive to understand the study design and methodology thoroughly, overcome biases, and maintain objectivity and impartiality throughout the critique process. By doing so, researchers can provide insightful and constructive critiques that contribute to the advancement of knowledge in their field. So, let’s continue honing our critiquing skills and fostering a culture of rigorous and objective research evaluation.

Tools and Resources to Aid in Critiquing Research Articles

When it comes to critiquing research articles, having the right tools and resources can make the process more efficient and effective. In this section, we will explore some helpful tools and online platforms that can assist you in your critique. Additionally, we will discuss Avidnote, an AI-powered platform specifically designed to enhance the research critique process.

Online Platforms, Software, and AI Tools for Effective Critiquing

The internet has opened up a world of possibilities for researchers, providing access to a wealth of information and resources. When it comes to critiquing research articles, there are several online platforms and software tools available that can streamline the process and help you uncover the strengths and weaknesses of a study.

One online platform worth mentioning is Avidnote. Designed with researchers in mind, Avidnote offers a range of AI-powered features that can enhance your research writing, reading, and analysis processes. With Avidnote, you can write research papers faster, summarize text, analyze research data, transcribe interviews, and more. It's like having a virtual research assistant at your fingertips.

Avidnote is highly recommended by universities and offers AI functionalities specifically tailored for researchers. Whether you’re a student or a seasoned academic, Avidnote can help you save time and improve the quality of your critique. Plus, Avidnote offers different pricing plans to suit your needs, ranging from a free plan to professional and premium plans with additional AI usage, storage, and features.

One of the standout features of Avidnote is its commitment to data privacy. As a user, you own all the data you produce on the platform, and Avidnote ensures that your information is kept secure. This is particularly important when critiquing research articles, as you may be dealing with sensitive or confidential data.

In addition to its powerful AI capabilities, Avidnote also promotes ethical writing practices. The platform encourages users to use its features responsibly and provides valuable insights and tips for academics and researchers on its blog. Whether you’re looking for guidance on critiquing research articles or other aspects of the research process, Avidnote’s blog is a valuable resource.

Avidnote also offers features to help you organize your reading lists and prepare for critiques. With its seamless integration with reference management software, you can easily annotate and mark papers, store secure and searchable notes, and take quick notes on the go. The platform also allows you to work in groups and create shared projects, making collaboration with colleagues a breeze.

If you’re a member of Karlstad Studentkår, you’ll be pleased to know that you can access Avidnote Premium for free by registering with the code KAU. This is a fantastic opportunity to take advantage of Avidnote’s premium features without breaking the bank. Additionally, PhD students who are members of the student association can also access Avidnote for free, further demonstrating the platform’s commitment to supporting academic research.

When it comes to critiquing research articles, having the right tools and resources can make all the difference. Online platforms, software tools, and AI-powered platforms like Avidnote can streamline the critique process, saving you time and improving the quality of your analysis. With its range of features tailored for researchers, Avidnote is a valuable tool that can enhance your research writing, reading, and analysis processes. So why not give it a try and see how it can transform your critique?

The Importance of Constructive Feedback in Research

In the research community, providing constructive feedback on research articles plays a crucial role in promoting growth and improvement. Constructive feedback not only helps researchers refine their work but also contributes to the overall advancement of knowledge in their field.

Constructive feedback is invaluable in the research community because it allows researchers to identify areas for improvement and refine their work. By offering insights and suggestions, reviewers can help authors strengthen their arguments, enhance the clarity of their writing, and address any potential weaknesses. This collaborative process fosters a culture of continuous improvement and drives the advancement of research.

Guidelines for Offering Helpful and Respectful Feedback

When providing feedback, it is essential to follow guidelines that ensure the feedback is helpful, respectful, and constructive. One important guideline is to focus on the content rather than the person behind it. By separating the work from the individual, feedback can be given in a way that is less personal and more objective. This approach helps maintain a positive and supportive environment for researchers.

Another guideline is to be specific and provide concrete examples. Vague statements like “this section needs improvement” are not helpful. Instead, pointing out specific areas that could benefit from clarification or providing alternative approaches can guide authors in making meaningful revisions. Additionally, offering examples or referring to relevant research can strengthen the feedback and provide authors with a clearer understanding of how to improve their work.

It is also important to be respectful and considerate when giving feedback. Recognize the effort and time that went into the research and acknowledge the strengths of the work. By starting with positive feedback, reviewers can create a more receptive atmosphere and help authors feel encouraged to make necessary revisions. Additionally, using a constructive and supportive tone throughout the feedback can help foster a collaborative relationship between reviewers and authors.

The Role of Feedback in Research Development

Feedback plays a crucial role in promoting growth and improvement in research. It helps researchers identify blind spots and encourages them to explore different perspectives. By engaging in a constructive dialogue, researchers can refine their ideas, challenge assumptions, and broaden the impact of their work. Constructive feedback also contributes to the overall quality of research publications, ensuring that they meet the rigorous standards of the scientific community.

Research is a dynamic and evolving process, and feedback is a key component in driving progress. By offering constructive feedback, researchers contribute to the continuous development of their field and help elevate the quality of research outcomes. It is through this collaborative effort that researchers can collectively push the boundaries of knowledge and make meaningful contributions to their respective disciplines.

In Conclusion

Providing constructive feedback on research articles is crucial for the growth and improvement of the research community. By adhering to guidelines that promote helpful, respectful, and constructive feedback, researchers can actively contribute to the advancement of their field. The feedback process fosters a culture of continuous improvement, encourages collaboration, and drives the overall progress of research. So, let us embrace the power of constructive feedback and work together to push the boundaries of knowledge.

Why Critiquing Research Articles is Crucial

Critiquing research articles is a crucial skill for researchers to develop for their personal and professional growth. It allows them to:

  • Evaluate the quality and validity of research
  • Identify gaps in knowledge
  • Contribute to the advancement of their field

Avidnote: Enhancing Research Processes

Avidnote is an AI platform designed for researchers that offers a range of features to enhance the research writing, reading, and analysis processes. With Avidnote, researchers can:

  • Write research papers faster
  • Summarize text
  • Analyze research data
  • Transcribe interviews

Avidnote provides researchers with the tools they need to streamline their work. It is recommended by universities and offers a range of pricing plans to cater to researchers at every level. The platform ensures data privacy and promotes ethical writing practices.

Avidnote Blog: Valuable Resource for Researchers

The Avidnote blog is a valuable resource for academics and researchers. It provides insights and tips on various topics, including critiquing research articles. Avidnote also offers features to help users organize their reading lists and prepare for critiques, making the process more efficient and effective.

Avidnote’s Integration with OpenAI

Avidnote integrates with OpenAI’s private beta, staying at the forefront of research and academic work. This integration offers cutting-edge tools for users.

Members of Karlstad Studentkår can even access Avidnote Premium for free by registering with the code KAU. This further enhances their research capabilities.

Avidnote: Simplifying the Research Process

Avidnote is the ultimate companion for researchers, providing them with the necessary tools and resources to excel in their work. Whether it's writing, organizing studies, or collaborating with others, Avidnote simplifies the research process and allows researchers to focus on making impactful contributions to their field. Try it out by clicking here.

Remember, your research has the power to shape the future. Let Avidnote be your ally on this journey.


CASP Checklists


Critical appraisal tools and resources

CASP has produced simple critical appraisal checklists for the key study designs. These are not meant to replace considered thought and judgement when reading a paper but are for use as a guide and aide-memoire. All CASP checklists cover three main areas: validity, results, and clinical relevance.

What is Critical Appraisal?

Critical Appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context. It is an essential skill for evidence-based medicine because it allows people to find and use research evidence reliably and efficiently.


CASP also maintains a complete list (published and unpublished) of articles and research papers about CASP and other critical appraisal tools and approaches, covering 1993 to 2012.


Nuffield Department of Primary Care Health Sciences, University of Oxford

Critical Appraisal tools

Critical appraisal worksheets to help you appraise the reliability, importance and applicability of clinical evidence.

Critical appraisal is the systematic evaluation of clinical research papers in order to establish:

  • Does this study address a clearly focused question?
  • Did the study use valid methods to address this question?
  • Are the valid results of this study important?
  • Are these valid, important results applicable to my patient or population?

If the answer to any of these questions is “no”, you can save yourself the trouble of reading the rest of it.
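That "stop at the first no" rule is effectively a short-circuit: each question acts as a gate for the next. As a purely illustrative sketch (the question texts come from the list above; the function name is made up), the triage could be expressed like this:

```python
# Illustrative triage: stop at the first screening question answered "no".
SCREENING_QUESTIONS = [
    "Does this study address a clearly focused question?",
    "Did the study use valid methods to address this question?",
    "Are the valid results of this study important?",
    "Are these valid, important results applicable to my patient or population?",
]

def worth_reading_further(answers: list[bool]) -> bool:
    """Return False as soon as any screening question is answered 'no'."""
    for question, answer in zip(SCREENING_QUESTIONS, answers):
        if not answer:
            print(f"Stop here: '{question}' was answered 'no'.")
            return False
    return True

# Example: valid and important, but not applicable to our population.
print(worth_reading_further([True, True, True, False]))  # -> False
```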

This section contains useful tools and downloads for the critical appraisal of different types of medical evidence, including example appraisal sheets and several worked examples.

Critical Appraisal Worksheets

  • Systematic Reviews  Critical Appraisal Sheet
  • Diagnostics  Critical Appraisal Sheet
  • Prognosis  Critical Appraisal Sheet
  • Randomised Controlled Trials  (RCT) Critical Appraisal Sheet
  • Critical Appraisal of Qualitative Studies  Sheet
  • IPD Review  Sheet

Chinese - translated by Chung-Han Yang and Shih-Chieh Shao

  • Systematic Reviews  Critical Appraisal Sheet
  • Diagnostic Study  Critical Appraisal Sheet
  • Prognostic Critical Appraisal Sheet
  • RCT  Critical Appraisal Sheet
  • IPD reviews Critical Appraisal Sheet
  • Qualitative Studies Critical Appraisal Sheet 

German - translated by Johannes Pohl and Martin Sadilek

  • Systematic Review  Critical Appraisal Sheet
  • Diagnosis Critical Appraisal Sheet
  • Prognosis Critical Appraisal Sheet
  • Therapy / RCT Critical Appraisal Sheet

Lithuanian - translated by Tumas Beinortas

  • Systematic review appraisal Lithuanian (PDF)
  • Diagnostic accuracy appraisal Lithuanian  (PDF)
  • Prognostic study appraisal Lithuanian  (PDF)
  • RCT appraisal sheets Lithuanian  (PDF)

Portuguese - translated by Enderson Miranda, Rachel Riera and Luis Eduardo Fontes

  • Portuguese – Systematic Review Study Appraisal Worksheet
  • Portuguese – Diagnostic Study Appraisal Worksheet
  • Portuguese – Prognostic Study Appraisal Worksheet
  • Portuguese – RCT Study Appraisal Worksheet
  • Portuguese – Systematic Review Evaluation of Individual Participant Data Worksheet
  • Portuguese – Qualitative Studies Evaluation Worksheet

Spanish - translated by Ana Cristina Castro

  • Systematic Review  (PDF)
  • Diagnosis  (PDF)
  • Prognosis  Spanish Translation (PDF)
  • Therapy / RCT  Spanish Translation (PDF)

Persian - translated by Ahmad Sofi Mahmudi

  • Prognosis  (PDF)
  • PICO  Critical Appraisal Sheet (PDF)
  • PICO Critical Appraisal Sheet (MS-Word)
  • Educational Prescription  Critical Appraisal Sheet (PDF)

Explanations & Examples

  • Pre-test probability
  • SpPin and SnNout
  • Likelihood Ratios
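As a quick orientation before opening those explanations (the definitions below are standard diagnostic-test arithmetic added here for convenience, not text taken from the worksheets themselves): SpPin means that when a highly Specific test is Positive, it helps rule the diagnosis in; SnNout means that when a highly Sensitive test is Negative, it helps rule the diagnosis out. Likelihood ratios connect pre-test and post-test probability via odds:

$$\mathrm{LR}^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}}, \qquad \mathrm{LR}^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}}$$

$$\text{odds} = \frac{p}{1-p}, \qquad \text{post-test odds} = \text{pre-test odds} \times \mathrm{LR}$$

For example, a pre-test probability of 25% gives odds of 1:3; applying a test with LR+ = 6 yields post-test odds of 2:1, or a post-test probability of about 67%.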

Making sense of research: A guide for critiquing a paper

  • School of Nursing, Griffith University, Meadowbrook, Queensland
  • PMID: 16114192
  • DOI: 10.5172/conu.14.1.38

Learning how to critique research articles is one of the fundamental skills of scholarship in any discipline. The range, quantity and quality of publications available today via print, electronic and Internet databases mean it has become essential to equip students and practitioners with the prerequisites to judge the integrity and usefulness of published research. Finding, understanding and critiquing quality articles can be a difficult process. This article sets out some helpful indicators to assist the novice to make sense of research.


Critical appraisal of qualitative research: necessity, partialities and the issue of bias

Veronika Williams, Anne-Marie Boylan, David Nunan

Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Observatory Quarter, Oxford, UK

Correspondence to Dr Veronika Williams, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford OX2 6GG, UK; veronika.williams@phc.ox.ac.uk

https://doi.org/10.1136/bmjebm-2018-111132


Introduction

Qualitative evidence allows researchers to analyse human experience and provides useful exploratory insights into experiential matters and meaning, often explaining the 'how' and 'why'. As we have argued previously,1 qualitative research has an important place within evidence-based healthcare, contributing to, among other things, policy on patient safety,2 prescribing,3 4 and understanding chronic illness.5 Equally, it offers additional insight into quantitative studies, explaining contextual factors surrounding a successful intervention or why an intervention might have 'failed' or 'succeeded' where effect sizes cannot. It is for these reasons that the MRC strongly recommends including qualitative evaluations when developing and evaluating complex interventions.6

Critical appraisal of qualitative research

Is it necessary?

Although the importance of qualitative research to improve health services and care is now increasingly widely supported (discussed in paper 1), the role of appraising the quality of qualitative health research is still debated.8 10 Despite a large body of literature focusing on appraisal and rigour,9 11–15 often referred to as 'trustworthiness'16 in qualitative research, there remains debate about how to—and even whether to—critically appraise qualitative research.8–10 17–19 However, if we are to make a case for qualitative research as integral to evidence-based healthcare, then any argument to omit a crucial element of evidence-based practice is difficult to justify. That being said, simply applying the standards of rigour used to appraise studies based on the positivist paradigm (positivism depends on quantifiable observations to test hypotheses and assumes that the researcher is independent of the study; research situated within a positivist paradigm is based purely on facts, considers the world to be external and objective, and is concerned with validity, reliability and generalisability as measures of rigour) would be misplaced given the different epistemological underpinnings of the two types of data.

Given its scope and its place within health research, the robust and systematic appraisal of qualitative research to assess its trustworthiness is as paramount to its implementation in clinical practice as any other type of research. It is important to appraise different qualitative studies in relation to the specific methodology used because the methodological approach is linked to the ‘outcome’ of the research (eg, theory development, phenomenological understandings and credibility of findings). Moreover, appraisal needs to go beyond merely describing the specific details of the methods used (eg, how data were collected and analysed), with additional focus needed on the overarching research design and its appropriateness in accordance with the study remit and objectives.

Poorly conducted qualitative research has been described as ‘worthless, becomes fiction and loses its utility’. 20 However, without a deep understanding of concepts of quality in qualitative research or at least an appropriate means to assess its quality, good qualitative research also risks being dismissed, particularly in the context of evidence-based healthcare where end users may not be well versed in this paradigm.

How is appraisal currently performed?

Appraising the quality of qualitative research is not a new concept—there are a number of published appraisal tools, frameworks and checklists in existence.21–23 An important and often overlooked point is the confusion between tools designed for appraising methodological quality and reporting guidelines designed to assess the quality of methods reporting. An example is the Consolidated Criteria for Reporting Qualitative Research (COREQ)24 checklist, which was designed to provide standards for authors when reporting qualitative research but is often mistaken for a methods appraisal tool.10

Broadly speaking, there are two types of critical appraisal approaches for qualitative research: checklists and frameworks. Checklists have often been criticised for confusing quality in qualitative research with 'technical fixes',21 25 resulting in the erroneous prioritisation of particular aspects of methodological processes over others (eg, multiple coding and triangulation). It could be argued that a checklist approach adopts the positivist paradigm, where the focus is on objectively assessing 'quality' on the assumption that the researcher is independent of the research conducted. This may result in the application of quantitative understandings of bias in order to judge aspects of recruitment, sampling, data collection and analysis in qualitative research papers. One of the most widely used appraisal tools is the Critical Appraisal Skills Programme (CASP)26 checklist, which, along with the JBI QARI (Joanna Briggs Institute Qualitative Assessment and Review Instrument),27 tends to mimic the quantitative approach to appraisal. The CASP qualitative tool follows that of other CASP appraisal tools for quantitative research designs developed in the 1990s. The similarities are therefore unsurprising given the status of qualitative research at that time.

Frameworks focus on the overarching concepts of quality in qualitative research, including transparency, reflexivity, dependability and transferability (see box 1).11–13 15 16 20 28 However, unless the reader is familiar with these concepts—their meaning and impact, and how to interpret them—they will have difficulty applying them when critically appraising a paper.

The main issue with currently available checklist and framework appraisal methods is that they take a broad-brush approach to ‘qualitative’ research as a whole, with few, if any, sufficiently differentiating between the different methodological approaches (eg, Grounded Theory, Interpretative Phenomenology, Discourse Analysis) or between different methods of data collection (interviewing, focus groups and observations). In this sense, it is akin to taking the entire field of ‘quantitative’ study designs and applying a single method or tool for their quality appraisal. Checklists therefore offer only a blunt and arguably ineffective tool for qualitative research, and potentially promote an incomplete understanding of good ‘quality’ in qualitative research. Likewise, current framework methods do not take into account how concepts differ in their application across the variety of qualitative approaches and, like checklists, do not differentiate between different qualitative methodologies.

On the need for specific appraisal tools

Current approaches to the appraisal of the methodological rigour of the differing types of qualitative research converge towards checklists or frameworks. More importantly, the current tools do not explicitly acknowledge the prejudices that may be present in the different types of qualitative research.

Box 1: Concepts of rigour or trustworthiness within qualitative research 31

Transferability: the extent to which the presented study allows readers to make connections between the study’s data and wider community settings, ie, transfer conceptual findings to other contexts.

Credibility: extent to which a research account is believable and appropriate, particularly in relation to the stories told by participants and the interpretations made by the researcher.

Reflexivity: the researchers’ continuous examination and explanation of how they have influenced the research project, from choosing a research question through to sampling, data collection, analysis and interpretation of data.

Transparency: making explicit the whole research process, from sampling strategy and data collection through to analysis. The rationale for decisions made is as important as the decisions themselves.
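One way to see how a framework differs from a checklist is to treat box 1 as a note-taking structure rather than a scoresheet: reflective judgements organised by concept, not yes/no items. The sketch below is purely illustrative and assumes only the four concepts defined above; the class and field names are hypothetical and belong to no published tool.

```python
from dataclasses import dataclass, field

# The four concepts from box 1; the record structure and names are hypothetical.
CONCEPTS = ("transferability", "credibility", "reflexivity", "transparency")

@dataclass
class FrameworkAppraisal:
    """One reviewer's notes on a paper, organised by rigour concept."""
    paper_id: str
    judgements: dict = field(default_factory=dict)  # concept -> free-text note

    def record(self, concept: str, note: str) -> None:
        if concept not in CONCEPTS:
            raise ValueError(f"unknown concept: {concept}")
        self.judgements[concept] = note

    def unaddressed(self) -> list:
        """Concepts the reviewer has not yet considered."""
        return [c for c in CONCEPTS if c not in self.judgements]

appraisal = FrameworkAppraisal("example-paper-2020")
appraisal.record("reflexivity",
                 "Authors state their clinical backgrounds and reflect on how "
                 "these shaped the interview guide.")
print(appraisal.unaddressed())
# ['transferability', 'credibility', 'transparency']
```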

However, we often talk about these concepts in general terms, and it might be helpful to give some explicit examples of how the ‘technical processes’ affect them, for example, partialities related to:

Selection: recruiting participants via gatekeepers, such as healthcare professionals or clinicians, who may select them based on whether they believe them to be ‘good’ participants for interviews/focus groups.

Data collection: a poor interview guide, with closed questions that encourage yes/no answers and/or leading questions.

Reflexivity and transparency: researchers may focus their analysis on preconceived ideas rather than grounding it in the data, and may fail to reflect on the impact of this in a transparent way.

The lack of tailored, method-specific appraisal tools has potentially contributed to the poor uptake and use of qualitative research for informing evidence-based decision making. To improve this situation, we propose the development of more robust quality appraisal tools that encompass not only the core design aspects of all qualitative research (sampling, data collection, analysis) but also the specific partialities that can arise with different methodological approaches. Such tools might draw on the strengths of current frameworks and checklists while providing users with sufficient understanding of concepts of rigour in relation to the different types of qualitative methods. We provide an outline of such tools in the third and final paper in this series.

As qualitative research becomes ever more embedded in health science research, and in order for that research to have a greater impact on healthcare decisions, we need to rethink critical appraisal and develop tools that allow differentiated evaluation of the myriad qualitative methodological approaches, rather than continuing to treat qualitative research as a single unified approach.

  • 26. CASP (Critical Appraisal Skills Programme). http://www.phru.nhs.uk/Pages/PHD/CASP.htm (date unknown).
  • 27. The Joanna Briggs Institute. JBI QARI Critical appraisal checklist for interpretive & critical research. Adelaide: The Joanna Briggs Institute, 2014.

Contributors VW and DN: conceived the idea for this article. VW: wrote the first draft. AMB and DN: contributed to the final draft. All authors approve the submitted article.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

Correction notice This article has been updated since its original publication to include a new reference (reference 1).

Canada Communicable Disease Report, 43(9), 7 September 2017


Scientific writing

Critical Appraisal Toolkit (CAT) for assessing multiple types of evidence

1 Memorial University School of Nursing, St. John’s, NL

2 Centre for Communicable Diseases and Infection Control, Public Health Agency of Canada, Ottawa, ON

Contributor: Jennifer Kruse, Public Health Agency of Canada – Conceptualization and project administration

Healthcare professionals are often expected to critically appraise research evidence in order to make recommendations for practice and policy development. Here we describe the Critical Appraisal Toolkit (CAT) currently used by the Public Health Agency of Canada. The CAT consists of algorithms to identify the type of study design, three separate tools (for appraisal of analytic studies, descriptive studies and literature reviews), additional tools to support the appraisal process, and guidance for summarizing evidence and drawing conclusions about a body of evidence. Although the toolkit was created to assist in the development of national guidelines related to infection prevention and control, clinicians, policy makers and students can use it to guide the appraisal of any health-related quantitative research. Participants in a pilot test completed a total of 101 critical appraisals and found the CAT user-friendly and helpful in the process of critical appraisal. Feedback from participants in the pilot test informed further revisions prior to its release. The CAT adds to the arsenal of available tools and can be especially useful when the best available evidence comes from non-clinical trials and/or studies with weak designs, where other tools may not be easily applied.

Introduction

Healthcare professionals, researchers and policy makers are often involved in the development of public health policies or guidelines. The most valuable guidelines provide a basis for evidence-based practice with recommendations informed by current, high quality, peer-reviewed scientific evidence. To develop such guidelines, the available evidence needs to be critically appraised so that recommendations are based on the "best" evidence. The ability to critically appraise research is, therefore, an essential skill for health professionals serving on policy or guideline development working groups.

Our experience with working groups developing infection prevention and control guidelines was that the review of relevant evidence went smoothly while the critical appraisal of the evidence posed multiple challenges. Three main issues were identified. First, although working group members had strong expertise in infection prevention and control or other areas relevant to the guideline topic, they had varying levels of expertise in research methods and critical appraisal. Second, the critical appraisal tools in use at that time focused largely on analytic studies (such as clinical trials), and lacked definitions of key terms and explanations of the criteria used in the studies. As a result, the use of these tools by working group members did not result in a consistent way of appraising analytic studies nor did the tools provide a means of assessing descriptive studies and literature reviews. Third, working group members wanted guidance on how to progress from assessing individual studies to summarizing and assessing a body of evidence.

To address these issues, a review of existing critical appraisal tools was conducted. We found that the majority of existing tools were design-specific, with considerable variability in intent, criteria appraised and construction of the tools. A systematic review reported that fewer than half of existing tools had guidelines for use of the tool and interpretation of the items (1). The well-known Grading of Recommendations Assessment, Development and Evaluation (GRADE) rating-of-evidence system and the Cochrane tools for assessing risk of bias were considered for use (2,3). At that time, the guidelines for using these tools were limited, and the tools were focused primarily on randomized controlled trials (RCTs) and non-randomized controlled trials. For feasibility and ethical reasons, clinical trials are rarely available for many common infection prevention and control issues (4,5). For example, there are no intervention studies assessing which practice restrictions, if any, should be placed on healthcare workers who are infected with a blood-borne pathogen. Working group members were concerned that if they used GRADE, all evidence would be rated as very low or as low quality or certainty, and recommendations based on this evidence might be interpreted as unconvincing, even if they were based on the best or only available evidence.

The team therefore decided to develop its own critical appraisal toolkit. A small working group was convened, led by an epidemiologist with expertise in research methodology and critical appraisal, with the goal of developing tools to critically appraise studies informing infection prevention and control recommendations. This article provides an overview of the Critical Appraisal Toolkit (CAT). The full document, entitled Infection Prevention and Control Guidelines Critical Appraisal Tool Kit, is available online (6).

Following a review of existing critical appraisal tools, studies informing infection prevention and control guidelines that were in development were reviewed to identify the types of studies that would need to be appraised using the CAT. A preliminary draft of the CAT was used by various guideline development working groups, and iterative revisions were made over a two-year period. A pilot test of the CAT was then conducted, which led to the final version (6).

The toolkit is set up to guide reviewers through three major phases in the critical appraisal of a body of evidence: appraisal of individual studies; summarizing the results of the appraisals; and appraisal of the body of evidence.

Tools for critically appraising individual studies

The first step in the critical appraisal of an individual study is to identify the study design; this can be surprisingly problematic, since many published research studies are complex. An algorithm was developed to help identify whether a study was an analytic study, a descriptive study or a literature review (see text box for definitions). It is critical to establish the design of the study first, as the criteria for assessment differ depending on the type of study.

Definitions of the types of studies that can be analyzed with the Critical Appraisal Toolkit*

Analytic study: A study designed to identify or measure the effects of specific exposures, interventions or risk factors. This design employs an appropriate comparison group to test epidemiologic hypotheses, attempting to identify associations or causal relationships.

Descriptive study: A study that describes characteristics of a condition in relation to particular factors or exposure of interest. This design often provides the first important clues about possible determinants of disease and is useful for the formulation of hypotheses that can be subsequently tested using an analytic design.

Literature review: A study that analyzes critical points of a published body of knowledge. This is done through summary, classification and comparison of prior studies. With the exception of meta-analyses, which statistically re-analyze pooled data from several studies, these studies are secondary sources and do not report any new or experimental work.

* Public Health Agency of Canada. Infection Prevention and Control Guidelines Critical Appraisal Tool Kit (6)

Separate algorithms were developed for analytic studies, descriptive studies and literature reviews to help reviewers identify specific designs within those categories. The algorithm below, for example, helps reviewers determine which study design was used within the analytic study category (Figure 1). It is based on key decision points such as the number of groups or allocation to group. The legends for the algorithms, and supportive tools such as the glossary, provide additional detail to further differentiate study designs, such as whether a cohort study was retrospective or prospective.

[Figure 1: Algorithm for identifying analytic study designs. Abbreviations: CBA, controlled before-after; ITS, interrupted time series; NRCT, non-randomized controlled trial; RCT, randomized controlled trial; UCBA, uncontrolled before-after]
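To make the decision-point idea concrete, here is a minimal sketch of the kind of branching Figure 1 describes, assuming simplified decision points (number of groups, allocation, before/after measurement). The toolkit's actual algorithm has more branches, and the function and parameter names here are hypothetical.

```python
def classify_analytic_design(num_groups: int,
                             randomised: bool,
                             investigator_allocated: bool,
                             before_after_measures: bool) -> str:
    """Simplified classification of an analytic design from key decision points."""
    if num_groups >= 2:
        if randomised:
            return "RCT"    # random allocation to comparison groups
        if investigator_allocated:
            return "NRCT"   # investigator-controlled, non-random allocation
        if before_after_measures:
            return "CBA"    # controlled before-after
        return "cohort or case-control"  # observational comparison groups
    # Single-group designs
    if before_after_measures:
        return "UCBA or ITS"  # uncontrolled before-after / interrupted time series
    return "not analytic: use the descriptive-study tool"

# Two groups with random allocation classifies as an RCT.
print(classify_analytic_design(2, randomised=True,
                               investigator_allocated=False,
                               before_after_measures=False))  # RCT
```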

Separate critical appraisal tools were developed for analytic studies, for descriptive studies and for literature reviews, with relevant criteria in each tool. For example, a summary of the items covered in the analytic study critical appraisal tool is shown in Table 1. This tool is used to appraise trials, observational studies and laboratory-based experiments. A supportive tool for assessing statistical analysis was also provided, describing common statistical tests used in epidemiologic studies.

The descriptive study critical appraisal tool assesses different aspects of sampling, data collection, statistical analysis, and ethical conduct. It is used to appraise cross-sectional studies, outbreak investigations, case series and case reports.

The literature review critical appraisal tool assesses the methodology, results and applicability of narrative reviews, systematic reviews and meta-analyses.

After appraisal of individual items in each type of study, each critical appraisal tool also contains instructions for drawing a conclusion about the overall quality of the evidence from a study, based on the per-item appraisal. Quality is rated as high, medium or low. While an RCT is a strong study design and a survey is a weak design, it is possible to have a poor quality RCT or a high quality survey. As a result, the quality of evidence from a study is distinguished from the strength of a study design when assessing the quality of the overall body of evidence. Definitions of some terms used to evaluate evidence in the CAT are shown in Table 2, and a sketch of the roll-up step follows the table note below.

* Considered a strong design if there are at least two control groups and two intervention groups; considered a moderate design if there is only one control and one intervention group
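The CAT's actual decision rules for moving from item appraisals to an overall study rating are in the full toolkit (6) and are not reproduced here; the following is a hypothetical illustration of that roll-up step, using an invented proportion rule.

```python
def overall_study_quality(item_ratings: list) -> str:
    """Roll per-criterion judgements ('met', 'partial', 'unmet') up to high/medium/low."""
    met = sum(r == "met" for r in item_ratings)
    proportion = met / len(item_ratings)
    # Thresholds invented for this sketch; the toolkit's rules may differ.
    if proportion >= 0.8:
        return "high"
    if proportion >= 0.5:
        return "medium"
    return "low"

print(overall_study_quality(["met", "met", "met", "partial", "unmet"]))  # medium
```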

Tools for summarizing the evidence

The second phase in the critical appraisal process involves summarizing the results of the critical appraisals of individual studies. Reviewers are instructed to complete a template evidence summary table with key details about each study and its ratings, listing studies in descending order of design strength. The table makes it easy to look across all studies that make up the body of evidence informing a recommendation, and allows comparison of participants, sample size, methods, interventions, magnitude and consistency of results, outcome measures and individual study quality as determined by the critical appraisal. These evidence summary tables are reviewed by the working group to determine the rating for the quality of the overall body of evidence and to facilitate the development of recommendations based on that evidence.
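As a rough sketch of the structure such a table implies, the snippet below models one row per study and sorts rows in descending order of design strength. The fields and the strength ranking are assumptions drawn from the description above, not the toolkit's actual template.

```python
from dataclasses import dataclass

# Hypothetical ranking of design strength, used only for ordering the table.
DESIGN_STRENGTH = {"RCT": 4, "NRCT": 3, "cohort": 2, "case-control": 2, "survey": 1}

@dataclass
class SummaryRow:
    citation: str
    design: str
    sample_size: int
    key_result: str
    quality: str  # high / medium / low, from the individual appraisal

def evidence_summary(rows):
    """Return studies in descending order of design strength, as the toolkit instructs."""
    return sorted(rows, key=lambda r: DESIGN_STRENGTH.get(r.design, 0), reverse=True)

table = evidence_summary([
    SummaryRow("Lee 2014", "survey", 120, "72% uptake", "high"),
    SummaryRow("Patel 2016", "cohort", 85, "RR 0.6", "medium"),
])
print([row.citation for row in table])  # ['Patel 2016', 'Lee 2014']
```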

Rating the quality of the overall body of evidence

The third phase in the critical appraisal process is rating the quality of the overall body of evidence. The overall rating depends on the five items summarized in Table 2: strength of study designs, quality of studies, number of studies, consistency of results and directness of the evidence. The various combinations of these factors lead to an overall rating of the strength of the body of evidence as strong, moderate or weak, as summarized in Table 3.
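Since Table 3's combinations are not reproduced here, the following sketch only illustrates the shape of the decision: five judgements combining into a single rating. The thresholds are invented for the example and are not the toolkit's rule.

```python
def rate_body_of_evidence(strong_designs: bool,
                          good_quality: bool,
                          several_studies: bool,
                          consistent_results: bool,
                          direct_evidence: bool) -> str:
    """Combine the five Table 2 factors into a strong/moderate/weak rating (illustrative)."""
    score = sum([strong_designs, good_quality, several_studies,
                 consistent_results, direct_evidence])
    if score == 5:
        return "strong"
    if score >= 3:
        return "moderate"
    return "weak"

# e.g. consistent, direct results from a few medium-quality descriptive studies
print(rate_body_of_evidence(False, True, False, True, True))  # moderate
```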

A unique aspect of this toolkit is that recommendations are not graded but are formulated based on the graded body of evidence. Actions are either recommended or not recommended; it is the strength of the available evidence that varies, not the strength of the recommendation. The toolkit does highlight, however, the need to re-evaluate new evidence as it becomes available, especially when recommendations are based on weak evidence.

Pilot test of the CAT

Of 34 individuals who indicated an interest in completing the pilot test, 17 completed it. Multiple peer-reviewed studies were selected, representing analytic studies, descriptive studies and literature reviews. The same studies were assigned to participants with similar content expertise. Each participant was asked to appraise three analytic studies, two descriptive studies and one literature review, using the appropriate critical appraisal tool as identified by the participant. For each study appraised, one critical appraisal tool and the associated tool-specific feedback form were completed. Each participant also completed a single general feedback form. A total of 101 of 102 critical appraisals were conducted and returned, with 81 tool-specific feedback forms and 14 general feedback forms returned.

The majority of participants (>85%) found that the flow of each tool was logical and the length acceptable, but noted that they still had difficulty identifying study designs (Table 4).

* Number of tool-specific forms returned for total number of critical appraisals conducted

The vast majority of the feedback forms (86–93%) indicated that the different tools facilitated the critical appraisal process. In the assessment of consistency, however, only four of the ten analytic studies appraised (40%) had complete agreement on the rating of overall study quality; the other six studies had differences, noted as mismatches. Four of the six studies with mismatches were observational studies. The differences were minor: none of the mismatches included a study that was rated as both high and low quality by different participants. Based on the comments provided by participants, most mismatches could likely have been resolved through discussion with peers. Mismatched ratings were not an issue for the descriptive studies and literature reviews. In summary, the pilot test provided useful feedback on different aspects of the toolkit. Revisions were made to address the issues identified and thus strengthen the CAT.

The Infection Prevention and Control Guidelines Critical Appraisal Tool Kit was developed in response to the needs of infection control professionals reviewing literature that generally did not include clinical trial evidence. The toolkit was designed to meet the identified needs for training in critical appraisal, with extensive instructions and dictionaries, and tools applicable to all three types of studies (analytic studies, descriptive studies and literature reviews). The toolkit provided a method to progress from assessing individual studies to summarizing and assessing the strength of a body of evidence and assigning a grade. Recommendations are then developed based on the graded body of evidence. This grading system has been used by the Public Health Agency of Canada in the development of recent infection prevention and control guidelines (5,7). The toolkit has also been used for conducting critical appraisal for other purposes, such as addressing a practice problem and serving as an educational tool (8,9).

The CAT has a number of strengths. It is applicable to a wide variety of study designs. The criteria that are assessed allow for a comprehensive appraisal of individual studies and facilitate critical appraisal of a body of evidence. The dictionaries provide reviewers with a common language and criteria for discussion and decision making.

The CAT also has a number of limitations. The tools do not address all study designs (e.g., modelling studies), and the toolkit provides limited information on types of bias. Like the majority of critical appraisal tools (10,11), these tools have not been tested for validity and reliability. Nonetheless, the criteria assessed are those indicated as important in textbooks and in the literature (12,13). The grading scale used in this toolkit does not allow for comparison of evidence grading across organizations or internationally, but most reviewers do not need such comparability. It is more important that strong evidence be rated higher than weak evidence, and that reviewers provide rationales for their conclusions; the toolkit enables them to do so.

Overall, the pilot test reinforced that the CAT can help with critical appraisal training and can increase comfort levels for those with limited experience. Further evaluation of the toolkit could assess the effectiveness of revisions made and test its validity and reliability.

A frequent question regarding this toolkit is how it differs from GRADE, as both distinguish stronger evidence from weaker evidence and use similar concepts and terminology. The main differences between GRADE and the CAT are presented in Table 5. Key differences include the focus of the CAT on rating the quality of individual studies, and its detailed instructions and supporting tools that assist those with limited experience in critical appraisal. When clinical trials and well-controlled intervention studies are or become available, GRADE and related tools from Cochrane are more appropriate (2,3). When descriptive studies are all that is available, the CAT is very useful.

Abbreviation: GRADE, Grading of Recommendations Assessment, Development and Evaluation

The Infection Prevention and Control Guidelines Critical Appraisal Tool Kit was developed in response to needs for training in critical appraisal, for assessing evidence from a wide variety of research designs, and for a method to progress from assessing individual studies to characterizing the strength of a body of evidence. Clinician researchers, policy makers and students can use these tools for the critical appraisal of studies, whether they are developing policies, seeking a potential solution to a practice problem or critiquing an article for a journal club. The toolkit adds to the arsenal of critical appraisal tools currently available and is especially useful in assessing evidence from a wide variety of research designs.

Authors’ Statement

DM – Conceptualization, methodology, investigation, data collection and curation and writing – original draft, review and editing

TO – Conceptualization, methodology, investigation, data collection and curation and writing – original draft, review and editing

KD – Conceptualization, review and editing, supervision and project administration

Acknowledgements

We thank the Infection Prevention and Control Expert Working Group of the Public Health Agency of Canada for feedback on the development of the toolkit, Lisa Marie Wasmund for data entry of the pilot test results, Katherine Defalco for review of data and cross-editing of content and technical terminology for the French version of the toolkit, Laurie O’Neil for review and feedback on early versions of the toolkit, Frédéric Bergeron for technical support with the algorithms in the toolkit and the Centre for Communicable Diseases and Infection Control of the Public Health Agency of Canada for review, feedback and ongoing use of the toolkit. We thank Dr. Patricia Huston, Canada Communicable Disease Report Editor-in-Chief, for a thorough review and constructive feedback on the draft manuscript.

Conflict of interest: None.

Funding: This work was supported by the Public Health Agency of Canada.

How to appraise qualitative research

Evidence-Based Nursing, Volume 22, Issue 1

  • Calvin Moorley 1,
  • Xabi Cathala 2
  • 1 Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London, UK
  • 2 Institute of Vocational Learning, School of Health and Social Care, London South Bank University, London, UK
  • Correspondence to Dr Calvin Moorley, Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London SE1 0AA, UK; [email protected]

https://doi.org/10.1136/ebnurs-2018-103044


Introduction

In order to make a decision about implementing evidence into practice, nurses need to be able to critically appraise research. Nurses also have a professional responsibility to maintain up-to-date practice. 1 This paper provides a guide on how to critically appraise a qualitative research paper.

What is qualitative research?


Useful terms

Some of the qualitative approaches used in nursing research include grounded theory, phenomenology, ethnography, case study (which can lend itself to mixed methods) and narrative analysis. The data collection methods used in qualitative research include in-depth interviews, focus groups, observations and stories in the form of diaries or other documents. 3

Authenticity

Title, keywords, authors and abstract

In a previous paper, we discussed how the title, keywords, authors’ positions and affiliations and abstract can influence the authenticity and readability of quantitative research papers; 4 the same applies to qualitative research. However, other areas such as the purpose of the study and the research question, theoretical and conceptual frameworks, sampling and methodology also need consideration when appraising a qualitative paper.

Purpose and question

The topic under investigation in the study should be guided by a clear research question or a statement of the problem or purpose. An example of a statement can be seen in table 2. Unlike most quantitative studies, qualitative research does not seek to test a hypothesis. The research statement should be specific to the problem and should be reflected in the design. This will inform the reader of what will be studied and justify the purpose of the study. 5

[Table 2: Example of research question and problem statement]

An appropriate literature review should have been conducted and summarised in the paper. It should be linked to the subject, using up-to-date, peer-reviewed primary research. We suggest papers with an age limit of 5–8 years, excluding original work. The literature review should give the reader a balanced view of what has been written on the subject. It is worth noting that for some qualitative approaches the literature review is conducted after the data collection to minimise bias, for example, in grounded theory studies. In phenomenological studies, the review sometimes occurs after the data analysis. If this is the case, the author(s) should make this clear.

Theoretical and conceptual frameworks

Most authors use the terms theoretical and conceptual frameworks interchangeably. Usually, a theoretical framework is used when research is underpinned by one theory that aims to help predict, explain and understand the topic investigated. A theoretical framework is the blueprint that can hold or scaffold a study’s theory. Conceptual frameworks are based on concepts from various theories and findings which help to guide the research. 6 It is the researcher’s understanding of how different variables are connected in the study, for example, the literature review and research question. Theoretical and conceptual frameworks connect the researcher to existing knowledge and these are used in a study to help to explain and understand what is being investigated. A framework is the design or map for a study. When you are appraising a qualitative paper, you should be able to see how the framework helped with (1) providing a rationale and (2) the development of research questions or statements. 7 You should be able to identify how the framework, research question, purpose and literature review all complement each other.

Sampling

There remains an ongoing debate about what an appropriate sample size is for a qualitative study. We hold the view that qualitative research does not seek statistical power, and a sample size can be as small as one (eg, a single case study) or any number above one (eg, a grounded theory study), provided that it is appropriate and answers the research problem. Shorten and Moorley 8 explain that three main types of sampling exist in qualitative research: (1) convenience, (2) judgement or (3) theoretical. In the paper, the sample size should be stated, and the rationale for how it was decided should be clear.

Methodology

Qualitative research encompasses a variety of methods and designs. Based on the chosen method or design, the findings may be reported in a variety of formats. Table 3 lists the main qualitative approaches used in nursing, with a short description of each.

[Table 3: Different qualitative approaches]

The authors should make it clear why they are using a qualitative methodology and the chosen theoretical approach or framework. The paper should provide details of participant inclusion and exclusion criteria as well as recruitment sites where the sample was drawn from, for example, urban, rural, hospital inpatient or community. Methods of data collection should be identified and be appropriate for the research statement/question.

Data collection

Overall, there should be a clear trail of data collection. The paper should explain when and how the study was advertised and participants were recruited and consented; it should also state when and where the data collection took place. Data collection methods include interviews, which can be structured or unstructured, and in-depth one-to-one or group. 9 Group interviews are often referred to as focus group interviews; these are often voice recorded and transcribed verbatim. It should be clear whether they were conducted face to face, by telephone or via any other type of media. Table 3 includes some data collection methods; other methods not included in table 3 are observation, diaries, video recording, photographs, and documents or objects (artefacts). The schedule of questions for interview, or the protocol for non-interview data collection, should be provided, available or discussed in the paper. Some authors may use the phrase ‘recruitment ended once data saturation was reached’. This simply means that the researchers were not gaining any new information at subsequent interviews, so they stopped data collection.

The data collection section should include details of the ethical approval gained to carry out the study, for example, the strategies used to gain participants’ consent to take part. The authors should make clear whether any ethical issues arose and how they were resolved or managed.

The approach to data analysis (see ref 10) needs to be clearly articulated: for example, was there more than one person responsible for analysing the data? How were any discrepancies in findings resolved? An audit trail of how the data were analysed, including their management, should be documented. If member checking was used, this should also be reported. This level of transparency contributes to the trustworthiness and credibility of qualitative research. Some researchers provide a diagram of how they approached data analysis to demonstrate the rigour applied (figure 1).

[Figure 1: Example of data analysis diagram]

Validity and rigour

The study’s validity relies on the statement of the question/problem, the theoretical/conceptual framework, the design, the method, the sample and the data analysis. When critiquing qualitative research, these elements will help you to determine the study’s reliability. Noble and Smith 11 explain that validity is the integrity of the data and methods applied, and that findings should accurately reflect the data. Rigour should acknowledge the researcher’s role and involvement, as well as any biases. Essentially, it should focus on truth value, consistency, neutrality and applicability. 11 The authors should discuss whether they used triangulation (see table 2) to develop the best possible understanding of the phenomena.

Themes and interpretations and implications for practice

In qualitative research no hypothesis is tested; therefore, there is no specific result. Instead, qualitative findings are often reported as themes based on the data analysed. The findings should be clearly linked to, and reflect, the data. This contributes to the soundness of the research. 11 The researchers should make it clear how they arrived at their interpretations of the findings. The theoretical or conceptual framework used should be discussed, aiding the rigour of the study. The implications of the findings need to be made clear and, where appropriate, their applicability or transferability should be identified. 12

Discussions, recommendations and conclusions

The discussion should relate to the research findings as the authors seek to make connections with the literature reviewed earlier in the paper to contextualise their work. A strong discussion will connect the research aims and objectives to the findings and will be supported with literature if possible. A paper that seeks to influence nursing practice will have a recommendations section for clinical practice and research. A good conclusion will focus on the findings and discussion of the phenomena investigated.

Qualitative research has much to offer nursing and healthcare in terms of understanding patients’ experience of illness, treatment and recovery; it can also help us better understand areas of healthcare practice. However, it must be conducted with rigour, and this paper provides some guidance for appraising such research. To help you critique a qualitative research paper, some guidance is provided in table 4.

[Table 4: Some guidance for critiquing qualitative research]
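As table 4 itself is not reproduced here, the sketch below distils the prompts discussed in the sections above into a simple checklist a reader might work through at a journal club. The wording of the questions is a paraphrase, not the article's table, and the names are hypothetical.

```python
# Prompts paraphrased from the sections above; table 4's exact wording is
# not reproduced here, so treat these as illustrative.
PROMPTS = [
    "Is there a clear research question or problem statement?",
    "Is the literature review balanced, current and linked to the subject?",
    "Do the framework, question, purpose and literature review complement each other?",
    "Are the sample size, sampling type and rationale stated?",
    "Is there a clear trail of data collection, consent and ethical approval?",
    "Is the data analysis approach transparent (audit trail, member checking)?",
    "Are themes clearly grounded in the data, with implications for practice?",
]

def journal_club_notes(answers):
    """Turn yes/no answers to the prompts into points for discussion."""
    unmet = [p for p, ok in zip(PROMPTS, answers) if not ok]
    return unmet or ["No concerns noted."]

for point in journal_club_notes([True, True, False, True, True, False, True]):
    print("-", point)
```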

  • Nursing and Midwifery Council. The code: Standards of conduct, performance and ethics for nurses and midwives. 2015. https://www.nmc.org.uk/globalassets/sitedocuments/nmc-publications/nmc-code.pdf (accessed 21 Aug 2018).

Patient consent for publication Not required.

Competing interests None declared.

Provenance and peer review Commissioned; internally peer reviewed.

Read the full text or download the PDF:

IMAGES

  1. Table 3 from Step-by-step guide to critiquing research. Part 2

    tools for critiquing research articles

  2. How to Critique a Research Article

    tools for critiquing research articles

  3. (PDF) How to critique qualitative research articles

    tools for critiquing research articles

  4. Critiquing a research article

    tools for critiquing research articles

  5. 10 Easy Steps to Master Writing a Research Article Critique

    tools for critiquing research articles

  6. Critiquing A Research Article

    tools for critiquing research articles

VIDEO

  1. Critiquing Research Designs

  2. RESEARCH CRITIQUE Qualitative Research

  3. Critiquing Research Paper

  4. ||guidelines for critiquing research reports||important points||

  5. RESEARCH CRITIQUE: Quantitative Study

  6. Critique of a Sample Research Prospectus Part V

COMMENTS

  1. Critical Appraisal Tools and Reporting Guidelines

    More. Critical appraisal tools and reporting guidelines are the two most important instruments available to researchers and practitioners involved in research, evidence-based practice, and policymaking. Each of these instruments has unique characteristics, and both instruments play an essential role in evidence-based practice and decision-making.

  2. Critiquing Research Evidence for Use in Practice: Revisited

    APPRAISING THE RESEARCH EVIDENCE. Some aspects of appraising a research article are the same whether the study is quantitative, qualitative, or mixed methods (Dale, 2005, Gray and Grove, 2017).Caldwell, Henshaw, and Taylor (2011) described the development of a framework for critiquing health research, addressing both quantitative and qualitative research with one list of questions.

  3. Critical Appraisal Tools

    Now, that you have found articles based on your research question you can appraise the quality of those articles. These are resources you can use to appraise different study designs. Critical Appraisal Tools. Centre for Evidence Based Medicine (Oxford) Evidence-Based Practice (EBP) checklists. University of Glasgow

  4. Critical appraisal of published research papers

    INTRODUCTION. Critical appraisal of a research paper is defined as "The process of carefully and systematically examining research to judge its trustworthiness, value and relevance in a particular context."[] Since scientific literature is rapidly expanding with more than 12,000 articles being added to the MEDLINE database per week,[] critical appraisal is very important to distinguish ...

  5. Full article: Critical appraisal

    More than 100 critical appraisal tools currently exist for qualitative research. Tools fall into two categories: checklists and holistic frameworks encouraging reflection (Majid & Vanstone, Citation 2018; Santiago-Delefosse et al., Citation 2016; Williams et al., Citation 2020). Both checklists and holistic frameworks are subject to criticisms.

  6. Critiquing Research Evidence for Use in Practice: Revisited

    There are numerous ways to appraise research and practice guidelines that are designed to inform clinical practice with the overall goals of improving patient outcomes. This article presents existing tools to appraise the research evidence in addition to a guide for providers on critical appraisal of a research study.

  7. Writing, reading, and critiquing reviews

    All reviews require authors to be able accurately summarize, synthesize, interpret and even critique the research literature. 1, 2 In fact, for this editorial we have had to review the literature on reviews. Knowledge and evidence are expanding in our field of health professions education at an ever increasing rate and so to help keep pace ...

  8. Systematic mapping of existing tools to appraise methodological

    Previous reviews. While at least five methodological reviews of critical appraisal tools for qualitative research have been published since 2003, we assessed that these did not adequately address the aims of this project [22,23,24, 32, 33].Most of the existing reviews focused only on critical appraisal tools in the health sciences [22,23,24, 32] .One review focused on reporting standards for ...

  9. LibGuides: Reading and Critiquing Research: Home

    Reading and critiquing scholarly research articles is a skill developed with time and practice. As you read more within your discipline you'll likely discover patterns in the structure of the journal articles. ... Eight critical appraisal tools to be used when reading research. Tools for Systematic Reviews, Randomised Controlled Trials, Cohort ...

  10. Using quality assessment tools to critically appraise ageing research

    The critical appraisal of research studies can seem daunting, but tools are available to make the process easier for the non-specialist. Understanding the language and process of quality assessment is essential when considering or conducting research, and is also valuable for all clinicians who use published research to inform their clinical ...

  11. The Ultimate Guide to Critiquing Research Articles

    When it comes to critiquing research articles, having the right tools and resources can make all the difference. Online platforms, software tools, and AI-powered platforms like Avidnote can streamline the critique process, saving you time and improving the quality of your analysis. ... Critiquing research articles is a crucial skill for ...

  12. CASP Checklists

    Critical Appraisal Checklists. We offer a number of free downloadable checklists to help you more easily and accurately perform critical appraisal across a number of different study types. The CASP checklists are easy to understand but in case you need any further guidance on how they are structured, take a look at our guide on how to use our ...

  13. Critiquing Research Evidence for Use in Practice: Revisited

    Stevens, 2019. suggested that critical appraisal of evidence is one of the most valuable skills that a clinician can have in today's health care environment. This article is an update to an original and popular article published in the Journal of Pediatric Health Care entitled "Critiquing Research for Use in Practice" (.

  14. Critical Appraisal Tools & Resources

    Critical Appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context. It is an essential skill for evidence-based medicine because it allows people to find and use research evidence reliably and efficiently. Learn more about what critical appraisal ...

  15. Optimising the value of the critical appraisal skills programme (CASP

    The CASP tool is a generic tool for appraising the strengths and limitations of any qualitative research methodology. 30 The tool has ten questions that each focus on a different methodological aspect of a qualitative study (Box 1). The questions posed by the tool ask the researcher to consider whether the research methods were appropriate and ...

  16. Critical Appraisal tools

    This section contains useful tools and downloads for the critical appraisal of different types of medical evidence. Example appraisal sheets are provided together with several helpful examples. Critical appraisal worksheets to help you appraise the reliability, importance and applicability of clinical evidence.

  17. Critical Appraisal of Clinical Research

    Critical appraisal is the course of action for watchfully and systematically examining research to assess its reliability, value and relevance in order to direct professionals in their vital clinical decision making [ 1 ]. Critical appraisal is essential to: Continuing Professional Development (CPD).

  18. Making sense of research: A guide for critiquing a paper

    Learning how to critique research articles is one of the fundamental skills of scholarship in any discipline. The range, quantity and quality of publications available today via print, electronic and Internet databases means it has become essential to equip students and practitioners with the prerequisites to judge the integrity and usefulness of published research.

  19. Critical appraisal of qualitative research

    Qualitative evidence allows researchers to analyse human experience and provides useful exploratory insights into experiential matters and meaning, often explaining the 'how' and 'why'. As we have argued previously1, qualitative research has an important place within evidence-based healthcare, contributing to among other things policy on patient safety,2 prescribing,3 4 and ...

  20. PDF Step'by-step guide to critiquing research. Part 1: quantitative research

    There are numerous tools available to help both novice and advanced reviewers to critique research studies (Tanner, 2003). These tools generally ask questions that can help the reviewer to determine the degree to which the steps in the research process were followed. However, some steps are more important than others and very few tools ...

  21. Frameworks for critiquing research articles

    Frameworks for critiquing research articles. Download electronic versions of tables 7.2 and 7.3 in the text to print off and help you when critiquing quantitative and qualitative research articles. Find out more, read a sample chapter, or order an inspection copy if you are a lecturer, from the.

  22. Scientific writing: Critical Appraisal Toolkit (CAT) for assessing

    Abstract. Healthcare professionals are often expected to critically appraise research evidence in order to make recommendations for practice and policy development. Here we describe the Critical Appraisal Toolkit (CAT) currently used by the Public Health Agency of Canada. The CAT consists of: algorithms to identify the type of study design ...

  23. How to appraise qualitative research

    Useful terms. Some of the qualitative approaches used in nursing research include grounded theory, phenomenology, ethnography, case study (can lend itself to mixed methods) and narrative analysis. The data collection methods used in qualitative research include in depth interviews, focus groups, observations and stories in the form of diaries ...