Critical Analysis – Types, Examples and Writing Guide

Critical Analysis
Definition:

Critical analysis is a process of examining a piece of work or an idea in a systematic, objective, and analytical way. It involves breaking down complex ideas, concepts, or arguments into smaller, more manageable parts to understand them better.

Types of Critical Analysis

Types of Critical Analysis are as follows:

Literary Analysis

This type of analysis focuses on analyzing and interpreting works of literature, such as novels, poetry, and plays. The analysis involves examining the literary devices used in the work, such as symbolism, imagery, and metaphor, and how they contribute to the overall meaning of the work.

Film Analysis

This type of analysis involves examining and interpreting films, including their themes, cinematography, editing, and sound. Film analysis can also include evaluating the director’s style and how it contributes to the overall message of the film.

Art Analysis

This type of analysis involves examining and interpreting works of art, such as paintings, sculptures, and installations. The analysis involves examining the elements of the artwork, such as color, composition, and technique, and how they contribute to the overall meaning of the work.

Cultural Analysis

This type of analysis involves examining and interpreting cultural artifacts, such as advertisements, popular music, and social media posts. The analysis involves examining the cultural context of the artifact and how it reflects and shapes cultural values, beliefs, and norms.

Historical Analysis

This type of analysis involves examining and interpreting historical documents, such as diaries, letters, and government records. The analysis involves examining the historical context of the document and how it reflects the social, political, and cultural attitudes of the time.

Philosophical Analysis

This type of analysis involves examining and interpreting philosophical texts and ideas, such as the works of philosophers and their arguments. The analysis involves evaluating the logical consistency of the arguments and assessing the validity and soundness of the conclusions.

Scientific Analysis

This type of analysis involves examining and interpreting scientific research studies and their findings. The analysis involves evaluating the methods used in the study, the data collected, and the conclusions drawn, and assessing their reliability and validity.

Critical Discourse Analysis

This type of analysis involves examining and interpreting language use in social and political contexts. The analysis involves evaluating the power dynamics and social relationships conveyed through language use and how they shape discourse and social reality.

Comparative Analysis

This type of analysis involves examining and interpreting multiple texts or works of art and comparing them to each other. The analysis involves evaluating the similarities and differences between the texts and how they contribute to understanding the themes and meanings conveyed.

Critical Analysis Format

Critical Analysis Format is as follows:

I. Introduction

  • Provide a brief overview of the text, object, or event being analyzed
  • Explain the purpose of the analysis and its significance
  • Provide background information on the context and relevant historical or cultural factors

II. Description

  • Provide a detailed description of the text, object, or event being analyzed
  • Identify key themes, ideas, and arguments presented
  • Describe the author or creator’s style, tone, and use of language or visual elements

III. Analysis

  • Analyze the text, object, or event using critical thinking skills
  • Identify the main strengths and weaknesses of the argument or presentation
  • Evaluate the reliability and validity of the evidence presented
  • Assess any assumptions or biases that may be present in the text, object, or event
  • Consider the implications of the argument or presentation for different audiences and contexts

IV. Evaluation

  • Provide an overall evaluation of the text, object, or event based on the analysis
  • Assess the effectiveness of the argument or presentation in achieving its intended purpose
  • Identify any limitations or gaps in the argument or presentation
  • Consider any alternative viewpoints or interpretations that could be presented
  • Summarize the main points of the analysis and evaluation
  • Reiterate the significance of the text, object, or event and its relevance to broader issues or debates
  • Provide any recommendations for further research or future developments in the field.

V. Example

  • Provide an example or two to support your analysis and evaluation
  • Use quotes or specific details from the text, object, or event to support your claims
  • Analyze the example(s) using critical thinking skills and explain how they relate to your overall argument

VI. Conclusion

  • Reiterate your thesis statement and summarize your main points
  • Provide a final evaluation of the text, object, or event based on your analysis
  • Offer recommendations for future research or further developments in the field
  • End with a thought-provoking statement or question that encourages the reader to think more deeply about the topic

How to Write Critical Analysis

Writing a critical analysis involves evaluating and interpreting a text, such as a book, article, or film, and expressing your opinion about its quality and significance. Here are some steps you can follow to write a critical analysis:

  • Read and re-read the text: Before you begin writing, make sure you have a good understanding of the text. Read it several times and take notes on the key points, themes, and arguments.
  • Identify the author’s purpose and audience: Consider why the author wrote the text and who the intended audience is. This can help you evaluate whether the author achieved their goals and whether the text is effective in reaching its audience.
  • Analyze the structure and style: Look at the organization of the text and the author’s writing style. Consider how these elements contribute to the overall meaning of the text.
  • Evaluate the content: Analyze the author’s arguments, evidence, and conclusions. Consider whether they are logical, convincing, and supported by the evidence presented in the text.
  • Consider the context: Think about the historical, cultural, and social context in which the text was written. This can help you understand the author’s perspective and the significance of the text.
  • Develop your thesis statement: Based on your analysis, develop a clear and concise thesis statement that summarizes your overall evaluation of the text.
  • Support your thesis: Use evidence from the text to support your thesis statement. This can include direct quotes, paraphrases, and examples from the text.
  • Write the introduction, body, and conclusion: Organize your analysis into an introduction that provides context and presents your thesis, a body that presents your evidence and analysis, and a conclusion that summarizes your main points and restates your thesis.
  • Revise and edit: After you have written your analysis, revise and edit it to ensure that your writing is clear, concise, and well-organized. Check for spelling and grammar errors, and make sure that your analysis is logically sound and supported by evidence.

When to Write Critical Analysis

You may want to write a critical analysis in the following situations:

  • Academic Assignments: If you are a student, you may be assigned to write a critical analysis as a part of your coursework. This could include analyzing a piece of literature, a historical event, or a scientific paper.
  • Journalism and Media: As a journalist or media person, you may need to write a critical analysis of current events, political speeches, or media coverage.
  • Personal Interest: If you are interested in a particular topic, you may want to write a critical analysis to gain a deeper understanding of it. For example, you may want to analyze the themes and motifs in a novel or film that you enjoyed.
  • Professional Development: Professionals such as writers, scholars, and researchers often write critical analyses to gain insights into their field of study or work.

Critical Analysis Example

An example of a critical analysis could be as follows:

Research Topic:

The Impact of Online Learning on Student Performance

Introduction:

The introduction of the research topic is clear and provides an overview of the issue. However, it could benefit from providing more background information on the prevalence of online learning and its potential impact on student performance.

Literature Review:

The literature review is comprehensive and well-structured. It covers a broad range of studies that have examined the relationship between online learning and student performance. However, it could benefit from including more recent studies and providing a more critical analysis of the existing literature.

Research Methods:

The research methods are clearly described and appropriate for the research question. The study uses a quasi-experimental design to compare the performance of students who took an online course with those who took the same course in a traditional classroom setting. However, the study may benefit from using a randomized controlled trial design to reduce potential confounding factors.

Results:

The results are presented in a clear and concise manner. The study finds that students who took the online course performed similarly to those who took the traditional course. However, the study only measures performance on one course and may not be generalizable to other courses or contexts.

Discussion:

The discussion section provides a thorough analysis of the study’s findings. The authors acknowledge the limitations of the study and provide suggestions for future research. However, they could benefit from discussing potential mechanisms underlying the relationship between online learning and student performance.

Conclusion:

The conclusion summarizes the main findings of the study and provides some implications for future research and practice. However, it could benefit from providing more specific recommendations for implementing online learning programs in educational settings.

Purpose of Critical Analysis

There are several purposes of critical analysis, including:

  • To identify and evaluate arguments: Critical analysis helps to identify the main arguments in a piece of writing or speech and evaluate their strengths and weaknesses. This enables the reader to form their own opinion and make informed decisions.
  • To assess evidence: Critical analysis involves examining the evidence presented in a text or speech and evaluating its quality and relevance to the argument. This helps to determine the credibility of the claims being made.
  • To recognize biases and assumptions: Critical analysis helps to identify any biases or assumptions that may be present in the argument, and evaluate how these affect the credibility of the argument.
  • To develop critical thinking skills: Critical analysis helps to develop the ability to think critically, evaluate information objectively, and make reasoned judgments based on evidence.
  • To improve communication skills: Critical analysis involves carefully reading and listening to information, evaluating it, and expressing one’s own opinion in a clear and concise manner. This helps to improve communication skills and the ability to express ideas effectively.

Importance of Critical Analysis

Here are some specific reasons why critical analysis is important:

  • Helps to identify biases: Critical analysis helps individuals to recognize their own biases and assumptions, as well as the biases of others. By being aware of biases, individuals can better evaluate the credibility and reliability of information.
  • Enhances problem-solving skills: Critical analysis encourages individuals to question assumptions and consider multiple perspectives, which can lead to creative problem-solving and innovation.
  • Promotes better decision-making: By carefully evaluating evidence and arguments, critical analysis can help individuals make more informed and effective decisions.
  • Facilitates understanding: Critical analysis helps individuals to understand complex issues and ideas by breaking them down into smaller parts and evaluating them separately.
  • Fosters intellectual growth: Engaging in critical analysis challenges individuals to think deeply and critically, which can lead to intellectual growth and development.

Advantages of Critical Analysis

Some advantages of critical analysis include:

  • Improved decision-making: Critical analysis helps individuals make informed decisions by evaluating all available information and considering various perspectives.
  • Enhanced problem-solving skills: Critical analysis requires individuals to identify and analyze the root cause of a problem, which can help develop effective solutions.
  • Increased creativity: Critical analysis encourages individuals to think outside the box and consider alternative solutions to problems, which can lead to more creative and innovative ideas.
  • Improved communication: Critical analysis helps individuals communicate their ideas and opinions more effectively by providing logical and coherent arguments.
  • Reduced bias: Critical analysis requires individuals to evaluate information objectively, which can help reduce personal biases and subjective opinions.
  • Better understanding of complex issues: Critical analysis helps individuals to understand complex issues by breaking them down into smaller parts, examining each part and understanding how they fit together.
  • Greater self-awareness: Critical analysis helps individuals to recognize their own biases, assumptions, and limitations, which can lead to personal growth and development.


33 Critical Analysis Examples


Critical analysis refers to the ability to examine something in detail in preparation for making an evaluation or judgment.

It will involve exploring underlying assumptions, theories, arguments, evidence, logic, biases, contextual factors, and so forth, that could help shed more light on the topic.

In essay writing, a critical analysis essay will involve using a range of analytical skills to explore a topic, such as:

  • Evaluating sources
  • Exploring strengths and weaknesses
  • Exploring pros and cons
  • Questioning and challenging ideas
  • Comparing and contrasting ideas


Critical Analysis Examples

1. Exploring Strengths and Weaknesses

Perhaps the first and most straightforward method of critical analysis is to create a simple strengths-vs-weaknesses comparison.

Most things have both strengths and weaknesses – you could even do this for yourself! What are your strengths? Maybe you’re kind or good at sports or good with children. What are your weaknesses? Maybe you struggle with essay writing or concentration.

If you can analyze your own strengths and weaknesses, then you understand the concept. What might be the strengths and weaknesses of the idea you’re hoping to critically analyze?

Strengths and weaknesses could include:

  • Does it seem highly ethical (strength) or could it be more ethical (weakness)?
  • Is it clearly explained (strength) or complex and lacking logical structure (weakness)?
  • Does it seem balanced (strength) or biased (weakness)?

You may consider using a SWOT analysis for this step. I’ve provided a SWOT analysis guide here.
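If it helps to work systematically, SWOT notes can even be kept in a simple data structure while you read. Here’s a minimal sketch in Python – the dictionary keys are the four standard SWOT categories, and the entries are hypothetical notes about an imagined study, not part of any prescribed method:

```python
# Hypothetical SWOT notes for a study being critically analyzed
swot = {
    "Strengths":     ["clearly explained method", "balanced discussion of evidence"],
    "Weaknesses":    ["self-report measures only", "no control group"],
    "Opportunities": ["replication with a broader sample", "longitudinal follow-up"],
    "Threats":       ["possible funding bias", "findings may not generalize"],
}

# Print the notes as a checklist to work from while writing
for category, points in swot.items():
    print(category)
    for point in points:
        print(f"  - {point}")
```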

2. Evaluating Sources

Evaluation of sources refers to looking at whether a source is reliable or unreliable.

This is a fundamental media literacy skill.

Steps involved in evaluating sources include asking questions like:

  • Who is the author and are they trustworthy?
  • Is this written by an expert?
  • Is this sufficiently reviewed by an expert?
  • Is this published in a trustworthy publication?
  • Are the arguments sound or common sense?

For more on this topic, I’d recommend my detailed guide on digital literacy.

3. Identifying Similarities

Identifying similarities encompasses the act of drawing parallels between elements, concepts, or issues.

In critical analysis, it’s common to compare a given article, idea, or theory to another one. In this way, you can identify areas in which they are alike.

Determining similarities can be a challenge, but it’s an intellectual exercise that fosters a greater understanding of the aspects you’re studying. This step often calls for careful reading and note-taking to highlight matching information, points of view, arguments, or even suggested solutions.

Similarities might be found in:

  • The key themes or topics discussed
  • The theories or principles used
  • The demographic the work is written for or about
  • The solutions or recommendations proposed

Remember, the intention of identifying similarities is not to prove one right or wrong. Rather, it sets the foundation for understanding the larger context of your analysis, anchoring your arguments in a broader spectrum of ideas.

Your critical analysis strengthens when you can see the patterns and connections across different works or topics. It fosters a more comprehensive, insightful perspective. And importantly, it is a stepping stone in your analysis journey towards evaluating differences, which is equally imperative and insightful in any analysis.

4. Identifying Differences

Identifying differences involves pinpointing the unique aspects, viewpoints or solutions introduced by the text you’re analyzing. How does it stand out as different from other texts?

To do this, you’ll need to compare this text to another text.

Differences can be revealed in:

  • The potential applications of each idea
  • The time, context, or place in which the elements were conceived or implemented
  • The available evidence each element uses to support its ideas
  • The perspectives of authors
  • The conclusions reached

Identifying differences helps to reveal the multiplicity of perspectives and approaches on a given topic. Doing so provides a more in-depth, nuanced understanding of the field or issue you’re exploring.

This deeper understanding can greatly enhance your overall critique of the text you’re looking at. As such, learning to identify both similarities and differences is an essential skill for effective critical analysis.

My favorite tool for identifying similarities and differences is a Venn Diagram:


To use a Venn diagram, title each circle for one of the two texts. Then place similarities in the overlapping area of the circles, and the unique characteristics (differences) of each text in the non-overlapping parts.
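The same sorting can also be done programmatically. Here’s a minimal sketch in Python, assuming you’ve already listed each text’s key themes (the theme names below are hypothetical): the set intersection gives the overlapping region of the Venn diagram, and the set differences give each circle’s unique region.

```python
# Hypothetical key themes extracted from two texts under comparison
text_a = {"social identity", "group conflict", "prejudice", "conformity"}
text_b = {"social identity", "obedience", "conformity", "authority"}

similarities = text_a & text_b   # overlapping region of the Venn diagram
unique_to_a = text_a - text_b    # non-overlapping region for text A
unique_to_b = text_b - text_a    # non-overlapping region for text B

print("Shared themes:", sorted(similarities))   # ['conformity', 'social identity']
print("Only in text A:", sorted(unique_to_a))   # ['group conflict', 'prejudice']
print("Only in text B:", sorted(unique_to_b))   # ['authority', 'obedience']
```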

6. Identifying Oversights

Identifying oversights entails pointing out what the author missed, overlooked, or neglected in their work.

Almost every written work, no matter the expertise or meticulousness of the author, contains oversights. These omissions can be absent-minded mistakes or gaps in the argument, stemming from a lack of knowledge, foresight, or attentiveness.

Such gaps can be found in:

  • Missed opportunities to counter or address opposing views
  • Failure to consider certain relevant aspects or perspectives
  • Incomplete or insufficient data that leaves the argument weak
  • Failing to address potential criticism or counter-arguments

By shining a light on these weaknesses, you increase the depth and breadth of your critical analysis. It helps you to estimate the full worth of the text, understand its limitations, and contextualize it within the broader landscape of related work. Ultimately, noticing these oversights helps to make your analysis more balanced and considerate of the full complexity of the topic at hand.

You may notice here that identifying oversights requires you to already have a broad understanding and knowledge of the topic in the first place – so, study up!

7. Fact Checking

Fact-checking refers to the process of meticulously verifying the truth and accuracy of the data, statements, or claims put forward in a text.

Fact-checking serves as the bulwark against misinformation, bias, and unsubstantiated claims. It demands thorough research, resourcefulness, and a keen eye for detail.

Fact-checking goes beyond surface-level assertions:

  • Examining the validity of the data given
  • Cross-referencing information with other reliable sources
  • Scrutinizing references, citations, and sources utilized in the article
  • Distinguishing between opinion and objectively verifiable truths
  • Checking for outdated, biased, or unbalanced information

If you identify factual errors, it’s vital to highlight them when critically analyzing the text. But remember, you could also (after careful scrutiny) highlight that the text appears to be factually correct – that, too, is critical analysis.

8. Exploring Counterexamples

Exploring counterexamples involves searching and presenting instances or cases which contradict the arguments or conclusions presented in a text.

Counterexamples are an effective way to challenge the generalizations, assumptions or conclusions made in an article or theory. They can reveal weaknesses or oversights in the logic or validity of the author’s perspective.

Considerations in counterexample analysis are:

  • Identifying generalizations made in the text
  • Seeking examples in academic literature or real-world instances that contradict these generalizations
  • Assessing the impact of these counterexamples on the validity of the text’s argument or conclusion

Exploring counterexamples enriches your critical analysis by injecting an extra layer of scrutiny, and even doubt, in the text.

By presenting counterexamples, you not only test the resilience and validity of the text but also open up new avenues of discussion and investigation that can further your understanding of the topic.

See Also: Counterargument Examples

9. Assessing Methodologies

Assessing methodologies entails examining the techniques, tools, or procedures employed by the author to collect, analyze and present their information.

The accuracy and validity of a text’s conclusions often depend on the credibility and appropriateness of the methodologies used.

Aspects to inspect include:

  • The appropriateness of the research method for the research question
  • The adequacy of the sample size
  • The validity and reliability of data collection instruments
  • The application of statistical tests and evaluations
  • The implementation of controls to prevent bias or mitigate its impact

One strategy you could implement here is to consider a range of other methodologies the author could have used. If the author conducted interviews, consider questioning why they didn’t use broad surveys that could have presented more quantitative findings. If they only interviewed people with one perspective, consider questioning why they didn’t interview a wider variety of people, etc.

See Also: A List of Research Methodologies

10. Exploring Alternative Explanations

Exploring alternative explanations refers to the practice of proposing differing or opposing ideas to those put forward in the text.

An underlying assumption in any analysis is that there may be multiple valid perspectives on a single topic. The text you’re analyzing might provide one perspective, but your job is to bring into the light other reasonable explanations or interpretations.

Cultivating alternative explanations often involves:

  • Formulating hypotheses or theories that differ from those presented in the text
  • Referring to other established ideas or models that offer a differing viewpoint
  • Suggesting a new or unique angle to interpret the data or phenomenon discussed in the text

Searching for alternative explanations challenges the authority of a singular narrative or perspective, fostering an environment ripe for intellectual discourse and critical thinking. It nudges you to examine the topic from multiple angles, enhancing your understanding and appreciation of the complexity inherent in the field.

A Full List of Critical Analysis Skills

  • Exploring Strengths and Weaknesses
  • Evaluating Sources
  • Identifying Similarities
  • Identifying Differences
  • Identifying Biases
  • Hypothesis Testing
  • Fact-Checking
  • Exploring Counterexamples
  • Assessing Methodologies
  • Exploring Alternative Explanations
  • Pointing Out Contradictions
  • Challenging the Significance
  • Cause-And-Effect Analysis
  • Assessing Generalizability
  • Highlighting Inconsistencies
  • Reductio ad Absurdum
  • Comparing to Expert Testimony
  • Comparing to Precedent
  • Reframing the Argument
  • Pointing Out Fallacies
  • Questioning the Ethics
  • Clarifying Definitions
  • Challenging Assumptions
  • Exposing Oversimplifications
  • Highlighting Missing Information
  • Demonstrating Irrelevance
  • Assessing Effectiveness
  • Assessing Trustworthiness
  • Recognizing Patterns
  • Differentiating Facts from Opinions
  • Analyzing Perspectives
  • Prioritization
  • Making Predictions
  • Conducting a SWOT Analysis
  • PESTLE Analysis
  • Asking the Five Whys
  • Correlating Data Points
  • Finding Anomalies Or Outliers
  • Comparing to Expert Literature
  • Drawing Inferences
  • Assessing Validity & Reliability

Analysis and Bloom’s Taxonomy

Benjamin Bloom placed analysis as the third-highest form of thinking on his ladder of cognitive skills called Bloom’s Taxonomy.

This taxonomy starts with the lowest levels of thinking – remembering and understanding. The further we go up the ladder, the more we reach higher-order thinking skills that demonstrate depth of understanding and knowledge, as outlined below:

Here’s a full outline of the taxonomy, from the lowest level of thinking to the highest:

  • Remembering – recalling facts and basic concepts
  • Understanding – explaining ideas or concepts
  • Applying – using information in new situations
  • Analyzing – drawing connections among ideas
  • Evaluating – justifying a stand or a decision
  • Creating – producing new or original work


Critically Evaluating Research

Some research reports or assessments will require you to critically evaluate a journal article or piece of research. Below is a guide, with examples, of how to critically evaluate research and how to communicate your ideas in writing.

To develop the skill of critical evaluation, read research articles in psychology with an open mind and be an active reader. Ask questions as you go and see if the answers are provided. Initially, skim through the article to gain an overview of the problem, the design, the methods, and the conclusions. Then read for details, considering the questions provided below for each section of a journal article.

Title and Abstract

  • Did the title describe the study?
  • Did the key words of the title serve as key elements of the article?
  • Was the title concise, i.e., free of distracting or extraneous phrases?
  • Was the abstract concise and to the point?
  • Did the abstract summarise the study’s purpose/research problem, the independent and dependent variables under study, methods, main findings, and conclusions?
  • Did the abstract provide you with sufficient information to determine what the study is about and whether you would be interested in reading the entire article?
Introduction

  • Was the research problem clearly identified?
  • Is the problem significant enough to warrant the study that was conducted?
  • Did the authors present an appropriate theoretical rationale for the study?
  • Is the literature review informative and comprehensive or are there gaps?
  • Are the variables adequately explained and operationalised?
  • Are hypotheses and research questions clearly stated? Are they directional? Do the author’s hypotheses and/or research questions seem logical in light of the conceptual framework and research problem?
  • Overall, does the literature review lead logically into the Method section?
Method

  • Is the sample clearly described, in terms of size, relevant characteristics (gender, age, SES, etc.), selection and assignment procedures, and whether any inducements were used to solicit subjects (payment, subject credit, free therapy, etc.)?
  • What population do the subjects represent (external validity)?
  • Are there sufficient subjects to produce adequate power (statistical validity)?
  • Have the variables and measurement techniques been clearly operationalised?
  • Do the measures/instruments seem appropriate as measures of the variables under study (construct validity)?
  • Have the authors included sufficient information about the psychometric properties (e.g., reliability and validity) of the instruments?
  • Are the materials used in conducting the study or in collecting data clearly described?
  • Are the study’s scientific procedures thoroughly described in chronological order?
  • Is the design of the study identified (or made evident)?
  • Do the design and procedures seem appropriate in light of the research problem, conceptual framework, and research questions/hypotheses?
  • Are there other factors that might explain the differences between groups (internal validity)?
  • Were subjects randomly assigned to groups so there was no systematic bias in favour of one group? Was there a differential drop-out rate from groups so that bias was introduced (internal validity and attrition)?
  • Were all the necessary control groups used? Were participants in each group treated identically except for the administration of the independent variable?
  • Were steps taken to prevent subject bias and/or experimenter bias, e.g., blind or double-blind procedures?
  • Were steps taken to control for other possible confounds such as regression to the mean, history effects, order effects, etc. (internal validity)?
  • Were ethical considerations adhered to, e.g., debriefing, anonymity, informed consent, voluntary participation?
  • Overall, does the method section provide sufficient information to replicate the study?
Results

  • Are the findings complete, clearly presented, comprehensible, and well organised?
  • Are data coding and analysis appropriate in light of the study’s design and hypotheses? Are the statistics reported correctly and fully, e.g., are degrees of freedom and p values given?
  • Have the assumptions of the statistical analyses been met, e.g., does one group have very different variance to the others?
  • Are salient results connected directly to hypotheses? Are there superfluous results presented that are not relevant to the hypotheses or research question?
  • Are tables and figures clearly labelled? Well-organised? Necessary (non-duplicative of text)?
  • If a significant result is obtained, consider effect size. Is the finding meaningful? If a non-significant result is found, could low power be an issue? Were there sufficient levels of the IV? (A concrete way of checking effect size and power is sketched after this list.)
  • If necessary, have appropriate post-hoc analyses been performed? Were any transformations performed; if so, were there valid reasons? Were data collapsed over any IVs; if so, were there valid reasons? If any data were eliminated, were valid reasons given?
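Two of the points above – effect size and statistical power – can be checked with a quick calculation rather than judged by eye. Below is a minimal sketch in Python using statsmodels; the summary statistics are hypothetical, and a simple two-group independent-samples design is assumed:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical summary statistics for two independent groups
mean_a, mean_b = 74.0, 70.0   # group means
sd_pooled = 10.0              # pooled standard deviation
n_per_group = 30              # subjects in each group

# Cohen's d: a standardised effect size for a two-group comparison
d = (mean_a - mean_b) / sd_pooled
print(f"Cohen's d = {d:.2f}")  # 0.40 -- a small-to-medium effect

analysis = TTestIndPower()

# Achieved power: the probability this design detects an effect of size d
power = analysis.power(effect_size=d, nobs1=n_per_group, alpha=0.05)
print(f"Achieved power = {power:.2f}")  # well below the conventional 0.80

# Sample size per group needed to reach 80% power for the same effect
n_needed = analysis.solve_power(effect_size=d, power=0.80, alpha=0.05)
print(f"n per group for 80% power = {n_needed:.0f}")
```

If the reported sample is far smaller than the n required for adequate power, low power is a legitimate critique of a non-significant result.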

Discussion and Conclusion

  • Are findings adequately interpreted and discussed in terms of the stated research problem, conceptual framework, and hypotheses?
  • Is the interpretation adequate, i.e., does it go too far given what was actually done, or not far enough? Are non-significant findings interpreted inappropriately?
  • Is the discussion biased? Are the limitations of the study delineated?
  • Are implications for future research and/or practical application identified?
  • Are the overall conclusions warranted by the data and any limitations in the study? Are the conclusions restricted to the population under study or are they generalised too widely?
  • Is the reference list sufficiently specific to the topic under investigation and current?
  • Are citations used appropriately in the text?

General Evaluation

  • Is the article objective, well written and organised?
  • Does the information provided allow you to replicate the study in all its details?
  • Was the study worth doing? Does the study provide an answer to a practical or important problem? Does it have theoretical importance? Does it represent a methodological or technical advance? Does it demonstrate a previously undocumented phenomenon? Does it explore the conditions under which a phenomenon occurs?


Critical Analysis and Evaluation

Many assignments ask you to critique and evaluate a source. Sources might include journal articles, books, websites, government documents, portfolios, podcasts, or presentations.

When you critique, you offer both negative and positive analysis of the content, writing, and structure of a source.

When you evaluate, you assess how successful a source is at presenting information, measured against a standard or certain criteria.

Elements of a critical analysis:

opinion + evidence from the article + justification

Your opinion is your thoughtful reaction to the piece.

Evidence from the article offers some proof to back up your opinion.

The justification is an explanation of how you arrived at your opinion or why you think it’s true.

How do you critique and evaluate?

When critiquing and evaluating someone else’s writing/research, your purpose is to reach an informed opinion about a source. To get there, start by asking questions like the following:

  • How do you feel?
  • What surprised you?
  • What left you confused?
  • What pleased or annoyed you?
  • What was interesting?
  • What is the purpose of this text?
  • Who is the intended audience?
  • What kind of bias is there?
  • What was missing?
  • See our resource on analysis and synthesis (Move From Research to Writing: How to Think) for other examples of questions to ask.
Vocabulary like the following can help you put that opinion into words:

  • sophisticated
  • interesting
  • undocumented
  • disorganized
  • superficial
  • unconventional
  • inappropriate interpretation of evidence
  • unsound or discredited methodology
  • traditional
  • unsubstantiated
  • unsupported
  • well-researched
  • easy to understand
Here’s an example that puts opinion, evidence, and justification together:

  • Opinion: This article’s assessment of the power balance in cities is confusing.
  • Evidence: It first says that the power to shape policy is evenly distributed among citizens, local government, and business (Rajal, 232).
  • Justification: But then it goes on to focus almost exclusively on business. Next, in a much shorter section, it combines the idea of citizens and local government into a single point of evidence. This leaves the reader with the impression that the citizens have no voice at all. It is not helpful in trying to determine the role of the common voter in shaping public policy.

Sample criteria for critical analysis

Sometimes the assignment will specify what criteria to use when critiquing and evaluating a source. If not, consider the following prompts to approach your analysis. Choose the questions that are most suitable for your source.

  • What do you think about the quality of the research? Is it significant?
  • Did the author answer the question they set out to? Did the author prove their thesis?
  • Did you find contradictions to other things you know?
  • What new insight or connections did the author make?
  • How does this piece fit within the context of your course, or the larger body of research in the field?
  • The structure of an article or book is often dictated by standards of the discipline or a theoretical model. Did the piece meet those standards?
  • Did the piece meet the needs of the intended audience?
  • Was the material presented in an organized and logical fashion?
  • Is the argument cohesive and convincing? Is the reasoning sound? Is there enough evidence?
  • Is it easy to read? Is it clear and easy to understand, even if the concepts are sophisticated?


Writing a Critical Analysis


This guide is meant to help you understand the basics of writing a critical analysis. A critical analysis is an argument about a particular piece of media. There are typically two parts: (1) identify and explain the argument the author is making, and (2) provide your own argument about that argument. Your instructor may have very specific requirements on how you are to write your critical analysis, so make sure you read your assignment carefully.


Critical Analysis

A deep approach to understanding a piece of media, achieved by relating new knowledge to what you already know.

Part 1: Introduction

  • Identify the work being criticized.
  • Present thesis - argument about the work.
  • Preview your argument - what are the steps you will take to prove your argument.

Part 2: Summarize

  • Provide a short summary of the work.
  • Present only what is needed to know to understand your argument.

Part 3: Your Argument

  • This is the bulk of your paper.
  • Provide "sub-arguments" to prove your main argument.
  • Use scholarly articles to back up your argument(s).

Part 4: Conclusion

  • Reflect on  how  you have proven your argument.
  • Point out the  importance  of your argument.
  • Comment on the potential for further research or analysis.
  • Cornell University Library: Tips for writing a critical appraisal and analysis of a scholarly article
  • Queen’s University Library: How to Critique an Article (Psychology)
  • University of Illinois, Springfield: An example of a summary and an evaluation of a research article. This extended example shows the different ways a student can critique and write about an article.


How to critically appraise an article

Jane M Young & Michael J Solomon

Nature Clinical Practice Gastroenterology & Hepatology, volume 6, pages 82–91 (2009)


Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article in order to assess the usefulness and validity of research findings. The most important components of a critical appraisal are an evaluation of the appropriateness of the study design for the research question and a careful assessment of the key methodological features of this design. Other factors that also should be considered include the suitability of the statistical methods used and their subsequent interpretation, potential conflicts of interest and the relevance of the research to one's own practice. This Review presents a 10-step guide to critical appraisal that aims to assist clinicians to identify the most relevant high-quality studies available to guide their clinical practice.

Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article

Critical appraisal provides a basis for decisions on whether to use the results of a study in clinical practice

Different study designs are prone to various sources of systematic bias

Design-specific, critical-appraisal checklists are useful tools to help assess study quality

Assessments of other factors, including the importance of the research question, the appropriateness of statistical analysis, the legitimacy of conclusions and potential conflicts of interest are an important part of the critical appraisal process





Cite this article

Young, J., Solomon, M. How to critically appraise an article. Nat Rev Gastroenterol Hepatol 6, 82–91 (2009). https://doi.org/10.1038/ncpgasthep1331


Rosie Psychology: Your online tutor

How to demonstrate critical evaluation in your psychology assignments


Thinking critically about psychology research

Critical thinking is often taught in undergraduate psychology degrees, and is a key marking criteria for higher marks in many assignments. But getting your head around how to write critically can sometimes be difficult. It can take practice. The aim of this short blog is to provide an introduction to critical evaluation, and how to start including evidence of critical evaluation in your psychology assignments.

So what does “critical evaluation” really mean?

Broadly speaking, critical evaluation is the process of thinking and writing critically about the quality of the sources of evidence used to support or refute an argument. By “evidence”, I mean the literature you cite (e.g., a journal article or book chapter). By “quality of the evidence”, I mean thinking about whether the topic has been tested in a robust way. If the quality of the sources is poor, then this could suggest poor support for your argument, and vice versa. Even if the quality is poor, discussing this in your assignments is itself evidence of critical thinking!

In the rest of this blog, I outline a few different ways you can start to implement critical thinking into your work and reading of psychology. I talk about the quality of the evidence, offer a few pointers for critiquing the methods, and cover theoretical and practical critical evaluation too. This is not an exhaustive list, but hopefully it’ll help you to start getting those higher-level marks in psychology. I also include an example write-up at the end to illustrate how to write all of this up!

The quality of the evidence

There are different types of study designs in psychology research, but some are of higher quality than others. The higher the quality of the evidence, the stronger the support for your argument the research offers, because the idea has been tested more rigorously. The pyramid image below can really help to explain what we mean by “quality of evidence”, by showing different study designs in the order of their quality. 

Not every area of psychology is going to be full of high-quality studies, and even the strongest sources of evidence (i.e., systematic reviews and/or meta-analyses) can have limitations! Because no study is perfect, it is a good habit to tell the reader (i) the design of the study you're citing, and (ii) how this affects your argument. Doing so is evidence of critical thought. (See the example write-up below for how to implement this, but do not copy and paste it!)

But first, what do I mean by “design”? The design of the study refers to  how  the study was carried out. There are sometimes broad categories of design that you’ll have heard of, like a ‘survey design’, ‘a review paper’, or an ‘experimental design’. Within these categories, though, there can be more specific types of design (e.g. a  cross-sectional  survey design, or a  longitudinal  survey design; a  randomised controlled  experiment or a  simple pre-post  experiment). Knowing these specific types of design is a good place to start when thinking about how to critique the evidence when citing your sources, and the image below can help with that. 

[Image: the hierarchy of scientific evidence, a pyramid running from systematic reviews and randomised controlled studies at the top down through cohort and case studies.]

Image source: https://thelogicofscience.com/2016/01/12/the-hierarchy-of-evidence-is-the-studys-design-robust/

In summary, there are various types of designs in psychology research. To name a few from the image above, we have: a meta-analysis or a systematic review (a review paper that summarises the research addressing the same research question) and a cross-sectional survey study (a questionnaire that people complete once – these are really common in psychology!). If you're not familiar with these, I would highly suggest doing a bit of reading around these methods and some of their general limitations – you can then use these limitation points in your assignments. To help with this, you could do a Google Scholar search for 'limitations of a cross-sectional study' or 'why are randomised controlled trials gold standard?'. You can then cite published papers as further support for a limitation.

Methodological critical evaluation

  • Internal validity: Are the findings or the measures used in the study reliable (e.g., have they been replicated by another study, and is the reliability high)?
  • External validity: Are there any biases in the study that might affect generalisability (e.g., a gender bias, where one gender is overrepresented in the recruited sample relative to the population)? Lack of generalisability is a common limitation that undergraduates tend to use by default in their reports. It's a perfectly valid limitation, but it can usually be made much more impactful by explaining exactly how it's a problem for the topic of study. In some cases, this limitation may not be all that warranted; for example, a female bias may be expected in a sample of psychology students, because undergraduate psychology courses tend to enrol mostly female students!
  • What is the design of the study, and is it a high- or low-quality design (e.g., a randomised controlled trial vs. a cross-sectional study)?

Theoretical critical evaluation

  • Do the findings in the literature support the relevant psychological theories?
  • Have the findings been replicated in another study? (If so, say so and add a reference!)

Practical critical evaluation

  • In the real world, how easy would it be to implement these findings?
  • Have these findings been implemented? (If so, you could find out if this has been done well!)

Summary points


You don’t have to use all of these points in your writing; these are just examples of how you can demonstrate critical thinking in your work. Try to use at least a couple in any assignment. Here is an example of how to write these up:

An example write-up

“Depression and anxiety are generally associated with each other (see the meta-analysis by [reference here]). For example, one of these studies was a cross-sectional study [reference here] with 500 undergraduate psychology students. The researchers found that depression and anxiety (measured using the DASS-21) were correlated at r = .76, indicating a strong effect. However, this study is limited in that it used a cross-sectional design, which does not tell us whether depression causes anxiety or whether anxiety causes depression; it just tells us that they are correlated. It is also limited in that the participants were not a clinical sample, which limits what we can conclude about whether these are clinically co-morbid constructs. Finally, a strength of this study is that it used the DASS-21, which is generally found to be a reliable measure. Future studies would therefore benefit from using a longitudinal design to gain an idea of how these variables are causally related, and from using more clinical samples to understand the implications for clinical practice. Overall, however, the research generally suggests that depression and anxiety are associated. That there is a meta-analysis on this topic [reference here], showing that there is lots of evidence, suggests that this finding is generally well accepted.”

  • Notice how I first found a review paper on the topic to broadly tell the reader how much evidence there is in the first place. I set the scene of the paragraph with the first sentence, and the last sentence brings it back, rounding the paragraph off.
  • Notice how I then described one study from this paper in more detail. Specifically, I mentioned the participants, the design of the study and the measure the researchers used to assess these variables. Critically, I then described how each of these pieces of the method is a limitation or strength of the study. Sometimes it's enough to just say "the study was limited in that it was a cross-sectional study", but it really shows that you are thinking critically if you also add "… because it does not tell us …".
  • Notice how I added a statistic to further illustrate my point (in this case, the correlation coefficient), showing that I didn't just read the abstract of the paper. Doing this for the effect sizes in a study can also help demonstrate to a reader that you understand statistics (a higher-level marking criterion). A minimal code sketch of this kind of computation follows below.
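If you ever want to check or illustrate a correlation like the r = .76 reported above, here is a minimal sketch in Python. The scores and sample size are entirely hypothetical, invented only to show the computation:

```python
# A minimal sketch (hypothetical data): computing a Pearson correlation
# between depression and anxiety scores, as in the example write-up above.
from scipy import stats

# Hypothetical DASS-21 subscale scores for six participants
depression = [12, 18, 9, 22, 15, 20]
anxiety = [10, 20, 8, 19, 14, 21]

r, p = stats.pearsonr(depression, anxiety)
print(f"r = {r:.2f}, p = {p:.3f}")  # prints the coefficient and its p-value
```

Reporting the coefficient alongside its p-value, as the example write-up does, lets a reader judge both the strength and the statistical significance of the association.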

Are these points you can include in your own work?

Thanks for reading,


Write a Critical Review of a Scientific Journal Article

1. Identify how and why the research was carried out
2. Establish the research context
3. Evaluate the research
4. Establish the significance of the research


Read the article(s) carefully and use the questions below to help you identify how and why the research was carried out. Look at the following sections: 

Introduction

  • What was the objective of the study?

Methods

  • What methods were used to accomplish this purpose (e.g., systematic recording of observations, analysis and evaluation of published research, assessment of theory, etc.)?
  • What techniques were used and how was each technique performed?
  • What kind of data can be obtained using each technique?
  • How are such data interpreted?
  • What kind of information is produced by using the technique?

Results

  • What objective evidence was obtained from the authors’ efforts (observations, measurements, etc.)?
  • What were the results of the study?
  • How was each technique used to obtain each result?
  • What statistical tests were used to evaluate the significance of the conclusions based on numeric or graphic data?

Discussion and conclusion

  • How did each result contribute to answering the question or testing the hypothesis raised in the introduction?
  • How were the results interpreted? How were they related to the original problem (authors’ view of evidence rather than objective findings)?
  • Were the authors able to answer the question (test the hypothesis) raised?
  • Did the research provide new factual information, a new understanding of a phenomenon in the field, or a new research technique?
  • How was the significance of the work described?
  • Do the authors relate the findings of the study to literature in the field?
  • Did the reported observations or interpretations support or refute observations or interpretations made by other researchers?


Once you are familiar with the article, you can establish the research context by asking the following questions:

  • Who conducted the research? What were/are their interests?
  • When and where was the research conducted?
  • Why did the authors do this research?
  • Was this research pertinent only within the authors’ geographic locale, or did it have broader (even global) relevance?
  • Were many other laboratories pursuing related research when the reported work was done? If so, why?
  • For experimental research, what funding sources met the costs of the research?
  • On what prior observations was the research based? What was and was not known at the time?
  • How important was the research question posed by the researchers?


Remember that simply disagreeing with the material is not considered to be a critical assessment of the material.  For example, stating that the sample size is insufficient is not a critical assessment.  Describing why the sample size is insufficient for the claims being made in the study would be a critical assessment.

Use the questions below to help you evaluate the quality of the authors’ research:

Title

  • Does the title precisely state the subject of the paper?

Abstract

  • Read the statement of purpose in the abstract. Does it match the one in the introduction?

Acknowledgments

  • Could the source of the research funding have influenced the research topic or conclusions?

Introduction

  • Check the sequence of statements in the introduction. Does all the information lead coherently to the purpose of the study?

Methods

  • Review all methods in relation to the objective(s) of the study. Are the methods valid for studying the problem?
  • Check the methods for essential information. Could the study be duplicated from the methods and information given?
  • Check the methods for flaws. Is the sample selection adequate? Is the experimental design sound?
  • Check the sequence of statements in the methods. Does all the information belong there? Is the sequence of methods clear and pertinent?
  • Was there mention of ethics? Which research ethics board approved the study?

Results

  • Carefully examine the data presented in the tables and diagrams. Does the title or legend accurately describe the content?
  • Are column headings and labels accurate?
  • Are the data organized for ready comparison and interpretation? (A table should be self-explanatory, with a title that accurately and concisely describes content and column headings that accurately describe information in the cells.)
  • Review the results as presented in the text while referring to the data in the tables and diagrams. Does the text complement, and not simply repeat, the data? Are there discrepancies between the results in the text and those in the tables?
  • Check all calculations and presentation of data.
  • Review the results in light of the stated objectives. Does the study reveal what the researchers intended?

Discussion

  • Does the discussion clearly address the objectives and hypotheses?
  • Check the interpretation against the results. Does the discussion merely repeat the results?
  • Does the interpretation arise logically from the data, or is it too far-fetched?
  • Have the faults, flaws, or shortcomings of the research been addressed?
  • Is the interpretation supported by other research cited in the study?
  • Does the study consider key studies in the field?
  • What is the significance of the research? Do the authors mention wider implications of the findings?
  • Is there a section on recommendations for future research? Are there other research possibilities or directions suggested?

Consider the article as a whole

  • Reread the abstract. Does it accurately summarize the article?
  • Check the structure of the article (first headings and then paragraphing). Is all the material organized under the appropriate headings? Are sections divided logically into subsections or paragraphs?
  • Are stylistic concerns, logic, clarity, and economy of expression addressed?


After you have evaluated the research, consider whether the research has been successful. Has it led to new questions being asked, or new ways of using existing knowledge? Are other researchers citing this paper?

You should consider the following questions:

  • How did other researchers view the significance of the research reported by your authors?
  • Did the research reported in your article result in the formulation of new questions or hypotheses (by the authors or by other researchers)?
  • Have other researchers subsequently supported or refuted the observations or interpretations of these authors?
  • Did the research make a significant contribution to human knowledge?
  • Did the research produce any practical applications?
  • What are the social, political, technological, medical implications of this research?
  • How do you evaluate the significance of the research?

To answer these questions, look at review articles to find out how reviewers view this piece of research. Look at research articles and databases like Web of Science to see how other people have used this work. What range of journals have cited this article?

These questions were adapted from the following sources:

Kuyper, B.J. (1991). Bringing up scientists in the art of critiquing research. Bioscience, 41(4), 248–250.

Wood, J.M. (2003). Research Lab Guide. MICR*3260 Microbial Adaptation and Development Web Site. Retrieved July 31, 2006.

Source: https://guides.lib.uoguelph.ca/WriteCriticalReview (last updated Jan 11, 2024). This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

J Clin Diagn Res. 2017 May; 11(5).

Critical Appraisal of Clinical Research

Azzam Al-Jundi

1 Professor, Department of Orthodontics, King Saud bin Abdul Aziz University for Health Sciences-College of Dentistry, Riyadh, Kingdom of Saudi Arabia.

Salah Sakka

2 Associate Professor, Department of Oral and Maxillofacial Surgery, Al Farabi Dental College, Riyadh, KSA.

Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research, and with patients' values and expectations, in the decision-making process for patient care. Being able to identify and appraise the best available evidence, in order to integrate it with your own clinical experience and patients' values, is a fundamental skill. The aim of this article is to provide a robust and simple process for assessing the credibility of articles and their value to your clinical practice.

Introduction

Decisions related to patient care are carefully made through an essential process of integrating the best existing evidence, clinical experience and patient preference. Critical appraisal is the process of carefully and systematically examining research to assess its reliability, value and relevance, in order to guide professionals in their vital clinical decision making [1].

Critical appraisal is essential to:

  • Combat information overload;
  • Identify papers that are clinically relevant;
  • Support Continuing Professional Development (CPD).

Carrying out Critical Appraisal:

Assessing the research methods used in the study is a prime step in its critical appraisal. This is done using checklists which are specific to the study design.

Standard Common Questions:

  • What is the research question?
  • What is the study type (design)?
  • Selection issues.
  • What are the outcome factors and how are they measured?
  • What are the study factors and how are they measured?
  • What important potential confounders are considered?
  • What is the statistical method used in the study?
  • Statistical results.
  • What conclusions did the authors reach about the research question?
  • Are ethical issues considered?

The Critical Appraisal starts by double checking the following main sections:

I. Overview of the paper:

  • The publishing journal and the year
  • The article title: Does it state key trial objectives?
  • The author(s) and their institution(s)

The presence of a peer review process in journal acceptance protocols also adds robustness to the assessment criteria for research papers and hence would indicate a reduced likelihood of publication of poor quality research. Other areas to consider may include authors’ declarations of interest and potential market bias. Attention should be paid to any declared funding or the issue of a research grant, in order to check for a conflict of interest [ 2 ].

II. ABSTRACT: Reading the abstract is a quick way of getting to know the article and its purpose, major procedures and methods, main findings, and conclusions.

  • Aim of the study: It should be clearly stated.
  • Materials and Methods: The study design, the type of groups, the type of randomization process, the sample size, gender, age, the procedure rendered to each group and the measuring tool(s) should be clearly stated.
  • Results: The measured variables with their statistical analysis and significance.
  • Conclusion: It must clearly answer the question of interest.

III. Introduction/Background section:

An excellent introduction will thoroughly reference earlier work related to the area under discussion and express the importance and limitations of what is already known [2].

- Why was this study considered necessary? What is its purpose? Was the purpose identified before the study, or was a chance result revealed as part of 'data searching'?

- What has already been achieved, and how does this study differ?

- Does the scientific approach outline the advantages along with possible drawbacks associated with the intervention or observations?

IV. Methods and Materials section: Full details of how the study was actually carried out should be provided. Precise information is given on the study design, the population, the sample size and the interventions presented. All measurement approaches should be clearly stated [3].

V. Results section: This section should clearly reveal what actually happened to the subjects. The results may present raw data and explain the statistical analysis. These can be shown in related tables, diagrams and graphs.

VI. Discussion section: This section should include a thorough comparison of what is already known in the topic of interest with the clinical relevance of what has been newly established. Possible limitations and the need for further studies should also be discussed.

Does it summarize the main findings of the study and relate them to any deficiencies in the study design or problems in the conduct of the study?

  • Does it address any source of potential bias?
  • Are interpretations consistent with the results?
  • How are null findings interpreted?
  • Does it mention how the findings of this study relate to previous work in the area?
  • Can they be generalized (external validity)?
  • Does it mention their clinical implications/applicability?
  • What are the results/outcomes/findings applicable to, and will they affect clinical practice?
  • Does the conclusion answer the study question?
  • Is the conclusion convincing?
  • Does the paper indicate ethics approval?
  • Can you identify potential ethical issues?
  • Do the results apply to the population in which you are interested?
  • Will you use the results of the study?

Once you have answered the preliminary and key questions and identified the research method used, you can incorporate specific questions related to each method into your appraisal process or checklist.

1-What is the research question?

For a study to be valuable, it should address a significant problem within healthcare and provide new or meaningful results. A useful structure for assessing the problem addressed in an article is the Problem-Intervention-Comparison-Outcome (PICO) method [3].

P = Patient/Problem/Population: Does the research have a focused question? What is the chief complaint? (e.g., disease status, previous ailments, current medications)

I = Intervention: An appropriately and clearly stated management strategy (e.g., a new diagnostic test, treatment or adjunctive therapy)

C = Comparison: A suitable control or alternative (e.g., specific and limited to one alternative choice)

O = Outcomes: The desired results or patient-related consequences (e.g., eliminating symptoms, improving function, esthetics)

The clinical question determines which study designs are appropriate. There are five broad categories of clinical questions, as shown in [ Table/Fig-1 ].

[Table/Fig-1]: Categories of clinical questions and the related study designs.

2- What is the study type (design)?

The study design of the research is fundamental to the usefulness of the study.

In a clinical paper, the methodology employed to generate the results should be fully explained. In general, all questions about the clinical query, the study design, the subjects and the measures taken to reduce bias and confounding should be adequately and thoroughly explored and answered.

Participants/Sample Population:

Researchers identify the target population they are interested in. A sample population is therefore taken and results from this sample are then generalized to the target population.

The sample should be representative of the target population from which it came. Knowing the baseline characteristics of the sample population is important because this allows researchers to see how closely the subjects match their own patients [ 4 ].

Sample size calculation (Power calculation): A trial should be large enough to have a high chance of detecting a worthwhile effect if it exists. Statisticians can work out before the trial begins how large the sample size should be in order to have a good chance of detecting a true difference between the intervention and control groups [ 5 ].
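To make this concrete, here is a minimal sketch of an a-priori sample size calculation in Python. The effect size, alpha and power values are assumptions chosen for illustration, not figures from any particular study:

```python
# A minimal sketch (assumed inputs) of an a-priori power calculation
# for a two-arm trial comparing group means.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed standardized difference (Cohen's d)
    alpha=0.05,       # two-sided significance level
    power=0.80,       # desired probability of detecting a true effect
)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64
```

A trial that recruits far fewer participants than such a calculation suggests is underpowered: a true effect of the assumed size could easily be missed.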

  • Is the sample defined? Human or animal (and which type)? What population does it represent?
  • Does it mention eligibility criteria, with reasons?
  • Does it mention where and how the sample was recruited, selected and assessed?
  • Does it mention where the study was carried out?
  • Is the sample size justified and correctly calculated? Is it adequate to detect statistically and clinically significant results?
  • Does it mention a suitable study design/type?
  • Is the study type appropriate to the research question?
  • Is the study adequately controlled? Does it mention the type of randomization process? Does it mention the presence of a control group, or explain the lack of one?
  • Are the samples similar at baseline? Is sample attrition mentioned? (All studies should report the number of participants/specimens at the start of the study, together with how many completed it and the reasons for any incomplete follow up.)
  • Does it mention who was blinded? Are the assessors and participants blind to the interventions received?
  • Does it mention how the data were analysed?
  • Are any measurements taken likely to be valid?

Researchers use measuring techniques and instruments that have been shown to be valid and reliable.

Validity refers to the extent to which a test measures what it is supposed to measure (i.e., the extent to which the value obtained represents the object of interest). Ask:

  • Is the measuring instrument sound and effective?
  • What does the test measure?
  • Does it measure what it is supposed to measure?
  • How well and how accurately does it measure?

Reliability: In research, the term reliability means "repeatability" or "consistency". Reliability refers to how consistent a test is on repeated measurements. It is especially important if assessments are made on different occasions and/or by different examiners. Studies should state the method for assessing the reliability of any measurements taken and what the intra-examiner reliability was [6].

3-Selection issues:

The following questions should be raised:

  • How were subjects chosen or recruited? If not random, are they representative of the population?
  • What type of blinding (masking) was used: single, double or triple?
  • Is there a control group? How was it chosen?
  • How are patients followed up? Who are the dropouts? Why and how many are there?
  • Are the independent (predictor) and dependent (outcome) variables in the study clearly identified, defined, and measured?
  • Is there a statement about sample size issues or statistical power (especially important in negative studies)?
  • If a multicenter study, what quality assurance measures were employed to obtain consistency across sites?
  • Are there selection biases? Consider the study type:
  • In a case-control study (e.g., comparing exercise habits): Are the controls appropriate? Were records of cases and controls reviewed blindly? How were possible selection biases controlled (prevalence bias, admission rate bias, volunteer bias, recall bias, lead time bias, detection bias, etc.)?
  • In a cross-sectional study: Was the sample selected in an appropriate manner (random, convenience, etc.)? Were efforts made to ensure a good response rate or to minimize the occurrence of missing data? Were reliability (reproducibility) and validity reported?
  • In an intervention study: How were subjects recruited and assigned to groups?
  • In a cohort study: How many reached final follow-up? Are the subjects representative of the population to which the findings are applied? Is there evidence of volunteer bias? Was there adequate follow-up time? What was the drop-out rate?

Any shortcoming in the methodology can lead to results that do not reflect the truth. If clinical practice is changed on the basis of such results, patients could be harmed.

Researchers employ a variety of techniques to make the methodology more robust, such as matching, restriction, randomization, and blinding [ 7 ].

Bias is the term used to describe an error, at any stage of the study, that was not due to chance. Bias leads to results that deviate systematically from the truth. As bias cannot be measured, researchers need to rely on good research design to minimize it [8]. To minimize bias within a study, the sample should be representative of the population. It is also imperative to consider the sample size and identify whether the study is adequately powered to produce statistically significant results, i.e., p-values < 0.05 [9].

4-What are the outcome factors and how are they measured?

  • Are all relevant outcomes assessed?
  • Is measurement error an important source of bias?

5-What are the study factors and how are they measured?

  • Are all the relevant study factors included in the study?
  • Have the factors been measured using appropriate tools?

Data Analysis and Results:

  • Were the tests appropriate for the data?
  • Are confidence intervals or p-values given?

  • How strong is the association between intervention and outcome?
  • How precise is the estimate of the risk?
  • Does it clearly mention the main finding(s), and do the data support them?
  • Does it mention the clinical significance of the result?
  • Are adverse events, or the lack of them, mentioned?
  • Are all relevant outcomes assessed?
  • Was the sample size adequate to detect a clinically/socially significant result?
  • Are the results presented in a way that helps in health policy decisions?
  • Is there measurement error?
  • Is measurement error an important source of bias?

Confounding Factors:

A confounder has a triangular relationship with both the exposure and the outcome. However, it is not on the causal pathway. It makes it appear as if there is a direct relationship between the exposure and the outcome or it might even mask an association that would otherwise have been present [ 9 ].

6- What important potential confounders are considered?

  • Are potential confounders examined and controlled for?
  • Is confounding an important source of bias?

7- What is the statistical method in the study?

  • Are the statistical methods described appropriate for comparing participants on primary and secondary outcomes?
  • Are the statistical methods specified in sufficient detail (if you had access to the raw data, could you reproduce the analysis)?
  • Were the tests appropriate for the data?
  • Are confidence intervals or p-values given?
  • Are results presented as absolute risk reduction as well as relative risk reduction? (A worked sketch of these quantities follows below.)
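As a reminder of what those risk quantities mean, here is a minimal worked sketch. The 20% and 15% event rates are hypothetical figures invented for illustration:

```python
# A minimal sketch (hypothetical event rates) of absolute vs. relative
# risk reduction, and the number needed to treat (NNT).
control_event_rate = 0.20  # 20% of control patients have the event
treated_event_rate = 0.15  # 15% of treated patients have the event

arr = control_event_rate - treated_event_rate  # absolute risk reduction
rrr = arr / control_event_rate                 # relative risk reduction
nnt = 1 / arr                                  # number needed to treat

print(f"ARR = {arr:.0%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
# ARR = 5%, RRR = 25%, NNT = 20
```

Note how a "25% relative risk reduction" sounds far more impressive than the same result expressed as a 5% absolute reduction, which is why appraisers ask for both.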

Interpretation of p-value:

The p-value refers to the probability that a result at least as extreme as the one observed would have arisen by chance. By convention, a p-value of less than 1 in 20 (p<0.05) is considered statistically significant.

  • When the p-value is less than the significance level, which is usually 0.05, we reject the null hypothesis and the result is considered statistically significant. Conversely, when the p-value is greater than 0.05, we conclude that the result is not statistically significant and we fail to reject the null hypothesis. (The sketch below illustrates this decision rule.)
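A minimal sketch of this decision rule in Python, using hypothetical outcome scores for two groups (nothing here comes from a real study):

```python
# A minimal sketch (hypothetical data): obtaining and interpreting a
# p-value from an independent-samples t-test.
from scipy import stats

intervention = [14, 16, 15, 18, 17, 19]
control = [12, 13, 11, 14, 12, 13]

t_stat, p_value = stats.ttest_ind(intervention, control)
if p_value < 0.05:
    print(f"p = {p_value:.3f}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f}: fail to reject the null hypothesis")
```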

Confidence interval:

Repeating the same trial many times would not yield exactly the same result every time. However, on average the results would fall within a certain range. A 95% confidence interval means that there is a 95% chance that the true size of the effect lies within this range.
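A minimal sketch of computing a 95% confidence interval for a mean, again with entirely hypothetical data:

```python
# A minimal sketch (hypothetical sample): a 95% confidence interval
# for a mean, based on the t distribution.
import numpy as np
from scipy import stats

sample = np.array([14, 16, 15, 18, 17, 19, 13, 16])
mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean

low, high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
print(f"Mean = {mean:.1f}, 95% CI [{low:.1f}, {high:.1f}]")
```

A narrow interval indicates a precise estimate; a wide one signals that the study gives only a rough idea of the true effect.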

8- Statistical results:

  • Do statistical tests answer the research question?

Are statistical tests performed and comparisons made (data searching)?

Correct statistical analysis of results is crucial to the reliability of the conclusions drawn from a research paper. Depending on the study design and the sample selection method employed, descriptive or inferential statistical analysis may be carried out on the results of the study. It is important to identify whether the analysis is appropriate for the study [9].

  • Was the sample size adequate to detect a clinically/socially significant result?
  • Are the results presented in a way that helps in health policy decisions?

Clinical significance:

Statistical significance, as shown by a p-value, is not the same as clinical significance. Statistical significance judges whether treatment effects are explicable as chance findings, whereas clinical significance assesses whether treatment effects are worthwhile in real life. Small improvements that are statistically significant might not result in any meaningful improvement clinically. The following questions should always be borne in mind (a short sketch follows the list):

  • If the results are statistically significant, do they also have clinical significance?
  • If the results are not statistically significant, was the sample size sufficiently large to detect a meaningful difference or effect?
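One way to keep this distinction visible is to report an effect size alongside the p-value. Here is a minimal sketch with simulated data, deliberately constructed so that the true group difference is trivial (d ≈ 0.05) while the samples are very large:

```python
# A minimal sketch (simulated data): a large sample can make a trivial
# difference statistically significant, so report the effect size too.
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
    return (np.mean(a) - np.mean(b)) / pooled_sd

rng = np.random.default_rng(42)
group_a = rng.normal(50.5, 10, 5000)  # true difference of 0.5 points
group_b = rng.normal(50.0, 10, 5000)  # on a spread of 10: d = 0.05

t_stat, p = stats.ttest_ind(group_a, group_b)
print(f"p = {p:.4f}, d = {cohens_d(group_a, group_b):.2f}")
# The p-value may well fall below .05 here, yet d of roughly 0.05 is
# a clinically trivial effect.
```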

9- What conclusions did the authors reach about the study question?

Conclusions should ensure that any recommendations are supported by the results obtained and stay within the scope of the study. The authors should also address the limitations of the study, their effects on the outcomes, and proposed suggestions for future studies [10].

  • Are the questions posed in the study adequately addressed?
  • Are the conclusions justified by the data?
  • Do the authors extrapolate beyond the data?
  • Are shortcomings of the study addressed, and constructive suggestions given for future research?

Bibliography/References:

Do the citations follow one of the Council of Biology Editors' (CBE) standard formats?

10- Are ethical issues considered?

If a study involves human subjects, human tissues, or animals, was approval from appropriate institutional or governmental entities obtained? [ 10 , 11 ].

Critical appraisal of RCTs: Factors to look for:

  • Allocation (randomization, stratification, confounders).
  • Follow up of participants (intention to treat).
  • Data collection (bias).
  • Sample size (power calculation).
  • Presentation of results (clear, precise).
  • Applicability to local population.

[Table/Fig-2] summarizes the guidelines of the Consolidated Standards of Reporting Trials (CONSORT) [12].

[Table/Fig-2]: Summary of the CONSORT guidelines.

Critical appraisal of systematic reviews: Systematic reviews provide an overview of all primary studies on a topic and try to obtain an overall picture of the results.

In a systematic review, all the primary studies identified are critically appraised and only the best ones are selected. A meta-analysis (i.e., a statistical analysis) of the results from the selected studies may be included. Factors to look for:

  • Literature search (did it include published and unpublished materials as well as non-English language studies? Was personal contact with experts sought?).
  • Quality-control of studies included (type of study; scoring system used to rate studies; analysis performed by at least two experts).
  • Homogeneity of studies.

[Table/Fig-3] summarizes the guidelines of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) [13].

[Table/Fig-3]: Summary of the PRISMA guidelines.

Critical appraisal is a fundamental skill in modern practice for assessing the value of clinical research and providing an indication of its relevance to the profession. It is a skill set developed throughout a professional career that, through integration with clinical experience and patient preference, permits the practice of evidence-based medicine and dentistry. By following a systematic approach, such evidence can be considered and applied to clinical practice.


Evaluating Sources | Methods & Examples

Published on June 2, 2022 by Eoghan Ryan. Revised on May 31, 2023.

The sources you use are an important component of your research. It’s important to evaluate the sources you’re considering using, in order to:

  • Ensure that they’re credible
  • Determine whether they’re relevant to your topic
  • Assess the quality of their arguments

Table of contents

  • Evaluating a source’s credibility
  • Evaluating a source’s relevance
  • Evaluating a source’s arguments
  • Other interesting articles
  • Frequently asked questions about evaluating sources

Evaluating the credibility of a source is an important way of sifting out misinformation and determining whether you should use it in your research. Useful approaches include the CRAAP test and lateral reading .

One of the best ways to evaluate source credibility is the CRAAP test . This stands for:

  • Currency: Does the source reflect recent research?
  • Relevance: Is the source related to your research topic?
  • Authority: Is it a respected publication? Is the author an expert in their field?
  • Accuracy: Does the source support its arguments and conclusions with evidence?
  • Purpose: What is the author’s intention?

How you evaluate a source using these criteria will depend on your subject and focus. It’s important to understand the types of sources and how you should use them in your field of research.

Lateral reading

Lateral reading is the act of evaluating the credibility of a source by comparing it to other sources. This allows you to:

  • Verify evidence
  • Contextualize information
  • Find potential weaknesses

If a source is using methods or drawing conclusions that are incompatible with other research in its field, it may not be reliable.

For example, suppose a source makes surprising claims about immigration figures. Rather than taking these figures at face value, you decide to determine the accuracy of its claims by cross-checking them against official statistics such as census reports and figures compiled by the Department of Homeland Security’s Office of Immigration Statistics.


How you evaluate the relevance of a source will depend on your topic, and on where you are in the research process . Preliminary evaluation helps you to pick out relevant sources in your search, while in-depth evaluation allows you to understand how they’re related.

Preliminary evaluation

As you cannot possibly read every source related to your topic, you can use preliminary evaluation to determine which sources might be relevant. This is especially important when you’re surveying a large number of sources (e.g., in a literature review or systematic review ).

One way to do this is to look at paratextual material, or the parts of a work other than the text itself.

  • Look at the table of contents to determine the scope of the work.
  • Consult the index for key terms or the names of important scholars.

You can also read abstracts , prefaces , introductions , and conclusions . These will give you a clear idea of the author’s intentions, the parameters of the research, and even the conclusions they draw.

Preliminary evaluation is useful as it allows you to:

  • Determine whether a source is worth examining in more depth
  • Quickly move on to more relevant sources
  • Increase the quality of the information you consume

While this preliminary evaluation is an important step in the research process, you should engage with sources more deeply in order to adequately understand them.

In-depth evaluation

Begin your in-depth evaluation with any landmark studies in your field of research, or with sources that you’re sure are related to your research topic.

As you read, try to understand the connections between the sources. Look for:

  • Key debates: What topics or questions are currently influencing research? How does the source respond to these key debates?
  • Major publications or critics: Are there any specific texts or scholars that have greatly influenced the field? How does the source engage with them?
  • Trends: Is the field currently dominated by particular theories or research methods ? How does the source respond to these?
  • Gaps: Are there any oversights or weaknesses in the research?

Even sources whose conclusions you disagree with can be relevant, as they can strengthen your argument by offering alternative perspectives.

Every source should contribute to the debate about its topic by taking a clear position. This position and the conclusions the author comes to should be supported by evidence from direct observation or from other sources.

Most sources will use a mix of primary and secondary sources to form an argument . It is important to consider how the author uses these sources. A good argument should be based on analysis and critique, and there should be a logical relationship between evidence and conclusions.

To assess an argument’s strengths and weaknesses, ask:

  • Does the evidence support the claim?
  • How does the author use evidence? What theories, methods, or models do they use?
  • Could the evidence be used to draw other conclusions? Can it be interpreted differently?
  • How does the author situate their argument in the field? Do they agree or disagree with other scholars? Do they confirm or challenge established knowledge?

Situating a source in relation to other sources ( lateral reading ) can help you determine whether the author’s arguments and conclusions are reliable and how you will respond to them in your own writing.

If you want to know more about ChatGPT, AI tools , citation , and plagiarism , make sure to check out some of our other articles with explanations and examples.

  • ChatGPT vs human editor
  • ChatGPT citations
  • Is ChatGPT trustworthy?
  • Using ChatGPT for your studies
  • What is ChatGPT?
  • Chicago style
  • Paraphrasing

Plagiarism

  • Types of plagiarism
  • Self-plagiarism
  • Avoiding plagiarism
  • Academic integrity
  • Consequences of plagiarism
  • Common knowledge

As you cannot possibly read every source related to your topic, it’s important to evaluate sources to assess their relevance. Use preliminary evaluation to determine whether a source is worth examining in more depth.

This involves:

  • Reading abstracts , prefaces, introductions , and conclusions
  • Looking at the table of contents to determine the scope of the work
  • Consulting the index for key terms or the names of important scholars

Lateral reading is the act of evaluating the credibility of a source by comparing it with other sources. This allows you to:

  • Verify evidence
  • Contextualize information
  • Find potential weaknesses

A credible source should pass the CRAAP test and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.

The CRAAP test is an acronym to help you evaluate the credibility of a source you are considering using. It is an important component of information literacy .

The CRAAP test has five main components:

  • Currency: Is the source up to date?
  • Relevance: Is the source relevant to your research?
  • Authority: Where is the source published? Who is the author? Are they considered reputable and trustworthy in their field?
  • Accuracy: Is the source supported by evidence? Are the claims cited correctly?
  • Purpose: What was the motive behind publishing this source?

Scholarly sources are written by experts in their field and are typically subjected to peer review . They are intended for a scholarly audience, include a full bibliography, and use scholarly or technical language. For these reasons, they are typically considered credible sources .

Popular sources like magazines and news articles are typically written by journalists. These types of sources usually don’t include a bibliography and are written for a popular, rather than academic, audience. They are not always reliable and may be written from a biased or uninformed perspective, but they can still be cited in some contexts.

Cite this Scribbr article

Ryan, E. (2023, May 31). Evaluating Sources | Methods & Examples. Scribbr. Retrieved April 9, 2024, from https://www.scribbr.com/working-with-sources/evaluating-sources/
