

Case Study – Methods, Examples and Guide


Case Study Research

A case study is a research method that involves an in-depth examination and analysis of a particular phenomenon or case, such as an individual, organization, community, event, or situation.

It is a qualitative research approach that aims to provide a detailed and comprehensive understanding of the case being studied. Case studies typically involve multiple sources of data, including interviews, observations, documents, and artifacts, which are analyzed using various techniques, such as content analysis, thematic analysis, and grounded theory. The findings of a case study are often used to develop theories, inform policy or practice, or generate new research questions.

Types of Case Study

The main types of case study are as follows:

Single-Case Study

A single-case study is an in-depth analysis of a single case. This type of case study is useful when the researcher wants to understand a specific phenomenon in detail.

For example, a researcher might conduct a single-case study on a particular individual to understand their experiences with a particular health condition, or on a specific organization to explore its management practices. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of a single-case study are often used to generate new research questions, develop theories, or inform policy or practice.

Multiple-Case Study

A multiple-case study involves the analysis of several cases that are similar in nature. This type of case study is useful when the researcher wants to identify similarities and differences between the cases.

For example, a researcher might conduct a multiple-case study on several companies to explore the factors that contribute to their success or failure. The researcher collects data from each case, compares and contrasts the findings, and uses various techniques to analyze the data, such as comparative analysis or pattern-matching. The findings of a multiple-case study can be used to develop theories, inform policy or practice, or generate new research questions.

Exploratory Case Study

An exploratory case study is used to explore a new or understudied phenomenon. This type of case study is useful when the researcher wants to generate hypotheses or theories about the phenomenon.

For example, a researcher might conduct an exploratory case study on a new technology to understand its potential impact on society. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as grounded theory or content analysis. The findings of an exploratory case study can be used to generate new research questions, develop theories, or inform policy or practice.

Descriptive Case Study

A descriptive case study is used to describe a particular phenomenon in detail. This type of case study is useful when the researcher wants to provide a comprehensive account of the phenomenon.

For example, a researcher might conduct a descriptive case study on a particular community to understand its social and economic characteristics. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of a descriptive case study can be used to inform policy or practice or generate new research questions.

Instrumental Case Study

An instrumental case study is used to understand a particular phenomenon that is instrumental in achieving a particular goal. This type of case study is useful when the researcher wants to understand the role of the phenomenon in achieving the goal.

For example, a researcher might conduct an instrumental case study on a particular policy to understand its impact on achieving a particular goal, such as reducing poverty. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of an instrumental case study can be used to inform policy or practice or generate new research questions.

Case Study Data Collection Methods

Here are some common data collection methods for case studies:

Interviews

Interviews involve asking questions to individuals who have knowledge or experience relevant to the case study. Interviews can be structured (where the same questions are asked to all participants) or unstructured (where the interviewer follows up on the responses with further questions). Interviews can be conducted in person, over the phone, or through video conferencing.

Observations

Observations involve watching and recording the behavior and activities of individuals or groups relevant to the case study. Observations can be participant (where the researcher actively participates in the activities) or non-participant (where the researcher observes from a distance). Observations can be recorded using notes, audio or video recordings, or photographs.

Documents

Documents can be used as a source of information for case studies. Documents can include reports, memos, emails, letters, and other written materials related to the case study. Documents can be collected from the case study participants or from public sources.

Surveys

Surveys involve asking a set of questions to a sample of individuals relevant to the case study. Surveys can be administered in person, over the phone, through mail or email, or online. Surveys can be used to gather information on attitudes, opinions, or behaviors related to the case study.

Artifacts

Artifacts are physical objects relevant to the case study. Artifacts can include tools, equipment, products, or other objects that provide insights into the case study phenomenon.

How to Conduct Case Study Research

Conducting case study research involves several steps designed to ensure the quality and rigor of the study:

  • Define the research questions: The first step in conducting case study research is to define the research questions. The research questions should be specific, answerable, and relevant to the case study phenomenon under investigation.
  • Select the case: The next step is to select the case or cases to be studied. The case should be relevant to the research questions and should provide rich and diverse data that can be used to answer the research questions.
  • Collect data: Data can be collected using various methods, such as interviews, observations, documents, surveys, and artifacts. The data collection method should be selected based on the research questions and the nature of the case study phenomenon.
  • Analyze the data: The data collected from the case study should be analyzed using various techniques, such as content analysis, thematic analysis, or grounded theory. The analysis should be guided by the research questions and should aim to provide insights and conclusions relevant to the research questions.
  • Draw conclusions: The conclusions drawn from the case study should be based on the data analysis and should be relevant to the research questions. The conclusions should be supported by evidence and should be clearly stated.
  • Validate the findings: The findings of the case study should be validated by reviewing the data and the analysis with participants or other experts in the field. This helps to ensure the validity and reliability of the findings.
  • Write the report: The final step is to write the report of the case study research. The report should provide a clear description of the case study phenomenon, the research questions, the data collection methods, the data analysis, the findings, and the conclusions. The report should be written in a clear and concise manner and should follow the guidelines for academic writing.
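The data-analysis step above often begins with simple code-counting before interpretation. As a minimal sketch (the transcripts, codes, and counts here are entirely hypothetical, and real thematic analysis involves far more interpretive work than counting), tallying researcher-assigned codes across interview transcripts might look like this in Python:

```python
from collections import Counter

# Hypothetical coded interview excerpts: each transcript is a list of
# researcher-assigned codes (labels), as produced during thematic analysis.
coded_transcripts = {
    "participant_1": ["workload", "support", "workload", "burnout"],
    "participant_2": ["support", "autonomy", "workload"],
    "participant_3": ["burnout", "workload", "autonomy"],
}

# Tally how often each code appears across all transcripts.
code_counts = Counter(
    code for codes in coded_transcripts.values() for code in codes
)

# Count how many participants mention each code at least once
# (a common first step toward identifying candidate themes).
participant_coverage = Counter(
    code for codes in coded_transcripts.values() for code in set(codes)
)

for code, count in code_counts.most_common():
    print(f"{code}: {count} mentions, {participant_coverage[code]} participants")
```

Frequencies like these do not constitute themes by themselves; they simply point the researcher toward patterns worth examining in context.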

Examples of Case Study

Here are some examples of case study research:

  • The Hawthorne Studies: Conducted between 1924 and 1932, the Hawthorne Studies were a series of case studies conducted by Elton Mayo and his colleagues to examine the impact of work environment on employee productivity. The studies were conducted at the Hawthorne Works plant of the Western Electric Company in Cicero, Illinois, near Chicago, and included interviews, observations, and experiments.
  • The Stanford Prison Experiment: Conducted in 1971, the Stanford Prison Experiment was a case study conducted by Philip Zimbardo to examine the psychological effects of power and authority. The study involved simulating a prison environment and assigning participants to the role of guards or prisoners. The study was controversial due to the ethical issues it raised.
  • The Challenger Disaster: The Challenger Disaster was a case study conducted to examine the causes of the Space Shuttle Challenger explosion in 1986. The study included interviews, observations, and analysis of data to identify the technical, organizational, and cultural factors that contributed to the disaster.
  • The Enron Scandal: The Enron Scandal was a case study conducted to examine the causes of the Enron Corporation’s bankruptcy in 2001. The study included interviews, analysis of financial data, and review of documents to identify the accounting practices, corporate culture, and ethical issues that led to the company’s downfall.
  • The Fukushima Nuclear Disaster : The Fukushima Nuclear Disaster was a case study conducted to examine the causes of the nuclear accident that occurred at the Fukushima Daiichi Nuclear Power Plant in Japan in 2011. The study included interviews, analysis of data, and review of documents to identify the technical, organizational, and cultural factors that contributed to the disaster.

Application of Case Study

Case studies have a wide range of applications across various fields and industries. Here are some examples:

Business and Management

Case studies are widely used in business and management to examine real-life situations and develop problem-solving skills. Case studies can help students and professionals to develop a deep understanding of business concepts, theories, and best practices.

Healthcare

Case studies are used in healthcare to examine patient care, treatment options, and outcomes. Case studies can help healthcare professionals to develop critical thinking skills, diagnose complex medical conditions, and develop effective treatment plans.

Education

Case studies are used in education to examine teaching and learning practices. Case studies can help educators to develop effective teaching strategies, evaluate student progress, and identify areas for improvement.

Social Sciences

Case studies are widely used in social sciences to examine human behavior, social phenomena, and cultural practices. Case studies can help researchers to develop theories, test hypotheses, and gain insights into complex social issues.

Law and Ethics

Case studies are used in law and ethics to examine legal and ethical dilemmas. Case studies can help lawyers, policymakers, and ethical professionals to develop critical thinking skills, analyze complex cases, and make informed decisions.

Purpose of Case Study

The purpose of a case study is to provide a detailed analysis of a specific phenomenon, issue, or problem in its real-life context. A case study is a qualitative research method that involves the in-depth exploration and analysis of a particular case, which can be an individual, group, organization, event, or community.

The primary purpose of a case study is to generate a comprehensive and nuanced understanding of the case, including its history, context, and dynamics. Case studies can help researchers to identify and examine the underlying factors, processes, and mechanisms that contribute to the case and its outcomes. This can help to develop a more accurate and detailed understanding of the case, which can inform future research, practice, or policy.

Case studies can also serve other purposes, including:

  • Illustrating a theory or concept: Case studies can be used to illustrate and explain theoretical concepts and frameworks, providing concrete examples of how they can be applied in real-life situations.
  • Developing hypotheses: Case studies can help to generate hypotheses about the causal relationships between different factors and outcomes, which can be tested through further research.
  • Providing insight into complex issues: Case studies can provide insights into complex and multifaceted issues, which may be difficult to understand through other research methods.
  • Informing practice or policy: Case studies can be used to inform practice or policy by identifying best practices, lessons learned, or areas for improvement.

Advantages of Case Study Research

There are several advantages of case study research, including:

  • In-depth exploration: Case study research allows for a detailed exploration and analysis of a specific phenomenon, issue, or problem in its real-life context. This can provide a comprehensive understanding of the case and its dynamics, which may not be possible through other research methods.
  • Rich data: Case study research can generate rich and detailed data, including qualitative data such as interviews, observations, and documents. This can provide a nuanced understanding of the case and its complexity.
  • Holistic perspective: Case study research allows for a holistic perspective of the case, taking into account the various factors, processes, and mechanisms that contribute to the case and its outcomes. This can help to develop a more accurate and comprehensive understanding of the case.
  • Theory development: Case study research can help to develop and refine theories and concepts by providing empirical evidence and concrete examples of how they can be applied in real-life situations.
  • Practical application: Case study research can inform practice or policy by identifying best practices, lessons learned, or areas for improvement.
  • Contextualization: Case study research takes into account the specific context in which the case is situated, which can help to understand how the case is influenced by the social, cultural, and historical factors of its environment.

Limitations of Case Study Research

There are several limitations of case study research, including:

  • Limited generalizability : Case studies are typically focused on a single case or a small number of cases, which limits the generalizability of the findings. The unique characteristics of the case may not be applicable to other contexts or populations, which may limit the external validity of the research.
  • Biased sampling: Case studies may rely on purposive or convenience sampling, which can introduce bias into the sample selection process. This may limit the representativeness of the sample and the generalizability of the findings.
  • Subjectivity: Case studies rely on the interpretation of the researcher, which can introduce subjectivity into the analysis. The researcher’s own biases, assumptions, and perspectives may influence the findings, which may limit the objectivity of the research.
  • Limited control: Case studies are typically conducted in naturalistic settings, which limits the control that the researcher has over the environment and the variables being studied. This may limit the ability to establish causal relationships between variables.
  • Time-consuming: Case studies can be time-consuming to conduct, as they typically involve a detailed exploration and analysis of a specific case. This may limit the feasibility of conducting multiple case studies or conducting case studies in a timely manner.
  • Resource-intensive: Case studies may require significant resources, including time, funding, and expertise. This may limit the ability of researchers to conduct case studies in resource-constrained settings.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



What Is a Case Study? | Definition, Examples & Methods

Published on May 8, 2019 by Shona McCombes. Revised on November 20, 2023.

A case study is a detailed study of a specific subject, such as a person, group, place, event, organization, or phenomenon. Case studies are commonly used in social, educational, clinical, and business research.

A case study research design usually involves qualitative methods, but quantitative methods are sometimes also used. Case studies are good for describing, comparing, evaluating, and understanding different aspects of a research problem.

Table of contents

  • When to do a case study
  • Step 1: Select a case
  • Step 2: Build a theoretical framework
  • Step 3: Collect your data
  • Step 4: Describe and analyze the case

When to do a case study

A case study is an appropriate research design when you want to gain concrete, contextual, in-depth knowledge about a specific real-world subject. It allows you to explore the key characteristics, meanings, and implications of the case.

Case studies are often a good choice in a thesis or dissertation. They keep your project focused and manageable when you don't have the time or resources to do large-scale research.

You might use just one complex case study where you explore a single subject in depth, or conduct multiple case studies to compare and illuminate different aspects of your research problem.


Step 1: Select a case

Once you have developed your problem statement and research questions, you should be ready to choose the specific case that you want to focus on. A good case study should have the potential to:

  • Provide new or unexpected insights into the subject
  • Challenge or complicate existing assumptions and theories
  • Propose practical courses of action to resolve a problem
  • Open up new directions for future research

Tip: If your research is more practical in nature and aims to simultaneously investigate an issue as you solve it, consider conducting action research instead.

Unlike quantitative or experimental research, a strong case study does not require a random or representative sample. In fact, case studies often deliberately focus on unusual, neglected, or outlying cases, which may shed new light on the research problem.

Example of an outlying case study: In the 1960s, the town of Roseto, Pennsylvania was discovered to have extremely low rates of heart disease compared to the US average. It became an important case study for understanding previously neglected causes of heart disease.

However, you can also choose a more common or representative case to exemplify a particular category, experience or phenomenon.

Example of a representative case study: In the 1920s, two sociologists used Muncie, Indiana as a case study of a typical American city that supposedly exemplified the changing culture of the US at the time.

Step 2: Build a theoretical framework

While case studies focus more on concrete details than general theories, they should usually have some connection with theory in the field. This way the case study is not just an isolated description, but is integrated into existing knowledge about the topic. It might aim to:

  • Exemplify a theory by showing how it explains the case under investigation
  • Expand on a theory by uncovering new concepts and ideas that need to be incorporated
  • Challenge a theory by exploring an outlier case that doesn’t fit with established assumptions

To ensure that your analysis of the case has a solid academic grounding, you should conduct a literature review of sources related to the topic and develop a theoretical framework. This means identifying key concepts and theories to guide your analysis and interpretation.

Step 3: Collect your data

There are many different research methods you can use to collect data on your subject. Case studies tend to focus on qualitative data using methods such as interviews, observations, and analysis of primary and secondary sources (e.g., newspaper articles, photographs, official records). Sometimes a case study will also collect quantitative data.

Example of a mixed methods case study: For a case study of a wind farm development in a rural area, you could collect quantitative data on employment rates and business revenue, collect qualitative data on local people's perceptions and experiences, and analyze local and national media coverage of the development.

The aim is to gain as thorough an understanding as possible of the case and its context.


Step 4: Describe and analyze the case

In writing up the case study, you need to bring together all the relevant aspects to give as complete a picture as possible of the subject.

How you report your findings depends on the type of research you are doing. Some case studies are structured like a standard scientific paper or thesis, with separate sections or chapters for the methods, results, and discussion.

Others are written in a more narrative style, aiming to explore the case from various angles and analyze its meanings and implications (for example, by using textual analysis or discourse analysis ).

In all cases, though, make sure to give contextual details about the case, connect it back to the literature and theory, and discuss how it fits into wider patterns or debates.

Cite this Scribbr article


McCombes, S. (2023, November 20). What Is a Case Study? | Definition, Examples & Methods. Scribbr. Retrieved February 26, 2024, from https://www.scribbr.com/methodology/case-study/



Case Study Research: What, Why and How?

  • Edition: First Edition
  • By: Peter Swanborn
  • Publisher: SAGE Publications, Inc.
  • Publication year: 2010
  • Online pub date: December 28, 2018
  • Discipline: Health
  • Methods: Case study research, Research questions, Theory
  • DOI: https://doi.org/10.4135/9781526485168
  • Keywords: attitudes, domain, informants, innovation, organizations, surveying, tradition
  • Print ISBN: 9781849206129
  • Online ISBN: 9781526485168

How should case studies be selected? Is case study methodology fundamentally different to that of other methods? What, in fact, is a case? Case Study Research: What, Why and How? is an authoritative and nuanced exploration of the many faces of case-based research methods. As well as the what, how, and why, the author also examines the when and which – always with an eye on practical applications to the design, collection, analysis, and presentation of the research. Case study methodology can prove a confusing and fragmented topic. In bringing diverse notions of case study research together in one volume and sensitizing the reader to the many varying definitions and perceptions of ‘case study’, this book equips researchers at all levels with the knowledge to make an informed choice of research strategy.

Front Matter

  • Chapter 1 | What is a case study?
  • Chapter 2 | When to conduct a case study?
  • Chapter 3 | How to select cases?
  • Chapter 4 | What data to collect?
  • Chapter 5 | How to enrich your case study data?
  • Chapter 6 | How to analyse your data?
  • Chapter 7 | Assets and opportunities

Back Matter

  • Appendix 1: Selected Literature on Case Studies
  • Appendix 2: The Political Science Debate on Case Studies
  • Appendix 3: A Note on Triangulation
  • Appendix 4: A Note on Contamination
  • Bibliography
  • Author Index


Writing a Case Study


What is a case study?


A case study is:

  • An in-depth research design that primarily uses a qualitative methodology but sometimes includes quantitative methodology.
  • Used to examine an identifiable problem confirmed through research.
  • Used to investigate an individual, group of people, organization, or event.
  • Used to mostly answer "how" and "why" questions.

What are the different types of case studies?


Note: These are the primary types of case study. As you continue to research and learn about case studies, you will encounter a much longer list of variants.

Who are your case study participants?


What is triangulation?

Validity and credibility are an essential part of the case study. Therefore, the researcher should include triangulation to ensure trustworthiness while accurately reflecting what the researcher seeks to investigate.
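As a rough illustration of data-source triangulation (the source names and themes below are entirely hypothetical, and this only sketches the bookkeeping, not the interpretive judgment involved), a researcher might cross-check which themes recur across independent data sources:

```python
# Hypothetical themes identified independently in three data sources.
themes_by_source = {
    "interviews":   {"staff_shortage", "low_morale", "poor_communication"},
    "observations": {"staff_shortage", "poor_communication"},
    "documents":    {"staff_shortage", "budget_cuts"},
}

# A theme corroborated by multiple sources provides stronger evidence
# (data-source triangulation) than one appearing in a single source.
all_themes = set().union(*themes_by_source.values())
for theme in sorted(all_themes):
    sources = [s for s, themes in themes_by_source.items() if theme in themes]
    status = "corroborated" if len(sources) > 1 else "single-source"
    print(f"{theme}: {status} ({', '.join(sources)})")
```

Single-source themes are not necessarily wrong; they simply signal where the researcher should gather further evidence or explain the divergence.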


How to write a Case Study?

When developing a case study, there are different ways you could present the information, but remember to include the five parts for your case study.


  • Last Updated: Feb 21, 2024 9:32 AM
  • URL: https://resources.nu.edu/researchtools


Case Study Research Method in Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, Ph.D., is a qualified psychology teacher with over 18 years' experience working in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Case studies are in-depth investigations of a person, group, event, or community. Typically, data is gathered from various sources using several methods (e.g., observations & interviews).

The case study research method originated in clinical medicine (the case history, i.e., the patient’s personal history). In psychology, case studies are often confined to the study of a particular individual.

The information is mainly biographical and relates to events in the individual’s past (i.e., retrospective), as well as to significant events that are currently occurring in his or her everyday life.

The case study is not a single, fixed research method in itself; rather, researchers select the methods of data collection and analysis that will generate material suitable for case studies.

Freud (1909a, 1909b) conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

This makes it clear that the case study is a method that should only be used by a psychologist, therapist, or psychiatrist, i.e., someone with a professional qualification.

There is an ethical issue of competence. Only someone qualified to diagnose and treat a person can conduct a formal case study relating to atypical (i.e., abnormal) behavior or atypical development.


Famous Case Studies

  • Anna O – One of the most famous case studies, documenting psychoanalyst Josef Breuer’s treatment of “Anna O” (real name Bertha Pappenheim) for hysteria in the late 1800s using early psychoanalytic theory.
  • Little Hans – A child psychoanalysis case study published by Sigmund Freud in 1909 analyzing his five-year-old patient Herbert Graf’s house phobia as related to the Oedipus complex.
  • Bruce/Brenda – Gender identity case of the boy (Bruce) whose botched circumcision led psychologist John Money to advise gender reassignment and raise him as a girl (Brenda) in the 1960s.
  • Genie Wiley – Linguistics/psychological development case of the victim of extreme isolation abuse who was studied in 1970s California for effects of early language deprivation on acquiring speech later in life.
  • Phineas Gage – One of the most famous neuropsychology case studies, analyzing personality changes in railroad worker Phineas Gage after an 1848 brain injury in which a tamping iron pierced his skull.

Clinical Case Studies

  • Studying the effectiveness of psychotherapy approaches with an individual patient
  • Assessing and treating mental illnesses like depression, anxiety disorders, PTSD
  • Neuropsychological cases investigating brain injuries or disorders

Child Psychology Case Studies

  • Studying psychological development from birth through adolescence
  • Cases of learning disabilities, autism spectrum disorders, ADHD
  • Effects of trauma, abuse, deprivation on development

Types of Case Studies

  • Explanatory case studies: Used to explore causation in order to find underlying principles. Helpful for qualitative analysis that explains presumed causal links.
  • Exploratory case studies: Used to explore situations where an intervention being evaluated has no clear set of outcomes. Helpful for defining questions and hypotheses for future research.
  • Descriptive case studies: Used to describe an intervention or phenomenon and the real-life context in which it occurred. Helpful for illustrating certain topics within an evaluation.
  • Multiple-case studies: Used to explore differences between cases and replicate findings across cases. Helpful for comparing and contrasting specific cases.
  • Intrinsic: Used to gain a better understanding of a particular case. Helpful for capturing the complexity of a single case.
  • Collective: Used to explore a general phenomenon using multiple case studies. Helpful for jointly studying a group of cases in order to inquire into the phenomenon.

Where Do You Find Data for a Case Study?

There are several places to find data for a case study. The key is to gather data from multiple sources to get a complete picture of the case and corroborate facts or findings through triangulation of evidence. Most of this information is likely qualitative (i.e., verbal description rather than measurement), but the psychologist might also collect numerical data.
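As a loose illustration of triangulation (the source types and claims below are invented, and real triangulation is a matter of judgment rather than counting), a short Python sketch can treat a finding as corroborated only when it is supported by more than one independent type of evidence:

```python
# Illustrative sketch of triangulation of evidence: a claim counts as
# corroborated only when it appears in at least two independent source types.
from collections import defaultdict

def triangulate(evidence, min_sources=2):
    """evidence: list of (source_type, claim) pairs gathered for the case."""
    support = defaultdict(set)
    for source_type, claim in evidence:
        support[claim].add(source_type)
    return {claim: sorted(types) for claim, types in support.items()
            if len(types) >= min_sources}

# Hypothetical evidence log for a single case.
evidence = [
    ("interview", "client reports sleep disturbance"),
    ("observation", "client reports sleep disturbance"),
    ("document", "client reports sleep disturbance"),
    ("interview", "conflict with employer"),
]

corroborated = triangulate(evidence)
# Only the claim backed by interview, observation, and document evidence survives.
```

The point of the sketch is simply that a claim resting on one source type alone ("conflict with employer") is flagged as uncorroborated until further evidence is gathered.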

1. Primary sources

  • Interviews – Interviewing key people related to the case to get their perspectives and insights. The interview is an extremely effective procedure for obtaining information about an individual, and it may be used to collect comments from the person’s friends, parents, employer, workmates, and others who have a good knowledge of the person, as well as to obtain facts from the person him or herself.
  • Observations – Observing behaviors, interactions, processes, etc., related to the case as they unfold in real-time.
  • Documents & Records – Reviewing private documents, diaries, public records, correspondence, meeting minutes, etc., relevant to the case.

2. Secondary sources

  • News/Media – News coverage of events related to the case study.
  • Academic articles – Journal articles, dissertations etc. that discuss the case.
  • Government reports – Official data and records related to the case context.
  • Books/films – Books, documentaries or films discussing the case.

3. Archival records

Searching historical archives, museum collections and databases to find relevant documents, visual/audio records related to the case history and context.

Public archives like newspapers, organizational records, photographic collections could all include potentially relevant pieces of information to shed light on attitudes, cultural perspectives, common practices and historical contexts related to psychology.

4. Organizational records

Organizational records offer the advantage of often having large datasets collected over time that can reveal or confirm psychological insights.

Of course, privacy and ethical concerns regarding confidential data must be navigated carefully.

However, with proper protocols, organizational records can provide invaluable context and empirical depth to qualitative case studies exploring the intersection of psychology and organizations.

  • Organizational/industrial psychology research : Organizational records like employee surveys, turnover/retention data, policies, incident reports etc. may provide insight into topics like job satisfaction, workplace culture and dynamics, leadership issues, employee behaviors etc.
  • Clinical psychology : Therapists/hospitals may grant access to anonymized medical records to study aspects like assessments, diagnoses, treatment plans etc. This could shed light on clinical practices.
  • School psychology : Studies could utilize anonymized student records like test scores, grades, disciplinary issues, and counseling referrals to study child development, learning barriers, effectiveness of support programs, and more.

How do I Write a Case Study in Psychology?

Follow specified case study guidelines provided by a journal or your psychology tutor. General components of clinical case studies include: background, symptoms, assessments, diagnosis, treatment, and outcomes. Interpreting the information means the researcher decides what to include or leave out. A good case study should always clarify which information is the factual description and which is an inference or the researcher’s opinion.
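As an illustrative aid only (the field names mirror the general components listed above; no journal prescribes this particular format), the structure of a clinical case study can be expressed as a simple completeness checklist in Python:

```python
# A minimal sketch of the general components of a clinical case report,
# usable as a checklist for sections still left blank in a draft.
from dataclasses import dataclass, fields

@dataclass
class ClinicalCaseReport:
    background: str = ""
    symptoms: str = ""
    assessments: str = ""
    diagnosis: str = ""
    treatment: str = ""
    outcomes: str = ""

    def missing_sections(self):
        """Return the names of sections still left blank."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

# Hypothetical partial draft: only two sections written so far.
draft = ClinicalCaseReport(background="34-year-old client, referred by GP",
                           symptoms="persistent low mood for six months")
# missing_sections() flags assessments, diagnosis, treatment, outcomes.
```

Checking `draft.missing_sections()` before submission is a quick way to confirm that every component named in the guidelines has at least been addressed.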

1. Introduction

  • Provide background on the case context and why it is of interest, presenting background information like demographics, relevant history, and presenting problem.
  • Compare briefly to similar published cases if applicable. Clearly state the focus/importance of the case.

2. Case Presentation

  • Describe the presenting problem in detail, including symptoms, duration, and impact on daily life.
  • Include client demographics like age and gender, information about social relationships, and mental health history.
  • Describe all physical, emotional, and/or sensory symptoms reported by the client.
  • Use patient quotes to describe the initial complaint verbatim. Follow with full-sentence summaries of relevant history details gathered, including key components that led to a working diagnosis.
  • Summarize clinical exam results, namely orthopedic/neurological tests, imaging, lab tests, etc. Note actual results rather than subjective conclusions. Provide images if clearly reproducible/anonymized.
  • Clearly state the working diagnosis or clinical impression before transitioning to management.

3. Management and Outcome

  • Indicate the total duration of care and number of treatments given over what timeframe. Use specific names/descriptions for any therapies/interventions applied.
  • Present the results of the intervention, including any quantitative or qualitative data collected.
  • For outcomes, utilize visual analog scales for pain, medication usage logs, etc., if possible. Include patient self-reports of improvement/worsening of symptoms. Note the reason for discharge/end of care.

4. Discussion

  • Analyze the case, exploring contributing factors, limitations of the study, and connections to existing research.
  • Analyze the effectiveness of the intervention, considering factors like participant adherence, limitations of the study, and potential alternative explanations for the results.
  • Identify any questions raised in the case analysis and relate insights to established theories and current research if applicable. Avoid definitive claims about physiological explanations.
  • Offer clinical implications, and suggest future research directions.

5. Additional Items

  • Thank specific assistants for writing support only. No patient acknowledgments.
  • References should directly support any key claims or quotes included.
  • Use tables/figures/images only if substantially informative. Include permissions and legends/explanatory notes.
Strengths of Case Studies

  • Provides detailed (rich qualitative) information.
  • Provides insight for further research.
  • Permits investigation of otherwise impractical (or unethical) situations.

Case studies allow a researcher to investigate a topic in far more detail than might be possible if they were trying to deal with a large number of research participants (nomothetic approach) with the aim of ‘averaging’.

Because of their in-depth, multi-sided approach, case studies often shed light on aspects of human thinking and behavior that would be unethical or impractical to study in other ways.

Research that only looks into the measurable aspects of human behavior is not likely to give us insights into the subjective dimension of experience, which is important to psychoanalytic and humanistic psychologists.

Case studies are often used in exploratory research. They can help us generate new ideas (that might be tested by other methods). They are an important way of illustrating theories and can help show how different aspects of a person’s life are related to each other.

The method is, therefore, important for psychologists who adopt a holistic point of view (i.e., humanistic psychologists ).

Limitations

  • Lacking scientific rigor and providing little basis for generalization of results to the wider population.
  • Researchers’ own subjective feelings may influence the case study (researcher bias).
  • Difficult to replicate.
  • Time-consuming and expensive.
  • The sheer volume of data generated, together with time restrictions, can limit the depth of analysis that is possible within the available resources.

Because a case study deals with only one person/event/group, we can never be sure if the case study investigated is representative of the wider body of “similar” instances. This means the conclusions drawn from a particular case may not be transferable to other settings.

Because case studies are based on the analysis of qualitative (i.e., descriptive) data, a lot depends on the psychologist’s interpretation of the information she has acquired.

This means that there is a lot of scope for observer bias, and it could be that the subjective opinions of the psychologist intrude in the assessment of what the data means.

For example, Freud has been criticized for producing case studies in which the information was sometimes distorted to fit particular behavioral theories (e.g., Little Hans ).

This is also true of Money’s interpretation of the Bruce/Brenda case study (Diamond, 1997) when he ignored evidence that went against his theory.

Breuer, J., & Freud, S. (1895).  Studies on hysteria . Standard Edition 2: London.

Curtiss, S. (1981). Genie: The case of a modern wild child .

Diamond, M., & Sigmundson, K. (1997). Sex Reassignment at Birth: Long-term Review and Clinical Implications. Archives of Pediatrics & Adolescent Medicine , 151(3), 298-304

Freud, S. (1909a). Analysis of a phobia of a five-year-old boy. In The Pelican Freud Library (1977), Vol. 8, Case Histories 1, pages 169-306.

Freud, S. (1909b). Bemerkungen über einen Fall von Zwangsneurose (Der “Rattenmann”). Jb. psychoanal. psychopathol. Forsch ., I, p. 357-421; GW, VII, p. 379-463; Notes upon a case of obsessional neurosis, SE , 10: 151-318.

Harlow J. M. (1848). Passage of an iron rod through the head.  Boston Medical and Surgical Journal, 39 , 389–393.

Harlow, J. M. (1868).  Recovery from the Passage of an Iron Bar through the Head .  Publications of the Massachusetts Medical Society. 2  (3), 327-347.

Money, J., & Ehrhardt, A. A. (1972).  Man & Woman, Boy & Girl : The Differentiation and Dimorphism of Gender Identity from Conception to Maturity. Baltimore, Maryland: Johns Hopkins University Press.

Money, J., & Tucker, P. (1975). Sexual signatures: On being a man or a woman.

Further Information

  • Case Study Approach
  • Case Study Method
  • Enhancing the Quality of Case Studies in Health Services Research
  • “We do things together” A case study of “couplehood” in dementia
  • Using mixed methods for evaluating an integrative approach to cancer care: a case study



Case Study | Definition, Examples & Methods

Published on 5 May 2022 by Shona McCombes . Revised on 30 January 2023.

A case study is a detailed study of a specific subject, such as a person, group, place, event, organisation, or phenomenon. Case studies are commonly used in social, educational, clinical, and business research.

A case study research design usually involves qualitative methods , but quantitative methods are sometimes also used. Case studies are good for describing , comparing, evaluating, and understanding different aspects of a research problem .

Table of contents

  • When to do a case study
  • Step 1: Select a case
  • Step 2: Build a theoretical framework
  • Step 3: Collect your data
  • Step 4: Describe and analyse the case

When to do a case study

A case study is an appropriate research design when you want to gain concrete, contextual, in-depth knowledge about a specific real-world subject. It allows you to explore the key characteristics, meanings, and implications of the case.

Case studies are often a good choice in a thesis or dissertation . They keep your project focused and manageable when you don’t have the time or resources to do large-scale research.

You might use just one complex case study where you explore a single subject in depth, or conduct multiple case studies to compare and illuminate different aspects of your research problem.


Step 1: Select a case

Once you have developed your problem statement and research questions , you should be ready to choose the specific case that you want to focus on. A good case study should have the potential to:

  • Provide new or unexpected insights into the subject
  • Challenge or complicate existing assumptions and theories
  • Propose practical courses of action to resolve a problem
  • Open up new directions for future research

Unlike quantitative or experimental research, a strong case study does not require a random or representative sample. In fact, case studies often deliberately focus on unusual, neglected, or outlying cases which may shed new light on the research problem.

If you find yourself aiming to simultaneously investigate and solve an issue, consider conducting action research . As its name suggests, action research conducts research and takes action at the same time, and is highly iterative and flexible. 

However, you can also choose a more common or representative case to exemplify a particular category, experience, or phenomenon.

Step 2: Build a theoretical framework

While case studies focus more on concrete details than general theories, they should usually have some connection with theory in the field. This way the case study is not just an isolated description, but is integrated into existing knowledge about the topic. It might aim to:

  • Exemplify a theory by showing how it explains the case under investigation
  • Expand on a theory by uncovering new concepts and ideas that need to be incorporated
  • Challenge a theory by exploring an outlier case that doesn’t fit with established assumptions

To ensure that your analysis of the case has a solid academic grounding, you should conduct a literature review of sources related to the topic and develop a theoretical framework . This means identifying key concepts and theories to guide your analysis and interpretation.

Step 3: Collect your data

There are many different research methods you can use to collect data on your subject. Case studies tend to focus on qualitative data using methods such as interviews, observations, and analysis of primary and secondary sources (e.g., newspaper articles, photographs, official records). Sometimes a case study will also collect quantitative data .

The aim is to gain as thorough an understanding as possible of the case and its context.

Step 4: Describe and analyse the case

In writing up the case study, you need to bring together all the relevant aspects to give as complete a picture as possible of the subject.

How you report your findings depends on the type of research you are doing. Some case studies are structured like a standard scientific paper or thesis, with separate sections or chapters for the methods , results , and discussion .

Others are written in a more narrative style, aiming to explore the case from various angles and analyse its meanings and implications (for example, by using textual analysis or discourse analysis ).

In all cases, though, make sure to give contextual details about the case, connect it back to the literature and theory, and discuss how it fits into wider patterns or debates.




Oncology Nursing Forum, Number 6, November 2015

Case Study Research Methodology in Nursing Research

Diane G. Cope


Through data collection methods using a holistic approach that focuses on variables in a natural setting, qualitative research methods seek to understand participants’ perceptions and interpretations. Common qualitative research methods include ethnography, phenomenology, grounded theory, and historic research. Another type of methodology that has a similar qualitative approach is case study research, which seeks to understand a phenomenon or case from multiple perspectives within a given real-world context (Taylor & Thomas-Gregory, 2015). Case study research has been described as a flexible but challenging methodology used in social science research. It has had the least attention and support among social science research methods, as a result of a lack of a well-defined protocol, and has had limited use in nursing research (Donnelly & Wiechula, 2012; Taylor & Thomas-Gregory, 2015; Yin, 2012, 2014). Three methodologists, Yin, Merriam, and Stake, have been credited as seminal authors who have provided procedures for case study research (Yazan, 2015). This article will describe and discuss case study research from the perspective of these three methodologists and explore the use of this methodology in nursing research.

The term case study is well known in the nursing profession as a teaching strategy to analyze a patient’s clinical case. Case study research is less employed and is defined similarly by all three methodologists as a research approach that focuses on one phenomenon, variable or set of variables, thing, or case occurring in a defined or bounded context of time and place to gain an understanding of the whole of the phenomenon under investigation (Merriam, 2009; Stake, 1995; Yin, 2014). The phenomenon or case can be a person, a group, an organization, or an event. The overall goal of case study research is to seek the “how” or “why” a phenomenon works, as opposed to other qualitative research approaches that seek to define the “what” of a phenomenon (Polit & Beck, 2012). Case study research usually requires detailed study during an extended period of time in an effort to obtain present and past experiences, situational factors, and interrelationships relevant to the phenomenon. Case study research has been viewed by some authors as a qualitative research methodology (Polit & Beck, 2012), and others view this type of research as flexible, using a mix of qualitative and quantitative evidence (Taylor & Thomas-Gregory, 2015; Yin, 2014).

Case Study Designs

Merriam, Stake, and Yin each have a differing perspective on case study design. Merriam (2009) purports a flexible design that allows researchers to make changes throughout the research process that is based on two or three research questions that construct and guide data collection. Stake’s (1995) design is based on a literature review that is the foundation of the research questions and theoretical framework but assumes that major changes may occur throughout the research as part of a process described as progressive focusing. Yin’s (2014) design is based on a sequence and includes several design options for the researcher. The selection of a case study design is based on the chosen theory and the case to be studied. The first decision is to determine whether the case study will use a single case or multiple cases. The use of a single case study is an appropriate design for certain circumstances, including when the case represents (a) a critical case to test theory, (b) an unusual or unique case, (c) a common case that can capture an understanding of usual circumstances, (d) a revelatory case that previously has been inaccessible, or (e) a longitudinal case (Yin, 2014).

A multiple case design is used when two or more cases are chosen to examine complementary components of the main research question (Yin, 2012). The multiple case design may be selected when the researcher is interested in examining conditions for similar findings that may be replicated or in examining conditions for contrasting cases. When choosing multiple cases, no formula exists to determine the number of cases needed, unlike power analysis to determine sample size (Small, 2009). In general, including more cases in a multiple case study will achieve greater confidence or certainty in a study’s findings. Conversely, the use of fewer cases will yield less confidence or certainty.

Single and multiple case studies can use holistic or embedded designs. A holistic design comprehensively examines a case or cases, and an embedded design also analyzes subunits associated with the case or cases.

Case study research is flexible and can use multiple sources of data. Yin’s (2014) methodology incorporates qualitative and quantitative data sources, and Merriam’s (2009) and Stake’s (1995) methodology exclusively use qualitative data sources. Multiple sources of evidence provide breadth in comprehending a case or cases and enhance confidence in the study findings. Common sources of evidence include direct observations of human behavior or physical environment, interviews, archival records, documents (e.g., newspaper articles, reports), participant observation, participant records, surveys, photographs, videos, or questionnaires.

Data analysis for case study research uses qualitative and quantitative data analysis methods, depending on the selected methodology, with the focus on describing the case or cases. Merriam’s (2009) data analysis is a process of consolidating, reducing, and interpreting procedures that occur simultaneously through data collection and analysis. Six analytic strategies are ethnographic analysis, narrative analysis, phenomenologic analysis, constant comparative method, content analysis, and analytic induction. Stake (1995) similarly employs data collection and analysis procedures through the use of two strategies—categorical aggregation and direct interpretation. Yin (2012) recommends initially categorizing the data and then organizing it using four techniques—pattern matching, explanation building, program logic models, and time–series analysis. Multiple case studies would also include an additional technique, cross-case synthesis, which searches for findings that repeat across cases. The final product of case study research is a narrative report that tells the story of the case and enables the reader to fully understand the case from the narrative (Taylor & Thomas-Gregory, 2015).
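As a loose illustration of cross-case synthesis (the cases and theme labels below are invented, and real qualitative synthesis is interpretive rather than mechanical), a Python sketch might simply tally which coded themes recur across the cases of a multiple case study:

```python
# Illustrative sketch of cross-case synthesis: count how many cases
# exhibit each coded theme, so patterns that repeat across cases stand out.
from collections import Counter

def cross_case_synthesis(cases):
    """cases: dict mapping case id -> set of themes coded in that case."""
    counts = Counter()
    for themes in cases.values():
        counts.update(set(themes))  # each case contributes a theme at most once
    # Themes repeated in the most cases come first.
    return counts.most_common()

# Hypothetical coded data from three cases.
cases = {
    "case_A": {"role strain", "social support"},
    "case_B": {"role strain", "financial worry"},
    "case_C": {"role strain", "social support"},
}

theme_counts = cross_case_synthesis(cases)
# "role strain" recurs in all three cases; "social support" in two.
```

A theme that surfaces in every case (here, "role strain") is a candidate for replication across cases, while a theme seen only once may mark a contrasting case.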

Methodologic Issues

An important aspect of case study research is ensuring study rigor similar to other qualitative studies. Strategies to ensure rigor include the maintenance of a diary or journal by the researcher to document personal feelings and reactions and minimize researcher bias, expert verification, an audit trail, use of thick descriptions, long-term observation, multisite designs, and member checking to ensure accuracy of findings by the participants (Taylor & Thomas-Gregory, 2015). Another methodologic issue is ensuring content validity. This can be achieved by the researcher’s final report that should include sufficient evidence and display a deep understanding of the case by the researcher.

Application of Case Study Design in Nursing Research

In this issue of Oncology Nursing Forum, Walker, Szanton, and Wenzel (2015) present their study exploring post-treatment normalcy using a multiple case design. The purpose of the study was to develop a better understanding of how adult survivors of early-stage breast and prostate cancers manage the work of recovery, which exemplifies the goal of case study research by asking “how” a phenomenon works. Multiple case study design was used through data collection that included self-reports, biweekly phone interviews, in-depth interviews, and written journals to evaluate existing theoretical knowledge and generate new theoretical knowledge about the process of managing recovery. The authors describe study rigor by illustrating expert validation and a constant comparative process of data analysis. From the data, the authors provide the reader with a detailed, narrative description of how adult survivors work toward normalcy that is engaging and tells the survivors’ story of life post-treatment.

Despite the lack of a well-defined protocol for case study research, Merriam, Stake, and Yin provide similar yet distinctive philosophies and procedures that researchers can use when embarking on a case study research project. Walker et al. (2015) provide an excellent exemplar of executing case study research in oncology through the investigation of the illness trajectory framework and how survivors work toward normalcy after treatment. Through this research approach, oncology nursing knowledge can benefit from a better understanding of the “how” and “why” of numerous phenomena that have implications for nursing practice and ultimately improve patient outcomes.

Donnelly, F., & Wiechula, R. (2012). Clinical placement and case study methodology: A complex affair. Nurse Education Today, 32, 873–877.

Merriam, S.B. (2009). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.

Polit, D.F., & Beck, C.T. (2012). Nursing research: Generating and assessing evidence for nursing practice (9th ed.). Philadelphia, PA: Lippincott Williams and Wilkins.

Small, M.L. (2009). How many cases do I need? On science and the logic of case selection in field-based research. Ethnography, 10, 5–38. doi:10.1177/1466138108099586

Stake, R.E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Taylor, R., & Thomas-Gregory, A. (2015). Case study research. Nursing Standard, 29(41), 36–40.

Walker, R., Szanton, S.L., & Wenzel, J. (2015). Working toward normalcy post-treatment: A qualitative study of older adult breast and prostate cancer survivors [Online exclusive]. Oncology Nursing Forum, 42, E358–E367. doi:10.1188/15.ONF.E358-E367

Yazan, B. (2015). Three approaches to case study methods in education: Yin, Merriam, and Stake. Qualitative Report, 20, 134–152.

Yin, R.K. (2012). Applications of case study research (3rd ed.). Thousand Oaks, CA: Sage.

Yin, R.K. (2014). Case study research: Design and methods (5th ed.). Thousand Oaks, CA: Sage.

About the Author(s)

Diane G. Cope, PhD, ARNP, BC, AOCNP®, is an oncology nurse practitioner at the Florida Cancer Specialists and Research Institute in Fort Myers. No financial relationships to disclose. Cope can be reached at [email protected] , with copy to editor at [email protected] .


Case Study Research: Methods and Designs


Case Study Method

Case study research is a type of qualitative research design. It’s often used in the social sciences because it involves observing subjects, or cases, in their natural setting, with minimal interference from the researcher.

In the case study method , researchers pose a specific question about an individual or group to test their theories or hypothesis. This can be done by gathering data from interviews with key informants.

Here’s what you need to know about case study research design .

What Is The Case Study Method?


Case study research is a great way to understand the nuances of a matter that can get lost in quantitative research methods. A case study is distinct from other qualitative studies in the following ways:

  • It’s interested in the effect of a set of circumstances on an individual or group.
  • It begins with a specific question about one or more cases.
  • It focuses on individual accounts and experiences.

Here are the primary features of case study research:

  • Case study research methods typically involve the researcher asking a few questions of one person or a small number of people—known as respondents—to test one hypothesis.
  • Case study in research methodology may apply triangulation to collect data, in which the researcher uses several sources, including documents and field data. This is then analyzed and interpreted to form a hypothesis that can be tested through further research or validated by other researchers.
  • The case study method requires clear concepts and theories to guide its methods. A well-defined research question is crucial when conducting a case study because the results of the study depend on it. The best approach to answering a research question is to challenge the existing theories, hypotheses or assumptions.
  • Concepts are defined using objective language with no reference to preconceived notions that individuals might have about them. The researcher sets out to discover by asking specific questions on how people think or perceive things in their given situation.

The case study method is commonly used in business, management, psychology, sociology, political science, and other related fields.

Main Approaches to Data Collection

A fundamental requirement of qualitative research is recording observations that provide an understanding of reality. When it comes to the case study method, there are two major approaches that can be used to collect data: document review and fieldwork.

A case study in research methodology also includes document review, the process by which the researcher collects all available data from historical documents. These might include books, newspapers, journals, videos, photographs and other written material. The researcher may also record information using video cameras to capture events as they occur, and can go through materials produced by people involved in the case study to gain insight into their lives and experiences.

Field research involves participating in interviews and observations directly. Observation can be done during telephone interviews, events or public meetings, visits to homes or workplaces, or by shadowing someone for a period of time. The researcher can conduct one-on-one interviews with individuals or group interviews where several people are interviewed at once.

Let’s look now at case study methodology.

The case study method can be divided into three stages: formulation of objectives; collection of data; and analysis and interpretation. The researcher first makes a judgment about what should be studied based on their knowledge. Next, they gather data through observations and interviews. Here are some of the common case study research methods:

1. Survey

One of the most basic methods is the survey. Respondents are asked to complete a questionnaire with open-ended and predetermined questions. It usually takes place through face-to-face interviews, mailed questionnaires or telephone interviews, and can even be done as an online survey.

2. Semi-structured Interview

For case study research, a more complex method is the semi-structured interview. This involves the researcher learning about the topic by listening to what others have to say. This usually occurs through one-on-one interviews with the sample. Semi-structured interviews allow for greater flexibility and can obtain information that structured questionnaires can’t.

3. Focus Group Interview

Another method is the focus group interview, where the researcher asks a few people to take part in an open-ended discussion on certain themes or topics. The typical group size is 5–15 people. This method allows researchers to delve deeper into people’s opinions, views and experiences.

4. Participant Observation

Participant observation is another method that involves the researcher gaining insight into an experience by joining in and taking part in normal events. The people involved don’t always know they’re being studied, but the researcher observes and records what happens through field notes.

Case study research design can use one or several of these methods depending on the context.
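As a rough illustration of combining several of these methods in one design, the sketch below records which data-collection methods a hypothetical study uses and checks whether more than one source is involved. The class, method names, and research question are all invented for this example:

```python
from dataclasses import dataclass, field

# Hypothetical catalogue of the four methods described above.
AVAILABLE_METHODS = {"survey", "semi-structured interview",
                     "focus group", "participant observation"}

@dataclass
class CaseStudyDesign:
    research_question: str
    methods: set = field(default_factory=set)

    def add_method(self, method: str) -> None:
        # Guard against methods outside the illustrative catalogue.
        if method not in AVAILABLE_METHODS:
            raise ValueError(f"unknown method: {method}")
        self.methods.add(method)

    def uses_multiple_sources(self) -> bool:
        # More than one data source is a precondition for triangulation.
        return len(self.methods) > 1

design = CaseStudyDesign("How does remote work affect team communication?")
design.add_method("semi-structured interview")
design.add_method("participant observation")
print(design.uses_multiple_sources())  # → True
```

This is only a bookkeeping sketch; real research designs are shaped by the question and context, not a fixed menu.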

Case studies are widely used in the social sciences. To understand the impact of socio-economic forces, interpersonal dynamics and other human conditions, sometimes there’s no other way than to study one case at a time and look for patterns and data afterward.

It’s for the same reasons that case studies are used in business. Here are a few uses:

  • Case studies can be used as tools to educate and give examples of situations and problems that might occur and how they were resolved. They can also be used for strategy development and implementation.
  • Case studies can evaluate the success of a program or project. They can help teams improve their collaboration by identifying areas that need improvements, such as team dynamics, communication, roles and responsibilities and leadership styles.
  • Case studies can explore how people’s experiences affect the working environment. Because the study involves observing and analyzing concrete details of life, they can inform theories on how an individual or group interacts with their environment.
  • Case studies can evaluate the sustainability of businesses. They’re useful for social, environmental and economic impact studies because they look at all aspects of a business or organization. This gives researchers a holistic view of the dynamics within an organization.
  • Case studies can be used to identify problems in organizations or businesses. They can help spot problems that are invisible to customers, investors, managers and employees.
  • Case studies are used in education to show students how real-world issues or events can be sorted out. This enables students to identify and deal with similar situations in their lives.

And that’s not all. Case studies are incredibly versatile, which is why they’re used so widely.

Human beings are complex and they interact with each other in their everyday life in various ways. The researcher observes a case and tries to find out how the patterns of behavior are created, including their causal relations. Case studies help understand one or more specific events that have been observed. Here are some common methods:

1. Illustrative case study

This is where the researcher observes a group of people doing something. Studying an event or phenomenon this way can show cause-and-effect relationships between various variables.

2. Cumulative case study

A cumulative case study is one that involves observing the same set of phenomena over a period. Cumulative case studies can be very helpful in understanding processes, which are things that happen over time. For example, if there are behavioral changes in people who move from one place to another, the researcher might want to know why these changes occurred.

3. Exploratory case study

An exploratory case study collects information that will answer a question. It can help researchers better understand social, economic, political or other phenomena.

There are several other ways to categorize case studies. They may be chronological case studies, where a researcher observes events over time. In the comparative case study, the researcher compares one or more groups of people, places, or things to draw conclusions about them. In an intervention case study, the researcher intervenes to change the behavior of the subjects. The study method depends on the needs of the research team.

Deciding how to analyze the information at our disposal is an important part of effective management. An understanding of the case study model can help. With Harappa’s Thinking Critically course, managers and young professionals receive input and training on how to level up their analytic skills. Knowledge of frameworks, reading real-life examples and lived wisdom of faculty come together to create a dynamic and exciting course that helps teams leap to the next level.



Research-Methodology

Case Studies

Case studies are a popular research method in business. They aim to analyze specific issues within the boundaries of a specific environment, situation or organization.

According to their design, case studies in business research can be divided into three categories: explanatory, descriptive and exploratory.

Explanatory case studies aim to answer ‘how’ or ‘why’ questions with little control on the part of the researcher over the occurrence of events. This type of case study focuses on phenomena within the context of real-life situations. Example: “An investigation into the reasons for the global financial and economic crisis of 2008–2010.”

Descriptive case studies aim to analyze the sequence of interpersonal events after a certain amount of time has passed. Studies in business research belonging to this category usually describe culture or sub-culture, and they attempt to discover the key phenomena. Example: “Impact of increasing levels of multiculturalism on marketing practices: A case study of McDonald’s Indonesia.”

Exploratory case studies aim to answer ‘what’ or ‘who’ questions. Data collection in exploratory case studies is often accompanied by additional data collection methods such as interviews, questionnaires or experiments. Example: “A study into differences of leadership practices between private and public sector organizations in Atlanta, USA.”

Advantages of the case study method include data collection and analysis within the context of the phenomenon, the integration of qualitative and quantitative data, and the ability to capture the complexities of real-life situations so that the phenomenon can be studied in greater depth. Disadvantages may include a perceived lack of rigor, challenges associated with data analysis and a limited basis for generalizing findings and conclusions.


John Dudovskiy



Continuing to enhance the quality of case study methodology in health services research

Shannon L. Sibbald

1 Faculty of Health Sciences, Western University, London, Ontario, Canada.

2 Department of Family Medicine, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada.

3 The Schulich Interfaculty Program in Public Health, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada.

Stefan Paciocco

Meghan Fournie, Rachelle Van Asseldonk, Tiffany Scurr

Case study methodology has grown in popularity within Health Services Research (HSR). However, its use and merit as a methodology are frequently criticized due to its flexible approach and inconsistent application. Nevertheless, case study methodology is well suited to HSR because it can track and examine complex relationships, contexts, and systems as they evolve. Applied appropriately, it can help generate information on how multiple forms of knowledge come together to inform decision-making within healthcare contexts. In this article, we aim to demystify case study methodology by outlining its philosophical underpinnings and three foundational approaches. We provide literature-based guidance to decision-makers, policy-makers, and health leaders on how to engage in and critically appraise case study design. We advocate that researchers work in collaboration with health leaders to detail their research process with an aim of strengthening the validity and integrity of case study for its continued and advanced use in HSR.

Introduction

The popularity of case study research methodology in Health Services Research (HSR) has grown over the past 40 years. 1 This may be attributed to a shift towards the use of implementation research and a newfound appreciation of contextual factors affecting the uptake of evidence-based interventions within diverse settings. 2 Incorporating context-specific information on the delivery and implementation of programs can increase the likelihood of success. 3 , 4 Case study methodology is particularly well suited for implementation research in health services because it can provide insight into the nuances of diverse contexts. 5 , 6 In 1999, Yin 7 published a paper on how to enhance the quality of case study in HSR, which was foundational for the emergence of case study in this field. Yin 7 maintains case study is an appropriate methodology in HSR because health systems are constantly evolving, and the multiple affiliations and diverse motivations are difficult to track and understand with traditional linear methodologies.

Despite its increased popularity, there is debate whether a case study is a methodology (ie, a principle or process that guides research) or a method (ie, a tool to answer research questions). Some criticize case study for its high level of flexibility, perceiving it as less rigorous, and maintain that it generates inadequate results. 8 Others have noted issues with quality and consistency in how case studies are conducted and reported. 9 Reporting is often varied and inconsistent, using a mix of approaches such as case reports, case findings, and/or case study. Authors sometimes use incongruent methods of data collection and analysis or use the case study as a default when other methodologies do not fit. 9 , 10 Despite these criticisms, case study methodology is becoming more common as a viable approach for HSR. 11 An abundance of articles and textbooks are available to guide researchers through case study research, including field-specific resources for business, 12 , 13 nursing, 14 and family medicine. 15 However, there remains confusion and a lack of clarity on the key tenets of case study methodology.

Several common philosophical underpinnings have contributed to the development of case study research, 1 which has led to different approaches to planning, data collection, and analysis. This presents challenges in assessing quality and rigour, both for researchers conducting case studies and for stakeholders reading the results.

This article discusses the various approaches and philosophical underpinnings to case study methodology. Our goal is to explain it in a way that provides guidance for decision-makers, policy-makers, and health leaders on how to understand, critically appraise, and engage in case study research and design, as such guidance is largely absent in the literature. This article is by no means exhaustive or authoritative. Instead, we aim to provide guidance and encourage dialogue around case study methodology, facilitating critical thinking around the variety of approaches and ways quality and rigour can be bolstered for its use within HSR.

Purpose of case study methodology

Case study methodology is often used to develop an in-depth, holistic understanding of a specific phenomenon within a specified context. 11 It focuses on studying one or multiple cases over time and uses an in-depth analysis of multiple information sources. 16 , 17 It is ideal for situations including, but not limited to, exploring under-researched and real-life phenomena, 18 especially when the contexts are complex and the researcher has little control over the phenomena. 19 , 20 Case studies can be useful when researchers want to understand how interventions are implemented in different contexts, and how context shapes the phenomenon of interest.

In addition to demonstrating coherency with the type of questions case study is suited to answer, there are four key tenets to case study methodologies: (1) be transparent in the paradigmatic and theoretical perspectives influencing study design; (2) clearly define the case and phenomenon of interest; (3) clearly define and justify the type of case study design; and (4) use multiple data collection sources and analysis methods to present the findings in ways that are consistent with the methodology and the study’s paradigmatic base. 9 , 16 The goal is to appropriately match the methods to empirical questions and issues and not to universally advocate any single approach for all problems. 21

Approaches to case study methodology

Three authors propose distinct foundational approaches to case study methodology positioned within different paradigms: Yin, 19 , 22 Stake, 5 , 23 and Merriam 24 , 25 ( Table 1 ). Yin is strongly post-positivist whereas Stake and Merriam are grounded in a constructivist paradigm. Researchers should locate their research within a paradigm that explains the philosophies guiding their research 26 and adhere to the underlying paradigmatic assumptions and key tenets of the appropriate author’s methodology. This will enhance the consistency and coherency of the methods and findings. However, researchers often do not report their paradigmatic position, nor do they adhere to one approach. 9 Although deliberately blending methodologies may be defensible and methodologically appropriate, more often it is done in an ad hoc and haphazard way, without consideration for limitations.

Cross-analysis of three case study approaches, adapted from Yazan 2015

The post-positivist paradigm postulates there is one reality that can be objectively described and understood by “bracketing” oneself from the research to remove prejudice or bias. 27 Yin focuses on general explanation and prediction, emphasizing the formulation of propositions, akin to hypothesis testing. This approach is best suited for structured and objective data collection 9 , 11 and is often used for mixed-method studies.

Constructivism assumes that the phenomenon of interest is constructed and influenced by local contexts, including the interaction between researchers, individuals, and their environment. 27 It acknowledges multiple interpretations of reality 24 constructed within the context by the researcher and participants which are unlikely to be replicated, should either change. 5 , 20 Stake and Merriam’s constructivist approaches emphasize a story-like rendering of a problem and an iterative process of constructing the case study. 7 This stance values researcher reflexivity and transparency, 28 acknowledging how researchers’ experiences and disciplinary lenses influence their assumptions and beliefs about the nature of the phenomenon and development of the findings.

Defining a case

A key tenet of case study methodology that is often underemphasized in the literature is the importance of defining the case and phenomenon. Researchers should clearly describe the case with sufficient detail to allow readers to fully understand the setting and context and determine applicability. Trying to answer a question that is too broad often leads to an unclear definition of the case and phenomenon. 20 Cases should therefore be bound by time and place to ensure rigour and feasibility. 6

Yin 22 defines a case as “a contemporary phenomenon within its real-life context,” (p13) which may contain a single unit of analysis, including individuals, programs, corporations, or clinics 29 (holistic), or be broken into sub-units of analysis, such as projects, meetings, roles, or locations within the case (embedded). 30 Merriam 24 and Stake 5 similarly define a case as a single unit studied within a bounded system. Stake 5 , 23 suggests bounding cases by contexts and experiences where the phenomenon of interest can be a program, process, or experience. However, the line between the case and phenomenon can become muddy. For guidance, Stake 5 , 23 describes the case as the noun or entity and the phenomenon of interest as the verb, functioning, or activity of the case.

Designing the case study approach

Yin’s approach to a case study is rooted in a formal proposition or theory which guides the case and is used to test the outcome. 1 Stake 5 advocates for a flexible design and explicitly states that data collection and analysis may commence at any point. Merriam’s 24 approach blends both Yin’s and Stake’s, allowing the flexibility in data collection and analysis necessary to meet the needs of the study.

Yin 30 proposed three types of case study approaches—descriptive, explanatory, and exploratory. Each can be designed around single or multiple cases, creating six basic case study methodologies. Descriptive studies provide a rich description of the phenomenon within its context, which can be helpful in developing theories. To test a theory or determine cause and effect relationships, researchers can use an explanatory design. An exploratory model is typically used in the pilot-test phase to develop propositions (eg, Sibbald et al. 31 used this approach to explore interprofessional network complexity). Despite having distinct characteristics, the boundaries between case study types are flexible with significant overlap. 30 Each has five key components: (1) research question; (2) proposition; (3) unit of analysis; (4) logical linking that connects the theory with proposition; and (5) criteria for analyzing findings.
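The three types crossed with single- and multiple-case scope can be enumerated mechanically. The short Python sketch below makes the six combinations explicit (the labels are paraphrases, not Yin's exact wording):

```python
from itertools import product

# Yin's three case study types and the two case scopes.
types = ["descriptive", "explanatory", "exploratory"]
scopes = ["single-case", "multiple-case"]

# Crossing them yields the six basic case study methodologies.
designs = [f"{scope} {type_}" for type_, scope in product(types, scopes)]
print(len(designs))  # → 6
```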

Contrary to Yin, Stake 5 believes the research process cannot be planned in its entirety because research evolves as it is performed. Consequently, researchers can adjust the design of their methods even after data collection has begun. Stake 5 classifies case studies into three categories: intrinsic, instrumental, and collective/multiple. Intrinsic case studies focus on gaining a better understanding of the case. These are often undertaken when the researcher has an interest in a specific case. Instrumental case study is used when the case itself is not of the utmost importance, and the issue or phenomenon (ie, the research question) being explored becomes the focus instead (eg, Paciocco 32 used an instrumental case study to evaluate the implementation of a chronic disease management program). 5 Collective designs are rooted in an instrumental case study and include multiple cases to gain an in-depth understanding of the complexity and particularity of a phenomenon across diverse contexts. 5 , 23 In collective designs, studying similarities and differences between the cases allows the phenomenon to be understood more intimately (for examples of this in the field, see van Zelm et al. 33 and Burrows et al. 34 In addition, Sibbald et al. 35 present an example where a cross-case analysis method is used to compare instrumental cases).

Merriam’s approach is flexible (similar to Stake) as well as stepwise and linear (similar to Yin). She advocates for conducting a literature review before designing the study to better understand the theoretical underpinnings. 24 , 25 Unlike Stake or Yin, Merriam proposes a step-by-step guide for researchers to design a case study. These steps include performing a literature review, creating a theoretical framework, identifying the problem, creating and refining the research question(s), and selecting a study sample that fits the question(s). 24 , 25 , 36

Data collection and analysis

Using multiple data collection methods is a key characteristic of all case study methodology; it enhances the credibility of the findings by allowing different facets and views of the phenomenon to be explored. 23 Common methods include interviews, focus groups, observation, and document analysis. 5 , 37 By seeking patterns within and across data sources, a thick description of the case can be generated to support a greater understanding and interpretation of the whole phenomenon. 5 , 17 , 20 , 23 This technique is called triangulation and is used to explore cases with greater accuracy. 5 Although Stake 5 maintains case study is most often used in qualitative research, Yin 17 supports a mix of both quantitative and qualitative methods to triangulate data. This deliberate convergence of data sources (or mixed methods) allows researchers to find greater depth in their analysis and develop converging lines of inquiry. For example, case studies evaluating interventions commonly use qualitative interviews to describe the implementation process, barriers, and facilitators paired with a quantitative survey of comparative outcomes and effectiveness. 33 , 38 , 39
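One schematic way to picture triangulation in code: themes coded from several data sources are compared, and themes recurring across sources are treated as convergent evidence. All sources and themes below are hypothetical, and this is only a sketch of the underlying logic, not a real analysis workflow:

```python
from collections import Counter

# Hypothetical themes coded from three data sources of a single case.
coded_sources = {
    "interviews":   {"staff shortages", "communication gaps", "burnout"},
    "documents":    {"staff shortages", "budget constraints"},
    "observations": {"communication gaps", "staff shortages"},
}

# Count how many independent sources support each theme.
theme_counts = Counter(
    theme for themes in coded_sources.values() for theme in themes
)

# Themes supported by at least two sources converge across the data.
convergent = {t for t, n in theme_counts.items() if n >= 2}
print(sorted(convergent))  # → ['communication gaps', 'staff shortages']
```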

Yin 30 describes analysis as dependent on the chosen approach, whether it be (1) deductive and rely on theoretical propositions; (2) inductive and analyze data from the “ground up”; (3) organized to create a case description; or (4) used to examine plausible rival explanations. According to Yin’s 40 approach to descriptive case studies, carefully considering theory development is an important part of study design. “Theory” refers to field-relevant propositions, commonly agreed upon assumptions, or fully developed theories. 40 Stake 5 advocates for using the researcher’s intuition and impression to guide analysis through a categorical aggregation and direct interpretation. Merriam 24 uses six different methods to guide the “process of making meaning” (p178) : (1) ethnographic analysis; (2) narrative analysis; (3) phenomenological analysis; (4) constant comparative method; (5) content analysis; and (6) analytic induction.
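As a loose illustration of the most mechanical of these strategies, content analysis is sometimes reduced to counting predefined codes in text. The transcript and code list below are invented, and real qualitative coding is far more interpretive than word counting:

```python
import re
from collections import Counter

# Hypothetical interview transcript fragment.
transcript = (
    "Staff described heavy workload and unclear communication. "
    "Workload pressures came up again when discussing communication."
)

# Predefined codes the (hypothetical) analyst is looking for.
codes = ["workload", "communication"]

# Tokenize and count occurrences of each code.
tokens = re.findall(r"[a-z]+", transcript.lower())
frequencies = {code: Counter(tokens)[code] for code in codes}
print(frequencies)  # → {'workload': 2, 'communication': 2}
```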

Drawing upon a theoretical or conceptual framework to inform analysis improves the quality of case study and avoids the risk of description without meaning. 18 Using Stake’s 5 approach, researchers rely on protocols and previous knowledge to help make sense of new ideas; theory can guide the research and assist researchers in understanding how new information fits into existing knowledge.

Practical applications of case study research

Columbia University has recently demonstrated how case studies can help train future health leaders. 41 Case studies encompass components of systems thinking—considering connections and interactions between components of a system, alongside the implications and consequences of those relationships—to equip health leaders with tools to tackle global health issues. 41 Greenwood 42 evaluated Indigenous peoples’ relationship with the healthcare system in British Columbia and used a case study to challenge and educate health leaders across the country to enhance culturally sensitive health service environments.

An important but often omitted step in case study research is an assessment of quality and rigour. We recommend using a framework or set of criteria to assess the rigour of the qualitative research. Suitable resources include Caelli et al., 43 Houghton et al., 44 Ravenek and Rudman, 45 and Tracy. 46

New directions in case study

Although “pragmatic” case studies (ie, utilizing practical and applicable methods) have existed within psychotherapy for some time, 47 , 48 only recently has the applicability of pragmatism as an underlying paradigmatic perspective been considered in HSR. 49 This is marked by the uptake of pragmatism in randomized controlled trials, recognizing that “gold standard” testing conditions do not reflect the reality of clinical settings 50 , 51 and that a handful of epistemologically guided methodologies cannot suit every research inquiry.

Pragmatism positions the research question as the basis for methodological choices, rather than a theory or epistemology, allowing researchers to pursue the most practical approach to understanding a problem or discovering an actionable solution. 52 Mixed methods are commonly used to create a deeper understanding of the case through converging qualitative and quantitative data. 52 Pragmatic case study is suited to HSR because its flexibility throughout the research process accommodates complexity, ever-changing systems, and disruptions to research plans. 49 , 50 Much like case study, pragmatism has been criticized for its flexibility and use when other approaches are seemingly ill-fit. 53 , 54 Similarly, authors argue that this results from a lack of investigation and proper application rather than a reflection of validity, legitimizing the need for more exploration and conversation among researchers and practitioners. 55

Although occasionally misunderstood as a less rigorous research methodology, 8 case study research is highly flexible and allows for contextual nuances. 5 , 6 Its use is valuable when the researcher desires a thorough understanding of a phenomenon or case bound by context. 11 If needed, multiple similar cases can be studied simultaneously, or one case within another. 16 , 17 There are currently three main approaches to case study, 5 , 17 , 24 each with its own definitions of a case, ontological and epistemological paradigms, methodologies, and data collection and analysis procedures. 37

Individuals’ experiences within health systems are influenced heavily by contextual factors, participant experience, and intricate relationships between different organizations and actors. 55 Case study research is well suited for HSR because it can track and examine these complex relationships and systems as they evolve over time. 6 , 7 It is important that researchers and health leaders using this methodology understand its key tenets and how to conduct a proper case study. Although there are many examples of case study in action, they are often under-reported and, when reported, not rigorously conducted. 9 Thus, decision-makers and health leaders should use these examples with caution. The proper reporting of case studies is necessary to bolster their credibility in HSR literature and provide readers sufficient information to critically assess the methodology. We also call on health leaders who frequently use case studies 56 – 58 to report them in the primary research literature.

The purpose of this article is to advocate for the continued and advanced use of case study in HSR and to provide literature-based guidance for decision-makers, policy-makers, and health leaders on how to engage in, read, and interpret findings from case study research. As health systems progress and evolve, the application of case study research will continue to increase as researchers and health leaders aim to capture the inherent complexities, nuances, and contextual factors. 7


  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw   ORCID: orcid.org/0000-0001-5855-5461 1 , 2 , 3 ,
  • Daeria O. Lawson 1 ,
  • Livia Puljak 4 ,
  • David B. Allison 5 &
  • Lehana Thabane 1 , 2 , 6 , 7 , 8  

BMC Medical Research Methodology, volume 20, Article number: 226 (2020)


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

Peer Review reports

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Figure 1. Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or group of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research as a potentially useful resource for further reading on these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling, for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. described adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. described the effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
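The sampling strategies described above can be sketched in a few lines of Python; the sampling frame, group labels and sample sizes below are purely illustrative.

```python
import random

random.seed(1)  # make the sample reproducible (and replicable, as recommended)

# Hypothetical sampling frame: records retrieved by the database search
frame = [{"id": i, "group": "Cochrane" if i % 5 == 0 else "non-Cochrane"}
         for i in range(1000)]

# Simple random sample of 100 reports: every record has the same
# probability of selection
simple = random.sample(frame, 100)

# Stratified sample: 50 reports per group, so that a smaller stratum
# (here, Cochrane reviews) is not underrepresented in group comparisons
strata = {}
for record in frame:
    strata.setdefault(record["group"], []).append(record)
stratified = [r for group in strata.values() for r in random.sample(group, 50)]

print(len(simple), len(stratified))  # 100 100
```

Fixing the seed and reporting it alongside the search date keeps the selection process transparent and reproducible.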

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.
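As an illustration of a replicable, time-stamped database search, the sketch below builds a query URL for PubMed's public E-utilities esearch endpoint; the search term, journal name and date range are illustrative, and fetching the URL (not done here) would return matching PMIDs.

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (public API); the parameters below
# follow its documented query syntax. Journal and dates are illustrative.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term, journal=None, mindate=None, maxdate=None, retmax=100):
    """Build an esearch URL; requesting it returns matching PubMed IDs."""
    if journal:
        # Restrict the search to one journal using the [Journal] field tag
        term = f'{term} AND "{journal}"[Journal]'
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    if mindate and maxdate:
        # Limit by publication date, giving a replicable time window
        params.update({"datetype": "pdat", "mindate": mindate, "maxdate": maxdate})
    return BASE + "?" + urlencode(params)

url = pubmed_search_url('"methodological review"[Title/Abstract]',
                        journal="Plastic and Reconstructive Surgery",
                        mindate="2010", maxdate="2019")
print(url)
```

Recording the exact query string and the date it was run makes the resulting sampling frame reproducible.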

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help to avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How should I appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
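One common confidence-interval-based approach, estimating a single proportion (e.g. the proportion of articles adhering to a guideline) to a desired precision, can be sketched as follows; the assumed adherence rate and precision are illustrative, and the optional finite population correction applies when the sampling frame is small and known.

```python
import math

def n_for_proportion(p, d, z=1.96, population=None):
    """Number of articles needed to estimate a proportion p within +/- d
    at ~95% confidence (z = 1.96); optional finite population correction."""
    n = math.ceil(z**2 * p * (1 - p) / d**2)
    if population:
        # Correct downward when sampling from a known, finite frame
        n = math.ceil(n / (1 + (n - 1) / population))
    return n

# e.g. expecting ~50% adherence (the most conservative assumption),
# estimated to within +/- 5 percentage points
print(n_for_proportion(0.5, 0.05))                   # 385
print(n_for_proportion(0.5, 0.05, population=1000))  # fewer with the correction
```

Using p = 0.5 maximizes p(1 − p) and so gives the most conservative (largest) sample size when the true proportion is unknown.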

Q: What should I call my study?

A: Other terms that have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. used a preplanned methodology) or used “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
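Before fitting a full GEE or mixed model, the practical impact of clustering can be gauged with the design effect, 1 + (m − 1) × ICC, where m is the average cluster size and ICC the intra-cluster correlation. The sketch below uses illustrative values to show how much information a naive independence analysis overstates.

```python
def design_effect(m, icc):
    """Variance inflation when observations come in clusters of
    average size m with intra-cluster correlation icc."""
    return 1 + (m - 1) * icc

def effective_n(n, m, icc):
    """Sample size a naive (independence) analysis effectively has."""
    return n / design_effect(m, icc)

# 300 articles drawn from journals contributing ~10 articles each, with
# modest within-journal correlation of reporting quality (ICC = 0.1)
deff = design_effect(10, 0.1)           # 1.9
print(deff, effective_n(300, 10, 0.1))  # 300 articles carry ~158 articles' worth of information
```

This is only a back-of-the-envelope check; the regression-based approaches named above remain the appropriate way to obtain correct standard errors and p-values.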

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ] and should therefore be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, however, this area will likely see rapid new advances, with machine learning and natural language processing technologies supporting researchers with screening and data extraction [ 47 , 48 ]. Even so, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
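A minimal sketch of reconciling duplicate extraction: compare two extractors' records field by field and list disagreements for consensus resolution. The article IDs, field names and values are hypothetical.

```python
def discrepancies(extractor_a, extractor_b):
    """Return the fields on which two independent extractions disagree,
    keyed by article ID, for consensus discussion."""
    conflicts = {}
    for article_id in extractor_a.keys() & extractor_b.keys():
        diffs = {f: (extractor_a[article_id][f], extractor_b[article_id][f])
                 for f in extractor_a[article_id]
                 if extractor_a[article_id][f] != extractor_b[article_id].get(f)}
        if diffs:
            conflicts[article_id] = diffs
    return conflicts

# Hypothetical records from two independent extractors
a = {"pmid1": {"blinded": "yes", "n": 120}, "pmid2": {"blinded": "no", "n": 45}}
b = {"pmid1": {"blinded": "yes", "n": 120}, "pmid2": {"blinded": "unclear", "n": 45}}
print(discrepancies(a, b))  # {'pmid2': {'blinded': ('no', 'unclear')}}
```

Logging the resolved value alongside both originals also makes it possible to report an inter-rater agreement statistic.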

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but its intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others do not [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection bias and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
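As a minimal illustration of adjustment by stratification, the sketch below compares completeness of reporting between funded and unfunded articles within strata defined by journal endorsement of a guideline, then averages the stratum-specific differences weighted by stratum size. All counts are hypothetical; regression modelling, as in the studies cited above, generalizes this idea to many confounders at once.

```python
# Each stratum: (n_funded, n_funded_complete, n_unfunded, n_unfunded_complete),
# stratified by whether the journal endorses the reporting guideline.
strata = {
    "endorsing":     (40, 30, 60, 39),   # hypothetical counts
    "non-endorsing": (50, 20, 50, 15),
}

def adjusted_difference(strata):
    """Stratum-size-weighted mean difference in the proportion of
    completely reported articles (funded minus unfunded)."""
    total = sum(nf + nu for nf, _, nu, _ in strata.values())
    return sum((cf / nf - cu / nu) * (nf + nu) / total
               for nf, cf, nu, cu in strata.values())

print(round(adjusted_difference(strata), 3))
```

Comparing the adjusted difference with the crude (unstratified) difference gives a quick sense of how much the confounder distorts the association.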

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target sample either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate, justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to the choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe or compare methods, and the factors associated with them. Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Krnic Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
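The descriptive summaries mentioned above (counts with percentages, means with standard deviations, medians with interquartile ranges) can be computed with the Python standard library alone. The extraction data below are hypothetical, purely to show the three summary formats:

```python
from statistics import mean, stdev, median, quantiles

# Hypothetical extraction data: abstract word counts for a sample of reports,
# and whether each report adhered to a reporting item.
abstract_words = [250, 310, 275, 290, 330, 260, 305, 280]
adhered = [True, False, True, True, False, True, True, False]

# Counts (percent)
n = len(adhered)
count = sum(adhered)
print(f"Adherence: {count}/{n} ({100 * count / n:.1f}%)")

# Means (standard deviation)
print(f"Words: mean {mean(abstract_words):.1f} (SD {stdev(abstract_words):.1f})")

# Medians (interquartile range); quantiles() splits the data into quartiles
q1, q2, q3 = quantiles(abstract_words, n=4)
print(f"Words: median {q2:.1f} (IQR {q1:.1f}-{q3:.1f})")
```

Choosing between mean (SD) and median (IQR) follows the usual rule of thumb: the former for roughly symmetric distributions, the latter for skewed ones.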

Methodological studies that are analytical

Some methodological studies are analytical: “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease” [ 89 ]. In the case of methodological studies, all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
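A comparison of proportions like this can be sketched as a two-proportion z-test with a pooled standard error. The counts below are hypothetical, not the actual data from Tricco et al.:

```python
from math import sqrt, erfc

def two_proportion_z(successes1, n1, successes2, n2):
    """Two-sided z-test of H0: p1 == p2, using the pooled standard error."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal: P(|Z| > |z|) = erfc(|z|/sqrt(2))
    p_value = erfc(abs(z) / sqrt(2))
    return z, p_value

# Hypothetical counts: reviews with positive conclusion statements,
# non-Cochrane vs Cochrane
z, p = two_proportion_z(60, 100, 40, 100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice one might prefer a chi-squared test or an exact test from a statistics package, but the logic of the null hypothesis (equal proportions in the two groups) is the same.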

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a certain topic. Systematic sampling can also be used when random sampling may be challenging to implement.
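The three sampling strategies can be illustrated in a few lines. The sampling frame, sample size, and year metadata below are hypothetical:

```python
import random

# Hypothetical sampling frame: identifiers of eligible trial reports
frame = [f"report_{i:03d}" for i in range(1, 201)]
k = 20

rng = random.Random(42)  # fixed seed so the draw is reproducible

# Random sampling: every report has an equal chance of selection
random_sample = rng.sample(frame, k)

# Systematic sampling: every i-th report after a random start
interval = len(frame) // k
start = rng.randrange(interval)
systematic_sample = frame[start::interval][:k]

# Purposeful sampling: restrict the frame by a criterion, e.g. publication year
years = {r: 2010 + (i % 10) for i, r in enumerate(frame)}  # hypothetical metadata
purposeful_sample = [r for r in frame if years[r] >= 2015]

print(len(random_sample), len(systematic_sample), len(purposeful_sample))
```

Systematic sampling only requires an ordered list and a sampling interval, which is why it can be easier to implement than random sampling when the frame is, say, a journal's table of contents.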

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .

Figure 2. A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items for Systematic reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

References

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.


Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.


Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Bmj. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.


Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.


Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.


Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M, editor. A dictionary of epidemiology. 5th ed. Oxford: Oxford University Press; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.


Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.


Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada


Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20 , 226 (2020). https://doi.org/10.1186/s12874-020-01107-7


Received : 27 May 2020

Accepted : 27 August 2020

Published : 07 September 2020

DOI : https://doi.org/10.1186/s12874-020-01107-7


Keywords

  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research

BMC Medical Research Methodology

ISSN: 1471-2288


  • Open access
  • Published: 22 February 2024

Understanding implementation of findings from trial method research: a mixed methods study applying implementation frameworks and behaviour change models

  • Taylor Coffey   ORCID: orcid.org/0000-0002-6921-8230 1 ,
  • Paula R. Williamson 2 &
  • Katie Gillies 1

on behalf of the Trials Methodology Research Partnership Working Groups

Trials volume 25, Article number: 139 (2024)


Background

Trial method research produces recommendations on how best to conduct trials. However, findings are not routinely implemented into practice. To better understand why, we conducted a mixed methods study on the challenges of implementing trial method research findings into UK-based clinical trial units.

Methods

Three stages of research were conducted. Firstly, case studies of completed projects that provided methodological recommendations were identified within trial design, conduct, analysis, and reporting. These case studies were used as survey examples to query obstacles and facilitators to implementing method research. Survey participants were experienced trial staff, identified via email invitations to UK clinical trial units. The survey assessed the case studies’ rates of implementation and the demographic characteristics of trial units through the Consolidated Framework for Implementation Research. Further, interviews were conducted with senior members of trial units to explore obstacles and facilitators in more detail. Participants were sampled from trial units that indicated their willingness to participate in interviews following the survey. Interviews, and their analysis, were structured via the Capability, Opportunity, Motivation Model of Behaviour. Finally, potential strategies to leverage lessons learned were generated via the Behaviour Change Wheel.

Results

A total of 27 UK trial units responded to the survey. Rates of implementation varied across the case studies, with most trial units implementing recommendations in trial conduct and only a few implementing recommendations in reporting. However, most reported that implementing recommendations was important but that they lacked the resources to do so. A total of 16 senior members of trial units were interviewed. Several themes were generated from the interviews, falling broadly into categories related to the method recommendations themselves, the trial units, or external factors affecting implementation. Belief statements within themes indicated resource issues and awareness of recommendations as frequent obstacles to implementation. Participation in trial networks and recommendations packaged with relevant resources were frequently cited as implementation facilitators. These obstacles and facilitators mirrored the survey results. Results were mapped, via the Behaviour Change Wheel, to intervention functions likely to change the behaviours underlying the obstacles and facilitators identified. These intervention functions were developed into potential solutions to reduce obstacles and enhance facilitators to implementation.

Conclusions

Several key areas affecting implementation of trial method recommendations were identified. Potential methods to enhance facilitators and reduce obstacles are suggested. Future research is needed to refine these methods and assess their feasibility and acceptability.

Peer Review reports

Clinical trials provide evidence to support decisions about practice in many aspects of healthcare. As well as generating evidence to inform decision making, trials need to, themselves, be informed by evidence in how they are designed, conducted, analysed, and reported to ensure they produce the highest quality outputs [ 1 , 2 , 3 ]. This is essential to guarantee not only that trials contribute to evidence-based practice, but that all phases of the trial ‘lifecycle’ also support efforts to minimise research waste by building on best practice for how to design, conduct, analyse, and report trials [ 1 , 2 , 4 , 5 ].

Research into how best to design, conduct, analyse, and report clinical trials, known as trial method research [ 1 , 3 ], has expanded in recent years. For example, a widely studied aspect of trial conduct is recruitment. One project, the Online Resource for Research in Clinical triAls (ORRCA), is an ongoing effort to scope methodological work in recruitment. In their initial publication, the ORRCA team identified 2804 articles, published up to 2015, regarding recruitment [ 6 ]. Their most recent update in February 2023 found 4813 eligible papers, an increase of 70% in less than 5 years from the initial publication [ 6 , 7 ]. As this is just one area of trial methodology, it represents only a fraction of the work being done in this space. With such a large volume of research being generated, coordinated efforts are needed to ensure that learning is shared across research groups to prevent duplication of effort and promote collaboration. There is recognition across the trial method research community that there is significant variability in terms of whether and how the findings from this methodological research influence ‘practice’ with regard to trial design, conduct, analysis, or reporting [ 3 , 8 , 9 ]. Similar to clinical practice, where evidence can fail to be implemented [ 10 , 11 ], it is critical that the challenges and opportunities to implementing trial method research findings into practice are understood. This understanding will then maximise the potential for this research to improve health by improving the trials themselves.

Barriers to implementation are known to be complex and involve multifactorial influences [ 12 , 13 , 14 ]. Whilst this is established for clinical evidence [ 15 ], it is also likely to be the case for methodological evidence—yet the specific challenges may be different. Implementation science (and in particular the use of behavioural approaches which are theory-informed) provides a rigorous method for identifying, diagnosing, and developing solutions to target factors with the potential to enhance or impede behaviour change and subsequent integration of those changes [ 2 , 10 , 14 , 16 ]. Data generated using these theoretical approaches are likely more reproducible and generalisable than alternatives [ 2 , 16 , 17 , 18 ]. The potential for lessons from behavioural science to investigate who needs to do what differently, to whom, when, and where, within the context of clinical trials is receiving attention across various stages of the trial lifecycle [ 2 ]. The overall aim of this study was to generate evidence for the challenges and opportunities trialists experience with regard to implementing the results from trial method projects that target the design, conduct, analysis, or reporting of trials.

Overall study description

We designed a sequential exploratory mixed methods study with three linked components:

Case studies : which identified existing examples of trial method research projects with actionable outputs that were believed to influence trial design, conduct, analysis, or reporting practice. “Actionable outputs” were defined broadly as any resource, generated from these projects, that has led to an actual or potential change in the design, conduct, analysis, or reporting of trials.

Survey : which identified the broad range, and frequency, of challenges and opportunities to the implementation of trial method research. Participants were trialists from across the UK, specifically the Clinical Research Collaboration (UKCRC) Network of Registered Clinical Trials Units (CTUs). The UKCRC was established to “help improve the quality and quantity of available expertise to carry out UK clinical trials.” ( https://www.ukcrc.org/research-infrastructure/clinical-trials-units/registered-clinical-trials-units/ ).

Interviews : which explored in depth the challenges and opportunities for implementing trial method research from case study examples and general experience in CTU management.

Theoretical considerations and rationale

It is important when selecting theoretical frameworks, and even more so when combining them within one study, to provide an explicit rationale for the choice of framework(s) [ 14 ]. This study utilised a combined theoretical approach, with the Consolidated Framework for Implementation Research (CFIR) [ 13 ] guiding the survey development, and the Capability, Opportunity, Motivation Model of Behaviour (COM-B) [ 18 ] guiding the interview topic guide and analysis. CFIR was designed to synthesise the key elements that underpin implementation efforts [ 13 ]. It was selected in this study to guide the survey design because it provided a systematic framework to structure our inquiry. The CFIR is comprehensive in its descriptions of constructs and how they affect implementation across different organisational levels [ 13 ]. As the survey was intended to focus more explicitly on the organisational structure of the CTUs, the CFIR possessed the context-specific language and concepts to describe and prioritise our initial findings. The COM-B, in contrast, is broader in its scope as a general theory of behaviour and behaviour change. As implementation efforts largely rely on the adoption and maintenance of new behaviours, or changes to existing ones, behaviour change theory is useful for describing the determinants of behaviour and how they relate to one another [ 18 ]. This latter point is particularly relevant for implementation efforts, as they are likely to consist of multiple changed behaviours, across different contexts, within an organisation to deliver the ultimate objective of implementing research findings [ 19 ]. The COM-B’s capacity to accommodate such complexity outside the prescribed constructs of the CFIR ensured that all factors relevant to implementation were considered [ 14 ]. The approaches are further complementary in their conception of the socio-ecological layers within CTUs in which implementation takes place.
Again, the CFIR provides the context-specific labels for, and the ability to prioritise, these layers, with the COM-B acting as a methodological “safety net” to further describe or categorise findings. Finally, the COM-B is linked to a method of intervention development (and policy functions) known as the Behaviour Change Wheel (BCW). Through the BCW, nine potential categories of interventions are linked to the behavioural domains of the COM-B [ 18 ]. This link allows potential solutions to be identified based on the domains found to be most relevant or targetable for the behaviour intended to change.

Case studies

Participants

Members of the Trials Methodology Research Partnership (TMRP) Working Groups ( https://www.methodologyhubs.mrc.ac.uk/about/tmrp/ ) were invited to contribute. Members of these working groups specialise in one or more areas of clinical trial methodology, and all have academic and/or professional interests in improving the quality of trials.

Data collection

An email was sent directly to the TMRP Working Group co-leads to solicit case studies of trial method implementation projects with actionable outputs. The email included a brief description of the project and the aims of the case study selection, followed by two questions. The first question asked for any examples of trial method research that respondents were aware of. The second question asked respondents to provide what they believed were the “actionable outputs” (i.e. the resources generated that lead to implementation of findings) of those method research projects. Examples of potential actionable outputs could include published papers, guidelines or checklists, template documents, or software packages.

Data analysis

Responses were collated and reviewed by the research team (TC, PW, KG) for their relevance to the four aspects of design, conduct, analysis, and reporting of trials. These responses were compared with a list of published outputs collected by the HTMR (Network Hubs Guidance pack, mrc.ac.uk) to ensure a wide-reaching range of available trial method research. One case study was chosen for each domain of trial method research through team consensus, resulting in four case studies incorporated into the survey.

Directors (or individuals nominated by Directors) of the 52 UKCRC-registered CTUs were invited to participate via email from a central list server independent of the research team.

Inclusion and exclusion criteria

Participants were included if they had been involved in any aspect of trial design, delivery, analysis, or reporting within the network of UKCRC-registered CTUs. Any individuals identifying as not reading, writing, or speaking English sufficiently well to participate, or those unable to consent, were excluded.

The survey was designed, and data collected, via the online survey platform Snap (Version 11). A weblink was distributed to the 52 UKCRC-registered CTUs, along with a description of the study, and a Word document version of the survey (available in Additional file 1 : Appendix 1). CTU staff were instructed to distribute this Word version of the survey to members of staff and collate their responses. Collated responses were then entered into the survey at the provided weblink. The survey was designed utilising the Inner Domains of the CFIR [ 13 ] to broadly capture participant views on how trial method research informed the design, conduct, analysis, and reporting of trials run through their CTU. It assessed the perceived organisational structure of the CTU and how those demographics influence the adoption of trial method research. It also asked specific questions about each of the case studies selected from the previous phase. Responses consisted of a mixture of single-choice items, Likert scales from 1 to 9 (1 being negative valence and 9 being positive valence), and free-text.

Examples of trial method research projects suggested by respondents (or research area, e.g., recruitment, if no specific project name was given) were collated and frequency counts for each generated. Frequency counts for the types of actionable outputs from these projects were also calculated. Likert scale responses (ranging from 1 to 9) were analysed through descriptive statistics (mean, standard deviation) to compare responses within and between CTUs, the unit of analysis. Some CFIR domains were assessed by more than one question, and so responses to those questions were averaged to give an overall score for the domain. Scores across all domains for a given site were averaged to give a “general implementation” score. The individual scores on measures of these constructs are presented below using a coloured heatmap to highlight areas of high (green) to low (red) activity and provide easy comparison across and within sites. Additional free-text data were analysed using a directed content analysis approach [ 20 ]. Terms and phrases that occurred frequently within this data were collated and then themes summarising barriers and opportunities were generated.
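The scoring procedure described above (averaging multi-question domains, then averaging domain scores into a "general implementation" score per CTU) can be sketched in a few lines. The domain names and Likert responses below are illustrative placeholders, not the study's actual items or data.

```python
# Hypothetical sketch of the survey scoring: 1-9 Likert items are averaged
# within each CFIR domain, and domain scores are then averaged into a
# "general implementation" score for the CTU (the unit of analysis).
from statistics import mean

# One CTU's Likert responses, grouped by CFIR Inner Setting domain
# (some domains are measured by more than one question).
responses = {
    "culture": [7, 8],
    "implementation_climate": [6],
    "available_resources": [3, 4, 2],   # e.g. money, training, time
    "networks_communication": [7],
}

# Average multi-question domains into a single domain score.
domain_scores = {domain: mean(items) for domain, items in responses.items()}

# Average across domains for the CTU's general implementation score.
general_score = mean(domain_scores.values())
```

Scores computed this way for each site could then be laid out as the coloured heatmap the authors describe, one row per CTU.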

Survey respondents indicated their willingness to be contacted for participation in an interview. Emails were sent directly to those who indicated interest in participating.

Recruitment and data collection

Interviews were conducted by a trained qualitative researcher (TC) and structured using a theory-informed topic guide. This topic guide (Additional file 2 : Appendix 2) was developed using the COM-B Model of Behaviour [ 18 ]. Questions prompted interview participants to consider the behavioural influences relevant to implementing findings from trial method research generally and from the selected case studies. Interviews were conducted and recorded through Microsoft Teams. Verbal consent to participate in interviews was obtained and recorded prior to interviews beginning. Recordings were transcribed verbatim by a third party (approved by the University of Aberdeen), de-identified, and checked for accuracy.

Data from interviews were imported into NVivo (V12, release 1.6.1) and analysed initially using a theory-based (COM-B) content analysis [ 20 ], which allowed data to be coded deductively, informed by the domains of the COM-B. This involved highlighting utterances within the transcripts and assigning them to one of the six behavioural sub-domains: “psychological capability”, “physical capability”, “social opportunity”, “physical opportunity”, “reflective motivation”, or “automatic motivation”. The next phase of analysis was inductive, allowing identification of additional themes that may have been outside the COM-B domains but were still deemed relevant to the research question. One author (TC) completed coding independently for all interviews. A second author (KG) reviewed a 10% sample of interviews and coded them independently. Coding was then compared for agreement and any discrepancies resolved. Data were compared and coded through a process of constant comparison to provide a summary of key points that interview participants considered to be important. Interview data were specifically explored for any difficulties reported by trialists with regard to the challenges, opportunities, and potential strategies to facilitate the implementation of findings. These data were collected under “belief statements”, which grouped similar statements made across participants under a descriptive heading informed by the statements’ COM-B domain. For instance, similar statements on the availability of resources could be collected under a belief statement, “We do not have enough resources”, representing a barrier within the COM-B domain of “physical opportunity”. Belief statements were then analysed for themes across COM-B domains. These themes were developed as narrative summaries of recurrent experiences, barriers, and facilitators to implementation of methods findings. Themes are presented below with their component COM-B domains indicated within the theme’s title.
This thematic framework was reviewed, refined, and agreed by consensus of the research team.
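The authors state that the second coder's 10% sample was "compared for agreement" but do not specify an agreement statistic. As a minimal illustration only, a simple percent-agreement check over hypothetical COM-B domain labels might look like the following (a formal statistic such as Cohen's kappa would additionally correct for chance agreement):

```python
# Minimal sketch: share of utterances assigned the same COM-B domain label
# by two independent coders. Labels below are hypothetical examples.
def percent_agreement(coder_a, coder_b):
    """Proportion of utterances given identical labels by both coders."""
    assert len(coder_a) == len(coder_b), "coders must label the same utterances"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

a = ["physical opportunity", "reflective motivation", "social opportunity"]
b = ["physical opportunity", "automatic motivation", "social opportunity"]
agreement = percent_agreement(a, b)  # 2 of 3 labels match
```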

Identifying potential solutions

Relevant COM-B domains identified during the interviews and agreed by group consensus were mapped to behavioural intervention functions. Mapping of intervention functions was based on instructions within a behavioural intervention guideline known as the Behaviour Change Wheel (BCW) [ 18 ]. The BCW describes the intervention functions that are believed to influence the individual domains of the COM-B. For example, a lack of psychological capability could be targeted with the intervention function “Education”, which is defined as “increasing knowledge or understanding” [ 18 ]. More than one intervention function is available for each COM-B domain and domains often share one or more intervention functions in common. Utilising the definitions and examples of intervention functions applied to interventions, the research team generated potential solutions based on the available intervention functions targeting the relevant COM-B domains. These solutions were additionally based on the research team’s impressions of targetable belief statements within relevant COM-B domains. For example, if a lack of knowledge was identified (and thus a deficit in psychological capability), a blanket educational intervention would not necessarily be fit for purpose if only a particular group within an organisation lacked that knowledge whilst others did not. The potential solutions were refined through application of the Affordability, Practicability, Effectiveness and cost-effectiveness, Acceptability, Side-effects and safety, Equity (APEASE) criteria. Application of these criteria to the selection of intervention functions is recommended by the BCW so that research teams can reflect on factors that may limit the relevance and suitability of potential solutions to stakeholders [ 18 ].
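The mapping step can be pictured as a lookup from identified COM-B domains to candidate intervention functions. The table below is a simplified subset of the published BCW linkage, assumed here purely for illustration; it is not the study's actual mapping, and the full BCW matrix covers all six sub-domains and nine intervention functions.

```python
# Illustrative partial mapping from COM-B domains to BCW intervention
# functions (simplified subset of the published linkage, for illustration).
BCW_MAP = {
    "psychological capability": {"education", "training", "enablement"},
    "physical opportunity": {"restriction", "environmental restructuring",
                             "enablement"},
    "social opportunity": {"restriction", "environmental restructuring",
                           "modelling", "enablement"},
    "reflective motivation": {"education", "persuasion", "incentivisation",
                              "coercion"},
}

def candidate_functions(domains):
    """Union of intervention functions linked to the identified domains."""
    out = set()
    for domain in domains:
        out |= BCW_MAP.get(domain, set())
    return out

# The text's example: a lack of psychological capability can be targeted
# with "education", among other functions.
candidates = candidate_functions(["psychological capability"])
```

Solutions drawn from the candidate set would then be screened against the APEASE criteria, as the authors describe.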

Six of 16 Working Group co-leads responded with potential case studies for inclusion. Participants identified a number of trial method research projects, and those projects’ outputs, via free-text response to the email prompts. A total of 13 distinct projects were reported by the respondents, primarily in the areas of trial design and analysis, with a particular emphasis on statistical and data collection methods. As a result, case studies for method research targeting the other two areas of the trial lifecycle, conduct and reporting, were selected from the list collated by the research team. The four case studies [ 21 , 22 , 23 , 24 ] were selected to consider the variability of project focus across the four areas of trial method research. The selected case studies are described below in Table  1 .

Site demographics

A total of 27 UK CTUs (Table  2 ) responded to the survey, just over half of all UKCRC-registered CTUs ( N  = 52). CTUs had primarily been in operation from 10 to 20 years (55%) or more than 20 years (30%). The sizes of CTUs, by staff number, were divided fairly equally between the small (< 50), medium (50–100), and large (100 +) categories. Most sites characterised themselves as moderately ( n  = 12) to highly stable ( n  = 12) in regard to staff turnover.

Inner domains of the CFIR: culture, implementation climate, networks, and communication

Alongside the structural demographic characteristics described above, we assessed other constructs within the CFIR’s Inner domains. The individual scores on our measures of these constructs are presented in Table  3 below using a coloured heatmap to highlight areas of high to low activity and provide easy comparison across and within sites. Most sites ( n  = 24) achieved general implementation scores between 5 and 7. Typically, scores were reduced due to low ratings for available resources (i.e. money, training, time) within the CTU. Time possessed the lowest individual score, with an average of 3.2 (SD = 1.9). The individual item with the highest average score, 8.2 (SD = 1.3), asked whether relevant findings were believed to be important for the CTU to implement. Finally, available training/education resources were the item with the highest variability across sites, with a standard deviation of 2.2.

Implementation of example case studies

The two case studies that were the most widely implemented were the DAMOCLES charter and the guidelines for statistical analysis plans. Both case studies were implemented fully by a majority of sites ( n  = 21) with a further minority implementing them at least partially ( n  = 5). The recommendations for internal pilots were fully implemented in some sites ( n  = 8), partially in others ( n  = 9), but not at all in still others ( n  = 10). The RECAP guidance was not implemented at all in 20 sites, partially in five, and fully in two.

Survey participants reported several key obstacles and facilitators to implementation of the case studies. These factors are summarised, along with the degree of implementation of each case study across the CTUs, in Table  4 below. Two of the most frequently cited factors to enhance or hinder implementation related to the dissemination of findings. The first concerned how findings were packaged for dissemination, with survey respondents noting the utility of templates and write-ups of examples. The second related to the communication of new findings. Respondents mentioned professional networks and conferences as useful in keeping CTU staff up to date on relevant methods research. Workshops, presentations, and other events within those networks also provided these same opportunities, with the additional benefit of being tailored to translating findings into practice. A frequently mentioned barrier described potentially inadequate dissemination efforts, as participants cited a lack of capacity to “horizon scan” for new findings. Time and funding constraints were described as leading to this lack of capacity. Finally, within communication, participants reported that if a member of their CTU had been involved in methods research, it was more likely to be implemented.

Participant characteristics

Sixteen individuals (Table  5 ) participated in interviews, representing CTUs from across the UK. Participants were primarily directors or other senior members of their respective CTUs. Half of respondents ( n  = 8) had been in these roles for less than 5 years, with a further seven being in their roles from 5 to 10 years. Most ( n  = 11) had been working in trials generally for 20–29 years.

Interview findings

Interviews were conducted remotely and typically lasted 30–45 min. Belief statements were generated under the domains of the COM-B. Those domains were psychological capability, reflective motivation, automatic motivation, physical opportunity, and social opportunity. Cross-domain themes were generated from related belief statements to summarise overall content. Seven themes were identified: “The influence of funders”, “The visibility of findings”, “The relevance and feasibility of findings”, “Perceived value of implementation research”, “Interpersonal communication”, “Existing work commitments”, and “Cultural drivers of implementation”. Themes are presented in detail below with the relevant COM-B domains to which they are linked presented in parentheses. The themes are further organised into the socio-ecological levels for which they are most relevant, i.e. at the level of the CTU (Internal), outside the CTU (External), or to do with the findings themselves (Findings).

External factors

Theme 1—The influence of funders (social/physical opportunity and reflective motivation).

Interview participants spoke of the influence of funders as important to which trial method research findings are implemented. These influences comprised both the resource implications of funding allocation (physical opportunity) and the cultural influence that funders possess (social opportunity). With regard to resource implications, there were restrictions on what implementation-related activities trial staff could perform, based on the lack of protected time within their roles that could be allocated to implementation (physical opportunity). Secondly, limitations on time were superseded by requirements set out by funders on which trial method research findings needed to be implemented within their trials. If particular findings were deemed necessary by bodies like the NIHR, CTU staff had no choice but to find time to implement them (reflective motivation). Related to these beliefs was the idea that clear efforts at implementing relevant trial method research findings could signal to funders that the CTU team possessed the skills required to conduct trials, thereby increasing the opportunities for funding through a sort of “competitive edge” (reflective motivation).

“I think the progression criteria, as I said, I think is being driven more by the funders expectations rather than anything else, and then other people go, “Well, if the funder expects to see it, I just have to do it,” so then... they might grumble, basically, but if you’re going to put your grant application in, and you want it to be competitive, this is what we have to do.” – Site 7, director

Theme 2—The visibility of findings (social/physical opportunity and psychological capability).

One of the main barriers cited by interviewees was simply knowing about trial method research findings. Participants described the limits on their own time and capacity in “horizon scanning” for new publications and resources, which was often compounded by the sheer volume of outputs (psychological capability).

“I mean probably the greatest competing demand is being up to speed on what’s coming out that’s new. That’s probably where I would feel that… yes, trying to… I know everyone feels like they don’t have enough time to just read and be aware of the stuff coming out, so that’s… I’m more anxious, and I know others are, that there’s stuff being done that we don’t even know about to try and implement, so in some ways we might almost be repeating the wheel of trying to improve best practice in a topic area, and actually someone’s done loads of work on it.” – Site 3, director.

However, interviewees highlighted several resources as means to close this knowledge gap. Dedicated channels for dissemination of important trial method research findings were one means to stay on top of emerging literature. These could be newsletters, websites, or meetings where part, or all, of the agenda was set aside for updates on findings (physical opportunity). Other resources mentioned included more social opportunities to hear about the latest research, at conferences like the International Clinical Trials Methodology Conference (ICTMC) or network events like training and workshops. These events were also cited as important venues to share lessons learned in implementing trial method research findings or to air general frustrations on the complexities of trial conduct and management (social opportunity). Finally, these networking opportunities were identified by interviewees as potent incubators for collaborations, inspiring new trial method projects or establishing links to assess existing ones. Interviewees reported that the opportunity to be involved in these methods projects worked to also raise awareness of their outputs as well as increasing the perceived relevance of these outputs to CTU staff (psychological capability).

“Again, I think I was very aware of [statistical analysis plans] in my previous role as well, so I’d been along to some of the stats group meetings that the CTU networks have run where this had been discussed before it was published. I think they certainly involved a lot of the CTUs in developing that as well and in canvassing comments that went into the paper. I think potentially that would have been easier for people to implement because we’d had some involvement in the developmental bit as well as it went along.” – Site 22, academic

Internal factors

Theme 3—Interpersonal communication (psychological capability, social/physical opportunity, and automatic motivation).

As our participants were senior members of their respective CTUs, they often described aspects of their role and how their efforts mesh with the overall culture of the CTU. A recurrent feature reported by interviewees relating to their role was to be the central figure in communicating the importance of implementation convincingly to their staff and trial sites. This meant they had to advocate for the relevance of trial method research findings to their CTU staff and motivate staff on changing their processes to align with the findings (reflective motivation). This aspect of communication could be more challenging with chief investigators if they were not convinced of the utility of implementation within their own trials, particularly if they anticipated opportunity or resource cost to hosting the research itself or the process changes of implementing findings (social/physical opportunity). Regardless of where it originated, such resistance to change could be frustrating and draining to senior members that were attempting to spearhead implementation efforts (automatic motivation).

“R – Was it ever stressful or frustrating to implement certain things? P – Yes, I would say it can definitely be. I would be lying if I said no. Because change is always.. there’s always a resistance to change in every institution, so it’s not easy to change things. Yes, it can be frustrating, and it can be painful. Things that help are probably when it’s a requirement and when it’s... whatever you do it goes into your SOPs, and then you say, ‘This is how I have to do it, so this is how we will do it.’ But getting to the step of the institution to recognise it, and the people you’re working with, it can be frustrating because there could be arguments like are hard to argue back like, ‘We don’t have the resources, we don’t have the time. Now is not the moment, we’re...’ so there’s all of these things, but also there’s the effort that it takes to convince people that it’s worthwhile doing the change. It’s definitely... it can be frustrating and disappointing, and it takes a lot of energy.” – Site 21, group lead

However, some broader cultural aspects of the CTU appeared to reduce such frustrations. Participants described that their CTU members were often open to new ideas and that such receptivity facilitated implementation (social opportunity). This openness to change was leveraged through the communication skills of senior staff that were previously mentioned and their ability to solicit opinions and feedback from their staff (psychological capability). Such discussions often took place at internal trainings or meetings that incorporated some focus on implementation efforts for the CTU staff (physical opportunity). These opportunities not only afforded discourse on the practicalities of implementation but also helped to raise general awareness of trial method research findings as well as potential adaptations of findings to better suit the individual requirements of the CTU.

“Yes, I mean at our Trials Unit, I run our monthly trial methodology meetings, so these are predominantly attended by statisticians, so we do focus more on trial methodology that’s more statistical in flavour, but we do always cover the new updates and any key publications we’ve seen. I find that’s a great format for getting people interested and excited in these new methods and distilling them down. Generally, across the unit, we have wider… they’re like two forums, just where everyone gets together, and we tend to have bitesize sessions there where we can distil something. Actually, they’re quite useful because internally, we can distil something new to people but in a bitesize chunk so that people are aware and then can take it further and develop specific… if it’s something quite big, then we can develop working groups to look into it and come to a more solid plan of how we can actually implement it if it seems useful.” – Site 25, academic

Theme 4—Existing work commitments (physical opportunity).

Whilst openness to implementation at the CTU, driven by leadership advocating for its importance, was often present in the interviews, resource restrictions were still an ever-present factor limiting the opportunities for CTU staff to improve practice. Because any change to be implemented required time and effort to action, mentions of these opportunity costs were universal across our sample. The CTU staff, in line with their remit, must prioritise the design of new trials and the delivery of ongoing trials.

“But you know, it’s real, it’s a real challenge and intention to be able to keep your eye on the ball and the many different competing priorities that there are. It does sound like a bit of a weak excuse when you say it out loud. So, our focus is on doing the trials, but of course we should always be trying to have an eye on what is the evidence that it’s underpinning what we do in those trials. We should. But with the best will in the world, it’s writing applications, responding to board comments, getting contracts done once things are funded, getting trials underway. The focus is just constantly on that work of trying to win funding and delivering on what you said you were going to deliver, in amongst all the other business of running a CTU or recruiting staff, managing funding contracts, dealing with our institutions, our universities, our local trusts. All the efforts that go into getting trials underway in terms of writing documents and approvals and recruiting sites, you know?” – Site 10, director

Mitigating these resource restrictions often meant looking to other strategies (mentioned in the next theme) that might allow CTU staff to carve out some capacity towards implementation.

Theme 5—Cultural drivers of implementation (psychological capability, physical opportunity, reflective motivation).

As senior members of their respective CTUs, our participants displayed clear motivations to implement trial method research. They expressed that they would like to see staff in the CTU improve both the uptake of trial method research findings and the generation of their own method research. This was part of a larger desire to create a culture within their CTUs that encourages and supports research (reflective motivation).

“I hope that within the Trials Unit, I also create an environment where I’m trying to encourage people to not always work to capacity, so they do have the headroom to go away and explore things and to try things and to develop their own research ideas, so that we can say to people okay. Whether it’s looking at different patient information sheets, whether it’s looking at different recruitment strategies, whether it’s looking at different ways of doing data cleaning across sites, looking at different ways of delivering training to people for data entry because we’ve lots of different ways of delivering training and we still get a very high error rate. I’m sure there are other Trials Units that are doing the same thing, so we should be publishing and sharing that with Trials Units. I’m trying to create that environment.” – Site 1, director

Some potential avenues to promote that development were offered by participants. Firstly, participants were confident in their teams' expertise and ability to either generate or implement trial method research findings. This was evidenced through ongoing work within their CTU or discussions with staff about areas to which they would like to dedicate time (psychological capability). An important role for senior members of staff is then to set out expectations for how their teams can leverage this expertise in implementing or generating trial method research findings, and to offer the necessary support for that to happen. One option put forward to facilitate this leveraging of expertise was to provide career development opportunities centred on implementation. This could simply be allocating staff time to focus on implementation projects, protecting it from usual work commitments. A further development opportunity would be appointing so-called "champions" within the CTU whose explicit role is to identify trial method research findings and coordinate their implementation (physical opportunity).

“Because sometimes what I think is [...] you need a champion, you need every CTU to implement these things and because every trial or every trials unit is composed of different people, so I would probably champion the SAPs part because I’m the statistician, and I make sure that that goes ahead, but someone else needs to champion the one on the patients, probably. Not necessarily. I would champion for all of these things, but because... I think it's finding these people that are the ones that see the value and then be the drivers of the unit. I think that will probably help. […] But I honestly think the best way is just reaching a champion for each of these areas and reaching out to them and saying, ‘Can you... what do you think of this, and what would you do to implement it in your own unit?’” – Site 21, group lead

Factors related to findings

Theme 6—Relevance and feasibility of findings (physical opportunity, reflective motivation, and psychological capability).

Not all findings from trial method research are applicable to all trials and, therefore, to all CTUs. For instance, some of our participants mentioned that the progression criteria recommendations were not widely implemented by their CTU staff because they did not often include internal pilots in their trials. So, once the challenge of knowing about trial method research findings is overcome, CTU staff then need to decide what is most relevant to their trial portfolio and what they would like to prioritise implementing (reflective motivation). This prioritisation depended on two factors: the CTU staff's ability to adapt findings to their needs and the implementation resources that findings are packaged with. These factors appeared to be interconnected, as sufficient resources to aid implementation, such as training workshops, could reduce the burden of adaptation (physical opportunity). Conversely, staff who perceived their CTU as capable of adaptation could adapt findings even when implementation resources were lacking, such as when trial method research findings are only shared via publication (psychological capability).

“I think that resources that are guidance types widely available, well-advertised, are probably the most... the easiest way. Everything that makes it easier for a person that has this little win of saying, ‘Oh, yes, we’ve probably considered doing things differently,’ anything that minimises that burden in a system I do. For example, with the SAPs, it’s not just the paper and the guidance, but it’s the templates and the little things that you say, ‘Oh, I can start from here, and then if I just use this and this, then the work is so much less […]’ It’s just that thinking of resources that at least create an easy start point for a person that is the right person. I think that would be the best strategy for me, and make them widely available and well-advertised and probably, I don’t know, distribute them, contact the CTUs and say, ‘By the way, here’s a nice resource that you can use if you want to improve this and that.’ I think anything like that could probably be the way I would go around improving the implementation and the uptake because I feel that the goodwill is there.” – Site 21, group lead

Theme 7—Perceived value of implementation (reflective motivation).

Following on from the idea that there is the "goodwill" to implement trial method research findings, it was unsurprising that our participants believed that implementation research is important. Many believed that uptake of findings had clear benefits for improving the practice of their CTU. Even where findings of trial method research were less enthusiastically received, this appeared to be because the CTU staff were already operating at a high standard, with the findings serving simply to reassure them of the quality of their practices.

“I guess yes, I would say so, they help enhance them. Thinking about the first one on progression criteria, we didn’t really have any standard in house guidance on that, so actually reaching out and using that was great because we needed something to base it on. Whereas I’d say for the others, with the Damocles ones and the one on SAP guidance, we did already have in house guidelines for SAPs and DMC charters, but these bits of work have helped to inform them. In a way, they help clarify that most of what you are doing is good practice and then some additional things that could be added in.” – Site 25, academic

Alongside the efficiency and quality benefits to the CTU and its practices, participants also described a desire to implement findings from trial method research because of their promise to improve the quality of trials, and the evidence they generate, more broadly. For example, this could be improved efficiency leading to cost-effective trials to free up funding for other research. It could also be participant-centred improvements that have both ethical implications as well as bolstering the public’s trust in the research process. And, most importantly it seemed, improvements across trials would lead to better evidence to base healthcare decisions on. Finally, implementation of findings from trial method research helps to signal that the CTU is dedicated to best practice and is innovative in pursuing those ideals. There was a perception that it can lead to increased reputation amongst peers and the public as well as making the applications from the CTU attractive to funders.

“I think they maybe come under some of the reasons that you said already, but they are incentives to do [implementing trials methods research findings] because we’re all in the business of trying to produce evidence for interventions that are going to make a difference usually in the NHS, not always, but depending what it is that we’re trialling. But ultimately, you know, we’re all in the business of trying to produce evidence that’s going to get used and make a difference to the patients, and if that can happen more quickly, cheaply, more efficiently, trials that are run better with an evidence base underpinning what happens in the trials, then yeah, that’s why we should be doing it. That’s all incentives to do it.” – Site 10, director

As stated above in "Interview findings", the COM-B domains identified were psychological capability, reflective motivation, automatic motivation, physical opportunity, and social opportunity. These five domains map to all nine intervention functions within the BCW. Two, "Restriction" and "Coercion", were eliminated due to limited practicability and acceptability. Potential solutions were generated that targeted specific aspects of beliefs within our themes. The primary factors identified across themes were distilled into three intervention targets: awareness of trial method research findings, the effort required to implement findings, and the culture around implementing findings. Eight potential interventions were generated, which are listed in Table 6.
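The mapping-and-filtering step described above can be sketched in code. The Python sketch below uses an abridged, illustrative subset of the COM-B-to-intervention-function links (the full matrix is published in the BCW guide by Michie et al.); it shows how identified domains yield a pool of candidate intervention functions, from which "restriction" and "coercion" are then removed on APEASE (practicability/acceptability) grounds.

```python
# Illustrative sketch of the BCW mapping step: identified COM-B domains are
# linked to candidate intervention functions, then functions judged to fail
# APEASE criteria are removed. The mapping below is an abridged, hypothetical
# subset of the published BCW matrix, not the full table.

DOMAIN_TO_FUNCTIONS = {
    "psychological capability": {"education", "training", "enablement"},
    "reflective motivation": {"education", "persuasion", "incentivisation", "coercion"},
    "automatic motivation": {"persuasion", "incentivisation", "environmental restructuring"},
    "physical opportunity": {"restriction", "environmental restructuring", "enablement"},
    "social opportunity": {"restriction", "environmental restructuring", "modelling"},
}

def candidate_functions(identified_domains, excluded=("restriction", "coercion")):
    """Union of intervention functions linked to the identified domains,
    minus any functions eliminated on APEASE grounds."""
    functions = set()
    for domain in identified_domains:
        functions |= DOMAIN_TO_FUNCTIONS[domain]
    return functions - set(excluded)

domains = ["psychological capability", "reflective motivation",
           "automatic motivation", "physical opportunity", "social opportunity"]
print(sorted(candidate_functions(domains)))
```

With all five domains identified, every function in the illustrative mapping is a candidate, and the APEASE filter then leaves seven functions, mirroring the elimination of "Restriction" and "Coercion" reported above.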

Awareness of trial method research findings

The first proposed intervention is the incorporation of sessions specific to sharing research findings into the agendas of clinical and methodology conferences. These sessions would serve as a dedicated conduit for trialists to share and receive new methods research findings, giving dedicated time and space to do so. The social elements of these sessions would also benefit implementation through less formal opportunities to share feedback and other comments on recommendations that can then be addressed by the associated researchers present.

Effort required to implement findings

The second proposed intervention would target the effort required to implement findings. As time is at a premium within CTUs, any pre-emptive efforts on the part of the methods research teams to ensure their recommendations are accessible, translatable, and clearly relevant to CTU staff will assist in those recommendations being implemented. This could include template documents, case studies of implementation, software packages, etc. Any resource beyond the publication of results would seem desirable to CTU staff to assist in their efforts at implementation.

Changes to culture

The third potential solution identified would target the cultural changes needed to re-prioritise the directions of CTUs towards implementation of findings. This would proceed mainly through a change in funder attitudes towards the importance of trial method research. Funders would need to provide dedicated funding/time within CTUs' contracts and/or trial grants to allow for the proper conduct and/or implementation of trial method research.

Other potential solutions

As many of our reported barriers are interconnected, so too do several of our proposed solutions target multiple barriers/opportunities to improve implementation. Many of these rely primarily on cultural shifts within the CTUs themselves, where existing structures are modified to accommodate implementation efforts. For example, ensuring that CTU meeting agendas incorporate dedicated time towards discussing implementation efforts or for roles to be established/re-structured that focus on championing these efforts.

Discussion

This paper presents findings from our mixed methods study on the challenges and opportunities to implementing trial method research findings. Exploration of notable trial method research findings generated four case studies that were used to solicit implementation experiences from trial staff through a survey and interviews. The survey data allowed us to identify trends in the adoption of the case studies in a sample of half of the registered CTUs within the UK. Demographic data from participating CTUs demonstrated some similarities in implementation factors that are consistent across sites, such as a lack of resources. More positive similarities were identified as well, such as the shared belief that implementation research is important. Participants volunteered a number of motivators, such as adhering to best practice, or barriers, such as time/resource limitations, that affected their CTU's implementation of these case studies and trial method research findings more generally. Our interviews with senior CTU staff further explored these motivators and barriers to implementation through a behavioural lens. A range of relevant themes across three socio-ecological levels (Findings, Internal, and External) were identified from our behavioural analysis.

Findings-level factors that affected implementation related to the quality and accessibility of the research and its outputs, and its perceived relevance to the trials undertaken in the CTUs. Trial method research findings that were 'well-packaged' (e.g., included templates or easy to follow guidance) were believed to assist in implementation. Findings that had clear benefits to the work done at a CTU, such as streamlining processes, or the outcomes of the trials themselves, such as improving their quality, were more readily implemented.

Factors internal to the CTUs included the interpersonal communication of the staff, their existing workloads, and the culture surrounding implementation. Open communication between members of the CTU, spearheaded by senior staff, seemed to increase buy-in from staff on the relevance of trial method research findings. This buy-in would appear essential to motivate staff that are already stretched thin by their commitments to design and deliver trials. Efforts to improve cultural expectations around implementation were seen as a mechanism to create further opportunities for staff to dedicate to adopting findings. These efforts could be restructuring current staff roles or establishing new ones with a greater focus on implementation rather than strictly trial delivery.

External factors affecting implementation of trial method research findings were primarily those linked with the expectations of funders and the availability of findings. Funders were said to drive both cultural expectations related to best practice, as well as creating capacity (or not) for CTU staff through provision of funds that could allow dedicated time for implementation efforts. The availability of findings had to do largely with the channels available for dissemination of findings. The more opportunities trialists had to be exposed to findings, the more likely they were to adopt those findings in their respective CTUs.

Strengths and limitations

Our project has several key strengths. The mixed methods nature of its design allowed for a more complete investigation of implementation factors than either quantitative or qualitative measures alone. The project utilised a combined theoretical approach, taking advantage of the CFIR in survey design and the COM-B in interview design and analysis. The combination of these approaches ensured that our project had the investigative potential to explore the specific implementation factors and general behavioural factors undermining the successful implementation of trial method research. Others have taken a similar epistemological approach in combining the CFIR and COM-B (and the related Theoretical Domains Framework) to investigate challenges in other contexts [14, 25, 26, 27].

Our project solicited input from a variety of stakeholders in CTUs across the UK to ensure a diverse perspective on implementation challenges. However, our sample primarily comprised those with a statistics background, and the number of responses used to identify case studies was relatively low. We attempted to correct for this low response rate and homogeneity of response by agreeing as a team which case studies to include beyond those offered by our respondents. However, we cannot say how selection of other case studies may have affected responses to the surveys and interviews. It may be that particular projects had inherently different challenges to implementation that are not represented here. However, by including general organisational-level factors that may influence implementation, we have identified factors that are likely to be generalisable to a range of implementation efforts. A further bias is one of self-selection. It is possible that the CTUs and members that responded to our invitations are more active in implementing trial method research findings and were thus more interested in participating in the project. It may also be that the CTUs facing the most challenges did not have the capacity or motivation to respond to our invitation due to the time it would take away from trial delivery. This may help to explain our response rate of about half of the 52 registered CTUs. Responses could also have been limited in our surveys because we asked CTUs to collate their answers. This may have led to unintended desirability effects, with some staff feeling unable to offer honest opinions on their CTU.

Recommendations for future research

This project has identified a number of areas for future efforts to improve the implementation of trial method research findings. The themes described here can provide a starting point for trial method researchers to consider when implementing and/or disseminating findings from method research. This could include creating plans for how the findings will reach the appropriate CTU teams, how to articulate the importance of findings to those teams, or how best to package those findings to make them more readily accessible, and thus implementable, for the CTU teams. Further, it could prompt methods researchers to consider who should be involved in their research and when, potentially incorporating members from the institutions and organisations that would be required to implement any findings, and doing so earlier in the process.

Where these obstacles still exist, future research on the implementation of findings can bridge the gap between research and practice. Our approach describes obstacles and facilitators in a standardised language common to behavioural and implementation science. Along with this clearer articulation of what works, for whom, how, why, and when, links to behavioural theory provide a process to design interventions [18, 28]. Although we have identified some preliminary intervention options, future work could produce options not accounted for here, utilising lessons learned from our findings. Further development of these strategies through selection of BCTs targeting one or more of the identified areas for improvement, refined through co-production with stakeholders, would be the next stage of the intervention design process [18, 29]. Finally, assessment of the effectiveness of these interventions in improving the implementation of trial method research findings would be warranted. Additionally, as our project sampled from UK CTUs, further work could explore the generalisability of these findings to settings outside the UK, particularly where trial units are noticeably different in their organisation.

Conclusions

We have presented findings exploring the obstacles and facilitators to the implementation of trial method research findings. Challenges facing CTUs at multiple levels, including demands on time and resources, internal organisational structure, and quality of findings, greatly affect their staff's ability to incorporate findings into their workflow. We have suggested several potential areas to target with further intervention development based on behavioural theory to maximise the potential for change. These strategies, and others, would need refinement and scrutiny from stakeholders, as well as evaluation of their effectiveness. Ultimately, our project highlights the motivation of trial staff to deliver quality trials underpinned by the latest evidence. However, this motivation is hindered by the realities of ongoing trial logistics and the difficulties faced in identifying this evidence. Trial methodologists will need to work closely with CTU staff, funders, and regulatory bodies to set priorities on what needs to be implemented and how to make that more achievable in light of the challenges faced.

Availability of data and materials

The dataset supporting the conclusions of this article is included within the article (and its additional files). Additional data is available upon reasonable request.

Abbreviations

APEASE: Affordability, Practicability, Effectiveness and cost-effectiveness, Acceptability, Side-effects and safety, Equity

BCT: Behaviour change technique

BCW: Behaviour change wheel

CFIR: Consolidated Framework for Implementation Research

COM-B: Capability, Motivation, and Opportunity Model of Behaviour

CTU: Clinical trial unit

DAMOCLES: DAta MOnitoring Committees: Lessons, Ethics, Statistics

EQUATOR: Enhancing the QUAlity and Transparency Of health Research

HTMR: Hubs for Trial Methodology Research

ICTMC: International Clinical Trials Methodology Conference

MRC: Medical Research Council

NIHR: National Institute for Health and Care Research

ORRCA: Online Resource for Research in Clinical triAls

RECAP: REporting Clinical trial results Appropriately to Participants

SAP: Statistical analysis plan

TMRP: Trials Methodology Research Partnership

UKCRC: UK Clinical Research Collaboration

Welcome to ORRCA. https://www.orrca.org.uk/. 2023.

Altman DG. The scandal of poor medical research. BMJ. 1994;308:283–4. https://doi.org/10.1136/bmj.308.6924.283 .

Michie S, Atkins L, West R. The Behaviour Change Wheel. A Guide to Designing Interventions: Silverback Publishing, Sutton; 2014.

Meeker-O'Connell A, Glessner C, Behm M, et al. Enhancing clinical evidence by proactively building quality into clinical trials. Clin Trials. 2016;13:439–44. https://doi.org/10.1177/1740774516643491.

Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15:1277–88. https://doi.org/10.1177/1049732305276687.

Gamble C, Krishan A, Stocken D, et al. Guidelines for the Content of Statistical Analysis Plans in Clinical Trials. JAMA. 2017;318:2337–43. https://doi.org/10.1001/jama.2017.18556 .

Rangachari P, Rissing P, Rethemeyer K. Awareness of evidence-based practices alone does not translate to implementation: insights from implementation research. Qual Manag Health Care. 2013;22:117–25. https://doi.org/10.1097/QMH.0b013e31828bc21d .

Pirosca S, Shiely F, Clarke M, Treweek S. Tolerating bad health research: the continuing scandal. Trials. 2022;23:458. https://doi.org/10.1186/s13063-022-06415-5 .

Birken SA, Powell BJ, Presseau J, et al. Combined use of the Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF): a systematic review. Implement Sci. 2017;12:2. https://doi.org/10.1186/s13012-016-0534-z.

Glanz K, Bishop DB. The role of behavioral science theory in development and implementation of public health interventions. Annu Rev Public Health. 2010;31:399–418. https://doi.org/10.1146/annurev.publhealth.012809.103604.

Smyth RMD, Jacoby A, Altman DG, et al. The natural history of conducting and reporting clinical trials: interviews with trialists. Trials. 2015;16:16. https://doi.org/10.1186/s13063-014-0536-6 .

Lau R, Stevenson F, Ong BN, et al. Achieving change in primary care—causes of the evidence to practice gap: systematic reviews of reviews. Implement Sci. 2016;11:40. https://doi.org/10.1186/s13012-016-0396-4.

Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 2003;362:1225–30. https://doi.org/10.1016/S0140-6736(03)14546-1 .

Damschroder LJ. Clarity out of chaos: Use of theory in implementation research. Psychiatry Res. 2020;283:112461. https://doi.org/10.1016/j.psychres.2019.06.036 .

Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. https://doi.org/10.1186/1748-5908-4-50.

Guyatt S, Ferguson M, Beckmann M, Wilkinson SA. Using the Consolidated Framework for Implementation Research to design and implement a perinatal education program in a large maternity hospital. BMC Health Serv Res. 2021;21:1077. https://doi.org/10.1186/s12913-021-07024-9.

Glasziou P, Altman DG, Bossuyt P, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383:267–76. https://doi.org/10.1016/S0140-6736(13)62228-X.

Kearney A, Harman NL, Rosala-Hallas A, et al. Development of an online resource for recruitment research in clinical trials to organise and map current literature. Clin Trials. 2018;15:533–42. https://doi.org/10.1177/1740774518796156 .

Willmott T, Rundle-Thiele S. Are we speaking the same language? Call for action to improve theory application and reporting in behaviour change research. BMC Public Health. 2021;21:479. https://doi.org/10.1186/s12889-021-10541-1 .

Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated Consolidated Framework for Implementation Research based on user feedback. Implement Sci. 2022;17:75. https://doi.org/10.1186/s13012-022-01245-0.

Atkins L, Francis J, Islam R, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12:77. https://doi.org/10.1186/s13012-017-0605-9 .

Ioannidis JPA, Greenland S, Hlatky MA, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383:166–75. https://doi.org/10.1016/S0140-6736(13)62227-8.

Grant A, Altman D, Babiker A, Campbell M. A proposed charter for clinical trial data monitoring committees: helping them to do their job well. Lancet. 2005;365:711–22. https://doi.org/10.1016/S0140-6736(05)17965-3.

Grimshaw J, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39:II2–45.

Gillies K, Brehaut J, Coffey T, et al. How can behavioural science help us design better trials? Trials. 2021;22:882. https://doi.org/10.1186/s13063-021-05853-x.

Khan S, Tessier L. Implementation Blueprint for Community Based Pilots for Supporting Decision Making. 2021. Available from: https://irisinstitute.ca/wp-content/uploads/sites/2/2021/09/Supporting-DM-Implementation-Blueprint.pdf. ISBN: 978-1-897292-38-9.

Hall J, Morton S, Hall J, et al. A co-production approach guided by the behaviour change wheel to develop an intervention for reducing sedentary behaviour after stroke. Pilot Feasibility Stud. 2020;6:115. https://doi.org/10.1186/s40814-020-00667-1 .

Raza MZ, Bruhn H, Gillies K. Dissemination of trial results to participants in phase III pragmatic clinical trials: an audit of trial investigators intentions. BMJ Open. 2020;10:e035730. https://doi.org/10.1136/bmjopen-2019-035730 .

Avery KNL, Williamson PR, Gamble C, et al. Informing efficient randomised controlled trials: exploration of challenges in developing progression criteria for internal pilot studies. BMJ Open. 2017;7:e013537. https://doi.org/10.1136/bmjopen-2016-013537.

Acknowledgements

We would like to thank the members of the TMRP working groups that participated in the case study exercise. We would also like to thank all the participants within the survey and interviews.

Funding

This project was supported by the MRC – NIHR funded Trials Methodology Research Partnership (MR/S014357/1).

The Health Services Research Unit, Institute of Applied Health Sciences (University of Aberdeen), is core-funded by the Chief Scientist Office of the Scottish Government Health and Social Care Directorates. They were not involved in the design of the study or the collection, analysis, and interpretation of data.

Author information

Authors and affiliations

Health Services Research Unit, Institute of Applied Health Sciences, University of Aberdeen, Foresterhill, Aberdeen, AB25 2ZD, UK

Taylor Coffey & Katie Gillies

Department of Health Data Science, MRC-NIHR Trials Methodology Research Partnership, University of Liverpool, Liverpool, England

Paula R. Williamson

Contributions

TC contributed to the conceptualisation of the study and was responsible for the design and conduct of the case study selection, surveys, and interviews. TC also analysed all data and was the primary author of the manuscript. KG contributed to the conceptualisation of the study, data quality and analysis checks, along with contributing to drafting of the manuscript, providing edits and final approval. PW contributed to the conceptualisation of the study, edits and final approval of the manuscript.

Corresponding author

Correspondence to Taylor Coffey .

Ethics declarations

Ethics approval and consent to participate

This study was approved by the University of Aberdeen College Ethics Review Board (CERB) (Application No. SERB/2022/4/2340). Informed consent was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix 1.

Survey with PIL. Word document version of the survey circulated to CTUs, which includes a PIL section.

Additional file 2: Appendix 2.

COM-B topic guide. Topic guide used during interviews.

Additional file 3:

Domain 1. Research team and reflexivity.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Coffey, T., Williamson, P.R., Gillies, K. et al. Understanding implementation of findings from trial method research: a mixed methods study applying implementation frameworks and behaviour change models. Trials 25 , 139 (2024). https://doi.org/10.1186/s13063-024-07968-3


Received: 09 June 2023

Accepted: 05 February 2024

Published: 22 February 2024

DOI: https://doi.org/10.1186/s13063-024-07968-3


  • Trial method research
  • Implementation science

ISSN: 1745-6215



What is the Case Study Method?


Celebrating 100 Years of the Case Method at HBS

The 2021-2022 academic year marks the 100-year anniversary of the introduction of the case method at Harvard Business School. Today, the HBS case method is employed in the HBS MBA program, in Executive Education programs, and in dozens of other business schools around the world. As Dean Srikant Datar says, the case method has withstood the test of time.



How Cases Unfold In the Classroom

How to Prepare for Case Discussions

  • Read the professor's assignment or discussion questions
  • Read the first few paragraphs and then skim the case
  • Reread the case, underline text, and make margin notes
  • Note the key problems on a pad of paper and go through the case again

Case study best practices: prepare, discuss, participate, relate, apply, note, and understand.

What can I expect on the first day?

Most programs begin with registration, followed by an opening session and a dinner. If your travel plans necessitate late arrival, please be sure to notify us so that alternate registration arrangements can be made for you. Please note the following about registration:

HBS campus programs – Registration takes place in the Chao Center.

India programs – Registration takes place outside the classroom.

Other off-campus programs – Registration takes place in the designated facility.

What happens in class if nobody talks?

Professors are here to push everyone to learn, but not to embarrass anyone. If the class is quiet, they'll often ask a participant with experience in the industry in which the case is set to speak first. This is done well in advance so that person can come to class prepared to share. Trust the process. The more open you are, the more willing you’ll be to engage, and the more alive the classroom will become.

Does everyone take part in "role-playing"?

Professors often encourage participants to take opposing sides and then debate the issues, often taking the perspective of the case protagonists or key decision makers in the case.


Verywell Mind

Descriptive Research in Psychology

Sometimes you need to dig deeper than the pure statistics

Descriptive research is one of the key tools needed in any psychology researcher’s toolbox in order to create and lead a project that is both equitable and effective. Because psychology, as a field, loves definitions, let’s start with one. The University of Minnesota’s Introduction to Psychology defines this type of research as one that is “...designed to provide a snapshot of the current state of affairs.”

That's pretty broad, so what does that mean in practice? Heather Derry-Vick, PhD, an assistant professor in psychiatry at Hackensack Meridian School of Medicine, helps us put it into perspective.

"Descriptive research really focuses on defining, understanding, and measuring a phenomenon or an experience," she says. "Not trying to change a person's experience or outcome, or even really looking at the mechanisms for why that might be happening, but more so describing an experience or a process as it unfolds naturally.”

Types of Descriptive Research and the Methods Used

Within the descriptive research methodology there are multiple types, including the following.

Descriptive Survey Research

This involves going beyond a typical tool like a Likert scale, where you place your response to a prompt on a one-to-five scale. We already know that scales like this can be ineffective, particularly when studying pain, for example.

When that's the case, using a descriptive methodology can help dig deeper into how a person is thinking, feeling, and acting rather than simply quantifying it in a way that might be unclear or confusing.

Descriptive Observational Research

Think of observational research like an ethically-focused version of people-watching. One example would be watching the patterns of children on a playground—perhaps when looking at a concept like risky play or seeking to observe social behaviors between children of different ages.

Descriptive Case Study Research

A descriptive approach to a case study is akin to a biography of a person, homing in on the experiences of a small group to extrapolate to larger themes. We most commonly see descriptive case studies when those in the psychology field are using past clients as an example to illustrate a point.

Correlational Descriptive Research

While descriptive research is often about the here and now, this form of the methodology allows researchers to make connections between groups of people. As an example from her research, Derry-Vick says she uses this method to identify how gender might play a role in cancer scan anxiety, aka scanxiety.

Dr. Derry-Vick's research uses surveys and interviews to get a sense of how cancer patients are feeling and what they are experiencing both in the course of their treatment and in the lead-up to their next scan, which can be a significant source of stress.

David Marlon, PsyD, MBA, who works as a clinician and as CEO at Vegas Stronger, and whose research focused on leadership styles at community-based clinics, says that using descriptive research allowed him to get beyond the numbers.

In his case, that meant going beyond data points like how many unhoused people found stable housing over a certain period, or how many people became drug-free, to identify the reasons for those changes.

For the portion of his thesis that was focused on descriptive research, Marlon used semi-structured interviews to look at the how and the why of transformational leadership and its impact on clinics’ clients and staff.

Advantages & Limitations of Descriptive Research

The advantages of descriptive research are clear: it centers the research participants, gives a clear picture of what is happening to a person at a particular moment, and yields nuanced insights into how a situation is perceived by the very person affected. So are there drawbacks?

Yes, there are. Dr. Derry-Vick says that it’s important to keep in mind that just because descriptive research tells us something is happening doesn’t mean it necessarily leads us to the resolution of a given problem.

Another limitation she identifies is that it also can’t tell you, on its own, whether a particular treatment pathway is having the desired effect.

“Descriptive research in and of itself can't really tell you whether a specific approach is going to be helpful until you take in a different approach to actually test it.”

Marlon, who believes in a multi-disciplinary approach, says that his subfield—addictions—is one where descriptive research had its limits, but helps readers go beyond preconceived notions of what addictions treatment looks and feels like when it is effective.

“If we talked to and interviewed and got descriptive information from the clinicians and the clients, a much more precise picture would be painted, showing the need for a client-specific multidisciplinary approach augmented with a variety of modalities," he says. "If you tried to look at my discipline in a pure quantitative approach, it wouldn't begin to tell the real story.”

Best Practices for Conducting Descriptive Research

Because you’re controlling far fewer variables than other forms of research, it’s important to identify whether those you are describing, your study participants, should be informed that they are part of a study.

For example, if you’re observing and describing who is buying what in a grocery store to identify patterns, then you might not need to identify yourself.

However, if you’re asking people about their fear of a certain treatment, or how their marginalized identities affect their mental health in a particular way, there is far more pressure to think deeply about how you, as the researcher, are connected to the people you are researching.

Many descriptive research projects use interviews as a form of research gathering and, as a result, descriptive research that is focused on this type of data gathering also has ethical and practical concerns attached. Thankfully, there are plenty of guides from established researchers about how to best conduct these interviews and/or formulate surveys.

While descriptive research has its limits, it is commonly used by researchers to get a clear vantage point on what is happening in a given situation.

Tools like surveys, interviews, and observation are often employed to dive deeper into a given issue and really highlight the human element in psychological research. At its core, descriptive research is rooted in a collaborative style that allows deeper insights when used effectively.

Read the original article on Verywell Mind.



