Library Homepage

Research Methods and Design

  • Action Research
  • Case Study Design

Literature Review

  • Quantitative Research Methods
  • Qualitative Research Methods
  • Mixed Methods Study
  • Indigenous Research and Ethics
  • Identifying Empirical Research Articles
  • Research Ethics and Quality
  • Data Literacy
  • Get Help with Writing Assignments

A literature review is a discussion of the literature (aka. the "research" or "scholarship") surrounding a certain topic. A good literature review doesn't simply summarize the existing material, but provides thoughtful synthesis and analysis. The purpose of a literature review is to orient your own work within an existing body of knowledge. A literature review may be written as a standalone piece or be included in a larger body of work.

You can read more about literature reviews, what they entail, and how to write one, using the resources below. 

Am I the only one struggling to write a literature review?

Dr. Zina O'Leary explains the misconceptions and struggles students often have with writing a literature review. She also provides step-by-step guidance on writing a persuasive literature review.

An Introduction to Literature Reviews

Dr. Eric Jensen, Professor of Sociology at the University of Warwick, and Dr. Charles Laurie, Director of Research at Verisk Maplecroft, explain how to write a literature review, and why researchers need to do so. Literature reviews can be stand-alone research or part of a larger project. They communicate the state of academic knowledge on a given topic, specifically detailing what is still unknown.

This is the first video in a whole series about literature reviews. You can find the rest of the series in our SAGE database, Research Methods:

Videos

Videos covering research methods and statistics

Identify Themes and Gaps in Literature (with real examples) | Scribbr

Finding connections between sources is key to organizing the arguments and structure of a good literature review. In this video, you'll learn how to identify themes, debates, and gaps between sources, using examples from real papers.

4 Tips for Writing a Literature Review's Intro, Body, and Conclusion | Scribbr

While each review will be unique in its structure, based on both the existing body of literature and the overall goals of your own paper, dissertation, or research, this video from Scribbr does a good job of simplifying the goals of writing a literature review for those who are new to the process. In this video, you’ll learn what to include in each section, as well as 4 tips for the main body, illustrated with an example.


  • Literature Review This chapter in SAGE's Encyclopedia of Research Design describes the types of literature reviews and scientific standards for conducting literature reviews.
  • UNC Writing Center: Literature Reviews This handout from the Writing Center at UNC will explain what literature reviews are and offer insights into the form and construction of literature reviews in the humanities, social sciences, and sciences.
  • Purdue OWL: Writing a Literature Review This overview of literature reviews comes from Purdue's Online Writing Lab. It explains the basic why, what, and how of writing a literature review.

Organizational Tools for Literature Reviews

One of the most daunting aspects of writing a literature review is organizing your research. There are a variety of strategies that you can use to help you in this task. We've highlighted just a few ways writers keep track of all that information! You can use a combination of these tools or come up with your own organizational process. The key is choosing something that works with your own learning style.

Citation Managers

Citation managers are great tools, in general, for organizing research, but can be especially helpful when writing a literature review. You can keep all of your research in one place, take notes, and organize your materials into different folders or categories. Read more about citation managers here:

  • Manage Citations & Sources

Concept Mapping

Some writers use concept mapping (sometimes called flow or bubble charts or "mind maps") to help them visualize the ways in which the research they found connects.

what type of research design is a literature review

There is no right or wrong way to make a concept map. There are a variety of online tools that can help you create a concept map or you can simply put pen to paper. To read more about concept mapping, take a look at the following help guides:

  • Using Concept Maps From Williams College's guide, Literature Review: A Self-guided Tutorial

Synthesis Matrix

A synthesis matrix is a chart you can use to help you organize your research into thematic categories. Organizing your research into a matrix, like the examples below, can help you visualize the ways in which your sources connect.
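For illustration, a minimal synthesis matrix might look like the sketch below; the sources, themes, and notes are hypothetical placeholders rather than examples taken from the resources linked on this page.

Source           | Theme A                     | Theme B        | Gaps / notes
Author X (2019)  | Key finding 1               | Not addressed  | Small sample
Author Y (2021)  | Challenges Author X         | Key finding 2  | No longitudinal data

Each row captures one source and each column captures a theme or question, so reading down a column shows at a glance how your sources relate to one another on that theme.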

  • Walden University Writing Center: Literature Review Matrix Find a variety of literature review matrix examples and templates from Walden University.
  • Writing A Literature Review and Using a Synthesis Matrix An example synthesis matrix created by NC State University Writing and Speaking Tutorial Service Tutors. If you would like a copy of this synthesis matrix in a different format, like a Word document, please ask a librarian. CC-BY-SA 3.0


Duke University Libraries

Literature Reviews

  • Getting started

  • What is a literature review?
  • Why conduct a literature review?
  • Stages of a literature review
  • Lit reviews: an overview (video)
  • Check out these books

  • Types of reviews
  • 1. Define your research question
  • 2. Plan your search
  • 3. Search the literature
  • 4. Organize your results
  • 5. Synthesize your findings
  • 6. Write the review
  • Artificial intelligence (AI) tools
  • Thompson Writing Studio
  • Need to write a systematic review?



Definition: A literature review is a systematic examination and synthesis of existing scholarly research on a specific topic or subject.

Purpose: It serves to provide a comprehensive overview of the current state of knowledge within a particular field.

Analysis: Involves critically evaluating and summarizing key findings, methodologies, and debates found in academic literature.

Identifying Gaps: Aims to pinpoint areas where there is a lack of research or unresolved questions, highlighting opportunities for further investigation.

Contextualization: Enables researchers to understand how their work fits into the broader academic conversation and contributes to the existing body of knowledge.


tl;dr  A literature review critically examines and synthesizes existing scholarly research and publications on a specific topic to provide a comprehensive understanding of the current state of knowledge in the field.

What is a literature review NOT?

❌ An annotated bibliography

❌ Original research

❌ A summary

❌ Something to be conducted at the end of your research

❌ An opinion piece

❌ A chronological compilation of studies

The reason for conducting a literature review is to:


Literature Reviews: An Overview for Graduate Students

While this 9-minute video from NCSU is geared toward graduate students, it is useful for anyone conducting a literature review.


Writing the literature review: A practical guide

Available 3rd floor of Perkins


Writing literature reviews: A guide for students of the social and behavioral sciences

Available online!


So, you have to write a literature review: A guided workbook for engineers


Telling a research story: Writing a literature review


The literature review: Six steps to success


Systematic approaches to a successful literature review

Request from Duke Medical Center Library


Doing a systematic review: A student's guide




What is a Literature Review? | Guide, Template, & Examples

Published on 22 February 2022 by Shona McCombes. Revised on 7 June 2022.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarise sources – it analyses, synthesises, and critically evaluates to give a clear picture of the state of knowledge on the subject.


Table of contents

  • Why write a literature review?
  • Examples of literature reviews
  • Step 1: Search for relevant literature
  • Step 2: Evaluate and select sources
  • Step 3: Identify themes, debates and gaps
  • Step 4: Outline your literature review’s structure
  • Step 5: Write your literature review
  • Frequently asked questions about literature reviews


When you write a dissertation or thesis, you will have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position yourself in relation to other researchers and theorists
  • Show how your dissertation addresses a gap or contributes to a debate

You might also have to write a literature review as a stand-alone assignment. In this case, the purpose is to evaluate the current state of research and demonstrate your knowledge of scholarly debates around a topic.

The content will look slightly different in each case, but the process of conducting a literature review follows the same steps. We’ve written a step-by-step guide that you can follow below.

Literature review guide


Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” ( Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” ( Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” ( Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” ( Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines at the links below.

Download Word doc Download Google doc

Before you begin searching for literature, you need a clearly defined topic .

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research objectives and questions .

If you are writing a literature review as a stand-alone assignment, you will have to choose a focus and develop a central question to direct your search. Unlike a dissertation research question, this question has to be answerable without collecting original data. You should be able to answer it based only on a review of existing publications.

Make a list of keywords

Start by creating a list of keywords related to your research topic. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list if you discover new keywords in the process of your literature search.

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can use boolean operators to help narrow down your search:
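For example, here is an illustrative set of searches built from the sample keywords listed above; the exact syntax (capitalisation of operators, quotation marks, truncation symbols) varies by database, so check the help pages of the database you are using:

  • "social media" AND "body image" retrieves only sources that mention both concepts
  • teenagers OR adolescents OR "Generation Z" broadens the search to any of those synonyms
  • "body image" NOT Facebook narrows the search by excluding results about one platform
  • ("social media" OR Instagram OR TikTok) AND ("body image" OR self-esteem) combines operators, using parentheses to group synonyms and quotation marks to keep phrases together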

Read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

To identify the most important publications on your topic, take note of recurring citations. If the same authors, books or articles keep appearing in your reading, make sure to seek them out.

You probably won’t be able to read absolutely everything that has been written on the topic – you’ll have to evaluate which sources are most relevant to your questions.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models and methods? Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • How does the publication contribute to your understanding of the topic? What are its key insights and arguments?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can find out how many times an article has been cited on Google Scholar – a high citation count means the article has been influential in the field, and should certainly be included in your literature review.

The scope of your review will depend on your topic and discipline: in the sciences you usually only review recent literature, but in the humanities you might take a long historical perspective (for example, to trace how a concept has changed in meaning over time).

Remember that you can use our template to summarise and evaluate sources you’re thinking about using!

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It’s important to keep track of your sources with references to avoid plagiarism . It can be helpful to make an annotated bibliography, where you compile full reference information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.

You can use our free APA Reference Generator for quick, correct, consistent citations.

To begin organising your literature review’s argument and structure, you need to understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly-visual platforms like Instagram and Snapchat – this is a gap that you could address in your own research.

There are various approaches to organising the body of a literature review. You should have a rough idea of your strategy before you start writing.

Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarising sources in order.

Try to analyse patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organise your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods , you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework . You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Like any other academic text, your literature review should have an introduction , a main body, and a conclusion . What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

If you are writing the literature review as part of your dissertation or thesis, reiterate your central problem or research question and give a brief summary of the scholarly context. You can emphasise the timeliness of the topic (“many recent studies have focused on the problem of x”) or highlight a gap in the literature (“while there has been much research on x, few researchers have taken y into consideration”).

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, make sure to follow these tips:

  • Summarise and synthesise: give an overview of the main points of each source and combine them into a coherent whole.
  • Analyse and interpret: don’t just paraphrase other researchers – add your own interpretations, discussing the significance of findings in relation to the literature as a whole.
  • Critically evaluate: mention the strengths and weaknesses of your sources.
  • Write in well-structured paragraphs: use transitions and topic sentences to draw connections, comparisons and contrasts.

In the conclusion, you should summarise the key findings you have taken from the literature and emphasise their significance.

If the literature review is part of your dissertation or thesis, reiterate how your research addresses gaps and contributes new knowledge, or discuss how you have drawn on existing theories and methods to build a framework for your research. This can lead directly into your methodology section.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a dissertation , thesis, research paper , or proposal .

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarise yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your  dissertation . After the introduction , it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology .

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

McCombes, S. (2022, June 07). What is a Literature Review? | Guide, Template, & Examples. Scribbr. Retrieved 21 May 2024, from https://www.scribbr.co.uk/thesis-dissertation/literature-review/


Research Methods

  • Getting Started
  • Literature Review Research
  • Research Design
  • Research Design By Discipline
  • SAGE Research Methods
  • Teaching with SAGE Research Methods

Literature Review

  • What is a Literature Review?
  • What is NOT a Literature Review?
  • Purposes of a Literature Review
  • Types of Literature Reviews
  • Literature Reviews vs. Systematic Reviews
  • Systematic vs. Meta-Analysis

Literature Review  is a comprehensive survey of the works published in a particular field of study or line of research, usually over a specific period of time, in the form of an in-depth, critical bibliographic essay or annotated list in which attention is drawn to the most significant works.

Also, we can define a literature review as the collected body of scholarly works related to a topic:

  • Summarizes and analyzes previous research relevant to a topic
  • Includes scholarly books and articles published in academic journals
  • Can be an specific scholarly paper or a section in a research paper

The objective of a Literature Review is to find previous published scholarly works relevant to an specific topic

  • Help gather ideas or information
  • Keep up to date in current trends and findings
  • Help develop new questions

A literature review is important because it:

  • Explains the background of research on a topic.
  • Demonstrates why a topic is significant to a subject area.
  • Helps focus your own research questions or problems
  • Discovers relationships between research studies/ideas.
  • Suggests unexplored ideas or populations
  • Identifies major themes, concepts, and researchers on a topic.
  • Tests assumptions; may help counter preconceived ideas and remove unconscious bias.
  • Identifies critical gaps, points of disagreement, or potentially flawed methodology or theoretical approaches.
  • Indicates potential directions for future research.

All content in this section is from Literature Review Research from Old Dominion University 

Keep in mind that a literature review is NOT:

Not an essay

Not an annotated bibliography in which you summarize each article that you have reviewed. A literature review goes beyond basic summarizing to focus on the critical analysis of the reviewed works and their relationship to your research question.

Not a research paper where you select resources to support one side of an issue versus another. A lit review should explain and consider all sides of an argument in order to avoid bias, and areas of agreement and disagreement should be highlighted.

A literature review serves several purposes. For example, it

  • provides thorough knowledge of previous studies; introduces seminal works.
  • helps focus one’s own research topic.
  • identifies a conceptual framework for one’s own research questions or problems; indicates potential directions for future research.
  • suggests previously unused or underused methodologies, designs, quantitative and qualitative strategies.
  • identifies gaps in previous studies; identifies flawed methodologies and/or theoretical approaches; avoids replication of mistakes.
  • helps the researcher avoid repetition of earlier research.
  • suggests unexplored populations.
  • determines whether past studies agree or disagree; identifies controversy in the literature.
  • tests assumptions; may help counter preconceived ideas and remove unconscious bias.

As Kennedy (2007) notes*, it is important to think of knowledge in a given field as consisting of three layers. First, there are the primary studies that researchers conduct and publish. Second are the reviews of those studies that summarize and offer new interpretations built from and often extending beyond the original studies. Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally and become part of the lore of the field. In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as "true" even though it often has only a loose relationship to the primary studies and secondary literature reviews.

Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are several approaches to how they can be done, depending upon the type of analysis underpinning your study. Listed below are definitions of types of literature reviews:

Argumentative Review      This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews.

Integrative Review      Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication.

Historical Review      Few things rest in isolation from historical precedent. Historical reviews are focused on examining research throughout a period of time, often starting with the first time an issue, concept, theory, phenomena emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context to show familiarity with state-of-the-art developments and to identify the likely directions for future research.

Methodological Review      A review does not always focus on what someone said [content], but on how they said it [method of analysis]. This approach provides a framework of understanding at different levels (i.e., theory, substantive fields, research approaches, and data collection and analysis techniques). It enables researchers to draw on a wide variety of knowledge, ranging from the conceptual level to practical documents for use in fieldwork, in areas such as ontological and epistemological considerations, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis. It also helps highlight many ethical issues that we should be aware of and consider as we go through our study.

Systematic Review      This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyse data from the studies that are included in the review. Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form, such as "To what extent does A contribute to B?"

Theoretical Review      The purpose of this form is to concretely examine the corpus of theory that has accumulated in regard to an issue, concept, theory, or phenomenon. The theoretical literature review helps establish what theories already exist, the relationships between them, to what degree the existing theories have been investigated, and to develop new hypotheses to be tested. Often this form is used to help establish a lack of appropriate theories or reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or a whole theory or framework.

* Kennedy, Mary M. "Defining a Literature."  Educational Researcher  36 (April 2007): 139-147.

All content in this section is from The Literature Review created by Dr. Robert Larabee USC

Robinson, P. and Lowe, J. (2015),  Literature reviews vs systematic reviews.  Australian and New Zealand Journal of Public Health, 39: 103-103. doi: 10.1111/1753-6405.12393


What's in the name? The difference between a Systematic Review and a Literature Review, and why it matters. By Lynn Kysh from University of Southern California


Systematic review or meta-analysis?

A  systematic review  answers a defined research question by collecting and summarizing all empirical evidence that fits pre-specified eligibility criteria.

A  meta-analysis  is the use of statistical methods to summarize the results of these studies.

Systematic reviews, just like other research articles, can be of varying quality. They are a significant piece of work (the Centre for Reviews and Dissemination at York estimates that a team will take 9-24 months), and to be useful to other researchers and practitioners they should have:

  • clearly stated objectives with pre-defined eligibility criteria for studies
  • explicit, reproducible methodology
  • a systematic search that attempts to identify all studies
  • assessment of the validity of the findings of the included studies (e.g. risk of bias)
  • systematic presentation, and synthesis, of the characteristics and findings of the included studies

Not all systematic reviews contain meta-analysis. 

Meta-analysis is the use of statistical methods to summarize the results of independent studies. By combining information from all relevant studies, meta-analysis can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review.  More information on meta-analyses can be found in  Cochrane Handbook, Chapter 9 .
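As a rough sketch of the statistics involved (a standard textbook formulation, not something specific to the Cochrane Handbook chapter cited above), a fixed-effect meta-analysis pools the effect estimates $\hat{\theta}_1, \dots, \hat{\theta}_k$ from $k$ studies using inverse-variance weights:

$$\hat{\theta}_{\text{pooled}} = \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{v_i},$$

where $v_i$ is the variance of study $i$'s estimate. The variance of the pooled estimate is $1 / \sum_{i} w_i$, which is smaller than the variance of any single included study; this is why combining studies yields more precise effect estimates than the individual studies alone.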

A meta-analysis goes beyond critique and integration and conducts secondary statistical analysis on the outcomes of similar studies.  It is a systematic review that uses quantitative methods to synthesize and summarize the results.

An advantage of a meta-analysis is the ability to be completely objective in evaluating research findings.  Not all topics, however, have sufficient research evidence to allow a meta-analysis to be conducted.  In that case, an integrative review is an appropriate strategy. 

Some of the content in this section is from Systematic reviews and meta-analyses: step by step guide created by Kate McAllister.



  • Volume 21, Issue 2
  • Reviewing the literature: choosing a review design

  • Helen Noble, School of Nursing and Midwifery, Queen’s University Belfast, Belfast, UK
  • Joanna Smith, School of Healthcare, University of Leeds, Leeds, UK
  • Correspondence to Dr Helen Noble, School of Nursing and Midwifery, Queen’s University Belfast, Belfast BT9 7BL, UK; helen.noble{at}qub.ac.uk

https://doi.org/10.1136/eb-2018-102895


Many health professionals, students and academics including health researchers will have grappled with the challenges of undertaking a review of the literature and choosing a suitable design or framework to structure the review. For many undergraduate and master’s healthcare students their final year dissertation involves undertaking a review of the literature as a way of assessing their understanding and ability to critique and apply research findings to practice. For PhD and Master’s by Research students, a rigorous summary of research is usually expected to identify the state of knowledge and gaps in the evidence related to their topic focus and to provide justification for the empirical work they subsequently undertake. From discussions with students and colleagues, there appears to be much confusion about review designs and in particular the use and perhaps misuse of the term ‘systematic review’. For example, some quantitatively focused researchers subscribe to a ‘Cochrane’ approach as the only method to undertake a ‘systematic review’, with other researchers having a more pragmatic view, recognising the different purposes of a review and ways of applying systematic methods to undertake a review of the literature. Traditionally, systematic reviews have included only quantitative, experimental studies, usually randomised controlled trials. 1 More recently, systematic reviews of qualitative studies have emerged, 2 and integrative reviews which include both quantitative and qualitative studies. 3

In this article, we will build on a previous Research Made Simple article that outlined the key principles of undertaking a review of the literature in a structured and systematic way 4 by further exploring review designs and their key features to assist you in choosing an appropriate design. A reference to an example of each review outlined will be provided.

What is the purpose of undertaking a review of the evidence?

The purpose of a review of healthcare literature is primarily to summarise the knowledge around a specific question or topic, or to make recommendations that can support health professionals and organisations make decisions about a specific intervention or care issue. 5 In addition, reviews can highlight gaps in knowledge to guide future research. The most common approach to summarising, interpreting and making recommendations from synthesising the evidence in healthcare is a traditional systematic review of the literature to answer a specific clinical question. These reviews follow explicit, prespecified and reproducible methods in order to identify, evaluate and summarise the findings of all relevant individual studies. 6 Systematic reviews are typically associated with evaluating interventions, and therefore where appropriate, combine the results of several empirical studies to give a more reliable estimate of an intervention’s effectiveness than a single study. 6 However, over the past decade the range of approaches to reviewing the literature has expanded to reflect broader types of evidence/research designs and questions reflecting the increased complexity of healthcare. While this should be welcomed, this adds to the challenges in choosing the best review approach/design that meets the purpose of the review.

What approaches can be adopted to review the evidence?


Key features of the common types of healthcare review

In summary, we have identified and described a variety of review designs and offered reasons for choosing a specific approach. Reviews are a vital research methodology and help make sense of a body of research. They offer a succinct analysis that avoids the need to access every individual research report included in the review, which is increasingly valuable for health professionals given the vast and growing amount of literature available. The field of literature reviews continues to change, and while new approaches are emerging, ensuring that methods are robust remains paramount. This paper offers guidance to help direct choices when deciding on a review and provides an example of each approach.

  • 5. Canadian Institutes of Health Research. Knowledge translation. Canadian Institutes of Health Research, 2008. http://www.cihr.ca/e/29418.html (accessed Jan 2018).
  • 6. Centre for Reviews and Dissemination. Guidance for undertaking reviews in healthcare. 3rd ed. York: CRD, York University, 2009.
  • 19. Joanna Briggs Institute. Umbrella reviews. 2014. http://joannabriggs.org/assets/docs/sumari/ReviewersManual-Methodology-JBI_Umbrella_Reviews-2014.pdf (accessed Jan 2018).

Competing interests None declared.

Provenance and peer review Commissioned; internally peer reviewed.



Research Methods: Literature Reviews

  • Annotated Bibliographies
  • Literature Reviews
  • Scoping Reviews
  • Systematic Reviews
  • Scholarship of Teaching and Learning
  • Persuasive Arguments
  • Subject Specific Methodology

A literature review involves researching, reading, analyzing, evaluating, and summarizing scholarly literature (typically journals and articles) about a specific topic. The results of a literature review may be an entire report or article OR may be part of an article, thesis, dissertation, or grant proposal. A literature review helps the author learn about the history and nature of their topic, and identify research gaps and problems.

Steps & Elements

Problem formulation

  • Determine your topic and its components by asking a question
  • Research: locate literature related to your topic to identify the gap(s) that can be addressed
  • Read: read the articles or other sources of information
  • Analyze: assess the findings for relevance
  • Evaluate: determine how the articles are relevant to your research and what their key findings are
  • Synthesize: write about the key findings and how they are relevant to your research

Elements of a Literature Review

  • Summarize subject, issue or theory under consideration, along with objectives of the review
  • Divide works under review into categories (e.g. those in support of a particular position, those against, those offering alternative theories entirely)
  • Explain how each work is similar to and how it varies from the others
  • Conclude which pieces present the strongest arguments, are most convincing, and make the greatest contribution to the understanding and development of the area of research

Writing a Literature Review Resources

  • How to Write a Literature Review From the Wesleyan University Library
  • Write a Literature Review From the University of California Santa Cruz Library. A brief overview of a literature review; includes a list of stages for writing a lit review.
  • Literature Reviews From the University of North Carolina Writing Center. Detailed information about writing a literature review.
  • Undertaking a literature review: a step-by-step approach Cronin, P., Ryan, F., & Coughan, M. (2008). Undertaking a literature review: A step-by-step approach. British Journal of Nursing, 17(1), p.38-43


Literature Review Tutorial


Research-Methodology

Types of Literature Review

There are many types of literature review. The choice of a specific type depends on your research approach and design. The following types of literature review are the most popular in business studies:

Narrative literature review, also referred to as traditional literature review, critiques and summarizes a body of literature. A narrative review also draws conclusions about the topic and identifies gaps or inconsistencies in a body of knowledge. You need to have a sufficiently focused research question to conduct a narrative literature review.

Systematic literature review requires a more rigorous and well-defined approach compared to most other types of literature review. A systematic literature review is comprehensive and details the timeframe within which the literature was selected. Systematic literature reviews can be divided into two categories: meta-analysis and meta-synthesis.

When you conduct a meta-analysis, you take findings from several studies on the same subject and analyze these using standardized statistical procedures. In a meta-analysis, patterns and relationships are detected and conclusions are drawn. Meta-analysis is associated with a deductive research approach.

Meta-synthesis, on the other hand, is based on non-statistical techniques. This technique integrates, evaluates and interprets findings of multiple qualitative research studies. A meta-synthesis literature review is usually conducted when following an inductive research approach.

Scoping literature review, as implied by its name, is used to identify the scope or coverage of a body of literature on a given topic. It has been noted that “scoping reviews are useful for examining emerging evidence when it is still unclear what other, more specific questions can be posed and valuably addressed by a more precise systematic review.” [1] The main difference between systematic and scoping types of literature review is that a systematic literature review is conducted to find answers to more specific research questions, whereas a scoping literature review is conducted to explore a more general research question.

Argumentative literature review, as the name implies, examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. It should be noted that a potential for bias is a major shortcoming associated with an argumentative literature review.

Integrative literature review reviews, critiques, and synthesizes secondary data about a research topic in an integrated way such that new frameworks and perspectives on the topic are generated. If your research does not involve primary data collection and data analysis, then using an integrative literature review will be your only option.

Theoretical literature review focuses on a pool of theory that has accumulated in regard to an issue, concept, theory, or phenomenon. Theoretical literature reviews play an instrumental role in establishing what theories already exist, the relationships between them, and to what degree existing theories have been investigated, and they help develop new hypotheses to be tested.

In the earlier parts of the literature review chapter, you need to specify the type of literature review you chose and justify your choice. Your choice of a specific type of literature review should be based upon your research area, research problem, and research methods. Also, you can briefly discuss the other popular types of literature review mentioned above to illustrate your awareness of them.

[1] Munn, Z. et al. (2018) “Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach” BMC Medical Research Methodology


  John Dudovskiy

  • CBE Life Sci Educ
  • v.21(3); Fall 2022

Literature Reviews, Theoretical Frameworks, and Conceptual Frameworks: An Introduction for New Biology Education Researchers

Julie A. Luft

† Department of Mathematics, Social Studies, and Science Education, Mary Frances Early College of Education, University of Georgia, Athens, GA 30602-7124

Sophia Jeong

‡ Department of Teaching & Learning, College of Education & Human Ecology, Ohio State University, Columbus, OH 43210

Robert Idsardi

§ Department of Biology, Eastern Washington University, Cheney, WA 99004

Grant Gardner

∥ Department of Biology, Middle Tennessee State University, Murfreesboro, TN 37132


To frame their work, biology education researchers need to consider the role of literature reviews, theoretical frameworks, and conceptual frameworks as critical elements of the research and writing process. However, these elements can be confusing for scholars new to education research. This Research Methods article is designed to provide an overview of each of these elements and delineate the purpose of each in the educational research process. We describe what biology education researchers should consider as they conduct literature reviews, identify theoretical frameworks, and construct conceptual frameworks. Clarifying these different components of educational research studies can be helpful to new biology education researchers and the biology education research community at large in situating their work in the broader scholarly literature.

INTRODUCTION

Discipline-based education research (DBER) involves the purposeful and situated study of teaching and learning in specific disciplinary areas (Singer et al., 2012). Studies in DBER are guided by research questions that reflect disciplines’ priorities and worldviews. Researchers can use quantitative data, qualitative data, or both to answer these research questions through a variety of methodological traditions. Across all methodologies, there are different methods associated with planning and conducting educational research studies that include the use of surveys, interviews, observations, artifacts, or instruments. Ensuring the coherence of these elements to the discipline’s perspective also involves situating the work in the broader scholarly literature. The tools for doing this include literature reviews, theoretical frameworks, and conceptual frameworks. However, the purpose and function of each of these elements is often confusing to new education researchers. The goal of this article is to introduce new biology education researchers to these three elements, which are important in DBER scholarship and the broader educational literature.

The first element we discuss is a review of research (literature reviews), which highlights the need for a specific research question, study problem, or topic of investigation. Literature reviews situate the relevance of the study within a topic and a field. The process may seem familiar to science researchers entering DBER fields, but new researchers may still struggle in conducting the review. Booth et al. (2016b) highlight some of the challenges novice education researchers face when conducting a review of literature. They point out that novice researchers struggle in deciding how to focus the review, determining the scope of articles needed in the review, and knowing how to be critical of the articles in the review. Overcoming these challenges (and others) can help novice researchers construct a sound literature review that can inform the design of the study and help ensure the work makes a contribution to the field.

The second and third highlighted elements are theoretical and conceptual frameworks. These guide biology education research (BER) studies, and may be less familiar to science researchers. These elements are important in shaping the construction of new knowledge. Theoretical frameworks offer a way to explain and interpret the studied phenomenon, while conceptual frameworks clarify assumptions about the studied phenomenon. Despite the importance of these constructs in educational research, biology educational researchers have noted the limited use of theoretical or conceptual frameworks in published work ( DeHaan, 2011 ; Dirks, 2011 ; Lo et al. , 2019 ). In reviewing articles published in CBE—Life Sciences Education ( LSE ) between 2015 and 2019, we found that fewer than 25% of the research articles had a theoretical or conceptual framework (see the Supplemental Information), and at times there was an inconsistent use of theoretical and conceptual frameworks. Clearly, these frameworks are challenging for published biology education researchers, which suggests the importance of providing some initial guidance to new biology education researchers.

Fortunately, educational researchers have increased their explicit use of these frameworks over time, and this is influencing educational research in science, technology, engineering, and mathematics (STEM) fields. For instance, a quick search for theoretical or conceptual frameworks in the abstracts of articles in Educational Research Complete (a common database for educational research) in STEM fields demonstrates a dramatic change over the last 20 years: from only 778 articles published between 2000 and 2010 to 5703 articles published between 2010 and 2020, a more than sevenfold increase. Greater recognition of the importance of these frameworks is contributing to DBER authors being more explicit about such frameworks in their studies.

Collectively, literature reviews, theoretical frameworks, and conceptual frameworks work to guide methodological decisions and the elucidation of important findings. Each offers a different perspective on the problem of study and is an essential element in all forms of educational research. As new researchers seek to learn about these elements, they will find different resources, a variety of perspectives, and many suggestions about the construction and use of these elements. The wide range of available information can overwhelm the new researcher who just wants to learn the distinction between these elements or how to craft them adequately.

Our goal in writing this paper is not to offer specific advice about how to write these sections in scholarly work. Instead, we wanted to introduce these elements to those who are new to BER and who are interested in better distinguishing one from the other. In this paper, we share the purpose of each element in BER scholarship, along with important points on its construction. We also provide references for additional resources that may be beneficial to better understanding each element. Table 1 summarizes the key distinctions among these elements.

Comparison of literature reviews, theoretical frameworks, and conceptual reviews

This article is written for the new biology education researcher who is just learning about these different elements or for scientists looking to become more involved in BER. It is a result of our own work as science education and biology education researchers, whether as graduate students and postdoctoral scholars or newly hired and established faculty members. This is the article we wish had been available as we started to learn about these elements or discussed them with new educational researchers in biology.

LITERATURE REVIEWS

Purpose of a Literature Review

A literature review is foundational to any research study in education or science. In education, a well-conceptualized and well-executed review provides a summary of the research that has already been done on a specific topic and identifies questions that remain to be answered, thus illustrating the current research project’s potential contribution to the field and the reasoning behind the methodological approach selected for the study ( Maxwell, 2012 ). BER is an evolving disciplinary area that is redefining areas of conceptual emphasis as well as orientations toward teaching and learning (e.g., Labov et al. , 2010 ; American Association for the Advancement of Science, 2011 ; Nehm, 2019 ). As a result, building comprehensive, critical, purposeful, and concise literature reviews can be a challenge for new biology education researchers.

Building Literature Reviews

There are different ways to approach and construct a literature review. Booth et al. (2016a) provide an overview that includes, for example, scoping reviews, which are focused only on notable studies and use a basic method of analysis, and integrative reviews, which are the result of exhaustive literature searches across different genres. Underlying each of these different review processes is attention to the Search process, Appraisal of articles, Synthesis of the literature, and Analysis: SALSA (Booth et al., 2016a). This useful acronym can help the researcher focus on the process while building a specific type of review.

However, new educational researchers often have questions about literature reviews that are foundational to SALSA or other approaches. Common questions concern determining which literature pertains to the topic of study or the role of the literature review in the design of the study. This section addresses such questions broadly while providing general guidance for writing a narrative literature review that evaluates the most pertinent studies.

The literature review process should begin before the research is conducted. As Boote and Beile (2005 , p. 3) suggested, researchers should be “scholars before researchers.” They point out that having a good working knowledge of the proposed topic helps illuminate avenues of study. Some subject areas have a deep body of work to read and reflect upon, providing a strong foundation for developing the research question(s). For instance, the teaching and learning of evolution is an area of long-standing interest in the BER community, generating many studies (e.g., Perry et al. , 2008 ; Barnes and Brownell, 2016 ) and reviews of research (e.g., Sickel and Friedrichsen, 2013 ; Ziadie and Andrews, 2018 ). Emerging areas of BER include the affective domain, issues of transfer, and metacognition ( Singer et al. , 2012 ). Many studies in these areas are transdisciplinary and not always specific to biology education (e.g., Rodrigo-Peiris et al. , 2018 ; Kolpikova et al. , 2019 ). These newer areas may require reading outside BER; fortunately, summaries of some of these topics can be found in the Current Insights section of the LSE website.

In focusing on a specific problem within a broader research strand, a new researcher will likely need to examine research outside BER. Depending upon the area of study, the expanded reading list might involve a mix of BER, DBER, and educational research studies. Determining the scope of the reading is not always straightforward. A simple way to focus one’s reading is to create a “summary phrase” or “research nugget,” which is a very brief descriptive statement about the study. It should focus on the essence of the study, for example, “first-year nonmajor students’ understanding of evolution,” “metacognitive prompts to enhance learning during biochemistry,” or “instructors’ inquiry-based instructional practices after professional development programming.” This type of phrase should help a new researcher identify two or more areas to review that pertain to the study. Focusing on recent research in the last 5 years is a good first step. Additional studies can be identified by reading relevant works referenced in those articles. It is also important to read seminal studies that are more than 5 years old. Reading a range of studies should give the researcher the necessary command of the subject in order to suggest a research question.

Given that the research question(s) arise from the literature review, the review should also substantiate the selected methodological approach. The review and research question(s) guide the researcher in determining how to collect and analyze data. Often the methodological approach used in a study is selected to contribute knowledge that expands upon what has been published previously about the topic (see Institute of Education Sciences and National Science Foundation, 2013). An emerging topic of study may need an exploratory approach that allows for a description of the phenomenon and development of a potential theory. This could, but does not necessarily, require a methodological approach that uses interviews, observations, surveys, or other instruments. An extensively studied topic may call for the additional understanding of specific factors or variables; this type of study would be well suited to a verification or a causal research design. These could entail a methodological approach that uses valid and reliable instruments, observations, or interviews to determine an effect in the studied event. In either of these examples, the researcher(s) may use a qualitative, quantitative, or mixed methods methodological approach.

Even with a good research question, there is still more reading to be done. The complexity and focus of the research question dictate the depth and breadth of the literature to be examined. Questions that connect multiple topics can require broad literature reviews. For instance, a study that explores the impact of a biology faculty learning community on the inquiry instruction of faculty could have the following review areas: learning communities among biology faculty, inquiry instruction among biology faculty, and inquiry instruction among biology faculty as a result of professional learning. Biology education researchers need to consider whether their literature review requires studies from different disciplines within or outside DBER. For the example given, it would be fruitful to look at research focused on learning communities with faculty in STEM fields or in general education fields that result in instructional change. It is important not to be too narrow or too broad when reading. When the conclusions of articles start to sound similar or no new insights are gained, the researcher likely has a good foundation for a literature review. This level of reading should allow the researcher to demonstrate mastery of the researched topic, explain the suitability of the proposed research approach, and point to the need for the refined research question(s).

The literature review should include the researcher’s evaluation and critique of the selected studies. A researcher may have a large collection of studies, but not all of the studies will follow standards important in the reporting of empirical work in the social sciences. The American Educational Research Association ( Duran et al. , 2006 ), for example, offers a general discussion about standards for such work: an adequate review of research informing the study, the existence of sound and appropriate data collection and analysis methods, and appropriate conclusions that do not overstep or underexplore the analyzed data. The Institute of Education Sciences and National Science Foundation (2013) also offer Common Guidelines for Education Research and Development that can be used to evaluate collected studies.

Because not all journals adhere to such standards, it is important that a researcher review each study to determine the quality of published research, per the guidelines suggested earlier. In some instances, the research may be fatally flawed. Examples of such flaws include data that do not pertain to the question, a lack of discussion about the data collection, poorly constructed instruments, or an inadequate analysis. These types of errors result in studies that are incomplete, error-laden, or inaccurate and should be excluded from the review. Most studies have limitations, and the author(s) often make them explicit. For instance, there may be an instructor effect, recognized bias in the analysis, or issues with the sample population. Limitations are usually addressed by the research team in some way to ensure a sound and acceptable research process. Occasionally, the limitations associated with the study can be significant and not addressed adequately, which leaves a consequential decision in the hands of the researcher. Providing critiques of studies in the literature review process gives the reader confidence that the researcher has carefully examined relevant work in preparation for the study and, ultimately, the manuscript.

A solid literature review clearly anchors the proposed study in the field and connects the research question(s), the methodological approach, and the discussion. Reviewing extant research leads to research questions that will contribute to what is known in the field. By summarizing what is known, the literature review points to what needs to be known, which in turn guides decisions about methodology. Finally, notable findings of the new study are discussed in reference to those described in the literature review.

Within published BER studies, literature reviews can be placed in different locations in an article. When included in the introductory section of the study, the first few paragraphs of the manuscript set the stage, with the literature review following the opening paragraphs. Cooper et al. (2019) illustrate this approach in their study of course-based undergraduate research experiences (CUREs). An introduction discussing the potential of CUREs is followed by an analysis of the existing literature relevant to the design of CUREs that allows for novel student discoveries. Within this review, the authors point out contradictory findings among research on novel student discoveries. This clarifies the need for their study, which is described and highlighted through specific research aims.

A literature review can also make up a separate section in a paper. For example, the introduction to Todd et al. (2019) illustrates the need for their research topic by highlighting the potential of learning progressions (LPs) and suggesting that LPs may help mitigate learning loss in genetics. At the end of the introduction, the authors state their specific research questions. The review of literature following this opening section comprises two subsections. One focuses on learning loss in general and examines a variety of studies and meta-analyses from the disciplines of medical education, mathematics, and reading. The second section focuses specifically on LPs in genetics and highlights student learning in the midst of LPs. These separate reviews provide insights into the stated research question.

Suggestions and Advice

A well-conceptualized, comprehensive, and critical literature review reveals the understanding of the topic that the researcher brings to the study. Literature reviews should not be so big that there is no clear area of focus; nor should they be so narrow that no real research question arises. The task for a researcher is to craft an efficient literature review that offers a critical analysis of published work, articulates the need for the study, guides the methodological approach to the topic of study, and provides an adequate foundation for the discussion of the findings.

In our own writing of literature reviews, there are often many drafts. An early draft may seem well suited to the study because the need for and approach to the study are well described. However, as the results of the study are analyzed and findings begin to emerge, the existing literature review may be inadequate and need revision. The need for an expanded discussion about the research area can result in the inclusion of new studies that support the explanation of a potential finding. The literature review may also prove to be too broad. Refocusing on a specific area allows for more contemplation of a finding.

It should be noted that there are different types of literature reviews, and many books and articles have been written about the different ways to embark on these types of reviews. Among these different resources, the following may be helpful in considering how to refine the review process for scholarly journals:

  • Booth, A., Sutton, A., & Papaioannou, D. (2016a). Systematic approaches to a successful literature review (2nd ed.). Los Angeles, CA: Sage. This book addresses different types of literature reviews and offers important suggestions pertaining to defining the scope of the literature review and assessing extant studies.
  • Booth, W. C., Colomb, G. G., Williams, J. M., Bizup, J., & Fitzgerald, W. T. (2016b). The craft of research (4th ed.). Chicago: University of Chicago Press. This book can help the novice consider how to make the case for an area of study. While this book is not specifically about literature reviews, it offers suggestions about making the case for your study.
  • Galvan, J. L., & Galvan, M. C. (2017). Writing literature reviews: A guide for students of the social and behavioral sciences (7th ed.). Routledge. This book offers guidance on writing different types of literature reviews. For the novice researcher, there are useful suggestions for creating coherent literature reviews.

THEORETICAL FRAMEWORKS

Purpose of theoretical frameworks.

As new education researchers may be less familiar with theoretical frameworks than with literature reviews, this discussion begins with an analogy. Envision a biologist, chemist, and physicist examining together the dramatic effect of a fog tsunami over the ocean. A biologist gazing at this phenomenon may be concerned with the effect of fog on various species. A chemist may be interested in the chemical composition of the fog as water vapor condenses around bits of salt. A physicist may be focused on the refraction of light to make fog appear to be “sitting” above the ocean. While observing the same “objective event,” the scientists are operating under different theoretical frameworks that provide a particular perspective or “lens” for the interpretation of the phenomenon. Each of these scientists brings specialized knowledge, experiences, and values to this phenomenon, and these influence the interpretation of the phenomenon. The scientists’ theoretical frameworks influence how they design and carry out their studies and interpret their data.

Within an educational study, a theoretical framework helps to explain a phenomenon through a particular lens and challenges and extends existing knowledge within the limitations of that lens. Theoretical frameworks are explicitly stated by an educational researcher in the paper’s framework, theory, or relevant literature section. The framework shapes the types of questions asked, guides the method by which data are collected and analyzed, and informs the discussion of the results of the study. It also reveals the researcher’s subjectivities, for example, values, social experience, and viewpoint ( Allen, 2017 ). It is essential that a novice researcher learn to explicitly state a theoretical framework, because all research questions are being asked from the researcher’s implicit or explicit assumptions of a phenomenon of interest ( Schwandt, 2000 ).

Selecting Theoretical Frameworks

Theoretical frameworks are one of the most contemplated elements in our work in educational research. In this section, we share three important considerations for new scholars selecting a theoretical framework.

The first step in identifying a theoretical framework involves reflecting on the phenomenon within the study and the assumptions aligned with the phenomenon. The phenomenon involves the studied event. There are many possibilities, for example, student learning, instructional approach, or group organization. A researcher holds assumptions about how the phenomenon will be affected, influenced, changed, or portrayed. It is ultimately the researcher’s assumption(s) about the phenomenon that aligns with a theoretical framework. An example can help illustrate how a researcher’s reflection on the phenomenon and acknowledgment of assumptions can result in the identification of a theoretical framework.

In our example, a biology education researcher may be interested in exploring how students’ learning of difficult biological concepts can be supported by the interactions of group members. The phenomenon of interest is the interactions among the peers, and the researcher assumes that more knowledgeable students are important in supporting the learning of the group. As a result, the researcher may draw on Vygotsky’s (1978) sociocultural theory of learning and development that is focused on the phenomenon of student learning in a social setting. This theory posits the critical nature of interactions among students and between students and teachers in the process of building knowledge. A researcher drawing upon this framework holds the assumption that learning is a dynamic social process involving questions and explanations among students in the classroom and that more knowledgeable peers play an important part in the process of building conceptual knowledge.

It is important to state at this point that there are many different theoretical frameworks. Some frameworks focus on learning and knowing, while other theoretical frameworks focus on equity, empowerment, or discourse. Some frameworks are well articulated, and others are still being refined. For a new researcher, it can be challenging to find a theoretical framework. One of the best ways to look for theoretical frameworks is through published works that highlight different frameworks.

When a theoretical framework is selected, it should clearly connect to all parts of the study. The framework should augment the study by adding a perspective that provides greater insights into the phenomenon. It should clearly align with the studies described in the literature review. For instance, a framework focused on learning would correspond to research that reported different learning outcomes for similar studies. The methods for data collection and analysis should also correspond to the framework. For instance, a study about instructional interventions could use a theoretical framework concerned with learning and could collect data about the effect of the intervention on what is learned. When the data are analyzed, the theoretical framework should provide added meaning to the findings, and the findings should align with the theoretical framework.

A study by Jensen and Lawson (2011) provides an example of how a theoretical framework connects different parts of the study. They compared undergraduate biology students in heterogeneous and homogeneous groups over the course of a semester. Jensen and Lawson (2011) assumed that learning involved collaboration and more knowledgeable peers, which made Vygotsky’s (1978) theory a good fit for their study. They predicted that students in heterogeneous groups would experience greater improvement in their reasoning abilities and science achievements with much of the learning guided by the more knowledgeable peers.

In the enactment of the study, they collected data about the instruction in traditional and inquiry-oriented classes, while the students worked in homogeneous or heterogeneous groups. To determine the effect of working in groups, the authors also measured students’ reasoning abilities and achievement. Each data-collection and analysis decision connected to understanding the influence of collaborative work.

Their findings highlighted aspects of Vygotsky’s (1978) theory of learning. One finding, for instance, indicated that inquiry instruction, as a whole, resulted in reasoning and achievement gains. This links to Vygotsky (1978), because inquiry instruction involves interactions among group members. A more nuanced finding was that group composition had a conditional effect. Heterogeneous groups performed better with more traditional and didactic instruction, regardless of the reasoning ability of the group members. Homogeneous groups worked better during interaction-rich activities for students with low reasoning ability. The authors attributed the variation to the different types of helping behaviors of students. High-performing students provided the answers, while students with low reasoning ability had to work collectively through the material. In terms of Vygotsky (1978), this finding provided new insights into the learning context in which productive interactions can occur for students.

Another consideration in the selection and use of a theoretical framework pertains to its orientation to the study. This can result in the theoretical framework prioritizing individuals, institutions, and/or policies ( Anfara and Mertz, 2014 ). Frameworks that connect to individuals, for instance, could contribute to understanding their actions, learning, or knowledge. Institutional frameworks, on the other hand, offer insights into how institutions, organizations, or groups can influence individuals or materials. Policy theories provide ways to understand how national or local policies can dictate an emphasis on outcomes or instructional design. These different types of frameworks highlight different aspects in an educational setting, which influences the design of the study and the collection of data. In addition, these different frameworks offer a way to make sense of the data. Aligning the data collection and analysis with the framework ensures that a study is coherent and can contribute to the field.

New understandings emerge when different theoretical frameworks are used. For instance, Ebert-May et al. (2015) prioritized the individual level within conceptual change theory (see Posner et al. , 1982 ). In this theory, an individual’s knowledge changes when it no longer fits the phenomenon. Ebert-May et al. (2015) designed a professional development program challenging biology postdoctoral scholars’ existing conceptions of teaching. The authors reported that the biology postdoctoral scholars’ teaching practices became more student-centered as they were challenged to explain their instructional decision making. According to the theory, the biology postdoctoral scholars’ dissatisfaction in their descriptions of teaching and learning initiated change in their knowledge and instruction. These results reveal how conceptual change theory can explain the learning of participants and guide the design of professional development programming.

The communities of practice (CoP) theoretical framework (Lave, 1988; Wenger, 1998) prioritizes the institutional level, suggesting that learning occurs when individuals learn from and contribute to the communities in which they reside. Grounded in the assumption of community learning, the literature on CoP suggests that, as individuals interact regularly with the other members of their group, they learn about the rules, roles, and goals of the community (Allee, 2000). A study conducted by Gehrke and Kezar (2017) used the CoP framework to understand organizational change by examining the involvement of individual faculty engaged in a cross-institutional CoP focused on changing the instructional practice of faculty at each institution. In the CoP, faculty members were involved in enhancing instructional materials within their department, which aligned with an overarching goal of instituting instruction that embraced active learning. Not surprisingly, Gehrke and Kezar (2017) revealed that faculty who perceived the community culture as important in their work cultivated institutional change. Furthermore, they found that institutional change was sustained when key leaders served as mentors and provided support for faculty, and as faculty themselves developed into leaders. This study reveals the complexity of individual roles in a CoP in supporting institutional instructional change.

It is important to explicitly state the theoretical framework used in a study, but elucidating a theoretical framework can be challenging for a new educational researcher. The literature review can help to identify an applicable theoretical framework. Focal areas of the review or central terms often connect to assumptions and assertions associated with the framework that pertain to the phenomenon of interest. Another way to identify a theoretical framework is for researchers to reflect on the personal beliefs and understandings about the nature of knowledge that they bring to the study (Lysaght, 2011). In stating one’s beliefs and understandings related to the study (e.g., students construct their knowledge, instructional materials support learning), an orientation becomes evident that will suggest a particular theoretical framework. Theoretical frameworks are not arbitrary, but purposefully selected.

With experience, a researcher may find expanded roles for theoretical frameworks. Researchers may revise an existing framework that has limited explanatory power, or they may decide there is a need to develop a new theoretical framework. These frameworks can emerge from a current study or the need to explain a phenomenon in a new way. Researchers may also find that multiple theoretical frameworks are necessary to frame and explore a problem, as different frameworks can provide different insights into a problem.

Finally, it is important to recognize that choosing “x” theoretical framework does not necessarily mean a researcher chooses “y” methodology and so on, nor is there a clear-cut, linear process in selecting a theoretical framework for one’s study. In part, the nonlinear process of identifying a theoretical framework is what makes understanding and using theoretical frameworks challenging. For the novice scholar, contemplating and understanding theoretical frameworks is essential. Fortunately, there are articles and books that can help:

  • Creswell, J. W. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). Los Angeles, CA: Sage. This book provides an overview of theoretical frameworks in general educational research.
  • Ding, L. (2019). Theoretical perspectives of quantitative physics education research. Physical Review Physics Education Research , 15 (2), 020101-1–020101-13. This paper illustrates how a DBER field can use theoretical frameworks.
  • Nehm, R. (2019). Biology education research: Building integrative frameworks for teaching and learning about living systems. Disciplinary and Interdisciplinary Science Education Research , 1 , ar15. https://doi.org/10.1186/s43031-019-0017-6 . This paper articulates the need for studies in BER to explicitly state theoretical frameworks and provides examples of potential studies.
  • Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice . Sage. This book also provides an overview of theoretical frameworks, but for both research and evaluation.

CONCEPTUAL FRAMEWORKS

Purpose of a conceptual framework.

A conceptual framework is a description of the way a researcher understands the factors and/or variables that are involved in the study and their relationships to one another. The purpose of a conceptual framework is to articulate the concepts under study using relevant literature ( Rocco and Plakhotnik, 2009 ) and to clarify the presumed relationships among those concepts ( Rocco and Plakhotnik, 2009 ; Anfara and Mertz, 2014 ). Conceptual frameworks are different from theoretical frameworks in both their breadth and grounding in established findings. Whereas a theoretical framework articulates the lens through which a researcher views the work, the conceptual framework is often more mechanistic and malleable.

Conceptual frameworks are broader, encompassing both established theories (i.e., theoretical frameworks) and the researchers’ own emergent ideas. Emergent ideas, for example, may be rooted in informal and/or unpublished observations from experience. These emergent ideas would not be considered a “theory” if they are not yet tested, supported by systematically collected evidence, and peer reviewed. However, they do still play an important role in the way researchers approach their studies. The conceptual framework allows authors to clearly describe their emergent ideas so that connections among ideas in the study and the significance of the study are apparent to readers.

Constructing Conceptual Frameworks

Including a conceptual framework in a research study is important, but researchers often opt to include either a conceptual or a theoretical framework. Either may be adequate, but both provide greater insight into the research approach. For instance, a research team plans to test a novel component of an existing theory. In their study, they describe the existing theoretical framework that informs their work and then present their own conceptual framework. Within this conceptual framework, specific topics portray emergent ideas that are related to the theory. Describing both frameworks allows readers to better understand the researchers’ assumptions, orientations, and understanding of concepts being investigated. For example, Connolly et al. (2018) included a conceptual framework that described how they applied a theoretical framework of social cognitive career theory (SCCT) to their study on teaching programs for doctoral students. In their conceptual framework, the authors described SCCT, explained how it applied to the investigation, and drew upon results from previous studies to justify the proposed connections between the theory and their emergent ideas.

In some cases, authors may be able to sufficiently describe their conceptualization of the phenomenon under study in an introduction alone, without a separate conceptual framework section. However, incomplete descriptions of how the researchers conceptualize the components of the study may limit the significance of the study by making the research less intelligible to readers. This is especially problematic when studying topics in which researchers use the same terms for different constructs or different terms for similar and overlapping constructs (e.g., inquiry, teacher beliefs, pedagogical content knowledge, or active learning). Authors must describe their conceptualization of a construct if the research is to be understandable and useful.

There are some key areas to consider regarding the inclusion of a conceptual framework in a study. To begin with, it is important to recognize that conceptual frameworks are constructed by the researchers conducting the study ( Rocco and Plakhotnik, 2009 ; Maxwell, 2012 ). This is different from theoretical frameworks that are often taken from established literature. Researchers should bring together ideas from the literature, but they may be influenced by their own experiences as a student and/or instructor, the shared experiences of others, or thought experiments as they construct a description, model, or representation of their understanding of the phenomenon under study. This is an exercise in intellectual organization and clarity that often considers what is learned, known, and experienced. The conceptual framework makes these constructs explicitly visible to readers, who may have different understandings of the phenomenon based on their prior knowledge and experience. There is no single method to go about this intellectual work.

Reeves et al. (2016) is an example of an article that proposed a conceptual framework about graduate teaching assistant professional development evaluation and research. The authors used existing literature to create a novel framework that filled a gap in current research and practice related to the training of graduate teaching assistants. This conceptual framework can guide the systematic collection of data by other researchers because the framework describes the relationships among various factors that influence teaching and learning. The Reeves et al. (2016) conceptual framework may be modified as additional data are collected and analyzed by other researchers. This is not uncommon, as conceptual frameworks can serve as catalysts for concerted research efforts that systematically explore a phenomenon (e.g., Reynolds et al. , 2012 ; Brownell and Kloser, 2015 ).

Sabel et al. (2017) used a conceptual framework in their exploration of how scaffolds, an external factor, interact with internal factors to support student learning. Their conceptual framework integrated principles from two theoretical frameworks, self-regulated learning and metacognition, to illustrate how the research team conceptualized students’ use of scaffolds in their learning ( Figure 1 ). Sabel et al. (2017) created this model using their interpretations of these two frameworks in the context of their teaching.

Figure 1. Conceptual framework from Sabel et al. (2017).

A conceptual framework should describe the relationship among components of the investigation ( Anfara and Mertz, 2014 ). These relationships should guide the researcher’s methods of approaching the study ( Miles et al. , 2014 ) and inform both the data to be collected and how those data should be analyzed. Explicitly describing the connections among the ideas allows the researcher to justify the importance of the study and the rigor of the research design. Just as importantly, these frameworks help readers understand why certain components of a system were not explored in the study. This is a challenge in education research, which is rooted in complex environments with many variables that are difficult to control.

For example, Sabel et al. (2017) stated: “Scaffolds, such as enhanced answer keys and reflection questions, can help students and instructors bridge the external and internal factors and support learning” (p. 3). They connected the scaffolds in the study to the three dimensions of metacognition and the eventual transformation of existing ideas into new or revised ideas. Their framework provides a rationale for focusing on how students use two different scaffolds, and not on other factors that may influence a student’s success (self-efficacy, use of active learning, exam format, etc.).

In constructing conceptual frameworks, researchers should address needed areas of study and/or contradictions discovered in literature reviews. By attending to these areas, researchers can strengthen their arguments for the importance of a study. For instance, conceptual frameworks can address how the current study will fill gaps in the research, resolve contradictions in existing literature, or suggest a new area of study. While a literature review describes what is known and not known about the phenomenon, the conceptual framework leverages these gaps in describing the current study ( Maxwell, 2012 ). In the example of Sabel et al. (2017) , the authors indicated there was a gap in the literature regarding how scaffolds engage students in metacognition to promote learning in large classes. Their study helps fill that gap by describing how scaffolds can support students in the three dimensions of metacognition: intelligibility, plausibility, and wide applicability. In another example, Lane (2016) integrated research from science identity, the ethic of care, the sense of belonging, and an expertise model of student success to form a conceptual framework that addressed the critiques of other frameworks. In a more recent example, Sbeglia et al. (2021) illustrated how a conceptual framework influences the methodological choices and inferences in studies by educational researchers.

Sometimes researchers draw upon the conceptual frameworks of other researchers. When a researcher’s conceptual framework closely aligns with an existing framework, the discussion may be brief. For example, Ghee et al. (2016) referred to portions of SCCT as their conceptual framework to explain the significance of their work on students’ self-efficacy and career interests. Because the authors’ conceptualization of this phenomenon aligned with a previously described framework, they briefly mentioned the conceptual framework and provided additional citations that provided more detail for the readers.

Within both the BER and the broader DBER communities, conceptual frameworks have been used to describe different constructs. For example, some researchers have used the term “conceptual framework” to describe students’ conceptual understandings of a biological phenomenon. This is distinct from a researcher’s conceptual framework of the educational phenomenon under investigation, which may also need to be explicitly described in the article. Other studies have presented a research logic model or flowchart of the research design as a conceptual framework. These constructions can be quite valuable in helping readers understand the data-collection and analysis process. However, a model depicting the study design does not serve the same role as a conceptual framework. Researchers need to avoid conflating these constructs by differentiating the researchers’ conceptual framework that guides the study from the research design, when applicable.

Explicitly describing conceptual frameworks is essential in depicting the focus of the study. We have found that being explicit in a conceptual framework means using accepted terminology, referencing prior work, and clearly noting connections between terms. This description can also highlight gaps in the literature or suggest potential contributions to the field of study. A well-elucidated conceptual framework can suggest additional studies that may be warranted. This can also spur other researchers to consider how they would approach the examination of a phenomenon and could result in a revised conceptual framework.

It can be challenging to create conceptual frameworks, but they are important. Below are two resources that could be helpful in constructing and presenting conceptual frameworks in educational research:

  • Maxwell, J. A. (2012). Qualitative research design: An interactive approach (3rd ed.). Los Angeles, CA: Sage. Chapter 3 in this book describes how to construct conceptual frameworks.
  • Ravitch, S. M., & Riggan, M. (2016). Reason & rigor: How conceptual frameworks guide research . Los Angeles, CA: Sage. This book explains how conceptual frameworks guide the research questions, data collection, data analyses, and interpretation of results.

CONCLUDING THOUGHTS

Literature reviews, theoretical frameworks, and conceptual frameworks are all important in DBER and BER. Robust literature reviews reinforce the importance of a study. Theoretical frameworks connect the study to the base of knowledge in educational theory and specify the researcher’s assumptions. Conceptual frameworks allow researchers to explicitly describe their conceptualization of the relationships among the components of the phenomenon under study. Table 1 provides a general overview of these components in order to assist biology education researchers in thinking about these elements.

It is important to emphasize that these different elements are intertwined. When these elements are aligned and complement one another, the study is coherent, and the study findings contribute to knowledge in the field. When literature reviews, theoretical frameworks, and conceptual frameworks are disconnected from one another, the study suffers. The point of the study is lost, suggested findings are unsupported, or important conclusions are invisible to the researcher. In addition, this misalignment may be costly in terms of time and money.

Conducting a literature review, selecting a theoretical framework, and building a conceptual framework are some of the most difficult elements of a research study. It takes time to understand the relevant research, identify a theoretical framework that provides important insights into the study, and formulate a conceptual framework that organizes the findings. In the research process, there is often a constant back and forth among these elements as the study evolves. With an ongoing refinement of the review of literature, clarification of the theoretical framework, and articulation of a conceptual framework, a sound study can emerge that makes a contribution to the field. This is the goal of BER and education research.

REFERENCES

  • Allee, V. (2000). Knowledge networks and communities of learning. OD Practitioner, 32(4), 4–13.
  • Allen, M. (2017). The Sage encyclopedia of communication research methods (Vols. 1–4). Los Angeles, CA: Sage. https://doi.org/10.4135/9781483381411
  • American Association for the Advancement of Science. (2011). Vision and change in undergraduate biology education: A call to action. Washington, DC.
  • Anfara, V. A., Mertz, N. T. (2014). Setting the stage. In Anfara, V. A., Mertz, N. T. (Eds.), Theoretical frameworks in qualitative research (pp. 1–22). Sage.
  • Barnes, M. E., Brownell, S. E. (2016). Practices and perspectives of college instructors on addressing religious beliefs when teaching evolution. CBE—Life Sciences Education, 15(2), ar18. https://doi.org/10.1187/cbe.15-11-0243
  • Boote, D. N., Beile, P. (2005). Scholars before researchers: On the centrality of the dissertation literature review in research preparation. Educational Researcher, 34(6), 3–15. https://doi.org/10.3102/0013189x034006003
  • Booth, A., Sutton, A., Papaioannou, D. (2016a). Systematic approaches to a successful literature review (2nd ed.). Los Angeles, CA: Sage.
  • Booth, W. C., Colomb, G. G., Williams, J. M., Bizup, J., Fitzgerald, W. T. (2016b). The craft of research (4th ed.). Chicago, IL: University of Chicago Press.
  • Brownell, S. E., Kloser, M. J. (2015). Toward a conceptual framework for measuring the effectiveness of course-based undergraduate research experiences in undergraduate biology. Studies in Higher Education, 40(3), 525–544. https://doi.org/10.1080/03075079.2015.1004234
  • Connolly, M. R., Lee, Y. G., Savoy, J. N. (2018). The effects of doctoral teaching development on early-career STEM scholars’ college teaching self-efficacy. CBE—Life Sciences Education, 17(1), ar14. https://doi.org/10.1187/cbe.17-02-0039
  • Cooper, K. M., Blattman, J. N., Hendrix, T., Brownell, S. E. (2019). The impact of broadly relevant novel discoveries on student project ownership in a traditional lab course turned CURE. CBE—Life Sciences Education, 18(4), ar57. https://doi.org/10.1187/cbe.19-06-0113
  • Creswell, J. W. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). Los Angeles, CA: Sage.
  • DeHaan, R. L. (2011). Education research in the biological sciences: A nine decade review (Paper commissioned by the NAS/NRC Committee on the Status, Contributions, and Future Directions of Discipline Based Education Research). Washington, DC: National Academies Press. Retrieved May 20, 2022, from www7.nationalacademies.org/bose/DBER_Meeting2_commissioned_papers_page.html
  • Ding, L. (2019). Theoretical perspectives of quantitative physics education research. Physical Review Physics Education Research, 15(2), 020101.
  • Dirks, C. (2011). The current status and future direction of biology education research. Paper presented at: Second Committee Meeting on the Status, Contributions, and Future Directions of Discipline-Based Education Research, 18–19 October (Washington, DC). Retrieved May 20, 2022, from http://sites.nationalacademies.org/DBASSE/BOSE/DBASSE_071087
  • Duran, R. P., Eisenhart, M. A., Erickson, F. D., Grant, C. A., Green, J. L., Hedges, L. V., Schneider, B. L. (2006). Standards for reporting on empirical social science research in AERA publications: American Educational Research Association. Educational Researcher, 35(6), 33–40.
  • Ebert-May, D., Derting, T. L., Henkel, T. P., Middlemis Maher, J., Momsen, J. L., Arnold, B., Passmore, H. A. (2015). Breaking the cycle: Future faculty begin teaching with learner-centered strategies after professional development. CBE—Life Sciences Education, 14(2), ar22. https://doi.org/10.1187/cbe.14-12-0222
  • Galvan, J. L., Galvan, M. C. (2017). Writing literature reviews: A guide for students of the social and behavioral sciences (7th ed.). New York, NY: Routledge. https://doi.org/10.4324/9781315229386
  • Gehrke, S., Kezar, A. (2017). The roles of STEM faculty communities of practice in institutional and departmental reform in higher education. American Educational Research Journal, 54(5), 803–833. https://doi.org/10.3102/0002831217706736
  • Ghee, M., Keels, M., Collins, D., Neal-Spence, C., Baker, E. (2016). Fine-tuning summer research programs to promote underrepresented students’ persistence in the STEM pathway. CBE—Life Sciences Education, 15(3), ar28. https://doi.org/10.1187/cbe.16-01-0046
  • Institute of Education Sciences & National Science Foundation. (2013). Common guidelines for education research and development. Retrieved May 20, 2022, from www.nsf.gov/pubs/2013/nsf13126/nsf13126.pdf
  • Jensen, J. L., Lawson, A. (2011). Effects of collaborative group composition and inquiry instruction on reasoning gains and achievement in undergraduate biology. CBE—Life Sciences Education, 10(1), 64–73.
  • Kolpikova, E. P., Chen, D. C., Doherty, J. H. (2019). Does the format of preclass reading quizzes matter? An evaluation of traditional and gamified, adaptive preclass reading quizzes. CBE—Life Sciences Education, 18(4), ar52. https://doi.org/10.1187/cbe.19-05-0098
  • Labov, J. B., Reid, A. H., Yamamoto, K. R. (2010). Integrated biology and undergraduate science education: A new biology education for the twenty-first century? CBE—Life Sciences Education, 9(1), 10–16. https://doi.org/10.1187/cbe.09-12-0092
  • Lane, T. B. (2016). Beyond academic and social integration: Understanding the impact of a STEM enrichment program on the retention and degree attainment of underrepresented students. CBE—Life Sciences Education, 15(3), ar39. https://doi.org/10.1187/cbe.16-01-0070
  • Lave, J. (1988). Cognition in practice: Mind, mathematics and culture in everyday life. New York, NY: Cambridge University Press.
  • Lo, S. M., Gardner, G. E., Reid, J., Napoleon-Fanis, V., Carroll, P., Smith, E., Sato, B. K. (2019). Prevailing questions and methodologies in biology education research: A longitudinal analysis of research in CBE—Life Sciences Education and at the Society for the Advancement of Biology Education Research. CBE—Life Sciences Education, 18(1), ar9. https://doi.org/10.1187/cbe.18-08-0164
  • Lysaght, Z. (2011). Epistemological and paradigmatic ecumenism in “Pasteur’s quadrant:” Tales from doctoral research. In Official Conference Proceedings of the Third Asian Conference on Education in Osaka, Japan. Retrieved May 20, 2022, from http://iafor.org/ace2011_offprint/ACE2011_offprint_0254.pdf
  • Maxwell, J. A. (2012). Qualitative research design: An interactive approach (3rd ed.). Los Angeles, CA: Sage.
  • Miles, M. B., Huberman, A. M., Saldaña, J. (2014). Qualitative data analysis (3rd ed.). Los Angeles, CA: Sage.
  • Nehm, R. (2019). Biology education research: Building integrative frameworks for teaching and learning about living systems. Disciplinary and Interdisciplinary Science Education Research, 1, ar15. https://doi.org/10.1186/s43031-019-0017-6
  • Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice. Los Angeles, CA: Sage.
  • Perry, J., Meir, E., Herron, J. C., Maruca, S., Stal, D. (2008). Evaluating two approaches to helping college students understand evolutionary trees through diagramming tasks. CBE—Life Sciences Education, 7(2), 193–201. https://doi.org/10.1187/cbe.07-01-0007
  • Posner, G. J., Strike, K. A., Hewson, P. W., Gertzog, W. A. (1982). Accommodation of a scientific conception: Toward a theory of conceptual change. Science Education, 66(2), 211–227.
  • Ravitch, S. M., Riggan, M. (2016). Reason & rigor: How conceptual frameworks guide research. Los Angeles, CA: Sage.
  • Reeves, T. D., Marbach-Ad, G., Miller, K. R., Ridgway, J., Gardner, G. E., Schussler, E. E., Wischusen, E. W. (2016). A conceptual framework for graduate teaching assistant professional development evaluation and research. CBE—Life Sciences Education, 15(2), es2. https://doi.org/10.1187/cbe.15-10-0225
  • Reynolds, J. A., Thaiss, C., Katkin, W., Thompson, R. J., Jr. (2012). Writing-to-learn in undergraduate science education: A community-based, conceptually driven approach. CBE—Life Sciences Education, 11(1), 17–25. https://doi.org/10.1187/cbe.11-08-0064
  • Rocco, T. S., Plakhotnik, M. S. (2009). Literature reviews, conceptual frameworks, and theoretical frameworks: Terms, functions, and distinctions. Human Resource Development Review, 8(1), 120–130. https://doi.org/10.1177/1534484309332617
  • Rodrigo-Peiris, T., Xiang, L., Cassone, V. M. (2018). A low-intensity, hybrid design between a “traditional” and a “course-based” research experience yields positive outcomes for science undergraduate freshmen and shows potential for large-scale application. CBE—Life Sciences Education, 17(4), ar53. https://doi.org/10.1187/cbe.17-11-0248
  • Sabel, J. L., Dauer, J. T., Forbes, C. T. (2017). Introductory biology students’ use of enhanced answer keys and reflection questions to engage in metacognition and enhance understanding. CBE—Life Sciences Education, 16(3), ar40. https://doi.org/10.1187/cbe.16-10-0298
  • Sbeglia, G. C., Goodridge, J. A., Gordon, L. H., Nehm, R. H. (2021). Are faculty changing? How reform frameworks, sampling intensities, and instrument measures impact inferences about student-centered teaching practices. CBE—Life Sciences Education, 20(3), ar39. https://doi.org/10.1187/cbe.20-11-0259
  • Schwandt, T. A. (2000). Three epistemological stances for qualitative inquiry: Interpretivism, hermeneutics, and social constructionism. In Denzin, N. K., Lincoln, Y. S. (Eds.), Handbook of qualitative research (2nd ed., pp. 189–213). Los Angeles, CA: Sage.
  • Sickel, A. J., Friedrichsen, P. (2013). Examining the evolution education literature with a focus on teachers: Major findings, goals for teacher preparation, and directions for future research. Evolution: Education and Outreach, 6(1), 23. https://doi.org/10.1186/1936-6434-6-23
  • Singer, S. R., Nielsen, N. R., Schweingruber, H. A. (2012). Discipline-based education research: Understanding and improving learning in undergraduate science and engineering. Washington, DC: National Academies Press.
  • Todd, A., Romine, W. L., Correa-Menendez, J. (2019). Modeling the transition from a phenotypic to genotypic conceptualization of genetics in a university-level introductory biology context. Research in Science Education, 49(2), 569–589. https://doi.org/10.1007/s11165-017-9626-2
  • Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
  • Wenger, E. (1998). Communities of practice: Learning as a social system. Systems Thinker, 9(5), 2–3.
  • Ziadie, M. A., Andrews, T. C. (2018). Moving evolution education forward: A systematic analysis of literature to identify gaps in collective knowledge for teaching. CBE—Life Sciences Education, 17(1), ar11. https://doi.org/10.1187/cbe.17-08-0190

Literature Reviews: Types of Clinical Study Designs

Types of Study Designs

Meta-Analysis - A way of combining data from many different research studies. A meta-analysis is a statistical process that combines the findings from individual studies. Example: Anxiety outcomes after physical activity interventions: meta-analysis findings. Conn V. Nurs Res. 2010 May-Jun;59(3):224-31.
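
To make the pooling step concrete, here is a minimal sketch in Python of a fixed-effect, inverse-variance weighted meta-analysis. The effect sizes and standard errors are hypothetical and are not taken from the Conn (2010) example above; random-effects models add a between-study variance term, but the weighting idea is the same.

```python
# Minimal sketch (hypothetical numbers) of a fixed-effect, inverse-variance
# weighted meta-analysis: each study's effect size is weighted by the
# inverse of its variance, then the weighted average is taken.
studies = [(0.42, 0.15), (0.30, 0.20), (0.55, 0.10)]   # (effect size, standard error)

weights = [1 / se**2 for _, se in studies]             # inverse-variance weights
pooled_effect = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5                  # standard error of the pooled effect

print(f"Pooled effect: {pooled_effect:.3f} (SE = {pooled_se:.3f})")
```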

Systematic Review - A summary of the clinical literature. A systematic review is a critical assessment and evaluation of all research studies that address a particular clinical issue. The researchers use an organized method of locating, assembling, and evaluating a body of literature on a particular topic using a set of specific criteria. A systematic review typically includes a description of the findings of the collection of research studies. The systematic review may also include a quantitative pooling of data, called a meta-analysis. Example: Complementary and alternative medicine use among women with breast cancer: a systematic review. Wanchai A, Armer JM, Stewart BR. Clin J Oncol Nurs. 2010 Aug;14(4):E45-55.

Randomized Controlled Trial - A controlled clinical trial that randomly (by chance) assigns participants to two or more groups. There are various methods to randomize study participants to their groups. Example: Meditation or exercise for preventing acute respiratory infection: a randomized controlled trial. Barrett B, et al. Ann Fam Med. 2012 Jul-Aug;10(4):337-46.
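
As a minimal illustration of "by chance" assignment, the sketch below shuffles a list of hypothetical participant IDs and splits it into two equal arms; real trials typically use more elaborate schemes such as blocked or stratified randomization.

```python
# Minimal sketch of simple randomization: hypothetical participant IDs are
# shuffled by chance and split into two equal study arms.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical participants
random.shuffle(participants)                          # chance ordering

half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]

print("Treatment arm:", treatment_group)
print("Control arm:  ", control_group)
```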

Cohort Study (Prospective Observational Study) - A clinical research study in which people who presently have a certain condition or receive a particular treatment are followed over time and compared with another group of people who are not affected by the condition. Example: Smokeless tobacco cessation in South Asian communities: a multi-centre prospective cohort study. Croucher R, et al. Addiction. 2012 Dec;107 Suppl 2:45-52.

Case-control Study - Case-control studies begin with the outcomes and do not follow people over time. Researchers choose people with a particular result (the cases) and interview the groups or check their records to ascertain what different experiences they had. They compare the odds of having an experience with the outcome to the odds of having an experience without the outcome. Example: Non-use of bicycle helmets and risk of fatal head injury: a proportional mortality, case-control study. Persaud N, et al. CMAJ. 2012 Nov 20;184(17):E921-3.

Cross-sectional Study - The observation of a defined population at a single point in time or time interval. Exposure and outcome are determined simultaneously. Example: Fasting might not be necessary before lipid screening: a nationally representative cross-sectional study. Steiner MJ, et al. Pediatrics. 2011 Sep;128(3):463-70.

Case Reports and Series - A report on a series of patients with an outcome of interest. No control group is involved. Example: Students mentoring students in a service-learning clinical supervision experience: an educational case report. Lattanzi JB, et al. Phys Ther. 2011 Oct;91(10):1513-24.

Ideas, Editorials, Opinions - Put forth by experts in the field. Example: Health and health care for the 21st century: for all the people. Koop CE. Am J Public Health. 2006 Dec;96(12):2090-2.

Animal Research Studies - Studies conducted using animal subjects. Example: Intranasal leptin reduces appetite and induces weight loss in rats with diet-induced obesity (DIO). Schulz C, Paulus K, Jöhren O, Lehnert H. Endocrinology. 2012 Jan;153(1):143-53.

Test-tube Lab Research - "Test tube" experiments conducted in a controlled laboratory setting.

Adapted from Study Designs. In NICHSR Introduction to Health Services Research: a Self-Study Course.  http://www.nlm.nih.gov/nichsr/ihcm/06studies/studies03.html and Glossary of EBM Terms. http://www.cebm.utoronto.ca/glossary/index.htm#top  

Study Design Terminology

Bias - Any deviation of results or inferences from the truth, or processes leading to such deviation. Bias can result from several sources: one-sided or systematic variations in measurement from the true value (systematic error); flaws in study design; deviation of inferences, interpretations, or analyses based on flawed data or data collection; etc. There is no sense of prejudice or subjectivity implied in the assessment of bias under these conditions.

Case Control Studies - Studies which start with the identification of persons with a disease of interest and a control (comparison, referent) group without the disease. The relationship of an attribute to the disease is examined by comparing diseased and non-diseased persons with regard to the frequency or levels of the attribute in each group.

Causality - The relating of causes to the effects they produce. Causes are termed necessary when they must always precede an effect and sufficient when they initiate or produce an effect. Any of several factors may be associated with the potential disease causation or outcome, including predisposing factors, enabling factors, precipitating factors, reinforcing factors, and risk factors.

Control Groups - Groups that serve as a standard for comparison in experimental studies. They are similar in relevant characteristics to the experimental group but do not receive the experimental intervention.

Controlled Clinical Trials - Clinical trials involving one or more test treatments, at least one control treatment, specified outcome measures for evaluating the studied intervention, and a bias-free method for assigning patients to the test treatment. The treatment may be drugs, devices, or procedures studied for diagnostic, therapeutic, or prophylactic effectiveness. Control measures include placebos, active medicines, no-treatment, dosage forms and regimens, historical comparisons, etc. When randomization using mathematical techniques, such as the use of a random numbers table, is employed to assign patients to test or control treatments, the trials are characterized as Randomized Controlled Trials.

Cost-Benefit Analysis - A method of comparing the cost of a program with its expected benefits in dollars (or other currency). The benefit-to-cost ratio is a measure of total return expected per unit of money spent. This analysis generally excludes consideration of factors that are not measured ultimately in economic terms. Cost effectiveness compares alternative ways to achieve a specific set of results.

Cross-Over Studies - Studies comparing two or more treatments or interventions in which the subjects or patients, upon completion of the course of one treatment, are switched to another. In the case of two treatments, A and B, half the subjects are randomly allocated to receive these in the order A, B and half to receive them in the order B, A. A criticism of this design is that effects of the first treatment may carry over into the period when the second is given.

Cross-Sectional Studies - Studies in which the presence or absence of disease or other health-related variables are determined in each member of the study population or in a representative sample at one particular time. This contrasts with LONGITUDINAL STUDIES which are followed over a period of time.

Double-Blind Method - A method of studying a drug or procedure in which both the subjects and investigators are kept unaware of who is actually getting which specific treatment.

Empirical Research - The study, based on direct observation, use of statistical records, interviews, or experimental methods, of actual practices or the actual impact of practices or policies.

Evaluation Studies - Works consisting of studies determining the effectiveness or utility of processes, personnel, and equipment.

Genome-Wide Association Study - An analysis comparing the allele frequencies of all available (or a whole genome representative set of) polymorphic markers in unrelated patients with a specific symptom or disease condition, and those of healthy controls to identify markers associated with a specific disease or condition.

Intention to Treat Analysis - Strategy for the analysis of a Randomized Controlled Trial that compares patients in the groups to which they were originally randomly assigned, regardless of whether they completed or actually received the assigned treatment.

Logistic Models - Statistical models which describe the relationship between a qualitative dependent variable (that is, one which can take only certain discrete values, such as the presence or absence of a disease) and an independent variable. A common application is in epidemiology for estimating an individual's risk (probability of a disease) as a function of a given risk factor.
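As a purely illustrative sketch (the coefficients and risk factor below are hypothetical, not taken from the definition above), a logistic model with a single risk factor x estimates an individual's probability of disease as:

$$\Pr(\text{disease} \mid x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}$$

where the coefficients are estimated from the data; exp(β₁) can be interpreted as the odds ratio associated with a one-unit increase in the risk factor.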

Longitudinal Studies - Studies in which variables relating to an individual or group of individuals are assessed over a period of time.

Lost to Follow-Up - Study subjects in cohort studies whose outcomes are unknown e.g., because they could not or did not wish to attend follow-up visits.

Matched-Pair Analysis - A type of analysis in which subjects in a study group and a comparison group are made comparable with respect to extraneous factors by individually pairing study subjects with the comparison group subjects (e.g., age-matched controls).

Meta-Analysis - Works consisting of studies using a quantitative method of combining the results of independent studies (usually drawn from the published literature) and synthesizing summaries and conclusions which may be used to evaluate therapeutic effectiveness, plan new studies, etc. It is often an overview of clinical trials. It is usually called a meta-analysis by the author or sponsoring body and should be differentiated from reviews of literature.

Numbers Needed To Treat - Number of patients who need to be treated in order to prevent one additional bad outcome. It is the inverse of Absolute Risk Reduction.
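For example, with hypothetical figures (not drawn from any study cited here): if a bad outcome occurs in 20% of control patients and 15% of treated patients, then

$$\text{ARR} = 0.20 - 0.15 = 0.05, \qquad \text{NNT} = \frac{1}{\text{ARR}} = \frac{1}{0.05} = 20,$$

meaning roughly 20 patients would need to be treated to prevent one additional bad outcome.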

Odds Ratio - The ratio of two odds. The exposure-odds ratio for case control data is the ratio of the odds in favor of exposure among cases to the odds in favor of exposure among noncases. The disease-odds ratio for a cohort or cross section is the ratio of the odds in favor of disease among the exposed to the odds in favor of disease among the unexposed. The prevalence-odds ratio refers to an odds ratio derived cross-sectionally from studies of prevalent cases.
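As a sketch of the arithmetic with hypothetical 2×2 counts (a = exposed cases, b = exposed noncases, c = unexposed cases, d = unexposed noncases; none of these values come from the source):

$$\text{OR} = \frac{a/c}{b/d} = \frac{ad}{bc}, \qquad \text{e.g., } \frac{30 \times 90}{10 \times 70} \approx 3.9,$$

so in this made-up example the odds of exposure among cases would be roughly four times the odds of exposure among noncases.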

Patient Selection - Criteria and standards used for the determination of the appropriateness of the inclusion of patients with specific conditions in proposed treatment plans and the criteria used for the inclusion of subjects in various clinical trials and other research protocols.

Predictive Value of Tests - In screening and diagnostic tests, the probability that a person with a positive test is a true positive (i.e., has the disease), is referred to as the predictive value of a positive test; whereas, the predictive value of a negative test is the probability that the person with a negative test does not have the disease. Predictive value is related to the sensitivity and specificity of the test.
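As a standard textbook illustration of how predictive value depends on sensitivity, specificity, and prevalence (not specific to any study cited here), the predictive value of a positive test can be written as:

$$\text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence} + (1 - \text{specificity}) \times (1 - \text{prevalence})}$$

which is why even a highly sensitive and specific test can have a low positive predictive value when the condition is rare.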

Prospective Studies - Observation of a population for a sufficient number of persons over a sufficient number of years to generate incidence or mortality rates subsequent to the selection of the study group.

Qualitative Studies - Research that derives data from observation, interviews, or verbal interactions and focuses on the meanings and interpretations of the participants.

Quantitative Studies - Research that collects numerical data and analyzes it using statistical or other mathematical methods.

Random Allocation - A process involving chance used in therapeutic trials or other research endeavor for allocating experimental subjects, human or animal, between treatment and control groups, or among treatment groups. It may also apply to experiments on inanimate objects.
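A minimal sketch of simple random allocation in Python (the subject IDs, group labels, and seed below are hypothetical, chosen only for illustration):

```python
import random

def randomly_allocate(subject_ids, groups=("treatment", "control"), seed=42):
    """Assign each subject to a group completely at random (simple randomization)."""
    rng = random.Random(seed)  # fixed seed so the allocation can be reproduced
    return {subject: rng.choice(groups) for subject in subject_ids}

# Hypothetical subject IDs, purely for illustration
allocation = randomly_allocate([f"S{i:02d}" for i in range(1, 11)])
print(allocation)
```

In practice, blocked or stratified randomization is often used instead of this simple scheme so that group sizes and key characteristics stay balanced.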

Randomized Controlled Trial - Clinical trials that involve at least one test treatment and one control treatment, concurrent enrollment and follow-up of the test- and control-treated groups, and in which the treatments to be administered are selected by a random process, such as the use of a random-numbers table.

Reproducibility of Results - The statistical reproducibility of measurements (often in a clinical context), including the testing of instrumentation or techniques to obtain reproducible results. The concept includes reproducibility of physiological measurements, which may be used to develop rules to assess probability or prognosis, or response to a stimulus; reproducibility of occurrence of a condition; and reproducibility of experimental results.

Retrospective Studies - Studies used to test etiologic hypotheses in which inferences about an exposure to putative causal factors are derived from data relating to characteristics of persons under study or to events or experiences in their past. The essential feature is that some of the persons under study have the disease or outcome of interest and their characteristics are compared with those of unaffected persons.

Sample Size - The number of units (persons, animals, patients, specified circumstances, etc.) in a population to be studied. The sample size should be big enough to have a high likelihood of detecting a true difference between two groups.
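As a rough, commonly used approximation (illustrative only; the exact formula depends on the design and outcome), the sample size per group needed to detect a difference δ between two means with standard deviation σ, two-sided significance level α, and power 1−β is about:

$$n \approx \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^2\,\sigma^2}{\delta^2}$$

For α = 0.05 and 80% power, the z values are about 1.96 and 0.84, giving roughly 16 subjects per group when the expected difference equals one standard deviation.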

Sensitivity and Specificity - Binary classification measures used to assess test results. Sensitivity (also called recall) is the proportion of people who have the condition that the test correctly identifies as positive. Specificity is the proportion of people without the condition that the test correctly identifies as negative.
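In terms of the counts of true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP), these are conventionally calculated as:

$$\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP}$$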

Single-Blind Method - A method in which either the observer(s) or the subject(s) is kept ignorant of the group to which the subjects are assigned.

Time Factors - Elements of limited time intervals, contributing to particular results or situations.

Source:  NLM MeSH Database


Organizing Your Social Sciences Research Paper

  • Types of Research Designs
  • Purpose of Guide
  • Design Flaws to Avoid
  • Independent and Dependent Variables
  • Glossary of Research Terms
  • Reading Research Effectively
  • Narrowing a Topic Idea
  • Broadening a Topic Idea
  • Extending the Timeliness of a Topic Idea
  • Academic Writing Style
  • Applying Critical Thinking
  • Choosing a Title
  • Making an Outline
  • Paragraph Development
  • Research Process Video Series
  • Executive Summary
  • The C.A.R.S. Model
  • Background Information
  • The Research Problem/Question
  • Theoretical Framework
  • Citation Tracking
  • Content Alert Services
  • Evaluating Sources
  • Primary Sources
  • Secondary Sources
  • Tertiary Sources
  • Scholarly vs. Popular Publications
  • Qualitative Methods
  • Quantitative Methods
  • Insiderness
  • Using Non-Textual Elements
  • Limitations of the Study
  • Common Grammar Mistakes
  • Writing Concisely
  • Avoiding Plagiarism
  • Footnotes or Endnotes?
  • Further Readings
  • Generative AI and Writing
  • USC Libraries Tutorials and Other Guides
  • Bibliography

Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper. You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE : Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods. The Research Methods Online database contains links to more than 175,000 pages of SAGE book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle whereby initially an exploratory stance is adopted, where an understanding of a problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out [the "action" in action research], during which time pertinent observations are collected in various forms. The new interventional strategies are carried out, and this cyclic process repeats, continuing until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What these studies don't tell you?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research . Thousand Oaks, CA:  Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide . New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research . London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single or small number of cases offers little basis for establishing reliability or to generalize the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem, then your interpretation of the findings can apply only to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges . Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J., Gabrielle Durepos, and Elden Wiebe, editors. Encyclopedia of Case Study Research . Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research . Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Methods . Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.
  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • For one variable to cause another, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the actual effect.

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing . Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kuhn. “Causal-Comparative Design.” In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population from which the subject or representative member comes, and who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by the same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Date of entry and exit from the study is individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof [a small worked example follows this list].
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.
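As the small worked example promised above (all numbers are hypothetical), the incidence rate in an open cohort is the number of new cases divided by the total person-time at risk:

$$\text{incidence rate} = \frac{\text{new cases}}{\text{person-time at risk}} = \frac{12}{480\ \text{person-years}} = 0.025\ \text{cases per person-year}$$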

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or from among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009. Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect, whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis . (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta, and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; K. Swatzell and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities . London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.
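A minimal sketch of the classic two-group logic described above, using entirely made-up data (the effect size, noise level, and sample size are assumptions for illustration, not drawn from the source): subjects are randomly allocated, the experimental group receives the intervention, both groups are measured on the same dependent variable, and the treatment effect is estimated as the difference in group means.

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical pool of 20 subjects, identified only by index
subjects = list(range(20))
random.shuffle(subjects)
experimental, control = subjects[:10], subjects[10:]  # random allocation to the two groups

def measure(treated):
    """Made-up dependent-variable score: baseline of 50, a +5 treatment effect, plus random noise."""
    return 50 + (5 if treated else 0) + random.gauss(0, 2)

treated_scores = [measure(treated=True) for _ in experimental]
control_scores = [measure(treated=False) for _ in control]

effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"Estimated treatment effect: {effect:.2f}")
```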

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods . Nicholas Walliman, editor. (London, England: Sage, 2006), pp. 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences . 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation or undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

Exploratory research is intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • Well grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits an ability to make definitive conclusions about the findings. They provide insight but not definitive conclusions.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Elden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while inhabiting their natural environment as opposed to using survey instruments or other forms of impersonal methods of data gathering. Information acquired from observational research takes the form of “field notes” that involve documenting what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field  [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated [a minimal sketch of how study results are pooled appears after the list below]. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations in defining the criteria used for content analysis can lead to difficult to interpret and/or meaningless findings.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
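As the pooling sketch mentioned above (a generic fixed-effect formula, not tied to any particular study cited below), each study's effect estimate is weighted by the inverse of its variance, and the weighted average gives the pooled estimate:

$$w_i = \frac{1}{\operatorname{Var}(\hat{\theta}_i)}, \qquad \hat{\theta}_{\text{pooled}} = \frac{\sum_i w_i\,\hat{\theta}_i}{\sum_i w_i}, \qquad \operatorname{Var}(\hat{\theta}_{\text{pooled}}) = \frac{1}{\sum_i w_i}$$

Random-effects models extend this by adding a between-study variance component when heterogeneity among studies is substantial.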

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis . 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior , Volume 9. (Greenwich, CT: JAI Press, 1987), pp. 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis . Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis . Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation . Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences . Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research . Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice . New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakkori, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhang, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors over and over again is a time consuming task, and such observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods . Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research . The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies . New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than as a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, what do knowledge and understanding depend upon, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists. (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences. London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. "The Philosophical Problem of Truth in Librarianship." The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide. Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research. 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

  • The researcher has virtually limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis (see the sketch after this list).
  • The sampling method is not representative of the entire population. Representativeness can only be approached when the researcher uses a sample large enough to capture a substantial portion of the entire population. In this case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.
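
Because sampling and analysis alternate in a sequential design, a simple stopping rule can decide when enough data have been collected. The sketch below is a minimal illustration only and is not drawn from the sources cited below; the simulated data, batch size, and precision threshold are all hypothetical. It draws small batches, re-estimates the mean after each batch, and stops once the standard error falls below a target, mirroring the serial sample-then-analyze cycle noted in the list above.

```python
# Minimal, illustrative sequential sampling loop; all values are hypothetical.
# Each batch is analysed as soon as it is collected, and the running results
# decide whether another batch is needed (a simple precision-based stopping rule).
import random
import statistics

random.seed(42)

def draw_batch(size=10):
    """Stand-in for collecting one batch of observations (simulated here)."""
    return [random.gauss(50, 12) for _ in range(size)]

observations = []
target_se = 1.0      # stop once the standard error of the mean drops below this
max_batches = 50     # safety ceiling so the loop always terminates

for batch in range(1, max_batches + 1):
    observations.extend(draw_batch())
    mean = statistics.fmean(observations)
    se = statistics.stdev(observations) / len(observations) ** 0.5
    print(f"batch {batch:2d}: n={len(observations):3d}  mean={mean:5.1f}  se={se:4.2f}")
    if se < target_se:
        print("Stopping: desired precision reached; no further samples needed.")
        break
```

In a real sequential study the "batch" would be a new round of participants or sites, and the stopping rule would come from the study protocol rather than a fixed precision target.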

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Creswell, John W. et al. "Advanced Mixed-Methods Research Designs." In Handbook of Mixed Methods in Social and Behavioral Research. Abbas Tashakkori and Charles Teddlie, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods. Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Ivankova, Nataliya V. "Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice." Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses, recognize and avoid hidden problems in prior studies, and explain inconsistencies and conflicts in the data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provides reliable estimates of the effects of interventions and evaluations related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] ensures the broadest possible way to analyze and interpret research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated into the general population with more validity than most other types of studies.
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because such studies may not have undergone a rigorous peer-review process. Examples may include conference presentations or proceedings, publications from government agencies, white papers, working papers, internal documents from organizations, and doctoral dissertations and Master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods. David A. Buchanan and Alan Bryman, editors. (Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians. Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, James Thomas, editors. Introduction to Systematic Reviews. 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. "Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare." Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. "Five Steps to Conducting a Systematic Review." Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. "Systematic Reviews: Rationale for Systematic Reviews." BMJ 309:597 (September 1994); O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research." Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. "Publication Bias: The Achilles' Heel of Systematic Reviews?" British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews. New York: Continuum, 2003.

Basics of Systematic Reviews

Types of Reviews

Literature Review

Collects key sources on a topic and discusses those sources in conversation with each other.

  • Standard for research articles in most disciplines
  • Tells the reader what is known, or not known, about a particular issue, topic, or subject
  • Demonstrates knowledge and understanding of a topic
  • Establishes context or background for a case or argument
  • Helps develop the author’s ideas and perspective

Rapid Review

Thorough methodology but with process limitations in place to expedite the completion of a review.

  • For questions that require timely answers
  • 3-4 months vs. 12-24 months
  • Limitations: scope, comprehensiveness, bias, and quality of appraisal
  • Discusses potential effects that the limited methods may have had on results

Scoping Review

Determines the scope or coverage of a body of literature on a given topic, giving a clear indication of the volume of literature and studies available as well as an overview of its focus.

  • Identify types of available evidence in a given field
  • Clarify key concepts/definitions in the literature
  • Examine how research is conducted on a certain topic or field
  • Identify key factors related to a concept
  • Key difference from a systematic review is the breadth of focus
  • Identify and analyze knowledge gaps

Systematic Review

Attempts to identify, appraise, and summarize all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. A minimal, illustrative screening sketch follows the checklist below.

  • Clearly defined question with inclusion/exclusion criteria
  • Rigorous and systematic search of the literature
  • Thorough screening of results
  • Data extraction and management
  • Analysis and interpretation of results
  • Risk of bias assessment of included studies
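
The checklist above is largely a filtering and bookkeeping exercise, so even a small script can make the screening step explicit and auditable. The sketch below is purely illustrative and is not tied to any particular review: the record fields (year, design, peer_reviewed) and the criteria are hypothetical stand-ins for whatever eligibility criteria a protocol pre-specifies.

```python
# Illustrative screening of records against pre-specified inclusion criteria.
# Field names and thresholds are hypothetical; a real review defines them in
# its protocol before screening begins.
records = [
    {"id": 1, "year": 2021, "design": "rct", "peer_reviewed": True},
    {"id": 2, "year": 2009, "design": "rct", "peer_reviewed": True},
    {"id": 3, "year": 2020, "design": "case report", "peer_reviewed": True},
    {"id": 4, "year": 2022, "design": "cohort", "peer_reviewed": False},
]

criteria = {
    "published 2013 or later": lambda r: r["year"] >= 2013,
    "eligible study design": lambda r: r["design"] in {"rct", "cohort"},
    "peer reviewed": lambda r: r["peer_reviewed"],
}

included, excluded = [], []
for record in records:
    reasons = [name for name, test in criteria.items() if not test(record)]
    (excluded if reasons else included).append((record["id"], reasons))

# These counts are the kind of numbers a PRISMA-style flow diagram reports.
print(f"screened {len(records)}, included {len(included)}, excluded {len(excluded)}")
for record_id, reasons in excluded:
    print(f"  record {record_id} excluded: {'; '.join(reasons)}")
```

Recording the reason each record was excluded supports the screening and risk-of-bias steps later in the process.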

Meta-Analysis

Used to systematically synthesize or merge the findings of single, independent studies, using statistical methods to calculate an overall or ‘absolute’ effect.

  • Combines results from multiple empirical studies
  • Requires a systematic review first
  • Uses well-recognized, systematic methods to account for differences in sample size and for variability (heterogeneity) in study approach and findings (treatment effects); a minimal pooling sketch follows this list
  • Tests how sensitive the results are to the underlying systematic review protocol
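
One common statistical method for the pooling step is fixed-effect inverse-variance weighting, in which each study's effect estimate is weighted by the inverse of its variance so that larger, more precise studies count for more. The sketch below is a minimal illustration with invented numbers; it is not drawn from any study cited here, and real meta-analyses would also examine heterogeneity and consider random-effects models.

```python
# Minimal fixed-effect, inverse-variance pooling; effect sizes and standard
# errors are invented purely for illustration.
import math

studies = [
    # (label, effect estimate, standard error)
    ("Study A", 0.42, 0.15),
    ("Study B", 0.30, 0.10),
    ("Study C", 0.55, 0.20),
]

weights = [1 / se ** 2 for _, _, se in studies]          # precision = 1 / variance
pooled = sum(w * effect for (_, effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")
```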

For additional types of reviews please see these articles:

  • Sutton, A., Clowes, M., Preston, L. and Booth, A. (2019), Meeting the review family: exploring review types and associated information retrieval requirements. Health Information & Libraries Journal, 36: 202-222. https://doi.org/10.1111/hir.12276
  • Grant, M.J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91-108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

A Structured Literature Review on the Research and Design of Rehabilitation Environments

Affiliations.

  • 1 Hammes Healthcare, Arlington, VA, USA.
  • 2 Department of Human Centered Design, Cornell University, Ithaca, NY, USA.
  • PMID: 38742748
  • DOI: 10.1177/19375867241248604

Aim: This literature review is conducted to identify knowledge gaps and shape a framework for the development of guidelines and future research on programming and design of rehabilitation environments.

Background: Patients suffering from trauma, stroke, neurological or cardiopulmonary conditions, or recovering from surgery or cancer treatment require rehabilitation services. A comprehensive rehabilitation program can support a continuum of care for inpatient and outpatient groups. However, within most facilities, rehabilitation environments are found to be outdated and undersized compared to other programs, or they lack the correct adjacencies within the facility. Unfortunately, this deficiency is echoed by limited guidelines on programming, planning, and design of these environments. General guidelines derived from healthcare environments research are not adaptable to rehabilitation environments, because the paradigm used in most healthcare environment research does not address the specific needs of rehabilitation patients in regaining confidence or relearning daily life skills.

Method: We conducted a structured literature review, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) as a basis for reporting the available body of work on evidence-based research in rehabilitation environments.

Result and conclusion: Through analysis of the limited literature, specific mediators such as patient confidence and motivation were identified. An environment that creates a balance between privacy and social interaction can promote these mediators. Creating enriched environments through elements that engage the senses and encourage more social and physical interaction is essential for recovery. Finally, accessibility and wayfinding are of great importance in these environments due to potential limited mobility or cognitive impairments of patients.

Keywords: evidence-based design; healthcare architecture; healthcare design; physical rehabilitation; physical therapy.

  • Open access
  • Published: 16 May 2024

Integrating qualitative research within a clinical trials unit: developing strategies and understanding their implementation in contexts

  • Jeremy Segrott   ORCID: orcid.org/0000-0001-6215-0870 1 ,
  • Sue Channon 2 ,
  • Amy Lloyd 4 ,
  • Eleni Glarou 2 , 3 ,
  • Josie Henley 5 ,
  • Jacqueline Hughes 2 ,
  • Nina Jacob 2 ,
  • Sarah Milosevic 2 ,
  • Yvonne Moriarty 2 ,
  • Bethan Pell 6 ,
  • Mike Robling 2 ,
  • Heather Strange 2 ,
  • Julia Townson 2 ,
  • Qualitative Research Group &
  • Lucy Brookes-Howell 2  

Trials, volume 25, Article number: 323 (2024)

Background/aims

The value of using qualitative methods within clinical trials is widely recognised. How qualitative research is integrated within trials units to achieve this is less clear. This paper describes the process through which qualitative research has been integrated within Cardiff University’s Centre for Trials Research (CTR) in Wales, UK. We highlight facilitators of, and challenges to, integration.

We held group discussions on the work of the Qualitative Research Group (QRG) within CTR. The content of these discussions, materials for a presentation in CTR, and documents relating to the development of the QRG were interpreted at a workshop attended by group members. Normalisation Process Theory (NPT) was used to structure analysis. A writing group prepared a document for input from members of CTR, forming the basis of this paper.

Actions to integrate qualitative research comprised: its inclusion in Centre strategies; formation of a QRG with dedicated funding/roles; embedding of qualitative research within operating systems; capacity building/training; monitoring opportunities to include qualitative methods in studies; maximising the quality of qualitative research and developing methodological innovation. Facilitators of these actions included: the influence of the broader methodological landscape within trial/study design and its promotion of the value of qualitative research; and close physical proximity of CTR qualitative staff/students allowing sharing of methodological approaches. Introduction of innovative qualitative methods generated interest among other staff groups. Challenges included: pressure to under-resource qualitative components of research, preference for a statistical stance historically in some research areas and funding structures, and difficulties faced by qualitative researchers carving out individual academic profiles when working across trials/studies.

Conclusions

Given that clinical trials units (CTUs) are pivotal to the design and conduct of RCTs and related study types across multiple disciplines, integrating qualitative research into trials units is crucial if its contribution is to be fully realised. We have made explicit one trials unit’s experience of embedding qualitative research and present this to open dialogue on ways to operationalise and optimise qualitative research in trials. NPT provides a valuable framework with which to theorise these processes, including the importance of sense-making and legitimisation when introducing new practices within organisations.

The value of using qualitative methods within randomised controlled trials (RCTs) is widely recognised [ 1 , 2 , 3 ]. Qualitative research generates important evidence on factors affecting trial recruitment/retention [ 4 ] and implementation, aiding interpretation of quantitative data [ 5 ]. Though RCTs have traditionally been viewed as sitting within a positivist paradigm, recent methodological innovations have developed new trial designs that draw explicitly on both quantitative and qualitative methods. For instance, in the field of complex public health interventions, realist RCTs seek to understand the mechanisms through which interventions generate hypothesised impacts, and how interactions across different implementation contexts form part of these mechanisms. Proponents of realist RCTs—which integrate experimental and realist paradigms—highlight the importance of using quantitative and qualitative methods to fully realise these aims and to generate an understanding of intervention mechanisms and how context shapes them [ 6 ].

A need for guidance on how to conduct good quality qualitative research is being addressed, particularly in relation to feasibility studies for RCTs [ 7 ] and process evaluations embedded within trials of complex interventions [ 5 ]. There is also guidance on the conduct of qualitative research within trials at different points in the research cycle, including development, conduct and reporting [ 8 , 9 ].

A high proportion of trials are based within or involve clinical trials units (CTUs). In the UK the UKCRC Registered CTU Network describes them as:

… specialist units which have been set up with a specific remit to design, conduct, analyse and publish clinical trials and other well-designed studies. They have the capability to provide specialist expert statistical, epidemiological, and other methodological advice and coordination to undertake successful clinical trials. In addition, most CTUs will have expertise in the coordination of trials involving investigational medicinal products which must be conducted in compliance with the UK Regulations governing the conduct of clinical trials resulting from the EU Directive for Clinical Trials.

Thus, CTUs provide the specialist methodological expertise needed for the conduct of trials, and in the case of trials of investigational medicinal products, their involvement may be mandated to ensure compliance with relevant regulations. As the definition above suggests, CTUs also conduct and support other types of study apart from RCTs, providing a range of methodological and subject-based expertise.

However, despite their central role in the conduct and design of trials, (and other evaluation designs) little has been written about how CTUs have integrated qualitative work within their organisation at a time when such methods are, as stated above, now recognised as an important aspect of RCTs and evaluation studies more generally. This is a significant gap, since integration at the organisational level arguably shapes how qualitative research is integrated within individual studies, and thus it is valuable to understand how CTUs have approached the task. There are different ways of involving qualitative work in trials units, such as partnering with other departments (e.g. social science) or employing qualitative researchers directly. Qualitative research can be imagined and configured in different ways—as a method that generates data to inform future trial and intervention design, as an embedded component within an RCT or other evaluation type, or as a parallel strand of research focusing on lived experiences of illness, for instance. Understanding how trials units have integrated qualitative research is valuable, as it can shed light on which strategies show promise, and in which contexts, and how qualitative research is positioned within the field of trials research, foregrounding the value of qualitative research. However, although much has been written about its use within trials, few accounts exist of how trials units have integrated qualitative research within their systems and structures.

This paper discusses the process of embedding qualitative research within the work of one CTU—Cardiff University’s Centre for Trials Research (CTR). It highlights facilitators of this process and identifies challenges to integration. We use the Normalisation Process Theory (NPT) as a framework to structure our experience and approach. The key gap addressed by this paper is the implementation of strategies to integrate qualitative research (a relatively newly adopted set of practices and processes) within CTU systems and structures. We acknowledge from the outset that there are multiple ways of approaching this task. What follows therefore is not a set of recommendations for a preferred or best way to integrate qualitative research, as this will comprise diverse actions according to specific contexts. Rather, we examine the processes through which integration occurred in our own setting and highlight the potential value of these insights for others engaged in the work of promoting qualitative research within trials units.

Background to the integration of qualitative research within CTR

The CTR was formed in 2015 [ 10 ]. It brought together three existing trials units at Cardiff University: the South East Wales Trials Unit, the Wales Cancer Trials Unit, and the Haematology Clinical Trials Unit. From its inception, the CTR had a stated aim of developing a programme of qualitative research and integrating it within trials and other studies. In the sections below, we map these approaches onto the framework offered by Normalisation Process Theory to understand the processes through which they helped achieve embedding and integration of qualitative research.

CTR’s aims (including those relating to the development of qualitative research) were included within its strategy documents and communicated to others through infrastructure funding applications, annual reports and its website. A Qualitative Research Group (QRG), which had previously existed within the South East Wales Trials Unit, with dedicated funding for methodological specialists and group lead academics, was a key mechanism through which the development of a qualitative portfolio was put into action. Integration of qualitative research within Centre systems and processes occurred through the inclusion of qualitative research in study adoption processes and representation on committees. The CTR’s study portfolio provided a basis to track qualitative methods in new and existing studies, identify opportunities to embed qualitative methods within recently adopted studies (at the funding application stage) and to manage staff resources. Capacity building and training were an important focus of the QRG’s work, including training courses, mentoring, creation of an academic network open to university staff and practitioners working in the field of healthcare, presentations at CTR staff meetings and securing of PhD studentships. Standard operating procedures and methodological guidance on the design and conduct of qualitative research (e.g. templates for developing analysis plans) aimed to create a shared understanding of how to undertake high-quality research, and a means to monitor the implementation of rigorous approaches. As the QRG expanded its expertise it sought to develop innovative approaches, including the use of visual [ 11 ] and ethnographic methods [ 12 ].

Understanding implementation—Normalisation Process Theory (NPT)

Normalisation Process Theory (NPT) provides a model with which to understand the implementation of new sets of practices and their normalisation within organisational settings. The term ‘normalisation’ refers to how new practices become routinised (part of the everyday work of an organisation) through embedding and integration [ 13 , 14 ]. NPT defines implementation as ‘the social organisation of work’ and is concerned with the social processes that take place as new practices are introduced. Embedding involves ‘making practices routine elements of everyday life’ within an organisation. Integration takes the form of ‘sustaining embedded practices in social contexts’, and how these processes lead to the practices becoming (or not becoming) ‘normal and routine’ [ 14 ]. NPT is concerned with the factors which promote or ‘inhibit’ attempts to embed and integrate the operationalisation of new practices [ 13 , 14 , 15 ].

Embedding new practices is therefore achieved through implementation—which takes the form of interactions in specific contexts. Implementation is operationalised through four ‘generative mechanisms’— coherence , cognitive participation , collective action and reflexive monitoring [ 14 ]. Each mechanism is characterised by components comprising immediate and organisational work, with actions of individuals and organisations (or groups of individuals) interdependent. The mechanisms operate partly through forms of investment (i.e. meaning, commitment, effort, and comprehension) [ 14 ].

Coherence refers to how individuals/groups make sense of, and give meaning to, new practices. Sense-making concerns the coherence of a practice—whether it ‘holds together’, and its differentiation from existing activities [ 15 ]. Communal and individual specification involve understanding new practices and their potential benefits for oneself or an organisation. Individuals consider what new practices mean for them in terms of tasks and responsibilities ( internalisation ) [ 14 ].

NPT frames the second mechanism, cognitive participation , as the building of a ‘community of practice’. For a new practice to be initiated, individuals and groups within an organisation must commit to it [ 14 , 15 ]. Cognitive participation occurs through enrolment —how people relate to the new practice; legitimation —the belief that it is right for them to be involved; and activation —defining which actions are necessary to sustain the practice and their involvement [ 14 ]. Making the new practices work may require changes to roles (new responsibilities, altered procedures) and reconfiguring how colleagues work together (changed relationships).

Third, Collective Action refers to ‘the operational work that people do to enact a set of practices’ [ 14 ]. Individuals engage with the new practices ( interactional workability ) reshaping how members of an organisation interact with each other, through creation of new roles and expectations ( relational interaction ) [ 15 ]. Skill set workability concerns how the work of implementing a new set of practices is distributed and the necessary roles and skillsets defined [ 14 ]. Contextual integration draws attention to the incorporation of a practice within social contexts, and the potential for aspects of these contexts, such as systems and procedures, to be modified as a result [ 15 ].

Reflexive monitoring is the final implementation mechanism. Collective and individual appraisal evaluate the value of a set of practices, which depends on the collection of information—formally and informally ( systematisation ). Appraisal may lead to reconfiguration in which procedures of the practice are redefined or reshaped [ 14 , 15 ].

We sought to map the following: (1) the strategies used to embed qualitative research within the Centre, (2) key facilitators, and (3) barriers to their implementation. Through focused group discussions during the monthly meetings of the CTR QRG and in discussion with the CTR senior management team throughout 2019–2020 we identified nine types of documents (22 individual documents in total) produced within the CTR which had relevant information about the integration of qualitative research within its work (Table  1 ). The QRG had an ‘open door’ policy to membership and welcomed all staff/students with an interest in qualitative research. It included researchers who were employed specifically to undertake qualitative research and other staff with a range of study roles, including trial managers, statisticians, and data managers. There was also diversity in terms of career stage, including PhD students, mid-career researchers and members of the Centre’s Executive team. Membership was therefore largely self-selected, and comprised of individuals with a role related to, or an interest in, embedding qualitative research within trials. However, the group brought together diverse methodological perspectives and was not solely comprised of methodological ‘champions’ whose job it was to promote the development of qualitative research within the centre. Thus whilst the group (and by extension, the authors of this paper) had a shared appreciation of the value of qualitative research within a trials centre, they also brought varied methodological perspectives and ways of engaging with it.

All members of the QRG ( n  = 26) were invited to take part in a face-to-face, day-long workshop in February 2019 on ‘How to optimise and operationalise qualitative research in trials: reflections on CTR structure’. The workshop was attended by 12 members of staff and PhD students, including members of the QRG and the CTR’s senior management team. Recruitment to the workshop was therefore inclusive, and to some extent opportunistic, but all members of the QRG were able to contribute to discussions during regular monthly group meetings and the drafting of the current paper.

The aim of the workshop was to bring together information from the documents in Table  1 to generate discussion around the key strategies (and their component activities) that had been adopted to integrate qualitative research into CTR, as well as barriers to, and facilitators of, their implementation. The agenda for the workshop involved four key areas: development and history of the CTR model; mapping the current model within CTR; discussing the structure of other CTUs; and exploring the advantages and disadvantages of the CTR model.

During the workshop, we discussed the use of NPT to conceptualise how qualitative research had been embedded within CTR’s systems and practices. The group produced spider diagrams to map strategies and actions on to the four key domains (or ‘generative mechanisms’ of NPT) summarised above, to aid the understanding of how they had functioned, and the utility of NPT as a framework. This is summarised in Table  2 .

Detailed notes were made during the workshop. A core writing group then used these notes and the documents in Table  1 to develop a draft of the current paper. This was circulated to all members of the CTR QRG ( n  = 26) and stored within a central repository accessible to them to allow involvement and incorporate the views of those who were not able to attend the workshop. This draft was again presented for comments in the monthly CTR QRG meeting in February 2021 attended by n  = 10. The Standards for QUality Improvement Reporting Excellence 2.0 (SQUIRE) guidelines were used to inform the structure and content of the paper (see supplementary material) [ 16 ].

In the following sections, we describe the strategies CTR adopted to integrate qualitative research. These are mapped against NPT’s four generative mechanisms to explore the processes through which the strategies promoted integration, and facilitators of and barriers to their implementation. A summary of the strategies and their functioning in terms of the generative mechanisms is provided in Table  2 .

Coherence—making sense of qualitative research

In CTR, many of the actions taken to build a portfolio of qualitative research were aimed at enabling colleagues, and external actors, to make sense of this set of methodologies. Centre-level strategies and grant applications for infrastructure funding highlighted the value of qualitative research, the added benefits it would bring, and positioned it as a legitimate set of practices alongside existing methods. For example, a 2014 application for renewal of trials unit infrastructure funding stated:

We are currently in the process of undertaking […] restructuring for our qualitative research team and are planning similar for trial management next year. The aim of this restructuring is to establish greater hierarchical management and opportunities for staff development and also provide a structure that can accommodate continuing growth.

Within the CTR, various forms of communication on the development of qualitative research were designed to enable staff and students to make sense of it, and to think through its potential value for them, and ways in which they might engage with it. These included presentations at staff meetings, informal meetings between project teams and the qualitative group lead, and the visibility of qualitative research on the public-facing Centre website and Centre committees and systems. For instance, qualitative methods were included (and framed as a distinct set of practices) within study adoption forms and committee agendas. Information for colleagues described how qualitative methods could be incorporated within funding applications for RCTs and other evaluation studies to generate new insights into questions research teams were already keen to answer, such as influences on intervention implementation fidelity. Where externally based chief investigators approached the Centre to be involved in new grant applications, the existence of the qualitative team and group lead enabled the inclusion of qualitative research to be actively promoted at an early stage, and such opportunities were highlighted in the Centre’s brochure for new collaborators. Monthly qualitative research network meetings—advertised across CTR and to external research collaborators, were also designed to create a shared understanding of qualitative research methods and their utility within trials and other study types (e.g. intervention development, feasibility studies, and observational studies). Training events (discussed in more detail below) also aided sense-making.

Several factors facilitated the promotion of qualitative research as a distinctive and valuable entity. Among these was the influence of the broader methodological landscape within trial design which was promoting the value of qualitative research, such as guidance on the evaluation of complex interventions by the Medical Research Council [ 17 ], and the growing emphasis placed on process evaluations within trials (with qualitative methods important in understanding participant experience and influences on implementation) [ 5 ]. The attention given to lived experience (both through process evaluations and the move to embed public involvement in trials) helped to frame qualitative research within the Centre as something that was appropriate, legitimate, and of value. Recognition by research funders of the value of qualitative research within studies was also helpful in normalising and legitimising its adoption within grant applications.

The inclusion of qualitative methods within influential methodological guidance helped CTR researchers to develop a ‘shared language’ around these methods, and a way that a common understanding of the role of qualitative research could be generated. One barrier to such sense-making work was the varying extent to which staff and teams had existing knowledge or experience of qualitative research. This varied across methodological and subject groups within the Centre and reflected the history of the individual trials units which had merged to form the Centre.

Cognitive participation—legitimising qualitative research

Senior CTR leaders promoted the value and legitimacy of qualitative research. Its inclusion in centre strategies, infrastructure funding applications, and in public-facing materials (e.g. website, investigator brochures), signalled that it was appropriate for individuals to conduct qualitative research within their roles, or to support others in doing so. Legitimisation also took place through informal channels, such as senior leadership support for qualitative research methods in staff meetings and participation in QRG seminars. Continued development of the QRG (with dedicated infrastructure funding) provided a visible identity and equivalence with other methodological groups (e.g. trial managers, statisticians).

Staff were asked to engage with qualitative research in two main ways. First, there was an expansion in the number of staff for whom qualitative research formed part of their formal role and responsibilities. One of the three trials units that merged to form CTR brought with it a qualitative team comprising methodological specialists and a group lead. CTR continued the expansion of this group with the creation of new roles and an enlarged nucleus of researchers for whom qualitative research was the sole focus of their work. In part, this was linked to the successful award of projects that included a large qualitative component, and that were coordinated by CTR (see Table  3 which describes the PUMA study).

Members of the QRG were encouraged to develop their own research ideas and to gain experience as principal investigators, and group seminars were used to explore new ideas and provide peer support. This was communicated through line management, appraisal, and informal peer interaction. Boundaries were not strictly demarcated (i.e. staff located outside the qualitative team were already using qualitative methods), but the new team became a central focus for developing a growing programme of work.

Second, individuals and studies were called upon to engage in new ways with qualitative research, and with the qualitative team. A key goal for the Centre was that groups developing new research ideas should give more consideration in general to the potential value and inclusion of qualitative research within their funding applications. Specifically, they were asked to do this by thinking about qualitative research at an early point in their application’s development (rather than ‘bolting it on’ after other elements had been designed) and to draw upon the expertise and input of the qualitative team. An example was the inclusion of questions on qualitative methods within the Centre’s study adoption form and representation from the qualitative team at the committee which reviewed new adoption requests. Where adoption requests indicated the inclusion of qualitative methods, colleagues were encouraged to liaise with the qualitative team, facilitating the integration of its expertise from an early stage. Qualitative seminars offered an informal and supportive space in which researchers could share initial ideas and refine their methodological approach. The benefits of this included the provision of sufficient time for methodological specialists to be involved in the design of the proposed qualitative component and ensuring adequate costings had been drawn up. At study adoption group meetings, scrutiny of new proposals included consideration of whether new research proposals might be strengthened through the use of qualitative methods where these had not initially been included. Meetings of the QRG—which reviewed the Centre’s portfolio of new studies and gathered intelligence on new ideas—also helped to identify, early on, opportunities to integrate qualitative methods. Communication across teams was useful in identifying new research ideas and embedding qualitative researchers within emerging study development groups.

Actions to promote greater use of qualitative methods in funding applications fed through into a growing number of studies with a qualitative component. This helped to increase the visibility and legitimacy of qualitative methods within the Centre. For example, the PUMA study [ 12 ], which brought together a large multidisciplinary team to develop and evaluate a Paediatric early warning system, drew heavily on qualitative methods, with the qualitative research located within the QRG. The project introduced an extensive network of collaborators and clinical colleagues to qualitative methods and how they could be used during intervention development and the generation of case studies. Further information about the PUMA study is provided in Table  3 .

Increasing the legitimacy of qualitative work across an extensive network of staff, students and collaborators was a complex process. Set within the continuing dominance of quantitative methods with clinical trials, there were variations in the extent to which clinicians and other collaborators embraced the value of qualitative methods. Research funding schemes, which often continued to emphasise the quantitative element of randomised controlled trials, inevitably fed through into the focus of new research proposals. Staff and external collaborators were sometimes uncertain about the added value that qualitative methods would bring to their trials. Across the CTR there were variations in the speed at which qualitative research methods gained legitimacy, partly based on disciplinary traditions and their influences. For instance, population health trials, often located within non-health settings such as schools or community settings, frequently involved collaboration with social scientists who brought with them experience in qualitative methods. Methodological guidance in this field, such as MRC guidance on process evaluations, highlighted the value of qualitative methods and alternatives to the positivist paradigm, such as the value of realist RCTs. In other, more clinical areas, positivist paradigms had greater dominance. Established practices and methodological traditions across different funders also influenced the ease of obtaining funding to include qualitative research within studies. For drugs trials (CTIMPs), the influence of regulatory frameworks on study design, data collection and the allocation of staff resources may have played a role. Over time, teams gained repeated experience of embedding qualitative research (and researchers) within their work and took this learning with them to subsequent studies. For example, the senior clinician quoted within the PUMA case study (Table  3 below) described how they had gained an appreciation of the rigour of qualitative research and an understanding of its language. Through these repeated interactions, embedding of qualitative research within studies started to become the norm rather than the exception.

Collective action—operationalising qualitative research

Collective action concerns the operationalisation of new practices within organisations—the allocation and management of the work, how individuals interact with each other, and the work itself. In CTR the formation of a Qualitative Research Group helped to allocate and organise the work of building a portfolio of studies. Researchers across the Centre were called upon to interact with qualitative research in new ways. Presentations at staff meetings and the inclusion of qualitative research methods in portfolio study adoption forms were examples of this (interactional workability). It was operationalised by encouraging study teams to liaise with the qualitative research lead. Development of standard operating procedures, templates for costing qualitative research and methodological guidance (e.g. on analysis plans) also helped encourage researchers to interact with these methods in new ways. For some qualitative researchers who had been trained in the social sciences, working within a trials unit meant that they needed to interact in new and sometimes unfamiliar ways with standard operating procedures, risk assessments, and other trial-based systems. Thus, training needs and capacity-building efforts were multidirectional.

Whereas there had been a tendency for qualitative research to be ‘bolted on’ to proposals for RCTs, the systems described above were designed to embed thinking about the value and design of the qualitative component from the outset. They were also intended to integrate members of the qualitative team with trial teams from an early stage to promote effective integration of qualitative methods within larger trials and build relationships over time.

Standard Operating Procedures (SOPs), formal and informal training, and interaction between the qualitative team and other researchers increased the relational workability of qualitative methods within the Centre—the confidence individuals felt in including these methods within their studies, and their accountability for doing so. For instance, study adoption forms prompted researchers to interact routinely with the qualitative team at an early stage, whilst guidance on costing grants provided clear expectations about the resources needed to deliver a proposed set of qualitative data collection.

Formation of the Qualitative Research Group—comprised of methodological specialists, created new roles and skillsets ( skill set workability ). Research teams were encouraged to draw on these when writing funding applications for projects that included a qualitative component. Capacity-building initiatives were used to increase the number of researchers with the skills needed to undertake qualitative research, and for these individuals to develop their expertise over time. This was achieved through formal training courses, academic seminars, mentoring from experienced colleagues, and informal knowledge exchange. Links with external collaborators and centres engaged in building qualitative research supported these efforts. Within the Centre, the co-location of qualitative researchers with other methodological and trial teams facilitated knowledge exchange and building of collaborative relationships, whilst grouping of the qualitative team within a dedicated office space supported a collective identity and opportunities for informal peer support.

Some aspects of the context in which qualitative research was being developed created challenges to operationalisation. Dependence on project grants to fund qualitative methodologists meant that there was a continuing need to write further grant applications whilst limiting the amount of time available to do so. Similarly, researchers within the team whose role was funded largely by specific research projects could sometimes find it hard to create sufficient time to develop their personal methodological interests. However, the cultivation of a methodologically varied portfolio of work enabled members of the team to build significant expertise in different approaches (e.g. ethnography, discourse analysis) that connected individual studies.

Reflexive monitoring—evaluating the impact of qualitative research

Inclusion of questions/fields relating to qualitative research within the Centre’s study portfolio database was a key way in which information was collected ( systematisation ). It captured numbers of funding applications and funded studies, research design, and income generation. Alongside this database, a qualitative resource planner spreadsheet was used to link individual members of the qualitative team with projects and facilitate resource planning, further reinforcing the core responsibilities and roles of qualitative researchers within CTR. As with all staff in the Centre, members of the qualitative team were placed on ongoing rather than fixed-term contracts, reflecting their core role within CTR. Planning and strategy meetings used the database and resource planner to assess the integration of qualitative research within Centre research, identify opportunities for increasing involvement, and manage staff recruitment and sustainability of researcher posts. Academic meetings and day-to-day interaction fulfilled informal appraisal of the development of the group, and its position within the Centre. Individual appraisal was also important, with members of the qualitative team given opportunities to shape their role, reflect on progress, identify training needs, and further develop their skillset, particularly through line management systems.

These forms of systematisation and appraisal were used to reconfigure the development of qualitative research and its integration within the Centre. For example, group strategies considered how to achieve long-term integration of qualitative research from its initial embedding through further promoting the belief that it formed a core part of the Centre’s business. The visibility and legitimacy of qualitative research were promoted through initiatives such as greater prominence on the Centre’s website. Ongoing review of the qualitative portfolio and discussion at academic meetings enabled the identification of areas where increased capacity would be helpful, both for qualitative staff, and more broadly within the Centre. This prompted the qualitative group to develop an introductory course to qualitative methods open to all Centre staff and PhD students, aimed at increasing understanding and awareness. As the qualitative team built its expertise and experience it also sought to develop new and innovative approaches to conducting qualitative research. This included the use of visual and diary-based methods [ 11 ] and the adoption of ethnography to evaluate system-level clinical interventions [ 12 ]. Restrictions on conventional face-to-face qualitative data collection due to the COVID-19 pandemic prompted rapid adoption of virtual/online methods for interviews, observation, and use of new internet platforms such as Padlet—a form of digital note board.

In this paper, we have described the work undertaken by one CTU to integrate qualitative research within its studies and organisational culture. The parallel efforts of many trials units to achieve these goals arguably come at an opportune time. The traditional designs of RCTs have been challenged and re-imagined by the increasing influence of realist evaluation [ 6 , 18 ] and the widespread acceptance that trials need to understand implementation and intervention theory as well as assess outcomes [ 17 ]. Hence the widespread adoption of embedded mixed methods process evaluations within RCTs. These broad shifts in methodological orthodoxies, the production of high-profile methodological guidance, and the expectations of research funders all create fertile ground for the continued expansion of qualitative methods within trials units. However, whilst much has been written about the importance of developing qualitative research and the possible approaches to integrating qualitative and quantitative methods within studies, much less has been published on how to operationalise this within trials units. Filling this lacuna is important. Our paper highlights how the integration of a new set of practices within an organisation can become embedded as part of its ‘normal’ everyday work whilst also shaping the practices being integrated. In the case of CTR, it could be argued that the integration of qualitative research helped shape how this work was done (e.g. systems to assess progress and innovation).

In our trials unit, the presence of a dedicated research group of methodological specialists was a key action that helped realise the development of a portfolio of qualitative research and was perhaps the most visible evidence of a commitment to do so. However, our experience demonstrates that to fully realise the goal of developing qualitative research, much work focuses on the interaction between this ‘new’ set of methods and the organisation into which it is introduced. Whilst the team of methodological specialists was tasked with, and ‘able’ to do the work, the ‘work’ itself needed to be integrated and embedded within the existing system. Thus, alongside the creation of a team and methodological capacity, promoting the legitimacy of qualitative research was important to communicate to others that it was both a distinctive and different entity, yet similar and equivalent to more established groups and practices (e.g. trial management, statistics, data management). The framing of qualitative research within strategies, the messages given out by senior leaders (formally and informally) and the general visibility of qualitative research within the system all helped to achieve this.

Normalisation Process Theory draws our attention to the concepts of embedding (making a new practice routine, normal within an organisation) and integration —the long-term sustaining of these processes. An important process through which embedding took place in our centre concerned the creation of messages and systems that called upon individuals and research teams to interact with qualitative research. Research teams were encouraged to think about qualitative research and consider its potential value for their studies. Critically, they were asked to do so at specific points, and in particular ways. Early consideration of qualitative methods to maximise and optimise their inclusion within studies was emphasised, with timely input from the qualitative team. Study adoption systems, centre-level processes for managing financial and human resources, creation of a qualitative resource planner, and awareness raising among staff, helped to reinforce this. These processes of embedding and integration were complex and they varied in intensity and speed across different areas of the Centre’s work. In part this depended on existing research traditions, the extent of prior experience of working with qualitative researchers and methods, and the priorities of subject areas and funders. Centre-wide systems, sometimes linked to CTR’s operation as a CTU, also helped to legitimise and embed qualitative research, lending it equivalence with other research activity. For example, like all CTUs, CTR was required to conform with the principles of Good Clinical Practice, necessitating the creation of a quality management system, operationalised through standard operating procedures for all areas of its work. Qualitative research was included, and became embedded, within these systems, with SOPs produced to guide activities such as qualitative analysis.

NPT provides a helpful way of understanding how trials units might integrate qualitative research within their work. It highlights how new practices interact with existing organisational systems and the work needed to promote effective interaction. That is, alongside the creation of a team or programme of qualitative research, much of the work concerns how members of an organisation understand it, engage with it, and create systems to sustain it. Embedding a new set of practices may be just as important as the quality or characteristics of the practices themselves. High-quality qualitative research is of little value if it is not recognised and drawn upon within new studies for instance. NPT also offers a helpful lens with which to understand how integration and embedding occur, and the mechanisms through which they operate. For example, promoting the legitimacy of a new set of practices, or creating systems that embed it, can help sustain these practices by creating an organisational ambition and encouraging (or requiring) individuals to interact with them in certain ways, redefining their roles accordingly. NPT highlights the ways in which integration of new practices involves bi-directional exchanges with the organisation’s existing practices, with each having the potential to re-shape the other as interaction takes place. For instance, in CTR, qualitative researchers needed to integrate and apply their methods within the quality management and other systems of a CTU, such as the formalisation of key processes within standard operating procedures, something less likely to occur outside trials units. Equally, project teams (including those led by externally based chief investigators) increased the integration of qualitative methods within their overall study design, providing opportunities for new insights on intervention theory, implementation and the experiences of practitioners and participants.

We note two aspects of the normalisation processes within CTR that are slightly less well conceptualised by NPT. The first concerns the emphasis, within coherence, on identifying the distinctiveness of new practices and how they differ from existing activities. Whilst differentiation was an important aspect of the integration of qualitative research in CTR, such integration could also be seen as operating partly through processes of de-differentiation, or at least equivalence. That is, part of integrating qualitative research was to establish it as similar in rigour, coherence and importance to other forms of research within the Centre. To be viewed as similar, or at least comparable, to existing practices was to be legitimised.

Second, whilst NPT focuses mainly on the interaction between a new set of practices and the organisational context into which it is introduced, our own experience of introducing qualitative research into a trials unit was also shaped by broader organisational and methodological contexts. For example, the increasing emphasis placed upon understanding implementation processes and the experiences of research participants in the field of clinical trials (e.g. by funders) created an environment conducive to the development of qualitative research methods within our Centre. Attempts to integrate qualitative research within studies were also cross-organisational, given that many of the studies managed within CTR drew together multi-institutional teams. This provided important opportunities to integrate qualitative research within a portfolio of studies that extended beyond CTR, and to build a network of collaborators who increasingly included qualitative methods within their funding proposals. The work of growing and integrating qualitative research within a trials unit is ongoing: ever-shifting macro-level influences can help or hinder, and the organisations within which we work are never static in terms of barriers and facilitators.

The importance of utilising qualitative methods within RCTs is now widely recognised. Increased emphasis on the evaluation of complex interventions, the influence of realist methods in directing greater attention to complexity, and the widespread adoption of mixed-methods process evaluations are key drivers of this shift. The inclusion of qualitative methods within individual trials is important, and previous research has explored approaches to their incorporation and some of the challenges encountered. Our paper highlights that the integration of qualitative methods at the organisational level of the CTU can shape how they are taken up by individual trials. Within CTR, it can be argued that qualitative research achieved high levels of integration, as conceptualised by Normalisation Process Theory. Thus, qualitative research became recognised as a coherent and valuable set of practices, secured legitimisation as an appropriate focus of individual and organisational activity, and benefitted from forms of collective action which operationalised these organisational processes. Crucially, the routinisation of qualitative research appeared to be sustained, something which NPT suggests helps to define integration (as opposed to initial embedding). However, our analysis suggested that the degree of integration varied by trial area. This variation reflected a complex mix of factors, including disciplinary traditions, methodological guidance, existing (un)familiarity with qualitative research, and the influence of regulatory frameworks for certain clinical trials.

NPT provides a valuable framework with which to understand how these processes of embedding and integration occur. Our use of NPT draws attention to sense-making and legitimisation as important steps in introducing a new set of practices within the work of an organisation. Integration also depends, across each mechanism of NPT, on the building of effective relationships which allow individuals and teams to work together in new ways. By reflecting on our experiences and the decisions taken within CTR, we have made explicit one such process for embedding qualitative research within a trials unit, whilst acknowledging that approaches may differ across trials units. Mindful of this, and of the current paper’s focus on one trials unit’s experience, we do not propose a set of recommendations for others working to achieve similar goals. Rather, we offer three overarching reflections (framed by NPT) which may act as a useful starting point for trials units (and other infrastructures) seeking to promote the adoption of qualitative research.

First, whilst research organisations such as trials units are highly heterogeneous, the processes of embedding and integration foregrounded in this paper are likely to be important across different contexts in sustaining the use of qualitative research. Second, developing a plan for the integration of qualitative research will benefit from mapping out the characteristics of the existing system; for example, it is valuable to know how familiar staff are with qualitative research and how this varies across teams within an organisation. Third, NPT frames integration as a process of implementation which operates through key generative mechanisms: coherence, cognitive participation, collective action and reflexive monitoring. These mechanisms can help guide understanding of which actions help achieve embedding and integration. Importantly, they span multiple aspects of how organisations, and the individuals within them, work. The ways in which people make sense of a new set of practices (coherence), their commitment towards it (cognitive participation), how it is operationalised (collective action) and the evaluation of its introduction (reflexive monitoring) are all important. Thus, for example, qualitative research, even when well organised and operationalised within an organisation, is unlikely to be sustained if appreciation of its value is limited or people are not committed to it.

We present our experience of engaging with the processes described above in order to open a dialogue with other trials units about ways to operationalise and optimise qualitative research in trials. Understanding how best to integrate qualitative research within these settings may help to fully realise the significant contribution it makes to the design and conduct of trials.

Availability of data and materials

Some documents cited in this paper are freely available from the Centre for Trials Research website; others can be requested from the corresponding author.

References

O’Cathain A, Thomas KJ, Drabble SJ, Rudolph A, Hewison J. What can qualitative research do for randomised controlled trials? A systematic mapping review. BMJ Open. 2013;3(6):e002889.


O’Cathain A, Thomas KJ, Drabble SJ, Rudolph A, Goode J, Hewison J. Maximising the value of combining qualitative research and randomised controlled trials in health research: the QUAlitative Research in Trials (QUART) study – a mixed methods study. Health Technol Assess. 2014;18(38):1–197.

Clement C, Edwards SL, Rapport F, Russell IT, Hutchings HA. Exploring qualitative methods reported in registered trials and their yields (EQUITY): systematic review. Trials. 2018;19(1):589.

Hennessy M, Hunter A, Healy P, Galvin S, Houghton C. Improving trial recruitment processes: how qualitative methodologies can be used to address the top 10 research priorities identified within the PRioRiTy study. Trials. 2018;19:584.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350(mar19 6):h1258.

Bonell C, Fletcher A, Morton M, Lorenc T, Moore L. Realist randomised controlled trials: a new approach to evaluating complex public health interventions. Soc Sci Med. 2012;75(12):2299–306.


O’Cathain A, Hoddinott P, Lewin S, Thomas KJ, Young B, Adamson J, et al. Maximising the impact of qualitative research in feasibility studies for randomised controlled trials: guidance for researchers. Pilot Feasibility Stud. 2015;1:32.

Cooper C, O’Cathain A, Hind D, Adamson J, Lawton J, Baird W. Conducting qualitative research within Clinical Trials Units: avoiding potential pitfalls. Contemp Clin Trials. 2014;38(2):338–43.

Rapport F, Storey M, Porter A, Snooks H, Jones K, Peconi J, et al. Qualitative research within trials: developing a standard operating procedure for a clinical trials unit. Trials. 2013;14:54.

Cardiff University. Centre for Trials Research. Available from: https://www.cardiff.ac.uk/centre-for-trials-research . Accessed 10 May 2024.

Pell B, Williams D, Phillips R, Sanders J, Edwards A, Choy E, et al. Using visual timelines in telephone interviews: reflections and lessons learned from the star family study. Int J Qual Methods. 2020;19:160940692091367.

Thomas-Jones E, Lloyd A, Roland D, Sefton G, Tume L, Hood K, et al. A prospective, mixed-methods, before and after study to identify the evidence base for the core components of an effective Paediatric Early Warning System and the development of an implementation package containing those core recommendations for use in th. BMC Pediatr. 2018;18:244.

May C, Finch T, Mair F, Ballini L, Dowrick C, Eccles M, et al. Understanding the implementation of complex interventions in health care: the normalization process model. BMC Health Serv Res. 2007;7:148.

May C, Finch T. Implementing, embedding, and integrating practices: an outline of normalization process theory. Sociology. 2009;43(3):535–54.


May CR, Mair F, Finch T, Macfarlane A, Dowrick C, Treweek S, et al. Development of a theory of implementation and integration: normalization process theory. Implement Sci. 2009;4:29.

Ogrinc G, Davies L, Goodman D, Batalden PB, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2016;25:986–92.

Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

Jamal F, Fletcher A, Shackleton N, Elbourne D, Viner R, Bonell C. The three stages of building and testing mid-level theories in a realist RCT: a theoretical and methodological case-example. Trials. 2015;16(1):466.


Acknowledgements

Members of the Centre for Trials Research (CTR) Qualitative Research Group were collaborating authors: C Drew (Senior Research Fellow—Senior Trial Manager, Brain Health and Mental Wellbeing Division), D Gillespie (Director, Infection, Inflammation and Immunity Trials, Principal Research Fellow), R Hale (now Research Associate, School of Social Sciences, Cardiff University), J Latchem-Hastings (now Lecturer and Postdoctoral Fellow, School of Healthcare Sciences, Cardiff University), R Milton (Research Associate—Trial Manager), B Pell (now PhD student, DECIPHer Centre, Cardiff University), H Prout (Research Associate—Qualitative), V Shepherd (Senior Research Fellow), K Smallman (Research Associate), H Stanton (Research Associate—Senior Data Manager). Thanks are due to Kerry Hood and Aimee Grant for their involvement in developing processes and systems for qualitative research within CTR.

Funding

No specific grant was received to support the writing of this paper.

Author information

Authors and affiliations

Centre for Trials Research, DECIPHer Centre, Cardiff University, Neuadd Meirionnydd, Heath Park, Cardiff, CF14 4YS, UK

Jeremy Segrott

Centre for Trials Research, Cardiff University, Neuadd Meirionnydd, Heath Park, Cardiff, CF14 4YS, UK

Sue Channon, Eleni Glarou, Jacqueline Hughes, Nina Jacob, Sarah Milosevic, Yvonne Moriarty, Mike Robling, Heather Strange, Julia Townson & Lucy Brookes-Howell

Division of Population Medicine, School of Medicine, Cardiff University, Neuadd Meirionnydd, Heath Park, Cardiff, CF14 4YS, UK

Eleni Glarou

Wales Centre for Public Policy, Cardiff University, Sbarc I Spark, Maindy Road, Cardiff, CF24 4HQ, UK

School of Social Sciences, Cardiff University, King Edward VII Avenue, Cardiff, CF10 3WA, UK

Josie Henley

DECIPHer Centre, School of Social Sciences, Cardiff University, Sbarc I Spark, Maindy Road, Cardiff, CF24 4HQ, UK

Bethan Pell


Qualitative Research Group

  • D. Gillespie
  • J. Latchem-Hastings
  • R. Milton
  • V. Shepherd
  • K. Smallman
  • H. Stanton

Contributions

JS contributed to the design of the work and interpretation of data and was responsible for leading the drafting and revision of the paper. SC contributed to the design of the work, the acquisition of data and the drafting and revision of the paper. AL contributed to the design of the work, the acquisition of data and the drafting and revision of the paper. EG contributed to a critical review of the manuscript and provided additional relevant references. JH provided feedback on initial drafts of the paper and contributed to subsequent revisions. JHu provided feedback on initial drafts of the paper and contributed to subsequent revisions. NG provided feedback on initial drafts of the paper and contributed to subsequent revisions. SM was involved in the acquisition and analysis of data and provided a critical review of the manuscript. YM was involved in the acquisition and analysis of data and provided a critical review of the manuscript. MR was involved in the interpretation of data and critical review and revision of the paper. HS contributed to the conception and design of the work, the acquisition and analysis of data, and the revision of the manuscript. JT provided feedback on initial drafts of the paper and contributed to subsequent revisions. LB-H made a substantial contribution to the design and conception of the work, led the acquisition and analysis of data, and contributed to the drafting and revision of the paper.

Corresponding author

Correspondence to Jeremy Segrott.

Ethics declarations

Ethics approval and consent to participate

Ethical approval was not sought as no personal or identifiable data was collected.

Consent for publication

Competing interests

All authors are or were members of staff or students in the Centre for Trials Research. JS is an associate editor of Trials.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Segrott, J., Channon, S., Lloyd, A. et al. Integrating qualitative research within a clinical trials unit: developing strategies and understanding their implementation in contexts. Trials 25, 323 (2024). https://doi.org/10.1186/s13063-024-08124-7


Received: 20 October 2023

Accepted: 17 April 2024

Published: 16 May 2024

DOI: https://doi.org/10.1186/s13063-024-08124-7


Keywords

  • Qualitative research
  • Qualitative methods
  • Trials units
  • Normalisation Process Theory
  • Randomised controlled trials

ISSN: 1745-6215



Literature Review Research Design

  • First Online: 10 November 2021


  • Stefan Hunziker &
  • Michael Blankenagel


This chapter addresses the peculiarities, characteristics, and major fallacies of literature review research design. Conducting and writing poor literature reviews is one of many ways to lower the value of an academic work. State-of-the-art literature reviews are valuable and publishable scholarly documents. Too many new scholars think that empirical research is the only proper research. In this chapter, researchers find relevant information on how to write a literature review research design paper and learn about typical methodologies used for this research design. The chapter closes by referring to related research designs.



Author information

Authors and affiliations

Wirtschaft/IFZ – Campus Zug-Rotkreuz, Hochschule Luzern, Zug-Rotkreuz, Zug, Switzerland

Stefan Hunziker & Michael Blankenagel


Corresponding author

Correspondence to Stefan Hunziker.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter

Hunziker, S., Blankenagel, M. (2021). Literature Review Research Design. In: Research Design in Business and Management. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-34357-6_13


DOI: https://doi.org/10.1007/978-3-658-34357-6_13

Published: 10 November 2021

Publisher Name: Springer Gabler, Wiesbaden

Print ISBN: 978-3-658-34356-9

Online ISBN: 978-3-658-34357-6

eBook Packages: Business and Economics (German Language)




