
Research Design – Types, Methods and Examples

Definition:

Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study. Research design is important because it guides the entire research process and ensures that the study is conducted in a systematic and rigorous manner.

Types of Research Design

The main types of research design are as follows:

Descriptive Research Design

This type of research design is used to describe a phenomenon or situation. It involves collecting data through surveys, questionnaires, interviews, and observations. The aim of descriptive research is to provide an accurate and detailed portrayal of a particular group, event, or situation. It can be useful in identifying patterns, trends, and relationships in the data.

Correlational Research Design

Correlational research design is used to determine if there is a relationship between two or more variables. This type of research design involves collecting data from participants and analyzing the relationship between the variables using statistical methods. The aim of correlational research is to identify the strength and direction of the relationship between the variables.

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This type of research design involves manipulating one variable and measuring the effect on another variable. It usually involves randomly assigning participants to groups and manipulating an independent variable to determine its effect on a dependent variable. The aim of experimental research is to establish causality.

Quasi-experimental Research Design

Quasi-experimental research design is similar to experimental research design, but it lacks one or more of the features of a true experiment. For example, there may not be random assignment to groups or a control group. This type of research design is used when it is not feasible or ethical to conduct a true experiment.

Case Study Research Design

Case study research design is used to investigate a single case or a small number of cases in depth. It involves collecting data through various methods, such as interviews, observations, and document analysis. The aim of case study research is to provide an in-depth understanding of a particular case or situation.

Longitudinal Research Design

Longitudinal research design is used to study changes in a particular phenomenon over time. It involves collecting data at multiple time points and analyzing the changes that occur. The aim of longitudinal research is to provide insights into the development, growth, or decline of a particular phenomenon over time.

Structure of Research Design

The format of a research design typically includes the following sections:

  • Introduction: This section provides an overview of the research problem, the research questions, and the importance of the study. It also includes a brief literature review that summarizes previous research on the topic and identifies gaps in the existing knowledge.
  • Research Questions or Hypotheses: This section identifies the specific research questions or hypotheses that the study will address. These questions should be clear, specific, and testable.
  • Research Methods: This section describes the methods that will be used to collect and analyze data. It includes details about the study design, the sampling strategy, the data collection instruments, and the data analysis techniques.
  • Data Collection: This section describes how the data will be collected, including the sample size, data collection procedures, and any ethical considerations.
  • Data Analysis: This section describes how the data will be analyzed, including the statistical techniques that will be used to test the research questions or hypotheses.
  • Results: This section presents the findings of the study, including descriptive statistics and statistical tests.
  • Discussion and Conclusion: This section summarizes the key findings of the study, interprets the results, and discusses the implications of the findings. It also includes recommendations for future research.
  • References: This section lists the sources cited in the research design.

Example of Research Design

An example of a research design could be:

Research question: Does the use of social media affect the academic performance of high school students?

Research design:

  • Research approach: The research approach will be quantitative, as it involves collecting numerical data to test the hypothesis.
  • Research design: The study will use a quasi-experimental design, specifically a pretest-posttest control group design.
  • Sample: The sample will be 200 high school students from two schools, with 100 students in the experimental group and 100 students in the control group.
  • Data collection: The data will be collected through surveys administered to the students at the beginning and end of the academic year. The surveys will include questions about their social media usage and academic performance.
  • Data analysis: The data collected will be analyzed using statistical software. The mean scores of the experimental and control groups will be compared to determine whether there is a significant difference in academic performance between the two groups.
  • Limitations: The limitations of the study will be acknowledged, including the fact that social media usage can vary greatly among individuals, and that the study focuses on only two schools, which may not be representative of the entire population.
  • Ethical considerations: Ethical considerations will be taken into account, such as obtaining informed consent from the participants and ensuring their anonymity and confidentiality.
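To make the data-analysis step above concrete: comparing the mean scores of the two groups would normally be done with statistical software, but the core calculation is small enough to sketch in plain Python. The scores below are invented for illustration; Welch’s t statistic is one standard choice for comparing two independent groups (in practice you would also compute a p value and check the test’s assumptions).

```python
import statistics
from math import sqrt

# Hypothetical end-of-year performance scores (0-100) for each group.
experimental = [72, 68, 75, 70, 66, 74, 69, 71]  # heavy social media use
control = [78, 80, 76, 82, 79, 77, 81, 75]       # restricted use

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(experimental, control)
print(round(t, 2))
```

A large absolute t value, as here, would then be checked against the t distribution to judge whether the difference between group means is statistically significant.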

How to Write a Research Design

Writing a research design involves planning and outlining the methodology and approach that will be used to answer a research question or hypothesis. Here are some steps to help you write a research design:

  • Define the research question or hypothesis: Before beginning your research design, you should clearly define your research question or hypothesis. This will guide your research design and help you select appropriate methods.
  • Select a research design: There are many different research designs to choose from, including experimental, survey, case study, and qualitative designs. Choose a design that best fits your research question and objectives.
  • Develop a sampling plan: If your research involves collecting data from a sample, you will need to develop a sampling plan. This should outline how you will select participants and how many participants you will include.
  • Define variables: Clearly define the variables you will be measuring or manipulating in your study. This will help ensure that your results are meaningful and relevant to your research question.
  • Choose data collection methods: Decide on the data collection methods you will use to gather information. This may include surveys, interviews, observations, experiments, or secondary data sources.
  • Create a data analysis plan: Develop a plan for analyzing your data, including the statistical or qualitative techniques you will use.
  • Consider ethical concerns: Finally, be sure to consider any ethical concerns related to your research, such as participant confidentiality or potential harm.

When to Write a Research Design

Research design should be written before conducting any research study. It is an important planning phase that outlines the research methodology, data collection methods, and data analysis techniques that will be used to investigate a research question or problem. The research design helps to ensure that the research is conducted in a systematic and logical manner, and that the data collected is relevant and reliable.

Ideally, the research design should be developed as early as possible in the research process, before any data is collected. This allows the researcher to carefully consider the research question, identify the most appropriate research methodology, and plan the data collection and analysis procedures in advance. By doing so, the research can be conducted in a more efficient and effective manner, and the results are more likely to be valid and reliable.

Purpose of Research Design

The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection and analysis.

Some of the key purposes of research design include:

  • Providing a clear and concise plan of action for the research study.
  • Ensuring that the research is conducted ethically and with rigor.
  • Maximizing the accuracy and reliability of the research findings.
  • Minimizing the possibility of errors, biases, or confounding variables.
  • Ensuring that the research is feasible, practical, and cost-effective.
  • Determining the appropriate research methodology to answer the research question(s).
  • Identifying the sample size, sampling method, and data collection techniques.
  • Determining the data analysis method and statistical tests to be used.
  • Facilitating the replication of the study by other researchers.
  • Enhancing the validity and generalizability of the research findings.

Applications of Research Design

There are numerous applications of research design in various fields, some of which are:

  • Social sciences: In fields such as psychology, sociology, and anthropology, research design is used to investigate human behavior and social phenomena. Researchers use various research designs, such as experimental, quasi-experimental, and correlational designs, to study different aspects of social behavior.
  • Education: Research design is essential in the field of education to investigate the effectiveness of different teaching methods and learning strategies. Researchers use various designs, such as experimental, quasi-experimental, and case study designs, to understand how students learn and how to improve teaching practices.
  • Health sciences: In the health sciences, research design is used to investigate the causes, prevention, and treatment of diseases. Researchers use various designs, such as randomized controlled trials, cohort studies, and case-control studies, to study different aspects of health and healthcare.
  • Business: Research design is used in the field of business to investigate consumer behavior, marketing strategies, and the impact of different business practices. Researchers use various designs, such as survey research, experimental research, and case studies, to study different aspects of the business world.
  • Engineering: In the field of engineering, research design is used to investigate the development and implementation of new technologies. Researchers use various designs, such as experimental research and case studies, to study the effectiveness of new technologies and to identify areas for improvement.

Advantages of Research Design

Here are some advantages of research design:

  • Systematic and organized approach: A well-designed research plan ensures that the research is conducted in a systematic and organized manner, which makes it easier to manage and analyze the data.
  • Clear objectives: The research design helps to clarify the objectives of the study, which makes it easier to identify the variables that need to be measured, and the methods that need to be used to collect and analyze data.
  • Minimizes bias: A well-designed research plan minimizes the chances of bias, by ensuring that the data is collected and analyzed objectively, and that the results are not influenced by the researcher’s personal biases or preferences.
  • Efficient use of resources: A well-designed research plan helps to ensure that resources (time, money, and personnel) are used efficiently and effectively, by focusing on the most important variables and methods.
  • Replicability: A well-designed research plan makes it easier for other researchers to replicate the study, which enhances the credibility and reliability of the findings.
  • Validity: A well-designed research plan helps to ensure that the findings are valid, by ensuring that the methods used to collect and analyze data are appropriate for the research question.
  • Generalizability: A well-designed research plan helps to ensure that the findings can be generalized to other populations, settings, or situations, which increases the external validity of the study.


About the author: Muhammad Hassan, Researcher, Academic Writer, Web developer

Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes . Revised on 20 March 2023.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
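As a minimal sketch of what probability sampling looks like in practice, the snippet below draws a simple random sample from a hypothetical sampling frame of 500 student IDs (the frame and the sample size are invented for illustration):

```python
import random

# Hypothetical sampling frame: ID numbers for every student in the population.
population = list(range(1, 501))  # 500 students

random.seed(42)  # fixed seed so the draw can be reproduced
sample = random.sample(population, 50)  # simple random sample, no repeats

# Each student had the same 50/500 = 10% chance of selection,
# which is what makes this a probability sampling method.
print(len(sample), min(sample), max(sample))
```

A non-probability approach (for example, recruiting whichever students volunteer) has no such known selection probability, which is why it limits how far results can be generalised.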

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity have already been established.

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
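One common reliability check on pilot questionnaire data is Cronbach’s alpha, which measures the internal consistency of a multi-item scale. The sketch below uses made-up pilot responses (3 items, 5 respondents) purely to show the calculation:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: one list per questionnaire item, each containing
    one score per respondent (same respondent order in every list).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(statistics.variance(col) for col in items)
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical pilot data: 3 items answered by 5 respondents (1-5 scale).
item_scores = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]

alpha = cronbach_alpha(item_scores)
print(round(alpha, 2))
```

Values of alpha close to 1 suggest the items measure the same underlying concept consistently; a common (rough) rule of thumb treats values above about 0.7 as acceptable.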

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

Step 6: Decide on your data analysis strategies

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
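These three summaries are straightforward to compute; the sketch below uses invented test scores for illustration:

```python
import statistics
from collections import Counter

# Hypothetical test scores for a sample of 12 participants.
scores = [55, 60, 60, 65, 65, 65, 70, 70, 75, 80, 85, 90]

distribution = Counter(scores)    # frequency of each score
mean = statistics.mean(scores)    # central tendency: the average score
sd = statistics.stdev(scores)     # variability: sample standard deviation

print(distribution[65], mean, round(sd, 1))
```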

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.
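As a toy illustration of this, operationalising a concept often means reducing several measurable items to a single score. The items and scoring rule below are invented for the example, not a validated social anxiety instrument:

```python
# Hypothetical self-rating items for one participant, each scored
# 1 (never) to 5 (always). Illustrative only, not a validated inventory.
responses = {
    "avoids_crowded_places": 4,
    "fears_negative_judgement": 5,
    "physical_symptoms_in_groups": 3,
}

# One simple scoring rule: take the mean item score, so the composite
# stays on the same 1-5 scale as the individual items.
social_anxiety_score = sum(responses.values()) / len(responses)
print(social_anxiety_score)
```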

Before collecting data, it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article


McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 20 March 2024, from https://www.scribbr.co.uk/research-methods/research-design/


Grad Coach

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “research design”. Here, we’ll guide you through the basics using practical examples, so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?
  • Research design types for quantitative studies
  • Video explainer: quantitative research design
  • Research design types for qualitative studies
  • Video explainer: qualitative research design
  • How to choose a research design
  • Key takeaways

What is research design?

Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned methodological choices – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is that the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods, which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology. Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive, correlational, experimental, and quasi-experimental.

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.
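To make this concrete, here’s a minimal sketch (in Python, using entirely made-up response data) of the kind of simple descriptive summary such a survey might produce: rating frequencies, a mean score, and the percentage of respondents who agree with a statement.

```python
from collections import Counter

# Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree)
# to the statement "I feel anxious when I can't check my phone."
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 5, 4, 3, 5]

counts = Counter(responses)                 # frequency of each rating
mean_score = sum(responses) / len(responses)
pct_agree = 100 * sum(1 for r in responses if r >= 4) / len(responses)

print(dict(sorted(counts.items())))
print(f"mean = {mean_score:.2f}, agreeing (4 or 5) = {pct_agree:.0f}%")
```

Note that this purely describes the sample – nothing here tests relationships or causes, which is exactly the point of a descriptive design.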

The key defining attribute of this type of research design is that it purely describes the situation. In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. That said, descriptive research is useful for generating insight into a research problem by describing its characteristics, and it is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them. In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).
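As a sketch of what that statistical step might look like, the snippet below computes a Pearson correlation coefficient from scratch on made-up exercise and health figures. In practice you’d more likely use a statistics package, and you’d also want a p-value alongside the coefficient.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: weekly exercise sessions vs. a composite health score
exercise = [0, 1, 2, 3, 4, 5, 6]
health = [52, 55, 61, 64, 70, 71, 78]

r = pearson_r(exercise, health)
print(f"r = {r:.3f}")  # close to +1 here, i.e. a strong positive relationship
```

The coefficient tells you the strength and direction of the relationship, but, as the next paragraphs stress, nothing about causation.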

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality. In other words, correlation does not equal causation. To establish causality, you’ll need to move into the realm of experimental design, coming up next…


Experimental Research Design

Experimental research design is used to determine whether there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while controlling for extraneous variables, and then measure the effect on an outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.
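For a two-group version of this experiment (one fertiliser group versus a no-fertiliser control), the comparison often boils down to a t-test. The sketch below (with fabricated growth figures) computes Welch’s t statistic by hand; in a real analysis you’d use a statistics library to also obtain degrees of freedom and a p-value.

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

# Hypothetical plant growth (cm) after four weeks
fertiliser_a = [12.1, 13.4, 11.8, 12.9, 13.0]
control = [9.8, 10.2, 9.5, 10.6, 9.9]

t = welch_t(fertiliser_a, control)
print(f"t = {t:.2f}")  # a large |t| suggests the group means differ beyond chance
```

Because the researcher manipulated the treatment and controlled everything else, a significant difference here can be read causally, which is precisely what a correlational design cannot offer.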

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment. This means that the researcher needs to assign participants to different groups or conditions in a way that gives each participant an equal chance of being assigned to any group (note that this is not the same as random sampling). Doing so helps reduce the potential for bias and confounding variables. This need for random assignment can lead to ethics-related issues. For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
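Random assignment itself is mechanically simple. The sketch below (using hypothetical participant IDs) shuffles the participant list and deals it round-robin into groups, so every participant has an equal chance of landing in any group:

```python
import random

def random_assignment(participants, group_names, seed=None):
    """Shuffle participants, then deal them round-robin into the named groups."""
    rng = random.Random(seed)   # seeded here only so the example is reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    assigned = {name: [] for name in group_names}
    for i, participant in enumerate(shuffled):
        assigned[group_names[i % len(group_names)]].append(participant)
    return assigned

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants
assigned = random_assignment(participants, ["treatment", "control"], seed=42)
print({name: len(members) for name, members in assigned.items()})  # 10 in each
```

The round-robin dealing also keeps the group sizes balanced, which is usually desirable for the statistical comparison that follows.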

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relationships, but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed.

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation, especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive, given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes.

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities. All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context.

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve testing cause-and-effect relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.





Research Methods Guide: Research Design & Method



FAQ: Research Design & Method

What is the difference between Research Design and Research Method?

Research design is a plan to answer your research question.  A research method is a strategy used to implement that plan.  Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively.

Which research method should I choose?

It depends on your research goal and on what subjects (and whom) you want to study.  Let's say you are interested in studying what makes people happy, or why some students are more conscious about recycling on campus.  To answer these questions, you need to make a decision about how to collect your data.  The most frequently used methods include:

  • Observation / Participant Observation
  • Survey
  • Interview
  • Focus Groups
  • Experiments
  • Secondary Data Analysis / Archival Study
  • Mixed Methods (combination of some of the above)

One particular method could be better suited to your research goal than others, because the data you collect from different methods will differ in quality and quantity.  For instance, surveys are usually designed to produce relatively short answers, rather than the extensive responses expected in qualitative interviews.

What other factors should I consider when choosing one method over another?

Time for data collection and analysis is something you want to consider.  An observation or interview method (a so-called qualitative approach) helps you collect richer information, but it takes time.  Using a survey helps you collect more data quickly, yet it may lack detail.  So, you will need to consider the time you have for research and the balance between the strengths and weaknesses associated with each method (e.g., qualitative vs. quantitative).

  • Last Updated: Aug 21, 2023 10:42 AM


5 Research design

Research design is a comprehensive plan for data collection in an empirical research project. It is a ‘blueprint’ for empirical research aimed at answering specific research questions or testing specific hypotheses, and must specify at least three processes: the data collection process, the instrument development process, and the sampling process. The instrument development and sampling processes are described in the next two chapters, and the data collection process—which is often loosely called ‘research design’—is introduced in this chapter and is described in further detail in Chapters 9–12.

Broadly speaking, data collection methods can be grouped into two categories: positivist and interpretive. Positivist methods, such as laboratory experiments and survey research, are aimed at theory (or hypothesis) testing, while interpretive methods, such as action research and ethnography, are aimed at theory building. Positivist methods employ a deductive approach to research, starting with a theory and testing theoretical postulates using empirical data. In contrast, interpretive methods employ an inductive approach that starts with data and tries to derive a theory about the phenomenon of interest from the observed data. Oftentimes, these methods are incorrectly equated with quantitative and qualitative research. Quantitative and qualitative methods refer to the type of data being collected—quantitative data involve numeric scores, metrics, and so on, while qualitative data include interviews, observations, and so forth—and analysed (i.e., using quantitative techniques such as regression or qualitative techniques such as coding). Positivist research uses predominantly quantitative data, but can also use qualitative data. Interpretive research relies heavily on qualitative data, but can sometimes benefit from including quantitative data as well. Sometimes, the joint use of qualitative and quantitative data may help generate unique insight into a complex social phenomenon that is not available from either type of data alone, and hence, mixed-mode designs that combine qualitative and quantitative data are often highly desirable.

Key attributes of a research design

The quality of research designs can be defined in terms of four key design attributes: internal validity, external validity, construct validity, and statistical conclusion validity.

Internal validity, also called causality, examines whether the observed change in a dependent variable is indeed caused by a corresponding change in a hypothesised independent variable, and not by variables extraneous to the research context. Causality requires three conditions: covariation of cause and effect (i.e., if the cause happens, then the effect also happens; if the cause does not happen, the effect does not happen), temporal precedence (the cause must precede the effect in time), and no spurious correlation (i.e., there is no plausible alternative explanation for the change). Certain research designs, such as laboratory experiments, are strong in internal validity by virtue of their ability to manipulate the independent variable (cause) via a treatment and observe the effect (dependent variable) of that treatment after a certain point in time, while controlling for the effects of extraneous variables. Other designs, such as field surveys, are poor in internal validity because of their inability to manipulate the independent variable (cause), and because cause and effect are measured at the same point in time, which defeats temporal precedence and makes it equally likely that the expected effect might have influenced the expected cause rather than the reverse. Although higher in internal validity compared to other methods, laboratory experiments are by no means immune to threats to internal validity, and are susceptible to history, testing, instrumentation, regression, and other threats that are discussed later in the chapter on experimental designs. Nonetheless, different research designs vary considerably in their respective levels of internal validity.

External validity or generalisability refers to whether the observed associations can be generalised from the sample to the population (population validity), or to other people, organisations, contexts, or times (ecological validity). For instance, can results drawn from a sample of financial firms in the United States be generalised to the population of financial firms (population validity), or to other types of firms (ecological validity)? Survey research, where data is sourced from a wide variety of individuals, firms, or other units of analysis, tends to have broader generalisability than laboratory experiments, where treatments and extraneous variables are more controlled. The variation in internal and external validity for a wide range of research designs is shown in Figure 5.1.

Internal and external validity

Some researchers claim that there is a trade-off between internal and external validity—higher external validity can come only at the cost of internal validity, and vice versa. But this is not always the case. Research designs such as field experiments, longitudinal field surveys, and multiple case studies have higher degrees of both internal and external validity. Personally, I prefer research designs that have reasonable degrees of both internal and external validity, i.e., those that fall within the cone of validity shown in Figure 5.1. But this should not suggest that designs outside this cone are any less useful or valuable. Researchers’ choice of design is ultimately a matter of their personal preference and competence, and the level of internal and external validity they desire.

Construct validity examines how well a given measurement scale is measuring the theoretical construct that it is expected to measure. Many constructs used in social science research such as empathy, resistance to change, and organisational learning are difficult to define, much less measure. For instance, construct validity must ensure that a measure of empathy is indeed measuring empathy and not compassion, which may be difficult since these constructs are somewhat similar in meaning. Construct validity is assessed in positivist research based on correlational or factor analysis of pilot test data, as described in the next chapter.

Statistical conclusion validity examines the extent to which conclusions derived using a statistical procedure are valid. For example, it examines whether the right statistical method was used for hypotheses testing, whether the variables used meet the assumptions of that statistical test (such as sample size or distributional requirements), and so forth. Because interpretive research designs do not employ statistical tests, statistical conclusion validity is not applicable for such analysis. The different kinds of validity and where they exist at the theoretical/empirical levels are illustrated in Figure 5.2.

Different types of validity in scientific research

Improving internal and external validity

The best research designs are those that can ensure high levels of internal and external validity. Such designs would guard against spurious correlations, inspire greater faith in hypothesis testing, and ensure that the results drawn from a small sample are generalisable to the population at large. Controls are required to ensure the internal validity (causality) of research designs, and can be accomplished in five ways: manipulation, elimination, inclusion, statistical control, and randomisation.

In manipulation , the researcher manipulates the independent variables in one or more levels (called ‘treatments’), and compares the effects of the treatments against a control group where subjects do not receive the treatment. Treatments may include a new drug or different dosage of drug (for treating a medical condition), a teaching style (for students), and so forth. This type of control is achieved in experimental or quasi-experimental designs, but not in non-experimental designs such as surveys. Note that if subjects cannot distinguish adequately between different levels of treatment manipulations, their responses across treatments may not be different, and manipulation would fail.

The elimination technique relies on eliminating extraneous variables by holding them constant across treatments, such as by restricting the study to a single gender or a single socioeconomic status. In the inclusion technique, the role of extraneous variables is considered by including them in the research design and separately estimating their effects on the dependent variable, such as via factorial designs where one factor is gender (male versus female). This technique allows for greater generalisability, but also requires substantially larger samples. In statistical control, extraneous variables are measured and used as covariates during the statistical testing process.
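One way to see what statistical control does is the residualisation idea behind partial correlation: regress both the outcome and the predictor on the covariate, then correlate the residuals. The sketch below (pure Python, with invented data in which a covariate z drives both x and y) is only an illustration of the principle; in practice you would typically just add the covariate to a multiple regression model.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys))

def residuals(ys, xs):
    """Residuals from a simple least-squares regression of ys on xs (with intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    alpha = my - beta * mx
    return [y - (alpha + beta * x) for x, y in zip(xs, ys)]

# Invented data: the covariate z drives both x and y
z = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
x = [1.2, 2.1, 2.8, 4.3, 4.9, 6.2, 6.8, 8.1, 9.3, 9.8]
y = [2.0, 3.9, 6.2, 7.8, 10.3, 11.9, 14.1, 16.2, 17.8, 20.1]

raw = pearson_r(x, y)
partial = pearson_r(residuals(x, z), residuals(y, z))  # x-y association, controlling for z
print(f"raw r = {raw:.3f}, partial r (controlling for z) = {partial:.3f}")
```

Once z is partialled out, the apparently strong raw association between x and y shrinks, which is exactly the effect of treating z as a covariate.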

Finally, the randomisation technique is aimed at cancelling out the effects of extraneous variables through a process of random sampling, if it can be assured that these effects are of a random (non-systematic) nature. Two types of randomisation are: random selection , where a sample is selected randomly from a population, and random assignment , where subjects selected in a non-random manner are randomly assigned to treatment groups.

Randomisation also ensures external validity, allowing inferences drawn from the sample to be generalised to the population from which the sample is drawn. Note that random assignment is mandatory when random selection is not possible because of resource or access constraints. However, generalisability across populations is harder to ascertain since populations may differ on multiple dimensions and you can only control for a few of those dimensions.
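The distinction between the two types of randomisation can be shown in a few lines of Python. This is a sketch using a hypothetical population and a hypothetical volunteer pool; the seed exists only to make the example reproducible.

```python
import random

rng = random.Random(7)  # seeded for reproducibility only

# Random selection: draw a sample from the population
population = [f"person_{i}" for i in range(1000)]
sample = rng.sample(population, 50)        # every member has an equal chance of selection

# Random assignment: split an available (possibly non-randomly selected) pool into conditions
pool = [f"volunteer_{i}" for i in range(40)]
rng.shuffle(pool)
treatment, control = pool[:20], pool[20:]  # each volunteer has an equal chance of either condition

print(len(sample), len(treatment), len(control))  # 50 20 20
```

Note that the two steps are independent: a study can randomly assign a pool of self-selected volunteers (internal validity) without having randomly selected them from the population (external validity), which mirrors the distinction drawn above.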

Popular research designs

As noted earlier, research designs can be classified into two categories—positivist and interpretive—depending on the goal of the research. Positivist designs are meant for theory testing, while interpretive designs are meant for theory building. Positivist designs seek generalised patterns based on an objective view of reality, while interpretive designs seek subjective interpretations of social phenomena from the perspectives of the subjects involved. Some popular examples of positivist designs include laboratory experiments, field experiments, field surveys, secondary data analysis, and case research, while examples of interpretive designs include case research, phenomenology, and ethnography. Note that case research can be used for theory building or theory testing, though not at the same time. Not all techniques are suited for all kinds of scientific research. Some techniques such as focus groups are best suited for exploratory research, others such as ethnography are best for descriptive research, and still others such as laboratory experiments are ideal for explanatory research. Following are brief descriptions of some of these designs. Additional details are provided in Chapters 9–12.

Experimental studies are those that are intended to test cause-effect relationships (hypotheses) in a tightly controlled setting by separating the cause from the effect in time, administering the cause to one group of subjects (the ‘treatment group’) but not to another group (‘control group’), and observing how the mean effects vary between subjects in these two groups. For instance, if we design a laboratory experiment to test the efficacy of a new drug in treating a certain ailment, we can get a random sample of people afflicted with that ailment, randomly assign them to one of two groups (treatment and control groups), administer the drug to subjects in the treatment group, but only give a placebo (e.g., a sugar pill with no medicinal value) to subjects in the control group. More complex designs may include multiple treatment groups, such as low versus high dosage of the drug or combining drug administration with dietary interventions. In a true experimental design , subjects must be randomly assigned to each group. If random assignment is not followed, then the design becomes quasi-experimental . Experiments can be conducted in an artificial or laboratory setting such as at a university (laboratory experiments) or in field settings such as in an organisation where the phenomenon of interest is actually occurring (field experiments). Laboratory experiments allow the researcher to isolate the variables of interest and control for extraneous variables, which may not be possible in field experiments. Hence, inferences drawn from laboratory experiments tend to be stronger in internal validity, but those from field experiments tend to be stronger in external validity. Experimental data is analysed using quantitative statistical techniques. 
The primary strength of the experimental design is its strong internal validity due to its ability to isolate, control, and intensively examine a small number of variables, while its primary weakness is limited external generalisability since real life is often more complex (i.e., involving more extraneous variables) than contrived lab settings. Furthermore, if the research does not identify ex ante relevant extraneous variables and control for such variables, such lack of controls may hurt internal validity and may lead to spurious correlations.
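The logic of a two-group experiment — random assignment, treatment administered to one group only, and comparison of group means — can be sketched with a small simulation. This is a purely hypothetical illustration (invented subject data and effect size, Python standard library only), not an analysis procedure from the text:

```python
import random
import statistics

def run_experiment(n_subjects, true_effect, seed=7):
    """Simulate a two-group drug trial: randomly assign subjects to
    treatment or control, apply the effect to the treatment group only,
    and return the difference in mean outcomes between the groups."""
    rng = random.Random(seed)
    # Baseline symptom severity, drawn from the same population.
    baseline = [rng.gauss(10, 2) for _ in range(n_subjects)]
    ids = list(range(n_subjects))
    rng.shuffle(ids)                          # random assignment
    treatment = set(ids[: n_subjects // 2])   # gets the drug
    control = set(ids[n_subjects // 2:])      # gets the placebo
    # The simulated drug lowers severity by `true_effect`; placebo does nothing.
    post = [baseline[i] - (true_effect if i in treatment else 0.0)
            for i in range(n_subjects)]
    mean_t = statistics.mean(post[i] for i in treatment)
    mean_c = statistics.mean(post[i] for i in control)
    return mean_c - mean_t                    # estimated treatment effect

# With enough subjects, the difference in group means recovers the
# true effect, because random assignment balances extraneous variables.
estimate = run_experiment(n_subjects=200, true_effect=3.0)
```

Because assignment is random, any systematic difference between the group means can be attributed to the treatment rather than to pre-existing differences between subjects.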

Field surveys are non-experimental designs that do not control for or manipulate independent variables or treatments, but measure these variables and test their effects using statistical methods. Field surveys capture snapshots of practices, beliefs, or situations from a random sample of subjects in field settings through a survey questionnaire or, less frequently, through a structured interview. In cross-sectional field surveys, independent and dependent variables are measured at the same point in time (e.g., using a single questionnaire), while in longitudinal field surveys, dependent variables are measured at a later point in time than the independent variables. The strengths of field surveys are their external validity (since data is collected in field settings), their ability to capture and control for a large number of variables, and their ability to study a problem from multiple perspectives or using multiple theories. However, because of their non-temporal nature, internal validity (cause-effect relationships) is difficult to infer, and surveys may be subject to respondent biases (e.g., subjects may provide a ‘socially desirable’ response rather than their true response), which further hurts internal validity.

Secondary data analysis is an analysis of data that has previously been collected and tabulated by other sources. Such data may include data from government agencies such as employment statistics from the U.S. Bureau of Labor Statistics or development statistics by country from the United Nations Development Programme, data collected by other researchers (often used in meta-analytic studies), or publicly available third-party data, such as financial data from stock markets or real-time auction data from eBay. This is in contrast to most other research designs, where collecting primary data is part of the researcher’s job. Secondary data analysis may be an effective means of research where primary data collection is too costly or infeasible, and secondary data is available at a level of analysis suitable for answering the researcher’s questions. The limitations of this design are that the data might not have been collected in a systematic or scientific manner and may hence be unsuitable for scientific research; that, since the data was collected for a presumably different purpose, it may not adequately address the research questions of interest to the researcher; and that internal validity is problematic if the temporal precedence between cause and effect is unclear.
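In practice, secondary data analysis often means re-aggregating already-tabulated data at the level of analysis the research question requires. A minimal sketch (the agency, figures, and column names are all hypothetical, inlined as CSV for illustration):

```python
import csv
import io

# Hypothetical secondary data: unemployment rates already collected and
# tabulated by an external agency, inlined here as CSV for illustration.
raw = """region,year,unemployment_rate
North,2019,4.1
North,2020,6.8
South,2019,5.0
South,2020,7.9
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Re-aggregate at the level of analysis the research question requires
# (here: the national average rate by year).
by_year = {}
for row in rows:
    by_year.setdefault(row["year"], []).append(float(row["unemployment_rate"]))
averages = {year: sum(rates) / len(rates) for year, rates in by_year.items()}
```

Note that nothing in the code can repair the design limitations discussed above: if the agency's collection procedure or level of aggregation does not fit the research question, no amount of re-analysis will fix it.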

Case research is an in-depth investigation of a problem in one or more real-life settings (case sites) over an extended period of time. Data may be collected using a combination of interviews, personal observations, and internal or external documents. Case studies can be positivist in nature (for hypothesis testing) or interpretive (for theory building). The strength of this research method is its ability to discover a wide variety of social, cultural, and political factors potentially related to the phenomenon of interest that may not be known in advance. Analysis tends to be qualitative in nature, but heavily contextualised and nuanced. However, interpretation of findings may depend on the observational and integrative ability of the researcher, lack of control may make it difficult to establish causality, and findings from a single case site may not be readily generalised to other case sites. Generalisability can be improved by replicating and comparing the analysis in other case sites in a multiple case design.

Focus group research is a type of research that involves bringing in a small group of subjects (typically six to ten people) at one location, and having them discuss a phenomenon of interest for a period of one and a half to two hours. The discussion is moderated and led by a trained facilitator, who sets the agenda and poses an initial set of questions for participants, makes sure that the ideas and experiences of all participants are represented, and attempts to build a holistic understanding of the problem situation based on participants’ comments and experiences. Internal validity cannot be established due to lack of controls and the findings may not be generalised to other settings because of the small sample size. Hence, focus groups are not generally used for explanatory or descriptive research, but are more suited for exploratory research.

Action research assumes that complex social phenomena are best understood by introducing interventions or ‘actions’ into those phenomena and observing the effects of those actions. In this method, the researcher is embedded within a social context such as an organisation and initiates an action—such as new organisational procedures or new technologies—in response to a real problem such as declining profitability or operational bottlenecks. The researcher’s choice of actions must be based on theory, which should explain why and how such actions may cause the desired change. The researcher then observes the results of that action, modifying it as necessary, while simultaneously learning from the action and generating theoretical insights about the target problem and interventions. The initial theory is validated by the extent to which the chosen action successfully solves the target problem. Simultaneous problem solving and insight generation is the central feature that distinguishes action research from all other research methods, and hence, action research is an excellent method for bridging research and practice. This method is also suited for studying unique social problems that cannot be replicated outside that context, but it is also subject to researcher bias and subjectivity, and the generalisability of findings is often restricted to the context where the study was conducted.

Ethnography is an interpretive research design inspired by anthropology that emphasises that a research phenomenon must be studied within the context of its culture. The researcher is deeply immersed in a certain culture over an extended period of time—eight months to two years—and during that period engages with, observes, and records the daily life of the studied culture, and theorises about the evolution and behaviours in that culture. Data is collected primarily via observational techniques, formal and informal interaction with participants in that culture, and personal field notes, while data analysis involves ‘sense-making’. The researcher must narrate her experience in great detail so that readers may experience that same culture without necessarily being there. The advantages of this approach are its sensitivity to context, the rich and nuanced understanding it generates, and minimal respondent bias. However, it is also an extremely time- and resource-intensive approach, and findings are specific to a given culture and less generalisable to other cultures.

Selecting research designs

Given the above multitude of research designs, which design should researchers choose? Generally speaking, researchers tend to select those research designs that they are most comfortable with and feel most competent to handle, but ideally the choice should depend on the nature of the research phenomenon being studied. In the preliminary phases of research, when the research problem is unclear and the researcher wants to scope out the nature and extent of a certain research problem, a focus group (for an individual unit of analysis) or a case study (for an organisational unit of analysis) is an ideal strategy for exploratory research. As one delves further into the research domain, but finds that there are no good theories to explain the phenomenon of interest and wants to build a theory to fill the gap in that area, interpretive designs such as case research or ethnography may be useful. If competing theories exist and the researcher wishes to test these different theories or integrate them into a larger theory, positivist designs such as experimental design, survey research, or secondary data analysis are more appropriate.

Regardless of the specific research design chosen, the researcher should strive to collect quantitative and qualitative data using a combination of techniques such as questionnaires, interviews, observations, documents, or secondary data. For instance, even in a highly structured survey questionnaire intended to collect quantitative data, the researcher may leave room for a few open-ended questions to collect qualitative data that may generate unexpected insights not otherwise available from structured quantitative data alone. Likewise, while case research employs mostly face-to-face interviews to collect qualitative data, the potential value of collecting quantitative data should not be ignored. As an example, in a study of organisational decision-making processes, the case interviewer can record numeric quantities such as how many months it took to make certain organisational decisions, how many people were involved in the decision process, and how many decision alternatives were considered, which can provide valuable insights not otherwise available from interviewees’ narrative responses. Irrespective of the specific research design employed, the goal of the researcher should be to collect as much and as diverse data as possible to help generate the best possible insights about the phenomenon of interest.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


USC Libraries Research Guides

Organizing Your Social Sciences Research Paper

Types of Research Designs

Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are supported.

The research design is usually incorporated into the introduction of your paper. You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods. The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle: initially an exploratory stance is adopted, in which an understanding of the problem is developed and plans are made for some form of interventionist strategy. The intervention is then carried out [the "action" in action research], during which time pertinent observations are collected in various forms. New interventional strategies are carried out, and this cyclic process repeats until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What these studies don't tell you?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research . Thousand Oaks, CA:  Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide . New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research . London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

Case Study Design

Definition and Purpose

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

What do these studies tell you?

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
What these studies don't tell you?

  • A single or small number of cases offers little basis for establishing reliability or for generalizing the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can only apply to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges . Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J. , Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research . Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research . Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Theory . Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Definition and Purpose

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.
What do these studies tell you?

  • Causality research designs assist researchers in understanding why the world works the way it does through the process of testing for a causal link between variables and by eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence that the study has internal validity due to the systematic subject selection and equity of groups being compared.
What these studies don't tell you?

  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of Winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • If two variables are correlated, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the actual effect.

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing . Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kubn. “Causal-Comparative Design.” In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Definition and Purpose

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population who are united by some commonality or similarity relevant to the research problem being investigated. Using a quantitative framework, a cohort study makes note of statistical occurrence within this specialized subgroup, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Date of entry and exit from the study is individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
What do these studies tell you?

  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos; you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
What these studies don't tell you?

  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its internal validity is lower than that of study designs where the researcher randomly assigns participants.
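Because participants in an open cohort enter and leave at different times, outcomes are expressed per unit of person-time rather than per fixed population. A minimal sketch of an incidence-rate calculation (the follow-up records are hypothetical):

```python
# Hypothetical open-cohort follow-up records: each participant was
# followed for a different length of time, so outcomes are expressed
# per unit of person-time rather than per fixed population.
cohort = [
    (2.0, False), (5.5, True), (1.2, False), (4.0, True),
    (3.3, False), (6.0, False), (2.5, True), (4.8, False),
]  # (years_followed, developed_outcome)

cases = sum(1 for _, outcome in cohort if outcome)
person_years = sum(years for years, _ in cohort)
incidence_rate = cases / person_years   # cases per person-year at risk
```

In a closed cohort, where everyone enters at the same defining point in time, a simple cumulative incidence (cases divided by the starting population) can be used instead.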

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Definition and Purpose

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and groups selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences among people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

What do these studies tell you?

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
What these studies don't tell you?

  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis, so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.
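Estimating prevalence from a single snapshot is the characteristic cross-sectional computation. A minimal sketch (invented respondent records; the condition and age groups are hypothetical):

```python
from fractions import Fraction

# Hypothetical cross-sectional sample: one record per respondent,
# (has_condition, age_group), all measured at a single point in time.
sample = [
    (True, "18-39"), (False, "18-39"), (False, "18-39"), (True, "40-64"),
    (True, "40-64"), (False, "40-64"), (False, "65+"), (True, "65+"),
]

# Prevalence: the proportion with the outcome at the moment of measurement.
prevalence = Fraction(sum(1 for has, _ in sample if has), len(sample))

# Existing differences between groups can be described, but the design
# says nothing about change over time or cause and effect.
by_group = {}
for has, group in sample:
    n, k = by_group.get(group, (0, 0))
    by_group[group] = (n + 1, k + int(has))   # (group size, cases)
```

Note what the snapshot cannot yield: incidence (new cases over time) requires the follow-up of a cohort design.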

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009. Cross-Sectional Study. Wikipedia.

Descriptive Design

Definition and Purpose

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

What do these studies tell you?

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, descriptive designs can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
  • The results from a descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis . (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta, and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; K. Swatzell and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities . London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.
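The randomization that defines a true experiment can be sketched in a few lines. The following Python snippet is purely illustrative (the participant IDs and the fixed seed are invented for the example); it simply shuffles a participant list and splits it evenly into treatment and control groups:

```python
import random

def randomize(participants, seed=42):
    """Randomly split participants into treatment and control groups.

    A minimal sketch: real trials often use stratified or blocked
    randomization to balance covariates across groups.
    """
    rng = random.Random(seed)          # fixed seed so the allocation is reproducible
    shuffled = participants[:]         # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (treatment, control)

# Hypothetical participant pool of 20 people
treatment, control = randomize([f"P{i:02d}" for i in range(1, 21)])
```

Because assignment is random, any systematic difference later measured on the dependent variable can be attributed to the manipulated independent variable rather than to pre-existing group differences.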

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods . Nicholas Walliman, editor. (London, England: Sage, 2006), pp. 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences . 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. Slideshare presentation.

Exploratory Design

An exploratory design is conducted for a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation, or it is undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

Exploratory research is intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • A well-grounded picture of the situation under study.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits the ability to draw definitive conclusions; findings provide insight rather than proof.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while they inhabit their natural environment, as opposed to using survey instruments or other forms of impersonal methods of data gathering. Information acquired from observational research takes the form of “field notes” that involve documenting what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need for a large sample size and accurate sampling to achieve representativeness.
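The core bookkeeping of a panel study, repeated measurements on the same subjects across waves, can be illustrated with a small Python sketch (the subject IDs and scores below are hypothetical, not drawn from any named study):

```python
def wave_changes(panel):
    """Compute each subject's change between successive measurement waves.

    `panel` maps a subject ID to a list of scores, one per wave, for the
    same individuals followed over time.
    """
    return {
        sid: [later - earlier for earlier, later in zip(scores, scores[1:])]
        for sid, scores in panel.items()
    }

# Two subjects measured at three waves
panel = {"S1": [10, 12, 15], "S2": [8, 8, 11]}
changes = wave_changes(panel)   # {'S1': [2, 3], 'S2': [0, 3]}
```

Because the same individuals appear in every wave, each change score is a within-subject difference, which is what lets longitudinal designs describe the direction and magnitude of change over time.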

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of the findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations in defining the criteria used for content analysis can lead to difficult-to-interpret or meaningless findings.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
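The pooling step at the heart of a meta-analysis can be sketched numerically. The following Python snippet implements simple fixed-effect, inverse-variance weighting on made-up effect sizes and variances; a real synthesis would also assess heterogeneity (e.g., Q and I² statistics) and often use a random-effects model when studies differ:

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance (fixed-effect) pooling of study effect sizes.

    Each study is weighted by its precision (1 / variance), so larger,
    more precise studies pull the pooled estimate toward their results.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))                # SE of the pooled effect
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)     # 95% confidence interval
    return pooled, se, ci

# Three hypothetical studies: effect sizes and their sampling variances
pooled, se, ci = fixed_effect_pool([0.30, 0.45, 0.25], [0.04, 0.09, 0.02])
```

Note how the combined standard error shrinks below any single study's, which is exactly the gain in precision that motivates pooling in the first place.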

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis . 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior , Volume 9. (Greenwich, CT: JAI Press, 1987), pp. 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis . Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis . Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation . Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences . Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research . Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice . New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakkori, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhang, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws conclusions by comparing subjects against a control group in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors over and over again is a time-consuming task, and observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods . Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research . The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies . New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, on what do knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists . (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences . London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research . 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

  • The researcher has limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.
  • The sampling method is not representative of the entire population. The only possibility of approaching representativeness is when the researcher chooses to use a sample size large enough to represent a significant portion of the entire population. In this case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.
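The serial sample-analyze-repeat logic of a sequential design can be sketched as a toy loop. In this Python illustration the batch size, precision threshold, and simulated measurement process are all invented for the example; the point is only that analysis after each batch decides whether more sampling is needed:

```python
import random
import statistics

def sequential_mean(draw, batch=25, max_batches=40, precision=2.0, seed=1):
    """Sample in successive batches, re-analyzing after each one, and stop
    once the running estimate is precise enough.

    `draw` stands in for whatever measurement process the study uses.
    """
    rng = random.Random(seed)
    data = []
    for _ in range(max_batches):
        data.extend(draw(rng) for _ in range(batch))
        se = statistics.stdev(data) / len(data) ** 0.5
        if 1.96 * se < precision:      # stop when the 95% CI half-width is small enough
            break
    return statistics.mean(data), len(data)

# Simulated measurements drawn from a population with mean 100, SD 15
mean, n = sequential_mean(lambda rng: rng.gauss(100, 15))
```

Because each sample's results are known before the next sample is taken, the stopping rule (and, in practice, the sampling method itself) can be tuned as the study proceeds, which is the continuous-improvement property noted above.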

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research . Abbas Tashakkori and Charles Teddle, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Nataliya V. Ivankova. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses; recognize and avoid hidden problems in prior studies; and explain inconsistencies and conflicts in the data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provides reliable estimates of the effects of interventions and evaluations related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] provides the broadest possible basis for analyzing and interpreting research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated to the general population with more validity than most other types of studies.
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone a rigorous peer-review process. Examples include conference presentations or proceedings, publications from government agencies, white papers, working papers, internal documents from organizations, and doctoral dissertations and Master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods. David A. Buchanan and Alan Bryman, editors. (Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians. Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, and James Thomas, editors. Introduction to Systematic Reviews. 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. "Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare." Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. "Five Steps to Conducting a Systematic Review." Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. "Systematic Reviews: Rationale for Systematic Reviews." BMJ 309 (September 1994): 597; O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research." Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. "Publication Bias: The Achilles' Heel of Systematic Reviews?" British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews. New York: Continuum, 2003.

  • Last Updated: Mar 21, 2024 9:59 AM
  • URL: https://libguides.usc.edu/writingguide



How to Construct a Mixed Methods Research Design

Judith Schoonenboom

1 Institut für Bildungswissenschaft, Universität Wien, Sensengasse 3a, 1090 Wien, Austria

R. Burke Johnson

2 Department of Professional Studies, University of South Alabama, UCOM 3700, 36688-0002 Mobile, AL USA

This article provides researchers with knowledge of how to design a high quality mixed methods research study. To design a mixed study, researchers must understand and carefully consider each of the dimensions of mixed methods design, and always keep an eye on the issue of validity. We explain the seven major design dimensions: purpose, theoretical drive, timing (simultaneity and dependency), point of integration, typological versus interactive design approaches, planned versus emergent design, and design complexity. There also are multiple secondary dimensions that need to be considered during the design process. We explain ten secondary dimensions of design to be considered for each research study. We also provide two case studies showing how the mixed designs were constructed.


What is a mixed methods design?

This article addresses the process of selecting and constructing mixed methods research (MMR) designs. The word “design” has at least two distinct meanings in mixed methods research (Maxwell 2013 ). One meaning focuses on the process of design; in this meaning, design is often used as a verb. Someone can be engaged in designing a study (in German: “eine Studie konzipieren” or “eine Studie designen”). Another meaning is that of a product, namely the result of designing. The result of designing as a verb is a mixed methods design as a noun (in German: “das Forschungsdesign” or “Design”), as it has, for example, been described in a journal article. In mixed methods design, both meanings are relevant. To obtain a strong design as a product, one needs to carefully consider a number of rules for designing as an activity. Obeying these rules is not a guarantee of a strong design, but it does contribute to it. A mixed methods design is characterized by the combination of at least one qualitative and one quantitative research component. For the purpose of this article, we use the following definition of mixed methods research (Johnson et al. 2007 , p. 123):

Mixed methods research is the type of research in which a researcher or team of researchers combines elements of qualitative and quantitative research approaches (e. g., use of qualitative and quantitative viewpoints, data collection, analysis, inference techniques) for the broad purposes of breadth and depth of understanding and corroboration.

Mixed methods research (“Mixed Methods” or “MM”) is the sibling of multimethod research (“Methodenkombination”) in which either solely multiple qualitative approaches or solely multiple quantitative approaches are combined.

In a commonly used mixed methods notation system (Morse 1991 ), the components are indicated as qual and quan (or QUAL and QUAN to emphasize primacy), respectively, for qualitative and quantitative research. As discussed below, plus (+) signs refer to concurrent implementation of components (“gleichzeitige Durchführung der Teilstudien” or “paralleles Mixed Methods-Design”) and arrows (→) refer to sequential implementation of components (“sequenzielle Durchführung der Teilstudien” or “sequenzielles Mixed Methods-Design”). Note that each research tradition receives an equal number of letters (four) in its abbreviation for equity. This notation system is used throughout this article.
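As a rough illustration of how compactly the notation encodes a design, the helper below parses a two-component Morse-style string into its tradition, role, and timing. The function and its output format are our own invention for this sketch, not part of any standard tool:

```python
def parse_design(notation):
    """Parse a Morse-style notation string such as 'QUAL → quan' or
    'quan + QUAL' into its components and timing (two-component case).

    Hypothetical helper for illustration only.
    """
    if "→" in notation:
        timing, sep = "sequential", "→"
    elif "+" in notation:
        timing, sep = "concurrent", "+"
    else:
        raise ValueError("expected '+' (concurrent) or '→' (sequential)")
    components = []
    for part in notation.split(sep):
        name = part.strip()
        components.append({
            "tradition": "qualitative" if name.lower() == "qual" else "quantitative",
            # Capital letters mark the primary (core) component.
            "role": "core" if name.isupper() else "supplemental",
        })
    return {"timing": timing, "components": components}

design = parse_design("QUAL → quan")
print(design["timing"])                 # sequential
print(design["components"][0]["role"])  # core
```

The example shows that "QUAL → quan" simultaneously encodes two design decisions discussed later in the article: the sequence of the components and which component carries the theoretical drive.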

A mixed methods design as a product has several primary characteristics that should be considered during the design process. As shown in Table  1 , the following primary design “dimensions” are emphasized in this article: purpose of mixing, theoretical drive, timing, point of integration, typological use, and degree of complexity. These characteristics are discussed below. We also provide some secondary dimensions to consider when constructing a mixed methods design (Johnson and Christensen 2017 ).

Table 1: List of Primary and Secondary Design Dimensions

On the basis of these dimensions, mixed methods designs can be classified into a mixed methods typology or taxonomy. In the mixed methods literature, various typologies of mixed methods designs have been proposed (for an overview see Creswell and Plano Clark 2011 , p. 69–72).

The overall goal of mixed methods research, of combining qualitative and quantitative research components, is to expand and strengthen a study’s conclusions and, therefore, contribute to the published literature. In all studies, the use of mixed methods should contribute to answering one’s research questions.

Ultimately, mixed methods research is about heightened knowledge and validity. The design as a product should be of sufficient quality to achieve multiple validities legitimation (Johnson and Christensen 2017 ; Onwuegbuzie and Johnson 2006 ), which refers to the mixed methods research study meeting the relevant combination or set of quantitative, qualitative, and mixed methods validities in each research study.

Given this goal of answering the research question(s) with validity, a researcher can nevertheless have various reasons or purposes for wanting to strengthen the research study and its conclusions. Following is the first design dimension for one to consider when designing a study: Given the research question(s), what is the purpose of the mixed methods study?

A popular classification of purposes of mixed methods research was first introduced in 1989 by Greene, Caracelli, and Graham, based on an analysis of published mixed methods studies. This classification is still in use (Greene 2007 ). Greene et al. ( 1989 , p. 259) distinguished the following five purposes for mixing in mixed methods research:

1.  Triangulation seeks convergence, corroboration, correspondence of results from different methods; 2.  Complementarity seeks elaboration, enhancement, illustration, clarification of the results from one method with the results from the other method; 3.  Development seeks to use the results from one method to help develop or inform the other method, where development is broadly construed to include sampling and implementation, as well as measurement decisions; 4.  Initiation seeks the discovery of paradox and contradiction, new perspectives of frameworks, the recasting of questions or results from one method with questions or results from the other method; 5.  Expansion seeks to extend the breadth and range of inquiry by using different methods for different inquiry components.

In the past 28 years, this classification has been supplemented by several others. On the basis of a review of the reasons for combining qualitative and quantitative research mentioned by the authors of mixed methods studies, Bryman ( 2006 ) formulated a list of more concrete rationales for performing mixed methods research (see Appendix). Bryman’s classification breaks down Greene et al.’s ( 1989 ) categories into several aspects, and he adds a number of additional aspects, such as the following:

(a)  Credibility – refers to suggestions that employing both approaches enhances the integrity of findings. (b)  Context – refers to cases in which the combination is justified in terms of qualitative research providing contextual understanding coupled with either generalizable, externally valid findings or broad relationships among variables uncovered through a survey. (c)  Illustration – refers to the use of qualitative data to illustrate quantitative findings, often referred to as putting “meat on the bones” of “dry” quantitative findings. (d)  Utility or improving the usefulness of findings – refers to a suggestion, which is more likely to be prominent among articles with an applied focus, that combining the two approaches will be more useful to practitioners and others. (e)  Confirm and discover – this entails using qualitative data to generate hypotheses and using quantitative research to test them within a single project. (f)  Diversity of views – this includes two slightly different rationales – namely, combining researchers’ and participants’ perspectives through quantitative and qualitative research respectively, and uncovering relationships between variables through quantitative research while also revealing meanings among research participants through qualitative research. (Bryman, p. 106)

Views can be diverse (f) in various ways. Some examples of mixed methods design that include a diversity of views are:

  • Iteratively/sequentially connecting local/idiographic knowledge with national/general/nomothetic knowledge;
  • Learning from different perspectives on teams and in the field and literature;
  • Achieving multiple participation, social justice, and action;
  • Determining what works for whom and the relevance/importance of context;
  • Producing interdisciplinary substantive theory, including/comparing multiple perspectives and data regarding a phenomenon;
  • Juxtaposition-dialogue/comparison-synthesis;
  • Breaking down binaries/dualisms (some of both);
  • Explaining interaction between/among natural and human systems;
  • Explaining complexity.

The number of possible purposes for mixing is very large and is increasing; hence, it is not possible to provide an exhaustive list. Greene et al.’s ( 1989 ) purposes, Bryman’s ( 2006 ) rationales, and our examples of a diversity of views were formulated as classifications on the basis of examination of many existing research studies. They indicate how the qualitative and quantitative research components of a study relate to each other. These purposes can be used post hoc to classify research or a priori in the design of a new study. When designing a mixed methods study, it is sometimes helpful to list the purpose in the title of the study design.

The key point of this section is for the researcher to begin a study with at least one research question and then carefully consider what the purposes for mixing are. One can use mixed methods to examine different aspects of a single research question, or one can use separate but related qualitative and quantitative research questions. In all cases, the mixing of methods, methodologies, and/or paradigms will help answer the research questions and make improvements over a more basic study design. Fuller and richer information will be obtained in the mixed methods study.

Theoretical drive

In addition to a mixing purpose, a mixed methods research study might have an overall “theoretical drive” (Morse and Niehaus 2009 ). When designing a mixed methods study, it is occasionally helpful to list the theoretical drive in the title of the study design. An investigation, in Morse and Niehaus’s ( 2009 ) view, is focused primarily on either exploration-and-description or on testing-and-prediction. In the first case, the theoretical drive is called “inductive” or “qualitative”; in the second case, it is called “deductive” or “quantitative”. In the case of mixed methods, the component that corresponds to the theoretical drive is referred to as the “core” component (“Kernkomponente”), and the other component is called the “supplemental” component (“ergänzende Komponente”). In Morse’s notation system, the core component is written in capitals and the supplemental component is written in lowercase letters. For example, in a QUAL → quan design, more weight is attached to the data coming from the core qualitative component. Due to the decisive character of the core component, the core component must be able to stand on its own, and should be implemented rigorously. The supplemental component does not have to stand on its own.

Although this distinction is useful in some circumstances, we do not advise applying it to every mixed methods design. First, Morse and Niehaus contend that the supplemental component can be done “less rigorously” but do not explain which aspects of rigor can be dropped. In addition, the idea of decreased rigor conflicts with one key theme of the present article, namely that mixed methods designs should always meet the criterion of multiple validities legitimation (Onwuegbuzie and Johnson 2006 ).

The idea of theoretical drive as explicated by Morse and Niehaus has been criticized. For example, we view a theoretical drive as a feature not of a whole study, but of a research question, or, more precisely, of an interpretation of a research question. For example, if one study includes multiple research questions, it might include several theoretical drives (Schoonenboom 2016 ).

Another criticism of Morse and Niehaus’ conceptualization of theoretical drive is that it does not allow for equal-status mixed methods research (“Mixed Methods Forschung, bei der qualitative und quantitative Methoden die gleiche Bedeutung haben” or “gleichrangige Mixed Methods-Designs”), in which both the qualitative and quantitative component are of equal value and weight; this same criticism applies to Morgan’s ( 2014 ) set of designs. We agree with Greene ( 2015 ) that mixed methods research can be integrated at the levels of method, methodology, and paradigm. In this view, equal-status mixed methods research designs are possible, and they result when both the qualitative and the quantitative components, approaches, and thinking are of equal value, they take control over the research process in alternation, they are in constant interaction, and the outcomes they produce are integrated during and at the end of the research process. Therefore, equal-status mixed methods research (that we often advocate) is also called “interactive mixed methods research”.

Mixed methods research can have three different drives, as formulated by Johnson et al. ( 2007 , p. 123):

Qualitative dominant [or qualitatively driven] mixed methods research is the type of mixed research in which one relies on a qualitative, constructivist-poststructuralist-critical view of the research process, while concurrently recognizing that the addition of quantitative data and approaches are likely to benefit most research projects. Quantitative dominant [or quantitatively driven] mixed methods research is the type of mixed research in which one relies on a quantitative, postpositivist view of the research process, while concurrently recognizing that the addition of qualitative data and approaches are likely to benefit most research projects. (p. 124) The area around the center of the [qualitative-quantitative] continuum, equal status , is the home for the person that self-identifies as a mixed methods researcher. This researcher takes as his or her starting point the logic and philosophy of mixed methods research. These mixed methods researchers are likely to believe that qualitative and quantitative data and approaches will add insights as one considers most, if not all, research questions.

We leave it to the reader to decide if he or she desires to conduct a qualitatively driven study, a quantitatively driven study, or an equal-status/“interactive” study. According to the philosophies of pragmatism (Johnson and Onwuegbuzie 2004 ) and dialectical pluralism (Johnson 2017 ), interactive mixed methods research is very much a possibility. By successfully conducting an equal-status study, the pragmatist researcher shows that paradigms can be mixed or combined, and that the incompatibility thesis does not always apply to research practice. Equal status research is most easily conducted when a research team is composed of qualitative, quantitative, and mixed researchers, interacts continually, and conducts a study to address one superordinate goal.

Timing: simultaneity and dependence

Another important distinction when designing a mixed methods study relates to the timing of the two (or more) components. When designing a mixed methods study, it is usually helpful to include the word “concurrent” (“parallel”) or “sequential” (“sequenziell”) in the title of the study design; a complex design can be partially concurrent and partially sequential. Timing has two aspects: simultaneity and dependence (Guest 2013 ).

Simultaneity (“Simultanität”) forms the basis of the distinction between concurrent and sequential designs. In a sequential design, the quantitative component precedes the qualitative component, or vice versa. In a concurrent design, both components are executed (almost) simultaneously. In the notation of Morse ( 1991 ), concurrence is indicated by a “+” between components (e. g., QUAL + quan), while sequentiality is indicated with a “→” (QUAL → quan). Note that the use of capital letters for one component and lower case letters for another component in the same design suggests that one component is primary and the other is secondary or supplemental.

Some designs are sequential by nature. For example, in a  conversion design, qualitative categories and themes might be first obtained by collection and analysis of qualitative data, and then subsequently quantitized (Teddlie and Tashakkori 2009 ). Likewise, with Greene et al.’s ( 1989 ) initiation purpose, the initiation strand follows the unexpected results that it is supposed to explain. In other cases, the researcher has a choice. It is possible, e. g., to collect interview data and survey data of one inquiry simultaneously; in that case, the research activities would be concurrent. It is also possible to conduct the interviews after the survey data have been collected (or vice versa); in that case, research activities are performed sequentially. Similarly, a study with the purpose of expansion can be designed in which data on an effect and the intervention process are collected simultaneously, or they can be collected sequentially.

A second aspect of timing is dependence (“Abhängigkeit”). We call two research components dependent if the implementation of the second component depends on the results of data analysis in the first component. Two research components are independent if their implementation does not depend on the results of data analysis in the other component. Often, a researcher has a choice to perform data analysis independently or not. A researcher could analyze interview data and questionnaire data of one inquiry independently; in that case, the research activities would be independent. It is also possible to let the interview questions depend upon the outcomes of the analysis of the questionnaire data (or vice versa); in that case, research activities are performed dependently. Similarly, the empirical outcome/effect and process in a study with the purpose of expansion might be investigated independently, or the process study might take the effect/outcome as given (dependent).

In the mixed methods literature, the distinction between sequential and concurrent usually refers to the combination of concurrent/independent and sequential/dependent, and to the combination of data collection and data analysis. It is said that in a concurrent design, the data collection and data analysis of both components occurs (almost) simultaneously and independently, while in a sequential design, the data collection and data analysis of one component take place after the data collection and data analysis of the other component and depends on the outcomes of the other component.

In our opinion, simultaneity and dependence are two separate dimensions. Simultaneity indicates whether data collection is done concurrently or sequentially. Dependence indicates whether the implementation of one component depends upon the results of data analysis of the other component. As we will see in the example case studies, a concurrent design could include dependent data analysis, and a sequential design could include independent data analysis. It is conceivable that one simultaneously conducts interviews and collects questionnaire data (concurrent), while allowing the analysis focus of the interviews to depend on what emerges from the survey data (dependence).

Dependent research activities include a redirection of subsequent research inquiry. Using the outcomes of the first research component, the researcher decides what to do in the second component. Depending on the outcomes of the first research component, the researcher will do something else in the second component. If this is so, the research activities involved are said to be sequential-dependent, and any component preceded by another component should appropriately build on the previous component (see sequential validity legitimation ; Johnson and Christensen 2017 ; Onwuegbuzie and Johnson 2006 ).

It is at the researcher's discretion to determine whether a concurrent-dependent design, a concurrent-independent design, a sequential-dependent design, or a sequential-independent design is needed to answer a particular research question or set of research questions in a given situation.
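Because simultaneity and dependence vary independently, they yield a 2 × 2 grid of timing types. The trivial sketch below makes that grid explicit; the function name and labels are ours, not terms from the mixed methods literature:

```python
def classify_timing(simultaneity: str, dependence: str) -> str:
    """Combine the two independent timing dimensions into a design label.

    simultaneity: 'concurrent' or 'sequential'
    dependence:   'dependent' or 'independent'
    """
    assert simultaneity in ("concurrent", "sequential")
    assert dependence in ("dependent", "independent")
    return f"{simultaneity}-{dependence}"

# All four combinations are legitimate designs, including the two
# less-discussed ones: concurrent-dependent and sequential-independent.
for s in ("concurrent", "sequential"):
    for d in ("dependent", "independent"):
        print(classify_timing(s, d))
```

Treating the two dimensions as a grid rather than a single concurrent-versus-sequential axis is precisely the point of this section: the conventional usage collapses four distinct possibilities into two.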

Point of integration

Each true mixed methods study has at least one “point of integration” – called the “point of interface” by Morse and Niehaus ( 2009 ) and Guest ( 2013 ) –, at which the qualitative and quantitative components are brought together. Having one or more points of integration is the distinguishing feature of a design based on multiple components. It is at this point that the components are “mixed”, hence the label “mixed methods designs”. The term “mixing”, however, is misleading, as the components are not simply mixed, but have to be integrated very carefully.

Determining where the point of integration will be, and how the results will be integrated, is an important, if not the most important, decision in the design of mixed methods research. Morse and Niehaus ( 2009 ) identify two possible points of integration: the results point of integration and the analytical point of integration.

Most commonly, integration takes place in the results point of integration . At some point in writing down the results of the first component, the results of the second component are added and integrated. A  joint display (listing the qualitative and quantitative findings and an integrative statement) might be used to facilitate this process.
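A joint display can be mocked up as a simple list of rows, each pairing a quantitative finding with the qualitative finding on the same topic plus an integrative statement. The findings below are invented placeholders, not results from any actual study:

```python
# Minimal joint display sketch: topic | quantitative | qualitative | integration.
# All entries are fabricated examples for illustration only.
joint_display = [
    {
        "topic": "Student motivation",
        "quantitative": "Mean motivation score 4.1/5 (n = 120)",
        "qualitative": "Interviewees describe strong personal interest",
        "integration": "Convergent: both components indicate high motivation",
    },
    {
        "topic": "Workload",
        "quantitative": "68% rate workload as manageable",
        "qualitative": "Several interviewees report end-of-term overload",
        "integration": "Divergent: follow-up needed on timing of workload",
    },
]

for row in joint_display:
    print(f"{row['topic']}: {row['integration']}")
```

The integration column is what distinguishes a joint display from two results sections printed side by side: each row forces an explicit statement about convergence or divergence.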

In the case of an analytical point of integration , a first analytical stage of a qualitative component is followed by a second analytical stage, in which the topics identified in the first analytical stage are quantitized. The results of the qualitative component ultimately, and before writing down the results of the analytical phase as a whole, become quantitative; qualitizing also is a possible strategy, which would be the converse of this.
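In its simplest form, quantitizing converts qualitative codes into counts and proportions that can enter a quantitative analysis. A minimal sketch with invented codes:

```python
from collections import Counter

# Quantitizing sketch: qualitative codes assigned to interview segments
# are converted into frequencies. Codes and data are invented examples.
coded_segments = [
    "time_pressure", "peer_support", "time_pressure",
    "autonomy", "peer_support", "time_pressure",
]

counts = Counter(coded_segments)
total = len(coded_segments)
proportions = {code: round(n / total, 2) for code, n in counts.items()}
print(proportions)  # {'time_pressure': 0.5, 'peer_support': 0.33, 'autonomy': 0.17}
```

The resulting proportions can then be written up as quantitative results, which is exactly the analytical point of integration described above: the component begins qualitative and becomes quantitative before the results are reported.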

Other authors assume more than two possible points of integration. Teddlie and Tashakkori ( 2009 ) distinguish four different stages of an investigation: the conceptualization stage, the experiential stage of methodological operations (data collection), the experiential stage of analytical operations (data analysis), and the inferential stage. According to these authors, mixing is possible in all four stages, and thus all four stages are potential points of integration.

However, the four possible points of integration used by Teddlie and Tashakkori ( 2009 ) are still too coarse to distinguish some types of mixing. Mixing in the experiential stage can take many different forms, for example the use of cognitive interviews to improve a questionnaire (tool development), or selecting people for an interview on the basis of the results of a questionnaire (sampling). Extending the definition by Guest ( 2013 ), we define the point of integration as “any point in a study where two or more research components are mixed or connected in some way”. Then, the point of integration in the two examples of this paragraph can be defined more accurately as “instrument development”, and “development of the sample”.

It is at the point of integration that qualitative and quantitative components are integrated. Some primary ways that the components can be connected to each other are as follows:

(1) merging the two data sets, (2) connecting from the analysis of one set of data to the collection of a second set of data, (3) embedding of one form of data within a larger design or procedure, and (4) using a framework (theoretical or program) to bind together the data sets (Creswell and Plano Clark 2011 , p. 76).

More generally, one can consider mixing at any or all of the following research components: purposes, research questions, theoretical drive, methods, methodology, paradigm, data, analysis, and results. One can also include mixing views of different researchers, participants, or stakeholders. The creativity of the mixed methods researcher designing a study is extensive.

Substantively, it can be useful to think of integration or mixing as comparing and bringing together two (or more) components on the basis of one or more of the purposes set out in the first section of this article. For example, it is possible to use qualitative data to illustrate a quantitative effect, or to determine whether the qualitative and the quantitative component yield convergent results (triangulation). An integrated result could also consist of a combination of a quantitatively established effect and a qualitative description of the underlying process. In the case of development, integration consists of an adjustment of an (often quantitative) instrument, model, or interpretation, based on qualitative assessments by members of the target group.

A special case is the integration of divergent results. The power of mixed methods research is its ability to deal with diversity and divergence. In the literature, we find two kinds of strategies for dealing with divergent results. A first set of strategies takes the detected divergence as the starting point for further analysis, with the aim to resolve the divergence. One possibility is to carry out further research (Cook 1985 ; Greene and Hall 2010 ). Further research is not always necessary. One can also look for a more comprehensive theory, which is able to account for both the results of the first component and the deviating results of the second component. This is a form of abduction (Erzberger and Prein 1997 ).

A fruitful starting point in trying to resolve divergence through abduction is to determine which component has resulted in a finding that is somehow expected, logical, and/or in line with existing research. The results of this research component, called the “sense” (“Lesart”), are subsequently compared to the results of the other component, called the “anti-sense” (“alternative Lesart”), which are considered dissonant, unexpected, and/or contrary to what had been found in the literature. The aim is to develop an overall explanation that fits both the sense and the anti-sense (Bazeley and Kemp 2012; Mendlinger and Cwikel 2008). Finally, a reanalysis of the data can sometimes lead to resolving divergence (Creswell and Plano Clark 2011).

Alternatively, one can question the existence of the encountered divergence. In this regard, Mathison (1988) recommends determining whether deviating results shown by the data can be explained by knowledge about the research and/or knowledge of the social world. Differences between results from different data sources could also be the result of properties of the methods involved, rather than reflect differences in reality (Yanchar and Williams 2006). In general, the conclusions of the individual components can be subjected to an inference quality audit (Teddlie and Tashakkori 2009), in which the researcher investigates the strength of each of the divergent conclusions. We recommend that researchers first determine whether there is “real” divergence, using the strategies mentioned in this paragraph. Next, an attempt can be made to resolve cases of “true” divergence, using one or more of the methods mentioned in the previous paragraphs.

Design typology utilization

As already mentioned in Sect. 1, mixed methods designs can be classified into a mixed methods typology or taxonomy. A typology serves several purposes, including the following: guiding practice, legitimizing the field, generating new possibilities, and serving as a useful pedagogical tool (Teddlie and Tashakkori 2009). Note, however, that not all types of typologies are equally suitable for all purposes. For generating new possibilities, one will need a more exhaustive typology, while a useful pedagogical tool might be better served by a non-exhaustive overview of the most common mixed methods designs. Although some of the current MM design typologies include more designs than others, none of the current typologies is fully exhaustive. When designing a mixed methods study, it is often useful to borrow its name from an existing typology, or to construct a clear and descriptive name when your design is based on a modification of one or more of the existing designs.

Various typologies of mixed methods designs have been proposed. Creswell and Plano Clark's (2011) typology of some “commonly used designs” includes six “major mixed methods designs”. Our summary of these designs runs as follows:

  • Convergent parallel design (“paralleles Design”) (the quantitative and qualitative strands of the research are performed independently, and their results are brought together in the overall interpretation),
  • Explanatory sequential design (“explanatives Design”) (a first phase of quantitative data collection and analysis is followed by the collection of qualitative data, which are used to explain the initial quantitative results),
  • Exploratory sequential design (“exploratives Design”) (a first phase of qualitative data collection and analysis is followed by the collection of quantitative data to test or generalize the initial qualitative results),
  • Embedded design (“Einbettungs-Design”) (in a traditional qualitative or quantitative design, a strand of the other type is added to enhance the overall design),
  • Transformative design (“politisch-transformatives Design”) (a transformative theoretical framework, e. g. feminism or critical race theory, shapes the interaction, priority, timing and mixing of the qualitative and quantitative strand),
  • Multiphase design (“Mehrphasen-Design”) (more than two phases or both sequential and concurrent strands are combined over a period of time within a program of study addressing an overall program objective).

Most of their designs presuppose a specific juxtaposition of the qualitative and quantitative component. Note that the last design is a complex type that is required in many mixed methods studies.

The following are our adapted definitions of Teddlie and Tashakkori's (2009) five sets of mixed methods research designs (adapted from Teddlie and Tashakkori 2009, p. 151):

  • Parallel mixed designs (“paralleles Mixed-Methods-Design”) – In these designs, one has two or more parallel quantitative and qualitative strands, either with some minimal time lapse or simultaneously; the strand results are integrated into meta-inferences after separate analyses are conducted; related QUAN and QUAL research questions are answered, or aspects of the same mixed research question are addressed.
  • Sequential mixed designs (“sequenzielles Mixed-Methods-Design”) – In these designs, QUAL and QUAN strands occur across chronological phases, and the procedures/questions of the later strand emerge from, depend on, or build on the previous strand; the research questions are interrelated and sometimes evolve during the study.
  • Conversion mixed designs (“Transfer-Design” or “Konversionsdesign”) – In these parallel designs, mixing occurs when one type of data is transformed to the other type and then analyzed, and the additional findings are added to the results; this design answers related aspects of the same research question.
  • Multilevel mixed designs (“Mehrebenen-Mixed-Methods-Design”) – In these parallel or sequential designs, mixing occurs across multiple levels of analysis, as QUAN and QUAL data are analyzed and integrated to answer related aspects of the same research question or related questions.
  • Fully integrated mixed designs (“voll integriertes Mixed-Methods-Design”) – In these designs, mixing occurs in an interactive manner at all stages of the study. At each stage, one approach affects the formulation of the other, and multiple types of implementation processes can occur. For example, rather than including integration only at the findings/results stage, or only across phases in a sequential design, mixing might occur at the conceptualization stage, the methodological stage, the analysis stage, and the inferential stage.

We recommend adding to Teddlie and Tashakkori's typology a sixth design type, specifically, a “hybrid” design type to include complex combinations of two or more of the other design types. We expect that many published MM designs will fall into the hybrid design type.

Morse and Niehaus (2009) listed eight mixed methods designs in their book (and suggested that authors create more complex combinations when needed). Our shorthand labels and descriptions (adapted from Morse and Niehaus 2009, p. 25) run as follows:

  • QUAL + quan (inductive-simultaneous design, where the core component is qualitative and the supplemental component is quantitative)
  • QUAL → quan (inductive-sequential design, where the core component is qualitative and the supplemental component is quantitative)
  • QUAN + qual (deductive-simultaneous design, where the core component is quantitative and the supplemental component is qualitative)
  • QUAN → qual (deductive-sequential design, where the core component is quantitative and the supplemental component is qualitative)
  • QUAL + qual (inductive-simultaneous design, where both components are qualitative; this is a multimethod design rather than a mixed methods design)
  • QUAL → qual (inductive-sequential design, where both components are qualitative; this is a multimethod design rather than a mixed methods design)
  • QUAN + quan (deductive-simultaneous design, where both components are quantitative; this is a multimethod design rather than a mixed methods design)
  • QUAN → quan (deductive-sequential design, where both components are quantitative; this is a multimethod design rather than a mixed methods design).

Notice that Morse and Niehaus (2009) included four mixed methods designs (the first four designs shown above) and four multimethod designs (the second set of four designs shown above) in their typology. The reader can, therefore, see that the design notation also works quite well for multimethod research designs. Notably absent from Morse and Niehaus's book are equal-status and interactive designs. In addition, they assume that the core component should always be performed either concurrently with or before the supplemental component.

Johnson, Christensen, and Onwuegbuzie constructed a set of mixed methods designs without these limitations. The resulting mixed methods design matrix (see Johnson and Christensen 2017, p. 478) contains nine designs, which we can label as follows (adapted from Johnson and Christensen 2017, p. 478):

  • QUAL + QUAN (equal-status concurrent design),
  • QUAL + quan (qualitatively driven concurrent design),
  • QUAN + qual (quantitatively driven concurrent design),
  • QUAL → QUAN (equal-status sequential design),
  • QUAN → QUAL (equal-status sequential design),
  • QUAL → quan (qualitatively driven sequential design),
  • qual → QUAN (quantitatively driven sequential design),
  • QUAN → qual (quantitatively driven sequential design), and
  • quan → QUAL (qualitatively driven sequential design).

The above set of nine designs assumed only one qualitative and one quantitative component. However, this simplistic assumption can be relaxed in practice, allowing the reader to construct more complex designs. The Morse notation system is very powerful. For example, a three-stage equal-status concurrent-sequential design can be written as (QUAL + QUAN) → (QUAL + QUAN) → (QUAL + QUAN).

The key point here is that the Morse notation provides researchers with a powerful language for depicting and communicating the design constructed for a specific research study.
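Because the notation is compositional, it can even be modeled as a tiny grammar. The sketch below is our own minimal illustration (the function names `component`, `concurrent`, and `sequential` are ours, not part of Morse's system) of how design labels are built from core and supplemental strands:

```python
# Sketch of the Morse design notation: a strand is QUAL or QUAN, written in
# uppercase when it carries the theoretical drive (core) and in lowercase when
# supplemental; "+" joins concurrent strands, "→" joins sequential stages.

def component(method, core=True):
    """Render one strand: 'QUAL'/'QUAN' if core, 'qual'/'quan' if supplemental."""
    label = method.upper() if core else method.lower()
    assert label.lower() in ("qual", "quan"), "method must be 'qual' or 'quan'"
    return label

def concurrent(*strands):
    """Strands performed at (roughly) the same time."""
    return " + ".join(strands)

def sequential(*stages):
    """Stages performed one after another, later stages building on earlier ones."""
    return " → ".join(stages)

# Two of Morse and Niehaus's basic mixed methods designs:
print(concurrent(component("qual"), component("quan", core=False)))  # QUAL + quan
print(sequential(component("quan"), component("qual", core=False)))  # QUAN → qual

# A more complex design: three equal-status concurrent stages in sequence.
stage = "(" + concurrent(component("qual"), component("quan")) + ")"
print(sequential(stage, stage, stage))
# (QUAL + QUAN) → (QUAL + QUAN) → (QUAL + QUAN)
```

The point of the sketch is only that any number of strands and stages can be nested and chained, which is what lets the notation describe designs far beyond the nine listed above.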

When designing a mixed methods study, it is sometimes helpful to include the mixing purpose (or a characteristic on one of the other dimensions shown in Table 1) in the title of the study design (e. g., an explanatory sequential MM design, an exploratory-confirmatory MM design, a developmental MM design). Much more important, however, than a design name is for the author to provide an accurate description of what was done in the research study, so the reader will know exactly how the study was conducted. A design classification label can never replace such a description.

The common complexity of mixed methods designs poses a problem for the above typologies of mixed methods research. The typologies were designed to classify whole mixed methods studies, and they are essentially based on a classification of simple designs. In practice, many, if not most, designs are complex. Complex designs are sometimes labeled “complex design”, “multiphase design”, “fully integrated design”, “hybrid design”, and the like. Because complex designs occur very often in practice, the above typologies are not able to classify a large part of existing mixed methods research any further than by labeling it “complex”, which in itself is not very informative about the particular design. This problem does not fully apply to Morse's notation system, which can be used to symbolize some more complex designs.

Something similar applies to the classification of the purposes of mixed methods research. The classifications of purposes mentioned in the “Purpose” section, again, are basically meant for the classification of whole mixed methods studies. In practice, however, one single study often serves more than one purpose (Schoonenboom et al. 2017). The more purposes that are included in one study, the more difficult it becomes to select a design on the basis of the purpose of the investigation, as advised by Greene (2007). Of all purposes involved, then, which one should be the primary basis for the design? Or should the design be based upon all purposes included? And if so, how? For more information on how to articulate design complexity based on multiple purposes of mixing, see Schoonenboom et al. (2017).

It should be clear to the reader that, although much progress has been made in the area of mixed methods design typologies, the problem of developing a single typology that comprehensively lists the possible designs for mixed methods research remains unsolved. This is why we emphasize in this article the importance of learning to build on simple designs and construct one's own design for one's research questions. This will often result in a combination or “hybrid” design that goes beyond the basic designs found in typologies, and in a methodology section that provides much more information than a design name.

Typological versus interactive approaches to design

In the introduction, we made a distinction between design as a product and design as a process. Related to this, two different approaches to design can be distinguished: typological/taxonomic approaches (“systematische Ansätze”), such as those in the previous section, and interactive approaches (“interaktive Ansätze”) (the latter were called “dynamic” approaches by Creswell and Plano Clark 2011). Whereas typological/taxonomic approaches view a design as a sort of mold into which the inquiry can be fit, interactive approaches (Maxwell 2013) view design as a process, in which a certain design-as-a-product might be the outcome of the process, but not its input.

The most frequently mentioned interactive approach to mixed methods research is the approach of Maxwell and Loomis (2003). Maxwell and Loomis distinguish the following components of a design: goals, conceptual framework, research question, methods, and validity. They argue convincingly that the most important task of the researcher is to deliver, as the end product of the design process, a design in which these five components fit together properly. During the design process, the researcher works alternately on the individual components, and as a result, their initial fit, if it existed, tends to get lost. The researcher should therefore regularly check during the research and continuing design process whether the components still fit together, and, if not, should adapt one or the other component to restore the fit between them. In an interactive approach, unlike the typological approach, design is viewed as an interactive process in which the components are continually compared to each other during the research study and adapted to each other.

Typological and interactive approaches to mixed methods research have been presented as mutually exclusive alternatives. In our view, however, they are not mutually exclusive. The interactive approach of Maxwell is a very powerful tool for conducting research, yet this approach is not specific to mixed methods research. Maxwell's interactive approach emphasizes that the researcher should keep and monitor a close fit between the five components of research design. However, it does not indicate how one should combine qualitative and quantitative subcomponents within one of Maxwell's five components (e. g., how one should combine a qualitative and a quantitative method, or a qualitative and a quantitative research question). Essential elements of the design process, such as timing and the point of integration, are not covered by Maxwell's approach. This is not a shortcoming of Maxwell's approach, but it indicates that to support the design of mixed methods research, more is needed than Maxwell's model currently has to offer.

Some authors state that design typologies are particularly useful for beginning researchers, whereas interactive approaches are suited for experienced researchers (Creswell and Plano Clark 2011). However, like an experienced researcher, a research novice needs to align the components of his or her design properly with each other, and, like a beginning researcher, an advanced researcher should indicate how qualitative and quantitative components are combined with each other. This makes an interactive approach desirable for beginning researchers as well.

We see two merits of the typological/taxonomic approach. We agree with Greene (2007), who states that the value of the typological approach mainly lies in the different dimensions of mixed methods that result from its classifications. In this article, the primary dimensions include purpose, theoretical drive, timing, point of integration, typological vs. interactive approaches, planned vs. emergent designs, and complexity (also see the secondary dimensions in Table 1). Unfortunately, not all of these dimensions are reflected in any single design typology reviewed here. A second merit of the typological approach is the provision of common mixed methods research designs, of common ways in which qualitative and quantitative research can be combined, as is done for example in the major designs of Creswell and Plano Clark (2011). Contrary to other authors, however, we do not consider these designs a feature of a whole study, but rather, in line with Guest (2013), a feature of one part of a design in which one qualitative and one quantitative component are combined. Although one study could have only one purpose, one point of integration, et cetera, we believe that combining “designs” is the rule and not the exception. Therefore, complex designs need to be constructed and modified as needed, and during the writing phase the design should be described in detail and perhaps given a creative and descriptive name.

Planned versus emergent designs

A mixed methods design can be thought out in advance, but can also arise during the course of the conduct of the study; the latter is called an “emergent” design (Creswell and Plano Clark 2011). Emergent designs arise, for example, when the researcher discovers during the study that one of the components is inadequate (Morse and Niehaus 2009). Addition of a component of the other type can sometimes remedy such an inadequacy. Some designs contain an emergent component by their nature. Initiation, for example, is the further exploration of unexpected outcomes. Unexpected outcomes are by definition not foreseen, and therefore cannot be included in the design in advance.

The question arises whether researchers should plan all these decisions beforehand, or whether they can make them during, and depending on the course of, the research process. The answer to this question is twofold. On the one hand, a researcher should decide beforehand which research components to include in the design, such that the conclusion that will be drawn will be robust. On the other hand, developments during research execution will sometimes prompt the researcher to decide to add additional components. In general, the advice is to be prepared for the unexpected. When one is able to plan for emergence, one should not refrain from doing so.

Dimension of complexity

Next, mixed methods designs are characterized by their complexity. In the literature, simple and complex designs are distinguished in various ways. A common distinction is between simple investigations with a single point of integration and complex investigations with multiple points of integration (Guest 2013). When designing a mixed methods study, it can be useful to mention in the title whether the design of the study is simple or complex. The primary message of this section is as follows: It is the responsibility of the researcher to create more complex designs when needed to answer his or her research question(s).

Teddlie and Tashakkori's (2009) multilevel mixed designs and fully integrated mixed designs are both complex designs, but for different reasons. A multilevel mixed design is more complex ontologically, because it involves multiple levels of reality. For example, data might be collected both at the levels of schools and students, neighborhoods and households, companies and employees, communities and inhabitants, or medical practices and patients (Yin 2013). Integration of these data does not only involve the integration of qualitative and quantitative data, but also the integration of data originating from different sources and existing at different levels. Little if any published research has discussed the possible ways of integrating data obtained in a multilevel mixed design (see Schoonenboom 2016). This is an area in need of additional research.

The fully integrated mixed design is more complex because it contains multiple points of integration. As formulated by Teddlie and Tashakkori (2009, p. 151):

In these designs, mixing occurs in an interactive manner at all stages of the study. At each stage, one approach affects the formulation of the other, and multiple types of implementation processes can occur.

Complexity, then, not only depends on the number of components, but also on the extent to which they depend on each other (e. g., “one approach affects the formulation of the other”).

Many of our design dimensions ultimately refer to different ways in which the qualitative and quantitative research components are interdependent. Different purposes of mixing ultimately differ in the way one component relates to, and depends upon, the other component. For example, these purposes include dependencies, such as “x illustrates y” and “x explains y”. Dependencies in the implementation of x and y occur to the extent that the design of y depends on the results of x (sequentiality). The theoretical drive creates dependencies, because the supplemental component y is performed and interpreted within the context and the theoretical drive of core component x. As a general rule in designing mixed methods research, one should examine and plan carefully the ways in which and the extent to which the various components depend on each other.

The dependence among components, which may or may not be present, has been summarized by Greene (2007) in the distinction between component designs (“Komponenten-Designs”), in which the components are independent of each other, and integrated designs (“integrierte Designs”), in which the components are interdependent. Of these two design categories, integrated designs are the more complex ones.

Secondary design considerations

The primary design dimensions explained above have been the focus of this article. There are a number of secondary considerations for researchers to also think about when they design their studies (Johnson and Christensen 2017 ). Now we list some secondary design issues and questions that should be thoughtfully considered during the construction of a strong mixed methods research design.

  • Phenomenon: Will the study be addressing (a) the same part or different parts of one phenomenon, (b) different phenomena, or (c) the phenomenon/phenomena from different perspectives? Is the phenomenon (a) expected to be unique (e. g., a historical event, a particular group), (b) expected to be part of a more regular and predictable phenomenon, or (c) a complex mixture of these?
  • Social scientific theory: Will the study generate a new substantive theory, test an already constructed theory, or achieve both in a sequential arrangement? Or is the researcher not interested in substantive theory based on empirical data?
  • Ideological drive: Will the study have an explicitly articulated ideological drive (e. g., feminism, critical race paradigm, transformative paradigm)?
  • Combination of sampling methods: What specific quantitative sampling method(s) will be used? What specific qualitative sampling method(s) will be used? How will these be combined or related?
  • Degree to which the research participants will be similar or different: For example, participants or stakeholders with known differences of perspective would provide participants that are quite different.
  • Degree to which the researchers on the research team will be similar or different: For example, an experiment conducted by one researcher would be high on similarity, but the use of a heterogeneous and participatory research team would include many differences.
  • Implementation setting: Will the phenomenon be studied naturalistically, experimentally, or through a combination of these?
  • Degree to which the methods are similar or different: For example, a structured interview and a questionnaire are fairly similar, but administration of a standardized test and participant observation in the field are quite different.
  • Validity criteria and strategies: What validity criteria and strategies will be used to address the defensibility of the study and the conclusions that will be drawn from it (see Chapter 11 in Johnson and Christensen 2017 )?
  • Full study: Will there be essentially one research study or more than one? How will the research report be structured?

Two case studies

The above design dimensions are now illustrated by examples. A nice collection of examples of mixed methods studies can be found in Hesse-Biber (2010), from which the following examples are taken. The description of the first case example is shown in Box 1.

Box 1

Summary of Roth (2006), research regarding the gender-wage gap within Wall Street securities firms. Adapted from Hesse-Biber (2010, pp. 457–458)

Louise Marie Roth’s research, Selling Women Short: Gender and Money on Wall Street (2006), tackles gender inequality in the workplace. She was interested in understanding the gender-wage gap among highly performing Wall Street MBAs, who on the surface appeared to have the same “human capital” qualifications and were placed in high-ranking Wall Street securities firms as their first jobs. In addition, Roth wanted to understand the “structural factors” within the workplace setting that may contribute to the gender-wage gap and its persistence over time. […] Roth conducted semistructured interviews, nesting quantitative closed-ended questions into primarily qualitative in-depth interviews […] In analyzing the quantitative data from her sample, she statistically considered all those factors that might legitimately account for gendered differences such as number of hours worked, any human capital differences, and so on. Her analysis of the quantitative data revealed the presence of a significant gender gap in wages that remained unexplained after controlling for any legitimate factors that might otherwise make a difference. […] Quantitative findings showed the extent of the wage gap while providing numerical understanding of the disparity but did not provide her with an understanding of the specific processes within the workplace that might have contributed to the gender gap in wages. […] Her respondents’ lived experiences over time revealed the hidden inner structures of the workplace that consist of discriminatory organizational practices with regard to decision making in performance evaluations that are tightly tied to wage increases and promotion.

This example nicely illustrates the distinction we made between simultaneity and dependency. On the two aspects of the timing dimension, this study was a concurrent-dependent design answering a set of related research questions. The data collection in this example was conducted simultaneously, and was thus concurrent – the quantitative closed-ended questions were embedded into the qualitative in-depth interviews. In contrast, the analysis was dependent, as explained in the next paragraph.

One of the purposes of this study was explanation: The qualitative data were used to understand the processes underlying the quantitative outcomes. It is therefore an explanatory design, and might be labelled an “explanatory concurrent design”. Conceptually, explanatory designs are often dependent: The qualitative component is used to explain and clarify the outcomes of the quantitative component. In that sense, the qualitative analysis in the case study took the outcomes of the quantitative component (“the existence of the gender-wage gap” and “numerical understanding of the disparity”), and aimed at providing an explanation for that result of the quantitative data analysis by relating it to the contextual circumstances in which the quantitative outcomes were produced. This purpose of mixing in the example corresponds to Bryman's (2006) “contextual understanding”. On the other primary dimensions, (a) the design was ongoing over a three-year period but was not emergent, (b) the point of integration was results, and (c) the design was not complex with respect to the point of integration, as it had only one point of integration. Yet, it was complex in the sense of involving multiple levels; both the level of the individual and that of the organization were included. According to the approach of Johnson and Christensen (2017), this was a QUAL + quan design (that was qualitatively driven, explanatory, and concurrent). If we give this study design a name, perhaps it should focus on what was done in the study: “explaining an effect from the process by which it is produced”. Having said this, the name “explanatory concurrent design” could also be used.

The description of the second case example is shown in Box 2.

Box 2

Summary of McMahon's (2007) explorative study of the meaning, role, and salience of rape myths within the subculture of college student athletes. Adapted from Hesse-Biber (2010, pp. 461–462)

Sarah McMahon (2007) wanted to explore the subculture of college student athletes and specifically the meaning, role, and salience of rape myths within that culture. […] While she was looking for confirmation between the quantitative ([structured] survey) and qualitative (focus groups and individual interviews) findings, she entered this study skeptical of whether or not her quantitative and qualitative findings would mesh with one another. McMahon […] first administered a survey [instrument] to 205 sophomore and junior student athletes at one Northeast public university. […] The quantitative data revealed a very low acceptance of rape myths among this student population but revealed a higher acceptance of violence among men and individuals who did not know a survivor of sexual assault. In the second qualitative (QUAL) phase, “focus groups were conducted as semi-structured interviews” and facilitated by someone of the same gender as the participants (p. 360). […] She followed this up with a third qualitative component (QUAL), individual interviews, which were conducted to elaborate on themes discovered in the focus groups and determine any differences in students’ responses between situations (i. e., group setting vs. individual). The interview guide was designed specifically to address focus group topics that needed “more in-depth exploration” or clarification (p. 361). The qualitative findings from the focus groups and individual qualitative interviews revealed “subtle yet pervasive rape myths” that fell into four major themes: “the misunderstanding of consent, the belief in ‘accidental’ and fabricated rape, the contention that some women provoke rape, and the invulnerability of female athletes” (p. 363). She found that the survey’s finding of a “low acceptance of rape myths … was contradicted by the findings of the focus groups and individual interviews, which indicated the presence of subtle rape myths” (p. 362).

On the timing dimension, this is an example of a sequential-independent design. It is sequential, because the qualitative focus groups were conducted after the survey was administered. The analysis of the quantitative and qualitative data was independent: Both were analyzed independently, to see whether they yielded the same results (which they did not). This purpose, therefore, was triangulation. On the other primary dimensions, (a) the design was planned, (b) the point of integration was results, and (c) the design was not complex as it had only one point of integration, and involved only the level of the individual. The author called this a “sequential explanatory” design. We doubt, however, whether this is the most appropriate label, because the qualitative component did not provide an explanation for quantitative results that were taken as given. On the contrary, the qualitative results contradicted the quantitative results. Thus, a “sequential-independent” design, or a “sequential-triangulation” design or a “sequential-comparative” design would probably be a better name.

Notice further that the second case study had the same point of integration as the first case study. The two components were brought together in the results. Thus, although the case studies are very dissimilar in many respects, this does not become visible in their point of integration. It can therefore be helpful to determine whether their point of extension is different. A point of extension is the point in the research process at which the second (or later) component comes into play. In the first case study, two related but different research questions were answered, namely the quantitative question “How large is the gender-wage gap among highly performing Wall Street MBAs after controlling for any legitimate factors that might otherwise make a difference?”, and the qualitative research question “How do structural factors within the workplace setting contribute to the gender-wage gap and its persistence over time?” This case study contains one qualitative research question and one quantitative research question. Therefore, the point of extension is the research question. In the second case study, both components answered the same research question. They differed in their data collection (and subsequently in their data analysis): qualitative focus groups and individual interviews versus a quantitative questionnaire. In this case study, the point of extension was data collection. Thus, the point of extension can be used to distinguish between the two case studies.

Summary and conclusions

The purpose of this article is to help researchers understand how to design a mixed methods research study. Perhaps the simplest approach to design is to look at a single book and select one of the few designs included in that book. We believe that is only useful as a starting point. Here we have shown that one often needs to construct a research design to fit one’s unique research situation and questions.

First, we showed that there are many purposes for which qualitative and quantitative methods, methodologies, and paradigms can be mixed. The purpose must be determined in interaction with the research questions. Inclusion of a purpose in the design name can sometimes provide readers with useful information about the study design, as in, e.g., an “explanatory sequential design” or an “exploratory-confirmatory design”.

The second dimension is theoretical drive in the sense that Morse and Niehaus (2009) use this term. That is, will the study have an inductive or a deductive drive, or, we added, a combination of these? Related to this idea is whether one will conduct a qualitatively driven, a quantitatively driven, or an equal-status mixed methods study. This language is sometimes included in the design name to communicate this characteristic of the study design (e.g., a “quantitatively driven sequential mixed methods design”).

The third dimension is timing, which has two aspects: simultaneity and dependence. Simultaneity refers to whether the components are to be implemented concurrently, sequentially, or in a combination of these in a multiphase design. Simultaneity is commonly used in the naming of a mixed methods design because it communicates key information. The second aspect of timing, dependence, refers to whether a later component depends on the results of an earlier component, e.g., did phase two specifically build on phase one in the research study? The fourth design dimension is the point of integration, which is where the qualitative and quantitative components are brought together and integrated. This is an essential dimension, but it usually does not need to be incorporated into the design name.

The fifth design dimension is that of typological vs. interactive design approaches. That is, will one select a design from a typology or use a more interactive approach to construct one’s own design? There are many typologies of designs in the current literature. Our recommendation is that readers examine multiple design typologies to better understand the design process in mixed methods research and to learn which designs have been identified as popular in the field. However, when a design that would follow from one’s research questions is not available, the researcher can and should (a) combine designs into new designs or (b) simply construct a new and unique design. One can go a long way in depicting a complex design with Morse’s (1991) notation when used to its full potential. We also recommend that researchers understand the process approach to design of Maxwell and Loomis (2003), and realize that research design is a process that often needs to be flexible and interactive.
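Morse’s notation, mentioned above, can be sketched with a few illustrative strings: capital letters mark the dominant (driving) component, lowercase letters the supplemental one, “+” concurrent implementation, and an arrow sequential implementation. The small helper functions below are our own hypothetical conveniences for reading such labels, not part of Morse’s work.

```python
# Morse's (1991) notation in brief: capitals mark the dominant ("driving")
# component, lowercase the supplemental one; "+" means the components run
# concurrently, and an arrow means they run sequentially.
MORSE_EXAMPLES = {
    "quantitatively driven, concurrent": "QUAN + qual",
    "quantitatively driven, sequential": "QUAN -> qual",
    "qualitatively driven, sequential": "QUAL -> quan",
    "equal-status, concurrent": "QUAN + QUAL",
}

def is_sequential(notation: str) -> bool:
    """A design is sequential if its notation contains an arrow."""
    return "->" in notation

def dominant_components(notation: str) -> list:
    """Return the components written in capitals (the driving ones)."""
    parts = notation.replace("->", "+").split("+")
    return [p.strip() for p in parts if p.strip().isupper()]
```

For example, `dominant_components("QUAN + QUAL")` returns both components, which is one way the notation expresses an equal-status design.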

The sixth design dimension or consideration is whether a design will be fully specified during the planning of the research study, whether the design (or part of it) will be allowed to emerge during the research process, or a combination of these. The seventh design dimension is called complexity. One sort of complexity mentioned was multilevel designs, but there are many complexities that can enter designs. The key point is that good research often requires the use of complex designs to answer one’s research questions. This is not something to avoid. It is the responsibility of the researcher to learn how to construct, describe, and name mixed methods research designs. Always remember that designs should follow from one’s research questions and purposes, rather than questions and purposes following from a few currently named designs.

In addition to the seven primary design dimensions or considerations, we provided a set of additional or secondary dimensions/considerations or questions to ask when constructing a mixed methods study design. Our purpose throughout this article has been to show what factors must be considered to design a high-quality mixed methods research study. The more one knows and thinks about the primary and secondary dimensions of mixed methods design, the better equipped one will be to pursue mixed methods research.

Acknowledgments

Open access funding provided by University of Vienna.

Biographies

1965, Dr., Professor of Empirical Pedagogy at University of Vienna, Austria. Research Areas: Mixed Methods Design, Philosophy of Mixed Methods Research, Innovation in Higher Education, Design and Evaluation of Intervention Studies, Educational Technology. Publications: Mixed methods in early childhood education. In: M. Fleer & B. v. Oers (Eds.), International handbook on early childhood education (Vol. 1). Dordrecht, The Netherlands: Springer 2017; The multilevel mixed intact group analysis: A mixed method to seek, detect, describe and explain differences between intact groups. Journal of Mixed Methods Research 10, 2016; The realist survey: How respondents’ voices can be used to test and revise correlational models. Journal of Mixed Methods Research 2015. Advance online publication.

1957, PhD, Professor of Professional Studies at University of South Alabama, Mobile, Alabama USA. Research Areas: Methods of Social Research, Program Evaluation, Quantitative, Qualitative and Mixed Methods, Philosophy of Social Science. Publications: Research methods, design and analysis. Boston, MA 2014 (with L. Christensen and L. Turner); Educational research: Quantitative, qualitative and mixed approaches. Los Angeles, CA 2017 (with L. Christensen); The Oxford handbook of multimethod and mixed methods research inquiry. New York, NY 2015 (with S. Hesse-Biber).

Bryman’s (2006) scheme of rationales for combining quantitative and qualitative research 1

  • Triangulation or greater validity – refers to the traditional view that quantitative and qualitative research might be combined to triangulate findings in order that they may be mutually corroborated. If the term was used as a synonym for integrating quantitative and qualitative research, it was not coded as triangulation.
  • Offset – refers to the suggestion that the research methods associated with both quantitative and qualitative research have their own strengths and weaknesses so that combining them allows the researcher to offset their weaknesses to draw on the strengths of both.
  • Completeness – refers to the notion that the researcher can bring together a more comprehensive account of the area of enquiry in which he or she is interested if both quantitative and qualitative research are employed.
  • Process – quantitative research provides an account of structures in social life but qualitative research provides sense of process.
  • Different research questions – this is the argument that quantitative and qualitative research can each answer different research questions but this item was coded only if authors explicitly stated that they were doing this.
  • Explanation – one is used to help explain findings generated by the other.
  • Unexpected results – refers to the suggestion that quantitative and qualitative research can be fruitfully combined when one generates surprising results that can be understood by employing the other.
  • Instrument development – refers to contexts in which qualitative research is employed to develop questionnaire and scale items – for example, so that better wording or more comprehensive closed answers can be generated.
  • Sampling – refers to situations in which one approach is used to facilitate the sampling of respondents or cases.
  • Credibility – refers to suggestions that employing both approaches enhances the integrity of findings.
  • Context – refers to cases in which the combination is rationalized in terms of qualitative research providing contextual understanding coupled with either generalizable, externally valid findings or broad relationships among variables uncovered through a survey.
  • Illustration – refers to the use of qualitative data to illustrate quantitative findings, often referred to as putting “meat on the bones” of “dry” quantitative findings.
  • Utility or improving the usefulness of findings – refers to a suggestion, which is more likely to be prominent among articles with an applied focus, that combining the two approaches will be more useful to practitioners and others.
  • Confirm and discover – this entails using qualitative data to generate hypotheses and using quantitative research to test them within a single project.
  • Diversity of views – this includes two slightly different rationales – namely, combining researchers’ and participants’ perspectives through quantitative and qualitative research respectively, and uncovering relationships between variables through quantitative research while also revealing meanings among research participants through qualitative research.
  • Enhancement or building upon quantitative/qualitative findings – this entails a reference to making more of or augmenting either quantitative or qualitative findings by gathering data using a qualitative or quantitative research approach.
  • Other/unclear.
  • Not stated.

1 Reprinted with permission from “Integrating quantitative and qualitative research: How is it done?” by Alan Bryman (2006), Qualitative Research, 6, pp. 105–107.

Contributor Information

Judith Schoonenboom, Email: [email protected] .

R. Burke Johnson, Email: bjohnson@southalabama.edu.

  • Bazeley P, Kemp L. Mosaics, triangles, and DNA: Metaphors for integrated analysis in mixed methods research. Journal of Mixed Methods Research. 2012;6:55–72. doi: 10.1177/1558689811419514.
  • Bryman A. Integrating quantitative and qualitative research: How is it done? Qualitative Research. 2006;6:97–113. doi: 10.1177/1468794106058877.
  • Cook TD. Postpositivist critical multiplism. In: Shotland RL, Mark MM, editors. Social science and social policy. Beverly Hills: SAGE; 1985. pp. 21–62.
  • Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 2nd ed. Los Angeles: SAGE; 2011.
  • Erzberger C, Prein G. Triangulation: Validity and empirically-based hypothesis construction. Quality and Quantity. 1997;31:141–154. doi: 10.1023/A:1004249313062.
  • Greene JC. Mixed methods in social inquiry. San Francisco: Jossey-Bass; 2007.
  • Greene JC. Preserving distinctions within the multimethod and mixed methods research merger. In: Hesse-Biber S, Johnson RB, editors. The Oxford handbook of multimethod and mixed methods research inquiry. New York: Oxford University Press; 2015.
  • Greene JC, Caracelli VJ, Graham WF. Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis. 1989;11:255–274. doi: 10.3102/01623737011003255.
  • Greene JC, Hall JN. Dialectics and pragmatism. In: Tashakkori A, Teddlie C, editors. SAGE handbook of mixed methods in social & behavioral research. 2nd ed. Los Angeles: SAGE; 2010. pp. 119–167.
  • Guest G. Describing mixed methods research: An alternative to typologies. Journal of Mixed Methods Research. 2013;7:141–151. doi: 10.1177/1558689812461179.
  • Hesse-Biber S. Qualitative approaches to mixed methods practice. Qualitative Inquiry. 2010;16:455–468. doi: 10.1177/1077800410364611.
  • Johnson RB. Dialectical pluralism: A metaparadigm whose time has come. Journal of Mixed Methods Research. 2017;11:156–173. doi: 10.1177/1558689815607692.
  • Johnson RB, Christensen LB. Educational research: Quantitative, qualitative, and mixed approaches. 6th ed. Los Angeles: SAGE; 2017.
  • Johnson RB, Onwuegbuzie AJ. Mixed methods research: A research paradigm whose time has come. Educational Researcher. 2004;33(7):14–26. doi: 10.3102/0013189X033007014.
  • Johnson RB, Onwuegbuzie AJ, Turner LA. Toward a definition of mixed methods research. Journal of Mixed Methods Research. 2007;1:112–133. doi: 10.1177/1558689806298224.
  • Mathison S. Why triangulate? Educational Researcher. 1988;17:13–17. doi: 10.3102/0013189X017002013.
  • Maxwell JA. Qualitative research design: An interactive approach. 3rd ed. Los Angeles: SAGE; 2013.
  • Maxwell JA, Loomis DM. Mixed methods design: An alternative approach. In: Tashakkori A, Teddlie C, editors. Handbook of mixed methods in social & behavioral research. Thousand Oaks: SAGE; 2003. pp. 241–271.
  • McMahon S. Understanding community-specific rape myths: Exploring student athlete culture. Affilia. 2007;22:357–370. doi: 10.1177/0886109907306331.
  • Mendlinger S, Cwikel J. Spiraling between qualitative and quantitative data on women’s health behaviors: A double helix model for mixed methods. Qualitative Health Research. 2008;18:280–293. doi: 10.1177/1049732307312392.
  • Morgan DL. Integrating qualitative and quantitative methods: A pragmatic approach. Los Angeles: SAGE; 2014.
  • Morse JM. Approaches to qualitative-quantitative methodological triangulation. Nursing Research. 1991;40:120–123. doi: 10.1097/00006199-199103000-00014.
  • Morse JM, Niehaus L. Mixed method design: Principles and procedures. Walnut Creek: Left Coast Press; 2009.
  • Onwuegbuzie AJ, Johnson RB. The “validity” issue in mixed research. Research in the Schools. 2006;13:48–63.
  • Roth LM. Selling women short: Gender and money on Wall Street. Princeton: Princeton University Press; 2006.
  • Schoonenboom J. The multilevel mixed intact group analysis: A mixed method to seek, detect, describe and explain differences between intact groups. Journal of Mixed Methods Research. 2016;10:129–146. doi: 10.1177/1558689814536283.
  • Schoonenboom J, Johnson RB, Froehlich DE. Combining multiple purposes of mixing within a mixed methods research design. International Journal of Multiple Research Approaches. 2017, in press.
  • Teddlie C, Tashakkori A. Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Los Angeles: SAGE; 2009.
  • Yanchar SC, Williams DD. Reconsidering the compatibility thesis and eclecticism: Five proposed guidelines for method use. Educational Researcher. 2006;35(9):3–12. doi: 10.3102/0013189X035009003.
  • Yin RK. Case study research: Design and methods. 5th ed. Los Angeles: SAGE; 2013.

Research Design: What it is, Elements & Types


Can you imagine doing research without a plan? Probably not. When we discuss a strategy to collect, study, and evaluate data, we talk about research design. This design addresses problems and creates a consistent and logical model for data analysis. Let’s learn more about it.

What is Research Design?

Research design is the framework of research methods and techniques chosen by a researcher to conduct a study. The design allows researchers to sharpen the research methods suitable for the subject matter and set up their studies for success.

Defining a research topic clarifies the type of research (experimental, survey research, correlational, semi-experimental, review) and its sub-type (experimental design, research problem, descriptive case study).

A research design addresses three main areas:

  • Data collection
  • Measurement
  • Data analysis

The research problem an organization faces will determine the design, not vice-versa. The design phase of a study determines which tools to use and how they are used.

The Process of Research Design

The research design process is a systematic and structured approach to conducting research. The process is essential to ensure that the study is valid, reliable, and produces meaningful results.

  • Consider your aims and approaches: Determine the research questions and objectives, and identify the theoretical framework and methodology for the study.
  • Choose a type of Research Design: Select the appropriate research design, such as experimental, correlational, survey, case study, or ethnographic, based on the research questions and objectives.
  • Identify your population and sampling method: Determine the target population and sample size, and choose the sampling method, such as simple random sampling, stratified random sampling, or convenience sampling.
  • Choose your data collection methods: Decide on the methods, such as surveys, interviews, observations, or experiments, and select the appropriate instruments or tools for collecting data.
  • Plan your data collection procedures: Develop a plan for data collection, including the timeframe, location, and personnel involved, and ensure ethical considerations.
  • Decide on your data analysis strategies: Select the appropriate data analysis techniques, such as statistical analysis , content analysis, or discourse analysis, and plan how to interpret the results.

The process of research design is a critical step in conducting research. By following the steps of research design, researchers can ensure that their study is well-planned, ethical, and rigorous.
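As a concrete illustration of the sampling step above, here is a minimal Python sketch of stratified random sampling with proportional allocation. The population, the strata, and the 10% sampling fraction are all hypothetical; real studies would typically use dedicated sampling software.

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, fraction, seed=42):
    """Draw the same fraction from every stratum (proportional allocation)."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    strata = defaultdict(list)
    for unit in population:
        strata[stratum_of(unit)].append(unit)
    sample = []
    for members in strata.values():
        k = round(len(members) * fraction)
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical population: 60 sophomores and 40 juniors.
population = ([("sophomore", i) for i in range(60)]
              + [("junior", i) for i in range(40)])
sample = stratified_sample(population, stratum_of=lambda u: u[0], fraction=0.1)
# Proportional allocation keeps the strata in proportion:
# 6 sophomores + 4 juniors = 10 units in total.
```

Because each stratum is sampled at the same rate, the sample mirrors the population’s composition, which is the point of stratification.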

Research Design Elements

Impactful research usually minimizes bias in data and increases trust in the accuracy of collected data. A design that produces the smallest margin of error in experimental research is generally considered the desired outcome. The essential elements are:

  • Accurate purpose statement
  • Techniques to be implemented for collecting and analyzing research
  • The method applied for analyzing collected details
  • Type of research methodology
  • Probable objections to research
  • Settings for the research study
  • Measurement of analysis

Characteristics of Research Design

A proper design sets your study up for success. Successful research studies provide insights that are accurate and unbiased. You’ll need to create a survey that meets all of the main characteristics of a design. There are four key characteristics:


  • Neutrality: When you set up your study, you may have to make assumptions about the data you expect to collect. The results projected in the research should be free from research bias and neutral. Seek opinions on the final evaluated scores and conclusions from multiple individuals, not only from those who agree with the results.
  • Reliability: With regularly conducted research, the researcher expects similar results every time. You’ll only be able to reach the desired results if your design is reliable. Your plan should indicate how to form research questions to ensure the standard of results.
  • Validity: There are multiple measuring tools available. However, the only correct measuring tools are those which help a researcher in gauging results according to the objective of the research. The  questionnaire  developed from this design will then be valid.
  • Generalization:  The outcome of your design should apply to a population and not just a restricted sample . A generalized method implies that your survey can be conducted on any part of a population with similar accuracy.

The above factors affect how respondents answer the research questions, so a good design should balance all of these characteristics. If you want, you can also learn about Selection Bias through our blog.

Research Design Types

A researcher must clearly understand the various types to select which model to implement for a study. Like the research itself, the design of your analysis can be broadly classified into quantitative and qualitative.

Qualitative research

Qualitative research determines relationships between collected data and observations without relying on mathematical calculations. It explores theories related to naturally existing phenomena through non-numerical evidence. Researchers rely on qualitative observation research methods to conclude “why” a particular theory exists and “what” respondents have to say about it.

Quantitative research

Quantitative research is for cases where drawing statistical conclusions to collect actionable insights is essential. Numbers provide a better perspective for making critical business decisions. Quantitative research methods are necessary for the growth of any organization. Insights drawn from complex numerical data and analysis prove to be highly effective when making decisions about the business’s future.

Qualitative Research vs Quantitative Research

Here is a chart that highlights the major differences between qualitative and quantitative research:

In summary, qualitative research is more exploratory and focuses on understanding the subjective experiences of individuals, while quantitative research is more focused on objective data and statistical analysis.

You can further break down the types of research design into five categories:


1. Descriptive: In a descriptive composition, a researcher is solely interested in describing the situation or case under their research study. It is a theory-based design method created by gathering, analyzing, and presenting collected data. This allows a researcher to provide insights into the why and how of research. Descriptive design helps others better understand the need for the research. If the problem statement is not clear, you can conduct exploratory research. 

2. Experimental: Experimental research establishes a relationship between the cause and effect of a situation. It is a causal research design where one observes the impact caused by the independent variable on the dependent variable. For example, one monitors the influence of an independent variable such as a price on a dependent variable such as customer satisfaction or brand loyalty. It is an efficient research method as it contributes to solving a problem.

The independent variables are manipulated to monitor the change they have on the dependent variable. Social sciences often use it to observe human behavior by analyzing two groups. Researchers can have participants change their actions and study how the people around them react to understand social psychology better.

3. Correlational research: Correlational research  is a non-experimental research technique. It helps researchers establish a relationship between two closely connected variables. Neither variable is manipulated; statistical analysis techniques calculate the strength of the relationship between them. This type of research requires measurements of two different variables.

A correlation coefficient measures the correlation between two variables; its value ranges between -1 and +1. A coefficient towards +1 indicates a positive relationship between the variables, and one towards -1 indicates a negative relationship between the two variables.
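The coefficient described above is typically Pearson’s r. As a minimal illustration (with made-up toy data), it can be computed directly from its definition:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: y rises with x, z falls as x rises.
x = [1, 2, 3, 4, 5]
print(pearson_r(x, [2, 4, 6, 8, 10]))   # ≈ +1.0 (perfect positive)
print(pearson_r(x, [10, 8, 6, 4, 2]))   # ≈ -1.0 (perfect negative)
```

Values near 0 would indicate no linear relationship between the two variables.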

4. Diagnostic research: In diagnostic design, the researcher is looking to evaluate the underlying cause of a specific topic or phenomenon. This method helps one learn more about the factors that create troublesome situations. 

This design has three parts of the research:

  • Inception of the issue
  • Diagnosis of the issue
  • Solution for the issue

5. Explanatory research : Explanatory design uses a researcher’s ideas and thoughts on a subject to further explore their theories. The study explains unexplored aspects of a subject and details the research questions’ what, how, and why.

Benefits of Research Design

There are several benefits to having a well-designed research plan, including:

  • Clarity of research objectives: Research design provides a clear understanding of the research objectives and the desired outcomes.
  • Increased validity and reliability: Research design helps to minimize the risk of bias and to control extraneous variables, ensuring the validity and reliability of results.
  • Improved data collection: Research design helps to ensure that the right data is collected, and that it is collected systematically and consistently.
  • Better data analysis: Research design helps ensure that the collected data can be analyzed effectively, providing meaningful insights and conclusions.
  • Improved communication: A well-designed study helps ensure the results are communicated clearly and persuasively within the research team and to external stakeholders.
  • Efficient use of resources: By reducing the risk of waste and maximizing the impact of the research, research design helps to ensure that resources are used efficiently.

A well-designed research plan is essential for successful research, providing clear and meaningful insights and ensuring that resources are used effectively.

QuestionPro offers a comprehensive solution for researchers looking to conduct research. With its user-friendly interface, robust data collection and analysis tools, and the ability to integrate results from multiple sources, QuestionPro provides a versatile platform for designing and executing research projects.

Our robust suite of research tools provides you with all you need to derive research results. Our online survey platform includes custom point-and-click logic and advanced question types. Uncover the insights that matter the most.


Research Design: Qualitative, Quantitative, and Mixed Methods Approaches

Welcome to the Companion Website

Welcome to the SAGE edge site for Research Design, Fifth Edition.

The SAGE edge site for Research Design by John W. Creswell and J. David Creswell offers a robust online environment you can access anytime, anywhere, and features an array of free tools and resources to keep you on the cutting edge of your learning experience.


This best-selling text pioneered the comparison of qualitative, quantitative, and mixed methods research design. For all three approaches, John W. Creswell and new co-author J. David Creswell include a preliminary consideration of philosophical assumptions, key elements of the research process, a review of the literature, an assessment of the use of theory in research applications, and reflections about the importance of writing and ethics in scholarly inquiry.

The Fifth Edition includes more coverage of: epistemological and ontological positioning in relation to the research question and chosen methodology; case study, PAR, visual, and online methods in qualitative research; qualitative and quantitative data analysis software; and, in quantitative methods, power analysis to determine sample size and expanded treatment of experimental and survey designs. It has also been updated with the latest thinking and research in mixed methods.

Acknowledgments

We gratefully acknowledge John W. Creswell and J. David Creswell for writing an excellent text. Special thanks are also due to Tim Guetterman of the University of Michigan, Shannon Storch of the University of Creighton, and Tiffany J. Davis of the University of Houston for developing the ancillaries on this site.


Quantitative vs. Qualitative Research: The Differences Explained

From Scribbr 

Empirical Research

What is empirical research?

"Empirical research is research that is based on observation and measurement of phenomena, as directly experienced by the researcher. The data thus gathered may be compared against a theory or hypothesis, but the results are still based on real life experience. The data gathered is all primary data, although secondary data from a literature review may form the theoretical background."

Characteristics of Empirical Research

Emerald Publishing's guide to conducting empirical research identifies a number of elements common to empirical research:

A research question, which will determine research objectives.

A particular and planned design for the research, which will depend on the question and which will find ways of answering it with appropriate use of resources.

The gathering of primary data, which is then analysed.

A particular methodology for collecting and analysing the data, such as an experiment or survey.

The limitation of the data to a particular group, area or time scale, known as a sample [emphasis added]: for example, a specific number of employees of a particular company type, or all users of a library over a given time scale. The sample should be somehow representative of a wider population.

The ability to recreate the study and test the results. This is known as reliability.

The ability to generalize from the findings to a larger sample and to other situations.

If you see these elements in a research article, you can feel confident that you have found empirical research. Emerald's guide goes into more detail on each element. 

Emerald Publishing (n.d.). How to... conduct empirical research. https://www.emeraldgrouppublishing.com/how-to/research-methods/conduct-empirical-research-l 

  • Quantitative vs. Qualitative
  • Data Collection Methods
  • Analyzing Data

When collecting and analyzing data, quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings. Both are important for gaining different kinds of knowledge.

Quantitative research

Common quantitative methods include experiments, observations recorded as numbers, and surveys with closed-ended questions.

Qualitative research

Common qualitative methods include interviews with open-ended questions, observations described in words, and literature reviews that explore concepts and theories.

Streefkerk, R. (2022, February 7). Qualitative vs. quantitative research: Differences, examples & methods. Scribbr. https://www.scribbr.com/methodology/qualitative-quantitative-research/

Quantitative and qualitative data can be collected using various methods. It is important to use a data collection method that will help answer your research question(s).

Many data collection methods can be either qualitative or quantitative. For example, in surveys, observations or case studies, your data can be represented as numbers (e.g. using rating scales or counting frequencies) or as words (e.g. with open-ended questions or descriptions of what you observe).

However, some methods are more commonly used in one type or the other.

Quantitative data collection methods

  • Surveys: Lists of closed-ended or multiple-choice questions distributed to a sample (online, in person, or over the phone).
  • Experiments: Situations in which variables are controlled and manipulated to establish cause-and-effect relationships.
  • Observations: Observing subjects in a natural environment where variables can’t be controlled.

Qualitative data collection methods

  • Interviews: Asking open-ended questions verbally to respondents.
  • Focus groups: Discussion among a group of people about a topic to gather opinions that can be used for further research.
  • Ethnography: Participating in a community or organization for an extended period of time to closely observe culture and behavior.
  • Literature review: Survey of published works by other authors.

When to use qualitative vs. quantitative research

A rule of thumb for deciding whether to use qualitative or quantitative data is:

  • Use quantitative research if you want to confirm or test something (a theory or hypothesis)
  • Use qualitative research if you want to understand something (concepts, thoughts, experiences)

For most research topics you can choose a qualitative, quantitative or mixed methods approach. Which type you choose depends on, among other things, whether you’re taking an inductive vs. deductive research approach; your research question(s); whether you’re doing experimental, correlational, or descriptive research; and practical considerations such as time, money, availability of data, and access to respondents.

Streefkerk, R. (2022, February 7). Qualitative vs. quantitative research: Differences, examples & methods. Scribbr. https://www.scribbr.com/methodology/qualitative-quantitative-research/

Qualitative or quantitative data by itself can’t prove or demonstrate anything, but has to be analyzed to show its meaning in relation to the research questions. The method of analysis differs for each type of data.

Analyzing quantitative data

Quantitative data is based on numbers. Simple math or more advanced statistical analysis is used to discover commonalities or patterns in the data. The results are often reported in graphs and tables.

Applications such as Excel, SPSS, or R can be used to calculate things like:

  • Average scores
  • The number of times a particular answer was given
  • The correlation or causation between two or more variables
  • The reliability and validity of the results
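The calculations in the list above can be sketched in a few lines of plain Python (the text mentions Excel, SPSS, and R; any of these tools performs the same operations). The survey numbers below are invented purely for illustration.

```python
# Illustrative quantitative analysis of made-up survey data: an average
# score, a count of how often one answer was given, and a Pearson
# correlation between two variables.
from collections import Counter
from math import sqrt

satisfaction = [4, 5, 3, 4, 2, 5, 4, 3]          # ratings on a 1-5 scale
hours_worked = [38, 45, 30, 40, 25, 50, 42, 33]  # per week, same respondents

# Average score
mean_satisfaction = sum(satisfaction) / len(satisfaction)

# The number of times a particular answer was given
answer_counts = Counter(satisfaction)

# Pearson correlation between the two variables
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(mean_satisfaction)                               # 3.75
print(answer_counts[4])                                # the rating 4 was given 3 times
print(round(pearson(satisfaction, hours_worked), 2))   # 0.97
```

A statistics package would add significance tests on top of these basics, but the underlying arithmetic is the same.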

Analyzing qualitative data

Qualitative data is more difficult to analyze than quantitative data. It consists of text, images or videos instead of numbers.

Some common approaches to analyzing qualitative data include:

  • Qualitative content analysis: Tracking the occurrence, position and meaning of words or phrases
  • Thematic analysis: Closely examining the data to identify the main themes and patterns
  • Discourse analysis: Studying how communication works in social contexts
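As a minimal illustration of the first approach, the counting step of qualitative content analysis (tracking the occurrence and position of chosen words) can be sketched in Python. The transcript and code words below are invented for illustration; interpreting the meaning of each occurrence remains a manual, qualitative step.

```python
# Track how often, and where, chosen code words occur in an interview
# transcript -- the mechanical part of qualitative content analysis.
import re
from collections import Counter

transcript = (
    "I feel safe cycling in the old town, but the ring road is stressful. "
    "Drivers are stressful near the ring road, and I avoid it when I can."
)

code_words = ["safe", "stressful", "avoid"]

tokens = re.findall(r"[a-z']+", transcript.lower())
counts = Counter(t for t in tokens if t in code_words)
positions = {w: [i for i, t in enumerate(tokens) if t == w] for w in code_words}

print(counts["stressful"])   # 2 -- occurs twice
print(positions["safe"])     # [2] -- token index of each occurrence
```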

Comparison of Research Processes

Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE Publications.

  • Last Updated: Aug 21, 2023 4:07 PM
  • URL: https://guides.lib.udel.edu/researchmethods

Research Design and Methods

Research Design and Methods An Applied Guide for the Scholar-Practitioner

  • Gary J. Burkholder - Walden University, USA
  • Kimberley A. Cox - Walden University, USA
  • Linda M. Crawford - Walden University, USA
  • John H. Hitchcock - Abt Associates Inc., USA
Description

Research Design and Methods: An Applied Guide for the Scholar-Practitioner is written for students seeking advanced degrees who want to use evidence-based research to support their practice. This practical and accessible text addresses the foundational concepts of research design and methods; provides a more detailed exploration of designs and approaches popular with graduate students in applied disciplines; covers qualitative, quantitative, and mixed-methods designs; discusses ethical considerations and quality in research; and provides guidance on writing a research proposal.


Supplements

  • Editable, chapter-specific Microsoft® PowerPoint® slides offer you complete flexibility in easily creating a multimedia presentation for your course. 
  • Lecture Notes , including Outline and Objectives, which may be used for lecture and/or student handouts.
  • Case studies from SAGE Research Methods accompanied by critical thinking/discussion questions.  
  • Tables and figures from the printed book are available in an easily-downloadable format for use in papers, hand-outs, and presentations.

“The chapters in this text are logically and clearly organized around levels of understanding that are intuitive and easy to follow. They offer dynamic examples that will keep students engaged. Readers will learn to connect theory and practice, helping them become better researchers, and better consumers of research.”

“ Research Design and Methods: An Applied Guide for the Scholar-Practitioner is a must-read for both new and seasoned researchers. Every topic in the text is comprehensively explained with excellent examples.”

“This text provides clear and concise discussions of qualitative, quantitative, and mixed-methods research for the new scholar-practitioner. The use of questioning and visuals affords students the opportunity to make connections and reflect on their learning.”

“The edited nature of this book provides a multitude of rich perspectives from well-respected authors. This book is a must-have for introductory research methods students.”

“This is an excellent read for anyone interested in understanding research, the book provides good clarity and practical examples…. It is a pragmatic book that translates research concepts into practice.”

Clear chapters. Accessible format.

It contains most of what I needed for my Research Methods class. Very good book!

Current book looks better to teach.

A user-friendly, relevant, and applicable resource for all scholar-practitioners.

  • The authors are highly experienced experts in the field who are skilled in translating difficult concepts into straightforward examples.
  • The text is comprehensive in its coverage of research design and methods across multiple courses, working well as a beginning or intermediate text for a research design course or as a supplementary text in related courses.
  • Complementary chapters on qualitative, quantitative, and mixed methods data analysis show readers how to holistically apply what they’ve learned about research design to data analysis.
  • A chapter on writing a research proposal ensures that readers develop their skills for formulating their research ideas in proposal form.
  • Questions for Reflection, and Key Sources for further reading are included at the end of each chapter.

Sample Materials & Chapters

3: Conceptual and Theoretical Frameworks in Research

16: Applied Research: Action Research



Creswell, J. W. (2014). Research Design: Qualitative, Quantitative and Mixed Methods Approaches (4th ed.). Thousand Oaks, CA: Sage


The book Research Design: Qualitative, Quantitative and Mixed Methods Approaches by Creswell (2014) covers three approaches: qualitative, quantitative and mixed methods. This educational book is informative and illustrative and is equally beneficial for students, teachers and researchers, though readers should have basic knowledge of research to understand it fully. The book has two parts: Part I (Chapters 1-4) covers the steps in developing a research proposal, and Part II (Chapters 5-10) explains how to develop a research proposal or write a research report. A summary at the end of every chapter helps the reader recapitulate its ideas, and the writing exercises and suggested readings that close each chapter are also useful. Chapter 1 opens with a definition of research approaches, and the author argues that the selection of a research approach is based on the nature of the research problem, the researcher's experience and the audience of the study. The author defines qualitative, quantitative and mixed methods research, distinguishes between the quantitative and qualitative approaches, and notes that interest in qualitative research increased in the latter half of the 20th century. The worldviews (which Fraenkel, Wallen and Hyun (2012) and Onwuegbuzie and Leech (2005) call paradigms) are also explained. At times the language becomes too philosophical and technical, probably because the author had to explain some technical terms.

Related Papers


Research Design: Qualitative, Quantitative and Mixed Methods

Esra Öztürk Çalık

Conducting a well-established research requires deep knowledge about the research designs. Doing research can be likened to jumping into the sea which may transform into a huge ocean if the researcher is not experienced. As a PhD candidate and a novice researcher, I believe that the book "Research Design: Qualitative, Quantitative and Mixed Methods Approaches" by J.W. Creswell is a true reference guide for novice researchers since it is the most comprehensive and informative source with its reader-friendly structure.


Yan-yi Chang

John W. Creswell was previously a professor in educational psychology in the University of Nebraska–Lincoln. He moved to the University of Michigan in 2015 as a professor in the Department of Family Medicine. He has published many articles and close to 27 books on mixed methods. Professor Creswell is also one of the founding members of the Journal of Mixed Methods Research. He was a Fulbright scholar in South Africa in 2008 and Thailand in 2012. In 2011, he served as a visiting professor in the School of Public Health of Harvard University. In 2014, he became the Chairman of the Mixed Methods International Research Association. Professor Creswell has a personal website called “Mixed Methods Research” at http://johnwcreswell.com/. The site contains the information about his background, his own blog, consulting works and published books. He also posted replies questions from academic researchers and practitioners in the blog.

Kassu Sileyew

There are a number of approaches used in this research method design. The purpose of this chapter is to design the methodology of the research approach through mixed types of research techniques. The research approach also supports the researcher on how to come across the research result findings. In this chapter, the general design of the research and the methods used for data collection are explained in detail. It includes three main parts. The first part gives a highlight about the dissertation design. The second part discusses about qualitative and quantitative data collection methods. The last part illustrates the general research framework. The purpose of this section is to indicate how the research was conducted throughout the study periods.




Mixed Methods Research | Definition, Guide & Examples

Published on August 13, 2021 by Tegan George . Revised on June 22, 2023.

Mixed methods research combines elements of quantitative research and qualitative research in order to answer your research question . Mixed methods can help you gain a more complete picture than a standalone quantitative or qualitative study, as it integrates benefits of both methods.

Mixed methods research is often used in the behavioral, health, and social sciences, especially in multidisciplinary settings and complex situational or societal research.

  • To what extent does the frequency of traffic accidents (quantitative) reflect cyclist perceptions of road safety (qualitative) in Amsterdam?
  • How do student perceptions of their school environment (qualitative) relate to differences in test scores (quantitative)?
  • How do interviews about job satisfaction at Company X (qualitative) help explain year-over-year sales performance and other KPIs (quantitative)?
  • How can voter and non-voter beliefs about democracy (qualitative) help explain election turnout patterns (quantitative) in Town X?
  • How do average hospital salary measurements over time (quantitative) help to explain nurse testimonials about job satisfaction (qualitative)?

Table of contents

  • When to use mixed methods research
  • Mixed methods research designs
  • Advantages of mixed methods research
  • Disadvantages of mixed methods research
  • Other interesting articles
  • Frequently asked questions

Mixed methods research may be the right choice if your research process suggests that quantitative or qualitative data alone will not sufficiently answer your research question. There are several common reasons for using mixed methods research:

  • Generalizability: Qualitative research usually has a smaller sample size, and thus is not generalizable. In mixed methods research, this comparative weakness is mitigated by the comparative strength of “large N,” externally valid quantitative research.
  • Contextualization: Mixing methods allows you to put findings in context and add richer detail to your conclusions. Using qualitative data to illustrate quantitative findings can help “put meat on the bones” of your analysis.
  • Credibility: Using different methods to collect data on the same subject can make your results more credible. If the qualitative and quantitative data converge, this strengthens the validity of your conclusions. This process is called triangulation .

As you formulate your research question , try to directly address how qualitative and quantitative methods will be combined in your study. If your research question can be sufficiently answered via standalone quantitative or qualitative analysis, a mixed methods approach may not be the right fit.

But mixed methods might be a good choice if you want to meaningfully integrate both of these questions in one research study.

Keep in mind that mixed methods research doesn’t just mean collecting both types of data; you need to carefully consider the relationship between the two and how you’ll integrate them into coherent conclusions.

Mixed methods research can be very challenging to put into practice, and it comes with the same risk of research biases as standalone studies, so it’s a less common choice than standalone quantitative or qualitative research.


There are different types of mixed methods research designs. The differences between them relate to the aim of the research, the timing of the data collection, and the importance given to each data type.

As you design your mixed methods study, also keep in mind:

  • Your research approach (inductive vs deductive)
  • Your research questions
  • What kind of data is already available for you to use
  • What kind of data you’re able to collect yourself

Here are a few of the most common mixed methods designs.

Convergent parallel

In a convergent parallel design, you collect quantitative and qualitative data at the same time and analyze them separately. After both analyses are complete, compare your results to draw overall conclusions.

  • On the qualitative side, you analyze cyclist complaints via the city’s database and on social media to find out which areas are perceived as dangerous and why.
  • On the quantitative side, you analyze accident reports in the city’s database to find out how frequently accidents occur in different areas of the city.
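A toy sketch of the final comparison step in this design (all area names and figures below are invented) lines the two strands up by area to see where they converge:

```python
# Toy comparison step for a convergent parallel design: after analysing each
# strand separately, compare the results by area to see where the qualitative
# and quantitative findings agree. All names and numbers are invented.
accidents_per_area = {"centre": 42, "ring_road": 118, "suburbs": 17}  # quantitative strand
perceived_dangerous = {"centre", "ring_road"}                         # qualitative themes

high_accident = {area for area, n in accidents_per_area.items() if n >= 40}
convergent = sorted(high_accident & perceived_dangerous)  # both strands agree

print(convergent)   # ['centre', 'ring_road']
```

Where the two sets diverge (an area perceived as dangerous but with few recorded accidents, say), the disagreement itself becomes a finding to investigate.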

Embedded

In an embedded design, you collect and analyze both types of data at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.

This is a good approach to take if you have limited time or resources. You can use an embedded design to strengthen or supplement your conclusions from the primary type of research design.

Explanatory sequential

In an explanatory sequential design, your quantitative data collection and analysis occurs first, followed by qualitative data collection and analysis.

You should use this design if you think your qualitative data will explain and contextualize your quantitative findings.

Exploratory sequential

In an exploratory sequential design, qualitative data collection and analysis occurs first, followed by quantitative data collection and analysis.

You can use this design to first explore initial questions and develop hypotheses. Then you can use the quantitative data to test or confirm your qualitative findings.

“Best of both worlds” analysis

Combining the two types of data means you benefit from both the detailed, contextualized insights of qualitative data and the generalizable, externally valid insights of quantitative data. The strengths of one type of data often mitigate the weaknesses of the other.

For example, solely quantitative studies often struggle to incorporate the lived experiences of your participants, so adding qualitative data deepens and enriches your quantitative results.

Solely qualitative studies are often not very generalizable, only reflecting the experiences of your participants, so adding quantitative data can validate your qualitative findings.

Method flexibility

Mixed methods are less tied to disciplines and established research paradigms. They offer more flexibility in designing your research, allowing you to combine aspects of different types of studies to distill the most informative results.

Mixed methods research can also combine theory generation and hypothesis testing within a single study, which is unusual for standalone qualitative or quantitative studies.

Mixed methods research is very labor-intensive. Collecting, analyzing, and synthesizing two types of data into one research product takes a lot of time and effort, and often involves interdisciplinary teams of researchers rather than individuals. For this reason, mixed methods research has the potential to cost much more than standalone studies.

Differing or conflicting results

If your analysis yields conflicting results, it can be very challenging to know how to interpret them in a mixed methods study. If the quantitative and qualitative results do not agree or you are concerned you may have confounding variables , it can be unclear how to proceed.

Due to the fact that quantitative and qualitative data take two vastly different forms, it can also be difficult to find ways to systematically compare the results, putting your data at risk for bias in the interpretation stage.


If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Non-probability sampling
  • Quantitative research
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research, but it’s also commonly applied in quantitative research. Mixed methods research always uses triangulation.

These are four of the most common mixed methods designs:

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

George, T. (2023, June 22). Mixed Methods Research | Definition, Guide & Examples. Scribbr. Retrieved March 21, 2024, from https://www.scribbr.com/methodology/mixed-methods-research/


Research Design: Qualitative, Quantitative, and Mixed Methods Approaches 5th Edition

This bestselling text pioneered the comparison of qualitative, quantitative, and mixed methods research design. For all three approaches, John W. Creswell and new co-author J. David Creswell include a preliminary consideration of philosophical assumptions; key elements of the research process; a review of the literature; an assessment of the use of theory in research applications; and reflections about the importance of writing and ethics in scholarly inquiry.

New to this Edition

  • Updated discussion on designing a proposal for a research project and on the steps in designing a research study.  
  • Additional content on epistemological and ontological positioning in relation to the research question and chosen methodology and method. 
  • Additional updates on the transformative worldview. 
  • Expanded coverage on specific approaches such as case studies, participatory action research, and visual methods. 
  • Additional information about social media, online qualitative methods, and mentoring and reflexivity in qualitative methods. 
  • Incorporation of action research and program evaluation in mixed methods and coverage of the latest advances in the mixed methods field.
  • Additional coverage on qualitative and quantitative data analysis software in the respective methods chapters. 
  • Additional information about causality and its relationship to statistics in quantitative methods. 
  • Incorporation of writing discussion sections into each of the three methodologies. 
  • Current references and additional readings are included in this new edition.
  • ISBN-10 1506386709
  • ISBN-13 978-1506386706
  • Edition 5th
  • Publication date January 2, 2018
  • Language English
  • Dimensions 7 x 0.75 x 10 inches
  • Print length 304 pages


Editorial Reviews

About the Author

John W. Creswell, PhD, is a professor of family medicine and senior research scientist at the Michigan Mixed Methods Program at the University of Michigan. He has authored numerous articles and 30 books on mixed methods research, qualitative research, and research design. While at the University of Nebraska-Lincoln, he held the Clifton Endowed Professor Chair, served as Director of the Mixed Methods Research Office, founded SAGE’s Journal of Mixed Methods Research , and was an adjunct professor of family medicine at the University of Michigan and a consultant to the Veterans Administration health services research center in Ann Arbor, Michigan. He was a Senior Fulbright Scholar to South Africa in 2008 and to Thailand in 2012. In 2011, he co-led a National Institute of Health working group on the “best practices of mixed methods research in the health sciences,” and in 2014 served as a visiting professor at Harvard’s School of Public Health. In 2014, he was the founding President of the Mixed Methods International Research Association. In 2015, he joined the staff of Family Medicine at the University of Michigan to Co-Direct the Michigan Mixed Methods Program. In 2016, he received an honorary doctorate from the University of Pretoria, South Africa. In 2017, he co-authored the American Psychological Association “standards” on qualitative and mixed methods research. In 2018 his book on “Qualitative Inquiry and Research Design” (with Cheryl Poth) won the Textbook and Academic Author’s 2018 McGuffey Longevity Award in the United States. He currently makes his home in Ashiya, Japan and Honolulu, Hawaii.

Product details

  • Publisher: SAGE Publications, Inc; 5th edition (January 2, 2018)
  • Item Weight: 1.2 pounds


Using Design-Thinking Approaches to Diversify and Co-create Solutions to Wicked Problems with Multiple Stakeholders

  • By: Sofia Strid & Alain Denis
  • Product: Sage Research Methods: Diversifying and Decolonizing Research
  • Publisher: SAGE Publications Ltd
  • Publication year: 2024
  • Online pub date: March 21, 2024
  • Discipline: Sociology, Education, Psychology, Health, Anthropology, Social Policy and Public Policy, Social Work, Criminology and Criminal Justice
  • Methods: Case study research, Stakeholders, Recruiting participants
  • DOI: https://doi.org/10.4135/9781529691528
  • Keywords: brainstorming, exercise, facilitators, inequalities, inequality, innovation, teams
  • Online ISBN: 9781529691528
  • Copyright: © 2024 SAGE Publications Ltd

The case study draws on design thinking. It offers a research and innovation case method to address so-called wicked problems, that is, complex problems with many interdependent and incomplete factors and variables that require a deep understanding of the stakeholders involved and the innovative approach provided by design thinking. The case takes as its point of departure that inequalities are gendered and intersectional, and it aims to find solutions that reduce these inequalities. It focuses on the use of Open Studios, a participatory method that uses design thinking to solve wicked problems in an inclusive and co-creative way and is characterized by multistakeholder involvement. This case study includes concrete examples of how the Open Studios process was used to design social innovations as pilot actions, implemented in Europe by civil society organizations representing marginalized groups. Open Studios are two-day co-creation workshops for developing innovative solutions that reduce gendered and intersectional inequalities. During the two days, the diverse participants go through a step-by-step process combining divergence (brainstorming) and convergence (solution development). The diversity of the participants and the techniques used lead to different operational outputs: ideas for social innovations, recommendations to policymakers (and other stakeholders), and the identification of knowledge gaps, all with the aim of reducing inequalities. This case study is based on the application of Open Studios in RESISTIRÉ, a European Union Horizon 2020 research and innovation project on the impact of COVID-19 policy responses on inequalities (2021–2023). The method described can be applied at all levels of research, from postgraduate to professorial.

Learning Outcomes

By the end of this case study, readers should be able to

  • recognize and describe the potential and usefulness of co-creation workshops in research in addressing inequalities.
  • design and use solution-oriented workshops in qualitative research.
  • demonstrate a workshop process combining divergence (brainstorming) and convergence (solution development).
  • recognize the added value of co-creation workshops in research contexts, including the diversity of participants.

Project Overview and Context

RESISTIRÉ is a research and innovation project funded by the European Commission (EC) through the Horizon 2020 framework program. It was a rather large project with eleven institutional partners running from April 2021 to September 2023. The objective of RESISTIRÉ was to understand and empower policymakers to mitigate the impacts of the COVID-19 outbreak and its societal and policy responses on behavioral, social, and economic inequalities in 30 countries (EU27, except Malta, as well as Iceland, Serbia, Turkey, and the United Kingdom) and to work toward individual and societal resilience. The project specifically focused on gendered and intersectional inequalities.

Although RESISTIRÉ was a research project, the aim was to produce robust results fast in order to have an impact on policies during the pandemic, based on the hypothesis that there would be more infection waves and that policies could be adapted to reduce impacts on existing and emerging inequalities. The context of the pandemic was characterized, on the one hand, by top-down policies driven by the urgency of safeguarding public health and, on the other hand, by bottom-up reactions from society (i.e., authorities and civil society) to compensate for the negative impacts pandemic policies had on people, particularly the most vulnerable and marginalized. A simple example is the closure of shelters for women suffering from domestic violence to reduce the risk of infection through social contact, set against a dramatic increase in domestic violence and therefore an increased need for protection. Despite its simplicity, this example also illustrates the complexity of the problem in a social context where many factors were interdependent, knowledge of potential consequences was incomplete, and the problem definition changed as new possible policy solutions were designed and/or implemented (Rittel & Webber, 1973).

RESISTIRÉ collected data on policies, societal reactions, and quantitative as well as qualitative data via surveys, workshops, and semi-structured and narrative interviews, and then analyzed and translated these into insights. In total, more than 1,200 individuals from diverse inequality groups (based on, for example, age, dis/ability, ethnicity, gender, gender identity, LGBTQ+, and social class) were involved as research participants. The research insights were then used to develop solutions. Two types of solutions were developed: operational recommendations to improve policies and societal responses and social innovations. In addition, gaps in existing knowledge on inequalities and/or the impact of COVID-19 policies were identified and translated into thematic research agendas.

Section Summary

  • The context of the case is the breakout of the COVID-19 pandemic in early 2020 and the consequences of the public health policies on the socioeconomic situation.
  • RESISTIRÉ looked specifically at the impact of the COVID-19 policies on inequalities.
  • The case is results oriented; understanding the problem is used to develop potential solutions and test them, including social innovations that meet unmet needs.

Research Design

RESISTIRÉ was based on a three-step methodology, including research, co-creation, and solutions (see Figure 1) (Strid & Denis, 2024). Open Studios are a central activity in this step-by-step approach, starting with collecting evidence, translating it into insights, and transforming these insights into solutions through co-creation (Boyer, Cook, & Steinberg, 2011; see also Kerremans & Denis, 2022, 2023; Kerremans, Živković, & Denis, 2021).

Figure 1. RESISTIRÉ Methodological Step-by-Step Three-Cycle Process. The infographic shows three steps connected linearly: Step 1, quantitative indicators, mapping of policies, and narratives; Step 2, Open Studios; Step 3, knowledge gaps and concept development workshops, which connect both ways to advocacy, the research agenda, and pilot projects.

Figure 1 illustrates the research process followed in each cycle of RESISTIRÉ. The cycle was performed three times, each over a period of six to eight months between April 2021 and May 2023. Each cycle started with research activities taking place in parallel:

  • Mappings, consisting of desk-based research conducted by national native-language-speaking researchers in each of the 30 countries to identify three types of initiatives launched in response to the COVID-19 policies: (1) bottom-up initiatives from civil society; (2) initiatives by local authorities, or adaptations to policies, to alleviate the impact on inequalities or vulnerable people (Cibin et al., 2021, 2022, 2023); and (3) surveys launched by diverse stakeholders (including the scientific community) to monitor the impacts and/or reactions (including the behavior of people and stakeholders) in response to COVID-19 policies. These surveys were called rapid assessment surveys (RAS) and were often launched by civil society organizations. In total, 326 societal responses, 329 policy responses, and 316 RAS were mapped and analyzed (Harroche et al., 2023; Stovell et al., 2021, 2022).
  • Qualitative research, consisting of individual narratives, semi-structured interviews, and Pan-European thematic workshops with experts and frontline workers. The narratives were collected from people on how they had experienced the COVID-19 period; each narrator faced at least one vulnerability or ground for discrimination. In total, 794 narratives, 102 semi-structured interviews, and 14 European workshops with 212 participants were conducted over the three cycles (see Axelsson et al., 2021; Sandström et al., 2022, 2023).
  • Quantitative research, consisting of analyzing secondary sources. Indicators on inequalities were monitored based on official, published statistics, such as Eurostat. The RAS previously mentioned were analyzed, and collaborations were started with 12 of these to integrate additional themes linked to better understanding the impact on inequalities.

Insights from the research were used to decide on the themes of the wicked problems addressed in the 12 Open Studios. At the end of each cycle, four Open Studios were organized. The ideas coming from the Open Studios were combined with the research results and insights to be further developed and to produce the following:

  • Recommendations to policymakers and other stakeholders. These were produced as operational recommendations, published in the form of factsheets, using research results to justify the recommendations made. Over the three cycles, 22 factsheets were produced.
  • Pilot projects. These are ideas of potential social innovations that were prototyped and tested. Calls were launched to find and select nongovernmental organizations (NGOs) that would be ready to run the pilot projects. Nine pilot projects took place in seven countries (see Denis et al., 2024 ).
  • Research agendas. Gaps in knowledge were identified and used to define the focus of the next cycle of research activities in RESISTIRÉ, but also to produce thematic research agendas that were promoted to research funders, with the purpose of inspiring them to include research that improves understanding of how inequalities can be reduced (Živković et al., 2022; Sandström & Strid, 2023a; Sandström, Callerstig, & Strid, 2023).

Open Studios played a central role in this research process as the step where solutions were co-created. Participants in the Open Studios were a mix of researchers from the RESISTIRÉ team and externally recruited participants, with on average 20–22 participants per Open Studio and a total of 255 participants.

Box 1: Open Studio

The Open Studios should be considered an action-oriented analysis of the research results from the previous steps of the project. The output consists of ideas for concrete action, input for recommendations to reshape policies, and questions that still need to be answered (missing insights or knowledge). The Open Studios approach is a technique developed to design policies in a participative way that brings together multiple kinds of expertise, including user experience. The original concept, as described in Boyer, Cook, and Steinberg (2011), had a duration of five full days. The Open Studio approach used in RESISTIRÉ lasts two days, given the scope of the issues covered and the feasibility of recruiting participants. During an Open Studio, participants go through periods of divergence (exploring in an open way and brainstorming) and of convergence (bringing ideas together into concepts of potential solutions). Different exercises shape this process. For two of the exercises used on the first day (divergence), better stories and personas are used to stimulate creativity.

  • Design thinking, or design as a process, was used to translate research results into operational solutions.
  • The project was designed to produce results fast, with three cycles of performing research activities, co-creation, and producing solutions.
  • Long co-creation workshops were used to develop ideas for potential solutions as part of the process.

Research Practicalities

The Open Studio participants were a mix of researchers from the RESISTIRÉ consortium teams and externally recruited participants. The invited participants were recruited because of their expertise or their link with the theme and/or the target group. The expertise can also be user expertise, meaning that when a specific target group is linked to the theme, it is good to have members of that group present. For vulnerable people, this is not always possible, owing to ethical concerns as well as the duration of the Open Studio: it is difficult to recruit people for two full consecutive days. In RESISTIRÉ, this was the case, for example, for medical professionals, who had to prioritize the care of people suffering from the public health situation and could not free themselves for two consecutive days. Three techniques were used to successfully circumvent this problem:

  • 1. Personas developed based on the research results. Personas are archetypes of real persons (see, e.g., Nielsen & Storgaard Hansen, 2014). They are user centered, which means that the design puts the user group at the center of every stage of the process. This poses a challenge to the researcher, namely the risk that personas stereotype marginalized groups or objectify vulnerable people. To avoid this, development of the personas began with an in-depth analysis of the needs and interests of the intended target group based on a large amount of material, in the case of RESISTIRÉ nearly 300 narrative interviews per cycle. The vast number of narratives collected served as inspiration for developing the personas and ensured diversity, combining different situations, inequalities, and their intersections as directly experienced by marginalized and vulnerable groups. Personas thus were an evidence-based and efficient way to enable participants to identify with the situations of vulnerable groups and to empathize with the target group during the Open Studio.
  • 2. Better stories. These were selected from a mapping of inspiring initiatives developed by civil society organizations (Cibin et al., 2023) to mitigate the impact of COVID-19 policies on vulnerable people (see Georgis, 2013). The better stories do not have to be the “best” stories but rather examples of actions and activities that are inclusive of marginalized voices and groups. These were used to inspire participants to look for innovative solutions that can promote social change and create more just and equitable societies. The better stories thus helped to keep participants focused on solutions rather than getting stuck on problem formulations.
  • 3. Recruitment of people who represent the target group. Most often this included fieldworkers and NGOs assisting, and in direct contact with, the vulnerable people. This approach solves two problems: (a) it secures representation of the interests and needs of vulnerable groups, and (b) it avoids stigmatizing participants by labeling certain individuals as vulnerable, thereby addressing the so-called Crenshaw’s dilemma (Walby, Armstrong, & Strid, 2012). It still brings in the voices of vulnerable people, because these professionals and activists have a thorough knowledge of the group’s needs, problems faced, and attitudes.

In addition, participants were remunerated to cover their loss of income for the two days, which further helped to recruit professionals and activists.

A second objective, or challenge, when recruiting invited participants was to ensure diversity of profiles. RESISTIRÉ had a Pan-European scope, which added to the complexity. When the focus of an Open Studio was defined, an ideal mix of profiles was also agreed on. All consortium members were involved in the recruitment, helping to develop a long list based on the suggested profiles. Invitations were then sent out in batches to ensure that the diversity objective was reached. Diversity objectives independent of the thematic focus always included user expertise (because consortium members had a research profile) and a mix of nationalities, genders, and ages. There was also the objective of having at least one creative profile among the invited participants. This could be an artist, designer, or architect, but always someone with a specific interest in the theme of the Open Studio. Recruiting eight external participants was a challenge, and while the fact that they were remunerated for their loss of income during participation likely helped, the speed of execution of the project posed a problem. Themes were agreed on, on average, six weeks before the Open Studio took place, meaning that recruitment started late, with all the difficulties linked to blocking time in busy calendars. Two advantages helped here: first, the size of the consortium, with many people in many different countries able to support recruitment, and second, the design of the project, which meant that contacts with potential participants had already been made during earlier project steps. Established networks were used, and we drew on the contacts made during the research activities of the project.

Apart from recruitment, development of the input material for the Open Studios also had to be done quickly. Better stories were developed based on the material available from the research ( Cibin et al., 2022 ; Sandström et al., 2022 , 2023 ). For reasons of consistency, though, they all needed to be presented according to a standard template. Better stories were sent to the participants (internal and external), together with a briefing, exactly 1 week before the Open Studio took place.

Facilitation is another challenge, given the nature of the approach, the length of the Open Studio, and the number of participants. Nearly all exercises were conducted in small groups, with reporting back in plenary to share results. Overall facilitation was done by two main facilitators who switched roles (one facilitating while the other drew conclusions and main lessons). Each of the small groups also had a facilitator, with the two main facilitators among them. Facilitators need to be trained in the techniques used during the Open Studio and should be sufficiently knowledgeable about the theme or topic addressed. Some of the needed skills depend on the tools chosen for the Open Studio, for example, Zoom and Miro boards (for online Open Studios) and brainstorming techniques such as Lotus Blossom (both online and offline). Other skills relate to workshop facilitation: active listening, bringing participants to a mutual understanding, maintaining an unbiased perspective, conflict resolution, commitment to collaboration, creative problem solving, showing positivity and enthusiasm, making sure that all participants contribute, and, not least, time management. In the case of RESISTIRÉ, thematic knowledge was achieved by involving the whole facilitation team in the research step and in the development of the materials (the initial thematic briefing of participants, the better stories, and the personas).

For the online format, two different platforms were used in parallel:

  • Zoom as an audiovisual meeting platform, and
  • Miro as a virtual whiteboard where participants can work both individually and together.

Zoom was selected mainly for its ability to combine small groups (in the form of breakout rooms) and plenary sessions easily. Miro was selected because of the low barrier to entry for first-time users: participants can start working in it after a tutorial of a couple of minutes, and it allows participants to use digital tools that resemble their physical counterparts in a face-to-face environment (e.g., posters and sticky notes) as closely as possible.

The physical arrangement for the face-to-face format was a large room with a big wall where posters could be put up. The room also should be big enough to allow participants to split into four smaller groups, or there should be sufficient breakout rooms available.

Table 1 provides an overview of the structure of an Open Studio as planned and also as executed 12 times over a period of 18 months. The same structure was used for online and in-person Open Studios.

  • An important ingredient of the success of a co-creation workshop is the choice of the participants: mixing different profiles is a key factor of success.
  • Recruiting participants from the target group is often a challenge, particularly for vulnerable or marginalized groups. A solution is to recruit people who are directly involved with the target group and its challenges, whether professionally or as a volunteer.
  • The researchers/organizers need to carefully consider the speed of the project and time allocation for recruiting participants to the Open Studios. This often takes more time than planned, and potential participants often have busy calendars.

Method in Action: Divergence and Convergence

The format follows the design approach illustrated by the double-diamond model, first proposed by the British Design Council in 2003 (Design Council, n.d.): alternating periods of divergent and convergent thinking. The first day was mainly devoted to divergent thinking, with exercises meant to develop new ideas and identify directions for solutions. The exercises with better stories and personas had these as clear objectives (Georgis, 2013; Nielsen & Storgaard Hansen, 2014). The last exercise of the first day was a straightforward brainstorming exercise using the Lotus Blossom technique, a creative brainstorming method for idea generation. It uses a map/grid that starts from one central theme (placed in the middle of the blossom, as the iris) with eight conceptual, deeper subthemes (placed as petals) flowing out from the central theme. Next, each of the eight subthemes/petals is used as a central theme to generate eight more subthemes. The technique allows an open but still structured discussion around predefined questions (for the Lotus Blossom technique, see, e.g., Yellow Window, n.d.). All three exercises were done in smaller groups to facilitate creativity, and results were then shared in plenary, with participants acting as rapporteurs toward the larger group. This allowed all participants to be aware of all ideas, even if they were not part of the discussion that led to them. The result at the end of day 1 was a huge number of ideas and sticky notes.
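The two-level structure of a Lotus Blossom grid can be sketched in a few lines of code. This is an illustrative sketch only: the `lotus_blossom` function, the placeholder expansion, and the demo theme are invented for this example, and in a real Open Studio the expansion step is performed by participants brainstorming, not by code.

```python
# Illustrative sketch of the Lotus Blossom structure (not part of the case
# study's materials): one central theme spawns eight subthemes (petals), and
# each subtheme becomes the centre of its own eight-petal grid, giving
# 8 x 8 = 64 idea slots in total.

def lotus_blossom(central_theme, expand):
    """Build a two-level Lotus Blossom map.

    `expand` is any callable returning eight sub-ideas for a theme; here it
    stands in for the participants' brainstorming.
    """
    grid = {}
    for subtheme in expand(central_theme):
        grid[subtheme] = expand(subtheme)
    assert len(grid) == 8, "the first ring must have eight petals"
    return {central_theme: grid}

# Placeholder expansion for illustration only.
demo = lotus_blossom(
    "reducing pandemic inequalities",
    lambda theme: [f"{theme} / idea {i}" for i in range(1, 9)],
)
total = sum(len(ideas) for ideas in demo["reducing pandemic inequalities"].values())
print(total)  # 64
```

The point of the sketch is the shape of the exercise: one ring of eight petals, each re-expanded into eight, so a single theme yields up to 64 structured idea slots.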

Box 2: Different Brainstorming Behaviors Online and Face-to-Face

Of the 12 Open Studios, nine were held online and three face-to-face. The same approach and exercises were used in both formats. One difference between them is linked to the behavior of participants during brainstorming. In a face-to-face workshop, participants typically share an idea by explaining it to the group. The facilitator or the participant then writes down the idea and puts it on the poster. During this process, other participants start writing their ideas on sticky notes to keep them and share them later. The same process happens online as well, but ideas written while someone is speaking appear instantly on the online whiteboard. This has a few consequences: it can speed up the process, it creates the risk that not all ideas are shared and explained to the group (it is the role of the facilitator not to let this happen), and it can lead to more ideas being produced in the same time period. It also allows people who are less inclined to speak up in a group to express their ideas in writing instead.

Making All the Ideas of Day 1 Converge

Relevant ideas from day 1 were compiled into a mind map by the facilitation team after the end of the day, to be used as the starting point for the second day. This can be understood as creating a mirror for the participants, showing clusters of ideas that were produced by the group. The clustering is done according to potential action lines, even if these are still vague or theoretical.

This mind map and its use in the plenary group should be considered the first convergence: a structuring of ideas according to potential action directions. The mind map, created by the facilitation team, is challenged by the participants: they are asked to say what they agree with, what they disagree with, and what they consider missing (because the mind map is not exhaustive).

At the end of the first session of the second day, there is consensus in the group on which action directions to work on: six to eight potential ideas are roughly defined, and participants are asked to choose two they are interested in working on, one in the morning and one in the afternoon. The choice of these action directions is sometimes quite chaotic, because it combines individual and group decisions. The challenge is to agree on the scope and focus of each action direction, to choose a maximum of eight (four for the morning and four for the afternoon), and then to have all individual participants make their choices, because they can participate in only two. This can lead to action directions with only a couple of participants and to decisions to either drop them or convince other participants to change their choices. A longer break was scheduled and, in a few Open Studios, was used to allow the facilitation team to generate a proposal before the first small-group work started.
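The scheduling constraint just described (at most eight action directions, four per half-day, each participant joining exactly one morning and one afternoon session) can be expressed as a small sanity check. The function below is a hypothetical sketch, not a tool used in RESISTIRÉ; the `undersubscribed` name and the `min_group` threshold are invented for illustration.

```python
# Hypothetical sketch of the day-2 allocation check (not from the case study):
# at most four action directions per half-day; each participant chooses one
# morning and one afternoon session; directions that attract too few
# participants are candidates to drop or merge.

def undersubscribed(morning, afternoon, choices, min_group=3):
    """Return the action directions with fewer than `min_group` participants.

    `choices` maps each participant to a (morning_direction,
    afternoon_direction) pair; `min_group` is an illustrative threshold only.
    """
    assert len(morning) <= 4 and len(afternoon) <= 4, "four directions per half-day"
    counts = {d: 0 for d in morning + afternoon}
    for am, pm in choices.values():
        assert am in morning and pm in afternoon, "one session per half-day"
        counts[am] += 1
        counts[pm] += 1
    return [d for d, n in counts.items() if n < min_group]

# Example: direction "B" attracts nobody and "D" only one participant, so
# both would be dropped or their would-be participants redirected.
flagged = undersubscribed(
    morning=["A", "B"],
    afternoon=["C", "D"],
    choices={"p1": ("A", "C"), "p2": ("A", "C"), "p3": ("A", "D")},
    min_group=2,
)
print(flagged)  # ['B', 'D']
```

In the workshop this check is done by eye; the sketch only makes explicit the trade-off the text describes: individual choices can leave some directions too thin to develop, forcing a drop or a renegotiation.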

Action Ideas as Defined Lead to a New Divergence–Convergence Exercise

Each small group starts the development of the action idea with a new structured brainstorming: a very open discussion, inspired by the cluster of day-1 ideas that served as the basis for the action idea. Rather than starting straight away with concretizing the potential action, the group brainstorms on “Who is this for?” and “What is our ambition?” After half of the available time has been used, the facilitator “moves” the group from one side of the poster to the other, gently forcing the group to “converge” by agreeing on the concrete objectives of the action, its main activities, and its target groups.

Challenging Moments in the Process

There are three challenging moments in this process where things can go wrong; all three are linked to the transition between divergence and convergence:

  • The clustering of ideas at the end of day 1,
  • The choice of action directions to work on, and
  • Making the action direction a concrete action.

Day 2 starts with the presentation of all ideas produced during day 1. The ideas are presented to the participants in clusters, each cluster having the potential to be the start of one or more action directions. This exercise is challenging because there are many ideas and it has to be done under time pressure. Even so, it never failed in RESISTIRÉ. The timing was challenging because the work had to be done overnight, but this was made possible by splitting tasks and roles among the facilitators (i.e., rough clustering, adding ideas to the clusters and fine-tuning, and review by the full facilitation team). Reactions from the full group also functioned as a safety net: if an important action direction was overlooked by the facilitators, it would be picked up by one or more of the participants.

With regard to the choice of actions to work on, the main challenge is balancing individual interests and agreeing to leave out some of the action directions to arrive at a maximum of eight. A natural tendency when seeking compromise is to merge different action ideas. However, this makes the next stage in the process more difficult, because combining different action ideas leads to more complexity, more differences of opinion, and more difficulty defining a concrete action. This challenge can be overcome by not compromising and keeping a clear focus from the start. If this is not possible, the need to define a clearer focus is transferred to the small group, and it becomes the responsibility of the facilitator of that session to push for a choice and a decision.

The last challenge is the biggest, and it is where there were some failures in nearly all Open Studios: not all ideas were translated successfully into concrete action ideas that could be operationalized in the project. Different factors play a role, but the main one is the capacity of the small group in charge to go from idea to concrete action. Consensus is not always reached, which means that the discussion goes on while time runs out. The original Open Studio approach (Boyer, Cook, & Steinberg, 2011) lasted five days, but that required a lot of time and money, which is why the RESISTIRÉ approach described here was designed to run over two days. Participants sometimes felt that there was not enough time to discuss the problems and solutions. Despite this, the end result produced more action ideas than could be concretized as innovations and pilot actions. Factors that helped to maximize the number of concrete ideas in the case of RESISTIRÉ include the following:

  • The openness to different types of outputs. A result could be a recommendation to policymakers or specific stakeholders, or a potential social innovation that could be tested in RESISTIRÉ. All Open Studios produced more than one idea for a potential pilot project, which was, of course, the most challenging type of output.
  • The time factor. The groups worked under time pressure, which forced them to become practical and concrete.
  • Using consecutive steps of divergence and convergence helps to co-create, combining the need for open discussion with the need to generate solutions.
  • In RESISTIRÉ, there were two consecutive cycles of divergence and convergence. The first cycle (day one and the start of day two) led to a choice of action or innovation directions. The second cycle developed these action directions into operational solutions.
  • Co-creation workshops can also be organized online, but this affects their organization and facilitation.
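The two-cycle structure above can be sketched schematically. The following Python sketch is purely illustrative and not part of the RESISTIRÉ toolkit: the function name, the grouping key, and the example themes are all assumptions. It shows the shape of one convergence step, in which many brainstormed ideas are clustered and capped at a maximum number of action directions.

```python
from collections import defaultdict

def converge(ideas, key, max_directions):
    """Cluster ideas by a grouping key and keep at most `max_directions` clusters,
    ranked by how many ideas they attracted (a stand-in for group voting)."""
    clusters = defaultdict(list)
    for idea in ideas:
        clusters[key(idea)].append(idea)
    ranked = sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)
    return dict(ranked[:max_directions])

# Cycle 1: diverge (brainstormed ideas from day one), then converge to <= 8 directions.
# The themes and idea texts below are invented for illustration.
day_one_ideas = [
    {"theme": "care work", "text": "subsidized respite care"},
    {"theme": "care work", "text": "flexible leave policies"},
    {"theme": "digital access", "text": "device lending libraries"},
    {"theme": "labour market", "text": "re-entry mentoring"},
]
directions = converge(day_one_ideas, key=lambda i: i["theme"], max_directions=8)

# Cycle 2 would repeat the same pattern inside each small group:
# diverge on solutions for one direction, then converge on a concrete action.
for theme, ideas in directions.items():
    print(theme, "->", len(ideas), "idea(s)")
```

The point of the sketch is only the alternation: each cycle widens the option space and then narrows it again, so that the second cycle starts from a bounded set of directions rather than the full pool of raw ideas.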

Practical Lessons Learned

Running extended creative workshops online.

Open Studios were conceived from the start to take place online, if needed, because RESISTIRÉ was a project on COVID-19 that took place during the pandemic. The first wave in 2021 was conducted fully online because travel was still impossible for most of the participating countries and teams. The approach and exercises therefore were designed to work both online and face-to-face.

Use of an online format for such long workshops went surprisingly well, with a high level of satisfaction expressed by participants. Tables 2 and 3 offer an overview of some advantages and disadvantages of both formats, but the key message is that there is no difference in the essential ingredients of a successful Open Studio: the quality of discussion, brainstorming, and divergence and convergence worked as planned in both formats.

When the project started, most participants had no experience with Miro, which was a barrier. This was solved by organizing a short tutorial at the start of the Open Studio and putting people who had difficulties in separate Zoom rooms, with one of the co-facilitators, to help them get acquainted with the basic functions. This need disappeared in the second and third waves, both because more people had gained experience with this type of tool and because we invited participants unfamiliar with Miro to join a tutorial 15 minutes before the start of the Open Studio, which saved time.

Another lesson learned during the first Open Studio was not to allow participation from a smartphone. We had not anticipated that this could happen, but it did, and using Miro on a smartphone is not workable. We explicitly informed participants that this was not possible and that they had to either switch to a computer or not participate.

Zoom fatigue is a reality for both participants and facilitators. Even though participants expressed surprise at how much energy they had left after two days, we decided to integrate short Qi-Gong sessions during the afternoon breaks on both days. Participation was voluntary, but most participants joined and expressed their gratitude. We have also applied this in the face-to-face format. Two exercises were used, both very simple to explain and to perform standing next to a desk or computer.

Looking for Solutions Versus Understanding Problems

The concept of the Open Studio and its flow are geared toward solutions, but facilitation remains an important factor for success. Participants, particularly researchers, have a natural tendency to concentrate on understanding and analyzing the problem rather than looking for solutions, and this needs constant attention from facilitators. This was particularly the case during day two, when solutions were developed and discussed and priorities were agreed on. Participants had to vote at the end of the day on all ideas, not only on the two ideas they had been working on. It is therefore important to have created a group feeling, a common goal, and an openness to other participants’ ideas and opinions, as well as an awareness of the context of the RESISTIRÉ project and of what is feasible with the expertise and resources of the consortium. This worked surprisingly well in RESISTIRÉ. Participants attributed this to the “natural” flow of the Open Studio and to the diversity of the participants. They felt empowered to discover that people from other countries, often with very different expertise, shared the same ambition to solve problems and find solutions, and could actually do so in a rather short period of time.
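The end-of-day vote across all ideas can be pictured as a simple dot-voting tally. This is a hedged sketch, not RESISTIRÉ’s actual tooling: the idea labels, ballot sizes, and shortlist size are invented for illustration.

```python
from collections import Counter

def tally_dot_votes(ballots, shortlist_size):
    """Each ballot is a list of idea labels a participant spent their dots on.
    Returns the shortlist of ideas with the most dots."""
    counts = Counter()
    for ballot in ballots:
        counts.update(ballot)
    return [idea for idea, _ in counts.most_common(shortlist_size)]

# Hypothetical ballots: each participant distributes three dots over any ideas,
# not only the two they worked on themselves.
ballots = [
    ["mentoring", "mentoring", "helpline"],
    ["helpline", "toolkit", "mentoring"],
    ["toolkit", "toolkit", "mentoring"],
]
print(tally_dot_votes(ballots, shortlist_size=2))  # mentoring: 4 dots, toolkit: 3
```

Letting everyone vote on all ideas, as described above, is what makes the tally a group decision rather than an aggregate of each small group’s preference for its own work.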

These solutions are only roughly developed at the end of the Open Studio. They need to be developed further, and their feasibility and innovative nature need to be checked. This process can already start during the Open Studio by harnessing the expertise and knowledge of the participants. A specific part of the Miro board was used for this purpose, inviting participants to add links to information sources, examples, and inspiration that could help the consortium’s team in this process.

  • Running long creative workshops online is possible, but it requires strict adherence to timing and respect for participants’ need for decent breaks.
  • Researchers tend to keep analyzing the problem, so keeping the focus on solutions is a challenge.
  • Being creative together creates a bond and boosts energy.

This results-oriented case study has described a research and innovation method, Open Studios, for creating solutions to gendered and intersectional inequalities caused by the COVID-19 pandemic, although the method is applicable to a much wider range of empirical fields. The aim has been to enable other researchers to apply the method by recognizing the potential and usefulness of co-creation workshops in research based on sets of qualitative and quantitative data.

Open Studios are action-oriented analyses of research results, carried out in two-day co-creation workshops that develop solutions to reduce inequalities. The co-creative process uses design as a process, as illustrated by the Design Council’s double diamond: periods and exercises of divergent thinking alternate with periods and exercises of convergent thinking. In the RESISTIRÉ Open Studios, two divergence–convergence cycles were executed over a two-day period. The first cycle led to convergence on six to eight action directions, which were further developed during the second cycle into potential solutions.

Use of this method enables participants to address so-called wicked problems: problems characterized by social complexity, a diversity of stakeholders, incomplete knowledge, and constant flux with interlocking and evolving constraints, and for which there is no final or definitive solution, only implementable solutions that are simply “better,” “worse,” or “good enough.” In this case, these solutions were developed as operational recommendations, pilot actions, and new research agendas to address and reduce social and economic inequalities and to give voice to marginalized groups. The method is based on Open Studios as applied in RESISTIRÉ, an EU Horizon 2020 research project on the impact of COVID-19 societal and policy responses on inequalities.

The case study concludes that design thinking, or design as a process, can be used to translate research results into operational solutions that take multiple inequalities into account, through diverse multistakeholder co-creation, a vast empirical data set, and the process of divergence and convergence. The method can be used to produce operational results fast without increasing existing inequalities.

Discussion Questions

  • 1. Discuss the differences and similarities between various methods of participatory research. What are the strengths and weaknesses of the Open Studios approach?
  • 2. How could Open Studios be designed and implemented to empower both participants and stakeholders?
  • 3. What is the importance of the diversity of participants in an Open Studio? Would a homogeneous participant group generate results similar to those of a heterogeneous group?
  • 4. What are the advantages and/or disadvantages of the online and offline Open Studios formats?
  • 5. How can Open Studios be improved to produce solutions that better represent the needs and interests of the most marginalized and vulnerable?

Multiple-Choice Quiz Questions

1. Which of the following methods was the basis for the translation of research results into operational solutions?


2. Which of the following is a way to overcome the challenges in recruiting participants from the target groups to an Open Studio?


3. Given that the Open Studio and the flow are geared toward solutions, what is a key challenge for facilitators in involving researchers as participants in an Open Studio?

4. What are the core elements of convergence–divergence?

5. What is the advantage in the process of consecutive steps of divergence and convergence?


