Exploratory Research – Types, Methods and Examples

Exploratory Research

Definition:

Exploratory research is a type of research design that is used to investigate a research question when the researcher has limited knowledge or understanding of the topic or phenomenon under study.

The primary objective of exploratory research is to gain insights and gather preliminary information that can help the researcher better define the research problem and develop hypotheses or research questions for further investigation.

Exploratory Research Methods

There are several types of exploratory research, including:

Literature Review

This involves conducting a comprehensive review of existing published research, scholarly articles, and other relevant literature on the research topic or problem. It helps to identify the gaps in the existing knowledge and to develop new research questions or hypotheses.

Pilot Study

A pilot study is a small-scale preliminary study that helps the researcher to test research procedures, instruments, and data collection methods. This type of research can be useful in identifying any potential problems or issues with the research design and refining the research procedures for a larger-scale study.

Case Study

This involves an in-depth analysis of a particular case or situation to gain insights into the underlying causes, processes, and dynamics of the issue under investigation. It can be used to develop a more comprehensive understanding of a complex problem and to identify potential research questions or hypotheses.

Focus Groups

Focus groups involve a group discussion that is conducted to gather opinions, attitudes, and perceptions from a small group of individuals about a particular topic. This type of research can be useful in exploring the range of opinions and attitudes towards a topic, identifying common themes or patterns, and generating ideas for further research.

Expert Opinion

This involves consulting with experts or professionals in the field to gain their insights, expertise, and opinions on the research topic. This type of research can be useful in identifying the key issues and concerns related to the topic, and in generating ideas for further research.

Observational Research

Observational research involves gathering data by observing people, events, or phenomena in their natural settings to gain insights into behavior and interactions. This type of research can be useful in identifying patterns of behavior and interactions, and in generating hypotheses or research questions for further investigation.

Open-ended Surveys

Open-ended surveys allow respondents to provide detailed and unrestricted responses to questions, providing valuable insights into their attitudes, opinions, and perceptions. This type of research can be useful in identifying common themes or patterns, and in generating ideas for further research.

Data Analysis Methods

Common data analysis methods used in exploratory research include the following:

Content Analysis

This method involves analyzing text or other forms of data to identify common themes, patterns, and trends. It can be useful in identifying patterns in the data and developing hypotheses or research questions. For example, if the researcher is analyzing social media posts related to a particular topic, content analysis can help identify the most frequently used words, hashtags, and topics.
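As an illustrative sketch, word and hashtag frequencies can be computed with Python's standard library alone. The posts below are invented placeholders, not real data:

```python
import re
from collections import Counter

# Hypothetical social media posts about a topic (invented examples)
posts = [
    "Loving the new #ClimateAction policy, great step forward",
    "Is #ClimateAction enough? We need faster change",
    "Great news on renewable energy today #ClimateAction #Renewables",
]

# Tokenize into lowercase words, keeping the leading '#' on hashtags
tokens = []
for post in posts:
    tokens.extend(re.findall(r"#?\w+", post.lower()))

# Count hashtags and plain words separately
hashtags = Counter(t for t in tokens if t.startswith("#"))
words = Counter(t for t in tokens if not t.startswith("#"))

print(hashtags.most_common(2))  # most frequent hashtags
print(words.most_common(3))     # most frequent plain words
```

A real content analysis would add stop-word removal and a coding scheme grounded in the research question; this only shows the frequency-counting core.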

Thematic Analysis

This method involves identifying and analyzing patterns or themes in qualitative data such as interviews or focus groups. The researcher identifies recurring themes or patterns in the data and then categorizes them into different themes. This can be helpful in identifying common patterns or themes in the data and developing hypotheses or research questions. For example, a thematic analysis of interviews with healthcare professionals about patient care may identify themes related to communication, patient satisfaction, and quality of care.
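Once excerpts have been hand-coded, grouping codes under candidate themes can be sketched in a few lines of Python. The excerpts, codes, and themes here are hypothetical, and real thematic analysis is an interpretive process rather than a mechanical one:

```python
from collections import defaultdict

# Hypothetical coded interview excerpts: each has been assigned one or
# more codes by the researcher during an initial reading
coded_excerpts = [
    {"text": "The nurse explained everything clearly",
     "codes": {"clear explanation"}},
    {"text": "I waited hours with no updates",
     "codes": {"lack of updates", "waiting time"}},
    {"text": "Staff kept me informed at every step",
     "codes": {"regular updates"}},
]

# Candidate themes, each defined as a set of related codes
themes = {
    "communication": {"clear explanation", "lack of updates", "regular updates"},
    "patient satisfaction": {"waiting time"},
}

# Group excerpts under every theme whose codes they share
by_theme = defaultdict(list)
for excerpt in coded_excerpts:
    for theme, theme_codes in themes.items():
        if theme_codes & excerpt["codes"]:
            by_theme[theme].append(excerpt["text"])

for theme, texts in by_theme.items():
    print(theme, "->", len(texts), "excerpts")
```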

Cluster Analysis

This method involves grouping data points into clusters based on their similarities or differences. It can be useful in identifying patterns in large datasets and grouping similar data points together. For example, if the researcher is analyzing customer data to identify different customer segments, cluster analysis can be used to group similar customers together based on their demographic, purchasing behavior, or preferences.
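A minimal k-means clustering sketch in pure Python illustrates the idea; the customer records are invented, and in practice a library such as scikit-learn would be used instead of a hand-rolled loop:

```python
import random

# Hypothetical customer records: (age, annual spend in dollars)
customers = [
    (22, 300), (25, 350), (23, 320),     # younger, lower-spend customers
    (55, 2200), (60, 2500), (58, 2400),  # older, higher-spend customers
]

def kmeans(points, k, iterations=20, seed=0):
    """Minimal k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        # Recompute each centroid; keep the old one if a cluster is empty
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

segments = kmeans(customers, k=2)
for segment in segments:
    print(segment)
```

With this toy data the algorithm separates the two customer segments; real segmentation work would also standardize features and choose k with a validation measure.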

Network Analysis

This method involves analyzing the relationships and connections between data points. It can be useful in identifying patterns in complex datasets with many interrelated variables. For example, if the researcher is analyzing social network data, network analysis can help identify the most influential users and their connections to other users.
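As a small sketch, in-degree (how many users follow a given account) is one simple influence proxy. The edge list is fabricated, and a real analysis would use richer centrality measures, for instance via the networkx library:

```python
from collections import defaultdict

# Hypothetical "who follows whom" edges: (follower, followed)
edges = [
    ("ana", "sam"), ("ben", "sam"), ("cia", "sam"),
    ("ana", "ben"), ("sam", "cia"),
]

# In-degree: count how many edges point at each user; a user followed
# by many others is a candidate influential node
in_degree = defaultdict(int)
for follower, followed in edges:
    in_degree[followed] += 1

# Rank users by in-degree as a rough influence measure
ranking = sorted(in_degree.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)
```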

Grounded Theory

This method involves developing a theory or explanation based on the data collected during the exploratory research process. The researcher develops a theory or explanation that is grounded in the data, rather than relying on pre-existing theories or assumptions. This can be helpful in developing new theories or explanations that are supported by the data.

Applications of Exploratory Research

Exploratory research has many practical applications across various fields. Here are a few examples:

  • Marketing Research: In marketing research, exploratory research can be used to identify consumer needs, preferences, and behavior. It can also help businesses understand market trends and identify new market opportunities.
  • Product Development: In product development, exploratory research can be used to identify customer needs and preferences, as well as potential design flaws or issues. This can help companies improve their product offerings and develop new products that better meet customer needs.
  • Social Science Research: In social science research, exploratory research can be used to identify new areas of study, as well as develop new theories and hypotheses. It can also be used to identify potential research methods and approaches.
  • Healthcare Research: In healthcare research, exploratory research can be used to identify new treatments, therapies, and interventions. It can also be used to identify potential risk factors or causes of health problems.
  • Education Research: In education research, exploratory research can be used to identify new teaching methods and approaches, as well as identify potential areas of study for further research. It can also be used to identify potential barriers to learning or achievement.

Examples of Exploratory Research

Here are some more examples of exploratory research from different fields:

  • Social Science: A researcher wants to study the experience of being a refugee, but there is limited existing research on this topic. The researcher conducts exploratory research through in-depth interviews with refugees to better understand their experiences, challenges, and needs.
  • Healthcare: A medical researcher wants to identify potential risk factors for a rare disease, but there is limited information available. The researcher conducts exploratory research by reviewing medical records and interviewing patients and their families to identify potential risk factors.
  • Education: A teacher wants to develop a new teaching method to improve student engagement, but there is limited information on effective teaching methods. The teacher conducts exploratory research by reviewing existing literature and interviewing other teachers to identify potential approaches.
  • Technology: A software developer wants to develop a new app, but is unsure about the features that users would find most useful. The developer conducts exploratory research through surveys and focus groups to identify user preferences and needs.
  • Environmental Science: An environmental scientist wants to study the impact of a new industrial plant on the surrounding environment, but there is limited existing research. The scientist conducts exploratory research by collecting and analyzing soil and water samples, and by interviewing residents, to better understand the impact of the plant on the environment and the community.

How to Conduct Exploratory Research

Here are the general steps to conduct exploratory research:

  • Define the research problem: Identify the research problem or question that you want to explore. Be clear about the objective and scope of the research.
  • Review existing literature: Conduct a review of existing literature and research on the topic to identify what is already known and where gaps in knowledge exist.
  • Determine the research design: Decide on the appropriate research design, which will depend on the nature of the research problem and the available resources. Common exploratory research designs include case studies, focus groups, interviews, and surveys.
  • Collect data: Collect data using the chosen research design. This may involve conducting interviews, surveys, or observations, or collecting data from existing sources such as archives or databases.
  • Analyze data: Analyze the data collected using appropriate qualitative or quantitative techniques. This may include coding and categorizing qualitative data, or running descriptive statistics on quantitative data.
  • Interpret and report findings: Interpret the findings of the analysis and report them in a way that is clear and understandable. The report should summarize the findings, discuss their implications, and make recommendations for further research or action.
  • Iterate: If necessary, refine the research question and repeat the process of data collection and analysis to further explore the topic.
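For the "analyze data" step, a quantitative pass might amount to simple descriptive statistics while a qualitative pass tallies codes. A minimal Python sketch, using made-up survey ratings and codes:

```python
import statistics
from collections import Counter

# Hypothetical survey ratings on a 1-5 scale (quantitative data)
ratings = [4, 5, 3, 4, 2, 5, 4, 3]

mean = statistics.mean(ratings)      # central tendency
median = statistics.median(ratings)
spread = statistics.stdev(ratings)   # sample standard deviation

# Hypothetical codes assigned to open-ended answers during analysis;
# tallying them shows which issues respondents raised most often
codes = ["cost", "usability", "cost", "support", "usability", "cost"]
code_counts = Counter(codes)

print(f"mean={mean}, median={median}, stdev={spread:.2f}")
print(code_counts.most_common())
```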

When to use Exploratory Research

Exploratory research is appropriate in situations where there is limited existing knowledge or understanding of a topic, and where the goal is to generate insights and ideas that can guide further research. Here are some specific situations where exploratory research may be particularly useful:

  • New product development: When developing a new product, exploratory research can be used to identify consumer needs and preferences, as well as potential design flaws or issues.
  • Emerging technologies: When exploring emerging technologies, exploratory research can be used to identify potential uses and applications, as well as potential challenges or limitations.
  • Developing research hypotheses: When developing research hypotheses, exploratory research can be used to identify potential relationships or patterns that can be further explored through more rigorous research methods.
  • Understanding complex phenomena: When trying to understand complex phenomena, such as human behavior or societal trends, exploratory research can be used to identify underlying patterns or factors that may be influencing the phenomenon.
  • Developing research methods : When developing new research methods, exploratory research can be used to identify potential issues or limitations with existing methods, and to develop new methods that better capture the phenomena of interest.

Purpose of Exploratory Research

The purpose of exploratory research is to gain insights and understanding of a research problem or question where there is limited existing knowledge or understanding. The objective is to explore and generate ideas that can guide further research, rather than to test specific hypotheses or make definitive conclusions.

Exploratory research can be used to:

  • Identify new research questions: Exploratory research can help to identify new research questions and areas of inquiry, by providing initial insights and understanding of a topic.
  • Develop hypotheses: Exploratory research can help to develop hypotheses and testable propositions that can be further explored through more rigorous research methods.
  • Identify patterns and trends: Exploratory research can help to identify patterns and trends in data, which can be used to guide further research or decision-making.
  • Understand complex phenomena: Exploratory research can help to provide a deeper understanding of complex phenomena, such as human behavior or societal trends, by identifying underlying patterns or factors that may be influencing the phenomena.
  • Generate ideas: Exploratory research can help to generate new ideas and insights that can be used to guide further research, innovation, or decision-making.

Characteristics of Exploratory Research

The following are the main characteristics of exploratory research:

  • Flexible and open-ended: Exploratory research is characterized by its flexible and open-ended nature, which allows researchers to explore a wide range of ideas and perspectives without being constrained by specific research questions or hypotheses.
  • Qualitative in nature: Exploratory research typically relies on qualitative methods, such as in-depth interviews, focus groups, or observation, to gather rich and detailed data on the research problem.
  • Limited scope: Exploratory research is generally limited in scope, focusing on a specific research problem or question rather than attempting to provide a comprehensive analysis of a broader phenomenon.
  • Preliminary in nature: Exploratory research is preliminary in nature, providing initial insights and understanding of a research problem rather than testing specific hypotheses or making definitive conclusions.
  • Iterative process: Exploratory research is often an iterative process, where the research design and methods may be refined and adjusted as new insights and understanding are gained.
  • Inductive approach: Exploratory research typically takes an inductive approach to data analysis, seeking to identify patterns and relationships in the data that can guide further research or hypothesis development.

Advantages of Exploratory Research

The following are some advantages of exploratory research:

  • Provides initial insights: Exploratory research is useful for providing initial insights and understanding of a research problem or question where there is limited existing knowledge or understanding. It can help to identify patterns, relationships, and potential hypotheses that can guide further research.
  • Flexible and adaptable: Exploratory research is flexible and adaptable, allowing researchers to adjust their methods and approach as they gain new insights and understanding of the research problem.
  • Qualitative methods: Exploratory research typically relies on qualitative methods, such as in-depth interviews, focus groups, and observation, which can provide rich and detailed data that is useful for gaining insights into complex phenomena.
  • Cost-effective: Exploratory research is often less costly than other research methods, such as large-scale surveys or experiments. It is typically conducted on a smaller scale, using fewer resources and participants.
  • Useful for hypothesis generation: Exploratory research can be useful for generating hypotheses and testable propositions that can be further explored through more rigorous research methods.
  • Provides a foundation for further research: Exploratory research can provide a foundation for further research by identifying potential research questions and areas of inquiry, as well as providing initial insights and understanding of the research problem.

Limitations of Exploratory Research

The following are some limitations of exploratory research:

  • Limited generalizability: Exploratory research is typically conducted on a small scale and uses non-random sampling techniques, which limits the generalizability of the findings to a broader population.
  • Subjective nature: Exploratory research relies on qualitative methods and is therefore subject to researcher bias and interpretation. The findings may be influenced by the researcher’s own perceptions, beliefs, and assumptions.
  • Lack of rigor: Exploratory research is often less rigorous than other research methods, such as experimental research, which can limit the validity and reliability of the findings.
  • Limited ability to test hypotheses: Exploratory research is not designed to test specific hypotheses, but rather to generate initial insights and understanding of a research problem. It may not be suitable for testing well-defined research questions or hypotheses.
  • Time-consuming: Exploratory research can be time-consuming and resource-intensive, particularly if the researcher needs to gather data from multiple sources or conduct multiple rounds of data collection.
  • Difficulty in interpretation: The open-ended nature of exploratory research can make it difficult to interpret the findings, particularly if the researcher is unable to identify clear patterns or relationships in the data.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


The potential of working hypotheses for deductive exploratory research

  • Open access
  • Published: 08 December 2020
  • Volume 55, pages 1703–1725 (2021)


  • Mattia Casula, ORCID: orcid.org/0000-0002-7081-8153
  • Nandhini Rangarajan
  • Patricia Shields, ORCID: orcid.org/0000-0002-0960-4869


While hypotheses frame explanatory studies and provide guidance for measurement and statistical tests, deductive exploratory research does not have a framing device like the hypothesis. To this end, this article examines the landscape of deductive, exploratory research and offers the working hypothesis as a flexible, useful framework that can guide and bring coherence across the steps in the research process. The working hypothesis conceptual framework is introduced, placed in a philosophical context, defined, and applied to public administration and comparative public policy. In doing so, this article explains: the philosophical underpinning of exploratory, deductive research; how the working hypothesis informs the methodologies and evidence collection of deductive, exploratory research; the nature of micro-conceptual frameworks for deductive exploratory research; and how the working hypothesis informs data analysis when exploratory research is deductive.


1 Introduction

Exploratory research is generally considered to be inductive and qualitative (Stebbins 2001). Exploratory qualitative studies adopting an inductive approach do not lend themselves to a priori theorizing and building upon prior bodies of knowledge (Reiter 2013; Bryman 2004, as cited in Pearse 2019). Juxtaposed against quantitative studies that employ deductive confirmatory approaches, exploratory qualitative research is often criticized for lack of methodological rigor and tentativeness in results (Thomas and Magilvy 2011). This paper focuses on the neglected topic of deductive, exploratory research and proposes working hypotheses as a useful framework for these studies.

To emphasize that certain types of applied research lend themselves more easily to deductive approaches, to address the downsides of exploratory qualitative research, and to ensure qualitative rigor in exploratory research, a significant body of work on deductive qualitative approaches has emerged (see, for example, Gilgun 2005, 2015; Hyde 2000; Pearse 2019). According to Gilgun (2015, p. 3), the use of conceptual frameworks derived from comprehensive reviews of literature and a priori theorizing were common practices in qualitative research prior to the publication of Glaser and Strauss's (1967) The Discovery of Grounded Theory. Gilgun (2015) coined the term Deductive Qualitative Analysis (DQA) to arrive at some sort of "middle ground" such that the benefits of a priori theorizing (structure) and of allowing room for new theory to emerge (flexibility) are reaped simultaneously. According to Gilgun (2015, p. 14), "in DQA, the initial conceptual framework and hypotheses are preliminary. The purpose of DQA is to come up with a better theory than researchers had constructed at the outset (Gilgun 2005, 2009). Indeed, the production of new, more useful hypotheses is the goal of DQA".

DQA provides a greater level of structure for both the experienced and the novice qualitative researcher (see, for example, Pearse 2019; Gilgun 2005). According to Gilgun (2015, p. 4), "conceptual frameworks are the sources of hypotheses and sensitizing concepts". Sensitizing concepts frame the exploratory research process and guide the researcher's data collection and reporting efforts. Pearse (2019) discusses the usefulness of deductive thematic analysis and pattern matching for guiding DQA in business research. Gilgun (2005) discusses the usefulness of DQA for family research.

Given these rationales for DQA in exploratory research, the overarching purpose of this paper is to contribute to that growing corpus of work on deductive qualitative research. This paper is specifically aimed at guiding novice researchers and student scholars to the working hypothesis as a useful a priori framing tool. The applicability of the working hypothesis as a tool that provides more structure during the design and implementation phases of exploratory research is discussed in detail. Examples of research projects in public administration that use the working hypothesis as a framing tool for deductive exploratory research are provided.

In the next section, we introduce the three types of research purposes. Second, we examine the nature of the exploratory research purpose. Third, we provide a definition of the working hypothesis. Fourth, we explore the philosophical roots of methodology to see where exploratory research fits. Fifth, we connect the discussion to the dominant research approaches (quantitative, qualitative, and mixed methods) to see where deductive exploratory research fits. Sixth, we examine the nature of theory and the role of the hypothesis in theory, contrasting formal hypotheses with working hypotheses. Seventh, we provide examples of student and scholarly work that illustrate how working hypotheses are developed and operationalized. Lastly, we synthesize the previous discussion with concluding remarks.

2 Three types of research purposes

The literature identifies three basic types of research purposes: explanation, description, and exploration (Babbie 2007; Adler and Clark 2008; Strydom 2013; Shields and Whetsell 2017). Research purposes are similar to research questions; however, they focus on project goals or aims instead of questions.

Explanatory research answers the "why" question (Babbie 2007, pp. 89–90) by explaining "why things are the way they are" and by looking "for causes and reasons" (Adler and Clark 2008, p. 14). Explanatory research is closely tied to hypothesis testing. Theory is tested using deductive reasoning, which goes from the general to the specific (Hyde 2000, p. 83). Hypotheses provide a frame for explanatory research, connecting the research purpose to other parts of the research process (variable construction, choice of data, statistical tests). They help provide alignment or coherence across stages in the research process and provide ways to critique the strengths and weaknesses of the study. For example, were the hypotheses grounded in the appropriate arguments and evidence in the literature? Are the concepts embedded in the hypotheses appropriately measured? Was the best statistical test used? When the analysis is complete (the hypothesis is tested), the results generally answer the research question (the evidence supported or failed to support the hypothesis) (Shields and Rangarajan 2013).

Descriptive research addresses the "what" question and is not primarily concerned with causes (Strydom 2013; Shields and Tajalli 2006). It lies at the "midpoint of the knowledge continuum" (Grinnell 2001, p. 248) between exploration and explanation. Descriptive research is used in both quantitative and qualitative research. A field researcher might want to "have a more highly developed idea of social phenomena" (Strydom 2013, p. 154) and develop thick descriptions using inductive logic. In science, categorization and classification systems such as the periodic table of chemistry or the taxonomies of biology inform descriptive research. These baseline classification systems are a type of theorizing and allow researchers to answer questions like "what kind" of plants and animals inhabit a forest. The answer to this question would usually be displayed in graphs and frequency distributions. This is also the data presentation system used in the social sciences (Ritchie and Lewis 2003; Strydom 2013). For example, if a scholar asked "What are the needs of homeless people?", a quantitative approach would include a survey that incorporated a "needs" classification system (preferably based on a literature review). The data would be displayed as frequency distributions or as charts. Description can also be guided by inductive reasoning, which draws "inferences from specific observable phenomena to general rules or knowledge expansion" (Worster 2013, p. 448). Theory and hypotheses are generated using inductive reasoning, which begins with data and the intention of making sense of it by theorizing. Inductive descriptive approaches would use a qualitative, naturalistic design (open-ended interview questions with the homeless population). The data could provide a thick description of the homeless context. For deductive descriptive research, categories serve a purpose similar to hypotheses for explanatory research. If developed with thought and a connection to the literature, categories can serve as a framework that informs measurement and links to data collection mechanisms and data analysis. Like hypotheses, they can provide horizontal coherence across the steps in the research process.

Table 1 demonstrates these connections for deductive descriptive and explanatory research. The arrow at the top emphasizes the horizontal, across-the-research-process view we take. This article makes the case that the working hypothesis can serve the same purpose for deductive, exploratory research as the hypothesis serves for explanatory research and categories serve for deductive descriptive research. The cells for exploratory research are filled in with question marks.

The remainder of this paper focuses on exploratory research and answers the questions posed in the table:

What is the philosophical underpinning of exploratory, deductive research?

What is the micro-conceptual framework for deductive exploratory research? [As is clear from the article title, we introduce the working hypothesis as the answer.]

How does the working hypothesis inform the methodologies and evidence collection of deductive exploratory research?

How does the working hypothesis inform data analysis of deductive exploratory research?

3 The nature of exploratory research purpose

Explorers enter the unknown to discover something new. The process can be fraught with struggle and surprises. Effective explorers creatively resolve unexpected problems. While we typically think of explorers as pioneers or mountain climbers, exploration is very much linked to the experience and intention of the explorer. Babies explore as they take their first steps. The exploratory purpose resonates with these insights. Exploratory research, like reconnaissance, is a type of inquiry that is in the preliminary or early stages (Babbie 2007). It is associated with discovery, creativity, and serendipity (Stebbins 2001). But the person doing the discovery also defines the activity or claims the act of exploration. It "typically occurs when a researcher examines a new interest or when the subject of study itself is relatively new" (Babbie 2007, p. 88). Hence, exploration has an open character that emphasizes "flexibility, pragmatism, and the particular, biographically specific interests of an investigator" (Maanen et al. 2001, p. v). These three purposes form a type of hierarchy. An area of inquiry is initially explored. This early work lays the ground for description, which in turn becomes the basis for explanation. Quantitative, explanatory studies dominate contemporary high-impact journals (Twining et al. 2017).

Stebbins (2001) makes the point that exploration is often seen as something like a poor stepsister to confirmatory or hypothesis-testing research. He has a problem with this because we live in a changing world, and what is settled today will very likely be unsettled in the near future and in need of exploration. Further, exploratory research "generates initial insights into the nature of an issue and develops questions to be investigated by more extensive studies" (Marlow 2005, p. 334). Exploration is widely applicable because all research topics were once "new." Further, all research topics have the possibility of "innovation" or ongoing "newness". Exploratory research may be appropriate to establish whether a phenomenon exists (Strydom 2013). The point here, of course, is that the exploratory purpose is far from trivial.

Stebbins's Exploratory Research in the Social Sciences (2001) is the only book devoted to the nature of exploratory research as a form of social science inquiry. He views it as a "broad-ranging, purposive, systematic prearranged undertaking designed to maximize the discovery of generalizations leading to description and understanding of an area of social or psychological life" (p. 3). It is science conducted in a way distinct from confirmation. According to Stebbins (2001, p. 6), the goal is the discovery of potential generalizations, which can become future hypotheses and eventually theories that emerge from the data. He focuses on inductive logic (which stimulates creativity) and qualitative methods. He does not want exploratory research limited to the restrictive formulas and models he finds in confirmatory research. He links exploratory research to Glaser and Strauss's (1967) flexible, immersive Grounded Theory. Strydom's (2013) analysis of contemporary social work research methods books echoes Stebbins's (2001) position. Stebbins's book is an important contribution, but it limits the potential scope of this flexible and versatile research purpose. If we accepted his conclusion, we would delete the "Exploratory" row from Table 1.

Note that explanatory research can yield new questions, which lead to exploration. Inquiry is a process in which inductive and deductive activities can occur simultaneously or in a back-and-forth manner, particularly as the literature is reviewed and the research design emerges. Strict typologies such as explanation/description/exploration or inductive/deductive can obscure these larger connections and processes. We draw insight from Dewey's (1896) vision of inquiry as depicted in his seminal "Reflex Arc" article. He notes that "stimulus" and "response", like other dualities (inductive/deductive), exist within a larger unifying system. Yet the terms have value. "We need not abandon terms like stimulus and response, so long as we remember that they are attached to events based upon their function in a wider dynamic context, one that includes interests and aims" (Hildebrand 2008, p. 16). So too in methodology: typologies such as deductive/inductive capture useful distinctions with practical value and are widely used in the methodology literature.

We argue that there is also a role for deductive and confirmatory approaches within exploratory research. We maintain that all types of research logics and methods should be in the toolbox of exploratory research. First, as stated above, it makes no sense on its face to identify an extremely flexible purpose that is idiosyncratic to the researcher and then restrict its use to qualitative, inductive, non-confirmatory methods. Second, Stebbins's (2001) work focused on social science, ignoring the policy sciences. Exploratory research can be ideal for the immediate practical problems faced by policy makers, who could find a framework of some kind useful. Third, deductive, exploratory research is more intentionally connected to previous research: some kind of initial framing device is located or designed using the literature. This may be very important for new scholars who are developing research skills and exploring their field and profession; Stebbins's insights are most pertinent for experienced scholars. Fourth, frameworks and deductive logic are useful for comparative work because some degree of consistency across cases is built into the design.

As we have seen, the hypotheses of explanatory research and the categories of descriptive research are the dominant frames of social science and policy science. We concur that neither of these frames makes much sense for exploratory research; they would tend to tie it down. We see the problem as a missing framework, or a missing way to frame deductive, exploratory research, in the methodology literature. Inductive exploratory research would not work for many case studies that are trying to use evidence to make an argument. What exploratory, deductive case studies need is a framework that incorporates flexibility. This is even more true for comparative case studies. A framework of this sort could be usefully applied to policy research (Casula 2020a), particularly evaluative policy research, and to applied research generally. We propose the working hypothesis as a flexible conceptual framework and a useful tool for doing exploratory studies. It can be used as an evaluative criterion, particularly for process evaluation, and is useful for student research because students can develop theorizing skills using the literature.

Table  1 included a column specifying the philosophical basis for each research purpose. Shifting gears to the philosophical underpinning of methodology provides useful additional context for examination of deductive, exploratory research.

4 What is a working hypothesis?

The working hypothesis is first and foremost a hypothesis: a statement of expectation that is tested in action. The term “working” suggests that these hypotheses are provisional and subject to change, and that the possibility of finding contradictory evidence is real. In addition, a “working” hypothesis is active; it is a tool in an ongoing process of inquiry. If one begins with a research question, the working hypothesis can be viewed as a statement, or group of statements, that answers the question. It “works” to move purposeful inquiry forward. “Working” also implies some sort of community; mostly we work together, in relationship, to achieve some goal.

“Working hypothesis” is a term found in earlier literature. Indeed, the pioneering pragmatists John Dewey and George Herbert Mead both use the term in important nineteenth-century works. For both Dewey and Mead, the notion of a working hypothesis has a self-evident quality, and it is applied in a big-picture context. Footnote 2

Most notably, Dewey (1896), in one of his most pivotal early works (“Reflex Arc”), used “working hypothesis” to describe a key concept in psychology: “The idea of the reflex arc has upon the whole come nearer to meeting this demand for a general working hypothesis than any other single concept (italics added)” (p. 357). The notion was developed more fully 42 years later in Logic: The Theory of Inquiry, where Dewey developed a working hypothesis that operated on a smaller scale. He defines working hypotheses as a “provisional, working means of advancing investigation” (Dewey 1938, p. 142). Dewey's definition suggests that working hypotheses would be useful toward the beginning of a research project (e.g., exploratory research).

Mead (1899) used “working hypothesis” in the title of an American Journal of Sociology article, “The Working Hypothesis and Social Reform” (italics added). He notes that a scientist's foresight goes beyond testing a hypothesis.

Given its success, he may restate his world from this standpoint and get the basis for further investigation that again always takes the form of a problem. The solution of this problem is found over again in the possibility of fitting his hypothetical proposition into the whole within which it arises. And he must recognize that this statement is only a working hypothesis at the best, i.e., he knows that further investigation will show that the former statement of his world is only provisionally true, and must be false from the standpoint of a larger knowledge, as every partial truth is necessarily false over against the fuller knowledge which he will gain later (Mead 1899 , p. 370).

Cronbach (1975) developed a notion of the working hypothesis consistent with inductive reasoning, but for him the working hypothesis is a product or result of naturalistic inquiry. He makes the case that naturalistic inquiry is highly context dependent; therefore, results or seeming generalizations that come from a study should be viewed as “working hypotheses,” which “are tentative both for the situation in which they first uncovered and for other situations” (as cited in Gobo 2008, p. 196).

A quick Google Scholar search using the term “working hypothesis” shows that it is widely used in twentieth- and twenty-first-century science, particularly in titles. In these articles, the working hypothesis is treated as a conceptual tool that furthers investigation in its early or transitional phases. We could find no explicit links to exploratory research; the exploratory nature of the problem is expressed implicitly. Terms such as “speculative” (Habib 2000, p. 2391) or “rapidly evolving field” (Prater et al. 2007, p. 1141) capture the exploratory nature of the study. The authors might describe how a topic is “new” or reference “change”: “As a working hypothesis, the picture is only new, however, in its interpretation” (Milnes 1974, p. 1731). In a study of soil genesis, Arnold (1965, p. 718) notes, “Sequential models, formulated as working hypotheses, are subject to further investigation and change.” Any 2020 article dealing with COVID-19 and respiratory distress would be preliminary almost by definition (Ciceri et al. 2020).

5 Philosophical roots of methodology

According to Kaplan (1964, p. 23), “the aim of methodology is to help us understand, in the broadest sense not the products of scientific inquiry but the process itself.” Methods contain philosophical principles that distinguish them from other “human enterprises and interests” (Kaplan 1964, p. 23). Contemporary research methodology is generally classified as quantitative, qualitative, and mixed methods. Leading scholars of methodology have associated each with a philosophical underpinning—positivism (or post-positivism), interpretivism (or constructivism), and pragmatism, respectively (Guba 1987; Guba and Lincoln 1981; Schrag 1992; Stebbins 2001; Mackenzie and Knipe 2006; Atieno 2009; Levers 2013; Morgan 2007; O'Connor et al. 2008; Johnson and Onwuegbuzie 2004; Twining et al. 2017). This section summarizes how the literature often describes these philosophies and how they inform contemporary methodology and its literature.

Positivism, and its more contemporary version post-positivism, maintains an objectivist ontology: it assumes an objective reality, which can be uncovered (Levers 2013; Twining et al. 2017). Footnote 3 Time- and context-free generalizations are possible, and “real causes of social scientific outcomes can be determined reliably and validly” (Johnson and Onwuegbuzie 2004, p. 14). Further, “explanation of the social world is possible through a logical reduction of social phenomena to physical terms.” It uses an empiricist epistemology, which “implies testability against observation, experimentation, or comparison” (Whetsell and Shields 2015, pp. 420–421). Correspondence theory, a tenet of positivism, asserts that “to each concept there corresponds a set of operations involved in its scientific use” (Kaplan 1964, p. 40).

The interpretivist, constructivist, or post-modernist approach is a reaction to positivism. It uses a relativist ontology and a subjectivist epistemology (Levers 2013). In this world of multiple realities, context-free generalities are impossible, as is the separation of facts and values. Causality, explanation, prediction, and experimentation depend on assumptions about the correspondence between concepts and reality—assumptions that are untenable in the absence of an objective reality. Empirical research can yield “contextualized emergent understanding rather than the creation of testable theoretical structures” (O'Connor et al. 2008, p. 30). The distinctively different world views of positivist/post-positivist and interpretivist philosophy are at the core of many controversies in the methodology, social science, and policy science literature (Casula 2020b).

With its focus on dissolving dualisms, pragmatism steps outside the objective/subjective debate. Instead, it asks, “what difference would it make to us if the statement were true” (Kaplan 1964 , p. 42). Its epistemology is connected to purposeful inquiry. Pragmatism has a “transformative, experimental notion of inquiry” anchored in pluralism and a focus on constructing conceptual and practical tools to resolve “problematic situations” (Shields 1998 ; Shields and Rangarajan 2013 ). Exploration and working hypotheses are most comfortably situated within the pragmatic philosophical perspective.

6 Research approaches

Empirical investigation relies on three types of methodology—quantitative, qualitative and mixed methods.

6.1 Quantitative methods

Quantitative methods use deductive logic and formal hypotheses or models to explain, predict, and eventually establish causation (Hyde 2000; Kaplan 1964; Johnson and Onwuegbuzie 2004; Morgan 2007). Footnote 4 The correspondence between the conceptual and empirical worlds makes measurement possible. Measurement assigns numbers to objects, events, or situations; it allows for standardization and subtle discrimination, and it lets researchers draw on the power of mathematics and statistics (Kaplan 1964, pp. 172–174). Using the power of inferential statistics, quantitative research employs research designs that eliminate competing hypotheses. It is high in external validity, or the ability to generalize to the whole, and its results are relatively independent of the researcher (Johnson and Onwuegbuzie 2004).

Quantitative methods depend on the quality of measurement, a priori conceptualization, and adherence to the underlying assumptions of inferential statistics. Critics charge that hypotheses and frameworks needlessly constrain inquiry (Johnson and Onwuegbuzie 2004, p. 19). Hypothesis-testing quantitative methods support the explanatory purpose.

6.2 Qualitative methods

Qualitative researchers who embrace the post-modern, interpretivist view Footnote 5 question everything about the nature of quantitative methods (Willis et al. 2007). Rejecting the possibility of objectivity, the correspondence between ideas and measures, and the constraints of a priori theorizing, they focus on “unique impressions and understandings of events rather than to generalize the findings” (Kolb 2012, p. 85). Characteristics of traditional qualitative research include “induction, discovery, exploration, theory/hypothesis generation and the researcher as the primary ‘instrument’ of data collection” (Johnson and Onwuegbuzie 2004, p. 18). The data of qualitative methods are generated via interviews, direct observation, focus groups, and analysis of written records or artifacts.

Qualitative methods provide for understanding and “description of people's personal experiences of phenomena.” They enable detailed descriptions of “phenomena as they are situated and embedded in local contexts.” Researchers use naturalistic settings to “study dynamic processes” and explore how participants interpret experiences. Qualitative methods have an inherent flexibility, allowing researchers to respond to changes in the research setting. They are particularly good at narrowing to the particular and, on the flip side, have limited external validity (Johnson and Onwuegbuzie 2004, p. 20). Instead of specifying a sample size sufficient to draw conclusions, qualitative research uses the notion of saturation (Morse 1995).

Saturation is used in grounded theory, a widely used and respected interpretivist qualitative research method. Introduced by Glaser and Strauss (1967), this “grounded on observation” (Patten and Newhart 2000, p. 27) methodology focuses on “the creation of emergent understanding” (O'Connor et al. 2008, p. 30). It uses the constant comparative method, whereby researchers develop theory from data as they code and analyze at the same time. Data collection, coding, and analysis, along with theoretical sampling, are systematically combined to generate theory (Kolb 2012, p. 83). The qualitative methods discussed here support exploratory research.

A close look at the philosophies and assumptions of quantitative and qualitative research suggests two contradictory world views. The literature has labeled this contradiction the Incompatibility Theory, which sets up a quantitative-versus-qualitative tension similar to the seeming separation of art and science, or fact and values (Smith 1983a, b; Guba 1987; Smith and Heshusius 1986; Howe 1988). The incompatibility theory does not make sense in practice. Yin (1981, 1992, 2011, 2017), a prominent case study scholar, showcases a deductive research methodology that crosses boundaries, using both quantitative and qualitative evidence when appropriate.

6.3 Mixed methods

Turning the “Incompatibility Theory” on its head, mixed methods research “combines elements of qualitative and quantitative research approaches … for the broad purposes of breadth and depth of understanding and corroboration” (Johnson et al. 2007, p. 123). It does this by partnering with philosophical pragmatism. Footnote 6 Pragmatism is productive because “it offers an immediate and useful middle position philosophically and methodologically; it offers a practical and outcome-oriented method of inquiry that is based on action and leads, iteratively, to further action and the elimination of doubt; it offers a method for selecting methodological mixes that can help researchers better answer many of their research questions” (Johnson and Onwuegbuzie 2004, p. 17). What is theory for the pragmatist? “Any theoretical model is for the pragmatist, nothing more than a framework through which problems are perceived and subsequently organized” (Hothersall 2019, p. 5).

Brendel (2009) constructed a simple framework to capture the core elements of pragmatism. Brendel's four “p”s—practical, pluralism, participatory, and provisional—help to show the relevance of pragmatism to mixed methods. Pragmatism is purposeful and concerned with practical consequences. The pluralism of pragmatism overcomes the quantitative/qualitative dualism: it allows for multiple perspectives (including positivism and interpretivism) and thus gets around the incompatibility problem. Inquiry should be participatory, or inclusive of the many views of participants; hence, it is consistent with multiple realities and tied to the common concern of a problematic situation. Finally, all inquiry is provisional. This is compatible with experimental methods and hypothesis testing, and consistent with the back and forth of inductive and deductive reasoning. Mixed methods support exploratory research.

Advocates of mixed methods research note that it overcomes the weaknesses, and employs the strengths, of quantitative and qualitative methods. Quantitative methods provide precision; the pictures and narrative of qualitative techniques add meaning to the numbers. Quantitative analysis can provide a big picture and establish relationships, and its results have great generalizability. On the other hand, the “why” behind the explanation is often missing and can be filled in through in-depth interviews, making a deeper and more satisfying explanation possible. Mixed methods bring the benefits of triangulation, or multiple sources of evidence that converge to support a conclusion. They can entertain a “broader and more complete range of research questions” (Johnson and Onwuegbuzie 2004, p. 21) and can move between inductive and deductive methods. Case studies use multiple forms of evidence and are a natural context for mixed methods.

One thing that seems to be missing from the mixed methods literature and explicit design is a place for conceptual frameworks. For example, Heyvaert et al. (2013) examined nine mixed methods studies and found an explicit framework in only two (transformative and pragmatic) (p. 663).

7 Theory and hypotheses: where is and what is theory?

Theory is key to deductive research; in essence, empirical deductive methods test theory. Hence, we shift our attention to theory and the role and functions of hypotheses in theory. Oppenheim and Putnam (1958) note that “by a ‘theory’ (in the widest sense) we mean any hypothesis, generalization or law (whether deterministic or statistical) or any conjunction of these” (p. 25). Van Evera (1997) uses a similar but more complex definition: “theories are general statements that describe and explain the causes or effects of classes of phenomena. They are composed of causal laws or hypotheses, explanations, and antecedent conditions” (p. 8). Sutton and Staw (1995, p. 376), in the highly cited article “What Theory is Not,” assert that hypotheses should contain logical arguments for “why” the hypothesis is expected; hypotheses need an underlying causal argument before they can be considered theory. The point of this discussion is not to define theory but to establish the importance of hypotheses in theory.

Explanatory research is implicitly relational (A explains B), and the hypotheses of explanatory research lay bare these relationships. Popular definitions of hypotheses capture this relational component. The Cambridge Dictionary, for example, defines a hypothesis as “an idea or explanation for something that is based on known facts but has not yet been proven.” Vocabulary.com's definition emphasizes explanation: a hypothesis is “an idea or explanation that you then test through study and experimentation.” According to Wikipedia, a hypothesis is “a proposed explanation for a phenomenon.” Other definitions remove the relational or explanatory reference. The Oxford English Dictionary defines a hypothesis as a “supposition or conjecture put forth to account for known facts.” Science Buddies defines a hypothesis as a “tentative, testable answer to a scientific question.” According to the Longman Dictionary, a hypothesis is “an idea that can be tested to see if it is true or not.” The Urban Dictionary states that a hypothesis is “a prediction or educated-guess based on current evidence that is yet to be tested.” We argue that the hypotheses of exploratory research—working hypotheses—are not bound by relational expectations. It is this flexibility that distinguishes the working hypothesis.

Sutton and Staw (1995) maintain that hypotheses “serve as crucial bridges between theory and data, making explicit how the variables and relationships that follow from a logical argument will be operationalized” (p. 376, italics added). Writing in the highly rated journal Computers and Education, Twining et al. (2017) created guidelines for qualitative research as a way to improve its soundness and rigor. They identified a lack of alignment between theoretical stance and methodology as a common problem in qualitative research, along with a lack of alignment between methodology, design, instruments of data collection, and analysis. The authors created a guidance summary, which emphasized the need to enhance coherence throughout the elements of research design (Twining et al. 2017, p. 12). Perhaps the bridging function of the hypothesis mentioned by Sutton and Staw (1995) is obscured and often missing in qualitative methods. Working hypotheses can be a tool to overcome this problem.

For reasons similar to those used by mixed methods scholars, we look to classical pragmatism and the ideas of John Dewey to inform our discussion of theory and working hypotheses. Dewey (1938) treats theory as a tool of empirical inquiry and uses a map metaphor (p. 136): theory is like a map that helps a traveler navigate the terrain, and it should be judged by its usefulness. “There is no expectation that a map is a true representation of reality. Rather, it is a representation that allows a traveler to reach a destination (achieve a purpose). Hence, theories should be judged by how well they help resolve the problem or achieve a purpose” (Shields and Rangarajan 2013, p. 23). Note that we explicitly link theory to the research purpose. Theory is never treated as an unimpeachable Truth; rather, it is a helpful tool that organizes inquiry, connecting data and problem. Dewey's approach also expands the definition of theory to include abstractions (categories) outside of causation and explanation. The micro-conceptual frameworks Footnote 7 introduced in Table 1 are a type of theory. We define conceptual frameworks as “the way the ideas are organized to achieve the project's purpose” (Shields and Rangarajan 2013, p. 24). Micro-conceptual frameworks do this at a level of analysis very close to the data; they can direct operationalization and ways to assess measurement or evidence at the individual research study level. Again, the research purpose plays a pivotal role in the functioning of theory (Shields and Tajalli 2006).

8 Working hypothesis: methods and data analysis

We move on to answer the remaining questions in Table 1. We have established that exploratory research is extremely flexible and idiosyncratic. Given this, we proceed with a few examples and draw out lessons for developing an exploratory purpose, building a framework, and from there identifying data collection techniques and the logics of hypothesis testing and analysis. Early on, we noted the value of the working hypothesis framework for student empirical research and applied research. The next section uses a master's-level student's work to illustrate the usefulness of working hypotheses as a way to incorporate the literature and structure inquiry. This graduate student was also a mature professional with a research question that emerged from his job; his study is thus an example of applied research.

Master of Public Administration student Swift (2010) worked for a public agency and was responsible for that agency's sexual harassment training. The agency needed to evaluate its training but had never done so before. Swift had also never attempted a significant empirical research project. Both of these conditions suggest exploration as a possible approach. He was interested in evaluating the training program, so the project also had a normative sense. Given his job, he already knew a lot about the problem of sexual harassment and sexual harassment training. What he did not know much about was doing empirical research, reviewing the literature, or building a framework (working hypotheses) to evaluate the training. He wanted a framework that was flexible and comprehensive. In his research, he discovered Lundvall's (2006) knowledge taxonomy, summarized as four simple ways of knowing (know-what, know-how, know-why, know-who). He asked whether his agency's training provided participants with these kinds of knowledge. Lundvall's categories of knowing became the basis of his working hypotheses. The taxonomy is well suited to working hypotheses because it is simple and easy to understand intuitively, and it can be tailored to the unique problematic situation of the researcher. Swift (2010, pp. 38–39) developed four basic working hypotheses:

WH1: Capital Metro provides adequate know-what knowledge in its sexual harassment training.

WH2: Capital Metro provides adequate know-how knowledge in its sexual harassment training.

WH3: Capital Metro provides adequate know-why knowledge in its sexual harassment training.

WH4: Capital Metro provides adequate know-who knowledge in its sexual harassment training.

From here he needed to determine what would constitute the different kinds of knowledge—for example, what counts as “know-what” knowledge for sexual harassment training. This is where his knowledge and experience working in the field, as well as the literature, come into play. According to Lundvall et al. (1988, p. 12), know-what knowledge is about facts and raw information. Swift (2010) learned through the literature that laws and rules were the basis for the mandated sexual harassment training. He read about specific anti-discrimination laws and the subsequent rules and regulations derived from them. These laws and rules used specific definitions and were enacted within a historical context. Laws, rules, definitions, and history became the “facts” of know-what knowledge for his working hypothesis. To make this clear, he created sub-hypotheses that explicitly took these into account; see how Swift (2010, p. 38) constructed the sub-hypotheses below. Each sub-hypothesis was defended using material from the literature (Swift 2010, pp. 22–26). The sub-hypotheses can also be easily tied to evidence. For example, he could document that the training covered anti-discrimination laws.

WH1: Capital Metro provides adequate know-what knowledge in its sexual harassment training.

WH1a: The sexual harassment training includes information on anti-discrimination laws (Title VII).

WH1b: The sexual harassment training includes information on key definitions.

WH1c: The sexual harassment training includes information on Capital Metro’s Equal Employment Opportunity and Harassment policy.

WH1d: Capital Metro provides training on sexual harassment history.

Know-how knowledge refers to the ability to do something; it involves skills (Lundvall and Johnson 1994, p. 12) and is a kind of expertise in action. The literature and his experience allowed Swift to identify skills such as how to file a claim or how to document incidents of sexual harassment as important know-how knowledge that should be included in sexual harassment training. Again, these were depicted as sub-hypotheses.

WH2: Capital Metro provides adequate know-how knowledge in its sexual harassment training.

WH2a: Training is provided on how to file and report a claim of harassment.

WH2b: Training is provided on how to document sexual harassment situations.

WH2c: Training is provided on how to investigate sexual harassment complaints.

WH2d: Training is provided on how to follow additional harassment policy procedures and protocols.

Note that the working hypotheses do not specify a relationship but rather are simple declarative sentences. If know-how knowledge was present in the sexual harassment training, Swift would be able to find evidence that participants learned how to file a claim (WH2a). The working hypothesis provides the bridge between theory and data that Sutton and Staw (1995) found missing in exploratory work. The sub-hypotheses are designed to be refined enough that researchers know what to look for and can tailor their hunt for evidence. Figure 1 captures the generic sub-hypothesis design.

[Fig. 1: A common structure used in the development of working hypotheses]

When expected evidence is linked to the sub-hypotheses, data, framework and research purpose are aligned. This can be laid out in a planning document that operationalizes the data collection in something akin to an architect’s blueprint. This is where the scholar explicitly develops the alignment between purpose, framework and method (Shields and Rangarajan 2013 ; Shields et al. 2019b ).

Table 2 operationalizes Swift's working hypotheses (and sub-hypotheses). The table provides clues as to what kind of evidence is needed to determine whether the hypotheses are supported. In this case, Swift used interviews with participants and trainers as well as a review of program documents. Column one repeats the sub-hypothesis, column two specifies the data collection method (here, interviews with participants/managers and review of program documents), and column three specifies the unique questions that focus the investigation; for example, the interview questions are provided. In the less precise world of qualitative data, evidence supporting a hypothesis can have varying degrees of strength. This, too, can be specified.
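The alignment such a table enforces—each sub-hypothesis tied to data collection methods, focused questions, and an evidence-strength judgment—can be thought of as a small data structure. The Python sketch below is our illustration, not Swift's actual instrument: the abbreviated hypothesis wordings, the questions, and the decision rule for aggregating evidence strength are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SubHypothesis:
    """One row of the blueprint: a sub-hypothesis aligned with its evidence plan."""
    label: str          # e.g. "WH1a"
    statement: str      # declarative, non-relational expectation
    sources: list       # data collection methods (interviews, document review)
    questions: list     # focused interview/document questions
    support: str = "unknown"  # evidence strength: "strong", "weak", "none", "unknown"

@dataclass
class WorkingHypothesis:
    label: str
    statement: str
    subs: list = field(default_factory=list)

    def assess(self) -> str:
        """A simple decision rule (our assumption, not Swift's): the working
        hypothesis is 'supported' if every sub-hypothesis has at least weak
        evidence, 'partially supported' if some do, else 'unsupported'."""
        levels = [s.support for s in self.subs]
        if all(lv in ("strong", "weak") for lv in levels):
            return "supported"
        if any(lv in ("strong", "weak") for lv in levels):
            return "partially supported"
        return "unsupported"

# Illustrative fragment of WH1 (wording abbreviated from Swift 2010):
wh1 = WorkingHypothesis("WH1", "Training provides adequate know-what knowledge")
wh1.subs = [
    SubHypothesis("WH1a", "Training covers anti-discrimination laws (Title VII)",
                  ["document review"], ["Does the curriculum cite Title VII?"],
                  support="strong"),
    SubHypothesis("WH1b", "Training covers key definitions",
                  ["participant interviews"], ["Can participants define harassment?"],
                  support="weak"),
]
print(wh1.assess())  # prints "supported" under the rule above
```

The point of the sketch is the alignment it forces: a sub-hypothesis cannot be entered without naming its evidence sources and questions, which is exactly the purpose-framework-method coherence the blueprint is meant to guarantee.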

In Swift's example, neither the statistics of explanatory research nor the open-ended questions of interpretivist, inductive exploratory research are used. The deductive logic of inquiry here is somewhat intuitive, similar to that of a detective (Ulriksen and Dadalauri 2016); it is also a logic used in international law (Worster 2013). It should be noted that the working hypothesis and the corresponding data collection protocol do not stop inquiry and fieldwork outside the framework. The interviews could reveal an unexpected problem with Swift's training program. The framework provides a loose but potentially useful way to identify and make sense of data that do not fit expectations. Researchers using working hypotheses should be sensitive to interesting findings that fall outside their framework; these could be used in future studies, to refine theory, or, as in this case, to suggest improvements to the sexual harassment training. The sensitizing concepts mentioned by Gilgun (2015) are free to emerge and should be encouraged.

Something akin to working hypotheses is hidden in plain sight in the professional literature. Take, for example, Kerry Crawford's (2017) book Wartime Sexual Violence. Here she explores basic changes in the way “advocates and decision makers think about and discuss conflict-related sexual violence” (p. 2), focusing on a subsequent shift from silence to action. The shift occurred as wartime sexual violence was reframed as a “weapon of war.” The new frame captured the attention of powerful members of the security community, who demanded, initiated, and paid for institutional and policy change. Crawford (2017) examines the legacy of this key reframing. She develops a six-stage model of potential international responses to incidents of wartime sexual violence. This model is fairly easily converted to working hypotheses and sub-hypotheses; Table 3 shows her model as a set of (non-relational) working hypotheses. She applied the model as a way to gather evidence across cases (e.g., the US response to sexual violence in the Democratic Republic of the Congo) to show the official level of response to sexual violence. Each case study chapter examined evidence to establish whether the case fit the pattern formalized in the working hypotheses. The framework was very useful in her comparative context, allowing for consistent analysis across cases. Her analysis of the three cases went well beyond the material covered in the framework; she freely incorporated useful, inductively informed data in her analysis and discussion. The framework, however, allowed for alignment within and across cases.
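Applying one fixed set of working hypotheses to several cases, as Crawford does, amounts to building a case-by-hypothesis matrix and asking how far each case fits the staged pattern. A minimal sketch of that comparative logic follows; the case names, stage labels, and support values are invented for illustration and are not Crawford's data.

```python
# Hypothetical case-by-hypothesis matrix for comparative working-hypothesis
# analysis in the spirit of Crawford (2017). True = evidence supported the
# stage in that case; the cases and findings here are invented.
cases = {
    "Case A": {"WH1": True, "WH2": True, "WH3": False},
    "Case B": {"WH1": True, "WH2": False, "WH3": False},
}

def pattern(case: dict) -> str:
    """Report how far a case fits the staged pattern formalized in the
    working hypotheses: the count of consecutive supported stages,
    starting from the first stage."""
    stages = sorted(case)  # "WH1", "WH2", "WH3" in stage order
    reached = 0
    for wh in stages:
        if not case[wh]:
            break
        reached += 1
    return f"{reached}/{len(stages)} stages supported"

for name, case in cases.items():
    print(name, pattern(case))
```

Because every case is scored against the same hypotheses, the matrix makes the cross-case comparison explicit while leaving room, as Crawford's chapters do, for inductive material outside the framework.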

9 Conclusion

In this article we argued that exploratory research is also well suited for deductive approaches. By examining the landscape of deductive, exploratory research, we proposed the working hypothesis as a flexible conceptual framework and a useful tool for doing exploratory studies. It has the potential to guide and bring coherence across the steps of the research process. After presenting the nature of the exploratory research purpose and how it differs from the two other research purposes identified in the literature, explanation and description, we focused on answering four questions in order to show the link between micro-conceptual frameworks and research purposes in a deductive setting. The answers to the four questions are summarized in Table 4.

Firstly, we argued that the working hypothesis and exploration are situated within the pragmatic philosophical perspective. Pragmatism allows for pluralism in theory and data collection techniques, which is compatible with the flexible exploratory purpose. Secondly, after introducing and discussing the four core elements of pragmatism (practical, pluralism, participatory, and provisional), we explained how the working hypothesis informs the methodologies and evidence collection of deductive exploratory research through a presentation of the benefits of triangulation provided by mixed methods research. Thirdly, as is clear from the article title, we introduced the working hypothesis as the micro-conceptual framework for deductive exploratory research. We argued that the hypotheses of exploratory research, which we call working hypotheses, are distinguished from those of explanatory research because they do not require a relational component and are not bound by relational expectations. A working hypothesis is extremely flexible and idiosyncratic; depending on the research question, it can be viewed as a statement or group of statements of expectations tested in action. Using examples, we concluded by explaining how working hypotheses inform data collection and analysis for deductive exploratory research.

Crawford’s (2017) example showed how the structure of working hypotheses provides a framework for comparative case studies. Her criteria for analysis were specified ahead of time and used to frame each case, so her comparisons were systematized across cases. Further, the framework ensured a connection between the data analysis and the literature review. Yet the flexible, working nature of the hypotheses allowed unexpected findings to be discovered.

The evidence required to test working hypotheses is directed by the research purpose and potentially includes both quantitative and qualitative sources. Thus, all types of evidence, including quantitative methods, should be part of the toolbox of deductive, exploratory research. We showed how the working hypothesis, as a flexible exploratory framework, resolves many seeming dualisms pervasive in the research methods literature.

To conclude, this article has provided an in-depth examination of working hypotheses, taking into account philosophical questions and the larger formal research methods literature. By discussing working hypotheses as applied, theoretical tools, we demonstrated that they fill a unique niche in the methods literature, since they provide a way to enhance alignment in deductive, exploratory studies.

In practice, quantitative scholars often run multivariate analyses on databases to find out whether there are correlations. Hypotheses are tested because the statistical software does the math, not because the scholar has an a priori, relational expectation (hypothesis) well grounded in the literature and supported by cogent arguments. Hunches are just fine. This is clearly an inductive approach to research and part of the larger process of inquiry.

In 1958, the philosophers of science Oppenheim and Putnam used the notion of a working hypothesis in their title “Unity of Science as a Working Hypothesis.” They, too, used it as a big-picture concept: the assumption that “unity of science in this sense, can be fully realized constitutes an over-arching meta-scientific hypothesis, which enables one to see a unity in scientific activities that might otherwise appear disconnected or unrelated” (p. 4).

It should be noted that the positivism described in the research methods literature does not resemble philosophical positivism as developed by philosophers like Comte (Whetsell and Shields 2015). In the research methods literature, “positivism means different things to different people….The term has long been emptied of any precise denotation …and is sometimes affixed to positions actually opposed to those espoused by the philosophers from whom the name derives” (Schrag 1992, p. 5). For the purposes of this paper, we capture a few essential ways positivism is presented in the research methods literature. This helps us to position the “working hypothesis” and “exploratory” research within the larger context of contemporary research methods. We are not arguing that the positivism presented here is anything more. The incompatibility theory discussed later is an outgrowth of this research methods literature…

It should be noted that quantitative researchers often use inductive reasoning. They do this with existing data sets, running correlations or regression analysis as a way to find relationships. They ask: what does the data tell us?
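This kind of inductive, correlation-hunting pass can be sketched in a few lines of Python. The dataset, variable names, and the 0.7 cutoff below are all hypothetical, chosen only to illustrate the footnote's point that the software does the math and the researcher scans for relationships:

```python
from itertools import combinations
from statistics import mean, stdev

# Hypothetical survey dataset: each key is a variable, values are scores.
data = {
    "training_hours": [2, 4, 6, 8, 10, 12],
    "reported_incidents": [9, 8, 6, 5, 3, 2],
    "tenure_years": [1, 7, 3, 9, 2, 8],
}

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

# Inductive pass: correlate every pair of variables and flag strong ones.
for a, b in combinations(data, 2):
    r = pearson(data[a], data[b])
    if abs(r) > 0.7:  # arbitrary screening threshold
        print(f"{a} ~ {b}: r = {r:.2f}")
```

Any pair the loop flags is merely a candidate: turning it into a relational hypothesis grounded in the literature is the later, explanatory step.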

Qualitative researchers are also associated with phenomenology, hermeneutics, naturalistic inquiry and constructivism.

See Feilzer (2010), Howe (1988), Johnson and Onwuegbuzie (2004), Morgan (2007), Onwuegbuzie and Leech (2005), Biddle and Schafft (2015).

The term conceptual framework is applicable in a broad context (see Ravitch and Riggan 2012). The micro-conceptual framework narrows to the specific study and informs data collection (Shields and Rangarajan 2013; Shields et al. 2019a).

Adler, E., Clark, R.: How It’s Done: An Invitation to Social Research, 3rd edn. Thompson-Wadsworth, Belmont (2008)


Arnold, R.W.: Multiple working hypothesis in soil genesis. Soil Sci. Soc. Am. J. 29 (6), 717–724 (1965)


Atieno, O.: An analysis of the strengths and limitation of qualitative and quantitative research paradigms. Probl. Educ. 21st Century 13 , 13–18 (2009)

Babbie, E.: The Practice of Social Research, 11th edn. Thompson-Wadsworth, Belmont (2007)

Biddle, C., Schafft, K.A.: Axiology and anomaly in the practice of mixed methods work: pragmatism, valuation, and the transformative paradigm. J. Mixed Methods Res. 9 (4), 320–334 (2015)

Brendel, D.H.: Healing Psychiatry: Bridging the Science/Humanism Divide. MIT Press, Cambridge (2009)

Bryman, A.: Qualitative research on leadership: a critical but appreciative review. Leadersh. Q. 15 (6), 729–769 (2004)

Casula, M.: Under which conditions is cohesion policy effective: proposing an Hirschmanian approach to EU structural funds. Reg. Fed. Stud. (2020a). https://doi.org/10.1080/13597566.2020.1713110

Casula, M.: Economic Growth and Cohesion Policy Implementation in Italy and Spain. Palgrave Macmillan, Cham (2020b)

Ciceri, F., et al.: Microvascular COVID-19 lung vessels obstructive thromboinflammatory syndrome (MicroCLOTS): an atypical acute respiratory distress syndrome working hypothesis. Crit. Care Resusc. 15 , 1–3 (2020)

Crawford, K.F.: Wartime sexual violence: From silence to condemnation of a weapon of war. Georgetown University Press (2017)

Cronbach, L.: Beyond the two disciplines of scientific psychology. Am. Psychol. 30, 116–127 (1975)

Dewey, J.: The reflex arc concept in psychology. Psychol. Rev. 3 (4), 357 (1896)

Dewey, J.: Logic: The Theory of Inquiry. Henry Holt & Co, New York (1938)

Feilzer, Y.: Doing mixed methods research pragmatically: implications for the rediscovery of pragmatism as a research paradigm. J. Mixed Methods Res. 4 (1), 6–16 (2010)

Gilgun, J.F.: Qualitative research and family psychology. J. Fam. Psychol. 19 (1), 40–50 (2005)

Gilgun, J.F.: Methods for enhancing theory and knowledge about problems, policies, and practice. In: Briar, K., Orme, J., Ruckdeschel, R., Shaw, I. (eds.) The Sage Handbook of Social Work Research, pp. 281–297. Sage, Thousand Oaks (2009)

Gilgun, J.F.: Deductive Qualitative Analysis as Middle Ground: Theory-Guided Qualitative Research. Amazon Digital Services LLC, Seattle (2015)

Glaser, B.G., Strauss, A.L.: The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine, Chicago (1967)

Gobo, G.: Re-Conceptualizing Generalization: Old Issues in a New Frame. In: Alasuutari, P., Bickman, L., Brannen, J. (eds.) The Sage Handbook of Social Research Methods, pp. 193–213. Sage, Los Angeles (2008)


Grinnell, R.M.: Social work research and evaluation: quantitative and qualitative approaches. New York: F.E. Peacock Publishers (2001)

Guba, E.G.: What have we learned about naturalistic evaluation? Eval. Pract. 8 (1), 23–43 (1987)

Guba, E., Lincoln, Y.: Effective Evaluation: Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches. Jossey-Bass Publishers, San Francisco (1981)

Habib, M.: The neurological basis of developmental dyslexia: an overview and working hypothesis. Brain 123 (12), 2373–2399 (2000)

Heyvaert, M., Maes, B., Onghena, P.: Mixed methods research synthesis: definition, framework, and potential. Qual. Quant. 47 (2), 659–676 (2013)

Hildebrand, D.: Dewey: A Beginners Guide. Oneworld Oxford, Oxford (2008)

Howe, K.R.: Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Edu. Res. 17 (8), 10–16 (1988)

Hothersall, S.J.: Epistemology and social work: enhancing the integration of theory, practice and research through philosophical pragmatism. Eur. J. Social Work 22 (5), 860–870 (2019)

Hyde, K.F.: Recognising deductive processes in qualitative research. Qual. Market Res. Int. J. 3 (2), 82–90 (2000)

Johnson, R.B., Onwuegbuzie, A.J.: Mixed methods research: a research paradigm whose time has come. Educ. Res. 33 (7), 14–26 (2004)

Johnson, R.B., Onwuegbuzie, A.J., Turner, L.A.: Toward a definition of mixed methods research. J. Mixed Methods Res. 1 (2), 112–133 (2007)

Kaplan, A.: The Conduct of Inquiry. Chandler, Scranton (1964)

Kolb, S.M.: Grounded theory and the constant comparative method: valid research strategies for educators. J. Emerg. Trends Educ. Res. Policy Stud. 3 (1), 83–86 (2012)

Levers, M.J.D.: Philosophical paradigms, grounded theory, and perspectives on emergence. Sage Open 3 (4), 2158244013517243 (2013)

Lundvall, B.A.: Knowledge management in the learning economy. In: Danish Research Unit for Industrial Dynamics Working Paper Working Paper, vol. 6, pp. 3–5 (2006)

Lundvall, B.-Å., Johnson, B.: Knowledge management in the learning economy. J. Ind. Stud. 1 (2), 23–42 (1994)

Lundvall, B.-Å., Jenson, M.B., Johnson, B., Lorenz, E.: Forms of Knowledge and Modes of Innovation—From User-Producer Interaction to the National System of Innovation. In: Dosi, G., et al. (eds.) Technical Change and Economic Theory. Pinter Publishers, London (1988)

Maanen, J., Manning, P., Miller, M.: Series editors’ introduction. In: Stebbins, R.: Exploratory Research in the Social Sciences, pp. v–vi. Sage, Thousand Oaks (2001)

Mackenzie, N., Knipe, S.: Research dilemmas: paradigms, methods and methodology. Issues Educ. Res. 16 (2), 193–205 (2006)

Marlow, C.R.: Research Methods for Generalist Social Work. Thomson Brooks/Cole, New York (2005)

Mead, G.H.: The working hypothesis in social reform. Am. J. Sociol. 5 (3), 367–371 (1899)

Milnes, A.G.: Structure of the Pennine Zone (Central Alps): a new working hypothesis. Geol. Soc. Am. Bull. 85 (11), 1727–1732 (1974)

Morgan, D.L.: Paradigms lost and pragmatism regained: methodological implications of combining qualitative and quantitative methods. J. Mixed Methods Res. 1 (1), 48–76 (2007)

Morse, J.: The significance of saturation. Qual. Health Res. 5 (2), 147–149 (1995)

O’Connor, M.K., Netting, F.E., Thomas, M.L.: Grounded theory: managing the challenge for those facing institutional review board oversight. Qual. Inq. 14 (1), 28–45 (2008)

Onwuegbuzie, A.J., Leech, N.L.: On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies. Int. J. Soc. Res. Methodol. 8 (5), 375–387 (2005)

Oppenheim, P., Putnam, H.: Unity of science as a working hypothesis. In: Minnesota Studies in the Philosophy of Science, vol. II, pp. 3–36 (1958)

Patten, M.L., Newhart, M.: Understanding Research Methods: An Overview of the Essentials, 2nd edn. Routledge, New York (2000)

Pearse, N.: An illustration of deductive analysis in qualitative research. In: European Conference on Research Methodology for Business and Management Studies, pp. 264–VII. Academic Conferences International Limited (2019)

Prater, D.N., Case, J., Ingram, D.A., Yoder, M.C.: Working hypothesis to redefine endothelial progenitor cells. Leukemia 21 (6), 1141–1149 (2007)

Ravitch, B., Riggan, M.: Reason and Rigor: How Conceptual Frameworks Guide Research. Sage, Beverley Hills (2012)

Reiter, B.: The epistemology and methodology of exploratory social science research: Crossing Popper with Marcuse. In: Government and International Affairs Faculty Publications. Paper 99. http://scholarcommons.usf.edu/gia_facpub/99 (2013)

Ritchie, J., Lewis, J.: Qualitative Research Practice: A Guide for Social Science Students and Researchers. Sage, London (2003)

Schrag, F.: In defense of positivist research paradigms. Educ. Res. 21 (5), 5–8 (1992)

Shields, P.M.: Pragmatism as a philosophy of science: a tool for public administration. Res. Public Adm. 4, 195–225 (1998)

Shields, P.M., Rangarajan, N.: A Playbook for Research Methods: Integrating Conceptual Frameworks and Project Management. New Forums Press (2013)

Shields, P.M., Tajalli, H.: Intermediate theory: the missing link in successful student scholarship. J. Public Aff. Educ. 12 (3), 313–334 (2006)

Shields, P., & Whetsell, T.: Public administration methodology: A pragmatic perspective. In: Raadshelders, J., Stillman, R., (eds). Foundations of Public Administration, pp. 75–92. New York: Melvin and Leigh (2017)

Shields, P., Rangarajan, N., Casula, M.: It is a Working Hypothesis: Searching for Truth in a Post-Truth World (part I). Sotsiologicheskie issledovaniya 10 , 39–47 (2019a)

Shields, P., Rangarajan, N., Casula, M.: It is a Working Hypothesis: Searching for Truth in a Post-Truth World (part 2). Sotsiologicheskie issledovaniya 11 , 40–51 (2019b)

Smith, J.K.: Quantitative versus qualitative research: an attempt to clarify the issue. Educ. Res. 12 (3), 6–13 (1983a)

Smith, J.K.: Quantitative versus interpretive: the problem of conducting social inquiry. In: House, E. (ed.) Philosophy of Evaluation, pp. 27–52. Jossey-Bass, San Francisco (1983b)

Smith, J.K., Heshusius, L.: Closing down the conversation: the end of the quantitative-qualitative debate among educational inquirers. Educ. Res. 15 (1), 4–12 (1986)

Stebbins, R.A.: Exploratory Research in the Social Sciences. Sage, Thousand Oaks (2001)


Strydom, H.: An evaluation of the purposes of research in social work. Soc. Work/Maatskaplike Werk 49 (2), 149–164 (2013)

Sutton, R.I., Staw, B.M.: What theory is not. Adm. Sci. Q. 40(3), 371–384 (1995)

Swift, J., III: Exploring Capital Metro’s sexual harassment training using Dr. Bengt-Åke Lundvall’s taxonomy of knowledge principles. Applied Research Project, Texas State University. https://digital.library.txstate.edu/handle/10877/3671 (2010)

Thomas, E., Magilvy, J.K.: Qualitative rigor or research validity in qualitative research. J. Spec. Pediatric Nurs. 16 (2), 151–155 (2011)

Twining, P., Heller, R.S., Nussbaum, M., Tsai, C.C.: Some guidance on conducting and reporting qualitative studies. Comput. Educ. 107 , A1–A9 (2017)

Ulriksen, M., Dadalauri, N.: Single case studies and theory-testing: the knots and dots of the process-tracing method. Int. J. Soc. Res. Methodol. 19 (2), 223–239 (2016)

Van Evera, S.: Guide to Methods for Students of Political Science. Cornell University Press, Ithaca (1997)

Whetsell, T.A., Shields, P.M.: The dynamics of positivism in the study of public administration: a brief intellectual history and reappraisal. Adm. Soc. 47 (4), 416–446 (2015)

Willis, J.W., Jost, M., Nilakanta, R.: Foundations of Qualitative Research: Interpretive and Critical Approaches. Sage, Beverley Hills (2007)

Worster, W.T.: The inductive and deductive methods in customary international law analysis: traditional and modern approaches. Georget. J. Int. Law 45 , 445 (2013)

Yin, R.K.: The case study as a serious research strategy. Knowledge 3 (1), 97–114 (1981)

Yin, R.K.: The case study method as a tool for doing evaluation. Curr. Sociol. 40 (1), 121–137 (1992)

Yin, R.K.: Applications of Case Study Research. Sage, Beverley Hills (2011)

Yin, R.K.: Case Study Research and Applications: Design and Methods. Sage Publications, Beverley Hills (2017)


Acknowledgements

The authors contributed equally to this work. The authors would like to thank Quality & Quantity’s editors and the anonymous reviewers for their valuable advice and comments on previous versions of this paper.

Open access funding provided by Alma Mater Studiorum - Università di Bologna within the CRUI-CARE Agreement. There are no funders to report for this submission.

Author information

Authors and Affiliations

Department of Political and Social Sciences, University of Bologna, Strada Maggiore 45, 40125, Bologna, Italy

Mattia Casula

Texas State University, San Marcos, TX, USA

Nandhini Rangarajan & Patricia Shields


Corresponding author

Correspondence to Mattia Casula .

Ethics declarations

Conflict of interest.

No potential conflict of interest was reported by the author.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Casula, M., Rangarajan, N. & Shields, P. The potential of working hypotheses for deductive exploratory research. Qual Quant 55 , 1703–1725 (2021). https://doi.org/10.1007/s11135-020-01072-9


Accepted : 05 November 2020

Published : 08 December 2020

Issue Date : October 2021



  • Exploratory research
  • Working hypothesis
  • Deductive qualitative research


Exploratory Research | Definition, Guide, & Examples

Published on 6 May 2022 by Tegan George . Revised on 20 January 2023.

Exploratory research is a methodological approach that investigates topics and research questions that have not previously been studied in depth.

Exploratory research is often qualitative in nature. However, a study with a large sample conducted in an exploratory manner can be quantitative as well. It is also often referred to as interpretive research or a grounded theory approach due to its flexible and open-ended nature.

Table of contents

  • When to use exploratory research
  • Exploratory research questions
  • Exploratory research data collection
  • Step-by-step example of exploratory research
  • Exploratory vs explanatory research
  • Advantages and disadvantages of exploratory research
  • Frequently asked questions about exploratory research

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use this type of research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.


Exploratory research questions are designed to help you understand more about a particular topic of interest. They can help you connect ideas to understand the groundwork of your analysis without adding any preconceived notions or assumptions yet.

Here are some examples:

  • What effect does using a digital notebook have on the attention span of primary schoolers?
  • What factors influence mental health in undergraduates?
  • What outcomes are associated with an authoritative parenting style?
  • In what ways does the presence of a non-native accent affect intelligibility?
  • How can the use of a grocery delivery service reduce food waste in single-person households?

Collecting information on a previously unexplored topic can be challenging. Exploratory research can help you narrow down your topic and formulate a clear hypothesis , as well as giving you the ‘lay of the land’ on your topic.

Data collection using exploratory research is often divided into primary and secondary research methods, with data analysis following the same model.

Primary research

In primary research, your data is collected directly from primary sources : your participants. There is a variety of ways to collect primary data.

Some examples include:

  • Survey methodology: Sending a survey out to the student body asking them if they would eat vegan meals
  • Focus groups: Compiling groups of 8–10 students and discussing what they think of vegan options for dining hall food
  • Interviews: Interviewing students entering and exiting the dining hall, asking if they would eat vegan meals

Secondary research

In secondary research, your data is collected from preexisting primary research, such as experiments or surveys.

Some other examples include:

  • Case studies : Health of an all-vegan diet
  • Literature reviews : Preexisting research about students’ eating habits and how they have changed over time
  • Online polls, surveys, blog posts, or interviews; social media: Have other universities done something similar?

For some subjects, it’s possible to use large-n government data, such as the decennial census or yearly American Community Survey (ACS) open-source data.

How you proceed with your exploratory research design depends on the research method you choose to collect your data. In most cases, you will follow five steps.

We’ll walk you through the steps using the following example: suppose you are studying how the presence of a non-native accent affects intelligibility. Rather than trying to reduce the learner’s accent, you would like to focus on improving intelligibility.

Step 1: Identify your problem

The first step in conducting exploratory research is identifying what the problem is and whether this type of research is the right avenue for you to pursue. Remember that exploratory research is most advantageous when you are investigating a previously unexplored problem.

Step 2: Hypothesise a solution

The next step is to come up with a solution to the problem you’re investigating. Formulate a hypothetical statement to guide your research.

Step 3. Design your methodology

Next, conceptualise your data collection and data analysis methods and write them up in a research design.

Step 4: Collect and analyse data

Next, you proceed with collecting and analysing your data so you can determine whether your preliminary results are in line with your hypothesis.

In most types of research, you should formulate your hypotheses a priori and refrain from changing them due to the increased risk of Type I errors and data integrity issues. However, in exploratory research, you are allowed to change your hypothesis based on your findings, since you are exploring a previously unexplained phenomenon that could have many explanations.
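Steps 2–4 can be illustrated with a minimal sketch in Python. The intelligibility ratings below and the use of Welch's t statistic as a quick screening device are hypothetical assumptions for illustration, not part of the example study:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical preliminary data: listener intelligibility ratings (0-100)
# for speech samples recorded before and after a training intervention.
before = [62, 58, 65, 60, 57, 63, 61, 59]
after = [70, 66, 72, 65, 68, 74, 69, 67]

def welch_t(xs, ys):
    """Welch's t statistic for two independent samples."""
    vx, vy = stdev(xs) ** 2, stdev(ys) ** 2
    return (mean(ys) - mean(xs)) / sqrt(vx / len(xs) + vy / len(ys))

t = welch_t(before, after)
print(f"mean before = {mean(before):.1f}, mean after = {mean(after):.1f}, t = {t:.2f}")

# Exploratory check, not a definitive test: do the preliminary results
# run in the direction the working hypothesis expects?
if mean(after) > mean(before):
    print("Preliminary results are in line with the hypothesis.")
```

In exploratory work the point of such a check is direction-finding, not proof; a later explanatory study with a larger sample would be needed for generalisable results.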

Step 5: Avenues for future research

Decide if you would like to continue studying your topic. If so, it is likely that you will need to change to another type of research. As exploratory research is often qualitative in nature, you may need to conduct quantitative research with a larger sample size to achieve more generalisable results.

It can be easy to confuse exploratory research with explanatory research. To understand the relationship, it can help to remember that exploratory research lays the groundwork for later explanatory research.

Exploratory research investigates research questions that have not been studied in depth. The preliminary results often lay the groundwork for future analysis.

Explanatory research questions tend to start with ‘why’ or ‘how’, and the goal is to explain why or how a previously studied phenomenon takes place.


Like any other research design, exploratory research has its trade-offs: it provides a unique set of benefits but also comes with downsides.

Advantages

  • It can be very helpful in narrowing down a challenging or nebulous problem that has not been previously studied.
  • It can serve as a great guide for future research, whether your own or another researcher’s. With new and challenging research problems, adding to the body of research in the early stages can be very fulfilling.
  • It is very flexible, cost-effective, and open-ended. You are free to proceed however you think is best.

Disadvantages

  • It usually lacks conclusive results, and results can be biased or subjective due to a lack of preexisting knowledge on your topic.
  • It’s typically not externally valid and generalisable, and it suffers from many of the challenges of qualitative research .
  • Since you are not operating within an existing research paradigm, this type of research can be very labour-intensive.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research explores the main aspects of a new or barely researched question.

Explanatory research explains the causes and effects of an already widely researched question.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

Cite this Scribbr article


George, T. (2023, January 20). Exploratory Research | Definition, Guide, & Examples. Scribbr. Retrieved 22 April 2024, from https://www.scribbr.co.uk/research-methods/exploratory-research-design/


Tegan George



Exploratory Research: Types & Characteristics


Consider a scenario where a juice bar owner feels that increasing the variety of juices will bring in more customers. However, he is not sure and needs more information, so he decides to conduct exploratory research to find out whether expanding the juice selection will attract more customers or whether there is a better idea.

Another example of exploratory research is a podcast survey template that can be used to collect feedback about podcast consumption both from existing listeners and from podcast listeners who are not currently subscribed to the channel. This helps the author of the podcast create curated content that will gain a larger audience. Let’s explore this topic.


Content Index

  • Exploratory research: Definition
  • Primary research methods
  • Secondary research methods
  • Exploratory research: Steps to conduct the research
  • Characteristics of exploratory research
  • Advantages of exploratory research
  • Disadvantages of exploratory research
  • Importance of exploratory research

Exploratory research is defined as research used to investigate a problem that is not clearly defined. It is conducted to gain a better understanding of the existing research problem, but it will not provide conclusive results. In such research, the researcher starts with a general idea and uses the research as a medium to identify issues that can become the focus of future research. An important aspect here is that the researcher should be willing to change direction as new data or insights emerge. Such research is usually carried out when the problem is at a preliminary stage. It is often referred to as a grounded theory approach or interpretive research, as it is used to answer questions like what, why, and how.

Types and methodologies of Exploratory research

While it may sound difficult to research something about which very little is known, several methods can help a researcher figure out the best research design, data collection methods, and choice of subjects. Research can be conducted in two ways, namely primary and secondary. Under these two types, there are multiple methods that a researcher can use. The data gathered from this research can be qualitative or quantitative. Some of the most widely used research designs include the following:


Primary research is information gathered directly from the subject, whether a group of people or a single individual. Such research can be carried out by the researcher directly or through a third party employed to conduct it on the researcher’s behalf. Primary research is carried out specifically to explore a problem that requires in-depth study.

  • Surveys/polls: Surveys and polls are used to gather information from a predefined group of respondents and are among the most important quantitative methods. Various types of surveys and polls can be used to explore opinions, trends, and so on. With advances in technology, surveys can now be sent online and are very easy to access, for instance through a survey app on tablets, laptops, or mobile phones, with responses available to the researcher in real time. Nowadays, most organizations offer short surveys and reward respondents in order to achieve higher response rates.


For example: A survey is sent to a given audience to understand their opinions about the size of mobile phones when purchasing one. Based on this information, the organization can dig deeper into the topic and make business decisions.

  • Interviews: While you may get a lot of information from public sources, an in-person interview can sometimes provide more in-depth information on the subject being studied. Interviewing is a qualitative research method. An interview with a subject matter expert can give you meaningful insights that a generalized public source cannot provide. Interviews are carried out in person or by telephone, using open-ended questions to gather meaningful information about the topic.

For example: An interview with an employee can give you insights into their degree of job satisfaction, while an interview with a subject matter expert on quantum theory can give you in-depth information on that topic.

  • Focus groups: The focus group is another widely used method in exploratory research. A group of people is chosen and encouraged to express their views on the topic being studied. It is important to ensure that the individuals chosen for a focus group have a common background and comparable experiences.

For example: A focus group can help a researcher identify consumers' opinions when buying a phone, and understand what they value in the purchase, whether it is screen size, brand or dimensions. From this, the organization can understand consumer buying attitudes, opinions and so on.

  • Observations: Observational research can involve qualitative or quantitative observation. The researcher observes a person and draws findings from their reaction to certain parameters; there is no direct interaction with the subject.

For example: An FMCG company wants to know how its consumers react to the new shape of its product. The researcher observes customers' first reactions and collects the data, which is then used to draw inferences.


Secondary research gathers information from previously published primary research, drawing on sources such as case studies, magazines, newspapers and books.

  • Online research: This is one of the fastest ways to gather information on any topic today. A great deal of data is readily available on the internet and can be downloaded whenever needed. An important consideration in such research is the genuineness and authenticity of the websites from which the information is gathered.

For example: A researcher needs to find out what percentage of people prefer a specific phone brand. Entering the query into a search engine yields multiple links with related information and statistics.

  • Literature research: Literature research is one of the least expensive methods for discovering a hypothesis. A tremendous amount of information is available in libraries, online sources and commercial databases. Sources include newspapers, magazines, library books, government documents, topic-specific articles, annual reports and published statistics from research organizations.

However, a few things must be kept in mind when researching from these sources. Government agencies provide authentic information, though sometimes at a nominal cost. Research from educational institutions is often overlooked, yet educational institutions carry out more research than most other entities.

Furthermore, commercial sources provide information on major topics such as political agendas, demographics, financial information and market trends.

For example: A company has low sales. Available statistics and market literature can show whether the problem is market-related or organization-related; if the topic being studied concerns the financial situation of a country, research data can be accessed through government documents or commercial sources.

  • Case study research: Case study research helps a researcher find more information by carefully analyzing existing cases that have faced a similar problem. Such exploratory analysis is especially important in today's business world. The researcher must analyze the case carefully with regard to all the variables present in the previous case compared with his or her own. It is commonly used by business organizations, in the social sciences and in the health sector.


For example: A particular orthopedic surgeon has the highest success rate for knee surgeries. Many other hospitals and doctors have studied this case to understand and benchmark the surgeon's method in order to increase their own success rates.

  • Identify the problem: The researcher identifies the subject of research, and the problem is addressed by carrying out multiple methods to answer the questions.
  • Create the hypothesis: If the researcher finds that there are no prior studies and the problem has not been precisely resolved, he or she creates a hypothesis based on the questions obtained while identifying the problem.
  • Further research: Once the data has been obtained, the researcher continues the study through descriptive investigation. Qualitative methods are used to study the subject in more detail and verify whether the information is true.


  • They are unstructured studies.
  • It is usually low cost, interactive and open-ended.
  • It enables a researcher to answer questions like: What is the problem? What is the purpose of the study? What topics could be studied?
  • Exploratory research is generally carried out when there is no prior research, or when existing research does not answer the problem precisely enough.
  • It is time-consuming research that requires patience and carries risks.
  • The researcher has to go through all the information available for the particular study being conducted.
  • There is no set of rules for carrying out the research per se; such studies are flexible, broad and scattered.
  • The research needs to have importance or value; if the problem is not important in the industry, the research carried out is ineffective.
  • The research should be supported by a few theories, as these make it easier for the researcher to assess the findings and move ahead in the study.
  • Such research usually produces qualitative data; however, in certain cases quantitative findings can be generalized to a larger sample through surveys and experiments.


  • The researcher has a lot of flexibility and can adapt to changes as the research progresses.
  • It is usually low cost.
  • It helps lay the foundation of a research project, which can lead to further research.
  • It enables the researcher to understand at an early stage whether the topic is worth investing time and resources in, and whether it is worth pursuing.
  • It can assist other researchers in finding possible causes of the problem, which can then be studied in detail to determine which of them is the most likely cause.
  • Even though it can point you in the right direction, it is usually inconclusive.
  • The main disadvantage of exploratory research is that it provides qualitative data, and interpretation of such information can be judgmental and biased.
  • Most of the time, exploratory research involves a small sample, so the results cannot be accurately generalized to a larger population.
  • When data is collected through secondary research, there is a chance that the data is old and not up to date.
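The small-sample limitation above can be made concrete with a standard margin-of-error calculation. The sketch below is illustrative (not from this article); it uses the usual normal approximation for a survey proportion, and the function name is our own:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(p, n, confidence=0.95):
    """Approximate margin of error for a proportion p estimated
    from n respondents (normal approximation to the binomial)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # e.g. ~1.96 for 95%
    return z * sqrt(p * (1 - p) / n)

# A 20-person focus group vs. a 1,000-respondent survey, both observing p = 0.5:
print(round(margin_of_error(0.5, 20), 3))    # roughly +/- 22 percentage points
print(round(margin_of_error(0.5, 1000), 3))  # roughly +/- 3 percentage points
```

With only 20 participants the 95% interval spans roughly 28% to 72%, which is why findings from a typical exploratory sample are treated as directional rather than generalizable.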


Exploratory research is carried out when a topic needs to be understood in depth, especially if it has not been studied before. The goal of such research is to explore the problem and its surrounding context, not to derive a conclusion from it. It enables a researcher to set a strong foundation for exploring ideas, choosing the right research design and finding the variables that actually matter for in-depth analysis. Most importantly, it can save organizations and researchers a great deal of time and resources by revealing early whether a topic is worth pursuing.


Open access | Published: 28 May 2018

Exploratory studies to decide whether and how to proceed with full-scale evaluations of public health interventions: a systematic review of guidance

Britt Hallingberg (ORCID: orcid.org/0000-0001-8016-5793), Ruth Turley, Jeremy Segrott, Daniel Wight, Peter Craig, Laurence Moore, Simon Murphy, Michael Robling, Sharon Anne Simpson & Graham Moore

Pilot and Feasibility Studies, volume 4, Article number: 104 (2018)


Background

Evaluations of complex interventions in public health are frequently undermined by problems that can be identified before the effectiveness study stage. Exploratory studies, often termed pilot and feasibility studies, are a key step in assessing the feasibility and value of progressing to an effectiveness study. Such studies can provide vital information to support more robust evaluations, thereby reducing costs and minimising potential harms of the intervention. This systematic review forms the first phase of a wider project to address the need for stand-alone guidance for public health researchers on designing and conducting exploratory studies. The review objectives were to identify and examine existing recommendations concerning when such studies should be undertaken, questions they should answer, suitable methods, criteria for deciding whether to progress to an effectiveness study and appropriate reporting.

Methods

We searched for published and unpublished guidance reported between January 2000 and November 2016 via bibliographic databases, websites, citation tracking and expert recommendations. Included papers were thematically synthesized.

Results

The search retrieved 4095 unique records. Thirty papers were included, representing 25 unique sources of guidance/recommendations. Eight themes were identified: pre-requisites for conducting an exploratory study, nomenclature, guidance for intervention assessment, guidance surrounding any future evaluation study design, flexible versus fixed design, progression criteria to a future evaluation study, stakeholder involvement and reporting of exploratory studies. Exploratory studies were described as being concerned with the intervention content, the future evaluation design or both. However, the nomenclature and endorsed methods underpinning these aims were inconsistent across papers. There was little guidance on what should precede or follow an exploratory study and decision-making surrounding this.

Conclusions

Existing recommendations are inconsistent concerning the aims, designs and conduct of exploratory studies, and guidance is lacking on the evidence needed to inform when to proceed to an effectiveness study.

Trial registration

PROSPERO 2016, CRD42016047843

Background

Improving public health and disrupting complex problems such as smoking, obesity and mental health requires complex, often multilevel, interventions. Such interventions are often costly and may cause unanticipated harms and therefore require evaluation using the most robust methods available. However, pressure to identify effective interventions can lead to premature commissioning of large effectiveness studies of poorly developed interventions, wasting finite research resources [ 1 , 2 , 3 ]. In the development of pharmaceutical drugs over 80% fail to reach ‘Phase III’ effectiveness trials, even after considerable investment [ 4 ]. With public health interventions, the historical tendency to rush to full evaluation has in some cases led to evaluation failures due to issues which could have been identified at an earlier stage, such as difficulties recruiting sufficient participants [ 5 ]. There is growing consensus that improving the effectiveness of public health interventions relies on attention to their design and feasibility [ 3 , 6 ]. However, what constitutes good practice when deciding when a full evaluation is warranted, what uncertainties should be addressed to inform this decision and how, is unclear. This systematic review aims to synthesize existing sources of guidance for ‘exploratory studies’ which we broadly define as studies intended to generate evidence needed to decide whether and how to proceed with a full-scale effectiveness study. They do this by optimising or assessing the feasibility of the intervention and/or evaluation design that the effectiveness study would use. Hence, our definition includes studies variously referred to throughout the literature as ‘pilot studies’, ‘feasibility studies’ or ‘exploratory trials’. Our definition is consistent with previous work conducted by Eldridge et al. 
[ 7 , 8 ], who define feasibility as an overarching concept [ 8 ] which assesses; ‘… whether the future trial can be done, should be done, and, if so, how’ (p. 2) [ 7 ]. However, our definition also includes exploratory studies to inform non-randomised evaluations, rather than a sole focus on trials.

The importance of thoroughly establishing the feasibility of intervention and evaluation plans prior to embarking on an expensive, fully powered evaluation was indicated in the Medical Research Council’s (MRC) framework for the development and evaluation of complex interventions to improve health [ 9 , 10 ]. This has triggered shifts in the practice of researchers and funders toward seeking and granting funding for an ever growing number of studies to address feasibility issues. Such studies are however in themselves often expensive [ 11 , 12 ]. While there is a compelling case for such studies, the extent to which this substantial investment in exploratory studies has to date improved the effectiveness and cost-effectiveness of evidence production remains to be firmly established. Where exploratory studies are conducted poorly, this investment may simply lead to expenditure of large amounts of additional public money, and several years’ delay in getting evidence into the hands of decision-makers, without necessarily increasing the likelihood that a future evaluation will provide useful evidence.

The 2000 MRC guidance used the term ‘exploratory trial’ for work conducted prior to a ‘definitive trial’, indicating that it should primarily address issues concerning the optimisation, acceptability and delivery of the intervention [ 13 ]. This included adaptation of the intervention, consideration of variants of the intervention, testing and refinement of delivery method or content, assessment of learning curves and implementation strategies and determining the counterfactual. Other possible purposes of exploratory trials included preliminary assessment of effect size in order to calculate the sample size for the main trial and other trial design parameters, including methods of recruitment, randomisation and follow-up. Updated MRC guidance in 2008 moved away from the sole focus on RCTs (randomised controlled trials) of its predecessor reflecting recognition that not all interventions can be tested using an RCT and that the next most robust methods may sometimes be the best available option [ 10 , 14 ]. Guidance for exploratory studies prior to a full evaluation have, however, often been framed as relevant only where the main evaluation is to be an RCT [ 13 , 15 ].

However, the goals of exploratory studies advocated by research funders have to date varied substantially. For instance, the National Institute for Health Research Evaluation Trials and Studies Coordinating Centre (NETSCC) definitions of feasibility and pilot studies do not include examination of intervention design, delivery or acceptability and do not suggest that modifications to the intervention prior to full-scale evaluation will arise from these phases. Yet the NIHR (National Institute for Health Research) portfolio of funded studies indicates various uses of terms such as ‘feasibility trial’, ‘pilot trial’ and ‘exploratory trial’ to describe studies with similar aims, while it is rare for such studies not to include a focus on intervention parameters [ 16 , 17 , 18 ]. Within the research literature, there is considerable divergence over what exploratory studies should be called, what they should achieve, what they should entail, whether and how they should determine progression to future studies and how they should be reported [ 7 , 8 , 19 , 20 , 21 ].

This paper presents a systematic review of the existing recommendations and guidance on exploratory studies relevant to public health, conducted as the first stage of a project to develop new MRC guidance on exploratory studies. This review aims to produce a synthesis of current guidance/recommendations in relation to the definition, purpose and content of exploratory studies, and what is seen as ‘good’ and ‘bad’ practice as presented by the authors. It will provide an overview of key gaps and areas in which there is inconsistency within and between documents. The rationale for guidance and recommendations are presented, as well as the theoretical perspectives informing them. In particular, we examine how far the existing recommendations answer the following questions:

When is it appropriate to conduct an exploratory study?

What questions should such studies address?

What are the key methodological considerations in answering these questions?

What criteria should inform a decision on whether to progress to an effectiveness study?

How should exploratory studies be reported?

This review is reported in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement [ 22 ] as evidenced in the PRISMA checklist (see Additional file  1 : Table S1). The review protocol is registered on PROSPERO (registration number: CRD42016047843; www.crd.york.ac.uk/prospero ).

Literature search

A comprehensive search (see Additional file  2 : Appendix) was designed and completed during August to November 2016 to identify published and grey literature reported between January 2000 and November 2016 that contained guidance and recommendations on exploratory studies that could have potential relevance to public health. Bibliographic databases were CINAHL, Embase, MEDLINE, MEDLINE-In-process, PsycINFO, Web of Science and PubMed. Supplementary searches included key websites (see Additional file  2 : Appendix) and forward and backward citation tracking of included papers, as well as contacting experts in the field. The first MRC guidance on developing and evaluating complex interventions in health was published in 2000; we therefore excluded guidance published before this year.

Selection of included papers

Search results were exported into the reference management software EndNote, and clearly irrelevant or duplicate records were removed by an information specialist. Eligibility criteria were applied to abstracts and potentially relevant full-text papers by two reviewers working independently in duplicate (BH, JS). Discrepancies were resolved by consensus or by a third reviewer if necessary. Full criteria are shown in Table 1. During screening of eligible studies, it became evident that determining whether or not guidance was applicable to public health was not always clear. The criteria in Table 1 were agreed by the team after a list of potentially eligible publications was identified.

Quality assessment of included papers

Given the nature of the publications included (expert guidance or methodological discussion papers), quality assessment was not applicable.

Data extraction and thematic synthesis

A thematic synthesis of guidance within the included documents was performed [ 23 ]. This involved an a priori coding framework (based on the project's aims and objectives) developed by RT, JS and DW ([ 24 ], see Additional file 2: Appendix). Data were extracted using this schema in the qualitative analysis software NVivo by one reviewer (BH). A 10% sample of coded papers was checked by a second reviewer (JS). Data were then conceptualised into final themes by agreement (BH, JS, DW, RT).

Review statistics

Four thousand ninety-five unique records were identified, of which 93 were reviewed in full text (see Fig. 1). In total, 30 documents were included in the systematic review, representing 25 unique sets of guidance. Most sources of guidance did not explicitly identify an intended audience, and guidance varied in its relevance to public health. Table 2 presents an overview of all sources of guidance included in the review, indicating which are more or less relevant to public health and which apply specifically to exploratory studies with a randomised design.

Fig. 1. Flow diagram

Findings from guidance

The included guidance reported a wide range of recommendations on the process of conducting and reporting exploratory studies. We categorised these into eight themes that capture: pre-requisites for conducting an exploratory study, nomenclature, guidance for intervention assessment, guidance surrounding the future evaluation study design, adaptive vs rigid designs, progression criteria for exploratory studies, stakeholder involvement and reporting.

Narrative description of themes

Theme 1: pre-requisites for conducting an exploratory study.

Where mentioned, pre-requisite activities included determining the evidence base, establishing the theoretical basis for the intervention, identifying the intervention components as well as modelling of the intervention in order to understand how intervention components interact and impact on final outcomes [ 9 , 25 , 26 , 27 ]. These were often discussed within the context of the MRC’s intervention development-evaluation cycle [ 6 , 9 , 10 , 13 , 25 , 26 , 27 , 28 ]. Understanding how intervention components interact with various contextual settings [ 6 , 27 , 29 ] and identifying unintended harms [ 6 , 29 ] as well as potential implementation issues [ 6 , 9 , 10 , 30 ] were also highlighted. There was little detail, however, on how to judge when the above conditions had been met sufficiently to move on to an exploratory study.

Theme 2: nomenclature

A wide range of terms were used, sometimes interchangeably, to describe exploratory studies with the most common being pilot trial/study. Table  3 shows the frequency of the terms used in guidance including other terms endorsed.

Different terminology did not appear to be consistently associated with specific study purposes (see theme 3), as illustrated in Table 2. ‘Pilot’ and ‘feasibility’ studies were sometimes used interchangeably [ 10 , 20 , 25 , 26 , 27 , 28 , 31 ], while others made distinctions between the two according to design features or particular aims [ 7 , 8 , 19 , 29 , 32 , 33 , 34 ]. For example, some described pilot studies as a smaller version of a future RCT, run in miniature [ 7 , 8 , 19 , 29 , 32 , 33 , 34 ]; these were sometimes associated with a randomised design [ 32 , 34 ], but not always [ 7 , 8 ]. In contrast, Eldridge et al. used feasibility studies as an umbrella term, with pilot studies representing a subset of feasibility studies [ 7 , 8 ]: ‘We suggest that researchers view feasibility as an overarching concept, with all studies done in preparation for a main study open to being called feasibility studies, and with pilot studies as a subset of feasibility studies.’ (p. 18) [ 8 ].

Feasibility studies could focus on particular intervention and trial design elements [ 29 , 32 ] which may not include randomisation [ 32 , 34 ]. Internal pilot studies were primarily viewed as part of the full trial [ 8 , 32 , 35 , 36 , 37 , 38 ] and are therefore not depicted under nomenclature in Table  3 .

While no sources explicitly stated that an exploratory study should focus on one area and not the other, aims and associated methods of exploratory studies diverged into two separate themes. They pertained to either examining the intervention itself or the future evaluation design, and are detailed below in themes 3 and 4.

Theme 3: guidance for intervention assessment

Sources of guidance endorsed exploratory studies having formative purposes (i.e. refining the intervention and addressing uncertainties related to intervention implementation [ 13 , 15 , 29 , 31 , 39 ]) as well as summative goals (i.e. assessing the potential impact of an intervention or its promise [ 6 , 13 , 39 ]).

Refining the intervention and underlying theory

Some guidance suggested that changes could be made within exploratory studies to refine the intervention and underlying theory [ 15 , 29 , 31 ] and adapt intervention content to a new setting [ 39 ]. However, guidance was not clear on what constituted minor vs. major changes and implications for progression criteria (see theme 6). When making changes to the intervention or underlying theory, some guidance recommended this take place during the course of the exploratory study (see theme 5). Others highlighted the role of using a multi-arm design to select the contents of the intervention before a full evaluation [ 13 ] and to assess potential mechanisms of multiple different interventions or intervention components [ 29 ]. Several sources highlighted the role of qualitative research in optimising or refining an intervention, particularly for understanding the components of the logic model [ 29 ] and surfacing hidden aspects of the intervention important for delivering outcomes [ 15 ].

Intervention implementation

There was agreement across a wide range of guidance that exploratory studies could explore key uncertainties related to intervention implementation, such as acceptability, feasibility or practicality. Notably these terms were often ill-defined and used interchangeably. Acceptability was considered in terms of recipients’ reactions [ 7 , 8 , 29 , 32 , 39 ] while others were also attentive to feasibility from the perspective of intervention providers, deliverers and health professionals [ 6 , 9 , 29 , 30 , 34 , 39 ]. Implementation, feasibility, fidelity and ‘practicality’ explored the likelihood of being able to deliver in practice what was intended [ 25 , 26 , 27 , 30 , 39 ]. These were sometimes referred to as aims within an embedded process evaluation that took place alongside an exploratory study, although the term process evaluation was never defined [ 7 , 10 , 15 , 29 , 40 ].

Qualitative research was encouraged for assessment of intervention acceptability [ 21 ] or for implementation (e.g. via non-participant observation [ 15 ]). Caution was recommended with regards to focus groups where there is a risk of masking divergent views [ 15 ]. Others recommended quantitative surveys to examine retention rates and reasons for dropout [ 7 , 30 ]. Furthermore, several sources emphasised the importance of testing implementation in a range of contexts [ 15 , 29 , 39 , 41 ]—especially in less socioeconomically advantaged groups, to examine the risk of widening health inequalities [ 29 , 39 ].

One source of guidance considered whether randomisation was required for assessing intervention acceptability, believing this to be unnecessary but also suggesting it could ‘potentially depend on preference among interventions offered in the main trial’ ([ 21 ]; p. 9). Thus, issues of intervention acceptability, particularly within multi-arm trials, may relate to clinical equipoise and acceptability of randomisation procedures among participants [ 30 ].

Appropriateness of assessing intervention impact

Several sources of guidance discussed the need to understand the impact of the intervention, including harms, benefits or unintended consequences [ 6 , 7 , 15 , 29 , 39 ]. Much of the guidance focused on statistical tests of effectiveness, with disagreement on the soundness of this aim, although qualitative methods were also recommended [ 15 , 42 ]. Some condemned statistical testing for effectiveness [ 7 , 20 , 29 , 32 , 41 ], as such studies are often underpowered, leading to imprecise and potentially misleading estimates of effect sizes [ 7 , 20 ]. Others argued that an estimate of likely effect size could provide evidence that the intervention was working as intended and not causing serious unintended harms [ 6 ], and could thus be used to calculate the power for the full trial [ 13 ]. Later guidance from the MRC is more ambiguous than earlier guidance, stating that estimates should be interpreted with caution, while simultaneously stating ‘safe’ assumptions of effect sizes as a pre-requisite before continuing to a full evaluation [ 10 ]. NIHR guidance, which distinguished between pilot and feasibility studies, supported the assessment of a primary outcome in pilot studies, although it is unclear whether this suggests that a pilot should involve an initial test of changes in the primary outcome, or simply that the primary outcome should be measured in the same way as it would be in a full evaluation. By contrast, for ‘feasibility studies’, it indicated that an aim may include designing an outcome measure to be used in a full evaluation.

Others made the case for identifying evidence of potential effectiveness, including use of interim or surrogate endpoints [ 7 , 41 ], defined as ‘…variables on the causal pathway of what might eventually be the primary outcome in the future definitive RCT, or outcomes at early time points, in order to assess the potential for the intervention to affect likely outcomes in the future definitive RCT…’ [ 7 ] (p. 14).

Randomisation was implied as a design feature of exploratory studies when estimating the effect size of the intervention, as it maximises the likelihood that observed differences are due to the intervention [ 9 , 39 ]. Guidance was mostly written from the starting assumption that the full evaluation will take the form of an RCT, and focused less on exploratory studies for quasi-experimental or other designs. For studies that aim to assess potential effectiveness using a surrogate or interim outcome, a standard sample size calculation was recommended to ensure adequate power, although it was noted that this aim is rare in exploratory studies [ 7 ].
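To illustrate what a "standard sample size calculation" looks like in this context, the sketch below computes the per-arm sample size for a two-arm comparison of means using the usual normal-approximation formula n = 2(z_{1-α/2} + z_{1-β})²/d², where d is the standardized effect size. This is an illustrative example under our own assumptions (the function name and the effect sizes shown are not from the review):

```python
from math import ceil
from statistics import NormalDist

def per_arm_sample_size(effect_size, alpha=0.05, power=0.80):
    """Per-arm n for a two-arm comparison of means (normal approximation):
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2, d = standardized effect."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A conservative ("safe") small-to-moderate effect assumption (d = 0.3)
# implies a much larger trial than an optimistic one (d = 0.5):
print(per_arm_sample_size(0.3))  # 175 per arm
print(per_arm_sample_size(0.5))  # 63 per arm
```

The sensitivity of n to d is exactly why guidance warns against powering a full trial on an imprecise effect estimate from a small exploratory study: halving the assumed effect size roughly quadruples the required sample.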

Theme 4: guidance surrounding the future evaluation design

Sources consistently advocated assessing the feasibility of study procedures or estimating parameters of the future evaluation. Recommendations are detailed below.

Assessing feasibility of the future evaluation design

Assessing the feasibility of future evaluation procedures was commonly recommended [6, 7, 10, 15, 30, 32, 33, 34, 37, 41] to avert problems that could undermine the conduct or acceptability of the future evaluation [6, 15, 30]. A wide range of procedures were suggested as requiring assessment of feasibility, including data collection [20, 30, 34, 36, 41], participant retention strategies [13], randomisation [7, 13, 20, 30, 34, 36, 38, 41], recruitment methods [13, 30, 32, 34, 35, 38, 41], running the full trial protocol [20, 30, 36], the willingness of participants to be randomised [30, 32] and issues of contamination [30]. There was disagreement concerning the appropriateness of assessing blinding in exploratory studies [7, 30, 34], with one source noting that double blinding is difficult when participants are assisted in changing their behaviour, although assessing single blinding may be possible [30].

Qualitative [15, 30, 34], quantitative [34] and mixed methods [7] were endorsed for assessing these processes. Reflecting the tendency for guidance on exploratory studies to be limited to studies in preparation for RCTs, discussion of the role of randomisation at the exploratory study stage featured heavily. Randomisation within an exploratory study was considered necessary for examining the feasibility of recruitment, consent to randomisation, retention, contamination, maintenance of blinding in the control and intervention groups, randomisation procedures and whether all the components of a protocol can work together, although it was not deemed necessary for assessing outcome burden or participant eligibility [21, 30, 34]. While there was consensus about what issues could be assessed through randomisation, sources disagreed on whether an exploratory study should always include randomisation, even if the future evaluation is to be an RCT. Contention seemed to be linked to variation in nomenclature and associated aims. For example, some defined a pilot study as a study run in miniature to test how all its components work together, thereby dictating a randomised design [32, 34]. For feasibility studies, by contrast, randomisation was necessary only if it reduced the uncertainties in estimating parameters for the future evaluation [32, 34]. Similarly, other guidance highlighted that an exploratory study (irrespective of nomenclature) should address the main uncertainties, and thus may not depend on randomisation [8, 15].

Estimating parameters of the future evaluation design

A number of sources recommended that exploratory studies should inform the parameters of the future evaluation design. Areas for investigation included estimating the sample size required for the future evaluation (e.g. measuring outcomes [32, 35], power calculations [13], deriving effect size estimates [6, 7, 39] and estimating target differences [35, 43]); deciding what outcomes to measure and how [9, 20, 30, 36]; assessing the quality of measures (e.g. their reliability, validity, feasibility and sensitivity) [7, 20, 30]; identification of a control group [9, 13]; recruitment, consent and retention rates [10, 13, 20, 30, 32, 34, 36]; and information on the cost of the future evaluation design [9, 30, 36].
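Rates such as recruitment, consent and retention are proportions estimated from small samples, so reporting them with an interval estimate conveys how imprecise they are. The sketch below is illustrative only, not drawn from the reviewed guidance: it uses the standard Wilson score interval, and the figures (24 consenting of 40 invited) are hypothetical.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a proportion, e.g. the consent rate
    observed in a pilot. Small pilot samples give wide intervals,
    which is exactly the uncertainty worth reporting."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical pilot: 24 of 40 invited participants consented (60%),
# but the 95% interval spans roughly 45% to 74%.
lo, hi = wilson_ci(24, 40)
print(round(lo, 2), round(hi, 2))  # 0.45 0.74
```

The width of such an interval illustrates why guidance cautions against treating point estimates of rates from small exploratory studies as precise planning inputs.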

While qualitative methods were deemed useful for selecting outcomes and their suitable measures [15], most guidance concentrated on quantitative methods for estimating future evaluation sample sizes. This was contentious because the lack of precision of estimates from a small pilot can lead to over- or under-estimation of the sample size required in a future evaluation [20, 30, 41]. Estimating sample sizes from effect size estimates in an exploratory study was nevertheless argued by some to be useful if literature was scant and the exploratory study used the same design and outcome as the future evaluation [30, 39]. Cluster RCTs, which are common in public health interventions, were specifically earmarked as unsuitable for estimating parameters for sample size calculations (e.g. intra-cluster correlation coefficients), as well as recruitment and follow-up rates, without additional information from other sources, because a large number of clusters and individual participants would be required [41]. Others referred to 'rules of thumb' when determining sample sizes in an exploratory study, with numbers varying between 10 and 75 participants per trial arm in individually randomised studies [7, 30, 36]. Several also recommended considering the desired meaningful difference in health outcomes for a future evaluation and the sample size needed to detect it, rather than conducting sample size calculations using estimates of likely effect size from pilot data [30, 35, 38, 43].
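The latter, target-difference approach can be sketched numerically. This is a minimal illustration, not taken from the reviewed guidance: it assumes the standard normal-approximation sample size formula for comparing two means in an individually randomised two-arm trial, with hypothetical values for the target difference and outcome standard deviation.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.8):
    """Approximate sample size per arm to detect a pre-specified
    target difference `delta` in a continuous outcome with standard
    deviation `sd`, using the normal-approximation formula
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sd / delta)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_a + z_b) ** 2) * (sd / delta) ** 2)

# A target difference of half a standard deviation, at 5% two-sided
# significance and 80% power, needs about 63 participants per arm:
print(n_per_arm(delta=0.5, sd=1.0))  # 63
```

The point of the recommendation is that `delta` here is a meaningful difference chosen in advance, not an effect estimate recycled from imprecise pilot data.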

A randomised design was deemed unnecessary for estimating costs or selecting outcomes, although it was valued for estimating recruitment and retention rates for intervention and control groups [21, 34]. Where guidance indicated the estimation of an effect size to inform the sample size for a future evaluation, a randomised design was deemed necessary [9, 39].

Theme 5: flexible vs. fixed design

Sources stated that exploratory studies could employ a rigid or flexible design. With the latter, the design can change during the course of the study, which is useful for making changes to the intervention as well as to the future evaluation design [6, 13, 15, 31]. Here, qualitative data can be analysed as they are collected, shaping the exploratory study process, for instance the sampling of subsequent data collection points [15], and clarifying implications for intervention effectiveness [31].

In contrast, fixed exploratory studies were encouraged when primarily investigating the parameters and processes of the future evaluation [13]. It may be that the nomenclature used in some guidance (e.g. pilot studies described as miniature versions of the evaluation) implies a distinction between more flexible and more fixed designs. Some guidance did not mention whether changes should be made during the course of an exploratory study or afterwards, in order to arrive at the best possible design for the future evaluation [6, 7, 21].

Theme 6: progression criteria to a future evaluation study

Little guidance was provided on what should be considered when formulating progression criteria for continuing to a future evaluation study. Some focused on the relevant uncertainties of feasibility [32, 39], while others highlighted specific items concerning cost-effectiveness [10], refining causal hypotheses to be tested in a future evaluation [29] and meeting recruitment targets [20, 34]. As discussed in themes 3 and 4, statistically testing for effectiveness and using effect sizes for power calculations were cautioned against by some, and so criteria based on effect sizes were not specified [38].

Greater discussion was devoted to how to weigh evidence from an exploratory study that addressed multiple aims and used different methods. Some explicitly stated that progression criteria should be judged not as strict thresholds but as guidelines, using, for example, a traffic light system with varying levels of acceptability [7, 41]. Others highlighted a realist approach, moving away from binary indicators towards 'what is feasible and acceptable for whom and under what circumstances' [29]. In light of the difficulties surrounding interpretation of effect estimates, several sources recommended that qualitative findings from exploratory studies should be more influential than quantitative findings [15, 38].
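A traffic light rule of this kind can be sketched as a simple mapping from a feasibility estimate to a signal. The function name and the thresholds below are hypothetical, chosen only to illustrate the idea of pre-specified guideline bands rather than a single pass/fail cut-off; no specific thresholds are given in the reviewed guidance.

```python
def progression_signal(observed, green, amber):
    """Hypothetical traffic light rule for a feasibility estimate such
    as a recruitment or retention rate: 'green' (proceed), 'amber'
    (proceed with modifications), 'red' (do not proceed), judged
    against pre-specified thresholds."""
    if observed >= green:
        return "green"
    if observed >= amber:
        return "amber"
    return "red"

# e.g. illustrative pre-specified retention thresholds of 80% (green)
# and 60% (amber); an observed rate of 72% suggests proceeding with
# modifications rather than stopping outright:
print(progression_signal(0.72, green=0.80, amber=0.60))  # amber
```

The amber band is what distinguishes this from a strict threshold: it creates an explicit, pre-agreed space for judgement about modifications before the decision is made.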

Interestingly, there was ambiguity regarding progression when exploratory findings indicated substantial changes to the intervention or evaluation design. Sources considering this issue suggested that if 'extensive changes' or 'major modifications' are made to either (they did not specify what qualifies as such), researchers should return to the exploratory phase [21, 30] or the intervention development phase [15].

‘Alternatively, at the feasibility phase, researchers may identify fundamental problems with the intervention or trial conduct and return to the development phase rather than proceed to a full trial.’ (p. 1) [ 15 ].

As described previously, however, the threshold at which changes are determined to be 'major' remained ambiguous. While updated MRC guidance [10] moved to a more iterative model, accepting that movement back and forth between feasibility/piloting and intervention development may sometimes be needed, there was no guidance on the conditions under which movement between these two stages should take place.

Theme 7: stakeholder involvement

Several sources recommended that a range of stakeholders (e.g. intervention providers, intervention recipients, public representatives and practitioners who might use the evidence produced by the full trial) be involved in the planning and running of the exploratory study, to ensure that it reflects the realities of the intervention setting [15, 28, 31, 32, 39, 40]. In particular, community-based participatory approaches were recommended [15, 39]. While many highlighted the value of stakeholders on Trial Steering Committees and similar study groups [15, 28, 40], some raised concerns about equipoise between researchers and stakeholders [15, 40] and cautioned against researchers conflating stakeholder involvement with qualitative research [15].

‘Although patient and public representatives on research teams can provide helpful feedback on the intervention, this does not constitute qualitative research and may not result in sufficiently robust data to inform the appropriate development of the intervention.’ (p. 8) [ 15 ].

Theme 8: reporting of exploratory studies

Detailed recommendations for reporting exploratory studies were recently provided in new Consolidated Standards of Reporting Trials (CONSORT) guidance by Eldridge et al. [7]. In addition, recurrent points were raised by other sources of guidance. Most notably, it was recommended that exploratory studies be published in peer-reviewed journals, as this provides useful information to other researchers on what has been done, what did not work and what might be most appropriate [15, 30]. An exploratory study may also result in multiple publications, but each should reference other work carried out in the same exploratory study [7, 15]. Several sources also highlighted that exploratory studies should be appropriately labelled in the title and abstract to enable easy identification; however, the suggested nomenclature varied depending on the guidance [7, 8, 15].

Discussion

While exploratory studies, carried out to inform decisions about whether and how to proceed with an effectiveness study [7, 8], are increasingly recognised as important to the efficient evaluation of complex public health interventions, our findings suggest that this area remains in need of consistent standards to inform practice. At present, there are multiple definitions of exploratory studies, a lack of consensus on a number of key issues, and a paucity of detailed guidance on how to approach the main uncertainties such studies aim to address before proceeding to a full evaluation.

Existing guidance commonly focuses almost exclusively on testing methodological parameters [33], such as recruitment and retention, although in practice it is unusual for such studies not to also examine the feasibility of the intervention itself. Where intervention feasibility is discussed, there is limited guidance on when an intervention is 'ready' for an exploratory study, and a lack of demarcation between intervention development and pre-evaluation work to understand feasibility. Some guidance recognised that an intervention continues to develop throughout an exploratory study, with distinctions made between 'optimisation/refinement' (i.e. minor refinements to the intervention) and 'major changes'. However, the point at which changes become so substantial that researchers should move back to intervention development, rather than forward to a full evaluation, remains ambiguous. Consistent with past reviews, which adopted a narrower focus on studies with randomised designs [21] or in preparation for a randomised trial [8, 36] and limited their searches of guidance to medical journals [19, 36], terms used to describe exploratory studies were inconsistent: a distinction was sometimes made between pilot and feasibility studies, while others used these terms interchangeably.

The review identifies a number of key areas of disagreement, or of limited guidance, regarding the critical aims of exploratory studies, the uncertainties that might undermine a future evaluation, and how these aims should be achieved. There was much disagreement, for example, on whether exploratory studies should include a preliminary assessment of intervention effects to inform decisions on progression to a full evaluation, and on the appropriateness of using effect estimates from underpowered data (drawn from non-representative samples and from a study of a not yet fully optimised version of the intervention) to power a future evaluation study. Most guidance focused purely on studies in preparation for RCTs; nevertheless, guidance varied on whether randomisation was a necessary feature of the exploratory study, even where the future evaluation was to be an RCT. Guidance was often difficult to assess for its applicability to public health research, with many sources drawing primarily on literature and practice from clinical research and giving limited consideration to the transferability of these problems and proposed solutions to complex social interventions, such as those in public health. Progression criteria were highlighted by some as important for preventing biased post hoc cases for continuation. However, there was a lack of guidance on how to devise progression criteria and on processes for assessing whether they had been sufficiently met. Where they had not been met, there was a lack of guidance on how to decide whether the exploratory study had generated sufficient insight about uncertainties that the expense of a further feasibility study would not be justified prior to large-scale evaluation.

Although our review took a broad focus on guidance on exploratory studies from published and grey literature, and moved beyond studies conducted in preparation for an RCT specifically, a number of limitations should be noted. Guidance from other areas of social intervention research where challenges may be similar to those in public health (e.g. education, social work and business) may not have been captured by our search strategy. We found few worked examples of exploratory studies in public health that provided substantial information from learned experience and practice. Hence, the review drew largely on recommendations from funding organisations, or relatively abstract guidance from teams of researchers, with fewer clear examples of how these recommendations are grounded in experience from the conduct of such studies. As such, it should be acknowledged that these documents represent one element within a complex system of research production and may not fully reflect what is taking place in the conduct of exploratory studies. Finally, treating sources of guidance as independent of each other does not reflect how some recommendations developed over time (see, for example, [7, 8, 20, 36, 41]).

Conclusions

There is inconsistent guidance, and for some key issues a lack of guidance, for exploratory studies of complex public health interventions. As this lack of guidance for researchers in public health continues, the implications and consequences could be far-reaching. It is unclear how researchers use existing guidance to shape decision-making in the conduct of exploratory studies and, in doing so, how they adjudicate between conflicting perspectives. This systematic review has aimed largely to identify areas of agreement and disagreement as a starting point in bringing order to this somewhat chaotic field of work. Following this systematic review, our next step is to conduct an audit of published public health exploratory studies in peer-reviewed journals, to assess current practice and how it reflects the reviewed guidance. As part of a wider study, funded by the MRC/NIHR Methodology Research Programme to develop GUidance for Exploratory STudies of complex public health interventions (GUEST; Moore L, et al. Exploratory studies to inform full scale evaluations of complex public health interventions: the need for guidance, submitted), the review has informed a Delphi survey of researchers, funders and publishers of public health research. In turn, this will contribute to a consensus meeting which aims to reach greater unanimity on the aims of exploratory studies and how these can most efficiently address uncertainties that may undermine a full-scale evaluation.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

GUEST: GUidance for Exploratory STudies of complex public health interventions

MRC: Medical Research Council

NETSCC: National Institute for Health Research Evaluation, Trials and Studies Coordinating Centre

NIHR: National Institute for Health Research

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

RCT: Randomised controlled trial

References

1. Kessler R, Glasgow RE. A proposal to speed translation of healthcare research into practice: dramatic change is needed. Am J Prev Med. 2011;40:637–44.

2. Sanson-Fisher RW, Bonevski B, Green LW, D'Este C. Limitations of the randomized controlled trial in evaluating population-based health interventions. Am J Prev Med. 2007;33:155–61.

3. Speller V, Learmonth A, Harrison D. The search for evidence of effective health promotion. BMJ. 1997;315(7104):361.

4. Arrowsmith J, Miller P. Trial watch: phase II failures: 2008–2010. Nat Rev Drug Discov. 2011;10(5):328–9.

5. National Institute for Health Research. Weight loss maintenance in adults (WILMA). https://www.journalslibrary.nihr.ac.uk/programmes/hta/084404/#/ . Accessed 13 Dec 2017.

6. Wight D, Wimbush E, Jepson R, Doi L. Six steps in quality intervention development (6SQuID). J Epidemiol Community Health. 2015;70:520–5.

7. Eldridge SM, Chan CL, Campbell MJ, Bond CM, Hopewell S, Thabane L, et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. Pilot Feasibility Stud. 2016;2:64.

8. Eldridge SM, Lancaster GA, Campbell MJ, Thabane L, Hopewell S, Coleman CL, et al. Defining feasibility and pilot studies in preparation for randomised controlled trials: development of a conceptual framework. PLoS One. 2016;11:e0150205.

9. Campbell M, Fitzpatrick R, Haines A, Kinmonth AL. Framework for design and evaluation of complex interventions to improve health. BMJ. 2000;321(7262):694.

10. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: new guidance. London: Medical Research Council; 2008.

11. National Institute for Health Research. The Filter FE Challenge: pilot trial and process evaluation of a multi-level smoking prevention intervention in further education settings. https://www.journalslibrary.nihr.ac.uk/programmes/phr/134202/#/ . Accessed 25 Jan 2018.

12. National Institute for Health Research. Adapting and piloting the ASSIST model of informal peer-led intervention delivery to the Talk to Frank drug prevention programme in UK secondary schools (ASSIST+Frank): an exploratory trial. https://www.journalslibrary.nihr.ac.uk/programmes/phr/12306003/#/ . Accessed 25 Jan 2018.

13. Medical Research Council. A framework for the development and evaluation of RCTs for complex interventions to improve health. London: Medical Research Council; 2000.

14. Bonell CP, Hargreaves JR, Cousens SN, Ross DA, Hayes R, Petticrew M, et al. Alternatives to randomisation in the evaluation of public-health interventions: design challenges and solutions. J Epidemiol Community Health. 2009; https://doi.org/10.1136/jech.2008.082602 .

15. O'Cathain A, Hoddinott P, Lewin S, Thomas KJ, Young B, Adamson J, et al. Maximising the impact of qualitative research in feasibility studies for randomised controlled trials: guidance for researchers. Pilot Feasibility Stud. 2015;1(1):32.

16. National Institute for Health Research. An exploratory trial to evaluate the effects of a physical activity intervention as a smoking cessation induction and cessation aid among the 'hard to reach'. https://www.journalslibrary.nihr.ac.uk/programmes/hta/077802/#/ . Accessed 13 Dec 2017.

17. National Institute for Health Research. Initiating change locally in bullying and aggression through the school environment (INCLUSIVE): pilot randomised controlled trial. https://www.journalslibrary.nihr.ac.uk/hta/hta19530/#/abstract . Accessed 13 Dec 2017.

18. National Institute for Health Research. Increasing boys' and girls' intention to avoid teenage pregnancy: a cluster randomised control feasibility trial of an interactive video drama based intervention in post-primary schools in Northern Ireland. https://www.journalslibrary.nihr.ac.uk/phr/phr05010/#/abstract . Accessed 13 Dec 2017.

19. Arain M, Campbell MJ, Cooper CL, Lancaster GA. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med Res Methodol. 2010;10:67.

20. Lancaster GA. Pilot and feasibility studies come of age! Pilot Feasibility Stud. 2015;1:1.

21. Shanyinde M, Pickering RM, Weatherall M. Questions asked and answered in pilot and feasibility randomized controlled trials. BMC Med Res Methodol. 2011;11:117.

22. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

23. Dixon-Woods M, Agarwal S, Jones D. Integrative approaches to qualitative and quantitative evidence. London: Health Development Agency; 2004.

24. Ritchie J, Spencer L, O'Connor W. Carrying out qualitative analysis. In: Qualitative research practice: a guide for social science students and researchers; 2003.

25. Möhler R, Bartoszek G, Köpke S, Meyer G. Proposed criteria for reporting the development and evaluation of complex interventions in healthcare (CReDECI): guideline development. Int J Nurs Stud. 2012;49(1):40–6.

26. Möhler R, Bartoszek G, Meyer G. Quality of reporting of complex healthcare interventions and applicability of the CReDECI list—a survey of publications indexed in PubMed. BMC Med Res Methodol. 2013;13:1.

27. Möhler R, Köpke S, Meyer G. Criteria for reporting the development and evaluation of complex interventions in healthcare: revised guideline (CReDECI 2). Trials. 2015;16:204.

28. Evans BA, Bedson E, Bell P, Hutchings H, Lowes L, Rea D, et al. Involving service users in trials: developing a standard operating procedure. Trials. 2013;14(1):1.

29. Fletcher A, Jamal F, Moore G, Evans RE, Murphy S, Bonell C. Realist complex intervention science: applying realist principles across all phases of the Medical Research Council framework for developing and evaluating complex interventions. Evaluation. 2016;22:286–303.

30. Feeley N, Cossette S, Côté J, Héon M, Stremler R, Martorella G, et al. The importance of piloting an RCT intervention. CJNR. 2009;41:84–99.

31. Levati S, Campbell P, Frost R, Dougall N, Wells M, Donaldson C, et al. Optimisation of complex health interventions prior to a randomised controlled trial: a scoping review of strategies used. Pilot Feasibility Stud. 2016;2:1.

32. National Institute for Health Research. Feasibility and pilot studies. http://www.nihr.ac.uk/CCF/RfPB/FAQs/Feasibility_and_pilot_studies.pdf . Accessed 14 Oct 2016.

33. National Institute for Health Research. Glossary: pilot studies. http://www.nets.nihr.ac.uk/glossary?result_1655_result_page=P . Accessed 14 Oct 2016.

34. Taylor RS, Ukoumunne OC, Warren FC. How to use feasibility and pilot trials to test alternative methodologies and methodological procedures prior to full-scale trials. In: Richards DA, Hallberg IR, editors. Complex interventions in health: an overview of research methods. New York: Routledge; 2015.

35. Cook JA, Hislop J, Adewuyi TE, Harrild K, Altman DG, Ramsay CR, et al. Assessing methods to specify the target difference for a randomised controlled trial: DELTA (Difference ELicitation in TriAls) review. Health Technol Assess. 2014;18:v–vi.

36. Lancaster GA, Dodd S, Williamson PR. Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004;10:307–12.

37. National Institute for Health Research. Progression rules for internal pilot studies for HTA trials. http://www.nets.nihr.ac.uk/__data/assets/pdf_file/0018/115623/Progression_rules_for_internal_pilot_studies.pdf . Accessed 14 Oct 2016.

38. Westlund E, Stuart EA. The nonuse, misuse, and proper use of pilot studies in experimental evaluation research. Am J Eval. 2016;2:246–61.

39. Bowen DJ, Kreuter M, Spring B, Cofta-Woerpel L, Linnan L, Weiner D, et al. How we design feasibility studies. Am J Prev Med. 2009;36:452–7.

40. Strong LL, Israel BA, Schulz AJ, Reyes A, Rowe Z, Weir SS, et al. Piloting interventions within a community-based participatory research framework: lessons learned from the healthy environments partnership. Prog Community Health Partnersh. 2009;3:327–34.

41. Eldridge SM, Costelloe CE, Kahan BC, Lancaster GA, Kerry SM. How big should the pilot study for my cluster randomised trial be? Stat Methods Med Res. 2016;25:1039–56.

42. Moffatt S, White M, Mackintosh J, Howel D. Using quantitative and qualitative data in health services research—what happens when mixed method findings conflict? [ISRCTN61522618]. BMC Health Serv Res. 2006;6:1.

43. Hislop J, Adewuyi TE, Vale LD, Harrild K, Fraser C, Gurung T, et al. Methods for specifying the target difference in a randomised controlled trial: the Difference ELicitation in TriAls (DELTA) systematic review. PLoS Med. 2014;11:e1001645.


Acknowledgements

We thank the Specialist Unit for Review Evidence (SURE) at Cardiff University, including Mala Mann, Helen Morgan, Alison Weightman and Lydia Searchfield, for their assistance with developing and conducting the literature search.

Funding

This study is supported by funding from the Methodology Research Panel (MR/N015843/1). LM, SS and DW are supported by the UK Medical Research Council (MC_UU_12017/14) and the Chief Scientist Office (SPHSU14). PC is supported by the UK Medical Research Council (MC_UU_12017/15) and the Chief Scientist Office (SPHSU15). The work was also undertaken with the support of The Centre for the Development and Evaluation of Complex Interventions for Public Health Improvement (DECIPHer), a UKCRC Public Health Research Centre of Excellence. Joint funding (MR/KO232331/1) from the British Heart Foundation, Cancer Research UK, the Economic and Social Research Council, the Medical Research Council, the Welsh Government and the Wellcome Trust, under the auspices of the UK Clinical Research Collaboration, is gratefully acknowledged.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available because making them publicly available could infringe copyright.

Author information

Authors and affiliations

Centre for the Development and Evaluation of Complex Interventions for Public Health Improvement (DECIPHer), Cardiff University, Cardiff, Wales, UK

Britt Hallingberg, Ruth Turley, Jeremy Segrott, Simon Murphy, Michael Robling & Graham Moore

Centre for Trials Research, Cardiff University, Cardiff, Wales, UK

Jeremy Segrott & Michael Robling

MRC/CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK

Daniel Wight, Peter Craig, Laurence Moore & Sharon Anne Simpson

Specialist Unit for Review Evidence, Cardiff University, Cardiff, Wales, UK

Ruth Turley


Contributions

LM, GM, PC, MR, JS, RT and SS were involved in the development of the study. RT, JS, DW and BH were responsible for the data collection, overseen by LM and GM. Data analysis was undertaken by BH guided by RT, JS, DW and GM. The manuscript was prepared by BH, RT, DW, JS and GM. All authors contributed to the final version of the manuscript. LM is the principal investigator with overall responsibility for the project. GM is Cardiff lead for the project. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Britt Hallingberg .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:.

Table S1. PRISMA checklist. (DOC 62 kb)

Additional file 2:

Appendix 1. Search strategies and websites. Appendix 2. Coding framework. (DOCX 28 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Hallingberg, B., Turley, R., Segrott, J. et al. Exploratory studies to decide whether and how to proceed with full-scale evaluations of public health interventions: a systematic review of guidance. Pilot Feasibility Stud 4 , 104 (2018). https://doi.org/10.1186/s40814-018-0290-8


Received : 06 February 2018

Accepted : 07 May 2018

Published : 28 May 2018


Keywords

  • Public health
  • Complex interventions
  • Exploratory studies
  • Research methods
  • Study design
  • Pilot study
  • Feasibility study

Pilot and Feasibility Studies

ISSN: 2055-5784


articles on exploratory research

ORIGINAL RESEARCH article

This article is part of the research topic.

Towards a Psychophysiological Approach in Physical Activity, Exercise, and Sports-Volume III

Sports preferences in children and adolescents in psychiatric careevaluation of a new questionnaire (SPOQ) Provisionally Accepted

  • 1 Department of Pediatric and Adolescent Psychiatry and Psychotherapy, University Medical Centre, Johannes Gutenberg University Mainz, Germany
  • 2 Department of Sports Psychology, Johannes Gutenberg University, Germany
  • 3 Institut für Integrative Medizin, Universität Witten/Herdecke, Germany

The final, formatted version of the article will be published soon.

Introduction: As part of an exploratory and hypothesis-generating study, we developed the Sports Preference Questionnaire (SPOQ) to survey the athletic behavior of mentally ill children and adolescents, subjectively assessed physical fitness and perceived psychological effects of physical activity. Methods: In a department of child and adolescent psychiatry, we classified 313 patients (6-18 years) according to their primary psychiatric diagnosis. The patients or -in the parental version of the questionnaire -their parents reported their sport preferences on the SPOQ. As possibly influential factors, we also assessed the frequency of physical activity, the importance of a trainer, coping with everyday life through physical activity, and subjectively perceived physical fitness. Results: One in 3 patients (32.4 %) stated that they were not physically active. Patients diagnosed with eating disorders reported, on average, a notably high frequency (median of 3 h/week) and degree of coping with daily life through physical activity (median of 5 on a 6-point Likert scale). Patients with anxiety disorders and depression had the lowest selfperception of physical fitness (mean value of 3.1 or 3.7 on an interval scala from 0 to 9). The presence of a trainer was generally considered not very uniimportant, except forbut not in ADHD patients (median of 3 on a 6-point Likert scale). Conclusion: The SPOQ is sensitive for differential effects of core child and adolescent disorders as well as for main covariates influencing the complex association between physical activity and emotional and behavioral disorders in children and adolescents. Based on this pilot study, we discussed the need for an efficacy study to measure the effects of sports therapy.

Keywords: Sports Preference Questionnaire, mental disorder, sports therapy, psychological effect, physical activity

Received: 12 Dec 2023; Accepted: 22 Apr 2024.

Copyright: © 2024 Breido, Stumm, Jenetzky and Huss. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Prof. Michael Huss, Department of Pediatric and Adolescent Psychiatry and Psychotherapy, University Medical Centre, Johannes Gutenberg University Mainz, Mainz, Germany

J Korean Med Sci. v.37(16); 2022 Apr 25

A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; if not overlooked, then framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought out, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written at length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7 , 10 , 11 , 13 ; 2) backed by preliminary evidence 9 ; 3) testable by ethical research 7 , 9 ; 4) based on original ideas 9 ; 5) grounded in evidence-based logical reasoning 10 ; and 6) predictive of outcomes. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7 , 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory on which to base the hypotheses, inductive reasoning from specific observations or findings forms more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable ( simple hypothesis ) or 2) between two or more independent and dependent variables ( complex hypothesis ). 4 , 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome ( directional hypothesis ). 4 On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies ( non-directional hypothesis ). 4 In addition, hypotheses can 1) define interdependency between variables ( associative hypothesis ), 4 2) propose an effect on the dependent variable from manipulation of the independent variable ( causal hypothesis ), 4 3) state a negative relationship between two variables ( null hypothesis ), 4 , 11 , 15 4) replace the working hypothesis if rejected ( alternative hypothesis ), 15 5) explain the relationship of phenomena to possibly generate a theory ( working hypothesis ), 11 6) involve quantifiable variables that can be tested statistically ( statistical hypothesis ), 11 or 7) express a relationship whose interlinks can be verified logically ( logical hypothesis ). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research, in Table 3 .
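To make the null, non-directional, and directional distinctions concrete, a common notation (illustrative only, not drawn from the article) for comparing two group means is:

```latex
H_0 : \mu_1 = \mu_2     % null hypothesis: no difference between the two group means
H_1 : \mu_1 \neq \mu_2  % non-directional alternative: a difference in either direction
H_1 : \mu_1 > \mu_2     % directional alternative: a difference in one stated direction
```

Rejecting $H_0$ after a statistical test then lends support to whichever alternative was stated in advance.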

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. The central question and associated subquestions are stated more often than hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions ( contextual research questions ); 2) describe a phenomenon ( descriptive research questions ); 3) assess the effectiveness of existing methods, protocols, theories, or procedures ( evaluation research questions ); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena ( explanatory research questions ); or 5) focus on unknown aspects of a particular topic ( exploratory research questions ). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions ( generative research questions ) or advance specific ideologies of a position ( ideological research questions ). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines ( ethnographic research questions ). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions ( phenomenological research questions ), may be directed towards generating a theory of some process ( grounded theory questions ), or may address a description of the case and the emerging themes ( qualitative case study questions ). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4 , and the definition of qualitative hypothesis-generating research in Table 5 .

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 The following elements are addressed in these frameworks, PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study; PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if these meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14
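As a purely illustrative sketch (the class name, field names, and sentence template below are this example's own invention, not part of the PICOT literature), the five PICOT elements can be captured in a small structure that assembles a draft research question:

```python
from dataclasses import dataclass


@dataclass
class PICOTQuestion:
    """Container for the five PICOT elements described above."""
    population: str    # P - population/patients/problem
    intervention: str  # I - intervention or indicator being studied
    comparison: str    # C - comparison group
    outcome: str       # O - outcome of interest
    timeframe: str     # T - timeframe of the study

    def as_question(self) -> str:
        # Assemble the elements into one draft research question.
        return (
            f"In {self.population}, does {self.intervention}, "
            f"compared with {self.comparison}, affect {self.outcome} "
            f"over {self.timeframe}?"
        )


# Hypothetical example values, invented for illustration:
q = PICOTQuestion(
    population="adults with type 2 diabetes",
    intervention="a structured walking program",
    comparison="usual care",
    outcome="HbA1c levels",
    timeframe="12 weeks",
)
print(q.as_question())
```

Filling in the fields forces the researcher to state each PICOT element explicitly before the question is finalized.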

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research ( Table 6 ) 16 and qualitative research ( Table 7 ) 17 , and how to transform these ambiguous research question(s) and hypothesis(es) into clear and good statements.

a These statements were composed for comparison and illustrative purposes only.

b These statements are direct quotes from Higashihara and Horiuchi. 16

a This statement is a direct quote from Shimoda et al. 17

The other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be accessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims . This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .

[Fig. 1 image: jkms-37-e121-g001.jpg]

Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 In quantitative research, research questions are used more frequently in survey projects, whereas hypotheses are more common in experiments that compare variables and their relationships.

Hypotheses are constructed based on the variables identified and as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypotheses construction involves a testable proposition to be deduced from theory, and independent and dependent variables to be separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

[Fig. 2 image: jkms-37-e121-g002.jpg]

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes? ” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control ?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH) .” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies ?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness .” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response . The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses .” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations .” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout). ” 26
  • Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above . If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics ( gender differences in sociodemographic and clinical characteristics of adults with ADHD ). Validity is tested by statistical experiment or analysis ( chi-square test, Student's t-test, and logistic regression analysis)
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men . We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Student's t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27
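The kinds of tests named in this excerpt can be sketched in a few lines of Python with SciPy (assumed to be available); the counts and scores below are invented purely for illustration, and only the chi-squared and t-test steps are shown, not the logistic regression:

```python
import numpy as np
from scipy import stats

# Categorical comparison (e.g., employment status by group):
# a 2x2 contingency table of invented counts.
table = np.array([[30, 70],   # group A: with outcome, without outcome
                  [45, 55]])  # group B: with outcome, without outcome
chi2, p_cat, dof, expected = stats.chi2_contingency(table)

# Continuous comparison (e.g., a symptom score between two groups):
# synthetic normally distributed scores, fixed seed for reproducibility.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=50, scale=10, size=40)
group_b = rng.normal(loc=55, scale=10, size=40)
t_stat, p_cont = stats.ttest_ind(group_a, group_b)

print(f"chi-squared: statistic={chi2:.2f}, p={p_cat:.3f}")
print(f"t-test:      statistic={t_stat:.2f}, p={p_cont:.3f}")
```

The chi-squared test handles the categorical between-group comparison and the t-test the continuous one; each p-value is then judged against a pre-specified significance level to decide whether the corresponding null hypothesis is rejected.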

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “ This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses .” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “ We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group .” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “ The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education .” 30

Research questions and hypotheses are crucial components of any type of research, whether quantitative or qualitative. These questions should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research, and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. The development of research questions and hypotheses is an iterative process based on extensive knowledge of the literature and insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses which serve as formal predictions about the research outcomes. Research questions and hypotheses are crucial elements of research that should not be overlooked. They should be carefully thought out and constructed when planning research. This avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.

