
Exploratory Research | Definition, Guide, & Examples

Published on December 6, 2021 by Tegan George. Revised on November 20, 2023.

Exploratory research is a methodological approach that investigates research questions that have not previously been studied in depth.

Exploratory research is often qualitative and primary in nature. However, a study with a large sample conducted in an exploratory manner can be quantitative as well. It is also often referred to as interpretive research or a grounded theory approach due to its flexible and open-ended nature.

Table of contents

  • When to use exploratory research
  • Exploratory research questions
  • Exploratory research data collection
  • Step-by-step example of exploratory research
  • Exploratory vs. explanatory research
  • Advantages and disadvantages of exploratory research
  • Other interesting articles
  • Frequently asked questions about exploratory research

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use this type of research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.


Exploratory research questions are designed to help you understand more about a particular topic of interest. They can help you connect ideas to understand the groundwork of your analysis without adding any preconceived notions or assumptions yet.

Here are some examples:

  • What effect does using a digital notebook have on the attention span of middle schoolers?
  • What factors influence mental health in undergraduates?
  • What outcomes are associated with an authoritative parenting style?
  • In what ways does the presence of a non-native accent affect intelligibility?
  • How can the use of a grocery delivery service reduce food waste in single-person households?

Collecting information on a previously unexplored topic can be challenging. Exploratory research can help you narrow down your topic and formulate a clear hypothesis and problem statement, as well as give you the “lay of the land” on your topic.

Data collection using exploratory research is often divided into primary and secondary research methods, with data analysis following the same model.

Primary research

In primary research, your data is collected directly from primary sources: your participants. There are a variety of ways to collect primary data.

Some examples include:

  • Survey methodology: Sending a survey out to the student body asking them if they would eat vegan meals
  • Focus groups: Compiling groups of 8–10 students and discussing what they think of vegan options for dining hall food
  • Interviews: Interviewing students entering and exiting the dining hall, asking if they would eat vegan meals
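As a toy illustration of how such primary data might be summarised, the sketch below tallies hypothetical yes/no survey responses; the question and the data are invented for illustration:

```python
# Hypothetical responses to the survey question
# "Would you eat vegan meals in the dining hall?" (True = yes)
responses = [True, False, True, True, False, True, False, True]

# The share answering yes is a simple exploratory summary that can help
# decide whether a larger, confirmatory study is worth running.
share_yes = sum(responses) / len(responses)
print(f"Share of students who would eat vegan meals: {share_yes:.1%}")
```

Focus group and interview data are qualitative rather than numeric, but they are summarised in a similar exploratory spirit, for example by coding recurring themes before any formal analysis.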

Secondary research

In secondary research, your data is collected from preexisting primary research, such as experiments or surveys.

Some other examples include:

  • Case studies : Health of an all-vegan diet
  • Literature reviews : Preexisting research about students’ eating habits and how they have changed over time
  • Online polls, surveys, blog posts, or interviews; social media: Have other schools done something similar?

For some subjects, it’s possible to use large-n government data, such as the decennial census or yearly American Community Survey (ACS) open-source data.

How you proceed with your exploratory research design depends on the research method you choose to collect your data. In most cases, you will follow five steps.

We’ll walk you through the steps using a running example: investigating how a non-native accent affects a language learner’s intelligibility. Since eliminating the accent entirely is rarely realistic, you would like to focus on improving intelligibility instead of reducing the learner’s accent.

Step 1: Identify your problem

The first step in conducting exploratory research is identifying what the problem is and whether this type of research is the right avenue for you to pursue. Remember that exploratory research is most advantageous when you are investigating a previously unexplored problem.

Step 2: Hypothesize a solution

The next step is to come up with a solution to the problem you’re investigating. Formulate a hypothetical statement to guide your research.

Step 3: Design your methodology

Next, conceptualize your data collection and data analysis methods and write them up in a research design.

Step 4: Collect and analyze data

Next, you proceed with collecting and analyzing your data so you can determine whether your preliminary results are in line with your hypothesis.

In most types of research, you should formulate your hypotheses a priori and refrain from changing them due to the increased risk of Type I errors and data integrity issues. However, in exploratory research, you are allowed to change your hypothesis based on your findings, since you are exploring a previously unexplained phenomenon that could have many explanations.
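Why changing hypotheses after seeing the data is risky outside of exploratory work can be shown with a small simulation: testing many hypotheses on pure noise will produce some “significant” results by chance alone. This is a minimal sketch using only the standard library and a simple permutation test; the data and the number of tests are invented for illustration.

```python
import random
import statistics

random.seed(42)

def two_group_noise_test(n=30, trials=200):
    """Compare two groups drawn from the SAME distribution.
    Returns an approximate two-sided p-value via a permutation test."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    count = 0
    for _ in range(trials):
        random.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n]) - statistics.mean(pooled[n:]))
        if diff >= observed:
            count += 1
    return count / trials

# Run 40 independent "hypothesis tests" on pure noise; roughly 5% of them
# are expected to fall below p = 0.05 even though no real effect exists.
p_values = [two_group_noise_test() for _ in range(40)]
false_positives = sum(p < 0.05 for p in p_values)
print(f"{false_positives} of 40 noise-only tests were 'significant' at p < 0.05")
```

In a confirmatory study, the hypothesis is fixed before such tests are run precisely so that these chance findings cannot be promoted to conclusions after the fact.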

Step 5: Avenues for future research

Decide if you would like to continue studying your topic. If so, it is likely that you will need to change to another type of research. As exploratory research is often qualitative in nature, you may need to conduct quantitative research with a larger sample size to achieve more generalizable results.


It can be easy to confuse exploratory research with explanatory research. To understand the relationship, it can help to remember that exploratory research lays the groundwork for later explanatory research.

Exploratory research investigates research questions that have not been studied in depth. The preliminary results often lay the groundwork for future analysis.

Explanatory research questions tend to start with “why” or “how”, and the goal is to explain why or how a previously studied phenomenon takes place.


Like any other research design , exploratory studies have their trade-offs: they provide a unique set of benefits but also come with downsides.

Advantages

  • It can be very helpful in narrowing down a challenging or nebulous problem that has not been previously studied.
  • It can serve as a great guide for future research, whether your own or another researcher’s. With new and challenging research problems, adding to the body of research in the early stages can be very fulfilling.
  • It is very flexible, cost-effective, and open-ended. You are free to proceed however you think is best.

Disadvantages

  • It usually lacks conclusive results, and results can be biased or subjective due to a lack of preexisting knowledge on your topic.
  • It’s typically not externally valid and generalizable, and it suffers from many of the challenges of qualitative research .
  • Since you are not operating within an existing research paradigm, this type of research can be very labor-intensive.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Frequently asked questions about exploratory research

Exploratory research is a methodological approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.



Exploratory analyses in aetiologic research and considerations for assessment of credibility: mini-review of literature

  • Kim Luijken, doctoral student 1
  • Olaf M Dekkers, professor 1
  • Frits R Rosendaal, professor 1
  • Rolf H H Groenwold, professor 1 2
  • 1 Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, Netherlands
  • 2 Department of Biomedical Data Sciences, Leiden University Medical Centre, Leiden, Netherlands
  • Correspondence to: K Luijken k.luijken{at}umcutrecht.nl
  • Accepted 22 March 2022

Objective To provide considerations for reporting and interpretation that can improve assessment of the credibility of exploratory analyses in aetiologic research.

Design Mini-review of the literature and account of exploratory research principles.

Setting This study focuses on a particular type of causal research, namely aetiologic studies, which investigate the causal effect of one or multiple risk factors on a particular health outcome or disease. The mini-review included aetiologic research articles published in four epidemiology journals in the first issue of 2021: American Journal of Epidemiology, Epidemiology, European Journal of Epidemiology, and International Journal of Epidemiology, specifically focusing on observational studies of causal risk factors of diseases.

Main outcome measures Number of exposure-outcome associations reported, grouped by type of analysis (main, sensitivity, and additional).

Results The journal articles reported many exposure-outcome associations: a mean number of 33 (range 1-120) exposure-outcome associations for the primary analysis, 30 (0-336) for sensitivity analyses, and 163 (0-1467) for additional analyses. Six considerations were discussed that are important in assessing the credibility of exploratory analyses: research problem, protocol, statistical criteria, interpretation of findings, completeness of reporting, and effect of exploratory findings on future causal research.

Conclusions Based on this mini-review, exploratory analyses in aetiologic research were not always reported properly. Six considerations for reporting of exploratory analyses in aetiologic research were provided to stimulate a discussion about their preferred handling and reporting. Researchers should take responsibility for the results of exploratory analyses by clearly reporting their exploratory nature and specifying which findings should be investigated in future research and how.

Introduction

Reports of aetiologic studies often have results of multiple exploratory analyses, with the aim of identifying topics for future research. Although this form of reporting might seem reasonable, it is not without risk, because compared with the results of a confirmatory study, assessing the credibility of exploratory findings is generally more complicated.

The origin of exploratory data analysis can be traced back at least to Tukey in the 1960s and 1970s, 1 2 who encouraged statisticians to develop visualisation techniques for representing and capturing structures in datasets to establish new research questions. These new research questions should subsequently be answered with independent datasets (often termed confirmatory analysis). For example, when a new biomarker is thought to be part of a known causal pathway, performing a small preparatory exploratory study before conducting a full-blown large cohort study seems worthwhile, because the cohort study is financially expensive and requires large investments of resources. Similarly, if a known exposure-outcome effect is thought to vary across subgroups of the population, exploring this idea first before embarking on confirmative analyses of effect heterogeneity seems appropriate.
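In that spirit, even a crude text histogram can expose structure in a small preparatory sample before committing to a full cohort study. A minimal sketch, using invented biomarker values and only the standard library:

```python
import random

random.seed(1)

# Hypothetical biomarker measurements from a small preparatory sample.
biomarker = [random.lognormvariate(0, 0.5) for _ in range(100)]

# A text histogram in the spirit of Tukey's exploratory techniques:
# inspect the shape of the data before formulating a confirmatory question.
lo, hi = min(biomarker), max(biomarker)
bins = 8
width = (hi - lo) / bins
counts = [0] * bins
for x in biomarker:
    i = min(int((x - lo) / width), bins - 1)  # clamp the maximum into the last bin
    counts[i] += 1

for i, c in enumerate(counts):
    print(f"{lo + i * width:5.2f}-{lo + (i + 1) * width:5.2f} | {'#' * c}")
```

A right-skewed shape in such a plot might, for instance, prompt the new question of whether the biomarker should be log-transformed or dichotomised in the subsequent cohort study.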

Even when researchers consider an analysis to be exploratory, a hypothesis is easily promoted to a fact. For example, findings in journal articles can be exaggerated to more certain statements in press releases and news articles. 3 In medical science in particular, where results are sometimes quickly implemented in clinical practice, researchers should take responsibility for the results they report. The Hippocratic oath (“First, do not harm”) applies as well to medical research as it does to clinical practice.

In this paper, we discuss issues that complicate the interpretation of exploratory analyses in causal studies. Causal research can refer to different types of research, such as randomised studies or intervention studies. We do not address these studies in our manuscript; we focus on aetiologic research, in which causes of disease are investigated. Specifically, the causal effect of risk factors on a health outcome or disease are studied, typically in an observational setting. We provide practical pointers for researchers on how to report exploratory analyses in aetiologic research and how to clarify what the exploratory results imply for future research and implementation in practice. We hope to encourage a discussion about the preferred handling and reporting of these analyses.

Exploratory analyses in aetiologic research

The term exploratory analysis typically refers to analyses for which the hypothesis was not specified before the data analysis. 4 Considering exploratory analyses in a broader sense, however, is probably more relevant in aetiologic research, because of the observational data and clustering of analyses within cohorts. We use the term exploratory analyses here to indicate analyses that are initial and preliminary steps towards solving a research problem. Exploratory analyses are often conducted in addition to planned primary analyses of a study. We do not consider sensitivity analyses, where the main hypothesis is evaluated under different assumptions, to be exploratory in this paper. We also do not consider outcomes that are evaluated as a secondary objective but are correlated with the primary outcome to be exploratory, because these analyses contribute to the investigation of the primary research question. Genome-wide association studies, where the exploratory nature of analyses is commonly accounted for by looking at multiple testing, 5 are beyond the scope of this paper.

Mini-review and overview of existing reporting guidance

Before we discuss considerations about the reporting of exploratory aetiologic studies, we wanted to illustrate some of the aspects of exploratory studies that need explicit reporting. Hence we performed a small review of published aetiologic studies. We identified all articles on original research in four journals in their first issue of 2021: American Journal of Epidemiology, Epidemiology, European Journal of Epidemiology, and International Journal of Epidemiology. We excluded studies that did not look at an aetiologic research question, such as prediction studies, studies on therapeutic interventions, and randomised trials. For each article, we counted the number of primary analyses, sensitivity analyses, and additional analyses that were performed. The unit of counting was the association estimator, where we counted only one association if the association was reported on different scales (eg, absolute and relative scales for binary endpoints).

Also, we reviewed existing reporting guidance documents on aspects relevant to exploratory analyses, specifically the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement, 6 RECORD (REporting of studies Conducted using Observational Routinely collected health Data) statement, 7 STROBE-MR (Strengthening the Reporting of Observational Studies in Epidemiology Using Mendelian Randomisation) for mendelian randomisation studies, 8 STREGA (Strengthening the Reporting of Genetic Association Studies) for genome association studies, 9 and the CONSORT (Consolidated Standards of Reporting Trials) extension to randomised pilot and feasibility trials. 10

Patient and public involvement

Involving patients or the public in the design, conduct, reporting, or dissemination plans of our research was not appropriate or possible.

Mini-review

The mini-review included 25 original aetiologic articles. These articles reported a mean number of 33 (range 1-120) exposure-outcome associations for the primary analysis, 30 (0-336) for sensitivity analyses, and 163 (0-1467) for additional analyses, mainly concerning subgroup or interaction analyses (supplementary file). Most articles did not explicitly report which analyses were prespecified, and only one study referred to a publicly available protocol. 11 The methodological scrutiny of the subgroup analyses varied from thoughtful evaluations of exposure effect heterogeneity in well established subgroups to evaluations of exposure effects across subgroups that seemed to have been formed exhaustively across many potential risk factors. Although our review included only a small sample of studies, the picture that emerges is that many results were presented and insufficient information was reported to fully judge the validity and merits of the results.

Existing reporting guidance

The STROBE 6 and RECORD 7 statements provide checklists of items to report in observational studies that are relevant to exploratory analyses (table 1). Extensions of STROBE, such as STROBE-MR 8 and STREGA, 9 provide additional guidance for reporting of studies where many analyses are performed. Guidance for reporting randomised trials also provides helpful information for reporting exploratory analyses in aetiologic research, in particular the CONSORT extension to randomised pilot and feasibility trials. 10 Not all of these recommendations can be directly applied to observational aetiologic studies, however, because the procedures for generating and testing of hypotheses are more established in randomised studies than in observational settings.

Table 1 | Considerations for reporting of exploratory aetiologic research

Exploratory research principles

Inspired by the existing recommendations for reporting, we list six considerations for reporting and interpretation that can improve the assessment of the credibility of exploratory analyses in aetiologic research (table 1). The list is not exhaustive, but we hope it will encourage further discussion on the reporting of exploratory research.

Consideration 1: explicitly state the objective of all analyses, including exploratory analyses

Stating the objective of an aetiologic study clarifies how to interpret the results. The objectives of confirmatory aetiologic research ideally contain a well defined targeted effect of a specific aetiologic factor on a specific outcome in a specific population. 13 14 In early discovery research, objectives are not always rigorously defined but could be specified more generally (eg, understanding the origin of a particular outcome). An implication of stating the objective in general terms, however, is that the methodological handling of the analysis becomes less clear and the number of researchers’ degrees of freedom becomes large. 15 Consequently, interpreting results without deriving spurious (causal) conclusions requires thought and effort because the analysis does not necessarily provide information towards a causal effect (see consideration 4). 16 17 18 The more generally an objective is stated, the more provisional the analysis becomes. This caveat includes machine learning approaches where no explicit causal modelling assumptions are made.

Because exploratory analyses in aetiologic research often aim to inform a future in-depth causal analysis, reporting both the objective of the provisional exploratory analysis and the (future) confirmatory analysis is important. This reporting is in line with the CONSORT reporting checklist for pilot randomised controlled trials which recommends that researchers state the objective of the eventual trial in the manuscript of a pilot study. 10 The rationale and need for the exploratory analysis in aetiologic research should be outlined together with uncertainties that need to be dealt with before performing an independent confirmative analysis of the causal mechanism. Reporting the position of provisional analyses relative to future research clarifies the level of credibility of the findings from exploratory analyses.

Consideration 2: establish a study protocol before data analysis and make the protocol available to readers

Preregistered protocols help distinguish which analyses were planned before observing the data and which analyses were performed post hoc, thereby avoiding hypothesising after the results are known. For randomised trials, preregistration of the study protocol is considered the norm. 19 Preregistration does not seem as widespread in observational aetiologic research, but is increasingly encouraged, 20 21 and explicitly recommended in the RECORD reporting checklist. 7 Because aetiologic research often uses existing cohort data that have been analysed for related research questions, preregistration of aetiologic studies does not ensure the same level of credibility of statistical evidence as preregistration before collecting the data.

Nosek and colleagues 22 have provided preliminary guidance on preregistration of analyses conducted with existing data. These authors suggest that what was known in advance about the dataset should be transparently reported so that the credibility of statistical findings can be assessed, taking into account analyses that have been performed previously. Implementing this advice is probably challenging in large epidemiological cohort studies because of the many analyses that might have been performed. But trying to clarify why and how an analysis is conducted before observing the data is a laudable practice that can be implemented directly in aetiologic studies. This practice is ideally accompanied by work on developing guidance for preregistration of aetiologic studies that use existing data.

Preregistration of analyses that are exploratory in nature is even less common, possibly contradicting the definition of exploration. We consider exploratory analysis, however, as discovery work that serves to motivate funding for larger studies that are, for example, better able to control confounding or to collect data rigorously. Given this important probing role, simply stating in a research protocol that certain relations will be explored is not enough; time and effort must be invested in designing the analysis appropriately. Not every detail can be specified in advance, but interpretation of the results provided by data can be challenging and unintentionally overconfident when no question was clearly articulated before seeing the answer.

Consideration 3: do not base judgments on significance values only

Only reporting the results of analyses that provided a P value below the prespecified α level (eg, 0.05) is discouraged throughout all scientific disciplines (for example, as discussed in a 2019 supplementary issue of The American Statistician). 23 Avoiding selective reporting based on significance values is particularly relevant to exploratory findings because the statistical properties of exploratory tests are less well known than those of confirmatory tests. 24 For example, the expected number of false positives (that is, the type I error rate) is probably increased when the choice for a statistical test was based on patterns in the observed data. Although procedures have been developed for correction of multiple testing in confirmatory settings, consensus on how to prevent false positive findings in exploratory settings has not yet been established. 24 25 26

Increasing the number of exploratory analyses without correction for multiple testing raises the risk of deriving false positive conclusions, but too strict a correction for multiple testing increases the probability of false negative findings (that is, the type II error rate). 27 A raised type II error rate could occur, for example, when an analysis of various positively correlated hypotheses is corrected for multiple testing as if all of the hypotheses were independent (eg, by applying a Bonferroni correction). The decision to statistically correct for multiple testing depends, among other issues, on the total number of tests performed in the same dataset, the correlation between the hypotheses being tested, and the sample size. Reporting each of these considerations clarifies the analytical context of findings and helps to assess the credibility of the results. This form of reporting is in line with the STROBE-MR 8 and STREGA 9 checklists, which recommend stating how multiple comparisons were managed, although recommendations for the handling of multiple testing seem more established in genome-wide association studies than in clinical aetiologic cohort studies. 5
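The trade-off can be illustrated with a short sketch (the p-values are invented): a Bonferroni correction divides the α level by the number of tests, cutting false positives but also discarding borderline findings.

```python
# Illustrative p-values from, say, ten exploratory subgroup analyses.
p_values = [0.003, 0.012, 0.034, 0.047, 0.21, 0.38, 0.45, 0.61, 0.74, 0.92]
alpha = 0.05
m = len(p_values)

# Uncorrected: each test judged against alpha on its own.
uncorrected_hits = [p for p in p_values if p < alpha]

# Bonferroni: judge each test against alpha / m, which controls the
# family-wise error rate but can be conservative for correlated hypotheses.
bonferroni_hits = [p for p in p_values if p < alpha / m]

print(f"uncorrected: {len(uncorrected_hits)} 'significant' results")
print(f"Bonferroni:  {len(bonferroni_hits)} 'significant' results")
```

Here four results pass the uncorrected threshold but only one survives the corrected one, which is why the choice of correction, and the reasoning behind it, should be reported explicitly.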

Consideration 4: interpret findings in line with the nature of the analysis

Interpreting and communicating results in line with the exploratory nature of an analysis is challenging because an accurate representation of the degree of tentativeness of the results is required. Assessing this degree of tentativeness based on only the results of an analysis (that is, based on the numerical estimates) is complicated because seemingly convincing results can be misleading and a clinical explanation can be found that does not follow from the statistical evidence. 28 29 Cognitive biases, such as hindsight bias, can distort the interpretation of findings.

Reporting of findings from exploratory analyses starts with indicating whether the analysis was planned before or after observing the data, which is recommended in the CONSORT extension to randomised pilot and feasibility trials. 10 Results of exploratory analyses can be interpreted by focusing on what is reported about the objectives and applied methodology rather than overstepping the findings. The specificity with which findings are interpreted should match the generality with which the objective is stated (see consideration 1). 16 17 18 For example, when various subgroup analyses are performed with the general aim of identifying possible subgroups from the available data where an exposure effect was different, researchers should report that many subgroups were explored, including characterisation of the subgroups and description of the presence or absence of effect heterogeneity, rather than discussing only one or two specific subgroups where the effect size was extreme. Furthermore, exploratory analyses often fail to support strong conclusions. Recommendations for clinical practice or generalisations based on exploratory analyses should generally be avoided.

Consideration 5: report (summarised) results of all exploratory analyses that were performed

When findings are selectively reported, especially when reporting is guided by significant findings (see consideration 3), the credibility of reported findings is probably overstated. 30 Reporting the results of all of the exploratory analyses that were conducted (possibly in a supplementary file) provides a transparent and honest report of the analysis and facilitates better interpretation of the findings. This approach is in line with the STROBE extension in STREGA, which recommends that all results of analyses should be presented, even if numerous analyses were undertaken. 9

Reporting all analyses that have been conducted seems simple, but can be challenging in practice, mainly because the process of performing a study is typically iterative. A framework for initial data analysis by Huebner and colleagues could help keep track of all subanalyses that are conducted as part of a main analysis. 31 This framework distinguishes exploratory analyses that are part of a primary analysis from additional exploratory analyses that require separate reporting. Another helpful practice could be to have a reflection period after performing analyses to establish whether the analyses look at (slightly) different research questions and to report separate analyses for each research question.

Consideration 6: accompany exploratory analyses by a proposed research agenda

The credibility of exploratory findings can be communicated through a research agenda that prioritises future research and describes how this research should be set up. Reporting a research agenda is similar to the CONSORT extension to randomised pilot and feasibility trials, which recommends reporting how future confirmative trials can be informed by the pilot study. 10

Formulating a research agenda allows researchers to take responsibility for the exploratory findings presented and future research that should be performed, avoiding the empty statement that “more research is needed”. In medical science in particular, where study results are sometimes quickly implemented into clinical practice, researchers are encouraged to take responsibility for the results they report by clearly explaining which exploratory findings should be investigated in future research and how.

Discussion

Our mini-review showed that exploratory analyses in aetiologic research were not always reported optimally. The credibility of exploratory results is affected by a combination of the theoretical rationale for the analysis, clarity of the defined research problem, applied methodology, and degree to which analytical decisions are driven by the data. Choosing a particular analysis based on observed patterns in the data complicates statistical inferences. Moreover, the design and methods applied in an exploratory analysis might be less rigorous than those of the primary analysis of the study, which further complicates interpretation of exploratory analyses. Therefore, information on these aspects should be clearly reported.

Exploration is essential to the progress of science. Strict confirmatory studies are a powerful mechanism for final evaluations before implementation in clinical practice, but will probably not stimulate new ideas. 32 33 Open minded exploratory analyses can lead to unexpected discoveries and resourceful innovations of epidemiological science, but effort is required to accurately interpret the results. Because exploratory analyses are usually done to generate new research questions, quickly performing a statistical test (or multiple tests) to get the first answer to the problem is tempting. When quick test results are presented in a research article, however, their interpretation might be ad hoc and unintentionally overconfident.

To show their full value, exploratory analyses of aetiologic research need to be conducted and interpreted correctly. We have provided six considerations for reporting of exploratory analyses to encourage a discussion on exploratory analyses and how the credibility of these analyses is ideally assessed in aetiologic research. Continuation of this discussion will contribute to the understanding of inferences that can be made from exploratory analyses in aetiologic research and will help strike a balance between their opportunities and risks.

What is already known on this topic

Exploratory analyses in aetiologic research are initial steps towards solving a research problem and are often conducted in addition to planned primary analyses of a study

Exploratory analyses might lead to new discoveries in aetiologic research, but effort is needed to accurately interpret the results because these analyses are often conducted with few data resources and insufficient adjusting for confounding

Statistical properties of exploratory tests are less well known than those of confirmatory tests

What this study adds

This study focuses on a particular type of causal research, namely aetiologic studies, which investigate the causal effect of one or multiple risk factors on a particular health outcome or disease

Six considerations for reporting of exploratory analyses in aetiologic research were provided to stimulate a discussion about their preferred handling and reporting

Researchers should take responsibility for results of exploratory analyses by clearly reporting their exploratory nature and specifying which findings should be investigated in future research and how

Ethics statements

Ethical approval.

Not required.

Data availability statement

No additional data available.

Contributors: KL was involved in the conceptualisation, investigation, methodology, visualisation, and writing (original draft) of the article. OMD was involved in the conceptualisation, investigation, methodology, and writing (review and editing) of the article. FRR was involved in the conceptualisation, investigation, methodology, and writing (review and editing) of the article. RHHG was involved in the conceptualisation, investigation, methodology, supervision, and writing (review and editing) of the article. KL, OMD, FRR, RHHG gave final approval of the version to be published and are accountable for all aspects of the work. KL is the main guarantor of this study. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: RHHG was supported by grants from the Netherlands Organisation for Scientific Research (ZonMW, project 917.16.430) and from Leiden University Medical Centre. The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/disclosure-of-interest/ and declare: support from the Netherlands Organisation for Scientific Research and Leiden University Medical Centre for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

The lead author (the manuscript’s guarantor) affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

Dissemination to participants and related patient and public communities: An abstract was submitted to the annual Dutch epidemiology conference ( www.weon.nl ). The authors aim to share their work with stakeholders at this conference and at institutional meetings, and will post a link with a plain language summary on their personal websites ( www.rolfgroenwold.nl ).

Provenance and peer review: not commissioned; externally peer reviewed.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/ .



Exploratory Research – Types, Methods and Examples


Definition:

Exploratory research is a type of research design that is used to investigate a research question when the researcher has limited knowledge or understanding of the topic or phenomenon under study.

The primary objective of exploratory research is to gain insights and gather preliminary information that can help the researcher better define the research problem and develop hypotheses or research questions for further investigation.

Exploratory Research Methods

There are several types of exploratory research, including:

Literature Review

This involves conducting a comprehensive review of existing published research, scholarly articles, and other relevant literature on the research topic or problem. It helps to identify the gaps in the existing knowledge and to develop new research questions or hypotheses.

Pilot Study

A pilot study is a small-scale preliminary study that helps the researcher to test research procedures, instruments, and data collection methods. This type of research can be useful in identifying any potential problems or issues with the research design and refining the research procedures for a larger-scale study.

Case Study

This involves an in-depth analysis of a particular case or situation to gain insights into the underlying causes, processes, and dynamics of the issue under investigation. It can be used to develop a more comprehensive understanding of a complex problem, and to identify potential research questions or hypotheses.

Focus Groups

Focus groups involve a group discussion that is conducted to gather opinions, attitudes, and perceptions from a small group of individuals about a particular topic. This type of research can be useful in exploring the range of opinions and attitudes towards a topic, identifying common themes or patterns, and generating ideas for further research.

Expert Opinion

This involves consulting with experts or professionals in the field to gain their insights, expertise, and opinions on the research topic. This type of research can be useful in identifying the key issues and concerns related to the topic, and in generating ideas for further research.

Observational Research

Observational research involves gathering data by observing people, events, or phenomena in their natural settings to gain insights into behavior and interactions. This type of research can be useful in identifying patterns of behavior and interactions, and in generating hypotheses or research questions for further investigation.

Open-ended Surveys

Open-ended surveys allow respondents to provide detailed and unrestricted responses to questions, providing valuable insights into their attitudes, opinions, and perceptions. This type of research can be useful in identifying common themes or patterns, and in generating ideas for further research.

Data Analysis Methods

Common data analysis methods for exploratory research include the following:

Content Analysis

This method involves analyzing text or other forms of data to identify common themes, patterns, and trends. It can be useful in identifying patterns in the data and developing hypotheses or research questions. For example, if the researcher is analyzing social media posts related to a particular topic, content analysis can help identify the most frequently used words, hashtags, and topics.
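As a purely illustrative sketch (the posts and terms below are hypothetical, not from any real study), the quantitative core of content analysis, a frequency count of words and hashtags, can be run with Python's standard library:

```python
from collections import Counter
import re

def top_terms(posts, n=3):
    """Tally the most frequent words and hashtags across a set of text posts."""
    tokens = []
    for post in posts:
        # Lowercase and extract words, keeping a leading '#' so hashtags stay distinct
        tokens.extend(re.findall(r"#?\w+", post.lower()))
    return Counter(tokens).most_common(n)

# Hypothetical social media posts about a topic of interest
posts = [
    "Remote work is here to stay #remotework",
    "Remote teams need better tools #remotework #productivity",
]

top_terms(posts)  # the most frequent terms, e.g. ('remote', 2)
```

In a real study this simple count would be only a starting point; stop-word removal and human coding of categories would follow.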

Thematic Analysis

This method involves identifying and analyzing patterns or themes in qualitative data such as interviews or focus groups. The researcher identifies recurring themes or patterns in the data and then categorizes them into different themes. This can be helpful in identifying common patterns or themes in the data and developing hypotheses or research questions. For example, a thematic analysis of interviews with healthcare professionals about patient care may identify themes related to communication, patient satisfaction, and quality of care.
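A minimal sketch of deductive theme coding, assuming a hypothetical keyword codebook (real thematic analysis is interpretive and far richer than keyword matching, so treat this only as a toy illustration of categorising excerpts into themes):

```python
# Hypothetical codebook: theme -> indicative keywords (invented for illustration)
CODEBOOK = {
    "communication": ["explain", "listen", "inform"],
    "patient satisfaction": ["happy", "satisfied", "pleased"],
    "quality of care": ["treatment", "safety", "follow-up"],
}

def code_excerpt(excerpt):
    """Return the themes whose keywords appear in an interview excerpt."""
    text = excerpt.lower()
    return sorted(theme for theme, keywords in CODEBOOK.items()
                  if any(kw in text for kw in keywords))

code_excerpt("Patients feel satisfied when nurses listen carefully")
```

The excerpt above would be tagged with both "communication" and "patient satisfaction", mirroring how a researcher assigns recurring themes to qualitative data.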

Cluster Analysis

This method involves grouping data points into clusters based on their similarities or differences. It can be useful in identifying patterns in large datasets and grouping similar data points together. For example, if the researcher is analyzing customer data to identify different customer segments, cluster analysis can be used to group similar customers together based on their demographic, purchasing behavior, or preferences.
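To make the idea concrete, here is a deliberately minimal one-dimensional k-means sketch in pure Python with invented spend figures. In practice a library such as scikit-learn would be used on multi-dimensional customer data; this is only a sketch of the underlying iteration:

```python
def kmeans_1d(values, k=2, iterations=20):
    """Group 1-D values into k clusters around iteratively refined centroids."""
    # Seed centroids with evenly spaced values from the sorted data
    ordered = sorted(values)
    step = max(1, len(ordered) // k)
    centroids = ordered[::step][:k]
    clusters = [[] for _ in centroids]
    for _ in range(iterations):
        # Assign each value to its nearest centroid, then recompute centroids
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical monthly spend per customer: two obvious segments
spend = [12, 15, 14, 90, 95, 100]
centroids, clusters = kmeans_1d(spend, k=2)
```

Running this separates the low spenders from the high spenders, the kind of exploratory segmentation described above.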

Network Analysis

This method involves analyzing the relationships and connections between data points. It can be useful in identifying patterns in complex datasets with many interrelated variables. For example, if the researcher is analyzing social network data, network analysis can help identify the most influential users and their connections to other users.
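A small sketch of the simplest network measure, degree (the number of connections per node), computed from a hypothetical undirected edge list. Dedicated libraries such as networkx offer richer centrality measures; the names below are invented for illustration:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Count how many connections each node has in an undirected edge list."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return dict(degree)

# Hypothetical "who interacts with whom" data from a social platform
interactions = [("ana", "ben"), ("ana", "carla"), ("ana", "dev"), ("ben", "carla")]
degrees = degree_centrality(interactions)
most_connected = max(degrees, key=degrees.get)  # the most influential user by degree
```

Here "ana" has the most connections, illustrating how network analysis surfaces influential users in the data.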

Grounded Theory

This method involves developing a theory or explanation based on the data collected during the exploratory research process. The researcher develops a theory or explanation that is grounded in the data, rather than relying on pre-existing theories or assumptions. This can be helpful in developing new theories or explanations that are supported by the data.

Applications of Exploratory Research

Exploratory research has many practical applications across various fields. Here are a few examples:

  • Marketing Research: In marketing research, exploratory research can be used to identify consumer needs, preferences, and behavior. It can also help businesses understand market trends and identify new market opportunities.
  • Product Development: In product development, exploratory research can be used to identify customer needs and preferences, as well as potential design flaws or issues. This can help companies improve their product offerings and develop new products that better meet customer needs.
  • Social Science Research: In social science research, exploratory research can be used to identify new areas of study, as well as develop new theories and hypotheses. It can also be used to identify potential research methods and approaches.
  • Healthcare Research: In healthcare research, exploratory research can be used to identify new treatments, therapies, and interventions. It can also be used to identify potential risk factors or causes of health problems.
  • Education Research: In education research, exploratory research can be used to identify new teaching methods and approaches, as well as identify potential areas of study for further research. It can also be used to identify potential barriers to learning or achievement.

Examples of Exploratory Research

Here are some more examples of exploratory research from different fields:

  • Social Science: A researcher wants to study the experience of being a refugee, but there is limited existing research on this topic. The researcher conducts exploratory research by conducting in-depth interviews with refugees to better understand their experiences, challenges, and needs.
  • Healthcare: A medical researcher wants to identify potential risk factors for a rare disease, but there is limited information available. The researcher conducts exploratory research by reviewing medical records and interviewing patients and their families to identify potential risk factors.
  • Education: A teacher wants to develop a new teaching method to improve student engagement, but there is limited information on effective teaching methods. The teacher conducts exploratory research by reviewing existing literature and interviewing other teachers to identify potential approaches.
  • Technology: A software developer wants to develop a new app, but is unsure about the features that users would find most useful. The developer conducts exploratory research by conducting surveys and focus groups to identify user preferences and needs.
  • Environmental Science: An environmental scientist wants to study the impact of a new industrial plant on the surrounding environment, but there is limited existing research. The scientist conducts exploratory research by collecting and analyzing soil and water samples, and conducting interviews with residents to better understand the impact of the plant on the environment and the community.

How to Conduct Exploratory Research

Here are the general steps to conduct exploratory research:

  • Define the research problem: Identify the research problem or question that you want to explore. Be clear about the objective and scope of the research.
  • Review existing literature: Conduct a review of existing literature and research on the topic to identify what is already known and where gaps in knowledge exist.
  • Determine the research design: Decide on the appropriate research design, which will depend on the nature of the research problem and the available resources. Common exploratory research designs include case studies, focus groups, interviews, and surveys.
  • Collect data: Collect data using the chosen research design. This may involve conducting interviews, surveys, or observations, or collecting data from existing sources such as archives or databases.
  • Analyze data: Analyze the data collected using appropriate qualitative or quantitative techniques. This may include coding and categorizing qualitative data, or running descriptive statistics on quantitative data.
  • Interpret and report findings: Interpret the findings of the analysis and report them in a way that is clear and understandable. The report should summarize the findings, discuss their implications, and make recommendations for further research or action.
  • Iterate : If necessary, refine the research question and repeat the process of data collection and analysis to further explore the topic.
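The "analyze data" step above mentions running descriptive statistics on quantitative data. As a hedged sketch with invented Likert-scale responses, this can be done with Python's standard library:

```python
import statistics

# Hypothetical Likert-scale ratings (1-5) collected in an exploratory survey
responses = [4, 5, 3, 4, 2, 5, 4]

# Basic descriptive statistics summarising the sample
summary = {
    "n": len(responses),
    "mean": statistics.mean(responses),
    "median": statistics.median(responses),
    "stdev": statistics.stdev(responses),
}
```

Even this small summary (sample size, central tendency, spread) is often enough at the exploratory stage to decide whether a pattern merits more rigorous follow-up.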

When to use Exploratory Research

Exploratory research is appropriate in situations where there is limited existing knowledge or understanding of a topic, and where the goal is to generate insights and ideas that can guide further research. Here are some specific situations where exploratory research may be particularly useful:

  • New product development: When developing a new product, exploratory research can be used to identify consumer needs and preferences, as well as potential design flaws or issues.
  • Emerging technologies: When exploring emerging technologies, exploratory research can be used to identify potential uses and applications, as well as potential challenges or limitations.
  • Developing research hypotheses: When developing research hypotheses, exploratory research can be used to identify potential relationships or patterns that can be further explored through more rigorous research methods.
  • Understanding complex phenomena: When trying to understand complex phenomena, such as human behavior or societal trends, exploratory research can be used to identify underlying patterns or factors that may be influencing the phenomenon.
  • Developing research methods: When developing new research methods, exploratory research can be used to identify potential issues or limitations with existing methods, and to develop new methods that better capture the phenomena of interest.

Purpose of Exploratory Research

The purpose of exploratory research is to gain insights and understanding of a research problem or question where there is limited existing knowledge or understanding. The objective is to explore and generate ideas that can guide further research, rather than to test specific hypotheses or make definitive conclusions.

Exploratory research can be used to:

  • Identify new research questions: Exploratory research can help to identify new research questions and areas of inquiry, by providing initial insights and understanding of a topic.
  • Develop hypotheses: Exploratory research can help to develop hypotheses and testable propositions that can be further explored through more rigorous research methods.
  • Identify patterns and trends: Exploratory research can help to identify patterns and trends in data, which can be used to guide further research or decision-making.
  • Understand complex phenomena: Exploratory research can help to provide a deeper understanding of complex phenomena, such as human behavior or societal trends, by identifying underlying patterns or factors that may be influencing the phenomena.
  • Generate ideas: Exploratory research can help to generate new ideas and insights that can be used to guide further research, innovation, or decision-making.

Characteristics of Exploratory Research

The following are the main characteristics of exploratory research:

  • Flexible and open-ended: Exploratory research is characterized by its flexible and open-ended nature, which allows researchers to explore a wide range of ideas and perspectives without being constrained by specific research questions or hypotheses.
  • Qualitative in nature: Exploratory research typically relies on qualitative methods, such as in-depth interviews, focus groups, or observation, to gather rich and detailed data on the research problem.
  • Limited scope: Exploratory research is generally limited in scope, focusing on a specific research problem or question, rather than attempting to provide a comprehensive analysis of a broader phenomenon.
  • Preliminary in nature: Exploratory research is preliminary in nature, providing initial insights and understanding of a research problem, rather than testing specific hypotheses or making definitive conclusions.
  • Iterative process: Exploratory research is often an iterative process, where the research design and methods may be refined and adjusted as new insights and understanding are gained.
  • Inductive approach: Exploratory research typically takes an inductive approach to data analysis, seeking to identify patterns and relationships in the data that can guide further research or hypothesis development.

Advantages of Exploratory Research

The following are some advantages of exploratory research:

  • Provides initial insights: Exploratory research is useful for providing initial insights and understanding of a research problem or question where there is limited existing knowledge or understanding. It can help to identify patterns, relationships, and potential hypotheses that can guide further research.
  • Flexible and adaptable: Exploratory research is flexible and adaptable, allowing researchers to adjust their methods and approach as they gain new insights and understanding of the research problem.
  • Qualitative methods: Exploratory research typically relies on qualitative methods, such as in-depth interviews, focus groups, and observation, which can provide rich and detailed data that is useful for gaining insights into complex phenomena.
  • Cost-effective: Exploratory research is often less costly than other research methods, such as large-scale surveys or experiments. It is typically conducted on a smaller scale, using fewer resources and participants.
  • Useful for hypothesis generation: Exploratory research can be useful for generating hypotheses and testable propositions that can be further explored through more rigorous research methods.
  • Provides a foundation for further research: Exploratory research can provide a foundation for further research by identifying potential research questions and areas of inquiry, as well as providing initial insights and understanding of the research problem.

Limitations of Exploratory Research

The following are some limitations of exploratory research:

  • Limited generalizability: Exploratory research is typically conducted on a small scale and uses non-random sampling techniques, which limits the generalizability of the findings to a broader population.
  • Subjective nature: Exploratory research relies on qualitative methods and is therefore subject to researcher bias and interpretation. The findings may be influenced by the researcher’s own perceptions, beliefs, and assumptions.
  • Lack of rigor: Exploratory research is often less rigorous than other research methods, such as experimental research, which can limit the validity and reliability of the findings.
  • Limited ability to test hypotheses: Exploratory research is not designed to test specific hypotheses, but rather to generate initial insights and understanding of a research problem. It may not be suitable for testing well-defined research questions or hypotheses.
  • Time-consuming: Exploratory research can be time-consuming and resource-intensive, particularly if the researcher needs to gather data from multiple sources or conduct multiple rounds of data collection.
  • Difficulty in interpretation: The open-ended nature of exploratory research can make it difficult to interpret the findings, particularly if the researcher is unable to identify clear patterns or relationships in the data.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



Exploratory Research | Definition, Guide, & Examples

Published on 6 May 2022 by Tegan George. Revised on 20 January 2023.

Exploratory research is a methodology approach that investigates topics and research questions that have not previously been studied in depth.

Exploratory research is often qualitative in nature. However, a study with a large sample conducted in an exploratory manner can be quantitative as well. It is also often referred to as interpretive research or a grounded theory approach due to its flexible and open-ended nature.

Table of contents

  • When to use exploratory research
  • Exploratory research questions
  • Exploratory research data collection
  • Step-by-step example of exploratory research
  • Exploratory vs explanatory research
  • Advantages and disadvantages of exploratory research
  • Frequently asked questions about exploratory research

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use this type of research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.


Exploratory research questions are designed to help you understand more about a particular topic of interest. They can help you connect ideas to understand the groundwork of your analysis without adding any preconceived notions or assumptions yet.

Here are some examples:

  • What effect does using a digital notebook have on the attention span of primary schoolers?
  • What factors influence mental health in undergraduates?
  • What outcomes are associated with an authoritative parenting style?
  • In what ways does the presence of a non-native accent affect intelligibility?
  • How can the use of a grocery delivery service reduce food waste in single-person households?

Collecting information on a previously unexplored topic can be challenging. Exploratory research can help you narrow down your topic and formulate a clear hypothesis, as well as give you the ‘lay of the land’ on your topic.

Data collection using exploratory research is often divided into primary and secondary research methods, with data analysis following the same model.

Primary research

In primary research, your data is collected directly from primary sources: your participants. There is a variety of ways to collect primary data.

Some examples include:

  • Survey methodology: Sending a survey out to the student body asking them if they would eat vegan meals
  • Focus groups: Compiling groups of 8–10 students and discussing what they think of vegan options for dining hall food
  • Interviews: Interviewing students entering and exiting the dining hall, asking if they would eat vegan meals

Secondary research

In secondary research, your data is collected from preexisting primary research, such as experiments or surveys.

Some other examples include:

  • Case studies: Health effects of an all-vegan diet
  • Literature reviews: Preexisting research about students’ eating habits and how they have changed over time
  • Online polls, surveys, blog posts, or interviews; social media: Have other universities done something similar?

For some subjects, it’s possible to use large-n government data, such as the decennial census or yearly American Community Survey (ACS) open-source data.

How you proceed with your exploratory research design depends on the research method you choose to collect your data. In most cases, you will follow five steps.

We’ll walk you through the steps using the following example: suppose you are studying whether a non-native accent affects intelligibility. Rather than trying to reduce the learner’s accent, you would like to focus on improving intelligibility instead.

Step 1: Identify your problem

The first step in conducting exploratory research is identifying what the problem is and whether this type of research is the right avenue for you to pursue. Remember that exploratory research is most advantageous when you are investigating a previously unexplored problem.

Step 2: Hypothesise a solution

The next step is to come up with a solution to the problem you’re investigating. Formulate a hypothetical statement to guide your research.

Step 3. Design your methodology

Next, conceptualise your data collection and data analysis methods and write them up in a research design.

Step 4: Collect and analyse data

Next, you proceed with collecting and analysing your data so you can determine whether your preliminary results are in line with your hypothesis.

In most types of research, you should formulate your hypotheses a priori and refrain from changing them due to the increased risk of Type I errors and data integrity issues. However, in exploratory research, you are allowed to change your hypothesis based on your findings, since you are exploring a previously unexplained phenomenon that could have many explanations.

Step 5: Avenues for future research

Decide if you would like to continue studying your topic. If so, it is likely that you will need to change to another type of research. As exploratory research is often qualitative in nature, you may need to conduct quantitative research with a larger sample size to achieve more generalisable results.

It can be easy to confuse exploratory research with explanatory research. To understand the relationship, it can help to remember that exploratory research lays the groundwork for later explanatory research.

Exploratory research investigates research questions that have not been studied in depth. The preliminary results often lay the groundwork for future analysis.

Explanatory research questions tend to start with ‘why’ or ‘how’, and the goal is to explain why or how a previously studied phenomenon takes place.

Exploratory vs explanatory research

Like any other research design, exploratory research has its trade-offs: it provides a unique set of benefits but also comes with downsides.

Advantages

  • It can be very helpful in narrowing down a challenging or nebulous problem that has not been previously studied.
  • It can serve as a great guide for future research, whether your own or another researcher’s. With new and challenging research problems, adding to the body of research in the early stages can be very fulfilling.
  • It is very flexible, cost-effective, and open-ended. You are free to proceed however you think is best.

Disadvantages

  • It usually lacks conclusive results, and results can be biased or subjective due to a lack of preexisting knowledge on your topic.
  • It’s typically not externally valid and generalisable, and it suffers from many of the challenges of qualitative research.
  • Since you are not operating within an existing research paradigm, this type of research can be very labour-intensive.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research explores the main aspects of a new or barely researched question.

Explanatory research explains the causes and effects of an already widely researched question.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.


The potential of working hypotheses for deductive exploratory research

  • Open access
  • Published: 08 December 2020
  • Volume 55, pages 1703–1725 (2021)


  • Mattia Casula   ORCID: orcid.org/0000-0002-7081-8153 1 ,
  • Nandhini Rangarajan 2 &
  • Patricia Shields   ORCID: orcid.org/0000-0002-0960-4869 2  


While hypotheses frame explanatory studies and provide guidance for measurement and statistical tests, deductive exploratory research lacks a comparable framing device. To this end, this article examines the landscape of deductive, exploratory research and offers the working hypothesis as a flexible, useful framework that can guide and bring coherence across the steps in the research process. The working hypothesis conceptual framework is introduced, placed in a philosophical context, defined, and applied to public administration and comparative public policy. In doing so, this article explains: the philosophical underpinning of exploratory, deductive research; how the working hypothesis informs the methodologies and evidence collection of deductive, exploratory research; the nature of micro-conceptual frameworks for deductive exploratory research; and how the working hypothesis informs data analysis when exploratory research is deductive.


1 Introduction

Exploratory research is generally considered to be inductive and qualitative (Stebbins 2001 ). Exploratory qualitative studies adopting an inductive approach do not lend themselves to a priori theorizing and building upon prior bodies of knowledge (Reiter 2013 ; Bryman 2004 as cited in Pearse 2019 ). Juxtaposed against quantitative studies that employ deductive confirmatory approaches, exploratory qualitative research is often criticized for lack of methodological rigor and tentativeness in results (Thomas and Magilvy 2011 ). This paper focuses on the neglected topic of deductive, exploratory research and proposes working hypotheses as a useful framework for these studies.

To emphasize that certain types of applied research lend themselves more easily to deductive approaches, to address the downsides of exploratory qualitative research, and to ensure qualitative rigor in exploratory research, a significant body of work on deductive qualitative approaches has emerged (see, for example, Gilgun 2005, 2015; Hyde 2000; Pearse 2019). According to Gilgun (2015, p. 3), the use of conceptual frameworks derived from comprehensive reviews of literature and a priori theorizing were common practices in qualitative research prior to the publication of Glaser and Strauss’s (1967) The Discovery of Grounded Theory. Gilgun (2015) coined the term Deductive Qualitative Analysis (DQA) to arrive at a “middle ground” where the benefits of a priori theorizing (structure) and of allowing room for new theory to emerge (flexibility) are reaped simultaneously. According to Gilgun (2015, p. 14), “in DQA, the initial conceptual framework and hypotheses are preliminary. The purpose of DQA is to come up with a better theory than researchers had constructed at the outset (Gilgun 2005, 2009). Indeed, the production of new, more useful hypotheses is the goal of DQA”.

DQA provides a greater level of structure for both the experienced and the novice qualitative researcher (see, for example, Pearse 2019; Gilgun 2005). According to Gilgun (2015, p. 4), “conceptual frameworks are the sources of hypotheses and sensitizing concepts”. Sensitizing concepts frame the exploratory research process and guide the researcher’s data collection and reporting efforts. Pearse (2019) discusses the usefulness of deductive thematic analysis and pattern matching for guiding DQA in business research. Gilgun (2005) discusses the usefulness of DQA for family research.

Given these rationales for DQA in exploratory research, the overarching purpose of this paper is to contribute to that growing corpus of work on deductive qualitative research. This paper is specifically aimed at guiding novice researchers and student scholars to the working hypothesis as a useful a priori framing tool. The applicability of the working hypothesis as a tool that provides more structure during the design and implementation phases of exploratory research is discussed in detail. Examples of research projects in public administration that use the working hypothesis as a framing tool for deductive exploratory research are provided.

In the next section, we introduce the three types of research purposes. Second, we examine the nature of the exploratory research purpose. Third, we provide a definition of the working hypothesis. Fourth, we explore the philosophical roots of methodology to see where exploratory research fits. Fifth, we connect the discussion to the dominant research approaches (quantitative, qualitative and mixed methods) to see where deductive exploratory research fits. Sixth, we examine the nature of theory and the role of the hypothesis in theory, contrasting formal hypotheses with working hypotheses. Seventh, we provide examples of student and scholarly work that illustrate how working hypotheses are developed and operationalized. Lastly, we synthesize the previous discussion with concluding remarks.

2 Three types of research purposes

The literature identifies three basic types of research purposes—explanation, description and exploration (Babbie 2007 ; Adler and Clark 2008 ; Strydom 2013 ; Shields and Whetsell 2017 ). Research purposes are similar to research questions; however, they focus on project goals or aims instead of questions.

Explanatory research answers the “why” question (Babbie 2007, pp. 89–90), by explaining “why things are the way they are”, and by looking “for causes and reasons” (Adler and Clark 2008, p. 14). Explanatory research is closely tied to hypothesis testing. Theory is tested using deductive reasoning, which goes from the general to the specific (Hyde 2000, p. 83). Hypotheses provide a frame for explanatory research, connecting the research purpose to other parts of the research process (variable construction, choice of data, statistical tests). They help provide alignment or coherence across stages in the research process and provide ways to critique the strengths and weaknesses of the study. For example, were the hypotheses grounded in the appropriate arguments and evidence in the literature? Are the concepts embedded in the hypotheses appropriately measured? Was the best statistical test used? When the analysis is complete (the hypothesis is tested), the results generally answer the research question (the evidence supported or failed to support the hypothesis) (Shields and Rangarajan 2013).

Descriptive research addresses the “what” question and is not primarily concerned with causes (Strydom 2013; Shields and Tajalli 2006). It lies at the “midpoint of the knowledge continuum” (Grinnell 2001, p. 248) between exploration and explanation. Descriptive research is used in both quantitative and qualitative research. A field researcher might want to “have a more highly developed idea of social phenomena” (Strydom 2013, p. 154) and develop thick descriptions using inductive logic. In science, categorization and classification systems such as the periodic table of chemistry or the taxonomies of biology inform descriptive research. These baseline classification systems are a type of theorizing and allow researchers to answer questions like “what kind” of plants and animals inhabit a forest. The answer to this question would usually be displayed in graphs and frequency distributions. This is also the data presentation system used in the social sciences (Ritchie and Lewis 2003; Strydom 2013). For example, if a scholar asked, “What are the needs of homeless people?”, a quantitative approach would include a survey that incorporated a “needs” classification system (preferably based on a literature review). The data would be displayed as frequency distributions or charts. Description can also be guided by inductive reasoning, which draws “inferences from specific observable phenomena to general rules or knowledge expansion” (Worster 2013, p. 448). Theory and hypotheses are generated using inductive reasoning, which begins with data and the intention of making sense of it by theorizing. Inductive descriptive approaches would use a qualitative, naturalistic design (open-ended interview questions with the homeless population). The data could provide a thick description of the homeless context. For deductive descriptive research, categories serve a purpose similar to hypotheses for explanatory research. If developed with thought and a connection to the literature, categories can serve as a framework that informs measurement and links to data collection mechanisms and data analysis. Like hypotheses, they can provide horizontal coherence across the steps in the research process.

Table 1 demonstrates these connections for deductive descriptive and explanatory research. The arrow at the top emphasizes the horizontal (across the research process) view that we adopt. This article makes the case that the working hypothesis can serve the same purpose for deductive exploratory research as the hypothesis serves for deductive explanatory research and categories serve for deductive descriptive research. The cells for exploratory research are filled in with question marks.

The remainder of this paper focuses on exploratory research and the answers to questions found in the table:

What is the philosophical underpinning of exploratory, deductive research?

What is the micro-conceptual framework for deductive exploratory research? [As is clear from the article title, we introduce the working hypothesis as the answer.]

How does the working hypothesis inform the methodologies and evidence collection of deductive exploratory research?

How does the working hypothesis inform data analysis of deductive exploratory research?

3 The nature of exploratory research purpose

Explorers enter the unknown to discover something new. The process can be fraught with struggle and surprises. Effective explorers creatively resolve unexpected problems. While we typically think of explorers as pioneers or mountain climbers, exploration is very much linked to the experience and intention of the explorer. Babies explore as they take their first steps. The exploratory purpose resonates with these insights. Exploratory research, like reconnaissance, is a type of inquiry that is in its preliminary or early stages (Babbie 2007). It is associated with discovery, creativity and serendipity (Stebbins 2001). But the person doing the discovering also defines the activity, or claims the act of exploration. Exploration “typically occurs when a researcher examines a new interest or when the subject of study itself is relatively new” (Babbie 2007, p. 88). Hence, exploration has an open character that emphasizes “flexibility, pragmatism, and the particular, biographically specific interests of an investigator” (Maanen et al. 2001, p. v). These three purposes form a type of hierarchy. An area of inquiry is initially explored. This early work lays the groundwork for description, which in turn becomes the basis for explanation. Quantitative, explanatory studies dominate contemporary high-impact journals (Twining et al. 2017).

Stebbins ( 2001 ) makes the point that exploration is often seen as something like a poor stepsister to confirmatory or hypothesis testing research. He has a problem with this because we live in a changing world and what is settled today will very likely be unsettled in the near future and in need of exploration. Further, exploratory research “generates initial insights into the nature of an issue and develops questions to be investigated by more extensive studies” (Marlow 2005 , p. 334). Exploration is widely applicable because all research topics were once “new.” Further, all research topics have the possibility of “innovation” or ongoing “newness”. Exploratory research may be appropriate to establish whether a phenomenon exists (Strydom 2013 ). The point here, of course, is that the exploratory purpose is far from trivial.

Stebbins’ Exploratory Research in the Social Sciences (2001) is the only book devoted to the nature of exploratory research as a form of social science inquiry. He views it as a “broad-ranging, purposive, systematic prearranged undertaking designed to maximize the discovery of generalizations leading to description and understanding of an area of social or psychological life” (p. 3). It is science conducted in a way distinct from confirmation. According to Stebbins (2001, p. 6), the goal is the discovery of potential generalizations, which can become future hypotheses and eventually theories that emerge from the data. He focuses on inductive logic (which stimulates creativity) and qualitative methods. He does not want exploratory research limited to the restrictive formulas and models he finds in confirmatory research. He links exploratory research to Glaser and Strauss’s (1967) flexible, immersive Grounded Theory. Strydom’s (2013) analysis of contemporary social work research methods books echoes Stebbins’ (2001) position. Stebbins’s book is an important contribution, but it limits the potential scope of this flexible and versatile research purpose. If we accepted his conclusion, we would delete the “Exploratory” row from Table 1.

Note that explanatory research can yield new questions, which lead to exploration. Inquiry is a process in which inductive and deductive activities can occur simultaneously or in a back-and-forth manner, particularly as the literature is reviewed and the research design emerges. Footnote 1 Strict typologies such as explanation/description/exploration or inductive/deductive can obscure these larger connections and processes. We draw insight from Dewey’s (1896) vision of inquiry as depicted in his seminal “Reflex Arc” article. He notes that “stimulus” and “response”, like other dualities (inductive/deductive), exist within a larger unifying system. Yet the terms have value: “We need not abandon terms like stimulus and response, so long as we remember that they are attached to events based upon their function in a wider dynamic context, one that includes interests and aims” (Hildebrand 2008, p. 16). So too in methodology: typologies such as deductive/inductive capture useful distinctions with practical value and are widely used in the methodology literature.

We argue that there is a role for exploratory, deductive, and confirmatory research. We maintain that all types of research logics and methods should be in the toolbox of exploratory research. First, as stated above, it makes no sense on its face to identify an extremely flexible purpose that is idiosyncratic to the researcher and then restrict its use to qualitative, inductive, non-confirmatory methods. Second, Stebbins’s (2001) work focused on social science, ignoring the policy sciences. Exploratory research can be ideal for the immediate practical problems faced by policy makers, who could find a framework of some kind useful. Third, deductive, exploratory research is more intentionally connected to previous research: some kind of initial framing device is located or designed using the literature. This may be very important for new scholars who are developing research skills and exploring their field and profession; Stebbins’s insights are most pertinent for experienced scholars. Fourth, frameworks and deductive logic are useful for comparative work because some degree of consistency across cases is built into the design.

As we have seen, the hypotheses of explanatory research and the categories of descriptive research are the dominant frames of social science and policy science. We certainly concur that neither of these frames makes much sense for exploratory research; they would tend to tie it down. We see the problem as a missing framework, a missing way to frame deductive, exploratory research in the methodology literature. Inductive exploratory research would not work for many case studies that are trying to use evidence to make an argument. What exploratory deductive case studies need is a framework that incorporates flexibility. This is even more true for comparative case studies. A framework of this sort could be usefully applied to policy research (Casula 2020a), particularly evaluative policy research, and to applied research generally. We propose the working hypothesis as a flexible conceptual framework and a useful tool for doing exploratory studies. It can be used as an evaluative criterion, particularly for process evaluation, and it is useful for student research because students can develop theorizing skills using the literature.

Table  1 included a column specifying the philosophical basis for each research purpose. Shifting gears to the philosophical underpinning of methodology provides useful additional context for examination of deductive, exploratory research.

4 What is a working hypothesis?

The working hypothesis is first and foremost a hypothesis, or a statement of expectation that is tested in action. The term “working” suggests that these hypotheses are provisional and subject to change, and that the possibility of finding contradictory evidence is real. In addition, a “working” hypothesis is active; it is a tool in an ongoing process of inquiry. If one begins with a research question, the working hypothesis can be viewed as a statement or group of statements that answer the question. It “works” to move purposeful inquiry forward. “Working” also implies some sort of community: most often we work together, in relationship, to achieve some goal.

Working Hypothesis is a term found in earlier literature. Indeed, both pioneering pragmatists, John Dewey and George Herbert Mead use the term working hypothesis in important nineteenth century works. For both Dewey and Mead, the notion of a working hypothesis has a self-evident quality and it is applied in a big picture context. Footnote 2

Most notably, Dewey (1896), in one of his most pivotal early works (“Reflex Arc”), used “working hypothesis” to describe a key concept in psychology: “The idea of the reflex arc has upon the whole come nearer to meeting this demand for a general working hypothesis than any other single concept (italics added)” (p. 357). The notion was developed more fully 42 years later in Logic: The Theory of Inquiry, where Dewey developed a working hypothesis that operated on a smaller scale. He defines working hypotheses as a “provisional, working means of advancing investigation” (Dewey 1938, p. 142). Dewey’s definition suggests that working hypotheses would be useful toward the beginning of a research project (e.g., exploratory research).

Mead (1899) used working hypothesis in the title of an American Journal of Sociology article, “The Working Hypothesis and Social Reform” (italics added). He notes that a scientist’s foresight goes beyond testing a hypothesis.

Given its success, he may restate his world from this standpoint and get the basis for further investigation that again always takes the form of a problem. The solution of this problem is found over again in the possibility of fitting his hypothetical proposition into the whole within which it arises. And he must recognize that this statement is only a working hypothesis at the best, i.e., he knows that further investigation will show that the former statement of his world is only provisionally true, and must be false from the standpoint of a larger knowledge, as every partial truth is necessarily false over against the fuller knowledge which he will gain later (Mead 1899 , p. 370).

Cronbach (1975) developed a notion of the working hypothesis consistent with inductive reasoning, but for him the working hypothesis is a product or result of naturalistic inquiry. He makes the case that naturalistic inquiry is highly context dependent, and therefore results, or seeming generalizations, that come from a study should be viewed as “working hypotheses”, which “are tentative both for the situation in which they first uncovered and for other situations” (as cited in Gobo 2008, p. 196).

A quick Google Scholar search using the term “working hypothesis” shows that it is widely used in twentieth- and twenty-first-century science, particularly in titles. In these articles, the working hypothesis is treated as a conceptual tool that furthers investigation in its early or transitioning phases. We could find no explicit links to exploratory research; the exploratory nature of the problem is expressed implicitly. Terms such as “speculative” (Habib 2000, p. 2391) or “rapidly evolving field” (Prater et al. 2007, p. 1141) capture the exploratory nature of a study. The authors might describe how a topic is “new” or reference “change”: “As a working hypothesis, the picture is only new, however, in its interpretation” (Milnes 1974, p. 1731). In a study of soil genesis, Arnold (1965, p. 718) notes, “Sequential models, formulated as working hypotheses, are subject to further investigation and change”. Any 2020 article dealing with COVID-19 and respiratory distress would be preliminary almost by definition (Ciceri et al. 2020).

5 Philosophical roots of methodology

According to Kaplan (1964, p. 23), “the aim of methodology is to help us understand, in the broadest sense not the products of scientific inquiry but the process itself”. Methods contain philosophical principles that distinguish them from other “human enterprises and interests” (Kaplan 1964, p. 23). Contemporary research methodology is generally classified as quantitative, qualitative and mixed methods. Leading scholars of methodology have associated each with a philosophical underpinning: positivism (or post-positivism), interpretivism (or constructivism) and pragmatism, respectively (Guba 1987; Guba and Lincoln 1981; Schrag 1992; Stebbins 2001; Mackenzie and Knipe 2006; Atieno 2009; Levers 2013; Morgan 2007; O’Connor et al. 2008; Johnson and Onwuegbuzie 2004; Twining et al. 2017). This section summarizes how the literature describes these philosophies and how they inform contemporary methodology and its literature.

Positivism, and its more contemporary version post-positivism, maintains an objectivist ontology, or assumes an objective reality which can be uncovered (Levers 2013; Twining et al. 2017). Footnote 3 Time- and context-free generalizations are possible, and “real causes of social scientific outcomes can be determined reliably and validly” (Johnson and Onwuegbuzie 2004, p. 14). Further, “explanation of the social world is possible through a logical reduction of social phenomena to physical terms”. It uses an empiricist epistemology, which “implies testability against observation, experimentation, or comparison” (Whetsell and Shields 2015, pp. 420–421). Correspondence theory, a tenet of positivism, asserts that “to each concept there corresponds a set of operations involved in its scientific use” (Kaplan 1964, p. 40).

The interpretivist, constructivist or post-modernist approach is a reaction to positivism. It uses a relativist ontology and a subjectivist epistemology (Levers 2013). In this world of multiple realities, context-free generalities are impossible, as is the separation of facts and values. Causality, explanation, prediction and experimentation depend on assumptions about the correspondence between concepts and reality, which, in the absence of an objective reality, is impossible. Empirical research can yield “contextualized emergent understanding rather than the creation of testable theoretical structures” (O’Connor et al. 2008, p. 30). The distinctively different worldviews of positivist/post-positivist and interpretivist philosophy are at the core of many controversies in the methodology, social science and policy science literature (Casula 2020b).

With its focus on dissolving dualisms, pragmatism steps outside the objective/subjective debate. Instead, it asks, “what difference would it make to us if the statement were true” (Kaplan 1964 , p. 42). Its epistemology is connected to purposeful inquiry. Pragmatism has a “transformative, experimental notion of inquiry” anchored in pluralism and a focus on constructing conceptual and practical tools to resolve “problematic situations” (Shields 1998 ; Shields and Rangarajan 2013 ). Exploration and working hypotheses are most comfortably situated within the pragmatic philosophical perspective.

6 Research approaches

Empirical investigation relies on three types of methodology—quantitative, qualitative and mixed methods.

6.1 Quantitative methods

Quantitative methods use deductive logic and formal hypotheses or models to explain, predict, and eventually establish causation (Hyde 2000; Kaplan 1964; Johnson and Onwuegbuzie 2004; Morgan 2007). Footnote 4 The correspondence between the conceptual and empirical worlds makes measurement possible. Measurement assigns numbers to objects, events or situations and allows for standardization and subtle discrimination. It also allows researchers to draw on the power of mathematics and statistics (Kaplan 1964, pp. 172–174). Using the power of inferential statistics, quantitative research employs research designs that eliminate competing hypotheses. It is high in external validity, or the ability to generalize to the whole. The research results are relatively independent of the researcher (Johnson and Onwuegbuzie 2004).

Quantitative methods depend on the quality of measurement, a priori conceptualization, and adherence to the underlying assumptions of inferential statistics. Critics charge that hypotheses and frameworks needlessly constrain inquiry (Johnson and Onwuegbuzie 2004, p. 19). Hypothesis-testing quantitative methods support the explanatory purpose.

6.2 Qualitative methods

Qualitative researchers who embrace the post-modern, interpretivist view Footnote 5 question everything about the nature of quantitative methods (Willis et al. 2007). Rejecting the possibility of objectivity, the correspondence between ideas and measures, and the constraints of a priori theorizing, they focus on forming “unique impressions and understandings of events rather than to generalize the findings” (Kolb 2012, p. 85). Characteristics of traditional qualitative research include “induction, discovery, exploration, theory/hypothesis generation and the researcher as the primary ‘instrument’ of data collection” (Johnson and Onwuegbuzie 2004, p. 18). The data of qualitative methods are generated via interviews, direct observation, focus groups and analysis of written records or artifacts.

Qualitative methods provide for understanding and “description of people’s personal experiences of phenomena”. They enable descriptions of detailed “phenomena as they are situated and embedded in local contexts”. Researchers use naturalistic settings to “study dynamic processes” and explore how participants interpret experiences. Qualitative methods have an inherent flexibility, allowing researchers to respond to changes in the research setting. They are particularly good at narrowing to the particular and, on the flip side, have limited external validity (Johnson and Onwuegbuzie 2004, p. 20). Instead of specifying a suitable sample size to draw conclusions, qualitative research uses the notion of saturation (Morse 1995).

Saturation is used in grounded theory, a widely used and respected interpretivist form of qualitative research. Introduced by Glaser and Strauss (1967), this “grounded on observation” (Patten and Newhart 2000, p. 27) methodology focuses on “the creation of emergent understanding” (O’Connor et al. 2008, p. 30). It uses the constant comparative method, whereby researchers develop theory from data as they code and analyze at the same time. Data collection, coding and analysis, along with theoretical sampling, are systematically combined to generate theory (Kolb 2012, p. 83). The qualitative methods discussed here support exploratory research.

A close look at the two philosophies and assumptions of quantitative and qualitative research suggests two contradictory world views. The literature has labeled these contradictory views the Incompatibility Theory, which sets up a quantitative-versus-qualitative tension similar to the seeming separation of art and science, or facts and values (Smith 1983a, b; Guba 1987; Smith and Heshusius 1986; Howe 1988). The incompatibility theory does not make sense in practice. Yin (1981, 1992, 2011, 2017), a prominent case study scholar, showcases a deductive research methodology that crosses boundaries, using both quantitative and qualitative evidence when appropriate.

6.3 Mixed methods

Turning the “Incompatibility Theory” on its head, Mixed Methods research “combines elements of qualitative and quantitative research approaches … for the broad purposes of breadth and depth of understanding and corroboration” (Johnson et al. 2007, p. 123). It does this by partnering with philosophical pragmatism. Footnote 6 Pragmatism is productive because “it offers an immediate and useful middle position philosophically and methodologically; it offers a practical and outcome-oriented method of inquiry that is based on action and leads, iteratively, to further action and the elimination of doubt; it offers a method for selecting methodological mixes that can help researchers better answer many of their research questions” (Johnson and Onwuegbuzie 2004, p. 17). What is theory for the pragmatist? “Any theoretical model is, for the pragmatist, nothing more than a framework through which problems are perceived and subsequently organized” (Hothersall 2019, p. 5).

Brendel (2009) constructed a simple framework to capture the core elements of pragmatism. Brendel’s four “p”s—practical, pluralism, participatory and provisional—help to show the relevance of pragmatism to mixed methods. Pragmatism is purposeful and concerned with practical consequences. The pluralism of pragmatism overcomes the quantitative/qualitative dualism: it allows for multiple perspectives (including positivism and interpretivism) and thus gets around the incompatibility problem. Inquiry should be participatory, or inclusive of the many views of participants; hence, it is consistent with multiple realities and is also tied to the common concern of a problematic situation. Finally, all inquiry is provisional. This is compatible with experimental methods and hypothesis testing, and consistent with the back and forth of inductive and deductive reasoning. Mixed methods support exploratory research.

Advocates of mixed methods research note that it overcomes the weaknesses and employs the strengths of quantitative and qualitative methods. Quantitative methods provide precision; the pictures and narrative of qualitative techniques add meaning to the numbers. Quantitative analysis can provide a big picture and establish relationships, and its results have great generalizability. On the other hand, the “why” behind the explanation is often missing and can be filled in through in-depth interviews, making a deeper and more satisfying explanation possible. Mixed methods bring the benefits of triangulation, or multiple sources of evidence that converge to support a conclusion. They can entertain a “broader and more complete range of research questions” (Johnson and Onwuegbuzie 2004, p. 21) and can move between inductive and deductive methods. Case studies use multiple forms of evidence and are a natural context for mixed methods.

One thing that seems to be missing from the mixed methods literature and from explicit designs is a place for conceptual frameworks. For example, Heyvaert et al. (2013) examined nine mixed methods studies and found an explicit framework in only two (transformative and pragmatic) (p. 663).

7 Theory and hypotheses: where is and what is theory?

Theory is key to deductive research. In essence, empirical deductive methods test theory. Hence, we shift our attention to theory and the role and functions of hypotheses in theory. Oppenheim and Putnam (1958) note that “by a ‘theory’ (in the widest sense) we mean any hypothesis, generalization or law (whether deterministic or statistical) or any conjunction of these” (p. 25). Van Evera (1997) uses a similar but more complex definition: “theories are general statements that describe and explain the causes or effects of classes of phenomena. They are composed of causal laws or hypotheses, explanations, and antecedent conditions” (p. 8). Sutton and Staw (1995, p. 376), in the highly cited article “What Theory is Not,” assert that hypotheses should contain logical arguments for “why” the hypothesis is expected. Hypotheses need an underlying causal argument before they can be considered theory. The point of this discussion is not to define theory but to establish the importance of hypotheses in theory.

Explanatory research is implicitly relational (A explains B). The hypotheses of explanatory research lay bare these relationships. Popular definitions of hypotheses capture this relational component. For example, the Cambridge Dictionary defines a hypothesis as “an idea or explanation for something that is based on known facts but has not yet been proven.” Vocabulary.com’s definition emphasizes explanation: a hypothesis is “an idea or explanation that you then test through study and experimentation.” According to Wikipedia, a hypothesis is “a proposed explanation for a phenomenon.” Other definitions remove the relational or explanatory reference. The Oxford English Dictionary defines a hypothesis as a “supposition or conjecture put forth to account for known facts.” Science Buddies defines a hypothesis as a “tentative, testable answer to a scientific question.” According to the Longman Dictionary, a hypothesis is “an idea that can be tested to see if it is true or not.” The Urban Dictionary states a hypothesis is “a prediction or educated guess based on current evidence that is yet to be tested.” We argue that the hypotheses of exploratory research, working hypotheses, are not bound by relational expectations. It is this flexibility that distinguishes the working hypothesis.

Sutton and Staw (1995) maintain that hypotheses “serve as crucial bridges between theory and data, making explicit how the variables and relationships that follow from a logical argument will be operationalized” (p. 376, italics added). Writing in the highly rated journal Computers and Education, Twining et al. (2017) created guidelines for qualitative research as a way to improve its soundness and rigor. They identified the lack of alignment between theoretical stance and methodology as a common problem in qualitative research. In addition, they identified a lack of alignment between methodology, design, instruments of data collection, and analysis. The authors created a guidance summary, which emphasized the need to enhance coherence throughout the elements of research design (Twining et al. 2017, p. 12). Perhaps the bridging function of the hypothesis mentioned by Sutton and Staw (1995) is obscured and often missing in qualitative methods. Working hypotheses can be a tool to overcome this problem.

For reasons similar to those used by mixed methods scholars, we look to classical pragmatism and the ideas of John Dewey to inform our discussion of theory and working hypotheses. Dewey (1938) treats theory as a tool of empirical inquiry and uses a map metaphor (p. 136). Theory is like a map that helps a traveler navigate the terrain, and should be judged by its usefulness. “There is no expectation that a map is a true representation of reality. Rather, it is a representation that allows a traveler to reach a destination (achieve a purpose). Hence, theories should be judged by how well they help resolve the problem or achieve a purpose” (Shields and Rangarajan 2013, p. 23). Note that we explicitly link theory to the research purpose. Theory is never treated as an unimpeachable Truth; rather, it is a helpful tool that organizes inquiry, connecting data and problem. Dewey’s approach also expands the definition of theory to include abstractions (categories) outside of causation and explanation. The micro-conceptual frameworks Footnote 7 introduced in Table 1 are a type of theory. We define conceptual frameworks as the “way the ideas are organized to achieve the project’s purpose” (Shields and Rangarajan 2013, p. 24). Micro-conceptual frameworks do this at a level of analysis very close to the data. They can direct operationalization and ways to assess measurement or evidence at the level of the individual research study. Again, the research purpose plays a pivotal role in the functioning of theory (Shields and Tajalli 2006).

8 Working hypothesis: methods and data analysis

We move on to answer the remaining questions in Table 1. We have established that exploratory research is extremely flexible and idiosyncratic. Given this, we will proceed with a few examples and draw out lessons for developing an exploratory purpose, building a framework, and from there identifying data collection techniques and the logics of hypothesis testing and analysis. Early on, we noted the value of the working hypothesis framework for student empirical research and applied research. The next section uses a master’s-level student’s work to illustrate the usefulness of working hypotheses as a way to incorporate the literature and structure inquiry. This graduate student was also a mature professional whose research question emerged from his job; his work is thus an example of applied research.

Master of Public Administration student Swift (2010) worked for a public agency and was responsible for that agency’s sexual harassment training. The agency needed to evaluate its training but had never done so before. He also had never attempted a significant empirical research project. Both of these conditions suggest exploration as a possible approach. He was interested in evaluating the training program, and hence the project had a normative sense. Given his job, he already knew a lot about the problem of sexual harassment and sexual harassment training. What he did not know much about was doing empirical research, reviewing the literature, or building a framework to evaluate the training (working hypotheses). He wanted a framework that was flexible and comprehensive. In his research, he discovered Lundvall’s (2006) knowledge taxonomy, summarized with four simple ways of knowing (know-what, know-how, know-why, know-who). He asked whether his agency’s training provided the participants with these kinds of knowledge. Lundvall’s categories of knowing became the basis of his working hypotheses. Lundvall’s knowledge taxonomy is well suited for working hypotheses because it is simple and easy to understand intuitively. It can also be tailored to the unique problematic situation of the researcher. Swift (2010, pp. 38–39) developed four basic working hypotheses:

WH1: Capital Metro provides adequate know-what knowledge in its sexual harassment training.

WH2: Capital Metro provides adequate know-how knowledge in its sexual harassment training.

WH3: Capital Metro provides adequate know-why knowledge in its sexual harassment training.

WH4: Capital Metro provides adequate know-who knowledge in its sexual harassment training.

From here he needed to determine what would constitute the different kinds of knowledge. For example, what constitutes “know-what” knowledge for sexual harassment training? This is where his knowledge and experience working in the field, as well as the literature, come into play. According to Lundvall et al. (1988, p. 12), “know-what” knowledge is about facts and raw information. Swift (2010) learned through the literature that laws and rules were the basis for the mandated sexual harassment training. He read about specific anti-discrimination laws and the subsequent rules and regulations derived from them. These laws and rules used specific definitions and were enacted within a historical context. Laws, rules, definitions, and history became the “facts” of know-what knowledge for his working hypothesis. To make this clear, he created sub-hypotheses that explicitly took these into account. See how Swift (2010, p. 38) constructed the sub-hypotheses below. Each sub-hypothesis was defended using material from the literature (Swift 2010, pp. 22–26). The sub-hypotheses can also be easily tied to evidence. For example, he could document that the training covered anti-discrimination laws.

WH1: Capital Metro provides adequate know-what knowledge in its sexual harassment training.

WH1a: The sexual harassment training includes information on anti-discrimination laws (Title VII).

WH1b: The sexual harassment training includes information on key definitions.

WH1c: The sexual harassment training includes information on Capital Metro’s Equal Employment Opportunity and Harassment policy.

WH1d: Capital Metro provides training on sexual harassment history.

Know-how knowledge refers to the ability to do something and involves skills (Lundvall and Johnson 1994, p. 12). It is a kind of expertise in action. The literature and his experience allowed Swift to identify skills such as how to file a claim or how to document incidents of sexual harassment as important “know-how” knowledge that should be included in sexual harassment training. Again, these were depicted as sub-hypotheses.

WH2: Capital Metro provides adequate know-how knowledge in its sexual harassment training.

WH2a: Training is provided on how to file and report a claim of harassment.

WH2b: Training is provided on how to document sexual harassment situations.

WH2c: Training is provided on how to investigate sexual harassment complaints.

WH2d: Training is provided on how to follow additional harassment policy procedures and protocols.

Note that the working hypotheses do not specify a relationship but rather are simple declarative sentences. If “know-how” knowledge were included in the sexual harassment training, he would be able to find evidence that participants learned how to file a claim (WH2a). The working hypothesis provides the bridge between theory and data that Sutton and Staw (1995) found missing in exploratory work. The sub-hypotheses are designed to be refined enough that researchers know what to look for and can tailor their hunt for evidence. Figure 1 captures the generic sub-hypothesis design.

Figure 1: A common structure used in the development of working hypotheses

When expected evidence is linked to the sub-hypotheses, data, framework and research purpose are aligned. This can be laid out in a planning document that operationalizes the data collection in something akin to an architect’s blueprint. This is where the scholar explicitly develops the alignment between purpose, framework and method (Shields and Rangarajan 2013 ; Shields et al. 2019b ).

Table 2 operationalizes Swift’s working hypotheses (and sub-hypotheses). The table provides clues as to what kind of evidence is needed to determine whether the hypotheses are supported. In this case, Swift used interviews with participants and trainers as well as a review of program documents. Column one repeats the sub-hypothesis, column two specifies the data collection method (here, interviews with participants/managers and review of program documents), and column three specifies the unique questions that focus the investigation. For example, the interview questions are provided. In the less precise world of qualitative data, evidence supporting a hypothesis could have varying degrees of strength. This too can be specified.
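The alignment such a table encodes can be pictured as a simple data structure: each working hypothesis owns sub-hypotheses, and each sub-hypothesis carries its data collection methods and focusing questions. The Python sketch below is purely illustrative; the class names, field names, and example values are our own paraphrase of this pattern, not an instrument from Swift's study.

```python
# Illustrative only: a minimal "blueprint" aligning sub-hypotheses with
# evidence sources and guiding questions, mirroring the three columns of
# the planning table described above. All example values are paraphrased.
from dataclasses import dataclass, field

@dataclass
class SubHypothesis:
    label: str                      # e.g., "WH1a"
    statement: str                  # declarative, non-relational expectation
    methods: list                   # data collection methods
    questions: list                 # questions that focus the hunt for evidence
    support: str = "undetermined"   # later judged: e.g., strong/weak/none

@dataclass
class WorkingHypothesis:
    label: str
    statement: str
    subs: list = field(default_factory=list)

wh1 = WorkingHypothesis(
    "WH1",
    "The agency provides adequate know-what knowledge in its training.",
)
wh1.subs.append(SubHypothesis(
    "WH1a",
    "Training includes information on anti-discrimination laws (Title VII).",
    methods=["participant interviews", "review of program documents"],
    questions=["Did the training cover Title VII?"],
))

# Print one planning-table row per sub-hypothesis.
for sub in wh1.subs:
    print(sub.label, "|", ", ".join(sub.methods), "|", sub.questions[0])
```

Laying the framework out this way makes the bridging function explicit: every expected piece of evidence is traceable to a sub-hypothesis, and every sub-hypothesis to the research purpose.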

For Swift’s example, neither the statistics of explanatory research nor the open-ended questions of interpretivist, inductive exploratory research is used. The deductive logic of inquiry here is somewhat intuitive and similar to that of a detective (Ulriksen and Dadalauri 2016). It is also a logic used in international law (Worster 2013). It should be noted that the working hypothesis and the corresponding data collection protocol do not stop inquiry and fieldwork outside the framework. The interviews could reveal an unexpected problem with Swift’s training program. The framework provides a very loose and perhaps useful way to identify and make sense of data that do not fit expectations. Researchers using working hypotheses should be sensitive to interesting findings that fall outside their framework. These could be used in future studies, to refine theory, or, in this case, to provide suggestions to improve sexual harassment training. The sensitizing concepts mentioned by Gilgun (2015) are free to emerge and should be encouraged.

Something akin to working hypotheses is hidden in plain sight in the professional literature. Take, for example, Kerry Crawford’s (2017) book Wartime Sexual Violence. Here she explores basic changes in the way “advocates and decision makers think about and discuss conflict-related sexual violence” (p. 2). She focused on a subsequent shift from silence to action. The shift occurred as wartime sexual violence was reframed as a “weapon of war.” The new frame captured the attention of powerful members of the security community who demanded, initiated, and paid for institutional and policy change. Crawford (2017) examines the legacy of this key reframing. She develops a six-stage model of potential international responses to incidents of wartime sexual violence. This model is fairly easily converted to working hypotheses and sub-hypotheses. Table 3 shows her model as a set of (non-relational) working hypotheses. She applied this model as a way to gather evidence among cases (e.g., the US response to sexual violence in the Democratic Republic of the Congo) to show the official level of response to sexual violence. Each case study chapter examined evidence to establish whether the case fit the pattern formalized in the working hypotheses. The framework was very useful in her comparative context: it allowed for consistent comparative analysis across cases. Her analysis of the three cases went well beyond the material covered in the framework. She freely incorporated useful, inductively informed data in her analysis and discussion. The framework, however, allowed for alignment within and across cases.
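The cross-case consistency such a framework affords can be sketched as a case-by-hypothesis support matrix: the same working hypotheses are applied to every case, and the resulting patterns are compared. In the hypothetical Python sketch below, the case labels and support values are placeholders, not Crawford's six stages or her findings.

```python
# Hypothetical sketch: one working-hypothesis framework applied across
# comparative cases, producing a comparable support pattern per case.
hypotheses = ["WH1", "WH2", "WH3", "WH4", "WH5", "WH6"]  # a six-stage model
cases = ["Case A", "Case B", "Case C"]

# Evidence judgments recorded per (case, hypothesis); values are placeholders.
support = {
    ("Case A", "WH1"): True, ("Case A", "WH2"): True,
    ("Case B", "WH1"): True, ("Case B", "WH2"): False,
    ("Case C", "WH1"): False,
}

def pattern(case):
    """Ordered support pattern for one case; hypotheses with no recorded
    evidence are reported as '?', leaving room for inductive findings."""
    return [support.get((case, wh), "?") for wh in hypotheses]

for case in cases:
    print(case, pattern(case))
```

Because every case is read against the same hypotheses, the comparison is systematized, while the "?" entries mark where evidence outside the framework may still emerge.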

9 Conclusion

In this article we argued that exploratory research is also well suited to deductive approaches. By examining the landscape of deductive, exploratory research, we proposed the working hypothesis as a flexible conceptual framework and a useful tool for doing exploratory studies. It has the potential to guide and bring coherence across the steps in the research process. After presenting the nature of the exploratory research purpose and how it differs from the two other types of research purposes identified in the literature (explanation and description), we focused on answering four questions in order to show the link between micro-conceptual frameworks and research purposes in a deductive setting. The answers to the four questions are summarized in Table 4.

Firstly, we argued that working hypotheses and exploration are situated within the pragmatic philosophical perspective. Pragmatism allows for pluralism in theory and data collection techniques, which is compatible with the flexible exploratory purpose. Secondly, after introducing and discussing the four core elements of pragmatism (practical, pluralism, participatory, and provisional), we explained how the working hypothesis informs the methodologies and evidence collection of deductive exploratory research through a presentation of the benefits of triangulation provided by mixed methods research. Thirdly, as is clear from the article title, we introduced the working hypothesis as the micro-conceptual framework for deductive exploratory research. We argued that the hypotheses of exploratory research, which we call working hypotheses, are distinguished from those of explanatory research because they do not require a relational component and are not bound by relational expectations. A working hypothesis is extremely flexible and idiosyncratic; depending on the research question, it can be viewed as a statement or group of statements of expectations tested in action. Using examples, we concluded by explaining how working hypotheses inform data collection and analysis for deductive exploratory research.

Crawford’s (2017) example showed how the structure of working hypotheses provides a framework for comparative case studies. Her criteria for analysis were specified ahead of time and used to frame each case. Thus, her comparisons were systematized across cases. Further, the framework ensured a connection between the data analysis and the literature review. Yet the flexible, working nature of the hypotheses allowed unexpected findings to be discovered.

The evidence required to test working hypotheses is directed by the research purpose and potentially includes both quantitative and qualitative sources. Thus, all types of evidence, including quantitative methods, should be part of the toolbox of deductive, exploratory research. We show how the working hypothesis, as a flexible exploratory framework, resolves many seeming dualisms pervasive in the research methods literature.

To conclude, this article has provided an in-depth examination of working hypotheses, taking into account philosophical questions and the larger formal research methods literature. By discussing working hypotheses as applied, theoretical tools, we demonstrated that working hypotheses fill a unique niche in the methods literature: they provide a way to enhance alignment in deductive, exploratory studies.

Notes

In practice, quantitative scholars often run multivariate analyses on databases to find out if there are correlations. Hypotheses are tested because the statistical software does the math, not because the scholar has an a priori, relational expectation (hypothesis) well grounded in the literature and supported by cogent arguments. Hunches are just fine. This is clearly an inductive approach to research and part of the larger process of inquiry.

In 1958, philosophers of science Oppenheim and Putnam used the notion of the working hypothesis in their title “Unity of Science as a Working Hypothesis.” They, too, use it as a big-picture concept: “[that] unity of science in this sense can be fully realized constitutes an over-arching meta-scientific hypothesis, which enables one to see a unity in scientific activities that might otherwise appear disconnected or unrelated” (p. 4).

It should be noted that the positivism described in the research methods literature does not resemble philosophical positivism as developed by philosophers like Comte (Whetsell and Shields 2015). In the research methods literature, “positivism means different things to different people…. The term has long been emptied of any precise denotation … and is sometimes affixed to positions actually opposed to those espoused by the philosophers from whom the name derives” (Schrag 1992, p. 5). For the purposes of this paper, we are capturing a few essential ways positivism is presented in the research methods literature. This helps us to position the “working hypothesis” and “exploratory” research within the larger context of contemporary research methods. We are not arguing that the positivism presented here is anything more. The incompatibility theory discussed later is an outgrowth of this research methods literature…

It should be noted that quantitative researchers often use inductive reasoning. They do this with existing data sets when they run correlations or regression analysis as a way to find relationships. They ask, what does the data tell us?

Qualitative researchers are also associated with phenomenology, hermeneutics, naturalistic inquiry and constructivism.

See Feilzer (2010), Howe (1988), Johnson and Onwuegbuzie (2004), Morgan (2007), Onwuegbuzie and Leech (2005), Biddle and Schafft (2015).

The term conceptual framework is applicable in a broad context (see Ravitch and Riggan 2012 ). The micro-conceptual framework narrows to the specific study and informs data collection (Shields and Rangarajan 2013 ; Shields et al. 2019a ) .

References

Adler, E., Clark, R.: How It’s Done: An Invitation to Social Research, 3rd edn. Thompson-Wadsworth, Belmont (2008)

Arnold, R.W.: Multiple working hypothesis in soil genesis. Soil Sci. Soc. Am. J. 29 (6), 717–724 (1965)

Atieno, O.: An analysis of the strengths and limitation of qualitative and quantitative research paradigms. Probl. Educ. 21st Century 13 , 13–18 (2009)

Babbie, E.: The Practice of Social Research, 11th edn. Thompson-Wadsworth, Belmont (2007)

Biddle, C., Schafft, K.A.: Axiology and anomaly in the practice of mixed methods work: pragmatism, valuation, and the transformative paradigm. J. Mixed Methods Res. 9 (4), 320–334 (2015)

Brendel, D.H.: Healing Psychiatry: Bridging the Science/Humanism Divide. MIT Press, Cambridge (2009)

Bryman, A.: Qualitative research on leadership: a critical but appreciative review. Leadersh. Q. 15 (6), 729–769 (2004)

Casula, M.: Under which conditions is cohesion policy effective: proposing an Hirschmanian approach to EU structural funds, Regional & Federal Studies, https://doi.org/10.1080/13597566.2020.1713110 (2020a)

Casula, M.: Economic Growth and Cohesion Policy Implementation in Italy and Spain. Palgrave Macmillan, Cham (2020b)

Ciceri, F., et al.: Microvascular COVID-19 lung vessels obstructive thromboinflammatory syndrome (MicroCLOTS): an atypical acute respiratory distress syndrome working hypothesis. Crit. Care Resusc. 15 , 1–3 (2020)

Crawford, K.F.: Wartime Sexual Violence: From Silence to Condemnation of a Weapon of War. Georgetown University Press, Washington, DC (2017)

Cronbach, L.: Beyond the two disciplines of scientific psychology. Am. Psychol. 30 (2), 116–127 (1975)

Dewey, J.: The reflex arc concept in psychology. Psychol. Rev. 3 (4), 357 (1896)

Dewey, J.: Logic: The Theory of Inquiry. Henry Holt & Co, New York (1938)

Feilzer, Y.: Doing mixed methods research pragmatically: implications for the rediscovery of pragmatism as a research paradigm. J. Mixed Methods Res. 4 (1), 6–16 (2010)

Gilgun, J.F.: Qualitative research and family psychology. J. Fam. Psychol. 19 (1), 40–50 (2005)

Gilgun, J.F.: Methods for enhancing theory and knowledge about problems, policies, and practice. In: Briar, K., Orme, J., Ruckdeschel, R., Shaw, I. (eds.) The Sage Handbook of Social Work Research, pp. 281–297. Sage, Thousand Oaks (2009)

Gilgun, J.F.: Deductive Qualitative Analysis as Middle Ground: Theory-Guided Qualitative Research. Amazon Digital Services LLC, Seattle (2015)

Glaser, B.G., Strauss, A.L.: The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine, Chicago (1967)

Gobo, G.: Re-Conceptualizing Generalization: Old Issues in a New Frame. In: Alasuutari, P., Bickman, L., Brannen, J. (eds.) The Sage Handbook of Social Research Methods, pp. 193–213. Sage, Los Angeles (2008)

Grinnell, R.M.: Social Work Research and Evaluation: Quantitative and Qualitative Approaches. F.E. Peacock Publishers, New York (2001)

Guba, E.G.: What have we learned about naturalistic evaluation? Eval. Pract. 8 (1), 23–43 (1987)

Guba, E., Lincoln, Y.: Effective Evaluation: Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches. Jossey-Bass Publishers, San Francisco (1981)

Habib, M.: The neurological basis of developmental dyslexia: an overview and working hypothesis. Brain 123 (12), 2373–2399 (2000)

Heyvaert, M., Maes, B., Onghena, P.: Mixed methods research synthesis: definition, framework, and potential. Qual. Quant. 47 (2), 659–676 (2013)

Hildebrand, D.: Dewey: A Beginners Guide. Oneworld Oxford, Oxford (2008)

Howe, K.R.: Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Edu. Res. 17 (8), 10–16 (1988)

Hothersall, S.J.: Epistemology and social work: enhancing the integration of theory, practice and research through philosophical pragmatism. Eur. J. Social Work 22 (5), 860–870 (2019)

Hyde, K.F.: Recognising deductive processes in qualitative research. Qual. Market Res. Int. J. 3 (2), 82–90 (2000)

Johnson, R.B., Onwuegbuzie, A.J.: Mixed methods research: a research paradigm whose time has come. Educ. Res. 33 (7), 14–26 (2004)

Johnson, R.B., Onwuegbuzie, A.J., Turner, L.A.: Toward a definition of mixed methods research. J. Mixed Methods Res. 1 (2), 112–133 (2007)

Kaplan, A.: The Conduct of Inquiry. Chandler, Scranton (1964)

Kolb, S.M.: Grounded theory and the constant comparative method: valid research strategies for educators. J. Emerg. Trends Educ. Res. Policy Stud. 3 (1), 83–86 (2012)

Levers, M.J.D.: Philosophical paradigms, grounded theory, and perspectives on emergence. Sage Open 3 (4), 2158244013517243 (2013)

Lundvall, B.-Å.: Knowledge management in the learning economy. Danish Research Unit for Industrial Dynamics (DRUID) Working Paper No. 6 (2006)

Lundvall, B.-Å., Johnson, B.: Knowledge management in the learning economy. J. Ind. Stud. 1 (2), 23–42 (1994)

Lundvall, B.-Å., Jenson, M.B., Johnson, B., Lorenz, E.: Forms of Knowledge and Modes of Innovation—From User-Producer Interaction to the National System of Innovation. In: Dosi, G., et al. (eds.) Technical Change and Economic Theory. Pinter Publishers, London (1988)

Maanen, J., Manning, P., Miller, M.: Series editors’ introduction. In: Stebbins, R. (ed.) Exploratory Research in the Social Sciences, pp. v–vi. Sage, Thousand Oaks (2001)

Mackenzie, N., Knipe, S.: Research dilemmas: paradigms, methods and methodology. Issues Educ. Res. 16 (2), 193–205 (2006)

Marlow, C.R.: Research Methods for Generalist Social Work. Thomson Brooks/Cole, New York (2005)

Mead, G.H.: The working hypothesis in social reform. Am. J. Sociol. 5 (3), 367–371 (1899)

Milnes, A.G.: Structure of the Pennine Zone (Central Alps): a new working hypothesis. Geol. Soc. Am. Bull. 85 (11), 1727–1732 (1974)

Morgan, D.L.: Paradigms lost and pragmatism regained: methodological implications of combining qualitative and quantitative methods. J. Mixed Methods Res. 1 (1), 48–76 (2007)

Morse, J.: The significance of saturation. Qual. Health Res. 5 (2), 147–149 (1995)

O’Connor, M.K., Netting, F.E., Thomas, M.L.: Grounded theory: managing the challenge for those facing institutional review board oversight. Qual. Inq. 14 (1), 28–45 (2008)

Onwuegbuzie, A.J., Leech, N.L.: On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies. Int. J. Soc. Res. Methodol. 8 (5), 375–387 (2005)

Oppenheim, P., Putnam, H.: Unity of science as a working hypothesis. In: Minnesota Studies in the Philosophy of Science, vol. II, pp. 3–36 (1958)

Patten, M.L., Newhart, M.: Understanding Research Methods: An Overview of the Essentials, 2nd edn. Routledge, New York (2000)

Pearse, N.: An illustration of deductive analysis in qualitative research. In: European Conference on Research Methodology for Business and Management Studies, pp. 264–VII. Academic Conferences International Limited (2019)

Prater, D.N., Case, J., Ingram, D.A., Yoder, M.C.: Working hypothesis to redefine endothelial progenitor cells. Leukemia 21 (6), 1141–1149 (2007)

Ravitch, B., Riggan, M.: Reason and Rigor: How Conceptual Frameworks Guide Research. Sage, Beverley Hills (2012)

Reiter, B.: The epistemology and methodology of exploratory social science research: Crossing Popper with Marcuse. In: Government and International Affairs Faculty Publications. Paper 99. http://scholarcommons.usf.edu/gia_facpub/99 (2013)

Ritchie, J., Lewis, J.: Qualitative Research Practice: A Guide for Social Science Students and Researchers. Sage, London (2003)

Schrag, F.: In defense of positivist research paradigms. Educ. Res. 21 (5), 5–8 (1992)

Shields, P.M.: Pragmatism as a philosophy of science: a tool for public administration. Res. Public Adm. 4, 195–225 (1998)

Shields, P.M., Rangarajan, N.: A Playbook for Research Methods: Integrating Conceptual Frameworks and Project Management. New Forums Press (2013)

Shields, P.M., Tajalli, H.: Intermediate theory: the missing link in successful student scholarship. J. Public Aff. Educ. 12 (3), 313–334 (2006)

Shields, P., Whetsell, T.: Public administration methodology: a pragmatic perspective. In: Raadschelders, J., Stillman, R. (eds.) Foundations of Public Administration, pp. 75–92. Melvin and Leigh, New York (2017)

Shields, P., Rangarajan, N., Casula, M.: It is a Working Hypothesis: Searching for Truth in a Post-Truth World (part I). Sotsiologicheskie issledovaniya 10 , 39–47 (2019a)

Shields, P., Rangarajan, N., Casula, M.: It is a Working Hypothesis: Searching for Truth in a Post-Truth World (part 2). Sotsiologicheskie issledovaniya 11 , 40–51 (2019b)

Smith, J.K.: Quantitative versus qualitative research: an attempt to clarify the issue. Educ. Res. 12 (3), 6–13 (1983a)

Smith, J.K.: Quantitative versus interpretive: the problem of conducting social inquiry. In: House, E. (ed.) Philosophy of Evaluation, pp. 27–52. Jossey-Bass, San Francisco (1983b)

Smith, J.K., Heshusius, L.: Closing down the conversation: the end of the quantitative-qualitative debate among educational inquirers. Educ. Res. 15 (1), 4–12 (1986)

Stebbins, R.A.: Exploratory Research in the Social Sciences. Sage, Thousand Oaks (2001)

Strydom, H.: An evaluation of the purposes of research in social work. Soc. Work/Maatskaplike Werk 49 (2), 149–164 (2013)

Sutton, R.I., Staw, B.M.: What theory is not. Adm. Sci. Q. 40 (3), 371–384 (1995)

Swift, III, J.: Exploring Capital Metro’s Sexual Harassment Training using Dr. Bengt-Ake Lundvall’s taxonomy of knowledge principles. Applied Research Project, Texas State University https://digital.library.txstate.edu/handle/10877/3671 (2010)

Thomas, E., Magilvy, J.K.: Qualitative rigor or research validity in qualitative research. J. Spec. Pediatric Nurs. 16 (2), 151–155 (2011)

Twining, P., Heller, R.S., Nussbaum, M., Tsai, C.C.: Some guidance on conducting and reporting qualitative studies. Comput. Educ. 107 , A1–A9 (2017)

Ulriksen, M., Dadalauri, N.: Single case studies and theory-testing: the knots and dots of the process-tracing method. Int. J. Soc. Res. Methodol. 19 (2), 223–239 (2016)

Van Evera, S.: Guide to Methods for Students of Political Science. Cornell University Press, Ithaca (1997)

Whetsell, T.A., Shields, P.M.: The dynamics of positivism in the study of public administration: a brief intellectual history and reappraisal. Adm. Soc. 47 (4), 416–446 (2015)

Willis, J.W., Jost, M., Nilakanta, R.: Foundations of Qualitative Research: Interpretive and Critical Approaches. Sage, Beverley Hills (2007)

Worster, W.T.: The inductive and deductive methods in customary international law analysis: traditional and modern approaches. Georget. J. Int. Law 45 , 445 (2013)

Yin, R.K.: The case study as a serious research strategy. Knowledge 3 (1), 97–114 (1981)

Yin, R.K.: The case study method as a tool for doing evaluation. Curr. Sociol. 40 (1), 121–137 (1992)

Yin, R.K.: Applications of Case Study Research. Sage, Beverley Hills (2011)

Yin, R.K.: Case Study Research and Applications: Design and Methods. Sage Publications, Beverley Hills (2017)


Acknowledgements

The authors contributed equally to this work. The authors would like to thank Quality & Quantity's editors and the anonymous reviewers for their valuable advice and comments on previous versions of this paper.

Open access funding provided by Alma Mater Studiorum - Università di Bologna within the CRUI-CARE Agreement. There are no funders to report for this submission.

Author information

Authors and Affiliations

Department of Political and Social Sciences, University of Bologna, Strada Maggiore 45, 40125, Bologna, Italy

Mattia Casula

Texas State University, San Marcos, TX, USA

Nandhini Rangarajan & Patricia Shields


Corresponding author

Correspondence to Mattia Casula .

Ethics declarations

Conflict of interest.

No potential conflict of interest was reported by the authors.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Casula, M., Rangarajan, N. & Shields, P. The potential of working hypotheses for deductive exploratory research. Qual Quant 55 , 1703–1725 (2021). https://doi.org/10.1007/s11135-020-01072-9

Download citation

Accepted : 05 November 2020

Published : 08 December 2020

Issue Date : October 2021

DOI : https://doi.org/10.1007/s11135-020-01072-9


Keywords

  • Exploratory research
  • Working hypothesis
  • Deductive qualitative research


Exploratory Research: Types & Characteristics


Consider a scenario where a juice bar owner feels that increasing the variety of juices will attract more customers. However, he is not sure and needs more information, so he decides to conduct exploratory research to find out whether expanding the juice selection will bring in more customers or whether there is a better idea.

Another example of exploratory research is a podcast survey template used to collect feedback on podcast consumption from existing listeners as well as from podcast listeners who are not yet subscribed to the channel. This helps the podcast's author create curated content that will gain a larger audience. Let's explore this topic.


Content Index

  • Exploratory research: Definition
  • Types and methodologies of exploratory research
  • Primary research methods
  • Secondary research methods
  • Steps to conduct exploratory research
  • Characteristics of exploratory research
  • Advantages of exploratory research
  • Disadvantages of exploratory research
  • Importance of exploratory research

Exploratory research is defined as research used to investigate a problem that is not clearly defined. It is conducted to gain a better understanding of the existing research problem, but it will not provide conclusive results. The researcher starts with a general idea and uses the study as a medium to identify issues that can become the focus of future research. An important aspect is that the researcher should be willing to change direction as new data or insights emerge. Such research is usually carried out when the problem is at a preliminary stage. It is often referred to as a grounded theory approach or interpretive research, as it is used to answer questions like what, why and how.

Types and methodologies of exploratory research

While it may sound difficult to research something about which very little is known, several methods can help a researcher settle on the best research design, data collection method and choice of subjects. Research can be conducted in two ways: primary and secondary. Under these two types there are multiple methods a researcher can use, and the data gathered can be qualitative or quantitative. Some of the most widely used research designs include the following:


Primary research methods

Primary research is information gathered directly from the subject, whether a group of people or an individual. It can be carried out by the researcher directly, or a third party can be employed to conduct it on the researcher's behalf. Primary research is carried out specifically to explore a problem that requires in-depth study.

  • Surveys/polls: Surveys and polls are used to gather information from a predefined group of respondents and are among the most important quantitative methods. Various types of surveys or polls can be used to explore opinions, trends and more. With advances in technology, surveys can now be sent online and are easy to access, for instance through a survey app on tablets, laptops or mobile phones, with the information available to the researcher in real time. Nowadays, most organizations offer short surveys and reward respondents in order to achieve higher response rates.

For example: A survey is sent to a set of respondents to understand their opinions about the size of mobile phones when purchasing one. Based on this information, the organization can dig deeper into the topic and make business decisions.

  • Interviews: While you may get a lot of information from public sources, an in-person interview can sometimes provide in-depth information on the subject being studied. This is a qualitative research method. An interview with a subject matter expert can give you meaningful insights that a generalized public source cannot. Interviews are carried out in person or by telephone, using open-ended questions to elicit meaningful information about the topic.

For example: An interview with an employee can give you insight into their degree of job satisfaction, while an interview with a subject matter expert in quantum theory can give you in-depth information on that topic.

  • Focus groups: The focus group is another widely used method in exploratory research. A group of people is chosen and allowed to express their insights on the topic being studied. It is important, however, that the individuals chosen for a focus group have a common background and comparable experiences.

For example: A focus group can help a researcher identify the opinions of consumers considering buying a phone. Such research reveals what consumers value when buying a phone, whether screen size, brand value or even dimensions, and from this the organization can understand consumers' buying attitudes and opinions.

  • Observations: Observational research can involve qualitative observation or quantitative observation. It is conducted to observe a person and draw findings from their reaction to certain parameters; there is no direct interaction with the subject.

For example: An FMCG company wants to know how its consumers react to the new shape of its product. The researcher observes customers' first reactions and collects the data, which is then used to draw inferences.
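Once survey or observation responses are in, the first exploratory step is usually a simple tally. Here is a minimal Python sketch of that step; the question and responses are invented purely for illustration:

```python
from collections import Counter

# Hypothetical answers to an exploratory poll question:
# "What matters most to you when buying a phone?"
responses = [
    "screen size", "brand", "screen size", "price",
    "brand", "screen size", "camera", "price", "screen size",
]

counts = Counter(responses)
total = len(responses)

# Report each option's share of responses, most common first
for option, n in counts.most_common():
    print(f"{option}: {n}/{total} ({100 * n / total:.0f}%)")
```

Even this trivial tally already answers the exploratory question of which factors deserve a deeper, structured study.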


Secondary research methods

Secondary research involves gathering information from previously published primary research. In such research you gather information from sources like case studies, magazines, newspapers and books.

  • Online research: In today's world, this is one of the fastest ways to gather information on any topic. A lot of data is readily available on the internet, and the researcher can access it whenever needed. An important consideration in such research is the genuineness and authenticity of the websites from which the information is gathered.

For example: A researcher needs to find out what percentage of people prefer a specific phone brand. The researcher simply enters the query into a search engine and gets multiple links with related information and statistics.

  • Literature research: Literature research is one of the most inexpensive methods for discovering a hypothesis. A tremendous amount of information is available in libraries, online sources and commercial databases. Sources can include newspapers, magazines, library books, government documents, topic-specific articles, annual reports and published statistics from research organizations.

However, a few things must be kept in mind when researching from these sources. Government agencies hold authentic information, but it may come at a nominal cost. Research from educational institutions is often overlooked, yet educational institutions carry out more research than most other entities.

Furthermore, commercial sources provide information on major topics such as political agendas, demographics, financial information and market trends.

For example: A company has low sales. Available statistics and market literature can show whether the problem is market-related or organization-related. If the topic being studied concerns the financial situation of a country, research data can be accessed through government documents or commercial sources.

  • Case study research: Case study research can help a researcher find more information by carefully analyzing existing cases that have gone through a similar problem. Such exploratory analysis is important and critical, especially in today's business world. The researcher needs to analyze the case carefully with regard to all the variables present in the previous case as compared with his or her own. It is commonly used by business organizations, in the social sciences and in the health sector.

For example: A particular orthopedic surgeon has the highest success rate for knee surgeries. Other hospitals and doctors have taken up this case to understand and benchmark the method this surgeon uses in order to increase their own success rates.

Steps to conduct exploratory research

  • Identify the problem: The researcher identifies the subject of research, and the problem is addressed by carrying out multiple methods to answer the questions.
  • Create the hypothesis: If the researcher finds that there are no prior studies and the problem is not precisely resolved, he or she creates a hypothesis based on the questions obtained while identifying the problem.
  • Further research: Once the data has been obtained, the researcher continues the study through descriptive investigation. Qualitative methods are used to study the subject in further detail and to check whether the information holds true.


Characteristics of exploratory research

  • Exploratory studies are unstructured.
  • They are usually low cost, interactive and open ended.
  • They enable a researcher to answer questions such as: What is the problem? What is the purpose of the study? What topics could be studied?
  • Exploratory research is generally carried out when there is no prior research, or when existing research does not answer the problem precisely enough.
  • It is time consuming, requires patience and carries risks.
  • The researcher has to go through all the information available for the particular study being done.
  • There is no set of rules for carrying out the research per se, as it is flexible, broad and scattered.
  • The research needs to have importance or value; if the problem is not important in the industry, the research carried out is ineffective.
  • The research should be supported by a few theories, as that makes it easier for the researcher to assess the findings and move ahead.
  • Such research usually produces qualitative data; however, in certain cases quantitative findings can be generalized to a larger sample through the use of surveys and experiments.


Advantages of exploratory research

  • The researcher has a lot of flexibility and can adapt to changes as the research progresses.
  • It is usually low cost.
  • It helps lay the foundation of a study, which can lead to further research.
  • It enables the researcher to understand at an early stage whether the topic is worth investing time and resources in.
  • It can help other researchers identify possible causes of the problem, which can then be studied in detail to find out which is the most likely cause.
Disadvantages of exploratory research

  • Even though it can point you in the right direction, it is usually inconclusive.
  • The main disadvantage of exploratory research is that it provides qualitative data, and interpretation of such information can be judgmental and biased.
  • Most of the time, exploratory research involves a small sample, so the results cannot be accurately generalized to a larger population.
  • If data is collected through secondary research, there is a chance that it is old and out of date.
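The small-sample caveat above can be made concrete with a quick back-of-the-envelope calculation. The sketch below uses the rough normal-approximation confidence interval for a proportion (sample sizes and counts are invented) to show how much wider the uncertainty is for an exploratory-scale sample than for a full-scale one:

```python
import math

def approx_ci(successes: int, n: int, z: float = 1.96):
    """Rough 95% confidence interval for a proportion
    (normal approximation; illustration only)."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# 60% of respondents prefer option A, at two (invented) sample sizes
small = approx_ci(12, 20)     # exploratory-scale sample
large = approx_ci(600, 1000)  # full-scale survey

print(f"n=20:   {small[0]:.2f} to {small[1]:.2f}")
print(f"n=1000: {large[0]:.2f} to {large[1]:.2f}")
```

With 20 respondents the interval spans roughly 0.39 to 0.81, so almost nothing can be claimed about the wider population; the same 60% measured on 1000 respondents pins the estimate down to roughly 0.57 to 0.63.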


Importance of exploratory research

Exploratory research is carried out when a topic needs to be understood in depth, especially if it hasn't been studied before. The goal is to explore the problem and the area around it, not to derive conclusions. Such research enables a researcher to set a strong foundation for exploring ideas, choosing the right research design and finding the variables that actually matter for the in-depth analysis. Most importantly, it can save organizations and researchers a great deal of time and resources, as it tells the researcher early on whether the topic is worth pursuing.

  • Open access
  • Published: 28 May 2018

Exploratory studies to decide whether and how to proceed with full-scale evaluations of public health interventions: a systematic review of guidance

  • Britt Hallingberg   ORCID: orcid.org/0000-0001-8016-5793 1 ,
  • Ruth Turley 1 , 4 ,
  • Jeremy Segrott 1 , 2 ,
  • Daniel Wight 3 ,
  • Peter Craig 3 ,
  • Laurence Moore 3 ,
  • Simon Murphy 1 ,
  • Michael Robling 1 , 2 ,
  • Sharon Anne Simpson 3 &
  • Graham Moore 1  

Pilot and Feasibility Studies volume 4, Article number: 104 (2018)


Abstract

Background

Evaluations of complex interventions in public health are frequently undermined by problems that can be identified before the effectiveness study stage. Exploratory studies, often termed pilot and feasibility studies, are a key step in assessing the feasibility and value of progressing to an effectiveness study. Such studies can provide vital information to support more robust evaluations, thereby reducing costs and minimising potential harms of the intervention. This systematic review forms the first phase of a wider project to address the need for stand-alone guidance for public health researchers on designing and conducting exploratory studies. The review objectives were to identify and examine existing recommendations concerning when such studies should be undertaken, questions they should answer, suitable methods, criteria for deciding whether to progress to an effectiveness study and appropriate reporting.

Methods

We searched for published and unpublished guidance reported between January 2000 and November 2016 via bibliographic databases, websites, citation tracking and expert recommendations. Included papers were thematically synthesized.

Results

The search retrieved 4095 unique records. Thirty papers were included, representing 25 unique sources of guidance/recommendations. Eight themes were identified: pre-requisites for conducting an exploratory study, nomenclature, guidance for intervention assessment, guidance surrounding any future evaluation study design, flexible versus fixed design, progression criteria to a future evaluation study, stakeholder involvement and reporting of exploratory studies. Exploratory studies were described as being concerned with the intervention content, the future evaluation design or both. However, the nomenclature and endorsed methods underpinning these aims were inconsistent across papers. There was little guidance on what should precede or follow an exploratory study and decision-making surrounding this.

Conclusions

Existing recommendations are inconsistent concerning the aims, designs and conduct of exploratory studies, and guidance is lacking on the evidence needed to inform when to proceed to an effectiveness study.

Trial registration

PROSPERO 2016, CRD42016047843


Improving public health and disrupting complex problems such as smoking, obesity and mental health requires complex, often multilevel, interventions. Such interventions are often costly and may cause unanticipated harms and therefore require evaluation using the most robust methods available. However, pressure to identify effective interventions can lead to premature commissioning of large effectiveness studies of poorly developed interventions, wasting finite research resources [ 1 , 2 , 3 ]. In the development of pharmaceutical drugs over 80% fail to reach ‘Phase III’ effectiveness trials, even after considerable investment [ 4 ]. With public health interventions, the historical tendency to rush to full evaluation has in some cases led to evaluation failures due to issues which could have been identified at an earlier stage, such as difficulties recruiting sufficient participants [ 5 ]. There is growing consensus that improving the effectiveness of public health interventions relies on attention to their design and feasibility [ 3 , 6 ]. However, what constitutes good practice when deciding when a full evaluation is warranted, what uncertainties should be addressed to inform this decision and how, is unclear. This systematic review aims to synthesize existing sources of guidance for ‘exploratory studies’ which we broadly define as studies intended to generate evidence needed to decide whether and how to proceed with a full-scale effectiveness study. They do this by optimising or assessing the feasibility of the intervention and/or evaluation design that the effectiveness study would use. Hence, our definition includes studies variously referred to throughout the literature as ‘pilot studies’, ‘feasibility studies’ or ‘exploratory trials’. Our definition is consistent with previous work conducted by Eldridge et al. 
[ 7 , 8 ], who define feasibility as an overarching concept [ 8 ] which assesses; ‘… whether the future trial can be done, should be done, and, if so, how’ (p. 2) [ 7 ]. However, our definition also includes exploratory studies to inform non-randomised evaluations, rather than a sole focus on trials.

The importance of thoroughly establishing the feasibility of intervention and evaluation plans prior to embarking on an expensive, fully powered evaluation was indicated in the Medical Research Council’s (MRC) framework for the development and evaluation of complex interventions to improve health [ 9 , 10 ]. This has triggered shifts in the practice of researchers and funders toward seeking and granting funding for an ever growing number of studies to address feasibility issues. Such studies are however in themselves often expensive [ 11 , 12 ]. While there is a compelling case for such studies, the extent to which this substantial investment in exploratory studies has to date improved the effectiveness and cost-effectiveness of evidence production remains to be firmly established. Where exploratory studies are conducted poorly, this investment may simply lead to expenditure of large amounts of additional public money, and several years’ delay in getting evidence into the hands of decision-makers, without necessarily increasing the likelihood that a future evaluation will provide useful evidence.

The 2000 MRC guidance used the term ‘exploratory trial’ for work conducted prior to a ‘definitive trial’, indicating that it should primarily address issues concerning the optimisation, acceptability and delivery of the intervention [ 13 ]. This included adaptation of the intervention, consideration of variants of the intervention, testing and refinement of delivery method or content, assessment of learning curves and implementation strategies and determining the counterfactual. Other possible purposes of exploratory trials included preliminary assessment of effect size in order to calculate the sample size for the main trial and other trial design parameters, including methods of recruitment, randomisation and follow-up. Updated MRC guidance in 2008 moved away from the sole focus on RCTs (randomised controlled trials) of its predecessor reflecting recognition that not all interventions can be tested using an RCT and that the next most robust methods may sometimes be the best available option [ 10 , 14 ]. Guidance for exploratory studies prior to a full evaluation have, however, often been framed as relevant only where the main evaluation is to be an RCT [ 13 , 15 ].

However, the goals of exploratory studies advocated by research funders have to date varied substantially. For instance, the National Institute for Health Research Evaluation Trials and Studies Coordinating Centre (NETSCC) definitions of feasibility and pilot studies do not include examination of intervention design, delivery or acceptability and do not suggest that modifications to the intervention prior to full-scale evaluation will arise from these phases. However, the NIHR (National Institute of Health Research) portfolio of funded studies indicates various uses of terms such as ‘feasibility trial’, ‘pilot trial’ and ‘exploratory trial’ to describe studies with similar aims, while it is rare for such studies not to include a focus on intervention parameters [ 16 , 17 , 18 ]. Within the research literature, there is considerable divergence over what exploratory studies should be called, what they should achieve, what they should entail, whether and how they should determine progression to future studies and how they should be reported [ 7 , 8 , 19 , 20 , 21 ].

This paper presents a systematic review of the existing recommendations and guidance on exploratory studies relevant to public health, conducted as the first stage of a project to develop new MRC guidance on exploratory studies. This review aims to produce a synthesis of current guidance/recommendations in relation to the definition, purpose and content of exploratory studies, and what is seen as ‘good’ and ‘bad’ practice as presented by the authors. It will provide an overview of key gaps and areas in which there is inconsistency within and between documents. The rationale for guidance and recommendations are presented, as well as the theoretical perspectives informing them. In particular, we examine how far the existing recommendations answer the following questions:

When is it appropriate to conduct an exploratory study?

What questions should such studies address?

What are the key methodological considerations in answering these questions?

What criteria should inform a decision on whether to progress to an effectiveness study?

How should exploratory studies be reported?

Methods

This review is reported in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement [ 22 ] as evidenced in the PRISMA checklist (see Additional file 1: Table S1). The review protocol is registered on PROSPERO (registration number: CRD42016047843; www.crd.york.ac.uk/prospero ).

Literature search

A comprehensive search (see Additional file  2 : Appendix) was designed and completed during August to November 2016 to identify published and grey literature reported between January 2000 and November 2016 that contained guidance and recommendations on exploratory studies that could have potential relevance to public health. Bibliographic databases were CINAHL, Embase, MEDLINE, MEDLINE-In-process, PsycINFO, Web of Science and PubMed. Supplementary searches included key websites (see Additional file  2 : Appendix) and forward and backward citation tracking of included papers, as well as contacting experts in the field. The first MRC guidance on developing and evaluating complex interventions in health was published in 2000; we therefore excluded guidance published before this year.

Selection of included papers

Search results were exported into the reference management software EndNote, and clearly irrelevant or duplicate records were removed by an information specialist. Eligibility criteria were applied to abstracts and potentially relevant full-text papers by two reviewers working independently in duplicate (BH, JS). Discrepancies were resolved by consensus or, if necessary, by a third reviewer. Full criteria are shown in Table 1. During screening of eligible studies, it became evident that determining whether or not guidance was applicable to public health was not always clear. The criteria in Table 1 were agreed by the team after a list of potentially eligible publications was identified.
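The dual-screening workflow just described (two independent reviewers, discrepancies resolved by consensus or a third reviewer) can be mirrored by a small bookkeeping script. This is a hypothetical sketch, not part of the review's actual tooling; the record IDs and decisions are invented:

```python
# Hypothetical include/exclude decisions from two independent reviewers
reviewer_a = {"rec1": "include", "rec2": "exclude", "rec3": "include", "rec4": "exclude"}
reviewer_b = {"rec1": "include", "rec2": "include", "rec3": "include", "rec4": "exclude"}

# Records where the reviewers disagree go to consensus discussion
# (or to a third reviewer if consensus cannot be reached)
discrepancies = [r for r in reviewer_a if reviewer_a[r] != reviewer_b[r]]
agreement = 1 - len(discrepancies) / len(reviewer_a)

print("to resolve:", discrepancies)       # ['rec2']
print(f"raw agreement: {agreement:.0%}")  # 75%
```

Flagging the disagreements, rather than averaging them away, is the point of screening in duplicate: each flagged record gets an explicit consensus decision.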

Quality assessment of included papers

Given the nature of the publications included (expert guidance or methodological discussion papers), quality assessment was not applicable.

Data extraction and thematic synthesis

A thematic synthesis of guidance within included documents was performed [ 23 ]. This involved the use of an a priori coding framework (based on the project's aims and objectives), developed by RT, JS and DW ([ 24 ], see Additional file 2: Appendix). Data were extracted using this schema in the qualitative analysis software NVivo by one reviewer (BH). A 10% sample of coded papers was checked by a second reviewer (JS). Data were then conceptualised into final themes by agreement (BH, JS, DW, RT).
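As a rough illustration of what an a priori coding framework does (the actual synthesis used NVivo and human judgement, not keyword matching), here is a toy Python sketch that tags invented guidance passages against hypothetical themes:

```python
# A priori coding framework: theme -> indicative keywords (all hypothetical)
framework = {
    "nomenclature": ["pilot", "feasibility", "exploratory trial"],
    "progression criteria": ["progression", "stop/go", "criteria"],
    "stakeholder involvement": ["stakeholder", "patient", "public"],
}

# Invented snippets standing in for extracted guidance passages
passages = [
    "Pilot and feasibility studies are often used interchangeably.",
    "Clear progression criteria should be set before the pilot begins.",
]

# Tag each passage with every theme whose keywords it mentions;
# a passage can be coded under more than one theme
coded = {
    theme: [p for p in passages if any(k in p.lower() for k in keywords)]
    for theme, keywords in framework.items()
}

for theme, hits in coded.items():
    print(f"{theme}: {len(hits)} passage(s)")
```

Note that the second passage lands under two themes at once, which is exactly the situation a human coder resolves (or deliberately preserves) during synthesis.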

Results

Review statistics

Four thousand ninety-five unique records were identified, of which 93 were reviewed in full text (see Fig. 1). In total, 30 documents were included in the systematic review, representing 25 unique sets of guidance. Most sources of guidance did not explicitly identify an intended audience, and guidance varied in its relevance to public health. Table 2 presents an overview of all sources of guidance included in the review, identifying those more or less relevant to public health as well as those which specifically applied to exploratory studies with a randomised design.

Figure 1: Flow diagram

Findings from guidance

The included guidance reported a wide range of recommendations on the process of conducting and reporting exploratory studies. We categorised these into eight themes that capture: pre-requisites for conducting an exploratory study, nomenclature, guidance for intervention assessment, guidance surrounding the future evaluation study design, adaptive vs rigid designs, progression criteria for exploratory studies, stakeholder involvement and reporting.

Narrative description of themes

Theme 1: pre-requisites for conducting an exploratory study

Where mentioned, pre-requisite activities included determining the evidence base, establishing the theoretical basis for the intervention, identifying the intervention components and modelling the intervention in order to understand how its components interact and impact on final outcomes [ 9 , 25 , 26 , 27 ]. These were often discussed within the context of the MRC's intervention development-evaluation cycle [ 6 , 9 , 10 , 13 , 25 , 26 , 27 , 28 ]. Understanding how intervention components interact with various contextual settings [ 6 , 27 , 29 ] and identifying unintended harms [ 6 , 29 ] as well as potential implementation issues [ 6 , 9 , 10 , 30 ] were also highlighted. There was an absence of detail on how to judge when the above conditions had been met sufficiently to move on to an exploratory study.

Theme 2: nomenclature

A wide range of terms were used, sometimes interchangeably, to describe exploratory studies with the most common being pilot trial/study. Table  3 shows the frequency of the terms used in guidance including other terms endorsed.

Different terminology did not appear to be consistently associated with specific study purposes (see theme 3), as illustrated in Table 2. 'Pilot' and 'feasibility' studies were sometimes used interchangeably [ 10 , 20 , 25 , 26 , 27 , 28 , 31 ], while others made distinctions between the two according to design features or particular aims [ 7 , 8 , 19 , 29 , 32 , 33 , 34 ]. For example, some described pilot studies as a smaller version of a future RCT, run in miniature [ 7 , 8 , 19 , 29 , 32 , 33 , 34 ]; these were sometimes associated with a randomised design [ 32 , 34 ], but not always [ 7 , 8 ]. In contrast, feasibility studies were used as an umbrella term by Eldridge et al., with pilot studies representing a subset of feasibility studies [ 7 , 8 ]: 'We suggest that researchers view feasibility as an overarching concept, with all studies done in preparation for a main study open to being called feasibility studies, and with pilot studies as a subset of feasibility studies.' (p. 18) [ 8 ].

Feasibility studies could focus on particular intervention and trial design elements [ 29 , 32 ] which may not include randomisation [ 32 , 34 ]. Internal pilot studies were primarily viewed as part of the full trial [ 8 , 32 , 35 , 36 , 37 , 38 ] and are therefore not depicted under nomenclature in Table  3 .

While no sources explicitly stated that an exploratory study should focus on one area rather than another, the aims and associated methods of exploratory studies diverged into two separate themes: examining the intervention itself, and examining the future evaluation design. These are detailed below in themes 3 and 4.

Theme 3: guidance for intervention assessment

Sources of guidance endorsed exploratory studies having formative purposes (i.e. refining the intervention and addressing uncertainties related to intervention implementation [ 13 , 15 , 29 , 31 , 39 ]) as well as summative goals (i.e. assessing the potential impact of an intervention or its promise [ 6 , 13 , 39 ]).

Refining the intervention and underlying theory

Some guidance suggested that changes could be made within exploratory studies to refine the intervention and underlying theory [ 15 , 29 , 31 ] and adapt intervention content to a new setting [ 39 ]. However, guidance was not clear on what constituted minor vs. major changes and implications for progression criteria (see theme 6). When making changes to the intervention or underlying theory, some guidance recommended this take place during the course of the exploratory study (see theme 5). Others highlighted the role of using a multi-arm design to select the contents of the intervention before a full evaluation [ 13 ] and to assess potential mechanisms of multiple different interventions or intervention components [ 29 ]. Several sources highlighted the role of qualitative research in optimising or refining an intervention, particularly for understanding the components of the logic model [ 29 ] and surfacing hidden aspects of the intervention important for delivering outcomes [ 15 ].

Intervention implementation

There was agreement across a wide range of guidance that exploratory studies could explore key uncertainties related to intervention implementation, such as acceptability, feasibility or practicality. Notably, these terms were often ill-defined and used interchangeably. Some considered acceptability in terms of recipients’ reactions [ 7 , 8 , 29 , 32 , 39 ], while others were also attentive to feasibility from the perspective of intervention providers, deliverers and health professionals [ 6 , 9 , 29 , 30 , 34 , 39 ]. Assessments of implementation, feasibility, fidelity and ‘practicality’ explored the likelihood of being able to deliver in practice what was intended [ 25 , 26 , 27 , 30 , 39 ]. These were sometimes framed as aims within an embedded process evaluation that took place alongside an exploratory study, although the term process evaluation was never defined [ 7 , 10 , 15 , 29 , 40 ].

Qualitative research was encouraged for assessment of intervention acceptability [ 21 ] or implementation (e.g. via non-participant observation [ 15 ]). Caution was recommended with regard to focus groups, where there is a risk of masking divergent views [ 15 ]. Others recommended quantitative surveys to examine retention rates and reasons for dropout [ 7 , 30 ]. Furthermore, several sources emphasised the importance of testing implementation in a range of contexts [ 15 , 29 , 39 , 41 ]—especially in less socioeconomically advantaged groups, to examine the risk of widening health inequalities [ 29 , 39 ].

One source of guidance considered whether randomisation was required for assessing intervention acceptability, believing this to be unnecessary but also suggesting it could ‘potentially depend on preference among interventions offered in the main trial’ ([ 21 ]; p. 9). Thus, issues of intervention acceptability, particularly within multi-arm trials, may relate to clinical equipoise and acceptability of randomisation procedures among participants [ 30 ].

Appropriateness of assessing intervention impact

Several sources of guidance discussed the need to understand the impact of the intervention, including harms, benefits or unintended consequences [ 6 , 7 , 15 , 29 , 39 ]. Much of the guidance focused on statistical tests of effectiveness, with disagreement on the soundness of this aim, although qualitative methods were also recommended [ 15 , 42 ]. Some condemned statistically testing for effectiveness [ 7 , 20 , 29 , 32 , 41 ], as such studies are often underpowered and hence yield imprecise and potentially misleading estimates of effect sizes [ 7 , 20 ]. Others argued that an estimate of likely effect size could provide evidence that the intervention was working as intended and not causing serious unintended harms [ 6 ], and could be used to calculate the power for the full trial [ 13 ]. Later guidance from the MRC is more ambiguous than earlier guidance, stating that estimates should be interpreted with caution while simultaneously listing ‘safe’ assumptions of effect sizes as a pre-requisite for continuing to a full evaluation [ 10 ]. NIHR guidance, which distinguished between pilot and feasibility studies, supported the assessment of a primary outcome in pilot studies, although it is unclear whether this means that a pilot should involve an initial test of changes in the primary outcome, or simply that the primary outcome should be measured in the same way as it would be in a full evaluation. By contrast, for ‘feasibility studies’, it indicated that an aim may include designing an outcome measure to be used in a full evaluation.

Others made the case for identifying evidence of potential effectiveness, including use of interim or surrogate endpoints [ 7 , 41 ], defined as ‘…variables on the causal pathway of what might eventually be the primary outcome in the future definitive RCT, or outcomes at early time points, in order to assess the potential for the intervention to affect likely outcomes in the future definitive RCT…’ [ 7 ] (p. 14).

Randomisation was implied as a design feature of exploratory studies when estimating the effect size of the intervention, as it maximised the likelihood that observed differences were due to the intervention [ 9 , 39 ]. Guidance was mostly written from a starting assumption that the full evaluation would take the form of an RCT, with less focus on exploratory studies for quasi-experimental or other designs. For studies that aim to assess potential effectiveness using a surrogate or interim outcome, a standard sample size calculation was recommended to ensure adequate power, although it was noted that this aim is rare in exploratory studies [ 7 ].

Theme 4: guidance surrounding the future evaluation design

Sources consistently advocated assessing the feasibility of study procedures or estimating parameters of the future evaluation. Recommendations are detailed below.

Assessing feasibility of the future evaluation design

Assessing feasibility of future evaluation procedures was commonly recommended [ 6 , 7 , 10 , 15 , 30 , 32 , 33 , 34 , 37 , 41 ] to avert problems that could undermine the conduct or acceptability of the future evaluation [ 6 , 15 , 30 ]. A wide range of procedures were suggested as requiring assessments of feasibility, including data collection [ 20 , 30 , 34 , 36 , 41 ], participant retention strategies [ 13 ], randomisation [ 7 , 13 , 20 , 30 , 34 , 36 , 38 , 41 ], recruitment methods [ 13 , 30 , 32 , 34 , 35 , 38 , 41 ], running the full trial protocol [ 20 , 30 , 36 ], the willingness of participants to be randomised [ 30 , 32 ] and issues of contamination [ 30 ]. There was disagreement concerning the appropriateness of assessing blinding in exploratory studies [ 7 , 30 , 34 ], with one source noting that double blinding is difficult when participants are assisted in changing their behaviour, although assessing single blinding may be possible [ 30 ].

Qualitative [ 15 , 30 , 34 ], quantitative [ 34 ] and mixed methods [ 7 ] were endorsed for assessing these processes. Reflecting the tendency for guidance on exploratory studies to be limited to studies in preparation for RCTs, discussion of the role of randomisation at the exploratory study stage featured heavily. Randomisation within an exploratory study was considered necessary for examining the feasibility of recruitment, consent to randomisation, retention, contamination or maintenance of blinding in the control and intervention groups, randomisation procedures and whether all the components of a protocol can work together, although randomisation was not deemed necessary to assess outcome burden and participant eligibility [ 21 , 30 , 34 ]. While there was consensus about what issues could be assessed through randomisation, sources disagreed on whether randomisation should always precede a future evaluation study, even if that future study is to be an RCT. Contention seemed to be linked to variation in nomenclature and associated aims. For example, some defined a pilot study as a study run in miniature to test how all its components work together, thereby dictating a randomised design [ 32 , 34 ]. For feasibility studies, by contrast, randomisation was only necessary if it reduced the uncertainties in estimating parameters for the future evaluation [ 32 , 34 ]. Similarly, other guidance highlighted that an exploratory study (irrespective of nomenclature) should address the main uncertainties, and thus may not depend on randomisation [ 8 , 15 ].

Estimating parameters of the future evaluation design

A number of sources recommended that exploratory studies should inform the parameters of the future evaluation design. Areas for investigation included: estimating the sample size required for the future evaluation (e.g. by measuring outcomes [ 32 , 35 ], conducting power calculations [ 13 ], deriving effect size estimates [ 6 , 7 , 39 ] and estimating target differences [ 35 , 43 ]); deciding what outcomes to measure and how [ 9 , 20 , 30 , 36 ]; assessing the quality of measures (e.g. their reliability, validity, feasibility and sensitivity [ 7 , 20 , 30 ]); identification of a control group [ 9 , 13 ]; recruitment, consent and retention rates [ 10 , 13 , 20 , 30 , 32 , 34 , 36 ]; and information on the cost of the future evaluation design [ 9 , 30 , 36 ].

While qualitative methods were deemed useful for selecting outcomes and their suitable measures [ 15 ], most guidance concentrated on quantitative methods for estimating future evaluation sample sizes. This was contentious because of the potential to over- or under-estimate the sample size required in a future evaluation, owing to the lack of precision of estimates from a small pilot [ 20 , 30 , 41 ]. Estimating sample sizes from effect size estimates in an exploratory study was nevertheless argued by some to be useful if there was scant literature and the exploratory study used the same design and outcome as the future evaluation [ 30 , 39 ]. Cluster RCTs, which are common in public health interventions, were specifically earmarked as unsuitable for estimating parameters for sample size calculations (e.g. intra-cluster correlation coefficients) as well as recruitment and follow-up rates without additional information from other sources, because a large number of clusters and individual participants would be required [ 41 ]. Others referred to ‘rules of thumb’ when determining sample sizes in an exploratory study, with numbers varying between 10 and 75 participants per trial arm in individually randomised studies [ 7 , 30 , 36 ]. Several also recommended considering a desired meaningful difference in the health outcomes of a future evaluation and the sample size needed to detect it, rather than conducting sample size calculations using estimates of likely effect size from pilot data [ 30 , 35 , 38 , 43 ].
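The concern about precision can be made concrete with a small numerical sketch. The code below is a hypothetical illustration, not an analysis drawn from any of the reviewed guidance; all numbers (30 participants per arm, an observed standardised difference of 0.5, a target difference of 0.3) are assumed for demonstration. It uses the standard normal-approximation formulas for the standard error of Cohen’s d and for a two-arm sample size calculation.

```python
import math

def se_cohens_d(d: float, n1: int, n2: int) -> float:
    """Approximate standard error of an observed Cohen's d
    (common large-sample approximation)."""
    return math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))

def ci95(d: float, n1: int, n2: int) -> tuple[float, float]:
    """Approximate 95% confidence interval for an observed Cohen's d."""
    half = 1.96 * se_cohens_d(d, n1, n2)
    return d - half, d + half

def n_per_arm(d: float) -> int:
    """Participants per arm to detect standardised difference d with
    two-sided alpha = 0.05 and 80% power:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2."""
    z_alpha, z_beta = 1.96, 0.8416
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A hypothetical pilot with 30 participants per arm observing d = 0.5:
lo, hi = ci95(0.5, 30, 30)
print(f"95% CI for d: ({lo:.2f}, {hi:.2f})")  # spans roughly no effect to a large effect

# Powering instead on a pre-specified meaningful difference (say d = 0.3):
print(f"n per arm to detect d = 0.3: {n_per_arm(0.3)}")
```

With these assumed numbers, the 95% confidence interval for the pilot effect estimate runs from roughly zero to a large effect, which is exactly the imprecision the guidance warns against; a full trial powered on the point estimate of 0.5 would need far fewer participants than one powered on a pre-specified meaningful difference of 0.3, and could therefore be badly underpowered if the true effect is smaller.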

A randomised design was deemed unnecessary for estimating costs or selecting outcomes, although was valued for estimating recruitment and retention rates for intervention and control groups [ 21 , 34 ]. Where guidance indicated the estimation of an effect size appropriate to inform the sample size for a future evaluation, a randomised design was deemed necessary [ 9 , 39 ].

Theme 5: flexible vs. fixed design

Sources stated that exploratory studies could employ a fixed or flexible design. With the latter, the design can change during the course of the study, which is useful for making changes to the intervention as well as to the future evaluation design [ 6 , 13 , 15 , 31 ]. Here, qualitative data can be analysed as they are collected, shaping the exploratory study process (for instance, the sampling of subsequent data collection points [ 15 ]) and clarifying implications for intervention effectiveness [ 31 ].

In contrast, fixed exploratory studies were encouraged when primarily investigating the future evaluation parameters and processes [ 13 ]. It may be that the nomenclature used in some guidance (e.g. pilot studies described as miniature versions of the evaluation) suggests a distinction between more flexible and more stringent designs. Some guidance did not state whether changes should be made during the course of an exploratory study or afterwards, in order to arrive at the best possible design for the future evaluation [ 6 , 7 , 21 ].

Theme 6: progression criteria to a future evaluation study

Little guidance was provided on what should be considered when formulating progression criteria for continuing to a future evaluation study. Some focussed on the relevant uncertainties of feasibility [ 32 , 39 ], while others highlighted specific items concerning cost-effectiveness [ 10 ], refining causal hypotheses to be tested in a future evaluation [ 29 ] and meeting recruitment targets [ 20 , 34 ]. As discussed in themes 3 and 4, statistically testing for effectiveness and using effect sizes for power calculations were cautioned against by some, and so criteria based on effect sizes were not specified [ 38 ].

Greater discussion was devoted to how to weight evidence from an exploratory study that addressed multiple aims and used different methods. Some explicitly stated that progression criteria should not be judged as strict thresholds but as guidelines, using, for example, a traffic light system with varying levels of acceptability [ 7 , 41 ]. Others highlighted a realist approach, moving away from binary indicators to focus on ‘what is feasible and acceptable for whom and under what circumstances’ [ 29 ]. In light of the difficulties surrounding interpretation of effect estimates, several sources recommended that qualitative findings from exploratory studies should be more influential than quantitative findings [ 15 , 38 ].

Interestingly, there was ambiguity regarding progression when exploratory findings indicated substantial changes to the intervention or evaluation design. Sources considering this issue suggested that if ‘extensive changes’ or ‘major modifications’ were made to either (they did not specify what qualified as such), researchers should return to the exploratory [ 21 , 30 ] or intervention development phases [ 15 ].

‘Alternatively, at the feasibility phase, researchers may identify fundamental problems with the intervention or trial conduct and return to the development phase rather than proceed to a full trial.’ (p. 1) [ 15 ].

As described previously, however, the threshold at which changes are determined to be ‘major’ remained ambiguous. While updated MRC guidance [ 10 ] moved to a more iterative model, accepting that movement back between feasibility/piloting and intervention development may sometimes be needed, there was no guidance on the conditions under which movement between these two stages should take place.

Theme 7: stakeholder involvement

Several sources recommended that a range of stakeholders (e.g. intervention providers, intervention recipients, public representatives and practitioners who might use the evidence produced by the full trial) be involved in the planning and running of exploratory studies, to ensure that such studies reflect the realities of the intervention setting [ 15 , 28 , 31 , 32 , 39 , 40 ]. In particular, community-based participatory approaches were recommended [ 15 , 39 ]. While many highlighted the value of stakeholders on Trial Steering Committees and other similar study groups [ 15 , 28 , 40 ], some warned about equipoise between researchers and stakeholders [ 15 , 40 ] and also cautioned against researchers conflating stakeholder involvement with qualitative research [ 15 ].

‘Although patient and public representatives on research teams can provide helpful feedback on the intervention, this does not constitute qualitative research and may not result in sufficiently robust data to inform the appropriate development of the intervention.’ (p. 8) [ 15 ].

Theme 8: reporting of exploratory studies

Detailed recommendations for reporting exploratory studies were recently provided in new Consolidated Standards of Reporting Trials (CONSORT) guidance by Eldridge et al. [ 7 ]. In addition, recurrent points were raised by other sources of guidance. Most notably, it was recommended that exploratory studies be published in peer-reviewed journals, as this can provide useful information to other researchers on what has been done, what did not work and what might be most appropriate [ 15 , 30 ]. An exploratory study may also result in multiple publications, which should reference other work carried out in the same exploratory study [ 7 , 15 ]. Several sources of guidance also highlighted that exploratory studies should be appropriately labelled in the title/abstract to enable easy identification; however, the nomenclature suggested varied across guidance [ 7 , 8 , 15 ].

Discussion

While exploratory studies, carried out to inform decisions about whether and how to proceed with an effectiveness study [ 7 , 8 ], are increasingly recognised as important in the efficient evaluation of complex public health interventions, our findings suggest that this area remains in need of consistent standards to inform practice. At present, there are multiple definitions of exploratory studies, a lack of consensus on a number of key issues, and a paucity of detailed guidance on how to approach the main uncertainties such studies aim to address prior to proceeding to a full evaluation.

Existing guidance commonly focuses almost exclusively on testing methodological parameters [ 33 ], such as recruitment and retention, although in practice it is unusual for such studies not to also address the feasibility of the intervention itself. Where intervention feasibility is discussed, there is limited guidance on when an intervention is ‘ready’ for an exploratory study and a lack of demarcation between intervention development and pre-evaluation work to understand feasibility. Some guidance recognised that an intervention continues to develop throughout an exploratory study, with distinctions made between ‘optimisation/refinement’ (i.e. minor refinements to the intervention) and ‘major changes’. However, the point at which changes become so substantial that researchers should move back toward intervention development, rather than forward to a full evaluation, remains ambiguous. Consistent with past reviews, which adopted a narrower focus on studies with randomised designs [ 21 ] or in preparation for a randomised trial [ 8 , 36 ] and limited searches of guidance in medical journals [ 19 , 36 ], terms used to describe exploratory studies were inconsistent: a distinction was sometimes made between pilot and feasibility studies, while others used these terms interchangeably.

The review identifies a number of key areas of disagreement or limited guidance with regard to the critical aims of exploratory studies, the uncertainties which might undermine a future evaluation, and how these aims should be achieved. There was much disagreement, for example, on whether exploratory studies should include a preliminary assessment of intervention effects to inform decisions on progression to a full evaluation, and on the appropriateness of using estimates of effect from underpowered data (from non-representative samples and a study based on a not fully optimised version of the intervention) to power a future evaluation study. Most guidance focused purely on studies in preparation for RCTs; nevertheless, guidance varied on whether randomisation was a necessary feature of the exploratory study, even where the future evaluation study was to be an RCT. Guidance was often difficult to assess regarding its applicability to public health research, with many sources drawing on literature and practice primarily from clinical research, and giving limited consideration to the transferability of these problems and proposed solutions to complex social interventions, such as those in public health. Progression criteria were highlighted as important by some as a means of preventing biased post hoc cases for continuation. However, there was a lack of guidance on how to devise progression criteria and on processes for assessing whether they had been sufficiently met. Where they had not been met, there was a lack of guidance on how to decide whether the exploratory study had generated sufficient insight about uncertainties that the expense of a further feasibility study would not be justified prior to large-scale evaluation.

Although our review included a broad focus on guidance of exploratory studies from published and grey literature and moved beyond a focus on studies conducted in preparation for an RCT specifically, a number of limitations should be noted. Guidance from other areas of social intervention research where challenges may be similar to those in public health (e.g. education, social work and business) may not have been captured by our search strategy. We found few worked examples of exploratory studies in public health that provided substantial information from learned experience and practice. Hence, the review drew largely on recommendations from funding organisations, or relatively abstract guidance from teams of researchers, with fewer clear examples of how these recommendations are grounded in experience from the conduct of such studies. As such, it should be acknowledged that these documents represent one element within a complex system of research production and may not necessarily fully reflect what is taking place in the conduct of exploratory studies. Finally, treating sources of guidance as independent from each other does not reflect how some recommendations developed over time (see for example [ 7 , 8 , 20 , 36 , 41 ]).

Conclusions

There is inconsistent guidance, and for some key issues a lack of guidance, for exploratory studies of complex public health interventions. As this lack of guidance for researchers in public health continues, the implications and consequences could be far-reaching. It is unclear how researchers use existing guidance to shape decision-making in the conduct of exploratory studies and, in doing so, how they adjudicate between conflicting perspectives. This systematic review has aimed largely to identify areas of agreement and disagreement as a starting point in bringing order to this somewhat chaotic field of work. Following this review, our next step is to conduct an audit of published public health exploratory studies in peer-reviewed journals, to assess current practice and how it reflects the reviewed guidance. As part of a wider study, funded by the MRC/NIHR Methodology Research Programme to develop GUidance for Exploratory STudies of complex public health interventions (GUEST; Moore L, et al. Exploratory studies to inform full scale evaluations of complex public health interventions: the need for guidance, submitted), the review has informed a Delphi survey of researchers, funders and publishers of public health research. In turn, this will contribute to a consensus meeting which aims to reach greater unanimity on the aims of exploratory studies and how these can most efficiently address uncertainties which may undermine a full-scale evaluation.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

GUEST: GUidance for Exploratory STudies of complex public health interventions

MRC: Medical Research Council

NETSCC: National Institute for Health Research Evaluation, Trials and Studies Coordinating Centre

NIHR: National Institute for Health Research

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

RCT: Randomised controlled trial

References

Kessler R, Glasgow RE. A proposal to speed translation of healthcare research into practice: dramatic change is needed. Am J Prev Med. 2011;40:637–44.


Sanson-Fisher RW, Bonevski B, Green LW, D’Este C. Limitations of the randomized controlled trial in evaluating population-based health interventions. Am J Prev Med. 2007;33:155–61.

Speller V, Learmonth A, Harrison D. The search for evidence of effective health promotion. BMJ. 1997;315(7104):361.


Arrowsmith J, Miller P. Trial watch: phase II failures: 2008–2010. Nat Rev Drug Discov. 2011;10(5):328–9.

National Institute for Health Research. Weight loss maintenance in adults (WILMA). https://www.journalslibrary.nihr.ac.uk/programmes/hta/084404/#/ . Accessed 13 Dec 2017.

Wight D, Wimbush E, Jepson R, Doi L. Six steps in quality intervention development (6SQuID). J Epidemiol Community Health. 2015;70:520–5.

Eldridge SM, Chan CL, Campbell MJ, Bond CM, Hopewell S, Thabane L, et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. Pilot Feasibility Stud. 2016;2:64.


Eldridge SM, Lancaster GA, Campbell MJ, Thabane L, Hopewell S, Coleman CL, et al. Defining feasibility and pilot studies in preparation for randomised controlled trials: development of a conceptual framework. PLoS One. 2016;11:e0150205.

Campbell M, Fitzpatrick R, Haines A, Kinmonth AL. Framework for design and evaluation of complex interventions to improve health. BMJ. 2000;321(7262):694.

Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: new guidance. London: Medical Research Council; 2008.

National Institute for Health Research. The Filter FE Challenge: pilot trial and process evaluation of a multi-level smoking prevention intervention in further education settings. Available from: https://www.journalslibrary.nihr.ac.uk/programmes/phr/134202/#/ . Accessed 25 Jan 2018.

National Institute for Health Research. Adapting and piloting the ASSIST model of informal peer-led intervention delivery to the Talk to Frank drug prevention programme in UK secondary schools (ASSIST+Frank): an exploratory trial. https://www.journalslibrary.nihr.ac.uk/programmes/phr/12306003/#/ . Accessed 25 Jan 2018.

Medical Research Council. A framework for the development and evaluation of RCTs for complex interventions to improve health. London: Medical Research Council; 2000.


Bonell CP, Hargreaves JR, Cousens SN, Ross DA, Hayes R, Petticrew M, et al. Alternatives to randomisation in the evaluation of public-health interventions: design challenges and solutions. J Epidemiol Community Health. 2009; https://doi.org/10.1136/jech.2008.082602 .

O’Cathain A, Hoddinott P, Lewin S, Thomas KJ, Young B, Adamson J, et al. Maximising the impact of qualitative research in feasibility studies for randomised controlled trials: guidance for researchers. Pilot Feasibility Stud. 2015;1(1):32.

National Institute for Health Research. An exploratory trial to evaluate the effects of a physical activity intervention as a smoking cessation induction and cessation aid among the ‘hard to reach’. https://www.journalslibrary.nihr.ac.uk/programmes/hta/077802/#/ . Accessed 13 Dec 2017.

National Institute for Health Research. Initiating change locally in bullying and aggression through the school environment (INCLUSIVE): pilot randomised controlled trial. https://www.journalslibrary.nihr.ac.uk/hta/hta19530/#/abstract . Accessed 13 Dec 2017.

National Institute for Health Research. Increasing boys' and girls' intention to avoid teenage pregnancy: a cluster randomised control feasibility trial of an interactive video drama based intervention in post-primary schools in Northern Ireland. https://www.journalslibrary.nihr.ac.uk/phr/phr05010/#/abstract . Accessed 13 Dec 2017.

Arain M, Campbell MJ, Cooper CL, Lancaster GA. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med Res Methodol. 2010;10:67.

Lancaster GA. Pilot and feasibility studies come of age! Pilot Feasibility Stud. 2015;1:1.

Shanyinde M, Pickering RM, Weatherall M. Questions asked and answered in pilot and feasibility randomized controlled trials. BMC Med Res Methodol. 2011;11:117.

Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.


Dixon-Woods M, Agarwal S, Jones D. Integrative approaches to qualitative and quantitative evidence. London: Health Development Agency; 2004.

Ritchie J, Spencer L, O’Connor W. Carrying out qualitative analysis. In: Ritchie J, Lewis J, editors. Qualitative research practice: a guide for social science students and researchers. London: Sage; 2003.

Möhler R, Bartoszek G, Köpke S, Meyer G. Proposed criteria for reporting the development and evaluation of complex interventions in healthcare (CReDECI): guideline development. IJNS. 2012;49(1):40–6.

Möhler R, Bartoszek G, Meyer G. Quality of reporting of complex healthcare interventions and applicability of the CReDECI list—a survey of publications indexed in PubMed. BMC Med Res Methodol. 2013;13:1.

Möhler R, Köpke S, Meyer G. Criteria for reporting the development and evaluation of complex interventions in healthcare: revised guideline (CReDECI 2). Trials. 2015;16(204):1.

Evans BA, Bedson E, Bell P, Hutchings H, Lowes L, Rea D, et al. Involving service users in trials: developing a standard operating procedure. Trials. 2013;14(1):1.

Fletcher A, Jamal F, Moore G, Evans RE, Murphy S, Bonell C. Realist complex intervention science: applying realist principles across all phases of the Medical Research Council framework for developing and evaluating complex interventions. Evaluation. 2016;22:286–303.

Feeley N, Cossette S, Côté J, Héon M, Stremler R, Martorella G, et al. The importance of piloting an RCT intervention. CJNR. 2009;41:84–99.

Levati S, Campbell P, Frost R, Dougall N, Wells M, Donaldson C, et al. Optimisation of complex health interventions prior to a randomised controlled trial: a scoping review of strategies used. Pilot Feasibility Stud. 2016;2:1.

National Institute for Health Research. Feasibility and pilot studies. Available from: http://www.nihr.ac.uk/CCF/RfPB/FAQs/Feasibility_and_pilot_studies.pdf . Accessed 14 Oct 2016.

National Institute for Health Research. Glossary | Pilot studies 2015 http://www.nets.nihr.ac.uk/glossary?result_1655_result_page=P . Accessed 14 Oct 2016.

Taylor RS, Ukoumunne OC, Warren FC. How to use feasibility and pilot trials to test alternative methodologies and methodological procedures prior to full-scale trials. In: Richards DA, Hallberg IR, editors. Complex interventions in health: an overview of research methods. New York: Routledge; 2015.

Cook JA, Hislop J, Adewuyi TE, Harrild K, Altman DG, Ramsay CR et al. Assessing methods to specify the target difference for a randomised controlled trial: DELTA (Difference ELicitation in TriAls) review. Health Technology Assessment (Winchester, England). 2014;18:v–vi.

Lancaster GA, Dodd S, Williamson PR. Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004;10:307–12.

National Institute for Health Research. Progression rules for internal pilot studies for HTA trials. Available from: http://www.nets.nihr.ac.uk/__data/assets/pdf_file/0018/115623/Progression_rules_for_internal_pilot_studies.pdf . Accessed 14 Oct 2016.

Westlund E, Stuart EA. The nonuse, misuse, and proper use of pilot studies in experimental evaluation research. Am J Eval. 2016;2:246–61.

Bowen DJ, Kreuter M, Spring B, Cofta-Woerpel L, Linnan L, Weiner D, et al. How we design feasibility studies. Am J Prev Med. 2009;36:452–7.

Strong LL, Israel BA, Schulz AJ, Reyes A, Rowe Z, Weir SS et al. Piloting interventions within a community-based participatory research framework: lessons learned from the healthy environments partnership. Prog Community Health Partnersh. 2009;3:327–34.

Eldridge SM, Costelloe CE, Kahan BC, Lancaster GA, Kerry SM. How big should the pilot study for my cluster randomised trial be? Stat Methods Med Res. 2016;25:1039–56.

Moffatt S, White M, Mackintosh J, Howel D. Using quantitative and qualitative data in health services research—what happens when mixed method findings conflict? [ISRCTN61522618]. BMC Health Serv Res. 2006;6:1.

Hislop J, Adewuyi TE, Vale LD, Harrild K, Fraser C, Gurung T et al. Methods for specifying the target difference in a randomised controlled trial: the Difference ELicitation in TriAls (DELTA) systematic review. PLoS Med. 2014;11:e1001645.

Download references

Acknowledgements

We thank the Specialist Unit for Review Evidence (SURE) at Cardiff University, including Mala Mann, Helen Morgan, Alison Weightman and Lydia Searchfield, for their assistance with developing and conducting the literature search.

This study is supported by funding from the Methodology Research Panel (MR/N015843/1). LM, SS and DW are supported by the UK Medical Research Council (MC_UU_12017/14) and the Chief Scientist Office (SPHSU14). PC is supported by the UK Medical Research Council (MC_UU_12017/15) and the Chief Scientist Office (SPHSU15). The work was also undertaken with the support of The Centre for the Development and Evaluation of Complex Interventions for Public Health Improvement (DECIPHer), a UKCRC Public Health Research Centre of Excellence. Joint funding (MR/KO232331/1) from the British Heart Foundation, Cancer Research UK, Economic and Social Research Council, Medical Research Council, the Welsh Government and the Wellcome Trust, under the auspices of the UK Clinical Research Collaboration, is gratefully acknowledged.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available due to copyright restrictions.

Author information

Authors and Affiliations

Centre for the Development and Evaluation of Complex Interventions for Public Health Improvement (DECIPHer), Cardiff University, Cardiff, Wales, UK

Britt Hallingberg, Ruth Turley, Jeremy Segrott, Simon Murphy, Michael Robling & Graham Moore

Centre for Trials Research, Cardiff University, Cardiff, Wales, UK

Jeremy Segrott & Michael Robling

MRC/CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK

Daniel Wight, Peter Craig, Laurence Moore & Sharon Anne Simpson

Specialist Unit for Review Evidence, Cardiff University, Cardiff, Wales, UK

Ruth Turley


Contributions

LM, GM, PC, MR, JS, RT and SS were involved in the development of the study. RT, JS, DW and BH were responsible for the data collection, overseen by LM and GM. Data analysis was undertaken by BH guided by RT, JS, DW and GM. The manuscript was prepared by BH, RT, DW, JS and GM. All authors contributed to the final version of the manuscript. LM is the principal investigator with overall responsibility for the project. GM is Cardiff lead for the project. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Britt Hallingberg.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Table S1. PRISMA checklist. (DOC 62 kb)

Additional file 2:

Appendix 1. Search strategies and websites. Appendix 2. Coding framework. (DOCX 28 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Hallingberg, B., Turley, R., Segrott, J. et al. Exploratory studies to decide whether and how to proceed with full-scale evaluations of public health interventions: a systematic review of guidance. Pilot Feasibility Stud 4, 104 (2018). https://doi.org/10.1186/s40814-018-0290-8


Received : 06 February 2018

Accepted : 07 May 2018

Published : 28 May 2018



  • Public health
  • Complex interventions
  • Exploratory studies
  • Research methods
  • Study design
  • Pilot study
  • Feasibility study

Pilot and Feasibility Studies

ISSN: 2055-5784


International Journal of Research and Innovation in Social Science (IJRISS)

ISSN No. 2454-6186

Exploratory Research Design in Management Science: A Review of Literature on Conduct and Application

SAKA Rahmon Olawale, OSADEME Gloria Chinagozi, ONONOKPONO Nyong Joe

Department of Business Administration, Lagos State University, Ojo, Nigeria

DOI: https://doi.org/10.47772/IJRISS.2023.7515

Received: 02 March 2022; Accepted: 06 April 2023; Published: 21 May 2023

The study examined the conduct and application of exploratory research design in management science. The study adopted an exploratory research design: articles that utilised exploratory research design in the management sciences were reviewed using content analysis. Further, an evaluation of the types and methodologies, conduct, and application of exploratory research design in the field of management sciences was carried out. The study's findings revealed that some researchers used exploratory research design appropriately in their studies, whereas others did not, resulting in inappropriate research design usage. The study concluded that knowledge of exploratory research design could help researchers better define and understand research problems and advance research in the management sciences. The study recommended that researchers should choose research designs they are most comfortable with and feel most competent to handle, but that the choice should ideally be based on the nature of the research phenomenon being studied and the information available.

Keywords: Exploratory Research Design, Management Sciences, Primary Methods, Secondary Methods, Methodologies

INTRODUCTION

Research is a continuous process that requires improvement over time, and the purpose of all research is to provide answers to questions through the application of scientific procedures (Lelissa, 2018). Once the research problem has been formulated in clear-cut terms, the researcher must prepare a research design, that is, state the conceptual structure within which the research will be conducted. Preparing such a design makes the research as efficient as possible, yielding maximal information (Tegan, 2021).

According to Goundar (2019), one of the first considerations when designing a research project is what the researcher hopes to achieve, in broad terms, by conducting the research. What do they hope to say about their subject? Do they want a deep understanding of whatever phenomenon they are studying, or do they want a broad, but perhaps less deep, understanding? Do they want policymakers or others to use their research findings to shape social life, or is this project more about them exploring their interests? Their responses to each of these questions will influence the design of their research.

Every research problem is uniquely different, but almost all research problems and research objectives can be categorised into one of two types of research design, viz., exploratory and conclusive (Asika, 2004). When conducting research, the researcher's choice of design is influenced by the available information. Researchers may decide to work on a problem that has not been thoroughly studied in order to establish priorities, develop operational definitions, and improve the final research design. This type of research is referred to as exploratory research (Umesh, 2021). According to Saunders, Lewis and Thornhill (2016), an exploratory research design is a valuable means of finding out 'what is happening; to seek new insights; to ask questions and to assess phenomena in a new light'.

It is particularly useful if researchers wish to clarify the understanding of a problem, such as if they are unsure of the precise nature of the problem.

Exploratory research can be likened to the activities of the traveller or explorer (Adams & Schvaneveldt, 1991, as cited in Saunders, Lewis & Thornhill, 2016). Its great advantage is that it is flexible and adaptable to change. When conducting exploratory research, the researcher must be willing to change direction as a result of new data that appear and new insights that occur to them (Lelissa, 2018). Adams and Schvaneveldt (1991) reinforce this point by arguing that the flexibility inherent in exploratory research does not mean an absence of direction to the enquiry. Rather, the focus is initially broad and becomes progressively narrower as the research progresses. Exploratory research design is evolutionary and historical in nature, and it rarely involves large samples or structured questionnaires (Asika, 2004).

Management science research is the systematic development and acquisition of knowledge tailored to specific management needs in order to solve managerial problems in a timely and effective manner. It generates knowledge by combining data from subjects such as strategy, organisational behaviour, entrepreneurship, innovation and technology, human resource management, international business, and marketing, to name a few (Bernd, 2017). The selection of an appropriate research method is one of the most important fundamentals for success in solving managerial problems in this field (Lelissa, 2018). The use of exploratory research design in the management sciences may play an important role in the development of the field because it assists a researcher in developing an understanding of the research problem. In this regard, exploratory research design adds value and contributes to the advancement of business research topics (Bashin, 2020).

This research aims to shed more light on exploratory research design and its conduct in management sciences. The researcher wants to show the application of exploratory research in the management sciences as well. To be reliable, exploratory research should be carried out in a transparent and honest manner, and should adhere to a set of guidelines. In essence, if exploratory research design is carried out in this manner, the study will achieve high validity while also providing new and innovative ways to analyse reality.

Statement of the Problem

The research design establishes the decision-making processes, conceptual structure of investigation, and analytical methods used to address the study's central research problem (Tegan, 2021). Taking the time to select an appropriate research design will assist researchers in organising their thoughts, defining the scope of their study, increasing the reliability of their findings, and avoiding misleading or incomplete conclusions. As a result, if any aspect of the research design is flawed or inappropriate, the quality and dependability of the results, as well as the overall value of the study, will diminish (Umesh, 2021).

In management sciences research today, the researcher's choice of research design, survey questions, and research method is sometimes largely influenced by the preferences of the researcher rather than what works best for the research context, resulting in inappropriate usage of research design (Lelissa, 2018). For instance, some researchers provide final and conclusive answers to research questions in studies where the situation is unclear and has not attracted serious investigation in the past. Conversely, some researchers employ an exploratory research design in studies where specific relationships among the variables of a research problem are investigated and the information required is clearly defined. In many instances, inappropriate usage of design in a systematic investigation can introduce bias into the research process. Not all designs are suited to all kinds of research in management sciences. An exploratory research design is required in the preliminary stages of research, when the research problem is unclear and the researcher wants to scope out the nature and extent of a specific research problem (Elman, Gerring & Mahoney, 2020).

In other words, researchers must understand what research designs are available to best address their research problems and guide them throughout the research process. With new researchers in mind, this article focuses on exploratory research design – its conduct and applications in management sciences.

Research Objectives

The main objective of this study is to examine the exploratory research design in management sciences. The specific objectives are to:

  • Evaluate the conduct of exploratory research design in management sciences.
  • Examine the application of exploratory research design in management sciences.

METHODOLOGY

This study utilised an exploratory research design and relied on secondary data gathered from various publications, journals, textbooks, and internet sources that focused on exploratory research design. The articles extracted for the study were examined using content analysis.
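As an illustrative sketch only, the snippet below shows what a minimal keyword-based content analysis of article abstracts might look like in Python. The abstracts, the coding frame, and the code labels are all invented for the example and are not taken from the study itself.

```python
from collections import Counter

# Hypothetical abstracts standing in for the reviewed articles.
abstracts = [
    "An exploratory research design using in-depth interviews and focus groups.",
    "A conclusive survey testing hypothesised relationships among defined variables.",
    "Exploratory case study research on supply chain management practices.",
]

# A simple coding frame: keyword -> code label (all labels invented).
coding_frame = {
    "exploratory": "exploratory design",
    "conclusive": "conclusive design",
    "interview": "primary method",
    "focus group": "primary method",
    "survey": "primary method",
    "case study": "secondary method",
}

def code_abstract(text):
    """Return the set of codes whose keyword appears in the text."""
    lowered = text.lower()
    return {code for keyword, code in coding_frame.items() if keyword in lowered}

# Tally how often each code appears across the corpus, as a basic
# content analysis would.
tally = Counter()
for abstract in abstracts:
    tally.update(code_abstract(abstract))

print(tally.most_common())
```

In practice a coding frame would be developed iteratively and applied by human coders; the keyword match here only illustrates the tallying step.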

CONCEPTUAL REVIEW

Concept of Exploratory Research Design

Exploratory research design can be defined as research conducted to investigate an undefined problem. It is carried out to gain a better understanding of the current problem (Asika, 2004; Olajide & Lawal, 2020; Akhtar, 2016; Saunders, Lewis & Thornhill, 2016; Richard, 2018; Elman, Gerring & Mahoney, 2020). Exploratory research design, as the name implies, seeks to elucidate research questions rather than provide final and conclusive solutions to existing problems (Brown, 2006; Nargundkar, 2008). A researcher begins with a broad concept and uses this research as a vehicle to identify issues that can be the focus of future research (Lelissa, 2018). The goal of exploratory research is not to provide final and conclusive answers to research questions, but rather to explore the research topic in varying depths. It has been stated that exploratory research is the preliminary research that serves as the foundation for more conclusive research (Asika, 2004; Akhtar, 2016; Saunders, Lewis & Thornhill, 2016).

Exploratory research "tends to address new problems with little or no prior research". The exploratory research design focuses on collecting secondary or primary data in an unstructured format and interpreting it using informal procedures (Bernd, 2017). Owing to its aims and structure, exploratory research incorporates the least scientific method and rigour of the research designs (Umesh, 2021). When only research questions serve as the foundation of the investigation, the exploratory research design is qualitative. When hypotheses are used to test a specific relationship between identified variables in a research problem, it is quantitative (Asika, 2004). Tegan (2021) also stated that exploratory research is frequently qualitative; a large-sample exploratory study, on the other hand, can be quantitative as well. Due to its flexibility and open-ended nature, it is also known as interpretive research or a grounded theory approach. Researchers are advised not to confuse exploratory research with explanatory research, which is also preliminary in nature but instead explores why a well-documented problem occurs (Tegan, 2021).

Exploratory research is not usually generalisable to the wider population. The outcomes of this research provide answers to questions like what, how and why (Bernd, 2017).

According to Akhtar (2016), exploratory research aids in determining the best research design, data collection method, and subject selection. Exploratory research draws definitive conclusions only with extreme caution. Exploratory research, by definition, relies on techniques such as: (i) secondary research, such as reviewing available literature; (ii) informal qualitative approaches, such as conversations with customers, employees, management, or competitors; and (iii) formal qualitative research methods such as in-depth interviews, focus groups, projective methods, case studies, or pilot studies.

Brown (2006) distinguishes between exploratory and conclusive research by stating that exploratory studies yield a variety of causes and alternative solutions for a specific problem, whereas conclusive studies yield the final information that is the only solution to an existing research problem. In other words, an exploratory research design simply investigates the research questions, leaving room for future studies, whereas a conclusive research design seeks to provide final research findings. Furthermore, 'an exploratory study may not have as rigorous a methodology as that used in conclusive studies, and sample sizes may be smaller' (Nargundkar, 2008).

Benefits and Limitations of Exploratory Research Design

Voxco (2021) and Tegan (2021) identified the following benefits and limitations of exploratory research design in management sciences.

The benefits are as follows:

  • Exploratory research can contribute valuable and insightful information to a study and is critical to its success.
  • Exploratory research enables the researcher to be as creative as possible in order to gain the greatest understanding of a subject.
  • It clarifies what a research team's objectives should be over the course of a study, which helps anyone conducting research from outside sources.
  • Exploratory research can be used in a wide variety of fields, so it is important to understand how a particular field will influence the research being conducted.
  • It is useful for comparing and contrasting different techniques, such as secondary research, discussions, or qualitative research through focus groups, surveys, or case studies. Within exploratory research, the Internet allows for more interactive research methods.
  • The researcher has a lot of flexibility and can adapt to changes as the research progresses.
  • It is usually cost-effective.
  • It helps lay the foundation of a study, which can lead to further research.
  • It enables the researcher to judge at an early stage whether the topic is worth the time and resources and is worth pursuing.
  • It can help other researchers identify possible causes of the problem, which can then be studied in detail to determine which is the most likely cause.

The limitations include the following:

  • Even though it can point towards an answer, exploratory research is usually inconclusive.
  • Its main disadvantage is that it provides qualitative data, and the interpretation of such information can be judgemental and biased.
  • Exploratory research usually involves a small sample, so the results cannot be reliably generalised to a wider population.
  • When data are collected through secondary research, there is a chance that the data are old and have not been updated.

  Figure 1. A Conceptual Model of Exploratory Research


Adapted from Voxco (2021)

Types and Methodologies of Exploratory Research

While it may appear difficult to research something about which there is little information, several methods can assist a researcher in determining the best research design, data collection methods, and subject selection. There are two broad approaches to conducting research: primary and secondary. A researcher can employ a variety of methods under these two categories, and the information gleaned from them can be qualitative or quantitative (Formplus, 2019). The following are some of the most commonly used methods:

Primary Research Methods

According to Goundar (2019), primary research entails gathering information directly from the subject. It could be done by a group of people or by an individual. This type of research can be carried out directly by the researcher or by a third party on their behalf. Primary research is conducted specifically to investigate a specific problem that necessitates an in-depth investigation (Formplus, 2019).

  • Surveys/Polls : Surveys/polls are used to collect information from a specific number of participants. It is one of the most essential quantitative methods. To investigate opinions, trends, and so on, various types of surveys or polls can be utilised (Richard, 2018; Umesh, 2021). Surveys can now be sent online and are very easy to access thanks to technological advancements. For example, using a survey app on a tablet, laptop, or even a mobile phone. This information is also available to the researcher in real time (Umesh, 2021).
  • Interviews: Interviews entail gathering detailed information directly from individuals. An interview with a subject-matter expert can provide the researcher with meaningful insights that a generalised public source cannot. To obtain meaningful information about a topic, interviews are conducted in person or over the phone using open-ended questions (Tegan, 2021; Umesh, 2021; Formplus, 2019).
  • Focus groups: Another common method in exploratory research is focus groups. In this method, a group of people is selected and allowed to express their thoughts on the topic being investigated. However, it is critical to ensure that the individuals chosen for a focus group share a common background and have comparable experiences (Elman, Gerring, & Mahoney, 2020).
  • Observations:  Observation research can be qualitative or quantitative in nature. This type of research involves observing a person and deducing conclusions based on their reactions to various parameters. There is no direct communication with the subject in such a study (Richard, 2018; Umesh, 2021).
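As a small, hypothetical illustration of how an exploratory survey pass differs from a conclusive one, the sketch below simply tallies invented open-ended responses to surface recurring themes rather than testing a hypothesis; the response data are made up for the example.

```python
from collections import Counter

# Invented open-ended answers to "What would improve your job satisfaction?"
answers = [
    "flexible hours", "pay", "flexible hours",
    "management style", "pay", "flexible hours",
]

# An exploratory first pass just surfaces the most frequent themes;
# a later conclusive study could test these themes formally.
counts = Counter(answers)
total = len(answers)
for theme, n in counts.most_common():
    print(f"{theme}: {n}/{total} ({n / total:.0%})")
```

Real survey responses would first need qualitative coding into themes; the tallying shown here is only the final, mechanical step.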

Secondary Research Methods

Secondary research is the collection of information from previously published primary research. In this type of study, the researcher gathers information from various sources such as case studies, magazines, newspapers, books, and so on (Elman, Gerring & Mahoney, 2020).

  • Online research: This is one of the quickest ways to gather information on any topic in today’s world. A large amount of data is readily available on the internet, and the researcher can download it as needed (Elman, Gerring & Mahoney, 2020; Baxter & Jack, 2010). The genuineness and authenticity of the source websites from which the researcher is gathering information is an important consideration for such a study (Goundar, 2019).
  • Literature research : Literature research is one of the most cost-effective methods for testing a hypothesis. There is a wealth of information available in libraries, online sources, and even commercial databases.  Newspapers, magazines, library books, documents from government agencies, specific topic related articles, literature, Annual reports, published statistics from research organisations, and so on are examples of its sources (Elman, Gerring & Mahoney, 2020; Umesh, 2021).
  • Case study research: Case study research can help a researcher gather more information by carefully analysing existing cases that have encountered a similar problem. Such analyses are critical, particularly in today's business world. The researcher need only ensure that the existing case is carefully analysed against all of the variables present in his or her own case (Baxter & Jack, 2010).

Conduct of an Exploratory Research in Management Sciences

The following is an illustration of the steps for conducting an exploratory research design in management sciences:

Figure 2. Procedures for Conducting an Exploratory Research Design


Adapted from Tegan (2021).

  • Identify the problem: A researcher identifies the subject of research and the problem is addressed by carrying out multiple methods to answer the questions (Voxco, 2021).
  • Define the research questions: Even if the researchers know what they want to investigate, defining the research question in clear terms is an important part of the process. Defining the research question will assist the researchers in determining the direction of their research. It will assist them in keeping the research on track and ensuring that it is carried out efficiently. (Bashin, 2020).
  • Design the methodology: Next, conceptualise the data collection and data analysis methods and write them up in a research design. The researcher can use one or more of the methods described earlier to conduct the research (Tegan, 2021). Exploratory research necessitates speaking with people in order to obtain detailed information about the subject matter. Researchers commonly use surveys and interviews to gain insight into unexplored subjects, and can gather a wealth of useful information by soliciting customer feedback (Voxco, 2021).
  • Gather research data: Once the method for conducting the exploratory research has been determined, the researchers must collect the resulting data. Spend enough time exploring different sources, and make sure the study is not missing any aspect of the research (Tegan, 2021).
  • Analyse result: The next step is to analyse the data so that the researcher can determine whether the preliminary results are in line with the questions earlier defined (Elman, Gerring & Mahoney, 2020).
  • Prospects for future research: The researcher should determine whether to continue to study the topic. If this is the case, the researcher will almost certainly need to switch to a different type of research (Elman, Gerring & Mahoney, 2020). Since exploratory research is frequently qualitative in nature, the researcher may need to conduct quantitative research with a larger sample size to achieve results that are more generalisable (Umesh, 2021).
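The six steps above can be loosely pictured as a data structure that a study fills in as it progresses. The sketch below is an invented illustration under that assumption, not a standard API or the authors' procedure; every class, method, and data value is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ExploratoryStudy:
    """Toy model of the exploratory workflow: problem -> questions ->
    methodology -> data -> analysis -> prospects for further research."""
    problem: str
    research_questions: list = field(default_factory=list)
    methods: list = field(default_factory=list)
    data: list = field(default_factory=list)
    findings: list = field(default_factory=list)

    def define_questions(self, *questions):
        self.research_questions.extend(questions)

    def design_methodology(self, *methods):
        self.methods.extend(methods)

    def gather_data(self, source, records):
        self.data.append((source, records))

    def analyse(self):
        # Placeholder analysis: note which questions have any supporting data.
        self.findings = [(q, bool(self.data)) for q in self.research_questions]
        return self.findings

    def warrants_further_research(self):
        # Exploratory results are preliminary: a question with some supporting
        # data suggests a larger, often quantitative, follow-up study.
        return any(supported for _, supported in self.findings)

# Walk through the steps with invented study content.
study = ExploratoryStudy(problem="Why is staff turnover rising?")
study.define_questions("What factors do employees cite when leaving?")
study.design_methodology("semi-structured interviews", "literature review")
study.gather_data("interviews", ["pay", "workload", "career growth"])
print(study.analyse())
print(study.warrants_further_research())
```

The point of the sketch is only that each step consumes the outputs of the previous one, mirroring the progressively narrowing focus described above.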

Applications of Exploratory Research in Management Sciences

The following table illustrates the application of exploratory research design in management sciences.

Table 1. Illustration of the Applications of Exploratory Research Design in Management Sciences

Source: Researcher (2022)

A review was carried out of articles on the subject by various researchers who used an exploratory research design in management sciences. The findings revealed that some researchers used exploratory research design appropriately in their studies, whereas others' choice of research design was largely influenced by the researchers' preferences rather than what works best for the research context, resulting in inappropriate research design usage. For instance, some researchers employed an exploratory research design in studies where specific relationships among the variables of a research problem were investigated, where the information required was clearly defined, and where the subject had already been studied extensively by other researchers.

Further, some researchers collected data using multiple methods, drawing on both primary and secondary sources. Multiple data collection methods were used in a single study in order to maximise the strengths and lessen the weaknesses of each method in the research process. Using a variety of data collection methods in this way allows researchers to answer questions that cannot be answered using primary or secondary sources alone.

CONCLUSION

Analysis of management sciences research using an exploratory research design is an intriguing topic for scholars and practitioners because of its positive impact on organisational performance. This design is used to study and gain understanding of situations that have not previously attracted serious investigation and research. Its goal is to define a problem more succinctly and to develop courses of action that will lead to its solution. In order to improve the appropriate use of this research design in management science, this study examined the conduct and application of exploratory research design for research students and novice researchers with little prior knowledge of the field. It is designed to help researchers understand when and how to use exploratory research design at any stage of their research. It highlighted the various types and methodologies of exploratory research design, as well as some of the benefits and limitations of using it. Further, it demonstrated the conduct and application of exploratory research design in management sciences by referring to various articles in the field.

The study would like to emphasise that knowledge of exploratory research design can help a researcher better define and understand research problems and questions in management research. Ideally, this review of studies that used exploratory research design, as well as the ideas provided for the application and conduct of exploratory studies, will help to advance research in management sciences and can also assist management researchers in designing and conducting this type of study.

RECOMMENDATIONS

The following recommendations are made based on the study’s conclusion:

  • Researchers should choose research designs that they are most comfortable with and feel most competent to handle, but the choice should ideally be based on the nature of the research phenomenon being studied and information available.
  • Researchers should be made more aware of the appropriate use of exploratory research designs, for example through articles about exploratory research design in peer-reviewed journals.
  • When conducting exploratory research, management sciences researchers should use a multi-method approach to data collection, so that the researcher's desire for a better understanding of a phenomenon is satisfied.


v.377; 2022

Special Paper

Exploratory analyses in aetiologic research and considerations for assessment of credibility: mini-review of literature

Kim Luijken

1 Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, Netherlands

Olaf M Dekkers

Frits R Rosendaal, Rolf H H Groenwold

2 Department of Biomedical Data Sciences, Leiden University Medical Centre, Leiden, Netherlands

Associated Data

No additional data available.

Objective

To provide considerations for reporting and interpretation that can improve assessment of the credibility of exploratory analyses in aetiologic research.

Design

Mini-review of the literature and account of exploratory research principles.

Setting

This study focuses on a particular type of causal research, namely aetiologic studies, which investigate the causal effect of one or multiple risk factors on a particular health outcome or disease. The mini-review included aetiologic research articles published in the first issue of 2021 of four epidemiology journals: American Journal of Epidemiology , Epidemiology , European Journal of Epidemiology , and International Journal of Epidemiology , specifically focusing on observational studies of causal risk factors of diseases.

Main outcome measures

Number of exposure-outcome associations reported, grouped by type of analysis (main, sensitivity, and additional).

Results

The journal articles reported many exposure-outcome associations: a mean number of 33 (range 1-120) exposure-outcome associations for the primary analysis, 30 (0-336) for sensitivity analyses, and 163 (0-1467) for additional analyses. Six considerations were discussed that are important in assessing the credibility of exploratory analyses: research problem, protocol, statistical criteria, interpretation of findings, completeness of reporting, and effect of exploratory findings on future causal research.

Conclusions

Based on this mini-review, exploratory analyses in aetiologic research were not always reported properly. Six considerations for reporting of exploratory analyses in aetiologic research were provided to stimulate a discussion about their preferred handling and reporting. Researchers should take responsibility for the results of exploratory analyses by clearly reporting their exploratory nature and specifying which findings should be investigated in future research and how.

Introduction

Reports of aetiologic studies often present the results of multiple exploratory analyses, with the aim of identifying topics for future research. Although this form of reporting might seem reasonable, it is not without risk, because assessing the credibility of exploratory findings is generally more complicated than assessing the results of a confirmatory study.

The origin of exploratory data analysis can be traced back at least to Tukey in the 1960s and 1970s, 1 2 who encouraged statisticians to develop visualisation techniques for representing and capturing structures in datasets to establish new research questions. These new research questions should subsequently be answered with independent datasets (often termed confirmatory analysis). For example, when a new biomarker is thought to be part of a known causal pathway, performing a small preparatory exploratory study before conducting a full-blown large cohort study seems worthwhile, because the cohort study is expensive and requires a large investment of resources. Similarly, if a known exposure-outcome effect is thought to vary across subgroups of the population, exploring this idea first before embarking on confirmative analyses of the effect heterogeneity seems appropriate.

Even when researchers consider an analysis to be exploratory, a hypothesis is easily promoted to a fact. For example, findings in journal articles can be exaggerated to more certain statements in press releases and news articles. 3 In medical science in particular, where results are sometimes quickly implemented in clinical practice, researchers should take responsibility for the results they report. The Hippocratic oath (“First, do not harm”) applies as well to medical research as it does to clinical practice.

In this paper, we discuss issues that complicate the interpretation of exploratory analyses in causal studies. Causal research can refer to different types of research, such as randomised studies or intervention studies. We do not address these studies in our manuscript; we focus on aetiologic research, in which causes of disease are investigated. Specifically, the causal effect of risk factors on a health outcome or disease are studied, typically in an observational setting. We provide practical pointers for researchers on how to report exploratory analyses in aetiologic research and how to clarify what the exploratory results imply for future research and implementation in practice. We hope to encourage a discussion about the preferred handling and reporting of these analyses.

Exploratory analyses in aetiologic research

The term exploratory analysis typically refers to analyses for which the hypothesis was not specified before the data analysis. 4 Considering exploratory analyses in a broader sense, however, is probably more relevant in aetiologic research, because of the observational data and clustering of analyses within cohorts. We use the term exploratory analyses here to indicate analyses that are initial and preliminary steps towards solving a research problem. Exploratory analyses are often conducted in addition to planned primary analyses of a study. We do not consider sensitivity analyses, where the main hypothesis is evaluated under different assumptions, to be exploratory in this paper. We also do not consider outcomes that are evaluated as a secondary objective but are correlated with the primary outcome to be exploratory, because these analyses contribute to the investigation of the primary research question. Genome-wide association studies, where the exploratory nature of analyses is commonly accounted for by looking at multiple testing, 5 are beyond the scope of this paper.

Mini-review and overview of existing reporting guidance

Before we discuss considerations about the reporting of exploratory aetiologic studies, we wanted to illustrate some of the aspects of exploratory studies that need explicit reporting. Hence we performed a small review of published aetiologic studies. We identified all articles on original research in four journals in their first issue of 2021: American Journal of Epidemiology , Epidemiology , European Journal of Epidemiology , and International Journal of Epidemiology . We excluded studies that did not look at an aetiologic research question, such as prediction studies, studies on therapeutic interventions, and randomised trials. For each article, we counted the number of primary analyses, sensitivity analyses, and additional analyses that were performed. The unit of counting was the association estimator, where we counted only one association if the association was reported on different scales (eg, absolute and relative scales for binary endpoints).

Also, we reviewed existing reporting guidance documents on aspects relevant to exploratory analyses, specifically the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement, 6 RECORD (REporting of studies Conducted using Observational Routinely collected health Data) statement, 7 STROBE-MR (Strengthening the Reporting of Observational Studies in Epidemiology Using Mendelian Randomisation) for mendelian randomisation studies, 8 STREGA (Strengthening the Reporting of Genetic Association Studies) for genome association studies, 9 and the CONSORT (Consolidated Standards of Reporting Trials) extension to randomised pilot and feasibility trials. 10

Patient and public involvement

Involving patients or the public in the design, conduct, reporting, or dissemination plans of our research was not appropriate or possible.

Mini-review

The mini-review included 25 original aetiologic articles. These articles reported a mean number of 33 (range 1-120) exposure-outcome associations for the primary analysis, 30 (0-336) for sensitivity analyses, and 163 (0-1467) for additional analyses, mainly concerning subgroup or interaction analyses (supplementary file). Most articles did not explicitly report which analyses were prespecified, and only one study referred to a publicly available protocol. 11 The methodological scrutiny of the subgroup analyses varied from thoughtful evaluations of exposure effect heterogeneity in well established subgroups to evaluations of exposure effects across subgroups that seemed to have been formed exhaustively across many potential risk factors. Although our review included only a small sample of studies, the picture that emerges is that many results were presented, with insufficient information reported to fully judge their validity and merits.

Existing reporting guidance

The STROBE 6 and RECORD 7 statements provide checklists of items to report in observational studies that are relevant to exploratory analyses ( table 1 ). Extensions of STROBE, such as STROBE-MR 8 and STREGA, 9 provide additional guidance for reporting of studies where many analyses are performed. Guidance for reporting randomised trials also provides helpful information for reporting exploratory analyses in aetiologic research, in particular the CONSORT extension to randomised pilot and feasibility trials. 10 Not all of these recommendations can be directly applied to observational aetiologic studies, however, because the procedures for generating and testing of hypotheses are more established in randomised studies than in observational settings.

Considerations for reporting of exploratory aetiologic research

STROBE=Strengthening the Reporting of Observational Studies in Epidemiology statement; RECORD= REporting of studies Conducted using Observational Routinely collected health Data statement; STROBE-MR= Strengthening the Reporting of Observational Studies in Epidemiology Using Mendelian Randomization; STREGA=Strengthening the Reporting of Genetic Association Studies; CONSORT=Consolidated Standards of Reporting Trials.

Exploratory research principles

Inspired by the existing recommendations for reporting, we list six considerations for reporting and interpretation that can improve the assessment of the credibility of exploratory analyses in aetiologic research ( table 1 ). The list is not exhaustive but we hope it will encourage further discussion on the reporting of exploratory research.

Consideration 1: explicitly state the objective of all analyses, including exploratory analyses

Stating the objective of an aetiologic study clarifies how to interpret the results. The objectives of confirmatory aetiologic research ideally contain a well defined targeted effect of a specific aetiologic factor on a specific outcome in a specific population. 13 14 In early discovery research, objectives are not always rigorously defined but could be specified more generally (eg, understanding the origin of a particular outcome). An implication of stating the objective in general terms, however, is that the methodological handling of the analysis becomes less clear and the number of researchers’ degrees of freedom becomes large. 15 Consequently, interpreting results without deriving spurious (causal) conclusions requires thought and effort because the analysis does not necessarily provide information towards a causal effect (see consideration 4). 16 17 18 The more general an objective is stated, the more provisional the analysis becomes. This caveat includes machine learning approaches where no explicit causal modelling assumptions are made.

Because exploratory analyses in aetiologic research often aim to inform a future in-depth causal analysis, reporting both the objective of the provisional exploratory analysis and the (future) confirmatory analysis is important. This reporting is in line with the CONSORT reporting checklist for pilot randomised controlled trials which recommends that researchers state the objective of the eventual trial in the manuscript of a pilot study. 10 The rationale and need for the exploratory analysis in aetiologic research should be outlined together with uncertainties that need to be dealt with before performing an independent confirmative analysis of the causal mechanism. Reporting the position of provisional analyses relative to future research clarifies the level of credibility of the findings from exploratory analyses.

Consideration 2: establish a study protocol before data analysis and make the protocol available to readers

Preregistered protocols help distinguish which analyses were planned before observing the data and which analyses were performed post hoc, thereby avoiding hypothesising after the results are known. For randomised trials, preregistration of the study protocol is considered the norm. 19 Preregistration does not seem as widespread in observational aetiologic research, but is increasingly encouraged, 20 21 and explicitly recommended in the RECORD reporting checklist. 7 Because aetiologic research often uses existing cohort data that have been analysed for related research questions, preregistration of aetiologic studies does not ensure the same level of credibility of statistical evidence as preregistration before collecting the data.

Nosek and colleagues 22 have provided preliminary guidance on preregistration of analyses conducted with existing data. These authors suggest that what was known in advance about the dataset should be transparently reported so that the credibility of statistical findings can be assessed, taking into account analyses that have been performed previously. Implementing this advice is probably challenging in large epidemiological cohort studies because of the many analyses that might have been performed. But trying to clarify why and how an analysis is conducted before observing the data is a laudable practice that can be implemented directly in aetiologic studies. This practice is ideally accompanied by work on developing guidance for preregistration of aetiologic studies that use existing data.

Preregistration of analyses that are exploratory in nature is even less common, possibly contradicting the definition of exploration. We consider exploratory analysis, however, as discovery work that serves to motivate funding for larger studies that are, for example, better able to control confounding or to collect data rigorously. Given this important probing role, simply stating in a research protocol that certain relations will be explored is not enough; time and effort must be invested in designing the analysis appropriately. Not every detail can be specified in advance, but interpretation of the results provided by data can be challenging and unintentionally overconfident when no question was clearly articulated before seeing the answer.

Consideration 3: do not base judgments on significance values only

Reporting only the results of analyses that produced a P value below the prespecified α level (eg, 0.05) is discouraged throughout all scientific disciplines (for example, as discussed in a 2019 supplementary issue of The American Statistician ). 23 Avoiding selective reporting based on significance values is particularly relevant to exploratory findings because the statistical properties of exploratory tests are less well known than those of confirmatory tests. 24 For example, the expected number of false positives (that is, the type I error rate) probably increases when the choice of statistical test was based on patterns in the observed data. Although procedures have been developed for correction of multiple testing in confirmatory settings, consensus on how to prevent false positive findings in exploratory settings has not yet been established. 24 25 26
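The inflation of the type I error rate with the number of tests is easy to quantify for the idealised case of independent tests. As a minimal illustration (ours, not from the paper), the family-wise error rate for m independent tests, each conducted at level α, is 1 − (1 − α)^m, which grows quickly with m:

```python
# Illustration (not from the reviewed studies): family-wise error rate (FWER)
# for m independent tests, each conducted at significance level alpha.
def family_wise_error_rate(m, alpha=0.05):
    """Probability of at least one false positive among m independent tests."""
    return 1 - (1 - alpha) ** m

for m in (1, 10, 50):
    print(f"{m:>2} tests: FWER = {family_wise_error_rate(m):.3f}")
# 1 test keeps the nominal 0.05; 10 uncorrected tests already push the FWER above 0.40
```

With 50 uncorrected exploratory tests, a false positive somewhere is almost guaranteed (FWER ≈ 0.92), which is one reason the number and choice of tests need explicit reporting.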

Increasing the number of exploratory analyses without correction for multiple testing raises the risk of deriving false positive conclusions, but overly strict corrections for multiple testing increase the probability of false negative findings (that is, the type II error rate). 27 A raised type II error rate could occur, for example, when an analysis of various positively correlated hypotheses is corrected for multiple testing as if all of the hypotheses were independent (eg, by applying a Bonferroni correction). The decision to statistically correct for multiple testing depends, among other issues, on the total number of tests performed in the same dataset, the correlation between the hypotheses being tested, and the sample size. Reporting each of these considerations clarifies the analytical context of findings and helps to assess the credibility of the results. This form of reporting is in line with the STROBE-MR 8 and STREGA 9 checklists, which recommend stating how multiple comparisons were managed, although recommendations for the handling of multiple testing seem more established in genome-wide association studies than in clinical aetiologic cohort studies. 5
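This trade-off can be sketched with a small simulation (our illustration with arbitrary parameters, not an analysis from the paper): under a global null, a two-sided Bonferroni cut-off holds the family-wise error rate near α when the m test statistics are independent, but becomes conservative, and therefore costs power, when they are positively correlated.

```python
import random
from statistics import NormalDist

def simulate_fwer(m=20, rho=0.0, alpha=0.05, n_sim=20000, seed=1):
    """Family-wise error rate of Bonferroni-corrected two-sided z tests
    under a global null, for equicorrelated test statistics (correlation rho)."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / (2 * m))  # two-sided Bonferroni cut-off
    hits = 0
    for _ in range(n_sim):
        shared = rng.gauss(0, 1)  # common factor that induces correlation rho
        if any(abs(rho ** 0.5 * shared + (1 - rho) ** 0.5 * rng.gauss(0, 1)) > z_crit
               for _ in range(m)):
            hits += 1
    return hits / n_sim

print(simulate_fwer(rho=0.0))  # close to the nominal 0.05
print(simulate_fwer(rho=0.8))  # clearly below 0.05: correlated tests are over-corrected
```

The suppressed error rate in the correlated case is exactly the over-correction described above: the effective number of independent hypotheses is smaller than m, so treating them as independent sacrifices power.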

Consideration 4: interpret findings in line with the nature of the analysis

Interpreting and communicating results in line with the exploratory nature of an analysis is challenging because an accurate representation of the degree of tentativeness of the results is required. Assessing this degree of tentativeness based on only the results of an analysis (that is, based on the numerical estimates) is complicated because seemingly convincing results can be misleading and a clinical explanation can be found that does not follow from the statistical evidence. 28 29 Cognitive biases, such as hindsight bias, can distort the interpretation of findings.

Reporting of findings from exploratory analyses starts with indicating whether the analysis was planned before or after observing the data, which is recommended in the CONSORT extension to randomised pilot and feasibility trials. 10 Results of exploratory analyses can be interpreted by focusing on what is reported about the objectives and applied methodology rather than overstepping the findings. The specificity with which findings are interpreted should match the generality with which the objective is stated (see consideration 1). 16 17 18 For example, when various subgroup analyses are performed with the general aim of identifying possible subgroups from the available data where an exposure effect was different, researchers should report that many subgroups were explored, including characterisation of the subgroups and description of the presence or absence of effect heterogeneity, rather than discussing only one or two specific subgroups where the effect size was extreme. Furthermore, exploratory analyses often fail to support strong conclusions. Recommendations for clinical practice or generalisations based on exploratory analyses should generally be avoided.

Consideration 5: report (summarised) results of all exploratory analyses that were performed

When findings are selectively reported, especially when reporting is guided by significant findings (see consideration 3), the credibility of reported findings is probably overstated. 30 Reporting the results of all of the exploratory analyses that were conducted (possibly in a supplementary file) provides a transparent and honest report of the analysis and facilitates better interpretation of the findings. This approach is in line with the STROBE extension in STREGA, which recommends that all results of analyses should be presented, even if numerous analyses were undertaken. 9

Reporting all analyses that have been conducted seems simple, but can be challenging in practice, mainly because the process of performing a study is typically iterative. A framework for initial data analysis by Huebner and colleagues could help keep track of all subanalyses that are conducted as part of a main analysis. 31 This framework distinguishes exploratory analyses that are part of a primary analysis from additional exploratory analyses that require separate reporting. Another helpful practice could be to have a reflection period after performing analyses to establish whether the analyses look at (slightly) different research questions and to report separate analyses for each research question.

Consideration 6: accompany exploratory analyses by a proposed research agenda

The credibility of exploratory findings can be communicated through a research agenda prioritising future research and how this research should be set up. Reporting a research agenda is similar to the CONSORT extension to randomised pilot and feasibility trials that recommends reporting which and how future confirmative trials can be informed by the pilot study. 10

Formulating a research agenda allows researchers to take responsibility for the exploratory findings presented and future research that should be performed, avoiding the empty statement that “more research is needed”. In medical science in particular, where study results are sometimes quickly implemented into clinical practice, researchers are encouraged to take responsibility for the results they report by clearly explaining which exploratory findings should be investigated in future research and how.

Discussion

Our mini-review showed that exploratory analyses in aetiologic research were not always reported optimally. The credibility of exploratory results is affected by a combination of the theoretical rationale for the analysis, clarity of the defined research problem, applied methodology, and degree to which analytical decisions are driven by the data. Choosing a particular analysis based on observed patterns in the data complicates statistical inferences. Moreover, the design and methods applied in an exploratory analysis might be less optimal than the primary analysis of the study, which further complicates interpretation of exploratory analyses. Therefore, information on these aspects should be clearly reported.

Exploration is essential to the progress of science. Strict confirmatory studies are a powerful mechanism for final evaluations before implementation in clinical practice, but will probably not stimulate new ideas. 32 33 Open minded exploratory analyses can lead to unexpected discoveries and resourceful innovations of epidemiological science, but effort is required to accurately interpret the results. Because exploratory analyses are usually done to generate new research questions, quickly performing a statistical test (or multiple tests) to get the first answer to the problem is tempting. When quick test results are presented in a research article, however, their interpretation might be ad hoc and unintentionally overconfident.

To show their full value, exploratory analyses of aetiologic research need to be conducted and interpreted correctly. We have provided six considerations for reporting of exploratory analyses to encourage a discussion on exploratory analyses and how the credibility of these analyses is ideally assessed in aetiologic research. Continuation of this discussion will contribute to the understanding of inferences that can be made from exploratory analyses in aetiologic research and will help strike a balance between their opportunities and risks.

What is already known on this topic

  • Exploratory analyses in aetiologic research are initial steps towards solving a research problem and are often conducted in addition to planned primary analyses of a study
  • Exploratory analyses might lead to new discoveries in aetiologic research, but effort is needed to accurately interpret the results because these analyses are often conducted with few data resources and insufficient adjusting for confounding
  • Statistical properties of exploratory tests are less well known than those of confirmatory tests

What this study adds

  • This study focuses on a particular type of causal research, namely aetiologic studies, which investigate the causal effect of one or multiple risk factors on a particular health outcome or disease
  • Six considerations for reporting of exploratory analyses in aetiologic research were provided to stimulate a discussion about their preferred handling and reporting
  • Researchers should take responsibility for results of exploratory analyses by clearly reporting their exploratory nature and specifying which findings should be investigated in future research and how

Web extra. 

Extra material supplied by authors

Web appendix: Supplementary file

Contributors: KL was involved in the conceptualisation, investigation, methodology, visualisation, and writing (original draft) of the article. OMD was involved in the conceptualisation, investigation, methodology, and writing (review and editing) of the article. FRR was involved in the conceptualisation, investigation, methodology, and writing (review and editing) of the article. RHHG was involved in the conceptualisation, investigation, methodology, supervision, and writing (review and editing) of the article. KL, OMD, FRR, RHHG gave final approval of the version to be published and are accountable for all aspects of the work. KL is the main guarantor of this study. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: RHHG was supported by grants from the Netherlands Organisation for Scientific Research (ZonMW, project 917.16.430) and from Leiden University Medical Centre. The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/disclosure-of-interest/ and declare: support from the Netherlands Organisation for Scientific Research and Leiden University Medical Centre for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

The lead author (the manuscript’s guarantor) affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

Dissemination to participants and related patient and public communities: An abstract was submitted to the annual Dutch epidemiology conference (www.weon.nl). The authors aim to share their work with stakeholders at this conference and at institutional meetings, and will post a link with a plain language summary on their personal websites (www.rolfgroenwold.nl).

Provenance and peer review: Not commissioned; externally peer reviewed.

Ethics statements

Ethical approval

Not required.

Data availability statement
