National Academies of Sciences, Engineering, and Medicine; Policy and Global Affairs; Committee on Science, Engineering, Medicine, and Public Policy; Board on Research Data and Information; Division on Engineering and Physical Sciences; Committee on Applied and Theoretical Statistics; Board on Mathematical Sciences and Analytics; Division on Earth and Life Studies; Nuclear and Radiation Studies Board; Division of Behavioral and Social Sciences and Education; Committee on National Statistics; Board on Behavioral, Cognitive, and Sensory Sciences; Committee on Reproducibility and Replicability in Science. Reproducibility and Replicability in Science. Washington (DC): National Academies Press (US); 2019 May 7.


3 Understanding Reproducibility and Replicability

In 2013, the cover story of The Economist, “How Science Goes Wrong,” brought public attention to issues of reproducibility and replicability across science and engineering. In this chapter, we discuss how the practice of science has evolved and how these changes have introduced challenges to reproducibility and replicability. Because the terms reproducibility and replicability are used differently across scientific disciplines, adding confusion to an already complicated set of challenges and solutions, the committee also details its definitions and highlights the scope and expression of the problems of non-reproducibility and non-replicability across science and engineering research.
THE EVOLVING PRACTICES OF SCIENCE

Scientific research has evolved from an activity mainly undertaken by individuals operating in a few locations to many teams, large communities, and complex organizations involving hundreds to thousands of individuals worldwide. In the 17th century, scientists would communicate through letters and were able to understand and assimilate major developments across all the emerging major disciplines. In 2016—the most recent year for which data are available—more than 2,295,000 scientific and engineering research articles were published worldwide (National Science Foundation, 2018e). In addition, the number of scientific and engineering fields and subfields of research is large and has greatly expanded in recent years, especially in fields that intersect disciplines (e.g., biophysics); more than 230 distinct fields and subfields can now be identified. The published literature is so voluminous and specialized that some researchers look to information retrieval, machine learning, and artificial intelligence techniques to track and apprehend the important work in their own fields.

Another major revolution in science came with the recent explosion of the availability of large amounts of data in combination with widely available and affordable computing resources. These changes have transformed many disciplines, enabled important scientific discoveries, and led to major shifts in science. In addition, the use of statistical analysis of data has expanded, and many disciplines have come to rely on complex and expensive instrumentation that generates and can automate analysis of large digital datasets.

Large-scale computation has been adopted in fields as diverse as astronomy, genetics, geoscience, particle physics, and social science, and has added scope to fields such as artificial intelligence. The democratization of data and computation has created new ways to conduct research; in particular, large-scale computation allows researchers to do research that was not possible a few decades ago. For example, public health researchers mine large databases and social media, searching for patterns, while earth scientists run massive simulations of complex systems to learn about the past, which can offer insight into possible future events.

Another change in science is an increased pressure to publish new scientific discoveries in prestigious and what some consider high-impact journals, such as Nature and Science.[1] This pressure is felt worldwide, across disciplines, and by researchers at all levels but is perhaps most acute for researchers at the beginning of their scientific careers who are trying to establish a strong scientific record to increase their chances of obtaining tenure at an academic institution and grants for future work. Tenure decisions have traditionally been made on the basis of the scientific record (i.e., published articles of important new results in a field) and have given added weight to publications in more prestigious journals. Competition for federal grants, a large source of academic research funding, is intense as the number of applicants grows at a rate higher than the increase in federal research budgets. These multiple factors create incentives for researchers to overstate the importance of their results and increase the risk of bias—either conscious or unconscious—in data collection, analysis, and reporting.

In the context of these dynamic changes, the questions and issues related to reproducibility and replicability remain central to the development and evolution of science. How should studies and other research approaches be designed to efficiently generate reliable knowledge? How might hypotheses and results be better communicated to allow others to confirm, refute, or build on them? How can the potential biases of scientists themselves be understood, identified, and exposed in order to improve accuracy in the generation and interpretation of research results? How can intentional misrepresentation and fraud be detected and eliminated?[2]

Researchers have proposed approaches to answering some of these questions over the past decades. As early as the 1960s, Jacob Cohen surveyed psychology articles from the perspective of statistical power to detect effect sizes, an approach that launched many subsequent power surveys (also known as meta-analyses) in the social sciences (Cohen, 1988).

Researchers in biomedicine have focused on threats to the validity of results since at least the 1970s. In response, they have developed a wide variety of approaches to address these threats, including an emphasis on randomized experiments with masking (also known as blinding), reliance on meta-analytic summaries over individual trial results, proper sizing and powering of experiments, and the introduction of trial registration and detailed experimental protocols. Many of the same approaches have been proposed to counter shortcomings in reproducibility and replicability.

Reproducibility and replicability as they relate to data- and computation-intensive scientific work received attention as the use of computational tools expanded. In the 1990s, Jon Claerbout launched the “reproducible research movement,” brought on by the growing use of computational workflows for analyzing data across a range of disciplines (Claerbout and Karrenbach, 1992). Minor mistakes in code can lead to serious errors in interpretation and in reported results; Claerbout's proposed solution was to establish an expectation that data and code will be openly shared so that results could be reproduced. The assumption was that reanalysis of the same data using the same methods would produce the same results.

In the 2000s and 2010s, several high-profile journal and general media publications focused on concerns about reproducibility and replicability (see, e.g., Ioannidis, 2005; Baker, 2016), including the cover story in The Economist (“How Science Goes Wrong,” 2013) noted above. These articles introduced new concerns about the availability of data and code and highlighted problems of publication bias, selective reporting, and misaligned incentives that cause positive results to be favored for publication over negative or nonconfirmatory results.[3] Some news articles focused on issues in biomedical research and clinical trials, which were discussed in the general media partly as a result of lawsuits and settlements over widely used drugs (Fugh-Berman, 2010).

Many publications about reproducibility and replicability have focused on the lack of data, code, and detailed description of methods in individual studies or a set of studies. Several attempts have been made to assess non-reproducibility or non-replicability within a field, particularly in social sciences (e.g., Camerer et al., 2018; Open Science Collaboration, 2015). In Chapters 4, 5, and 6, we review in more detail the studies, analyses, efforts to improve, and factors that affect the lack of reproducibility and replicability. Before that discussion, we must clearly define these terms.

DEFINING REPRODUCIBILITY AND REPLICABILITY

Different scientific disciplines and institutions use the words reproducibility and replicability in inconsistent or even contradictory ways: what one group means by one word, the other group means by the other word.[4] These terms—and others, such as repeatability—have long been used in relation to the general concept of one experiment or study confirming the results of another. Within this general concept, however, no terminologically consistent way of drawing distinctions has emerged; instead, conflicting and inconsistent terms have flourished. The difficulties in assessing reproducibility and replicability are complicated by this absence of standard definitions for these terms.

In some fields, one term has been used to cover all related concepts: for example, “replication” historically covered all concerns in political science (King, 1995). In many settings, the terms reproducible and replicable have distinct meanings, but different communities adopted opposing definitions (Claerbout and Karrenbach, 1992; Peng et al., 2006; Association for Computing Machinery, 2018). Some have added qualifying terms, such as methods reproducibility, results reproducibility, and inferential reproducibility, to the lexicon (Goodman et al., 2016). In particular, tension has emerged between the usage recently adopted in computer science and the way that researchers in other scientific disciplines have described these ideas for years (Heroux et al., 2018).

In the early 1990s, investigators began using the term “reproducible research” for studies that provided a complete digital compendium of data and code to reproduce their analyses, particularly in the processing of seismic wave recordings (Claerbout and Karrenbach, 1992; Buckheit and Donoho, 1995). The emphasis was on ensuring that a computational analysis was transparent and documented so that it could be verified by other researchers. While this notion of reproducibility is quite different from situations in which a researcher gathers new data in the hopes of independently verifying previous results or a scientific inference, some scientific fields use the term reproducibility to refer to this practice. Peng et al. (2006, p. 783) referred to this scenario as “replicability,” noting: “Scientific evidence is strengthened when important results are replicated by multiple independent investigators using independent data, analytical methods, laboratories, and instruments.” Despite efforts to coalesce around the use of these terms, lack of consensus persists across disciplines. The resulting confusion is an obstacle in moving forward to improve reproducibility and replicability (Barba, 2018).

In a review paper on the use of the terms reproducibility and replicability, Barba (2018) outlined three categories of usage, which she characterized as A, B1, and B2:

  • A: The terms are used with no distinction between them.
  • B1: “Reproducibility” refers to instances in which the original researcher's data and computer codes are used to regenerate the results, while “replicability” refers to instances in which a researcher collects new data to arrive at the same scientific findings as a previous study.
  • B2: “Reproducibility” refers to independent researchers arriving at the same results using their own data and methods, while “replicability” refers to a different team arriving at the same results using the original author's artifacts.

B1 and B2 are in direct opposition to each other with respect to which term involves reusing the original authors' digital artifacts of research (the “research compendium”) and which involves independently created digital artifacts. Barba (2018) collected data on the usage of these terms across a variety of disciplines (see Table 3-1).[5]

TABLE 3-1. Usage of the Terms Reproducibility and Replicability by Scientific Discipline.


The terminology adopted by the Association for Computing Machinery (ACM) for computer science was published in 2016 as part of a system of badges attached to articles published by the society. The ACM declared that its definitions were inspired by the metrology vocabulary: it attached the use of an original author's digital artifacts to “replicability” and the development of completely new digital artifacts to “reproducibility.” These terminological distinctions contradict the usage in computational science, where reproducibility is associated with transparency and access to the author's digital artifacts, as well as the usage in social sciences, economics, clinical studies, and other domains, where replication studies collect new data to verify the original findings.

Regardless of the specific terms used, the underlying concepts have long played essential roles in all scientific disciplines. These concepts are closely connected to the following general questions about scientific results:

  • Are the data and analysis laid out with sufficient transparency and clarity that the results can be checked?
  • If checked, do the data and analysis offered in support of the result in fact support that result?
  • If the data and analysis are shown to support the original result, can the result reported be found again in the specific study context investigated?
  • Finally, can the result reported or the inference drawn be found again in a broader set of study contexts?

Computational scientists generally use the term reproducibility to answer just the first question—that is, reproducible research is research that is capable of being checked because the data, code, and methods of analysis are available to other researchers. The term reproducibility can also be used in the context of the second question: research is reproducible if another researcher actually uses the available data and code and obtains the same results. The difference between the first and the second questions is one of action by another researcher; the first refers to the availability of the data, code, and methods of analysis, while the second refers to the act of recomputing the results using the available data, code, and methods of analysis.

In order to answer the first and second questions, a second researcher uses the data and code from the first; no new data or code are created by the second researcher. Reproducibility depends only on whether the methods of the computational analysis were transparently and accurately reported and whether the data, code, or other materials were used to reproduce the original results. In contrast, to answer question three, a researcher must redo the study, following the original methods as closely as possible and collecting new data. To answer question four, a researcher could take a variety of paths: choose a new condition of analysis, conduct the same study in a new context, or conduct a new study aimed at the same or a similar research question.

For the purposes of this report and with the aim of defining these terms in ways that apply across multiple scientific disciplines, the committee has chosen to draw the distinction between reproducibility and replicability between the second and third questions. Thus, reproducibility includes the act of a second researcher recomputing the original results, and it can be satisfied with the availability of data, code, and methods that makes that recomputation possible. This definition of reproducibility refers to the transparency and reproducibility of computations: that is, it is synonymous with “computational reproducibility,” and we use the terms interchangeably in this report.
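To make this definition concrete, here is a minimal sketch of a computational reproducibility check in Python. The analysis function is a hypothetical stand-in invented for this example; in practice it would be the original study's released code run on the original data, with any randomized steps pinned by a recorded seed.

```python
import hashlib
import json
import random

def run_analysis(seed: int = 42) -> dict:
    """Hypothetical stand-in for an original study's computational pipeline.

    A real check would invoke the authors' released code on their released
    data; the fixed seed keeps the randomized step deterministic.
    """
    rng = random.Random(seed)
    sample = [rng.gauss(10.0, 2.0) for _ in range(1000)]
    return {"n": len(sample), "mean": round(sum(sample) / len(sample), 6)}

def result_hash(result: dict) -> str:
    """Hash a canonical (sorted-key) JSON serialization of the results."""
    canonical = json.dumps(result, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The "original" run stands in for the published results...
original = result_hash(run_analysis())
# ...and a second researcher recomputes from the same data, code, and seed.
recomputed = result_hash(run_analysis())
print("reproduced:", original == recomputed)  # True: same inputs, same results
```

Recomputing and comparing in this way answers the second question above; merely publishing the data and code answers only the first.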

When a new study is conducted and new data are collected, aimed at the same or a similar scientific question as a previous one, we define it as a replication. A replication attempt might be conducted by the same investigators in the same lab in order to verify the original result, or it might be conducted by new investigators in a new lab or context, using the same or different methods and conditions of analysis. If this second study, aimed at the same scientific question but collecting new data, finds consistent results or can draw consistent conclusions, the research is replicable. If a second study explores a similar scientific question but in other contexts or populations that differ from the original one and finds consistent results, the research is “generalizable.”[6]

In summary, after extensive review of the ways these terms are used by different scientific communities, the committee adopted specific definitions for this report.

CONCLUSION 3-1: For this report, reproducibility is obtaining consistent results using the same input data; computational steps, methods, and code; and conditions of analysis. This definition is synonymous with “computational reproducibility,” and the terms are used interchangeably in this report.

Replicability is obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data.

Two studies may be considered to have replicated if they obtain consistent results given the level of uncertainty inherent in the system under study. In studies that measure a physical entity (i.e., a measurand), the results may be the sets of measurements of the same measurand obtained by different laboratories. In studies aimed at detecting an effect of an intentional intervention or a natural event, the results may be the type and size of effects found in different studies aimed at answering the same question. In general, whenever new data are obtained that constitute the results of a study aimed at answering the same scientific question as another study, the degree of consistency of the results from the two studies constitutes their degree of replication.

Two important constraints on the replicability of scientific results are limits to the precision of measurement and the potential for altered results due to sometimes subtle variations in the methods and steps performed in a scientific study. We expressly consider both here, as each can have a profound influence on the replicability of scientific studies.

PRECISION OF MEASUREMENT

Virtually all scientific observations involve counts, measurements, or both. Scientific measurements may be of many different kinds: spatial dimensions (e.g., size, distance, and location), time, temperature, brightness, colorimetric properties, electromagnetic properties, electric current, material properties, acidity, and concentration, to name a few from the natural sciences. The social sciences are similarly replete with counts and measures. With each measurement comes a characterization of the margin of doubt, or an assessment of uncertainty (Possolo and Iyer, 2017). Indeed, it may be said that measurement, quantification, and uncertainties are core features of scientific studies.

One mark of progress in science and engineering has been the ability to make increasingly exact measurements on a widening array of objects and phenomena. Many of the things taken for granted in the modern world, from mechanical engines to interchangeable parts to smartphones, are possible only because of advances in the precision of measurement over time (Winchester, 2018).

The concept of precision refers to the degree of closeness among measurements of the same quantity. As the unit used to measure distance, for example, shrinks from meter to centimeter to millimeter and so on down to micron, nanometer, and angstrom, the measurement unit becomes more exact and the proximity of one measurand to a second can be determined more precisely.

Even when scientists believe a quantity of interest is constant, they recognize that repeated measurement of that quantity may vary because of limits in the precision of measurement technology. It is useful to note that precision is different from the accuracy of a measurement system, as shown in Figure 3-1, which demonstrates the differences using an archery target containing three arrows.

Figure 3-1. Accuracy and precision of a measurement. NOTE: See text for discussion. SOURCE: Chemistry LibreTexts. Available: https://chem.libretexts.org/Bookshelves/Introductory_Chemistry/Book%3A_IntroductoryChemistry_(CK-12)/03%3A_Measurements/3.12%3A_Accuracy_and_Precision

In Figure 3-1, panel A, the three arrows are in the outer ring, neither close together nor close to the bull's eye, illustrating low accuracy and low precision (i.e., the shots have been neither accurate nor precise). In panel B, the arrows are clustered in a tight band in an outer ring, illustrating low accuracy and high precision (i.e., the shots have been precise, but not accurate). The other two panels similarly illustrate high accuracy and low precision (C) and high accuracy and high precision (D).

It is critical to keep in mind that the accuracy of a measurement can be judged only in relation to a known standard of truth. If the exact location of the bull's eye is unknown, one must not presume that a more precise set of measures is necessarily more accurate; the results may simply be subject to a more consistent bias, moving them in a consistent way in a particular direction and distance from the true target.
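A short sketch with hypothetical arrow coordinates shows how the two properties differ computationally: precision can be estimated from the scatter of the observations alone, while accuracy requires a known standard of truth (here, a bull's eye assumed to sit at the origin).

```python
import math

def centroid(points):
    """Mean position of a set of 2-D points."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def accuracy_error(points, target=(0.0, 0.0)):
    """Distance from the centroid to the known true value: a measure of bias.

    Computable only when the truth (the bull's eye) is known.
    """
    cx, cy = centroid(points)
    return math.hypot(cx - target[0], cy - target[1])

def precision_spread(points):
    """Root-mean-square distance of points from their own centroid.

    Needs no knowledge of the truth; it reflects only mutual closeness.
    """
    cx, cy = centroid(points)
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / len(points))

# Hypothetical arrows mimicking panel B: tightly clustered, far from center.
panel_b = [(4.0, 4.1), (4.2, 3.9), (3.9, 4.0)]
print(f"bias from target (accuracy error): {accuracy_error(panel_b):.2f}")    # large
print(f"spread about centroid (precision): {precision_spread(panel_b):.2f}")  # small
```

A consistently biased instrument produces exactly this pattern: a small spread that, absent the known target, could be mistaken for correctness.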

It is often useful in science to describe quantitatively the central tendency and degree of dispersion among a set of repeated measurements of the same entity and to compare one set of measurements with a second set. When a set of measurements is repeated by the same operator using the same equipment under constant conditions and close in time, metrologists refer to the proximity of these measurements to one another as measurement repeatability (see Box 3-1). When one is interested in comparing the degree to which the set of measurements obtained in one study are consistent with the set of measurements obtained in a second study, the committee characterizes this as a test of replicability because it entails the comparison of two studies aimed at the same scientific question where each obtained its own data.

Box 3-1. Terms Used in Metrology and How They Differ from the Committee's Definitions.

Consider, for example, the set of measurements of a physical constant, the fine-structure constant, obtained over time by a number of laboratories (see Figure 3-2). For each laboratory's results, the figure depicts the mean observation (i.e., the central tendency) and the standard error of the mean, indicated by the error bars. The standard error is an indicator of the precision of the obtained measurements, where a smaller standard error represents higher precision. In comparing the measurements obtained by the different laboratories, notice that both the mean values and the degrees of precision (as indicated by the width of the error bars) may differ from one set of measurements to another.

Figure 3-2. Evolution of scientific understanding of the fine structure constant over time. NOTES: Error bars indicate the experimental uncertainty of each measurement. See text for discussion. SOURCE: Reprinted figure with permission from Peter J. Mohr, David B. …

We may now ask a question that is central to this study: How well does a second set of measurements (or results) replicate a first set of measurements (or results)? Answering this question, we suggest, may involve three components:

  • proximity of the mean value (central tendency) of the second set relative to the mean value of the first set, measured both in physical units and relative to the standard error of the estimate
  • similitude in the degree of dispersion in observed values about the mean in the second set relative to the first set
  • likelihood that the second set of values and the first set of values could have been drawn from the same underlying distribution

Depending on circumstances, one or another of these components could be more salient for a particular purpose. For example, two sets of measures could have means that are very close to one another in physical units, yet each was measured so precisely that the difference between the means is very unlikely to have occurred by chance. A second comparison may find means that are further apart but derived from more widely dispersed sets of observations, so that there is a higher likelihood that the difference in means could have been observed by chance. In terms of physical proximity, the first comparison is more closely replicated. In terms of the likelihood of being derived from the same underlying distribution, the second comparison is more highly replicated.
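A minimal sketch of these three components follows, using hypothetical raw measurements from two studies of the same measurand. The two-sample comparison uses a Welch-style statistic with a normal approximation for the p-value, which is one reasonable choice among several, not a prescription from the report.

```python
import math
from statistics import mean, stdev

def replication_components(first, second):
    """Compare two sets of measurements of the same quantity.

    Returns the difference of means in physical units and in units of the
    combined standard error, the ratio of dispersions, and an approximate
    p-value for compatibility with a shared underlying mean.
    """
    m1, m2 = mean(first), mean(second)
    s1, s2 = stdev(first), stdev(second)
    se = math.sqrt(s1 ** 2 / len(first) + s2 ** 2 / len(second))
    t = (m2 - m1) / se                     # Welch-style test statistic
    p = math.erfc(abs(t) / math.sqrt(2))   # two-sided normal approximation
    return {
        "mean_diff": m2 - m1,              # component 1: physical units
        "mean_diff_in_se": t,              # component 1: relative to SE
        "dispersion_ratio": s2 / s1,       # component 2: similitude of spread
        "p_same_mean": p,                  # component 3: chance compatibility
    }

# Hypothetical measurements of the same measurand from two laboratories.
study_1 = [9.81, 9.79, 9.82, 9.80, 9.81]
study_2 = [9.83, 9.84, 9.82, 9.85, 9.83]
for name, value in replication_components(study_1, study_2).items():
    print(f"{name}: {value:.4f}")
```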

A simple visual inspection of the means and standard errors for measurements obtained by different laboratories may be sufficient for a judgment about their replicability. For example, in Figure 3-2, it is evident that the bottom two measurement results have relatively tight precision and nearly identical means, so it seems reasonable that these can be considered to have replicated one another. It is similarly evident that the results from LAMPF (second from the top of the reported measurements with a mean value and error bars in Figure 3-2) are better replicated by the results from LNE-01 (fourth from top) than by the measurements from NIST-89 (sixth from top). Judging the degree of replication may be more subtle when, for example, one set of measurements has a relatively wide range of uncertainty compared to another. In Figure 3-2, the uncertainty range from NPL-88 (third from top) is relatively wide and includes the mean of NIST-97 (seventh from top); however, the narrower uncertainty range for NIST-97 does not include the mean from NPL-88. Especially in such cases, it is valuable to have a systematic, quantitative indicator of the extent to which one set of measurements may be said to have replicated a second set, and a consistent means of quantifying the extent of replication can be useful in all cases.
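One simple quantitative indicator of this kind is sketched below: it expresses the difference between two reported means in units of their combined standard uncertainty, a common compatibility check. The numeric values are hypothetical placeholders, not the published measurements behind Figure 3-2.

```python
import math

def replication_z(mean_1, se_1, mean_2, se_2):
    """Difference between two reported means in units of combined uncertainty.

    A small |z| (conventionally below about 2) suggests the two results are
    statistically compatible; the cutoff is a judgment call, not a law.
    """
    return (mean_2 - mean_1) / math.sqrt(se_1 ** 2 + se_2 ** 2)

# Hypothetical (mean, standard error) pairs standing in for two laboratories,
# one with a wide uncertainty range and one with a narrow range.
wide = (137.036000, 0.000020)
narrow = (137.035988, 0.000004)

z = replication_z(*wide, *narrow)
print(f"z = {z:+.2f} ->", "compatible" if abs(z) < 2 else "discrepant")
```

Unlike asking whether one error bar happens to contain the other study's mean, this indicator treats the two results symmetrically, which sidesteps the NPL-88/NIST-97 kind of asymmetry described above.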

VARIATIONS IN METHODS EMPLOYED IN A STUDY

When closely scrutinized, a scientific study or experiment may be seen to entail hundreds or thousands of choices, many of which are barely conscious or taken for granted. In the laboratory, exactly what size of Erlenmeyer flask is used to mix a set of reagents? At what exact temperature were the reagents stored? Was a drying agent such as acetone used on the glassware? Which agent, and in what amount and exact concentration? Within what tolerance of error are the ingredients measured? When ingredient A was combined with ingredient B, was the flask shaken or stirred? How vigorously and for how long? What manufacturer of porcelain filter was used? If conducting a field survey, how, exactly, were the subjects selected? Are the interviews conducted by computer, over the phone, or in person? Is the interviewer female or male, young or old, of the same or a different race than the interviewee? What is the exact wording of a question? If spoken, with what inflection? What is the exact sequence of questions? Without belaboring the point, we can say that many of the exact methods employed in a scientific study may or may not be described in the methods section of a publication. An investigator may or may not realize when a possible variation could be consequential to the replicability of results.

In a later section, we will deal more generally with sources of non-replicability in science (see Chapter 5 and Box 5-2). Here, we wish to emphasize that countless subtle variations in the methods, techniques, sequences, procedures, and tools employed in a study may contribute in unexpected ways to differences in the obtained results (see Box 3-2).

Box 3-2. Data Collection, Cleaning, and Curation.

Finally, note that a single scientific study may entail elements of several of the concepts introduced and defined in this chapter, including computational reproducibility, precision in measurement, replicability, and generalizability, or any combination of these. For example, a large epidemiological survey of air pollution may entail portable, personal devices to measure various concentrations in the air (subject to precision of measurement), very large datasets to analyze (subject to computational reproducibility), and a large number of choices in research design, methods, and study population (subject to replicability and generalizability).

RIGOR AND TRANSPARENCY

The committee was asked to “make recommendations for improving rigor and transparency in scientific and engineering research” (refer to Box 1-1 in Chapter 1). In response to this part of our charge, we briefly discuss the meanings of rigor and of transparency below and relate them to our topic of reproducibility and replicability.

Rigor is defined as “the strict application of the scientific method to ensure robust and unbiased experimental design” (National Institutes of Health, 2018e). Rigor does not guarantee that a study will be replicated, but conducting a study with rigor—with a well-thought-out plan and strict adherence to methodological best practices—makes it more likely. One of the assumptions of the scientific process is that rigorously conducted studies “and accurate reporting of the results will enable the soundest decisions” and that a series of rigorous studies aimed at the same research question “will offer successively ever-better approximations to the truth” (Wood et al., 2019, p. 311). Practices that indicate a lack of rigor, including poor study design, errors or sloppiness, and poor analysis and reporting, contribute to avoidable sources of non-replicability (see Chapter 5). Rigor affects both reproducibility and replicability.

Transparency has a long tradition in science. Since the advent of scientific reports and technical conferences, scientists have shared details about their research, including study design, materials used, details of the system under study, operationalization of variables, measurement techniques, uncertainties in measurement in the system under study, and how data were collected and analyzed. A transparent scientific report makes clear whether the study was exploratory or confirmatory, shares information about what measurements were collected and how the data were prepared, which analyses were planned and which were not, and communicates the level of uncertainty in the result (e.g., through an error bar, sensitivity analysis, or p-value). Only by sharing all this information might it be possible for other researchers to confirm and check the correctness of the computations, attempt to replicate the study, and understand the full context of how to interpret the results. Transparency of data, code, and computational methods is directly linked to reproducibility, and it also applies to replicability. The clarity, accuracy, specificity, and completeness in the description of study methods directly affects replicability.
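As an illustration of the kind of record such transparency implies for computational work, the sketch below writes a minimal, hypothetical “study manifest” capturing what a second researcher would need: the exact input data (by checksum), the code version, the seed, and the planned analyses. The field names are inventions for this example, not a community standard.

```python
import hashlib
import json
import platform
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum so others can verify they hold the exact same input data."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Create a tiny stand-in dataset so the example is self-contained.
data_file = Path("measurements.csv")
data_file.write_text("id,value\n1,9.81\n2,9.79\n3,9.82\n")

manifest = {
    "study": "hypothetical replication target",
    "design": "confirmatory",            # exploratory vs. confirmatory, stated up front
    "data": {"file": data_file.name, "sha256": sha256_of(data_file)},
    "code_version": "v1.2.0",            # e.g., a git tag or commit hash
    "random_seed": 42,
    "environment": {"python": platform.python_version()},
    "planned_analyses": ["two-sample comparison of means"],
    "uncertainty_reporting": "standard error of the mean",
}

Path("manifest.json").write_text(json.dumps(manifest, indent=2))
print(json.dumps(manifest, indent=2))
```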

FINDING 3-1: In general, when a researcher transparently reports a study and makes available the underlying digital artifacts, such as data and code, the results should be computationally reproducible. In contrast, even when a study was rigorously conducted according to best practices, correctly analyzed, and transparently reported, it may fail to be replicated.

1. “High-impact” journals are viewed by some as those that receive high scores on one of several journal impact indicators, such as CiteScore, SCImago Journal Rank (SJR), and Source Normalized Impact per Paper (SNIP), which are available in Scopus, and Journal Impact Factor (IF), Eigenfactor (EF), and Article Influence Score (AIS), which can be obtained from the Journal Citation Reports (JCR).

2. See Chapter 5, Fraud and Misconduct, which further discusses misconduct as a source of non-replicability, its frequency, and its reporting by the media.

3. One such outcome became known as the “file drawer problem”: see Chapter 5; also see Rosenthal (1979).

4. For the negative case, both “non-reproducible” and “irreproducible” are used in scientific work and are synonymous.

5. See also Heroux et al. (2018) for a discussion of the competing taxonomies between computational science (B1) and the new definitions adopted in computer science (B2), and proposals for resolving the differences.

6. The committee's definitions of reproducibility, replicability, and generalizability are consistent with the National Science Foundation's Social, Behavioral, and Economic Sciences Perspectives on Robust and Reliable Science (Bollen et al., 2015).


Module 2 Chapter 2: The Link Between Theory, Research, and Social Justice

Theory has been mentioned several times in Chapter 1 discussions. In this chapter, we explore the relationship between theory and research, paying particular attention to how theory and research relate to promoting social justice.

In this chapter you will read about:

  • Why theory matters to social work
  • How theory and research relate to social justice

The Significance of Theory

It is helpful to begin by thinking about what theory is. Theory is defined as a belief, idea, or set of principles that explains something; the set of principles may be organized in a complex manner or may be quite simple in nature. Theories are not facts; they are conjectures and predictions that need to be tested for their goodness-of-fit with reality. A scientific theory is an explanation supported by empirical evidence. Therefore, scientific theory is based on careful, reasonable examination of facts, and theories are tested and confirmed for their ability to explain and predict the phenomena of interest.

Theory is central to the development of social work interventions, as it determines the nature of our solutions to identified problems. Consider an example of how programmatic and social policy responses might be influenced by the way a problem like teen pregnancy is defined and by theories about the problem. In Table 2-1 you can see different definitions or theories of the problem on the left and the logical responses on the right. In many cases, the boxes on the right could also be supplemented with content from boxes addressing other problem definitions or theories.

Table 2-1. Analysis of teen pregnancy: How defining a problem determines responses

Hopefully, through this example, you can see how the way we define a problem and our theories about its causes determines the types of solutions we develop. Solutions are also dependent on whether they are feasible, practical, reasonable, ethical, culturally appropriate, and politically acceptable.

Theory, Research, and Social Justice

Theory is integral to research, and research is integral to theory. Theory guides the development of many research questions, and research helps generate new theories, as well as determining whether support for existing theories exists. What is important to remember is that theory is not fact: it is a belief or “best guess” about how things work that awaits the support of empirical evidence.

“The best theories are those that have been substantiated or validated in the real world by research studies” (Dudley, 2011, p. 6).

Countless theories exist, and some are more well-developed and well-defined than others. More mature theories have had more time and effort devoted to supporting research; newer theories may be more tentative while supporting evidence is still being developed. Theories are sometimes disproven and need to be scrapped completely or heavily overhauled as new research evidence emerges. And exploratory research leads to the birth of new theories as new phenomena and questions arise, or as practitioners discover ways that existing theory does not fit reality.

Examples of theories and theoretical models with which you may have become familiar in other coursework are developed, tested, and applied in research from multiple disciplines, including social work. You may be familiar with the concepts of multidisciplinary and interdisciplinary practice, research, and theory, but you might also be interested to learn about the concept of transdisciplinary research and theory. The social work profession engages in all three, as described in Figure 2-1.

Figure 2-1. Comparison of multidisciplinary, interdisciplinary, and transdisciplinary as concepts


An example of transdisciplinarity is demonstrated in motivational interviewing. The principles and practice of motivational interviewing transcend disciplines: they are relevant and effective regardless of a practitioner's discipline and regardless of which discipline conducts the research that provides supporting evidence. An example of interdisciplinarity is when social workers, nurses, pediatricians, occupational therapists, respiratory therapists, and physical therapists work together to create a system for reducing infants' and young children's environmental exposure to heavy metal contamination (e.g., lead and mercury) in their home, childcare, and recreational settings and in food and water. An example of multidisciplinarity is when a pediatrician, nurse, occupational therapist, social worker, and physical therapist each deliver discipline-specific services to children with intellectual disabilities.

Here are examples of theories that you may have encountered or eventually will encounter in your career:

  • behavioral theory
  • cognitive theory
  • conflict theory
  • contact theory (groups)
  • critical race theory
  • developmental theory (families)
  • developmental theory (individuals)
  • feminist theory
  • health beliefs model
  • information processing theory
  • learning theory
  • lifecourse or lifespan theory
  • neurobiology theories
  • organizational theory
  • psychoanalytic theory
  • role theory
  • social capital model
  • social ecological theory
  • social learning theory
  • social network theory
  • stress vulnerability model
  • systems theory
  • theory of reasoned action/planned behavior
  • transtheoretical model of behavior change

Social work practitioners generate new theories in the field all the time, but these theories are rarely documented or systematically tested. Applying systematic research methods to test these practice-generated theories can help expand the social work knowledge base. Well-developed theories must be testable through the application of research methods, and they both supplement and complement other theories that have the support of strong evidence behind them. In addition, for theories to be relevant to social work, they need to:

  • have practice implications at one or more level of intervention—suggest principles of action;
  • be responsive to human diversity;
  • contribute to the promotion of social justice (Dudley, 2011).

Social Justice and the Grand Challenges for Social Work. In 2016, the American Academy of Social Work & Social Welfare (AASWSW) rolled out a set of 12 initiatives, challenging the social work profession to develop strategies for having a significant impact on a broad set of problems challenging the nation: “their introduction truly has the potential to be a defining moment in the history of our profession” (Williams, 2016, p. 67). The Grand Challenges directly relate to content presented in the Preamble to the Code of Ethics of the National Association of Social Workers (2017):

The primary mission of the social work profession is to enhance human well-being and help meet the basic human needs of all people, with particular attention to the needs and empowerment of people who are vulnerable, oppressed, and living in poverty. A historic and defining feature of social work is the profession’s dual focus on individual well-being in a social context and the well-being of society. Fundamental to social work is attention to the environmental forces that create, contribute to, and address problems in living (p. 1).

The ambitious Grand Challenges (http://aaswsw.org/grand-challenges-initiative/12-challenges/) call for a synthesis of research- and evidence-generating endeavors, social work education, and social work practice (at all levels), with promoting social justice and transforming society at the forefront of attention. None of the Grand Challenges can be achieved without collaboration between social work researchers, practitioners, key stakeholders/constituents, policy makers, and members of other professions and disciplines (see Table 2-2). Each of the papers published under the 12 umbrella challenges not only reviews evidence related to the challenge, but also identifies a research and action agenda for promoting change and achieving the stated challenge goals.

Table 2-2. 12 Grand Challenges for Social Work

Theory and its related research are presented in the literature of social work and other disciplines. Being able to locate the relevant literature is an important skill for social work professionals and researchers to master. The next chapter introduces some basic principles for identifying literature that informs us about theory and about the research that leads to the development of new theories and the testing of existing theories.

Using Data to Support Social Justice Advocacy

It is one thing to collect data, statistics, and information about the dimensions of a social problem; it is another to apply those data, statistics, and information in action to promote social change around an identified social justice cause. Advocacy has been an important tool and role for the social work profession since its earliest days. Data and empirical evidence should routinely support social workers' social justice advocacy efforts at the micro level, when working with specific client systems (case advocacy). Social justice advocacy also is a macro-level practice (cause advocacy), one that often involves the use of data to raise awareness about a cause, establish change goals, and evaluate the impact of change efforts.

Consider, for example, the impact of data on social justice advocacy related to opioid misuse and addiction across the United States. Data regarding the sharp upward trend in opioid-related deaths have had a powerful impact on public awareness, captivating the attention of mass media outlets. These are data from the Centers for Disease Control and Prevention (CDC) for the period 1999 to 2016 (see Figure 2-2). The death rate from opioid overdose increased markedly in the most recent years depicted for heroin, natural and semisynthetic opioids (morphine, codeine, hydrocodone, oxycodone), and synthetic opioids (fentanyl, fentanyl analogs, tramadol); the rate declined somewhat for deaths related to methadone, a highly controlled prescription medication used to treat opioid addiction (Hedegaard, Warner, & Miniño, 2017). In 2016 the overdose death rate was more than triple the 1999 rate and surpassed all other causes of death (including homicide and automobile crashes) for persons aged 50 or younger (Vestal, 2018).

The good news is that the rates dropped significantly across 14 states during 2017 as new policy approaches were implemented, including more widespread use of and access to naloxone (an opioid overdose reversal drug). The bad news is that opioid overdose death rates increased significantly across 5 states and the District of Columbia as fentanyl and related drugs became integrated into the illicit drug supply: “Nationally, the death toll is still rising” (Vestal, 2018, p. 1). The Ohio Department of Health, with Ohio being one of the 5 states with more than a 30% increase in opioid overdose deaths, now recommends that naloxone be used more widely in overdose situations, whether or not opioids are known to be involved.

Figure 2-2. Opioid drug overdose death rate trends 1999-2016, by opioid category


Having data of this sort available to argue the urgency of a cause is of tremendous value. Imagine the potential impact of data on social issues such as child maltreatment, intimate partner violence, human trafficking, and death by suicide. Imagine also the potential impact of data concerning disparities in health, incarceration, mental and traumatic stress disorders, and educational or employment achievement across members of different racial/ethnic, age, economic, and national origin groups. This type of evidence has the potential to be a powerful element in the practice of social justice advocacy.

Stop and Think

Follow the links below to specific AASWSW Grand Challenges and consider the ways that research evidence is being used to advocate for at least one of the following social justice causes:

  • Health Equity: Eradicating Health Inequalities for Future Generations
  • From Mass Incarceration to Smart Decarceration
  • Safe Children: Reducing Severe and Fatal Maltreatment
  • Ending Gender Based Violence: A Grand Challenge for Social Work
  • Prevention of Schizophrenia and Severe Mental Illness
  • Increasing Productive Engagement in Later Life
  • Strengthening the Social Responses to the Human Impacts of Environmental Change

Social Work 3401 Coursebook Copyright © by Dr. Audrey Begun is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.


How to Conceptualize a Research Project

Shaili Jain, M.D., Steven E. Lindley, M.D., Ph.D., and Craig S. Rosen, Ph.D.

The research process has three phases: the conceptual phase, the empirical phase, and the interpretative phase. In this chapter, we focus on the first phase: the conceptual phase—the part of the research process that determines which questions are to be addressed by the research and how research procedures are to be used as tools in finding the answers to these questions. Here we describe the various components of the conceptualization phase that need to be carefully considered before moving on to the empirical and interpretative phases of the research. Conceptualization involves simultaneously bringing together several considerations to identify a good research idea, i.e., an answerable research question that is worth answering. Components of this process include conducting a thorough search of the peer-reviewed literature, finding a research mentor and other collaborators, considering methodology and study design, and assessing feasibility. It should be noted that although we describe these various components in a linear fashion in the text, in reality the conceptualization phase is not a linear process and will require consideration of these components to varying degrees at various stages, depending upon evolving circumstances and the reader's unique strengths and weaknesses. Even though careful attention to all these components will require considerable time and effort on the part of the physician scientist, we consider this to be time well spent, as it will lay the groundwork for a successful research endeavor.





Jain, S., Lindley, S.E., & Rosen, C.S. (2013). How to Conceptualize a Research Project. In: Roberts, L.W. (ed.), The Academic Medicine Handbook. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-5693-3_30


New Report Examines Reproducibility and Replicability in Science, Recommends Ways to Improve Transparency and Rigor in Research

WASHINGTON – While computational reproducibility in scientific research is generally expected when the original data and code are available, lack of ability to replicate a previous study -- or obtain consistent results looking at the same scientific question but with different data -- is more nuanced and occasionally can aid in the process of scientific discovery, says a new congressionally mandated report from the National Academies of Sciences, Engineering, and Medicine. Reproducibility and Replicability in Science recommends ways that researchers, academic institutions, journals, and funders should help strengthen rigor and transparency in order to improve the reproducibility and replicability of scientific research.

Defining Reproducibility and Replicability

The terms “reproducibility” and “replicability” are often used interchangeably, but the report uses each term to refer to a separate concept. Reproducibility means obtaining consistent computational results using the same input data, computational steps, methods, code, and conditions of analysis. Replicability means obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data.

Reproducing research involves using the original data and code, while replicating research involves new data collection and similar methods used in previous studies, the report says. Even when a study was rigorously conducted according to best practices, correctly analyzed, and transparently reported, it may fail to be replicated.

“Being able to reproduce the computational results of another researcher starting with the same data and replicating a previous study to test its results facilitate the self-correcting nature of science, and are often cited as hallmarks of good science,” said Harvey Fineberg, president of the Gordon and Betty Moore Foundation and chair of the committee that conducted the study. “However, factors such as lack of transparency of reporting, lack of appropriate training, and methodological errors can prevent researchers from being able to reproduce or replicate a study. Research funders, journals, academic institutions, policymakers, and scientists themselves each have a role to play in improving reproducibility and replicability by ensuring that scientists adhere to the highest standards of practice, understand and express the uncertainty inherent in their conclusions, and continue to strengthen the interconnected web of scientific knowledge — the principal driver of progress in the modern world.”

Reproducibility

The committee's definition of reproducibility focuses on computation because most scientific and engineering research disciplines use computation as a tool, and the abundance of data and widespread use of computation have transformed many disciplines. However, this revolution is not yet uniformly reflected in how scientists use software and how scientific results are published and shared, the report says. These shortfalls have implications for reproducibility, because scientists who wish to reproduce research may lack the information or training they need to do so. When results are produced by complex computational processes using large volumes of data, the methods section of a scientific paper is insufficient to convey the necessary information for others to reproduce the results, the report says. Additional information related to data, code, models, and computational analysis is needed.

If sufficient additional information is available and a second researcher follows the methods described by the first researcher, one expects in many cases to obtain the same exact numeric values -- or bitwise reproduction. For some research questions, bitwise reproduction may not be attainable, and reproducible results could instead be obtained within an accepted range of variation (a code sketch following the recommendations list below makes this distinction concrete).

The evidence base to determine the prevalence of non-reproducibility in research is incomplete, and determining the extent of issues related to computational reproducibility across or within fields of science would be a massive undertaking with a low probability of success, the committee found. However, a number of systematic efforts to reproduce computational results across a variety of fields have failed in more than half of attempts made -- mainly due to insufficient detail on data, code, and computational workflow.

Replicability

One important way to confirm or build on previous results is to follow the same methods, obtain new data, and see if the results are consistent with the original. A successful replication does not guarantee that the original scientific results of a study were correct, however, nor does a single failed replication conclusively refute the original claims, the report says.

Non-replicability can arise from a number of sources. The committee classified sources of non-replicability into those that are potentially helpful to gaining knowledge and those that are unhelpful. Potentially helpful sources of non-replicability include inherent but uncharacterized uncertainties in the system being studied. These sources of non-replicability are a normal part of the scientific process, due to the intrinsic variation or complexity in nature, the scope of current scientific knowledge, and the limits of current technologies. In such cases, a failure to replicate may lead to the discovery of new phenomena or new insights about variability in the system being studied.

In other cases, the report says, non-replicability is due to shortcomings in the design, conduct, and communication of a study. Whether arising from lack of knowledge, perverse incentives, sloppiness, or bias, these unhelpful sources of non-replicability reduce the efficiency of scientific progress. Unhelpful sources of non-replicability can be minimized through initiatives and practices aimed at improving research design and methodology through training and mentoring, repeating experiments before publication, rigorous peer review, utilizing tools for checking analysis and results, and better transparency in reporting. Efforts to minimize avoidable and unhelpful sources of non-replicability warrant continued attention, the report says.

Researchers who knowingly use questionable research practices with the intent to deceive are committing misconduct or fraud. It can be difficult in practice to differentiate between honest mistakes and deliberate misconduct, because the underlying action may be the same while the intent is not. Scientific misconduct in the form of misrepresentation and fraud is a continuing concern for all of science, even though it accounts for a very small percentage of published scientific papers, the committee found.

Improving Reproducibility and Replicability in Research

The report recommends a range of steps that stakeholders in the research enterprise should take to improve reproducibility and replicability, including:

  • All researchers should include a clear, specific, and complete description of how the reported results were reached. Reports should include details appropriate for the type of research, such as a clear description of all methods, instruments, materials, procedures, measurements, and other variables involved in the study; a clear description of the analysis of data and the decisions made to exclude some data or include other data; and a discussion of the uncertainty of the measurements, results, and inferences.
  • Funding agencies and organizations should consider investing in research and development of open-source, usable tools and infrastructure that support reproducibility for a broad range of studies across different domains in a seamless fashion. Concurrent investments in outreach, to inform and train researchers on best practices and the use of these tools, would also be helpful.
  • Journals  should consider ways to ensure computational reproducibility for publications that make claims based on computations, to the extent ethically and legally possible.
  • The National Science Foundation should take steps to facilitate the transparent sharing and availability of digital artifacts, such as data and code, for NSF-funded studies -- including developing a set of criteria for trusted open repositories to be used by the scientific community for objects of the scholarly record, and endorsing or considering the creation of code and data repositories for long-term archiving and preservation of digital artifacts that support claims made in the scholarly record based on NSF-funded research, among other actions. (A minimal sketch of what recording such digital artifacts can look like appears after this list.)
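As one illustration of what transparent sharing of digital artifacts can involve in practice, here is a minimal sketch, in Python, of writing a provenance manifest alongside computational results: checksums of the input data and analysis code, the random seed, and the package environment. The report does not prescribe any particular tooling; the file names and manifest fields here are illustrative assumptions.

    # Minimal sketch of a provenance manifest to accompany shared results.
    # Names like "analysis.py" and "input_data.csv" are illustrative.
    import hashlib, json, platform, random, subprocess, time

    def sha256(path):
        """Checksum of a file, so others can verify they have the same artifact."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    SEED = 20190507          # fix and record the random seed used in the analysis
    random.seed(SEED)

    manifest = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "python": platform.python_version(),
        "platform": platform.platform(),
        "random_seed": SEED,
        "inputs": {"input_data.csv": sha256("input_data.csv")},
        "code": {"analysis.py": sha256("analysis.py")},
        # Freeze the exact package versions used for the analysis.
        "packages": subprocess.run(
            ["pip", "freeze"], capture_output=True, text=True
        ).stdout.splitlines(),
    }

    with open("provenance_manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)

Depositing such a manifest with the data and code in a trusted open repository gives a second researcher the information needed to reconstruct the conditions of the original analysis.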

Confidence in Science

Replicability and reproducibility, useful as they are in building confidence in scientific knowledge, are not the only ways to gain confidence in scientific results. Research synthesis and meta-analysis, for example, are valuable methods for assessing the reliability and validity of bodies of research, the report says. A goal of science is to understand the overall effect from a set of scientific studies, not to strictly determine whether any one study has replicated any other.

Among other related recommendations, the report says that people making personal or policy decisions based on scientific evidence should be wary of making a serious decision based on the results of a single study, no matter how promising. By the same token, they should not take a single new contrary study as refutation of scientific conclusions supported by multiple lines of previous evidence.

The study -- undertaken by the Committee on Reproducibility and Replicability in Science -- was sponsored by the National Science Foundation and the Alfred P. Sloan Foundation. The National Academies are private, nonprofit institutions that provide independent, objective analysis and advice to the nation to solve complex problems and inform public policy decisions related to science, technology, and medicine. They operate under an 1863 congressional charter to the National Academy of Sciences, signed by President Lincoln. For more information, visit nationalacademies.org.

Resources: nationalacademies.org/ReproducibilityInScience
Follow us: Twitter @theNASEM, Instagram @thenasem, Facebook @NationalAcademies
Follow the conversation using: #ReproducibilityinScience

Contact: Kacey Templin, Media Relations Officer
Office of News and Public Information
202-334-2138; e-mail [email protected]

Featured Report


Reproducibility and Replicability in Science

One of the pathways by which the scientific community confirms the validity of a new scientific discovery is by repeating the research that produced it. When a scientific effort fails to independently confirm the computations or results of a previous study, some fear that it may be a symptom of a lack of rigor in science, while others argue that such an observed inconsistency can be an important precursor to new discovery.

Concerns about reproducibility and replicability have been expressed in both scientific and popular media. As these concerns came to light, Congress requested that the National Academies of Sciences, Engineering, and Medicine conduct a study to assess the extent of issues related to reproducibility and replicability and to offer recommendations for improving rigor and transparency in scientific research.

Reproducibility and Replicability in Science defines reproducibility and replicability and examines the factors that may lead to non-reproducibility and non-replicability in research. Unlike the typical expectation of reproducibility between two computations, expectations about replicability are more nuanced, and in some cases a lack of replicability can aid the process of scientific discovery. This report provides recommendations to researchers, academic institutions, journals, and funders on steps they can take to improve reproducibility and replicability in science.


  • Report Highlights
  • 10 Things to Know About Reproducibility and Replicability
  • Reproducibility and Replicability in Science: Highlights for Social and Behavioral Scientists
  • Improving Reproducibility and Replicability in Research - Highlights from Three National Academies Reports
