ad hoc hypothesis

Quick reference.

Hypothesis adopted purely for the purpose of saving a theory from difficulty or refutation, but without any independent rationale.

From: ‘ad hoc hypothesis’ in The Oxford Dictionary of Philosophy

Subjects: Philosophy

© Oxford University Press, 2023. From Oxford Reference (www.oxfordreference.com); accessed 22 April 2024.



Prediction versus Accommodation

In the early philosophical literature, a ‘prediction’ was an empirical consequence of a theory that had not yet been verified at the time the theory was constructed; an ‘accommodation’ was one that had. The view that predictions are superior to accommodations in the assessment of scientific theories is known as ‘predictivism’. More precisely, predictivism is commonly understood as the claim that evidence confirms a theory more strongly when predicted than when accommodated. Much ink has been spilled modifying the concept of ‘prediction’, explaining why predictivism is or is not true, and debating whether the history of science and, more recently, of logic (Martin and Hjortland 2021) reveal that scientists are predictivist in their assessment of theories. The debate over predictivism also figures importantly in the debate about scientific realism.

  • 1. Historical Introduction
  • 2. Ad Hoc Hypotheses
  • 3. Early Characterizations of Novelty
  • 4. A Predictivist Taxonomy
  • 5. The Null Support Thesis
  • 6. Contemporary Theories of Predictivism
  • 6.1 Reliable Discovery Methods
  • 6.2 The Fudging Explanation
  • 6.3 Arbitrary and Non-Arbitrary Conjunctions
  • 6.4 Severe Tests
  • 6.5 Conditional and Unconditional Confirmation
  • 6.6 The Archer Analogy
  • 6.7 The Akaike Approach
  • 6.8 Endorsement Novelty and the Confirmation of Background Beliefs
  • 7. Anti-Predictivism
  • 8. The Realist/Anti-Realist Debate
  • Other Internet Resources
  • Related Entries

1. Historical Introduction

In the eighteenth and nineteenth centuries there was a passionate debate about scientific method. At stake was the ‘method of hypothesis’, which postulated hypotheses about unobservable entities that ‘saved the phenomena’ and thus were arguably true (see Laudan 1981a). Critics of this method pointed out that hypotheses could always be adjusted artificially to accommodate any amount of data. But it was noted that some such theories had the further virtue of generating specific predictions of heretofore unobserved phenomena; thus scientists like John Herschel and William Whewell argued that hypotheses that saved the phenomena could be justified when they were confirmed by such ‘novel’ phenomena. Whewell maintained that predictions carry special weight because a theory that correctly predicts a surprising result cannot have done so by chance, and thus must be true (Whewell 1849 [1968: 294]). It thus appeared that predicted evidence confirmed theory more strongly than accommodated evidence. But John Stuart Mill, in his debate with Whewell, categorically denied this claim, affirming that

(s)uch predictions and their fulfilment are, indeed, well calculated to impress the ignorant vulgar, whose faith in science rests solely upon similar coincidences between its prophecies and what comes to pass. But it is strange that any considerable stress should be laid upon such a coincidence by scientific thinkers. (1843, Vol. 2, 23)

John Maynard Keynes provides a simple account of why predictivism has a misleading appearance of truth in a brief passage in his book A Treatise on Probability:

The peculiar virtue of prediction or predesignation is altogether imaginary… The plausibility of the argument [for predictivism] is derived from a different source. If a hypothesis is proposed a priori, this commonly means that there is some ground for it, arising out of our previous knowledge, apart from the purely inductive ground, and if such is the case the hypothesis is clearly stronger than one which reposes on inductive grounds only. But if it is merely a guess, the lucky fact of its preceding some or all of the cases which verify it adds nothing whatever to its value. It is the union of prior knowledge, with the inductive grounds which arise out of the immediate instances, that lends weight to any hypothesis, and not the occasion on which the hypothesis is first proposed. (1921: 305–306) [1]

By ‘the inductive ground’ for a hypothesis Keynes clearly means the data that the hypothesis fits. His point is that when a theorist proposes a hypothesis before testing it, typically some other (presumably theoretical) form of support prompted the proposal. Thus hypotheses which are proposed without being built to fit the empirical data (which they are subsequently shown to entail) are typically better supported than hypotheses which are proposed merely to fit the data, for the latter lack the independent support possessed by the former. Predictivism appears plausible only because the role of this preliminary, hypothesis-inducing support is being suppressed.

Karl Popper is probably the most famous proponent of prediction in the history of philosophy. In his lecture “Science: Conjectures and Refutations” Popper recounts his boyhood attempt to grapple with the question “When should a theory be ranked as scientific?” (Popper 1963: 33–65). Popper had become convinced that certain popular theories of his day, including Marx’s theory of history and Freudian psychoanalysis, were pseudosciences. Popper deemed the problem of distinguishing scientific from pseudoscientific theories ‘the demarcation problem’. His solution to the demarcation problem, as is well known, was to identify the quality of falsifiability (or ‘testability’) as the mark of the scientific theory.

The pseudosciences were marked, Popper claimed, by their vast explanatory power. They could explain not only all the relevant actual phenomena the world presented but any conceivable phenomena that might fall within their domain. This was because the explanations offered by the pseudosciences were sufficiently malleable that they could always be adjusted ex post facto to explain anything. Thus the pseudosciences never ran the risk of being inconsistent with the data. By contrast, a genuinely scientific theory made specific predictions about what should be observed and thus ran the risk of falsification. Popper emphasized that what established the scientific character of relativity theory was that it ‘stuck its neck out’ in a way that the pseudosciences never did.

Like Whewell and Herschel, Popper appeals to the predictions a theory makes as a way of separating the illegitimate uses of the method of hypothesis from its legitimate uses. But while Whewell and Herschel pointed to predictive success as a necessary condition for the acceptability of a theory that had been generated by the method of hypothesis, Popper focuses in his solution to the demarcation problem not on the success of a prediction but on the fact that the theory made the prediction at all. Of course, there was for Popper an important difference between scientific theories whose predictions were confirmed and those whose predictions were falsified. Falsified theories were to be rejected, whereas theories that survived testing were to be ‘tentatively accepted’ until falsified. Popper did not hold, with Whewell and Herschel, that successful predictions could constitute legitimate proof of a theory; in fact Popper held that it was impossible to show that a theory was even probable based on the evidence, for he embraced Hume’s critique of inductive logic, which made evidential support for the truth of theories impossible. Thus, one should ascribe to Popper a commitment to predictivism only in the broad sense that he held predictions to be superior to accommodations; he did not hold that predictions confirmed theory more strongly than accommodations. It would ultimately prove impossible for Popper to reconcile his claim that a theory which enjoyed predictive success ought to be ‘tentatively accepted’ with his anti-inductivism (see, e.g., Salmon 1981).

Imre Lakatos (1970, 1971) proposed an account of scientific method, his ‘methodology of scientific research programmes’, which was a development of Popper’s approach. A scientific research programme is constituted by a ‘hard core’ of propositions, retained throughout the life of the programme, together with a ‘protective belt’ of auxiliary hypotheses that are adjusted so as to reconcile the hard core with the empirical data. The attempt on the part of the proponents of the research programme to reconcile the programme with empirical data produces a series of theories \(T_1\), \(T_2\),… \(T_n\) where, at least in some cases, \(T_{i+1}\) serves to explain some data that is anomalous for \(T_i\). Lakatos held that a research programme is ‘theoretically progressive’ insofar as each new theory predicts some novel, hitherto unexpected fact, and ‘empirically progressive’ to the extent that its novel empirical content is corroborated, that is, if each new theory leads to the discovery of “some new fact” (Lakatos 1970: 118). Lakatos thus offered a new solution to the demarcation problem: a research programme is pseudoscientific to the extent that it is not theoretically progressive. Theory evaluation is construed in terms of competing research programmes: a research programme defeats a rival by proving more empirically progressive over the long run.

2. Ad Hoc Hypotheses

According to Merriam-Webster’s Collegiate Dictionary, [2] something is ‘ad hoc’ if it is ‘formed or used for specific or immediate problems or needs’. An ad hoc hypothesis, then, is one formed to address a specific problem, such as the problem of immunizing a particular theory from falsification by anomalous data (and thereby accommodating that data). Consequently, what makes a hypothesis ad hoc in the ordinary English sense of the term has nothing to do with the content of the hypothesis but simply with the motivation of the scientist who proposes it, and it is unclear why there would be anything suspicious about such a motivation. Nonetheless, ad hoc hypotheses have long been suspect in discussions of scientific method, a suspicion that resonates with the predictivist’s skepticism about accommodation.

For Popper, a conjecture is ad hoc “if it is introduced…to explain a particular difficulty, but…cannot be tested independently” (Popper 1974: 986). Thus Popper’s conception of ad hocness added to the ordinary English meaning a further requirement—in the case of an ad hoc hypothesis that was simply introduced to explain a single phenomenon, the ad hoc hypothesis has no testable consequences other than that phenomenon. In the case of an ad hoc theory modification introduced to resolve an anomaly for a theory, the modified theory had no testable consequences other than those of the original theory.

Popper offered two explications of why ad hoc hypotheses were suspect. One was that if we offer T as an explanation of f, but then cite f as the only reason we have to believe T, we have engaged in reasoning that is suspect for reasons of circularity (Popper 1972: 192–3). This was arguably fallacious on Popper’s part: a circular proof would offer one proposition, p, in support of a second proposition q, when q has already been offered in support of p. But in the above example, while f is offered as evidence for T, T is offered as an explanation of (not as evidence for) f, and thus there is no circular reasoning (Bamford 1993: 338).

Popper’s other explanation of why ad hoc hypotheses were regarded with suspicion was that they ran counter to the aim of science, which for Popper included the proposal of theories with increasing empirical content, viz., increasing falsifiability. Ad hoc hypotheses, for Popper, suffer from a lack of independent testability and thus reduce (or at least fail to increase) the testability of the theories they modify (cf. above). However, Popper’s claim that the process of modifying a theory ad hoc tends to lead to insufficient falsifiability and is ‘unscientific practice’ has been challenged (e.g., Bamford 1993: 350).

Subsequent authors argued that a hypothesis proposed for the sake of immunizing a theory from falsification could be ‘suspicious’ for various reasons, and thus could be ‘ad hoc’ in various ways. Zahar (1973) argued that a hypothesis is ad hoc₁ if it has no novel consequences as compared with its predecessor (i.e., is not independently testable), ad hoc₂ if none of its novel predictions have actually been verified (either because they have not yet been tested or because they have been falsified), and ad hoc₃

if it is obtained from its predecessor through a modification of the auxiliary hypotheses which does not accord with the spirit of the heuristic of the programme. (1973: 101)

Beyond Popper’s criterion of a lack of independent testability, then, a hypothesis introduced to accommodate some datum could be ad hoc because it is simply unconfirmed (ad hoc₂) or because it fails to cohere with the basic commitments of the research programme in which it is proposed (ad hoc₃).

Another approach proposes that a hypothesis H introduced into a theory T in response to an experimental result E is ad hoc if it is generally unsupported and appears to be a superficial attempt to paper over deep problems with a theory that is actually in need of substantive revision. To level the charge of ad hocness against a hypothesis is thus to direct serious skepticism toward the very theory the hypothesis was meant to rescue. This concept of ad hocness arguably makes sense of Einstein’s critique of the Lorentz-Fitzgerald contraction hypothesis as an ‘ad hoc’ supplement to the aether theory, and of Pauli’s postulation of the neutrino as an ad hoc rescue of classical quantum mechanics (Leplin 1975, 1982; for further discussion see Grünbaum 1976).

It seems clearly true that the scientific community’s judgment about whether a hypothesis is ad hoc can change. Given this revisability, and the aesthetic dimension of theory evaluation (which leaves assessment to some degree ‘in the eye of the beholder’) there may be no particular point to embracing a theory of ad hocness, if by the term ‘ad hoc’ we mean ‘illegitimately proposed’ (Hunt 2012).

3. Early Characterizations of Novelty

Popper wrote that

Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory—an event which would have refuted the theory. (1963: 36)

Popper (and subsequently Lakatos) thereby endorsed a temporal condition of novelty: a prediction counts as novel if it is not known to be true (or is expected to prove false) at the time the theory is constructed. But it was fairly obvious that this made important questions of confirmation turn implausibly on the time at which certain facts were known.

Thus Zahar proposed that a fact is novel “if it did not belong to the problem-situation which governed the construction of the hypothesis” (1973: 103). This form of novelty has been deemed ‘problem-novelty’ (Gardner 1982: 2). But in the same paper Zahar purports to exemplify this concept of novelty by referring to the case in which Einstein did not use the known behavior of Mercury’s perihelion in constructing his theory of relativity. [3] Gardner notes that this latter conception of novelty, which he deemed ‘use-novelty’, is distinct from problem-novelty (Gardner 1982: 3). Evidence is use-novel for T if T was not built to fit that evidence (whether or not it was part of the relevant ‘problem-situation’ the theory was intended to address). In subsequent literature, the so-called heuristic conception of novelty has been identified with use-novelty; it was further articulated in Worrall 1978 and 1985. [4]

Another approach argues that a novel consequence of a theory is one that was not known to the theorist at the time she formulated the theory. This seems like a version of the temporal conception, but it appeals implicitly to the heuristic conception: if a theorist knew of a result prior to constructing a theory which explains it, it may be difficult to determine whether she somehow tailored the theory to fit the fact (e.g., she may have done so unconsciously). A knowledge-based conception is thus the best we can do to handle this difficulty (Gardner 1982). [5]

The heuristic conception is, however, deeply controversial—because it makes the epistemic assessment of theories curiously dependent on the mental life of their constructors, specifically on the knowledge and intentions of the theorist to build a theory that accommodated certain data rather than others. Leplin’s comment is typical:

The theorist’s hopes, expectations, knowledge, intentions, or whatever, do not seem to relate to the epistemic standing of his theory in a way that can sustain a pivotal role for them…. (1997: 54)

(For similar comments see Gardner 1982: 6; Thomason 1992: 195; Schlesinger 1987: 33; Achinstein 2001: 210–230; and Collins 1994.)

Another approach notes that scientists operate with competing theories and that the role of novel confirmations is to decide between them. Thus, a consequence of a theory T is a ‘novel prediction’ if it is not a consequence of the best available theory actually present in the field other than T; e.g., the prediction of Mercury’s perihelion behavior by Einstein’s relativity theory constituted a novel prediction because it was not a (straightforward) consequence of Newtonian mechanics (Musgrave 1974: 18). Operating in a Lakatosian framework, Frankel claims a consequence is novel with respect to a theory and its research programme if it is not similar to a fact which has already been used by members of the same research programme to support a theory designed to solve the same problems as the theory in question (1979: 25). Also in a Lakatosian framework, Nunan claims that a consequence is novel if it has not already been used to support, or cannot readily be explained in terms of, a theory entertained in some rival research programme (1984: 279). [6]

There are clearly multiple forms of novelty, and it is generally recognized that a fact can be ‘novel’ in multiple senses; as we will see, some carry more epistemic weight than others (Murphy 1989).

4. A Predictivist Taxonomy

Global predictivism holds that predictions are always superior to accommodations, while local predictivism holds that they are superior only in certain cases. Strong predictivism asserts that prediction is intrinsically superior to accommodation, whereas weak predictivism holds that predictive success is epistemically relevant because it is symptomatic of other features that have epistemic import. The distinction between strong and weak predictivism cross-classifies with the distinctions between types of novelty. For example, one could maintain that temporal predictions are intrinsically superior to temporal accommodations (strong temporal predictivism) or that temporal predictions are symptomatic of some other good-making feature of theories (weak temporal predictivism; Hitchcock and Sober 2004: 3–5). These distinctions will be further illustrated below.

5. The Null Support Thesis

A version of global strong heuristic predictivism is the null support thesis, which holds that theories never receive confirmation from evidence they were built to fit, precisely because of how they were built. This thesis has been attributed to Bacon and Descartes (Howson 1990: 225). Popper and Lakatos also subscribe to it, though it is important to remember that they do not recognize any form of confirmational support, even from successful predictions. But others who maintained that successful predictions do confirm theories nonetheless endorsed the null support thesis. Giere provides the following argument:

If the known facts were used in constructing the model and were thus built into the resulting hypothesis…then the fit between these facts and the hypothesis provides no evidence that the hypothesis is true [since] these facts had no chance of refuting the hypothesis. (1984: 161; Glymour 1980: 114 and Zahar 1983: 245 offer similar arguments)

The idea is that the way the theory was built provided an illegitimate protection against falsification by the facts, and hence the facts cannot support the theory. Others, however, find this argument specious, noting that since the content of the hypothesis is fixed, it makes no sense to think of any facts as having a ‘chance’ to falsify the theory. The theory says what it says, and any particular fact either refutes it or it doesn’t.

Giere has confused what is in effect a random variable (the experimental setup or data source E together with its set of distinct possible outcomes) with one of its values (the outcome e)… Moreover, it makes perfectly good sense to say that E might well have produced an outcome other than the one, e, it did as a matter of fact produce. (Howson 1990: 229; see also Collins 1994: 220)

Thus Giere’s argument collapses.

Howson argued in a series of papers (1984, 1988, 1990) that the null support thesis is falsified using simple examples, such as the following:

An urn contains an unknown number of black and white tickets, where the proportion p of black tickets is also unknown. The data consists simply in a report of the relative frequency \(r/k\) of black tickets in a large number k of draws with replacement from the urn. In the light of the data we propose the hypothesis that \(p = (r/k)+\epsilon\) for some suitable \(\epsilon\) depending on k . This hypothesis is, according to standard statistical lore, very well supported by the data from which it is clearly constructed. (1990: 231)

In this case there is, Howson notes, a background theory that supplies a model of the experiment (it is a sequence of Bernoulli trials, viz., a sequence of trials with two outcomes in which the probability of getting either outcome is the same on each trial; it leaves only a single parameter to be evaluated). As long as we have good reason to believe that this model applies, our inference to the high probability of the hypothesis is a matter of standard statistical methodology, and the null support thesis is refuted.
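Howson's urn case is easy to simulate. The sketch below uses an illustrative true proportion, number of draws, and ε (none of these values come from the text) and checks how often the hypothesis constructed from the data itself, \(|p - r/k| \le \epsilon\), is satisfied by the true p:

```python
import random

random.seed(0)

def relative_frequency(p_true, k):
    """Relative frequency r/k of black tickets in k draws with replacement."""
    return sum(random.random() < p_true for _ in range(k)) / k

# Illustrative values (not from the text): true proportion, draws, epsilon.
p_true, k, epsilon = 0.37, 10_000, 0.02

# Repeat the experiment: each time, build the hypothesis |p - r/k| <= epsilon
# from the data, then check whether the true p satisfies it.
trials = 200
covered = sum(abs(relative_frequency(p_true, k) - p_true) <= epsilon
              for _ in range(trials))
coverage = covered / trials
print(coverage)  # close to 1: the data-built hypothesis is well supported
```

With these numbers the standard error of \(r/k\) is about 0.005, so ε = 0.02 is roughly four standard deviations; standard statistical lore then licenses high confidence in the constructed hypothesis, just as Howson says, contrary to the null support thesis.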

It has been argued that one of the limitations of Bayesianism is that it is fatally committed to the (clearly false) null support thesis (Glymour 1980). The standard Bayesian condition by which evidence e supports h is given by the inequality \(p(h\mid e) \gt p(h)\). But where e is known (and thus \(p(e) = 1\)), we have \(p(h\mid e) = p(h)\). This came to be known as the ‘Bayesian problem of old evidence’. Howson (1984) noted that this problem could be overcome by selecting a probability function \(p^*\) based on the assumption that e was not known; thus even if \(p(h\mid e) = p(h)\), it could still hold that \({p^*}(h\mid e) \gt {p^*}(h)\). There followed an extensive literature on the old evidence problem which will not be summarized here (see, e.g., Christensen 1999; Eells & Fitelson 2000; Barnes 1999, 2008: Ch. 7; and Hartmann & Fitelson 2015).
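The old-evidence problem can be made concrete with toy numbers (all probabilities below are illustrative, not drawn from the literature). Relative to a counterfactual function \(p^*\) chosen as if e were not yet known, e confirms h; but for an agent who already knows e, conditioning on e is idle:

```python
# Toy model of Howson's fix for the old-evidence problem.  All numbers
# below are illustrative.  p_star is the counterfactual function chosen
# as if e were not yet known.
p_star_h = 0.3             # counterfactual prior on hypothesis h
p_star_e_given_h = 0.9     # h makes e likely
p_star_e_given_not_h = 0.2

# Total probability and Bayes's theorem under p_star:
p_star_e = (p_star_h * p_star_e_given_h
            + (1 - p_star_h) * p_star_e_given_not_h)
p_star_h_given_e = p_star_h * p_star_e_given_h / p_star_e

# Relative to p_star, e confirms h: p*(h|e) > p*(h).
print(round(p_star_h, 3), round(p_star_h_given_e, 3))

# But for an agent who already knows e, p(e) = 1, so conditioning is idle:
p_h = p_star_h_given_e     # current credence, e already absorbed
p_h_given_e = p_h * 1.0 / 1.0
print(p_h_given_e == p_h)  # True: old evidence provides no further boost
```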

6. Contemporary Theories of Predictivism

6.1 Reliable Discovery Methods

Patrick Maher (1988, 1990, 1993) presented a seminal thought experiment and a Bayesian analysis of its predictivist implications.

The thought experiment involves two scenarios. In the first, a subject (the accommodator) is presented with E, a sequence of 99 coin flip outcomes forming an apparently random sequence of heads and tails. The accommodator is then instructed to tell us the outcome of the first 100 flips; he responds by reciting E and then adding the prediction that the 100th toss will be heads. The conjunction of E with this last prediction is T. In the other scenario, another subject (the predictor) is asked to predict the first 100 flip outcomes without witnessing any of them; the predictor endorses theory T. Thereafter the coin is flipped 99 times, E is established, and the predictor’s first 99 predictions are confirmed. The question is in which of these two scenarios T is better confirmed. It is strongly intuitive that T is better confirmed in the predictor’s scenario than in the accommodator’s, suggesting that predictivism holds true in this case. If we allow ‘O’ to assert that evidence E was input into the construction of T, predictivism asserts:

(1) \(p(T \mid E \wedge \neg O) \gt p(T \mid E \wedge O)\)

Maher argues that the successful prediction of the initial 99 flips constitutes persuasive evidence that the predictor ‘has a reliable method’ for making predictions of coin flip outcomes. T’s consistency with E in the case of the accommodator provides no particular evidence that the accommodator’s method of prediction is reliable; thus we have no particular reason to endorse his prediction about the 100th flip. Allowing R to assert that the method in question is reliable, and \(M_T\) that method M generated hypothesis T, this amounts to:

(2) \(p(R \mid M_T \wedge E \wedge \neg O) \gt p(R \mid M_T \wedge E \wedge O)\)

Maher (1988) provides a rigorous proof of (2), which is shown to entail (1) on various assumptions.

Maher (1988) makes the simplifying assumption that any method of prediction used by a predictor is either completely reliable (this is the claim abbreviated by ‘R’) or no better than a random method (\(\neg R\)). (Maher [1990] shows that this assumption can be surrendered and a continuum of degrees of reliability of scientific methods assumed; the predictivist result is still generated.) In qualitative terms, where M generates T (and thus predicts E) without input of evidence E, we should infer that it is much more likely that the method that generated T is reliable than that E just happened to turn out true though the method was no better than random. In other words, we judge that we are much more likely to stumble on a subject using a reliable method M of coin flip prediction than on a sequence of 99 true flip predictions that were merely lucky guesses, because \(p(E \mid M_T \wedge R) \gg p(E \mid M_T \wedge \neg R)\): a random method yields 99 correct predictions with probability only \((1/2)^{99}\).

Maher has articulated a weak heuristic predictivism, because he claims that predictive success is symptomatic of the use of a reliable discovery method. [7]
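Maher's reliability inference can be put in numbers. Below is a minimal sketch under his simplifying assumption that the predictor's method is either completely reliable (R) or no better than random (\(\neg R\)); the prior probability assigned to R is an illustrative guess, not a value from Maher:

```python
from fractions import Fraction

# Prior that a randomly encountered predictor uses a reliable method
# (illustrative guess; even a tiny prior suffices).
p_R = Fraction(1, 1_000_000)

# Likelihood of E: 99 correct flip predictions made in advance.
p_E_given_R = Fraction(1)               # a reliable method gets all 99 right
p_E_given_not_R = Fraction(1, 2) ** 99  # random guessing: (1/2)^99

# Bayes's theorem: posterior that the method is reliable, given E.
p_E = p_R * p_E_given_R + (1 - p_R) * p_E_given_not_R
p_R_given_E = p_R * p_E_given_R / p_E

print(float(p_R_given_E))  # overwhelmingly close to 1
```

Even with a one-in-a-million prior, the posterior for R is within roughly \(10^{-24}\) of 1, which captures the sense in which 99 lucky guesses are far less probable than an encounter with a reliable method.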

For critical discussion of Maher’s theory of predictivism see Howson and Franklin 1991 (and Maher’s 1993 reply); Barnes 1996a,b; Lange 2001; Harker 2006; and Worrall 2014. [8]

6.2 The Fudging Explanation

It was noted above that ad hoc hypotheses stand under suspicion for various reasons, one of which was that a hypothesis proposed to resolve a particular difficulty may not cohere well with the theory it purports to save or with relevant background beliefs. [9] This can happen when there is no way to resolve the difficulty that is wholly ‘natural’ from the standpoint of the theory itself or of operative criteria of theory choice. For example, the phlogiston theory claimed that substances emitted phlogiston while burning. However, it was established that some substances actually gained weight while burning. To accommodate this phenomenon it was proposed that phlogiston had negative weight, but that hypothesis was clearly ad hoc in the sense of failing to cohere with the background belief that substances simply do not have negative weight, and with the knowledge that many objects lost weight when burned (Partington & McKie 1938a: 33–38).

Thus the ‘fudging explanation’ defends predictivism by pointing out that the process of accommodation lends itself to the proposal of hypotheses that do not cohere naturally with operative constraints on theory choice, while successful predictions are immune from this worry (Lipton 1990, 1991: Ch. 8). Of course, it is an important question whether scientists actually rely on the fact that evidence was predicted (or accommodated) in their assessment of theories: if a theory was fudged to accommodate some datum, couldn’t the scientist simply note that the fudged theory suffers a defect of coherence and pay no attention to whether the datum was accommodated or predicted? Some argue, however, that scientists are imperfect judges of such coherence; a scientist who accommodates some datum may think his accommodation is fully coherent, while his peers may have a more accurate and objective view that it is not. The scientist’s ‘assessed support’ for his proposed accommodation may thus fail to coincide with its ‘objective support’, and an assessor might rely on the fact that evidence was accommodated as evidence that the theory was fudged (or conversely, that evidence was predicted as evidence that it was not fudged; Lipton 1991: 150f).

6.3 Arbitrary and Non-Arbitrary Conjunctions

Lange (2001) offers an alternate interpretation of the coin flip example, on which the process of accommodation (unlike prediction) tends to generate theories that are not strongly supported by confirming data. He imagines a ‘tweaked’ version of the coin flip example in which the initial 99 outcomes form a strict alternating sequence ‘tails heads tails heads…’ (instead of the ‘apparently random sequence’ of outcomes in the original case). Again we imagine a predictor who correctly predicts 99 outcomes in advance and an accommodator who witnesses them. Both the predictor and the accommodator predict that the 100th outcome will be tails. Now there is little or no difference in our assessed probability that either subject has correctly predicted the 100th outcome.

This suggests that the intuitive difference between Maher’s original pair of examples does not reflect a difference between prediction and accommodation per se. (Lange 2001: 580)

Lange’s analysis appeals to what Goodman called an ‘arbitrary conjunction’—the mark of which is that

establishment of one component endows the whole statement with no credibility that is transmitted to other component statements. (1983: 68–9)

An example of an arbitrary conjunction is “The sun is made of helium and August 3rd 2017 falls on a Thursday and 17 is a prime number”. In the original coin flip case, we judge that H is weakly supported in the accommodator’s scenario because we judge that the apparently random sequence of outcomes is probably an arbitrary conjunction; thus the fact that the initial 99 conjuncts are confirmed implies almost nothing about what the 100th outcome will be. But the success of the predictor in predicting the initial 99 outcomes strongly implies that the sequence is not an arbitrary conjunction after all:

(w)e now believe it more likely that the agent was led to posit this particular sequence by way of something we have not noticed that ties the sequence together—that would keep it from being a coincidence that the hypothesis is accurate to the 100th toss…. (Lange 2001: 581)

Having judged it not to be an arbitrary conjunction, we are now prepared to recognize the first 99 outcomes as strongly confirming the prediction in the 100th case. What accounts for the difference between the two scenarios, in other words, is not primarily whether E was predicted or accommodated, but whether we judge H to be an arbitrary conjunction, and thus whether E provides support for the remaining portion of H.

Thus in Lange’s tweaked case, the non-existence of the predictivist effect is due to the fact that it is clear from the initial 99 flips that the sequence is not an arbitrary conjunction—thus E confirms H equally strongly in both scenarios.
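Lange’s diagnosis can be given a rough Bayesian rendering. The sketch below uses invented priors and considers just two hypotheses about the coin: an ‘arbitrary’ one (independent fair flips) and a ‘patterned’ one (a rule, such as strict alternation, that fixes every outcome). Once 99 outcomes match the rule, the posterior on the pattern hypothesis swamps everything else, so the 100th outcome is confidently expected whether the sequence was predicted or accommodated:

```python
# A rough Bayesian rendering of Lange's diagnosis (priors invented for
# illustration). Two hypotheses about the coin: 'arbitrary' (independent
# fair flips) vs. 'patterned' (a rule, e.g. strict alternation, that
# fixes every outcome).
prior_pattern = 0.01
prior_arbitrary = 0.99

# Likelihood of the particular observed 99-outcome sequence:
like_arbitrary = 0.5 ** 99   # any specific sequence of 99 fair flips
like_pattern = 1.0           # the rule dictates exactly this sequence

# Posterior that a rule governs the sequence, given 99 matching outcomes:
num = prior_pattern * like_pattern
post_pattern = num / (num + prior_arbitrary * like_arbitrary)

# Probability that the 100th toss continues the pattern (vs. a fair flip):
p_next = post_pattern * 1.0 + (1 - post_pattern) * 0.5

print(post_pattern > 0.999, p_next > 0.999)  # True True
```

However the priors are set, the likelihood ratio of roughly \(2^{99}\) in favor of the pattern hypothesis makes the conclusion insensitive to the invented numbers.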

Lange goes on to suggest that in actual science the practice of constructing a hypothesis by way of accommodating known evidence has a tendency to generate arbitrary conjunctions. Thus Lorentz’s contraction hypothesis, when appended to his electrodynamics to accommodate the failure to detect optically any motion with respect to the aether, resulted in an arbitrary conjunction (since evidence that supported the contraction hypothesis did not support the electrodynamics, or vice versa)—essentially for this reason, Lange argues, it was rejected by Einstein as ad hoc. When evidence is predicted by a theory, by contrast, this is typically because the theory is not an arbitrary conjunction. The evidential significance of prediction and accommodation for Lange is that they tend to be correlated (negatively and positively) with the construction of theories that are arbitrary conjunctions. Lange’s view might thus be classed as a weak heuristic predictivism, though Lange never takes a stand on whether scientists actually rely on such correlations in assessing theories.

For critical discussion of Lange’s theory see Worrall 2014: 59–61 and Harker 2006: 317f.

Deborah Mayo has argued (particularly in Mayo 1991, 1996, and 2014) that the intuition that predictivism is true derives from a premium on severe tests of hypotheses. A test of a hypothesis H is severe to the extent that H is unlikely to pass that test if H is false. Intuitively, if a novel consequence N is shown to follow from H, and the probability of N on the assumption \({\sim}H\) is very low (precisely because N is novel), then testing for N would seem to count as a severe test of H, and a positive outcome should strongly support H. Here novelty and severity appear to coincide—but Mayo observes that there are cases in which they come apart. For example, it has seemed to many that if H is built to fit some body of evidence E then the fact that H fits E does not support H, because this fit does not constitute H’s having survived a severe test (or a test at all). One of Mayo’s central objectives is to expose the fallacies that this latter reasoning involves.

Giere (1984: 161, 163) affirms that evidence which H was built to fit cannot support H because, given how H was built, it was destined to fit that evidence. Mayo summarizes his reasoning as follows:

  • (1) If H is use-constructed, then a successful fit is assured no matter what.

But Mayo notes that ‘no matter what’ can be interpreted in two ways: (a) no matter what the data are, and (b) no matter whether H is true or false. (1) is true when interpreted as (a), but in order to establish that accommodated evidence fails to support H (as Giere intends) (1) must be interpreted as (b). However, (1) is false when so interpreted. Mayo (1996: 271) illustrates this with a simple example: let the evidence e be a list of SAT scores from students in a particular class. Use this evidence to compute the average score x, and set h = the mean SAT score for these students is x. Now of course h has been use-constructed from e. It is true that whatever mean score was computed would fit the data no matter what the data are—but it is hardly true that h would have fit the evidence no matter whether h was true or false. If h were false it would not fit the data, because the construction procedure guarantees that the hypothesis it outputs is true of the data. Thus h has passed a maximally severe test: it is virtually impossible for h to fit the data if h is false—despite the fact that h is built to fit e.
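Mayo’s SAT example can be made concrete in a few lines of code. The scores below are invented; the point is just that the use-construction procedure cannot output a false hypothesis, which is what makes the ‘test’ maximally severe:

```python
# Mayo's SAT example: a use-constructed hypothesis that nonetheless
# passes a maximally severe test (scores invented for illustration).
scores = [1210, 1340, 1150, 1480, 1295]

# Use-construct h from the very data it will be 'tested' against:
x = sum(scores) / len(scores)   # h: "the mean SAT score is x"

# (a) No matter what the data are, the computed mean fits them:
h_fits = (sum(scores) / len(scores) == x)

# (b) But it is false that h would fit no matter whether h is true or
# false: a false hypothesis about the mean cannot fit this data.
false_x = x + 50                # a false rival value for the mean
false_fits = (sum(scores) / len(scores) == false_x)

print(h_fits, false_fits)  # True False
```

Reading (a) holds, since the procedure fits any data set it is fed; reading (b) fails, since a false value of the mean could never pass.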

Mayo gives an additional example of how a use-constructed hypothesis can count as having survived a severe test, one which pertains to the famous 1919 Eddington eclipse test of Einstein’s General Theory of Relativity. GTR predicted that starlight that passed by the sun would be bent to a specific degree (specifically 1.75 arcseconds). There were actually two expeditions carried out during the eclipse—one to Sobral in Northern Brazil and the other to the island of Principe in the Gulf of Guinea. Each expedition generated a result that supported GTR, but there was a third result generated by the Sobral expedition that appeared to refute GTR. This result was however disqualified because it was determined that a mirror used to acquire the images of the stars’ positions had been damaged by the heat of the sun. While one might worry that such dismissal of anomalous evidence was the kind of ad hoc adjustment that Popper warned against, Mayo notes that this is instead a perfectly legitimate case of using evidence to support a hypothesis (that the third result was unreliable), one that amounted to that hypothesis having passed a severe test. Mayo concludes that a general prohibition on use-constructed hypotheses “fails to distinguish between problematic and unproblematic use-constructions (or double countings)” (1996: 285). However, Hudson (2003) argues that there is historical evidence suggesting there was legitimate reason to question the hypothesis that the third result was unreliable (he uses this point to support his own contention that the fact that a hypothesis was use-constructed is prima facie evidence that the hypothesis is suspect). Mayo (2003) replies that insofar as the third result was nonetheless suspect the physicists involved were right to discard it.

Mayo (1996: Ch. 9) defends a predictivist-like position attributed to Neyman-Pearson statistical methods—the prohibition on after-trial constructions of hypotheses. To illustrate: Kish (1959) describes a study that investigated the statistical relationship between a large number of infant training experiences (nursing, toilet training, weaning, etc.) and subsequent personality and behavioral traits (e.g., school adjustment, nail biting, etc.). The study found a number of high correlations between certain training experiences and later traits. The problem was that the study investigated so many training experiences that it was quite likely that some correlations would appear in the data simply by chance—even if no genuine correlation existed. An investigator who studied many possible correlations could thus survey the data, simply look for statistically significant differences, and proclaim evidence for correlations even though such evidence is misleading—thus engaging in the dubious practice of the ‘after-trial construction of hypotheses’. [ 10 ] Mayo notes that such hypotheses should not count as having passed a severe test, and thus she endorses the Neyman-Pearson prohibition on such construction. Hitchcock and Sober (2004) note that Mayo’s definition of severity as applied in this case differs from the one she employs in dealing with cases like her SAT example; Mayo (2008) replies at length to their criticism and argues that while she does employ two versions of the severity definition they nonetheless reflect a unified conception of severity.
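The worry about after-trial construction can be illustrated with a small simulation (all numbers invented): test enough truly independent trait pairs and some will cross the conventional significance threshold by chance alone.

```python
import numpy as np

# Illustrative simulation of the 'after-trial construction' worry raised
# by the Kish study: run many tests on statistically independent (hence
# truly uncorrelated) variables and some will look significantly
# correlated by chance. Sample sizes and counts are invented.
rng = np.random.default_rng(0)

n_subjects, n_tests = 30, 200
# Critical |r| for p < .05 (two-sided) with n = 30 is about 0.361.
critical_r = 0.361

spurious = 0
for _ in range(n_tests):
    training = rng.normal(size=n_subjects)  # e.g. weaning age
    trait = rng.normal(size=n_subjects)     # e.g. nail biting; independent!
    r = np.corrcoef(training, trait)[0, 1]
    if abs(r) > critical_r:
        spurious += 1

# Roughly 5% of the 200 null tests (around 10) come out 'significant'.
print(spurious)
```

An after-trial investigator who reported only the ‘significant’ pairs would appear to have strong evidence for correlations that, by construction, do not exist.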

For critical discussion of Mayo’s account see Iseda 1999 and Worrall 2006: 56–60, 2010: 145–153—see also Mayo’s (1996: 265f, 2010) replies to Worrall.

John Worrall has been an important contributor to the predictivism literature from the 1970s until the present time. He was, along with Elie Zahar, one of the early proponents of the significance of heuristic novelty (e.g., Worrall 1978, 1985). In his more recent work (cf. his 1989, 2002, 2005, 2006, 2010, 2014; also Scerri & Worrall 2001) Worrall has laid out a detailed theory of predictivism that, while sometimes presented in heuristic terms, is “at root a logical theory of confirmation” (2005: 819)—it is thus a weak heuristic account that takes use-novelty of evidence to be symptomatic of underlying logical features that establish strong confirmation of theory.

Worrall’s mature account is based on a view of scientific theories that he credits to Duhem—which claims that a scientific theory is naturally thought of as consisting of a core claim together with some set of more specific auxiliary claims. It is commonly the case that the core theory will leave undetermined certain ‘free parameters’ and the auxiliary claims fix values for such parameters. To cite an example Worrall often uses, the wave theory of light consists of the core theory that light is a periodic disturbance transmitted though some sort of elastic medium. This core claim by itself leaves open various free parameters concerning the wavelengths of particular types of monochromatic light. Worrall proposes to understand the diminished status of evidential support associated with accommodation as follows: when evidence e is ‘used’ in the construction of a theory, it is typically used to establish the value of a free parameter in some core theory T . The fixed version will be a specific version \(T'\) of T . e serves to confirm \(T'\), then, only on the condition that there is independent support for T —thus accommodation provides only ‘conditional confirmation’. Importantly, evidence e that is used in this way will by itself typically provide no evidence for core theory T . Worrall (2002: 201) offers as an illustration the support offered to the wave theory of light ( W ) by the two slit experiment using light from a sodium arc—the data will consist of various alternating light and dark ‘fringes’. The fringe data can be used to compute the wavelength of sodium light—and thus used to generate a more specific version of the wave theory of light \(W'\)—one which conjoins W with a claim about the wavelength of this particular sort of light. But the data offer merely conditional support to \(W'\)—that is the data support \(W'\) only on the condition that there is independent evidence for W .

Predicted evidence for Worrall is thus evidence that is not used to fix free parameters. Worrall cites two forms that predictions can take: one is when a particular evidential consequence falls ‘immediately out of the core’, i.e., is a consequence of the core, together with ‘natural auxiliaries’, and the other is when it is a consequence of a specific version of a theory whose free parameters have been fixed using other data. To illustrate the first: retrograde motion [ 11 ] was a natural consequence of the Copernican core (the claim that the earth and planets orbit the sun) because observation of the planets was carried out on a moving observatory that periodically passed other planets—however it could only be accommodated by Ptolemaic astronomy by proposing and adjusting auxiliary hypotheses that supposed the planet to move on an epicycle (retrograde motion did not follow naturally from the Ptolemaic core idea that the Sun, stars and planets orbit the earth). Thus retrograde motion was predicted by the Copernican theory and thus offered unconditional support to that theory, while it offered only conditional confirmation to the Ptolemaic theory. The second form of prediction is one which follows from a specific version of a theory but was not used to fix a parameter—imagine \(W'\) in the preceding paragraph makes a new prediction p (say for another experiment, such as the one slit experiment)— p offers unconditional confirmation of \(W'\) (and W ; Worrall 2002: 203).

However it is important to understand that Worrall’s repeated expression of his position in terms of the heuristic conception of novelty (particularly after his 1985) does not amount to an endorsement of strong heuristic predictivism. Worrall clarifies this in his 1989 article that focuses on the evidential significance of the ‘white spot’ confirmation of Fresnel’s version of the wave theory of light. The reason the white spot datum carried such important weight is not ultimately that it was not used by Fresnel in the construction of the theory but because this datum followed naturally from the core theory that light is a wave. The reason the fringe data that was used to compute the wavelength of sodium light (cf. above) did not carry such weight is that it is not a consequence of this core idea (nor has the wavelength of sodium light been fixed by some other data). Thus d is novel for T when “there is a heuristic path to [ T ] that does not presuppose [d’s] existence” (Scerri & Worrall 2001: 418). As Worrall sometimes puts it, whether d carries unconditional confirmation for T does not depend on whether d was actually used in constructing T , but whether it was ‘needed’ to construct T (e.g., 1989: 149–151). Thus Worrall is actually a proponent of ‘essential use-novelty’ (Alai 2014: 304). For Worrall, facts about heuristic prediction and accommodation serve to track underlying facts about the logical relationship between theory and evidence. Thus Worrall is ultimately a proponent of weak (not strong) heuristic predictivism. Worrall categorically rejects temporal predictivism, arguing that the fact that the white spot was a temporally novel consequence in itself was of no epistemic importance.

For further discussion of Worrall’s theory of predictivism see Mayo 2010: 155f; Schurz 2014; Votsis 2014; and Douglas & Magnus 2013: 587–8.

Scerri and Worrall 2001 contains a detailed rendering of the historical episode of the scientific community’s assessment of Mendeleev’s theory of the periodic law—it is argued that this story ultimately vindicates Worrall’s theory of predictivism.

For discussion of Scerri and Worrall see Akeroyd 2003; Barnes 2005b (and replies from Worrall 2005 and Scerri 2005); Schindler 2008, 2014; Brush 2007; and Sereno 2020.

A common argument for predictivism is that we should avoid inferring that a theory T is true on the basis of evidence E that it is built to fit because we can explain why T entails E by simply noting how T was built—but if T was not built to fit E then only the truth of T can explain the fact that T fits E . Various philosophers have noted that this reasoning is fallacious. As noted above it makes no sense to offer an explanation (for example, in terms of how the theory was built) for the fact that T entails E —for this latter fact is a logical fact for which no causal explanation can be given. Insofar as there is an explanandum in need of an explanans here it is rather the fact that the theorist managed to construct or ‘choose’ a theory (which turned out to be T ) that correctly entailed E (Collins 1994; Barnes 2002)—that explanandum could be explained by noting that the theorist built a theory (which turned out to be T ) to fit E , or endorsed it because it fit E .

White (2003) offers a theory of predictivism that begins with this same insight—the relevant explanandum is:

  • (ES) The theorist selected a datum-entailing theory.

This explanandum could be explained in one of two ways:

  • (DS) The theorist designed her theory to entail the datum.
  • (RA) The theorist’s selection of her theory was reliably aimed at the truth.

White explains that (RA) means “roughly that the mechanisms which led to her selection of a theory gave her a good chance of arriving at the truth” (2003: 664). (Thus White analogizes the theorist to an ‘archer’ who is more or less reliable in ‘aiming’ at the truth in selecting a theory.) Then White offers a simple argument for predictivism: assuming ~DS, ES provides evidence for RA. But assuming DS, ES provides no evidence for RA. Thus, heuristic predictivism is true.
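White’s argument can be sketched numerically with invented probabilities. When the theory is designed to entail the datum (DS), a datum-entailing selection (ES) is guaranteed whether or not the theorist aims reliably, so ES cannot raise the probability of RA; absent design, it can:

```python
# A toy Bayesian version of White's argument (all numbers invented).
# ES: the theorist selected a datum-entailing theory.
# RA: her selection was reliably aimed at the truth.
# DS: she designed the theory to entail the datum.
p_ra = 0.5  # prior probability that the theorist is reliable

def posterior_ra(p_es_given_ra, p_es_given_not_ra):
    """Posterior of RA after learning ES, by Bayes' theorem."""
    num = p_es_given_ra * p_ra
    return num / (num + p_es_given_not_ra * (1 - p_ra))

# Without design (~DS), reliable aim makes a datum-entailing theory
# likelier, so ES confirms RA:
post_no_design = posterior_ra(p_es_given_ra=0.9, p_es_given_not_ra=0.2)

# With design (DS), a datum-entailing theory is guaranteed either way,
# so ES leaves RA untouched:
post_design = posterior_ra(p_es_given_ra=1.0, p_es_given_not_ra=1.0)

print(round(post_no_design, 3), post_design)  # 0.818 0.5
```

The specific likelihoods are placeholders; the structure of the argument only requires that reliability raises the chance of ES absent design and is screened off by design.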

Interestingly, White bills his account as a strong heuristic account. In making this claim he is claiming that the epistemic advantage of prediction would not be entirely erased for an observer who was completely aware of all relevant evidence and background knowledge possessed by the scientific community at the relevant point in time. This is because the degree to which theorizing is reliable depends upon principles of evidence assessment and causal relations (including the reliability of our perceptual faculties, accuracy of measuring instruments, etc.) that are not entirely “transparent” to us. [ 12 ] Insofar as fully informed scientists may not be fully convinced of just how reliable these principles and relations are, evidence that they lead to the endorsement of theories which are predictively successful continues to redound to their assessed reliability. Thus, White concludes, strong heuristic predictivism is vindicated (2003: 671–4).

Hitchcock and Sober (2004) provide an original theory of weak heuristic predictivism that is based on a particular worry about accommodation. On the assumption that data are noisy (i.e. imbued with observational error), a good theory will almost never fit the data perfectly. To construct a theory that fits the data better than a good theory should, given noisy data, is to be guilty of “overfitting”—if we know a theorist built her theory to accommodate data, we may well worry that she has overfit the data and thus constructed a flawed theory. If we know however that a theorist built her theory without access to such data, or without using it in the process of theory construction, we need not worry that overfitting that data has occurred. When such a theory goes on to make successful predictions, Hitchcock and Sober moreover argue, this provides us with evidence that the data on which the theory was initially based were not overfit in the process of constructing the theory.

Hitchcock and Sober’s approach derives from a particular solution to the curve-fitting problem presented in Forster and Sober 1994. The curve fitting problem is how to select an optimally supported curve on the basis of a given body of data (e.g., a set of \([X,Y]\) points plotted on a coordinate graph). A well-supported curve will feature both ‘goodness of fit’ with the data and simplicity (intuitively, avoiding highly bumpy or irregular patterns). Solving the curve-fitting problem requires some precise way of characterizing a curve’s simplicity, a way of characterizing goodness of fit, and a method of balancing simplicity against goodness of fit to identify an optimal curve.

Forster and Sober cite Akaike’s (1973) result that an unbiased estimate of the predictive accuracy of a model can be computed by assessing both its goodness of fit and its simplicity as measured by the number of adjustable parameters it contains. A model is a statement (a polynomial, in the case of a proposed curve) that contains at least one adjustable parameter. For any particular model M and a given data set, identifying \(L(M)\) as the likeliest (i.e., best data-fitting) curve from M, Akaike showed that the following expression describes an unbiased estimate of the predictive accuracy of model M:

\[\log p(\mathrm{Data} \mid L(M)) - k\]

This estimate is deemed a model’s ‘Akaike Information Criterion’ (AIC) score—it measures goodness of fit in terms of the log likelihood of the data on the assumption of \(L(M)\). The simplicity of the model is inversely proportional to k, the number of adjustable parameters in the model. The intuitive idea is that models with a high k value will provide a large variety of curves that will tend to fit data more closely than models with a lower k value—and thus large k values are more prone to overfitting than small k values. So the AIC score assesses a model’s likely predictive accuracy in a way that balances both goodness of fit and simplicity, and the curve-fitting problem is arguably solved.
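As an illustration of how the AIC score penalizes overfitting (using an invented data-generating process and Forster and Sober’s log-likelihood-minus-k form), one might compare a linear model with a sixth-degree polynomial fit to noisy but truly linear data:

```python
import numpy as np

# Sketch of AIC scoring (log-likelihood of L(M) minus k) for two
# polynomial models fit to the same noisy data. The data-generating
# process and sample size are invented for illustration.
rng = np.random.default_rng(1)

x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=50)  # truly linear + noise

def score(degree):
    """Return (log-likelihood, AIC-style score) for the polynomial model."""
    coeffs = np.polyfit(x, y, degree)        # L(M): best-fitting curve in M
    residuals = y - np.polyval(coeffs, x)
    n = len(y)
    sigma2 = np.mean(residuals ** 2)         # Gaussian MLE of error variance
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = degree + 1                           # number of adjustable parameters
    return log_lik, log_lik - k

ll_lin, aic_lin = score(1)    # k = 2
ll_six, aic_six = score(6)    # k = 7

# The degree-6 model always fits at least as well (it nests the line)...
print(ll_six >= ll_lin)
# ...but the penalty for its extra parameters typically hands the higher
# estimated predictive accuracy to the simpler, true model.
print(aic_lin, aic_six)
```

The sixth-degree model’s gain in fit comes mostly from chasing noise, which is exactly the overfitting that the k penalty is meant to price in.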

Hitchcock and Sober (2004) consider a hypothetical example involving two scientists, Penny Predictor and Annie Accommodator. Working independently, they acquire the same set of data D—Penny proposes theory Tp while Annie proposes Ta. The critical difference, however, was that Penny proposed Tp on the basis of an initial segment of the data \(D_1\)—thereafter she predicted the remaining data \(D_2\) to a high degree of accuracy \((D = D_1 \cup D_2)\). Annie however was in possession of all the data in D prior to proposing Ta and in proposing this theory accommodated D. Hitchcock and Sober ask whether there might be reason to suspect that Penny’s theory will be more predictively accurate in the future, and in this precise sense be better confirmed.

Hitchcock and Sober argue that there is no one answer to this question—and then present a series of several cases. Insofar as predictivism holds in some and not others, their account of predictivism is clearly a local (rather than global) account. In cases in which Penny and Annie propose the same theory, or propose theories whose AIC scores can be computed and directly compared, there is no reason to regard facts about how they built their theories as carrying further significance. But if we do not know which theories were proposed, or by what method they were constructed, the fact that Penny predicted data that Annie accommodated can be evidence that Penny’s theory has a higher AIC score than Annie’s, and thus carries an epistemic advantage.

Insofar as predictivism holds in some cases but not the others, the question whether predictivism holds in actual episodes of science depends on which cases such actual episodes tend to resemble, but Hitchcock and Sober “take no stand on how often the various cases arise” (2004: 21).

Although their account of predictivism is tailored initially to the curve-fitting problem, it is by no means limited to such cases. They note that it is natural to think of a model as analogous to the ontological framework of a scientific theory, where the various ontological commitments can function as ‘adjustable parameters’—for example, the Ptolemaic and Copernican world pictures both begin with a claim that a certain entity (the sun or the earth) is at the center, and each picture is articulated by producing models with adjustable parameters.

For critical discussion of Sober and Hitchcock’s account, see Lee 2012, 2013 and Douglas & Magnus 2013: 582–584. Peterson (2019) argues that Sober and Hitchcock's approach can be extended to issue methodological recommendations involving methods of cross validation and replication in psychology.

Barnes (2005a, 2008) maintains that predictivism is frequently a manifestation of a phenomenon he calls ‘epistemic pluralism’. A ‘ T -evaluator’ (a scientist who assigns some probability to theory T ) is an epistemic pluralist insofar as she regards one form of evidence to be the probabilities posted (i.e. publicly presented) by other scientists for and against T and other relevant claims (she is an epistemic individualist if she does not do this but considers only the scientific evidence ‘on her own’). One form of pluralistic evidence is the event in which a reputable scientist endorses a theory—this takes place when a scientist posts a probability for T that is (1) no lower than the evaluator’s probability and (2) high enough that subsequent predictive confirmation of T would redound to the scientist’s credibility (2008: 2.2).

Barnes rejects the heuristic conception of novelty on the grounds that it is a mistake to think that what matters epistemically is the process by which the theory was constructed—what matters is on what basis the theory was endorsed (2008: 33f). In the example above, confirmation of N (a consequence of T) could carry special weight for an evaluator who learned that the theorist endorsed the theory without appeal to observational evidence for N (irrespective of how the theory was constructed). He proposes to replace the heuristic conception with his endorsement conception of novelty: N (a known consequence of T) counts as a novel confirmation of T relative to agent X insofar as X posts an endorsement-level probability for T that is based on a body of evidence that does not include observation-based evidence for N.

Barnes claims that the notion of endorsement novelty has several advantages over the heuristic conception—one is that endorsement novelty can account for the fact that prediction is a matter of degree: the more strongly the theorist endorses T , the more strongly its consequence N is predicted (and thus the more evidence for T for pluralist evaluators who trust the endorser). Another is that the orthodox distinction between the context of discovery and the context of justification is preserved. According to the latter distinction, it does not matter for purposes of theory evaluation how a theory was discovered. But this turns out not to be true on the heuristic conception given the central importance it accords to how a theory was built (cf. Leplin 1987). Endorsement novelty respects the irrelevance of the process by which theories are discovered (Barnes 2008: 37–8).

One claim central to this account is that confirmation is a three-way relation between theory, evidence, and background belief (cf. Good 1967). Barnes distinguishes between two types of theory endorser: (1) virtuous endorsers, who post probabilities for theories that cohere with their evidence and background beliefs, and (2) unvirtuous endorsers, who post probabilities that do not so cohere. A common way of explaining the predictivist intuition is to note that accommodators tend to be viewed with a certain suspicion—their endorsement of T based on accommodated evidence may reflect a kind of social pressure to endorse T whatever its merits (cf. the ‘fudging explanation’ above). Such an endorser may post a probability for T that is too high given her total evidence and background belief—predictivism thus becomes a strategy by which pluralist evaluators protect themselves from unvirtuous accommodators (Barnes 2008: 61–69).

Barnes then presents a theory of predictivism that is designed to apply to virtuous endorsers. Virtuous predictivism has two roots: (1) the prediction per se, which is constituted by an endorser’s posting an endorsement-level probability for T that entails empirical consequence N, on a basis that does not include observation-based evidence for N, and (2) predictive success, constituted by the empirical demonstration that N is true. The prediction per se carries epistemic significance for a pluralist evaluator because it implies that the predictor possesses reason R (consisting of background beliefs) that supports T. If the evaluator views the predictor as credible, this simple act of prediction carries epistemic weight. Predictive success then confirms the truth of R, which thereby counts as evidence for T. Novel confirmation thus has the special virtue of confirming the background beliefs of the predictor—accommodative confirmation lacks this virtue.

Barnes presents two Bayesian thought experiments that purport to establish virtuous predictivism. In each experiment an evaluator Eva faces two scenarios—one in which she confronts Peter, who posts an endorsement probability for T without appeal to N-supporting observations (thus Peter predicts N), and another in which she confronts Alex, who posts an endorsement probability for T on a basis that includes observations that establish N (thus Alex accommodates N). The idea behind both thought experiments is to make the scenarios otherwise as similar as possible—Barnes makes a number of ceteris paribus assumptions that render the probability functions of Peter and Alex maximally similar. However it turns out that there is more than one way to keep the scenarios maximally similar: in the first experiment, Peter and Alex have the same likelihood ratio but have different posteriors for T. In the second experiment they have the same posteriors but different likelihood ratios. Barnes demonstrates that Eva’s posterior probability is higher in the predictor scenario in both experiments—thus vindicating virtuous predictivism (2008: 69–80).

Although his defense of virtuous predictivism is the centerpiece of his account, Barnes claims that predictivism can hold true of actual theory evaluation in a variety of ways. He maintains that the position deemed ‘weak predictivism’ is actually ambiguous—it could refer to the claim that scientists actually rely on knowledge that evidence was (or was not) predicted because prediction is symptomatic of some other feature(s) of theories that is epistemically important (‘tempered predictivism’ [ 13 ] ) or simply to the fact that there is a correlation between prediction and this other feature(s) (‘thin predictivism’). The distinction between tempered and thin predictivism cross-classifies with the distinction between virtuous and unvirtuous predictivism to produce four varieties of weak predictivism. Barnes then turns to the case of Mendeleev’s periodic law and argues that all four varieties can be distinguished in the scientific community’s reaction to Mendeleev’s theory of the elements (2008: 82–122). In particular, he argues that it was specifically Mendeleev’s predicted evidence, not his accommodated evidence, that had the power to confirm his scientific and methodological background beliefs from the standpoint of the scientific community.

Critical responses to Barnes’s account are presented in Glymour 2008; Leplin 2009; and Harker 2011. Barnes 2014 responds to these. See also Magnus 2011 and Alai 2016.

It was noted in Section 1 that John Maynard Keynes rejected predictivism—he argued that when a theory T is first constructed it is usually the case that there are reasons R that favor T. If T goes on to generate successful novel predictions E, then E combines with R to support T—but if some \(T'\) is constructed ‘merely because it fit E’ then \(T'\) will be less supported than T. This has been deemed the “Keynesian dissolution of the paradox of predictivism” (Barnes 2008: 15–18).

Colin Howson cites with approval the Keynesian dissolution (1988: 382) and provides the following illustration: consider h and \(h'\), which are rival explanatory frameworks. \(h'\) independently predicts e; h does not entail e but has a free parameter which is fixed on the basis of e to produce \(h(a_{0})\)—this latter hypothesis thus entails e. So \(h'\) predicts e while \(h(a_{0})\) merely accommodates e. Let us assume that the prior probabilities of h and \(h'\) are equal (i.e., \(p(h) = p(h')\)). Now it stands to reason that \(p(h(a_0)) \lt p(h)\) since \(h(a_{0})\) entails h but not vice versa—Howson shows that it follows that the effect of e’s confirmation will be to leave \(h'\) no less probable—and quite possibly more probable—than \(h(a_{0})\) (1990: 236–7). Thus predictivism appears true, but the operating factor is the role of unequal prior probabilities. [ 14 ]
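Howson’s point is easy to verify numerically (with invented priors). Since both \(h'\) and \(h(a_{0})\) entail e, conditioning on e rescales both priors by the same factor \(1/p(e)\), so the posterior ordering simply mirrors the prior ordering:

```python
# A toy Bayesian rendering of Howson's illustration (all numbers
# invented). Both h' and h(a0) entail e, so p(e | hypothesis) = 1 and
# conditioning on e rescales each prior by the same factor 1/p(e).
p_h = 0.3          # prior of the general framework h
p_h_prime = 0.3    # prior of the rival h' (equal to p(h), as assumed)
p_h_a0 = 0.1       # prior of h(a0); below p(h), since h(a0) entails h
p_e = 0.5          # prior probability of the evidence e

post_h_prime = p_h_prime * 1.0 / p_e   # Bayes: p(h' | e)
post_h_a0 = p_h_a0 * 1.0 / p_e         # Bayes: p(h(a0) | e)

print(post_h_prime, post_h_a0)  # 0.6 0.2
```

The predicting hypothesis ends up more probable, but only because it started more probable: the work is done by the unequal priors, not by the prediction as such.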

The argument from Keynes and Howson against predictivism holds that the evidence which appears to support predictivism is illusory—they are clearly asserting that strong predictivism is false, presumably in its temporal and heuristic forms.

However, it is important to note that the arguments of Keynes and Howson cited above predate the injection of the concept of ‘weak predictivism’ into the literature. [ 15 ] It is thus unclear what stand Keynes or Howson would take on weak predictivism. Likewise, Collins’ 1994 paper “Against the Epistemic Value of Prediction” strongly rejects predictivism, but what he is clearly denying is what has since been deemed strong heuristic predictivism. He might endorse weak heuristic predictivism as he concedes that

all sides to the debate agree that knowing that a theory predicted, instead of accommodated, a set of data can give us an additional reason for believing it is true by telling us something about the structural/relational features of a theory. (1994: 213)

Similarly Harker argues that “it is time to leave predictivism behind” but also concedes that “some weak predictivist theses may be correct” (2008: 451); Harker worries that proclaiming weak predictivism may mislead some into thinking that predictive success is somehow more important than other epistemic indicators (such as endorsement by reliable scientists). White goes so far as to claim that weak predictivism “is not controversial” (2003: 656).

Stephen Brush is the author of a body of historical work much of which purports to show that temporal predictivism does not hold in various episodes of the history of science. [ 16 ] These include the case of starlight bending in the assessment of the General Theory of Relativity (Brush 1989), Alfvén’s theories of space plasma phenomena (Brush 1990), and the revival of big bang cosmology (Brush 1993). However, Brush (1996) argues that temporal novelty did play a role in the acceptance of Mendeleev’s Periodic Table based on Mendeleev’s predictions. Scerri and Worrall (2001) present considerable historical detail about the assessment of Mendeleev’s theory and dispute Brush’s claim that temporal novelty played an important role in the acceptance of the theory (2001: 428–436). (See also Brush 2007.) Steele and Werndl (2013) argue that predictivism fails to hold in assessing models of climate change, while Frisch (2015) argues that climate model tuning exhibits a form of weak predictivism.

Another form of anti-predictivism holds that accommodations are superior to predictions in theory confirmation. “The information that the data was accommodated rather than predicted suggests that the data is less likely to have been manipulated or fabricated, which in turn increases the likelihood that the hypothesis is correct in light of the data” (Dellsen forthcoming).

Scientific realism holds that there is sufficient evidence to believe that the theories of the ‘mature sciences’ are at least approximately true. Appeals to novelty have been important in formulating two arguments for realism—these are the ‘no miracle argument’ and the realist reply to the so-called ‘pessimistic induction’. [ 17 ]

The no-miracle argument for scientific realism holds that realism is the only account that does not make the success of science a miracle (Putnam 1975: 73). ‘The success of science’ here refers to the myriad verified empirical consequences of the theories of the mature sciences—but as we have seen there is a long-standing tendency to regard with suspicion those verified empirical consequences the theory was built to fit. Thus the ‘ultimate argument for scientific realism’ refers to a version of the no miracle argument that focuses just on the verified novel consequences of theories—it would be a miracle, this argument proclaims, if a theory managed to have a sustained record of successful novel predictions if the theory were not at least approximately true. Thus, assuming there are no competing theories with comparable records of novel success, we ought to infer that such theories are at least approximately true (Musgrave 1988). [ 18 ]

Insofar as the ultimate argument for realism clearly emphasizes a special role for novel successes, the nature of novelty has been an important focus in the realist account. Leplin 1997 is a book length articulation of the ultimate argument for realism; Leplin proposes a sufficient condition for novelty consisting of two conditions:

An observational result O is novel for T if:

  • Independence Condition: There is a minimally adequate reconstruction of the reasoning leading to T that does not cite any qualitative generalization of O .
  • Uniqueness Condition: There is some qualitative generalization of O that T explains and predicts, and of which, at the time that T first does so, no alternative theory provides a viable reason to expect instances. (Leplin 1997: 77).

Leplin clarifies that a ‘minimally adequate reconstruction’ of such reasoning will be a valid deduction D of the ‘basic identifying hypotheses’ of T from independently warranted background assumptions—the premises of D cannot be weakened or simplified while preserving D ’s validity. Thus for Leplin what establishes whether O is a novel consequence of T is not whether O was actually used in the construction of T , but rather whether it was ‘needed’ for T ’s construction. As with Worrall’s mature ‘essential use’ conception of novelty, what matters is whether there is a heuristic path to T that does not appeal to O , whether or not O was used in constructing T . The Uniqueness Condition helps bolster the argument for the truth of theories with true novel consequences, for if there were another theory \(T'\) (incompatible with T ) that also provides a viable explanation of O , the imputation of truth could not explain the novel success of both T and \(T'\). The success of at least one would have to be due to chance, but if chance could explain one such success it could explain the other as well.

Both of these conditions for novelty have been questioned. Given the Independence Condition, it is unclear that any observational result O will count as novel for any theory, for it may always be true that the logically weakest set of premises that entail T (which will be cited in a minimally adequate reconstruction of the reasoning that led to T ) will include O as a disjunct of one of the premises (Healey 2001: 779). The Uniqueness Condition insists that there be no available alternative explanation of O at the time T first explains O —but clearly, theories that explain O could be subsequently proposed and would threaten the imputation of truth to T no less. This condition seems arbitrarily to privilege theories depending on when they were proposed (Sarkar 1998: 206–8; Ladyman 1999: 184).

Another conception of novelty whose purpose is to bolster the ultimate argument for realism is ‘functional novelty’ (Alai 2014). A datum d is ‘functionally novel’ for theory T if (1) d was not used essentially in constructing T (viz., there is a heuristic path to T and related auxiliary hypotheses that does not cite d ), (2) d is a priori improbable, and (3) d is heterogeneous with respect to data that is used in constructing T and related auxiliary hypotheses (i.e. d is qualitatively different from such data). Functional novelty is a ‘gradual’ concept insofar as a priori improbability and data heterogeneity come in degrees. If there is more than one theory for which d is functionally novel then the dispute between these theories cannot be settled by the ultimate argument (Alai 2014: 306).

Anti-realists have argued that insofar as we adopt a naturalistic philosophy of science, the same standards should be used for assessing philosophical theories as scientific theories. Consequently, if novel confirmations are necessary for inferring a theory’s truth then scientific realism should not be accepted as true, as the latter thesis has no novel confirmations to its credit (Frost-Arnold 2010, Mizrahi 2012).

Another component of the realist/anti-realist debate in which appeals to novel success figure importantly is the debate over the ‘pessimistic induction’ (or ‘pessimistic meta-induction’). According to this argument, the history of science is almost entirely a history of theories that were judged empirically successful in their day only to be shown subsequently to be entirely false. There is no reason to think that currently accepted theories are any different in this regard (Laudan 1981b).

In response some realists have defended ‘selective realism’ which concedes that while the majority of theories from the history of science have proven false, some of them have components that were retained in subsequent theories—these tend to be the components that were responsible for novel successes. Putative examples of this phenomenon are the caloric theory of heat and nineteenth century optical theories (Psillos 1999: Ch. 6), both of which were ultimately rejected as false but which had components that were retained in subsequent theories; these were the portions that were responsible for their novel confirmations. [ 19 ] So in line with the ultimate argument the claim is made that novel successes constitute a serious argument for the truth of the theory component which generates them. However, antirealists have responded by citing cases of theoretical claims that were subsequently determined to be entirely false but which managed nonetheless to generate impressive records of novel predictions. These include certain key claims made by Johannes Kepler in his Mysterium Cosmographicum (1596), assumptions used by Adams and Leverrier in the prediction of the planet Neptune’s existence and location (Lyons 2006), and Ptolemaic astronomy (Carman & Díez 2015). Leconte (2017) maintains that predictive success legitimates only sceptical realism – the claim that some part of a theory is true, but it is not known which part.

  • Achinstein, Peter, 1994, “Explanation vs. Prediction: Which Carries More Weight”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1994(2): 156–164. doi:10.1086/psaprocbienmeetp.1994.2.192926
  • –––, 2001, The Book of Evidence , Oxford: Oxford University Press. doi:10.1093/0195143892.001.0001
  • Akaike, Hirotugu, 1973, “Information Theory as an Extension of the Maximum Likelihood Principle”, in B.N. Petrov and F. Csaki (eds.), Second International Symposium on Information Theory , Budapest: Akademiai Kiado, pp. 267–281.
  • Akeroyd, F. Michael, 2003, “Prediction and the Periodic Table: A Response to Scerri and Worrall”, Journal for General Philosophy of Science , 34(2): 337–355. doi:10.1023/B:JGPS.0000005277.60641.ca
  • Alai, Mario, 2014, “Novel Predictions and the No Miracle Argument”, Erkenntnis , 79(2): 297–326. doi:10.1007/s10670-013-9495-7
  • –––, 2016, “The No Miracle Argument and Strong Predictivism vs. Barnes”, in Lorenzo Magnani and Claudia Casadio (eds.), Model Based Reasoning in Science and Technology , (Studies in Applied Philosophy, Epistemology and Rational Ethics, 27), Switzerland: Springer International Publishing, pp. 541–556. doi:10.1007/978-3-319-38983-7_30
  • Bamford, Greg, 1993, “Popper’s Explication of Ad Hocness : Circularity, Empirical Content, and Scientific Practice”, British Journal for the Philosophy of Science , 44(2): 335–355. doi:10.1093/bjps/44.2.335
  • Barnes, Eric Christian, 1996a, “Discussion: Thoughts on Maher’s Predictivism”, Philosophy of Science , 63: 401–10. doi:10.1086/289918
  • –––, 1996b, “Social Predictivism”, Erkenntnis , 45(1): 69–89. doi:10.1007/BF00226371
  • –––, 1999, “The Quantitative Problem of Old Evidence”, British Journal for the Philosophy of Science , 50(2): 249–264. doi:10.1093/bjps/50.2.249
  • –––, 2002, “Neither Truth Nor Empirical Adequacy Explain Novel Success”, Australasian Journal of Philosophy , 80(4): 418–431. doi:10.1080/713659528
  • –––, 2005a, “Predictivism for Pluralists”, British Journal for the Philosophy of Science , 56(3): 421–450. doi:10.1093/bjps/axi131
  • –––, 2005b, “On Mendeleev’s Predictions: Comment on Scerri and Worrall”, Studies in the History and Philosophy of Science , 36(4): 801–812. doi:10.1016/j.shpsa.2005.08.005
  • –––, 2008, The Paradox of Predictivism , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511487330
  • –––, 2014, “The Roots of Predictivism”, Studies in the History and Philosophy of Science , 45: 46–53. doi:10.1016/j.shpsa.2013.10.002
  • Brush, Stephen G., 1989, “Prediction and Theory Evaluation: The Case of Light Bending”, Science , 246(4934): 1124–1129. doi:10.1126/science.246.4934.1124
  • –––, 1990, “Prediction and Theory Evaluation: Alfvén on Space Plasma Phenomena”, Eos , 71(2): 19–33. doi:10.1029/EO071i002p00019
  • –––, 1993, “Prediction and Theory Evaluation: Cosmic Microwaves and the Revival of the Big Bang”, Perspectives on Science , 1(4): 565–601.
  • –––, 1994, “Dynamics of Theory Change: The Role of Predictions”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1994(2): 133–145. doi:10.1086/psaprocbienmeetp.1994.2.192924
  • –––, 1996, “The Reception of Mendeleev’s Periodic Law in America and Britain”, Isis , 87(4): 595–628. doi:10.1086/357649
  • –––, 2007, “Predictivism and the Periodic Table”, Studies in the History and Philosophy of Science Part A , 38(1): 256–259. doi:10.1016/j.shpsa.2006.12.007
  • Campbell, Richmond and Thomas Vinci, 1983, “Novel Confirmation”, British Journal for the Philosophy of Science , 34(4): 315–341. doi:10.1093/bjps/34.4.315
  • Carman, Christián and José Díez, 2015, “Did Ptolemy Make Novel Predictions? Launching Ptolemaic Astronomy into the Scientific Realism Debate”, Studies in the History and Philosophy of Science , 52: 20–34. doi:10.1016/j.shpsa.2015.04.002
  • Carrier, Martin, 2014, “Prediction in context: On the comparative epistemic merit of predictive success”, Studies in the History and Philosophy of Science , 45: 97–102. doi:10.1016/j.shpsa.2013.10.003
  • Chang, Hasok, 2003, “Preservative Realism and Its Discontents: Revisiting Caloric”, Philosophy of Science , 70(5): 902–912. doi:10.1086/377376
  • Christiansen, David, 1999, “Measuring Confirmation”, Journal of Philosophy , 96(9): 437–461. doi:10.2307/2564707
  • Collins, Robin, 1994, “Against the Epistemic Value of Prediction over Accommodation”, Noûs , 28(2): 210–224. doi:10.2307/2216049
  • Dawid, Richard and Stephan Hartmann, 2017, “The No Miracles Argument without the Base-Rate Fallacy”, Synthese . doi:10.1007/s11229-017-1408-x
  • Dellsen, Finnur, forthcoming, “An Epistemic Advantage of Accommodation Over Prediction”, Philosophers’ Imprint .
  • Dicken, P., 2013, “Normativity, the Base-Rate Fallacy, and Some Problems for Retail Realism”, Studies in the History and Philosophy of Science Part A , 44(4): 563–570.
  • Douglas, Heather and P.D. Magnus, 2013, “State of the Field: Why Novel Prediction Matters”, Studies in the History and Philosophy of Science , 44(4): 580–589. doi:10.1016/j.shpsa.2013.04.001
  • Eells, Ellery and Branden Fitelson, 2000, “Measuring Confirmation and Evidence”, Journal of Philosophy , 97(12): 663–672. doi:10.2307/2678462
  • Forster, Malcolm R., 2002, “Predictive Accuracy as an Achievable Goal of Science”, Philosophy of Science , 69(S3): S124–S134. doi:10.1086/341840
  • Forster, Malcolm and Elliott Sober, 1994, “How to Tell when Simpler, More Unified, or Less Ad Hoc Theories Will Provide More Accurate Predictions”, British Journal for the Philosophy of Science , 45(1): 1–35. doi:10.1093/bjps/45.1.1
  • Frankel, Henry, 1979, “The Career of Continental Drift Theory: An application of Imre Lakatos’ analysis of scientific growth to the rise of drift theory”, Studies in the History and Philosophy of Science , 10(1): 21–66. doi:10.1016/0039-3681(79)90003-7
  • Frisch, Mathias, 2015, “Predictivism and Old Evidence: A Critical Look at Climate Model Tuning”, European Journal for the Philosophy of Science , 5(2): 171–190. doi:10.1007/s13194-015-0110-4
  • Frost-Arnold, Greg, 2010, “The No-Miracles Argument for Scientific Realism: Inference to an Unacceptable Explanation”, Philosophy of Science , 77(1): 35–58. doi:10.1086/650207
  • Gardner, Michael R., 1982, “Predicting Novel Facts”, British Journal for the Philosophy of Science , 33(1): 1–15. doi:10.1093/bjps/33.1.1
  • Giere, Ronald N., 1984, Understanding Scientific Reasoning , second edition, New York: Holt, Rinehart, and Winston. First edition 1979.
  • Glymour, Clark N., 1980, Theory and Evidence , Princeton, NJ: Princeton University Press.
  • –––, 2008, “Review: The Paradox of Predictivism by Eric Christian Barnes”, Notre Dame Philosophical Reviews , 2008.06.13. [ Glymour 2008 available online ]
  • Good, I.J. 1967, “The White Shoe is a Red Herring”, British Journal for the Philosophy of Science , 17(4): 322. doi:10.1093/bjps/17.4.322
  • Goodman, Nelson, 1983, Fact, Fiction and Forecast , fourth edition, Cambridge, MA: Harvard University Press. First edition 1950.
  • Grünbaum, Adolf, 1976, “ Ad Hoc Auxiliary Hypotheses and Falsificationism”, British Journal for the Philosophy of Science , 27(4): 329–362. doi:10.1093/bjps/27.4.329
  • Hacking, Ian, 1979, “Imre Lakatos’s Philosophy of Science”, British Journal for the Philosophy of Science , 30(4): 381–410. doi:10.1093/bjps/30.4.381
  • Harker, David, 2006, “Accommodation and Prediction: The Case of the Persistent Head”, British Journal for the Philosophy of Science , 57(2): 309–321. doi:10.1093/bjps/axl004
  • –––, 2008, “The Predilections for Predictions”, British Journal for the Philosophy of Science , 59(3): 429–453. doi:10.1093/bjps/axn017
  • –––, 2010, “Two Arguments for Scientific Realism Unified”, Studies in the History and Philosophy of Science , 41(2): 192–202. doi:10.1016/j.shpsa.2010.03.006
  • –––, 2011, “ Review: The Paradox of Predictivism by Eric Christian Barnes”, British Journal for the Philosophy of Science , 62(1): 219–223. doi:10.1093/bjps/axq027
  • Hartman, Stephan and Branden Fitelson, 2015, “A New Garber-Style Solution to the Problem of Old Evidence”, Philosophy of Science , 82(4): 712–717. doi:10.1086/682916
  • Healey, Richard, 2001, “Review: A Novel Defense of Scientific Realism by Jarrett Leplin”, Mind , 110(439): 777–780. doi:10.1093/mind/110.439.777
  • Henderson, Leah, 2017, “The No Miracles Argument and the Base-Rate Fallacy”, Synthese (4): 1295–1302.
  • Hitchcock, Christopher and Elliott Sober, 2004, “Prediction versus Accommodation and the Risk of Overfitting”, British Journal for the Philosophy of Science , 55(1): 1–34. doi:10.1093/bjps/55.1.1
  • Holton, Gerald, 1988, Thematic Origins of Scientific Thought: Kepler to Einstein , revised edition, Cambridge, MA and London, England: Harvard University Press. First edition 1973.
  • Howson, Colin, 1984, “Bayesianism and Support by Novel Facts”, British Journal for the Philosophy of Science , 35(3): 245–251. doi:10.1093/bjps/35.3.245
  • –––, 1988, “Accommodation, Prediction and Bayesian Confirmation Theory”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1988 , 2: 381–392. doi:10.1086/psaprocbienmeetp.1988.2.192899
  • –––, 1990, “Fitting Your Theory to the Facts: Probably Not Such a Bad Thing After All”, in Scientific Theories , ( Minnesota Studies in the Philosophy of Science , Vol. XIV), C. Wade Savage (ed.), Minneapolis: University of Minnesota Press, pp. 224–244. [ Howson 1990 available online ]
  • Howson, Colin and Allan Franklin, 1991, “Maher, Mendeleev and Bayesianism”, Philosophy of Science , 58(4): 574–585. doi:10.1086/289641
  • Hudson, Robert G., 2003, “Novelty and the 1919 Eclipse Experiments”, Studies in the History and Philosophy of Modern Physics , 34(1): 107–129. doi:10.1016/S1355-2198(02)00082-5
  • –––, 2007, “What’s Really at Issue with Novel Predictions?” Synthese , 155(1): 1–20. doi:10.1007/s11229-005-6267-1
  • Hunt, J. Christopher, 2012, “On Ad Hoc Hypotheses”, Philosophy of Science , 79(1): 1–14. doi:10.1086/663238
  • Iseda, Tetsuji, 1999, “Use-Novelty, Severity, and a Systematic Neglect of Relevant Alternatives”, Philosophy of Science , 66: S403–S413. doi:10.1086/392741
  • Kahn, J.A., S.E. Landsberg, and A.C. Stockman, 1990, “On Novel Confirmation”, British Journal for the Philosophy of Science , 43: 503–516.
  • Keynes, John Maynard, 1921, A Treatise on Probability , London: Macmillan.
  • Kish, Leslie, 1959, “Some Statistical Problems in Research Design”, American Sociological Review , 24(3): 328–338; reprinted in Denton E. Morrison and Ramon E. Henkel (eds.), The Significance Test Controversy: A Reader , Chicago: Aldine, pp. 127–141. doi:10.2307/2089381
  • Kitcher, Philip, 1993, The Advancement of Science: Science without Legend, Objectivity without Illusions , Oxford: Oxford University Press.
  • Ladyman, James, 1999, “Review: Jarrett Leplin, A Novel Defense of Scientific Realism ”, British Journal for the Philosophy of Science , 50(1): 181–188. doi:10.1093/bjps/50.1.181
  • Lakatos, Imre, 1970, “Falsification and the Methodology of Scientific Research Programmes”, in Imre Lakatos and Alan Musgrave (eds.), Criticism and the Growth of Knowledge: Proceedings of the International Colloquium in the Philosophy of Science, London, 1965 , Cambridge: Cambridge University Press, pp. 91–196. doi:10.1017/CBO9781139171434.009
  • –––, 1971, “History of Science and its Rational Reconstructions”, in Roger C. Buck and Robert S. Cohen (eds.), PSA 1970 , ( Boston Studies in the Philosophy of Science , 8), Dordrecht: Springer Netherlands, pp. 91–135. doi:10.1007/978-94-010-3142-4_7
  • Lange, Marc, 2001, “The Apparent Superiority of Prediction to Accommodation: a Reply to Maher”, British Journal for the Philosophy of Science , 52(3): 575–588. doi:10.1093/bjps/52.3.575
  • Laudan, Larry, 1981a, “The Epistemology of Light: Some Methodological Issues in the Subtle Fluids Debate”, in Science and Hypothesis: Historical Essays on Scientific Methodology (University of Western Ontario Series in Philosophy of Science, 19), Dordrecht: D. Reidel, pp. 111–140.
  • –––, 1981b, “A Confutation of Convergent Realism”, Philosophy of Science , 48(1): 19–49. doi:10.1086/288975
  • Leconte, Gauvain, 2017, “Predictive Success, Partial Truth, and Duhemian Realism”, Synthese , 194(9): 3245–3265. doi:10.1007/s11229-016-1305-8
  • Lee, Wang-Yen, 2012, “Hitchcock and Sober on Weak Predictivism”, Philosophia , 40(3): 553–562. doi:10.1007/s11406-011-9331-8
  • –––, 2013, “Akaike’s Theorem and Weak Predictivism in Science” Studies in the History and Philosophy of Science Part A , 44(4): 594–599. doi:10.1016/j.shpsa.2013.06.001
  • Leplin, Jarrett, 1975, “The Concept of an Ad Hoc Hypothesis”, Studies in History and Philosophy of Science , 5(3): 309–345. doi:10.1016/0039-3681(75)90006-0
  • –––, 1982, “The Assessment of Auxiliary Hypotheses”, British Journal for the Philosophy of Science , 33(3): 235–249. doi:10.1093/bjps/33.3.235
  • –––, 1987, “The Bearing of Discovery on Justification”, Canadian Journal of Philosophy , 17: 805–814. doi:10.1080/00455091.1987.10715919
  • –––, 1997, A Novel Defense of Scientific Realism , New York, Oxford: Oxford University Press.
  • –––, 2009, “Review: The Paradox of Predictivism by Eric Christian Barnes”, The Review of Metaphysics , 63(2): 455–457.
  • Lipton, Peter 1990, “Prediction and Prejudice”, International Studies in the Philosophy of Science , 4(1): 51–65. doi:10.1080/02698599008573345
  • –––, 1991, Inference to the Best Explanation , London/New York: Routledge.
  • Lyons, Timothy D., 2006, “Scientific Realism and the Strategema de Divide et Impera”, British Journal for the Philosophy of Science , 57(3): 537–560. doi:10.1093/bjps/axl021
  • Magnus, P.D., 2011, “Miracles, trust, and ennui in Barnes’ Predictivism”, Logos & Episteme , 2(1): 103–115. doi:10.5840/logos-episteme20112152
  • Magnus, P.D. and Craig Callender, 2004, “Realist Ennui and the Base Rate Fallacy”, Philosophy of Science , 71(3): 320–338. doi:10.1086/421536
  • Maher, Patrick, 1988, “Prediction, Accommodation, and the Logic of Discovery”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1988 , 1: 273–285. doi:10.1086/psaprocbienmeetp.1988.1.192994
  • –––, 1990, “How Prediction Enhances Confirmation”, in J. Michael Dunn and Anil Gupta (eds.), Truth or Consequences: Essays in Honor of Nuel Belnap , Dordrecht: Kluwer, pp. 327–343.
  • –––, 1993, “Howson and Franklin on Prediction”, Philosophy of Science , 60(2): 329–340. doi:10.1086/289736
  • Martin, Ben and Ole Hjortland, 2021, “Logical Predictivism”, Journal of Philosophical Logic , 50: 285–318.
  • Mayo, Deborah G., 1991, “Novel Evidence and Severe Tests”, Philosophy of Science , 58(4): 523–552. doi:10.1086/289639
  • –––, 1996, Error and the Growth of Experimental Knowledge , Chicago and London: University of Chicago Press.
  • –––, 2003, “Novel Work on the Problem of Novelty? Comments on Hudson”, Studies in the History and Philosophy of Modern Physics , 34: 131–134. doi:10.1016/S1355-2198(02)00083-7
  • –––, 2008, “How to Discount Double-Counting When It Counts: Some Clarifications”, British Journal for the Philosophy of Science , 59(4): 857–879. doi:10.1093/bjps/axn034
  • –––, 2010, “An Ad Hoc Save of a Theory of Adhocness? Exchanges with John Worrall” in Mayo and Spanos 2010: 155–169.
  • –––, 2014, “Some surprising facts about (the problem of) surprising facts (from the Dusseldorf Conference, February 2011)”, Studies in the History and Philosophy of Science , 45: 79–86. doi:10.1016/j.shpsa.2013.10.005
  • Mayo, Deborah G. and Aris Spanos (eds.), 2010, Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511657528
  • McCain, Kevin, 2012, “A Predictivist Argument Against Skepticism” Analysis , 72(4): 660–665. doi:10.1093/analys/ans109
  • McIntyre, Lee, 2001, “Accommodation, Prediction, and Confirmation”, Perspectives on Science , 9(3): 308–328. doi:10.1162/10636140160176161
  • Menke, C., 2014, “Does the Miracle Argument Embody a Base-Rate Fallacy?”, Studies in the History and Philosophy of Science Part A, 45: 103–108.
  • Mill, John Stuart, 1843, A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation , Vol. 2, London: John W. Parker.
  • Mizrahi, Moti, 2012, “Why the Ultimate Argument for Scientific Realism Fails”, Studies in the History and Philosophy of Science , 43(1): 132–138. doi:10.1016/j.shpsa.2011.11.001
  • Murphy, Nancey, 1989, “Another Look at Novel Facts”, Studies in the History and Philosophy of Science , 20(3): 385–388. doi:10.1016/0039-3681(89)90014-9
  • Musgrave, Alan, 1974, “Logical versus Historical Theories of Confirmation”, British Journal for the Philosophy of Science , 25(1): 1–23. doi:10.1093/bjps/25.1.1
  • –––, 1988, “The Ultimate Argument for Scientific Realism”, in Robert Nola (ed.), Relativism and Realism in Science , Dordrecht: Kluwer Academic Publishers, pp. 229–252. doi:10.1007/978-94-009-2877-0_10
  • Nunan, Richard, 1984, “Novel Facts, Bayesian Rationality, and the History of Continental Drift”, Studies in the History and Philosophy of Science , 15(4): 267–307. doi:10.1016/0039-3681(84)90013-X
  • Partington, J.R. and Douglas McKie, 1937, “I. The Levity of Phlogiston”, Annals of Science , 2(4): 361–404. doi:10.1080/00033793700200691
  • –––, 1938a, “II. The Negative Weight of Phlogiston”, Annals of Science , 3(1): 1–58. doi:10.1080/00033793800200781
  • –––, 1938b, “III. Light and Heat in Combustion”, Annals of Science , 3(4): 337–371. doi:10.1080/00033793800200951
  • Peterson, Clayton, 2019, “Accommodation, Prediction, and Replication: Model Selection in Scale Construction”, Synthese , 196: 4329–4350.
  • Popper, Karl, 1963, Conjectures and Refutations: The Growth of Scientific Knowledge , New York and Evanston: Harper and Row.
  • –––, 1972, Objective Knowledge , Oxford: Clarendon Press.
  • –––, 1974, “Replies to my critics”, in Paul Arthur Schilpp (ed.), The Philosophy of Karl Popper , Book II, 961–1197, La Salle, Illinois: Open Court.
  • Psillos, Stathis, 1999, Scientific Realism: How Science Tracks the Truth , London and New York: Routledge.
  • Putnam, Hilary, 1975, Philosophical Papers , Vol. 1, Mathematics, Matter, and Method , Cambridge: Cambridge University Press.
  • Redhead, Michael, 1978, “Adhocness and the Appraisal of Theories”, British Journal for the Philosophy of Science , 29: 355–361.
  • Salmon, Wesley C., 1981, “Rational Prediction”, British Journal for the Philosophy of Science , 32(2): 115–125. doi:10.1093/bjps/32.2.115
  • Sarkar, Husain, 1998, “Review of A Novel Defense of Scientific Realism by Jarrett Leplin”, Journal of Philosophy , 95(4): 204–209. doi:10.2307/2564685
  • Scerri, Eric R., 2005, “Response to Barnes’s critique of Scerri and Worrall”, Studies in the History and Philosophy of Science , 36(4): 813–816. doi:10.1016/j.shpsa.2005.08.006
  • Scerri, Eric R. and John Worrall, 2001, “Prediction and the Periodic Table”, Studies in the History and Philosophy of Science , 32(3): 407–452. doi:10.1016/S0039-3681(01)00023-1
  • Schindler, Samuel, 2008, “Use Novel Predictions and Mendeleev’s Periodic Table: Response to Scerri and Worrall (2001)”, Studies in the History and Philosophy of Science Part A , 39(2): 265–269. doi:10.1016/j.shpsa.2008.03.008
  • –––, 2014, “Novelty, coherence, and Mendeleev’s periodic table”, Studies in the History and Philosophy of Science Part A , 45: 62–69. doi:10.1016/j.shpsa.2013.10.007
  • Schlesinger, George N., 1987, “Accommodation and Prediction”, Australasian Journal of Philosophy , 65(1): 33–42. doi:10.1080/00048408712342751
  • Schurz, Gerhard, 2014, “Bayesian Pseudo-Confirmation, Use-Novelty, and Genuine Confirmation”, Studies in History and Philosophy of Science Part A , 45: 87–96. doi:10.1016/j.shpsa.2013.10.008
  • Sereno, Sergio Gabriele Maria, 2020, “Prediction, Accommodation, and the Periodic Table: A Reappraisal”, Foundations of Chemistry , 22: 477–488.
  • Stanford, P. Kyle, 2006, Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives , Oxford: Oxford University Press. doi:10.1093/0195174089.001.0001
  • Steele, Katie and Charlotte Werndl, 2013, “Climate Models, Calibration, and Confirmation”, The British Journal for the Philosophy of Science , 64(3): 609–635.
  • Swinburne, Richard, 2001, Epistemic Justification , Oxford: Oxford University Press. doi:10.1093/0199243794.001.0001
  • Thomason, Neil, 1992, “Could Lakatos, Even with Zahar’s Criterion of Novel Fact, Evaluate the Copernican Research Programme?”, British Journal for the Philosophy of Science , 43(2): 161–200. doi:10.1093/bjps/43.2.161
  • Votsis, Ioannis, 2014, “Objectivity in Confirmation: Post Hoc Monsters and Novel Predictions”, Studies in the History and Philosophy of Science Part A , 45: 70–78. doi:10.1016/j.shpsa.2013.10.009
  • Whewell, William, 1849 [1968], “Mr. Mill’s Logic”, originally published 1849, reprinted in Robert E. Butts (ed.), William Whewell’s Theory of Scientific Method , Pittsburgh, PA: University of Pittsburgh Press, pp. 265–308.
  • White, Roger, 2003, “The Epistemic Advantage of Prediction over Accommodation”, Mind , 112(448): 653–683. doi:10.1093/mind/112.448.653
  • Worrall, John, 1978, “The Ways in Which the Methodology of Scientific Research Programmes Improves Upon Popper’s Methodology”, in Gerard Radnitzky and Gunnar Andersson (eds.) Progress and Rationality in Science , (Boston studies in the philosophy of science, 58), Dordrecht: D. Reidel, pp. 45–70. doi:10.1007/978-94-009-9866-7_3
  • –––, 1985, “Scientific Discovery and Theory-Confirmation”, in Joseph C. Pitt (ed.), Change and Progress in Modern Science: Papers Related to and Arising from the Fourth International Conference on History and Philosophy of Science, Blacksburg, Virginia, November 1982 , Dordrecht: D. Reidel, pp. 301–331. doi:10.1007/978-94-009-6525-6_11
  • –––, 1989, “Fresnel, Poisson and the White Spot: The Role of Successful Predictions in the Acceptance of Scientific Theories”, in David Gooding, Trevor Pinch, and Simon Schaffer (eds.), The Uses of Experiment: Studies in the Natural Sciences , Cambridge: Cambridge University Press, pp. 135–157.
  • –––, 2002, “New Evidence for Old”, in Peter Gärdenfors, Jan Wolenski, and K. Kijania-Placek (eds.), In the Scope of Logic, Methodology and Philosophy of Science: Volume One of the 11th International Congress of Logic, Methodology and Philosophy of Science, Cracow, August 1999 , Dordrecht: Kluwer Academic Publishers, pp. 191–209.
  • –––, 2005, “Prediction and the ‘Periodic Law’: A Rejoinder to Barnes”, Studies in the History and Philosophy of Science , 36(4): 817–826. doi:10.1016/j.shpsa.2005.08.007
  • –––, 2006, “Theory-Confirmation and History”, in Colin Cheyne and John Worrall (eds.), Rationality and Reality: Conversations with Alan Musgrave , Dordrecht: Springer, pp. 31–61. doi:10.1007/1-4020-4207-8_4
  • –––, 2010, “Errors, Tests, and Theory Confirmation”, in Mayo. and Spanos 2010: 125–154.
  • –––, 2014, “Prediction and Accommodation Revisited”, Studies in History and Philosophy of Science Part A , 45: 54–61. doi:10.1016/j.shpsa.2013.10.001
  • Wright, John, 2012, Explaining Science’s Success: Understanding How Scientific Knowledge Works , Durham, England: Acumen.
  • Zahar, Elie, 1973, “Why did Einstein’s Programme supersede Lorentz’s? (I)”, British Journal for the Philosophy of Science , 24(2): 95–123. doi:10.1093/bjps/24.2.95
  • –––, 1983, Einstein’s Revolution: A Study In Heuristic , La Salle, IL: Open Court.

Copyright © 2022 by Eric Christian Barnes < ebarnes @ smu . edu >


The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab , Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

Research Hypothesis In Psychology: Types, & Examples

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


A research hypothesis (plural: hypotheses) is a specific, testable prediction about the anticipated results of a study, established at its outset. It is a key component of the scientific method.

Hypotheses connect theory to data and guide the research process towards expanding scientific understanding.

Some key points about hypotheses:

  • A hypothesis expresses an expected pattern or relationship. It connects the variables under investigation.
  • It is stated in clear, precise terms before any data collection or analysis occurs. This makes the hypothesis testable.
  • A hypothesis must be falsifiable. It should be possible, even if unlikely in practice, to collect data that disconfirms rather than supports the hypothesis.
  • Hypotheses guide research. Scientists design studies to explicitly evaluate hypotheses about how nature works.
  • For a hypothesis to be valid, it must be testable against empirical evidence. The evidence can then confirm or disprove the testable predictions.
  • Hypotheses are informed by background knowledge and observation, but go beyond what is already known to propose an explanation of how or why something occurs.

Predictions typically arise from a thorough knowledge of the research literature and from curiosity about real-world problems or implications, integrated to advance theory. They build on existing findings while providing new insight.

Types of Research Hypotheses

Alternative Hypothesis

The research hypothesis is often called the alternative or experimental hypothesis in experimental research.

It typically suggests a potential relationship between two key variables: the independent variable, which the researcher manipulates, and the dependent variable, which is measured based on those changes.

The alternative hypothesis states a relationship exists between the two variables being studied (one variable affects the other).


An experimental hypothesis predicts what change(s) will occur in the dependent variable when the independent variable is manipulated.

It states that the results are not due to chance and are significant in supporting the theory being investigated.

The alternative hypothesis can be directional, indicating a specific direction of the effect, or non-directional, suggesting a difference without specifying its nature. It’s what researchers aim to support or demonstrate through their study.

Null Hypothesis

The null hypothesis states no relationship exists between the two variables being studied (one variable does not affect the other). There will be no changes in the dependent variable due to manipulating the independent variable.

It states results are due to chance and are not significant in supporting the idea being investigated.

The null hypothesis, positing no effect or relationship, is a foundational contrast to the research hypothesis in scientific inquiry. It establishes a baseline for statistical testing, promoting objectivity by initiating research from a neutral stance.

Many statistical methods are tailored to test the null hypothesis, determining the likelihood of observed results if no true effect exists.

This dual-hypothesis approach provides clarity, ensuring that research intentions are explicit, and fosters consistency across scientific studies, enhancing the standardization and interpretability of research outcomes.
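To make the dual-hypothesis logic concrete, here is a minimal sketch in Python of a permutation test. The recall scores are invented for illustration: the test asks how often randomly shuffling the group labels produces a difference at least as large as the one observed, which estimates how likely the data would be if the null hypothesis were true.

```python
import random

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate a p-value for the null hypothesis that the two groups
    do not differ, by shuffling the group labels many times and asking
    how often chance alone produces a difference at least this large."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Invented recall scores for two study conditions (illustrative only)
condition_1 = [14, 15, 13, 16, 15, 14]
condition_2 = [11, 12, 10, 13, 11, 12]
p = permutation_test(condition_1, condition_2)
# A small p suggests the observed difference is unlikely under the null.
```

If p falls below a conventional threshold (e.g., 0.05), the null hypothesis is rejected, which lends support to (but does not prove) the alternative hypothesis.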

Nondirectional Hypothesis

A non-directional hypothesis, also known as a two-tailed hypothesis, predicts that there is a difference or relationship between two variables but does not specify the direction of this relationship.

It merely indicates that a change or effect will occur without predicting which group will have higher or lower values.

For example, “There is a difference in performance between Group A and Group B” is a non-directional hypothesis.

Directional Hypothesis

A directional (one-tailed) hypothesis predicts the nature of the effect of the independent variable on the dependent variable. It predicts the direction in which the change will take place (i.e., greater, smaller, more, or less).

It specifies whether one variable is greater, lesser, or different from another, rather than just indicating that there’s a difference without specifying its nature.

For example, “Exercise increases weight loss” is a directional hypothesis.
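The practical difference between directional and non-directional hypotheses shows up when a p-value is computed. Here is a small illustrative sketch, assuming a standard normal test statistic and a made-up value z = 1.8: a one-tailed test counts only effects in the predicted direction, while a two-tailed test counts large effects in either direction.

```python
import math

def normal_sf(z):
    """Survival function of the standard normal: P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Suppose a study yields a standardized test statistic of z = 1.8
z = 1.8

# Directional (one-tailed): only a positive effect counts as evidence.
p_one_tailed = normal_sf(z)

# Non-directional (two-tailed): a large effect in either direction counts,
# so the two-tailed p-value is double the one-tailed value.
p_two_tailed = 2 * normal_sf(abs(z))
```

With z = 1.8 the one-tailed p is about 0.036 while the two-tailed p is about 0.072, which is why the choice of direction should be justified from the literature before the data are collected.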


Falsifiability

The Falsification Principle, proposed by Karl Popper , is a way of demarcating science from non-science. It suggests that for a theory or hypothesis to be considered scientific, it must be testable and refutable.

Falsifiability emphasizes that scientific claims shouldn’t just be confirmable but should also have the potential to be proven wrong.

It means that there should exist some potential evidence or experiment that could prove the proposition false.

However many confirming instances exist for a theory, it takes only one counter-observation to falsify it. For example, the hypothesis that “all swans are white” can be falsified by observing a black swan.

For Popper, science should attempt to disprove a theory rather than attempt to continually provide evidence to support a research hypothesis.
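The asymmetry Popper describes (confirmations never clinch a universal claim, while a single counterexample refutes it) can be sketched in a few lines of Python; the swan sightings below are invented:

```python
def falsified(hypothesis, observations):
    """A universal hypothesis is falsified by a single counterexample;
    no number of confirming cases can prove it."""
    return any(not hypothesis(obs) for obs in observations)

all_swans_are_white = lambda swan: swan["color"] == "white"

# 1000 confirming observations, then one black swan.
sightings = [{"color": "white"}] * 1000 + [{"color": "black"}]
print(falsified(all_swans_are_white, sightings))  # True
```

The thousand white swans add nothing once the black one appears: the hypothesis is refuted, which is exactly the testability that falsifiability demands.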

Can a Hypothesis be Proven?

Hypotheses make probabilistic predictions. They state the expected outcome if a particular relationship exists. However, a study result supporting a hypothesis does not definitively prove it is true.

All studies have limitations. There may be unknown confounding factors or issues that limit the certainty of conclusions. Additional studies may yield different results.

In science, hypotheses can realistically only be supported with some degree of confidence, not proven. The process of science is to incrementally accumulate evidence for and against hypothesized relationships in an ongoing pursuit of better models and explanations that best fit the empirical data. But hypotheses remain open to revision and rejection if that is where the evidence leads.
  • Disproving a hypothesis is definitive. Solid disconfirmatory evidence will falsify a hypothesis and require altering or discarding it based on the evidence.
  • However, confirming evidence is always open to revision. Other explanations may account for the same results, and additional or contradictory evidence may emerge over time.

We can never prove the alternative hypothesis with 100% certainty. Instead, we see whether we can disprove, or reject, the null hypothesis.

If we reject the null hypothesis, this does not prove that our alternative hypothesis is correct, but it does lend support to the alternative/experimental hypothesis.

Upon analysis of the results, an alternative hypothesis can be rejected or supported, but it can never be proven to be correct. We must avoid any reference to results proving a theory as this implies 100% certainty, and there is always a chance that evidence may exist which could refute a theory.

How to Write a Hypothesis

  • Identify variables . The researcher manipulates the independent variable and the dependent variable is the measured outcome.
  • Operationalize the variables being investigated . Operationalization refers to the process of making the variables physically measurable or testable, e.g., if you are studying aggression, you might count the number of punches given by participants.
  • Decide on a direction for your prediction . If there is evidence in the literature to support a specific effect of the independent variable on the dependent variable, write a directional (one-tailed) hypothesis. If there are limited or ambiguous findings in the literature regarding the effect of the independent variable on the dependent variable, write a non-directional (two-tailed) hypothesis.
  • Make it Testable : Ensure your hypothesis can be tested through experimentation or observation. It should be possible to prove it false (principle of falsifiability).
  • Clear & concise language . A strong hypothesis is concise (typically one to two sentences long), and formulated using clear and straightforward language, ensuring it’s easily understood and testable.

Consider a hypothesis many teachers might subscribe to: students work better on Monday morning than on Friday afternoon (IV = day of the week, DV = standard of work).

Now, if we decide to study this by giving the same group of students a lesson on a Monday morning and a Friday afternoon and then measuring their immediate recall of the material covered in each session, we would end up with the following:

  • The alternative hypothesis states that students will recall significantly more information on a Monday morning than on a Friday afternoon.
  • The null hypothesis states that there will be no significant difference in the amount recalled on a Monday morning compared to a Friday afternoon. Any difference will be due to chance or confounding factors.

More Examples

  • Memory : Participants exposed to classical music during study sessions will recall more items from a list than those who studied in silence.
  • Social Psychology : Individuals who frequently engage in social media use will report higher levels of perceived social isolation compared to those who use it infrequently.
  • Developmental Psychology : Children who engage in regular imaginative play have better problem-solving skills than those who don’t.
  • Clinical Psychology : Cognitive-behavioral therapy will be more effective in reducing symptoms of anxiety over a 6-month period compared to traditional talk therapy.
  • Cognitive Psychology : Individuals who multitask between various electronic devices will have shorter attention spans on focused tasks than those who single-task.
  • Health Psychology : Patients who practice mindfulness meditation will experience lower levels of chronic pain compared to those who don’t meditate.
  • Organizational Psychology : Employees in open-plan offices will report higher levels of stress than those in private offices.
  • Behavioral Psychology : Rats rewarded with food after pressing a lever will press it more frequently than rats who receive no reward.


Ad Hoc Analysis

An ad hoc analysis is an extra type of hypothesis added to the results of an experiment to try to explain away contrary evidence.


The scientific method dictates that if a hypothesis is rejected, that is final. The research needs to be redesigned or refined before the hypothesis can be tested again.

Amongst pseudo-scientists, an ad hoc hypothesis is often appended in an attempt to justify why the expected results were not obtained.

An often-quoted example of an ad hoc analysis is of a paranormal investigator studying psychic waves under scientific conditions. Upon finding that the experiment did not give positive results, they blame the negative brain waves given out by others.

This is simply an attempt to deflect criticism and failure by throwing out other, completely arbitrary reasons. This ad hoc analysis would require the brain waves of the onlookers to also be tested and eliminated, moving the goalposts and creating a fallacy.

The idea of biorhythms, where the body and mind are affected by deep and regular cycles unrelated to biological circadian rhythms, has long been viewed with skepticism. Every time that scientific research debunks the theory, the adherents move the goal posts, inventing some other underlying reason to explain the results.

Often, astrologers presented with contrary evidence will blame the results upon some ‘unknown’ astrological phenomenon. This, of course, is impossible to prove and so the ad hoc analysis conveniently removes the pseudo-science from the debate.

The notorious Water4Gas scam works along the same lines – when researchers pointed out that the whole idea revolves around the principle of perpetual motion, its promoters invented another ad hoc hypothesis to explain where the ‘money saving’ energy came from.

Ad hoc analysis is not always a bad thing, and can often be part of the process of refining research.

Imagine, for example, that a research group was conducting an experiment into water turbulence, but kept receiving strange results, disproving their hypothesis. Whilst attempting to eliminate any potential confounding variables, they discover that the air conditioning unit is faulty, transmitting vibrations through the lab. This is switched off when the experiment is running and they retest the hypothesis.

This is part of the normal scientific process, and is part of refining the research design rather than trying to move the goalposts.

Ad hoc analysis is only a problem when a non-testable ad hoc hypothesis is added to the results to justify failure and deflect criticisms.

The air conditioning unit hypothesis can be tested very easily, simply by switching the unit off, and it was the result of an experimental flaw. Negative brainwaves cannot be easily tested, and therefore the deflection creates a fallacy.


Martyn Shuttleworth (Nov 17, 2008). Ad Hoc Analysis. Retrieved Apr 21, 2024 from Explorable.com: https://explorable.com/ad-hoc-analysis



Humanities LibreTexts

9.1: Hypothetical Reasoning


Suppose I’m going on a picnic and I’m only selecting items that fit a certain rule. You want to find out what rule I’m using, so you offer up some guesses at items I might want to bring:

A banana

An egg salad sandwich

A grape soda

Suppose now that I tell you that I’m okay with the first two, but I won’t bring the third. Your next step is interesting: you look at the first two, figure out what they have in common, and then you take a guess at the rule I’m using. In other words, you posit a hypothesis. You say something like

Do you only want to bring things that are yellow or tan?

Notice how at this point your hypothesis goes way beyond the evidence. Bananas and egg salad sandwiches have so much more in common than being yellow/tan objects. This is how hypothetical reasoning works: you look at the evidence, add a hypothesis that makes sense of that evidence (one among many hypotheses available), and then check to be sure that your hypothesis continues to make sense of new evidence as it is collected.

Suppose I now tell you that you haven’t guessed the right rule. So, you might throw out some more objects:

A key lime pie

A jug of orange juice

I then tell you that the first two are okay, but again the last item is not going with me on this picnic.

It’s solid items! Solid items are okay, but liquid items are not.

Again, not quite. Try another set of items. You are still convinced that it has to do with the soda and the juice being liquid, so you try out an interesting tactic:

An ice cube

Some liquid water

Some water vapor

The first and last items are okay, but not the middle one. Now you think you’ve got me. You guess that the rule is “anything but liquids,” but I refuse to tell you whether you got it right. You’re pretty confident at this point, but perhaps you’re not certain. In principle, there could always be more evidence that upsets your hypothesis. I might say that the ocean is okay but a freshwater lake isn’t, and that would be very confusing for you. You’ll never be quite certain that you’ve guessed my rule correctly because it’s always in principle possible that I’m using a rule far more complex than your hypothesis.

So in hypothetical reasoning what we’re doing is making a leap from the evidence we have available to the rule or principle or theory which explains that evidence. The hypothesis is the link between the two. We have some finite evidence available to us, and we hypothesize an explanation. The explanation we posit either is or is not the true explanation, and so we’re using the hypothesis as a bridge to get at the true explanation of what is happening in the world.
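One way to picture the guessing game is as eliminating candidate rules against accumulating evidence. The items, attributes, and candidate rules below are invented for illustration; a rule survives only if it agrees with every observation so far, and whatever survives still outruns the evidence:

```python
# Each candidate rule is a predicate over items; the evidence is a list
# of (item, was_accepted) pairs from the picnic game.
candidate_rules = {
    "yellow or tan": lambda item: item["color"] in ("yellow", "tan"),
    "solid": lambda item: item["state"] == "solid",
    "not liquid": lambda item: item["state"] != "liquid",
}

evidence = [
    ({"name": "banana", "color": "yellow", "state": "solid"}, True),
    ({"name": "grape soda", "color": "purple", "state": "liquid"}, False),
    ({"name": "ice cube", "color": "clear", "state": "solid"}, True),
    ({"name": "water vapor", "color": "clear", "state": "gas"}, True),
]

# A rule survives only if it agrees with every observation so far.
surviving = {
    name for name, rule in candidate_rules.items()
    if all(rule(item) == accepted for item, accepted in evidence)
}
print(surviving)  # {'not liquid'}
```

With less evidence, several rules survive at once, which is exactly why the hypothesis goes beyond the data: each new observation prunes the candidates but never certifies the survivor.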

The hypothetical method has four stages. Let’s illustrate each with an example. You are investigating a murder and have collected a lot of evidence but do not yet have a guess as to who the killer might be.

1. The occurrence of a problem

Example \(\PageIndex{1}\)

Someone has been murdered and we need to find out who the killer is so that we might bring them to justice.

2. Formulating a hypothesis

Example \(\PageIndex{2}\)

After collecting some evidence, you weigh the reasons in favor of thinking that each suspect is indeed the murderer, and you decide that the spouse is responsible.

3. Drawing implications from the hypothesis

Example \(\PageIndex{3}\)

If the spouse was the murderer, then a number of things follow. The spouse must have a weak alibi or their alibi must rest on some falsehood. There is likely to be some evidence on their property or among their belongings that links the spouse to the murder. The spouse likely had motive. etc., etc., etc.

We can go on for ages, but the basic point is that once we’ve got an idea of what the explanation for the murder is (in this case, the hypothesis is that the spouse murdered the victim), we can ask ourselves what the world would have to be like for that to have been true. Then we move onto the final step:

4. Test those implications.

Example \(\PageIndex{4}\)

We can search the murder scene, try to find a murder weapon, run DNA analysis on the organic matter left at the scene, question the spouse about their alibi and possible motives, check their bank accounts, talk to friends and neighbors, etc. Once we have a hypothesis, in other words, that hypothesis drives the search for new evidence—it tells us what might be relevant and what irrelevant and therefore what is worth our time and what is not.

The Logic of Hypothetical Reasoning

If the spouse did it, then they must have a weak alibi. Their alibi is only verifiable by one person: the victim. So they do have a weak alibi. Therefore...they did it? Not quite.

Just because they have a weak alibi doesn’t mean they did it. If that were true, anyone with a weak alibi would be guilty for everything bad that happened when they weren’t busy with a verifiable activity.

Similarly, if your car’s battery is dead, then it won’t start. This doesn’t mean that whenever your car doesn’t start, the battery is dead. That would be a wild and bananas claim to make (and obviously false), but the original conditional (the first sentence in this paragraph) isn’t wild and bananas. In fact, it’s a pretty normal claim to make and it seems obviously true.

Let’s talk briefly about the logic of hypothetical reasoning so we can discover an important truth.

If the spouse did it, then their alibi will be weak

Their alibi is weak

So, the spouse did it

This is bad reasoning. How do we know? Well, here’s the logical form:

If A, then B

B

Therefore, A

This argument structure—called “affirming the consequent”—is invalid because there are countless instances of this general structure that have true premises and a false conclusion. Consider the following examples:

Example \(\PageIndex{5}\)

If I cook, I eat well

I ate well tonight, so I cooked.

Example \(\PageIndex{6}\)

If Eric runs for student president, he’ll become more popular.

Eric did become more popular, so he must’ve run for student president.

Maybe I ate well because I’m at the finest restaurant in town. Maybe I ate well because my brother cooked for me. Any of these things is possible, which is the root problem with this argument structure. It infers that one of the many possible antecedents to the conditional is the true antecedent without giving any reason for choosing or preferring this antecedent.

More concretely, affirming the consequent is the structure of an argument that states that a) one thing will explain an event, and b) that the event in question in fact occurred, and then concludes that c) the one thing that would’ve explained the event is the correct explanation of the event.

More concretely still, here’s yet another example of affirming the consequent:

Example \(\PageIndex{7}\)

My being rich would explain my being popular

I am in fact popular,

Therefore I am in fact rich

I might be popular without having a penny to my name. People sometimes root for underdogs, or respond to the right kind of personality regardless of their socioeconomic standing, or respect a good sense of humor or athletic prowess.

If I were rich, though, that would be one potential explanation for my being popular. Rich people have nice clothes, cool cars, nice houses, and get to have the kinds of experiences that make someone a potentially popular person because everyone wants to hear the cool stories or be associated with the exciting life they lead. Perhaps, people often seem to think, they’ll get to participate in the next adventure if they cozy up to the rich people. Rich kids in high school can also throw the best parties (if we’re honest, and that’s a great source of popularity).

But if I’m not rich, that doesn’t mean I’m not popular. It only means that I’m not popular because I’m rich .

Okay, so we’ve established that hypothetical reasoning has the logical structure of affirming the consequent. We’ve further established that affirming the consequent is an invalid deductive argument structure. Where does this leave us? Is the hypothetical method bad reasoning?! Nope! Luckily not all reasoning is deductive reasoning.

Remember that we’re discussing inductive reasoning in this chapter. Inductive reasoning doesn’t obey the rules of deductive logic. So it’s no crime for a method of inductive reasoning to be deductively invalid. The crime against logic would be to claim that we have certain knowledge when we only use inductive reasoning to justify that knowledge. The upshot? Science doesn’t produce certain knowledge—it produces justified knowledge, knowledge to a more or less high degree of certitude, knowledge that we can rely on and build bridges on, knowledge that almost certainly won’t let us down (but it doesn’t produce certain knowledge).

We can, though, with deductive certainty, falsify a hypothesis. Consider the murder case: if the spouse did it, then they’d have a weak alibi. That is, if the spouse did it, then they wouldn’t have an airtight alibi because they’d have to be lying about where they were when the murder took place. If it turns out that the spouse does have an airtight alibi, then your hypothesis was wrong.

Let’s take a look at the logic of falsification:

If the spouse did it, then they won’t have an airtight alibi

They have an airtight alibi

So the spouse didn’t do it

Now it’s possible that the conditional premise (the first premise) isn’t true, but we’ll assume it’s true for the sake of the illustration. The hypothesis was that the spouse did it and so the spouse’s alibi must have some weakness.

It’s also possible that our detective work hasn’t been thorough enough and so the second premise is false. These are important possibilities to keep in mind. Either way, here’s the logical form (a bit cleaned up and simplified):

If A, then B

Not B

Therefore, not A

This is what argument pattern? That’s right! You’re so smart! It’s modus tollens or “the method of denying”. It’s a type of argument where you deny the implications of something and thereby deny that very thing. It’s a deductively valid argument form (remember from our unit on natural deduction?), so we can falsify hypotheses with deductive certainty: if your hypothesis implies something with necessity, and that something doesn’t come to pass, then your hypothesis is wrong.
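Both argument forms can be checked mechanically by enumerating truth assignments: a form is valid just in case no assignment makes all the premises true and the conclusion false. A small sketch in Python:

```python
from itertools import product

def countermodels(premises, conclusion):
    """Return the truth assignments (a, b) where every premise is true
    but the conclusion is false; a valid form has none."""
    return [
        (a, b) for a, b in product([True, False], repeat=2)
        if all(p(a, b) for p in premises) and not conclusion(a, b)
    ]

implies = lambda p, q: (not p) or q

# Affirming the consequent: If A then B; B; therefore A.
bad = countermodels([lambda a, b: implies(a, b), lambda a, b: b],
                    lambda a, b: a)
print(bad)  # [(False, True)] -- premises true, conclusion false: invalid

# Modus tollens: If A then B; not B; therefore not A.
good = countermodels([lambda a, b: implies(a, b), lambda a, b: not b],
                     lambda a, b: not a)
print(good)  # [] -- no countermodel: deductively valid
```

The single countermodel for affirming the consequent is exactly the spouse case: the conditional holds and the alibi is weak (B is true), yet the spouse is innocent (A is false). Modus tollens has no such case, which is why falsification is deductively secure.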

Your hypothesis is wrong. That is, your hypothesis as it stands was wrong. You might be like one of those rogue and dogged detectives in the television shows that never gives up on a hunch and ultimately discovers the truth through sheer stubbornness and determination. You might think that the spouse did it, even though they’ve got an airtight alibi. In that case, you’ll have to alter your hypothesis a bit.

The process of altering a hypothesis to react to potentially falsifying evidence typically involves adding extra hypotheses onto your original hypothesis such that the original hypothesis no longer has the troubling implications which turned out not to be true. These extra hypotheses are called ad hoc hypotheses.

As an example, Newton’s theory of gravity had one problem: it made a sort of wacky prediction. So the idea was that gravity was an instantaneous attractive force exerted by all massive bodies on all other bodies. That is, all bodies attract all other bodies regardless of distance or time. The result of this should be that all massive bodies should smack into each other over time (after all, they still have to travel towards one another). But we don’t witness this. We should see things crashing towards the center of gravity of the universe at incredible speeds, but that’s not what’s happening. So, by the logic of falsification, Newton’s theory is simply false.

But Newton had a trick up his sleeve: he claimed that God arranged things such that the heavenly bodies are so far apart from one another that they are prevented from crashing into one another. Problem solved! God put things in the right spatial orientation such that the theory of gravity is saved: they won’t crash into each other because they’re so far apart! Newton employed an ad hoc hypothesis to save his theory from falsification.

Abductive Reasoning

There’s one more thing to discuss while we’re still on the topic of hypothetical reasoning or reasoning using hypotheses. ‘Abduction’ is a fancy word for a process or method sometimes called “inference to the best explanation.” The basic idea is that we have a bunch of evidence, we try to explain it, and we find that we could explain it in multiple ways. Then we find the “best” explanation or hypothesis and infer that this is the true explanation.

For example, say we’re playing a game that’s sort of like the picnic game from before. I give you a series of numbers, and then you give me more series of numbers so that I can confirm or deny that each meets the rule I have in mind. So I say:

And then you offer the following series (serieses?):

60, 90, 120

Each of these series tests a particular hypothesis. The first tests whether the important thing is that the numbers start with 2, 3, and 4. The second tests whether the rule is to add 10 to each successive number in the series. The third tests a more complicated hypothesis: add half of the first number to itself to get the second number, then add one third of the second number to itself to get the third number.

Now let’s say I tell you that only the third series is acceptable. What now?

Well, our hypothesis was pretty complex, but it seems pretty good. I can infer that this is the correct rule. Alternatively, I might look at other hypotheses which fit the evidence equally well: 1x, 1.5x, 2x? or maybe it’s 2x, 3x, 4x? What about x, 1.5x, x\(^2\)? These all make sense of the data, but are they equal apart from that?

Let’s suppose we can’t easily get more data with which to test our various hypotheses. We’ve got 4 to choose from and nothing in the evidence suggests that one of the hypotheses is better than the others—they all fit the evidence perfectly. What do we do?

One thing we could do is choose which hypothesis is best for reasons other than fit with the evidence. Maybe we want a simpler hypothesis, or maybe we want a more elegant hypothesis, or one which suggests more routes for investigation. These are what we might call “theoretical virtues”—they’re the things we want to see in a theory. The process of abduction is the process of selecting the hypothesis that has the most to offer in terms of theoretical virtues: the simplest, most elegant, most fruitful, most general, and so on.

In science in particular, we value a few theoretical virtues over others: support by the empirical evidence available, replicability of the results in a controlled setting by other scientists, ideally mathematical precision or at least a lack of vagueness, and parsimony or simplicity in terms of the sorts of things the hypothesis requires us to believe in.
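As a toy illustration of tie-breaking by theoretical virtue, here is a sketch in Python; both the sketch and its crude “count the free parameters” measure of parsimony are my own assumptions, not the chapter’s:

```python
# Toy tie-breaker: among rules that fit the evidence equally well,
# prefer the one with the fewest free parameters (a crude proxy for parsimony).
candidates = {
    "add half, then a third": {"fits_evidence": True, "free_parameters": 0},
    "multiples 1x, 1.5x, 2x": {"fits_evidence": True, "free_parameters": 1},
    "multiples 2x, 3x, 4x": {"fits_evidence": True, "free_parameters": 1},
}

best = min(
    (name for name, info in candidates.items() if info["fits_evidence"]),
    key=lambda name: candidates[name]["free_parameters"],
)
print(best)  # the zero-parameter rule wins on parsimony
```

A real abductive judgment would weigh several virtues at once (simplicity, elegance, fruitfulness, generality); the single numeric score here just makes the selection step concrete.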

Confirmation Bias

This is a great opportunity to discuss confirmation bias: the natural tendency we have to seek out evidence that supports our beliefs and to ignore evidence that cuts against them. We’ll discuss cognitive biases more in Chapter 10, but since we’re dealing with the relationship between evidence and belief, this seems like a good spot to pause and reflect on how our minds work.

The way our minds naturally work, it seems, is to settle on a belief and then work hard to maintain that belief whatever happens. We come to believe that global warming is anthropogenic (caused by human activities), and then we’re happy to accept a wide variety of evidence for the claim. If the evidence supports our belief, we don’t take the time or energy to really investigate how convincing that evidence is. If we already believe the conclusion of an inference, in other words, we are much less likely to test or analyze the inference.

Alternatively, when we see evidence or arguments that appear to point the other way, we are more skeptical of that evidence and more critical of those arguments. For instance, if someone notes that the Earth goes through normal cycles of warming and ice ages and warming again, we will immediately look for ways to explain how this warming period is different from others in the past. Or we might look at the period of the cycles to find out whether this is happening at the “right” time in our geological history for it not to be caused by humankind. In other words, we’re more skeptical of arguments or evidence that would defeat or undermine our beliefs, and less skeptical and critical of arguments and evidence that support them.

Here are some questions to reflect on as you try to decide how guilty you are of confirmation bias in your own reasoning:

Questions for Reflection:

1. Which news sources do you trust? Why?

2. What’s your process for exploring a topic—say a political or scientific or news topic?

3. How do you decide what to believe about a new subject?

4. When other people express an opinion about someone you don’t know, do you withhold judgment? How well do you do so?

5. Are you harder on arguments and evidence that would shake up your beliefs?

Ad Hoc in Psychology

Ad hoc is a term often used in psychology to describe a situation, approach, or intervention that is improvised or created specifically for a unique circumstance or problem. It refers to an impromptu solution that is not pre-planned or part of a structured framework. The term “ad hoc” is derived from Latin, meaning “for this purpose.” In psychology, an ad hoc approach emphasizes flexibility and adaptability in addressing a specific issue.

Examples of Ad Hoc in Psychology

To better understand the concept of ad hoc in psychology, let’s explore a few examples:

1. Therapeutic Techniques

In therapy sessions, psychologists may employ ad hoc techniques to adapt to the unique needs and circumstances of their clients. For instance, if a client is experiencing increased anxiety during a conversation about a particular topic, the therapist may quickly adjust the conversation or introduce a calming technique to alleviate distress. This ad hoc approach ensures that therapy caters to the individual’s specific needs, rather than adhering strictly to a predetermined therapeutic plan.

2. Problem-Solving in Research

In psychological research, researchers often encounter unexpected challenges or obstacles. They may need to devise ad hoc strategies to overcome these hurdles and continue their investigation. For example, if a data collection method proves ineffective, researchers might develop an improvised approach to gathering the required information to maintain the study’s integrity and validity.

Benefits of Ad Hoc Approaches

The ad hoc approach in psychology offers several benefits:

  • Flexibility and Creativity: Ad hoc methods allow psychologists to think on their feet and come up with innovative solutions to address unique problems. This flexibility promotes creativity and adaptability in the field.
  • Personalization: By using an ad hoc approach, psychologists can tailor their interventions and techniques to the specific needs of their clients, resulting in more effective and personalized treatments.
  • Problem-Solving Skills: The ability to improvise and devise ad hoc strategies enhances psychologists’ problem-solving skills. It encourages them to think critically, analyze situations, and generate effective solutions in real-time.

Limitations and Considerations

While ad hoc approaches offer advantages, it’s essential to consider potential limitations:

  • Consistency: Ad hoc methods may lack consistency, as they rely on spontaneous decision-making rather than adhering to standard protocols. This variation in techniques and interventions can lead to inconsistencies in treatment outcomes.
  • Research Validity: In scientific research, an overreliance on ad hoc approaches may compromise the validity and reliability of the study. It is crucial to strike a balance between improvisation and maintaining rigorous research standards.
  • Professional Judgement: Ad hoc approaches require psychologists to make quick decisions based on their expertise and judgement. While this can be advantageous, it also places considerable responsibility on the professional to ensure their decisions are ethical and evidence-based.

In Conclusion

Ad hoc approaches in psychology provide psychologists with the flexibility to adapt and respond effectively to the unique needs and challenges encountered in their practice or research. While they offer numerous benefits, it is vital to strike a balance between improvisation and maintaining consistency, validity, and ethical considerations in the field of psychology. By employing ad hoc techniques judiciously, psychologists can enhance their problem-solving skills and deliver more personalized and effective interventions to their clients.

Sociology Plus

Ad hoc Hypothesis

An ad hoc hypothesis denotes a supplementary hypothesis added to a theory to prevent it from being refuted. According to Karl Popper’s philosophy of science, intellectual systems such as Marxism and Freudianism have been sustained only through dependence on ad hoc hypotheses to fill their gaps. Ad hoc hypotheses are used to account for anomalies that the theory in its unaltered form could not foresee.

Explanation

Ad hoc hypotheses are acceptable only if their non-universal, specific nature can be shown; to put it another way, only if their potential for direct generalization is disproven. An ad hoc hypothesis is one embraced without any independent justification in order to save a theory from refutation or criticism. This technique is deployed in sociological research studies.

The derivation of a particular conclusion may be deemed invalid if an ad hoc hypothesis is shown to be acceptable and non-universal; as a result, the specific example loses its scientific significance. The necessity of repeated testing is implied in this working rule for the acceptance of ad hoc hypotheses, which makes the process seem all the more justifiable.

Notably, the system seems to be in question whenever the introduction of an ad hoc hypothesis is required until the acceptability of the ad hoc hypothesis appears to be established by the requisite falsification attempts. The restriction of ad hoc hypotheses and the continuity principle appear to guarantee the objectivity of falsification; in other words, a theory should only be regarded as falsified if its falsification is theoretically testable.

In addition, because it gives a preferential position to critical evaluation or falsification, this principle of restriction serves, in a sense, as the second part of the working definition of a theoretical system’s falsification. Under the continuity principle, ad hoc hypotheses can be used to attempt to stave off falsification, but only if a further hypothesis, the generalized ad hoc hypothesis (which is also subject to the continuity principle), can itself be refuted. Avoiding one falsification therefore depends on (yet another) falsification.

The first falsification takes effect if the second is unsuccessful. This methodological constraint, the principle of the restriction of ad hoc hypotheses, effectively eliminates the “conventionalist objection” to falsifiability: provided a system enables the derivation of empirically testable consequences in the first place, the argument that the system is in principle unfalsifiable can be shown (via the principle of the restriction of ad hoc hypotheses) to be inconsistent.

Since the non-falsifiability of any hypothesis (even a generalized ad hoc hypothesis) would necessitate the falsifiability of other hypotheses (that is, the falsification of the original axiomatic system), which is obviously inconsistent, this principle gives a workable definition of the term “falsification.”

The ad hoc hypothesis “This (otherwise accurate) watch showed the wrong time under such and such circumstances” is only a valid ad hoc hypothesis if the universal statement “All (otherwise accurate) watches show the wrong time under such and such circumstances” can be shown to be false, or refuted, by counterexamples.



Ad Hoc Philosophy of Science

  • Published: 21 January 2019
  • Volume 50, pages 297–306 (2019)

Thomas Johansson

It has been shown that the concept of ad hocness is ambiguous when applied to natural science. Here it is established that a similar ambiguity is present when the concept is applied in a philosophical debate. Neil Tennant’s proposal for solving Fitch’s paradox has been accused several times of being ad hoc, and he has presented several defenses. This paper establishes that ad hocness is never defined in that debate, that each author uses a different notion of the concept, and that no reason for adopting any particular notion is offered.


Notes

  • Hunt (2012).
  • From now on, I will use ad hoc and unprincipled as synonymous.
  • Fitch (1963).
  • Williamson (1982), Percival (1990).
  • Edgington (1985).
  • Tennant (1997).
  • Williamson (2000, 2009), Tennant (2001a, 2010).
  • Hand and Kvanvig (1999, 2).
  • Hand and Kvanvig (1999, 4).
  • Tennant (1997, 272).
  • Tennant (1997, 272–273).
  • Tennant (1997, 247–259).
  • Tennant (1997, 246–247).
  • Tennant (2001b).
  • DeVidi and Kenyon (2003, 485).
  • Douven (2005, 49–50).
  • Douven (2005, 50).
  • Tennant (2001b, 110).
  • Douven (2005, 51).
  • Philosophers worth mentioning are, among others, Comte, Feyerabend, Kuhn and Lakatos.
  • Popper (1963, 37).
  • Popper (1963, 241).
  • Popper (1963, 241–242).
  • Brush (1989, 1994, 1999).
  • Brush (1999, 208).
  • Leplin (1975).
  • Leplin (1975, 336–337).
  • Leplin (1975, 318).
  • Hunt (2012, 10).

References

Brush, S. G. (1989). Prediction and theory evaluation: The case of light bending. Science, 246(4934), 1124–1129.

Brush, S. G. (1994). Dynamics of theory change: The role of predictions. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 2, 133–145.

Brush, S. G. (1999). Why was relativity accepted? Physics in Perspective, 1(2), 184–214.

DeVidi, D., & Kenyon, T. (2003). Analogues of knowability. Australasian Journal of Philosophy, 81(4), 481–495.

Douven, I. (2005). A principled solution to Fitch’s paradox. Erkenntnis, 62(1), 47–69.

Edgington, D. (1985). The paradox of knowability. Mind, 94(376), 557–568.

Fitch, F. B. (1963). A logical analysis of some value concepts. The Journal of Symbolic Logic, 28(2), 135–142.

Hand, M., & Kvanvig, J. (1999). Tennant on knowability. Australasian Journal of Philosophy, 77(4), 422–428.

Hunt, C. (2012). On ad hoc hypotheses. Philosophy of Science, 79(1), 1–14.

Leplin, J. (1975). The concept of an ad hoc hypothesis. Studies in History and Philosophy of Science, 5(4), 309–345.

Percival, P. (1990). Fitch and intuitionistic knowability. Analysis, 50(3), 182–187.

Popper, K. (1963). Conjectures and refutations: The growth of scientific knowledge. London: Routledge.

Tennant, N. (1997). The taming of the true. Oxford: Oxford University Press.

Tennant, N. (2001a). Is every truth knowable? Reply to Williamson. Ratio, 14(3), 263–280.

Tennant, N. (2001b). Is every truth knowable? Reply to Hand and Kvanvig. Australasian Journal of Philosophy, 79(1), 107–113.

Tennant, N. (2010). Williamson’s woes. Synthese, 173(1), 9–23.

Williamson, T. (1982). Intuitionism disproved? Analysis, 42(4), 203–207.

Williamson, T. (2000). Tennant on knowable truth. Ratio, 13(2), 99–114.

Williamson, T. (2009). Tennant’s troubles. In J. Salerno (Ed.), New essays on the knowability paradox (pp. 183–205). New York: Oxford University Press.

Author information

Thomas Johansson, Lund University, Lund, Sweden; Viken, Sweden.

About this article

Johansson, T. Ad Hoc Philosophy of Science. Journal for General Philosophy of Science, 50, 297–306 (2019). https://doi.org/10.1007/s10838-018-9438-8

Issue date: 15 June 2019.

Keywords: Fitch’s paradox; Philosophy of science.
