Induction is a specific form of reasoning in which the premises of an argument support a conclusion, but do not ensure it. The topic of induction is important in analytic philosophy for several reasons and is discussed in several philosophical sub-fields, including logic, epistemology, and philosophy of science. However, the most important philosophical interest in induction lies in the problem of whether induction can be "justified." This problem is often called "the problem of induction" and was first formulated by the Scottish philosopher David Hume (1711-1776).
It is therefore worthwhile to define what philosophers mean by "induction" and to distinguish it from other forms of reasoning, to present Hume's problem of induction and Nelson Goodman's (1906-1998) new riddle of induction, and to consider statistical and probabilistic inference as potential responses to these problems.
Enumerative induction
The sort of induction that philosophers are interested in is known as enumerative induction. Enumerative induction (or simply induction) comes in two types, "strong" induction and "weak" induction.
Strong induction
Strong induction has the following form:
A1 is a B.
A2 is a B.
...
An is a B.
Therefore, all As are Bs.
An example of strong induction is that all ravens are black because each raven that has ever been observed has been black.
Weak induction
Notice, however, that one need not make such a strong inference with induction; the second type, weak induction, has the following form:
A1 is a B.
A2 is a B.
...
An is a B.
Therefore, the next A will be a B.
An example of weak induction is that because every raven that has ever been observed has been black, the next observed raven will be black.
Mathematical induction
Enumerative induction should not be confused with mathematical induction. While enumerative induction concerns matters of empirical fact, mathematical induction concerns matters of mathematical fact. Specifically, mathematical induction is what mathematicians use to make claims about an infinite set of mathematical objects. Mathematical induction is different from enumerative induction because mathematical induction guarantees the truth of its conclusions since it rests on what is called an “inductive definition” (sometimes called a “recursive definition”).
Inductive definitions define sets (usually infinite sets) of mathematical objects. They consist of a base clause specifying the basic elements of the set, one or more inductive clauses specifying how additional elements are generated from existing elements, and a final clause stipulating that all of the elements in the set are either basic or in the set because of one or more applications of the inductive clause or clauses (Barwise and Etchemendy 2000, 567). For example, the set of natural numbers (N) can be inductively defined as follows:
1. 0 is an element in N.
2. For any element x, if x is an element in N, then (x + 1) is an element in N.
3. Nothing else is an element in N unless it satisfies condition (1) or (2).
Thus, in this example, (1) is the base clause, (2) is the inductive clause, and (3) is the final clause. Now inductive definitions are helpful because, as mentioned before, mathematical inductions are infallible precisely because they rest on inductive definitions. Consider the following mathematical induction, which proves that the sum Sn of the natural numbers from 0 to n satisfies Sn = ½n(n + 1), a result famously associated with the mathematician Carl Friedrich Gauss [1777-1855]:
First, note that S0 = 0 = ½(0)(0 + 1). Now assume that Sm = ½m(m + 1) for some natural number m. If Sm+1 denotes the sum of the numbers from 0 to m + 1, then Sm+1 = Sm + (m + 1), so Sm+1 = ½m(m + 1) + (m + 1). Furthermore, since ½m(m + 1) + (m + 1) = ½m² + (3/2)m + 1, and ½m² + (3/2)m + 1 = (½m + ½)(m + 2), it follows that Sm+1 = (½m + ½)(m + 2) = ½(m + 1)((m + 1) + 1). Since the first subproof shows that 0 is in the set of numbers satisfying Sn = ½n(n + 1), and the second subproof shows that whenever a natural number satisfies Sn = ½n(n + 1), its successor satisfies it as well, it follows by the inductive definition of N that every natural number satisfies Sn = ½n(n + 1). Thus, Sn = ½n(n + 1) holds for all natural numbers.
Notice that the above mathematical induction is infallible because it rests on the inductive definition of N. However, unlike mathematical inductions, enumerative inductions are not infallible because they do not rest on inductive definitions.
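The reasoning above can be illustrated, though of course not replaced, by a short computation. The following sketch (Python is used purely for illustration, and the function names are invented for the example) mirrors the recursive structure of the inductive definition of N with a recursive sum function and then checks Gauss's closed form for the first few hundred natural numbers; a finite check of this sort is not itself a proof by induction.

```python
# Illustrative sketch only: a finite numerical check, not a proof.

def s(n):
    """Sum of the natural numbers from 0 to n, defined recursively.

    The base case mirrors the base clause of the inductive definition of N,
    and the recursive case mirrors the inductive clause.
    """
    if n == 0:
        return 0           # base clause: S(0) = 0
    return s(n - 1) + n    # inductive clause: S(n) = S(n - 1) + n

def closed_form(n):
    """Gauss's closed form: S(n) = n(n + 1)/2."""
    return n * (n + 1) // 2

# Check the identity for the first 500 natural numbers.
assert all(s(n) == closed_form(n) for n in range(500))
print("S(n) = n(n + 1)/2 verified for n = 0 ... 499")
```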
Non-inductive reasoning
Induction contrasts with two other important forms of reasoning: Deduction and abduction.
Deduction
Deduction is a form of reasoning whereby the premises of the argument guarantee the conclusion. More precisely, in a valid deductive argument, if the premises are true, then the conclusion must be true. There are several forms of deduction, but the most basic one is modus ponens, which has the following form:
If A, then B
A
Therefore, B
Deductions are unique because they guarantee the truth of their conclusions if the premises are true. Consider the following example of a deductive argument:
Either Tim runs track or he plays tennis.
Tim does not play tennis.
Therefore, Tim runs track.
There is no way that the conclusion of this argument can be false if its premises are true. Now consider the following inductive argument:
Every raven that has ever been observed has been black.
Therefore, all ravens are black.
This argument is deductively invalid because its premises can be true while its conclusion is false. For instance, some ravens could be brown although no one has seen them yet. Thus a feature of inductive arguments is that they are deductively invalid.
Abduction
Abduction is a form of reasoning whereby an antecedent is inferred from its consequent. The form of abduction is below:
If A, then B
B
Therefore, A
Notice that abduction is also deductively invalid, because the truth of the premises in an abductive argument does not guarantee the truth of its conclusion. For example, even if all dogs have legs, observing legs does not imply that they belong to a dog.
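The contrast between the deductive and the abductive schema can be made vivid with a brute-force truth-table check. The sketch below (a purely illustrative Python example, not drawn from any source cited here) enumerates every assignment of truth values to A and B and looks for a counterexample, that is, a row in which the premises are true and the conclusion false; modus ponens has no such row, while the abductive schema does.

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'if a, then b' is false only when a is true and b is false."""
    return (not a) or b

counterexamples_mp = []   # modus ponens: premises {A -> B, A}, conclusion B
counterexamples_abd = []  # abduction:    premises {A -> B, B}, conclusion A

for a, b in product([True, False], repeat=2):
    if implies(a, b) and a and not b:        # premises true, conclusion false?
        counterexamples_mp.append((a, b))
    if implies(a, b) and b and not a:        # premises true, conclusion false?
        counterexamples_abd.append((a, b))

print("Modus ponens counterexamples:", counterexamples_mp)       # [] -> deductively valid
print("Abductive schema counterexamples:", counterexamples_abd)  # [(False, True)] -> invalid
```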
Abduction is also distinct from induction, although both forms of reasoning are used amply in everyday as well as scientific reasoning. While both forms of reasoning do not guarantee the truth of their conclusions, scientists since Isaac Newton (1643-1727) have believed that induction is a stronger form of reasoning than abduction.
The problem of induction
David Hume questioned whether induction was a strong form of reasoning in his classic text, A Treatise of Human Nature. In this text, Hume argues that induction is an unjustified form of reasoning for the following reason. One believes that inductions are good because nature is uniform in some deep respect. For instance, one induces that all ravens are black from a small sample of black ravens because one believes that there is a regularity of blackness among ravens, which is a particular uniformity in nature. However, why suppose there is a regularity of blackness among ravens? What justifies this assumption? Hume claims that any justification of the assumption that nature is uniform must itself be either deductive or inductive. The assumption cannot be deduced, since it is no contradiction to suppose that the course of nature might change, and any attempt to establish it inductively renders the justification of induction circular. Thus, induction is an unjustifiable form of reasoning. This is Hume's problem of induction.
Instead of becoming a skeptic about induction, Hume sought to explain how people actually make inductions, and considered this explanation to be as good a justification of induction as could be given. Hume claimed that one makes inductions because of habit. In other words, habit explains why one induces that all ravens are black from seeing nothing but black ravens beforehand.
The new riddle of induction
Nelson Goodman (1955) questioned Hume's solution to the problem of induction in his classic text Fact, Fiction, and Forecast. Although Goodman thought Hume was an extraordinary philosopher, he believed that Hume made one crucial mistake in identifying habit as what explains induction. The mistake is that people readily develop habits to make some inductions but not others, even though the observations available to them support both equally. Goodman develops the following "grue" example to demonstrate his point:
Suppose that all observed emeralds have been green. Then we would readily induce that the next observed emerald will be green. But why green? Suppose "grue" is a term that applies to all things that are observed and green or unobserved and blue. Then all observed emeralds have been grue as well. Yet none of us would induce that the next observed emerald will be blue, even though there would be equivalent evidence for that induction.
Goodman anticipates the objection that since "grue" is defined in terms of green and blue, green and blue are prior and more fundamental categories than grue. However, Goodman responds that this apparent priority is an illusion, because green and blue can be defined in terms of grue and another term, "bleen," where something is bleen just in case it is observed and blue or unobserved and green. Then "green" can be defined as something observed and grue or unobserved and bleen, while "blue" can be defined as something observed and bleen or unobserved and grue. Thus the new riddle of induction is not about what justifies induction, but rather about why people make the inductions they do, given that they have equal evidence for several incompatible inductions.
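The interdefinability point can itself be checked mechanically. In the sketch below (an illustrative Python example; the predicate names are simply those introduced in the paragraph above, and for simplicity everything is assumed to be either green or blue), "grue" and "bleen" are defined from "observed," "green," and "blue," and then "green" and "blue" are recovered from "observed," "grue," and "bleen"; the recovered predicates agree with the originals in every case.

```python
from itertools import product

def grue(observed, green):
    """Grue: observed and green, or unobserved and blue (here blue = not green)."""
    return (observed and green) or (not observed and not green)

def bleen(observed, green):
    """Bleen: observed and blue, or unobserved and green."""
    return (observed and not green) or (not observed and green)

def green_from(observed, is_grue, is_bleen):
    """Green defined from grue/bleen: observed and grue, or unobserved and bleen."""
    return (observed and is_grue) or (not observed and is_bleen)

def blue_from(observed, is_grue, is_bleen):
    """Blue defined from grue/bleen: observed and bleen, or unobserved and grue."""
    return (observed and is_bleen) or (not observed and is_grue)

# Check every combination of observed/unobserved and green/blue.
for observed, green in product([True, False], repeat=2):
    g, b = grue(observed, green), bleen(observed, green)
    assert green_from(observed, g, b) == green
    assert blue_from(observed, g, b) == (not green)
print("green and blue are recoverable from grue and bleen in every case")
```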
Goodman’s solution to the new riddle of induction is that people make inductions that involve familiar terms like "green," instead of ones that involve unfamiliar terms like "grue," because familiar terms are more entrenched than unfamiliar terms, which just means that familiar terms have been used in more inductions in the past. Thus statements that incorporate entrenched terms are “projectible” and appropriate for use in inductive arguments.
Notice that Goodman’s solution is somewhat unsatisfying. While he is correct that some terms are more entrenched than others, he provides no explanation for why some terms become entrenched and others do not. In order to finish Goodman’s project, the philosopher Willard Van Orman Quine (1908-2000) theorizes that entrenched terms correspond to natural kinds.
Quine (1969) demonstrates his point with the help of a familiar puzzle from the philosopher Carl Hempel (1905-1997), known as "the ravens paradox:"
Suppose that observing several black ravens is evidence for the induction that all ravens are black. Then, since the contrapositive of "All ravens are black" is "All non-black things are non-ravens," observing non-black things such as green leaves, brown basketballs, and white baseballs is also evidence for the induction that all ravens are black. But how can this be?
Quine (1969) argues that observing non-black things is not evidence for the induction that all ravens are black because non-black things do not form a natural kind and projectible terms only refer to natural kinds (e.g. "ravens" refers to ravens). Thus terms are projectible (and become entrenched) because they refer to natural kinds.
Even though this extended solution to the new riddle of induction sounds plausible, several of the terms that we use in natural language do not correspond to natural kinds, yet we still use them in inductions. A typical example from the philosophy of language is the term "game," famously used by Ludwig Wittgenstein (1889-1951) to demonstrate what he called “family resemblances.”
Look at how competent English speakers use the term "game." Examples of games are Monopoly, card games, the Olympic games, war games, tic-tac-toe, and so forth. Now, what do all of these games have in common? Wittgenstein would say, “nothing,” or if there is something they all have in common, that feature is not what makes them games. So games resemble each other although they do not form a kind. Of course, even though games are not a natural kind, people still make inductions with the term "game." For example, since most Olympic games have been held in industrialized cities in the recent past, most Olympic games in the near future should occur in industrialized cities.
Given the difficulty of solving the new riddle of induction, many philosophers have teamed up with mathematicians to investigate mathematical methods for handling induction. A prime method for handling induction mathematically is statistical inference, which is based on probabilistic reasoning.
Statistical inference
Instead of asking whether all ravens are black because all observed ravens have been black, statisticians ask how probable it is that ravens are black, given that an appropriate sample of ravens has been black. Here is an example of statistical reasoning:
Suppose that the average stem length in a sample of 13 soybean plants is 21.3 cm with a standard deviation of 1.22 cm. Then, according to Student’s t distribution, we can be 95 percent confident that the interval (20.6, 22.1) contains the average stem length for all soybean plants (Samuels and Witmer 2003, 189).
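The interval quoted above can be reproduced with a few lines of code. The sketch below is illustrative only; it assumes the SciPy library is available and uses the sample mean, sample standard deviation, and sample size from the soybean example together with Student's t distribution. The computed interval agrees with the quoted one up to small differences in rounding.

```python
import math
from scipy import stats

n = 13        # number of soybean plants in the sample
mean = 21.3   # sample mean stem length, in cm
sd = 1.22     # sample standard deviation, in cm

# Critical value of Student's t with n - 1 degrees of freedom for 95% confidence.
t_crit = stats.t.ppf(0.975, df=n - 1)

sem = sd / math.sqrt(n)        # standard error of the mean
lower = mean - t_crit * sem
upper = mean + t_crit * sem

print(f"95% confidence interval: ({lower:.1f}, {upper:.1f})")  # approximately (20.6, 22.0)
```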
Despite the appeal of statistical inference, since it rests on probabilistic reasoning, it is only as good at handling induction as probability theory itself is.
Probabilistic inference
Bayesianism is the most influential interpretation of probability theory and is an equally influential framework for handling induction. Given new evidence, "Bayes' theorem" is used to evaluate how much the strength of a belief in a hypothesis should change.
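As a rough illustration of how such an update works (the numbers below are stipulated for the example and come from no particular source), suppose a hypothesis H says that all ravens are black, that an observed raven is certain to be black if H is true, and that it is only 90 percent likely to be black if H is false. A short Python sketch of repeated Bayesian updating might then look like this:

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior probability of H after one piece of evidence, by Bayes' theorem:
    P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | not-H) P(not-H)]."""
    numerator = likelihood_h * prior
    evidence = numerator + likelihood_not_h * (1.0 - prior)
    return numerator / evidence

# Stipulated numbers for illustration only.
prior = 0.5                 # initial degree of belief that all ravens are black
p_black_given_h = 1.0       # if H is true, an observed raven is certainly black
p_black_given_not_h = 0.9   # if H is false, an observed raven is still usually black

belief = prior
for i in range(1, 11):      # observe ten black ravens, one after another
    belief = bayes_update(belief, p_black_given_h, p_black_given_not_h)
    print(f"after raven {i:2d}: P(all ravens are black) = {belief:.3f}")
```

Each black raven observed raises the degree of belief in H; the prior chosen at the start is exactly the point at which objective and subjective Bayesians disagree, as discussed next.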
There is debate about what determines the original degree of belief, the prior probability. Objective Bayesians seek an objective value for the probability that a hypothesis is correct, and so do not avoid the philosophical criticisms of objectivism. Subjective Bayesians hold that prior probabilities represent subjective degrees of belief, but that the repeated application of Bayes' theorem leads to a high degree of agreement on the posterior probability; they therefore fail to provide an objective standard for choosing between conflicting hypotheses. The theorem can be used to produce a rational justification for a belief in some hypothesis, but only at the expense of rejecting objectivism. Such a scheme cannot be used, for instance, to decide objectively between conflicting scientific paradigms.
Edwin Jaynes, an outspoken physicist and Bayesian, argued that "subjective" elements are present in all inference, for instance in choosing axioms for deductive inference; in choosing initial degrees of belief or "prior probabilities"; or in choosing likelihoods. He thus sought principles for assigning probabilities from qualitative knowledge. Maximum entropy – a generalization of the principle of indifference – and "transformation groups" are the two tools he produced. Both attempt to alleviate the subjectivity of probability assignment in specific situations by converting knowledge of features such as a situation's symmetry into unambiguous choices for probability distributions.
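A small numerical illustration of the connection to the principle of indifference is sketched below (this is not an example from Jaynes himself, and it assumes NumPy and SciPy are available): if the only constraint on a distribution over six outcomes is that its probabilities sum to one, numerically maximizing the Shannon entropy recovers the uniform assignment of 1/6 to each outcome.

```python
import numpy as np
from scipy.optimize import minimize

def neg_entropy(p):
    """Negative Shannon entropy; minimizing this maximizes entropy."""
    p = np.clip(p, 1e-12, 1.0)   # avoid log(0)
    return float(np.sum(p * np.log(p)))

n_outcomes = 6
constraints = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}]  # probabilities sum to 1
bounds = [(0.0, 1.0)] * n_outcomes

# Start from a slightly lopsided guess that still sums to 1.
initial_guess = np.full(n_outcomes, 1.0 / n_outcomes) + np.linspace(-0.05, 0.05, n_outcomes)

result = minimize(neg_entropy, initial_guess, bounds=bounds, constraints=constraints)
print(np.round(result.x, 3))   # approximately [0.167, 0.167, ..., 0.167], the uniform distribution
```

Adding further constraints, such as a fixed expected value, would instead single out a non-uniform distribution, which is how maximum entropy generalizes the principle of indifference.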
"Cox's theorem," which derives probability from a set of logical constraints on a system of inductive reasoning, prompts Bayesians to call their system an inductive logic. Nevertheless, how well probabilistic inference handles Hume’s original problem of induction as well as Goodman’s new riddle of induction is still a matter debated in contemporary philosophy and presumably will be for years to come.
References
- Barwise, Jon and John Etchemendy. 2000. Language, Proof and Logic. Stanford: CSLI Publications.
- Goodman, Nelson. 1955. Fact, Fiction, and Forecast. Cambridge: Harvard University Press.
- Hume, David. 2002. A Treatise of Human Nature (David F. and Mary J. Norton, eds.). Oxford: Oxford University Press.
- Quine, W.V.O. 1969. Ontological Relativity and Other Essays. New York: Columbia University Press.
- Samuels, Myra and Jeffrey A. Witmer. 2003. Statistics for the Life Sciences. Upper Saddle River: Pearson Education.
- Wittgenstein, Ludwig. 2001. Philosophical Investigations (G.E.M. Anscombe, trans.). Oxford: Blackwell.
External links
All links retrieved March 2, 2018.
- Inductive Logic, Stanford Encyclopedia of Philosophy.
- Deductive and Inductive Arguments, The Internet Encyclopedia of Philosophy.
General philosophy sources
- Stanford Encyclopedia of Philosophy.
- The Internet Encyclopedia of Philosophy.
- Paideia Project Online.
- Project Gutenberg.