In the terminology of standard textbook logic, an inductive argument is one that is intended to be strong rather than valid. But this terminology isn't necessarily standard outside of logic.
In the sciences in particular there is a more common reading of induction that means something like "making an inference from particular cases to the general case".
In this lecture we're going to talk about this reading of the term and how it relates to the standard usage in logic, and the role of inductive reasoning in the sciences more broadly.
In the sciences the term "induction" is commonly used to describe inferences from particular cases to the general case, or from a finite sample of data to a generalization about a whole population.
Here's the prototype argument form that illustrates this notion of induction:
1. a1 is B.
2. a2 is B.
...
n. an is B.
Therefore, all A are B.
You note that some individual of a certain kind, a1, has a property B. An example would be "This swan is white".
Then you note that some other individual of the same kind, a2, has the same property — "This OTHER swan is white".
And you keep doing this for all the individuals available to you: THIS swan is white, and THIS swan is white, and THIS SWAN OVER THERE is white, and so on.
So you've observed n swans, and all of them are white. From here the inductive move is to say that ALL swans, EVERYWHERE, are white. Even the swans that you haven't observed and will never observe.
This is an example of an inductive generalization.
Arguments of this form, or that do something similar — namely, infer a general conclusion from a finite sample — exemplify the way the term "induction" is most commonly used in science.
Another example that illustrates inductive reasoning in this sense is the reasoning involved in inferring a functional relationship between two variables based on a finite set of data points.
Let's say you heat a metal rod. You observe that it expands the hotter it gets. So for various temperatures you plot the length of the rod against the temperature. You get a spread of data points that looks like this:
What you may want to know, though, is how length varies with temperature generally, so that for any value of temperature you can then predict the length of the rod.
To do that you might try to draw a curve of best fit through the data points, like so:
It looks like a pretty linear relationship, so a straight line with a slope like this seems like it'll give a good fit to the data.
Now, given the equation for this functional relationship, you can plug in a value for the temperature and derive a value for the length.
The equation for the straight line is an inductive generalization you've inferred from the finite set of data points.
The data points in fact are functioning as premises and the straight line is the general conclusion that you're inferring from those premises.
When you plug in a value for the temperature and derive a value for the length of the rod based on the equation for the straight line, you're deriving a prediction about a specific event based on the generalization you've inferred from the data.
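The two steps just described, inferring a general equation from a finite sample and then using it to predict an unobserved case, can be sketched in a few lines of code. This is only an illustration: the temperature and length values below are made up, and a least-squares line fit stands in for whatever curve-fitting method you might actually use.

```python
import numpy as np

# Hypothetical measurements: temperature (degrees C) vs. rod length (mm).
# These numbers are invented for illustration; real data would be noisier.
temps = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
lengths = np.array([1000.1, 1000.3, 1000.5, 1000.7, 1000.9, 1001.1])

# Fit a straight line, length = m * temp + b, to the finite sample.
# The fitted equation is the inductive generalization: it goes beyond
# the observed data points to cover every temperature.
m, b = np.polyfit(temps, lengths, deg=1)

# Use the generalization to predict the length at a temperature
# we never actually measured.
predicted_length = m * 75.0 + b
print(predicted_length)
```

The fitted slope and intercept play the role of the general conclusion; the `print` line is the derivation of a specific prediction from that generalization. Nothing in the fit guarantees the prediction is right, which is exactly what makes the inference inductive.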
This example illustrates just how common inductive generalizations are in science, so it's not surprising that scientists have a word for this kind of reasoning.
In fact, the language of induction used in this sense can be traced back to people like Francis Bacon, who in the early 17th century articulated and defended this kind of reasoning as a general method for doing science.
So how does this kind of reasoning relate to the definition of induction used in logic?
Recall, in standard logic an argument is inductive if it's intended to be strong rather than valid.
The key thing to note is that this is a much broader definition than the one commonly used in the sciences. That definition focuses on arguments of a specific form, those where the premises are claims about particular things or cases, and the conclusion is a generalization from those cases.
But if you take the standard logical definition of an inductive argument you find that many different kinds of arguments will qualify as inductive, not just arguments that infer generalizations from particular cases.
So, for example, on the logical definition, a prediction about the future based on past data will count as an inductive argument.
1. The sun has risen every day in the east for all of recorded history.
Therefore, the sun will rise in the east tomorrow.
The sun has risen every day for as long as the earth has existed, as far as we know. So we expect the sun to rise tomorrow as well.
This is an inductive argument on our definition because we acknowledge that even with this reliable track record it's still possible for the sun to not rise tomorrow. Aliens, for example, might blow up the earth or the sun overnight.
So this inference from the past to the future is inductive, and most of us would say that it's a strong inference. But notice that it's not an argument from the particular to the general. The conclusion isn't a generalization, it's a claim about a particular event, the rising of the sun tomorrow.
So this kind of argument wouldn't count as inductive under the standard science definition, but it does count under the standard logical definition.
Here's a second example that illustrates the difference:
1. 90% of human beings are right-handed.
2. John is a human being.
Therefore, John is right-handed.
Notice that the main premise is a general claim, while the conclusion is a claim about a particular person. On the standard science definition this isn't an inductive argument, since it's moving from the general to the particular rather than from the particular to the general. But on the logical definition of induction this argument does count, since the argument is intended to be strong, not valid.
The relationship between the two definitions is a relation of set to subset. The arguments that qualify as inductive under the standard science definition are a subset of the arguments that qualify as inductive under the standard logical definition.
So from a logical point of view there's no problem with calling an inference from the particular to the general an inductive argument since all such arguments satisfy the basic logical definition.
But scientists are sometimes confused when they see the term "induction" used to describe other forms of reasoning than the ones they normally associate with inductive inferences. There shouldn't be any confusion as long as you keep the two senses in mind and distinguish them when it's appropriate.
But if you don't distinguish them then you may run into discussions like this one that contradict themselves. Below are the first two sentences of the Wikipedia entry on "induction" at the time this lecture was first written (2010):
Induction or inductive reasoning, sometimes called inductive logic, is the process of reasoning in which the premises of an argument are believed to support the conclusion but do not entail it, i.e. they do not ensure its truth. Induction is a form of reasoning that makes generalizations based on individual instances.
The first sentence presents the standard logical definition — inductive reasoning is defined as strong reasoning, reasoning that doesn't guarantee truth. The second sentence presents the standard science definition of induction, defining it as reasoning from the particular to the general.
Later on in the article the authors present a number of examples of inductive arguments that satisfy the logical definition but not the scientific definition, such as inferences from correlations to causes, or predictions of future events based on past events, and so on. These examples flat out contradict the definition of induction in the second sentence.
Now let's summarize some key points of this discussion.
First, we should be aware that there is a difference between the way the term "induction" is defined in general scientific usage and the way it's defined in logic. The logical definition is much broader — it's basically synonymous with "non-deductive" inference. The scientific usage is narrower, and focuses on inferences from the particular to the general.
Second, induction in the broader logical sense is fundamental to scientific reasoning in general. Inductive reasoning is risky reasoning, it's fallible reasoning, where you're moving from known facts about observable phenomena, to a hypothesis or a conclusion about the world beyond the observable facts.
The distinctive feature of this reasoning is that you can have all the observable facts right and still be wrong about the generalizations you draw from those observations, or the theoretical story you tell to explain those observations. It's a fundamental feature of scientific theorizing that it's revisable in light of new evidence and new experience.
It follows from this observation that scientific reasoning is broadly speaking inductive reasoning — that scientific arguments should aim to be strong rather than valid, and that it's both unrealistic and confused to expect them to be valid.
Disciplines that trade in valid arguments and valid inferences are fields like mathematics, computer science and formal deductive logic. The natural and social sciences, on the other hand, deal with fallible, risky inferences. They aim for strong arguments.