Independence is a very important concept in probability theory.
The basic idea is straightforward. Two events are said to be independent if the occurrence of one doesn’t influence the probability of the occurrence of the other.
We can say the same thing in proposition language: Two propositions are independent if the truth of one doesn’t make the truth of the other any more or less likely.
A schematic way of representing independence is like this. If A and B are probabilistically INDEPENDENT, then the following relations hold:
If A and B are independent, then the probability of A given B is just the same as the probability of A all by itself: P(A given B) = P(A).
And if they’re independent, then it works the other way too. The probability of B given A is the same as the probability of B all by itself: P(B given A) = P(B).
If these are NOT equal, that is, if P(A given B) ≠ P(A), then A and B are probabilistically DEPENDENT on one another. We’re saying that the occurrence of B changes the probability of A, and vice versa.
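These relations can be checked by direct counting on any small, uniform sample space. Here’s a quick Python sketch using a standard 52-card deck as the example; the deck example and the helper names `prob` and `cond_prob` are my own illustration, not part of the lecture.

```python
from fractions import Fraction
from itertools import product

# A 52-card deck as (rank, suit) pairs -- a uniform sample space.
ranks = list(range(1, 14))                        # 13 is the king
suits = ["hearts", "spades", "diamonds", "clubs"]
deck = set(product(ranks, suits))

def prob(event):
    """P(event) on the uniform space: favorable outcomes / all outcomes."""
    return Fraction(len(event), len(deck))

def cond_prob(a, b):
    """P(A given B) on a uniform space: |A and B| / |B|."""
    return Fraction(len(a & b), len(b))

hearts = {c for c in deck if c[1] == "hearts"}    # A: the card is a heart
kings = {c for c in deck if c[0] == 13}           # B: the card is a king

# Independent events: conditioning changes nothing, in either direction.
assert cond_prob(hearts, kings) == prob(hearts) == Fraction(1, 4)
assert cond_prob(kings, hearts) == prob(kings) == Fraction(1, 13)
```

Drawing a king tells you nothing about the suit, and drawing a heart tells you nothing about the rank, so both conditional probabilities come out equal to the unconditional ones.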
Toss a coin. Assume this is a fair coin, so the probability of it landing heads is 50 percent, or 0.5.
Let’s assume that it in fact landed heads. Given this, what is the probability that it will land heads again on the second toss?
A = the coin lands heads on toss 1
B = the coin lands heads on toss 2
In other words, what is P(B given A)?
Well, if you think it’s higher or lower than 0.5, then you’d be making a mistake. These are probabilistically independent events:
P(B given A) = P(B) = 0.5
The probability of landing heads on a second toss is still 0.5. This would be the case even if you’d previously landed ten heads in a row — the probability of the next toss landing heads is still just 0.5. To believe otherwise is to commit what’s known as the “gambler’s fallacy” (we’ll talk more about the gambler’s fallacy in the course on fallacies of probabilistic reasoning).
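You can see this independence in a simple simulation. The following Python sketch (my own illustration, not from the lecture) tosses a fair coin twice, many times over, and estimates P(B given A) by looking only at the trials where the first toss landed heads.

```python
import random

random.seed(1)                     # fixed seed so the run is repeatable
n, first_heads, both_heads = 100_000, 0, 0

for _ in range(n):
    toss1 = random.random() < 0.5  # True = heads on toss 1 (event A)
    toss2 = random.random() < 0.5  # True = heads on toss 2 (event B)
    if toss1:
        first_heads += 1
        if toss2:
            both_heads += 1

# Estimate of P(B given A): fraction of heads-first trials
# where the second toss also landed heads. Hovers around 0.5.
print(both_heads / first_heads)
```

Knowing that the first toss landed heads does nothing to the second toss, so the conditional frequency settles near 0.5, just like the unconditional one.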
Now, consider this example:
A = the die roll is an even number
B = the die roll is a 2
Are these two events independent or dependent? If A is true, does it affect the probability of B being true, and vice versa?
We need to compare P(B) and P(B given A).
We know that the probability of rolling a 2 on a six-sided die is 1 in 6. So the probability of event B by itself is 1/6.
But if A is true, then the set of possible outcomes is restricted to just the even numbers: 2, 4 and 6. The probability of rolling a 2, given that it’s even, is therefore 1/3, since 2 is one of three equally likely outcomes.
Thus, P(B) ≠ P(B|A). A and B are probabilistically dependent events. The occurrence of one affects the probability of the other.
I want to note that dependence is a symmetrical relationship, in that it works both ways: if the occurrence of A affects the probability of B, then the occurrence of B will also affect the probability of A.
But I also want to point out that this doesn’t mean that the numerical values will be the same. In our example, if we know the dice roll is even, then the probability of rolling a 2 is 1/3, rather than 1/6.
But let’s do it the other way around. We roll the die, but we don’t know anything about the outcome. What is the probability that the roll is even? Well, that’s just 1/2, since the even numbers are 2, 4 and 6, and that’s half of the possible outcomes.
But now let’s say we know that we rolled a 2. This obviously affects the probability that the roll is even. In fact, it’s certain: given that it’s a 2, the probability that it’s even is 1.
And this illustrates the point.
P(even) = 1/2, but P(even, given it's a 2) = 1
P(2) = 1/6, but P(2, given that it's even) = 1/3
They’re probabilistically dependent in both directions, but the numerical values can differ depending on which direction you condition in.
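All four of the numbers above can be verified by counting outcomes on the six-sided die’s sample space. This Python sketch (the set-based setup and helper names are my own, not from the lecture) checks both directions at once.

```python
from fractions import Fraction

space = set(range(1, 7))       # one roll of a fair six-sided die
evens = {2, 4, 6}              # A: the roll is even
two = {2}                      # B: the roll is a 2

def prob(event):
    """P(event) on the uniform space: favorable outcomes / all outcomes."""
    return Fraction(len(event), len(space))

def cond_prob(a, b):
    """P(A given B) on a uniform space: |A and B| / |B|."""
    return Fraction(len(a & b), len(b))

# Dependent in both directions, but with different numbers:
assert prob(two) == Fraction(1, 6)                # P(2) = 1/6
assert cond_prob(two, evens) == Fraction(1, 3)    # P(2 given even) = 1/3
assert prob(evens) == Fraction(1, 2)              # P(even) = 1/2
assert cond_prob(evens, two) == 1                 # P(even given 2) = 1
```

Both conditional probabilities differ from their unconditional counterparts, which is exactly what dependence means, and the sizes of the two shifts (1/6 to 1/3, versus 1/2 to 1) need not match.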