Independent Events and Independent Experiments
The word independent appears in the study of probabilities in at least two circumstances.
Independent experiments: the same or different experiments may be run in a sequence, with the sequence of outcomes being the object of interest. For example, we may be interested in studying patterns of heads and tails in successive throws of a coin. We then talk of a single compound experiment that combines a sequence of constituent trials. The trials, that is, the individual experiments, may or may not affect the outcomes of later trials. If they do, the experiments are called dependent; otherwise, they are independent. The sample space of the compound experiment is formed as the product of the sample spaces of the constituent trials.
Independent events: An event is a subset of a sample space. Events may or may not be independent; according to the definition, two events, A and B, are independent iff
P(A∩B) = P(A) P(B).
It is common practice to blur the distinction between these two circumstances: when running independent experiments, the use of the product formula for events determined by different trials is taken for granted.
Consider tossing a coin three times in a row. Since each of the throws is independent of the other two, we consider all 8 possible outcomes equally likely:
{HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}.
There are 2⁸ = 256 possible events, but we are presently interested in, say, two:
A = {HHH, HTH, THH, TTH} and
B = {HHH, HHT, THH, THT}.
A is the event that the third toss came up heads; B is the event that heads came up on the second toss. Since each contains 4 of the 8 equiprobable outcomes,
P(A) = P(B) = 4/8 = 1/2.
The result might have been expected: 1/2 is the probability of heads on a single toss. Are events A and B independent according to the definition? Indeed they are. To see that, observe that
A ∩ B = {HHH, THH},
the event of having heads on the second and third tosses.
P(A|B) = P(A ∩ B) / P(B) = (1/4) / (1/2) = 1/2 = P(A).
So P(A|B) = P(A), which is equivalent to P(A ∩ B) = P(A) P(B), and, according to the definition, events A and B are independent, as expected.
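As a quick check, here is a minimal Python sketch (not part of the original discussion) that enumerates the eight outcomes and verifies both the product formula and the equality P(A|B) = P(A); the helper `prob` is introduced only for this illustration.

```python
from itertools import product
from fractions import Fraction

# The 8 equally likely outcomes of three coin tosses.
outcomes = [''.join(s) for s in product('HT', repeat=3)]

A = {w for w in outcomes if w[2] == 'H'}   # heads on the third toss
B = {w for w in outcomes if w[1] == 'H'}   # heads on the second toss

def prob(event):
    # Every outcome carries probability 1/8.
    return Fraction(len(event), len(outcomes))

print(prob(A), prob(B))                          # 1/2 1/2
print(prob(A & B))                               # 1/4
print(prob(A & B) == prob(A) * prob(B))          # True: the product formula holds
print(prob(A & B) / prob(B) == prob(A))          # True: P(A|B) = P(A)
```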
This independence of events arising from different trials is in fact always the case. Assume we run a sequence of (independent) experiments, and let V₁ and V₂ denote the outcomes of two of them. Among other possible values, suppose V₁ may come up x and V₂ may come up y, with probabilities
P(V₁ = x) = p and
P(V₂ = y) = q.
If A = {V₁ = x} and B = {V₂ = y} are the corresponding events,
P(A|B) = P(A ∩ B) / P(B) = pq / q = p = P(A),
making the events A and B independent.
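The same can be illustrated numerically. The following simulation is only a sketch; the probabilities p = 0.3 and q = 0.6 are arbitrary choices, not values taken from the text.

```python
import random

p, q = 0.3, 0.6          # hypothetical probabilities of V1 = x and V2 = y
trials = 1_000_000

count_A = count_B = count_AB = 0
for _ in range(trials):
    a = random.random() < p      # event A: the first experiment produced x
    b = random.random() < q      # event B: the second experiment produced y
    count_A += a
    count_B += b
    count_AB += a and b

# The relative frequency of A ∩ B should be close to the product p*q.
print(count_AB / trials, (count_A / trials) * (count_B / trials), p * q)
```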

For the sake of illustration we'll look into an example of considerable interest in its own right [Havil, pp. 4-6; Gardner, pp. 2-10]. Both authors attribute the problem to the late Leo Moser.
As a condition for acceptance to a tennis club, a novice player N is set to meet two members of the club, G (good) and T (top), in three games. In order to be accepted, N must win against both G and T in two successive games. N must choose one of two schedules: playing the games in the order T, G, T or in the order G, T, G.
Let g and t denote the probabilities of N beating G and T, respectively. The winning possibilities for the sequence TGT (those containing two successive wins) can be summarized in the following table:
| T | G | T | Probability |
|---|---|---|---|
| W | W | W | tgt |
| W | W | L | tg(1 - t) |
| L | W | W | (1 - t)gt |
Pertinent to the previous discussion is the observation that the first two rows naturally combine into one: the probability of the first two wins is
P(WW) = tgt + tg(1 - t) = tg,
which is simply the probability of beating both T and G (in the first two games in particular).
Since winning the first two games, and losing the first game but winning the second and the third, are mutually exclusive events, the Sum Rule applies. Gaining acceptance by playing the TGT sequence has total probability
P(TGT) = tg + tg(1 - t) = tg(2 - t).
Similarly, the probability of acceptance for the GTG schedule is based on the following table:
| G | T | G | Probability |
|---|---|---|---|
| W | W | W or L | gt |
| L | W | W | (1 - g)tg |
The probability in this case is found to be
P(GTG) = gt + gt(1 - g) = gt(2 - g).
This is a curiosity. Do you see why?
Assuming that the top member T is a better player than the merely good one G, we have t < g, so that 2 - t > 2 - g and therefore
P(GTG) = gt(2 - g) < gt(2 - t) = P(TGT).
The novice N has a better chance of being admitted to the club by playing the apparently more difficult sequence TGT than the easier one GTG. Perhaps there is a moral to the story: more difficult tasks offer greater rewards. We shall return to this example after the introduction of the notion of mathematical expectation.
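For readers who like to double-check, here is a short Python sketch; the function acceptance_probability and the values g = 0.6, t = 0.4 are hypothetical, chosen only so that t < g. It confirms the formulas tg(2 - t) and gt(2 - g) by brute-force enumeration of the win/loss sequences.

```python
from itertools import product

def acceptance_probability(schedule, win_prob):
    # Probability that the novice wins two successive games when playing
    # the opponents in the given order (e.g. 'TGT' or 'GTG').
    total = 0.0
    for results in product([True, False], repeat=len(schedule)):
        # Keep only win/loss sequences containing two consecutive wins.
        if not any(results[i] and results[i + 1] for i in range(len(results) - 1)):
            continue
        prob = 1.0
        for opponent, won in zip(schedule, results):
            prob *= win_prob[opponent] if won else 1 - win_prob[opponent]
        total += prob
    return total

g, t = 0.6, 0.4                      # hypothetical values with t < g
wp = {'G': g, 'T': t}
print(acceptance_probability('TGT', wp), t * g * (2 - t))   # both ≈ 0.384
print(acceptance_probability('GTG', wp), g * t * (2 - g))   # both ≈ 0.336
```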
References
- M. Gardner, The Colossal Book of Short Puzzles and Problems (edited by Dana Richards), W. W. Norton, 2006
- J. Havil, Nonplussed!, Princeton University Press, 2007
