Independent Events and Independent Experiments

The word independent appears in the study of probabilities in at least two circumstances.

  1. Independent experiments: the same or different experiments may be run in sequence, with the sequence of outcomes being the object of interest. For example, we may be interested in studying patterns of heads and tails in successive throws of a coin. We then speak of a single compound experiment that combines a sequence of constituent trials. The trials - the individual experiments - may or may not affect the outcomes of later trials. If they do, the experiments are called dependent; otherwise, they are independent. The sample space of the compound experiment is formed as the product of the sample spaces of the constituent trials.

  2. Independent events: An event is a subset of a sample space. Events may or may not be independent; according to the definition, two events, A and B, are independent iff P(A∩B) = P(A) P(B).

It is a common practice to blur the distinction between these two circumstances. When running independent experiments, the use of the product formula P(A∩B) = P(A) P(B) is justified on combinatorial grounds. For a pair of independent events, the formula serves as a definition. The connection between the two, discussed below, justifies the latter usage.

Consider tossing a coin three times in a row. Since each of the throws is independent of the other two, we consider all 8 (= 2³) possible outcomes as equiprobable and assign each the probability of 1/8. Here is the sample space of a sequence of three tosses:

{HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}.
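
(A quick computational aside, not part of the original argument: the product structure of the compound sample space is easy to reproduce in a few lines of Python, with itertools.product playing the role of the Cartesian product of the single-toss spaces.)

  from itertools import product

  # The compound sample space is the Cartesian product of three
  # single-toss sample spaces, each equal to {H, T}.
  sample_space = ["".join(seq) for seq in product("HT", repeat=3)]
  print(sample_space)
  # ['HHH', 'HHT', 'HTH', 'HTT', 'THH', 'THT', 'TTH', 'TTT']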

There are 2⁸ = 256 possible events, but we are presently interested in, say, two:

A = {HHH, HTH, THH, TTH} and
B = {HHH, HHT, THH, THT}.

A is the event that the third toss came up heads; B is the event that heads came up on the second toss. Since each contains 4 outcomes out of the 8 equiprobable ones,

P(A) = P(B) = 4/8 = 1/2.

The result might have been expected: 1/2 is the probability of heads on a single toss. Are events A and B independent according to the definition? Indeed they are. To see that, observe that

A ∩ B = {HHH, THH},

the event of having heads on the second and third tosses. P(A ∩ B) = 2/8 = 1/4. Further, let's find the conditional probability P(A|B):

P(A|B) = P(A ∩ B) / P(B)
 = (1/4) / (1/2)
 = 1/2
 = P(A).

Thus P(A|B) = P(A) and, equivalently, P(A ∩ B) = P(A) P(B); according to the definition, events A and B are independent, as expected.
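
For readers who prefer to verify such computations mechanically, here is a small Python sketch (an illustration, not part of the original text) that enumerates the eight equiprobable outcomes and checks P(A), P(B), P(A ∩ B), and P(A|B) exactly:

  from itertools import product
  from fractions import Fraction

  outcomes = ["".join(seq) for seq in product("HT", repeat=3)]

  A = {o for o in outcomes if o[2] == "H"}  # heads on the third toss
  B = {o for o in outcomes if o[1] == "H"}  # heads on the second toss

  def prob(event):
      # All 8 outcomes are equiprobable, so P(E) = |E| / 8.
      return Fraction(len(event), len(outcomes))

  print(prob(A), prob(B))       # 1/2 1/2
  print(prob(A & B))            # 1/4, equal to P(A) * P(B)
  print(prob(A & B) / prob(B))  # 1/2, i.e., P(A|B) = P(A)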

This is in fact always the case. Assume we run a sequence of (independent) experiments in which, among other outcomes, x and y are possible, with probabilities P(x) = p and P(y) = q. The event that in a sequence of experiments the first outcome happens to be x has probability p: something happens on every one of the remaining trials with probability 1, so the combinatorial product rule gives p · 1 · 1 · ... = p. Similarly, the event of having the outcome y on the second trial (in any sequence of experiments) has probability q. Using random variables V1 and V2 for the outcomes of the first and second experiments, we may express this in the following manner:

P(V1 = x) = p and
P(V2 = y) = q.

If A and B are the corresponding events, then P(A) = p and P(B) = q. The event A ∩ B of having x on the first experiment and y on the second has probability pq. It follows that

P(A|B) = P(A ∩ B) / P(B)
 = pq / q
 = p
 = P(A),

making the events A and B independent.
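
The same conclusion can be observed empirically. The following Monte Carlo sketch (with illustrative values p = 0.3 and q = 0.5, chosen arbitrarily) simulates pairs of independent trials and confirms that the frequency of the joint event approaches pq:

  import random

  random.seed(1)
  p, q = 0.3, 0.5  # illustrative probabilities of outcomes x and y

  def trial():
      # A single experiment with possible outcomes x, y, or "other".
      r = random.random()
      if r < p:
          return "x"
      if r < p + q:
          return "y"
      return "other"

  N = 100_000
  runs = [(trial(), trial()) for _ in range(N)]

  first_x = sum(v1 == "x" for v1, v2 in runs) / N
  second_y = sum(v2 == "y" for v1, v2 in runs) / N
  both = sum(v1 == "x" and v2 == "y" for v1, v2 in runs) / N

  print(first_x, second_y)  # close to p and q
  print(both, p * q)        # close to each other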

For the sake of illustration, we'll look into an example of considerable interest in its own right [Havil, pp. 4-6; Gardner, pp. 2-10]. Both authors attribute the problem to the late Leo Moser.

As a condition for acceptance to a tennis club, a novice player N is set to meet two members of the club, G (good) and T (top), in three games. In order to be accepted, N must beat both G and T in two successive games. N must choose one of two schedules: playing G, T, G or T, G, T. Which one should he choose?

Let g and t denote the probabilities of N beating G and T, respectively. The possibilities for the sequence TGT can be summarized in the following table:

 T G T  Probability
 W W W  tgt
 W W L  tg(1 - t)
 L W W  (1 - t)gt

Pertinent to the previous discussion is the observation that the first two rows naturally combine into one: the probability of the first two wins is

P(WW) = tgt + tg(1 - t) = tg,

which is simply the probability of beating both T and G (in the first two games in particular).

Since winning the first two games and losing the first game but winning the second and the third are mutually exclusive events, the Sum Rule applies. Gaining acceptance playing the TGT sequence has the total probability of

P(TGT) = tg + tg(1 - t) = tg(2 - t).

Similarly, the probability of acceptance for the GTG schedule is based on the following table:

 G T G  Probability
 W W    gt
 L W W  (1 - g)tg

The probability in this case is found to be

P(GTG) = gt + gt(1 - g) = gt(2 - g).
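
Both closed forms are easy to confirm by brute force. The sketch below (an illustration with arbitrary values g = 0.6 and t = 0.4, not part of the original) enumerates all win/loss patterns of a three-game schedule, adds up the probabilities of those containing two successive wins, and reproduces tg(2 - t) and gt(2 - g):

  from itertools import product

  def acceptance_probability(schedule, g, t):
      # schedule is a string over {"G", "T"}; g and t are N's chances
      # of beating G and T in a single game.
      win_prob = {"G": g, "T": t}
      total = 0.0
      for pattern in product((True, False), repeat=len(schedule)):
          # N is accepted iff some two consecutive games are both won.
          if not any(pattern[i] and pattern[i + 1]
                     for i in range(len(pattern) - 1)):
              continue
          prob = 1.0
          for opponent, won in zip(schedule, pattern):
              prob *= win_prob[opponent] if won else 1 - win_prob[opponent]
          total += prob
      return total

  g, t = 0.6, 0.4  # illustrative: N beats G more often than T
  print(acceptance_probability("TGT", g, t), t * g * (2 - t))  # both ≈ 0.384
  print(acceptance_probability("GTG", g, t), g * t * (2 - g))  # both ≈ 0.336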

This is a curiosity. Do you see why?

Assuming that the top member T is a better player than the merely good one G, t < g. But then 2 - g < 2 - t and, since gt > 0, gt(2 - g) < tg(2 - t). In other words,

P(GTG) < P(TGT).

The novice N has a better chance of being admitted to the club by playing the apparently more difficult sequence TGT than the easier one GTG. Perhaps there is a moral to the story: more difficult tasks offer greater rewards. We shall return to this example after introducing the notion of mathematical expectation.
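
As a final sanity check (again an illustration, not part of the original), one can scan a grid of values with 0 < t < g < 1 and confirm that the inequality never fails:

  # Verify P(GTG) < P(TGT), i.e., gt(2 - g) < tg(2 - t), whenever t < g.
  steps = 50
  for i in range(1, steps):
      for j in range(i + 1, steps):
          t, g = i / steps, j / steps  # guarantees 0 < t < g < 1
          assert g * t * (2 - g) < t * g * (2 - t)
  print("P(GTG) < P(TGT) on the whole grid")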

References

  1. M. Gardner, The Colossal Book of Short Puzzles and Problems (edited by Dana Richards), W. W. Norton, 2006
  2. J. Havil, Nonplussed!, Princeton University Press, 2007

