# Bernoulli Trials

Either you make it or you do not.

It happens very often in real life that an event has only two outcomes that matter. For example, either you pass an exam or you do not, either you get the job you applied for or you do not, either your flight departs on time or it is delayed, etc. The probability theory abstraction of all such situations is a Bernoulli trial.

A Bernoulli trial is an experiment with only two possible outcomes, which have positive probabilities p and q such that p + q = 1. The outcomes are called "success" and "failure" and are commonly denoted "S" and "F" or, say, 1 and 0.

For example, when rolling a die, we may only be interested in whether a 1 shows up, in which case, naturally, P(S) = 1/6 and P(F) = 5/6. If, when rolling two dice, we are only interested in whether the sum on the two dice is 11, then P(S) = 1/18 and P(F) = 17/18.
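The two-dice value P(S) = 1/18 can be checked by enumerating all 36 equally likely outcomes; a small sketch:

```python
from fractions import Fraction

# Enumerate all 36 equally likely outcomes of rolling two dice
# and count those whose sum is 11 (namely (5, 6) and (6, 5)).
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]
hits = [o for o in outcomes if sum(o) == 11]

p_success = Fraction(len(hits), len(outcomes))
print(p_success)      # 1/18
print(1 - p_success)  # 17/18
```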

A Bernoulli process is a succession of independent Bernoulli trials with the same probability of success. One important question about a succession of n Bernoulli trials is the probability of k successes.
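A Bernoulli process is easy to simulate; the sketch below (the function name and seeding are my own choices, not part of the text) runs n trials and reports the empirical fraction of successes, which should be close to p:

```python
import random

def bernoulli_process(n, p, seed=0):
    """Simulate n independent Bernoulli trials with success probability p.
    Returns a list of 1s (successes) and 0s (failures)."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

trials = bernoulli_process(100_000, 1 / 6)
print(sum(trials) / len(trials))  # empirical success rate, close to 1/6
```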

Since the individual trials are independent, the probability of any particular sequence of outcomes is the product of the probabilities of its successes and failures. Such a product is independent of the order in which the individual successes and failures come about. For example,

P(SSFSF) = P(SFFSS) = P(FFSSS) = p^3 q^2.
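The order independence of such a product can be illustrated directly; a minimal sketch (the helper function is mine, not from the text), using exact fractions for p = 1/6:

```python
from fractions import Fraction

def sequence_probability(seq, p):
    """Probability of a specific sequence of outcomes ('S'/'F'),
    assuming independent trials with success probability p."""
    q = 1 - p
    prob = Fraction(1)
    for outcome in seq:
        prob *= p if outcome == 'S' else q
    return prob

p = Fraction(1, 6)
print(sequence_probability('SSFSF', p))  # p^3 * q^2
print(sequence_probability('FFSSS', p))  # the same value: order does not matter
```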

In general, the probability of k successes in n trials is denoted b(k; n, p) and is equal to

b(k; n, p) = C(n, k) p^k q^(n - k),

where C(n, k) is the binomial coefficient n choose k. Observe that, by the binomial formula, ∑b(k; n, p) over k from 0 to n is exactly 1:

∑b(k; n, p) = ∑C(n, k) p^k q^(n - k) = (p + q)^n = 1.
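The formula for b(k; n, p) translates directly into code, and the sum-to-one identity can be verified exactly with fractions; a short sketch for n = 5 trials with p = 1/6:

```python
from fractions import Fraction
from math import comb

def b(k, n, p):
    """b(k; n, p) = C(n, k) * p^k * q^(n - k)."""
    q = 1 - p
    return comb(n, k) * p**k * q**(n - k)

p = Fraction(1, 6)
n = 5
for k in range(n + 1):
    print(k, b(k, n, p))

# By the binomial formula, these probabilities sum to exactly 1.
print(sum(b(k, n, p) for k in range(n + 1)))  # 1
```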

As a function of k, b(k; n, p) is known as the binomial distribution and plays an important role in probability theory.