Continuous Sample Spaces

Let's return to the two examples of continuous sample spaces we looked at on the Sample Spaces page:

Arrival time. The experimental setting is a metro (underground) station where trains pass (ideally) with equal intervals. A person enters the station. The experiment is to note the time of arrival past the departure time of the last train. If T is the interval between two consecutive trains, then the sample space for the experiment is the interval [0, T], or

  [0, T] = {t: 0 ≤ t ≤ T}.

Chord length. Given a circle of radius R, the experiment is to randomly select a chord in that circle. There are many ways to accomplish such a selection. However, the sample space is always the same:

  {AB: A and B are points on a given circle}.

One natural random variable defined on this space is the length of the chord. The variable takes values in the interval [0, 2R], where 2R is the diameter of the circle at hand. The length of a chord AB is zero if the two points happen to coincide.
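The chord experiment can be made concrete with a short simulation. The sketch below picks the endpoints A and B as two independent, uniformly distributed points on the circle; this is only one of the "many ways to accomplish such a selection" mentioned above, chosen here for illustration.

```python
import math
import random

def random_chord_length(R):
    """Length of a chord whose endpoints are two independent,
    uniformly distributed points on a circle of radius R."""
    a = random.uniform(0.0, 2.0 * math.pi)
    b = random.uniform(0.0, 2.0 * math.pi)
    # Chord between the points at angles a and b.
    return 2.0 * R * abs(math.sin((a - b) / 2.0))

R = 1.0
lengths = [random_chord_length(R) for _ in range(10_000)]
# Every chord length lies in [0, 2R]; the diameter is the maximum.
assert all(0.0 <= L <= 2.0 * R for L in lengths)
```

Other selection methods (e.g., choosing a random midpoint) define the same sample space but induce different distributions on the chord length.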

How do we define the probability of an arrival time t in the first experiment or the length, say, L in the second?

We need to have P(Ω) = 1, i.e., P([0, T]) = 1. On the other hand, in the first experiment, all points in the interval [0, T] seem to be equiprobable. And, since the probabilities P(t) must sum to 1, it looks like we have arrived at an impossible situation: if P(t) is non-zero, the sum of all probabilities will be infinite; if P(t) is 0, the sum will vanish as well. The apparent paradox is resolved by observing that, for a continuum of values, the notion of a sum is commonly replaced by an integral, a concept taught at the beginning of Calculus courses. Probabilities on a continuous sample space should therefore be defined differently, and not point-by-point.

On a continuous space we consider a non-negative function, say, f(t) - called probability density - that satisfies

  ∫f(t)dt = 1,

where the integration is over the interval [0, T]. Instead of talking of the probability P(t) of individual points, we are concerned with the probability of a point falling into a small interval around t, say [t - δ, t + δ], which is defined as the integral ∫f(t)dt, where now the integral is taken over [t - δ, t + δ]. In the case of the first experiment, it makes sense to define f(t) ≡ 1/T for any t in [0, T]. (Such a distribution is called uniform.) Then, for any subinterval [a, b],

  P(a, b) = P([a, b]) = (b - a) / T.

In particular,

  P(0, T) = (T - 0) / T = 1,

as expected.
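The formula P([a, b]) = (b - a)/T can be checked against a simulation of the arrival-time experiment. In the sketch below, the values of T, a, and b are hypothetical, chosen only for illustration.

```python
import random

T = 5.0          # hypothetical interval between trains
a, b = 1.0, 3.0  # subinterval of interest

# Exact probability under the uniform density f(t) = 1/T:
exact = (b - a) / T

# Monte Carlo estimate: fraction of uniform arrivals in [a, b].
n = 100_000
hits = sum(1 for _ in range(n) if a <= random.uniform(0.0, T) <= b)
estimate = hits / n

assert abs(exact - 0.4) < 1e-12
assert abs(estimate - exact) < 0.02  # close, up to sampling noise
```

The agreement between the simulated fraction and (b - a)/T is exactly what "uniform on [0, T]" means in practice.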

Along with f we usually define the probability distribution function F as the probability accumulated from the beginning of the interval of definition:

  F(t) = (t - 0) / T = t/T,

or, in the more general case,

  F(t) = ∫f(s)ds,

where now the integral is taken over the interval [0, t]. In any event, the probability distribution function is a non-negative, non-decreasing function that satisfies

  F(0) = 0 and F(T) = 1.
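For the uniform density, F can also be recovered by numerical integration of f, which makes the two boundary conditions easy to verify. A minimal sketch, with T = 5 as an assumed value:

```python
T = 5.0  # hypothetical interval between trains

def f(t):
    """Uniform probability density on [0, T]."""
    return 1.0 / T

def F(t, steps=10_000):
    """F(t) = integral of f over [0, t], via the midpoint rule."""
    h = t / steps
    return sum(f((i + 0.5) * h) * h for i in range(steps))

# F runs from 0 to 1, and for the uniform density it agrees
# with the closed form F(t) = t/T.
assert abs(F(0.0)) < 1e-12
assert abs(F(T) - 1.0) < 1e-9
assert abs(F(2.0) - 2.0 / T) < 1e-9
```

Since f is constant here, the midpoint rule reproduces t/T essentially exactly; for a non-uniform density the same code would give a numerical approximation to F.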


Copyright © 1996-2018 Alexander Bogomolny
