A random walk is a formalization of a discrete process wherein a random selection is made between two alternatives (often ±1) at fixed time intervals. There are several standard examples:
Tossing a coin. At times t = 1, 2, 3, ... a coin is tossed with two possible outcomes: H (heads) or T (tails).
A game of chance. A repeatedly played game such that at each round a player may win or lose a fixed amount of money.
Ballot counting. In an election with two candidates, a vote may go to either one or the other of the candidates.
The theory of random walks expands into a more general theory of Markov chains in which the random fluctuations need not be of fixed size.
In theory, each of the two possible outcomes turns up with a fixed probability: say, p for one and q for the other, so that p + q = 1. For example, in a game of chance a player may win $1 with probability p and lose $1 with probability q. If the outcome depends on a fair coin toss, it is reasonable to assume p = q = 1/2; in other situations p and q need not be equal.
Formally speaking, a random walk is determined by a sequence (S1, S2, ..., Sn) of independent, identically distributed random variables Sk that take on two values, say, ±1.
Random walks are naturally represented by a two-dimensional diagram. (This is by no means the only possibility: any configuration of Pascal's triangle will do as good a job as the one chosen here.)
From any given state (x, y), there are only two attainable states: (x + 1, y + 1) and (x + 1, y - 1). Most often the walk starts at the origin, (0, 0), but other starting points are also of interest: in a game of chance, for example, a player may start with a capital of a, i.e., at the point (0, a) in the diagram. A pertinent question is then to find the probability that the player's capital reaches b after N rounds, i.e., that the walk reaches the point (N, b).
Every node of the walk corresponds to an accumulated value sk = S1 + S2 + ... + Sk at the k-th step.
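The walk and its running sums sk are easy to simulate directly. A minimal sketch (the function name and parameters are illustrative, not from the source):

```python
import random

def random_walk(n_steps, p=0.5, seed=0):
    """Simulate n_steps independent steps S_k = +/-1, where
    P(S_k = +1) = p, and return the running sums s_1, ..., s_n."""
    rng = random.Random(seed)
    sums, total = [], 0
    for _ in range(n_steps):
        step = 1 if rng.random() < p else -1
        total += step
        sums.append(total)
    return sums

path = random_walk(10, p=0.5)
```

Note that each sk differs from its predecessor by exactly 1, and sk always has the same parity as k.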
First let's count the number of ways to get from point A = (m, a) to point B = (n, b), n > m. As a matter of fact, this is not always possible. Regardless of the values of a and b, a walk that joins (m, a) to (n, b) consists of N = n - m segments. If there are u upward (+1) segments and v downward (-1) segments, then u + v = n - m and also u - v = b - a. Subtracting gives 2v = (n - m) - (b - a), so these two equations in u and v have an integer solution only if n - m and b - a are of the same parity. There is also an implied restriction: since neither u nor v can be negative, we must require n - m ≥ |b - a|.
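The parity and range conditions translate into a simple reachability test. A sketch under the conventions above (the function name is illustrative):

```python
def reachable(m, a, n, b):
    """True when (n, b) can be reached from (m, a): the walk takes
    n - m steps, n - m and b - a must have the same parity, and the
    net displacement cannot exceed the number of steps."""
    return n > m and (n - m - (b - a)) % 2 == 0 and n - m >= abs(b - a)
```

For instance, (4, 2) is reachable from (0, 0), while (4, 3) fails the parity test and (2, 4) lies too far away.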
On the other hand, if point B is reachable from point A the numbers u and v, of upward and downward steps, are determined uniquely. The only distinction between different walks is in the sequencing of the upward and downward steps.
To simplify the notations, we seek the number of ways to get from (0, 0) to (N, b) by solving the system

u + v = N,
u - v = b,
from which u = (N + b)/2 and v = (N - b)/2. The number of ways to get from (0, 0) to (N, b) is given by the binomial coefficient C(N, (N + b)/2) = C(N, (N - b)/2), or C(u + v, u) = C(u + v, v).
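The binomial count can be checked against a brute-force enumeration of all ±1 sequences. A small sketch (the function name is illustrative):

```python
from math import comb
from itertools import product

def count_paths(N, b):
    """Number of walks from (0, 0) to (N, b): C(N, (N + b)/2),
    or 0 when the endpoint is unreachable."""
    if (N + b) % 2 != 0 or abs(b) > N:
        return 0
    return comb(N, (N + b) // 2)

# Cross-check for N = 6, b = 2: every +/-1 sequence of length 6
# with sum 2 is one such walk; there are C(6, 4) = 15 of them.
brute = sum(1 for w in product((1, -1), repeat=6) if sum(w) == 2)
```

Enumeration is only feasible for small N, but it confirms the closed form on small cases.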
The probability of getting from A = (0, 0) to the point B = (u + v, u - v) is given by C(u + v, u)p^u q^v, which is the coefficient of x^u in the binomial expansion of (xp + q)^(u + v). The latter is known as the generating function of the distribution C(u + v, u)p^u q^v.
In particular, if B = (2w, 0), the probability of getting from A = (0, 0) to B equals C(2w, w)p^w q^w.
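Both the general formula and the special case B = (2w, 0) can be verified by summing p^u q^v over all enumerated walks with the right endpoint. A sketch (the function name is illustrative):

```python
from math import comb
from itertools import product

def reach_probability(u, v, p):
    """P(a walk of u + v steps ends at height u - v) = C(u+v, u) p^u q^v."""
    return comb(u + v, u) * p**u * (1 - p)**v

# Check the return-to-zero case w = 3 (six steps) for a fair coin:
# exact value is C(6, 3) / 2^6 = 20/64 = 0.3125.
p = 0.5
exact = reach_probability(3, 3, p)
brute = sum(p**w.count(1) * (1 - p)**w.count(-1)
            for w in product((1, -1), repeat=6) if sum(w) == 0)
```

The two numbers agree because each of the C(6, 3) admissible sequencings of steps carries the same weight p^3 q^3.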
Copyright © 1996-2018 Alexander Bogomolny