Derive some facts of the negative binomial distribution

The previous post called The Negative Binomial Distribution gives a fairly comprehensive discussion of the negative binomial distribution. In this post, we fill in some of the details that are glossed over in that previous post. We cover the following points:

  • Discuss the three versions of the negative binomial distribution.
  • Show that the negative binomial probabilities sum to one, i.e., that the negative binomial probability function is a valid one.
  • Derive the moment generating function of the negative binomial distribution.
  • Derive the first and second moments and the variance of the negative binomial distribution.
  • Make an observation about independent sums of negative binomial distributions.

________________________________________________________________________

Three versions

The negative binomial distribution has two parameters r and p, where r is a positive real number and 0<p<1. The first two versions arise from the case where r is a positive integer, in which case the distribution can be interpreted via the random experiment of performing a sequence of independent Bernoulli trials until the rth success (the trials have the same probability of success p). In this interpretation, there are two ways of recording the random experiment:

    X = the number of Bernoulli trials required to get the rth success.
    Y = the number of Bernoulli trials that end in failure before getting the rth success.

The other parameter p is the probability of success in each Bernoulli trial. The notation \binom{m}{n} denotes the binomial coefficient, which, for non-negative integers m and n with m \ge n, is defined as:

    \displaystyle \binom{m}{n}=\frac{m!}{n! \ (m-n)!}=\frac{m(m-1) \cdots (m-(n-1))}{n!} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (0)

With this in mind, the following are the probability functions of the random variables X and Y.

    \displaystyle P(X=x)= \binom{x-1}{r-1} p^r (1-p)^{x-r} \ \ \ \ \ \ \ x=r,r+1,r+2,\cdots \ \ \ \ \ \ \ (1)

    \displaystyle P(Y=y)=\binom{y+r-1}{y} p^r (1-p)^y \ \ \ \ \ \ \ y=0,1,2,\cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)

The thought process for (1) is that for the event X=x to happen, there can only be r-1 successes in the first x-1 trials and one additional success occurring in the last trial (the xth trial). The thought process for (2) is that for the event Y=y to happen, there are y+r trials (y failures and r successes). In the first y+r-1 trials, there can be only y failures (or equivalently r-1 successes). Note that X=Y+r. Thus the mean of X follows immediately from the mean of Y, a fact we will use below.
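As a quick sanity check, the two probability functions and the relationship X=Y+r can be verified numerically. The following is a small Python sketch (the helper names pmf_X and pmf_Y are ours):

```python
from math import comb

def pmf_X(x, r, p):
    # formula (1): the rth success occurs on trial x
    return comb(x - 1, r - 1) * p**r * (1 - p)**(x - r)

def pmf_Y(y, r, p):
    # formula (2): y failures occur before the rth success
    return comb(y + r - 1, y) * p**r * (1 - p)**y

r, p = 3, 0.4
# since X = Y + r, the two pmfs agree after a shift of r
for y in range(20):
    assert abs(pmf_X(y + r, r, p) - pmf_Y(y, r, p)) < 1e-12
```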

Instead of memorizing the probability functions (1) and (2), it is better to understand and remember the thought processes involved. Because of the natural interpretation of performing Bernoulli trials until the rth success, it is a good idea to introduce the negative binomial distribution via the distributions described by (1) and (2), i.e., the case where the parameter r is a positive integer. When r=1, the random experiment is a sequence of independent Bernoulli trials until the first success (this is called the geometric distribution).

Of course, (1) and (2) can also simply be used as counting distributions without any connection with a series of Bernoulli trials (e.g. used in an insurance context as the number of losses or claims arising from a group of insurance policies).

The binomial coefficient in (0) is defined when both numbers are non-negative integers with the top one greater than or equal to the bottom one. However, the rightmost term in (0) can be calculated even when the top number m is not a non-negative integer. Thus when m is any real number, the rightmost term in (0) can be calculated provided that the bottom number n is a positive integer. For convenience we define \binom{m}{0}=1. With this in mind, the binomial coefficient \binom{m}{n} is defined for any real number m and any non-negative integer n.

The third version of the negative binomial distribution arises from the relaxation of the binomial coefficient \binom{m}{n} just discussed. With this in mind, the probability function in (2) can be defined for any positive real number r:

    \displaystyle P(Y=y)=\binom{y+r-1}{y} p^r (1-p)^y \ \ \ \ \ \ \ y=0,1,2,\cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)

where \displaystyle \binom{y+r-1}{y}=\frac{(y+r-1)(y+r-2) \cdots (r+1)r}{y!}.

Of course, when r is a positive integer, versions (2) and (3) are identical. When r is a positive real number that is not an integer, the distribution cannot be interpreted as the number of failures before the occurrence of the rth success. Instead, it is simply used as a counting distribution.
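When r is not an integer, the coefficient \binom{y+r-1}{y} can be computed directly as a product of y factors divided by y!. A minimal sketch (the function name gen_binom is ours):

```python
from math import comb, factorial

def gen_binom(y, r):
    # (y+r-1 choose y) for any real r > 0:
    # the product r(r+1)...(r+y-1) divided by y!
    prod = 1.0
    for k in range(y):
        prod *= r + k
    return prod / factorial(y)

# agrees with the ordinary binomial coefficient when r is a positive integer
assert abs(gen_binom(2, 3) - comb(4, 2)) < 1e-12
# but is also defined for non-integer r, e.g. r = 2.5
print(gen_binom(2, 2.5))  # 2.5 * 3.5 / 2! = 4.375
```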

________________________________________________________________________

The probabilities sum to one

Do the probabilities in (1), (2) or (3) sum to one? For the interpretations of (1) and (2), is it possible to repeatedly perform Bernoulli trials and never get the rth success? For r=1, is it possible to never even get a success? In tossing a fair coin repeatedly, soon enough you will get a head, and even if r is a large number, you will eventually accumulate r heads. Here we wish to prove this fact mathematically.

To show that (1), (2) and (3) are indeed probability functions, we use a fact concerning Maclaurin’s series expansion of the function (1-x)^{-r}, a fact that is covered in a calculus course. In the following two results, r is a fixed positive real number and y is any non-negative integer:

    \displaystyle \binom{y+r-1}{y}=(-1)^y \ \binom{-r}{y} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4)

    \displaystyle \sum \limits_{y=0}^\infty (-1)^y \ \binom{-r}{y} \ x^y=(1-x)^{-r} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (5)

Result (4) rewrites the binomial coefficient in probability function (3) as a binomial coefficient with a negative number on top. This is the origin of the word “negative” in negative binomial distribution. Result (5) is the Maclaurin’s series expansion for the function (1-x)^{-r}. We first derive these two facts and then use them to show that the negative binomial probabilities in (3) sum to one. The following derives (4).

    \displaystyle \begin{aligned} \binom{y+r-1}{y}&=\frac{(y+r-1)(y+r-2) \cdots (r+1)r}{y!} \\&=(-1)^y \ \frac{(-r)(-r-1) \cdots (-r-(y-1))}{y!} \\&=(-1)^y \ \binom{-r}{y}  \end{aligned}

To derive (5), let f(x)=(1-x)^{-r}. Based on a theorem that can be found in most calculus texts, the function f(x) has the following Maclaurin’s series expansion (a Maclaurin’s series is simply a Taylor’s series with center 0).

    \displaystyle (1-x)^{-r}=f(0)+f^{'}(0)x+\frac{f^{(2)}(0)}{2!}x^2+\frac{f^{(3)}(0)}{3!}x^3+\cdots + \frac{f^{(n)}(0)}{n!}x^n+\cdots

where -1<x<1. Now, filling in the derivatives f^{(n)}(0), we have the following derivation.

    \displaystyle \begin{aligned} (1-x)^{-r}&=1+rx+\frac{(r+1)r}{2!}x^2+\frac{(r+2)(r+1)r}{3!}x^3 \\& \ \ \ \ \ \ \ \ +\cdots+\frac{(r+y-1)(r+y-2) \cdots (r+1)r}{y!}x^y +\cdots \\&=1+(-1)^1 (-r)x+(-1)^2\frac{(-r)(-r-1)}{2!}x^2 \\& \ \ \ \ \ \ +(-1)^3 \frac{(-r)(-r-1)(-r-2)}{3!}x^3 +\cdots \\& \ \ \ \ \ \ +(-1)^y \frac{(-r)(-r-1) \cdots (-r-y+2)(-r-y+1)}{y!}x^y +\cdots  \\&=(-1)^0 \binom{-r}{0}x^0 +(-1)^1 \binom{-r}{1}x^1+(-1)^2 \binom{-r}{2}x^2 \\& \ \ \ \ \ \ +(-1)^3 \binom{-r}{3}x^3+\cdots +(-1)^y \binom{-r}{y}x^y+\cdots    \\&=\sum \limits_{y=0}^\infty (-1)^y \ \binom{-r}{y} \ x^y \end{aligned}

We can now show that the negative binomial probabilities in (3) sum to one. Let q=1-p.

    \displaystyle \begin{aligned} \sum \limits_{y=0}^\infty \binom{y+r-1}{y} \ p^r \ q^y &=p^r \ \sum \limits_{y=0}^\infty (-1)^y \ \binom{-r}{y} \ q^y \ \ \ \ \ \ \ \ \ \ \ \text{using } (4) \\&=p^r \ (1-q)^{-r} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{using } (5)\\&=p^r p^{-r} \\&=1 \end{aligned}
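The above result can be spot-checked numerically for a non-integer r by summing the probabilities in (3). The sketch below builds the probabilities with the ratio P(Y=y+1)/P(Y=y)=(y+r)(1-p)/(y+1), which follows from (3) and avoids computing large factorials (the helper name is ours):

```python
def nb_pmf_list(r, p, ymax):
    # P(Y=0), ..., P(Y=ymax) for version (3), built with the recursion
    # P(Y=y+1) = P(Y=y) * (y + r) * (1 - p) / (y + 1)
    probs = [p**r]
    for y in range(ymax):
        probs.append(probs[-1] * (y + r) * (1 - p) / (y + 1))
    return probs

r, p = 2.5, 0.3   # arbitrary non-integer r
total = sum(nb_pmf_list(r, p, 400))
assert abs(total - 1.0) < 1e-9   # the probabilities sum to one
```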

________________________________________________________________________

The moment generating function

We now derive the moment generating function of the negative binomial distribution according to (3). The moment generating function is M(t)=E(e^{tY}) over all real numbers t for which M(t) is defined. The following derivation does the job.

    \displaystyle \begin{aligned} M(t)&=E(e^{tY}) \\&=\sum \limits_{y=0}^\infty \ e^{t y} \ \binom{y+r-1}{y} \ p^r \ (1-p)^y \\&=p^r \ \sum \limits_{y=0}^\infty  \ \binom{y+r-1}{y} \ [(1-p) e^t]^y \\&=p^r \ \sum \limits_{y=0}^\infty  \ (-1)^y \binom{-r}{y} \ [(1-p) e^t]^y \ \ \ \ \ \ \ \ \ \ \ \text{using } (4) \\&=p^r \ [1-(1-p) \ e^t]^{-r} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{using } (5) \\&=\frac{p^r}{[1-(1-p) \ e^t]^{r}}\end{aligned}

The above moment generating function works for the negative binomial distribution with respect to (3) and thus to (2). For the distribution in (1), note that X=Y+r. Thus E(e^{tX})=E(e^{t(Y+r)})=e^{tr} \ E(e^{tY}). The moment generating function of (1) is simply the above moment generating function multiplied by the factor e^{tr}. To summarize, the moment generating functions for the three versions are:

    \displaystyle M_X(t)=E[e^{tX}]=\frac{p^r \ e^{tr}}{[1-(1-p) \ e^t]^{r}} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{for } (1)

    \displaystyle M_Y(t)=E[e^{tY}]=\frac{p^r}{[1-(1-p) \ e^t]^{r}} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{for } (2) \text{ and } (3)

The domain of the moment generating function is the set of all t for which M_X(t) or M_Y(t) is defined. Based on the form that it takes, we need 1-(1-p) \ e^t>0. This leads to the domain t<-\text{ln}(1-p).
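The closed form for M_Y(t) can be checked against the defining series for any t inside the domain. A small sketch (the values of r, p and t are arbitrary choices):

```python
from math import comb, exp, log

r, p, t = 3, 0.4, 0.2
# t must satisfy t < -ln(1-p), about 0.511 here
assert t < -log(1 - p)

# closed form: p^r / (1 - (1-p) e^t)^r
mgf_closed = p**r / (1 - (1 - p) * exp(t))**r
# defining series: sum of e^{ty} P(Y=y) over y
mgf_series = sum(exp(t * y) * comb(y + r - 1, y) * p**r * (1 - p)**y
                 for y in range(500))
assert abs(mgf_closed - mgf_series) < 1e-9
```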

________________________________________________________________________

The mean and the variance

With the moment generating function derived in the above section, we can now focus on finding the moments of the negative binomial distribution. To find the moments, simply take the derivatives of the moment generating function and evaluate at t=0. For the distribution represented by the probability function in (3), we calculate the following:

    E(Y)=M_Y^{'}(0)

    E(Y^2)=M_Y^{(2)}(0)

    Var(Y)=E(Y^2)-E(Y)^2

After taking the first and second derivatives and evaluating at t=0, the first and the second moments are:

    \displaystyle E(Y)=r \ \frac{1-p}{p}

    \displaystyle E(Y^2)=\frac{r(1-p)[1+r(1-p)]}{p^2}

The following derives the variance.

    \displaystyle \begin{aligned} Var(Y)&=E(Y^2)-E(Y)^2 \\&=\frac{r(1-p)[1+r(1-p)]}{p^2}-\frac{r^2(1-p)^2}{p^2} \\&=\frac{r(1-p)[1+r(1-p)-r(1-p)]}{p^2} \\&=\frac{r(1-p)}{p^2}  \end{aligned}

The above formula is the variance for all three versions (1), (2) and (3). Note that Var(Y)>E(Y). In contrast, the variance of the Poisson distribution is identical to its mean. Thus in situations where the variance of the observed data is greater than the sample mean, the negative binomial distribution may be a better fit than the Poisson distribution.
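The formulas for the mean and the variance can be verified by direct summation against the probability function. A quick sketch with arbitrary integer parameters:

```python
from math import comb

r, p = 4, 0.35
q = 1 - p
pmf = lambda y: comb(y + r - 1, y) * p**r * q**y   # version (2)/(3)

ys = range(600)                      # enough terms for convergence
mean = sum(y * pmf(y) for y in ys)
second = sum(y * y * pmf(y) for y in ys)
var = second - mean**2

assert abs(mean - r * q / p) < 1e-8       # E(Y) = r(1-p)/p
assert abs(var - r * q / p**2) < 1e-8     # Var(Y) = r(1-p)/p^2
assert var > mean                         # overdispersion relative to Poisson
```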

________________________________________________________________________

The independent sum

There is an easy consequence that follows from the moment generating function derived above. The sum of several independent negative binomial random variables is also a negative binomial random variable. For example, suppose T_1,T_2, \cdots,T_n are independent negative binomial random variables (version (3)). Suppose each T_j has parameters r_j and p (the second parameter p is the same for all j). The moment generating function of the independent sum is the product of the individual moment generating functions. Thus the following is the moment generating function of T=T_1+\cdots+T_n.

    \displaystyle M_T(t)=E[e^{tT}]=\frac{p^g}{[1-(1-p) \ e^t]^{g}}

where g=r_1+\cdots+r_n. The moment generating function uniquely identifies the distribution. The above M_T(t) is that of a negative binomial distribution with parameters g and p according to (3).

A special case is that the sum of n independent geometric distributions is a negative binomial distribution with the r parameter being r=n. The following is the moment generating function of the sum W of n independent geometric distributions.

    \displaystyle M_W(t)=E[e^{tW}]=\frac{p^n}{[1-(1-p) \ e^t]^{n}}
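The independent-sum property can also be confirmed without moment generating functions by convolving two probability functions directly. A short sketch with arbitrary integer parameters:

```python
from math import comb

def nb_pmf(y, r, p):
    # version (2)/(3): y failures before the rth success
    return comb(y + r - 1, y) * p**r * (1 - p)**y

r1, r2, p = 2, 3, 0.4
# the pmf of T1 + T2 by direct convolution should match NB(r1 + r2, p)
for t in range(15):
    conv = sum(nb_pmf(j, r1, p) * nb_pmf(t - j, r2, p) for j in range(t + 1))
    assert abs(conv - nb_pmf(t, r1 + r2, p)) < 1e-12
```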

________________________________________________________________________
\copyright \ \text{2015 by Dan Ma}


Getting from Binomial to Poisson

In many binomial problems, the number of Bernoulli trials n is large, relatively speaking, and the probability of success p is small such that n p is of moderate magnitude. For example, consider problems that deal with rare events where the probability of occurrence is small (as a concrete example, counting the number of people with July 1 as birthday out of a random sample of 1000 people). It is often convenient to approximate such binomial problems using the Poisson distribution. The justification for using the Poisson approximation is that the Poisson distribution is a limiting case of the binomial distribution. Now that cheap computing power is widely available, it is quite easy to use a computer or other computing device to obtain exact binomial probabilities for experiments of up to 1000 trials or more. Though the Poisson approximation may no longer be necessary for such problems, knowing how to get from binomial to Poisson is important for understanding the Poisson distribution itself.

Consider a counting process that describes the occurrences of a certain type of events of interest in a unit time interval subject to three simplifying assumptions (discussed below). We are interested in counting the number of occurrences of the event of interest in a unit time interval. As a concrete example, consider the number of cars arriving at an observation point on a certain highway in a period of time, say one hour. We wish to model the probability distribution of the number of cars that will arrive at the observation point on this particular highway in one hour. Let X be the random variable described by this probability distribution. We wish to know the probability that there are k cars arriving in one hour. We start with using a binomial distribution as an approximation to the probability P(X=k). We will see that upon letting n \rightarrow \infty, P(X=k) becomes a Poisson probability.

Suppose that we know E(X)=\alpha, perhaps an average obtained after observing cars at the observation point for many hours. The simplifying assumptions alluded to earlier are the following:

  1. The numbers of cars arriving in nonoverlapping time intervals are independent.
  2. The probability of exactly one car arriving in a very short time interval of length h is approximately \alpha h.
  3. The probability of having more than one car arriving in a very short time interval is essentially zero.

Assumption 1 means that a large number of cars arriving in one period does not imply fewer cars will arrive in the next period, and vice versa. In other words, the number of cars that arrive in any one given moment does not affect the number of cars that will arrive subsequently. Knowing how many cars arrive in one minute will not help predict the number of cars arriving in the next minute. Assumption 2 means that the rate of cars arriving depends only on the length of the time interval and not on when the time interval occurs (e.g. not on whether it is at the beginning of the hour or toward the end of the hour). Assumptions 2 and 3 allow us to think of a very short period of time as a Bernoulli trial. Thinking of the arrival of a car as a success, each short time interval will result in either one success or one failure.

To start, we can break up the hour into 60 minutes (into 60 Bernoulli trials). We then consider the binomial distribution with n=60 and p=\frac{\alpha}{60}. So the following is an approximation to our desired probability distribution.

\displaystyle (1) \ \ \ \ \ P(X=k) \approx \binom{60}{k} \biggl(\frac{\alpha}{60}\biggr)^k \biggl(1-\frac{\alpha}{60}\biggr)^{60-k} \ \ \ \ \ k=0,1,2,\cdots, 60

Conceivably, there can be more than 1 car arriving in a minute and observing cars in a one-minute interval may not be a Bernoulli trial. For a one-minute interval to qualify as a Bernoulli trial, there is either no car arriving or 1 car arriving in that one minute. So we can break up an hour into 3,600 seconds (into 3,600 Bernoulli trials). We now consider the binomial distribution with n=3600 and p=\frac{\alpha}{3600}.

\displaystyle (2) \ \ \ \ \ P(X=k) \approx \binom{3600}{k} \biggl(\frac{\alpha}{3600}\biggr)^k \biggl(1-\frac{\alpha}{3600}\biggr)^{3600-k} \ \ \ \ \ k=0,1,2,\cdots, 3600

It is also conceivable that more than 1 car can arrive in one second and observing cars in one-second interval may still not qualify as a Bernoulli trial. So we need to get more granular. We can divide up the hour into n equal subintervals, each of length \frac{1}{n}. The assumptions 2 and 3 ensure that each subinterval is a Bernoulli trial (either it is a success or a failure; one car arriving or no car arriving). Assumption 1 tells us that all the n subintervals are independent. So breaking up the hour into n moments and counting the number of moments that are successes will result in a binomial distribution with parameters n and p=\frac{\alpha}{n}. So we are ready to proceed with the following approximation to our probability distribution P(X=k).

\displaystyle (3) \ \ \ \ \ P(X=k) \approx \binom{n}{k} \biggl(\frac{\alpha}{n}\biggr)^k \biggl(1-\frac{\alpha}{n}\biggr)^{n-k} \ \ \ \ \ k=0,1,2,\cdots, n

As we get more granular, n \rightarrow \infty. We show that the limit of the binomial probability in (3) is the Poisson distribution with parameter \alpha. We show the following.

\displaystyle (4) \ \ \ \ \ P(X=k) = \lim \limits_{n \rightarrow \infty} \binom{n}{k} \biggl(\frac{\alpha}{n}\biggr)^k \biggl(1-\frac{\alpha}{n}\biggr)^{n-k}=\frac{e^{-\alpha} \alpha^k}{k!} \ \ \ \ \ \ k=0,1,2,\cdots

In the derivation of (4), we need the following two mathematical tools. The statement (5) is one of the definitions of the mathematical constant e. In the statement (6), the integer n in the numerator is greater than the integer k in the denominator. It says that such a ratio of two factorials is the product of n and the successively smaller integers down to n-(k-1). There are exactly k terms in the product.

\displaystyle (5) \ \ \ \ \ \lim \limits_{n \rightarrow \infty} \biggl(1+\frac{x}{n}\biggr)^n=e^x

\displaystyle (6) \ \ \ \ \ \frac{n!}{(n-k)!}=n(n-1)(n-2) \cdots (n-k+1) \ \ \ \ \ \ \ \  k<n

The following is the derivation of (4).

\displaystyle \begin{aligned}(7) \ \ \ \ \  P(X=k)&=\lim \limits_{n \rightarrow \infty} \binom{n}{k} \biggl(\frac{\alpha}{n}\biggr)^k \biggl(1-\frac{\alpha}{n}\biggr)^{n-k} \\&=\lim \limits_{n \rightarrow \infty} \ \frac{n!}{k! (n-k)!} \biggl(\frac{\alpha}{n}\biggr)^k \biggl(1-\frac{\alpha}{n}\biggr)^{n-k} \\&=\lim \limits_{n \rightarrow \infty} \ \frac{n(n-1)(n-2) \cdots (n-k+1)}{n^k} \biggl(\frac{\alpha^k}{k!}\biggr) \biggl(1-\frac{\alpha}{n}\biggr)^{n} \biggl(1-\frac{\alpha}{n}\biggr)^{-k} \\&=\biggl(\frac{\alpha^k}{k!}\biggr) \lim \limits_{n \rightarrow \infty} \ \frac{n(n-1)(n-2) \cdots (n-k+1)}{n^k} \\&\times \ \ \ \lim \limits_{n \rightarrow \infty} \biggl(1-\frac{\alpha}{n}\biggr)^{n} \ \lim \limits_{n \rightarrow \infty} \biggl(1-\frac{\alpha}{n}\biggr)^{-k} \\&=\frac{e^{-\alpha} \alpha^k}{k!} \end{aligned}

In (7), we have \displaystyle \lim \limits_{n \rightarrow \infty} \ \frac{n(n-1)(n-2) \cdots (n-k+1)}{n^k}=1. The reason is that the numerator is a polynomial in n with leading term n^k; upon dividing by n^k and taking the limit, we get 1. Based on (5), we have \displaystyle \lim \limits_{n \rightarrow \infty} \biggl(1-\frac{\alpha}{n}\biggr)^{n}=e^{-\alpha}. For the last limit in the derivation we have \displaystyle \lim \limits_{n \rightarrow \infty} \biggl(1-\frac{\alpha}{n}\biggr)^{-k}=1.
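The convergence in (4) can be observed numerically by letting the number of subintervals grow, mirroring the 60-minute and 3,600-second approximations above. A small sketch (the helper names are ours):

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    # binomial probability of k successes in n trials
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, a):
    # Poisson probability with parameter a
    return exp(-a) * a**k / factorial(k)

alpha, k = 2.0, 3
target = poisson_pmf(k, alpha)
# the binomial approximation with p = alpha/n approaches the Poisson value
for n in (60, 3600, 1_000_000):
    err = abs(binom_pmf(k, n, alpha / n) - target)
    print(n, err)   # the error shrinks as n grows
```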

We conclude with some comments. As the above derivation shows, the Poisson distribution is at heart a binomial distribution. When we divide the unit time interval into more and more subintervals (as the subintervals get more and more granular), the resulting binomial distribution behaves more and more like the Poisson distribution.

The three assumptions used in the derivation are called the Poisson postulates, which are the underlying assumptions that govern a Poisson process. Such a random process describes the occurrences of some type of events of interest (e.g. the arrivals of cars in our example) in a fixed period of time. The positive constant \alpha indicated in Assumption 2 is the parameter of the Poisson process, which can be interpreted as the rate of occurrences of the event of interest (or rate of change, or rate of arrivals) in a unit time interval; that is, the positive constant \alpha is the mean number of occurrences in the unit time interval. The derivation in (7) shows that whenever a certain type of events occurs according to a Poisson process with parameter \alpha, the counting variable of the number of occurrences in the unit time interval is distributed according to the Poisson distribution as indicated in (4).

If we observe the occurrences of events over intervals of length other than unit length, say, an interval of length t, the counting process is governed by the same three postulates, with the modification to Assumption 2 that the rate of the process is now \alpha t. The mean number of occurrences in the time interval of length t is now \alpha t. Assumption 2 now states that for any very short time interval of length h (that is also a subinterval of the interval of length t under observation), the probability of having one occurrence of the event in this short interval is (\alpha t)h. Applying the same derivation, it can be shown that the number of occurrences (X_t) in a time interval of length t has the Poisson distribution with the following probability mass function.

\displaystyle (8) \ \ \ \ \ P(X_t=k)=\frac{e^{-\alpha t} \ (\alpha t)^k}{k!} \ \ \ \ \ \ \ \  k=0,1,2,\cdots

Splitting a Poisson Distribution

We consider a remarkable property of the Poisson distribution that has a connection to the multinomial distribution. We start with the following examples.

Example 1
Suppose that the arrivals of customers in a gift shop at an airport follow a Poisson distribution with a mean of \alpha=5 per 10 minutes. Furthermore, suppose that each arrival can be classified into one of three distinct types – type 1 (no purchase), type 2 (purchase under $20), and type 3 (purchase over $20). Records show that about 25% of the customers are of type 1. The percentages of type 2 and type 3 are 60% and 15%, respectively. What is the probability distribution of the number of customers per hour of each type?

Example 2
Roll a fair die N times where N is random and follows a Poisson distribution with parameter \alpha. For each i=1,2,3,4,5,6, let N_i be the number of times the upside of the die is i. What is the probability distribution of each N_i? What is the joint distribution of N_1,N_2,N_3,N_4,N_5,N_6?

In Example 1, the stream of customers arrives according to a Poisson distribution. It can be shown that the stream of each type of customers also has a Poisson distribution. One way to view this example is that we can split the Poisson distribution into three Poisson distributions.

Example 2 also describes a splitting process, i.e. splitting a Poisson variable into 6 different Poisson variables. We can also view Example 2 as a multinomial distribution where the number of trials is not fixed but is random and follows a Poisson distribution. If the number of rolls of the die were fixed in Example 2 (say 10), then each N_i would have a binomial distribution. Yet, with the number of trials being Poisson, each N_i has a Poisson distribution with mean \displaystyle \frac{\alpha}{6}. In this post, we describe this Poisson splitting process in terms of a “random” multinomial distribution (the viewpoint of Example 2).

________________________________________________________________________

Suppose we have a multinomial experiment with parameters N, r, p_1, \cdots, p_r, where

  • N is the number of multinomial trials,
  • r is the number of distinct possible outcomes in each trial (type 1 through type r),
  • the p_i are the probabilities of the r possible outcomes in each trial.

Suppose that N follows a Poisson distribution with parameter \alpha. For each i=1, \cdots, r, let N_i be the number of occurrences of the i^{th} type of outcomes in the N trials. Then N_1,N_2,\cdots,N_r are mutually independent Poisson random variables with parameters \alpha p_1,\alpha p_2,\cdots,\alpha p_r, respectively.

Given the number of trials N, the variables N_1,N_2,\cdots,N_r have a multinomial distribution; their conditional joint probability function is:

\displaystyle (1) \ \ \ \ P(N_1=n_1,N_2=n_2,\cdots,N_r=n_r)=\frac{N!}{n_1! n_2! \cdots n_r!} \ p_1^{n_1} p_2^{n_2} \cdots p_r^{n_r}

where n_i are nonnegative integers such that N=n_1+n_2+\cdots+n_r.

Since the total number of multinomial trials N is not fixed and is random, (1) is not the end of the story. The following is the joint probability function of N_1,N_2,\cdots,N_r:

\displaystyle \begin{aligned}(2) \ \ \ \ P(N_1=n_1,N_2=n_2,\cdots,N_r=n_r)&=P(N_1=n_1,N_2=n_2,\cdots,N_r=n_r \lvert N=\sum \limits_{k=1}^r n_k) \\&\ \ \ \ \ \times P(N=\sum \limits_{k=1}^r n_k) \\&\text{ } \\&=\frac{(\sum \limits_{k=1}^r n_k)!}{n_1! \ n_2! \ \cdots \ n_r!} \ p_1^{n_1} \ p_2^{n_2} \ \cdots \ p_r^{n_r} \ \times \frac{e^{-\alpha} \alpha^{\sum \limits_{k=1}^r n_k}}{(\sum \limits_{k=1}^r n_k)!} \\&\text{ } \\&=\frac{e^{-\alpha p_1} \ (\alpha p_1)^{n_1}}{n_1!} \ \frac{e^{-\alpha p_2} \ (\alpha p_2)^{n_2}}{n_2!} \ \cdots \ \frac{e^{-\alpha p_r} \ (\alpha p_r)^{n_r}}{n_r!} \end{aligned}

To obtain the marginal probability function of N_j, j=1,2,\cdots,r, we sum out the other variables N_k=n_k (k \ne j) in (2) and obtain the following:

\displaystyle (3) \ \ \ \ P(N_j=n_j)=\frac{e^{-\alpha p_j} \ (\alpha p_j)^{n_j}}{n_j!}

Thus we can conclude that N_j, j=1,2,\cdots,r, has a Poisson distribution with parameter \alpha p_j. Furthermore, the joint probability function of N_1,N_2,\cdots,N_r is the product of the marginal probability functions. Thus we can conclude that N_1,N_2,\cdots,N_r are mutually independent.
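The key step in (2), that the multinomial conditional probability times the Poisson probability of N factors into a product of Poisson probabilities, can be checked numerically for any particular outcome. A sketch (the numbers chosen here are arbitrary):

```python
from math import exp, factorial

def poisson_pmf(k, a):
    return exp(-a) * a**k / factorial(k)

alpha = 4.0
p = [0.25, 0.60, 0.15]   # the type probabilities; must sum to 1
n = [2, 5, 1]            # one arbitrary outcome of (N_1, N_2, N_3)
N = sum(n)

# left side of (2): multinomial conditional probability times P(N = n1+n2+n3)
multinom = float(factorial(N))
for ni, pi in zip(n, p):
    multinom = multinom * pi**ni / factorial(ni)
joint = multinom * poisson_pmf(N, alpha)

# right side of (2): product of independent Poisson pmfs with means alpha*p_i
product = 1.0
for ni, pi in zip(n, p):
    product *= poisson_pmf(ni, alpha * pi)

assert abs(joint - product) < 1e-12
```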

________________________________________________________________________
Example 1
Let N_1,N_2,N_3 be the number of customers per hour of type 1, type 2, and type 3, respectively. Here, we attempt to split a Poisson distribution with mean 30 per hour (based on 5 per 10 minutes). Thus N_1,N_2,N_3 are mutually independent Poisson variables with means 30 \times 0.25=7.5, 30 \times 0.60=18, 30 \times 0.15=4.5, respectively.

Example 2
As indicated earlier, each N_i, i=1,2,3,4,5,6, has a Poisson distribution with mean \frac{\alpha}{6}. According to (2), the joint probability function of N_1,N_2,N_3,N_4,N_5,N_6 is simply the product of the six marginal Poisson probability functions.

The Poisson Distribution

Let \alpha be a positive constant. Consider the following probability distribution:

\displaystyle (1) \ \ \ \ \ P(X=j)=\frac{e^{-\alpha} \alpha^j}{j!} \ \ \ \ \ j=0,1,2,\cdots

The above distribution is said to be a Poisson distribution with parameter \alpha. The Poisson distribution is usually used to model the random number of events occurring in a fixed time interval. As will be shown below, E(X)=\alpha. Thus the parameter \alpha is the rate of occurrence of the random events; it indicates on average how many events occur per unit of time. Examples of random events that may be modeled by the Poisson distribution include the number of alpha particles emitted by a radioactive substance counted in a prescribed area during a fixed period of time, the number of auto accidents in a fixed period of time or the number of losses arising from a group of insureds during a policy period.

Each of the above examples can be thought of as a process that generates a number of arrivals or changes in a fixed period of time. If such a counting process leads to a Poisson distribution, then the process is said to be a Poisson process.

We now discuss some basic properties of the Poisson distribution. Using the Taylor series expansion of e^{\alpha}, the following shows that (1) is indeed a probability distribution.

\displaystyle . \ \ \ \ \ \ \ \sum \limits_{j=0}^\infty \frac{e^{-\alpha} \alpha^j}{j!}=e^{-\alpha} \sum \limits_{j=0}^\infty \frac{\alpha^j}{j!}=e^{-\alpha} e^{\alpha}=1

The generating function of the Poisson distribution is g(z)=e^{\alpha (z-1)} (see The generating function). The mean and variance can be calculated using the generating function.

\displaystyle \begin{aligned}(2) \ \ \ \ \ &E(X)=g'(1)=\alpha \\&\text{ } \\&E[X(X-1)]=g^{(2)}(1)=\alpha^2 \\&\text{ } \\&Var(X)=E[X(X-1)]+E(X)-E(X)^2=\alpha^2+\alpha-\alpha^2=\alpha \end{aligned}

The Poisson distribution can also be interpreted as an approximation to the binomial distribution. It is well known that the Poisson distribution is the limiting case of binomial distributions (see [1] or this post).

\displaystyle (3) \ \ \ \ \ \lim \limits_{n \rightarrow \infty} \binom{n}{j} \biggl(\frac{\alpha}{n}\biggr)^j \biggl(1-\frac{\alpha}{n}\biggr)^{n-j}=\frac{e^{-\alpha} \alpha^j}{j!}

One application of (3) is that we can use Poisson probabilities to approximate binomial probabilities. The approximation is reasonably good when the number of trials n in a binomial distribution is large and the probability of success p is small. The binomial mean is n p and the variance is n p (1-p). When p is small, 1-p is close to 1 and the binomial variance is approximately n p (1-p) \approx np. Whenever the variance of a discrete distribution is approximately equal to its mean, the Poisson approximation is quite good. As a rule of thumb, we can use the Poisson distribution to approximate the binomial if n \ge 100 and p \le 0.01.

As an example, we use the Poisson distribution to estimate the probability that at most 1 person out of 1000 will have a birthday on New Year's Day. Let n=1000 and p=365^{-1}. So we use the Poisson distribution with \alpha=1000 \times 365^{-1}. The following is an estimate using the Poisson distribution.

\displaystyle . \ \ \ \ \ \ \ P(X \le 1)=e^{-\alpha}+\alpha e^{-\alpha}=(1+\alpha) e^{-\alpha}=0.2415
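The quality of this approximation can be checked against the exact binomial computation. A small sketch:

```python
from math import exp

n, p = 1000, 1 / 365
alpha = n * p

# exact binomial probability P(X <= 1) = P(X=0) + P(X=1)
exact = (1 - p)**n + n * p * (1 - p)**(n - 1)
# Poisson approximation (1 + alpha) e^{-alpha}
approx = (1 + alpha) * exp(-alpha)

print(round(exact, 4), round(approx, 4))
assert abs(exact - approx) < 0.005   # the two agree to about three decimals
```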

Another useful property is that the independent sum of Poisson distributions also has a Poisson distribution. Specifically, if each X_i has a Poisson distribution with parameter \alpha_i, then the independent sum X=X_1+\cdots+X_n has a Poisson distribution with parameter \alpha=\alpha_1+\cdots+\alpha_n. One way to see this is that the product of Poisson generating functions has the same general form as g(z)=e^{\alpha (z-1)} (see The generating function). One interpretation of this property is that when merging several arrival processes, each of which follow a Poisson distribution, the result is still a Poisson distribution.

For example, suppose that at an airline ticket counter, the arrival of first class customers follows a Poisson process with a mean arrival rate of 8 per 15 minutes and the arrival of customers flying coach follows a Poisson distribution with a mean rate of 12 per 15 minutes. Then the arrival of customers of either type has a Poisson distribution with a mean rate of 20 per 15 minutes, or 80 per hour.

A Poisson distribution with a large mean can be thought of as an independent sum of Poisson distributions. For example, a Poisson distribution with a mean of 50 is the independent sum of 50 Poisson distributions each with mean 1. Because of the central limit theorem, when the mean is large, we can approximate the Poisson using the normal distribution.

In addition to merging several Poisson distributions into one combined Poisson distribution, we can also split a Poisson into several Poisson distributions. For example, suppose that a stream of customers arrives according to a Poisson distribution with parameter \alpha and each customer can be classified into one of two types (e.g. no purchase vs. purchase) with probabilities p_1 and p_2, respectively. Then the number of “no purchase” customers and the number of “purchase” customers are independent Poisson random variables with parameters \alpha p_1 and \alpha p_2, respectively. For more details on the splitting of Poisson, see Splitting a Poisson Distribution.

Reference

  1. Feller W. An Introduction to Probability Theory and Its Applications, Third Edition, John Wiley & Sons, New York, 1968