Gamma distribution and Poisson distribution

The gamma distribution is important in many statistical applications. This post discusses the connections between the gamma distribution and the Poisson distribution. The following is the probability density function of the gamma distribution.

    \displaystyle (1) \ \ \ \ \ f(x)=\frac{1}{\Gamma(\beta)} \ \alpha^\beta \ x^{\beta-1} \ e^{-\alpha x} \ \ \ \ \ \ x>0

The numbers \alpha and \beta, both positive, are fixed constants and are the parameters of the distribution. The symbol \Gamma(\cdot) is the gamma function. This density function describes how potential observations from the gamma distribution are spread across the positive x-axis. Baked into this gamma probability density function are two pieces of information about the Poisson distribution. We can simply read off the information from the parameters \alpha and \beta in the density function.

Poisson-Gamma Mixture

For a reason that will be given shortly, the parameters \alpha and \beta in (1) give rise to a negative binomial distribution. The following is the probability function of this negative binomial distribution.

    \displaystyle (2) \ \ \ \ \ P(N=n)=\binom{\beta+n-1}{n} \ \biggl(\frac{\alpha}{1+\alpha} \biggr)^\beta \ \biggl(\frac{1}{1+\alpha} \biggr)^n \ \ \ \ \ \ n=0,1,2,3,\cdots

If the parameter \beta is a positive integer, then (2) has a nice interpretation in the context of a series of independent Bernoulli trials. Consider performing a series of independent trials where each trial has one of two distinct outcomes (called success or failure). Assume that the probability of a success in each trial is p=\alpha/(1+\alpha). In this case, the quantity in (2) is the probability of having n failures before the occurrence of the \betath success.

Note that when \beta is a positive integer, the binomial coefficient \binom{\beta+n-1}{n} has the usual calculation \frac{(\beta+n-1)!}{n! (\beta-1)!}. When \beta is not an integer but is only a positive number, the binomial coefficient \binom{\beta+n-1}{n} is calculated as follows:

    \displaystyle \binom{\beta+n-1}{n}=\frac{(\beta+n-1) (\beta+n-2) \cdots (\beta+1) \beta}{n!}

For this calculation to work, \beta does not have to be a positive integer; it only needs to be a positive real number. In that case, the distribution in (2) does not have a natural interpretation in terms of performing a series of independent Bernoulli trials. It is simply a counting distribution, i.e. a discrete random variable modeling the number of occurrences of a certain type of random event.

What does this have to do with gamma distribution and Poisson distribution? When a conditional random variable X \lvert \lambda has a Poisson distribution such that its mean \lambda is an unknown random quantity but follows a gamma distribution with parameters \beta and \alpha as described in (1), the unconditional distribution for X has a negative binomial distribution as described in (2). In other words, the mixture of Poisson distributions with gamma mixing weights is a negative binomial distribution.
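To make this concrete, the following is a minimal simulation sketch in Python (using numpy and scipy, with hypothetical parameter values \alpha=2 and \beta=3 chosen only for illustration): draw \lambda from the gamma distribution in (1), draw a Poisson count with that mean, and compare the empirical frequencies with the negative binomial probabilities in (2).

```python
import numpy as np
from scipy import stats

# hypothetical parameter values chosen for illustration
alpha, beta = 2.0, 3.0                     # rate and shape in (1)
rng = np.random.default_rng(0)

lam = rng.gamma(shape=beta, scale=1/alpha, size=200_000)   # gamma mixing weights
x = rng.poisson(lam)                                       # conditional Poisson draws

nb = stats.nbinom(beta, alpha/(1 + alpha))                 # negative binomial in (2)
for n in range(5):
    print(n, round((x == n).mean(), 4), round(nb.pmf(n), 4))
```

The simulated frequencies and the negative binomial probabilities should agree to within sampling error.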

There is an insurance interpretation of the Poisson-gamma mixture. Suppose that in a large pool of insureds, the annual claim frequency of an insured is a Poisson distribution with mean \lambda. The quantity \lambda varies from insured to insured but is assumed to follow a gamma distribution. Here we have a family of conditional Poisson distributions where each one is conditional on the characteristics of the particular insured in question (a low risk insured has a low \lambda value and a high risk insured has a high \lambda value). The “average” of the conditional Poisson distributions will be a negative binomial distribution (using the gamma distribution to weight the parameter \lambda). Thus the claim frequency for an “average” insured in the pool should be modeled by a negative binomial distribution. For a randomly selected insured from the pool, if we do not know anything about this insured, we can use the unconditional negative binomial distribution to model the claim frequency.

A detailed discussion of the negative binomial distribution is found here and here. The notion of mixture distributions and Poisson-gamma mixture in particular are discussed here. Many distributions applicable in actuarial applications are mixture distributions (see here for examples).

Waiting Times in a Poisson Process

The parameter \beta in (1) is called the shape parameter while the parameter \alpha is called the rate parameter. The reason for the name rate parameter will be made clear shortly. For this interpretation of the gamma distribution to work, the shape parameter \beta must be a positive integer.

The gamma distribution as described in (1) is intimately connected to a Poisson process through the rate parameter \alpha. In a Poisson process, an event of particular interest occurs at random at a rate of \alpha per unit time. Two questions are of interest. How many times will this event occur in a given time interval? How long do we have to wait to observe the first occurrence, the second occurrence and so on? The gamma distribution as described in (1) gives an answer to the second question when \beta is a positive integer.

A counting process is a random experiment that observes the random occurrences of a certain type of events of interest in a time period. A Poisson process is a counting process that satisfies three simplifying assumptions that deal with independence and uniformity in time.

A good example of a Poisson process is the well known experiment in radioactivity conducted by Rutherford and Geiger in 1910. In this experiment, \alpha-particles were emitted from a polonium source and the number of \alpha-particles was counted during intervals of 7.5 seconds (2,608 such time intervals were observed). In these 2,608 intervals, a total of 10,097 particles were observed. Thus the mean count per period of 7.5 seconds is 10097 / 2608 = 3.87. In this experiment, \alpha-particles are observed at a rate of 3.87 per unit time (7.5 seconds).

In general, consider a Poisson process with a rate of \lambda per unit time, i.e. the event that is of interest occurs at the rate of \lambda per unit time. There are two random variables of interest. One is N_t which is the number of occurrences of the event of interest from time 0 to time t. The other is X_n which is the waiting time until the nth occurrence of the event of interest where n is a positive integer.

What can we say about the random variables N_t? First, the random variable N_1, the number of occurrences of the event of interest in a unit time interval, has a Poisson distribution with mean \lambda. The following is the probability function.

    \displaystyle (3) \ \ \ \ \ P(N_1=k)=\frac{1}{k!} \ \lambda^k \ e^{-\lambda} \ \ \ \ \ \ \ \ \ k=0,1,2,3,\cdots

For any positive real number t, the random variable N_t, which is a discrete random variable, follows a Poisson distribution with mean \lambda t. The following is the probability function.

    \displaystyle (4) \ \ \ \ \ P(N_t=k)=\frac{1}{k!} \ (\lambda t)^k \ e^{-\lambda t} \ \ \ \ \ k=0,1,2,3,\cdots

For a more detailed discussion of why P(N_1=k) and P(N_t=k) are Poisson, see here.

We now discuss the distributions for X_1 and X_n. The random variable X_1 is the waiting time until the first occurrence. It has an exponential distribution with mean 1/\lambda. The following is the probability density function.

    \displaystyle (5) \ \ \ \ \ f_{X_1}(t)=\lambda \ e^{-\lambda t} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ t>0

The random variable X_n has a gamma distribution with shape parameter \beta=n and rate parameter \alpha=\lambda. The following is the probability density function.

    \displaystyle (6) \ \ \ \ \ f_{X_n}(t)=\frac{1}{(n-1)!} \ \lambda^n \ t^{n-1} \ e^{-\lambda t} \ \ \ \ \ \ t>0

The density functions in (5) and (6) are derived from the Poisson distributions in (3) and (4). For example, for X_1>t (the first occurrence taking place after time t) to happen, there is no occurrence in the time interval (0,t). Thus P(X_1>t)=P(N_t=0). Note that N_t is a Poisson random variable with mean \lambda t. The following derives the density function in (5).

    \displaystyle P(X_1>t)=P(N_t=0)=e^{-\lambda t}

    \displaystyle P(X_1 \le t)=1-e^{-\lambda t}

Note that the cumulative distribution function P(X_1 \le t) is that for an exponential distribution. Taking the derivative gives the density function in (5). Extending the same reasoning, for X_n>t (the nth occurrence taking place after time t) to happen, there must be at most n-1 occurrences in the time interval (0,t). Once again, we are relating X_n to the Poisson random variable N_t. More specifically P(X_n>t)=P(N_t \le n-1).

    \displaystyle (7) \ \ \ \ \ P(X_n>t)=P(N_t \le n-1)=\sum \limits_{k=0}^{n-1} \biggl[ \frac{(\lambda t)^k}{k!} \ e^{-\lambda t} \biggr]

    \displaystyle (8) \ \ \ \ \  P(X_n \le t)=1-\sum \limits_{k=0}^{n-1} \biggl[ \frac{(\lambda t)^k}{k!} \ e^{-\lambda t} \biggr]=\sum \limits_{k=n}^{\infty} \biggl[ \frac{(\lambda t)^k}{k!} \ e^{-\lambda t} \biggr]

Taking the derivative of (8) gives the density function in (6). See here for a more detailed discussion of the relation between gamma distribution and Poisson distribution in a Poisson process.
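As a quick numerical sanity check of (6), one can simulate the waiting time X_n. A standard fact about the Poisson process (not derived in this post) is that the gaps between successive occurrences are independent exponential variables with mean 1/\lambda, so X_n can be simulated as the sum of n such gaps. The following Python sketch uses hypothetical values \lambda=2 and n=3 and compares the simulation with the gamma distribution in (6).

```python
import numpy as np
from scipy import stats

# hypothetical values for illustration
lam, n = 2.0, 3
rng = np.random.default_rng(0)

# waiting time to the nth occurrence = sum of n independent exponential gaps
waits = rng.exponential(scale=1/lam, size=(200_000, n)).sum(axis=1)

gamma_n = stats.gamma(a=n, scale=1/lam)        # shape n and rate lam, as in (6)
print(waits.mean(), gamma_n.mean())            # both are approximately n/lam = 1.5
print((waits > 2.0).mean(), gamma_n.sf(2.0))   # simulated vs exact P(X_n > 2)
```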

Evaluating Gamma Survival Function

The relation (7) shows that the gamma survival function at t equals the cumulative distribution function (CDF) of the corresponding Poisson distribution (with mean \lambda t) evaluated at n-1. Consider the following integral.

    \displaystyle (9) \ \ \ \ \ \int_t^\infty \frac{1}{(n-1)!} \ \lambda^n \ x^{n-1} \ e^{-\lambda x} \ dx

The integrand in the above integral is the density function of a gamma distribution (with the shape parameter being a positive integer). The limits of the integral are from t to \infty. Conceptually the integral is identical to (7) since the integral gives P(X_n>t), which is the survival function of the gamma distribution in question. According to (7), the integral in (9) has a closed form as follows.

    \displaystyle (10) \ \ \ \ \ \int_t^\infty \frac{1}{(n-1)!} \ \lambda^n \ x^{n-1} \ e^{-\lambda x} \ dx=\sum \limits_{k=0}^{n-1} \biggl[ \frac{(\lambda t)^k}{k!} \ e^{-\lambda t} \biggr]

The above is a combination of (7) and (9) and is a way to evaluate the right tail of the gamma distribution when the shape parameter is a positive integer n. Instead of memorizing it, we can focus on the thought process.

Given the integral in (9), note the shape parameter n and the rate parameter \lambda. Consider a Poisson process with rate parameter \lambda. In this Poisson process, the random variable N_1, the number of occurrences of the event of interest in a unit time interval, has a Poisson distribution with mean \lambda. The random variable N_t, the number of occurrences of the event of interest in a time interval of length t, has a Poisson distribution with mean \lambda t. Then the integral in (9) is equivalent to P(N_t \le n-1), the probability that there are at most n-1 occurrences in the time interval (0,t).

The above thought process works even if the gamma distribution has no relation to any Poisson process, e.g. it is simply a model for the size of insurance losses. We can still pretend there is a Poisson relationship and obtain a closed form evaluation of the survival function. It must be emphasized that this evaluation of the gamma survival function in closed form is possible only when the shape parameter is a positive integer. If it is not, we then need to evaluate the gamma survival function using software.
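The following Python sketch illustrates the thought process with hypothetical values \lambda=1.5, n=4 and t=2: the gamma survival function in (9), evaluated by software, agrees with the Poisson CDF on the right side of (10).

```python
from scipy import stats

# hypothetical values for illustration
lam, n, t = 1.5, 4, 2.0

gamma_sf = stats.gamma(a=n, scale=1/lam).sf(t)     # left side of (10), computed by software
poisson_cdf = stats.poisson(lam*t).cdf(n - 1)      # right side of (10), the closed form
print(gamma_sf, poisson_cdf)                       # both are approximately 0.6472
```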

Remarks

The discussion here presents two relationships between the gamma distribution and the Poisson distribution. In the first one, mixing conditional Poisson distributions with gamma mixing weights produces a negative binomial distribution. Using the shape parameter \beta and the rate parameter \alpha from (1) produces the negative binomial distribution in (2). This is one interpretation of the two gamma parameters.

In the second interpretation, the rate parameter of the gamma model is intimately tied to the rate parameter of a Poisson process. The interplay between the number of occurrences of the event of interest and the waiting time until the nth occurrence connects the gamma distribution with the Poisson distribution. For a fuller discussion of this interplay, see here.

The gamma distribution is mathematically derived from the gamma function. See here for an introduction to the gamma distribution.

The three assumptions for a Poisson process that are alluded to above allow us to obtain a binomial distribution when dividing a time interval into n subintervals (for a sufficiently large n). As the subintervals get more and more granular, the binomial distributions have the Poisson distribution as their limiting distribution. The derivation from binomial to Poisson is discussed here, here and here.

Practice problems can be found here in a companion blog.


\copyright 2018 – Dan Ma


Relating Binomial and Negative Binomial

The negative binomial distribution has a natural interpretation as a waiting time until the arrival of the rth success (when the parameter r is a positive integer). The waiting time refers to the number of independent Bernoulli trials needed to reach the rth success. This interpretation of the negative binomial distribution gives us a good way of relating it to the binomial distribution. For example, if more than k failures are needed before reaching the rth success, then there can be at most r-1 successes in the first k+r trials. This tells us that the survival function of the negative binomial distribution is the cumulative distribution function (cdf) of a binomial distribution. In this post, we give the details behind this observation. A previous post on the negative binomial distribution is found here.

A random experiment resulting in two distinct outcomes (success or failure) is called a Bernoulli trial (e.g. head or tail in a coin toss, whether or not the birthday of a customer is the first of January, whether an insurance claim is above or below a given threshold etc). Suppose a series of independent Bernoulli trials is performed until reaching the rth success where the probability of success in each trial is p. Let X_r be the number of failures before the occurrence of the rth success. The following is the probability mass function of X_r.

\displaystyle (1) \ \ \ \ P(X_r=k)=\binom{k+r-1}{k} p^r (1-p)^k \ \ \ \ \ \ k=0,1,2,3,\cdots

By definition, the survival function and cdf of X_r are:

\displaystyle (2) \ \ \ \ P(X_r > k)=\sum \limits_{j=k+1}^\infty \binom{j+r-1}{j} p^r (1-p)^j \ \ \ \ \ \ k=0,1,2,3,\cdots

\displaystyle (3) \ \ \ \ P(X_r \le k)=\sum \limits_{j=0}^k \binom{j+r-1}{j} p^r (1-p)^j \ \ \ \ \ \ k=0,1,2,3,\cdots

For each positive integer k, let Y_{r+k} be the number of successes in performing a sequence of r+k independent Bernoulli trials where p is the probability of success. In other words, Y_{r+k} has a binomial distribution with parameters r+k and p.

If the random experiment requires more than k failures to reach the rth success, there are at most r-1 successes in the first k+r trials. Thus the survival function of X_r is the same as the cdf of a binomial distribution. Equivalently, the cdf of X_r is the same as the survival function of a binomial distribution. We have the following:

\displaystyle \begin{aligned}(4) \ \ \ \ P(X_r > k)&=P(Y_{k+r} \le r-1) \\&=\sum \limits_{j=0}^{r-1} \binom{k+r}{j} p^j (1-p)^{k+r-j} \ \ \ \ \ \ k=0,1,2,3,\cdots \end{aligned}

\displaystyle \begin{aligned}(5) \ \ \ \ P(X_r \le k)&=P(Y_{k+r} > r-1) \ \ \ \ \ \ k=0,1,2,3,\cdots \end{aligned}
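A minimal Python check of relation (4), using hypothetical values r=3, p=0.4 and k=5 (scipy's nbinom counts the number of failures before the rth success, which matches X_r here):

```python
from scipy import stats

# hypothetical values for illustration
r, p, k = 3, 0.4, 5

lhs = stats.nbinom(r, p).sf(k)             # P(X_r > k)
rhs = stats.binom(k + r, p).cdf(r - 1)     # P(Y_{k+r} <= r-1)
print(lhs, rhs)                            # the two values agree
```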

Remark
The relation (4) is analogous to the relationship between the Gamma distribution and the Poisson distribution. Recall that a Gamma distribution with shape parameter n and rate parameter \alpha, where n is a positive integer, can be interpreted as the waiting time until the nth change in a Poisson process. Thus, if the nth change takes place after time t, there can be at most n-1 arrivals in the time interval [0,t]. Thus the survival function of this Gamma distribution is the same as the cdf of a Poisson distribution. The relation (4) is analogous to the following relation.

\displaystyle (6) \ \ \ \ \int_t^\infty \frac{\alpha^n}{(n-1)!} \ x^{n-1} \ e^{-\alpha x} \ dx=\sum \limits_{j=0}^{n-1} \frac{e^{-\alpha t} \ (\alpha t)^j}{j!}

A previous post on the negative binomial distribution is found here.

The Negative Binomial Distribution

A counting distribution is a discrete distribution with probabilities only on the nonnegative integers. Such distributions are important in insurance applications since they can be used to model the number of events such as losses to the insured or claims to the insurer. Though playing a prominent role in statistical theory, the Poisson distribution is not appropriate in all situations, since it requires the mean and the variance to be equal. Thus the negative binomial distribution is an excellent alternative to the Poisson distribution, especially in the cases where the observed variance is greater than the observed mean.

The negative binomial distribution arises naturally from a probability experiment of performing a series of independent Bernoulli trials until the occurrence of the rth success where r is a positive integer. From this starting point, we discuss three ways to define the distribution. We then discuss several basic properties of the negative binomial distribution. Emphasis is placed on the close connection between the Poisson distribution and the negative binomial distribution.

________________________________________________________________________

Definitions
We define three versions of the negative binomial distribution. The first two versions arise from the viewpoint of performing a series of independent Bernoulli trials until the rth success where r is a positive integer. A Bernoulli trial is a random experiment with exactly two possible outcomes (success or failure).

Let X_1 be the number of Bernoulli trials required for the rth success to occur where r is a positive integer. Let p be the probability of success in each trial. The following is the probability function of X_1:

\displaystyle (1) \ \ \ \ \ P(X_1=x)= \binom{x-1}{r-1} p^r (1-p)^{x-r} \ \ \ \ \ \ \ x=r,r+1,r+2,\cdots

The idea for (1) is that for X_1=x to happen, there must be r-1 successes in the first x-1 trials and one additional success occurring in the last trial (the xth trial).

A more common version of the negative binomial distribution is the number of Bernoulli trials in excess of r in order to produce the rth success. In other words, we consider the number of failures before the occurrence of the rth success. Let X_2 be this random variable. The following is the probability function of X_2:

\displaystyle (2) \ \ \ \ \ P(X_2=x)=\binom{x+r-1}{x} p^r (1-p)^x \ \ \ \ \ \ \ x=0,1,2,\cdots

The idea for (2) is that there are x+r trials and in the first x+r-1 trials, there are x failures (or equivalently r-1 successes).

In both (1) and (2), the binomial coefficient is defined by

\displaystyle (3) \ \ \ \ \ \binom{y}{k}=\frac{y!}{k! \ (y-k)!}=\frac{y(y-1) \cdots (y-(k-1))}{k!}

where y is a positive integer and k is a nonnegative integer. However, the right-hand-side of (3) can be calculated even if y is not a positive integer. Thus the binomial coefficient \displaystyle \binom{y}{k} can be extended to work for any real number y. However, k must still be a nonnegative integer.

\displaystyle (4) \ \ \ \ \ \binom{y}{k}=\frac{y(y-1) \cdots (y-(k-1))}{k!}

For convenience, we let \displaystyle \binom{y}{0}=1. When the real number y>k-1, the binomial coefficient in (4) can be expressed as:

\displaystyle (5) \ \ \ \ \ \binom{y}{k}=\frac{\Gamma(y+1)}{\Gamma(k+1) \Gamma(y-k+1)}

where \Gamma(\cdot) is the gamma function.

With the more relaxed notion of binomial coefficient, the probability function in (2) above can be defined for any positive real number r. Thus the general version of the negative binomial distribution has two parameters r and p, where r>0 and 0<p<1. The following is its probability function.

\displaystyle (6) \ \ \ \ \ P(X=x)=\binom{x+r-1}{x} p^r (1-p)^x \ \ \ \ \ \ \ x=0,1,2,\cdots

Whenever r in (6) is a real number that is not a positive integer, the interpretation of counting the number of failures until the occurrence of the rth success no longer applies. Instead we can think of it simply as a counting distribution.

The following alternative parametrization of the negative binomial distribution is also useful.

\displaystyle (6a) \ \ \ \ \ P(X=x)=\binom{x+r-1}{x} \biggl(\frac{\alpha}{\alpha+1}\biggr)^r \biggl(\frac{1}{\alpha+1}\biggr)^x \ \ \ \ \ \ \ x=0,1,2,\cdots

The parameters in this alternative parametrization are r and \alpha>0. Clearly, the ratio \frac{\alpha}{\alpha+1} takes the place of p in (6). Unless stated otherwise, we use the parametrization of (6).
________________________________________________________________________

What is negative about the negative binomial distribution?
What is negative about this distribution? What is binomial about this distribution? The name is suggested by the fact that the binomial coefficient in (6) can be rearranged as follows:

\displaystyle \begin{aligned}(7) \ \ \ \ \ \binom{x+r-1}{x}&=\frac{(x+r-1)(x+r-2) \cdots r}{x!} \\&=(-1)^x \frac{(-r-(x-1))(-r-(x-2)) \cdots (-r)}{x!} \\&=(-1)^x \frac{(-r)(-r-1) \cdots (-r-(x-1))}{x!} \\&=(-1)^x \binom{-r}{x} \end{aligned}

The calculation in (7) can be used to verify that (6) is indeed a probability function, that is, all the probabilities sum to 1.

\displaystyle \begin{aligned}(8) \ \ \ \ \ 1&=p^r p^{-r}\\&=p^r (1-q)^{-r} \\&=p^r \sum \limits_{x=0}^\infty \binom{-r}{x} (-q)^x \ \ \ \ \ \ \ \ (8.1) \\&=p^r \sum \limits_{x=0}^\infty (-1)^x \binom{-r}{x} q^x \\&=\sum \limits_{x=0}^\infty \binom{x+r-1}{x} p^r q^x \end{aligned}

In (8), we take q=1-p. The step (8.1) above uses the following formula, known as Newton’s binomial formula.

\displaystyle (9) \ \ \ \ \ (1+t)^w=\sum \limits_{k=0}^\infty \binom{w}{k} t^k

________________________________________________________________________

The Generating Function
By definition, the following is the generating function of the negative binomial distribution, using the probability function in (6):

\displaystyle (10) \ \ \ \ \ g(z)=\sum \limits_{x=0}^\infty \binom{r+x-1}{x} p^r q^x z^x

where q=1-p. Using a similar calculation as in (8), the generating function can be simplified as:

\displaystyle (11) \ \ \ \ \ g(z)=p^r (1-q z)^{-r}=\frac{p^r}{(1-q z)^r}=\frac{p^r}{(1-(1-p) z)^r}; \ \ \ \ \ z<\frac{1}{1-p}

As a result, the moment generating function of the negative binomial distribution is:

\displaystyle (12) \ \ \ \ \ M(t)=\frac{p^r}{(1-(1-p) e^t)^r}; \ \ \ \ \ \ \ t<-ln(1-p)

________________________________________________________________________

Independent Sum

One useful property of the negative binomial distribution is that the independent sum of negative binomial random variables, all with the same parameter p, also has a negative binomial distribution. Let Y=Y_1+Y_2+\cdots+Y_n be an independent sum such that each Y_i has a negative binomial distribution with parameters r_i and p. Then the sum Y has a negative binomial distribution with parameters r=r_1+\cdots+r_n and p.

Note that the generating function of an independent sum is the product of the individual generating functions. The following shows that the product of the individual generating functions is of the same form as (11), thus proving the above assertion.

\displaystyle (13) \ \ \ \ \ h(z)=\frac{p^{\sum \limits_{i=1}^n r_i}}{(1-(1-p) z)^{\sum \limits_{i=1}^n r_i}}
________________________________________________________________________

Mean and Variance
The mean and variance can be obtained from the generating function. From E(X)=g'(1) and E(X^2)=g'(1)+g^{(2)}(1), we have:

\displaystyle (14) \ \ \ \ \ E(X)=\frac{r(1-p)}{p} \ \ \ \ \ \ \ \ \ \ \ \ \ Var(X)=\frac{r(1-p)}{p^2}

Note that Var(X)=\frac{1}{p} E(X)>E(X). Thus when the sample data suggest that the variance is greater than the mean, the negative binomial distribution is an excellent alternative to the Poisson distribution. For example, suppose that the sample mean and the sample variance are 3.6 and 7.1. In exploring the possibility of fitting the data using the negative binomial distribution, we would be interested in the negative binomial distribution with this mean and variance. Then plugging these into (14) produces the negative binomial distribution with r=3.7 and p=0.507.
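As a sketch of the calculation just described, the two equations in (14) can be inverted for p and r from the sample mean 3.6 and sample variance 7.1 quoted above:

```python
# sample statistics quoted in the text
sample_mean, sample_var = 3.6, 7.1

# invert (14): E(X) = r(1-p)/p and Var(X) = r(1-p)/p^2
p = sample_mean / sample_var
r = sample_mean**2 / (sample_var - sample_mean)
print(round(r, 3), round(p, 3))    # approximately r = 3.703 and p = 0.507
```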
________________________________________________________________________

The Poisson-Gamma Mixture
One important property of the negative binomial distribution is that it is a mixture of a family of Poisson distributions with Gamma mixing weights. Thus the negative binomial distribution can be viewed as a generalization of the Poisson distribution. The negative binomial distribution can be viewed as a Poisson distribution where the Poisson parameter is itself a random variable, distributed according to a Gamma distribution. Thus the negative binomial distribution is known as a Poisson-Gamma mixture.

In an insurance application, the negative binomial distribution can be used as a model for claim frequency when the risks are not homogeneous. Let N have a Poisson distribution with parameter \theta; here N can be interpreted as the number of claims in a fixed period of time from an insured in a large pool of insureds. There is uncertainty in the parameter \theta, reflecting the risk characteristic of the insured. Some insureds are poor risks (with large \theta) and some are good risks (with small \theta). Thus the parameter \theta should be regarded as a random variable \Theta. The following is the conditional distribution of N (conditional on \Theta=\theta):

\displaystyle (15) \ \ \ \ \ P(N=n \lvert \Theta=\theta)=\frac{e^{-\theta} \ \theta^n}{n!} \ \ \ \ \ \ \ \ \ \ n=0,1,2,\cdots

Suppose that \Theta has a Gamma distribution with rate parameter \alpha and shape parameter \beta. The following is the probability density function of \Theta.

\displaystyle (16) \ \ \ \ \ g(\theta)=\frac{\alpha^\beta}{\Gamma(\beta)} \theta^{\beta-1} e^{-\alpha \theta} \ \ \ \ \ \ \ \ \ \ \theta>0

Then the joint density of N and \Theta is:

\displaystyle (17) \ \ \ \ \ P(N=n \lvert \Theta=\theta) \ g(\theta)=\frac{e^{-\theta} \ \theta^n}{n!} \ \frac{\alpha^\beta}{\Gamma(\beta)} \theta^{\beta-1} e^{-\alpha \theta}

The unconditional distribution of N is obtained by summing out \theta in (17).

\displaystyle \begin{aligned}(18) \ \ \ \ \ P(N=n)&=\int_0^\infty P(N=n \lvert \Theta=\theta) \ g(\theta) \ d \theta \\&=\int_0^\infty \frac{e^{-\theta} \ \theta^n}{n!} \ \frac{\alpha^\beta}{\Gamma(\beta)} \ \theta^{\beta-1} \ e^{-\alpha \theta} \ d \theta \\&=\int_0^\infty \frac{\alpha^\beta}{n! \ \Gamma(\beta)} \ \theta^{n+\beta-1} \ e^{-(\alpha+1) \theta} d \theta \\&=\frac{\alpha^\beta}{n! \ \Gamma(\beta)} \ \frac{\Gamma(n+\beta)}{(\alpha+1)^{n+\beta}} \int_0^\infty \frac{(\alpha+1)^{n+\beta}}{\Gamma(n+\beta)} \theta^{n+\beta-1} \ e^{-(\alpha+1) \theta} d \theta \\&=\frac{\alpha^\beta}{n! \ \Gamma(\beta)} \ \frac{\Gamma(n+\beta)}{(\alpha+1)^{n+\beta}} \\&=\frac{\Gamma(n+\beta)}{\Gamma(n+1) \ \Gamma(\beta)} \ \biggl( \frac{\alpha}{\alpha+1}\biggr)^\beta \ \biggl(\frac{1}{\alpha+1}\biggr)^n \\&=\binom{n+\beta-1}{n} \ \biggl( \frac{\alpha}{\alpha+1}\biggr)^\beta \ \biggl(\frac{1}{\alpha+1}\biggr)^n \ \ \ \ \ \ \ \ \ n=0,1,2,\cdots \end{aligned}

Note that the integral in the fourth step in (18) is 1.0 since the integrand is the pdf of a Gamma distribution. The above probability function is that of a negative binomial distribution. It is of the same form as (6a). Equivalently, it is also of the form (6) with parameter r=\beta and p=\frac{\alpha}{\alpha+1}.
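A short numerical check of (18), with hypothetical parameters \alpha=2 and \beta=1.5 and a test value n=4: integrate the joint density in (17) over \theta and compare with the negative binomial probability.

```python
from math import exp, factorial, gamma as gamma_fn
from scipy import stats
from scipy.integrate import quad

# hypothetical parameter values for illustration
alpha, beta, n = 2.0, 1.5, 4

# joint density (17): Poisson probability times the gamma density (16)
joint = lambda t: exp(-t) * t**n / factorial(n) * alpha**beta / gamma_fn(beta) * t**(beta - 1) * exp(-alpha * t)

mixture_prob, _ = quad(joint, 0, float("inf"))
nb_prob = stats.nbinom(beta, alpha/(alpha + 1)).pmf(n)
print(mixture_prob, nb_prob)       # both give the same unconditional probability P(N=n)
```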

The variance of the negative binomial distribution is greater than the mean. In a Poisson distribution, the mean equals the variance. Thus the unconditional distribution of N is more dispersed than its conditional distributions. This is a characteristic of mixture distributions. The uncertainty in the parameter variable \Theta has the effect of increasing the unconditional variance of the mixture distribution of N. The variance of a mixture distribution has two components, the weighted average of the conditional variances and the variance of the conditional means. The second component represents the additional variance introduced by the uncertainty in the parameter \Theta (see The variance of a mixture).

________________________________________________________________________

The Poisson Distribution as Limit of Negative Binomial
There is another connection to the Poisson distribution, that is, the Poisson distribution is a limiting case of the negative binomial distribution. We show that the generating function of the Poisson distribution can be obtained by taking the limit of the negative binomial generating function as r \rightarrow \infty. Interestingly, the Poisson distribution is also the limit of the binomial distribution.

In this section, we use the negative binomial parametrization of (6a). Substituting \frac{\alpha}{\alpha+1} for p, the following are the mean, variance, and the generating function for the probability function in (6a):

\displaystyle \begin{aligned}(19) \ \ \ \ \ \ &E(X)=\frac{r}{\alpha} \\&\text{ }\\&Var(X)=\frac{\alpha+1}{\alpha} \ \frac{r}{\alpha}=\frac{r(\alpha+1)}{\alpha^2} \\&\text{ } \\&g(z)=\frac{1}{[1-\frac{1}{\alpha}(z-1)]^r} \ \ \ \ \ \ \ z<\alpha+1 \end{aligned}

Let r go to infinity and \displaystyle \frac{1}{\alpha} go to zero while keeping their product constant. Thus \displaystyle \mu=\frac{r}{\alpha} is constant (this is the mean of the negative binomial distribution). We show the following:

\displaystyle (20) \ \ \ \ \ \lim \limits_{r \rightarrow \infty} [1-\frac{\mu}{r}(z-1)]^{-r}=e^{\mu (z-1)}

The right-hand side of (20) is the generating function of the Poisson distribution with mean \mu. The generating function in the left-hand side is that of a negative binomial distribution with mean \displaystyle \mu=\frac{r}{\alpha}. The following is the derivation of (20).

\displaystyle \begin{aligned}(21) \ \ \ \ \ \lim \limits_{r \rightarrow \infty} [1-\frac{\mu}{r}(z-1)]^{-r}&=\lim \limits_{r \rightarrow \infty} e^{\displaystyle \biggl(ln[1-\frac{\mu}{r}(z-1)]^{-r}\biggr)} \\&=\lim \limits_{r \rightarrow \infty} e^{\displaystyle \biggl(-r \ ln[1-\frac{\mu}{r}(z-1)]\biggr)} \\&=e^{\displaystyle \biggl(\lim \limits_{r \rightarrow \infty} -r \ ln[1-\frac{\mu}{r}(z-1)]\biggr)} \end{aligned}

We now focus on the limit in the exponent.

\displaystyle \begin{aligned}(22) \ \ \ \ \ \lim \limits_{r \rightarrow \infty} -r \ ln[1-\frac{\mu}{r}(z-1)]&=\lim \limits_{r \rightarrow \infty} \frac{ln(1-\frac{\mu}{r} (z-1))^{-1}}{r^{-1}} \\&=\lim \limits_{r \rightarrow \infty} \frac{(1-\frac{\mu}{r} (z-1))^{-1} \ \mu (z-1) \ r^{-2}}{r^{-2}} \\&=\mu (z-1) \end{aligned}

The middle step in (22) uses L’Hopital’s Rule. The result in (20) is obtained by combining (21) and (22).
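The convergence in (20) can also be seen numerically. The sketch below uses a hypothetical mean \mu=2.5 and test point k=3; it holds the negative binomial mean fixed at \mu while letting r grow, which in the parametrization of (6) means p = r/(r+\mu).

```python
from scipy import stats

# hypothetical values for illustration
mu, k = 2.5, 3

for r in [5, 50, 500, 5000]:
    p = r / (r + mu)               # keeps the negative binomial mean equal to mu
    print(r, stats.nbinom(r, p).pmf(k))

print("Poisson limit:", stats.poisson(mu).pmf(k))   # the pmf values approach this
```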

________________________________________________________________________

Reference

  1. Klugman S.A., Panjer H. H., Wilmot G. E. Loss Models, From Data to Decisions, Second Edition., Wiley-Interscience, a John Wiley & Sons, Inc., New York, 2004

The mean excess loss function

We take a closer look at the mean excess loss function. We start with the following example:

Example
Suppose that an entity is exposed to a random loss X. An insurance policy offers protection against this loss. Under this policy, payment is made to the insured entity subject to a deductible d>0, i.e. when a loss is less than d, no payment is made to the insured entity, and when the loss exceeds d, the insured entity is reimbursed for the amount of the loss in excess of the deductible d. Consider the following two questions:

\text{ }

  1. Of all the losses that are eligible to be reimbursed by the insurer, what is the average payment made by the insurer to the insured?
  2. What is the average payment made by the insurer to the insured entity?

\text{ }

The two questions look similar. The difference between the two questions is subtle but important. In the first question, the average is computed over all losses that are eligible for reimbursement (i.e., the loss exceeds the deductible). This is the average amount the insurer is expected to pay in the event that a payment in excess of the deductible is required to be made. So this average is a per payment average.

In the second question, the average is calculated over all losses (regardless of sizes). When the loss does not reach the deductible, the payment is considered zero and when the loss is in excess of the deductible, the payment is X-d. Thus the average is the average amount the insurer has to pay per loss. So the second question is about a per loss average.

—————————————————————————————————————-

The Mean Excess Loss Function
The average in the first question is called the mean excess loss function. Suppose X is the random loss and d>0. The mean excess loss variable is the conditional variable X-d \ \lvert X>d and the mean excess loss function e_X(d) is defined by:

\text{ }

\displaystyle (1) \ \ \ \ \ e_X(d)=E(X-d \lvert X>d)

\text{ }

In an insurance context, the mean excess loss function is the average payment in excess of a threshold given that the loss exceeds the threshold. In a mortality context, the mean excess loss function is called the mean residual life function (or the complete expectation of life) and can be interpreted as the expected remaining time until death given that the life in question has survived to age d.

The mean excess loss function is computed by the following depending on whether the loss variable is continuous or discrete.

\text{ }

\displaystyle (2) \ \ \ \ \ e_X(d)=\frac{\int_d^\infty (x-d) \ f_X(x) \ dx}{S_X(d)}

\displaystyle (3) \ \ \ \ \ e_X(d)=\frac{\sum \limits_{x>d} (x-d) \ P(X=x)}{S_X(d)}

\text{ }

The mean excess loss function e_X(d) is defined only when the integral or the sum converges. The following is an equivalent calculation of e_X(d) that may be easier to use in some circumstances.

\text{ }

\displaystyle (4) \ \ \ \ \ e_X(d)=\frac{\int_d^\infty S_X(x) \ dx}{S_X(d)}

\displaystyle (5a) \ \ \ \ \ e_X(d)=\frac{\sum \limits_{x \ge d} S_X(x) }{S_X(d)}

\displaystyle (5b) \ \ \ \ \ e_X(d)=\frac{\biggl(\sum \limits_{x>d} S_X(x)\biggr)+(w+1-d) S_X(w) }{S_X(d)}

\text{ }

In both (5a) and (5b), we assume that the support of X is the set of nonnegative integers. In (5a), we assume that the deductible d is a positive integer. In (5b), the deductible d is free to be any positive number and w is the largest integer such that w \le d. The formulation (4) is obtained by using integration by parts (also see theorem 3.1 in [1]). The formulations of (5a) and (5b) are a result of applying theorem 3.2 in [1].

The mean excess loss function provides information about the tail weight of a distribution, see the previous post The Pareto distribution. Also see Example 3 below.
—————————————————————————————————————-
The Mean in Question 2
The average that we need to compute is the mean of the following random variable. Note that (a)_+ is the function that assigns the value of a whenever a>0 and otherwise assigns the value of zero.

\text{ }

\displaystyle (6) \ \ \ \ \ (X-d)_+=\left\{\begin{matrix}0&\ X<d\\{X-d}&\ X \ge d \end{matrix}\right.

\text{ }

The mean E((X-d)_+) is calculated over all losses. When the loss is less than the deductible d, the insurer has no obligation to make a payment to the insured and the payment is assumed to be zero in the calculation of E[(X-d)_+]. The following is how this expected value is calculated depending on whether the loss X is continuous or discrete.

\text{ }

\displaystyle (7) \ \ \ \ \ E((X-d)_+)=\int_d^\infty (x-d) \ f_X(x) \ dx

\displaystyle (8) \ \ \ \ \ E((X-d)_+)=\sum \limits_{x>d} (x-d) \ P(X=x)

\text{ }

Based on the definitions, the following is how the two averages are related.

\displaystyle E[(X-d)_+]=e_X(d) \ [1-F_X(d)] \ \ \ \text{or} \ \ \ E[(X-d)_+]=e_X(d) \ S_X(d)

—————————————————————————————————————-
The Limited Expected Value
For a given positive constant u, the limited loss variable is defined by

\text{ }

\displaystyle (9) \ \ \ \ \ X \wedge u=\left\{\begin{matrix}X&\ X<u\\{u}&\ X \ge u \end{matrix}\right.

\text{ }

The expected value E(X \wedge u) is called the limited expected value. In an insurance application, u is a policy limit that sets a maximum on the benefit to be paid. The following is how the limited expected value is calculated depending on whether the loss X is continuous or discrete.

\text{ }

\displaystyle (10) \ \ \ \ \ E(X \wedge u)=\int_{-\infty}^u x \ f_X(x) \ dx+u \ S_X(u)

\displaystyle (11) \ \ \ \ \ E(X \wedge u)=\biggl(\sum \limits_{x < u} x \ P(X=x)\biggr)+u \ S_X(u)

\text{ }

Interestingly, we have the following relation.

\text{ }

\displaystyle (X-d)_+ + (X \wedge d)=X \ \ \ \ \ \ \text{and} \ \ \ \ \ \ E[(X-d)_+] + E(X \wedge d)=E(X)

\text{ }

The above statement indicates that purchasing a policy with a deductible d and another policy with a policy maximum d is equivalent to buying full coverage.

Another way to interpret X \wedge d is that it is the amount of loss that is eliminated by having a deductible in the insurance policy. If the insurance policy pays the loss in full, then the insurance payment is X and the expected amount the insurer is expected to pay is E(X). By having a deductible provision in the policy, the insurer is now only liable for the amount (X-d)_+ and the amount the insurer is expected to pay per loss is E[(X-d)_+]. Consequently E(X \wedge d) is the expected amount of the loss that is eliminated by the deductible provision in the policy. The following summarizes this observation.

\text{ }

\displaystyle (X \wedge d)=X-(X-d)_+ \ \ \ \ \ \ \text{and} \ \ \ \ \ \ E(X \wedge d)=E(X)-E[(X-d)_+]
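A quick numerical illustration of the identity E(X \wedge d) + E[(X-d)_+] = E(X), using a hypothetical exponential loss with mean 10 and deductible d=4:

```python
from scipy import stats
from scipy.integrate import quad

# hypothetical loss distribution and deductible
X, d = stats.expon(scale=10), 4.0

limited = quad(lambda x: x * X.pdf(x), 0, d)[0] + d * X.sf(d)          # E(X ^ d) as in (10)
excess = quad(lambda x: (x - d) * X.pdf(x), d, float("inf"))[0]        # E[(X-d)_+] as in (7)
print(limited + excess, X.mean())   # the two pieces add back to E(X) = 10
```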

—————————————————————————————————————-
Example 1
Let the loss random variable X be exponential with pdf f(x)=\alpha e^{-\alpha x}. We have E(X)=\frac{1}{\alpha}. Because of the memoryless property of the exponential distribution, given that a loss exceeds the deductible, the mean payment in excess of the deductible is the same as the original mean. Thus e_X(d)=\frac{1}{\alpha}. Then the per loss average is:

\displaystyle E[(X-d)_+]=e_X(d) \ S(d) = \frac{e^{-\alpha d}}{\alpha}

\text{ }

Thus, with a deductible provision in the policy, the insurer is expected to pay \displaystyle \frac{e^{-\alpha d}}{\alpha} per loss instead of \displaystyle \frac{1}{\alpha}. Thus the expected amount of loss eliminated (from the insurer’s point of view) is \displaystyle E(X \wedge d)=\frac{1-e^{-\alpha d}}{\alpha}.

\text{ }

Example 2
Suppose that the loss variable has a Gamma distribution where the rate parameter is \alpha and the shape parameter is n=2. The pdf is \displaystyle g(x)=\alpha^2 \ x \ e^{-\alpha x}. The insurer’s expected payment without the deductible is E(X)=\frac{2}{\alpha}. The survival function S(x) is:

\displaystyle S(x)=e^{-\alpha x}(1+\alpha x)

\text{ }

For the losses that exceed the deductible, the insurer’s expected payment is:

\displaystyle \begin{aligned}e_X(d)&=\frac{\int_d^{\infty} S(x) \ dx}{S(d)}\\&=\frac{\int_d^{\infty} e^{-\alpha x}(1+\alpha x) \ dx}{e^{-\alpha d}(1+\alpha d)} \\&=\frac{\frac{e^{-\alpha d}}{\alpha}+d e^{-\alpha d}+\frac{e^{-\alpha d}}{\alpha}}{e^{-\alpha d}(1+\alpha d)} \\&=\frac{\frac{2}{\alpha}+d}{1+\alpha d} \end{aligned}

\text{ }

Then the insurer’s expected payment per loss is E[(X-d)_+]:

\displaystyle \begin{aligned}E[(X-d)_+]&=e_X(d) \ S(d) \\&=\frac{\frac{2}{\alpha}+d}{1+\alpha d} \ \ e^{-\alpha d}(1+\alpha d) \\&=e^{-\alpha d} \ \biggl(\frac{2}{\alpha}+d\biggr) \end{aligned}

\text{ }

With a deductible in the policy, the following is the expected amount of loss eliminated (from the insurer’s point of view).

\displaystyle \begin{aligned}E[X \wedge d]&=E(X)-E[(X-d)_+] \\&=\frac{2}{\alpha}-e^{-\alpha d} \ \biggl(\frac{2}{\alpha}+d\biggr) \\&=\frac{2}{\alpha}\biggl(1-e^{-\alpha d}\biggr)-d e^{-\alpha d} \end{aligned}
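The closed forms in Example 2 can be verified numerically. The sketch below uses hypothetical values \alpha=0.5 and d=3 and computes e_X(d) directly from definition (2):

```python
from math import exp
from scipy import stats
from scipy.integrate import quad

# hypothetical values for illustration
a, d = 0.5, 3.0                          # rate parameter alpha and deductible d
X = stats.gamma(a=2, scale=1/a)          # gamma loss with shape 2 and rate alpha

e_d = quad(lambda x: (x - d) * X.pdf(x), d, float("inf"))[0] / X.sf(d)   # definition (2)
print(e_d, (2/a + d) / (1 + a*d))          # closed form (2/alpha + d)/(1 + alpha d)
print(e_d * X.sf(d), exp(-a*d) * (2/a + d))  # per loss average E[(X-d)_+]
```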

\text{ }

Example 3
Suppose the loss variable X has a Pareto distribution with the following pdf:

\text{ }

\displaystyle f_X(x)=\frac{\beta \ \alpha^\beta}{(x+\alpha)^{\beta+1}} \ \ \ \ \ x>0

\text{ }

If the insurance policy is to pay the full loss, then the insurer’s expected payment per loss is \displaystyle E(X)=\frac{\alpha}{\beta-1} provided that the shape parameter \beta is larger than one.

The mean excess loss function of the Pareto distribution has a linear form that is increasing (see the previous post The Pareto distribution). The following is the mean excess loss function:

\text{ }

\displaystyle e_X(d)=\frac{1}{\beta-1} \ d +\frac{\alpha}{\beta-1}=\frac{1}{\beta-1} \ d +E(X)

\text{ }

If the loss is modeled by such a distribution, this is an uninsurable risk! First of all, the higher the deductible, the larger the expected payment if such a large loss occurs. The expected payment for large losses is always the unmodified expected value E(X) plus a component that increases with the deductible d.

The increasing mean excess loss function is an indication that the Pareto distribution is a heavy tailed distribution. In general, an increasing mean excess loss function is an indication of a heavy tailed distribution. On the other hand, a decreasing mean excess loss function indicates a light tailed distribution. The exponential distribution has a constant mean excess loss function and is considered a medium tailed distribution.

Reference

  1. Bowers N. L., Gerber H. U., Hickman J. C., Jones D. A., Nesbit C. J. Actuarial Mathematics, First Edition., The Society of Actuaries, Itasca, Illinois, 1986
  2. Klugman S.A., Panjer H. H., Wilmot G. E. Loss Models, From Data to Decisions, Second Edition., Wiley-Interscience, a John Wiley & Sons, Inc., New York, 2004

The Pareto distribution

This post takes a closer look at the Pareto distribution. A previous post demonstrates that the Pareto distribution is a mixture of exponential distributions with Gamma mixing weights. We now elaborate more on this point. Through looking at various properties of the Pareto distribution, we also demonstrate that the Pareto distribution is a heavy tailed distribution. In insurance applications, heavy-tailed distributions are essential tools for modeling extreme loss, especially for the more risky types of insurance such as medical malpractice insurance. In financial applications, the study of heavy-tailed distributions provides information about the potential for financial fiasco or financial ruin. The Pareto distribution is a great way to open up a discussion on heavy-tailed distribution.

\text{ }

Update (11/12/2017). This blog post introduces a catalog of many other parametric severity models in addition to Pareto distribution. The link to the catalog is found in that blog post. To go there directly, this is the link.

Update (10/29/2017). This blog post has updated information on Pareto distribution. It also has links to more detailed contents on Pareto distribution in two companion blogs. These links are also given here: more detailed post on Pareto, Pareto Type I and Type II and practice problems on Pareto.

\text{ }

The continuous random variable X with positive support is said to have the Pareto distribution if its probability density function is given by

\displaystyle f_X(x)=\frac{\beta \ \alpha^\beta}{(x+\alpha)^{\beta+1}} \ \ \ \ \ x>0

where \alpha>0 and \beta>0 are constants. The constant \alpha is the scale parameter and \beta is the shape parameter. The following lists several other distributional quantities of the Pareto distribution, which will be used in the discussion below.

\displaystyle S_X(x)=\frac{\alpha^\beta}{(x+\alpha)^\beta}=\biggl(\frac{\alpha}{x+\alpha}\biggr)^\beta \ \ \ \ \ \ \ \ \ \text{survival function}

\displaystyle F_X(x)=1-\biggl(\frac{\alpha}{x+\alpha}\biggr)^\beta \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{distribution function}

\displaystyle E(X)=\frac{\alpha}{\beta-1} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{mean},\beta>1

\displaystyle E(X^2)=\frac{2 \alpha^2}{(\beta-1)(\beta-2)} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{second moment},\beta>2

\displaystyle Var(X)=\frac{\alpha^2 \beta}{(\beta-1)^2(\beta-2)} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{variance},\beta>2

\displaystyle E(X^k)=\frac{k! \alpha^k}{(\beta-1) \cdots (\beta-k)} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{higher moments},\beta>k, \text{ k positive integer}

——————————————————————————————————————
The Pareto Distribution as a Mixture
The Pareto pdf indicated above can be obtained by mixing exponential distributions using Gamma distributions as weights. Suppose that X follows an exponential distribution (conditional on a parameter value \theta). The following is the conditional pdf of X.

\displaystyle f_{X \lvert \Theta}(x \lvert \theta)=\theta e^{-\theta x} \ \ \ x>0

There is uncertainty in the parameter, which can be viewed as a random variable \Theta. Suppose that \Theta follows a Gamma distribution with rate parameter \alpha and shape parameter \beta. The following is the pdf of \Theta.

\displaystyle f_{\Theta}(\theta)=\frac{\alpha^\beta}{\Gamma(\beta)} \ \theta^{\beta-1} \ e^{-\alpha \theta} \ \ \ \theta>0

The unconditional pdf of X is the weighted average of the conditional pdfs with the Gamma pdf as weight.

\displaystyle \begin{aligned}f_X(x)&=\int_0^{\infty} f_{X \lvert \Theta}(x \lvert \theta) \ f_\Theta(\theta) \ d \theta \\&=\int_0^{\infty} \biggl[\theta \ e^{-\theta x}\biggr] \ \biggl[\frac{\alpha^\beta}{\Gamma(\beta)} \ \theta^{\beta-1} \ e^{-\alpha \theta}\biggr] \ d \theta \\&=\int_0^{\infty} \frac{\alpha^\beta}{\Gamma(\beta)} \ \theta^\beta \ e^{-\theta(x+\alpha)} \ d \theta \\&=\frac{\alpha^\beta}{\Gamma(\beta)} \frac{\Gamma(\beta+1)}{(x+\alpha)^{\beta+1}} \int_0^{\infty} \frac{(x+\alpha)^{\beta+1}}{\Gamma(\beta+1)} \ \theta^{\beta+1-1} \ e^{-\theta(x+\alpha)} \ d \theta \\&=\frac{\beta \ \alpha^\beta}{(x+\alpha)^{\beta+1}} \end{aligned}

In the following discussion, X will denote the Pareto distribution as defined above. As will be shown below, the exponential distribution is considered a light tailed distribution. Yet mixing exponentials produces the heavy tailed Pareto distribution. Mixture distributions tend to be heavy tailed (see [1]). The Pareto distribution is a handy example.
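The mixture calculation above can be checked numerically. In the Python sketch below (hypothetical values \alpha=2, \beta=3 and a test point x=1.7), the Pareto distribution of this post corresponds to scipy's lomax distribution with shape \beta and scale \alpha.

```python
from math import exp, gamma as gamma_fn
from scipy import stats
from scipy.integrate import quad

# hypothetical parameter values for illustration
alpha, beta, x = 2.0, 3.0, 1.7

# exponential pdf weighted by the gamma density of Theta
integrand = lambda t: t * exp(-t * x) * alpha**beta / gamma_fn(beta) * t**(beta - 1) * exp(-alpha * t)
mixture_pdf, _ = quad(integrand, 0, float("inf"))

pareto_pdf = stats.lomax(c=beta, scale=alpha).pdf(x)   # beta*alpha^beta/(x+alpha)^(beta+1)
print(mixture_pdf, pareto_pdf)                         # the two agree
```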

——————————————————————————————————————

The Tail Weight of the Pareto Distribution
When a distribution puts significantly more probability on larger values, the distribution is said to be heavy tailed (or said to have a larger tail weight). According to [1], there are four ways to look for indications that a distribution is heavy tailed.

  1. Existence of moments.
  2. Speed of decay of the survival function to zero.
  3. Hazard rate function.
  4. Mean excess loss function.

Existence of moments
Note that the existence of the Pareto higher moments E(X^k) is capped by the shape parameter \beta. In particular, the mean E(X)=\frac{\alpha}{\beta-1} does not exist for \beta \le 1. If the Pareto distribution is to model a random loss, and if the mean is infinite (when \beta \le 1), the risk is uninsurable! On the other hand, when \beta \le 2, the Pareto variance does not exist. This shows that for a heavy tailed distribution, the variance may not be a good measure of risk.

For a given random variable Z, the existence of all moments E(Z^k), for all positive integers k, indicates a light (right) tail for the distribution of Z. If the moments exist only up to a certain positive integer k, it is an indication that the distribution has a heavy right tail. In contrast, the exponential distribution and the Gamma distribution are considered to have light tails since all of their moments exist.

The speed of decay of the survival function
The survival function S_X(x)=P(X>x) captures the probability of the tail of a distribution. If the survival function of a distribution decays slowly to zero (equivalently, the cdf rises slowly to one), it is another indication that the distribution is heavy tailed.

The following is a comparison of a Pareto survival function and an exponential survival function. The Pareto survival function has parameters (\alpha=2 and \beta=2). The two survival functions are set to have the same 75th percentile (x=2).

\displaystyle \begin{pmatrix} \text{x}&\text{Pareto }S_X(x)&\text{Exponential }S_Y(x)&\displaystyle \frac{S_X(x)}{S_Y(x)} \\\text{ }&\text{ }&\text{ }&\text{ } \\{2}&0.25&0.25&1 \\{10}&0.027777778&0.000976563&28  \\{20}&0.008264463&9.54 \times 10^{-7}&8666 \\{30}&0.00390625&9.31 \times 10^{-10}&4194304 \\{40}&0.002267574&9.09 \times 10^{-13}&2.49 \times 10^{9} \\{60}&0.001040583&8.67 \times 10^{-19}&1.20 \times 10^{15} \\{80}&0.000594884&8.27 \times 10^{-25}&7.19 \times 10^{20} \\{100}&0.000384468&7.89 \times 10^{-31}&4.87 \times 10^{26} \\{120}&0.000268745&7.52 \times 10^{-37}&3.57 \times 10^{32} \\{140}&0.000198373&7.17 \times 10^{-43}&2.76 \times 10^{38} \\{160}&0.000152416&6.84 \times 10^{-49}&2.23 \times 10^{44} \\{180}&0.000120758&6.53 \times 10^{-55}&1.85 \times 10^{50}  \end{pmatrix}

Note that at large values, the Pareto right tail retains much more probability. This is also confirmed by the ratio of the two survival functions, with the ratio approaching infinity. If a random loss is a heavy tailed phenomenon that is described by the above Pareto survival function (\alpha=2 and \beta=2), then the above exponential survival function is woefully inadequate as a model for this phenomenon even though it may be a good model for describing the loss up to the 75th percentile. It is the large right tail that is problematic (and catastrophic)!
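A few rows of the above comparison can be reproduced with the following sketch (scipy's lomax plays the role of the Pareto distribution used here; the exponential is matched at the 75th percentile, so its rate is \ln(4)/2):

```python
import math
from scipy import stats

pareto = stats.lomax(c=2, scale=2)          # Pareto with alpha = 2 and beta = 2
lam = math.log(4) / 2                       # exponential matched so that S(2) = 0.25
expo = stats.expon(scale=1/lam)

for x in [2, 10, 20, 30, 40]:
    sp, se = pareto.sf(x), expo.sf(x)
    print(x, sp, se, sp/se)                 # the ratio grows without bound
```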

Since the Pareto survival function and the exponential survival function have closed forms, we can also look at their ratio.

\displaystyle \frac{\text{pareto survival}}{\text{exponential survival}}=\frac{\displaystyle \frac{\alpha^\beta}{(x+\alpha)^\beta}}{e^{-\lambda x}}=\frac{\alpha^\beta e^{\lambda x}}{(x+\alpha)^\beta} \longrightarrow \infty \ \text{ as } x \longrightarrow \infty

In the above ratio, the numerator has an exponential function with a positive quantity in the exponent, while the denominator has a polynomial in x. This ratio goes to infinity as x \rightarrow \infty.

In general, whenever the ratio of two survival functions diverges to infinity, it is an indication that the distribution in the numerator of the ratio has a heavier tail. When the ratio goes to infinity, the survival function in the numerator is said to decay slowly to zero as compared to the denominator. We have the same conclusion in comparing the Pareto distribution and the Gamma distribution, that the Pareto is heavier in the tails. In comparing the tail weight, it is equivalent to consider the ratio of density functions (due to L’Hopital’s rule).

\displaystyle \lim_{x \rightarrow \infty} \frac{S_1(x)}{S_2(x)}=\lim_{x \rightarrow \infty} \frac{S_1^{'}(x)}{S_2^{'}(x)}=\lim_{x \rightarrow \infty} \frac{f_1(x)}{f_2(x)}

The Hazard Rate Function
The hazard rate function h_X(x) of a random variable X is defined as the ratio of the density function and the survival function.

\displaystyle h_X(x)=\frac{f_X(x)}{S_X(x)}

The hazard rate is called the force of mortality in a life contingency context and can be interpreted as the rate that a person aged x will die in the next instant. The hazard rate is called the failure rate in reliability theory and can be interpreted as the rate that a machine will fail at the next instant given that it has been functioning for x units of time. The following is the hazard rate function of the Pareto distribution.

\displaystyle \begin{aligned}h_X(x)&=\frac{f_X(x)}{S_X(x)} \\&=\frac{\beta}{x+\alpha}  \end{aligned}

The interesting point is that the Pareto hazard rate function is a decreasing function in x. Another indication of heavy tail weight is that the distribution has a decreasing hazard rate function. One key characteristic of the hazard rate function is that it can generate the survival function.

\displaystyle S_X(x)=e^{\displaystyle -\int_0^x h_X(t) \ dt}

Thus if the hazard rate function is decreasing in x, then the survival function will decay more slowly to zero. To see this, let H_X(x)=\int_0^x h_X(t) \ dt, which is called the cumulative hazard rate function. As indicated above, the survival function can be generated by e^{-H_X(x)}. If h_X(x) is decreasing in x, H_X(x) is smaller than H_Y(x) where h_Y(x) is constant in x or increasing in x. Consequently e^{-H_X(x)} is decaying to zero much more slowly than e^{-H_Y(x)}.

In contrast, the exponential distribution has a constant hazard rate function, making it a medium tailed distribution. As explained above, any distribution having an increasing hazard rate function is a light tailed distribution.

The Mean Excess Loss Function
Suppose that a property owner is exposed to a random loss Y. The property owner buys an insurance policy with a deductible d such that the insurer will pay a claim in the amount of Y-d if a loss occurs with Y>d. The insurer will pay nothing if the loss is below the deductible. Whenever a loss is above d, what is the average claim the insurer will have to pay? This is one way to look at the mean excess loss function, which represents the expected excess loss over a threshold conditional on the event that the threshold has been exceeded.

Given a loss variable Y and given a deductible d>0, the mean excess loss function is e_Y(d)=E(Y-d \lvert Y>d). For a continuous random variable, it is computed by

\displaystyle e_Y(d)=\frac{\int_d^{\infty} (y-d) \ f_Y(y) \ dy}{S_Y(d)}

Applying the technique of integration by parts produces the following formula:

\displaystyle e_Y(d)=\frac{\int_d^{\infty} S_Y(y) \ dy}{S_Y(d)}

It turns out that the mean excess loss function is one more way to examine the tail property of a distribution. The following is the mean excess loss function of the Pareto distribution:

\displaystyle e_X(d)=\frac{d+\alpha}{\beta-1}=\frac{1}{\beta-1} \ d + \frac{\alpha}{\beta-1}

Note that the Pareto mean excess loss function is a linear increasing function of the deductible d. This means that the larger the deductible, the larger the expected claim if such a large loss occurs! If a random loss is modeled by such a distribution, it is a catastrophic risk situation. In general, an increasing mean excess loss function is an indication of a heavy tailed distribution. On the other hand, a decreasing mean excess loss function indicates a light tailed distribution. The exponential distribution has a constant mean excess loss function and is considered a medium tailed distribution.
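As a final sketch (hypothetical values \alpha=2, \beta=3 and deductible d=5), the linear form of the Pareto mean excess loss function can be confirmed numerically using the survival-function formula above:

```python
from scipy import stats
from scipy.integrate import quad

# hypothetical values; beta must exceed 1 for the mean excess loss to exist
alpha, beta, d = 2.0, 3.0, 5.0
X = stats.lomax(c=beta, scale=alpha)        # Pareto with scale alpha and shape beta

e_d = quad(lambda y: X.sf(y), d, float("inf"))[0] / X.sf(d)
print(e_d, (d + alpha) / (beta - 1))        # both equal the linear form (d+alpha)/(beta-1)
```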

——————————————————————————————————————
The Pareto distribution has many economic applications. Since it is a heavy tailed distribution, it is a good candidate for modeling income above a theoretical value and the distribution of insurance claims above a threshold value.

——————————————————————————————————————

Reference

  1. Klugman S.A., Panjer H. H., Wilmot G. E. Loss Models, From Data to Decisions, Second Edition., Wiley-Interscience, a John Wiley & Sons, Inc., New York, 2004