# Gamma distribution and Poisson distribution

The gamma distribution is important in many statistical applications. This post discusses the connections between the gamma distribution and the Poisson distribution. The following is the probability density function of the gamma distribution.

$\displaystyle (1) \ \ \ \ \ f(x)=\frac{1}{\Gamma(\beta)} \ \alpha^\beta \ x^{\beta-1} \ e^{-\alpha x} \ \ \ \ \ \ x>0$

The numbers $\alpha$ and $\beta$, both positive, are fixed constants and are the parameters of the distribution. The symbol $\Gamma(\cdot)$ is the gamma function. This density function describes how the potential gamma observations are distributed across the positive x-axis. Baked into this gamma probability density function are two pieces of information about the Poisson distribution. We can simply read off the information from the parameters $\alpha$ and $\beta$ in the density function.

Poisson-Gamma Mixture

For a reason that will be given shortly, the parameters $\alpha$ and $\beta$ in (1) give rise to a negative binomial distribution. The following is the probability function of this negative binomial distribution.

$\displaystyle (2) \ \ \ \ \ P(N=n)=\binom{\beta+n-1}{n} \ \biggl(\frac{\alpha}{1+\alpha} \biggr)^\beta \ \biggl(\frac{1}{1+\alpha} \biggr)^n \ \ \ \ \ \ n=0,1,2,3,\cdots$

If the parameter $\beta$ is a positive integer, then (2) has a nice interpretation in the context of a series of independent Bernoulli trials. Consider performing a series of independent trials where each trial has one of two distinct outcomes (called success or failure). Assume that the probability of a success in each trial is $p=\alpha/(1+\alpha)$. In this case, the quantity in (2) is the probability of having $n$ failures before the occurrence of the $\beta$th success.

Note that when $\beta$ is a positive integer, the binomial coefficient $\binom{\beta+n-1}{n}$ has the usual calculation $\frac{(\beta+n-1)!}{n! (\beta-1)!}$. When $\beta$ is not an integer but is only a positive number, the binomial coefficient $\binom{\beta+n-1}{n}$ is calculated as follows:

$\displaystyle \binom{\beta+n-1}{n}=\frac{(\beta+n-1) (\beta+n-2) \cdots (\beta+1) \beta}{n!}$

For this more general calculation, $\beta$ does not have to be a positive integer; it only needs to be a positive real number. In that case the distribution in (2) no longer has a natural interpretation in terms of performing a series of independent Bernoulli trials. It is simply a counting distribution, i.e. a discrete random variable modeling the number of occurrences of a type of random events.

What does this have to do with gamma distribution and Poisson distribution? When a conditional random variable $X \lvert \lambda$ has a Poisson distribution such that its mean $\lambda$ is an unknown random quantity but follows a gamma distribution with parameters $\beta$ and $\alpha$ as described in (1), the unconditional distribution for $X$ has a negative binomial distribution as described in (2). In other words, the mixture of Poisson distributions with gamma mixing weights is a negative binomial distribution.
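This mixture result can be checked with a quick simulation sketch using numpy (the parameter values $\alpha=2$ and $\beta=3$ are illustrative; the negative binomial probability is computed directly from formula (2)):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
alpha, beta = 2.0, 3.0   # illustrative gamma rate and shape parameters

# Poisson-gamma mixture: lambda ~ gamma(shape beta, rate alpha), X | lambda ~ Poisson(lambda)
lam = rng.gamma(shape=beta, scale=1.0 / alpha, size=200_000)
x = rng.poisson(lam)

def nb_pmf(n, beta, alpha):
    # negative binomial probability computed directly from formula (2),
    # using the generalized binomial coefficient beta(beta+1)...(beta+n-1)/n!
    coeff = 1.0
    for k in range(n):
        coeff *= (beta + k)
    coeff /= factorial(n)
    return coeff * (alpha / (1 + alpha)) ** beta * (1 / (1 + alpha)) ** n

for n in range(4):
    # empirical mixture frequency versus the negative binomial probability
    print(n, round(float(np.mean(x == n)), 4), round(nb_pmf(n, beta, alpha), 4))
```

The empirical frequencies of the simulated mixture should agree with the negative binomial probabilities up to sampling error.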

There is an insurance interpretation of the Poisson-gamma mixture. Suppose that in a large pool of insureds, the annual claim frequency of an insured is a Poisson distribution with mean $\lambda$. The quantity $\lambda$ varies from insured to insured but is supposed to follow a gamma distribution. Here we have a family of conditional Poisson distributions where each one is conditional on the characteristics of the particular insured in question (a low risk insured has a low $\lambda$ value and a high risk insured has a high $\lambda$ value). The “average” of the conditional Poisson distributions will be a negative binomial distribution (using the gamma distribution to weight the parameter $\lambda$). Thus the claim frequency for an “average” insured in the pool should be modeled by a negative binomial distribution. For a randomly selected insured from the pool, if we do not know anything about this insured, we can use the unconditional negative binomial distribution to model the claim frequency.

A detailed discussion of the negative binomial distribution is found here and here. The notion of mixture distributions and Poisson-gamma mixture in particular are discussed here. Many distributions applicable in actuarial applications are mixture distributions (see here for examples).

Waiting Times in a Poisson Process

The parameter $\beta$ in (1) is called the shape parameter while the parameter $\alpha$ is called the rate parameter. The reason for the name rate parameter will be made clear shortly. For this interpretation of the gamma distribution to work, the shape parameter $\beta$ must be a positive integer.

The gamma distribution as described in (1) is intimately connected to a Poisson process through the rate parameter $\alpha$. In a Poisson process, an event of a particular type occurs at random at the rate of $\alpha$ per unit time. Two questions are of interest. How many times will this event occur in a given time interval? How long do we have to wait to observe the first occurrence, the second occurrence and so on? The gamma distribution as described in (1) gives an answer to the second question when $\beta$ is a positive integer.

A counting process is a random experiment that observes the random occurrences of a certain type of events of interest in a time period. A Poisson process is a counting process that satisfies three simplifying assumptions that deal with independence and uniformity in time.

A good example of a Poisson process is the well known experiment in radioactivity conducted by Rutherford and Geiger in 1910. In this experiment, $\alpha$-particles were emitted from a polonium source and the number of $\alpha$-particles was counted during intervals of 7.5 seconds (2,608 such time intervals were observed). In these 2,608 intervals, a total of 10,097 particles were observed. Thus the mean count per period of 7.5 seconds is 10097 / 2608 = 3.87. In this experiment, $\alpha$-particles are observed at a rate of 3.87 per unit time (7.5 seconds).

In general, consider a Poisson process with a rate of $\lambda$ per unit time, i.e. the event that is of interest occurs at the rate of $\lambda$ per unit time. There are two random variables of interest. One is $N_t$ which is the number of occurrences of the event of interest from time 0 to time $t$. The other is $X_n$ which is the waiting time until the $n$th occurrence of the event of interest where $n$ is a positive integer.

What can we say about the random variables $N_t$? First, the random variable $N_1$, the number of occurrences of the event of interest in a unit time interval, has a Poisson distribution with mean $\lambda$. The following is the probability function.

$\displaystyle (3) \ \ \ \ \ P(N_1=k)=\frac{1}{k!} \ \lambda^k \ e^{-\lambda} \ \ \ \ \ \ \ \ \ k=0,1,2,3,\cdots$

For any positive real number $t$, the random variable $N_t$, which is a discrete random variable, follows a Poisson distribution with mean $\lambda t$. The following is the probability function.

$\displaystyle (4) \ \ \ \ \ P(N_t=k)=\frac{1}{k!} \ (\lambda t)^k \ e^{-\lambda t} \ \ \ \ \ k=0,1,2,3,\cdots$

For a more detailed discussion of why $P(N_1=k)$ and $P(N_t=k)$ are Poisson, see here.

We now discuss the distributions for $X_1$ and $X_n$. The random variable $X_1$ is the waiting time until the first occurrence. It has an exponential distribution with mean $1/\lambda$. The following is the probability density function.

$\displaystyle (5) \ \ \ \ \ f_{X_1}(t)=\lambda \ e^{-\lambda t} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ t>0$

The random variable $X_n$ has a gamma distribution with shape parameter $\beta=n$ and rate parameter $\alpha=\lambda$. The following is the probability density function.

$\displaystyle (6) \ \ \ \ \ f_{X_n}(t)=\frac{1}{(n-1)!} \ \lambda^n \ t^{n-1} \ e^{-\lambda t} \ \ \ \ \ \ t>0$

The density functions in (5) and (6) are derived from the Poisson distributions in (3) and (4). For example, for $X_1>t$ (the first occurrence taking place after time $t$) to happen, there is no occurrence in the time interval $(0,t)$. Thus $P(X_1>t)=P(N_t=0)$. Note that $N_t$ is a Poisson random variable with mean $\lambda t$. The following derives the density function in (5).

$\displaystyle P(X_1>t)=P(N_t=0)=e^{-\lambda t}$

$\displaystyle P(X_1 \le t)=1-e^{-\lambda t}$

Note that the cumulative distribution function $P(X_1 \le t)$ is that for an exponential distribution. Taking the derivative gives the density function in (5). Extending the same reasoning, for $X_n>t$ (the $n$th occurrence taking place after time $t$) to happen, there must be at most $n-1$ occurrences in the time interval $(0,t)$. Once again, we are relating $X_n$ to the Poisson random variable $N_t$. More specifically $P(X_n>t)=P(N_t \le n-1)$.

$\displaystyle (7) \ \ \ \ \ P(X_n>t)=P(N_t \le n-1)=\sum \limits_{k=0}^{n-1} \biggl[ \frac{(\lambda t)^k}{k!} \ e^{-\lambda t} \biggr]$

$\displaystyle (8) \ \ \ \ \ P(X_n \le t)=1-\sum \limits_{k=0}^{n-1} \biggl[ \frac{(\lambda t)^k}{k!} \ e^{-\lambda t} \biggr]=\sum \limits_{k=n}^{\infty} \biggl[ \frac{(\lambda t)^k}{k!} \ e^{-\lambda t} \biggr]$

Taking the derivative of (8) gives the density function in (6). See here for a more detailed discussion of the relation between gamma distribution and Poisson distribution in a Poisson process.
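As a sanity check on (6), the waiting time $X_n$ can also be viewed as the sum of $n$ independent exponential interarrival times, each with mean $1/\lambda$. A quick simulation sketch (the values $\lambda=2$ and $n=3$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n = 2.0, 3

# waiting time to the nth event = sum of n iid exponential(mean 1/lam) interarrival times
waits = rng.exponential(scale=1.0 / lam, size=(100_000, n)).sum(axis=1)

# a gamma distribution with shape n and rate lam has mean n/lam and variance n/lam^2
print(waits.mean())  # close to n/lam = 1.5
print(waits.var())   # close to n/lam^2 = 0.75
```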

Evaluating Gamma Survival Function

The relation (7) shows that the gamma survival function is the cumulative distribution function (CDF) of the corresponding Poisson distribution. Consider the following integral.

$\displaystyle (9) \ \ \ \ \ \int_t^\infty \frac{1}{(n-1)!} \ \lambda^n \ x^{n-1} \ e^{-\lambda x} \ dx$

The integrand in the above integral is the density function of a gamma distribution (with the shape parameter being a positive integer). The limits of the integral are from $t$ to $\infty$. Conceptually the integral is identical to (7) since the integral gives $P(X_n>t)$, which is the survival function of the gamma distribution in question. According to (7), the integral in (9) has a closed form as follows.

$\displaystyle (10) \ \ \ \ \ \int_t^\infty \frac{1}{(n-1)!} \ \lambda^n \ x^{n-1} \ e^{-\lambda x} \ dx=\sum \limits_{k=0}^{n-1} \biggl[ \frac{(\lambda t)^k}{k!} \ e^{-\lambda t} \biggr]$

The above is a combination of (7) and (9) and is a way to evaluate the right tail of the gamma distribution when the shape parameter is a positive integer $n$. Instead of memorizing it, we can focus on the thought process.

Given the integral in (9), note the shape parameter $n$ and the rate parameter $\lambda$. Consider a Poisson process with rate parameter $\lambda$. In this Poisson process, the random variable $N_1$, the number of occurrences of the event of interest in a unit time interval, has a Poisson distribution with mean $\lambda$. The random variable $N_t$, the number of occurrences of the event of interest in a time interval of length $t$, has a Poisson distribution with mean $\lambda t$. Then the integral in (9) is equivalent to $P(N_t \le n-1)$, the probability that there are at most $n-1$ occurrences in the time interval $(0,t)$.

The above thought process works even if the gamma distribution has no relation to any Poisson process, e.g. it is just the model for size of insurance losses. We can still pretend there is a Poisson relationship and obtain a closed form evaluation of the survival function. It must be emphasized that this evaluation of the gamma survival function in closed form is possible only when the shape parameter is a positive integer. If it is not, we then need to evaluate the gamma survival function using software.
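Software handles both cases, and for integer shape the identity in (10) can be verified numerically, e.g. with scipy (the values $\lambda=1.5$, $n=4$, $t=2$ are illustrative):

```python
from scipy.stats import gamma, poisson

lam, n, t = 1.5, 4, 2.0

# gamma survival function P(X_n > t), with shape n and rate lam (scale = 1/lam)
lhs = gamma.sf(t, a=n, scale=1.0 / lam)

# Poisson CDF P(N_t <= n-1) where N_t has mean lam * t
rhs = poisson.cdf(n - 1, lam * t)

print(lhs, rhs)  # the two values agree
```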

Remarks

The discussion here presents two relationships between the gamma distribution and the Poisson distribution. In the first one, mixing conditional Poisson distributions with gamma mixing weights produces a negative binomial distribution. Mixing with the gamma distribution in (1), with shape parameter $\beta$ and rate parameter $\alpha$, produces the negative binomial distribution in (2). This is one interpretation of the two gamma parameters.

In the second interpretation, the rate parameter of the gamma model is intimately tied to the rate parameter of a Poisson process. The interplay between the number of occurrences of the event of interest and the waiting time until the $n$th occurrence connects the gamma distribution with the Poisson distribution. For a fuller discussion of this interplay, see here.

The gamma distribution is mathematically derived from the gamma function. See here for an introduction to the gamma distribution.

The three assumptions for a Poisson process that are alluded to above allow us to obtain a binomial distribution when dividing a time interval into $n$ subintervals (for a sufficiently large $n$). As the subintervals get more and more granular, the binomial distributions have Poisson distribution as the limiting distribution. The derivation from binomial to Poisson is discussed here, here and here.

Practice problems can be found here in a companion blog.


$\copyright$ 2018 – Dan Ma

# Parametric severity models

Modeling the severity of losses is an important part of actuarial modeling (severity is the dollar value per claim). One approach is to employ parametric models in the modeling process. For example, the process may involve using claims data to estimate the parameters of the fitted model and then using the fitted model for estimation of future claim costs. A companion blog called Topics in Actuarial Modeling introduces a catalog of parametric severity models (the catalog is found here).

The catalog lists the models according to their mathematical properties (e.g. how they are derived and how they are related to the gamma distribution). The models have been discussed quite extensively in the blog for Topics in Actuarial Modeling. The catalog puts all the models in one place, so to speak. It has the links to the blog posts that describe the models. Click on a link to learn more about a particular parametric model. The parametric models in the catalog include the following:

• Gamma distribution
• Erlang distribution
• Exponential distribution
• Chi-squared distribution
• Hypo-Exponential distribution
• Hyper-Exponential distribution
• Lognormal distribution
• Weibull distribution
• Transformed exponential distribution
• Inverse exponential distribution
• Inverse transformed exponential distribution
• Transformed gamma distribution
• Inverse gamma distribution
• Inverse transformed gamma distribution
• Transformed Pareto distribution = Burr distribution
• Inverse Pareto distribution
• Inverse transformed Pareto distribution = Inverse Burr distribution
• Paralogistic distribution
• Inverse paralogistic distribution
• Pareto distribution
• Generalized Pareto distribution
• Loglogistic distribution
• Student t distribution
• Beta distribution
• Generalized beta distribution

This list is by no means exhaustive. It should be a good resource to begin the modeling process (or the study of the actuarial modeling process). In fact, the modeling process is part of the exam syllabus of the Society of Actuaries. The catalog of models is found here.


$\copyright$ 2017 – Dan Ma

# More on Pareto distribution

There are several types of Pareto distributions. The Pareto distribution described in this previous post is called the Pareto Type II distribution (also called Lomax distribution). There is also a Pareto Type I distribution. The difference between the two Pareto types is that the support of Pareto Type I is the interval $(\theta, \infty)$. In other words, the random variable takes on positive real numbers larger than some fixed positive parameter $\theta$. On the other hand, Pareto Type II distribution can take on any positive real number. However, the two types are mathematically related. Their density functions have the same shape (if they have the same parameters).

Pareto Type I Density Curve (Red) versus Pareto Type II Density Curve (Green)

In the above diagram, the red curve is the density function for the Pareto Type I distribution while the green curve is the density function for the Pareto Type II distribution. For both distributions, the shape parameter is $\alpha=2$ and the scale parameter is $\theta=5$.

Note that the green density curve is the result of shifting the red curve horizontally 5 units to the left. The following are the density functions.

Red Curve
$\displaystyle f(x)=\frac{2 (5)^2}{x^3}=\frac{50}{x^3} \ \ \ \ \ x>5$

Green Curve
$\displaystyle g(x)=\frac{2 (5)^2}{(x+5)^3}=\frac{50}{(x+5)^3} \ \ \ \ \ x>0$

From the diagram and the density functions, we see that the green density curve is the result of shifting the red curve to the left by 5. Otherwise, both curves have the same distributional shape.
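The shifting relationship can be checked numerically with scipy, where Pareto Type II is implemented as the Lomax distribution (a sketch using the same parameters $\alpha=2$ and $\theta=5$):

```python
import numpy as np
from scipy.stats import pareto, lomax

shape, theta = 2.0, 5.0
x = np.linspace(0.5, 20.0, 50)

# Type I density evaluated at x + theta equals the Type II (Lomax) density at x
type1_shifted = pareto.pdf(x + theta, b=shape, scale=theta)
type2 = lomax.pdf(x, c=shape, scale=theta)

print(np.max(np.abs(type1_shifted - type2)))  # essentially zero
```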

Since Pareto Type II is a shifting of Pareto Type I, they share similar mathematical properties. For example, they are both heavy tailed distributions. Thus they are appropriate for modeling extreme losses (in insurance applications) or financial fiasco or financial ruin (in financial applications). Which one to use depends on where the starting point of the random variable is.

However, the details of the calculations differ, since the Pareto Type II density function is the result of a horizontal shift.

We would like to introduce blog posts in two companion blogs that discuss the Pareto distribution in greater detail. This blog post is from a companion blog called Topics in Actuarial Modeling. It describes Pareto Type I and Type II in greater detail. This blog post from another companion blog called Actuarial Modeling Practice has a side-by-side comparison of Pareto Type I and Pareto Type II. It also discusses the two Pareto types from a computational standpoint. This blog post has practice problems on the two Pareto types.

Another important mathematical property of Pareto Type II is that it is the mixture of exponential distributions with gamma mixing weights (exponential-gamma mixture). This is also discussed in the highlighted blog posts.


$\copyright$ 2017 – Dan Ma

# The generating function

Consider the function $g(z)=\displaystyle e^{\alpha (z-1)}$ where $\alpha$ is a positive constant. The following shows the derivatives of this function.

\displaystyle \begin{aligned}. \ \ \ \ \ \ &g(z)=e^{\alpha (z-1)} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ g(0)=e^{-\alpha} \\&\text{ } \\&g'(z)=e^{\alpha (z-1)} \ \alpha \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ g'(0)=e^{-\alpha} \ \alpha \\&\text{ } \\&g^{(2)}(z)=e^{\alpha (z-1)} \ \alpha^2 \ \ \ \ \ \ \ \ \ \ \ \ \ \ g^{(2)}(0)=2! \ \frac{e^{-\alpha} \ \alpha^2}{2!} \\&\text{ } \\&g^{(3)}(z)=e^{\alpha (z-1)} \ \alpha^3 \ \ \ \ \ \ \ \ \ \ \ \ \ \ g^{(3)}(0)=3! \ \frac{e^{-\alpha} \ \alpha^3}{3!} \\&\text{ } \\&\ \ \ \ \ \ \ \ \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots \\&\text{ } \\&g^{(n)}(z)=e^{\alpha (z-1)} \ \alpha^n \ \ \ \ \ \ \ \ \ \ \ \ \ \ g^{(n)}(0)=n! \ \frac{e^{-\alpha} \ \alpha^n}{n!} \end{aligned}

Note that the derivative of $g(z)$ at each order is a multiple of a Poisson probability. Thus the Poisson distribution is coded by the function $g(z)=\displaystyle e^{\alpha (z-1)}$. Because of this reason, such a function is called a generating function (or probability generating function). This post discusses some basic facts about the generating function (gf) and its cousin, the moment generating function (mgf). One important characteristic is that these functions generate probabilities and moments. Another important characteristic is that there is a one-to-one correspondence between a probability distribution and its generating function and moment generating function, i.e. two random variables with different cumulative distribution functions cannot have the same gf or mgf. In some situations, this fact is useful in working with independent sum of random variables.

————————————————————————————————————

The Generating Function
Suppose that $X$ is a random variable that takes only nonnegative integer values with the probability function given by

$\text{ }$

$(1) \ \ \ \ \ \ P(X=j)=a_j, \ \ \ \ j=0,1,2,\cdots$

$\text{ }$

The idea of the generating function is that we use a power series to capture the entire probability distribution. The following defines the generating function that is associated with the above sequence $a_j$.

$(2) \ \ \ \ \ \ g(z)=a_0+a_1 \ z+a_2 \ z^2+ \cdots=\sum \limits_{j=0}^\infty a_j \ z^j$

$\text{ }$

Since the elements of the sequence $a_j$ are probabilities, we can also call $g(z)$ the generating function of the probability distribution defined by the sequence in $(1)$. The generating function $g(z)$ is defined wherever the power series converges. It is clear that at the minimum, the power series in $(2)$ converges for $\lvert z \rvert \le 1$.

We discuss the following three properties of generating functions:

1. The generating function completely determines the distribution.
2. The moments of the distribution can be derived from the derivatives of the generating function.
3. The generating function of a sum of independent random variables is the product of the individual generating functions.

The Poisson generating function at the beginning of the post is an example demonstrating property 1 (see Example 0 below for the derivation of the generating function). In some cases, the probability distribution of an independent sum can be deduced from the product of the individual generating functions. Some examples are given below.

————————————————————————————————————
Generating Probabilities
We now discuss the property 1 indicated above. To see that $g(z)$ generates the probabilities, let’s look at the derivatives of $g(z)$:

\displaystyle \begin{aligned}(3) \ \ \ \ \ \ &g'(z)=a_1+2 a_2 \ z+3 a_3 \ z^2+\cdots=\sum \limits_{j=1}^\infty j a_j \ z^{j-1} \\&\text{ } \\&g^{(2)}(z)=2 a_2+6 a_3 \ z+ 12 a_4 \ z^2+\cdots=\sum \limits_{j=2}^\infty j (j-1) a_j \ z^{j-2} \\&\text{ } \\&g^{(3)}(z)=6 a_3+ 24 a_4 \ z+60 a_5 \ z^2+\cdots=\sum \limits_{j=3}^\infty j (j-1)(j-2) a_j \ z^{j-3} \\&\text{ } \\&\ \ \ \ \ \ \ \ \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdots \\&\text{ } \\&g^{(n)}(z)=\sum \limits_{j=n}^\infty j(j-1) \cdots (j-n+1) a_j \ z^{j-n}=\sum \limits_{j=n}^\infty \binom{j}{n} n! \ a_j \ z^{j-n} \end{aligned}

$\text{ }$

By letting $z=0$ above, all the terms vanish except for the constant term. We have:

$\text{ }$

$(4) \ \ \ \ \ \ g^{(n)}(0)=n! \ a_n=n! \ P(X=n)$

$\text{ }$

Thus the generating function is a compact way of encoding the probability distribution. The probability distribution determines the generating function as seen in $(2)$. On the other hand, $(3)$ and $(4)$ demonstrate that the generating function also determines the probability distribution.

————————————————————————————————————
Generating Moments
The generating function also determines the moments (property 2 indicated above). For example, we have:

\displaystyle \begin{aligned}(5) \ \ \ \ \ \ &g'(1)=0 \ a_0+a_1+2 a_2+3 a_3+\cdots=\sum \limits_{j=0}^\infty j a_j=E(X) \\&\text{ } \\&g^{(2)}(1)=0 a_0 + 0 a_1+2 a_2+6 a_3+ 12 a_4+\cdots=\sum \limits_{j=0}^\infty j (j-1) a_j=E[X(X-1)] \\&\text{ } \\&E(X)=g'(1) \\&\text{ } \\&E(X^2)=g'(1)+g^{(2)}(1) \end{aligned}

$\text{ }$

Note that $g^{(n)}(1)=E[X(X-1) \cdots (X-(n-1))]$. Thus the higher moment $E(X^n)$ can be expressed in terms of $g^{(n)}(1)$ and $g^{(k)}(1)$ where $k<n$.
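As a concrete check of these moment formulas, consider the generating function of a fair die, $g(z)=(z+z^2+\cdots+z^6)/6$ (a sketch using numpy's polynomial class):

```python
from numpy.polynomial import Polynomial

# coefficients of the fair-die generating function, indexed by power of z
g = Polynomial([0] + [1.0 / 6] * 6)

EX = g.deriv(1)(1.0)                       # E(X) = g'(1)
EX2 = g.deriv(1)(1.0) + g.deriv(2)(1.0)    # E(X^2) = g'(1) + g''(1)

print(EX, EX2)  # 3.5 and 91/6, up to floating point
```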
————————————————————————————————————
More General Definitions
Note that the definition in $(2)$ can also be interpreted as the mathematical expectation of $z^X$, i.e., $g(z)=E(z^X)$. This provides a way to define the generating function for random variables that may take on values outside of the nonnegative integers. The following is a more general definition of the generating function of the random variable $X$, which is defined for all $z$ where the expectation exists.

$\text{ }$

$(6) \ \ \ \ \ \ g(z)=E(z^X)$

$\text{ }$

————————————————————————————————————
The Generating Function of Independent Sum
Let $X_1,X_2,\cdots,X_n$ be independent random variables with generating functions $g_1,g_2,\cdots,g_n$, respectively. Then the generating function of $X_1+X_2+\cdots+X_n$ is given by the product $g_1 \cdot g_2 \cdots g_n$.

Let $g(z)$ be the generating function of the independent sum $X_1+X_2+\cdots+X_n$. The following derives $g(z)$. Note that the general form of generating function $(6)$ is used.

\displaystyle \begin{aligned}(7) \ \ \ \ \ \ g(z)&=E(z^{X_1+\cdots+X_n}) \\&\text{ } \\&=E(z^{X_1} \cdots z^{X_n}) \\&\text{ } \\&=E(z^{X_1}) \cdots E(z^{X_n}) \\&\text{ } \\&=g_1(z) \cdots g_n(z) \end{aligned}

The probability distribution of a random variable is uniquely determined by its generating function. In particular, the generating function $g(z)$ of the independent sum $X_1+X_2+\cdots+X_n$ derived in $(7)$ is unique. So if the product matches the generating function of a known distribution, we can deduce that the sum follows that distribution. See the examples below.

————————————————————————————————————
Example 0
In this example, we derive the generating function of the Poisson distribution. Based on the definition, we have:

\displaystyle \begin{aligned}. \ \ \ \ \ \ g(z)&=\sum \limits_{j=0}^\infty \frac{e^{-\alpha} \alpha^j}{j!} \ z^j \\&\text{ } \\&=\sum \limits_{j=0}^\infty \frac{e^{-\alpha} (\alpha z)^j}{j!} \\&\text{ } \\&=\frac{e^{-\alpha}}{e^{- \alpha z}} \sum \limits_{j=0}^\infty \frac{e^{-\alpha z} (\alpha z)^j}{j!} \\&\text{ } \\&=e^{\alpha (z-1)} \end{aligned}

$\text{ }$

Example 1
Suppose that $X_1,X_2,\cdots,X_n$ are independent random variables where each $X_i$ has a Bernoulli distribution with probability of success $p$. Let $q=1-p$. The following is the generating function for each $X_i$.

$\text{ }$

$. \ \ \ \ \ \ g(z)=q+p z$

$\text{ }$

Then the generating function of the sum $X=X_1+\cdots+X_n$ is $g(z)^n=(q+p z)^n$. The following is the binomial expansion:

$\text{ }$

\displaystyle \begin{aligned}(8) \ \ \ \ \ \ g(z)^n&=(q+p z)^n \\&\text{ } \\&=\sum \limits_{j=0}^n \binom{n}{j} q^{n-j} \ p^j \ z^j \end{aligned}

$\text{ }$

By definition $(2)$, the generating function of $X=X_1+\cdots+X_n$ is:

$\text{ }$

$\text{ }$

$(9) \ \ \ \ \ \ g(z)^n=\sum \limits_{j=0}^\infty P(X=j) \ z^j$

$\text{ }$

Comparing $(8)$ and $(9)$, we have

$\displaystyle (10) \ \ \ \ \ \ P(X=j)=\left\{\begin{matrix}\displaystyle \binom{n}{j} p^j \ q^{n-j}&\ 0 \le j \le n\\{0}&\ j>n \end{matrix}\right.$

The probability distribution indicated by $(8)$ and $(10)$ is that of a binomial distribution. Since the probability distribution of a random variable is uniquely determined by its generating function, the independent sum of Bernoulli distributions must have a binomial distribution.
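The expansion in $(8)$ can be reproduced numerically: multiplying generating functions corresponds to convolving their coefficient sequences (a sketch with illustrative values $p=0.3$ and $n=5$):

```python
import numpy as np
from math import comb

p, n = 0.3, 5
q = 1.0 - p

gf = np.array([1.0])
for _ in range(n):
    gf = np.convolve(gf, [q, p])  # multiply by the Bernoulli gf q + p z

# the coefficients of g(z)^n are the binomial probabilities in (10)
for j in range(n + 1):
    assert abs(gf[j] - comb(n, j) * p**j * q**(n - j)) < 1e-9
print(gf)
```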

$\text{ }$

Example 2
Suppose that $X_1,X_2,\cdots,X_n$ are independent and have Poisson distributions with parameters $\alpha_1,\alpha_2,\cdots,\alpha_n$, respectively. Then the independent sum $X=X_1+\cdots+X_n$ has a Poisson distribution with parameter $\alpha=\alpha_1+\cdots+\alpha_n$.

Let $g(z)$ be the generating function of $X=X_1+\cdots+X_n$. For each $i$, the generating function of $X_i$ is $g_i(z)=e^{\alpha_i (z-1)}$. The key to the proof is that the product of the $g_i$ has the same general form as the individual $g_i$.

\displaystyle \begin{aligned}(11) \ \ \ \ \ \ g(z)&=g_1(z) \cdots g_n(z) \\&\text{ } \\&=e^{\alpha_1 (z-1)} \cdots e^{\alpha_n (z-1)} \\&\text{ } \\&=e^{(\alpha_1+\cdots+\alpha_n)(z-1)} \end{aligned}

The generating function in $(11)$ is that of a Poisson distribution with mean $\alpha=\alpha_1+\cdots+\alpha_n$. Since the generating function uniquely determines the distribution, we can deduce that the sum $X=X_1+\cdots+X_n$ has a Poisson distribution with parameter $\alpha=\alpha_1+\cdots+\alpha_n$.
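A quick simulation illustrates Example 2 (the Poisson parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
alphas = [0.5, 1.2, 2.3]  # illustrative Poisson parameters

# sum of independent Poisson random variables; the claim is that the sum
# is Poisson with parameter 0.5 + 1.2 + 2.3 = 4.0
x = sum(rng.poisson(a, size=200_000) for a in alphas)

# for a Poisson distribution the mean and variance are equal
print(x.mean(), x.var())  # both close to 4.0
```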

$\text{ }$

Example 3
In rolling a fair die, let $X$ be the number shown on the up face. The associated generating function is:

$\displaystyle. \ \ \ \ \ \ g(z)=\frac{1}{6}(z+z^2+z^3+z^4+z^5+z^6)=\frac{z(1-z^6)}{6(1-z)}$

The generating function can be further reduced as:

\displaystyle \begin{aligned}. \ \ \ \ \ \ g(z)&=\frac{z(1-z^6)}{6(1-z)} \\&\text{ } \\&=\frac{z(1-z^3)(1+z^3)}{6(1-z)} \\&\text{ } \\&=\frac{z(1-z)(1+z+z^2)(1+z^3)}{6(1-z)} \\&\text{ } \\&=\frac{z(1+z+z^2)(1+z^3)}{6} \end{aligned}

Suppose that we roll the fair die 4 times. Let $W$ be the sum of the 4 rolls. Then the generating function of $W$ is

$\displaystyle. \ \ \ \ \ \ g(z)^4=\frac{z^4 (1+z^3)^4 (1+z+z^2)^4}{6^4}$

The random variable $W$ ranges from 4 to 24. Thus the probability function ranges from $P(W=4)$ to $P(W=24)$. To find these probabilities, we simply need to decode the generating function $g(z)^4$. For example, to find $P(W=12)$, we need to find the coefficient of the term $z^{12}$ in the polynomial $g(z)^4$. To help this decoding, we can expand two of the polynomials in $g(z)^4$.

\displaystyle \begin{aligned}. \ \ \ \ \ \ g(z)^4&=\frac{z^4 (1+z^3)^4 (1+z+z^2)^4}{6^4} \\&\text{ } \\&=\frac{z^4 \times A \times B}{6^4} \\&\text{ } \\&A=(1+z^3)^4=1+4z^3+6z^6+4z^9+z^{12} \\&\text{ } \\&B=(1+z+z^2)^4=1+4z+10z^2+16z^3+19z^4+16z^5+10z^6+4z^7+z^8 \end{aligned}

Based on the above polynomials, there are three ways of forming $z^{12}$. They are: $(z^4 \times 1 \times z^8)$, $(z^4 \times 4z^3 \times 16z^5)$, $(z^4 \times 6z^6 \times 10z^2)$. Thus we have:

$\displaystyle. \ \ \ \ \ \ P(W=12)=\frac{1}{6^4}(1+4 \times 16+6 \times 10)=\frac{125}{6^4}$

To find the other probabilities, we can follow the same decoding process.
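The decoding in Example 3 can be automated: repeated convolution of the die's coefficient sequence computes the coefficients of $g(z)^4$ directly (a sketch using numpy):

```python
import numpy as np

# coefficients of the fair-die gf, indexed by power of z (z^0 through z^6)
die = np.array([0, 1, 1, 1, 1, 1, 1]) / 6.0

pmf = np.array([1.0])
for _ in range(4):
    pmf = np.convolve(pmf, die)  # multiplying gfs = convolving coefficients

# pmf[k] is the coefficient of z^k in g(z)^4, i.e. P(W = k)
print(pmf[12])  # 125/6^4, about 0.0964
```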

————————————————————————————————————
Remark
The probability distribution of a random variable is uniquely determined by its generating function. This fundamental property is useful in determining the distribution of an independent sum. The generating function of the independent sum is simply the product of the individual generating functions. If the product is of a certain distributional form (as in Example 1 and Example 2), then we can deduce that the sum must be of the same distribution.

We can also decode the product of generating functions to obtain the probability function of the independent sum (as in Example 3). The method in Example 3 is quite tedious. But one advantage is that it is a “machine process”, a fairly foolproof process that can be performed mechanically.

The machine process is this: Code the individual probability distribution in a generating function $g(z)$. Then raise it to $n$. After performing some manipulation to $g(z)^n$, decode the probabilities from $g(z)^n$.

As long as we can perform the algebraic manipulation carefully and correctly, this process will be sure to provide the probability distribution of an independent sum.

————————————————————————————————————
The Moment Generating Function
The moment generating function of a random variable $X$ is $M_X(t)=E(e^{tX})$, defined for all real numbers $t$ for which the expected value exists. The moments can be computed more directly using an mgf. From the theory of mathematical analysis, it can be shown that if $M_X(t)$ exists on some interval $-a<t<a$, then the derivatives of $M_X(t)$ of all orders exist at $t=0$. Furthermore, it can be shown that $E(X^n)=M_X^{(n)}(0)$.

Suppose that $g(z)$ is the generating function of a random variable. The following relates the generating function and the moment generating function.

\displaystyle \begin{aligned}. \ \ \ \ \ \ &M_X(t)=g(e^t) \\&\text{ } \\&g(z)=M_X(\ln z) \end{aligned}
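These relations follow from $e^{tX}=(e^t)^X$. A quick numerical check using the Poisson distribution, whose generating function $e^{\alpha(z-1)}$ was derived in Example 0 (the values of $\alpha$ and $t$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, t = 1.3, 0.5

x = rng.poisson(alpha, size=400_000)
mgf_emp = np.mean(np.exp(t * x))            # M_X(t) = E(e^{tX}), estimated by simulation
mgf_gf = np.exp(alpha * (np.exp(t) - 1.0))  # g(e^t) with g(z) = e^{alpha(z-1)}

print(mgf_emp, mgf_gf)  # close to each other
```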

————————————————————————————————————

Reference

1. Feller W. An Introduction to Probability Theory and Its Applications, Third Edition, John Wiley & Sons, New York, 1968

# The hazard rate function

In this post, we introduce the hazard rate function using the notions of non-homogeneous Poisson process.

In a Poisson process, changes occur at a constant rate $\lambda$ per unit time. Suppose that we interpret the changes in a Poisson process from a mortality point of view, i.e. a change in the Poisson process means a termination of a system, be it biological or manufactured, and this Poisson process counts the number of terminations as they occur. Then the rate of change $\lambda$ is interpreted as a hazard rate (or failure rate or force of mortality). With a constant force of mortality, the time until the next change is exponentially distributed. In this post, we discuss the hazard rate function in a more general setting. The process that counts the number of terminations will no longer have a constant hazard rate, and instead will have a hazard rate function $\lambda(t)$, a function of time $t$. Such a counting process is called a non-homogeneous Poisson process. We discuss the survival probability models (the time to the next termination) associated with a non-homogeneous Poisson process. We then discuss several important examples of survival probability models, including the Weibull distribution, the Gompertz distribution and the model based on Makeham’s law. We also comment briefly on the connection between the hazard rate function and the tail weight of a distribution.

$\text{ }$

The Poisson Process
We start with the three postulates of a Poisson process. Consider an experiment in which the occurrences of a certain type of events are counted during a given time interval. We call the occurrence of the type of events in question a change. We assume the following three conditions:

1. The numbers of changes occurring in nonoverlapping intervals are independent.
2. The probability of two or more changes taking place in a sufficiently small interval is essentially zero.
3. The probability of exactly one change in the short interval $(t,t+\delta)$ is approximately $\lambda \delta$ where $\delta$ is sufficiently small and $\lambda$ is a positive constant.

$\text{ }$

When we interpret the Poisson process from a mortality point of view, the constant $\lambda$ is a hazard rate (or force of mortality), which can be interpreted as the rate of failure at the next instant given that the life has survived to time $t$. With a constant force of mortality, the survival model (the time until the next termination) has an exponential distribution with mean $\frac{1}{\lambda}$. We wish to relax the constant force of mortality assumption by making $\lambda$ a function of $t$ instead. The remainder of this post is based on the non-homogeneous Poisson process defined below.

$\text{ }$

The Non-Homogeneous Poisson Process
We modify condition 3 above by making $\lambda(t)$ a function of $t$. We have the following modified counting process.

1. The numbers of changes occurring in nonoverlapping intervals are independent.
2. The probability of two or more changes taking place in a sufficiently small interval is essentially zero.
3. The probability of exactly one change in the short interval $(t,t+\delta)$ is approximately $\lambda(t) \delta$ where $\delta$ is sufficiently small and $\lambda(t)$ is a nonnegative function of $t$.

$\text{ }$

We focus on the survival model aspect of such counting processes. Such a process can be interpreted as a model for the number of changes occurring in a time interval, where a change means “termination” or “failure” of a system under consideration. The rate of change function $\lambda(t)$ indicated in condition 3 is called the hazard rate function. It is also called the failure rate function in reliability engineering and the force of mortality in life contingency theory.

Based on condition 3 in the non-homogeneous Poisson process, the hazard rate function $\lambda(t)$ can be interpreted as the rate of failure at the next instant given that the life has survived to time $t$.

Two random variables that naturally arise from a non-homogeneous Poisson process are described here. One is the discrete variable $N_t$, defined as the number of changes in the time interval $(0,t)$. The other is the continuous random variable $T$, defined as the time until the occurrence of the first change. The probability distribution of $T$ is called a survival model. The following is the link between $N_t$ and $T$.

$\text{ }$

\displaystyle \begin{aligned}(1) \ \ \ \ \ \ \ \ \ &P[T > t]=P[N_t=0] \end{aligned}

$\text{ }$

Note that $P[T > t]$ is the probability that the next change occurs after time $t$. This means that there is no change within the interval $(0,t)$. We have the following theorems.

$\text{ }$

Theorem 1.
Let $\displaystyle \Lambda(t)=\int_{0}^{t} \lambda(y) dy$. Then $e^{-\Lambda(t)}$ is the probability that there is no change in the interval $(0,t)$. That is, $\displaystyle P[N_t=0]=e^{-\Lambda(t)}$.

Proof. We are interested in finding the probability of zero changes in the interval $(0,y+\delta)$. By condition 1, the numbers of changes in the nonoverlapping intervals $(0,y)$ and $(y,y+\delta)$ are independent. Thus we have:

$\text{ }$

$\displaystyle (2) \ \ \ \ \ \ \ \ P[N_{y+\delta}=0] \approx P[N_y=0] \times [1-\lambda(y) \delta]$

$\text{ }$

Note that by condition 3, the probability of exactly one change in the small interval $(y,y+\delta)$ is $\lambda(y) \delta$. Thus $[1-\lambda(y) \delta]$ is the probability of no change in the interval $(y,y+\delta)$. Continuing with equation $(2)$, we have the following derivation:

$\text{ }$

\displaystyle \begin{aligned}. \ \ \ \ \ \ \ \ \ &\frac{P[N_{y+\delta}=0] - P[N_y=0]}{\delta} \approx -\lambda(y) P[N_y=0] \\&\text{ } \\&\frac{d}{dy} P[N_y=0]=-\lambda(y) P[N_y=0] \\&\text{ } \\&\frac{\frac{d}{dy} P[N_y=0]}{P[N_y=0]}=-\lambda(y) \\&\text{ } \\&\int_0^{t} \frac{\frac{d}{dy} P[N_y=0]}{P[N_y=0]} dy=-\int_0^{t} \lambda(y)dy \end{aligned}

$\text{ }$

Evaluating the integral on the left hand side with the boundary condition of $P[N_0=0]=1$ produces the following results:

\displaystyle \begin{aligned}. \ \ \ \ \ \ \ \ \ &\ln P[N_t=0]=-\int_0^{t} \lambda(y)dy \\&\text{ } \\&P[N_t=0]=\displaystyle e^{\displaystyle -\int_0^{t} \lambda(y)dy} \end{aligned}

$\text{ }$

Theorem 2
As discussed above, let $T$ be the length of the interval that is required to observe the first change. Then the following are the distribution function, survival function and pdf of $T$:

\displaystyle \begin{aligned}. \ \ \ \ \ \ \ \ \ &F_T(t)=\displaystyle 1-e^{-\int_0^t \lambda(y) dy} \\&\text{ } \\&S_T(t)=\displaystyle e^{-\int_0^t \lambda(y) dy} \\&\text{ } \\&f_T(t)=\displaystyle \lambda(t) e^{-\int_0^t \lambda(y) dy} \end{aligned}

Proof. In Theorem 1, we derive the probability $P[N_y=0]$ for the discrete variable $N_y$ derived from the non-homogeneous Poisson process. We now consider the continuous random variable $T$, the time until the first change, which is related to $N_t$ by $(1)$. Thus $S_T(t)=P[T > t]=P[N_t=0]=e^{-\int_0^t \lambda(y) dy}$. The distribution function and density function can be derived accordingly.
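Theorem 2 can be checked numerically: given any hazard rate function, integrate it and exponentiate to get the survival function. A minimal sketch, using the hazard $\lambda(t)=2t$ (chosen for illustration because $\Lambda(t)=t^2$ is available in closed form):

```python
import math

def survival(hazard, t, steps=10000):
    """S_T(t) = exp(-Lambda(t)) with Lambda(t) = integral of the hazard
    over (0, t), approximated by the midpoint rule (a rough sketch)."""
    h = t / steps
    cum_hazard = sum(hazard((k + 0.5) * h) for k in range(steps)) * h
    return math.exp(-cum_hazard)

# Hazard lambda(t) = 2t gives Lambda(t) = t^2, so S_T(t) = e^{-t^2}.
s_numeric = survival(lambda y: 2.0 * y, 1.3)
s_closed = math.exp(-(1.3 ** 2))
```

The numerical survival probability matches the closed form $e^{-t^2}$.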

$\text{ }$

Theorem 3
The hazard rate function $\lambda(t)$ is equivalent to each of the following:

\displaystyle \begin{aligned}. \ \ \ \ \ \ \ \ \ &\lambda(t)=\frac{f_T(t)}{1-F_T(t)} \\&\text{ } \\&\lambda(t)=\frac{-S_T^{'}(t)}{S_T(t)} \end{aligned}

$\text{ }$

Remark
Theorem 1 and Theorem 2 show that in a non-homogeneous Poisson process as described above, the hazard rate function $\lambda(t)$ completely specifies the probability distribution of the survival model $T$ (the time until the first change). Once the rate of change function $\lambda(t)$ is known in the non-homogeneous Poisson process, we can use it to generate the survival function $S_T(t)$. All of the examples of survival models given below are derived by assuming the functional form of the hazard rate function. The result in Theorem 2 holds even outside the context of a non-homogeneous Poisson process: given the hazard rate function $\lambda(t)$, we can derive the three distributional items $S_T(t)$, $F_T(t)$ and $f_T(t)$.

The ratio in Theorem 3 indicates that the probability distribution determines the hazard rate function. In fact, the ratio in Theorem 3 is the usual definition of the hazard rate function. That is, the hazard rate function can be defined as the ratio of the density and the survival function (one minus the cdf). With this definition, we can also recover the survival function. Whenever $\displaystyle \lambda(x)=\frac{f_X(x)}{1-F_X(x)}$, we can derive:

$\text{ }$

\displaystyle \begin{aligned}. \ \ \ \ \ \ \ \ \ &S_X(x)=\displaystyle e^{-\int_0^x \lambda(y) dy} \end{aligned}

$\text{ }$

As indicated above, the hazard rate function can be interpreted as the rate of failure at the next instant given that the life or system being studied has survived up to time $t$.

It is interesting to note that the function $\Lambda(t)=\int_0^t \lambda(y) dy$ defined in Theorem 1 is called the cumulative hazard rate function. Thus the cumulative hazard rate function is an alternative way of representing the hazard rate function (see the discussion on Weibull distribution below).

——————————————————————————————————————
Examples of Survival Models

–Exponential Distribution–
In many applications, especially those for biological organisms and mechanical systems that wear out over time, the hazard rate $\lambda(t)$ is an increasing function of $t$. In other words, the older the life in question (the larger the $t$), the higher the chance of failure at the next instant. For humans, the probability of an 85-year-old dying in the next year is clearly higher than that of a 20-year-old. In a Poisson process, the rate of change $\lambda(t)=\lambda$ indicated in condition 3 is a constant. As a result, the time $T$ until the first change derived in Theorem 2 has an exponential distribution with parameter $\lambda$. In terms of mortality study or reliability study of machines that wear out over time, this is not a realistic model. However, if the mortality or failure is caused by random external events, this could be an appropriate model.

–Weibull Distribution–
This distribution is an excellent model choice for describing the life of manufactured objects. It is defined by the following cumulative hazard rate function:

$\text{ }$

\displaystyle \begin{aligned}. \ \ \ \ \ \ \ \ \ &\Lambda(t)=\biggl(\frac{t}{\beta}\biggr)^{\alpha} \end{aligned} where $\alpha > 0$ and $\beta>0$

$\text{ }$

As a result, the hazard rate function, the density function and the survival function for the lifetime distribution are:

\displaystyle \begin{aligned}. \ \ \ \ \ \ \ \ \ &\lambda(t)=\frac{\alpha}{\beta} \biggl(\frac{t}{\beta}\biggr)^{\alpha-1} \\&\text{ } \\&f_T(t)=\frac{\alpha}{\beta} \biggl(\frac{t}{\beta}\biggr)^{\alpha-1} \displaystyle e^{\displaystyle -\biggl[\frac{t}{\beta}\biggr]^{\alpha}} \\&\text{ } \\&S_T(t)=\displaystyle e^{\displaystyle -\biggl[\frac{t}{\beta}\biggr]^{\alpha}} \end{aligned}

$\text{ }$

The parameter $\alpha$ is the shape parameter and $\beta$ is the scale parameter. When $\alpha=1$, the hazard rate becomes a constant and the Weibull distribution becomes an exponential distribution.

When the parameter $\alpha<1$, the failure rate decreases over time. One interpretation is that most of the defective items fail early on in the life cycle. Once they are removed from the population, the failure rate decreases over time.

When the parameter $\alpha>1$, the failure rate increases with time. This is a good candidate for a model to describe the lifetime of machines or systems that wear out over time.
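The three Weibull formulas above can be put together in a short sketch; setting $\alpha=1$ illustrates the reduction to the exponential distribution (the parameter values below are arbitrary):

```python
import math

def weibull_parts(t, alpha, beta):
    """Hazard rate, pdf and survival function of the Weibull
    distribution in the parameterization used above."""
    hazard = (alpha / beta) * (t / beta) ** (alpha - 1)
    survival = math.exp(-((t / beta) ** alpha))
    pdf = hazard * survival              # f_T(t) = lambda(t) * S_T(t)
    return hazard, pdf, survival

# With alpha = 1 the hazard is the constant 1/beta and
# S_T(t) = e^{-t/beta}: the exponential distribution.
hazard, pdf, survival = weibull_parts(2.0, alpha=1.0, beta=5.0)
```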

–The Gompertz Distribution–
The Gompertz law states that the force of mortality or failure rate increases exponentially over time. It describes human mortality quite accurately. The following is the hazard rate function:

$\text{ }$

\displaystyle \begin{aligned}. \ \ \ \ \ \ \ \ \ &\lambda(t)=\alpha e^{\beta t} \end{aligned} where $\alpha>0$ and $\beta>0$.

$\text{ }$

The following are the cumulative hazard rate function as well as the survival function, distribution function and the pdf of the lifetime distribution $T$.

$\text{ }$

\displaystyle \begin{aligned}. \ \ \ \ \ \ \ \ \ &\Lambda(t)=\int_0^t \alpha e^{\beta y} dy=\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta} \\&\text{ } \\&S_T(t)=\displaystyle e^{\displaystyle -\biggl(\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}\biggr)} \\&\text{ } \\&F_T(t)=\displaystyle 1-e^{\displaystyle -\biggl(\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}\biggr)} \\&\text{ } \\&f_T(t)=\displaystyle \alpha \ e^{\beta t} \ e^{\displaystyle -\biggl(\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}\biggr)} \end{aligned}

$\text{ }$
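As a quick numerical check of the Gompertz formulas (a sketch; the parameter values are arbitrary), the hazard rate recovered from the survival function as $-S_T'(t)/S_T(t)$ should match $\alpha e^{\beta t}$:

```python
import math

def gompertz_survival(t, a, b):
    """S_T(t) for the Gompertz hazard lambda(t) = a*e^(b*t), using the
    closed-form cumulative hazard derived above."""
    cum_hazard = (a / b) * (math.exp(b * t) - 1.0)
    return math.exp(-cum_hazard)

# Arbitrary illustrative values for a, b, the age t and the step h.
a, b, t, h = 0.001, 0.09, 50.0, 1e-6

# Recover the hazard as -S'(t)/S(t) via a central difference.
s_plus = gompertz_survival(t + h, a, b)
s_minus = gompertz_survival(t - h, a, b)
hazard_numeric = -(s_plus - s_minus) / (2 * h) / gompertz_survival(t, a, b)
hazard_closed = a * math.exp(b * t)
```

This is exactly the second identity of Theorem 3 applied to the Gompertz survival function.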

–Makeham’s Law–
Makeham’s law states that the force of mortality is the Gompertz failure rate plus an age-independent component that accounts for external causes of mortality. The following is the hazard rate function:

$\text{ }$

\displaystyle \begin{aligned}. \ \ \ \ \ \ \ \ \ &\lambda(t)=\alpha e^{\beta t}+\mu \end{aligned} where $\alpha>0$, $\beta>0$ and $\mu>0$.

$\text{ }$

The following are the cumulative hazard rate function as well as the survival function, distribution function and the pdf of the lifetime distribution $T$.

$\text{ }$

\displaystyle \begin{aligned}. \ \ \ \ \ \ \ \ \ &\Lambda(t)=\int_0^t (\alpha e^{\beta y}+\mu) dy=\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}+\mu t \\&\text{ } \\&S_T(t)=\displaystyle e^{\displaystyle -\biggl(\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}+\mu t\biggr)} \\&\text{ } \\&F_T(t)=\displaystyle 1-e^{\displaystyle -\biggl(\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}+\mu t\biggr)} \\&\text{ } \\&f_T(t)=\biggl( \alpha e^{\beta t}+\mu \biggr) \ e^{\displaystyle -\biggl(\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}+\mu t\biggr)} \end{aligned}

$\text{ }$

The Tail Weight of a Distribution
The hazard rate function can provide information about the tail of a distribution. If the hazard rate function is decreasing, it is an indication that the distribution has a heavy tail, i.e., the distribution puts significantly more probability on larger values. Conversely, if the hazard rate function is increasing, it is an indication of a lighter tail. In an insurance context, heavy tailed distributions (e.g. the Pareto distribution) are suitable candidates for modeling large insurance losses (see this previous post or [1]).

$\text{ }$

Reference

1. Klugman S.A., Panjer H. H., Wilmot G. E. Loss Models, From Data to Decisions, Second Edition., Wiley-Interscience, a John Wiley & Sons, Inc., New York, 2004

# The mean excess loss function

We take a closer look at the mean excess loss function. We start with the following example:

Example
Suppose that an entity is exposed to a random loss $X$. An insurance policy offers protection against this loss. Under this policy, payment is made to the insured entity subject to a deductible $d>0$, i.e. when a loss is less than $d$, no payment is made to the insured entity, and when the loss exceeds $d$, the insured entity is reimbursed for the amount of the loss in excess of the deductible $d$. Consider the following two questions:

$\text{ }$

1. Of all the losses that are eligible to be reimbursed by the insurer, what is the average payment made by the insurer to the insured?
2. What is the average payment made by the insurer to the insured entity?

$\text{ }$

The two questions look similar. The difference between the two questions is subtle but important. In the first question, the average is computed over all losses that are eligible for reimbursement (i.e., the loss exceeds the deductible). This is the average amount the insurer is expected to pay in the event that a payment in excess of the deductible is required to be made. So this average is a per payment average.

In the second question, the average is calculated over all losses (regardless of sizes). When the loss does not reach the deductible, the payment is considered zero and when the loss is in excess of the deductible, the payment is $X-d$. Thus the average is the average amount the insurer has to pay per loss. So the second question is about a per loss average.

—————————————————————————————————————-

The Mean Excess Loss Function
The average in the first question is called the mean excess loss function. Suppose $X$ is the random loss and $d>0$. The mean excess loss variable is the conditional variable $X-d \ \lvert X>d$ and the mean excess loss function $e_X(d)$ is defined by:

$\text{ }$

$\displaystyle (1) \ \ \ \ \ e_X(d)=E(X-d \lvert X>d)$

$\text{ }$

In an insurance context, the mean excess loss function is the average payment in excess of a threshold given that the loss exceeds the threshold. In a mortality context, the mean excess loss function is called the mean residual life function or the complete expectation of life, and can be interpreted as the remaining time until death given that the life in question is alive at age $d$.

The mean excess loss function is computed by the following depending on whether the loss variable is continuous or discrete.

$\text{ }$

$\displaystyle (2) \ \ \ \ \ e_X(d)=\frac{\int_d^\infty (x-d) \ f_X(x) \ dx}{S_X(d)}$

$\displaystyle (3) \ \ \ \ \ e_X(d)=\frac{\sum \limits_{x>d} (x-d) \ P(X=x)}{S_X(d)}$

$\text{ }$

The mean excess loss function $e_X(d)$ is defined only when the integral or the sum converges. The following is an equivalent calculation of $e_X(d)$ that may be easier to use in some circumstances.

$\text{ }$

$\displaystyle (4) \ \ \ \ \ e_X(d)=\frac{\int_d^\infty S_X(x) \ dx}{S_X(d)}$

$\displaystyle (5a) \ \ \ \ \ e_X(d)=\frac{\sum \limits_{x \ge d} S_X(x) }{S_X(d)}$

$\displaystyle (5b) \ \ \ \ \ e_X(d)=\frac{\biggl(\sum \limits_{x>d} S_X(x)\biggr)+(w+1-d) S_X(w) }{S_X(d)}$

$\text{ }$

In both $(5a)$ and $(5b)$, we assume that the support of $X$ is the set of nonnegative integers. In $(5a)$, we assume that the deductible $d$ is a positive integer. In $(5b)$, the deductible $d$ is free to be any positive number and $w$ is the largest integer such that $w \le d$. The formulation $(4)$ is obtained by using integration by parts (also see theorem 3.1 in [1]). The formulations of $(5a)$ and $(5b)$ are a result of applying theorem 3.2 in [1].
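Formulas $(2)$ and $(4)$ can be checked against each other numerically. The following sketch uses the exponential distribution, for which $e_X(d)=1/\alpha$ by the memoryless property (the values of $\alpha$, $d$ and the truncation point of the infinite integral are chosen for illustration):

```python
import math

alpha, d = 2.0, 1.5                          # exponential rate, deductible

f = lambda x: alpha * math.exp(-alpha * x)   # pdf
S = lambda x: math.exp(-alpha * x)           # survival function

def integrate(g, a, b, n=200000):
    """Midpoint-rule integral of g over [a, b] (a rough sketch)."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

upper = 40.0                                 # truncate the infinite integral
e_via_2 = integrate(lambda x: (x - d) * f(x), d, upper) / S(d)  # formula (2)
e_via_4 = integrate(S, d, upper) / S(d)                         # formula (4)
# both are close to 1/alpha = 0.5 by the memoryless property
```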

The mean excess loss function provides information about the tail weight of a distribution, see the previous post The Pareto distribution. Also see Example 3 below.
—————————————————————————————————————-
The Mean in Question 2
The average that we need to compute is the mean of the following random variable. Note that $(a)_+$ is the function that assigns the value of $a$ whenever $a>0$ and otherwise assigns the value of zero.

$\text{ }$

$\displaystyle (6) \ \ \ \ \ (X-d)_+=\left\{\begin{matrix}0& \ \ \ \ X \le d \\ X-d& \ \ \ \ X>d \end{matrix}\right.$

$\text{ }$

The mean $E((X-d)_+)$ is calculated over all losses. When the loss is less than the deductible $d$, the insurer has no obligation to make a payment to the insured and the payment is assumed to be zero in the calculation of $E[(X-d)_+]$. The following is how this expected value is calculated depending on whether the loss $X$ is continuous or discrete.

$\text{ }$

$\displaystyle (7) \ \ \ \ \ E((X-d)_+)=\int_d^\infty (x-d) \ f_X(x) \ dx$

$\displaystyle (8) \ \ \ \ \ E((X-d)_+)=\sum \limits_{x>d} (x-d) \ P(X=x)$

$\text{ }$

Based on the definitions, the following is how the two averages are related.

$\displaystyle E[(X-d)_+]=e_X(d) \ [1-F_X(d)] \ \ \ \text{or} \ \ \ E[(X-d)_+]=e_X(d) \ S_X(d)$

—————————————————————————————————————-
The Limited Expected Value
For a given positive constant $u$, the limited loss variable is defined by

$\text{ }$

$\displaystyle (9) \ \ \ \ \ X \wedge u=\left\{\begin{matrix}X& \ \ \ \ X < u \\ u& \ \ \ \ X \ge u \end{matrix}\right.$

$\text{ }$

The expected value $E(X \wedge u)$ is called the limited expected value. In an insurance application, $u$ is a policy limit that sets a maximum on the benefit to be paid. The following is how the limited expected value is calculated depending on whether the loss $X$ is continuous or discrete.

$\text{ }$

$\displaystyle (10) \ \ \ \ \ E(X \wedge u)=\int_{-\infty}^u x \ f_X(x) \ dx+u \ S_X(u)$

$\displaystyle (11) \ \ \ \ \ E(X \wedge u)=\biggl(\sum \limits_{x < u} x \ P(X=x)\biggr)+u \ S_X(u)$

$\text{ }$

Interestingly, we have the following relation.

$\text{ }$

$\displaystyle (X-d)_+ + (X \wedge d)=X \ \ \ \ \ \ \text{and} \ \ \ \ \ \ E[(X-d)_+] + E(X \wedge d)=E(X)$

$\text{ }$

The above statement indicates that purchasing a policy with a deductible $d$ and another policy with a policy maximum $d$ is equivalent to buying full coverage.

Another way to interpret $X \wedge d$ is that it is the amount of loss that is eliminated by having a deductible in the insurance policy. If the insurance policy pays the loss in full, then the insurance payment is $X$ and the amount the insurer is expected to pay is $E(X)$. By having a deductible provision in the policy, the insurer is now only liable for the amount $(X-d)_+$ and the amount the insurer is expected to pay per loss is $E[(X-d)_+]$. Consequently $E(X \wedge d)$ is the expected amount of the loss that is eliminated by the deductible provision in the policy. The following summarizes this observation.

$\text{ }$

$\displaystyle (X \wedge d)=X-(X-d)_+ \ \ \ \ \ \ \text{and} \ \ \ \ \ \ E(X \wedge d)=E(X)-E[(X-d)_+]$
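The decomposition $(X \wedge d)+(X-d)_+=X$ holds loss by loss, which a short simulation makes concrete (a sketch; the deductible and the exponential loss distribution are chosen arbitrarily):

```python
import random

random.seed(42)
d = 1.0                                   # arbitrary deductible

def split(x, d):
    """Decompose a loss x into the limited part min(x, d) and the
    excess part max(x - d, 0)."""
    return min(x, d), max(x - d, 0.0)

# Exponential losses chosen purely for illustration.
losses = [random.expovariate(2.0) for _ in range(10000)]
for x in losses:
    limited, excess = split(x, d)
    assert abs((limited + excess) - x) < 1e-12   # (X ^ d) + (X-d)_+ = X
```

Averaging the two parts gives the sample version of $E(X \wedge d)+E[(X-d)_+]=E(X)$.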

—————————————————————————————————————-
Example 1
Let the loss random variable $X$ be exponential with pdf $f(x)=\alpha e^{-\alpha x}$. We have $E(X)=\frac{1}{\alpha}$. Because of the memoryless property of the exponential distribution, given that a loss exceeds the deductible, the mean payment in excess of the deductible is the same as the original mean. Thus $e_X(d)=\frac{1}{\alpha}$. Then the per loss average is:

$\displaystyle E[(X-d)_+]=e_X(d) \ S(d) = \frac{e^{-\alpha d}}{\alpha}$

$\text{ }$

Thus, with a deductible provision in the policy, the insurer is expected to pay $\displaystyle \frac{e^{-\alpha d}}{\alpha}$ per loss instead of $\displaystyle \frac{1}{\alpha}$. Thus the expected amount of loss eliminated (from the insurer’s point of view) is $\displaystyle E(X \wedge d)=\frac{1-e^{-\alpha d}}{\alpha}$.

$\text{ }$

Example 2
Suppose that the loss variable has a Gamma distribution with rate parameter $\alpha$ and shape parameter $n=2$. The pdf is $\displaystyle g(x)=\alpha^2 \ x \ e^{-\alpha x}$. The insurer’s expected payment without the deductible is $E(X)=\frac{2}{\alpha}$. The survival function $S(x)$ is:

$\displaystyle S(x)=e^{-\alpha x}(1+\alpha x)$

$\text{ }$

For the losses that exceed the deductible, the insurer’s expected payment is:

\displaystyle \begin{aligned}e_X(d)&=\frac{\int_d^{\infty} S(x) \ dx}{S(d)}\\&=\frac{\int_d^{\infty} e^{-\alpha x}(1+\alpha x) \ dx}{e^{-\alpha d}(1+\alpha d)} \\&=\frac{\frac{e^{-\alpha d}}{\alpha}+d e^{-\alpha d}+\frac{e^{-\alpha d}}{\alpha}}{e^{-\alpha d}(1+\alpha d)} \\&=\frac{\frac{2}{\alpha}+d}{1+\alpha d} \end{aligned}

$\text{ }$

Then the insurer’s expected payment per loss is $E[(X-d)_+]$:

\displaystyle \begin{aligned}E[(X-d)_+]&=e_X(d) \ S(d) \\&=\frac{\frac{2}{\alpha}+d}{1+\alpha d} \ \ e^{-\alpha d}(1+\alpha d) \\&=e^{-\alpha d} \ \biggl(\frac{2}{\alpha}+d\biggr) \end{aligned}

$\text{ }$

With a deductible in the policy, the following is the expected amount of loss eliminated (from the insurer’s point of view).

\displaystyle \begin{aligned}E[X \wedge d]&=E(X)-E[(X-d)_+] \\&=\frac{2}{\alpha}-e^{-\alpha d} \ \biggl(\frac{2}{\alpha}+d\biggr) \\&=\frac{2}{\alpha}\biggl(1-e^{-\alpha d}\biggr)-d e^{-\alpha d} \end{aligned}

$\text{ }$
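The closed form $E[(X-d)_+]=e^{-\alpha d} \bigl(\frac{2}{\alpha}+d\bigr)$ derived in Example 2 can be checked by Monte Carlo simulation (a sketch; the parameter values and sample size are arbitrary):

```python
import math
import random

random.seed(2023)
alpha, d, n = 1.0, 2.0, 200000    # arbitrary rate, deductible, sample size

# Gamma with shape 2 and rate alpha (i.e. scale 1/alpha), as in Example 2.
losses = [random.gammavariate(2.0, 1.0 / alpha) for _ in range(n)]

per_loss = sum(max(x - d, 0.0) for x in losses) / n       # E[(X-d)_+]
closed_form = math.exp(-alpha * d) * (2.0 / alpha + d)    # from Example 2
```

The Monte Carlo estimate lands close to the closed form, up to sampling error.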

Example 3
Suppose the loss variable $X$ has a Pareto distribution with the following pdf:

$\text{ }$

$\displaystyle f_X(x)=\frac{\beta \ \alpha^\beta}{(x+\alpha)^{\beta+1}} \ \ \ \ \ x>0$

$\text{ }$

If the insurance policy is to pay the full loss, then the insurer’s expected payment per loss is $\displaystyle E(X)=\frac{\alpha}{\beta-1}$ provided that the shape parameter $\beta$ is larger than one.

The mean excess loss function of the Pareto distribution has a linear form that is increasing (see the previous post The Pareto distribution). The following is the mean excess loss function:

$\text{ }$

$\displaystyle e_X(d)=\frac{1}{\beta-1} \ d +\frac{\alpha}{\beta-1}=\frac{1}{\beta-1} \ d +E(X)$

$\text{ }$

If the loss is modeled by such a distribution, this is an uninsurable risk! The higher the deductible, the larger the expected payment in excess of the deductible. The expected payment for large losses is always the unmodified mean $E(X)$ plus a component that is increasing in $d$.

The increasing mean excess loss function is an indication that the Pareto distribution is a heavy tailed distribution. In general, an increasing mean excess loss function is an indication of a heavy tailed distribution. On the other hand, a decreasing mean excess loss function indicates a light tailed distribution. The exponential distribution has a constant mean excess loss function and is considered a medium tailed distribution.
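The linear mean excess loss function of the Pareto distribution can be verified numerically from formula $(4)$ (a sketch; the parameter values and the truncation of the integral are chosen for illustration):

```python
# Arbitrary Pareto parameters and deductible.
alpha, beta, d = 3.0, 2.5, 4.0

S = lambda x: (alpha / (x + alpha)) ** beta    # Pareto survival function

# e_X(d) = integral of S over (d, inf) divided by S(d);
# the integral is truncated and done by the midpoint rule.
n, upper = 400000, 2000.0
h = (upper - d) / n
integral = sum(S(d + (k + 0.5) * h) for k in range(n)) * h
e_numeric = integral / S(d)

e_formula = (d + alpha) / (beta - 1)           # = d/(beta-1) + E(X)
```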

Reference

1. Bowers N. L., Gerber H. U., Hickman J. C., Jones D. A., Nesbit C. J. Actuarial Mathematics, First Edition., The Society of Actuaries, Itasca, Illinois, 1986
2. Klugman S.A., Panjer H. H., Wilmot G. E. Loss Models, From Data to Decisions, Second Edition., Wiley-Interscience, a John Wiley & Sons, Inc., New York, 2004

# Another way to generate the Pareto distribution

The Pareto distribution is a heavy tailed distribution, a suitable candidate for modeling large insurance losses above a threshold. It is a mixture of exponential distributions with Gamma mixing weights. Another way to generate the Pareto distribution is to take the inverse of another distribution (raising another distribution to the power of minus one). Previous discussion on the Pareto distribution can be found here: An example of a mixture and The Pareto distribution.

$\text{ }$

Let $X$ be a continuous random variable with pdf $f_X(x)$ and with cdf $F_X(x)$ such that $F_X(0)=0$. Let $Y=X^{-1}$. Then the resulting distribution for $Y$ is called an inverse distribution. For example, if $X$ has an exponential distribution, then $Y$ is said to have an inverse exponential distribution. The following are the cdf and pdf of $Y=X^{-1}$.

$\text{ }$

$\displaystyle F_Y(y)=1-F_X(y^{-1}) \ \ \ \ \ \ \ \ f_Y(y)=f_X(y^{-1}) \ y^{-2}$

$\text{ }$

We now show that the Pareto distribution is an inverse. Suppose that $X$ has the pdf $f_X(x)=\beta \ x^{\beta-1}$, $0<x<1$. We show that $Y=X^{-1}$ has a Pareto distribution with scale parameter $\alpha=1$ and shape parameter $\beta>0$. Once this base distribution is established, we can relax the scale parameter to have other positive values.

$\text{ }$

The cdf of $X$ is $F_X(x)=x^\beta$ where $0<x<1$. Since the support of $X$ is $0<x<1$, the support of $Y$ is $y>1$. Thus in deriving the cdf $F_Y(y)$, we only need to focus on $y>1$ (equivalently $0<y^{-1}<1$). The following is the cdf $F_Y(y)$:

$\text{ }$

$\displaystyle F_Y(y)=1-F_X(y^{-1})=1-y^{-\beta}, \ \ \ \ y>1$

$\text{ }$

Upon differentiation, we obtain the pdf:

$\displaystyle f_Y(y)=\beta \ y^{-(\beta+1)}=\frac{\beta}{y^{\beta+1}}, \ \ \ y>1$

$\text{ }$

The above pdf is that of a Pareto distribution with scale parameter $\alpha=1$ and shape parameter $\beta$. However, the support of this pdf is $y>1$. In order to have $y>0$ as the support, we shift the distribution by one unit (i.e. consider $Y-1$), resulting in the following alternative version:

$\text{ }$

$\displaystyle f_Y(y)=\frac{\beta}{(y+1)^{\beta+1}}, \ \ \ y>0$

$\text{ }$

We now transform the above pdf to become a true 2-parameter Pareto pdf by relaxing the scale parameter. The result is the following pdf.

$\text{ }$

$\displaystyle f_Y(y)=\frac{\beta \ \alpha^{\beta}}{(y+\alpha)^{\beta+1}}, \ \ \ y>0$
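The inverse construction can be checked by simulation: sample $X$ with cdf $F_X(x)=x^\beta$ by inversion, form $Y-1=X^{-1}-1$, and compare the empirical survival probabilities with the Pareto survival function $(1+y)^{-\beta}$ (a sketch; $\beta$, the sample size and the test point are arbitrary):

```python
import random

random.seed(7)
beta, n = 2.0, 100000          # arbitrary shape parameter and sample size

# Sample X with cdf F_X(x) = x^beta on (0,1) by inversion: X = U^(1/beta),
# where U is uniform on (0, 1].
xs = [(1.0 - random.random()) ** (1.0 / beta) for _ in range(n)]
ys = [1.0 / x - 1.0 for x in xs]     # shifted inverse; support y > 0

# Empirical check of the Pareto survival function (1 + y)^(-beta).
y0 = 1.5
empirical = sum(1 for y in ys if y > y0) / n
theoretical = (1.0 + y0) ** (-beta)
```

The empirical tail frequency agrees with $(1+y_0)^{-\beta}$ up to sampling error, consistent with the pdf above for $\alpha=1$.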