The mean excess loss function

We take a closer look at the mean excess loss function. We start with the following example:

Example
Suppose that an entity is exposed to a random loss X. An insurance policy offers protection against this loss. Under this policy, payment is made to the insured entity subject to a deductible d>0: when a loss is less than d, no payment is made, and when a loss exceeds d, the insured entity is reimbursed for the amount of the loss in excess of the deductible d. Consider the following two questions:


  1. Of all the losses that are eligible to be reimbursed by the insurer, what is the average payment made by the insurer to the insured?
  2. What is the average payment made by the insurer to the insured entity?


The two questions look similar, but the difference between them is subtle and important. In the first question, the average is computed over all losses that are eligible for reimbursement (i.e., losses exceeding the deductible). This is the average amount the insurer expects to pay given that a payment in excess of the deductible has to be made. So this average is a per-payment average.

In the second question, the average is calculated over all losses (regardless of size). When the loss does not reach the deductible, the payment is zero, and when the loss is in excess of the deductible, the payment is X-d. Thus the average is the amount the insurer expects to pay per loss. So the second question is about a per-loss average.

—————————————————————————————————————-

The Mean Excess Loss Function
The average in the first question is called the mean excess loss function. Suppose X is the random loss and d>0 is the deductible. The mean excess loss variable is the conditional random variable X-d \ \lvert \ X>d and the mean excess loss function e_X(d) is defined by:


\displaystyle (1) \ \ \ \ \ e_X(d)=E(X-d \lvert X>d)


In an insurance context, the mean excess loss function is the average payment in excess of a threshold, given that the loss exceeds the threshold. In a mortality context, the mean excess loss function is called the mean residual life function or the complete expectation of life, and can be interpreted as the expected remaining time until death given that the life in question is alive at age d.

The mean excess loss function is computed by one of the following, depending on whether the loss variable is continuous or discrete.


\displaystyle (2) \ \ \ \ \ e_X(d)=\frac{\int_d^\infty (x-d) \ f_X(x) \ dx}{S_X(d)}

\displaystyle (3) \ \ \ \ \ e_X(d)=\frac{\sum \limits_{x>d} (x-d) \ P(X=x)}{S_X(d)}


The mean excess loss function e_X(d) is defined only when the integral or the sum converges. The following is an equivalent calculation of e_X(d) that may be easier to use in some circumstances.


\displaystyle (4) \ \ \ \ \ e_X(d)=\frac{\int_d^\infty S_X(x) \ dx}{S_X(d)}

\displaystyle (5a) \ \ \ \ \ e_X(d)=\frac{\sum \limits_{x \ge d} S_X(x) }{S_X(d)}

\displaystyle (5b) \ \ \ \ \ e_X(d)=\frac{\biggl(\sum \limits_{x>d} S_X(x)\biggr)+(w+1-d) S_X(w) }{S_X(d)}


In both (5a) and (5b), we assume that the support of X is the set of nonnegative integers. In (5a), we assume that the deductible d is a positive integer. In (5b), the deductible d is free to be any positive number and w is the largest integer such that w \le d. The formulation (4) is obtained by using integration by parts (also see theorem 3.1 in [1]). The formulations of (5a) and (5b) are a result of applying theorem 3.2 in [1].
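
As a quick numerical sanity check (my own illustration, not part of the original discussion), the following Python sketch verifies that formulas (2) and (4) agree, using an exponential loss with an arbitrarily chosen rate and deductible. It assumes numpy and scipy are available.

    import numpy as np
    from scipy import integrate, stats

    alpha = 0.5                        # assumed rate parameter
    d = 3.0                            # assumed deductible
    dist = stats.expon(scale=1/alpha)

    # Formula (2): integrate (x - d) f(x) over (d, infinity), divide by S(d)
    num2, _ = integrate.quad(lambda x: (x - d) * dist.pdf(x), d, np.inf)

    # Formula (4): integrate the survival function S(x) over (d, infinity)
    num4, _ = integrate.quad(dist.sf, d, np.inf)

    # Both quotients agree; by memorylessness they also equal 1/alpha
    print(num2 / dist.sf(d), num4 / dist.sf(d), 1/alpha)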

The mean excess loss function provides information about the tail weight of a distribution; see the previous post The Pareto distribution and Example 3 below.
—————————————————————————————————————-
The Mean in Question 2
The average that we need to compute is the mean of the following random variable, where (a)_+ denotes the value a whenever a>0 and zero otherwise.


\displaystyle (6) \ \ \ \ \ (X-d)_+=\left\{\begin{matrix}0&\ X<d\\{X-d}&\ X \ge d \end{matrix}\right.


The mean E((X-d)_+) is calculated over all losses. When the loss is less than the deductible d, the insurer has no obligation to make a payment to the insured and the payment is assumed to be zero in the calculation of E[(X-d)_+]. The following is how this expected value is calculated depending on whether the loss X is continuous or discrete.


\displaystyle (7) \ \ \ \ \ E((X-d)_+)=\int_d^\infty (x-d) \ f_X(x) \ dx

\displaystyle (8) \ \ \ \ \ E((X-d)_+)=\sum \limits_{x>d} (x-d) \ P(X=x)


Based on the definitions, the following is how the two averages are related.

\displaystyle E[(X-d)_+]=e_X(d) \ [1-F_X(d)] \ \ \ \text{or} \ \ \ E[(X-d)_+]=e_X(d) \ S_X(d)
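
The following Monte Carlo sketch (an illustration of mine, with arbitrary parameters) makes the per-payment versus per-loss distinction concrete: the empirical per-payment average times the empirical S_X(d) reproduces the empirical per-loss average.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 5.0                                       # assumed deductible
    losses = rng.exponential(scale=10.0, size=1_000_000)

    payments = np.maximum(losses - d, 0.0)        # (X - d)_+ for every loss
    per_loss = payments.mean()                    # estimates E[(X - d)_+]
    per_payment = payments[losses > d].mean()     # estimates e_X(d)
    S_d = (losses > d).mean()                     # estimates S_X(d)

    print(per_payment * S_d, per_loss)            # the two sides match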

—————————————————————————————————————-
The Limited Expected Value
For a given positive constant u, the limited loss variable is defined by


\displaystyle (9) \ \ \ \ \ X \wedge u=\left\{\begin{matrix}X&\ X<u\\{u}&\ X \ge u \end{matrix}\right.


The expected value E(X \wedge u) is called the limited expected value. In an insurance application, u is a policy limit that sets a maximum on the benefit to be paid. The following is how the limited expected value is calculated depending on whether the loss X is continuous or discrete.


\displaystyle (10) \ \ \ \ \ E(X \wedge u)=\int_{-\infty}^u x \ f_X(x) \ dx+u \ S_X(u)

\displaystyle (11) \ \ \ \ \ E(X \wedge u)=\biggl(\sum \limits_{x < u} x \ P(X=x)\biggr)+u \ S_X(u)


Interestingly, we have the following relation.


\displaystyle (X-d)_+ + (X \wedge d)=X \ \ \ \ \ \ \text{and} \ \ \ \ \ \ E[(X-d)_+] + E(X \wedge d)=E(X)


The above statement indicates that purchasing a policy with a deductible d and another policy with a policy maximum d is equivalent to buying full coverage.

Another way to interpret X \wedge d is that it is the amount of loss that is eliminated by having a deductible in the insurance policy. If the insurance policy pays the loss in full, then the insurance payment is X and the insurer expects to pay E(X). With a deductible provision in the policy, the insurer is only liable for the amount (X-d)_+ and expects to pay E[(X-d)_+] per loss. Consequently E(X \wedge d) is the expected amount of the loss that is eliminated by the deductible provision. The following summarizes this observation.


\displaystyle (X \wedge d)=X-(X-d)_+ \ \ \ \ \ \ \text{and} \ \ \ \ \ \ E(X \wedge d)=E(X)-E[(X-d)_+]
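
A tiny elementwise check (illustrative only, with made-up numbers) of the identity X=(X-d)_+ + (X \wedge d):

    import numpy as np

    rng = np.random.default_rng(1)
    d = 5.0
    losses = rng.exponential(scale=10.0, size=10)

    excess = np.maximum(losses - d, 0.0)   # (X - d)_+, the deductible policy
    limited = np.minimum(losses, d)        # X ^ d, the policy capped at d
    print(np.allclose(excess + limited, losses))   # True: full coverage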

—————————————————————————————————————-
Example 1
Let the loss random variable X be exponential with rate \alpha, i.e. with pdf f(x)=\alpha e^{-\alpha x}. We have E(X)=\frac{1}{\alpha}. Because of the memoryless property of the exponential distribution, given that a loss exceeds the deductible, the mean payment is the same as the original mean. Thus e_X(d)=\frac{1}{\alpha}. Then the per-loss average is:

\displaystyle E[(X-d)_+]=e_X(d) \ S(d) = \frac{e^{-\alpha d}}{\alpha}


Thus, with a deductible provision in the policy, the insurer is expected to pay \displaystyle \frac{e^{-\alpha d}}{\alpha} per loss instead of \displaystyle \frac{1}{\alpha}. Hence the expected amount of loss eliminated (from the insurer’s point of view) is \displaystyle E(X \wedge d)=\frac{1-e^{-\alpha d}}{\alpha}.
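
The closed forms in Example 1 can be confirmed by simulation; the sketch below (with alpha and d chosen arbitrarily) estimates both E[(X-d)_+] and E(X \wedge d).

    import numpy as np

    alpha, d = 0.25, 4.0
    rng = np.random.default_rng(2)
    losses = rng.exponential(scale=1/alpha, size=1_000_000)

    print(np.maximum(losses - d, 0.0).mean(), np.exp(-alpha*d) / alpha)      # E[(X-d)_+]
    print(np.minimum(losses, d).mean(), (1 - np.exp(-alpha*d)) / alpha)      # E(X ^ d)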


Example 2
Suppose that the loss variable has a Gamma distribution with rate parameter \alpha and shape parameter n=2. The pdf is \displaystyle g(x)=\alpha^2 \ x \ e^{-\alpha x}. The insurer’s expected payment without the deductible is E(X)=\frac{2}{\alpha}. The survival function S(x) is:

\displaystyle S(x)=e^{-\alpha x}(1+\alpha x)


For the losses that exceed the deductible, the insurer’s expected payment is:

\displaystyle \begin{aligned}e_X(d)&=\frac{\int_d^{\infty} S(x) \ dx}{S(d)}\\&=\frac{\int_d^{\infty} e^{-\alpha x}(1+\alpha x) \ dx}{e^{-\alpha d}(1+\alpha d)} \\&=\frac{\frac{e^{-\alpha d}}{\alpha}+d e^{-\alpha d}+\frac{e^{-\alpha d}}{\alpha}}{e^{-\alpha d}(1+\alpha d)} \\&=\frac{\frac{2}{\alpha}+d}{1+\alpha d} \end{aligned}


Then the insurer’s expected payment per loss is E[(X-d)_+]:

\displaystyle \begin{aligned}E[(X-d)_+]&=e_X(d) \ S(d) \\&=\frac{\frac{2}{\alpha}+d}{1+\alpha d} \ \ e^{-\alpha d}(1+\alpha d) \\&=e^{-\alpha d} \ \biggl(\frac{2}{\alpha}+d\biggr) \end{aligned}


With a deductible in the policy, the following is the expected amount of loss eliminated (from the insurer’s point of view).

\displaystyle \begin{aligned}E[X \wedge d]&=E(X)-E[(X-d)_+] \\&=\frac{2}{\alpha}-e^{-\alpha d} \ \biggl(\frac{2}{\alpha}+d\biggr) \\&=\frac{2}{\alpha}\biggl(1-e^{-\alpha d}\biggr)-d e^{-\alpha d} \end{aligned}
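
As a check on Example 2 (again a sketch of my own, with an arbitrary rate and deductible), the closed forms for e_X(d) and E[(X-d)_+] match direct numerical integration of formulas (4) and (7):

    import numpy as np
    from scipy import integrate, stats

    alpha, d = 0.5, 3.0
    dist = stats.gamma(a=2, scale=1/alpha)    # shape 2, rate alpha

    num, _ = integrate.quad(dist.sf, d, np.inf)                          # formula (4)
    print(num / dist.sf(d), (2/alpha + d) / (1 + alpha*d))

    num, _ = integrate.quad(lambda x: (x - d) * dist.pdf(x), d, np.inf)  # formula (7)
    print(num, np.exp(-alpha*d) * (2/alpha + d))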


Example 3
Suppose the loss variable X has a Pareto distribution with the following pdf:


\displaystyle f_X(x)=\frac{\beta \ \alpha^\beta}{(x+\alpha)^{\beta+1}} \ \ \ \ \ x>0


If the insurance policy is to pay the full loss, then the insurer’s expected payment per loss is \displaystyle E(X)=\frac{\alpha}{\beta-1} provided that the shape parameter \beta is larger than one.

The mean excess loss function of the Pareto distribution has a linear form that is increasing (see the previous post The Pareto distribution). The following is the mean excess loss function:


\displaystyle e_X(d)=\frac{1}{\beta-1} \ d +\frac{\alpha}{\beta-1}=\frac{1}{\beta-1} \ d +E(X)


If the loss is modeled by such a distribution, this is an uninsurable risk! The higher the deductible, the larger the expected payment given that the loss exceeds the deductible: the expected excess payment is always the unmodified mean E(X) plus a component that increases linearly in d.

The increasing mean excess loss function is an indication that the Pareto distribution is a heavy tailed distribution. In general, an increasing mean excess loss function is an indication of a heavy tailed distribution. On the other hand, a decreasing mean excess loss function indicates a light tailed distribution. The exponential distribution has a constant mean excess loss function and is considered a medium tailed distribution.
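
This linear growth can be seen numerically. In the sketch below (assumed parameters), scipy's lomax distribution carries a Pareto density of exactly this form, and the computed e_X(d) grows with d as the closed form predicts.

    import numpy as np
    from scipy import integrate, stats

    alpha, beta = 10.0, 3.0
    dist = stats.lomax(c=beta, scale=alpha)  # pdf beta*alpha^beta/(x+alpha)^(beta+1)

    for d in [0.0, 5.0, 10.0, 20.0]:
        num, _ = integrate.quad(dist.sf, d, np.inf)
        print(d, num / dist.sf(d), (d + alpha) / (beta - 1))  # numeric vs closed form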

Reference

  1. Bowers N. L., Gerber H. U., Hickman J. C., Jones D. A., Nesbitt C. J., Actuarial Mathematics, First Edition, The Society of Actuaries, Itasca, Illinois, 1986
  2. Klugman S. A., Panjer H. H., Willmot G. E., Loss Models: From Data to Decisions, Second Edition, Wiley-Interscience, a John Wiley & Sons, Inc., New York, 2004

An example of a mixture

We use an example to motivate the definition of a mixture distribution.

Example 1

Suppose that the loss arising from an insured randomly selected from a large group of insureds follows an exponential distribution with probability density function (pdf) f_X(x)=\theta e^{-\theta x}, x>0, where \theta is a positive constant. The mean claim cost for this randomly selected insured is \frac{1}{\theta}, so the parameter \theta reflects the risk characteristics of the insured. Since the population of insureds is large, there is uncertainty in the parameter \theta. It is more appropriate to regard \theta as a random variable in order to capture the wide range of risk characteristics across the individuals in the population. As a result, the pdf indicated above is not an unconditional pdf, but rather a conditional pdf of X, conditional on a realized value of the random variable \Theta:

    \displaystyle f_{X \lvert \Theta}(x \lvert \theta)=\theta e^{-\theta x}, \ \ \ \ \ x>0

What about the marginal (unconditional) pdf of X? Let’s assume that the pdf of \Theta is given by \displaystyle f_\Theta(\theta)=\frac{1}{2} \ \theta^2 \ e^{-\theta}. Then the unconditional pdf of X is the weighted average of the conditional pdf.

    \displaystyle \begin{aligned}f_X(x)&=\int_0^{\infty} f_{X \lvert \Theta}(x \lvert \theta) \ f_\Theta(\theta) \ d \theta \\&=\int_0^{\infty} \biggl[\theta \ e^{-\theta x}\biggr] \ \biggl[\frac{1}{2} \ \theta^2 \ e^{-\theta}\biggr] \ d \theta \\&=\int_0^{\infty} \frac{1}{2} \ \theta^3 \ e^{-\theta(x+1)} \ d \theta \\&=\frac{1}{2} \frac{6}{(x+1)^4} \int_0^{\infty} \frac{(x+1)^4}{3!} \ \theta^{4-1} \ e^{-\theta(x+1)} \ d \theta \\&=\frac{3}{(x+1)^4} \end{aligned}
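
A short numerical check (not part of the original derivation) confirms the mixture integral at a few points:

    import numpy as np
    from scipy import integrate

    f_theta = lambda t: 0.5 * t**2 * np.exp(-t)   # Gamma mixing weight

    for x in [0.5, 1.0, 2.0, 5.0]:
        val, _ = integrate.quad(lambda t: t * np.exp(-t*x) * f_theta(t), 0, np.inf)
        print(x, val, 3 / (x + 1)**4)             # matches the Pareto pdf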

Several other distributional quantities are also weighted averages of their conditional counterparts, including the unconditional mean and the second moment.

    \displaystyle \begin{aligned}E(X)&=\int_0^{\infty} E(X \lvert \Theta=\theta) \ f_\Theta(\theta) \ d \theta \\&=\int_0^{\infty} \biggl[\frac{1}{\theta} \biggr] \ \biggl[\frac{1}{2} \ \theta^2 \ e^{-\theta}\biggr] \ d \theta \\&=\int_0^{\infty} \frac{1}{2} \ \theta \ e^{-\theta} \ d \theta \\&=\frac{1}{2} \end{aligned}

    \displaystyle \begin{aligned}E(X^2)&=\int_0^{\infty} E(X^2 \lvert \Theta=\theta) \ f_\Theta(\theta) \ d \theta \\&=\int_0^{\infty} \biggl[\frac{2}{\theta^2} \biggr] \ \biggl[\frac{1}{2} \ \theta^2 \ e^{-\theta}\biggr] \ d \theta \\&=\int_0^{\infty} e^{-\theta} \ d \theta \\&=1 \end{aligned}

As a result, the unconditional variance is Var(X)=1-\frac{1}{4}=\frac{3}{4}. Note that the unconditional variance is not the weighted average of the conditional variances, which only produces \frac{1}{2}.

\displaystyle \begin{aligned}E[Var(X \lvert \Theta)]&=\int_0^{\infty} Var(X \lvert \Theta=\theta) \ f_\Theta(\theta) \ d \theta \\&=\int_0^{\infty} \biggl[\frac{1}{\theta^2} \biggr] \ \biggl[\frac{1}{2} \ \theta^2 \ e^{-\theta}\biggr] \ d \theta \\&=\int_0^{\infty} \frac{1}{2} \ e^{-\theta} \ d \theta \\&=\frac{1}{2} \end{aligned}

It turns out that the unconditional variance has two components, the expected value of the conditional variances and the variance of the conditional means. In this example, the former is \frac{1}{2} and the latter is \frac{1}{4}. The additional variance in the amount of \frac{1}{4} is a reflection that there is uncertainty in the parameter \theta.

\displaystyle \begin{aligned}Var(X)&=E[Var(X \lvert \Theta)]+Var[E(X \lvert \Theta)] \\&=\frac{1}{2}+\frac{1}{4}\\&=\frac{3}{4}  \end{aligned}
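
The two-stage structure of the mixture also lends itself to simulation. In this sketch (mine, for illustration), we use the fact that the mixing pdf \frac{1}{2} \theta^2 e^{-\theta} is a Gamma pdf with shape 3 and rate 1, so we can draw \Theta first and then X given \Theta:

    import numpy as np

    rng = np.random.default_rng(3)
    theta = rng.gamma(shape=3.0, scale=1.0, size=1_000_000)  # pdf (1/2) t^2 e^{-t}
    x = rng.exponential(scale=1/theta)                       # X | Theta = theta

    print(x.mean(), 0.5)    # E(X)
    print(x.var(), 0.75)    # Var(X) = 1/2 + 1/4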

——————————————————————————————————————-

The Definition of Mixture
The unconditional pdf f_X(x) derived in Example 1 is that of a Pareto distribution. Thus the Pareto distribution is a continuous mixture of exponential distributions with Gamma mixing weights.

Mathematically speaking, a mixture arises when a probability density function f(x \lvert \theta) depends on a parameter \theta that is uncertain and is itself a random variable with density g(\theta). Then taking the weighted average of f(x \lvert \theta) with g(\theta) as weight produces the mixture distribution.

A continuous random variable X is said to be a mixture if its probability density function f_X(x) is a weighted average of a family of probability density functions f(x \lvert \theta). The random variable \Theta is said to be the mixing random variable and its pdf g(\theta) is said to be the mixing weight. An equivalent definition of mixture is that the distribution function F_X(x) is a weighted average of a family of distribution functions indexed by a mixing variable. Thus X is a mixture if one of the following holds.

\displaystyle f_X(x)=\int_{-\infty}^{\infty} f(x \lvert \theta) \ g(\theta) \ d \theta

\displaystyle F_X(x)=\int_{-\infty}^{\infty} F(x \lvert \theta) \ g(\theta) \ d \theta

Similarly, a discrete random variable is a mixture if its probability function (or distribution function) is a weighted sum of a family of probability functions (or distribution functions). Thus X is a mixture if one of the following holds.

\displaystyle P(X=x)=\sum \limits_{y} P(X=x \lvert Y=y) \ P(Y=y)

\displaystyle P(X \le x)=\sum \limits_{y} P(X \le x \lvert Y=y) \ P(Y=y)
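
As a small discrete illustration (my own, with made-up parameters), let Y pick a Poisson mean of 1 or 4 with equal probability; the resulting probability function is the weighted sum of the two Poisson probability functions:

    from scipy import stats

    mix = lambda x: 0.5*stats.poisson.pmf(x, 1) + 0.5*stats.poisson.pmf(x, 4)

    print(sum(mix(x) for x in range(200)))      # sums to 1, as a pmf should
    print(sum(x*mix(x) for x in range(200)))    # mean = 0.5*1 + 0.5*4 = 2.5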

Additional Practice
See this blog post for practice problems on mixture distributions.


A basic look at joint distributions

This is a discussion of how to work with joint distributions of two random variables. We limit the discussion to continuous random variables; the discrete case is similar (for the most part, replacing the integral signs with summation signs). Suppose X and Y are continuous random variables where f_{X,Y}(x,y) is the joint probability density function. This means that f_{X,Y}(x,y) satisfies the following two properties:

  • for each point (x,y) in the Euclidean plane, f_{X,Y}(x,y) is a nonnegative real number,
  • \displaystyle \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x,y) \ dx \ dy=1.

Because of the second bullet point, the function f_{X,Y}(x,y) must be an integrable function. We will not overly focus on this point and instead be satisfied with knowing that it is possible to integrate f_{X,Y}(x,y) over the entire xy plane and its many reasonable subregions.

Another way to think about f_{X,Y}(x,y) is that it assigns the density to each point in the xy plane (i.e. it tells us how much weight is assigned to each point). Consequently, if we want to know the probability that (X,Y) falls in the region A, we simply evaluate the following integral:

    \displaystyle \int_{A} f_{X,Y}(x,y) \ dx \ dy.

For instance, to find P(X<Y) and P(X+Y \le z), where z>0, we evaluate the integral over the regions x<y and x+y \le z, respectively. The integrals are:

    \displaystyle P(X<Y)=\int_{-\infty}^{\infty} \int_{x}^{\infty} f_{X,Y}(x,y) \ dy \ dx

    \displaystyle P(X+Y \le z)=\int_{-\infty}^{\infty} \int_{-\infty}^{z-x} f_{X,Y}(x,y) \ dy \ dx

Note that P(X+Y \le z) is the distribution function F_Z(z)=P(X+Y \le z) where Z=X+Y. Then the pdf of Z is obtained by differentiation, i.e. f_Z(z)=F_Z^{'}(z).
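
For a concrete illustration (an example of mine, not from the text), take X and Y to be independent unit exponentials with joint density f(x,y)=e^{-x-y} for x,y>0; then P(X<Y)=1/2 and P(X+Y \le z)=1-e^{-z}(1+z), and both probabilities can be recovered by integrating over the appropriate regions:

    import numpy as np
    from scipy import integrate

    f = lambda y, x: np.exp(-x - y)   # dblquad integrates its first argument innermost

    p, _ = integrate.dblquad(f, 0, np.inf, lambda x: x, np.inf)   # region x < y
    print(p, 0.5)

    z = 2.0
    p, _ = integrate.dblquad(f, 0, z, 0, lambda x: z - x)         # region x + y <= z
    print(p, 1 - np.exp(-z) * (1 + z))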

In practice, all integrals involving the density functions need be taken only over those x and y values where the density is positive.

——————————————————————————————————————–

Marginal Density

The joint density function f_{X,Y}(x,y) describes how the two variables behave in relation to one another. The marginal probability density function (marginal pdf) is of interest if we are only concerned with one of the variables. To obtain the marginal pdf of X, we simply integrate out the other variable. The following integral produces the marginal pdf of X:

    \displaystyle f_X(x)=\int_{-\infty}^{\infty} f_{X,Y}(x,y) \ dy

The marginal pdf of X is obtained by summing all the density along the vertical line that meets the x axis at the point (x,0). Thus f_X(x) represents the sum total of all density f_{X,Y}(x,y) along a vertical line.

Integrating the density along each vertical line and then summing over all vertical lines accounts for the total density of 1.0. Thus f_X(x) can be regarded as a single-variable pdf.

    \displaystyle \begin{aligned}\int_{-\infty}^{\infty}f_X(x) \ dx&=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x,y) \ dy \ dx=1 \\&\text{ } \end{aligned}

The same can be said for the marginal pdf of the other variable Y, except that f_Y(y) is the sum (integral in this case) of all the density on a horizontal line that meets the y axis at the point (0,y).

    \displaystyle f_Y(y)=\int_{-\infty}^{\infty} f_{X,Y}(x,y) \ dx

——————————————————————————————————————–

Example 1

Let X and Y be jointly distributed according to the following pdf:

    \displaystyle f_{X,Y}(x,y)=y^2 \ e^{-y(x+1)}, \text{ where } x>0,y>0

The following derives the marginal pdfs for X and Y:

    \displaystyle \begin{aligned}f_X(x)&=\int_0^{\infty} y^2 \ e^{-y(x+1)} \ dy \\&\text{ } \\&=\frac{2}{(x+1)^3} \int_0^{\infty} \frac{(x+1)^3}{2!} y^{3-1} \ e^{-y(x+1)} \ dy \\&\text{ } \\&=\frac{2}{(x+1)^3} \end{aligned}

    \displaystyle \begin{aligned}f_Y(y)&=\int_0^{\infty} y^2 \ e^{-y(x+1)} \ dx \\&\text{ } \\&=y \ e^{-y} \int_0^{\infty} y \ e^{-y x} \ dx \\&\text{ } \\&=y \ e^{-y} \end{aligned}

In the middle step of the derivation of f_X(x), the integrand is the Gamma pdf with rate parameter x+1 and shape parameter 3, hence the integral in that step equals 1. In the middle step for f_Y(y), the integrand is the pdf of an exponential distribution.
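
The marginals can also be checked pointwise by numerical integration (a quick sketch of my own):

    import numpy as np
    from scipy import integrate

    joint = lambda x, y: y**2 * np.exp(-y * (x + 1))

    for t in [0.5, 1.0, 2.0]:
        val, _ = integrate.quad(lambda y: joint(t, y), 0, np.inf)
        print(val, 2 / (t + 1)**3)          # marginal pdf of X at x = t
        val, _ = integrate.quad(lambda x: joint(x, t), 0, np.inf)
        print(val, t * np.exp(-t))          # marginal pdf of Y at y = t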

——————————————————————————————————————–

Conditional Density

Now consider the joint density f_{X,Y}(x,y) restricted to a vertical line, treating the vertical line as a probability distribution. In essence, we are restricting our focus to one particular realized value of X. Given a realized value x of X, how do we describe the behavior of the other variable Y? Since the marginal pdf f_X(x) is the sum total of all density on a vertical line, we express the conditional density as the joint density f_{X,Y}(x,y) taken as a fraction of f_X(x).

    \displaystyle f_{Y \lvert X}(y \lvert x)=\frac{f_{X,Y}(x,y)}{f_X(x)}

It is easy to see that f_{Y \lvert X}(y \lvert x) is a probability density function of Y. When we already know that X has a realized value, this pdf tells us information about how Y behaves. Thus this pdf is called the conditional pdf of Y given X=x.

Given a realized value x of X, we may want to know the conditional mean and the higher moments of Y.

    \displaystyle E(Y \lvert X=x)=\int_{-\infty}^{\infty} y \ f_{Y \lvert X}(y \lvert x) \ dy

    \displaystyle E(Y^n \lvert X=x)=\int_{-\infty}^{\infty} y^n \ f_{Y \lvert X}(y \lvert x) \ dy \text{ where } n>1

In particular, the conditional variance of Y is:

    \displaystyle Var(Y \lvert X=x)=E(Y^2 \lvert X=x)-E(Y \lvert X=x)^2

The discussion for the conditional density of X given a realized value y of Y is similar, except that we restrict the joint density f_{X,Y}(x,y) on a horizontal line. We have the following information about the conditional distribution of X given a realized value Y=y.

    \displaystyle f_{X \lvert Y}(x \lvert y)=\frac{f_{X,Y}(x,y)}{f_Y(y)}

    \displaystyle E(X \lvert Y=y)=\int_{-\infty}^{\infty} x \ f_{X \lvert Y}(x \lvert y) \ dx

    \displaystyle E(X^n \lvert Y=y)=\int_{-\infty}^{\infty} x^n \ f_{X \lvert Y}(x \lvert y) \ dx \text{ where } n>1

In particular, the conditional variance of X is:

    \displaystyle Var(X \lvert Y=y)=E(X^2 \lvert Y=y)-E(X \lvert Y=y)^2

——————————————————————————————————————–

Example 1 (Continued)

The following derives the conditional density functions:

    \displaystyle \begin{aligned}f_{Y \lvert X}(y \lvert x)&=\frac{f_{X,Y}(x,y)}{f_X(x)} \\&\text{ } \\&=\displaystyle \frac{y^2 e^{-y(x+1)}}{\frac{2}{(x+1)^3}}  \\&\text{ } \\&=\frac{(x+1)^3}{2!} \ y^2 \ e^{-y(x+1)} \end{aligned}

    \displaystyle \begin{aligned}f_{X \lvert Y}(x \lvert y)&=\frac{f_{X,Y}(x,y)}{f_Y(y)} \\&\text{ } \\&=\displaystyle \frac{y^2 e^{-y(x+1)}}{y \ e^{-y}}  \\&\text{ } \\&=y \ e^{-y \ x} \end{aligned}

The conditional density f_{Y \lvert X}(y \lvert x) is that of a Gamma distribution with rate parameter x+1 and shape parameter 3. So given a realized value x of X, Y has a Gamma distribution whose rate parameter is x+1 and whose shape parameter is 3. On the other hand, the conditional density f_{X \lvert Y}(x \lvert y) is that of an exponential distribution: given a realized value y of Y, X has an exponential distribution with rate parameter y. Since the conditional distributions are familiar parametric distributions, we have the following conditional means and conditional variances.

    \displaystyle E(Y \lvert X=x)=\frac{3}{x+1} \ \ \ \ \ \ \ \ \ \ \ \ \ \ Var(Y \lvert X=x)=\frac{3}{(x+1)^2}

    \displaystyle E(X \lvert Y=y)=\frac{1}{y} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Var(X \lvert Y=y)=\frac{1}{y^2}

Note that both conditional means are decreasing functions: the larger the realized value of X, the smaller the mean E(Y \lvert X=x), and likewise, the larger the realized value of Y, the smaller the mean E(X \lvert Y=y). It appears that X and Y move in opposite directions. This is also confirmed by the fact that Cov(X,Y)=-1.
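
The negative covariance can be estimated by simulating the two-stage structure (a sketch of mine): draw Y from the Gamma distribution with pdf y e^{-y} (shape 2, rate 1), then draw X exponentially with rate y. Convergence is slow here because X is heavy tailed (its variance is infinite), but the estimate settles near -1.

    import numpy as np

    rng = np.random.default_rng(4)
    y = rng.gamma(shape=2.0, scale=1.0, size=2_000_000)  # pdf y e^{-y}
    x = rng.exponential(scale=1/y)                       # X | Y = y, rate y

    print(np.cov(x, y)[0, 1])   # approximately -1
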
——————————————————————————————————————–

Mixture Distributions

In the preceding discussion, the conditional distributions are derived from the joint distributions and the marginal distributions. In some applications, it is the opposite: we know the conditional distribution of one variable given the other variable and construct the joint distributions. We have the following:

    \displaystyle \begin{aligned}f_{X,Y}(x,y)&=f_{Y \lvert X}(y \lvert x) \ f_X(x) \\&\text{ } \\&=f_{X \lvert Y}(x \lvert y) \ f_Y(y) \end{aligned}

The form of the joint pdf indicated above has an interesting interpretation as a mixture. Using an insurance example, suppose that f_{X \lvert Y}(x \lvert y) is a model of the claim cost of a randomly selected insured, where y is a realized value of a parameter Y that indicates the risk characteristics of the insured. The members of this large population have a wide variety of risk characteristics, and the random variable Y captures the risk characteristics across the entire population. Consequently, the unconditional claim cost for a randomly selected insured is:

    \displaystyle f_X(x)=\int_{-\infty}^{\infty} f_{X \lvert Y}(x \lvert y) \ f_Y(y) \ dy

Note that the above unconditional pdf f_X(x) is a weighted average of conditional pdfs. Thus the distribution derived in this manner is called a mixture distribution. The pdf f_Y(y) is called the mixture weight or mixing weight. Some distributional quantities of a mixture distribution are also weighted averages of their conditional counterparts, including the distribution function, the mean, and the higher moments. Thus we have:

    \displaystyle F_X(x)=\int_{-\infty}^{\infty} F_{X \lvert Y}(x \lvert y) \ f_Y(y) \ dy

    \displaystyle E(X)=\int_{-\infty}^{\infty} E(X \lvert Y=y) \ f_Y(y) \ dy

    \displaystyle E(X^k)=\int_{-\infty}^{\infty} E(X^k \lvert Y=y) \ f_Y(y) \ dy

In the above derivations, the cumulative distribution function F_X(x) and the moments E(X^k) are weighted averages of their conditional counterparts. However, the variance Var(X) is not the weighted average of the conditional variances. To find out why, see the post The variance of a mixture.
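
As a closing sketch (my own check), the weighted-average property of the distribution function can be verified for Example 1: the conditional cdfs 1-e^{-yx}, averaged under the weight y e^{-y}, reproduce the unconditional cdf F_X(x)=1-\frac{1}{(x+1)^2}.

    import numpy as np
    from scipy import integrate

    for x in [0.5, 1.0, 3.0]:
        val, _ = integrate.quad(lambda y: (1 - np.exp(-y*x)) * y * np.exp(-y), 0, np.inf)
        print(val, 1 - 1/(x + 1)**2)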