This is a discussion of how to work with joint distributions of two random variables. We limit the discussion to continuous random variables. The discussion of the discrete case is similar (for the most part replacing the integral signs with summation signs). Suppose $X$ and $Y$ are continuous random variables where $f_{X,Y}(x,y)$ is the joint probability density function. What this means is that $f_{X,Y}(x,y)$ satisfies the following two properties:
- for each point $(x,y)$ in the Euclidean plane, $f_{X,Y}(x,y)$ is a nonnegative real number,
- the total density over the entire plane is one, i.e. $\displaystyle \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dx \, dy=1$.
Because of the second bullet point, the function $f_{X,Y}(x,y)$ must be an integrable function. We will not overly focus on this point and instead be satisfied with knowing that it is possible to integrate $f_{X,Y}(x,y)$ over the entire plane and over its many reasonable subregions.
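As a quick illustration of these two properties (using a simple density that is not part of the example discussed below), consider $f(x,y)=e^{-x-y}$ for $x>0$ and $y>0$, with $f(x,y)=0$ otherwise. It is clearly nonnegative at every point, and the total density over the plane is one:

$\displaystyle \int_0^{\infty} \int_0^{\infty} e^{-x-y} \, dx \, dy=\int_0^{\infty} e^{-y} \left( \int_0^{\infty} e^{-x} \, dx \right) \, dy=\int_0^{\infty} e^{-y} \, dy=1$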
Another way to think about $f_{X,Y}(x,y)$ is that it assigns the density to each point $(x,y)$ in the plane (i.e. it tells us how much weight is assigned to each point). Consequently, if we want to know the probability that $(X,Y)$ falls in the region $A$, we simply evaluate the following integral:

$\displaystyle P[(X,Y) \in A]=\iint_A f_{X,Y}(x,y) \, dx \, dy$
For instance, to find $P[X<Y]$ and $P[X+Y \le z]$, where $z$ is a fixed real number, we evaluate the integral over the regions $\{(x,y): x<y\}$ and $\{(x,y): x+y \le z\}$, respectively. The integrals are:

$\displaystyle P[X<Y]=\iint_{x<y} f_{X,Y}(x,y) \, dx \, dy$

$\displaystyle P[X+Y \le z]=\iint_{x+y \le z} f_{X,Y}(x,y) \, dx \, dy$

Note that $P[X+Y \le z]$ is the distribution function $F_Z(z)$ where $Z=X+Y$. Then the pdf of $X+Y$ is obtained by differentiation, i.e. $f_Z(z)=\frac{d}{dz} F_Z(z)$.
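Continuing with the illustrative density $f(x,y)=e^{-x-y}$, $x>0$, $y>0$, introduced above, the first of these probabilities works out to:

$\displaystyle P[X<Y]=\int_0^{\infty} \int_x^{\infty} e^{-x-y} \, dy \, dx=\int_0^{\infty} e^{-2x} \, dx=\frac{1}{2}$

This agrees with intuition, since this particular density is symmetric in $x$ and $y$.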
In practice, all integrals involving the density functions need be taken only over those $x$ and $y$ values where the density is positive.
The joint density function $f_{X,Y}(x,y)$ describes how the two variables behave in relation to one another. The marginal probability density function (marginal pdf) is of interest if we are only concerned with one of the variables. To obtain the marginal pdf of $X$, we simply integrate out the other variable $y$. The following integral produces the marginal pdf of $X$:

$\displaystyle f_X(x)=\int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dy$
The marginal pdf of $X$ is obtained by summing all the density along the vertical line that meets the $x$-axis at the point $(x,0)$ (see Figure 1). Thus $f_X(x)$ represents the sum total of all density $f_{X,Y}(x,y)$ along a vertical line.
Obviously, if we find the marginal pdf $f_X(x)$ for each vertical line and then sum (i.e. integrate) over all the vertical lines, the result will be 1.0. Thus $f_X(x)$ can be regarded as a single-variable pdf.
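This point can be verified by switching the order of integration:

$\displaystyle \int_{-\infty}^{\infty} f_X(x) \, dx=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dy \, dx=1$

where the last equality is simply the second bullet point at the beginning of this discussion.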
The same can be said for the marginal pdf of the other variable $Y$, except that $f_Y(y)$ is the sum (integral in this case) of all the density on a horizontal line that meets the $y$-axis at the point $(0,y)$:

$\displaystyle f_Y(y)=\int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dx$
Example 1
Let $X$ and $Y$ be jointly distributed according to the following pdf:

$\displaystyle f_{X,Y}(x,y)=y^2 \, e^{-y(x+1)} \ \ \ \ \ x>0, \ y>0$
The following derives the marginal pdfs for $X$ and $Y$:

$\displaystyle f_X(x)=\int_0^{\infty} y^2 \, e^{-y(x+1)} \, dy=\frac{2}{(x+1)^3} \int_0^{\infty} \frac{(x+1)^3}{2} \, y^2 \, e^{-(x+1) y} \, dy=\frac{2}{(x+1)^3} \ \ \ \ \ x>0$

$\displaystyle f_Y(y)=\int_0^{\infty} y^2 \, e^{-y(x+1)} \, dx=y \, e^{-y} \int_0^{\infty} y \, e^{-y x} \, dx=y \, e^{-y} \ \ \ \ \ y>0$
In the middle step of the derivation of $f_X(x)$, the integrand is the Gamma pdf with shape parameter 3 and rate parameter $x+1$, hence the integral in that step becomes 1. In the middle step for $f_Y(y)$, the integrand is the pdf of an exponential distribution with parameter $y$.
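As a quick check on these derivations, each marginal pdf integrates to 1:

$\displaystyle \int_0^{\infty} \frac{2}{(x+1)^3} \, dx=\left[ -\frac{1}{(x+1)^2} \right]_0^{\infty}=1 \ \ \ \ \ \ \int_0^{\infty} y \, e^{-y} \, dy=\Gamma(2)=1$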
Now consider the joint density $f_{X,Y}(x,y)$ restricted to a vertical line, treating the density on the vertical line as a probability distribution. In essence, we are restricting our focus to one particular realized value of $X$. Given a realized value $x$ of $X$, how do we describe the behavior of the other variable $Y$? Since the marginal pdf $f_X(x)$ is the sum total of all density on a vertical line, we express the conditional density by taking the joint density as a fraction of $f_X(x)$:

$\displaystyle f_{Y|X}(y|x)=\frac{f_{X,Y}(x,y)}{f_X(x)}$
It is easy to see that $f_{Y|X}(y|x)$ is a probability density function in $y$. When we already know that $X$ has a realized value $x$, this pdf tells us information about how $Y$ behaves. Thus this pdf is called the conditional pdf of $Y$ given $X=x$.
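To see that it is indeed a pdf, note that for each fixed realized value $x$, the factor $\frac{1}{f_X(x)}$ is a constant, so that:

$\displaystyle \int_{-\infty}^{\infty} f_{Y|X}(y|x) \, dy=\frac{1}{f_X(x)} \int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dy=\frac{f_X(x)}{f_X(x)}=1$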
Given a realized value $x$ of $X$, we may want to know the conditional mean $E[Y|X=x]$ and the higher moments of $Y$:

$\displaystyle E[Y^k|X=x]=\int_{-\infty}^{\infty} y^k \, f_{Y|X}(y|x) \, dy \ \ \ \ \ k=1,2,3,\cdots$
In particular, the conditional variance of $Y$ is:

$\displaystyle Var[Y|X=x]=E[Y^2|X=x]-\left(E[Y|X=x]\right)^2$
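The shortcut on the right-hand side follows from expanding the defining integral of the conditional variance:

$\displaystyle Var[Y|X=x]=\int_{-\infty}^{\infty} \left(y-E[Y|X=x]\right)^2 \, f_{Y|X}(y|x) \, dy=E[Y^2|X=x]-\left(E[Y|X=x]\right)^2$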
The discussion for the conditional density of $X$ given a realized value of $Y$ is similar, except that we restrict the joint density to a horizontal line. We have the following information about the conditional distribution of $X$ given a realized value $y$:

$\displaystyle f_{X|Y}(x|y)=\frac{f_{X,Y}(x,y)}{f_Y(y)}$

$\displaystyle E[X^k|Y=y]=\int_{-\infty}^{\infty} x^k \, f_{X|Y}(x|y) \, dx \ \ \ \ \ k=1,2,3,\cdots$

In particular, the conditional variance of $X$ is:

$\displaystyle Var[X|Y=y]=E[X^2|Y=y]-\left(E[X|Y=y]\right)^2$
Example 1 (Continued)
The following derives the conditional density functions:

$\displaystyle f_{Y|X}(y|x)=\frac{y^2 \, e^{-y(x+1)}}{\frac{2}{(x+1)^3}}=\frac{(x+1)^3}{2} \, y^2 \, e^{-(x+1) y} \ \ \ \ \ y>0$

$\displaystyle f_{X|Y}(x|y)=\frac{y^2 \, e^{-y(x+1)}}{y \, e^{-y}}=y \, e^{-y x} \ \ \ \ \ x>0$
The conditional density $f_{Y|X}(y|x)$ is that of a Gamma distribution with parameters $x+1$ and 3. So given a realized value $x$ of $X$, $Y$ has a Gamma distribution whose rate parameter is $x+1$ (equivalently, scale parameter $\frac{1}{x+1}$) and whose shape parameter is 3. On the other hand, the conditional density $f_{X|Y}(x|y)$ is that of an exponential distribution. Given a realized value $y$ of $Y$, $X$ has an exponential distribution with parameter $y$. Since the conditional distributions are familiar parametric distributions, we have the following conditional means and conditional variances:

$\displaystyle E[Y|X=x]=\frac{3}{x+1} \ \ \ \ \ \ Var[Y|X=x]=\frac{3}{(x+1)^2}$

$\displaystyle E[X|Y=y]=\frac{1}{y} \ \ \ \ \ \ Var[X|Y=y]=\frac{1}{y^2}$
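Any of these four quantities can also be verified by direct integration. For example, using the substitution $u=y x$:

$\displaystyle E[X|Y=y]=\int_0^{\infty} x \cdot y \, e^{-y x} \, dx=\frac{1}{y} \int_0^{\infty} u \, e^{-u} \, du=\frac{1}{y}$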
Note that both conditional means are decreasing functions. The larger the realized value $x$ of $X$, the smaller the mean $E[Y|X=x]$. Likewise, the larger the realized value $y$ of $Y$, the smaller the mean $E[X|Y=y]$. It appears that $X$ and $Y$ move opposite of each other. This is also confirmed by the fact that $Cov(X,Y)<0$.
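To make this concrete, the covariance can be computed from the quantities derived above. The marginal means are $E[X]=\int_0^{\infty} x \cdot \frac{2}{(x+1)^3} \, dx=1$ and $E[Y]=\int_0^{\infty} y \cdot y \, e^{-y} \, dy=\Gamma(3)=2$, and conditioning on $X$ gives the cross moment:

$\displaystyle E[XY]=E\bigl[X \, E[Y|X]\bigr]=\int_0^{\infty} \frac{3x}{x+1} \cdot \frac{2}{(x+1)^3} \, dx=1$

Thus $Cov(X,Y)=E[XY]-E[X] \, E[Y]=1-1 \cdot 2=-1$, which is indeed negative.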
In the preceding discussion, the conditional distributions are derived from the joint distribution and the marginal distributions. In some applications, it is the opposite: we know the conditional distribution of one variable given the other variable and construct the joint distribution. We have the following:

$\displaystyle f_{X,Y}(x,y)=f_{X|Y}(x|y) \, f_Y(y)=f_{Y|X}(y|x) \, f_X(x)$
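Example 1 can be used to illustrate this identity: multiplying the conditional pdf of $X$ given $Y=y$ by the marginal pdf of $Y$ recovers the joint pdf.

$\displaystyle f_{X|Y}(x|y) \, f_Y(y)=y \, e^{-y x} \cdot y \, e^{-y}=y^2 \, e^{-y(x+1)}=f_{X,Y}(x,y)$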
The form of the joint pdf indicated above has an interesting interpretation as a mixture. Using an insurance example, suppose that $f_{X|Y}(x|y)$ is a model of the claim cost $X$ of a randomly selected insured, where $y$ is a realized value of a parameter that is to indicate the risk characteristics of an insured. The members of this large population have a wide variety of risk characteristics, and the random variable $Y$ is to capture the risk characteristics across the entire population. Consequently, the unconditional claim cost for a randomly selected insured is:

$\displaystyle f_X(x)=\int_{-\infty}^{\infty} f_{X|Y}(x|y) \, f_Y(y) \, dy$
Note that the above unconditional pdf $f_X(x)$ is a weighted average of conditional pdfs. Thus the distribution derived in this manner is called a mixture distribution. The pdf $f_Y(y)$ is called the mixture weight or mixing weight. Some distributional quantities of a mixture distribution are also the weighted averages of their conditional counterparts. These include the distribution function, the mean and the higher moments. Thus we have:

$\displaystyle F_X(x)=\int_{-\infty}^{\infty} F_{X|Y}(x|y) \, f_Y(y) \, dy$

$\displaystyle E[X]=\int_{-\infty}^{\infty} E[X|Y=y] \, f_Y(y) \, dy$

$\displaystyle E[X^k]=\int_{-\infty}^{\infty} E[X^k|Y=y] \, f_Y(y) \, dy$
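In Example 1, this weighted-average view produces the unconditional mean of $X$ with very little work:

$\displaystyle E[X]=\int_0^{\infty} E[X|Y=y] \, f_Y(y) \, dy=\int_0^{\infty} \frac{1}{y} \cdot y \, e^{-y} \, dy=\int_0^{\infty} e^{-y} \, dy=1$

This agrees with the value obtained by integrating against the marginal pdf $f_X(x)=\frac{2}{(x+1)^3}$ directly.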
In the above derivations, the cumulative distribution function and the moments of $X$ are weighted averages of their conditional counterparts. However, the variance $Var(X)$ cannot be obtained by taking the weighted average of the conditional variances. To find out why, see the post The variance of a mixture.
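Example 1 hints at what goes wrong (here the roles of the variables are interchanged so that the mixing is over $X$). The weighted average of the conditional variances is

$\displaystyle E\bigl[Var[Y|X]\bigr]=\int_0^{\infty} \frac{3}{(x+1)^2} \cdot \frac{2}{(x+1)^3} \, dx=\frac{3}{2}$

while the unconditional variance is $Var(Y)=E[Y^2]-E[Y]^2=6-4=2$. The shortfall of $\frac{1}{2}$ is the variance of the conditional means $E[Y|X]$, which the weighted average of conditional variances fails to account for.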