In this post, we introduce the hazard rate function using the notion of a non-homogeneous Poisson process.

In a Poisson process, changes occur at a constant rate $\lambda$ per unit time. Suppose that we interpret the changes in a Poisson process from a mortality point of view, i.e. a change in the Poisson process means a termination of a system, be it biological or manufactured, and this Poisson process counts the number of terminations as they occur. Then the rate of change $\lambda$ is interpreted as a hazard rate (or failure rate or force of mortality). With a constant force of mortality, the time until the next change is exponentially distributed. In this post, we discuss the hazard rate function in a more general setting. The process that counts the number of terminations will no longer have a constant hazard rate, and instead will have a hazard rate function $\lambda(t)$, a function of time $t$. Such a counting process is called a non-homogeneous Poisson process. We discuss the survival probability models (the time to the next termination) associated with a non-homogeneous Poisson process. We then discuss several important examples of survival probability models, including the Weibull distribution, the Gompertz distribution and the model based on Makeham's law. We also comment briefly on the connection between the hazard rate function and the tail weight of a distribution.

**The Poisson Process**

We start with the three postulates of a Poisson process. Consider an experiment in which the occurrences of a certain type of events are counted during a given time interval. We call the occurrence of the type of events in question a change. We assume the following three conditions:

- The numbers of changes occurring in nonoverlapping intervals are independent.
- The probability of two or more changes taking place in a sufficiently small interval is essentially zero.
- The probability of exactly one change in a short interval of length $\delta$ is approximately $\lambda \delta$, where $\delta$ is sufficiently small and $\lambda$ is a positive constant.

When we interpret the Poisson process from a mortality point of view, the constant $\lambda$ is a hazard rate (or force of mortality), which can be interpreted as the rate of failure at the next instant given that the life has survived to time $t$. With a constant force of mortality, the survival model (the time until the next termination) has an exponential distribution with mean $\frac{1}{\lambda}$. We wish to relax the constant force of mortality assumption by making $\lambda$ a function of $t$ instead. The remainder of this post is based on the non-homogeneous Poisson process defined below.

**The Non-Homogeneous Poisson Process**

We modify condition 3 above by making $\lambda$ a function of $t$. We have the following modified counting process.

- The numbers of changes occurring in nonoverlapping intervals are independent.
- The probability of two or more changes taking place in a sufficiently small interval is essentially zero.
- The probability of exactly one change in a short interval of length $\delta$ beginning at time $t$ is approximately $\lambda(t) \delta$, where $\delta$ is sufficiently small and $\lambda(t)$ is a nonnegative function of $t$.

We focus on the survival model aspect of such counting processes. Such a process can be interpreted as a model for the number of changes occurring in a time interval, where a change means "termination" or "failure" of the system under consideration. The rate of change function $\lambda(t)$ indicated in condition 3 is called the hazard rate function. It is also called the failure rate function in reliability engineering and the force of mortality in life contingency theory.

Based on condition 3 in the non-homogeneous Poisson process, the hazard rate function $\lambda(t)$ can be interpreted as the rate of failure at the next instant given that the life has survived to time $t$.
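As an aside (not part of the original discussion), a counting process satisfying these three conditions can be simulated by thinning: generate candidate arrivals from an ordinary Poisson process whose constant rate dominates $\lambda(t)$, then accept each candidate arriving at time $t$ with probability $\lambda(t)$ divided by that dominating rate. A minimal Python sketch, with the hazard rate $\lambda(t) = 0.5 + 0.2t$ and all numeric values chosen arbitrarily for the demo:

```python
import random

def simulate_nhpp(rate, rate_max, t_end, rng):
    """Simulate one path of a non-homogeneous Poisson process on [0, t_end]
    via thinning: generate candidates at the constant rate rate_max, then
    keep each candidate at time t with probability rate(t) / rate_max."""
    times = []
    t = 0.0
    while True:
        t += rng.expovariate(rate_max)   # next candidate arrival
        if t > t_end:
            return times
        if rng.random() < rate(t) / rate_max:
            times.append(t)              # accepted: a "change" (termination)

# Demo hazard rate function: lambda(t) = 0.5 + 0.2 t (an arbitrary choice).
rate = lambda t: 0.5 + 0.2 * t
rng = random.Random(12345)

# The expected number of changes in [0, 5] is the integral of lambda(t),
# i.e. 0.5 * 5 + 0.1 * 25 = 5.0.
counts = [len(simulate_nhpp(rate, rate_max=1.5, t_end=5.0, rng=rng))
          for _ in range(20000)]
print(sum(counts) / len(counts))   # sample mean, close to 5.0
```

Over many simulated paths, the average number of changes in $[0, 5]$ approaches $\int_0^5 \lambda(t) \ dt = 5$.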

Two random variables naturally arising from a non-homogeneous Poisson process are described here. One is the discrete variable $N_t$, defined as the number of changes in the time interval $[0, t]$. The other is the continuous random variable $T$, defined as the time until the occurrence of the first change. The probability distribution of $T$ is called a survival model. The following is the link between $N_t$ and $T$:

$$P[T > t] = P[N_t = 0]$$

Note that $P[T > t]$ is the probability that the next change occurs after time $t$. This means that there is no change within the interval $[0, t]$. We have the following theorems.

**Theorem 1**

Let $\Lambda(t)=\displaystyle \int_0^t \lambda(y) \ dy$. Then $e^{-\Lambda(t)}$ is the probability that there is no change in the interval $[0, t]$. That is, $P[N_t = 0] = e^{-\Lambda(t)}$.

*Proof*. We are interested in finding $P_0(t) = P[N_t = 0]$, the probability of zero changes in the interval $[0, t]$. By condition 1, the numbers of changes in the nonoverlapping intervals $[0, t]$ and $(t, t+\delta]$ are independent. Thus we have:

$$P_0(t + \delta) = P_0(t) \times P[\text{no change in } (t, t+\delta]] \ \ \ \ \ (1)$$

Note that by condition 3, the probability of exactly one change in the small interval $(t, t+\delta]$ is $\lambda(t) \delta$. Thus $1 - \lambda(t) \delta$ is approximately the probability of no change in the interval $(t, t+\delta]$. Continuing with equation $(1)$, we have the following derivation:

$$P_0(t + \delta) = P_0(t) \left[1 - \lambda(t) \delta\right]$$

$$\frac{P_0(t + \delta) - P_0(t)}{\delta} = -\lambda(t) \ P_0(t)$$

Letting $\delta \rightarrow 0$ yields the differential equation $P_0'(t) = -\lambda(t) \ P_0(t)$, or equivalently:

$$\frac{P_0'(t)}{P_0(t)} = -\lambda(t)$$

Evaluating the integral on the left-hand side with the boundary condition of $P_0(0) = 1$ produces the following results:

$$\ln P_0(t) = -\int_0^t \lambda(y) \ dy = -\Lambda(t)$$

$$P_0(t) = e^{-\Lambda(t)}$$

**Theorem 2**

As discussed above, let $T$ be the length of the interval that is required to observe the first change. Then the following are the distribution function, survival function and pdf of $T$:

$$F_T(t) = 1 - e^{-\Lambda(t)}$$

$$S_T(t) = e^{-\Lambda(t)}$$

$$f_T(t) = \lambda(t) \ e^{-\Lambda(t)}$$

*Proof*. In Theorem 1, we derive the probability $P[N_t = 0]$ for the discrete variable $N_t$ derived from the non-homogeneous Poisson process. We now consider the continuous random variable $T$, the time until the first change, which is related to $N_t$ by observing that $T > t$ if and only if $N_t = 0$. Thus $S_T(t) = P[T > t] = P[N_t = 0] = e^{-\Lambda(t)}$. The distribution function $F_T(t) = 1 - S_T(t)$ and the density function $f_T(t) = F_T'(t) = \lambda(t) \ e^{-\Lambda(t)}$ can be derived accordingly.
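To make the relationship concrete, here is a quick numerical sanity check (an illustration added here, not part of the original post) that the claimed density is indeed the derivative of the distribution function, using the hypothetical hazard rate $\lambda(t) = 0.5 + 0.2t$:

```python
import math

# Hypothetical hazard rate for the check: lambda(t) = 0.5 + 0.2 t,
# with cumulative hazard Lambda(t) = 0.5 t + 0.1 t^2.
lam = lambda t: 0.5 + 0.2 * t
Lam = lambda t: 0.5 * t + 0.1 * t * t

S = lambda t: math.exp(-Lam(t))   # survival function (Theorem 2)
F = lambda t: 1.0 - S(t)          # distribution function
f = lambda t: lam(t) * S(t)       # claimed density

# The density should be the derivative of F; check by central difference.
h = 1e-6
for t in (0.5, 1.0, 2.0, 4.0):
    numeric_f = (F(t + h) - F(t - h)) / (2 * h)
    print(t, f(t), numeric_f)     # the two density values agree
```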

**Theorem 3**

The hazard rate function $\lambda(t)$ is equivalent to each of the following:

$$\lambda(t) = \frac{f_T(t)}{1 - F_T(t)}$$

$$\lambda(t) = \frac{-S_T'(t)}{S_T(t)}$$

**Remark**

Theorem 1 and Theorem 2 show that in a non-homogeneous Poisson process as described above, the hazard rate function $\lambda(t)$ completely specifies the probability distribution of the survival model $T$ (the time until the first change). Once the rate of change function $\lambda(t)$ is known in the non-homogeneous Poisson process, we can use it to generate the survival function $S_T(t)$. All of the examples of survival models given below are derived by assuming the functional form of the hazard rate function. The result in Theorem 2 holds even outside the context of a non-homogeneous Poisson process; that is, given the hazard rate function $\lambda(t)$, we can derive the three distributional items $S_T(t)$, $F_T(t)$, $f_T(t)$.

The ratio in Theorem 3 indicates that the probability distribution determines the hazard rate function. In fact, the ratio in Theorem 3 is the usual definition of the hazard rate function. That is, the hazard rate function can be defined as the ratio of the density and the survival function (one minus the cdf). With this definition, we can also recover the survival function. Whenever $S_T(t) > 0$, we can derive:

$$\lambda(t) = \frac{f_T(t)}{S_T(t)} = \frac{-S_T'(t)}{S_T(t)} = -\frac{d}{dt} \ln S_T(t)$$

$$S_T(t) = e^{-\int_0^t \lambda(y) \ dy}$$
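This recovery step can also be carried out numerically. The following sketch (an illustration added here, not part of the original post) starts from the hazard rate $\lambda(t) = 2t$ of a Weibull distribution with shape 2 and scale 1, integrates it with the trapezoid rule, and recovers the known survival function $S_T(t) = e^{-t^2}$:

```python
import math

# Known survival model for the demo: S(t) = exp(-t^2), whose hazard rate
# is lambda(t) = 2 t (a Weibull with shape 2 and scale 1).
hazard = lambda t: 2.0 * t

# Recover S(t) = exp(-integral of hazard from 0 to t) by the trapezoid rule.
def recovered_survival(t, steps=100000):
    dt = t / steps
    total = 0.0
    for i in range(steps):
        a, b = i * dt, (i + 1) * dt
        total += 0.5 * (hazard(a) + hazard(b)) * dt
    return math.exp(-total)

for t in (0.5, 1.0, 2.0):
    print(t, recovered_survival(t), math.exp(-t * t))  # recovered vs exact
```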

As indicated above, the hazard rate function $\lambda(t)$ can be interpreted as the failure rate at time $t$ given that the life in question has survived to time $t$, i.e. the rate of failure at the next instant for a life or system that has survived up to time $t$.

It is interesting to note that the function $\Lambda(t) = \int_0^t \lambda(y) \ dy$ defined in Theorem 1 is called the cumulative hazard rate function. Thus the cumulative hazard rate function is an alternative way of representing the hazard rate function (see the discussion on the Weibull distribution below).

——————————————————————————————————————

**Examples of Survival Models**

*–Exponential Distribution–*

In many applications, especially those for biological organisms and mechanical systems that wear out over time, the hazard rate is an increasing function of $t$. In other words, the older the life in question (the larger the $t$), the higher the chance of failure at the next instant. For humans, the probability of an 85-year-old dying in the next year is clearly higher than that of a 20-year-old. In a Poisson process, the rate of change $\lambda$ indicated in condition 3 is a constant. As a result, the time $T$ until the first change derived in Theorem 2 has an exponential distribution with parameter $\lambda$. In terms of mortality study or reliability study of machines that wear out over time, this is not a realistic model. However, if the mortality or failure is caused by random external events, this could be an appropriate model.
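The constant hazard rate of the exponential distribution can be verified directly from the ratio in Theorem 3. A small check (illustrative only, with the rate 0.5 chosen arbitrarily):

```python
import math

# Exponential lifetime with rate 0.5 (an arbitrary demo value):
rate = 0.5
f = lambda t: rate * math.exp(-rate * t)   # density
S = lambda t: math.exp(-rate * t)          # survival function

# The hazard f(t) / S(t) is the same at every age: no wear-out effect.
hazards = [f(t) / S(t) for t in (0.0, 1.0, 10.0, 50.0)]
print(hazards)   # every entry equals the rate 0.5
```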

*–Weibull Distribution–*

This distribution is an excellent model choice for describing the life of manufactured objects. It is defined by the following cumulative hazard rate function:

$$\Lambda(t) = \left(\frac{t}{\beta}\right)^{\alpha}$$

where $\alpha > 0$ and $\beta > 0$.

As a result, the hazard rate function, the density function and the survival function for the lifetime distribution are:

$$\lambda(t) = \frac{\alpha}{\beta} \left(\frac{t}{\beta}\right)^{\alpha - 1}$$

$$f_T(t) = \frac{\alpha}{\beta} \left(\frac{t}{\beta}\right)^{\alpha - 1} e^{-\left(\frac{t}{\beta}\right)^{\alpha}}$$

$$S_T(t) = e^{-\left(\frac{t}{\beta}\right)^{\alpha}}$$

The parameter $\alpha$ is the shape parameter and $\beta$ is the scale parameter. When $\alpha = 1$, the hazard rate becomes the constant $\frac{1}{\beta}$ and the Weibull distribution becomes an exponential distribution.

When the parameter $\alpha < 1$, the failure rate decreases over time. One interpretation is that most of the defective items fail early on in the life cycle. Once they are removed from the population, the failure rate decreases over time.

When the parameter $\alpha > 1$, the failure rate increases with time. This is a good candidate for a model to describe the lifetime of machines or systems that wear out over time.
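As a quick consistency check (added here, not part of the original post), the Weibull hazard rate formula can be compared against the ratio $-S_T'(t)/S_T(t)$ computed by finite differences, using the arbitrary demo values $\alpha = 2$ and $\beta = 3$:

```python
import math

# Weibull with shape alpha = 2 and scale beta = 3 (arbitrary demo values).
alpha, beta = 2.0, 3.0
S = lambda t: math.exp(-((t / beta) ** alpha))                 # survival
hazard = lambda t: (alpha / beta) * (t / beta) ** (alpha - 1)  # claimed lambda(t)

# Check that lambda(t) = -S'(t) / S(t), estimating S' by central difference.
h = 1e-6
for t in (1.0, 2.0, 5.0):
    numeric = -(S(t + h) - S(t - h)) / (2 * h) / S(t)
    print(t, hazard(t), numeric)   # the two values agree
```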

*–The Gompertz Distribution–*

The Gompertz law states that the force of mortality or failure rate increases exponentially over time. It describes human mortality quite accurately. The following is the hazard rate function:

$$\lambda(t) = \alpha e^{\beta t}$$

where $\alpha > 0$ and $\beta > 0$.

The following are the cumulative hazard rate function as well as the survival function, distribution function and the pdf of the lifetime distribution $T$:

$$\Lambda(t) = \int_0^t \alpha e^{\beta y} \ dy = \frac{\alpha}{\beta} \left(e^{\beta t} - 1\right)$$

$$S_T(t) = e^{-\frac{\alpha}{\beta} \left(e^{\beta t} - 1\right)}$$

$$F_T(t) = 1 - e^{-\frac{\alpha}{\beta} \left(e^{\beta t} - 1\right)}$$

$$f_T(t) = \alpha e^{\beta t} \ e^{-\frac{\alpha}{\beta} \left(e^{\beta t} - 1\right)}$$
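The closed form of the cumulative hazard rate can be double-checked by integrating $\lambda(t)$ numerically. A sketch with arbitrary illustrative parameter values:

```python
import math

# Gompertz hazard with alpha = 0.001 and beta = 0.1 (arbitrary demo values).
alpha, beta = 0.001, 0.1
lam = lambda t: alpha * math.exp(beta * t)

# Closed-form cumulative hazard derived above:
Lam = lambda t: (alpha / beta) * (math.exp(beta * t) - 1.0)

# Compare with a direct trapezoid-rule integration of lambda(t).
def numeric_Lam(t, steps=200000):
    dt = t / steps
    return sum(0.5 * (lam(i * dt) + lam((i + 1) * dt)) * dt
               for i in range(steps))

for t in (20.0, 50.0, 80.0):
    print(t, Lam(t), numeric_Lam(t))   # closed form matches the integral
```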

*–Makeham’s Law–*

Makeham's law states that the force of mortality is the Gompertz failure rate plus an age-independent component that accounts for external causes of mortality. The following is the hazard rate function:

$$\lambda(t) = \alpha e^{\beta t} + c$$

where $\alpha > 0$, $\beta > 0$ and $c \geq 0$.

The following are the cumulative hazard rate function as well as the survival function, distribution function and the pdf of the lifetime distribution $T$:

$$\Lambda(t) = \frac{\alpha}{\beta} \left(e^{\beta t} - 1\right) + ct$$

$$S_T(t) = e^{-\frac{\alpha}{\beta} \left(e^{\beta t} - 1\right) - ct}$$

$$F_T(t) = 1 - e^{-\frac{\alpha}{\beta} \left(e^{\beta t} - 1\right) - ct}$$

$$f_T(t) = \left(\alpha e^{\beta t} + c\right) e^{-\frac{\alpha}{\beta} \left(e^{\beta t} - 1\right) - ct}$$
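One consequence worth seeing in code (an observation added here, not from the original post): because the Makeham hazard is a sum, the survival function factors into a Gompertz part and an exponential part. A sketch with arbitrary demo parameters:

```python
import math

# Makeham hazard: Gompertz part plus a constant c for external causes.
# Arbitrary demo values: alpha = 0.001, beta = 0.1, c = 0.002.
alpha, beta, c = 0.001, 0.1, 0.002

S_makeham = lambda t: math.exp(-(alpha / beta) * (math.exp(beta * t) - 1.0)
                               - c * t)
S_gompertz = lambda t: math.exp(-(alpha / beta) * (math.exp(beta * t) - 1.0))
S_accident = lambda t: math.exp(-c * t)   # exponential survival, constant part

# Because the hazards add, the survival functions multiply:
for t in (10.0, 40.0, 70.0):
    print(t, S_makeham(t), S_gompertz(t) * S_accident(t))  # equal values
```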

**The Tail Weight of a Distribution**

The hazard rate function can provide information about the tail of a distribution. If the hazard rate function is decreasing, it is an indication that the distribution has a heavy tail, i.e., the distribution puts significantly more probability on larger values. Conversely, if the hazard rate function is increasing, it is an indication of a lighter tail. In an insurance context, heavy-tailed distributions (e.g. the Pareto distribution) are suitable candidates for modeling large insurance losses (see this previous post or [1]).
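To illustrate the decreasing-hazard case (an example added here, not in the original post), consider a Pareto distribution with survival function $S_T(t) = \left(\frac{\theta}{t+\theta}\right)^{\alpha}$; its hazard rate works out to $\frac{\alpha}{t+\theta}$, which decreases in $t$:

```python
# Pareto with shape alpha = 2 and scale theta = 100 (arbitrary demo values):
alpha, theta = 2.0, 100.0
S = lambda t: (theta / (t + theta)) ** alpha                   # survival
f = lambda t: alpha * theta ** alpha / (t + theta) ** (alpha + 1)  # density

# The hazard f/S simplifies to alpha / (t + theta): it decreases with t,
# the signature of a heavy right tail.
hazards = [f(t) / S(t) for t in (0.0, 100.0, 1000.0)]
print(hazards)   # strictly decreasing
```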

**Reference**

- Klugman S. A., Panjer H. H., Willmot G. E., *Loss Models: From Data to Decisions*, Second Edition, Wiley-Interscience, a John Wiley & Sons, Inc., New York, 2004