# Survival models: an important technique in the medical and engineering sciences

Carl Sagan once said: "Extinction is the rule. Survival is the exception." Either way, survival analysis is a method in which the time to an event, such as death or equipment failure, is measured and modeled. Whether the event has occurred, the event status, is also recorded. Observations in a study are prone to censoring. Most common is right censoring, where the patient or equipment does not experience the event by the end of the study (i.e. the patient is still alive or the equipment is still functioning). A patient who drops out of a study is also considered right censored. Less common is left censoring, where we follow a patient after a positive test for an illness but do not know the time of exposure. Truncation is a related phenomenon, whereby a subject with a lifetime below a threshold is never observed at all; this is a typical situation in actuarial work. Both censored and uncensored data are used to estimate the parameters of a model.

A cumulative distribution function F(t) describes the probability of observing a survival time T less than or equal to a time t. Its derivative, the probability density function f(t), gives the relative likelihood of the event occurring at time t.

Figure 1. Relationships among the survival analysis functions:

- Density function: f(t) = dF(t)/dt
- Survival function: S(t) = 1 - F(t)
- Hazard function: h(t) = f(t)/S(t)
- S(t) = exp(-H(t))
- F(t) = 1 - exp(-H(t))
- f(t) = h(t) exp(-H(t))

where H(t) is the cumulative hazard function.

There are two main functions used in survival analysis: the survival function and the hazard function. The survival function gives, at each time, the probability of surviving (or of equipment not failing) up to that time. The hazard function gives the instantaneous potential of an event occurring per unit time, given survival up to that time. Their relationships to each other and to the distribution function are shown in Figure 1.

At the start of a study, the survival function S(t) is at its maximum (S(0) = 1) while the cumulative hazard function H(t) is at its minimum (H(0) = 0). As time increases, S(t) decreases toward a minimum, whereas H(t) increases toward a maximum. A quantile of interest, such as the median survival time, can be calculated from the hazard or survival function. The effect of a factor, such as a drug, on the time to event can be evaluated in the presence of covariates such as age, weight and gender.
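The identities in Figure 1 can be checked numerically. The sketch below (not part of the original analysis) uses an exponential distribution with an arbitrarily chosen rate as a worked example, since its functions all have closed forms:

```python
import math

# Illustrative exponential distribution with rate lam (an arbitrary choice):
# f(t) = lam*exp(-lam*t), F(t) = 1 - exp(-lam*t), h(t) = lam, H(t) = lam*t.
lam, t = 0.5, 2.0

F = 1 - math.exp(-lam * t)      # cumulative distribution function F(t)
f = lam * math.exp(-lam * t)    # density f(t) = dF/dt
S = 1 - F                       # survival function S(t) = 1 - F(t)
h = f / S                       # hazard function h(t) = f(t)/S(t)
H = lam * t                     # cumulative hazard H(t)

# Verify the Figure 1 relationships
assert abs(S - math.exp(-H)) < 1e-12        # S(t) = exp(-H(t))
assert abs(F - (1 - math.exp(-H))) < 1e-12  # F(t) = 1 - exp(-H(t))
assert abs(f - h * math.exp(-H)) < 1e-12    # f(t) = h(t)*exp(-H(t))
```

For the exponential distribution the hazard h(t) reduces to the constant rate lam, which is what makes it a convenient check case.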

There are three main approaches to model building: parametric, nonparametric and semiparametric. The parametric approach uses linear regression for both the location and scale parameters. Linear regression can be used because the typical survival distributions (Weibull, lognormal, exponential, Frechet, log-logistic, and Gompertz) can be made linear through transformation. A goodness-of-fit Chi-square statistic can be calculated by comparing the likelihood of the fitted distribution with that of a null model, which allows a different hazard rate for each interval. This is shown in Figure 2 using the JMP Parametric Survival Fit platform for a lognormal fit, where the Chi-square value is statistically significant with a probability less than 0.05. A plot of the 0.1, 0.5, and 0.9 quantiles as a function of the regressor is displayed.
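For the simplest parametric case, the exponential model, the maximum-likelihood estimate of the rate under right censoring has a closed form: the number of observed events divided by the total time at risk (censored subjects still contribute their time under observation). A minimal sketch with hypothetical data, not taken from the article:

```python
def exponential_mle(times, events):
    """MLE of the rate for an exponential survival model with right
    censoring: lambda_hat = (number of events) / (total time at risk)."""
    d = sum(events)       # observed events (1 = event, 0 = right censored)
    total = sum(times)    # censored times still contribute exposure
    return d / total

# Hypothetical data: follow-up times in months, with event indicators
times  = [2.0, 3.5, 5.0, 7.0, 8.0]
events = [1,   1,   0,   1,   0]

lam_hat = exponential_mle(times, events)  # 3 events / 25.5 months at risk
```

Under this model the estimated median survival time is log(2)/lam_hat, illustrating how a quantile of interest follows directly from the fitted parameter.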

A nonparametric approach, typified by the Kaplan-Meier method, calculates a survival function from continuous survival times, where each time interval contains one case. The estimate of this function is a product-limit estimator. An advantage of this method is that it does not depend on grouping the data into intervals. A comparison of survival times for two or more groups can be done using a test such as the log-rank, Wilcoxon, Gehan's generalized Wilcoxon, Peto and Peto's generalized Wilcoxon, Cox's F, or Cox-Mantel test. Although there are no hard and fast rules on which test to use in a given situation, when there are no censored observations, the samples come from a Weibull or exponential distribution, and the sample size is less than 50 per group, Cox's F test is more powerful than Gehan's generalized Wilcoxon test. Regardless of censoring, when the samples come from a Weibull or exponential distribution, the log-rank and Cox-Mantel tests are more powerful than Gehan's generalized Wilcoxon test. Multiple-sample versions of the log-rank, Gehan's generalized Wilcoxon, and Peto and Peto's generalized Wilcoxon tests also exist. Using Mantel's procedure, a score is assigned to each survival time and a Chi-square value is calculated based on the sums of scores for each group. An example of survival between males and females is shown in Figure 3 using the JMP Survival platform, where both the log-rank and Wilcoxon tests are statistically significant with a probability less than 0.05.
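The product-limit calculation itself is compact: at each distinct event time, multiply the running survival estimate by the conditional probability of surviving past that time, with censored subjects leaving the risk set without contributing an event. A sketch with made-up data (JMP performs this internally):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of S(t).
    Returns (event_time, survival) pairs at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, out, i = 1.0, [], 0
    while i < len(data):
        t, d, leaving = data[i][0], 0, 0
        # gather all subjects with this time (events and censorings)
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            leaving += 1
            i += 1
        if d > 0:                       # only events change the estimate
            s *= 1 - d / n_at_risk      # conditional survival past t
            out.append((t, s))
        n_at_risk -= leaving            # censored subjects exit the risk set
    return out

# Hypothetical data: 1 = event observed, 0 = right censored
km = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

With these five subjects the estimate steps down only at times 1, 2, and 4; the censoring at time 3 shrinks the risk set, which makes the drop at time 4 larger, illustrating why the estimator is not simply the fraction still alive.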

A semiparametric approach, typified by the Cox proportional hazards regression model, makes no assumptions about the baseline hazard function; the predictors are assumed to act multiplicatively (log-linearly) on the hazard. However, the proportional hazards assumption needs to be checked: it states that the hazard ratio comparing any two observations is constant over time, i.e. the predictors do not vary with time. Using Kaplan-Meier curves, a graph of log(-log(survival)) versus the log of survival time should show parallel curves. An example of a Cox proportional hazards fit of drug data is shown in Figure 4.
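The fit behind a Cox model maximizes a partial likelihood, which involves only the ordering of event times, not the baseline hazard. A minimal sketch for a single binary covariate, using Newton-Raphson and assuming no tied event times (hypothetical data, not the drug data of Figure 4):

```python
import math

def cox_beta(times, events, x, iters=25):
    """Newton-Raphson fit of a one-covariate Cox model (no tied event
    times assumed). Returns beta, the log hazard ratio per unit of x."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    beta = 0.0
    for _ in range(iters):
        score, info = 0.0, 0.0
        for k, i in enumerate(order):
            if not events[i]:
                continue                       # censored: no event term
            risk = order[k:]                   # still under observation
            w = [math.exp(beta * x[j]) for j in risk]
            sw = sum(w)
            m1 = sum(wj * x[j] for wj, j in zip(w, risk)) / sw
            m2 = sum(wj * x[j] ** 2 for wj, j in zip(w, risk)) / sw
            score += x[i] - m1                 # derivative of log partial lik.
            info += m2 - m1 ** 2               # observed information
        beta += score / info                   # Newton step
    return beta

# Hypothetical data: the x = 1 group tends to fail earlier, so beta > 0
beta = cox_beta([1, 2, 3, 4, 5, 6], [1, 1, 1, 1, 1, 1], [1, 0, 1, 0, 1, 0])
```

exp(beta) is the estimated hazard ratio between the two groups; the proportional hazards assumption is exactly the claim that this single number describes the groups' relative risk at every time.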

The whole-model test reports a Chi-square test of the hypothesis that there is no difference in survival time among the effects. For categorical parameter estimates, a confidence interval that does not include zero indicates that the difference between that level and the average of all levels is significant.

Survival analysis is an important technique used in the medical, engineering, social and economic sciences and is closely related to reliability analysis. Evaluating the data distribution, accounting for censoring, and choosing an approach suited to the type of model are key components of the analysis.

Note: All graphs were generated using JMP v.11.2.0 software.

Mark Anawis is a Principal Scientist and ASQ Six Sigma Black Belt at Abbott. He may be reached at mark.anawis@abbott.com.


Title annotation: Data Analysis
Author: Anawis, Mark
Publication: Scientific Computing
Date: Jul 1, 2015