User:Gfleming


Introduction

Predictive modeling tasks handled by Clinical Analytics fall into one of two categories: classification and regression . Classification answers the question, “What group of patients does this individual belong to?” Its outcome is a categorical - quite often, binary - variable. Regression answers the question, “How much or how many?” Its outcome is a cardinal variable. While the outcomes of these two types of mathematical models are different, the underlying methodologies are very similar and are considered in Sections and . A combination approach may be appropriate for problems requiring the development of quantification metrics for events of interest: first, identify (or classify) potential outcomes, then evaluate the impact of each outcome separately. In this case, a classification algorithm should be followed with a regression; more often than not, quantification of only the positive outcome is of interest to us.

An important - though sometimes overlooked - step in streamlining the research and development (R&D) methodology is agreeing on standardized terminology for the research process. A well-developed glossary of terms (see Section ) can assure that identical tasks or processes are described in identical terms, a concept similar to “data integrity” as defined by the principles of database design .

Anecdotal evidence suggests that a data scientist (whatever this term currently entails) spends <math>90\%</math> of her time scrubbing the data and only <math>10\%</math> of it doing what she learned in her school’s Advanced Scientific Fortunetelling program. Like an experienced cook who appreciates the role of quality ingredients in meal preparation, a sensible data scientist may be able to achieve good results by simply ensuring that the data ingested into her algorithm is clean. Agreed-upon procedures (AUP) for data cleansing and storage are covered in and .

Consistent, scalable development of reliable and reusable software is an important part of introducing the developed methodologies into production. A collection of good coding practices relevant to predictive analytic development is presented in Section . A good foundation for developing robust code and assuring business continuity includes

  • proper revision control practices (),
  • accessible and consistently named code repositories and development sandboxes () and
  • readable and transparent code modules (, ) in R ().

Once an algorithm has been prototyped and implemented to the developers’ satisfaction, the responsibility for putting it into everyday use shifts to the production team. The process of testing, validation and verification can be drawn out and contentious unless the rules of the game are well defined in advance. Efficient practices for lightening the burden on both the original developers and the QA team are described in Section .

Finally, once the results have been validated, consistent and easy to understand presentation can facilitate their acceptance by the intended audience. Appropriate standards are covered in Section .

Examples contained in this manual are based on real data; however, in order to protect potentially sensitive information, the numbers have been modified and the names of the entities involved obscured where deemed necessary.

Preferred terminology

Methodology

General approach

An outline of a general approach to solving an analytical problem is presented in Fig. .



As mentioned in Section , the majority of predictive analytic problems can be solved by employing one of two broad types of forecasting methodologies: regression and classification. Regression[1] should be used when the output variable[2] is interval or continuous, i.e., can take on any permissible value inside an interval (which may include the whole real axis). Examples of this type of problem include predicting:

  1. a lab test result based on the patient’s demographics, clinical history and other lab tests;
  2. the number of admissions based on the previous history and calendar data;
  3. patient management cost based on patient’s data.

Logistic regression is one of the most widely used classification algorithms in practice. It is easy to implement[3] and can (and often should) be used when the output variable is an indicator, binary, categorical, nominal or ordinal variable. Examples of this type of problem include

  1. predicting patient’s risk of mortality, admission or readmission based on demographic and clinical data;
  2. classifying the severity of a patient’s condition based on available clinical data and history;
  3. identifying those patients among a high-risk population who are most likely to respond to intervention .

Regression

A regression problem answers the question “How much output quantity or how many items or events can be generated as a result of the process under investigation?”. The solution of a regression problem can be found as a result of an optimization algorithm on a measure of the difference between predicted and actual outputs. The simplest model in this case, linear regression, assumes that

  • there is no (measurement) error in the values of predictors and the dependent variable;
  • predictor variables are
    • (statistically) independent;
    • linearly independent (not collinear), i.e., the matrix of predictors has full rank (this is separate from the condition above);
  • a linear relationship of the type <math>\protect\label{E.Reg.1}
           Y_i = \beta_0 + \sum_{j=1}^M X_{ij} \beta_j \; , i=\overline{1,N} \; ,</math> where <math>N</math> is the number of observations and <math>M</math> is the number of predictive variables, exists between the predictors and the output variable;
  • residuals, or errors, i.e., differences between observed and predicted values of the dependent variable <math>\protect\label{E.Reg.2}
           \epsilon_i = Y_i - \hat{Y_i} \; ,</math> where <math>\hat{Y_i}, \; i=\overline{1, N}</math> are the predicted and <math>Y_i, \; i=\overline{1, N}</math> are the actual values of the dependent variable, are
    • distributed with a zero mean (exogeneity);
    • homoscedastic (of constant variance);
    • of finite variance;
    • statistically independent of one another and of the independent variables.

It is sometimes assumed that residuals are normally distributed, i.e., <math>\epsilon \sim \mathcal{N} \left ( 0, \sigma^2 \right )</math>; however, this assumption may be relaxed through the application of the Central Limit Theorem if the number of observations is sufficiently large (exceeds the proverbial <math>N=30</math>). In this simplest case, an analytical solution exists and can be found using readily available formulas. A more complex problem can be reduced to linear regression if the functional form of the relationship between the predictors and the output is known a priori, e.g., by taking the logarithm of the outcome variable, of both sides of the equation, or a combination of both (log-linear regression, log-log regression or log-linear-log regression ).

Let us assume that a linear relationship between the predictors and the dependent variable described by () does exist. In the most general case, <math>X</math> is an <math>N \times M</math> matrix of predictive variables, <math>\beta</math> is an <math>M \times 1</math> vector and <math>\beta_0</math> is a scalar: <math>\begin{align}

   \protect\label{E.Reg.D.1}
   X & = & \begin{bmatrix}
               x_{11} & \cdots & x_{1M} \\
               \vdots & \ddots & \vdots \\
               x_{N1} & \cdots & x_{NM}
           \end{bmatrix} \; , \\
   \protect\label{E.Reg.D.2}
   \beta & = & \begin{bmatrix}
               \beta_1 \\ 
               \vdots \\
               \beta_{M}
           \end{bmatrix} \; , \\
   \protect\label{E.Reg.D.3}
   Y & = & X \beta + \beta_0\end{align}</math> Instead of ( - ), we can consider an augmented <math>N \times (M+1)</math> matrix <math>X_*</math> and <math>(M+1) \times 1</math> vector <math>\beta_*</math> that together form an equivalent system of equations: <math>\begin{align}
   \protect\label{E.Reg.D.4}
   X_* & = & \begin{bmatrix}
               1 & x_{11} & \cdots & x_{1M} \\
               \vdots & \vdots & \ddots & \vdots \\
               1 & x_{N1} & \cdots & x_{NM}
           \end{bmatrix} \; , \\
   \protect\label{E.Reg.D.5}
   \beta_* & = &   \begin{bmatrix}
                       \beta_0 \\
                       \beta_1 \\ 
                       \vdots \\
                       \beta_M \\
           \end{bmatrix} \; , \\
   \protect\label{E.Reg.D.6} 
   Y & = & X_* \beta_* \; .\end{align}</math> Observe that () is equivalent to ( - ) once the intercept <math>\beta_0</math> is absorbed into <math>\beta_*</math> as its first component and a column of ones is prepended to <math>X</math>. For ease of exposition we shall drop the subindex <math>_*</math> from <math>X_*</math> and <math>\beta_*</math> and continue to refer to these augmented variables as <math>X</math> and <math>\beta</math>, i.e., <math>\begin{align}
   \protect\label{E.Reg.D.7}
   Y & = & X \beta \; .\end{align}</math>

The solution of () can be found in the form of <math>\protect\label{E.Reg.3}

   \beta = (X^T X)^{-1}X^T y \; ,</math> where <math>X^T</math> is the transposed <math>X</math> (<math>X^T_{ij} = X_{ji}</math>), provided that <math>(X^T X)^{-1}</math> exists (see, e.g., )
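
A minimal R sketch (on simulated data, not project data) verifying the closed-form solution () against lm():

set.seed(2)
N <- 50
x <- runif( N, 0, 10 )
y <- 3 + 1.5 * x + rnorm( N )              # simulated linear relationship
X <- cbind( 1, x )                         # augmented N x (M+1) design matrix
beta <- solve( t(X) %*% X, t(X) %*% y )    # beta = (X^T X)^{-1} X^T y
beta
coef( lm( y ~ x ) )                        # agrees with the closed-form solution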

An example of a fitted curve is presented in Fig.


A common metric for assessing the quality (or goodness-of-fit) of linear regression is its coefficient of determination <math>R^2</math>, defined as <math>\protect\label{E.Reg.4}

   R^2 = 1 - \frac{Var(\epsilon)}{Var(y)} = 
       1 - \frac{\frac{1}{N}\sum_{i=1}^N\epsilon_i^2}{\frac{1}{N}\sum_{i=1}^N(y_i 
           - \overline{y})^2} \; ,</math> since we assume <math>E(\epsilon) = 0</math>. The coefficient of determination quantifies what fraction (percentage) of the variation of the dependent variable, <math>y</math>, can be explained by the variation of the independent variable(s), <math>x</math> (via <math>x</math>’s linear relationship to <math>y</math>).

A general approach to attacking regression problems is presented in Fig. . In R, an ordinary linear regression model can be built using the function lm().




As an example, consider the data presented in Table .


The corresponding R code is presented in Listing .

set.seed(1)
x <- 1:20
xR <- rnorm( 1:20 )                 # random noise
y <- 2 * x + 1 + 5 * xR             # simulated linear relationship with noise
lm <- lm( y ~ x )                   # fit ordinary linear regression
coef <- coef( lm )
yHat <- x * coef[2] + coef[1]       # fitted values from the estimated coefficients
plot( x, y, main="Linear regression illustration", col='magenta' )
abline( coef=coef, col='blue' )     # fitted regression line
points( x, yHat, type='p', col='blue' )
for( i in 1:length( x ) ) {
  segments( x[i], yHat[i], x[i], y[i], lty='dotted', col='red', pch=16 )  # residuals
}
a <- sprintf( "%.2f", coef[2] )
b <- sprintf( "%.2f", coef[1] )
text( 5, 35, bquote( paste( hat( y ), "=", .(a), "x +", .(b) ) ) )
text( 5, 32, bquote( paste( "y =", hat( y ), "+", epsilon ) ) )
R2 <- sprintf( "%.2f", summary(lm)$r.squared )
text( 5, 29, bquote( paste( R^2, "=", .(R2) ) ) )
summary(lm)
confint(lm)

producing the output in Listing .

Call:
lm(formula = y ~ x)

Residuals:
     Min       1Q   Median       3Q      Max 
-12.4038  -2.5841   0.9373   2.4165   7.7252 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   0.8195     2.1579    0.38    0.709    
x             2.1079     0.1801   11.70 7.56e-10 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.645 on 18 degrees of freedom
Multiple R-squared:  0.8838,    Adjusted R-squared:  0.8774 
F-statistic: 136.9 on 1 and 18 DF,  p-value: 7.56e-10

                2.5 %   97.5 %
(Intercept) -3.714035 5.353106
x            1.729458 2.486368

It follows from Listing () that only the slope estimate is statistically significantly different from 0 at the 95% level (<math>p < 0.05</math>). The 95% confidence interval for the intercept is <math>[-3.71; 5.35]</math>, and for the slope it is <math>[1.73; 2.49]</math>. The coefficient of determination <math>R^2=0.88</math> indicates a very good fit between the data points and the developed linear model which has the form[4] <math>\begin{align}

   \protect\label{E.Reg.5}
   \hat{y} & = &  0.8195 + 2.1079 x \; ,\end{align}</math> or, if we fail to reject the null hypothesis for the intercept (<math>\beta_0 = 0</math>) and drop it, <math>\begin{align}
   \protect\label{E.Reg.6}
   \hat{y} & = & 2.1079 x \; .\end{align}</math> With the benefit of foresight[5], we already presented an illustration of () in Fig. .

Using logistic regression for classification problems

A classification problem answers the question “What group does this object belong to?”. The answer can depend on the available data as described in Fig. . Quite often, the easiest-to-implement method is preferable since it can be deployed with the least effort and does not require infrastructure and process adjustments. For practical purposes, logistic regression and its modifications often

  • represent a good trade-off between cost and accuracy,
  • make the contribution of different explanatory variables easy to understand and
  • yield results that are easy to interpret,

and therefore should be implemented whenever possible.



The mathematical rationale behind logistic regression is based on mapping the probability domain onto the real axis as outlined below: <math>\begin{align}

   \protect\label{E.Log.D.1}
   Y | X & \sim & \operatorname{B} \left ( {1, p} \right ) \; , \\ 
   \protect\label{E.Log.D.2}
   p(x) & = & P(Y = 1 | X = x) \; , \\
   \protect\label{E.Log.1}
   logit(p(x)) & = & \ln \frac{p(x)}{1-p(x)} \; , p \in [0; 1] \; . \\\end{align}</math> In applying transformation (), one effectively hopes to approximate a discrete-valued function with the smooth sigmoid function defined by (), as illustrated in Fig. .

The accuracy of approximation () depends on the separability of two sets, <math>Y = 0</math> and <math>Y = 1</math>.

Logistic regression model corresponding to () has the form <math>\begin{align}

   \protect\label{E.Log.2}
   logit(p(x)) & = & X \beta \; , logit(p(x)) \in [-\infty, \infty] \; ,\end{align}</math> however, coefficients <math>\beta</math> cannot be found using linear regression techniques described in Section  since the observed outcome of interest (<math>p=1</math>) corresponds to positive infinity in the transformed range of () and the remainder of cases (<math>p=0</math>) correspond to negative infinity. The solution of () is found using the maximum likelihood method . Observe from () that <math>\begin{align}
   \protect\label{E.Log.4}
   p(x) & = & \frac{1}{1 + e^{-X \beta}}\; .\end{align}</math> The likelihood of obtaining outcome <math>y_i</math> given the value of the predictor variable <math>x_i</math> is <math>\begin{align}
   \protect\label{E.Log.5}
   P(y_i | x_i) & = & p(x_i)^{y_i} \left [ 1 - p(x_i) \right ]^{1-y_i} \; ,\end{align}</math> and the total likelihood of obtaining a specific sequence of outcomes is <math>\begin{align}
   \protect\label{E.Log.6}
   l(\beta) & = & \prod_{i=1}^{N} p(x_i)^{y_i} \left [ 1 - p(x_i) \right ]^{1-y_i} \; ,\end{align}</math> or, taking the natural logarithm of both sides for convenience, <math>\begin{align}
   \protect\label{E.Log.7}
   L(\beta) & = & \ln \left [ l(\beta) \right ] = 
       \sum_{i=1}^{N} \left \{ y_i \ln p(x_i) + (1-y_i) \ln \left [ 1-p(x_i) \right ] \right \} \; .\end{align}</math> The maximum of <math>L(\beta)</math> can be found by differentiating () with respect to <math>\beta_j</math> and setting the resulting equations to <math>0</math>: <math>\begin{align}
   \protect\label{E.Log.8}
   \frac{\partial L(\beta)}{\partial \beta_j} & = &  
       \sum_{i=1}^{N} x_{ij}  \left [ y_i - p(x_i) \right ]  = 0 \; .\end{align}</math> The solution of () can be found by standard numerical techniques, e.g., the Newton-Raphson method . Fortunately, open-source and commercial statistical software has logistic regression efficiently implemented, so that solving () is not a concern for a typical user.

As an example, consider a classification problem described by Table .


The corresponding R code is presented in Listing .

x <- 1:20
y <- c(rep(0, 6), 1, rep(0, 4), rep(1, 6), 0, 1, 1)
glm <- glm(y ~ x, family=binomial( link='logit'))
summary(glm)
confint(glm)

producing the output in Listing .

Call:
glm(formula = y ~ x, family = binomial(link = "logit"))

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.1815  -0.5596  -0.2371   0.6403   1.8960  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)  
(Intercept)  -4.0972     1.7830  -2.298   0.0216 *
x             0.3544     0.1476   2.401   0.0163 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 27.526  on 19  degrees of freedom
Residual deviance: 16.707  on 18  degrees of freedom
AIC: 20.707

Number of Fisher Scoring iterations: 5

Waiting for profiling to be done...
                 2.5 %     97.5 %
(Intercept) -8.6739420 -1.2930813
x            0.1207693  0.7329936

It follows from Listing () that both the intercept and slope estimates are statistically significantly different from 0 at the 95% level (<math>p < 0.05</math>). The 95% confidence interval for the intercept is <math>[-8.67; -1.29]</math>, and for the slope it is <math>[0.12; 0.73]</math>. The resulting model has the form[6] <math>\begin{align}

   \protect\label{E.Log.9}
   logit(\hat{p}(x)) & = &  -4.0972 + 0.3544 x \; ,\end{align}</math> and the probability estimate is <math>\begin{align}
   \protect\label{E.Log.10}
   \hat{p}(x) & = &  \frac{1}{1+e^{4.0972 - 0.3544 x}} \; .\end{align}</math> As in Section , we already presented an illustration of () in Fig. .
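
A minimal sketch, reusing the glm object fitted in the listing above, showing that the probability estimate () coincides with R's predict( ..., type='response' ):

b <- coef( glm )                                          # intercept and slope from the fit above
pHatManual <- 1 / ( 1 + exp( -( b[1] + b[2] * x ) ) )     # probability estimate ()
pHatGlm <- predict( glm, type='response' )
all.equal( as.numeric( pHatManual ), as.numeric( pHatGlm ) )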

Survival modeling

A frequently asked question in healthcare analytics is: “What is the probability of survival for (at least) time <math>t</math> from now (<math>t_0=0</math>) of an individual with specific conditions?” or, conversely, “What is the expected survival time of a given individual?”. Survival analysis is a form of regression that can help answer these questions. A standard procedure for evaluating survival probability, and, to some extent, expected survival time, is Cox survival analysis , , . At its core is the semiparametric Cox proportional hazards model.

In the following analysis, we assume that an outcome of interest represents an irreversible state transition (e.g., alive to dead). The probability of an event of interest occurring before time <math>t</math> is <math>\begin{align}

   \protect\label{E.SurvMod.1}
   P(t) & = & Pr(T \leq t) = \int_0^t p(x) dx\; , \end{align}</math> where <math>T</math> is the time of the event and <math>p(x)</math> is the probability density of the (possibly unknown) distribution of such an event. The probability of an individual surviving until (at least) time <math>t</math> is termed the survival function and represents the complement of <math>P(t)</math>: <math>\begin{align}
   \protect\label{E.SurvMod.2}
   S(t) & = & Pr(T > t) = \int_t^{\infty} p(x) dx\; .\end{align}</math> The rate of arrival of outcomes of interest at time <math>t</math> is equal to the instantaneous probability of an event at time <math>t</math> conditional upon surviving until that time and can be calculated as[7] <math>\begin{align}
   \protect\label{E.SurvMod.3}
   h(t) & = & \lim_{\Delta t \to 0} \frac{P( t \leq T < t + \Delta t | T \geq t)}{\Delta t} 
   \nonumber \\ 
   & = & \frac{dP(t)}{dt} \frac{1}{S(t)} = \frac{p(t)}{S(t)} \; .\end{align}</math> In the Cox model, hazard rate <math>h(t)</math> is regressed against a set of predictors <math>X_i</math> as <math>\begin{align}
   \protect\label{E.SurvMod.4}
   h(t) & = & h_0(t) e^{\sum_{i=1}^{N} b_i x_i } \; ,\end{align}</math> where <math>b_i</math> is the weighting of <math>x_i</math>, the <math>i</math>-th of <math>N</math> explanatory variables. For the population of <math>M</math> individuals, () can be rewritten as <math>\begin{align}
   \protect\label{E.SurvMod.5}
   h_i(t) & = & h_{i_0}(t) e^{\sum_{j=1}^{N} b_{j} x_{ij} } \; , i = \overline{1,M}, 
       j = \overline{1,N} \; .\end{align}</math> Taking the (natural) logarithm of both sides of (), we arrive at the equivalent of (): <math>\begin{align}
   \protect\label{E.SurvMod.6}
   \ln \frac{h_i(t)}{h_{i_0}(t)} & = & \sum_{j=1}^{N} b_{j} x_{ij} \; , i = \overline{1,M}, 
       j = \overline{1,N} \; .\end{align}</math> The form of <math>h_{i_0}(t)</math> is not formally specified; its shape is determined by the empirical data in the training dataset, giving rise to the nonparametric portion of the model.[8] The solution of () is delivered by the maximum of the partial likelihood function defined in  as <math>\begin{align}
   \protect\label{E.SurvMod.7}
   L_p & = & \prod_{i=1}^{N} \left [ \frac{e^{x_i \beta}}{\sum_{j=1}^{N} Y_{ij} e^{x_j \beta}} \right ]^{\delta_i}  \; , \\
   Y_{ij} & = & 
       \begin{cases} 
           0, \text{if } t_j < t_i \; , \\
           1, \text{otherwise} \;.
       \end{cases} \; , \\
   \delta_{i} & = & 
       \begin{cases} 
            0, \text{if event did not occur at time } t_i  \; , \\
           1, \text{otherwise} \;.
        \end{cases} \; . \end{align}</math> A widely accepted standard for survival analysis in R is the survival package ; a noteworthy extension of it that takes into account relative survival (i.e., expected population mortality) is relsurv .
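
A minimal sketch of fitting a Cox proportional hazards model with the survival package; it uses the package's bundled lung dataset rather than NorthShore data, and the covariates are chosen purely for illustration:

library( survival )
fit <- coxph( Surv( time, status ) ~ age + sex, data=lung )
summary( fit )          # coefficients b_i, hazard ratios exp(b_i) and their CIs
head( basehaz( fit ) )  # nonparametric estimate of the baseline cumulative hazard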

Two important variables to consider in survival analysis are and age. The former reflects the individual’s “lifetime” measured with respect to others with the same group of conditions, the latter relates his or her expected risk of experiencing a negative outcome to that of the general population.

It is important to distinguish survival analysis, which is characterized by an impenetrable boundary between the sets with null and eventful outcomes, from renewal analysis, where such a boundary can be crossed. Clearly, the transition from alive to deceased can occur only once, whereas the transition between healthy and ill can occur multiple times. Renewal analysis is governed by a similar set of equations but is conceptually different from survival analysis.

An illustrative performance comparison between a regular logistic regression model and Cox proportional hazard model used for predicting one-year mortality among heart failure patients is presented in Fig.



As can be seen from Fig. , the AUC for the model in question is approximately 0.81. The attained maxima are approximately 0.45 for and 0.4 for , however, those maxima are attained at approximately 45% of the total population for the logistic regression model and at 5% for the Cox proportional hazard model. This leads us to believe that, in this particular case, the latter achieves optimal accuracy for smaller population samples than the former; the overall accuracy of the two models, however, is virtually identical.

The R code for performing predictive modeling in the example above can be found in Appendix .

Data scrubbing

Several methods are available for the imputation of missing values . A summary of currently used methods is presented in Table .



For example, in the Heart Failure End-of-life project, the following fallback sequence was used to backfill missing first diagnosis dates (a schematic R sketch follows the list):

  1. most recent Pacemaker Date,
  2. first AICD date,
  3. most recent ejection fraction measurement date,
  4. most recent cardiologist visit date,
  5. most recent contact date.
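
A schematic sketch of such a fallback sequence using dplyr::coalesce(); the toy data frame and all column names below are hypothetical stand-ins for the actual warehouse fields:

library( dplyr )
hf <- data.frame( FIRST_DX_DATE        = c( "2012-03-01", NA, NA ),
                  PACEMAKER_DATE       = c( NA, "2011-07-15", NA ),
                  FIRST_AICD_DATE      = c( NA_character_, NA, NA ),
                  LAST_EF_DATE         = c( NA, NA, "2013-01-20" ),
                  LAST_CARDIOLOGY_DATE = c( NA_character_, NA, NA ),
                  LAST_CONTACT_DATE    = rep( "2013-05-01", 3 ),
                  stringsAsFactors     = FALSE )
hf <- mutate( hf, FIRST_DX_DATE = coalesce( FIRST_DX_DATE,
                                            PACEMAKER_DATE,        # 1. pacemaker date
                                            FIRST_AICD_DATE,       # 2. first AICD date
                                            LAST_EF_DATE,          # 3. ejection fraction date
                                            LAST_CARDIOLOGY_DATE,  # 4. cardiologist visit date
                                            LAST_CONTACT_DATE ) )  # 5. contact date
hf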

Outliers

Outliers in the input data can be detected by examining the distribution of each independent variable. The following algorithm is suggested for detecting and eliminating outliers:

  1. sort the values of a predictor variable in the ascending or descending order, depending on the nature of the variable;
  2. eliminate obvious outliers, e.g., negative costs or 1000 mmHg blood pressure, by setting them to a predetermined fixed value (e.g., <math>0</math>) or a specified aggregate statistic of the distribution (e.g., median value);
  3. plot the histogram of the distribution and visually inspect it;
  4. if the parametric form of the distribution is known or can be inferred from theoretical or practical considerations, attempt to fit the distribution to its hypothesized shape and purge the “tails” (this can be done for either normal or non-normal cases);
  5. truncate the distribution if necessary (this should be considered the last resort). A short R sketch illustrating steps 2, 3 and 5 follows.
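
A short R sketch illustrating steps 2, 3 and 5 on simulated cost data; the fixed value of 0 and the 1st/99th percentile caps are illustrative choices only:

set.seed(3)
cost <- c( rlnorm( 500, meanlog=8 ), -120, 9e6 )   # synthetic costs with two obvious outliers
cost[cost < 0] <- 0                                # step 2: impossible negatives -> fixed value
hist( cost, breaks=50 )                            # step 3: visual inspection
qs <- quantile( cost, c( 0.01, 0.99 ) )            # 1st / 99th percentile bounds
costCapped <- pmin( pmax( cost, qs[1] ), qs[2] )   # step 5: truncate (cap) the tails
summary( costCapped )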

Predictive variable selection

One of the most essential steps in developing a robust and accurate predictive model is variable selection. It is not uncommon to start this process with a list of several hundred candidate predictors, eventually whittling it down to 10-20. While some sources advocate automated variable selection using, e.g., their significance levels, others point out that “...a purely statistical solution is unrealistic. The role of scientific judgment cannot be overlooked.” ; see also . Considering that it may be difficult to implement a manual solution when working with a particularly large number of variables, an automated process, e.g., backward selection, may be used to augment but not supplant the researcher’s judgment; a standard R package, caret, is widely accepted for this purpose . An algorithm for this process is outlined in Fig. .
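
One possible automated approach available in caret is recursive feature elimination (a form of backward selection); a minimal sketch on simulated data, whose result is meant to inform, not replace, the researcher's judgment:

library( caret )
set.seed(4)
n <- 200
X <- data.frame( matrix( rnorm( n * 10 ), ncol=10 ) )   # ten candidate predictors
y <- 2 * X$X1 - 3 * X$X5 + rnorm( n )                   # only X1 and X5 truly matter
ctrl <- rfeControl( functions=lmFuncs, method="cv", number=5 )
sel <- rfe( X, y, sizes=2:8, rfeControl=ctrl )
predictors( sel )                                       # variables retained by the search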



Removing highly correlated variables

Model coefficients <math>\beta</math> for linear () and logistic regressions () are computed numerically and are thus susceptible to stability problems if the condition number of the corresponding linear system is large , . The condition number of a matrix is computed as <math>\begin{align}

   \protect\label{E.Col.1}
   \kappa(X) & = & \norm{X} \norm{X^{-1}}\end{align}</math> for a non-singular (square) matrix and as[9] <math>\begin{align}
   \protect\label{E.Col.2}
   \kappa(X) & = & \norm{X} \norm{X^{\dagger}} \; , \\
   \protect\label{E.Col.3}
   X^{\dagger} & = & \left\{\begin{matrix}
                        (X^T X)^{-1} X^T , & \text{if} \; |X^T X| \neq 0 \; , \\ 
                        X^T (X X^T)^{-1} , & \text{if} \; |X X^T| \neq 0 \; . 
                     \end{matrix}\right.\end{align}</math> We can see from () that a numerically singular matrix for which <math>|X^T X| \approx 0</math> would lead to a numerically unstable set of coefficients <math>\beta</math> with respect to a small perturbation of <math>X</math>: <math>\begin{align}
   \protect\label{E.Col.4}
   \frac{\norm{\Delta \beta}}{\norm{\beta + \Delta \beta}} & \leq & \kappa(X)
       \frac{\norm{\Delta X}}{\norm{X}}  \; , \\
\end{align}</math> In view of this, it is advisable to pare down highly correlated vectors as illustrated by the example below.
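
A minimal sketch on simulated data: base R's kappa() estimates the condition number of a design matrix, and a nearly collinear column inflates it dramatically:

set.seed(5)
x1 <- rnorm( 100 )
x2 <- x1 + rnorm( 100, sd=0.001 )              # nearly collinear with x1
x3 <- rnorm( 100 )
kappa( cbind( 1, x1, x2, x3 ), exact=TRUE )    # huge -> unstable coefficients
kappa( cbind( 1, x1, x3 ), exact=TRUE )        # dropping the near-duplicate restores stability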

In the course of assessing the probability of hospitalization for chronic obstructive pulmonary disorder (COPD) patients, practitioners suggested an initial set of variables presented in Table as candidates for inclusion in the predictive linear regression model[10].


Available data included patient admission data for years 2010 to 2013[11]. The model was trained on a random subsample consisting of 80% of patient admissions from 2010 to 2013 and tested on the remaining 20% of the data. Correlation matrices for binary and continuous/interval variables are presented in Fig. and .


Specifically, highly correlated variables in binary and continuous/interval subspaces are listed in Tables and .


We select HF_IND as the more reliable and transparent of the two indicators and MG_PCP_24M_SEEN_IND as the standard medical group indicator from Table [12] . Selecting variables from Table is based on common sense business considerations and results in the following set: TOTAL_MEDS_PRESCRIBED, EJFR_NUM, NUM_HOSP_365_DAYS_COPD, NUM_ER_VISITS_365_DAYS_COPD, NUM_HOSP_365_DAYS_PNEU and NUM_ER_VISITS_365_DAYS_PNEU. Upon comparing the resulting variable sets with the initial candidate pool in Table , we can eliminate EJFR_NUM, NUM_HOSP_365_DAYS_COPD and NUM_ER_VISITS_365_DAYS_COPD in favor of indicators EJFR_IND, HOSP_365_DAYS_COPD_IND and ER_VISITS_365_DAYS_COPD_IND respectively. Further analysis shows no highly correlated variables on the combined set as shown in Fig. [13].


Graphs in Fig. - were generated in R using the following command:

    library(lattice)
    levelplot(cor(dataSet), scales=list(x=list(rot=90), cex=0.5))

where dataSet is the R data frame containing the predictor variables.

Computing a univariate odds ratio

Consider a <math>2 \times 2</math> contingency table relating predicted and actual outcomes of interest as displayed in Table .

{| class="wikitable"
|+ Predictive variable and outcome of interest
! !! <math>1</math> !! <math>0</math> !! Total
|-
| <math>1</math> || <math>n_{11}</math> || <math>n_{10}</math> || <math>n_{1*}</math>
|-
| <math>0</math> || <math>n_{01}</math> || <math>n_{00}</math> || <math>n_{2*}</math>
|-
| Total || <math>n_{*1}</math> || <math>n_{*2}</math> || <math>N</math>
|}

An odds ratio is the ratio of the odds of a patient having a binary outcome of interest (1) conditional upon having the property described by the predictive variable to the odds of the patient having the outcome of interest conditional upon not having that property: <math>\protect\label{E.Uni.1}

   OR_{uni}^{(ind)} = \frac{\frac{n_{11}}{n_{10}}}{\frac{n_{01}}{n_{00}}}= \frac{n_{11} n_{00}}{n_{10} n_{01}} \; .</math> Statistically, an odds ratio describes how much more likely the patient is to have an outcome of interest if he possesses a property thought to be predictive of the outcome compared to not having that property. If the odds ratio or its inverse is appreciably different from 1, then there is a chance that the candidate predictive variable indeed possesses predictive power. This hypothesis can be statistically justified if the confidence interval for the odds ratio does not include 1 at the significance level <math>\alpha</math>.

An example of a contingency matrix for hospital admissions of heart failure patients contingent upon them having had an ejection fraction test previously ordered is presented in Table .

{| class="wikitable"
|+ Example: Predictive variable and outcome of interest
! !! yes !! no !! Total
|-
| yes || 5 || 90 || 95
|-
| no || 20 || 1000 || 1020
|-
| TOTAL || 25 || 1090 || 1115
|}

Here the corresponding ratio is <math>OR_{uni}^{(ind)} = \frac{\frac{5}{90}}{\frac{20}{1000}} = \frac{1}{18 \times 0.02} = 2.78 \nonumber \; ,</math> signifying a potentially high predictive value of ejection fraction having been ordered in the past when forecasting future hospitalizations within the following year.
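
A minimal R sketch computing () for the example above, together with a confidence interval based on the standard Wald (normal) approximation for the standard error of <math>\ln(OR)</math> (a textbook formula, not derived in this manual):

n11 <- 5; n10 <- 90; n01 <- 20; n00 <- 1000
or <- ( n11 * n00 ) / ( n10 * n01 )                        # odds ratio ()
se <- sqrt( 1/n11 + 1/n10 + 1/n01 + 1/n00 )                # s.e. of log(OR), Wald approximation
ci <- exp( log( or ) + c( -1, 1 ) * qnorm( 0.975 ) * se )  # 95% CI on the OR scale
round( c( OR=or, lower=ci[1], upper=ci[2] ), 2 )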

If the predictor variable under consideration is categorical with more than two levels rather than indicator type, a <math>2 \times 2</math> contingency table cannot be constructed and () does not apply. In this case, either of the following modifications of the algorithm for calculating the odds ratio can be employed to calculate a suitable proxy:

  1. One-vs.-the-rest:
    1. compute the proportion of the total population that belongs to each category;
    2. roll up categories containing the percentage of the population that is smaller than a predetermined lower boundary (e.g., 5%);
    3. calculate the number of positive and negative outcomes of interest for the remainder of the population excluding each (rolled-up) category in turn;
    4. construct the <math>2 \times 2</math> contingency table as before and compute the “one-vs.the-rest” odds ratio for each category following the algorithm for indicator variables described above.
  2. Benchmark:
    1. roll up sparsely populated categories as described above;
    2. select a “benchmark” category that makes business sense (e.g., “married” if examining marital status); in many instances, it makes sense to choose the most populous category as the benchmark;
    3. for each category, construct the <math>2 \times 2</math> contingency table against the benchmark and compute the “benchmark” odds ratio as you would for an indicator variable.

An example of the one-vs.-the-rest algorithm is given by blood utilization data presented in Table .

{| class="wikitable"
|+ Example: Blood utilization data for building one-vs.-the-rest contingency tables
! Pavilion !! Transfused !! Not transfused !! Total !! % of total
|-
| A || 800 || 24,200 || 25,000 || 40.32
|-
| B || 800 || 12,200 || 13,000 || 20.97
|-
| C || 700 || 13,300 || 14,000 || 22.58
|-
| D || 700 || 9,300 || 10,000 || 16.13
|-
| TOTAL || 3,000 || 59,000 || 62,000 || 100.00
|}

Since there are no sparse categories, i.e., the ones containing less than 5% of the total population, we can separate each hospital (pavilion) in turn from the rest and generate <math>2 \times 2</math> contingency tables as shown in Table .


Judging by the odds ratios presented in Table , Pavilion C is the only hospital whose identity appears to have no discernible “predictive” influence on the number of blood transfusions compared to the rest of the pavilions.

As an example of the benchmark algorithm, consider admission data presented in Table .

{| class="wikitable"
|+ Example: Predictive variable and outcome of interest
! Marital status !! yes !! no !! Total !! % of total
|-
| Divorced || 58 || 2,220 || 2,278 || 8.33
|-
| Engaged || 0 || 8 || 8 || 0.03
|-
| Legally Separated || 1 || 39 || 40 || 0.15
|-
| Life Partner || 2 || 34 || 36 || 0.13
|-
| Married || 305 || 12,160 || 12,465 || 45.61
|-
| Separated (Not Legally) || 1 || 96 || 97 || 0.35
|-
| Single || 112 || 3,671 || 3,783 || 13.84
|-
| Unknown || 2 || 352 || 354 || 1.30
|-
| Widowed || 251 || 8,020 || 8,271 || 30.26
|-
| TOTAL || 732 || 26,600 || 27,332 || 100.00
|}

We now roll up sparse categories, e.g., the ones containing less than 5% of the total population, by merging “Engaged”, “Legally Separated”, “Life Partner” and “Separated (Not Legally)” into category “Other” as shown in Table .

{| class="wikitable"
|+ Example: Predictive variable and outcome of interest, rolled-up “Marital Status”
! Marital status !! yes !! no !! Total !! % of total
|-
| Divorced || 58 || 2,220 || 2,278 || 8.33
|-
| Married || 305 || 12,160 || 12,465 || 45.61
|-
| Single || 112 || 3,671 || 3,783 || 13.84
|-
| Other || 6 || 529 || 535 || 1.96
|-
| Widowed || 251 || 8,020 || 8,271 || 30.26
|-
| TOTAL || 732 || 26,600 || 27,332 || 100.00
|}

The most populous category, “Married”, is a natural benchmark selection against which the odds ratios and their statistics can be computed. An example for category “Divorced” in shown in Table .

{| class="wikitable"
|+ Example: Odds ratio for “Divorced” vs. “Married”
! !! yes !! no !! Total
|-
| Divorced || 58 || 2,220 || 2,278
|-
| Married || 305 || 12,160 || 12,465
|-
| TOTAL || 363 || 14,380 || 14,743
|}

Here the odds ratio is <math>OR = \frac{\frac{58}{2,220}}{\frac{305}{12,160}} = \frac{0.0261}{0.0251} = 1.04 \nonumber \; ,</math> and similarly for the remaining categories as shown in Table .


A straightforward argument based on the data in Table would favor “Widowed” as a predictor of hospitalizations since its odds ratio is statistically significantly different from 1 at the 99% level (<math>\alpha=0.01</math>) and its confidence interval (CI) does not include 1 at the 95% confidence level (<math>\alpha=0.05</math>).

If the predictor variable is continuous rather than categorical, it could conceivably be transformed into the categorical form by “binning” its values into intervals, however, this approach is generally not recommended. Instead, an “incremental” odds ratio is computed as follows:

  1. construct a univariate logistic regression model for the variable in question as <math>\protect\label{E.Uni.2}
           \ln \left ( \frac{p(x)}{1-p(x)} \right ) = b_0 + a x \; ,</math> where <math>b_0</math> is the intercept of the logistic equation, <math>a</math> is the slope of the (univariate logistic regression) line;
  2. observe that <math>\protect\label{E.Uni.3}
           \ln \left ( \frac{p(x+1)}{1-p(x+1)} \right ) = b_0 + a(x+1)  \; ,</math> and hence <math>\begin{align}
           \protect\label{E.Uni.4}
           & & \ln \left ( \frac{p(x+1)}{1-p(x+1)} \right ) - \ln \left ( \frac{p(x)}{1-p(x)} \right ) = \nonumber \\
            & & \ln \left ( \frac{p(x+1)[1-p(x)]}{p(x)[1-p(x+1)]} \right ) = a = \ln \left ( OR_{uni}^{cont} \right ) \; .
       \end{align}</math>

Exponentiating both sides, we obtain <math>\protect\label{E.Uni.5}

   OR_{uni}^{cont} = e^a  \; .</math> The odds ratio defined by () can be viewed as a proportional increase in the odds of encountering an outcome of interest corresponding to a unitary increase in the value of the (continuous) predictive variable of interest. Note here that () makes sense only if the predictive variable can indeed vary by 1; if not, it needs to be reformulated with respect to the permissible increment <math>\delta</math>: <math>\begin{align}
   \protect\label{E.Uni.6a}
   & & \ln \left ( \frac{p(x+\delta)}{1-p(x+\delta)} \right ) - \ln \left ( \frac{p(x)}{1-p(x)} \right ) = \nonumber \\
    & & \ln \left ( \frac{p(x+\delta)[1-p(x)]}{p(x)[1-p(x+\delta)]} \right ) = \delta a \; , \\
   \protect\label{E.Uni.6b}         
   & & \ln \left ( OR(\delta)_{uni}^{cont} \right ) = \delta a  \; , \\
   \protect\label{E.Uni.6c}
   & & OR(\delta)_{uni}^{cont} = e^{\delta a}  \; .\end{align}</math> An instructive example of the foregoing is the “incremental” odds ratio with respect to patient age as described in Table .



We construct a univariate logistic regression model from the data in Table using ( - ). <math>\begin{align}

   \protect\label{E.Uni.7}
   \ln \left ( \frac{p(x)}{1-p(x)} \right ) = -19.58 + 0.0247 x \; ,\end{align}</math> and, therefore, <math>\begin{align}
   \protect\label{E.Uni.8}
   OR_{uni}^{cont} = e^{0.0247} = 1.025 \; .\end{align}</math> The odds ratio in () does not reveal much of a pattern of dependency of the probability of hospitalization on the patient’s age. Alternatively, considering an increment of 10 years instead, we obtain: <math>\begin{align}
   \protect\label{E.Uni.9}
   OR(10)_{uni}^{cont} = e^{0.247} = 1.28 \; ,\end{align}</math> and thus the odds ratio over a 10-year interval appears[14] to have more potential predictive power than its conventional counterpart defined by (). Regardless of the size of the increment <math>\delta</math>, the graph of <math>logit(p(x))</math> in Fig.  leads one to be skeptical about the influence of age as a continuous variable on the likelihood of hospitalization. Its inclusion in the final set of variables needs to be justified by examining overall model performance as described in section .
[[File:LogitHosp.pdf|thumb|Example: Probability of hospitalization from univariate logistic regression on patient age.]]

The R code used for generating Fig. is presented in Listing .


library( reshape2 )   # for dcast()
library( plyr )       # for mutate()

logit <- function( x ) { return( log( x / ( 1 - x ) ) ) }

COPDadmRaw <- read.csv( "../Data/COPD_ALL_ALIVE.csv" )
outcome <- "INP_OBS_COPD_ADM_365_DAYS"
# round ages to whole years and keep only age and the outcome indicator
COPDadm <- transform( COPDadmRaw[, c( "PAT_AGE_YRS", outcome )],
                      PAT_AGE_YRS=round( PAT_AGE_YRS ) )
COPDadm[is.na( COPDadm[, outcome] ), outcome] <- 0
# count admissions and non-admissions for each age
COPDform <- as.formula( paste( "PAT_AGE_YRS ~", outcome, sep="" ) )
COPDprod <- transform( dcast(COPDadm, COPDform, length ) )
colnames( COPDprod ) <- c( "PAT_AGE_YRS", "No", "Yes" )
COPDprod <- mutate( COPDprod, Total=Yes + No, Prob=Yes / Total,
                    Logit=logit( Prob ) )
COPDprod[! is.finite( COPDprod$Logit ), "Logit"] <- -25   # cap -Inf logits for plotting

plot( COPDprod$PAT_AGE_YRS, COPDprod$Logit, 
      main="Logit of the probability of hospitalization", xlab="Age, yrs.",
      ylab="Logit(p) = log(p / (1 - p))" )

As can be seen from Table , the most significant predictive variables with respect to their odds ratios are HOSP_365_DAYS_COPD_IND, ER_VISITS_365_DAYS_COPD_IND, PL_SICKLE_IND and O2_IND. The confidence interval for the odds ratio of hospitalization as a function of sickle cell anemia is very wide, alerting us to the possible unreliability of this variable as a predictor. Additional data (not presented here for the sake of brevity) shows that the number of patients with sickle cell anemia is too small to derive meaningful conclusions, and therefore this variable can be dropped from consideration.


As can be seen from Table , the most significant predictive variables with respect to their odds ratios are NUM_HOSP_30_DAYS_COPD, NUM_ER_VISITS_30_DAYS_PNEU, and NUM_ER_VISITS_30_DAYS_COPD. We can also observe that PAT_AGE_YRS appears to be insignificant judging by its odds ratio. We need to bear in mind, however, that, as pointed out in Section , a one-year increase in patient age cannot be expected to change the odds of hospitalization by much, and thus the patient’s age cannot be automatically discarded from the final model.

Computing a multivariate odds ratio

The odds ratio defined in Section loses its meaning for a multivariate model regardless of whether the predictive variables are of indicator, categorical or continuous type. Fortunately, () can be generalized to the case of a multivariate model once we realize that all terms in a multivariate logistic equation except <math>a_i</math> vanish the same way as they did in () once we construct the “incremental” odds ratio. In view of this, our algorithm will proceed as follows:

  1. construct a multivariate logistic regression model for the variable in question as <math>\protect\label{E.Multi.1}
       \ln \left ( \frac{p(x)}{1-p(x)} \right ) = b_0 + \sum_{i=1}^N a_i x_i \; ,</math> where <math>x = ( x_1, \dots, x_N )^T</math> is the vector of predictive variables;
  2. observe that <math>\protect\label{E.Multi.2}
           \ln \left ( \frac{p(x_i+1)}{1-p(x_i+1)} \right ) = b_0 + \sum_{k=1}^{i-1} a_k x_k + a_i (x_i+1) +  \sum_{k=i+1}^N a_k x_k \; ,</math> and hence <math>\begin{align}
           \protect\label{E.Multi.3}
           & & \ln \left ( \frac{p(x_i+1)}{1-p(x_i+1)} \right ) - \ln \left ( \frac{p(x_i)}{1-p(x_i)} \right ) = \nonumber \\
            & & \ln \left ( \frac{p(x_i+1)[1-p(x_i)]}{p(x_i)[1-p(x_i+1)]} \right ) = a_i = \ln \left ( OR_{multi}^{cont} \right ) \; .
       \end{align}</math>

Exponentiating both sides, we obtain <math>\protect\label{E.Multi.4}

   OR_{multi}^{cont} = e^{a_i}  \; .</math> The odds ratio defined by () represents a proportional increase in the odds of encountering an outcome of interest corresponding to a unitary increase in the value of the respective (continuous) predictive variable of interest. The same note of caution with respect to the domain of the “incremental” multivariate odds ratio applies here as in the univariate case above.
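
In R, these multivariate “incremental” odds ratios and their confidence intervals can be read off a fitted logistic model by exponentiating its coefficients; a minimal sketch on simulated data:

set.seed(6)
n <- 500
x1 <- rnorm( n )
x2 <- rbinom( n, 1, 0.3 )
p <- 1 / ( 1 + exp( -( -1 + 0.8 * x1 + 1.2 * x2 ) ) )
y <- rbinom( n, 1, p )
fit <- glm( y ~ x1 + x2, family=binomial )
exp( coef( fit ) )      # OR per unit increase in each predictor, others held fixed
exp( confint( fit ) )   # corresponding 95% confidence intervals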

In the example in Section , the odds ratio matrix computed for both continuous / interval and indicator variables is presented in Table .


As can be seen from Table , the most significant predictive variables with respect to their odds ratios are NUM_HOSP_30_DAYS_COPD, NUM_ER_VISITS_30_DAYS_PNEU, and NUM_ER_VISITS_30_DAYS_COPD. We can also observe that PAT_AGE_YRS appears to be insignificant judging by its odds ratio. We need to bear in mind, however, that, as pointed out in Section , a one-year increase in patient age cannot be expected to change the odds of hospitalization by much, and thus the patient’s age cannot be automatically discarded from the final model.

Assessing model coefficients

The coefficients of a linear or logistic regression are computed using a variant of the normal equation (). In reality, this relationship includes the random error component <math>\begin{align}

   \protect\label{E.Coef.1a}
   y & = & \sum_{i=0}^N a_i x_i + \epsilon \; , \\
   \protect\label{E.Coef.1b}
   x_0 & = & 1 \; ,\end{align}</math> where the intercept has been incorporated into the general equation for convenience by virtue of (). Coefficients <math>a_i</math>, obtained with the help of () - (), are estimates, albeit unbiased ones ; the uncertainty in their calculation is implied by the random nature of <math>\epsilon</math>. If we assume the normality of errors, <math>\epsilon \sim \mathcal{N} (0, \sigma^2)</math>, then the standard null hypotheses <math>H_0(a_i) : a_i=0</math> can be tested by computing the t-statistic <math>\begin{align}
   \protect\label{E.Coef.2a}
   t_i & = & \frac{\hat{a_i} - a_{i0}}{s.e.(\hat{a_i})}  \; , \; i=\overline{1,N} \; ,\\
   \protect\label{E.Coef.2b}
   s.e.(\hat{a_i}) & = & \sqrt{\frac{MS_{Res}}{S_{xx}}} \; , \\
   \protect\label{E.Coef.2c}
   MS_{Res} & = & \frac{1}{N-2}\sum_{i=1}^N \epsilon_i^2 \; , \\
   \protect\label{E.Coef.2d}
   S_{xx} & = & \sum_{i=1}^N\left ( x_i - \overline{x} \right )^2 \; \\
   \protect\label{E.Coef.2e}
   \overline{x} & = & \frac{1}{N} \sum_{i=1}^N x_i \; \\
   \protect\label{E.Coef.2f}
   t_0 & = & \frac{\hat{a_0} - a_{00}}{s.e.(\hat{a_0})} \; , \\
   \protect\label{E.Coef.2g}
   s.e.(\hat{a_0}) & = & \sqrt{MS_{Res} \left ( \frac{1}{N} + \frac{\overline{x}^2}{S_{xx}} \right )} \; , \\
   \protect\label{E.Coef.2h}
   t_i & \sim & t_{N-2} \; ,\end{align}</math> i.e., the test statistic follows Student’s t-distribution with <math>N-2</math> degrees of freedom. The significance of the coefficient, i.e., the probability that it comes from a distribution centered at <math>0</math>, is determined by the test statistic <math>t_i</math>. In view of , we can compute the appropriate <math>p</math>-values at the <math>\alpha</math> significance level and construct the usual confidence intervals for <math>a_i \; , i=\overline{1,N}</math> as <math>\begin{align}
   \protect\label{E.Coef.3a}
   a_i & \in & \left [ \hat{a_i} - t_{\frac{\alpha}{2}, N-2} \times s.e.(a_i) ,  \hat{a_i} + t_{\frac{\alpha}{2}, N-2} \times s.e.(a_i) \right ] \; .\end{align}</math> In our ongoing COPD example, we can now finalize the set of predictive variables and create a model for testing and validation. Drawing upon the results presented in Table  and Section , we select the variables for model () based on the statistical significance of their odds ratios and subject matter knowledge, and calculate the coefficient statistics presented in Table .


As follows from Table , HOSP_365_DAYS_COPD_IND, ER_VISITS_365_DAYS_COPD_IND, SMOKER_IND, O2_IND and EJFR_IND have the most impact on the estimated probability of the outcome of interest and are statistically significantly different from 0. From the clinical perspective, this makes perfect sense. On the other hand, automatically removing from the model those variables that are not statistically significantly different from 0 may result in a loss of information and is not generally recommended.

Transformation of variables

In many instances, variable transformation does not change the qualitative nature of the relationship between the corresponding predictive variable and the outcome. In obvious cases, however, it may significantly improve the quality of the model as illustrated by the following, admittedly contrived, example.

The data in Table was generated as <math>y = (x + \epsilon)^4</math>, where <math>\epsilon \sim \mathcal{N}(0, 1)</math> (cf. the listing below).


Constructing a straightforward linear regression model <math>y = \beta_0 + x \beta_1</math> (cf. ) yields an expectedly poor fit depicted in Fig. with <math>R^2 = 0.72</math>.



Performing a simple variable transformation, <math>\tilde{x} = x^4</math> and applying a “generalized” linear model <math>y = \beta_0 + \beta_1 \tilde{x}</math> results in a much better fit with <math>R^2 = 0.98</math>, as can be seen in Fig. .

The code for generating Fig. is presented in Listing .

    linMod <- function( x, y, fn, main, xlab, tx, ty ) {
      b <- fn( x )                        # transformed predictor
      lm <- lm( y ~ b )                   # fit y against the transformed variable
      coef <- coef( lm )
      yHat <- b * coef[2] + coef[1]       # fitted values on the transformed scale
      plot( b, y, main=main, col='magenta', xlab=xlab )
      abline( coef=coef, col='blue' )
      a <- sprintf( "%.2f", coef[2] )
      b <- sprintf( "%.2f", coef[1] )
      snb <- ifelse( sign( coef[1] ) == 1, "+", "" )
      text( tx, ty[1], bquote( paste( hat( y ), "=", .(a), "x", .(snb), .(b) ) ) )
      text( tx, ty[2], bquote( paste( "y =", hat( y ), "+", epsilon ) ) )
      R2 <- sprintf( "%.2f", summary(lm)$r.squared )
      text( tx, ty[3], bquote( paste( R^2, "=", .(R2) ) ) )
    }

    set.seed(1)
    x <- 1:20
    xR <- rnorm( 1:20 )
    y <- (x + xR)^4
    mt <- "Transformation of variables: y = a *"
    ty <- c( 1e5, 0.9e5, 0.8e5 )
    linMod( x, y, I, paste( mt,  "x" ), "x", 5, ty )
    linMod( x, y, function(x) {x^4}, bquote( paste( .(mt),  x^4) ), bquote( x^4 ),
            2e4, ty )
    df <- data.frame( x=x, x.4=x^4, y=sprintf( "%6.2f", y ) )
    write.csv( df, "./VarTran.csv")

Including interaction terms in the model

Common sense suggests that an optimal choice among models with approximately equal performance characteristics is the one that has the fewest “moving parts”. This principle is often (simplistically) referred to as Occam’s razor and quoted as “Numquam ponenda est pluralitas sine necessitate” (Plurality must never be posited without necessity), and “Frustra fit per plura quod potest fieri per pauciora” (It is futile to do with more what can be done with less). In agreement with this principle, we generally prefer linear models to their nonlinear counterparts as long as their performance metrics do not differ significantly. There are cases, however, when a linear model simply will not do (see, e.g., the example in Section ). We are not aware of any universal recipe for selecting a specific variable transformation in every possible instance. If there are sufficient reasons to suspect from general subject domain considerations that predictive variables may influence each other, introducing interaction terms may improve model performance.

Table illustrates a contrived example of hospitalization data for a hypothetical population of patients.



For each patient in Table , both age and sex were generated randomly; hospitalization was then assigned to every male and to each female whose age was above the median age of the sample less 5 years. Since the data is random by construction, we are at liberty to use the first 20 rows of the table for training and the remaining 10 rows for testing our models. The results of applying a strictly linear model of the form <math>hospitalization \sim age + sex</math> to the testing dataset are presented in Fig. .


The results of applying a linear model with an interaction term, of the form <math>hospitalization \sim age + sex + age \times sex</math>, to the testing dataset are presented in Fig. .


Not surprisingly, the performance of the model without the interaction terms (ROC curve AUC of 0.78 in Fig. ) is inferior to that of the model with interaction terms included (ROC curve AUC of 1.00 in Fig. ), and the use of the more complicated model is justified.

The code for generating Figs. and is presented in Listing .

linMod <- function( train, test, inclCol, outCol, sep, main ) {
  # sep='+' builds a purely additive formula; sep='*' adds the interaction term
  form <- formula( paste( outCol, "~", paste( inclCol, collapse=sep ) ) )
  gm <- glm( form, data=train, family='binomial' )
  prediction <- predict( gm, test )

  # auc.perf.base() is an in-house helper (not shown here) that plots the ROC
  # curve and reports its AUC for the given predictions and observed outcomes
  perf <- auc.perf.base( prediction, test[, outCol], text=main )
}

set.seed(1)
n <- 30
id <- 1:n
gender <- rnorm( id )
age <- round( 70 + 10 * rnorm( id ) )
sex <- ifelse( gender <= 0, "M", "F" )
y <- ifelse( ( age - median( age ) ) * ifelse( sex == 'M', 0, 1 ) <= -5, 0, 1 )
mt <- "Interaction term: age * gender"
data <- data.frame( ID=id, age=age, sex=sex, hospitalization=y )
trainRows <- 1:round( n * 2 / 3 )
testRows <- ( max( trainRows ) + 1 ):n
test <- data[testRows, ]
train <- data[trainRows, ]

main <- "Hospitalization model performance, strictly linear structure"
linMod(train, test, c( "age", "sex" ),  "hospitalization", '+',  main )
main <- "Hospitalization model performance, interaction terms included"
linMod(train, test, c( "age", "sex" ),  "hospitalization", '*',  main )

write.csv( data, "./IntTermEx.csv" )

Model validation

Once a model has been developed, it has to be validated to ensure that it meets development specifications. Regardless of the type of the model, it has to be cross-validated on an independent data set, and the results compared with the training dataset to detect possible under- or overfitting. Specific validation methods for the types of model most commonly used by Clinical Analytics are described in the rest of this section.

Linear regression

Linear regression assumes the existence of a linear relationship between the input variables and the observed output. In general, a successful model must satisfy several requirements to be considered acceptable as a predictive tool :

  1. sufficiently high <math>R^2</math> (typically, at least 0.7) - this will confirm that a large proportion of variation in the dependent variable can be explained by the variation in the independent variable(s);
  2. reasonably good visual fit between the straight line predicted by the model and the actual functional relationship between the dependent and independent variables;
  3. sufficiently random residuals (at least, no noticeable trend).

Once these requirements have been satisfied, the model can be deemed sufficiently accurate for our needs.
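
A minimal sketch of these checks on data simulated in the same way as in the listing of Section :

set.seed(1)
x <- 1:20
y <- 2 * x + 1 + 5 * rnorm( 20 )
fit <- lm( y ~ x )
summary( fit )$r.squared              # requirement 1: R^2 comfortably above 0.7
plot( x, y ); abline( fit )           # requirement 2: visual fit of the straight line
plot( fitted( fit ), resid( fit ) )   # requirement 3: residuals with no visible trend
abline( h=0, lty='dotted' )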

Logistic regression

Logistic regression is a classification model and thus needs to be evaluated on its ability to predict the outcome of interest. One of the most intuitive and widely accepted techniques for this purpose is computing the area under the receiver operating characteristic (ROC) curve. We adopt it as a universal measure of fit for classification models of any nature, including logistic regression. For a typical time-dependent dataset, the preferred way is to proceed as follows (a minimal sketch follows the list):

  1. build a regression model on the selected training set (80% of all data);
  2. use the model to predict outcomes on the testing set (20% of all data);
  3. compute the area under the curve (AUC) for the corresponding ROC;
  4. if the AUC is acceptable, separate the dataset into the “old” and “new” data (e.g., all years up to 1 year ago and the most recent year) and repeat the test;
  5. if AUCs from different datasets are comparable and the differences between them can be reasonably explained, accept the model, otherwise, go back to the drawing board and repeat.
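
A minimal sketch of steps 1-3 on simulated data; the pROC package is used here purely for illustration, and the 80/20 split and variable names are arbitrary:

library( pROC )
set.seed(7)
n <- 1000
dat <- data.frame( x1=rnorm( n ), x2=rnorm( n ) )
dat$y <- rbinom( n, 1, 1 / ( 1 + exp( -( -1 + dat$x1 + 0.5 * dat$x2 ) ) ) )
trainRows <- sample( n, 0.8 * n )
train <- dat[trainRows, ]
test <- dat[-trainRows, ]
fit <- glm( y ~ x1 + x2, family=binomial, data=train )      # step 1: train on 80%
pred <- predict( fit, test, type='response' )               # step 2: predict on 20%
auc( roc( test$y, pred ) )                                  # step 3: AUC of the ROC curve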

If the model allows backward transition from the outcome of interest (admissions), the training and test datasets can be generated from patient data using multiple observations of the same patient; in the opposite case (mortality), a random data point is chosen from the patient’s time-dependent data. This approach can be justified by observing that if a patient can experience the outcome of interest multiple times, each encounter can be viewed as an independent event with a possible outcome of interest. If a patient can only experience the outcome of interest once, the use of the same patient’s data accumulated over the years violates the assumption of independence between observations and, additionally, ascribes disproportionately high weight to those who did not experience such an outcome, thus leading to potential “survivor bias” .

In order to provide confidence interval boundaries for the AUC to facilitate the comparison of model quality, an appropriate estimation technique needs to be selected. The most accurate estimates are based on the parametric assumption of binormality of the ROC curve . Such an assumption is not unduly restrictive for large datasets, and the obtained estimates employ the usual <math>z</math>-statistic argument. When the number of positive outcomes is relatively small[15], a semi-parametric or nonparametric estimate may be desirable. For our purposes, we deem it sufficient to construct the confidence interval for the AUCs by using repeated sampling as described in step of the algorithm in Section .

Other measures of goodness-of-fit include (see Section below) sensitivity and specificity. These are presented as supplementary metrics for the purpose of identifying the optimal balance between sensitivity and specificity and usually complement each other.

Continuing with the example in Section , Fig. presents the AUC, sensitivity and specificity for the logistic regression model for predicting hospitalizations in COPD patients previously referenced in Table [16].


As can be seen from Fig. , the AUC for the model in question is approximately 0.75. The optimal balance between sensitivity and specificity is attained at the cutoff point of approximately 10% of the population. In other words, it appears optimal to flag approximately 1/10th of the patients as being at high risk of admission for COPD-related reasons and, if the objective is efficient case management, concentrate limited resources allocated to this task on this subgroup.

It is considered good practice to compare the results of a developed model with a benchmark “null hypothesis” option whenever possible. For example, if the object of our investigation is assessment of relative hospitalization risk for a group of patients, the corresponding benchmark could be random selection from the total population of a sample equal in size to our group. Concretely, suppose that we have developed such a model based on the Elixhauser approach and selected a “naïve random” benchmark as described above. Table summarizes the results of applying each model to the total patient population and selecting 1,000 with the highest risk score.



The superiority of the Elixhauser model is evident: 24.4% of the 1,000 patients most likely to be admitted were actually admitted to the hospital during the subsequent year for specified diagnoses, compared to only 0.8% of those selected randomly. For general admissions, those figures are 57.6% and 8.2%, respectively. AUC comparison yields 0.5 for the random guess (as expected) and 0.93 for the Elixhauser model (excellent).

Validation of temporal datasets[edit | edit source]

When working with patient data, it is common to consider time-dependent outcomes for the same individual as separate dataset entries unless the outcome of interest presents an absorbing boundary, i.e., is irreversible (e.g., in mortality risk modeling). It is thus pertinent to ask what set of tests is sufficient to convince a reasonably skeptical examiner[17] that a newly developed model works universally well under practical circumstances. While the answer to this question is often subjective, the following testing routine has so far yielded satisfactory results for the purpose of identifying intervention candidates in the total population health management program (a minimal R sketch of the initial split and the forward test follows the list):
  1. separate the final usable dataset into the training and testing portions by designating a random 20% sample for testing and the remaining 80% for training the model;
  2. repeatedly run the final model (as described in Fig. ) on training datasets obtained at the previous step until satisfied with the goodness-of-fit statistics;
  3. execute the forward test as
    1. train the model on the first available period[18] of data;
    2. from the remaining period data, select the entries that are appropriate under the assumption that no posterior information is available and no patient data is given disproportional weight in the model, e.g., by selecting only one period data point of patient data at random in the mortality model;
    3. refine the original model as necessary until AUCs for testing period data are satisfactory;
  4. execute the backward test as
    1. train the model on the last available period of data;
    2. from the remaining period data, select the entries in the same way as for the forward test;
    3. refine the original model as necessary until AUCs for testing period data are satisfactory;
  5. execute the mid history test as
    1. train the model on the first and last available periods of data;
    2. from the remaining period data, select the entries in the same way as for the forward test;
    3. refine the original model as necessary until AUCs for testing period data are satisfactory;
  6. execute the last available period test as
    1. from all periods except the last one for which full outcome of interest data is available, select the entries in the same way as for the forward test;
    2. train the model on the data selected in the preceding step;
    3. test the model on the last period for which full outcome of interest data is available;
    4. refine the original model as necessary until AUCs for testing period data are satisfactory;
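A minimal R sketch of the random 80/20 split (step 1) and the forward test (step 3) is given below; the data, variable names and model are made up for illustration and stand in for the actual model referenced above:

    ## Minimal sketch (made-up data): random 80/20 split and a forward test.
    aucHat <- function(score, outcome) {           # rank-based (Mann-Whitney) AUC
      r <- rank(score); nPos <- sum(outcome == 1); nNeg <- sum(outcome == 0)
      (sum(r[outcome == 1]) - nPos * (nPos + 1) / 2) / (nPos * nNeg)
    }
    set.seed(1)
    dat <- data.frame(year = sample(2009:2013, 2000, replace = TRUE),
                      x1   = rnorm(2000), x2 = rnorm(2000))
    dat$outcome <- rbinom(2000, 1, plogis(-2 + dat$x1 + 0.5 * dat$x2))

    ## random 20% test / 80% training split
    testIdx <- sample(nrow(dat), 0.2 * nrow(dat))
    fit     <- glm(outcome ~ x1 + x2, family = binomial, data = dat[-testIdx, ])
    aucHat(predict(fit, dat[testIdx, ], type = "response"), dat$outcome[testIdx])

    ## forward test: train on the first period, test on the remaining periods
    fwdFit  <- glm(outcome ~ x1 + x2, family = binomial, data = subset(dat, year == 2009))
    fwdTest <- subset(dat, year > 2009)
    aucHat(predict(fwdFit, fwdTest, type = "response"), fwdTest$outcome)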

Predicting future outcomes[edit | edit source]

Once the test program outlined in Section has yielded consistent AUC estimates and reasonably stable coefficients, the “production”, or “forward-looking”, model is constructed by training the algorithm on the whole dataset. Sample code for an implementation of the prediction algorithm is given in Appendices (“vanilla” logistic regression), (Cox proportional hazard model) and . The algorithms apply the respective vectors of regression coefficients () or () to generate the appropriate risk score (the “probability” of the outcome of interest for logistic regression or the hazard function for the Cox proportional hazard model). Once a risk rating has been assigned to every member of the test sample, the members can be ranked by their ratings in descending order. The <math>N</math> riskiest members can then be selected from the population as candidates for intervention.
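For the logistic-regression case, the following is a minimal R sketch of scoring and ranking a population; the data and names are made up for illustration and this is not the appendix code itself:

    ## Minimal sketch (made-up data): score the population with a fitted logistic model,
    ## rank by risk and select the N riskiest candidates for intervention.
    set.seed(1)
    pop <- data.frame(patientID = 1:2049, x1 = rnorm(2049), x2 = rnorm(2049))
    pop$outcome <- rbinom(2049, 1, plogis(-2 + pop$x1))
    fit <- glm(outcome ~ x1 + x2, family = binomial, data = pop)

    pop$risk   <- predict(fit, newdata = pop, type = "response")   # probability of the outcome
    N          <- 10
    candidates <- head(pop[order(pop$risk, decreasing = TRUE), ], N)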

Table presents 10 patients at highest risk of COPD admission from the test population of 2,049 in the ongoing example from Section .



In Table , “Risk” is the probability of the outcome of interest given by the logistic regression model.

Evaluating model performance[edit | edit source]

The Clinical Analytics team uses the area under the ROC curve (AUC) as the main metric for evaluating the performance of a logistic regression or Cox proportional hazard model. For evaluating the optimal balance between sensitivity and specificity, these two metrics are also included.

Receiver operating characteristics curve[edit | edit source]

ROC curves are an essential tool for assessing the quality of a classification model. Table illustrates the relationship between the actual and predicted outcomes as reflected by such curves.
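Such a cross-tabulation of actual versus predicted outcomes can be produced directly; the following is a minimal R sketch with made-up data and an illustrative 10% cutoff:

    ## Minimal sketch (made-up data): cross-tabulate actual outcomes against predictions
    ## obtained by flagging the riskiest 10% of the test population.
    set.seed(1)
    testData <- data.frame(outcome = rbinom(400, 1, 0.1))
    testData$risk <- plogis(-2 + 2 * testData$outcome + rnorm(400))   # stand-in risk scores
    cutoff    <- quantile(testData$risk, 0.9)                         # top 10% by risk score
    predicted <- ifelse(testData$risk >= cutoff, "Flagged", "Not flagged")
    table(Actual = testData$outcome, Predicted = predicted)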




ROC curve graphs are used for internal research purposes and for presentation to technical audiences familiar with this concept. The ROC curve for a survival model describing the mortality risk in heart failure patients is presented as an example in Fig. .


File:Survival AUCm.pdf
Caption: Sample graph.

An example of a plot combining the ROC curve, sensitivity and specificity was presented earlier in Fig. .

For historical reasons, the Clinical Analytics team has found it more instructive and easily digestible for executive and practitioner audiences to employ combined / graphs as a tool for visualizing the quality of a predictive model. These combined graphs are double-scaled, with one metric plotted against the left-hand axis in red and the other against the right-hand axis in blue. The abscissa (<math>x</math>-axis) represents the percentage of the (test) population that was classified by the model as having an outcome of interest. The combined graph for a logistic heart failure admission prediction model is presented as an example in Fig. .


As follows from Fig. , should the riskiest 5% of the patients selected by the model be chosen for intervention, the true positive rate in that population will be approximately 50%, compared to approximately 10% in the general population; hence the lift achieved by applying the model to the top 5% is close to 5.
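A minimal R sketch of this calculation, with made-up data standing in for actual model output, is:

    ## Minimal sketch (made-up data): outcome rate among the riskiest 5% versus the
    ## overall rate, and the implied lift.
    set.seed(1)
    testData <- data.frame(outcome = rbinom(2000, 1, 0.1))
    testData$risk <- plogis(-2 + 3 * testData$outcome + rnorm(2000))   # stand-in risk scores
    top5    <- testData[testData$risk >= quantile(testData$risk, 0.95), ]
    rateTop <- mean(top5$outcome)          # outcome rate among the flagged 5%
    rateAll <- mean(testData$outcome)      # outcome rate in the whole test population
    lift    <- rateTop / rateAll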

Summary of model performance metrics[edit | edit source]

In order to evaluate model performance, it is helpful to summarize some of the relevant model metrics in one place. Table is an extension of Table that includes the corresponding performance parameters calculated for the same 10 patients on the original test dataset.



In Table , the columns have the following meanings:


The output template featured in Table has been adopted by the Clinical Analytics team as the preferred way of illustrating model performance. This layout is easy to present and explain to upper management in order to facilitate operational business decisions, including those concerning resource allocation.

If the model with an irreversible outcome described in Section is tested on a randomly sampled dataset that includes a single entry for surviving patients, an argument can be made that the AUC of the model in Table is inflated by underweighting survivors’ data. This concern can be addressed by selecting the data from the most recent time period (e.g., year) as the test dataset and using it to calculate the corresponding model performance metrics in this table. If this approach is followed, all data in the test population will be reduced to a single entry per patient with equal weights. Linear regression models described in Section and Cox proportional hazard models described in Section do not require this adjustment.

Presentation of results[edit | edit source]

Output data storage[edit | edit source]

The storage model for output data should facilitate the achievement of the following objectives:

  • keep project-related data together in a form that is
    • compact,
    • logical,
    • readable and
    • easily accessible;
  • make it easy to perform unit testing;
  • help compare the results of incremental changes in the code;
  • support creating time snapshots of the model for auditing purposes.

The above-mentioned objectives can be accomplished more easily if

  1. readable output files are stored in the Results directory (as mentioned in Section ),
  2. graphs (PDFs, JPGs, etc.) are stored in the Graphs subdirectory of Results, and
  3. both readable files and graphs are cataloged by the date of the corresponding program run in separate directories named yyyy-mm-dd, with appropriate commentary appended and separated by underscores (e.g., 2014-05-14_Hip_Knee), as sketched in the helper below.
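A small helper along these lines is sketched below; the function name is illustrative and not part of any existing package:

    ## Hypothetical helper: build a dated results path such as Results/2014-05-14_Hip_Knee
    ## or Results/Graphs/2014-05-14_Hip_Knee and create it if necessary.
    makeResultsDir <- function(root = "Results", comment = NULL, graphs = FALSE) {
      stamp <- format(Sys.Date(), "%Y-%m-%d")
      name  <- if (is.null(comment)) stamp else paste(stamp, comment, sep = "_")
      path  <- if (graphs) file.path(root, "Graphs", name) else file.path(root, name)
      dir.create(path, recursive = TRUE, showWarnings = FALSE)
      path
    }
    makeResultsDir(comment = "Hip_Knee", graphs = TRUE)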

Output data naming conventions[edit | edit source]

All output files are named using abbreviated functional descriptions of their contents and are date stamped for subsequent reference. Naming conventions for output files are listed in Table .


Presentation format[edit | edit source]

Most of the research projects carried out by the Department of Clinical Analytics produce output that is best digested by the audience when presented in the form of tables and graphs. While it is difficult to prescribe a universal format for a successful table, it is nevertheless desirable to establish the broadest possible documentation standards, some of which are listed below; a brief R example follows the list.

  1. Output files
    1. numeric output that will be ingested into Excel or R for further processing should be stored as .csv files,
    2. plain text files should be avoided whenever possible,
    3. if extended markup is desired (e.g., web browser output), XML output is appropriate,
    4. where extensive data post-processing manipulation is anticipated, a (sandbox) database table for the results is desirable;
  2. Graphs
    1. preferably, graphs should be stored as PDF files with axes, legend and tick marks clearly labeled and easily readable (in general, 12 pts. or larger),
    2. landscape orientation is preferred,
    3. legend coloring scheme should be consistent with that of the plot itself.
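A brief R sketch following these conventions; the directory, file names and data are made up:

    ## Illustrative example: numeric output as CSV, graph as a landscape PDF with
    ## clearly labeled, legible axes.
    outDir   <- file.path("Results", "2014-05-14_Example")
    graphDir <- file.path("Results", "Graphs", "2014-05-14_Example")
    for (d in c(outDir, graphDir)) dir.create(d, recursive = TRUE, showWarnings = FALSE)

    scores <- data.frame(patientID = 1:100, risk = runif(100))
    write.csv(scores, file.path(outDir, "riskScores_2014-05-14.csv"), row.names = FALSE)

    pdf(file.path(graphDir, "riskHistogram_2014-05-14.pdf"), width = 11, height = 8.5)  # landscape
    par(cex.axis = 1.2, cex.lab = 1.2)            # keep axis and label text legible
    hist(scores$risk, main = "Risk score distribution", xlab = "Risk score", col = "steelblue")
    dev.off()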

Documentation not requiring extensive mathematical formulae, sophisticated graphics or cross-referencing can be created in Microsoft Word. Papers that do require substantial typesetting should be created in LaTeX, if possible.

Data storage[edit | edit source]

Input data storage[edit | edit source]

Interim input data can be stored as text, CSV or XML files, Excel spreadsheets or sandbox databases. For consistency, it is preferable to keep input data in the “Data” folder of the corresponding project folder. When the data is intended for use by other people, it is helpful to use a single format that can be easily picked up and converted into a form convenient for its consumer. For most practical purposes, CSV is preferred. If the data is stored in a sandbox database, SQL scripts used for data extraction can be stored in the “Code” folder of the project. If the creation of a shared internal database is desirable and possible for the purpose of the project, it can be set up on the common server with tables named for specific tasks.

Nomenclature of input variables[edit | edit source]

In an effort to standardize the nomenclature of input variables, a suggested list of common names is presented in Table .


Formatting and storage of intermediate results[edit | edit source]

During the execution of a predictive analytics script, intermediate files are stored in the project directory tree in the directory titled “Results/Interim”. Intermediate plots are stored as PDF files; tables are saved as CSV files. All output files are named using abbreviated functional descriptions of their contents and are date stamped in order to preserve research history and ensure reproducibility of the results. Naming conventions for interim output files are listed in Table .


Coding practices[edit | edit source]

Revision control[edit | edit source]

Revision control is essential for incremental and cooperative development. Git is currently the suggested tool for implementing a robust framework for sharing and improving the code. A centralized local Git repository for the Clinical Analytics team is not available at the time of this writing (); however, interim measures can be taken to ensure that changes made to the code are at least traceable to their developer(s). Collaboration Portal is currently used as a proxy for the centralized code repository, and all current code should periodically be placed on the portal to facilitate quality control and cross-training of the department team members. In preparation for the implementation of the centralized Git repository, all team members should set up Git on their local computers and regularly check in working code with appropriate descriptive comments. If this practice is followed, the creation of a centralized Git node will be reduced to pushing individual repositories to the designated location and should take place with minimal diversion of resources from higher priority tasks.

Once the central repository is set up, one person (presumably, the project manager) should be designated as the administrator with one or two team members serving as backup resources fully cross-trained on the system functionality. The following is a suggested list of good repository maintenance practices in the form of do’s:

  • DO take regular snapshots of COMPILABLE CODE;
  • DO write concise, informative, itemized comments for each commit highlighting the most significant changes from the previous version;
  • DO minimize the time you keep the code checked out;
  • DO conduct unit tests before checking in the code to make sure it is backward compatible;
  • DO merge branches at the first opportunity.

and don’ts:

  • DON’T check in code that does not compile;
  • DON’T check in code that will break the build;
  • DON’T store the executable, compiled, auxiliary or any other binary files with the source;
  • DON’T create more branches than necessary.

(see, e.g., , , ).

Code storage[edit | edit source]

The following is a suggested directory structure for storing project code:

  • Methodology
    • methodology documents and white papers describing the algorithm;
    • testing and implementation procedures;
    • production implementation requirements;
  • Data
    • input data organized by run date, functionality or model version as appropriate;
    • tools (e.g., Excel spreadsheets) for pre-processing input data (if applicable);
  • Code
    • project files (if applicable);
    • source code differentiated by language (if applicable):
      • R
      • Python
      • SQL
      • C#
      • Other;
  • Log
    • log and error files (if applicable);
  • Results
    • output data files by run date, functionality or model version as appropriate;
    • graphical output by run date, functionality or model version as appropriate;

Code review[edit | edit source]

Peer review is an indispensable code verification and validation tool that also facilitates the development of robust, scalable and reusable code. Once developers have completed a new release to their satisfaction, they should initiate a code review with a designated peer. The assignment of peers can be very informal, especially when the new model is confined to a domain of narrow expertise. It is considered beneficial to the quality of the algorithm to have a person less familiar with the methodology review and, time permitting, replicate the results of the newly shipped release. In the absence of a designated QA department, the only defense against inadvertent flaws in the code is the institution of a process that requires the algorithm’s author to fully explain the methodology and the coding decisions behind it to a “skeptical” colleague. Such a colleague should understand the basic concepts but not be biased in any way towards accepting the result. Resources permitting, having more than one person review the code would strengthen the quality control process, but we have to be realistic about what we can expect of ourselves given more pressing time commitments.

Once the code has been reviewed by a designated “tester”, it can be tagged as the “production version” in the repository, thus becoming an official release.

Naming conventions[edit | edit source]

Names of objects used throughout the code should be clear, concise, consistent and descriptive. Suggested naming conventions are detailed in Table .


Writing quality code[edit | edit source]

The definition of what constitutes quality code in any programming language could be the subject of a lengthy debate that is best carried out away from volatile compounds and other easily inflammable materials. Below follow a few fundamental principles that the original writers of this document believe to be universal and rarely disputed.

R[edit | edit source]

  • Write readable code:

    • create a high level function that calls analytical and auxiliary functions as needed;

    • reference packages only where the use of such packages is required at the lowest level;

    • indent your code;

    • use spaces around operators, after commas, after opening and before closing braces and parentheses;

    • wrap long lines at column 80 (remember the punch cards? I’m only partially joking here...);

    • use knitr-style comments;

    • in a function, first list the required, then the optional parameters.

  • Write meaningful comments:

    • in a function

      • explain what the function is for;

      • describe input and output arguments;

      • list any specific parameter values that present special cases.

      The mode function in the NS.CA.statUtils package is an example:

              ## ---- NS.CA.mode ---- 
              ## Mode(s) of the distribution
              ## Usage
              ###  NS.CA.mode(x, fun=function(y) {y})
              ## Arguments
              ###  x - matrix or data frame containing the distribution(s) (convert to matrix if list)
              ###  fun - function determining which mode to select in the multimodal case
              
              NS.CA.mode <- function(x, fun=function(y) {y}) {
                ux <- unique(x)                          # distinct values
                counts <- tabulate(match(x, ux))         # frequency of each distinct value
                fun(ux[which(counts == max(counts))])    # return the most frequent value(s)
              }
              
    • in a loop

      • mark nested long loops if necessary;

      • document “forks” as appropriate

    • in an if-then-else structure

      • explain what the logic means when necessary;

      • mark matching braces as needed.

  • Favor sapply, lapply and ddply over for loops (a short sketch follows this list);

  • Above all, DO NOT copy and paste! If a piece of code is used more than once, turn it into a function instead.

  • Avoid rbind wherever possible since it can be slow.

  • Reduce early and often, e.g.,

        # aggregated charges and costs
          totalChgAll[[aggregator]] <- Reduce(
            function(...) merge(..., by = totalGroupBy, all = TRUE, suffixes = totSuff),
            totChgCostsNotNull)
        
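The following sketch, with illustrative names and made-up data, shows vectorized application in place of an explicit for loop and a single do.call(rbind, ...) in place of growing a data frame with repeated rbind calls:

        ## Illustrative sketch: sapply instead of a for loop, and one combined rbind
        ## instead of appending rows inside a loop.
        xs <- list(a = rnorm(10), b = rnorm(20), c = rnorm(30))

        means <- sapply(xs, mean)                      # replaces a for loop filling a vector

        summaries <- do.call(rbind, lapply(names(xs), function(nm)
          data.frame(group = nm, n = length(xs[[nm]]), mean = mean(xs[[nm]]))))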

Accepted naming conventions are listed in Table .

Testing and QA[edit | edit source]

The design stage of application development is an excellent time to ensure that subsequent testing and validation of the model progress as smoothly as possible. Many of the issues arising at a later stage can be mitigated by ensuring open communication channels between the development and production teams and by remembering that an ounce of prevention is worth a pound of cure:

  • have a list of candidate predictor variables for the research project;
  • find out which variables are available from the historical dataset - and which are not;
    • if a variable has been consistently available throughout history, find out whether its meaning has changed;
    • if a variable has appeared only recently, find out how it can be synthesized from past data. Ensure that the way you are replicating the new variable from the data warehouse is consistent with the way it is currently being generated. It will save you a lot of time and headaches.
  • ensure that the test data set can be easily replicated by the data warehouse team;
    • if an easy, one-to-one mapping between your dataset and theirs is hard to achieve, change your dataset, if possible;
    • if your dataset must be constructed in a specific way, get the data warehouse team started on matching your data extract as early as possible.

An application can be productionalized efficiently not only through writing correct, clean and efficient code but also through carrying out as many testing and data reconciliation iterations as possible within a limited time frame. This can be achieved by following the general guidelines below:

  1. develop unit test framework whenever possible;
  2. fix and freeze the input data for reconciliation testing to ensure reproducible test results (see the sketch after this list);
    1. hard-code seeds for the code dependent on random number generators;
    2. take a snapshot of the input data at a point in the past and use it for subsequent calculations at least until the current round of testing is finished;
  3. rank all differences between the old and new results in descending order and
  4. drill down into the “worst offenders” until a satisfactory explanation of the differences can be found and an acceptable level of accuracy can be achieved.
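A minimal sketch of items 2–4, with illustrative names and stand-in data in place of a frozen input snapshot and actual model output:

    ## Minimal sketch (made-up data): reproducible reconciliation run with ranked differences.
    set.seed(20140514)                                     # hard-coded seed
    input     <- data.frame(patientID = 1:1000)            # stands in for a frozen input snapshot
    oldScores <- runif(1000)                               # stands in for the previous release's results
    newScores <- oldScores + rnorm(1000, sd = 0.01)        # stands in for the new release's results
    diffs     <- abs(newScores - oldScores)
    worst     <- order(diffs, decreasing = TRUE)           # rank differences in descending order
    head(data.frame(patientID = input$patientID[worst], diff = diffs[worst]), 10)   # worst offenders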





  1. Or another appropriate fitting technique
  2. Scalar (i.e., single-valued) or vector (i.e., multivalued).
  3. In fact, it is commonly implemented in many statistical packages and programming languages, including R.
  4. The error of the coefficient estimates is below <math>0.5 \times 10^{-4}</math>
  5. Authors of predictive analytics literature have frequently enjoyed this advantage .
  6. As in Section , the error of the coefficient estimates is below <math>0.5 \times 10^{-4}</math>
  7. conditional probability
  8. Hence the term “semiparametric”.
  9. In application to () and (), we are only concerned with the top line of ().
  10. The full nomenclature of input variables including their type and meaning can be found in Table
  11. The original input data contains over 27,000 rows and is too voluminous to present in this document.
  12. We use this indicator frequently in our reports, as it is our locally accepted definition of a “medical group primary care patient”.
  13. Technically, Pearson correlation between numeric and indicator variables is not very informative but we present it here anyway for illustration purposes.
  14. Here we are not considering the confidence interval of <math>OR(10)_{uni}^{cont}</math>
  15. 30 or fewer for each type of outcome of interest for moderate AUCs and 150 or fewer for <math>AUC \geq 0.95</math>
  16. Model performance metrics were calculated on the test dataset.
  17. see, e.g.,
  18. most often, year