Chapter 11. R Linear Regression and ANOVA
“In statistics, modeling is where we get down to business. Models quantify the relationships between our variables. Models let us make predictions.” (RCook 2019)
“A simple linear regression is the most basic model. It’s just two variables, modeled as a linear relationship with an error term:” (RCook 2019)
yi = β0 + β1xi + εi
We are given the data for x and y. Our mission is to fit the model, which will give us the best estimates for β0 and β1 (see Recipe 11.1).
That generalizes naturally to multiple linear regression, where we have multiple variables on the righthand side of the relationship (see Recipe 11.2):
yi = β0 + β1ui + β2vi + β3wi + εi
“Statisticians call u, v, and w the predictors and y the response. Obviously, the model is useful only if there is a fairly linear relationship between the predictors and the response, but that requirement is much less restrictive than you might think. Recipe 11.12 discusses transforming your variables into a (more) linear relationship so that you can use the well-developed machinery of linear regression.” (RCook 2019)
The beauty of R is that anyone can build these linear models. The models are built by a function, lm, which returns a model object. From the model object, we get the coefficients (βi) and regression statistics. It’s easy. Really!
The horror of R is likewise that anyone can build these models. Nothing requires you to check that the model is reasonable, much less statistically significant. Before you blindly believe a model, check it! Most of the information you need is in the regression summary (see Recipe 11.4), and the short code sketch after this checklist shows where each item lives:
Is the model statistically significant?
Check the F statistic at the bottom of the summary.
Are the coefficients significant?
Check the coefficient’s t statistics and p-values in the summary, or check their confidence intervals (see Recipe 11.14).
Is the model useful?
Check the R2 near the bottom of the summary.
Does the model fit the data well?
Plot the residuals and check the regression diagnostics (see Recipe 11.15 and Recipe 11.16).
Does the data satisfy the assumptions behind linear regression?
Check whether the diagnostics confirm that a linear model is reasonable for your data (see Recipe 11.16).
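As a quick reference, here is a minimal sketch of where each of those checks lives in code, assuming a model m fitted with lm (the x and y here are placeholders); the recipes cited above cover each item in detail:
m <- lm(y ~ x)        # an already-fitted model (hypothetical here)
s <- summary(m)       # most of the checks start from the summary
s$fstatistic          # F statistic and its degrees of freedom
coef(s)               # coefficient estimates, t statistics, and p-values
confint(m)            # confidence intervals for the coefficients
s$r.squared           # R-squared
plot(m)               # residual and other diagnostic plots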
ANOVA
Analysis of variance (ANOVA) is a powerful statistical technique. First-year graduate students in statistics are taught ANOVA almost immediately because of its importance, both theoretical and practical. We are often amazed, however, at the extent to which people outside the field are unaware of its purpose and value.
Regression creates a model, and ANOVA is one method of evaluating such models. The mathematics of ANOVA are intertwined with the mathematics of regression, so statisticians usually present them together; we follow that tradition here.
ANOVA is actually a family of techniques that are connected by a common mathematical analysis. This chapter mentions several applications, and a brief sketch of the corresponding function calls follows this list:
One-way ANOVA
This is the simplest application of ANOVA. Suppose you have data samples from several populations and are wondering whether the populations have different means. One-way ANOVA answers that question. If the populations have normal distributions, use the oneway.test function (see Recipe 11.21); otherwise, use the nonparametric version, the kruskal.test function (see Recipe 11.24).
Model comparison
When you add or delete a predictor variable in a linear regression, you want to know whether that change improved the model. The anova function compares two regression models and reports whether they are significantly different (see Recipe 11.25).
ANOVA table
The anova function can also construct the ANOVA table of a linear regression model, which includes the F statistic needed to gauge the model’s statistical significance (see Recipe 11.3). This important table is discussed in nearly every textbook on regression.
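As a preview, the corresponding calls look roughly like this; the grouping factor f, numeric response x, and fitted models m1 and m2 are placeholders, and the recipes cited above give complete examples:
oneway.test(x ~ f)   # one-way ANOVA (does not assume equal variances)
kruskal.test(x ~ f)  # nonparametric alternative (Kruskal-Wallis test)
anova(m1, m2)        # compare two nested regression models
anova(m1)            # ANOVA table for a single regression model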
Example Data
In many of the examples in this chapter, we start by creating example data using R’s pseudorandom number generation capabilities. So at the beginning of each recipe, you may see something like the following:
set.seed(42)
x <- rnorm(100)
e <- rnorm(100, mean = 0, sd = 5)
y <- 5 + 15 * x + e
We use set.seed to set the random number generation seed so that if you run the example code on your machine you will get the same answer. In the preceding example, x is a vector of 100 draws from a standard normal (mean = 0, sd = 1) distribution. Then we create a little random noise called e from a normal distribution with mean = 0 and sd = 5. y is then calculated as 5 + 15 * x + e. The idea behind creating example “toy” data rather than using “real-world” data is that with simulated data you can change the coefficients and parameters and see how the change impacts the resulting model. For example, you could increase the standard deviation of e in the example data and see what impact that has on the R^2 of your model.
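For instance, here is one such experiment as a small sketch: fit the model twice, once with the original noise level and once with a much noisier error term, and compare the R-squared values (the noisier fit should score noticeably lower):
set.seed(42)
x <- rnorm(100)
y_quiet <- 5 + 15 * x + rnorm(100, mean = 0, sd = 5)
y_noisy <- 5 + 15 * x + rnorm(100, mean = 0, sd = 20)   # quadruple the noise
summary(lm(y_quiet ~ x))$r.squared
summary(lm(y_noisy ~ x))$r.squared                      # noticeably lower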
See Also
There are many good texts on linear regression. One of our favorites is Applied Linear Regression Models, 4th ed., by Michael Kutner, Christopher Nachtsheim, and John Neter (McGraw-Hill/Irwin). We generally follow their terminology and conventions in this chapter.
We also like Linear Models with R by Julian Faraway (Chapman & Hall/CRC), because it illustrates regression using R and is quite readable. Earlier versions of Faraway’s work are available free online, too.
11.1 Performing Simple Linear Regression
Problem
You have two vectors, x and y, that hold paired observations: (x1, y1), (x2, y2), …, (xn, yn). You believe there is a linear relationship between x and y, and you want to create a regression model of the relationship.
Solution
The lm function performs a linear regression and reports the coefficients.
If your data is in vectors:
lm(y ~ x)
Or if your data is in columns in a data frame:
lm(y ~ x, data = df)
Discussion
Simple linear regression involves two variables: a predictor (or independent) variable, often called x, and a response (or dependent) variable, often called y. The regression uses the ordinary least-squares (OLS) algorithm to fit the linear model:
yi = β0 + β1xi + εi
where β0 and β1 are the regression coefficients and εi are the error terms.
The lm function can perform linear regression. The main argument is a model formula, such as y ~ x. The formula has the response variable on the left of the tilde character (~) and the predictor variable on the right. The function estimates the regression coefficients, β0 and β1, and reports them as the intercept and the coefficient of x, respectively:
set.seed(42)
x <- rnorm(100)
e <- rnorm(100, mean = 0, sd = 5)
y <- 5 + 15 * x + e
lm(y ~ x)
#>
#> Call:
#> lm(formula = y ~ x)
#>
#> Coefficients:
#> (Intercept) x
#> 4.56 15.14
In this case, the regression equation is:
yi = 4.56 + 15.14xi + εi
It is quite common for data to be captured inside a data frame, in which case you want to perform a regression between two data frame columns. Here, x and y are columns of a data frame df:
df <- data.frame(x, y)
head(df)
#> x y
#> 1 1.371 31.57
#> 2 -0.565 1.75
#> 3 0.363 5.43
#> 4 0.633 23.74
#> 5 0.404 7.73
#> 6 -0.106 3.94
The lm function lets you specify a data frame by using the data parameter. If you do, the function will take the variables from the data frame and not from your workspace:
lm(y ~ x, data = df) # Take x and y from df
#>
#> Call:
#> lm(formula = y ~ x, data = df)
#>
#> Coefficients:
#> (Intercept) x
#> 4.56 15.14
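If you would like to see the fitted line on top of the data, a minimal base R sketch is to plot the points and overlay the regression line with abline (ggplot2 users could use geom_smooth(method = "lm") instead):
plot(x, y)          # scatterplot of the data
abline(lm(y ~ x))   # overlay the fitted regression line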
11.2 Performing Multiple Linear Regression
Problem
You have several predictor variables (e.g., u, v, and w) and a response variable, y. You believe there is a linear relationship between the predictors and the response, and you want to perform a linear regression on the data.
Solution
Use the lm function. Specify the multiple predictors on the righthand side of the formula, separated by plus signs (+):
lm(y ~ u + v + w)
Discussion
Multiple linear regression is the obvious generalization of simple linear regression. It allows multiple predictor variables instead of one predictor variable and still uses OLS to compute the coefficients of a linear equation. The three-variable regression just given corresponds to this linear model:
yi = β0 + β1ui + β2vi + β3wi + εi
R uses the lm function for both simple and multiple linear regression. You simply add more variables to the righthand side of the model formula. The output then shows the coefficients of the fitted model. Let’s set up some example random normal data using the rnorm function:
set.seed(42)
u <- rnorm(100)
v <- rnorm(100, mean = 3, sd = 2)
w <- rnorm(100, mean = -3, sd = 1)
e <- rnorm(100, mean = 0, sd = 3)
Then we can create an equation using known coefficients to calculate our y variable:
y <- 5 + 4 * u + 3 * v + 2 * w + e
Now if we run a linear regression, we can see that R solves for the coefficients and gets pretty close to the actual values just used:
lm(y ~ u + v + w)
#>
#> Call:
#> lm(formula = y ~ u + v + w)
#>
#> Coefficients:
#> (Intercept) u v w
#> 4.77 4.17 3.01 1.91
The data parameter of lm is especially valuable when the number of variables increases, since it’s much easier to keep your data in one data frame than in many separate variables. Suppose your data is captured in a data frame, such as the df variable shown here:
df <- data.frame(y, u, v, w)
head(df)
#> y u v w
#> 1 16.67 1.371 5.402 -5.00
#> 2 14.96 -0.565 5.090 -2.67
#> 3 5.89 0.363 0.994 -1.83
#> 4 27.95 0.633 6.697 -0.94
#> 5 2.42 0.404 1.666 -4.38
#> 6 5.73 -0.106 3.211 -4.15
When you supply df to the data parameter of lm, R looks for the regression variables in the columns of the data frame:
lm(y ~ u + v + w, data = df)
#>
#> Call:
#> lm(formula = y ~ u + v + w, data = df)
#>
#> Coefficients:
#> (Intercept) u v w
#> 4.77 4.17 3.01 1.91
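Because we simulated the data ourselves, we can lay the estimates alongside the true coefficients; a small sketch reusing df from above (the estimates should land near 5, 4, 3, and 2):
m <- lm(y ~ u + v + w, data = df)
rbind(estimated = coef(m), truth = c(5, 4, 3, 2))   # intercept, u, v, w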
See Also
See Recipe 11.1 for simple linear regression.
11.3 Getting Regression Statistics
Problem
You want the critical statistics and information regarding your regression, such as R2, the F statistic, confidence intervals for the coefficients, residuals, the ANOVA table, and so forth.
Solution
Save the regression model in a variable, say m:
m <- lm(y ~ u + v + w)
Then use functions to extract regression statistics and information from the model:
anova(m)
ANOVA table
coefficients(m)
Model coefficients
coef(m)
Same as coefficients(m)
confint(m)
Confidence intervals for the regression coefficients
deviance(m)
Residual sum of squares
effects(m)
Vector of orthogonal effects
fitted(m)
Vector of fitted y values
residuals(m)
Model residuals
resid(m)
Same as residuals(m)
summary(m)
Key statistics, such as R2, the F statistic, and the residual standard error (σ)
vcov(m)
Variance–covariance matrix of the main parameters
Discussion
When we started using R, the documentation said to use the lm function to perform linear regression. So we did something like this, getting the output shown in Recipe 11.2:
lm(y ~ u + v + w)
#>
#> Call:
#> lm(formula = y ~ u + v + w)
#>
#> Coefficients:
#> (Intercept) u v w
#> 4.77 4.17 3.01 1.91
How disappointing! The output was nothing compared to other statistics packages such as SAS. Where is R2? Where are the confidence intervals for the coefficients? Where is the F statistic, its p-value, and the ANOVA table?
Of course, all that information is available—you just have to ask for it. Other statistics systems dump everything and let you wade through it. R is more minimalist. It prints a bare-bones output and lets you request what more you want.
The lm function returns a model object that you can assign to a variable:
m <- lm(y ~ u + v + w)
From the model object, you can extract important information using specialized functions. The most important function is summary:
summary(m)
#>
#> Call:
#> lm(formula = y ~ u + v + w)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -5.383 -1.760 -0.312 1.856 6.984
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 4.770 0.969 4.92 3.5e-06 ***
#> u 4.173 0.260 16.07 < 2e-16 ***
#> v 3.013 0.148 20.31 < 2e-16 ***
#> w 1.905 0.266 7.15 1.7e-10 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 2.66 on 96 degrees of freedom
#> Multiple R-squared: 0.885, Adjusted R-squared: 0.882
#> F-statistic: 247 on 3 and 96 DF, p-value: <2e-16
The summary shows the estimated coefficients, the critical statistics (such as R2 and the F statistic), and an estimate of σ, the standard error of the residuals. The summary is so important that there is an entire recipe devoted to understanding it (Recipe 11.4).
There are specialized extractor functions for other important information:
Model coefficients (point estimates)
coef(m)
#> (Intercept) u v w
#> 4.77 4.17 3.01 1.91
Confidence intervals for model coefficients
confint(m)
#> 2.5 % 97.5 %
#> (Intercept) 2.85 6.69
#> u 3.66 4.69
#> v 2.72 3.31
#> w 1.38 2.43
Model residuals
resid(m)
#> 1 2 3 4 5 6 7 8 9
#> -0.5675 2.2880 0.0972 2.1474 -0.7169 -0.3617 1.0350 2.8040 -4.2496
#> 10 11 12 13 14 15 16 17 18
#> -0.2048 -0.6467 -2.5772 -2.9339 -1.9330 1.7800 -1.4400 -2.3989 0.9245
#> 19 20 21 22 23 24 25 26 27
#> -3.3663 2.6890 -1.4190 0.7871 0.0355 -0.3806 5.0459 -2.5011 3.4516
#> 28 29 30 31 32 33 34 35 36
#> 0.3371 -2.7099 -0.0761 2.0261 -1.3902 -2.7041 0.3953 2.7201 -0.0254
#> 37 38 39 40 41 42 43 44 45
#> -3.9887 -3.9011 -1.9458 -1.7701 -0.2614 2.0977 -1.3986 -3.1910 1.8439
#> 46 47 48 49 50 51 52 53 54
#> 0.8218 3.6273 -5.3832 0.2905 3.7878 1.9194 -2.4106 1.6855 -2.7964
#> 55 56 57 58 59 60 61 62 63
#> -1.3348 3.3549 -1.1525 2.4012 -0.5320 -4.9434 -2.4899 -3.2718 -1.6161
#> 64 65 66 67 68 69 70 71 72
#> -1.5119 -0.4493 -0.9869 5.6273 -4.4626 -1.7568 0.8099 5.0320 0.1689
#> 73 74 75 76 77 78 79 80 81
#> 3.5761 -4.8668 4.2781 -2.1386 -0.9739 -3.6380 0.5788 5.5664 6.9840
#> 82 83 84 85 86 87 88 89 90
#> -3.5119 1.2842 4.1445 -0.4630 -0.7867 -0.7565 1.6384 3.7578 1.8942
#> 91 92 93 94 95 96 97 98 99
#> 0.5542 -0.8662 1.2041 -1.7401 -0.7261 3.2701 1.4012 0.9476 -0.9140
#> 100
#> 2.4278
Residual sum of squares
deviance(m)
#> [1] 679
ANOVA table
anova(m)
#> Analysis of Variance Table
#>
#> Response: y
#> Df Sum Sq Mean Sq F value Pr(>F)
#> u 1 1776 1776 251.0 < 2e-16 ***
#> v 1 3097 3097 437.7 < 2e-16 ***
#> w 1 362 362 51.1 1.7e-10 ***
#> Residuals 96 679 7
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
If you find it annoying to save the model in a variable, you are welcome to use one-liners such as this:
summary(lm(y ~ u + v + w))
Or you can use magrittr pipes:
lm(y ~ u + v + w) %>%
summary
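If you prefer these statistics as data frames, the broom package (which we use again later in this chapter) provides tidy and glance; a brief sketch, assuming the model m from above:
library(broom)
tidy(m)     # coefficient table: estimates, std. errors, t statistics, p-values
glance(m)   # one-row summary: R-squared, adjusted R-squared, F statistic, AIC, and more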
See Also
See Recipe 11.4 for more on the regression summary. See Recipe 11.17 for regression statistics specific to model diagnostics.
11.4 Understanding the Regression Summary
Problem
You created a linear regression model, m. However, you are confused by the output from summary(m).
Discussion
The model summary is important because it links you to the most critical regression statistics. Here is the model summary from Recipe 11.3:
summary(m)
#>
#> Call:
#> lm(formula = y ~ u + v + w)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -5.383 -1.760 -0.312 1.856 6.984
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 4.770 0.969 4.92 3.5e-06 ***
#> u 4.173 0.260 16.07 < 2e-16 ***
#> v 3.013 0.148 20.31 < 2e-16 ***
#> w 1.905 0.266 7.15 1.7e-10 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 2.66 on 96 degrees of freedom
#> Multiple R-squared: 0.885, Adjusted R-squared: 0.882
#> F-statistic: 247 on 3 and 96 DF, p-value: <2e-16
Let’s dissect this summary by section. We’ll read it from top to bottom, even though the most important statistic (the F statistic) appears at the end:
Call
#> lm(formula = y ~ u + v + w)
This shows how lm was called when it created the model, which is important for putting this summary into the proper context.
Residuals statistics
#> Residuals:
#> Min 1Q Median 3Q Max
#> -5.383 -1.760 -0.312 1.856 6.984
Ideally, the regression residuals would have a perfect normal distribution. These statistics help you identify possible deviations from normality. The OLS algorithm is mathematically guaranteed to produce residuals with a mean of zero. Hence, the sign of the median indicates the skew’s direction, and the magnitude of the median indicates the extent. In this case the median is negative, which suggests some skew to the left.
If the residuals have a nice bell-shaped distribution, then the first quartile (1Q) and third quartile (3Q) should have about the same magnitude. In this example, the larger magnitude of 3Q versus 1Q (1.856 versus 1.76) indicates a slight skew to the right in our data, although the negative median makes the situation less clear-cut.
The Min and Max residuals offer a quick way to detect extreme outliers in the data, since extreme outliers (in the response variable) produce large residuals.
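If the Min or Max looks suspicious, one quick follow-up, sketched here for the model m, is to inspect the largest residuals directly:
r <- resid(m)
head(sort(abs(r), decreasing = TRUE))   # largest residuals by magnitude
which.max(abs(r))                       # index of the most extreme observation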
Coefficients
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 4.770 0.969 4.92 3.5e-06 ***
#> u 4.173 0.260 16.07 < 2e-16 ***
#> v 3.013 0.148 20.31 < 2e-16 ***
#> w 1.905 0.266 7.15 1.7e-10 ***
The column labeled Estimate contains the estimated regression coefficients as calculated by ordinary least squares.
Theoretically, if a variable’s coefficient is zero, then the variable is worthless; it adds nothing to the model. Yet the coefficients shown here are only estimates, and they will never be exactly zero. We therefore ask: statistically speaking, how likely is it that the true coefficient is zero? That is the purpose of the t statistics and the p-values, which the summary labels t value and Pr(>|t|), respectively. The p-value is a probability; it gauges the likelihood that the coefficient is not significant, so smaller is better. A p-value above the conventional limit of 0.05 suggests that the coefficient is likely insignificant, and variables with large p-values are candidates for elimination. In this example, all four p-values are far below 0.05, so every coefficient is likely significant.
A handy feature is that R flags the significant variables for quick identification. Did you notice the extreme righthand column containing asterisks? That column highlights the significant variables, and the Signif. codes line beneath the table decodes the flags: three asterisks (***) mean p < 0.001, two asterisks (**) mean p < 0.01, one asterisk (*) means p < 0.05, and a period (.) means p < 0.1.
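If you want the coefficient table programmatically rather than printed, it is available as a matrix; a short sketch, again using the model m:
cf <- coef(summary(m))                  # matrix with Estimate, Std. Error, t value, Pr(>|t|)
cf[, "Pr(>|t|)"]                        # just the p-values
rownames(cf)[cf[, "Pr(>|t|)"] < 0.05]   # terms significant at the 0.05 level (including the intercept)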
11.7 Performing Linear Regression with Interaction Terms
Problem
You want to include an interaction term in your regression.
Solution
The R syntax for regression formulas lets you specify interaction terms. To indicate the interaction of two variables, u and v, we separate their names with an asterisk (*):
lm(y ~ u * v)
This corresponds to the model yi = β0 + β1ui + β2vi + β3uivi + εi, which includes the first-order interaction term β3uivi.
Discussion
In regression, an interaction occurs when the product of two predictor variables is also a significant predictor (i.e., in addition to the predictor variables themselves). Suppose we have two predictors, u and v, and want to include their interaction in the regression. This is expressed by the following equation:
yi = β0 + β1ui + β2vi + β3uivi + εi
Here the product term, β3uivi, is called the interaction term. The R formula for that equation is:
y ~ u * v
When you write y ~ u * v, R automatically includes u, v, and their product in the model. This is for a good reason. If a model includes an interaction term, such as β3uivi, then regression theory tells us the model should also contain the constituent variables ui and vi.
Likewise, if you have three predictors (u, v, and w) and want to include all their interactions, separate them by asterisks:
y ~ u * v * w
This corresponds to the regression equation:
yi = β0 + β1ui + β2vi + β3wi + β4uivi + β5uiwi + β6viwi + β7uiviwi + εi
Now we have all the first-order interactions and a second-order interaction (β7uiviwi).
Sometimes, however, you may not want every possible interaction. You can explicitly specify a single product by using the colon operator (:). For example, u:v:w denotes only the product term β4uiviwi, without the lower-order interactions. So the R formula:
y ~ u + v + w + u:v:w
corresponds to the regression equation:
yi = β0 + β1ui + β2vi + β3wi + β4uiviwi + εi
It might seem odd that a colon (:) means pure multiplication while an asterisk (*) means both multiplication and inclusion of constituent terms. Again, this is because we normally incorporate the constituents when we include their interaction, so making that approach the default for * makes sense.
There is some additional syntax for easily specifying many interactions:
(u + v + ... + w)^2
Include all variables (u, v, …, w) and all their first-order interactions.
(u + v + ... + w)^3
Include all variables, all their first-order interactions, and all their second-order interactions.
(u + v + ... + w)^4
And so forth.
Both the asterisk (*) and the colon (:) follow a “distributive law,” so the following notations are also allowed:
x*(u + v + ... + w)
Same as x*u + x*v + ... + x*w (which is the same as x + u + v + ... + w + x:u + x:v + ... + x:w)
x:(u + v + ... + w)
Same as x:u + x:v + ... + x:w
All this syntax gives you some flexibility in writing your formula. For example, these three formulas are equivalent:
y ~ u * v
y ~ u + v + u:v
y ~ (u + v) ^ 2
They all define the same regression equation, yi = β0 + β1ui + β2vi + β3uivi + εi .
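To see the machinery in action, here is a small sketch on simulated (hypothetical) data: we build y with a genuine u-by-v interaction and confirm that the three equivalent formulas produce identical fitted coefficients.
set.seed(42)
u <- rnorm(200)
v <- rnorm(200)
y <- 1 + 2 * u + 3 * v + 4 * u * v + rnorm(200)   # true coefficients 1, 2, 3, 4
m1 <- lm(y ~ u * v)
m2 <- lm(y ~ u + v + u:v)
m3 <- lm(y ~ (u + v)^2)
coef(m1)                        # estimates near 1, 2, 3, and 4
all.equal(coef(m1), coef(m2))   # TRUE: same model
all.equal(coef(m1), coef(m3))   # TRUE: same model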
See Also
The full syntax for formulas is richer than described here. See R in a Nutshell or the R Language Definition for more details.
11.8 Selecting the Best Regression Variables
Problem
You are creating a new regression model or improving an existing model. You have the luxury of many regression variables, and you want to select the best subset of those variables.
Solution
The step function can perform stepwise regression, either forward or backward. Backward stepwise regression starts with many variables and removes the underperformers:
full.model <- lm(y ~ x1 + x2 + x3 + x4)
reduced.model <- step(full.model, direction = "backward")
Forward stepwise regression starts with a few variables and adds new ones to improve the model until it cannot be improved further:
min.model <- lm(y ~ 1)
fwd.model <-
step(min.model,
direction = "forward",
scope = (~ x1 + x2 + x3 + x4))
Discussion
When you have many predictors, it can be quite difficult to choose the best subset. Adding and removing individual variables affects the overall mix, so the search for “the best” can become tedious.
The step function automates that search. Backward stepwise regression is the easiest approach. Start with a model that includes all the predictors. We call that the full model. The model summary, shown here, indicates that not all predictors are statistically significant:
# example data
set.seed(4)
n <- 150
x1 <- rnorm(n)
x2 <- rnorm(n, 1, 2)
x3 <- rnorm(n, 3, 1)
x4 <- rnorm(n,-2, 2)
e <- rnorm(n, 0, 3)
y <- 4 + x1 + 5 * x3 + e
# build the model
full.model <- lm(y ~ x1 + x2 + x3 + x4)
summary(full.model)
#>
#> Call:
#> lm(formula = y ~ x1 + x2 + x3 + x4)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -8.032 -1.774 0.158 2.032 6.626
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 3.40224 0.80767 4.21 4.4e-05 ***
#> x1 0.53937 0.25935 2.08 0.039 *
#> x2 0.16831 0.12291 1.37 0.173
#> x3 5.17410 0.23983 21.57 < 2e-16 ***
#> x4 -0.00982 0.12954 -0.08 0.940
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 2.92 on 145 degrees of freedom
#> Multiple R-squared: 0.77, Adjusted R-squared: 0.763
#> F-statistic: 121 on 4 and 145 DF, p-value: <2e-16
We want to eliminate the insignificant variables, so we use step to incrementally eliminate the underperformers. The result is called the reduced model:
reduced.model <- step(full.model, direction="backward")
#> Start: AIC=327
#> y ~ x1 + x2 + x3 + x4
#>
#> Df Sum of Sq RSS AIC
#> - x4 1 0 1240 325
#> - x2 1 16 1256 327
#> <none> 1240 327
#> - x1 1 37 1277 329
#> - x3 1 3979 5219 540
#>
#> Step: AIC=325
#> y ~ x1 + x2 + x3
#>
#> Df Sum of Sq RSS AIC
#> - x2 1 16 1256 325
#> <none> 1240 325
#> - x1 1 37 1277 327
#> - x3 1 3988 5228 539
#>
#> Step: AIC=325
#> y ~ x1 + x3
#>
#> Df Sum of Sq RSS AIC
#> <none> 1256 325
#> - x1 1 44 1300 328
#> - x3 1 3974 5230 537
The output from step shows the sequence of models that it explored. In this case, step removed x2 and x4 and left only x1 and x3 in the final (reduced) model. The summary of the reduced model shows that it contains only significant predictors:
summary(reduced.model)
#>
#> Call:
#> lm(formula = y ~ x1 + x3)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -8.148 -1.850 -0.055 2.026 6.550
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 3.648 0.751 4.86 3e-06 ***
#> x1 0.582 0.255 2.28 0.024 *
#> x3 5.147 0.239 21.57 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 2.92 on 147 degrees of freedom
#> Multiple R-squared: 0.767, Adjusted R-squared: 0.763
#> F-statistic: 241 on 2 and 147 DF, p-value: <2e-16
Backward stepwise regression is easy, but sometimes it’s not feasible to start with “everything” because you have too many candidate variables. In that case use forward stepwise regression, which will start with nothing and incrementally add variables that improve the regression. It stops when no further improvement is possible.
A model that “starts with nothing” may look odd at first:
min.model <- lm(y ~ 1)
This is a model with a response variable (y) but no predictor variables. (All the fitted values for y are simply the mean of y, which is what you would guess if no predictors were available.)
We must tell step which candidate variables are available for inclusion in the model. That is the purpose of the scope argument. scope is a formula with nothing on the lefthand side of the tilde (~) and candidate variables on the righthand side:
fwd.model <- step(
min.model,
direction = "forward",
scope = (~ x1 + x2 + x3 + x4),
trace = 0
)
Here we see that x1, x2, x3, and x4 are all candidates for inclusion. (We also included trace = 0 to inhibit the voluminous output from step.) The resulting model has two significant predictors and no insignificant predictors:
summary(fwd.model)
#>
#> Call:
#> lm(formula = y ~ x3 + x1)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -8.148 -1.850 -0.055 2.026 6.550
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 3.648 0.751 4.86 3e-06 ***
#> x3 5.147 0.239 21.57 <2e-16 ***
#> x1 0.582 0.255 2.28 0.024 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 2.92 on 147 degrees of freedom
#> Multiple R-squared: 0.767, Adjusted R-squared: 0.763
#> F-statistic: 241 on 2 and 147 DF, p-value: <2e-16
The forward algorithm reached the same model as backward stepwise regression, including x1 and x3 and excluding x2 and x4. This is a toy example, so that is not surprising. In real applications, we suggest trying both the forward and backward regressions and then comparing the results. You might be surprised.
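One way to run that comparison, sketched here under the assumption that full.model, reduced.model, and fwd.model are defined as above, is to let step work in both directions and then compare the candidates by AIC:
both.model <- step(full.model, direction = "both", trace = 0)
AIC(reduced.model, fwd.model, both.model)   # lower AIC is better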
Finally, don’t get carried away with stepwise regression. It is not a panacea, it cannot turn junk into gold, and it is definitely not a substitute for choosing predictors carefully and wisely. You might think: “Oh boy! I can generate every possible interaction term for my model, then let step choose the best ones! What a model I’ll get!” You’d be thinking of something like this, which starts with all possible interactions and then tries to reduce the model:
full.model <- lm(y ~ (x1 + x2 + x3 + x4) ^ 4)
reduced.model <- step(full.model, direction = "backward")
#> Start: AIC=337
#> y ~ (x1 + x2 + x3 + x4)^4
#>
#> Df Sum of Sq RSS AIC
#> - x1:x2:x3:x4 1 0.0321 1145 335
#> <none> 1145 337
#>
#> Step: AIC=335
#> y ~ x1 + x2 + x3 + x4 + x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 +
#> x3:x4 + x1:x2:x3 + x1:x2:x4 + x1:x3:x4 + x2:x3:x4
#>
#> Df Sum of Sq RSS AIC
#> - x2:x3:x4 1 0.76 1146 333
#> - x1:x3:x4 1 8.37 1154 334
#> <none> 1145 335
#> - x1:x2:x4 1 20.95 1166 336
#> - x1:x2:x3 1 25.18 1170 336
#>
#> Step: AIC=333
#> y ~ x1 + x2 + x3 + x4 + x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 +
#> x3:x4 + x1:x2:x3 + x1:x2:x4 + x1:x3:x4
#>
#> Df Sum of Sq RSS AIC
#> - x1:x3:x4 1 8.74 1155 332
#> <none> 1146 333
#> - x1:x2:x4 1 21.72 1168 334
#> - x1:x2:x3 1 26.51 1172 334
#>
#> Step: AIC=332
#> y ~ x1 + x2 + x3 + x4 + x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 +
#> x3:x4 + x1:x2:x3 + x1:x2:x4
#>
#> Df Sum of Sq RSS AIC
#> - x3:x4 1 0.29 1155 330
#> <none> 1155 332
#> - x1:x2:x4 1 23.24 1178 333
#> - x1:x2:x3 1 31.11 1186 334
#>
#> Step: AIC=330
#> y ~ x1 + x2 + x3 + x4 + x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 +
#> x1:x2:x3 + x1:x2:x4
#>
#> Df Sum of Sq RSS AIC
#> <none> 1155 330
#> - x1:x2:x4 1 23.4 1178 331
#> - x1:x2:x3 1 31.5 1187 332
This does not work well. Most of the interaction terms are meaningless. The step function becomes overwhelmed, and you are left with many insignificant terms.
See Also
See Recipe 11.25.
11.9 Regressing on a Subset of Your Data
Problem
You want to fit a linear model to a subset of your data, not to the entire dataset.
Solution
The lm function has a subset parameter that specifies which data elements should be used for fitting. The parameter’s value can be any index expression that could index your data. This shows a fitting that uses only the first 100 observations:
lm(y ~ x1, subset=1:100) # Use only x[1:100]
Discussion
You will often want to regress only a subset of your data. This can happen, for example, when you’re using in-sample data to create the model and out-of-sample data to test it.
The lm function has a parameter, subset, that selects the observations used for fitting. The value of subset is a vector. It can be a vector of index values, in which case lm selects only the indicated observations from your data. It can also be a logical vector, the same length as your data, in which case lm selects the observations with a corresponding TRUE.
Suppose you have 1,000 observations of (x, y) pairs and want to fit your model using only the first half of those observations. Use a subset parameter of 1:500, indicating lm should use observations 1 through 500:
## example data
n <- 1000
x <- rnorm(n)
e <- rnorm(n, 0, .5)
y <- 3 + 2 * x + e
lm(y ~ x, subset = 1:500)
#>
#> Call:
#> lm(formula = y ~ x, subset = 1:500)
#>
#> Coefficients:
#> (Intercept) x
#> 3 2
More generally, you can use the expression 1:floor(length(x)/2) to select the first half of your data, regardless of size:
lm(y ~ x, subset = 1:floor(length(x) / 2))
#>
#> Call:
#> lm(formula = y ~ x, subset = 1:floor(length(x)/2))
#>
#> Coefficients:
#> (Intercept) x
#> 3 2
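Continuing the in-sample/out-of-sample idea mentioned earlier, a brief sketch: fit on the first half of the data and then check the predictions on the held-out second half.
half <- floor(length(x) / 2)
m <- lm(y ~ x, subset = 1:half)                    # fit on the first half only
test <- data.frame(x = x[(half + 1):length(x)])
pred <- predict(m, newdata = test)                 # out-of-sample predictions
sqrt(mean((y[(half + 1):length(y)] - pred)^2))     # RMSE on the held-out data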
Let’s say your data was collected in several labs and you have a factor, lab, that identifies the lab of origin. You can limit your regression to observations collected in New Jersey by using a logical vector that is TRUE only for those observations:
load('./data/lab_df.rdata')
lm(y ~ x, subset = (lab == "NJ"), data = lab_df)
#>
#> Call:
#> lm(formula = y ~ x, data = lab_df, subset = (lab == "NJ"))
#>
#> Coefficients:
#> (Intercept) x
#> 2.58 5.03
11.10 Using an Expression Inside a Regression Formula
Problem
You want to regress on calculated values, not simple variables, but the syntax of a regression formula seems to forbid that.
Solution
Embed the expressions for the calculated values inside the I(...) operator. That will force R to calculate the expression and use the calculated value for the regression.
Discussion
If you want to regress on the sum of u and v, then this is your regression equation:
yi = β0 + β1(ui + vi) + εi
How do you write that equation as a regression formula? Writing y ~ u + v won’t work, because R would interpret u and v as two separate predictors, each with its own regression coefficient. Instead, embed the calculation inside the I(...) operator, as in y ~ I(u + v); that tells R to compute u + v first and then regress y on the single, calculated sum, estimating one coefficient for it.
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 0.6887 0.0306 22.5 <2e-16 ***
#> t -2.0118 0.0351 -57.3 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.148 on 98 degrees of freedom
#> Multiple R-squared: 0.971, Adjusted R-squared: 0.971
#> F-statistic: 3.28e+03 on 1 and 98 DF, p-value: <2e-16
The right panel of Figure 11-1 shows the plot of log(z) versus time. Superimposed on that plot is their regression line. The fit appears to be much better; this is confirmed by the R2 = 0.97, compared with 0.82 for the linear regression on the original data.
You can embed other functions inside your formula. If you thought the relationship was quadratic, you could use a square-root transformation:
lm(sqrt(y) ~ month)
You can apply transformations to variables on both sides of the formula, of course. This formula regresses y on the square root of x:
lm(y ~ sqrt(x))
This regression is for a log-log relationship between x and y:
lm(log(y) ~ log(x))
See Also
See Recipe 11.13.
11.13 Finding the Best Power Transformation (Box–Cox Procedure)
Problem
You want to improve your linear model by applying a power transformation to the response variable.
Solution
Use the Box–Cox procedure, which is implemented by the boxcox function of the MASS package. The procedure will identify a power, λ, such that transforming y into yλ will improve the fit of your model:
library(MASS)
m <- lm(y ~ x)
boxcox(m)
Discussion
To illustrate the Box–Cox transformation, let’s create some artificial data using the equation y^(-1.5) = x + ε, where ε is an error term:
set.seed(9)
x <- 10:100
eps <- rnorm(length(x), sd = 5)
y <- (x + eps) ^ (-1 / 1.5)
Then we will (mistakenly) model the data using a simple linear regression and derive an adjusted R2 of 0.637:
m <- lm(y ~ x)
summary(m)
#>
#> Call:
#> lm(formula = y ~ x)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -0.04032 -0.01633 -0.00792 0.00996 0.14516
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 0.166885 0.007078 23.6 <2e-16 ***
#> x -0.001465 0.000116 -12.6 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.0291 on 89 degrees of freedom
#> Multiple R-squared: 0.641, Adjusted R-squared: 0.637
#> F-statistic: 159 on 1 and 89 DF, p-value: <2e-16
When plotting the residuals against the fitted values, we get a clue that something is wrong. We can get a ggplot residual plot using the broom library. The augment function from broom will put our residuals (and other things) into a data frame for easier plotting. Then we can use ggplot to plot:
library(broom)
augmented_m <- augment(m)
ggplot(augmented_m, aes(x = .fitted, y = .resid)) +
geom_point()
The result is shown in Figure 11-2.
Figure 11-2. Fitted values versus residuals
If you just need a fast peek at the residual plot and don’t care if the result is a ggplot figure, you can use Base R’s plot method on the model object, m:
plot(m, which = 1) # which = 1 plots only the fitted vs. residuals
We can see in Figure 11-2 that this plot has a clear parabolic shape. A possible fix is a power transformation on y, so we run the Box–Cox procedure:
library(MASS)
#>
#> Attaching package: 'MASS'
#> The following object is masked from 'package:dplyr':
#>
#> select
bc <- boxcox(m)
The boxcox function plots values of λ against the log-likelihood of the resulting model, as shown in Figure 11-3. We want to maximize that log-likelihood, so the function draws a line at the best value and also draws lines at the limits of its confidence interval. In this case, it looks like the best value is around –1.5, with a confidence interval of about (–1.75, –1.25).
Figure 11-3. Output of boxcox on the model (m)
Oddly, the boxcox function does not return the best value of λ. Rather, it returns the (x, y) pairs displayed in the plot. It’s pretty easy to find the value of λ that yields the largest log-likelihood, y: we use the which.max function to locate its position:
which.max(bc$y)
#> [1] 13
Then indexing into bc$x at that position gives us the value of the corresponding λ:
lambda <- bc$x[which.max(bc$y)]
lambda
#> [1] -1.52
The function reports that the best λ is –1.52. In an actual application, we would urge you to interpret this number and choose the power that makes sense to you, rather than blindly accepting this “best” value. Use the graph to assist you in that interpretation. Here, we’ll go with –1.52.
We can apply the power transform to y and then fit the revised model; this gives a much better R2 of 0.967:
z <- y ^ lambda
m2 <- lm(z ~ x)
summary(m2)
#>
#> Call:
#> lm(formula = z ~ x)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -13.459 -3.711 -0.228 2.206 14.188
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -0.6426 1.2517 -0.51 0.61
#> x 1.0514 0.0205 51.20 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 5.15 on 89 degrees of freedom
#> Multiple R-squared: 0.967, Adjusted R-squared: 0.967
#> F-statistic: 2.62e+03 on 1 and 89 DF, p-value: <2e-16
For those who prefer one-liners, the transformation can be embedded right into the revised regression formula:
m2 <- lm(I(y ^ lambda) ~ x)
TIP
By default, boxcox searches for values of λ in the range –2 to +2. You can change that via the lambda argument; see the help page for details.
We suggest viewing the Box–Cox result as a starting point, not as a definitive answer. If the confidence interval for λ includes 1.0, it may be that no power transformation is actually helpful. As always, inspect the residuals before and after the transformation, and confirm that they really improved.
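The boxcox result also contains enough information to compute that confidence interval yourself; a sketch, assuming the bc object computed above (the interval is the set of λ values whose log-likelihood lies within qchisq(0.95, 1)/2 of the maximum, which matches the dashed confidence-interval lines that boxcox draws):
lambda <- bc$x[which.max(bc$y)]                  # point estimate of lambda
cutoff <- max(bc$y) - qchisq(0.95, df = 1) / 2   # log-likelihood cutoff
ci <- range(bc$x[bc$y >= cutoff])                # approximate 95% CI for lambda
ci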