The syntax of the main fitting function is
glm(formula, family, data, weights)
where the arguments formula
(a model formula specifying the response and the predictors) and data
(a data frame in which the variables are sought) have the same meaning as in the lm()
function. The optional argument weights
specifies prior weights (e.g., the number of repetitions of each data row).
The distribution of the response and the link function are specified by the argument family
. R accepts the following specifications for distributions of the response in the family
argument:
gaussian (normal distribution, variance function \(V(\mu)=1\))
Gamma (gamma distribution, variance function \(V(\mu)=\mu^2\))
inverse.gaussian (inverse Gaussian distribution, variance function \(V(\mu)=\mu^3\))
binomial (Bernoulli or binomial distribution, variance function \(V(\mu)=\mu(1-\mu)\))
poisson (Poisson distribution, variance function \(V(\mu)=\mu\))
E.g., for the Bernoulli (binomial) distribution, we specify family = binomial.
The link function is specified as an argument of the family function. The following links are available:
logit \(\quad g(\mu)=\log\frac{\mu}{1-\mu}\)
probit \(\quad g(\mu)=\Phi^{-1}(\mu)\)
cauchit \(\quad g(\mu)=\tan\bigl(\pi(\mu-\tfrac{1}{2})\bigr)\)
cloglog \(\quad g(\mu)=\log(-\log(1-\mu))\)
identity \(\quad g(\mu)=\mu\)
log \(\qquad g(\mu)=\log(\mu)\)
sqrt \(\qquad g(\mu)=\sqrt{\mu}\)
1/mu^2 \(\quad g(\mu)=\frac{1}{\mu^2}\)
inverse \(\quad g(\mu)=\frac{1}{\mu}\)
E.g., to specify a Gamma model with a log link, we write
family = Gamma(link = "log")
The canonical link is the default for each distribution.
The gaussian family accepts the links identity, log, and inverse; the binomial family the links logit, probit, cauchit, log, and cloglog; the Gamma family the links inverse, identity, and log; the poisson family the links log, identity, and sqrt; and the inverse.gaussian family the links 1/mu^2, inverse, identity, and log.
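The canonical (default) link of each family can be checked directly on the family objects, for example:
binomial()$link          # "logit"
poisson()$link           # "log"
Gamma()$link             # "inverse"
gaussian()$link          # "identity"
inverse.gaussian()$link  # "1/mu^2"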
For more information, see the help pages of the functions glm and family.
# Sample size and seed for reproducibility
n <- 50
set.seed(19730911)
# Continuous regressor x and a three-level factor f
x <- runif(n)
f <- sample.int(3, size = n, replace = TRUE)
f <- factor(f, levels = 1:3, labels = LETTERS[1:3])
# Linear predictor, success probability (inverse logit), and binary response
eta <- 0.2*x + 0.1*(f == "B") - 0.1*(f == "C")
prob <- exp(eta) / (1 + exp(eta))
y <- rbinom(n, size = 1, prob = prob)
# Two additional regressors unrelated to the response
xNotUsed <- runif(n)
fNotUsed <- sample.int(3, size = n, replace = TRUE)
fNotUsed <- factor(fNotUsed, levels = 1:3, labels = LETTERS[1:3])
Data <- data.frame(y = y, x = x, f = f, xNotUsed = xNotUsed, fNotUsed = fNotUsed, eta = eta, prob = prob)
head(Data)
## y x f xNotUsed fNotUsed eta prob
## 1 1 0.976039146 B 0.68105447 A 0.29520783 0.5732706
## 2 0 0.007152175 C 0.91024665 C -0.09856956 0.4753775
## 3 1 0.546285866 A 0.84053977 C 0.10925717 0.5272872
## 4 0 0.270913723 C 0.98206429 C -0.04581726 0.4885477
## 5 1 0.921075755 A 0.05802907 C 0.18421515 0.5459240
## 6 0 0.888099262 B 0.27250015 B 0.27761985 0.5689626
Logistic model that contains one continuous and one categorical regressor:
fit <- glm(y ~ x + f, family = binomial(link = "logit"), data = Data)
The results of the glm()
function, when stored in an object (here called fit
), can be displayed in a rudimentary form
print(fit)
##
## Call: glm(formula = y ~ x + f, family = binomial(link = "logit"), data = Data)
##
## Coefficients:
## (Intercept) x fB fC
## 0.2145 0.5721 -1.4118 -0.3248
##
## Degrees of Freedom: 49 Total (i.e. Null); 46 Residual
## Null Deviance: 69.23
## Residual Deviance: 64.98 AIC: 72.98
or in a more complete form
summary(fit)
##
## Call:
## glm(formula = y ~ x + f, family = binomial(link = "logit"), data = Data)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.4886 -1.1357 -0.7571 1.0725 1.6479
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.2145 0.7458 0.288 0.7736
## x 0.5721 1.0485 0.546 0.5853
## fB -1.4118 0.7461 -1.892 0.0585 .
## fC -0.3248 0.7217 -0.450 0.6527
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 69.235 on 49 degrees of freedom
## Residual deviance: 64.976 on 46 degrees of freedom
## AIC: 72.976
##
## Number of Fisher Scoring iterations: 4
Basic diagnostic graphs can be obtained by
plot(fit)
This function (actually, its name is plot.lm
) works in the same way for both lm()
and glm()
objects.
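Individual diagnostic panels can also be requested via the which argument of plot.lm, for example:
plot(fit, which = 1)         # residuals vs. fitted values only
plot(fit, which = c(4, 5))   # Cook's distance and residuals vs. leverage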
A wrapper providing the first three diagnostic graphs (not all of which are really sensible in the context of generalized linear models) is also available in the mffSM package:
library("mffSM")
## Loading required package: car
plotLM(fit)
Estimated regression coefficients:
coef(fit)
## (Intercept) x fB fC
## 0.2145244 0.5721186 -1.4117630 -0.3247743
fit$coef
## (Intercept) x fB fC
## 0.2145244 0.5721186 -1.4117630 -0.3247743
Table of coefficients with standard errors and Wald test results:
summary(fit)$coef
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.2145244 0.7458453 0.2876259 0.77363311
## x 0.5721186 1.0485097 0.5456493 0.58530703
## fB -1.4117630 0.7461268 -1.8921221 0.05847471
## fC -0.3247743 0.7217015 -0.4500120 0.65270182
Estimated covariance matrix of coefficients:
vcov(fit)
## (Intercept) x fB fC
## (Intercept) 0.5562852 -0.56248334 -0.23587914 -0.3332556
## x -0.5624833 1.09937260 -0.06374973 0.1265723
## fB -0.2358791 -0.06374973 0.55670522 0.2611565
## fC -0.3332556 0.12657235 0.26115648 0.5208531
summary(fit)$cov.scaled
## (Intercept) x fB fC
## (Intercept) 0.5562852 -0.56248334 -0.23587914 -0.3332556
## x -0.5624833 1.09937260 -0.06374973 0.1265723
## fB -0.2358791 -0.06374973 0.55670522 0.2611565
## fC -0.3332556 0.12657235 0.26115648 0.5208531
Sequential (type I) analysis of deviance table (deviance (likelihood ratio) tests for adding terms one by one):
anova(fit, test = "Chisq")
## Analysis of Deviance Table
##
## Model: binomial, link: logit
##
## Response: y
##
## Terms added sequentially (first to last)
##
##
## Df Deviance Resid. Df Resid. Dev Pr(>Chi)
## NULL 49 69.235
## x 1 0.1000 48 69.135 0.7518
## f 2 4.1591 46 64.976 0.1250
Sequential (type I) analysis using the Rao score tests:
anova(fit, test = "Rao")
## Analysis of Deviance Table
##
## Model: binomial, link: logit
##
## Response: y
##
## Terms added sequentially (first to last)
##
##
## Df Deviance Resid. Df Resid. Dev Rao Pr(>Chi)
## NULL 49 69.235
## x 1 0.1000 48 69.135 0.1000 0.7519
## f 2 4.1591 46 64.976 4.0505 0.1320
Hierarchical deviance (likelihood ratio) tests for dropped terms:
drop1(fit, test = "Chisq")
## Single term deletions
##
## Model:
## y ~ x + f
## Df Deviance AIC LRT Pr(>Chi)
## <none> 64.976 72.976
## x 1 65.275 71.275 0.2998 0.584
## f 2 69.135 73.135 4.1591 0.125
Hierarchical Rao score tests for dropped terms:
drop1(fit, test = "Rao")
## Single term deletions
##
## Model:
## y ~ x + f
## Df Deviance AIC Rao score Pr(>Chi)
## <none> 64.976 72.976
## x 1 65.275 71.275 0.2990 0.5845
## f 2 69.135 73.135 4.0505 0.1320
Deviance (likelihood ratio) tests for added terms (Rao score tests are obtained analogously):
add1(fit, .~. + x:f, test = "Chisq")
## Single term additions
##
## Model:
## y ~ x + f
## Df Deviance AIC LRT Pr(>Chi)
## <none> 64.976 72.976
## x:f 2 64.164 76.164 0.81177 0.6664
add1(fit, .~. + x:f + xNotUsed + fNotUsed, test = "Chisq")
## Single term additions
##
## Model:
## y ~ x + f
## Df Deviance AIC LRT Pr(>Chi)
## <none> 64.976 72.976
## xNotUsed 1 63.861 73.861 1.11447 0.2911
## fNotUsed 2 64.469 76.469 0.50623 0.7764
## x:f 2 64.164 76.164 0.81177 0.6664
Assume the result of the glm()
function for a larger model is stored in the object fit1
and that for a submodel in the object fit2
.
Submodel that omits the categorical regressor f
:
fit2 <- glm(y ~ x, family = binomial(link = "logit"), data = Data)
fit1 <- fit
Deviance (likelihood ratio) test of submodel fit2
versus a larger model fit1
:
anova(fit2, fit1, test = "Chisq")
## Analysis of Deviance Table
##
## Model 1: y ~ x
## Model 2: y ~ x + f
## Resid. Df Resid. Dev Df Deviance Pr(>Chi)
## 1 48 69.135
## 2 46 64.976 2 4.1591 0.125
Rao score test of submodel fit2
versus a larger model fit1
:
anova(fit2, fit1, test = "Rao")
## Analysis of Deviance Table
##
## Model 1: y ~ x
## Model 2: y ~ x + f
## Resid. Df Resid. Dev Df Deviance Rao Pr(>Chi)
## 1 48 69.135
## 2 46 64.976 2 4.1591 4.0505 0.132
Confidence intervals for estimated parameters (based on the profile likelihood ratio tests):
confint(fit1)
## Waiting for profiling to be done...
## 2.5 % 97.5 %
## (Intercept) -1.253932 1.728413793
## x -1.478597 2.688941199
## fB -2.953870 0.005719377
## fC -1.769329 1.093460088
These intervals correspond to deviance (likelihood-ratio) tests for the individual coefficients. The related P-values are NOT those from summary(fit)
seen above! To get the P-values related to confint(fit1)
, we can use drop1(fit, test = "Chisq")
. Note, however, that this provides only the P-values for those regression coefficients whose removal leads to a hierarchically well-formulated model.
Point estimates and Wald confidence intervals (a small function to calculate these can be written to simplify the task in the future; see the sketch below the output):
alpha <- 0.05
qq <- qnorm(1 - alpha/2)
be <- coef(fit)
se <- sqrt(diag(vcov(fit)))
Wald <- data.frame(Estimate = be, Std.Error = se, Lower = be - qq * se, Upper = be + qq * se)
print(Wald)
## Estimate Std.Error Lower Upper
## (Intercept) 0.2145244 0.7458453 -1.247306 1.67635438
## x 0.5721186 1.0485097 -1.482923 2.62715985
## fB -1.4117630 0.7461268 -2.874145 0.05061865
## fC -0.3247743 0.7217015 -1.739283 1.08973468
Those confidence intervals are dual to the P-values from summary(fit)
.
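The calculation above can indeed be wrapped in a small function; a minimal sketch (the name waldCI is only illustrative):
waldCI <- function(fit, level = 0.95) {
  # Wald confidence intervals for all coefficients of a fitted (generalized) linear model
  qq <- qnorm(1 - (1 - level) / 2)
  be <- coef(fit)
  se <- sqrt(diag(vcov(fit)))
  data.frame(Estimate = be, Std.Error = se,
             Lower = be - qq * se, Upper = be + qq * se)
}
waldCI(fit)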
The Wald analogue of anova(fit2, fit1)
(removal of the factor regressor f
) can also be calculated quite easily:
(indexf <- grep("f", names(coef(fit))))
## [1] 3 4
(bef <- coef(fit)[indexf])
## fB fC
## -1.4117630 -0.3247743
(Vf <- vcov(fit)[indexf, indexf])
## fB fC
## fB 0.5567052 0.2611565
## fC 0.2611565 0.5208531
(Zf <- as.numeric(crossprod(bef, solve(Vf, bef))))
## [1] 3.866076
(pWaldf <- pchisq(Zf, df = length(bef), lower.tail = FALSE))
## [1] 0.1447079
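The same Wald test should also be reproducible with the car package (loaded above as a dependency of mffSM), assuming it is available; for a GLM, linearHypothesis reports a chi-squared Wald statistic by default:
library("car")
linearHypothesis(fit, c("fB = 0", "fC = 0"))   # joint Wald test of fB = fC = 0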
Dispersion parameter (fixed at 1 for the binomial family; when estimated, it is based on the Pearson \(X^2\) statistic):
summary(fit)$dispersion
## [1] 1
IWLS weights \(\hat W_i=\frac{1}{V(\hat\mu_i)[g'(\hat\mu_i)]^2}\):
fit$weights
## 1 2 3 4 5 6 7
## 0.2261334 0.2492970 0.2334110 0.2498749 0.2185572 0.2225186 0.1932612
## 8 9 10 11 12 13 14
## 0.2494170 0.2112066 0.2212521 0.2480975 0.2449203 0.2345378 0.2232325
## 15 16 17 18 19 20 21
## 0.2498177 0.1893865 0.1999757 0.2431083 0.2496027 0.2168231 0.2473653
## 22 23 24 25 26 27 28
## 0.2499259 0.2374425 0.2234579 0.2439954 0.2125829 0.2403302 0.2403774
## 29 30 31 32 33 34 35
## 0.2120513 0.2282892 0.2499996 0.2489284 0.2211800 0.2098026 0.2099692
## 36 37 38 39 40 41 42
## 0.2483899 0.2302495 0.2190976 0.1847755 0.2402503 0.2052470 0.1910651
## 43 44 45 46 47 48 49
## 0.2043041 0.2459300 0.1999492 0.2424892 0.2464318 0.2496569 0.2492544
## 50
## 0.2375015
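For the logistic model, \(V(\mu)=\mu(1-\mu)\) and \(g'(\mu)=\frac{1}{\mu(1-\mu)}\), so the IWLS weights reduce to \(\hat\mu_i(1-\hat\mu_i)\). A quick check:
mu <- fitted(fit)
all.equal(as.numeric(fit$weights), as.numeric(mu * (1 - mu)))   # should be TRUE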
Unscaled deviance \(D(\boldsymbol{Y}\mid\widehat{\boldsymbol{\beta}})\):
deviance(fit)
## [1] 64.97552
fit$deviance
## [1] 64.97552
Fitted values \(\hat\mu_i\):
fitted(fit)
## 1 2 3 4 5 6 7
## 0.3455114 0.4734854 0.6287982 0.5111844 0.6773214 0.3342243 0.2618004
## 8 9 10 11 12 13 14
## 0.4758547 0.3030392 0.6695522 0.5436179 0.5712721 0.6243474 0.6636080
## 15 16 17 18 19 20 21
## 0.4864990 0.2538012 0.2763382 0.5830166 0.5199317 0.6821455 0.5513295
## 22 23 24 25 26 27 28
## 0.4913915 0.6120603 0.3370822 0.5774892 0.3065646 0.5983353 0.5980947
## 29 30 31 32 33 34 35
## 0.3051951 0.6473460 0.5006420 0.5327349 0.6697646 0.2995067 0.2999225
## 36 37 38 39 40 41 42
## 0.5401263 0.6405365 0.3242086 0.2446084 0.5987404 0.2884504 0.2572343
## 43 44 45 46 47 48 49
## 0.2862335 0.5637965 0.2762791 0.5866649 0.5597342 0.4814766 0.5273050
## 50
## 0.6117970
predict(fit, type = "response")
## 1 2 3 4 5 6 7
## 0.3455114 0.4734854 0.6287982 0.5111844 0.6773214 0.3342243 0.2618004
## 8 9 10 11 12 13 14
## 0.4758547 0.3030392 0.6695522 0.5436179 0.5712721 0.6243474 0.6636080
## 15 16 17 18 19 20 21
## 0.4864990 0.2538012 0.2763382 0.5830166 0.5199317 0.6821455 0.5513295
## 22 23 24 25 26 27 28
## 0.4913915 0.6120603 0.3370822 0.5774892 0.3065646 0.5983353 0.5980947
## 29 30 31 32 33 34 35
## 0.3051951 0.6473460 0.5006420 0.5327349 0.6697646 0.2995067 0.2999225
## 36 37 38 39 40 41 42
## 0.5401263 0.6405365 0.3242086 0.2446084 0.5987404 0.2884504 0.2572343
## 43 44 45 46 47 48 49
## 0.2862335 0.5637965 0.2762791 0.5866649 0.5597342 0.4814766 0.5273050
## 50
## 0.6117970
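Predictions for new observations can be obtained by supplying the newdata argument; a small illustration:
newObs <- data.frame(x = 0.5, f = factor("B", levels = c("A", "B", "C")))
predict(fit, newdata = newObs, type = "response")   # estimated probability for x = 0.5, f = "B"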
Fitted linear predictors \(\hat\eta_i\):
predict(fit, type = "link")
## 1 2 3 4 5
## -0.638828449 -0.106157986 0.527064737 0.044744899 0.741489000
## 6 7 8 9 10
## -0.689140492 -1.036631898 -0.096656352 -0.832867223 0.706160452
## 11 12 13 14 15
## 0.174916193 0.287043377 0.508042099 0.679414405 -0.054017257
## 16 17 18 19 20
## -1.078440820 -0.962698818 0.335169146 0.079769219 0.763649073
## 21 22 23 24 25
## 0.206044050 -0.034437564 0.455980999 -0.676324172 0.312474800
## 26 27 28 29 30
## -0.816229731 0.398533799 0.397532565 -0.822679708 0.607393670
## 31 32 33 34 35
## 0.002568049 0.131127237 0.707120736 -0.849648119 -0.847666945
## 36 37 38 39 40
## 0.160851278 0.577693370 -0.734497329 -1.127577703 0.400219542
## 41 42 43 44 45
## -0.902921873 -1.060393421 -0.913747805 0.256584376 -0.962994432
## 46 47 48 49 50
## 0.350195281 0.240083191 -0.074127341 0.109328714 0.454871977
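The fitted values are the inverse link applied to the linear predictors; for the logit link, \(\hat\mu_i = \frac{e^{\hat\eta_i}}{1+e^{\hat\eta_i}}\), which can be checked directly:
all.equal(as.numeric(predict(fit, type = "response")),
          as.numeric(plogis(predict(fit, type = "link"))))   # should be TRUE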
Fitted terms \(X_{ik}\hat\beta_k\):
predict(fit, type = "terms")
Pearson residuals:
resid(fit, type = "pearson")
## 1 2 3 4 5 6
## 1.3763213 -0.9483051 0.7683328 -1.0226246 0.6902203 -0.7085248
## 7 8 9 10 11 12
## 1.6791974 -0.9528210 1.5165433 -1.4234453 0.9162573 0.8663020
## 13 14 15 16 17 18
## 0.7756755 0.7119788 -0.9733528 -0.5832027 -0.6179490 -1.1824453
## 19 20 21 22 23 24
## -1.0406907 0.6826148 0.9021071 1.0173679 -1.2560734 -0.7130797
## 25 26 27 28 29 30
## -1.1691037 -0.6649025 0.8193312 -1.2198968 -0.6627617 0.7380846
## 31 32 33 34 35 36
## 0.9987168 0.9365395 -1.4241290 -0.6538848 -0.6545329 0.9227235
## 37 38 39 40 41 42
## 0.7491270 -0.6926374 -0.5690489 0.8186409 -0.6366973 1.6992665
## 43 44 45 46 47 48
## -0.6332602 -1.1368851 1.6184958 0.8393751 0.8868835 1.0377591
## 49 50
## -1.0561861 -1.2553771
residuals(fit, type = "pearson")
## 1 2 3 4 5 6
## 1.3763213 -0.9483051 0.7683328 -1.0226246 0.6902203 -0.7085248
## 7 8 9 10 11 12
## 1.6791974 -0.9528210 1.5165433 -1.4234453 0.9162573 0.8663020
## 13 14 15 16 17 18
## 0.7756755 0.7119788 -0.9733528 -0.5832027 -0.6179490 -1.1824453
## 19 20 21 22 23 24
## -1.0406907 0.6826148 0.9021071 1.0173679 -1.2560734 -0.7130797
## 25 26 27 28 29 30
## -1.1691037 -0.6649025 0.8193312 -1.2198968 -0.6627617 0.7380846
## 31 32 33 34 35 36
## 0.9987168 0.9365395 -1.4241290 -0.6538848 -0.6545329 0.9227235
## 37 38 39 40 41 42
## 0.7491270 -0.6926374 -0.5690489 0.8186409 -0.6366973 1.6992665
## 43 44 45 46 47 48
## -0.6332602 -1.1368851 1.6184958 0.8393751 0.8868835 1.0377591
## 49 50
## -1.0561861 -1.2553771
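The squared Pearson residuals sum to the Pearson \(X^2\) goodness-of-fit statistic:
sum(resid(fit, type = "pearson")^2)   # Pearson X^2 statistic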
Deviance residuals:
resid(fit, type = "deviance")
## 1 2 3 4 5 6
## 1.4578955 -1.1326749 0.9632703 -1.1964697 0.8827337 -0.9020005
## 7 8 9 10 11 12
## 1.6371762 -1.1366498 1.5452464 -1.4881576 1.1040912 1.0581962
## 13 14 15 16 17 18
## 0.9706167 0.9056088 -1.1545590 -0.7651970 -0.8042774 -1.3226555
## 19 20 21 22 23 24
## -1.2114677 0.8746569 1.0912585 1.1920690 -1.3761580 -0.9067572
## 25 26 27 28 29 30
## -1.3126617 -0.8556835 1.0135126 -1.3502139 -0.8533748 0.9326031
## 31 32 33 34 35 36
## 1.1763197 1.1222578 -1.4885897 -0.8437659 -0.8444693 1.1099119
## 37 38 39 40 41 42
## 0.9438742 -0.8852918 -0.7490247 1.0128447 -0.8249972 1.6478884
## 43 44 45 46 47 48
## -0.8212179 -1.2881353 1.6039600 1.0327647 1.0773053 1.2090472
## 49 50
## -1.2241772 -1.3756648
residuals(fit, type = "deviance")
## 1 2 3 4 5 6
## 1.4578955 -1.1326749 0.9632703 -1.1964697 0.8827337 -0.9020005
## 7 8 9 10 11 12
## 1.6371762 -1.1366498 1.5452464 -1.4881576 1.1040912 1.0581962
## 13 14 15 16 17 18
## 0.9706167 0.9056088 -1.1545590 -0.7651970 -0.8042774 -1.3226555
## 19 20 21 22 23 24
## -1.2114677 0.8746569 1.0912585 1.1920690 -1.3761580 -0.9067572
## 25 26 27 28 29 30
## -1.3126617 -0.8556835 1.0135126 -1.3502139 -0.8533748 0.9326031
## 31 32 33 34 35 36
## 1.1763197 1.1222578 -1.4885897 -0.8437659 -0.8444693 1.1099119
## 37 38 39 40 41 42
## 0.9438742 -0.8852918 -0.7490247 1.0128447 -0.8249972 1.6478884
## 43 44 45 46 47 48
## -0.8212179 -1.2881353 1.6039600 1.0327647 1.0773053 1.2090472
## 49 50
## -1.2241772 -1.3756648
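Analogously, the deviance residuals are defined so that their squares sum to the deviance:
all.equal(sum(resid(fit, type = "deviance")^2), deviance(fit))   # should be TRUE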
Standardized Pearson residuals:
rstandard(fit, type = "pearson")
## 1 2 3 4 5 6
## 1.4551420 -1.0000613 0.7937330 -1.0568667 0.7271364 -0.7419701
## 7 8 9 10 11 12
## 1.7437976 -1.0029171 1.5651453 -1.4912008 0.9459672 0.9042326
## 13 14 15 16 17 18
## 0.8013163 0.7431631 -1.0169085 -0.6084160 -0.6382153 -1.2442072
## 19 20 21 22 23 24
## -1.0736697 0.7219876 0.9331698 1.0598605 -1.2995897 -0.7483834
## 25 26 27 28 29 30
## -1.2300222 -0.6867638 0.8740600 -1.2678800 -0.6843214 0.7649409
## 31 32 33 34 35 36
## 1.0358242 0.9656420 -1.4921262 -0.6744390 -0.6751473 0.9520696
## 37 38 39 40 41 42
## 0.7750021 -0.7206053 -0.5976156 0.8506072 -0.6563829 1.7690098
## 43 44 45 46 47 48
## -0.6529361 -1.2105003 1.6716013 0.8775903 0.9482497 1.0877880
## 49 50
## -1.0889749 -1.3598515
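Standardized Pearson residuals are the Pearson residuals divided by \(\sqrt{\hat\phi\,(1-h_{ii})}\) (here the dispersion \(\hat\phi=1\)); a quick check:
all.equal(as.numeric(rstandard(fit, type = "pearson")),
          as.numeric(resid(fit, type = "pearson") / sqrt(1 - hatvalues(fit))))   # should be TRUE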
Standardized deviance residuals:
rstandard(fit, type = "deviance")
## 1 2 3 4 5 6
## 1.5413878 -1.1944935 0.9951150 -1.2365330 0.9299464 -0.9445786
## 7 8 9 10 11 12
## 1.7001597 -1.1964109 1.5947683 -1.5589934 1.1398917 1.1045287
## 13 14 15 16 17 18
## 1.0027015 0.9452741 -1.2062232 -0.7982783 -0.8306546 -1.3917409
## 19 20 21 22 23 24
## -1.2498586 0.9251066 1.1288343 1.2418584 -1.4238346 -0.9516496
## 25 26 27 28 29 30
## -1.3810605 -0.8838175 1.0812121 -1.4033229 -0.8811351 0.9665373
## 31 32 33 34 35 36
## 1.2200259 1.1571315 -1.5596648 -0.8702888 -0.8710658 1.1452113
## 37 38 39 40 41 42
## 0.9764758 -0.9210389 -0.7866263 1.0523943 -0.8505048 1.7155229
## 43 44 45 46 47 48
## -0.8467338 -1.3715441 1.6565885 1.0797846 1.1518472 1.2673336
## 49 50
## -1.2621813 -1.4901497
Cook's distance:
cooks.distance(fit)
## 1 2 3 4 5 6
## 0.062368033 0.028036898 0.010585896 0.019013682 0.014517510 0.013300066
## 7 8 9 10 11 12
## 0.059616754 0.027136986 0.039882513 0.054182727 0.014743195 0.018291761
## 13 14 15 16 17 18
## 0.010788226 0.012359932 0.023654697 0.008174640 0.006788767 0.041485027
## 19 20 21 22 23 24
## 0.018554756 0.015466716 0.015250537 0.023948558 0.029763096 0.014207597
## 25 26 27 28 29 30
## 0.040444715 0.007881049 0.026367968 0.032236691 0.007740746 0.010839181
## 31 32 33 34 35 36
## 0.020302706 0.014713069 0.054421371 0.007261513 0.007291097 0.014643276
## 37 38 39 40 41 42
## 0.010552055 0.010695450 0.009189483 0.014402082 0.006763380 0.065538042
## 43 44 45 46 47 48
## 0.006726032 0.048976526 0.046593986 0.017931219 0.032184623 0.029209701
## 49 50
## 0.018693106 0.080148294
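For a GLM, cooks.distance combines the Pearson residuals, the leverages, and the number of estimated coefficients; with dispersion equal to 1 it can be reproduced as follows (a sketch):
rp <- resid(fit, type = "pearson")
h  <- hatvalues(fit)
p  <- length(coef(fit))
all.equal(as.numeric(cooks.distance(fit)),
          as.numeric((rp / (1 - h))^2 * h / p))   # should be TRUE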
Leverages \(h_{ii}\):
hatvalues(fit)
## 1 2 3 4 5 6
## 0.10539991 0.10082765 0.06297796 0.06374964 0.09896097 0.08812078
## 7 8 9 10 11 12
## 0.07271895 0.09740565 0.06114113 0.08880919 0.06182756 0.08213600
## 13 14 15 16 17 18
## 0.06297297 0.08216249 0.08382828 0.08116433 0.06250105 0.09681501
## 19 20 21 22 23 24
## 0.06048891 0.10609391 0.06546644 0.07857792 0.06584808 0.09212131
## 25 26 27 28 29 30
## 0.09659977 0.06265135 0.12130835 0.07425814 0.06201792 0.06898540
## 31 32 33 34 35 36
## 0.07036464 0.05936778 0.08906475 0.06002316 0.06013421 0.06069691
## 37 38 39 40 41 42
## 0.06565935 0.07611692 0.09331736 0.07374895 0.05908274 0.07729573
## 43 44 45 46 47 48
## 0.05936092 0.11792927 0.06252911 0.08519511 0.12524224 0.08986760
## 49 50
## 0.05931312 0.14775314
See also influence.measures(fit)
.