GeneralizedLinearModel
Generalized linear regression model class
Description
GeneralizedLinearModel is a fitted generalized linear regression model. A generalized linear regression model is a special class of nonlinear models that describe a nonlinear relationship between a response and predictors. A generalized linear regression model has generalized characteristics of a linear regression model. The response variable follows a normal, binomial, Poisson, gamma, or inverse Gaussian distribution with parameters including the mean response μ. A link function f defines the relationship between μ and the linear combination of predictors.
Use the properties of a GeneralizedLinearModel object to investigate a fitted generalized linear regression model. The object properties include information about coefficient estimates, summary statistics, fitting method, and input data. Use the object functions to predict responses and to modify, evaluate, and visualize the model.
Creation
Create a GeneralizedLinearModel object by using fitglm or stepwiseglm.
fitglm fits a generalized linear regression model to data using a fixed model specification. Use addTerms, removeTerms, or step to add or remove terms from the model. Alternatively, use stepwiseglm to fit a model using stepwise generalized linear regression.
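For example, a minimal sketch of both workflows; the data and the table tbl below are illustrative, not part of this reference page:
rng(1)                                 % for reproducibility
x1 = randn(50,1);
x2 = randn(50,1);
y = poissrnd(exp(0.5*x1 + 1));         % Poisson response that depends on x1 only
tbl = table(x1,x2,y);
mdl = fitglm(tbl,'y ~ x1','Distribution','poisson');   % fixed model specification
mdl2 = addTerms(mdl,'x2');             % returns a new GeneralizedLinearModel with the added term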
Properties
Coefficient Estimates
CoefficientCovariance
— Covariance matrix of coefficient estimates
numeric matrix
This property is read-only.
Covariance matrix of coefficient estimates, specified as a p-by-p matrix of numeric values. p is the number of coefficients in the fitted model, as given by NumCoefficients.
For details, see Coefficient Standard Errors and Confidence Intervals.
Data Types: single | double
CoefficientNames
— Coefficient names
cell array of character vectors
This property is read-only.
Coefficient names, specified as a cell array of character vectors, each containing the name of the corresponding term.
Data Types: cell
Coefficients
— Coefficient values
table
This property is read-only.
Coefficient values, specified as a table. Coefficients contains one row for each coefficient and these columns:
Estimate — Estimated coefficient value
SE — Standard error of the estimate
tStat — t-statistic for a two-sided test with the null hypothesis that the coefficient is zero
pValue — p-value for the t-statistic
Use coefTest to perform linear hypothesis tests on the coefficients. Use coefCI to find the confidence intervals of the coefficient estimates.
To obtain any of these columns as a vector, index into the property using dot notation. For example, obtain the estimated coefficient vector in the model mdl:
beta = mdl.Coefficients.Estimate
Data Types: table
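For example, a brief sketch of both follow-up calls on a fitted model mdl (95% intervals are the coefCI default):
ci = coefCI(mdl);       % confidence intervals, one row per coefficient
[p,F] = coefTest(mdl);  % p-value and test statistic for the default test that all non-intercept coefficients are zero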
NumCoefficients
— Number of model coefficients
positive integer
This property is read-only.
Number of model coefficients, specified as a positive integer.
NumCoefficients
includes coefficients that are set to zero when
the model terms are rank deficient.
Data Types: double
NumEstimatedCoefficients
— Number of estimated coefficients
positive integer
This property is read-only.
Number of estimated coefficients in the model, specified as a positive integer.
NumEstimatedCoefficients
does not include coefficients that are
set to zero when the model terms are rank deficient.
NumEstimatedCoefficients
is the degrees of freedom for
regression.
Data Types: double
Summary Statistics
Deviance
— Deviance of fit
numeric value
This property is read-only.
Deviance of the fit, specified as a numeric value. The deviance is useful for comparing two models when one model is a special case of the other model. The difference between the deviance of the two models has a chi-square distribution with degrees of freedom equal to the difference in the number of estimated parameters between the two models. For more information, see Deviance.
Data Types: single
| double
DFE
— Degrees of freedom for error
positive integer
This property is read-only.
Degrees of freedom for the error (residuals), equal to the number of observations minus the number of estimated coefficients, specified as a positive integer.
Data Types: double
Diagnostics
— Observation diagnostics
table
This property is read-only.
Observation diagnostics, specified as a table that contains one row for each observation and the columns described in this table.
Column | Meaning | Description |
---|---|---|
Leverage | Diagonal elements of HatMatrix | Leverage for each observation indicates to what extent the fit is determined by the observed predictor values. A value close to 1 indicates that the fit is largely determined by that observation, with little contribution from the other observations. A value close to 0 indicates that the fit is largely determined by the other observations. For a model with P coefficients and N observations, the average value of Leverage is P/N. A Leverage value greater than 2*P/N indicates high leverage. |
CooksDistance | Cook's distance of scaled change in fitted values | CooksDistance is a measure of scaled change in fitted values. An observation with CooksDistance greater than three times the mean Cook's distance can be an outlier. |
HatMatrix | Projection matrix to compute fitted from observed responses | HatMatrix is an N-by-N matrix such that Fitted = HatMatrix*Y, where Y is the response vector and Fitted is the vector of fitted response values. |
The software computes these values on the scale of the linear combination of the predictors, stored in the LinearPredictor field of the Fitted and Residuals properties. For example, the software computes the diagnostic values by using the fitted response and adjusted response values from the model mdl.
Yfit = mdl.Fitted.LinearPredictor
Yadjusted = mdl.Fitted.LinearPredictor + mdl.Residuals.LinearPredictor
Diagnostics contains information that is helpful in finding outliers and influential observations. For more details, see Leverage, Cook’s Distance, and Hat Matrix. Use plotDiagnostics to plot observation diagnostics.
Rows not used in the fit because of missing values (in ObservationInfo.Missing) or excluded values (in ObservationInfo.Excluded) contain NaN values in the CooksDistance column and zeros in the Leverage and HatMatrix columns.
To obtain any of these columns as an array, index into the property using dot notation. For example, obtain the hat matrix in the model mdl:
HatMatrix = mdl.Diagnostics.HatMatrix;
Data Types: table
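For example, a short sketch that applies the 2*P/N leverage rule from the table above to a fitted model mdl:
lev = mdl.Diagnostics.Leverage;
threshold = 2*mdl.NumCoefficients/mdl.NumObservations;   % 2*P/N
highLeverage = find(lev > threshold)                     % indices of high-leverage observations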
Dispersion
— Scale factor of variance of response
numeric scalar
This property is read-only.
Scale factor of the variance of the response, specified as a numeric scalar.
If the 'DispersionFlag' name-value pair argument of fitglm or stepwiseglm is true, then the function estimates the Dispersion scale factor in computing the variance of the response. The variance of the response equals the theoretical variance multiplied by the scale factor.
For example, the variance function for the binomial distribution is p(1–p)/n, where p is the probability parameter and n is the sample size parameter. If Dispersion is near 1, the variance of the data appears to agree with the theoretical variance of the binomial distribution. If Dispersion is larger than 1, the data set is “overdispersed” relative to the binomial distribution.
Data Types: double
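For example, a hedged sketch of requesting an estimated dispersion for a binomial fit; the table tbl with success counts ySucc, trial counts nTrials, and predictor x1 is illustrative:
mdl = fitglm(tbl,'ySucc ~ x1','Distribution','binomial', ...
    'BinomialSize',tbl.nTrials,'DispersionFlag',true);
mdl.Dispersion    % a value well above 1 suggests overdispersion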
DispersionEstimated
— Flag to indicate use of dispersion scale factor
logical value
This property is read-only.
Flag to indicate whether fitglm used the Dispersion scale factor to compute standard errors for the coefficients in Coefficients.SE, specified as a logical value. If DispersionEstimated is false, fitglm used the theoretical value of the variance.
DispersionEstimated can be false only for the binomial and Poisson distributions.
Set DispersionEstimated by setting the 'DispersionFlag' name-value pair argument of fitglm or stepwiseglm.
Data Types: logical
Fitted
— Fitted response values based on input data
table
This property is read-only.
Fitted (predicted) values based on the input data, specified as a table that contains one row for each observation and the columns described in this table.
Column | Description |
---|---|
Response | Predicted values on the scale of the response |
LinearPredictor | Predicted values on the scale of the linear combination of the predictors
(same as the link function applied to the Response fitted
values) |
Probability | Fitted probabilities (included only with the binomial distribution) |
To obtain any of these columns as a vector, index into the property using dot notation. For example, obtain the vector f of fitted values on the response scale in the model mdl:
f = mdl.Fitted.Response
Use predict to compute predictions for other predictor values, or to compute confidence bounds on Fitted.
Data Types: table
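For example, a brief sketch of predictions with confidence bounds at new predictor values; the matrix Xnew is illustrative (if the model was fit to a table, pass a table with the same predictor names instead):
Xnew = randn(3,mdl.NumPredictors);   % hypothetical new observations
[ypred,yci] = predict(mdl,Xnew);     % predictions and 95% confidence bounds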
LogLikelihood
— Loglikelihood
numeric value
This property is read-only.
Loglikelihood of the model distribution at the response values, specified as a numeric value. The mean is fitted from the model, and other parameters are estimated as part of the model fit.
Data Types: single
| double
ModelCriterion
— Criterion for model comparison
structure
This property is read-only.
Criterion for model comparison, specified as a structure with these fields:
AIC — Akaike information criterion. AIC = –2*logL + 2*m, where logL is the loglikelihood and m is the number of estimated parameters.
AICc — Akaike information criterion corrected for the sample size. AICc = AIC + (2*m*(m + 1))/(n – m – 1), where n is the number of observations.
BIC — Bayesian information criterion. BIC = –2*logL + m*log(n).
CAIC — Consistent Akaike information criterion. CAIC = –2*logL + m*(log(n) + 1).
Information criteria are model selection tools that you can use to compare multiple models fit to the same data. These criteria are likelihood-based measures of model fit that include a penalty for complexity (specifically, the number of parameters). Different information criteria are distinguished by the form of the penalty.
When you compare multiple models, the model with the lowest information criterion value is the best-fitting model. The best-fitting model can vary depending on the criterion used for model comparison.
To obtain any of the criterion values as a scalar, index into the property using dot notation. For example, obtain the AIC value aic in the model mdl:
aic = mdl.ModelCriterion.AIC
Data Types: struct
Residuals
— Residuals for fitted model
table
This property is read-only.
Residuals for the fitted model, specified as a table that contains one row for each observation and the columns described in this table.
Column | Description |
---|---|
Raw | Observed minus fitted values |
LinearPredictor | Residuals on the linear predictor scale, equal to the adjusted response value minus the fitted linear combination of the predictors. For more information about the adjusted response, see Adjusted Response. |
Pearson | Raw residuals divided by the estimated standard deviation of the response |
Anscombe | Residuals defined on transformed data with the transformation selected to remove skewness |
Deviance | Residuals based on the contribution of each observation to the deviance |
Rows not used in the fit because of missing values (in ObservationInfo.Missing) contain NaN values.
To obtain any of these columns as a vector, index into the property using dot notation. For example, obtain the ordinary raw residual vector r in the model mdl:
r = mdl.Residuals.Raw
Data Types: table
Rsquared
— R-squared value for model
structure
This property is read-only.
R-squared value for the model, specified as a structure with five fields.
Field | Description | Equation |
---|---|---|
Ordinary | Ordinary (unadjusted) R-squared | R^2 = 1 – SSE/SST |
Adjusted | R-squared adjusted for the number of coefficients | R^2_adjusted = 1 – (SSE/SST)*(N – 1)/(N – p), where N is the number of observations (NumObservations) and p is the number of regression coefficients |
LLR | Loglikelihood ratio | R^2_LLR = 1 – L/L0, where L is the loglikelihood of the fitted model (LogLikelihood) and L0 is the loglikelihood of a model that includes only a constant term (McFadden [1]) |
Deviance | Deviance R-squared | R^2_deviance = 1 – D/D0, where D is the deviance of the fitted model (Deviance) and D0 is the deviance of a model that includes only a constant term |
AdjGeneralized | Adjusted generalized R-squared | R^2_AdjGeneralized is the Nagelkerke adjustment [2] to a formula proposed by Maddala [3], Cox and Snell [4], and Magee [5] for logistic regression models. |
To obtain any of these values as a scalar, index into the property using dot notation. For example, to obtain the adjusted R-squared value in the model mdl, enter:
r2 = mdl.Rsquared.Adjusted
Data Types: struct
SSE
— Sum of squared errors
numeric value
This property is read-only.
Sum of squared errors (residuals), specified as a numeric value. If the model was
trained with observation weights, the sum of squares in the SSE
calculation is the weighted sum of squares.
Data Types: single
| double
SSR
— Regression sum of squares
numeric value
This property is read-only.
Regression sum of squares, specified as a numeric value.
SSR
is equal to the sum of the
squared deviations between the fitted values and the mean of the
response. If the model was trained with observation weights, the
sum of squares in the SSR
calculation is
the weighted sum of squares.
Data Types: single
| double
SST
— Total sum of squares
numeric value
This property is read-only.
Total sum of squares, specified as a numeric value. SST
is equal
to the sum of squared deviations of the response vector y
from the
mean(y)
. If the model was trained with observation weights, the
sum of squares in the SST
calculation is the weighted sum of
squares.
Data Types: single
| double
Fitting Information
Steps
— Stepwise fitting information
structure
This property is read-only.
Stepwise fitting information, specified as a structure with the fields described in this table.
Field | Description |
---|---|
Start | Formula representing the starting model |
Lower | Formula representing the lower bound model. The terms in
Lower must remain in the model. |
Upper | Formula representing the upper bound model. The model cannot contain
more terms than Upper . |
Criterion | Criterion used for the stepwise algorithm, such as
'sse' |
PEnter | Threshold for Criterion to add a term |
PRemove | Threshold for Criterion to remove a term |
History | Table representing the steps taken in the fit |
The History
table contains one row for each step, including the
initial fit, and the columns described in this table.
Column | Description |
---|---|
Action | Action taken during the step: 'Start' for the initial fit, 'Add' for a step that adds a term, or 'Remove' for a step that removes a term |
TermName | Name of the term involved in the step: the starting model specification when Action is 'Start', or the term added or removed when Action is 'Add' or 'Remove' |
Terms | Model specification in a Terms Matrix |
DF | Regression degrees of freedom after the step |
delDF | Change in regression degrees of freedom from the previous step (negative for steps that remove a term) |
Deviance | Deviance (residual sum of squares) at the step (only for a generalized linear regression model) |
FStat | F-statistic that leads to the step |
PValue | p-value of the F-statistic |
The structure is empty unless you fit the model using stepwise regression.
Data Types: struct
Input Data
Distribution
— Generalized distribution information
structure
This property is read-only.
Generalized distribution information, specified as a structure with the fields described in this table.
Field | Description |
---|---|
Name | Name of the distribution: 'normal' , 'binomial' ,
'poisson' , 'gamma' , or
'inverse gaussian' |
DevianceFunction | Function that computes the components of the deviance as a function of the fitted parameter values and the response values |
VarianceFunction | Function that computes the theoretical variance for the distribution as a function of the
fitted parameter values. When DispersionEstimated is
true , the software multiplies the variance
function by Dispersion in the computation of the
coefficient standard errors. |
Data Types: struct
Formula
— Model information
LinearFormula
object
This property is read-only.
Model information, specified as a LinearFormula
object.
Display the formula of the fitted model mdl using dot notation:
mdl.Formula
LikelihoodPenalty
— Penalty for likelihood estimate
"none"
(default) | "jeffreys-prior"
Penalty for the likelihood estimate, specified as "none" or "jeffreys-prior".
"none" — The likelihood estimate is not penalized during model fitting.
"jeffreys-prior" — The likelihood estimate is penalized using the Jeffreys prior.
For logistic models, setting LikelihoodPenalty to "jeffreys-prior" is called Firth's regression. To reduce the coefficient estimate bias when you have a small number of samples, or when you are performing binomial (logistic) regression on a separable data set, set LikelihoodPenalty to "jeffreys-prior" during training.
Example: LikelihoodPenalty="jeffreys-prior"
Data Types: char
| string
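For example, a minimal sketch of Firth's regression, assuming LikelihoodPenalty is passed as a fitting argument as the description above implies; the table tbl and the formula are illustrative:
mdl = fitglm(tbl,'y ~ x1 + x2','Distribution','binomial', ...
    'LikelihoodPenalty','jeffreys-prior');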
Link
— Link function
structure
This property is read-only.
Link function, specified as a structure with the fields described in this table.
Field | Description |
---|---|
Name | Name of the link function, specified as a character vector. If you specify the link function
using a function handle, then Name is
'' . |
Link | Function f that defines the link function, specified as a function handle |
Derivative | Derivative of f, specified as a function handle |
Inverse | Inverse of f, specified as a function handle |
The link function is a function f that links the distribution parameter μ to the fitted linear combination Xb of the predictors:
f(μ) = Xb.
Data Types: struct
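For example, a hedged sketch of supplying a custom link as a structure of function handles when fitting; a probit link is used purely as an illustration (it is also available by name as 'probit'):
probitLink = struct( ...
    'Link',@(mu) norminv(mu), ...                    % f
    'Derivative',@(mu) 1./normpdf(norminv(mu)), ...  % derivative of f
    'Inverse',@(eta) normcdf(eta));                  % inverse of f
mdl = fitglm(tbl,'y ~ x1','Distribution','binomial','Link',probitLink);   % tbl is illustrative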
NumObservations
— Number of observations
positive integer
This property is read-only.
Number of observations the fitting function used in fitting, specified
as a positive integer. NumObservations
is the
number of observations supplied in the original table, dataset,
or matrix, minus any excluded rows (set with the
'Exclude'
name-value pair
argument) or rows with missing values.
Data Types: double
NumPredictors
— Number of predictor variables
positive integer
This property is read-only.
Number of predictor variables used to fit the model, specified as a positive integer.
Data Types: double
NumVariables
— Number of variables
positive integer
This property is read-only.
Number of variables in the input data, specified as a positive integer.
NumVariables
is the number of variables in the original table or
dataset, or the total number of columns in the predictor matrix and response
vector.
NumVariables
also includes any variables that are not used to fit
the model as predictors or as the response.
Data Types: double
ObservationInfo
— Observation information
table
This property is read-only.
Observation information, specified as an n-by-4 table, where
n is equal to the number of rows of input data.
ObservationInfo
contains the columns described in this
table.
Column | Description |
---|---|
Weights | Observation weights, specified as a numeric value. The default value
is 1 . |
Excluded | Indicator of excluded observations, specified as a logical value. The
value is true if you exclude the observation from the
fit by using the 'Exclude' name-value pair
argument. |
Missing | Indicator of missing observations, specified as a logical value. The
value is true if the observation is missing. |
Subset | Indicator of whether or not the fitting function uses the
observation, specified as a logical value. The value is
true if the observation is not excluded or
missing, meaning the fitting function uses the observation. |
To obtain any of these columns as a vector, index into the property using dot notation. For example, obtain the weight vector w of the model mdl:
w = mdl.ObservationInfo.Weights
Data Types: table
ObservationNames
— Observation names
cell array of character vectors
This property is read-only.
Observation names, specified as a cell array of character vectors containing the names of the observations used in the fit.
If the fit is based on a table or dataset containing observation names, ObservationNames uses those names. Otherwise, ObservationNames is an empty cell array.
Data Types: cell
Offset
— Offset variable
numeric vector
This property is read-only.
Offset variable, specified as a numeric vector with the same length as the number of rows in the data. Offset is passed from fitglm or stepwiseglm in the 'Offset' name-value pair argument. The fitting functions use Offset as an additional predictor variable with a coefficient value fixed at 1. In other words, the formula for fitting is
f(μ) ~ Offset + (terms involving real predictors),
where f is the link function. The Offset predictor has coefficient 1.
For example, consider a Poisson regression model. Suppose the number of counts is known for theoretical reasons to be proportional to a predictor A. By using the log link function and by specifying log(A) as an offset, you can force the model to satisfy this theoretical constraint.
Data Types: double
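A hedged sketch of that Poisson example; the table tbl with count response counts, exposure variable A, and predictor x1 is illustrative:
mdl = fitglm(tbl,'counts ~ x1','Distribution','poisson', ...
    'Offset',log(tbl.A));   % log is the canonical (default) link for 'poisson'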
PredictorNames
— Names of predictors used to fit model
cell array of character vectors
This property is read-only.
Names of predictors used to fit the model, specified as a cell array of character vectors.
Data Types: cell
ResponseName
— Response variable name
character vector
This property is read-only.
Response variable name, specified as a character vector.
Data Types: char
VariableInfo
— Information about variables
table
This property is read-only.
Information about variables contained in Variables
, specified as a
table with one row for each variable and the columns described in this table.
Column | Description |
---|---|
Class | Variable class, specified as a cell array of character vectors, such
as 'double' and
'categorical' |
Range | Variable range, specified as a cell array of vectors
|
InModel | Indicator of which variables are in the fitted model, specified as a
logical vector. The value is true if the model
includes the variable. |
IsCategorical | Indicator of categorical variables, specified as a logical vector.
The value is true if the variable is
categorical. |
VariableInfo
also includes any variables that are not used to fit
the model as predictors or as the response.
Data Types: table
VariableNames
— Names of variables
cell array of character vectors
This property is read-only.
Names of variables, specified as a cell array of character vectors.
If the fit is based on a table or dataset, this property provides the names of the variables in the table or dataset.
If the fit is based on a predictor matrix and response vector, VariableNames contains the values specified by the 'VarNames' name-value pair argument of the fitting method. The default value of 'VarNames' is {'x1','x2',...,'xn','y'}.
VariableNames
also includes any variables that are not used to fit
the model as predictors or as the response.
Data Types: cell
Variables
— Input data
table
This property is read-only.
Input data, specified as a table. Variables
contains both predictor
and response values. If the fit is based on a table or dataset array,
Variables
contains all the data from the table or dataset array.
Otherwise, Variables
is a table created from the input data matrix
X
and the response vector y
.
Variables
also includes any variables that are not used to fit the
model as predictors or as the response.
Data Types: table
Object Functions
Create CompactGeneralizedLinearModel
compact | Compact generalized linear regression model |
Add or Remove Terms from Generalized Linear Model
addTerms | Add terms to generalized linear regression model |
removeTerms | Remove terms from generalized linear regression model |
step | Improve generalized linear regression model by adding or removing terms |
Predict Responses
feval | Predict responses of generalized linear regression model using one input for each predictor |
predict | Predict responses of generalized linear regression model |
random | Simulate responses with random noise for generalized linear regression model |
Evaluate Generalized Linear Model
coefCI | Confidence intervals of coefficient estimates of generalized linear regression model |
coefTest | Linear hypothesis test on generalized linear regression model coefficients |
devianceTest | Analysis of deviance for generalized linear regression model |
partialDependence | Compute partial dependence |
Visualize Generalized Linear Model and Summary Statistics
plotDiagnostics | Plot observation diagnostics of generalized linear regression model |
plotPartialDependence | Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots |
plotResiduals | Plot residuals of generalized linear regression model |
plotSlice | Plot of slices through fitted generalized linear regression surface |
Gather Properties of Generalized Linear Model
gather | Gather properties of Statistics and Machine Learning Toolbox object from GPU |
Examples
Create Generalized Linear Regression Model
Fit a logistic regression model of the probability of smoking as a function of age, weight, and sex, using a two-way interaction model.
Load the hospital
data set.
load hospital
Convert the dataset array to a table.
tbl = dataset2table(hospital);
Specify the model using a formula that includes two-way interactions and lower-order terms.
modelspec = 'Smoker ~ Age*Weight*Sex - Age:Weight:Sex';
Create the generalized linear model.
mdl = fitglm(tbl,modelspec,'Distribution','binomial')
mdl = 
Generalized linear regression model:
    logit(Smoker) ~ 1 + Sex*Age + Sex*Weight + Age*Weight
    Distribution = Binomial

Estimated Coefficients:
                        Estimate         SE        tStat      pValue 
                       ___________    _________    ________    _______
    (Intercept)            -6.0492       19.749     -0.3063    0.75938
    Sex_Male               -2.2859       12.424    -0.18399    0.85402
    Age                    0.11691      0.50977     0.22934    0.81861
    Weight                0.031109      0.15208     0.20455    0.83792
    Sex_Male:Age          0.020734      0.20681     0.10025    0.92014
    Sex_Male:Weight        0.01216     0.053168     0.22871     0.8191
    Age:Weight         -0.00071959    0.0038964    -0.18468    0.85348

100 observations, 93 error degrees of freedom
Dispersion: 1
Chi^2-statistic vs. constant model: 5.07, p-value = 0.535
The large p-value indicates that the model might not differ statistically from a constant.
Create Generalized Linear Regression Model Using Stepwise Regression
Create response data using three of 20 predictor variables, and create a generalized linear model using stepwise regression from a constant model to see if stepwiseglm finds the correct predictors.
Generate sample data that has 20 predictor variables. Use three of the predictors to generate the Poisson response variable.
rng default % for reproducibility
X = randn(100,20);
mu = exp(X(:,[5 10 15])*[.4;.2;.3] + 1);
y = poissrnd(mu);
Fit a generalized linear regression model using the Poisson distribution. Specify the starting model as a model that contains only a constant (intercept) term. Also, specify a model with an intercept and linear term for each predictor as the largest model to consider as the fit by using the 'Upper' name-value pair argument.
mdl = stepwiseglm(X,y,'constant','Upper','linear','Distribution','poisson')
1. Adding x5, Deviance = 134.439, Chi2Stat = 52.24814, PValue = 4.891229e-13
2. Adding x15, Deviance = 106.285, Chi2Stat = 28.15393, PValue = 1.1204e-07
3. Adding x10, Deviance = 95.0207, Chi2Stat = 11.2644, PValue = 0.000790094
mdl = 
Generalized linear regression model:
    log(y) ~ 1 + x5 + x10 + x15
    Distribution = Poisson

Estimated Coefficients:
                   Estimate       SE        tStat       pValue  
                   ________    ________    ______    __________
    (Intercept)     1.0115     0.064275    15.737    8.4217e-56
    x5             0.39508     0.066665    5.9263    3.0977e-09
    x10            0.18863      0.05534    3.4085     0.0006532
    x15            0.29295     0.053269    5.4995    3.8089e-08

100 observations, 96 error degrees of freedom
Dispersion: 1
Chi^2-statistic vs. constant model: 91.7, p-value = 9.61e-20
stepwiseglm finds the three correct predictors: x5, x10, and x15.
More About
Adjusted Response
The adjusted response for an observation is the first-order Taylor expansion of the link function about a fitted response value μ_i, evaluated at the observed response y_i. The adjusted response is given by
ỹ_i = x_i β + offset + (y_i – μ_i) g′(μ_i),
where
ỹ_i is the adjusted response, corresponding to the ith observation.
x_i is the ith row in the matrix of predictor data.
β is the column vector of model coefficients specified in mdl.Coefficients.
offset is the offset of the model specified in mdl.Offset.
y_i is the observed response for the ith observation.
μ_i is the fitted response value for the ith observation.
g is the link function in mdl.Link.
Canonical Link Function
The default link function for a generalized linear model is the canonical link function. You can specify a link function when you fit a model with fitglm or stepwiseglm by using the 'Link' name-value pair argument.
Distribution | Canonical Link Function Name | Link Function | Mean (Inverse) Function |
---|---|---|---|
'normal' | 'identity' | f(μ) = μ | μ = Xb |
'binomial' | 'logit' | f(μ) = log(μ/(1 – μ)) | μ = exp(Xb) / (1 + exp(Xb)) |
'poisson' | 'log' | f(μ) = log(μ) | μ = exp(Xb) |
'gamma' | -1 | f(μ) = 1/μ | μ = 1/(Xb) |
'inverse gaussian' | -2 | f(μ) = 1/μ^2 | μ = (Xb)^(–1/2) |
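For example, a minimal sketch that overrides the canonical link by fitting a Poisson model with the identity link instead of the default log link (the table tbl and the formula are illustrative):
mdl = fitglm(tbl,'y ~ x1','Distribution','poisson','Link','identity');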
Cook’s Distance
Cook’s distance is the scaled change in fitted values, which is useful for identifying outliers in the observations for predictor variables. Cook’s distance shows the influence of each observation on the fitted response values. An observation with Cook’s distance larger than three times the mean Cook’s distance might be an outlier.
The Cook’s distance D_i of observation i is
D_i = e_i^2 h_ii / ( p φ̂ (1 – h_ii)^2 )
where
φ̂ is the dispersion parameter (estimated or theoretical).
e_i is the linear predictor residual, e_i = g(y_i) – x_i β̂, where
g is the link function.
y_i is the observed response.
x_i is the observation (row of predictor data).
β̂ is the estimated coefficient vector.
p is the number of coefficients in the regression model.
h_ii is the ith diagonal element of the Hat Matrix H.
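A short sketch of the three-times-the-mean rule for a fitted model mdl:
cd = mdl.Diagnostics.CooksDistance;
possibleOutliers = find(cd > 3*mean(cd,'omitnan'))   % candidate outliers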
Leverage
Leverage is a measure of the effect of a particular observation on the regression predictions due to the position of that observation in the space of the inputs.
The leverage of observation i is the value of the ith diagonal term hii of the hat matrix H. Because the sum of the leverage values is p (the number of coefficients in the regression model), an observation i can be considered an outlier if its leverage substantially exceeds p/n, where n is the number of observations.
Hat Matrix
The hat matrix is a projection matrix that projects the vector of response observations onto the vector of predictions.
The hat matrix H is defined in terms of the data matrix X and a diagonal weight matrix W:
H = X(X^T W X)^(–1) X^T W^T.
W has diagonal elements w_i:
w_i = 1 / ( g′(μ_i)^2 V(μ_i) ),
where
g is the link function mapping y_i to x_i b.
g′ is the derivative of the link function g.
V is the variance function.
μ_i is the ith mean.
The diagonal elements h_ii of H satisfy
0 ≤ h_ii ≤ 1 and Σ_{i=1}^{n} h_ii = p,
where n is the number of observations (rows of X), and p is the number of coefficients in the regression model.
Deviance
Deviance is a generalization of the residual sum of squares. It measures the goodness of fit compared to a saturated model.
The deviance of a model M1 is twice the difference between the loglikelihood of the model M1 and the saturated model Ms. A saturated model is a model with the maximum number of parameters that you can estimate.
For example, if you have n observations (y_i, i = 1, 2, ..., n) with potentially different values for x_i^T β, then you can define a saturated model with n parameters. Let L(b,y) denote the maximum value of the likelihood function for a model with the parameters b. Then the deviance of the model M1 is
D1 = –2( log L(b_1,y) – log L(b_s,y) ),
where b_1 and b_s contain the estimated parameters for the model M1 and the saturated model, respectively. The deviance has a chi-squared distribution with n – p degrees of freedom, where n is the number of parameters in the saturated model and p is the number of parameters in the model M1.
Assume you have two different generalized linear regression models M1 and M2, and M1 has a subset of the terms in M2. You can assess the fit of the models by comparing their deviances D1 and D2. The difference of the deviances is
D = D1 – D2 = –2( log L(b_1,y) – log L(b_2,y) ).
Asymptotically, the difference D has a chi-squared distribution with degrees of freedom v equal to the difference in the number of parameters estimated in M1 and M2. You can obtain the p-value for this test by using 1 – chi2cdf(D,v).
Typically, you examine D by taking M1 to be a model with only a constant term and M2 to be the fitted model with p coefficients. In that case, D has a chi-squared distribution with p – 1 degrees of freedom. If the dispersion is estimated, the difference divided by the estimated dispersion has an F distribution with p – 1 numerator degrees of freedom and n – p denominator degrees of freedom.
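For instance, a minimal sketch of this comparison for two nested fitted models mdl1 and mdl2 (mdl1 nested in mdl2; both names are illustrative). The devianceTest object function performs the comparison against the constant model automatically.
D = mdl1.Deviance - mdl2.Deviance;                               % drop in deviance
v = mdl2.NumEstimatedCoefficients - mdl1.NumEstimatedCoefficients;
p = 1 - chi2cdf(D,v)                                             % p-value for the improvement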
Terms Matrix
A terms matrix T is a t-by-(p + 1) matrix specifying terms in a model, where t is the number of terms, p is the number of predictor variables, and +1 accounts for the response variable. The value of T(i,j) is the exponent of variable j in term i.
For example, suppose that an input includes three predictor variables x1, x2, and x3 and the response variable y in the order x1, x2, x3, and y. Each row of T represents one term:
[0 0 0 0] — Constant term or intercept
[0 1 0 0] — x2; equivalently, x1^0 * x2^1 * x3^0
[1 0 1 0] — x1*x3
[2 0 0 0] — x1^2
[0 1 2 0] — x2*(x3^2)
The 0 at the end of each term represents the response variable. In general, a column vector of zeros in a terms matrix represents the position of the response variable. If you have the predictor and response variables in a matrix and column vector, then you must include 0 for the response variable in the last column of each row.
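For example, a brief sketch that passes a terms matrix for the model y ~ 1 + x1 + x2 + x3 + x1:x3; the matrix X and vector y are illustrative:
T = [0 0 0 0     % intercept
     1 0 0 0     % x1
     0 1 0 0     % x2
     0 0 1 0     % x3
     1 0 1 0];   % x1*x3
mdl = fitglm(X,y,T,'Distribution','poisson');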
References
[1] McFadden, Daniel. "Conditional Logit Analysis of Qualitative Choice Behavior." In Frontiers in Econometrics, edited by P. Zarembka, 105–42. New York: Academic Press, 1974.
[2] Nagelkerke, N. J. D. "A Note on a General Definition of the Coefficient of Determination." Biometrika 78, no. 3 (1991): 691–92.
[3] Maddala, Gangadharrao S. Limited-Dependent and Qualitative Variables in Econometrics. Econometric Society Monographs. New York, NY: Cambridge University Press, 1983.
[4] Cox, D. R., and E. J. Snell. Analysis of Binary Data. 2nd ed. Monographs on Statistics and Applied Probability 32. London; New York: Chapman and Hall, 1989.
[5] Magee, Lonnie. "R 2 Measures Based on Wald and Likelihood Ratio Joint Significance Tests." The American Statistician 44, no. 3 (August 1990): 250–53.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
When you fit a model by using fitglm or stepwiseglm, you cannot specify the Link, Derivative, and Inverse fields of the 'Link' name-value pair argument as anonymous functions. That is, you cannot generate code using a generalized linear model that was created using anonymous functions for links. Instead, define functions for link components.
For more information, see Introduction to Code Generation.
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
Usage notes and limitations:
The object functions of the
GeneralizedLinearModel
model fully support GPU arrays.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2012a