Estimation of Multivariate Regression Models
Least Squares Estimation
Ordinary Least Squares
When you fit multivariate linear regression models using mvregress, you can use the optional name-value pair 'algorithm','cwls' to choose least squares estimation. In this case, by default, mvregress returns ordinary least squares (OLS) estimates using $\Sigma = I_d$. Alternatively, if you specify a covariance matrix for weighting, you can return covariance-weighted least squares (CWLS) estimates. If you combine OLS and CWLS, you can get feasible generalized least squares (FGLS) estimates.
The OLS estimate for the coefficient vector is the vector $b$ that minimizes

$$\sum_{i=1}^{n} (y_i - X_i b)'(y_i - X_i b).$$

Let $y$ denote the nd-by-1 vector of stacked d-dimensional responses, and $X$ denote the nd-by-K matrix of stacked design matrices. The K-by-1 vector of OLS regression coefficient estimates is

$$b_{OLS} = (X'X)^{-1}X'y.$$
This is the first mvregress output.
Given $\Sigma = I_d$ (the mvregress OLS default), the variance-covariance matrix of the OLS estimates is

$$V(b_{OLS}) = (X'X)^{-1}.$$

This is the fourth mvregress output. The standard errors of the OLS regression coefficients are the square root of the diagonal of this variance-covariance matrix.
If your data is not scaled such that $\Sigma = \sigma^2 I_d$, then you can multiply the mvregress variance-covariance matrix by the mean squared error (MSE), an unbiased estimate of $\sigma^2$. To compute the MSE, return the n-by-d matrix of residuals, $E$ (the third mvregress output). Then,

$$\mathrm{MSE} = \frac{\sum_{i=1}^{n} e_i e_i'}{nd - K},$$

where $e_i = (y_i - X_i b_{OLS})'$ is the ith row of $E$.
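For example, the following MATLAB sketch fits an OLS model to simulated data and scales the standard errors by the MSE. The sample size, design, and coefficient values are illustrative assumptions, not values from this page.

% Sketch: simulate n bivariate responses with one shared predictor, fit by
% OLS ('cwls' with the default identity weighting), and scale the standard
% errors by the MSE. All numbers are illustrative.
rng(1)                                    % reproducibility
n = 100; d = 2;
x = rand(n,1);
X = cell(n,1);
for i = 1:n
    X{i} = kron(eye(d),[1 x(i)]);         % d-by-K design matrix, K = 4
end
Btrue = [1; 2; -1; 0.5];                  % illustrative coefficients
Strue = [1 0.3; 0.3 2];                   % illustrative error covariance
Y = zeros(n,d);
for i = 1:n
    Y(i,:) = (X{i}*Btrue)' + mvnrnd(zeros(1,d),Strue);
end

[bOLS,~,E,CovB] = mvregress(X,Y,'algorithm','cwls');   % OLS fit

K   = numel(bOLS);
MSE = sum(sum(E.^2))/(n*d - K);           % unbiased estimate of sigma^2
seOLS = sqrt(MSE*diag(CovB));             % MSE-scaled standard errors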
Covariance-Weighted Least Squares
For most multivariate problems, an identity error covariance matrix is insufficient and can lead to inefficient or biased standard error estimates. You can specify a matrix for CWLS estimation using the optional name-value pair argument covar0, for example, an invertible d-by-d matrix named $C_0$. Usually, $C_0$ is a diagonal matrix whose inverse contains weights for each dimension, to model heteroscedasticity. However, $C_0$ can also be a nondiagonal matrix that models correlation.
Given $C_0$, the CWLS solution is the vector $b$ that minimizes

$$\sum_{i=1}^{n}(y_i - X_i b)'\,C_0^{-1}\,(y_i - X_i b).$$

In this case, the K-by-1 vector of CWLS regression coefficient estimates is

$$b_{CWLS} = \left(X'(I_n \otimes C_0)^{-1}X\right)^{-1}X'(I_n \otimes C_0)^{-1}y.$$
This is the first mvregress output.
If $C_0 = \Sigma$, this is the generalized least squares (GLS) solution. The corresponding variance-covariance matrix of the CWLS estimates is

$$V(b_{CWLS}) = \left(X'(I_n \otimes C_0)^{-1}X\right)^{-1}.$$

This is the fourth mvregress output. The standard errors of the CWLS regression coefficients are the square root of the diagonal of this variance-covariance matrix.
If you only know the error covariance matrix up to a proportion, that is, $\Sigma = \sigma^2 C_0$, you can multiply the mvregress variance-covariance matrix by the MSE, as described in Ordinary Least Squares.
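Continuing the simulated data from the OLS sketch above, the following lines fit a CWLS model with an illustrative diagonal weighting matrix. Weighting each dimension by its sample variance is an assumption made for demonstration, not a recommendation.

C0 = diag(var(Y));                        % illustrative diagonal weighting matrix
[bCWLS,SigCWLS,Ecwls,CovBcwls] = mvregress(X,Y,'algorithm','cwls','covar0',C0);
seCWLS = sqrt(diag(CovBcwls));            % scale by the MSE if C0 is only proportional to Sigma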
Error Covariance Estimation
Regardless of which least squares method you use, the estimate for the error variance-covariance matrix is

$$\hat{\Sigma} = \begin{pmatrix}\hat{\sigma}_1^2 & \cdots & \hat{\sigma}_{1d}\\ \vdots & \ddots & \vdots\\ \hat{\sigma}_{1d} & \cdots & \hat{\sigma}_d^2\end{pmatrix} = \frac{E'E}{n},$$

where $E$ is the n-by-d matrix of residuals. The ith row of $E$ is $e_i = (y_i - X_i b)'$.

The error covariance estimate, $\hat{\Sigma}$, is the second mvregress output, and the matrix of residuals, $E$, is the third output. If you specify the optional name-value pair 'covtype','diagonal', then mvregress returns $\hat{\Sigma}$ with zeros in the off-diagonal entries:

$$\hat{\Sigma} = \begin{pmatrix}\hat{\sigma}_1^2 & \cdots & 0\\ \vdots & \ddots & \vdots\\ 0 & \cdots & \hat{\sigma}_d^2\end{pmatrix}.$$
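As a quick check on the simulated data above, the returned error covariance should match $E'E/n$, and 'covtype','diagonal' zeroes the off-diagonal entries:

[~,SigHat,E] = mvregress(X,Y,'algorithm','cwls');
norm(SigHat - (E'*E)/n)                   % near zero, per the formula above
[~,SigDiag] = mvregress(X,Y,'algorithm','cwls','covtype','diagonal');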
Feasible Generalized Least Squares
The generalized least squares estimate is the CWLS estimate with a known covariance matrix. That is, given that $\Sigma$ is known, the GLS solution is

$$b_{GLS} = \left(X'(I_n \otimes \Sigma)^{-1}X\right)^{-1}X'(I_n \otimes \Sigma)^{-1}y,$$

with variance-covariance matrix

$$V(b_{GLS}) = \left(X'(I_n \otimes \Sigma)^{-1}X\right)^{-1}.$$

In most cases, the error covariance is unknown. The feasible generalized least squares (FGLS) estimate uses $\hat{\Sigma}$ in place of $\Sigma$. You can obtain two-step FGLS estimates as follows:
1. Perform OLS regression, and return an estimate $\hat{\Sigma}$.
2. Perform CWLS regression, using $C_0 = \hat{\Sigma}$.
You can also iterate between these two steps until convergence is reached.
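A sketch of the two-step procedure, reusing the simulated X and Y from the OLS example above:

[~,SigOLS] = mvregress(X,Y,'algorithm','cwls');            % step 1: OLS residual covariance
[bFGLS,SigFGLS,Efgls,CovBfgls] = mvregress(X,Y, ...
    'algorithm','cwls','covar0',SigOLS);                   % step 2: CWLS with Sigma-hat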
For some data, the OLS estimate $\hat{\Sigma}$ is only positive semidefinite (that is, singular), and has no unique inverse. In this case, you cannot get the FGLS estimate using mvregress. As an alternative, you can use lscov, which uses a generalized inverse to return weighted least squares solutions for positive semidefinite covariance matrices.
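For illustration, a sketch of the lscov route, stacking the simulated responses and design matrices from above; lscov accepts a covariance matrix that is only positive semidefinite:

ystk = reshape(Y',n*d,1);                 % nd-by-1 stacked responses
Xstk = cell2mat(X);                       % nd-by-K stacked design matrix
V    = kron(eye(n),SigOLS);               % nd-by-nd block-diagonal covariance
[bW,seW] = lscov(Xstk,ystk,V);            % weighted least squares solution and standard errors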
Panel Corrected Standard Errors
An alternative to FGLS is to use the OLS coefficient estimates (which are consistent) and make a standard error correction to improve efficiency. One such adjustment, which does not require inverting the error covariance matrix, is panel corrected standard errors (PCSE) [1]. The panel corrected variance-covariance matrix of the OLS estimates is

$$V_{PCSE}(b_{OLS}) = (X'X)^{-1}X'(I_n \otimes \hat{\Sigma})X(X'X)^{-1}.$$

The PCSE are the square root of the diagonal of this variance-covariance matrix. Fixed Effects Panel Model with Concurrent Correlation illustrates PCSE computation.
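The following sketch evaluates the PCSE formula directly, using the stacked design matrix and the OLS residual covariance SigOLS from the FGLS sketch above; the variable names are illustrative.

Xstk  = cell2mat(X);                      % nd-by-K stacked design matrix
K     = size(Xstk,2);
Bread = (Xstk'*Xstk)\eye(K);              % (X'X)^{-1}
Meat  = Xstk'*kron(eye(n),SigOLS)*Xstk;   % X'(I_n kron Sigma-hat)X
Vpcse = Bread*Meat*Bread;                 % panel corrected covariance of b_OLS
PCSE  = sqrt(diag(Vpcse));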
Maximum Likelihood Estimation
Maximum Likelihood Estimates
The default estimation algorithm used by mvregress is maximum likelihood estimation (MLE). The loglikelihood function for the multivariate linear regression model is

$$\log L(\beta,\Sigma \mid y, X) = -\frac{nd}{2}\log(2\pi) - \frac{n}{2}\log\det(\Sigma) - \frac{1}{2}\sum_{i=1}^{n}(y_i - X_i\beta)'\Sigma^{-1}(y_i - X_i\beta).$$

The MLEs for $\beta$ and $\Sigma$ are the values that maximize the loglikelihood objective function.
mvregress finds the MLEs using an iterative two-stage algorithm. At iteration m + 1, the estimates are

$$b^{(m+1)} = \left(X'\left(I_n \otimes \hat{\Sigma}^{(m)}\right)^{-1}X\right)^{-1}X'\left(I_n \otimes \hat{\Sigma}^{(m)}\right)^{-1}y$$

and

$$\hat{\Sigma}^{(m+1)} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - X_i b^{(m+1)}\right)\left(y_i - X_i b^{(m+1)}\right)'.$$
The algorithm terminates when the changes in the coefficient estimates and loglikelihood objective function are less than a specified tolerance, or when the specified maximum number of iterations is reached. The optional name-value pair arguments for changing these convergence criteria are tolbeta, tolobj, and maxiter, respectively.
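For example, a maximum likelihood fit of the simulated data above with explicit convergence settings; the tolerance and iteration values shown are illustrative choices:

[bMLE,SigMLE,Emle,CovMLE,logL] = mvregress(X,Y, ...
    'tolbeta',1e-8,'tolobj',1e-10,'maxiter',500);
seMLE = sqrt(diag(CovMLE));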
Standard Errors
The variance-covariance matrix of the MLEs is an optional mvregress output. By default, mvregress returns the variance-covariance matrix for only the regression coefficients, but you can also get the variance-covariance matrix of $\hat{\Sigma}$ using the optional name-value pair 'varformat','full'. In this case, mvregress returns the variance-covariance matrix for all K regression coefficients, and d or d(d + 1)/2 covariance terms (depending on whether the error covariance is diagonal or full).
By default, the variance-covariance matrix is the inverse of the observed Fisher information matrix (the 'hessian' option). You can request the expected Fisher information matrix using the optional name-value pair 'vartype','fisher'. Provided there is no missing response data, the observed and expected Fisher information matrices are the same. If response data is missing, the observed Fisher information accounts for the added uncertainty due to the missing values, whereas the expected Fisher information matrix does not.
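For example, on the simulated data above you can request the full variance-covariance matrix and, separately, the expected Fisher information:

[b,Sig,E,CovFull]   = mvregress(X,Y,'varformat','full');   % K + d(d+1)/2 parameters for a full Sigma
[b2,Sig2,E2,CovFis] = mvregress(X,Y,'vartype','fisher');    % expected Fisher information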
The variance-covariance matrix for the regression coefficient MLEs is

$$V(b) = \left(X'(I_n \otimes \Sigma)^{-1}X\right)^{-1},$$

evaluated at the MLE of the error covariance matrix. This is the fourth mvregress output. The standard errors of the MLEs are the square root of the diagonal of this variance-covariance matrix.
For $\hat{\Sigma}$, let $\sigma$ denote the vector of parameters in the estimated error variance-covariance matrix. For example, if d = 2, then:
- If the estimated covariance matrix is diagonal, then $\sigma = (\sigma_1^2, \sigma_2^2)$.
- If the estimated covariance matrix is full, then $\sigma = (\sigma_1^2, \sigma_{12}, \sigma_2^2)$.
The Fisher information matrix for $\sigma$, $I(\sigma)$, has elements

$$I(\sigma)_{s,t} = \frac{n}{2}\operatorname{tr}\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\sigma_s}\,\Sigma^{-1}\frac{\partial\Sigma}{\partial\sigma_t}\right),\qquad s,t = 1,\ldots,q,$$

where $q$ is the length of $\sigma$ (either d or d(d + 1)/2). The resulting variance-covariance matrix is

$$V(\sigma) = I(\sigma)^{-1}.$$
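For instance, if d = 2 and the covariance is full, so that $\sigma = (\sigma_1^2, \sigma_{12}, \sigma_2^2)$, the partial derivatives entering the trace are

$$\frac{\partial\Sigma}{\partial\sigma_1^2} = \begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix},\qquad \frac{\partial\Sigma}{\partial\sigma_{12}} = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix},\qquad \frac{\partial\Sigma}{\partial\sigma_2^2} = \begin{pmatrix}0 & 0\\ 0 & 1\end{pmatrix},$$

so each entry of $I(\sigma)$ reduces to the trace of a product of 2-by-2 matrices.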
When you request the full variance-covariance matrix, mvregress returns (as the fourth output) the block diagonal matrix

$$\begin{pmatrix}V(b) & 0\\ 0 & V(\sigma)\end{pmatrix}.$$
Missing Response Data
Expectation/Conditional Maximization
If any response values are missing, indicated by NaN, mvregress uses an expectation/conditional maximization (ECM) algorithm for estimation (if enough data is available). In this case, the algorithm is iterative for both least squares and maximum likelihood estimation. During each iteration, mvregress imputes missing response values using their conditional expectation.
Consider organizing the data so that the joint distribution of the missing and observed responses, denoted $\tilde{y}$ and $y$ respectively, can be written as

$$\begin{pmatrix}\tilde{y}\\ y\end{pmatrix} \sim MVN\left(\begin{pmatrix}\tilde{X}\beta\\ X\beta\end{pmatrix},\begin{pmatrix}\Sigma_{\tilde{y}} & \Sigma_{\tilde{y}y}\\ \Sigma_{y\tilde{y}} & \Sigma_{y}\end{pmatrix}\right).$$

Using properties of the multivariate normal distribution, the conditional expectation of the missing responses given the observed responses is

$$E(\tilde{y}\mid y) = \tilde{X}\beta + \Sigma_{\tilde{y}y}\Sigma_{y}^{-1}(y - X\beta).$$

Also, the variance-covariance matrix of the conditional distribution is

$$\mathrm{COV}(\tilde{y}\mid y) = \Sigma_{\tilde{y}} - \Sigma_{\tilde{y}y}\Sigma_{y}^{-1}\Sigma_{y\tilde{y}}.$$
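To make these formulas concrete, the following MATLAB sketch evaluates them for a single observation with d = 3 and the second response missing. Here Sig, mu, and y are hypothetical stand-ins for the current error covariance, the fitted mean $X_i\beta$, and the partially observed response (mu and y are d-by-1 columns).

obs = [1 3];  mis = 2;                         % observed and missing components (illustrative)
Syy = Sig(obs,obs);                            % covariance of the observed responses
Smy = Sig(mis,obs);                            % cross-covariance, missing vs. observed
yhat = mu(mis) + Smy/Syy*(y(obs) - mu(obs));   % E(ytilde | y)
cvar = Sig(mis,mis) - Smy/Syy*Smy';            % COV(ytilde | y)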
At each iteration of the ECM algorithm, mvregress uses the parameter values from the previous iteration to:

- Update the regression coefficients using the combined vector of observed responses and conditional expectations of missing responses.
- Update the variance-covariance matrix, adjusting for missing responses using the variance-covariance matrix of the conditional distribution.
Finally, the residuals that mvregress returns for missing responses are the difference between the conditional expectation and the fitted value, both evaluated at the final parameter estimates.
If you prefer to ignore any observations that have missing response values, use the name-value pair 'algorithm','mvn'. Note that mvregress always ignores observations that have missing predictor values.
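For example, on the simulated data above, delete a few responses and compare the default (ECM) fit with the complete-case 'mvn' fit:

Ymiss = Y;
Ymiss(randperm(n,10),2) = NaN;            % hypothetically delete 10 responses in dimension 2
[bECM,SigECM] = mvregress(X,Ymiss);       % ECM imputes the missing responses
[bCC,SigCC]   = mvregress(X,Ymiss,'algorithm','mvn');   % drops rows with missing responses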
Observed Information Matrix
By default, mvregress uses the observed Fisher information matrix (the 'hessian' option) to compute the variance-covariance matrix of the regression parameters. This accounts for the additional uncertainty due to missing response values.
The observed information matrix includes contributions from only the observed responses. That is, the observed Fisher information matrix for the parameters in the error variance-covariance matrix has elements

$$I(\sigma)_{s,t} = \frac{1}{2}\sum_{i=1}^{n}\operatorname{tr}\left(\Sigma_i^{-1}\frac{\partial\Sigma_i}{\partial\sigma_s}\,\Sigma_i^{-1}\frac{\partial\Sigma_i}{\partial\sigma_t}\right),\qquad s,t = 1,\ldots,q,$$

where $\Sigma_i$ is the subset of $\Sigma$ corresponding to the observed responses in $y_i$.

For example, if d = 3, but $y_{i2}$ is missing, then

$$\Sigma_i = \begin{pmatrix}\sigma_1^2 & \sigma_{13}\\ \sigma_{13} & \sigma_3^2\end{pmatrix}.$$
The observed Fisher information for the regression coefficients has similar contributions from the design and covariance matrices.
References
[1] Beck, N. and J. N. Katz. "What to Do (and Not to Do) with Time-Series-Cross-Section Data in Comparative Politics." American Political Science Review, Vol. 89, No. 3, pp. 634–647, 1995.
See Also
Related Examples
- Set Up Multivariate Regression Problems
- Multivariate General Linear Model
- Fixed Effects Panel Model with Concurrent Correlation
- Longitudinal Analysis