Gauss–Markov theorem

best linear unbiased estimator (BLUE), best linear unbiased estimation, Gauss–Markov assumptions, Gauss–Markov model
In statistics, the Gauss–Markov theorem states that in a linear regression model in which the errors are uncorrelated, have equal variances and expectation value of zero, the best linear unbiased estimator (BLUE) of the coefficients is given by the ordinary least squares (OLS) estimator, provided it exists.
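As a concrete illustration (a minimal NumPy sketch, not part of the original article; the design matrix, noise level, and replication count are invented), one can construct a second linear unbiased estimator by perturbing the OLS weighting matrix without breaking unbiasedness, and verify by simulation that its sampling variance is never below that of OLS:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 50, np.array([2.0, -1.0])
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # fixed design

# OLS weighting matrix (X'X)^{-1} X': linear in y and unbiased.
W_ols = np.linalg.solve(X.T @ X, X.T)

# A rival linear estimator: perturb the weights, then project the
# perturbation so that W_alt @ X = I still holds (unbiasedness kept).
D = rng.normal(size=W_ols.shape) * 0.02
D -= D @ X @ np.linalg.solve(X.T @ X, X.T)
W_alt = W_ols + D
assert np.allclose(W_alt @ X, np.eye(2))

ols_draws, alt_draws = [], []
for _ in range(5000):
    y = X @ beta + rng.normal(0, 1, n)  # uncorrelated, homoscedastic errors
    ols_draws.append(W_ols @ y)
    alt_draws.append(W_alt @ y)

print("OLS variance:", np.var(ols_draws, axis=0))  # smallest, per the theorem
print("alt variance:", np.var(alt_draws, axis=0))
```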
Related Articles

Ordinary least squares

OLS, least squares, ordinary least squares regression
The OLS estimator is consistent when the regressors are exogenous, and optimal in the class of linear unbiased estimators when the errors are homoscedastic and serially uncorrelated.

Andrey Markov

A. A. Markov, Markov, Andrei Markov
The theorem was named after Carl Friedrich Gauss and Andrey Markov.

Estimator

estimators, estimate, estimates
Concerning such "best unbiased estimators", see also Cramér–Rao bound, Gauss–Markov theorem, Lehmann–Scheffé theorem, Rao–Blackwell theorem.

Carl Friedrich Gauss

Gauss, Carl Gauss, Carl Friedrich Gauß
Gauss proved the method under the assumption of normally distributed errors (see Gauss–Markov theorem; see also Gaussian).

Homoscedasticity

homoscedastic, homogeneity of variance, homoskedastic
The errors do not need to be normal, nor do they need to be independent and identically distributed (only uncorrelated with mean zero and homoscedastic with finite variance).
As used in describing simple linear regression analysis, one assumption of the fitted model (to ensure that the least-squares estimators are each a best linear unbiased estimator of the respective population parameters, by the Gauss–Markov theorem) is that the standard deviations of the error terms are constant and do not depend on the x-value.

Data transformation (statistics)

data transformation, transformations
Data transformations are often used to convert an equation into a linear form.
Univariate normality is not needed for least squares estimates of the regression parameters to be meaningful (see Gauss–Markov theorem).
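For example (an illustrative sketch with invented parameters, not from the original), a multiplicative model y = a·e^{bx}·ε becomes linear in its parameters after taking logarithms, so ordinary least squares can be applied to the transformed data:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 3.0, 0.5
x = np.linspace(0.1, 5, 100)
y = a * np.exp(b * x) * rng.lognormal(0, 0.1, x.size)  # multiplicative noise

# log(y) = log(a) + b*x + log(noise): linear in the parameters.
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
print("log(a), b estimates:", coef)  # approximately [log(3), 0.5]
```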

Tikhonov regularization

ridge regression, regularized, squared regularizing function
See, for example, the James–Stein estimator (which also drops linearity) or ridge regression.
If the assumption of normality is replaced by assumptions of homoscedasticity and uncorrelatedness of errors, and if one still assumes zero mean, then the Gauss–Markov theorem entails that the solution is the minimum-variance unbiased linear estimator.
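A minimal sketch of the ridge (Tikhonov) estimator under a quadratic penalty (the data and penalty strength are invented for illustration); ridge deliberately gives up the unbiasedness required by the Gauss–Markov theorem in exchange for lower variance:

```python
import numpy as np

def ridge(X, y, lam):
    """Tikhonov-regularized least squares: (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, 0.0, -2.0]) + rng.normal(0, 1, 40)
print(ridge(X, y, lam=0.0))  # lam = 0 recovers the OLS solution
print(ridge(X, y, lam=5.0))  # coefficients shrink toward zero (biased)
```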

Endogeneity (econometrics)

endogenous, endogeneity, reverse causality
This assumption is violated if the explanatory variables are stochastic, for instance when they are measured with error, or are endogenous.
The distinction between endogenous and exogenous variables originated in simultaneous equations models, where one separates variables whose values are determined by the model from variables which are predetermined; ignoring simultaneity in the estimation leads to biased estimates as it violates the exogeneity assumption of the Gauss–Markov theorem.
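The bias is easy to exhibit by simulation (a hypothetical example, not from the original): when the true regressor is observed with measurement error, the observed regressor is correlated with the composite error term, and the OLS slope is attenuated toward zero:

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 100_000, 2.0
x_true = rng.normal(0, 1, n)
y = beta * x_true + rng.normal(0, 1, n)
x_obs = x_true + rng.normal(0, 1, n)  # measurement error makes x_obs endogenous

slope = np.polyfit(x_obs, y, 1)[0]    # OLS slope on the mismeasured regressor
print(slope)                          # approx 1.0: attenuated from beta = 2.0
```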

Generalized least squares

feasible generalized least squares, generalized (correlated) least squares
Generalized least squares (GLS), developed by Aitken, extends the Gauss–Markov theorem to the case where the error vector has a non-scalar covariance matrix.
Since OLS is applied to the transformed data, whose errors are homoscedastic, the Gauss–Markov theorem applies, and therefore the GLS estimate is the best linear unbiased estimator for β.
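A sketch of the idea, assuming the error covariance matrix Ω is known (here an invented AR(1)-style covariance): pre-multiplying by the inverse Cholesky factor of Ω whitens the errors, so OLS on the transformed data satisfies the Gauss–Markov assumptions and yields the BLUE:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(size=n)])
beta = np.array([1.0, 3.0])

# Invented AR(1)-style error covariance: Omega[i, j] = rho^|i-j|.
rho = 0.7
Omega = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(Omega)
y = X @ beta + L @ rng.normal(size=n)   # correlated errors

Xw = np.linalg.solve(L, X)              # whitened design matrix
yw = np.linalg.solve(L, y)              # whitened response
beta_gls = np.linalg.solve(Xw.T @ Xw, Xw.T @ yw)
print(beta_gls)                         # approximately [1.0, 3.0]
```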

Omitted-variable bias

omitted variable bias, omitted variables
This assumption also covers specification issues: assuming that the proper functional form has been selected and there are no omitted variables.
The Gauss–Markov theorem states that regression models which fulfill the classical linear regression model assumptions provide the most efficient linear unbiased estimators.

Heteroscedasticity

heteroscedastic, heteroskedasticity, heteroskedastic
Heteroskedasticity occurs when the variance of the errors is not constant, for example when it depends on an independent variable.
Breaking this assumption means that the Gauss–Markov theorem does not apply: OLS estimators are then not the best linear unbiased estimators (BLUE), since their variance is no longer guaranteed to be the lowest among all linear unbiased estimators.
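To illustrate (a hypothetical simulation; the variance function is invented), when the error standard deviation grows with the regressor, weighted least squares with the correct inverse-variance weights beats OLS in sampling variance, so OLS is no longer best:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
x = np.linspace(1, 10, n)
X = np.column_stack([np.ones(n), x])
beta = np.array([1.0, 2.0])
sigma = 0.5 * x                          # error spread grows with x

ols_slopes, wls_slopes = [], []
for _ in range(3000):
    y = X @ beta + rng.normal(0, sigma)  # heteroscedastic errors
    ols_slopes.append(np.linalg.solve(X.T @ X, X.T @ y)[1])
    XtW = X.T * (1 / sigma**2)           # X'W with inverse-variance weights
    wls_slopes.append(np.linalg.solve(XtW @ X, XtW @ y)[1])

print("OLS slope variance:", np.var(ols_slopes))
print("WLS slope variance:", np.var(wls_slopes))  # strictly smaller
```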

Autocorrelation

autocorrelation function, serial correlation, autocorrelated
This assumption is violated when there is autocorrelation.
(Errors are also known as "error terms" in econometrics.) Autocorrelation of the errors violates the ordinary least squares assumption that the error terms are uncorrelated, meaning that the Gauss–Markov theorem does not apply and that OLS estimators are no longer the best linear unbiased estimators (BLUE).
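A brief sketch (hypothetical; the AR(1) coefficient is invented) of serially correlated errors, whose nonzero lag-1 autocorrelation violates the uncorrelatedness assumption:

```python
import numpy as np

rng = np.random.default_rng(6)
n, phi = 500, 0.8
e = np.zeros(n)
for t in range(1, n):          # AR(1) process: each error depends on the last
    e[t] = phi * e[t - 1] + rng.normal()

lag1 = np.corrcoef(e[:-1], e[1:])[0, 1]
print("lag-1 autocorrelation:", lag1)  # approximately phi = 0.8, not zero
```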

Best linear unbiased prediction

best linear unbiased predictor (BLUP), best linear unbiased predictors
"Best linear unbiased predictions" (BLUPs) of random effects are similar to best linear unbiased estimates (BLUEs) (see Gauss–Markov theorem) of fixed effects.

Statistics

statistical, statistical analysis, statistician
The Gauss–Markov theorem is one of the foundational results of statistical estimation theory.

Uncorrelatedness (probability theory)

uncorrelated
The theorem requires only that the errors be uncorrelated with mean zero; they need not be independent, let alone identically distributed.

Variance

sample variance, population variance, variability
Among the theorem's assumptions is that the errors have equal (and finite) variances.

Bias of an estimator

unbiased, unbiased estimator, bias
An estimator is unbiased when its expected value equals the quantity being estimated; the theorem singles out the best estimator among all linear unbiased ones.

Normal distribution

normally distributed, Gaussian distribution, normal
The errors need not be normally distributed for the theorem to hold; uncorrelatedness, zero mean, and homoscedasticity with finite variance suffice.

James–Stein estimator

James–Stein, James–Stein estimation, Stein-type shrinkage approach
See, for example, the James–Stein estimator (which also drops linearity) or ridge regression.

Errors and residuals

residuals, error term, residual
The random variables \varepsilon_i are called the "disturbance", "noise" or simply "error" (to be contrasted with "residual" later in the article; see errors and residuals in statistics). The ordinary least squares estimator is the function \hat{\beta} = (X'X)^{-1}X'y of y and X (where X' denotes the transpose of X) that minimizes the sum of squares of residuals (misprediction amounts): \sum_{i=1}^{n} (y_i - \hat{y}_i)^2.
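In code (a minimal NumPy sketch of the formula above, with invented data), the OLS coefficients and the residuals they leave behind are:

```python
import numpy as np

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(30), rng.uniform(size=30)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.3, 30)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # (X'X)^{-1} X'y
residuals = y - X @ beta_hat                  # observed minus fitted values
print(beta_hat, residuals @ residuals)        # coefficients and their SSR
```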

Linear regression

regression coefficient, multiple linear regression, regression
(The dependence of the coefficient estimates on each X_{ij} is typically nonlinear; the estimator is linear in each y_i, and hence in each random error \varepsilon_i, which is why this is "linear" regression.) The estimator is said to be unbiased if and only if \operatorname{E}[\hat{\beta} \mid X] = \beta regardless of the values of X_{ij}.

If and only if

iff, if and only if, material equivalence
The estimator is unbiased if and only if its conditional expectation equals the true parameter: \operatorname{E}[\hat{\beta} \mid X] = \beta for every X.

Mean squared error

mean square error, squared error loss, MSE
Then the mean squared error of the corresponding estimation is the expected sum of squared deviations of the estimated coefficients from their true values, \operatorname{E}\left[\sum_{j}(\hat{\beta}_j - \beta_j)^2\right].
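Numerically (an illustrative Monte Carlo sketch, not from the original), this mean squared error can be approximated by averaging the squared deviations of the estimates over many simulated samples:

```python
import numpy as np

rng = np.random.default_rng(8)
X = np.column_stack([np.ones(25), rng.uniform(size=25)])
beta = np.array([1.0, -0.5])

sq_err = []
for _ in range(10_000):
    y = X @ beta + rng.normal(0, 1, 25)      # fresh sample each draw
    b = np.linalg.solve(X.T @ X, X.T @ y)    # OLS estimate
    sq_err.append(np.sum((b - beta) ** 2))   # squared deviation from truth

print("Monte Carlo MSE:", np.mean(sq_err))
```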

Transpose

matrix transpose, transposition
The OLS estimator \hat{\beta} = (X'X)^{-1}X'y is the function of y and X (where X' denotes the transpose of X) that minimizes the sum of squares of residuals (misprediction amounts).