Ordinary least squares

In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model.
Related Articles

Least squares

OLS chooses the parameters of a linear function of a set of explanatory variables by the principle of least squares: minimizing the sum of the squared differences between the observed values of the dependent variable in the given dataset and the values predicted by the linear function.
Least-squares problems fall into two categories: linear or ordinary least squares and nonlinear least squares, depending on whether or not the residuals are linear in all unknowns.
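As a minimal illustration of the principle (a NumPy sketch with made-up data, not part of the original article), the linear case can be solved in closed form:

    import numpy as np

    # Made-up data: y depends linearly on x plus noise
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=50)
    y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=50)

    # Design matrix with an intercept column
    X = np.column_stack([np.ones_like(x), x])

    # Closed-form OLS solution of the minimization problem
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    residuals = y - X @ beta
    print(beta)                    # approximately [2, 3]
    print(np.sum(residuals ** 2))  # the minimized sum of squared residuals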

Maximum likelihood estimation

Under the additional assumption that the errors are normally distributed, OLS is the maximum likelihood estimator.
In some cases, the first-order conditions of the likelihood function can be solved explicitly; for instance, the ordinary least squares estimator maximizes the likelihood of the linear regression model.
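To sketch the equivalence: with i.i.d. normal errors, the log-likelihood of the linear regression model is

    \log L(\beta, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - x_i^\top \beta\right)^2,

and for any fixed σ² the only term depending on β is the sum of squared residuals, so the β that maximizes the likelihood is exactly the OLS estimator.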

Simple linear regression

The resulting estimator can be expressed by a simple formula, especially in the case of a simple linear regression, in which there is a single regressor on the right side of the regression equation.
It is common to make the additional stipulation that the ordinary least squares (OLS) method should be used: the accuracy of each predicted value is measured by its squared residual (vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible.
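For reference, the closed-form estimates in the single-regressor case are

    \hat\beta_1 = \frac{\sum_i (x_i - \bar x)(y_i - \bar y)}{\sum_i (x_i - \bar x)^2}, \qquad \hat\beta_0 = \bar y - \hat\beta_1 \bar x,

that is, the sample covariance of x and y divided by the sample variance of x, and the intercept that makes the fitted line pass through the point of means.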

Linear least squares

Linear least squares problems include ordinary (unweighted), weighted, and generalized (correlated) variants; in the ordinary case the estimate is found by solving the normal equations.
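In matrix form, the OLS estimate solves the normal equations

    (X^\top X)\,\hat\beta = X^\top y,

which have a unique solution whenever XᵀX is invertible, i.e. whenever the columns of X are linearly independent.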

Econometrics

OLS is used in fields as diverse as economics (econometrics), data science, political science, psychology and engineering (control theory and signal processing).
Ordinary least squares (OLS) is often used for estimation since it provides the BLUE, or "best linear unbiased estimator" (where "best" means most efficient unbiased estimator), given the Gauss–Markov assumptions.

Proofs involving ordinary least squares

The function S(b) is quadratic in b with a positive-definite Hessian, and therefore possesses a unique global minimum at b = β̂, given by the explicit formula β̂ = (XᵀX)⁻¹Xᵀy.
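The key step of the proof, in outline: writing S(b) = (y − Xb)ᵀ(y − Xb) and setting the gradient to zero gives

    \nabla_b S = -2X^\top(y - Xb) = 0 \quad\Longrightarrow\quad (X^\top X)\,\hat\beta = X^\top y,

and positive-definiteness of the Hessian 2XᵀX guarantees that this stationary point is the global minimum.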
The purpose of this page is to provide supplementary materials for the ordinary least squares article, reducing the load of the main article with mathematics and improving its accessibility, while at the same time retaining the completeness of exposition.

Linear regression

In statistics, linear regression is a linear approach to modelling the relationship between a scalar response (dependent variable) and one or more explanatory variables; such models are most commonly fitted using ordinary least squares.

Moore–Penrose inverse

The matrix (XᵀX)⁻¹Xᵀ is called the Moore–Penrose pseudoinverse matrix of X. This formulation highlights the point that estimation can be carried out if, and only if, there is no perfect multicollinearity between the explanatory variables (which would cause the normal matrix XᵀX to have no inverse).
A common use of the pseudoinverse is to compute a "best fit" (least squares) solution to a system of linear equations that lacks a unique solution (see below under § Applications).
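As a toy illustration (NumPy; the numbers are made up), the pseudoinverse reproduces the least-squares solution of an inconsistent overdetermined system:

    import numpy as np

    # Four equations, two unknowns: no exact solution exists
    X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
    y = np.array([0.1, 0.9, 2.1, 2.9])

    beta_pinv = np.linalg.pinv(X) @ y                    # Moore-Penrose pseudoinverse
    beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)   # direct least-squares solver
    print(np.allclose(beta_pinv, beta_lstsq))            # True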

Autocorrelation

The OLS estimator is consistent when the regressors are exogenous, and optimal in the class of linear unbiased estimators when the errors are homoscedastic and serially uncorrelated.
In ordinary least squares (OLS), the adequacy of a model specification can be checked in part by establishing whether there is autocorrelation of the regression residuals.
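One standard such check is the Durbin–Watson statistic of the residuals, which is near 2 in the absence of first-order autocorrelation. A minimal sketch (NumPy; the residuals here are made up):

    import numpy as np

    def durbin_watson(e):
        # DW = sum of squared first differences / sum of squared residuals
        return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

    e = np.array([0.5, -0.3, 0.2, -0.4, 0.1, 0.3, -0.2])  # made-up OLS residuals
    print(durbin_watson(e))  # values far below 2 suggest positive autocorrelation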

Statistics

Least squares applied to linear regression is called the ordinary least squares method, and least squares applied to nonlinear regression is called nonlinear least squares.

Multicollinearity

Note that in statements of the assumptions underlying regression analyses such as ordinary least squares, the phrase "no multicollinearity" usually refers to the absence of perfect multicollinearity, which is an exact (non-stochastic) linear relation among the predictors.
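A toy demonstration of the perfect case (NumPy): when one column of the design matrix is an exact linear combination of the others, the normal matrix XᵀX is singular and the OLS estimate is not identified.

    import numpy as np

    x1 = np.array([1.0, 2.0, 3.0, 4.0])
    x2 = 2.0 * x1                           # exactly collinear with x1
    X = np.column_stack([np.ones(4), x1, x2])

    XtX = X.T @ X
    print(np.linalg.matrix_rank(XtX))       # 2, not 3: rank deficient
    print(np.linalg.det(XtX))               # 0 (up to rounding): no inverse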

Polynomial least squares

The variance in the prediction of the independent variable as a function of the dependent variable is given in the article Polynomial least squares.
These methods include polynomial regression, curve fitting, linear regression, least squares, ordinary least squares, simple linear regression, linear least squares, approximation theory and method of moments.

Reduced chi-squared statistic

Using these residuals we can estimate the value of σ² using the reduced chi-squared statistic: s² = Σᵢ ε̂ᵢ² / (n − p), the sum of squared residuals divided by the degrees of freedom (see Ordinary least squares § Reduced chi-squared).

Weighted least squares

When this requirement is violated, the errors are said to be heteroscedastic; in that case a more efficient estimator would be weighted least squares.
Weighted least squares (WLS), also known as weighted linear regression, is a generalization of ordinary least squares and linear regression in which the errors covariance matrix is allowed to be different from an identity matrix.
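A minimal WLS sketch (NumPy, simulated heteroscedastic data; the variance model is assumed known here, which real applications must estimate):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(1.0, 10.0, 40)
    sigma = 0.5 * x                       # error spread grows with x
    y = 1.0 + 2.0 * x + rng.normal(0.0, sigma)

    X = np.column_stack([np.ones_like(x), x])
    w = 1.0 / sigma ** 2                  # weights = inverse error variances

    # Weighted normal equations: (X' W X) beta = X' W y, with W = diag(w)
    XtWX = X.T @ (X * w[:, None])
    XtWy = X.T @ (y * w)
    print(np.linalg.solve(XtWX, XtWy))    # close to [1, 2]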

Gauss–Markov theorem

In statistics, the Gauss–Markov theorem states that in a linear regression model in which the errors are uncorrelated, have equal variances and expectation value of zero, the best linear unbiased estimator (BLUE) of the coefficients is given by the ordinary least squares (OLS) estimator, provided it exists.
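In symbols, the theorem assumes

    \mathbb{E}[\varepsilon] = 0, \qquad \operatorname{Var}(\varepsilon) = \sigma^2 I_n,

and concludes that for any other linear unbiased estimator β̃, the difference Var(β̃) − Var(β̂_OLS) is positive semi-definite.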

Instrumental variables estimation

In such cases the method of instrumental variables may be used to carry out inference.
Intuitively, IVs are used when an explanatory variable of interest is correlated with the error term, in which case ordinary least squares and ANOVA give biased results.
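A minimal two-stage least squares sketch (NumPy, simulated data; the names and the simulated design are illustrative, not from the article): first regress the endogenous regressor on the instrument, then regress y on the first-stage fitted values.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 1000
    z = rng.normal(size=n)                  # instrument: correlated with x, not with u
    u = rng.normal(size=n)                  # unobserved confounder
    x = 0.8 * z + u + rng.normal(size=n)    # endogenous regressor
    y = 1.5 * x + u + rng.normal(size=n)    # structural equation; true effect is 1.5

    Z = np.column_stack([np.ones(n), z])
    X = np.column_stack([np.ones(n), x])

    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # stage 1: fitted regressors
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]    # stage 2: 2SLS estimate
    print(beta[1])   # near 1.5, whereas plain OLS of y on x is biased upward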

Coefficient of determination

The coefficient of determination R² is defined as the ratio of the "explained" variance to the "total" variance of the dependent variable y, in the cases where the total sum of squares equals the sum of the regression (explained) sum of squares and the residual sum of squares: R² = SS_reg / SS_tot = 1 − SS_res / SS_tot. The covariance between two coefficient estimates, as well as their standard deviations, is obtained from the covariance matrix of the coefficient estimates.

Heteroscedasticity

For instance, while the ordinary least squares estimator is still unbiased in the presence of heteroscedasticity, it is inefficient because the true variance and covariance are underestimated.

Robust regression

In this case, robust estimation techniques are recommended.
Certain widely used methods of regression, such as ordinary least squares, have favourable properties if their underlying assumptions are true, but can give misleading results if those assumptions are not true; thus ordinary least squares is said to be not robust to violations of its assumptions.
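One widely used robust alternative is M-estimation with the Huber loss, fitted by iteratively reweighted least squares; the sketch below (NumPy) uses the conventional tuning constant 1.345 and a median-based scale estimate, both standard choices but selected here only for illustration.

    import numpy as np

    def huber_irls(X, y, k=1.345, iters=25):
        # Huber M-estimator via iteratively reweighted least squares (sketch)
        beta = np.linalg.lstsq(X, y, rcond=None)[0]      # start from OLS
        for _ in range(iters):
            r = y - X @ beta
            s = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust scale (MAD)
            w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))
            beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y * w))
        return beta

    X = np.column_stack([np.ones(20), np.arange(20.0)])
    y = 1.0 + 2.0 * np.arange(20.0)
    y[-1] += 30.0                                        # one gross outlier
    print(huber_irls(X, y))                              # stays near [1, 2]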

Overdetermined system

Consider an overdetermined system Xβ = y of n linear equations in p unknown coefficients, with n > p.
The method of ordinary least squares can be used to find an approximate solution to overdetermined systems.

Multivariate normal distribution

This case arises frequently in statistics; for example, in the distribution of the vector of residuals in the ordinary least squares regression.
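Concretely, with the annihilator matrix M = I − X(XᵀX)⁻¹Xᵀ, the residual vector is a fixed linear map of the normal error vector,

    \hat\varepsilon = M\varepsilon \sim \mathcal{N}\!\left(0, \sigma^2 M\right),

a degenerate multivariate normal distribution, since M is idempotent with rank n − p.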

Generalized least squares

In such cases generalized least squares provides a better alternative than OLS.
In these cases, ordinary least squares and weighted least squares can be statistically inefficient, or even give misleading inferences.
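With a known error covariance matrix Ω, the GLS estimator generalizes the OLS formula to

    \hat\beta_{\text{GLS}} = (X^\top \Omega^{-1} X)^{-1} X^\top \Omega^{-1} y,

which reduces to OLS when Ω = σ²I and to weighted least squares when Ω is diagonal.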

Panel data

This assumption may be violated in the context of time series data, panel data, cluster samples, hierarchical data, repeated-measures data, longitudinal data, and other data with dependencies. If the individual-specific effect μᵢ is unobserved and correlated with at least one of the independent variables, it will cause omitted-variable bias in a standard OLS regression.
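For the fixed-effects panel model y_{it} = x_{it}ᵀβ + μᵢ + ε_{it}, a standard remedy is the within transformation, which subtracts each individual's time average and thereby eliminates μᵢ:

    y_{it} - \bar y_i = (x_{it} - \bar x_i)^\top \beta + (\varepsilon_{it} - \bar\varepsilon_i).

OLS applied to the demeaned data yields the fixed-effects estimator.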

Degrees of freedom (statistics)

The numerator, n − p, is the statistical degrees of freedom.
Many non-standard regression methods, including regularized least squares (e.g., ridge regression), linear smoothers, smoothing splines, and semiparametric regression are not based on ordinary least squares projections, but rather on regularized (generalized and/or penalized) least-squares, and so degrees of freedom defined in terms of dimensionality is generally not useful for these procedures.
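The usual replacement for such procedures is the trace of the hat (smoother) matrix; for OLS this recovers the parameter count p, while for ridge regression with penalty λ it is strictly smaller:

    \operatorname{df} = \operatorname{tr}(H), \qquad H_{\text{OLS}} = X(X^\top X)^{-1}X^\top, \qquad H_{\text{ridge}}(\lambda) = X(X^\top X + \lambda I)^{-1}X^\top.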

Projection (linear algebra)

The residual vector y − Xβ̂ will have the smallest length when y is projected orthogonally onto the linear subspace spanned by the columns of X.
Whereas calculating the fitted value of an ordinary least squares regression requires an orthogonal projection, calculating the fitted value of an instrumental variables regression requires an oblique projection.
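In matrix form, the hat matrix P = X(XᵀX)⁻¹Xᵀ carries out this orthogonal projection: it is symmetric and idempotent, and the fitted values are its image of y,

    \hat y = P y, \qquad P^\top = P, \qquad P^2 = P.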