Linear predictor function

In statistics and in machine learning, a linear predictor function is a linear function (linear combination) of a set of coefficients and explanatory variables (independent variables), whose value is used to predict the outcome of a dependent variable.
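Explicitly, for the i-th data point with p explanatory variables, this function has the standard form

f(i) = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip},

where \beta_0, \dots, \beta_p are the coefficients and x_{i1}, \dots, x_{ip} are the values of the explanatory variables for that data point.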
Related Articles

Perceptron

However, they also occur in various types of linear classifiers (e.g. logistic regression, perceptrons, support vector machines, and linear discriminant analysis), as well as in various other models, such as principal component analysis and factor analysis.
It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.
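As a concrete illustration (not taken from the article; the function and variable names are made up), a perceptron's decision rule can be sketched as follows, with the weights and bias playing the role of the coefficients of the linear predictor function:

```python
import numpy as np

def perceptron_predict(weights, bias, x):
    """Classify x as 1 if the linear predictor is non-negative, else 0."""
    score = np.dot(weights, x) + bias  # the linear predictor function
    return 1 if score >= 0 else 0

# Illustrative usage with made-up weights and a feature vector
print(perceptron_predict(np.array([0.5, -0.2]), 0.1, np.array([1.0, 2.0])))  # 1
```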

Linear regression

This sort of function usually comes up in linear regression, where the coefficients are called regression coefficients. There, the dependent variable is modeled as

y_i = f(i) + \varepsilon_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i,

where \varepsilon_i is a disturbance term or error variable, an unobserved random variable that adds noise to the linear relationship between the dependent variable and the predictor function.
In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data.
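A minimal sketch, assuming NumPy and with made-up coefficient values, of simulating data from such a model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 2
X = rng.normal(size=(n, p))            # explanatory variables
beta = np.array([1.5, -2.0])           # regression coefficients (made up)
beta0 = 0.5                            # intercept (made up)
eps = rng.normal(scale=0.1, size=n)    # disturbance term
y = beta0 + X @ beta + eps             # dependent variable
```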

Logistic regression

However, they also occur in various types of linear classifiers (e.g. logistic regression, perceptrons, support vector machines, and linear discriminant analysis), as well as in various other models, such as principal component analysis and factor analysis.
The basic idea of logistic regression is to use the mechanism already developed for linear regression by modeling the probability p_i using a linear predictor function, i.e. a linear combination of the explanatory variables and a set of regression coefficients that are specific to the model at hand but the same for all trials.
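A minimal sketch (illustrative names; assumes NumPy) of how the linear predictor is turned into a probability by passing it through the logistic function:

```python
import numpy as np

def predicted_probability(beta0, beta, x):
    """Logistic regression: squash the linear predictor through the
    logistic (sigmoid) function to get a probability in (0, 1)."""
    linear_predictor = beta0 + np.dot(beta, x)
    return 1.0 / (1.0 + np.exp(-linear_predictor))

# Illustrative usage with made-up coefficients
print(predicted_probability(0.5, np.array([1.0, -1.0]), np.array([2.0, 1.0])))
```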

Statistics

In statistics and in machine learning, a linear predictor function is a linear function (linear combination) of a set of coefficients and explanatory variables (independent variables), whose value is used to predict the outcome of a dependent variable.

Machine learning

In statistics and in machine learning, a linear predictor function is a linear function (linear combination) of a set of coefficients and explanatory variables (independent variables), whose value is used to predict the outcome of a dependent variable.

Linear function

In statistics and in machine learning, a linear predictor function is a linear function (linear combination) of a set of coefficients and explanatory variables (independent variables), whose value is used to predict the outcome of a dependent variable.

Linear combination

In statistics and in machine learning, a linear predictor function is a linear function (linear combination) of a set of coefficients and explanatory variables (independent variables), whose value is used to predict the outcome of a dependent variable.

Linear classifier

However, they also occur in various types of linear classifiers (e.g. logistic regression, perceptrons, support vector machines, and linear discriminant analysis), as well as in various other models, such as principal component analysis and factor analysis.

Support-vector machine

However, they also occur in various types of linear classifiers (e.g. logistic regression, perceptrons, support vector machines, and linear discriminant analysis), as well as in various other models, such as principal component analysis and factor analysis.

Linear discriminant analysis

However, they also occur in various types of linear classifiers (e.g. logistic regression, perceptrons, support vector machines, and linear discriminant analysis), as well as in various other models, such as principal component analysis and factor analysis.

Principal component analysis

However, they also occur in various types of linear classifiers (e.g. logistic regression, perceptrons, support vector machines, and linear discriminant analysis), as well as in various other models, such as principal component analysis and factor analysis.

Factor analysis

However, they also occur in various types of linear classifiers (e.g. logistic regression, perceptrons, support vector machines, and linear discriminant analysis), as well as in various other models, such as principal component analysis and factor analysis.

Dot product

The linear predictor function can be written compactly in vector notation as

f(i) = \boldsymbol{\beta} \cdot \mathbf{x}_i,

using the notation for a dot product between the two vectors \boldsymbol{\beta} = (\beta_0, \beta_1, \dots, \beta_p) and \mathbf{x}_i = (1, x_{i1}, \dots, x_{ip}).
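In NumPy this is a single dot product; a minimal sketch with made-up numbers:

```python
import numpy as np

beta = np.array([0.5, 1.5, -2.0])   # (beta_0, beta_1, beta_2), made up
x_i = np.array([1.0, 2.0, 3.0])     # (1, x_i1, x_i2): leading 1 for the intercept
f_i = np.dot(beta, x_i)             # dot-product form of the linear predictor
print(f_i)                          # -2.5
```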

Transpose

An equivalent way to write this is

f(i) = \boldsymbol{\beta}^{\mathsf{T}} \mathbf{x}_i,

where \boldsymbol{\beta} and \mathbf{x}_i are assumed to be (p+1)-by-1 column vectors, \boldsymbol{\beta}^{\mathsf{T}} is the matrix transpose of \boldsymbol{\beta} (so \boldsymbol{\beta}^{\mathsf{T}} is a 1-by-(p+1) row vector), and \boldsymbol{\beta}^{\mathsf{T}} \mathbf{x}_i indicates matrix multiplication between the 1-by-(p+1) row vector and the (p+1)-by-1 column vector, producing a 1-by-1 matrix that is taken to be a scalar.
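A minimal sketch (made-up numbers, assumes NumPy) showing that the matrix product yields a 1-by-1 result matching the dot product:

```python
import numpy as np

beta = np.array([[0.5], [1.5], [-2.0]])  # (p+1)-by-1 column vector
x_i = np.array([[1.0], [2.0], [3.0]])    # (p+1)-by-1 column vector
result = beta.T @ x_i                    # 1-by-(p+1) times (p+1)-by-1
print(result.shape)                      # (1, 1): a 1-by-1 matrix
print(result.item())                     # taken to be a scalar: -2.5
```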

Row and column vectors

In the matrix form f(i) = \boldsymbol{\beta}^{\mathsf{T}} \mathbf{x}_i, \boldsymbol{\beta} and \mathbf{x}_i are assumed to be (p+1)-by-1 column vectors, \boldsymbol{\beta}^{\mathsf{T}} is the matrix transpose of \boldsymbol{\beta} (so \boldsymbol{\beta}^{\mathsf{T}} is a 1-by-(p+1) row vector), and \boldsymbol{\beta}^{\mathsf{T}} \mathbf{x}_i indicates matrix multiplication between the 1-by-(p+1) row vector and the (p+1)-by-1 column vector, producing a 1-by-1 matrix that is taken to be a scalar.

Matrix multiplication

In the matrix form f(i) = \boldsymbol{\beta}^{\mathsf{T}} \mathbf{x}_i, \boldsymbol{\beta} and \mathbf{x}_i are assumed to be (p+1)-by-1 column vectors, \boldsymbol{\beta}^{\mathsf{T}} is the matrix transpose of \boldsymbol{\beta} (so \boldsymbol{\beta}^{\mathsf{T}} is a 1-by-(p+1) row vector), and \boldsymbol{\beta}^{\mathsf{T}} \mathbf{x}_i indicates matrix multiplication between the 1-by-(p+1) row vector and the (p+1)-by-1 column vector, producing a 1-by-1 matrix that is taken to be a scalar.

Scalar (mathematics)

In the matrix form f(i) = \boldsymbol{\beta}^{\mathsf{T}} \mathbf{x}_i, \boldsymbol{\beta} and \mathbf{x}_i are assumed to be (p+1)-by-1 column vectors, \boldsymbol{\beta}^{\mathsf{T}} is the matrix transpose of \boldsymbol{\beta} (so \boldsymbol{\beta}^{\mathsf{T}} is a 1-by-(p+1) row vector), and \boldsymbol{\beta}^{\mathsf{T}} \mathbf{x}_i indicates matrix multiplication between the 1-by-(p+1) row vector and the (p+1)-by-1 column vector, producing a 1-by-1 matrix that is taken to be a scalar.

Random variable

where \varepsilon_i is a disturbance term or error variable, an unobserved random variable that adds noise to the linear relationship between the dependent variable and the predictor function.

Design matrix

The matrix X is known as the design matrix and encodes all known information about the independent variables.
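A minimal sketch (illustrative data, assumes NumPy) of assembling a design matrix with a leading column of ones for the intercept term:

```python
import numpy as np

x_raw = np.array([[2.0, 3.0],
                  [1.0, 0.5],
                  [4.0, 1.0]])                       # one row per data point
X = np.column_stack([np.ones(len(x_raw)), x_raw])    # prepend intercept column
print(X.shape)                                       # (3, 3): n rows, p+1 columns
```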

Dependent and independent variables

In statistics and in machine learning, a linear predictor function is a linear function (linear combination) of a set of coefficients and explanatory variables (independent variables), whose value is used to predict the outcome of a dependent variable. The matrix X is known as the design matrix and encodes all known information about the independent variables.

Least squares

This makes it possible to find optimal coefficients through the method of least squares using simple matrix operations.
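Concretely, the least-squares estimate is \hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y}. A minimal sketch (function name and data are illustrative) that solves the normal equations rather than forming an explicit inverse, which is numerically preferable:

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares via the normal equations."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# Tiny made-up example: y = 1 + 2*x, fit exactly
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
print(ols_fit(X, y))   # approximately [1., 2.]
```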

Moore–Penrose inverse

The matrix (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}} is known as the Moore–Penrose pseudoinverse of X.
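NumPy provides this directly; a minimal sketch (illustrative data) comparing the pseudoinverse route with the standard least-squares solver, which agree when X has full column rank:

```python
import numpy as np

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
beta_hat = np.linalg.pinv(X) @ y                     # Moore-Penrose pseudoinverse
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares solver
print(beta_hat, beta_lstsq)                          # both approximately [1., 2.]
```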

Invertible matrix

The use of the matrix inverse in this formula requires that X be of full rank, i.e. that there is no perfect multicollinearity among the explanatory variables (no explanatory variable can be perfectly predicted from the others).
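A minimal sketch (made-up data, assumes NumPy) of how perfect multicollinearity shows up as a rank deficiency:

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
X = np.column_stack([np.ones(4), x1, 2 * x1])   # third column = 2 * second
print(np.linalg.matrix_rank(X))                 # 2, not 3: X is not full rank
# X.T @ X is singular here, so the explicit inverse fails;
# np.linalg.pinv(X) still returns a (minimum-norm) solution.
```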