Statistical classification

In machine learning and statistics, classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known.

Machine learning

In machine learning, the observations are often known as instances, the explanatory variables are termed features (grouped into a feature vector), and the possible categories to be predicted are classes.
Classification algorithms and regression algorithms are types of supervised learning.
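As a minimal sketch of this supervised setup (the rule and all names here are illustrative, not a specific library's API): a classifier is fit on labeled training data and then predicts a class for new observations. A nearest-centroid rule stands in for the learning algorithm.

```python
from math import dist  # Euclidean distance, Python 3.8+

def fit_centroids(training_set):
    """Average the feature vectors of each class in the labeled training set."""
    sums, counts = {}, {}
    for features, label in training_set:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def classify(centroids, features):
    """Assign a new observation to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: dist(features, centroids[label]))

training = [([1.0, 1.0], "a"), ([1.2, 0.8], "a"), ([5.0, 5.0], "b"), ([4.8, 5.2], "b")]
centroids = fit_centroids(training)
print(classify(centroids, [0.9, 1.1]))  # nearest centroid belongs to class "a"
```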

Pattern recognition

Classification is an example of pattern recognition: it attempts to assign each input value to one of a given set of classes (for example, determining whether a given email is "spam" or "non-spam").

Cluster analysis

The corresponding unsupervised procedure is known as clustering, and involves grouping data into categories based on some measure of inherent similarity or distance.
Besides the term clustering, there are a number of terms with similar meanings, including automatic classification, numerical taxonomy, botryology (from Greek βότρυς "grape"), typological analysis, and community detection.
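A toy sketch of clustering, assuming k-means (Lloyd's algorithm) as the grouping procedure and hand-picked initial centers — points are grouped purely by distance, with no labels involved:

```python
from math import dist

def kmeans(points, centers, iterations=10):
    """Lloyd's algorithm: assign each point to its nearest center,
    then recompute each center as the mean of its assigned points."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: dist(p, centers[i]))
            clusters[nearest].append(p)
        centers = [
            [sum(coords) / len(cluster) for coords in zip(*cluster)] if cluster else c
            for cluster, c in zip(clusters, centers)
        ]
    return centers, clusters

points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centers, clusters = kmeans(points, centers=[(0, 0), (5, 5)])
# the two natural groups (near the origin, near (9, 9)) are recovered
```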

Logistic regression

In statistics, where classification is often done with logistic regression or a similar procedure, the properties of observations are termed explanatory variables (or independent variables, regressors, etc.), and the categories to be predicted are known as outcomes, which are considered to be possible values of the dependent variable.
The logistic model itself simply models the probability of output in terms of input and does not perform statistical classification (it is not a classifier). It can, however, be used to make a classifier: choose a cutoff value and classify inputs with probability greater than the cutoff as one class and those below the cutoff as the other. This is a common way to make a binary classifier.
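The cutoff construction can be sketched as follows (the weight, bias, and cutoff values are hypothetical, standing in for fitted parameters):

```python
from math import exp

def logistic_probability(x, w, b):
    """Logistic model: P(y = 1 | x) = 1 / (1 + exp(-(w*x + b)))."""
    return 1.0 / (1.0 + exp(-(w * x + b)))

def binary_classifier(x, w, b, cutoff=0.5):
    """Threshold the modeled probability to obtain a binary classifier."""
    return 1 if logistic_probability(x, w, b) > cutoff else 0

# With hypothetical fitted parameters w=2, b=-1:
print(binary_classifier(1.0, w=2.0, b=-1.0))  # probability ~0.73 > 0.5 -> class 1
print(binary_classifier(0.0, w=2.0, b=-1.0))  # probability ~0.27 <= 0.5 -> class 0
```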

Binary classification

Classification can be thought of as two separate problems: binary classification, which involves only two classes, and multiclass classification, which involves three or more.

Probabilistic classification

Some Bayesian procedures involve the calculation of group membership probabilities: these can be viewed as providing a more informative outcome of a data analysis than a simple attribution of a single group-label to each new observation.
In machine learning, a probabilistic classifier is a classifier that is able to predict, given an observation of an input, a probability distribution over a set of classes, rather than only outputting the most likely class that the observation should belong to.
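The distinction can be sketched as follows (the score values are hypothetical): a probabilistic classifier exposes the full distribution over classes, while a plain classifier reports only its mode.

```python
def predict_proba(scores):
    """Turn nonnegative per-class scores into a probability distribution."""
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

def predict(scores):
    """A plain classifier outputs only the most likely class."""
    distribution = predict_proba(scores)
    return max(distribution, key=distribution.get)

probs = predict_proba({"spam": 3.0, "ham": 1.0})
# probs is {"spam": 0.75, "ham": 0.25}; predict(...) collapses this to "spam"
```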

Sequence labeling

Other related supervised tasks include regression, which assigns a real-valued output to each input; sequence labeling, which assigns a class to each member of a sequence of values (for example, part-of-speech tagging, which assigns a part of speech to each word in an input sentence); and parsing, which assigns a parse tree to an input sentence, describing its syntactic structure.
Sequence labeling can be treated as a set of independent classification tasks, one per member of the sequence.
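Treating the sequence as independent per-member classifications can be sketched as below; the lexicon-lookup tagger is a hypothetical stand-in for any per-token classifier.

```python
def label_sequence(tokens, classify_token):
    """Sequence labeling as one independent classification per member."""
    return [classify_token(tok) for tok in tokens]

# Hypothetical per-token classifier: a tiny lexicon lookup for POS tagging.
LEXICON = {"the": "DET", "cat": "NOUN", "sat": "VERB"}
tags = label_sequence(["the", "cat", "sat"], lambda t: LEXICON.get(t, "UNK"))
# tags == ["DET", "NOUN", "VERB"]
```

Note that this independence assumption ignores context between members, which dedicated sequence models exploit.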

Feature (machine learning)

Choosing informative, discriminating and independent features is a crucial step for effective algorithms in pattern recognition, classification and regression.
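As a sketch of feature extraction (the particular features chosen here are illustrative, not recommended ones): a raw observation is mapped to a fixed-length feature vector that a classifier can consume.

```python
def extract_features(email_text):
    """Map a raw observation (an email) to a fixed-length feature vector."""
    return [
        len(email_text),                       # message length
        email_text.lower().count("free"),      # a hand-picked, possibly discriminating word
        sum(c.isdigit() for c in email_text),  # digit count
    ]

vector = extract_features("Win 100 FREE credits")
# vector == [20, 1, 3]
```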

Training, validation, and test sets

A training dataset is a dataset of examples used for learning, that is to fit the parameters (e.g., weights) of, for example, a classifier.
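A common way to obtain training and test sets is to shuffle the labeled examples and split them; a minimal sketch (the 80/20 split fraction is a conventional choice, not a requirement):

```python
import random

def train_test_split(examples, train_fraction=0.8, seed=0):
    """Shuffle labeled examples, then split into training and test sets."""
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed for reproducibility
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

data = [(i, i % 2) for i in range(10)]  # (observation, label) pairs
train, test = train_test_split(data)
```

Parameters are fit on `train` only; `test` is held out to estimate generalization.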

Multiclass classification

In machine learning, multiclass or multinomial classification is the problem of classifying instances into one of three or more classes.
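One standard reduction of multiclass to binary classification is one-vs-rest: train one binary scorer per class and pick the class whose scorer responds most strongly. A sketch with hypothetical hand-written scorers standing in for trained binary classifiers:

```python
def one_vs_rest(binary_scorers, x):
    """Multiclass via one binary scorer per class: each scorer rates
    'its' class against the rest; the highest-scoring class wins."""
    return max(binary_scorers, key=lambda label: binary_scorers[label](x))

scorers = {  # hypothetical per-class scoring functions
    "small":  lambda x: -abs(x - 1),
    "medium": lambda x: -abs(x - 5),
    "large":  lambda x: -abs(x - 9),
}
print(one_vs_rest(scorers, 6))  # "medium" scores highest near x = 5
```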

Linear classifier

In the field of machine learning, the goal of statistical classification is to use an object's characteristics to identify which class (or group) it belongs to. Algorithms that make this decision based on a linear combination of the features are known as linear classifiers.
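A minimal sketch (weights and bias hypothetical): a linear classifier decides by the sign of the weighted sum w · x + b.

```python
def linear_classify(weights, bias, features):
    """A linear classifier: the class is decided by the sign of w . x + b."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score >= 0 else -1

# Hypothetical parameters: the decision boundary is the line x1 = x2.
print(linear_classify([1.0, -1.0], 0.0, [3.0, 1.0]))  # score  2 -> class  1
print(linear_classify([1.0, -1.0], 0.0, [1.0, 3.0]))  # score -2 -> class -1
```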

Multinomial logistic regression

In statistics, multinomial logistic regression is a classification method that generalizes logistic regression to multiclass problems, i.e. with more than two possible discrete outcomes.
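The generalization can be sketched as follows (weights and biases hypothetical): each class gets its own linear score, and the softmax function turns the scores into class probabilities.

```python
from math import exp

def softmax(scores):
    """Map real-valued scores to a probability distribution."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def multinomial_predict(weight_rows, biases, features):
    """Score each class with its own linear function, then apply softmax."""
    scores = [
        sum(w * x for w, x in zip(row, features)) + b
        for row, b in zip(weight_rows, biases)
    ]
    return softmax(scores)

# Three classes, two features, hypothetical parameters:
probs = multinomial_predict([[1, 0], [0, 1], [0, 0]], [0, 0, 0], [2, 0])
# class 0 has the largest score (2), so it receives the largest probability
```

With two classes this reduces to ordinary logistic regression.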

Support-vector machine

In machine learning, support-vector machines (SVMs, also support-vector networks) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis.

Linear discriminant analysis

Early work on statistical classification was undertaken by Fisher, in the context of two-group problems, leading to Fisher's linear discriminant function as the rule for assigning a group to a new observation.
The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.

K-nearest neighbors algorithm

In pattern recognition, the k-nearest neighbors algorithm (k-NN) is a non-parametric method used for classification and regression.
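A compact sketch of k-NN classification (the training data and k = 3 are illustrative): the query is assigned the majority class among its k nearest training examples.

```python
from collections import Counter
from math import dist

def knn_classify(training_set, features, k=3):
    """Vote among the k training examples nearest to the query point."""
    neighbors = sorted(training_set, key=lambda ex: dist(ex[0], features))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

training = [([0, 0], "a"), ([0, 1], "a"), ([1, 0], "a"), ([9, 9], "b"), ([9, 8], "b")]
print(knn_classify(training, [1, 1]))  # the three nearest neighbors are all "a"
```

Being non-parametric, the method stores the training set itself rather than fitting parameters.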

Quadratic classifier

A quadratic classifier is used in machine learning and statistical classification to separate measurements of two or more classes of objects or events by a quadric surface.

Random forest

Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees.
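The aggregation step can be sketched as follows; the decision stumps below are hypothetical stand-ins for trained decision trees (real random forests also randomize the data and features each tree sees).

```python
from collections import Counter
from statistics import fmean

def forest_classify(trees, x):
    """Ensemble classification: output the mode of the trees' classes."""
    return Counter(tree(x) for tree in trees).most_common(1)[0][0]

def forest_regress(trees, x):
    """Ensemble regression: output the mean of the trees' predictions."""
    return fmean(tree(x) for tree in trees)

# Hypothetical stand-ins for trained decision trees.
stumps = [lambda x: "yes" if x > 3 else "no",
          lambda x: "yes" if x > 5 else "no",
          lambda x: "yes" if x > 7 else "no"]
print(forest_classify(stumps, 6))  # two of the three stumps vote "yes"
```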

Least-squares support-vector machine

Least-squares support-vector machines (LS-SVM) are least-squares versions of support-vector machines (SVM), which are a set of related supervised learning methods that analyze data and recognize patterns, and which are used for classification and regression analysis.

Boosting (machine learning)

In boosting, a weak learner is defined to be a classifier that is only slightly correlated with the true classification (it can label examples better than random guessing).

Precision and recall

The measures precision and recall are popular metrics used to evaluate the quality of a classification system.
In pattern recognition, information retrieval and classification (machine learning), precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of the total amount of relevant instances that were actually retrieved.
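These definitions translate directly into counts of true positives, false positives, and false negatives; a sketch for binary labels where 1 marks the relevant (positive) class:

```python
def precision_recall(predicted, actual):
    """Precision and recall for the positive class (label 1)."""
    tp = sum(p == 1 and a == 1 for p, a in zip(predicted, actual))
    fp = sum(p == 1 and a == 0 for p, a in zip(predicted, actual))
    fn = sum(p == 0 and a == 1 for p, a in zip(predicted, actual))
    precision = tp / (tp + fp)  # fraction of retrieved items that are relevant
    recall = tp / (tp + fn)     # fraction of relevant items that were retrieved
    return precision, recall

p, r = precision_recall(predicted=[1, 1, 1, 0], actual=[1, 0, 1, 1])
# tp=2, fp=1, fn=1 -> precision 2/3, recall 2/3
```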

Learning vector quantization

In computer science, learning vector quantization (LVQ) is a prototype-based supervised classification algorithm.

Naive Bayes classifier

The naive Bayes classifier combines a naive Bayes probability model, which assumes the features are conditionally independent given the class, with a decision rule.
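A sketch of the maximum a posteriori decision rule on a fitted model (all the probability values below are hypothetical, standing in for estimates from training data):

```python
def naive_bayes_classify(priors, likelihoods, features):
    """Decision rule: pick the class maximizing prior x product of
    per-feature likelihoods (the 'naive' independence assumption)."""
    def score(label):
        s = priors[label]
        for i, value in enumerate(features):
            s *= likelihoods[label][i][value]
        return s
    return max(priors, key=score)

# Hypothetical fitted probabilities for two binary features.
priors = {"spam": 0.4, "ham": 0.6}
likelihoods = {
    "spam": [{0: 0.2, 1: 0.8}, {0: 0.3, 1: 0.7}],
    "ham":  [{0: 0.9, 1: 0.1}, {0: 0.8, 1: 0.2}],
}
print(naive_bayes_classify(priors, likelihoods, [1, 1]))  # "spam" scores higher
```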

Mahalanobis distance

Later work for the multivariate normal distribution allowed the classifier to be nonlinear: several classification rules can be derived based on different adjustments of the Mahalanobis distance, with a new observation being assigned to the group whose centre has the lowest adjusted distance from the observation.
Mahalanobis distance is widely used in cluster analysis and classification techniques.
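For the two-dimensional case the distance can be computed directly, inverting the 2x2 covariance matrix explicitly (a sketch; real code would use a linear-algebra library):

```python
from math import sqrt

def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance sqrt((x - mu)^T S^-1 (x - mu)) in 2-D,
    with the 2x2 covariance matrix S inverted by the adjugate formula."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mean[0], x[1] - mean[1]]
    y = [inv[0][0] * dx[0] + inv[0][1] * dx[1],
         inv[1][0] * dx[0] + inv[1][1] * dx[1]]
    return sqrt(dx[0] * y[0] + dx[1] * y[1])

# With an identity covariance it reduces to Euclidean distance:
print(mahalanobis_2d([3, 4], [0, 0], [[1, 0], [0, 1]]))  # 5.0
```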

Receiver operating characteristic

More recently, receiver operating characteristic (ROC) curves have been used to evaluate the tradeoff between true- and false-positive rates of classification algorithms.
A classification model (classifier or diagnosis) is a mapping of instances onto certain classes/groups.
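Each ROC point is the (false-positive rate, true-positive rate) pair obtained at one classification threshold; a sketch with hypothetical scores and labels:

```python
def roc_points(scores, labels, thresholds):
    """True- and false-positive rates at each classification threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in thresholds:
        tp = sum(s >= t and y == 1 for s, y in zip(scores, labels))
        fp = sum(s >= t and y == 0 for s, y in zip(scores, labels))
        points.append((fp / neg, tp / pos))  # (FPR, TPR)
    return points

points = roc_points(scores=[0.9, 0.8, 0.4, 0.2],
                    labels=[1, 1, 0, 0],
                    thresholds=[0.5, 0.1])
# perfect separation: (0.0, 1.0) at threshold 0.5; (1.0, 1.0) at threshold 0.1
```

Sweeping the threshold over all score values traces out the full ROC curve.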