Supervised learning

Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs.
Related Articles

Machine learning

In supervised learning, the algorithm builds a mathematical model from a set of data that contains both the inputs and the desired outputs.
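As an illustration only, the sketch below shows the idea of learning a mapping from example input-output pairs using a one-nearest-neighbour rule; the data and function names are made up for this example.

```python
def fit_1nn(examples):
    """'Training' a 1-nearest-neighbour model simply stores the labelled pairs."""
    return list(examples)

def predict_1nn(model, x):
    """Map a new input to the output of the closest stored training input."""
    nearest_x, nearest_y = min(model, key=lambda pair: abs(pair[0] - x))
    return nearest_y

# Hypothetical example input-output pairs: hours studied -> pass (1) / fail (0).
training_pairs = [(1.0, 0), (2.0, 0), (3.5, 1), (5.0, 1)]
model = fit_1nn(training_pairs)
print(predict_1nn(model, 4.2))  # -> 1 (closest training input is 3.5)
```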

Support-vector machine

In machine learning, support-vector machines (SVMs, also support-vector networks) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis.
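A minimal sketch of supervised training with an SVM, assuming scikit-learn is installed; the tiny 2-D dataset is invented for illustration.

```python
from sklearn.svm import SVC

# Made-up 2-D inputs with binary class labels.
X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
y = [0, 0, 1, 1]

clf = SVC(kernel="linear")        # linear-kernel support-vector classifier
clf.fit(X, y)                     # supervised training on labelled pairs
print(clf.predict([[0.8, 0.9]]))  # expected: [1]
```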

Generalization error

Several algorithms can identify noisy training examples, and removing the suspected noisy examples prior to training has been shown to decrease generalization error with statistical significance.
In supervised learning applications in machine learning and statistical learning theory, generalization error (also known as the out-of-sample error ) is a measure of how accurately an algorithm is able to predict outcome values for previously unseen data.
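A common way to estimate generalization error is to hold out data the learner never sees during training; the rough sketch below assumes scikit-learn and a synthetic dataset.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
train_error = 1 - clf.score(X_tr, y_tr)  # in-sample error (often near zero here)
test_error = 1 - clf.score(X_te, y_te)   # estimate of out-of-sample error
print(train_error, test_error)
```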

Deterministic noise

In (supervised) machine learning, specifically when learning from data, there are situations in which the data values cannot be modeled. In such a situation, the part of the target function that cannot be modeled "corrupts" the training data; this phenomenon has been called deterministic noise.

Anomaly detection

In practice, there are several approaches to alleviating noise in the output values, such as early stopping to prevent overfitting, as well as detecting and removing noisy training examples prior to training the supervised learning algorithm.
In supervised learning, removing the anomalous data from the dataset often results in a statistically significant increase in accuracy.
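As a sketch of that workflow (not a prescribed method), one might flag suspected anomalies with an unsupervised detector and drop them before fitting the supervised model; this assumes scikit-learn and uses synthetic data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[:5] += 8.0  # inject a few gross outliers into the training inputs

inlier = IsolationForest(random_state=0).fit_predict(X) == 1  # -1 marks outliers
clf = LogisticRegression().fit(X[inlier], y[inlier])          # train on cleaned data
print(inlier.sum(), "examples kept out of", len(X))
```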

Dependent and independent variables

One issue in supervised learning is the degree of noise in the desired output values (the supervisory target variables).
The target variable is used in supervised learning algorithms but not in unsupervised learning.

Multilayer perceptron

An MLP utilizes a supervised learning technique called backpropagation for training.
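A small sketch of that setup, assuming scikit-learn's MLPClassifier (which optimizes the weights with gradients computed by backpropagation); the dataset is synthetic.

```python
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X, y)            # weights updated from backpropagated gradients
print(mlp.score(X, y))   # training accuracy on the toy data
```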

Empirical risk minimization

There are two basic approaches to choosing the learned function (f or g): empirical risk minimization and structural risk minimization.
Empirical risk minimization is formulated in a general setting that covers many supervised learning problems.
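To make the idea concrete, here is a toy sketch of empirical risk minimization over a small, made-up hypothesis space of threshold classifiers; the data and loss are illustrative only.

```python
def zero_one_loss(y_true, y_pred):
    return 0.0 if y_true == y_pred else 1.0

def empirical_risk(h, data):
    """Average loss of hypothesis h over the training sample."""
    return sum(zero_one_loss(y, h(x)) for x, y in data) / len(data)

# Made-up training pairs and a small hypothesis space of threshold rules.
data = [(0.5, 0), (1.2, 0), (2.7, 1), (3.1, 1)]
hypotheses = [lambda x, t=t: int(x > t) for t in (0.0, 1.0, 2.0, 3.0)]

best = min(hypotheses, key=lambda h: empirical_risk(h, data))
print(empirical_risk(best, data))  # -> 0.0 (the threshold at 2.0 fits the sample)
```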

Naive Bayes classifier

Naive Bayes and linear discriminant analysis are joint probability models, whereas logistic regression is a conditional probability model.
For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting.
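For instance, a Gaussian naive Bayes classifier can be fit with closed-form parameter estimates; a minimal sketch assuming scikit-learn and its bundled iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
nb = GaussianNB().fit(X, y)   # per-class means/variances, no iterative optimization
print(nb.predict(X[:3]))      # predictions for the first three flowers
```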

Discriminative model

Many common supervised training methods are discriminative training methods, because they seek to find a function g that discriminates well between the different output values (see discriminative model).
Discriminative models, also referred to as conditional models, are a class of models used in statistical classification, especially in supervised machine learning.
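As a rough illustration, a discriminative model such as logistic regression estimates the conditional probability P(y | x) directly; the sketch assumes scikit-learn and synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, random_state=0)
clf = LogisticRegression().fit(X, y)   # models P(y | x), not the joint P(x, y)
print(clf.predict_proba(X[:2]))        # estimated conditional class probabilities
```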

Learning to rank

Learning to rank or machine-learned ranking (MLR) is the application of machine learning, typically supervised, semi-supervised or reinforcement learning, in the construction of ranking models for information retrieval systems.

Structured prediction

Structured prediction or structured (output) learning is an umbrella term for supervised machine learning techniques that involve predicting structured objects, rather than scalar discrete or real values.

Artificial neural network

The three major learning paradigms are supervised learning, unsupervised learning and reinforcement learning.

Backpropagation

In machine learning, specifically deep learning, backpropagation (backprop, BP) is an algorithm widely used in the training of feedforward neural networks for supervised learning; generalizations exist for other artificial neural networks (ANNs), and for functions generally.
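The following is only a compact sketch of backpropagation for a one-hidden-layer network on the XOR problem, written with NumPy; the layer sizes, learning rate, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the squared-error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round().ravel())  # should approach [0. 1. 1. 0.]
```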

Semi-supervised learning

Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data).
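As a sketch of that middle ground, assuming scikit-learn's LabelPropagation (where unlabeled points are marked with -1), a model can be trained with only a fraction of the labels; the masking fraction here is arbitrary.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelPropagation

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

y_partial = y.copy()
y_partial[rng.random(len(y)) < 0.7] = -1      # hide roughly 70% of the labels

model = LabelPropagation().fit(X, y_partial)  # uses labelled and unlabelled points
print(model.score(X, y))                      # accuracy against the full labels
```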

Boosting (machine learning)

In machine learning, boosting is an ensemble meta-algorithm used primarily to reduce bias, and also variance, in supervised learning, and a family of machine learning algorithms that convert weak learners to strong ones.
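A brief sketch of boosting with scikit-learn's AdaBoostClassifier, whose default weak learner is a depth-1 decision tree (a stump); the dataset is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=300, random_state=0)

# Each round fits a weak learner to reweighted data, then combines the rounds.
boosted = AdaBoostClassifier(n_estimators=50, random_state=0)
boosted.fit(X, y)
print(boosted.score(X, y))
```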

Weak supervision

Weak supervision is a branch of machine learning in which noisy, limited, or imprecise sources are used to provide a supervision signal for labeling large amounts of training data in a supervised learning setting.

Learning classifier system

Learning classifier systems, or LCS, are a paradigm of rule-based machine learning methods that combine a discovery component (typically a genetic algorithm) with a learning component (performing either supervised learning, reinforcement learning, or unsupervised learning).

Ensemble learning

Supervised learning algorithms perform the task of searching through a hypothesis space to find a suitable hypothesis that will make good predictions for a particular problem.

Linear regression

The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties.
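A minimal sketch of fitting a linear regression by ordinary least squares with NumPy; the single-feature data below is made up.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])      # roughly y = 2x plus noise

A = np.column_stack([x, np.ones_like(x)])    # design matrix [x, 1]
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
print(slope, intercept)                      # close to 2 and 0
```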

Proaftn

Proaftn is a fuzzy classification method that belongs to the class of supervised learning algorithms.

Pattern recognition

Pattern recognition systems are in many cases trained from labeled "training" data (supervised learning), but when no labeled data are available, other algorithms can be used to discover previously unknown patterns (unsupervised learning).

Computational learning theory

Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning.

Unsupervised learning

It is one of the three main categories of machine learning, along with supervised and reinforcement learning.

List of datasets for machine-learning research

High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data.