Receiver operating characteristic

A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied.
Related Articles

Type I and type II errors

Type I error, false-positive, false positive
It can also be thought of as a plot of the statistical power as a function of the Type I error rate of the decision rule (when the performance is calculated from just a sample of the population, the plotted quantities can be regarded as estimators of these rates).
The quality of the test is independent of the cut-off value; it is determined by the shape and separation of the distributions of results for the positives and the negatives.

Biometrics

biometric, biometric authentication, biometric data
ROC analysis has since been used for many decades in medicine, radiology, biometrics, forecasting of natural hazards, meteorology, model performance assessment, and other areas, and is increasingly used in machine learning and data mining research.

Precision and recall

precision, recall, F-measure
The true-positive rate is also known as sensitivity, recall or probability of detection in machine learning.
Recall and fall-out, or equivalently the true positive rate and the false positive rate, are frequently plotted against each other as ROC curves and provide a principled mechanism to explore operating point tradeoffs.

Data mining

data-mining, datamining, knowledge discovery in databases
ROC analysis has since been used for many decades in medicine, radiology, biometrics, forecasting of natural hazards, meteorology, model performance assessment, and other areas, and is increasingly used in machine learning and data mining research.
Several statistical methods may be used to evaluate the algorithm, such as ROC curves.

Machine learning

machine-learning, learning, statistical learning
ROC analysis has since been used for many decades in medicine, radiology, biometrics, forecasting of natural hazards, meteorology, model performance assessment, and other areas, and is increasingly used in machine learning and data mining research. The true-positive rate is also known as sensitivity, recall or probability of detection in machine learning.
TOC shows the numerators and denominators of the previously mentioned rates, and thus provides more information than the commonly used Receiver Operating Characteristic (ROC) and its associated Area Under the Curve (AUC).

Sensitivity and specificity

sensitivity, specificity, sensitive
The true-positive rate is also known as sensitivity, recall or probability of detection in machine learning. The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.
This trade-off can be represented graphically using a receiver operating characteristic curve.
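As a rough sketch of that construction (the scores and labels below are invented for illustration and are not from the article), one can sweep the discrimination threshold over the classifier's scores and record one (FPR, TPR) point per threshold:

```python
# Illustrative sketch only: the scores and labels are made up for this example.
# Sweeping the discrimination threshold gives one (FPR, TPR) point per threshold.
def roc_points(scores, labels):
    """Return (false positive rate, true positive rate) pairs, one per threshold."""
    thresholds = sorted(set(scores), reverse=True)
    P = sum(labels)                       # actual positives
    N = len(labels) - P                   # actual negatives
    points = [(0.0, 0.0)]                 # threshold above every score: nothing predicted positive
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / N, tp / P))
    return points

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    1,   0,   0]
print(roc_points(scores, labels))        # ends at (1.0, 1.0) at the lowest threshold
```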

Youden's J statistic

Informedness, Youden's index
Whereas ROC AUC varies between 0 and 1 — with an uninformative classifier yielding 0.5 — the alternative measures known as Informedness, Certainty and Gini Coefficient (in the single parameterization or single system case) all have the advantage that 0 represents chance performance whilst 1 represents perfect performance, and −1 represents the "perverse" case of full informedness always giving the wrong response.
Youden's index is often used in conjunction with receiver operating characteristic (ROC) analysis.
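A minimal sketch of how the index is used with an ROC analysis, assuming invented scores and labels: Youden's J (informedness) at a threshold is J = sensitivity + specificity − 1 = TPR − FPR, and the threshold with the largest J is a common choice of cut-off.

```python
# Hedged sketch: Youden's J at each threshold is J = TPR - FPR
# (0 = chance performance, 1 = perfect, -1 = always wrong); data are illustrative.
def best_youden_threshold(scores, labels):
    P = sum(labels)
    N = len(labels) - P
    best_t, best_j = None, -1.0
    for t in sorted(set(scores), reverse=True):
        tpr = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1) / P
        fpr = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0) / N
        j = tpr - fpr
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

print(best_youden_threshold([0.9, 0.8, 0.7, 0.6, 0.4, 0.3], [1, 1, 0, 1, 0, 0]))
```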

Statistical classification

classification, classifier, classifiers
A classification model (classifier or diagnosis) is a mapping of instances onto certain classes/groups.
More recently, receiver operating characteristic (ROC) curves have been used to evaluate the tradeoff between true- and false-positive rates of classification algorithms.

Mann–Whitney U test

Mann–Whitney U, Wilcoxon rank-sum test, Mann-Whitney U test
It can further be shown that the AUC is closely related to the Mann–Whitney U, which tests whether positives are ranked higher than negatives.
The U statistic is equivalent to the area under the receiver operating characteristic curve (AUC) that can be readily calculated.
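A small sketch of that rank-based view, with made-up data: the AUC equals the fraction of positive-negative pairs in which the positive is scored higher than the negative (ties counted as one half), i.e. U divided by the number of such pairs.

```python
# Hedged sketch: AUC as the probability that a randomly chosen positive is scored
# above a randomly chosen negative, which is U / (P * N) for the Mann-Whitney U.
def auc_mann_whitney(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    u = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return u / (len(pos) * len(neg))

print(auc_mann_whitney([0.9, 0.8, 0.7, 0.6, 0.4, 0.3], [1, 1, 0, 1, 0, 0]))  # 8/9 ~ 0.889
```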

Receiver Operating Characteristic Curve Explorer and Tester

ROCCET
Receiver Operating Characteristic Curve Explorer and Tester (ROCCET) is an open-access web server for performing biomarker analysis using ROC (Receiver Operating Characteristic) curve analyses on metabolomic data sets.

Binary classification

binary classifier, binary, binary categorization
A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied.

Sensitivity index

sensitivity index, d′ (d prime)
Another variable used is d′ (d prime) (discussed above in "Other measures"), which can easily be expressed in terms of z-values.
d′ can be related to the area under the receiver operating characteristic curve, or AUC, via AUC = Φ(d′/√2), where Φ is the standard normal cumulative distribution function.
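A short sketch of that relation under the usual equal-variance Gaussian assumption, evaluating Φ with the standard library:

```python
# Hedged sketch of AUC = Phi(d' / sqrt(2)), with Phi the standard normal CDF.
from statistics import NormalDist

def auc_from_d_prime(d_prime):
    return NormalDist().cdf(d_prime / 2 ** 0.5)

print(auc_from_d_prime(1.0))   # about 0.76 for d' = 1
```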

Brier score

Brier
Nonetheless, the coherence of AUC as a measure of aggregated classification performance has been vindicated, in terms of a uniform rate distribution, and AUC has been linked to a number of other performance metrics such as the Brier score.
The second term is known as refinement; it is an aggregation of resolution and uncertainty, and is related to the area under the ROC curve.
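For reference, a tiny sketch of the Brier score itself, with made-up forecasts and outcomes: it is the mean squared difference between the predicted probabilities and the observed 0/1 outcomes.

```python
# Illustrative sketch (probabilities and outcomes are invented for this example).
def brier_score(probs, outcomes):
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

print(brier_score([0.9, 0.2, 0.7, 0.4], [1, 0, 1, 0]))   # 0.075
```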

Gini coefficient

Gini index, Gini ratio, Gini
Whereas ROC AUC varies between 0 and 1 — with an uninformative classifier yielding 0.5 — the alternative measures known as Informedness, Certainty and Gini Coefficient (in the single parameterization or single system case) all have the advantage that 0 represents chance performance whilst 1 represents perfect performance, and −1 represents the "perverse" case of full informedness always giving the wrong response. The AUC is related to the Gini coefficient (G_1) by the formula G_1 = 2·AUC − 1.
The Gini coefficient is sometimes alternatively defined as twice the area between the receiver operating characteristic (ROC) curve and its diagonal, in which case the AUC (Area Under the ROC Curve) measure of performance is given by AUC = (G_1 + 1)/2.
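A one-line sketch of that relation (the AUC value is chosen purely for illustration):

```python
# Sketch of the stated relation: G1 = 2 * AUC - 1, equivalently AUC = (G1 + 1) / 2.
def gini_from_auc(auc):
    return 2.0 * auc - 1.0

print(gini_from_auc(0.8))   # roughly 0.6
```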

Detection theory

signal detection theory, signal detection, signal recovery
The ROC curve was first used during World War II for the analysis of radar signals before it was employed in signal detection theory.
There are also non-parametric measures, such as the area under the ROC-curve.

Total operating characteristic

The Total Operating Characteristic (TOC) also characterizes diagnostic ability while revealing more information than the ROC.
The Receiver Operating Characteristic (ROC) also characterizes diagnostic ability, although ROC reveals less information than the TOC.
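A hedged sketch of the difference, with invented data: at each threshold the TOC keeps the full contingency counts, whereas the ROC keeps only the two rates derived from them.

```python
# Illustrative sketch: per threshold, the TOC retains the four counts
# (hits, false alarms, misses, correct rejections); the ROC keeps only
# TPR = tp / (tp + fn) and FPR = fp / (fp + tn).
def toc_table(scores, labels):
    rows = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        rows.append({"threshold": t, "tp": tp, "fp": fp, "fn": fn, "tn": tn})
    return rows

for row in toc_table([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1]):
    print(row)
```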

Detection error tradeoff

An alternative to the ROC curve is the detection error tradeoff (DET) graph, which plots the false negative rate (missed detections) vs. the false positive rate (false alarms) on non-linearly transformed x- and y-axes.
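A small sketch of a single DET point, assuming made-up error rates and using the inverse standard normal CDF for the usual normal-deviate axes:

```python
# Hedged sketch: a DET graph plots the false negative rate against the false
# positive rate, typically on normal-deviate (probit) scaled axes.
from statistics import NormalDist

def det_point(fnr, fpr):
    probit = NormalDist().inv_cdf         # inverse standard normal CDF
    return probit(fpr), probit(fnr)       # (x, y) in normal-deviate units

print(det_point(fnr=0.10, fpr=0.05))
```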

Evidence-based medicine

evidence-based, medical evidence, evidence
ROC curves are also used extensively in epidemiology and medical research and are frequently mentioned in conjunction with evidence-based medicine.

Graph of a function

graph, graphs, graphing
A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied.

False positive rate

Comparisonwise error rate, Fall-out, false-positive errors
The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.

Power (statistics)

statistical power, power, powerful
It can also be thought of as a plot of the statistical power as a function of the Type I error rate of the decision rule (when the performance is calculated from just a sample of the population, the plotted quantities can be regarded as estimators of these rates).

Cumulative distribution function

distribution function, CDF, cumulative probability distribution function
In general, if the probability distributions for both detection and false alarm are known, the ROC curve can be generated by plotting the cumulative distribution function (area under the probability distribution from −∞ to the discrimination threshold) of the detection probability on the y-axis versus the cumulative distribution function of the false-alarm probability on the x-axis.
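A brief sketch under an assumed equal-variance Gaussian model (the distributions are not specified in the article): negative scores drawn from N(0, 1) and positive scores from N(d′, 1), with one ROC point per threshold obtained from the two cumulative distributions.

```python
# Hedged sketch under an assumed equal-variance Gaussian model:
# noise (negative) scores ~ N(0, 1), signal (positive) scores ~ N(d', 1).
from statistics import NormalDist

def gaussian_roc(d_prime, thresholds):
    noise, signal = NormalDist(0.0, 1.0), NormalDist(d_prime, 1.0)
    # x-axis: false-alarm probability P(score > t | negative)
    # y-axis: detection probability   P(score > t | positive)
    return [(1.0 - noise.cdf(t), 1.0 - signal.cdf(t)) for t in thresholds]

points = gaussian_roc(d_prime=1.0, thresholds=[t / 10 for t in range(-30, 31)])
print(points[0], points[-1])   # near (1, 1) at low thresholds, near (0, 0) at high ones
```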

Decision-making

decision making, decisions, decision
ROC analysis is related in a direct and natural way to cost/benefit analysis of diagnostic decision making.