# F1 score

**F-Measure, F-score, F1, F_1 score**

In statistical analysis of binary classification, the F1 score (also F-score or F-measure) is a measure of a test's accuracy.

## Related Articles

### Sørensen–Dice coefficient

**Czekanowski binary index, Dice, Dice coefficient**

The F 1 score is also known as the Sørensen–Dice coefficient or Dice similarity coefficient (DSC).
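This equivalence can be illustrated with a small sketch (the set names and values below are hypothetical): treating the predicted positives and the actual positives as two sets, the Dice coefficient of those sets equals the classifier's F1 score.

```python
def dice(a: set, b: set) -> float:
    """Sørensen–Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

predicted = {1, 2, 3, 4}  # items the classifier labeled positive
actual = {3, 4, 5}        # items that are truly positive

# TP = |A ∩ B| = 2, precision = 2/4, recall = 2/3,
# so F1 = 2pr / (p + r) = 4/7, the same value dice() returns.
print(dice(predicted, actual))  # ≈ 0.5714
```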

### Binary classification

**binary, binary classifier, binary categorization**

In statistical analysis of binary classification, the F1 score (also F-score or F-measure) is a measure of a test's accuracy.

The F-score combines precision and recall into one number via a choice of weighting, most simply equal weighting, as the balanced F-score (F1 score).
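The weighting can be sketched with the standard F_beta generalization (a minimal illustration; the function name and sample values are my own): beta > 1 weights recall more heavily, beta < 1 weights precision more, and beta = 1 recovers the balanced F1 score.

```python
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """Weighted harmonic mean of precision and recall."""
    b2 = beta ** 2
    if b2 * precision + recall == 0:
        return 0.0
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.5, 1.0
print(f_beta(p, r, beta=1.0))  # 2/3: the balanced F1 score
print(f_beta(p, r, beta=2.0))  # 5/6: recall counts more, so the score rises
```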

### Harmonic mean

**harmonic, weighted harmonic mean, harmonic average**

The F1 score is the harmonic average of the precision and recall, where an F1 score reaches its best value at 1 (perfect precision and recall) and worst at 0.

In computer science, specifically information retrieval and machine learning, the harmonic mean of the precision (true positives per predicted positive) and the recall (true positives per real positive) is often used as an aggregated performance score for the evaluation of algorithms and systems: the F-score (or F-measure).
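As a minimal sketch (the function and values are illustrative), the harmonic mean sharply penalizes imbalance between its two inputs, which is why a classifier cannot score well by maximizing only one of precision or recall:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1(1.0, 1.0))  # 1.0, the best possible score
print(f1(0.5, 0.5))  # 0.5, equal to the arithmetic mean when p == r
print(f1(0.9, 0.1))  # ≈ 0.18, far below the arithmetic mean of 0.5
```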

### Named-entity recognition

**named entity recognition, entity extraction, named entities**

The F-score has been widely used in the natural language processing literature, such as the evaluation of named entity recognition and word segmentation.

For example, the best system entering MUC-7 achieved an F-measure of 93.39%, while human annotators scored 97.60% and 96.95%.

### Youden's J statistic

**informedness, Youden's index**

Note, however, that the F-measures do not take the true negatives into account, and that measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferable to assess the performance of a binary classifier.

An unrelated but commonly used combination of basic statistics from information retrieval is the F-score, a (possibly weighted) harmonic mean of recall and precision. Here recall equals sensitivity (the true positive rate), but specificity and precision are entirely different measures.
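The point about true negatives can be demonstrated numerically (a hypothetical sketch; the counts are made up): two confusion matrices that differ only in the true-negative count receive the same F1 score but very different Matthews correlation coefficients.

```python
import math

def f1_from_counts(tp, fp, fn, tn):
    # The tn argument is ignored entirely: F1 does not see true negatives.
    return 2 * tp / (2 * tp + fp + fn)

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

many_tn = dict(tp=10, fp=5, fn=5, tn=1000)
few_tn = dict(tp=10, fp=5, fn=5, tn=1)

print(f1_from_counts(**many_tn), f1_from_counts(**few_tn))  # both ≈ 0.667
print(mcc(**many_tn), mcc(**few_tn))  # ≈ 0.66 versus ≈ -0.17
```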

### Precision and recall

**precision, recall, F-measure**

The F1 score is the harmonic average of the precision and recall, where an F1 score reaches its best value at 1 (perfect precision and recall) and worst at 0. It considers both the precision p and the recall r of the test to compute the score: p is the number of correct positive results divided by the number of all positive results returned by the classifier, and r is the number of correct positive results divided by the number of all relevant samples (all samples that should have been identified as positive).

The two measures are sometimes used together in the F1 score (or F-measure) to provide a single measurement for a system.
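The definitions of p and r above can be read straight off the confusion-matrix counts; a minimal sketch (names and numbers are illustrative):

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    p = tp / (tp + fp)  # correct positives / all positives returned
    r = tp / (tp + fn)  # correct positives / all relevant samples
    return p, r, 2 * p * r / (p + r)

# e.g. 8 true positives, 2 false positives, 4 false negatives:
p, r, f = precision_recall_f1(tp=8, fp=2, fn=4)
print(p, r, f)  # 0.8, ≈ 0.667, ≈ 0.727
```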

### C. J. van Rijsbergen

**C. J. 'Keith' van Rijsbergen, C.J. van Rijsbergen, Van Rijsbergen**

It is based on Van Rijsbergen's effectiveness measure E = 1 − 1/(α/p + (1 − α)/r); the balanced F-score corresponds to F = 1 − E with α = 1/2.

### Matthews correlation coefficient

**Matthews Correlation**

Note, however, that the F-measures do not take the true negatives into account, and that measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferable to assess the performance of a binary classifier.

### Word error rate

**Word error rate (WER), error rate, WER model**

### Receiver operating characteristic

**roc curve, AUC, ROC**

### Uncertainty coefficient

**proficiency, Proficiency (metric), Theil's U**

### Statistics

**statistical, statistical analysis, statistician**

In statistical analysis of binary classification, the F1 score (also F-score or F-measure) is a measure of a test's accuracy.

### Type I and type II errors

**type I error, false positive, false-positive**

In terms of Type I errors (false positives, FP) and type II errors (false negatives, FN), the formula is F1 = 2TP / (2TP + FP + FN), where TP is the number of true positives.

### Information retrieval

**query, retrieval, queries**

The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance.

### Web search engine

**search engine, search engines, search**

The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance.

### Document classification

**text classification, text categorization, text categorisation**

The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance.

### Web query classification

**query classification**

### Machine learning

**learning, machine-learning, statistical learning**

The F-score is also used in machine learning.

### Cohen's kappa

**kappa, kappa statistic, Cohen Kappa**

Note, however, that the F-measures do not take the true negatives into account, and that measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferable to assess the performance of a binary classifier.

### Text segmentation

**word segmentation, segmented, segmentation**

The F-score has been widely used in the natural language processing literature, such as the evaluation of named entity recognition and word segmentation.

### David Hand (statistician)

**David Hand, David J. Hand, D. J. Hand**

David Hand and others criticize the widespread use of the F-score since it gives equal importance to precision and recall.