# Fisher's exact test

**Fisher exact test, exact test, Fisher test, Fisher-Yates exact test**

Fisher's exact test is a statistical significance test used in the analysis of contingency tables.

## Related articles

### Contingency table

**cross tabulation, contingency tables, crosstab**

Fisher's exact test is a statistical significance test used in the analysis of contingency tables.

The significance of the difference between the two proportions can be assessed with a variety of statistical tests including Pearson's chi-squared test, the G-test, Fisher's exact test, and Barnard's test, provided the entries in the table represent individuals randomly sampled from the population about which conclusions are to be drawn.

### Ronald Fisher

**R.A. Fisher, R. A. Fisher, Fisher**

It is named after its inventor, Ronald Fisher, and is one of a class of exact tests, so called because the significance of the deviation from a null hypothesis (e.g., P-value) can be calculated exactly, rather than relying on an approximation that becomes exact in the limit as the sample size grows to infinity, as with many statistical tests.

In this book Fisher also outlined the Lady tasting tea, now a famous design of a statistical randomized experiment which uses Fisher's exact test and is the original exposition of Fisher's notion of a null hypothesis.

### Lady tasting tea

He tested her claim in the "lady tasting tea" experiment.

The test used was Fisher's exact test.

### Chi-squared test

**chi-square test, chi-squared statistic, Chi-squared**

With large samples, a chi-squared test (or better yet, a G-test) can be used in this situation.

For an exact test used in place of the 2 × 2 chi-squared test for independence, see Fisher's exact test.

### Hypergeometric distribution

**multivariate hypergeometric distribution, hypergeometric, hypergeometric test**

As pointed out by Fisher, this leads under a null hypothesis of independence to a hypergeometric distribution of the numbers in the cells of the table.

The test based on the hypergeometric distribution (hypergeometric test) is identical to the corresponding one-tailed version of Fisher's exact test.
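As a sketch of this identity, the hypergeometric probabilities for a hypothetical 2 × 2 table can be summed directly. The table and all names below are illustrative, not from the original text:

```python
from math import comb

# Hypothetical 2x2 table with all margins treated as fixed:
#             col 1  col 2  | row sums
#   row 1        1      9   |   10
#   row 2       11      3   |   14
#   col sums    12     12   | n = 24
a, b, c, d = 1, 9, 11, 3
n = a + b + c + d

def pmf(k):
    """P(top-left cell = k) under independence, conditional on the
    margins: a hypergeometric probability."""
    return comb(a + b, k) * comb(c + d, (a + c) - k) / comb(n, a + c)

# One-tailed hypergeometric test: P(top-left cell <= observed value),
# which equals the one-sided Fisher's exact test p-value.
p_one_tailed = sum(pmf(k) for k in range(0, a + 1))
print(round(p_one_tailed, 9))  # 0.001379728
```

Summing the other tail instead gives the opposite one-sided test; the two-sided version additionally collects tables from both tails.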

### Exact test

**exact inference, exactness**

It is named after its inventor, Ronald Fisher, and is one of a class of exact tests, so called because the significance of the deviation from a null hypothesis (e.g., P-value) can be calculated exactly, rather than relying on an approximation that becomes exact in the limit as the sample size grows to infinity, as with many statistical tests.

Fisher's exact test, based on the work of Ronald Fisher and E. J. G. Pitman in the 1930s, is exact because the sampling distribution (conditional on the marginals) is known exactly.

### G-test

**G-test, Fisher's G test, G-tests**

With large samples, a chi-squared test (or better yet, a G-test) can be used in this situation.

For very small samples the multinomial test for goodness of fit, and Fisher's exact test for contingency tables, or even Bayesian hypothesis selection are preferable to the G-test.

### P-value

**p-value, p, p-values**

It is named after its inventor, Ronald Fisher, and is one of a class of exact tests, so called because the significance of the deviation from a null hypothesis (e.g., P-value) can be calculated exactly, rather than relying on an approximation that becomes exact in the limit as the sample size grows to infinity, as with many statistical tests.

In that case, the null hypothesis was that she had no special ability, the test was Fisher's exact test, and the p-value was 1/70 (about 1.4%), so Fisher was willing to reject the null hypothesis (consider the outcome highly unlikely to be due to chance) if all cups were classified correctly.

### Muriel Bristol

Fisher is said to have devised the test following a comment from Muriel Bristol, who claimed to be able to detect whether the tea or the milk was added first to her cup.

He developed Fisher's exact test to assess the probabilities and statistical significance of experiments.

### Barnard's test

(Barnard's test relaxes this constraint on one set of the marginal totals.) In the example, there are 11 such cases.

It examines the association of two categorical variables and is a more powerful alternative than Fisher's exact test for 2×2 contingency tables.

### Odds ratio

**OR, odds, odds ratios**

Choi et al. propose a p-value derived from the likelihood ratio test based on the conditional distribution of the odds ratio given the marginal success rate.

One alternative estimator is the conditional maximum likelihood estimator, which conditions on the row and column margins when forming the likelihood to maximize (as in Fisher's exact test).

### Bernoulli trial

**Bernoulli trials, Bernoulli random variables, Bernoulli-distributed**

* Bernoulli trial

### Statistical significance

**statistically significant, significant, significance level**

Fisher's exact test is a statistical significance test used in the analysis of contingency tables.

### Sample (statistics)

**sample, samples, statistical sample**

Although in practice it is employed when sample sizes are small, it is valid for all sample sizes.

### Null hypothesis

**null, null hypotheses, hypothesis**

### Categorical variable

**categorical, categorical data, dichotomous**

The test is useful for categorical data that result from classifying objects in two different ways; it is used to examine the significance of the association (contingency) between the two kinds of classification.

### Sampling distribution

**finite sample distribution, distribution, sampling**

However, the significance value it provides is only an approximation, because the sampling distribution of the test statistic that is calculated is only approximately equal to the theoretical chi-squared distribution.
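The gap between the approximate and exact p-values can be seen numerically. The sketch below (hypothetical table, illustrative names) compares the chi-squared approximation against the exact two-sided p-value computed from the hypergeometric distribution:

```python
from math import comb, erfc, sqrt

# Hypothetical 2x2 table (illustrative numbers).
a, b, c, d = 1, 9, 11, 3
n = a + b + c + d

# Pearson chi-squared statistic for a 2x2 table (shortcut formula).
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
# Survival function of the chi-squared distribution with 1 degree of
# freedom: P(X >= x) = erfc(sqrt(x / 2)).
p_approx = erfc(sqrt(chi2 / 2))

def pmf(k):
    """Hypergeometric probability of a table with top-left cell k."""
    return comb(a + b, k) * comb(c + d, (a + c) - k) / comb(n, a + c)

# Exact two-sided p-value: total probability of all tables with the
# same margins that are no more likely than the observed one.
k_min, k_max = max(0, (a + c) - (c + d)), min(a + b, a + c)
p_exact = sum(pmf(k) for k in range(k_min, k_max + 1)
              if pmf(k) <= pmf(a) * (1 + 1e-9))
print(p_approx, p_exact)
```

For a sparse table like this one the two p-values disagree by a factor of about three, which is exactly the situation where the exact test is preferred.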

### Degrees of freedom (statistics)

**degrees of freedom, degree of freedom, Effective degrees of freedom**

The usual rule of thumb for deciding whether the chi-squared approximation is good enough is that the chi-squared test is not suitable when the expected values in any of the cells of a contingency table are below 5, or below 10 when there is only one degree of freedom (this rule is now known to be overly conservative).
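The rule of thumb is easy to check, since the expected count under independence is row total × column total / n. A minimal sketch with a hypothetical table:

```python
# Hypothetical 2x2 table; numbers are illustrative.
table = [[1, 9], [11, 3]]
row_sums = [sum(row) for row in table]
col_sums = [sum(col) for col in zip(*table)]
n = sum(row_sums)

# Expected count in cell (i, j) under independence.
expected = [[r * c / n for c in col_sums] for r in row_sums]
print(expected)  # [[5.0, 5.0], [7.0, 7.0]]

# Two cells sit exactly at the classical cutoff of 5, so an exact
# test would be the safer choice for this table.
```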

### Monte Carlo method

**Monte Carlo, Monte Carlo simulation, Monte Carlo methods**

However, the principle of the test can be extended to the general case of an m × n table, and some statistical packages provide a calculation (sometimes using a Monte Carlo method to obtain an approximation) for the more general case.
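One way such a Monte Carlo calculation can work is by permuting one of the two classifications while keeping both sets of margins fixed. Everything below (the table, the trial count, the use of the chi-squared statistic to rank tables) is an illustrative sketch, not any particular package's algorithm:

```python
import random

random.seed(0)

# Hypothetical 3x2 table (illustrative numbers).
observed = [[3, 1], [1, 3], [0, 4]]
n_rows, n_cols = len(observed), len(observed[0])

def chi2_stat(table):
    """Pearson chi-squared statistic, used here only to rank how
    'extreme' a table is relative to its margins."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return sum((table[i][j] - rows[i] * cols[j] / n) ** 2
               / (rows[i] * cols[j] / n)
               for i in range(n_rows) for j in range(n_cols))

# Expand the table into one (row label, column label) pair per
# observation; shuffling the column labels preserves both margins.
row_labels = [i for i, row in enumerate(observed) for _ in range(sum(row))]
col_labels = [j for row in observed
              for j, cnt in enumerate(row) for _ in range(cnt)]

t_obs = chi2_stat(observed)
trials, hits = 20_000, 0
for _ in range(trials):
    random.shuffle(col_labels)
    table = [[0] * n_cols for _ in range(n_rows)]
    for i, j in zip(row_labels, col_labels):
        table[i][j] += 1
    # Count resampled tables at least as extreme as the observed one.
    if chi2_stat(table) >= t_obs - 1e-9:
        hits += 1

p_mc = hits / trials
print(p_mc)  # Monte Carlo estimate; the exact value for this table is 0.2
```

Increasing the number of trials shrinks the Monte Carlo error at the usual 1/sqrt(trials) rate.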

### Binomial coefficient

**binomial coefficients, choose, (generalized) binomial coefficient**

where $\tbinom{n}{k}$ is the binomial coefficient and the symbol ! indicates the factorial operator.

### Factorial

**!, factorials, 8!**

The symbol ! indicates the factorial operator.
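The single-table probability behind Fisher's exact test can be written either with binomial coefficients or entirely with factorials; the two forms below are algebraically identical (hypothetical table, illustrative names):

```python
from math import comb, factorial

# Hypothetical 2x2 table with entries a, b / c, d.
a, b, c, d = 1, 9, 11, 3
n = a + b + c + d

# Binomial-coefficient form:
# choose(a+b, a) * choose(c+d, c) / choose(n, a+c)
p_binom = comb(a + b, a) * comb(c + d, c) / comb(n, a + c)

# Factorial form: (a+b)! (c+d)! (a+c)! (b+d)! / (a! b! c! d! n!)
p_fact = (factorial(a + b) * factorial(c + d)
          * factorial(a + c) * factorial(b + d)
          / (factorial(a) * factorial(b) * factorial(c)
             * factorial(d) * factorial(n)))
print(p_binom, p_fact)  # the same hypergeometric probability twice
```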

### R (programming language)

**R, R programming language, CRAN**

For example, in the R statistical computing environment, this value can be obtained with the fisher.test function.
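As a rough Python counterpart of that R computation, SciPy exposes fisher_exact; treat the snippet below as a sketch, with the table a hypothetical example:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table; in R the same numbers could be passed to
# fisher.test as a matrix.
table = [[1, 9], [11, 3]]

odds_ratio, p_two_sided = fisher_exact(table, alternative="two-sided")
_, p_one_sided = fisher_exact(table, alternative="less")
print(p_one_sided, p_two_sided)
```

The one-sided value sums the hypergeometric tail at or below the observed cell; the two-sided value also collects equally unlikely tables from the opposite tail.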

### One- and two-tailed tests

**one-tailed test, two-tailed test, one-sided**

This gives a one-tailed test, with p approximately 0.001346076 + 0.000033652 = 0.001379728.

### List of statistical software

**statistical software, List of statistical packages, Statistical package**

However, the principle of the test can be extended to the general case of an m × n table, and some statistical packages provide a calculation (sometimes using a Monte Carlo method to obtain an approximation) for the more general case.

### Gamma function

**Γ, gamma, Euler gamma function**

A simple, somewhat better computational approach relies on the gamma function or log-gamma function, but methods for accurate computation of hypergeometric and binomial probabilities remain an active research area.