One- and two-tailed tests

In statistical significance testing, a one-tailed test and a two-tailed test are alternative ways of computing the statistical significance of a parameter inferred from a data set, in terms of a test statistic.
36 Related Articles

Statistical significance

In statistical significance testing, a one-tailed test and a two-tailed test are alternative ways of computing the statistical significance of a parameter inferred from a data set, in terms of a test statistic.
A 5% significance level, for example, can be allocated entirely to one side of the sampling distribution, as in a one-tailed test, or partitioned between both sides of the distribution, as in a two-tailed test, with each tail (or rejection region) then containing 2.5% of the distribution.
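This allocation can be made concrete with a short sketch, using only the standard library: the same 5% significance level yields different critical values depending on whether it sits in one tail or is split between two.

```python
# Sketch: how a 5% significance level maps to rejection regions
# for one- vs two-tailed z-tests (standard library only).
from statistics import NormalDist

alpha = 0.05
std_normal = NormalDist()  # standard normal: mean 0, sd 1

# One-tailed (upper): the entire 5% sits in the right tail.
z_one = std_normal.inv_cdf(1 - alpha)       # ~1.645

# Two-tailed: 2.5% in each tail, so the cutoff moves outward.
z_two = std_normal.inv_cdf(1 - alpha / 2)   # ~1.960

print(f"one-tailed critical value:  {z_one:.3f}")
print(f"two-tailed critical values: +/-{z_two:.3f}")
```

The one-tailed cutoff is smaller, which is why a one-tailed test has more power against a deviation in the specified direction, at the cost of detecting nothing in the other direction.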

Test statistic

In statistical significance testing, a one-tailed test and a two-tailed test are alternative ways of computing the statistical significance of a parameter inferred from a data set, in terms of a test statistic. In the approach of Ronald Fisher, the null hypothesis H₀ will be rejected when the p-value of the test statistic is sufficiently extreme (vis-à-vis the test statistic's sampling distribution) and thus judged unlikely to be the result of chance.
Using one of these sampling distributions, it is possible to compute either a one-tailed or two-tailed p-value for the null hypothesis that the coin is fair.
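As a minimal sketch of that computation, the exact null distribution for coin flipping is binomial, so both p-values can be computed directly; the data here (15 heads in 20 flips) are hypothetical.

```python
# Sketch: exact one- and two-tailed p-values for a fair-coin null
# hypothesis, using the binomial distribution (standard library only).
from math import comb

n, heads = 20, 15   # hypothetical data: 15 heads in 20 flips
p_fair = 0.5

def binom_pmf(k: int) -> float:
    """P(X = k) under Binomial(n, 0.5); 0.5**n covers both p and 1-p."""
    return comb(n, k) * p_fair ** n

# One-tailed: is the coin biased towards heads?  P(X >= 15).
p_one = sum(binom_pmf(k) for k in range(heads, n + 1))

# Two-tailed: biased in either direction?  The null distribution is
# symmetric here, so this is double the one-tailed value (capped at 1).
p_two = min(1.0, 2 * p_one)

print(f"one-tailed p = {p_one:.4f}")   # ~0.0207
print(f"two-tailed p = {p_two:.4f}")   # ~0.0414
```

At the 5% level, the one-tailed test rejects fairness while the two-tailed test does so only marginally, illustrating why the choice of tails must be fixed before seeing the data.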

Null hypothesis

This method is used for null hypothesis testing: if the test statistic falls in the critical region, the null hypothesis is rejected in favor of the alternative hypothesis. In the approach of Ronald Fisher, the null hypothesis H₀ will be rejected when the p-value of the test statistic is sufficiently extreme (vis-à-vis the test statistic's sampling distribution) and thus judged unlikely to be the result of chance. In coin flipping, the null hypothesis is a sequence of Bernoulli trials with probability 0.5, yielding a random variable X which is 1 for heads and 0 for tails, and a common test statistic is the sample mean (the proportion of heads) X̄. If testing whether the coin is biased towards heads, a one-tailed test would be used – only large numbers of heads would be significant.
The choice of null hypothesis (H₀) and consideration of directionality (see "one-tailed test") are critical.

P-value

In the approach of Ronald Fisher, the null hypothesis H₀ will be rejected when the p-value of the test statistic is sufficiently extreme (vis-à-vis the test statistic's sampling distribution) and thus judged unlikely to be the result of chance.
Thus computing a p-value requires a null hypothesis, a test statistic (together with deciding whether the researcher is performing a one-tailed test or a two-tailed test), and data.

Z-test

If the test is performed using the actual population mean and variance, rather than an estimate from a sample, it would be called a one-tailed or two-tailed Z-test.
One-tailed and two-tailed p-values can then be calculated as Φ(−Z) (for upper/right-tailed tests), Φ(Z) (for lower/left-tailed tests), and 2Φ(−|Z|) (for two-tailed tests), where Φ is the standard normal cumulative distribution function.
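These three formulas can be sketched directly, taking Φ as the standard normal CDF from the standard library; the observed Z here is a hypothetical value.

```python
# Sketch of the three p-value formulas above, with Phi taken as the
# standard normal CDF (statistics.NormalDist); z is hypothetical data.
from statistics import NormalDist

phi = NormalDist().cdf   # Phi: standard normal CDF
z = 1.8                  # hypothetical observed Z statistic

p_upper = phi(-z)            # right-tailed: P(Z >= z) = Phi(-z)
p_lower = phi(z)             # left-tailed:  P(Z <= z) = Phi(z)
p_two   = 2 * phi(-abs(z))   # two-tailed:   P(|Z| >= |z|)

print(p_upper, p_lower, p_two)
```

Note that the two one-tailed values sum to 1, and the two-tailed value is twice the smaller of them, as expected for a symmetric null distribution.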

Student's t-test

If the test statistic follows a Student's t-distribution under the null hypothesis – which is common where the underlying variable follows a normal distribution with an unknown scaling factor – then the test is referred to as a one-tailed or two-tailed t-test.
Each of these statistics can be used to carry out either a one-tailed or two-tailed test.

Paired difference test

* Paired difference test, when two samples are being compared

Statistical hypothesis testing

In statistical significance testing, a one-tailed test and a two-tailed test are alternative ways of computing the statistical significance of a parameter inferred from a data set, in terms of a test statistic.

Parameter

In statistical significance testing, a one-tailed test and a two-tailed test are alternative ways of computing the statistical significance of a parameter inferred from a data set, in terms of a test statistic.

Normal distribution

If the test statistic follows a Student's t-distribution under the null hypothesis – which is common where the underlying variable follows a normal distribution with an unknown scaling factor – then the test is referred to as a one-tailed or two-tailed t-test. Alternative names are one-sided and two-sided tests; the terminology "tail" is used because the extreme portions of distributions, where observations lead to rejection of the null hypothesis, are small and often "tail off" toward zero, as in the normal distribution or "bell curve". One-tailed tests are used for asymmetric distributions that have a single tail, such as the chi-squared distribution, which are common in measuring goodness-of-fit, or for one side of a distribution that has two tails, such as the normal distribution, which is common in estimating location; this corresponds to specifying a direction. The distinction between one-tailed and two-tailed tests was popularized by Ronald Fisher in the influential book Statistical Methods for Research Workers, where he applied it especially to the normal distribution, which is a symmetric distribution with two equal tails.

Chi-squared distribution

One-tailed tests are used for asymmetric distributions that have a single tail, such as the chi-squared distribution, which are common in measuring goodness-of-fit, or for one side of a distribution that has two tails, such as the normal distribution, which is common in estimating location; this corresponds to specifying a direction.

Ronald Fisher

In the approach of Ronald Fisher, the null hypothesis H₀ will be rejected when the p-value of the test statistic is sufficiently extreme (vis-à-vis the test statistic's sampling distribution) and thus judged unlikely to be the result of chance. The distinction between one-tailed and two-tailed tests was popularized by Ronald Fisher in the influential book Statistical Methods for Research Workers, where he applied it especially to the normal distribution, which is a symmetric distribution with two equal tails.

Sampling distribution

In the approach of Ronald Fisher, the null hypothesis H₀ will be rejected when the p-value of the test statistic is sufficiently extreme (vis-à-vis the test statistic's sampling distribution) and thus judged unlikely to be the result of chance.

Lady tasting tea

In the archetypal lady tasting tea experiment, Fisher tested whether the lady in question was better than chance at distinguishing two types of tea preparation, not whether her ability was different from chance, and thus he used a one-tailed test.
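Fisher's one-tailed p-value in this experiment can be sketched from the hypergeometric distribution: with 8 cups, 4 of each preparation, the test sums the probabilities of guessing at least as well as observed by chance.

```python
# Sketch: one-tailed p-value for the lady tasting tea experiment
# (8 cups, 4 milk-first), under pure-chance guessing the number of
# correctly identified milk-first cups is hypergeometric.
from math import comb

cups, milk_first = 8, 4
observed = 4  # the case where she identifies all 4 milk-first cups

def hypergeom_pmf(k: int) -> float:
    """P(exactly k of her 4 chosen cups are truly milk-first)."""
    return (comb(milk_first, k) * comb(cups - milk_first, milk_first - k)
            / comb(cups, milk_first))

# One-tailed: P(k >= observed) -- "better than chance", not "different
# from chance", matching Fisher's directional question.
p_one = sum(hypergeom_pmf(k) for k in range(observed, milk_first + 1))

print(f"P(all 4 correct by chance) = {p_one:.4f}")  # 1/70, about 0.0143
```

With all 4 correct the one-tailed p-value is 1/70, small enough to reject chance guessing at the 5% level; a two-tailed formulation would make no sense here, since guessing worse than chance is not evidence of tasting ability.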

Bernoulli trial

In coin flipping, the null hypothesis is a sequence of Bernoulli trials with probability 0.5, yielding a random variable X which is 1 for heads and 0 for tails, and a common test statistic is the sample mean (the proportion of heads) X̄. If testing whether the coin is biased towards heads, a one-tailed test would be used – only large numbers of heads would be significant.

Sample mean and covariance

In coin flipping, the null hypothesis is a sequence of Bernoulli trials with probability 0.5, yielding a random variable X which is 1 for heads and 0 for tails, and a common test statistic is the sample mean (the proportion of heads) X̄. If testing whether the coin is biased towards heads, a one-tailed test would be used – only large numbers of heads would be significant.

Karl Pearson

The p-value was introduced by Karl Pearson in Pearson's chi-squared test, where he defined P (original notation) as the probability that the statistic would be at or above a given level.

Pearson's chi-squared test

The p-value was introduced by Karl Pearson in Pearson's chi-squared test, where he defined P (original notation) as the probability that the statistic would be at or above a given level.

Goodness of fit

One-tailed tests are used for asymmetric distributions that have a single tail, such as the chi-squared distribution, which are common in measuring goodness-of-fit, or for one side of a distribution that has two tails, such as the normal distribution, which is common in estimating location; this corresponds to specifying a direction.

Statistical Methods for Research Workers

The distinction between one-tailed and two-tailed tests was popularized by Ronald Fisher in the influential book Statistical Methods for Research Workers, where he applied it especially to the normal distribution, which is a symmetric distribution with two equal tails.

The Design of Experiments

Fisher emphasized the importance of measuring the tail – the observed value of the test statistic and all more extreme values – rather than simply the probability of the specific outcome itself, in his The Design of Experiments (1935).

Student's t-distribution

If the test statistic follows a Student's t-distribution under the null hypothesis – which is common where the underlying variable follows a normal distribution with an unknown scaling factor – then the test is referred to as a one-tailed or two-tailed t-test.

Quantile function

The statistical tables for t and for Z provide critical values for both one- and two-tailed tests.

Critical value

The statistical tables for t and for Z provide critical values for both one- and two-tailed tests.
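The z rows of such a table can be regenerated with the standard normal quantile function; a sketch using only the standard library (t critical values would additionally need the t-distribution, which is not in the standard library):

```python
# Sketch: reproducing z critical values, as found in statistical
# tables, for a few common significance levels (standard library only).
from statistics import NormalDist

inv_phi = NormalDist().inv_cdf  # standard normal quantile function

print("alpha   one-tailed  two-tailed")
for alpha in (0.10, 0.05, 0.01):
    z_one = inv_phi(1 - alpha)       # all of alpha in one tail
    z_two = inv_phi(1 - alpha / 2)   # alpha split between two tails
    print(f"{alpha:<7} {z_one:>9.3f}  {z_two:>9.3f}")
```

For example, at alpha = 0.05 this recovers the familiar 1.645 (one-tailed) and 1.960 (two-tailed) cutoffs.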

Cochran's C test

In statistics, Cochran's C test, named after William G. Cochran, is a one-sided upper limit variance outlier test.