Variance

sample variance, population variance, variability, scatter, variation, Bienaymé formula, Semivariance, statistical variance, vary, as measured discretely
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean.
Related Articles

Expected value

expectation, expected, mean
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean.
By contrast, the variance is a measure of dispersion of the possible values of the random variable around the expected value.
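In symbols, with \mu = \operatorname{E}[X], the definition and the standard computational shortcut read:

\operatorname{Var}(X) = \operatorname{E}\left[(X - \mu)^2\right] = \operatorname{E}[X^2] - (\operatorname{E}[X])^2.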

Cumulant

cumulant generating function, cumulants, cumulant-generating function
The variance is also equivalent to the second cumulant of a probability distribution that generates X. The variance is typically designated as \operatorname{Var}(X), \sigma^2_X, or simply \sigma^2 (pronounced "sigma squared").
The first cumulant is the mean, the second cumulant is the variance, and the third cumulant is the same as the third central moment.
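In symbols, the cumulants are the coefficients in the Taylor expansion of the cumulant generating function (a standard definition, written out for convenience):

K_X(t) = \log \operatorname{E}\left[e^{tX}\right] = \kappa_1 t + \kappa_2 \frac{t^2}{2!} + \kappa_3 \frac{t^3}{3!} + \cdots, \qquad \kappa_1 = \operatorname{E}[X], \quad \kappa_2 = \operatorname{Var}(X).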

Random variable

random variables, random variation, random
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean.
In this case, the structure of the real numbers makes it possible to define quantities such as the expected value and variance of a random variable, its cumulative distribution function, and the moments of its distribution.

Algorithms for calculating variance

computational algorithms, numerically stable algorithms, numerically stable alternatives
A key difficulty in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values. Numerically stable alternatives exist.
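One such alternative is Welford's online algorithm, which updates a running mean and a running sum of squared deviations instead of accumulating raw sums of squares; the sketch below is a minimal Python illustration (function and variable names are our own):

    def welford_variance(samples):
        """Single-pass, numerically stable sample variance (Welford's algorithm)."""
        n = 0
        mean = 0.0
        m2 = 0.0  # running sum of squared deviations from the current mean
        for x in samples:
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)  # note: uses the updated mean
        return m2 / (n - 1) if n > 1 else float("nan")  # unbiased (n - 1) estimator

    # The naive formula E[X^2] - (E[X])^2 can lose all precision when the mean is
    # large relative to the spread, e.g. for data like [1e9 + 4, 1e9 + 7, 1e9 + 13];
    # Welford's update avoids that catastrophic cancellation.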

Normal distribution

normally distributed, normal, Gaussian
The normal distribution with parameters \mu and \sigma is a continuous distribution (also known as the Gaussian distribution) whose probability density function is given by f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-(x-\mu)^2 / (2\sigma^2)}.
In its most general form, under some conditions (which include finite variance), it states that averages of samples of observations of random variables independently drawn from independent distributions converge in distribution to the normal, that is, they become normally distributed when the number of observations is sufficiently large.

Covariance

covariant, covariation, covary
The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by \sigma^2, s^2, or \operatorname{Var}(X). The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however, the expected absolute deviation tends to be more robust, as it is less sensitive to outliers arising from measurement anomalies or from an unduly heavy-tailed distribution.
where \operatorname{E}[X] is the expected value of X, also known as the mean of X. The covariance is also sometimes denoted \sigma_{XY} or \sigma(X,Y), in analogy to variance.
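Written out, the covariance and its specialization to a variable with itself are (standard definitions, reproduced for convenience):

\operatorname{Cov}(X, Y) = \operatorname{E}\left[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\right], \qquad \operatorname{Cov}(X, X) = \operatorname{E}\left[(X - \operatorname{E}[X])^2\right] = \operatorname{Var}(X).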

Cauchy distribution

Lorentzian, Cauchy, Lorentzian profile
If a continuous distribution does not have a finite expected value, as is the case for the Cauchy distribution, it does not have a variance either.
The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution since both its expected value and its variance are undefined.
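A small simulation makes the pathology concrete: the running sample variance of Cauchy draws never settles down as the sample grows. The sketch below uses NumPy; the seed, checkpoints, and sample size are arbitrary choices of ours:

    import numpy as np

    rng = np.random.default_rng(0)
    draws = rng.standard_cauchy(1_000_000)

    # The sample variance keeps jumping around instead of converging,
    # because the population variance is undefined.
    for n in (10**3, 10**4, 10**5, 10**6):
        print(n, np.var(draws[:n], ddof=1))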

Statistics

statistical, statistical analysis, statistician
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean.
Commonly used estimators include sample mean, unbiased sample variance and sample covariance.

Central limit theorem

limit theorems, A proof of the central limit theorem, central limit
The role of the normal distribution in the central limit theorem is in part responsible for the prevalence of the variance in probability and statistics. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem.
Let X_1, X_2, \ldots, X_n be a random sample of size n, that is, a sequence of independent and identically distributed (i.i.d.) random variables drawn from a distribution with expected value given by \mu and finite variance given by \sigma^2.
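For such a sample, the variance of the sample mean follows from the Bienaymé formula (a standard derivation, reproduced for convenience):

\operatorname{Var}(\bar{X}) = \operatorname{Var}\left(\frac{1}{n} \sum_{i=1}^n X_i\right) = \frac{1}{n^2} \sum_{i=1}^n \operatorname{Var}(X_i) = \frac{\sigma^2}{n}.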

Descriptive statistics

descriptive, descriptive statistic, statistics
Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling.
Measures of central tendency include the mean, median and mode, while measures of variability include the standard deviation (or variance), the minimum and maximum values of the variables, kurtosis and skewness.
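As a concrete illustration, Python's standard library computes most of these summaries directly (a minimal sketch; the data values are made up):

    import statistics as st

    data = [2, 4, 4, 4, 5, 5, 7, 9]
    print("mean:    ", st.mean(data))       # central tendency
    print("median:  ", st.median(data))
    print("mode:    ", st.mode(data))
    print("variance:", st.variance(data))   # sample variance, n - 1 denominator
    print("stdev:   ", st.stdev(data))      # square root of the variance
    print("range:   ", min(data), "to", max(data))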

Poisson distribution

Poisson, Poisson-distributed, Poissonian
The Poisson distribution with parameter \lambda is a discrete distribution for k = 0, 1, 2, \ldots.
The positive real number \lambda is equal to the expected value of X and also to its variance.
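Written out, the probability mass function and the coincidence of mean and variance are (standard results):

P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}, \qquad \operatorname{E}[X] = \operatorname{Var}(X) = \lambda.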

Exponential distribution

exponential, exponentially distributed, exponentially
The exponential distribution with parameter \lambda is a continuous distribution whose support is the semi-infinite interval [0, \infty).
The variance of X is given by \operatorname{Var}(X) = 1/\lambda^2.
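For reference, the density and the moments behind this result are (standard results, reproduced for convenience):

f(x) = \lambda e^{-\lambda x} \text{ for } x \ge 0, \qquad \operatorname{E}[X] = \frac{1}{\lambda}, \quad \operatorname{E}[X^2] = \frac{2}{\lambda^2}, \quad \operatorname{Var}(X) = \operatorname{E}[X^2] - (\operatorname{E}[X])^2 = \frac{1}{\lambda^2}.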

Central moment

central moments, moment about the mean, moments about the mean
The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by \sigma^2, s^2, or \operatorname{Var}(X).
The second central moment \mu_2 is called the variance, and is usually denoted \sigma^2, where \sigma represents the standard deviation.
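In general, the n-th central moment is defined as follows, with the variance as the n = 2 case (standard definition):

\mu_n = \operatorname{E}\left[(X - \operatorname{E}[X])^n\right], \qquad \mu_2 = \sigma^2 = \operatorname{Var}(X).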

Pareto distribution

Pareto, generalized Pareto distribution, Paretian tail
An example is a Pareto distribution whose index k satisfies 1 < k \le 2.
* The variance of a random variable following a Pareto distribution with scale x_m and shape \alpha is \frac{x_m^2 \alpha}{(\alpha - 1)^2 (\alpha - 2)} for \alpha > 2, and is infinite for \alpha \le 2.

Analysis of variance

ANOVA, analysis of variance (ANOVA), corrected the means
A similar formula is applied in analysis of variance, where the corresponding formula is SS_{\text{total}} = SS_{\text{between}} + SS_{\text{within}}; here SS refers to the sum of squares.
In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation.
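The partition rests on the one-way sum-of-squares identity, for observations x_{ij} in group i with group means \bar{x}_i and grand mean \bar{x} (a standard identity, reproduced for convenience):

\sum_i \sum_j (x_{ij} - \bar{x})^2 = \sum_i n_i (\bar{x}_i - \bar{x})^2 + \sum_i \sum_j (x_{ij} - \bar{x}_i)^2, \qquad \text{i.e.} \quad SS_{\text{total}} = SS_{\text{between}} + SS_{\text{within}}.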

Irénée-Jules Bienaymé

Bienaymé; Bienaymé, Irénée-Jules; I. J. Bienaymé
This statement is called the Bienaymé formula and was discovered in 1853.
In particular, he formulated the Bienaymé–Chebyshev inequality concerning the law of large numbers and the Bienaymé formula for the variance of a sum of uncorrelated random variables.
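The formula drops out of the general expansion of the variance of a sum, whose cross terms are covariances that vanish when the variables are uncorrelated (a standard derivation):

\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \operatorname{Var}(X_i) + 2 \sum_{1 \le i < j \le n} \operatorname{Cov}(X_i, X_j) = \sum_{i=1}^n \operatorname{Var}(X_i) \quad \text{when the } X_i \text{ are uncorrelated}.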

Standard error

SE, standard errors, standard error of the mean
This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem.
This forms a distribution of different means, and this distribution has its own mean and variance.
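Concretely, the standard error of the sample mean is the square root of the variance of the mean, with \sigma replaced by the sample standard deviation s when it has to be estimated (standard definitions):

\operatorname{SE}(\bar{X}) = \sqrt{\operatorname{Var}(\bar{X})} = \frac{\sigma}{\sqrt{n}} \approx \frac{s}{\sqrt{n}}.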

Uncorrelatedness (probability theory)

uncorrelated
One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of uncorrelated random variables is the sum of their variances: \operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \operatorname{Var}(X_i).
Uncorrelated random variables have a Pearson correlation coefficient of zero, except in the trivial case when either variable has zero variance (is a constant).

Law of total variance

The general formula for variance decomposition or the law of total variance is: if X and Y are two random variables, and the variance of X exists, then \operatorname{Var}(X) = \operatorname{E}[\operatorname{Var}(X \mid Y)] + \operatorname{Var}(\operatorname{E}[X \mid Y]).
In probability theory, the law of total variance (also called the variance decomposition formula, the conditional variance formula, or the law of iterated variances, and known informally as Eve's law) states that if X and Y are random variables on the same probability space, and the variance of Y is finite, then \operatorname{Var}(Y) = \operatorname{E}[\operatorname{Var}(Y \mid X)] + \operatorname{Var}(\operatorname{E}[Y \mid X]).
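A quick Monte Carlo check of the decomposition, using a two-stage model in which both sides are known exactly (a hedged NumPy sketch; the model and all names are our own choices):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000

    # Two-stage model: X ~ Poisson(5), then Y | X ~ Normal(mean=X, sd=2).
    x = rng.poisson(5.0, n)
    y = rng.normal(loc=x, scale=2.0)

    lhs = np.var(y)          # Var(Y), estimated from the simulation
    rhs = 4.0 + np.var(x)    # E[Var(Y|X)] = 4 (constant) plus Var(E[Y|X]) = Var(X)
    print(lhs, rhs)          # both are close to 4 + 5 = 9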

Conditional variance

conditional covariance, scedastic function
The conditional expectation \operatorname{E}(X \mid Y) of X given Y, and the conditional variance \operatorname{Var}(X \mid Y), may be understood as follows.
In probability theory and statistics, a conditional variance is the variance of a random variable given the value(s) of one or more other variables.
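The usual definition mirrors the unconditional variance, with conditional expectations throughout (standard definition):

\operatorname{Var}(X \mid Y) = \operatorname{E}\left[\left(X - \operatorname{E}[X \mid Y]\right)^2 \mid Y\right].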

Moment (mathematics)

moments, moment, raw moment
The second moment of a random variable attains its minimum value when taken around the first moment (i.e., the mean) of the random variable: \operatorname{argmin}_m \operatorname{E}\left[(X - m)^2\right] = \operatorname{E}[X]. Conversely, if a continuous function \varphi satisfies \operatorname{argmin}_m \operatorname{E}[\varphi(X - m)] = \operatorname{E}[X] for all random variables X, then it is necessarily of the form \varphi(x) = a x^2 + b, where a > 0.
If the function is a probability distribution, then the zeroth moment is the total probability (i.e. one), the first moment is the mean, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis.

Binomial distribution

binomial, binomial probability distribution, binomial random variable
The binomial distribution with parameters n and p is a discrete distribution for k = 0, 1, \ldots, n.
The variance is: \operatorname{Var}(X) = np(1 - p).
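This is an instance of the Bienaymé formula, since a binomial variable is a sum of n independent Bernoulli(p) variables, each with variance p(1 - p):

X = \sum_{i=1}^n B_i, \qquad \operatorname{Var}(X) = \sum_{i=1}^n \operatorname{Var}(B_i) = np(1 - p).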

Probability density function

probability density, density function, density
The normal distribution with parameters \mu and \sigma is a continuous distribution (also known as the Gaussian distribution) whose probability density function is given by f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-(x-\mu)^2 / (2\sigma^2)}. If the random variable X represents samples generated by a continuous distribution with probability density function f(x), and F(x) is the corresponding cumulative distribution function, then the population variance is given by \sigma^2 = \int (x - \mu)^2 f(x) \, dx, where \mu = \int x f(x) \, dx is the expected value.
For instance, the above expression allows one to determine statistical characteristics of such a discrete variable (such as its mean, its variance, and its kurtosis), starting from the formulas given for a continuous probability distribution.

Invariant (mathematics)

invariant, invariants, invariance
Variance is invariant with respect to changes in a location parameter.
The variance of a probability distribution is invariant under translations of the real line; hence the variance of a random variable is unchanged by the addition of a constant to it.
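The invariance is immediate from the definition, since adding a constant shifts the mean by the same amount and the shift cancels inside the squared deviation:

\operatorname{Var}(X + a) = \operatorname{E}\left[\left((X + a) - (\mu + a)\right)^2\right] = \operatorname{E}\left[(X - \mu)^2\right] = \operatorname{Var}(X).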

Heavy-tailed distribution

heavy tails, heavy-tailed, heavy tail
The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however, the expected absolute deviation tends to be more robust, as it is less sensitive to outliers arising from measurement anomalies or from an unduly heavy-tailed distribution.
Some authors use the term to refer to those distributions that do not have all of their power moments finite; others reserve it for those distributions that do not have a finite variance.