Normal distribution

In probability theory, the normal (or Gaussian or Gauss or Laplace–Gauss) distribution is a very common continuous probability distribution.
Related articles

Central limit theorem

The normal distribution is useful because of the central limit theorem.
In probability theory, the central limit theorem (CLT) establishes that, in some situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution (informally a "bell curve") even if the original variables themselves are not normally distributed.
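As a rough, hands-on illustration of the theorem, the following Python/NumPy sketch sums uniform variables and normalizes them; the Uniform(0, 1) population, the sample size n and the number of trials are arbitrary choices for demonstration, not taken from the source.

    import numpy as np

    rng = np.random.default_rng(0)

    # Sum n independent Uniform(0, 1) variables and normalize: subtract the
    # mean n/2 and divide by the standard deviation sqrt(n/12).
    n, trials = 30, 100_000
    sums = rng.uniform(0.0, 1.0, size=(trials, n)).sum(axis=1)
    z = (sums - n / 2) / np.sqrt(n / 12.0)

    # The normalized sums behave approximately like a standard normal variable:
    # roughly 68% fall within one unit of 0 and roughly 95% within two.
    print(np.mean(np.abs(z) < 1))
    print(np.mean(np.abs(z) < 2))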

Student's t-distribution

The normal distribution is sometimes informally called the bell curve. However, many other distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions).
In probability and statistics, Student's t-distribution (or simply the t-distribution) is any member of a family of continuous probability distributions that arises when estimating the mean of a normally distributed population in situations where the sample size is small and the population standard deviation is unknown.
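A minimal SciPy sketch of that situation: a small sample with an unknown population standard deviation, where the 95% confidence interval for the mean uses t critical values rather than normal ones. The simulated data, sample size and confidence level are arbitrary illustration choices.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sample = rng.normal(loc=10.0, scale=2.0, size=12)    # small sample; sigma treated as unknown

    n = sample.size
    mean = sample.mean()
    sem = sample.std(ddof=1) / np.sqrt(n)                # estimated standard error of the mean

    # With the standard deviation estimated from the data, the interval uses
    # Student's t with n - 1 degrees of freedom instead of the normal quantile.
    t_crit = stats.t.ppf(0.975, df=n - 1)
    print(mean - t_crit * sem, mean + t_crit * sem)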

Standard normal deviate

If Z is a standard normal deviate, then X = \sigma Z + \mu will have a normal distribution with expected value \mu and standard deviation \sigma.
A standard normal deviate is a normally distributed deviate with expected value 0 and standard deviation 1.
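A short NumPy sketch of that transformation; the values of \mu and \sigma and the sample size are arbitrary.

    import numpy as np

    rng = np.random.default_rng(2)
    mu, sigma = 5.0, 3.0

    z = rng.standard_normal(100_000)   # standard normal deviates: mean 0, standard deviation 1
    x = mu + sigma * z                 # X = mu + sigma * Z has mean mu and standard deviation sigma

    print(x.mean(), x.std())           # approximately 5.0 and 3.0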

Logistic distribution

The normal distribution is sometimes informally called the bell curve. However, many other distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions).
It resembles the normal distribution in shape but has heavier tails (higher kurtosis).
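A one-line SciPy comparison of the two shapes via excess kurtosis (a sketch; the built-in scipy.stats.norm and scipy.stats.logistic distributions are used with default parameters).

    from scipy import stats

    # Excess kurtosis: 0 for the normal distribution, 1.2 for the logistic,
    # reflecting the logistic distribution's heavier tails.
    print(stats.norm.stats(moments="k"))
    print(stats.logistic.stats(moments="k"))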

Mode (statistics)

The parameter \mu is the mean or expectation of the distribution (and also its median and mode).
The numerical value of the mode is the same as that of the mean and median in a normal distribution, and it may be very different in highly skewed distributions.
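A small numerical illustration in Python: for the normal distribution all three location measures coincide at \mu, while for a strongly skewed distribution such as the log-normal they separate. The parameter values are arbitrary.

    import numpy as np

    # Log-normal with underlying normal parameters mu and sigma:
    mu, sigma = 0.0, 1.0
    mean = np.exp(mu + sigma ** 2 / 2)    # about 1.649
    median = np.exp(mu)                   # exactly 1.0
    mode = np.exp(mu - sigma ** 2)        # about 0.368

    # For a normal distribution these three would all equal mu.
    print(mean, median, mode)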

Log-normal distribution

Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution.
In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed.
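A quick NumPy/SciPy sketch of that definition: exponentiating normal samples yields log-normal samples, and the result matches SciPy's lognorm parametrization (shape s = \sigma, scale = e^\mu). The parameters and sample size are arbitrary.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    mu, sigma = 1.0, 0.5

    x = np.exp(rng.normal(mu, sigma, size=100_000))   # exp of a normal variable is log-normal

    # Compare the simulated moments with SciPy's log-normal (s=sigma, scale=exp(mu)).
    print(x.mean(), stats.lognorm.mean(sigma, scale=np.exp(mu)))
    print(np.median(x), stats.lognorm.median(sigma, scale=np.exp(mu)))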

Cauchy distribution

The normal distribution is sometimes informally called the bell curve. However, many other distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions).
It is one of the few distributions that is stable and has a probability density function that can be expressed analytically, the others being the normal distribution and the Lévy distribution.
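A small NumPy sketch of the practical consequence of stability combined with an undefined mean: the average of Cauchy variables is again Cauchy, so the sample mean never concentrates. The sample sizes are arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)

    # The average of n standard Cauchy variables is again standard Cauchy,
    # so its spread does not shrink as n grows (unlike the normal case).
    for n in (10, 100, 10_000):
        means = rng.standard_cauchy((500, n)).mean(axis=1)
        print(n, np.percentile(means, [25, 75]))   # quartiles stay near -1 and +1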

Outlier

Therefore, it may not be an appropriate model when one expects a significant fraction of outliers—values that lie many standard deviations away from the mean—and least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data.
When the outliers arise from measurement error, one wishes to discard them or use statistics that are robust to outliers; when they reflect a genuinely heavy-tailed population, they indicate that the distribution has high skewness and that one should be very cautious in using tools or intuitions that assume a normal distribution.

Propagation of uncertainty

Moreover, many results and methods (such as propagation of uncertainty and least squares parameter fitting) can be derived analytically in explicit form when the relevant variables are normally distributed.
For example, the 68% confidence limits for a one-dimensional variable belonging to a normal distribution are approximately ± one standard deviation from the central value.
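A minimal Python sketch of first-order propagation of uncertainty for a product z = x y of independent, normally distributed inputs, checked against a Monte Carlo simulation; all numerical values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(5)

    x0, sx = 10.0, 0.2   # mean and standard deviation of x
    y0, sy = 5.0, 0.1    # mean and standard deviation of y

    # Linearized propagation for z = x * y with independent inputs:
    # sigma_z^2 ~ (y0 * sigma_x)^2 + (x0 * sigma_y)^2.
    sz_linear = np.sqrt((y0 * sx) ** 2 + (x0 * sy) ** 2)

    # Monte Carlo check with normally distributed inputs.
    z = rng.normal(x0, sx, 1_000_000) * rng.normal(y0, sy, 1_000_000)
    print(sz_linear, z.std())   # the two values should agree closely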

Unimodality

It is unimodal: its first derivative is positive for x < \mu, negative for x > \mu, and zero only at x = \mu.
Normal distributions are a standard example of unimodal distributions.
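The sign pattern of the derivative can be checked symbolically; the following SymPy sketch factors the first derivative of the density (variable names are arbitrary).

    import sympy as sp

    x, mu = sp.symbols("x mu", real=True)
    sigma = sp.symbols("sigma", positive=True)

    f = sp.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sp.sqrt(2 * sp.pi))

    # The ratio f'(x) / f(x) simplifies to (mu - x) / sigma**2, which is
    # positive for x < mu, negative for x > mu, and zero only at x = mu.
    print(sp.simplify(sp.diff(f, x) / f))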

Stable distribution

The Gaussian distribution belongs to the family of stable distributions, which are the attractors of sums of independent, identically distributed random variables whether or not the mean or variance is finite.
Stable distributions have 0 < α ≤ 2, with the upper bound corresponding to the normal distribution, and α = 1 to the Cauchy distribution.
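A compact NumPy/SciPy check of the stability property in the normal (α = 2) case: the sum of two independent standard normal variables is again normal, with standard deviation \sqrt{2}. The seed and sample size are arbitrary.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)

    # Sum of two independent N(0, 1) variables: again normal, scale sqrt(2).
    s = rng.standard_normal(100_000) + rng.standard_normal(100_000)
    print(s.std())                                                # about 1.414
    print(stats.kstest(s, "norm", args=(0, np.sqrt(2))).pvalue)   # typically not small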

Standard deviation

The parameter \sigma is the standard deviation.
This means that most men (about 68%, assuming a normal distribution) have a height within 3 inches (7.62 cm), one standard deviation, of the mean (67–73 inches, or 170.18–185.42 cm), and almost all men (about 95%) have a height within 6 inches (15.24 cm), two standard deviations, of the mean (64–76 inches, or 162.56–193.04 cm).
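The quoted ranges correspond to a normal model with mean 70 inches and standard deviation 3 inches (an assumption for illustration); the percentages can be recomputed with SciPy.

    from scipy import stats

    height = stats.norm(loc=70, scale=3)   # assumed mean 70 in, standard deviation 3 in

    print(height.cdf(73) - height.cdf(67))   # about 0.683, within one standard deviation
    print(height.cdf(76) - height.cdf(64))   # about 0.954, within two standard deviations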

Fourier transform

The Fourier transform of a normal density f with mean \mu and standard deviation \sigma is \hat{f}(t) = e^{-i\mu t - \frac{1}{2}(\sigma t)^2}, using the convention \hat{f}(t) = \int f(x)\, e^{-itx}\, dx; the corresponding characteristic function is e^{i\mu t - \sigma^2 t^2/2}.
The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion).
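A brief Monte Carlo check in NumPy that the characteristic function of a normal variable matches the closed form e^{i\mu t - \sigma^2 t^2/2}; the parameter values, sample size and evaluation points are arbitrary.

    import numpy as np

    rng = np.random.default_rng(7)
    mu, sigma = 1.0, 2.0
    x = rng.normal(mu, sigma, 200_000)

    # Characteristic function E[exp(i t X)] versus the closed-form Gaussian expression.
    for t in (0.3, 1.0):
        mc = np.mean(np.exp(1j * t * x))
        exact = np.exp(1j * mu * t - 0.5 * (sigma * t) ** 2)
        print(t, mc, exact)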

Median

The parameter \mu is the mean or expectation of the distribution (and also its median and mode).
The median of a normal distribution with mean \mu and variance \sigma^2 is \mu; in fact, for a normal distribution, mean = median = mode.

Cumulant

The normal distribution is the only absolutely continuous distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero.
As well, the third and higher-order cumulants of a normal distribution are zero, and it is the only distribution with this property.
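SciPy's k-statistics (scipy.stats.kstat) are unbiased estimators of cumulants, so for a large normal sample the third and fourth should be close to zero; a small sketch with arbitrary parameters:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    x = rng.normal(2.0, 3.0, 200_000)

    # First two cumulants: mean and variance (about 2 and 9 here).
    print(stats.kstat(x, 1), stats.kstat(x, 2))
    # Third and fourth cumulants of a normal distribution are zero.
    print(stats.kstat(x, 3), stats.kstat(x, 4))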

Phi

The probability density of the standard Gaussian distribution (standard normal distribution) (with zero mean and unit variance) is often denoted with the Greek letter \phi (phi).
In probability theory, \phi(x) = (2\pi)^{-1/2} e^{-x^2/2} is the probability density function of the standard normal distribution.

68–95–99.7 rule

This fact is known as the 68-95-99.7 (empirical) rule, or the 3-sigma rule.
It describes the fraction of values that lie within a band around the mean with a width of two, four and six standard deviations: more accurately, 68.27%, 95.45% and 99.73% of the values lie within one, two and three standard deviations of the mean, respectively.
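These percentages follow directly from the standard normal CDF; a one-loop SciPy check:

    from scipy import stats

    # Probability of falling within k standard deviations of the mean.
    for k in (1, 2, 3):
        print(k, stats.norm.cdf(k) - stats.norm.cdf(-k))   # 0.6827, 0.9545, 0.9973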

Statistical inference

Therefore, it may not be an appropriate model when one expects a significant fraction of outliers—values that lie many standard deviations away from the mean—and least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data.
With finite samples, approximation results measure how close a limiting distribution approaches the statistic's sample distribution: For example, with 10,000 independent samples the normal distribution approximates (to two digits of accuracy) the distribution of the sample mean for many population distributions, by the Berry–Esseen theorem.

Error function

The related error function \operatorname{erf}(x) gives the probability of a random variable with normal distribution of mean 0 and variance 1/2 falling in the range [-x, x].
In statistics, for nonnegative values of x, the error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and variance 1/2, erf(x) describes the probability of Y falling in the range [−x, x].
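That interpretation is easy to verify numerically; the following sketch compares math.erf with the corresponding normal probability (the evaluation points are arbitrary).

    import math
    from scipy import stats

    # Y ~ Normal with mean 0 and variance 1/2, i.e. standard deviation sqrt(1/2).
    y = stats.norm(loc=0.0, scale=math.sqrt(0.5))

    for x in (0.5, 1.0, 2.0):
        print(math.erf(x), y.cdf(x) - y.cdf(-x))   # the two columns should match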

Probability density function

The probability density of the normal distribution is f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)}.
The standard normal distribution has probability density \phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.
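A direct implementation of these densities, checked against scipy.stats.norm.pdf (the evaluation grid and the non-standard parameters are arbitrary):

    import numpy as np
    from scipy import stats

    def normal_pdf(x, mu=0.0, sigma=1.0):
        # Density of N(mu, sigma^2): exp(-(x - mu)^2 / (2 sigma^2)) / (sigma sqrt(2 pi)).
        return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

    x = np.linspace(-4, 4, 9)
    print(np.allclose(normal_pdf(x), stats.norm.pdf(x)))                        # standard normal
    print(np.allclose(normal_pdf(x, 1.0, 2.0), stats.norm.pdf(x, 1.0, 2.0)))    # mean 1, sd 2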

Robust statistics

When the data are heavy-tailed or contaminated by outliers, a more heavy-tailed distribution should be assumed and the appropriate robust statistical inference methods applied.
Robust statistics are statistics with good performance for data drawn from a wide range of probability distributions, especially for distributions that are not normal.
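A toy NumPy example of that contrast: the sample mean is pulled around by a small fraction of gross outliers, while the median, a robust estimator, stays near the center of the clean data. The contamination fraction and scales are arbitrary.

    import numpy as np

    rng = np.random.default_rng(9)

    clean = rng.normal(0.0, 1.0, 950)        # 95% well-behaved normal data
    outliers = rng.normal(0.0, 50.0, 50)     # 5% gross outliers
    data = np.concatenate([clean, outliers])

    # The mean is sensitive to the outliers; the median is not.
    print(data.mean(), np.median(data))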

Hermite polynomials

More generally, its nth derivative is \phi^{(n)}(x) = (-1)^n \mathrm{He}_n(x)\, \phi(x), where \mathrm{He}_n(x) is the nth (probabilist) Hermite polynomial.
Here \phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} is the probability density function for the normal distribution with expected value 0 and standard deviation 1.
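A small numerical check of the identity for n = 1 and n = 2, using scipy.special.eval_hermitenorm for the probabilists' Hermite polynomials and the hand-computed derivatives \phi'(x) = -x\phi(x) and \phi''(x) = (x^2 - 1)\phi(x):

    import numpy as np
    from scipy.special import eval_hermitenorm

    def phi(x):
        return np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

    x = np.linspace(-3, 3, 7)

    # phi'(x) = -x * phi(x) should equal (-1)^1 * He_1(x) * phi(x), with He_1(x) = x.
    print(np.allclose(-x * phi(x), (-1) ** 1 * eval_hermitenorm(1, x) * phi(x)))

    # phi''(x) = (x^2 - 1) * phi(x) should equal (-1)^2 * He_2(x) * phi(x), with He_2(x) = x^2 - 1.
    print(np.allclose((x ** 2 - 1) * phi(x), (-1) ** 2 * eval_hermitenorm(2, x) * phi(x)))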

Probit

The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function as \operatorname{probit}(p) = \sqrt{2}\, \operatorname{erf}^{-1}(2p - 1).
In probability theory and statistics, the probit function is the quantile function associated with the standard normal distribution, which is commonly denoted as N(0,1).
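A short SciPy sketch of the probit function and its expression through the inverse error function; the probability levels are arbitrary.

    import numpy as np
    from scipy import stats
    from scipy.special import erfinv

    p = np.array([0.025, 0.5, 0.975])

    print(stats.norm.ppf(p))                 # about [-1.96, 0.0, 1.96]
    print(np.sqrt(2) * erfinv(2 * p - 1))    # the same values via the inverse error function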

1.96

In particular, the quantile z_{0.975} is 1.96; therefore a normal random variable will lie outside the interval \mu \pm 1.96\sigma in only 5% of cases.
In probability and statistics, 1.96 is the approximate value of the 97.5 percentile point of the normal distribution.

Dirac delta function

However, one can define the normal distribution with zero variance as a generalized function; specifically, as Dirac's "delta function" \delta translated by the mean \mu, that is f(x) = \delta(x - \mu). Its CDF is then the Heaviside step function translated by the mean \mu, namely F(x) = H(x - \mu).
In applied mathematics, as we have done here, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero.
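A numerical sketch of that weak limit: integrating a narrowing Gaussian against a smooth test function approaches the value of that function at the center. The test function cos(x) and the integration grid are arbitrary choices.

    import numpy as np

    def gaussian(x, sigma):
        return np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

    x = np.linspace(-1.0, 1.0, 200_001)
    dx = x[1] - x[0]
    g = np.cos(x)   # smooth test function with g(0) = 1

    # As sigma shrinks, the integral of gaussian * g tends to g(0) = 1,
    # which is exactly what integrating against the delta function would give.
    for sigma in (0.5, 0.1, 0.01):
        print(sigma, np.sum(gaussian(x, sigma) * g) * dx)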