# Normality test

In statistics, normality tests are used to determine whether a data set is well modeled by a normal distribution and to compute how likely it is that a random variable underlying the data set is normally distributed.

## Related articles

### Goodness of fit

Such measures can be used in statistical hypothesis testing, e.g. to test for normality of residuals, to test whether two samples are drawn from identical distributions (see Kolmogorov–Smirnov test), or whether outcome frequencies follow a specified distribution (see Pearson's chi-squared test).
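As a minimal sketch of the Kolmogorov–Smirnov test applied to normality (assuming SciPy is available; the samples, sizes, and seed are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_sample = rng.normal(loc=0.0, scale=1.0, size=500)
skewed_sample = rng.exponential(scale=1.0, size=500)

# Kolmogorov-Smirnov test against a fully specified N(0, 1) reference.
# If the mean and variance were instead estimated from the same data,
# the plain KS p-value would be invalid (that case calls for Lilliefors).
stat_norm, p_norm = stats.kstest(normal_sample, "norm")
stat_skew, p_skew = stats.kstest(skewed_sample, "norm")

print(p_norm, p_skew)  # exponential data should be rejected decisively
```

Note that `kstest` here tests against one fixed distribution; testing whether two samples come from the same distribution uses the two-sample variant, `scipy.stats.ks_2samp`.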

### 68–95–99.7 rule

A quick check computes the z-score of each observation (the number of sample standard deviations by which it lies above or below the sample mean) and compares the observed proportions within one, two, and three standard deviations to the 68–95–99.7 rule.

It is also used as a simple test for outliers if the population is assumed normal, and as a normality test if the population is potentially not normal.
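This check can be sketched in a few lines of NumPy (the sample and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=10_000)

# Standardize with the sample mean and standard deviation, then measure
# what fraction of observations fall within 1, 2, and 3 sigma.
z = (x - x.mean()) / x.std(ddof=1)
within = [float(np.mean(np.abs(z) <= k)) for k in (1, 2, 3)]
print(within)  # roughly [0.68, 0.95, 0.997] for normal data
```

Marked departures from those three proportions suggest the data are not well modeled by a normal distribution.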

### Shapiro–Wilk test

The Shapiro–Wilk test is a test of normality in frequentist statistics.
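A hedged example using SciPy's implementation, `scipy.stats.shapiro` (sample sizes and seed are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(size=200)   # genuinely normal data
y = rng.uniform(size=200)  # flat data: expect rejection

# Each call returns the W statistic and a p-value; a small p-value
# is evidence against the null hypothesis of normality.
w_x, p_x = stats.shapiro(x)
w_y, p_y = stats.shapiro(y)
print(p_x, p_y)
```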

### Lilliefors test

In statistics, the Lilliefors test is a normality test based on the Kolmogorov–Smirnov test.
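The key point is that the reference normal has its mean and standard deviation estimated from the same data, so the ordinary Kolmogorov–Smirnov null distribution does not apply. The sketch below (pure NumPy/SciPy, not the tabulated Lilliefors version) approximates the corrected null distribution by Monte Carlo; the function name and simulation count are illustrative:

```python
import numpy as np
from scipy.stats import norm

def lilliefors_mc(x, n_sim=2000, seed=0):
    # KS distance between the empirical CDF and a normal CDF fitted to
    # the same data; p-value approximated by simulating normal samples
    # of the same size under the null.
    x = np.asarray(x, dtype=float)
    n = len(x)
    rng = np.random.default_rng(seed)

    def ks_stat(sample):
        z = np.sort((sample - sample.mean()) / sample.std(ddof=1))
        cdf = norm.cdf(z)
        i = np.arange(1, n + 1)
        return max((i / n - cdf).max(), (cdf - (i - 1) / n).max())

    d_obs = ks_stat(x)
    d_null = np.array([ks_stat(rng.normal(size=n)) for _ in range(n_sim)])
    return d_obs, float(np.mean(d_null >= d_obs))

rng = np.random.default_rng(3)
d, p = lilliefors_mc(rng.exponential(size=100))
print(d, p)  # skewed data: large distance, small p-value
```

A production analysis would more likely use `statsmodels.stats.diagnostic.lilliefors`, which applies the standard tabulated critical values.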

### Sample maximum and minimum

A simple back-of-the-envelope test takes the sample maximum and minimum and computes their z-scores, or more properly t-statistics.

The sample extrema can be used for a simple normality test, specifically of kurtosis: one computes the t-statistic of the sample maximum and minimum (subtracting the sample mean and dividing by the sample standard deviation); if these values are unusually large for the sample size (per the three-sigma rule and the associated table, or more precisely a Student's t-distribution), then the kurtosis of the sample distribution deviates significantly from that of the normal distribution.
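A NumPy sketch of this back-of-the-envelope check, using a heavy-tailed Student's t sample (df = 3) as illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_t(df=3, size=1000)  # heavy tails, excess kurtosis

# t-statistics of the sample extrema: subtract the sample mean and
# divide by the sample standard deviation.
m, s = x.mean(), x.std(ddof=1)
t_max = (x.max() - m) / s
t_min = (x.min() - m) / s
print(t_max, t_min)
```

For 1,000 normal observations the expected extreme is roughly ±3.2 standard deviations; heavy-tailed data typically produce much larger standardized extrema.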

### Skewness

Historically, the third and fourth standardized moments (skewness and kurtosis) were some of the earliest tests for normality.

D'Agostino's K-squared test is a goodness-of-fit normality test based on sample skewness and sample kurtosis.

### Kurtosis

D'Agostino's K-squared test is a goodness-of-fit normality test based on a combination of the sample skewness and sample kurtosis, as is the Jarque–Bera test for normality.

### D'Agostino's K-squared test

Frequently in the literature related to normality testing, the skewness and kurtosis are denoted as √β₁ and β₂ respectively.
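Both moment-based tests mentioned above are available in SciPy; a hedged sketch (sample names and seed are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(size=500)
y = rng.lognormal(size=500)  # strongly right-skewed

# D'Agostino's K-squared combines sample skewness and kurtosis.
k2_x, p_x = stats.normaltest(x)
k2_y, p_y = stats.normaltest(y)

# The Jarque-Bera test uses the same two moments with a different
# weighting; both should reject decisively for the lognormal sample.
jb_y, jb_p_y = stats.jarque_bera(y)
print(p_x, p_y, jb_p_y)
```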

### Bayes factor

Spiegelhalter suggests using a Bayes factor to compare normality with a different class of distributional alternatives.

### Histogram

An informal approach to testing normality is to compare a histogram of the sample data to a normal probability curve.
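The same comparison can be made numerically rather than visually: bin the data and compare observed bin counts with the counts a fitted normal curve would predict. A sketch (bin count and seed are illustrative; a plot would overlay the normal pdf on the histogram):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
x = rng.normal(loc=5.0, scale=1.5, size=5000)

# Observed counts per bin versus the counts predicted by a normal
# distribution fitted with the sample mean and standard deviation.
edges = np.linspace(x.min(), x.max(), 21)
observed, _ = np.histogram(x, bins=edges)
cdf = norm.cdf(edges, loc=x.mean(), scale=x.std(ddof=1))
expected = len(x) * np.diff(cdf)

max_rel_gap = float(np.max(np.abs(observed - expected)) / len(x))
print(max_rel_gap)  # small for approximately normal data
```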

### Quantile

One might proceed by regressing the data against the quantiles of a normal distribution with the same mean and variance as the sample; lack of fit to the regression line suggests a departure from normality.

### Normal probability plot

A graphical tool for assessing normality is the normal probability plot, a quantile-quantile plot (QQ plot) of the standardized data against the standard normal distribution.
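SciPy's `probplot` computes the quantile pairs and the least-squares line behind such a plot; a correlation `r` close to 1 indicates an approximately straight plot and hence approximate normality. A minimal sketch (sample size and seed are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(size=300)

# probplot returns the theoretical normal quantiles paired with the
# ordered data, plus the fitted line's slope, intercept, and r.
(osm, osr), (slope, intercept, r) = stats.probplot(x, dist="norm")
print(r)  # near 1.0 for normal data
```

Passing `plot=plt.gca()` (with matplotlib) would draw the Q–Q plot itself; here only the numerical summary is used.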
