# Uniform distribution (continuous)

In probability theory and statistics, the continuous uniform distribution or rectangular distribution is a family of symmetric probability distributions such that for each member of the family, all intervals of the same length on the distribution's support are equally probable.

## Related articles

### Maximum entropy probability distribution

The distribution is often abbreviated U(a,b). It is the maximum entropy probability distribution for a random variable X under no constraint other than that it is contained in the distribution's support.

The uniform distribution on the interval [a,b] is the maximum entropy distribution among all continuous distributions which are supported in the interval [a, b], and thus the probability density is 0 outside of the interval.

### Probability density function

The probability density function of the continuous uniform distribution on [a, b] is f(x) = 1/(b − a) for a ≤ x ≤ b, and f(x) = 0 otherwise.

Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval [0, ½] has probability density f(x) = 2 for 0 ≤ x ≤ ½ and f(x) = 0 elsewhere.
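The density described above can be sketched directly; the function name below is illustrative, not from any particular library.

```python
def uniform_pdf(x, a=0.0, b=0.5):
    """Density of the continuous uniform distribution on [a, b]."""
    if a <= x <= b:
        return 1.0 / (b - a)
    return 0.0

# On [0, 1/2] the density is 2 inside the interval and 0 elsewhere,
# illustrating that a density (unlike a probability) may exceed 1.
print(uniform_pdf(0.25))  # 2.0
print(uniform_pdf(0.75))  # 0.0
```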

### Symmetric probability distribution

The continuous uniform distribution is a standard example of a symmetric probability distribution: its density is constant on [a, b] and therefore symmetric about the midpoint (a + b)/2.

### Probability distribution

If the distribution of X is continuous, then X is called a continuous random variable. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others.

### Cumulant

For n ≥ 2, the nth cumulant of the uniform distribution on the interval [−1/2, 1/2] is B_n/n, where B_n is the nth Bernoulli number.

The cumulants of the uniform distribution on the interval [−1, 0] are κ_n = B_n/n, where B_n is the nth Bernoulli number.

### Moment-generating function

The moment-generating function of U(a, b) is M(t) = (e^(tb) − e^(ta)) / (t(b − a)) for t ≠ 0, with M(0) = 1.

### Order statistic

Let X_(k) be the kth order statistic from a sample of n independent draws from the standard uniform distribution.

When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.
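The reduction mentioned above rests on the probability integral transform: if X has a continuous CDF F, then F(X) is uniform on (0, 1). A minimal simulation sketch, using the exponential distribution as the example:

```python
import math
import random

random.seed(0)

# Probability integral transform: if X has continuous CDF F, then F(X) ~ U(0, 1).
# Illustrated with the exponential distribution, whose CDF is F(x) = 1 - exp(-x).
xs = [random.expovariate(1.0) for _ in range(100_000)]
us = [1.0 - math.exp(-x) for x in xs]

mean_u = sum(us) / len(us)
print(round(mean_u, 2))  # close to 0.5, the mean of U(0, 1)
```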

### Probability theory

Important continuous distributions include the continuous uniform, normal, exponential, gamma and beta distributions.

### Cumulative distribution function

The cumulative distribution function is F(x) = (x − a)/(b − a) for a ≤ x ≤ b, with F(x) = 0 for x < a and F(x) = 1 for x > b.

As an example, suppose X is uniformly distributed on the unit interval [0,1].
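The CDF above is a clamped linear ramp; a minimal sketch (the helper name is illustrative):

```python
def uniform_cdf(x, a=0.0, b=1.0):
    """CDF of the uniform distribution on [a, b]: F(x) = (x - a) / (b - a)."""
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

# X uniform on the unit interval [0, 1]: P(X <= 0.3) = 0.3.
print(uniform_cdf(0.3))  # 0.3
print(uniform_cdf(-1.0), uniform_cdf(2.0))  # 0.0 1.0
```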

### Random variable

For a random variable following this distribution, the expected value is m_1 = (a + b)/2 and the variance is (b − a)^2/12.

Given any interval, a random variable called a "continuous uniform random variable" (CURV) is defined to take any value in the interval with equal likelihood.

### Inverse transform sampling

If X has a standard uniform distribution, then by the inverse transform sampling method, Y = −(1/λ) ln(X) has an exponential distribution with rate parameter λ.

Inverse transformation sampling takes a uniform sample u between 0 and 1, interpreted as a probability, and then returns the largest number x from the domain of the distribution P(X) such that P(−∞ < X ≤ x) ≤ u.
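A minimal sketch of the exponential case described above; the function name is illustrative:

```python
import math
import random

random.seed(42)

def exponential_via_inverse_transform(lam, rng=random.random):
    """Draw Y = -(1/lam) * ln(U) with U ~ U(0, 1); Y is Exponential(lam)."""
    u = rng()
    while u == 0.0:  # guard against log(0)
        u = rng()
    return -math.log(u) / lam

lam = 2.0
sample = [exponential_via_inverse_transform(lam) for _ in range(100_000)]
print(round(sum(sample) / len(sample), 2))  # near the exponential mean 1/lam = 0.5
```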

### Beta distribution

Then the probability distribution of X_(k) is a Beta distribution with parameters k and n − k + 1.

The differential entropy of the beta distribution is negative for all values of α and β greater than zero, except at α = β = 1 (for which values the beta distribution is the same as the uniform distribution), where the differential entropy reaches its maximum value of zero.
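The Beta(k, n − k + 1) law of the kth order statistic has mean k/(n + 1), which a quick simulation can check; the parameter choices below are arbitrary:

```python
import random

random.seed(1)

n, k = 9, 3          # sample size and order-statistic index (both illustrative)
reps = 50_000

# kth order statistic of n i.i.d. U(0, 1) draws ~ Beta(k, n - k + 1),
# whose mean is k / (n + 1).
vals = [sorted(random.random() for _ in range(n))[k - 1] for _ in range(reps)]
emp_mean = sum(vals) / reps
print(round(emp_mean, 2))  # near k / (n + 1) = 0.3
```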

### Triangular distribution

The sum of two independent, identically distributed uniform random variables yields a symmetric triangular distribution.

This distribution for a = 0, b = 1 and c = 0 is the distribution of X = |X_1 − X_2|, where X_1, X_2 are two independent random variables with standard uniform distribution.
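Both constructions are one-liners to simulate; a quick sanity check on their means:

```python
import random

random.seed(2)

# Sum of two independent U(0, 1) variables: symmetric triangular on [0, 2],
# peaking at 1, so the mean is 1.
s = [random.random() + random.random() for _ in range(100_000)]
mean_s = sum(s) / len(s)
print(round(mean_s, 2))  # near 1.0

# |X1 - X2| gives the triangular distribution with a = 0, b = 1, c = 0,
# whose mean is 1/3.
d = [abs(random.random() - random.random()) for _ in range(100_000)]
mean_d = sum(d) / len(d)
print(round(mean_d, 2))  # near 1/3
```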

### Irwin–Hall distribution

The Irwin–Hall distribution is the sum of n i.i.d. U(0,1) distributions.

In probability and statistics, the Irwin–Hall distribution, named after Joseph Oscar Irwin and Philip Hall, is a probability distribution for a random variable defined as the sum of a number of independent random variables, each having a uniform distribution.
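The sum-of-uniforms construction is direct to simulate; a sketch checking the mean n/2 and variance n/12 (the helper name is illustrative):

```python
import random

random.seed(3)

def irwin_hall(n, rng=random.random):
    """Sum of n i.i.d. U(0, 1) variables: a draw from the Irwin-Hall distribution."""
    return sum(rng() for _ in range(n))

n = 12
draws = [irwin_hall(n) for _ in range(50_000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
print(round(mean, 1), round(var, 1))  # near n/2 = 6 and n/12 = 1
```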

### Exponential distribution

A conceptually very simple method for generating exponential variates is based on inverse transform sampling: given a random variate U drawn from the uniform distribution on the unit interval (0, 1), the variate T = −(1/λ) ln(U) has an exponential distribution with rate parameter λ.

### P-value

In statistics, when a p-value is used as a test statistic for a simple null hypothesis, and the distribution of the test statistic is continuous, then the p-value is uniformly distributed between 0 and 1 if the null hypothesis is true.

When the null hypothesis is true, if it is a simple (point) null hypothesis and the underlying random variable is continuous, then the probability distribution of the p-value is uniform on the interval [0, 1].
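This uniformity can be checked by simulating a z-test under a true null: the rejection rate at level 0.05 should be about 0.05. A sketch using only the standard library (the helper name is illustrative):

```python
import math
import random

random.seed(4)

def p_value_two_sided(z):
    """Two-sided p-value for a z-statistic under a standard normal null."""
    return math.erfc(abs(z) / math.sqrt(2.0))  # = 2 * (1 - Phi(|z|))

# Under a true null, z ~ N(0, 1), and the p-value is uniform on [0, 1].
ps = [p_value_two_sided(random.gauss(0.0, 1.0)) for _ in range(100_000)]
rate = sum(p < 0.05 for p in ps) / len(ps)
print(round(rate, 2))  # near 0.05
```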

### Maximum likelihood estimation

In the context of estimation by the method of maximum likelihood: although both the sample mean and the sample median are unbiased estimators of the midpoint, neither is as efficient as the sample mid-range, i.e. the arithmetic mean of the sample maximum and the sample minimum, which is the UMVU estimator of the midpoint (and also the maximum likelihood estimate).

From the point of view of Bayesian inference, MLE is a special case of maximum a posteriori estimation (MAP) that assumes a uniform prior distribution of the parameters.
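The efficiency claim about the mid-range is easy to see in simulation: for uniform samples, the mid-range estimator of the midpoint has visibly smaller variance than the sample mean. A sketch under arbitrary illustrative settings:

```python
import random

random.seed(5)

n, reps = 20, 20_000
a, b = 0.0, 1.0  # true midpoint is 0.5

mean_est, midrange_est = [], []
for _ in range(reps):
    sample = [random.uniform(a, b) for _ in range(n)]
    mean_est.append(sum(sample) / n)
    midrange_est.append((min(sample) + max(sample)) / 2.0)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# The mid-range is the markedly more efficient estimator of the midpoint.
print(round(variance(mean_est), 4), round(variance(midrange_est), 4))
```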

### Pseudorandom number generator

Many programming languages come with implementations to generate pseudo-random numbers which are effectively distributed according to the standard uniform distribution.

Numbers selected from a non-uniform probability distribution can be generated using a uniform distribution PRNG and a function that relates the two distributions.
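The simplest such relating function is the affine rescaling a + (b − a)U, which turns a standard-uniform PRNG output into U(a, b); a sketch (the helper name is illustrative):

```python
import random

random.seed(6)

def uniform_ab(a, b, rng=random.random):
    """Rescale a standard-uniform variate to U(a, b): a + (b - a) * U."""
    return a + (b - a) * rng()

draws = [uniform_ab(-3.0, 7.0) for _ in range(100_000)]
print(round(min(draws), 1), round(max(draws), 1))  # stays within [-3, 7]
print(round(sum(draws) / len(draws), 1))  # near the midpoint 2.0
```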

### Maximum spacing estimation

This follows for the same reasons as estimation for the discrete distribution, and can be seen as a very simple case of maximum spacing estimation.

Suppose {x_(1), …, x_(n)} is the ordered sample from a uniform distribution U(a, b) with unknown endpoints a and b.
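For this endpoint-estimation problem, the maximum-likelihood estimates are simply the sample minimum and maximum, which are biased inward; a standard unbiased correction expands each end by (max − min)/(n − 1). A simulation sketch (helper name and parameter values are illustrative):

```python
import random

random.seed(7)

def endpoint_estimates(sample):
    """MLE and bias-corrected estimates of the endpoints of U(a, b).

    The MLEs are (min, max). Since E[min] = a + (b - a)/(n + 1) and
    E[max] = b - (b - a)/(n + 1), expanding each end by (max - min)/(n - 1)
    yields unbiased estimators of a and b.
    """
    lo, hi, n = min(sample), max(sample), len(sample)
    spread = (hi - lo) / (n - 1)
    return (lo, hi), (lo - spread, hi + spread)

a, b, n, reps = 2.0, 5.0, 10, 50_000
mle_hi = corr_hi = 0.0
for _ in range(reps):
    (lo, hi), (clo, chi) = endpoint_estimates([random.uniform(a, b) for _ in range(n)])
    mle_hi += hi
    corr_hi += chi
print(round(mle_hi / reps, 2))   # biased below b = 5 (near 4.73)
print(round(corr_hi / reps, 2))  # near b = 5
```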

### Box–Muller transform

However, there is an exact method, the Box–Muller transformation, which uses the inverse transform to convert two independent uniform random variables into two independent normally distributed random variables.

The Box–Muller transform, by George Edward Pelham Box and Mervin Edgar Muller, is a pseudo-random number sampling method for generating pairs of independent, standard, normally distributed (zero expectation, unit variance) random numbers, given a source of uniformly distributed random numbers.
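A minimal sketch of the basic (trigonometric) form of the transform, checking that the output has roughly zero mean and unit variance:

```python
import math
import random

random.seed(8)

def box_muller(rng=random.random):
    """Turn two independent U(0, 1) draws into two independent N(0, 1) draws."""
    u1 = rng()
    while u1 == 0.0:  # avoid log(0)
        u1 = rng()
    u2 = rng()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

zs = [z for _ in range(50_000) for z in box_muller()]
mean = sum(zs) / len(zs)
var = sum((z - mean) ** 2 for z in zs) / len(zs)
print(round(mean, 1), round(var, 1))  # near 0 and 1
```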

### Antithetic variates

One interesting property of the standard uniform distribution is that if u_1 has a standard uniform distribution, then so does 1 − u_1. This property can be used for generating antithetic variates, among other things.

If the law of the variable X follows a uniform distribution along [0, 1], the first sample will be u_1, …, u_n, where, for any given i, u_i is obtained from U(0, 1). The second sample is built from u'_1, …, u'_n, where, for any given i, u'_i = 1 − u_i.
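A sketch of the variance reduction this buys, using E[exp(U)] = e − 1 as an illustrative Monte Carlo target (for a monotone integrand, averaging each pair (u, 1 − u) reduces variance):

```python
import math
import random

random.seed(9)

n = 100_000
us = [random.random() for _ in range(n)]

# Plain Monte Carlo versus antithetic pairing for E[exp(U)] = e - 1.
plain = [math.exp(u) for u in us]
antithetic = [(math.exp(u) + math.exp(1.0 - u)) / 2.0 for u in us]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(round(sum(antithetic) / n, 2))              # near e - 1 ~ 1.72
print(variance(antithetic) < variance(plain))     # variance reduction holds
```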

### Normal distribution

The normal distribution is an important example where the inverse transform method is not efficient.

All these algorithms rely on the availability of a random number generator U capable of producing uniform random variates.

### Minimum-variance unbiased estimator

For a sample of size n from the uniform distribution on [0, θ], the minimum-variance unbiased estimator (UMVUE) for the maximum θ is given by ((n + 1)/n) m = m + m/n, where m is the sample maximum.

Further, for other distributions the sample mean and sample variance are not in general MVUEs – for a uniform distribution with unknown upper and lower bounds, the mid-range is the MVUE for the population mean.
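The bias of the raw sample maximum, and the effect of the (n + 1)/n correction, show up clearly in simulation; parameter values below are illustrative:

```python
import random

random.seed(10)

theta, n, reps = 4.0, 5, 100_000

# For U(0, theta), E[max] = theta * n/(n + 1), so the raw maximum is biased;
# the UMVUE (1 + 1/n) * max corrects this.
naive = umvue = 0.0
for _ in range(reps):
    m = max(random.uniform(0.0, theta) for _ in range(n))
    naive += m
    umvue += (1.0 + 1.0 / n) * m
print(round(naive / reps, 2))  # biased: near theta * n/(n + 1) ~ 3.33
print(round(umvue / reps, 2))  # near theta = 4
```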

### Mid-range

For example, for a continuous uniform distribution with unknown maximum and minimum, the mid-range is the UMVU estimator for the mean.

### Bates distribution

Bates distribution — similar to the Irwin–Hall distribution, but rescaled by n. Like the Irwin–Hall distribution, in the degenerate case n = 1 the Bates distribution reduces to the uniform distribution on [0, 1].

In probability and statistics, the Bates distribution, named after Grace Bates, is a probability distribution of the mean of a number of statistically independent uniformly distributed random variables on the unit interval.
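The mean-of-uniforms construction is direct to simulate; a sketch checking the mean 1/2 and variance 1/(12n) (the helper name is illustrative):

```python
import random

random.seed(11)

def bates(n, rng=random.random):
    """Mean of n i.i.d. U(0, 1) variables: a draw from the Bates distribution."""
    return sum(rng() for _ in range(n)) / n

draws = [bates(10) for _ in range(50_000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
print(round(mean, 2), round(var, 4))  # near 1/2 and 1/120
```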