Uniform distribution (continuous)

In probability theory and statistics, the continuous uniform distribution or rectangular distribution is a family of symmetric probability distributions.

Maximum entropy probability distribution

It is the maximum entropy probability distribution for a random variable X under no constraint other than that it is contained in the distribution's support.
The uniform distribution on the interval [a,b] is the maximum entropy distribution among all continuous distributions which are supported in the interval [a, b], and thus the probability density is 0 outside of the interval.

Symmetric probability distribution

The uniform distribution on [a, b] is symmetric about the midpoint (a + b)/2 of its support.

Probability density function

The probability density function of the continuous uniform distribution is f(x) = 1/(b − a) for a ≤ x ≤ b, and f(x) = 0 otherwise.
Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval [0, ½] has probability density f(x) = 2 for 0 ≤ x ≤ ½ and f(x) = 0 elsewhere.
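The [0, ½] example above can be checked numerically. The sketch below (illustrative interval and grid size, not from the article) confirms that the density is 2 on the interval yet still integrates to 1:

```python
# Sketch: the density of a uniform distribution on [0, 1/2] is constant at 2,
# and, like any density, integrates to 1 even though it exceeds 1 pointwise.

def uniform_pdf(x, a=0.0, b=0.5):
    """Density of the continuous uniform distribution on [a, b]."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

# Midpoint-rule integration over an interval that brackets the support.
n = 100_000
lo, hi = -1.0, 1.0
h = (hi - lo) / n
total = sum(uniform_pdf(lo + (i + 0.5) * h) * h for i in range(n))
```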

Probability distribution

There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others.

Variance

For a random variable following this distribution, the expected value is m_1 = (a + b)/2 and the variance is (b − a)²/12.
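These two moment formulas can be verified by direct numerical integration. A sketch with arbitrarily chosen endpoints a = 2, b = 6:

```python
# Sketch: check m1 = (a + b)/2 and Var = (b - a)^2 / 12 for U(a, b)
# by midpoint-rule integration (endpoints chosen arbitrarily).
a, b = 2.0, 6.0
n = 200_000
h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]
density = 1.0 / (b - a)                      # constant density on [a, b]
mean = sum(x * density * h for x in xs)      # should be (a + b)/2 = 4
var = sum((x - mean) ** 2 * density * h for x in xs)  # should be 16/12
```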

Cumulant

For n ≥ 2, the nth cumulant of the uniform distribution on the interval [−1/2, 1/2] is B_n/n, where B_n is the nth Bernoulli number.

Moment-generating function

The moment-generating function is M(t) = (e^(tb) − e^(ta)) / (t(b − a)) for t ≠ 0, with M(0) = 1.
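The closed form can be compared against a direct numerical evaluation of E[e^(tX)]. A sketch with illustrative values a = 0, b = 1, t = 0.7:

```python
import math

# Sketch: the MGF of U(a, b) is (e^{tb} - e^{ta}) / (t (b - a)) for t != 0.
# Compare the closed form with a midpoint-rule estimate of E[exp(t X)].
a, b, t = 0.0, 1.0, 0.7
mgf_closed = (math.exp(t * b) - math.exp(t * a)) / (t * (b - a))

n = 100_000
h = (b - a) / n
mgf_numeric = sum(math.exp(t * (a + (i + 0.5) * h)) * h for i in range(n)) / (b - a)
```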

Order statistic

Let X_(k) be the kth order statistic from this sample.
When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.

Cumulative distribution function

The cumulative distribution function is F(x) = 0 for x < a, F(x) = (x − a)/(b − a) for a ≤ x ≤ b, and F(x) = 1 for x > b.
As an example, suppose X is uniformly distributed on the unit interval [0,1].
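A minimal sketch of the piecewise CDF; on the unit interval it reduces to F(x) = x, as in the example above:

```python
def uniform_cdf(x, a=0.0, b=1.0):
    """CDF of U(a, b): 0 below a, (x - a)/(b - a) on [a, b], 1 above b."""
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)
```

On the unit interval, `uniform_cdf(0.25)` is simply 0.25.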

Probability theory

Important continuous distributions include the continuous uniform, normal, exponential, gamma and beta distributions.

Random variable

For a random variable following this distribution, the expected value is m_1 = (a + b)/2 and the variance is (b − a)²/12.
Given any interval, a random variable called a "continuous uniform random variable" (CURV) is defined to take any value in the interval with equal likelihood.

Inverse transform sampling

This property is known as the inversion method: the continuous standard uniform distribution can be used to generate random numbers for any other continuous distribution.
Inverse transform sampling takes a uniform sample u between 0 and 1, interpreted as a probability, and then returns the largest number x from the domain of the distribution such that P(X ≤ x) ≤ u.
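As an illustration of the inversion method (the target distribution here is an arbitrary choice for the sketch, not from the article): the CDF F(x) = x² on [0, 1], which is the distribution of the maximum of two independent standard uniforms, inverts to F⁻¹(u) = √u:

```python
import math
import random

# Sketch of the inversion method: to sample from the distribution with CDF
# F(x) = x^2 on [0, 1] (the max of two independent U(0,1) variables),
# draw u ~ U(0, 1) and return F^{-1}(u) = sqrt(u).
rng = random.Random(42)
samples = [math.sqrt(rng.random()) for _ in range(100_000)]
sample_mean = sum(samples) / len(samples)   # the true mean is 2/3
```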

Triangular distribution

This distribution, for a = 0, b = 1 and c = 0, is the distribution of X = |X_1 − X_2|, where X_1 and X_2 are two independent random variables with standard uniform distribution.
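This construction is easy to check by simulation; the triangular distribution of |X_1 − X_2| on [0, 1] with mode 0 has mean 1/3 (a known property, used here as the check):

```python
import random

# Sketch: |X1 - X2| for independent standard uniforms X1, X2 follows a
# triangular distribution on [0, 1] with mode 0; its mean is 1/3.
rng = random.Random(0)
diffs = [abs(rng.random() - rng.random()) for _ in range(100_000)]
mean_diff = sum(diffs) / len(diffs)
```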

Beta distribution

Then the probability distribution of X_(k) is a beta distribution with parameters k and n − k + 1.
The differential entropy of the beta distribution is negative for all values of α and β greater than zero, except at α = β = 1 (for which values the beta distribution is the same as the uniform distribution), where the differential entropy reaches its maximum value of zero.
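The Beta(k, n − k + 1) claim can be checked through its mean, k/(n + 1). A simulation sketch with arbitrarily chosen n = 5, k = 2:

```python
import random

# Sketch: for a sample of size n from U(0, 1), the kth order statistic
# follows Beta(k, n - k + 1), whose mean is k / (n + 1).
rng = random.Random(1)
n, k = 5, 2
stats = [sorted(rng.random() for _ in range(n))[k - 1] for _ in range(50_000)]
mean_kth = sum(stats) / len(stats)   # Beta(2, 4) has mean 2/6 = 1/3
```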

Irwin–Hall distribution

In probability and statistics, the Irwin–Hall distribution, named after Joseph Oscar Irwin and Philip Hall, is a probability distribution for a random variable defined as the sum of a number of independent random variables, each having a uniform distribution.
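A simulation sketch of the Irwin–Hall construction (n = 12 is an arbitrary choice; it is the classic case because the sum then has variance 1): the sum of n standard uniforms has mean n/2 and variance n/12.

```python
import random

# Sketch: the Irwin-Hall distribution is the sum of n independent U(0, 1)
# variables; its mean is n/2 and its variance is n/12.
rng = random.Random(2)
n = 12
sums = [sum(rng.random() for _ in range(n)) for _ in range(100_000)]
mean_sum = sum(sums) / len(sums)                              # ~ 6
var_sum = sum((s - mean_sum) ** 2 for s in sums) / len(sums)  # ~ 1
```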

Maximum likelihood estimation

The latter is appropriate in the context of estimation by the method of maximum likelihood. Although both the sample mean and the sample median are unbiased estimators of the midpoint, neither is as efficient as the sample mid-range, i.e. the arithmetic mean of the sample maximum and the sample minimum, which is the UMVU estimator of the midpoint (and also the maximum likelihood estimate).
From the point of view of Bayesian inference, MLE is a special case of maximum a posteriori estimation (MAP) that assumes a uniform prior distribution of the parameters.
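The efficiency claim about the sample mid-range can be seen in a small Monte Carlo sketch (sample size and repetition count are illustrative choices): all three estimators are unbiased for the midpoint, but the mid-range has by far the smallest variance.

```python
import random
import statistics

# Sketch: for U(0, 1) the sample mean, median, and mid-range all estimate
# the midpoint 1/2, but the mid-range has the smallest sampling variance.
rng = random.Random(3)

def estimates(n=100, reps=5_000):
    means, medians, midranges = [], [], []
    for _ in range(reps):
        xs = [rng.random() for _ in range(n)]
        means.append(sum(xs) / n)
        medians.append(statistics.median(xs))
        midranges.append((min(xs) + max(xs)) / 2)
    return means, medians, midranges

means, medians, midranges = estimates()
var_mean = statistics.pvariance(means)
var_median = statistics.pvariance(medians)
var_midrange = statistics.pvariance(midranges)
```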

Exponential distribution

A conceptually very simple method for generating exponential variates is based on inverse transform sampling: given a random variate U drawn from the uniform distribution on the unit interval (0, 1), the variate T = −ln(U)/λ has an exponential distribution with rate parameter λ.
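A sketch of this recipe with an illustrative rate λ = 2 (the sample mean should approach 1/λ):

```python
import math
import random

# Sketch: inverse transform sampling for the exponential distribution with
# rate lam: if U ~ U(0, 1), then T = -ln(U) / lam is Exp(lam).
rng = random.Random(4)
lam = 2.0
draws = [-math.log(rng.random()) / lam for _ in range(100_000)]
mean_t = sum(draws) / len(draws)   # the true mean is 1/lam = 0.5
```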

P-value

In statistics, when a p-value is used as a test statistic for a simple null hypothesis, and the distribution of the test statistic is continuous, then the p-value is uniformly distributed between 0 and 1 if the null hypothesis is true.
When the null hypothesis is true, if it takes the form, and the underlying random variable is continuous, then the probability distribution of the p-value is uniform on the interval [0,1].
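This uniformity can be demonstrated by simulation. In the sketch below the test statistic is a standard normal z (an illustrative choice) and the two-sided p-value is 2(1 − Φ(|z|)); under the null the p-values should average 0.5 and fall below 0.05 about 5% of the time.

```python
import math
import random

# Sketch: under a true null with a continuous test statistic, p-values are
# uniform on [0, 1]. Statistic: standard normal z; p = 2 * (1 - Phi(|z|)).
def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

rng = random.Random(5)
pvals = [2.0 * (1.0 - phi(abs(rng.gauss(0.0, 1.0)))) for _ in range(100_000)]
mean_p = sum(pvals) / len(pvals)                          # ~ 0.5
reject_rate = sum(p < 0.05 for p in pvals) / len(pvals)   # ~ 0.05
```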

Maximum spacing estimation

This follows for the same reasons as estimation for the discrete distribution, and can be seen as a very simple case of maximum spacing estimation.
Suppose {x_(1), ..., x_(n)} is the ordered sample from a uniform distribution U(a, b) with unknown endpoints a and b.

Antithetic variates

If U is uniformly distributed on [0, 1], then 1 − U is as well; this property can be used for generating antithetic variates, among other things. If the law of the variable X follows a uniform distribution on [0, 1], the first sample is (u_1, ..., u_n), where, for any given i, u_i is obtained from U(0, 1); the second, antithetic sample is then (1 − u_1, ..., 1 − u_n).
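A sketch of the variance reduction this buys. The integrand e^x on [0, 1] is an illustrative choice; because it is monotone, pairing each u with 1 − u cuts the estimator's variance dramatically while leaving it unbiased for e − 1:

```python
import math
import random
import statistics

# Sketch: antithetic variates pair each u ~ U(0,1) with 1 - u. For a
# monotone integrand such as e^x on [0, 1], averaging f(u) and f(1 - u)
# gives an unbiased estimator with much lower variance than plain sampling.
rng = random.Random(6)
us = [rng.random() for _ in range(50_000)]

plain = [math.exp(u) for u in us]
antithetic = [(math.exp(u) + math.exp(1.0 - u)) / 2.0 for u in us]

plain_mean = statistics.fmean(plain)     # both estimate e - 1 = 1.718...
anti_mean = statistics.fmean(antithetic)
var_ratio = statistics.pvariance(antithetic) / statistics.pvariance(plain)
```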

Minimum-variance unbiased estimator

The minimum-variance unbiased estimator (UMVUE) for the maximum is given by (1 + 1/k)·m, where m is the sample maximum and k is the sample size.
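A simulation sketch of this estimator for U(0, b) (the values b = 10 and k = 20 are illustrative): the raw sample maximum is biased low, with expectation b·k/(k + 1), while (1 + 1/k)·m is unbiased.

```python
import random

# Sketch: for U(0, b) with unknown b, the sample maximum is biased low;
# the UMVUE is (1 + 1/k) * max, where k is the sample size.
rng = random.Random(7)
b, k = 10.0, 20
naive, umvue = [], []
for _ in range(20_000):
    m = max(rng.uniform(0.0, b) for _ in range(k))
    naive.append(m)
    umvue.append((1.0 + 1.0 / k) * m)

naive_mean = sum(naive) / len(naive)   # ~ b * k/(k+1) = 9.52...
umvue_mean = sum(umvue) / len(umvue)   # ~ b = 10
```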

Mid-range

For example, for a continuous uniform distribution with unknown maximum and minimum, the mid-range is the UMVU estimator for the mean.

Box–Muller transform

However, there is an exact method, the Box–Muller transformation, which uses the inverse transform to convert two independent uniform random variables into two independent normally distributed random variables.
The Box–Muller transform, by George Edward Pelham Box and Mervin Edgar Muller, is a pseudo-random number sampling method for generating pairs of independent, standard, normally distributed (zero expectation, unit variance) random numbers, given a source of uniformly distributed random numbers.
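A minimal sketch of the transform (the seed and sample count are arbitrary): each pair (u_1, u_2) of independent standard uniforms yields two independent standard normal draws via a radius √(−2 ln u_1) and an angle 2πu_2.

```python
import math
import random

# Sketch of the Box-Muller transform: two independent U(0, 1) draws u1, u2
# yield two independent standard normal draws. (A production version would
# guard against u1 == 0 before taking the logarithm.)
def box_muller(u1, u2):
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

rng = random.Random(8)
normals = []
for _ in range(50_000):
    z0, z1 = box_muller(rng.random(), rng.random())
    normals.extend((z0, z1))

mean_z = sum(normals) / len(normals)                              # ~ 0
var_z = sum((z - mean_z) ** 2 for z in normals) / len(normals)    # ~ 1
```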

Method of moments (statistics)

Consider the uniform distribution on the interval [a, b], U(a, b). The method of moments estimator is given by â = m_1 − √(3(m_2 − m_1²)) and b̂ = m_1 + √(3(m_2 − m_1²)), where m_1 and m_2 are the first and second sample moments.
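A sketch applying the method of moments to a simulated U(2, 6) sample (the endpoints and sample size are illustrative choices); matching the first two moments recovers the endpoints:

```python
import math
import random

# Sketch: method-of-moments estimates for U(a, b) from the first two sample
# moments m1, m2: a_hat = m1 - sqrt(3 (m2 - m1^2)), b_hat = m1 + sqrt(...).
rng = random.Random(9)
a, b = 2.0, 6.0
xs = [rng.uniform(a, b) for _ in range(100_000)]
m1 = sum(xs) / len(xs)
m2 = sum(x * x for x in xs) / len(xs)
spread = math.sqrt(3.0 * (m2 - m1 * m1))
a_hat, b_hat = m1 - spread, m1 + spread
```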

Bates distribution

In probability and statistics, the Bates distribution, named after Grace Bates, is a probability distribution of the mean of a number of statistically independent uniformly distributed random variables on the unit interval.
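A simulation sketch of the Bates construction (n = 10 is an arbitrary choice): the mean of n standard uniforms has mean 1/2 and variance 1/(12n).

```python
import random

# Sketch: the Bates distribution is the MEAN of n independent U(0, 1)
# variables; its mean is 1/2 and its variance is 1/(12 n).
rng = random.Random(10)
n = 10
bates = [sum(rng.random() for _ in range(n)) / n for _ in range(100_000)]
mean_b = sum(bates) / len(bates)                                # ~ 0.5
var_b = sum((x - mean_b) ** 2 for x in bates) / len(bates)      # ~ 1/120
```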