Binomial distribution

[Figure: Binomial distribution for p = 0.5, with n and k as in Pascal's triangle.]
Related Articles

Bean machine

[Figure: The probability that a ball in a Galton box with 8 layers (n = 8) ends up in the central bin (k = 4) is 70/256.]
The bean machine, also known as the Galton Board or quincunx, is a device invented by Sir Francis Galton to demonstrate the central limit theorem, in particular that with sufficient sample size the binomial distribution approximates a normal distribution.
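
The convergence is easy to see numerically. Below is a minimal simulation sketch, assuming NumPy is available: each ball makes an independent left/right choice at every row of pegs, so its final bin index follows a Binomial(8, 0.5) distribution, and the central bin should collect about 70/256 of the balls.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_rows, n_balls = 8, 100_000  # 8 layers of pegs, as in the caption above

# One 0/1 decision per row; the sum of the decisions is the final bin index.
bins = rng.integers(0, 2, size=(n_balls, n_rows)).sum(axis=1)

print(np.mean(bins == 4))  # fraction in the central bin, ~ 70/256 ≈ 0.2734
```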

Probability distribution

In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own boolean-valued outcome: success/yes/true/one (with probability p) or failure/no/false/zero (with probability q = 1 − p).
Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution.
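
As a quick sanity check of this definition, the sketch below (NumPy assumed) draws a Binomial(n, p) sample two ways: by literally summing n Bernoulli(p) outcomes, and with the built-in binomial sampler; the two empirical means should both be close to np.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n, p, reps = 10, 0.3, 50_000

by_definition = (rng.random((reps, n)) < p).sum(axis=1)  # n yes/no trials each
direct = rng.binomial(n, p, size=reps)                   # built-in sampler

print(by_definition.mean(), direct.mean())  # both ~ n*p = 3.0
```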

Hypergeometric distribution

If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one.
In contrast, the binomial distribution describes the probability of k successes in n draws with replacement.
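
The contrast is visible in simulation. The sketch below (NumPy assumed; the urn of 20 "good" and 30 "bad" items is an illustrative choice) draws 10 items with and without replacement; the means agree, but sampling without replacement yields the smaller, hypergeometric variance.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
ngood, nbad, ndraws, reps = 20, 30, 10, 100_000

without = rng.hypergeometric(ngood, nbad, ndraws, size=reps)         # no replacement
with_repl = rng.binomial(ndraws, ngood / (ngood + nbad), size=reps)  # replacement

print(without.var(), with_repl.var())  # ~1.96 vs ~2.4: hypergeometric is tighter
```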

Bernoulli process

A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution.
The probability measure thus defined is known as the binomial distribution.

Binomial test

The binomial distribution is the basis for the popular binomial test of statistical significance.
For large samples such as the example below, the binomial distribution is well approximated by convenient continuous distributions, and these are used as the basis for alternative tests that are much quicker to compute, namely Pearson's chi-squared test and the G-test.
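
A hedged sketch of such a test with SciPy (scipy.stats.binomtest, available in SciPy 1.7 and later; the 61-heads-in-100-flips data is an illustrative choice):

```python
from scipy.stats import binomtest

# Is a coin that showed 61 heads in 100 flips consistent with p = 0.5?
result = binomtest(61, n=100, p=0.5, alternative="two-sided")
print(result.pvalue)  # ~0.035; a small p-value suggests bias
```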

Cumulative distribution function

The cumulative distribution function can be expressed as F(k; n, p) = Pr(X ≤ k) = Σ_{i=0}^{⌊k⌋} C(n, i) p^i (1 − p)^(n − i), where ⌊k⌋ is the greatest integer less than or equal to k.
The proper use of tables of the binomial and Poisson distributions depends upon this convention.
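
The formula can be checked directly. Below is a minimal sketch, assuming the standard library and SciPy: the CDF computed by summing the probability mass function term by term matches SciPy's value.

```python
from math import comb
from scipy.stats import binom

def binom_cdf(k: int, n: int, p: float) -> float:
    """Pr(X <= k) for X ~ Binomial(n, p), by direct summation."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

print(binom_cdf(3, 10, 0.4))  # direct sum, ~0.3823
print(binom.cdf(3, 10, 0.4))  # SciPy reference, same value
```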

Binomial coefficient

Here C(n, k) = n! / (k! (n − k)!) is the binomial coefficient, hence the name of the distribution.
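
For example, with Python's standard library (math.comb, Python 3.8+):

```python
from math import comb

print(comb(8, 4))         # 70 ways to choose which 4 of 8 trials succeed
print(comb(8, 4) / 2**8)  # 70/256, the Galton-box probability quoted above
```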

Bernoulli distribution

A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. Nowadays, it can be seen as a consequence of the central limit theorem since B(n, p) is a sum of n independent, identically distributed Bernoulli variables with parameter p.
The Bernoulli distribution is a special case of the binomial distribution where a single trial is conducted (so n would be 1 for such a binomial distribution).

Variance

The variance is Var(X) = np(1 − p) = npq.
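
A short simulation sketch (NumPy assumed) confirming Var(X) = np(1 − p):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n, p = 20, 0.25

samples = rng.binomial(n, p, size=200_000)
print(samples.var())    # empirical variance, ~3.75
print(n * p * (1 - p))  # theoretical value: 20 * 0.25 * 0.75 = 3.75
```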

Probability theory

Some fundamental discrete distributions are the discrete uniform, Bernoulli, binomial, negative binomial, Poisson and geometric distributions.

Kullback–Leibler divergence

where D(a || p) is the relative entropy between an a-coin and a p-coin (i.e. between the Bernoulli(a) and Bernoulli(p) distributions): D(a || p) = a ln(a/p) + (1 − a) ln((1 − a)/(1 − p)).
P is the distribution on the left side of the figure, a binomial distribution with N = 2 and p = 0.4.
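
Writing the definition above out in code is straightforward; this is a minimal sketch using natural logarithms (the values a = 0.5 and p = 0.4 are illustrative):

```python
from math import log

def bernoulli_kl(a: float, p: float) -> float:
    """Relative entropy D(a || p) between Bernoulli(a) and Bernoulli(p)."""
    return a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))

print(bernoulli_kl(0.5, 0.4))  # ~0.0204 nats; zero exactly when a == p
```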

Beta distribution

A closed form Bayes estimator for p also exists when using the Beta distribution as a conjugate prior distribution.
In Bayesian inference, the beta distribution is the conjugate prior probability distribution for the Bernoulli, binomial, negative binomial and geometric distributions.
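
The conjugacy makes the posterior update a one-liner. A hedged sketch (SciPy assumed; the Beta(2, 2) prior and the 49-successes-in-80-trials data are illustrative choices): with a Beta(α, β) prior on p and k successes in n trials, the posterior is Beta(α + k, β + n − k).

```python
from scipy.stats import beta

a, b = 2.0, 2.0  # prior pseudo-counts (illustrative)
n, k = 80, 49    # observed trials and successes

posterior = beta(a + k, b + n - k)
print(posterior.mean())  # (a + k) / (a + b + n) = 51/84 ≈ 0.607
```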

Conjugate prior

This random variable will follow the binomial distribution, with a probability mass function of the form f(k; n, p) = C(n, k) p^k (1 − p)^(n − k).

Binomial sum variance inequality

However, if X and Y do not have the same probability p, then the variance of the sum will be smaller than the variance of a binomial variable distributed as B(n + m, p̄), where p̄ is the average success probability over all n + m trials.
The binomial sum variance inequality states that the variance of the sum of binomially distributed random variables will always be less than or equal to the variance of a binomial variable with the same n and p parameters.
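
A numeric sketch of the inequality (the parameter choices are illustrative): summing X ~ B(50, 0.2) and Y ~ B(50, 0.8) gives variance 16, while a single binomial variable over the same 100 trials with the matching average success probability p̄ = 0.5 has variance 25.

```python
n, p1, p2 = 50, 0.2, 0.8

var_sum = n * p1 * (1 - p1) + n * p2 * (1 - p2)  # independent sum: 8 + 8 = 16
p_bar = (n * p1 + n * p2) / (2 * n)              # matching overall success rate
var_single = 2 * n * p_bar * (1 - p_bar)         # B(100, 0.5): 25

print(var_sum, var_single)  # 16.0 < 25.0, as the inequality predicts
```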

Binomial proportion confidence interval

The exact (Clopper–Pearson) method is the most conservative.
There are several formulas for a binomial confidence interval, but all of them rely on the assumption of a binomial distribution.
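
Two of these formulas are sketched below, assuming SciPy: the simple normal-approximation (Wald) interval, and the exact Clopper–Pearson interval obtained from beta-distribution quantiles (the 61-successes-in-100-trials data is illustrative).

```python
from math import sqrt
from scipy.stats import beta, norm

n, k, conf = 100, 61, 0.95
p_hat = k / n
z = norm.ppf(1 - (1 - conf) / 2)
half_width = z * sqrt(p_hat * (1 - p_hat) / n)

wald = (p_hat - half_width, p_hat + half_width)
clopper_pearson = (beta.ppf((1 - conf) / 2, k, n - k + 1),
                   beta.ppf(1 - (1 - conf) / 2, k + 1, n - k))

print(wald)             # narrower, approximate
print(clopper_pearson)  # wider, conservative
```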

Beta function

It can also be represented in terms of the regularized incomplete beta function, as follows: F(k; n, p) = Pr(X ≤ k) = I_{1−p}(n − k, k + 1).
The regularized incomplete beta function is the cumulative distribution function of the beta distribution, and is related to the cumulative distribution function of a random variable X from a binomial distribution, where the "probability of success" is p and the sample size is n:
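
The identity is easy to verify numerically; a minimal sketch with SciPy (scipy.special.betainc is SciPy's regularized incomplete beta function):

```python
from scipy.special import betainc
from scipy.stats import binom

n, k, p = 10, 3, 0.4
print(binom.cdf(k, n, p))            # binomial CDF, ~0.3823
print(betainc(n - k, k + 1, 1 - p))  # I_{1-p}(n - k, k + 1), same value
```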

Completeness (statistics)

This estimator is unbiased and has uniformly minimum variance, as proven using the Lehmann–Scheffé theorem, since it is based on a minimal sufficient and complete statistic (i.e., x).
T is a statistic of X which has a binomial distribution with parameters (n,p).

Probability mass function

The probability of getting exactly k successes in n independent Bernoulli trials is given by the probability mass function f(k; n, p) = Pr(X = k) = C(n, k) p^k (1 − p)^(n − k), for k = 0, 1, 2, ..., n.
There are three major distributions associated: the Bernoulli distribution, the binomial distribution, and the geometric distribution.

Central limit theorem

Nowadays, it can be seen as a consequence of the central limit theorem since B(n, p) is a sum of n independent, identically distributed Bernoulli variables with parameter p.
The earliest version of this theorem, that the normal distribution may be used as an approximation to the binomial distribution, is now known as the de Moivre–Laplace theorem.

Poisson distribution

The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product np remains fixed or at least p tends to zero.
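
A sketch of this convergence (SciPy assumed; np = 2 is an illustrative fixed product): as n grows with np held at 2, the binomial pmf at k = 2 approaches the Poisson(2) pmf.

```python
from scipy.stats import binom, poisson

lam = 2.0
for n in (10, 100, 10_000):
    p = lam / n  # shrink p so that n * p stays fixed at lam
    print(n, binom.pmf(2, n, p), poisson.pmf(2, lam))  # converges to ~0.2707
```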
The lower bound can be proved by noting that F(k; n, p) is the probability of obtaining at most k successes, which is bounded below by (n + 1)^(−1) exp(−n D(k/n || p)), where D is the relative entropy (see the entry on bounds on tails of binomial distributions for details).

De Moivre–Laplace theorem

This approximation, known as the de Moivre–Laplace theorem, is a huge time-saver when undertaking calculations by hand (exact calculations with large n are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book The Doctrine of Chances in 1738.
In probability theory, the de Moivre–Laplace theorem, which is a special case of the central limit theorem, states that the normal distribution may be used as an approximation to the binomial distribution under certain conditions.

Continuity correction

This basic approximation can be improved in a simple way by using a suitable continuity correction.
If a random variable X has a binomial distribution with parameters n and p, i.e., X is distributed as the number of "successes" in n independent Bernoulli trials with probability p of success on each trial, then for any x one has Pr(X ≤ x) ≈ Φ((x + 1/2 − np) / √(np(1 − p))), where Φ is the standard normal cumulative distribution function.
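
A sketch of the improvement (SciPy assumed; n = 20, p = 0.5, x = 8 are illustrative): the continuity-corrected normal value is much closer to the exact binomial CDF than the uncorrected one.

```python
from math import sqrt
from scipy.stats import binom, norm

n, p, x = 20, 0.5, 8
mu, sigma = n * p, sqrt(n * p * (1 - p))

exact = binom.cdf(x, n, p)                    # ~0.2517
plain = norm.cdf((x - mu) / sigma)            # ~0.1855, noticeably off
corrected = norm.cdf((x + 0.5 - mu) / sigma)  # ~0.2512, much closer

print(exact, plain, corrected)
```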

Maximum likelihood estimation

This estimator is found using maximum likelihood estimation and also by the method of moments.
By using the probability mass function of the binomial distribution with sample size equal to 80 and number of successes equal to 49, but for different values of p (the "probability of success"), the likelihood function (defined below) takes one of three values:
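
The likelihood values themselves can be computed directly; a hedged sketch with SciPy (the candidate values of p below are illustrative, bracketing the maximum-likelihood estimate p̂ = 49/80 = 0.6125):

```python
from scipy.stats import binom

n, k = 80, 49
for p in (0.5, 49 / 80, 0.7):
    print(p, binom.pmf(k, n, p))  # the likelihood peaks at p = 49/80
```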

Multinomial distribution

In probability theory, the multinomial distribution is a generalization of the binomial distribution.
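
A minimal sketch (NumPy assumed): a multinomial draw spreads n trials over several categories, and with only two categories it reduces to counting binomial successes and failures.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
print(rng.multinomial(10, [0.2, 0.3, 0.5]))  # counts over three categories
print(rng.multinomial(10, [0.3, 0.7]))       # two categories: (successes, failures);
                                             # first entry ~ Binomial(10, 0.3)
```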

Normal distribution

In this case a reasonable approximation to B(n, p) is given by the normal distribution N(np, np(1 − p)).