Cramér–Rao bound

In estimation theory and statistics, the Cramér–Rao bound (CRB), Cramér–Rao lower bound (CRLB), Cramér–Rao inequality, Fréchet–Darmois–Cramér–Rao inequality, or information inequality expresses a lower bound on the variance of unbiased estimators of a deterministic (fixed, though unknown) parameter.
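As a concrete illustration (a minimal numerical sketch, not taken from the article; the parameter values, sample size, and seed below are arbitrary assumptions), the bound can be checked by Monte Carlo in Python for the mean of a normal sample, where the CRLB is σ²/n and the sample mean attains it:

    import numpy as np

    rng = np.random.default_rng(0)
    theta, sigma, n, trials = 2.0, 3.0, 50, 200_000   # illustrative values

    # Simulate many datasets of n i.i.d. N(theta, sigma^2) observations
    # and form the sample mean of each, an unbiased estimator of theta.
    samples = rng.normal(theta, sigma, size=(trials, n))
    estimates = samples.mean(axis=1)

    crlb = sigma**2 / n   # CRLB = 1 / (n * I(theta)) with I(theta) = 1 / sigma^2
    print("empirical variance of the sample mean:", estimates.var())
    print("Cramér–Rao lower bound:               ", crlb)

The two printed values should agree closely, showing the sample mean operating at the bound.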
Related Articles

Estimation theory

To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the Fisher information number.
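For example, for n independent observations X_i ~ N(θ, σ²) with σ² known, the standard computation (sketched here for concreteness) runs:

\[
\ln f(x;\theta) = -\tfrac{1}{2}\ln(2\pi\sigma^2) - \frac{(x-\theta)^2}{2\sigma^2},
\qquad
\frac{\partial}{\partial\theta}\ln f(x;\theta) = \frac{x-\theta}{\sigma^2},
\]
\[
I_1(\theta) = \mathbb{E}\!\left[\left(\frac{X-\theta}{\sigma^2}\right)^{\!2}\right] = \frac{1}{\sigma^2},
\]

so the Fisher information number for the whole sample is I_n(θ) = n/σ², the CRLB is σ²/n, and the sample mean attains it.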

C. R. Rao

The bound is named in honor of Harald Cramér, Calyampudi Radhakrishna Rao, Maurice Fréchet, and Georges Darmois, all of whom independently derived this limit to statistical precision in the 1940s.
Among his best-known discoveries are the Cramér–Rao bound and the Rao–Blackwell theorem, both related to the quality of estimators.

Efficiency (statistics)

An unbiased estimator which achieves this lower bound is said to be (fully) efficient.
The Cramér–Rao bound can be used to prove that e(T) ≤ 1.
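A short simulation sketch (not from the article; the sample size, seed, and choice of competing estimator are illustrative assumptions) makes the definition e(T) = CRLB / Var(T) concrete for a normal mean, where the sample median is unbiased by symmetry but noticeably inefficient:

    import numpy as np

    rng = np.random.default_rng(1)
    theta, sigma, n, trials = 0.0, 1.0, 101, 100_000   # illustrative values

    x = rng.normal(theta, sigma, size=(trials, n))
    crlb = sigma**2 / n   # Cramér–Rao lower bound for the normal mean

    # Efficiency e(T) = CRLB / Var(T); the bound guarantees e(T) <= 1.
    for name, est in (("mean", x.mean(axis=1)), ("median", np.median(x, axis=1))):
        print(f"e({name}) = {crlb / est.var():.3f}")

The sample mean comes out fully efficient (e ≈ 1), while the median's efficiency approaches 2/π ≈ 0.637.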

Estimator

In some cases an unbiased efficient estimator exists; in addition to having the lowest variance among unbiased estimators, it attains the Cramér–Rao bound, an absolute lower bound on the variance of statistics of a variable.

Maurice René Fréchet

Fréchet is sometimes credited with the introduction of what is now known as the Cramér–Rao bound, but Fréchet's 1940s lecture notes on the topic appear to have been lost.

Harald Cramér

Harald Cramér derived the bound independently in the 1940s; it carries his name jointly with that of C. R. Rao.

Fisher information

In its simplest form, the Cramér–Rao bound states that the variance of any unbiased estimator of θ is at least the reciprocal of the Fisher information I(θ).
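In symbols, with the scalar unbiased case written out:

\[
\operatorname{var}(\hat\theta) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\ln f(X;\theta)\right)^{\!2}\right]
= -\,\mathbb{E}\!\left[\frac{\partial^2}{\partial\theta^2}\ln f(X;\theta)\right],
\]

where the second expression for I(θ) holds under the usual regularity conditions.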

Chapman–Robbins bound

It is a generalization of the Cramér–Rao bound; compared to the Cramér–Rao bound, it is both tighter and applicable to a wider range of problems.
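For reference, the standard statement of the Chapman–Robbins bound for an unbiased estimator of θ (written from the usual textbook form, not quoted from the article) is

\[
\operatorname{var}_\theta(\hat\theta) \;\ge\; \sup_{h\neq 0}\,
\frac{h^2}{\mathbb{E}_\theta\!\left[\left(\dfrac{f(X;\theta+h)}{f(X;\theta)} - 1\right)^{\!2}\right]},
\]

which requires no differentiability of f in θ; taking h → 0 recovers the Cramér–Rao bound where the latter is defined.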

Multivariate normal distribution

For the case of a d-variate normal distribution x ~ N_d(μ(θ), Σ(θ)), the Fisher information matrix has a known closed form. This can be used, for example, to compute the Cramér–Rao bound for parameter estimation in this setting.
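Concretely, the well-known closed form (often called the Slepian–Bangs formula; stated here for reference) gives the entries of the Fisher information matrix as

\[
I_{mn} \;=\; \frac{\partial \mu^{\mathsf T}}{\partial \theta_m}\,\Sigma^{-1}\,\frac{\partial \mu}{\partial \theta_n}
\;+\; \frac{1}{2}\,\operatorname{tr}\!\left(\Sigma^{-1}\,\frac{\partial \Sigma}{\partial \theta_m}\,\Sigma^{-1}\,\frac{\partial \Sigma}{\partial \theta_n}\right),
\]

and the Cramér–Rao bound is obtained by inverting this matrix.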

Brascamp–Lieb inequality

The Brascamp–Lieb inequality is also related to the Cramér–Rao bound.
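One form of the relation (a sketch under the assumption of a strictly log-concave density; this is the standard variance form of Brascamp–Lieb, not a statement from the article): if p(x) ∝ e^{−φ(x)} with φ strictly convex, then

\[
\operatorname{var}_p\!\big(S(x)\big) \;\le\; \mathbb{E}_p\!\left[\nabla S(x)^{\mathsf T}\,\big(\nabla^2\varphi(x)\big)^{-1}\,\nabla S(x)\right],
\]

an upper bound on variances that complements the Cramér–Rao lower bound.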

Statistics

Within statistics, the bound quantifies the best precision that any unbiased estimator of a deterministic (fixed, though unknown) parameter can achieve.

Variance

The variance of any unbiased estimator of θ is bounded below by the reciprocal of the Fisher information I(θ).

Georges Darmois

Georges Darmois likewise derived this limit to statistical precision independently in the 1940s; his name appears in the fuller designation Fréchet–Darmois–Cramér–Rao inequality.

Mean squared error

An estimator that attains the Cramér–Rao bound achieves the lowest possible mean squared error among all unbiased methods, and is therefore the minimum variance unbiased (MVU) estimator.
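The link to mean squared error is the standard decomposition

\[
\operatorname{MSE}(\hat\theta) = \mathbb{E}\big[(\hat\theta-\theta)^2\big]
= \operatorname{var}(\hat\theta) + \big(\operatorname{Bias}(\hat\theta)\big)^2,
\]

so for unbiased estimators the MSE equals the variance and the Cramér–Rao bound applies to it directly.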

Minimum-variance unbiased estimator

An unbiased estimator achieving the lowest possible mean squared error among all unbiased methods is precisely the minimum variance unbiased (MVU) estimator.

Bias of an estimator

In its simplest form, the bound applies to unbiased estimators: the variance of any such estimator is at least as high as the inverse of the Fisher information.

Scalar (mathematics)

The Cramér–Rao bound is stated in this section for several increasingly general cases, beginning with the case in which the parameter is a scalar and its estimator is unbiased.

Probability density function

Suppose θ is an unknown deterministic parameter which is to be estimated from n independent observations (measurements) of x, each distributed according to some probability density function f(x; θ).
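Because the observations are independent, the information in the sample is additive:

\[
I_n(\theta) = n\,\mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\ln f(X;\theta)\right)^{\!2}\right] = n\,I_1(\theta),
\]

so the bound for the full sample is 1/(n I_1(θ)).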

Multiplicative inverse

In the scalar case, the Cramér–Rao bound is the multiplicative inverse (reciprocal) of the Fisher information I(θ).

Natural logarithm

In the definition of the Fisher information, ℓ(x; θ) = ln f(x; θ) is the natural logarithm of the likelihood function.

Likelihood function

The Fisher information is computed from ℓ(x; θ) = ln f(x; θ), the natural logarithm of the likelihood function for a single observation x.

Expected value

In the definition of the Fisher information, E denotes the expected value taken over x.
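A related standard fact (stated here for completeness): under the usual regularity conditions, which permit differentiation under the integral sign, the expected value of the score itself is zero,

\[
\mathbb{E}\!\left[\frac{\partial}{\partial\theta}\ln f(X;\theta)\right]
= \int \frac{\partial f(x;\theta)}{\partial\theta}\,dx
= \frac{\partial}{\partial\theta}\int f(x;\theta)\,dx = 0,
\]

so the Fisher information is exactly the variance of the score.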

Vector space

Extending the Cramér–Rao bound to multiple parameters, define a parameter column vector θ = (θ_1, θ_2, …, θ_d)^T.
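In this setting the bound becomes a matrix inequality: for any unbiased estimator \hat{\boldsymbol\theta},

\[
\operatorname{cov}_{\boldsymbol\theta}\big(\hat{\boldsymbol\theta}\big) \;\succeq\; I(\boldsymbol\theta)^{-1},
\qquad
\big[I(\boldsymbol\theta)\big]_{mn}
= \mathbb{E}\!\left[\frac{\partial \ln f(X;\boldsymbol\theta)}{\partial\theta_m}\,
\frac{\partial \ln f(X;\boldsymbol\theta)}{\partial\theta_n}\right],
\]

where ⪰ means that cov(θ̂) − I(θ)⁻¹ is positive semidefinite.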