Estimator
In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished.
Related Articles
Estimation theory
Estimation theory is concerned with the properties of estimators; that is, with defining properties that can be used to compare different estimators (different rules for creating estimates) for the same quantity, based on the same data. The attractiveness of different estimators can be judged by looking at their properties, such as unbiasedness, mean square error, consistency, asymptotic distribution, etc. The construction and comparison of estimators are the subjects of estimation theory.
An estimator attempts to approximate the unknown parameters using the measurements.
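As a minimal sketch of this idea in Python (assuming NumPy is available; the true mean and the data are invented for illustration), the sample mean is a rule applied to measurements to approximate an unknown parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

# The estimand: the unknown quantity of interest (hidden in real applications).
true_mean = 5.0

# Observed data: n noisy measurements.
data = rng.normal(loc=true_mean, scale=2.0, size=100)

# The estimator is the rule (the sample mean); the estimate is its value here.
estimate = data.mean()
print(f"estimate = {estimate:.3f}, estimand = {true_mean}")
```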
Estimand
The term is used to more clearly distinguish the target of inference from the function to obtain this parameter (i.e., the estimator) and the specific value obtained from a given data set (i.e., the estimate).
Statistic
An "estimator" or "point estimate" is a statistic (that is, a function of the data) that is used to infer the value of an unknown parameter in a statistical model.
When a statistic is used to estimate a population parameter, it is called an estimator.
Interval estimation
There are point and interval estimators.
In introducing interval estimation, Jerzy Neyman recognized that then-recent work quoting results in the form of an estimate plus-or-minus a standard deviation indicated that interval estimation was actually the problem statisticians really had in mind.
Point estimation
An "estimator" or "point estimate" is a statistic (that is, a function of the data) that is used to infer the value of an unknown parameter in a statistical model. There are point and interval estimators.
More formally, it is the application of a point estimator to the data to obtain a point estimate.
Robust statistics
However, in robust statistics, statistical theory goes on to consider the balance between having good properties when tightly defined assumptions hold and having less good properties that hold under wider conditions.
Unfortunately, when there are outliers in the data, classical estimators often perform very poorly when judged using the breakdown point and the influence function.
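A small illustration of this trade-off (a Python sketch with invented numbers): one gross outlier destroys the sample mean, while the sample median barely moves:

```python
import numpy as np

clean = np.array([9.8, 9.9, 10.0, 10.1, 10.2])
contaminated = np.append(clean, 1000.0)  # a single gross outlier

# The classical estimator (mean) is ruined by one bad point;
# the robust estimator (median) is almost unaffected.
print(np.mean(clean), np.mean(contaminated))      # 10.0 vs 175.0
print(np.median(clean), np.median(contaminated))  # 10.0 vs 10.05
```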
Statistical model
An "estimator" or "point estimate" is a statistic (that is, a function of the data) that is used to infer the value of an unknown parameter in a statistical model.
All statistical hypothesis tests and all statistical estimators are derived via statistical models.
Consistent estimator
In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter \theta_0—having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to \theta_0.
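The property can be seen numerically in a short simulation (a sketch, assuming NumPy; the exponential model and \theta_0 = 3 are arbitrary choices): as n grows, the sample mean drifts toward the true parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_0 = 3.0  # true parameter: the mean of the exponential distribution

# The absolute error of the estimate shrinks as the sample size grows,
# illustrating convergence in probability to theta_0.
for n in (10, 100, 10_000, 1_000_000):
    sample = rng.exponential(scale=theta_0, size=n)
    print(f"n = {n:>9}: |estimate - theta_0| = {abs(sample.mean() - theta_0):.5f}")
```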
Mean squared error
In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the errors—that is, the average squared difference between the estimated values and the actual value.
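A Monte Carlo sketch of this definition (assuming NumPy; the normal model and sample size are arbitrary), which also exhibits the standard decomposition MSE = bias^2 + variance:

```python
import numpy as np

rng = np.random.default_rng(2)
true_var, n, reps = 4.0, 20, 50_000

# Repeatedly apply the plug-in variance estimator (divide by n) to fresh samples.
estimates = np.array([
    rng.normal(0.0, np.sqrt(true_var), size=n).var(ddof=0) for _ in range(reps)
])

mse = np.mean((estimates - true_var) ** 2)  # average squared error
bias = estimates.mean() - true_var
print(f"MSE = {mse:.4f}, bias^2 + variance = {bias**2 + estimates.var():.4f}")
```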
Statistics
An estimator is a statistic used to estimate a function of the unknown parameter.
Bias of an estimator
In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated.
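For example, the plug-in variance estimator (dividing by n) is biased downward by \sigma^2/n, while Bessel's correction (dividing by n-1) removes the bias. A simulation sketch (assuming NumPy; the parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
true_var, n = 1.0, 5

samples = rng.normal(0.0, 1.0, size=(100_000, n))

# E[divide-by-n estimator] = (n-1)/n * sigma^2, so its bias is -sigma^2/n = -0.2 here.
biased = samples.var(axis=1, ddof=0).mean()    # divides by n
unbiased = samples.var(axis=1, ddof=1).mean()  # divides by n-1 (Bessel's correction)
print(f"divide by n: {biased:.3f}, divide by n-1: {unbiased:.3f}, truth: {true_var}")
```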
Decision rule
In the context of decision theory, an estimator is a type of decision rule, and its performance may be evaluated through the use of loss functions.
Expected value
For example, the bias of an estimator \hat\theta is defined as B(\hat\theta) = \operatorname{E}(\hat\theta) - \theta, where \operatorname{E}(\hat\theta) is the expected value of the estimator.
If the expected value exists, averaging the observations estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate).
Variance
The variance of \hat\theta is simply the expected value of the squared sampling deviations; that is, \operatorname{Var}(\hat\theta) = \operatorname{E}[(\hat\theta - \operatorname{E}(\hat\theta))^2].
This means that one estimates the mean and variance that would have been calculated from an omniscient set of observations by using an estimator equation.
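For the sample mean of n independent observations, this sampling variance is \sigma^2/n, which a quick simulation confirms (a sketch, assuming NumPy; \sigma = 2 and n = 25 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, n = 2.0, 25

# Empirical variance of the estimator across many repeated samples,
# compared with the theoretical value sigma^2 / n = 0.16.
means = rng.normal(0.0, sigma, size=(200_000, n)).mean(axis=1)
print(f"empirical: {means.var():.4f}, theoretical: {sigma**2 / n:.4f}")
```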
Normal distribution
An asymptotically normal estimator is a consistent estimator whose distribution around the true parameter θ approaches a normal distribution with standard deviation shrinking in proportion to 1/\sqrt{n} as the sample size n grows.
The quantile z_p such that X will lie in the range \mu \pm z_p\sigma with a specified probability p is useful for determining tolerance intervals for sample averages and other statistical estimators with normal (or asymptotically normal) distributions.
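These two-sided quantiles can be computed directly (a sketch using SciPy's norm.ppf, the inverse of the standard normal CDF):

```python
from scipy.stats import norm

# z_p is chosen so that X lies in mu ± z_p * sigma with probability p;
# e.g. p = 0.95 gives the familiar z_p ≈ 1.960.
for p in (0.80, 0.90, 0.95, 0.99):
    z_p = norm.ppf((1 + p) / 2)
    print(f"p = {p:.2f}  ->  z_p = {z_p:.3f}")
```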
Standard deviation
Such a statistic is called an estimator, and the estimator (or the value of the estimator, namely the estimate) is called a sample standard deviation, and is denoted by s (possibly with modifiers).
Statistical dispersion
This occurs frequently in estimation of scale parameters by measures of statistical dispersion.
Sample mean and covariance
The central limit theorem implies asymptotic normality of the sample mean \bar X as an estimator of the true mean.
The sample mean and sample covariance are estimators of the population mean and population covariance, where the term population refers to the set from which the sample was taken.
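A short sketch of both estimators at work (assuming NumPy; the population mean and covariance are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
true_mean = np.array([1.0, -2.0])
true_cov = np.array([[2.0, 0.6],
                     [0.6, 1.0]])

X = rng.multivariate_normal(true_mean, true_cov, size=5_000)

# Sample mean and sample covariance as estimators of the population quantities.
print(X.mean(axis=0))           # close to true_mean
print(np.cov(X, rowvar=False))  # close to true_cov (uses the n-1 denominator)
```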
Efficiency (statistics)
In some cases an unbiased efficient estimator exists, which, in addition to having the lowest variance among unbiased estimators, satisfies the Cramér–Rao bound, which is an absolute lower bound on variance for statistics of a variable.
In the comparison of various statistical procedures, efficiency is a measure of quality of an estimator, of an experimental design, or of a hypothesis testing procedure.
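For normally distributed data the sample mean is the efficient estimator of location; the sample median has asymptotic relative efficiency 2/\pi \approx 0.637. A simulation sketch (assuming NumPy; n and the number of replications are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 101, 50_000

samples = rng.normal(0.0, 1.0, size=(reps, n))
var_mean = samples.mean(axis=1).var()
var_median = np.median(samples, axis=1).var()

# Ratio of the two estimator variances; should be close to 2/pi ≈ 0.637.
print(f"relative efficiency of the median ≈ {var_mean / var_median:.3f}")
```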
Circumflex
If the parameter is denoted \theta then the estimator is traditionally written by adding a circumflex over the symbol: \hat{\theta}.
In statistics, the hat is used to denote an estimator or an estimated value, as opposed to its theoretical counterpart.
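In LaTeX the convention is produced with \hat, as in this minimal example:

```latex
% \hat{...} places the circumflex that marks an estimator or estimated value.
\documentclass{article}
\begin{document}
The sample mean $\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i$
is an estimator of the population mean $\mu$.
\end{document}
```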
Sample size determination
A consistent sequence of estimators is a sequence of estimators that converges in probability to the quantity being estimated as the index (usually the sample size) grows without bound.
The estimator of a proportion is \hat{p} = X/n, where X is the number of 'positive' observations (e.g. the number of people out of the n sampled who are at least 65 years old).
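In code the rule is a one-liner (the counts below are hypothetical, for illustration only):

```python
# Sample proportion p_hat = X / n as an estimator of the population proportion.
n = 400        # hypothetical number of people sampled
X = 92         # hypothetical count of them at least 65 years old
p_hat = X / n
print(p_hat)   # 0.23
```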
Cramér–Rao bound
Concerning such "best unbiased estimators", see also Cramér–Rao bound, Gauss–Markov theorem, Lehmann–Scheffé theorem, Rao–Blackwell theorem.
In estimation theory and statistics, the Cramér–Rao bound (CRB), Cramér–Rao lower bound (CRLB), Cramér–Rao inequality, Fréchet–Darmois–Cramér–Rao inequality, or information inequality expresses a lower bound on the variance of unbiased estimators of a deterministic (fixed, though unknown) parameter.
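For example, for Bernoulli(p) observations the Fisher information per observation is 1/(p(1-p)), so the bound for unbiased estimators of p is p(1-p)/n; the sample proportion attains it. A simulation sketch (assuming NumPy; p, n, and the replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
p, n, reps = 0.3, 50, 100_000

# Variance of the sample proportion across many samples versus the
# Cramér–Rao bound p(1-p)/n = 0.0042: the estimator is efficient.
p_hats = rng.binomial(n, p, size=reps) / n
print(f"empirical: {p_hats.var():.5f}, CRB: {p * (1 - p) / n:.5f}")
```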
Asymptotic distribution
One of the main uses of the idea of an asymptotic distribution is in providing approximations to the cumulative distribution functions of statistical estimators.
Scale parameter
An estimator of a scale parameter is called an estimator of scale.
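One common example: the median absolute deviation, rescaled by roughly 1.4826 (the reciprocal of the 0.75 standard-normal quantile), is a consistent and robust estimator of scale for normal data. A sketch (assuming NumPy; \sigma = 3 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
sigma = 3.0
x = rng.normal(0.0, sigma, size=100_000)

# MAD is a robust dispersion measure; the 1.4826 factor makes it a
# consistent estimator of sigma under normality.
mad = np.median(np.abs(x - np.median(x)))
print(f"scaled MAD: {1.4826 * mad:.3f}, sample sd: {x.std(ddof=1):.3f}")
```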
Rao–Blackwell theorem
In statistics, the Rao–Blackwell theorem, sometimes referred to as the Rao–Blackwell–Kolmogorov theorem, is a result which characterizes the transformation of an arbitrarily crude estimator into an estimator that is optimal by the mean-squared-error criterion or any of a variety of similar criteria.
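A classic textbook instance, sketched below (assuming NumPy; \lambda and n are arbitrary): to estimate \theta = P(X = 0) = e^{-\lambda} for Poisson data, start from the crude unbiased estimator 1\{X_1 = 0\} and condition on the sufficient statistic T = \sum_i X_i, which yields the improved estimator ((n-1)/n)^T with the same mean but far smaller mean squared error.

```python
import numpy as np

rng = np.random.default_rng(9)
lam, n, reps = 1.5, 10, 100_000
theta = np.exp(-lam)  # target: P(X = 0)

X = rng.poisson(lam, size=(reps, n))

# Crude unbiased estimator: indicator that the first observation is zero.
crude = (X[:, 0] == 0).astype(float)

# Rao–Blackwellized estimator: E[crude | T] = ((n-1)/n)^T, T = sum of the sample.
rb = ((n - 1) / n) ** X.sum(axis=1)

print(f"MSE(crude) = {np.mean((crude - theta) ** 2):.5f}, "
      f"MSE(Rao-Blackwell) = {np.mean((rb - theta) ** 2):.5f}")
```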