Expected value

expectation · expected · mean · expectations · expected number · mathematical expectation · expectation operator · expected values · expectation value · expected outcome
In probability theory, the expected value of a random variable is a key aspect of its probability distribution: intuitively, it is the long-run average value of the variable over many independent repetitions of the experiment it describes.
Related Articles

Law of large numbers

strong law of large numbers · weak law of large numbers · Bernoulli's Golden Theorem
For example, the expected value of rolling a fair six-sided die is 3.5, because the average of all the numbers that come up converges to 3.5 as the number of rolls approaches infinity (see below). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.
According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer to the expected value as more trials are performed.
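A minimal simulation sketch (assuming NumPy is available; the numbers of rolls are arbitrary choices) of how the running average of die rolls approaches the expected value 3.5:

    import numpy as np

    rng = np.random.default_rng(0)
    rolls = rng.integers(1, 7, size=1_000_000)   # fair six-sided die: values 1..6

    # The sample mean approaches E[X] = (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5 as n grows.
    for n in (10, 1_000, 1_000_000):
        print(n, rolls[:n].mean())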

Random variable

random variables · random variation · random
In probability theory, the expected value of a random variable is a key aspect of its probability distribution.
In this case, the structure of the real numbers makes it possible to define quantities such as the expected value and variance of a random variable, its cumulative distribution function, and the moments of its distribution.
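A minimal sketch, assuming a small discrete distribution given by its values and probabilities, of how the expected value and variance follow directly from the defining sums:

    # Discrete random variable: P(X = x_i) = p_i (here, a fair die)
    xs = [1, 2, 3, 4, 5, 6]
    ps = [1 / 6] * 6

    mean = sum(x * p for x, p in zip(xs, ps))                 # E[X] = 3.5
    var = sum((x - mean) ** 2 * p for x, p in zip(xs, ps))    # Var(X) = E[(X - E[X])^2], about 2.9167
    print(mean, var)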

Probability distribution

distribution · continuous probability distribution · discrete probability distribution
In probability theory, the expected value of a random variable is a key aspect of its probability distribution.

Von Neumann–Morgenstern utility theorem

von Neumann–Morgenstern utility function · von Neumann and Morgenstern · Von Neumann–Morgenstern utility
For example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function.
In decision theory, the von Neumann–Morgenstern utility theorem shows that, under certain axioms of rational behavior, a decision-maker faced with risky (probabilistic) outcomes of different choices will behave as if he or she is maximizing the expected value of some function defined over the potential outcomes at some specified point in the future.

Problem of points

divide the stakes fairly · gambling problem · problem of the points
The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players who have to end their game before it is properly finished.
One of the famous problems that motivated the beginnings of modern probability theory in the 17th century, it led Blaise Pascal to the first explicit reasoning about what today is known as an expected value.

Decision theory

decision science · statistical decision theory · decision sciences
For example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function.
Known from the 17th century (Blaise Pascal invoked it in his famous wager, contained in his Pensées, published in 1670), the idea of expected value is as follows: when faced with a number of actions, each of which could give rise to more than one possible outcome with different probabilities, the rational procedure is to identify all possible outcomes, determine their values (positive or negative) and their probabilities under each course of action, and multiply the two to give an "expected value", or average expectation for an outcome. The action to be chosen should then be the one that gives rise to the highest total expected value.

Probability density function

probability density · density function · density
The same principle applies to an absolutely continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum.
If a random variable X is given and its distribution admits a probability density function f, then the expected value of X (if the expected value exists) can be calculated as
E[X] = \int_{-\infty}^{\infty} x f(x) \, dx.
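A numerical check of this formula, as a sketch (assuming NumPy and SciPy are available; the exponential density with rate λ = 2, whose mean is 1/λ = 0.5, is an arbitrary illustrative choice):

    import numpy as np
    from scipy.integrate import quad

    lam = 2.0                               # illustrative rate parameter
    f = lambda x: lam * np.exp(-lam * x)    # exponential density on [0, inf)

    # E[X] = integral of x * f(x) dx; for this density it equals 1 / lam = 0.5.
    expected, _ = quad(lambda x: x * f(x), 0, np.inf)
    print(expected)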

Blaise Pascal

Pascal · Pascal, Blaise · Pascalian
This problem had been debated for centuries, and many conflicting proposals and solutions had been suggested over the years, when it was posed in 1654 to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré.
From this discussion, the notion of expected value was introduced.

Cauchy distribution

Lorentzian · Cauchy · Lorentzian distribution
An example of such a random variable is one with the Cauchy distribution, due to its large "tails".
The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution since both its expected value and its variance are undefined.
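A short calculation, using the standard Cauchy density f(x) = 1/(\pi(1 + x^2)), shows why the expectation fails to exist: the defining integral is not absolutely convergent,
\int_{-\infty}^{\infty} |x| \, \frac{1}{\pi(1 + x^2)} \, dx = \frac{2}{\pi} \int_{0}^{\infty} \frac{x}{1 + x^2} \, dx = \frac{1}{\pi} \big[\ln(1 + x^2)\big]_{0}^{\infty} = \infty.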

Christiaan Huygens

Huygens · Christian Huygens · Christiaan Huyghens
Three years later, in 1657, the Dutch mathematician Christiaan Huygens, who had just visited Paris, published a treatise, "De ratiociniis in ludo aleæ", on probability theory.
Huygens took as intuitive his appeals to concepts of a "fair game" and equitable contract, and used them to set up a theory of expected values.

St. Petersburg paradox

Saint Petersburg Paradox · Petersberg · Saint Petersburg problem
It is based on a particular (theoretical) lottery game that leads to a random variable with infinite expected value (i.e., infinite expected payoff) but nevertheless seems to be worth only a very small amount to the participants.
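In one common formulation of the game, a fair coin is tossed until the first tails, and the payoff is 2^k if the first tails occurs on toss k (an event of probability 1/2^k), so the expected payoff diverges:
E = \sum_{k=1}^{\infty} \frac{1}{2^k} \cdot 2^k = \sum_{k=1}^{\infty} 1 = \infty.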

Roulette

roulette wheel · American roulette · betting wheel
It can be easily demonstrated that this payout formula would lead to a zero expected value of profit if there were only 36 numbers.
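A sketch of the arithmetic, for a single-number bet paying 35 to 1 on an American wheel with 38 pockets: the expected profit per unit staked is
E[\text{profit}] = \frac{1}{38} \cdot 35 + \frac{37}{38} \cdot (-1) = -\frac{2}{38} \approx -0.0526,
whereas with only 36 numbers the same payout would give \frac{1}{36} \cdot 35 + \frac{35}{36} \cdot (-1) = 0.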

Probability measure

measure · probability distribution · law
The formal definition subsumes both of these and also works for distributions which are neither discrete nor absolutely continuous; the expected value of a random variable is the integral of the random variable with respect to its probability measure.
For instance, a risk-neutral measure is a probability measure which assumes that the current value of assets is the expected value of the future payoff taken with respect to that same risk neutral measure (i.e. calculated using the corresponding risk neutral density function), and discounted at the risk-free rate.
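In symbols, the formal definition reads: for a random variable X defined on a probability space (\Omega, \mathcal{F}, P),
E[X] = \int_{\Omega} X \, dP.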

Bias of an estimator

unbiased · unbiased estimator · bias
In such settings, a desirable criterion for a "good" estimator is that it is unbiased – that is, the expected value of the estimate is equal to the true value of the underlying parameter. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate).
In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated.
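A minimal simulation sketch (assuming NumPy is available; the normal distribution with variance 4 and the sample size n = 5 are arbitrary illustrative choices) contrasting the biased variance estimator (dividing by n) with the unbiased one (dividing by n − 1):

    import numpy as np

    rng = np.random.default_rng(1)
    true_var, n, reps = 4.0, 5, 200_000

    samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
    biased = samples.var(axis=1, ddof=0)     # divides by n
    unbiased = samples.var(axis=1, ddof=1)   # divides by n - 1

    # Averaged over many repetitions, the unbiased estimator is close to 4,
    # while the biased one is close to 4 * (n - 1) / n = 3.2.
    print(biased.mean(), unbiased.mean())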

Errors and residuals

residuals · error term · residual
If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate).
A statistical error (or disturbance) is the amount by which an observation differs from its expected value, the latter being based on the whole population from which the statistical unit was chosen randomly.

Variance

sample variance · population variance · variability
The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean.
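In symbols, writing \mu = E[X], this is the standard identity
\operatorname{Var}(X) = E[(X - \mu)^2] = E[X^2] - (E[X])^2.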

Moment-generating function

moment generating function · Calculations of moments · generating functions
The moments of some random variables can be used to specify their distributions, via their moment generating functions. The moment generating function of a random variable X is defined as M_X(t) = E[e^{tX}], wherever this expectation exists.
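When M_X is finite on a neighbourhood of zero, the raw moments can be read off from its derivatives at the origin (a standard identity, stated here for reference):
E[X^n] = M_X^{(n)}(0).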

Statistics

statistical · statistical analysis · statistician
For a different example, in statistics, where one seeks estimates for unknown parameters based on available data, the estimate itself is a random variable.
Furthermore, an estimator is said to be unbiased if its expected value is equal to the true value of the unknown parameter being estimated, and asymptotically unbiased if its expected value converges in the limit to the true value of that parameter.

Moment (mathematics)

moments · moment · raw moment
The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X].
The n-th moment about zero of a probability density function f(x) is the expected value of X n and is called a raw moment or crude moment.
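In symbols, the n-th raw moment and the n-th central moment of X are (standard definitions, stated for reference)
m_n = E[X^n], \qquad \mu_n = E[(X - E[X])^n].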

Central moment

moment about the mean · central moments · moments about the mean
The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X].
In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random variable from the mean.

Monte Carlo method

Monte Carlo · Monte Carlo simulation · Monte Carlo methods
This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. P(X \in \mathcal{A}) = E[\mathbf{1}_{\mathcal{A}}(X)], where \mathbf{1}_{\mathcal{A}} is the indicator function of the set \mathcal{A}.
By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the sample mean) of independent samples of the variable.
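A minimal Monte Carlo sketch (assuming NumPy is available; the target quantity E[X^2] for X ~ N(0, 1), whose true value is 1, is an arbitrary illustrative choice):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_normal(1_000_000)   # independent samples of X ~ N(0, 1)

    # The sample mean of g(X) approximates E[g(X)]; here g(x) = x^2, so the true value is 1.
    estimate = (x ** 2).mean()
    print(estimate)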

Estimation theory

parameter estimation · estimation · estimated
This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. P(X \in \mathcal{A}) = E[\mathbf{1}_{\mathcal{A}}(X)], where \mathbf{1}_{\mathcal{A}} is the indicator function of the set \mathcal{A}. To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results.
This error term is then squared and the expected value of this squared value is minimized for the MMSE estimator.
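In symbols (a standard Bayesian formulation, stated here for reference): the MMSE estimator of a quantity \theta from observations Y minimizes the expected squared error and is given by the conditional expectation,
\hat{\theta}_{\mathrm{MMSE}}(Y) = \arg\min_{\hat{\theta}} E[(\theta - \hat{\theta}(Y))^2] = E[\theta \mid Y].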

William Allen Whitworth

W. A. Whitworth · Whitworth, William Allen
The use of the letter E to denote expected value goes back to W. A. Whitworth in 1901, who used a script E. The symbol has since become popular because for English writers it meant "Expectation", for Germans "Erwartungswert", for Spanish "Esperanza matemática", and for French "Espérance mathématique".
He is the inventor of the E[X] notation for the expected value of a random variable X, still commonly in use, and he coined the name "subfactorial" for the number of derangements of n items.

Indicator function

characteristic function · membership function · indicator
It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise.
The notation is used in other places as well, for instance in probability theory: if X is a probability space with probability measure P and A is a measurable set, then the indicator \mathbf{1}_A becomes a random variable whose expected value is equal to the probability of A:
E[\mathbf{1}_A] = \int_X \mathbf{1}_A \, dP = P(A).

Uncertainty principle

Heisenberg uncertainty principle · Heisenberg's uncertainty principle · uncertainty relation
The uncertainty in an observable \hat{A} can be calculated from expectation values as \sigma_A = \sqrt{\langle \hat{A}^2 \rangle - \langle \hat{A} \rangle^2}.