# Expected value

**expectation, expected, mean, expectations, expected number, expectation operator, mathematical expectation, expectation value, expected values, expected outcome**

In probability theory, the expected value of a random variable, intuitively, is the long-run average value of repetitions of the same experiment it represents.

## Related articles

### Law of large numbers

**strong law of large numbers, weak law of large numbers, laws of large numbers**

More precisely, the law of large numbers states that the arithmetic mean of the observed values almost surely converges to the expected value as the number of repetitions approaches infinity.

According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.
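As a concrete sketch (assuming a fair six-sided die as the repeated experiment, with expected value 3.5), the sample mean of many simulated rolls settles near that value:

```python
import random

# Law of large numbers, numerically: the sample mean of many fair die
# rolls approaches the expected value E[X] = (1 + 2 + ... + 6) / 6 = 3.5.
random.seed(0)  # fixed seed so the run is reproducible
rolls = [random.randint(1, 6) for _ in range(100_000)]
sample_mean = sum(rolls) / len(rolls)
print(round(sample_mean, 2))  # close to 3.5
```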

### Variance

**sample variance, population variance, variability**

By contrast, the variance is a measure of dispersion of the possible values of the random variable around the expected value.

In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean.
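For a small discrete example (again a fair die, with mean 3.5), the variance follows directly from this definition:

```python
# Variance as the expected squared deviation from the mean,
# Var(X) = E[(X - E[X])^2], for a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
mean = sum(values) / 6                          # E[X] = 3.5
var = sum((x - mean) ** 2 for x in values) / 6  # equal weights 1/6
print(var)  # 35/12 ≈ 2.9167
```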

### Random variable

**random variables, random variation, random**

In probability theory, the expected value of a random variable, intuitively, is the long-run average value of repetitions of the same experiment it represents.

In this case, the structure of the real numbers makes it possible to define quantities such as the expected value and variance of a random variable, its cumulative distribution function, and the moments of its distribution.

### Cauchy distribution

**Lorentzian, Cauchy, Lorentzian profile**

The expected value does not exist for random variables having some distributions with large "tails", such as the Cauchy distribution.

The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution since both its expected value and its variance are undefined.
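A quick simulation illustrates the pathology (a sketch using the inverse-CDF transform tan(π(u − 1/2)) to draw standard Cauchy variates): the sample median is stable, but the sample mean is dominated by extreme draws and never settles, since no expected value exists:

```python
import math
import random
from statistics import median

# Standard Cauchy draws via the inverse CDF: x = tan(pi * (u - 1/2)).
# The sample median concentrates near 0, but the sample mean is
# dominated by occasional extreme draws and does not converge.
random.seed(1)
draws = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(10_000)]
print("median:", round(median(draws), 3))  # stable, near 0
print("mean:  ", sum(draws) / len(draws))  # erratic across runs and seeds
```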

### Expected utility hypothesis

**expected utility, expected utility theory, von Neumann-Morgenstern utility function**

In decision theory, and in particular in choice under uncertainty, an agent is described as making an optimal choice in the context of incomplete information.

Initiated by Daniel Bernoulli in 1738, this hypothesis has proven useful to explain some popular choices that seem to contradict the expected value criterion (which takes into account only the sizes of the payouts and the probabilities of occurrence), such as occur in the contexts of gambling and insurance.

### Risk aversion

**risk averse, risk-averse, risk tolerance**

For risk neutral agents, the choice involves using the expected values of uncertain quantities, while for risk averse agents it involves maximizing the expected value of some objective function such as a von Neumann–Morgenstern utility function.

It is the hesitation of a person to agree to a situation with an unknown payoff rather than another situation with a more predictable payoff but possibly lower expected payoff.

### Probability distribution

**distribution, continuous probability distribution, discrete probability distribution**

The expected value does not exist for random variables having some distributions with large "tails", such as the Cauchy distribution.

Expected value or mean: the weighted average of the possible values, using their probabilities as their weights; or the continuous analog thereof.
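The discrete case is a one-line computation (here for the hypothetical example of counting heads in two fair coin flips):

```python
# E[X] as a probability-weighted average over the support:
# X = number of heads in two fair coin flips.
outcomes = [0, 1, 2]
probs = [0.25, 0.5, 0.25]
expected = sum(x * p for x, p in zip(outcomes, probs))
print(expected)  # 1.0
```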

### Chebyshev's inequality

**Chebyshev, An inequality on location and scale parameters, Bienaymé–Chebyshev inequality**

More generally, the rate of convergence can be roughly quantified by e.g. Chebyshev's inequality and the Berry–Esseen theorem.

In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more than a certain distance from the mean.
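The bound can be checked numerically (a sketch using fair die rolls, where μ = 3.5 and σ² = 35/12, with k = 1.2 chosen so the tail event has nonzero probability):

```python
import random

# Chebyshev's inequality: P(|X - mu| >= k*sigma) <= 1/k^2.
# For a fair die, mu = 3.5 and sigma = sqrt(35/12) ≈ 1.708;
# with k = 1.2, the event |X - mu| >= k*sigma means X is 1 or 6.
random.seed(0)
mu, sigma, k = 3.5, (35 / 12) ** 0.5, 1.2
rolls = [random.randint(1, 6) for _ in range(100_000)]
frac = sum(abs(x - mu) >= k * sigma for x in rolls) / len(rolls)
print(f"{frac:.3f} <= {1 / k**2:.3f}")  # observed fraction vs. the bound
```

The observed fraction is about 1/3, comfortably under the bound 1/k² ≈ 0.694; Chebyshev's bound is loose but fully general.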

### Von Neumann–Morgenstern utility theorem

**von Neumann–Morgenstern utility function, Reduction of Compound Lotteries axiom, von Neumann and Morgenstern**

For risk neutral agents, the choice involves using the expected values of uncertain quantities, while for risk averse agents it involves maximizing the expected value of some objective function such as a von Neumann–Morgenstern utility function.

In decision theory, the von Neumann-Morgenstern utility theorem shows that, under certain axioms of rational behavior, a decision-maker faced with risky (probabilistic) outcomes of different choices will behave as if he or she is maximizing the expected value of some function defined over the potential outcomes at some specified point in the future.

### Probability density function

**probability density, density function, density**

The same principle applies to an absolutely continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum.

If a random variable X is given and its distribution admits a probability density function f, then the expected value of X (if the expected value exists) can be calculated as

\operatorname{E}[X] = \int_{-\infty}^{\infty} x f(x)\, dx
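As a numerical sketch (assuming an exponential density f(x) = λe^{−λx} with λ = 2, whose mean is 1/λ = 0.5), the integral can be approximated by a Riemann sum:

```python
import math

# Approximate E[X] = ∫ x f(x) dx for the exponential density
# f(x) = lam * exp(-lam * x) with a left Riemann sum on [0, 20];
# the tail beyond x = 20 is negligible for lam = 2.
lam = 2.0
dx = 1e-3
expected = sum(i * dx * lam * math.exp(-lam * i * dx) * dx
               for i in range(1, 20_000))
print(round(expected, 3))  # ≈ 1 / lam = 0.5
```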

### Gordon–Loeb model

**Gordon-Loeb model**

One example of using expected value in reaching optimal decisions is the Gordon–Loeb model of information security investment.

From the model we can gather that the amount of money a company spends in protecting information should, in most cases, be only a small fraction of the predicted loss (for example, expected value of a loss following a security breach).

### Decision theory

**decision science, statistical decision theory, decision sciences**

In decision theory, and in particular in choice under uncertainty, an agent is described as making an optimal choice in the context of incomplete information.

Known from the 17th century (Blaise Pascal invoked it in his famous wager, which is contained in his Pensées, published in 1670), the idea of expected value is that, when faced with a number of actions, each of which could give rise to more than one possible outcome with different probabilities, the rational procedure is to identify all possible outcomes, determine their values (positive or negative) and the probabilities resulting from each course of action, and multiply the two to give an "expected value" for each action; the action to be chosen should be the one that gives rise to the highest total expected value.
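This procedure is mechanical enough to write down directly (a sketch with hypothetical payoffs and probabilities, not taken from the source):

```python
# Pascal-style choice by expected value: for each action, sum
# value * probability over its possible outcomes, then pick the
# action with the highest total.  Payoffs here are hypothetical.
actions = {
    "safe bet": [(10, 1.0)],                 # (value, probability)
    "risky bet": [(100, 0.2), (-5, 0.8)],
}
ev = {name: sum(v * p for v, p in pairs) for name, pairs in actions.items()}
best = max(ev, key=ev.get)
print(ev, "->", best)  # risky bet: 100*0.2 - 5*0.8 = 16 > 10
```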

### Roulette

**roulette wheel, American Roulette, betting wheel**

The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge.

It can be easily demonstrated that this payout formula would lead to a zero expected value of profit if there were only 36 numbers.
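The expected profit of a $1 straight-up bet makes this concrete: the bet pays 35 to 1, but there are 38 pockets, so the player's expected value is negative, while with only 36 numbers the same payout would be exactly fair:

```python
# American roulette, $1 straight-up bet: win +35 with probability 1/38,
# lose the stake (-1) with probability 37/38.
ev = 35 * (1 / 38) + (-1) * (37 / 38)
print(round(ev, 4))  # -0.0526: the house keeps about 5.26 cents per dollar

# The same 35:1 payout with only 36 numbers would be a fair game.
ev_fair = 35 * (1 / 36) + (-1) * (35 / 36)
print(round(ev_fair, 10))  # ≈ 0
```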

### St. Petersburg paradox

**Petersberg, Saint Petersburg problem, St. Petersburg's game**

An example that diverges arises in the context of the St. Petersburg paradox.

It is based on a particular (theoretical) lottery game that leads to a random variable with infinite expected value (i.e., infinite expected payoff) but nevertheless seems to be worth only a very small amount to the participants.
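The divergence is easy to see by truncating the sum (the game pays 2^k with probability 2^{-k} in round k, so every term contributes exactly 1):

```python
# St. Petersburg game: payoff 2^k occurs with probability 2^(-k),
# so each round adds 2^k * 2^(-k) = 1 to the expected value and
# the full sum E = 1 + 1 + 1 + ... diverges.
def truncated_ev(n_rounds):
    return sum((2 ** k) * (2.0 ** -k) for k in range(1, n_rounds + 1))

print(truncated_ev(10), truncated_ev(100))  # 10.0 100.0, growing without bound
```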

### Probability measure

**measure, probability distribution, law**

The formal definition subsumes both of these and also works for distributions which are neither discrete nor absolutely continuous; the expected value of a random variable is the integral of the random variable with respect to its probability measure.

For instance, a risk-neutral measure is a probability measure which assumes that the current value of assets is the expected value of the future payoff taken with respect to that same risk neutral measure (i.e. calculated using the corresponding risk neutral density function), and discounted at the risk-free rate.

### Risk neutral preferences

**risk neutral, risk-neutral, risk neutrality**

For risk neutral agents, the choice involves using the expected values of uncertain quantities, while for risk averse agents it involves maximizing the expected value of some objective function such as a von Neumann–Morgenstern utility function.

In portfolio choice, a risk neutral investor who is able to choose any combination of an array of risky assets (various companies' stocks, various companies' bonds, etc.) would invest exclusively in the asset with the highest expected yield, ignoring its risk features relative to those of other assets, and would even sell short the asset with the lowest expected yield as much as is permitted in order to invest the proceeds in the highest expected-yield asset.

### Loss function

**objective function, cost function, risk function**

Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms.

### Covariance

**covariant, covariation, covary**

The amount by which the multiplicativity fails is called the covariance:

\operatorname{Cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y],

where \operatorname{E}[X] is the expected value of X, also known as the mean of X. The covariance is also sometimes denoted \sigma_{XY} or \sigma(X, Y), in analogy to variance.
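A small joint distribution shows the failure of multiplicativity directly (a sketch with equal weight on three dependent (x, y) pairs):

```python
# Cov(X, Y) = E[XY] - E[X]E[Y] over a uniform joint distribution
# on three (x, y) pairs with Y = 2X, so X and Y are dependent.
pairs = [(1, 2), (2, 4), (3, 6)]
ex = sum(x for x, _ in pairs) / len(pairs)       # E[X] = 2
ey = sum(y for _, y in pairs) / len(pairs)       # E[Y] = 4
exy = sum(x * y for x, y in pairs) / len(pairs)  # E[XY] = 28/3
cov = exy - ex * ey
print(cov)  # 4/3: E[XY] != E[X]E[Y], so multiplicativity fails
```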

### Convex function

**convex, convexity, strictly convex**

Let \varphi be a Borel convex function and X a random variable such that \operatorname{E}|X| < \infty.

In probability theory, a convex function applied to the expected value of a random variable is always less than or equal to the expected value of the convex function of the random variable.
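This is Jensen's inequality; a quick numerical check (with the convex function φ(x) = x² and a fair die) confirms the direction:

```python
# Jensen's inequality: phi(E[X]) <= E[phi(X)] for convex phi.
values = [1, 2, 3, 4, 5, 6]            # fair die, each with weight 1/6
phi = lambda x: x ** 2                 # a convex function
lhs = phi(sum(values) / 6)             # phi(E[X]) = 3.5^2 = 12.25
rhs = sum(phi(x) for x in values) / 6  # E[phi(X)] = 91/6 ≈ 15.17
print(lhs, "<=", rhs)
```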

### Bias of an estimator

**unbiased, unbiased estimator, bias**

A formula is typically considered good in this context if it is an unbiased estimator, that is, if the expected value of the estimate (the average value it would give over an arbitrarily large number of separate samples) can be shown to equal the true value of the desired parameter.

In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated.
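Unbiasedness of the sample mean can be illustrated by simulation (a sketch: averaging the estimator itself over many independent samples recovers the true mean, 3.5 for a fair die):

```python
import random

# The sample mean is an unbiased estimator of E[X]: averaged over
# many independent samples, it recovers the true mean (3.5 here).
random.seed(0)

def sample_mean(n=10):
    return sum(random.randint(1, 6) for _ in range(n)) / n

grand_mean = sum(sample_mean() for _ in range(50_000)) / 50_000
print(round(grand_mean, 2))  # close to 3.5
```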

### Characteristic function (probability theory)

**characteristic function, characteristic functions**

The probability density function f_X of a scalar random variable X is related to its characteristic function \varphi_X by the inversion formula:

f_X(x) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{-itx} \varphi_X(t)\, dt

For a scalar random variable X, the characteristic function is defined as the expected value of e^{itX}, where i is the imaginary unit and t ∈ R is the argument of the characteristic function:

\varphi_X(t) = \operatorname{E}\left[e^{itX}\right]

### Linear map

**linear operator, linear transformation, linear**

The expected value operator (or expectation operator) is linear in the sense that

The expected value of a random variable (which is in fact a function, and as such a member of a vector space) is linear, as for random variables X and Y we have E[X + Y] = E[X] + E[Y] and E[aX] = aE[X], but the variance of a random variable is not linear.
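A small check makes the point (a sketch with a uniform joint distribution in which Y is a deterministic, nonlinear function of X, so the two are dependent):

```python
# Linearity of expectation needs no independence:
# E[X + Y] = E[X] + E[Y] even with Y = X^2.
pairs = [(1, 1), (2, 4), (3, 9)]          # equal weight 1/3 each
ex = sum(x for x, _ in pairs) / 3         # E[X] = 2
ey = sum(y for _, y in pairs) / 3         # E[Y] = 14/3
e_sum = sum(x + y for x, y in pairs) / 3  # E[X + Y] = 20/3
print(e_sum, "==", ex + ey)
```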

### Moment-generating function

**moment generating function, Calculations of moments, generating functions**

The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.

The moment-generating function of a random variable X is defined as M_X(t) = \operatorname{E}\left[e^{tX}\right], wherever this expectation exists.

### Monte Carlo method

**Monte Carlo, Monte Carlo simulation, Monte Carlo simulations**

This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g., \operatorname{P}(X \in \mathcal{A}) = \operatorname{E}[1_{\mathcal{A}}(X)], where 1_{\mathcal{A}} is the indicator function of the set \mathcal{A}.

By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the sample mean) of independent samples of the variable.
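A minimal Monte Carlo sketch (estimating P(X < 0.3) for a uniform X on [0, 1) as the sample mean of an indicator):

```python
import random

# P(X in A) = E[1_A(X)]: estimate the probability that a uniform
# draw falls below 0.3 by averaging the indicator over many samples.
random.seed(0)
n = 100_000
estimate = sum(1 for _ in range(n) if random.random() < 0.3) / n
print(round(estimate, 3))  # close to the true probability 0.3
```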

### Central moment

**central moments, moment about the mean, moments about the mean**

The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.

In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random variable from the mean.