Random variable

In probability and statistics, a random variable, random quantity, aleatory variable, or stochastic variable is a variable whose possible values are outcomes of a random phenomenon.
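
As a rough illustration of the definition, the sketch below (plain Python, hypothetical example) treats a die roll as the random phenomenon and an indicator of "the roll is even" as the random variable mapping outcomes to values.

```python
import random

# The random phenomenon: rolling a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]

# A random variable maps each outcome to a numerical value; here X is the
# indicator "the roll is even" (1 if even, 0 otherwise).
def X(outcome):
    return 1 if outcome % 2 == 0 else 0

# Repeatedly realise the phenomenon and observe X.
samples = [X(random.choice(outcomes)) for _ in range(10_000)]
print("estimated P(X = 1):", sum(samples) / len(samples))  # close to 0.5
```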

Statistical unit

The statistical unit is the material source for the mathematical abstraction of a "random variable".

Log-Cauchy distribution

In probability theory, a log-Cauchy distribution is a probability distribution of a random variable whose logarithm is distributed in accordance with a Cauchy distribution.
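
A minimal simulation sketch of that construction, assuming NumPy is available: exponentiate Cauchy samples to obtain (empirically) log-Cauchy values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Y is standard Cauchy; X = exp(Y) is then log-Cauchy distributed,
# since log(X) = Y follows a Cauchy distribution.
y = rng.standard_cauchy(size=100_000)
x = np.exp(y)

# Sanity check: the logarithm of the samples behaves like a Cauchy sample,
# e.g. its median is near the Cauchy location parameter 0.
print("median of log(X):", np.median(np.log(x)))
```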

Nonparametric skew

In statistics and probability theory, the nonparametric skew is a statistic occasionally used with random variables that take real values.

Markov random field

In the domain of physics and probability, a Markov random field (often abbreviated as MRF), Markov network or undirected graphical model is a set of random variables having a Markov property described by an undirected graph.

Sunspots (economics)

In economics, the term sunspots (or sometimes "a sunspot") usually refers to an extrinsic random variable, that is, a random variable that does not affect economic fundamentals (such as endowments, preferences, or technology).

Sexual dimorphism measures

Most of the measures are used on the assumption that the quantity being compared is treated as a random variable, so that its probability distribution has to be taken into account.

Canonical correlation

If we have two vectors X = (X₁, ..., Xₙ) and Y = (Y₁, ..., Yₘ) of random variables, and there are correlations among the variables, then canonical-correlation analysis will find linear combinations of X and Y which have maximum correlation with each other.
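
A small sketch of that idea, assuming scikit-learn and NumPy are installed and using synthetic data with a shared latent factor (both the data and the single-component setting are illustrative):

```python
import numpy as np
from sklearn.cross_decomposition import CCA  # assumes scikit-learn is available

rng = np.random.default_rng(0)

# Two blocks of random variables that share a latent factor.
latent = rng.normal(size=(500, 1))
X = np.hstack([latent + 0.5 * rng.normal(size=(500, 1)) for _ in range(3)])
Y = np.hstack([latent + 0.5 * rng.normal(size=(500, 1)) for _ in range(2)])

# CCA finds linear combinations of the X-columns and the Y-columns
# whose correlation with each other is maximal.
cca = CCA(n_components=1)
U, V = cca.fit_transform(X, Y)
print("first canonical correlation:", np.corrcoef(U[:, 0], V[:, 0])[0, 1])
```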

Bernoulli process

In probability and statistics, a Bernoulli process (named after Jacob Bernoulli) is a finite or infinite sequence of binary random variables, so it is a discrete-time stochastic process that takes only two values, canonically 0 and 1.
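
A minimal sketch of a finite Bernoulli process in plain Python (the helper name and parameters are illustrative):

```python
import random

# A finite Bernoulli process: n independent binary random variables,
# each equal to 1 with probability p and 0 otherwise.
def bernoulli_process(n, p, seed=None):
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

flips = bernoulli_process(20, 0.5, seed=42)   # 20 fair "coin flips"
print(flips)
print("empirical frequency of 1s:", sum(flips) / len(flips))
```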

Stable distribution

In probability theory, a distribution is said to be stable if a linear combination of two independent random variables with this distribution has the same distribution, up to location and scale parameters.
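
The normal distribution is one familiar stable family; the sketch below (NumPy assumed, parameters illustrative) checks empirically that a linear combination of two independent normals is again normal up to location and scale.

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(loc=1.0, scale=2.0, size=200_000)
y = rng.normal(loc=1.0, scale=2.0, size=200_000)
z = 3 * x + 4 * y

# z should again be normal, with mean 3*1 + 4*1 = 7 and
# standard deviation sqrt((3*2)**2 + (4*2)**2) = 10.
print("mean ≈", z.mean(), " std ≈", z.std())
```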

Kurtosis

In probability theory and statistics, kurtosis (from κυρτός, kyrtos or kurtos, meaning "curved, arching") is a measure of the "tailedness" of the probability distribution of a real-valued random variable.
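
A quick empirical comparison, assuming SciPy and NumPy are available: the Laplace distribution has heavier tails than the normal, and its excess kurtosis is correspondingly larger.

```python
import numpy as np
from scipy.stats import kurtosis  # reports excess kurtosis by default

rng = np.random.default_rng(0)

normal = rng.normal(size=100_000)    # light-tailed: excess kurtosis ≈ 0
laplace = rng.laplace(size=100_000)  # heavier-tailed: excess kurtosis ≈ 3

print("normal :", kurtosis(normal))
print("laplace:", kurtosis(laplace))
```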

Conditional entropy

In information theory, the conditional entropy (or equivocation) quantifies the amount of information needed to describe the outcome of a random variable Y given that the value of another random variable X is known.
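
A small worked example for two binary variables, using the chain rule H(Y|X) = H(X,Y) - H(X); the joint table is made up for illustration and only NumPy is assumed.

```python
import numpy as np

# Joint distribution p(x, y): rows index X, columns index Y (illustrative values).
p_xy = np.array([[0.30, 0.20],
                 [0.10, 0.40]])

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Chain rule: H(Y | X) = H(X, Y) - H(X).
H_joint = entropy(p_xy.ravel())
H_x = entropy(p_xy.sum(axis=1))
print("H(Y|X) =", H_joint - H_x, "bits")
```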

Ratio distribution

A ratio distribution (or quotient distribution) is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions.
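
A classic instance: the ratio of two independent standard normal variables is standard Cauchy. A quick empirical check, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ratio of two independent standard normals: a standard Cauchy variable.
z = rng.normal(size=100_000) / rng.normal(size=100_000)

# The Cauchy distribution has no mean, but its median is 0 and its
# quartiles are at -1 and +1.
print("median   :", np.median(z))
print("quartiles:", np.percentile(z, [25, 75]))
```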

Differential entropy

Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Shannon to extend the idea of (Shannon) entropy, a measure of average surprisal of a random variable, to continuous probability distributions.
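
For instance, a normal distribution with standard deviation σ has differential entropy 0.5·ln(2πeσ²) nats; the sketch below (SciPy assumed, σ chosen arbitrarily) compares the closed form with SciPy's built-in value.

```python
import numpy as np
from scipy.stats import norm  # assumes SciPy is available

sigma = 2.0

# Closed form for the differential entropy of N(0, sigma^2), in nats.
closed_form = 0.5 * np.log(2 * np.pi * np.e * sigma**2)

# SciPy exposes the same quantity for a frozen distribution via .entropy().
print("closed form:", closed_form)
print("scipy      :", norm(scale=sigma).entropy())
```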

Experimental uncertainty analysis

The uncertainty has two components, namely, bias (related to accuracy) and the unavoidable random variation that occurs when making repeated measurements (related to precision). The measured quantities may have biases, and they certainly have random variation, so what needs to be addressed is how these are "propagated" into the uncertainty of the derived quantity.

Normality test

In statistics, normality tests are used to determine if a data set is well-modeled by a normal distribution and to compute how likely it is for a random variable underlying the data set to be normally distributed.
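
A short sketch using the D'Agostino-Pearson test from SciPy (library choice and sample sizes are illustrative): small p-values suggest the data are unlikely to come from a normal distribution.

```python
import numpy as np
from scipy.stats import normaltest  # D'Agostino-Pearson normality test

rng = np.random.default_rng(0)

datasets = {
    "gaussian": rng.normal(size=500),
    "skewed":   rng.exponential(size=500),
}

for name, data in datasets.items():
    stat, p = normaltest(data)
    print(f"{name}: p = {p:.4f}")  # large p for gaussian, tiny p for skewed
```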

Rayleigh distribution

In probability theory and statistics, the Rayleigh distribution is a continuous probability distribution for positive-valued random variables.
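
One well-known way a Rayleigh variable arises is as the magnitude of a two-dimensional zero-mean normal vector; the sketch below (NumPy assumed, σ illustrative) checks the sample mean against the known value σ·sqrt(π/2).

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 1.5

# Magnitude of two independent N(0, sigma^2) components is Rayleigh(sigma).
r = np.hypot(rng.normal(scale=sigma, size=100_000),
             rng.normal(scale=sigma, size=100_000))

print("empirical mean:", r.mean())
print("theoretical   :", sigma * np.sqrt(np.pi / 2))
```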

Kernel density estimation

In statistics, kernel density estimation (KDE) is a non-parametric way to estimate the probability density function of a random variable.
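
A minimal KDE sketch using SciPy's gaussian_kde (the bimodal sample and evaluation grid are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde  # assumes SciPy is available

rng = np.random.default_rng(0)

# Draw from a bimodal mixture, then estimate its density non-parametrically.
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
kde = gaussian_kde(data)

grid = np.linspace(-4, 4, 9)
print(np.round(kde(grid), 3))  # estimated density values on the grid
```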

Shifted Gompertz distribution

The shifted Gompertz distribution is the distribution of the larger of two independent random variables, one of which has an exponential distribution with parameter b and the other a Gumbel distribution with parameters η and b. In its original formulation the distribution was expressed with reference to the Gompertz distribution rather than the Gumbel distribution but, since the Gompertz distribution is a reverted Gumbel distribution, the labelling can be considered accurate.
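
A simulation sketch of that construction, assuming NumPy and one common parameterisation (exponential rate b; Gumbel scale 1/b and location ln(η)/b); the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

b, eta = 0.4, 3.0
n = 100_000

# The larger of an Exponential(b) variable and a Gumbel variable.
exponential = rng.exponential(scale=1 / b, size=n)
gumbel = rng.gumbel(loc=np.log(eta) / b, scale=1 / b, size=n)
shifted_gompertz = np.maximum(exponential, gumbel)

print("sample mean:", shifted_gompertz.mean())
```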

Partial correlation

In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed.
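
A small sketch of the usual residual-based computation, assuming NumPy and synthetic data in which a single controlling variable z drives both x and y:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5_000
z = rng.normal(size=n)
x = 2 * z + rng.normal(size=n)
y = -3 * z + rng.normal(size=n)

def residuals(a, b):
    """Residuals of a least-squares regression of a on b (with intercept)."""
    coeffs = np.polyfit(b, a, deg=1)
    return a - np.polyval(coeffs, b)

# Partial correlation of x and y given z: correlate the residuals after
# removing the linear effect of z from each variable.
r_partial = np.corrcoef(residuals(x, z), residuals(y, z))[0, 1]
print("raw correlation    :", np.corrcoef(x, y)[0, 1])  # strongly negative
print("partial correlation:", r_partial)                # close to 0
```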

Compound probability distribution

In probability and statistics, a compound probability distribution (also known as a mixture distribution or contagious distribution) is the probability distribution that results from assuming that a random variable is distributed according to some parametrized distribution, with (some of) the parameters of that distribution themselves being random variables.
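
A standard example is a Poisson count whose rate is itself gamma-distributed, which compounds to a negative binomial; the sketch below (NumPy assumed, parameters illustrative) shows the resulting overdispersion.

```python
import numpy as np

rng = np.random.default_rng(0)

k, theta = 3.0, 2.0  # gamma shape and scale for the random Poisson rate

lam = rng.gamma(shape=k, scale=theta, size=100_000)  # random parameter
counts = rng.poisson(lam)                            # compound distribution

# A plain Poisson would have variance equal to its mean; the compound
# (negative binomial) counts are overdispersed.
print("mean:", counts.mean(), " variance:", counts.var())
```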

Sum of normally distributed random variables

In probability theory, calculation of the sum of normally distributed random variables is an instance of the arithmetic of random variables, which can be quite complex depending on the probability distributions of the random variables involved and their relationships.
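
For independent normals the arithmetic is simple: X + Y is again normal, with the means and variances adding. A quick empirical check (NumPy assumed, parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ N(1, 2^2) and Y ~ N(-3, 1.5^2), independent.
x = rng.normal(loc=1.0, scale=2.0, size=200_000)
y = rng.normal(loc=-3.0, scale=1.5, size=200_000)
s = x + y

print("mean ≈", s.mean(), " (theory: -2.0)")
print("std  ≈", s.std(), " (theory:", np.sqrt(2.0**2 + 1.5**2), ")")
```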

Tweedie distribution

For any random variable Y that obeys a Tweedie distribution, the variance var(Y) relates to the mean E(Y) by the power law var(Y) = a[E(Y)]^p, where a and p are positive constants.

Stochastic optimization

Stochastic optimization (SO) methods are optimization methods that generate and use random variables.
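
A minimal sketch of one such method, pure random search over a toy objective (everything here is illustrative, not a production optimizer):

```python
import random

def objective(x):
    return (x - 3.0) ** 2 + 1.0  # minimised at x = 3 with value 1

rng = random.Random(0)
best_x, best_f = None, float("inf")

# Random search: candidate points are random variables drawn uniformly.
for _ in range(10_000):
    x = rng.uniform(-10, 10)
    f = objective(x)
    if f < best_f:
        best_x, best_f = x, f

print("best x ≈", best_x, " best f ≈", best_f)
```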

Stochastic dominance

Stochastic dominance is a partial order between random variables.
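
An empirical sketch of first-order stochastic dominance, assuming NumPy: A dominates B when A's distribution function lies at or below B's everywhere (here checked on a grid, with a small tolerance for sampling noise).

```python
import numpy as np

rng = np.random.default_rng(0)

a = rng.normal(loc=1.0, scale=1.0, size=50_000)  # shifted upward
b = rng.normal(loc=0.0, scale=1.0, size=50_000)

grid = np.linspace(-5, 6, 200)
F_a = np.array([(a <= t).mean() for t in grid])
F_b = np.array([(b <= t).mean() for t in grid])

print("A (empirically) first-order dominates B:",
      bool(np.all(F_a <= F_b + 1e-3)))
```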

Sample mean and covariance

The sample mean or empirical mean and the sample covariance are statistics computed from a collection (the sample) of data on one or more random variables.
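
A minimal computation sketch, assuming NumPy and using synthetic bivariate data (the mean vector and covariance matrix below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 observations on two random variables (one column per variable).
data = rng.multivariate_normal(mean=[0.0, 5.0],
                               cov=[[1.0, 0.8], [0.8, 2.0]],
                               size=1_000)

sample_mean = data.mean(axis=0)          # empirical mean of each variable
sample_cov = np.cov(data, rowvar=False)  # unbiased sample covariance matrix

print("sample mean:\n", sample_mean)
print("sample covariance:\n", sample_cov)
```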