# Jeffreys prior

In Bayesian probability, the Jeffreys prior, named after Sir Harold Jeffreys, is a non-informative (objective) prior distribution for a parameter space; it is proportional to the square root of the determinant of the Fisher information matrix:

$$p(\vec\theta) \propto \sqrt{\det \mathcal{I}(\vec\theta)}.$$
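As a one-parameter illustration (the Bernoulli example and the function names are additions, not from the source): a single Bernoulli trial has Fisher information $I(\theta) = 1/(\theta(1-\theta))$, so the Jeffreys prior $\propto \sqrt{I(\theta)}$ becomes, after normalization by $B(1/2,1/2) = \pi$, the Beta(1/2, 1/2) density.

```python
import math

def fisher_information_bernoulli(theta):
    """Expected squared score E[(d/dtheta log p(x|theta))^2] for one Bernoulli trial."""
    # score is 1/theta when x = 1 and -1/(1 - theta) when x = 0
    return theta * (1.0 / theta) ** 2 + (1.0 - theta) * (1.0 / (1.0 - theta)) ** 2

def jeffreys_density_bernoulli(theta):
    """Jeffreys prior sqrt(I(theta)), normalized by B(1/2, 1/2) = pi."""
    return math.sqrt(fisher_information_bernoulli(theta)) / math.pi
```

The normalized density is exactly $1/\bigl(\pi\sqrt{\theta(1-\theta)}\bigr)$, the Beta(1/2, 1/2) (arcsine) density.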
## Related Articles

### Harold Jeffreys

One of his major contributions was on the Bayesian approach to probability (also see Jeffreys prior), as well as the idea that the Earth's planetary core was liquid.

### Fisher information

The Fisher information is also used in the calculation of the Jeffreys prior, which is used in Bayesian statistics.

### Prior probability

The Jeffreys prior for the mean of a Gaussian of known variance (uniform over the entire real line) is an improper prior, and is, up to the choice of constant, the unique translation-invariant distribution on the reals (the Haar measure with respect to addition of reals), corresponding to the mean being a measure of location and translation-invariance corresponding to no information about location. Sometimes the Jeffreys prior cannot be normalized, and is thus an improper prior.
Priors can also be chosen according to some principle, such as symmetry or maximizing entropy given constraints; examples are the Jeffreys prior or Bernardo's reference prior.

### Arcsine distribution

For the Bernoulli trial parameter $\theta$, the Jeffreys prior is proportional to $\theta^{-1/2}(1-\theta)^{-1/2}$; this is the arcsine distribution, a beta distribution with $\alpha = \beta = 1/2$.

### Beta distribution

The determinant of Fisher's information matrix is of interest (for example for the calculation of Jeffreys prior probability).

### Haar measure

For the group of affine transformations on the parameter space of the normal distribution, the right Haar measure is the Jeffreys prior measure.

### Bayesian probability


### Square root


### Determinant


### Coordinate system

It has the key feature that it is invariant under a change of coordinates for the parameter vector $\vec\theta$.

### Integration by substitution

The invariance follows using the change of variables theorem for transformations and the definition of Fisher information: for a reparametrization $\varphi$ of $\theta$ (in the one-parameter case),

$$p(\varphi) = p(\theta)\left|\frac{d\theta}{d\varphi}\right| \propto \sqrt{I(\theta)\left(\frac{d\theta}{d\varphi}\right)^{2}} = \sqrt{I(\varphi)}.$$
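A numerical sketch of this invariance (example and names mine, assuming the standard variance-stabilizing reparametrization $\theta = \sin^2\phi$ for a Bernoulli trial): the transformed information $I(\phi) = I(\theta)\,(d\theta/d\phi)^2$ works out to the constant 4, so the Jeffreys prior is uniform in $\phi$.

```python
import math

def fisher_info_theta(theta):
    # Fisher information of one Bernoulli trial in the theta parametrization
    return 1.0 / (theta * (1.0 - theta))

def fisher_info_phi(phi):
    # Transformed information I(phi) = I(theta) * (dtheta/dphi)^2 with theta = sin^2(phi)
    theta = math.sin(phi) ** 2
    dtheta_dphi = math.sin(2.0 * phi)  # derivative of sin^2(phi)
    return fisher_info_theta(theta) * dtheta_dphi ** 2
```

Because $I(\phi)$ is constant, $\sqrt{I(\phi)}$ is flat: the same prior, merely expressed in the new coordinate.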

### Normalizing constant


### Normal distribution

For example, the Jeffreys prior for the distribution mean is uniform over the entire real line in the case of a Gaussian distribution of known variance.
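To spell this example out (a standard derivation, stated here rather than quoted from the source): for $x \sim N(\mu, \sigma^2)$ with $\sigma^2$ known,

$$\frac{\partial}{\partial\mu}\log p(x \mid \mu) = \frac{x-\mu}{\sigma^2}, \qquad I(\mu) = \operatorname{E}\!\left[\left(\frac{x-\mu}{\sigma^2}\right)^{2}\right] = \frac{1}{\sigma^2},$$

so $p(\mu) \propto \sqrt{I(\mu)}$ does not depend on $\mu$: the prior is uniform (and improper) on the real line.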

### Likelihood principle

Use of the Jeffreys prior violates the strong version of the likelihood principle, which is accepted by many, but by no means all, statisticians.

### Minimum description length

In the minimum description length approach to statistics the goal is to describe data as compactly as possible where the length of a description is measured in bits of the code used.

### Exponential family

The main result is that in exponential families, asymptotically for large sample size, the code based on the distribution that is a mixture of the elements in the exponential family with the Jeffreys prior is optimal.

### Poisson distribution

For the Poisson distribution of the non-negative integer $n$, with rate parameter $\lambda$, the Jeffreys prior is $p(\lambda) \propto 1/\sqrt{\lambda}$.
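A quick numerical check (function names mine) that the Poisson Fisher information is $I(\lambda) = \operatorname{E}\!\left[(n/\lambda - 1)^2\right] = 1/\lambda$, so that $\sqrt{I(\lambda)} = 1/\sqrt{\lambda}$:

```python
import math

def poisson_fisher_info(lam, n_max=200):
    """E[(d/dlam log p(n|lam))^2] = E[(n/lam - 1)^2], truncating the sum at n_max."""
    pmf = math.exp(-lam)               # P(N = 0)
    total = pmf * (0.0 / lam - 1.0) ** 2
    for n in range(1, n_max + 1):
        pmf *= lam / n                 # recurrence avoids huge factorials
        total += pmf * (n / lam - 1.0) ** 2
    return total

def jeffreys_poisson(lam):
    # Unnormalized (improper) Jeffreys prior: sqrt(I(lam)) = 1/sqrt(lam)
    return math.sqrt(poisson_fisher_info(lam))
```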

### Dirichlet distribution

Similarly, for a throw of an $N$-sided die with outcome probabilities $\vec{\gamma} = (\gamma_1, \ldots, \gamma_N)$, each non-negative and satisfying $\sum_{i=1}^{N} \gamma_i = 1$, the Jeffreys prior for $\vec{\gamma}$ is the Dirichlet distribution with all (alpha) parameters set to one half.

### Unit sphere

Equivalently, if we write $\gamma_i = \phi_i^2$ for each $i$, then the Jeffreys prior for $\vec{\phi}$ is uniform on the $(N-1)$-dimensional unit sphere (i.e., it is uniform on the surface of an $N$-dimensional unit ball).
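A sketch of this construction (function names mine, using the standard Gamma-based Dirichlet sampler): drawing $\vec{\gamma}$ from Dirichlet(1/2, ..., 1/2) and setting $\phi_i = \sqrt{\gamma_i}$ lands on the surface of the unit sphere.

```python
import math
import random

def sample_dirichlet_half(n_sides, seed=0):
    """One draw from Dirichlet(1/2, ..., 1/2) by normalizing Gamma(1/2, 1) variates."""
    rng = random.Random(seed)
    g = [rng.gammavariate(0.5, 1.0) for _ in range(n_sides)]
    s = sum(g)
    return [x / s for x in g]

def to_unit_sphere(gamma):
    """Map a point of the probability simplex to the sphere via phi_i = sqrt(gamma_i)."""
    # since the gamma_i sum to 1, the phi_i satisfy sum(phi_i^2) = 1
    return [math.sqrt(x) for x in gamma]
```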

### Bayesian inference


### Student's t-distribution

With $n$ data points, if uninformative, or flat, location and scale priors $p(\mu) \propto 1$ and $p(\sigma^2) \propto 1/\sigma^2$ can be taken for $\mu$ and $\sigma^2$, then Bayes' theorem gives a marginal posterior for $\mu$ that follows a Student's $t$-distribution.
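Concretely (a standard result stated for completeness rather than quoted from the source), with those priors the marginal posterior of $\mu$ satisfies

$$\frac{\mu - \bar{x}}{s/\sqrt{n}} \,\bigg|\, x_1, \ldots, x_n \;\sim\; t_{n-1}, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \quad s^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2.$$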

### List of statistics articles


### Bures metric

Some of the applications of the Bures metric include that given a target error, it allows the calculation of the minimum number of measurements to distinguish two different states and the use of the volume element as a candidate for the Jeffreys prior probability density for mixed quantum states.

### F-distribution

The quantity $X$ has the same distribution in Bayesian statistics, if an uninformative rescaling-invariant Jeffreys prior is taken for the prior probabilities of $\sigma_1^2$ and $\sigma_2^2$.

### Principle of transformation groups

This is the well-known Jeffreys prior for scale parameters, which is "flat" on the log scale, although in that setting it is derived using a different argument from the one here, which is based on the Fisher information function.
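The "flat on the log scale" remark can be made explicit with the substitution $u = \log\sigma$ (a standard change of variables, not quoted from the source):

$$p(\sigma)\,d\sigma \;\propto\; \frac{d\sigma}{\sigma} \;=\; du,$$

so the induced density of $u$ is constant.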