# Likelihood principle

In statistics, the likelihood principle is the proposition that, given a statistical model, all the evidence in a sample relevant to model parameters is contained in the likelihood function.

### Likelihood function

Later, Barnard and Birnbaum led a school of thought that advocated the likelihood principle, postulating that all relevant information for inference is contained in the likelihood function.

### A. W. F. Edwards

More recently the likelihood principle as a general principle of inference has been championed by A. W. F. Edwards.
With Luigi Luca Cavalli-Sforza, he carried out pioneering work on quantitative methods of phylogenetic analysis, and he has strongly advocated Fisher's concept of likelihood as the proper basis for statistical and scientific inference.

### Ronald Fisher

Arguments for the same principle, unnamed, and its use in applications go back to the work of R. A. Fisher in the 1920s.
In the winter of 1954–1955 Fisher met Debabrata Basu, the Indian statistician who wrote in 1988, "With his reference set argument, Sir Ronald was trying to find a via media between the two poles of Statistics – Berkeley and Bayes. My efforts to understand this Fisher compromise led me to the likelihood principle".

### Conditionality principle

Birnbaum proved that the likelihood principle follows from two more primitive and seemingly reasonable principles, the conditionality principle and the sufficiency principle.
Together with the sufficiency principle, Birnbaum's version of the conditionality principle implies the likelihood principle.

### P-value

The use of frequentist methods involving p-values leads to different inferences for the two cases above, showing that the outcome of frequentist methods depends on the experimental procedure, and thus violates the likelihood principle.
Some statisticians have proposed replacing p-values with alternative measures of evidence, such as confidence intervals, likelihood ratios, or Bayes factors, but there is heated debate on the feasibility of these alternatives.
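The divergence can be made concrete with the textbook example (a sketch with the standard numbers, not taken verbatim from this page): 3 successes are observed in 12 Bernoulli trials with success probability θ, testing θ = 1/2 against θ < 1/2. The one-sided p-value depends on whether the number of trials or the number of successes was fixed in advance, even though the data are the same:

```python
from math import comb

theta = 0.5

# Design 1: n = 12 trials fixed in advance (binomial sampling).
# p-value = P(X <= 3) for X ~ Binomial(12, 1/2).
p_binomial = sum(comb(12, k) * theta**k * (1 - theta)**(12 - k)
                 for k in range(4))

# Design 2: sampling continued until 3 successes (negative binomial
# sampling). p-value = P(Y >= 12) = P(at most 2 successes in the
# first 11 trials).
p_neg_binomial = sum(comb(11, k) * theta**k * (1 - theta)**(11 - k)
                     for k in range(3))

print(round(p_binomial, 4))      # 0.073
print(round(p_neg_binomial, 4))  # 0.0327
```

At the conventional 5% level, the negative binomial design rejects the null hypothesis while the binomial design does not, despite identical data.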

### Likelihoodist statistics

The central idea of likelihoodism is the likelihood principle: data are interpreted as evidence, and the strength of the evidence is measured by the likelihood function.

### Allan Birnbaum

Birnbaum's argument for the likelihood principle generated great controversy; it implied, amongst other things, a repudiation of the approach of Wald and Lehmann, that Birnbaum had followed in his own research.

### Statistics


### Statistical model


### Sampling (statistics)


### Probability density function

A likelihood function arises from a probability density function considered as a function of its parameter rather than of the data.

### Random variable

For example, consider a model which gives the probability density function f_X(x | θ) of observable random variable X as a function of a parameter θ.
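A minimal sketch of this reading, using a hypothetical exponential model f_X(x | θ) = θ·exp(−θx): the same formula gives the density when x varies with θ fixed, and the likelihood when θ varies with the observation fixed.

```python
from math import exp

def density(x, theta):
    # Exponential density f_X(x | theta) = theta * exp(-theta * x)
    return theta * exp(-theta * x)

x_obs = 2.0  # a fixed, hypothetical observation

def likelihood(theta):
    # Same formula, now read as a function of the parameter
    return density(x_obs, theta)

# Under the likelihood principle only likelihood ratios matter:
print(likelihood(0.5) / likelihood(1.0))
```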

### Probability mass function

The density function may be a density with respect to counting measure, i.e. a probability mass function.

### Stopping time

The strong likelihood principle applies this same criterion to cases such as sequential experiments where the sample of data that is available results from applying a stopping rule to the observations earlier in the experiment.

### Design of experiments

This is the case in the above example, reflecting the fact that the difference between observing X = 3 and observing Y = 12 lies not in the actual data, but merely in the design of the experiment.
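The two designs yield likelihood functions that differ only by a constant factor, which is why the likelihood principle treats them as equivalent evidence. A quick check (numbers as in the classic example):

```python
from math import comb

def lik_binomial(theta):
    # L(theta) for X = 3 successes in n = 12 fixed trials
    return comb(12, 3) * theta**3 * (1 - theta)**9

def lik_neg_binomial(theta):
    # L(theta) for Y = 12 trials needed to reach r = 3 successes
    return comb(11, 2) * theta**3 * (1 - theta)**9

# The ratio is constant in theta (220 / 55 = 4), so the two
# likelihood functions are proportional:
ratios = [lik_binomial(t) / lik_neg_binomial(t)
          for t in (0.1, 0.25, 0.5, 0.9)]
print(ratios)
```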

### Independence (probability theory)


### Bernoulli trial


### Bayesian statistics

In Bayesian statistics, the likelihood ratio is known as the Bayes factor, and Bayes' rule can be seen as the application of the law of likelihood to inference.
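For two simple hypotheses the Bayes factor reduces to the likelihood ratio of the observed data. A sketch with hypothetical numbers (7 heads in 10 coin tosses, comparing θ = 0.8 against θ = 0.5):

```python
from math import comb

def binom_lik(theta, x=7, n=10):
    # Binomial likelihood of x successes in n trials
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

# Bayes factor for H2 (theta = 0.8) over H1 (theta = 0.5):
bayes_factor = binom_lik(0.8) / binom_lik(0.5)
print(round(bayes_factor, 3))  # 1.718
```

The binomial coefficient cancels in the ratio, illustrating that only the likelihood-function shape, not the sampling constant, affects the comparison.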

### Bayes factor


### Bayes' theorem


### Frequentist inference


### Likelihood-ratio test

In frequentist inference, the likelihood ratio is used in the likelihood-ratio test, but other non-likelihood tests are used as well.

### Neyman–Pearson lemma

The Neyman–Pearson lemma states the likelihood-ratio test is the most powerful test for comparing two simple hypotheses at a given significance level, which gives a frequentist justification for the law of likelihood.
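A sketch of such a test, for a single unit-variance normal observation with H0: μ = 0 versus H1: μ = 1 (a hypothetical setup): Neyman–Pearson says to reject H0 when the likelihood ratio exceeds a threshold chosen for the desired level.

```python
from math import exp, pi

def normal_pdf(x, mu):
    # Unit-variance normal density
    return exp(-0.5 * (x - mu) ** 2) / (2 * pi) ** 0.5

def likelihood_ratio(x):
    # f(x | mu=1) / f(x | mu=0), which simplifies to exp(x - 0.5)
    return normal_pdf(x, 1.0) / normal_pdf(x, 0.0)

# Because exp(x - 0.5) is increasing in x, thresholding the ratio is
# the same as thresholding x itself; with cutoff x > 1.645 the test
# has significance level roughly 0.05.
x = 2.0
print(likelihood_ratio(x) > likelihood_ratio(1.645))  # True: reject H0
```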

### Power (statistics)


### Statistical significance
