Likelihood function
{{Wiktionarypar|likelihood}}
In [[statistics]], the '''likelihood function''' (often simply the '''likelihood''') is a function of the [[parameter]]s of a [[statistical model]] that plays a key role in [[statistical inference]]. In non-technical usage, "likelihood" is a synonym for "[[probability]]", but throughout this article only the technical definition is used. Informally, if "probability" allows us to predict unknown outcomes based on known parameters, then "likelihood" allows us to estimate unknown parameters based on known outcomes.
In a sense, likelihood works backwards from probability: given ''B'', we use the conditional probability P(''A''|''B'') to reason about ''A'', and given ''A'', we use the likelihood function ''L''(''B''|''A'') to reason about ''B''. This mode of reasoning is formalized in [[Bayes' theorem]]:
:<math>P(B \mid A) = \frac{P(A \mid B)\;P(B)}{P(A)}.\!</math>
In [[statistics]], a '''likelihood function''' is a [[conditional probability]] [[function (mathematics) | function]] considered as a function of its ''second'' argument with its first argument held fixed, thus:
:<math>b\mapsto P(A \mid B=b), \!</math>
and also any other function proportional to such a function.
That is, the likelihood function for ''B'' is the [[equivalence class]] of functions
:<math>L(b \mid A) = \alpha \; P(A \mid B=b) \!</math>
for any constant of proportionality <math>\alpha > 0</math>.
The numerical value <math>L(b \mid A)</math> alone is immaterial; all that matters are likelihood [[ratio]]s of the form
:<math>\frac{L(b_2 \mid A)}{L(b_1 \mid A)}, \!</math>
which are invariant with respect to the [[constant of proportionality]].
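A minimal numerical sketch (in Python; the probability table below is an illustrative assumption, not taken from any particular model) of the fact that only such ratios carry information:
<source lang="python">
# Toy conditional probabilities P(A | B = b) for two candidate values of b.
# The numbers are purely illustrative.
p_A_given_b = {"b1": 0.20, "b2": 0.60}

def likelihood(b, alpha=1.0):
    """L(b | A) = alpha * P(A | B = b) for an arbitrary constant alpha > 0."""
    return alpha * p_A_given_b[b]

# The ratio L(b2 | A) / L(b1 | A) does not depend on the choice of alpha.
for alpha in (1.0, 2.5, 100.0):
    print(alpha, likelihood("b2", alpha) / likelihood("b1", alpha))  # always 3.0
</source>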
For more about making inferences via likelihood functions, see also the method of [[maximum likelihood]], and [[likelihood-ratio test]]ing.
==Likelihood function of a parameterized model==
Among many applications, we consider here one of broad theoretical and practical importance. Given a parameterized family of [[probability density function]]s (or [[probability mass function]]s in the case of discrete distributions)
:<math>x\mapsto f(x\mid\theta), \!</math>
where θ is the parameter, the '''likelihood function''' is
:<math>\theta\mapsto f(x\mid\theta), \!</math>
written
:<math>L(\theta \mid x)=f(x\mid\theta), \!</math>
where ''x'' is the observed outcome of an experiment. In other words, when ''f''(''x'' | θ) is viewed as a function of ''x'' with θ fixed, it is a probability density function, and when viewed as a function of θ with ''x'' fixed, it is a likelihood function.
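As an illustrative sketch (Python; the normal density and the particular numbers are assumptions chosen only for illustration), the same expression ''f''(''x''&nbsp;|&nbsp;θ) can be evaluated either as a density in ''x'' or as a likelihood in θ:
<source lang="python">
import math

def f(x, theta, sigma=1.0):
    """Normal density f(x | theta) with mean theta and known standard deviation sigma."""
    return math.exp(-0.5 * ((x - theta) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# As a function of x with theta fixed: a probability density function.
density_values = [f(x, theta=0.0) for x in (-1.0, 0.0, 1.0)]

# As a function of theta with the observed x fixed: a likelihood function.
x_observed = 1.3
likelihood_values = [f(x_observed, theta) for theta in (0.0, 1.0, 2.0)]
</source>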
''Note:'' This is ''not'' the same as the probability that those parameters are the right ones, given the observed sample. Attempting to interpret the likelihood of a hypothesis given observed evidence as the probability of the hypothesis is a common error, with potentially disastrous real-world consequences in medicine, engineering or jurisprudence. See [[prosecutor's fallacy]] for an example of this.
===Likelihoods for continuous distributions===
The use of the [[probability density]] instead of a probability in specifying the likelihood function above may be justified in a simple way. Suppose that, instead of an exact observation ''x'', the observation is that the value lay in a short interval (''x''<sub>''j''−1</sub>,&nbsp;''x''<sub>''j''</sub>) of length Δ<sub>''j''</sub>, where the subscripts refer to a predefined set of intervals. Then the probability of getting this observation (of being in interval ''j'') is approximately
:<math>L_{approx}(\theta \mid x \text{ in interval } j)=\Delta_j f(x_{*}\mid\theta), \!</math>
where x<sub>*</sub> can be any point in interval ''j''. Then, recalling that the likelihood function is defined up to a multiplicative constant, it is just as valid to say that the likelihood function is approximately
:<math>L_{approx}(\theta \mid x \text{ in interval } j)= f(x_{*}\mid\theta), \!</math>
and then, letting the lengths of the intervals decrease to zero,
:<math>L(\theta \mid x )= f(x\mid\theta). \!</math>
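A small numerical check of this approximation (Python; the normal density and interval width are illustrative assumptions):
<source lang="python">
import math

def normal_pdf(x, theta=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - theta) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, theta=0.0, sigma=1.0):
    # Used only to obtain the exact probability of the short interval.
    return 0.5 * (1.0 + math.erf((x - theta) / (sigma * math.sqrt(2.0))))

x, delta = 1.3, 1e-4                              # a short interval (x - delta, x)
exact = normal_cdf(x) - normal_cdf(x - delta)     # P(observation falls in the interval)
approx = delta * normal_pdf(x)                    # Delta_j * f(x | theta)
print(exact, approx)  # the two agree closely, and ever more so as delta shrinks
</source>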
===Likelihoods for mixed continuous–discrete distributions===
The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability masses ''p<sub>k</sub>''(θ) and a density ''f''(''x''|θ), where the sum of all the ''p'''s added to the integral of ''f'' is always one. Assuming that an observation corresponding to one of the discrete probability masses can be distinguished from one corresponding to the density component, the likelihood function for an observation from the continuous component can be dealt with as above, by taking the interval length short enough to exclude any of the discrete masses. For an observation from the discrete component, the probability can either be written down directly, or treated within the above framework by noting that the probability of getting an observation in an interval that contains a discrete component (of being in interval ''j'' which contains discrete component ''k'') is approximately
:<math>L_{approx}(\theta \mid x \text{ in interval } j \text{ containing discrete mass } k)=p_k(\theta)+\Delta_j f(x_{*}\mid\theta), \!</math>
where x<sub>*</sub> can be any point in interval ''j''. Then, letting the lengths of the intervals decrease to zero, the likelihood function for an observation from the discrete component is
:<math>L(\theta \mid x )= p_k(\theta), \!</math>
where ''k'' is the index of the discrete probability mass corresponding to observation ''x''.
The fact that the likelihood function can be defined in a way that includes contributions that are not commensurate (the density and the probability mass) arises from the way in which the likelihood function is defined up to a constant of proportionality, where this "constant" can change with the observation ''x'', but not with the parameter θ.
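As a sketch of such a mixed case (Python; the zero-inflated exponential model below is an assumed example, not one discussed above), the likelihood contribution is a probability mass for an observation from the discrete component and a density value for one from the continuous component:
<source lang="python">
import math

def likelihood(theta, x):
    """Likelihood of theta for one observation x from an assumed mixed distribution:
    a point mass p(theta) = theta at x == 0, and density (1 - theta) * exp(-x) for x > 0."""
    if x == 0:                              # observation from the discrete component
        return theta                        # contributes the probability mass p_k(theta)
    return (1.0 - theta) * math.exp(-x)     # contributes the density f(x | theta)

print(likelihood(0.3, 0.0))   # discrete observation: 0.3
print(likelihood(0.3, 2.0))   # continuous observation: 0.7 * exp(-2)
</source>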
== Example ==
[[Image:likelihoodFunctionAfterHH.png|thumb|400px|The likelihood function for estimating the probability of a coin landing heads-up without prior knowledge after observing HH]]
[[Image:likelihoodFunctionAfterHHT.png|thumb|400px|The likelihood function for estimating the probability of a coin landing heads-up without prior knowledge after observing HHT]]
For example, suppose a [[coin]] is tossed with probability ''p<sub>H</sub>'' of landing heads up ('H'). Then the probability of getting two heads in two tosses ('HH') is ''p<sub>H</sub>''<sup>2</sup>. If ''p<sub>H</sub>'' = 0.5, the probability of seeing two heads is 0.25.
In symbols, we can say the above as
:<math>P(\mbox{HH} \mid p_H = 0.5) = 0.25</math>
Another way of saying this is to reverse it and say that "the likelihood of ''p<sub>H</sub>'' = 0.5, given the observation 'HH', is 0.25", i.e.,
:<math>L(p_H=0.5 \mid \mbox{HH}) = P(\mbox{HH}\mid p_H=0.5) =0.25</math>.
But this is not the same as saying that the ''probability'' of ''p<sub>H</sub>'' = 0.5, given the observation, is 0.25.
To take an extreme case, on this basis we can say "the likelihood of ''p<sub>H</sub>'' = 1, given the observation 'HH', is 1". But it is clearly not the case that the ''probability'' of ''p<sub>H</sub>'' = 1, given the observation, is 1: the event 'HH' can occur for any ''p<sub>H</sub>'' > 0 (and does so frequently when, say, ''p<sub>H</sub>'' is near 0.5). If the ''probability'' of ''p<sub>H</sub>'' = 1 given the observation were 1, then ''p<sub>H</sub>'' would have to equal 1 for the event 'HH' to occur at all, which is clearly not true.
The likelihood function is not a [[probability density function]] – for example, the integral of a likelihood function is not in general 1. In this example, the integral of the likelihood function over the interval [0,&nbsp;1] in ''p<sub>H</sub>'' is 1/3, demonstrating again that the likelihood function cannot be interpreted as a probability density function for ''p<sub>H</sub>''. On the other hand, for any fixed value of ''p<sub>H</sub>'', e.g. ''p<sub>H</sub>'' = 0.5, the integral of the probability density function over the domain of the [[random variable]]s '''is''' 1.
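A short computational sketch of this example (Python; the numerical integration is a simple midpoint rule chosen only for illustration):
<source lang="python">
def likelihood_pH(p_H, heads, tails):
    """L(p_H | data) = P(data | p_H) for a sequence of independent coin tosses."""
    return p_H ** heads * (1.0 - p_H) ** tails

print(likelihood_pH(0.5, heads=2, tails=0))   # L(p_H = 0.5 | 'HH') = 0.25
print(likelihood_pH(1.0, heads=2, tails=0))   # L(p_H = 1.0 | 'HH') = 1.0

# Midpoint-rule integral of L(p_H | 'HH') = p_H**2 over [0, 1]: about 1/3,
# illustrating that the likelihood is not a probability density in p_H.
n = 100000
print(sum(likelihood_pH((i + 0.5) / n, 2, 0) for i in range(n)) / n)
</source>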
==Likelihoods that eliminate nuisance parameters==
In many cases, the likelihood is a function of more than one parameter but interest focuses on the estimation of only one, or at most a few, of them, with the others being considered as [[nuisance parameter]]s. Several alternative approaches have been developed to eliminate such nuisance parameters, so that a likelihood can be written as a function of only the parameter (or parameters) of interest; the main ones are marginal, conditional and profile likelihoods.<ref>
{{cite book
| title=In All Likelihood: Statistical Modelling and Inference Using Likelihood
| first=Yudi | last=Pawitan | year=2001| publisher=Oxford University Press|isbn=0198507658
}}</ref><ref>
{{cite web | author = Wen Hsiang Wei
| url= http://web.thu.edu.tw/wenwei/www/glmpdfmargin.htm
| title = Generalized linear model course notes | pages = Chapter 5
| publisher = Tung Hai University, Taichung, Taiwan | accessdate = 2007-01-23 }}</ref>
These are useful because standard likelihood methods can become unreliable or fail entirely when there are many nuisance parameters (or the nuisance parameter is high-dimensional), particularly when the number of nuisance parameters is a substantial fraction of the number of observations and this fraction does not decrease when the sample size increases. They can also be used to derive closed-form formulae for statistical tests when direct use of maximum likelihood requires iterative numerical methods, and find application in some specialized topics such as [[sequential analysis]].
===Conditional likelihood===
Sometimes it is possible to find a sufficient statistic for the nuisance parameters, and conditioning on this statistic results in a likelihood which does not depend on the nuisance parameters.
One example occurs in 2×2 tables, where conditioning on all four marginal totals leads to a conditional likelihood based on the non-central [[hypergeometric distribution]]. (This form of conditioning is also the basis for [[Fisher's exact test]].)
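A sketch of such a conditional likelihood (Python; the table dimensions and counts are invented for illustration) for the odds ratio ψ in a 2×2 table, using the non-central hypergeometric distribution of the (1,1) cell given all marginal totals:
<source lang="python">
from math import comb

def conditional_likelihood(psi, x, n1, n2, m):
    """Conditional likelihood of the odds ratio psi, given the (1,1) cell count x,
    row totals n1 and n2, and first-column total m.  Conditioning on all marginal
    totals eliminates the nuisance parameters (the baseline row probabilities)."""
    lo, hi = max(0, m - n2), min(n1, m)                       # support of the cell count
    numer = comb(n1, x) * comb(n2, m - x) * psi ** x
    denom = sum(comb(n1, u) * comb(n2, m - u) * psi ** u for u in range(lo, hi + 1))
    return numer / denom

# Invented table: row totals 10 and 12, first-column total 8, observed cell count 6.
for psi in (0.5, 1.0, 2.0, 4.0):
    print(psi, conditional_likelihood(psi, x=6, n1=10, n2=12, m=8))
</source>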
===Marginal likelihood===
<!-- needs expansion ~~~~ -->
Sometimes we can remove the nuisance parameters by considering a likelihood based on only part of the information in the data, for example by using the set of ranks rather than the numerical values. Another example occurs in linear [[mixed model]]s, where considering a likelihood for the residuals only after fitting the fixed effects leads to [[residual maximum likelihood]] estimation of the variance components. (Note that there is a different meaning of [[marginal likelihood| marginal likelihood in Bayesian inference]]).
===Profile likelihood===
<!-- 1st 2 paras based on material previously headed "Concentrated likelihood" ~~~~~ -->
It is often possible to reduce the number of independent parameters by writing some of them as functions of the others: for fixed values of the remaining parameters, each such function gives the value of the eliminated parameter that maximises the likelihood.
This procedure is called concentration of the parameters, and the result is known as the concentrated likelihood function, occasionally as the maximized likelihood function, but most often as the profile likelihood function.
<!--
Suggestions for expansion: an example of concentration [added below, on august 27, 2005]; explanation on usage in maximum likelihood in optimization -->
For example, consider a [[regression analysis]] model with [[normal distribution|normally distributed]] [[errors and residuals in statistics|errors]]. The most likely value of the error [[variance]] is the variance of the [[errors and residuals in statistics|residuals]]. The residuals depend on all other parameters. Hence the variance parameter can be written as a function of the other parameters.
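A minimal sketch of this concentration step (Python; the single-predictor model and the data are illustrative assumptions):
<source lang="python">
import math

def profile_loglik(beta, x, y):
    """Profile log-likelihood of the slope beta in the assumed model y = beta * x + error,
    with normally distributed errors.  For each beta, the nuisance error variance is
    replaced by its maximizing value, the mean squared residual."""
    n = len(y)
    rss = sum((yi - beta * xi) ** 2 for xi, yi in zip(x, y))
    sigma2_hat = rss / n                                     # variance concentrated out
    return -0.5 * n * (math.log(2 * math.pi * sigma2_hat) + 1.0)

# Illustrative data.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 0.9, 2.2, 2.8, 4.1]
for beta in (0.8, 0.9, 1.0, 1.1, 1.2):
    print(beta, profile_loglik(beta, x, y))
</source>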
Unlike conditional and marginal likelihoods, profile likelihood methods can always be used (even when the profile likelihood cannot be written down explicitly). However, the profile likelihood is not a true likelihood as it is not based directly on a probability distribution and this leads to some less satisfactory properties. (Attempts have been made to improve this, resulting in modified profile likelihood.)
The idea of profile likelihood can also be used to compute [[confidence interval]]s that often have better small-sample properties than those based on asymptotic [[standard error]]s calculated from the full likelihood.
==Historical remarks==
Some early thoughts on likelihood appeared in a book by [[Thorvald N. Thiele]] published in [[1889]].<ref>[[Steffen L. Lauritzen]], [http://www.stat.fi/isi99/proceedings/arkisto/varasto/laur0313.pdf Aspects of T. N. Thiele's Contributions to Statistics] (1999).</ref>
The first paper where the full idea of the "likelihood" appears was written by [[R.A. Fisher]] in 1922<ref>[[Ronald A. Fisher]]. "On the mathematical foundations of theoretical statistics". ''Philosophical Transactions of the Royal Society'', A, 222:309-368 (1922). ''("Likelihood" is discussed in section 6.)''</ref>: "On the mathematical foundations of theoretical statistics". In that paper, Fisher also uses the term "[[method of maximum likelihood]]". Fisher argues against [[inverse probability]] as a basis for statistical inferences, and instead proposes inferences based on likelihood functions.
==See also==
* [[Bayes factor]]
* [[Bayesian inference]]
* [[conditional probability]]
* [[likelihood principle]]
* [[likelihood-ratio test]]
* [[principle of maximum entropy]]
* [[score (statistics)]]
== Notes ==
<references />
== References ==
* [[A. W. F. Edwards]] (1972). ''Likelihood: An account of the statistical concept of likelihood and its application to scientific inference'', Cambridge University Press. Reprinted in 1992, expanded edition, Johns Hopkins University Press.
[[Category:Estimation theory]]
[[ar:دالة الإمكان]]
[[fr:Fonction de vraisemblance]]
[[it:Funzione di verosimiglianza]]
[[ru:Функция правдоподобия]]
[[ja:尤度関数]]