In [[theory of probability|probability]] and [[statistics]], an '''exponential family''' is a class of [[probability distribution]]s sharing a certain form, which is specified below. This special form is chosen for mathematical convenience, on account of some useful algebraic properties, as well as for generality, since exponential families are in a sense very natural sets of distributions to consider. The concept of exponential families is credited to<ref>{{cite journal | last = Andersen | first = Erling | year = 1970 | month = Sep. | title = Sufficiency and Exponential Families for Discrete Sample Spaces | journal = Journal of the American Statistical Association | volume = 65 | issue = 331 | pages = 1248–1255 | language = English | doi = 10.2307/2284291 }}</ref> [[E. J. G. Pitman]],<ref>{{cite journal | last = Pitman | first = E. | year = 1936 | title = Sufficient statistics and intrinsic accuracy | journal = Proc. Camb. phil. Soc. | volume = 32 | pages = 567–579 | language = English }}</ref> G. Darmois,<ref>{{cite journal | last = Darmois | first = G. | year = 1935 | title = Sur les lois de probabilites a estimation exhaustive | journal = C.R. Acad. sci. Paris | volume = 200 | pages = 1265–1266 | language = French }}</ref> and [[Bernard Koopman|B. O. Koopman]]<ref>{{cite journal | last = Koopman | first = B | year = 1936 | title = On distribution admitting a sufficient statistic | journal = Trans. Amer. math. Soc. | volume = 39 | pages = 399–409 | language = English | doi = 10.2307/1989758 }}</ref> in 1935–36.

== Definition ==

The following is a sequence of increasingly general definitions of an exponential family. A casual reader may wish to restrict attention to the first and simplest definition, which corresponds to a single-parameter family of [[discrete probability distribution|discrete]] or [[continuous probability distribution|continuous]] probability distributions.

=== Scalar parameter ===

A single-parameter exponential family is a set of probability distributions whose [[probability density function]] (or [[probability mass function]], in the case of a [[discrete distribution]]) can be expressed in the form

:<math> f_X(x; \theta) = h(x) \exp(\eta(\theta) T(x) - A(\theta)) \,\!</math>

where <math>T(x)</math>, <math>h(x)</math>, <math>\eta(\theta)</math>, and <math>A(\theta)</math> are known functions. The value &theta; is called the parameter of the family.

Note that ''x'' is often a vector of measurements, in which case ''T''(''x'') is a function from the space of possible values of ''x'' to the real numbers.

If &eta;(&theta;) = &theta;, then the exponential family is said to be in ''[[canonical form]]''. By defining a transformed parameter &eta; = &eta;(&theta;), it is always possible to convert an exponential family to canonical form. The canonical form is non-unique, since &eta;(&theta;) can be multiplied by any nonzero constant, provided ''T''(''x'') is multiplied by that constant's reciprocal.

Further down the page is the example of [[#Normal distribution: Unknown mean, unit variance|a normal distribution with unknown mean and unit variance]].

=== Vector parameter ===

The single-parameter definition can be extended to a vector parameter <math>{\boldsymbol \theta} = (\theta_1, \theta_2, \ldots, \theta_s)^T</math>.
A family of distributions is said to belong to a vector exponential family if the probability density function (or probability mass function, for discrete distributions) can be written as

:<math> f_X(x; {\boldsymbol \theta}) = h(x) \exp\left(\sum_{i=1}^s \eta_i({\boldsymbol \theta}) T_i(x) - A({\boldsymbol \theta}) \right) \,\!</math>

As in the scalar-valued case, the exponential family is said to be in canonical form if <math>\eta_i({\boldsymbol \theta}) = \theta_i</math> for all <math>i</math>.

Further down the page is the example of [[#Normal distribution: Unknown mean and unknown variance|a normal distribution with unknown mean and unknown variance]].

=== Measure-theoretic formulation ===

We use [[cumulative distribution function]]s (cdf) in order to encompass both discrete and continuous distributions. Suppose ''H'' is a non-decreasing function of a real variable and ''H''(''x'') approaches 0 as ''x'' approaches &minus;&infin;. Then [[Lebesgue-Stieltjes integral]]s with respect to ''dH''(''x'') are integrals with respect to the "reference measure" of the exponential family generated by ''H''. Any member of that exponential family has cumulative distribution function ''F'' satisfying

:<math>dF(x|\eta) = e^{\eta^{\top} T(x) - A(\eta)}\, dH(x).</math>

If ''F'' is a continuous distribution with a density, one can write ''dF''(''x'') = ''f''(''x'')&nbsp;''dx''. ''H''(''x'') is a [[Lebesgue-Stieltjes integral|Lebesgue-Stieltjes integrator]] for the ''reference measure''. When the reference measure is finite, it can be normalized and ''H'' is then the [[cumulative distribution function]] of a probability distribution. If ''F'' is continuous with a density, then so is ''H'', which can then be written ''dH''(''x'') = ''h''(''x'')&nbsp;''dx''. If ''F'' is discrete, then ''H'' is a [[step function]] (with steps on the [[support (mathematics)|support]] of ''F'').

== Interpretation ==

In the definitions above, the functions <math>T(x)</math>, <math>\eta(\theta)</math>, and <math>A(\theta)</math> were apparently defined arbitrarily. However, these functions play a significant role in the resulting probability distribution.

* <math>T(x)</math> is a ''[[sufficiency (statistics)|sufficient statistic]]'' of the distribution. Thus, for exponential families, there exists a sufficient statistic whose dimension equals the number of parameters to be estimated. This important property is further discussed [[#Classical estimation: sufficiency|below]].
* <math>\eta</math> is called the ''natural parameter''. The set of values of <math>\eta</math> for which the density can be normalized, that is, for which <math>A(\eta)</math> is finite, is called the ''natural parameter space''. It can be shown that the natural parameter space is always [[convex set|convex]].
* <math>A(\theta)</math> is the ''log-partition function'': <math>e^{A(\theta)}</math> is the [[normalization factor]] without which <math>f_X(x;\theta)</math> would not be a probability distribution. The function ''A'' is important in its own right, because when the reference measure <math>dH(x)</math> is a probability measure (equivalently, when <math>h(x)</math> is a probability density), ''A'' is the [[cumulant generating function]] of the [[probability distribution]] of the sufficient statistic <math>T(X)</math> when the distribution of <math>X</math> is <math>dH(x)</math>.
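The role of ''A'' as a cumulant generating function can be illustrated numerically. The following is a minimal sketch (assuming [[NumPy]] is available; the parameter value is an arbitrary illustrative choice) for the [[Bernoulli distribution|Bernoulli]] family in canonical form, where <math>T(x) = x</math>, <math>\eta = \log(p/(1-p))</math> and <math>A(\eta) = \log(1 + e^{\eta})</math>: finite-difference derivatives of ''A'' recover the mean and variance of the sufficient statistic.

<source lang="python">
# Numerical check that derivatives of the log-partition function A give the
# mean and variance of the sufficient statistic T(x) = x for the Bernoulli
# family in canonical form: f(x; eta) = exp(eta*x - A(eta)), x in {0, 1}.
import numpy as np

def A(eta):
    """Log-partition function of the Bernoulli family, A(eta) = log(1 + exp(eta))."""
    return np.log1p(np.exp(eta))

eta = 0.7                                # an arbitrary natural parameter value
p = np.exp(eta) / (1.0 + np.exp(eta))    # success probability implied by eta

# Central finite-difference approximations of A'(eta) and A''(eta).
eps = 1e-5
dA  = (A(eta + eps) - A(eta - eps)) / (2 * eps)
d2A = (A(eta + eps) - 2 * A(eta) + A(eta - eps)) / eps**2

print(dA, p)              # A'(eta)  is approximately E[T]   = p
print(d2A, p * (1 - p))   # A''(eta) is approximately Var[T] = p(1-p)
</source>

The same check applies to any family written in canonical form once <math>A(\eta)</math> is available in closed form; the general statement of these identities is given in the [[#Differential identities|Differential identities]] section below.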
== Examples ==

The [[normal distribution|normal]], [[exponential distribution|exponential]], [[gamma distribution|gamma]], [[chi-square distribution|chi-square]], [[beta distribution|beta]], [[Dirichlet distribution|Dirichlet]], [[Bernoulli distribution|Bernoulli]], [[binomial distribution|binomial]], [[multinomial distribution|multinomial]], [[Poisson distribution|Poisson]], [[negative binomial distribution|negative binomial]], [[geometric distribution|geometric]], and [[Weibull distribution|Weibull]] distributions are all exponential families. The [[Cauchy distribution|Cauchy]], [[Laplace distribution|Laplace]], and [[uniform distribution|uniform]] families of distributions are ''not'' exponential families.

Following are some detailed examples of the representation of some useful distributions as exponential families.

=== Normal distribution: Unknown mean, unit variance ===

As a first example, suppose <math>x</math> is distributed normally with unknown mean <math>\mu</math> and variance 1. The probability density function is then

:<math>f_X(x;\mu) = \frac{1}{\sqrt{2 \pi}} e^{-(x-\mu)^2/2}.</math>

This is a scalar exponential family in canonical form, as can be seen by setting

:<math>h(x) = e^{-x^2/2}/\sqrt{2\pi}</math>
:<math>T(x) = x\!\,</math>
:<math>A(\mu) = \mu^2/2\!\,</math>
:<math>\eta(\mu) = \mu.\!\,</math>

=== Normal distribution: Unknown mean and unknown variance ===

Next, consider the case of a normal distribution with unknown mean and unknown variance. The probability density function is then

:<math>f_X(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-(x-\mu)^2/2 \sigma^2}.</math>

This is an exponential family which can be written in canonical form by defining

:<math> {\boldsymbol \theta} = \left({\mu \over \sigma^2},{1 \over \sigma^2} \right)^T </math>
:<math> h(x) = {1 \over \sqrt{2 \pi}} </math>
:<math> T(x) = \left( x, -{x^2 \over 2} \right)^T </math>
:<math> A({\boldsymbol \theta}) = { \theta_1^2 \over 2 \theta_2} - \ln( \theta_2^{1/2} ) = { \mu^2 \over 2 \sigma^2} - \ln \left( {1 \over \sigma } \right) </math>

=== Binomial distribution ===

As an example of a discrete exponential family, consider the [[binomial distribution]]. The [[probability mass function]] for this distribution is

:<math>f(x)={n \choose x}p^x (1-p)^{n-x}, \quad x \in \{0, 1, 2, \ldots, n\}.</math>

This can equivalently be written as

:<math>f(x)={n \choose x}\exp\left(x \log\left({p \over 1-p}\right) + n \log\left(1-p\right)\right),</math>

which shows that the binomial distribution is an exponential family, whose natural parameter is

:<math>\eta = \log{p \over 1-p}.</math>

=== Differential identities ===

As mentioned above, <math>\scriptstyle K(u) = A(u + \eta) - A(\eta) </math> is the cumulant generating function for <math>\scriptstyle T </math>. A consequence of this is that one can fully understand the mean and covariance structure of <math>\scriptstyle T = (T_{1}, T_{2}, \dots , T_{p}) </math> by differentiating <math> \scriptstyle A(\eta) </math>:

:<math> E(T_{j}) = \frac{ \partial A(\eta) }{ \partial \eta_{j} } </math>

and

:<math> \mathrm{cov}(T_{i},T_{j}) = \frac{ \partial^{2} A(\eta) }{ \partial \eta_{i} \, \partial \eta_{j} }. </math>

The first two raw moments and all second-order mixed moments can be recovered from these two identities. This is often useful when <math>\scriptstyle T </math> is a complicated function of the data whose moments are difficult to calculate by integration.
As an example, consider a real-valued random variable <math>\scriptstyle X </math> with density

:<math> p_{\theta}(x) = \frac{ \theta e^{-x} }{(1 + e^{-x})^{\theta + 1} } </math>

indexed by shape parameter <math> \theta \in (0,\infty) </math> (this distribution is called the skew-logistic). The density can be rewritten as

:<math> \frac{ e^{-x} } { 1 + e^{-x} } \exp( -\theta \log(1 + e^{-x}) + \log(\theta)). </math>

Notice that this is an exponential family with canonical parameter

:<math> \eta = -\theta, </math>

sufficient statistic

:<math> T = \log(1 + e^{-x}), </math>

and log-partition function

:<math> A(\eta) = -\log(\theta) = -\log(-\eta). </math>

So using the first identity,

:<math> E(\log(1 + e^{-X})) = E(T) = \frac{ \partial A(\eta) }{ \partial \eta } = \frac{ \partial }{ \partial \eta } [-\log(-\eta)] = \frac{1}{-\eta} = \frac{1}{\theta}, </math>

and using the second identity,

:<math> \mathrm{var}(\log(1 + e^{-X})) = \frac{ \partial^{2} A(\eta) }{ \partial \eta^{2} } = \frac{ \partial }{ \partial \eta } \left[\frac{1}{-\eta}\right] = \frac{1}{(-\eta)^{2}} = \frac{1}{\theta^2}. </math>

This example illustrates a case where using this method is very simple, while direct calculation of these moments by integration would be considerably more difficult.

== Maximum entropy derivation ==

The exponential family arises naturally as the answer to the following question: what is the maximum [[entropy]] distribution consistent with given constraints on expected values?

The [[information entropy]] of a probability distribution ''dF''(''x'') can only be computed with respect to some other probability distribution (or, more generally, a positive measure), and both [[measure (mathematics)|measure]]s must be mutually [[absolutely continuous]]. Accordingly, we need to pick a ''reference measure'' ''dH''(''x'') with the same support as ''dF''(''x''). From a frequentist point of view this choice is largely arbitrary, while a Bayesian can simply make the choice part of the [[prior probability distribution]].

The entropy of ''dF''(''x'') relative to ''dH''(''x'') is

:<math>S[dF|dH]=-\int {dF\over dH}\ln{dF\over dH}\,dH</math>

or

:<math>S[dF|dH]=\int\ln{dH\over dF}\,dF</math>

where ''dF''/''dH'' and ''dH''/''dF'' are [[Radon-Nikodym derivative]]s. Note that the ordinary definition of entropy for a discrete distribution supported on a set ''I'', namely

:<math>S=-\sum_{i\in I} p_i\ln p_i</math>

''assumes'' (though this is seldom pointed out) that ''dH'' is chosen to be [[counting measure]] on ''I''.

Consider now a collection of observable quantities (random variables) ''T''<sub>''i''</sub>. The probability distribution ''dF'' whose entropy with respect to ''dH'' is greatest, subject to the conditions that the expected value of ''T''<sub>''i''</sub> be equal to ''t''<sub>''i''</sub>, is a member of the exponential family with ''dH'' as reference measure and (''T''<sub>1</sub>, ..., ''T''<sub>''n''</sub>) as sufficient statistic.

The derivation is a simple [[calculus of variations|variational calculation]] using [[Lagrange multipliers]]. Normalization is imposed by letting ''T''<sub>0</sub> = 1 be one of the constraints. The natural parameters of the distribution are the Lagrange multipliers, and the normalization factor is the Lagrange multiplier associated with ''T''<sub>0</sub>.
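This maximum-entropy characterisation can also be checked numerically. The sketch below is illustrative only (it assumes [[NumPy]] and [[SciPy]] are available, and the support {0, ..., 9} and target mean 3.5 are arbitrary choices): it maximises the entropy of a distribution relative to counting measure subject to a fixed mean, and then checks that the optimal weights have the exponential-family form <math>p_i \propto e^{\eta i}</math>, i.e. that <math>\log p_i</math> is an affine function of <math>i</math>.

<source lang="python">
# Maximum-entropy distribution on {0, ..., 9} with a fixed mean, found by
# direct constrained optimisation; the result should lie in the exponential
# family generated by T(x) = x, with counting measure as reference measure.
import numpy as np
from scipy.optimize import minimize

support = np.arange(10)
target_mean = 3.5

def neg_entropy(p):
    # Negative entropy relative to counting measure: sum_i p_i log p_i.
    return np.sum(p * np.log(p))

constraints = (
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},                    # normalisation (T_0 = 1)
    {"type": "eq", "fun": lambda p: np.sum(support * p) - target_mean},  # E[T_1] = t_1
)
p0 = np.full(10, 0.1)   # start from the uniform distribution
res = minimize(neg_entropy, p0, method="SLSQP",
               bounds=[(1e-9, 1.0)] * 10, constraints=constraints)

# For an exponential-family solution p_i = exp(eta*i - A), log p_i is affine
# in i, so its second differences should vanish (up to optimiser tolerance).
log_p = np.log(res.x)
print(np.diff(log_p, n=2))
</source>

Fitting the slope of <math>\log p_i</math> against <math>i</math> recovers the natural parameter <math>\eta</math>, while the intercept gives <math>-A(\eta)</math>, matching the Lagrange-multiplier description above.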
== Role in statistics ==

=== Classical estimation: sufficiency ===

According to the '''Pitman–Koopman–Darmois theorem''', among families of probability distributions whose domain does not vary with the parameter being estimated, only in exponential families is there a [[sufficient statistic]] whose dimension remains bounded as sample size increases.

Less tersely, suppose ''X''<sub>''n''</sub>, ''n'' = 1, 2, 3, ... are [[statistical independence|independent]] identically distributed random variables whose distribution is known to be in some family of probability distributions. Only if that family is an exponential family is there a (possibly vector-valued) [[sufficient statistic]] ''T''(''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub>) whose number of scalar components does not increase as the sample size ''n'' increases.

=== Bayesian estimation: conjugate distributions ===

Exponential families are also important in [[Bayesian statistics]]. In Bayesian statistics a [[prior distribution]] is multiplied by a [[likelihood function]] and then normalised to produce a [[posterior distribution]]. In the case of a likelihood which belongs to the exponential family there exists a [[conjugate prior]], which is often also in the exponential family. A conjugate prior ''&pi;'' for the natural parameter ''&eta;'' of an exponential family is given by

:<math>\pi(\eta) \propto \exp(\eta^{\top} \alpha - \beta\, A(\eta)),</math>

where <math>\alpha \in \mathbb{R}^s</math> and <math>\beta>0</math> are [[hyperparameter]]s (parameters controlling parameters). Combining this prior with the likelihood of a sample ''x''<sub>1</sub>, ..., ''x''<sub>''n''</sub> gives a posterior of the same form, with <math>\alpha</math> replaced by <math>\alpha + \sum_i T(x_i)</math> and <math>\beta</math> by <math>\beta + n</math>.

A conjugate prior is one which, when combined with the likelihood and normalised, produces a posterior distribution which is of the same type as the prior. For example, if one is estimating the success probability of a binomial distribution, then if one chooses to use a beta distribution as one's prior, the posterior is another beta distribution. This makes the computation of the posterior particularly simple. Similarly, if one is estimating the parameter of a [[Poisson distribution|Poisson]] distribution the use of a gamma prior will lead to another gamma posterior. Conjugate priors are often very flexible and can be very convenient. However, if one's belief about the likely value of the success probability of a binomial distribution is represented by (say) a bimodal (two-humped) prior distribution, then this cannot be represented by a beta distribution. An arbitrary likelihood will not belong to an exponential family, and thus in general no conjugate prior exists. The posterior will then have to be computed by numerical methods.

=== Hypothesis testing: Uniformly most powerful tests ===

{{further|[[Uniformly most powerful test#Important case: The exponential family|Uniformly most powerful test]]}}

The one-parameter exponential family has a monotone non-decreasing likelihood ratio in the [[Sufficiency (statistics)|sufficient statistic]] ''T''(''x''), provided that &eta;(&theta;) is non-decreasing. As a consequence, there exists a [[uniformly most powerful test]] for [[hypothesis testing|testing the hypothesis]] ''H''<sub>0</sub>: &theta; ≥ &theta;<sub>0</sub> ''vs''. ''H''<sub>1</sub>: &theta; < &theta;<sub>0</sub>.

== See also ==

* [[Natural exponential family]]

== References ==

<references/>

== Further reading ==

* {{cite book | last = Lehmann | first = E. L. | coauthors = Casella, G. | title = Theory of Point Estimation | date = 1998 | pages = 2nd ed., sec. 1.5 }}
* {{cite book | last = Keener | first = Robert W.
| title = Statistical Theory: Notes for a Course in Theoretical Statistics | publisher = Springer | date = 2006 | pages = 27–28, 32–33 }}

== External links ==

* [http://www.casact.org/pubs/dpp/dpp04/04dpp117.pdf A primer on the exponential family of distributions]
* [http://members.aol.com/jeff570/e.html Exponential family of distributions] on the [http://members.aol.com/jeff570/mathword.html Earliest known uses of some of the words of mathematics]

{{ProbDistributions|families}}

[[Category:Exponentials]]
[[Category:Continuous distributions]]
[[Category:Discrete distributions]]
[[Category:Types of probability distributions]]