{{Mergefrom|Information entropy#Extending discrete entropy to the continuous case: differential entropy |Talk:Differential entropy#Merge proposal |date=September 2007 |User:Blaisorblade}}

'''Differential entropy''' (also referred to as '''continuous entropy''') is a concept in [[information theory]] that seeks to extend the idea of (Shannon) [[information entropy|entropy]], a measure of average [[surprisal]] of a [[random variable]], to continuous [[probability distribution]]s.

==Definition==
Let ''X'' be a random variable with a [[probability density function]] ''f'' whose [[support (mathematics)|support]] is a set <math>\mathbb X</math>. The ''differential entropy'' <math>h(X)</math> or <math>h(f)</math> is defined as

:<math>h(X) = -\int_\mathbb{X} f(x)\log f(x)\,dx.</math>

As with its discrete analog, the units of differential entropy depend on the base of the [[logarithm]], which is usually 2 (i.e., the units are [[bit]]s). See [[logarithmic units]] for logarithms taken in different bases. Related concepts such as [[joint entropy|joint]] and [[conditional entropy|conditional]] differential entropy, and [[Kullback-Leibler divergence|relative entropy]], are defined in a similar fashion.

One must take care in trying to apply properties of discrete entropy to differential entropy, since probability density functions can be greater than 1, so differential entropy can be negative. For example, the [[Uniform distribution (continuous)|uniform distribution]] Uniform(0, 1/2) has differential entropy <math>\int_0^\frac{1}{2} -2\log2\,dx=-\log 2</math>.

The definition of differential entropy above can be obtained by partitioning the range of ''X'' into bins of length <math>\Delta</math> with associated sample points <math>i\Delta</math> within the bins, assuming ''f'' is Riemann integrable. This gives a [[Quantization (signal processing)|quantized]] version of ''X'', defined by <math>X_\Delta = i\Delta</math> if <math>i\Delta \leq X < (i+1)\Delta</math>. The entropy of <math>X_\Delta</math> is then

:<math>-\sum_i f(i\Delta)\Delta\log f(i\Delta) - \sum_i f(i\Delta)\Delta\log \Delta.</math>

The first term approximates the differential entropy, while the second term is approximately <math>-\log\Delta</math>, which diverges as <math>\Delta \to 0</math>. Note that this procedure suggests that the differential entropy of a discrete random variable should be <math>-\infty</math>.

The continuous [[mutual information]] <math>I(X;Y)</math> has the distinction of retaining its fundamental significance as a measure of discrete information, since it is the limit of the discrete mutual information of ''partitions'' of ''X'' and ''Y'' as these partitions become finer and finer. Thus it is invariant under linear transformations of ''X'' and ''Y'', and still represents the amount of discrete information that can be transmitted over a channel that admits a continuous space of values.<ref name = Reza>{{ cite book | title = An Introduction to Information Theory | author = Fazlollah M. Reza | publisher = Dover Publications, Inc., New York | year = 1961, 1994 | isbn = 0-486-68210-2 | url = http://books.google.com/books?id=RtzpRAiX6OgC&pg=PA8&dq=intitle:%22An+Introduction+to+Information+Theory%22++%22entropy+of+a+simple+source%22&as_brr=0&ei=zP79Ro7UBovqoQK4g_nCCw&sig=j3lPgyYrC3-bvn1Td42TZgTzj0Q }}</ref>
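Both observations above — the negative value <math>-\log 2</math> for Uniform(0, 1/2) and the approximation <math>H(X_\Delta) \approx h(X) - \log\Delta</math> — can be checked numerically. The following is a minimal Python sketch using NumPy; the helper <code>differential_entropy</code> is illustrative rather than a standard library routine, and it simply approximates the defining integral with a Riemann sum:

<syntaxhighlight lang="python">
import numpy as np

# Riemann-sum approximation of h(f) = -integral of f(x) ln f(x) dx (in nats).
def differential_entropy(pdf, a, b, n=1_000_000):
    dx = (b - a) / n
    x = a + (np.arange(n) + 0.5) * dx        # midpoints of n bins on [a, b]
    fx = pdf(x)
    return -np.sum(fx * np.log(fx)) * dx

# Uniform(0, 1/2): density 2 on [0, 1/2], so h = -ln 2 ~ -0.693 nats --
# a negative value, unlike discrete Shannon entropy.
print(differential_entropy(lambda x: np.full_like(x, 2.0), 0.0, 0.5))

# Quantization: bin a standard normal X into bins of width Delta.  The
# discrete entropy of the binned variable is approximately h(X) - ln(Delta).
delta = 0.01
centers = np.arange(-8, 8, delta) + delta / 2
p = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi) * delta   # P(X_Delta = i*Delta)
H_delta = -np.sum(p * np.log(p))                           # discrete entropy of X_Delta
h_exact = 0.5 * np.log(2 * np.pi * np.e)                   # exact h(X) for N(0, 1)
print(H_delta, h_exact - np.log(delta))                    # ~6.02 in both cases
</syntaxhighlight>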
==Properties of differential entropy==
* For two densities ''f'' and ''g'', <math>D(f||g) \geq 0</math> with equality if <math>f = g</math> [[almost everywhere]]. Similarly, for two random variables ''X'' and ''Y'', <math>I(X;Y) \geq 0</math> and <math>h(X|Y) \leq h(X)</math>, with equality [[if and only if]] ''X'' and ''Y'' are [[Statistical independence|independent]].
* The chain rule for differential entropy holds as in the discrete case:
:<math>h(X_1, \ldots, X_n) = \sum_{i=1}^{n} h(X_i|X_1, \ldots, X_{i-1}) \leq \sum_{i=1}^{n} h(X_i).</math>
* Differential entropy is translation invariant, i.e., <math>h(X + c) = h(X)</math> for a constant ''c''.
* Differential entropy is in general not invariant under arbitrary invertible maps. In particular, for a constant ''a'', <math>h(aX) = h(X) + \log \left| a \right|</math>. For a vector-valued random variable '''X''' and an invertible matrix ''A'', <math>h(A\mathbf{X}) = h(\mathbf{X}) + \log \left| \det A \right|</math>.
* In general, for a transformation <math>\mathbf{Y} = m(\mathbf{X})</math> from a random vector '''X''' to a random vector '''Y''' of the same dimension, the corresponding entropies are related via <math>h(\mathbf{Y}) = h(\mathbf{X}) + \int f(x) \log \left\vert \frac{\partial m}{\partial x} \right\vert dx</math>, where <math>\left\vert \frac{\partial m}{\partial x} \right\vert</math> is the absolute value of the determinant of the [[Jacobian]] of the transformation ''m''.
* If a random vector <math>\mathbf{X} \in \mathbb{R}^{n}</math> has mean zero and [[covariance]] matrix ''K'', then <math>h(\mathbf{X}) \leq \frac{1}{2} \log[(2\pi e)^n \det K]</math>, with equality if and only if '''X''' is [[jointly gaussian|jointly Gaussian]].

==Example: Exponential distribution==
Let ''X'' be an [[exponential distribution|exponentially distributed]] random variable with parameter <math>\lambda</math>, that is, with probability density function

:<math>f(x) = \lambda e^{-\lambda x} \mbox{ for } x \geq 0.</math>

Its differential entropy is then
{|
|-
| <math>h_e(X)\,</math>
| <math>=-\int_0^\infty \lambda e^{-\lambda x} \log (\lambda e^{-\lambda x})\,dx</math>
|-
|
| <math>= -\left(\int_0^\infty (\log \lambda)\lambda e^{-\lambda x}\,dx + \int_0^\infty (-\lambda x) \lambda e^{-\lambda x}\,dx\right) </math>
|-
|
| <math>= -\log \lambda \int_0^\infty f(x)\,dx + \lambda E[X]</math>
|-
|
| <math>= -\log\lambda + 1\,.</math>
|}
Here <math>h_e(X)</math> is used rather than <math>h(X)</math> to make it explicit that the logarithm is taken to base ''e'', which simplifies the calculation.
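This closed form can also be checked by simulation, using the identity <math>h(X) = \operatorname{E}[-\log f(X)]</math>. A minimal Python sketch with NumPy follows; the rate <math>\lambda = 2.5</math> is an arbitrary illustrative choice:

<syntaxhighlight lang="python">
import numpy as np

# Monte Carlo check of h_e(X) = 1 - ln(lam) for X ~ Exponential(lam),
# estimating h(X) = E[-ln f(X)] by averaging -ln f over simulated draws.
rng = np.random.default_rng(0)
lam = 2.5                                  # arbitrary illustrative rate
x = rng.exponential(scale=1.0 / lam, size=1_000_000)
log_f = np.log(lam) - lam * x              # ln f(x) = ln(lam) - lam * x
print(-log_f.mean())                       # Monte Carlo estimate, ~0.084
print(1 - np.log(lam))                     # exact value: 1 - ln(2.5)
</syntaxhighlight>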
==Differential entropies for various distributions==
In the table below, <math>\Gamma(x) = \int_0^{\infty} e^{-t} t^{x-1}\, dt</math> is the [[gamma function]], <math>\psi(x) = \frac{d}{dx} \ln\Gamma(x) = \frac{\Gamma'(x)}{\Gamma(x)}</math> is the [[digamma function]], <math>B(p,q) = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}</math> is the [[beta function]], and <math>\gamma</math> is [[Euler-Mascheroni constant|Euler's constant]].

{| class="wikitable" style="background:white"
|+ Table of differential entropies
|-
! Distribution Name !! Probability density function (pdf) !! Entropy in nats
|-
| [[Uniform distribution (continuous)|Uniform]] || <math>f(x) = \frac{1}{b-a}</math> for <math>a \leq x \leq b</math> || <math>\ln(b - a) \, </math>
|-
| [[Normal distribution|Normal]] || <math>f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)</math> || <math>\ln\left(\sigma\sqrt{2\,\pi\,e}\right) </math>
|-
| [[Exponential distribution|Exponential]] || <math>f(x) = \lambda \exp\left(-\lambda x\right)</math> || <math>1 - \ln \lambda \, </math>
|-
| [[Rayleigh distribution|Rayleigh]] || <math>f(x) = \frac{x}{b^2} \exp\left(-\frac{x^2}{2b^2}\right)</math> || <math>1 + \ln \frac{b}{\sqrt{2}} + \frac{\gamma}{2}</math>
|-
| [[Beta distribution|Beta]] || <math>f(x) = \frac{x^{p-1}(1-x)^{q-1}}{B(p,q)}</math> for <math>0 \leq x \leq 1</math> || <math> \ln B(p,q) - (p-1)[\psi(p) - \psi(p + q)] - (q-1)[\psi(q) - \psi(p + q)] \, </math>
|-
| [[Cauchy distribution|Cauchy]] || <math>f(x) = \frac{\lambda}{\pi} \frac{1}{\lambda^2 + x^2}</math> || <math>\ln(4\pi\lambda) \, </math>
|-
| [[Chi distribution|Chi]] || <math>f(x) = \frac{2}{2^{n/2} \sigma^n \Gamma(n/2)} x^{n-1} \exp\left(-\frac{x^2}{2\sigma^2}\right)</math> || <math>\ln{\frac{\sigma\Gamma(n/2)}{\sqrt{2}}} - \frac{n-1}{2} \psi\left(\frac{n}{2}\right) + \frac{n}{2}</math>
|-
| [[Chi-square distribution|Chi-squared]] || <math>f(x) = \frac{1}{2^{n/2} \sigma^n \Gamma(n/2)} x^{\frac{n}{2} - 1} \exp\left(-\frac{x}{2\sigma^2}\right)</math> || <math>\ln\left(2\sigma^{2}\Gamma\left(\frac{n}{2}\right)\right) - \left(1 - \frac{n}{2}\right)\psi\left(\frac{n}{2}\right) + \frac{n}{2}</math>
|-
| [[Erlang distribution|Erlang]] || <math>f(x) = \frac{\beta^n}{(n-1)!} x^{n-1} \exp(-\beta x)</math> || <math>(1-n)\psi(n) + \ln \frac{\Gamma(n)}{\beta} + n</math>
|-
| [[F distribution|F]] || <math>f(x) = \frac{n_1^{\frac{n_1}{2}} n_2^{\frac{n_2}{2}}}{B(\frac{n_1}{2},\frac{n_2}{2})} \frac{x^{\frac{n_1}{2} - 1}}{(n_2 + n_1 x)^{\frac{n_1 + n_2}{2}}}</math> || <math>\ln \frac{n_1}{n_2} B\left(\frac{n_1}{2},\frac{n_2}{2}\right) + \left(1 - \frac{n_1}{2}\right) \psi\left(\frac{n_1}{2}\right) - \left(1 + \frac{n_2}{2}\right)\psi\left(\frac{n_2}{2}\right) + \frac{n_1 + n_2}{2} \psi\left(\frac{n_1 + n_2}{2}\right)</math>
|-
| [[Gamma distribution|Gamma]] || <math>f(x) = \frac{x^{\alpha - 1} \exp(-\frac{x}{\beta})}{\beta^\alpha \Gamma(\alpha)}</math> || <math>\ln(\beta \Gamma(\alpha)) + (1 - \alpha)\psi(\alpha) + \alpha \, </math>
|-
| [[Laplace distribution|Laplace]] || <math>f(x) = \frac{1}{2\lambda} \exp(-\frac{|x - \theta|}{\lambda})</math> || <math>1 + \ln(2\lambda) \, </math>
|-
| [[Logistic distribution|Logistic]] || <math>f(x) = \frac{e^{-x}}{(1 + e^{-x})^2}</math> || <math>2 \, </math>
|-
| [[Log-normal distribution|Lognormal]] || <math>f(x) = \frac{1}{\sigma x \sqrt{2\pi}} \exp\left(-\frac{(\ln x - m)^2}{2\sigma^2}\right)</math> || <math>m + \frac{1}{2} \ln(2\pi e \sigma^2)</math>
|-
| [[Maxwell-Boltzmann distribution|Maxwell-Boltzmann]] || <math>f(x) = 4 \pi^{-\frac{1}{2}} \beta^{\frac{3}{2}} x^{2} \exp(-\beta x^2)</math> || <math>\frac{1}{2} \ln \frac{\pi}{\beta} + \gamma - \frac{1}{2}</math>
|-
| [[Generalized Gaussian distribution|Generalized normal]] || <math>f(x) = \frac{2 \beta^{\frac{\alpha}{2}}}{\Gamma(\frac{\alpha}{2})} x^{\alpha - 1} \exp(-\beta x^2)</math> || <math>\ln{\frac{\Gamma(\alpha/2)}{2\beta^{\frac{1}{2}}}} - \frac{\alpha - 1}{2} \psi\left(\frac{\alpha}{2}\right) + \frac{\alpha}{2}</math>
|-
| [[Pareto distribution|Pareto]] || <math>f(x) = \frac{a k^a}{x^{a+1}}</math> || <math>\ln \frac{k}{a} + 1 + \frac{1}{a}</math>
|-
| [[Student's t-distribution|Student's t]] || <math>f(x) = \frac{(1 + x^2/n)^{-\frac{n+1}{2}}}{\sqrt{n}B(\frac{1}{2},\frac{n}{2})}</math> || <math>\frac{n+1}{2}\left[\psi\left(\frac{n+1}{2}\right) - \psi\left(\frac{n}{2}\right)\right] + \ln\left(\sqrt{n}\, B\left(\frac{1}{2},\frac{n}{2}\right)\right)</math>
|-
| [[Triangular distribution|Triangular]] || <math> f(x) = \begin{cases} \frac{2x}{a} & 0 \leq x \leq a\\ \frac{2(1-x)}{1-a} & a \leq x \leq 1 \end{cases}</math> || <math>\frac{1}{2} - \ln 2</math>
|-
| [[Weibull distribution|Weibull]] || <math>f(x) = \frac{c}{\alpha} x^{c-1} \exp\left(-\frac{x^c}{\alpha}\right)</math> || <math>\frac{(c-1)\gamma}{c} + \ln \frac{\alpha^{1/c}}{c} + 1</math>
|-
| [[Multivariate normal distribution|Multivariate normal]] || <math> f_X(x_1, \dots, x_N) = \frac{1} {(2\pi)^{N/2} \left|\Sigma\right|^{1/2}} \exp \left( -\frac{1}{2} ( x - \mu)^\top \Sigma^{-1} (x - \mu) \right) </math> || <math>\frac{1}{2}\ln\{(2\pi e)^{N} \det(\Sigma)\}</math>
|}
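Entries in the table can be spot-checked in the same way as the exponential example, by estimating <math>\operatorname{E}[-\log f(X)]</math> from simulated samples. A minimal Python sketch with NumPy, checking the normal and logistic rows (the parameter values are arbitrary illustrative choices):

<syntaxhighlight lang="python">
import numpy as np

# Spot-check of two table entries (in nats) via h(X) = E[-ln f(X)].
rng = np.random.default_rng(1)
n = 1_000_000

# Normal(mu, sigma): tabulated value ln(sigma * sqrt(2*pi*e))
mu, sigma = 1.0, 3.0
x = rng.normal(mu, sigma, n)
log_f = -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)
print(-log_f.mean(), np.log(sigma * np.sqrt(2 * np.pi * np.e)))

# Standard logistic: tabulated value 2
x = rng.logistic(0.0, 1.0, n)
log_f = -x - 2 * np.log1p(np.exp(-x))      # ln f(x) for f(x) = e^-x / (1 + e^-x)^2
print(-log_f.mean(), 2.0)
</syntaxhighlight>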
==See also==
*[[Information entropy]]
*[[Information theory]]
*[[Self-information]]
*[[Kullback-Leibler divergence]]
*[[Entropy estimation]]

==References==
{{reflist}}
* Thomas M. Cover and Joy A. Thomas. ''Elements of Information Theory''. New York: Wiley, 1991. ISBN 0-471-06259-6.
* Lazo, A. and Rathie, P. "[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1055832 On the entropy of continuous probability distributions]". ''IEEE Transactions on Information Theory'', 24(1): 120–122, 1978.

==External links==
* {{planetmath reference|id=1915|title=Differential entropy}}

[[Category:Entropy and information]]
[[Category:Information theory]]
[[Category:Statistical randomness]]
[[Category:Randomness]]

[[fr:Entropie différentielle]]