In [[statistics]], '''efficiency''' is one measure of the desirability of an [[estimator]]. The efficiency of an [[estimator bias|unbiased]] estimator <math>T</math> is defined as

:<math>
e(T) = \frac{1/\mathcal{I}(\theta)}{\mathrm{var}(T)}
</math>

where <math>\mathcal{I}(\theta)</math> is the [[Fisher information]] of the sample. Thus <math>e(T)</math> is the minimum possible variance for an unbiased estimator divided by its actual variance. The [[Cramér-Rao bound]] can be used to show that <math>e(T) \le 1</math>:

:<math>
\mathrm{var}(T) \geq \frac{1}{\mathcal{I}(\theta)}
\quad \Rightarrow \quad
1 \geq \frac{1/\mathcal{I}(\theta)}{\mathrm{var}(T)} = e(T).
</math>

==Efficient estimator==
If an [[estimator bias|unbiased]] [[estimator]] of a parameter <math>\theta \in \Theta</math> attains <math>e(T) = 1</math> for all values of the parameter, then the estimator is called '''efficient'''. Equivalently, the estimator achieves equality in the [[Cramér-Rao inequality]] for all <math>\theta \in \Theta</math>.

An efficient estimator is also the [[minimum variance unbiased estimator]] (MVUE): since it attains the Cramér-Rao bound for every value of the parameter, it attains the minimum possible variance for every value of the parameter, which is the defining property of the MVUE. The converse does not hold: the MVUE, even if it exists, is not necessarily efficient, because the minimum variance among unbiased estimators may still be strictly greater than the Cramér-Rao bound. Thus an efficient estimator need not exist, but if it does, it is the MVUE.

==Asymptotic efficiency==
Some estimators attain efficiency only [[asymptotically]] and are therefore called asymptotically efficient. This can be the case for some [[maximum likelihood]] estimators, or for any estimator whose variance attains the Cramér-Rao bound asymptotically.

==Examples==
Consider a sample of size <math>N</math> drawn from a [[normal distribution]] with mean <math>\mu</math> and unit [[variance]], i.e., <math>x[n] \sim \mathcal{N}(\mu, 1)</math>.

The [[sample mean]] <math>\overline{x}</math> of the sample <math>x[0], x[1], \ldots, x[N-1]</math>, defined as

:<math>
\overline{x} = \frac{1}{N} \sum_{n=0}^{N-1} x[n],
</math>

has variance <math>\frac{1}{N}</math>. This equals the reciprocal of the [[Fisher information]] of the sample: each observation contributes <math>\mathrm{E}\!\left[\left(\tfrac{\partial}{\partial\mu}\ln f(x;\mu)\right)^2\right] = \mathrm{E}\!\left[(x-\mu)^2\right] = 1</math> to the information, so <math>\mathcal{I}(\mu) = N</math>. Thus, by the [[Cramér-Rao inequality]], the sample mean is '''efficient''': its efficiency is unity.

Now consider the [[sample median]], an [[estimator bias|unbiased]] and [[consistent]] estimator for <math>\mu</math>. For large <math>N</math> the sample median is approximately [[normal distribution|normally distributed]] with mean <math>\mu</math> and variance <math>\frac{\pi}{2N}</math>. Its efficiency is thus the ratio of <math>\frac{1}{N}</math> to <math>\frac{\pi}{2N}</math>, namely <math>\frac{2}{\pi}</math>, or about 64%. Note that this is the [[asymptote|asymptotic]] efficiency &mdash; that is, the efficiency in the limit as the sample size <math>N</math> tends to infinity. For finite values of <math>N</math> the efficiency is higher than this (for example, a sample size of 3 gives an efficiency of about 74%).

Many statisticians nonetheless prefer the sample median as an estimator of the mean, holding that the loss in efficiency is more than compensated for by the median's enhanced [[robust]]ness, that is, its insensitivity to [[outlier]]s.
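These figures can be checked by simulation. The following minimal Python sketch (assuming only [[NumPy]]; the variable names are illustrative) draws many samples of size <math>N</math> from <math>\mathcal{N}(\mu, 1)</math> and compares the empirical variances of the sample mean and sample median with the theoretical values <math>1/N</math> and <math>\pi/(2N)</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

mu = 0.0          # true mean (its value does not affect the variances)
N = 101           # sample size (odd, so the median is a single order statistic)
trials = 100_000  # number of Monte Carlo replications

# Draw all samples at once: one row per replication.
samples = rng.normal(loc=mu, scale=1.0, size=(trials, N))

var_mean = np.var(samples.mean(axis=1))          # empirical variance of the sample mean
var_median = np.var(np.median(samples, axis=1))  # empirical variance of the sample median

print(f"var(mean):   {var_mean:.5f}  (theory: {1 / N:.5f})")
print(f"var(median): {var_median:.5f}  (theory: {np.pi / (2 * N):.5f})")
print(f"efficiency of median: {var_mean / var_median:.3f}  (asymptotic theory: {2 / np.pi:.3f})")
</syntaxhighlight>

For a moderate sample size such as <math>N = 101</math> the printed efficiency comes out slightly above <math>2/\pi</math>, in line with the remark above that the finite-sample efficiency exceeds the asymptotic value.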
==Relative efficiency==
If <math>T_1</math> and <math>T_2</math> are estimators for the parameter <math>\theta</math>, then <math>T_1</math> is said to '''[[dominating decision rule|dominate]]''' <math>T_2</math> if:
# its [[mean squared error]] (MSE) is smaller for at least some value of <math>\theta</math>, and
# its MSE does not exceed that of <math>T_2</math> for any value of <math>\theta</math>.

Formally, <math>T_1</math> dominates <math>T_2</math> if

:<math>
\mathrm{E} \left[ (T_1 - \theta)^2 \right] \leq \mathrm{E} \left[ (T_2 - \theta)^2 \right]
</math>

holds for all <math>\theta</math>, with strict inequality holding somewhere. The relative efficiency of the two estimators is defined as

:<math>
e(T_1, T_2) = \frac{\mathrm{E} \left[ (T_2 - \theta)^2 \right]}{\mathrm{E} \left[ (T_1 - \theta)^2 \right]}.
</math>

Although <math>e</math> is in general a function of <math>\theta</math>, in many cases the dependence drops out; if this is so, <math>e</math> being greater than one would indicate that <math>T_1</math> is preferable, whatever the true value of <math>\theta</math>.
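For the normal example above, the dependence on <math>\theta</math> does drop out, and <math>e(T_1, T_2)</math> can be estimated directly from simulated mean squared errors. A minimal Python sketch along the same lines as before (again assuming [[NumPy]]; <code>relative_efficiency</code> is a helper introduced here for illustration):

<syntaxhighlight lang="python">
import numpy as np

def relative_efficiency(t1, t2, theta):
    """Estimate e(T1, T2) = E[(T2 - theta)^2] / E[(T1 - theta)^2]
    from arrays of estimates produced by the two estimators."""
    mse1 = np.mean((t1 - theta) ** 2)
    mse2 = np.mean((t2 - theta) ** 2)
    return mse2 / mse1

rng = np.random.default_rng(1)
theta = 5.0       # true mean of the sampled normal distribution
N = 101           # sample size
trials = 100_000  # number of Monte Carlo replications

samples = rng.normal(loc=theta, scale=1.0, size=(trials, N))
means = samples.mean(axis=1)          # T1: sample mean of each replication
medians = np.median(samples, axis=1)  # T2: sample median of each replication

# Both estimators are unbiased here, so the MSEs reduce to variances and
# e(T1, T2) should come out near pi/2 (about 1.57), i.e. greater than one:
# for normal data the sample mean dominates the sample median.
print(relative_efficiency(means, medians, theta))
</syntaxhighlight>

[[Category:Estimation theory]]
[[Category:Statistical theory]]

[[de:Effizienz (Statistik)]]
[[it:Efficienza (statistica)]]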