Propagation of uncertainty
In [[statistics]], '''propagation of uncertainty''' (or '''propagation of error''') is the effect of [[variable]]s' [[uncertainty|uncertainties]] (or [[Errors and residuals in statistics|errors]]) on the uncertainty of a [[function (mathematics)|function]] based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g. instrument [[Accuracy and precision|precision]]) which propagate to the combination of variables in the function.
The uncertainty is usually defined by the [[absolute error]] Δ''x''. Uncertainties can also be defined by the [[relative error]] Δ''x''/''x'', which is usually written as a percentage.
Most commonly, the error on a quantity, <math>\Delta x</math>, is given as the [[standard deviation]], <math>\sigma</math>. The standard deviation is the positive square root of the [[variance]], <math>\sigma^2</math>. The value of a quantity and its error are often expressed as <math>x\pm \Delta x</math>. If the statistical [[probability distribution]] of the variable is known or can be assumed, it is possible to derive [[confidence limits]] to describe the region within which the true value of the variable may be found. For example, the 68% confidence limits for a variable belonging to a [[normal distribution]] are ± one standard deviation from the value, that is, there is a 68% probability that the true value lies in the region <math>x \pm \sigma</math>.
If the variables are [[correlated]], then [[covariance]] must be taken into account.
==Linear combinations==
Let <math>f_k(x_1,x_2,\dots,x_n)</math> be a set of ''m'' functions which are linear combinations of <math>n</math> variables <math>x_1,x_2,\dots,x_n</math> with combination coefficients <math>A_{1k},A_{2k},\dots,A_{nk}</math>, <math>(k=1,\dots,m)</math>.
:<math>f_k=\sum_i^n A_{ik} x_i: \mathbf {f=A^Tx}\,</math>
and let the [[variance-covariance matrix]] on x be denoted by <math>\mathbf {M^x}\,</math>.
:<math>
{\mathbf{M^x}} =
\begin{pmatrix}
\sigma^2_1 & COV_{12} & COV_{13} & \cdots \\
COV_{12} & \sigma^2_2 & COV_{23} & \cdots \\
COV_{13} & COV_{23} & \sigma^2_3 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}
</math>
Then the variance-covariance matrix <math>\mathbf M^f\,</math> of ''f'' is given by
:<math>M^f_{ij}= \sum_k^n \sum_l^n A_{ik} M^x_{kl} A_{lj}: \mathbf{ M^f=A^T M^x A}</math>
This is the most general expression for the propagation of error from one set of variables onto another. When the errors on ''x'' are uncorrelated the general expression simplifies to
:<math>M^f_{ij}= \sum_k^n A_{ik} \sigma^2_k A_{kj}</math>
Note that even though the errors on ''x'' may be uncorrelated, the resulting errors on ''f'' are in general correlated. The general expressions for a single function, ''f'', are a little simpler.
:<math>f=\sum_i^n a_i x_i: f=\mathbf {a^Tx}\,</math>
:<math>\sigma^2_f= \sum_i^n \sum_j^n a_i M^x_{ij} a_j= \mathbf{a^T M^x a}</math>
Each covariance term, <math>M^x_{ij}</math>, can be expressed in terms of the [[correlation coefficient]] <math>\rho_{ij}\,</math> by <math>M^x_{ij}=\rho_{ij}\sigma_i\sigma_j\,</math>, so that an alternative expression for the variance of ''f'' is
:<math>\sigma^2_f= \sum_i^n a_i^2\sigma^2_i+\sum_i^n \sum_{j (j \ne i)}^n a_i a_j\rho_{ij} \sigma_i\sigma_j </math>
In the case that the variables ''x'' are uncorrelated this simplifies further to
:<math>\sigma^2_f= \sum_i^n a_i^2\sigma^2_i</math>
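The matrix formula can be checked numerically. The following sketch (using the NumPy library; the coefficients and covariances are arbitrary illustrative values) evaluates <math>\mathbf{M^f=A^T M^x A}</math> directly and compares it with a Monte Carlo estimate:
<source lang="python">
import numpy as np

# Illustrative values: covariance matrix of two variables x and the
# coefficients of two linear combinations f = A^T x.
Mx = np.array([[1.0, 0.3],
               [0.3, 2.0]])     # variance-covariance matrix of x
A = np.array([[1.0, 2.0],
              [1.0, -1.0]])     # column k holds the coefficients of f_k

Mf = A.T @ Mx @ A               # propagated variance-covariance matrix

# Compare with a Monte Carlo estimate
rng = np.random.default_rng(0)
x = rng.multivariate_normal([0.0, 0.0], Mx, size=1_000_000)
f = x @ A                       # each row is (f_1, f_2)
print(Mf)
print(np.cov(f, rowvar=False))  # agrees with Mf to sampling accuracy
</source>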
== Non-linear combinations ==
{{seealso|Taylor expansions for the moments of functions of random variables}}
When ''f'' is a set of non-linear combinations of the variables ''x'', it must usually be linearized by approximation to a first-order [[Taylor series]] expansion, though in some cases, exact formulas can be derived that do not depend on the expansion<ref name="Goodman1960">{{Cite journal
| author = [[Leo Goodman]]
| title = On the Exact Variance of Products
| journal = Journal of the American Statistical Association
| year = 1960
| volume = 55
| issue = 292
| pages = 708–713
| url = http://links.jstor.org/sici?sici=0162-1459(196012)55%3A292%3C708%3AOTEVOP%3E2.0.CO%3B2-3
| doi = 10.2307/2281592
}}</ref>.
:<math>f_k \approx f^0_k+ \sum_i^n \frac{\partial f_k}{\partial {x_i}} x_i </math>
where <math>\frac{\partial f_k}{\partial x_i}</math> denotes the [[partial derivative]] of ''f<sub>k</sub>'' with respect to the ''i''-th variable. Since ''f<sup>0</sup><sub>k</sub>'' is a constant it does not contribute to the error on ''f''. Therefore, the propagation of error follows the linear case, above, but replacing the linear coefficients, ''A<sub>ik</sub>'' and ''A<sub>jk</sub>'' by the partial derivatives, <math>\frac{\partial f_k}{\partial x_i}</math> and <math>\frac{\partial f_k}{\partial x_j}</math>.
=== Example ===
Any non-linear function, ''f(a,b)'', of two variables, ''a'' and ''b'', can be expanded as
: <math>f\approx f^0+\frac{\partial f}{\partial a}a+\frac{\partial f}{\partial b}b</math>
Whence
:<math>\sigma^2_f=\left(\frac{\partial f}{\partial a}\right)^2\sigma^2_a+\left(\frac{\partial f}{\partial b}\right)^2\sigma^2_b+2\frac{\partial f}{\partial a}\frac{\partial f}{\partial b}COV_{ab}</math>
In the particular case that <math>f=ab\!</math>, <math>\frac{\partial f}{\partial a}=b, \frac{\partial f}{\partial b}=a</math>. Then
:<math>\sigma^2_f=b^2\sigma^2_a+a^2 \sigma_b^2+2abCOV_{ab}</math>
or
:<math>\left(\frac{\sigma_f}{f}\right)^2=\left(\frac{\sigma_a}{a}\right)^2+\left(\frac{\sigma_b}{b}\right)^2+2\left(\frac{\sigma_a}{a}\right)\left(\frac{\sigma_b}{b}\right)\rho_{ab}</math>
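This product formula can be checked by simulation. The sketch below (NumPy; the means, standard deviations and correlation are invented illustrative values) compares the first-order variance with a Monte Carlo estimate:
<source lang="python">
import numpy as np

# Illustrative values for a = 10 +/- 0.1, b = 5 +/- 0.2, correlation 0.5
mu_a, mu_b = 10.0, 5.0
sd_a, sd_b = 0.1, 0.2
rho = 0.5
cov_ab = rho * sd_a * sd_b

# First-order propagation for f = a*b
var_f = mu_b**2 * sd_a**2 + mu_a**2 * sd_b**2 + 2 * mu_a * mu_b * cov_ab

# Monte Carlo estimate for comparison
rng = np.random.default_rng(1)
cov = [[sd_a**2, cov_ab], [cov_ab, sd_b**2]]
a, b = rng.multivariate_normal([mu_a, mu_b], cov, size=1_000_000).T
print(var_f, np.var(a * b))  # close because the relative errors are small
</source>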
==Caveats and warnings==
Error estimates for non-linear functions are [[Bias of an estimator|biased]] on account of using a truncated series expansion. The extent of this bias depends on the nature of the function. For example, the bias on the error calculated for log(1+''x'') increases as ''x'' increases, since the expansion to ''x'' is a good approximation only when ''x'' is small.
In data-fitting applications it is often possible to assume that measurement errors are uncorrelated. Nevertheless, parameters derived from these measurements, such as [[least-squares]] parameters, will be correlated. For example, in [[linear regression]], the errors on the slope and intercept will be correlated and this correlation should be taken into account when deriving the error on a calculated value.
:<math>y=mz+c: \sigma^2_y=z^2\sigma^2_m+\sigma^2_c+2z\rho \sigma_m\sigma_c</math>
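As a sketch of this effect (the data, noise level and evaluation point below are invented for illustration), the covariance matrix returned by a least-squares fit can be propagated to the error on a predicted value:
<source lang="python">
import numpy as np

# Simulated data for a straight-line fit y = m*z + c
rng = np.random.default_rng(2)
z = np.linspace(0.0, 10.0, 20)
y = 2.0 * z + 1.0 + rng.normal(0.0, 0.5, z.size)

# Least-squares fit; cov is the 2x2 covariance matrix of (m, c)
(m, c), cov = np.polyfit(z, y, 1, cov=True)

# Variance of a predicted value at z0, including the (generally
# non-zero) slope-intercept covariance term
z0 = 5.0
var_y = z0**2 * cov[0, 0] + cov[1, 1] + 2 * z0 * cov[0, 1]
print(m, c, var_y)
</source>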
In the special case of the inverse <math>1/B</math>, where <math>B</math> is a standard normal variable, the distribution is a [[Cauchy distribution]] and there is no definable variance. For such [[ratio distribution|ratio distributions]], probabilities for intervals can nevertheless be defined, either by Monte Carlo simulation or, in some cases, by using the Geary–Hinkley transformation<ref name="HayyaJ1975On">{{Cite journal
| author = [[Jack Hayya]], [[Donald Armstrong]] and [[Nicolas Gressis]]
| title = A Note on the Ratio of Two Normally Distributed Variables
| journal = [[Management Science (journal)|Management Science]]
| year = 1975
| volume = 21
| issue = 11
| pages = 1338–1341
| month = July
| url = http://links.jstor.org/sici?sici=0025-1909(197507)21%3A11%3C1338%3AANOTRO%3E2.0.CO%3B2-2
}}</ref>.
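A minimal Monte Carlo sketch of such an interval probability (the means and standard deviations are illustrative assumptions, chosen so that the denominator occasionally approaches zero):
<source lang="python">
import numpy as np

# Interval probability for a ratio A/B of two normal variables.
# The ratio itself has no finite variance, but the probability that
# it falls in a given interval is still well defined.
rng = np.random.default_rng(3)
A = rng.normal(10.0, 1.0, size=1_000_000)
B = rng.normal(5.0, 2.0, size=1_000_000)
r = A / B
print(np.mean((1.0 < r) & (r < 3.0)))  # estimated P(1 < A/B < 3)
</source>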
==Example formulas==
This table shows the variances of simple functions of the real variables <math>A,B\,</math> with standard deviations <math>\sigma_A, \sigma_B\,</math>, and precisely known real-valued constants <math>a,b\,</math>.
:{| class="wikitable" background: white"
! style="background:#ffdead;" | Function !! style="background:#ffdead;" |
|-
| <math>f = aA\,</math> || <math>\sigma_f^2 = a^2 \sigma_A^2\,</math>
|-
| <math>f = aA \pm bB</math> || <math>\sigma_f^2 = a^2\sigma_A^2 +b^2 \sigma_B^2\pm2abCOV_{AB}\,</math>
|-
| <math>f = aAB\,</math> || <math>\left(\frac{\sigma_f}{f}\right)^2 = \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2+2 \frac{\sigma_A}{A} \frac{\sigma_B}{B} \rho_{AB}</math>
|-
| <math>f = a\frac{A}{B}</math> || <math>\left(\frac{\sigma_f}{f}\right)^2 = \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2-2 \frac{\sigma_A}{A} \frac{\sigma_B}{B} \rho_{AB}</math>
|-
| <math>f = aA^{\pm b} \,</math> || <math>\frac{\sigma_f}{f} = b \frac{\sigma_A}{A}</math>
|-
| <math>f = a \ln(\pm bA) \,</math> || <math>\sigma_f = a \frac{\sigma_A}{A}</math>
|-
| <math>f = a e^{\pm bA} \,</math> || <math>\frac{\sigma_f}{f} =b\sigma_A</math>
|-
| <math>f = a^{\pm bA} \,</math> || <math>\frac{\sigma_f}{f} =b \ln(a)\, \sigma_A</math>
|}
For uncorrelated variables the covariance terms are zero.
Expressions for more complicated functions can be derived by combining simpler functions. For example, repeated multiplication, assuming no correlation, gives
:<math>f = ABC: \left(\frac{\sigma_f}{f}\right)^2 = \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2+ \left(\frac{\sigma_C}{C}\right)^2</math>
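A quick Monte Carlo check of this combined rule (NumPy; the values are illustrative, with relative errors kept small so the first-order result applies):
<source lang="python">
import numpy as np

# f = A*B*C with uncorrelated 1% relative errors (illustrative values)
rng = np.random.default_rng(5)
A = rng.normal(4.0, 0.04, size=1_000_000)
B = rng.normal(5.0, 0.05, size=1_000_000)
C = rng.normal(8.0, 0.08, size=1_000_000)
f = A * B * C

rel_var = (0.04 / 4.0)**2 + (0.05 / 5.0)**2 + (0.08 / 8.0)**2
print(rel_var, np.var(f) / np.mean(f)**2)  # the two should nearly agree
</source>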
==Partial derivatives==
Given <math>X=f(A, B, C, \cdots)</math>
:{| class="wikitable" style="text-align:center; background: white"
! style="background:#ffdead;" | Absolute Error !! style="background:#ffdead;" | Variance
|-
| <math>\Delta X=\left |\frac{\partial f}{\partial A}\right |\cdot \Delta A+\left |\frac{\partial f}{\partial B}\right |\cdot \Delta B+\left |\frac{\partial f}{\partial C}\right |\cdot \Delta C+\cdots</math> || <math>\sigma_X^2=\left (\frac{\partial f}{\partial A}\sigma_A\right )^2+\left (\frac{\partial f}{\partial B}\sigma_B\right )^2+\left (\frac{\partial f}{\partial C}\sigma_C\right )^2+\cdots</math><ref>{{cite web |url=http://www.rit.edu/~uphysics/uncertainties/Uncertaintiespart2.html |title=Uncertainties and Error Propagation |accessdate=2007-04-20 |author=Vern Lindberg |authorlink=http://www.rit.edu/~vwlsps/ |date=2000-07-01 |work=Uncertainties, Graphing, and the Vernier Caliper |publisher=Rochester Institute of Technology |pages=1 |language=eng |archiveurl=http://web.archive.org/web/*/http://www.rit.edu/~uphysics/uncertainties/Uncertaintiespart2.html |archivedate=2004-11-12 |quote=The guiding principle in all cases is to consider the most pessimistic situation. }}</ref>
|}
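The variance formula lends itself to numerical evaluation when analytic derivatives are inconvenient. The sketch below defines a hypothetical helper, <code>propagate</code>, that approximates the partial derivatives by central finite differences and applies the variance formula for uncorrelated errors (the function and values are illustrative):
<source lang="python">
import numpy as np

def propagate(f, values, sigmas, h=1e-6):
    """First-order uncertainty of f(values) for uncorrelated sigmas,
    with partial derivatives taken by central finite differences."""
    values = np.asarray(values, dtype=float)
    var = 0.0
    for i, s in enumerate(sigmas):
        step = np.zeros_like(values)
        step[i] = h
        dfdx = (f(values + step) - f(values - step)) / (2 * h)
        var += (dfdx * s)**2
    return np.sqrt(var)

# Example: f = A*ln(B) with A = 2 +/- 0.1 and B = 5 +/- 0.2 (illustrative)
print(propagate(lambda x: x[0] * np.log(x[1]), [2.0, 5.0], [0.1, 0.2]))
</source>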
===Example calculation: Inverse tangent function===
We can calculate the uncertainty propagation for the inverse tangent function as an example of using partial derivatives to propagate error.
Define
:<math>f(\theta) = \arctan{\theta}</math>,
and let <math>\sigma_{\theta}</math> be the absolute uncertainty on our measurement of <math>\theta</math>.
The partial derivative of <math>f(\theta)</math> with respect to <math>\theta</math> is
:<math>\frac{\partial f}{\partial \theta} = \frac{1}{1+\theta^2}</math>.
Therefore, our propagated uncertainty is
:<math>\sigma_{f} = \frac{\sigma_{\theta}}{1+\theta^2}</math>,
where <math>\sigma_{f}</math> is the absolute propagated uncertainty.
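A numerical check of this result (NumPy; the values of <math>\theta</math> and <math>\sigma_{\theta}</math> are illustrative):
<source lang="python">
import numpy as np

# Compare the propagated arctan uncertainty with a Monte Carlo estimate
theta, sigma_theta = 2.0, 0.05
analytic = sigma_theta / (1 + theta**2)

rng = np.random.default_rng(4)
samples = np.arctan(rng.normal(theta, sigma_theta, size=1_000_000))
print(analytic, np.std(samples))  # close because sigma_theta is small
</source>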
===Example application: Resistance measurement===
A practical application is an [[experiment]] in which one measures [[current (electricity)|current]], ''I'', and [[voltage]], ''V'', on a [[resistor]] in order to determine the [[electrical resistance|resistance]], ''R'', using [[Ohm's law]], <math>R = V / I.</math>
Given the measured variables with uncertainties, ''I''±Δ''I'' and ''V''±Δ''V'', the uncertainty in the computed quantity, Δ''R'' is
: <math>\Delta R = \left( \left(\frac{\Delta V}{I}\right)^2+\left(\frac{V}{I^2}\Delta I\right)^2\right)^{1/2} = R\sqrt{\left(\frac{\Delta V}{V}\right)^2+\left(\frac{\Delta I}{I}\right)^2}.</math>
Thus, in this simple case, the [[relative error]] Δ''R''/''R'' is simply the square root of the sum of the squares of the two relative errors of the measured variables.
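The calculation is direct to write out in code; the voltage and current readings below are invented for illustration:
<source lang="python">
import math

# Illustrative measurements: V = 12.0 +/- 0.1 V, I = 2.0 +/- 0.02 A
V, dV = 12.0, 0.1
I, dI = 2.0, 0.02

R = V / I
dR = R * math.sqrt((dV / V)**2 + (dI / I)**2)
print(f"R = {R:.3f} +/- {dR:.3f} ohm")
</source>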
== Notes ==
<references/>
==External links==
* [http://physicslabs.phys.cwru.edu/MECH/Manual/Appendix_V_Error%20Prop.pdf Uncertainties and Error Propagation], Appendix V from the Mechanics Lab Manual, Case Western Reserve University.
* [http://www.av8n.com/physics/uncertainty.htm A detailed discussion of measurements and the propagation of uncertainty] explaining the benefits of using error propagation formulas and Monte Carlo simulations instead of simple [[significance arithmetic]].
==See also==
* [[Errors and residuals in statistics]]
* [[Accuracy and precision]]
* [[Delta method]]
* [[Significance arithmetic]]
[[Category:Numerical analysis]]
[[Category:Statistical approximations]]
[[Category:Uncertainty of numbers]]
[[de:Fehlerrechnung]]
[[fi:Virheen kasautumislaki]]
[[fr:Propagation des erreurs]]
[[it:Propagazione degli errori]]