{{Probability distribution| name =Normal| type =density| pdf_image =[[Image:Normal Distribution PDF.svg|360px|Probability density function for the normal distribution]]<br /><small>The red line is the standard normal distribution</small>| cdf_image =[[Image:Normal Distribution CDF.svg|360px|Cumulative distribution function for the normal distribution]]<br /><small>Colors match the image above</small>| parameters =<math>\mu</math> [[location parameter|location]] ([[real number|real]])<br/><math>\sigma^2>0</math> squared [[scale parameter|scale]] (real)| support =<math>x \in\mathbb{R}\!</math>| pdf =<math>\frac{1}{\sigma \sqrt{2\pi} } \exp \left(-\frac{(x-\mu)^2}{2\sigma ^2} \right) </math>| cdf =<math>\frac12 \left(1+\mathrm{erf}\left( \frac{x-\mu}{\sigma\sqrt2}\right) \right)</math>| mean =<math>\mu</math>| median =<math>\mu</math>| mode =<math>\mu</math>| variance =<math>\sigma^2</math>| skewness =0| kurtosis =0<!-- THIS IS THE EXCESS KURTOSIS, NOT THE SIMPLE KURTOSIS - DO NOT REPLACE THIS WITH THE SIMPLE KURTOSIS WHICH IS 3. -->| entropy =<math>\ln\left(\sigma\sqrt{2\,\pi\,e}\right)\!</math>| mgf =<math>M_X(t)= \exp\left(\mu\,t+\frac{\sigma^2 t^2}{2}\right)</math>| char =<math>\chi_X(t)=\exp\left(\mu\,i\,t-\frac{\sigma^2 t^2}{2}\right)</math>| }}
The '''normal distribution''', also called the '''Gaussian distribution''', is an important family of [[continuous probability distribution]]s, applicable in many fields. Each member of the family may be defined by two parameters, ''location'' and ''scale'': the [[mean]] ("average", ''μ'') and [[variance]] ([[standard deviation]] squared) ''σ''<sup>2</sup>, respectively. The '''standard normal distribution''' is the normal distribution with a [[mean]] of zero and a [[variance]] of one (the red curves in the plots to the right). [[Carl Friedrich Gauss]] became associated with this set of distributions when he analyzed astronomical data using them,<ref>Havil, 2003</ref> and defined the equation of its probability density function. It is often called the '''bell curve''' because the graph of its [[probability density function|probability density]] resembles a [[bell (instrument)|bell]]. The importance of the normal distribution as a model of quantitative phenomena in the [[natural science|natural]] and [[behavioral sciences]] is due in part to the [[central limit theorem]]. Many measurements, ranging from [[psychology|psychological]]<ref>[http://findarticles.com/p/articles/mi_g2699/is_0002/ai_2699000241 Gale Encyclopedia of Psychology - Normal Distribution]</ref> to [[physics|physi]]cal phenomena (in particular, [[thermal noise]]) can be approximated, to varying degrees, by the normal distribution. While the mechanisms underlying these phenomena are often unknown, the use of the normal model can be theoretically justified by assuming that many small, independent effects are additively contributing to each observation. The normal distribution is also important for its relationship to [[least-squares estimation]], one of the simplest and oldest methods of statistical estimation. The normal distribution also arises in many areas of [[statistics]]. For example, the [[sampling distribution]] of the [[sample mean]] is approximately normal, even if the distribution of the population from which the sample is taken is not normal.
In addition, the normal distribution maximizes [[information entropy]] among all distributions with known mean and variance, which makes it the natural choice of underlying distribution for data summarized in terms of sample mean and variance. The normal distribution is the most widely used family of distributions in statistics and many statistical tests are based on the assumption of normality. In [[probability theory]], normal distributions arise as the [[convergence of random variables|limiting distributions]] of several continuous and [[discrete random variable|discrete]] families of distributions. == History == The normal distribution was first introduced by [[Abraham de Moivre]] in an article in 1733, which was reprinted in the second edition of his ''[[The Doctrine of Chances]]'', 1738 in the context of approximating certain [[binomial distribution]]s for large ''n''. His result was extended by [[Pierre Simon de Laplace|Laplace]] in his book ''[[Analytical Theory of Probabilities]]'' (1812), and is now called the [[theorem of de Moivre-Laplace]]. Laplace used the normal distribution in the [[analysis of errors]] of experiments. The important [[method of least squares]] was introduced by [[Adrien Marie Legendre|Legendre]] in 1805. [[Carl Friedrich Gauss|Gauss]], who claimed to have used the method since 1794, justified it rigorously in 1809 by assuming a normal distribution of the errors. The name "bell curve" goes back to [[Jouffret]] who first used the term "bell surface" in 1872 for a [[multivariate normal distribution|bivariate normal]] with independent components. The name "normal distribution" was coined independently by [[Charles S. Peirce]], [[Francis Galton]] and [[Wilhelm Lexis]] around 1875.{{Fact|date=March 2008}} Despite this terminology, other probability distributions may be more appropriate in some contexts; see the discussion of [[#Occurrence|occurrence]], below. == Characterization == There are various ways to [[characterization (mathematics)|characterize]] a [[probability distribution]]. The most visual is the [[probability density function]] (PDF). Equivalent ways are the [[cumulative distribution function]], the [[moment (mathematics)|moment]]s, the [[cumulant]]s, the [[Characteristic function (probability theory)|characteristic function]], the [[moment-generating function]], the cumulant-[[generating function]], and [[Maxwell's theorem]]. See [[probability distribution]] for a discussion. To indicate that a real-valued [[random variable]] ''X'' is normally distributed with mean ''μ'' and variance ''σ''² ≥ 0, we write :<math>X \sim N(\mu, \sigma^2).\,\!</math> While it is certainly useful for certain limit theorems (e.g. [[estimator#Asymptotic normality|asymptotic normality of estimators]]) and for the theory of [[Gaussian process]]es to consider the probability distribution concentrated at ''μ'' (see [[Dirac measure]]) as a normal distribution with mean ''μ'' and variance ''σ''² = 0, this degenerate case is often excluded from the considerations because no density with respect to the [[Lebesgue measure]] exists. The normal distribution may also be parameterized using a [[precision]] parameter ''τ'', defined as the reciprocal of ''σ''². This parameterization has an advantage in numerical applications where ''σ''² is very close to zero and is more convenient to work with in analysis as ''τ'' is a [[Exponential family#Interpretation|natural parameter]] of the normal distribution. 
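As a small, purely illustrative sketch of the two parameterizations (the function names below are invented for this example; only the Python standard library is used), the same density can be written either in terms of the variance ''σ''² or of the precision ''τ''&nbsp;=&nbsp;1/''σ''²:
<source lang="python">
import math

def pdf_from_variance(x, mu, sigma2):
    """Normal density parameterised by the variance sigma^2."""
    return math.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)

def pdf_from_precision(x, mu, tau):
    """The same density parameterised by the precision tau = 1/sigma^2."""
    return math.sqrt(tau / (2.0 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2.0)

mu, sigma2 = 1.5, 4.0
print(pdf_from_variance(2.0, mu, sigma2))         # about 0.193
print(pdf_from_precision(2.0, mu, 1.0 / sigma2))  # identical value
</source>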
=== Probability density function === [[Image:Normal Distribution PDF.svg|360px|right|Probability density function for the normal distribution]] The continuous [[probability density function]] of the '''normal distribution''' is the [[Gaussian function]] : <math>\varphi_{\mu,\sigma^2}(x) = \frac{1}{\sigma\sqrt{2\pi}} \,e^{ -\frac{(x- \mu)^2}{2\sigma^2}} = \frac{1}{\sigma} \varphi\left(\frac{x - \mu}{\sigma}\right),\quad x\in\mathbb{R},</math> where ''σ'' > 0 is the [[standard deviation]], the real parameter ''μ'' is the [[expected value]], and : <math>\varphi(x)=\varphi_{0,1}(x)=\frac{1}{\sqrt{2\pi\,}} \, e^{-\frac{x^2}{2}},\quad x\in\mathbb{R},</math> is the density function of the "standard" normal distribution: i.e., the normal distribution with ''μ'' = 0 and ''σ'' = 1. The [[integral]] of <math>\varphi_{\mu,\sigma^2}</math> over the [[real line]] is equal to one as shown in the [[Gaussian integral]] article. As a Gaussian function with the denominator of the exponent equal to 2, the standard normal density function <math>\scriptstyle\varphi</math> is an [[eigenfunction]] of the [[Fourier transform]]. The probability density function has notable properties including: * symmetry about its mean ''μ'' * the [[mode (statistics)|mode]] and [[median]] both equal the mean ''μ'' * the [[inflection point]]s of the curve occur one standard deviation away from the mean, i.e. at ''μ'' &minus; ''σ'' and ''μ'' + ''σ''. === Cumulative distribution function === [[Image:Normal Distribution CDF.svg|360px|right|Cumulative distribution function for the normal distribution]] The [[cumulative distribution function]] (cdf) of a [[probability distribution]], evaluated at a number (lower-case) ''x'', is the probability of the event that a [[random variable]] (capital) ''X'' with that distribution is less than or equal to ''x''. The cumulative distribution function of the normal distribution is expressed in terms of the density function as follows: : <math> \begin{align} \Phi_{\mu,\sigma^2}(x) &{}=\int_{-\infty}^x\varphi_{\mu,\sigma^2}(u)\,du\\ &{}=\frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^x \exp \Bigl( -\frac{(u - \mu)^2}{2\sigma^2} \ \Bigr)\, du \\ &{}= \Phi\Bigl(\frac{x-\mu}{\sigma}\Bigr),\quad x\in\mathbb{R}, \end{align} </math> where the standard normal cdf, Φ, is just the general cdf evaluated with ''μ'' = 0 and ''σ'' = 1: :<math> \Phi(x) = \Phi_{0,1}(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x \exp\Bigl(-\frac{u^2}{2}\Bigr) \, du, \quad x\in\mathbb{R}. </math> The standard normal cdf can be expressed in terms of a [[special function]] called the [[error function]], as :<math> \Phi(x) =\frac{1}{2} \Bigl[ 1 + \operatorname{erf} \Bigl( \frac{x}{\sqrt{2}} \Bigr) \Bigr], \quad x\in\mathbb{R}, </math> and the cdf itself can hence be expressed as :<math> \Phi_{\mu,\sigma^2}(x) =\frac{1}{2} \Bigl[ 1 + \operatorname{erf} \Bigl( \frac{x-\mu}{\sigma\sqrt{2}} \Bigr) \Bigr], \quad x\in\mathbb{R}. </math> The complement of the standard normal cdf, <math>1 - \Phi(x)</math>, is often denoted <math>Q(x)</math>, and is sometimes referred to simply as the '''Q-function''', especially in engineering texts.<ref>[http://cnx.org/content/m11537/latest/ The Q-function<!-- Bot generated title -->]</ref><ref>http://www.eng.tau.ac.il/~jo/academic/Q.pdf</ref> This represents the tail probability of the Gaussian distribution. 
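Because the standard normal cdf can be written in terms of the error function, it is straightforward to evaluate numerically with any library that exposes erf. The following is a minimal, purely illustrative Python sketch (the function names are chosen only for this example):
<source lang="python">
import math

def phi(x, mu=0.0, sigma=1.0):
    """Normal cdf via the relation Phi(x) = (1/2)*[1 + erf((x - mu)/(sigma*sqrt(2)))]."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def q(x):
    """Q-function, the upper tail 1 - Phi(x); erfc avoids cancellation for large x."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(phi(1.96))  # about 0.975
print(q(3.0))     # about 0.00135
</source>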
Other definitions of the Q-function, all of which are simple transformations of <math>\Phi</math>, are also used occasionally.<ref>[http://mathworld.wolfram.com/NormalDistributionFunction.html Normal Distribution Function - from Wolfram MathWorld<!-- Bot generated title -->]</ref> The inverse standard normal cumulative distribution function, or [[quantile function]], can be expressed in terms of the inverse error function: :<math> \Phi^{-1}(p) = \sqrt2 \;\operatorname{erf}^{-1} (2p - 1), \quad p\in(0,1), </math> and the inverse cumulative distribution function can hence be expressed as :<math> \Phi_{\mu,\sigma^2}^{-1}(p) = \mu + \sigma\Phi^{-1}(p) = \mu + \sigma\sqrt2 \; \operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1). </math> This quantile function is sometimes called the [[probit]] function. There is no elementary [[primitive (integral)|primitive]] for the probit function. This is not to say merely that none is known, but rather that the non-existence of such an elementary primitive has been proved. Several accurate methods exist for approximating the quantile function for the normal distribution - see [[quantile function]] for a discussion and references. The values Φ(''x'') may be approximated very accurately by a variety of methods, such as [[numerical integration]], [[Taylor series]], [[asymptotic series]] and [[continued fraction of Gauss#Of Kummer's confluent hypergeometric function|continued fraction]]s. ==== Strict lower and upper bounds for the cdf ==== For large ''x'' the standard normal cdf <math>\scriptstyle\Phi(x)</math> is close to 1 and <math>\scriptstyle\Phi(-x)\,{=}\,1\,{-}\,\Phi(x)</math> is close to 0. The elementary bounds :<math> \frac{x}{1+x^2}\varphi(x)<1-\Phi(x)<\frac{\varphi(x)}{x}, \qquad x>0, </math> in terms of the density <math>\scriptstyle\varphi</math> are useful. Using the [[integration by substitution|substitution]] ''v''&nbsp;=&nbsp;''u''²/2, the upper bound is derived as follows: :<math> \begin{align} 1-\Phi(x) &=\int_x^\infty\varphi(u)\,du\\ &<\int_x^\infty\frac ux\varphi(u)\,du =\int_{x^2/2}^\infty\frac{e^{-v}}{x\sqrt{2\pi}}\,dv =-\biggl.\frac{e^{-v}}{x\sqrt{2\pi}}\biggr|_{x^2/2}^\infty =\frac{\varphi(x)}{x}. \end{align} </math> Similarly, using <math>\scriptstyle\varphi'(u)\,{=}\,-u\,\varphi(u)</math> and the [[quotient rule]], :<math> \begin{align} \Bigl(1+\frac1{x^2}\Bigr)(1-\Phi(x)) &=\int_x^\infty \Bigl(1+\frac1{x^2}\Bigr)\varphi(u)\,du\\ &>\int_x^\infty \Bigl(1+\frac1{u^2}\Bigr)\varphi(u)\,du =-\biggl.\frac{\varphi(u)}u\biggr|_x^\infty =\frac{\varphi(x)}x. \end{align} </math> Solving for <math>\scriptstyle 1\,{-}\,\Phi(x)\,</math> provides the lower bound. === Generating functions === ==== Moment generating function ==== The [[moment generating function]] is defined as the [[expected value]] of exp(''tX''). For a normal distribution, the moment generating function is : <math> \begin{align} M_X(t) & {} = \mathrm{E} \left[ \exp{(tX)} \right] \\ & {} = \int_{-\infty}^{\infty} \frac{1}{\sigma \sqrt{2\pi} } \exp{\left( -\frac{(x - \mu)^2}{2 \sigma^2} \right)} \exp{(tx)} \, dx \\ & {} = \exp{ \left( \mu t + \frac{\sigma^2 t^2}{2} \right)} \end{align} </math> as can be seen by [[completing the square]] in the exponent. ==== Cumulant generating function ==== The [[cumulant]] generating function is the logarithm of the moment generating function: ''g''(''t'') = μ''t'' + σ²''t''²/2. Since this is a quadratic polynomial in ''t'', only the first two cumulants are nonzero. 
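The closed form of the moment generating function above can be checked by simulation. A short illustrative sketch (assuming the [[NumPy]] library is available; the particular parameter values are arbitrary) compares a Monte Carlo estimate of E[exp(''tX'')] with exp(μ''t''&nbsp;+&nbsp;σ²''t''²/2):
<source lang="python">
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, t = 0.5, 2.0, 0.3

x = rng.normal(mu, sigma, size=1_000_000)
monte_carlo = np.exp(t * x).mean()                  # sample estimate of E[exp(tX)]
closed_form = np.exp(mu * t + sigma**2 * t**2 / 2)  # exp(mu*t + sigma^2*t^2/2)

print(monte_carlo, closed_form)  # the two values agree closely (about 1.39)
</source>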
==== Characteristic function ==== The [[characteristic function (probability theory)|characteristic function]] is defined as the [[expected value]] of <math>\exp (i t X)</math>, where <math>i</math> is the [[imaginary unit]]. So the characteristic function is obtained by replacing <math>t</math> with <math>it</math> in the moment-generating function. For a normal distribution, the characteristic function is : <math>\begin{align} \chi_X(t;\mu,\sigma) &{} = M_X(i t) = \mathrm{E} \left[ \exp(i t X) \right] \\ &{}= \int_{-\infty}^{\infty} \frac{1}{\sigma \sqrt{2\pi}} \exp \left(- \frac{(x - \mu)^2}{2\sigma^2} \right) \exp(i t x) \, dx \\ &{}= \exp \left( i \mu t - \frac{\sigma^2 t^2}{2} \right). \end{align} </math> == Properties == Some properties of the normal distribution: #If <math>X \sim N(\mu, \sigma^2)</math> and <math>a</math> and <math>b</math> are [[real number]]s, then <math>a X + b \sim N(a \mu + b, (a \sigma)^2)</math> (see [[expected value]] and [[variance]]). #If <math>X \sim N(\mu_X, \sigma^2_X)</math> and <math>Y \sim N(\mu_Y, \sigma^2_Y)</math> are [[statistical independence|independent]] normal [[random variable]]s, then: #* Their sum is normally distributed with <math>U = X + Y \sim N(\mu_X + \mu_Y, \sigma^2_X + \sigma^2_Y)</math> ([[sum of normally distributed random variables|proof]]). Interestingly, the converse holds: if two independent random variables have a normally-distributed sum, then they must be normal themselves &mdash; this is known as [[Cramér's theorem]]. #*Their difference is normally distributed with <math>V = X - Y \sim N(\mu_X - \mu_Y, \sigma^2_X + \sigma^2_Y)</math>. #*If the variances of ''X'' and ''Y'' are equal, then ''U'' and ''V'' are independent of each other. #*The [[Kullback-Leibler divergence]], <math>D_{\rm KL}( X \| Y ) = { 1 \over 2 } \left( \log \left( { \sigma^2_Y \over \sigma^2_X } \right) + \frac{\sigma^2_X}{\sigma^2_Y} + \frac{\left(\mu_Y - \mu_X\right)^2}{\sigma^2_Y} - 1\right). </math> #If <math>X \sim N(0, \sigma^2_X)</math> and <math>Y \sim N(0, \sigma^2_Y)</math> are independent normal random variables, then: #*Their product <math>X Y</math> follows a distribution with density <math>p</math> given by #*:<math>p(z) = \frac{1}{\pi\,\sigma_X\,\sigma_Y} \; K_0\left(\frac{|z|}{\sigma_X\,\sigma_Y}\right),</math> where <math>K_0</math> is a [[Bessel function#Modified Bessel functions|modified Bessel function of the second kind]]. #*Their ratio follows a [[Cauchy distribution]] with <math>X/Y \sim \mathrm{Cauchy}(0, \sigma_X/\sigma_Y)</math>. Thus the Cauchy distribution is a special kind of [[ratio distribution]]. #If <math>X_1, \dots, X_n</math> are independent standard normal variables, then <math>X_1^2 + \cdots + X_n^2</math> has a [[chi-square distribution]] with ''n'' degrees of freedom. #If <math>X_1,\dots,X_n</math> are independent standard normal variables, then the [[sample mean]] <math>\bar{X}=(X_1+\cdots+X_n)/n</math> and [[sample variance]] <math>S^2=((X_1-\bar{X})^2+\cdots+(X_n-\bar{X})^2)/(n-1)</math> are [[statistical independence|independent]]. This property [[characterization (mathematics)|characterizes]] normal distributions (and helps to explain why the [[F-test]] is non-robust with respect to non-normality!) === Standardizing normal random variables === As a consequence of Property 1, it is possible to relate all normal random variables to the standard normal. 
If <math>X</math> ~ <math>N(\mu, \sigma^2)</math>, then
:<math>Z = \frac{X - \mu}{\sigma} \!</math>
is a standard normal random variable: <math>Z</math> ~ <math>N(0,1)</math>. An important consequence is that the cdf of a general normal distribution is therefore
:<math>\Pr(X \le x) = \Phi \left( \frac{x-\mu}{\sigma} \right) = \frac{1}{2} \left( 1 + \operatorname{erf} \left( \frac{x-\mu}{\sigma\sqrt{2}} \right) \right) . </math>
Conversely, if <math>Z</math> is a standard normal random variable, <math>Z</math> ~ <math>N(0,1)</math>, then
:<math>X = \sigma Z + \mu</math>
is a normal random variable with mean <math>\mu</math> and variance <math>\sigma^2</math>.

The standard normal distribution has been tabulated (usually in the form of values of the cumulative distribution function Φ), and the other normal distributions are simple transformations of the standard one, as described above. Therefore, one can use tabulated values of the cdf of the standard normal distribution to find values of the cdf of a general normal distribution.

=== Moments ===
The first few [[moment (mathematics)|moments]] of the normal distribution are:
{| class="wikitable"
|- bgcolor="#CCCCCC"
! Number !! Raw moment !! Central moment !! Cumulant
|-
| 0 || 1 || 1 ||
|-
| 1 || <math>\mu</math> || 0 || <math>\mu</math>
|-
| 2 || <math>\mu^2 + \sigma^2</math> || <math>\sigma^2</math> || <math>\sigma^2</math>
|-
| 3 || <math>\mu^3 + 3\mu\sigma^2</math> || 0 || 0
|-
| 4 || <math>\mu^4 + 6 \mu^2 \sigma^2 + 3 \sigma^4</math> || <math>3 \sigma^4</math> || 0
|-
| 5 || <math>\mu^5 + 10 \mu^3 \sigma^2 + 15 \mu \sigma^4</math> || 0 || 0
|-
| 6 || <math>\mu^6 + 15 \mu^4 \sigma^2 + 45 \mu^2 \sigma^4 + 15 \sigma^6 </math> || <math> 15 \sigma^6 </math> || 0
|-
| 7 || <math>\mu^7 + 21 \mu^5 \sigma^2 + 105 \mu^3 \sigma^4 + 105 \mu \sigma^6 </math> || 0 || 0
|-
| 8 || <math>\mu^8 + 28 \mu^6 \sigma^2 + 210 \mu^4 \sigma^4 + 420 \mu^2 \sigma^6 + 105 \sigma^8 </math> || <math> 105 \sigma^8 </math> || 0
|}
All [[cumulant]]s of the normal distribution beyond the second are zero. Higher central moments (of order 2''k'', with ''μ''&nbsp;=&nbsp;0) can be obtained using the formula
: <math> E\left[x^{2k}\right]=\frac{(2k)!}{2^k k!} \sigma^{2k}. </math>

=== Generating values for normal random variables ===
For computer simulations, it is often useful to generate values that have a normal distribution. There are several methods; the most basic is to invert the standard normal cdf. More efficient methods are also known, one such method being the [[Box-Muller transform]]. An even faster algorithm is the [[ziggurat algorithm]]. The Box-Muller algorithm says that if ''U'' and ''V'' are two independent random numbers [[uniform distribution|uniformly distributed]] on (0, 1] (e.g. the output from a [[random number generator]]), then ''X'' and ''Y'' defined below are two independent standard normally distributed random variables:
:<math>X = \sqrt{- 2 \ln U} \cdot \cos(2 \pi V) </math>
:<math>Y = \sqrt{- 2 \ln U} \cdot \sin(2 \pi V) </math>
This works because the chi-square distribution with two degrees of freedom (see property 4 above) is an easily generated exponential random variable.
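A direct, illustrative transcription of the Box-Muller formulas above into Python (standard library only; the helper name is invented for this example):
<source lang="python">
import math
import random

def box_muller():
    """Return two independent standard normal deviates from two uniform deviates."""
    u = 1.0 - random.random()   # random.random() lies in [0, 1); shift to (0, 1] so log(u) is finite
    v = random.random()
    r = math.sqrt(-2.0 * math.log(u))
    return r * math.cos(2.0 * math.pi * v), r * math.sin(2.0 * math.pi * v)

z1, z2 = box_muller()
# A general N(mu, sigma^2) deviate is then mu + sigma*z, as in the standardization section above.
print(z1, z2)
</source>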
=== The central limit theorem === {{main|central limit theorem}} [[Image:Normal approximation to binomial.svg|325px|thumb|Plot of the pdf of a normal distribution with μ = 12 and σ = 3, approximating the pdf of a binomial distribution with ''n'' = 48 and ''p'' = 1/4]] Under certain conditions (such as being [[independent and identically-distributed random variables|independent and identically-distributed]] with finite variance), the sum of a large number of random variables is approximately normally distributed — this is the central limit theorem. The practical importance of the central limit theorem is that the normal cumulative distribution function can be used as an approximation to some other cumulative distribution functions, for example: * A [[binomial distribution]] with parameters ''n'' and ''p'' is approximately normal for large ''n'' and ''p'' not too close to 1 or 0 (some books recommend using this approximation only if ''np'' and ''n''(1&nbsp;&minus;&nbsp;''p'') are both at least 5; in this case, a [[continuity correction]] should be applied).<br/>The approximating normal distribution has parameters μ = ''np'', σ<sup>2</sup> = ''np''(1&nbsp;&minus;&nbsp;''p''). * A [[Poisson distribution]] with parameter λ is approximately normal for large λ.<br/>The approximating normal distribution has parameters μ = σ<sup>2</sup> = λ. Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution. A general upper bound of the approximation error of the cumulative distribution function is given by the [[Berry–Esséen theorem]]. ===Infinite divisibility=== The normal distributions are [[Infinite divisibility (probability)|infinitely divisible]] probability distributions: Given a mean ''μ'', a variance ''σ'' <sup>2</sup>&nbsp;≥&nbsp;0, and a natural number ''n'', the sum ''X''<sub>1</sub>&nbsp;+ .&nbsp;.&nbsp;.&nbsp;+ ''X<sub>n</sub>'' of ''n'' independent random variables :<math>X_1,X_2,\dots,X_n \sim N(\mu/n, \sigma^2\!/n)\,</math> has this specified normal distribution (to verify this, use [[Sum of normally distributed random variables|characteristic functions or convolution]] and [[mathematical induction]]). ===Stability=== The normal distributions are strictly [[stability (probability)|stable]] probability distributions. ===Standard deviation and confidence intervals=== [[Image:standard deviation diagram.svg||325px|thumb|Dark blue is less than one [[standard deviation]] from the [[mean]]. For the normal distribution, this accounts for about 68% of the set (dark blue) while two standard deviations from the mean (medium and dark blue) account for about 95% and three standard deviations (light, medium, and dark blue) account for about 99.7%.]] About 68% of values drawn from a normal distribution are within one standard deviation σ&nbsp;>&nbsp;0 away from the mean μ; about 95% of the values are within two standard deviations and about 99.7% lie within three standard deviations. This is known as the "[[68-95-99.7 rule]]" or the "[[empirical rule]]." 
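The 68-95-99.7 rule is easy to check by simulation. The sketch below (assuming the [[NumPy]] library; the sample size and parameters are arbitrary) draws a large sample and counts the fraction of values within one, two and three standard deviations of the mean:
<source lang="python">
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 10.0, 3.0
x = rng.normal(mu, sigma, size=1_000_000)

for n in (1, 2, 3):
    fraction = np.mean(np.abs(x - mu) <= n * sigma)
    print(n, fraction)  # roughly 0.683, 0.954 and 0.997
</source>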
To be more precise, the area under the bell curve between μ&nbsp;&minus;&nbsp;''n''σ and μ&nbsp;+&nbsp;''n''σ in terms of the cumulative normal distribution function is given by
:<math>\begin{align}&\Phi_{\mu,\sigma^2}(\mu+n\sigma)-\Phi_{\mu,\sigma^2}(\mu-n\sigma)\\ &=\Phi(n)-\Phi(-n)=2\Phi(n)-1=\mathrm{erf}\bigl(n/\sqrt{2}\,\bigr),\end{align}</math>
where erf is the [[error function]]. To 12 decimal places, the values for the 1-, 2-, up to 6-sigma points are:
{| class="wikitable" style="text-align:center"
|- bgcolor="#CCCCCC"
!&nbsp;<math>n\,</math>&nbsp;!! <math>\mathrm{erf}\bigl(n/\sqrt{2}\,\bigr)\,</math>
|-
|1 || &nbsp;0.682689492137&nbsp;
|-
|2 || 0.954499736104
|-
|3 || 0.997300203937
|-
|4 || 0.999936657516
|-
|5 || 0.999999426697
|-
|6 || 0.999999998027
|}
The next table gives the reverse relation of sigma multiples corresponding to a few often used values for the area under the bell curve. These values are useful to determine (asymptotic) [[confidence interval]]s of the specified levels based on normally distributed (or [[Estimator#Asymptotic normality|asymptotically normal]]) [[estimator]]s:
{| class="wikitable" style="text-align:center"
|- bgcolor="#CCCCCC"
!&nbsp;<math>\mathrm{erf}\bigl(n/\sqrt{2}\,\bigr)</math>&nbsp;!!<math>n\,</math>&nbsp;
|-
|0.80 ||&nbsp;1.28155&nbsp;
|-
|0.90 || 1.64485
|-
|0.95 || 1.95996
|-
|0.98 || 2.32635
|-
|0.99 || 2.57583
|-
|0.995 || 2.80703
|-
|0.998 || 3.09023
|-
|0.999 || 3.29052
|}
where the value on the left of the table is the proportion of values that will fall within a given interval and ''n'' is a multiple of the standard deviation that specifies the width of the interval.

===Exponential family form===
The normal distribution is a two-parameter [[exponential family]] with natural statistics ''X'' and ''X''<sup>2</sup>. The canonical form has natural parameters <math>{\mu \over \sigma^2}</math> and <math>{1 \over \sigma^2}</math> and sufficient statistics <math>\sum x </math> and <math>-{1 \over 2} \sum x^2 </math>.

== Complex Gaussian process ==
Consider a complex Gaussian random variable,
:<math> Z=X+iY\, </math>
where ''X'' and ''Y'' are real and independent Gaussian variables with equal variances <math>\scriptstyle \sigma_r^2\,</math>. The pdf of the joint variables is then
:<math> \frac{1}{2\,\pi\,\sigma_r^2} e^{-(x^2+y^2)/(2 \sigma_r ^2)} </math>
Because <math>\scriptstyle \sigma_z\, =\, \sqrt{2}\sigma_r</math>, the resulting pdf for the complex Gaussian variable ''Z'' is
:<math> \frac{1}{\pi\,\sigma_z^2} e^{-|z|^2/\sigma_z^2}. </math>

==Related distributions==
*<math>R \sim \mathrm{Rayleigh}(\sigma^2)</math> is a [[Rayleigh distribution]] if <math>R = \sqrt{X^2 + Y^2}</math> where <math>X \sim N(0, \sigma^2)</math> and <math>Y \sim N(0, \sigma^2)</math> are two independent normal random variables.
*<math>Y \sim \chi_{\nu}^2</math> is a [[chi-square distribution]] with <math>\nu</math> [[degrees of freedom (statistics)|degrees of freedom]] if <math>Y = \sum_{k=1}^{\nu} X_k^2</math> where <math>X_k \sim N(0,1)</math>, <math>k=1,\dots,\nu</math>, are independent.
*<math>Y \sim \mathrm{Cauchy}(\mu = 0, \theta = 1)</math> is a [[Cauchy distribution]] if <math>Y = X_1/X_2</math>, where <math>X_1 \sim N(0,1)</math> and <math>X_2 \sim N(0,1)</math> are two [[statistical independence|independent]] standard normal random variables.
*<math>Y \sim \mbox{Log-N}(\mu, \sigma^2)</math> is a [[log-normal distribution]] if <math>Y = e^X</math> and <math>X \sim N(\mu, \sigma^2)</math>.
*Relation to [[Lévy skew alpha-stable distribution]]: if <math>X\sim \textrm{Levy-S}\alpha\textrm{S}(2,\beta,\sigma/\sqrt{2},\mu)</math> then <math>X \sim N(\mu,\sigma^2)</math>. *[[Truncated normal distribution]]. If <math>X \sim N(\mu, \sigma^2),\!</math> then truncating ''X'' below at <math>A</math> and above at <math>B</math> will lead to a random variable with mean <math>E(X)=\mu + \frac{\sigma(\varphi_1-\varphi_2)}{T},\!</math> where <math>T=\Phi\left(\frac{B-\mu}{\sigma}\right)-\Phi\left(\frac{A-\mu}{\sigma}\right), \; \varphi_1 = \varphi\left(\frac{A-\mu}{\sigma}\right), \; \varphi_2 = \varphi\left(\frac{B-\mu}{\sigma}\right)</math> and <math>\varphi</math> is the [[probability density function]] of a standard normal random variable. *If <math>X</math> is a random variable with a normal distribution, and <math>Y=|X|</math>, then <math>Y</math> has a [[folded normal distribution]]. ==Descriptive and inferential statistics== ===Scores=== Many scores are derived from the normal distribution, including [[percentile rank]]s ("percentiles"), [[normal curve equivalent]]s, [[stanine]]s, [[Standard score|z-scores]], and T-scores. Additionally, a number of behavioral [[statistics|statistical]] procedures are based on the assumption that scores are normally distributed; for example, [[Student's t-test|t-test]]s and [[Analysis of variance|ANOVA]]s (see below). [[Bell curve grading]] assigns relative grades based on a normal distribution of scores. {{Sectstub|date=May 2008}} === Normality tests === {{main|normality test}} Normality tests check a given set of data for similarity to the normal distribution. The [[null hypothesis]] is that the data set is similar to the normal distribution, therefore a sufficiently small [[P-value]] indicates non-normal data. *[[Kolmogorov-Smirnov test]] *[[Lilliefors test]] *[[Anderson-Darling test]] *[[Ryan-Joiner test]] *[[Shapiro-Wilk test]] *[[Normal probability plot]] ([[rankit]] plot) *[[Jarque-Bera test]] === Estimation of parameters === ====Maximum likelihood estimation of parameters==== Suppose :<math>X_1,\dots,X_n</math> are [[statistical independence|independent]] and each is normally distributed with expectation ''μ'' and variance ''σ''² > 0. In the language of statisticians, the observed values of these ''n'' random variables make up a "sample of size ''n'' from a normally distributed population." It is desired to estimate the "population mean" ''μ'' and the "population standard deviation" ''σ'', based on the observed values of this sample. The continuous joint probability density function of these ''n'' independent random variables is :<math>\begin{align}f(x_1,\dots,x_n;\mu,\sigma) &= \prod_{i=1}^n \varphi_{\mu,\sigma^2}(x_i)\\ &=\frac1{(\sigma\sqrt{2\pi})^n}\prod_{i=1}^n \exp\biggl(-{1 \over 2} \Bigl({x_i-\mu \over \sigma}\Bigr)^2\biggr), \quad(x_1,\ldots,x_n)\in\mathbb{R}^n. \end{align} </math> As a function of ''μ'' and ''σ'', the [[likelihood function]] based on the observations ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub> is :<math> L(\mu,\sigma) = \frac C{\sigma^n} \exp\left(-{\sum_{i=1}^n (X_i-\mu)^2 \over 2\sigma^2}\right), \quad\mu\in\mathbb{R},\ \sigma>0, </math> with some constant ''C'' > 0 (which in general would be even allowed to depend on ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub>, but will vanish anyway when partial derivatives of the log-likelihood function with respect to the parameters are computed, see below). 
In the method of [[maximum likelihood]], the values of ''μ'' and ''σ'' that maximize the likelihood function are taken as estimates of the population parameters ''μ'' and ''σ''. Usually in maximizing a function of two variables, one might consider [[partial derivative]]s. But here we will exploit the fact that the value of ''μ'' that maximizes the likelihood function with ''σ'' fixed does not depend on ''σ''. Therefore, we can find that value of ''μ'', then substitute it for ''μ'' in the likelihood function, and finally find the value of ''σ'' that maximizes the resulting expression. It is evident that the likelihood function is a decreasing function of the sum :<math>\sum_{i=1}^n (X_i-\mu)^2. \,\!</math> So we want the value of ''μ'' that ''minimizes'' this sum. Let :<math>\overline{X}_n=(X_1+\cdots+X_n)/n</math> be the "sample mean" based on the ''n'' observations. Observe that :<math> \begin{align} \sum_{i=1}^n (X_i-\mu)^2 &=\sum_{i=1}^n\bigl((X_i-\overline{X}_n)+(\overline{X}_n-\mu)\bigr)^2\\ &=\sum_{i=1}^n(X_i-\overline{X}_n)^2 + 2(\overline{X}_n-\mu)\underbrace{\sum_{i=1}^n (X_i-\overline{X}_n)}_{=\,0} + \sum_{i=1}^n (\overline{X}_n-\mu)^2\\ &=\sum_{i=1}^n(X_i-\overline{X}_n)^2 + n(\overline{X}_n-\mu)^2. \end{align} </math> Only the last term depends on ''μ'' and it is minimized by :<math>\widehat{\mu}_n=\overline{X}_n.</math> That is the maximum-likelihood estimate of ''μ'' based on the ''n'' observations ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub>. When we substitute that estimate for ''μ'' into the likelihood function, we get :<math>L(\overline{X}_n,\sigma) = \frac C{\sigma^n} \exp\biggl(-{\sum_{i=1}^n (X_i-\overline{X}_n)^2 \over 2\sigma^2}\biggr), \quad\sigma>0.</math> It is conventional to denote the "log-likelihood function", i.e., the logarithm of the likelihood function, by a lower-case <math>\ell</math>, and we have :<math>\ell(\overline{X}_n,\sigma)=\log C-n\log\sigma-{\sum_{i=1}^n(X_i-\overline{X}_n)^2 \over 2\sigma^2}, \quad\sigma>0,</math> and then :<math> \begin{align} {\partial \over \partial\sigma}\ell(\overline{X}_n,\sigma) &=-{n \over \sigma} +{\sum_{i=1}^n (X_i-\overline{X}_n)^2 \over \sigma^3}\\ &=-{n \over \sigma^3}\biggl(\sigma^2-{1 \over n}\sum_{i=1}^n (X_i-\overline{X}_n)^2 \biggr), \quad\sigma>0. \end{align} </math> This derivative is positive, zero, or negative according as ''σ''² is between 0 and :<math>\hat\sigma_n^2:={1 \over n}\sum_{i=1}^n(X_i-\overline{X}_n)^2,</math> or equal to that quantity, or greater than that quantity. (If there is just one observation, meaning that ''n'' = 1, or if ''X''<sub>1</sub> = ... = ''X''<sub>''n''</sub>, which only happens with probability zero, then <math>\hat\sigma{}_n^2=0</math> by this formula, reflecting the fact that in these cases the likelihood function is unbounded as ''σ'' decreases to zero.) Consequently this average of squares of [[errors and residuals in statistics|residuals]] is the maximum-likelihood estimate of ''σ''², and its square root is the maximum-likelihood estimate of ''σ'' based on the ''n'' observations. This estimator <math>\hat\sigma{}_n^2</math> is [[estimator bias|biased]], but has a smaller [[mean squared error]] than the usual unbiased estimator, which is ''n''/(''n''&nbsp;&minus;&nbsp;1) times this estimator. =====Surprising generalization===== The derivation of the maximum-likelihood estimator of the [[covariance matrix]] of a [[multivariate normal distribution]] is subtle. 
It involves the [[spectral theorem]] and the reason it can be better to view a [[scalar (mathematics)|scalar]] as the [[Trace (linear algebra)|trace]] of a 1&times;1 [[Matrix (mathematics)|matrix]] than as a mere scalar. See [[estimation of covariance matrices]]. ==== Unbiased estimation of parameters ==== The maximum likelihood estimator of the population mean <math>\mu</math> from a sample is an [[unbiased estimator]] of the mean, as is the variance when the mean of the population is known ''a priori''. However, if we are faced with a sample and have no knowledge of the mean or the variance of the population from which it is drawn, the unbiased estimator of the variance <math>\sigma^2</math> is: :<math> S^2 = \frac{1}{n-1} \sum_{i=1}^n (X_i - \overline{X})^2. </math> This "sample variance" follows a [[Gamma distribution]] if all ''X''<sub>''i''</sub> are [[independent and identically-distributed random variables|independent and identically-distributed]]: :<math> S^2 \sim \operatorname{Gamma}\left(\frac{n-1}{2},\frac{2 \sigma^2}{n-1}\right), </math> with mean <math>\operatorname{E}(S^2)=\sigma^2</math> and variance <math>\operatorname{Var}(S^2)=2\sigma^4/(n-1)</math>. == Occurrence == ''Approximately'' normal distributions occur in many situations, as explained by the [[central limit theorem]]. When there is reason to suspect the presence of a large number of small effects ''acting additively and independently'', it is reasonable to assume that observations will be normal. There are statistical methods to empirically test that assumption, for example the [[Kolmogorov-Smirnov test]]. Effects can also act as ''multiplicative'' (rather than additive) modifications. In that case, the assumption of normality is not justified, and it is the [[logarithm]] of the variable of interest that is normally distributed. The distribution of the directly observed variable is then called [[log-normal distribution|log-normal]]. Finally, if there is a single external influence which has a large effect on the variable under consideration, the assumption of normality is not justified either. This is true even if, when the external variable is held constant, the resulting marginal distributions are indeed normal. The full distribution will be a superposition of normal variables, which is not in general normal. This is related to the theory of errors (see below). To summarize, here is a list of situations where approximate normality is sometimes assumed. For a fuller discussion, see below. 
*In counting problems (so the [[central limit theorem]] includes a discrete-to-continuum approximation) where [[reproductive family|reproductive random variables]] are involved, such as
**[[binomial distribution|Binomial random variables]], associated with yes/no questions;
**[[Poisson distribution|Poisson random variables]], associated with [[rare event]]s;
*In physiological measurements of biological specimens:
**The ''logarithm'' of measures of size of living tissue (length, height, skin area, weight);
**The ''length'' of ''inert'' appendages (hair, claws, nails, teeth) of biological specimens, ''in the direction of growth''; presumably the thickness of tree bark also falls under this category;
**Other physiological measures may be normally distributed, but there is no reason to expect that ''a priori'';
*Measurement errors are often ''assumed'' to be normally distributed, and any deviation from normality is considered something which should be explained;
*Financial variables
**Changes in the ''logarithm'' of exchange rates, price indices, and stock market indices; these variables behave like compound interest, not like simple interest, and so are multiplicative;
**Other financial variables may be normally distributed, but there is no reason to expect that ''a priori'';
*Light intensity
**The intensity of laser light is normally distributed;
**Thermal light has a [[Bose-Einstein statistics|Bose-Einstein]] distribution on very short time scales, and a normal distribution on longer timescales due to the central limit theorem.

Of relevance to biology and economics is the fact that complex systems tend to display [[power law]]s rather than normality.

=== Photon counting ===
Light intensity from a single source varies with time, as thermal fluctuations can be observed if the light is analyzed at sufficiently high time resolution. The intensity is usually assumed to be normally distributed. Quantum mechanics interprets measurements of light intensity as [[photon]] counting. The natural assumption in this setting is the [[Poisson distribution]]. When light intensity is integrated over times longer than the coherence time and is large, the Poisson-to-normal limit is appropriate.

=== Measurement errors ===
Normality is the ''central '''assumption''''' of the mathematical [[theory of errors]]. Similarly, in statistical model-fitting, an indicator of goodness of fit is that the [[errors and residuals in statistics|residuals]] (as the errors are called in that setting) be independent and normally distributed. The assumption is that any deviation from normality needs to be explained. In that sense, both in model-fitting and in the theory of errors, normality is the only observation that need not be explained, being expected. However, if the original data are not normally distributed (for instance if they follow a [[Cauchy distribution]]), then the residuals will also not be normally distributed. This fact is usually ignored in practice.

Repeated measurements of the same quantity are expected to yield results which are clustered around a particular value. If all major sources of errors have been taken into account, it is ''assumed'' that the remaining error must be the result of a large number of very small ''additive'' effects, and hence normal. Deviations from normality are interpreted as indications of systematic errors which have not been taken into account. Whether this assumption is valid is debatable. A famous and oft-quoted remark attributed to [[Gabriel Lippmann]] says: "Everyone believes in the [normal] law of errors: the mathematicians, because they think it is an experimental fact; and the experimenters, because they suppose it is a theorem of mathematics."{{Fact|date=February 2008}}

=== Physical characteristics of biological specimens ===
The sizes of full-grown animals are approximately [[lognormal]]. The evidence, and an explanation based on models of growth, were first published in the 1932 book ''[[Problems of Relative Growth]]'' by [[Julian Huxley]].
Differences in size due to sexual dimorphism, or other polymorphisms like the worker/soldier/queen division in social insects, further make the distribution of sizes deviate from lognormality. The assumption that linear size of biological specimens is normal (rather than lognormal) leads to a non-normal distribution of weight (since weight or volume is roughly proportional to the 2nd or 3rd power of length, and Gaussian distributions are only preserved by linear transformations), and conversely assuming that weight is normal leads to non-normal lengths. This is a problem, because there is no ''a priori'' reason why one of length or body mass, and not the other, should be normally distributed. Lognormal distributions, on the other hand, are preserved by powers, so the "problem" goes away if lognormality is assumed.

There are, however, some biological measures where normality is assumed, such as blood pressure of adult humans. This is supposed to be normally distributed, but only after separating males and females into different populations (each of which is normally distributed).

=== Financial variables ===
Already in [[1900]], [[Louis Bachelier]] proposed representing price changes of [[stock]]s using the normal distribution. This approach has since been modified slightly. Because of the exponential nature of [[inflation]], financial indicators such as [[stock]] values and [[commodity]] [[price]]s exhibit "multiplicative behavior". As such, their periodic changes (e.g., yearly changes) are not normal, but rather [[lognormal]]: it is ''returns'', as opposed to values, that are normally distributed. This is still the most commonly used hypothesis in [[finance]], in particular in [[asset pricing]]. Corrections to this model seem to be necessary, as has been pointed out for instance by [[Benoît Mandelbrot]], the popularizer of [[fractals]], who observed that the changes in logarithm over short periods (such as a day) are approximated well by distributions that do not have a finite variance, and therefore the central limit theorem does not apply. Rather, the sum of many such changes gives [[Levy skew alpha-stable distribution|log-Levy distribution]]s.

=== Distribution in testing and intelligence ===
Sometimes, the difficulty and number of questions on an [[Intelligence quotient|IQ]] test are selected in order to yield normally distributed results. Or else, the raw test scores are converted to IQ values by fitting them to the normal distribution. In either case, it is the deliberate result of test construction or score interpretation that leads to IQ scores being normally distributed for the majority of the population.
However, the question whether ''[[intelligence (trait)|intelligence]]'' itself is normally distributed is more involved, because intelligence is a [[latent variable]], so its distribution cannot be observed directly.

===Diffusion equation===
The probability density function of the normal distribution is closely related to the (homogeneous and isotropic) [[diffusion equation]] and therefore also to the [[heat equation]]. This [[partial differential equation]] describes the time evolution of a mass-density function under [[diffusion]]. In particular, the probability density function
: <math>\varphi_{0,t}(x) = \frac{1}{\sqrt{2\pi t\,}}\exp\left(-\frac{x^2}{2t}\right), </math>
for the normal distribution with expected value 0 and variance ''t'' satisfies the diffusion equation:
: <math> \frac{\partial}{\partial t} \varphi_{0,t}(x) = \frac{1}{2} \frac{\partial^2}{\partial x^2} \varphi_{0,t}(x). </math>
If the mass-density at time ''t''&nbsp;=&nbsp;0 is given by a [[Dirac delta]], which essentially means that all mass is initially concentrated in a single point, then the mass-density function at time ''t'' will have the form of the normal probability density function with variance linearly growing with ''t''. This connection is no coincidence: diffusion is due to [[Brownian motion]], which is mathematically described by a [[Wiener process]], and such a process at time ''t'' will also result in a normal distribution with variance linearly growing with ''t''. More generally, if the initial mass-density is given by a function φ(''x''), then the mass-density at time ''t'' will be given by the [[convolution]] of φ and a normal probability density function.

== Numerical approximations of the normal distribution and its cdf ==
The normal distribution is widely used in scientific and statistical computing. Therefore, it has been implemented in various ways. The [[GNU Scientific Library]] calculates values of the standard normal cdf using [[piecewise]] approximations by [[rational function]]s. Another approximation method uses third-degree polynomials on intervals [http://www.doc.ic.ac.uk/~dfg/AndysSplineTutorial/BSplines.html]. The article on the [[bc programming language]] gives an example of how to compute the cdf in Gnu bc.

Generation of deviates from the unit normal is normally done using the [[Box-Muller transform|Box-Muller method]] of choosing an angle uniformly and a squared radius exponentially, and then transforming to (normally distributed) ''x'' and ''y'' coordinates. If log, cos or sin are expensive, a simple alternative is to sum 12 uniform (0,1) deviates and subtract 6 (half of 12). This is quite usable in many applications. The sum over 12 values is chosen because it gives a variance of exactly one. The result is limited to the range (&minus;6,&nbsp;6) and has a density which is a 12-section eleventh-order polynomial approximation to the normal distribution <ref>Johnson NL, Kotz S, Balakrishnan N. (1995) Continuous Univariate Distributions Volume 2, Wiley. Equation(26.48)</ref>.

A method that is much faster than the Box-Muller transform but which is still exact is the so-called [[Ziggurat algorithm]] developed by George Marsaglia.
In about 97% of all cases it uses only two random numbers (one random integer and one random uniform), one multiplication and an if-test. Only in the roughly 3% of cases where the combination of those two falls outside the "core of the ziggurat" does a kind of rejection sampling, using logarithms, exponentials and more uniform random numbers, have to be employed.

There has also been some investigation into the connection between the fast [[Hadamard transform]] and the normal distribution, since the transform employs just addition and subtraction and, by the central limit theorem, random numbers from almost any distribution will be transformed into the normal distribution. In this regard a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into normally distributed data.

In [[Microsoft Excel]] the function NORMSDIST() calculates the cdf of the standard normal distribution, and NORMSINV() calculates its inverse function. Therefore, NORMSINV(RAND()) is an accurate but slow way of generating values from the standard normal distribution, using the principle of [[inverse transform sampling]].

== See also ==
*A [[wikisource:fr:Table de la loi normale centrée réduite|typical normal distribution table]]
*[[Behrens-Fisher problem]]
*[[Bell curve grading]]
*[[Central limit theorem]]
*[[Data transformation (statistics)]] - simple techniques to transform data into normal distribution
*[[Erdős-Kac theorem]], on the occurrence of the normal distribution in [[number theory]]
*[[Gaussian blur]], [[convolution]] using the normal distribution as a kernel
*[[Gaussian function]]
*[[Gaussian process]]
**[[Wiener process]]
**[[Brownian bridge]]
**[[Ornstein-Uhlenbeck process]]
*[[Iannis Xenakis]], Gaussian distribution in [[music]].
*[[Inverse Gaussian distribution]]
*[[Lognormal distribution]]
*[[Multivariate normal distribution]]
*[[Matrix normal distribution]]
*[[Normal-gamma distribution]]
*[[Normally distributed and uncorrelated does not imply independent]] (an example of two normally distributed uncorrelated random variables that are not independent; this cannot happen in the presence of [[multivariate normal distribution|joint normality]])
*[[Probit function]]
*[[Sample size]]
*[[Skew normal distribution]]
*[[Student's t-distribution]]
*[[Tweedie distributions]]

==Notes==
{{reflist}}

==References==
*John Aldrich. [http://members.aol.com/jeff570/stat.html Earliest Uses of Symbols in Probability and Statistics]. Electronic document, retrieved [[March 20]], [[2005]]. (''See "Symbols associated with the Normal Distribution".'')
*[[Abraham de Moivre]] ([[1738]]). ''[[The Doctrine of Chances]]''.
*[[Stephen Jay Gould]] ([[1981]]). ''[[The Mismeasure of Man]]''. First edition. W. W. Norton. ISBN 0-393-01489-4.
*Havil, 2003. ''Gamma, Exploring Euler's Constant'', Princeton, NJ: Princeton University Press, p. 157.
*[[Richard Herrnstein|R. J. Herrnstein]] and [[Charles Murray]] ([[1994]]). ''[[The Bell Curve]]: Intelligence and Class Structure in American Life''. [[Free Press]]. ISBN 0-02-914673-9.
*[[Pierre-Simon Laplace]] ([[1812]]). ''[[Analytical Theory of Probabilities]]''.
*Jeff Miller, John Aldrich, et al. [http://members.aol.com/jeff570/mathword.html Earliest Known Uses of Some of the Words of Mathematics].
In particular, the entries for [http://members.aol.com/jeff570/b.html "bell-shaped and bell curve"], [http://members.aol.com/jeff570/n.html "normal" (distribution)], [http://members.aol.com/jeff570/g.html "Gaussian"], and [http://members.aol.com/jeff570/e.html "Error, law of error, theory of errors, etc."]. Electronic documents, retrieved [[December 13]], [[2005]]. *S. M. Stigler ([[1999]]). ''Statistics on the Table'', chapter 22. Harvard University Press. (''History of the term "normal distribution".'') *[[Eric W. Weisstein]] et al. [http://mathworld.wolfram.com/NormalDistribution.html Normal Distribution] at [[MathWorld]]. Electronic document, retrieved [[March 20]], [[2005]]. *Marvin Zelen and Norman C. Severo ([[1964]]). Probability Functions. [http://www.math.sfu.ca/~cbm/aands/page_931.htm Chapter 26] of ''[[Abramowitz and Stegun|Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables]]'', ed, by [[Milton Abramowitz]] and [[Irene A. Stegun]]. [[National Bureau of Standards]]. == External links == '''The normal distribution''' *[http://mathworld.wolfram.com/NormalDistribution.html Mathworld: Normal Distribution] *[http://www.gnu.org/software/gsl/manual/html_node/Random-Number-Distributions.html GNU Scientific Library &ndash; Reference Manual &ndash; The Gaussian Distribution] *[http://planetmath.org/encyclopedia/NormalRandomVariable.html PlanetMath: normal random variable] *[http://courses.ncssm.edu/math/TALKS/PDFS/normal.pdf Intuitive derivation]. *[http://www.visualstatistics.net/Statistics/Euler/Euler.htm Is normal distribution due to Karl Gauss? Euler, his family of gamma functions, and place in history of statistics] *[http://www.visualstatistics.net/Statistics/Maxwell%20Demons/Maxwell%20Demons.htm Maxwell demons: Simulating probability distributions with functions of propositional calculus] '''Online results and applications''' *[http://www.digitalreview.com.ar/normaldistribution/ Normal distribution table] *[http://www.math.unb.ca/~knight/utility/NormTble.htm Public Domain Normal Distribution Table] *[http://www.vias.org/simulations/simusoft_distcalc.html Distribution Calculator] &ndash; Calculates probabilities and critical values for normal, ''[[Student's t-distribution|t]]'', [[chi-square distribution|chi-square]] and [[F-distribution|''F''-distribution]]. *[http://www-stat.stanford.edu/~naras/jsm/NormalDensity/NormalDensity.html Java Applet on Normal Distributions] *[http://socr.stat.ucla.edu/htmls/SOCR_Distributions.html Interactive Distribution Modeler (incl. Normal Distribution)]. *[http://www.danielsoper.com/statcalc/calc02.aspx Free Area Under the Normal Curve Calculator] from Daniel Soper's ''Free Statistics Calculators'' website. *[http://www.measuringusability.com/normal_curve.php Interactive Graph of the Standard Normal Curve] Quickly Visualize the one and two-tailed area of the Standard Normal Curve '''Algorithms and approximations''' *[http://www.sitmo.com/doc/Calculating_the_Cumulative_Normal_Distribution Calculating the Cumulative Normal distribution, C++, VBA], sitmo.com *[http://home.online.no/~pjacklam/notes/invnorm/ An algorithm for computing the inverse normal cumulative distribution function ] by Peter J. 
Acklam &ndash; has examples for several [[programming language]]s *[http://www2.isye.gatech.edu/~christos/3044/inv_normal.pdf An Approximation to the Inverse Normal(0, 1) Distribution], gatech.edu *[http://www.math.sfu.ca/~cbm/aands/page_932.htm ''Handbook of Mathematical Functions'': Polynomial and Rational Approximations for P(x) and Z(x)], Abramowitz and Stegun {{ProbDistributions|continuous-infinite}} {{Statistics}} [[Category:Continuous distributions]] [[ar:توزيع احتمالي طبيعي]] [[az:Normal paylanma]] [[ca:Distribució normal]] [[cs:Normální rozdělení]] [[cy:Dosraniad normal]] [[da:Normalfordeling]] [[de:Normalverteilung]] [[es:Distribución normal]] [[eo:Normala distribuo]] [[fa:توزیع نرمال]] [[fr:Loi normale]] [[gl:Distribución normal]] [[ko:정규 분포]] [[hr:Normalna raspodjela]] [[id:Distribusi normal]] [[is:Normaldreifing]] [[it:Variabile casuale normale]] [[he:התפלגות נורמלית]] [[la:Distributio normalis]] [[lv:Normālsadalījums]] [[lt:Normalusis skirstinys]] [[hu:Normális eloszlás]] [[nl:Normale verdeling]] [[ja:正規分布]] [[no:Normalfordeling]] [[pl:Rozkład normalny]] [[pt:Distribuição normal]] [[ru:Нормальное распределение]] [[simple:Normal distribution]] [[sk:Normálne rozdelenie]] [[sl:Normalna porazdelitev]] [[sr:Нормална расподела]] [[su:Sebaran normal]] [[fi:Normaalijakauma]] [[sv:Normalfördelning]] [[vi:Phân bố chuẩn]] [[tr:Normal dağılım]] [[uk:Нормальний розподіл]] [[ur:معمول توزیع]] [[zh:正态分布]]