Renewal theory
'''Renewal theory''' is the branch of [[probability theory]] that generalizes Poisson processes to the case of arbitrary ''holding times''. Applications include calculating the expected time for a [[infinite monkey theorem|monkey who is randomly tapping at a keyboard]] to type the word ''Macbeth'' and comparing the long-term benefits of different insurance policies.
==Renewal processes==
===Introduction===
A '''renewal process''' is a generalisation of the [[Poisson process]]. In essence, the Poisson process is a [[continuous-time Markov process]] on the non-negative integers (usually starting at zero) which has [[independent identically distributed]] ''holding times'' at each integer <math>i</math> ([[exponential distribution|exponentially distributed]]) before advancing (with probability 1) to the next integer <math>i+1</math>. In the same informal spirit, we may define a renewal process to be the same thing, except that the holding times take on a more general distribution. (Note, however, that the [[IID]] property of the holding times is retained.)
===Formal definition===
[[Image:Renewal_process.reetep.png|thumb|right|350px|Sample evolution of a renewal process with ''holding times'' ''S<sub>i</sub>'' and jump times ''J<sub>n</sub>''.]]
Let <math>S_1 , S_2 , S_3 , S_4 , S_5, \ldots </math> be a sequence of [[independent identically distributed]] [[random variable]]s such that
:<math> 0 < \mathbb{E}[S_i] < \infty. </math>
We refer to the random variable <math>S_i</math> as the "<math>i</math>th" ''holding time''.
Define for each ''n'' > 0:
:<math> J_n = \sum_{i=1}^n S_i, </math>
each <math>J_n</math> referred to as the "<math>n</math>th" ''jump time'' and the intervals
:<math>[J_n,J_{n+1}]</math>
being called ''renewal intervals''.
Then the random variable <math>(X_t)_{t\geq0}</math> given by
:<math> X_t = \sum^{\infty}_{n=1} \mathbb{I}_{\{J_n \leq t\}}=\sup \left\{\, n: J_n \leq t\, \right\}</math>
is called a '''renewal process'''.
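For illustration, a renewal process is straightforward to simulate by accumulating holding times. The following Python sketch is not part of the formal definition; the uniform holding-time distribution and the function names are arbitrary choices for the example.

<source lang="python">
import random

def renewal_process(t, sample_holding_time):
    """Return X_t = sup{n : J_n <= t} by accumulating holding times S_i."""
    n, jump_time = 0, 0.0
    while True:
        jump_time += sample_holding_time()  # J_{n+1} = J_n + S_{n+1}
        if jump_time > t:
            return n
        n += 1

# Example: holding times uniform on (0, 2), so E[S] = 1.
print(renewal_process(1000.0, lambda: random.uniform(0.0, 2.0)))
# Typically prints a value close to 1000, anticipating the limit theorems below.
</source>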
===Interpretation===
One may choose to think of the ''holding time'' <math>S_i</math> as the time for which the machine runs before breaking down for the "<math>i</math>th" time, measured from its previous breakdown. (Note this assumes that the machine is fixed immediately and the clock restarts at once.) Under this interpretation, the ''jump times'' <math>\{ J_n : n \geq 1 \}</math> record the successive times at which the machine breaks down, and the ''renewal process'' <math>X_t</math> records the number of times the machine has had to be repaired by any given time <math>t</math>.
However, it is more helpful to understand the renewal process in its abstract form, since it may be used to model a great number of practical situations of interest which do not relate very closely to the operation of machines.
==Renewal-reward processes==
[[Image:Renewal-reward_process.reetep.png|thumb|350px|right|Sample evolution of a renewal-reward process with ''holding times'' ''S<sub>i</sub>'' , jump times ''J<sub>n</sub>'' and rewards ''W<sub>i</sub>'']]
Let <math>W_1, W_2, \ldots</math> be a sequence of IID random variables (''rewards'') satisfying
:<math>\mathbb{E}|W_i| < \infty.\, </math>
Then the random variable
:<math>Y_t = \sum_{i=1}^{X_t}W_i </math>
is called a '''renewal-reward process'''. Note that unlike the <math>S_i</math>, each <math>W_i</math> may take negative values as well as positive values.
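The same simulation sketch extends to renewal-reward processes by drawing a reward at each jump. Again, the distributional choices below (uniform holding times, Gaussian rewards, which can be negative) are illustrative assumptions only.

<source lang="python">
import random

def renewal_reward(t, sample_holding_time, sample_reward):
    """Return Y_t: the sum of the rewards W_i over all jump times J_i <= t."""
    total, jump_time = 0.0, 0.0
    while True:
        jump_time += sample_holding_time()
        if jump_time > t:
            return total
        total += sample_reward()  # reward W_i collected at the i-th renewal

# Holding times uniform on (0, 2), so E[S] = 1; rewards N(5, 2^2), so E[W] = 5.
y = renewal_reward(1000.0, lambda: random.uniform(0.0, 2.0),
                   lambda: random.gauss(5.0, 2.0))
print(y / 1000.0)  # typically close to E[W]/E[S] = 5
</source>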
===Interpretation===
In the context of the above interpretation of the holding times as the time between successive malfunctions of a machine, the "rewards" <math>W_1,W_2,\ldots</math> (which in this case happen to be negative) may be viewed as the successive repair costs incurred as a result of the successive malfunctions.
An alternative analogy is that we have a magic goose which lays eggs at intervals (holding times) distributed as <math>S_i</math>. Sometimes it lays golden eggs of random weight, and sometimes it lays toxic eggs (also of random weight) which require responsible (and costly) disposal. The "rewards" <math>W_i</math> are the successive (random) financial losses/gains resulting from successive eggs (''i'' = 1,2,3,...) and <math>Y_t</math> records the total financial "reward" at time ''t''.
==Properties of renewal processes and renewal-reward processes==
We define the '''renewal function''':
:<math>m(t) = \mathbb{E}[X_t].\, </math>
===The elementary renewal theorem===
The renewal function satisfies
:<math>\lim_{t \to \infty} \frac{1}{t}m(t) = 1/\mathbb{E}[S_1].</math>
====Proof====
The strong law of large numbers for renewal processes, proved below, tells us that
:<math>\lim_{t \to \infty} \frac {X_t}{t} = \frac{1}{\mathbb{E}[S_1]}.</math>
To prove the elementary renewal theorem, it is sufficient to show that <math>\left\{\frac{X_t}{t}; t \geq 0\right\}</math> is uniformly integrable, since almost-sure convergence together with uniform integrability implies convergence of the expectations.
To do this, consider the truncated renewal process whose holding times are defined by <math>\overline{S_n} = a \mathbb{I}\{S_n > a\}</math>, where <math>a</math> is a point such that <math>0 < F(a) = p < 1</math>; such a point exists for every non-deterministic renewal process. Since <math>\overline{S_n} \leq S_n</math>, this new renewal process <math>\overline{X_t}</math> is an upper bound on <math>X_t</math>, and its renewals can only occur on the lattice <math>\{na ; n \in \mathbb{N} \}</math>. Furthermore, the number of renewals at each lattice point is geometric with parameter <math>p</math>. So we have
:<math>
\begin{align}
\overline{X_t} &\leq \sum_{i=1}^{\lfloor t/a \rfloor + 1} \mathrm{Geometric}(p) \\
\mathbb{E}\left[\,\overline{X_t}^{\,2}\,\right] &\leq C_1 t + C_2 t^2 \\
\mathbb{P}\left(\frac{X_t}{t} > x\right) &\leq \frac{\mathbb{E}\left[X_t^2\right]}{t^2x^2} \leq \frac{\mathbb{E}\left[\,\overline{X_t}^{\,2}\,\right]}{t^2x^2} \leq \frac{C}{x^2}
\end{align}
</math>
for all <math>t \geq 1</math> and a suitable constant <math>C</math>, using Chebyshev's inequality in the last line. In particular <math>\sup_{t \geq 1} \mathbb{E}\left[(X_t/t)^2\right] < \infty</math>, which gives the required uniform integrability.
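The theorem can also be checked by Monte Carlo estimation of <math>m(t)</math>. The following sketch assumes holding times uniform on (0, 2); the sample sizes and time points are arbitrary.

<source lang="python">
import random

def sample_x(t):
    """One sample of X_t for holding times uniform on (0, 2), so E[S] = 1."""
    n, jump = 0, random.uniform(0.0, 2.0)
    while jump <= t:
        n += 1
        jump += random.uniform(0.0, 2.0)
    return n

for t in (10.0, 100.0, 1000.0):
    m_est = sum(sample_x(t) for _ in range(2000)) / 2000.0  # estimate of m(t)
    print(t, m_est / t)  # tends to 1/E[S] = 1 as t grows
</source>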
===The elementary renewal theorem for renewal-reward processes===
We define the '''reward function''':
:<math>g(t) = \mathbb{E}[Y_t].\, </math>
The reward function satisfies
:<math>\lim_{t \to \infty} \frac{1}{t}g(t) = \frac{\mathbb{E}[W_1]}{\mathbb{E}[S_1]}.</math>
===The renewal equation===
The renewal function satisfies
:<math>m(t) = F_S(t) + \int_0^t m(t-s) f_S(s)\, ds </math>
where <math>F_S</math> is the cumulative distribution function of <math>S_1</math> and <math>f_S</math> is the corresponding probability density function.
====Proof of the renewal equation====
:We condition on the first holding time and use the tower property of expectation:
::<math>m(t) = \mathbb{E}[X_t] = \mathbb{E}[\mathbb{E}(X_t \mid S_1)].</math>
:But by the [[Markov property]]
::<math>\mathbb{E}(X_t \mid S_1=s) = \mathbb{I}_{\{t \geq s\}} \left( 1 + \mathbb{E}[X_{t-s}] \right).</math>
:So
::<math>
\begin{align}
m(t) & {} = \mathbb{E}[X_t] \\
& {} = \mathbb{E}[\mathbb{E}(X_t \mid S_1)] \\
& {} = \int_0^\infty \mathbb{E}(X_t \mid S_1=s) f_S(s)\, ds \\
& {} = \int_0^\infty \mathbb{I}_{\{t \geq s\}} \left( 1 + \mathbb{E}[X_{t-s}] \right) f_S(s)\, ds \\
& {} = \int_0^t \left( 1 + m(t-s) \right) f_S(s)\, ds \\
& {} = F_S(t) + \int_0^t m(t-s) f_S(s)\, ds,
\end{align}</math>
:as required.
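In practice the renewal equation can be solved numerically by discretising the convolution integral. The following is a minimal sketch, assuming holding times uniform on (0, 2), so that <math>F_S(s) = s/2</math> and <math>f_S(s) = 1/2</math> on [0, 2]; the grid size is arbitrary.

<source lang="python">
# Solve m(t) = F(t) + integral_0^t m(t - s) f(s) ds on the grid t_k = k*h.
h, T = 0.01, 10.0
steps = int(T / h)
F = lambda s: min(s / 2.0, 1.0)         # CDF of Uniform(0, 2)
f = lambda s: 0.5 if s <= 2.0 else 0.0  # density of Uniform(0, 2)

m = [0.0] * (steps + 1)  # m[k] approximates m(k*h); m(0) = 0
for k in range(1, steps + 1):
    conv = sum(m[k - j] * f(j * h) for j in range(1, k + 1))  # Riemann sum
    m[k] = F(k * h) + h * conv
print(m[steps] / T)  # m(T)/T, approaching 1/E[S] = 1 as T grows
</source>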
===Asymptotic properties===
<math>(X_t)_{t\geq0}</math> and <math>(Y_t)_{t\geq0}</math> satisfy
:<math> \lim_{t \to \infty} \frac{1}{t} X_t = \frac{1}{\mathbb{E}S_1} </math> (strong [[law of large numbers]] for renewal processes)
:<math> \lim_{t \to \infty} \frac{1}{t} Y_t = \frac{1}{\mathbb{E}S_1} \mathbb{E}W_1 </math> (strong law of large numbers for renewal-reward processes)
almost surely.
====Proof====
:First consider <math>(X_t)_{t\geq0}</math>. By definition we have:
::<math>J_{X_t} \leq t \leq J_{X_t+1}</math>
:for all <math>t \geq 0</math> and so
::<math>
\frac{J_{X_t}}{X_t} \leq \frac{t}{X_t} \leq \frac{J_{X_t+1}}{X_t}
</math>
:for all ''t'' ≥ 0.
:Now since <math>0< \mathbb{E}S_i < \infty </math> we have:
::<math>X_t \to \infty</math>
:as <math>t \to \infty</math> [[almost surely]] (with probability 1). Hence:
::<math>\frac{J_{X_t}}{X_t} = \frac{J_n}{n} = \frac{1}{n}\sum_{i=1}^n S_i \to \mathbb{E}S_1 </math>
:almost surely (writing <math>n = X_t</math>, which tends to infinity, and using the strong law of large numbers); similarly:
::<math>\frac{J_{X_t+1}}{X_t} = \frac{J_{X_t+1}}{X_t+1}\frac{X_t+1}{X_t} = \frac{J_{n+1}}{n+1}\frac{n+1}{n} \to \mathbb{E}S_1\cdot 1 </math>
:almost surely.
:Thus (since <math>t/X_t</math> is sandwiched between the two terms)
::<math>
\frac{1}{t} X_t \to \frac{1}{\mathbb{E}S_1}
</math>
:almost surely.
:Next consider <math>(Y_t)_{t\geq0}</math>. We have
::<math>\frac{1}{t}Y_t = \frac{X_t}{t} \frac{1}{X_t} Y_t \to \frac{1}{\mathbb{E}S_1}\cdot\mathbb{E}W_1 </math>
:almost surely (using the first result and applying the strong law of large numbers to the partial sums of the <math>W_i</math>).
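Since these are almost-sure statements, they can be observed along a single simulated path. In the sketch below, the distributional choices (uniform holding times with <math>\mathbb{E}S_1 = 1</math>, Gaussian rewards with <math>\mathbb{E}W_1 = 5</math>) are again arbitrary.

<source lang="python">
import random

# Follow one sample path, printing X_t/t and Y_t/t at a few checkpoints.
checkpoints = [10.0, 100.0, 1000.0, 10000.0]
jump, n, total, idx = 0.0, 0, 0.0, 0
while idx < len(checkpoints):
    s = random.uniform(0.0, 2.0)  # next holding time S, E[S] = 1
    w = random.gauss(5.0, 2.0)    # next reward W, E[W] = 5
    while idx < len(checkpoints) and jump + s > checkpoints[idx]:
        t = checkpoints[idx]        # at time t we have X_t = n and Y_t = total
        print(t, n / t, total / t)  # -> 1/E[S] = 1 and E[W]/E[S] = 5
        idx += 1
    jump += s
    n += 1
    total += w
</source>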
===The inspection paradox===
A curious feature of renewal processes is that if we wait some predetermined time ''t'' and then observe how large the renewal interval containing ''t'' is, we should expect it to be typically larger than a renewal interval of average size.
Mathematically the inspection paradox states: ''for any t > 0 the renewal interval containing t is stochastically larger than the first renewal interval.'' That is, for all ''x'' > 0 and for all ''t'' > 0:
:<math> \mathbb{P}(S_{X_t+1} > x) \geq \mathbb{P}(S_1>x) = 1-F_S(x)</math>
where ''F''<sub>''S''</sub> is the cumulative distribution function of the IID holding times ''S<sub>i</sub>''.
====Proof of the inspection paradox====
[[Image:Inspection_paradox.reetep.png|thumb|350px|The renewal interval determined by the random point ''t'' (shown in red) is stochastically larger than the first renewal interval.]]
Observe that the last jump-time before ''t'' is <math>J_{X_t}</math>; and that the renewal interval containing ''t'' is <math>S_{X_t+1}</math>. Then
:<math>
\begin{align}
\mathbb{P}(S_{X_t+1}>x) & {} = \int_0^\infty \mathbb{P}(S_{X_t+1}>x \mid J_{X_t} = s) \, dF_{J_{X_t}}(s) \\
& {} = \int_0^\infty \mathbb{P}(S_{X_t+1}>x \mid S_{X_t+1}>t-s) \, dF_{J_{X_t}}(s) \\
& {} = \int_0^\infty \frac{\mathbb{P}(S_{X_t+1}>x \, , \, S_{X_t+1}>t-s)}{\mathbb{P}(S_{X_t+1}>t-s)} \, dF_{J_{X_t}}(s) \\
& {} = \int_0^\infty \frac{ 1-F(\max \{ x,t-s \}) }{1-F(t-s)} \, dF_{J_{X_t}}(s) \\
& {} = \int_0^\infty \min \left\{\frac{ 1-F(x) }{1-F(t-s)},\,1\right\} \, dF_{J_{X_t}}(s) \\
& {} \geq 1-F(x) \\
& {} = \mathbb{P}(S_1>x),
\end{align}
</math>
where <math>F_{J_{X_t}}</math> is the distribution function of the last jump time <math>J_{X_t}</math>, and the inequality holds because both terms inside the minimum are at least <math>1-F(x)</math> and <math>F_{J_{X_t}}</math> is a probability distribution, as required.
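The paradox is easy to observe numerically. The sketch below takes holding times uniform on (0, 2) as an arbitrary example and records the length of the renewal interval straddling a fixed time <math>t</math>; for large <math>t</math> its empirical mean is close to the size-biased mean <math>\mathbb{E}[S_1^2]/\mathbb{E}[S_1] = 4/3</math>, noticeably larger than <math>\mathbb{E}[S_1] = 1</math>.

<source lang="python">
import random

def straddling_interval(t):
    """Length S_{X_t + 1} of the renewal interval containing time t."""
    jump = 0.0
    while True:
        s = random.uniform(0.0, 2.0)  # holding time, E[S] = 1
        if jump + s > t:              # this interval straddles t
            return s
        jump += s

samples = [straddling_interval(50.0) for _ in range(20000)]
print(sum(samples) / len(samples))  # about 4/3, larger than E[S] = 1
</source>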
==Example applications==
===Example 1 - use of the strong law of large numbers===
Eric the entrepreneur has ''n'' machines, each having an operational lifetime uniformly distributed between zero and two years. Eric may let each machine run until it fails, with a replacement cost of €2600; alternatively he may replace a machine at any time while it is still functional, at a cost of €200.
What is his optimal replacement policy?
====Solution====
We may model the lifetime of the ''n'' machines as ''n'' independent concurrent renewal-reward processes, so it is sufficient to consider the case ''n=1''. Denote this process by <math>(Y_t)_{t \geq 0}</math>. The successive lifetimes ''S'' of the replacement machines are independent and identically distributed, so the optimal policy is the same for all replacement machines in the process.
If Eric decides at the start of a machine's life to replace it at time 0 < ''t'' < 2, but the machine happens to fail before that time, then the lifetime ''S'' of the machine is uniformly distributed on [0, ''t''] and thus has expectation 0.5''t''. So the overall expected lifetime of the machine is:
:<math> \mathbb{E}S = \mathbb{E}[S \mid \mbox{fails before } t] \cdot \mathbb{P}[\mbox{fails before } t] + \mathbb{E}[S \mid \mbox{does not fail before } t] \cdot \mathbb{P}[\mbox{does not fail before } t]</math>
:<math> = \frac{t}{2}\left(0.5t\right) + \frac{2-t}{2}\left( t \right) = t - \frac{t^2}{4}</math>
and the expected cost ''W'' per machine is:
:<math> \mathbb{E}W = \mathbb{E}(W \mid \mbox{fails before } t) \cdot \mathbb{P}(\mbox{fails before } t) + \mathbb{E}(W \mid \mbox{does not fail before } t) \cdot \mathbb{P}(\mbox{does not fail before } t)</math>
:<math> = \frac{t}{2}( 2600 ) + \frac{2-t}{2} ( 200 ) = 1200t + 200.\,</math>
So by the strong law of large numbers, his long-term average cost per unit time is:
:<math>
\lim_{t \to \infty} \frac{1}{t} Y_t = \frac{\mathbb{E}W}{\mathbb{E}S}
= \frac{ 4(1200t + 200) }{ 4t - t^2 }.
</math>
Differentiating with respect to ''t'':
:<math>
\frac{\partial}{\partial t} \frac{ 4(1200t + 200) }{ 4t - t^2 } = 4\frac{ (4t - t^2)(1200) - (4 - 2t)(1200t + 200) }{ (4t - t^2)^2 },
</math>
this implies that the turning points satisfy:
:<math>
\begin{align}
0 & = (4t - t^2)(1200) - (4 - 2t)(1200t + 200) \\
& = 4800t - 1200t^2 - 4800t - 800 + 2400t^2 + 400t \\
& = 1200t^2 + 400t - 800,
\end{align}
</math>
and thus, dividing by 400,
:<math>
0 = 3t^2 + t - 2 = (3t -2)(t+1).
</math>
We take the only solution ''t'' in [0, 2]: ''t'' = 2/3. This is indeed a minimum (and not a maximum), since the cost per unit time tends to infinity as ''t'' tends to zero and ''t'' = 2/3 is the only turning point in (0, 2): the cost decreases as ''t'' increases up to 2/3 and increases thereafter. So Eric should replace each machine at age 2/3 of a year, for a long-term average cost of 4(1200·(2/3) + 200)/(4·(2/3) − (2/3)²) = €1800 per year.
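As a purely illustrative sanity check on this calculation, the long-run cost rate can also be minimised numerically over a grid of candidate replacement times:

<source lang="python">
# Long-run average cost rate c(t) = E[W]/E[S] = (1200*t + 200) / (t - t**2/4).
def cost_rate(t):
    return (1200.0 * t + 200.0) / (t - t * t / 4.0)

best_cost, best_t = min((cost_rate(k / 1000.0), k / 1000.0)
                        for k in range(1, 2001))  # grid over (0, 2]
print(best_t, best_cost)  # approximately t = 2/3, at 1800 euros per year
</source>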
==See also==
*[[Poisson process]]
*[[Continuous-time Markov process]]
*[[Semi-Markov process]]
*[[Queueing theory]]
*[[Ruin theory]]
*[[Campbell's theorem]]
*[[Little's lemma]]
[[Category:Probability theory]]
[[Category:Stochastic processes]]
[[Category:Point processes]]
[[de:Erneuerungstheorie]]
[[ru:Основное уравнение дискретной теории восстановления]]