$\|X + Y\|_p^p \le \|X\|_p^p + \|Y\|_p^p$ for $X, Y \in L_p(\Omega, \mathfrak{F}, P)$. This completes the proof that $\|\cdot\|_p$ is a quasinorm. The fact that $\rho_p$ is a metric follows immediately from the fact that $\|\cdot\|_p$ is a quasinorm. The completeness of $L_p(\Omega, \mathfrak{F}, P)$ with respect to $\rho_p$ is proved in the same way as for $p \ge 1$. ■
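As an illustrative aside, the quasinorm inequality for $p \in (0,1)$ can be checked numerically. The Python sketch below (not part of the formal development; the ten-point uniform space and the sampling ranges are arbitrary illustrative choices) evaluates $\|X\|_p^p = \mathrm{E}|X|^p$ and verifies the subadditivity $\|X+Y\|_p^p \le \|X\|_p^p + \|Y\|_p^p$ for randomly sampled pairs:

```python
import random

# Finite probability space: outcomes 0..9 with uniform weight 1/10.
OUTCOMES = range(10)
PROB = 0.1

def p_norm_p(X, p):
    """Return ||X||_p^p = E|X|^p on the finite space (up to float error)."""
    return sum(abs(X[w]) ** p * PROB for w in OUTCOMES)

random.seed(0)
p = 0.5  # any p in (0, 1)
for _ in range(1000):
    X = {w: random.uniform(-5, 5) for w in OUTCOMES}
    Y = {w: random.uniform(-5, 5) for w in OUTCOMES}
    XplusY = {w: X[w] + Y[w] for w in OUTCOMES}
    # Subadditivity of ||.||_p^p, i.e. the quasinorm triangle inequality,
    # which follows from the pointwise bound |a+b|^p <= |a|^p + |b|^p.
    assert p_norm_p(XplusY, p) <= p_norm_p(X, p) + p_norm_p(Y, p) + 1e-12
print("quasinorm triangle inequality holds on all sampled pairs")
```

The pointwise inequality $|a+b|^p \le |a|^p + |b|^p$ for $p \in (0,1)$ makes the assertion hold identically, not merely for the samples drawn here.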
CHAPTER 1. STOCHASTIC PROCESSES

§4. CONVERGENCE IN $L_p$ AND UNIFORM INTEGRABILITY

Lemma 4.3. Let $X_n \in L_p(\Omega, \mathfrak{F}, P)$, $n \in \mathbb{N}$, and $X \in L_p(\Omega, \mathfrak{F}, P)$ where $p \in (0, \infty)$. If $\lim_{n\to\infty} \|X_n\|_p = \|X\|_p$ and $\lim_{n\to\infty} X_n = X$ a.s., then $\lim_{n\to\infty} \|X_n - X\|_p = 0$.

Proof. Note that from $|\alpha + \beta|^p \le 2^p\{|\alpha|^p + |\beta|^p\}$ for $\alpha, \beta \in \mathbb{R}$ and $p \in (0, \infty)$, we have
$$2^p\{|X_n|^p + |X|^p\} - |X_n - X|^p \ge 0.$$
By the a.s. convergence of $X_n$ to $X$ we have
$$\lim_{n\to\infty}\{2^p(|X_n|^p + |X|^p) - |X_n - X|^p\} = 2^{p+1}|X|^p \quad \text{a.s.}$$
By Fatou's Lemma and the convergence of $\|X_n\|_p$ to $\|X\|_p$, we have
$$\int_\Omega 2^{p+1}|X|^p \, dP \le \liminf_{n\to\infty} \int_\Omega \{2^p(|X_n|^p + |X|^p) - |X_n - X|^p\}\, dP = 2^{p+1}\int_\Omega |X|^p\, dP + \liminf_{n\to\infty} \int_\Omega (-|X_n - X|^p)\, dP = 2^{p+1}\int_\Omega |X|^p\, dP - \limsup_{n\to\infty} \int_\Omega |X_n - X|^p\, dP,$$
and thus $\limsup_{n\to\infty} \int_\Omega |X_n - X|^p \, dP \le 0$. On the other hand, from the nonnegativity of the integrands the limit inferior is nonnegative. Hence $\lim_{n\to\infty} \int_\Omega |X_n - X|^p\, dP = 0$. ■

Notation. For extended real valued random variables $X_n$, $n \in \mathbb{N}$, and $X$ on a probability space $(\Omega, \mathfrak{F}, P)$ we write $P\text{-}\lim_{n\to\infty} X_n = X$ if $X_n$ converges to $X$ in probability, that is,
$$\lim_{n\to\infty} P\{|X_n - X| > \varepsilon\} = 0 \quad \text{for every } \varepsilon > 0.$$
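Lemma 4.3 can be illustrated numerically. In the Python sketch below (an illustrative construction; the value $p = 1/2$, the ten-point uniform space, and the particular sequence are all arbitrary choices), $X_n \to X$ pointwise, hence a.s., and $\mathrm{E}|X_n|^p \to \mathrm{E}|X|^p$, while $\mathrm{E}|X_n - X|^p$ vanishes as the lemma asserts:

```python
import math

# Finite uniform probability space on ten points; p in (0, 1).
OUTCOMES = range(10)
PROB = 0.1
p = 0.5

def norm_pp(X):
    """E|X|^p, i.e. ||X||_p^p on the finite space."""
    return sum(abs(X[w]) ** p * PROB for w in OUTCOMES)

X = {w: math.sin(w + 1) for w in OUTCOMES}  # the limit variable

def X_n(n):
    """A sequence converging to X pointwise (hence a.s.)."""
    return {w: X[w] + (-1) ** w / n for w in OUTCOMES}

for n in (1, 10, 100, 1000):
    diff = {w: X_n(n)[w] - X[w] for w in OUTCOMES}
    print(n, round(norm_pp(X_n(n)), 4), round(norm_pp(diff), 4))
# norm_pp(X_n(n)) approaches norm_pp(X), and norm_pp(diff) = (1/n)^p -> 0.
```

Here $\mathrm{E}|X_n - X|^p = n^{-1/2}$ exactly, so the decay predicted by the lemma is visible directly in the printed third column.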
Theorem 4.4. For $p \in (0, \infty)$, let $X_n \in L_p(\Omega, \mathfrak{F}, P)$, $n \in \mathbb{N}$, and $X \in L_p(\Omega, \mathfrak{F}, P)$. Then $\lim_{n\to\infty} \|X_n - X\|_p = 0$ if and only if we have both $\lim_{n\to\infty} \|X_n\|_p = \|X\|_p$ and $P\text{-}\lim_{n\to\infty} X_n = X$.

Proof. When $p \in [1, \infty)$, let $\rho_p(X, Y) = \|X - Y\|_p$ and when $p \in (0, 1)$ let $\rho_p(X, Y) = \|X - Y\|_p^p$ for $X, Y \in L_p(\Omega, \mathfrak{F}, P)$. In either case, $\rho_p$ is a metric on $L_p(\Omega, \mathfrak{F}, P)$ and $\lim_{n\to\infty} \|X_n - X\|_p = 0$ if and only if $\lim_{n\to\infty} \rho_p(X_n, X) = 0$.

1) Suppose $\lim_{n\to\infty} \rho_p(X_n, X) = 0$. By the triangle inequality for the metric $\rho_p$ we have $\rho_p(X_n, 0) \le \rho_p(X_n, X) + \rho_p(X, 0)$ so that $\rho_p(X_n, 0) - \rho_p(X, 0) \le \rho_p(X_n, X)$. Interchanging the roles of $X_n$ and $X$ we have $\rho_p(X, 0) - \rho_p(X_n, 0) \le \rho_p(X_n, X)$ and therefore $|\rho_p(X_n, 0) - \rho_p(X, 0)| \le \rho_p(X_n, X)$. Thus $\lim_{n\to\infty} \rho_p(X_n, X) = 0$ implies $\lim_{n\to\infty} \rho_p(X_n, 0) = \rho_p(X, 0)$, that is, $\lim_{n\to\infty} \|X_n\|_p = \|X\|_p$. To prove $P\text{-}\lim_{n\to\infty} X_n = X$, note that for any $\varepsilon > 0$ we have by the Markov Inequality
$$P\{|X_n - X| > \varepsilon\} \le \frac{\mathrm{E}(|X_n - X|^p)}{\varepsilon^p} = \frac{\|X_n - X\|_p^p}{\varepsilon^p}.$$
Since $\lim_{n\to\infty} \rho_p(X_n, X) = 0$ we have $\lim_{n\to\infty} \|X_n - X\|_p^p = 0$ and therefore $\lim_{n\to\infty} P\{|X_n - X| > \varepsilon\} = 0$, that is, $P\text{-}\lim_{n\to\infty} X_n = X$.

2) Conversely assume $\lim_{n\to\infty} \|X_n\|_p = \|X\|_p$ and $P\text{-}\lim_{n\to\infty} X_n = X$. Consider an arbitrary subsequence $\{X_{n_k} : k \in \mathbb{N}\}$ of $\{X_n : n \in \mathbb{N}\}$. Since $P\text{-}\lim_{n\to\infty} X_n = X$ we have $P\text{-}\lim_{k\to\infty} X_{n_k} = X$ and therefore there exists a subsequence $\{X_{n_{k_\ell}} : \ell \in \mathbb{N}\}$ such that $\lim_{\ell\to\infty} X_{n_{k_\ell}} = X$ a.s. Then by Lemma 4.3 we have $\lim_{\ell\to\infty} \|X_{n_{k_\ell}} - X\|_p = 0$, that is, $\lim_{\ell\to\infty} \rho_p(X_{n_{k_\ell}}, X) = 0$. Thus we have shown that an arbitrary subsequence $\{X_{n_k} : k \in \mathbb{N}\}$ of $\{X_n : n \in \mathbb{N}\}$ always has a subsequence $\{X_{n_{k_\ell}} : \ell \in \mathbb{N}\}$ such that $\lim_{\ell\to\infty} \rho_p(X_{n_{k_\ell}}, X) = 0$. This implies that $\lim_{n\to\infty} \rho_p(X_n, X) = 0$. To show this, suppose that $\lim_{n\to\infty} \rho_p(X_n, X) = 0$ does not hold. Then there exists $\varepsilon_0 > 0$ such that $\rho_p(X_n, X) > \varepsilon_0$ for infinitely many $n \in \mathbb{N}$ and therefore we can select a subsequence $\{n_k\}$ of $\{n\}$ such that $\rho_p(X_{n_k}, X) > \varepsilon_0$ for all $k \in \mathbb{N}$. Then no subsequence $\{X_{n_{k_\ell}} : \ell \in \mathbb{N}\}$ of $\{X_{n_k} : k \in \mathbb{N}\}$ has the property that $\lim_{\ell\to\infty} \rho_p(X_{n_{k_\ell}}, X) = 0$. This is a contradiction. ■
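The key step in part 1) of the proof is the Markov Inequality $P\{|X_n - X| > \varepsilon\} \le \varepsilon^{-p}\,\mathrm{E}(|X_n - X|^p)$. The Python sketch below (an illustrative check; the twenty-point uniform space, $p = 2$, and $\varepsilon = 0.3$ are arbitrary choices) verifies the bound on randomly generated pairs:

```python
import random

# A twenty-point uniform probability space; p and eps are illustrative.
random.seed(1)
OUTCOMES = range(20)
PROB = 1 / 20
p, eps = 2.0, 0.3

def E(f):
    """Expectation of f on the finite uniform space."""
    return sum(f(w) * PROB for w in OUTCOMES)

for _ in range(500):
    X = {w: random.gauss(0, 1) for w in OUTCOMES}
    Y = {w: random.gauss(0, 1) for w in OUTCOMES}
    prob_exceed = E(lambda w: 1.0 if abs(X[w] - Y[w]) > eps else 0.0)
    markov_bound = E(lambda w: abs(X[w] - Y[w]) ** p) / eps ** p
    # P{|X - Y| > eps} <= E|X - Y|^p / eps^p  (Markov/Chebyshev)
    assert prob_exceed <= markov_bound + 1e-12
print("Markov bound verified on all sampled pairs")
```

The bound holds identically, since on the event $\{|X - Y| > \varepsilon\}$ the integrand $|X - Y|^p/\varepsilon^p$ exceeds $1$.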
[II] Uniformly Integrable Systems of Random Variables

In order to relate the uniform integrability of a system of random variables to the integrability of a single random variable $X$, let us give a condition equivalent to the latter in terms of the tail integrals of $|X|$ as $\lambda \to \infty$.

Lemma 4.5. An extended real valued random variable $X$ on a probability space $(\Omega, \mathfrak{F}, P)$ is integrable if and only if $\int_{\{|X| > \lambda\}} |X| \, dP \downarrow 0$ as $\lambda \to \infty$.

Proof. 1) Suppose $X$ is integrable. Then $|X| < \infty$ a.e. on $(\Omega, \mathfrak{F}, P)$. If we let $A_n = \{|X| > n\}$ for $n \in \mathbb{N}$, then $A_n \downarrow$ and $\lim_{n\to\infty} A_n = \bigcap_{n\in\mathbb{N}} A_n = \{|X| = \infty\}$ so that $\mathbf{1}_{A_n} \downarrow \mathbf{1}_{\{|X| = \infty\}}$. Since $|\mathbf{1}_{A_n} X| \le |X|$ and $|X|$ is integrable we have by the Dominated Convergence Theorem
$$\lim_{n\to\infty} \int_{A_n} |X| \, dP = \lim_{n\to\infty} \int_\Omega \mathbf{1}_{A_n} |X| \, dP = \int_\Omega \lim_{n\to\infty} \mathbf{1}_{A_n} |X| \, dP = \int_\Omega \mathbf{1}_{\{|X| = \infty\}} |X| \, dP = 0.$$
Since $\{|X| > \lambda\} \subset \{|X| > [\lambda]\} = A_{[\lambda]}$ where $[\lambda]$ is the greatest nonnegative integer not exceeding $\lambda$, the last equality implies $\int_{\{|X| > \lambda\}} |X| \, dP \downarrow 0$ as $\lambda \to \infty$.

2) Conversely suppose $\int_{\{|X| > \lambda\}} |X| \, dP \downarrow 0$ as $\lambda \to \infty$. Then for every $\varepsilon > 0$ there exists $\lambda > 0$ such that $\int_{\{|X| > \lambda\}} |X| \, dP < \varepsilon$. With such $\lambda > 0$, we have
$$\int_\Omega |X| \, dP = \int_{\{|X| \le \lambda\}} |X| \, dP + \int_{\{|X| > \lambda\}} |X| \, dP \le \lambda + \varepsilon < \infty,$$
that is, $X$ is integrable. ■

Definition 4.6. A system of extended real valued random variables $\{X_\alpha : \alpha \in A\}$ on a probability space $(\Omega, \mathfrak{F}, P)$ is said to be uniformly integrable if
(1) $\sup_{\alpha \in A} \int_{\{|X_\alpha| > \lambda\}} |X_\alpha| \, dP \downarrow 0$ as $\lambda \to \infty$,
or equivalently, for every $\varepsilon > 0$ there exists $\lambda > 0$ such that
(2) $\sup_{\alpha \in A} \int_{\{|X_\alpha| > \lambda\}} |X_\alpha| \, dP < \varepsilon$,
or equivalently, for every $\varepsilon > 0$ there exists $\lambda > 0$ such that
(3) $\int_{\{|X_\alpha| > \lambda\}} |X_\alpha| \, dP < \varepsilon$ for all $\alpha \in A$.
We say that $\{X_\alpha : \alpha \in A\}$ is $p$th-order uniformly integrable if $\{|X_\alpha|^p : \alpha \in A\}$ is uniformly integrable for some $p \in (0, \infty)$.

Thus, the uniform integrability of $\{X_\alpha : \alpha \in A\}$ implies by Lemma 4.5 the integrability of $X_\alpha$ for every $\alpha \in A$. The integrability of $X_\alpha$ then implies that for every $\varepsilon > 0$ there exists $\delta > 0$ such that $\int_E |X_\alpha| \, dP < \varepsilon$ for every $E \in \mathfrak{F}$ with $P(E) < \delta$.

Theorem 4.7. A system of extended real valued random variables $\{X_\alpha : \alpha \in A\}$ on a probability space $(\Omega, \mathfrak{F}, P)$ is uniformly integrable if and only if
1° $\sup_{\alpha \in A} \mathrm{E}(|X_\alpha|) < \infty$,
2° for every $\varepsilon > 0$ there exists $\delta > 0$ such that $\int_E |X_\alpha| \, dP < \varepsilon$ for all $\alpha \in A$ whenever $E \in \mathfrak{F}$ and $P(E) < \delta$.

Proof. 1) Suppose $\{X_\alpha : \alpha \in A\}$ is uniformly integrable. Then by (3) of Definition 4.6 for every $\varepsilon > 0$ there exists $\lambda > 0$ such that
$$\mathrm{E}(|X_\alpha|) = \int_{\{|X_\alpha| \le \lambda\}} |X_\alpha| \, dP + \int_{\{|X_\alpha| > \lambda\}} |X_\alpha| \, dP \le \lambda + \varepsilon \quad \text{for all } \alpha \in A,$$
and therefore 1° is satisfied. To verify 2°, note that by (3) of Definition 4.6 again for every $\varepsilon > 0$ there exists $\lambda > 0$ such that $\int_{\{|X_\alpha| > \lambda\}} |X_\alpha| \, dP < \varepsilon/2$ for all $\alpha \in A$. Then for every $E \in \mathfrak{F}$ we have
$$\int_E |X_\alpha| \, dP = \int_{E \cap \{|X_\alpha| > \lambda\}} |X_\alpha| \, dP + \int_{E \cap \{|X_\alpha| \le \lambda\}} |X_\alpha| \, dP \le \frac{\varepsilon}{2} + \lambda P(E).$$
Let $\delta = \varepsilon/2\lambda$. Then for $E \in \mathfrak{F}$ with $P(E) < \delta$ we have $\int_E |X_\alpha| \, dP < \varepsilon$, verifying 2°.

2) Conversely suppose $\{X_\alpha : \alpha \in A\}$ satisfies 1° and 2°. Now for any $\lambda > 0$,
$$P\{|X_\alpha| > \lambda\} \le \frac{\mathrm{E}(|X_\alpha|)}{\lambda} \le \frac{M}{\lambda} \quad \text{for all } \alpha \in A,$$
where $M = \sup_{\alpha \in A} \mathrm{E}(|X_\alpha|) < \infty$ by 1°. Let $\varepsilon > 0$ be arbitrarily given and let $\delta > 0$ be as in 2°. Let $\lambda > 0$ be so large that $M\lambda^{-1} < \delta$. Then by 2°,
$$\int_{\{|X_\alpha| > \lambda\}} |X_\alpha| \, dP < \varepsilon \quad \text{for all } \alpha \in A,$$
verifying (3) of Definition 4.6. Thus $\{X_\alpha : \alpha \in A\}$ is uniformly integrable. ■

The following is an example of a system of random variables which satisfies 2° but not 1° of Theorem 4.7.

Example 4.8. Consider the probability space $([0,1], \mathfrak{B}_{[0,1]}, P)$ where $P$ is the unit mass concentrated at $\{0\}$, that is, $P$ is a probability measure satisfying the condition $P(\{0\}) = 1$. Consider the system of random variables $\{X_n : n \in \mathbb{N}\}$ where $X_n$ is defined by $X_n(\omega) = n$ for all $\omega \in \Omega$, $n \in \mathbb{N}$. Let $\varepsilon > 0$ be arbitrarily given. For any $\delta \in (0,1)$, if $E \in \mathfrak{B}_{[0,1]}$ and $P(E) < \delta$ then $0 \notin E$ so that $E \subset (0,1]$ and thus $P(E) = 0$. Therefore $\int_E |X_n| \, dP = 0 < \varepsilon$ for all $n \in \mathbb{N}$ whenever $E \in \mathfrak{B}_{[0,1]}$ and $P(E) < \delta$, so that 2° is satisfied. On the other hand $\mathrm{E}(|X_n|) = n$ for $n \in \mathbb{N}$, so that $\sup_{n\in\mathbb{N}} \mathrm{E}(|X_n|) = \infty$ and 1° is not satisfied.

Proposition 4.9. 1) If $\{X_\alpha : \alpha \in A\}$ is a uniformly integrable system of random variables on a probability space $(\Omega, \mathfrak{F}, P)$ and $\{c_\alpha : \alpha \in A\}$ is a bounded system of real numbers, then $\{c_\alpha X_\alpha : \alpha \in A\}$ is uniformly integrable.
2) If $\{X_\alpha : \alpha \in A\}$ is uniformly integrable and $Y \in L_\infty(\Omega, \mathfrak{F}, P)$, then $\{X_\alpha Y : \alpha \in A\}$ is uniformly integrable.
3) If $\{X_\alpha : \alpha \in A\}$ and $\{Y_\alpha : \alpha \in A\}$ are uniformly integrable systems then so is $\{X_\alpha + Y_\alpha : \alpha \in A\}$.

Proof. 1) Suppose $|c_\alpha| \le M$ for $\alpha \in A$ with some $M > 0$. Then
$$\int_{\{|c_\alpha X_\alpha| > \lambda\}} |c_\alpha X_\alpha| \, dP \le \int_{\{M|X_\alpha| > \lambda\}} M|X_\alpha| \, dP$$
so that
$$\lim_{\lambda\to\infty} \sup_{\alpha\in A} \int_{\{|c_\alpha X_\alpha| > \lambda\}} |c_\alpha X_\alpha| \, dP \le M \lim_{\lambda\to\infty} \sup_{\alpha\in A} \int_{\{|X_\alpha| > \lambda/M\}} |X_\alpha| \, dP = 0,$$
that is, $\{c_\alpha X_\alpha : \alpha \in A\}$ satisfies (1) of Definition 4.6.

2) The uniform integrability of $\{X_\alpha : \alpha \in A\}$ implies according to Theorem 4.7
$$\sup_{\alpha\in A} \mathrm{E}[|X_\alpha Y|] \le \|Y\|_\infty \sup_{\alpha\in A} \mathrm{E}[|X_\alpha|] < \infty.$$
When $\|Y\|_\infty = 0$, the assertion is trivially true. Thus assume $\|Y\|_\infty > 0$. Then for every $\varepsilon > 0$ there exists $\delta > 0$ such that $\int_E |X_\alpha| \, dP < \varepsilon/\|Y\|_\infty$ for all $\alpha \in A$ whenever $E \in \mathfrak{F}$ and $P(E) < \delta$. Then for $E \in \mathfrak{F}$ with $P(E) < \delta$ we have
$$\int_E |X_\alpha Y| \, dP \le \|Y\|_\infty \int_E |X_\alpha| \, dP < \varepsilon.$$
Thus by Theorem 4.7 the system $\{X_\alpha Y : \alpha \in A\}$ is uniformly integrable.

3) If $\{X_\alpha : \alpha \in A\}$ and $\{Y_\alpha : \alpha \in A\}$ are uniformly integrable systems, then each one of the two satisfies 1° and 2° of Theorem 4.7. From this it follows immediately that $\{X_\alpha + Y_\alpha : \alpha \in A\}$ satisfies the same conditions 1° and 2° and is therefore uniformly integrable. ■

Proposition 4.10. A finite system $\{X_n : n = 1, \ldots, N\}$ of extended real valued integrable random variables on a probability space is always uniformly integrable.

Proof. If $X_n$ is integrable then it satisfies the condition in Lemma 4.5, that is,
$$\lim_{\lambda\to\infty} \int_{\{|X_n| > \lambda\}} |X_n| \, dP = 0.$$
Then
$$\lim_{\lambda\to\infty} \sup_{n=1,\ldots,N} \int_{\{|X_n| > \lambda\}} |X_n| \, dP \le \lim_{\lambda\to\infty} \sum_{n=1}^N \int_{\{|X_n| > \lambda\}} |X_n| \, dP = 0.$$
Thus $\{X_n : n = 1, \ldots, N\}$ satisfies (1) of Definition 4.6 and is therefore uniformly integrable. ■

As immediate consequences of Definition 4.6 we have the following statements.

Proposition 4.11. 1) If $\{X_\alpha : \alpha \in A\}$ is uniformly integrable then so is any subsystem of $\{X_\alpha : \alpha \in A\}$.
2) $\{X_\alpha : \alpha \in A\}$ is uniformly integrable if and only if $\{|X_\alpha| : \alpha \in A\}$ is.
3) Let $Y_\alpha = X_\alpha$ or $-X_\alpha$ for each $\alpha \in A$. Then $\{X_\alpha : \alpha \in A\}$ is uniformly integrable if and only if $\{Y_\alpha : \alpha \in A\}$ is.
4) If $|X_\alpha| \le |Y_\alpha|$ for every $\alpha \in A$ and $\{Y_\alpha : \alpha \in A\}$ is uniformly integrable, then so is $\{X_\alpha : \alpha \in A\}$. In particular if $|X_\alpha| \le Y$ for every $\alpha \in A$ for some integrable random variable $Y$ then $\{X_\alpha : \alpha \in A\}$ is uniformly integrable.
5) $\{X_\alpha : \alpha \in A\}$ is uniformly integrable if and only if both $\{X_\alpha^+ : \alpha \in A\}$ and $\{X_\alpha^- : \alpha \in A\}$ are.
6) If $\{X_\alpha : \alpha \in A\}$ and $\{Y_\beta : \beta \in B\}$ are uniformly integrable systems then so is $\{X_\alpha, Y_\beta : \alpha \in A, \beta \in B\}$.

The following theorem compares different orders of uniform integrability.

Theorem 4.12. Let $\{X_\alpha : \alpha \in A\}$ be a system of extended real valued random variables on a probability space $(\Omega, \mathfrak{F}, P)$. If $\sup_{\alpha\in A} \|X_\alpha\|_{p_0} < \infty$ for some $p_0 \in (0, \infty)$, then $\{X_\alpha : \alpha \in A\}$ is $p$th-order uniformly integrable, that is, $\{|X_\alpha|^p : \alpha \in A\}$ is uniformly integrable, for every $p \in (0, p_0)$.

Proof. Let $p \in (0, p_0)$. Then for $0 < \eta \le \xi$ we have $\xi^p = \xi^{p-p_0}\xi^{p_0} \le \eta^{p-p_0}\xi^{p_0}$, so that
$$\int_{\{|X_\alpha| > \eta\}} |X_\alpha|^p \, dP \le \eta^{p-p_0} \int_\Omega |X_\alpha|^{p_0} \, dP \le \eta^{p-p_0} \|X_\alpha\|_{p_0}^{p_0}$$
and therefore
$$\lim_{\eta\to\infty} \sup_{\alpha\in A} \int_{\{|X_\alpha| > \eta\}} |X_\alpha|^p \, dP \le \lim_{\eta\to\infty} \eta^{p-p_0} \sup_{\alpha\in A} \|X_\alpha\|_{p_0}^{p_0} = 0$$
by the fact that $\sup_{\alpha\in A} \|X_\alpha\|_{p_0} < \infty$ and $p - p_0 < 0$. Therefore, writing $\lambda$ for $\eta^p$, we have
$$\lim_{\lambda\to\infty} \sup_{\alpha\in A} \int_{\{|X_\alpha|^p > \lambda\}} |X_\alpha|^p \, dP = 0.$$
This verifies (1) of Definition 4.6 and proves the uniform integrability of $\{|X_\alpha|^p : \alpha \in A\}$ for $p \in (0, p_0)$. ■

Note that on account of Theorem 4.7, Theorem 4.12 implies that if $\{X_\alpha : \alpha \in A\}$ is $p_0$th-order uniformly integrable, then it is $p$th-order uniformly integrable for every $p \in (0, p_0)$.

Theorem 4.13. Let $\{X_\alpha : \alpha \in A\}$ and $\{Y_\alpha : \alpha \in A\}$ be systems of extended real valued random variables on a probability space $(\Omega, \mathfrak{F}, P)$. If $\{X_\alpha : \alpha \in A\}$ and $\{Y_\alpha : \alpha \in A\}$
are respectively $p$th-order and $q$th-order uniformly integrable for some $p, q \in (1, \infty)$ such that $1/p + 1/q = 1$, then $\{X_\alpha Y_\alpha : \alpha \in A\}$ is uniformly integrable.

Proof. If $\{|X_\alpha|^p : \alpha \in A\}$ and $\{|Y_\alpha|^q : \alpha \in A\}$ are uniformly integrable, then by Theorem 4.7
(1) $\sup_{\alpha\in A} \|X_\alpha\|_p < \infty$ and $\sup_{\alpha\in A} \|Y_\alpha\|_q < \infty$,
and for every $\varepsilon > 0$ there exists $\delta > 0$ such that
(2) $\int_E |X_\alpha|^p \, dP < \varepsilon$ and $\int_E |Y_\alpha|^q \, dP < \varepsilon$ for all $\alpha \in A$ whenever $E \in \mathfrak{F}$ and $P(E) < \delta$.
Now for every $E \in \mathfrak{F}$ we have by the Hölder Inequality
(3) $\int_E |X_\alpha Y_\alpha| \, dP \le \big\{\int_E |X_\alpha|^p \, dP\big\}^{1/p} \big\{\int_E |Y_\alpha|^q \, dP\big\}^{1/q}$.
With $E = \Omega$ in (3) we have by (1)
$$\sup_{\alpha\in A} \mathrm{E}(|X_\alpha Y_\alpha|) \le \sup_{\alpha\in A} \{\|X_\alpha\|_p \|Y_\alpha\|_q\} \le \big\{\sup_{\alpha\in A} \|X_\alpha\|_p\big\}\big\{\sup_{\alpha\in A} \|Y_\alpha\|_q\big\} < \infty.$$
Also, by (3) and (2),
$$\int_E |X_\alpha Y_\alpha| \, dP < \varepsilon^{1/p}\varepsilon^{1/q} = \varepsilon \quad \text{for all } \alpha \in A \text{ whenever } E \in \mathfrak{F} \text{ and } P(E) < \delta.$$
Thus $\{X_\alpha Y_\alpha : \alpha \in A\}$ satisfies 1° and 2° of Theorem 4.7 and is therefore uniformly integrable. ■

Let us turn to the role played by uniform integrability in convergence in $L_p$. We shall show that a sequence of random variables converges in $L_p$ if and only if it is $p$th-order uniformly integrable and converges in probability. Toward this end we prepare the following lemma.

Lemma 4.14. Let $X_n \in L_p(\Omega, \mathfrak{F}, P)$, $n \in \mathbb{N}$, where $p \in (0, \infty)$. If $\lim_{n\to\infty} \|X_n\|_p = 0$ then $\{X_n : n \in \mathbb{N}\}$ is $p$th-order uniformly integrable.

Proof. If $\lim_{n\to\infty} \|X_n\|_p = 0$ then for every $\varepsilon > 0$ there exists $N \in \mathbb{N}$ such that
(1) $\int_\Omega |X_n|^p \, dP < \varepsilon$ when $n \ge N + 1$.
Since $\{|X_1|^p, \ldots, |X_N|^p\}$ is a finite system of integrable random variables it is uniformly integrable by Proposition 4.10. Therefore for our $\varepsilon > 0$ there exists $\lambda > 0$ such that
(2) $\int_{\{|X_n|^p > \lambda\}} |X_n|^p \, dP < \varepsilon$ for $n = 1, \ldots, N$.
By (1) and (2) we have $\int_{\{|X_n|^p > \lambda\}} |X_n|^p \, dP < \varepsilon$ for all $n \in \mathbb{N}$ for our $\lambda > 0$. This proves the uniform integrability of $\{|X_n|^p : n \in \mathbb{N}\}$ by (3) of Definition 4.6. ■

In the last lemma we showed that if $\lim_{n\to\infty} \|X_n\|_p = 0$, then the sequence $\{X_n : n \in \mathbb{N}\}$ is $p$th-order uniformly integrable. We observe that if $\lim_{n\to\infty} \|X_n\|_p = c$ where $c$ is an arbitrary real number and $c \ne 0$ then the sequence need not be $p$th-order uniformly integrable. See Example 4.15 below. The difference is that while the convergence $\lim_{n\to\infty} \|X_n\|_p = 0$ is not only the convergence of the numerical sequence $\{\|X_n\|_p : n \in \mathbb{N}\}$ to $0$ but also the convergence of the sequence of random variables in $L_p$ to the identically vanishing random variable, the convergence $\lim_{n\to\infty} \|X_n\|_p = c$ does not imply $\lim_{n\to\infty} \|X_n - c\|_p = 0$. The next theorem shows that if $\lim_{n\to\infty} \|X_n\|_p = \|X\|_p$ for some $X \in L_p(\Omega, \mathfrak{F}, P)$ and if $P\text{-}\lim_{n\to\infty} X_n = X$ then the sequence is $p$th-order uniformly integrable.

Example 4.15. Consider the probability space $((0,1], \mathfrak{B}_{(0,1]}, m_L)$ where $\mathfrak{B}_{(0,1]}$ is the Borel $\sigma$-algebra on $(0,1]$ and $m_L$ is the Lebesgue measure. For each $n \in \mathbb{N}$, let $X_n$ be defined by $X_n(\omega) = n$ for $\omega \in (0, 1/n]$ and $X_n(\omega) = 0$ for $\omega \in (1/n, 1]$. Let $X$ be defined by $X(\omega) = 1$ for $\omega \in (0,1]$. Then $\mathrm{E}(|X_n|) = 1$ for $n \in \mathbb{N}$ and $\mathrm{E}(|X|) = 1$ so that $X_n$ and $X$ are all in $L_1(\Omega, \mathfrak{F}, P)$ with $\lim_{n\to\infty} \mathrm{E}(|X_n|) = \mathrm{E}(|X|)$.

To show that $\{|X_n| : n \in \mathbb{N}\}$ is not uniformly integrable, we show that if $\varepsilon \in (0,1)$ then for any $\delta > 0$ we can always find some $n \in \mathbb{N}$ and some $E \in \mathfrak{F}$ with $P(E) < \delta$ such that $\int_E |X_n| \, dP \ge \varepsilon$. Indeed, if $n \in \mathbb{N}$ is so large that $1/n < \delta$, then $P((0, 1/n]) = 1/n < \delta$ but $\int_{(0,1/n]} |X_n| \, dP = 1 > \varepsilon$.
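The computations in Example 4.15 can be reproduced exactly with rational arithmetic. The Python sketch below (illustrative only) encodes $X_n = n\,\mathbf{1}_{(0,1/n]}$ under Lebesgue measure and confirms that $\mathrm{E}|X_n| = 1$ for all $n$ while the tail integrals $\int_{\{|X_n| > \lambda\}} |X_n| \, dm_L$ do not tend to $0$ uniformly in $n$:

```python
from fractions import Fraction

# X_n = n on (0, 1/n], 0 on (1/n, 1], under Lebesgue measure on (0, 1].
def E_abs_Xn(n):
    """E|X_n| = n * m_L((0, 1/n]) computed exactly."""
    return Fraction(n) * Fraction(1, n)

def tail_integral(n, lam):
    """Integral of |X_n| over {|X_n| > lam}: the whole mass if n > lam."""
    return Fraction(n) * Fraction(1, n) if n > lam else Fraction(0)

assert all(E_abs_Xn(n) == 1 for n in range(1, 100))
for lam in (1, 10, 1000):
    # however large the threshold, some X_n still carries full mass above it
    assert max(tail_integral(n, lam) for n in range(1, 2 * lam + 2)) == 1
print("E|X_n| = 1 for every n, but the tail integrals do not vanish uniformly")
```

The supremum over $n$ of the tail integral equals $1$ for every threshold $\lambda$, which is exactly the failure of (1) of Definition 4.6.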
Theorem 4.16. Let $X_n \in L_p(\Omega, \mathfrak{F}, P)$, $n \in \mathbb{N}$, where $p \in (0, \infty)$. Let $X$ be an extended real valued random variable on $(\Omega, \mathfrak{F}, P)$ and assume that $P\text{-}\lim_{n\to\infty} X_n = X$. Then the following three conditions are equivalent:
(1) $\{X_n : n \in \mathbb{N}\}$ is $p$th-order uniformly integrable.
(2) $X \in L_p(\Omega, \mathfrak{F}, P)$ and $\lim_{n\to\infty} \|X_n - X\|_p = 0$.
(3) $X \in L_p(\Omega, \mathfrak{F}, P)$ and $\lim_{n\to\infty} \|X_n\|_p = \|X\|_p$.

Proof. 1) (1) ⇒ (2). Assume (1), that is, $\{|X_n|^p : n \in \mathbb{N}\}$ is uniformly integrable. Let us show first that $X \in L_p(\Omega, \mathfrak{F}, P)$. Now since $P\text{-}\lim_{n\to\infty} X_n = X$ there exists a subsequence $\{X_{n_k} : k \in \mathbb{N}\}$ such that $\lim_{k\to\infty} X_{n_k} = X$ a.s. and thus $\lim_{k\to\infty} |X_{n_k}|^p = |X|^p$ a.s. Then by Fatou's Lemma
$$\mathrm{E}(|X|^p) \le \liminf_{k\to\infty} \mathrm{E}(|X_{n_k}|^p).$$
Since the uniform integrability of $\{|X_n|^p : n \in \mathbb{N}\}$ implies $\sup_{n\in\mathbb{N}} \mathrm{E}(|X_n|^p) < \infty$ according to 1° of Theorem 4.7, we have $\mathrm{E}(|X|^p) < \infty$ and thus $X \in L_p(\Omega, \mathfrak{F}, P)$. To prove $\lim_{n\to\infty} \|X_n - X\|_p = 0$, note that
$$|X_n - X|^p \le 2^p\{|X_n|^p + |X|^p\}.$$
Now the uniform integrability of $\{|X_n|^p : n \in \mathbb{N}\}$ and the integrability of $|X|^p$ imply according to Proposition 4.9 the uniform integrability of $\{2^p(|X_n|^p + |X|^p) : n \in \mathbb{N}\}$. Thus by 4) of Proposition 4.11, $\{|X_n - X|^p : n \in \mathbb{N}\}$ is uniformly integrable. Then by 2° of Theorem 4.7 for every $\varepsilon > 0$ there exists $\delta > 0$ such that
$$\int_E |X_n - X|^p \, dP < \varepsilon \quad \text{for all } n \in \mathbb{N} \text{ whenever } E \in \mathfrak{F} \text{ and } P(E) < \delta.$$
Now $P\text{-}\lim_{n\to\infty} X_n = X$ implies that for our $\varepsilon > 0$ and $\delta > 0$ there exists $N \in \mathbb{N}$ such that $P\{|X_n - X| > \varepsilon\} < \delta$ for $n \ge N$ and thus $\int_{\{|X_n - X| > \varepsilon\}} |X_n - X|^p \, dP < \varepsilon$ for $n \ge N$ and consequently
$$\|X_n - X\|_p^p = \int_{\{|X_n - X| > \varepsilon\}} |X_n - X|^p \, dP + \int_{\{|X_n - X| \le \varepsilon\}} |X_n - X|^p \, dP < \varepsilon + \varepsilon^p$$
for $n \ge N$. This implies $\limsup_{n\to\infty} \|X_n - X\|_p^p \le \varepsilon + \varepsilon^p$. By the arbitrariness of $\varepsilon > 0$, the limit superior is equal to $0$ and therefore $\lim_{n\to\infty} \|X_n - X\|_p = 0$.

2) (2) ⇒ (1). Assume (2). According to Lemma 4.14, $\lim_{n\to\infty} \|X_n - X\|_p = 0$ implies the uniform integrability of $\{|X_n - X|^p : n \in \mathbb{N}\}$. Note that
$$|X_n|^p = |X_n - X + X|^p \le 2^p\{|X_n - X|^p + |X|^p\}.$$
Since $|X|^p$ is integrable we have the uniform integrability of $\{2^p(|X_n - X|^p + |X|^p) : n \in \mathbb{N}\}$ by Proposition 4.9, and from this follows the uniform integrability of $\{|X_n|^p : n \in \mathbb{N}\}$ by 4) of Proposition 4.11.

3) The equivalence of (2) and (3) is from Theorem 4.4. ■

Corollary 4.17. Let $X_n \in L_1(\Omega, \mathfrak{F}, P)$ and $X_n \ge 0$ a.e. on $(\Omega, \mathfrak{F}, P)$ for $n \in \mathbb{N}$. Let $X$ be an extended real valued random variable on $(\Omega, \mathfrak{F}, P)$ and assume that $P\text{-}\lim_{n\to\infty} X_n = X$.
Then the following two conditions are equivalent:
(1) $\{X_n : n \in \mathbb{N}\}$ is uniformly integrable.
(2) $\mathrm{E}(X) < \infty$ and $\lim_{n\to\infty} \mathrm{E}(X_n) = \mathrm{E}(X)$.
equivalent to E(|X|) < co, that is, X G i i ( Q , 5, P). Then the equivalence of (1) and (2) is implied by Theorem 4.16. ■ Although the substance of the next theorem is contained in Theorem 4.16 and Theorem 4.4, we state it separately because of its simplicity. Theorem 4.18. Let Xn,n G N, and X be in LP(£1,$,P) where p G (0,oo). Then lim \\Xn — X\L = 0 if and only if P lim Xn = X and {Xn : n G N) is pth-order unin—*oo
n—*oo
formly integrable. Proof. If P
lim Xn = X and {Xn : n G N} is pth-order uniformly integrable then
71—*00
lim 11 Xn - X11 j, = 0 by Theorem 4.16. Conversely, if Jirr^ 11 Xn - X \ \ v = 0 then by Theo rem 4.4 we have P lim X„ = X and thus {Xn : n G N} is pth-order uniformly integrable n—*oo
by Theorem 4.16. ■
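Example 4.15 gives a concrete reading of Theorem 4.18: the sequence there converges to $0$ in probability but is not uniformly integrable, and indeed $\|X_n - 0\|_1 = 1$ for all $n$, so $L_1$ convergence fails. Truncation restores uniform integrability and with it $L_1$ convergence. The Python sketch below records both computations in exact rational arithmetic (the truncation level $1$ is an arbitrary illustrative choice):

```python
from fractions import Fraction

# On ((0,1], Lebesgue): X_n = n on (0, 1/n], 0 elsewhere, converges to 0 in
# probability, but E|X_n - 0| = 1 for every n: the sequence is not uniformly
# integrable, and L1 convergence fails. Truncating at level 1 gives
# Y_n = min(X_n, 1), dominated by the constant 1, hence uniformly integrable,
# and E|Y_n - 0| = 1/n -> 0, as Theorem 4.18 predicts.
def E_abs(height, n):
    """E of a variable equal to `height` on (0, 1/n] and 0 elsewhere."""
    return Fraction(height) * Fraction(1, n)

for n in (1, 10, 100, 1000):
    assert E_abs(n, n) == 1               # X_n: L1 distance to 0 stays 1
    assert E_abs(1, n) == Fraction(1, n)  # Y_n: L1 distance to 0 vanishes
print("uniform integrability is exactly what separates the two sequences")
```

Both sequences converge to $0$ in probability; only the uniformly integrable one converges in $L_1$.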
For $p \in (0, \infty)$, $L_p(\Omega, \mathfrak{F}, P)$ is a metric space with the metric derived from the norm $\|\cdot\|_p$ when $p \in [1, \infty)$ and with the metric derived from the quasinorm $\|\cdot\|_p^p$ when $p \in (0, 1)$. We show next that the closure of a $p$th-order uniformly integrable set in $L_p(\Omega, \mathfrak{F}, P)$ with respect to the metric topology is $p$th-order uniformly integrable.

Theorem 4.19. For $p \in (0, \infty)$, let $\mathfrak{H}$ be a subcollection of $L_p(\Omega, \mathfrak{F}, P)$. If $\mathfrak{H}$ is $p$th-order uniformly integrable, then so is its closure $\overline{\mathfrak{H}}$ in the metric topology of $L_p(\Omega, \mathfrak{F}, P)$.

Proof. According to Theorem 4.7, the $p$th-order uniform integrability of $\mathfrak{H}$ implies that
(1) $M = \sup_{X\in\mathfrak{H}} \mathrm{E}(|X|^p) < \infty$
and for every $\varepsilon > 0$ there exists $\delta > 0$ such that
(2) $\int_E |X|^p \, dP < \varepsilon$ for all $X \in \mathfrak{H}$ whenever $E \in \mathfrak{F}$ and $P(E) < \delta$.
If $Y \in \overline{\mathfrak{H}}$ then there exists a sequence $\{X_n : n \in \mathbb{N}\}$ in $\mathfrak{H}$ such that $\lim_{n\to\infty} \|X_n - Y\|_p = 0$. For $p \in [1, \infty)$ the triangle inequality of the norm $\|\cdot\|_p$ implies $|\|\xi\|_p - \|\eta\|_p| \le \|\xi - \eta\|_p$. Similarly for $p \in (0, 1)$ the triangle inequality of the quasinorm $\|\cdot\|_p^p$ implies the inequality $|\|\xi\|_p^p - \|\eta\|_p^p| \le \|\xi - \eta\|_p^p$. Thus for every $p \in (0, \infty)$, $\lim_{n\to\infty} \|X_n - Y\|_p = 0$ implies
(3) $\lim_{n\to\infty} \mathrm{E}(|X_n|^p) = \mathrm{E}(|Y|^p)$
and thus for every $E \in \mathfrak{F}$ we have
(4) $\lim_{n\to\infty} \int_E |X_n|^p \, dP = \int_E |Y|^p \, dP$.
From (3) and (1) we have
(5) $\mathrm{E}(|Y|^p) \le M < \infty$ for all $Y \in \overline{\mathfrak{H}}$.
For $\varepsilon > 0$, let $\delta > 0$ be as specified in (2). Then by (4) and (2) we have
(6) $\int_E |Y|^p \, dP \le \varepsilon$ for all $Y \in \overline{\mathfrak{H}}$ whenever $E \in \mathfrak{F}$ and $P(E) < \delta$.
According to Theorem 4.7, the inequalities (5) and (6) above establish the uniform integrability of $\{|Y|^p : Y \in \overline{\mathfrak{H}}\}$. ■

Definition 4.20. A sequence $\{X_n : n \in \mathbb{N}\}$ in $L_1(\Omega, \mathfrak{F}, P)$ is said to converge weakly to $X \in L_1(\Omega, \mathfrak{F}, P)$ if
$$\lim_{n\to\infty} \int_\Omega X_n Y \, dP = \int_\Omega XY \, dP \quad \text{for every } Y \in L_\infty(\Omega, \mathfrak{F}, P).$$
$X$ is called the weak limit of the sequence. Note that the weak limit of a sequence, if it exists, is uniquely determined up to a null set in $(\Omega, \mathfrak{F}, P)$. Indeed if $\{X_n : n \in \mathbb{N}\}$ converges weakly to $X$ and to $X'$, then by using $Y = \mathbf{1}_E$ for an arbitrary $E \in \mathfrak{F}$ we have $\int_E X \, dP = \int_E X' \, dP$ and therefore $X = X'$ a.e. on $(\Omega, \mathfrak{F}, P)$. Note also that if a sequence $\{X_n : n \in \mathbb{N}\}$ in $L_1(\Omega, \mathfrak{F}, P)$ converges to $X \in L_1(\Omega, \mathfrak{F}, P)$ in $L_1$, then $\{X_n : n \in \mathbb{N}\}$ converges to $X$ weakly. This follows from the fact that for every $Y \in L_\infty(\Omega, \mathfrak{F}, P)$ we have
$$\Big|\int_\Omega X_n Y \, dP - \int_\Omega XY \, dP\Big| \le \int_\Omega |X_n - X||Y| \, dP \le \|Y\|_\infty \|X_n - X\|_1.$$
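The inequality just used, $|\int X_n Y \, dP - \int XY \, dP| \le \|Y\|_\infty \|X_n - X\|_1$, can be checked numerically. The Python sketch below (a sixteen-point uniform space with Gaussian and uniform samples, all illustrative choices) verifies it for randomly generated $X_n$, $X$ and a bounded test variable $Y$:

```python
import random

# Sixteen uniform points; X, Y and the perturbations are illustrative.
random.seed(2)
OUTCOMES = range(16)
PROB = 1 / 16

def E(f):
    return sum(f(w) * PROB for w in OUTCOMES)

X = {w: random.gauss(0, 1) for w in OUTCOMES}
Y = {w: random.uniform(-3, 3) for w in OUTCOMES}   # bounded test variable
Y_sup = max(abs(v) for v in Y.values())            # ||Y||_infinity here

for n in (1, 10, 100):
    Xn = {w: X[w] + random.uniform(-1, 1) / n for w in OUTCOMES}
    lhs = abs(E(lambda w: Xn[w] * Y[w]) - E(lambda w: X[w] * Y[w]))
    rhs = Y_sup * E(lambda w: abs(Xn[w] - X[w]))
    # |E((Xn - X)Y)| <= ||Y||_inf * E|Xn - X|
    assert lhs <= rhs + 1e-12
print("|E(X_n Y) - E(XY)| <= ||Y||_inf * ||X_n - X||_1 on all samples")
```

Since the bound is pointwise ($|(X_n - X)Y| \le \|Y\|_\infty |X_n - X|$), the assertion holds for every choice of data, not just these samples.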
In the next theorem we show that for an arbitrary probability space $(\Omega, \mathfrak{F}, P)$ any uniformly integrable subset of $L_1(\Omega, \mathfrak{F}, P)$ is relatively weakly compact. The converse of this theorem is also true. See [21] P. A. Meyer.

Theorem 4.21. Let $\mathfrak{H} \subset L_1(\Omega, \mathfrak{F}, P)$ be uniformly integrable. Then for every sequence $\{X_n : n \in \mathbb{N}\}$ in $\mathfrak{H}$ there exists a subsequence $\{X_{n_k} : k \in \mathbb{N}\}$ and some $X \in L_1(\Omega, \mathfrak{F}, P)$ such that
$$\lim_{k\to\infty} \int_\Omega X_{n_k} Y \, dP = \int_\Omega XY \, dP \quad \text{for every } Y \in L_\infty(\Omega, \mathfrak{F}, P).$$
Proof. 1) For an arbitrary sequence $\{X_n : n \in \mathbb{N}\}$ in $\mathfrak{H}$, let $\mathfrak{G} = \sigma\{X_n : n \in \mathbb{N}\}$. According to Theorem 1.15, this $\sigma$-algebra is separable; in fact there exists a countable subalgebra $\mathfrak{A}$ of $\mathfrak{F}$ such that $\sigma(\mathfrak{A}) = \mathfrak{G}$. Let $\mathfrak{A} = \{A_m : m \in \mathbb{N}\}$. For every $A_m$ the uniform integrability of $\{X_n : n \in \mathbb{N}\}$ implies that $|\int_{A_m} X_n \, dP| \le \int_\Omega |X_n| \, dP \le \sup_{n\in\mathbb{N}} \mathrm{E}(|X_n|) < \infty$ for all $n \in \mathbb{N}$. Then $\{\int_{A_m} X_n \, dP : n \in \mathbb{N}\}$ is a bounded sequence in $\mathbb{R}$ and thus has a convergent subsequence. Since this is true for every $A_m$, there exists by the diagonal procedure a subsequence of $\{X_n : n \in \mathbb{N}\}$, which we denote by $\{X_k : k \in \mathbb{N}\}$, such that $\{\int_{A_m} X_k \, dP : k \in \mathbb{N}\}$ is a convergent sequence in $\mathbb{R}$ for every $A_m$.

Let us show next that the sequence $\{\int_G X_k \, dP : k \in \mathbb{N}\}$ converges for every $G \in \mathfrak{G}$. For an arbitrary $\varepsilon > 0$ the uniform integrability of $\{X_k : k \in \mathbb{N}\}$ implies the existence of some $\delta > 0$ such that
(1) $\int_E |X_k| \, dP < \varepsilon$ for all $k \in \mathbb{N}$ whenever $E \in \mathfrak{F}$ and $P(E) < \delta$.
Since $G \in \mathfrak{G}$ and $\mathfrak{G} = \sigma(\mathfrak{A})$, there exists some $A \in \mathfrak{A}$ such that $P(G \triangle A) < \delta$. Then
$$\Big|\int_G X_k \, dP - \int_A X_k \, dP\Big| \le \int_{G \triangle A} |X_k| \, dP < \varepsilon \quad \text{for all } k \in \mathbb{N}.$$
From this we have
$$\limsup_{k\to\infty} \int_G X_k \, dP \le \lim_{k\to\infty} \int_A X_k \, dP + \varepsilon$$
and
$$\liminf_{k\to\infty} \int_G X_k \, dP \ge \lim_{k\to\infty} \int_A X_k \, dP - \varepsilon.$$
Then from the arbitrariness of $\varepsilon > 0$ we have
$$\lim_{k\to\infty} \int_G X_k \, dP = \lim_{k\to\infty} \int_A X_k \, dP \in \mathbb{R}.$$
This shows the convergence of the sequence $\{\int_G X_k \, dP : k \in \mathbb{N}\}$ in $\mathbb{R}$ for every $G \in \mathfrak{G}$.

2) Let us define a real valued set function $Q$ on $\mathfrak{G}$ by setting
(2) $Q(G) = \lim_{k\to\infty} \int_G X_k \, dP$ for $G \in \mathfrak{G}$.
Clearly $Q(\emptyset) = 0$. To show that $Q$ is a signed measure it remains only to show that $Q$ is countably additive on $\mathfrak{G}$, that is, for an arbitrary disjoint collection $\{G_m : m \in \mathbb{N}\}$ in $\mathfrak{G}$ we have $Q(\bigcup_{m\in\mathbb{N}} G_m) = \sum_{m\in\mathbb{N}} Q(G_m)$. Since $Q$ is clearly finitely additive on $\mathfrak{G}$ from its definition by (2), it suffices to show that if $\{G_m : m \in \mathbb{N}\}$ is an increasing sequence in $\mathfrak{G}$ and if we let $G = \bigcup_{m\in\mathbb{N}} G_m$ then $Q(G) = \lim_{m\to\infty} Q(G_m)$. Let $\varepsilon > 0$ be arbitrarily given. By the uniform integrability of $\{X_k : k \in \mathbb{N}\}$ there exists $\delta > 0$ such that (1) holds. Since $G_m \uparrow G$ there exists $N \in \mathbb{N}$ such that $P(G - G_m) < \delta$ for $m \ge N$. Then by (1) we have
$$|Q(G - G_m)| = \Big|\lim_{k\to\infty} \int_{G - G_m} X_k \, dP\Big| \le \varepsilon \quad \text{for } m \ge N.$$
This shows that $\lim_{m\to\infty} Q(G - G_m) = 0$ and therefore $Q(G) = \lim_{m\to\infty} Q(G_m)$. This completes the proof that $Q$ is a signed measure on $\mathfrak{G}$.

Since $P(G) = 0$ implies $Q(G) = 0$ by (2) for $G \in \mathfrak{G}$, the signed measure $Q$ is absolutely continuous with respect to $P$ on the measurable space $(\Omega, \mathfrak{G})$. By the Radon-Nikodym Theorem there exists an extended real valued $\mathfrak{G}$-measurable function $X$ on $\Omega$ such that $Q(G) = \int_G X \, dP$ for every $G \in \mathfrak{G}$, in other words,
(3) $\lim_{k\to\infty} \int_G X_k \, dP = \int_G X \, dP$ for $G \in \mathfrak{G}$.
Since $Q$ assumes values in $\mathbb{R}$ only, it is a finite signed measure. This implies that its Radon-Nikodym derivative $X$ is integrable on $(\Omega, \mathfrak{F}, P)$.
3) We show next that for every $Y \in L_\infty(\Omega, \mathfrak{G}, P)$ we have
(4) $\lim_{k\to\infty} \int_\Omega X_k Y \, dP = \int_\Omega XY \, dP$.
To apply Theorem 1.10, let us use a real valued representative for $Y \in L_\infty(\Omega, \mathfrak{G}, P)$. Let $\mathbf{V}$ be the collection of all $Y \in L_\infty(\Omega, \mathfrak{G}, P)$ which satisfy (4). By (3), $\mathbf{1}_G \in \mathbf{V}$ for every $G \in \mathfrak{G}$. Also $\mathbf{V}$ is a linear space. Thus $\mathbf{V}$ satisfies conditions 1° and 2° of Theorem 1.10. To show that condition 3° of the same theorem is also satisfied, we show that if $\{Y_m : m \in \mathbb{N}\}$ is an increasing sequence of nonnegative functions in $\mathbf{V}$ and if $Y = \lim_{m\to\infty} Y_m$ is in $L_\infty(\Omega, \mathfrak{G}, P)$ then $Y \in \mathbf{V}$.

For convenience let us write $X_0$ for $X$ and $Y_0$ for $Y$. Since $\{X_k : k \in \mathbb{N}\}$ is uniformly integrable and $X_0$ is integrable, $\{X_k : k \in \mathbb{Z}_+\}$ is uniformly integrable. Since $0 \le Y_m \le Y_0$ on $\Omega$ for all $m \in \mathbb{N}$ and since $Y_0 \in L_\infty(\Omega, \mathfrak{G}, P)$, the system $\{X_k Y_m : k, m \in \mathbb{Z}_+\}$ is uniformly integrable by 2) of Proposition 4.9 and 4) of Proposition 4.11. Thus according to Theorem 4.7, for an arbitrary $\varepsilon > 0$ there exists some $\delta > 0$ such that
(5) $\int_E |X_k Y_m| \, dP < \varepsilon$ for all $k, m \in \mathbb{Z}_+$ whenever $E \in \mathfrak{F}$ and $P(E) < \delta$.
Since $\lim_{m\to\infty} Y_m = Y_0$ on $\Omega$ we have $P\text{-}\lim_{m\to\infty} Y_m = Y_0$. Thus $\lim_{m\to\infty} P\{|Y_m - Y_0| > \varepsilon\} = 0$ and hence there exists $N \in \mathbb{N}$ such that
(6) $P\{|Y_m - Y_0| > \varepsilon\} < \delta$ for $m \ge N$.
Now with an arbitrary $m \ge N$ we have
(7) $\big|\int_\Omega X_k Y_0 \, dP - \int_\Omega X_0 Y_0 \, dP\big| \le \big|\int_\Omega X_k Y_0 \, dP - \int_\Omega X_k Y_m \, dP\big| + \big|\int_\Omega X_k Y_m \, dP - \int_\Omega X_0 Y_m \, dP\big| + \big|\int_\Omega X_0 Y_m \, dP - \int_\Omega X_0 Y_0 \, dP\big|$.
With $a = \sup_{k\in\mathbb{Z}_+} \mathrm{E}(|X_k|) < \infty$, applying (5) and (6) to the first member on the right side of the inequality (7) we have
$$\Big|\int_\Omega X_k Y_0 \, dP - \int_\Omega X_k Y_m \, dP\Big| \le \int_\Omega |X_k Y_0 - X_k Y_m| \, dP \le \int_{\{|Y_m - Y_0| > \varepsilon\}} \{|X_k Y_0| + |X_k Y_m|\} \, dP + \varepsilon \int_\Omega |X_k| \, dP \le 2\varepsilon + a\varepsilon.$$
By exactly the same argument we have the same estimate for the third member on the right side of (7). Using these estimates in (7) we have
(8) $\big|\int_\Omega X_k Y_0 \, dP - \int_\Omega X_0 Y_0 \, dP\big| \le 2(2 + a)\varepsilon + \big|\int_\Omega X_k Y_m \, dP - \int_\Omega X_0 Y_m \, dP\big|$.
Since $Y_m \in \mathbf{V}$ the second member on the right side of (8) converges to $0$ as $k \to \infty$. Therefore
$$\limsup_{k\to\infty} \Big|\int_\Omega X_k Y_0 \, dP - \int_\Omega X_0 Y_0 \, dP\Big| \le 2(2 + a)\varepsilon.$$
Since this holds for every $\varepsilon > 0$ the limit superior above is equal to $0$. Hence we have the equality $\lim_{k\to\infty} \int_\Omega X_k Y_0 \, dP = \int_\Omega X_0 Y_0 \, dP$. Thus $Y_0 \in \mathbf{V}$. This completes the verification that $\mathbf{V}$ satisfies all the conditions in Theorem 1.10. Therefore (4) holds for every $Y \in L_\infty(\Omega, \mathfrak{G}, P)$.

Finally, to complete the proof of the theorem we show that (4) holds for every $Y \in L_\infty(\Omega, \mathfrak{F}, P)$. Now if $Y \in L_\infty(\Omega, \mathfrak{F}, P)$ then $\mathrm{E}(Y|\mathfrak{G}) \in L_\infty(\Omega, \mathfrak{G}, P)$ and therefore
$$\lim_{k\to\infty} \int_\Omega X_k \mathrm{E}(Y|\mathfrak{G}) \, dP = \int_\Omega X \mathrm{E}(Y|\mathfrak{G}) \, dP.$$
By the $\mathfrak{G}$-measurability of $X_k$ and $X$ we have $X_k \mathrm{E}(Y|\mathfrak{G}) = \mathrm{E}(X_k Y|\mathfrak{G})$ and $X \mathrm{E}(Y|\mathfrak{G}) = \mathrm{E}(XY|\mathfrak{G})$ a.e. on $(\Omega, \mathfrak{G}, P)$. Therefore
$$\lim_{k\to\infty} \int_\Omega \mathrm{E}(X_k Y|\mathfrak{G}) \, dP = \int_\Omega \mathrm{E}(XY|\mathfrak{G}) \, dP.$$
Since $\Omega \in \mathfrak{G}$, by the definition of conditional expectation the last equality implies
$$\lim_{k\to\infty} \int_\Omega X_k Y \, dP = \int_\Omega XY \, dP.$$
This completes the proof. ■

Let us consider the family of conditional expectations with respect to a fixed $\sigma$-algebra of all random variables in a uniformly integrable family. In Theorem 4.22 below we show that such a family maintains the uniform integrability. In Theorem 4.24 we show that the family of conditional expectations of an integrable random variable with respect to an arbitrary family of sub-$\sigma$-algebras in the probability space is always uniformly integrable.

Theorem 4.22. Let $\{X_\alpha : \alpha \in A\}$ be a uniformly integrable system of extended real valued random variables on a probability space $(\Omega, \mathfrak{F}, P)$. Let $\mathfrak{G}$ be a sub-$\sigma$-algebra of $\mathfrak{F}$ and let
$Y_\alpha$ be an arbitrary version of $\mathrm{E}(X_\alpha|\mathfrak{G})$ for $\alpha \in A$. Then $\{Y_\alpha : \alpha \in A\}$ is a uniformly integrable system.

Proof. Since $\{Y_\alpha : \alpha \in A\}$ is a system of extended real valued random variables on the probability space $(\Omega, \mathfrak{G}, P)$, to show its uniform integrability it is sufficient to verify according to Theorem 4.7 that
(1) $\sup_{\alpha\in A} \mathrm{E}(|Y_\alpha|) < \infty$
and for every $\varepsilon > 0$ there exists $\delta > 0$ such that
(2) $\int_E |Y_\alpha| \, dP < \varepsilon$ for all $\alpha \in A$ whenever $E \in \mathfrak{G}$ and $P(E) < \delta$.
Now
(3) $|Y_\alpha| = |\mathrm{E}(X_\alpha|\mathfrak{G})| \le \mathrm{E}(|X_\alpha|\,|\mathfrak{G})$ a.e. on $(\Omega, \mathfrak{G}, P)$
so that
$$\mathrm{E}(|Y_\alpha|) \le \mathrm{E}[\mathrm{E}(|X_\alpha|\,|\mathfrak{G})] = \mathrm{E}(|X_\alpha|)$$
and then since $\sup_{\alpha\in A} \mathrm{E}(|X_\alpha|) < \infty$ by 1° of Theorem 4.7 we have (1). By 2° of Theorem 4.7, for every $\varepsilon > 0$ there exists $\delta > 0$ such that $\int_E |X_\alpha| \, dP < \varepsilon$ for all $\alpha \in A$ whenever $E \in \mathfrak{F}$ and $P(E) < \delta$. Thus for any $E \in \mathfrak{G} \subset \mathfrak{F}$ with $P(E) < \delta$ we have by (3)
$$\int_E |Y_\alpha| \, dP \le \int_E \mathrm{E}(|X_\alpha|\,|\mathfrak{G}) \, dP = \int_E |X_\alpha| \, dP < \varepsilon \quad \text{for all } \alpha \in A,$$
proving (2). ■

Theorem 4.23. Let $\{X_n : n \in \mathbb{N}\} \subset L_1(\Omega, \mathfrak{F}, P)$ be uniformly integrable so that there exists a subsequence $\{n_k\}$ of $\{n\}$ and $X \in L_1(\Omega, \mathfrak{F}, P)$ such that
(1) $\lim_{k\to\infty} \int_\Omega X_{n_k} \xi \, dP = \int_\Omega X \xi \, dP$ for $\xi \in L_\infty(\Omega, \mathfrak{F}, P)$.
Let $\mathfrak{G}$ be a sub-$\sigma$-algebra of $\mathfrak{F}$. Then there exists a subsequence $\{n_\ell\}$ of $\{n_k\}$ and $Y \in L_1(\Omega, \mathfrak{G}, P)$ such that
(2) $Y = \mathrm{E}(X|\mathfrak{G})$ a.e. on $(\Omega, \mathfrak{G}, P)$,
and
(3) $\lim_{\ell\to\infty} \int_\Omega \mathrm{E}(X_{n_\ell}|\mathfrak{G}) \xi \, dP = \int_\Omega Y \xi \, dP$ for $\xi \in L_\infty(\Omega, \mathfrak{F}, P)$.

Proof. The uniform integrability of $\{X_{n_k} : k \in \mathbb{N}\}$ implies that of $\{\mathrm{E}(X_{n_k}|\mathfrak{G}) : k \in \mathbb{N}\} \subset L_1(\Omega, \mathfrak{G}, P)$ by Theorem 4.22. Thus by Theorem 4.21 there exists a subsequence $\{n_\ell\}$ of $\{n_k\}$ and $Y \in L_1(\Omega, \mathfrak{G}, P)$ such that
(4) $\lim_{\ell\to\infty} \int_\Omega \mathrm{E}(X_{n_\ell}|\mathfrak{G}) \eta \, dP = \int_\Omega Y \eta \, dP$ for $\eta \in L_\infty(\Omega, \mathfrak{G}, P)$.
Since
$$\int_\Omega \mathrm{E}(X_{n_\ell}|\mathfrak{G}) \eta \, dP = \int_\Omega \mathrm{E}(X_{n_\ell}\eta|\mathfrak{G}) \, dP = \int_\Omega X_{n_\ell} \eta \, dP \quad \text{for } \eta \in L_\infty(\Omega, \mathfrak{G}, P),$$
(1) implies that
(5) $\lim_{\ell\to\infty} \int_\Omega \mathrm{E}(X_{n_\ell}|\mathfrak{G}) \eta \, dP = \lim_{\ell\to\infty} \int_\Omega X_{n_\ell} \eta \, dP = \int_\Omega X \eta \, dP$ for $\eta \in L_\infty(\Omega, \mathfrak{G}, P)$.
By (4) and (5) we have
(6) $\int_\Omega X \eta \, dP = \int_\Omega Y \eta \, dP$ for $\eta \in L_\infty(\Omega, \mathfrak{G}, P)$.
In particular, with $\eta = \mathbf{1}_E$ where $E \in \mathfrak{G}$, we have by (6)
(7) $\int_E \mathrm{E}(X|\mathfrak{G}) \, dP = \int_E X \, dP = \int_\Omega X \mathbf{1}_E \, dP = \int_\Omega Y \mathbf{1}_E \, dP = \int_E Y \, dP$.
Since $\mathrm{E}(X|\mathfrak{G})$ and $Y$ are both $\mathfrak{G}$-measurable and since (7) holds for every $E \in \mathfrak{G}$, we have (2).

To prove (3), note that for every $\xi \in L_\infty(\Omega, \mathfrak{F}, P)$ we have
$$\int_\Omega \mathrm{E}(X_{n_\ell}|\mathfrak{G}) \xi \, dP = \int_\Omega \mathrm{E}[\mathrm{E}(X_{n_\ell}|\mathfrak{G})\xi \,|\, \mathfrak{G}] \, dP = \int_\Omega \mathrm{E}(X_{n_\ell}|\mathfrak{G}) \mathrm{E}(\xi|\mathfrak{G}) \, dP.$$
Since $\mathrm{E}(\xi|\mathfrak{G}) \in L_\infty(\Omega, \mathfrak{G}, P)$, we have by (4)
$$\lim_{\ell\to\infty} \int_\Omega \mathrm{E}(X_{n_\ell}|\mathfrak{G}) \xi \, dP = \int_\Omega Y \mathrm{E}(\xi|\mathfrak{G}) \, dP = \int_\Omega \mathrm{E}(Y\xi|\mathfrak{G}) \, dP = \int_\Omega Y \xi \, dP,$$
proving (3). ■
Theorem 4.24. Let $X$ be an integrable extended real valued random variable on a probability space $(\Omega, \mathfrak{F}, P)$. Let $\{\mathfrak{G}_\alpha : \alpha \in A\}$ be an arbitrary system of sub-$\sigma$-algebras of $\mathfrak{F}$ and let $Y_\alpha$ be an arbitrary version of $\mathrm{E}(X|\mathfrak{G}_\alpha)$ for $\alpha \in A$. Then $\{Y_\alpha : \alpha \in A\}$ is a uniformly integrable system of random variables on $(\Omega, \mathfrak{F}, P)$.

Proof. According to (3) of Definition 4.6, to show the uniform integrability of $\{Y_\alpha : \alpha \in A\}$ we show that for every $\varepsilon > 0$ there exists $\lambda > 0$ such that
(1) $\int_{\{|Y_\alpha| > \lambda\}} |Y_\alpha| \, dP < \varepsilon$ for all $\alpha \in A$.
Now for each $\alpha \in A$ we have
(2) $|Y_\alpha| = |\mathrm{E}(X|\mathfrak{G}_\alpha)| \le \mathrm{E}(|X|\,|\mathfrak{G}_\alpha)$ a.e. on $(\Omega, \mathfrak{G}_\alpha, P)$
and then
$$\mathrm{E}(|Y_\alpha|) \le \mathrm{E}[\mathrm{E}(|X|\,|\mathfrak{G}_\alpha)] = \mathrm{E}(|X|).$$
Thus for every $\lambda > 0$ we have
(3) $\lambda P\{|Y_\alpha| > \lambda\} \le \int_{\{|Y_\alpha| > \lambda\}} |Y_\alpha| \, dP \le \mathrm{E}(|Y_\alpha|) \le \mathrm{E}(|X|)$ for all $\alpha \in A$.
Let $\varepsilon > 0$ be arbitrarily given. The integrability of $X$ implies that there exists $\delta > 0$ such that
(4) $\int_E |X| \, dP < \varepsilon$ whenever $E \in \mathfrak{F}$ and $P(E) < \delta$.
Let $\lambda > 0$ be so large that
(5) $\frac{1}{\lambda}\mathrm{E}(|X|) < \delta$.
Then for such $\lambda > 0$ we have by (3) and (5)
(6) $P\{|Y_\alpha| > \lambda\} \le \frac{1}{\lambda}\mathrm{E}(|X|) < \delta$ for all $\alpha \in A$
and thus for every $\alpha \in A$ we have
$$\int_{\{|Y_\alpha| > \lambda\}} |Y_\alpha| \, dP \le \int_{\{|Y_\alpha| > \lambda\}} \mathrm{E}(|X|\,|\mathfrak{G}_\alpha) \, dP = \int_{\{|Y_\alpha| > \lambda\}} |X| \, dP < \varepsilon$$
by (2), the fact that $\{|Y_\alpha| > \lambda\} \in \mathfrak{G}_\alpha$, (4) and (6). This proves (1). ■
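On a finite probability space every sub-$\sigma$-algebra is generated by a partition, and $\mathrm{E}(X|\mathfrak{G})$ is the blockwise average. The Python sketch below (six uniform points, an illustrative setup) enumerates all partitions and confirms the bound $\mathrm{E}(|\mathrm{E}(X|\mathfrak{G})|) \le \mathrm{E}(|X|)$ used in the proof, which is what makes the $L_1$ bound uniform over the whole family of sub-$\sigma$-algebras:

```python
import random

# Six uniform points; X is an arbitrary illustrative random variable.
random.seed(3)
N = 6
PROB = 1 / N
X = [random.gauss(0, 2) for _ in range(N)]

def cond_exp(Z, partition):
    """E(Z | sigma(partition)) on a finite uniform space: blockwise average."""
    Y = [0.0] * N
    for block in partition:
        avg = sum(Z[w] for w in block) / len(block)
        for w in block:
            Y[w] = avg
    return Y

def partitions(points):
    """Enumerate all set partitions of `points` (feasible for six points)."""
    points = list(points)
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

E_abs_X = sum(abs(x) for x in X) * PROB
for part in partitions(range(N)):
    Y = cond_exp(X, part)
    # conditional Jensen: |E(X|G)| <= E(|X| | G), so E|Y| <= E|X| for every G
    assert sum(abs(y) for y in Y) * PROB <= E_abs_X + 1e-12
print("E|E(X|G)| <= E|X| for every sub-sigma-algebra G of the finite space")
```

The same averaging picture underlies inequality (2) in the proof of Theorem 4.24: averaging can only shrink the $L_1$ size of a random variable.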
Chapter 2. Martingales

§5 Martingale, Submartingale and Supermartingale

[I] Martingale, Submartingale and Supermartingale Properties

Definition 5.1. Let us fix some terminology for stochastic processes X = {X_t : t ∈ D} on a probability space (Ω, 𝔉, P) where D is a subset of R.
1) X is null at 0 if 0 ∈ D and X_0 = 0 a.e. on (Ω, 𝔉, P).
2) X is nonnegative if X_t ≥ 0 a.e. on (Ω, 𝔉, P) for every t ∈ D.
3) X is bounded if there exists M > 0 such that |X(t, ω)| ≤ M for all (t, ω) ∈ D × Ω.
4) X is an L_p-process for some p ∈ (0, ∞) if X_t ∈ L_p(Ω, 𝔉, P) for all t ∈ D.
5) X is L_p-bounded, or bounded in L_p, if sup_{t∈D} E(|X_t|^p) < ∞.
6) X is uniformly integrable if the system of random variables {X_t : t ∈ D} is uniformly integrable. X is pth-order uniformly integrable for some p ∈ (0, ∞) if the system of random variables {|X_t|^p : t ∈ D} is uniformly integrable.

Recall that a system of sub-
Definition 5.2. A stochastic process X = {X_t : t ∈ D} on a filtered space (Ω, 𝔉, {𝔉_t}, P) is called a martingale if it satisfies the following conditions:
(i) X is {𝔉_t}-adapted.
(ii) X is an L_1-process.
(iii) E(X_t|𝔉_s) = X_s a.e. on (Ω, 𝔉_s, P) for s, t ∈ D, s < t.
X is called a submartingale if instead of (iii) it satisfies the condition
(iv) E(X_t|𝔉_s) ≥ X_s a.e. on (Ω, 𝔉_s, P) for s, t ∈ D, s < t,
and a supermartingale if instead of (iii) it satisfies the condition
(v) E(X_t|𝔉_s) ≤ X_s a.e. on (Ω, 𝔉_s, P) for s, t ∈ D, s < t.
As an example of submartingales consider an {𝔉_t}-adapted L_1-process X = {X_t : t ∈ T} on a filtered space (Ω, 𝔉, {𝔉_t : t ∈ T}, P) whose sample functions are increasing functions on T. For such a process we have X_t ≥ X_s for s, t ∈ T, s < t, so that E(X_t|𝔉_s) ≥ X_s a.e. on (Ω, 𝔉_s, P).

To construct a martingale, let (Ω, 𝔉, {𝔉_t : t ∈ T}, P) be a filtered space, let ξ be an integrable random variable on the probability space (Ω, 𝔉, P) and let X_t be an arbitrary real valued version of E(ξ|𝔉_t) for each t ∈ T. Then X = {X_t : t ∈ T} is an {𝔉_t}-adapted L_1-process on the filtered space. It has the martingale property since for s, t ∈ T, s < t, we have

E(X_t|𝔉_s) = E[E(ξ|𝔉_t)|𝔉_s] = E(ξ|𝔉_s) = X_s  a.e. on (Ω, 𝔉_s, P).

This martingale is a uniformly integrable stochastic process by Theorem 4.24 and in particular it is L_1-bounded by Theorem 4.7. Now for an arbitrary version η of E(ξ|𝔉_∞) where 𝔉_∞ = σ(∪_{t∈T} 𝔉_t), we have 𝔉_t ⊂ 𝔉_∞ for every t ∈ T and therefore

E(η|𝔉_t) = E[E(ξ|𝔉_∞)|𝔉_t] = E(ξ|𝔉_t) = X_t  a.e. on (Ω, 𝔉_t, P).

For a martingale X = {X_t : t ∈ T} on a filtered space (Ω, 𝔉, {𝔉_t : t ∈ T}, P), if there exists
an extended real valued integrable 𝔉_∞-measurable random variable η such that E(η|𝔉_t) = X_t a.e. on (Ω, 𝔉_t, P) for every t ∈ T, then we call η a final element for X. In Theorem 8.2 we show that a final element exists for a martingale X if and only if X is uniformly integrable. In Theorem 5.3 next, we show that if a final element exists for a martingale then it is uniquely determined up to a null set in (Ω, 𝔉_∞, P).

Theorem 5.3. (Uniqueness of the Final Element of a Martingale) Let X = {X_t : t ∈ T} be a martingale on a filtered space (Ω, 𝔉, {𝔉_t : t ∈ T}, P) and let 𝔉_∞ = σ(∪_{t∈T} 𝔉_t). If there exists an integrable 𝔉_∞-measurable random variable ξ on (Ω, 𝔉, P) such that E(ξ|𝔉_t) = X_t a.e. on (Ω, 𝔉_t, P) for every t ∈ T, then ξ is uniquely determined up to a null set in (Ω, 𝔉_∞, P).

Proof. Suppose there exist two final elements ξ and η of the martingale which are not a.e. equal on (Ω, 𝔉_∞, P). We may assume without loss of generality that ξ > η on some Λ ∈ 𝔉_∞ with P(Λ) > 0. Then there exists some k ∈ N such that ξ − η ≥ 1/k on some Λ_1 ⊂ Λ, Λ_1 ∈ 𝔉_∞ with P(Λ_1) > 0. Since |ξ| + |η| is integrable, for an arbitrary ε > 0 and in particular for (2k)^{-1} P(Λ_1) > 0 there exists δ > 0 such that for every E ∈ 𝔉_∞ with P(E) < δ we have

∫_E {|ξ| + |η|} dP < P(Λ_1)/(2k).

Now since ∪_{t∈T} 𝔉_t is an algebra of subsets of Ω and since 𝔉_∞ = σ(∪_{t∈T} 𝔉_t), there exist some t ∈ T and A_2 ∈ 𝔉_t such that P(Λ_1 △ A_2) < δ. Then

|∫_{Λ_1} {ξ − η} dP − ∫_{A_2} {ξ − η} dP| = |∫_{Λ_1 − A_2} {ξ − η} dP − ∫_{A_2 − Λ_1} {ξ − η} dP| ≤ ∫_{Λ_1 △ A_2} {|ξ| + |η|} dP < P(Λ_1)/(2k).

From this inequality and from ∫_{Λ_1} {ξ − η} dP ≥ (1/k) P(Λ_1) we have

(1)  ∫_{A_2} {ξ − η} dP > ∫_{Λ_1} {ξ − η} dP − P(Λ_1)/(2k) ≥ (1/k) P(Λ_1) − P(Λ_1)/(2k) = P(Λ_1)/(2k) > 0.

From the equality E(ξ|𝔉_t) = X_t = E(η|𝔉_t) a.e. on (Ω, 𝔉_t, P) we have E[{ξ − η}|𝔉_t] = 0 a.e. on (Ω, 𝔉_t, P). Then since A_2 ∈ 𝔉_t we have

∫_{A_2} {ξ − η} dP = ∫_{A_2} E[{ξ − η}|𝔉_t] dP = 0,

which contradicts (1). Therefore ξ = η a.e. on (Ω, 𝔉_∞, P). ■
Proposition 5.4. Let X = {X_t : t ∈ T} be a stochastic process on a filtered space (Ω, 𝔉, {𝔉_t : t ∈ T}, P).
1) If X is a martingale then E(X_t) = const for t ∈ T.
2) If X is a submartingale then E(X_t) ↑ as t → ∞.
3) If X is a supermartingale then E(X_t) ↓ as t → ∞.
4) A submartingale X is a martingale if and only if E(X_t) = const for t ∈ T. Similarly for a supermartingale.

Proof. 1), 2) and 3) are immediate consequences of (iii), (iv) and (v) respectively of Definition 5.2. To prove 2), note that by (iv) of Definition 5.2 we have E[E(X_t|𝔉_s)] ≥ E(X_s) for s < t. But E[E(X_t|𝔉_s)] = E(X_t). Thus E(X_t) ≥ E(X_s). Therefore E(X_t) ↑ as t → ∞. Similarly we have 1) and 3).

To prove 4), suppose X is a submartingale and E(X_t) = const for t ∈ T. Then for every s, t ∈ T, s < t, we have E(X_t|𝔉_s) ≥ X_s a.e. on (Ω, 𝔉_s, P). On the other hand, E[E(X_t|𝔉_s)] = E(X_t) = E(X_s), namely ∫_Ω E(X_t|𝔉_s) dP = ∫_Ω X_s dP. Therefore E(X_t|𝔉_s) = X_s a.e. on (Ω, 𝔉_s, P). This shows that X is a martingale. Similarly for a supermartingale. ■

Observation 5.5. The monotonicity conditions (iii), (iv) and (v) in Definition 5.2 for a martingale, submartingale and supermartingale are respectively equivalent to the following conditions:

(iii)'  ∫_E X_t dP = ∫_E X_s dP  for E ∈ 𝔉_s, s, t ∈ T, s < t.
(iv)'  ∫_E X_t dP ≥ ∫_E X_s dP  for E ∈ 𝔉_s, s, t ∈ T, s < t.
(v)'  ∫_E X_t dP ≤ ∫_E X_s dP  for E ∈ 𝔉_s, s, t ∈ T, s < t.

Let us show the equivalence of (iv) and (iv)' for instance. Recall that by the definition of E(X_t|𝔉_s) we have

∫_E X_t dP = ∫_E E(X_t|𝔉_s) dP  for every E ∈ 𝔉_s.

Thus if (iv) holds, then (iv)' holds. Conversely if (iv)' holds then by the last equality we have

∫_E E(X_t|𝔉_s) dP ≥ ∫_E X_s dP  for every E ∈ 𝔉_s.
Then the 𝔉_s-measurability of both E(X_t|𝔉_s) and X_s implies (iv).

Proposition 5.6. When T = Z_+, conditions (iii), (iv) and (v) in Definition 5.2 are respectively equivalent to the following conditions:

(iii)''  E(X_{n+1}|𝔉_n) = X_n  a.e. on (Ω, 𝔉_n, P) for n ∈ Z_+.
(iv)''  E(X_{n+1}|𝔉_n) ≥ X_n  a.e. on (Ω, 𝔉_n, P) for n ∈ Z_+.
(v)''  E(X_{n+1}|𝔉_n) ≤ X_n  a.e. on (Ω, 𝔉_n, P) for n ∈ Z_+.
Proof. Let us show the equivalence of (iv) and (iv)''. Clearly (iv) implies (iv)''. To prove the converse assume (iv)''. Let n, m ∈ Z_+ and n < m. Then by iterated application of (iv)'' we have

E(X_m|𝔉_n) = E[E(X_m|𝔉_{m−1})|𝔉_n] ≥ E(X_{m−1}|𝔉_n)
= E[E(X_{m−1}|𝔉_{m−2})|𝔉_n] ≥ E(X_{m−2}|𝔉_n)
≥ ⋯ ≥ E(X_{n+1}|𝔉_n) ≥ X_n  a.e. on (Ω, 𝔉_n, P),

proving (iv). ■

Proposition 5.7. Let X = {X_t : t ∈ T} be a stochastic process on a filtered space (Ω, 𝔉, {𝔉_t : t ∈ T}, P). Let {𝔉_t^X : t ∈ T} be the filtration of (Ω, 𝔉, P) generated by X, namely

𝔉_t^X = σ{X_s : s ∈ T, s ≤ t}  for t ∈ T.

If X is a martingale, submartingale or supermartingale with respect to ({𝔉_t}, P), then it is also a martingale, submartingale or supermartingale with respect to ({𝔉_t^X}, P).

Proof. Let X be a submartingale with respect to ({𝔉_t}, P). Then X_t is 𝔉_t-measurable and therefore 𝔉_t^X ⊂ 𝔉_t for every t ∈ T. By the definition of 𝔉_t^X, the process X is {𝔉_t^X}-adapted. Therefore according to Observation 5.5, to show that X is a submartingale with respect to ({𝔉_t^X}, P), it remains only to verify

∫_E X_t dP ≥ ∫_E X_s dP  for E ∈ 𝔉_s^X, s, t ∈ T, s < t.
Since X is a submartingale with respect to ({𝔉_t}, P), the last inequality holds for every E ∈ 𝔉_s. Since 𝔉_s^X ⊂ 𝔉_s, the inequality holds in particular for every E ∈ 𝔉_s^X. Thus X is a submartingale with respect to ({𝔉_t^X}, P). Similarly for a martingale and a supermartingale. ■

Proposition 5.8. For an adapted L_1-process X = {X_t : t ∈ T} on a filtered space (Ω, 𝔉, {𝔉_t}, P) the following statements hold.
1) X is a martingale if and only if it is both a submartingale and a supermartingale.
2) X is a submartingale (resp. supermartingale) if and only if −X = {−X_t : t ∈ T} is a supermartingale (resp. submartingale).
3) If X is a martingale then so is cX = {cX_t : t ∈ T} for c ∈ R.
4) If X is a submartingale (resp. supermartingale), then so is cX for c > 0.
5) If X and Y are both martingales (resp. submartingales or supermartingales) then so is X + Y = {X_t + Y_t : t ∈ T}.
6) If X is a martingale and Y is a submartingale (resp. supermartingale) then X + Y is a submartingale (resp. supermartingale).

Proof. These statements are immediate consequences of Definition 5.2. ■
[II] Convexity Theorems

Let X = {X_t : t ∈ T} be a stochastic process on a probability space. For a real valued function f on R, let us write f(X) or f ∘ X for {f ∘ X_t : t ∈ T}. For instance X^+ = {X_t ∨ 0 : t ∈ T} and X² = {X_t² : t ∈ T}. Similarly we write X ∨ Y = {X_t ∨ Y_t : t ∈ T}.

Theorem 5.9. If X = {X_t : t ∈ T} and Y = {Y_t : t ∈ T} are submartingales (or supermartingales) on a filtered space (Ω, 𝔉, {𝔉_t}, P), then so is X ∨ Y (or X ∧ Y).

Proof. Suppose X and Y are submartingales. Then for any t ∈ T, X_t and Y_t are 𝔉_t-measurable so that X_t ∨ Y_t is 𝔉_t-measurable and therefore X ∨ Y is an adapted process. Since X_t, Y_t ∈ L_1(Ω, 𝔉_t, P) and |X_t ∨ Y_t| ≤ |X_t| + |Y_t|, we have X_t ∨ Y_t ∈ L_1(Ω, 𝔉_t, P). Thus X ∨ Y is an L_1-process. Now for s, t ∈ T, s < t, we have X_t ∨ Y_t ≥ X_t, Y_t so that

E(X_t ∨ Y_t|𝔉_s) ≥ E(X_t|𝔉_s) ≥ X_s  a.e. on (Ω, 𝔉_s, P)

and similarly

E(X_t ∨ Y_t|𝔉_s) ≥ Y_s  a.e. on (Ω, 𝔉_s, P).

Therefore

E(X_t ∨ Y_t|𝔉_s) ≥ X_s ∨ Y_s  a.e. on (Ω, 𝔉_s, P).
This shows that X ∨ Y is a submartingale. Similarly X ∧ Y is a supermartingale when X and Y are supermartingales. ■

Corollary 5.10. Let X = {X_t : t ∈ T} be a stochastic process on a filtered space (Ω, 𝔉, {𝔉_t}, P).
1) If X is a submartingale then so is X^+.
2) If X is a supermartingale then X^− is a submartingale.
3) If X is a martingale then X = X′ − X″ where X′ and X″ are nonnegative submartingales.

Proof. 1) If X is a submartingale then since Y = 0 is a martingale, X^+ = X ∨ 0 is a submartingale by Theorem 5.9.
2) If X is a supermartingale then since Y = 0 is a martingale, X ∧ 0 is a supermartingale by Theorem 5.9. Then X^− = −(X ∧ 0) is a submartingale.
3) If X is a martingale, then it is a submartingale so that X^+ is a submartingale by 1). But X is also a supermartingale so that X^− is a submartingale by 2). Since X = X^+ − X^−, the process X is the difference of two nonnegative submartingales. ■

Theorem 5.11. Let X = {X_t : t ∈ T} be an adapted L_1-process on a filtered space (Ω, 𝔉, {𝔉_t}, P). Let f be a real valued increasing function on R such that f ∘ X_t ∈ L_1(Ω, 𝔉, P) for every t ∈ T.
1) If X is a submartingale and f is a convex function then f ∘ X is a submartingale.
2) If X is a supermartingale and f is a concave function then f ∘ X is a supermartingale.
3) When X is a martingale, the conclusions of 1) and 2) still hold when the monotonicity condition on f is removed.

Proof. 1) Suppose X is a submartingale. Since f is a real valued increasing function on R, f is 𝔅(R)/𝔅(R)-measurable and therefore f ∘ X_t is 𝔉_t/𝔅(R)-measurable, that is, f ∘ X is an adapted process. Since X is a submartingale, for s, t ∈ T, s < t, we have

(1)  E(X_t|𝔉_s) ≥ X_s  a.e. on (Ω, 𝔉_s, P).

Then since f is an increasing function,

(2)  f(E(X_t|𝔉_s)) ≥ f(X_s)  a.e. on (Ω, 𝔉_s, P).

By the Conditional Jensen Inequality,

(3)  E(f(X_t)|𝔉_s) ≥ f(E(X_t|𝔉_s))  a.e. on (Ω, 𝔉_s, P).
From (2) and (3), we have E(f(X_t)|𝔉_s) ≥ f(X_s) a.e. on (Ω, 𝔉_s, P). Therefore f(X) is a submartingale. When X is a martingale the equality in (1) holds and therefore the equality in (2) holds without assuming that f is an increasing function. Thus in this case f(X) is a submartingale when the monotonicity condition on f is removed.
2) Suppose X is a supermartingale. Then for s, t ∈ T, s < t, we have E(X_t|𝔉_s) ≤ X_s a.e. on (Ω, 𝔉_s, P). Since f is an increasing function, we have f(E(X_t|𝔉_s)) ≤ f(X_s) a.e. on (Ω, 𝔉_s, P). If f is concave then −f is convex. Thus by the Conditional Jensen Inequality E(−f(X_t)|𝔉_s) ≥ −f(E(X_t|𝔉_s)). Therefore E(f(X_t)|𝔉_s) ≤ f(X_s) a.e. on (Ω, 𝔉_s, P). This shows that f(X) is a supermartingale. ■

Corollary 5.12. 1) Let X = {X_t : t ∈ T} be an L_p-martingale, that is, an L_p-process and a martingale, on a filtered space (Ω, 𝔉, {𝔉_t}, P) where p ∈ [1, ∞). Then |X|^p = {|X_t|^p : t ∈ T} is a submartingale.
2) If X is a nonnegative L_p-submartingale then X^p is a submartingale for p ∈ [1, ∞).

Proof. 1) The function f on R defined by f(ξ) = |ξ|^p for ξ ∈ R is a convex function when p ∈ [1, ∞). If X is an L_p-martingale then f ∘ X_t = |X_t|^p ∈ L_1(Ω, 𝔉_t, P) so that f ∘ X is an L_1-process. Thus by 3) of Theorem 5.11, |X|^p is a submartingale.
2) For p ∈ [1, ∞), define a real valued increasing convex function f on R by setting f(ξ) = 0 for ξ ∈ (−∞, 0) and f(ξ) = ξ^p for ξ ∈ [0, ∞). If X is a nonnegative L_p-submartingale, then X_t^p ∈ L_1(Ω, 𝔉, P) and X_t^p = f ∘ X_t for t ∈ T. Then by Theorem 5.11, X^p is a submartingale. ■
[III] Discrete Time Increasing Processes and Doob Decomposition

Let us give a formal definition for discrete time increasing processes.

Definition 5.13. By an increasing process we mean a stochastic process A = {A_n : n ∈ Z_+} on a filtered space (Ω, 𝔉, {𝔉_n}, P) satisfying the following conditions.
1°. A is {𝔉_n}-adapted.
2°. A is an L_1-process.
3°. {A_n(ω) : n ∈ Z_+} is an increasing sequence with A_0(ω) = 0 for every ω ∈ Ω.
A is called an almost surely increasing process if it satisfies conditions 1°, 2° and the following condition.
4°. There exists a null set Λ in (Ω, 𝔉_∞, P) such that {A_n(ω) : n ∈ Z_+} is an increasing sequence with A_0(ω) = 0 for every ω ∈ Λᶜ.
An almost surely increasing process A = {A_n : n ∈ Z_+} is always a submartingale since for every n ∈ Z_+ we have A_{n+1} ≥ A_n a.e. on (Ω, 𝔉_∞, P) and consequently E(A_{n+1}|𝔉_n) ≥ E(A_n|𝔉_n) = A_n a.e. on (Ω, 𝔉_n, P). On the other hand there are submartingales which are not almost surely increasing processes. In fact, if A is an almost surely increasing process and thus a submartingale and if M is a martingale, then X = A + M is a submartingale by Proposition 5.8, but it may not be an almost surely increasing process since it may not satisfy condition 4° in Definition 5.13. The Doob Decomposition Theorem shows that a submartingale in discrete time is always the sum of a martingale and an almost surely increasing process.

Let us define the predictability of discrete time processes. This condition ensures the uniqueness in the Doob decomposition.

Definition 5.14. An adapted discrete time process X = {X_n : n ∈ Z_+} on a filtered space (Ω, 𝔉, {𝔉_n}, P) is called an {𝔉_n}-predictable process if X_n is 𝔉_{n−1}-measurable for every n ∈ N.

Let us observe that if X = {X_t : t ∈ T} is an adapted L_1-process on a filtered space (Ω, 𝔉, {𝔉_t}, P) and if we define a null at 0 adapted L_1-process Y = {Y_t : t ∈ T} by setting Y_t = X_t − X_0 for t ∈ T, then Y is respectively a martingale, submartingale or supermartingale if and only if X is.

Theorem 5.15. (Doob Decomposition) Let X = {X_n : n ∈ Z_+} be an adapted L_1-process on a filtered space (Ω, 𝔉, {𝔉_n}, P). Then X has the Doob decomposition

X = X_0 + M + A

where M is a null at 0 martingale and A is a null at 0 predictable L_1-process. Moreover the decomposition is unique in the sense that if X = X_0 + M′ + A′ is another such decomposition then M(·, ω) = M′(·, ω) and A(·, ω) = A′(·, ω) for a.e. ω in (Ω, 𝔉_∞, P). Furthermore an adapted L_1-process X on the filtered space is a submartingale if and only if the null at 0 predictable L_1-process A in the decomposition is an almost surely increasing process.

Proof. 1) Let X = {X_n : n ∈ Z_+} be an adapted L_1-process.
Define a null at 0 process A = {A_n : n ∈ Z_+} by setting

(1)  A_0 = 0,  A_n = A_{n−1} + E(X_n − X_{n−1}|𝔉_{n−1})  for n ∈ N.

Then A_n is 𝔉_{n−1}-measurable for every n ∈ N so that A is a predictable process. Also A_n is integrable for every n ∈ Z_+ so that A is an L_1-process.
For the process X − A we have for every n ∈ N

E(X_n − A_n|𝔉_{n−1}) = E(X_n|𝔉_{n−1}) − A_{n−1} − E(X_n − X_{n−1}|𝔉_{n−1}) = X_{n−1} − A_{n−1}  a.e. on (Ω, 𝔉_{n−1}, P).

This shows that X − A is a martingale. If we set M = (X − A) − X_0 then since A is null at 0, so is M. Thus we have a Doob decomposition X = X_0 + M + A.

2) To prove the uniqueness of the decomposition, suppose an adapted L_1-process X is given as X = X_0 + M + A = X_0 + M′ + A′, where M and M′ are null at 0 martingales and A and A′ are predictable L_1-processes. By X = X_0 + M + A we have for every n ∈ N

E(X_n − X_{n−1}|𝔉_{n−1}) = E[(M_n + A_n) − (M_{n−1} + A_{n−1})|𝔉_{n−1}] = A_n − A_{n−1}  a.e. on (Ω, 𝔉_{n−1}, P)

by the martingale property of M and by the predictability of A. Similarly by X = X_0 + M′ + A′ we have E(X_n − X_{n−1}|𝔉_{n−1}) = A′_n − A′_{n−1} a.e. on (Ω, 𝔉_{n−1}, P). Thus for every n ∈ N we have

(2)  A_n − A_{n−1} = A′_n − A′_{n−1}  a.e. on (Ω, 𝔉_{n−1}, P).

Since A and A′ are both null at 0, A_0 = A′_0 a.e. on (Ω, 𝔉_0, P). Then (2) implies that A_1 = A′_1 a.e. on (Ω, 𝔉_0, P). By iterated application of (2) we have A_n = A′_n a.e. on (Ω, 𝔉_{n−1}, P) for every n ∈ N. By the countability of Z_+ there exists a null set Λ in (Ω, 𝔉_∞, P) such that A(·, ω) = A′(·, ω) for ω ∈ Λᶜ. Since M = X − X_0 − A and M′ = X − X_0 − A′, the last equality implies M(·, ω) = M′(·, ω) for ω ∈ Λᶜ.

3) Suppose X is a submartingale and X = X_0 + M + A is its Doob decomposition. Since M is a martingale, A = X − M − X_0 is a submartingale and thus for every n ∈ N we have A_n = E(A_n|𝔉_{n−1}) ≥ A_{n−1} a.e. on (Ω, 𝔉_{n−1}, P) by the predictability of A. Since N is a countable set, there exists a null set Λ in (Ω, 𝔉_∞, P) such that A_{n−1}(ω) ≤ A_n(ω) for all n ∈ N when ω ∈ Λᶜ. This shows that A is an almost surely increasing process. Conversely suppose A is an almost surely increasing process. Then A is a submartingale. Since M is a martingale, X = X_0 + M + A is a submartingale. ■

An extension of the Doob decomposition to continuous time submartingales, the Doob-Meyer decomposition, will be treated in §10.
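When the one-step conditional expectations are available in closed form, the recursion (1) above can be carried out path by path. The following sketch is our own illustration, not part of the text; the function name doob_decomposition is hypothetical. It uses the simple symmetric random walk S_n with S_0 = 0 and the submartingale X_n = S_n², for which E(X_n − X_{n−1}|𝔉_{n−1}) = 1 on every path, so that A_n = n and M_n = S_n² − n.

```python
import random

def doob_decomposition(X, cond_incr_mean):
    """Doob decomposition X_n = X_0 + M_n + A_n, where cond_incr_mean[n-1]
    is E(X_n - X_{n-1} | F_{n-1}) evaluated along this sample path."""
    A = [0.0]
    for n in range(1, len(X)):
        # A_n = A_{n-1} + E(X_n - X_{n-1} | F_{n-1}), so A is predictable
        A.append(A[-1] + cond_incr_mean[n - 1])
    # M_n = X_n - X_0 - A_n is the null at 0 martingale part
    M = [X[n] - X[0] - A[n] for n in range(len(X))]
    return M, A

random.seed(0)
steps = [random.choice([-1, 1]) for _ in range(10)]
S = [0]
for e in steps:
    S.append(S[-1] + e)
X = [s * s for s in S]            # X_n = S_n^2, a submartingale
cond = [1.0] * (len(X) - 1)       # E(S_n^2 - S_{n-1}^2 | F_{n-1}) = 1 for the fair walk
M, A = doob_decomposition(X, cond)
print(A)   # the a.s. increasing predictable part: A_n = n
print(M)   # the martingale part: M_n = S_n^2 - n
```

Because the decomposition is unique up to null sets, any other choice of (M, A) with M a null at 0 martingale and A null at 0 and predictable must agree with these values almost everywhere.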
[IV] Martingale Transform

The martingale transform is a prototype of the stochastic integral with respect to martingales. It is the discrete time analog of the stochastic integral in continuous time.

Definition 5.16. Let X = {X_n : n ∈ Z_+} be a martingale, a submartingale or a supermartingale and let C = {C_n : n ∈ Z_+} be a predictable process on a filtered space (Ω, 𝔉, {𝔉_n}, P). By the martingale transform of X by C we mean the stochastic process C • X defined by

(C • X)_0 = 0,  (C • X)_n = Σ_{k=1}^n C_k · {X_k − X_{k−1}}  for n ∈ N.

Observation 5.17. Note that C_0 plays no role in the definition of C • X. Note also that C_k, X_k and X_{k−1} for k = 1, …, n are all 𝔉_n-measurable so that C • X is an adapted process. If C is also a bounded process then Σ_{k=1}^n C_k · {X_k − X_{k−1}} ∈ L_1(Ω, 𝔉_n, P) since X is an L_1-process, so that C • X is an L_1-process. If both C and X are L_2-processes then Σ_{k=1}^n C_k · {X_k − X_{k−1}} ∈ L_1(Ω, 𝔉_n, P) by Hölder's Inequality so that C • X is an L_1-process.
Theorem 5.18. (Martingale Transform) Let (Ω, 𝔉, {𝔉_n : n ∈ Z_+}, P) be a filtered space.
1) If C is a bounded nonnegative predictable process and X is a martingale, a submartingale or a supermartingale on the filtered space then the martingale transform C • X is respectively a null at 0 martingale, submartingale or supermartingale.
2) If C is a bounded predictable process and X is a martingale then C • X is a null at 0 martingale.
3) In 1) and 2) the boundedness condition on C can be replaced by the condition that both C and X are L_2-processes.

Proof. As we noted in Observation 5.17, under the hypothesis in any one of 1), 2) and 3), C • X is an adapted L_1-process. Note also that for any n ∈ N we have

(1)  E[(C • X)_n|𝔉_{n−1}] = Σ_{k=1}^{n−1} C_k · {X_k − X_{k−1}} + E[C_n · {X_n − X_{n−1}}|𝔉_{n−1}]
= Σ_{k=1}^{n−1} C_k · {X_k − X_{k−1}} + C_n · {E(X_n|𝔉_{n−1}) − X_{n−1}}
= (C • X)_{n−1} + C_n · {E(X_n|𝔉_{n−1}) − X_{n−1}}

a.e. on (Ω, 𝔉_{n−1}, P) by the 𝔉_{n−1}-measurability of C_n.
1) Now suppose C is bounded and nonnegative. According as X is a submartingale, a martingale or a supermartingale, we have E(X_n|𝔉_{n−1}) ≥, =, ≤ X_{n−1} a.e. on (Ω, 𝔉_{n−1}, P). Then since C_n ≥ 0 a.e. on (Ω, 𝔉_{n−1}, P), we have C_n · {E(X_n|𝔉_{n−1}) − X_{n−1}} ≥, =, ≤ 0 a.e. on (Ω, 𝔉_{n−1}, P). Using this in (1), we have E[(C • X)_n|𝔉_{n−1}] ≥, =, ≤ (C • X)_{n−1} a.e. on (Ω, 𝔉_{n−1}, P), that is, C • X is a submartingale, a martingale or a supermartingale respectively.
2) Suppose C is bounded and X is a martingale. Then E(X_n|𝔉_{n−1}) = X_{n−1} a.e. on (Ω, 𝔉_{n−1}, P) and thus C_n · {E(X_n|𝔉_{n−1}) − X_{n−1}} = 0 a.e. on (Ω, 𝔉_{n−1}, P). Using this in (1) we have E[(C • X)_n|𝔉_{n−1}] = (C • X)_{n−1} a.e. on (Ω, 𝔉_{n−1}, P). This shows that C • X is a martingale. ■

Definition 5.19. Let T be a stopping time on a filtered space (Ω, 𝔉, {𝔉_n : n ∈ Z_+}, P). By the stopping process derived from T we mean the predictable process C^{(T)} = {C_n^{(T)} : n ∈ Z_+} where C_n^{(T)} is defined by
(1)  C_n^{(T)} = 1_{{n ≤ T}}  for n ∈ Z_+,

in other words,

C_n^{(T)}(ω) = 1 if n ≤ T(ω),  C_n^{(T)}(ω) = 0 if n > T(ω).

The fact that C^{(T)} is a predictable process follows from the fact that {n ≤ T} = {T ≤ n − 1}ᶜ ∈ 𝔉_{n−1} for every n ∈ N.
Theorem 5.20. Let X = {X_n : n ∈ Z_+} be an adapted process and let T be a stopping time on a filtered space (Ω, 𝔉, {𝔉_n : n ∈ Z_+}, P). Then the stopped process X^{T∧} = {X_{T∧n} : n ∈ Z_+} satisfies

X_{T∧n} = X_0 + (C^{(T)} • X)_n  for n ∈ Z_+.

Proof. Note that 1_{{T ≥ k}} − 1_{{T ≥ k+1}} = 1_{{T = k}} for k = 0, 1, …, n − 1, and 1_{{T ≥ 0}} = 1 on Ω. Thus for n ∈ N we have

(C^{(T)} • X)_n = Σ_{k=1}^n 1_{{T ≥ k}} · {X_k − X_{k−1}}
= −X_0 + Σ_{k=0}^{n−1} X_k · 1_{{T = k}} + X_n · 1_{{T ≥ n}}
= −X_0 + X_{T∧n},

that is, X_{T∧n} = X_0 + (C^{(T)} • X)_n for n ∈ N. For n = 0 we have X_{T∧0} = X_0 = X_0 + (C^{(T)} • X)_0. This completes the proof. ■

Theorem 5.21. Let X = {X_n : n ∈ Z_+} be a submartingale, a martingale or a supermartingale and let T be a stopping time on a filtered space (Ω, 𝔉, {𝔉_n : n ∈ Z_+}, P). Then the stopped process X^{T∧} = {X_{T∧n} : n ∈ Z_+} is a submartingale, martingale or supermartingale respectively and in particular E(X_{T∧n}) ≥, =, ≤ E(X_0) respectively for n ∈ Z_+.

Proof. Since our X = {X_n : n ∈ Z_+} is an L_1-process, so is X^{T∧} by Remark 3.31. By Theorem 5.20 we have X^{T∧} = X_0 + (C^{(T)} • X). Since C^{(T)} is a bounded nonnegative predictable process, according to Theorem 5.18, C^{(T)} • X is a null at 0 submartingale, martingale or supermartingale according as X is a submartingale, a martingale or a supermartingale. Therefore X^{T∧} is a submartingale, a martingale or a supermartingale respectively. ■
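The identity of Theorem 5.20 can be checked on a concrete sample path. The sketch below is our own illustration with hypothetical names, not part of the text: it computes the transform (C • X)_n = Σ_{k=1}^n C_k(X_k − X_{k−1}) for C_n^{(T)} = 1_{{n ≤ T}} on one fixed path and compares X_0 + (C^{(T)} • X)_n with the stopped path X_{T∧n}.

```python
def transform(C, X):
    """Martingale transform: (C . X)_0 = 0, (C . X)_n = sum_{k=1}^n C_k (X_k - X_{k-1})."""
    out = [0.0]
    for n in range(1, len(X)):
        out.append(out[-1] + C[n] * (X[n] - X[n - 1]))
    return out

X = [2.0, 3.0, 1.0, 4.0, 4.0, 6.0]   # one sample path X_0, ..., X_5
T = 3                                 # value of the stopping time on this path
C = [1.0 if n <= T else 0.0 for n in range(len(X))]   # C_n^{(T)} = 1_{ {n <= T} }
CX = transform(C, X)
stopped = [X[min(T, n)] for n in range(len(X))]       # X_{T ^ n}
print([X[0] + CX[n] for n in range(len(X))])          # [2.0, 3.0, 1.0, 4.0, 4.0, 4.0]
print(stopped)                                        # [2.0, 3.0, 1.0, 4.0, 4.0, 4.0]
```

Once the stopping time is reached, C^{(T)} vanishes and the transform stops accumulating increments, which is exactly why the stopped path freezes at X_T.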
[V] Some Examples

Example 5.22. Consider the probability space (Ω, 𝔉, P) = ((0, 1], 𝔅_{(0,1]}, m_L) where m_L is the Lebesgue measure on the Borel σ-algebra 𝔅_{(0,1]} of subsets of (0, 1]. For each n ∈ Z_+, let 𝔍_n = {I_{n,k} : k = 1, 2, …, 2ⁿ} where I_{n,k} = ((k − 1)2^{−n}, k2^{−n}], and let 𝔉_n = σ(𝔍_n). Let f be an integrable function on ((0, 1], 𝔅_{(0,1]}, m_L) and define

(1)  M_n(ω) = (1/m_L(I_{n,k})) ∫_{I_{n,k}} f(υ) m_L(dυ)  for ω ∈ I_{n,k}, k = 1, 2, …, 2ⁿ,

that is, M_n is an averaging of f on the members of 𝔍_n. Now since M_n is constant on each member of 𝔍_n, M_n is 𝔉_n-measurable. From (1) we also have

(2)  ∫_{I_{n,k}} M_n dm_L = ∫_{I_{n,k}} f dm_L  for k = 1, 2, …, 2ⁿ.

Since (0, 1] is the union of the finitely many disjoint members I_{n,k} of 𝔍_n, (2) implies

∫_{(0,1]} M_n dm_L = ∫_{(0,1]} f dm_L ∈ R,

that is, M_n is integrable. Therefore M = {M_n : n ∈ Z_+} is an adapted L_1-process on the filtered space ((0, 1], 𝔅_{(0,1]}, {𝔉_n}, m_L). Let us show that for every n ∈ Z_+, M_n is a version of E(f|𝔉_n), that is, E(f|𝔉_n) = M_n a.e. on ((0, 1], 𝔉_n, m_L). Now since M_n is 𝔉_n-measurable we need only verify that

(3)  ∫_E M_n dm_L = ∫_E f dm_L  for every E ∈ 𝔉_n.

But every E ∈ 𝔉_n is the union of finitely many disjoint members of 𝔍_n. Thus (3) follows from (2). Therefore we have shown that E(f|𝔉_n) = M_n a.e. on ((0, 1], 𝔉_n, m_L) for every n ∈ Z_+. This implies that M = {M_n : n ∈ Z_+} is a uniformly integrable martingale.

We show in §8 that if ξ is an integrable random variable on a probability space (Ω, 𝔉, P) and if we define a martingale X = {X_t : t ∈ T} on a filtered space (Ω, 𝔉, {𝔉_t : t ∈ T}, P) by letting X_t be an arbitrary real valued version of E(ξ|𝔉_t) for every t ∈ T, then we have lim_{t→∞} X_t = E(ξ|𝔉_∞) a.e. on (Ω, 𝔉_∞, P). (See Remark 8.3.) For our particular example
here let us show that if f is continuous at ω_0 ∈ (0, 1] then lim_{n→∞} M_n(ω_0) = f(ω_0). Now the continuity of f at ω_0 implies that for every ε > 0 there exists δ > 0 such that

(4)  |f(ω) − f(ω_0)| < ε  for ω ∈ (ω_0 − δ, ω_0 + δ) ∩ (0, 1].

For every n ∈ Z_+, from the disjointness of the members of 𝔍_n there exists a unique k(n) ∈ N such that ω_0 ∈ I_{n,k(n)}. For the sequence of intervals {I_{n,k(n)} : n ∈ Z_+} we have I_{n,k(n)} ↓ as n → ∞ and ∩_{n∈Z_+} I_{n,k(n)} = {ω_0}. Since m_L(I_{n,k(n)}) = 1/2ⁿ ↓ 0 as n → ∞ and since ω_0 ∈ I_{n,k(n)} for every n ∈ Z_+, there exists N ∈ Z_+ such that

(5)  I_{n,k(n)} ⊂ (ω_0 − δ, ω_0 + δ) ∩ (0, 1]  for n ≥ N.

By (1)

(6)  M_n(ω_0) = 2ⁿ ∫_{I_{n,k(n)}} f dm_L  for n ∈ Z_+.

Using (5) and (4) in (6) we have f(ω_0) − ε ≤ M_n(ω_0) ≤ f(ω_0) + ε for n ≥ N. This shows that lim_{n→∞} M_n(ω_0) = f(ω_0).
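Example 5.22 can be reproduced numerically. The sketch below is our own code, not part of the text; the helper name M_n and the grid parameter are assumptions of the illustration. It approximates the dyadic average M_n(ω_0) of f(ω) = ω² by a midpoint Riemann sum; since ω_0 = 0.3 is a continuity point, M_n(ω_0) should approach f(ω_0) = 0.09 as n grows.

```python
from math import ceil

def M_n(f, n, w, grid=1000):
    """Average of f over the dyadic interval I_{n,k} = ((k-1)2^{-n}, k 2^{-n}]
    containing w, approximated by a midpoint Riemann sum with `grid` cells."""
    k = ceil(w * 2 ** n)                      # unique k with w in I_{n,k}
    a, b = (k - 1) / 2 ** n, k / 2 ** n
    h = (b - a) / grid
    # (1/m_L(I_{n,k})) * integral of f over I_{n,k}
    return sum(f(a + (j + 0.5) * h) for j in range(grid)) * h / (b - a)

f = lambda w: w * w                           # continuous on (0, 1]
w0 = 0.3
vals = [M_n(f, n, w0) for n in range(1, 9)]
print(vals[-1])                               # close to f(w0) = 0.09
```

As n increases the interval I_{n,k(n)} shrinks to {ω_0}, which is exactly the mechanism behind (5) and (6) above.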
Example 5.23. (Sums of Independent Random Variables with Mean 0) Consider a sequence of independent random variables X = {X_n : n ∈ Z_+} on a probability space (Ω, 𝔉, P) with E(X_n) = 0 for n ∈ Z_+. Let

S_n = X_0 + ⋯ + X_n  and  𝔉_n = σ{X_0, …, X_n}  for n ∈ Z_+.

If we let S = {S_n : n ∈ Z_+} then S is an adapted L_1-process on the filtered space (Ω, 𝔉, {𝔉_n}, P). To show that S is a martingale we verify that E(S_n|𝔉_{n−1}) = S_{n−1} a.e. on (Ω, 𝔉_{n−1}, P) for every n ∈ N. Now the independence of the system of random variables {X_0, …, X_n} implies the independence of the system {(X_0, …, X_{n−1}), X_n} of two random vectors, or equivalently, the independence of σ{(X_0, …, X_{n−1})} and X_n. Since σ{(X_0, …, X_{n−1})} = σ{X_0, …, X_{n−1}} = 𝔉_{n−1} we have the independence of 𝔉_{n−1} and X_n and this implies E(X_n|𝔉_{n−1}) = E(X_n) = 0 a.e. on (Ω, 𝔉_{n−1}, P). Then

E(S_n|𝔉_{n−1}) = E(S_{n−1} + X_n|𝔉_{n−1}) = E(S_{n−1}|𝔉_{n−1}) + E(X_n|𝔉_{n−1}) = S_{n−1}  a.e. on (Ω, 𝔉_{n−1}, P).

Therefore S is a martingale.
Example 5.24. (Products of Nonnegative Independent Random Variables with Mean 1) Let X = {X_n : n ∈ Z_+} be a sequence of independent nonnegative random variables on a probability space (Ω, 𝔉, P) with E(X_n) = 1 for n ∈ Z_+. Let

M_n = X_0 ⋯ X_n  and  𝔉_n = σ{X_0, …, X_n}  for n ∈ Z_+.

Since X_0, …, X_n are nonnegative we have E(M_n) = E(X_0) ⋯ E(X_n) = 1 by the independence and by the Tonelli Theorem. Thus M = {M_n : n ∈ Z_+} is an adapted L_1-process on the filtered space (Ω, 𝔉, {𝔉_n}, P). As we noted in Example 5.23, 𝔉_{n−1} and X_n are independent and this implies E(X_n|𝔉_{n−1}) = E(X_n) = 1 a.e. on (Ω, 𝔉_{n−1}, P) for n ∈ N. Thus we have

E(M_n|𝔉_{n−1}) = E(M_{n−1}X_n|𝔉_{n−1}) = M_{n−1}E(X_n|𝔉_{n−1}) = M_{n−1}  a.e. on (Ω, 𝔉_{n−1}, P).

This shows that M is a martingale.

Example 5.25. (Processes with Independent Increments with Mean 0) Let X = {X_t : t ∈ R_+} be an L_1-process with independent increments on a probability space (Ω, 𝔉, P) with E(X_t − X_s) = 0 for s, t ∈ R_+ such that s < t. By independence of increments we mean that for every finite strictly increasing sequence t_1 < ⋯ < t_n in R_+ the system of random variables {X_{t_1}, X_{t_2} − X_{t_1}, X_{t_3} − X_{t_2}, …, X_{t_n} − X_{t_{n−1}}} is an independent one. If we let 𝔉_t^X = σ{X_s : s ∈ [0, t]} for t ∈ R_+ then X is an adapted L_1-process on the filtered space (Ω, 𝔉, {𝔉_t^X}, P). The independence of increments implies that for s, t ∈ R_+, s < t, we have the independence of 𝔉_s^X and X_t − X_s. (See Theorem 13.10.) Thus E(X_t − X_s|𝔉_s^X) = E(X_t − X_s) = 0. Then

E(X_t|𝔉_s^X) = E(X_t − X_s|𝔉_s^X) + E(X_s|𝔉_s^X) = X_s  a.e. on (Ω, 𝔉_s^X, P).

Therefore X is a martingale.
§6 Fundamental Submartingale Inequalities A submartingale increases on average. From this monotonicity condition we derive some basic inequalities for estimating the behavior of sample functions of a submartingale. These inequalities are derived first for discrete time submartingales by means of truncation by stopping times and then extended to cover continuous time submartingales.
[I] Optional Stopping and Optional Sampling

In §3 we showed that if X is an adapted process and T is a stopping time on a filtered space then X_T is always a random variable when the time parameter is discrete, and when the time parameter is continuous then X_T is a random variable if we assume that X is a measurable process. Let us consider the integrability of X_T.

Theorem 6.1. (Doob's Optional Stopping Theorem) Let X = {X_n : n ∈ Z_+} be a submartingale and T be a stopping time on a filtered space (Ω, 𝔉, {𝔉_n}, P). Assume that T is finite a.e. on (Ω, 𝔉_∞, P). Then E(X_0) ≤ E(X_T) < ∞ under each one of the following conditions.
(a) T is bounded, that is, there exists m ∈ Z_+ such that T(ω) ≤ m for ω ∈ Ω.
(b) X is bounded, that is, there exists K > 0 such that |X_n(ω)| ≤ K for (n, ω) ∈ Z_+ × Ω.
(c) T is integrable and X has bounded increments, that is, there exists L > 0 such that |X_n(ω) − X_{n−1}(ω)| ≤ L for (n, ω) ∈ N × Ω.
(d) X ≤ 0 on Z_+ × Ω.
If X is a martingale, then under any one of the conditions (a), (b), and (c), we have E(X_T) = E(X_0).

Proof. If X is a submartingale then the stopped process X^{T∧} = {X_{T∧n} : n ∈ Z_+} is also a submartingale according to Theorem 5.21. Then {E(X_{T∧n}) : n ∈ Z_+} is an increasing sequence in R which is bounded below by E(X_{T∧0}) = E(X_0). Let us note that if T(ω) < ∞ for some ω ∈ Ω, then there exists N ∈ Z_+ such that T(ω) ∧ n = T(ω) for n ≥ N so that

lim_{n→∞} X_{T∧n}(ω) = lim_{n→∞} X(T(ω) ∧ n, ω) = X(T(ω), ω) = X_T(ω).

Therefore if T is finite a.e. on (Ω, 𝔉_∞, P) then

(1)  lim_{n→∞} X_{T∧n} = X_T  a.e. on (Ω, 𝔉_∞, P).

1) If we assume (a) then T ∧ m = T so that E(X_0) ≤ E(X_{T∧m}) = E(X_T) and then E(X_T) < ∞. If X is a martingale then so is X^{T∧} by Theorem 5.21. This implies that E(X_{T∧m}) = E(X_0) and therefore E(X_T) = E(X_0).
2) Let us assume (b). Now the condition |X_n(ω)| ≤ K for (n, ω) ∈ Z_+ × Ω implies |X_{T(ω)∧n}(ω)| ≤ K for (n, ω) ∈ Z_+ × Ω. Then by (1), the Bounded Convergence Theorem is applicable and we have E(X_T) = lim_{n→∞} E(X_{T∧n}) ≥ E(X_0). Since |E(X_{T∧n})| ≤ K for n ∈ Z_+ we have E(X_0) ≤ E(X_T) ≤ K. If X is a martingale then X^{T∧} is a martingale so that E(X_{T∧n}) = E(X_0) for n ∈ Z_+. This implies E(X_T) = E(X_0).
3) Assume (c). For every (n, ω) ∈ Z_+ × Ω we have

X_{T∧n}(ω) − X_0(ω) = Σ_{k=1}^{T(ω)∧n} {X_k(ω) − X_{k−1}(ω)}.

Then for every n ∈ Z_+ we have

(2)  |X_{T∧n} − X_0| ≤ L · (T ∧ n) ≤ L · T.

Now the integrability of T implies that T is finite a.e. on (Ω, 𝔉_∞, P) so that (1) is applicable. Then we have lim_{n→∞} (X_{T∧n} − X_0) = X_T − X_0 a.e. on (Ω, 𝔉_∞, P). Thus by the Dominated Convergence Theorem, X_T − X_0 is integrable and E(X_T − X_0) = lim_{n→∞} E(X_{T∧n} − X_0) ≥ 0. This shows that X_T is integrable and E(X_0) ≤ E(X_T) < ∞. If X is a martingale then E(X_T) = E(X_0) by the same reason as in 2).
4) Assume (d). Then since X(n, ω) ≤ 0 for (n, ω) ∈ Z_+ × Ω we have X_{T∧n}(ω) = X(T(ω) ∧ n, ω) ≤ 0 for (n, ω) ∈ Z_+ × Ω. Also, since T < ∞ a.e. on (Ω, 𝔉_∞, P), we have X_T(ω) = X(T(ω), ω) ≤ 0 for a.e. ω in (Ω, 𝔉_∞, P). Then by (1) and by Fatou's Lemma for the limit superior of a sequence of nonpositive functions, we have the inequalities 0 ≥ E(X_T) ≥ limsup_{n→∞} E(X_{T∧n}) ≥ E(X_0). ■
Corollary 6.2. Let X = {X_n : n ∈ Z_+} be a martingale with bounded increments, C = {C_n : n ∈ Z_+} be a bounded predictable process and T be an integrable stopping time on a filtered space (Ω, 𝔉, {𝔉_n}, P). Then for the martingale transform C • X of X by C we have E[(C • X)_T] = 0.

Proof. Let K, L > 0 be such that |C_n| ≤ K for n ∈ Z_+ and |X_n − X_{n−1}| ≤ L for n ∈ N. Since X is a martingale and C is a bounded predictable process, C • X is a null at 0 martingale by 2) of Theorem 5.18. From Definition 5.16

|(C • X)_n − (C • X)_{n−1}| = |C_n(X_n − X_{n−1})| ≤ KL  for n ∈ N.

Thus (c) of Theorem 6.1 is satisfied and therefore E[(C • X)_T] = E[(C • X)_0] = 0. ■

In the next theorem we extend part of Theorem 6.1 to bounded submartingales with continuous time. Extensions to unbounded, but uniformly integrable, submartingales will be given in §8.
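Condition (a) of Theorem 6.1 lends itself to exact numerical verification, since with a bounded stopping time every sample path can be enumerated. The sketch below is our own construction, not from the text: it stops the fair ±1 random walk at the first visit to {−2, 3}, capped at m = 8, and computes E(S_T) exactly with rational arithmetic. Since S is a martingale and T ≤ m, Theorem 6.1 gives E(S_T) = E(S_0) = 0.

```python
from itertools import product
from fractions import Fraction

m = 8                                      # bound on the stopping time: T <= m
total = Fraction(0)
for steps in product((-1, 1), repeat=m):   # all 2^m equally likely paths
    S = [0]
    for e in steps:
        S.append(S[-1] + e)
    # T = first n with S_n in {-2, 3}, capped at m (a bounded stopping time)
    T = next((n for n in range(m + 1) if S[n] in (-2, 3)), m)
    total += Fraction(1, 2 ** m) * S[T]    # contribution of this path to E(S_T)
print(total)                               # 0
```

Replacing the walk by a submartingale in this enumeration would instead produce E(S_0) ≤ E(S_T), in line with the inequality of the theorem.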
Theorem 6.3. (Optional Stopping Theorem with Continuous Time) Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a right-continuous submartingale and $T$ be a stopping time on a right-continuous filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Assume that $T$ is finite a.e. on $(\Omega, \mathfrak{F}_\infty, P)$. If $X$ is bounded, that is, there exists $K > 0$ such that $|X(t,\omega)| \le K$ for $(t,\omega) \in \mathbb{R}_+ \times \Omega$, then $E(X_0) \le E(X_T) < \infty$. If $X$ is a bounded right-continuous martingale then $E(X_T) = E(X_0)$.

Proof. For each $n \in \mathbb{N}$, let
$$\vartheta_n(t) = \begin{cases} k 2^{-n} & \text{for } t \in [(k-1)2^{-n}, k2^{-n}), \ k \in \mathbb{N}, \\ \infty & \text{for } t = \infty, \end{cases}$$
and let $T_n = \vartheta_n \circ T$. According to Theorem 3.20, $\{T_n : n \in \mathbb{N}\}$ is a decreasing sequence of stopping times on $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ with $T_n$ assuming values in $\{k2^{-n} : k \in \mathbb{Z}_+\} \cup \{\infty\}$ and $T_n \downarrow T$ uniformly on $\Omega$ as $n \to \infty$. Let $\Lambda = \{T = \infty\}$ and $\Lambda_n = \{T_n = \infty\}$ for $n \in \mathbb{N}$. By the definition of $\vartheta_n$ we have $\Lambda_n = T_n^{-1}(\{\infty\}) = T^{-1} \circ \vartheta_n^{-1}(\{\infty\}) = T^{-1}(\{\infty\}) = \Lambda$. Since $P(\Lambda) = 0$ we have $P(\Lambda_n) = 0$ for every $n \in \mathbb{N}$. Thus $T_n$ is finite a.e. on $(\Omega, \mathfrak{F}_\infty, P)$ for every $n \in \mathbb{N}$.

Now for each fixed $n \in \mathbb{N}$, consider the filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_{k2^{-n}} : k \in \mathbb{Z}_+\}, P)$. Then $T_n$ is a stopping time on this filtered space as we noted in 2) of Remark 3.23. Since $X$ is a submartingale on $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t : t \in \mathbb{R}_+\}, P)$, $\{X_{k2^{-n}} : k \in \mathbb{Z}_+\}$ is a submartingale on our discrete time filtered space above. Thus by Theorem 6.1, we have $E(X_0) \le E(X_{T_n}) < \infty$. Since $T_n \downarrow T$ on $\Omega$ as $n \to \infty$ and since $X$ is right-continuous, we have $\lim_{n\to\infty} X_{T_n}(\omega) = \lim_{n\to\infty} X(T_n(\omega), \omega) = X_T(\omega)$ for $\omega \in \Lambda^c$. Since $X$ is bounded by $K$, we have $|X_{T_n}| \le K$ on $\Lambda^c$. Thus by the Bounded Convergence Theorem, we have $\lim_{n\to\infty} E(X_{T_n}) = E(X_T)$. Since $|E(X_{T_n})| \le K$, we have $|E(X_T)| \le K$. Thus $E(X_0) \le E(X_T) < \infty$. When $X$ is a martingale, we have $E(X_{T_n}) = E(X_0)$ for every $n \in \mathbb{N}$ by Theorem 6.1 and thus $E(X_T) = E(X_0)$. ∎

Theorem 6.4. (Doob's Optional Sampling Theorem with Bounded Stopping Times, Discrete Case) Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a submartingale and let $S$ and $T$ be stopping times satisfying $S \le T \le m$ for some $m \in \mathbb{Z}_+$ on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$. Then
(1) $\quad E(X_T \mid \mathfrak{F}_S) \ge X_S$ a.e. on $(\Omega, \mathfrak{F}_S, P)$,
and in particular we have
(2) $\quad E(X_T) \ge E(X_S)$
and
(3) $\quad E(X_m) \ge E(X_T) \ge E(X_0)$.
If $X$ is a martingale, then equalities in (1), (2), and (3) hold.

Proof. Since $S$ and $T$ are stopping times, $X_S$ and $X_T$ are $\mathfrak{F}_S$- and $\mathfrak{F}_T$-measurable respectively as we noted following Definition 3.21. The boundedness of $S$ and $T$ implies the integrability of $X_S$ and $X_T$ by Theorem 6.1. Let us define a stochastic process $D^{(S,T]} = \{D^{(S,T]}_n : n \in \mathbb{Z}_+\}$ on the filtered space by setting $D^{(S,T]}_n = 1_{\{S < n \le T\}}$ for $n \in \mathbb{Z}_+$. This is a bounded predictable process, since $\{S < n \le T\} = \{S \le n-1\} \cap \{T \le n-1\}^c \in \mathfrak{F}_{n-1}$. Then
$$(D^{(S,T]} \bullet X)_n = \sum_{k=1}^{n} 1_{\{S < k \le T\}}(X_k - X_{k-1}) = (X_{T \wedge n} - X_0) - (X_{S \wedge n} - X_0) = X_{T \wedge n} - X_{S \wedge n}$$
by the computation made in the proof of Theorem 5.20. Since $S \le T \le m$ we have
$$(D^{(S,T]} \bullet X)_m = X_T - X_S.$$
Since $D^{(S,T]} \bullet X$ is a submartingale, we have
$$E[(D^{(S,T]} \bullet X)_m] \ge E[(D^{(S,T]} \bullet X)_0] = E(0) = 0$$
and thus $E(X_T - X_S) \ge 0$,
proving (2). To prove (1), note that since $E(X_T \mid \mathfrak{F}_S)$ and $X_S$ are both $\mathfrak{F}_S$-measurable it suffices to show
(4) $\quad \int_A X_T \, dP \ge \int_A X_S \, dP$ for $A \in \mathfrak{F}_S$.
To prove (4), for each $A \in \mathfrak{F}_S$ define two random variables $S_A$ and $T_A$ on $(\Omega, \mathfrak{F}, P)$ by setting $S_A = S$ on $A$ and $S_A = m$ on $A^c$, and similarly $T_A = T$ on $A$ and $T_A = m$ on $A^c$. Then $S_A$ and $T_A$ are stopping times. Indeed since $A \in \mathfrak{F}_S$ we have
$$\{S_A \le n\} = \begin{cases} \Omega \in \mathfrak{F}_n & \text{for } n \ge m, \\ \{S \le n\} \cap A \in \mathfrak{F}_n & \text{for } n < m, \end{cases}$$
and similarly for $T_A$ since $A \in \mathfrak{F}_S \subset \mathfrak{F}_T$. (One can also argue that since $S_A$ is $\mathfrak{F}_S$-measurable and $S_A \ge S$, $S_A$ is a stopping time by a discrete time version of Theorem 3.6. Similarly for $T_A$.) Now that $S_A$ and $T_A$ are stopping times and $S_A \le T_A \le m$, we have by (2)
(5) $\quad E(X_{T_A}) \ge E(X_{S_A})$.
But $E(X_{T_A}) = \int_A X_T \, dP + \int_{A^c} X_m \, dP$ and similarly $E(X_{S_A}) = \int_A X_S \, dP + \int_{A^c} X_m \, dP$. By these two equations and (5) we have $\int_A X_T \, dP \ge \int_A X_S \, dP$. This proves (4).

To derive (3), note that since $0$, $T$ and $m$ are bounded stopping times and $0 \le T \le m$, (3) is implied by (2). If $X$ is a supermartingale, then $-X$ is a submartingale so that the inequalities (1), (2), and (3) are reversed for a supermartingale. Then since a martingale is both a submartingale and a supermartingale, the equalities in (1), (2), and (3) hold. ∎

Optional sampling theorems with unbounded stopping times for uniformly integrable submartingales in both discrete and continuous time will be proved in §8.

Corollary 6.5. Let $T$ be a stopping time on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n : n \in \mathbb{Z}_+\}, P)$ satisfying a boundedness condition $T \le m$ for some $m \in \mathbb{Z}_+$.
1) If $X = \{X_n : n \in \mathbb{Z}_+\}$ is a submartingale on the filtered space, then
(1) $\quad E(|X_T|) \le -E(X_0) + 2E(X_m^+) \le 3 \sup_{n=0,\dots,m} E(|X_n|)$.
2) If $X = \{X_n : n \in \mathbb{Z}_+\}$ is a supermartingale on the filtered space, then
(2) $\quad E(|X_T|) \le E(X_0) + 2E(X_m^-) \le 3 \sup_{n=0,\dots,m} E(|X_n|)$.
Proof. 1) Suppose $X$ is a submartingale. Since $X_T = X_T^+ - X_T^-$ and $|X_T| = X_T^+ + X_T^-$, we have $|X_T| + X_T = 2X_T^+$. Thus
(3) $\quad E(|X_T|) = -E(X_T) + 2E(X_T^+)$.
Since $X$ is a submartingale, $X^+$ is a submartingale by Corollary 5.10. Applying (3) of Theorem 6.4 to $X$ and $X^+$ we have $-E(X_T) \le -E(X_0)$ and $E(X_T^+) \le E(X_m^+)$. Using these in (3) we have (1).
2) When $X$ is a supermartingale, $-X$ is a submartingale. Applying (1) to $-X$ we have $E(|-X_T|) \le -E(-X_0) + 2E((-X_m)^+)$, that is, $E(|X_T|) \le E(X_0) + 2E(X_m^-)$. ∎

Corollary 6.6. Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be an adapted $L_1$-process on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$. Then $X$ is a submartingale if and only if for any two bounded stopping times $S$ and $T$ such that $S \le T$ we have $E(X_T) \ge E(X_S)$.

Proof. If $X$ is a submartingale, then $E(X_T) \ge E(X_S)$ for any two bounded stopping times $S \le T$ by (2) of Theorem 6.4. Conversely, suppose $E(X_T) \ge E(X_S)$ for any two such stopping times. To show that $X$ is a submartingale it suffices to show that for every $n, m \in \mathbb{Z}_+$ with $n < m$ we have
(1) $\quad \int_E X_m \, dP \ge \int_E X_n \, dP$ for every $E \in \mathfrak{F}_n$.
Now with $E \in \mathfrak{F}_n$ given, let us define two random variables $S$ and $T$ on $(\Omega, \mathfrak{F}, P)$ by setting $S = n$ on $E$, $S = m$ on $E^c$, and $T = m$ on $\Omega$. Then $S$ is a stopping time since for every $k \in \mathbb{Z}_+$ we have
$$\{S \le k\} = \begin{cases} \emptyset \in \mathfrak{F}_k & \text{for } k < n, \\ E \in \mathfrak{F}_k & \text{for } n \le k < m, \\ \Omega \in \mathfrak{F}_k & \text{for } k \ge m, \end{cases}$$
and $T$ is trivially a stopping time. Clearly $S \le T \le m$, so by our assumption $E(X_T) \ge E(X_S)$. But
$$E(X_T) = \int_E X_m \, dP + \int_{E^c} X_m \, dP \quad \text{and} \quad E(X_S) = \int_E X_n \, dP + \int_{E^c} X_m \, dP.$$
Therefore (1) holds and $X$ is a submartingale.
If $X$ is a supermartingale, then $-X$ is a submartingale. Therefore by our results above, $X$ is a supermartingale if and only if $E(X_T) \le E(X_S)$ for any two bounded stopping times $S$ and $T$ such that $S \le T$. ∎
[II] Maximal and Minimal Inequalities

For the sample path of a submartingale $X = \{X_n : n \in \mathbb{Z}_+\}$, Doob's maximal inequality gives an estimate of the probability that the maximum of $\{X_0(\omega), \dots, X_m(\omega)\}$ exceeds a positive number $\lambda$ in terms of the expectation of $X_m$.

Theorem 6.7. (Doob's Maximal and Minimal Inequalities, Finite Case) Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$. Then for every $m \in \mathbb{Z}_+$ and $\lambda > 0$ we have
(1) $\quad \lambda P\{\max_{n=0,\dots,m} X_n \ge \lambda\} \le \int_{\{\max_{n=0,\dots,m} X_n \ge \lambda\}} X_m \, dP \le E(X_m^+)$
and
(2) $\quad \lambda P\{\min_{n=0,\dots,m} X_n \le -\lambda\} \le \int_{\{\min_{n=0,\dots,m} X_n > -\lambda\}} X_m \, dP - E(X_0) \le E(X_m^+) - E(X_0)$.

Proof. 1) To prove (1), define a function $T$ on $\Omega$ by setting
(3) $\quad T(\omega) = \begin{cases} \min\{n = 0, \dots, m : X_n(\omega) \ge \lambda\}, \\ m \quad \text{if the set above is } \emptyset. \end{cases}$
As we noted in Remark 3.23, to show that $T$ is a stopping time it suffices to show that $\{T = n\} \in \mathfrak{F}_n$ for $n \in \mathbb{Z}_+$. For this observe that
$$\{T = n\} = \{X_0 < \lambda, \dots, X_{n-1} < \lambda, X_n \ge \lambda\} \in \mathfrak{F}_n \quad \text{for } n = 0, \dots, m-1,$$
$$\{T = m\} = \{X_0 < \lambda, \dots, X_{m-1} < \lambda, X_m \ge \lambda\} \cup \{X_0 < \lambda, \dots, X_m < \lambda\} \in \mathfrak{F}_m,$$
and $\{T = n\} = \emptyset \in \mathfrak{F}_n$ for $n > m$.
For brevity let $A = \{\max_{n=0,\dots,m} X_n \ge \lambda\}$. Since $X$ is a submartingale and $T$ is a stopping time bounded by $m$ we have by (3) of Theorem 6.4
(4) $\quad E(X_m) \ge E(X_T) = \int_A X_T \, dP + \int_{A^c} X_T \, dP$.
If $A = \emptyset$, then (1) holds trivially. If $A \ne \emptyset$, then on this set we have $X_T \ge \lambda$ by (3). By (3), we also have $T = m$ on $A^c$. Thus by (4) we have
$$E(X_m) \ge \lambda P(A) + \int_{A^c} X_m \, dP$$
and therefore
$$\lambda P(A) \le E(X_m) - \int_{A^c} X_m \, dP = \int_A X_m \, dP \le \int_\Omega X_m^+ \, dP.$$
This proves (1).
2) To prove (2), let
(5) $\quad S(\omega) = \begin{cases} \min\{n = 0, \dots, m : X_n(\omega) \le -\lambda\}, \\ m \quad \text{if the set above is } \emptyset. \end{cases}$
The fact that $S$ is a stopping time can be verified as we did for $T$ above. For brevity, let $B = \{\min_{n=0,\dots,m} X_n \le -\lambda\}$. By (3) of Theorem 6.4 we have
(6) $\quad E(X_0) \le E(X_S) = \int_B X_S \, dP + \int_{B^c} X_S \, dP$.
If $B = \emptyset$, then (2) holds trivially. If $B \ne \emptyset$, then on this set we have $X_S \le -\lambda$ by (5). Also $S = m$ on $B^c$. Thus by (6) we have
$$E(X_0) \le -\lambda P(B) + \int_{B^c} X_m \, dP.$$
Therefore
$$\lambda P(B) \le -E(X_0) + \int_{B^c} X_m \, dP \le -E(X_0) + \int_\Omega X_m^+ \, dP.$$
This proves (2). ∎
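Doob's maximal inequality can be checked exactly on a small example by enumerating all sample paths. The sketch below is our own illustration, not part of the text: it takes $X_n = |S_n|$ for a symmetric random walk $S$ (a nonnegative submartingale) with $m = 10$, computes the exact expectations over all $2^{10}$ equally likely paths, and verifies the chain $\lambda P(A) \le \int_A X_m \, dP \le E(X_m^+)$ of (1) of Theorem 6.7.

```python
from itertools import product

m, lam = 10, 3.0
total = 2 ** m
p_max = 0.0            # P{ max_n X_n >= lam }
integral_over_A = 0.0  # integral of X_m over A = { max_n X_n >= lam }
e_xm_plus = 0.0        # E(X_m^+) (here X_m >= 0, so this is E(X_m))

# Enumerate all 2^m equally likely +/-1 step sequences of the walk S.
for steps in product([-1, 1], repeat=m):
    s, running_max = 0, 0
    for d in steps:
        s += d
        running_max = max(running_max, abs(s))   # max of X_n = |S_n|
    xm = abs(s)
    if running_max >= lam:
        p_max += 1 / total
        integral_over_A += xm / total
    e_xm_plus += xm / total

print(lam * p_max, integral_over_A, e_xm_plus)
```

Since the expectations are computed exactly (no sampling), the inequality holds with no statistical error beyond floating-point rounding.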
Corollary 6.8. If $X = \{X_n : n \in \mathbb{Z}_+\}$ is a supermartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$ and if $m \in \mathbb{Z}_+$ and $\lambda > 0$, then
(1) $\quad \lambda P\{\max_{n=0,\dots,m} X_n \ge \lambda\} \le E(X_0) - \int_{\{\max_{n=0,\dots,m} X_n < \lambda\}} X_m \, dP \le E(X_0) + E(X_m^-)$
and
(2) $\quad \lambda P\{\min_{n=0,\dots,m} X_n \le -\lambda\} \le -\int_{\{\min_{n=0,\dots,m} X_n \le -\lambda\}} X_m \, dP \le E(X_m^-)$.

Proof. (1) and (2) follow respectively from (2) and (1) of Theorem 6.7 applied to the submartingale $-X$. ∎

Corollary 6.9. Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be an $L_2$-martingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$. Then for every $m \in \mathbb{Z}_+$ and $\lambda > 0$
(1) $\quad \lambda^2 P\{\max_{n=0,\dots,m} |X_n| \ge \lambda\} \le E(X_m^2)$.

Proof. Since $X$ is an $L_2$-martingale, $X^2 = \{X_n^2 : n \in \mathbb{Z}_+\}$ is a submartingale by Corollary 5.12. Then by (1) of Theorem 6.7,
$$\lambda^2 P\{\max_{n=0,\dots,m} X_n^2 \ge \lambda^2\} \le E(X_m^2).$$
From this (1) follows. ∎

Let us consider a nonnegative submartingale $X = \{X_n : n \in \mathbb{Z}_+\}$ on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$. Then for every $m \in \mathbb{Z}_+$ and $\lambda > 0$ the two random variables $\xi = \max_{n=0,\dots,m} X_n$ and $\eta = X_m$ are nonnegative and satisfy, according to (1) of Theorem 6.7, the inequality
$$\lambda P\{\xi \ge \lambda\} \le \int_{\{\xi \ge \lambda\}} \eta \, dP \quad \text{for } \lambda > 0.$$
In the next lemma we show that if $\xi$ and $\eta$ are two arbitrary nonnegative random variables satisfying the inequality above and if $\xi \in L_p(\Omega, \mathfrak{F}, P)$ for some $p \in (1,\infty)$, then $\|\xi\|_p \le q \|\eta\|_p$, where $q$ is the conjugate exponent of $p$. This result will be used in proving the Doob-Kolmogorov Inequality for nonnegative $L_p$-submartingales.

Lemma 6.10. Let $X$ and $Y$ be nonnegative random variables on a probability space $(\Omega, \mathfrak{F}, P)$ satisfying the condition
(1) $\quad \lambda P\{X \ge \lambda\} \le \int_{\{X \ge \lambda\}} Y \, dP$ for every $\lambda > 0$.
If $X \in L_p(\Omega, \mathfrak{F}, P)$ for some $p \in (1,\infty)$ and $q \in (1,\infty)$ is its conjugate exponent, then
(2) $\quad \|X\|_p \le q \|Y\|_p$.

Proof. Multiplying both sides of (1) by $p\lambda^{p-2}$ we have
(3) $\quad p\lambda^{p-1} P\{X \ge \lambda\} \le p\lambda^{p-2} \int_{\{X \ge \lambda\}} Y \, dP$.
Since $P\{X \ge \lambda\}$ and $\int_{\{X \ge \lambda\}} Y \, dP$ are decreasing functions of $\lambda \in (0,\infty)$, they are Borel measurable functions on $(0,\infty)$. Let $m_L$ be the Lebesgue measure on $((0,\infty), \mathfrak{B}_{(0,\infty)})$. Integrating both sides of (3) we have
(4) $\quad \int_{(0,\infty)} p\lambda^{p-1} P\{X \ge \lambda\} \, m_L(d\lambda) \le \int_{(0,\infty)} p\lambda^{p-2} \Big( \int_{\{X \ge \lambda\}} Y \, dP \Big) \, m_L(d\lambda)$.
To change the order of integration in the iterated integrals in (4) we need to verify the measurability of the integrands as functions on the product measure space $((0,\infty) \times \Omega, \sigma(\mathfrak{B}_{(0,\infty)} \times \mathfrak{F}), m_L \times P)$. Let $\iota$ be the identity mapping of $(0,\infty)$ into $\mathbb{R}$. Since $X$ and $Y$ are $\mathfrak{F}/\mathfrak{B}_{\mathbb{R}}$-measurable mappings of $\Omega$ into $\mathbb{R}$, all three mappings $\iota$, $X$, and $Y$ may be regarded as $\sigma(\mathfrak{B}_{(0,\infty)} \times \mathfrak{F})/\mathfrak{B}_{\mathbb{R}}$-measurable mappings of $(0,\infty) \times \Omega$ into $\mathbb{R}$. This implies that $\{(\lambda, \omega) : X(\omega) \ge \lambda\} \in \sigma(\mathfrak{B}_{(0,\infty)} \times \mathfrak{F})$ and thus $1_{\{X \ge \lambda\}}$ is a $\sigma(\mathfrak{B}_{(0,\infty)} \times \mathfrak{F})/\mathfrak{B}_{\mathbb{R}}$-measurable mapping of $(0,\infty) \times \Omega$ into $\mathbb{R}$. Therefore $p\lambda^{p-1} 1_{\{X \ge \lambda\}}$ and $p\lambda^{p-2} 1_{\{X \ge \lambda\}} Y$ are measurable on the product space. For the left side of (4) we have
$$\begin{aligned}
\int_{(0,\infty)} p\lambda^{p-1} P\{X \ge \lambda\} \, m_L(d\lambda)
&= \int_{(0,\infty)} p\lambda^{p-1} \Big( \int_\Omega 1_{\{X \ge \lambda\}}(\omega) \, P(d\omega) \Big) \, m_L(d\lambda) \\
&= \int_\Omega \Big( \int_{(0,\infty)} p\lambda^{p-1} 1_{\{X \ge \lambda\}}(\omega) \, m_L(d\lambda) \Big) \, P(d\omega) \\
&= \int_\Omega \Big( \int_{(0, X(\omega)]} p\lambda^{p-1} \, m_L(d\lambda) \Big) \, P(d\omega) \\
&= \int_\Omega X(\omega)^p \, P(d\omega) = \|X\|_p^p.
\end{aligned}$$
Similarly for the right side of (4), we have
$$\begin{aligned}
\int_{(0,\infty)} p\lambda^{p-2} \Big( \int_{\{X \ge \lambda\}} Y \, dP \Big) \, m_L(d\lambda)
&= \int_\Omega \Big( \int_{(0, X(\omega)]} p\lambda^{p-2} \, m_L(d\lambda) \Big) Y(\omega) \, P(d\omega) \\
&= \int_\Omega q X(\omega)^{p-1} Y(\omega) \, P(d\omega) \quad \text{since } p(p-1)^{-1} = q \\
&\le q \|Y\|_p \, \|X^{p-1}\|_q \quad \text{by H\"older's Inequality} \\
&= q \|Y\|_p \Big\{ \int_\Omega X^p \, dP \Big\}^{1/q} \quad \text{since } (p-1)q = p \\
&= q \|Y\|_p \, \|X\|_p^{p-1}.
\end{aligned}$$
Thus (4) becomes
(5) $\quad \|X\|_p^p \le q \|Y\|_p \, \|X\|_p^{p-1}$.
Since $\|X\|_p < \infty$ we have $\|X\|_p^{p-1} < \infty$. If $\|X\|_p = 0$ then (2) holds trivially. If $\|X\|_p \ne 0$ then by dividing both sides of (5) by the finite positive number $\|X\|_p^{p-1}$ and recalling $p - (p-1) = 1$ we have (2). ∎

Theorem 6.11. (Doob-Kolmogorov Inequality) Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a nonnegative $L_p$-submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$ for some $p \in (1,\infty)$ with conjugate exponent $q \in (1,\infty)$. Then for every $m \in \mathbb{Z}_+$ and $\lambda > 0$
(1) $\quad \lambda^p P\{\max_{n=0,\dots,m} X_n \ge \lambda\} \le \int_{\{\max_{n=0,\dots,m} X_n \ge \lambda\}} X_m^p \, dP \le E(X_m^p)$
and
(2) $\quad E(\max_{n=0,\dots,m} X_n^p) \le q^p E(X_m^p)$.

Proof. Since $X$ is a nonnegative $L_p$-submartingale, $X^p$ is a nonnegative submartingale by Corollary 5.12. By applying (1) of Theorem 6.7 to $X^p$ we have
$$\lambda^p P\{\max_{n=0,\dots,m} X_n^p \ge \lambda^p\} \le \int_{\{\max_{n=0,\dots,m} X_n^p \ge \lambda^p\}} X_m^p \, dP \le E(X_m^p),$$
which is equivalent to (1). Applying (1) of Theorem 6.7 to the submartingale $X$ we have
$$\lambda P\{\max_{n=0,\dots,m} X_n \ge \lambda\} \le \int_{\{\max_{n=0,\dots,m} X_n \ge \lambda\}} X_m \, dP.$$
Thus the two nonnegative random variables $\max_{n=0,\dots,m} X_n$ and $X_m$ satisfy the condition (1) of Lemma 6.10. Also the fact that $X_0, \dots, X_m$ are in $L_p(\Omega, \mathfrak{F}, P)$ implies that $\max_{n=0,\dots,m} X_n$ is in $L_p(\Omega, \mathfrak{F}, P)$. Therefore by Lemma 6.10
$$E(\max_{n=0,\dots,m} X_n^p) = E[(\max_{n=0,\dots,m} X_n)^p] \le q^p E(X_m^p),$$
proving (2). ∎
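The $L_p$ bound (2) of Theorem 6.11 can also be verified exactly by path enumeration. The sketch below is our own illustration (not from the text): with $p = 2$, conjugate exponent $q = 2$, and the nonnegative submartingale $X_n = |S_n|$ for a symmetric random walk $S$ with $m = 10$, it computes the two sides of $E[(\max_n X_n)^p] \le q^p E(X_m^p)$ over all $2^{10}$ paths.

```python
from itertools import product

m, p = 10, 2.0
q = p / (p - 1)        # conjugate exponent; q = 2 when p = 2
total = 2 ** m
e_max_p = 0.0          # E[ (max_n X_n)^p ]
e_xm_p = 0.0           # E[ X_m^p ]

# X_n = |S_n| for a symmetric +/-1 random walk S is a nonnegative L^p-submartingale.
for steps in product([-1, 1], repeat=m):
    s, running_max = 0, 0
    for d in steps:
        s += d
        running_max = max(running_max, abs(s))
    e_max_p += running_max ** p / total
    e_xm_p += abs(s) ** p / total

print(e_max_p, q ** p * e_xm_p)   # Doob-Kolmogorov: left <= right
```

Here $E(X_m^p) = E(S_m^2) = m$ exactly, so the right-hand side is $q^p m = 40$, while the exact left-hand side must fall below it by Theorem 6.11.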
Corollary 6.12. Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be an $L_p$-martingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$ for some $p \in (1,\infty)$. With the conjugate exponent $q$ of $p$ we have for any $m \in \mathbb{Z}_+$ and $\lambda > 0$
(1) $\quad \lambda^p P\{\max_{n=0,\dots,m} |X_n| \ge \lambda\} \le \int_{\{\max_{n=0,\dots,m} |X_n| \ge \lambda\}} |X_m|^p \, dP \le E(|X_m|^p)$
and
(2) $\quad E(\max_{n=0,\dots,m} |X_n|^p) \le q^p E(|X_m|^p)$.

Proof. Since $X$ is an $L_p$-martingale, $|X|$ is a nonnegative $L_p$-submartingale by Corollary 5.12. Applying (1) and (2) of Theorem 6.11 to $|X|$ we have (1) and (2). ∎

Theorem 6.7 (Maximal and Minimal Inequalities, Finite Case) gave estimates for the probabilities of the maximum and minimum of finitely many elements in a submartingale. Let us extend these results to estimates for the supremum and infimum of the entire submartingale, for both the discrete and the continuous case.

Theorem 6.13. (Maximal and Minimal Inequalities, Discrete Case) Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$. Then for every $\lambda > 0$
(1) $\quad \lambda P\{\sup_{n \in \mathbb{Z}_+} X_n > \lambda\} \le \sup_{n \in \mathbb{Z}_+} E(X_n^+)$
and
(2) $\quad \lambda P\{\inf_{n \in \mathbb{Z}_+} X_n < -\lambda\} \le \sup_{n \in \mathbb{Z}_+} E(X_n^+) - E(X_0)$.
Proof. To prove (1), note that $\max_{k=0,\dots,n} X_k \uparrow \sup_{n \in \mathbb{Z}_+} X_n$ on $\Omega$ as $n \to \infty$. Thus
$$\Big\{\sup_{n \in \mathbb{Z}_+} X_n > \lambda\Big\} \subset \bigcup_{n \in \mathbb{Z}_+} \Big\{\max_{k=0,\dots,n} X_k > \lambda\Big\} = \uparrow \lim_{n \to \infty} \Big\{\max_{k=0,\dots,n} X_k > \lambda\Big\}$$
and therefore
(3) $\quad P\{\sup_{n \in \mathbb{Z}_+} X_n > \lambda\} \le \lim_{n \to \infty} P\{\max_{k=0,\dots,n} X_k > \lambda\}$.
By (1) of Theorem 6.7,
(4) $\quad \lambda P\{\max_{k=0,\dots,n} X_k > \lambda\} \le E(X_n^+) \le \sup_{n \in \mathbb{Z}_+} E(X_n^+)$.
From (3) and (4) we have (1). Similarly (2) is derived from (2) of Theorem 6.7. ∎
Theorem 6.14. (Maximal and Minimal Inequalities, Continuous Case) Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Let $S$ be a countable dense subset of $\mathbb{R}_+$ and $I = [\alpha, \beta] \subset \mathbb{R}_+$. Then for every $\lambda > 0$
(1) $\quad \lambda P\{\sup_{t \in I \cap S} X_t > \lambda\} \le E(X_\beta^+)$
and
(2) $\quad \lambda P\{\inf_{t \in I \cap S} X_t < -\lambda\} \le E(X_\beta^+) - E(X_\alpha)$.
If $X$ is right-continuous then $I \cap S$ in (1) and (2) can be replaced by $I$.

Proof. Let $\{s_n : n \in \mathbb{Z}_+\}$ be an arbitrary renumbering of the elements of $I \cap S$. For each $N \in \mathbb{Z}_+$, let $t_0, \dots, t_N$ be the rearrangement of $s_0, \dots, s_N$ in increasing order. Then $\{X_{t_0}, \dots, X_{t_N}\}$ is a submartingale with respect to $\{\mathfrak{F}_{t_0}, \dots, \mathfrak{F}_{t_N}\}$. Thus by (1) of Theorem 6.7 and by the fact that $X^+$ is a submartingale so that $E(X_t^+) \uparrow$ as $t \uparrow$, we have
(3) $\quad \lambda P\{\max_{t \in \{s_0,\dots,s_N\}} X_t > \lambda\} = \lambda P\{\max_{t \in \{t_0,\dots,t_N\}} X_t > \lambda\} \le E(X_{t_N}^+) \le E(X_\beta^+)$.
Since $\max_{t \in \{s_0,\dots,s_N\}} X_t \uparrow \sup_{t \in I \cap S} X_t$ as $N \to \infty$ we have
$$\Big\{\sup_{t \in I \cap S} X_t > \lambda\Big\} \subset \bigcup_{N \in \mathbb{Z}_+} \Big\{\max_{t \in \{s_0,\dots,s_N\}} X_t > \lambda\Big\}.$$
From this follows
(4) $\quad P\{\sup_{t \in I \cap S} X_t > \lambda\} \le \uparrow \lim_{N \to \infty} P\{\max_{t \in \{s_0,\dots,s_N\}} X_t > \lambda\}$.
Combining (3) and (4) we have (1). When $X$ is right-continuous, the right-continuity of the function $X_t(\omega)$, $t \in \mathbb{R}_+$, implies $\sup_{t \in I} X_t(\omega) = \sup_{t \in I \cap S} X_t(\omega)$ for every $\omega \in \Omega$. Thus $\{\sup_{t \in I} X_t > \lambda\} = \{\sup_{t \in I \cap S} X_t > \lambda\}$ and therefore $I \cap S$ in (1) can be replaced by $I$. Similarly (2) is proved by using (2) of Theorem 6.7. ∎

Next we extend Theorem 6.11 (Doob-Kolmogorov Inequality) to estimates of the probabilities of the supremum and infimum of nonnegative $L_p$-submartingales for both the discrete and the continuous case.

Theorem 6.15. Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a nonnegative $L_p$-submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$ for some $p \in (1,\infty)$ with conjugate exponent $q \in (1,\infty)$. Then for every $\lambda > 0$
(1) $\quad \lambda^p P\{\sup_{n \in \mathbb{Z}_+} X_n > \lambda\} \le \sup_{n \in \mathbb{Z}_+} E(X_n^p)$
and
(2) $\quad E(\sup_{n \in \mathbb{Z}_+} X_n^p) \le q^p \sup_{n \in \mathbb{Z}_+} E(X_n^p)$.

Proof. The inequality (1) is derived from (1) of Theorem 6.11 in the same way as (1) of Theorem 6.13 was derived from (1) of Theorem 6.7. To prove (2) recall that by (2) of Theorem 6.11, for every $n \in \mathbb{Z}_+$ we have
(3) $\quad E(\max_{k=0,\dots,n} X_k^p) \le q^p E(X_n^p) \le q^p \sup_{n \in \mathbb{Z}_+} E(X_n^p)$.
Then since $\max_{k=0,\dots,n} X_k^p \uparrow \sup_{n \in \mathbb{Z}_+} X_n^p$ as $n \to \infty$, if we let $n \to \infty$ in (3) then by the Monotone Convergence Theorem we have (2). ∎
Theorem 6.16. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a nonnegative $L_p$-submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ for some $p \in (1,\infty)$ with conjugate exponent $q \in (1,\infty)$. Let $S$ be a countable dense subset of $\mathbb{R}_+$ and $I = [\alpha, \beta] \subset \mathbb{R}_+$. Then for every $\lambda > 0$
(1) $\quad \lambda^p P\{\sup_{t \in I \cap S} X_t > \lambda\} \le E(X_\beta^p)$
and
(2) $\quad E(\sup_{t \in I \cap S} X_t^p) \le q^p E(X_\beta^p)$.
If $X$ is right-continuous then $I \cap S$ in (1) and (2) can be replaced by $I$.

Proof. This theorem is derived from Theorem 6.11 in the same way as Theorem 6.14 was derived from Theorem 6.7. ∎
[III] Upcrossing and Downcrossing Inequalities

Let $[a,b] \subset \mathbb{R}$. The number of times a sample path of a stochastic process traverses the interval $[a,b]$ is a measurement of the oscillation of the sample path. The upcrossing and downcrossing numbers of a sample path are defined to count that number.

Definition 6.17. Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a stochastic process on a probability space $(\Omega, \mathfrak{F}, P)$. Let $a, b \in \mathbb{R}$, $a < b$.
1) Let us define a sequence of $\overline{\mathbb{Z}}_+$-valued functions $\{T_j : j \in \mathbb{N}\}$ on $\Omega$ by
$$\begin{aligned}
T_1(\omega) &= \inf\{n \in \mathbb{Z}_+ : X_n(\omega) \le a\}, \\
T_2(\omega) &= \inf\{n > T_1(\omega) : X_n(\omega) \ge b\}, \\
T_{2k+1}(\omega) &= \inf\{n > T_{2k}(\omega) : X_n(\omega) \le a\}, \\
T_{2k+2}(\omega) &= \inf\{n > T_{2k+1}(\omega) : X_n(\omega) \ge b\},
\end{aligned}$$
with the understanding that the infimum on an empty set is $\infty$. For $N \in \mathbb{Z}_+$, the number of upcrossings by the sample path $\{X_n(\omega) : n \in \mathbb{Z}_+\}$ of the interval $[a,b]$ by time $N$ is defined by
(1) $\quad (U^N_{[a,b]} X)(\omega) = \begin{cases} \max\{k \in \mathbb{N} : T_{2k}(\omega) \le N\}, \\ 0 \quad \text{if the set above is } \emptyset. \end{cases}$
The number of upcrossings of the interval $[a,b]$ by the sample path $\{X_n(\omega) : n \in \mathbb{Z}_+\}$ is defined by
(2) $\quad (U_{[a,b]} X)(\omega) = \lim_{N \to \infty} (U^N_{[a,b]} X)(\omega)$.
2) Define a sequence of $\overline{\mathbb{Z}}_+$-valued functions $\{S_j : j \in \mathbb{N}\}$ on $\Omega$ by
$$\begin{aligned}
S_1(\omega) &= \inf\{n \in \mathbb{Z}_+ : X_n(\omega) \ge b\}, \\
S_2(\omega) &= \inf\{n > S_1(\omega) : X_n(\omega) \le a\}, \\
S_{2k+1}(\omega) &= \inf\{n > S_{2k}(\omega) : X_n(\omega) \ge b\}, \\
S_{2k+2}(\omega) &= \inf\{n > S_{2k+1}(\omega) : X_n(\omega) \le a\},
\end{aligned}$$
with the understanding that the infimum on an empty set is $\infty$. For $N \in \mathbb{Z}_+$, the number of downcrossings by the sample path $\{X_n(\omega) : n \in \mathbb{Z}_+\}$ of the interval $[a,b]$ by time $N$ is defined by
(3) $\quad (D^N_{[a,b]} X)(\omega) = \begin{cases} \max\{k \in \mathbb{N} : S_{2k}(\omega) \le N\}, \\ 0 \quad \text{if the set above is } \emptyset. \end{cases}$
The number of downcrossings of the interval $[a,b]$ by the sample path $\{X_n(\omega) : n \in \mathbb{Z}_+\}$ is defined by
(4) $\quad (D_{[a,b]} X)(\omega) = \lim_{N \to \infty} (D^N_{[a,b]} X)(\omega)$.
3) Let $Y = \{Y_n : n \in \mathbb{Z}_+\}$ be a stochastic process and for $N \in \mathbb{Z}_+$ let $Y^{(N)} = \{Y_n : n = 0, \dots, N\}$ be the finite initial segment of $Y$. We define the numbers of upcrossings and downcrossings of $[a,b]$ by $Y^{(N)}$ by setting
(5) $\quad (U_{[a,b]} Y^{(N)})(\omega) = (U^N_{[a,b]} Y)(\omega)$ and $(D_{[a,b]} Y^{(N)})(\omega) = (D^N_{[a,b]} Y)(\omega)$.
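The counting scheme of Definition 6.17 translates directly into code. The function below is our own sketch (not from the text): it counts completed upcrossings of $[a,b]$ by a finite path by alternately waiting for the path to reach $\le a$ (an odd-indexed time $T_{2k+1}$) and then $\ge b$ (an even-indexed time $T_{2k+2}$, completing one upcrossing).

```python
def upcrossings(path, a, b):
    """Number of completed upcrossings of [a, b] by the finite path,
    in the spirit of Definition 6.17: wait until the path is <= a,
    then until it is >= b; each such pair is one upcrossing."""
    count, below = 0, False
    for x in path:
        if not below and x <= a:
            below = True        # reached level a from above: time T_{2k+1}
        elif below and x >= b:
            below = False       # reached level b after a: time T_{2k+2}
            count += 1          # one upcrossing completed
    return count

path = [1, -1, 2, 0, -2, 3, 1, -1, 4]
print(upcrossings(path, 0, 2))  # this path crosses [0, 2] upward 3 times
```

Counting downcrossings is symmetric: swap the roles of the two thresholds, waiting first for $\ge b$ and then for $\le a$.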
Lemma 6.18. Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be an adapted process on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$ and let $\{T_j : j \in \mathbb{N}\}$ and $\{S_j : j \in \mathbb{N}\}$ be as in Definition 6.17. Then
1) $\{T_j : j \in \mathbb{N}\}$ is an increasing sequence of stopping times such that for every $\omega \in \Omega$ the sequence $\{T_j(\omega) : j \in \mathbb{N}\}$ is strictly increasing until the value $\infty$ is reached. Similarly for $\{S_j : j \in \mathbb{N}\}$.
2) For every $N \in \mathbb{Z}_+$, $U^N_{[a,b]} X$ and $D^N_{[a,b]} X$ are nonnegative $\mathfrak{F}_N$-measurable random variables on $(\Omega, \mathfrak{F}, P)$ bounded by $2^{-1}(N+1)$.
3) $U_{[a,b]} X$ and $D_{[a,b]} X$ are $\mathfrak{F}_\infty$-measurable random variables on $(\Omega, \mathfrak{F}, P)$ with values in $[0, \infty]$.

Proof. 1) Let us show that $T_j$ is a stopping time for $j \in \mathbb{N}$. Since the time variable is discrete, to show that $T_j$ is a stopping time it suffices to show that $\{T_j = k\} \in \mathfrak{F}_k$ for every $k \in \mathbb{Z}_+$, as we noted in Remark 3.23. Now for $k \in \mathbb{Z}_+$ we have by Definition 6.17
$$\{T_1 = k\} = \{X_0 > a\} \cap \dots \cap \{X_{k-1} > a\} \cap \{X_k \le a\} \in \mathfrak{F}_k.$$
Thus $T_1$ is a stopping time. Next suppose $T_j$ is a stopping time for some $j \in \mathbb{N}$. To show that $T_{j+1}$ is a stopping time, let $k \in \mathbb{Z}_+$ and consider
$$\{T_{j+1} = k\} = \bigcup_{i=0}^{k-1} \{T_j = i\} \cap \{T_{j+1} = k\}.$$
Now for $i = 0, \dots, k-1$, when $j$ is odd we have
$$\{T_j = i\} \cap \{T_{j+1} = k\} = \{T_j = i\} \cap \{X_{i+1} < b\} \cap \dots \cap \{X_{k-1} < b\} \cap \{X_k \ge b\} \in \mathfrak{F}_k,$$
and when $j$ is even we have
$$\{T_j = i\} \cap \{T_{j+1} = k\} = \{T_j = i\} \cap \{X_{i+1} > a\} \cap \dots \cap \{X_{k-1} > a\} \cap \{X_k \le a\} \in \mathfrak{F}_k.$$
Thus $\{T_{j+1} = k\} \in \mathfrak{F}_k$ and this shows that $T_{j+1}$ is a stopping time. Therefore by induction $T_j$ is a stopping time for every $j \in \mathbb{N}$. Similarly $S_j$ is a stopping time for $j \in \mathbb{N}$.
2) For each $N \in \mathbb{Z}_+$, clearly $U^N_{[a,b]} X \le 2^{-1}(N+1)$. To show that $U^N_{[a,b]} X$ is $\mathfrak{F}_N$-measurable, define for each $k \in \mathbb{N}$
$$\xi_{2k} = \begin{cases} 1 & \text{if } T_{2k} \le N, \\ 0 & \text{if } T_{2k} > N. \end{cases}$$
Since $T_{2k}$ is a stopping time, $\{T_{2k} \le N\} \in \mathfrak{F}_N$ and thus $\xi_{2k}$ is an $\mathfrak{F}_N$-measurable random variable. Now
$$(U^N_{[a,b]} X)(\omega) = \sum_{k \in \mathbb{N}} \xi_{2k}(\omega).$$
Then since $\xi_{2k}$ is $\mathfrak{F}_N$-measurable for every $k \in \mathbb{N}$, $U^N_{[a,b]} X$ is $\mathfrak{F}_N$-measurable. Similarly for $D^N_{[a,b]} X$.
3) By definition $U^N_{[a,b]} X \uparrow U_{[a,b]} X$ as $N \to \infty$. Since $U^N_{[a,b]} X$ is $\mathfrak{F}_N$-measurable it is $\mathfrak{F}_\infty$-measurable. Since this holds for every $N \in \mathbb{Z}_+$, $U_{[a,b]} X$ is $\mathfrak{F}_\infty$-measurable. Similarly for $D_{[a,b]} X$. ∎
Theorem 6.19. (Doob's Upcrossing and Downcrossing Inequalities for Submartingales) Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$. Then for any $a, b \in \mathbb{R}$, $a < b$, and $N \in \mathbb{Z}_+$ we have
(1) $\quad E(U^N_{[a,b]} X) \le \frac{1}{b-a} E[(X_N - a)^+ - (X_0 - a)^+]$
and
(2) $\quad E(D^N_{[a,b]} X) \le \frac{1}{b-a} E[(X_N - a)^+] + 1$.

Proof. Let $Y = \{Y_n : n \in \mathbb{Z}_+\}$ be the stochastic process defined by $Y = (X - a)^+$, that is,
(3) $\quad Y_n = (X_n - a)^+$ for $n \in \mathbb{Z}_+$.
Since $X$ is a submartingale, $X - a$ is a submartingale and then $Y = (X - a)^+$ is a nonnegative submartingale by Corollary 5.10. Note that
(4) $\quad U^N_{[a,b]} X = U^N_{[0,b-a]} Y$ and $D^N_{[a,b]} X = D^N_{[0,b-a]} Y$.
1) To prove (1), let $\{T_j : j \in \mathbb{N}\}$ be as in Definition 6.17 with $a$, $b$ and $X$ replaced by $0$, $b-a$ and $Y$ respectively. Let $N \in \mathbb{Z}_+$ be given and let $k \in \mathbb{N}$ be such that $2k > N$. Then we have $T_{2k} \ge 2k - 1 \ge N$, so that $T_{2k} \ge N$. Let $T_0^* = 0$ and $T_j^* = T_j \wedge N$ for $j \in \mathbb{N}$. Note that $\{T_j^* : j \in \mathbb{Z}_+\}$ is an increasing sequence of stopping times. Now since $T_0^* = 0$ and $T_{2k}^* = N$ we can write
(5) $\quad Y_N - Y_0 = \sum_{j=1}^{2k} \{Y_{T_j^*} - Y_{T_{j-1}^*}\} = \sum_{j=1}^{k} \{Y_{T_{2j}^*} - Y_{T_{2j-1}^*}\} + \sum_{j=0}^{k-1} \{Y_{T_{2j+1}^*} - Y_{T_{2j}^*}\}$.
Regarding the first sum on the right side of (5), note that there is a contribution to the sum only when $T_{2j-1} < N$, and note that when $T_{2j-1} < N$ then $T_{2j-1}^* = T_{2j-1}$ so that $Y_{T_{2j-1}^*} = 0$. Thus we have
$$\sum_{j=1}^{k} \{Y_{T_{2j}^*} - Y_{T_{2j-1}^*}\} \ge (b-a) \, U^N_{[0,b-a]} Y,$$
so that
(6) $\quad E \sum_{j=1}^{k} \{Y_{T_{2j}^*} - Y_{T_{2j-1}^*}\} \ge (b-a) E[U^N_{[0,b-a]} Y]$.
On the other hand, since $Y$ is a submartingale and $T_{2j}^* \le T_{2j+1}^* \le N$ for $j = 0, \dots, k-1$, we have $E(Y_{T_{2j+1}^*}) \ge E(Y_{T_{2j}^*})$ by (2) of Theorem 6.4, so that
(7) $\quad E \sum_{j=0}^{k-1} \{Y_{T_{2j+1}^*} - Y_{T_{2j}^*}\} \ge 0$.
Using (6) and (7) in (5) we have
$$(b-a) E(U^N_{[0,b-a]} Y) \le E(Y_N - Y_0).$$
By the fact that $Y_N - Y_0 = (X_N - a)^+ - (X_0 - a)^+$ and by (4) we have
$$E(U^N_{[a,b]} X) \le \frac{1}{b-a} E[(X_N - a)^+ - (X_0 - a)^+],$$
which proves (1).
2) To prove (2), let $\{S_j : j \in \mathbb{N}\}$ be as in Definition 6.17 with $a$, $b$ and $X$ replaced by $0$, $b-a$ and $Y$ respectively. Let $N \in \mathbb{Z}_+$ be given and let $k \in \mathbb{N}$ be such that $2k > N$. This implies that $S_{2k} \ge N$. Let $S_0^* = 0$ and $S_j^* = S_j \wedge N$ for $j \in \mathbb{N}$. Then we have
(8) $\quad \sum_{j=1}^{k} \{Y_{S_{2j}^*} - Y_{S_{2j-1}^*}\} \le \{0 - (b-a)\} D^N_{[0,b-a]} Y + \{Y_N + (b-a)\}.$
By the fact that $D^N_{[0,b-a]} Y = D^N_{[a,b]} X$ and by (8), we have
$$E \sum_{j=1}^{k} \{Y_{S_{2j}^*} - Y_{S_{2j-1}^*}\} \le (a-b) E(D^N_{[a,b]} X) + E[(X_N - a)^+] + (b-a).$$
Since $Y$ is a submartingale and $S_j^* \ge S_{j-1}^*$, we have $E\{Y_{S_j^*} - Y_{S_{j-1}^*}\} \ge 0$ by (2) of Theorem 6.4. Therefore we have
$$0 \le (a-b) E(D^N_{[a,b]} X) + E[(X_N - a)^+] + (b-a),$$
and then
$$E(D^N_{[a,b]} X) \le \frac{1}{b-a} E[(X_N - a)^+] + 1,$$
proving (2). ∎

Upcrossing and downcrossing inequalities for supermartingales can be derived from those for submartingales by the fact that if $X$ is a supermartingale then $-X$ is a submartingale.

Theorem 6.20. (Upcrossing and Downcrossing Inequalities for Supermartingales) Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a supermartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$. Then for any $a, b \in \mathbb{R}$, $a < b$, and $N \in \mathbb{Z}_+$ we have
(1) $\quad E(U^N_{[a,b]} X) \le \frac{1}{b-a} E[(X_N - b)^-] + 1$
and
(2) $\quad E(D^N_{[a,b]} X) \le \frac{1}{b-a} E[(X_N - b)^- - (X_0 - b)^-]$.

Proof. Let us note that for any stochastic process $X$ we have
$$U^N_{[a,b]} X = D^N_{[-b,-a]}(-X) \quad \text{and} \quad D^N_{[a,b]} X = U^N_{[-b,-a]}(-X).$$
Now if $X$ is a supermartingale on the filtered space then $-X$ is a submartingale and therefore by (2) of Theorem 6.19 we have
$$E(U^N_{[a,b]} X) = E[D^N_{[-b,-a]}(-X)] \le \frac{1}{-a-(-b)} E[(-X_N - (-b))^+] + 1 = \frac{1}{b-a} E[(b - X_N)^+] + 1 = \frac{1}{b-a} E[(X_N - b)^-] + 1.$$
This proves (1). Similarly by (1) of Theorem 6.19 we have
$$\begin{aligned}
E(D^N_{[a,b]} X) = E[U^N_{[-b,-a]}(-X)]
&\le \frac{1}{-a-(-b)} E[(-X_N - (-b))^+ - (-X_0 - (-b))^+] \\
&= \frac{1}{b-a} E[(b - X_N)^+ - (b - X_0)^+] = \frac{1}{b-a} E[(X_N - b)^- - (X_0 - b)^-].
\end{aligned}$$
This proves (2). ∎

The following simplification of Theorem 6.19 and Theorem 6.20 is often useful.

Corollary 6.21. If $X = \{X_n : n \in \mathbb{Z}_+\}$ is a submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$, then for $a, b \in \mathbb{R}$, $a < b$, and $N \in \mathbb{Z}_+$ we have
(1) $\quad E(U^N_{[a,b]} X) \le \frac{1}{b-a} \{E(|X_N|) + E(|X_0|) + 2|a|\}$
and
(2) $\quad E(D^N_{[a,b]} X) \le \frac{1}{b-a} \{E(|X_N|) + |a|\} + 1$.
If $X$ is a supermartingale, then
(3) $\quad E(U^N_{[a,b]} X) \le \frac{1}{b-a} \{E(|X_N|) + |b|\} + 1$
and
(4) $\quad E(D^N_{[a,b]} X) \le \frac{1}{b-a} \{E(|X_N|) + E(|X_0|) + 2|b|\}$.

Proof. To prove (1) recall that by (1) of Theorem 6.19
$$\begin{aligned}
E(U^N_{[a,b]} X) &\le \frac{1}{b-a} E[(X_N - a)^+ - (X_0 - a)^+] \\
&\le \frac{1}{b-a} E[|X_N - a| + |X_0 - a|] \\
&\le \frac{1}{b-a} E[|X_N| + |a| + |X_0| + |a|] = \frac{1}{b-a} \{E(|X_N|) + E(|X_0|) + 2|a|\}.
\end{aligned}$$
Similarly (2) is derived from (2) of Theorem 6.19, and (3) and (4) are derived from (1) and (2) of Theorem 6.20. ∎

Let us define upcrossing and downcrossing numbers for a stochastic process with continuous time.

Definition 6.22. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a stochastic process on a probability space $(\Omega, \mathfrak{F}, P)$, $S$ be a countable dense subset of $\mathbb{R}_+$, and $J$ be an interval in $\mathbb{R}_+$. Let $\tau = \{t_0, \dots, t_N\}$ be a strictly increasing finite sequence in $J \cap S$ and let $X^\tau = \{X_{t_0}, \dots, X_{t_N}\}$. For $a, b \in \mathbb{R}$, $a < b$, the numbers of upcrossings and downcrossings of $[a,b]$ by $X$ along $\tau$ are defined by
(1) $\quad U^\tau_{[a,b]} X = U^N_{[a,b]} X^\tau$ and $D^\tau_{[a,b]} X = D^N_{[a,b]} X^\tau$.
The numbers of upcrossings and downcrossings of $[a,b]$ by $X$ on $J \cap S$ are defined by
(2) $\quad U^{J \cap S}_{[a,b]} X = \sup_\tau U^\tau_{[a,b]} X$ and $D^{J \cap S}_{[a,b]} X = \sup_\tau D^\tau_{[a,b]} X$,
where the suprema are over the collection of all strictly increasing finite sequences $\tau$ in $J \cap S$.

Note that if $X$ is an adapted process on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ then by Lemma 6.18, $U^\tau_{[a,b]} X$ and $D^\tau_{[a,b]} X$ as defined above are $\mathfrak{F}_{t_N}$-measurable random variables on $(\Omega, \mathfrak{F}, P)$. Since $J \cap S$ is a countable set, the collection of all strictly increasing finite sequences $\tau$ in $J \cap S$ is a countable collection, and therefore the suprema over this collection are suprema of countably many $\mathfrak{F}_\infty$-measurable random variables on $(\Omega, \mathfrak{F}, P)$. Thus $U^{J \cap S}_{[a,b]} X$ and $D^{J \cap S}_{[a,b]} X$ are $\mathfrak{F}_\infty$-measurable.

Theorem 6.23. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, $S$ be a countable dense subset of $\mathbb{R}_+$, and $J$ be an interval in $\mathbb{R}_+$. Let $a, b \in \mathbb{R}$, $a < b$. Then for every strictly increasing finite sequence $\tau = \{t_0, \dots, t_N\}$ in $J \cap S$ we have
(1) $\quad E(U^\tau_{[a,b]} X) \le \frac{1}{b-a} E[(X_{t_N} - a)^+ - (X_{t_0} - a)^+]$
and
(2) $\quad E(D^\tau_{[a,b]} X) \le \frac{1}{b-a} E[(X_{t_N} - a)^+] + 1$.
If $J$ has $\alpha$ and $\beta$ as its end-points, where $\alpha, \beta \in \mathbb{R}_+$, $\alpha < \beta$, then
(3) $\quad E(U^{J \cap S}_{[a,b]} X) \le \frac{1}{b-a} E[(X_\beta - a)^+ - (X_\alpha - a)^+]$
and
(4) $\quad E(D^{J \cap S}_{[a,b]} X) \le \frac{1}{b-a} E[(X_\beta - a)^+] + 1$.

Proof. Note that (1) and (2) are immediate from (1) of Definition 6.22 and Theorem 6.19. To prove (3), note that by (2) of Definition 6.22 there exists a sequence $\{\tau_n : n \in \mathbb{N}\}$ of strictly increasing finite sequences $\tau_n$ in $J \cap S$ such that $U^{J \cap S}_{[a,b]} X = \lim_{n \to \infty} U^{\tau_n}_{[a,b]} X$. Then by Fatou's Lemma
(5) $\quad E(U^{J \cap S}_{[a,b]} X) \le \liminf_{n \to \infty} E(U^{\tau_n}_{[a,b]} X) \le \sup_\tau E(U^\tau_{[a,b]} X)$,
where the supremum is over the collection of all strictly increasing finite sequences $\tau$ in $J \cap S$. Now since $X$ is a submartingale, $(X - a)^+$ is a submartingale so that $E[(X_t - a)^+] \uparrow$ as $t \uparrow$. Thus from (1)
$$E(U^\tau_{[a,b]} X) \le \frac{1}{b-a} E[(X_\beta - a)^+ - (X_\alpha - a)^+].$$
Using this in (5) we have (3). Similarly for (4). ∎

Theorem 6.24. If $X$ in Theorem 6.23 is a supermartingale, then
(1) $\quad E(U^\tau_{[a,b]} X) \le \frac{1}{b-a} E[(X_{t_N} - b)^-] + 1$,
(2) $\quad E(D^\tau_{[a,b]} X) \le \frac{1}{b-a} E[(X_{t_N} - b)^- - (X_{t_0} - b)^-]$,
(3) $\quad E(U^{J \cap S}_{[a,b]} X) \le \frac{1}{b-a} E[(X_\beta - b)^-] + 1$,
and
(4) $\quad E(D^{J \cap S}_{[a,b]} X) \le \frac{1}{b-a} E[(X_\beta - b)^- - (X_\alpha - b)^-]$.
Proof. (1) and (2) follow from (1) of Definition 6.22 and Theorem 6.20. Then (3) and (4) follow from (1) and (2) by the same argument as in Theorem 6.23. ■
§7 Convergence of Submartingales

Let $X = \{X_t : t \in T\}$ be a submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t : t \in T\}, P)$. Consider the process $X^+ = \{X_t^+ : t \in T\}$. We shall show that if $X^+$ is $L_1$-bounded, that is, $\sup_{t \in T} E(X_t^+) < \infty$, then there exists an extended real valued integrable $\mathfrak{F}_\infty$-measurable random variable $X_\infty$ on $(\Omega, \mathfrak{F}, P)$ such that $\lim_{t \to \infty} X_t = X_\infty$ a.e. on $(\Omega, \mathfrak{F}_\infty, P)$. Next we show that if we assume the uniform integrability of $X^+$ (which implies the $L_1$-boundedness of $X^+$), then $X_\infty$ is a final element for the submartingale in the sense that $E(X_\infty \mid \mathfrak{F}_t) \ge X_t$ a.e. on $(\Omega, \mathfrak{F}_t, P)$ for every $t \in T$. In §8 we show that if $X$ is uniformly integrable then we have $\lim_{t \to \infty} \|X_t - X_\infty\|_1 = 0$.
[I] Convergence of Submartingales with Discrete Time

Observation 7.1. If $X = \{X_t : t \in T\}$ is a submartingale then $X$ is $L_1$-bounded if and only if $X^+$ is, that is,
(1) $\quad \sup_{t \in T} E(|X_t|) < \infty \iff \sup_{t \in T} E(X_t^+) < \infty$.
If $X$ is a supermartingale then $X$ is $L_1$-bounded if and only if $X^-$ is, that is,
(2) $\quad \sup_{t \in T} E(|X_t|) < \infty \iff \sup_{t \in T} E(X_t^-) < \infty$.
Proof. Note that since $|X_t| \ge X_t^+, X_t^-$, the condition $\sup_{t \in T} E(|X_t|) < \infty$ always implies both $\sup_{t \in T} E(X_t^+) < \infty$ and $\sup_{t \in T} E(X_t^-) < \infty$. Note also that we have $|X_t| = 2X_t^+ - X_t$ as well as $|X_t| = 2X_t^- + X_t$. Thus, if $X$ is a submartingale then $E(X_t) \uparrow$ as $t \uparrow$ and we have
$$E(|X_t|) = 2E(X_t^+) - E(X_t) \le 2E(X_t^+) - E(X_0),$$
so that
(3) $\quad \sup_{t \in T} E(|X_t|) \le 2 \sup_{t \in T} E(X_t^+) - E(X_0)$.
If $X$ is a supermartingale then $E(X_t) \downarrow$ as $t \uparrow$ and we have
$$E(|X_t|) = 2E(X_t^-) + E(X_t) \le 2E(X_t^-) + E(X_0),$$
so that
(4) $\quad \sup_{t \in T} E(|X_t|) \le 2 \sup_{t \in T} E(X_t^-) + E(X_0)$.
By (3) and (4) we have the implication $\Leftarrow$ in (1) and (2). ∎

Lemma 7.2. Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be an $L_1$-bounded submartingale or supermartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$. Then for any $a, b \in \mathbb{R}$, $a < b$, we have
$$(U_{[a,b]} X)(\omega) < \infty \quad \text{and} \quad (D_{[a,b]} X)(\omega) < \infty \quad \text{for a.e. } \omega \text{ in } (\Omega, \mathfrak{F}_\infty, P).$$
Proof. Let $X$ be an $L_1$-bounded submartingale. By (1) of Corollary 6.21, for every $N \in \mathbb{Z}_+$ we have
$$E(U^N_{[a,b]} X) \le \frac{1}{b-a} \{E(|X_N|) + E(|X_0|) + 2|a|\} \le \frac{2}{b-a} \Big\{ \sup_{n \in \mathbb{Z}_+} E(|X_n|) + |a| \Big\}.$$
Since $U^N_{[a,b]} X \uparrow U_{[a,b]} X$ as $N \to \infty$, we have by the Monotone Convergence Theorem
$$E(U_{[a,b]} X) \le \frac{2}{b-a} \Big\{ \sup_{n \in \mathbb{Z}_+} E(|X_n|) + |a| \Big\} < \infty.$$
Thus $(U_{[a,b]} X)(\omega) < \infty$ for a.e. $\omega$ in $(\Omega, \mathfrak{F}_\infty, P)$. Similarly by using (2) of Corollary 6.21 we have $(D_{[a,b]} X)(\omega) < \infty$ for a.e. $\omega$ in $(\Omega, \mathfrak{F}_\infty, P)$. For an $L_1$-bounded supermartingale the same conclusion holds by means of (3) and (4) of Corollary 6.21. ∎

Lemma 7.3. Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a stochastic process on a probability space $(\Omega, \mathfrak{F}, P)$. Let $a, b \in \mathbb{R}$, $a < b$. Then for every $\omega \in \Omega$
$$\liminf_{n \to \infty} X_n(\omega) < a < b < \limsup_{n \to \infty} X_n(\omega) \implies (U_{[a,b]} X)(\omega) = \infty.$$
The same holds for $D_{[a,b]} X$.

Proof. Suppose $\liminf_{n \to \infty} X_n(\omega) < a < b < \limsup_{n \to \infty} X_n(\omega)$. Then there is a subsequence along which $X_n(\omega)$ tends to $\liminf_{n \to \infty} X_n(\omega)$ and another along which it tends to $\limsup_{n \to \infty} X_n(\omega)$. Thus there exist $n_1 < n_2 < n_3 < \cdots$ such that $X_{n_{2j-1}}(\omega) < a$ and $X_{n_{2j}}(\omega) > b$ for $j \in \mathbb{N}$. Therefore $(U_{[a,b]} X)(\omega) = \infty$. Similarly for $D_{[a,b]} X$. ∎
Theorem 7.4. (Doob's Martingale Convergence Theorem) Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a submartingale or a supermartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$. Let us define an extended real valued $\mathfrak{F}_\infty$-measurable random variable $X_\infty$ on $(\Omega, \mathfrak{F}, P)$ by setting $X_\infty(\omega) = \liminf_{n \to \infty} X_n(\omega)$ for $\omega \in \Omega$. If $X$ is $L_1$-bounded then $\lim_{n \to \infty} X_n(\omega) = X_\infty(\omega)$ for a.e. $\omega$ in $(\Omega, \mathfrak{F}_\infty, P)$, and furthermore $X_\infty$ is integrable, so that $X_\infty$ is real valued a.e. on $(\Omega, \mathfrak{F}_\infty, P)$.

Proof. Let
$$\Lambda = \{\omega \in \Omega : \lim_{n \to \infty} X_n(\omega) \text{ does not exist in } \overline{\mathbb{R}}\} = \{\omega \in \Omega : \liminf_{n \to \infty} X_n(\omega) < \limsup_{n \to \infty} X_n(\omega)\}.$$
Let $\mathbb{Q}$ be the collection of all rational numbers. For $a, b \in \mathbb{Q}$, $a < b$, let
$$\Lambda_{a,b} = \{\omega \in \Omega : \liminf_{n \to \infty} X_n(\omega) < a < b < \limsup_{n \to \infty} X_n(\omega)\},$$
so that
$$\Lambda = \bigcup_{a,b \in \mathbb{Q},\, a < b} \Lambda_{a,b}.$$
Note that since $\liminf_n X_n$ and $\limsup_n X_n$ are $\mathfrak{F}_\infty$-measurable, we have $\Lambda, \Lambda_{a,b} \in \mathfrak{F}_\infty$. Now by Lemma 7.3 we have $\Lambda_{a,b} \subset \{\omega \in \Omega : (U_{[a,b]} X)(\omega) = \infty\}$, and then by Lemma 7.2
$$P(\Lambda_{a,b}) \le P\{\omega \in \Omega : (U_{[a,b]} X)(\omega) = \infty\} = 0.$$
Thus $\Lambda_{a,b}$ is a null set in $(\Omega, \mathfrak{F}_\infty, P)$ and, as a countable union of such null sets, $\Lambda$ is a null set in $(\Omega, \mathfrak{F}_\infty, P)$. Therefore $\lim_{n \to \infty} X_n(\omega)$ exists in $\overline{\mathbb{R}}$ for a.e. $\omega$ in $(\Omega, \mathfrak{F}_\infty, P)$. Then
(1) $\quad \lim_{n \to \infty} X_n(\omega) = \liminf_{n \to \infty} X_n(\omega) = X_\infty(\omega)$ for a.e. $\omega$ in $(\Omega, \mathfrak{F}_\infty, P)$.
Thus by Fatou's Lemma and the $L_1$-boundedness of $X$,
$$E(|X_\infty|) = E(|\lim_{n \to \infty} X_n|) = E(\lim_{n \to \infty} |X_n|) \le \liminf_{n \to \infty} E(|X_n|) \le \sup_{n \in \mathbb{Z}_+} E(|X_n|) < \infty.$$
Therefore $X_\infty$ is integrable on $\Omega$ and consequently $X_\infty(\omega) \in \mathbb{R}$ for a.e. $\omega$ in $(\Omega, \mathfrak{F}_\infty, P)$. ∎
Corollary 7.5. Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a nonpositive submartingale or a nonnegative supermartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$. Then $X$ converges a.e. on $(\Omega, \mathfrak{F}_\infty, P)$.

Proof. If $X$ is a nonpositive submartingale, then $E(X_0) \le E(X_n) \le 0$ and $E(|X_n|) = -E(X_n) \le -E(X_0)$ for $n \in \mathbb{Z}_+$, so that $\sup_{n \in \mathbb{Z}_+} E(|X_n|) \le -E(X_0) < \infty$; that is, $X$ is $L_1$-bounded. Thus $X$ converges a.e. on $(\Omega, \mathfrak{F}_\infty, P)$ by Theorem 7.4. If $X$ is a nonnegative supermartingale then $-X$ is a nonpositive submartingale, so that by the result above $X$ converges a.e. on $(\Omega, \mathfrak{F}_\infty, P)$. ∎
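A classical bounded martingale to which Theorem 7.4 applies is the proportion of red balls in a Polya urn. The simulation below is our own illustration (not from the text, and the urn parameters are our assumptions): starting from one red and one blue ball, a ball is drawn uniformly and returned together with a new ball of the same colour; the red proportion is a martingale with values in $[0,1]$, hence $L_1$-bounded, so by Theorem 7.4 a sample path settles to a (random) limit.

```python
import random

random.seed(42)

# Polya urn: 1 red + 1 blue ball initially; each draw adds a ball of the
# drawn colour.  The proportion of red balls is a bounded martingale,
# so Doob's convergence theorem guarantees a.e. convergence of the path.
red, blue = 1, 1
proportions = []
for n in range(4000):
    if random.random() < red / (red + blue):
        red += 1
    else:
        blue += 1
    proportions.append(red / (red + blue))

# The path fluctuates early and then stabilizes near its limit.
print(proportions[999], proportions[3999])
```

The limit itself is random (for this urn it is uniformly distributed over $[0,1]$), but along any single path the late values change very little, which is what the printed pair illustrates.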
(1)
lim X n =Xoo
a.e. on ( Q ^ ^ P ) ,
n—t-oo
(2)
Xoo e Lp(£l,goo, P)
and
Jim ||X„ - Xoo||P = 0,
ami (3)
IIXooll^TJimllX^ll^supllX^I,.
Proof. Since ||X„||i < \\Xn\\p for all n G Z+ forp G (l,oo), the Lp-boundedness of X implies its L]-boundedness. Then by Theorem 7.4, Xoo is integrable and (1) holds. Since X is a nonnegative Lp-submartingale, if we write q for the conjugate exponent of p then we have E(sup n6Z+ X£) < qp sup„ 6Zt E(X£) by (2) of Theorem 6.15. Since X is
114
CHAPTER 2.
MARTINGALES
Lp-bounded, that is, sup n e Z t E(X£) < oo, we have the integrability of sup n g Z i X%. This together with the fact that 0 < XTn < sup„gZ+ Xvn, implies the uniform integrability of {XI : n G Z + } by Proposition 4.11. Since (1) implies P nljm>X„ = X^, (2) holds by Theorem 4.16. Finally the fact that JC is a nonnegative Lp-submartingale implies that Xv is a nonnegativeLi-submartingale by Corollary 5.12. Therefore E(X£) | a s n -> oo and hence \\Xn\\p | suPnez, \\Xn\\P. But (2) implies Hm \\Xn\\p = WX^. Thus | | X „ | | , =T Jim \\Xn\\f. ■
[II] Convergence of Submartingales with Continuous Time For a stochastic process with continuous time X = {Xt : t G R+}, the upcrossing number Ufalf-X w n e r e S is a countable dense subset of R+ and J is an interval in K+ was defined in Definition 6.22. In particular we have U^b]X = U^b"sX. Lemma 7.7. Let X = {Xt : t 6 R+}feean L\-boundedsubmartingale or supermartingale on a filtered space (£1,5, {$t},P) and let S be a countable dense subset o/K + . Then for every a, 6 £ R SHC/I f/iar a < 6, we nave (Dg f l Jf)(w)
and (Df 0 , t ] X)M < oo fora.e. w in ( O ^ . P ) .
Proof. Let X = {X_t : t ∈ R₊} be an L₁-bounded submartingale. For every n ∈ N, let J_n = [0, n]. Then by (3) of Theorem 6.23 we have

E(U_{[a,b]}^{J_n∩S} X) ≤ (1/(b−a)) E[(X_n − a)⁺ − (X_0 − a)⁺]
≤ (1/(b−a)) {E(|X_n|) + E(|X_0|) + 2|a|}
≤ (2/(b−a)) sup_{t∈R₊} {E(|X_t|) + |a|}.
Now since U_{[a,b]}^{J_n∩S} X ↑ U_{[a,b]}^S X as n ↑, we have by Fatou's Lemma and the L₁-boundedness of X

E(U_{[a,b]}^S X) ≤ liminf_{n→∞} E(U_{[a,b]}^{J_n∩S} X) ≤ (2/(b−a)) sup_{t∈R₊} {E(|X_t|) + |a|} < ∞.

The integrability of U_{[a,b]}^S X implies that it is finite a.e. on (Ω, F_∞, P). Similarly for D_{[a,b]}^S X. The case of an L₁-bounded supermartingale is handled similarly by means of Theorem 6.24. ■
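Upcrossing numbers are easy to compute for a finite path, which makes the inequality above checkable by simulation. The following Python sketch (an added illustration; the random walk and the interval [a, b] = [−1, 3] are assumptions of the example) counts completed upcrossings and compares the empirical mean with the bound E[(X_n − a)⁺]/(b − a) from the upcrossing inequality.

```python
import random

def upcrossings(path, a, b):
    """Number of upcrossings of [a, b] completed by a finite path:
    passages from a value <= a to a later value >= b."""
    count, below = 0, False
    for x in path:
        if x <= a:
            below = True
        elif x >= b and below:
            count += 1
            below = False
    return count

def check(n=100, a=-1.0, b=3.0, n_paths=5000, seed=1):
    rng = random.Random(seed)
    u_sum, bound_sum = 0.0, 0.0
    for _ in range(n_paths):
        s, path = 0, []
        for _ in range(n):
            s += 1 if rng.random() < 0.5 else -1
            path.append(s)
        u_sum += upcrossings(path, a, b)
        bound_sum += max(path[-1] - a, 0.0)   # (X_n - a)^+
    return u_sum / n_paths, bound_sum / ((b - a) * n_paths)

eu, bound = check()
print(eu, bound)  # empirically eu <= bound
```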
§ 7. CONVERGENCE OF SUBMARTINGALES
Lemma 7.8. Let X = {X_t : t ∈ R₊} be a stochastic process on a probability space (Ω, F, P) and let S be a countable dense subset of R₊. Let a, b ∈ R be such that a < b. Then for any ω ∈ Ω we have

liminf_{t→∞} X_t(ω) < a < b < limsup_{t→∞} X_t(ω) ⟹ (U_{[a,b]}^S X)(ω) = ∞.
The same holds for D_{[a,b]}^S X.

Proof. Suppose liminf_{t→∞} X_t(ω) < a < b < limsup_{t→∞} X_t(ω). We can select a strictly increasing sequence {t_n : n ∈ N} in S such that lim_{n→∞} t_n = ∞ and such that X_{t_{2k−1}}(ω) < a and X_{t_{2k}}(ω) > b for k ∈ N. Then for every k ∈ N the finite sequence {X_{t_1}(ω), ..., X_{t_{2k}}(ω)} completes at least k upcrossings of [a, b], so that (U_{[a,b]}^S X)(ω) ≥ k. Since k ∈ N is arbitrary, (U_{[a,b]}^S X)(ω) = ∞. ■
Theorem 7.9. Let X = {X_t : t ∈ R₊} be an L₁-bounded submartingale or supermartingale on a filtered space (Ω, F, {F_t}, P), and let X_∞ = liminf_{t→∞} X_t. Then X_∞ is integrable and lim_{t→∞} X_t(ω) = X_∞(ω) for a.e. ω in (Ω, F_∞, P).

Proof. Let

A = {ω ∈ Ω : lim_{t→∞} X_t(ω) does not exist in R̄}
  = {ω ∈ Ω : liminf_{t→∞} X_t(ω) < limsup_{t→∞} X_t(ω)},

and for any two rational numbers a and b such that a < b, let

A_{a,b} = {ω ∈ Ω : liminf_{t→∞} X_t(ω) < a < b < limsup_{t→∞} X_t(ω)}.
The proof then follows from Lemma 7.8 and Lemma 7.7 in the same way as Theorem 7.4 followed from Lemma 7.3 and Lemma 7.2. ■

Corollary 7.10. Let X = {X_t : t ∈ R₊} be an L_p-bounded nonnegative submartingale on a filtered space (Ω, F, {F_t}, P) for some p ∈ (1, ∞). Let X_∞ be the extended real valued F_∞-measurable random variable on (Ω, F, P) defined by X_∞ = liminf_{t→∞} X_t. Then

(1) lim_{t→∞} X_t = X_∞ a.e. on (Ω, F_∞, P),
(2) X_∞ ∈ L_p(Ω, F_∞, P) and lim_{t→∞} ||X_t − X_∞||_p = 0,

and

(3) ||X_∞||_p = ↑lim_{t→∞} ||X_t||_p = sup_{t∈R₊} ||X_t||_p.
Proof. Since X is L_p-bounded and p ∈ (1, ∞), X is L₁-bounded. Therefore by Theorem 7.9 we have (1). To prove (2), let {t_n : n ∈ Z₊} be a strictly increasing sequence in R₊ such that t_n ↑ ∞ as n → ∞. Then X_∞ = liminf_{n→∞} X_{t_n}. Applying Corollary 7.6 to the L_p-bounded nonnegative submartingale {Y_n : n ∈ Z₊} with respect to {G_n : n ∈ Z₊}, where Y_n = X_{t_n} and G_n = F_{t_n} for n ∈ Z₊, we have X_∞ ∈ L_p(Ω, F_∞, P) and lim_{n→∞} ||X_{t_n} − X_∞||_p = 0. Since this holds for every strictly increasing sequence {t_n : n ∈ Z₊} with t_n ↑ ∞, we have lim_{t→∞} ||X_t − X_∞||_p = 0. Thus (2) holds. Then (3) follows from (2) and the fact that X^p is a submartingale by 2) of Corollary 5.12, so that ||X_t||_p ↑ as t → ∞. ■
[III] Closing a Submartingale with a Final Element

Definition 7.11. Let X = {X_t : t ∈ T} be a submartingale, martingale or supermartingale on a filtered space (Ω, F, {F_t : t ∈ T}, P). If there exists an extended real valued integrable F_∞-measurable random variable ξ on (Ω, F, P) such that

E(ξ|F_t) ≥, =, or ≤ X_t a.e. on (Ω, F_t, P) for every t ∈ T,

then we call ξ a final element for X.

If η is a random variable which satisfies all but the F_∞-measurability condition on a final element for X, then an arbitrary version ξ of E(η|F_∞) is a final element for X, since ξ is F_∞-measurable and since

E(ξ|F_t) = E[E(η|F_∞)|F_t] = E(η|F_t) ≥, =, or ≤ X_t a.e. on (Ω, F_t, P).

If a final element ξ exists for a submartingale or supermartingale X = {X_t : t ∈ T}, it may not be unique. Indeed if X is a submartingale and E(ξ|F_t) ≥ X_t a.e. on (Ω, F_t, P) for every t ∈ T, then for every c > 0 we have E(ξ + c|F_t) ≥ X_t a.e. on (Ω, F_t, P) for every t ∈ T as well. If however X is a martingale and a final element exists, then it is unique according to Theorem 5.3.

Theorem 7.12. Let X = {X_t : t ∈ T} be a submartingale, martingale or supermartingale on a filtered space (Ω, F, {F_t : t ∈ T}, P).
1) If X is a submartingale, then a final element exists if and only if X⁺ = {X_t⁺ : t ∈ T} is uniformly integrable.
2) If X is a supermartingale, then a final element exists if and only if X⁻ = {X_t⁻ : t ∈ T} is uniformly integrable.
3) If X is a martingale, then a final element exists if and only if X is uniformly integrable.
In each of the three cases, if the uniform integrability condition is satisfied then there exists an extended real valued integrable F_∞-measurable random variable X_∞ on (Ω, F, P) such that
1° lim_{t→∞} X_t = X_∞ a.e. on (Ω, F_∞, P).
2° X_∞ is a final element for X.

Proof. If X is a supermartingale then −X is a submartingale and X⁻ = (−X)⁺. Thus if we prove the theorem for a submartingale then we also have a proof for a supermartingale. Furthermore, since a martingale is both a submartingale and a supermartingale, and since X is uniformly integrable if and only if both X⁺ and X⁻ are, we also have a proof for a martingale. Therefore to prove the theorem it suffices to consider a submartingale.

Let us assume that X is a submartingale. Suppose X⁺ is uniformly integrable. Then in particular X⁺ is L₁-bounded. Since X is a submartingale, the L₁-boundedness of X⁺ implies that of X by Observation 7.1. Therefore by Theorem 7.4 in case T = Z₊ and by Theorem 7.9 in case T = R₊, there exists an extended real valued F_∞-measurable integrable random variable X_∞ on (Ω, F, P) such that lim_{t→∞} X_t = X_∞ a.e. on (Ω, F_∞, P). Then for every a > 0 we have

(1) lim_{t→∞} X_t ∨ (−a) = X_∞ ∨ (−a) a.e. on (Ω, F_∞, P).

Since |X_t ∨ (−a)| ≤ X_t⁺ + a and since X⁺ is uniformly integrable, X ∨ (−a) = {X_t ∨ (−a) : t ∈ T} is uniformly integrable by Proposition 4.11. The uniform integrability of X ∨ (−a) and the convergence in (1) imply, according to Theorem 4.16, that

lim_{t→∞} ||X_t ∨ (−a) − X_∞ ∨ (−a)||₁ = 0.
Since convergence in L₁ of random variables implies convergence in L₁ of their conditional expectations (see Theorem B.26), for a fixed t₀ ∈ T we have

lim_{t→∞} ||E(X_t ∨ (−a)|F_{t₀}) − E(X_∞ ∨ (−a)|F_{t₀})||₁ = 0.

Since convergence of a sequence in L₁ implies the existence of a subsequence which converges a.e., there exists a sequence {t_n : n ∈ N}, t_n ↑ ∞ as n → ∞, such that

(2) lim_{n→∞} E(X_{t_n} ∨ (−a)|F_{t₀}) = E(X_∞ ∨ (−a)|F_{t₀}) a.e. on (Ω, F_{t₀}, P).
Since X is a submartingale, X ∨ (−a) is a submartingale by Theorem 5.9. Then for n ∈ N large enough so that t_n ≥ t₀ we have, by the submartingale property of X ∨ (−a),

(3) E(X_{t_n} ∨ (−a)|F_{t₀}) ≥ X_{t₀} ∨ (−a) a.e. on (Ω, F_{t₀}, P).

By (2) and (3) we have

E(X_∞ ∨ (−a)|F_{t₀}) ≥ X_{t₀} ∨ (−a) a.e. on (Ω, F_{t₀}, P).
Letting a → ∞, by the Conditional Monotone Convergence Theorem (see Theorem B.22) we have E(X_∞|F_{t₀}) ≥ X_{t₀} a.e. on (Ω, F_{t₀}, P). Since this holds for every t₀ ∈ T, the random variable X_∞ is a final element for X.

Conversely, suppose that there exists a final element for X, that is, there exists an extended real valued integrable F_∞-measurable random variable ξ on (Ω, F, P) such that for every t ∈ T we have

(4) E(ξ|F_t) ≥ X_t a.e. on (Ω, F_t, P).
Let us show that X⁺ is uniformly integrable. Now (4) implies

(E(ξ|F_t))⁺ ≥ X_t⁺ a.e. on (Ω, F_t, P).

According to Jensen's Inequality for conditional expectations (see Theorem B.22), if η is an integrable random variable on (Ω, F, P), φ is a convex function on R such that φ(η) is an integrable random variable, and G is an arbitrary sub-σ-algebra of F, then E(φ(η)|G) ≥ φ(E(η|G)) a.e. on (Ω, G, P). Since φ(x) = x ∨ 0 for x ∈ R is a convex function, we have

E(ξ⁺|F_t) ≥ (E(ξ|F_t))⁺ a.e. on (Ω, F_t, P).

Combining the last two inequalities above we have

(5) E(ξ⁺|F_t) ≥ X_t⁺ a.e. on (Ω, F_t, P).
By (5) we have E(X_t⁺) ≤ E(ξ⁺) for t ∈ T, so that for every λ > 0 we have

P{X_t⁺ > λ} ≤ E(X_t⁺)/λ ≤ E(ξ⁺)/λ,

and therefore lim_{λ→∞} P{X_t⁺ > λ} = 0 uniformly in t ∈ T; that is, for every η > 0 there exists some λ₀ > 0 such that P{X_t⁺ > λ} < η for all t ∈ T whenever λ ≥ λ₀. Now the
integrability of ξ⁺ implies that for every ε > 0 there exists η > 0 such that ∫_E ξ⁺ dP < ε whenever E ∈ F and P(E) < η. Thus for every ε > 0 there exists λ₀ > 0 such that

sup_{t∈T} ∫_{{X_t⁺>λ}} ξ⁺ dP ≤ ε whenever λ ≥ λ₀.

The left side of the last inequality is a nonnegative decreasing function of λ, and thus its limit as λ → ∞ exists and is bounded between 0 and ε. By the arbitrariness of ε > 0 we have

(6) lim_{λ→∞} sup_{t∈T} ∫_{{X_t⁺>λ}} ξ⁺ dP = 0.
Since {X_t⁺ > λ} ∈ F_t, we have from (5)

(7) ∫_{{X_t⁺>λ}} X_t⁺ dP ≤ ∫_{{X_t⁺>λ}} ξ⁺ dP.

By (6) and (7) we have

lim_{λ→∞} sup_{t∈T} ∫_{{X_t⁺>λ}} X_t⁺ dP ≤ lim_{λ→∞} sup_{t∈T} ∫_{{X_t⁺>λ}} ξ⁺ dP = 0.

This proves the uniform integrability of X⁺. ■
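The tail integrals sup_t ∫_{{X_t⁺>λ}} X_t⁺ dP estimated in (6) and (7) are exactly what uniform integrability controls. As a hedged illustration (an example added here, with (Ω, F, P) taken to be the interval (0, 1) with Lebesgue measure, an assumption of the example), the following Python sketch evaluates these tails in closed form for the classical L₁-bounded but not uniformly integrable family X_n = n·1_{(0,1/n)}.

```python
def tail(n, lam):
    """Integral of X_n over {X_n > lam} for X_n = n*1_{(0,1/n)} on
    ((0,1), Lebesgue): X_n exceeds lam exactly when n > lam, and
    then the integral is n * (1/n) = 1."""
    return 1.0 if n > lam else 0.0

def sup_tail(lam, n_max=10**4):
    # supremum over the family: for every lam, some n exceeds it
    return max(tail(n, lam) for n in range(1, n_max + 1))

# E(X_n) = 1 for all n, so the family is L1-bounded, yet the tails
# do not vanish uniformly: sup_n tail(n, lam) = 1 for every lam.
print(sup_tail(10.0), sup_tail(1000.0))
```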
[IV] Discrete Time L₂-Martingales

Let (Ω, F, P) be a probability space and consider L₂(Ω, F, P). If we define a binary function ⟨·,·⟩ on L₂(Ω, F, P) by ⟨ξ, η⟩ = ∫_Ω ξη dP for ξ, η ∈ L₂(Ω, F, P), then ⟨·,·⟩ is an inner product and L₂(Ω, F, P) is a Hilbert space with respect to this inner product. For ξ ∈ L₂(Ω, F, P) we have ||ξ||₂ = (∫_Ω ξ² dP)^{1/2} = ⟨ξ, ξ⟩^{1/2}. If ξ, η ∈ L₂(Ω, F, P) and ⟨ξ, η⟩ = 0, then we say that ξ and η are orthogonal.

Lemma 7.13. An L₂-martingale X = {X_n : n ∈ Z₊} on a filtered space (Ω, F, {F_n}, P) is a stochastic process with orthogonal increments, that is, for any n, m, ℓ and k in Z₊ such that n ≥ m ≥ ℓ ≥ k we have
(1) ⟨X_n − X_m, X_ℓ − X_k⟩ = 0.

In particular, for every n ∈ N we have

(2) E(X_n²) = E(X_0²) + Σ_{k=1}^n E[(X_k − X_{k−1})²].
Proof. Let n ≥ m ≥ ℓ ≥ k. Then

⟨X_n − X_m, X_ℓ⟩ = E[(X_n − X_m)X_ℓ] = E[ E[(X_n − X_m)X_ℓ | F_ℓ] ] = E[ X_ℓ E[X_n − X_m | F_ℓ] ] = E[X_ℓ(X_ℓ − X_ℓ)] = 0.

Similarly we have ⟨X_n − X_m, X_k⟩ = 0. Therefore (1) holds. To prove (2), let us write X_n = X_0 + Σ_{k=1}^n (X_k − X_{k−1}). Then

E(X_n²) = ⟨ X_0 + Σ_{k=1}^n (X_k − X_{k−1}), X_0 + Σ_{k=1}^n (X_k − X_{k−1}) ⟩
= ⟨X_0, X_0⟩ + Σ_{k=1}^n ⟨X_k − X_{k−1}, X_k − X_{k−1}⟩ = E(X_0²) + Σ_{k=1}^n E[(X_k − X_{k−1})²],

the cross terms vanishing by (1). ■
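Identity (2) is a discrete "energy identity": squared increments add because the increments are orthogonal. The following Python sketch (a Monte Carlo illustration added here; the ±1 random-walk martingale is an assumption of the example) checks E(X_n²) = E(X_0²) + Σ_{k=1}^n E[(X_k − X_{k−1})²], whose right side equals n when X_0 = 0 and each increment is ±1.

```python
import random

def energy_check(n=50, n_paths=20000, seed=2):
    """Estimate E(X_n^2) for X_n = sum of n iid ±1 increments,
    an L2-martingale with X_0 = 0; by Lemma 7.13(2) the exact
    value is 0 + n * 1 = n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x = 0
        for _ in range(n):
            x += 1 if rng.random() < 0.5 else -1
        total += x * x
    return total / n_paths

est = energy_check()
print(est)  # close to n = 50
```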
Theorem 7.14. Let X = {X_n : n ∈ Z₊} be an L₂-martingale on a filtered space (Ω, F, {F_n}, P). Then X is L₂-bounded if and only if

(1) Σ_{k∈N} E[|X_k − X_{k−1}|²] < ∞.

If X is L₂-bounded then there exists X_∞ ∈ L₂(Ω, F_∞, P) such that

(2) lim_{n→∞} X_n = X_∞ a.e. on (Ω, F_∞, P),

(3) lim_{n→∞} E[|X_n − X_∞|²] = 0,

and

(4) E[|X_∞ − X_n|²] = Σ_{k=n+1}^∞ E[|X_k − X_{k−1}|²].
Proof. Observe that sup_{n∈Z₊} E(X_n²) < ∞ if and only if sup_{n∈N} Σ_{k=1}^n E[|X_k − X_{k−1}|²] < ∞, that is, Σ_{k∈N} E[|X_k − X_{k−1}|²] < ∞, by (2) of Lemma 7.13. Thus X is L₂-bounded if and only if (1) holds. Suppose X is L₂-bounded. Then X is L₁-bounded, so that by Theorem 7.4 there exists X_∞ ∈ L₁(Ω, F_∞, P) such that (2) holds. To show that X_∞ ∈ L₂(Ω, F_∞, P) and that (3) holds,
note that by (1) of Lemma 7.13, for any n ∈ Z₊ and p ∈ N we have

(5) E[|X_{n+p} − X_n|²] = ⟨ Σ_{k=n+1}^{n+p} (X_k − X_{k−1}), Σ_{k=n+1}^{n+p} (X_k − X_{k−1}) ⟩
= Σ_{k=n+1}^{n+p} ⟨X_k − X_{k−1}, X_k − X_{k−1}⟩ = Σ_{k=n+1}^{n+p} E[|X_k − X_{k−1}|²].

By (2), Fatou's Lemma, and (5) we have

E[|X_∞ − X_n|²] = E[ lim_{p→∞} |X_{n+p} − X_n|² ] ≤ liminf_{p→∞} E[|X_{n+p} − X_n|²]
= liminf_{p→∞} Σ_{k=n+1}^{n+p} E[|X_k − X_{k−1}|²] ≤ Σ_{k=n+1}^∞ E[|X_k − X_{k−1}|²].
The last sum is finite by (1). This implies that X_∞ − X_n ∈ L₂(Ω, F_∞, P) and then X_∞ ∈ L₂(Ω, F_∞, P). Letting n → ∞ in the last member of the inequalities above and recalling (1), we have (3). Since lim_{n→∞} X_n = X_∞ in L₂ implies lim_{p→∞} (X_{n+p} − X_n) = X_∞ − X_n in L₂, and consequently lim_{p→∞} E[|X_{n+p} − X_n|²] = E[|X_∞ − X_n|²], by letting p → ∞ on both sides of (5) we have (4). ■

Let X be an L₂-martingale on a filtered space (Ω, F, {F_n}, P). Then X² is a submartingale, so that according to Theorem 5.15 we have the Doob decomposition X² = X_0² + M + A, where M is a null at 0 martingale and A is a predictable almost surely increasing process, and M and A are unique up to a null set in (Ω, F_∞, P).

Definition 7.15. Let X be an L₂-martingale on a filtered space (Ω, F, {F_n}, P). The predictable almost surely increasing process A in the Doob decomposition X² = X_0² + M + A of the submartingale X² is called the quadratic variation process of X. It is uniquely determined up to a null set in (Ω, F_∞, P).

Theorem 7.16. Let X be an L₂-martingale on a filtered space (Ω, F, {F_n}, P) and let A be the quadratic variation process of X. Let A_∞ = lim_{n→∞} A_n. Then for every n ∈ N we have

(1) E(X_n² − X_{n−1}² | F_{n−1}) = E(|X_n − X_{n−1}|² | F_{n−1}) = A_n − A_{n−1} a.e. on (Ω, F_{n−1}, P),
and in particular

(2) E(X_n² − X_{n−1}²) = E(|X_n − X_{n−1}|²) = E(A_n − A_{n−1}).

Also, X is L₂-bounded if and only if A_∞ is integrable, that is, E(A_∞) < ∞.

Proof. Let the Doob decomposition of the submartingale X² be given by

(3) X² = X_0² + M + A,

where M is a null at 0 martingale. Then

E(X_n² − X_{n−1}² | F_{n−1}) = E[(M_n + A_n) − (M_{n−1} + A_{n−1}) | F_{n−1}] = E(A_n − A_{n−1} | F_{n−1}) = A_n − A_{n−1}

by the martingale property of M and then by the predictability of A. On the other hand,

E(|X_n − X_{n−1}|² | F_{n−1}) = E(X_n² − 2X_n X_{n−1} + X_{n−1}² | F_{n−1})
= E(X_n² | F_{n−1}) − 2X_{n−1} E(X_n | F_{n−1}) + X_{n−1}² = E(X_n² | F_{n−1}) − 2X_{n−1}² + X_{n−1}² = E(X_n² − X_{n−1}² | F_{n−1}).

This completes the proof of (1). Integrating both sides of (1) we have (2). Since M is a null at 0 martingale we have E(M_n) = E(M_0) = 0 for n ∈ N. Thus from (3) we have E(X_n²) = E(X_0²) + E(A_n). Then

sup_{n∈Z₊} E(X_n²) = E(X_0²) + sup_{n∈Z₊} E(A_n) = E(X_0²) + lim_{n→∞} E(A_n) = E(X_0²) + E(A_∞)

by the Monotone Convergence Theorem. Therefore sup_{n∈Z₊} E(X_n²) < ∞ if and only if E(A_∞) < ∞. ■
§8 Uniformly Integrable Submartingales

[I] Convergence of Uniformly Integrable Submartingales

Let X = {X_t : t ∈ T} be a submartingale, martingale or supermartingale on a filtered space (Ω, F, {F_t : t ∈ T}, P). In Theorem 7.9 we showed that if X is L₁-bounded then there
exists an extended real valued integrable F_∞-measurable random variable X_∞ on (Ω, F, P) such that lim_{t→∞} X_t = X_∞ a.e. on (Ω, F_∞, P). We show next that if X is uniformly integrable then lim_{t→∞} ||X_t − X_∞||₁ = 0.

Theorem 8.1. Let X = {X_t : t ∈ T} be a uniformly integrable submartingale, martingale or supermartingale on a filtered space (Ω, F, {F_t : t ∈ T}, P). Then there exists an extended real valued integrable F_∞-measurable random variable X_∞ on (Ω, F, P) such that

1° lim_{t→∞} X_t = X_∞ a.e. on (Ω, F_∞, P).
2° X_∞ is a final element for X.
3° lim_{t→∞} ||X_t − X_∞||₁ = 0.
Proof. If X is uniformly integrable then so are X⁺ and X⁻. Thus by Theorem 7.12 there exists an extended real valued integrable F_∞-measurable random variable X_∞ on (Ω, F, P) satisfying 1° and 2°. Now 1° implies P-lim_{t→∞} X_t = X_∞. This and the uniform integrability of X imply 3° according to Theorem 4.16. ■

Let us note that under the assumption of uniform integrability of X, not just the uniform integrability of X⁺ or X⁻ as in Theorem 7.12, 2° in Theorem 8.1 can be derived more directly than in the proof of Theorem 7.12. Let us show this for a submartingale X. Now for s, t ∈ T, t < s, we have E(X_s|F_t) ≥ X_t a.e. on (Ω, F_t, P) and thus

(1) ∫_Λ X_s dP ≥ ∫_Λ X_t dP for Λ ∈ F_t.

Now for Λ ∈ F_t we have by 3° of Theorem 8.1

lim_{s→∞} | ∫_Λ X_s dP − ∫_Λ X_∞ dP | ≤ lim_{s→∞} ∫_Λ |X_s − X_∞| dP ≤ lim_{s→∞} ||X_s − X_∞||₁ = 0,

so that lim_{s→∞} ∫_Λ X_s dP = ∫_Λ X_∞ dP. Thus, letting s → ∞ in (1), we obtain

∫_Λ X_∞ dP ≥ ∫_Λ X_t dP for Λ ∈ F_t.

Since this holds for every Λ ∈ F_t we have E(X_∞|F_t) ≥ X_t. Thus X_∞ is a final element for X.

The next theorem characterizes a uniformly integrable martingale as a martingale for which a final element exists.
Theorem 8.2. Let X = {X_t : t ∈ T} be a martingale on a filtered space (Ω, F, {F_t : t ∈ T}, P). Then X is uniformly integrable if and only if there exists a final element for X, that is, there exists an extended real valued integrable F_∞-measurable random variable ξ on (Ω, F, P) such that E(ξ|F_t) = X_t a.e. on (Ω, F_t, P) for every t ∈ T.

Proof. If X is uniformly integrable then by Theorem 8.1 there exists a final element for X. Conversely, if a final element ξ exists for X then E(ξ|F_t) = X_t a.e. on (Ω, F_t, P) for every t ∈ T. Thus X is uniformly integrable by Theorem 4.24. ■

Remark 8.3. As we noted following Definition 5.2, if (Ω, F, {F_t : t ∈ T}, P) is a filtered space and ξ is an integrable random variable on the probability space (Ω, F, P), then by letting X_t be an arbitrary real valued version of E(ξ|F_t) for t ∈ T we obtain a uniformly integrable martingale X = {X_t : t ∈ T}. Now the uniform integrability of X implies, according to Theorem 8.1, the existence of an extended real valued integrable F_∞-measurable random variable X_∞ which is a final element of the martingale X and to which X converges both almost surely and in L₁. But any version Y_∞ of E(ξ|F_∞) is a final element of the martingale X. Thus by the uniqueness of the final element of a martingale according to Theorem 5.3 we have Y_∞ = X_∞ a.e. on (Ω, F_∞, P). In particular, we have lim_{t→∞} X_t = Y_∞ a.e. on (Ω, F_∞, P) as well as lim_{t→∞} ||X_t − Y_∞||₁ = 0; that is, we have

1° lim_{t→∞} E(ξ|F_t) = E(ξ|F_∞) a.e. on (Ω, F_∞, P).
2° lim_{t→∞} ||E(ξ|F_t) − E(ξ|F_∞)||₁ = 0.
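Statements 1° and 2° of Remark 8.3 can be watched numerically. A hedged illustration (an example added here; taking Ω = (0, 1) with Lebesgue measure and F_n generated by the dyadic intervals of length 2^{−n} is an assumption of the example): E(ξ|F_n) is then the average of ξ over the dyadic interval containing ω, and the L₁ distance to ξ shrinks as n grows.

```python
def cond_exp_dyadic(f, n, grid=2**12):
    """Approximate E(f|F_n) on (0,1) with Lebesgue measure, where F_n
    is generated by the 2^n dyadic intervals of length 2^-n: on each
    interval the conditional expectation is the average of f there.
    Values are tabulated at grid midpoints."""
    per = grid // 2**n                  # grid points per dyadic cell
    vals = [f((i + 0.5) / grid) for i in range(grid)]
    cond = []
    for c in range(2**n):
        avg = sum(vals[c * per:(c + 1) * per]) / per
        cond.extend([avg] * per)
    return vals, cond

def l1_error(f, n):
    vals, cond = cond_exp_dyadic(f, n)
    return sum(abs(v - c) for v, c in zip(vals, cond)) / len(vals)

f = lambda u: u * u                     # xi(omega) = omega^2
errs = [l1_error(f, n) for n in (1, 3, 6)]
print(errs)  # ||E(xi|F_n) - xi||_1 decreases toward 0
```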
[II] Submartingales with Reversed Time

A submartingale X = {X_t : t ∈ T} has a first element but no last element. We define a submartingale with reversed time as a submartingale which has a last element but no first element. Our main question regarding a submartingale with reversed time is its convergence as the time parameter decreases. Here we confine ourselves to the discrete case. We shall show that if a submartingale with reversed time X = {X_{−n} : n ∈ Z₊} is uniformly integrable then it has a first element to which X_{−n} converges both almost surely and in L₁ as n → ∞.

Definition 8.4. Let (Ω, F, P) be a probability space. A system of sub-σ-algebras of F, {F_{−n} : n ∈ Z₊}, is called a filtration if it is an increasing system, that is, F_{−m} ⊂ F_{−n} for m, n ∈ Z₊ such that m ≥ n. We define F_{−∞} = ∩_{n∈Z₊} F_{−n}. A stochastic process X = {X_{−n} : n ∈ Z₊} on a filtered space (Ω, F, {F_{−n} : n ∈ Z₊}, P) is said to be {F_{−n}}-adapted
(or simply adapted) if X_{−n} is F_{−n}-measurable for every n ∈ Z₊. An {F_{−n}}-adapted L₁-process X = {X_{−n} : n ∈ Z₊} is called a submartingale, martingale or supermartingale with reversed time if E(X_{−n}|F_{−m}) ≥, =, or ≤ X_{−m} a.e. on (Ω, F_{−m}, P) for n, m ∈ Z₊, m ≥ n.

Lemma 8.5. 1) If X = {X_{−n} : n ∈ Z₊} is an adapted process on a filtered space (Ω, F, {F_{−n}}, P), then liminf_{n→∞} X_{−n} is an extended real valued F_{−∞}-measurable random variable on (Ω, F, P).
2) Every martingale, nonnegative submartingale and nonpositive supermartingale with reversed time is an L₁-bounded process.

Proof. 1) By definition, liminf_{n→∞} X_{−n} = lim_{n→∞} inf_{m≥n} X_{−m}. If X = {X_{−n} : n ∈ Z₊} is adapted with respect to {F_{−n} : n ∈ Z₊}, then inf_{m≥n} X_{−m} is F_{−n}-measurable for every n ∈ Z₊, so that lim_{n→∞} inf_{m≥n} X_{−m} is F_{−n}-measurable for every n ∈ Z₊ and thus is ∩_{n∈Z₊} F_{−n}-measurable. Therefore liminf_{n→∞} X_{−n} is F_{−∞}-measurable.
2) If X = {X_{−n} : n ∈ Z₊} is a nonnegative submartingale with reversed time, then E(X_{−n}) ↓ as n → ∞ and thus E(X_{−n}) ≤ E(X_0) for all n ∈ Z₊. Then by the nonnegativity of X we have sup_{n∈Z₊} E(|X_{−n}|) ≤ E(|X_0|) < ∞. Thus X is L₁-bounded. If X is a martingale with reversed time, then |X| is a nonnegative submartingale with reversed time, so that |X| is L₁-bounded, or equivalently, X is L₁-bounded. If X is a nonpositive supermartingale with reversed time, then −X is a nonnegative submartingale with reversed time, so that −X is L₁-bounded, or equivalently, X is L₁-bounded. ■

The next theorem shows that if X = {X_{−n} : n ∈ Z₊} is a submartingale with reversed time, then X_{−∞} = lim_{n→∞} X_{−n} always exists a.e. on (Ω, F_{−∞}, P). If X is uniformly integrable, then X_{−∞} is integrable and X_{−n} converges to X_{−∞} as n → ∞ both almost surely and in L₁, and furthermore X_{−∞} is a first element of the submartingale with reversed time.

Theorem 8.6. Let X = {X_{−n} : n ∈ Z₊} be a submartingale with reversed time on a filtered space (Ω, F, {F_{−n}}, P). Let X_{−∞} be the extended real valued F_{−∞}-measurable random variable on (Ω, F, P) defined by X_{−∞}(ω) = liminf_{n→∞} X_{−n}(ω) for ω ∈ Ω. Then

(1) lim_{n→∞} X_{−n} = X_{−∞} a.e. on (Ω, F_{−∞}, P).
If X is uniformly integrable then

(2) X_{−∞} ∈ L₁(Ω, F_{−∞}, P) and lim_{n→∞} ||X_{−n} − X_{−∞}||₁ = 0,

and for every n ∈ Z₊ we have

(3) E(X_{−n}|F_{−∞}) ≥ X_{−∞} a.e. on (Ω, F_{−∞}, P).
If X is a supermartingale, then the inequality in (3) is reversed. If X is a martingale, then the inequality is replaced by an equality.

Proof. For a, b ∈ R, a < b, consider the upcrossing number U_{[a,b]}^{[−N,0]} X of [a, b] by the finite sequence {X_{−N}, ..., X_0}. If X is a submartingale with reversed time, then by Theorem 6.19

E(U_{[a,b]}^{[−N,0]} X) ≤ (1/(b−a)) E[(X_0 − a)⁺ − (X_{−N} − a)⁺] ≤ (1/(b−a)) E[(X_0 − a)⁺],

and if X is a supermartingale with reversed time, then by Theorem 6.20 we have

E(U_{[a,b]}^{[−N,0]} X) ≤ (1/(b−a)) E[(X_0 − a)⁻].

Therefore in both cases we have

E(U_{[a,b]}^{[−N,0]} X) ≤ (1/(b−a)) {E(|X_0|) + |a|}.

Let us define the upcrossing number of [a, b] by X = {X_{−n} : n ∈ Z₊} by
U_{[a,b]}^{(−∞,0]} X = lim_{N→∞} U_{[a,b]}^{[−N,0]} X.

Then by the Monotone Convergence Theorem

E(U_{[a,b]}^{(−∞,0]} X) = lim_{N→∞} E(U_{[a,b]}^{[−N,0]} X) ≤ (1/(b−a)) {E(|X_0|) + |a|} < ∞.

Thus U_{[a,b]}^{(−∞,0]} X < ∞ a.e. on (Ω, F_{−∞}, P). By the same argument as in Lemma 7.3 we have, for any a, b ∈ R, a < b,

liminf_{n→∞} X_{−n}(ω) < a < b < limsup_{n→∞} X_{−n}(ω) ⟹ (U_{[a,b]}^{(−∞,0]} X)(ω) = ∞.

Then the fact that U_{[a,b]}^{(−∞,0]} X < ∞ a.e. on (Ω, F_{−∞}, P) implies that liminf_{n→∞} X_{−n}(ω) = limsup_{n→∞} X_{−n}(ω), which in turn implies that lim_{n→∞} X_{−n}(ω) exists in R̄, for
a.e. ω in (Ω, F_{−∞}, P). Thus for our X_{−∞} = liminf_{n→∞} X_{−n} we have X_{−∞} = lim_{n→∞} X_{−n} a.e. on (Ω, F_{−∞}, P). Now by Fatou's Lemma we have

E(|X_{−∞}|) = E(lim_{n→∞} |X_{−n}|) ≤ liminf_{n→∞} E(|X_{−n}|) ≤ sup_{n∈Z₊} E(|X_{−n}|).

If we assume that X is uniformly integrable, then X is L₁-bounded, so that the last member in the inequalities above is finite and therefore X_{−∞} is integrable. Then X_{−∞} is finite a.e. on (Ω, F_{−∞}, P) and therefore X_{−n} converges to X_{−∞} a.e. on (Ω, F_{−∞}, P) as n → ∞. The uniform integrability of X then implies (2) by Theorem 4.16. To prove (3) it suffices to show that for every n ∈ Z₊

(4) ∫_Λ X_{−n} dP ≥ ∫_Λ X_{−∞} dP for Λ ∈ F_{−∞}.
Note that for any Λ ∈ F and k ∈ Z₊ we have

| ∫_Λ X_{−k} dP − ∫_Λ X_{−∞} dP | ≤ ∫_Λ |X_{−k} − X_{−∞}| dP ≤ ||X_{−k} − X_{−∞}||₁,

so that by (2) we have

(5) lim_{k→∞} ∫_Λ X_{−k} dP = ∫_Λ X_{−∞} dP for Λ ∈ F.

Now for m, n ∈ Z₊, m ≥ n, since E(X_{−n}|F_{−m}) ≥ X_{−m} a.e. on (Ω, F_{−m}, P), we have ∫_Λ X_{−n} dP ≥ ∫_Λ X_{−m} dP for Λ ∈ F_{−m}. Then since F_{−∞} ⊂ F_{−m} we have

(6) ∫_Λ X_{−n} dP ≥ ∫_Λ X_{−m} dP for Λ ∈ F_{−∞}.
Then applying (5) to the right side of (6) we have (4). ■

Let us remark that if a first element exists for a martingale with reversed time, then it is unique. To show this, let X = {X_{−n} : n ∈ Z₊} be a martingale with reversed time on a filtered space (Ω, F, {F_{−n} : n ∈ Z₊}, P). If X_{−∞} and Y_{−∞} are first elements of X, then E(X_0|F_{−∞}) = X_{−∞} and E(X_0|F_{−∞}) = Y_{−∞} a.e. on (Ω, F_{−∞}, P), so that X_{−∞} = Y_{−∞} a.e. on (Ω, F_{−∞}, P).
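The best-known reversed-time martingale (an illustration added here, not developed in the text) is the sequence of sample means: for iid integrable ξ_k with partial sums S_n, setting X_{−n} = S_n/n and F_{−n} = σ(S_n, ξ_{n+1}, ξ_{n+2}, ...) gives a martingale with reversed time, and Theorem 8.6 yields its a.e. and L₁ convergence. The Python sketch below (the Uniform(0,1) draws are an assumption of the example) simply exhibits the convergence of the running means.

```python
import random

def sample_means(n_max=20000, seed=3):
    """Running means X_{-n} = S_n/n of iid Uniform(0,1) draws,
    recorded at a few values of n; a reversed-time martingale
    whose limit here is the mean 1/2."""
    rng = random.Random(seed)
    s, means = 0.0, {}
    for n in range(1, n_max + 1):
        s += rng.random()
        if n in (10, 100, 1000, 20000):
            means[n] = s / n
    return means

means = sample_means()
print(means)  # values settle near 0.5 as n grows
```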
The next theorem concerns a martingale with reversed time generated by an integrable random variable.
Theorem 8.7. Let (Ω, F, {F_{−n} : n ∈ Z₊}, P) be a filtered space and ξ be an extended real valued integrable random variable on (Ω, F, P). Let X_{−n} be an arbitrary version of E(ξ|F_{−n}) for n ∈ Z₊, and let Y_{−∞} be an arbitrary version of E(ξ|F_{−∞}). Then for the uniformly integrable martingale with reversed time X = {X_{−n} : n ∈ Z₊} we have

(1) lim_{n→∞} X_{−n} = Y_{−∞} a.e. on (Ω, F_{−∞}, P)

and

(2) lim_{n→∞} ||X_{−n} − Y_{−∞}||₁ = 0.
Proof. By Theorem 4.24, X is a uniformly integrable martingale with reversed time. Then there exists an extended real valued integrable F_{−∞}-measurable random variable X_{−∞} such that lim_{n→∞} X_{−n} = X_{−∞} a.e. on (Ω, F_{−∞}, P) and lim_{n→∞} ||X_{−n} − X_{−∞}||₁ = 0 by Theorem 8.6. Therefore, to prove (1) and (2) it suffices to show that Y_{−∞} = X_{−∞} a.e. on (Ω, F_{−∞}, P). Since Y_{−∞} and X_{−∞} are both F_{−∞}-measurable, it suffices to verify

(3) ∫_Λ Y_{−∞} dP = ∫_Λ X_{−∞} dP for Λ ∈ F_{−∞}.
Let Λ ∈ F_{−∞}. Since F_{−∞} = ∩_{n∈Z₊} F_{−n}, we have Λ ∈ F_{−n} for every n ∈ Z₊. Since E(ξ|F_{−n}) = X_{−n} a.e. on (Ω, F_{−n}, P) and Λ ∈ F_{−n}, we have

(4) ∫_Λ ξ dP = ∫_Λ X_{−n} dP for all n ∈ Z₊.

Since Y_{−∞} is a version of E(ξ|F_{−∞}), we have

(5) ∫_Λ Y_{−∞} dP = ∫_Λ ξ dP.

But lim_{n→∞} ||X_{−n} − X_{−∞}||₁ = 0 implies

lim_{n→∞} ∫_Λ X_{−n} dP = ∫_Λ X_{−∞} dP for Λ ∈ F.

Thus, letting n → ∞ in (4), we have

(6) ∫_Λ ξ dP = ∫_Λ X_{−∞} dP for Λ ∈ F_{−∞}.
With (5) and (6) we have (3). ■

In Theorem 8.6 we showed that if a submartingale with reversed time X = {X_{−n} : n ∈ Z₊} is uniformly integrable, then it has a first element to which it converges both almost surely and in L₁ as n → ∞. In the next theorem we give a sufficient condition for a submartingale with reversed time to be uniformly integrable. This condition is derived from the fact that a submartingale with reversed time has a last element, namely X_0.

Theorem 8.8. If X = {X_{−n} : n ∈ Z₊} is a submartingale with reversed time on a filtered space (Ω, F, {F_{−n}}, P) and satisfies the condition
(1) inf_{n∈Z₊} E(X_{−n}) > −∞,

then X is uniformly integrable. Similarly, if X is a supermartingale with reversed time and satisfies the condition

(2) sup_{n∈Z₊} E(X_{−n}) < ∞,

then X is uniformly integrable.

Proof. Let X be a submartingale with reversed time satisfying (1). To show that X is uniformly integrable, we show, according to (3) of Definition 4.6, that for every ε > 0 there exists λ > 0 such that
(3) ∫_{{|X_{−n}|>λ}} |X_{−n}| dP < ε for all n ∈ Z₊.

Now since X is a submartingale with reversed time, E(X_{−n}) ↓ as n ↑. By (1), E(X_{−n}) ↓ c ∈ R as n ↑ ∞. Then for every ε > 0 there exists N ∈ Z₊ such that E(X_{−N}) − c < ε/2, and therefore

(4) E(X_{−N}) − E(X_{−n}) < ε/2 for n ≥ N.

Now for arbitrary λ > 0 and n ∈ Z₊ we have

(5) ∫_{{|X_{−n}|>λ}} |X_{−n}| dP = ∫_{{X_{−n}>λ}} X_{−n} dP − ∫_{{X_{−n}<−λ}} X_{−n} dP
= ∫_{{X_{−n}>λ}} X_{−n} dP + ∫_{{X_{−n}≥−λ}} X_{−n} dP − ∫_Ω X_{−n} dP.
By the submartingale property of X we have E(X_{−N}|F_{−n}) ≥ X_{−n} a.e. on (Ω, F_{−n}, P) for n ≥ N. Since {X_{−n} > λ}, {X_{−n} ≥ −λ} ∈ F_{−n}, we have from (5) and (4)

(6) ∫_{{|X_{−n}|>λ}} |X_{−n}| dP ≤ ∫_{{X_{−n}>λ}} X_{−N} dP + ∫_{{X_{−n}≥−λ}} X_{−N} dP − ∫_Ω X_{−n} dP
≤ ∫_{{X_{−n}>λ}} X_{−N} dP + ∫_{{X_{−n}≥−λ}} X_{−N} dP − ∫_Ω X_{−N} dP + ε/2
= ∫_{{X_{−n}>λ}} X_{−N} dP − ∫_{{X_{−n}<−λ}} X_{−N} dP + ε/2
≤ ∫_{{X_{−n}>λ}} |X_{−N}| dP + ∫_{{X_{−n}<−λ}} |X_{−N}| dP + ε/2
= ∫_{{|X_{−n}|>λ}} |X_{−N}| dP + ε/2 for n ≥ N.

Since X_0, ..., X_{−N} are all integrable, there exists δ > 0 such that

(7) ∫_Λ |X_0| dP, ..., ∫_Λ |X_{−N}| dP < ε/2 for Λ ∈ F with P(Λ) < δ.
Since |X_{−n}| = 2X_{−n}⁺ − X_{−n} for every n ∈ Z₊, we have

P{|X_{−n}| > λ} ≤ (1/λ) E(|X_{−n}|) = (1/λ) {2E(X_{−n}⁺) − E(X_{−n})}.

Since X is a submartingale with reversed time, so is X⁺, and thus E(X_{−n}⁺) ≤ E(X_0⁺). Recalling that E(X_{−n}) ↓ c as n ↑ ∞, we have

P{|X_{−n}| > λ} ≤ (1/λ) {2E(X_0⁺) − c} < ∞ for all n ∈ Z₊.

Therefore there exists λ > 0 so large that P{|X_{−n}| > λ} < δ for all n ∈ Z₊. For such λ, by (7) and (6) we have ∫_{{|X_{−n}|>λ}} |X_{−n}| dP < ε for all n ∈ Z₊, which proves (3) and thus establishes the uniform integrability of the submartingale with reversed time.

If X is a supermartingale with reversed time and satisfies (2), then −X is a submartingale with reversed time and satisfies inf_{n∈Z₊} E(−X_{−n}) > −∞. Thus −X is uniformly integrable and so is X. ■

Corollary 8.9. Let X = {X_t : t ∈ R₊} be a submartingale or a supermartingale on a filtered space (Ω, F, {F_t}, P). Let {t_{−n} : n ∈ Z₊} be a strictly decreasing sequence in R₊ with t_{−n} ↓ t_{−∞} ∈ R₊. Let F_{t_{−∞}} = ∩_{n∈Z₊} F_{t_{−n}}. Then {X_{t_{−n}} : n ∈ Z₊} is uniformly
integrable and there exists Y_{t_{−∞}} ∈ L₁(Ω, F_{t_{−∞}}, P) such that lim_{n→∞} X_{t_{−n}} = Y_{t_{−∞}} a.e. on (Ω, F_{t_{−∞}}, P) and lim_{n→∞} ||X_{t_{−n}} − Y_{t_{−∞}}||₁ = 0.

Proof. Since t_{−n} ↓ as n ↑, {X_{t_{−n}} : n ∈ Z₊} is a submartingale or a supermartingale with reversed time with respect to {F_{t_{−n}} : n ∈ Z₊}, according as X is a submartingale or a supermartingale. Now if X is a submartingale, then since t_{−n} ≥ 0 for n ∈ Z₊ we have

inf_{n∈Z₊} E(X_{t_{−n}}) ≥ E(X_0) > −∞,
[III] Optional Sampling by Unbounded Stopping Times According to Theorem 6.4, if X = {Xn : n € Z + } is a submartingale and S and T are two bounded stopping times on a filtered space (£2,5, {5„}, P) and if S < T on Q then E(Xz-|5 s ) > Xs a.e. on (£2,5s, -P) where the inequality is replaced by an equality when X is a martingale. Let us extend this result to the case T = R+. Theorem 8.10. (Optional Sampling with Bounded Stopping Times, Continuous Case) Let X = {Xt : t £ R+} be a right-continuous submartingale and S and T be two bounded stopping times on a right-continuous filtered space (Q, 5, {■$t},P) such that S < T on £2. Then Xs and XT are integrable and E ( X T | 5 S ) > Xs a.e. on (£2,5 S ,P). If X is a supermartingale, then the inequality is reversed. IfX is a martingale, then the inequality is replaced by an equality. Proof. Let 5 and T be two bounded stopping times such that S < T < m on Q for some m e N. According to Theorem 3.20 there exist two sequences of stopping times {Sn : n e N} and {T„ : n € N} with respect to the filtration {5, : t 6 R+} such that S„ and Tn assume values in {k2~n : k = 1 , . . . , m2"}, 5„ < Tn on £2 for each n 6 N, 5 n | 5 and T n j T on £2 as n —> oo. Now since the ranges of Sn and Tn are contained in
{k2^{−n} : k = 1, ..., m2^n}, S_n and T_n are also stopping times with respect to the filtration {F_{k2^{−n}} : k ∈ Z₊}. Therefore for each n ∈ N we have, by Theorem 6.4, E(X_{T_n}|F_{S_n}) ≥ X_{S_n} a.e. on (Ω, F_{S_n}, P), so that

(1) ∫_Λ X_{T_n} dP ≥ ∫_Λ X_{S_n} dP for Λ ∈ F_{S_n}.

To prove the theorem we show that

(2) ∫_Λ X_T dP ≥ ∫_Λ X_S dP for Λ ∈ F_S.
Now since T_n ↓ T and S_n ↓ S on Ω and since X is right-continuous, we have

lim_{n→∞} X_{T_n} = X_T and lim_{n→∞} X_{S_n} = X_S on Ω.

If we show that {X_{T_n} : n ∈ N} and {X_{S_n} : n ∈ N} are uniformly integrable, then by Theorem 4.16 we have X_T ∈ L₁(Ω, F_T, P), X_S ∈ L₁(Ω, F_S, P), lim_{n→∞} ||X_{T_n} − X_T||₁ = 0 and lim_{n→∞} ||X_{S_n} − X_S||₁ = 0. From this, we have

lim_{n→∞} ∫_Λ X_{T_n} dP = ∫_Λ X_T dP and lim_{n→∞} ∫_Λ X_{S_n} dP = ∫_Λ X_S dP for Λ ∈ F.

This and (1) then yield (2). Thus it remains to verify the uniform integrability of {X_{T_n} : n ∈ N} and {X_{S_n} : n ∈ N}.

Now for each n ∈ N, T_n is a stopping time with respect to the filtration {F_{k2^{−n}} : k ∈ Z₊}. Since, by the construction in Theorem 3.20, the range of T_{n−1} is contained in that of T_n, T_{n−1} is also a stopping time with respect to the filtration {F_{k2^{−n}} : k ∈ Z₊}. Then since T_n ≤ T_{n−1}, Theorem 6.4 implies that E(X_{T_{n−1}}|F_{T_n}) ≥ X_{T_n}. Recall also that T_n ≤ T_{n−1} implies F_{T_n} ⊂ F_{T_{n−1}}. Thus {X_{T_n} : n ∈ N} is a submartingale with reversed time with respect to the filtration {F_{T_n} : n ∈ N}. Note also that from E(X_{T_n}|F_0) ≥ X_0 a.e. on (Ω, F_0, P) we have E(X_{T_n}) ≥ E(X_0). Then inf_{n∈N} E(X_{T_n}) ≥ E(X_0) > −∞. Therefore, by applying Theorem 8.8 to our submartingale with reversed time {X_{T_n} : n ∈ N} with respect to the filtration {F_{T_n} : n ∈ N}, we have the uniform integrability of {X_{T_n} : n ∈ N}. Similarly for {X_{S_n} : n ∈ N}. ■

Theorem 8.10 can be restated as follows.

Theorem 8.11. Let X = {X_t : t ∈ R₊} be a right-continuous submartingale, martingale or supermartingale on a right-continuous filtered space (Ω, F, {F_t}, P). Let {S_t : t ∈ R₊} be
an increasing system of stopping times on the filtered space, each of which is bounded. Let G_t = F_{S_t} and Y_t = X_{S_t} for t ∈ R₊. Then Y = {Y_t : t ∈ R₊} is a submartingale, martingale or supermartingale on the filtered space (Ω, F, {G_t}, P).

Let X = {X_t : t ∈ T} be an adapted process and T be a stopping time on a filtered space (Ω, F, {F_t}, P), and consider the stopped process X^{T∧} = {X_t^{T∧} : t ∈ T}, where X_t^{T∧}(ω) = X(T(ω) ∧ t, ω) for ω ∈ Ω. In Theorem 3.31 we showed that if T = Z₊ then X^{T∧} is adapted to the filtration {F_{T∧n} : n ∈ Z₊}, and if T = R₊ then, under the assumption that both X and {F_t : t ∈ R₊} are right-continuous, X^{T∧} is a right-continuous process adapted to the right-continuous filtration {F_{T∧t} : t ∈ R₊}.

Theorem 8.12. Let X = {X_t : t ∈ T} be a submartingale, martingale, or supermartingale and T be a stopping time on a filtered space (Ω, F, {F_t}, P).
1) If T = Z₊, then X^{T∧} is a submartingale, martingale, or supermartingale with respect to the filtration {F_{T∧n} : n ∈ Z₊} as well as the filtration {F_n : n ∈ Z₊}.
2) If T = R₊, then under the assumption that both X and {F_t : t ∈ R₊} are right-continuous, X^{T∧} is a right-continuous submartingale, martingale, or supermartingale with respect to the filtration {F_{T∧t} : t ∈ R₊} as well as the filtration {F_t : t ∈ R₊}.

Proof. It suffices to prove the theorem for the case where X is a submartingale. Since X^{T∧} is adapted to {F_{T∧t} : t ∈ T} according to Theorem 3.31, it is also adapted to {F_t : t ∈ T}. Thus, to show that X^{T∧} is a submartingale with respect to {F_{T∧t} : t ∈ T} as well as {F_t : t ∈ T}, we need only verify that for any pair s, t ∈ T, s < t, we have
E(XTM\$TAs)
> XTA3
a.e. on(Q,5 T A 3 ,P),
E(XTAt\3s)
> XTAs
a.e. on(£2,3 s ,P),
and (2)
But (1) holds by the Optional Sampling Theorems, that is, Theorem 6.4 for T = Z+ and Theorem 8.10 for T = R+. Let us prove (2), or equivalently,
(3)
J XTM dP> J XTAs dP for A G &.
Note first that by (1) we have (4)
f XTMdP>
f XTA3dP
forA 0 G5 T A S -
CHAPTER 2. MARTINGALES
Let $A \in \mathfrak{F}_s$. Decompose $A$ into the two subsets $A_1 = A \cap \{T > s\}$ and $A_2 = A \cap \{T \leq s\}$, both of which are in $\mathfrak{F}_s$ since $\{T \leq s\} \in \mathfrak{F}_s$. For $u \in \mathbb{T}$, consider the set

$A_1 \cap \{T \wedge s \leq u\} = A \cap \{T > s\} \cap \{T \wedge s \leq u\}$.

If $u < s$ then the last intersection is an empty set, since $T \wedge s = s > u$ on $\{T > s\}$, so that $A_1 \cap \{T \wedge s \leq u\} = \emptyset \in \mathfrak{F}_u$. If $u \geq s$ then $\{T \wedge s \leq u\} \supset \{T > s\}$, so that $A_1 \cap \{T \wedge s \leq u\} = A \cap \{T > s\} \in \mathfrak{F}_s \subset \mathfrak{F}_u$ since $A \in \mathfrak{F}_s$. Therefore $A_1 \cap \{T \wedge s \leq u\} \in \mathfrak{F}_u$ for every $u \in \mathbb{T}$, that is, $A_1 \in \mathfrak{F}_{T \wedge s}$, and thus by (4) we have

(5) $\int_{A_1} X_{T \wedge t} \, dP \geq \int_{A_1} X_{T \wedge s} \, dP$.

On the other hand, since $s \leq t$, on the set $\{T \leq s\}$ we have $X_{T \wedge t} = X_T = X_{T \wedge s}$, and thus by the fact that $A_2$ is a subset of $\{T \leq s\}$ we have

(6) $\int_{A_2} X_{T \wedge t} \, dP = \int_{A_2} X_{T \wedge s} \, dP$.

Adding (5) and (6) we have (3). ∎

By a similar argument we have the following theorem.

Theorem 8.13. Let $X = \{X_t : t \in \mathbb{T}\}$ be a submartingale, martingale, or supermartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t : t \in \mathbb{T}\}, P)$ and let $S$ and $T$ be two stopping times such that $S \leq T$ on $\Omega$. Then for every $t \in \mathbb{T}$, $\{X_{S \wedge t}, X_{T \wedge t}\}$ is a submartingale, martingale, or supermartingale with respect to $\{\mathfrak{F}_{S \wedge t}, \mathfrak{F}_{T \wedge t}\}$ as well as with respect to $\{\mathfrak{F}_S, \mathfrak{F}_T\}$, according as $X$ is a submartingale, martingale, or supermartingale.

Proof. Again it suffices to consider the case where $X$ is a submartingale. Since $S \wedge t$ and $T \wedge t$ are bounded stopping times with $S \wedge t \leq T \wedge t$ on $\Omega$, the Optional Sampling Theorems (Theorem 6.4 for $\mathbb{T} = \mathbb{Z}_+$ and Theorem 8.10 for $\mathbb{T} = \mathbb{R}_+$) give

(1) $\mathrm{E}(X_{T \wedge t} \,|\, \mathfrak{F}_{S \wedge t}) \geq X_{S \wedge t}$ a.e. on $(\Omega, \mathfrak{F}_{S \wedge t}, P)$.

Since $X_{S \wedge t}$ is $\mathfrak{F}_{S \wedge t}$-measurable it is $\mathfrak{F}_S$-measurable. Similarly $X_{T \wedge t}$ is $\mathfrak{F}_T$-measurable. Then to show that $\{X_{S \wedge t}, X_{T \wedge t}\}$ is a submartingale with respect to $\{\mathfrak{F}_S, \mathfrak{F}_T\}$ it remains to verify that $\mathrm{E}(X_{T \wedge t} \,|\, \mathfrak{F}_S) \geq X_{S \wedge t}$ a.e. on $(\Omega, \mathfrak{F}_S, P)$, or equivalently,

(2) $\int_A X_{T \wedge t} \, dP \geq \int_A X_{S \wedge t} \, dP$ for $A \in \mathfrak{F}_S$.
Note that by (1) we have

(3) $\int_{A_0} X_{T \wedge t} \, dP \geq \int_{A_0} X_{S \wedge t} \, dP$ for $A_0 \in \mathfrak{F}_{S \wedge t}$.

Let $A \in \mathfrak{F}_S$ and decompose $A$ into the two sets $A_1 = A \cap \{S > t\}$ and $A_2 = A \cap \{S \leq t\}$. For $u \in \mathbb{T}$, consider the set

$A_2 \cap \{S \wedge t \leq u\} = A \cap \{S \leq t\} \cap \{S \wedge t \leq u\}$.

If $u < t$ then $\{S \wedge t \leq u\} = \{S \leq u\}$, so that $A_2 \cap \{S \wedge t \leq u\} = A \cap \{S \leq u\} \in \mathfrak{F}_u$ since $A \in \mathfrak{F}_S$. If $u \geq t$ then $\{S \wedge t \leq u\} = \Omega$, so that $A_2 \cap \{S \wedge t \leq u\} = A \cap \{S \leq t\} \in \mathfrak{F}_t \subset \mathfrak{F}_u$, again since $A \in \mathfrak{F}_S$ and $t \leq u$. Therefore $A_2 \cap \{S \wedge t \leq u\} \in \mathfrak{F}_u$ for every $u \in \mathbb{T}$. This shows that $A_2 \in \mathfrak{F}_{S \wedge t}$, and thus by (3) we have

(4) $\int_{A_2} X_{T \wedge t} \, dP \geq \int_{A_2} X_{S \wedge t} \, dP$.

On the set $\{S > t\}$ we have $X_{S \wedge t} = X_t = X_{T \wedge t}$, since $S \leq T$ implies $S \wedge t = t = T \wedge t$ there, and thus by the fact that $A_1$ is a subset of $\{S > t\}$ we have

(5) $\int_{A_1} X_{T \wedge t} \, dP = \int_{A_1} X_{S \wedge t} \, dP$.

Adding (4) and (5) we have (2). ∎

Let us consider optional sampling by unbounded stopping times. Let $X = \{X_t : t \in \mathbb{T}\}$ be a submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t : t \in \mathbb{T}\}, P)$ and let $S$ and $T$ be two stopping times on the filtered space such that $S \leq T$ on $\Omega$. If $S$ and $T$ are bounded, then $\mathrm{E}(X_T \,|\, \mathfrak{F}_S) \geq X_S$ a.e. on $(\Omega, \mathfrak{F}_S, P)$, where the inequality is replaced by an equality when we have a martingale. This was proved in Theorem 6.4 for the case $\mathbb{T} = \mathbb{Z}_+$ and in Theorem 8.10 for the case $\mathbb{T} = \mathbb{R}_+$ under the assumption that both the filtered space and the submartingale were right-continuous. Let us extend these results to cases where the stopping times $S$ and $T$ are no longer assumed to be bounded. Since $S$ and $T$ may assume the value $\infty$, we need a random variable at infinity so that $X_S(\omega)$ can be defined for $\omega \in \Omega$ for which $S(\omega) = \infty$, and similarly for $X_T$. Suppose a final element exists for $X$, that is, there exists an extended real valued integrable $\mathfrak{F}_\infty$-measurable random variable $\xi$ on $(\Omega, \mathfrak{F}, P)$ such that $\mathrm{E}(\xi \,|\, \mathfrak{F}_t) \geq X_t$ a.e. on $(\Omega, \mathfrak{F}_t, P)$ for every $t \in \mathbb{T}$. We showed in Theorem 7.12 that uniform integrability of $X^+$ is a necessary and sufficient condition for the existence of a final element for $X$, and that if $X^+$ is uniformly integrable then there exists an extended real valued integrable
$\mathfrak{F}_\infty$-measurable random variable $X_\infty$ such that $X_\infty$ is a final element of $X$; furthermore $\lim_{t \to \infty} X_t = X_\infty$ a.e. on $(\Omega, \mathfrak{F}_\infty, P)$. However $X_\infty$ is not the only final element for $X$. In fact $X_\infty + c$ is a final element for $X$ for any $c \geq 0$. With an arbitrary final element $\xi$ for $X$, we define $X_S(\omega) = \xi(\omega)$ for $\omega \in \Omega$ such that $S(\omega) = \infty$, and similarly we define $X_T(\omega) = \xi(\omega)$ for $\omega \in \Omega$ such that $T(\omega) = \infty$. In Theorem 8.16 we show that $\mathrm{E}(X_T \,|\, \mathfrak{F}_S) \geq X_S$ a.e. on $(\Omega, \mathfrak{F}_S, P)$ independently of the choice of the final element $\xi$ of $X$ used in defining $X_S$ and $X_T$.

Let us consider first a martingale with discrete time generated by an integrable random variable.

Theorem 8.14. Let $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n : n \in \mathbb{Z}_+\}, P)$ be a filtered space, $\xi$ be an extended real valued integrable random variable on $(\Omega, \mathfrak{F}, P)$, and let $X = \{X_n : n \in \mathbb{Z}_+\}$ be the uniformly integrable martingale on the filtered space defined by letting $X_n$ be an arbitrary real valued version of $\mathrm{E}(\xi \,|\, \mathfrak{F}_n)$ for $n \in \mathbb{Z}_+$. Let $S$ and $T$ be two stopping times on the filtered space such that $S \leq T$ on $\Omega$. With an arbitrary version $Y_\infty$ of $\mathrm{E}(\xi \,|\, \mathfrak{F}_\infty)$, define $X_S$ and $X_T$ with $X_S(\omega) = Y_\infty(\omega)$ for $\omega \in \{S = \infty\}$ and $X_T(\omega) = Y_\infty(\omega)$ for $\omega \in \{T = \infty\}$. Then

(1) $\mathrm{E}(\xi \,|\, \mathfrak{F}_T) = X_T$ a.e. on $(\Omega, \mathfrak{F}_T, P)$,

so that in particular $X_T$ is integrable, and similarly for $X_S$. Also

(2) $\mathrm{E}(X_T \,|\, \mathfrak{F}_S) = X_S$ a.e. on $(\Omega, \mathfrak{F}_S, P)$.

Proof. Let $T_k = T \wedge k$ for $k \in \mathbb{Z}_+$. Then $\lim_{k \to \infty} X_{T_k}(\omega) = \lim_{k \to \infty} X_{T \wedge k}(\omega) = X_T(\omega)$ for $\omega \in \{T < \infty\}$. But we have $\lim_{k \to \infty} X_k(\omega) = Y_\infty(\omega)$ for a.e. $\omega$ in $(\Omega, \mathfrak{F}_\infty, P)$ by Remark 8.3, so that $\lim_{k \to \infty} X_{T_k}(\omega) = \lim_{k \to \infty} X_k(\omega) = Y_\infty(\omega) = X_T(\omega)$ for a.e. $\omega$ in $\{T = \infty\}$. Thus we have $\lim_{k \to \infty} X_{T_k} = X_T$ a.e. on $(\Omega, \mathfrak{F}_\infty, P)$.

To prove (1) we show that

(3) $\int_A \xi \, dP = \int_A X_T \, dP$ for $A \in \mathfrak{F}_T$.

Now since $T_k$ and $k$ are two stopping times and $T_k \leq k$ on $\Omega$, we have $\mathrm{E}(X_k \,|\, \mathfrak{F}_{T_k}) = X_{T_k}$ a.e. on $(\Omega, \mathfrak{F}_{T_k}, P)$ by Theorem 6.4, and thus

$\int_A X_k \, dP = \int_A X_{T_k} \, dP$ for $A \in \mathfrak{F}_{T_k}$.
Now if $A \in \mathfrak{F}_T$ then by Theorem 3.9, $A \cap \{T \leq k\} = A \cap \{T \leq T_k\} \in \mathfrak{F}_{T_k}$, so that

$\int_{A \cap \{T \leq k\}} X_k \, dP = \int_{A \cap \{T \leq k\}} X_{T_k} \, dP$ for $A \in \mathfrak{F}_T$.

Since $\mathrm{E}(\xi \,|\, \mathfrak{F}_k) = X_k$ a.e. on $(\Omega, \mathfrak{F}_k, P)$ and since $A \in \mathfrak{F}_T$ implies $A \cap \{T \leq k\} \in \mathfrak{F}_k$, we have

$\int_{A \cap \{T \leq k\}} \xi \, dP = \int_{A \cap \{T \leq k\}} X_k \, dP = \int_{A \cap \{T \leq k\}} X_{T_k} \, dP = \int_{A \cap \{T \leq k\}} X_T \, dP$ for $A \in \mathfrak{F}_T$,

where the last equality holds because $X_{T_k} = X_T$ on $\{T \leq k\}$. Since this holds for every $k \in \mathbb{Z}_+$ we have

$\int_{A \cap \{T = 0\}} \xi \, dP = \int_{A \cap \{T = 0\}} X_T \, dP$

and

$\int_{A \cap \{k-1 < T \leq k\}} \xi \, dP = \int_{A \cap \{k-1 < T \leq k\}} X_T \, dP$ for $k \in \mathbb{N}$.

From this and from the existence of $\int_{A \cap \{T < \infty\}} \xi \, dP$ we have

(4) $\int_{A \cap \{T < \infty\}} \xi \, dP = \int_{A \cap \{T < \infty\}} X_T \, dP$ for $A \in \mathfrak{F}_T$.

Since $\mathrm{E}(\xi \,|\, \mathfrak{F}_\infty) = Y_\infty$ a.e. on $(\Omega, \mathfrak{F}_\infty, P)$, and since $\{T = \infty\} \in \mathfrak{F}_\infty$ and $A \in \mathfrak{F}_T \subset \mathfrak{F}_\infty$, we have $A \cap \{T = \infty\} \in \mathfrak{F}_\infty$ and therefore

(5) $\int_{A \cap \{T = \infty\}} \xi \, dP = \int_{A \cap \{T = \infty\}} X_T \, dP$ for $A \in \mathfrak{F}_T$.

From (4) and (5) we have (3). This proves (1). Finally, (2) follows from (1), since by (1) applied to $S$ and the tower property of conditional expectations,

$\mathrm{E}(X_T \,|\, \mathfrak{F}_S) = \mathrm{E}[\mathrm{E}(\xi \,|\, \mathfrak{F}_T) \,|\, \mathfrak{F}_S] = \mathrm{E}(\xi \,|\, \mathfrak{F}_S) = X_S$ a.e. on $(\Omega, \mathfrak{F}_S, P)$.
This completes the proof. ∎

If $X = \{X_n : n \in \mathbb{Z}_+\}$ is a nonpositive submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$, then $\mathrm{E}(0 \,|\, \mathfrak{F}_n) \geq X_n$ a.e. on $(\Omega, \mathfrak{F}_n, P)$ for $n \in \mathbb{Z}_+$. Thus, in defining $X_T$ for a stopping time $T$ on the filtered space, we can use $0$ as an integrable random variable $\xi$ on the probability space $(\Omega, \mathfrak{F}, P)$ satisfying the condition $\mathrm{E}(\xi \,|\, \mathfrak{F}_n) \geq X_n$ a.e. on $(\Omega, \mathfrak{F}_n, P)$ for $n \in \mathbb{Z}_+$. This is precisely what we do in the next lemma.
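The identity $\mathrm{E}(X_T) = \mathrm{E}(\xi)$ implicit in Theorem 8.14 can be checked by brute force on a finite probability space. The following sketch is not from the text: the horizon `N`, the choice $\xi = S_N$ (a sum of fair coin flips, so that $X_n = \mathrm{E}(\xi \,|\, \mathfrak{F}_n) = S_n$), and the stopping rule are all illustrative assumptions.

```python
from itertools import product

# For the Doob martingale X_n = E(xi | F_n) of Theorem 8.14, optional
# sampling gives E(X_T) = E(xi) for a stopping time T.  We check this
# exhaustively over all coin-flip paths (an illustrative example only).

N = 10  # horizon; xi depends on the first N fair +-1 coin flips

def xi(path):
    """xi = S_N, the terminal sum of the flips (so E(xi) = 0)."""
    return sum(path)

def X(n, path):
    """X_n = E(xi | first n flips) = partial sum; remaining flips have mean 0."""
    return sum(path[:n])

def T(path):
    """Stopping time: first n with partial sum +1, capped at N (so T <= N)."""
    s = 0
    for n, step in enumerate(path, start=1):
        s += step
        if s == 1:
            return n
    return N

paths = list(product([1, -1], repeat=N))  # each path has probability 2**-N
E_xi = sum(xi(p) for p in paths) / 2 ** N
E_XT = sum(X(T(p), p) for p in paths) / 2 ** N
print(E_xi, E_XT)  # both vanish
```

Because all $2^{10}$ paths are enumerated, both expectations are computed exactly rather than estimated by simulation.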
Lemma 8.15. Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a nonpositive submartingale and $S$ and $T$ be two stopping times on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$ such that $S \leq T$ on $\Omega$. Define $X_S$ and $X_T$ with $X_S(\omega) = 0$ for $\omega \in \{S = \infty\}$ and $X_T(\omega) = 0$ for $\omega \in \{T = \infty\}$. Then $X_S$ and $X_T$ are integrable and satisfy $\mathrm{E}(0 \,|\, \mathfrak{F}_T) \geq X_T$ a.e. on $(\Omega, \mathfrak{F}_T, P)$ and $\mathrm{E}(X_T \,|\, \mathfrak{F}_S) \geq X_S$ a.e. on $(\Omega, \mathfrak{F}_S, P)$.

Proof. Let $T_k = T \wedge k$ for $k \in \mathbb{Z}_+$. Then

$\lim_{k \to \infty} X_{T_k}(\omega) = \lim_{k \to \infty} X_{T \wedge k}(\omega) = X_T(\omega)$ for $\omega \in \{T < \infty\}$

and

$\limsup_{k \to \infty} X_{T_k}(\omega) = \limsup_{k \to \infty} X_{T \wedge k}(\omega) \leq 0 = X_T(\omega)$ for $\omega \in \{T = \infty\}$.

Therefore, by Fatou's Lemma for the limit superior of nonpositive functions, we have

$\mathrm{E}(X_0) \leq \limsup_{k \to \infty} \mathrm{E}(X_{T_k}) \leq \mathrm{E}(\limsup_{k \to \infty} X_{T_k}) \leq \mathrm{E}(X_T) \leq 0,$

where the first inequality holds since the stopped process $\{X_{T \wedge k} : k \in \mathbb{Z}_+\}$ is a submartingale. Thus $\mathrm{E}(X_T) \in \mathbb{R}$, that is, $X_T$ is integrable. Similarly $X_S$ is integrable. To show that $\mathrm{E}(X_T \,|\, \mathfrak{F}_S) \geq X_S$ a.e. on $(\Omega, \mathfrak{F}_S, P)$, we show that

(1) $\int_A X_T \, dP \geq \int_A X_S \, dP$ for $A \in \mathfrak{F}_S$.
Let $S_k = S \wedge k$ for $k \in \mathbb{Z}_+$. Then $S_k \leq T_k$ on $\Omega$, so that by Theorem 6.4 we have $\mathrm{E}(X_{T_k} \,|\, \mathfrak{F}_{S_k}) \geq X_{S_k}$ a.e. on $(\Omega, \mathfrak{F}_{S_k}, P)$, and then

(2) $\int_A X_{T_k} \, dP \geq \int_A X_{S_k} \, dP$ for $A \in \mathfrak{F}_{S_k}$.

Let $A \in \mathfrak{F}_S$. Then by Theorem 3.9 we have $A \cap \{S \leq k\} = A \cap \{S \leq S_k\} \in \mathfrak{F}_{S_k}$, so that by (2)

$\int_{A \cap \{S \leq k\}} X_{T_k} \, dP \geq \int_{A \cap \{S \leq k\}} X_{S_k} \, dP$ for $A \in \mathfrak{F}_S$.

Since $\{S \leq k\} \supset \{T \leq k\}$ and since $X_{T_k}$ is nonpositive, the integral on the left side is increased if the domain of integration $A \cap \{S \leq k\}$ is replaced by $A \cap \{T \leq k\}$. Thus, recalling that $X_{T_k} = X_T$ on $\{T \leq k\}$ and $X_{S_k} = X_S$ on $\{S \leq k\}$,

$\int_{A \cap \{T \leq k\}} X_T \, dP \geq \int_{A \cap \{S \leq k\}} X_S \, dP$ for $A \in \mathfrak{F}_S$.

Letting $k \to \infty$, we have by the Monotone Convergence Theorem

$\int_{A \cap \{T < \infty\}} X_T \, dP \geq \int_{A \cap \{S < \infty\}} X_S \, dP$ for $A \in \mathfrak{F}_S$.
Since $X_T(\omega) = 0$ for $\omega \in \{T = \infty\}$ and $X_S(\omega) = 0$ for $\omega \in \{S = \infty\}$, we have (1). ∎

Theorem 8.16. (Optional Sampling with Unbounded Stopping Times, Discrete Case) Let $X = \{X_n : n \in \mathbb{Z}_+\}$ be a submartingale, a martingale, or a supermartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$. Suppose there exists an extended real valued integrable random variable $\xi$ on $(\Omega, \mathfrak{F}, P)$ such that for every $n \in \mathbb{Z}_+$ we have

(1) $\mathrm{E}(\xi \,|\, \mathfrak{F}_n) \geq$, $=$, or $\leq X_n$ a.e. on $(\Omega, \mathfrak{F}_n, P)$,

according as $X$ is a submartingale, a martingale, or a supermartingale. Let $S$ and $T$ be two stopping times on the filtered space such that $S \leq T$ on $\Omega$. Let $Y_\infty$ be an arbitrary version of $\mathrm{E}(\xi \,|\, \mathfrak{F}_\infty)$ and define $X_S$ and $X_T$ with $X_S(\omega) = Y_\infty(\omega)$ for $\omega \in \{S = \infty\}$ and $X_T(\omega) = Y_\infty(\omega)$ for $\omega \in \{T = \infty\}$. Then $X_S$ and $X_T$ are integrable and satisfy

(2) $\mathrm{E}(\xi \,|\, \mathfrak{F}_T) \geq$, $=$, or $\leq X_T$ a.e. on $(\Omega, \mathfrak{F}_T, P)$

and

(3) $\mathrm{E}(X_T \,|\, \mathfrak{F}_S) \geq$, $=$, or $\leq X_S$ a.e. on $(\Omega, \mathfrak{F}_S, P)$,

according as $X$ is a submartingale, a martingale, or a supermartingale.

Proof. It suffices to prove the theorem for a submartingale. If we let $Y_n$ be an arbitrary real valued version of $\mathrm{E}(\xi \,|\, \mathfrak{F}_n)$ for $n \in \mathbb{Z}_+$, then by Remark 8.3, $Y = \{Y_n : n \in \mathbb{Z}_+\}$ is a uniformly integrable martingale with $\lim_{n \to \infty} Y_n = Y_\infty$ a.e. on $(\Omega, \mathfrak{F}_\infty, P)$ and $\lim_{n \to \infty} \|Y_n - Y_\infty\|_1 = 0$. If we define $Y_S$ and $Y_T$ with $Y_S(\omega) = Y_\infty(\omega)$ for $\omega \in \{S = \infty\}$ and $Y_T(\omega) = Y_\infty(\omega)$ for $\omega \in \{T = \infty\}$, then by Theorem 8.14 both $Y_S$ and $Y_T$ are integrable and satisfy

(4) $\mathrm{E}(\xi \,|\, \mathfrak{F}_T) = Y_T$ a.e. on $(\Omega, \mathfrak{F}_T, P)$

and

(5) $\mathrm{E}(Y_T \,|\, \mathfrak{F}_S) = Y_S$ a.e. on $(\Omega, \mathfrak{F}_S, P)$.

Let us define a stochastic process $Z$ by setting $Z = X - Y$. Then since $X$ is a submartingale and $Y$ is a martingale, $Z$ is a submartingale. Also $Z$ is nonpositive, since by (1) we have $Z_n = X_n - Y_n = X_n - \mathrm{E}(\xi \,|\, \mathfrak{F}_n) \leq 0$ a.e. on $(\Omega, \mathfrak{F}_n, P)$.
If we define $Z_S$ and $Z_T$ with $Z_S(\omega) = 0$ for $\omega \in \{S = \infty\}$ and $Z_T(\omega) = 0$ for $\omega \in \{T = \infty\}$, then by Lemma 8.15 the two random variables $Z_S$ and $Z_T$ are integrable and

(6) $\mathrm{E}(0 \,|\, \mathfrak{F}_T) \geq Z_T$ a.e. on $(\Omega, \mathfrak{F}_T, P)$

and

(7) $\mathrm{E}(Z_T \,|\, \mathfrak{F}_S) \geq Z_S$ a.e. on $(\Omega, \mathfrak{F}_S, P)$.

Then $X_T = (Y + Z)_T = Y_T + Z_T$ is integrable, and similarly for $X_S$. Adding (4) and (6) side by side we have (2), and adding (5) and (7) side by side we have (3) for the submartingale $X$. ∎

We observe that Theorem 8.14 can be extended to a right-continuous martingale $X = \{X_t : t \in \mathbb{R}_+\}$ generated by an integrable random variable $\xi$ on a right-continuous filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ by approximating a stopping time $T$ with a sequence of discrete valued stopping times $\{T_n : n \in \mathbb{N}\}$ according to Theorem 3.20 and, in passing to the limit, using the uniform integrability of the resulting martingale with reversed time $\{X_{T_n} : n \in \mathbb{N}\}$ according to Theorem 8.8. The proof parallels that of Theorem 8.10 and the argument need not be repeated here. Let us state the result for later reference.

Theorem 8.17. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a right-continuous uniformly integrable martingale on a right-continuous filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, where $X_t$ is a version of $\mathrm{E}(\xi \,|\, \mathfrak{F}_t)$ for $t \in \mathbb{R}_+$ for an extended real valued integrable random variable $\xi$ on $(\Omega, \mathfrak{F}, P)$. Let $S$ and $T$ be two stopping times on the filtered space such that $S \leq T$ on $\Omega$. With an arbitrary real valued version $Y_\infty$ of $\mathrm{E}(\xi \,|\, \mathfrak{F}_\infty)$, define $X_S$ and $X_T$ with $X_S(\omega) = Y_\infty(\omega)$ for $\omega \in \{S = \infty\}$ and $X_T(\omega) = Y_\infty(\omega)$ for $\omega \in \{T = \infty\}$. Then $\mathrm{E}(\xi \,|\, \mathfrak{F}_T) = X_T$ a.e. on $(\Omega, \mathfrak{F}_T, P)$ and thus $\mathrm{E}(X_T \,|\, \mathfrak{F}_S) = X_S$ a.e. on $(\Omega, \mathfrak{F}_S, P)$.

Similarly, Theorem 8.16 can be extended to a right-continuous submartingale or supermartingale on a right-continuous filtered space by using Theorem 3.20 and Theorem 8.8 exactly as in the proof of Theorem 8.10. Thus we have the following theorem.

Theorem 8.18. (Optional Sampling with Unbounded Stopping Times, Continuous Case) Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a right-continuous submartingale, martingale, or supermartingale on a right-continuous filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Suppose there exists an extended
real valued integrable random variable $\xi$ on $(\Omega, \mathfrak{F}, P)$ such that for every $t \in \mathbb{R}_+$

(1) $\mathrm{E}(\xi \,|\, \mathfrak{F}_t) \geq$, $=$, or $\leq X_t$ a.e. on $(\Omega, \mathfrak{F}_t, P)$,

according as $X$ is a submartingale, martingale, or supermartingale. Let $S$ and $T$ be two stopping times on the filtered space such that $S \leq T$ on $\Omega$. Let $Y_\infty$ be an arbitrary real valued version of $\mathrm{E}(\xi \,|\, \mathfrak{F}_\infty)$ and define $X_S$ and $X_T$ with $X_S(\omega) = Y_\infty(\omega)$ for $\omega \in \{S = \infty\}$ and $X_T(\omega) = Y_\infty(\omega)$ for $\omega \in \{T = \infty\}$. Then $X_S$ and $X_T$ are integrable and satisfy

(2) $\mathrm{E}(\xi \,|\, \mathfrak{F}_T) \geq$, $=$, or $\leq X_T$ a.e. on $(\Omega, \mathfrak{F}_T, P)$

and

(3) $\mathrm{E}(X_T \,|\, \mathfrak{F}_S) \geq$, $=$, or $\leq X_S$ a.e. on $(\Omega, \mathfrak{F}_S, P)$,

according as $X$ is a submartingale, martingale, or supermartingale.

As a restatement of Theorem 8.18 we have:

Theorem 8.19. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a right-continuous submartingale, martingale, or supermartingale on a right-continuous filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ satisfying condition (1) in Theorem 8.18, and let $Y_\infty$ be a real valued version of $\mathrm{E}(\xi \,|\, \mathfrak{F}_\infty)$. Let $\{S_t : t \in \mathbb{R}_+\}$ be an increasing system of stopping times on the filtered space. Let $\mathfrak{G}_t = \mathfrak{F}_{S_t}$ and $Y_t = X_{S_t}$ for $t \in \mathbb{R}_+$, where $X_{S_t}(\omega) = Y_\infty(\omega)$ for $\omega \in \{S_t = \infty\}$. Then $Y = \{Y_t : t \in \mathbb{R}_+\}$ is a submartingale, martingale, or supermartingale on the filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{G}_t\}, P)$ according as $X$ is a submartingale, martingale, or supermartingale.
[IV] Uniform Integrability of Random Variables at Stopping Times

If a submartingale $X = \{X_t : t \in \mathbb{T}\}$ on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ is uniformly integrable and $\mathbf{S}$ is the collection of all finite stopping times on the filtered space, is the system of random variables $\{X_T : T \in \mathbf{S}\}$ also uniformly integrable? We shall show that this is the case when $\mathbb{T} = \mathbb{Z}_+$.

Definition 8.20. Let $\mathbf{S}$ be the collection of all finite stopping times on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t : t \in \mathbb{T}\}, P)$. For every $a \in \mathbb{T}$ let $\mathbf{S}_a$ be the subcollection of $\mathbf{S}$ consisting of those stopping times which are bounded by $a$. A submartingale $X = \{X_t : t \in \mathbb{T}\}$ on the filtered space is said to belong to the class (D) if $\{X_T : T \in \mathbf{S}\}$ is uniformly integrable.
$X$ is said to belong to the class (DL) if $\{X_T : T \in \mathbf{S}_a\}$ is uniformly integrable for every $a \in \mathbb{T}$.

Theorem 8.21. A submartingale $X = \{X_n : n \in \mathbb{Z}_+\}$ on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_n\}, P)$ belongs to the class (D) if and only if it is uniformly integrable.

Proof. Let $\mathbf{S}$ be the collection of all finite stopping times on the filtered space. Since every deterministic time is a member of $\mathbf{S}$, if $X$ belongs to the class (D) then $X$ is uniformly integrable.

Conversely, suppose $X$ is uniformly integrable. Then by Theorem 8.1 there exists an extended real valued $\mathfrak{F}_\infty$-measurable integrable random variable $X_\infty$ on $(\Omega, \mathfrak{F}, P)$ such that $\lim_{n \to \infty} X_n = X_\infty$ a.e. on $(\Omega, \mathfrak{F}_\infty, P)$, $\lim_{n \to \infty} \|X_n - X_\infty\|_1 = 0$, and $\mathrm{E}(X_\infty \,|\, \mathfrak{F}_n) \geq X_n$ a.e. on $(\Omega, \mathfrak{F}_n, P)$ for $n \in \mathbb{Z}_+$.

If we let $Y_n$ be an arbitrary real valued version of $\mathrm{E}(X_\infty \,|\, \mathfrak{F}_n)$ for $n \in \mathbb{Z}_+$, then by Remark 8.3, $Y = \{Y_n : n \in \mathbb{Z}_+\}$ is a uniformly integrable martingale with $\lim_{n \to \infty} Y_n = X_\infty$ a.e. on $(\Omega, \mathfrak{F}_\infty, P)$ and $\lim_{n \to \infty} \|Y_n - X_\infty\|_1 = 0$. If we define a process $Z = \{Z_n : n \in \mathbb{Z}_+\}$ by letting $Z = X - Y$, then since $X$ is a submartingale and $Y$ is a martingale, $Z$ is a submartingale. Also $Z$ is nonpositive, since $Z_n = X_n - Y_n = X_n - \mathrm{E}(X_\infty \,|\, \mathfrak{F}_n) \leq 0$ a.e. on $(\Omega, \mathfrak{F}_n, P)$ for $n \in \mathbb{Z}_+$. Note also that since $\lim_{n \to \infty} X_n = X_\infty = \lim_{n \to \infty} Y_n$ a.e. on $(\Omega, \mathfrak{F}_\infty, P)$, we have $\lim_{n \to \infty} Z_n = 0$ a.e. on $(\Omega, \mathfrak{F}_\infty, P)$.

From $X = Y + Z$ we have $X_T = Y_T + Z_T$ for $T \in \mathbf{S}$. Thus, to show the uniform integrability of $\{X_T : T \in \mathbf{S}\}$ we show that of $\{Y_T : T \in \mathbf{S}\}$ and $\{Z_T : T \in \mathbf{S}\}$. By (1) of Theorem 8.14 we have $\mathrm{E}(X_\infty \,|\, \mathfrak{F}_T) = Y_T$ a.e. on $(\Omega, \mathfrak{F}_T, P)$ for every $T \in \mathbf{S}$. Thus by Theorem 4.24, $\{Y_T : T \in \mathbf{S}\}$ is uniformly integrable.

To prove the uniform integrability of $\{Z_T : T \in \mathbf{S}\}$, let us note that since $\lim_{n \to \infty} Z_n = 0$ a.e. on $(\Omega, \mathfrak{F}_\infty, P)$ and since $Z = X - Y$ is uniformly integrable, we have $\lim_{n \to \infty} \|Z_n\|_1 = 0$ by Theorem 4.16. Therefore for every $\varepsilon > 0$ there exists $k \in \mathbb{Z}_+$ such that $\mathrm{E}(|Z_k|) < \varepsilon$. For any $T \in \mathbf{S}$ and $\lambda > 0$ we have

(1) $\int_{\{|Z_T| > \lambda\}} |Z_T| \, dP = \sum_{j=0}^{k} \int_{\{|Z_T| > \lambda\} \cap \{T = j\}} |Z_j| \, dP + \int_{\{|Z_T| > \lambda\} \cap \{T > k\}} |Z_T| \, dP \leq \sum_{j=0}^{k} \int_{\{|Z_j| > \lambda\}} |Z_j| \, dP + \int_{\{T > k\}} |Z_T| \, dP.$
For the finitely many integrable random variables $Z_0, \ldots, Z_k$ we have

(2) $\lim_{\lambda \to \infty} \sum_{j=0}^{k} \int_{\{|Z_j| > \lambda\}} |Z_j| \, dP = 0.$

For the two stopping times $k$ and $T \vee k$ satisfying $k \leq T \vee k$ on $\Omega$, we have $\mathrm{E}(Z_{T \vee k} \,|\, \mathfrak{F}_k) \geq Z_k$ a.e. on $(\Omega, \mathfrak{F}_k, P)$ by Theorem 8.16. Since $\{T > k\} = \{T \leq k\}^c \in \mathfrak{F}_k$, we have

$\int_{\{T > k\}} Z_T \, dP = \int_{\{T > k\}} Z_{T \vee k} \, dP \geq \int_{\{T > k\}} Z_k \, dP.$

Since $Z$ is nonpositive, $Z_T$ is nonpositive, and thus by the fact that $\mathrm{E}(|Z_k|) < \varepsilon$ we have

(3) $\int_{\{T > k\}} |Z_T| \, dP \leq \int_{\{T > k\}} |Z_k| \, dP < \varepsilon.$

By (1), (2) and (3) we have

$\limsup_{\lambda \to \infty} \sup_{T \in \mathbf{S}} \int_{\{|Z_T| > \lambda\}} |Z_T| \, dP \leq \varepsilon.$

From the arbitrariness of $\varepsilon$ the limit superior above is equal to $0$. Therefore

$\lim_{\lambda \to \infty} \sup_{T \in \mathbf{S}} \int_{\{|Z_T| > \lambda\}} |Z_T| \, dP = 0,$

proving the uniform integrability of $\{Z_T : T \in \mathbf{S}\}$. ∎

For $\mathbb{T} = \mathbb{R}_+$, a uniformly integrable right-continuous submartingale $X$ may not belong to the class (D). For a counterexample see [16] Johnson and Helms. If however $X$ is a uniformly integrable right-continuous martingale, then it is in the class (D), as we show next.

Theorem 8.22. On a right-continuous filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t : t \in \mathbb{R}_+\}, P)$:
1) every right-continuous martingale belongs to the class (DL);
2) every right-continuous nonnegative submartingale belongs to the class (DL);
3) every uniformly integrable right-continuous martingale belongs to the class (D).

Proof. 1) Let $X$ be a right-continuous martingale. Then for every $a \in \mathbb{R}_+$ we have by Theorem 8.10, $\mathrm{E}(X_a \,|\, \mathfrak{F}_T) = X_T$ a.e. on $(\Omega, \mathfrak{F}_T, P)$ for $T \in \mathbf{S}_a$. Thus by Theorem 4.24, $\{X_T : T \in \mathbf{S}_a\}$ is uniformly integrable.
2) Let $X$ be a nonnegative right-continuous submartingale and let $a \in \mathbb{R}_+$. For every $T \in \mathbf{S}_a$ we have $\mathrm{E}(X_a \,|\, \mathfrak{F}_T) \geq X_T$ a.e. on $(\Omega, \mathfrak{F}_T, P)$ by Theorem 8.10. For every $\lambda > 0$ we have $\{X_T > \lambda\} \in \mathfrak{F}_T$, so that

(1) $\int_{\{X_T > \lambda\}} X_a \, dP \geq \int_{\{X_T > \lambda\}} X_T \, dP.$

Also $\lambda P\{X_T > \lambda\} \leq \mathrm{E}(X_T) \leq \mathrm{E}(X_a)$, so that

(2) $\lim_{\lambda \to \infty} P\{X_T > \lambda\} = 0$ uniformly in $T \in \mathbf{S}_a$.

By (1) and (2),

$\lim_{\lambda \to \infty} \sup_{T \in \mathbf{S}_a} \int_{\{X_T > \lambda\}} X_T \, dP \leq \lim_{\lambda \to \infty} \sup_{T \in \mathbf{S}_a} \int_{\{X_T > \lambda\}} X_a \, dP = 0.$

Therefore $\{X_T : T \in \mathbf{S}_a\}$ is uniformly integrable and $X$ belongs to the class (DL).

3) If $X$ is a uniformly integrable right-continuous martingale, then by Theorem 8.1 there exists an extended real valued integrable random variable $\xi$ such that $\mathrm{E}(\xi \,|\, \mathfrak{F}_t) = X_t$ a.e. on $(\Omega, \mathfrak{F}_t, P)$ for every $t \in \mathbb{R}_+$. Thus according to Theorem 8.17 we have $\mathrm{E}(\xi \,|\, \mathfrak{F}_T) = X_T$ a.e. on $(\Omega, \mathfrak{F}_T, P)$ for every $T \in \mathbf{S}$. Therefore by Theorem 4.24, $\{X_T : T \in \mathbf{S}\}$ is uniformly integrable, that is, $X$ belongs to the class (D). ∎

Remark 8.23. If $X = \{X_t : t \in \mathbb{R}_+\}$ is a right-continuous submartingale on a right-continuous filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and if $X$ is in the class (D), then not only is $\{X_T : T \in \mathbf{S}\}$ uniformly integrable, but so is $\{X_T : T \in \mathbf{S}_\infty\}$, where $\mathbf{S}_\infty$ is the collection of all stopping times, finite or not, on the filtered space and $X_T$ is defined with $X_\infty = \lim_{t \to \infty} X_t$.
Proof. The uniform integrability of $\{X_T : T \in \mathbf{S}\}$ implies the uniform integrability of its closure $\overline{\{X_T : T \in \mathbf{S}\}}$ in $L_1(\Omega, \mathfrak{F}_\infty, P)$ according to Theorem 4.19. Thus, to show the uniform integrability of $\{X_T : T \in \mathbf{S}_\infty\}$, we show that it is contained in $\overline{\{X_T : T \in \mathbf{S}\}}$. To show this we note that for any $T \in \mathbf{S}_\infty$, $\{T \wedge n : n \in \mathbb{N}\}$ is a sequence in $\mathbf{S}$, and show that the sequence $\{X_{T \wedge n} : n \in \mathbb{N}\}$ in $\{X_T : T \in \mathbf{S}\}$ satisfies $\lim_{n \to \infty} \|X_{T \wedge n} - X_T\|_1 = 0$.

Now the uniform integrability of $\{X_T : T \in \mathbf{S}\}$ implies that of $\{X_t : t \in \mathbb{R}_+\}$, since deterministic times are particular cases of finite stopping times. The uniform integrability of $X$ then implies, according to Theorem 8.1, that $X_\infty = \lim_{t \to \infty} X_t$ exists a.e. on $(\Omega, \mathfrak{F}_\infty, P)$ and is an integrable random variable satisfying $\lim_{t \to \infty} \|X_t - X_\infty\|_1 = 0$. Then for any $T \in \mathbf{S}_\infty$, finite or not, $X_T$ is defined with $X_\infty$, and $\lim_{n \to \infty} X_{T \wedge n} = X_T$ a.e. on $(\Omega, \mathfrak{F}_\infty, P)$. Now since $T \wedge n \in \mathbf{S}$ for $n \in \mathbb{N}$, $\{X_{T \wedge n} : n \in \mathbb{N}\}$ is uniformly integrable by our assumption that $X$ is in the class (D). Then $\lim_{n \to \infty} \|X_{T \wedge n} - X_T\|_1 = 0$ by Theorem 4.16. ∎
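The key quantity in the proof of Remark 8.23, $\|X_{T \wedge n} - X_T\|_1$, can be computed exactly for a simple example. The sketch below is not from the text: the stopped symmetric walk, the barrier $a$, and the gambler's-ruin consequence $\mathrm{E}|x - S_T| = (a^2 - x^2)/a$ for the walk started at an interior point $x$ are the assumptions of this particular illustration.

```python
from fractions import Fraction

# X_n = S_{T ^ n} for a symmetric +-1 walk stopped at T = first hit of
# {-a, +a}.  Given T > n and S_n = x, the walk restarts at x, and the
# gambler's-ruin probabilities give E|x - S_T| = (a*a - x*x)/a.  Hence
# ||X_n - X_T||_1 can be computed from the exact law of the unstopped part.

a = 5
half = Fraction(1, 2)

p = {0: Fraction(1)}   # p[x] = P(T > n, S_n = x), interior states only
l1 = []                # l1[n-1] = ||X_n - X_T||_1, computed exactly
for n in range(200):
    new = {}
    for x, q in p.items():
        for y in (x - 1, x + 1):
            if abs(y) < a:                  # mass reaching +-a is absorbed
                new[y] = new.get(y, Fraction(0)) + q * half
    p = new
    dist = sum(q * Fraction(a * a - x * x, a) for x, q in p.items())
    l1.append(dist)

assert all(l1[i + 1] <= l1[i] for i in range(len(l1) - 1))  # decreasing
assert float(l1[-1]) < 1e-2                                 # tends to 0
print(float(l1[0]), float(l1[-1]))
```

The monotone decrease to $0$ is exactly the $L_1$ convergence $X_{T \wedge n} \to X_T$ invoked in the proof above.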
§9 Regularity of Sample Functions of Submartingales

[I] Sample Functions of Right-Continuous Submartingales

The Maximal and Minimal Inequalities and the Upcrossing Inequality for submartingales imply certain regularity properties for their sample functions. We show in Theorem 9.2 that if $\{X_t : t \in \mathbb{R}_+\}$ is a right-continuous submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, then almost every sample function is bounded on every finite interval in $\mathbb{R}_+$, has a finite left limit everywhere on $\mathbb{R}_+$, and has at most countably many points of discontinuity.

Proposition 9.1. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and let $\mathbb{Q}_+$ be the collection of all nonnegative rational numbers.
1) There exists a null set $\Lambda_\infty$ in $(\Omega, \mathfrak{F}_\infty, P)$ such that if $\omega \in \Lambda_\infty^c$ then $X(\cdot, \omega)$ is a bounded function on $[0, \beta) \cap \mathbb{Q}_+$ for every $\beta \in \mathbb{R}_+$.
2) There exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}_\infty, P)$, $\Lambda \supset \Lambda_\infty$, such that if $\omega \in \Lambda^c$ then $\lim_{s \downarrow t,\, s \in \mathbb{Q}_+} X_s(\omega)$ and $\lim_{s \uparrow t,\, s \in \mathbb{Q}_+} X_s(\omega)$ exist in $\mathbb{R}$ for every $t \in \mathbb{R}_+$.
Proof. 1) For every $n \in \mathbb{N}$, let $Q_n = [0, n) \cap \mathbb{Q}_+$. Then for every $n \in \mathbb{N}$ and $\lambda > 0$ we have by Theorem 6.14

$\lambda P\{\sup_{t \in Q_n} X_t > \lambda\} \leq \mathrm{E}(|X_n|)$

and

$\lambda P\{\inf_{t \in Q_n} X_t < -\lambda\} \leq \mathrm{E}(|X_n|) + \mathrm{E}(|X_0|).$

Since

$\{\sup_{t \in Q_n} |X_t| > \lambda\} = \{\sup_{t \in Q_n} X_t > \lambda\} \cup \{\inf_{t \in Q_n} X_t < -\lambda\},$

we have

$P\{\sup_{t \in Q_n} |X_t| > \lambda\} \leq \frac{1}{\lambda}\{\mathrm{E}(|X_0|) + 2\,\mathrm{E}(|X_n|)\}$

and therefore

(1) $\lim_{\lambda \to \infty} P\{\sup_{t \in Q_n} |X_t| > \lambda\} = 0.$

For every $n \in \mathbb{N}$, let

$\Lambda_n = \{\omega \in \Omega : X(\cdot, \omega) \text{ not bounded on } Q_n\} = \bigcap_{k \in \mathbb{N}} \{\sup_{t \in Q_n} |X_t| > k\} \in \mathfrak{F}_\infty.$
Then

$P(\Lambda_n) \leq P\{\sup_{t \in Q_n} |X_t| > k\}$ for every $k \in \mathbb{N}$,

and thus by (1) we have $P(\Lambda_n) = 0$. Let $\Lambda_\infty = \bigcup_{n \in \mathbb{N}} \Lambda_n$, a null set in $(\Omega, \mathfrak{F}_\infty, P)$. If $\omega \in \Lambda_\infty^c$, then $\omega \in \Lambda_n^c$ for every $n \in \mathbb{N}$, so that $X(\cdot, \omega)$ is bounded on $Q_n$ for every $n \in \mathbb{N}$. Then $X(\cdot, \omega)$ is bounded on $[0, \beta) \cap \mathbb{Q}_+$ for every $\beta \in \mathbb{R}_+$.

2) Let $\mathbb{Q}$ be the collection of all rational numbers. For $n \in \mathbb{N}$ and $a, b \in \mathbb{Q}$ such that $a < b$, let

$\Lambda_{n,a,b} = \{\omega \in \Omega : (U^{[a,b]}_{Q_n} X)(\omega) = \infty\} \in \mathfrak{F}_\infty,$

where $U^{[a,b]}_{Q_n} X$ denotes the number of upcrossings of $[a, b]$ by $X$ on $Q_n$. According to (3) of Theorem 6.23 we have

$\mathrm{E}(U^{[a,b]}_{Q_n} X) \leq \frac{1}{b-a}\, \mathrm{E}[(X_n - a)^+ - (X_0 - a)^+] < \infty,$

so that $U^{[a,b]}_{Q_n} X < \infty$ a.e. on $(\Omega, \mathfrak{F}_\infty, P)$ and thus $P(\Lambda_{n,a,b}) = 0$. Let

$\Lambda^* = \bigcup_{n \in \mathbb{N}} \bigcup_{a, b \in \mathbb{Q},\, a < b} \Lambda_{n,a,b}.$

As a countable union of null sets, $\Lambda^*$ is a null set in $(\Omega, \mathfrak{F}_\infty, P)$. Let us show that if $\omega \in (\Lambda^*)^c$ then $\lim_{s \uparrow t,\, s \in \mathbb{Q}_+} X_s(\omega)$ exists in $\overline{\mathbb{R}}$ for all $t \in \mathbb{R}_+$. Assume the contrary. Then there exists some $t \in \mathbb{R}_+$ such that $\lim_{s \uparrow t,\, s \in \mathbb{Q}_+} X_s(\omega)$ does not exist in $\overline{\mathbb{R}}$. Let $n \in \mathbb{N}$ be such that $t < n$. Then $\lim_{s \uparrow t,\, s \in Q_n} X_s(\omega)$ does not exist in $\overline{\mathbb{R}}$, that is, we have

$\liminf_{s \uparrow t,\, s \in Q_n} X_s(\omega) < \limsup_{s \uparrow t,\, s \in Q_n} X_s(\omega),$

and therefore there exist $a, b \in \mathbb{Q}$, $a < b$, such that $\liminf_{s \uparrow t,\, s \in Q_n} X_s(\omega) < a < b < \limsup_{s \uparrow t,\, s \in Q_n} X_s(\omega)$. Then $X(\cdot, \omega)$ crosses $[a, b]$ upward more than $k$ times on $Q_n$ for every $k \in \mathbb{N}$, so that $(U^{[a,b]}_{Q_n} X)(\omega) = \infty$. Then $\omega \in \Lambda_{n,a,b} \subset \Lambda^*$, contradicting the assumption that $\omega \in (\Lambda^*)^c$. This shows that if $\omega \in (\Lambda^*)^c$ then $\lim_{s \uparrow t,\, s \in \mathbb{Q}_+} X_s(\omega)$ exists in $\overline{\mathbb{R}}$ for every $t \in \mathbb{R}_+$.

Consider the null set $\Lambda = \Lambda^* \cup \Lambda_\infty$ in $(\Omega, \mathfrak{F}_\infty, P)$. For $\omega \in \Lambda^c$ we have $\omega \in \Lambda_\infty^c$, so that $X(\cdot, \omega)$ is bounded on $[0, \beta) \cap \mathbb{Q}_+$ for every $\beta \in \mathbb{R}_+$, and this
§9. REGULARITY implies that then
OF SAMPLE FUNCTIONS OF SUBMARTINGALES
147
lim X,(u) exists in R for every t £ R+. We show similarly that if w 6 Ac
lim X,(u>) exists in R for every i £ R+. ■ sl',s€Q+
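Part 1) of the proof above rests on the maximal inequality of Theorem 6.14. As an illustration not taken from the text, the discrete-time analogue $\lambda P\{\max_{0 \leq k \leq n} X_k \geq \lambda\} \leq \mathrm{E}(|X_n|)$ can be verified exhaustively for the nonnegative submartingale $X_k = |S_k|$ built from a symmetric $\pm 1$ walk; the walk, the horizon, and the range of $\lambda$ are illustrative choices.

```python
from fractions import Fraction
from itertools import product

# Exhaustive check of lam * P( max_k |S_k| >= lam ) <= E|S_n| for the
# nonnegative submartingale |S_k|, S_k a symmetric +-1 random walk.

n = 8
ok = True
for lam in range(1, n + 1):
    prob = Fraction(0)      # P( max_{0<=k<=n} |S_k| >= lam )
    e_abs = Fraction(0)     # E |S_n|
    for path in product([1, -1], repeat=n):
        s, m = 0, 0
        for step in path:
            s += step
            m = max(m, abs(s))
        w = Fraction(1, 2 ** n)
        if m >= lam:
            prob += w
        e_abs += w * abs(s)
    ok = ok and (lam * prob <= e_abs)

print(ok)
```

All $2^8$ paths and every integer threshold are checked with exact rational arithmetic, so the inequality is confirmed rather than merely sampled.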
Theorem 9.2. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a right-continuous submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Then there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}_\infty, P)$ such that for every $\omega \in \Lambda^c$ the sample function $X(\cdot, \omega)$ is bounded on every finite interval in $\mathbb{R}_+$, has a finite left limit everywhere on $\mathbb{R}_+$, and has at most countably many points of discontinuity.

Proof. Let $\Lambda$ be the null set in $(\Omega, \mathfrak{F}_\infty, P)$ in Proposition 9.1 and let $\omega \in \Lambda^c$.

1) To show that $X(\cdot, \omega)$ is bounded on an arbitrary finite interval, let $\beta \in \mathbb{R}_+$. By Proposition 9.1, $X(\cdot, \omega)$ is bounded on $[0, \beta) \cap \mathbb{Q}_+$, so that there exists $K > 0$ such that $|X(r, \omega)| \leq K$ for $r \in [0, \beta) \cap \mathbb{Q}_+$. Let $t \in [0, \beta)$. Then there exists a sequence $\{r_n : n \in \mathbb{N}\}$ in $[0, \beta) \cap \mathbb{Q}_+$ such that $r_n \downarrow t$. By the right-continuity of $X(\cdot, \omega)$ we have $X(t, \omega) = \lim_{n \to \infty} X(r_n, \omega)$, and thus $|X(t, \omega)| \leq K$, that is, $X(\cdot, \omega)$ is bounded on $[0, \beta)$.

2) To show that $\lim_{t \uparrow t_0} X(t, \omega)$ exists in $\mathbb{R}$ for every $t_0 \in (0, \infty)$, assume the contrary, that is, $\lim_{t \uparrow t_0} X(t, \omega)$ does not exist in $\mathbb{R}$ for some $t_0 \in (0, \infty)$. Then, since $X(\cdot, \omega)$ is bounded on every finite interval, $\lim_{t \uparrow t_0} X(t, \omega)$ cannot exist in $\overline{\mathbb{R}}$ either. Thus there exist $a, b \in \mathbb{R}$, $a < b$, such that

$\liminf_{t \uparrow t_0} X(t, \omega) < a < b < \limsup_{t \uparrow t_0} X(t, \omega).$

Then we can select a strictly increasing sequence $\{t_n : n \in \mathbb{N}\}$ such that $t_n \uparrow t_0$ and $X(t_n, \omega) < a$ for odd $n$ and $X(t_n, \omega) > b$ for even $n$. By the right-continuity of $X(\cdot, \omega)$ there exists a rational number $s_n \in (t_n, t_{n+1})$ such that $X(s_n, \omega) < a$ for odd $n$ and $X(s_n, \omega) > b$ for even $n$. Then $\{s_n : n \in \mathbb{N}\}$ is a strictly increasing sequence of rational numbers such that $s_n \uparrow t_0$ and $\lim_{n \to \infty} X(s_n, \omega)$ does not exist. But according to Proposition 9.1, $\lim_{s \uparrow t_0,\, s \in \mathbb{Q}_+} X_s(\omega)$ exists in $\mathbb{R}$. This is a contradiction. Thus $\lim_{t \uparrow t_0} X(t, \omega)$ exists in $\mathbb{R}$ for every $t_0 \in (0, \infty)$.

3) The fact that $X(\cdot, \omega)$ is real valued, right-continuous, and has a finite left limit everywhere on $\mathbb{R}_+$ implies that $X(\cdot, \omega)$ has at most countably many points of discontinuity. This implication, which is unrelated to the submartingale property, is proved in Proposition 9.3 below. ∎

According to Theorem 9.2, almost every sample function of a right-continuous submartingale is bounded on every finite interval. An arbitrary real valued right-continuous function does not have this property. A real valued continuous function $f$ on $\mathbb{R}$ is bounded
on every finite interval of $\mathbb{R}$, but if $f$ is only right-continuous, then $f$ may not be bounded on every finite interval. To construct such a function, let $t_0 = 0$ and $t_k = \sum_{j=1}^{k} 2^{-j}$ for $k \in \mathbb{N}$. Decompose $[0, 1)$ into the subintervals $I_k = [t_{k-1}, t_k)$ for $k \in \mathbb{N}$. Define $f$ on $[0, 1)$ by setting $f(x) = k$ for $x \in I_k$, $k \in \mathbb{N}$. Thus defined, $f$ is right-continuous but unbounded on $[0, 1)$. Extend the definition of $f$ periodically with period $1$ to the entire $\mathbb{R}$. Then $f$ is right-continuous on $\mathbb{R}$ but unbounded on any finite interval containing an integer in its interior.

Proposition 9.3. Let $f$ be a real valued function on $\mathbb{R}$ such that $f(t-) = \lim_{s \uparrow t} f(s)$ and $f(t+) = \lim_{s \downarrow t} f(s)$ exist in $\mathbb{R}$ for every $t \in \mathbb{R}$. Let

$E = \{t \in \mathbb{R} : f(t-) \neq f(t+)\},$

and for every $k \in \mathbb{N}$ let

$E_k = \{t \in \mathbb{R} : |f(t-) - f(t+)| > \tfrac{1}{k}\}.$

Then $E$ is a countable set, and for every finite interval $[a, b]$ in $\mathbb{R}$, $E_k \cap [a, b]$ is a finite set. In particular, if $f$ is a real valued function which is right-continuous and has a finite left limit everywhere on $\mathbb{R}$, then $f$ has at most countably many points of discontinuity.

Proof. Note that $E = \bigcup_{k \in \mathbb{N}} E_k$. Suppose $E$ is an uncountable set. Then there exists $k_0 \in \mathbb{N}$ such that $E_{k_0}$ is an uncountable set. Let $E_{k_0, m} = E_{k_0} \cap [m, m+1]$ for $m \in \mathbb{Z}$. Then there exists $m_0 \in \mathbb{Z}$ such that $E_{k_0, m_0}$ is an uncountable set. By partitioning $[m_0, m_0 + 1]$ into two closed intervals of equal length (with one end-point in common) and repeating the process on the two resulting closed intervals indefinitely, we obtain a decreasing sequence of closed intervals $\{I_n : n \in \mathbb{Z}_+\}$ such that $I_n$ has length $\ell(I_n) = 2^{-n}$ and $E_{k_0} \cap I_n$ is an uncountable set for every $n \in \mathbb{Z}_+$. By the Nested Interval Theorem there exists $t^* \in \mathbb{R}$ such that $\bigcap_{n \in \mathbb{Z}_+} I_n = \{t^*\}$. Let $\varepsilon \in (0, \frac{1}{k_0})$. Since $f(t^*-)$ and $f(t^*+)$ exist in $\mathbb{R}$, there exists $\delta > 0$ such that $|f(t') - f(t'')| < \varepsilon$ for $t', t'' \in (t^* - \delta, t^*)$ and $|f(t') - f(t'')| < \varepsilon$ for $t', t'' \in (t^*, t^* + \delta)$. Since $t^* \in I_n$ for every $n \in \mathbb{Z}_+$ and since $\lim_{n \to \infty} \ell(I_n) = 0$, there exists $n_0 \in \mathbb{Z}_+$ such that
$I_{n_0} \subset (t^* - \delta, t^* + \delta)$. Then since $E_{k_0} \cap I_{n_0}$ is an uncountable set, so is $E_{k_0} \cap (t^* - \delta, t^* + \delta)$, and thus at least one of the two sets $E_{k_0} \cap (t^* - \delta, t^*)$ and $E_{k_0} \cap (t^*, t^* + \delta)$ is not empty. Suppose for instance $E_{k_0} \cap (t^* - \delta, t^*) \neq \emptyset$, and let $t_0 \in E_{k_0} \cap (t^* - \delta, t^*)$. Then $|f(t_0-) - f(t_0+)| > \frac{1}{k_0}$. On the other hand, both one-sided limits $f(t_0-)$ and $f(t_0+)$ are limits of values of $f$ at points of $(t^* - \delta, t^*)$, so that $|f(t_0-) - f(t_0+)| \leq \varepsilon < \frac{1}{k_0}$, a contradiction. Therefore $E$ is a countable set.

Next, let $[a, b]$ be a finite interval in $\mathbb{R}$ and $k \in \mathbb{N}$, and suppose $E_k \cap [a, b]$ is an infinite set. Then there exists a sequence $\{t_n : n \in \mathbb{N}\}$ of distinct points in $E_k \cap [a, b]$ which converges to some $t_0 \in [a, b]$. Now we have either a subsequence $\{t_{n_\ell} : \ell \in \mathbb{N}\}$ such that $t_{n_\ell} \uparrow t_0$ or a subsequence $\{t_{n_m} : m \in \mathbb{N}\}$ such that $t_{n_m} \downarrow t_0$. Consider the latter case for instance. Since $t_{n_m} \in E_k$ and $t_{n_m} > t_0$ (a consequence of the distinctness of the $t_{n_m}$ and the fact that $t_{n_m} \downarrow t_0$), there exist $t'_{n_m}$ and $t''_{n_m}$ in the interval $(t_{n_m} - \frac{1}{2}(t_{n_m} - t_0),\, t_{n_m} + \frac{1}{2}(t_{n_m} - t_0))$ such that $|f(t'_{n_m}) - f(t''_{n_m})| > \frac{1}{k}$ for $m \in \mathbb{N}$. Now since $t'_{n_m}, t''_{n_m} > t_0$ and $\lim_{m \to \infty} t'_{n_m} = t_0$ and $\lim_{m \to \infty} t''_{n_m} = t_0$, this contradicts the existence of $f(t_0+)$. Similarly, a subsequence with $t_{n_\ell} \uparrow t_0$ contradicts the existence of $f(t_0-)$. Therefore $E_k \cap [a, b]$ is a finite set. ∎
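The right-continuous but locally unbounded function $f$ constructed before Proposition 9.3 is easy to evaluate directly. The sketch below is not from the text; the probe points at which we evaluate $f$ are arbitrary choices.

```python
import math

# f(x) = k for frac(x) in I_k = [t_{k-1}, t_k), where t_k = 1 - 2**-k,
# extended periodically with period 1.  Right-continuous, yet unbounded
# on [0, 1) and near every integer.

def f(x):
    y = x - math.floor(x)           # fractional part of x, in [0, 1)
    k = 1
    while y >= 1.0 - 0.5 ** k:      # advance until y < t_k = 1 - 2**-k
        k += 1
    return k                        # then y lies in I_k = [t_{k-1}, t_k)

# Right-continuity at the jump point t_3 = 0.875: the value there agrees
# with values just to the right, while the left limit differs by 1.
print(f(0.875), f(0.875 + 1e-9), f(0.875 - 1e-9))   # 4 4 3

# Unboundedness on [0, 1): f exceeds any bound near the left end of 1.
print(f(1 - 0.5 ** 30))                              # 31
```

The jump at each $t_k$ has size $1$, so every $E_k$ of Proposition 9.3 meets $[0, 1]$ in the whole infinite set $\{t_k\}$ only in the limit; on any interval $[a, b]$ with $b < 1$ only finitely many jump points occur, in agreement with the proposition.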
[II] Right-Continuous Modification of a Submartingale

Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. We are interested in the existence of a submartingale $Y$ which is a modification of $X$ in the sense that $Y(t, \cdot) = X(t, \cdot)$ a.e. on $(\Omega, \mathfrak{F}_t, P)$ for every $t \in \mathbb{R}_+$, but which has the property that every sample function is right-continuous everywhere on $\mathbb{R}_+$. We shall show that if the filtration $\{\mathfrak{F}_t : t \in \mathbb{R}_+\}$ is right-continuous and if $\mathfrak{F}_0$ is augmented, that is, if $\mathfrak{F}_0$ contains all the null sets of the probability space $(\Omega, \mathfrak{F}, P)$, then the right-continuity of the function $\mathrm{E}(X_t)$, $t \in \mathbb{R}_+$, implies the existence of such a modification $Y$.

Let us remark that if $X = \{X_t : t \in \mathbb{R}_+\}$ is a right-continuous submartingale on a right-continuous filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, then the function $\mathrm{E}(X_t)$, $t \in \mathbb{R}_+$, is indeed right-continuous. To show this, let $t \in \mathbb{R}_+$ be arbitrarily fixed and let $\{t_{-n} : n \in \mathbb{Z}_+\}$ be a strictly decreasing sequence in $\mathbb{R}_+$ with $t_{-n} \downarrow t$ as $n \to \infty$. The process $\{X_{t_{-n}} : n \in \mathbb{Z}_+\}$ is then a submartingale with reversed time with respect to the filtration $\{\mathfrak{F}_{t_{-n}} : n \in \mathbb{Z}_+\}$. According to Corollary 8.9, there exists $Y_{t_{-\infty}} \in L_1(\Omega, \mathfrak{F}_{t_{-\infty}}, P)$, where $\mathfrak{F}_{t_{-\infty}} = \bigcap_{n \in \mathbb{Z}_+} \mathfrak{F}_{t_{-n}}$, such that $\lim_{n \to \infty} X_{t_{-n}} = Y_{t_{-\infty}}$ a.e. on $(\Omega, \mathfrak{F}_{t_{-\infty}}, P)$ and $\lim_{n \to \infty} \|X_{t_{-n}} - Y_{t_{-\infty}}\|_1 = 0$. Now the right-continuity of the filtration implies that $\mathfrak{F}_{t_{-\infty}} = \mathfrak{F}_t$, and the right-continuity of $X$ implies that $X_t = \lim_{n \to \infty} X_{t_{-n}} = Y_{t_{-\infty}}$ a.e. on $(\Omega, \mathfrak{F}_t, P)$. Thus we have $\lim_{n \to \infty} \|X_{t_{-n}} - X_t\|_1 = 0$. This implies $\lim_{n \to \infty} \mathrm{E}(X_{t_{-n}}) = \mathrm{E}(X_t)$. From the arbitrariness of the sequence $\{t_{-n} : n \in \mathbb{Z}_+\}$ we have the right-continuity of the function $\mathrm{E}(X_t)$, $t \in \mathbb{R}_+$, at $t$.

Definition 9.4. If $X = \{X_t : t \in \mathbb{R}_+\}$ is a submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and if there exists a right-continuous submartingale $X^{(r)} = \{X^{(r)}_t : t \in \mathbb{R}_+\}$ on the filtered space such that $X^{(r)}_t = X_t$ a.e. on $(\Omega, \mathfrak{F}_t, P)$ for every $t \in \mathbb{R}_+$, then $X^{(r)}$ is called a right-continuous modification of $X$.
Definition 9.5. A filtration $\{\mathfrak{F}_t : t \in \mathbb{R}_+\}$ on a probability space $(\Omega, \mathfrak{F}, P)$ is said to be augmented if $\mathfrak{F}_t$ is an augmented sub-$\sigma$-algebra, that is, $\mathfrak{F}_t$ contains all the null sets in $(\Omega, \mathfrak{F}, P)$, for every $t \in \mathbb{R}_+$. A filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ is called an augmented filtered space if the filtration $\{\mathfrak{F}_t : t \in \mathbb{R}_+\}$ is augmented. Note that since a filtration $\{\mathfrak{F}_t : t \in \mathbb{R}_+\}$ is an increasing system of sub-$\sigma$-algebras, it is augmented if and only if $\mathfrak{F}_0$ is augmented.

Theorem 9.6. If $X = \{X_t : t \in \mathbb{R}_+\}$ is a submartingale on an augmented right-continuous filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and if $\mathrm{E}(X_t)$, $t \in \mathbb{R}_+$, is a right-continuous function, then $X$ has a right-continuous modification $X^{(r)} = \{X^{(r)}_t : t \in \mathbb{R}_+\}$.

The proof of this theorem is based on the following three Propositions.

Proposition 9.7. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a submartingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, let $\mathbb{Q}_+$ be the collection of all nonnegative rational numbers, and let $\Lambda$ be the null set in $(\Omega, \mathfrak{F}_\infty, P)$ in Proposition 9.1. Let $X^{(r)} = \{X^{(r)}_t : t \in \mathbb{R}_+\}$ and $X^{(l)} = \{X^{(l)}_t : t \in \mathbb{R}_+\}$ be the two processes defined by setting, for every $t \in \mathbb{R}_+$,

$X^{(r)}_t(\omega) = \lim_{s \downarrow t,\, s \in \mathbb{Q}_+} X_s(\omega)$ for $\omega \in \Lambda^c$,
$X^{(l)}_t(\omega) = \lim_{s \uparrow t,\, s \in \mathbb{Q}_+} X_s(\omega)$ for $\omega \in \Lambda^c$,
$X^{(r)}_t(\omega) = X^{(l)}_t(\omega) = 0$ for $\omega \in \Lambda$.

Then $X^{(r)}$ is right-continuous with finite left limits, and $X^{(l)}$ is left-continuous with finite right limits on $\mathbb{R}_+$. For both $X^{(r)}$ and $X^{(l)}$, every sample function is bounded on every finite interval in $\mathbb{R}_+$. The process $X^{(r)}$ is an $L_1$-process.

Proof. Let us prove the existence of finite left limits for $X^{(r)}$. It suffices to prove the existence of finite left limits of $X^{(r)}(\cdot, \omega)$ on $\mathbb{R}_+$ for $\omega \in \Lambda^c$. Let $t_0 \in \mathbb{R}_+$. To show that $\lim_{t \uparrow t_0} X^{(r)}(t, \omega)$ exists in $\mathbb{R}$, we show that for every $\varepsilon > 0$ there exists $\delta > 0$ such that

$|X^{(r)}(t', \omega) - X^{(r)}(t'', \omega)| \leq \varepsilon$ for $t', t'' \in (t_0 - \delta, t_0) \cap \mathbb{R}_+.$

Now since $X^{(l)}(t_0, \omega) = \lim_{s \uparrow t_0,\, s \in \mathbb{Q}_+} X(s, \omega) \in \mathbb{R}$, for every $\varepsilon > 0$ there exists $\delta > 0$ such that

$|X(s, \omega) - X^{(l)}(t_0, \omega)| < \frac{\varepsilon}{2}$ for $s \in (t_0 - \delta, t_0) \cap \mathbb{Q}_+.$
Let $t', t'' \in (t_0 - \delta, t_0) \cap \mathbb{R}_+$ and let $s'_n, s''_n \in (t_0 - \delta, t_0) \cap \mathbb{Q}_+$ be such that $s'_n \downarrow t'$ and $s''_n \downarrow t''$ as $n \to \infty$. Then by the definition of $X^{(r)}$,

$|X^{(r)}(t', \omega) - X^{(r)}(t'', \omega)| = |\lim_{n \to \infty} X(s'_n, \omega) - \lim_{n \to \infty} X(s''_n, \omega)| = \lim_{n \to \infty} |X(s'_n, \omega) - X(s''_n, \omega)| \leq \limsup_{n \to \infty} \{|X(s'_n, \omega) - X^{(l)}(t_0, \omega)| + |X(s''_n, \omega) - X^{(l)}(t_0, \omega)|\} \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.$

This proves the existence of finite left limits for $X^{(r)}$; the remaining assertions of Proposition 9.7 are proved by similar arguments. ∎
Proposition 9.8. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a submartingale on an augmented filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and let $X^{(r)}$ and $X^{(l)}$ be the processes of Proposition 9.7. Let $\mathfrak{F}^{(r)}_t = \bigcap_{u > t} \mathfrak{F}_u$ and $\mathfrak{F}^{(l)}_t = \sigma(\bigcup_{u < t} \mathfrak{F}_u)$ for $t \in \mathbb{R}_+$. Then $X^{(r)}$ is adapted to $\{\mathfrak{F}^{(r)}_t : t \in \mathbb{R}_+\}$, $X^{(l)}$ is adapted to $\{\mathfrak{F}^{(l)}_t : t \in \mathbb{R}_+\}$, and

(1) $\mathrm{E}(X^{(r)}_t \,|\, \mathfrak{F}_t) \geq X_t$ a.e. on $(\Omega, \mathfrak{F}_t, P)$

and

(2) $\mathrm{E}(X_t \,|\, \mathfrak{F}^{(l)}_t) \geq X^{(l)}_t$ a.e. on $(\Omega, \mathfrak{F}^{(l)}_t, P)$.

Moreover, $X^{(r)}$ is a submartingale with respect to $\{\mathfrak{F}^{(r)}_t : t \in \mathbb{R}_+\}$. If $X$ is a martingale to start with, then $X^{(r)}$ is a martingale.

Proof. For each $s \in \mathbb{R}_+$ consider the random variable $Y_s$ on $(\Omega, \mathfrak{F}, P)$ defined by $Y_s(\omega) = X_s(\omega)$ for $\omega \in \Lambda^c$ and $Y_s(\omega) = 0$ for $\omega \in \Lambda$. Since $\mathfrak{F}_0$ contains all the null sets in $(\Omega, \mathfrak{F}, P)$, we have in particular $\Lambda \in \mathfrak{F}_0 \subset \mathfrak{F}_s$. Then from the $\mathfrak{F}_s$-measurability of $X_s$ follows the
CHAPTER 2. MARTINGALES
𝔉_s-measurability of Y_s. Since Y_s is 𝔉_s-measurable for every s ∈ Q+, lim_{u↓t, u∈Q+} Y_u is 𝔉_s-measurable for every s ∈ Q+, s > t, and thus it is 𝔊_t^(r)-measurable. But by the definitions of X_t^(r) and Y_s we have X_t^(r) = lim_{s↓t, s∈Q+} Y_s. Thus X_t^(r) is 𝔊_t^(r)-measurable. This shows that X^(r) is adapted to {𝔊_t^(r) : t ∈ R+}. Similarly X^(l) is adapted to {𝔊_t^(l) : t ∈ R+}.

To prove (1), let t ∈ R+ and let {s_n : n ∈ Z+} be a strictly decreasing sequence in Q+ such that s_n ↓ t as n → ∞. By Corollary 8.9, {X_{s_n} : n ∈ Z+} is uniformly integrable and there exists Y ∈ L_1(Ω, 𝔊_t^(r), P) such that lim_{n→∞} X_{s_n} = Y a.e. on (Ω, 𝔊_t^(r), P), and therefore lim_{n→∞} ||X_{s_n} − X_t^(r)||_1 = 0. This convergence in L_1 implies lim_{n→∞} ||E(X_{s_n} | 𝔉_t) − E(X_t^(r) | 𝔉_t)||_1 = 0. Since E(X_{s_n} | 𝔉_t) ≥ X_t a.e. on (Ω, 𝔉_t, P) by the submartingale property of X, letting n → ∞ we obtain E(X_t^(r) | 𝔉_t) ≥ X_t a.e. on (Ω, 𝔉_t, P). This proves (1).

To prove (2), let {s_n : n ∈ Z+} be a strictly increasing sequence in Q+ such that s_n ↑ t, and let Y_n = E(X_t | 𝔉_{s_n}). Then {Y_n : n ∈ Z+} is a martingale which converges a.e. and in L_1 to Y_∞ = E(X_t | 𝔊_t^(l)) on (Ω, 𝔊_t^(l), P). By the submartingale property of X we have Y_n = E(X_t | 𝔉_{s_n}) ≥ X_{s_n} a.e. on (Ω, 𝔉_{s_n}, P). Letting n → ∞ and recalling the definition of X_t^(l) we have Y_∞ ≥ X_t^(l) a.e. on (Ω, 𝔊_t^(l), P). This proves (2).

Since X^(r) is an L_1-process adapted to {𝔊_t^(r) : t ∈ R+}, to show that it is a submartingale with respect to {𝔊_t^(r) : t ∈ R+} it remains only to verify that for s, t ∈ R+, s < t, we have

(3) ∫_E X_t^(r) dP ≥ ∫_E X_s^(r) dP  for every E ∈ 𝔊_s^(r).

Let {ε_n : n ∈ Z+} be a strictly decreasing sequence such that ε_n ↓ 0 and t + ε_n ∈ Q+ for every n ∈ Z+. Then by the definition of X_t^(r), lim_{n→∞} X_{t+ε_n} = X_t^(r) a.e. on (Ω, 𝔉, P), and by Corollary 8.9, {X_{t+ε_n} : n ∈ Z+} is uniformly integrable. Therefore by Theorem 4.16, lim_{n→∞} ||X_{t+ε_n} − X_t^(r)||_1 = 0. This convergence in L_1 implies

(4) lim_{n→∞} ∫_E X_{t+ε_n} dP = ∫_E X_t^(r) dP  for every E ∈ 𝔉.
Similarly, with a strictly decreasing sequence {η_n : n ∈ Z+} such that η_n ↓ 0 and s + η_n ∈ Q+ for every n ∈ Z+, we obtain

(5) lim_{n→∞} ∫_E X_{s+η_n} dP = ∫_E X_s^(r) dP  for every E ∈ 𝔉.

By choosing η_n so that η_n < ε_n, we have E(X_{t+ε_n} | 𝔉_{s+η_n}) ≥ X_{s+η_n} a.e. on (Ω, 𝔉_{s+η_n}, P) by the submartingale property of X, that is,

(6) ∫_E X_{t+ε_n} dP ≥ ∫_E X_{s+η_n} dP  for E ∈ 𝔉_{s+η_n}.

Then for E ∈ 𝔊_s^(r) ⊂ 𝔉_{s+η_n}, we have by (4), (6) and (5)

∫_E X_t^(r) dP = lim_{n→∞} ∫_E X_{t+ε_n} dP ≥ lim_{n→∞} ∫_E X_{s+η_n} dP = ∫_E X_s^(r) dP,

proving (3). If X is a martingale, then the inequality in (6) is replaced by an equality and therefore the inequality in (3) is replaced by an equality, so that X^(r) is a martingale. ■

Proposition 9.9. In the same setting as in Proposition 9.7, assume that the filtered space (Ω, 𝔉, {𝔉_t}, P) is augmented and right-continuous. Then for every t_0 ∈ R+ we have X_{t_0}^(r) ≥ X_{t_0} a.e. on (Ω, 𝔉_{t_0}, P), and X_{t_0}^(r) = X_{t_0} a.e. on (Ω, 𝔉_{t_0}, P) if and only if the function E(X_t), t ∈ R+, is right-continuous at t_0.

Proof. Let us assume the right-continuity of the filtration {𝔉_t : t ∈ R+}. Then 𝔊_t^(r) = 𝔉_t for every t ∈ R+. Let t_0 ∈ R+. To show that X_{t_0}^(r) ≥ X_{t_0} a.e. on (Ω, 𝔉_{t_0}, P), let {ε_n : n ∈ Z+} be a strictly decreasing sequence such that ε_n ↓ 0 and t_0 + ε_n ∈ Q+ for every n ∈ Z+. By the submartingale property of X we have

∫_E X_{t_0+ε_n} dP ≥ ∫_E X_{t_0} dP  for E ∈ 𝔉_{t_0}.

Recalling (4) in the proof of Proposition 9.8, we have

∫_E X_{t_0}^(r) dP ≥ ∫_E X_{t_0} dP  for E ∈ 𝔉_{t_0}.

Since 𝔊_{t_0}^(r) = 𝔉_{t_0}, X_{t_0}^(r) and X_{t_0} are both 𝔉_{t_0}-measurable. Then the last inequality implies that X_{t_0}^(r) ≥ X_{t_0} a.e. on (Ω, 𝔉_{t_0}, P).

Let {s_n : n ∈ Z+} be a strictly decreasing sequence in Q+ such that s_n ↓ t_0 as n → ∞. Then as we saw in the proof of Proposition 9.8, lim_{n→∞} ||X_{s_n} − X_{t_0}^(r)||_1 = 0 so that
lim_{n→∞} E(X_{s_n}) = E(X_{t_0}^(r)). Since X is a submartingale, E(X_t) decreases as t decreases. Thus the sequential convergence above implies that E(X_t) ↓ E(X_{t_0}^(r)) as t ↓ t_0. Now if X_{t_0}^(r) = X_{t_0} a.e. on (Ω, 𝔉_{t_0}, P), then E(X_{t_0}^(r)) = E(X_{t_0}) so that E(X_t) ↓ E(X_{t_0}) as t ↓ t_0, that is, E(X_t) is right-continuous at t_0. Conversely, if E(X_t) is right-continuous at t_0, then E(X_t) ↓ E(X_{t_0}) as t ↓ t_0 so that E(X_{t_0}^(r)) = E(X_{t_0}). This, together with the fact that X_{t_0}^(r) ≥ X_{t_0} a.e. on (Ω, 𝔉_{t_0}, P), implies that X_{t_0}^(r) = X_{t_0} a.e. on (Ω, 𝔉_{t_0}, P). ■

Proof of Theorem 9.6. Let X = {X_t : t ∈ R+} be a submartingale on an augmented right-continuous filtered space (Ω, 𝔉, {𝔉_t}, P) such that E(X_t), t ∈ R+, is a right-continuous function. Let Q+ be the collection of all nonnegative rational numbers. According to Proposition 9.1, there exists a null set Λ in (Ω, 𝔉_∞, P) such that lim_{s↓t, s∈Q+} X_s(ω) and lim_{s↑t, s∈Q+} X_s(ω) exist in R for every t ∈ R+ and for every ω ∈ Λ^c. If we define X^(r) = {X_t^(r) : t ∈ R+} by setting for every t ∈ R+

X_t^(r)(ω) = lim_{s↓t, s∈Q+} X_s(ω)  for ω ∈ Λ^c,
X_t^(r)(ω) = 0  for ω ∈ Λ,

then according to Proposition 9.7, X^(r) is an L_1-process and every sample function of X^(r) is bounded on every finite interval in R+ and is right-continuous with finite left limits everywhere on R+. The right-continuity of the filtration {𝔉_t : t ∈ R+} implies that X^(r) is {𝔉_t}-adapted according to Proposition 9.8. The right-continuity of E(X_t), t ∈ R+, implies that X_t^(r) = X_t a.e. on (Ω, 𝔉_t, P) for every t ∈ R+ according to Proposition 9.9. Thus X^(r) is a right-continuous modification of X. ■

Corollary 9.10. Let (Ω, 𝔉, {𝔉_t}, P) be an augmented right-continuous filtered space and let ξ be an integrable random variable on the probability space (Ω, 𝔉, P). Then for every t ∈ R+ there exists a version X_t of E(ξ | 𝔉_t) such that X = {X_t : t ∈ R+} is a right-continuous uniformly integrable martingale on the filtered space. Furthermore, if |ξ| ≤ K a.e. on (Ω, 𝔉, P) for some K > 0, then X_t can be so chosen that besides being right-continuous X satisfies the condition |X(t, ω)| ≤ K for (t, ω) ∈ R+ × Ω.

Proof. Let Y_t be an arbitrary version of E(ξ | 𝔉_t) for t ∈ R+. Then Y = {Y_t : t ∈ R+} is a martingale on the filtered space and according to Theorem 4.24 it is uniformly integrable. Now E(Y_t) is a constant function of t ∈ R+ since E(Y_t) = E[E(ξ | 𝔉_t)] = E(ξ). Thus by Theorem 9.6 there exists a right-continuous martingale X = {X_t : t ∈ R+} on the filtered space such that X_t = Y_t a.e. on (Ω, 𝔉_t, P) for every t ∈ R+. This implies that X_t is a version
of E(ξ | 𝔉_t) for every t ∈ R+. Thus for every t ∈ R+ there exists a version X_t of E(ξ | 𝔉_t) such that X = {X_t : t ∈ R+} is a right-continuous martingale on the filtered space. The uniform integrability of {Y_t : t ∈ R+} implies the uniform integrability of {X_t : t ∈ R+}.

If |ξ| ≤ K a.e. on (Ω, 𝔉, P), then for every t ∈ R+ we have |E(ξ | 𝔉_t)| ≤ K a.e. on (Ω, 𝔉_t, P). Let Y_t be an arbitrary version of E(ξ | 𝔉_t). Then there exists a null set Λ_{Y_t} in (Ω, 𝔉_t, P) such that |Y_t| ≤ K on Λ_{Y_t}^c. If we let Z_t = Y_t on Λ_{Y_t}^c and Z_t = 0 on Λ_{Y_t}, then Z_t is a version of E(ξ | 𝔉_t) and |Z_t| ≤ K on Ω. According to Proposition 9.1, there exists a null set Λ in (Ω, 𝔉_∞, P) such that if ω ∈ Λ^c then lim_{s↓t, s∈Q+} Z_s(ω) and lim_{s↑t, s∈Q+} Z_s(ω) exist in R for every t ∈ R+. By defining X_t for every t ∈ R+ by

X_t(ω) = lim_{s↓t, s∈Q+} Z_s(ω)  for ω ∈ Λ^c,
X_t(ω) = 0  for ω ∈ Λ,

we obtain a version X_t of E(ξ | 𝔉_t) such that X = {X_t : t ∈ R+} is a right-continuous martingale bounded by K. ■
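The regularization by right limits along the rationals used throughout this section can be illustrated numerically. The following sketch is an invented example, not from the text: a path is given a "wrong" value at a single point, and evaluating along rationals decreasing to t recovers the right-continuous version, exactly as in the construction of X^(r).

```python
from fractions import Fraction

# Illustrative sketch (hypothetical path, not from the text): X has a bad value
# at s = 1 only, and X_r(t) approximates lim_{s ↓ t, s rational} X(s)
# along the dyadic rationals s = t + 2^(-k).

def X(s):
    if s == 1:
        return 99.0          # modified on a single point only
    return float(s >= 1)     # otherwise the indicator of [1, infinity)

def X_r(t, n_terms=40):
    vals = [X(Fraction(t) + Fraction(1, 2 ** k)) for k in range(10, n_terms)]
    return vals[-1]          # the right limit along dyadic rationals

print(X(1))      # 99.0: the given version at t = 1
print(X_r(1))    # 1.0 : the right-continuous modification at t = 1
```

Changing X on a single point (a set that a continuous-time process visits with probability relevant only on a null set of paths in the stochastic setting) does not affect the right-limit regularization.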
§10 Increasing Processes

[I] The Lebesgue-Stieltjes Integral

If we set aside technical refinements in its definition, then an increasing process is a stochastic process A = {A_t : t ∈ R+} on a probability space (Ω, 𝔉, P) whose sample functions A(·, ω), ω ∈ Ω, are real valued monotone increasing functions on R+. For each ω ∈ Ω, let μ(·, ω) be the Lebesgue-Stieltjes measure on (R+, 𝔅_R+) determined by the monotone increasing function A(·, ω). If X = {X_t : t ∈ R+} is a stochastic process on (Ω, 𝔉, P) whose sample functions X(·, ω), ω ∈ Ω, are Borel measurable real valued functions on R+, then for any t ∈ R+ the Lebesgue-Stieltjes integral ∫_[0,t] X(s, ω) μ(ds, ω) exists for every ω ∈ Ω such that at least one of ∫_[0,t] X^+(s, ω) μ(ds, ω) and ∫_[0,t] X^−(s, ω) μ(ds, ω) is finite.

Let us review the definition of the Lebesgue-Stieltjes measure determined by a real valued monotone increasing function on R. A collection ℑ of subsets of a set S is called a semialgebra of subsets of S if it satisfies the following conditions:
1°. ∅, S ∈ ℑ,
2°. I ∩ J ∈ ℑ for I, J ∈ ℑ,
3°. for every I ∈ ℑ there exists a finite disjoint collection {I_k : k = 0, …, n} in ℑ such that I_0 = I and ∪_{j=0}^k I_j ∈ ℑ for k = 0, …, n, with ∪_{j=0}^n I_j = S.

In R, for instance, the collection of all intervals which are open on the left and closed on the right, together with ∅, constitutes a semialgebra. If ℑ is a semialgebra, then I^c is a finite disjoint union of members of ℑ for every I ∈ ℑ. Also every finite union of members of ℑ is equal to a finite disjoint union of members of ℑ. Let us write a(ℑ) for the algebra generated by ℑ, that is, the smallest algebra of subsets of S containing ℑ. It follows immediately that a(ℑ) is the collection of all finite unions of members of ℑ.

Let us call a set function μ on a semialgebra ℑ of subsets of a set S a measure on a semialgebra if it is nonnegative extended real valued with μ(∅) = 0 and countably additive on ℑ, that is, if {I_n : n ∈ N} is a disjoint collection in ℑ with ∪_{n∈N} I_n ∈ ℑ then μ(∪_{n∈N} I_n) = Σ_{n∈N} μ(I_n). Similarly we call a set function on an algebra of subsets of a set a measure on an algebra if it is nonnegative extended real valued, vanishes at ∅, and is countably additive on the algebra.

Let μ be a measure on a semialgebra ℑ of subsets of a set S. Since every A ∈ a(ℑ) is the union of finitely many disjoint members I_1, …, I_n of ℑ, if we define μ(A) = Σ_{j=1}^n μ(I_j) then μ is well-defined, that is, it does not depend on the way A is decomposed, and furthermore it is countably additive on a(ℑ) and is thus a measure on a(ℑ). The extension of a measure from a semialgebra ℑ to the algebra a(ℑ) generated by it is always unique, that is, if μ and ν are two measures on a(ℑ) such that μ = ν on ℑ then μ = ν on a(ℑ).

A measure μ on an algebra 𝔄 of subsets of a set S can always be extended to a measure on σ(𝔄), the σ-algebra generated by 𝔄, by means of an outer measure derived from μ as follows. Let us define a set function μ* on the σ-algebra of all subsets of S by setting for an arbitrary E ⊂ S

μ*(E) = inf { Σ_{n∈N} μ(A_n) : A_n ∈ 𝔄, n ∈ N, E ⊂ ∪_{n∈N} A_n },

where the infimum is over the collection of all coverings of E by countably many members of 𝔄. Thus defined, μ* is an outer measure, satisfying the conditions that it is a nonnegative extended real valued set function defined on the σ-algebra of all subsets of S, vanishes at ∅, is monotone increasing in the sense that μ*(E_1) ≤ μ*(E_2) for E_1 ⊂ E_2 ⊂ S, and is countably subadditive, that is, for an arbitrary collection {E_n : n ∈ N} of subsets of S we have μ*(∪_{n∈N} E_n) ≤ Σ_{n∈N} μ*(E_n). The collection 𝔄* of all subsets E of S satisfying the Carathéodory criterion

μ*(T) = μ*(T ∩ E) + μ*(T ∩ E^c)  for every T ⊂ S
is a σ-algebra of subsets of S containing the algebra 𝔄, and the restriction of μ* to this σ-algebra, which we denote by μ, is countably additive on this σ-algebra. Thus we have extended the measure μ on the algebra 𝔄 to a measure on the σ-algebra 𝔄*. This is the Hopf Extension Theorem. Note also that the measure space (S, 𝔄*, μ) is always a complete measure space. The σ-algebra 𝔄* depends on μ. However, since 𝔄 ⊂ 𝔄* we always have σ(𝔄) ⊂ 𝔄* for all μ. The restriction of μ on 𝔄* to σ(𝔄) is then an extension of the original measure μ on 𝔄 to a measure on σ(𝔄). The extension of a measure μ on an algebra 𝔄 to a measure on the σ-algebra σ(𝔄) is unique provided μ is σ-finite on 𝔄.

Let g be a real valued monotone increasing function on R. Then for every a ∈ R we have g(a−) = lim_{x↑a} g(x) ∈ R and g(a+) = lim_{x↓a} g(x) ∈ R; g is continuous at a ∈ R if and only if g(a−) = g(a+); and g has at most countably many points of discontinuity. Let g(−∞) = lim_{x→−∞} g(x) and g(∞) = lim_{x→∞} g(x). Let μ_g be a set function defined on the collection of all open intervals and singletons in R by setting

μ_g((a, b)) = g(b−) − g(a+)  and  μ_g({a}) = g(a+) − g(a−).

The definition of μ_g is then extended to intervals of other types by setting

μ_g((a, b]) = μ_g((a, b)) + μ_g({b}) = g(b+) − g(a+),
μ_g([a, b)) = μ_g({a}) + μ_g((a, b)) = g(b−) − g(a−),
μ_g([a, b]) = μ_g({a}) + μ_g((a, b)) + μ_g({b}) = g(b+) − g(a−).

Note in particular that if g is right-continuous, then μ_g((a, b]) = g(b) − g(a).

Let ℑ be the semialgebra of subsets of R consisting of ∅, (−∞, ∞), and all left-open and right-closed intervals, that is, all intervals of the types (a, b], (−∞, b], and (a, ∞). Let g be a real valued monotone increasing function on R and let μ_g be a set function on ℑ defined by μ_g((a, b]) = g(b+) − g(a+). Then μ_g is a σ-finite measure on the semialgebra ℑ. Let us extend μ_g to a σ-finite measure on the algebra 𝔄 = a(ℑ) by the procedure described above. Let μ*_g be the outer measure derived from the measure μ_g on the algebra 𝔄. Let 𝔄* be the σ-algebra containing 𝔄 and consisting of all subsets of R satisfying the Carathéodory criterion. The complete measure space (R, 𝔄*, μ_g) is the Lebesgue-Stieltjes measure space on R determined by the real valued monotone increasing function g on R. In particular, the Lebesgue-Stieltjes measure space on R determined by the function g(x) = x for x ∈ R is the Lebesgue measure space on R.

Note that the σ-algebra 𝔄* depends on the function g. For instance, when g(x) = x for x ∈ R then 𝔄* is the σ-algebra of the Lebesgue measurable sets in R. But when g is the identically vanishing function on R, then μ_g(R) = 0 so that R is
a null set in the complete measure space (R, 𝔄*, μ_g), and this implies that every subset of the null set R is in the σ-algebra 𝔄*, and therefore 𝔄* in this case is the collection of all subsets of R.

When considering a family of Lebesgue-Stieltjes measures μ_g on R corresponding to a family of real valued monotone increasing functions g, it is convenient to have a common σ-algebra of subsets of R on which all the Lebesgue-Stieltjes measures in the family are defined. Now since 𝔄 ⊂ 𝔄* and since σ(𝔄) = σ(a(ℑ)) = 𝔅_R, we have 𝔅_R ⊂ 𝔄*. The Borel σ-algebra 𝔅_R then can serve as the common σ-algebra: the measure spaces (R, 𝔅_R, μ_g) corresponding to real valued monotone increasing functions g on R have the common σ-algebra 𝔅_R on which all the measures μ_g are defined.

The Lebesgue-Stieltjes measure μ_g on (R, 𝔄*) determined by a real valued monotone increasing function g on R is always a σ-finite measure. It is a finite measure if and only if g is bounded on R. If E ∈ 𝔄* and E is a bounded set, that is, contained in a finite interval, then μ_g(E) < ∞. For any a ∈ R, we have μ_g({a}) = g(a+) − g(a−), so that μ_g({a}) = 0 if and only if g is continuous at a ∈ R. The value of μ_g({a}) does not depend on the value of g at a ∈ R. Furthermore, since the definition of μ_g on the semialgebra ℑ of left-open and right-closed intervals in R by μ_g((a, b]) = g(b+) − g(a+) is independent of the values g(a) and g(b), it is clear that if we redefine g at its points of discontinuity in such a way that its monotone increasing property is not destroyed, then the redefined function determines the same Lebesgue-Stieltjes measure space as the original function g. In particular, the right-continuous modification of g defined by h(x) = g(x+) for x ∈ R determines the same Lebesgue-Stieltjes measure space as g, that is, μ_h = μ_g on 𝔄*_h = 𝔄*_g. For the same reason as above, for any c ∈ R we have μ_{g+c} = μ_g on 𝔄*_{g+c} = 𝔄*_g.
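The formulas μ_g((a, b]) = g(b+) − g(a+) and μ_g({a}) = g(a+) − g(a−), and their insensitivity to the value of g at a jump point, can be checked numerically. The sketch below is illustrative only: the step function g and the tolerance EPS are assumptions, with one-sided limits approximated by evaluating a step function at a ± EPS.

```python
# Illustrative sketch (not from the text): Lebesgue-Stieltjes measure of
# intervals determined by an increasing step function g with a unit jump at 0,
# so g(0-) = 0 and g(0+) = 1 no matter what value is assigned at 0 itself.

EPS = 1e-9  # numerical stand-in for one-sided limits of a step function

def g(x, value_at_jump=1.0):
    if x < 0:
        return 0.0
    if x == 0:
        return value_at_jump   # value at the jump point: irrelevant below
    return 1.0

def left_limit(f, a):  return f(a - EPS)
def right_limit(f, a): return f(a + EPS)

def mu_left_open(f, a, b):
    # mu_g((a, b]) = g(b+) - g(a+)
    return right_limit(f, b) - right_limit(f, a)

def mu_singleton(f, a):
    # mu_g({a}) = g(a+) - g(a-)
    return right_limit(f, a) - left_limit(f, a)

print(mu_left_open(g, -1, 1))                        # 1.0: mass of the jump at 0
print(mu_singleton(g, 0))                            # 1.0
g2 = lambda x: g(x, value_at_jump=0.5)               # redefine g at its jump
print(mu_left_open(g2, -1, 1), mu_singleton(g2, 0))  # unchanged: 1.0 1.0
```

Redefining g at the discontinuity (here to 0.5) leaves every interval measure unchanged, matching the remark that μ_g depends only on the one-sided limits of g.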
[II] Integration with Respect to Increasing Processes

Definition 10.1. A stochastic process A = {A_t : t ∈ R+} on a filtered space (Ω, 𝔉, {𝔉_t}, P) is called an increasing process if it satisfies the following conditions.
1°. A is {𝔉_t}-adapted.
2°. A is an L_1-process.
3°. A(·, ω) is a real valued right-continuous monotone increasing function on R+ with A(0, ω) = 0 for every ω ∈ Ω.
A is called an almost surely increasing process if it satisfies conditions 1°, 2° and the following condition.
4°. There exists a null set Λ_A in (Ω, 𝔉_∞, P) such that 3° holds for every ω ∈ Λ_A^c.

We call Λ_A an exceptional null set for the almost surely increasing process A. Note that an exceptional null set Λ_A for an almost surely increasing process A is not unique. In fact any null set in (Ω, 𝔉_∞, P) containing Λ_A is an exceptional null set for A.

For an almost surely increasing process A = {A_t : t ∈ R+}, A_∞(ω) = lim_{t→∞} A_t(ω) exists for ω ∈ Λ_A^c, that is, A_∞ exists a.e. on (Ω, 𝔉_∞, P). Let A_∞(ω) = 0 for ω ∈ Λ_A so that A_∞ is an extended real valued 𝔉_∞-measurable random variable defined on the entire space Ω. Regarding the integrability of A_∞ we have the following.

Lemma 10.2. For an almost surely increasing process A = {A_t : t ∈ R+} on a filtered space (Ω, 𝔉, {𝔉_t}, P) the following conditions are equivalent.
1°. A_∞ is integrable.
2°. A is uniformly integrable.
3°. A is L_1-bounded.

Proof. If A is an almost surely increasing process, then 0 ≤ A_t ≤ A_∞ a.e. on (Ω, 𝔉_∞, P). Thus, if A_∞ is integrable, then by 4) of Proposition 4.11, A is uniformly integrable and in particular L_1-bounded. Conversely, if A is L_1-bounded, that is, sup_{t∈R+} E(A_t) < ∞, then by the Monotone Convergence Theorem we have

E(A_∞) = lim_{t→∞} E(A_t) ≤ sup_{t∈R+} E(A_t) < ∞

so that A_∞ is integrable. ■

To consider the Lebesgue-Stieltjes measures on (R+, 𝔅_R+) determined by the sample functions of an increasing process A = {A_t : t ∈ R+} on a filtered space (Ω, 𝔉, {𝔉_t}, P), let us assume that the definition of a sample function A(·, ω) is always extended to all of R by setting A(t, ω) = 0 for t ∈ (−∞, 0). Let μ_A(·, ω) be the Lebesgue-Stieltjes measure on (R, 𝔅_R) determined by the real valued right-continuous increasing function A(·, ω) on R. By our extension of the definition of A(·, ω), we have μ_A({0}, ω) = A(0, ω) − A(0−, ω) = 0. We then restrict μ_A(·, ω) to (R+, 𝔅_R+). For an almost surely increasing process A, we let μ_A(·, ω) = 0 for ω in an exceptional null set Λ_A for A. By this convention, corresponding to an almost surely increasing process A there exists a family {μ_A(·, ω) : ω ∈ Ω} of Lebesgue-Stieltjes measures on (R+, 𝔅_R+).

Definition 10.3. For an almost surely increasing process A = {A_t : t ∈ R+} on a filtered space (Ω, 𝔉, {𝔉_t}, P) let μ_A(·, ω) be the Lebesgue-Stieltjes measure on (R+, 𝔅_R+) determined by A(·, ω) for ω ∈ Ω. Let X be a stochastic process on the filtered space such that X(·, ω) is a Borel measurable function on R+ for every ω ∈ Ω. We define the integral of X with respect to A on [0, t] for t ∈ R+ by

∫_[0,t] X(s, ω) dA(s, ω) = ∫_[0,t] X(s, ω) μ_A(ds, ω)  for ω ∈ Ω,

provided the Lebesgue-Stieltjes integral on the right hand side exists. The Lebesgue-Stieltjes integral ∫_[0,t] X(s, ω) μ_A(ds, ω)
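For a pure-jump increasing path A, the pathwise integral of Definition 10.3 reduces to a sum of X-values at the jump times weighted by the jump sizes. The sketch below is a hypothetical illustration (the names and the finite jump list, which stands in for μ_A(·, ω), are invented):

```python
# Illustrative sketch (not from the text): pathwise integral of X with respect
# to an increasing pure-jump path A on [0, t], i.e.
# integral_[0,t] X(s) dA(s) = sum over jump times s_k <= t of X(s_k) * jump size.

def stieltjes_integral(X, jumps, t):
    """jumps: list of (s_k, size_k) with s_k >= 0 and size_k > 0."""
    return sum(X(s) * size for s, size in jumps if s <= t)

# A has unit jumps at s = 0.5 and s = 1.5; integrate X(s) = s.
jumps = [(0.5, 1.0), (1.5, 1.0)]
print(stieltjes_integral(lambda s: s, jumps, t=1.0))   # 0.5
print(stieltjes_integral(lambda s: s, jumps, t=2.0))   # 2.0
```

Because the interval of integration is the closed set [0, t], a jump located exactly at t is included, consistent with μ_A((a, b]) charging right endpoints.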
the set function λ on the semialgebra 𝔄 × 𝔅 defined by

λ(A × B) = ∫_B μ(A, y) ν(dy)  for A × B ∈ 𝔄 × 𝔅

is countably additive on 𝔄 × 𝔅 with λ(∅) = 0 and thus can be extended to a measure on σ(𝔄 × 𝔅).
2) If there exists a disjoint collection {A_n : n ∈ N} in 𝔄 such that ∪_{n∈N} A_n = S and ∫_T μ(A_n, y) ν(dy) < ∞ for n ∈ N, then λ is σ-finite on 𝔄 × 𝔅 and its extension as a measure to σ(𝔄 × 𝔅) is unique.

Proof. Let E = A × B ∈ 𝔄 × 𝔅 and let {E_n = A_n × B_n : n ∈ N} be a disjoint collection in 𝔄 × 𝔅 with ∪_{n∈N} E_n = E. Then

Σ_{n∈N} λ(E_n) = Σ_{n∈N} λ(A_n × B_n) = Σ_{n∈N} ∫_{B_n} μ(A_n, y) ν(dy)
= Σ_{n∈N} ∫_T 1_{B_n}(y) μ(A_n, y) ν(dy) = ∫_T Σ_{n∈N} 1_{B_n}(y) μ(A_n, y) ν(dy)

by the Monotone Convergence Theorem. For D ⊂ S × T and y ∈ T, let D_{·,y} = {x ∈ S : (x, y) ∈ D} be the section of D at y. Then

(E_n)_{·,y} = A_n if y ∈ B_n  and  (E_n)_{·,y} = ∅ if y ∉ B_n,

and thus 1_{B_n}(y) μ(A_n, y) = μ((E_n)_{·,y}, y). Therefore

Σ_{n∈N} λ(E_n) = ∫_T Σ_{n∈N} μ((E_n)_{·,y}, y) ν(dy).

Since {E_n : n ∈ N} is a disjoint collection, so is {(E_n)_{·,y} : n ∈ N}. Also from ∪_{n∈N} E_n = E we have

∪_{n∈N} (E_n)_{·,y} = E_{·,y}.

Thus, by the countable additivity of μ(·, y),

Σ_{n∈N} λ(E_n) = ∫_T μ(∪_{n∈N} (E_n)_{·,y}, y) ν(dy) = ∫_T μ(E_{·,y}, y) ν(dy) = ∫_T 1_B(y) μ(A, y) ν(dy) = ∫_B μ(A, y) ν(dy) = λ(E).

This proves the countable additivity of λ on the semialgebra 𝔄 × 𝔅. Then λ can be extended to a measure on the σ-algebra
σ(𝔄 × 𝔅).

Proposition 10.6. Let (S, 𝔄) and (T, 𝔅) be two measurable spaces and let V be a collection of real valued functions on S × T satisfying the following conditions:
1°. 1_E ∈ V for every E ∈ 𝔄 × 𝔅.
2°. If f, g ∈ V and α, β ≥ 0 then αf + βg ∈ V; if f, g ∈ V and f ≤ g then g − f ∈ V.
3°. If {f_n : n ∈ N} is an increasing sequence of nonnegative functions in V such that f = lim_{n→∞} f_n is a real valued nonnegative function and f(·, y) is bounded on S for every y ∈ T, then f ∈ V.
Then V contains all real valued nonnegative σ(𝔄 × 𝔅)-measurable functions f on S × T such that f(·, y) is bounded on S for every y ∈ T.

(For an arbitrary real valued σ(𝔄 × 𝔅)-measurable function f on S × T such that f(·, y) is bounded on S for every y ∈ T, writing f = f^+ − f^− and applying Proposition 10.6 to the nonnegative functions f^+ and f^− yields the corresponding statement for such f; this is Corollary 10.7, which is quoted below.)

Theorem 10.8. Let (S, 𝔄) and (T, 𝔅) be two measurable spaces and let {μ(·, y) : y ∈ T} be a 𝔅-measurable family of finite measures on (S, 𝔄). For an extended real valued σ(𝔄 × 𝔅)-measurable function f on S × T, let f* be defined by

f*(y) = ∫_S f(x, y) μ(dx, y)

for y ∈ T for which the integral exists.
1) If f is such that f(·, y) is bounded on S for every y ∈ T, then f* is a real valued 𝔅-measurable function on T.
2) If f is a nonnegative extended real valued function, then f* is a nonnegative extended real valued 𝔅-measurable function on T.
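On finite spaces the objects of Theorem 10.8 — a 𝔅-measurable family of finite measures μ(·, y), the function f*(y) = ∫_S f(x, y) μ(dx, y), and the measure λ(A × B) = ∫_B μ(A, y) ν(dy) — can be computed directly. The following toy sketch uses invented weights purely for illustration:

```python
# Illustrative finite-space sketch (not from the text): a family of measures
# mu(., y) on S indexed by y in T, the map f*(y), and lambda(A x B).

S = [0, 1, 2]
T = ["a", "b"]
mu = {"a": {0: 1.0, 1: 1.0, 2: 0.0},   # mu(., "a"): point weights on S
      "b": {0: 0.5, 1: 0.5, 2: 2.0}}   # mu(., "b")
nu = {"a": 1.0, "b": 3.0}              # a measure on T

def f_star(f, y):
    # f*(y) = integral over S of f(x, y) with respect to mu(., y)
    return sum(f(x, y) * mu[y][x] for x in S)

def lam(A, B):
    # lambda(A x B) = integral over B of mu(A, y) with respect to nu
    return sum(sum(mu[y][x] for x in A) * nu[y] for y in B)

f = lambda x, y: x + (0 if y == "a" else 10)
# iterated integral of f*(y) against nu agrees with summing f against lambda:
iterated = sum(f_star(f, y) * nu[y] for y in T)
pointwise = sum(f(x, y) * mu[y][x] * nu[y] for x in S for y in T)
print(iterated, pointwise)
```

The agreement of `iterated` and `pointwise` is the finite-space shadow of Theorem 10.9's equality (2) below.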
Proof. The collection V of all real valued σ(𝔄 × 𝔅)-measurable functions f on S × T, with f(·, y) bounded on S for every y ∈ T, for which f* is a real valued 𝔅-measurable function on T, satisfies 1° and 2° of Proposition 10.6 (for f = 1_{A×B} we have f*(y) = 1_B(y) μ(A, y), which is 𝔅-measurable by the definition of a 𝔅-measurable family). If {f_n : n ∈ N} is an increasing sequence of nonnegative functions in V such that f = lim_{n→∞} f_n is a real valued nonnegative function with f(·, y) bounded on S for every y ∈ T, then

f*(y) = ∫_S lim_{n→∞} f_n(x, y) μ(dx, y) = lim_{n→∞} ∫_S f_n(x, y) μ(dx, y) = lim_{n→∞} f_n*(y)  for y ∈ T,

by the Monotone Convergence Theorem. Then the 𝔅-measurability of f_n* for every n ∈ N implies that of f*. Thus f ∈ V. This verifies the conditions in Proposition 10.6 for our V. Therefore V contains all real valued nonnegative σ(𝔄 × 𝔅)-measurable functions f on S × T such that f(·, y) is bounded on S for every y ∈ T.
and let ν be a measure on (T, 𝔅). Let λ be a measure on σ(𝔄 × 𝔅) such that

(1) λ(A × B) = ∫_B μ(A, y) ν(dy)  for A × B ∈ 𝔄 × 𝔅.

1) Let f be a real valued σ(𝔄 × 𝔅)-measurable function on S × T such that ∫_{S×T} f(x, y) λ(d(x, y)) exists. Then

(2) ∫_{S×T} f(x, y) λ(d(x, y)) = ∫_T { ∫_S f(x, y) μ(dx, y) } ν(dy),

in the sense that the right side of (2) also exists and the equality holds. On the other hand, if ∫_T { ∫_S |f(x, y)| μ(dx, y) } ν(dy) < ∞, then both sides of (2) exist and the equality holds.
2) If f is a nonnegative extended real valued σ(𝔄 × 𝔅)-measurable function on S × T then (2) holds.

Proof. 0) Let us show first that if f is a nonnegative valued σ(𝔄 × 𝔅)-measurable function on S × T such that f(·, y) is bounded on S for every y ∈ T, then (2) holds. Let V be the collection of all such functions for which (2) holds. For E = A × B ∈ 𝔄 × 𝔅 we have by (1)
∫_{S×T} 1_E(x, y) λ(d(x, y)) = λ(E) = λ(A × B).

On the other hand,

∫_T { ∫_S 1_E(x, y) μ(dx, y) } ν(dy) = ∫_T 1_B(y) μ(A, y) ν(dy) = ∫_B μ(A, y) ν(dy) = λ(A × B).

This shows that 1_E ∈ V. Clearly V satisfies 2° of Corollary 10.7. Let {f_n : n ∈ N} be an increasing sequence of nonnegative functions in V such that f = lim_{n→∞} f_n is a real valued nonnegative function and f(·, y) is bounded on S for every y ∈ T. To show that f ∈ V we show that f satisfies (2). By the Monotone Convergence Theorem we have

∫_{S×T} f(x, y) λ(d(x, y)) = lim_{n→∞} ∫_{S×T} f_n(x, y) λ(d(x, y))
= lim_{n→∞} ∫_T { ∫_S f_n(x, y) μ(dx, y) } ν(dy) = ∫_T { ∫_S f(x, y) μ(dx, y) } ν(dy).
This shows that f ∈ V. Therefore by Corollary 10.7, V contains all nonnegative valued σ(𝔄 × 𝔅)-measurable functions f on S × T such that f(·, y) is bounded on S for every y ∈ T. This proves 0).

1) Let f be a real valued σ(𝔄 × 𝔅)-measurable function on S × T for which ∫_{S×T} f(x, y) λ(d(x, y)) exists, that is, at least one of the integrals of f^+ and f^− with respect to λ is finite. We have

(3) ∫_{S×T} f^+(x, y) λ(d(x, y)) = ∫_T { ∫_S f^+(x, y) μ(dx, y) } ν(dy)

and

(4) ∫_{S×T} f^−(x, y) λ(d(x, y)) = ∫_T { ∫_S f^−(x, y) μ(dx, y) } ν(dy).

Since at least one of (3) and (4) is finite, the difference of the two exists in the extended real numbers. Subtracting (4) from (3), we obtain (2). If on the other hand we have ∫_T { ∫_S |f(x, y)| μ(dx, y) } ν(dy) < ∞, then applying our result in 0) to |f| we have ∫_{S×T} |f(x, y)| λ(d(x, y)) = ∫_T { ∫_S |f(x, y)| μ(dx, y) } ν(dy) < ∞. Thus ∫_{S×T} |f(x, y)| λ(d(x, y)) exists and this implies, according to what we showed above, the existence of the right side of (2) and the equality.

2) For a nonnegative extended real valued function f on S × T, let f_n = f ∧ n for n ∈ N. Then f_n ↑ f on S × T. By 0) we have

(5) ∫_{S×T} f_n(x, y) λ(d(x, y)) = ∫_T { ∫_S f_n(x, y) μ(dx, y) } ν(dy).
As we saw in the proof of Theorem 10.8, ∫_S f_n(x, y) μ(dx, y) ↑ ∫_S f(x, y) μ(dx, y) for y ∈ T. Letting n → ∞ on both sides of (5), we have (2) by the Monotone Convergence Theorem. ■

Let us remark that the existence, or even the finiteness, of the right side of (2) in Theorem 10.9 does not imply the existence of the left side. To construct an example, let (S, 𝔄) and (T, 𝔅) be copies of ([−1, 1], 𝔅_[−1,1]), let μ(·, y) be the Lebesgue measure m_L on ([−1, 1], 𝔅_[−1,1]) for every y ∈ T, let ν = m_L also, and let λ = m_L × m_L. Let s be the
sign function, that is, s(x) = 1 for x ≥ 0 and s(x) = −1 for x < 0. Define f on S × T by setting

f(x, y) = s(x)|y|^{−1}  for x ∈ S and y ∈ T − {0},
f(x, y) = 0  for x ∈ S and y = 0.

Our function f is σ(𝔄 × 𝔅)-measurable; for every y ∈ T the inner integral ∫_S f(x, y) μ(dx, y) vanishes by the odd symmetry of s, so the right side of (2) exists and equals 0, while ∫_{S×T} f^+ dλ = ∫_{S×T} f^− dλ = ∞, so that the left side of (2) does not exist.
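A numerical check of this counterexample (illustrative only; midpoint sums stand in for the Lebesgue integrals): the inner x-integral vanishes for every y ≠ 0, so the iterated integral is 0, while the truncated double integral of |f| grows without bound.

```python
import math

# Illustrative numeric check (not from the text) of the counterexample
# f(x, y) = s(x)|y|^(-1) on [-1, 1] x [-1, 1].

def s(x):
    return 1.0 if x >= 0 else -1.0

def inner_integral(y, n=1000):
    # midpoint-rule approximation of the integral of s(x)/|y| over x in [-1, 1]
    h = 2.0 / n
    return sum(s(-1 + (k + 0.5) * h) / abs(y) * h for k in range(n))

# each inner integral is (numerically) 0, so the iterated integral is 0:
print([inner_integral(y) for y in (0.5, 0.1, 0.01)])

# but the integral of |f| over {|y| > eps} equals 4*log(1/eps) -> infinity:
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, 4 * math.log(1 / eps))
```

The symmetric midpoint grid cancels the positive and negative halves exactly, mirroring the analytic computation; the logarithmic growth of the truncated |f|-integral shows the failure of λ-integrability.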
thus μ_A(E, ·) is 𝔉_t-measurable. On the other hand, if A is only an almost surely increasing process, then there exists a null set Λ_A in (Ω, 𝔉_∞, P) such that A(·, ω) is a real valued right-continuous monotone increasing function on R+ with A(0, ω) = 0 for every ω ∈ Λ_A^c. According to our convention in the definition of μ_A, we set μ_A(·, ω) = 0 for ω ∈ Λ_A. If the filtered space is augmented, then Λ_A ∈ 𝔉_t and this implies the 𝔉_t-measurability of μ_A((a, b], ·). Similarly when E = [0, b] with 0 ≤ b ≤ t and when E = ∅.

Next, let 𝔇 be the collection of all members E of 𝔅_[0,t] such that μ_A(E, ·) is 𝔉_t-measurable. If E_1, E_2 ∈ 𝔇 and E_1 ⊂ E_2, then E_2 − E_1 ∈ 𝔅_[0,t] and μ_A(E_2 − E_1, ω) = μ_A(E_2, ω) − μ_A(E_1, ω), which is an 𝔉_t-measurable function of ω ∈ Ω. If E_n ∈ 𝔇 for n ∈ N and E_n ↑ E, then E ∈ 𝔅_[0,t] and μ_A(E, ω) = lim_{n→∞} μ_A(E_n, ω), which is an 𝔉_t-measurable function of ω ∈ Ω. Thus E ∈ 𝔇, and 𝔇 is a d-class. Since 𝔇 contains the semialgebra 𝔈, it contains the σ-algebra generated by 𝔈, namely 𝔅_[0,t]. Therefore μ_A(E, ·) is 𝔉_t-measurable for every E ∈ 𝔅_[0,t]. The 𝔉_∞-measurability of the family {μ_A(·, ω) : ω ∈ Ω} is shown by the same argument as above. ■

Theorem 10.11. Let A = {A_t : t ∈ R+} be an almost surely increasing process on an augmented filtered space (Ω, 𝔉, {𝔉_t}, P) and let {μ_A(·, ω) : ω ∈ Ω} be the family of Lebesgue-Stieltjes measures on (R+, 𝔅_R+) determined by A. Let X be an adapted measurable process on the filtered space so that its sample functions are Borel measurable functions on R+. For t ∈ R+, let

∫_[0,t] X(s, ω) dA(s, ω) = ∫_[0,t] X(s, ω) μ_A(ds, ω)

for every ω ∈ Ω for which the Lebesgue-Stieltjes integral exists.
1) If every sample function of X is bounded on every finite interval in R+, then ∫_[0,t] X_s dA_s is a real valued 𝔉_t-measurable random variable on (Ω, 𝔉, P).
2) If the sample functions of X are nonnegative functions on R+, then ∫_[0,t] X_s dA_s is a nonnegative extended real valued 𝔉_t-measurable random variable on (Ω, 𝔉, P).
If we do not assume that the filtered space is augmented then, under the condition on X in 1) or 2), ∫_[0,t] X_s dA_s is 𝔉_∞-measurable.

Proof. By Lemma 10.10, Theorem 10.11 is a particular case of Theorem 10.8. ■

If A = {A_t : t ∈ R+} is an almost surely increasing process on a filtered space (Ω, 𝔉, {𝔉_t}, P) such that E(A_∞) < ∞, then there exists a null set Λ in (Ω, 𝔉_∞, P) such that A_∞(ω) < ∞ for ω ∈ Λ^c, and thus the Lebesgue-Stieltjes measure μ_A(·, ω) on (R+, 𝔅_R+) is a finite measure for ω ∈ Λ^c. Let us adopt the convention that μ_A(·, ω) = 0 for ω ∈ Λ. Then
by Lemma 10.10, the family of Lebesgue-Stieltjes measures {μ_A(·, ω) : ω ∈ Ω} on the measurable space (R+, 𝔅_R+) determined by the sample functions of A is an 𝔉_∞-measurable family of finite measures. Thus the following theorem is a particular case of Theorem 10.8.

Theorem 10.12. Let A = {A_t : t ∈ R+} be an almost surely increasing process with E(A_∞) < ∞ on a filtered space (Ω, 𝔉, {𝔉_t}, P). Let X be an adapted measurable process on the filtered space so that its sample functions are real valued Borel measurable functions on R+. Let

∫_{R+} X(s, ω) dA(s, ω) = ∫_{R+} X(s, ω) μ_A(ds, ω)

for every ω ∈ Ω for which the Lebesgue-Stieltjes integral exists.
1) If X is a bounded process, then ∫_{R+} X_s dA_s is a real valued 𝔉_∞-measurable random variable on (Ω, 𝔉, P).
2) If X is a nonnegative process, then ∫_{R+} X_s dA_s is a nonnegative extended real valued 𝔉_∞-measurable random variable on (Ω, 𝔉, P). ■

Theorem 10.13. Let A = {A_t : t ∈ R+} be an almost surely increasing process and M = {M_t : t ∈ R+} be a right-continuous bounded martingale on a filtered space (Ω, 𝔉, {𝔉_t}, P). Then for every t ∈ R+, we have

(1) E(M_t A_t) = E[ ∫_[0,t] M(s, ·) dA(s, ·) ].

If E(A_∞) < ∞, then

(2) E(M_∞ A_∞) = E[ ∫_{R+} M(s, ·) dA(s, ·) ].

Proof. Let |M(t, ω)| ≤ K for (t, ω) ∈ R+ × Ω for some K > 0. Let t ∈ R+ be fixed. With n ∈ N fixed, let s_k = k2^{−n}t for k = 0, …, 2^n and let I_k = (s_{k−1}, s_k] for k = 1, …, 2^n. Let M^(n) be defined by
(3) M^(n)(0) = M(0)  and  M^(n)(s) = M(s_k)  for s ∈ I_k, k = 1, …, 2^n.

Since M(·, ω) is a right-continuous function on R+ for every ω ∈ Ω, we have

(4) lim_{n→∞} M^(n)(s, ω) = M(s, ω)  for (s, ω) ∈ [0, t] × Ω.
Now

E(M_t A_t) = E[ M(t) Σ_{k=1}^{2^n} {A(s_k) − A(s_{k−1})} ]
= Σ_{k=1}^{2^n} E[ M(t){A(s_k) − A(s_{k−1})} ]
= Σ_{k=1}^{2^n} E[ E[ M(t){A(s_k) − A(s_{k−1})} | 𝔉_{s_k} ] ]
= Σ_{k=1}^{2^n} E[ M(s_k){A(s_k) − A(s_{k−1})} ],

where the last equality is by the fact that A(s_k) − A(s_{k−1}) is 𝔉_{s_k}-measurable and then by the fact that M is a martingale. Then

(5) E(M_t A_t) = E[ Σ_{k=1}^{2^n} M(s_k){A(s_k) − A(s_{k−1})} ] = E[ ∫_[0,t] M^(n)(s) dA(s) ],

since for each ω ∈ Ω, M^(n)(·, ω) is a step function assuming the value M(s_k, ω) on the interval I_k, and since μ_A(I_k, ω) = A(s_k, ω) − A(s_{k−1}, ω) by the right-continuity of A(·, ω) for ω ∈ Λ_A^c, where Λ_A is an exceptional null set of the almost surely increasing process A.

Now since |M(t, ω)| ≤ K for (t, ω) ∈ R+ × Ω and μ_A([0, t], ω) < ∞, and since (4) holds, we have by the Bounded Convergence Theorem

(6) lim_{n→∞} ∫_[0,t] M^(n)(s, ω) dA(s, ω) = ∫_[0,t] M(s, ω) dA(s, ω)  for ω ∈ Ω.

Since

|∫_[0,t] M^(n)(s, ω) dA(s, ω)| ≤ K A(t, ω)  for ω ∈ Λ_A^c,

with A(t) ∈ L_1(Ω, 𝔉, P), and since (6) holds, by applying the Dominated Convergence Theorem to the right side of (5) we obtain (1).

Since M is a bounded martingale it is L_1-bounded, and thus by Theorem 7.9 there exists an extended real valued 𝔉_∞-measurable random variable M_∞ such that lim_{t→∞} M_t = M_∞ a.e. on (Ω, 𝔉_∞, P).
Suppose E(A_∞) < ∞. Since lim_{t→∞} M_t A_t = M_∞ A_∞ a.e. on (Ω, 𝔉_∞, P) and since |M_t A_t| ≤ K A_∞ a.e. on (Ω, 𝔉_∞, P), we have by the Dominated Convergence Theorem

(7) lim_{t→∞} E[M_t A_t] = E[M_∞ A_∞].

On the other hand,

lim_{t→∞} ∫_[0,t] M(s, ·) dA(s, ·) = lim_{t→∞} ∫_{R+} 1_[0,t](s) M(s, ·) dA(s, ·) = ∫_{R+} M(s, ·) dA(s, ·).

We also have |∫_[0,t] M(s, ·) dA(s, ·)| ≤ K A_∞ a.e. on (Ω, 𝔉_∞, P). Since K A_∞ is integrable, by the Dominated Convergence Theorem we have

(8) lim_{t→∞} E[ ∫_[0,t] M(s, ·) dA(s, ·) ] = E[ ∫_{R+} M(s, ·) dA(s, ·) ].

Letting t → ∞ on both sides of (1), we obtain (2) by (7) and (8). ■
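A discrete analogue of the identity E(M_t A_t) = E[∫_[0,t] M(s) dA(s)] of Theorem 10.13 can be verified exactly by enumerating coin-flip paths. This construction is an invented illustration, not from the text: M is the fair-coin partial-sum martingale, and A is the adapted increasing process that jumps by 1 whenever a flip is +1.

```python
from itertools import product
from fractions import Fraction

# Illustrative discrete analogue (not from the text) of
# E(M_t A_t) = E[ sum_k M(k) * dA(k) ]:
# M_k = xi_1 + ... + xi_k for fair coin flips xi_k = +/-1, and A jumps by 1
# at time k exactly when xi_k = +1 (so the jump of A at k is F_k-measurable).

def both_sides(t):
    """Exact expectations of M_t * A_t and of sum_k M_k * dA_k."""
    paths = list(product([1, -1], repeat=t))
    lhs = Fraction(0)
    rhs = Fraction(0)
    for xi in paths:
        M = [sum(xi[:k + 1]) for k in range(t)]     # M at times 1, ..., t
        dA = [1 if x == 1 else 0 for x in xi]       # jumps of A at 1, ..., t
        lhs += M[-1] * sum(dA)                      # M_t * A_t on this path
        rhs += sum(M[k] * dA[k] for k in range(t))  # the pathwise sum
    n = len(paths)
    return lhs / n, rhs / n

for t in (1, 2, 3, 4):
    lhs, rhs = both_sides(t)
    print(t, lhs, rhs)   # the two expectations agree for every horizon t
```

The agreement is exactly the conditioning step in the proof above: E[M_t ΔA_k] = E[E(M_t | 𝔉_k) ΔA_k] = E[M_k ΔA_k], since ΔA_k is 𝔉_k-measurable and M is a martingale.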
[III] Doob-Meyer Decomposition

Let M = {M_t : t ∈ R+} be a right-continuous bounded martingale on a filtered space (Ω, 𝔉, {𝔉_t}, P). According to Theorem 9.2 the right-continuity of M implies that there exists a null set Λ in (Ω, 𝔉_∞, P) such that for every ω ∈ Λ^c the sample function M(·, ω) has finite left limits everywhere on R+ and has at most countably many points of discontinuity.

Definition 10.14. For a right-continuous bounded martingale M = {M_t : t ∈ R+} on a filtered space (Ω, 𝔉, {𝔉_t}, P) with the exceptional null set Λ in (Ω, 𝔉_∞, P) as described above, let M_− = {M_−(t) : t ∈ R+} be defined by

M_−(t, ω) = lim_{s↑t} M(s, ω)  for t ∈ (0, ∞) and ω ∈ Λ^c,
M_−(0, ω) = M(0, ω)  for ω ∈ Λ^c,
M_−(t, ω) = 0  for t ∈ R+ and ω ∈ Λ.

Thus defined, M_−(t) is 𝔉_∞-measurable for every t ∈ R+. Let K > 0 be a bound for the right-continuous bounded martingale M. Then |M(t, ω)| ≤ K for all (t, ω) ∈ R+ × Ω, so that |M_−(t)| ≤ K and therefore M_−(t) is integrable for every t ∈ R+.

Lemma 10.15. Let M = {M_t : t ∈ R+} be a right-continuous bounded martingale on an augmented filtered space (Ω, 𝔉, {𝔉_t}, P). Then M_− = {M_−(t) : t ∈ R+} is a left-continuous bounded martingale on the filtered space.
Proof. As we have noted, M_−(t) is an 𝔉_∞-measurable integrable random variable on the probability space (Ω, 𝔉, P) for every t ∈ R+. Let Λ be the null set in (Ω, 𝔉_∞, P) in Definition 10.14. Since the filtered space is augmented, Λ ∈ 𝔉_t for every t ∈ R+. According to Definition 10.14, M_−(t) is the limit of the 𝔉_t-measurable functions M(s) for s < t on the 𝔉_t-measurable set Λ^c and is thus 𝔉_t-measurable on Λ^c. Also M_−(t) is trivially 𝔉_t-measurable on the 𝔉_t-measurable set Λ. Thus M_−(t) is 𝔉_t-measurable on Ω.

To show that M_− is a martingale, let 0 ≤ t′ < s < t″. Since M is a martingale, we have

(1) E(M_s | 𝔉_{t′}) = M_{t′}  a.e. on (Ω, 𝔉_{t′}, P).

Since M_−(t″) = lim_{s↑t″} M(s) a.e. on (Ω, 𝔉_{t″}, P) and since |M(s)| ≤ K for s ∈ R+, where K > 0 is a bound for M, we have by the Conditional Bounded Convergence Theorem

(2) lim_{s↑t″} E[M(s) | 𝔉_{t′}] = E[ lim_{s↑t″} M(s) | 𝔉_{t′} ] = E[M_−(t″) | 𝔉_{t′}]  a.e. on (Ω, 𝔉_{t′}, P).

From (1) and (2) we have

E[M_−(t″) | 𝔉_{t′}] = M_{t′}  a.e. on (Ω, 𝔉_{t′}, P).
This shows that M_ is a martingale. By Definition 10.14, M_ is bounded by K and is left-continuous. ■ Lemma 10.16. LetM = {Mt : t G R+} be a right-continuous bounded martingale and A = {At : i G R+} be an almost surely increasing process on a filtered space (Q, g, {5(}> P)For i > 0, let £„,* = k2~ntfor k = 0 , . . . , 2" and rc G N. 7nen fnere existe a null set A in (Q, goo, P), independent oft, such that for every LO G A C we nave 2"
(1)
lim VM(i n ,A_,,w){A(t n ,t,w)-A(i t l i i_i,a;)}= /
M_(s,u;)dA( s,w).
A/50
(2)
lim E £M(in,*_,){A(tn,*) - A(tB,fc_,)} = E / n—►oo
U[o,t]
M.(s)dA(s)
J
§10. INCREASING PROCESSES
Proof. For every n ∈ N, let us define M⁽ⁿ⁾ = {M⁽ⁿ⁾(s) : s ∈ [0, t]} by setting

(3)  M⁽ⁿ⁾(0) = M(0),  M⁽ⁿ⁾(s) = M(t_{n,k-1}) for s ∈ (t_{n,k-1}, t_{n,k}], k = 1, ..., 2ⁿ.

Every sample function of M⁽ⁿ⁾ is a left-continuous step function on [0, t]. Now the right-continuity of the martingale M implies, according to Theorem 9.2, the existence of a null set Λ_M in (Ω, F_∞, P) such that M(·, ω) has a finite left limit everywhere on R₊ for ω ∈ Λ_Mᶜ. Since A is an almost surely increasing process, there exists a null set Λ_A in (Ω, F_∞, P) such that A(·, ω) is a real valued right-continuous monotone increasing function on R₊ with A(0, ω) = 0 for ω ∈ Λ_Aᶜ. Let Λ = Λ_M ∪ Λ_A. By Definition 10.14, for ω ∈ Λ_Mᶜ we have

(4)  lim_{n→∞} M⁽ⁿ⁾(s, ω) = M₋(s, ω).

For ω ∈ Λᶜ, we have

(5)  Σ_{k=1}^{2ⁿ} M(t_{n,k-1}, ω){A(t_{n,k}, ω) − A(t_{n,k-1}, ω)} = ∫_{[0,t]} M⁽ⁿ⁾(s, ω) dA(s, ω).

By (4) and the Bounded Convergence Theorem, we have

(6)  lim_{n→∞} ∫_{[0,t]} M⁽ⁿ⁾(s, ω) dA(s, ω) = ∫_{[0,t]} M₋(s, ω) dA(s, ω).

Combining (5) and (6) we have (1). To prove (2), note that if we let K > 0 be a bound for the bounded martingale M, then

|Σ_{k=1}^{2ⁿ} M(t_{n,k-1}, ω){A(t_{n,k}, ω) − A(t_{n,k-1}, ω)}| ≤ K A(t, ω).

Since A(t) is integrable and since (5) and (6) hold, we can apply the Dominated Convergence Theorem to obtain

lim_{n→∞} E[Σ_{k=1}^{2ⁿ} M(t_{n,k-1}){A(t_{n,k}) − A(t_{n,k-1})}] = E[lim_{n→∞} ∫_{[0,t]} M⁽ⁿ⁾(s) dA(s)] = E[∫_{[0,t]} M₋(s) dA(s)].

This proves (2). ■
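The dyadic left-endpoint approximation in (1) can be checked numerically on a deterministic sample path. The following sketch (the function name and the particular choices of the integrand and of the increasing function are ours, purely for illustration) computes the left-endpoint sums over the partitions t_{n,k} = k2⁻ⁿt and compares them with the known value of the corresponding Stieltjes integral.

```python
import numpy as np

def left_riemann_stieltjes(M, A, t, n):
    """Left-endpoint Riemann-Stieltjes sum over the dyadic partition
    t_{n,k} = k 2^{-n} t, as in (1) of Lemma 10.16."""
    tk = np.linspace(0.0, t, 2**n + 1)          # t_{n,0}, ..., t_{n,2^n}
    return float(np.sum(M(tk[:-1]) * np.diff(A(tk))))

# Deterministic sample path (hypothetical choices for illustration):
# integrand M(s) = s and increasing function A(s) = s^2, for which the
# Stieltjes integral over [0, 1] is int_0^1 s * 2s ds = 2/3.
approximations = [left_riemann_stieltjes(lambda s: s, lambda s: s**2, 1.0, n)
                  for n in (2, 4, 8, 12)]
```

Because the integrand is increasing, the left-endpoint sums increase monotonically toward the integral as the partition is refined.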
Definition 10.17. An almost surely increasing process A = {A_t : t ∈ R₊} on a filtered space (Ω, F, {F_t}, P) is called natural if for every right-continuous bounded martingale M = {M_t : t ∈ R₊} on the filtered space we have

E[∫_{[0,t]} M(s) dA(s)] = E[∫_{[0,t]} M₋(s) dA(s)]  for t ∈ R₊,

or equivalently

E[M(t)A(t)] = E[∫_{[0,t]} M₋(s) dA(s)]  for t ∈ R₊.

Let us observe that the sample functions of M and M₋ are bounded Borel measurable functions on R₊, so that the two Lebesgue-Stieltjes integrals ∫_{[0,t]} M(s, ω) dA(s, ω) and ∫_{[0,t]} M₋(s, ω) dA(s, ω) exist as real numbers for every ω ∈ Ω. The equivalence of the two conditions comes from the identity E[∫_{[0,t]} M(s) dA(s)] = E[M(t)A(t)], which can be verified by approximating the integral with right-endpoint sums as in Lemma 10.16 and using the martingale property of M.
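In discrete time the counterpart of a natural increasing process is a predictable one, that is, one whose increment over (k−1, k] is F_{k-1}-measurable; the identity E[M_n A_n] = E[Σ_{k≤n} M_{k-1}(A_k − A_{k-1})] then follows from summation by parts and the martingale property of M. A small Monte Carlo sketch of this discrete analogue (all names and model choices here are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_paths = 20, 100_000

# Martingale M: a symmetric +/-1 random walk started at 0.
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
M = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)

# Predictable increasing process A: its increment over (k-1, k] is the
# positive part of M_{k-1}, hence F_{k-1}-measurable and nonnegative --
# the discrete analogue of a natural increasing process.
increments = np.maximum(M[:, :-1], 0.0)
A = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)

lhs = float(np.mean(M[:, -1] * A[:, -1]))                             # E[M_n A_n]
rhs = float(np.mean(np.sum(M[:, :-1] * np.diff(A, axis=1), axis=1)))  # E[sum M_{k-1} dA_k]
```

Here both expectations equal Σ_{j=0}^{19} E[M_j² 1{M_j > 0}] = Σ_{j=0}^{19} j/2 = 95, and the two Monte Carlo estimates agree up to sampling error.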
Theorem 10.18. Let A = {A_t : t ∈ R₊} be an almost surely increasing process on a filtered space (Ω, F, {F_t}, P). If A is almost surely continuous, then it is natural.

Proof. Let M = {M_t : t ∈ R₊} be a right-continuous bounded martingale on the filtered space. By Theorem 9.2 there exists a null set Λ_M in (Ω, F_∞, P) such that M(·, ω) has a finite left limit everywhere and has at most countably many points of discontinuity on R₊ for every ω ∈ Λ_Mᶜ. Since A is an almost surely continuous almost surely increasing process, there exists a null set Λ_A in (Ω, F_∞, P) such that A(·, ω) is a continuous monotone increasing function on R₊ with A(0, ω) = 0 for ω ∈ Λ_Aᶜ. Let Λ = Λ_A ∪ Λ_M. Let ω ∈ Λᶜ and let {t_n : n ∈ N} be the points of discontinuity of M(·, ω) on R₊. Then M(·, ω) − M₋(·, ω) vanishes on R₊ except at the t_n, where it assumes the value M(t_n, ω) − lim_{s↑t_n} M(s, ω). Thus for any t ∈ (0, ∞) we have

∫_{[0,t]} {M(s, ω) − M₋(s, ω)} dA(s, ω) = Σ_{n∈N: t_n≤t} {M(t_n, ω) − lim_{s↑t_n} M(s, ω)} μ_A({t_n}, ω) = 0,
since the continuity of A(·, ω) implies μ_A({s}, ω) = 0 for every s ∈ (0, ∞). Since the last equality holds for every ω ∈ Λᶜ and since P(Λ) = 0, we have

E[∫_{[0,t]} {M(s) − M₋(s)} dA(s)] = 0.

This shows that A is natural. ■

Lemma 10.19. 1) An almost surely increasing process A = {A_t : t ∈ R₊} on a filtered space (Ω, F, {F_t : t ∈ R₊}, P) is a submartingale of class (DL). If E[A_∞] < ∞, then A is of class (D).
2) If M is a right-continuous martingale and A is an almost surely increasing process on a right-continuous filtered space (Ω, F, {F_t}, P), then M + A is a submartingale of class (DL).
3) If M is a uniformly integrable right-continuous martingale and A is an almost surely increasing process with E[A_∞] < ∞ on a right-continuous filtered space (Ω, F, {F_t}, P), then M + A is a submartingale of class (D).

Proof. 1) Since A is an almost surely increasing process, there exists a null set Λ_A in (Ω, F_∞, P) such that A(·, ω) is a real valued right-continuous monotone increasing function on R₊ with A(0, ω) = 0 for every ω ∈ Λ_Aᶜ. Thus for every pair s, t ∈ R₊ such that s < t we have A_t ≥ A_s a.e. on (Ω, F_∞, P) and therefore E[A_t | F_s] ≥ E[A_s | F_s] = A_s a.e. on (Ω, F_s, P). This shows that A is a submartingale. To show that A is of class (DL), let a ∈ R₊ and consider the collection S_a of all stopping times on the filtered space which are bounded by a. Then for every T ∈ S_a we have 0 ≤ A_T(ω) ≤ A_a(ω) for ω ∈ Λ_Aᶜ, that is, 0 ≤ A_T ≤ A_a a.e. on (Ω, F_∞, P). Then since A_a is integrable, {A_T : T ∈ S_a} is uniformly integrable by 4) of Proposition 4.11. Since this holds for every a ∈ R₊, A is of class (DL).

Suppose E[A_∞] < ∞, where A_∞ = lim_{t→∞} A_t on Λ_Aᶜ. Let S be the collection of all finite stopping times on the filtered space. Since 0 ≤ A_T(ω) ≤ A_∞(ω) for ω ∈ Λ_Aᶜ, that is, 0 ≤ A_T ≤ A_∞ a.e. on (Ω, F_∞, P), for every T ∈ S, the integrability of A_∞ implies the uniform integrability of {A_T : T ∈ S}. Therefore A is of class (D).
2) A right-continuous martingale M is always of class (DL) according to Theorem 8.22. An almost surely increasing process A is a submartingale of class (DL) according to 1). Then M + A is a submartingale. The fact that M + A is of class (DL) follows from the fact that (M + A)_T = M_T + A_T for every T ∈ S and from Proposition 4.9.
3) A uniformly integrable right-continuous martingale M is always of class (D) by Theorem 8.22. An almost surely increasing process A with E[A_∞] < ∞ is of class (D) by
1). Then M + A is of class (D) by the same argument as in 2). ■

Lemma 10.20. Let A = {A_t : t ∈ R₊} and A' = {A'_t : t ∈ R₊} be two almost surely increasing processes on a filtered space (Ω, F, {F_t}, P) such that

(1)  E[A_t − A_s | F_s] = E[A'_t − A'_s | F_s]  a.e. on (Ω, F_s, P),

for every pair s, t ∈ R₊ such that s < t. Let Y be a left-continuous bounded adapted process on the filtered space. Then for every t ∈ R₊ we have

(2)  E[∫_{[0,t]} Y(s) dA(s)] = E[∫_{[0,t]} Y(s) dA'(s)].

Proof. Let t > 0 be fixed. For every n ∈ N, let t_{n,k} = k2⁻ⁿt for k = 0, ..., 2ⁿ and let I_{n,k} = (t_{n,k-1}, t_{n,k}] for k = 1, ..., 2ⁿ. Define a left-continuous bounded adapted process Y⁽ⁿ⁾ on [0, t] × Ω by setting

(3)  Y⁽ⁿ⁾(s, ω) = Y(t_{n,k-1}, ω)  for s ∈ I_{n,k}, k = 1, ..., 2ⁿ, and ω ∈ Ω,
     Y⁽ⁿ⁾(0, ω) = Y(0, ω)  for ω ∈ Ω.

By the same argument as in Lemma 2.8 we have

(4)  lim_{n→∞} Y⁽ⁿ⁾(s, ω) = Y(s, ω)  for (s, ω) ∈ [0, t] × Ω.

Since A is an almost surely increasing process there exists a null set Λ_A such that A(·, ω) is a real valued right-continuous monotone increasing function on R₊ with A(0, ω) = 0 for ω ∈ Λ_Aᶜ. Then for every ω ∈ Λ_Aᶜ we have by (3)

(5)  ∫_{[0,t]} Y⁽ⁿ⁾(s, ω) dA(s, ω) = Σ_{k=1}^{2ⁿ} Y(t_{n,k-1}, ω){A(t_{n,k}, ω) − A(t_{n,k-1}, ω)}.

For ω ∈ Λ_Aᶜ, (4) and the Bounded Convergence Theorem imply

(6)  lim_{n→∞} ∫_{[0,t]} Y⁽ⁿ⁾(s, ω) dA(s, ω) = ∫_{[0,t]} Y(s, ω) dA(s, ω).

If K > 0 is a bound for Y, then for ω ∈ Λ_Aᶜ we have

(7)  |∫_{[0,t]} Y⁽ⁿ⁾(s, ω) dA(s, ω)| ≤ K A(t, ω).
Now from (5) we have

E[∫_{[0,t]} Y⁽ⁿ⁾(s) dA(s)] = Σ_{k=1}^{2ⁿ} E[Y(t_{n,k-1}){A(t_{n,k}) − A(t_{n,k-1})}]
= Σ_{k=1}^{2ⁿ} E[Y(t_{n,k-1}) E[A(t_{n,k}) − A(t_{n,k-1}) | F_{t_{n,k-1}}]].

We have a similar expression for E[∫_{[0,t]} Y⁽ⁿ⁾(s) dA'(s)]. But (1) implies that

E[A(t_{n,k}) − A(t_{n,k-1}) | F_{t_{n,k-1}}] = E[A'(t_{n,k}) − A'(t_{n,k-1}) | F_{t_{n,k-1}}]  a.e. on (Ω, F_{t_{n,k-1}}, P).

Therefore we have

(8)  E[∫_{[0,t]} Y⁽ⁿ⁾(s) dA(s)] = E[∫_{[0,t]} Y⁽ⁿ⁾(s) dA'(s)].

Since A_t is integrable, (6), (7) and the Dominated Convergence Theorem imply

lim_{n→∞} E[∫_{[0,t]} Y⁽ⁿ⁾(s) dA(s)] = E[∫_{[0,t]} Y(s) dA(s)].

Similarly for E[∫_{[0,t]} Y⁽ⁿ⁾(s) dA'(s)]. Thus letting n → ∞ on both sides of (8) we have (2). ■
Theorem 10.21. Let X = {X_t : t ∈ R₊} be a right-continuous submartingale on an augmented right-continuous filtered space (Ω, F, {F_t}, P). If A = {A_t : t ∈ R₊} and A' = {A'_t : t ∈ R₊} are two natural almost surely increasing processes on the filtered space such that both M = X − A and M' = X − A' are martingales, then A and A' are equivalent, that is, there exists a null set Λ in (Ω, F, P) such that A(·, ω) = A'(·, ω) for ω ∈ Λᶜ.

Proof. Since A − A' = (X − M) − (X − M') = M' − M, A − A' is a martingale. Thus for every pair s, t ∈ R₊ such that s < t, we have E[(A − A')_t | F_s] = (A − A')_s a.e. on (Ω, F_s, P), that is, E[A_t − A_s | F_s] = E[A'_t − A'_s | F_s] a.e. on (Ω, F_s, P). Thus A and A' satisfy condition (1) in Lemma 10.20.

Let ξ be a bounded random variable on (Ω, F, P). According to Corollary 9.10, for each t ∈ R₊ we can select a version Z_t of E[ξ | F_t] so that Z = {Z_t : t ∈ R₊} is a right-continuous martingale. Then Z₋ defined as in Definition 10.14 is a left-continuous bounded martingale. Thus by Lemma 10.20 for every t ∈ R₊ we have

E[∫_{[0,t]} Z₋(s) dA(s)] = E[∫_{[0,t]} Z₋(s) dA'(s)].
Now since A is natural, we have E[∫_{[0,t]} Z₋(s) dA(s)] = E[Z_t A_t] and similarly E[∫_{[0,t]} Z₋(s) dA'(s)] = E[Z_t A'_t]. Therefore we have E[Z_t A_t] = E[Z_t A'_t]. Recalling that Z_t = E[ξ | F_t], we have E[E[ξ | F_t] A_t] = E[E[ξ | F_t] A'_t]. Then by the F_t-measurability of A_t and A'_t we have E[E[ξ A_t | F_t]] = E[E[ξ A'_t | F_t]], that is, E[ξ A_t] = E[ξ A'_t]. In particular, by letting ξ = 1_E for E ∈ F_t, we have ∫_E A_t dP = ∫_E A'_t dP for every E ∈ F_t. This shows that A_t = A'_t a.e. on (Ω, F_t, P). Then since A and A' are almost surely right-continuous processes, there exists a null set Λ in (Ω, F, P) such that A(·, ω) = A'(·, ω) for ω ∈ Λᶜ according to Theorem 2.3. ■

Lemma 10.22. Let X = {X_t : t ∈ R₊} be a right-continuous submartingale of class (DL) on an augmented right-continuous filtered space (Ω, F, {F_t}, P). For a > 0 define a stochastic process Y = {Y_t : t ∈ [0, a]} by setting

(1)  Y_t = X_t − E[X_a | F_t]  for t ∈ [0, a].

For each n ∈ N, let t_{n,k} = k2⁻ⁿa for k = 0, ..., 2ⁿ, let T_n = {t_{n,k} : k = 0, ..., 2ⁿ} and define a discrete time stochastic process A⁽ⁿ⁾ = {A_t⁽ⁿ⁾ : t ∈ T_n} by setting

(2)  A⁽ⁿ⁾(t_{n,k}) = Σ_{j=1}^{k} E[Y(t_{n,j}) − Y(t_{n,j-1}) | F_{t_{n,j-1}}]  for k = 1, ..., 2ⁿ,
     A⁽ⁿ⁾(t_{n,0}) = 0.

Then {A_a⁽ⁿ⁾ : n ∈ N} is uniformly integrable.
Proof. The stochastic process {E[X_a | F_t] : t ∈ [0, a]}, where E[X_a | F_t] is a real valued version, is a uniformly integrable martingale with respect to the filtration {F_t : t ∈ [0, a]}. Since the filtered space (Ω, F, {F_t}, P) is augmented and right-continuous, by Corollary 9.10 we can select a version of E[X_a | F_t] for each t ∈ [0, a] so that the martingale {E[X_a | F_t] : t ∈ [0, a]} is right-continuous. The right-continuity implies that this martingale is of class (DL) by Theorem 8.22. The process Y defined by (1) is then a right-continuous submartingale. Since X is a submartingale, we have E[X_a | F_t] ≥ X_t a.e. on (Ω, F_t, P) for every t ∈ [0, a] and thus Y is nonpositive. In particular, Y_a = X_a − E[X_a | F_a] = 0 a.e. on (Ω, F_a, P). Since X is of class (DL) and the martingale {E[X_a | F_t] : t ∈ [0, a]} is also of class (DL), Y is of class (DL). Since Y is a submartingale, each summand in (2) is a.e. nonnegative and thus A⁽ⁿ⁾ is a discrete time almost surely increasing process with respect to the filtration {F_t : t ∈ T_n}.
To prove the uniform integrability of {A_a⁽ⁿ⁾ : n ∈ N}, let us show first that there is a version of E[A_a⁽ⁿ⁾ | F_{t_{n,k}}] such that

(3)  E[A_a⁽ⁿ⁾ | F_{t_{n,k}}] = A⁽ⁿ⁾(t_{n,k}) − Y(t_{n,k})  for k = 0, ..., 2ⁿ.

Now by (2) we have

E[A_a⁽ⁿ⁾ | F_{t_{n,k}}] = E[Σ_{j=1}^{2ⁿ} E[Y(t_{n,j}) − Y(t_{n,j-1}) | F_{t_{n,j-1}}] | F_{t_{n,k}}]
= Σ_{j=1}^{k} E[Y(t_{n,j}) − Y(t_{n,j-1}) | F_{t_{n,j-1}}] E[1 | F_{t_{n,k}}] + Σ_{j=k+1}^{2ⁿ} E[Y(t_{n,j}) − Y(t_{n,j-1}) | F_{t_{n,k}}]
= A⁽ⁿ⁾(t_{n,k}) E[1 | F_{t_{n,k}}] + E[Y(a) − Y(t_{n,k}) | F_{t_{n,k}}]
= A⁽ⁿ⁾(t_{n,k}) E[1 | F_{t_{n,k}}] − Y(t_{n,k})  a.e. on (Ω, F_{t_{n,k}}, P).

By selecting the constant 1 from among all the versions of E[1 | F_{t_{n,k}}], we have (3).

With c > 0 and the convention that t_{n,-1} = 0, let us define a T_n-valued function S_c⁽ⁿ⁾ on Ω by setting

(4)  S_c⁽ⁿ⁾ = inf{t_{n,k-1} ∈ T_n : A⁽ⁿ⁾(t_{n,k}) > c},  and S_c⁽ⁿ⁾ = a if the set above is ∅.

To show that S_c⁽ⁿ⁾ is a stopping time with respect to the filtration {F_t : t ∈ T_n}, note that for k = 0, ..., 2ⁿ − 1 we have, since A⁽ⁿ⁾(t_{n,j}) is F_{t_{n,j-1}}-measurable,

{S_c⁽ⁿ⁾ = t_{n,k}} = {A⁽ⁿ⁾(t_{n,1}) ≤ c, ..., A⁽ⁿ⁾(t_{n,k}) ≤ c, A⁽ⁿ⁾(t_{n,k+1}) > c} ∈ F_{t_{n,k}},

and finally, {S_c⁽ⁿ⁾ = t_{n,2ⁿ}} = {A⁽ⁿ⁾(t_{n,0}) ≤ c, ..., A⁽ⁿ⁾(t_{n,2ⁿ}) ≤ c} ∈ F_{t_{n,2ⁿ}}. Now if we apply the Optional Sampling Theorem in the form of (2) of Theorem 8.16 to the martingale {E[A_a⁽ⁿ⁾ | F_{t_{n,k}}] : k = 0, ..., 2ⁿ} given by (3) with respect to the filtration {F_{t_{n,k}} : k = 0, ..., 2ⁿ}, then

(5)  E[A_a⁽ⁿ⁾ | F_{S_c⁽ⁿ⁾}] = A⁽ⁿ⁾(S_c⁽ⁿ⁾) − Y(S_c⁽ⁿ⁾).
Now for any ω ∈ Ω, if A_a⁽ⁿ⁾(ω) > c then by (4) we have S_c⁽ⁿ⁾(ω) ≤ t_{n,2ⁿ-1} < a, and conversely if S_c⁽ⁿ⁾(ω) < a then A⁽ⁿ⁾(t_{n,k}, ω) > c for some k, so that A_a⁽ⁿ⁾(ω) > c by the almost sure monotonicity of A⁽ⁿ⁾. Thus

(6)  A_a⁽ⁿ⁾(ω) > c ⟺ S_c⁽ⁿ⁾(ω) < a  for a.e. ω in (Ω, F_a, P).

Therefore

(7)  ∫_{{A_a⁽ⁿ⁾ > c}} A_a⁽ⁿ⁾ dP = ∫_{{S_c⁽ⁿ⁾ < a}} A_a⁽ⁿ⁾ dP = ∫_{{S_c⁽ⁿ⁾ < a}} E[A_a⁽ⁿ⁾ | F_{S_c⁽ⁿ⁾}] dP, since {S_c⁽ⁿ⁾ < a} ∈ F_{S_c⁽ⁿ⁾},
     = ∫_{{S_c⁽ⁿ⁾ < a}} A⁽ⁿ⁾(S_c⁽ⁿ⁾) dP − ∫_{{S_c⁽ⁿ⁾ < a}} Y(S_c⁽ⁿ⁾) dP, by (5),
     ≤ c P{S_c⁽ⁿ⁾ < a} − ∫_{{S_c⁽ⁿ⁾ < a}} Y(S_c⁽ⁿ⁾) dP,

since A⁽ⁿ⁾(S_c⁽ⁿ⁾) ≤ c by (4). Similarly, applying (5) to the stopping time S_{c/2}⁽ⁿ⁾ we obtain

(8)  −∫_{{S_{c/2}⁽ⁿ⁾ < a}} Y(S_{c/2}⁽ⁿ⁾) dP = ∫_{{S_{c/2}⁽ⁿ⁾ < a}} {A_a⁽ⁿ⁾ − A⁽ⁿ⁾(S_{c/2}⁽ⁿ⁾)} dP ≥ ∫_{{S_c⁽ⁿ⁾ < a}} {A_a⁽ⁿ⁾ − A⁽ⁿ⁾(S_{c/2}⁽ⁿ⁾)} dP,

since the last integrand is nonnegative and {S_c⁽ⁿ⁾ < a} ⊂ {S_{c/2}⁽ⁿ⁾ < a}. On {S_c⁽ⁿ⁾ < a} we have A_a⁽ⁿ⁾ > c by (6) and A⁽ⁿ⁾(S_{c/2}⁽ⁿ⁾) ≤ c/2 by (4), so that

(9)  −∫_{{S_{c/2}⁽ⁿ⁾ < a}} Y(S_{c/2}⁽ⁿ⁾) dP ≥ (c/2) P{S_c⁽ⁿ⁾ < a}.

Using (9) in (7) we have

(10)  ∫_{{A_a⁽ⁿ⁾ > c}} A_a⁽ⁿ⁾ dP ≤ −2 ∫_{{S_{c/2}⁽ⁿ⁾ < a}} Y(S_{c/2}⁽ⁿ⁾) dP − ∫_{{S_c⁽ⁿ⁾ < a}} Y(S_c⁽ⁿ⁾) dP.
To show the uniform integrability of {A_a⁽ⁿ⁾ : n ∈ N} we must show

(11)  lim_{c→∞} sup_{n∈N} ∫_{{A_a⁽ⁿ⁾ > c}} A_a⁽ⁿ⁾ dP = 0.

If we show that

(12)  lim_{c→∞} sup_{n∈N} {−∫_{{S_c⁽ⁿ⁾ < a}} Y(S_c⁽ⁿ⁾) dP} = 0,

then we also have

(13)  lim_{c→∞} sup_{n∈N} {−2 ∫_{{S_{c/2}⁽ⁿ⁾ < a}} Y(S_{c/2}⁽ⁿ⁾) dP} = 0.

Applying (12) and (13) to (10) we obtain (11). Let us prove (12). Recall that Y is a submartingale of class (DL) with respect to the filtration {F_t : t ∈ [0, a]}. Since S_c⁽ⁿ⁾, n ∈ N, are stopping times with respect to the filtration {F_t : t ∈ T_n} and since {F_t : t ∈ T_n} ⊂ {F_t : t ∈ [0, a]}, they are also stopping times with respect to the filtration {F_t : t ∈ [0, a]}. Since these stopping times are all bounded by a, the fact that Y is of class (DL) implies that {Y(S_c⁽ⁿ⁾) : n ∈ N} is uniformly integrable. Thus by Theorem 4.7 for every ε > 0 there exists some δ > 0 such that

(14)  −∫_E Y(S_c⁽ⁿ⁾) dP < ε  for all n ∈ N whenever E ∈ F and P(E) < δ.

Now

E[A_a⁽ⁿ⁾] ≥ ∫_{{A_a⁽ⁿ⁾ > c}} A_a⁽ⁿ⁾ dP ≥ c P{A_a⁽ⁿ⁾ > c} = c P{S_c⁽ⁿ⁾ < a}

by (6). On the other hand, from (3) and (2) we have E[A_a⁽ⁿ⁾ | F_0] = A⁽ⁿ⁾(0) − Y_0 = −Y_0, so that E[A_a⁽ⁿ⁾] = E(−Y_0). Thus we have

P{S_c⁽ⁿ⁾ < a} ≤ (1/c) E(−Y_0)  for all c > 0 and n ∈ N.

Thus there exists c_0 > 0 such that P{S_c⁽ⁿ⁾ < a} < δ for all n ∈ N when c ≥ c_0. Then by (14),

−∫_{{S_c⁽ⁿ⁾ < a}} Y(S_c⁽ⁿ⁾) dP < ε  for all n ∈ N when c ≥ c_0.

Therefore

sup_{n∈N} {−∫_{{S_c⁽ⁿ⁾ < a}} Y(S_c⁽ⁿ⁾) dP} ≤ ε  for c ≥ c_0.
Thus

limsup_{c→∞} sup_{n∈N} {−∫_{{S_c⁽ⁿ⁾ < a}} Y(S_c⁽ⁿ⁾) dP} ≤ ε.

Since this holds for an arbitrary ε > 0, we have (12). ■

Theorem 10.23. (Doob-Meyer Decomposition) Let X = {X_t : t ∈ R₊} be a right-continuous submartingale of class (DL) on an augmented right-continuous filtered space. Then there exist a right-continuous martingale M = {M_t : t ∈ R₊} and a natural almost surely increasing process A = {A_t : t ∈ R₊} on the filtered space such that X(·, ω) = M(·, ω) + A(·, ω) for ω ∈ Λᶜ, where Λ is a null set in (Ω, F, P). Moreover such a decomposition is unique in the sense that if M and M' are right-continuous martingales and A and A' are natural almost surely increasing processes such that X = M + A = M' + A', then M and M' are equivalent and A and A' are equivalent, that is, there exists a null set Λ in (Ω, F, P) such that M(·, ω) = M'(·, ω) and A(·, ω) = A'(·, ω) for ω ∈ Λᶜ.

Proof. 1) Uniqueness of decomposition. Suppose M and M' are two right-continuous martingales and A and A' are two natural almost surely increasing processes such that X = M + A and X = M' + A'. Then X − A and X − A' are both martingales, so that by Theorem 10.21 there exists a null set Λ in (Ω, F, P) such that A(·, ω) = A'(·, ω) for ω ∈ Λᶜ. Then since M − M' = A' − A we have (M − M')(·, ω) = (A' − A)(·, ω) = 0, that is, M(·, ω) = M'(·, ω) for ω ∈ Λᶜ.

2) Existence of decomposition on a finite interval. Let us show that for an arbitrary a > 0, there exist a right-continuous martingale M = {M_t : t ∈ [0, a]}, a natural almost surely increasing process A = {A_t : t ∈ [0, a]} and a null set Λ in (Ω, F_a, P) such that X(t, ω) = M(t, ω) + A(t, ω) for t ∈ [0, a] and ω ∈ Λᶜ. With a > 0 fixed, let Y = {Y_t : t ∈ [0, a]}, T_n and A⁽ⁿ⁾ = {A_t⁽ⁿ⁾ : t ∈ T_n} for n ∈ N be as defined by (1) and (2) of Lemma 10.22. Then {A_a⁽ⁿ⁾ : n ∈ N} is uniformly integrable, so that by Theorem 4.21 there exist A_a* ∈ L₁(Ω, F_a, P) and a subsequence {n_ℓ} of {n} such that

(1)  lim_{ℓ→∞} ∫_Ω A_a^{(n_ℓ)} ξ dP = ∫_Ω A_a* ξ dP  for every ξ ∈ L_∞(Ω, F_a, P).

According to Corollary 9.10, for each t ∈ R₊ we can select a version of E[A_a* | F_t] so that the martingale {E[A_a* | F_t] : t ∈ [0, a]} with respect to the filtration {F_t : t ∈ [0, a]} is right-continuous. Define a stochastic process A = {A_t : t ∈ [0, a]} by setting

(2)  A_t = Y_t + E[A_a* | F_t]  for t ∈ [0, a].
Since Y too is right-continuous, A is right-continuous. If we define a stochastic process M on [0, a] by setting M = X − A, then by (1) of Lemma 10.22 and (2) above we have

(3)  M_t = E[X_a − A_a* | F_t]  for t ∈ [0, a],

so that M is a martingale with respect to the filtration {F_t : t ∈ [0, a]}. The right-continuity of X and A implies that of M. It remains to show that A is a natural almost surely increasing process with respect to the filtration {F_t : t ∈ [0, a]}.

Let us show that A defined by (2) is an almost surely increasing process. For n ∈ N recall the discrete time almost surely increasing process A⁽ⁿ⁾ = {A_t⁽ⁿ⁾ : t ∈ T_n} with respect to the filtration {F_t : t ∈ T_n} defined by (2) of Lemma 10.22. According to (3) of Lemma 10.22, at any t_{n,k} ∈ T_n we have

(4)  A⁽ⁿ⁾(t_{n,k}) = Y(t_{n,k}) + E[A_a⁽ⁿ⁾ | F_{t_{n,k}}],

and thus

(5)  Y(t_{n,k}) − Y(t_{n,k-1}) + E[A_a⁽ⁿ⁾ | F_{t_{n,k}}] − E[A_a⁽ⁿ⁾ | F_{t_{n,k-1}}] = A⁽ⁿ⁾(t_{n,k}) − A⁽ⁿ⁾(t_{n,k-1}) ≥ 0  a.e. on (Ω, F_a, P).

According to (2) and (3) of Theorem 4.23, there exists a subsequence {n_ℓ} of {n} such that for every ξ ∈ L_∞(Ω, F_a, P) we have

(6)  lim_{ℓ→∞} ∫_Ω E[A_a^{(n_ℓ)} | F_{t_{n,k}}] ξ dP = ∫_Ω E[A_a* | F_{t_{n,k}}] ξ dP

and

(7)  lim_{ℓ→∞} ∫_Ω E[A_a^{(n_ℓ)} | F_{t_{n,k-1}}] ξ dP = ∫_Ω E[A_a* | F_{t_{n,k-1}}] ξ dP.

Multiplying both sides of (5) by ξ ∈ L_∞(Ω, F_a, P), ξ ≥ 0, and integrating, we have

(8)  ∫_Ω {Y(t_{n,k}) − Y(t_{n,k-1}) + E[A_a⁽ⁿ⁾ | F_{t_{n,k}}] − E[A_a⁽ⁿ⁾ | F_{t_{n,k-1}}]} ξ dP ≥ 0.

For n_ℓ ≥ n we have T_n ⊂ T_{n_ℓ}, so that t_{n,k-1}, t_{n,k} ∈ T_{n_ℓ} and (8) holds with A_a⁽ⁿ⁾ replaced by A_a^{(n_ℓ)}. Letting ℓ → ∞ in (8) and using (6) and (7) we have

∫_Ω {Y(t_{n,k}) − Y(t_{n,k-1}) + E[A_a* | F_{t_{n,k}}] − E[A_a* | F_{t_{n,k-1}}]} ξ dP ≥ 0.
By (2), the last inequality reduces to

∫_Ω {A(t_{n,k}) − A(t_{n,k-1})} ξ dP ≥ 0.

In particular, with ξ = 1_E where E ∈ F_a, we have

∫_E {A(t_{n,k}) − A(t_{n,k-1})} dP ≥ 0  for every E ∈ F_a.

Since both A(t_{n,k}) and A(t_{n,k-1}) are F_a-measurable, the last inequality implies that A(t_{n,k}) ≥ A(t_{n,k-1}) a.e. on (Ω, F_a, P). Then since ∪_{n∈N} T_n is a countable set, there exists a null set Λ_∞ in (Ω, F_a, P) such that for every ω ∈ Λ_∞ᶜ we have A(s, ω) ≤ A(t, ω) for all s, t ∈ ∪_{n∈N} T_n such that s ≤ t. Then by the right-continuity of A we have, for every ω ∈ Λ_∞ᶜ, A(s, ω) ≤ A(t, ω) for all s, t ∈ [0, a] such that s ≤ t.

Next, let us show that A_0 = 0 a.e. on (Ω, F_0, P). By (2), A_0 = Y_0 + E[A_a* | F_0]. Now for E ∈ F_0 we have

∫_Ω 1_E E[A_a* | F_0] dP = ∫_Ω E[1_E A_a* | F_0] dP = ∫_Ω 1_E A_a* dP
= lim_{ℓ→∞} ∫_Ω 1_E A_a^{(n_ℓ)} dP, by (1),
= lim_{ℓ→∞} Σ_{j=1}^{2^{n_ℓ}} ∫_Ω 1_E E[Y(t_{n_ℓ,j}) − Y(t_{n_ℓ,j-1}) | F_{t_{n_ℓ,j-1}}] dP, by (2) of Lemma 10.22,
= lim_{ℓ→∞} Σ_{j=1}^{2^{n_ℓ}} ∫_Ω E[1_E {Y(t_{n_ℓ,j}) − Y(t_{n_ℓ,j-1})} | F_{t_{n_ℓ,j-1}}] dP
= lim_{ℓ→∞} ∫_Ω 1_E {Y(a) − Y(0)} dP = −∫_Ω 1_E Y(0) dP,

since Y(a) = 0 a.e. on (Ω, F_a, P). Thus we have ∫_E {Y(0) + E[A_a* | F_0]} dP = 0 for every E ∈ F_0. This implies that Y(0) + E[A_a* | F_0] = 0 a.e. on (Ω, F_0, P). Thus there exists a null set Λ_0 in (Ω, F_0, P) such that A(0, ω) = 0 for ω ∈ Λ_0ᶜ. Let Λ = Λ_0 ∪ Λ_∞. Then for every ω ∈ Λᶜ, A(·, ω) is real valued, right-continuous and monotone increasing with A(0, ω) = 0. This shows that A is an almost surely increasing process on [0, a].
To show that the almost surely increasing process A = {A_t : t ∈ [0, a]} is natural, we show that for every right-continuous bounded martingale M = {M_t : t ∈ [0, a]} we have

(9)  E[M_t A_t] = E[∫_{[0,t]} M₋(s) dA(s)]  for t ∈ [0, a].

Since a is arbitrary, it suffices to prove (9) for the case t = a. Note first that

E[M_a A_a⁽ⁿ⁾] = Σ_{j=1}^{2ⁿ} E[M(a){A⁽ⁿ⁾(t_{n,j}) − A⁽ⁿ⁾(t_{n,j-1})}]
= Σ_{j=1}^{2ⁿ} E[E[M(a){A⁽ⁿ⁾(t_{n,j}) − A⁽ⁿ⁾(t_{n,j-1})} | F_{t_{n,j-1}}]]
= Σ_{j=1}^{2ⁿ} E[{A⁽ⁿ⁾(t_{n,j}) − A⁽ⁿ⁾(t_{n,j-1})} E[M(a) | F_{t_{n,j-1}}]]
= Σ_{j=1}^{2ⁿ} E[M(t_{n,j-1}){A⁽ⁿ⁾(t_{n,j}) − A⁽ⁿ⁾(t_{n,j-1})}]
= Σ_{j=1}^{2ⁿ} E[M(t_{n,j-1}) E[Y(t_{n,j}) − Y(t_{n,j-1}) | F_{t_{n,j-1}}]]
= Σ_{j=1}^{2ⁿ} E[M(t_{n,j-1}){Y(t_{n,j}) − Y(t_{n,j-1})}]
= Σ_{j=1}^{2ⁿ} E[M(t_{n,j-1}){A(t_{n,j}) − A(t_{n,j-1})}] − Σ_{j=1}^{2ⁿ} E[M(t_{n,j-1}){E[A_a* | F_{t_{n,j}}] − E[A_a* | F_{t_{n,j-1}}]}],

where the third equality is by the predictability of the process A⁽ⁿ⁾ (each A⁽ⁿ⁾(t_{n,j}) is F_{t_{n,j-1}}-measurable), the fifth equality is by (2) of Lemma 10.22 and the last equality is by (2). Note then that for the summands in the last sum we have

E[M(t_{n,j-1}){E[A_a* | F_{t_{n,j}}] − E[A_a* | F_{t_{n,j-1}}]}] = E[E[M(t_{n,j-1}) A_a* | F_{t_{n,j}}]] − E[E[M(t_{n,j-1}) A_a* | F_{t_{n,j-1}}]] = 0.

Thus we have

(10)  E[M_a A_a⁽ⁿ⁾] = Σ_{j=1}^{2ⁿ} E[M(t_{n,j-1}){A(t_{n,j}) − A(t_{n,j-1})}].
Since M_a ∈ L_∞(Ω, F_a, P), we have by (1), and then by (2) for the definition of A_a together with the fact that Y_a = 0 a.e. on (Ω, F_a, P) by (1) of Lemma 10.22,

(11)  lim_{ℓ→∞} E[M_a A_a^{(n_ℓ)}] = E[M_a A_a*] = E[M_a E[A_a* | F_a]] = E[M_a A_a].

On the other hand, according to Lemma 10.16 we have

(12)  lim_{n→∞} Σ_{j=1}^{2ⁿ} E[M(t_{n,j-1}){A(t_{n,j}) − A(t_{n,j-1})}] = E[∫_{[0,a]} M₋(s) dA(s)].

Using (11) and (12) in (10) we have (9). This completes the proof that the decomposition exists on an arbitrary finite interval [0, a].

3) Existence of decomposition on R₊. According to our results in 2), for every n ∈ N there exist a right-continuous martingale M⁽ⁿ⁾ = {M_t⁽ⁿ⁾ : t ∈ [0, n]}, a natural almost surely increasing process A⁽ⁿ⁾ = {A_t⁽ⁿ⁾ : t ∈ [0, n]} and a null set Λ⁽ⁿ⁾ such that X(t, ω) = M⁽ⁿ⁾(t, ω) + A⁽ⁿ⁾(t, ω) for t ∈ [0, n] and ω ∈ (Λ⁽ⁿ⁾)ᶜ. Then by the same argument as in Theorem 10.21, there exists a null set Λ_n in (Ω, F, P) such that M⁽ⁿ⁾ = M⁽ⁿ⁺¹⁾ and consequently A⁽ⁿ⁾ = A⁽ⁿ⁺¹⁾ on [0, n] outside Λ_n. Thus M and A are well defined on R₊ outside the null set ∪_{n∈N} Λ_n, and X = M + A there. ■

Corollary 10.24. Let X = {X_t : t ∈ R₊} be a right-continuous submartingale of class (D) on an augmented right-continuous filtered space (Ω, F, {F_t}, P). Then in the Doob-Meyer decomposition X = M + A, the right-continuous martingale M is uniformly integrable and the natural almost surely increasing process A is uniformly integrable.

Proof. If X is a submartingale of class (D), then X is uniformly integrable since every deterministic time is a finite stopping time. Then by Theorem 8.1, there exists an integrable F_∞-measurable random variable X_∞ to which X converges both a.e. on (Ω, F_∞, P) and in L₁. In particular, we have lim_{t→∞} E[X_t] = E[X_∞].
If X = M + A is a Doob-Meyer decomposition of X, then since M is a martingale we have E[M_t] = E[M_0] ∈ R for every t ∈ R₊. By the Monotone Convergence Theorem we have

E[A_∞] = lim_{t→∞} E[A_t] = lim_{t→∞} {E[X_t] − E[M_t]} = E[X_∞] − E[M_0] ∈ R.

Thus E[A_∞] < ∞. Then by Lemma 10.2, A is uniformly integrable. The uniform integrability of both X and A implies that of M = X − A. ■
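Theorem 10.23 is the continuous-time analogue of the elementary Doob decomposition of a discrete-time submartingale, in which the compensator is built exactly as in (2) of Lemma 10.22: A_k = Σ_{j≤k} E[X_j − X_{j-1} | F_{j-1}]. For the submartingale X_k = S_k² with S a symmetric ±1 random walk, the conditional increments are identically 1, so A_k = k and M_k = S_k² − k is a martingale. A Monte Carlo sketch of this discrete case (model choices are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_paths = 30, 100_000

steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
S = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)

X = S**2                       # submartingale: E[X_k | F_{k-1}] = X_{k-1} + 1
A = np.arange(n_steps + 1)     # compensator A_k = sum_j E[X_j - X_{j-1} | F_{j-1}] = k
M = X - A                      # candidate martingale part of the decomposition

# If M is a martingale, E[M_k] = E[M_0] = 0 for every k.
mean_M = M.mean(axis=0)
```

The sample means of M_k stay near 0 for all k, consistent with M being a martingale, while the increasing part A here is deterministic, hence trivially predictable (natural).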
[IV] Regular Submartingales

Definition 10.25. A submartingale X = {X_t : t ∈ R₊} on a filtered space (Ω, F, {F_t}, P) is said to be regular if for every a > 0 and every increasing sequence {T_n : n ∈ N} in the collection S_a of all stopping times bounded by a converging to some T ∈ S_a, we have lim_{n→∞} E[X_{T_n}] = E[X_T].

Remark 10.26. From the definition above it follows immediately that a submartingale {X_t : t ∈ R₊} on a filtered space (Ω, F, {F_t}, P) is regular if and only if for any stopping times T_n, n ∈ N, and T such that T_n ↑ T on Ω we have lim_{n→∞} E[X_{T_n ∧ t}] = E[X_{T ∧ t}] for every t ∈ R₊.

Observation 10.27. Every right-continuous martingale X = {X_t : t ∈ R₊} on a right-continuous filtered space (Ω, F, {F_t}, P) is regular.

Proof. Let a > 0. For any S, T ∈ S_a such that S ≤ T on Ω we have by Theorem 8.10 (Optional Sampling with Bounded Stopping Times), E[X_T | F_S] = X_S a.e. on (Ω, F_S, P) and thus E[X_T] = E[X_S]. Then for an increasing sequence {T_n : n ∈ N} and T ∈ S_a such that T_n ≤ T for n ∈ N we have E[X_{T_n}] = E[X_T] for every n ∈ N. ■

Observation 10.28. An almost surely continuous and nonnegative right-continuous submartingale X = {X_t : t ∈ R₊} on a right-continuous filtered space (Ω, F, {F_t}, P) is always regular.

Proof. Let a > 0. For any stopping times T_n, n ∈ N, and T in S_a such that T_n ↑ T on Ω, we have lim_{n→∞} X_{T_n} = X_T a.e. on (Ω, F, P) since almost every sample function is continuous. Thus by Fatou's Lemma we have

(1)  E[X_T] = E[lim_{n→∞} X_{T_n}] ≤ liminf_{n→∞} E[X_{T_n}].
By Theorem 8.10, {X_{T_n}, n ∈ N, X_T} is a submartingale with respect to the filtration {F_{T_n}, n ∈ N, F_T}, so that E[X_{T_n}] ≤ E[X_T] for n ∈ N and E[X_{T_n}] ↑ as n → ∞. Thus lim_{n→∞} E[X_{T_n}] ≤ E[X_T]. On the other hand, (1) implies that E[X_T] ≤ lim_{n→∞} E[X_{T_n}]. Therefore we have lim_{n→∞} E[X_{T_n}] = E[X_T], proving the regularity of X. ■

Observation 10.29. An almost surely continuous almost surely increasing process A = {A_t : t ∈ R₊} on a filtered space (Ω, F, {F_t}, P) is always regular.

Proof. If A is an almost surely continuous almost surely increasing process, then there exists a null set Λ_A in (Ω, F_∞, P) such that A(·, ω) is a real valued continuous monotone increasing function on R₊ with A(0, ω) = 0 for ω ∈ Λ_Aᶜ. If {T_n : n ∈ N} is an increasing sequence in S_a converging to T ∈ S_a, then we have A_{T_n} ↑ A_T as n → ∞ on Λ_Aᶜ by the continuity of A(·, ω). Thus by the Monotone Convergence Theorem we have lim_{n→∞} E[A_{T_n}] = E[A_T]. ■
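A concrete way to see what regularity rules out: the deterministic increasing process A_t = 1_{[1,∞)}(t) jumps at t = 1, and for the stopping times T_n = 1 − 1/n ↑ T = 1 we get E[A_{T_n}] = 0 for all n while E[A_T] = 1, so this process is not regular. If the unit jump instead occurs at an independent, continuously distributed random time τ, then E[A_t] = P(τ ≤ t) is continuous in t, so E[A_{T_n}] → E[A_T] along such deterministic sequences and no violation of regularity appears there. A quick numerical check of both facts (the names, the Uniform(0, 2) jump-time model, and the particular sequence T_n are ours, for illustration only; this checks only one sequence of times, not regularity in full):

```python
import numpy as np

T_n = 1.0 - 1.0 / np.arange(1, 50)   # times T_n increasing to T = 1

# Deterministic unit jump at t = 1: A_{T_n} = 0 for every n, but A_T = 1,
# so E[A_{T_n}] does not converge to E[A_T].
jump_at_one = lambda t: float(t >= 1.0)
assert all(jump_at_one(t) == 0.0 for t in T_n) and jump_at_one(1.0) == 1.0

# Jump at a Uniform(0, 2) time tau: E[A_t] = P(tau <= t) is continuous in t,
# so the expectations E[A_{T_n}] do converge to E[A_1].
rng = np.random.default_rng(2)
tau = rng.uniform(0.0, 2.0, size=500_000)
E_A_Tn = np.mean(tau[:, None] <= T_n[None, :], axis=0)  # Monte Carlo E[A_{T_n}]
E_A_T = float(np.mean(tau <= 1.0))
```

The Monte Carlo estimates E_A_Tn increase toward E_A_T ≈ 1/2, in contrast with the deterministic-jump case where the limit of the expectations falls short of E[A_T] by the jump size.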
An almost surely continuous almost surely increasing process is natural by Theorem 10.18 and regular as we have just shown. We shall show that if an almost surely increasing process is both natural and regular, then it is almost surely continuous, provided that the filtered space is an augmented right-continuous filtered space on a complete probability space.

Remark 10.30. If an almost surely increasing process A = {A_t : t ∈ R₊} on a filtered space (Ω, F, {F_t}, P) is regular, then for every increasing sequence {T_n : n ∈ N} in S_a converging to T ∈ S_a we have lim_{n→∞} A_{T_n} = A_T a.e. on (Ω, F_∞, P).

Proof. Since A is an almost surely increasing process there exists a null set Λ_A in (Ω, F_∞, P) such that A(·, ω) is a real valued right-continuous monotone increasing function on R₊ with A(0, ω) = 0 for ω ∈ Λ_Aᶜ. Thus A_{T_n} ↑ as n → ∞ and A_{T_n} ≤ A_T on Λ_Aᶜ. By the Monotone Convergence Theorem we have lim_{n→∞} E[A_{T_n}] = E[lim_{n→∞} A_{T_n}] and by the regularity of A we have lim_{n→∞} E[A_{T_n}] = E[A_T]. Therefore we have E[A_T − lim_{n→∞} A_{T_n}] = 0. Since A_T − lim_{n→∞} A_{T_n} ≥ 0 on Λ_Aᶜ, the last equality implies that the integrand is equal to 0, that is, lim_{n→∞} A_{T_n} = A_T a.e. on (Ω, F_∞, P). ■

Lemma 10.31. Let A = {A_t : t ∈ R₊} be an almost surely increasing process on a filtered space (Ω, F, {F_t}, P).
1) If for every a > 0 and every increasing sequence {T_n : n ∈ N} in S_a converging to
T ∈ S_a we have lim_{n→∞} ||A_{T_n} − A_T||₁ = 0, then A is regular.
2) Conversely, if A is regular then for T_n and T as in 1), A_{T_n} converges to A_T both a.e. on (Ω, F_∞, P) and in L₁ as n → ∞.
3) If A is regular then for every c > 0 the process A ∧ c = {(A ∧ c)(t) : t ∈ R₊}, where (A ∧ c)(t) = A(t) ∧ c, is also regular.

Proof. 1) is immediate since lim_{n→∞} ||A_{T_n} − A_T||₁ = 0 implies lim_{n→∞} E[A_{T_n}] = E[A_T].
2) The regularity of A implies that A_{T_n} converges to A_T a.e. on (Ω, F_∞, P) as n → ∞ by Remark 10.30. Also, since 0 ≤ A_{T_n} ≤ A_T a.e. on (Ω, F_∞, P), the convergence lim_{n→∞} E[A_{T_n}] = E[A_T] is equivalent to lim_{n→∞} E[|A_T − A_{T_n}|] = 0, that is, A_{T_n} converges to A_T in L₁ as n → ∞.
3) If A is regular then with T_n and T as in 1) we have lim_{n→∞} A_{T_n} = A_T a.e. on (Ω, F_∞, P) as we saw in 2). Then lim_{n→∞} A_{T_n} ∧ c = A_T ∧ c, that is, lim_{n→∞} (A ∧ c)_{T_n} = (A ∧ c)_T, a.e. on (Ω, F_∞, P). According to 2), A_{T_n} converges to A_T in L₁ and thus {A_{T_n} : n ∈ N} is uniformly integrable by Theorem 4.16. This implies that {A_{T_n} ∧ c : n ∈ N} is uniformly integrable by 4) of Proposition 4.11. Therefore by Theorem 4.16, A_{T_n} ∧ c converges to A_T ∧ c in L₁. Then by 1), A ∧ c is regular. ■

Lemma 10.32. Let A = {A_t : t ∈ R₊} be a natural and regular almost surely increasing process on an augmented right-continuous filtered space on a complete probability space (Ω, F, {F_t}, P). For a, c > 0, define a stochastic process A ∧ c = {(A ∧ c)(t) : t ∈ [0, a]} where (A ∧ c)(t) = A(t) ∧ c. Let t_{n,k} = k2⁻ⁿa for k = 0, ..., 2ⁿ, I_{n,k} = [t_{n,k-1}, t_{n,k}) for k = 1, ..., 2ⁿ − 1, I_{n,2ⁿ} = [t_{n,2ⁿ-1}, a], and n ∈ N. For n ∈ N, let

(1)  ϑ_n(t) = t_{n,k}  for t ∈ I_{n,k}, k = 1, ..., 2ⁿ.

For n ∈ N, define a stochastic process A⁽ⁿ⁾ = {A⁽ⁿ⁾(t) : t ∈ [0, a]} by

(2)  A⁽ⁿ⁾(t) = E[(A ∧ c)(ϑ_n(t)) | F_t]  for t ∈ [0, a],

where we choose a version of the conditional expectation at each t ∈ [0, a] so that A⁽ⁿ⁾ is a right-continuous martingale on each I_{n,k}. Then there exist a subsequence {n_ℓ} of {n} and a null set L_a in (Ω, F_a, P) such that

(3)  lim_{ℓ→∞} sup_{t∈[0,a]} |A^{(n_ℓ)}(t, ω) − (A ∧ c)(t, ω)| = 0  for ω ∈ L_aᶜ.
Proof. Note that by (1) and (2), we have

(4)  A⁽ⁿ⁾(t) = E[(A ∧ c)(t_{n,k}) | F_t]  for t ∈ I_{n,k}, k = 1, ..., 2ⁿ.

Thus A⁽ⁿ⁾ is a martingale on each I_{n,k}. Let us show that there exists a null set L in (Ω, F_a, P) such that

(5)  A⁽ⁿ⁾(t, ω) ↓ as n → ∞ for all t ∈ [0, a] when ω ∈ Lᶜ,

and

(6)  A⁽ⁿ⁾(t, ω) ≥ (A ∧ c)(t, ω) for all n ∈ N and all t ∈ [0, a] when ω ∈ Lᶜ.

Since A is an almost surely increasing process, there exists a null set Λ_A in (Ω, F_∞, P) such that A(·, ω), and then (A ∧ c)(·, ω) also, are real valued right-continuous monotone increasing functions on [0, a] vanishing at t = 0 when ω ∈ Λ_Aᶜ. Since ϑ_n(t) ↓ t as n → ∞ for every t ∈ [0, a], we have (A ∧ c)(ϑ_{n+1}(t)) ≤ (A ∧ c)(ϑ_n(t)) on Λ_Aᶜ. Then there exists a null set M_{t,n} in (Ω, F_t, P) such that

E[(A ∧ c)(ϑ_{n+1}(t)) | F_t] ≤ E[(A ∧ c)(ϑ_n(t)) | F_t]  on M_{t,n}ᶜ.

Let M_t = ∪_{n∈N} M_{t,n}. Then E[(A ∧ c)(ϑ_n(t)) | F_t] ↓ on M_tᶜ as n → ∞. Let {r_m : m ∈ N} be the collection of all the rationals in [0, a]. For the null set M = ∪_{m∈N} M_{r_m} we have

A⁽ⁿ⁾(r_m, ω) ↓ as n → ∞ for all m ∈ N when ω ∈ Mᶜ.

Then by the right-continuity of A⁽ⁿ⁾ on [0, a] we have (5) for ω ∈ Mᶜ. Since ϑ_n(t) ≥ t and since A ∧ c is increasing on Λ_Aᶜ, we have (A ∧ c)(ϑ_n(t)) ≥ (A ∧ c)(t) for t ∈ [0, a] on Λ_Aᶜ. Thus A⁽ⁿ⁾(t) = E[(A ∧ c)(ϑ_n(t)) | F_t] ≥ (A ∧ c)(t) a.e. on (Ω, F_t, P). Therefore there exists a null set N_{n,r_m} in (Ω, F_a, P) such that A⁽ⁿ⁾(r_m) ≥ (A ∧ c)(r_m) on N_{n,r_m}ᶜ. Then for the null set N_n = ∪_{m∈N} N_{n,r_m} in (Ω, F_a, P) we have

A⁽ⁿ⁾(r_m) ≥ (A ∧ c)(r_m)  for all m ∈ N on N_nᶜ.
By the right-continuity of A⁽ⁿ⁾ on [0, a] and that of A ∧ c on Λ_Aᶜ we have

A⁽ⁿ⁾(t) ≥ (A ∧ c)(t)  for all t ∈ [0, a] on (N_n ∪ Λ_A)ᶜ.

Let N = ∪_{n∈N}(N_n ∪ Λ_A). Then (6) holds on Nᶜ. Finally, the null set L = M ∪ N in (Ω, F_a, P) satisfies both (5) and (6).

For ε > 0, let

(7)  T_{n,ε}(ω) = inf{t ∈ [0, a] : A⁽ⁿ⁾(t, ω) − (A ∧ c)(t, ω) > ε},  and T_{n,ε}(ω) = a if the set above is ∅.

Since T_{n,ε} is defined as the first passage time of the open set (ε, ∞) by the right-continuous adapted process A⁽ⁿ⁾ − (A ∧ c), it is a stopping time. Note that

(8)  T_{n,ε}(ω) = a ⟺ A⁽ⁿ⁾(t, ω) − (A ∧ c)(t, ω) ≤ ε  for all t ∈ [0, a].

By (5) we have T_{n,ε}(ω) ↑ as n → ∞ for ω ∈ Lᶜ. Let T_ε be defined by

(9)  T_ε(ω) = lim_{n→∞} T_{n,ε}(ω) for ω ∈ Lᶜ,  and T_ε(ω) = a for ω ∈ L.

Since our filtered space is an augmented right-continuous filtered space on a complete probability space, T_ε defined by (9) is a stopping time by Theorem 3.19. Clearly T_ε ∈ S_a. Now since ϑ_n(t) ≥ t, we have

(10)  ϑ_n(T_{n,ε}) ≥ T_{n,ε}  on Ω.

Also, since T_{n,ε} is F_{T_{n,ε}}/B_{R₊}-measurable and ϑ_n is B_{R₊}/B_{R₊}-measurable, the composite mapping ϑ_n(T_{n,ε}) is F_{T_{n,ε}}/B_{R₊}-measurable. This fact and (10) imply that ϑ_n(T_{n,ε}) is a stopping time by Theorem 3.6. Therefore ϑ_n(T_{n,ε}) ∈ S_a. By (10) and by the fact that ϑ_n is an increasing function on [0, a] and T_{n,ε} ≤ T_ε, we have T_{n,ε} ≤ ϑ_n(T_{n,ε}) ≤ ϑ_n(T_ε). Since T_{n,ε} ↑ T_ε as n → ∞ on Lᶜ and since ϑ_n(t) ↓ t implies ϑ_n(T_ε) ↓ T_ε as n → ∞, we have

(11)  ϑ_n(T_{n,ε}) → T_ε  as n → ∞ on Lᶜ.
Now since A^(n) is a martingale on each interval (t_{n,k−1}, t_{n,k}], we have

(12)  E[A^(n)(t_{n,k}) | F_{T_{n,ε} ∧ t_{n,k}}] = A^(n)(T_{n,ε} ∧ t_{n,k})  a.e. on (Ω, F_{T_{n,ε} ∧ t_{n,k}}, P),

and then recalling (4) we have

(13)  E[ E[(A ∧ c)(t_{n,k}) | F_{t_{n,k}}] | F_{T_{n,ε} ∧ t_{n,k}} ] = A^(n)(T_{n,ε} ∧ t_{n,k})  a.e. on (Ω, F_{T_{n,ε} ∧ t_{n,k}}, P).
For k = 1,...,2^n and n ∈ N, let

(14)  G_{n,k} = { T_{n,ε} ∈ (t_{n,k−1}, t_{n,k}] } = { ϑ_n(T_{n,ε}) = t_{n,k} }.

Since {G_{n,k} : k = 1,...,2^n} is a disjoint collection in F_a whose union is equal to Ω, we have

E[A^(n)(T_{n,ε})] = Σ_{k=1}^{2^n} ∫_{G_{n,k}} A^(n)(T_{n,ε}) dP
= Σ_{k=1}^{2^n} ∫_{G_{n,k}} A^(n)(T_{n,ε} ∧ t_{n,k}) dP   since T_{n,ε} ≤ t_{n,k} on G_{n,k}
= Σ_{k=1}^{2^n} ∫_{G_{n,k}} E[(A ∧ c)(t_{n,k}) | F_{T_{n,ε} ∧ t_{n,k}}] dP   by (13).

By (14) we have G_{n,k} ∈ F_{T_{n,ε}} as well as G_{n,k} ∈ F_{t_{n,k}}. Therefore according to Theorem 3.12 we have G_{n,k} ∈ F_{T_{n,ε}} ∩ F_{t_{n,k}} = F_{T_{n,ε} ∧ t_{n,k}}. Thus

(15)  E[A^(n)(T_{n,ε})] = Σ_{k=1}^{2^n} ∫_{G_{n,k}} (A ∧ c)(t_{n,k}) dP = Σ_{k=1}^{2^n} ∫_{G_{n,k}} (A ∧ c)(ϑ_n(T_{n,ε})) dP   by (14)
= E[(A ∧ c)(ϑ_n(T_{n,ε}))].

Since A ∧ c is monotone increasing on Λ_A^c, (10) implies

(16)  (A ∧ c)(ϑ_n(T_{n,ε})) − (A ∧ c)(T_{n,ε}) ≥ 0 on Λ_A^c.
Now

(17)  E[(A ∧ c)(ϑ_n(T_{n,ε})) − (A ∧ c)(T_{n,ε})] = E[A^(n)(T_{n,ε}) − (A ∧ c)(T_{n,ε})]   by (15)
≥ ∫_{{T_{n,ε} < a}} { A^(n)(T_{n,ε}) − (A ∧ c)(T_{n,ε}) } dP   by (16)
≥ ε P{T_{n,ε} < a},

by the fact that the integrand is bounded below by ε on {T_{n,ε} < a} according to (7). The regularity of A implies that of A ∧ c according to Lemma 10.31. Now T_{n,ε}, ϑ_n(T_{n,ε}) and T_ε are all in S_a, and since T_{n,ε} ↑ T_ε and ϑ_n(T_{n,ε}) → T_ε on Ω as n → ∞, we have

lim_{n→∞} E[(A ∧ c)(T_{n,ε})] = E[(A ∧ c)(T_ε)]  and  lim_{n→∞} E[(A ∧ c)(ϑ_n(T_{n,ε}))] = E[(A ∧ c)(T_ε)].

Thus letting n → ∞ in (17), we obtain

(18)  lim_{n→∞} P{T_{n,ε} < a} = 0.
By (7) we have

{T_{n,ε} < a} = { A^(n)(t) − (A ∧ c)(t) > ε for some t ∈ [0,a] } = { sup_{t∈[0,a]} { A^(n)(t) − (A ∧ c)(t) } > ε }.

By (6) we have A^(n) − (A ∧ c) ≥ 0 on [0,a] × L^c. Thus

(19)  {T_{n,ε} < a} ∩ L^c = { sup_{t∈[0,a]} | A^(n)(t) − (A ∧ c)(t) | > ε } ∩ L^c.

Let us define a function ξ_n on Ω by

(20)  ξ_n(ω) = sup_{t∈[0,a]} | A^(n)(t,ω) − (A ∧ c)(t,ω) |.

Since A^(n) − (A ∧ c) is right-continuous, we have ξ_n(ω) = sup_{m∈N} | A^(n)(r_m,ω) − (A ∧ c)(r_m,ω) |. As the supremum of countably many F_a-measurable random variables, ξ_n is an F_a-measurable random variable. Thus by (19) and (20) we have

P{|ξ_n| > ε} = P({ξ_n > ε} ∩ L^c) = P({T_{n,ε} < a} ∩ L^c) ≤ P{T_{n,ε} < a},
and therefore by (18) we have lim_{n→∞} P{|ξ_n| > ε} = 0. Since this holds for every ε > 0, ξ_n converges to 0 in probability as n → ∞. This implies that there exists a subsequence {n_ℓ} of {n} such that ξ_{n_ℓ} converges to 0 a.e. on (Ω, F_a, P) as ℓ → ∞. Therefore there exists a null set L_a in (Ω, F_a, P) such that lim_{ℓ→∞} ξ_{n_ℓ}(ω) = 0 for ω ∈ L_a^c. This proves (3). ∎

Theorem 10.33. Let A = {A_t : t ∈ R+} be an almost surely increasing process on an augmented right-continuous filtered space on a complete probability space (Ω, F, {F_t}, P). If A is both natural and regular, then A is almost surely continuous.

Proof. Suppose we show that for every a > 0 there exists a null set Λ_a in (Ω, F_a, P) such that A(·,ω) is continuous on [0,a] for ω ∈ Λ_a^c. Then for every n ∈ N there exists a null set Λ_n in (Ω, F_n, P) such that A(·,ω) is continuous on [0,n] for ω ∈ Λ_n^c, so that for the null set Λ_∞ = ∪_{n∈N} Λ_n, A(·,ω) is continuous on R+ for ω ∈ Λ_∞^c and we are done. It remains to show the existence of Λ_a.

Since A is an almost surely increasing process, there exists a null set Λ_A in (Ω, F_∞, P) such that A(·,ω) is a real valued right-continuous monotone increasing function on R+ with A(0,ω) = 0 for ω ∈ Λ_A^c. Let us define A_− = {A_−(t) : t ∈ R+} by setting

(1)  A_−(t,ω) = lim_{s↑t} A(s,ω) for t ∈ (0,∞) and ω ∈ Λ_A^c;  A_−(0,ω) = A(0,ω) for ω ∈ Λ_A^c;  and A_−(t,ω) = 0 for t ∈ R+ and ω ∈ Λ_A.

Since our filtered space is an augmented filtered space, the null set Λ_A is in F_t for every t ∈ R+. Thus A_−(t) is F_t-measurable for every t ∈ R+ and therefore A_− is an adapted process. Let a > 0 be fixed. For every c > 0, let A ∧ c = {(A ∧ c)(t) : t ∈ [0,a]} be defined by setting (A ∧ c)(t) = A(t) ∧ c, and similarly define A_− ∧ c = {(A_− ∧ c)(t) : t ∈ [0,a]} by setting (A_− ∧ c)(t) = A_−(t) ∧ c. For ω ∈ Λ_A^c, (A ∧ c)(·,ω) is real valued, right-continuous, monotone increasing and bounded by c on [0,a] with (A ∧ c)(0,ω) = 0. In particular it has at most countably many points of discontinuity, and a discontinuity occurs at t ∈ [0,a] if and only if (A_− ∧ c)(t,ω) < (A ∧ c)(t,ω). Thus for every ω ∈ Λ_A^c, the sum

(2)  Σ_{t∈[0,a]} { (A ∧ c)(t,ω) − (A_− ∧ c)(t,ω) }

is a sum of at most countably many positive terms. Let us show that

(3)  E[ Σ_{t∈[0,a]} { (A ∧ c)(t,ω) − (A_− ∧ c)(t,ω) }² ] = 0,
where the summand is defined only for ω outside the null set Λ_A. Now for ω ∈ Λ_A^c we have

Σ_{t∈[0,a]} { (A ∧ c)(t,ω) − (A_− ∧ c)(t,ω) }² = ∫_{[0,a]} { (A ∧ c)(t,ω) − (A_− ∧ c)(t,ω) } d(A ∧ c)(t) ≤ ∫_{[0,a]} { (A ∧ c)(t,ω) − (A_− ∧ c)(t,ω) } dA(t).

Thus, to show (3), it suffices to show

E[ ∫_{[0,a]} { (A ∧ c)(t,ω) − (A_− ∧ c)(t,ω) } dA(t) ] = 0,

or equivalently

(4)  E[ ∫_{[0,a]} (A ∧ c)(t) dA(t) ] = E[ ∫_{[0,a]} (A_− ∧ c)(t) dA(t) ].
Consider the process A^(n) defined in Lemma 10.32. Since A^(n) is a bounded right-continuous martingale on each of the intervals I_{n,k}, k = 1,...,2^n, into which [0,a] is decomposed, the fact that A is natural implies

(5)  E[ ∫_{[0,a]} A^(n)(t) dA(t) ] = Σ_{k=1}^{2^n} E[ ∫_{I_{n,k}} A^(n)(t) dA(t) ] = Σ_{k=1}^{2^n} E[ ∫_{I_{n,k}} A_−^(n)(t) dA(t) ] = E[ ∫_{[0,a]} A_−^(n)(t) dA(t) ].
According to Lemma 10.32, there exist a subsequence {n_ℓ} of {n} and a null set L_a in (Ω, F_a, P) such that for ω ∈ L_a^c we have

(6)  lim_{ℓ→∞} A^(n_ℓ)(t,ω) = (A ∧ c)(t,ω) uniformly for t ∈ [0,a].

Then, since A^(n_ℓ)(t) ↓ as ℓ → ∞ for all t ∈ [0,a] except on a null set as we showed in Lemma 10.32, so that ∫_{[0,a]} A^(n_ℓ)(t) dA(t) ↓ except on a null set in (Ω, F_a, P), the Monotone Convergence Theorem implies

(7)  lim_{ℓ→∞} E[ ∫_{[0,a]} A^(n_ℓ)(t) dA(t) ] = E[ lim_{ℓ→∞} ∫_{[0,a]} A^(n_ℓ)(t) dA(t) ] = E[ ∫_{[0,a]} (A ∧ c)(t) dA(t) ],
where the second equality is by the uniform convergence (6). Since (6) implies that for ω ∈ L_a^c we also have lim_{ℓ→∞} A_−^(n_ℓ)(t,ω) = (A_− ∧ c)(t,ω) uniformly for t ∈ [0,a], we have by the same argument as in (7),

(8)  lim_{ℓ→∞} E[ ∫_{[0,a]} A_−^(n_ℓ)(t) dA(t) ] = E[ ∫_{[0,a]} (A_− ∧ c)(t) dA(t) ].

Using (7) and (8) in (5) we have (4), and thus (3) holds. Now (3) implies that there exists a null set Λ_{a,c} in (Ω, F_a, P) such that for ω ∈ Λ_{a,c}^c we have

Σ_{t∈[0,a]} { (A ∧ c)(t,ω) − (A_− ∧ c)(t,ω) }² = 0.

Then since (A ∧ c)(t,ω) ≥ (A_− ∧ c)(t,ω) ≥ 0 for ω ∈ Λ_A^c, we have

Σ_{t∈[0,a]} { (A ∧ c)(t,ω) − (A_− ∧ c)(t,ω) } = 0

for ω ∈ (Λ_{a,c} ∪ Λ_A)^c. Thus (A ∧ c)(·,ω) is continuous on [0,a] for ω ∈ (Λ_{a,c} ∪ Λ_A)^c. Let Λ_a = (∪_{m∈N} Λ_{a,m}) ∪ Λ_A. Then (A ∧ m)(·,ω) is continuous on [0,a] for all m ∈ N when ω ∈ Λ_a^c. Thus by the fact that A(t,ω) ≤ A(a,ω) < ∞ for t ∈ [0,a] when ω ∈ Λ_a^c, we have the continuity of A(·,ω) on [0,a] when ω ∈ Λ_a^c. ∎
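The mechanism behind (2)–(4) — a discontinuity of an increasing function contributes its squared jump when the function is integrated against its own Lebesgue-Stieltjes measure — can be checked on a toy example. This is a sketch under assumptions (a purely discontinuous increasing step function with three invented jumps, so the measure is a finite sum of atoms):

```python
# Sketch (assumed example, not from the text): for an increasing step function
# A with jumps s_i at times u_i, the measure mu_A puts mass s_i at u_i and
# A(u_i) - A_-(u_i) = s_i, so
#     integral over [0, a] of (A - A_-) dA  =  sum_i s_i ** 2,
# which is the sum of squared jumps appearing in (2) and (3).

jumps = {0.2: 0.5, 0.5: 1.0, 0.9: 0.25}            # jump time -> jump size

A  = lambda t: sum(s for u, s in jumps.items() if u <= t)   # right-continuous A
Am = lambda t: sum(s for u, s in jumps.items() if u < t)    # left limit A_-

# mu_A is atomic here, so the integral reduces to a finite sum over jump times.
integral = sum((A(u) - Am(u)) * s for u, s in jumps.items())
sum_sq   = sum(s * s for s in jumps.values())
print(integral, sum_sq)                             # both equal 1.3125
```

In the proof, showing via (4) that the expectation of this quantity vanishes forces all jumps to be zero almost surely, which is exactly the continuity assertion.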
Theorem 10.34. Let A = {A_t : t ∈ R+} be an almost surely increasing process on an augmented right-continuous filtered space on a complete probability space (Ω, F, {F_t}, P). Then A is almost surely continuous if and only if A is both natural and regular.

Proof. If A is almost surely continuous, then A is natural by Theorem 10.18 and regular by Observation 10.29. Conversely, if A is both natural and regular, then A is almost surely continuous by Theorem 10.33. ∎

Theorem 10.35. Let X = {X_t : t ∈ R+} be a right-continuous submartingale of class (DL) on an augmented right-continuous filtered space on a complete probability space (Ω, F, {F_t}, P) and let X = M + A be a Doob-Meyer decomposition. Then A is almost surely continuous if and only if X is regular.

Proof. Since M is a right-continuous martingale on a right-continuous filtered space, it is regular by Observation 10.27. Thus X is regular if and only if A is. Since A is natural, it is regular if and only if it is almost surely continuous by Theorem 10.34. Therefore X is regular if and only if A is almost surely continuous. ∎
Chapter 3

Stochastic Integrals

§11 L2-Martingales and Quadratic Variation Processes

[I] The Space of Right-Continuous L2-Martingales

Definition 11.1. A filtered space (Ω, F, {F_t : t ∈ R+}, P) is called a standard filtered space if it satisfies the following conditions.
1°. The probability space (Ω, F, P) is a complete measure space.
2°. The filtration {F_t : t ∈ R+} is right-continuous.
3°. The filtration {F_t : t ∈ R+} is augmented.

Let us observe that if (Ω, F, {F_t}, P) is a standard filtered space, then (Ω, F_t, P) is a complete measure space for every t ∈ R+. In fact if Λ is a null set in (Ω, F_t, P) then it is a null set in (Ω, F, P), and then by the completeness of (Ω, F, P) every subset Λ₀ of Λ is in F and therefore in F_t since the filtration is augmented.

Recall that two stochastic processes X = {X_t : t ∈ R+} and Y = {Y_t : t ∈ R+} on a probability space (Ω, F, P) are said to be equivalent if there exists a null set Λ in (Ω, F, P) such that X(·,ω) = Y(·,ω) for ω ∈ Λ^c.

Observation 11.2. Let X = {X_t : t ∈ R+} be a submartingale, martingale, or supermartingale on an augmented filtered space (Ω, F, {F_t}, P) in which (Ω, F, P) is a complete measure space. If Y is an equivalent process of X, then Y is a submartingale, martingale, or supermartingale, according as X is.

Proof. It suffices to prove this for a submartingale. Let Λ be a null set in (Ω, F, P) such that X(·,ω) = Y(·,ω) for ω ∈ Λ^c. Suppose X is a submartingale. Then X is an adapted process, so that X_t is F_t-measurable for every t ∈ R+. Since the filtered space is augmented, Λ is in F_t. Since Y_t = X_t on Λ^c, Y_t is F_t-measurable on Λ^c. Since (Ω, F_t, P) is a complete measure space, Y_t is F_t-measurable on Λ. Thus Y_t is F_t-measurable on Ω. This shows that Y is an adapted process. Since Y_t = X_t on Λ^c and X_t is integrable, Y_t is integrable. This shows that Y is an L1-process. Now since X is a submartingale, for any pair s, t ∈ R+ such that s < t we have E(X_t | F_s) ≥ X_s a.e. on (Ω, F_s, P). Since X_t = Y_t a.e. on (Ω, F_t, P), we have E(X_t | F_s) = E(Y_t | F_s) a.e. on (Ω, F_s, P). But X_s = Y_s a.e. on (Ω, F_s, P). Thus E(Y_t | F_s) ≥ Y_s a.e. on (Ω, F_s, P). This shows that Y is a submartingale. ∎

Definition 11.3. Let M₂(Ω, F, {F_t}, P), or briefly M₂, be the linear space of equivalence classes of all right-continuous L2-martingales X = {X_t : t ∈ R+} with X₀ = 0 a.s. on a standard filtered space (Ω, F, {F_t}, P). Let M₂^c(Ω, F, {F_t}, P), or briefly M₂^c, be the linear subspace consisting of almost surely continuous members of M₂.

Unless otherwise stated, the filtered space (Ω, F, {F_t}, P) in M₂(Ω, F, {F_t}, P) is always a standard filtered space. Note that 0 ∈ M₂ is the equivalence class consisting of right-continuous L2-martingales X = {X_t : t ∈ R+} for which there exists a null set Λ in (Ω, F, P) such that X(·,ω) = 0 on R+ for ω ∈ Λ^c.

Recall that according to Theorem 9.2, for every right-continuous martingale X = {X_t : t ∈ R+} on a filtered space (Ω, F, {F_t}, P), there exists a null set Λ in (Ω, F_∞, P) such that the sample function X(·,ω) is bounded on every finite interval in R+, has a finite left limit everywhere on R+, and has at most countably many points of discontinuity for ω ∈ Λ^c.
Recall that according to Theorem 9.2, for every right-continuous martingale X = {Xt : t 6 R+} on a filtered space (£2,5, {&},P), there exists a null set A in (£2, S ^ P ) such that the sample function X(-, u>) is bounded on every finite interval in R + , has finite left limit everywhere on K+ and has at most countably many points of discontinuity for u £ A c . Definition 11.4. On M 2 (£2,5, {& }, P) where (£2,5, {&}, P) is arc arbitrary filtered space, let (1) (2)
| X | t = E(X 2 )'/ 2 1X1^= ^ 2 " m { | X | m A l }
forX e Ma, t € R*. / o r X GM 2 .
rngN
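Formulas (1) and (2) can be illustrated numerically. In the sketch below everything is an assumption of the example: |X|_t is estimated from a few invented sample values at integer times, and the series over m is truncated.

```python
# Sketch (not from the text): the seminorms |X|_m and the quasinorm |X|_oo of
# Definition 11.4, with E(X_t^2)^(1/2) replaced by an empirical average over
# invented sample values and the series truncated at m_max.

def norm_t(samples, t):
    """Empirical stand-in for |X|_t = E(X_t^2)^(1/2); samples maps t -> values."""
    xs = samples[t]
    return (sum(x * x for x in xs) / len(xs)) ** 0.5

def norm_inf(samples, m_max):
    """Truncated series sum_{m=1}^{m_max} 2**(-m) * min(|X|_m, 1)."""
    return sum(2.0 ** -m * min(norm_t(samples, m), 1.0) for m in range(1, m_max + 1))

samples = {1: [0.5, -0.5], 2: [2.0, -2.0], 3: [3.0, -3.0]}
print(norm_t(samples, 1))     # 0.5
print(norm_inf(samples, 3))   # 2**-1 * 0.5 + 2**-2 * 1 + 2**-3 * 1 = 0.625
```

The truncation by ∧ 1 is what keeps |·|_∞ finite (indeed ≤ 1) even though |X|_m may grow without bound as m → ∞.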
Note that since X ∈ M₂ is a martingale, X² is a submartingale and therefore |X|_t ↑ as t → ∞. Recall that a quasinorm on a linear space V over scalars K, where K is either R or C, is a function p on V such that p(x) ∈ [0,∞) for x ∈ V with p(0) = 0, p(−x) = p(x) for x ∈ V, and p(x + y) ≤ p(x) + p(y) for x, y ∈ V.

Remark 11.5. |·|_t and |·|_∞ have the following properties.
1) |·|_t is a seminorm on M₂ for every t ∈ R+.
2) For X, X^(n) ∈ M₂, n ∈ N, we have lim_{n→∞} |X^(n) − X|_∞ = 0 if and only if lim_{n→∞} |X^(n) − X|_m = 0 for every m ∈ N.
3) |·|_∞ is a quasinorm on M₂.

Proof. 1) is obvious. Note that |·|_t is not a norm, since |M|_t = 0 does not imply M = 0 ∈ M₂.

2) Since 2^m |X|_∞ ≥ |X|_m ∧ 1 for every m ∈ N, lim_{n→∞} |X^(n) − X|_∞ = 0 implies lim_{n→∞} { |X^(n) − X|_m ∧ 1 } = 0, which in turn implies lim_{n→∞} |X^(n) − X|_m = 0. Conversely, if lim_{n→∞} |X^(n) − X|_m = 0 for every m ∈ N, then

lim_{n→∞} |X^(n) − X|_∞ = lim_{n→∞} Σ_{m∈N} 2^{−m} { |X^(n) − X|_m ∧ 1 } = 0

by interpreting the sum as the Lebesgue integral of a step function on [0,1) with steps of lengths 2^{−m} and with values |X^(n) − X|_m ∧ 1 on these steps, and then applying the Bounded Convergence Theorem to pass to the limit under summation.

3) Clearly |X|_∞ ∈ [0,1] for X ∈ M₂ and |0|_∞ = 0. Conversely, if X ∈ M₂ and |X|_∞ = 0, then |X|_m = 0 for every m ∈ N and consequently |X|_t = 0 for every t ∈ R+. Then X_t = 0 a.e. on (Ω, F_t, P) for every t ∈ R+. The right-continuity of X then implies that there exists a null set Λ in (Ω, F_∞, P) such that X(·,ω) = 0 on R+ for ω ∈ Λ^c, by Theorem 2.3. Thus X = 0 ∈ M₂. Clearly |−X|_∞ = |X|_∞ for X ∈ M₂. The triangle inequality for |·|_∞ follows from the triangle inequalities for |·|_m for m ∈ N and the fact that for every a, b ≥ 0 we have (a + b) ∧ 1 ≤ (a ∧ 1) + (b ∧ 1). Thus |·|_∞ is a quasinorm on M₂. ∎

Proposition 11.6. Let X, X^(n) ∈ M₂(Ω, F, {F_t}, P) for n ∈ N, where (Ω, F, {F_t}, P) is an augmented right-continuous filtered space. If lim_{n→∞} |X^(n) − X|_∞ = 0, then X^(n) converges
to X in probability uniformly on [0,m) for every m ∈ N, that is,

(1)  P-lim_{n→∞} [ sup_{t∈[0,m)} |X_t^(n) − X_t| ] = 0.

Furthermore there exist a subsequence {n_k} of {n} and a null set Λ in (Ω, F_∞, P) such that for every ω ∈ Λ^c the sample functions X^(n_k)(·,ω) converge to X(·,ω) uniformly on every finite interval in R+.

Proof. Note that since X^(n) − X is right-continuous, the supremum over [0,m) in (1) is equal to the supremum over a countable dense subset of [0,m) and is therefore a random variable, in fact an F_m-measurable random variable, on (Ω, F, P). Now lim_{n→∞} |X^(n) − X|_∞ = 0 implies that lim_{n→∞} |X^(n) − X|_m = 0, that is, lim_{n→∞} E[ |X_m^(n) − X_m|² ]^{1/2} = 0, for every m ∈ N. Since |X^(n) − X| is a right-continuous nonnegative submartingale, Theorem 6.16 gives, for every η > 0,

η² P{ sup_{t∈[0,m)} |X_t^(n) − X_t| > η } ≤ E[ |X_m^(n) − X_m|² ],

and therefore lim_{n→∞} P{ sup_{t∈[0,m)} |X_t^(n) − X_t| > η } = 0. This proves (1).

Since convergence in probability of a sequence of random variables implies the existence of a subsequence which converges almost surely, (1) implies that for each m ∈ N there exist a subsequence {n_{m,k} : k ∈ N} of {n} and a null set Λ_m in (Ω, F_m, P) such that

lim_{k→∞} sup_{t∈[0,m)} |X^(n_{m,k})(t,ω) − X(t,ω)| = 0  for ω ∈ Λ_m^c.

For m ≥ 2, let {n_{m,k} : k ∈ N} be a subsequence of {n_{m−1,k} : k ∈ N}, and let Λ = ∪_{m∈N} Λ_m, a null set in (Ω, F_∞, P). Then for the diagonal subsequence {n_{k,k} : k ∈ N} of {n} we have

lim_{k→∞} sup_{s∈[0,t)} |X^(n_{k,k})(s,ω) − X(s,ω)| = 0  for every t ∈ R+ when ω ∈ Λ^c.

This completes the proof. ∎

To show that M₂ is a complete metric space with respect to the metric associated with the quasinorm |·|_∞, we need the following proposition.
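The maximal inequality from Theorem 6.16 invoked above, η² P{sup_{t≤m} |X_t| > η} ≤ E[X_m²], can be checked by simulation. This is a sketch under assumptions: the martingale is a simple symmetric random walk, all parameters are invented, and the run is seeded so it is reproducible.

```python
# Sketch (simulation, not from the text): empirical check of the submartingale
# maximal inequality eta^2 * P(max_{k<=m} |S_k| >= eta) <= E[S_m^2] for the
# simple symmetric random walk S, an L2-martingale with E[S_m^2] = m.
import random

random.seed(0)
m, n_paths, eta = 100, 5000, 15.0
exceed, second_moment = 0, 0.0
for _ in range(n_paths):
    s, peak = 0, 0
    for _ in range(m):
        s += random.choice((-1, 1))
        peak = max(peak, abs(s))
    exceed += peak >= eta
    second_moment += s * s

lhs = eta ** 2 * exceed / n_paths      # eta^2 * P(max |S_k| >= eta), estimated
rhs = second_moment / n_paths          # E[S_m^2], estimated (about 100)
print(lhs <= rhs)                      # the inequality holds with a wide margin
```

This is exactly the estimate that converts L² convergence of the endpoints into uniform convergence in probability over the whole interval.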
Proposition 11.7. Let {X^(n) : n ∈ N}, where X^(n) = {X_t^(n) : t ∈ R+}, be a sequence of left- or right-continuous processes on a probability space (Ω, F, P), and let D ⊂ R+. If the sequence is a Cauchy sequence with respect to uniform convergence in probability on D, that is, if for every η > 0 and every ε > 0 there exists N ∈ N such that

(1)  P{ sup_{t∈D} |X_t^(n) − X_t^(ℓ)| > η } < ε for n, ℓ ≥ N,

then there exists a left- or right-continuous process X on D × Ω such that

(2)  P-lim_{n→∞} [ sup_{t∈D} |X_t^(n) − X_t| ] = 0,

and thus there exist a subsequence {n_k} of {n} and a null set Λ in (Ω, F, P) such that

(3)  lim_{k→∞} sup_{t∈D} |X_t^(n_k) − X_t| = 0  for ω ∈ Λ^c.

Proof. Observe that the left- or right-continuity of X^(n) for n ∈ N implies the measurability of the supremum over D in (1), as noted in the proof of Proposition 11.6. Now by (1) we can select a subsequence {n_k} of {n} such that

(4)  P{ sup_{t∈D} |X_t^(n_{k+1}) − X_t^(n_k)| > 2^{−k} } < 2^{−k},

and then

Σ_{k∈N} P{ sup_{t∈D} |X_t^(n_{k+1}) − X_t^(n_k)| > 2^{−k} } < ∞.

Let A_k = { sup_{t∈D} |X_t^(n_{k+1}) − X_t^(n_k)| > 2^{−k} } and A = limsup_{k→∞} A_k. Since Σ_{k∈N} P(A_k) < ∞ we have P(A) = 0, that is, P{ω ∈ Ω : ω ∈ A_k for infinitely many k ∈ N} = 0, by the Borel-Cantelli Theorem. Therefore there exists a null set Λ in (Ω, F, P) such that

(5)  Σ_{k∈N} sup_{t∈D} |X^(n_{k+1})(t,ω) − X^(n_k)(t,ω)| < ∞  for ω ∈ Λ^c.

If we define a real valued function Y on D × Λ^c by setting

(6)  Y(t,ω) = Σ_{k∈N} { X^(n_{k+1})(t,ω) − X^(n_k)(t,ω) }  for (t,ω) ∈ D × Λ^c,
then we have

(7)  Y(t,ω) = lim_{k→∞} X^(n_{k+1})(t,ω) − X^(n_1)(t,ω)  for (t,ω) ∈ D × Λ^c.

Define a real valued function X on D × Ω by

(8)  X(t,ω) = Y(t,ω) + X^(n_1)(t,ω) for (t,ω) ∈ D × Λ^c, and X(t,ω) = 0 for (t,ω) ∈ D × Λ.

By (6), Y(t,·) is F-measurable on Λ^c, and thus X(t,·) is F-measurable on Ω for t ∈ D. By the uniform convergence on D in (7), which is implied by (5), Y(·,ω) is left- or right-continuous on D for ω ∈ Λ^c, and then X(·,ω) is left- or right-continuous on D for ω ∈ Ω. To prove (2), let us recall that for an arbitrary ε > 0 there exists N₁ ∈ N such that

(9)  P{ sup_{t∈D} |X_t^(n) − X_t^(n_k)| > ε/2 } < ε/2  for n, n_k ≥ N₁.

By the uniform convergence of X^(n_k)(·,ω) to X(·,ω) on D for ω ∈ Λ^c, we have lim_{k→∞} sup_{t∈D} |X_t^(n_k) − X_t| = 0 on Λ^c. Since almost sure convergence implies convergence in probability, there exists N₂ ∈ N such that

(10)  P{ sup_{t∈D} |X_t^(n_k) − X_t| > ε/2 } < ε/2  for n_k ≥ N₂.

Note that since sup_{t∈D} |X_t^(n) − X_t| ≤ sup_{t∈D} |X_t^(n) − X_t^(n_k)| + sup_{t∈D} |X_t^(n_k) − X_t|, we have

{ sup_{t∈D} |X_t^(n) − X_t^(n_k)| ≤ ε/2 } ∩ { sup_{t∈D} |X_t^(n_k) − X_t| ≤ ε/2 } ⊂ { sup_{t∈D} |X_t^(n) − X_t| ≤ ε },

and therefore

(11)  P{ sup_{t∈D} |X_t^(n) − X_t| > ε } ≤ P{ sup_{t∈D} |X_t^(n) − X_t^(n_k)| > ε/2 } + P{ sup_{t∈D} |X_t^(n_k) − X_t| > ε/2 }.

Let N = max{N₁, N₂}. If we take n_k ≥ N, then by (11), (9) and (10) we have, for n ≥ N,

P{ sup_{t∈D} |X_t^(n) − X_t| > ε } < ε.
This proves (2). Finally, (3) is a direct consequence of (2). ∎

Theorem 11.8. Let (Ω, F, {F_t}, P) be an augmented right-continuous filtered space. Then M₂(Ω, F, {F_t}, P) is complete with respect to the metric associated with the quasinorm |·|_∞ in Definition 11.4. Furthermore, M₂^c(Ω, F, {F_t}, P) is a closed linear subspace of M₂(Ω, F, {F_t}, P).

Proof. Let {X^(n) : n ∈ N} be a Cauchy sequence with respect to the metric associated with the quasinorm |·|_∞ on M₂. Then for every δ > 0 there exists N_δ ∈ N such that

(1)  |X^(n) − X^(ℓ)|_∞ < δ for n, ℓ ≥ N_δ.

Let us show that this implies that for fixed m ∈ N, {X^(n) : n ∈ N} is a Cauchy sequence with respect to uniform convergence in probability on [0,m), that is, for every η > 0 and ε > 0 there exists N_{η,ε} ∈ N such that

(2)  P{ sup_{t∈[0,m)} |X_t^(n) − X_t^(ℓ)| > η } < ε for n, ℓ ≥ N_{η,ε}.

Now for any n, ℓ ∈ N, X^(n) − X^(ℓ) is a right-continuous L2-martingale, so that |X^(n) − X^(ℓ)| is a right-continuous nonnegative L2-submartingale. Then by (1) of Theorem 6.16 we have

η² P{ sup_{t∈[0,m)} |X_t^(n) − X_t^(ℓ)| > η } ≤ E[ |X_m^(n) − X_m^(ℓ)|² ],

in other words,

η P{ sup_{t∈[0,m)} |X_t^(n) − X_t^(ℓ)| > η }^{1/2} ≤ |X^(n) − X^(ℓ)|_m,

and hence

{ η P{ sup_{t∈[0,m)} |X_t^(n) − X_t^(ℓ)| > η }^{1/2} } ∧ 1 ≤ |X^(n) − X^(ℓ)|_m ∧ 1.

Thus by (2) of Definition 11.4 and (1) we have

2^{−m} [ { η P{ sup_{t∈[0,m)} |X_t^(n) − X_t^(ℓ)| > η }^{1/2} } ∧ 1 ] ≤ |X^(n) − X^(ℓ)|_∞ < δ for n, ℓ ≥ N_δ,
that is,

P{ sup_{t∈[0,m)} |X_t^(n) − X_t^(ℓ)| > η } < (2^m η^{−1} δ)² for n, ℓ ≥ N_δ.
For an arbitrary ε > 0, let δ(ε) > 0 be so small that (2^m η^{−1} δ(ε))² < ε. Then

P{ sup_{t∈[0,m)} |X_t^(n) − X_t^(ℓ)| > η } < ε for n, ℓ ≥ N_{δ(ε)}

for our η ∈ (0,1]. On the other hand, for η > 1 we have

P{ sup_{t∈[0,m)} |X_t^(n) − X_t^(ℓ)| > η } ≤ P{ sup_{t∈[0,m)} |X_t^(n) − X_t^(ℓ)| > 1 } < ε for n, ℓ ≥ N_{δ(ε)}.
Nm.
This proves (2) with N_{η,ε} = N_{δ(ε)}. Now the fact that {X^(n) : n ∈ N} is a Cauchy sequence with respect to uniform convergence in probability on [0,m) implies, according to Proposition 11.7, that there exist a subsequence {n_{m,k} : k ∈ N} of {n} and a null set Λ_m in (Ω, F, P) such that lim_{k→∞} sup_{t∈[0,m)} |X^(n_{m,k})(t,ω) − X(t,ω)| = 0
fc-*°°fg[0,m)
forwGA^,
where X is a right-continuous process on (Q, 5, P). We can select the subsequences induc tively so that {n mi t : k G N} is a subsequence of {nm_iiA. : fc € N} for m > 2. Then for the subsequence { n ^ : k £ N} of {n} and the null set A = Um6MAm in (Q, 5, P ) we have (3)
lim sup \Xin"'k)(t,u)-X(t,oj)\
k^oo tg[0,m)
=0
for m £ N and w £ AC
Let us redefine X on R+ x A by setting (4)
X(t,
LO)
=0
for (t, u>) £ R+ x A.
Note that the definition of X on R+ x Ac is independent of m G N. To show that X is an adapted process, that is, Xt is 5(-measurable for every t G R+, note first that since the filtered space is augmented, the null set A in (£1,g, P ) is in fo for every t G R+. Now (3) implies that the sequence of 5; -measurable random variables Xf*' for A: G N converge to X t on Ac G 5< so that Xf is fo-measurable on Ac £ fo. Then since Xt = 0 on A by (4), X ( is fo-measurable on Q. This shows that X is an adapted process. Since X<^'k,k) = 0 a.e. for k G N, we have X 0 = 0 a.e.
111. L2 -MARTINGALES AND QUADRATIC VARIATION PROCESSES
205
To show that X is an L2-process, that is, E(X 2 ) < oo for every t e R+, let t £ R+ be arbitrarily selected and let m e N be so large that t £ [0,m). Since X ( n i , ) - Xln'^ is a martingale, |X<"'.'> - X"^>| 2 is a submartingale and therefore E[|Z ( ( " , , ) xf'^f] increases with t. Thus by (1), for every e > 0 there exists N £ N such that E[|Xf"«' - X ^ ' l 2 ] * / 2 < | X ^ - X ^ U
<s
foct,;
> TV.
Thus by the completeness of L2(£l, 5 t , P ) with respect to the metric of the L2 norm, there exists Yt £ L 2 (Q, $t, P) such that lim xfk'k) = Yt in L2. Then there exists a subsequence of { X ( n t t : k £ N} which converges to Yt a.e. on (Q,fo,P). But the subsequence converges to Xt a.e. on ( n , 3 , , P ) . Thus Xt = Yt a.e. on ( Q , 5 t , P ) . Therefore X t 6 £2
Let us show that X is a martingale. Let s, t £ R+ and s < t. Since X4
iX-'-xi^^l x^-x'^'l^+lx^-'-xi^. Since lim I X ( " t f c ) - X | m = 0 for every m £ N, we have lim | X("*'fc) - X 1 ^ = 0 by /c—*oo
k—*oo
Remark 11.5. From this fact and the fact that {X (n) : n £ N} is a Cauchy sequence with respect to the metric associated with | ■ | oo, we have lim | X ( n ) - X | x = 0. Finally to show that M 2 is a closed subset of M 2 , we show that if {X (n) : n £ N} is a sequence in M% and if lim I X ( n ) - X 1 ^ = 0 for some X £ M 2 , then X £ M\. But acn—>oo
cording to Proposition 11.6, lim | X ( n ) — X |oo = 0 implies that there exists a subsequence {nk} of {n} and a null set A in ( Q , 5 , P ) such that X (nt> (-,w) converges to X(-,w) uni formly on every finite interval in R+ for w £ Ac. Thus X(-,u>) is continuous on R+ for u £ Ac, that is, X e M^. ■
[II] Signed Lebesgue-Stieltjes Measures

Observation 11.9. Let us give a brief review of signed Lebesgue-Stieltjes measures. Let a′ and a″ be two real valued right-continuous monotone increasing functions on R+ and let
v = a′ − a″, a real valued right-continuous function on R+ which is of bounded variation on [0,t] for every t ∈ R+. Consider the total variation function |v| of v, defined by
(1)  |v|(t) = sup{ Σ_{k=1}^{n} |v(t_k) − v(t_{k−1})| : 0 = t_0 < ··· < t_n = t }

for every t ∈ R+. |v| is a real valued right-continuous monotone increasing function on R+ with |v|(0) = 0. Since v is right-continuous, |v|(t) can be computed as
(2)  |v|(t) = sup_{n∈N} Σ_{k=1}^{2^n} |v(t_{n,k}) − v(t_{n,k−1})|,
| o | ( t ) = s u p ^ |v(fB)t) - «(
where in,A = fc2""t, A; = 0 , . . . , 2", and n G N. Let fiai, fiaii and ;U|„| be the Lebesgue-Stieltjes measures on (R+, 98a,) determined by a', a", and | v | respectively. Let us call g i „ | the total variation measure of e. For every t G R+, (iv = (j.a.>—fj.aii is a signed Lebesgue-Stieltjes measure on ([0, t], 5W])- /J.V is defined on (IL,., 93n+) if and only if at least one of jv(R+) and /v(R+) is finite. Consider the signed Lebesgue-Stieltjes measure ^„ on ([0,i],93[o,t]). According to the Hahn Decomposition Theorem, there exist two sets A, B G 53[0,t] such that AC\B = 0, AU.5 = [0, *], fiv(Er\A} > 0 and //„(£ n B ) < 0 for every E G 93[o,t] and moreover such a decomposition is unique in the sense that if A' and B' are another pair of such sets then fiv(E n A) = ^/„(.E Pi A') and Hv(.EnB) = fiv(Er\BJ) for every £ G <8[0,<). The two measures (4 and /i~ on ([0, t], 53[0,t]) defined by (3)
p*(J5) = ^ ( E n A) and / ^ ( E ) = - p „ ( E n B)
for £ e 5 8 M
are called the positive and negative parts of \iv. Note that (4)
fiu =
/4-IK-
and (5)
pywi <* £ + [t~.
and in particular,
(6)
H<*) = /M.|(P,*D = 0<+/O([0Jtl). For a real valued Borel-measurable function / on [0, t], we define / ■'[0,(1
f(s)d^v(s)=
f ./[0,i]
/(s)/i„(ds),
and /(s)a>H(s) = /
I
f(s)fJ.\v\(,ds),
provided the Lebesgue-Stieltjes integrals on the right hand sides exist. Note that with A, B ∈ B_{[0,t]} as above in the definition of μ_v^+ and μ_v^−, we have
f(s)n\*\(ds)
J[0,t]
=
J
I
* '
f(s)ij.+v(ds)+ I
[0.t)
=
I
J
f(s)p,-v{ds)
J[0,t]
f{s){U{s)
- l fl (a)}^(
[0,t]
f{s){U(s)
since we have f{%(] flB dp.% = Jm flA d^~ = 0, fm /[o,(] / ! B d-Vv = lio.t] f d^Z- Thus (7)
-
lB(s)}ii;(ds),
J[0,t]
/
/ W p |.i(A) = /
J[0,t]
'
'
flA drf = fm
f dp.*v and
∫_{[0,t]} f(s) μ_{|v|}(ds) = ∫_{[0,t]} f(s){ 1_A(s) − 1_B(s) } μ_v(ds).
J[0,t)
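For a purely atomic signed measure the Hahn sets and the decompositions (3)–(6) can be written down directly. This is a sketch under assumptions: a discrete μ_v with four invented atoms, with A the set of atoms carrying positive mass and B its complement.

```python
# Sketch (assumed discrete example, not from the text): for a signed measure
# mu_v with atom w_i at the point s_i, the Hahn sets are A = {s_i : w_i > 0}
# and B = {s_i : w_i <= 0}, so mu_v^+(E) = mu_v(E & A) and mu_v^-(E) = -mu_v(E & B)
# as in (3), and the total variation measure is mu_v^+ + mu_v^- as in (5).

atoms = {0.1: 0.4, 0.3: -0.2, 0.6: 0.7, 0.8: -0.1}     # point -> signed mass

mu_plus  = {s: w for s, w in atoms.items() if w > 0}    # positive part
mu_minus = {s: -w for s, w in atoms.items() if w <= 0}  # negative part

total     = sum(atoms.values())                               # mu_v([0,1])   = 0.8
variation = sum(mu_plus.values()) + sum(mu_minus.values())    # mu_|v|([0,1]) = 1.4
print(total, variation)
```

Formula (7) is visible here as well: integrating f = 1 against 1_A − 1_B with respect to μ_v flips the sign of the negative atoms, turning 0.8 into 1.4.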
[III] Locally Bounded Variation Processes

Definition 11.10. A stochastic process V = {V_t : t ∈ R+} on a filtered space (Ω, F, {F_t}, P) is called a locally bounded variation process if it satisfies the following conditions.
1°. V is {F_t}-adapted.
2°. V(·,ω) is a right-continuous function on R+ and is of bounded variation on [0,t] for every t ∈ R+, with V(0,ω) = 0 for every ω ∈ Ω.
3°. The process |V| = {|V|_t : t ∈ R+} defined by

(1)  |V|(t,ω) = sup_{n∈N} Σ_{k=1}^{2^n} |V(t_{n,k},ω) − V(t_{n,k−1},ω)| for (t,ω) ∈ R+ × Ω,

where t_{n,k} = k 2^{−n} t for k = 0,...,2^n and n ∈ N, is an L1-process.

A stochastic process V on an augmented filtered space (Ω, F, {F_t}, P) is called an almost surely locally bounded variation process if it satisfies 1° and the following conditions.
4°. There exists a null set Λ_V in (Ω, F, P) such that V(·,ω) satisfies condition 2° when ω ∈ Λ_V^c.
208
CHAPTER 3. STOCHASTIC
INTEGRALS
5°. The process | V | which is defined by (1) for u £ Av andby setting \V |(-,u>) = Ofor ijj 6 Ay is an L\-process. The set Av is called an exceptional null set for the almost surely locally bounded variation process V. The process | V | is called the total variation process ofV. Lemma 11.11. If V = { Vt : t £ R + } is an almost surely locally bounded variation process on an augmented filtered space (£2,5, {5t},-P). then its total variation process \V\ = {| V11 '■ t £ R+} is an increasing process. Proof. Since the filtered space is augmented, an exceptional null set Ay for V is in 5 t for every t £ R+. For t £ R+, the sum in (1) of Definition 11.10 is ^-measurable and so is the supremum over n £ N. Thus | V\{t, •) is ^-measurable 0 n Av £ 5<- Since | V \ ( * , - ) = 0 on Ay, | V | ( i , ■) is ^-measurable on Q. This shows that \V\ is an Li-process. Also every sample function of | V | is a right-continuous monotone increasing function on R+ vanishing at t = 0. This shows that | V | is an increasing process. ■ Theorem 11.12. A stochastic process V = {Vt : t £ R+} on an augmented filtered space (Q, 5, {5 4 }, P) is a locally bounded variation process if and only ifV = A' — A" where A' and A" are two increasing processes on the filtered space. Similarly a stochastic process on an augmented filtered space is an almost surely locally bounded variation process if and only it is the difference of two almost surely increasing processes on the filtered space. Proof. 1) Suppose V = A' — A" where A' and A" are two increasing processes on a filtered space. The fact that V satisfies the conditions in Definition 11.10 can be verified easily. In particular for the sum in (1) in Definition 11.10, we have YX=\ \V(tn,k,t*>) — V(tnik-i,uj)\ < A'(t,u) + A"(t,uj)sotha.t |V|(i,cj) < A'(t,w) + A"(ttw) for (t,u) £ R+ x Q. Since A't and A" are integrable so is | V | i . 2) Suppose V is a locally bounded variation process on a filtered space. 
As we saw in Lemma 11.11, \V\ is an increasing process. Let A =\ V\ — V. Then A is a rightcontinuous adapted L\-process vanishing at t = 0. Also for 5, t £ R+, s < t, we have A(t,cu) - A(s,to) = { | V f t t , w ) - | V\{s,u)}
- {V(t,w) - V(s,u)}
> 0.
Thus A is an increasing process. With A' = | V | and A" = A, we have V = A' — A" where A' and A" are increasing processes. 3)The statement regarding an almost surely bounded variation process on an augmented filtered space is proved likewise. I
§ 11. L2-MARTINGALES
AND QUADRATIC VARIATION PROCESSES
209
Definition 11.13. An almost surely locally bounded variation process V on an augmented filtered space is called natural ifV = A'- A" where A' and A" are two natural almost surely increasing processes. Let A = {At : t G R+} be an almost surely increasing process on a filtered space (£i, 5 , {5t}, P) and let p.A be the family of Lebesgue-Stieltjes measures on (R+, 53^) de termined by the sample functions of A. For a real valued function X on R+ x Q. such that X(-,u) is a Borel measurable function on R+ for every u> g £2, we defined in Definition 10.3 / X(s,u>)dA(s,Lo)= X(s,uj)uA(ds,uj) forwgQ, - / [0,(]
J[Q,t]
provided the Lebesgue-Stieltjes integral on the right hand side exists. Definition 11.14. Let V = {Vt : t g R+} be an almost surely locally bounded variation process on a filtered space (Q, 5, {$t},P) given by V = A' — A" where A' and A" are two almost surely increasing processes. For every t g R+, the family of signed LebesgueStieltjes measures on ([0, f], 23[o,«j) determined by V is defined by pv = PA1 — PA»Note that if V = A' - A" and V = B' - B" where A', A", B> and B" are almost surely increasing processes on the filtered space (Q, 5, {&}, P), there exists a null set A in (£l,5oo,P) such that ^ - ( ^ w ) - ^.»(-,w) = p.B'(-,u) - PB"{;W) forw g Ac, so that the family of signed Lebesgue-Stieltjes measures pv on ([0, t], 35[o,t]) is independent of the decomposition of V into two almost surely increasing processes up to a null set in (n,5oo,P). If at least one of A' and yl", say A' satisfies the integrability condition Ef^AJ^) < oo, then there exists a null set A in (£2,5^,, P) such that for every w g Ac we have A^(UJ) < oo so that pA'(®U,w) < °° a n d thus fiv(Rt,w) = pA'fM+iv) — /i^»(R+,u;) is defined. Recall that the condition E(A'0O) < oo is equivalent to the Li-boundedness and the uniform integrability of A' according to Lemma 10.2. For an almost surely locally bounded variation process V = {Vt : t g R + } on a filtered space (£i, 5, {&}, P) and a real valued function l o n R + x Q such that X(-, LO) is a Borel measurable function on R+, we define for every t g R+ and w g Q, / J[0,t]
X(s,u)dV(s,u>)= "'[0,(]
X(s,u>)pv(ds,Li)
provided the integral with respect to the signed Lebesgue-Stieltjes measure on the right side
210
CHAPTER 3. STOCHASTIC
exists. Note that since |^y(jg,«)| < fnVy(E,u) 1/
X(»tw)dV(s,u)\
for E G 33m,, we have
< [ I
\J[0,t]
INTEGRALS
|JT(«,w)|d|V|(a,w).
J[0,t]
Definition 11.15. Let $\mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ be the collection of equivalence classes of all almost surely increasing processes and $\mathbf{V}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ be the linear space of equivalence classes of all almost surely locally bounded variation processes on a standard filtered space $(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. Let $\mathbf{A}_c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ be the subcollection of $\mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ consisting of almost surely continuous members and let $\mathbf{V}_c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ be the linear subspace of $\mathbf{V}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ consisting of almost surely continuous members. If an almost surely increasing process on $(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ is natural, then so are all its equivalent processes. In this case we call the equivalence class natural. Similarly for an equivalence class of almost surely locally bounded variation processes on the filtered space.

In what follows we write $A \in \mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ to mean both an almost surely increasing process and an equivalence class of such processes. Whether a process or an equivalence class of processes is meant should be clear from the context. Similarly for $V \in \mathbf{V}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$.

Observation 11.16. Let $A \in \mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and $p \in [1,\infty)$ be fixed. Consider the collection of all measurable processes $X = \{X_t : t \in \mathbb{R}_+\}$ on $(\Omega,\mathfrak{F},P)$ satisfying the condition

(1) $\mathrm{E}\bigl[\int_{[0,t]} |X(s)|^p\,dA(s)\bigr] < \infty$ for every $t \in \mathbb{R}_+$.

The family of Lebesgue-Stieltjes measures $\{\mu_A(\cdot,\omega) : \omega \in \Omega\}$ on $(\mathbb{R}_+,\mathfrak{B}_{\mathbb{R}_+})$ is an $\mathfrak{F}_\infty$-measurable family, and the restrictions of these measures to $([0,t],\mathfrak{B}_{[0,t]})$ constitute an $\mathfrak{F}_\infty$-measurable family of finite measures for every $t \in \mathbb{R}_+$ according to Lemma 10.10. By Theorem 10.11, $\int_{[0,t]} |X(s)|^p\,dA(s)$ is an $\mathfrak{F}_\infty$-measurable random variable on $(\Omega,\mathfrak{F},P)$ and, if $X$ is an adapted process, then $\int_{[0,t]} |X(s)|^p\,dA(s)$ is an $\mathfrak{F}_t$-measurable random variable. For two measurable processes $X$ and $Y$ on $(\Omega,\mathfrak{F},P)$ each satisfying condition (1), the condition

(2) $\mathrm{E}\bigl[\int_{[0,t]} |X(s) - Y(s)|^p\,dA(s)\bigr] = 0$ for every $t \in \mathbb{R}_+$

is an equivalence relation. Let $L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$ be the collection of the equivalence classes of all measurable processes on $(\Omega,\mathfrak{F},P)$ satisfying condition (1) with respect to this
§11. L²-MARTINGALES AND QUADRATIC VARIATION PROCESSES
211
equivalence relation. From the fact that $|a+b|^p \le 2^p\{|a|^p + |b|^p\}$ for $a, b \in \mathbb{R}$, it follows immediately that if $X, Y \in L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$ then $X + Y \in L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$. Clearly if $X \in L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$ and $c \in \mathbb{R}$ then $cX \in L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$. Thus $L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$ is a linear space. Let us observe that condition (1) is equivalent to

(3) $\mathrm{E}\bigl[\int_{[0,m]} |X(s)|^p\,dA(s)\bigr] < \infty$ for every $m \in \mathbb{N}$.

Condition (3) implies that for every $m \in \mathbb{N}$ there exists a null set $\Lambda_m$ in $(\Omega,\mathfrak{F},P)$ such that $\int_{[0,m]} |X(s,\omega)|^p\,dA(s,\omega) < \infty$ for $\omega \in \Lambda_m^c$. Then $\Lambda = \bigcup_{m\in\mathbb{N}} \Lambda_m$ is a null set in $(\Omega,\mathfrak{F},P)$ such that for every $\omega \in \Lambda^c$ we have

(4) $\int_{[0,t]} |X(s,\omega)|^p\,dA(s,\omega) < \infty$ for every $t \in \mathbb{R}_+$.

The element $0 \in L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$ is the equivalence class of measurable processes $X$ on $(\Omega,\mathfrak{F},P)$ such that

(5) $\mathrm{E}\bigl[\int_{[0,t]} |X(s)|^p\,dA(s)\bigr] = 0$ for every $t \in \mathbb{R}_+$.

Note that when a measurable process $X$ satisfies condition (5), then by the same argument as in (4) there exists a null set $\Lambda$ in $(\Omega,\mathfrak{F},P)$ such that for every $\omega \in \Lambda^c$ we have

(6) $\int_{[0,t]} |X(s,\omega)|^p\,dA(s,\omega) = 0$ for every $t \in \mathbb{R}_+$.

Conversely if there exists a null set $\Lambda$ in $(\Omega,\mathfrak{F},P)$ such that condition (6) holds for every $\omega \in \Lambda^c$, then (5) holds. Thus (5) and (6) are equivalent. Note that (6) does not imply that $X(\cdot,\omega) = 0$ for $\omega \in \Lambda^c$, since we may have $A(\cdot,\omega) = 0$. Thus the equivalence condition (2) is less stringent than the condition that $X(\cdot,\omega) = Y(\cdot,\omega)$ for $\omega \in \Lambda^c$ where $\Lambda$ is a null set in $(\Omega,\mathfrak{F},P)$.

Definition 11.17. For $A \in \mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and $p \in [1,\infty)$, let $L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$ be the linear space of the equivalence classes of all measurable processes $X = \{X_t : t \in \mathbb{R}_+\}$ on $(\Omega,\mathfrak{F},P)$ satisfying the condition

(1) $\mathrm{E}\bigl[\int_{[0,t]} |X(s)|^p\,dA(s)\bigr] < \infty$ for every $t \in \mathbb{R}_+$.
For $X \in L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$, define for every $t \in \mathbb{R}_+$

(2) $\|X\|_{p,t}^{A,P} = \mathrm{E}\bigl[\int_{[0,t]} |X(s)|^p\,dA(s)\bigr]^{1/p}$

and

(3) $\|X\|_{p,\infty}^{A,P} = \sum_{m\in\mathbb{N}} 2^{-m}\{\|X\|_{p,m}^{A,P} \wedge 1\}$.
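The seminorms (2) and the quasinorm (3) can be estimated numerically. The sketch below is not part of the text: it assumes the concrete choice $A(t,\omega) = t$ (so that $dA$ is Lebesgue measure) and the bounded process $X(s,\omega) = c(\omega)\sin s$ with $|c(\omega)| \le 1$, approximates the integrals by Riemann sums, and replaces the expectation by a sample average.

```python
import math
import random

def seminorm_p_t(paths, p, t, dt):
    # E[ int_{[0,t]} |X(s)|^p dA(s) ]^{1/p} with A(s) = s, via a left Riemann sum
    n = int(round(t / dt))
    total = 0.0
    for path in paths:
        total += sum(abs(path(k * dt)) ** p * dt for k in range(n))
    return (total / len(paths)) ** (1.0 / p)

def quasinorm_p_inf(paths, p, dt, m_max=10):
    # sum_m 2^{-m} { ||X||_{p,m} ^ 1 }, truncated at m = m_max
    return sum(2.0 ** (-m) * min(seminorm_p_t(paths, p, float(m), dt), 1.0)
               for m in range(1, m_max + 1))

random.seed(0)
# bounded sample paths X(s, omega) = c(omega) sin s with |c(omega)| <= 1
paths = [(lambda c: (lambda s: c * math.sin(s)))(random.uniform(-1.0, 1.0))
         for _ in range(100)]

print(seminorm_p_t(paths, 2, 1.0, 0.02))   # the seminorm at t = 1, roughly 0.3
print(quasinorm_p_inf(paths, 2, 0.02))     # the quasinorm, necessarily below 1
```

Note that the quasinorm is bounded by $\sum_m 2^{-m} = 1$ regardless of the process, which is what makes it a quasinorm rather than a norm.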
Remark 11.18. For $A \in \mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, the functions $\|\cdot\|_{p,t}^{A,P}$ for $t \in \mathbb{R}_+$ and $\|\cdot\|_{p,\infty}^{A,P}$ on $L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$ defined above have the following properties.
1) $\|\cdot\|_{p,t}^{A,P}$ is a seminorm on $L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$ for every $t \in \mathbb{R}_+$.
2) For $X, X^{(n)} \in L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$, $n \in \mathbb{N}$, we have $\lim_{n\to\infty} \|X^{(n)} - X\|_{p,\infty}^{A,P} = 0$ if and only if $\lim_{n\to\infty} \|X^{(n)} - X\|_{p,m}^{A,P} = 0$ for every $m \in \mathbb{N}$.
3) $\|\cdot\|_{p,\infty}^{A,P}$ is a quasinorm on $L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$.

Proof. These statements are proved in the same way as Remark 11.5 for the seminorm $|\cdot|_t$ and the quasinorm $|\cdot|_\infty$ on the space $\mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. ∎

Observation 11.19. Let $A \in \mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and $p \in [1,\infty)$. Then every bounded measurable process $X = \{X_t : t \in \mathbb{R}_+\}$ on $(\Omega,\mathfrak{F},P)$ is in $L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$.

Proof. Let $X$ be a measurable process on $(\Omega,\mathfrak{F},P)$ such that $|X(t,\omega)| \le K$ for $(t,\omega) \in \mathbb{R}_+ \times \Omega$ for some $K > 0$. Then for any $(t,\omega) \in \mathbb{R}_+ \times \Omega$ we have

$$\int_{[0,t]} |X(s,\omega)|^p\,\mu_A(ds,\omega) \le K^p \mu_A([0,t],\omega) = K^p A(t,\omega)$$

for a.e. $\omega$ in $(\Omega,\mathfrak{F}_\infty,P)$, so that $\mathrm{E}\bigl[\int_{[0,t]} |X(s)|^p\,dA(s)\bigr] \le K^p\,\mathrm{E}[A(t)] < \infty$. Thus $X \in L_{p,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P)$. ∎
[IV] Quadratic Variation Processes

Proposition 11.20. Let $M \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. There exists an equivalence class $A \in \mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ such that $M^2 - A$ is a right-continuous null at 0 martingale. Moreover such $A$ can be chosen to be natural, and under this condition $A$ is unique.
Proof. Since $M$ is a right-continuous $L_2$-martingale, $M^2$ is a right-continuous nonnegative submartingale. Since our filtered space is right-continuous, $M^2$ belongs to the class (DL) by Theorem 8.22. Therefore by Theorem 10.23 (Doob-Meyer Decomposition), there exists a unique natural almost surely increasing process $A$ such that $M^2 - A$ is a right-continuous martingale. Since both $M$ and $A$ are null at 0, so is $M^2 - A$. ∎

Lemma 11.21. Let $M$ be a right-continuous martingale and $A$ be a natural almost surely increasing process on an augmented right-continuous filtered space $(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. If $M + A$ is a natural almost surely increasing process, then $M = 0$.

Proof. Since $M$ is a right-continuous martingale on a right-continuous filtered space, it belongs to the class (DL) by Theorem 8.22. Since $A$ is an almost surely increasing process, it is a submartingale of class (DL) by Lemma 10.19. Thus $M + A$ is a right-continuous submartingale of class (DL) on an augmented right-continuous filtered space and therefore by Theorem 10.23 (Doob-Meyer Decomposition) $M + A = M' + A'$ where $M'$ is a right-continuous martingale and $A'$ is a natural almost surely increasing process, and furthermore the decomposition is unique. Now by assumption $M + A$ is itself a natural almost surely increasing process. Thus by the uniqueness of the decomposition, we have $M + A = A'$ and $M' = 0$. On the other hand, $M + A$ is itself a Doob-Meyer decomposition of $M' + A'$. Thus $M = M' = 0$. ∎

Proposition 11.22. Let $M, N \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. There exists an equivalence class $V \in \mathbf{V}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ such that $MN - V$ is a right-continuous null at 0 martingale. Moreover such $V$ can be chosen to be natural, and under this condition $V$ is unique.

Proof. Let $M' = (M+N)/2$ and $M'' = (M-N)/2$. Then both $M'$ and $M''$ are in $\mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ so that by Proposition 11.20 there exist natural almost surely increasing processes $A'$ and $A''$ such that $(M')^2 - A'$ and $(M'')^2 - A''$ are right-continuous null at 0 martingales. Now $MN = (M')^2 - (M'')^2$ so that

(1) $MN - (A' - A'') = \{(M')^2 - A'\} - \{(M'')^2 - A''\}$.

Since $A' - A''$ is a natural almost surely locally bounded variation process and $\{(M')^2 - A'\} - \{(M'')^2 - A''\}$ is a right-continuous null at 0 martingale, the equality (1) proves the existence of a natural almost surely locally bounded variation process $V$ such that $MN - V$ is a right-continuous null at 0 martingale.

To prove the uniqueness, suppose $V'$ and $V''$ are two natural almost surely locally bounded variation processes such that $MN - V'$ and $MN - V''$ are right-continuous null at
0 martingales. Then $V' - V'' = (MN - V'') - (MN - V')$ is a right-continuous null at 0 martingale. On the other hand, since $V'$ and $V''$ are natural almost surely locally bounded variation processes, so is $V' - V''$. Thus $V' - V'' = A' - A''$ where $A'$ and $A''$ are two natural almost surely increasing processes. Now $(V' - V'') + A'' = A'$ so that the sum of the right-continuous martingale $V' - V''$ and the natural almost surely increasing process $A''$ is equal to a natural almost surely increasing process $A'$. Then by Lemma 11.21, $V' - V'' = 0$, that is, $V' = V''$. ∎

Definition 11.23. Let $M \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. We write $[M]$ for $A \in \mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ such that $M^2 - A$ is a right-continuous null at 0 martingale, and call $[M]$ the quadratic variation process of $M$. For $M, N \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, we write $[M,N]$ for $V \in \mathbf{V}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ such that $MN - V$ is a right-continuous null at 0 martingale, and call $[M,N]$ the quadratic variation process of $M$ and $N$.

In Definition 11.23, the existence of $[M]$ and $[M,N]$ and their uniqueness under the condition that they be natural have been proved in Proposition 11.20 and Proposition 11.22. In what follows we write $[M]$ for both an equivalence class of processes and an arbitrary representative of the equivalence class. Similarly for $[M,N]$.

Remark 11.24. Let $M$ and $N$ be right-continuous $L_2$-martingales on a standard filtered space $(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ which may not be null at 0. Then $M - M_0$ and $N - N_0$ are in $\mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ so that there exists $X = [M - M_0, N - N_0]$ in $\mathbf{V}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ such that $(M - M_0)(N - N_0) - X$ is a null at 0 martingale. Let $Y = M_0 N_0 + X$. Then $MN - Y$ is a null at 0 martingale.

Proof. Since $X$ is a null at 0 process, so is $MN - Y = MN - M_0 N_0 - X$. Since $(M - M_0)(N - N_0) - X$ is a martingale, to show that $MN - Y$ is a martingale it suffices to show that $\{MN - Y\} - \{(M - M_0)(N - N_0) - X\}$ is a martingale. Now

$$\{MN - Y\} - \{(M - M_0)(N - N_0) - X\} = (MN - M_0 N_0) - (M - M_0)(N - N_0) = N_0(M - M_0) + M_0(N - N_0).$$

Since $M$ and $N$ are $L_2$-processes, $N_0(M - M_0)$ and $M_0(N - N_0)$ are $L_1$-processes and in fact martingales, and so is their sum. ∎

The family $\mu_{[M]}$ of Lebesgue-Stieltjes measures on $(\mathbb{R}_+,\mathfrak{B}_{\mathbb{R}_+})$ determined by
$[M] \in \mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and the family of signed Lebesgue-Stieltjes measures $\mu_{[M,N]}$ on $([0,t],\mathfrak{B}_{[0,t]})$ for $t \in \mathbb{R}_+$ determined by $[M,N] \in \mathbf{V}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ are defined. The space $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$ is defined by Definition 11.17. As an immediate consequence of Definition 11.23, we have the following lemma.

Lemma 11.25. Let $M, N \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. Then for $s, t \in \mathbb{R}_+$, $s < t$, we have

(1) $\mathrm{E}[(M_t - M_s)(N_t - N_s)|\mathfrak{F}_s] = \mathrm{E}[M_t N_t - M_s N_s|\mathfrak{F}_s] = \mathrm{E}[[M,N]_t - [M,N]_s|\mathfrak{F}_s]$,

and in particular

(2) $\mathrm{E}[(M_t - M_s)^2|\mathfrak{F}_s] = \mathrm{E}[M_t^2 - M_s^2|\mathfrak{F}_s] = \mathrm{E}[[M]_t - [M]_s|\mathfrak{F}_s]$,

a.e. on $(\Omega,\mathfrak{F}_s,P)$.

Proof. To prove the first equality in (1), note that by the martingale property of $M$ and $N$ we have

$$\mathrm{E}[(M_t - M_s)(N_t - N_s)|\mathfrak{F}_s] = \mathrm{E}[M_t N_t - M_s N_t - M_t N_s + M_s N_s|\mathfrak{F}_s] = \mathrm{E}[M_t N_t - M_s N_s|\mathfrak{F}_s]$$

a.e. on $(\Omega,\mathfrak{F}_s,P)$. To prove the second equality in (1), note that since $MN - [M,N]$ is a martingale, we have

$$\mathrm{E}[\{M_t N_t - [M,N]_t\} - \{M_s N_s - [M,N]_s\}|\mathfrak{F}_s] = 0 \quad \text{a.e. on } (\Omega,\mathfrak{F}_s,P).$$

This proves the second equality in (1). ∎

Quadratic variation processes have the following algebraic properties.

Proposition 11.26. Let $M, M', M'', N \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. Then
(1) $[M] = 0 \Leftrightarrow M = 0$,
(2) $[M,M] = [M]$,
(3) $[M,N] = [N,M]$,
(4) $[c'M' + c''M'', N] = c'[M',N] + c''[M'',N]$ for $c', c'' \in \mathbb{R}$,
(5) $[cM] = c^2[M]$ for $c \in \mathbb{R}$,
(6) $[M,N] = 4^{-1}[M+N] - 4^{-1}[M-N]$.
Proof. 1) If $M = 0$, then clearly 0 is a quadratic variation process of $M$. Conversely suppose $[M] = 0$. By the definition of $[M]$, $X = M^2 - [M]$ is a right-continuous null at 0 martingale. Thus when $[M] = 0$, then for any $t \in \mathbb{R}_+$ we have $\mathrm{E}(M_t^2) = \mathrm{E}(X_t) + \mathrm{E}([M]_t) = \mathrm{E}(X_0) + \mathrm{E}(0) = 0$, and therefore there exists a null set $\Lambda_t$ in $(\Omega,\mathfrak{F},P)$ such that $M_t = 0$ on $\Lambda_t^c$. Let $\{r_m : m \in \mathbb{N}\}$ be the collection of all rational numbers in $\mathbb{R}_+$ and let $\Lambda = \bigcup_{m\in\mathbb{N}} \Lambda_{r_m}$. Then for the null set $\Lambda$ we have $M(r_m,\omega) = 0$ for all $m \in \mathbb{N}$ when $\omega \in \Lambda^c$. Then by the right-continuity of $M(\cdot,\omega)$ we have $M(t,\omega) = 0$ for all $t \in \mathbb{R}_+$ when $\omega \in \Lambda^c$.
2) Since an almost surely increasing process is a particular case of an almost surely locally bounded variation process, we have (2).
3) (3) follows from the fact that $MN = NM$.
4) (4) follows from the definition of $[\cdot,\cdot]$ and the fact that we have $c'M' + c''M'' \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. (5) is a particular case of (4).
5) With $A'$, $A''$, $M'$, $M''$ as defined in the proof of Proposition 11.22, we have

$$[M,N] = A' - A'' = [M'] - [M''] = [2^{-1}(M+N)] - [2^{-1}(M-N)] = 4^{-1}[M+N] - 4^{-1}[M-N],$$

proving (6). ∎

Let us consider the continuity of a quadratic variation process. Recall that if $M \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ then $M^2$ is a right-continuous submartingale of class (DL) by Theorem 8.22. Then according to Theorem 10.35, under the assumption that the filtered space is a standard filtered space, $[M]$ is almost surely continuous if and only if $M^2$ is a regular submartingale. In Proposition 11.27 and Proposition 11.30, we give sufficient conditions for $M^2$ to be regular.

Proposition 11.27. Let $M, N \in \mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. Then $M^2$ and $N^2$ are regular submartingales and $[M]$, $[N]$ and $[M,N]$ are almost surely continuous.

Proof. $M^2$ is a right-continuous submartingale of class (DL) by Theorem 8.22. It is also a continuous nonnegative submartingale on a right-continuous filtered space so that it is a regular submartingale by Observation 10.28. Therefore $[M]$ is almost surely continuous by Theorem 10.35. Similarly for $[N]$. Now since both $2^{-1}(M+N)$ and $2^{-1}(M-N)$ are in $\mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, both $[2^{-1}(M+N)]$ and $[2^{-1}(M-N)]$ are almost surely continuous by our result above. Then by (6) of Proposition 11.26, $[M,N]$ is almost surely continuous. ∎
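The polarization identity (6) is purely algebraic, and its discrete analogue holds exactly for the realized quadratic (co)variation built from martingale increments. The following check is illustrative only and is not part of the text; the coin-toss increment sequences stand in for the increments of two square-integrable martingales.

```python
import random

random.seed(1)
n = 1000
# increments of two discrete-time L2-martingales: scaled coin tosses
dM = [random.choice((-1.0, 1.0)) * 0.03 for _ in range(n)]
dN = [random.choice((-1.0, 1.0)) * 0.05 for _ in range(n)]

qv_MN = sum(m * v for m, v in zip(dM, dN))            # realized [M, N]
qv_plus = sum((m + v) ** 2 for m, v in zip(dM, dN))   # realized [M + N]
qv_minus = sum((m - v) ** 2 for m, v in zip(dM, dN))  # realized [M - N]

# property (6): [M, N] = 4^{-1}[M + N] - 4^{-1}[M - N]
print(qv_MN, 0.25 * (qv_plus - qv_minus))
```

The agreement is exact up to floating-point rounding, since $(m+v)^2 - (m-v)^2 = 4mv$ term by term.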
Definition 11.28. A filtration $\{\mathfrak{F}_t : t \in \mathbb{R}_+\}$ on a probability space $(\Omega,\mathfrak{F},P)$ is said to have no time of discontinuity if for any stopping times $T_n$, $n \in \mathbb{N}$, and $T$ with respect to the filtration such that $T_n \uparrow T$ on $\Omega$ as $n \to \infty$ we have $\mathfrak{F}_T = \sigma\bigl(\bigcup_{n\in\mathbb{N}} \mathfrak{F}_{T_n}\bigr)$.

Lemma 11.29. Let $M \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ where the filtered space is a standard filtered space with no time of discontinuity. Then $M^2$ is a regular submartingale, that is, for any bounded stopping times $T_n$, $n \in \mathbb{N}$, and $T$ such that $T_n \uparrow T$ on $\Omega$ we have

(1) $\lim_{n\to\infty} \mathrm{E}(M_{T_n}^2) = \mathrm{E}(M_T^2)$.

Proof. Let $X_n = \mathrm{E}[M_T|\mathfrak{F}_{T_n}]$ for $n \in \mathbb{N}$ and let $X_\infty = \mathrm{E}[M_T|\mathfrak{F}_{T_\infty}]$ where $\mathfrak{F}_{T_\infty} = \sigma\bigl(\bigcup_{n\in\mathbb{N}} \mathfrak{F}_{T_n}\bigr)$. By the Optional Sampling Theorem, $X_n = M_{T_n}$ for $n \in \mathbb{N}$, and since the filtration has no time of discontinuity, $\mathfrak{F}_{T_\infty} = \mathfrak{F}_T$ so that $X_\infty = M_T$. By the martingale convergence theorem, $X_n \to X_\infty$ a.e. on $\Omega$, that is, $M_{T_n} \to M_T$ a.e., so that by Fatou's Lemma

(2) $\mathrm{E}(M_T^2) = \mathrm{E}(\lim_{n\to\infty} M_{T_n}^2) \le \liminf_{n\to\infty} \mathrm{E}(M_{T_n}^2)$.

On the other hand, since $\{X_n : n \in \mathbb{N}\}$ is a martingale with $X_\infty$ as a final element, $\{X_n^2 : n \in \mathbb{N}\}$ is a submartingale with $X_\infty^2$ as a final element. Thus $\mathrm{E}(M_{T_n}^2) \uparrow$ as $n \to \infty$ and $\mathrm{E}(M_{T_n}^2) \le \mathrm{E}(M_T^2)$. Thus (1) follows from (2). ∎

Proposition 11.30. Let $M, N \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ where the filtered space is a standard filtered space with no time of discontinuity. Then $M^2$ and $N^2$ are regular submartingales and $[M]$, $[N]$ and $[M,N]$ are almost surely continuous.

Proof. By Lemma 11.29, $M^2$ and $N^2$ are regular right-continuous submartingales of class (DL). Thus by Theorem 10.35, $[M]$ and $[N]$ are almost surely continuous. Since $2^{-1}(M+N)$ and $2^{-1}(M-N)$ are in $\mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, $[2^{-1}(M+N)]$ and $[2^{-1}(M-N)]$ are almost surely continuous for the same reason as above. Then by (6) of Proposition 11.26, $[M,N]$ is almost surely continuous. ∎

For $M, N \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, the quadratic variation processes $[M]$ and $[M,N]$ are in $\mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and $\mathbf{V}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ respectively, and thus $|[M,N]|$ is in
$\mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ by Lemma 11.11, and the families of Lebesgue-Stieltjes measures $\mu_{[M]}$ and $\mu_{|[M,N]|}$ are defined on $(\mathbb{R}_+,\mathfrak{B}_{\mathbb{R}_+})$ and the family of signed Lebesgue-Stieltjes measures $\mu_{[M,N]}$ is defined on $([0,t],\mathfrak{B}_{[0,t]})$ for every $t \in \mathbb{R}_+$ according to Definition 11.14. For a real valued function $X$ on $\mathbb{R}_+ \times \Omega$ such that $X(\cdot,\omega)$ is a Borel measurable function on $\mathbb{R}_+$, we define for every $t \in \mathbb{R}_+$ and $\omega \in \Omega$

$$\int_{[0,t]} X(s,\omega)\,d[M](s,\omega) = \int_{[0,t]} X(s,\omega)\,\mu_{[M]}(ds,\omega)$$

and

$$\int_{[0,t]} X(s,\omega)\,d[M,N](s,\omega) = \int_{[0,t]} X(s,\omega)\,\mu_{[M,N]}(ds,\omega),$$

provided the integrals with respect to the signed Lebesgue-Stieltjes measure on the right side exist. Note that by (6) of Proposition 11.26, we have

$$\mu_{[M,N]} = 4^{-1}\mu_{[M+N]} - 4^{-1}\mu_{[M-N]}$$
on $([0,t],\mathfrak{B}_{[0,t]})$.

Theorem 11.31. Let $A \in \mathbf{A}_c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. For $t \in \mathbb{R}_+$ and $n \in \mathbb{N}$, let $\Delta_n$ be the partition of $[0,t]$ by $0 = t_{n,0} < \cdots < t_{n,p_n} = t$. Let $|\Delta_n| = \max_{k=1,\dots,p_n}(t_{n,k} - t_{n,k-1})$ and $\lim_{n\to\infty} |\Delta_n| = 0$. Then

(1) $\lim_{n\to\infty} \bigl\| \sum_{k=1}^{p_n} \mathrm{E}[A_{t_{n,k}} - A_{t_{n,k-1}}|\mathfrak{F}_{t_{n,k-1}}] - A_t \bigr\|_1 = 0$.

If $A_t \in L_2(\Omega,\mathfrak{F},P)$, then

(2) $\lim_{n\to\infty} \bigl\| \sum_{k=1}^{p_n} \mathrm{E}[A_{t_{n,k}} - A_{t_{n,k-1}}|\mathfrak{F}_{t_{n,k-1}}] - A_t \bigr\|_2 = 0$.
Proof. Let us prove (2) first. For brevity let us write

(3) $\alpha_{n,k} = A_{t_{n,k}} - A_{t_{n,k-1}}$ for $k = 1,\dots,p_n$, $\beta_{n,k} = \mathrm{E}[\alpha_{n,k}|\mathfrak{F}_{t_{n,k-1}}]$ for $k = 1,\dots,p_n$, and $S_n = \sum_{k=1}^{p_n} \beta_{n,k}$ for $n \in \mathbb{N}$.

Then

(4) $\mathrm{E}[\{S_n - A_t\}^2] = \mathrm{E}\bigl[\bigl\{\sum_{k=1}^{p_n}(\alpha_{n,k} - \beta_{n,k})\bigr\}^2\bigr] = \sum_{i,k=1,\dots,p_n} \mathrm{E}[(\alpha_{n,i} - \beta_{n,i})(\alpha_{n,k} - \beta_{n,k})]$.
Now for $i < k$ we have

$$\mathrm{E}[(\alpha_{n,i} - \beta_{n,i})(\alpha_{n,k} - \beta_{n,k})] = \mathrm{E}\bigl[(\alpha_{n,i} - \beta_{n,i})\,\mathrm{E}[\alpha_{n,k} - \beta_{n,k}|\mathfrak{F}_{t_{n,k-1}}]\bigr] = 0,$$

since $\alpha_{n,i} - \beta_{n,i}$ is $\mathfrak{F}_{t_{n,k-1}}$-measurable and $\mathrm{E}[\alpha_{n,k}|\mathfrak{F}_{t_{n,k-1}}] = \beta_{n,k}$. Also $\mathrm{E}[(\alpha_{n,k} - \beta_{n,k})^2] = \mathrm{E}[\alpha_{n,k}^2] - \mathrm{E}[\beta_{n,k}^2] \le \mathrm{E}[\alpha_{n,k}^2]$. Thus

(5) $\mathrm{E}[\{S_n - A_t\}^2] = \sum_{k=1}^{p_n} \mathrm{E}[(\alpha_{n,k} - \beta_{n,k})^2] \le \sum_{k=1}^{p_n} \mathrm{E}[\{A_{t_{n,k}} - A_{t_{n,k-1}}\}^2] \le \mathrm{E}\bigl[\sup_{k=1,\dots,p_n}\{A_{t_{n,k}} - A_{t_{n,k-1}}\}\,A_t\bigr]$.

Now $\lim_{n\to\infty} \sup_{k=1,\dots,p_n}\{A_{t_{n,k}} - A_{t_{n,k-1}}\} = 0$ a.e. in $(\Omega,\mathfrak{F},P)$ since $A(\cdot,\omega)$ is uniformly continuous on $[0,t]$ for almost every $\omega \in \Omega$. Also since almost every sample function of $A$ is an increasing function, we have $\sup_{k=1,\dots,p_n}\{A_{t_{n,k}} - A_{t_{n,k-1}}\}\,A_t \le A_t^2$ a.e. in $(\Omega,\mathfrak{F},P)$. Then since $A_t \in L_2(\Omega,\mathfrak{F},P)$, the limit as $n \to \infty$ of the last member in (5) is equal to 0 by the Dominated Convergence Theorem. Thus $\lim_{n\to\infty} \mathrm{E}[\{S_n - A_t\}^2] = 0$. This proves (2).

Consider now the general case where $A_t \in L_1(\Omega,\mathfrak{F},P)$. For $m \in \mathbb{N}$, let $A^{(m)} = A \wedge m$. Then both $A^{(m)}$ and $A - A^{(m)}$ are in $\mathbf{A}_c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and $A^{(m)}$ is bounded by $m$. Let us write $A = A^{(m)} + \{A - A^{(m)}\}$ and let

(6) $S_n' = \sum_{k=1}^{p_n} \mathrm{E}[A^{(m)}_{t_{n,k}} - A^{(m)}_{t_{n,k-1}}|\mathfrak{F}_{t_{n,k-1}}]$ and $S_n'' = \sum_{k=1}^{p_n} \mathrm{E}[(A - A^{(m)})_{t_{n,k}} - (A - A^{(m)})_{t_{n,k-1}}|\mathfrak{F}_{t_{n,k-1}}]$ for $n \in \mathbb{N}$.
Then $A_t = A^{(m)}_t + (A - A^{(m)})_t$ and $S_n = S_n' + S_n''$. Thus

(7) $\mathrm{E}(|S_n - A_t|) \le \mathrm{E}(|S_n' - A^{(m)}_t|) + \mathrm{E}(S_n'') + \mathrm{E}[(A - A^{(m)})_t]$

since $(A - A^{(m)})_t \ge 0$ and $S_n'' \ge 0$ a.e. Now since $A^{(m)} \in \mathbf{A}_c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and $\mathrm{E}[(A^{(m)}_t)^2] \le m^2 < \infty$, we have $\lim_{n\to\infty} \mathrm{E}[\{S_n' - A^{(m)}_t\}^2] = 0$ by (2) and therefore $\lim_{n\to\infty} \mathrm{E}(|S_n' - A^{(m)}_t|) = 0$. Note that the second and third expectations on the right side of (7) are equal since by (6) we have

$$\mathrm{E}(S_n'') = \sum_{k=1}^{p_n} \mathrm{E}[(A - A^{(m)})_{t_{n,k}} - (A - A^{(m)})_{t_{n,k-1}}] = \mathrm{E}[(A - A^{(m)})_t].$$

Now

$$\mathrm{E}[(A - A^{(m)})_t] = \int_{\{A_t > m\}} \{A_t - A^{(m)}_t\}\,dP \le \int_{\{A_t > m\}} A_t\,dP.$$

Let $\varepsilon > 0$ be arbitrarily given. Since $A_t$ is integrable, we have $\int_{\{A_t > m\}} A_t\,dP < \varepsilon$ for sufficiently large $m \in \mathbb{N}$. For such $m$ we have $\mathrm{E}(|S_n - A_t|) \le \mathrm{E}(|S_n' - A^{(m)}_t|) + 2\varepsilon$ by (7). Then we have $\limsup_{n\to\infty} \mathrm{E}(|S_n - A_t|) \le 2\varepsilon$. From the arbitrariness of $\varepsilon > 0$ we have $\lim_{n\to\infty} \mathrm{E}(|S_n - A_t|) = 0$. This proves (1). ∎
Theorem 11.32. Let $M, N \in \mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. For $t \in \mathbb{R}_+$ and $n \in \mathbb{N}$, let $\Delta_n$ be the partition of $[0,t]$ by $0 = t_{n,0} < \cdots < t_{n,p_n} = t$. Let $|\Delta_n| = \max_{k=1,\dots,p_n}(t_{n,k} - t_{n,k-1})$ and $\lim_{n\to\infty}|\Delta_n| = 0$. Then

(1) $\lim_{n\to\infty} \bigl\| \sum_{k=1}^{p_n} \mathrm{E}[\{M_{t_{n,k}} - M_{t_{n,k-1}}\}\{N_{t_{n,k}} - N_{t_{n,k-1}}\}|\mathfrak{F}_{t_{n,k-1}}] - [M,N]_t \bigr\|_1 = 0$,

and in particular

(2) $\lim_{n\to\infty} \bigl\| \sum_{k=1}^{p_n} \mathrm{E}[\{M_{t_{n,k}} - M_{t_{n,k-1}}\}^2|\mathfrak{F}_{t_{n,k-1}}] - [M]_t \bigr\|_1 = 0$.
Proof. Since $M \in \mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ we have $[M] \in \mathbf{A}_c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ by Proposition 11.27. Applying (2) of Lemma 11.25 and identifying $[M]$ with $A$ in Theorem 11.31, we have (2). To derive (1) from (2), note that for any $s, u \in \mathbb{R}_+$ such that $s < u$ we have $(M_u - M_s)(N_u - N_s)$
$= \tfrac14[\{(M_u - M_s) + (N_u - N_s)\}^2 - \{(M_u - M_s) - (N_u - N_s)\}^2] = \tfrac14[\{(M+N)_u - (M+N)_s\}^2 - \{(M-N)_u - (M-N)_s\}^2]$.

Also by Proposition 11.26 we have

$$[M,N]_t = \tfrac14\{[M+N]_t - [M-N]_t\}.$$

By these two equalities we have

$$\Bigl\| \sum_{k=1}^{p_n} \mathrm{E}[\{M_{t_{n,k}} - M_{t_{n,k-1}}\}\{N_{t_{n,k}} - N_{t_{n,k-1}}\}|\mathfrak{F}_{t_{n,k-1}}] - [M,N]_t \Bigr\|_1$$
$$\le \tfrac14 \Bigl\| \sum_{k=1}^{p_n} \mathrm{E}[\{(M+N)(t_{n,k}) - (M+N)(t_{n,k-1})\}^2|\mathfrak{F}_{t_{n,k-1}}] - [M+N]_t \Bigr\|_1 + \tfrac14 \Bigl\| \sum_{k=1}^{p_n} \mathrm{E}[\{(M-N)(t_{n,k}) - (M-N)(t_{n,k-1})\}^2|\mathfrak{F}_{t_{n,k-1}}] - [M-N]_t \Bigr\|_1.$$

By (2) the two terms on the right side of the last inequality tend to 0 as $n \to \infty$. This proves (1). ∎

In connection with Theorem 11.32, let us remark that we have also

$$\lim_{n\to\infty} \Bigl\| \sum_{k=1}^{p_n} \{M_{t_{n,k}} - M_{t_{n,k-1}}\}\{N_{t_{n,k}} - N_{t_{n,k-1}}\} - [M,N]_t \Bigr\|_1 = 0.$$

This equality will be proved in §12.
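The convergence just remarked on can be seen numerically for Brownian motion, for which it is well known that $[B]_t = t$: the sum of squared increments over refining partitions of $[0,1]$ approaches 1. The simulation below is illustrative only and not part of the text.

```python
import random

random.seed(2)
t = 1.0
n_fine = 2 ** 14   # finest dyadic grid on [0, t]

# one Brownian sample path on the fine grid
dB = [random.gauss(0.0, (t / n_fine) ** 0.5) for _ in range(n_fine)]
B = [0.0]
for dx in dB:
    B.append(B[-1] + dx)

def realized_qv(path, p):
    # sum of squared increments over the partition of [0, t] into p equal steps
    stride = (len(path) - 1) // p
    return sum((path[(j + 1) * stride] - path[j * stride]) ** 2
               for j in range(p))

for k in (2, 6, 10, 14):
    p = 2 ** k
    print(p, realized_qv(B, p))   # tends toward [B]_1 = 1 as the mesh refines
```

The fluctuation of the finest-mesh sum around 1 is of order $n^{-1/2}$, consistent with the $L_1$ (indeed $L_2$) convergence asserted by the theorem.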
§12 Stochastic Integrals with Respect to Martingales

Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a stochastic process and $M = \{M_t : t \in \mathbb{R}_+\}$ be a martingale on a filtered space $(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. An integral of $X(\cdot,\omega)$ with respect to $M(\cdot,\omega)$ on an interval $[0,t]$, $\int_{[0,t]} X(s,\omega)\,dM(s,\omega)$ for $t \in \mathbb{R}_+$ and $\omega \in \Omega$, cannot be defined as a Lebesgue-Stieltjes integral of $X(\cdot,\omega)$ with respect to a signed Lebesgue-Stieltjes measure on $([0,t],\mathfrak{B}_{[0,t]})$ since $M(\cdot,\omega)$ may not be a function of locally bounded variation on $\mathbb{R}_+$ and a corresponding signed Lebesgue-Stieltjes measure may not exist. If however the
sample functions of $X$ are step functions, then $\int_{[0,t]} X(s,\omega)\,dM(s,\omega)$ can be defined as a Riemann-Stieltjes sum of $X(\cdot,\omega)$ with respect to $M(\cdot,\omega)$ for every $\omega \in \Omega$. We shall show that if $X$ is a bounded adapted left-continuous simple process and $M$ is a right-continuous $L_2$-martingale on a standard filtered space, then the Riemann-Stieltjes sum of $X$ with respect to $M$ is a right-continuous $L_2$-martingale on the filtered space. We then extend the definition to cover processes $X$ which are predictable processes and satisfy the integrability conditions of the space $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$.
[I] Stochastic Integral of Bounded Left-Continuous Adapted Simple Processes with Respect to $L_2$-Martingales

Definition 12.1. Let $\mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ be the collection of all bounded adapted left-continuous simple processes on a standard filtered space $(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, that is, for every $X \in \mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ there exist a strictly increasing sequence $\{t_k : k \in \mathbb{Z}_+\}$ in $\mathbb{R}_+$ with $t_0 = 0$ and $\lim_{k\to\infty} t_k = \infty$ and a bounded sequence of real valued random variables $\{\xi_k : k \in \mathbb{Z}_+\}$, that is, $|\xi_k(\omega)| \le K$ for all $\omega \in \Omega$ and $k \in \mathbb{Z}_+$ for some $K > 0$, such that $\xi_0$ is $\mathfrak{F}_{t_0}$-measurable, $\xi_k$ is $\mathfrak{F}_{t_{k-1}}$-measurable for $k \in \mathbb{N}$ and $X$ is given by

(1) $X(t,\omega) = \xi_k(\omega)$ for $t \in (t_{k-1},t_k]$, $k \in \mathbb{N}$, $\omega \in \Omega$, and $X(0,\omega) = \xi_0(\omega)$ for $\omega \in \Omega$,

that is,

(2) $X(t,\omega) = \xi_0(\omega)\mathbf{1}_{\{0\}}(t) + \sum_{k\in\mathbb{N}} \xi_k(\omega)\mathbf{1}_{(t_{k-1},t_k]}(t)$ for $(t,\omega) \in \mathbb{R}_+ \times \Omega$.
Note that in the definition of $\mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, the probability measure $P$ of the underlying standard filtered space $(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ plays no role.

Observation 12.2. 1) Every $X \in \mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ is a predictable process.
2) $\mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P) \subset L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$ for every $M$ in $\mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$.

Proof. 1) Since $X$ is a left-continuous adapted process on the filtered space, it is a predictable process on the filtered space.
2) If $X \in \mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, then $X$ is a left-continuous process so that it is a measurable process on $(\Omega,\mathfrak{F},P)$ by Theorem 2.10. Also for every $t \in \mathbb{R}_+$, say $t \in [t_{k-1},t_k]$ for
some $k \in \mathbb{N}$, we have

$$\mathrm{E}\Bigl[\int_{[0,t]} X^2(s)\,d[M](s)\Bigr] \le \mathrm{E}\Bigl[\int_{[0,t_k]} X^2(s)\,d[M](s)\Bigr] = \mathrm{E}\Bigl[\sum_{i=1}^{k} \xi_i^2\{[M]_{t_i} - [M]_{t_{i-1}}\}\Bigr] \le K^2\,\mathrm{E}[[M]_{t_k}] < \infty,$$

where $K$ is a bound of the sequence of random variables $\{\xi_k : k \in \mathbb{Z}_+\}$. This shows that $X \in L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$. ∎
Definition 12.3. Let $M \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. For $X \in \mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ given by

(1) $X(t) = \xi_0\mathbf{1}_{\{0\}}(t) + \sum_{k\in\mathbb{N}} \xi_k\mathbf{1}_{(t_{k-1},t_k]}(t)$ for $t \in \mathbb{R}_+$

as in Definition 12.1, we define a function $X \bullet M$ on $\mathbb{R}_+ \times \Omega$ by setting

(2) $(X \bullet M)(t) = \sum_{i=1}^{k-1} \xi_i\{M(t_i) - M(t_{i-1})\} + \xi_k\{M(t) - M(t_{k-1})\}$ for $t \in [t_{k-1},t_k]$ and $k \in \mathbb{N}$, with the understanding that $\sum_{i=1}^{0} = 0$, that is,

(3) $(X \bullet M)(t) = \sum_{i\in\mathbb{N}} \xi_i\{M(t_i \wedge t) - M(t_{i-1} \wedge t)\}$ for $t \in \mathbb{R}_+$.

We call $X \bullet M$ the stochastic integral of $X$ with respect to $M$. Note that $X(0)$ and $\xi_0$ play no roles in the definition of $X \bullet M$ and

$$(X \bullet M)(0) = \xi_1\{M(0) - M(0)\} = 0, \qquad (X \bullet M)(t_k) = \sum_{i=1}^{k} \xi_i\{M(t_i) - M(t_{i-1})\} \quad \text{for } k \in \mathbb{N}.$$

Note also that $X \bullet M$ is well-defined, that is, it does not depend on the different ways an element $X$ of $\mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ can be expressed in the form (1).

Proposition 12.4. Let $X \in \mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and $M \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. Then for $X \bullet M$ we have
1) $X \bullet M \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$,
2) $X \bullet M \in \mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ if $M \in \mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$.
3) If $Y \in \mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and $N \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, then for every $t \in \mathbb{R}_+$ we have

(1) $\mathrm{E}[(X \bullet M)_t(Y \bullet N)_t] = \mathrm{E}\bigl[\int_{[0,t]} X(s)Y(s)\,d[M,N](s)\bigr]$

and in particular

(2) $|X \bullet M|_t = \mathrm{E}\bigl[\int_{[0,t]} X^2(s)\,d[M](s)\bigr]^{1/2} = \|X\|_{2,t}^{[M],P}$

and

(3) $|X \bullet M|_\infty = \|X\|_{2,\infty}^{[M],P}$.

Proof. By (2) of Definition 12.3, every sample function of $X \bullet M$ is right-continuous since $M$ has this property. Also $(X \bullet M)(0) = 0$ on $\Omega$. Since $\xi_i$ is $\mathfrak{F}_{t_{i-1}}$-measurable for $i \in \mathbb{N}$, $(X \bullet M)(t)$ is $\mathfrak{F}_t$-measurable for every $t \in \mathbb{R}_+$, that is, $X \bullet M$ is an adapted process. Since $M$ is an $L_2$-process and $\xi_i$ is a bounded random variable for $i \in \mathbb{N}$, $X \bullet M$ is an $L_2$-process. To show that $X \bullet M$ is a martingale, we show that for every pair $s, t \in \mathbb{R}_+$ such that $s < t$ we have

(4) $\mathrm{E}[(X \bullet M)(t) - (X \bullet M)(s)|\mathfrak{F}_s] = 0$ a.e. on $(\Omega,\mathfrak{F}_s,P)$.

We may add two more points $t_k$ in the expression of $X$ by (2) of Definition 12.1 if necessary so that $s = t_k$ and $t = t_{k+p}$ for some $k \in \mathbb{Z}_+$ and $p \in \mathbb{N}$. Then

$$(X \bullet M)(t) - (X \bullet M)(s) = \sum_{i=k+1}^{k+p} \xi_i\{M(t_i) - M(t_{i-1})\}.$$

Now for $i = k+1,\dots,k+p$, we have

$$\mathrm{E}[\xi_i\{M(t_i) - M(t_{i-1})\}|\mathfrak{F}_s] = \mathrm{E}\bigl[\mathrm{E}[\xi_i\{M(t_i) - M(t_{i-1})\}|\mathfrak{F}_{t_{i-1}}]\big|\mathfrak{F}_s\bigr] = \mathrm{E}\bigl[\xi_i\,\mathrm{E}[M(t_i) - M(t_{i-1})|\mathfrak{F}_{t_{i-1}}]\big|\mathfrak{F}_s\bigr] = 0$$

by the martingale property of $M$. This proves (4), and thus $X \bullet M$ is a martingale, proving 1). Since by (2) of Definition 12.3 every sample function of $X \bullet M$ is continuous when every sample function of $M$ is continuous, 2) follows. To prove 3), we may assume that $X$ and $Y$ are given in terms of a common sequence $\{t_k : k \in \mathbb{Z}_+\}$ as

(5) $X(t) = \xi_0\mathbf{1}_{\{0\}}(t) + \sum_{k\in\mathbb{N}} \xi_k\mathbf{1}_{(t_{k-1},t_k]}(t)$ for $t \in \mathbb{R}_+$
and

(6) $Y(t) = \eta_0\mathbf{1}_{\{0\}}(t) + \sum_{k\in\mathbb{N}} \eta_k\mathbf{1}_{(t_{k-1},t_k]}(t)$ for $t \in \mathbb{R}_+$,

where $\{\xi_k : k \in \mathbb{Z}_+\}$ and $\{\eta_k : k \in \mathbb{Z}_+\}$ are two bounded sequences of random variables on $(\Omega,\mathfrak{F},P)$ such that $\xi_0, \eta_0$ are $\mathfrak{F}_{t_0}$-measurable and $\xi_k, \eta_k$ are $\mathfrak{F}_{t_{k-1}}$-measurable for $k \in \mathbb{N}$. Let $t \in (0,\infty)$ be given. We add another point to the sequence $\{t_k : k \in \mathbb{Z}_+\}$ if necessary so that our $t = t_k$ for some $k \in \mathbb{N}$. For brevity, let $\alpha_i = M(t_i) - M(t_{i-1})$ and $\beta_i = N(t_i) - N(t_{i-1})$ for $i \in \mathbb{N}$. Then $(X \bullet M)_t = \sum_{i=1}^{k} \xi_i\alpha_i$ and $(Y \bullet N)_t = \sum_{j=1}^{k} \eta_j\beta_j$ and therefore

(7) $\mathrm{E}[(X \bullet M)_t(Y \bullet N)_t] = \sum_{i=1}^{k}\sum_{j=1}^{k} \mathrm{E}[\xi_i\eta_j\alpha_i\beta_j]$.

Now for $i = j$, we have

$$\mathrm{E}[\xi_i\eta_i\alpha_i\beta_i] = \mathrm{E}[\mathrm{E}[\xi_i\eta_i\alpha_i\beta_i|\mathfrak{F}_{t_{i-1}}]] = \mathrm{E}[\xi_i\eta_i\,\mathrm{E}[\alpha_i\beta_i|\mathfrak{F}_{t_{i-1}}]] \quad \text{by the } \mathfrak{F}_{t_{i-1}}\text{-measurability of } \xi_i\eta_i$$
$$= \mathrm{E}[\xi_i\eta_i\,\mathrm{E}[[M,N]_{t_i} - [M,N]_{t_{i-1}}|\mathfrak{F}_{t_{i-1}}]] \quad \text{by Lemma 11.25}$$
$$= \mathrm{E}[\mathrm{E}[\xi_i\eta_i\{[M,N]_{t_i} - [M,N]_{t_{i-1}}\}|\mathfrak{F}_{t_{i-1}}]] = \mathrm{E}[\xi_i\eta_i\{[M,N]_{t_i} - [M,N]_{t_{i-1}}\}].$$

On the other hand for $i \ne j$, say $i < j$, we have

$$\mathrm{E}[\xi_i\eta_j\alpha_i\beta_j] = \mathrm{E}[\mathrm{E}[\xi_i\eta_j\alpha_i\beta_j|\mathfrak{F}_{t_{j-1}}]] = \mathrm{E}[\xi_i\eta_j\alpha_i\,\mathrm{E}[\beta_j|\mathfrak{F}_{t_{j-1}}]] \quad \text{by the } \mathfrak{F}_{t_{j-1}}\text{-measurability of } \xi_i, \eta_j, \alpha_i$$
$$= 0 \quad \text{by the martingale property of } N.$$

Using these computations in (7), we have

$$\mathrm{E}[(X \bullet M)_t(Y \bullet N)_t] = \mathrm{E}\Bigl[\sum_{i=1}^{k} \xi_i\eta_i\{[M,N]_{t_i} - [M,N]_{t_{i-1}}\}\Bigr] = \mathrm{E}\Bigl[\int_{[0,t]} X(s)Y(s)\,d[M,N](s)\Bigr]$$

since the sample functions of $XY$ are step functions. This proves (1). In particular when $X = Y$ and $M = N$, (1) reduces to

(8) $\mathrm{E}[(X \bullet M)_t^2] = \mathrm{E}\bigl[\int_{[0,t]} X^2(s)\,d[M](s)\bigr]$.
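The Riemann-Stieltjes sum of Definition 12.3 and the isometry identity it leads to can be illustrated numerically. The sketch below is not part of the text: it takes $M$ to be a simulated Brownian motion (so that $[M]_s = s$), uses the bounded predictable integrand $\xi_k = \operatorname{sign}(M_{t_{k-1}})$, and compares Monte Carlo estimates of $\mathrm{E}[(X \bullet M)_T^2]$ and $\mathrm{E}[\int_{[0,T]} X^2(s)\,d[M](s)]$.

```python
import random

random.seed(3)
n_steps, n_paths, T = 16, 4000, 1.0
dt = T / n_steps

lhs_sq = 0.0   # accumulates (X . M)_T^2 over sample paths
rhs_int = 0.0  # accumulates int_{[0,T]} X^2(s) d[M](s) over sample paths
for _ in range(n_paths):
    M = 0.0
    integral = 0.0
    qv_integral = 0.0
    for k in range(n_steps):
        xi = 1.0 if M >= 0.0 else -1.0   # known at t_{k-1}, bounded by 1
        dM = random.gauss(0.0, dt ** 0.5)
        integral += xi * dM              # Riemann-Stieltjes sum of Definition 12.3
        qv_integral += xi * xi * dt      # d[M](s) = ds for Brownian motion
        M += dM
    lhs_sq += integral ** 2
    rhs_int += qv_integral

# the two Monte Carlo estimates should nearly agree (both are close to T)
print(lhs_sq / n_paths, rhs_int / n_paths)
```

Because $|\xi_k| = 1$ here, the right side equals $T$ exactly, so the left side also hovers near $T$ up to Monte Carlo error.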
Recalling the definition of $|\cdot|_t$ in Definition 11.4 and the definition of $\|\cdot\|_{2,t}^{[M],P}$ in Definition 11.17, we have (2). By Definition 11.3 and Definition 11.17 again we have

$$|X \bullet M|_\infty = \sum_{m\in\mathbb{N}} 2^{-m}\{|X \bullet M|_m \wedge 1\} = \sum_{m\in\mathbb{N}} 2^{-m}\{\|X\|_{2,m}^{[M],P} \wedge 1\} = \|X\|_{2,\infty}^{[M],P},$$

proving (3). ∎

As we noted in Observation 12.2, $\mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P) \subset L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$ for every $M \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. According to (3) of Proposition 12.4, the mapping of $X \in \mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ into $X \bullet M \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ is an isometry with respect to the metric associated with the quasinorm $\|\cdot\|_{2,\infty}^{[M],P}$ on $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$ and the metric associated with the quasinorm $|\cdot|_\infty$ on $\mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. We shall show in Theorem 12.8 that if $X$ is a predictable process on the filtered space $(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and if $X$ is also in $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$, then there exists a sequence $\{X^{(n)} : n \in \mathbb{N}\}$ in $\mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ such that $\lim_{n\to\infty} \|X^{(n)} - X\|_{2,\infty}^{[M],P} = 0$. Now $\{X^{(n)} : n \in \mathbb{N}\}$ is a Cauchy sequence in $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$ and the isometry implies that $\{X^{(n)} \bullet M : n \in \mathbb{N}\}$ is a Cauchy sequence in $\mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. Then the completeness of $\mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ implies that there exists some $Y \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ such that $\lim_{n\to\infty} |X^{(n)} \bullet M - Y|_\infty = 0$. We then define $X \bullet M = Y$.

According to Proposition 12.4, $X \bullet M$ and $Y \bullet N$ are in $\mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ for $X$ and $Y$ in $\mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and thus the quadratic variation processes $[X \bullet M]$, $[Y \bullet N]$, and $[X \bullet M, Y \bullet N]$ exist. The next theorem shows that these processes are obtained by integrating the sample functions of $X^2$, $Y^2$, and $XY$ with respect to the families of Lebesgue-Stieltjes measures $\mu_{[M]}$, $\mu_{[N]}$, and $\mu_{[M,N]}$ determined by the quadratic variation processes $[M]$, $[N]$, and $[M,N]$.

Proposition 12.5. Let $M, N \in \mathbf{M}_2(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and $X, Y \in \mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. Then there exists a null set $\Lambda$ in $(\Omega,\mathfrak{F},P)$ such that on $\Lambda^c$, for every $t \in \mathbb{R}_+$, we have

(1) $[X \bullet M, Y \bullet N]_t = \int_{[0,t]} X(s)Y(s)\,d[M,N](s)$,

and thus for any $s, t \in \mathbb{R}_+$, $s < t$, we have

(2) $\mathrm{E}[\{(X \bullet M)_t - (X \bullet M)_s\}\{(Y \bullet N)_t - (Y \bullet N)_s\}|\mathfrak{F}_s] = \mathrm{E}[(X \bullet M)_t(Y \bullet N)_t - (X \bullet M)_s(Y \bullet N)_s|\mathfrak{F}_s] = \mathrm{E}\bigl[\int_{(s,t]} X(u)Y(u)\,d[M,N](u)\big|\mathfrak{F}_s\bigr]$ a.e. on $(\Omega,\mathfrak{F}_s,P)$.
In particular

(3) $[X \bullet M]_t = \int_{[0,t]} X^2(s)\,d[M](s)$,

and

(4) $\mathrm{E}[\{(X \bullet M)_t - (X \bullet M)_s\}^2|\mathfrak{F}_s] = \mathrm{E}[(X \bullet M)_t^2 - (X \bullet M)_s^2|\mathfrak{F}_s] = \mathrm{E}\bigl[\int_{(s,t]} X^2(u)\,d[M](u)\big|\mathfrak{F}_s\bigr]$ a.e. on $(\Omega,\mathfrak{F}_s,P)$.
Proof. Let $V = \{V_t : t \in \mathbb{R}_+\}$ where

(5) $V(t) = \int_{[0,t]} X(s)Y(s)\,d[M,N](s)$.

To prove (1), we show that $V \in \mathbf{V}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and that $(X \bullet M)(Y \bullet N) - V$ is a null at 0 right-continuous martingale. Now since $[M,N] \in \mathbf{V}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, we have $[M,N] = A' - A''$ where $A', A'' \in \mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ according to Theorem 11.12. Then

(6) $V(t) = \int_{[0,t]} (X \cdot Y)(s)\,dA'(s) - \int_{[0,t]} (X \cdot Y)(s)\,dA''(s)$.

Let us define two processes $A^{(1)} = \{A^{(1)}_t : t \in \mathbb{R}_+\}$ and $A^{(2)} = \{A^{(2)}_t : t \in \mathbb{R}_+\}$ on $(\Omega,\mathfrak{F},P)$ by setting

(7) $A^{(1)}(t) = \int_{[0,t]} (X \cdot Y)^+(s)\,dA'(s) + \int_{[0,t]} (X \cdot Y)^-(s)\,dA''(s)$

and

(8) $A^{(2)}(t) = \int_{[0,t]} (X \cdot Y)^-(s)\,dA'(s) + \int_{[0,t]} (X \cdot Y)^+(s)\,dA''(s)$.

Then from (6) we have

(9) $V(t) = A^{(1)}(t) - A^{(2)}(t)$.

Note that since $X$ and $Y$ are bounded processes, $A^{(1)}(t)$ and $A^{(2)}(t)$ defined by (7) and (8) are finite on $\Omega$ and thus $V(t)$ is defined and finite on $\Omega$. By Theorem 10.11, $A^{(1)}$ and $A^{(2)}$ are adapted processes and so is $V$. Since $A'$ and $A''$ are $L_1$-processes and $X$ and $Y$ are bounded processes, $A^{(1)}$ and $A^{(2)}$ are $L_1$-processes. The right-continuity of $A^{(1)}$ and $A^{(2)}$ is
implied by that of $A'$ and $A''$. Also the sample functions of $A^{(1)}$ and $A^{(2)}$ are real valued, monotone increasing, and vanish at 0. Thus $A^{(1)}, A^{(2)} \in \mathbf{A}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ and therefore $V \in \mathbf{V}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ by Theorem 11.12.

To prove that $(X \bullet M)(Y \bullet N) - V$ is a null at 0 right-continuous martingale is equivalent to proving the second equality in (2). Note that the first equality in (2) is an immediate consequence of the martingale property of $X \bullet M$ and $Y \bullet N$. To prove the second equality in (2), let $X$ and $Y$ be expressed as in (5) and (6) in the proof of Proposition 12.4. We may assume without loss of generality that $s = t_k$ and $t = t_{k+p}$ for some $k \in \mathbb{Z}_+$ and $p \in \mathbb{N}$. Then with $\alpha_i = M(t_i) - M(t_{i-1})$ and $\beta_i = N(t_i) - N(t_{i-1})$ for $i \in \mathbb{N}$ as in the proof of Proposition 12.4, we have $(X \bullet M)_t - (X \bullet M)_s = \sum_{i=k+1}^{k+p} \xi_i\alpha_i$ and similarly $(Y \bullet N)_t - (Y \bullet N)_s = \sum_{j=k+1}^{k+p} \eta_j\beta_j$. Thus

$$\{(X \bullet M)_t - (X \bullet M)_s\}\{(Y \bullet N)_t - (Y \bullet N)_s\} = \sum_{i=k+1}^{k+p}\sum_{j=k+1}^{k+p} \xi_i\eta_j\alpha_i\beta_j.$$

Now for $k+1 \le i < j \le k+p$, we have

$$\mathrm{E}[\xi_i\eta_j\alpha_i\beta_j|\mathfrak{F}_s] = \mathrm{E}[\mathrm{E}[\xi_i\eta_j\alpha_i\beta_j|\mathfrak{F}_{t_{j-1}}]|\mathfrak{F}_s] = \mathrm{E}[\xi_i\eta_j\alpha_i\,\mathrm{E}(\beta_j|\mathfrak{F}_{t_{j-1}})|\mathfrak{F}_s] = 0$$

since $\mathrm{E}(\beta_j|\mathfrak{F}_{t_{j-1}}) = 0$ by the martingale property of $N$. On the other hand

$$\mathrm{E}[\xi_i\eta_i\alpha_i\beta_i|\mathfrak{F}_s] = \mathrm{E}[\mathrm{E}[\xi_i\eta_i\alpha_i\beta_i|\mathfrak{F}_{t_{i-1}}]|\mathfrak{F}_s] = \mathrm{E}[\xi_i\eta_i\,\mathrm{E}(\alpha_i\beta_i|\mathfrak{F}_{t_{i-1}})|\mathfrak{F}_s] = \mathrm{E}[\xi_i\eta_i\,\mathrm{E}[[M,N]_{t_i} - [M,N]_{t_{i-1}}|\mathfrak{F}_{t_{i-1}}]|\mathfrak{F}_s] = \mathrm{E}[\xi_i\eta_i\{[M,N]_{t_i} - [M,N]_{t_{i-1}}\}|\mathfrak{F}_s]$$

by Lemma 11.25. Thus we have

$$\mathrm{E}[\{(X \bullet M)_t - (X \bullet M)_s\}\{(Y \bullet N)_t - (Y \bullet N)_s\}|\mathfrak{F}_s] = \sum_{i=k+1}^{k+p} \mathrm{E}[\xi_i\eta_i\{[M,N]_{t_i} - [M,N]_{t_{i-1}}\}|\mathfrak{F}_s] = \mathrm{E}\Bigl[\int_{(s,t]} X(u)Y(u)\,d[M,N](u)\Big|\mathfrak{F}_s\Bigr],$$

proving the second equality in (2). ∎

Regarding integrals with respect to the families of Lebesgue-Stieltjes measures $\mu_{[M]}$, $\mu_{[N]}$, $\mu_{[M,N]}$, and $\mu_{|[M,N]|}$ we have the following inequalities.
Theorem 12.6. Let $M, N \in \mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$.
1) There exists a null set $\Lambda_\infty$ in $(\Omega,\mathfrak{F},P)$ such that for any two bounded real valued functions $X$ and $Y$ on $\mathbb{R}_+ \times \Omega$ with Borel measurable sample functions $X(\cdot,\omega)$ and $Y(\cdot,\omega)$ for every $\omega \in \Omega$, we have for every $t \in \mathbb{R}_+$ and $\omega \in \Lambda_\infty^c$

(1) $\bigl|\int_{[0,t]} X(s)Y(s)\,d[M,N](s)\bigr| \le \int_{[0,t]} |X(s)Y(s)|\,d|[M,N]|(s) \le \bigl\{\int_{[0,t]} X^2(s)\,d[M](s)\bigr\}^{1/2}\bigl\{\int_{[0,t]} Y^2(s)\,d[N](s)\bigr\}^{1/2} < \infty$.

2) Let $X$ and $Y$ be two adapted measurable processes on the filtered space whose sample functions are almost surely locally square integrable on $\mathbb{R}_+$ with respect to $\mu_{[M]}$ and $\mu_{[N]}$ respectively, that is, $\int_{[0,t]} X^2(s,\omega)\,d[M](s,\omega) < \infty$ for every $t \in \mathbb{R}_+$ when $\omega \in \Lambda_X^c$ where $\Lambda_X$ is a null set in $(\Omega,\mathfrak{F},P)$, and similarly $\int_{[0,t]} Y^2(s,\omega)\,d[N](s,\omega) < \infty$ for every $t \in \mathbb{R}_+$ when $\omega \in \Lambda_Y^c$ where $\Lambda_Y$ is a null set in $(\Omega,\mathfrak{F},P)$. Then there exists a null set $\Lambda_{\infty,X,Y}$ in $(\Omega,\mathfrak{F},P)$ such that (1) holds for every $t \in \mathbb{R}_+$ and $\omega \in \Lambda_{\infty,X,Y}^c$, and furthermore

(2) $\mathrm{E}\bigl[\bigl|\int_{[0,t]} X(s)Y(s)\,d[M,N](s)\bigr|\bigr] \le \mathrm{E}\bigl[\int_{[0,t]} |X(s)Y(s)|\,d|[M,N]|(s)\bigr] \le \mathrm{E}\bigl[\int_{[0,t]} X^2(s)\,d[M](s)\bigr]^{1/2}\,\mathrm{E}\bigl[\int_{[0,t]} Y^2(s)\,d[N](s)\bigr]^{1/2}$.

Proof. 1) Consider first the case where $X, Y \in \mathbf{L}_0(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. Then $X \bullet M, Y \bullet N \in \mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ so that we have $a(X \bullet M) + b(Y \bullet N) \in \mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ for any $a, b \in \mathbb{R}$, and thus the quadratic variation process $[a(X \bullet M) + b(Y \bullet N)]$ exists. Let $\mathbb{Q}$ be the collection of all rational numbers. Then for every $a, b \in \mathbb{Q}$, there exists a null set $\Lambda_{X,Y,a,b}$ in $(\Omega,\mathfrak{F},P)$ such that $[a(X \bullet M) + b(Y \bullet N)](\cdot,\omega)$ is a continuous monotone increasing function on $\mathbb{R}_+$ vanishing at $t = 0$ for $\omega \in \Lambda_{X,Y,a,b}^c$. Thus by Proposition 11.26 we have for $t \in \mathbb{R}_+$ and $\omega \in \Lambda_{X,Y,a,b}^c$

(3) $0 \le [a(X \bullet M) + b(Y \bullet N)](t,\omega) = a^2[X \bullet M](t,\omega) + 2ab[X \bullet M, Y \bullet N](t,\omega) + b^2[Y \bullet N](t,\omega)$.

Then for the null set $\Lambda_{X,Y} = \bigcup_{a,b\in\mathbb{Q}} \Lambda_{X,Y,a,b}$, the inequality (3) holds for all $a, b \in \mathbb{Q}$ and $t \in \mathbb{R}_+$ when $\omega \in \Lambda_{X,Y}^c$. Since the right side of (3) is a continuous function in $a$ and $b$ and since $\mathbb{Q}$ is dense in $\mathbb{R}$, (3) holds for all $a, b \in \mathbb{R}$ and $t \in \mathbb{R}_+$ when $\omega \in \Lambda_{X,Y}^c$. Thus we have a nonnegative definite quadratic form in $a$ and $b$ for each fixed $t \in \mathbb{R}_+$ and $\omega \in \Lambda_{X,Y}^c$, and
this implies that the determinant of the matrix of the coefficients of the quadratic form is nonnegative, that is, for (t,ω) ∈ R₊ × Λ_{X,Y}^c we have

{[X•M, Y•N](t,ω)}² ≤ [X•M](t,ω) [Y•N](t,ω).

Now since M, N ∈ M₂ᶜ(Ω, 𝔉, {𝔉_t}, P), the quadratic variation processes [M], [N], and [M,N] are almost surely continuous according to Proposition 11.27. This fact and Proposition 12.5 imply that there exists a null set Λ*_{X,Y} in (Ω, 𝔉, P) containing Λ_{X,Y} such that for ω ∈ (Λ*_{X,Y})^c the functions [M](·,ω), [N](·,ω), and [M,N](·,ω) are continuous on R₊, and furthermore for (t,ω) ∈ R₊ × (Λ*_{X,Y})^c we have

(4)  {∫_[0,t] X(s,ω)Y(s,ω) d[M,N](s,ω)}² ≤ {∫_[0,t] X²(s,ω) d[M](s,ω)} {∫_[0,t] Y²(s,ω) d[N](s,ω)}.
Let Q be the subcollection of L₀(Ω, 𝔉, {𝔉_t}, P) consisting of bounded adapted left-continuous simple processes X of the type

X(t,ω) = a₀ 1_{{0}}(t) + Σ_{k∈N} a_k 1_{(t_{k−1}, t_k]}(t)  for (t,ω) ∈ R₊ × Ω,

where {t_k : k ∈ Z₊} is a strictly increasing sequence of rational numbers in R₊ with t₀ = 0 and {a_k : k ∈ Z₊} is a bounded sequence of rational numbers. Note that for every X ∈ Q the sample functions are all identical and that Q is a countable collection. Let Λ_∞ be the null set in (Ω, 𝔉, P) which is the union of the null sets Λ*_{X,Y} for X, Y ∈ Q. Then (4) holds for (t,ω) ∈ R₊ × Λ_∞^c.

Let ω ∈ Λ_∞^c be fixed. Since [M](·,ω), [N](·,ω), and [M,N](·,ω) are continuous on R₊, for every t ∈ R₊ we have μ_[M]({t},ω) = μ_[N]({t},ω) = μ_[M,N]({t},ω) = 0. Then for two arbitrary bounded Borel measurable functions f and g and for an arbitrary ε > 0, there exist two bounded left-continuous step functions f_ε and g_ε, assuming only rational values and having only rational points of discontinuity, such that |I₁ − I_{1,ε}| < ε, |I₂ − I_{2,ε}| < ε, and |I₃ − I_{3,ε}| < ε, where

I₁ = ∫_[0,t] f²(s) d[M](s,ω),  I_{1,ε} = ∫_[0,t] f_ε²(s) d[M](s,ω),
I₂ = ∫_[0,t] g²(s) d[N](s,ω),  I_{2,ε} = ∫_[0,t] g_ε²(s) d[N](s,ω),
I₃ = ∫_[0,t] f(s)g(s) d[M,N](s,ω),  I_{3,ε} = ∫_[0,t] f_ε(s)g_ε(s) d[M,N](s,ω).
§12. STOCHASTIC INTEGRALS WITH RESPECT TO MARTINGALES
Now since f_ε and g_ε are bounded left-continuous step functions assuming only rational values and having only rational points of discontinuity, there exist X, Y ∈ Q such that X = f_ε and Y = g_ε. Thus by (4) we have I_{3,ε}² ≤ I_{1,ε} I_{2,ε}. Then, with a constant multiple of ε absorbed into ε (the six integrals above remain bounded as ε → 0),

I₃² ≤ I_{3,ε}² + ε ≤ I_{1,ε} I_{2,ε} + ε ≤ {I₁ + ε}{I₂ + ε} + ε.

By the arbitrariness of ε > 0, we have I₃² ≤ I₁ I₂, that is, for arbitrary bounded Borel measurable functions f and g on R₊ we have for every t ∈ R₊

(5)  {∫_[0,t] f(s)g(s) d[M,N](s,ω)}² ≤ {∫_[0,t] f²(s) d[M](s,ω)} {∫_[0,t] g²(s) d[N](s,ω)}.

Clearly we have

(6)  |∫_[0,t] f(s)g(s) d[M,N](s,ω)| ≤ ∫_[0,t] |f(s)g(s)| d|[M,N]|(s,ω).
Let t ∈ R₊ be fixed and let {A_ω, B_ω} ⊂ 𝔅_[0,t] be the Hahn decomposition for the signed Lebesgue-Stieltjes measure μ_[M,N](·,ω) on ([0,t], 𝔅_[0,t]) for our ω ∈ Λ_∞^c. Then by (7) in Observation 11.9, we have

(7)  ∫_[0,t] |f(s)g(s)| d|[M,N]|(s,ω) = ∫_[0,t] |f(s)g(s)| {1_{A_ω}(s) − 1_{B_ω}(s)} d[M,N](s,ω)
  ≤ {∫_[0,t] f²(s) d[M](s,ω)}^{1/2} {∫_[0,t] g²(s) {1_{A_ω}(s) − 1_{B_ω}(s)}² d[N](s,ω)}^{1/2}
  = {∫_[0,t] f²(s) d[M](s,ω)}^{1/2} {∫_[0,t] g²(s) d[N](s,ω)}^{1/2},
where the inequality above is obtained by applying (5) to the two bounded Borel measurable functions |f| and |g|{1_{A_ω} − 1_{B_ω}}. We have shown so far that if ω ∈ Λ_∞^c, then for any two bounded Borel measurable functions f and g on R₊ and any t ∈ R₊ we have

(8)  |∫_[0,t] f(s)g(s) d[M,N](s,ω)| ≤ ∫_[0,t] |f(s)g(s)| d|[M,N]|(s,ω)
  ≤ {∫_[0,t] f²(s) d[M](s,ω)}^{1/2} {∫_[0,t] g²(s) d[N](s,ω)}^{1/2}.
Let X and Y be two bounded real valued functions on R₊ × Ω such that X(·,ω) and Y(·,ω) are Borel measurable functions on R₊ for every ω ∈ Ω. Then by (8), (1) holds for every t ∈ R₊ and ω ∈ Λ_∞^c.

2) Let X and Y be two adapted measurable processes on the filtered space whose sample functions are almost surely locally square integrable on R₊ with respect to μ_[M] and μ_[N] respectively. Recall that every sample function of a measurable process is a Borel measurable function. For every n ∈ N, let X⁽ⁿ⁾ = 1_{[−n,n]}(X)X and Y⁽ⁿ⁾ = 1_{[−n,n]}(Y)Y. We have

(9)  lim_{n→∞} X⁽ⁿ⁾(t,ω) = X(t,ω) and lim_{n→∞} Y⁽ⁿ⁾(t,ω) = Y(t,ω)  for (t,ω) ∈ R₊ × Ω.

Now X⁽ⁿ⁾ and Y⁽ⁿ⁾ are bounded measurable processes, so that X⁽ⁿ⁾(·,ω) and Y⁽ⁿ⁾(·,ω) are Borel measurable functions for every ω ∈ Ω. Thus by 1), for every t ∈ R₊ we have

(10)  |∫_[0,t] X⁽ⁿ⁾(s)Y⁽ⁿ⁾(s) d[M,N](s)| ≤ ∫_[0,t] |X⁽ⁿ⁾(s)Y⁽ⁿ⁾(s)| d|[M,N]|(s)
  ≤ {∫_[0,t] (X⁽ⁿ⁾)²(s) d[M](s)}^{1/2} {∫_[0,t] (Y⁽ⁿ⁾)²(s) d[N](s)}^{1/2} < ∞ on Λ_∞^c.
Now there exists a null set Λ_{[M],X,Y}, which we may assume to contain Λ_∞ without loss of generality, such that

(11)  ∫_[0,t] X²(s,ω) d[M](s,ω), ∫_[0,t] Y²(s,ω) d[N](s,ω) < ∞

for (t,ω) ∈ R₊ × Λ_{[M],X,Y}^c. Since (X⁽ⁿ⁾)² ≤ X² and (Y⁽ⁿ⁾)² ≤ Y², we have by the Dominated Convergence Theorem

(12)  ∫_[0,t] (X⁽ⁿ⁾)²(s,ω) d[M](s,ω) ↑ ∫_[0,t] X²(s,ω) d[M](s,ω)

for (t,ω) ∈ R₊ × Λ_{[M],X,Y}^c, and likewise for Y⁽ⁿ⁾ and [N]. Since |X⁽ⁿ⁾Y⁽ⁿ⁾| ↑ |XY| as n → ∞, the Monotone Convergence Theorem implies that for every t ∈ R₊ and ω ∈ Λ_{[M],X,Y}^c we have

(13)  lim_{n→∞} ∫_[0,t] |X⁽ⁿ⁾(s,ω)Y⁽ⁿ⁾(s,ω)| d|[M,N]|(s,ω) = ∫_[0,t] |X(s,ω)Y(s,ω)| d|[M,N]|(s,ω)
  ≤ {∫_[0,t] X²(s,ω) d[M](s,ω)}^{1/2} {∫_[0,t] Y²(s,ω) d[N](s,ω)}^{1/2},
where the first inequality is by (10) and (12). To prove the first equality in (1), let us decompose X⁽ⁿ⁾ into its positive and negative parts as X⁽ⁿ⁾ = X⁽ⁿ⁾⁺ − X⁽ⁿ⁾⁻, and similarly Y⁽ⁿ⁾ = Y⁽ⁿ⁾⁺ − Y⁽ⁿ⁾⁻. Then

X⁽ⁿ⁾Y⁽ⁿ⁾ = {X⁽ⁿ⁾⁺Y⁽ⁿ⁾⁺ + X⁽ⁿ⁾⁻Y⁽ⁿ⁾⁻} − {X⁽ⁿ⁾⁺Y⁽ⁿ⁾⁻ + X⁽ⁿ⁾⁻Y⁽ⁿ⁾⁺}.

Since μ_[M,N] = μ⁺_[M,N] − μ⁻_[M,N], we have

(14)  ∫_[0,t] X⁽ⁿ⁾Y⁽ⁿ⁾ dμ_[M,N] = ∫_[0,t] {X⁽ⁿ⁾⁺Y⁽ⁿ⁾⁺ + X⁽ⁿ⁾⁻Y⁽ⁿ⁾⁻} dμ⁺_[M,N] − ∫_[0,t] {X⁽ⁿ⁾⁺Y⁽ⁿ⁾⁺ + X⁽ⁿ⁾⁻Y⁽ⁿ⁾⁻} dμ⁻_[M,N]
  − ∫_[0,t] {X⁽ⁿ⁾⁺Y⁽ⁿ⁾⁻ + X⁽ⁿ⁾⁻Y⁽ⁿ⁾⁺} dμ⁺_[M,N] + ∫_[0,t] {X⁽ⁿ⁾⁺Y⁽ⁿ⁾⁻ + X⁽ⁿ⁾⁻Y⁽ⁿ⁾⁺} dμ⁻_[M,N].

Now X⁽ⁿ⁾⁺ ↑ X⁺, X⁽ⁿ⁾⁻ ↑ X⁻, Y⁽ⁿ⁾⁺ ↑ Y⁺, and Y⁽ⁿ⁾⁻ ↑ Y⁻ as n → ∞. Letting n → ∞ and applying the Monotone Convergence Theorem to each one of the four integrals on the right side of (14), we obtain

(15)  lim_{n→∞} ∫_[0,t] X⁽ⁿ⁾Y⁽ⁿ⁾ dμ_[M,N] = ∫_[0,t] {X⁺Y⁺ + X⁻Y⁻} dμ_[M,N] − ∫_[0,t] {X⁺Y⁻ + X⁻Y⁺} dμ_[M,N] = ∫_[0,t] XY dμ_[M,N].

Letting n → ∞ in the first inequality in (10), we obtain the first inequality in (1) for every t ∈ R₊ and ω ∈ Λ_{[M],X,Y}^c by (15) and the equality in (13). Now the fact that X and Y are adapted measurable processes on the filtered space implies that ∫_[0,t] X(s)Y(s) d[M,N](s), ∫_[0,t] |X(s)Y(s)| d|[M,N]|(s), ∫_[0,t] X²(s) d[M](s), and ∫_[0,t] Y²(s) d[N](s) are all random variables, in fact 𝔉_t-measurable random variables, on (Ω, 𝔉, P) by Theorem 10.11. Thus integrating (1) with respect to P and applying Schwarz's Inequality we have (2). ∎
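The inequalities (1) and (2) are easiest to see in discrete time, where for two martingales with increments ΔM_k, ΔN_k the bracket increments are Δ[M]_k = (ΔM_k)², Δ[N]_k = (ΔN_k)², and Δ[M,N]_k = ΔM_k ΔN_k, so that the pathwise Kunita-Watanabe bound reduces to the Cauchy-Schwarz inequality. The following sketch is our own illustration (all names and parameters are invented for the demonstration) and checks the pathwise inequality on one simulated random-walk path:

```python
import math
import random

def discrete_kunita_watanabe(X, Y, dM, dN):
    """Pathwise discrete analogue of Theorem 12.6 (1).

    d[M] = dM**2, d[N] = dN**2, d|[M,N]| has increments |dM*dN|,
    so the claimed bound is exactly the Cauchy-Schwarz inequality.
    """
    lhs = sum(abs(x * y * dm * dn) for x, y, dm, dn in zip(X, Y, dM, dN))
    rhs = math.sqrt(sum((x * dm) ** 2 for x, dm in zip(X, dM))) * \
          math.sqrt(sum((y * dn) ** 2 for y, dn in zip(Y, dN)))
    return lhs, rhs

random.seed(0)
n = 1000
dM = [random.choice((-1.0, 1.0)) for _ in range(n)]   # random-walk increments
dN = [random.choice((-1.0, 1.0)) for _ in range(n)]
X = [random.uniform(-2.0, 2.0) for _ in range(n)]     # integrand values
Y = [random.uniform(-2.0, 2.0) for _ in range(n)]
lhs, rhs = discrete_kunita_watanabe(X, Y, dM, dN)
assert lhs <= rhs + 1e-9
```

The expectation version (2) then follows, as in the proof, by integrating and applying Schwarz's Inequality over ω.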
For the families of Lebesgue-Stieltjes measures μ_[M], μ_[N], and μ_|[M,N]| on the measurable space (R₊, 𝔅_{R₊}) determined by [M], [N], and |[M,N]|, we have the following inequality.

Theorem 12.7. Let M, N ∈ M₂ᶜ(Ω, 𝔉, {𝔉_t}, P). Then there exists a null set Λ in (Ω, 𝔉, P) such that for every ω ∈ Λ^c, t ∈ R₊, and E ∈ 𝔅_[0,t] we have

{μ_|[M,N]|(E,ω)}² ≤ μ_[M](E,ω) μ_[N](E,ω).
Proof. Let ℑ be the semialgebra consisting of all intervals of the type (s′, s″] in (0,t] together with ∅, and let 𝔄 be the algebra of subsets of (0,t] generated by ℑ, that is, 𝔄 is the collection of all finite disjoint unions of members of ℑ. Let A ∈ 𝔄. Then A = (s₁′, s₁″] ∪ ⋯ ∪ (s_n′, s_n″] where 0 ≤ s₁′ < s₁″ ≤ ⋯ ≤ s_n′ < s_n″ ≤ t. Let us define two processes X and Y in L₀(Ω, 𝔉, {𝔉_t}, P) by setting

X(s,ω) = Y(s,ω) = 1_A(s)  for (s,ω) ∈ R₊ × Ω.

Let Λ_∞ be as defined in Proposition 12.6. Then on Λ_∞^c we have

{∫_[0,t] |X(s,ω)Y(s,ω)| d|[M,N]|(s,ω)}² ≤ {∫_[0,t] X²(s,ω) d[M](s,ω)} {∫_[0,t] Y²(s,ω) d[N](s,ω)},

that is,

{∫_[0,t] 1_A(s) d|[M,N]|(s,ω)}² ≤ {∫_[0,t] 1_A(s) d[M](s,ω)} {∫_[0,t] 1_A(s) d[N](s,ω)}.

In other words, for ω ∈ Λ_∞^c we have

(1)  {μ_|[M,N]|(A,ω)}² ≤ μ_[M](A,ω) μ_[N](A,ω)  for every A ∈ 𝔄.

For each ω ∈ Λ_∞^c, consider a measure on 𝔅_(0,t] defined by

ν(·,ω) = μ_|[M,N]|(·,ω) + μ_[M](·,ω) + μ_[N](·,ω).

Since ν(·,ω) is a finite measure and the algebra 𝔄 generates 𝔅_(0,t], for every E ∈ 𝔅_(0,t] and ε > 0 there exists A ∈ 𝔄 with ν(E △ A, ω) < ε, and hence

(2)  μ_|[M,N]|(E △ A, ω), μ_[M](E △ A, ω), μ_[N](E △ A, ω) < ε.
By (1) and (2), we have

{μ_|[M,N]|(E,ω)}² ≤ {μ_|[M,N]|(A,ω) + ε}²
  = {μ_|[M,N]|(A,ω)}² + 2ε μ_|[M,N]|(A,ω) + ε²
  ≤ μ_[M](A,ω) μ_[N](A,ω) + 2ε μ_|[M,N]|((0,t],ω) + ε²
  ≤ {μ_[M](E,ω) + ε}{μ_[N](E,ω) + ε} + 2ε μ_|[M,N]|((0,t],ω) + ε²
  ≤ μ_[M](E,ω) μ_[N](E,ω) + ε{μ_[M]((0,t],ω) + μ_[N]((0,t],ω) + 2μ_|[M,N]|((0,t],ω)} + 2ε².

Since this holds for every ε > 0, we have

(3)  {μ_|[M,N]|(E,ω)}² ≤ μ_[M](E,ω) μ_[N](E,ω).

By our definition of the family of Lebesgue-Stieltjes measures μ_A on (R₊, 𝔅_{R₊}) determined by an almost surely increasing process A, there exists a null set Λ₀ in (Ω, 𝔉, P) such that μ_[M]({0},ω) = μ_[N]({0},ω) = μ_|[M,N]|({0},ω) = 0 for ω ∈ Λ₀^c. Let Λ = Λ_∞ ∪ Λ₀. Then for every ω ∈ Λ^c, (3) holds for every E ∈ 𝔅_[0,t]. ∎
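Theorem 12.7 is again a Cauchy-Schwarz statement, now holding uniformly over Borel sets E. In discrete time this can be sanity-checked by restricting the bracket increments to arbitrary index sets; the sketch below is our own (the data are invented) and tests the inequality over many random subsets:

```python
import random

random.seed(3)
n = 400
dM = [random.choice((-1.0, 1.0)) * random.uniform(0.5, 1.5) for _ in range(n)]
dN = [random.choice((-1.0, 1.0)) * random.uniform(0.5, 1.5) for _ in range(n)]

def mu_M(E):                        # discrete analogue of mu_[M](E)
    return sum(dM[k] ** 2 for k in E)

def mu_N(E):                        # discrete analogue of mu_[N](E)
    return sum(dN[k] ** 2 for k in E)

def mu_abs(E):                      # discrete analogue of mu_|[M,N]|(E)
    return sum(abs(dM[k] * dN[k]) for k in E)

for _ in range(100):                # arbitrary "Borel" sets of time indices
    E = [k for k in range(n) if random.random() < 0.3]
    assert mu_abs(E) ** 2 <= mu_M(E) * mu_N(E) + 1e-9
```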
[II] Stochastic Integral of Predictable Processes with Respect to L₂-Martingales

Let us show first that if X is a predictable process on (Ω, 𝔉, {𝔉_t}, P) which is also in L_{p,∞}(R₊ × Ω, μ_[M], P) for some M ∈ M₂(Ω, 𝔉, {𝔉_t}, P) and p ∈ [1,∞), then there exists a sequence in L₀(Ω, 𝔉, {𝔉_t}, P) which converges to X in the metric of the quasinorm ‖·‖_∞^{[M],p} on L_{p,∞}(R₊ × Ω, μ_[M], P).

Theorem 12.8. Let M ∈ M₂(Ω, 𝔉, {𝔉_t}, P). Let X = {X_t : t ∈ R₊} be a predictable process on the filtered space (Ω, 𝔉, {𝔉_t}, P) which is in L_{p,∞}(R₊ × Ω, μ_[M], P) for some p ∈ [1,∞). Then there exists a sequence {X⁽ⁿ⁾ : n ∈ N} in L₀(Ω, 𝔉, {𝔉_t}, P) such that lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_∞^{[M],p} = 0.

Proof. We prove this theorem by applying Theorem 2.22, a monotone class theorem for predictable processes. By Observation 11.19, all bounded measurable processes on (Ω, 𝔉, P) are in L_{p,∞}(R₊ × Ω, μ_[M], P). Let V be the collection of all bounded measurable processes Y on (Ω, 𝔉, P) for which there exists a sequence {Y⁽ⁿ⁾ : n ∈ N} in L₀(Ω, 𝔉, {𝔉_t}, P) such that lim_{n→∞} ‖Y⁽ⁿ⁾ − Y‖_∞^{[M],p} = 0.
1°. Let us show that V contains all bounded left-continuous adapted processes. Let Y be such a process and suppose |Y| ≤ K on R₊ × Ω for some K > 0. For each n ∈ N, define Y⁽ⁿ⁾ by

(1)  Y⁽ⁿ⁾(t,ω) = Y((k−1)2⁻ⁿ, ω) for t ∈ ((k−1)2⁻ⁿ, k2⁻ⁿ], k ∈ N, ω ∈ Ω, and Y⁽ⁿ⁾(0,ω) = Y(0,ω) for ω ∈ Ω.

Then Y⁽ⁿ⁾ ∈ L₀(Ω, 𝔉, {𝔉_t}, P) and for every m ∈ N we have

(2)  ‖Y⁽ⁿ⁾ − Y‖_m^{[M],p} = {∫_Ω ∫_[0,m] |Y⁽ⁿ⁾(s,ω) − Y(s,ω)|ᵖ d[M](s,ω) P(dω)}^{1/p}.

According to Lemma 2.8, lim_{n→∞} Y⁽ⁿ⁾(t,ω) = Y(t,ω) for (t,ω) ∈ R₊ × Ω. Thus by the Bounded Convergence Theorem we have

lim_{n→∞} ∫_[0,m] |Y⁽ⁿ⁾(s,ω) − Y(s,ω)|ᵖ d[M](s,ω) = 0  for every ω ∈ Ω.

Now

∫_[0,m] |Y⁽ⁿ⁾(s,ω) − Y(s,ω)|ᵖ d[M](s,ω) ≤ 2ᵖ Kᵖ [M]_m(ω),

and E[[M]_m] < ∞ since [M] is an L₁-process. Thus in letting n → ∞ in (2) we can apply the Dominated Convergence Theorem for the integrals with respect to P and then apply the Bounded Convergence Theorem for the integrals with respect to μ_[M](·,ω) for each ω ∈ Ω. Then we have lim_{n→∞} ‖Y⁽ⁿ⁾ − Y‖_m^{[M],p} = 0. Since this holds for every m ∈ N, we have lim_{n→∞} ‖Y⁽ⁿ⁾ − Y‖_∞^{[M],p} = 0 by Remark 11.18. Thus Y ∈ V.

2°. Let {Y⁽ⁿ⁾ : n ∈ N} be a sequence in V such that Y⁽ⁿ⁾ ↑ Y where Y is bounded. Then by the Dominated Convergence Theorem and then by the Bounded Convergence Theorem as in (2), we have lim_{n→∞} ‖Y − Y⁽ⁿ⁾‖_m^{[M],p} = 0 for every m ∈ N and therefore lim_{n→∞} ‖Y − Y⁽ⁿ⁾‖_∞^{[M],p} = 0 by Remark 11.18. Since Y⁽ⁿ⁾ ∈ V there exists Z⁽ⁿ⁾ ∈ L₀(Ω, 𝔉, {𝔉_t}, P) such that ‖Y⁽ⁿ⁾ − Z⁽ⁿ⁾‖_∞^{[M],p} < 2⁻ⁿ. Then for the sequence {Z⁽ⁿ⁾ : n ∈ N} in L₀(Ω, 𝔉, {𝔉_t}, P) we have

lim_{n→∞} ‖Y − Z⁽ⁿ⁾‖_∞^{[M],p} ≤ lim_{n→∞} {‖Y − Y⁽ⁿ⁾‖_∞^{[M],p} + ‖Y⁽ⁿ⁾ − Z⁽ⁿ⁾‖_∞^{[M],p}} = 0.

Thus Y ∈ V. This shows that V satisfies the conditions in Theorem 2.22. Thus V contains all bounded predictable processes on the filtered space. Therefore for every bounded predictable process
X on the filtered space, there exists a sequence {X⁽ⁿ⁾ : n ∈ N} in L₀(Ω, 𝔉, {𝔉_t}, P) such that lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_∞^{[M],p} = 0.

Finally, let X be a predictable process which is also in L_{p,∞}(R₊ × Ω, μ_[M], P). For each n ∈ N, let us define X⁽ⁿ⁾ by setting X⁽ⁿ⁾(t,ω) = 1_{[−n,n]}(X(t,ω)) X(t,ω). Since X is a predictable process, it is a 𝔖/𝔅_R-measurable mapping of R₊ × Ω into R, where 𝔖 is the predictable σ-algebra. Since 1_{[−n,n]} is a 𝔅_R/𝔅_R-measurable mapping of R into R, X⁽ⁿ⁾ = 1_{[−n,n]}(X)X is a 𝔖/𝔅_R-measurable mapping of R₊ × Ω into R, that is, it is a predictable process. Thus X⁽ⁿ⁾ is a bounded predictable process. Now for every m ∈ N we have

(3)  ‖X⁽ⁿ⁾ − X‖_m^{[M],p} = {∫_Ω ∫_[0,m] |X⁽ⁿ⁾(s,ω) − X(s,ω)|ᵖ d[M](s,ω) P(dω)}^{1/p}.

Since X ∈ L_{p,∞}(R₊ × Ω, μ_[M], P), according to (3) of Observation 11.16 there exists a null set Λ in (Ω, 𝔉, P) such that for every ω ∈ Λ^c we have ∫_[0,m] |X(s,ω)|ᵖ d[M](s,ω) < ∞ for every m ∈ N. Since |X⁽ⁿ⁾(s,ω) − X(s,ω)|ᵖ ≤ |X(s,ω)|ᵖ, we have by the Dominated Convergence Theorem

lim_{n→∞} ∫_[0,m] |X⁽ⁿ⁾(s,ω) − X(s,ω)|ᵖ d[M](s,ω) = 0  for every ω ∈ Λ^c.

On the other hand,

∫_[0,m] |X⁽ⁿ⁾(s,ω) − X(s,ω)|ᵖ d[M](s,ω) ≤ ∫_[0,m] |X(s,ω)|ᵖ d[M](s,ω),

and E[∫_[0,m] |X(s)|ᵖ d[M](s)] < ∞. Thus by applying the Dominated Convergence Theorem twice in letting n → ∞ in (3), we have lim_{n→∞} ‖X − X⁽ⁿ⁾‖_m^{[M],p} = 0 for every m ∈ N, and therefore by Remark 11.18 we have lim_{n→∞} ‖X − X⁽ⁿ⁾‖_∞^{[M],p} = 0. Now since X⁽ⁿ⁾ is a bounded predictable process, according to what we showed above there exists Z⁽ⁿ⁾ in L₀(Ω, 𝔉, {𝔉_t}, P) such that ‖X⁽ⁿ⁾ − Z⁽ⁿ⁾‖_∞^{[M],p} < 2⁻ⁿ. From this and the triangle inequality ‖X − Z⁽ⁿ⁾‖_∞^{[M],p} ≤ ‖X − X⁽ⁿ⁾‖_∞^{[M],p} + ‖X⁽ⁿ⁾ − Z⁽ⁿ⁾‖_∞^{[M],p}, we have lim_{n→∞} ‖X − Z⁽ⁿ⁾‖_∞^{[M],p} = 0. ∎
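The two approximation steps in the proof of Theorem 12.8 — truncation to a bounded process and left-endpoint dyadic discretization — can be illustrated numerically. The sketch below is our own (we take d[M] to be Lebesgue measure on [0,m] purely for illustration): the L²-error of the left-continuous discretization of a continuous, hence predictable, path decreases as the dyadic mesh is refined.

```python
import math

def discretize(f, n):
    """Left-continuous dyadic discretization: on ((k-1)2^-n, k*2^-n] use the
    left-endpoint value f((k-1)2^-n), as in step 1 of the proof."""
    def fn(t):
        if t == 0.0:
            return f(0.0)
        k = math.ceil(t * 2 ** n)
        return f((k - 1) / 2 ** n)
    return fn

def sq_err(f, m, n, grid=4000):
    """Riemann-sum approximation of int_0^m |f_n - f|^2 dt (d[M] = dt here)."""
    fn = discretize(f, n)
    h = m / grid
    return sum((fn(i * h) - f(i * h)) ** 2 for i in range(1, grid + 1)) * h

f = lambda t: t * math.sin(3.0 * t)       # a continuous (predictable) path
errs = [sq_err(f, 2.0, n) for n in (1, 3, 5, 7)]
assert errs[0] > errs[1] > errs[2] > errs[3]   # quasinorm error decreases
```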
Let us extend the definition of X•M to a predictable process X on the filtered space (Ω, 𝔉, {𝔉_t}, P) which is also in L_{2,∞}(R₊ × Ω, μ_[M], P). According to Theorem 12.8, there exists a sequence {X⁽ⁿ⁾ : n ∈ N} in L₀(Ω, 𝔉, {𝔉_t}, P) such that lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_∞^{[M],2} = 0. For m, n ∈ N, by Proposition 12.4, we have

|X⁽ᵐ⁾•M − X⁽ⁿ⁾•M|_∞ = ‖X⁽ᵐ⁾ − X⁽ⁿ⁾‖_∞^{[M],2}.

Then since {X⁽ⁿ⁾ : n ∈ N} is a Cauchy sequence in the metric of the quasinorm ‖·‖_∞^{[M],2}, the sequence {X⁽ⁿ⁾•M : n ∈ N} is a Cauchy sequence in the metric of the quasinorm |·|_∞. The completeness of M₂(Ω, 𝔉, {𝔉_t}, P) with respect to the metric of the quasinorm |·|_∞ implies that there exists Y ∈ M₂(Ω, 𝔉, {𝔉_t}, P) such that lim_{n→∞} |X⁽ⁿ⁾•M − Y|_∞ = 0. We define X•M = Y. It is clear that Y does not depend on the choice of the sequence {X⁽ⁿ⁾ : n ∈ N} in L₀(Ω, 𝔉, {𝔉_t}, P) satisfying the condition lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_∞^{[M],2} = 0. In fact if {X⁽¹ⁿ⁾ : n ∈ N} and {X⁽²ⁿ⁾ : n ∈ N} are two such sequences and if we let {X⁽ⁿ⁾ : n ∈ N} be the interlaced sequence X⁽¹¹⁾, X⁽²¹⁾, X⁽¹²⁾, X⁽²²⁾, X⁽¹³⁾, X⁽²³⁾, ..., then for Y ∈ M₂(Ω, 𝔉, {𝔉_t}, P) such that lim_{n→∞} |X⁽ⁿ⁾•M − Y|_∞ = 0, we have

lim_{n→∞} |X⁽ⁱⁿ⁾•M − Y|_∞ = 0 for i = 1 and 2.
Definition 12.9. Let M ∈ M₂(Ω, 𝔉, {𝔉_t}, P) and let X = {X_t : t ∈ R₊} be a predictable process on the filtered space (Ω, 𝔉, {𝔉_t}, P) such that X ∈ L_{2,∞}(R₊ × Ω, μ_[M], P). The element X•M ∈ M₂(Ω, 𝔉, {𝔉_t}, P) such that lim_{n→∞} |X⁽ⁿ⁾•M − X•M|_∞ = 0 for an arbitrary sequence {X⁽ⁿ⁾ : n ∈ N} in L₀(Ω, 𝔉, {𝔉_t}, P) satisfying the condition lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_∞^{[M],2} = 0 is called the stochastic integral of X with respect to M. For every t ∈ R₊, the random variable (X•M)(t) has the alternate notation ∫_[0,t] X(s) dM(s). Its value at ω ∈ Ω is denoted by (∫_[0,t] X(s) dM(s))(ω).

Note that the stochastic integral X•M is defined as a stochastic process. Note also that although we use the notation ∫_[0,t] X(s) dM(s) for the random variable (X•M)(t), it is not defined as the Lebesgue-Stieltjes integral of the individual sample functions of X. The random variable ∫_[0,t] X(s) dM(s) is sometimes called the stochastic integral of X with respect to M on [0,t].

Observation 12.10. If M ∈ M₂ᶜ(Ω, 𝔉, {𝔉_t}, P) then X•M ∈ M₂ᶜ(Ω, 𝔉, {𝔉_t}, P) also. This follows from the fact that for an arbitrary sequence {X⁽ⁿ⁾ : n ∈ N} in L₀(Ω, 𝔉, {𝔉_t}, P) such that lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_∞^{[M],2} = 0, X⁽ⁿ⁾•M is in M₂ᶜ(Ω, 𝔉, {𝔉_t}, P) according to Proposition 12.4, so that our X•M satisfying the condition lim_{n→∞} |X⁽ⁿ⁾•M − X•M|_∞ = 0 is in the closed linear subspace M₂ᶜ(Ω, 𝔉, {𝔉_t}, P) of M₂(Ω, 𝔉, {𝔉_t}, P).
Observation 12.11. For X•M in Definition 12.9, we have |X•M|_∞ = ‖X‖_∞^{[M],2}, that is, the mapping of a predictable process X in L_{2,∞}(R₊ × Ω, μ_[M], P) into X•M in M₂(Ω, 𝔉, {𝔉_t}, P) preserves the quasinorm. Also for every t ∈ R₊, we have |X•M|_t = ‖X‖_t^{[M],2}.

Proof. Let {X⁽ⁿ⁾ : n ∈ N} be a sequence in L₀(Ω, 𝔉, {𝔉_t}, P) satisfying the condition lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_∞^{[M],2} = 0. Then lim_{n→∞} |X⁽ⁿ⁾•M − X•M|_∞ = 0, and this implies lim_{n→∞} |X⁽ⁿ⁾•M|_∞ = |X•M|_∞. But |X⁽ⁿ⁾•M|_∞ = ‖X⁽ⁿ⁾‖_∞^{[M],2} by Proposition 12.4. Also lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_∞^{[M],2} = 0 implies lim_{n→∞} ‖X⁽ⁿ⁾‖_∞^{[M],2} = ‖X‖_∞^{[M],2}. Therefore |X•M|_∞ = ‖X‖_∞^{[M],2}. For every t ∈ R₊, lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_∞^{[M],2} = 0 implies lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_t^{[M],2} = 0, which implies lim_{n→∞} |X⁽ⁿ⁾•M − X•M|_t = 0 and then lim_{n→∞} |X⁽ⁿ⁾•M|_t = |X•M|_t. But according to Proposition 12.4, |X⁽ⁿ⁾•M|_t = ‖X⁽ⁿ⁾‖_t^{[M],2}. Thus |X•M|_t = ‖X‖_t^{[M],2}. ∎

Proposition 12.12. Let M ∈ M₂(Ω, 𝔉, {𝔉_t}, P) where (Ω, 𝔉, P) is not assumed to be complete. For predictable processes X and Y on the filtered space (Ω, 𝔉, {𝔉_t}, P) which are also in L_{2,∞}(R₊ × Ω, μ_[M], P), and for α, β ∈ R, we have
(1) X = Y ∈ L_{2,∞}(R₊ × Ω, μ_[M], P) ⇒ X•M = Y•M ∈ M₂(Ω, 𝔉, {𝔉_t}, P),
(2) {αX + βY}•M = α(X•M) + β(Y•M),
(3) ({αX + βY}•M)_t = α(X•M)_t + β(Y•M)_t a.e. on (Ω, 𝔉_t, P) for t ∈ R₊,
(4) 1•M = M.

Proof. To prove (1), suppose X = Y as elements of L_{2,∞}(R₊ × Ω, μ_[M], P), that is, ‖X − Y‖_m^{[M],2} = 0 for every m ∈ N according to (4) of Observation 11.16. Let {X⁽ⁿ⁾ : n ∈ N} be a sequence in L₀(Ω, 𝔉, {𝔉_t}, P) such that lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_∞^{[M],2} = 0. This implies lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_m^{[M],2} = 0 for every m ∈ N according to Remark 11.18. Since ‖X⁽ⁿ⁾ − Y‖_m^{[M],2} ≤ ‖X⁽ⁿ⁾ − X‖_m^{[M],2} + ‖X − Y‖_m^{[M],2}, we have lim_{n→∞} ‖X⁽ⁿ⁾ − Y‖_m^{[M],2} = 0 for every m ∈ N, and therefore lim_{n→∞} ‖X⁽ⁿ⁾ − Y‖_∞^{[M],2} = 0 by Remark 11.18. Thus we have lim_{n→∞} |X⁽ⁿ⁾•M − X•M|_∞ = 0 as well as lim_{n→∞} |X⁽ⁿ⁾•M − Y•M|_∞ = 0. Therefore X•M = Y•M as elements of M₂(Ω, 𝔉, {𝔉_t}, P).

To prove (2), let {X⁽ⁿ⁾ : n ∈ N} and {Y⁽ⁿ⁾ : n ∈ N} be two sequences of processes in L₀(Ω, 𝔉, {𝔉_t}, P) such that lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_∞^{[M],2} = 0 and lim_{n→∞} ‖Y⁽ⁿ⁾ − Y‖_∞^{[M],2} = 0. It
follows from Definition 12.3 that {αX⁽ⁿ⁾ + βY⁽ⁿ⁾}•M = α(X⁽ⁿ⁾•M) + β(Y⁽ⁿ⁾•M). Letting n → ∞ we obtain (2).

Now (2) implies that there exists a null set Λ in (Ω, 𝔉, P) such that for ω ∈ Λ^c we have ({αX + βY}•M)(·,ω) = α(X•M)(·,ω) + β(Y•M)(·,ω), and in particular ({αX + βY}•M)(t,ω) = α(X•M)(t,ω) + β(Y•M)(t,ω) for every t ∈ R₊. Since Λ ∈ 𝔉₀ ⊂ 𝔉_t, we have (3). By the fact that 1 ∈ L₀(Ω, 𝔉, {𝔉_t}, P), (4) follows from Definition 12.3. ∎

Proposition 12.13. Let M ∈ M₂(Ω, 𝔉, {𝔉_t}, P). Let X⁽ⁿ⁾, n ∈ N, and X be predictable processes which are in L_{2,∞}(R₊ × Ω, μ_[M], P) such that lim_{n→∞} ‖X⁽ⁿ⁾ − X‖_∞^{[M],2} = 0. Then lim_{n→∞} |X⁽ⁿ⁾•M − X•M|_∞ = 0.

Proof. There exists Y⁽ⁿ⁾ ∈ L₀(Ω, 𝔉, {𝔉_t}, P) such that ‖Y⁽ⁿ⁾ − X⁽ⁿ⁾‖_∞^{[M],2} < 2⁻ⁿ for each n ∈ N by Theorem 12.8. Then by the triangle inequality for the metric of the quasinorm,

‖Y⁽ⁿ⁾ − X‖_∞^{[M],2} ≤ ‖Y⁽ⁿ⁾ − X⁽ⁿ⁾‖_∞^{[M],2} + ‖X⁽ⁿ⁾ − X‖_∞^{[M],2},

we have lim_{n→∞} ‖Y⁽ⁿ⁾ − X‖_∞^{[M],2} = 0. Now

|X⁽ⁿ⁾•M − X•M|_∞ ≤ |X⁽ⁿ⁾•M − Y⁽ⁿ⁾•M|_∞ + |Y⁽ⁿ⁾•M − X•M|_∞ = ‖X⁽ⁿ⁾ − Y⁽ⁿ⁾‖_∞^{[M],2} + ‖Y⁽ⁿ⁾ − X‖_∞^{[M],2}

by Observation 12.11, and thus lim_{n→∞} |X⁽ⁿ⁾•M − X•M|_∞ = 0. ∎
m g N we have
P • lim sup \(Xin) • M)t - (X • M)t\ = 0, n-,
° ° (6[0,m)
and moreover there exists a subsequence {n^ } of {n} and a null set A in (£2,5, P ) such that for every u> g Ac the sample functions (X(nfc) • M)(-,u>) converge to the sample function (X • M)(-,u) uniformly on every finite interval in R+.
§12. STOCHASTIC INTEGRALS WITH RESPECT TO MARTINGALES
241
Remark 12.15. For M g M 2 (Q,S, {&},P) and a predictable process X on the filtered space which is also in L2l00(R+ x Q.,p[MhP), the stochastic integral X • M is in M g M 2 (Q,5, {&}, P)- In particular it is a right-continuous martingale and thus according to Theorem 9.2 almost every sample function of X • M is not only right-continuous but also has finite left limit everywhere on R+ and is bounded on every finite interval in R+ and has at most countably many points of discontinuity. (X • M)(t,u>) is related to the limit of Riemann-Stieltjes sums of X(-, w) with respect to M(-,u>) in the following way. Proposition 12.16 Let M 6 M 2 (Q, S, {St}, P) andX be a bounded left-continuous adapt ed process on the filtered space. For n g N, let A„ be a partition ofR+ into subintervals by a strictly increasing sequence {tn^ : k g Z+} in R+ such that tnfl = 0 and tn^ f oo as k —» co and lim |A„| = 0 wnere |A„| = s u p ^ ^ t — t„,*:-i). Then for every t G M.+we have (1)
lim || £x(t„,t_,){Af(t n i J b ) - M(t„,fc_,)} - (X . M)(i)|| 2 = 0,
where tUiPn = t and £n/t < i / c k = 0,... ,pn — 1 /or n 6 N. fe particular (2)
P • lim £-X"(t„,*-i){M(t n , t ) - M« n , fc _,)} = ( ^ - M)(t).
Proof. Note that since .X is a left-continuous and adapted process, it is a predictable process. The fact that X is bounded implies that X is in L2i00(R+ x Q, ^ [ M ] , P) by Observation 11.19. Thus X • M exists in M 2 (Q,5, {St},P\ For n € N, define a process X ( n ) in L 0 ( Q , 5 , { 3 t } , P ) by setting X ( n ) (s) = X(0)lm(s)
+ Y, XQtHfi-ilhnj^tntlis)
^ a g R+.
fcgN
The left-continuity of every sample function of X implies that lim X M(s, u>) = X(S,LU) for (s,w) g R+xQ. Now[M] g A(£2,5, {5<},P) and in particular [M], is integrable for every t g R+. From this it follows by applying the Bounded Convergence Theorem and the Dom inated Convergence Theorem that for every m g N we have Jirr^ ||X
= 0 by Remark 11.18. Thus by Definition 12.9, we have
242
CHAPTER 3. STOCHASTIC ; Mloo = 0. This implies that lim I X ( n ) mM~X
lim | Xin) »M-X
M
every i G R+. Now since X
INTEGRALS = M\t
= 0 for
G L 0 (A 3 , {&}, P ) we have according to Definition 12.3
(X (n) • M)(t) =
'£X(tn,k-1){M(trhk)
- M(i„,*_,)}.
k=\
Thus (1) holds. For M,N £ M 2 (Q, 5, {&}, P) and predictable processes X G L2l00(R+ x Q, ^ [ M ] , P ) and Y £ L2i00(R+ x £1, fi[N],P), both of the stochastic integrals X ; M and F • iV" are in M 2 ( 0 , 5 , { 5 J , P ) so that the quadratic variation processes [X • M], [Y • N], and [X • M, Y • JV] exist. In next theorem under the assumption that X and Y are almost surely continuous, we show that these quadratic variation processes are obtained by integrating the sample functions of X2, Y2, and XY with respect to the families of Lebesgue-Stieltjes measures /U[M], M[JV]> and HIM,N] determined by the quadratic variation processes [M], [N], and[M,7V]. Theorem 12.17. Let M,N G M5(Q,3,{3<},P) and let X and Y be two predictable processes on the filtered space (£2,3, {&}, P) such that X G L2,00(!R+ x Q,, fj,[M], P ) and Y G L2,oo(R+ x Q, /i[jv), P). Tften rnere arista a null set A in (£2,3, P) Men /ftaf o« Ac, for every t £R+, we have [X • M, y • N]t = I
(1)
X(s)Y{s) d[M, N](s),
J[0,t]
and thus for any s, t G K+, s < t, we have (2)
E[{(X . M}« - (X . M ) J { ( F . TV)* - (Y . 7V)S} |3 S ] = E[(X . M W F . N)t - (X . M) S (F ; tf), | § s ] E /
X(u)Y(u)d[M,N](u)\ds
a.e.
on(Q,$s,P).
J(s,t)
In particular there exists a null set A in (Q, 3 , P ) SMC/I ffta? on Ac we have for every t G K+ (3)
X2(s)d[M](s),
[X*M]t=[ J[0,t]
and (4)
E[{(X . M), - (X • M ) J 2 | & ] = E[(X ; M) 2 - (X • M ) 2 | 3 J [/ LJ(s,(]
X 2 (u)d[Af](u)|&
a.e. o n ( Q , 3 s , P ) .
112. STOCHASTIC INTEGRALS WITH RESPECT TO MARTINGALES
243
Proof. Let A be the union of the two null sets A^ and ATCiXiy in the statement of Theorem 12.6. Let V = {Vt : t G R+} be defined by (5)
//p,t]*(^w)y(*,w)d[M,JV](3,w) 10
V(t ^= V ' '
for(t,w) € R+ x Ac f o r ( t , w ) G R + x A.
By (1) of Theorem 12.6, V is real valued function on R+ x Q.. To prove (1), we show that V 6 V(Q,5, {&}, P ) and that (X s M ) ( F ; TV) - V is a null at 0 right-continuous martingale. Note that the first equality in (2) is an immediate consequence of the martingale property of X • M and Y • N. Note also that proving that {X ; M)(Y • N) — V is a martingale is equivalent to proving the second equality in (2). To show that V G V(Q,§, {&},P), note first that by writing V = A' - A" where A', A" G A(Q, 5 , {& }, P ) by Theorem 11.12 and by applying Theorem 10.11 we have the 5t-measurability of Vt for every t G R+. Thus V is an adapted process. The right-continuity of V follows from that of [M, N]. For t G M+, let <„>fc = kl~nt forfc= 0 , . . . , 2" and n G N. Then by (5) we have /
<
X( S )F(s)d[M,JV]( S )
\X{s)Y{s)\d\[M,N}\{s)< oo on Ac
I J[0,t]
by (1) of Theorem 12.6, and thus | 7 | ( t ) = s u p £ |F(r„, k ) - K«„, fc _,)| < oo on Ac, that is, V(•, u>) is a right-continuous function of bounded variation on [0, t] for every t G and vanishes at t = 0 for w G Ac. By (2) of Theorem 12.6, we have E(|V|«)<E|7
|X(3)F( 5 )|d|[M,iV]|( 5 )
< CO,
U[0,«]
and thus | F | t is integrable for every t G R+. This shows that | F | is an ii-process and completes the proof that V G V(Q,5, {&}, P). Let us prove the second equality in (2). Now by Theorem 12.8, there exist two sequences {JfW : „ G N} and {F<"> : n G N} in L 0 (O,3, {&},P) with Jim ||X
CHAPTER 3. STOCHASTIC
244
INTEGRALS
and Jam ||F ( n ) - Y\\[N^P = 0. Since X ( n ) and YM are in L 0 (Q,5, {&}, P ) , by Proposition 12.5 for any s , i £ R + , 5 < t, we have E[(X("> • M)t(YM
(6)
• iV), - (X("> • M) s (y ( n ) • X ) s | & ]
= E [ / X w («)F w («)d[M,iV1Ctt)|& L/(*.*I
a.e. o n ( Q , 5 s , P ) .
Let us show that if we restrict (6) to a certain subsequence {ne} of {n} and let I —> 00, then we obtain the second equality in (2). Now the convergence lim \\Xin) — X\\[^J)'P = 0 implies lim I X ( n ) • M — X • Ml™ = 0 according to Definition 12.9 and this in turn implies lim I X ( n ) • M - X • MI, = 0 for every t e R+ by Remark 11.4. Thus (X ( n ) • M\ n—»-oo
converges to (X • M) t in i 2 ( ^ , 3 , P). Similarly (X<"> • M)s, (y ( n ) • M) 4 and (YM • M ) s converge to (X : M)s, (Y ; M)t and ( F • M) s respectively in L2(£l, 5, P). Thus (X ( n ) • M) t (Y (rl) • N)t - (XM • M) s (F ( n ) • N)s converges to (X • M)t(Y»N)t - (X • M)S(Y»N)S in Iq(Q, 5, P). Now the convergence of a sequence of random variables to a random vari able in Li(Q, 5, P) implies the convergence of the sequence of the conditional expectations with respect to an arbitrary sub-
(7)
• X ) s |5 S ]
fc—•■ 00
= E[(X.M)((F.X)(-(X.M)s(y.iV)s|5s]
a.e. o n ( ^ , 5 s , P ) .
Let us show next that (8)
limEi/"
Xlnk\u)YM(u)d[M,N](u)-
f
X(u)Y(u)d[M,N]{u)
0.
By the algebraic equality
x (nt) y (nt) - X Y = {x(n,i) - x}{y (nt) - y} + {x (nt) - x}y+x{y ( n t ) - y} and by (2) of Theorem 12.6, we have (9)
E ( /
X(nk\u)Ylnk)(u)d[M,N](u)-
f
I J(s,t]
<
E
X(u)Y(u)
d[M, N](u) \
J^s,t]
) - X(u)\
2
1/2
d[M](u)
E /
J M
2
\Y (u)-Y(u)\ d[N](u)
1/2
§ 12. STOCHASTIC INTEGRALS WITH RESPECT TO MARTINGALES +
E{/
\XM(.u)-X(u)\2d[M](u)Y/2E\f
+
E {/
X\u )d[M](u)
Y2(u)d[N](u)\
245
1/2
1/2
\Yint)(u) - Y(u)\2 d[N](u)
I J
KJ(s,t]
+
\\X&P\\Y™~Y\\™;P
Now lim ||X ( n t ) - X | | ^ ' p = 0 implies Jirr^ ||X(n*» - X | | 2 ^ ' P = 0. Similarly we have lim \\Ylnt) - y || 2 ^ ] ' P = 0. Thus if we let k -t oo in (9) we have (8). Now (8) implies the k—*oo
existence of a subsequence {n<} of {n^} such that (10)
XM(u)YM(u)d[M,N](u)\$s
lim E / 71—*00
J(s,t]
= E /
X(u)Y(.u)d[M,N](u)\$s
a.e. o n ( Q , 5 s , P ) .
U(s,t]
Restricting (6) to the subsequence {ne} and letting < - > o o and applying by (7) and (10), we have the second equality in (2). This completes the proof of (1) and (2). I As a consequence of Theorem 12.17, we have the following characterization of the stochastic integral of a process. Proposition 12.18. Let M G M 2 (Q, 3 , {& }, P) and let X be a predictable processes on the filtered space (Q, 5, {&}, P) such that X G L2l0O(K+ x ft, ^ [ M ) , P). 77zen X ~ M is the unique element in M 2 (£2,5, {$t},P) such that for every N e M 2 (Q, 5, {5t},P) 'Acre ejcto a null set A in (£2,5, -P) swc/i ' ^ on A!1 for every t € R+ we teve (1)
[X • M, iV], = /
X(s) d[M, N](s).
J[0,t]
Proof. By taking 7 = 1 e L2|0O(R+ x £1, nm, P) in Theorem 12.17, we see that X • M satisfies (1). To show the uniqueness, suppose 2 e M 2 ( f l , 5 , {& }, P) has the property that for every A^ 6 M 2 (Q, 5, {St} > P) there exists a null set A in (£2,5, P ) such that on Ac for every i e R + w e have (2)
[Z,N)t=
[ J[0,t]
X(s)d[M,N)(s).
246
CHAPTER 3. STOCHASTIC
INTEGRALS
Now (1) and (2) imply according to Proposition 11.26 that [X - M - Z,N] = 0 for every N G M 2 (Q, 5, {&}, P). Thus with N = X»M-Zwe have [X ; M - Z] = 0 and hence Z = X*M. ■ For M e M 2 (Q, 5, {&}, P ) and a predictable process X on the filtered space which is in L2,oo(R+ x fi, //[Mj, P), the stochastic integral X • M exists in M 2 (Q,5, {5<}, P). Thus if F is a predictable process on the filtered space and is also in L2i00(R+ x Q, H[x,Mh P)» then y • (X • M) is defined and is in M 2 (Q, 5, {&}, P)- Concerning F • (X • M) we have the following theorem. Theorem 12.19. Let M G M^(Q, 5, {5 ( }, P) and /ef X and F fee two predictable processes on the filtered space such that X, YX G L 2 T 0 0 (R + X Q, HIM],P)- ThenY G L2]00(R+ x Q, A'tx.M], P ) and there exists a null set A in (Q, 5, P) such that for u> G Ac we have (Y • (X • M))(-,a;) = ( F X • M)(-,u>). Proof. Since Y is a predictable process, it is a measurable process. Thus to show that it is in L^oo(R+ x Q, /Jpf.M], P ) it remains to show that for every t G R+ we have /
(1)
Jm.ti U[o,t]
Y2(s) d[X : M](s)
< oo.
Now according to Theorem 12.17 there exists a null set A in (Q., 5, P ) such that for to e Ac we have for every t G R+ [X»M](t,u:)=
X2(s,u,)d[M](s,uj).
I HO.t] JK.t]
Thus the Lebesgue-Stieltjes measure fi[x.M]{-,w) on (R+, <8Et) is absolutely continuous with respect to the Lebesgue-Stieltjes measure /XJMIO, U) with the Radon-Nikodym deriva tive ^^(S,UJ) dfi[M)
= X2(S,CO)
for,GR+.
Thus by the Lebesgue-Radon-Nikodym Theorem, we have / ■'[0,!]
Y2(s,u)d[X»M](s,uj)
= / V[0,t]
Y 2 (s,w)X 2 (s,u>)d[M](s,uj)
112. STOCHASTIC INTEGRALS WITH RESPECT TO MARTINGALES
247
and therefore E
Y2(s)d[X»M](s)
Y2(s)X\s)d[M](s) < oo
=E / J[o,t]
J[o,t]
since YX e L2i00(R+ x Q, ^ [ M ] , P). This proves (1). Now that Y is inL 2 , co (R + x Q, ^[x.M], -P), the stochastic integral Y»(X»M) is defined and is in M2(£2, 5, {&}, -P). On the other hand since X and F are predictable processes, so is XY. Then since YX is in L2i00(R+ x Q, // [M) , P), the stochastic integral F X • M is defined and is in M | ( Q , 5 , {&}, P). Now according to Proposition 12.18, Y • (X s M) is the unique element in M 2 (Q,5, {&}, P) such that for every JV in M 2 (Q, 5, {5;}, P) there exists a null set Ai in (£2, 5, P ) such that on Aj we have for every t € R+ [ F • (X ■ M), 7V]( = /
(2)
Y(s) d[X . M, iV](5),
J[0,t]
and similarly F X • M is the unique element in M2(£2, 5, {5;}, P) such that for every N in M 2 (Q, 5, {5t} i P ) there exists a null set A2 in (Q, 5, P) such that on A2 we have for every teR+ [YX • M, N]t = f
(3)
Y(s)X(s) d[M, N](s).
J[0,t]
Thus if we show that there exists a null set A3 in (£2,5, P) such that on A3 we have for every t g R+ (4)
/
Y(s) d[X • M, N]{s) = /
J[0,(]
F(s)X(s)d[M,iV](s),
./[0,«]
then we have { F • (X • M")}(-, w) = ( F X • M)(-,u>) for u> 6 Ac where A = U?=1 A,- and we are done. To prove (4), note that according to Theorem 12.17 there exists a null set A3 in (£2,5, P ) such that on A3 we have for every t € R+ [X • M, Nit = [X;M,l»N]t=
f
X(s) d[M, N](s).
J[0,t]
Thus for a; e A3, H[X»M,N](-,U) is absolutely continuous with respect to )J.[M,N](-,W) with Radon-Nikodym derivative X(-,OJ) so that by the Lebesgue-Radon-Nikodym Theorem we have (4). ■ [III] Truncating the Integrand and Stopping the Integrator by Stopping Times
248
CHAPTER 3. STOCHASTIC
INTEGRALS
Theorem 12.20. Let M,N £ M2(£2,g, {3 t },P) and let X and Y be two predictable processes on the filtered space (Q, 5, {&}, P) such that X £ L2|0O(R+ x Q, PJMJ, -P) and Y £ L2,oo(lR+ x Q, /Li[AT], P). i e / 5 and T be two stopping times on the filtered space such that S
E[(X . M)TM - (X . M)SM\ds]
= 0 a.e. on (Q.&j.F),
and (2)
E[{(X . M)TM
- (X . M)SM}{{Y
= E[(X . M)TM(Y
• 7V)TA( - ( T • N)SM)
. 7V)TAi - (X : M)SM(Y
\3s]
; A0SA« \$s)
= E[fiSMjTM]X(u)Y(u)d[M,Nl(u)\$s]
a.e.on(n,ds,P),
and in particular (3)
E[{(X;M)TA(-(X;M)SAt}2|fo] = E[(X.MfTAt-(X.M)2SM\$s] = E[/ ( S A ( ] T A ( ] X 2 («)d[M]( M )|5 s ] a.e. o « ( a , 5 s , P ) .
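The isometry-type identity (3) can be checked numerically in a special case. The following sketch (not from the text) takes $M$ to be a standard Brownian motion, for which $[M]_t = t$, a deterministic step function as the predictable integrand $X$, and the constant stopping times $S = 0.25$, $T = 0.75$; it compares the unconditional form of (3), $\mathrm{E}[\{(X \bullet M)_{T} - (X \bullet M)_{S}\}^2] = \int_S^T X^2(u)\,du$, by Monte Carlo. All grid parameters are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, t_max = 50_000, 200, 1.0
dt = t_max / n_steps
grid = np.linspace(0.0, t_max, n_steps + 1)

# Increments of a standard Brownian motion M, so that [M]_t = t.
dM = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

# X: deterministic (hence predictable) step function, evaluated at left endpoints.
X = np.where(grid[:-1] < 0.5, 1.0, 2.0)

# (X . M)_{t_k} = sum_{j <= k} X(t_{j-1}) (M_{t_j} - M_{t_{j-1}}).
I = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(X * dM, axis=1)], axis=1)

S, T = 0.25, 0.75                      # constant stopping times, S <= T
iS, iT = int(S / dt), int(T / dt)
lhs = np.mean((I[:, iT] - I[:, iS]) ** 2)   # E[{(X.M)_T - (X.M)_S}^2]
rhs = np.sum(X[iS:iT] ** 2) * dt            # int_(S,T] X^2(u) d[M](u) = 1.25
print(lhs, rhs)
```

The two printed values agree up to Monte Carlo error, illustrating that stopping the integral and integrating $X^2$ against $[M]$ over $(S, T]$ give the same second moment.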
Proof. Since $X \bullet M$ is a right-continuous martingale with respect to a right-continuous filtration, $\{(X \bullet M)_{S \wedge t}, (X \bullet M)_{T \wedge t}\}$ is a two-term martingale with respect to $\{\mathfrak{F}_S, \mathfrak{F}_T\}$ for every $t \in \mathbb{R}_+$ according to Theorem 8.13, that is,

(4)  $\mathrm{E}[(X \bullet M)_{T \wedge t} \mid \mathfrak{F}_S] = (X \bullet M)_{S \wedge t}$  a.e. on $(\Omega, \mathfrak{F}_S, P)$.

From this follows (1). Similarly we have

(5)  $\mathrm{E}[(Y \bullet N)_{T \wedge t} \mid \mathfrak{F}_S] = (Y \bullet N)_{S \wedge t}$  a.e. on $(\Omega, \mathfrak{F}_S, P)$.

The first equality in (2) then follows from (4) and (5). Since $(X \bullet M)(Y \bullet N) - [X \bullet M, Y \bullet N]$ is a right-continuous martingale with respect to a right-continuous filtration, by Theorem 8.13 again we have

$\mathrm{E}[(X \bullet M)_{T \wedge t}(Y \bullet N)_{T \wedge t} - [X \bullet M, Y \bullet N]_{T \wedge t} \mid \mathfrak{F}_S] = (X \bullet M)_{S \wedge t}(Y \bullet N)_{S \wedge t} - [X \bullet M, Y \bullet N]_{S \wedge t}$  a.e. on $(\Omega, \mathfrak{F}_S, P)$,

that is,

(6)  $\mathrm{E}[(X \bullet M)_{T \wedge t}(Y \bullet N)_{T \wedge t} - (X \bullet M)_{S \wedge t}(Y \bullet N)_{S \wedge t} \mid \mathfrak{F}_S] = \mathrm{E}[[X \bullet M, Y \bullet N]_{T \wedge t} - [X \bullet M, Y \bullet N]_{S \wedge t} \mid \mathfrak{F}_S]$  a.e. on $(\Omega, \mathfrak{F}_S, P)$.
§12. STOCHASTIC INTEGRALS WITH RESPECT TO MARTINGALES
According to (1) of Theorem 12.17, $[X \bullet M, Y \bullet N](t, \omega)$ is equal to the integral of $X(\cdot, \omega)Y(\cdot, \omega)$ with respect to the signed Lebesgue-Stieltjes measure $\mu_{[M,N]}(\cdot, \omega)$ on $[0,t]$ for every $t \in \mathbb{R}_+$ when $\omega \in \Lambda^c$, where $\Lambda$ is a null set in $(\Omega, \mathfrak{F}, P)$. Thus we have

$[X \bullet M, Y \bullet N]_{T \wedge t}(\omega) - [X \bullet M, Y \bullet N]_{S \wedge t}(\omega) = \int_{(S \wedge t, T \wedge t]} X(u, \omega)Y(u, \omega)\, d[M,N](u, \omega)$

for every $t \in \mathbb{R}_+$ when $\omega \in \Lambda^c$. Using this equality in (6), we have the second equality in (2). ■

Let $M \in \mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and let $X$ be a predictable process on $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $X \in L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$. We shall show that for every stopping time $T$ on the filtered space we have $X^{[T]} \bullet M = (X \bullet M)^{T \wedge}$, that is, the stochastic integral of the integrand truncated by the stopping time is equal to the stochastic integral stopped at the stopping time. We shall prove this first for bounded adapted left-continuous simple processes and then extend it to the general case.

Lemma 12.21. Let $M \in \mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and $X \in \mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Then for any stopping time $T$ on the filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that $(X^{[T]} \bullet M)(\cdot, \omega) = (X \bullet M)^{T \wedge}(\cdot, \omega)$ for $\omega \in \Lambda^c$.

Proof. Let $X \in \mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ be given by

(1)  $X(t, \omega) = \xi_0(\omega)\mathbf{1}_{\{0\}}(t) + \sum_{k \in \mathbb{N}} \xi_k(\omega)\mathbf{1}_{(t_{k-1}, t_k]}(t)$  for $(t, \omega) \in \mathbb{R}_+ \times \Omega$,

where $\{t_k : k \in \mathbb{Z}_+\}$ is a strictly increasing sequence in $\mathbb{R}_+$ with $t_0 = 0$ and $\lim_{k \to \infty} t_k = \infty$, and $\{\xi_k : k \in \mathbb{Z}_+\}$ is a bounded sequence of random variables on $(\Omega, \mathfrak{F}, P)$ such that $\xi_0$ is $\mathfrak{F}_{t_0}$-measurable and $\xi_k$ is $\mathfrak{F}_{t_{k-1}}$-measurable for $k \in \mathbb{N}$. Let $T$ be a stopping time on the filtered space. The sample functions of the truncated process $X^{[T]}$ are still left-continuous step functions, but $X^{[T]}$ may no longer be a simple process, since $T(\omega) \in \mathbb{R}_+$ may be a point of discontinuity of $X^{[T]}(\cdot, \omega)$ and $T(\omega)$ varies with $\omega \in \Omega$. Let us construct a sequence of discrete-valued stopping times $\{T_n : n \in \mathbb{N}\}$ which converges to $T$ and for which $X^{[T_n]}$ is in $\mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ for every $n \in \mathbb{N}$. For $n \in \mathbb{N}$, let $s_{n,k} = k2^{-n}$, $k \in \mathbb{Z}_+$, and let $\{u_{n,k} : k \in \mathbb{Z}_+\} = \{t_k : k \in \mathbb{Z}_+\} \cup \{s_{n,k} : k \in \mathbb{Z}_+\}$, where the $u_{n,k}$ are distinct and numbered in increasing order. Let us define an $\overline{\mathbb{R}}_+$-valued function $\vartheta_n$ on $\overline{\mathbb{R}}_+$ by setting

(2)  $\vartheta_n(t) = \begin{cases} u_{n,1} & \text{for } t \in [u_{n,0}, u_{n,1}] \\ u_{n,k} & \text{for } t \in (u_{n,k-1}, u_{n,k}] \text{ and } k \ge 2 \\ \infty & \text{for } t = \infty, \end{cases}$
and let

(3)  $T_n = \vartheta_n \circ T$.

Since $T$ is an $\mathfrak{F}_T/\mathfrak{B}_{\overline{\mathbb{R}}_+}$-measurable mapping of $\Omega$ into $\overline{\mathbb{R}}_+$ and $\vartheta_n$ is a $\mathfrak{B}_{\overline{\mathbb{R}}_+}/\mathfrak{B}_{\overline{\mathbb{R}}_+}$-measurable mapping of $\overline{\mathbb{R}}_+$ into $\overline{\mathbb{R}}_+$, $T_n$ is an $\mathfrak{F}_T/\mathfrak{B}_{\overline{\mathbb{R}}_+}$-measurable mapping of $\Omega$ into $\overline{\mathbb{R}}_+$. Also, since $\vartheta_n(t) \ge t$ for $t \in \overline{\mathbb{R}}_+$, we have $T_n \ge T$ on $\Omega$. Thus by Theorem 3.6, $T_n$ is a stopping time. Let us show that $X^{[T_n]}$ is in $\mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and

(4)  $X^{[T_n]} \bullet M = (X \bullet M)^{T_n \wedge}$  on $\mathbb{R}_+ \times \Omega$.
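The map $\vartheta_n$ in (2) sends $t$ to the smallest grid point $u_{n,k} \ge t$, so $T_n = \vartheta_n \circ T$ dominates $T$ and decreases to it as the dyadic mesh refines. A minimal sketch of this discretization (not from the text); the finite sequence `t_seq` stands in for $\{t_k\}$, the value $T = 0.37$ is an arbitrary sample value of the stopping time, and the convention at $t = 0$ is simplified.

```python
import math

def theta_n(t, n, t_seq=(0.0, 0.3, 1.0, 2.5)):
    """Smallest point of {k 2^-n : k in Z_+} union t_seq that is >= t.

    Mimics the map theta_n of (2): theta_n(t) >= t, theta_n(t) decreases
    to t as n grows, and theta_n(inf) = inf.  (The text maps t = 0 to
    u_{n,1}; this sketch simply returns the smallest grid point >= t.)
    """
    if math.isinf(t):
        return math.inf
    dyadic = math.ceil(t * 2 ** n) / 2 ** n          # smallest dyadic >= t
    above = [u for u in t_seq if u >= t]             # grid points from t_seq
    return min([dyadic] + above)

T = 0.37                                 # a sample value of the stopping time T
approx = [theta_n(T, n) for n in range(1, 10)]       # T_1, T_2, ..., T_9
print(approx)
```

The printed sequence is nonincreasing, always at or above $T$, and approaches $T$ at dyadic rate, which is exactly the behavior used later in the proof ($T_{n} \downarrow T$).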
The process $X$ given by (1) can also be written as

(5)  $X(t, \omega) = \eta_{n,0}(\omega)\mathbf{1}_{\{0\}}(t) + \sum_{k \in \mathbb{N}} \eta_{n,k}(\omega)\mathbf{1}_{(u_{n,k-1}, u_{n,k}]}(t)$  for $(t, \omega) \in \mathbb{R}_+ \times \Omega$,

where for each $n \in \mathbb{N}$, $\{\eta_{n,k} : k \in \mathbb{Z}_+\}$ is a bounded sequence of random variables on $(\Omega, \mathfrak{F}, P)$ such that $\eta_{n,0}$ is $\mathfrak{F}_{u_{n,0}}$-measurable and $\eta_{n,k}$ is $\mathfrak{F}_{u_{n,k-1}}$-measurable for $k \in \mathbb{N}$. By (2) of Observation 3.34, we have

(6)  $X^{[T_n]}(t, \omega) = \begin{cases} X(t, \omega) & \text{for } t \le T_n(\omega) \\ 0 & \text{for } t > T_n(\omega). \end{cases}$

Clearly $X^{[T_n]}$ is a bounded process. By Observation 3.35, $X^{[T_n]}$ is an adapted process. It remains to show that $X^{[T_n]}$ is a left-continuous simple process. Now $T_n$ assumes values in $\{u_{n,k} : k \in \mathbb{N}\} \cup \{\infty\}$ and in fact we have

(7)  $\{T_n = u_{n,1}\} = \{T \in [u_{n,0}, u_{n,1}]\}$,  $\{T_n = u_{n,k}\} = \{T \in (u_{n,k-1}, u_{n,k}]\}$ for $k \ge 2$,  $\{T_n = \infty\} = \{T = \infty\}$.

Define a sequence of real valued random variables $\{\zeta_{n,k} : k \in \mathbb{Z}_+\}$ by setting for $k = 0$

(8)  $\zeta_{n,0} = \eta_{n,0}$,

and for $k \ge 1$

(9)  $\zeta_{n,k} = \begin{cases} \eta_{n,k} & \text{on } \{T_n \ge u_{n,k}\} = \{T > u_{n,k-1}\} \\ 0 & \text{on } \{T_n \le u_{n,k-1}\} = \{T \le u_{n,k-1}\}. \end{cases}$

Since $\{T_n \le u_{n,k-1}\} \in \mathfrak{F}_{u_{n,k-1}}$ by the fact that $T_n$ is a stopping time, we have $\{T_n \ge u_{n,k}\} = \{T_n \le u_{n,k-1}\}^c \in \mathfrak{F}_{u_{n,k-1}}$. Then by the $\mathfrak{F}_{u_{n,k-1}}$-measurability of $\eta_{n,k}$, our $\zeta_{n,k}$ is
$\mathfrak{F}_{u_{n,k-1}}$-measurable. By (5) and (6) and the definition of $\{\zeta_{n,k} : k \in \mathbb{Z}_+\}$ by (8) and (9), we have

(10)  $X^{[T_n]}(t, \omega) = \zeta_{n,0}(\omega)\mathbf{1}_{\{0\}}(t) + \sum_{k \in \mathbb{N}} \zeta_{n,k}(\omega)\mathbf{1}_{(u_{n,k-1}, u_{n,k}]}(t)$  for $(t, \omega) \in \mathbb{R}_+ \times \Omega$.

Thus $X^{[T_n]}$ is a bounded adapted left-continuous simple process, that is, $X^{[T_n]}$ is in $\mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Now by (3) in Definition 12.3 and (10), we have

(11)  $(X^{[T_n]} \bullet M)(t) = \sum_{k \in \mathbb{N}} \zeta_{n,k}\{M(u_{n,k} \wedge t) - M(u_{n,k-1} \wedge t)\}$

and similarly by (5) we have

(12)  $(X \bullet M)(T_n \wedge t) = \sum_{k \in \mathbb{N}} \eta_{n,k}\{M(u_{n,k} \wedge T_n \wedge t) - M(u_{n,k-1} \wedge T_n \wedge t)\}$.

Take an arbitrary $\omega \in \Omega$. Recalling (7), let us assume that $T_n(\omega) = u_{n,k_0}$ for some $k_0 \in \mathbb{Z}_+$. Then $T_n(\omega) \ge u_{n,k}$ and thus $\zeta_{n,k}(\omega) = \eta_{n,k}(\omega)$ for $k = 1, \ldots, k_0$, and similarly $T_n(\omega) \le u_{n,k-1}$ and thus $\zeta_{n,k}(\omega) = 0$ for $k \ge k_0 + 1$ by (9). Therefore from (11) we have

(13)  $(X^{[T_n]} \bullet M)(t, \omega) = \sum_{k=1}^{k_0} \eta_{n,k}(\omega)\{M(u_{n,k} \wedge t, \omega) - M(u_{n,k-1} \wedge t, \omega)\}$.

On the other hand, since $T_n(\omega) = u_{n,k_0}$ we have $u_{n,k-1} \wedge T_n(\omega) = T_n(\omega) = u_{n,k} \wedge T_n(\omega)$ for $k \ge k_0 + 1$. Thus from (12) we have

(14)  $(X \bullet M)(T_n(\omega) \wedge t, \omega) = \sum_{k=1}^{k_0} \eta_{n,k}(\omega)\{M(u_{n,k} \wedge t, \omega) - M(u_{n,k-1} \wedge t, \omega)\}$.

By (13) and (14), we have (4) for the case where $T_n(\omega) = u_{n,k_0}$ for some $k_0 \in \mathbb{Z}_+$. For the case where $T_n(\omega) = \infty$, (4) holds likewise by applying (9). Now for any $t \in \mathbb{R}_+$, we have

$\|X^{[T_n]} - X^{[T]}\|_{2,t}^{[M],P} = \Big[\int_\Omega \Big\{\int_{[0,t]} \big|\mathbf{1}_{\{s \le T_n\}} - \mathbf{1}_{\{s \le T\}}\big| X^2(s)\, d[M](s)\Big\}\, dP\Big]^{1/2}.$

According to (1) in the proof of Theorem 3.38, the first factor in the integrand above converges to 0 on $\mathbb{R}_+ \times \Omega$. Since $X$ is a bounded process, the integrand is bounded. Thus by
the Bounded Convergence Theorem and the Dominated Convergence Theorem, we have $\lim_{n \to \infty} \|X^{[T_n]} - X^{[T]}\|_{2,t}^{[M],P} = 0$ for every $t \in \mathbb{R}_+$. Thus $\lim_{n \to \infty} \|X^{[T_n]} - X^{[T]}\|_{2,\infty}^{[M],P} = 0$ according to Remark 11.16. Then $\lim_{n \to \infty} \|X^{[T_n]} \bullet M - X^{[T]} \bullet M\|_\infty = 0$ by Definition 12.9. Thus by Proposition 11.5, there exist a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ and a subsequence $\{n_k\}$ of $\{n\}$ such that

(15)  $\lim_{k \to \infty} (X^{[T_{n_k}]} \bullet M)(t, \omega) = (X^{[T]} \bullet M)(t, \omega)$  for $(t, \omega) \in \mathbb{R}_+ \times \Lambda^c$.

Since $\vartheta_n(t) \downarrow t$ for $t \in \mathbb{R}_+$, we have $T_{n_k} \downarrow T$ and then $T_{n_k} \wedge t \downarrow T \wedge t$ on $\Omega$ for every $t \in \mathbb{R}_+$. Since $X \bullet M$ is right-continuous, we have

(16)  $\lim_{k \to \infty} (X \bullet M)(T_{n_k}(\omega) \wedge t, \omega) = (X \bullet M)(T(\omega) \wedge t, \omega)$  for $(t, \omega) \in \mathbb{R}_+ \times \Omega$.

Taking the subsequence $\{n_k\}$ in (4) and using (15) and (16), we have the proof of the lemma. ■

Theorem 12.22. Let $M \in \mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and let $X$ be a predictable process on the filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $X \in L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$. Let $T$ be a stopping time on the filtered space. Then there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that $(X^{[T]} \bullet M)(\cdot, \omega) = (X \bullet M)^{T \wedge}(\cdot, \omega)$ for $\omega \in \Lambda^c$.

Proof. By Theorem 12.8, there exists a sequence $\{X^{(n)} : n \in \mathbb{N}\}$ in $\mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $\lim_{n \to \infty} \|X^{(n)} - X\|_{2,\infty}^{[M],P} = 0$ and thus $\lim_{n \to \infty} \|X^{(n)} \bullet M - X \bullet M\|_\infty = 0$ by Definition 12.9. Then by Remark 12.14, there exist a null set $\Lambda_1$ in $(\Omega, \mathfrak{F}, P)$ and a subsequence $\{n_k\}$ of $\{n\}$ such that $\lim_{k \to \infty} (X^{(n_k)} \bullet M)(t, \omega) = (X \bullet M)(t, \omega)$ for $(t, \omega) \in \mathbb{R}_+ \times \Lambda_1^c$. This implies that

(1)  $\lim_{k \to \infty} (X^{(n_k)} \bullet M)(T(\omega) \wedge t, \omega) = (X \bullet M)(T(\omega) \wedge t, \omega)$  for $(t, \omega) \in \mathbb{R}_+ \times \Lambda_1^c$.

Since

$|(X^{(n)})^{[T]} - X^{[T]}| = |\mathbf{1}_{[0,T]}\{X^{(n)} - X\}| \le |X^{(n)} - X|$,

the convergence $\lim_{n \to \infty} \|X^{(n)} - X\|_{2,\infty}^{[M],P} = 0$ implies $\lim_{n \to \infty} \|(X^{(n)})^{[T]} - X^{[T]}\|_{2,\infty}^{[M],P} = 0$. Now since $X^{(n)}$ is a predictable process, $(X^{(n)})^{[T]}$ is a predictable process according to Observation 3.35. Clearly $(X^{(n)})^{[T]} \in L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$. Thus by Proposition 12.13 we have $\lim_{n \to \infty} \|(X^{(n)})^{[T]} \bullet M - X^{[T]} \bullet M\|_\infty = 0$. According to Lemma 12.21 we have $(X^{(n)})^{[T]} \bullet M = (X^{(n)} \bullet M)^{T \wedge}$. Thus we have $\lim_{n \to \infty} \|(X^{(n)} \bullet M)^{T \wedge} - X^{[T]} \bullet M\|_\infty = 0$.
This implies according to Remark 12.14 the existence of a null set $\Lambda_2$ and a subsequence $\{n_\ell\}$ of $\{n_k\}$ such that

(2)  $\lim_{\ell \to \infty} (X^{(n_\ell)} \bullet M)(T(\omega) \wedge t, \omega) = (X^{[T]} \bullet M)(t, \omega)$  for $(t, \omega) \in \mathbb{R}_+ \times \Lambda_2^c$.

With $\Lambda = \Lambda_1 \cup \Lambda_2$, we have the proof of the theorem by (1) and (2). ■

In Theorem 12.22 we showed that $X^{[T]} \bullet M = (X \bullet M)^{T \wedge}$. We show next that $X \bullet M^{T \wedge} = (X \bullet M)^{T \wedge}$. For this we need the following lemma.

Lemma 12.23. Let $M \in \mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and let $T$ be a stopping time on the filtered space. Then for the two families of Lebesgue-Stieltjes measures $\mu_{[M]}$ and $\mu_{[M^{T \wedge}]}$ on $(\mathbb{R}_+, \mathfrak{B}_{\mathbb{R}_+})$ there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that

(1)  $\mu_{[M^{T \wedge}]}(E, \omega) \le \mu_{[M]}(E, \omega)$ for every $E \in \mathfrak{B}_{\mathbb{R}_+}$, when $\omega \in \Lambda^c$.
Proof. Since $M$ is in $\mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, $M^{T \wedge}$ is in $\mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ by Theorem 8.12. Thus the quadratic variation process $[M^{T \wedge}]$ and the family of Lebesgue-Stieltjes measures $\mu_{[M^{T \wedge}]}$ on $(\mathbb{R}_+, \mathfrak{B}_{\mathbb{R}_+})$ are defined. If $X \in \mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, then $X \bullet M$ and $X \bullet M^{T \wedge}$ are defined. For $X \in \mathbf{L}_0$, integrating the equality (4) in Proposition 12.5, we have

(2)  $\mathrm{E}[(X \bullet M)_t^2 - (X \bullet M)_0^2] = \mathrm{E}\Big[\int_{[0,t]} X^2(u)\, d[M](u)\Big]$

and similarly

(3)  $\mathrm{E}[(X \bullet M^{T \wedge})_t^2 - (X \bullet M^{T \wedge})_0^2] = \mathrm{E}\Big[\int_{[0,t]} X^2(u)\, d[M^{T \wedge}](u)\Big].$

Now for $X \in \mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ given by (1) of Definition 12.3 as

$X(t) = \xi_0 \mathbf{1}_{\{0\}}(t) + \sum_{k \in \mathbb{N}} \xi_k \mathbf{1}_{(t_{k-1}, t_k]}(t)$  for $t \in \mathbb{R}_+$,

we have by (3) of Definition 12.3

$(X \bullet M)(t) = \sum_{k \in \mathbb{N}} \xi_k \{M(t_k \wedge t) - M(t_{k-1} \wedge t)\}$  for $t \in \mathbb{R}_+$.
Replacing $M$ with $M^{T \wedge}$, we have for $t \in \mathbb{R}_+$

$(X \bullet M^{T \wedge})(t) = \sum_{k \in \mathbb{N}} \xi_k \{M^{T \wedge}(t_k \wedge t) - M^{T \wedge}(t_{k-1} \wedge t)\} = \sum_{k \in \mathbb{N}} \xi_k \{M(t_k \wedge T \wedge t) - M(t_{k-1} \wedge T \wedge t)\} = (X \bullet M)(T \wedge t),$

that is,

(4)  $X \bullet M^{T \wedge} = (X \bullet M)^{T \wedge}.$
Thus for the left side of (3) we have

(5)  $\mathrm{E}[(X \bullet M^{T \wedge})_t^2 - (X \bullet M^{T \wedge})_0^2] = \mathrm{E}[\{(X \bullet M)_{T \wedge t}\}^2 - \{(X \bullet M)_{T \wedge 0}\}^2] = \mathrm{E}[(X \bullet M)_{T \wedge t}^2 - (X \bullet M)_0^2] \le \mathrm{E}[(X \bullet M)_t^2 - (X \bullet M)_0^2],$

where the last inequality is from the fact that, since $X \bullet M$ is an $L_2$-martingale, $(X \bullet M)^2$ is a submartingale so that $\mathrm{E}[(X \bullet M)_{T \wedge t}^2] \le \mathrm{E}[(X \bullet M)_t^2]$. Using (3) and (2) in (5), we have

$\mathrm{E}\Big[\int_{[0,t]} X^2(u)\, d[M^{T \wedge}](u)\Big] \le \mathrm{E}\Big[\int_{[0,t]} X^2(u)\, d[M](u)\Big].$

Letting $t \to \infty$ we have by the Monotone Convergence Theorem

(6)  $\mathrm{E}\Big[\int_{\mathbb{R}_+} X^2(u)\, d[M^{T \wedge}](u)\Big] \le \mathrm{E}\Big[\int_{\mathbb{R}_+} X^2(u)\, d[M](u)\Big]$

for every stopping time $T$ and $X \in \mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$.

Let $Q$ be the collection of all rational numbers in $\mathbb{R}_+$. Let $a, b \in Q$, $a < b$, and $A \in \mathfrak{F}_a$. The mapping $X$ of $\mathbb{R}_+ \times \Omega$ into $\mathbb{R}$ defined by setting $X(t, \omega) = \mathbf{1}_A(\omega)\mathbf{1}_{(a,b]}(t)$ for $(t, \omega) \in \mathbb{R}_+ \times \Omega$ is then a member of $\mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, so that by applying (6) to our $X$ we have

$\int_A \mu_{[M^{T \wedge}]}((a,b], \cdot)\, dP \le \int_A \mu_{[M]}((a,b], \cdot)\, dP.$

Since this holds for an arbitrary $A \in \mathfrak{F}_a$, there exists a null set $\Lambda_{a,b}$ in $(\Omega, \mathfrak{F}, P)$ such that for $\omega \in \Lambda_{a,b}^c$ we have

(7)  $\mu_{[M^{T \wedge}]}((a,b], \omega) \le \mu_{[M]}((a,b], \omega).$

Let $\Lambda = \cup_{a,b \in Q,\, a<b}\, \Lambda_{a,b}$, a null set in $(\Omega, \mathfrak{F}, P)$, and let $\omega_0 \in \Lambda^c$. The collection $\mathfrak{I}$ of
intervals of the type $(a, b]$ where $a, b \in Q$, $a < b$, is a semialgebra of subsets of $(0, \infty)$. Let $\mathfrak{A}$ be the algebra generated by $\mathfrak{I}$, that is, the collection of all finite disjoint unions of members of $\mathfrak{I}$. Now since $\mu_{[M^{T \wedge}]}(\cdot, \omega_0)$ and $\mu_{[M]}(\cdot, \omega_0)$ are measures on $((0, \infty), \mathfrak{B}_{(0,\infty)})$ and since $\mu_{[M^{T \wedge}]}(E, \omega_0) \le \mu_{[M]}(E, \omega_0)$ for every $E \in \mathfrak{I}$, the inequality holds for every $E \in \mathfrak{A}$ by the additivity of the measures. Now $\mathfrak{B}_{(0,\infty)} = \sigma(\mathfrak{A})$. To extend the inequality from $\mathfrak{A}$ to $\mathfrak{B}_{(0,\infty)}$ by applying Theorem 1.13, let $E \in \mathfrak{B}_{(0,\infty)}$ and let $E_n = E \cap (0, n]$ for $n \in \mathbb{N}$. Then for every $\varepsilon > 0$ there exists some $F$ in the algebra $\mathfrak{A} \cap (0, n]$ of subsets of $(0, n]$ such that

$\mu_{[M^{T \wedge}]}(E_n \,\triangle\, F, \omega_0) + \mu_{[M]}(E_n \,\triangle\, F, \omega_0) < \varepsilon.$

Then $\mu_{[M^{T \wedge}]}(E_n \,\triangle\, F, \omega_0),\ \mu_{[M]}(E_n \,\triangle\, F, \omega_0) < \varepsilon$, so that

$\mu_{[M^{T \wedge}]}(E_n, \omega_0) \le \mu_{[M^{T \wedge}]}(F, \omega_0) + \varepsilon \le \mu_{[M]}(F, \omega_0) + \varepsilon \le \mu_{[M]}(E_n, \omega_0) + 2\varepsilon,$
where the second inequality is from the fact that $F \in \mathfrak{A} \cap (0, n] \subset \mathfrak{A}$. From the arbitrariness of $\varepsilon > 0$, we have $\mu_{[M^{T \wedge}]}(E_n, \omega_0) \le \mu_{[M]}(E_n, \omega_0)$. Letting $n \to \infty$, we have $\mu_{[M^{T \wedge}]}(E, \omega_0) \le \mu_{[M]}(E, \omega_0)$. Thus $\mu_{[M^{T \wedge}]}(\cdot, \omega_0) \le \mu_{[M]}(\cdot, \omega_0)$ on $\mathfrak{B}_{(0,\infty)}$. Since $\mu_{[M^{T \wedge}]}(\{0\}, \omega_0) = \mu_{[M]}(\{0\}, \omega_0) = 0$, we have $\mu_{[M^{T \wedge}]}(E, \omega_0) \le \mu_{[M]}(E, \omega_0)$ for every $E \in \mathfrak{B}_{\mathbb{R}_+}$. Since $\omega_0$ is an arbitrary element of $\Lambda^c$, this completes the proof of (1). ■

Theorem 12.24. Let $M \in \mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, let $X$ be a predictable process on the filtered space which is in $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$, and let $T$ be a stopping time on the filtered space. Then $X$ is in $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M^{T \wedge}]}, P)$, so that $X \bullet M^{T \wedge}$ is defined. For $X \bullet M^{T \wedge}$ there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that $(X \bullet M^{T \wedge})(\cdot, \omega) = (X \bullet M)^{T \wedge}(\cdot, \omega)$ for $\omega \in \Lambda^c$.

Proof. Since $M \in \mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, we have $M^{T \wedge} \in \mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ by Theorem 8.12. Let $\Lambda$ be the null set in Lemma 12.23. Let $X \in L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$. Then for $\omega \in \Lambda^c$ we have for every $t \in \mathbb{R}_+$

$\int_{[0,t]} X^2(s, \omega)\, d[M^{T \wedge}](s, \omega) \le \int_{[0,t]} X^2(s, \omega)\, d[M](s, \omega)$
so that

$\mathrm{E}\Big[\int_{[0,t]} X^2(s)\, d[M^{T \wedge}](s)\Big] \le \mathrm{E}\Big[\int_{[0,t]} X^2(s)\, d[M](s)\Big].$

Since $X$ is in $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$, the right side of the last inequality is finite, so the left side is finite and thus $X$ is in $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M^{T \wedge}]}, P)$. Then $X \bullet M^{T \wedge}$ is defined and is in $\mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$.

Let $X$ be a predictable process which is in $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$. By Theorem 12.8 and Definition 12.9, there exists a sequence $\{X^{(n)} : n \in \mathbb{N}\}$ in $\mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that

(1)  $\lim_{n \to \infty} \|X^{(n)} - X\|_{2,\infty}^{[M],P} = 0$ and $\lim_{n \to \infty} \|X^{(n)} \bullet M - X \bullet M\|_\infty = 0.$

Now for every $t \in \mathbb{R}_+$, Lemma 12.23 implies that

$\|X^{(n)} - X\|_{2,t}^{[M^{T \wedge}],P} = \mathrm{E}\Big[\int_{[0,t]} |X^{(n)}(s) - X(s)|^2\, d[M^{T \wedge}](s)\Big]^{1/2} \le \mathrm{E}\Big[\int_{[0,t]} |X^{(n)}(s) - X(s)|^2\, d[M](s)\Big]^{1/2} = \|X^{(n)} - X\|_{2,t}^{[M],P}.$

Since $\lim_{n \to \infty} \|X^{(n)} - X\|_{2,\infty}^{[M],P} = 0$ implies $\lim_{n \to \infty} \|X^{(n)} - X\|_{2,t}^{[M],P} = 0$, the last inequality implies $\lim_{n \to \infty} \|X^{(n)} - X\|_{2,t}^{[M^{T \wedge}],P} = 0$. Then $\lim_{n \to \infty} \|X^{(n)} - X\|_{2,\infty}^{[M^{T \wedge}],P} = 0$ by applying Remark 11.18. Consequently by Definition 12.9 we have

(2)  $\lim_{n \to \infty} \|X^{(n)} \bullet M^{T \wedge} - X \bullet M^{T \wedge}\|_\infty = 0.$

Now (1) implies $\lim_{n \to \infty} \|X^{(n)} \bullet M - X \bullet M\|_t = 0$ for every $t \in \mathbb{R}_+$ according to Remark 11.5, that is,

(3)  $\lim_{n \to \infty} \mathrm{E}[\{(X^{(n)} \bullet M) - (X \bullet M)\}^2(t)] = 0.$

Since $X^{(n)} \bullet M - X \bullet M$ is an $L_2$-martingale, $\{X^{(n)} \bullet M - X \bullet M\}^2$ is a submartingale and this implies that $\mathrm{E}[\{X^{(n)} \bullet M - X \bullet M\}^2(T \wedge t)] \le \mathrm{E}[\{X^{(n)} \bullet M - X \bullet M\}^2(t)]$. Therefore by (3) we have $\lim_{n \to \infty} \mathrm{E}[(X^{(n)} \bullet M - X \bullet M)^2(T \wedge t)] = 0$, that is,

$\lim_{n \to \infty} \mathrm{E}[\{(X^{(n)} \bullet M)^{T \wedge} - (X \bullet M)^{T \wedge}\}^2(t)] = 0.$

Now since $X^{(n)} \in \mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, $(X^{(n)} \bullet M)^{T \wedge} = X^{(n)} \bullet M^{T \wedge}$ as we saw in the proof of Lemma 12.23. Thus $\lim_{n \to \infty} \mathrm{E}[\{X^{(n)} \bullet M^{T \wedge} - (X \bullet M)^{T \wedge}\}^2(t)] = 0$, that is, $\lim_{n \to \infty} \|X^{(n)} \bullet M^{T \wedge} - (X \bullet M)^{T \wedge}\|_t = 0$. Since this holds for every $t \in \mathbb{R}_+$, we have according to Remark 11.5

(4)  $\lim_{n \to \infty} \|X^{(n)} \bullet M^{T \wedge} - (X \bullet M)^{T \wedge}\|_\infty = 0.$

By (2) and (4), we have $X \bullet M^{T \wedge} = (X \bullet M)^{T \wedge}$. ■
Proposition 12.25. Let $M, N \in \mathbf{M}_2^c(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and let $T$ be a stopping time on the filtered space. Then there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that for every $\omega \in \Lambda^c$ and $t \in \mathbb{R}_+$ we have

(1)  $\mu_{[M^{T \wedge}, N^{T \wedge}]}(\cdot, \omega) = \mu_{[M,N]^{T \wedge}}(\cdot, \omega)$  on $([0,t], \mathfrak{B}_{[0,t]})$,

and thus

(2)  $[M^{T \wedge}, N^{T \wedge}](\cdot, \omega) = [M,N]^{T \wedge}(\cdot, \omega)$.

In particular

(3)  $\mu_{[M^{T \wedge}]}(\cdot, \omega) = \mu_{[M]^{T \wedge}}(\cdot, \omega)$  on $(\mathbb{R}_+, \mathfrak{B}_{\mathbb{R}_+})$,

and

(4)  $[M^{T \wedge}](\cdot, \omega) = [M]^{T \wedge}(\cdot, \omega)$.

Proof. Since $M, N \in \mathbf{M}_2^c(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, we have $M^{T \wedge}, N^{T \wedge} \in \mathbf{M}_2^c(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Then for $X \in \mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, $X \bullet M^{T \wedge}$ and $X \bullet N^{T \wedge}$ are defined. According to Proposition 12.5 there exists a null set $\Lambda_1$ in $(\Omega, \mathfrak{F}, P)$ such that on $\Lambda_1^c$ we have for every $t \in \mathbb{R}_+$

(5)  $[X \bullet M^{T \wedge}, X \bullet N^{T \wedge}]_t = \int_{[0,t]} X^2(s)\, d[M^{T \wedge}, N^{T \wedge}](s).$

By Theorem 12.22 and Theorem 12.24, $X \bullet M^{T \wedge} = X^{[T]} \bullet M$ and $X \bullet N^{T \wedge} = X^{[T]} \bullet N$. Thus there exists a null set $\Lambda_2$ in $(\Omega, \mathfrak{F}, P)$ such that on $\Lambda_2^c$ we have for every $t \in \mathbb{R}_+$

(6)  $[X \bullet M^{T \wedge}, X \bullet N^{T \wedge}]_t = [X^{[T]} \bullet M, X^{[T]} \bullet N]_t = \int_{[0,t]} (X^{[T]})^2(s)\, d[M,N](s) = \int_{[0,t]} X^2(s)\, d[M,N]^{T \wedge}(s),$

where the second equality is by Theorem 12.17 and the third equality is by Definition 3.33. Let $\Lambda = \Lambda_1 \cup \Lambda_2$. Then by (5) and (6), we have on $\Lambda^c$ and for every $t \in \mathbb{R}_+$

$\int_{[0,t]} X^2(s)\, d[M^{T \wedge}, N^{T \wedge}](s) = \int_{[0,t]} X^2(s)\, d[M,N]^{T \wedge}(s).$

In particular with $a, b \in \mathbb{R}_+$, $a < b \le t$, and $X \in \mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ defined by $X(s, \omega) = \mathbf{1}_{(a,b]}(s)$ for $(s, \omega) \in \mathbb{R}_+ \times \Omega$, we have for $\omega \in \Lambda^c$

$\int_{[0,t]} \mathbf{1}_{(a,b]}(s)\, d[M^{T \wedge}, N^{T \wedge}](s, \omega) = \int_{[0,t]} \mathbf{1}_{(a,b]}(s)\, d[M,N]^{T \wedge}(s, \omega).$

From the arbitrariness of $t \in \mathbb{R}_+$ in the last equality, for any $a, b \in \mathbb{R}_+$, $a < b$, and $\omega \in \Lambda^c$, we have $\mu_{[M^{T \wedge}, N^{T \wedge}]}((a,b], \omega) = \mu_{[M,N]^{T \wedge}}((a,b], \omega)$. Now since $\mu_{[M,N]^{T \wedge}}(\cdot, \omega)$ is a finite signed measure on $([0,t], \mathfrak{B}_{[0,t]})$ and since the collection $\mathfrak{I}$ of intervals of the type $(a,b]$ in $(0,t]$ together with $\emptyset$ is a semialgebra with $\sigma(\mathfrak{I}) = \mathfrak{B}_{(0,t]}$, the last equality implies according to Corollary 1.8 that (1) holds for every $\omega \in \Lambda^c$. Then for every $t \in \mathbb{R}_+$ we have

$[M^{T \wedge}, N^{T \wedge}](t, \omega) = \mu_{[M^{T \wedge}, N^{T \wedge}]}([0,t], \omega) = \mu_{[M,N]^{T \wedge}}([0,t], \omega) = [M,N]^{T \wedge}(t, \omega),$

proving (2). ■

Lemma 12.26. Let $M = \{M_t : t \in \mathbb{R}_+\}$ be a null at 0 martingale on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $|M| \le K$ on $[0,t] \times \Omega$ for some $t \in \mathbb{R}_+$ and $K > 0$. For $0 = t_0 < \cdots < t_n = t$, let $\xi_j = M_{t_j}$ and $a_j = (\xi_j - \xi_{j-1})^2$ for $j = 1, \ldots, n$, and let $S = \sum_{j=1}^n a_j$. Then $\mathrm{E}(S^2) \le 12K^4$.

Proof. Note that

(1)  $S^2 = \Big\{\sum_{j=1}^n a_j\Big\}^2 = \sum_{j=1}^n a_j^2 + 2\sum_{\substack{i,j=1,\ldots,n \\ i<j}} a_i a_j = \sum_{j=1}^n a_j^2 + 2\sum_{i=1}^n a_i\Big\{\sum_{j=i+1}^n a_j\Big\}.$
Note that by the martingale property of $M$ we have for $j = 1, \ldots, n$

(2)  $\mathrm{E}[a_j \mid \mathfrak{F}_{t_{j-1}}] = \mathrm{E}[(\xi_j - \xi_{j-1})^2 \mid \mathfrak{F}_{t_{j-1}}] = \mathrm{E}[(\xi_j^2 - \xi_{j-1}^2) \mid \mathfrak{F}_{t_{j-1}}]$  a.e. on $(\Omega, \mathfrak{F}_{t_{j-1}}, P)$.

Now

(3)  $\sum_{j=i+1}^n \mathrm{E}[a_j \mid \mathfrak{F}_{t_i}] = \sum_{j=i+1}^n \mathrm{E}[\mathrm{E}[a_j \mid \mathfrak{F}_{t_{j-1}}] \mid \mathfrak{F}_{t_i}] = \sum_{j=i+1}^n \mathrm{E}[(\xi_j^2 - \xi_{j-1}^2) \mid \mathfrak{F}_{t_i}] = \mathrm{E}[(\xi_n^2 - \xi_i^2) \mid \mathfrak{F}_{t_i}] \le (2K)^2$  a.e. on $(\Omega, \mathfrak{F}_{t_i}, P)$,

where the second equality is by (2). Then

(4)  $\mathrm{E}\Big[2\sum_{i=1}^n a_i\Big\{\sum_{j=i+1}^n a_j\Big\}\Big] = 2\sum_{i=1}^n \mathrm{E}\Big[\mathrm{E}\Big[a_i\Big\{\sum_{j=i+1}^n a_j\Big\} \,\Big|\, \mathfrak{F}_{t_i}\Big]\Big] = 2\sum_{i=1}^n \mathrm{E}\Big[a_i\,\mathrm{E}\Big[\sum_{j=i+1}^n a_j \,\Big|\, \mathfrak{F}_{t_i}\Big]\Big] \le 2(2K)^2 \sum_{i=1}^n \mathrm{E}(a_i) = 2(2K)^2\,\mathrm{E}(S),$

where the inequality is by (3). But

(5)  $\mathrm{E}(S) = \mathrm{E}\Big[\sum_{j=1}^n a_j\Big] = \sum_{j=1}^n \mathrm{E}[\mathrm{E}[a_j \mid \mathfrak{F}_{t_{j-1}}]] = \sum_{j=1}^n \mathrm{E}[\mathrm{E}[(\xi_j^2 - \xi_{j-1}^2) \mid \mathfrak{F}_{t_{j-1}}]] = \sum_{j=1}^n \mathrm{E}(\xi_j^2 - \xi_{j-1}^2) = \mathrm{E}(\xi_n^2) \le K^2,$

where the third equality is by (2) and the last equality is from the fact that $\xi_0 = 0$. By (4) and (5) we have

(6)  $\mathrm{E}\Big[2\sum_{i=1}^n a_i\Big\{\sum_{j=i+1}^n a_j\Big\}\Big] \le 8K^4.$

Finally, since $a_j \le (2K)^2$ for $j = 1, \ldots, n$, we have

(7)  $\mathrm{E}\Big[\sum_{j=1}^n a_j^2\Big] \le (2K)^2\,\mathrm{E}\Big[\sum_{j=1}^n a_j\Big] \le 4K^4$

by (5). Using (6) and (7) in (1), we have $\mathrm{E}(S^2) \le 12K^4$. ■
Theorem 12.27. Let $M, N \in \mathbf{M}_2^c(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. For $t \in \mathbb{R}_+$ and $n \in \mathbb{N}$, let $\Delta_n$ be a partition of $[0,t]$ by $0 = t_{n,0} < \cdots < t_{n,p_n} = t$. Let $|\Delta_n| = \max_{k=1,\ldots,p_n}(t_{n,k} - t_{n,k-1})$ and assume $\lim_{n \to \infty} |\Delta_n| = 0$. Then

(1)  $\lim_{n \to \infty} \Big\| \sum_{k=1}^{p_n} \{M_{t_{n,k}} - M_{t_{n,k-1}}\}\{N_{t_{n,k}} - N_{t_{n,k-1}}\} - [M,N]_t \Big\|_1 = 0,$

and in particular

(2)  $\lim_{n \to \infty} \Big\| \sum_{k=1}^{p_n} \{M_{t_{n,k}} - M_{t_{n,k-1}}\}^2 - [M]_t \Big\|_1 = 0.$

Proof. Let us prove (2) and then derive (1) from it. Consider first the case where both $M$ and $[M]$ are bounded by a constant $K > 0$. For brevity let us write

(3)  $A_{n,k} = \{M_{t_{n,k}} - M_{t_{n,k-1}}\}^2$ and $B_{n,k} = [M]_{t_{n,k}} - [M]_{t_{n,k-1}}$ for $k = 1, \ldots, p_n$, and $S_n = \sum_{k=1}^{p_n} A_{n,k}$, $\alpha_n = \max_{k=1,\ldots,p_n} A_{n,k}$, $\beta_n = \max_{k=1,\ldots,p_n} B_{n,k}$ for $n \in \mathbb{N}$.

Then

$\mathrm{E}[\{S_n - [M]_t\}^2] = \mathrm{E}\Big[\Big\{\sum_{k=1}^{p_n}(A_{n,k} - B_{n,k})\Big\}^2\Big] = \sum_{j,k=1,\ldots,p_n} \mathrm{E}[(A_{n,j} - B_{n,j})(A_{n,k} - B_{n,k})].$

Now for $j < k$ we have

$\mathrm{E}[(A_{n,j} - B_{n,j})(A_{n,k} - B_{n,k})] = \mathrm{E}[\mathrm{E}[(A_{n,j} - B_{n,j})(A_{n,k} - B_{n,k}) \mid \mathfrak{F}_{t_{n,k-1}}]] = \mathrm{E}[(A_{n,j} - B_{n,j})\mathrm{E}[(A_{n,k} - B_{n,k}) \mid \mathfrak{F}_{t_{n,k-1}}]] = 0,$

since $\mathrm{E}[(A_{n,k} - B_{n,k}) \mid \mathfrak{F}_{t_{n,k-1}}] = 0$ by (2) of Lemma 11.25. Thus

(4)  $\mathrm{E}[\{S_n - [M]_t\}^2] = \mathrm{E}\Big[\sum_{k=1}^{p_n}(A_{n,k} - B_{n,k})^2\Big] \le 2\mathrm{E}\Big[\sum_{k=1}^{p_n}(A_{n,k}^2 + B_{n,k}^2)\Big] \le 2\mathrm{E}\Big[\alpha_n \sum_{k=1}^{p_n} A_{n,k}\Big] + 2\mathrm{E}\Big[\beta_n \sum_{k=1}^{p_n} B_{n,k}\Big] \le 2\mathrm{E}(\alpha_n^2)^{1/2}\mathrm{E}(S_n^2)^{1/2} + 2\mathrm{E}(\beta_n [M]_t) \le 2\sqrt{12K^4}\,\mathrm{E}(\alpha_n^2)^{1/2} + 2\mathrm{E}(\beta_n [M]_t)$
by Lemma 12.26. Since almost every sample function of $M$ and $[M]$ is continuous and hence uniformly continuous on $[0,t]$, $\lim_{n \to \infty} \alpha_n = 0$ and $\lim_{n \to \infty} \beta_n = 0$ a.e. on $\Omega$. Since $M$ and $[M]$ are bounded, we have $\lim_{n \to \infty} \mathrm{E}(\alpha_n^2) = 0$ and $\lim_{n \to \infty} \mathrm{E}(\beta_n [M]_t) = 0$ by the Bounded Convergence Theorem. Thus we have $\lim_{n \to \infty} \mathrm{E}[\{S_n - [M]_t\}^2] = 0$ and therefore $\lim_{n \to \infty} \mathrm{E}(|S_n - [M]_t|) = 0$. This proves (2) for the case where $M$ and $[M]$ are bounded.

To prove (2) for the general case we may assume without loss of generality that every sample function of $M$ and $[M]$ is continuous. For $m \in \mathbb{N}$, let

$S_m = \inf\{t \in \mathbb{R}_+ : |M_t| > m\} \wedge m, \quad T_m = \inf\{t \in \mathbb{R}_+ : [M]_t > m\} \wedge m, \quad R_m = S_m \wedge T_m.$

As we showed in Proposition 3.5, the first passage time of the open set $(-\infty, -m) \cup (m, \infty)$ in $\mathbb{R}$ by a right-continuous adapted process is a stopping time. Thus $S_m$, $T_m$, and then $R_m = S_m \wedge T_m$, are stopping times. From the fact that every sample function of $M$ and $[M]$ is real valued it follows that $S_m \uparrow \infty$ and $T_m \uparrow \infty$ and thus $R_m \uparrow \infty$ as $m \to \infty$ on $\Omega$. By the continuity of the sample functions of $M$ and $[M]$, $M^{R_m \wedge}$ and $[M]^{R_m \wedge}$ are bounded by $m$. By Proposition 12.25, we have $[M]^{R_m \wedge} = [M^{R_m \wedge}]$ and thus $[M^{R_m \wedge}]$ is bounded. Let

$A_{n,k}^{(m)} = \{(M^{R_m \wedge})_{t_{n,k}} - (M^{R_m \wedge})_{t_{n,k-1}}\}^2, \quad B_{n,k}^{(m)} = [M^{R_m \wedge}]_{t_{n,k}} - [M^{R_m \wedge}]_{t_{n,k-1}}$  for $k = 1, \ldots, p_n$.

Now

(5)  $\mathrm{E}\big(\big|\sum_{k=1}^{p_n}(A_{n,k} - B_{n,k})\big|\big) \le \mathrm{E}\big(\big|\sum_{k=1}^{p_n}(A_{n,k} - A_{n,k}^{(m)})\big|\big) + \mathrm{E}\big(\big|\sum_{k=1}^{p_n}(A_{n,k}^{(m)} - B_{n,k}^{(m)})\big|\big) + \mathrm{E}\big(\big|\sum_{k=1}^{p_n}(B_{n,k}^{(m)} - B_{n,k})\big|\big).$

Since $M^{R_m \wedge}$ and $[M^{R_m \wedge}]$ are bounded, we have $\lim_{n \to \infty} \mathrm{E}(|\sum_{k=1}^{p_n}(A_{n,k}^{(m)} - B_{n,k}^{(m)})|) = 0$ by our result above for the bounded case. Regarding the first term on the right side of (5), note that $|(M^{R_m \wedge})_{t_{n,k}} - (M^{R_m \wedge})_{t_{n,k-1}}| \le |M_{t_{n,k}} - M_{t_{n,k-1}}|$, so that $A_{n,k}^{(m)} \le A_{n,k}$ and therefore

$\mathrm{E}\big(\big|\sum_{k=1}^{p_n}(A_{n,k} - A_{n,k}^{(m)})\big|\big) = \mathrm{E}\big(\sum_{k=1}^{p_n}(A_{n,k} - A_{n,k}^{(m)})\big) = \sum_{k=1}^{p_n}\mathrm{E}([M]_{t_{n,k}} - [M]_{t_{n,k-1}}) - \sum_{k=1}^{p_n}\mathrm{E}([M^{R_m \wedge}]_{t_{n,k}} - [M^{R_m \wedge}]_{t_{n,k-1}}) = \mathrm{E}([M]_t) - \mathrm{E}([M^{R_m \wedge}]_t),$

where the second equality is by Lemma 11.25. But $[M^{R_m \wedge}]_t = ([M]^{R_m \wedge})_t \le [M]_t$ and $\lim_{m \to \infty}([M]^{R_m \wedge})_t = [M]_t$ since $R_m \uparrow \infty$ on $\Omega$. By the Dominated Convergence Theorem we have $\lim_{m \to \infty}\mathrm{E}(([M]^{R_m \wedge})_t) = \mathrm{E}([M]_t)$. Then for every $\varepsilon > 0$ there exists $m_\varepsilon \in \mathbb{N}$ such that $\mathrm{E}(|\sum_{k=1}^{p_n}(A_{n,k} - A_{n,k}^{(m)})|) < \varepsilon$ for all $n \in \mathbb{N}$ when $m \ge m_\varepsilon$. For the third term on the right side of (5), we have according to Lemma 12.23

$B_{n,k}^{(m)} = [M^{R_m \wedge}]_{t_{n,k}} - [M^{R_m \wedge}]_{t_{n,k-1}} \le [M]_{t_{n,k}} - [M]_{t_{n,k-1}} = B_{n,k}.$

Thus $\mathrm{E}(|\sum_{k=1}^{p_n}(B_{n,k}^{(m)} - B_{n,k})|) = \mathrm{E}(\sum_{k=1}^{p_n}(B_{n,k} - B_{n,k}^{(m)})) \le \mathrm{E}([M]_t) - \mathrm{E}([M^{R_m \wedge}]_t)$. Then for every $\varepsilon > 0$ there exists $m_\varepsilon \in \mathbb{N}$ such that $\mathrm{E}(|\sum_{k=1}^{p_n}(B_{n,k}^{(m)} - B_{n,k})|) < \varepsilon$ for all $n \in \mathbb{N}$ when $m \ge m_\varepsilon$. Therefore for $m \ge m_\varepsilon$ we have

$\mathrm{E}\big(\big|\sum_{k=1}^{p_n}(A_{n,k} - B_{n,k})\big|\big) \le \mathrm{E}\big(\big|\sum_{k=1}^{p_n}(A_{n,k}^{(m)} - B_{n,k}^{(m)})\big|\big) + 2\varepsilon$  for all $n \in \mathbb{N}$,

and thus $\limsup_{n \to \infty} \mathrm{E}(|\sum_{k=1}^{p_n}(A_{n,k} - B_{n,k})|) \le 2\varepsilon$. From the arbitrariness of $\varepsilon > 0$ we have $\lim_{n \to \infty} \mathrm{E}(|\sum_{k=1}^{p_n}(A_{n,k} - B_{n,k})|) = 0$, proving (2). Finally, (1) is derived from (2) by using the same argument as in (1) and (2) of Theorem 11.32. ■
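Conclusion (2) of Theorem 12.27 can be illustrated numerically for $M$ a standard Brownian motion, where $[M]_t = t$: the $L^1$ distance between the sum of squared increments over a partition and $[M]_t$ shrinks as the mesh $|\Delta_n|$ goes to 0. A minimal sketch (not from the text); the path count and mesh sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
t, n_paths = 1.0, 20_000

def l1_error(n_steps):
    """Monte Carlo estimate of E| sum_k (M_{t_k} - M_{t_{k-1}})^2 - t |."""
    dM = rng.normal(0.0, np.sqrt(t / n_steps), size=(n_paths, n_steps))
    return np.mean(np.abs((dM ** 2).sum(axis=1) - t))

errs = [l1_error(n) for n in (10, 100, 1000)]
print(errs)
```

The errors decrease roughly like $n^{-1/2}$ in the number of partition points, consistent with the variance computation $\mathrm{Var}(S_n) = 2t^2/p_n$ for Brownian motion.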
§13 Adapted Brownian Motions

[I] Processes with Independent Increments

Definition 13.1. Let $(\Omega, \mathfrak{F}, P)$ be a probability space and let $d \in \mathbb{N}$. A mapping $\xi$ of $\Omega$ into $\mathbb{R}^d$ which is $\mathfrak{F}/\mathfrak{B}_{\mathbb{R}^d}$-measurable is called a $d$-dimensional random vector. A $d$-dimensional stochastic process on $(\Omega, \mathfrak{F}, P)$ is a mapping $X$ of $\mathbb{R}_+ \times \Omega$ into $\mathbb{R}^d$ such that $X(t, \cdot)$ is a $d$-dimensional random vector on the probability space for every $t \in \mathbb{R}_+$.

Proposition 13.2. Regarding the Borel $\sigma$-algebra $\mathfrak{B}_{\mathbb{R}^d}$ we have:
(1) $\mathfrak{B}_{\mathbb{R}}$ is generated by the collection of open intervals with rational endpoints;
(2) $\mathfrak{B}_{\mathbb{R}^d} = \sigma(\mathfrak{B}_{\mathbb{R}^{d_1}} \times \cdots \times \mathfrak{B}_{\mathbb{R}^{d_k}})$ when $d = d_1 + \cdots + d_k$.

Proof. The open intervals with rational endpoints in $\mathbb{R}$ constitute a countable base for the open sets in $\mathbb{R}$. Thus (1) follows from Theorem 1.4. To prove (2), let $\mathfrak{R}$ be the countable collection of subsets of $\mathbb{R}^d$ of the type $(a_1, b_1) \times \cdots \times (a_d, b_d)$ where $a_1, b_1, \ldots, a_d, b_d$ are all rational numbers. Clearly $\mathfrak{R} \subset \mathfrak{B}_{\mathbb{R}^{d_1}} \times \cdots \times \mathfrak{B}_{\mathbb{R}^{d_k}}$. Since every open set in $\mathbb{R}^d$ is a union of members of the countable collection $\mathfrak{R}$, the collection $\mathfrak{O}_{\mathbb{R}^d}$ of all open sets in $\mathbb{R}^d$ is contained in $\sigma(\mathfrak{B}_{\mathbb{R}^{d_1}} \times \cdots \times \mathfrak{B}_{\mathbb{R}^{d_k}})$. Therefore

(3)  $\mathfrak{B}_{\mathbb{R}^d} = \sigma(\mathfrak{O}_{\mathbb{R}^d}) \subset \sigma(\mathfrak{B}_{\mathbb{R}^{d_1}} \times \cdots \times \mathfrak{B}_{\mathbb{R}^{d_k}}).$

On the other hand, writing $\mathfrak{O}_{\mathbb{R}^{d_i}}$ for the collection of open sets in $\mathbb{R}^{d_i}$ for $i = 1, \ldots, k$, we have

$\mathfrak{B}_{\mathbb{R}^{d_1}} \times \cdots \times \mathfrak{B}_{\mathbb{R}^{d_k}} = \sigma(\mathfrak{O}_{\mathbb{R}^{d_1}}) \times \cdots \times \sigma(\mathfrak{O}_{\mathbb{R}^{d_k}}) \subset \sigma(\mathfrak{O}_{\mathbb{R}^{d_1}} \times \cdots \times \mathfrak{O}_{\mathbb{R}^{d_k}}) \subset \sigma(\mathfrak{O}_{\mathbb{R}^d}) = \mathfrak{B}_{\mathbb{R}^d},$

where the first set inclusion is by Lemma 1.3. Thus

(4)  $\sigma(\mathfrak{B}_{\mathbb{R}^{d_1}} \times \cdots \times \mathfrak{B}_{\mathbb{R}^{d_k}}) \subset \mathfrak{B}_{\mathbb{R}^d}.$

With (3) and (4), we have (2). ■

Let $\xi$ be a $d$-dimensional random vector on a probability space $(\Omega, \mathfrak{F}, P)$. Let $\pi_i$ be the projection of $\mathbb{R}^d = \mathbb{R}_1 \times \cdots \times \mathbb{R}_d$ onto $\mathbb{R}_i$. Consider the $i$th component $\xi_i = \pi_i \circ \xi$ of $\xi$. Since $\xi$ is $\mathfrak{F}/\mathfrak{B}_{\mathbb{R}^d}$-measurable and $\pi_i$ is $\mathfrak{B}_{\mathbb{R}^d}/\mathfrak{B}_{\mathbb{R}_i}$-measurable, $\xi_i$ is $\mathfrak{F}/\mathfrak{B}_{\mathbb{R}_i}$-measurable, that is, $\xi_i$ is a random variable. Conversely, if $\xi_1, \ldots, \xi_d$ are real valued random variables on $(\Omega, \mathfrak{F}, P)$ and $\xi = (\xi_1, \ldots, \xi_d)$, then

$\xi^{-1}(\mathfrak{B}_{\mathbb{R}^d}) = \xi^{-1}(\sigma(\mathfrak{B}_{\mathbb{R}_1} \times \cdots \times \mathfrak{B}_{\mathbb{R}_d}))$  by Proposition 13.2
$= \sigma(\xi^{-1}(\mathfrak{B}_{\mathbb{R}_1} \times \cdots \times \mathfrak{B}_{\mathbb{R}_d}))$  by Theorem 1.1
$= \sigma(\{\xi_1^{-1}(E_1) \cap \cdots \cap \xi_d^{-1}(E_d) : E_i \in \mathfrak{B}_{\mathbb{R}_i}\}).$

Now since $\xi_i^{-1}(\mathfrak{B}_{\mathbb{R}_i}) \subset \mathfrak{F}$ for $i = 1, \ldots, d$, we have $\xi_1^{-1}(E_1) \cap \cdots \cap \xi_d^{-1}(E_d) \in \mathfrak{F}$. Thus $\xi^{-1}(\mathfrak{B}_{\mathbb{R}^d}) \subset \mathfrak{F}$ and therefore $\xi$ is a random vector. We summarize these observations in the following proposition.

Proposition 13.3. Given a probability space $(\Omega, \mathfrak{F}, P)$, let $X_i$ be a mapping of $\Omega$ into $\mathbb{R}$ for $i = 1, \ldots, d$ and let $X = (X_1, \ldots, X_d)$, a mapping of $\Omega$ into $\mathbb{R}^d$. Then $X$ is a random vector on $(\Omega, \mathfrak{F}, P)$ if and only if $X_i$ is a real valued random variable on $(\Omega, \mathfrak{F}, P)$ for $i = 1, \ldots, d$.
Proposition 13.4. 1) Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a $d$-dimensional stochastic process on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Let $\pi_i$ be the projection of $\mathbb{R}^d = \mathbb{R}_1 \times \cdots \times \mathbb{R}_d$ onto $\mathbb{R}_i$ for $i = 1, \ldots, d$. The $i$th component of $X$ defined by $X^{(i)} = \pi_i \circ X$ is then a 1-dimensional stochastic process on the filtered space, and it is adapted if $X$ is.
2) Conversely, let $X^{(i)}$ be a 1-dimensional stochastic process on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ for $i = 1, \ldots, d$. Then the mapping $X$ of $\mathbb{R}_+ \times \Omega$ into $\mathbb{R}^d = \mathbb{R}_1 \times \cdots \times \mathbb{R}_d$ defined by $X = (X^{(1)}, \ldots, X^{(d)})$ is a $d$-dimensional stochastic process on the filtered space and $X$ is adapted if $X^{(1)}, \ldots, X^{(d)}$ are.

Proof. 1) If $X$ is a $d$-dimensional stochastic process and $X^{(i)} = \pi_i \circ X$, then $X^{(i)}$ is a mapping of $\mathbb{R}_+ \times \Omega$ into $\mathbb{R}$ and for every $t \in \mathbb{R}_+$ we have

$X_t^{(i)}(\cdot) = X^{(i)}(t, \cdot) = (\pi_i \circ X)(t, \cdot) = (\pi_i \circ X_t)(\cdot).$

The fact that $X^{(i)}$ is a stochastic process on the probability space follows from the fact that

$(X_t^{(i)})^{-1}(\mathfrak{B}_{\mathbb{R}_i}) = (X_t^{-1} \circ \pi_i^{-1})(\mathfrak{B}_{\mathbb{R}_i}) \subset X_t^{-1}(\mathfrak{B}_{\mathbb{R}^d}) \subset \mathfrak{F}.$

If $X$ is $\{\mathfrak{F}_t\}$-adapted, then $\mathfrak{F}$ in the last inclusion may be replaced by $\mathfrak{F}_t$, so that $X^{(i)}$ is adapted.
2) For every $t \in \mathbb{R}_+$ and $E_1 \times \cdots \times E_d \in \mathfrak{B}_{\mathbb{R}_1} \times \cdots \times \mathfrak{B}_{\mathbb{R}_d}$ we have

$X_t^{-1}(E_1 \times \cdots \times E_d) = (X_t^{(1)}, \ldots, X_t^{(d)})^{-1}(E_1 \times \cdots \times E_d) = (X_t^{(1)})^{-1}(E_1) \cap \cdots \cap (X_t^{(d)})^{-1}(E_d) \in \mathfrak{F}.$

From this we have

$X_t^{-1}(\mathfrak{B}_{\mathbb{R}^d}) = X_t^{-1}(\sigma(\mathfrak{B}_{\mathbb{R}_1} \times \cdots \times \mathfrak{B}_{\mathbb{R}_d})) = \sigma(X_t^{-1}(\mathfrak{B}_{\mathbb{R}_1} \times \cdots \times \mathfrak{B}_{\mathbb{R}_d})) \subset \mathfrak{F}.$

This shows that $X$ is a $d$-dimensional stochastic process on the filtered space. If $X^{(1)}, \ldots, X^{(d)}$ are $\{\mathfrak{F}_t\}$-adapted, then $\mathfrak{F}$ in the last two expressions is replaced by $\mathfrak{F}_t$ and $X$ is $\{\mathfrak{F}_t\}$-adapted. ■
Observation 13.5. Let $(\Omega, \mathfrak{F}, P)$ be a probability space and let $\{(S_\alpha, \mathfrak{S}_\alpha) : \alpha \in A\}$ be a collection of measurable spaces. Let $X_\alpha$ be an $\mathfrak{F}/\mathfrak{S}_\alpha$-measurable mapping of $\Omega$ into $S_\alpha$ for each $\alpha \in A$. Then $\sigma\{X_\alpha : \alpha \in A\} = \sigma\big(\bigcup_{\alpha \in A} \sigma(X_\alpha)\big)$.

Lemma 13.6. Let $X_i$ be a $d_i$-dimensional random vector on a probability space $(\Omega, \mathfrak{F}, P)$ for $i = 1, \ldots, k$ and let $X = (X_1, \ldots, X_k)$, a $d$-dimensional random vector with $d = d_1 + \cdots + d_k$. Then $\sigma(X) = \sigma\{X_1, \ldots, X_k\}$.

Proof. We have

$\sigma(X) = X^{-1}(\mathfrak{B}_{\mathbb{R}^d}) = X^{-1}(\sigma(\mathfrak{B}_{\mathbb{R}^{d_1}} \times \cdots \times \mathfrak{B}_{\mathbb{R}^{d_k}}))$  by Proposition 13.2
$= \sigma(X^{-1}(\mathfrak{B}_{\mathbb{R}^{d_1}} \times \cdots \times \mathfrak{B}_{\mathbb{R}^{d_k}}))$  by Theorem 1.1
$= \sigma(\{X_1^{-1}(B_1) \cap \cdots \cap X_k^{-1}(B_k) : B_i \in \mathfrak{B}_{\mathbb{R}^{d_i}}\}) \subset \sigma\{X_1, \ldots, X_k\}.$

On the other hand, with $\pi_i$ the projection of $\mathbb{R}^d$ onto $\mathbb{R}^{d_i}$,

$X_i^{-1}(\mathfrak{B}_{\mathbb{R}^{d_i}}) = X^{-1}(\pi_i^{-1}(\mathfrak{B}_{\mathbb{R}^{d_i}})) \subset X^{-1}(\mathfrak{B}_{\mathbb{R}^d}) = \sigma(X)$

for $i = 1, \ldots, k$, so that $\sigma\{X_1, \ldots, X_k\} \subset \sigma(X)$.
Recalling Observation 13.5, we have for the real valued components $X_{i,j}$, $j = 1, \ldots, d_i$, of $X_i$:

(1)  $\sigma(X) = \sigma\{X_1, \ldots, X_k\} = \sigma\big(\bigcup_{i=1}^k \sigma(X_i)\big) = \sigma\big(\bigcup_{i=1}^k \sigma\{X_{i,1}, \ldots, X_{i,d_i}\}\big) = \sigma\{X_{i,j} : i = 1, \ldots, k;\ j = 1, \ldots, d_i\},$

so that $\sigma(X)$ is also generated by the real valued components of $X$. ■

Lemma 13.7. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a $d$-dimensional stochastic process on a probability space $(\Omega, \mathfrak{F}, P)$. For $t \in \mathbb{R}_+$, let $\mathbf{T}_t$ be the collection of all finite strictly increasing sequences $\tau = \{t_1, \ldots, t_n\}$ in $[0,t]$, and for $\tau \in \mathbf{T}_t$ let $X_\tau = (X_{t_1}, \ldots, X_{t_n})$. Then
1) $\bigcup_{\tau \in \mathbf{T}_t} \sigma(X_\tau)$ is a $\pi$-class of subsets of $\Omega$;
2) $\sigma\{X_s : s \in [0,t]\} = \sigma\big(\bigcup_{\tau \in \mathbf{T}_t} \sigma(X_\tau)\big)$.

Proof. 1) Let $A_1 \in \sigma(X_{\tau_1})$ and $A_2 \in \sigma(X_{\tau_2})$ for $\tau_1, \tau_2 \in \mathbf{T}_t$, and let $\tau \in \mathbf{T}_t$ be obtained by arranging the entries of $\tau_1$ and $\tau_2$ in increasing order. Since $X_{\tau_1}$ is obtained from $X_\tau$ by dropping components, $\sigma(X_{\tau_1}) \subset \sigma(X_\tau)$, and similarly $\sigma(X_{\tau_2}) \subset \sigma(X_\tau)$. Thus both $A_1$ and $A_2$ are in $\sigma(X_\tau)$. Hence there exist Borel sets $B_1$ and $B_2$ such that $A_1 = X_\tau^{-1}(B_1)$ and $A_2 = X_\tau^{-1}(B_2)$. Then

$A_1 \cap A_2 = X_\tau^{-1}(B_1 \cap B_2) \in \sigma(X_\tau) \subset \bigcup_{\tau' \in \mathbf{T}_t} \sigma(X_{\tau'}).$

This shows that $\bigcup_{\tau \in \mathbf{T}_t} \sigma(X_\tau)$ is a $\pi$-class of subsets of $\Omega$.
2) Since each $\{s\} \in \mathbf{T}_t$ for $s \in [0,t]$, we have $\sigma\{X_s : s \in [0,t]\} \subset \sigma(\bigcup_{\tau \in \mathbf{T}_t} \sigma(X_\tau))$; conversely, for every $\tau = \{t_1, \ldots, t_n\} \in \mathbf{T}_t$ we have

$\sigma(X_\tau) = \sigma\{X_{t_1}, \ldots, X_{t_n}\} \subset \sigma\{X_s : s \in [0,t]\},$

where the equality is by Lemma 13.6. This completes the proof of 2). ■
where the second equality is by Lemma 13.6. This completes the proof of 2). ■ Lemma 13.8. Let X = {Xt : i £ 1+} te a d-dimensional stochastic process on a proba bility space (£1,5, P). For a finite strictly increasing sequence r = {t\,... ,t„} in R+, let XT = (Xtl,...,XtJandlet T\Xr) = (Xtl,Xt2
— Xtl, Ajj — A( 2 ,...,Xt n —
Xtn_l).
Then
(i)
<7(T(XT» = mxT)r\VB^)
= x;'^-'^^)).
Let Dind) be the collection of all open sets in Rnd. Since T is a homeomorphism of Rnd we have T - ' C D ^ ' ^ D * " * Then (2)
T - ' O B ^ ) = T-'CaCD1"*)) = a(T-\D{nd)))
=
268
CHAPTER 3. STOCHASTIC
INTEGRALS
where the second equality is by Theorem 1.1. Using (2) in (1), we have o(T(XT)) X;\^nd) = <j{XT). ■
=
Definition 13.9. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a $d$-dimensional stochastic process on a probability space $(\Omega, \mathfrak{F}, P)$. We say that $X$ is a process with independent increments if for every finite strictly increasing sequence $\{t_1, \ldots, t_n\}$ in $\mathbb{R}_+$ the system of random vectors $\{X_{t_1}, X_{t_2} - X_{t_1}, X_{t_3} - X_{t_2}, \ldots, X_{t_n} - X_{t_{n-1}}\}$ is an independent system.

Theorem 13.10. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a $d$-dimensional stochastic process with independent increments on a probability space $(\Omega, \mathfrak{F}, P)$. Let $\{\mathfrak{F}_t^X : t \in \mathbb{R}_+\}$ be the filtration on the probability space $(\Omega, \mathfrak{F}, P)$ generated by $X$, that is, $\mathfrak{F}_t^X = \sigma\{X_s : s \in [0,t]\}$ for $t \in \mathbb{R}_+$. Then for every pair $s, t \in \mathbb{R}_+$ such that $s < t$, the system $\{\mathfrak{F}_s^X, X_t - X_s\}$ is independent, that is, $\{\mathfrak{F}_s^X, \sigma(X_t - X_s)\}$ is an independent system.

Proof. Let $\mathbf{T}_s$ be the collection of all finite strictly increasing sequences $\tau$ in $[0,s]$. By Lemma 13.7, $\bigcup_{\tau \in \mathbf{T}_s} \sigma(X_\tau)$ is a $\pi$-class of subsets of $\Omega$ and $\mathfrak{F}_s^X = \sigma\{X_u : u \in [0,s]\} = \sigma(\bigcup_{\tau \in \mathbf{T}_s} \sigma(X_\tau))$. Thus to show the independence of the system $\{\mathfrak{F}_s^X, \sigma(X_t - X_s)\}$, it suffices to show the independence of the system $\{\bigcup_{\tau \in \mathbf{T}_s} \sigma(X_\tau), \sigma(X_t - X_s)\}$. (See Theorem A.4.) Thus it remains to show that for any $A \in \bigcup_{\tau \in \mathbf{T}_s} \sigma(X_\tau)$ and $B \in \sigma(X_t - X_s)$ we have $P(A \cap B) = P(A)P(B)$. Now if $A \in \bigcup_{\tau \in \mathbf{T}_s} \sigma(X_\tau)$, then there exists $\tau = \{t_1, \ldots, t_n\} \in \mathbf{T}_s$ such that $A \in \sigma(X_\tau)$. Let $T$ be as in Lemma 13.8. Then $\sigma(X_\tau) = \sigma(T(X_\tau))$ and thus $A \in \sigma(T(X_\tau))$. Since $X$ is a process with independent increments, the system of random vectors $\{X_{t_1}, X_{t_2} - X_{t_1}, \ldots, X_{t_n} - X_{t_{n-1}}, X_s - X_{t_n}, X_t - X_s\}$ is an independent system and so is its subsystem $\{X_{t_1}, X_{t_2} - X_{t_1}, \ldots, X_{t_n} - X_{t_{n-1}}, X_t - X_s\}$. Then $\{(X_{t_1}, X_{t_2} - X_{t_1}, \ldots, X_{t_n} - X_{t_{n-1}}), X_t - X_s\}$ is an independent system of two random vectors. (See Theorem A.11.) Then $\{T(X_\tau), X_t - X_s\}$ is an independent system (see Theorem A.8), or equivalently, $\{\sigma(T(X_\tau)), \sigma(X_t - X_s)\}$ is an independent system. Then for $A \in \sigma(T(X_\tau))$ and $B \in \sigma(X_t - X_s)$, we have $P(A \cap B) = P(A)P(B)$. ■

The following theorem contains the converse of Theorem 13.10.

Theorem 13.11.
Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a $d$-dimensional adapted process on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. If for every pair $s, t \in \mathbb{R}_+$ such that $s < t$, $\{\mathfrak{F}_s, X_t - X_s\}$ is an independent system, then $X$ is a process with independent increments.

Proof. Let $t_1 < \cdots < t_n$. Let us prove the independence of the system $\{X_{t_1}, X_{t_2} - X_{t_1}, \ldots, X_{t_n} - X_{t_{n-1}}\}$. Let $\xi_1 = X_{t_1}$, $\xi_2 = X_{t_2} - X_{t_1}$, ..., $\xi_n = X_{t_n} - X_{t_{n-1}}$ for brevity. Since $\xi_i$ is an $\mathfrak{F}_{t_{n-1}}/\mathfrak{B}_{\mathbb{R}^d}$-measurable mapping of $\Omega$ into $\mathbb{R}^d$ for $i = 1, \ldots, n-1$, $(\xi_1, \ldots, \xi_{n-1})$ is an $\mathfrak{F}_{t_{n-1}}/\mathfrak{B}_{\mathbb{R}^{(n-1)d}}$-measurable mapping of $\Omega$ into $\mathbb{R}^{(n-1)d}$. Thus $\sigma((\xi_1, \ldots, \xi_{n-1})) \subset \mathfrak{F}_{t_{n-1}}$.

Theorem 13.13. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a $d$-dimensional adapted process on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that for every pair $s, t \in \mathbb{R}_+$ with $s < t$, $\{\mathfrak{F}_s, X_t - X_s\}$ is an independent system. Then $\{\mathfrak{F}_s, \mathfrak{G}_s\}$ is an independent system, where $\mathfrak{G}_s = \sigma\{X_{t''} - X_{t'} : s \le t' < t'' < \infty\}$ for $s \in \mathbb{R}_+$.
Proof. Let $\mathbf{Y}$ be the collection of all random vectors $Y$ of the form

(1)  $Y = (X_{t_1''} - X_{t_1'}, \ldots, X_{t_n''} - X_{t_n'}),$

where $s \le t_i' < t_i''$ for $i = 1, \ldots, n$ and $n \in \mathbb{N}$. By the same argument as in the proof of Lemma 13.7, it is easily verified that $\bigcup_{Y \in \mathbf{Y}} \sigma(Y)$ is a $\pi$-class of subsets of $\Omega$ and

(2)  $\mathfrak{G}_s = \sigma\big(\bigcup_{Y \in \mathbf{Y}} \sigma(Y)\big).$

Let $\mathbf{Z}$ be the collection of all random vectors of the form

(3)  $Z = (X_{t_1} - X_{t_0}, \ldots, X_{t_m} - X_{t_{m-1}}),$

where $s \le t_0 < \cdots < t_m$ and $m \in \mathbb{N}$. Let us show that

(4)  $\bigcup_{Z \in \mathbf{Z}} \sigma(Z) = \bigcup_{Y \in \mathbf{Y}} \sigma(Y).$

Let $Y \in \mathbf{Y}$ be as given by (1). Let $t_i', t_i''$, $i = 1, \ldots, n$, be arranged in increasing order, let the resulting finite strictly increasing sequence be $\{t_0, \ldots, t_m\}$, and let $Z$ be given by (3). Then each component of $Y$ is the sum of some components of $Z$ and hence $\sigma(Y) \subset \sigma(Z)$. On the other hand, every member of $\mathbf{Z}$ is also a member of $\mathbf{Y}$. This proves (4). Thus $\bigcup_{Z \in \mathbf{Z}} \sigma(Z)$ is a $\pi$-class of subsets of $\Omega$ and by (4) and (2) we have

(5)  $\mathfrak{G}_s = \sigma\big(\bigcup_{Z \in \mathbf{Z}} \sigma(Z)\big).$

To show the independence of $\{\mathfrak{F}_s, \mathfrak{G}_s\}$ it suffices to show the independence of the system $\{\mathfrak{F}_s, \bigcup_{Z \in \mathbf{Z}} \sigma(Z)\}$. (See Theorem A.4.) Let $A \in \mathfrak{F}_s$ and $B \in \bigcup_{Z \in \mathbf{Z}} \sigma(Z)$. Then $B \in \sigma(Z)$ for some $Z \in \mathbf{Z}$. But according to Theorem 13.12, $\{\mathfrak{F}_s, Z\}$ is an independent system. Thus $P(A \cap B) = P(A)P(B)$. ■
[II] Brownian Motions in R^d

Given a probability space (Ω, F, P) and a measurable space (S, 𝔖), let X be an F/𝔖-measurable mapping of Ω into S. The probability distribution of X on (S, 𝔖) is the probability measure P_X on (S, 𝔖) defined by

P_X(E) = (P ∘ X^{−1})(E) = P(X^{−1}(E)) for E ∈ 𝔖.

Let f be an extended real valued 𝔖-measurable function on S and let E ∈ 𝔖. According to the Image Probability Law we have

∫_{X^{−1}(E)} f[X(ω)] P(dω) = ∫_E f(x) P_X(dx),
in the sense that the existence of one side implies that of the other and the equality of the two. In particular, with E = S we have

∫_Ω f[X(ω)] P(dω) = ∫_S f(x) P_X(dx).
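As a quick numerical sketch (not part of the original text), the Image Probability Law with E = S says that the expectation of f(X) computed on Ω equals the integral of f against the distribution P_X on the state space. Taking X standard normal and f(x) = x², both sides should equal the second moment, 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Left side: ∫_Ω f[X(ω)] P(dω), estimated by Monte Carlo over draws of X.
samples = rng.standard_normal(500_000)
lhs = float(np.mean(samples**2))

# Right side: ∫_R f(x) P_X(dx), computed by trapezoidal quadrature
# against the N(0,1) density.
x = np.linspace(-10.0, 10.0, 20001)
density = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
y = x**2 * density
rhs = float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

assert abs(lhs - rhs) < 0.02
```

Both evaluations approximate E(X²) = 1; the two routes differ only by Monte Carlo and quadrature error.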
Definition 13.14. A d-dimensional Brownian motion is a d-dimensional stochastic process X = {X_t : t ∈ R+} on a probability space (Ω, F, P) such that
1°. X is a process with independent increments,
2°. for every s, t ∈ R+ such that s < t, the probability distribution of the random vector X_t − X_s is the d-dimensional normal distribution N_d(0, (t − s)·I),
3°. every sample function of X is an R^d-valued continuous function on R+.

Clearly a Brownian motion does not exist on an arbitrary probability space. For the proof of its existence on the space of continuous functions by means of Kolmogorov's extension theorem we refer to [31] J. Yeh.

Notations 13.15. According to Definition 13.14, for a d-dimensional Brownian motion X = {X_t : t ∈ R+} on a probability space (Ω, F, P), the probability distribution P_{X_t−X_s} of the random vector X_t − X_s for s, t ∈ R+, s < t, is the d-dimensional normal distribution N_d(0, (t − s)·I) with mean vector 0 and covariance matrix (t − s)·I, where I is the d × d identity matrix. Thus P_{X_t−X_s} is absolutely continuous with respect to the Lebesgue measure m_L^d on (R^d, B_{R^d}) with Radon-Nikodym derivative given by

(1) (dP_{X_t−X_s}/dm_L^d)(x) = (2π(t − s))^{−d/2} exp{−|x|²/(2(t − s))} for x ∈ R^d,

where we write |x| for the Euclidean norm of x ∈ R^d. If we let p be a function on (0, ∞) × R^d defined by

(2) p(t, x) = (2πt)^{−d/2} exp{−|x|²/(2t)} for (t, x) ∈ (0, ∞) × R^d,

then

(3) (dP_{X_t−X_s}/dm_L^d)(x) = p(t − s, x) for x ∈ R^d.
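The transition density p(t, x) of (2) is easy to evaluate directly; the following sketch (an illustration, not from the text) codes it and checks numerically in d = 1 that p(t, ·) integrates to 1, as a probability density must:

```python
import numpy as np

def p(t, x):
    """Brownian transition density p(t, x) = (2*pi*t)^(-d/2) * exp(-|x|^2/(2t)),
    for t > 0 and x an array-like of shape (d,)."""
    x = np.asarray(x, dtype=float)
    d = x.size
    return (2 * np.pi * t) ** (-d / 2) * np.exp(-np.dot(x, x) / (2 * t))

# Sanity check in d = 1: integrate p(t, .) over [-30, 30] (effectively all of R
# for t = 2.5) by the trapezoid rule; the result should be 1.
t = 2.5
xs = np.linspace(-30.0, 30.0, 60001)
vals = np.array([p(t, [u]) for u in xs])
total = float(np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(xs)))
assert abs(total - 1.0) < 1e-6
```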
The initial distribution of X, that is, the probability distribution P_{X_0} of the initial random variable X_0 of the Brownian motion X, is an arbitrary probability measure on (R^d, B_{R^d}). As a particular case we have a unit mass at some a ∈ R^d, that is, P_{X_0}({a}) = 1 and consequently P_{X_0}(R^d − {a}) = 0.

Proposition 13.16. Let X = {X_t : t ∈ R+} be a d-dimensional Brownian motion on a probability space (Ω, F, P). Then for 0 = t_0 < ⋯ < t_n, the probability distribution P_{(X_{t_0}, ..., X_{t_n})} on (R^{(n+1)d}, B_{R^{(n+1)d}}) of the random vector (X_{t_0}, ..., X_{t_n}) is given by

(1) P{(X_{t_0}, ..., X_{t_n}) ∈ E} = P_{(X_{t_0}, ..., X_{t_n})}(E)
= ∫_E p(t_1 − t_0, x_1 − x_0) ⋯ p(t_n − t_{n−1}, x_n − x_{n−1}) (P_{X_0} × m_L^{nd})(d(x_0, x_1, ..., x_n))
= {(2π)^n ∏_{j=1}^n (t_j − t_{j−1})}^{−d/2} ∫_E exp{−(1/2) Σ_{j=1}^n |x_j − x_{j−1}|²/(t_j − t_{j−1})} (P_{X_0} × m_L^{nd})(d(x_0, x_1, ..., x_n)),

for E ∈ B_{R^{(n+1)d}}. In particular when E = E_0 × ⋯ × E_n where E_j ∈ B_{R^d} for j = 0, ..., n, we have

(2) P{X_{t_0} ∈ E_0, ..., X_{t_n} ∈ E_n} = {(2π)^n ∏_{j=1}^n (t_j − t_{j−1})}^{−d/2} ∫_{E_0} P_{X_0}(dx_0) ∫_{E_1} m_L^d(dx_1) ⋯ ∫_{E_n} m_L^d(dx_n) exp{−(1/2) Σ_{j=1}^n |x_j − x_{j−1}|²/(t_j − t_{j−1})}.

For t > 0 we have

(3) P{X_t ∈ E} = (2πt)^{−d/2} ∫_{R^d} P_{X_0}(dx_0) ∫_E exp{−|x − x_0|²/(2t)} m_L^d(dx).

In particular when P_{X_0} is a unit mass at a ∈ R^d, we have

(4) P{X_t ∈ E} = (2πt)^{−d/2} ∫_E exp{−|x − a|²/(2t)} m_L^d(dx).

Proof. Consider the transformation T of R^{(n+1)d} onto R^{(n+1)d} defined by

(y_0, ..., y_n) = T(x_0, ..., x_n) = (x_0, x_1 − x_0, ..., x_n − x_{n−1}).
As we noted in the proof of Lemma 13.8, T is a homeomorphism of R^{(n+1)d}. Now

T(X_{t_0}, ..., X_{t_n}) = (X_{t_0}, X_{t_1} − X_{t_0}, ..., X_{t_n} − X_{t_{n−1}}),

and (X_{t_0}, ..., X_{t_n}) = T^{−1}(X_{t_0}, X_{t_1} − X_{t_0}, ..., X_{t_n} − X_{t_{n−1}}). For any E ∈ B_{R^{(n+1)d}} we have

P_{(X_{t_0}, ..., X_{t_n})}(E) = P ∘ (X_{t_0}, ..., X_{t_n})^{−1}(E)
= P ∘ (X_{t_0}, X_{t_1} − X_{t_0}, ..., X_{t_n} − X_{t_{n−1}})^{−1}(T(E))
= P_{(X_{t_0}, X_{t_1} − X_{t_0}, ..., X_{t_n} − X_{t_{n−1}})}(T(E))
= (P_{X_{t_0}} × P_{X_{t_1}−X_{t_0}} × ⋯ × P_{X_{t_n}−X_{t_{n−1}}})(T(E))
= ∫_{T(E)} p(t_1 − t_0, y_1) ⋯ p(t_n − t_{n−1}, y_n) (P_{X_0} × m_L^{nd})(d(y_0, ..., y_n))
= ∫_E p(t_1 − t_0, x_1 − x_0) ⋯ p(t_n − t_{n−1}, x_n − x_{n−1}) (P_{X_0} × m_L^{nd})(d(x_0, ..., x_n))
where the fourth equality is by the independence of increments, the fifth equality is by (3) of Notations 13.15, and the last equality is by the fact that the Jacobian of T is equal to 1. This proves the second equality in (1). The third equality in (1) then follows from the definition of p by (2) of Notations 13.15. ■

Definition 13.17. By an {F_t}-adapted d-dimensional Brownian motion we mean a d-dimensional stochastic process X = {X_t : t ∈ R+} on a standard filtered space (Ω, F, {F_t}, P) such that
1°. X is an {F_t}-adapted process, that is, X_t is an F_t/B_{R^d}-measurable mapping of Ω into R^d for every t ∈ R+,
2°. for every s, t ∈ R+, s < t, the system {F_s, X_t − X_s} is independent,
3°. for every s, t ∈ R+, s < t, P_{X_t−X_s} = N_d(0, (t − s)·I),
4°. every sample function of X is an R^d-valued continuous function on R+.

Remark 13.18. An {F_t}-adapted d-dimensional Brownian motion on a standard filtered space (Ω, F, {F_t}, P) is always a d-dimensional Brownian motion on the probability space
(Ω, F, P) in the sense of Definition 13.14. This follows from the fact that conditions 1° and 2° of Definition 13.17 imply condition 1° of Definition 13.14 according to Theorem 13.11. Conversely, if X = {X_t : t ∈ R+} is a d-dimensional Brownian motion on a complete probability space (Ω, F, P), then a filtration can be constructed on the probability space so that X is an adapted Brownian motion on a standard filtered space. This is done as follows. Let {F_t^X : t ∈ R+} be the filtration generated by X, that is, F_t^X = σ{X_s : s ∈ [0, t]} for t ∈ R+. Let N be the collection of all the null sets in the complete probability space (Ω, F, P) and let G_t^X = σ(F_t^X ∪ N) for t ∈ R+. Then {G_t^X : t ∈ R+} is an augmented filtration on the complete probability space. The fact that this filtration is right-continuous will be proved in Proposition 13.22. Then (Ω, F, {G_t^X}, P) is a standard filtered space and X is a {G_t^X}-adapted process. It remains to verify the independence of {G_s^X, X_t − X_s} for every s, t ∈ R+ such that s < t. Now according to Theorem 13.10, the fact that X is a process with independent increments implies the independence of {F_s^X, X_t − X_s}, that is, for every A ∈ F_s^X and B ∈ σ(X_t − X_s) we have P(A ∩ B) = P(A)P(B). For any N ∈ N we have P(N ∩ B) = 0 = P(N)P(B). Thus {F_s^X ∪ N, X_t − X_s} is an independent system. Now since (Ω, F, P) is a complete probability space, an arbitrary subset of a member of N is again a member of N. Thus F_s^X ∪ N is closed under intersection, that is, it is a π-class of subsets of Ω. Then the independence of {F_s^X ∪ N, σ(X_t − X_s)} implies that of {σ(F_s^X ∪ N), σ(X_t − X_s)} = {G_s^X, σ(X_t − X_s)}.

Lemma 13.19. Let X = {X_t : t ∈ R+} be a d-dimensional Brownian motion on a complete probability space (Ω, F, P) and let {G_t^X : t ∈ R+} be as above. Let 0 = t_0 < ⋯ < t_n, let s ∈ (t_{k−1}, t_k) for some k ∈ {1, ..., n}, and let f_0, ..., f_n be bounded real valued B_{R^d}-measurable functions on R^d. Then
(1) E[f_0(X_{t_0}) ⋯ f_n(X_{t_n}) | G_s^X] = f_0(X_{t_0}) ⋯ f_{k−1}(X_{t_{k−1}}) ψ(X_s) a.e. on (Ω, G_s^X, P),

where ψ is a real valued function on R^d defined by

(2) ψ(x) = {(2π)^{n−k+1} (t_k − s) ∏_{j=k+1}^n (t_j − t_{j−1})}^{−d/2} ∫_{R^{(n−k+1)d}} exp{−(1/2)[|x_k − x|²/(t_k − s) + Σ_{j=k+1}^n |x_j − x_{j−1}|²/(t_j − t_{j−1})]} f_k(x_k) ⋯ f_n(x_n) m_L^{(n−k+1)d}(d(x_k, ..., x_n)) for x ∈ R^d.
Proof. Note that since X_{t_0}, ..., X_{t_{k−1}}, X_s are G_s^X-measurable, it suffices to show that

(3) ∫_A f_0(X_{t_0}) ⋯ f_n(X_{t_n}) dP = ∫_A f_0(X_{t_0}) ⋯ f_{k−1}(X_{t_{k−1}}) ψ(X_s) dP for A ∈ G_s^X.

For 0 = u_0 < ⋯ < u_m and y_0, ..., y_m ∈ R^d, define

(4) K(u_0, ..., u_m; y_0, ..., y_m) = {(2π)^m ∏_{j=1}^m (u_j − u_{j−1})}^{−d/2} exp{−(1/2) Σ_{j=1}^m |y_j − y_{j−1}|²/(u_j − u_{j−1})}.

Let τ* = {t_0, ..., t_{k−1}, s} and X_{τ*} = (X_{t_0}, ..., X_{t_{k−1}}, X_s). For the probability distribution P_{X_{τ*}} on (R^{(k+1)d}, B_{R^{(k+1)d}}) we have for E ∈ B_{R^{(k+1)d}}

(5) P_{X_{τ*}}(E) = ∫_E K(t_0, ..., t_{k−1}, s; x_0, ..., x_{k−1}, x) (P_{X_0} × m_L^{kd})(d(x_0, ..., x_{k−1}, x))

by (1) of Proposition 13.16 and (4). Let A ∈ σ(X_{τ*}) = (X_{τ*})^{−1}(B_{R^{(k+1)d}}) so that A = (X_{τ*})^{−1}(B) for some B ∈ B_{R^{(k+1)d}}. The right side of (3) for our A is computed by the Image Probability Law, (2) and (5) as

(6) ∫_A f_0(X_{t_0}) ⋯ f_{k−1}(X_{t_{k−1}}) ψ(X_s) dP
= ∫_B f_0(x_0) ⋯ f_{k−1}(x_{k−1}) ψ(x) P_{X_{τ*}}(d(x_0, ..., x_{k−1}, x))
= ∫_B K(t_0, ..., t_{k−1}, s; x_0, ..., x_{k−1}, x) f_0(x_0) ⋯ f_{k−1}(x_{k−1}) ψ(x) (P_{X_0} × m_L^{kd})(d(x_0, ..., x_{k−1}, x))
= ∫_{B × R^{(n−k+1)d}} K(t_0, ..., t_{k−1}, s, t_k, ..., t_n; x_0, ..., x_{k−1}, x, x_k, ..., x_n) f_0(x_0) ⋯ f_n(x_n) (P_{X_0} × m_L^{(n+1)d})(d(x_0, ..., x_{k−1}, x, x_k, ..., x_n)).
To evaluate the left side of (3) for our A ∈ σ(X_{τ*}), consider the probability distribution of the random vector (X_{τ*}, X_{t_k}, ..., X_{t_n}) on (R^{(n+2)d}, B_{R^{(n+2)d}}). By (1) of Proposition 13.16 we have

(7) P_{(X_{τ*}, X_{t_k}, ..., X_{t_n})}(E) = ∫_E K(t_0, ..., t_{k−1}, s, t_k, ..., t_n; x_0, ..., x_{k−1}, x, x_k, ..., x_n) (P_{X_0} × m_L^{(n+1)d})(d(x_0, ..., x_{k−1}, x, x_k, ..., x_n)) for E ∈ B_{R^{(n+2)d}}.

Now for our A = X_{τ*}^{−1}(B) where B ∈ B_{R^{(k+1)d}}, we have

A = X_{τ*}^{−1}(B) ∩ Ω ∩ ⋯ ∩ Ω = X_{τ*}^{−1}(B) ∩ X_{t_k}^{−1}(R^d) ∩ ⋯ ∩ X_{t_n}^{−1}(R^d) = (X_{τ*}, X_{t_k}, ..., X_{t_n})^{−1}(B × R^{(n−k+1)d}).

Thus by the Image Probability Law and (7), we have

(8) ∫_A f_0(X_{t_0}) ⋯ f_n(X_{t_n}) dP
= ∫_{B × R^{(n−k+1)d}} f_0(x_0) ⋯ f_n(x_n) P_{(X_{τ*}, X_{t_k}, ..., X_{t_n})}(d(x_0, ..., x_{k−1}, x, x_k, ..., x_n))
= ∫_{B × R^{(n−k+1)d}} K(t_0, ..., t_{k−1}, s, t_k, ..., t_n; x_0, ..., x_{k−1}, x, x_k, ..., x_n) f_0(x_0) ⋯ f_n(x_n) (P_{X_0} × m_L^{(n+1)d})(d(x_0, ..., x_{k−1}, x, x_k, ..., x_n)).
Comparing (6) with (8), we see that the two sides of (3) agree for every A ∈ σ(X_{τ*}), and hence, by the π-class argument, (3) holds for every A ∈ F_s^X. Since (Ω, F, P) is a complete probability space, an arbitrary subset of a member of N is again a member of N. From this it follows that F_s^X ∪ N is a π-class of subsets of Ω. Since (3) holds for every A ∈ F_s^X ∪ N, it holds for every A ∈ σ(F_s^X ∪ N) = G_s^X. ■

Lemma 13.20. Let X, {t_0, ..., t_n}, s, k, and f_0, ..., f_n be as in Lemma 13.19. Then

E[f_0(X_{t_0}) ⋯ f_n(X_{t_n}) | G_{s+0}^X] = E[f_0(X_{t_0}) ⋯ f_n(X_{t_n}) | G_s^X] a.e. on (Ω, G_{s+0}^X, P).

Proof. Consider first the case s ≥ t_n. Then f_0(X_{t_0}), ..., f_n(X_{t_n}) are G_s^X-measurable, so that

E[f_0(X_{t_0}) ⋯ f_n(X_{t_n}) | G_{s+0}^X] = f_0(X_{t_0}) ⋯ f_n(X_{t_n}) E[1 | G_{s+0}^X],

and

E[f_0(X_{t_0}) ⋯ f_n(X_{t_n}) | G_s^X] = f_0(X_{t_0}) ⋯ f_n(X_{t_n}) E[1 | G_s^X].

Now E[1 | G_{s+0}^X] consists of all G_{s+0}^X-measurable functions φ characterized by the condition that φ = 1 a.e. on (Ω, G_{s+0}^X, P), that is, there exists a null set Λ in (Ω, G_{s+0}^X, P) such that φ = 1 on Λ^c and φ is arbitrary on Λ, since (Ω, G_{s+0}^X, P) is a complete measure space. Similarly E[1 | G_s^X] consists of all functions ψ such that ψ = 1 a.e. on (Ω, G_s^X, P). Since the collections of null sets in the two measure spaces coincide, the two conditional expectations coincide, proving the lemma in this case. Consider now the case s < t_n with s ∈ (t_{k−1}, t_k). By Theorem 8.7 for martingales with reversed time, we have

(1) E[f_0(X_{t_0}) ⋯ f_n(X_{t_n}) | G_{s+0}^X] = lim_{m→∞} E[f_0(X_{t_0}) ⋯ f_n(X_{t_n}) | G_{s+1/m}^X] a.e. on (Ω, G_{s+0}^X, P),
for arbitrary versions of the conditional expectations. Now according to Lemma 13.19 we have

(2) E[f_0(X_{t_0}) ⋯ f_n(X_{t_n}) | G_s^X] = f_0(X_{t_0}) ⋯ f_{k−1}(X_{t_{k−1}}) ψ(X_s) a.e. on (Ω, G_s^X, P),

and similarly for m ∈ N so large that s + 1/m < t_k we have

(3) E[f_0(X_{t_0}) ⋯ f_n(X_{t_n}) | G_{s+1/m}^X] = f_0(X_{t_0}) ⋯ f_{k−1}(X_{t_{k−1}}) ψ(X_{s+1/m}) a.e. on (Ω, G_{s+1/m}^X, P).

Note that since the filtered space (Ω, F, {G_t^X}, P) is augmented, such conditions as 'a.e. on (Ω, G_s^X, P)', 'a.e. on (Ω, G_{s+0}^X, P)', and 'a.e. on (Ω, G_{s+1/m}^X, P)' are all equivalent to the condition 'a.e. on (Ω, F, P)'. From (1) and (3), we have

(4) E[f_0(X_{t_0}) ⋯ f_n(X_{t_n}) | G_{s+0}^X] = lim_{m→∞} f_0(X_{t_0}) ⋯ f_{k−1}(X_{t_{k−1}}) ψ(X_{s+1/m}) = f_0(X_{t_0}) ⋯ f_{k−1}(X_{t_{k−1}}) ψ(X_s) a.e. on (Ω, G_{s+0}^X, P),

where the second equality is by the continuity of X and the continuity of ψ on R^d. Thus we have E[f_0(X_{t_0}) ⋯ f_n(X_{t_n}) | G_{s+0}^X] = E[f_0(X_{t_0}) ⋯ f_n(X_{t_n}) | G_s^X] by (4) and (2). ■

Observation 13.21. Let G be an open set in a metric space (S, ρ). Then there exists a sequence {f_n : n ∈ N} of continuous functions on S such that 0 ≤ f_n ≤ 1 on S for n ∈ N and lim_{n→∞} f_n = 1_G on S.
Proof. If G = S, then f_n = 1 on S for n ∈ N will do. Suppose G ≠ S. Then G^c is a nonempty closed set. Let ρ(x, G^c) = inf_{y∈G^c} ρ(x, y) for x ∈ S. Then ρ(x, G^c) is a continuous function of x ∈ S and ρ(x, G^c) = 0 if and only if x ∈ G^c. For n ∈ N, let f_n be defined by

f_n(x) = ρ(x, G^c) / (ρ(x, G^c) + 1/n).

Then f_n is continuous on S and satisfies the condition 0 ≤ f_n ≤ 1 on S. If x ∈ G, then x ∉ G^c, ρ(x, G^c) > 0, and lim_{n→∞} f_n(x) = 1. If x ∉ G, then x ∈ G^c, ρ(x, G^c) = 0, f_n(x) = 0, and lim_{n→∞} f_n(x) = 0. This shows that lim_{n→∞} f_n = 1_G on S. ■
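The construction in Observation 13.21 is concrete enough to compute. The sketch below (an illustration, not from the text) takes S = R and G = (0, 1), so that ρ(x, G^c) is the distance to the complement, and checks the pointwise behavior of f_n(x) = ρ(x, G^c)/(ρ(x, G^c) + 1/n):

```python
# Sketch of Observation 13.21 on S = R with G = (0, 1).
def rho_to_complement(x):
    # G^c = (-inf, 0] U [1, inf); the distance is 0 on G^c
    # and min(x, 1 - x) inside (0, 1).
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return min(x, 1.0 - x)

def f(n, x):
    # f_n(x) = rho(x, G^c) / (rho(x, G^c) + 1/n), continuous, 0 <= f_n <= 1.
    r = rho_to_complement(x)
    return r / (r + 1.0 / n)

assert f(10**8, 0.5) > 0.999999   # interior point of G: f_n -> 1
assert f(10**8, -2.0) == 0.0      # point of G^c: f_n = 0 for all n
assert all(0.0 <= f(n, x) <= 1.0 for n in (1, 5, 50) for x in (-1.0, 0.25, 0.999))
```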
Proposition 13.22. Let X = {X_t : t ∈ R+} be a d-dimensional Brownian motion on a complete probability space (Ω, F, P). Let N be the collection of all the null sets in (Ω, F, P)
and let F_t^X = σ{X_s : s ∈ [0, t]} and G_t^X = σ(F_t^X ∪ N) for t ∈ R+. Then the filtration {G_t^X : t ∈ R+} is right-continuous, that is, G_{s+0}^X = G_s^X for every s ∈ R+.

Proof. To prove G_{s+0}^X = G_s^X for s ∈ R+, it suffices to show that for every t ∈ R+

(1) E[1_A | G_{s+0}^X] = E[1_A | G_s^X] for every A ∈ G_t^X.

Now since G_{s+0}^X ⊂ G_t^X for t > s, (1) implies

(2) E[1_A | G_{s+0}^X] = E[1_A | G_s^X] for every A ∈ G_{s+0}^X.

Since 1_A ∈ E[1_A | G_{s+0}^X] for A in G_{s+0}^X, (2) implies 1_A ∈ E[1_A | G_s^X], that is, 1_A is G_s^X-measurable, in other words, A ∈ G_s^X. Thus G_{s+0}^X ⊂ G_s^X and therefore G_{s+0}^X = G_s^X. Therefore it remains to prove (1). Now G_t^X = σ(F_t^X ∪ N) and according to Lemma 13.7 we have F_t^X = σ(∪_{τ∈T_t} σ(X_τ)) where T_t is the collection of all finite strictly increasing sequences τ in [0, t], X_τ = (X_{t_1}, ..., X_{t_n}) for τ = {t_1, ..., t_n}, and ∪_{τ∈T_t} σ(X_τ) is a π-class of subsets of Ω. Let us prove (1) first for the particular case where A ∈ σ(X_τ) for some τ ∈ T_t. Now if A ∈ σ(X_τ), then A = X_τ^{−1}(E) for some E ∈ B_{R^{nd}}, so it suffices to show that
(3) E[1_{X_τ^{−1}(E)} | G_{s+0}^X] = E[1_{X_τ^{−1}(E)} | G_s^X] for every E ∈ B_{R^{nd}}.

Let G_1, ..., G_n be open sets in R^d. By Observation 13.21, there exist sequences {f_{j,k} : k ∈ N}, j = 1, ..., n, of continuous functions on R^d, bounded between 0 and 1, such that lim_{k→∞} f_{j,k} = 1_{G_j} for j = 1, ..., n. According to Lemma 13.20 we have

(4) E[f_{1,k}(X_{t_1}) ⋯ f_{n,k}(X_{t_n}) | G_{s+0}^X] = E[f_{1,k}(X_{t_1}) ⋯ f_{n,k}(X_{t_n}) | G_s^X].

By the Conditional Bounded Convergence Theorem, we have

(5) lim_{k→∞} E[f_{1,k}(X_{t_1}) ⋯ f_{n,k}(X_{t_n}) | G_{s+0}^X] = E[1_{G_1}(X_{t_1}) ⋯ 1_{G_n}(X_{t_n}) | G_{s+0}^X] a.e. on (Ω, G_{s+0}^X, P),

for arbitrary versions of the conditional expectations, and similarly

(6) lim_{k→∞} E[f_{1,k}(X_{t_1}) ⋯ f_{n,k}(X_{t_n}) | G_s^X] = E[1_{G_1}(X_{t_1}) ⋯ 1_{G_n}(X_{t_n}) | G_s^X] a.e. on (Ω, G_s^X, P).
By (4), (5) and (6), we have

(7) E[1_{G_1}(X_{t_1}) ⋯ 1_{G_n}(X_{t_n}) | G_{s+0}^X] = E[1_{G_1}(X_{t_1}) ⋯ 1_{G_n}(X_{t_n}) | G_s^X].

Since X_τ = (X_{t_1}, ..., X_{t_n}), we have

1_{G_1}(X_{t_1}) ⋯ 1_{G_n}(X_{t_n}) = 1_{X_{t_1}^{−1}(G_1)} ⋯ 1_{X_{t_n}^{−1}(G_n)} = 1_{X_{t_1}^{−1}(G_1) ∩ ⋯ ∩ X_{t_n}^{−1}(G_n)} = 1_{(X_{t_1}, ..., X_{t_n})^{−1}(G_1 × ⋯ × G_n)} = 1_{X_τ^{−1}(G_1 × ⋯ × G_n)}

so that by (7) we have

(8) E[1_{X_τ^{−1}(G_1 × ⋯ × G_n)} | G_{s+0}^X] = E[1_{X_τ^{−1}(G_1 × ⋯ × G_n)} | G_s^X].

Let D be the collection of all members E of B_{R^{nd}} such that

(9) E[1_{X_τ^{−1}(E)} | G_{s+0}^X] = E[1_{X_τ^{−1}(E)} | G_s^X].

It is easily verified that D is a d-class. For instance, if D_k ∈ D, k ∈ N, and D_k ↑, then lim_{k→∞} D_k ∈ D by the Conditional Monotone Convergence Theorem. Let O^{(d)} and O^{(nd)} be the collections of all open sets in R^d and R^{nd} respectively. Now since the collection of members of B_{R^{nd}} of the type G_1 × ⋯ × G_n where G_1, ..., G_n ∈ O^{(d)} is a π-class, and since D contains this π-class according to (8), we have

σ{G_1 × ⋯ × G_n : G_1, ..., G_n ∈ O^{(d)}} ⊂ D

by Theorem 1.5. Now every member of O^{(nd)} is a countable union of sets of the type G_1 × ⋯ × G_n where G_1, ..., G_n ∈ O^{(d)} and thus we have σ(O^{(nd)}) ⊂ σ{G_1 × ⋯ × G_n : G_1, ..., G_n ∈ O^{(d)}} ⊂ D. Therefore B_{R^{nd}} = σ(O^{(nd)}) ⊂ D. Thus (9) holds for every E ∈ B_{R^{nd}}. This proves (3) and therefore (1) holds for every A ∈ σ(X_τ). From the arbitrariness of τ ∈ T_t, (1) holds for every A ∈ ∪_{τ∈T_t} σ(X_τ). It is easily verified that the collection of all members A of F for which (1) holds is a d-class of subsets of Ω. Now since this d-class contains the π-class ∪_{τ∈T_t} σ(X_τ), we have

(10) E[1_A | G_{s+0}^X] = E[1_A | G_s^X] for every A ∈ F_t^X.
Also

(11) E[1_N | G_{s+0}^X] = E[1_N | G_s^X] for every N ∈ N,

since the left side consists of all extended real valued functions on Ω which are equal to 0 except on a null set in (Ω, G_{s+0}^X, P), the right side consists of all extended real valued functions on Ω which are equal to 0 except on a null set in (Ω, G_s^X, P), and the collections of all the null sets in these two measure spaces are the same N. By (10) and (11), the equality (1) holds for every A ∈ F_t^X ∪ N. Now by the completeness of the probability space (Ω, F, P), an arbitrary subset of a member of N is again a member of N. This implies that F_t^X ∪ N is a π-class. Then since (1) holds for every A in the π-class F_t^X ∪ N, it holds for every A in σ(F_t^X ∪ N) = G_t^X by the same argument as above. ■

By Remark 13.18 and Proposition 13.22 we have the following theorem.

Theorem 13.23. Let X = {X_t : t ∈ R+} be a d-dimensional Brownian motion on a complete probability space (Ω, F, P). Let N be the collection of all the null sets in (Ω, F, P) and let F_t^X = σ{X_s : s ∈ [0, t]} and G_t^X = σ(F_t^X ∪ N) for t ∈ R+. Then (Ω, F, {G_t^X}, P) is a standard filtered space and X is a {G_t^X}-adapted d-dimensional Brownian motion on it.

We show next that conditions 2° and 3° in Definition 13.17 for an {F_t}-adapted d-dimensional Brownian motion are equivalent to a condition on the conditional expectation of the characteristic function of X_t − X_s with respect to F_s. For this we need the following lemma.

Lemma 13.24. Let X be a d-dimensional random vector on a probability space (Ω, F, P) and let 𝔊 be a sub-σ-algebra of F. Let φ_X be the characteristic function of X.
1) If {𝔊, X} is independent, then for every y ∈ R^d we have

(1) E[e^{i(y,X)} | 𝔊] = E[e^{i(y,X)}] = φ_X(y) a.e. on (Ω, 𝔊, P).

2) If there exists a complex valued function ψ on R^d such that for every y ∈ R^d we have

(2) E[e^{i(y,X)} | 𝔊] = ψ(y) a.e. on (Ω, 𝔊, P),

then {𝔊, X} is independent and thus ψ = φ_X.
Proof. 1) If {𝔊, X} is independent, then {𝔊, e^{i(y,X)}} is independent for every y ∈ R^d since e^{i(y,x)} is a B_{R^d}-measurable function of x ∈ R^d. Thus

E[e^{i(y,X)} | 𝔊] = E[e^{i(y,X)}] = φ_X(y) a.e. on (Ω, 𝔊, P).

2) Conversely suppose (2) holds. To show the independence of {𝔊, X}, it suffices to show the independence of {1_G, X} for every G ∈ 𝔊. According to Kac's Theorem, the d-dimensional random vector X and the random variable 1_G constitute an independent system if and only if for every y ∈ R^d and z ∈ R we have E[e^{i{(y,X)+(z,1_G)}}] = E[e^{i(y,X)}] E[e^{i(z,1_G)}]. To verify this last equality, note that

E[e^{i{(y,X)+(z,1_G)}}] = E[E[e^{i{(y,X)+(z,1_G)}} | 𝔊]] = E[e^{i(z,1_G)} E[e^{i(y,X)} | 𝔊]] since 1_G is 𝔊-measurable
= E[e^{i(z,1_G)} ψ(y)] = ψ(y) E[e^{i(z,1_G)}] = E[e^{i(y,X)}] E[e^{i(z,1_G)}],

where the last equality holds because taking expectations in (2) gives ψ(y) = E[e^{i(y,X)}]. ■
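The factorization that Kac's Theorem demands can be checked numerically. In this sketch (an illustration, not from the text) X is standard normal and the event G is built from an independent uniform variable, so σ(G) plays the role of a sub-σ-algebra independent of X; then E[e^{iyX} 1_G] should factor as E[e^{iyX}] P(G):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400_000

# X ~ N(0,1); G = {U < 0.3} with U uniform and independent of X.
X = rng.standard_normal(n)
U = rng.random(n)
indicator_G = (U < 0.3).astype(float)

y = 1.7
lhs = np.mean(np.exp(1j * y * X) * indicator_G)            # E[e^{iyX} 1_G]
rhs = np.mean(np.exp(1j * y * X)) * np.mean(indicator_G)   # E[e^{iyX}] P(G)

assert abs(lhs - rhs) < 0.01
# The characteristic function itself should match that of N(0,1), e^{-y^2/2}.
assert abs(np.mean(np.exp(1j * y * X)) - np.exp(-y**2 / 2)) < 0.01
```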
Proposition 13.25. Let X = {X_t : t ∈ R+} be a d-dimensional stochastic process on a filtered space (Ω, F, {F_t}, P). Then conditions 2° and 3° in Definition 13.17 are equivalent to the condition that for any s, t ∈ R+ such that s < t and y ∈ R^d we have

E[e^{i(y, X_t − X_s)} | F_s] = e^{−|y|²(t−s)/2} a.e. on (Ω, F_s, P).

Proof. 1) Assume 2° and 3° in Definition 13.17. The independence of the system {F_s, X_t − X_s} implies that of {F_s, e^{i(y, X_t−X_s)}}. Thus

E[e^{i(y, X_t−X_s)} | F_s] = E[e^{i(y, X_t−X_s)}] = e^{−|y|²(t−s)/2} a.e. on (Ω, F_s, P),

where the last equality is from the fact that the probability distribution P_{X_t−X_s} of X_t − X_s is the d-dimensional normal distribution N_d(0, (t−s)·I), so that by the Image Probability Law

E[e^{i(y, X_t−X_s)}] = ∫_{R^d} e^{i(y,x)} P_{X_t−X_s}(dx),

and this last integral is the characteristic function of N_d(0, (t−s)·I), which is equal to e^{−|y|²(t−s)/2} for y ∈ R^d by Theorem D.5.
2) Conversely assume the condition in the Proposition. Then by Lemma 13.24, the system {F_s, X_t − X_s} is independent, and the characteristic function of X_t − X_s is given by

φ_{X_t−X_s}(y) = e^{−|y|²(t−s)/2} for y ∈ R^d,

so that the probability distribution of X_t − X_s is the d-dimensional normal distribution N_d(0, (t − s)·I). ■

As an immediate consequence of Definition 13.17 and Proposition 13.25 we have the following theorem.

Theorem 13.26. Let X = {X_t : t ∈ R+} be a d-dimensional stochastic process on a standard filtered space (Ω, F, {F_t}, P). Then X is an {F_t}-adapted d-dimensional Brownian motion on the filtered space if and only if
1°. X_t is F_t-measurable for every t ∈ R+,
2°. for every s, t ∈ R+ such that s < t and y ∈ R^d we have E[e^{i(y, X_t−X_s)} | F_s] = e^{−|y|²(t−s)/2} a.e. on (Ω, F_s, P),
3°. every sample function of X is continuous on R+.

Theorem 13.27. An {F_t}-adapted d-dimensional process X = {X_t : t ∈ R+} on a standard filtered space (Ω, F, {F_t}, P) is an {F_t}-adapted d-dimensional Brownian motion on the filtered space if and only if it satisfies the following two conditions:
1°. {e^{i(y,X_t) + |y|²t/2} : t ∈ R+} is a martingale on the filtered space for every y ∈ R^d,
2°. every sample function of X is continuous on R+.

Proof. It suffices to note that condition 2° in Theorem 13.26 is equivalent to the condition that for every s, t ∈ R+ such that s < t and y ∈ R^d we have

E[e^{i(y,X_t) + |y|²t/2} | F_s] = e^{i(y,X_s) + |y|²s/2} a.e. on (Ω, F_s, P). ■
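The equivalence invoked in this proof can be written out in one chain; the following verification is a supplement, not part of the original text:

```latex
% From condition 2° of Theorem 13.26 to the martingale property of
% e^{i(y,X_t)+|y|^2 t/2} in Theorem 13.27, for s < t:
\begin{aligned}
\mathrm{E}\bigl[e^{i(y,X_t)+\frac{|y|^2 t}{2}} \,\big|\, \mathfrak{F}_s\bigr]
  &= e^{\frac{|y|^2 t}{2}}\, e^{i(y,X_s)}\,
     \mathrm{E}\bigl[e^{i(y,X_t - X_s)} \,\big|\, \mathfrak{F}_s\bigr]
     && \text{($X_s$ is $\mathfrak{F}_s$-measurable)}\\
  &= e^{\frac{|y|^2 t}{2}}\, e^{i(y,X_s)}\, e^{-\frac{|y|^2 (t-s)}{2}}
     && \text{(condition 2° of Theorem 13.26)}\\
  &= e^{i(y,X_s)+\frac{|y|^2 s}{2}}.
\end{aligned}
```

Reading the chain backwards, the martingale property together with the F_s-measurability of X_s recovers condition 2°, since e^{i(y,X_s)} never vanishes.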
Let X = {X_t : t ∈ R+} be an {F_t}-adapted d-dimensional Brownian motion on a standard filtered space (Ω, F, {F_t}, P). For a fixed t_0 ∈ R+, let 𝔊_t = F_{t_0+t} for t ∈ R+. Then (Ω, F, {𝔊_t}, P) is a standard filtered space, and if we let Y_t = X_{t_0+t} for t ∈ R+, then Y = {Y_t : t ∈ R+} is a {𝔊_t}-adapted d-dimensional process on (Ω, F, {𝔊_t}, P) with initial probability distribution P_{Y_0} = P_{X_{t_0}}. For s, t ∈ R+ such that s < t, the system {𝔊_s, Y_t − Y_s} = {F_{s+t_0}, X_{t+t_0} − X_{s+t_0}} is independent. Also P_{Y_t−Y_s} = P_{X_{t+t_0}−X_{s+t_0}} = N_d(0, (t−s)·I). Thus Y is a {𝔊_t}-adapted d-dimensional Brownian motion on (Ω, F, {𝔊_t}, P). This shows that if X is an adapted d-dimensional Brownian motion, then at any time t_0 ∈ R+ it starts anew as an adapted d-dimensional Brownian motion Y with P_{X_{t_0}} as its initial probability distribution, but otherwise the probability distribution of Y does not depend on the probability distribution of X in the time interval [0, t_0). This property of an adapted Brownian motion is a particular case of the following theorem.

Theorem 13.28. (Strong Markov Property) Let X = {X_t : t ∈ R+} be an {F_t}-adapted d-dimensional Brownian motion on a standard filtered space (Ω, F, {F_t}, P). For a finite stopping time T on the filtered space, let 𝔊_t = F_{T+t} and Y_t = X_{T+t} for t ∈ R+. Then (Ω, F, {𝔊_t}, P) is a standard filtered space and Y = {Y_t : t ∈ R+} is a {𝔊_t}-adapted d-dimensional Brownian motion on the standard filtered space with initial distribution P_{Y_0} = P_{X_T}.

Proof. Clearly {𝔊_t : t ∈ R+} is a filtration on the probability space (Ω, F, P), that is, an increasing system of sub-σ-algebras of F. To show that this filtration is augmented, we show that 𝔊_0 contains all the null sets in (Ω, F, P). Now 𝔊_0 = F_T and F_T consists of all A ∈ F_∞ such that A ∩ {T ≤ t} ∈ F_t for every t ∈ R+. Let N be an arbitrary null set in (Ω, F, P). Then N ∈ F_0 ⊂ F_∞ since F_0 is augmented. The completeness of the measure space (Ω, F, P) implies that N ∩ {T ≤ t} is a null set in it, and thus N ∩ {T ≤ t} ∈ F_t since F_t is augmented for every t ∈ R+. Therefore N ∈ F_T = 𝔊_0. This shows that the filtration {𝔊_t : t ∈ R+} is augmented. Let us show the right-continuity of the filtration {𝔊_t : t ∈ R+}, that is, for every t_0 ∈ R+ we have ∩_{u>t_0} 𝔊_u = 𝔊_{t_0}, in other words, ∩_{u>t_0} F_{T+u} = F_{T+t_0}. To show this, we show that if A ∈ F_{T+u} for all u > t_0, then A ∈ F_{T+t_0}. Now since the filtration {F_t : t ∈ R+} is right-continuous, according to Theorem 3.4, A ∈ F_{T+u} if and only if A ∈ F_∞ and A ∩ {T + u < t} ∈ F_t for every t ∈ R+, and similarly A ∈ F_{T+t_0} if and only if A ∈ F_∞ and A ∩ {T + t_0 < t} ∈ F_t for every t ∈ R+. Note that with fixed t ∈ R+, we have {T + u < t} ↑ as u ↓ t_0, and {T + u < t} ⊂ {T + t_0 < t} for u > t_0. If ω ∈ {T + t_0 < t}, then T(ω) + t_0 < t so that T(ω) + u < t and thus ω ∈ {T + u < t}
for some u > t_0. Therefore {T + t_0 < t} = ∪_{u>t_0} {T + u < t} = lim_{u↓t_0} {T + u < t} and thus A ∩ lim_{u↓t_0} {T + u < t} = A ∩ {T + t_0 < t} for any A ⊂ Ω. Now let A ∈ F_{T+u} for all u > t_0. Then since A ∩ {T + u < t} ∈ F_t for every u > t_0 we have A ∩ lim_{u↓t_0} {T + u < t} ∈ F_t, that is, A ∩ {T + t_0 < t} ∈ F_t, for every t ∈ R+, and therefore A ∈ F_{T+t_0}. This proves the right-continuity of {𝔊_t : t ∈ R+}.

Let us show that Y is a {𝔊_t}-adapted d-dimensional Brownian motion on the standard filtered space (Ω, F, {𝔊_t}, P). Now since Y_t = X_{T+t}, 𝔊_t = F_{T+t}, and X_{T+t} is F_{T+t}-measurable, Y_t is 𝔊_t-measurable for every t ∈ R+, that is, Y is {𝔊_t}-adapted. Clearly every sample function of Y is continuous. According to Theorem 13.27, it remains to show that {e^{i(y,Y_t) + |y|²t/2} : t ∈ R+} is a martingale with respect to {𝔊_t} for every y ∈ R^d, that is, for s < t,

E[e^{i(y,Y_t) + |y|²t/2} | 𝔊_s] = e^{i(y,Y_s) + |y|²s/2} a.e. on (Ω, 𝔊_s, P),

in other words,

(1) E[e^{i(y,X_{T+t}) + |y|²t/2} | F_{T+s}] = e^{i(y,X_{T+s}) + |y|²s/2} a.e. on (Ω, F_{T+s}, P).

The F_{T+s}-measurability of X_{T+s} implies that of the right side of (1). Thus it remains to verify that for every A ∈ F_{T+s} we have

(2) ∫_A e^{i(y,X_{T+t}) + |y|²t/2} dP = ∫_A e^{i(y,X_{T+s}) + |y|²s/2} dP.

Now according to Theorem 13.27, {e^{i(y,X_t) + |y|²t/2} : t ∈ R+} is a martingale on the filtered space (Ω, F, {F_t}, P) for every y ∈ R^d. For n ∈ N, consider the stopping time T_n = T ∧ n. By Theorem 8.10 (Optional Sampling for Bounded Stopping Times) we have, after cancelling the F_{T_n+s}-measurable, nonvanishing factor involving T_n on both sides,

(3) E[e^{i(y,X_{T_n+t}) + |y|²t/2} | F_{T_n+s}] = e^{i(y,X_{T_n+s}) + |y|²s/2} a.e. on (Ω, F_{T_n+s}, P).

If A ∈ F_{T+s}, then A ∩ {T + s ≤ T_n + s} ∈ F_{T_n+s} by Theorem 3.9. But {T + s ≤ T_n + s} = {T ≤ n}, so that A ∩ {T ≤ n} ∈ F_{T_n+s} and thus by (3)

(4) ∫_{A∩{T≤n}} e^{i(y,X_{T_n+t}) + |y|²t/2} dP = ∫_{A∩{T≤n}} e^{i(y,X_{T_n+s}) + |y|²s/2} dP.
Since (4) holds for every n ∈ N, and since the finiteness of T implies that ∪_{n∈N}{T ≤ n} = Ω and consequently lim_{n→∞} 1_{A∩{T≤n}} = 1_A on Ω, letting n → ∞ in (4) and noting that T_n = T on {T ≤ n}, we obtain (2) by the Bounded Convergence Theorem. ■

[III] 1-Dimensional Brownian Motions

A 1-dimensional Brownian motion is a particular case of the d-dimensional Brownian motion we considered thus far and will be referred to simply as a Brownian motion. We shall show that if X = {X_t : t ∈ R+} is an {F_t}-adapted Brownian motion on a standard filtered space (Ω, F, {F_t}, P) and if X_0 = 0 a.e. on (Ω, F, P), then X ∈ M_2^c(Ω, F, {F_t}, P) and its quadratic variation process is given by [X]_t = t for t ∈ R+.

Lemma 13.30. Let X = {X_t : t ∈ R+} be a Brownian motion on a probability space (Ω, F, P) with an arbitrary initial distribution P_{X_0}. Then for every t ∈ R+, we have

(1) E(X_t) = E(X_0) = ∫_R x_0 P_{X_0}(dx_0),

provided the integral exists, and

(2) E(X_t²) = E(X_0²) + t = ∫_R x_0² P_{X_0}(dx_0) + t.

Thus X is an L_1-process if and only if ∫_R x_0 P_{X_0}(dx_0) ∈ R, and X is an L_2-process if and only if ∫_R x_0² P_{X_0}(dx_0) < ∞. In particular, if P_{X_0} is a unit mass at some a ∈ R, then E(X_t) = a and E(X_t²) = a² + t for t ∈ R+ and X is an L_2-process.
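The moment formulas (1) and (2) of Lemma 13.30 are easy to confirm by simulation. In this sketch (an illustration, not from the text) the initial distribution is a unit mass at a, so X_t = a + N(0, t) and we expect E(X_t) = a and E(X_t²) = a² + t:

```python
import numpy as np

rng = np.random.default_rng(7)
n, t, a = 1_000_000, 3.0, 2.0

# Brownian motion started at X_0 = a (unit mass at a): X_t = a + N(0, t).
X_t = a + np.sqrt(t) * rng.standard_normal(n)

mean = float(np.mean(X_t))
second_moment = float(np.mean(X_t**2))

assert abs(mean - a) < 0.02                    # E(X_t) = E(X_0) = a
assert abs(second_moment - (a**2 + t)) < 0.1   # E(X_t^2) = E(X_0^2) + t
```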
Proof. For t > 0, the probability distribution P_{X_t} of the random variable X_t is given by (3) of Proposition 13.16 as

(3) P_{X_t}(E) = (2πt)^{−1/2} ∫_R P_{X_0}(dx_0) ∫_E exp{−(1/2)|x − x_0|²/t} m_L(dx) for E ∈ B_R.

Now from

(4) (2πt)^{−1/2} ∫_R exp{−(1/2)|x − x_0|²/t} m_L(dx) = 1,

we have for any x_0 ∈ R

(5) (2πt)^{−1/2} ∫_R x exp{−(1/2)|x − x_0|²/t} m_L(dx)
= (2πt)^{−1/2} ∫_R (x − x_0) exp{−(1/2)|x − x_0|²/t} m_L(dx) + (2πt)^{−1/2} ∫_R x_0 exp{−(1/2)|x − x_0|²/t} m_L(dx)
= x_0,

the first integral vanishing by the odd symmetry of its integrand about x_0. Then by the Image Probability Law and by (3) and (5), we have

E(X_t) = ∫_Ω X_t dP = ∫_R x P_{X_t}(dx) = ∫_R P_{X_0}(dx_0) {(2πt)^{−1/2} ∫_R x exp{−(1/2)|x − x_0|²/t} m_L(dx)} = ∫_R x_0 P_{X_0}(dx_0),

proving (1). Similarly from

(6) (2πt)^{−1/2} ∫_R x² exp{−(1/2)x²/t} m_L(dx) = t,

we have

(7) (2πt)^{−1/2} ∫_R x² exp{−(1/2)|x − x_0|²/t} m_L(dx)
= (2πt)^{−1/2} ∫_R {(x − x_0)² + 2x_0(x − x_0) + x_0²} exp{−(1/2)|x − x_0|²/t} m_L(dx)
= t + x_0².
By the Image Probability Law, (3), and (7), we have

E(X_t²) = ∫_Ω X_t² dP = ∫_R x² P_{X_t}(dx) = ∫_R P_{X_0}(dx_0) {(2πt)^{−1/2} ∫_R x² exp{−(1/2)|x − x_0|²/t} m_L(dx)} = ∫_R x_0² P_{X_0}(dx_0) + t,

proving (2). ■

Proposition 13.31. Let X = {X_t : t ∈ R+} be an {F_t}-adapted Brownian motion on a standard filtered space (Ω, F, {F_t}, P). If X_0 is integrable then X is a martingale, and if X_0 is square-integrable then X is an L_2-martingale on the filtered space. If X_0 = 0 a.e. on (Ω, F, P) then X ∈ M_2^c(Ω, F, {F_t}, P) and a quadratic variation process [X] of X is given by [X]_t = t for t ∈ R+.

Proof. If X_0 is integrable, then by Lemma 13.30, E(X_t) = E(X_0) ∈ R for all t ∈ R+ and thus X is an L_1-process. To show that X is a martingale, let s, t ∈ R+ be such that s < t. Then E[X_t − X_s | F_s] = E[X_t − X_s] = 0 a.e. on (Ω, F_s, P), where the first equality is by the independence of {F_s, X_t − X_s} according to 2° of Definition 13.17 and the second equality is by the fact that the probability distribution of X_t − X_s is given by N(0, t − s) according to 3° of Definition 13.17. If X_0 is square-integrable then X is an L_2-process by Lemma 13.30. If X_0 = 0 a.e. on (Ω, F, P) then X ∈ M_2^c(Ω, F, {F_t}, P) by Definition 11.3. To find a quadratic variation process [X], let a stochastic process A = {A_t : t ∈ R+} be defined on the filtered space by setting A(t, ω) = t for (t, ω) ∈ R+ × Ω. Then A is trivially a continuous increasing process on the filtered space. To show that A is a quadratic variation process of X, it remains to verify that X² − A is a null at 0 right-continuous martingale. Clearly X² − A is null at 0 and continuous. To show that it is a martingale, we show that for s, t ∈ R+ such that s < t we have

E[X_t² − A_t | F_s] = X_s² − A_s a.e. on (Ω, F_s, P),

or, equivalently,

E[X_t² − X_s² | F_s] = t − s a.e. on (Ω, F_s, P).

But

E[X_t² − X_s² | F_s] = E[{X_t − X_s}² | F_s] = E[{X_t − X_s}²] = t − s a.e. on (Ω, F_s, P),
where the first equality is by the martingale property of X, the second equality is by the independence of {F_s, {X_t − X_s}²} implied by that of {F_s, X_t − X_s}, and the third equality is by the fact that the probability distribution of X_t − X_s is given by N(0, t − s). This shows that A is a quadratic variation process of X. ■

Proposition 13.32. Let X = {X_t : t ∈ R+} be an {F_t}-adapted d-dimensional Brownian motion on a standard filtered space (Ω, F, {F_t}, P). Then its components X^{(i)}, i = 1, ..., d, are {F_t}-adapted 1-dimensional Brownian motions on the standard filtered space with initial distributions P_{X_0^{(i)}} = P_{X_0} ∘ π_i^{−1}, where π_i is the projection of R^d = R_1 × ⋯ × R_d onto R_i for i = 1, ..., d.

Proof. By Proposition 13.4, X^{(i)} is an {F_t}-adapted process on the filtered space. Also X^{(i)} is a continuous process. Thus according to Theorem 13.26, to show that X^{(i)} is an {F_t}-adapted Brownian motion it remains to verify that for every s, t ∈ R+ such that s < t and any y_i ∈ R we have
(1) E[e^{i y_i (X_t^{(i)} − X_s^{(i)})} | F_s] = e^{−y_i²(t−s)/2} a.e. on (Ω, F_s, P).

But since X is an {F_t}-adapted d-dimensional Brownian motion, Theorem 13.26 implies that for every y ∈ R^d we have

(2) E[e^{i(y, X_t − X_s)} | F_s] = e^{−|y|²(t−s)/2} a.e. on (Ω, F_s, P).

With the choice y = (0, ..., 0, y_i, 0, ..., 0) ∈ R^d, (2) reduces to (1). Regarding the initial distribution P_{X_0^{(i)}} of X^{(i)}, we have

P_{X_0^{(i)}}(E) = P ∘ (X_0^{(i)})^{−1}(E) = P ∘ (π_i ∘ X_0)^{−1}(E) = P ∘ X_0^{−1} ∘ π_i^{−1}(E) = P_{X_0} ∘ π_i^{−1}(E) for E ∈ B_R. ■

Proposition 13.33. Let X = {X_t : t ∈ R+} be an {F_t}-adapted d-dimensional Brownian motion on a standard filtered space (Ω, F, {F_t}, P). If E(|X_0|²) < ∞, then the components X^{(1)}, ..., X^{(d)} of X are continuous L_2-martingales on the filtered space. For every s, t ∈ R+ such that s < t we have

(1) E[X_t^{(i)} − X_s^{(i)} | F_s] = 0 a.e. on (Ω, F_s, P),

and

(2) E[{X_t^{(i)} − X_s^{(i)}}{X_t^{(j)} − X_s^{(j)}} | F_s] = δ_{ij}(t − s) a.e. on (Ω, F_s, P).
In particular when X_0 = 0 then X^{(1)}, ..., X^{(d)} ∈ M_2^c(Ω, F, {F_t}, P) and their quadratic variation processes are given by

(3) [X^{(i)}, X^{(j)}]_t = δ_{ij} t for t ∈ R+.

Proof. By Proposition 13.32, X^{(1)}, ..., X^{(d)} are {F_t}-adapted Brownian motions on the filtered space. Since |X_0|² = Σ_{i=1}^d |X_0^{(i)}|², if E(|X_0|²) < ∞ then E(|X_0^{(i)}|²) < ∞ for i = 1, ..., d and thus by Proposition 13.31, X^{(1)}, ..., X^{(d)} are L_2-martingales. The equation (1) is the martingale property of X^{(i)}. To prove (2), recall that since X is an {F_t}-adapted d-dimensional Brownian motion, {F_s, X_t − X_s} is an independent system by Definition 13.17. Then since the mapping π_{ij}(x_1, ..., x_d) = x_i x_j of R^d into R is a B_{R^d}/B_R-measurable mapping, the system {F_s, {X_t^{(i)} − X_s^{(i)}}{X_t^{(j)} − X_s^{(j)}}} is an independent one. Thus

E[{X_t^{(i)} − X_s^{(i)}}{X_t^{(j)} − X_s^{(j)}} | F_s] = E[{X_t^{(i)} − X_s^{(i)}}{X_t^{(j)} − X_s^{(j)}}] = δ_{ij}(t − s) a.e. on (Ω, F_s, P),

since the probability distribution of X_t − X_s is the normal distribution N_d(0, (t − s)·I) whose covariance matrix is given by (t − s)·I according to Theorem D.5. This proves (2). If X_0 = 0, then X_0^{(i)} = 0 so that X^{(i)} ∈ M_2^c(Ω, F, {F_t}, P). Let i, j = 1, ..., d be fixed. Consider V ∈ V(Ω, F, {F_t}, P) defined by V_t = δ_{ij} t for t ∈ R+. Then

E[{X_t^{(i)} X_t^{(j)} − V_t} − {X_s^{(i)} X_s^{(j)} − V_s} | F_s]
= E[X_t^{(i)} X_t^{(j)} − X_s^{(i)} X_s^{(j)} | F_s] − δ_{ij}(t − s)
= E[{X_t^{(i)} − X_s^{(i)}}{X_t^{(j)} − X_s^{(j)}} | F_s] − δ_{ij}(t − s)
= 0 a.e. on (Ω, F_s, P),

where the second equality is by the martingale property of X^{(i)} and X^{(j)} and the last equality is by (2). Thus X^{(i)} X^{(j)} − V is a right-continuous null at 0 martingale. This proves (3). ■
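The (co)variation identity (3) can be seen on a simulated path: over a fine partition of [0, t], the sums of products of increments of two independent components approximate δ_{ij}·t. This sketch (an illustration, not from the text) checks it:

```python
import numpy as np

rng = np.random.default_rng(123)
t, n = 1.0, 200_000          # time horizon, number of partition intervals
dt = t / n

# Increments of two independent components of a 2-dimensional Brownian
# motion with B_0 = 0: each increment is N(0, dt).
dB = np.sqrt(dt) * rng.standard_normal((2, n))

# Partition sums approximating [B^(i), B^(j)]_t = delta_ij * t.
qv_11 = float(np.sum(dB[0] * dB[0]))
qv_22 = float(np.sum(dB[1] * dB[1]))
qv_12 = float(np.sum(dB[0] * dB[1]))

assert abs(qv_11 - t) < 0.05
assert abs(qv_22 - t) < 0.05
assert abs(qv_12) < 0.05
```

As the mesh of the partition shrinks, the fluctuation of these sums around δ_{ij}·t vanishes, which is the pathwise content of the quadratic variation statement.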
[IV] Stochastic Integrals with Respect to a Brownian Motion

If B = {B_t : t ∈ R+} is an {F_t}-adapted null at 0 Brownian motion on a standard filtered space (Ω, F, {F_t}, P), then B ∈ M_2^c(Ω, F, {F_t}, P) and its quadratic variation process is given by [B]_t = t for t ∈ R+ according to Proposition 13.31. Thus for our
§13. ADAPTED BROWNIAN MOTIONS  291
$[B] \in \mathbf{A}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, the family of Lebesgue-Stieltjes measures $\{\mu_{[B]}(\cdot, \omega) : \omega \in \Omega\}$ on $(\mathbb{R}_+, \mathfrak{B}_{\mathbb{R}_+})$ determined by $[B]$ is simply

$\mu_{[B]}(\cdot, \omega) = m_L$ for every $\omega \in \Omega$,

where $m_L$ is the Lebesgue measure. Since a null at 0 Brownian motion $B$ is a particular case of martingales in $\mathbf{M}_2^c(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, the results in §12 concerning stochastic integrals with respect to $M \in \mathbf{M}_2^c(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ apply. We show below some implications of this special property of the quadratic variation process $[B]$ of $B$.

Observation 13.34. Let $p \in [1, \infty)$. Consider the collection of all measurable processes $X = \{X_t : t \in \mathbb{R}_+\}$ on a probability space $(\Omega, \mathfrak{F}, P)$ satisfying the integrability condition

(1) $\int_{[0,t] \times \Omega} |X(s, \omega)|^p \, (m_L \times P)(d(s, \omega)) < \infty$ for every $t \in \mathbb{R}_+$.
This condition is equivalent to the condition

(2) $\int_{[0,m] \times \Omega} |X(s, \omega)|^p \, (m_L \times P)(d(s, \omega)) < \infty$ for every $m \in \mathbb{N}$.
By the Tonelli Theorem, we have

(3) $\int_{[0,m] \times \Omega} |X(s, \omega)|^p \, (m_L \times P)(d(s, \omega)) = E\left[ \int_{[0,m]} |X(s)|^p \, m_L(ds) \right]$.
The condition $E[\int_{[0,m]} |X(s)|^p \, m_L(ds)] < \infty$ implies that $\int_{[0,m]} |X(s)|^p \, m_L(ds) < \infty$ a.e. on $(\Omega, \mathfrak{F}, P)$. Thus for every $m \in \mathbb{N}$ there exists a null set $\Lambda_m$ in $(\Omega, \mathfrak{F}, P)$ such that $\int_{[0,m]} |X(s, \omega)|^p \, m_L(ds) < \infty$ for $\omega \in \Lambda_m^c$. Then with the null set $\Lambda = \cup_{m \in \mathbb{N}} \Lambda_m$ we have

(4) $\int_{[0,t]} |X(s, \omega)|^p \, m_L(ds) < \infty$ for every $t \in \mathbb{R}_+$ when $\omega \in \Lambda^c$.
The condition

(5) $\int_{[0,t] \times \Omega} |X - Y|^p \, d(m_L \times P) = 0$ for every $t \in \mathbb{R}_+$

is an equivalence relation in the collection of all measurable processes on $(\Omega, \mathfrak{F}, P)$. Let $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ be the linear space of the equivalence classes of all measurable processes on $(\Omega, \mathfrak{F}, P)$ satisfying (1) with respect to this equivalence relation. The element
$0 \in \mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ is the equivalence class of all measurable processes $X$ on $(\Omega, \mathfrak{F}, P)$ satisfying the condition

(6) $\int_{[0,t] \times \Omega} |X(s, \omega)|^p \, (m_L \times P)(d(s, \omega)) = 0$ for every $t \in \mathbb{R}_+$.
A measurable process $X$ on $(\Omega, \mathfrak{F}, P)$ satisfies condition (6) if and only if there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that for every $\omega \in \Lambda^c$ we have

(7) $\int_{[0,t]} |X(s, \omega)|^p \, m_L(ds) = 0$ for every $t \in \mathbb{R}_+$.

This follows by the same argument as in Observation 11.16. Note that condition (7) is equivalent to the condition that there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that for every $\omega \in \Lambda^c$ we have

(8) $X(\cdot, \omega) = 0$ a.e. on $(\mathbb{R}_+, \mathfrak{B}_{\mathbb{R}_+}, m_L)$.
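The Tonelli step (3) above reduces the product-measure integral to an iterated one. The discrete analogue below checks the corresponding identity on a finite grid; the grid sizes, cell weights, and process values are all hypothetical choices made for illustration.

```python
# Discrete analogue of (3) in Observation 13.34: for a nonnegative array
# X[i][j] over a time grid (weight dt per cell, playing the role of mL) and
# sample points (weight p_w each, playing the role of P), the single sum
# against the product weights equals the iterated sum, mirroring
#   int |X|^p d(mL x P) = E[ int |X|^p dmL ].
import random

random.seed(1)
n_time, n_omega = 50, 40
dt, p_w = 0.1, 1.0 / n_omega       # mL weight per time cell, uniform P
X = [[abs(random.gauss(0, 1)) for _ in range(n_omega)] for _ in range(n_time)]

p = 2
product_sum = sum(X[i][j] ** p * dt * p_w
                  for i in range(n_time) for j in range(n_omega))
iterated_sum = sum(p_w * sum(X[i][j] ** p * dt for i in range(n_time))
                   for j in range(n_omega))

print(abs(product_sum - iterated_sum) < 1e-9)
```

In the discrete setting the two sums agree exactly (up to floating-point error); Tonelli's theorem is what licenses the same interchange for the genuine product measure $m_L \times P$.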
Definition 13.35. Let $p \in [1, \infty)$. In the linear space $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ of the equivalence classes of measurable processes $X = \{X_t : t \in \mathbb{R}_+\}$ on a probability space $(\Omega, \mathfrak{F}, P)$ satisfying the condition

(1) $\int_{[0,t] \times \Omega} |X(s, \omega)|^p \, (m_L \times P)(d(s, \omega)) < \infty$ for every $t \in \mathbb{R}_+$,

we define

(2) $\|X\|_t^{m_L \times P} = \left[ \int_{[0,t] \times \Omega} |X|^p \, d(m_L \times P) \right]^{1/p} = E\left[ \int_{[0,t]} |X(s)|^p \, m_L(ds) \right]^{1/p}$

for every $t \in \mathbb{R}_+$, and define

(3) $\|X\|_{p,\infty}^{m_L \times P} = \sum_{m \in \mathbb{N}} 2^{-m} \{ \|X\|_m^{m_L \times P} \wedge 1 \}$.
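A minimal numerical sketch of (2) and (3) on a process discretized over a finite grid may help fix the definitions. The helper names, grid data, and the truncation of the series in (3) are hypothetical choices, not the book's notation.

```python
# Discretized versions of Definition 13.35: the truncated seminorms (2) and
# the quasinorm (3) built from them, with the series in (3) truncated.

def seminorm(X, t_index, dt, p_weights, p=2):
    """Discrete ||X||_t: L^p norm of X over the first t_index time cells."""
    total = 0.0
    for i in range(t_index):
        for j, pw in enumerate(p_weights):
            total += abs(X[i][j]) ** p * dt * pw
    return total ** (1.0 / p)

def quasinorm(X, dt, p_weights, p=2, m_max=8):
    """Discrete ||X||_{p,oo} = sum_m 2^{-m} min(||X||_m, 1), truncated."""
    out = 0.0
    for m in range(1, m_max + 1):
        t_index = min(int(m / dt), len(X))
        out += 2.0 ** (-m) * min(seminorm(X, t_index, dt, p_weights, p), 1.0)
    return out

X = [[0.5, -0.5], [1.0, -1.0]]    # two time cells, two sample points
q = quasinorm(X, dt=1.0, p_weights=[0.5, 0.5])
print(q)                          # bounded by sum_m 2^{-m} = 1
```

The truncation by $\wedge 1$ and the weights $2^{-m}$ are exactly what make (3) finite for every $X$ satisfying (1) while still controlling every finite horizon, which is the point of Remark 13.36, 2).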
Remark 13.36. The functions $\|\cdot\|_t^{m_L \times P}$ for $t \in \mathbb{R}_+$ and $\|\cdot\|_{p,\infty}^{m_L \times P}$ on $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ defined above have the following properties.
1) $\|\cdot\|_t^{m_L \times P}$ is a seminorm on $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ for every $t \in \mathbb{R}_+$.
2) For $X, X^{(n)} \in \mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$, $n \in \mathbb{N}$, we have $\lim_{n \to \infty} \|X^{(n)} - X\|_{p,\infty}^{m_L \times P} = 0$ if and only if $\lim_{n \to \infty} \|X^{(n)} - X\|_m^{m_L \times P} = 0$ for every $m \in \mathbb{N}$.
3) $\|\cdot\|_{p,\infty}^{m_L \times P}$ is a quasinorm on $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$.

Proof. These statements are proved in the same way as Remark 11.4 for the seminorm $\|\cdot\|_t$ and the quasinorm $\|\cdot\|_\infty$ on the space $\mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. ■

Let the Banach space $L_p([0,m] \times \Omega, \sigma(\mathfrak{B}_{[0,m]} \times \mathfrak{F}), m_L \times P)$ for $m \in \mathbb{N}$ be abbreviated as $L_p([0,m] \times \Omega)$. The function $\|\cdot\|_m^{m_L \times P}$ is only a seminorm on the space $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$, but it is a norm on $L_p([0,m] \times \Omega)$, and $L_p([0,m] \times \Omega)$ is complete with respect to the metric associated with this norm. We use this fact to show that the space $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ is complete with respect to the metric associated with the quasinorm $\|\cdot\|_{p,\infty}^{m_L \times P}$.
Theorem 13.37. Let $p \in [1, \infty)$. The space $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ is a complete metric space with respect to the metric associated with the quasinorm $\|\cdot\|_{p,\infty}^{m_L \times P}$ on $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$.

Proof. Let $\{X^{(n)} : n \in \mathbb{N}\}$ be a Cauchy sequence in $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ with respect to the metric associated with the quasinorm $\|\cdot\|_{p,\infty}^{m_L \times P}$. Let $m \in \mathbb{N}$ be fixed and let $X_m^{(n)}$ be the restriction of $X^{(n)}$ to $[0,m] \times \Omega$ for every $n \in \mathbb{N}$. For every $\varepsilon > 0$ there exists $N \in \mathbb{N}$ such that $\|X^{(n)} - X^{(l)}\|_{p,\infty}^{m_L \times P} < 2^{-m} \varepsilon$ for $n, l \geq N$, and thus

$2^{-m} \{ \|X_m^{(n)} - X_m^{(l)}\|_m^{m_L \times P} \wedge 1 \} < 2^{-m} \varepsilon$ for $n, l \geq N$.

We may assume without loss of generality that $\varepsilon < 1$. Then we have

$\|X_m^{(n)} - X_m^{(l)}\|_m^{m_L \times P} < \varepsilon$ for $n, l \geq N$.

Thus $\{X_m^{(n)} : n \in \mathbb{N}\}$ is a Cauchy sequence in the Banach space $L_p([0,m] \times \Omega)$ and therefore there exists $Y^{(m)} \in L_p([0,m] \times \Omega)$ such that $\lim_{n \to \infty} \|X_m^{(n)} - Y^{(m)}\|_m^{m_L \times P} = 0$. Let us take an arbitrary real valued representative function of $Y^{(m)}$ and fix it for $m \in \mathbb{N}$. Now for $m = 1$, $X_1^{(n)}$ converges to $Y^{(1)}$ in the $L_p$-norm on $L_p([0,1] \times \Omega)$ and therefore there exists a subsequence $\{n_{1,k}\}$ of $\{n\}$ such that $X_1^{(n_{1,k})}$ converges to $Y^{(1)}$ on $[0,1] \times \Omega - \Lambda_1$, where $\Lambda_1$ is a null set in $[0,1] \times \Omega$. Then since $X_2^{(n_{1,k})}$ converges to $Y^{(2)}$ in the $L_p$-norm on $L_p([0,2] \times \Omega)$, there exists a subsequence $\{n_{2,k}\}$ of $\{n_{1,k}\}$ such that $X_2^{(n_{2,k})}$ converges to $Y^{(2)}$ on $[0,2] \times \Omega - \Lambda_2$, where $\Lambda_2$ is a null set in $[0,2] \times \Omega$ containing $\Lambda_1$. Proceeding inductively we obtain a diagonal subsequence $\{n_{k,k}\}$ of $\{n\}$ such that for every $m \in \mathbb{N}$ the sequence
$X_m^{(n_{k,k})}$ converges both in the $L_p$-norm of $L_p([0,m] \times \Omega)$ and pointwise on $[0,m] \times \Omega - \Lambda_m$, where $\Lambda_m$ is a null set in $[0,m] \times \Omega$ containing $\Lambda_1, \ldots, \Lambda_{m-1}$. Thus $Y^{(m)} = Y^{(m-1)}$ on $[0, m-1] \times \Omega - \Lambda_{m-1}$ for $m \geq 2$. Let $\Lambda = \cup_{m \in \mathbb{N}} \Lambda_m$. Let us define a function $Y$ on $\mathbb{R}_+ \times \Omega$ by setting

$Y(t, \omega) = \begin{cases} Y^{(m)}(t, \omega) & \text{for } (t, \omega) \in [0,m] \times \Omega - \Lambda_m \text{ and } m \in \mathbb{N} \\ 0 & \text{for } (t, \omega) \in \Lambda. \end{cases}$

The function $Y$ is well defined and is a measurable process on $(\Omega, \mathfrak{F}, P)$ satisfying the condition $\|Y\|_m^{m_L \times P} = \|Y^{(m)}\|_m^{m_L \times P} < \infty$, so that $Y \in \mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$. Also

$\lim_{k \to \infty} \|X^{(n_{k,k})} - Y\|_m^{m_L \times P} = \lim_{k \to \infty} \|X_m^{(n_{k,k})} - Y^{(m)}\|_m^{m_L \times P} = 0$ for $m \in \mathbb{N}$.

This implies $\lim_{k \to \infty} \|X^{(n_{k,k})} - Y\|_{p,\infty}^{m_L \times P} = 0$ according to Remark 11.18. ■
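The proof above passes from $L_p$-convergence to pointwise convergence off a null set only along a subsequence. A classical illustration of why a subsequence is genuinely needed (a standard example, not from the text) is the "typewriter" sequence of indicators: it converges to 0 in $L^1([0,1])$ but at no point of $[0,1]$, while the fast subsequence $n = 2^k$ converges to 0 at every $x > 0$.

```python
# Typewriter sequence: f_n = indicator of [(n - 2^k)/2^k, (n - 2^k + 1)/2^k]
# for 2^k <= n < 2^{k+1}.  ||f_n||_{L^1} = 2^{-k} -> 0, yet at any fixed x the
# full sequence hits the value 1 in every dyadic block; along n = 2^k the
# intervals shrink to [0, 2^{-k}], so the subsequence converges at every x > 0.
def typewriter(n, x):
    k = n.bit_length() - 1          # block index: 2^k <= n < 2^{k+1}
    j = n - 2 ** k                  # position of the interval inside block k
    return 1.0 if j / 2 ** k <= x <= (j + 1) / 2 ** k else 0.0

x = 0.3
full_values = [typewriter(n, x) for n in range(1, 65)]
fast_values = [typewriter(2 ** k, x) for k in range(1, 7)]   # n = 2, 4, ..., 64

print(max(full_values), fast_values)
```

The full sequence keeps returning to 1 at $x$ (no pointwise limit), while the subsequence is eventually 0 there; the diagonal argument in the proof extracts exactly such a fast subsequence on each $[0,m] \times \Omega$.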
In Proposition 2.13 we showed that every left- or right-continuous adapted process on a filtered space is a progressively measurable process. More generally we showed in Proposition 2.23 that every well-measurable process, and in particular every predictable process, on a filtered space is a progressively measurable process.

Theorem 13.38. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a progressively measurable process on an augmented filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. If there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that

(1) $\int_{[0,t]} |X(s, \omega)| \, m_L(ds) < \infty$ for every $t \in \mathbb{R}_+$ when $\omega \in \Lambda^c$,

then there exists a predictable process $Y$ on the filtered space such that

(2) $X(\cdot, \omega) = Y(\cdot, \omega)$ a.e. on $(\mathbb{R}_+, \mathfrak{B}_{\mathbb{R}_+}, m_L)$ when $\omega \in \Lambda^c$.

In particular, if $X$ is in $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ for some $p \in [1, \infty)$, then (1) is satisfied and $X$ and $Y$ are equivalent in this space.

Proof. Since $X$ is a progressively measurable process, it is an adapted measurable process according to Observation 2.12. Since the filtered space is augmented, $\Lambda \in \mathfrak{F}_t$ for every $t \in \mathbb{R}_+$. For every $n \in \mathbb{N}$, define a real valued function $X^{(n)}$ on $\mathbb{R}_+ \times \Omega$ by setting
/w particular if X is in LP|00(R+ x Q, m L x P)for some p € [0, oo), then (I) is satisfied and X and Y are equivalent in this space. Proof. Since X is a progressively measurable process, it is an adapted measurable process according to Observation 2.12. Since the filtered space is augmented, A e J , for every t € R+. For every n e N, define a real valued function XM on K+ x Q. by setting ri\
YMH , A _ j
n
U
/
,
X(s,u)mL(ds)
for (i, w) 6 K+ x Ac for(<,oj)GR + x A.
Let us show first that $X^{(n)}$ is an adapted process. Since $X$ is progressively measurable, for every $t \in \mathbb{R}_+$ the restriction of $X$ to $[0,t] \times \Omega$ is $\sigma(\mathfrak{B}_{[0,t]} \times \mathfrak{F}_t)$-measurable, so that $\int_{[(t - \frac{1}{n}) \vee 0, \, t]} X(s, \cdot) \, m_L(ds)$ is $\mathfrak{F}_t$-measurable; since $\Lambda \in \mathfrak{F}_t$, it follows that $X^{(n)}(t, \cdot)$ is $\mathfrak{F}_t$-measurable. Thus $X^{(n)}$ is an adapted process. Next, for $\omega \in \Lambda^c$, $t_0 \in \mathbb{R}_+$ and $t \in [0, t_0 + 1]$ we have

$|X^{(n)}(t, \omega) - X^{(n)}(t_0, \omega)| \leq n \int_{[0, t_0 + 1]} \big| 1_{[(t - \frac{1}{n}) \vee 0, \, t]}(s) - 1_{[(t_0 - \frac{1}{n}) \vee 0, \, t_0]}(s) \big| \, |X(s, \omega)| \, m_L(ds).$

Since $\lim_{t \to t_0} 1_{[(t - \frac{1}{n}) \vee 0, \, t]}(s) = 1_{[(t_0 - \frac{1}{n}) \vee 0, \, t_0]}(s)$ for $m_L$-a.e. $s \in [0, t_0 + 1]$ and since $X(\cdot, \omega)$ is integrable on $[0, t_0 + 1]$, we have $\lim_{t \to t_0} |X^{(n)}(t, \omega) - X^{(n)}(t_0, \omega)| = 0$ by the Dominated Convergence Theorem. This proves the continuity of $X^{(n)}(\cdot, \omega)$ at $t_0$. As a continuous adapted process, $X^{(n)}$ is a predictable process; let $\mathfrak{S}$ denote the predictable $\sigma$-algebra. Define

(4) $\bar{Y}(t, \omega) = \liminf_{n \to \infty} X^{(n)}(t, \omega)$ for $(t, \omega) \in \mathbb{R}_+ \times \Omega$.

Since $X^{(n)}$ is an $\mathfrak{S}/\mathfrak{B}_{\mathbb{R}}$-measurable mapping of $\mathbb{R}_+ \times \Omega$ into $\mathbb{R}$ for every $n \in \mathbb{N}$, $\bar{Y}$ is an $\mathfrak{S}/\mathfrak{B}_{\overline{\mathbb{R}}}$-measurable mapping of $\mathbb{R}_+ \times \Omega$ into $\overline{\mathbb{R}}$. Then $\{\bar{Y} = \pm\infty\} \in \mathfrak{S}$, so that if we define a real valued function $Y$ on $\mathbb{R}_+ \times \Omega$ by setting

(5) $Y(t, \omega) = \begin{cases} \bar{Y}(t, \omega) & \text{when } \bar{Y}(t, \omega) \in \mathbb{R} \\ 0 & \text{when } \bar{Y}(t, \omega) = \pm\infty, \end{cases}$

then $Y$ is an $\mathfrak{S}/\mathfrak{B}_{\mathbb{R}}$-measurable mapping of $\mathbb{R}_+ \times \Omega$ into $\mathbb{R}$, that is, $Y$ is a predictable process on the filtered space. By (1), for every $\omega \in \Lambda^c$ the sample function $X(\cdot, \omega)$ is Lebesgue integrable on every finite interval in $\mathbb{R}_+$. Then by Lebesgue's Theorem the indefinite integral of $X(\cdot, \omega)$ is
differentiable with derivative equal to $X(t, \omega)$ for a.e. $t \in \mathbb{R}_+$. Thus, recalling the definitions of $X^{(n)}$ and $Y$ by (3) and (4)-(5) respectively, we have for every $\omega \in \Lambda^c$

(6) $X(\cdot, \omega) = \bar{Y}(\cdot, \omega) = Y(\cdot, \omega)$ a.e. on $(\mathbb{R}_+, \mathfrak{B}_{\mathbb{R}_+}, m_L)$,

where the second equality is from the fact that $X$ is real valued. This proves (2). If $X$ is in $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$, then $X$ satisfies (1) and thus by (6), for every $\omega \in \Lambda^c$ we have

$\int_{[0,t]} |Y(s, \omega)|^p \, m_L(ds) = \int_{[0,t]} |X(s, \omega)|^p \, m_L(ds) < \infty$ for every $t \in \mathbb{R}_+$,

and then

$E\left[ \int_{[0,t]} |Y(s, \omega)|^p \, m_L(ds) \right] = E\left[ \int_{[0,t]} |X(s, \omega)|^p \, m_L(ds) \right]$ for every $t \in \mathbb{R}_+$.

This shows that $Y$ is in $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$. From (6) we also have

$E\left[ \int_{[0,t]} |Y(s, \omega) - X(s, \omega)|^p \, m_L(ds) \right] = 0$ for every $t \in \mathbb{R}_+$.
This shows according to (5) of Observation 13.34 that $X$ and $Y$ are equivalent processes in $\mathbf{L}_{p,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$. ■

For quadratic variation processes of stochastic integrals with respect to Brownian motions we have the following.

Proposition 13.39. Let $B = \{B_t : t \in \mathbb{R}_+\}$ be an $\{\mathfrak{F}_t\}$-adapted $d$-dimensional Brownian motion on a standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ with $B_0 = 0$ and let $B^{(i)}$, $i = 1, \ldots, d$, be its components. Let $X^{(i)}$ be a predictable process in $\mathbf{L}_{2,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ for $i = 1, \ldots, d$. Then for $i, j = 1, \ldots, d$, there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that on $\Lambda^c$ we have for every $t \in \mathbb{R}_+$

(1) $[X^{(i)} \bullet B^{(i)}, X^{(j)} \bullet B^{(j)}]_t = \delta_{ij} \int_{[0,t]} X^{(i)}(s) X^{(j)}(s) \, m_L(ds)$,
and thus for any $s, t \in \mathbb{R}_+$, $s < t$, we have

(2) $E[\{(X^{(i)} \bullet B^{(i)})_t - (X^{(i)} \bullet B^{(i)})_s\}\{(X^{(j)} \bullet B^{(j)})_t - (X^{(j)} \bullet B^{(j)})_s\} \mid \mathfrak{F}_s]$
$= E[(X^{(i)} \bullet B^{(i)})_t (X^{(j)} \bullet B^{(j)})_t - (X^{(i)} \bullet B^{(i)})_s (X^{(j)} \bullet B^{(j)})_s \mid \mathfrak{F}_s]$
$= \delta_{ij} E\left[ \int_{(s,t]} X^{(i)}(u) X^{(j)}(u) \, m_L(du) \,\Big|\, \mathfrak{F}_s \right]$ a.e. on $(\Omega, \mathfrak{F}_s, P)$,

and in particular

(3) $E[\{(X^{(i)} \bullet B^{(i)})_t - (X^{(i)} \bullet B^{(i)})_s\}\{(X^{(j)} \bullet B^{(j)})_t - (X^{(j)} \bullet B^{(j)})_s\}] = \delta_{ij} E\left[ \int_{(s,t]} X^{(i)}(u) X^{(j)}(u) \, m_L(du) \right]$.
Proof. According to Proposition 13.33, $[B^{(i)}, B^{(j)}]_t = \delta_{ij} t$ for $t \in \mathbb{R}_+$. Thus the Proposition is a particular case of Theorem 12.16. ■

Corollary 13.40. Let $B$ and $B^{(i)}$, $i = 1, \ldots, d$, be as in Proposition 13.39. Let $X^{(i)}$ and $Y^{(i)}$ be predictable processes in $\mathbf{L}_{2,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ for $i = 1, \ldots, d$. Then for $X = \sum_{i=1}^d X^{(i)} \bullet B^{(i)}$ and $Y = \sum_{i=1}^d Y^{(i)} \bullet B^{(i)}$ in $\mathbf{M}_2^c(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that on $\Lambda^c$ we have for every $t \in \mathbb{R}_+$

(1) $[X, Y]_t = \int_{[0,t]} \sum_{i=1}^d X^{(i)}(s) Y^{(i)}(s) \, m_L(ds)$,
and thus for any $s, t \in \mathbb{R}_+$, $s < t$, we have

(2) $E[\{X_t - X_s\}\{Y_t - Y_s\} \mid \mathfrak{F}_s] = E[X_t Y_t - X_s Y_s \mid \mathfrak{F}_s] = E\left[ \int_{(s,t]} \sum_{i=1}^d X^{(i)}(u) Y^{(i)}(u) \, m_L(du) \,\Big|\, \mathfrak{F}_s \right]$ a.e. on $(\Omega, \mathfrak{F}_s, P)$.
Proof. Since $X^{(i)} \bullet B^{(i)}$ is in $\mathbf{M}_2^c(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, so is $X = \sum_{i=1}^d X^{(i)} \bullet B^{(i)}$. Similarly for $Y$. Thus $[X, Y]$ is defined and according to Proposition 11.26, we have

$[X, Y]_t = \Big[ \sum_{i=1}^d X^{(i)} \bullet B^{(i)}, \sum_{j=1}^d Y^{(j)} \bullet B^{(j)} \Big]_t = \sum_{i,j=1}^d [X^{(i)} \bullet B^{(i)}, Y^{(j)} \bullet B^{(j)}]_t$
$= \sum_{i,j=1}^d \delta_{ij} \int_{[0,t]} X^{(i)}(s) Y^{(j)}(s) \, m_L(ds) = \int_{[0,t]} \sum_{i=1}^d X^{(i)}(s) Y^{(i)}(s) \, m_L(ds),$

where the third equality is by Proposition 13.39. ■
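Taking $i = j$ and $s = 0$ in (3) of Proposition 13.39 gives the isometry $E[((X \bullet B)_t)^2] = E[\int_0^t X(u)^2 \, m_L(du)]$, which is easy to probe numerically. The sketch below uses a deterministic integrand $X(u) = u$, evaluated at left endpoints (the discrete stand-in for predictability); the step size and path count are arbitrary choices, and the discrete sum only approximates the stochastic integral.

```python
# Monte Carlo check of the isometry E[((X.B)_t)^2] = E[int_0^t X(u)^2 du]
# for X(u) = u and t = 1, via the left-endpoint Riemann sum of X against
# Brownian increments.
import math
import random

random.seed(2)
T, n_steps, n_paths = 1.0, 200, 20_000
dt = T / n_steps

second_moment = 0.0
for _ in range(n_paths):
    integral = 0.0
    for i in range(n_steps):
        u = i * dt                          # left endpoint: predictable evaluation
        dB = random.gauss(0.0, math.sqrt(dt))
        integral += u * dB                  # increment of (X.B) with X(u) = u
    second_moment += integral ** 2
second_moment /= n_paths

exact = T ** 3 / 3                          # int_0^1 u^2 du = 1/3
print(second_moment, exact)
```

The agreement up to discretization and Monte Carlo error reflects that $[X \bullet B]_t = \int_0^t X^2(s) \, m_L(ds)$ when the integrator is Brownian motion, so the $\mu_{[B]} = m_L$ property of §13 is what drives the formula.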
§14 Extensions of the Stochastic Integral
[I] Local $L_2$-Martingales and Their Quadratic Variation Processes

For $M \in \mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and a predictable process $X$ on the filtered space satisfying the integrability condition $X \in \mathbf{L}_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$, that is, $E[\int_{[0,t]} X^2(s) \, d[M](s)] < \infty$
for every $t \in \mathbb{R}_+$, the stochastic integral $X \bullet M$ of $X$ with respect to $M$ was defined in Definition 12.9 as an element in $\mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. We show below that if $X$ satisfies the weaker integrability condition that $\int_{[0,t]} X^2(s) \, d[M](s) < \infty$ for every $t \in \mathbb{R}_+$ for almost every $\omega \in \Omega$, then there exists a sequence $\{X^{(n)} : n \in \mathbb{N}\}$ in $\mathbf{L}_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$ such that $X^{(n)}$ converges pointwise on $\mathbb{R}_+ \times \Lambda^c$ to $X$ and $X^{(n)} \bullet M$ converges pointwise on $\mathbb{R}_+ \times \Lambda^c$, where $\Lambda$ is a null set in $(\Omega, \mathfrak{F}, P)$. We then extend the definition of the stochastic integral with respect to $M$ by defining the limit of the pointwise convergence of the sequence $\{X^{(n)} \bullet M : n \in \mathbb{N}\}$ in $\mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ as the stochastic integral of $X$ with respect to $M$. This leads to the definition of local martingales.

Definition 14.1. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be an adapted process on a filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and let $\{T_n : n \in \mathbb{N}\}$ be an increasing sequence of stopping times on the filtered space such that $T_n \uparrow \infty$ a.e. on $(\Omega, \mathfrak{F}, P)$. We say that $X$ is a local martingale with respect to the sequence $\{T_n : n \in \mathbb{N}\}$ if $X^{T_n \wedge}$ is a martingale on the filtered space for every $n \in \mathbb{N}$. We say that $X$ is a local $L_2$-martingale with respect to $\{T_n : n \in \mathbb{N}\}$ if $X^{T_n \wedge}$ is an $L_2$-martingale on the filtered space for every $n \in \mathbb{N}$.

Thus if $X$ is a local martingale with respect to $\{T_n : n \in \mathbb{N}\}$, then since $T_n(\omega) \uparrow \infty$ for every $\omega \in \Lambda^c$, where $\Lambda$ is a null set in $(\Omega, \mathfrak{F}, P)$, we have $\lim_{n \to \infty} X^{T_n \wedge}(t, \omega) = X(t, \omega)$ for
$(t, \omega) \in \mathbb{R}_+ \times \Lambda^c$. Thus $X$ is the limit of pointwise convergence on $\mathbb{R}_+ \times \Lambda^c$ of a sequence of martingales. Note that a martingale is always a local martingale with respect to the sequence of stopping times $\{T_n : n \in \mathbb{N}\}$ where $T_n = \infty$ for every $n \in \mathbb{N}$. Note also that the fact that $X$ is a local martingale with respect to an increasing sequence of stopping times $\{T_n : n \in \mathbb{N}\}$ such that $T_n \uparrow \infty$ a.e. on $(\Omega, \mathfrak{F}, P)$ does not imply that it is a local martingale with respect to every other increasing sequence of stopping times tending to $\infty$ almost surely.

Observation 14.2. Let $X = \{X_t : t \in \mathbb{R}_+\}$ be a local martingale on an augmented filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. If there exists a nonnegative integrable random variable $Y$ on $(\Omega, \mathfrak{F}, P)$ such that $|X(t, \omega)| \leq Y(\omega)$ for $(t, \omega) \in \mathbb{R}_+ \times \Lambda^c$, where $\Lambda$ is a null set in $(\Omega, \mathfrak{F}, P)$, then $X$ is in fact a martingale. If the random variable $Y^2$ is integrable, then $X$ is an $L_2$-martingale.

Proof. Since $X$ is a local martingale, there exists an increasing sequence of stopping times $\{T_n : n \in \mathbb{N}\}$ on the filtered space such that $T_n \uparrow \infty$ on $\Lambda_0^c$, where $\Lambda_0$ is a null set in $(\Omega, \mathfrak{F}, P)$, and $X^{T_n \wedge}$ is a martingale on the filtered space for every $n \in \mathbb{N}$. Then for any pair
$s, t \in \mathbb{R}_+$ such that $s < t$, we have according to Theorem 8.12

(1) $E[X_{T_n \wedge t} \mid \mathfrak{F}_s] = X_{T_n \wedge s}$ a.e. on $(\Omega, \mathfrak{F}_s, P)$.

Since $T_n \uparrow \infty$ on $\Lambda_0^c$, we have $\lim_{n \to \infty} X_{T_n \wedge t} = X_t$ on $\Lambda_0^c$. Also $|X_{T_n \wedge t}| \leq Y$ on $\Lambda^c$. Thus by the Conditional Dominated Convergence Theorem, we have

(2) $\lim_{n \to \infty} E[X_{T_n \wedge t} \mid \mathfrak{F}_s] = E[X_t \mid \mathfrak{F}_s]$ a.e. on $(\Omega, \mathfrak{F}_s, P)$.

We have also $\lim_{n \to \infty} X_{T_n \wedge s} = X_s$ on $\Lambda_0^c$. Since the filtered space is augmented, the null set $\Lambda_0$ is in $\mathfrak{F}_s$. Thus

(3) $\lim_{n \to \infty} X_{T_n \wedge s} = X_s$ a.e. on $(\Omega, \mathfrak{F}_s, P)$.

Letting $n \to \infty$ in (1) and using (2) and (3), we have $E[X_t \mid \mathfrak{F}_s] = X_s$ a.e. on $(\Omega, \mathfrak{F}_s, P)$. This shows that $X$ is a martingale. If $Y^2$ is integrable, then since $X_t^2 \leq Y^2$ on $\Lambda^c$ we have $E(X_t^2) \leq E(Y^2) < \infty$ for every $t \in \mathbb{R}_+$, so that $X$ is an $L_2$-process. ■

Note that for a local martingale $X$, the right-continuity or continuity of $X(\cdot, \omega)$ for an arbitrary $\omega \in \Omega$ is equivalent to the right-continuity or continuity of $X^{T_n \wedge}(\cdot, \omega)$ for all $n \in \mathbb{N}$.

Lemma 14.3. Let $\{S_n : n \in \mathbb{N}\}$ and $\{T_n : n \in \mathbb{N}\}$ be two increasing sequences of stopping times on a right-continuous filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $S_n \uparrow \infty$ and $T_n \uparrow \infty$ a.e. on $(\Omega, \mathfrak{F}, P)$. If $X$ is a local martingale with respect to the sequence $\{S_n : n \in \mathbb{N}\}$, then it is a local martingale with respect to the sequence of stopping times $\{S_n \wedge T_n : n \in \mathbb{N}\}$. If $X$ is a right-continuous local $L_2$-martingale with respect to $\{S_n : n \in \mathbb{N}\}$, then it is a local $L_2$-martingale with respect to $\{S_n \wedge T_n : n \in \mathbb{N}\}$.

Proof. Clearly $S_n \wedge T_n \uparrow \infty$ a.e. on $(\Omega, \mathfrak{F}, P)$. If $X$ is a local martingale with respect to $\{S_n : n \in \mathbb{N}\}$, then for every $n \in \mathbb{N}$, $X^{S_n \wedge} = \{X_{S_n \wedge t} : t \in \mathbb{R}_+\}$ is a martingale. Thus $X^{S_n \wedge T_n \wedge} = \{X_{S_n \wedge T_n \wedge t} : t \in \mathbb{R}_+\}$ is a martingale by Theorem 8.12. This shows that $X$ is a local martingale with respect to $\{S_n \wedge T_n : n \in \mathbb{N}\}$. Suppose $X$ is a local $L_2$-martingale with respect to $\{S_n : n \in \mathbb{N}\}$. Then for every $n \in \mathbb{N}$, $X^{S_n \wedge} = \{X_{S_n \wedge t} : t \in \mathbb{R}_+\}$ is an $L_2$-martingale, so that $(X^{S_n \wedge})^2 = \{X_{S_n \wedge t}^2 : t \in \mathbb{R}_+\}$ is a submartingale on the filtered space. Since $S_n \wedge t$ and $S_n \wedge T_n \wedge t$ are stopping times on the filtered space and $S_n \wedge t \geq S_n \wedge T_n \wedge t$ for $t \in \mathbb{R}_+$, Theorem 8.10 (Optional Sampling with Bounded Stopping Times) implies that $E[X_{S_n \wedge t}^2 \mid \mathfrak{F}_{S_n \wedge T_n \wedge t}] \geq X_{S_n \wedge T_n \wedge t}^2$
a.e. on $(\Omega, \mathfrak{F}_{S_n \wedge T_n \wedge t}, P)$, and consequently we have $E[X_{S_n \wedge T_n \wedge t}^2] \leq E[X_{S_n \wedge t}^2] < \infty$. This shows that $X^{S_n \wedge T_n \wedge}$ is an $L_2$-process and is thus an $L_2$-martingale. Since this holds for every $n \in \mathbb{N}$, $X$ is a local $L_2$-martingale with respect to $\{S_n \wedge T_n : n \in \mathbb{N}\}$. ■

Definition 14.4. Let $\mathbf{M}_2^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, abbreviated as $\mathbf{M}_2^{loc}$, be the collection of equivalence classes of all right-continuous local $L_2$-martingales $X = \{X_t : t \in \mathbb{R}_+\}$ with $X_0 = 0$ almost surely on a standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Let $\mathbf{M}_2^{c,loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, abbreviated as $\mathbf{M}_2^{c,loc}$, be the subcollection consisting of almost surely continuous members of $\mathbf{M}_2^{loc}$.

Observation 14.5. The fact that $\mathbf{M}_2^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ is a linear space, that is, $aX + bY \in \mathbf{M}_2^{loc}$ for $X, Y \in \mathbf{M}_2^{loc}$ and $a, b \in \mathbb{R}$, can be shown as follows. If $X \in \mathbf{M}_2^{loc}$ then clearly $aX \in \mathbf{M}_2^{loc}$ for $a \in \mathbb{R}$. Thus it suffices to show that if $X, Y \in \mathbf{M}_2^{loc}$ then $X + Y \in \mathbf{M}_2^{loc}$. Suppose $X, Y \in \mathbf{M}_2^{loc}$. Then there exist two increasing sequences of stopping times $\{S_n : n \in \mathbb{N}\}$ and $\{T_n : n \in \mathbb{N}\}$ such that $S_n \uparrow \infty$ and $T_n \uparrow \infty$ a.e. on $(\Omega, \mathfrak{F}, P)$, and $X^{S_n \wedge}$ and $Y^{T_n \wedge}$ are $L_2$-martingales for every $n \in \mathbb{N}$. By Lemma 14.3, $X^{S_n \wedge T_n \wedge}$ and $Y^{S_n \wedge T_n \wedge}$ are $L_2$-martingales for every $n \in \mathbb{N}$. But $(X + Y)^{S_n \wedge T_n \wedge} = X^{S_n \wedge T_n \wedge} + Y^{S_n \wedge T_n \wedge}$. Thus $(X + Y)^{S_n \wedge T_n \wedge}$ is an $L_2$-martingale for every $n \in \mathbb{N}$. This shows that $X + Y$ is a local $L_2$-martingale. Since $X$ and $Y$ are right-continuous and null at 0, so is $X + Y$. Therefore $X + Y \in \mathbf{M}_2^{loc}$.

Definition 14.6. Let $\mathbf{A}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ be the collection of equivalence classes of all stochastic processes $A$ on a standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ satisfying conditions 1° and 4°, but not necessarily condition 2° that $A$ be an $L_1$-process, of Definition 10.1. Let $\mathbf{V}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ be the linear space of equivalence classes of all stochastic processes $V$ on a standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ satisfying conditions 1° and 4°, but not necessarily condition 5° that $|V|$ be an $L_1$-process, of Definition 11.10. We write $\mathbf{A}^{c,loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and $\mathbf{V}^{c,loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ for the subcollections of $\mathbf{A}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and $\mathbf{V}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ respectively consisting of almost surely continuous members.

In what follows we write $A \in \mathbf{A}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ for both an equivalence class and an arbitrary representative of the equivalence class. Similarly for $V \in \mathbf{V}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$.

Lemma 14.7. If $A', A'' \in \mathbf{A}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, then $A' - A'' \in \mathbf{V}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. If $V \in \mathbf{V}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, then $V = A' - A''$ where $A', A'' \in \mathbf{A}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$.
Proof. The proof parallels that of Theorem 11.12. ■

Theorem 14.8. For every $X$ in $\mathbf{M}_2^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, there exists an equivalence class $A$ in $\mathbf{A}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $X^2 - A$ is a right-continuous null at 0 local martingale. If $Y$ is also in $\mathbf{M}_2^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, then there exists an equivalence class $V$ in $\mathbf{V}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $XY - V$ is a right-continuous null at 0 local martingale.

Proof. Suppose $X$ and $Y$ are in $\mathbf{M}_2^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and are local $L_2$-martingales with respect to two increasing sequences of stopping times $\{S_n : n \in \mathbb{N}\}$ and $\{T_n : n \in \mathbb{N}\}$ such that $S_n \uparrow \infty$ and $T_n \uparrow \infty$ a.e. on $(\Omega, \mathfrak{F}, P)$ respectively. Let $R_n = S_n \wedge T_n$ for $n \in \mathbb{N}$. Then both $X$ and $Y$ are local $L_2$-martingales with respect to the sequence of stopping times $\{R_n : n \in \mathbb{N}\}$ according to Lemma 14.3. Let $\Lambda_1$ be a null set in $(\Omega, \mathfrak{F}, P)$ such that $R_n(\omega) \uparrow \infty$ for $\omega \in \Lambda_1^c$. Now for every $n \in \mathbb{N}$, since $X^{R_n \wedge}$ and $Y^{R_n \wedge}$ are in $\mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, according to Proposition 11.22 there exists a unique natural quadratic variation process $V^{(n)}$ of $X^{R_n \wedge}$ and $Y^{R_n \wedge}$ in $\mathbf{V}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $X^{R_n \wedge} Y^{R_n \wedge} - V^{(n)}$ is a right-continuous null at 0 martingale. Similarly there exists a unique natural $V^{(n+1)} \in \mathbf{V}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $X^{R_{n+1} \wedge} Y^{R_{n+1} \wedge} - V^{(n+1)}$ is a right-continuous null at 0 martingale. Then $(X^{R_{n+1} \wedge} Y^{R_{n+1} \wedge} - V^{(n+1)})^{R_n \wedge}$ is a right-continuous null at 0 martingale by Theorem 8.12. But $R_n \leq R_{n+1}$ implies that
$(X^{R_{n+1} \wedge} Y^{R_{n+1} \wedge} - V^{(n+1)})^{R_n \wedge} = X^{R_n \wedge} Y^{R_n \wedge} - (V^{(n+1)})^{R_n \wedge}.$

Thus by the uniqueness of the natural quadratic variation process of $X^{R_n \wedge}$ and $Y^{R_n \wedge}$ we have

$V^{(n)} = (V^{(n+1)})^{R_n \wedge}.$

In what follows we write $V^{(n)}$ for an arbitrarily fixed representative of the equivalence class. Then by the last equality there exists a null set $\Lambda_2$ in $(\Omega, \mathfrak{F}, P)$ such that

(1) $V^{(n)}(\cdot, \omega) = (V^{(n+1)})^{R_n \wedge}(\cdot, \omega)$ for all $n \in \mathbb{N}$ when $\omega \in \Lambda_2^c$.

Iterating (1) and using the fact that $R_n \wedge R_{n+1} = R_n$, we have

(2) $V^{(n)}(\cdot, \omega) = (V^{(n+p)})^{R_n \wedge}(\cdot, \omega)$ for $n, p \in \mathbb{N}$ when $\omega \in \Lambda_2^c$.

Since $V^{(n)} \in \mathbf{V}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ for every $n \in \mathbb{N}$, there exists a null set $\Lambda_3$ in $(\Omega, \mathfrak{F}, P)$ such that if $\omega \in \Lambda_3^c$ then $V^{(n)}(\cdot, \omega)$ is a function of bounded variation on every finite interval of $\mathbb{R}_+$ for all $n \in \mathbb{N}$. Let $\Lambda = \cup_{i=1}^3 \Lambda_i$. Let $t \in \mathbb{R}_+$ be fixed. Since $R_n(\omega) \uparrow \infty$ for $\omega \in \Lambda^c$, there
exists $N_\omega \in \mathbb{N}$ such that $s \leq R_n(\omega)$ for $s \in [0,t]$ when $n \geq N_\omega$. Then by (2) we have for $\omega \in \Lambda^c$

(3) $V^{(n)}(s, \omega) = V^{(n+p)}(s, \omega)$ for $s \in [0,t]$, $p \in \mathbb{N}$, $n \geq N_\omega$.

Thus $\lim_{n \to \infty} V^{(n)}(t, \omega)$ exists in $\mathbb{R}$ for every $t \in \mathbb{R}_+$ when $\omega \in \Lambda^c$. Let us define a real valued function $V$ on $\mathbb{R}_+ \times \Omega$ by setting

(4) $V(t, \omega) = \begin{cases} \lim_{n \to \infty} V^{(n)}(t, \omega) & \text{for } (t, \omega) \in \mathbb{R}_+ \times \Lambda^c \\ 0 & \text{for } (t, \omega) \in \mathbb{R}_+ \times \Lambda. \end{cases}$
Since $V^{(n)}$ is an adapted process, $V_t$ is $\mathfrak{F}_t$-measurable on $\Lambda^c$ for every $t \in \mathbb{R}_+$. Since the filtration is augmented, the null set $\Lambda$ is in $\mathfrak{F}_t$ for every $t \in \mathbb{R}_+$. Then $V_t$ defined by (4) is $\mathfrak{F}_t$-measurable. Thus $V$ is an adapted process on the filtered space. Let us show that for $\omega \in \Lambda^c$, $V(\cdot, \omega)$ is a function of bounded variation on every finite interval in $\mathbb{R}_+$. Let $t \in \mathbb{R}_+$ be fixed. Let $N \in \mathbb{N}$ be so large that $R_N(\omega) \geq t$. Then by (3) and (4) we have $V(s, \omega) = V^{(N)}(s, \omega)$ for $s \in [0,t]$. Since $V^{(N)}(\cdot, \omega)$ is a function of bounded variation on every finite interval of $\mathbb{R}_+$ and in particular on $[0,t]$, $V(\cdot, \omega)$ is a function of bounded variation on $[0,t]$. We have thus shown that if $\omega \in \Lambda^c$ then $V(\cdot, \omega)$ is a function of bounded variation on every finite interval in $\mathbb{R}_+$. This proves that the equivalence class represented by $V$ is in $\mathbf{V}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$.

Finally, to show that for every $n \in \mathbb{N}$, $(XY - V)^{R_n \wedge}$ is a right-continuous null at 0 martingale, note that for $(t, \omega) \in \mathbb{R}_+ \times \Lambda^c$ we have

$(XY - V)^{R_n \wedge}(t, \omega) = (XY)^{R_n \wedge}(t, \omega) - V^{R_n \wedge}(t, \omega)$
$= (XY)^{R_n \wedge}(t, \omega) - (\lim_{k \to \infty} V^{(k)})^{R_n \wedge}(t, \omega)$ by (4)
$= (XY)^{R_n \wedge}(t, \omega) - \lim_{k \to \infty} (V^{(k)})^{R_n \wedge}(t, \omega) = (XY)^{R_n \wedge}(t, \omega) - V^{(n)}(t, \omega)$ by (2).

Since $(XY)^{R_n \wedge} - V^{(n)}$ is a right-continuous null at 0 martingale and since $\Lambda \in \mathfrak{F}_t$ for every $t \in \mathbb{R}_+$, the last equality implies that $(XY - V)^{R_n \wedge}$ is a right-continuous null at 0 martingale. This shows that $XY - V$ is a right-continuous null at 0 local martingale. The existence of $A$ in $\mathbf{A}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $X^2 - A$ is a right-continuous null at 0 local martingale is proved likewise. ■

Corollary 14.9. Let $X, Y \in \mathbf{M}_2^{c,loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Then there exist an equivalence class $A \in \mathbf{A}^{c,loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and an equivalence class $V \in \mathbf{V}^{c,loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $X^2 - A$ and $XY - V$ are almost surely continuous null at 0 local martingales.
Proof. Since $X, Y \in \mathbf{M}_2^{c,loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, the processes $X^{R_n \wedge}$ and $Y^{R_n \wedge}$ in the proof of Theorem 14.8 are in $\mathbf{M}_2^c(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Then by Proposition 11.27, there exists $V^{(n)}$ in $\mathbf{V}^c(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $X^{R_n \wedge} Y^{R_n \wedge} - V^{(n)}$ is an almost surely continuous null at 0 martingale. The almost sure continuity of $V^{(n)}$ implies that the process $V$ defined by (4) in the proof of Theorem 14.8 is almost surely continuous. The existence of $A$ in $\mathbf{A}^{c,loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ is proved likewise. ■

Definition 14.10. Let $X, Y \in \mathbf{M}_2^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. An equivalence class $A$ in $\mathbf{A}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $X^2 - A$ is a right-continuous null at 0 local martingale is called a quadratic variation process of $X$ and we write $[X]$ for it. An equivalence class $V$ in $\mathbf{V}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $XY - V$ is a right-continuous null at 0 local martingale is called a quadratic variation process of $X$ and $Y$ and we write $[X, Y]$ for it.

In what follows we write $[X]$ for both an equivalence class of processes and an arbitrary representative of an equivalence class. Similarly for $[X, Y]$. The existence of $[X]$ in $\mathbf{A}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and $[X, Y]$ in $\mathbf{V}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ for $X$ and $Y$ in $\mathbf{M}_2^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ was proved in Theorem 14.8.

Theorem 14.11. Let $X, Y \in \mathbf{M}_2^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and let $\{S_n : n \in \mathbb{N}\}$ and $\{T_n : n \in \mathbb{N}\}$ be increasing sequences of stopping times such that $S_n \uparrow \infty$, $T_n \uparrow \infty$ almost surely as $n \to \infty$, and $X^{S_n \wedge}, Y^{T_n \wedge} \in \mathbf{M}_2$ for $n \in \mathbb{N}$. Let $R_n = S_n \wedge T_n$ for $n \in \mathbb{N}$. Then there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that for every $\omega \in \Lambda^c$ we have

(1) $\lim_{n \to \infty} \mu_{[X^{R_n \wedge}, Y^{R_n \wedge}]}(E, \omega) = \mu_{[X,Y]}(E, \omega)$ for $E \in \mathfrak{B}_{[0,t]}$, $t \in \mathbb{R}_+$,
(2) $\uparrow \lim_{n \to \infty} \mu_{[X^{R_n \wedge}]}(E, \omega) = \mu_{[X]}(E, \omega)$ for $E \in \mathfrak{B}_{[0,t]}$, $t \in \mathbb{R}_+$,
(3) $\uparrow \lim_{n \to \infty} \mu_{[Y^{R_n \wedge}]}(E, \omega) = \mu_{[Y]}(E, \omega)$ for $E \in \mathfrak{B}_{[0,t]}$, $t \in \mathbb{R}_+$.

In particular, for $t \in \mathbb{R}_+$ and $\omega \in \Lambda^c$, we have

(4) $\lim_{n \to \infty} [X^{R_n \wedge}, Y^{R_n \wedge}](t, \omega) = [X, Y](t, \omega)$,
(5) $\uparrow \lim_{n \to \infty} [X^{R_n \wedge}](t, \omega) = [X](t, \omega)$,
(6) $\uparrow \lim_{n \to \infty} [Y^{R_n \wedge}](t, \omega) = [Y](t, \omega)$.

Proof. As we showed in the proof of Theorem 14.8, there exists a null set $\Lambda_1$ in $(\Omega, \mathfrak{F}, P)$ such that on $\mathbb{R}_+ \times \Lambda_1^c$ we have

(7) $\lim_{n \to \infty} [X^{R_n \wedge}, Y^{R_n \wedge}] = [X, Y]$, $\lim_{n \to \infty} [X^{R_n \wedge}] = [X]$, and $\lim_{n \to \infty} [Y^{R_n \wedge}] = [Y]$.

Since $X^{R_{n+1} \wedge} \in \mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and $(X^{R_{n+1} \wedge})^{R_n \wedge} = X^{R_n \wedge}$, Lemma 12.23 implies that $\mu_{[X^{R_n \wedge}]}(\cdot, \omega) \leq \mu_{[X^{R_{n+1} \wedge}]}(\cdot, \omega)$ for $\omega \in \Lambda_2^c$, where $\Lambda_2$ is a null set in $(\Omega, \mathfrak{F}, P)$. Thus
CHAPTER 3. STOCHASTIC
INTEGRALS
^(xfi„«](-,u) f as n —♦ oo and in particular for any a, 6 G R+ such that a < b we have T lim|iyfW(a,6],w)=T 71—►OO
lim{[X*" A ](&,^)-LY*" A ](a,a>)} Tl—►OO
= [X](6, u) - [X](a, w) = / i m ( ( o , &],«). Thus ^[x«»A]((a,6],w) < /j[Jf]((a,&],u>). From this and by the same argument for Y it follows that (8)
^i[XRn^(E,iv)
and/j,[YRn^](E,ij)<
filX)(E,Lj)
for .E G QSJL, •
c
Let A = A! U A2. Let u;A and t G R+ be fixed and let 0 be the collection of all members E G 23(o,(] such that lim /itjfj«n*)yju,A}(13,w) = / i ^ y ^ E , w). Since iJ n (w) f ooasrc —> oo, for every s £ [0, t] we have by (7) lim [XR"n,YR"A](s,uJ)
= lim [X,Y]R"A(S,CJ)
=
[X,Y](s,u).
Thus for a, 6 G (0, t] such that a < 6 we have lim mx^Ay*.*j((a, Tl—»-00
L
J
k],w)«
lim{[Xfl"A,rR"A](6,u.)-[XR"A,rR"A](a,a;)} Tl—t-OO
= [X, F](&, w) - [X, K](o, w) = Mtx,n((«, &].«>)■ Let 3 be the semialgebra of subsets of (0, t] consisting of subinterval of (0, t] of the type (a, 6] and 0. We have just shown that 3 C 85. Since 3 is also a 7r-class and since <J(3) = 25(0,<], if we show that 0 is a d-class then we have 93(o,i) C © by Theorem 1.7. To show that 0 is a
l e N with -Eo = 0- Then {Fk : fc G N} is a disjoint sequence in 0 . Let us define step functions f„,n G N, / and g on (0, oo) by setting / n (x) == ^/Li[xflnA,yH„A](Ft,a;) a; gG (fc (fc --- l,fc]and fc €g N Jn(X) XKnA,yJInAI(i')t,W) tfor or X l,fc]andfc N < /(x) = H[X,Yi(Fk,Lj) forx g (fc -- l,fc]andfc g N (9) i g(x) = {/j.[X](Fk,u)ij,[Y]{Fk,Lo)}1'1 forx g (fc -- l,fc]andfc g N. /(aO = Mtjr,yj(iPfc,w) forx g (fc - l,fc] and fc g N Since i ^ g (x) 0 = {/^(i^uOwnCPfc.w)} 1 / 2 for a; G (fc - l,fc] and fc £ N. Since Fk G 0 we have lim fi^HnA^snA^Ft.w) = fi[x,Y](Fk,ui) and this implies that lim fn(x) = /(x) for x e (0, oo). Now | / J [ J f * r , ' ■ YRn* jtPt,w)| < / i | [ J f K » 'sy*»Al|(-f*>w) n—*oo ,/2 ,/2 < {fJ. < {/^[Jf < {A*m(J*. {^ m (E t ,u/) w )) )^ / i [[y y f«„A i . »](**,«)}' *n*](-Ejfc, ) (F fc ,u ] (F fc ,a;)} 1/2 < y u [ y ) (.F, : ,uj)} «)}' /2 , [X«„A w)wri(JPfci
I
by Theorem 12.7 and (8). Thus $|f_n(x)| \leq g(x)$ for $x \in (0, \infty)$. To show that $g$ is Lebesgue integrable on $(0, \infty)$, note that by the Cauchy-Schwarz inequality for sums

$\int_{(0,\infty)} g(x) \, m_L(dx) = \sum_{k \in \mathbb{N}} \{\mu_{[X]}(F_k, \omega) \, \mu_{[Y]}(F_k, \omega)\}^{1/2} \leq \Big\{ \sum_{k \in \mathbb{N}} \mu_{[X]}(F_k, \omega) \Big\}^{1/2} \Big\{ \sum_{k \in \mathbb{N}} \mu_{[Y]}(F_k, \omega) \Big\}^{1/2}$
$= \{\mu_{[X]}(E, \omega)\}^{1/2} \{\mu_{[Y]}(E, \omega)\}^{1/2} \leq \{\mu_{[X]}((0,t], \omega)\}^{1/2} \{\mu_{[Y]}((0,t], \omega)\}^{1/2} < \infty.$

Thus by the Dominated Convergence Theorem we have

$\lim_{n \to \infty} \int_{(0,\infty)} f_n(x) \, m_L(dx) = \int_{(0,\infty)} f(x) \, m_L(dx),$

in other words,

$\lim_{n \to \infty} \sum_{k \in \mathbb{N}} \mu_{[X^{R_n \wedge}, Y^{R_n \wedge}]}(F_k, \omega) = \sum_{k \in \mathbb{N}} \mu_{[X,Y]}(F_k, \omega),$

that is,

$\lim_{n \to \infty} \mu_{[X^{R_n \wedge}, Y^{R_n \wedge}]}(E, \omega) = \mu_{[X,Y]}(E, \omega).$

This shows that $E \in \mathfrak{D}$ and completes the proof that $\mathfrak{D}$ is a $d$-class. Therefore $\mathfrak{B}_{(0,t]} \subset \mathfrak{D}$ and thus (1) holds for every $E \in \mathfrak{B}_{(0,t]}$. By the equalities $\mu_{[X^{R_n \wedge}, Y^{R_n \wedge}]}(\{0\}, \omega) = 0$ and $\mu_{[X,Y]}(\{0\}, \omega) = 0$, (1) then holds for every $E \in \mathfrak{B}_{[0,t]}$. The equalities (2) and (3) are proved in the same way as (1). Finally (4), (5), and (6) follow from (1), (2), and (3) respectively by letting $E = [0, t]$. ■

Proposition 14.12. Let $X \in \mathbf{M}_2^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. If $E([X]_a) < \infty$ for some $a \in \mathbb{R}_+$, then $X$ is an $L_2$-martingale on $[0, a]$. Thus if $X$ is in $\mathbf{M}_2^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, then $X$ is in $\mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ if and only if $[X]$ is in $\mathbf{A}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$.

Proof. Since $X \in \mathbf{M}_2^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, there exists an increasing sequence of stopping times $\{S_n : n \in \mathbb{N}\}$ such that $S_n \uparrow \infty$ almost surely as $n \to \infty$ and $X^{S_n \wedge}$ is in $\mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ for $n \in \mathbb{N}$. Now $[X] \in \mathbf{A}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and $X^2 - [X]$ is a right-continuous null at 0 local martingale. Thus there exists an increasing sequence of stopping times $\{T_n : n \in \mathbb{N}\}$ such that $T_n \uparrow \infty$ almost surely and $(X^2 - [X])^{T_n \wedge}$ is a right-continuous null at 0 martingale. Let $R_n = S_n \wedge T_n$ for $n \in \mathbb{N}$. Then there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that $R_n(\omega) \uparrow \infty$ for $\omega \in \Lambda^c$, $X^{R_n \wedge} \in \mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, and $(X^2 - [X])^{R_n \wedge}$ is a right-continuous null at 0 martingale. Thus for any $s, t \in [0, a]$ such that $s < t$ we have $E[X_{R_n \wedge t}^2 - [X]_{R_n \wedge t} \mid \mathfrak{F}_s] = X_{R_n \wedge s}^2 - [X]_{R_n \wedge s}$ a.e. on $(\Omega, \mathfrak{F}_s, P)$. In particular for
$s = 0$: integrating the last equality and recalling that $X_0 = 0$ and $[X]_0 = 0$, we have $E(X_{R_n \wedge t}^2 - [X]_{R_n \wedge t}) = 0$, so that

$E(X_{R_n \wedge t}^2) = E([X]_{R_n \wedge t}) \leq E([X]_t) \leq E([X]_a) < \infty.$

Thus $\sup_{n \in \mathbb{N}} \|X_{R_n \wedge t}\|_2 < \infty$ and therefore $\{X_{R_n \wedge t} : n \in \mathbb{N}\}$ is uniformly integrable by Theorem 4.12. Now since $X^{R_n \wedge}$ is a martingale we have

(1) $E(X_{R_n \wedge t} \mid \mathfrak{F}_s) = X_{R_n \wedge s}$ a.e. on $(\Omega, \mathfrak{F}_s, P)$.

Now $R_n(\omega) \uparrow \infty$ for $\omega \in \Lambda^c$ implies that $\lim_{n \to \infty} X_{R_n \wedge t} = X_t$ on $\Lambda^c$. This convergence and the uniform integrability of $\{X_{R_n \wedge t} : n \in \mathbb{N}\}$ imply that $\lim_{n \to \infty} \|X_{R_n \wedge t} - X_t\|_1 = 0$ by Theorem 4.16. Thus

(2) $\lim_{n \to \infty} E(X_{R_n \wedge t} \mid \mathfrak{F}_s) = E(X_t \mid \mathfrak{F}_s)$ a.e. on $(\Omega, \mathfrak{F}_s, P)$.

Since the filtration is augmented we have $\Lambda \in \mathfrak{F}_s$. Thus letting $n \to \infty$ in (1), we have $E(X_t \mid \mathfrak{F}_s) = X_s$ a.e. on $(\Omega, \mathfrak{F}_s, P)$ by (2). This shows that $X$ is a martingale on $[0, a]$. Finally, since $\lim_{n \to \infty} X_{R_n \wedge t}^2 = X_t^2$ on $\Lambda^c$, we have by Fatou's Lemma

$E(X_t^2) \leq \liminf_{n \to \infty} E(X_{R_n \wedge t}^2) \leq E([X]_a) < \infty.$

This shows that $X$ is an $L_2$-martingale on $[0, a]$. ■
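The localization device used throughout this section can be made concrete in a toy discrete setting. The sketch below stops a symmetric random walk (a stand-in for a martingale; all parameters are illustrative choices) at $T_n = \inf\{t : |X_t| \geq n\}$: each stopped path is bounded by its level $n$, hence square integrable, and the stopping times are nondecreasing in $n$, mirroring Definition 14.1 and the role of $\{R_n\}$ in Proposition 14.12.

```python
# Localization sketch: stopping at T_n = inf{t : |X_t| >= n} bounds the
# stopped path by n, and T_5 <= T_10 <= T_20 path by path.
import random

random.seed(3)
levels = (5, 10, 20)
n_steps = 5000

x = 0
sups = {n: 0 for n in levels}   # running sup of |X^{T_n}| for each level
times = {}                      # T_n, recorded once reached

for t in range(1, n_steps + 1):
    x += random.choice((-1, 1))
    for n in levels:
        if n not in times:               # path not yet stopped at level n
            sups[n] = max(sups[n], abs(x))
            if abs(x) >= n:
                times[n] = t             # T_n reached: freeze X^{T_n} here

hit = [times[n] for n in levels if n in times]
print(all(sups[n] <= n for n in levels), hit == sorted(hit))
```

Because the walk moves by unit steps, the stopped path never exceeds its level, which is the discrete analogue of $|X^{T_n \wedge}|$ being bounded, hence an $L_2$-process, before passing to the limit $T_n \uparrow \infty$.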
[II] Extensions of the Stochastic Integral to Local Martingales

Observation 14.13. For $A \in \mathbf{A}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, consider the collection of all measurable processes $X = \{X_t : t \in \mathbb{R}_+\}$ on $(\Omega, \mathfrak{F}, P)$ such that for every $\omega \in \Lambda^c$, where $\Lambda$ is a null set in $(\Omega, \mathfrak{F}, P)$ depending on $X$, we have

$\int_{[0,t]} X^2(s, \omega) \, \mu_A(ds, \omega) < \infty$ for every $t \in \mathbb{R}_+$.

For such processes $X$ and $Y$ the condition that there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ depending on $X$ and $Y$ such that for $\omega \in \Lambda^c$ we have

$\int_{[0,t]} |X(s, \omega) - Y(s, \omega)|^2 \, \mu_A(ds, \omega) = 0$ for every $t \in \mathbb{R}_+$

is an equivalence relation.
Definition 14.14. For $A \in \mathbf{A}^{loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, let $\mathbf{L}_{2,\infty}^{loc}(\mathbb{R}_+ \times \Omega, \mu_A, P)$ be the linear space of equivalence classes of measurable processes $X = \{X_t : t \in \mathbb{R}_+\}$ on $(\Omega, \mathfrak{F}, P)$ such that for $\omega \in \Lambda^c$, where $\Lambda$ is a null set in $(\Omega, \mathfrak{F}, P)$, we have

$\int_{[0,t]} X^2(s, \omega) \, \mu_A(ds, \omega) < \infty$ for every $t \in \mathbb{R}_+$.

Note that $\mathbf{L}_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_A, P) \subset \mathbf{L}_{2,\infty}^{loc}(\mathbb{R}_+ \times \Omega, \mu_A, P)$ for $A \in \mathbf{A}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ by (3) of Observation 11.16.

Lemma 14.15. Let $M \in \mathbf{M}_2^{c,loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, so that $[M] \in \mathbf{A}^{c,loc}(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, and let $X = \{X_t : t \in \mathbb{R}_+\}$ be a predictable process on the filtered space such that $X \in \mathbf{L}_{2,\infty}^{loc}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$. Let $\{a_n : n \in \mathbb{N}\}$ be a strictly increasing sequence in $\mathbb{R}_+$ such that $a_n \uparrow \infty$ as $n \to \infty$. For every $n \in \mathbb{N}$, let $T_n$ be a nonnegative valued function on $\Omega$ defined by

$T_n(\omega) = \inf\Big\{ t \in \mathbb{R}_+ : \int_{[0,t]} X^2(s, \omega) \, \mu_{[M]}(ds, \omega) > a_n \Big\} \wedge a_n$ for $\omega \in \Omega$.

Let $\{S_n : n \in \mathbb{N}\}$ be an increasing sequence of stopping times such that $S_n \uparrow \infty$ a.e. on $(\Omega, \mathfrak{F}, P)$ as $n \to \infty$ and $M^{S_n \wedge} \in \mathbf{M}_2^c(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, so that $[M^{S_n \wedge}] \in \mathbf{A}^c(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Then

1) $\{T_n : n \in \mathbb{N}\}$ is an increasing sequence of finite stopping times on the filtered space and $T_n \uparrow \infty$ a.e. on $(\Omega, \mathfrak{F}, P)$ as $n \to \infty$;

2) $X^{[T_n]}$ is a predictable process and $X^{[T_n]} \in \mathbf{L}_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M^{S_k \wedge}]}, P)$ for every $k \in \mathbb{N}$.

Proof. In Proposition 3.5 we showed that the first passage time of an open set in $\mathbb{R}$ by an adapted right-continuous process on a right-continuous filtered space is a stopping time. Now if $Z = \{Z_t : t \in \mathbb{R}_+\}$ is an adapted continuous process on a right-continuous filtered space, then the function on the sample space defined by $\inf\{t \in \mathbb{R}_+ : Z_t > a_n\}$ is the first passage time of the open set $(a_n, \infty)$ and is therefore a stopping time for every $n \in \mathbb{N}$. The continuity of $Z$ also implies that at the stopping time defined above, $Z$ is equal to $a_n$. Let us construct an adapted continuous process $Z$ on our standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ which is equivalent to the process $\{\int_{[0,t]} X^2(s) \, \mu_{[M]}(ds) : t \in \mathbb{R}_+\}$. Since $X \in \mathbf{L}_{2,\infty}^{loc}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$, there exists a null set $\Lambda_1$ in $(\Omega, \mathfrak{F}, P)$ such that for every $\omega \in \Lambda_1^c$ we have

(1) $\int_{[0,t]} X^2(s, \omega) \, \mu_{[M]}(ds, \omega) < \infty$ for every $t \in \mathbb{R}_+$.
CHAPTER 3. STOCHASTIC INTEGRALS
Since [M] is an almost surely continuous process, there exists a null set Λ₂ in (Ω, 𝔉, P) such that [M](·, ω) is a continuous monotone increasing function on ℝ₊ for ω ∈ Λ₂^c. Let Λ = Λ₁ ∪ Λ₂ and define a real valued function Z on ℝ₊ × Ω by setting
(2) Z(t, ω) = ∫_{[0,t]} X²(s, ω) μ_{[M]}(ds, ω) for (t, ω) ∈ ℝ₊ × Λ^c, and Z(t, ω) = 0 for (t, ω) ∈ ℝ₊ × Λ.
Now since X is a predictable process, it is a measurable process and then so is X². Thus {∫_{[0,t]} X²(s) μ_{[M]}(ds) : t ∈ ℝ₊} is an adapted process by Theorem 10.11. Since the filtered space (Ω, 𝔉, {𝔉_t}, P) is augmented, Λ ∈ 𝔉_t for every t ∈ ℝ₊. Thus Z(t, ·) defined by (2) is 𝔉_t-measurable. This shows that Z is an adapted process. For ω ∈ Λ^c, [M](·, ω) is a continuous function on ℝ₊ and therefore μ_{[M]}({s}, ω) = 0 for every s ∈ ℝ₊. This implies that ∫_{[0,t]} X²(s, ω) μ_{[M]}(ds, ω) is a continuous function of t ∈ ℝ₊. Thus we have shown that Z is an adapted continuous process. For every n ∈ ℕ, let R_n be a nonnegative valued function on Ω defined by
(3) R_n(ω) = inf{t ∈ ℝ₊ : Z(t, ω) > a_n} ∧ a_n  for ω ∈ Ω.
By the continuity of Z(·, ω) for every ω ∈ Ω, inf{t ∈ ℝ₊ : Z(t, ω) > a_n} is the first passage time of the open set (a_n, ∞) in ℝ and hence a stopping time by Proposition 3.5. Thus R_n is a stopping time. Clearly R_n(ω) ↑ as n → ∞ for every ω ∈ Ω. Let us show that actually R_n(ω) ↑ ∞ for every ω ∈ Λ^c. Let ω ∈ Λ^c. If Z(·, ω) is bounded on ℝ₊, say Z(t, ω) ≤ a_m for some m ∈ ℕ for all t ∈ ℝ₊, then inf{t ∈ ℝ₊ : Z(t, ω) > a_n} = inf ∅ = ∞ for n > m, and thus R_n(ω) = ∞ ∧ a_n = a_n for n > m, and then R_n(ω) ↑ ∞ as n → ∞. On the other hand, if Z(·, ω) is not bounded on ℝ₊, then lim_{t→∞} Z(t, ω) = ∞. If R_n(ω) ↑ ∞ does not hold, then there exists m ∈ ℕ such that R_n(ω) ≤ a_m for all n ∈ ℕ. Then for n > m we have inf{t ∈ ℝ₊ : Z(t, ω) > a_n} ∧ a_n ≤ a_m and therefore inf{t ∈ ℝ₊ : Z(t, ω) > a_n} ≤ a_m for n > m. Thus lim_{t↑a_m} Z(t, ω) = ∞. But the continuity of Z(·, ω) implies lim_{t↑a_m} Z(t, ω) = Z(a_m, ω), and then Z(a_m, ω) = ∞, contradicting Z(a_m, ω) = ∫_{[0,a_m]} X²(s, ω) μ_{[M]}(ds, ω) < ∞. This shows that R_n(ω) ↑ ∞ in this case also. Now by (2), Z(·, ω) = ∫_{[0,·]} X²(s, ω) μ_{[M]}(ds, ω) for ω ∈ Λ^c. Thus for R_n defined by (3) we have R_n(ω) = T_n(ω) for ω ∈ Λ^c. Since R_n is a stopping time on a standard filtered space, this implies that T_n is a stopping time on the filtered space by Lemma 3.17. Clearly T_n(ω) ↑ for every ω ∈ Ω. Since R_n(ω) ↑ ∞ for ω ∈ Λ^c, we have T_n(ω) ↑ ∞ for ω ∈ Λ^c.
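The truncation time T_n can be made concrete on a single discretized sample path. The following sketch (our illustration only; the grid, the simulated data, and the function name are not from the text) computes inf{t : ∫_{[0,t]} X²(s, ω) μ_{[M]}(ds, ω) > a_n} ∧ a_n by accumulating the pathwise Lebesgue–Stieltjes integral against the increments of [M]:

```python
import numpy as np

def truncation_time(ts, X_path, qv_path, a_n):
    """Approximate T_n = inf{t : integral_[0,t] X^2 d[M] > a_n} ^ a_n
    along one discretized sample path (a numerical sketch only).

    ts      : increasing grid of time points, ts[0] = 0
    X_path  : values of the integrand X along the grid
    qv_path : values of the increasing process [M] along the grid
    a_n     : truncation level
    """
    # pathwise Lebesgue-Stieltjes integral of X^2 against [M]:
    # each subinterval carries mu_[M]-mass equal to the increment of [M]
    increments = np.diff(qv_path)
    Z = np.concatenate([[0.0], np.cumsum(X_path[:-1] ** 2 * increments)])
    crossed = np.nonzero(Z > a_n)[0]
    first_passage = ts[crossed[0]] if crossed.size > 0 else np.inf
    return min(first_passage, a_n)
```

For instance, with X ≡ 1 and [M]_t = t (as for Brownian motion) the accumulated integral is Z(t) = t, so the sketch returns a_n whenever the simulated horizon exceeds a_n, mirroring the capped first passage in the definition above.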
Consider the truncated process X^{[T_n]}. Since X is a predictable process, X^{[T_n]} is a predictable process by Observation 3.35. To show that X^{[T_n]} ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[M^{S_k∧}]}, P) for any k ∈ ℕ, note first that since X^{[T_n]} is a predictable process, it is a measurable process by Proposition 2.23. Next, by Observation 3.34, for every ω ∈ Ω we have
(4) X^{[T_n]}(s, ω) = X(s, ω) for s ∈ [0, T_n(ω)], and X^{[T_n]}(s, ω) = 0 for s ∈ (T_n(ω), ∞).
According to 2) of Remark 14.11 we have μ_{[M^{S_k∧}]}(·, ω) ≤ μ_{[M]}(·, ω) on (ℝ₊, 𝔅_{ℝ₊}) for ω ∈ Λ_k^c, where Λ_k is a null set in (Ω, 𝔉, P). Thus for every ω ∈ (Λ ∪ Λ_k)^c and t ∈ ℝ₊, we have by (4) and the fact that T_n(ω) = R_n(ω)
(5) ∫_{[0,t]} (X^{[T_n]})²(s, ω) μ_{[M^{S_k∧}]}(ds, ω) ≤ ∫_{[0,t]} (X^{[T_n]})²(s, ω) μ_{[M]}(ds, ω) = ∫_{[0, t∧R_n(ω)]} X²(s, ω) μ_{[M]}(ds, ω) = Z(t ∧ R_n(ω), ω) ≤ a_n,
where the last inequality holds since, by the definition of R_n by (3), if t ≤ R_n(ω) then Z(t ∧ R_n(ω), ω) = Z(t, ω) ≤ a_n, and if t > R_n(ω) then Z(t ∧ R_n(ω), ω) = Z(R_n(ω), ω) ≤ a_n by the continuity of Z(·, ω). Since (5) holds for ω ∈ (Λ ∪ Λ_k)^c, we have
E[∫_{[0,t]} (X^{[T_n]})²(s) μ_{[M^{S_k∧}]}(ds)] ≤ E(a_n) = a_n < ∞
for every t ∈ ℝ₊. This shows that X^{[T_n]} ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[M^{S_k∧}]}, P). ■

Theorem 14.16. Let M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and let X = {X_t : t ∈ ℝ₊} be a predictable process on the filtered space such that X ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P). Let {S_n : n ∈ ℕ} be an increasing sequence of stopping times such that S_n ↑ ∞ a.e. on (Ω, 𝔉, P) as n → ∞ and M^{S_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P) for every n ∈ ℕ. Let {a_n : n ∈ ℕ} be a strictly increasing sequence in ℝ₊ such that a_n ↑ ∞ as n → ∞. Let {T_n : n ∈ ℕ} be an increasing sequence of stopping times defined by
T_n(ω) = inf{t ∈ ℝ₊ : ∫_{[0,t]} X²(s, ω) μ_{[M]}(ds, ω) > a_n} ∧ a_n  for ω ∈ Ω.
Then there exists Y ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) such that Y^{S_n∧T_n∧} = X^{[T_n]} • M^{S_n∧} for every n ∈ ℕ, and therefore there exists a null set Λ in (Ω, 𝔉, P) such that
Y(t, ω) = lim_{n→∞} (X^{[T_n]} • M^{S_n∧})(t, ω)  for (t, ω) ∈ ℝ₊ × Λ^c.
Furthermore the process Y is not only unique in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) but is in fact independent of the choices of the sequences {a_n : n ∈ ℕ} and {S_n : n ∈ ℕ}. In particular,
1) if M ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P) and X ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P), then there exists Y ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) such that Y^{T_n∧} = X^{[T_n]} • M for every n ∈ ℕ, and there exists a null set Λ in (Ω, 𝔉, P) such that Y(t, ω) = lim_{n→∞} (X^{[T_n]} • M)(t, ω) for (t, ω) ∈ ℝ₊ × Λ^c,
and 2) if M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and X ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P), then there exists Y ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) such that Y^{S_n∧} = X • M^{S_n∧} for every n ∈ ℕ, and there exists a null set Λ in (Ω, 𝔉, P) such that Y(t, ω) = lim_{n→∞} (X • M^{S_n∧})(t, ω) for (t, ω) ∈ ℝ₊ × Λ^c.
Proof. Since M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P), it has an almost surely continuous quadratic variation process [M] by Corollary 14.9. By Lemma 14.15, T_n ↑ ∞ a.e. on (Ω, 𝔉, P), the truncated process X^{[T_n]} is a predictable process, and X^{[T_n]} ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P). According to Lemma 12.23, μ_{[M^{S_n∧}]}(·, ω) ≤ μ_{[M]}(·, ω) on (ℝ₊, 𝔅_{ℝ₊}) for a.e. ω ∈ Ω. This implies that X^{[T_n]} ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[M^{S_n∧}]}, P) for every n ∈ ℕ. Thus X^{[T_n]} • M^{S_n∧} is defined and is in M₂^c(Ω, 𝔉, {𝔉_t}, P).
Consider the subset {(·) ≤ S_n ∧ T_n} = {(t, ω) ∈ ℝ₊ × Ω : t ≤ S_n(ω) ∧ T_n(ω)} of ℝ₊ × Ω. Since S_n ∧ T_n ↑ on Ω, {(·) ≤ S_n ∧ T_n} ↑ as n → ∞. Let Λ₀ be a null set in (Ω, 𝔉, P) such that S_n(ω) ↑ ∞ and T_n(ω) ↑ ∞ as n → ∞ for ω ∈ Λ₀^c. If ω ∈ Λ₀^c then for any t ∈ ℝ₊ we have t ≤ S_n(ω) ∧ T_n(ω), so that (t, ω) ∈ {(·) ≤ S_n ∧ T_n}, for sufficiently large n ∈ ℕ. Thus ℝ₊ × Λ₀^c ⊂ ∪_{n∈ℕ} {(·) ≤ S_n ∧ T_n}. For each n ∈ ℕ, define a real valued function Y^{(n)} on {(·) ≤ S_n ∧ T_n} ∩ {ℝ₊ × Λ₀^c} by setting
(1) Y^{(n)}(t, ω) = (X^{[T_n]} • M^{S_n∧})(t, ω)  for (t, ω) ∈ {(·) ≤ S_n ∧ T_n} ∩ {ℝ₊ × Λ₀^c}.
For any pair k, n ∈ ℕ such that k ≤ n, there exists according to Theorem 12.22 and Theorem 12.24 a null set Λ_{k,n} in (Ω, 𝔉, P) such that
(2) ((X^{[T_n]})^{[T_k]} • (M^{S_n∧})^{S_k∧})(·, ω) = (X^{[T_n]} • M^{S_n∧})^{S_k∧T_k∧}(·, ω)  for ω ∈ Λ_{k,n}^c.
Let us show that
(3) Y^{(n)}(t, ω) = Y^{(k)}(t, ω)  for (t, ω) ∈ {(·) ≤ S_k ∧ T_k} ∩ {ℝ₊ × (Λ₀ ∪ Λ_{k,n})^c}.
Now for (t, ω) ∈ {(·) ≤ S_k ∧ T_k} ∩ {ℝ₊ × (Λ₀ ∪ Λ_{k,n})^c}, we have
Y^{(k)}(t, ω) = (X^{[T_k]} • M^{S_k∧})(t, ω)  by (1)
= ((X^{[T_n]})^{[T_k]} • (M^{S_n∧})^{S_k∧})(t, ω)  by Observation 3.36
= (X^{[T_n]} • M^{S_n∧})(S_k(ω) ∧ T_k(ω) ∧ t, ω)  by (2)
= (X^{[T_n]} • M^{S_n∧})(t, ω)  since t ≤ S_k(ω) ∧ T_k(ω)
= Y^{(n)}(t, ω)  by (1).
This proves (3). Now let Λ = Λ₀ ∪ (∪_{k,n∈ℕ, k≤n} Λ_{k,n}). Define a real valued function Y on ℝ₊ × Ω by setting
(4) Y(t, ω) = Y^{(n)}(t, ω) for (t, ω) ∈ {(·) ≤ S_n ∧ T_n} ∩ {ℝ₊ × Λ^c} and n ∈ ℕ, and Y(t, ω) = 0 for (t, ω) ∈ ℝ₊ × Λ.
Note that Y is well-defined on ℝ₊ × Λ^c by (3) and hence it is well-defined on ℝ₊ × Ω. To show that Y is in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P), let us show first that Y is adapted, that is, for every s ∈ ℝ₊, Y(s, ·) is 𝔉_s-measurable. Now since S_n ∧ T_n is a stopping time on a right-continuous filtered space, we have {s ≤ S_n ∧ T_n} = {S_n ∧ T_n < s}^c ∈ 𝔉_s by Theorem 3.4. Also, since the filtered space is augmented, the null set Λ is in 𝔉_s. Thus {s ≤ S_n ∧ T_n} ∩ Λ^c is in 𝔉_s. Since on this 𝔉_s-measurable set Y(s, ·) = Y^{(n)}(s, ·) = (X^{[T_n]} • M^{S_n∧})(s, ·), which is 𝔉_s-measurable, Y(s, ·) is 𝔉_s-measurable on this set. Since this holds for every n ∈ ℕ and since ∪_{n∈ℕ} {s ≤ S_n ∧ T_n} ∩ Λ^c = Λ^c, Y(s, ·) is 𝔉_s-measurable on Λ^c. Then since Y(s, ·) = 0 on Λ, Y(s, ·) is 𝔉_s-measurable on Ω. This shows that Y is adapted.
Next we show that, with our increasing sequence of stopping times {S_n ∧ T_n : n ∈ ℕ} such that S_n ∧ T_n ↑ ∞ on Λ₀^c, we have Y^{S_n∧T_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P), by showing
(5) Y^{S_n∧T_n∧} = X^{[T_n]} • M^{S_n∧}.
Now an arbitrary (t, ω) ∈ ℝ₊ × Λ^c is in {(·) ≤ S_m ∧ T_m} ∩ {ℝ₊ × Λ^c} for some m ∈ ℕ, and we may assume m ≥ n without loss of generality. Then by (4) and (1) we have Y(t, ω) = (X^{[T_m]} • M^{S_m∧})(t, ω) and therefore
Y^{S_n∧T_n∧}(t, ω) = (X^{[T_m]} • M^{S_m∧})(S_n(ω) ∧ T_n(ω) ∧ t, ω)
= ((X^{[T_m]})^{[T_n]} • (M^{S_m∧})^{S_n∧})(t, ω)  by (2)
= (X^{[T_n]} • M^{S_n∧})(t, ω)  since m ≥ n.
This shows that Y^{S_n∧T_n∧}(·, ω) = (X^{[T_n]} • M^{S_n∧})(·, ω) for ω ∈ Λ^c. Thus (5) holds. Therefore Y ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P). For ω ∈ Λ^c we have S_n(ω) ∧ T_n(ω) ↑ ∞ as n → ∞, so that for any t ∈ ℝ₊ we have S_n(ω) ∧ T_n(ω) ∧ t = t for sufficiently large n ∈ ℕ and thus Y(t, ω) = lim_{n→∞} Y^{S_n∧T_n∧}(t, ω).
To show that Y is independent of the choices of the sequences {a_n : n ∈ ℕ} and {S_n : n ∈ ℕ}, let {a'_n : n ∈ ℕ} be another strictly increasing sequence in ℝ₊ such that a'_n ↑ ∞, and let T'_n be defined by
T'_n(ω) = inf{t ∈ ℝ₊ : ∫_{[0,t]} X²(s, ω) μ_{[M]}(ds, ω) > a'_n} ∧ a'_n  for ω ∈ Ω.
Let {S'_n : n ∈ ℕ} be another increasing sequence of stopping times such that S'_n ↑ ∞ a.e. on (Ω, 𝔉, P) as n → ∞ and M^{S'_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P) for every n ∈ ℕ. Then by what we showed above there exists Z ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) such that
Z^{S'_n∧T'_n∧} = X^{[T'_n]} • M^{S'_n∧}  for every n ∈ ℕ.
Let us show that there exists a null set Λ in (Ω, 𝔉, P) such that Y(·, ω) = Z(·, ω) for ω ∈ Λ^c. Let Λ₀ be a null set in (Ω, 𝔉, P) such that S_n ∧ T_n ∧ S'_n ∧ T'_n ↑ ∞ as n → ∞ on Λ₀^c. Now
Y^{S_n∧T_n∧S'_n∧T'_n∧} = (X^{[T_n]} • M^{S_n∧})^{S'_n∧T'_n∧} = (X^{[T_n]})^{[T'_n]} • (M^{S_n∧})^{S'_n∧} = X^{[T_n∧T'_n]} • M^{S_n∧S'_n∧},
where the second equality is by Theorem 12.22 and Theorem 12.24. Similarly
Z^{S_n∧T_n∧S'_n∧T'_n∧} = X^{[T_n∧T'_n]} • M^{S_n∧S'_n∧}.
Thus Y^{S_n∧T_n∧S'_n∧T'_n∧}(·, ω) = Z^{S_n∧T_n∧S'_n∧T'_n∧}(·, ω) for ω ∈ Λ_n^c, where Λ_n is a null set in (Ω, 𝔉, P). Let Λ = ∪_{n∈ℕ} Λ_n. Then Y^{S_n∧T_n∧S'_n∧T'_n∧}(·, ω) = Z^{S_n∧T_n∧S'_n∧T'_n∧}(·, ω) for every n ∈ ℕ when ω ∈ Λ^c. Since S_n ∧ T_n ∧ S'_n ∧ T'_n ↑ ∞ as n → ∞ on Λ^c, by letting n → ∞ in the last equality we have Y(·, ω) = Z(·, ω) for ω ∈ Λ^c. This proves the independence of Y from the sequences {a_n : n ∈ ℕ} and {S_n : n ∈ ℕ}.
Finally, when M ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P) and X ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P), we let S_n(ω) = ∞ for all ω ∈ Ω and n ∈ ℕ. Then M^{S_n∧} = M and Y^{S_n∧T_n∧} = Y^{T_n∧} for every n ∈ ℕ. Similarly, when M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and X ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P), we let T_n(ω) = ∞ for all ω ∈ Ω and n ∈ ℕ, so that X^{[T_n]} = X and Y^{S_n∧T_n∧} = Y^{S_n∧} for every n ∈ ℕ. ■

Definition 14.17. Let M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and let X = {X_t : t ∈ ℝ₊} be a predictable process on the filtered space such that X ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P). Let {S_n : n ∈ ℕ} be an increasing sequence of stopping times such that S_n ↑ ∞ a.e. on (Ω, 𝔉, P) as n → ∞ and M^{S_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P) for every n ∈ ℕ. Let {a_n : n ∈ ℕ} be a strictly increasing sequence in ℝ₊ such that a_n ↑ ∞ as n → ∞. For every n ∈ ℕ, let T_n be defined by
T_n(ω) = inf{t ∈ ℝ₊ : ∫_{[0,t]} X²(s, ω) μ_{[M]}(ds, ω) > a_n} ∧ a_n  for ω ∈ Ω.
We define the stochastic integral X • M of X with respect to M to be the unique Y in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) such that Y^{S_n∧T_n∧} = X^{[T_n]} • M^{S_n∧} for every n ∈ ℕ, and thus Y(t, ω) = lim_{n→∞} (X^{[T_n]} • M^{S_n∧})(t, ω) for (t, ω) ∈ ℝ₊ × Λ^c, where Λ is a null set in (Ω, 𝔉, P).
As an alternate notation, we write ∫_{[0,t]} X(s) dM(s) for the random variable (X • M)_t, for t ∈ ℝ₊.
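For a continuous integrand, the stochastic integral just defined can be approximated pathwise by left-endpoint Riemann sums; the left endpoint mirrors the predictability requirement on X. The following sketch is our illustration only (a simulated Brownian path stands in for M; for Brownian motion [M]_t = t and ∫_{[0,t]} M dM = (M_t² − t)/2, which the sums approach as the mesh shrinks):

```python
import numpy as np

rng = np.random.default_rng(0)

def ito_integral_path(X_vals, M_vals):
    """Left-endpoint Riemann-sum approximation of (X . M)_t along one path.
    A numerical sketch: X_vals and M_vals are values on a common time grid."""
    dM = np.diff(M_vals)
    return np.concatenate([[0.0], np.cumsum(X_vals[:-1] * dM)])

# example: integrate Brownian motion against itself; the closed form for
# the limit is (M_t^2 - t)/2
n, t_max = 200_000, 1.0
dt = t_max / n
M = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
I = ito_integral_path(M, M)
closed_form = 0.5 * (M[-1] ** 2 - t_max)
print(abs(I[-1] - closed_form))  # small, shrinking as the mesh -> 0
```

The left-endpoint choice is essential: taking right endpoints or midpoints converges to a different (Stratonovich-type) limit, which is why the predictability of X appears throughout the definitions above.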
For truncation by stopping times for the extended stochastic integral we have the following.

Theorem 14.18. Let M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and let X be a predictable process on the filtered space such that X ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P). Then for any stopping time T on the filtered space there exists a null set Λ in (Ω, 𝔉, P) such that for every ω ∈ Λ^c we have
(X • M^{T∧})(·, ω) = (X • M)^{T∧}(·, ω).
Proof. Let {S_n : n ∈ ℕ} and {T_n : n ∈ ℕ} be as in Theorem 14.16, so that there exists a null set Λ₁ in (Ω, 𝔉, P) such that
(1) X • M = lim_{n→∞} X^{[T_n]} • M^{S_n∧}  on ℝ₊ × Λ₁^c.
Now since M^{S_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P) we have M^{T∧S_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P) for every n ∈ ℕ, and this shows that M^{T∧} ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P). Since μ_{[M^{T∧}]} ≤ μ_{[M]} according to Theorem 14.11, the fact that X is in L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P) implies that X is in L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M^{T∧}]}, P). Thus, according to Theorem 14.16, X • M^{T∧} is defined and is in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P). By Lemma 14.15, X^{[T_n]} ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[M^{S_n∧}]}, P). Since μ_{[M^{T∧S_n∧}]} ≤ μ_{[M^{S_n∧}]} by Lemma 12.23, we have X^{[T_n]} ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[M^{T∧S_n∧}]}, P). Thus X^{[T_n]} • M^{T∧S_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P), and according to Theorem 14.16 there exists a null set Λ₂ in (Ω, 𝔉, P) such that
(2) X • M^{T∧} = lim_{n→∞} X^{[T_n]} • M^{T∧S_n∧}  on ℝ₊ × Λ₂^c.
By Theorem 12.22 there exists a null set Λ₃ in (Ω, 𝔉, P) such that for every n ∈ ℕ
(3) X^{[T_n]} • M^{T∧S_n∧} = (X^{[T_n]} • M^{S_n∧})^{T∧}  on ℝ₊ × Λ₃^c.
Let Λ = ∪_{i=1}^3 Λ_i. Then X • M^{T∧} = lim_{n→∞} (X^{[T_n]} • M^{S_n∧})^{T∧} = (X • M)^{T∧} on ℝ₊ × Λ^c by (2), (3), and (1). ■

Theorem 14.19. Let M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and let X be a predictable process on the filtered space such that X ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P). Let T be a stopping time defined by
T(ω) = inf{t ∈ ℝ₊ : ∫_{[0,t]} X²(s, ω) μ_{[M]}(ds, ω) > a} ∧ a  for ω ∈ Ω,
where a > 0, and let S be an arbitrary stopping time. Then there exists a null set Λ in (Ω, 𝔉, P) such that for every ω ∈ Λ^c we have (X^{[T∧S]} • M)(·, ω) = (X • M)^{T∧S∧}(·, ω), and in particular (X^{[T]} • M)(·, ω) = (X • M)^{T∧}(·, ω).

Proof. Since M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) there exists an increasing sequence of stopping times {S_n : n ∈ ℕ} such that S_n ↑ ∞ on Λ₁^c, where Λ₁ is a null set in (Ω, 𝔉, P), and M^{S_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P) for n ∈ ℕ. Let {T_n : n ∈ ℕ} be as defined in Theorem 14.16. By Lemma 14.15, X^{[T]}, X^{[T_n]} ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[M^{S_k∧}]}, P) for n, k ∈ ℕ. Note that for sufficiently large n ∈ ℕ we have T ≤ T_n, so that T ∧ S ≤ T_n and thus X^{[T∧S]} = (X^{[T∧S]})^{[T_n]} = (X^{[T_n]})^{[T∧S]}. By this fact and by Theorem 12.22, there exists a null set Λ₂ in (Ω, 𝔉, P) such that for sufficiently large n ∈ ℕ we have
(1) X^{[T∧S]} • M^{S_n∧} = (X^{[T_n]})^{[T∧S]} • M^{S_n∧} = (X^{[T_n]} • M^{S_n∧})^{T∧S∧}  on ℝ₊ × Λ₂^c.
Since X^{[T]} ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[M^{S_n∧}]}, P), we have X^{[T]} ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P) and this implies that X^{[T∧S]} ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P). Then by Theorem 14.18 there exists a null set Λ₃ in (Ω, 𝔉, P) such that for all n ∈ ℕ we have
(2) X^{[T∧S]} • M^{S_n∧} = (X^{[T∧S]} • M)^{S_n∧}  on ℝ₊ × Λ₃^c.
On the other hand, by Theorem 14.16 there exists a null set Λ₄ in (Ω, 𝔉, P) such that
(3) lim_{n→∞} (X^{[T_n]} • M^{S_n∧})^{T∧S∧} = (X • M)^{T∧S∧}  on ℝ₊ × Λ₄^c.
Let Λ = ∪_i Λ_i. Then letting n → ∞ in (1) and using (2) and (3) we have X^{[T∧S]} • M = (X • M)^{T∧S∧} on ℝ₊ × Λ^c. In particular, with S = ∞ we have X^{[T]} • M = (X • M)^{T∧}. ■

For the quadratic variation process of the extended stochastic integral we have the following.

Theorem 14.20. Let M, N ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and let X and Y be predictable processes on the filtered space such that X ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P) and Y ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[N]}, P). Then there exists a null set Λ in (Ω, 𝔉, P) such that on Λ^c we have for every t ∈ ℝ₊
(1) [X • M, Y • N]_t = ∫_{[0,t]} X(s) Y(s) d[M, N](s),
and in particular
(2) [X • M]_t = ∫_{[0,t]} X²(s) d[M](s).
Proof. Since M, N ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P), there exists by Lemma 14.3 an increasing sequence of stopping times {S_n : n ∈ ℕ} such that S_n ↑ ∞ on Λ₀^c, where Λ₀ is a null set in (Ω, 𝔉, P), and M^{S_n∧}, N^{S_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P) for n ∈ ℕ. Let {a_n : n ∈ ℕ} be a strictly increasing sequence in ℝ₊ such that a_n ↑ ∞ as n → ∞. Let {T_n : n ∈ ℕ} and {T'_n : n ∈ ℕ} be two increasing sequences of stopping times defined by
T_n(ω) = inf{t ∈ ℝ₊ : ∫_{[0,t]} X²(s, ω) μ_{[M]}(ds, ω) > a_n} ∧ a_n  for ω ∈ Ω,
and
T'_n(ω) = inf{t ∈ ℝ₊ : ∫_{[0,t]} Y²(s, ω) μ_{[N]}(ds, ω) > a_n} ∧ a_n  for ω ∈ Ω.
By Lemma 14.15, T_n ↑ ∞ and T'_n ↑ ∞ on Λ₁^c, where Λ₁ is a null set in (Ω, 𝔉, P), and furthermore X^{[T_n]} ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[M^{S_n∧}]}, P) and Y^{[T'_n]} ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[N^{S_n∧}]}, P) for n ∈ ℕ. If we let R_n = T_n ∧ T'_n, then {R_n : n ∈ ℕ} is an increasing sequence of stopping times, R_n ↑ ∞ on Λ₁^c, X^{[R_n]} ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[M^{S_n∧}]}, P), and Y^{[R_n]} ∈ L_{2,∞}(ℝ₊ × Ω, μ_{[N^{S_n∧}]}, P). Thus X^{[R_n]} • M^{S_n∧}, Y^{[R_n]} • N^{S_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P), and by Theorem 12.17 there exists a null set Λ₂ in (Ω, 𝔉, P) such that on Λ₂^c we have for every t ∈ ℝ₊ and every n ∈ ℕ
(3) [X^{[R_n]} • M^{S_n∧}, Y^{[R_n]} • N^{S_n∧}]_t = ∫_{[0,t]} X^{[R_n]}(s) Y^{[R_n]}(s) d[M^{S_n∧}, N^{S_n∧}](s).
Now
(4) [X^{[R_n]} • M^{S_n∧}, Y^{[R_n]} • N^{S_n∧}]_t
= [(X^{[T_n]})^{[R_n]} • M^{S_n∧}, (Y^{[T'_n]})^{[R_n]} • N^{S_n∧}]_t
= [(X^{[T_n]} • M^{S_n∧})^{R_n∧}, (Y^{[T'_n]} • N^{S_n∧})^{R_n∧}]_t  by Theorem 12.22
= [(X • M)^{T_n∧S_n∧R_n∧}, (Y • N)^{T'_n∧S_n∧R_n∧}]_t  by Theorems 14.18 and 14.19
= [(X • M)^{R_n∧S_n∧}, (Y • N)^{R_n∧S_n∧}]_t
on ℝ₊ × Λ₃^c for all n ∈ ℕ, where Λ₃ is a null set in (Ω, 𝔉, P). Now since X • M and Y • N are in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P), according to Theorem 14.11 there exists a null set Λ₄ in (Ω, 𝔉, P) such that on Λ₄^c, and for every t ∈ ℝ₊, we have
(5) lim_{n→∞} [(X • M)^{R_n∧S_n∧}, (Y • N)^{R_n∧S_n∧}]_t = [X • M, Y • N]_t
and
(6) lim_{n→∞} μ_{[M^{S_n∧}, N^{S_n∧}]}(E, ω) = μ_{[M,N]}(E, ω)  for E ∈ 𝔅_{[0,t]}.
Let Λ = ∪_{i=0}^4 Λ_i. For ω ∈ Λ^c and any t ∈ ℝ₊ we have (R_n ∧ S_n)(ω) ≥ t for sufficiently large n, since (R_n ∧ S_n)(ω) ↑ ∞. Thus for such n we have
∫_{[0,t]} X^{[R_n]}(s, ω) Y^{[R_n]}(s, ω) d[M^{S_n∧}, N^{S_n∧}](s, ω) = ∫_{[0,t]} X(s, ω) Y(s, ω) d[M, N](s, ω).
Letting n → ∞ in (3) and applying (5) and the last equality, we have (1). We have (2) as a particular case of (1). ■

Theorem 14.21. Let M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and let X and Y be two predictable processes on the filtered space such that X and YX are in L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P). Then X • M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and Y ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[X•M]}, P), so that Y • (X • M) exists in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P), and furthermore there exists a null set Λ in (Ω, 𝔉, P) such that for every ω ∈ Λ^c we have (Y • (X • M))(·, ω) = (YX • M)(·, ω).

Proof. Let us show that Y ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[X•M]}, P). By Theorem 14.20, there exists a null set Λ₁ in (Ω, 𝔉, P) such that [X • M](t, ω) = ∫_{[0,t]} X²(s, ω) d[M](s, ω) for t ∈ ℝ₊ when ω ∈ Λ₁^c, so that μ_{[X•M]}(·, ω) is absolutely continuous with respect to μ_{[M]}(·, ω) on (ℝ₊, 𝔅_{ℝ₊}) with Radon–Nikodym derivative X²(·, ω). Thus for ω ∈ Λ₁^c and t ∈ ℝ₊ we have
∫_{[0,t]} Y²(s, ω) μ_{[X•M]}(ds, ω) = ∫_{[0,t]} Y²(s, ω) X²(s, ω) d[M](s, ω) < ∞,
the finiteness of the last integral being from the fact that YX ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P). This shows that Y ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[X•M]}, P).
Let {a_n : n ∈ ℕ} be a strictly increasing sequence in ℝ₊ such that a_n ↑ ∞ as n → ∞. Let {T_n : n ∈ ℕ} and {T'_n : n ∈ ℕ} be two increasing sequences of stopping times defined by
T_n(ω) = inf{t ∈ ℝ₊ : ∫_{[0,t]} X²(s, ω) μ_{[M]}(ds, ω) > a_n} ∧ a_n  for ω ∈ Ω,
T'_n(ω) = inf{t ∈ ℝ₊ : ∫_{[0,t]} Y²(s, ω) μ_{[X•M]}(ds, ω) > a_n} ∧ a_n
= inf{t ∈ ℝ₊ : ∫_{[0,t]} Y²(s, ω) X²(s, ω) μ_{[M]}(ds, ω) > a_n} ∧ a_n  for ω ∈ Ω.
Then T_n ↑ ∞ and T'_n ↑ ∞ almost surely. Since both M and X • M are in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P), there exists by Lemma 14.3 an increasing sequence of stopping times {S_n : n ∈ ℕ} such that S_n ↑ ∞ almost surely and both M^{S_n∧} and (X • M)^{S_n∧} are in M₂^c(Ω, 𝔉, {𝔉_t}, P) for every n ∈ ℕ. Thus (X • M)^{S_n∧T_n∧T'_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P) for every n ∈ ℕ. Now
(1) Y • (X • M) = lim_{n→∞} Y^{[T'_n]} • (X • M)^{S_n∧T_n∧}  by Theorem 14.16
= lim_{n→∞} Y^{[T'_n]} • (X^{[T_n]} • M^{S_n∧})  by Theorem 14.19 and Theorem 14.18
on ℝ₊ × Λ₂^c, where Λ₂ is a null set in (Ω, 𝔉, P). Since X is in L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P) and μ_{[M^{S_n∧}]} ≤ μ_{[M]}, we have X ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M^{S_n∧}]}, P). This implies that X^{[T_n]} is in L_{2,∞}(ℝ₊ × Ω, μ_{[M^{S_n∧}]}, P). Similarly, since YX is in L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P), we have YX ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M^{S_n∧}]}, P) and thus Y^{[T'_n]} X^{[T_n]} is in L_{2,∞}(ℝ₊ × Ω, μ_{[M^{S_n∧}]}, P). Therefore by Theorem 12.19 there exists a null set Λ₃ in (Ω, 𝔉, P) such that on ℝ₊ × Λ₃^c we have for all n ∈ ℕ
(2) Y^{[T'_n]} • (X^{[T_n]} • M^{S_n∧}) = Y^{[T'_n]} X^{[T_n]} • M^{S_n∧} = (YX)^{[T_n∧T'_n]} • M^{S_n∧}.
Therefore if we let Λ = ∪_{i=1}^3 Λ_i, then on ℝ₊ × Λ^c we have by (1) and (2)
Y • (X • M) = lim_{n→∞} (YX)^{[T_n∧T'_n]} • M^{S_n∧} = lim_{n→∞} ((YX)^{[T_n]} • M^{S_n∧})^{T'_n∧} = lim_{n→∞} (YX)^{[T_n]} • M^{S_n∧T'_n∧} = (YX) • M,
where the second equality is by Theorem 12.19, the third equality is by Theorem 12.24, and the last equality is by Theorem 14.16. ■

Corollary 14.22. Let M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and let X^{(1)}, ..., X^{(n)} be predictable processes such that X^{(1)}, X^{(1)}X^{(2)}, ..., X^{(1)} ⋯ X^{(n)} ∈ L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P). Then
(X^{(n)} • ( ⋯ (X^{(2)} • (X^{(1)} • M)) ⋯ )) = (X^{(1)} ⋯ X^{(n)}) • M.

Proof. By iterated application of Theorem 14.21, we have the Corollary. ■
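Two of the results above can be observed numerically. At the level of left-endpoint Riemann sums, the associativity Y • (X • M) = (YX) • M of Theorem 14.21 and Corollary 14.22 holds exactly (up to rounding), and the quadratic variation identity (2) of Theorem 14.20 holds approximately as the mesh shrinks. A sketch, ours and not from the text, with a simulated Brownian path standing in for M so that [M]_t = t:

```python
import numpy as np

rng = np.random.default_rng(1)

def stoch_int(X, M):
    # left-endpoint sums approximating (X . M)_t on a grid; the left
    # endpoint mirrors the predictability of the integrand
    return np.concatenate([[0.0], np.cumsum(X[:-1] * np.diff(M))])

n, T = 100_000, 1.0
dt = T / n
t = np.linspace(0.0, T, n + 1)
M = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
X = np.cos(t)        # bounded deterministic integrands, for simplicity
Y = np.sin(t) + 2.0

# associativity: Y . (X . M) and (YX) . M coincide for left-endpoint sums
lhs = stoch_int(Y, stoch_int(X, M))
rhs = stoch_int(Y * X, M)
assert np.allclose(lhs, rhs)

# Theorem 14.20 (2): the discrete quadratic variation of X . M is close
# to the integral of X^2 against [M], here with [M]_t = t
qv = np.sum(np.diff(stoch_int(X, M)) ** 2)
print(abs(qv - np.sum(X[:-1] ** 2) * dt))  # small for a fine mesh
```

The associativity is an algebraic identity of the sums themselves, which is why it survives the passage to the limit in Corollary 14.22; the quadratic variation identity only holds in the limit.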
§15 Itô's Formula
[I] Continuous Local Semimartingales and Itô's Formula

Definition 15.1. By a continuous local semimartingale we mean a stochastic process X = {X_t : t ∈ ℝ₊} on a standard filtered space (Ω, 𝔉, {𝔉_t}, P) such that X = X₀ + M + V, where M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P), V ∈ V^{c,loc}(Ω, 𝔉, {𝔉_t}, P), and X₀ is a real valued 𝔉₀-measurable random variable on (Ω, 𝔉, P). The processes M and V are called the martingale part and the bounded variation part of the continuous local semimartingale X. In particular, when X₀ is integrable, M is in M₂^c(Ω, 𝔉, {𝔉_t}, P), and V is in V^c(Ω, 𝔉, {𝔉_t}, P), then X is called a continuous semimartingale. A continuous local semimartingale is also called a quasimartingale.

Proposition 15.2. The decomposition of a quasimartingale X in Definition 15.1 is unique; that is, if X = X₀ + M + V and X = Y₀ + N + W, where X₀ and Y₀ are real valued 𝔉₀-measurable random variables on (Ω, 𝔉, P), M and N are in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P), and V and W are in V^{c,loc}(Ω, 𝔉, {𝔉_t}, P), then there exists a null set Λ in (Ω, 𝔉, P) such that X₀(ω) = Y₀(ω), M(·, ω) = N(·, ω), and V(·, ω) = W(·, ω) for ω ∈ Λ^c.

Proof. Since M, N, V, and W are almost surely continuous processes, we may assume that they are continuous without loss of generality in the proof. Now since M is in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P), there exists an increasing sequence of stopping times {T_n : n ∈ ℕ} such that T_n ↑ ∞ almost surely and M^{T_n∧} is a null at 0 continuous L₂-martingale for every n ∈ ℕ. By the continuity of M, the function S_n on Ω defined by
S_n(ω) = inf{t ∈ ℝ₊ : |M(t, ω)| > n}  for ω ∈ Ω and n ∈ ℕ
is the first passage time of the open set (n, ∞) in ℝ by |M| and is therefore a stopping time by Proposition 3.5. Clearly {S_n : n ∈ ℕ} is an increasing sequence. Since M(·, ω) is a real valued continuous function on ℝ₊, it is bounded on [0, t] for every t ∈ ℝ₊. From this it follows immediately that S_n(ω) ↑ ∞ as n → ∞ for every ω ∈ Ω. Since V is a continuous process, so is its total variation process |V|. Thus if we define
R_n(ω) = inf{t ∈ ℝ₊ : |V|(t, ω) > n}  for ω ∈ Ω and n ∈ ℕ,
then {R_n : n ∈ ℕ} is an increasing sequence of stopping times such that R_n(ω) ↑ ∞ as n → ∞ for every ω ∈ Ω, for the same reason as for {S_n : n ∈ ℕ}. Let {T'_n : n ∈ ℕ}, {S'_n : n ∈ ℕ}, and {R'_n : n ∈ ℕ} be defined in the same way for N and
W in the place of M and V. Let Q_n = T_n ∧ S_n ∧ R_n ∧ T'_n ∧ S'_n ∧ R'_n for n ∈ ℕ. Then {Q_n : n ∈ ℕ} is an increasing sequence of stopping times such that Q_n(ω) ↑ ∞ as n → ∞ for ω ∈ Λ₁^c, where Λ₁ is a null set in (Ω, 𝔉, P). Since M^{T_n∧}, N^{T'_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P), we have M^{Q_n∧}, N^{Q_n∧} ∈ M₂^c(Ω, 𝔉, {𝔉_t}, P) by Lemma 14.3. By Definition 11.10 it is clear that |V^{Q_n∧}| = |V|^{Q_n∧}. Since |V|^{Q_n∧} is bounded by n, so is |V^{Q_n∧}|. Thus |V^{Q_n∧}| is an L₁-process and therefore V^{Q_n∧} is in V^c(Ω, 𝔉, {𝔉_t}, P). Similarly W^{Q_n∧} is in V^c(Ω, 𝔉, {𝔉_t}, P).
Now since X₀ + M + V = Y₀ + N + W and M, N, V, and W are null at 0, we have X₀ = Y₀, that is, there exists a null set Λ₂ such that X₀(ω) = Y₀(ω) for ω ∈ Λ₂^c. Next, from M + V = N + W we have M − N = W − V and then
(1) M^{Q_n∧} − N^{Q_n∧} = W^{Q_n∧} − V^{Q_n∧}.
Since V^{Q_n∧} and W^{Q_n∧} are in V^c(Ω, 𝔉, {𝔉_t}, P), so is W^{Q_n∧} − V^{Q_n∧}. Then there exist A and B in A^c(Ω, 𝔉, {𝔉_t}, P) such that W^{Q_n∧} − V^{Q_n∧} = B − A. Then M^{Q_n∧} + A = N^{Q_n∧} + B. Now M^{Q_n∧} and N^{Q_n∧} are continuous martingales bounded by n, so that they are uniformly integrable and thus are in the class (D) by Theorem 8.22. On the other hand, A and B are continuous nonnegative submartingales so that they are in the class (DL). Thus M^{Q_n∧} + A and N^{Q_n∧} + B are continuous submartingales of class (DL). Note also that since A and B are continuous increasing processes, they are natural by Theorem 10.18. Therefore by the uniqueness of the Doob–Meyer decomposition in Theorem 10.23, we have M^{Q_n∧} = N^{Q_n∧}. From this and (1), we have V^{Q_n∧} = W^{Q_n∧} also. This implies that there exists a null set Λ₃ such that
(2) M^{Q_n∧}(·, ω) = N^{Q_n∧}(·, ω) and V^{Q_n∧}(·, ω) = W^{Q_n∧}(·, ω)  for n ∈ ℕ and ω ∈ Λ₃^c.
Let Λ = ∪_{i=1}^3 Λ_i. Since Q_n(ω) ↑ ∞ as n → ∞ for ω ∈ Λ^c, we have
lim_{n→∞} M^{Q_n∧}(t, ω) = lim_{n→∞} M(Q_n(ω) ∧ t, ω) = M(t, ω)  for (t, ω) ∈ ℝ₊ × Λ^c,
and similarly for N^{Q_n∧}, V^{Q_n∧}, and W^{Q_n∧}. Therefore letting n → ∞ in (2), we have M(·, ω) = N(·, ω) and V(·, ω) = W(·, ω) for ω ∈ Λ^c. We have also X₀(ω) = Y₀(ω) for ω ∈ Λ^c. ■

Let C²(ℝ) be the collection of all real valued continuous functions F with continuous derivatives F′ and F″ on ℝ. Let X = X₀ + M + V be a quasimartingale on a standard
filtered space (Ω, 𝔉, {𝔉_t}, P). Consider the real valued function F ∘ X on ℝ₊ × Ω. We have (F ∘ X)_t = (F ∘ X)(t, ·) = F ∘ X_t, and therefore F ∘ X = {(F ∘ X)_t : t ∈ ℝ₊} = {F ∘ X_t : t ∈ ℝ₊}. Similarly for F′ ∘ X and F″ ∘ X. Itô's Formula shows that F ∘ X is again a quasimartingale and gives its martingale part and bounded variation part in terms of M, [M] and V.

Theorem 15.3. (Itô's Formula) Let X = X₀ + M + V be a quasimartingale on a standard filtered space (Ω, 𝔉, {𝔉_t}, P) and let F ∈ C²(ℝ). Then for the real valued function F ∘ X on ℝ₊ × Ω there exists a null set Λ in (Ω, 𝔉, P) such that on Λ^c we have for every t ∈ ℝ₊
(F ∘ X)_t − (F ∘ X)₀ = ∫_{[0,t]} (F′ ∘ X)(s) dM(s) + ∫_{[0,t]} (F′ ∘ X)(s) dV(s) + (1/2) ∫_{[0,t]} (F″ ∘ X)(s) d[M](s).

Recall that ∫_{[0,t]} (F′ ∘ X)(s) dM(s) is an alternate notation for (F′ ∘ X • M)_t for t ∈ ℝ₊.
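Itô's Formula can be checked numerically in its simplest instance. Take F(x) = x² and X = M a Brownian motion (so X₀ = 0, V = 0, F′(x) = 2x, F″(x) = 2, and [M]_t = t), in which case the formula reads M_t² = 2 ∫_{[0,t]} M dM + t. The following sketch is our illustration only, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(2)

# discretized Brownian path as a stand-in for the martingale part M
n, T = 200_000, 1.0
dt = T / n
M = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# left-endpoint sums approximating (M . M)_t at each grid point
left_sums = np.cumsum(M[:-1] * np.diff(M))
t = np.linspace(dt, T, n)

lhs = M[1:] ** 2            # (F o X)_t - (F o X)_0 for F(x) = x^2
rhs = 2.0 * left_sums + t   # martingale part + (1/2) * 2 * [M]_t
print(np.max(np.abs(lhs - rhs)))  # shrinks as the mesh -> 0
```

The residual lhs − rhs equals the discrete quadratic variation minus t, which is exactly the second-order term that the correction (1/2) ∫ F″ d[M] accounts for; without it the two sides would differ by roughly t.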
Since M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and V ∈ V^{c,loc}(Ω, 𝔉, {𝔉_t}, P), there exists a null set Λ in (Ω, 𝔉, P) such that M(·, ω) and V(·, ω) are continuous on ℝ₊ for ω ∈ Λ^c. If we redefine M and V by setting M(·, ω) = 0 and V(·, ω) = 0 for ω ∈ Λ, then since the filtered space is augmented, M and V are still in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and V^{c,loc}(Ω, 𝔉, {𝔉_t}, P) respectively, but now every sample function of M and of V is continuous. Thus we assume that our M and V are not only almost surely continuous but are in fact continuous processes. For [M] ∈ A^{c,loc}(Ω, 𝔉, {𝔉_t}, P), we select a representative which is a continuous process.
Before we prove Itô's Formula, we show in Proposition 15.5 below that the stochastic integral {∫_{[0,t]} (F′ ∘ X)(s) dM(s) : t ∈ ℝ₊} is defined and is in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P), and that the two processes {∫_{[0,t]} (F′ ∘ X)(s) dV(s) : t ∈ ℝ₊} and {∫_{[0,t]} (F″ ∘ X)(s) d[M](s) : t ∈ ℝ₊} are defined and are in V^{c,loc}(Ω, 𝔉, {𝔉_t}, P). Then, since (F ∘ X)₀ is a real valued 𝔉₀-measurable random variable, F ∘ X, as given by Itô's Formula, is indeed a quasimartingale.

Observation 15.4. 1) If M ∈ M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and V ∈ V^{c,loc}
processes and in particular measurable processes.
3) Since X(·, ω) is a real valued continuous function on ℝ₊ for every ω ∈ Ω, and since F, F′, and F″ are real valued continuous functions on ℝ, (F ∘ X)(·, ω), (F′ ∘ X)(·, ω), and (F″ ∘ X)(·, ω) are real valued continuous functions on ℝ₊. This implies that every sample function of F ∘ X, F′ ∘ X, and F″ ∘ X is bounded on every finite interval in ℝ₊.
4) Since F′ ∘ X and F″ ∘ X are measurable processes and since every sample function of these processes is bounded on every finite interval in ℝ₊, ∫_{[0,t]} (F′ ∘ X)(s) dV(s) and ∫_{[0,t]} (F″ ∘ X)(s) d[M](s) are real valued 𝔉_t-measurable random variables by Theorem 10.11. Thus {∫_{[0,t]} (F′ ∘ X)(s) dV(s) : t ∈ ℝ₊} and {∫_{[0,t]} (F″ ∘ X)(s) d[M](s) : t ∈ ℝ₊} are adapted processes. (Note that our V ∈ V^{c,loc}(Ω, 𝔉, {𝔉_t}, P) and [M] ∈ A^{c,loc}(Ω, 𝔉, {𝔉_t}, P) are not L₁-processes. Nevertheless Theorem 10.11 is still applicable, since the fact that the process A in Theorem 10.11 is an L₁-process was not needed in the proof of Theorem 10.11.)

Proposition 15.5. 1) The two processes {∫_{[0,t]} (F′ ∘ X)(s) dV(s) : t ∈ ℝ₊} and {∫_{[0,t]} (F″ ∘ X)(s) d[M](s) : t ∈ ℝ₊} are in V^{c,loc}(Ω, 𝔉, {𝔉_t}, P).
2) The process F′ ∘ X is in L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P), so that the stochastic integral {∫_{[0,t]} (F′ ∘ X)(s) dM(s) : t ∈ ℝ₊} is defined and is in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P).

Proof. 1) Let us consider the process Φ = {Φ_t : t ∈ ℝ₊} where
Φ(t, ω) = ∫_{[0,t]} (F′ ∘ X)(s, ω) dV(s, ω)  for (t, ω) ∈ ℝ₊ × Ω.
As we saw in Observation 15.4, Φ is an adapted process. Also, since every sample function of F′ ∘ X and V is continuous, Φ(·, ω) as given above is a continuous function on ℝ₊ for every ω ∈ Ω. It remains to show that Φ(·, ω) is of bounded variation on [0, t] for every t ∈ ℝ₊. Let ω ∈ Ω and t ∈ ℝ₊ be fixed. Since (F′ ∘ X)(·, ω) is continuous on ℝ₊, we have C = sup_{s∈[0,t]} |(F′ ∘ X)(s, ω)| < ∞. Let 𝔗 be the collection of all finite strictly increasing sequences τ = {t₀, ..., t_n} with t₀ = 0 and t_n = t. Then for τ ∈ 𝔗
Δ_τ ≡ Σ_{k=1}^n |Φ(t_k, ω) − Φ(t_{k−1}, ω)| = Σ_{k=1}^n | ∫_{[0,t_k]} (F′ ∘ X)(s, ω) dV(s, ω) − ∫_{[0,t_{k−1}]} (F′ ∘ X)(s, ω) dV(s, ω) | ≤ Σ_{k=1}^n ∫_{(t_{k−1},t_k]} |(F′ ∘ X)(s, ω)| d|V|(s, ω) ≤ C |V|(t, ω) < ∞.
Thus sup_{τ∈𝔗} Δ_τ < ∞. This shows that Φ(·, ω) is of bounded variation on every finite interval in ℝ₊ and completes the proof that Φ is in V^{c,loc}(Ω, 𝔉, {𝔉_t}, P). We show similarly that Ψ is in V^{c,loc}(Ω, 𝔉, {𝔉_t}, P) for the process Ψ = {Ψ_t : t ∈ ℝ₊} defined by
Ψ(t, ω) = ∫_{[0,t]} (F″ ∘ X)(s, ω) d[M](s, ω)  for (t, ω) ∈ ℝ₊ × Ω.
2) For every ω ∈ Ω, (F′ ∘ X)(·, ω) is continuous and hence bounded on every finite interval in ℝ₊, so that for every t ∈ ℝ₊ we have ∫_{[0,t]} (F′ ∘ X)²(s, ω) μ_{[M]}(ds, ω) < ∞. Thus F′ ∘ X is in L^{loc}_{2,∞}(ℝ₊ × Ω, μ_{[M]}, P) and consequently the stochastic integral {∫_{[0,t]} (F′ ∘ X)(s) dM(s) : t ∈ ℝ₊} is defined and is in M₂^{c,loc}(Ω, 𝔉, {𝔉_t}, P) by Definition 14.17. ■

Lemma 15.6. Let M = {M_t : t ∈ ℝ₊} be a null at 0 martingale on a filtered space (Ω, 𝔉, {𝔉_t}, P) such that |M| ≤ K on [0, t] × Ω for some t ∈ ℝ₊ and K > 0. For 0 = t₀ < ··· < t_n = t, let γ_j = {M(t_j) − M(t_{j−1})}² and α_j = E[{M(t_j) − M(t_{j−1})}² | 𝔉_{t_{j−1}}] for j = 1, ..., n, and let S = Σ_{j=1}^n γ_j. Then E(S) ≤ K² and E(S²) ≤ 12K⁴.

Proof. For j = 1, ..., n we have
E( 7 j ) < E(a,) = E\E[{M(t3) - M ( V , ) } 2 | V . " = E[E[M(i,) 2 - M(tj^f\^J]
= E[M(t3)2 - M(i,_,) 2 ]-
Thus (2)
E(5) < £ EtMCt,) 2 - M ^ . , ) 2 ] = E[M(<)2] < tf2. 3=1
To estimate $\mathrm{E}(S^2)$, let us write
(3)
$$S^2=\Big\{\sum_{j=1}^n\gamma_j\Big\}^2=\sum_{j=1}^n\gamma_j^2+2\sum_{i=1}^n\gamma_i\Big\{\sum_{j=i+1}^n\gamma_j\Big\}.$$
By the computations in (1) and (2), and since $\gamma_j\le(2K)^2$, we have
(4)
$$\sum_{j=1}^n\mathrm{E}(\gamma_j^2)\le(2K)^2\sum_{j=1}^n\mathrm{E}(\alpha_j)=(2K)^2\mathrm{E}(S)\le4K^4.$$
§15. ITÔ'S FORMULA
By the Conditional Hölder Inequality and by (3) in the proof of Lemma 12.26 we have
$$\mathrm{E}\Big[\sum_{j=i+1}^n\gamma_j\,\Big|\,\mathfrak{F}_{t_i}\Big]\le\sum_{j=i+1}^n\mathrm{E}[\alpha_j\mid\mathfrak{F}_{t_i}]\le(2K)^2\quad\text{a.e. on }(\Omega,\mathfrak{F}_{t_i},P),$$
and thus
(5)
$$\mathrm{E}\Big[\sum_{i=1}^n\gamma_i\Big\{\sum_{j=i+1}^n\gamma_j\Big\}\Big]=\sum_{i=1}^n\mathrm{E}\Big[\gamma_i\,\mathrm{E}\Big[\sum_{j=i+1}^n\gamma_j\,\Big|\,\mathfrak{F}_{t_i}\Big]\Big]\le(2K)^2\sum_{i=1}^n\mathrm{E}(\gamma_i)=(2K)^2\mathrm{E}(S)\le4K^4.$$
By (3), (4), and (5), we have $\mathrm{E}(S^2)\le(1+4K^2)K^2$.
■
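The first-moment computation (1)–(2), $\mathrm{E}(S)=\mathrm{E}[M(t)^2]$, can be illustrated numerically. The sketch below is an assumption-laden illustration, not part of the text: it uses Brownian motion (which is *not* bounded, so the lemma's hypothesis fails), only to show the telescoping identity for the expected sum of squared martingale increments; parameters and seed are arbitrary.

```python
import random

def sum_squared_increments(n_steps, t, rng):
    """One Brownian path on [0, t]: return (S, M_t), where S is the sum of
    squared increments over a uniform partition."""
    dt = t / n_steps
    s, m = 0.0, 0.0
    for _ in range(n_steps):
        dm = rng.gauss(0.0, dt ** 0.5)
        s += dm * dm
        m += dm
    return s, m

rng = random.Random(0)
n_paths, t = 2000, 1.0
S_mean, M2_mean = 0.0, 0.0
for _ in range(n_paths):
    s, m = sum_squared_increments(256, t, rng)
    S_mean += s / n_paths
    M2_mean += m * m / n_paths

# E(S) = E[M(t)^2] = t for Brownian motion; both averages should be near t = 1
print(S_mean, M2_mean)
```

Both Monte Carlo averages estimate the same quantity $t$, as (1)–(2) predict for any square-integrable martingale.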
Proof of Theorem 15.3. Step 1. Since the process $F\circ X-(F\circ X)_0$ is continuous and since the three processes $\{\int_{[0,t]}(F'\circ X)(s)\,dV(s):t\in\mathbb{R}_+\}$, $\{\int_{[0,t]}(F''\circ X)(s)\,d[M](s):t\in\mathbb{R}_+\}$ and $\{\int_{[0,t]}(F'\circ X)(s)\,dM(s):t\in\mathbb{R}_+\}$ are almost surely continuous according to Proposition 15.5, to show that $F\circ X-(F\circ X)_0$ is equivalent to the sum of the three processes it suffices to show, according to Theorem 2.3, that for every $t\in\mathbb{R}_+$ there exists a null set $\Lambda_t$ in $(\Omega,\mathfrak{F},P)$ such that on $\Lambda_t^c$ we have
(1)
$$(F\circ X)_t-(F\circ X)_0=\int_{[0,t]}(F'\circ X)(s)\,dM(s)+\int_{[0,t]}(F'\circ X)(s)\,dV(s)+\frac12\int_{[0,t]}(F''\circ X)(s)\,d[M](s).$$
Let us consider first the case where $X_0$, $M$, $[M]$ and $|V|$ are all bounded, that is, there exists $K>0$ such that
(2)
$$|X_0(\omega)|,\;|M(t,\omega)|,\;[M](t,\omega),\;|V|(t,\omega)\le K\quad\text{for }(t,\omega)\in\mathbb{R}_+\times\Omega.$$
In this case, $X_0$ is an integrable random variable, $M$ is a martingale by Observation 14.2 so that $M\in\mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, and $|V|$ is an $L_1$-process so that $V\in\mathbf{V}^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. Note also that since $|V_t|\le|V|_t\le K$, we have $|X|=|X_0+M+V|\le3K$ on $\mathbb{R}_+\times\Omega$. Since $F$, $F'$ and $F''$ are continuous on $\mathbb{R}$ they are bounded on the finite interval $[-3K,3K]$. Thus there exists $C>0$ such that
$$\sup_{x\in[-3K,3K]}|F(x)|,\;\sup_{x\in[-3K,3K]}|F'(x)|,\;\sup_{x\in[-3K,3K]}|F''(x)|\le C.$$
Then
(3)
$$|F\circ X|,\;|F'\circ X|,\;|F''\circ X|\le C\quad\text{on }\mathbb{R}_+\times\Omega.$$
The boundedness of $|F'\circ X|$ in particular implies that $F'\circ X$ is in $L_{2,\infty}(\mathbb{R}_+\times\Omega,\mu_{[M]},P)$ by Observation 11.19 so that $(F'\circ X)\bullet M\in\mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ by Definition 12.9. For $n\in\mathbb{N}$, let $\Delta_n$ be a partition of $\mathbb{R}_+$ into subintervals by a strictly increasing sequence $\{t_{n,k}:k\in\mathbb{Z}_+\}$ in $\mathbb{R}_+$ such that $t_{n,0}=0$, $t_{n,k}\uparrow\infty$ as $k\to\infty$, and $\lim_{n\to\infty}|\Delta_n|=0$ where $|\Delta_n|=\sup_{k\in\mathbb{N}}(t_{n,k}-t_{n,k-1})$. Let $t\in\mathbb{R}_+$ be fixed. For each $n\in\mathbb{N}$, let $t_{n,p_n}=t$ where $t_{n,k}<t$ for $k=0,\dots,p_n-1$. Let us write
(4)
$$F(X(t))-F(X(0))=\sum_{k=1}^{p_n}\{F(X(t_{n,k}))-F(X(t_{n,k-1}))\}.$$
Now for $a,b\in\mathbb{R}$, $a\ne b$, according to Taylor's Theorem we have
$$F(b)-F(a)=F'(a)(b-a)+\tfrac12F''(c)(b-a)^2\quad\text{for some }c\in(a\wedge b,a\vee b).$$
Thus we have
(5)
$$F(X(t_{n,k},\omega))-F(X(t_{n,k-1},\omega))=F'(X(t_{n,k-1},\omega))\{X(t_{n,k},\omega)-X(t_{n,k-1},\omega)\}+\tfrac12F''(\xi_{n,k}(\omega))\{X(t_{n,k},\omega)-X(t_{n,k-1},\omega)\}^2,$$
where
(6)
$$X(t_{n,k-1},\omega)\wedge X(t_{n,k},\omega)\le\xi_{n,k}(\omega)\le X(t_{n,k-1},\omega)\vee X(t_{n,k},\omega).$$
Now $\xi_{n,k}$ may not be a random variable, that is, it may not be $\mathfrak{F}$-measurable. However on the set $\{X(t_{n,k})-X(t_{n,k-1})\ne0\}\in\mathfrak{F}_{t_{n,k}}$ we have from (5) the equality
$$F''(\xi_{n,k})=2\{F(X(t_{n,k}))-F(X(t_{n,k-1}))\}\{X(t_{n,k})-X(t_{n,k-1})\}^{-2}-2F'(X(t_{n,k-1}))\{X(t_{n,k})-X(t_{n,k-1})\}^{-1},$$
which is $\mathfrak{F}_{t_{n,k}}$-measurable. Let us define a function $G_{n,k}$ on $\Omega$ by setting
$$G_{n,k}(\omega)=\begin{cases}F''(\xi_{n,k}(\omega))&\text{for }\omega\in\{X(t_{n,k})-X(t_{n,k-1})\ne0\}\\0&\text{otherwise.}\end{cases}$$
Then $G_{n,k}$ is an $\mathfrak{F}_{t_{n,k}}$-measurable random variable and furthermore
(7)
$$F(X(t_{n,k}))-F(X(t_{n,k-1}))=F'(X(t_{n,k-1}))\{X(t_{n,k})-X(t_{n,k-1})\}+\tfrac12G_{n,k}\{X(t_{n,k})-X(t_{n,k-1})\}^2.$$
Note also that by (6), the Intermediate Value Theorem applied to the continuous function $X(\cdot,\omega)$, and (3) we have
(8)
$$\sup_{k=1,\dots,p_n}|G_{n,k}(\omega)|\le\sup_{k=1,\dots,p_n}|F''(\xi_{n,k}(\omega))|\le C.$$
We now have
(9)
$$F(X(t))-F(X(0))=\sum_{k=1}^{p_n}F'(X(t_{n,k-1}))\{X(t_{n,k})-X(t_{n,k-1})\}+\frac12\sum_{k=1}^{p_n}G_{n,k}\{X(t_{n,k})-X(t_{n,k-1})\}^2.$$
Let us write the first member on the right side of (9) as
(10)
$$\sum_{k=1}^{p_n}F'(X(t_{n,k-1}))\{X(t_{n,k})-X(t_{n,k-1})\}=\sum_{k=1}^{p_n}F'(X(t_{n,k-1}))\{M(t_{n,k})-M(t_{n,k-1})\}+\sum_{k=1}^{p_n}F'(X(t_{n,k-1}))\{V(t_{n,k})-V(t_{n,k-1})\}=S_1^{(n)}+S_2^{(n)}.$$
By Proposition 12.16, $\lim_{n\to\infty}\|S_1^{(n)}-((F'\circ X)\bullet M)(t)\|_2=0$. This implies that there exist a subsequence $\{n_\ell\}$ of $\{n\}$ and a null set $\Lambda'$ in $(\Omega,\mathfrak{F},P)$ such that
(11)
$$\lim_{\ell\to\infty}S_1^{(n_\ell)}(\omega)=((F'\circ X)\bullet M)(t,\omega)\quad\text{for }\omega\in\Lambda'^c.$$
Regarding $S_2^{(n)}$, note that since $(F'\circ X)(\cdot,\omega)$ is a continuous function and $V(\cdot,\omega)$ is a continuous function of bounded variation on every finite interval in $\mathbb{R}_+$ for every $\omega\in\Omega$, we have
(12)
$$\lim_{n\to\infty}S_2^{(n)}(\omega)=\int_{[0,t]}(F'\circ X)(s,\omega)\,dV(s,\omega)\quad\text{for }\omega\in\Omega.$$
Let us write the second member on the right side of (9) as
(13)
$$\frac12\sum_{k=1}^{p_n}G_{n,k}\{X(t_{n,k})-X(t_{n,k-1})\}^2=\sum_{k=1}^{p_n}G_{n,k}\{M(t_{n,k})-M(t_{n,k-1})\}\{V(t_{n,k})-V(t_{n,k-1})\}$$
$$+\frac12\sum_{k=1}^{p_n}G_{n,k}\{V(t_{n,k})-V(t_{n,k-1})\}^2+\frac12\sum_{k=1}^{p_n}G_{n,k}\{M(t_{n,k})-M(t_{n,k-1})\}^2=S_3^{(n)}+S_4^{(n)}+S_5^{(n)}.$$
Now by (8) we have
$$|S_3^{(n)}|\le C\max_{k=1,\dots,p_n}|M(t_{n,k})-M(t_{n,k-1})|\sum_{k=1}^{p_n}|V(t_{n,k})-V(t_{n,k-1})|\le C\max_{k=1,\dots,p_n}|M(t_{n,k})-M(t_{n,k-1})|\,|V|_t.$$
Since every sample function of $M$ is continuous and hence uniformly continuous on $[0,t]$, $\lim_{n\to\infty}\max_{k=1,\dots,p_n}|M(t_{n,k})-M(t_{n,k-1})|=0$ on $\Omega$. Thus we have
(14)
$$\lim_{n\to\infty}S_3^{(n)}(\omega)=0\quad\text{for }\omega\in\Omega.$$
Similarly we have
$$|S_4^{(n)}|\le\frac12C\max_{k=1,\dots,p_n}|V(t_{n,k})-V(t_{n,k-1})|\,|V|_t,$$
so that by the uniform continuity of every sample function of $V$ on $[0,t]$ we have
(15)
$$\lim_{n\to\infty}S_4^{(n)}(\omega)=0\quad\text{for }\omega\in\Omega.$$
To show $\lim_{n\to\infty}\mathrm{E}\big(\big|S_5^{(n)}-\frac12\int_{[0,t]}(F''\circ X)(s)\,d[M](s)\big|\big)=0$, let
$$S_6^{(n)}=\frac12\sum_{k=1}^{p_n}(F''\circ X)(t_{n,k-1})\{M(t_{n,k})-M(t_{n,k-1})\}^2,\qquad S_7^{(n)}=\frac12\sum_{k=1}^{p_n}(F''\circ X)(t_{n,k-1})\{[M](t_{n,k})-[M](t_{n,k-1})\}.$$
For brevity, let us write
$$A_{n,k}=\{M(t_{n,k})-M(t_{n,k-1})\}^2,\quad B_{n,k}=\{[M](t_{n,k})-[M](t_{n,k-1})\},\quad\alpha_n=\max_{k=1,\dots,p_n}A_{n,k},\quad\beta_n=\max_{k=1,\dots,p_n}B_{n,k}.$$
Now
$$|S_5^{(n)}-S_6^{(n)}|\le\frac12\sum_{k=1}^{p_n}\big|G_{n,k}-(F''\circ X)(t_{n,k-1})\big|\{M(t_{n,k})-M(t_{n,k-1})\}^2\le\delta_n\sum_{k=1}^{p_n}\{M(t_{n,k})-M(t_{n,k-1})\}^2,$$
where $\delta_n=\frac12\max_{k=1,\dots,p_n}|G_{n,k}-(F''\circ X)(t_{n,k-1})|\le C$, so that by Hölder's Inequality and Lemma 15.6 we have
$$\mathrm{E}\big(|S_5^{(n)}-S_6^{(n)}|\big)\le\mathrm{E}[\delta_n^2]^{1/2}\,\mathrm{E}(S^2)^{1/2}\le\mathrm{E}[\delta_n^2]^{1/2}\sqrt{1+4K^2}\,K.$$
Since $G_{n,k}$ and $(F''\circ X)(t_{n,k-1})$ are both values of $F''$ at points lying between the minimum and the maximum of $X(\cdot,\omega)$ on $[t_{n,k-1},t_{n,k}]$, the uniform continuity of the sample functions of $X$ on $[0,t]$ and of $F''$ on $[-3K,3K]$ implies $\lim_{n\to\infty}\delta_n=0$ on $\Omega$. Since $\delta_n$ is bounded by $C$ we have $\lim_{n\to\infty}\mathrm{E}[\delta_n^2]=0$ by the Bounded Convergence Theorem. Therefore
(16)
$$\lim_{n\to\infty}\mathrm{E}\big(|S_5^{(n)}-S_6^{(n)}|\big)=0.$$
Next we have $S_6^{(n)}-S_7^{(n)}=\frac12\sum_{k=1}^{p_n}(F''\circ X)(t_{n,k-1})\{A_{n,k}-B_{n,k}\}$ so that
$$\mathrm{E}(|S_6^{(n)}-S_7^{(n)}|^2)=\frac14\sum_{k=1}^{p_n}\mathrm{E}\big[(F''\circ X)(t_{n,k-1})^2\{A_{n,k}-B_{n,k}\}^2\big]+\frac14\sum_{\substack{j,k=1,\dots,p_n\\ j\ne k}}\mathrm{E}\big[(F''\circ X)(t_{n,j-1})(F''\circ X)(t_{n,k-1})\{A_{n,j}-B_{n,j}\}\{A_{n,k}-B_{n,k}\}\big].$$
For $j\ne k$, say $j<k$, we have
$$\mathrm{E}\big[(F''\circ X)(t_{n,j-1})(F''\circ X)(t_{n,k-1})\{A_{n,j}-B_{n,j}\}\{A_{n,k}-B_{n,k}\}\mid\mathfrak{F}_{t_{n,k-1}}\big]=(F''\circ X)(t_{n,j-1})(F''\circ X)(t_{n,k-1})\{A_{n,j}-B_{n,j}\}\,\mathrm{E}\big[\{A_{n,k}-B_{n,k}\}\mid\mathfrak{F}_{t_{n,k-1}}\big],$$
a.e. on $(\Omega,\mathfrak{F}_{t_{n,k-1}},P)$. By the definition of $[M]$ we have
$$\mathrm{E}\big[\{A_{n,k}-B_{n,k}\}\mid\mathfrak{F}_{t_{n,k-1}}\big]=\mathrm{E}\big[\{M(t_{n,k})-M(t_{n,k-1})\}^2-\{[M](t_{n,k})-[M](t_{n,k-1})\}\mid\mathfrak{F}_{t_{n,k-1}}\big]=0,$$
a.e. on $(\Omega,\mathfrak{F}_{t_{n,k-1}},P)$. Thus the summands in the second sum in the expression for $\mathrm{E}(|S_6^{(n)}-S_7^{(n)}|^2)$ above are all equal to 0 and therefore we have
$$\mathrm{E}(|S_6^{(n)}-S_7^{(n)}|^2)=\frac14\sum_{k=1}^{p_n}\mathrm{E}\big[(F''\circ X)(t_{n,k-1})^2\{A_{n,k}-B_{n,k}\}^2\big]\le\frac14\sum_{k=1}^{p_n}\mathrm{E}\big[(F''\circ X)(t_{n,k-1})^2\{A_{n,k}^2+B_{n,k}^2\}\big]$$
$$\le\frac14C^2\,\mathrm{E}\Big[\sum_{k=1}^{p_n}\{M(t_{n,k})-M(t_{n,k-1})\}^4+\sum_{k=1}^{p_n}\{[M](t_{n,k})-[M](t_{n,k-1})\}^2\Big]$$
$$\le\frac14C^2\,\mathrm{E}\Big[\alpha_n\sum_{k=1}^{p_n}\{M(t_{n,k})-M(t_{n,k-1})\}^2\Big]+\frac14C^2\,\mathrm{E}\big[\beta_n[M](t)\big]$$
$$\le\frac14C^2\,\mathrm{E}[\alpha_n^2]^{1/2}\,\mathrm{E}\Big[\Big\{\sum_{k=1}^{p_n}\{M(t_{n,k})-M(t_{n,k-1})\}^2\Big\}^2\Big]^{1/2}+\frac14C^2\,\mathrm{E}\big[\beta_n[M](t)\big]\le\frac14C^2\,\mathrm{E}[\alpha_n^2]^{1/2}\sqrt2\,4K^2+\frac14C^2\,\mathrm{E}\big[\beta_n[M](t)\big],$$
by Lemma 12.26. By the uniform continuity of every sample function of $M$ and $[M]$ on $[0,t]$ we have $\lim_{n\to\infty}\alpha_n^2=0$ and $\lim_{n\to\infty}\beta_n=0$ on $\Omega$. Since $M$ and $[M]$ are bounded we have $\lim_{n\to\infty}\mathrm{E}[\alpha_n^2]=0$ and $\lim_{n\to\infty}\mathrm{E}[\beta_n[M](t)]=0$ by the Bounded Convergence Theorem. Therefore $\lim_{n\to\infty}\mathrm{E}(|S_6^{(n)}-S_7^{(n)}|^2)=0$ and consequently
(17)
$$\lim_{n\to\infty}\mathrm{E}(|S_6^{(n)}-S_7^{(n)}|)=0.$$
Since every sample function of $F''\circ X$ is continuous we have
(18)
$$\lim_{n\to\infty}S_7^{(n)}(\omega)=\frac12\int_{[0,t]}(F''\circ X)(s,\omega)\,d[M](s,\omega)\quad\text{for }\omega\in\Omega.$$
Now (16) and (17) imply that there exist a subsequence $\{n_m\}$ of $\{n_\ell\}$ and a null set $\Lambda''$ in $(\Omega,\mathfrak{F},P)$ such that
(19)
$$\lim_{m\to\infty}|S_5^{(n_m)}(\omega)-S_6^{(n_m)}(\omega)|=\lim_{m\to\infty}|S_6^{(n_m)}(\omega)-S_7^{(n_m)}(\omega)|=0\quad\text{for }\omega\in\Lambda''^c.$$
Let $\Lambda_t=\Lambda'\cup\Lambda''$. Replace $n$ in (9) by $n_m$. If we let $m\to\infty$ in (9), then by (10), (11), (12), (13), (14), (15), (18), and (19) we have (1) holding on $\Lambda_t^c$. This proves (1) for the case where $X_0$, $M$, $[M]$ and $|V|$ are all bounded.
Step 2. Let us remove the assumption that $X_0$ is bounded, maintaining the assumption that $M$, $[M]$, and $|V|$ are bounded. For $n\in\mathbb{N}$ let $X^{(n)}=X_0^{(n)}+M+V$ where $X_0^{(n)}=X_0\mathbf{1}_{\{|X_0|\le n\}}$. Then by Step 1 there exists a null set $\Lambda_n$ in $(\Omega,\mathfrak{F},P)$ such that on $\Lambda_n^c$ we have for every $t\in\mathbb{R}_+$
(20)
$$(F\circ X^{(n)})_t-(F\circ X^{(n)})_0=\int_{[0,t]}(F'\circ X^{(n)})(s)\,dM(s)+\int_{[0,t]}(F'\circ X^{(n)})(s)\,dV(s)+\frac12\int_{[0,t]}(F''\circ X^{(n)})(s)\,d[M](s).$$
Since the stochastic integral does not depend on the values of the integrand process at $t=0$, as we noted after Definition 12.3, we have
$$\int_{[0,t]}(F'\circ X^{(n)})(s)\,dM(s)=\int_{[0,t]}(F'\circ X)(s)\,dM(s)\quad\text{on }\{|X_0|\le n\}.$$
For a fixed $\omega\in\Omega$, let $n$ be so large that $|X_0(\omega)|\le n$ so that $X_0^{(n)}(\omega)=X_0(\omega)$ and hence $X^{(n)}(\cdot,\omega)=X(\cdot,\omega)$. For such $n$ we have
$$\int_{[0,t]}(F'\circ X^{(n)})(s,\omega)\,dV(s,\omega)=\int_{[0,t]}(F'\circ X)(s,\omega)\,dV(s,\omega),$$
and
$$\int_{[0,t]}(F''\circ X^{(n)})(s,\omega)\,d[M](s,\omega)=\int_{[0,t]}(F''\circ X)(s,\omega)\,d[M](s,\omega).$$
On the other hand since $\lim_{n\to\infty}X^{(n)}(\omega)=X(\omega)$ and since $F$ is continuous, we have $\lim_{n\to\infty}(F\circ X^{(n)})(s,\omega)=(F\circ X)(s,\omega)$ for every $s\in\mathbb{R}_+$. Let $\Lambda=\bigcup_{n\in\mathbb{N}}\Lambda_n$. Then letting $n\to\infty$ in (20) we have (1) holding on $\Lambda^c$.

Step 3. Let us remove the assumption that $X_0$, $M$, $[M]$, and $|V|$ are bounded. Let
$$S_{1,n}=\inf\{t\in\mathbb{R}_+:|M(t)|>n\}\wedge n,\qquad S_{2,n}=\inf\{t\in\mathbb{R}_+:[M](t)>n\}\wedge n,$$
$$S_{3,n}=\inf\{t\in\mathbb{R}_+:|V|(t)>n\}\wedge n,\qquad S_{4,n}=\inf\{t\in\mathbb{R}_+:|(F'\circ X)(t)|>n\}\wedge n,$$
$$T_n=\inf\Big\{t\in\mathbb{R}_+:\int_{[0,t]}(F'\circ X)^2(s)\,d[M](s)>n\Big\}\wedge n.$$
Then $\{S_{i,n}:n\in\mathbb{N}\}$ for $i=1,\dots,4$, and $\{T_n:n\in\mathbb{N}\}$ are increasing sequences of stopping times, and since every sample function of $M$, $[M]$, $|V|$ and $F'\circ X$ is continuous
by our assumption, we have $S_{i,n}\uparrow\infty$ and $T_n\uparrow\infty$ on $\Omega$ as $n\to\infty$. Let $R_n=S_{1,n}\wedge\cdots\wedge S_{4,n}\wedge T_n$. Then $\{R_n:n\in\mathbb{N}\}$ is an increasing sequence of stopping times such that $R_n\uparrow\infty$ on $\Omega$. Since the processes $M^{R_n\wedge}$, $[M]^{R_n\wedge}$, $|V|^{R_n\wedge}$ and $(F'\circ X)^{R_n\wedge}$ are bounded by $n$, we have $M^{R_n\wedge}\in\mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, $[M]^{R_n\wedge}\in\mathbf{A}^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, $|V|^{R_n\wedge}\in\mathbf{V}^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, and $(F'\circ X)^{R_n\wedge}\in L_{2,\infty}(\mathbb{R}_+\times\Omega,\mu_{[M^{R_n\wedge}]},P)$. Note that $[M^{R_n\wedge}]=[M]^{R_n\wedge}$ by Proposition 12.25 and $|V^{R_n\wedge}|=|V|^{R_n\wedge}$. For every $n\in\mathbb{N}$, let $X^{R_n\wedge}=X_0+M^{R_n\wedge}+V^{R_n\wedge}$. By Step 2 there exists a null set $\Lambda$ in $(\Omega,\mathfrak{F},P)$ such that on $\Lambda^c$ we have for every $n\in\mathbb{N}$ and every $t\in\mathbb{R}_+$
(21)
$$(F\circ X^{R_n\wedge})_t-(F\circ X^{R_n\wedge})_0=\int_{[0,t]}(F'\circ X^{R_n\wedge})(s)\,dM^{R_n\wedge}(s)+\int_{[0,t]}(F'\circ X^{R_n\wedge})(s)\,dV^{R_n\wedge}(s)+\frac12\int_{[0,t]}(F''\circ X^{R_n\wedge})(s)\,d[M^{R_n\wedge}](s).$$
Let us observe that if $Y$ is an adapted process and $S$ is a stopping time on a filtered space, and $G$ is a real valued function on $\mathbb{R}$, then for the stopped process $Y^{S\wedge}$ and the truncated process $Y^{[S]}$ we have
(22) $\quad G\circ Y^{S\wedge}=(G\circ Y)^{S\wedge}$
and
(23) $\quad (Y^{S\wedge})^{[S]}=Y^{[S]}$,
as can be verified easily. For $\omega\in\Lambda^c$ and $t\in\mathbb{R}_+$, for sufficiently large $n\in\mathbb{N}$ we have $R_n(\omega)>t$ so that $R_n(\omega)\wedge s=s$ for all $s\in[0,t]$. For such $n$ we have by (22)
(24) $\quad (F\circ X^{R_n\wedge})(s,\omega)=(F\circ X)^{R_n\wedge}(s,\omega)=(F\circ X)(s,\omega)$ for $s\in[0,t]$,
and similarly
(25) $\quad (F'\circ X^{R_n\wedge})(s,\omega)=(F'\circ X)(s,\omega)$ and $(F''\circ X^{R_n\wedge})(s,\omega)=(F''\circ X)(s,\omega)$ for $s\in[0,t]$.
Regarding the first term on the right side of (21) we have
$$(F'\circ X^{R_n\wedge})\bullet M^{R_n\wedge}=(F'\circ X)^{R_n\wedge}\bullet M^{R_n\wedge}=(F'\circ X)^{R_n\wedge}\bullet M^{R_n\wedge R_n\wedge}=((F'\circ X)^{R_n\wedge}\bullet M^{R_n\wedge})^{R_n\wedge}$$
$$=((F'\circ X)^{R_n\wedge})^{[R_n]}\bullet M^{R_n\wedge}=(F'\circ X)^{[R_n]}\bullet M^{R_n\wedge}=((F'\circ X)^{[R_n]}\bullet M)^{R_n\wedge},$$
where the first equality is by (22), the third equality is by Theorem 12.24, the fourth equality is by Theorem 12.22, the fifth equality is by (23), and the last equality is by Theorem 14.18. Now $F'\circ X\in L_{2,\infty}^{loc}(\mathbb{R}_+\times\Omega,\mu_{[M]},P)$ by Proposition 15.5, so by Theorem 14.19 we have
$$(F'\circ X)^{[R_n]}\bullet M=((F'\circ X)\bullet M)^{R_n\wedge}.$$
Thus $(F'\circ X^{R_n\wedge})\bullet M^{R_n\wedge}=((F'\circ X)\bullet M)^{R_n\wedge}$. For $\omega\in\Lambda^c$, since $R_n(\omega)\wedge t=t$ for sufficiently large $n\in\mathbb{N}$, for such $n$ we have $((F'\circ X^{R_n\wedge})\bullet M^{R_n\wedge})(t,\omega)=((F'\circ X)\bullet M)^{R_n\wedge}(t,\omega)=((F'\circ X)\bullet M)(t,\omega)$. Therefore we have shown that for $\omega\in\Lambda^c$, for sufficiently large $n\in\mathbb{N}$ we have
(26)
$$\Big(\int_{[0,t]}(F'\circ X^{R_n\wedge})(s)\,dM^{R_n\wedge}(s)\Big)(\omega)=\Big(\int_{[0,t]}(F'\circ X)(s)\,dM(s)\Big)(\omega).$$
For $s\in[0,t]$, for sufficiently large $n\in\mathbb{N}$ we have $R_n(\omega)\wedge s=s$ so that
$$V^{R_n\wedge}(s,\omega)=V(R_n(\omega)\wedge s,\omega)=V(s,\omega),$$
and similarly
$$[M^{R_n\wedge}](s,\omega)=[M]^{R_n\wedge}(s,\omega)=[M](R_n(\omega)\wedge s,\omega)=[M](s,\omega).$$
Thus by these equalities and by (25) we have for sufficiently large $n\in\mathbb{N}$
(27)
$$\int_{[0,t]}(F'\circ X^{R_n\wedge})(s,\omega)\,dV^{R_n\wedge}(s,\omega)=\int_{[0,t]}(F'\circ X)(s,\omega)\,dV(s,\omega),$$
and
(28)
$$\int_{[0,t]}(F''\circ X^{R_n\wedge})(s,\omega)\,d[M^{R_n\wedge}](s,\omega)=\int_{[0,t]}(F''\circ X)(s,\omega)\,d[M](s,\omega).$$
Using (24), (26), (27) and (28) in (21), we have (1) holding on $\Lambda^c$. ■
As an application of Itô's Formula, we have the following.

Example 15.7. Let $B$ be an $\{\mathfrak{F}_t\}$-adapted null at 0 Brownian motion on a standard filtered space $(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$. Then
(1)
$$\int_{[0,t]}B(s)\,dB(s)=\frac12\{B^2(t)-t\},$$
and
(2)
$$\int_{[0,t]}\Big\{\int_{[0,s]}B(u)\,dB(u)\Big\}\,dB(s)=\frac{1}{3!}\{B^3(t)-3tB(t)\}.$$
Proof. By Proposition 13.31, $B\in\mathbf{M}_2^c(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ with $[B]_t=t$ for $t\in\mathbb{R}_+$. For $n\in\mathbb{N}$, let $F(x)=x^n$ for $x\in\mathbb{R}$. Then $F\in C^2(\mathbb{R})$ so that by Theorem 15.3 applied to $X=X_0+M+V$ in which $X_0=0$, $M=B$, and $V=0$ we have
(3)
$$B^n(t)=n\int_{[0,t]}B^{n-1}(s)\,dB(s)+\frac{n(n-1)}2\int_{[0,t]}B^{n-2}(s)\,m_L(ds).$$
In particular for $n=2$ we have
(4)
$$B^2(t)=2\int_{[0,t]}B(s)\,dB(s)+t,$$
and therefore (1) holds. For $n=3$, we have by (3) and (4)
(5)
$$B^3(t)=3\int_{[0,t]}B^2(s)\,dB(s)+3\int_{[0,t]}B(s)\,m_L(ds)=3!\int_{[0,t]}\Big\{\int_{[0,s]}B(u)\,dB(u)\Big\}\,dB(s)+3\Big\{\int_{[0,t]}s\,dB(s)+\int_{[0,t]}B(s)\,m_L(ds)\Big\}.$$
Since the integrand in $\int_{[0,t]}s\,dB(s)$ is a continuous function, $(\int_{[0,t]}s\,dB(s))(\omega)$ is in fact equal to the Riemann–Stieltjes integral $\int_0^ts\,dB(s,\omega)$ for every $\omega\in\Omega$. Now $\int_0^ts\,dB(s,\omega)+\int_0^tB(s,\omega)\,ds=[sB(s,\omega)]_0^t=tB(t)$ by integration by parts for the Riemann–Stieltjes integral. Using this in (5), we have
$$B^3(t)=3!\int_{[0,t]}\Big\{\int_{[0,s]}B(u)\,dB(u)\Big\}\,dB(s)+3tB(t).$$
From this last equality we have (2). ■ The equalities (1) and (2) in Example 15.7 make an interesting contrast to the formulas $\int_0^ts\,ds=\frac12t^2$ and $\int_0^t\{\int_0^su\,du\}\,ds=\frac1{3!}t^3$ in ordinary calculus.
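Formula (1) admits a quick numerical sketch (illustrative only; standard library, arbitrary seed and step count). The key point is that the identity $B(t)^2=2\sum_kB(t_{k-1})\Delta B_k+\sum_k(\Delta B_k)^2$ is exact by telescoping, while $\sum_k(\Delta B_k)^2$ approximates $[B]_t=t$, so the left-Riemann sums converge to $\frac12\{B^2(t)-t\}$.

```python
import random

rng = random.Random(42)
n, t = 100_000, 1.0
dt = t / n

b = 0.0
ito_sum = 0.0   # sum of B(t_{k-1}) * (B(t_k) - B(t_{k-1}))
qv = 0.0        # sum of squared increments; approximates [B]_t = t
for _ in range(n):
    db = rng.gauss(0.0, dt ** 0.5)
    ito_sum += b * db
    qv += db * db
    b += db

# Exact telescoping identity: B(t)^2 = 2 * ito_sum + qv
print(b * b, 2 * ito_sum + qv)
# Since qv is close to t, ito_sum is close to (B(t)^2 - t) / 2 as in (1)
print(ito_sum, (b * b - t) / 2)
```

The discrepancy between the last two printed numbers is exactly $(qv-t)/2$, i.e. the error in approximating the quadratic variation.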
[II] Stochastic Integrals with Respect to Quasimartingales

Definition 15.8. For $A\in\mathbf{A}^{loc}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$, let $L_{1,\infty}^{loc}(\mathbb{R}_+\times\Omega,\mu_A,P)$ be the linear space of equivalence classes of measurable processes $\Phi=\{\Phi_t:t\in\mathbb{R}_+\}$ on $(\Omega,\mathfrak{F},P)$ such that $\int_{[0,t]}|\Phi(s,\omega)|\,\mu_A(ds,\omega)<\infty$ for every $t\in\mathbb{R}_+$ for almost every $\omega\in\Omega$.

Definition 15.9. Let $C(\mathbb{R}_+\times\Omega)$ be the linear space of equivalence classes of measurable processes $\Phi=\{\Phi_t:t\in\mathbb{R}_+\}$ on a probability space $(\Omega,\mathfrak{F},P)$ such that $\Phi(\cdot,\omega)$ is continuous on $\mathbb{R}_+$ for almost every $\omega\in\Omega$. Let $B(\mathbb{R}_+\times\Omega)$ be the linear space of equivalence classes of measurable processes $\Phi$ such that $\Phi(\cdot,\omega)$ is bounded on every finite interval in $\mathbb{R}_+$ for almost every $\omega\in\Omega$; for such $\Phi$ and for $A\in\mathbf{A}^{loc}(\Omega,\mathfrak{F},\{\mathfrak{F}_t\},P)$ we have, for every $t\in\mathbb{R}_+$ and almost every $\omega\in\Omega$,
$$\int_{[0,t]}|\Phi(s,\omega)|\,dA(s,\omega)<\infty.$$
Consider a quasimartingale X = X 0 +M+V on a standard filtered space (£2,5, {5<}, P). Let Q> be a predictable process on the filtered space such that O G L ^ ( R + x Q.,pm,P)
n Lfc(R* x £ 2 , ^ , K | , P ) .
S i n c e * G L£»(R+ x £2, W W J ,P),
I
f Aa.t)
\®(s,u)\dlVl(s,u)<00,
so that f[0t] <1>0, w) dV(s, u ) £ t for (i, w) G R+ x £2. The stochastic process If O(s) d l ^ s ) : t G R+} is adapted and its sample functions are of bounded variation on
CHAPTER 3. STOCHASTIC
334
INTEGRALS
every finite interval in R+ by the same argument as in the proof of Proposition 15.5. Thus the process is in V c '' oc (£2,5, {&}, P). Definition 15.11 Let X be a quasimartingale given by X = XQ + Mx + Vx with Mx G Mc2''oc(n,$,{$t},P)andVx G V C ' ,0C (Q,5, {St}, PI Let <J> be a predictable process on the filtered space such that
(i)
o> G i4%(K+ x Q, t e ) ,P)nLi;(i t x ntfilVxlP).
By the stochastic integral of <J> with respect to X we mean the null at 0 quasimartingale {fm]<$>(s)dX(s) : t G R+} defined by (2)
/
<5>{s)dX{s)= I
•no,*]
0(s)dMx(s)+
./[o,<]
f
®(s)dVx(s)
forteR*.
-Ao,!]
We also use the notations O • X, $ • Mx, and O • Vx /or the processes {/[0ij] ®(s) dX(s) : t e R+}, {/ [0!] $(s)(iMjf(s) : t 6 R + }, and {f[ot]&(s)dVx(s) : < 6 R+j respectively. Thus <£> • X = O • Mx +
J[o,t]
(F' o X)(s) dX(s) + ~ f
2 J[o,t]
(F" o X)(s)
d[Mx](s),
or in the alternate notations, (F o X)t - (F o X)0 = (F'oX)»X
+ ]-{F" OX)»
[MX].
Contrast the first of the two expressions above with the formula F(t) — P(0) = Jjj F'(s) da forPGC'(R). Theorem 15.12. Let X and Y be two quasimartingales on a standard filtered space ( A S , {&}, P) given by X = X0 + Mx + Vx and Y = Y0 + MY + VY where MX,MY
G
335
§75. TTd'S FORMULA
M?°c(n,$,{$t},P)andVx,VY 6 \c-loc(Q,$, {&},P). Then the stochastic integral of Y with respect to X,Y • X, a/ways exists as a n«H af 0 quasimartingale with martingale part MY.x =Y • Mx and bounded variation part Vy.x =Y • Vx. Proof. Since Y is a quasimartingale, it is in C(R+ x Q) which is contained in L vio( R + x "> Pv*xl> ■P)nL',°^(R+ x n , / i | V j r | , P ) . Thus by Definition 15.11, F « X exists and My.x = F • M^ and VY.X = Y • V*. Theorem 15.13. Lef X = X 0 + Mx + Vy and Y = Y0 + MY + VY be quasimartingales on a standard filtered space (£2, ^, {iJt}, .P) and let 0 tv/W *F be predictable processes on the filtered space such that (1)
<S,r
6
L^(R+xO,/i[Mx],P)nL'1%(R+xii,/i,^,,P)
n L ^ ( R + x a, K ^ ] , P) n LfeaL. X n, ^ , yy ,, P). Then for a, 6, a, /3 G R we /lave (2)
(aO + &¥) • ( a X + ,SF) = a a $ • X + a/?* • Y + i a ^ • X + fc^T • F
Zrc particular when <&, ¥ e B(R+ x £2), (2) w satisfied. Proof. By the linearity of stochastic integrals with respect to local L2 martingales and Lebesgue-Stieltjes integrals we have the linearity of stochastic integrals with respect to quasimartingales. I
[DI] Exponential Quasimartingales If X is a quasimartingale then since the exponential function has continuous derivatives of all orders Theorem 15.3 implies that ex = {exm : t e K+} is a quasimartingale. Definition 15.14. For a quasimartingale X, ex is called the exponential quasimartingale ofX. Theorem 15.15. Let X be a null at 0 quasimartingale given by X = Mx + Vx where Mx £ Mj' o c (£2,5,{&},P)a/KfFx G V c -' o c (n,5, {&},P). Thenfor the null at 0 quasi martingale defined by Y = X - \[MX], we have (1)
eK«>=l+/ J[0,t]
tY^dX{s)
336
CHAPTER 3. STOCHASTIC
INTEGRALS
on R+ x Ac where A is a null set in (£2,5, P)Proof. Since Y = X - ^[Mx] = Mx + Vx - \[MX], Y is a null at 0 quasimartingale with its martingale part My and bounded variation part Vy given by MY
(2)
=
MX
VY = Vx - tlMxl
By Theorem 15.3, we have (3)
e ™ - ey(0> = /
J[0,t]
e y w dMY{s) + f
J[0,t)
e™ dVY{s) + \ [
2 J[o,t]
ey(s> d[MY](s).
Now en0) = e° = 1. Substituting (2) in (3) we have (1). ■ Corollary 15.16. Let M e M%Ioc(£l, 5, {& }, P ) and let 4> be a predictable process such that <£> £ L^CR-f x £2, /U[M), P). If we define a null at 0 quasimartingale by setting (1)
Y(t)=
f ®(s)dM(s)-J[0,t)
[ ®2(s)d[M](s) 2 J[o,t]
fort£R+,
then its exponential quasimartingale eY satisfies the condition (2)
eYm=\+f
eYU)0(s)dM(s) J[0,t]
on R+ x Ac where A is a null set in (Q, 5, P). In particular the exponential quasimartingale ez of the quasimartingale Z defined by Z = M — \[M~\ satisfies the condition (3)
em
= 1+ /
eZ(s) dM(s)
J[0,t)
on R+ x Ac. Proof. Let X =
337
%15. ITO'S FORMULA
X - \[MX] satisfies the condition eYm - 1 = / [01] e y(s) dX(s). Thus for stochastic integrals with respect to M the quasimartingale e* ~ i [ M x l corresponds to the exponential function in Riemann integrals. Observation 15.18. Let us define the Hermite polynomials Hn(t, x), (t, x) 6 (0, oo) x R for n 6 Z+ as the coefficients in the power series expansion of exp{7x - ^ } in 7 £ R. Thus (1)
n
exp{7x - 2 - U = £
7
-ffn(t,x)
for 7 g R.
Differentiating the power series n times, we have (2)
Hn(t,x)
=
1
9"
drexpVx~ (x2)
1
= ^T
exp
7H
2
7=0
[
[ <9"
i2^)[a^
exp
t (
x\
\-2^-7;
-,=0
where the second equality is by completing square for 7 in yx — &. Now with y = 7 — ~, we have f^ = 1 and thus
5 expj97
4( T - -f)
_ 9 exp|- * 2 <9y -2*
}
By iteration we have gr,
If we set7 = 0, then y = — f and dy = —\dx so that
9"
1
4(- T)"}
» 3n
= M)^exp
1 •7=0
f v
4}-
Using this in (2), we obtain (3)
"■"
!=L
x ex ex i? forn€Z+, n! p{?7}£ [ 2 t j dx P^-^[ It
CHAPTER 3. STOCHASTIC
338
INTEGRALS
which is the customary definition of Hn(t, x). It follows from (3) that Hn(t, x) is a polyno mial in the two variables t and x of mixed degree 4 with leading term (n!)_1x™ for n G Z+. For instance we have H0(t,x)=l, Hl(t,x) = x, H2{t, x) = \x2 - | t ,
(4)
Hi(.t,x)=lzxi-12-xt, Hi{t,x) = ^ - \ x H + \t2, H5(t, x) = ^-0x5 - ±xH + \xt2
For small values of n, Hn(t, x) can be obtained more easily by writing
4 }.■{£*?}{*(=?)"n!1 )J
exp < -yx -
and then by long multiplication of the two power series on the right side and by equating the coefficients of 7 to Hn(t, x) according to (1). Theorem 15.19. Let X = Mx +Vx be a null at 0 quasimartingale on a standard filtered space. Then for n 6 N, we have
I
dX(U) [
J[0,t)
J[0,*i]
dX(t2)■■■ I
1 dX{tn) =
Hn([Mx]tlXt),
J[0,t„_i]
that is, (• ■ • ((1 . X) . X) . ■ ■ ■) . X =
Hn([Mx],X).
Proof. For 7 6 R , let (1)
Y = 7 X - ^Mx]
=7X-
^-[Mxl
With the quasimartingale -yX replacing the quasimartingale X in Theorem 15.15, we have (2)
ey(i)=l+7/
ey^dX(s).
A0,t)
By (1) above and (1) of Observation 15.18, we have
(3)
e y( " = exp f 7Xt - ^-[Mx]t\
= £
jnHn(.[Mx]t,Xt).
§/5.
ITO'S FORMULA
339
Note that since Hn(t, x) is a polynomial in t and x and since [Mx] and X are quasimartingales, Hn([Mx], X) is a quasimartingale by Theorem 15.3. For brevity, let us write Ln for Hn([Mxl X). Then by (2) and (3), we have £
n 7
L„(t) = 1 + 7 /
nSZ+
E 7"-Ms) dJf(a) = 1 + £
•'("■'Inez,
7"
+
nlz.
' /
£„(«) <*X(s).
^.'l
Equating the coefficients of 7" of the two sides of the last equation for n G Z+ we have / £o(t) = 1 \ £»(*) = /[0,t] i n - i W dX(a)
for n £ N
and therefore by iteration L„(i) = =
/
£„_,(*!) dX(t,)
/
{/"
£„_2(t2)dA-(i2)}dX(t,)
./[o,<] U[o,ti]
=
f
If
J
{[
L_3(ij)(Ur(*3)}«W(t2)}dX(ti)
J[o,fl U[0,ti) U[o,t 2 ]
= /•••(/ y[o,d
J
(/
J
lrfX(u}^(t,-i)}---^«,),
|/[o,t„_ 2 ] [^[o,( n _,]
j
j
that is, £„(*)=/
dX{ii) f
J[0,t]
dX(t2)-~
I
J[0,til
\dX(tn).
J[0,t„-,]
Since Ln = if„([Mx), X ] by definition this completes the proof. ■ Example 15.20 To continue with Example 15.7, let B be an {5,}-adapted null at 0 Brownian motion on a standard filtered space (Q,5, {&}, P). By Theorem 15.19 and by (4) of Observation 15.18, we have t
I
J[0,t]
r dB{U) I
./[0,
r dB(h) /
S dB{W /
J[0,t2]
B*(t) tB2(t) t2 = - ~ - ~ 1 + ^,
1 dB(U) = H4([B]t,Bt)
^4
J[0,t3]
and similarly / J[0,t]
=
dB(U) I J[0M]
dB(t2) I
-/[0,*2] 3
dB(t3) f
•/[0,t3]
B\t) tB (t) ?B(t) ffs([5L,.B,)--J2o---J2+ —g—■
dB(U) f JlO.U]
1 dB(ts)
4
8
CHAPTER 3. STOCHASTIC
340
INTEGRALS
[IV] Multidimensional Ito's Formula Definition 15.21. By a d-dimensional quasimartingale we mean a d-dimensional stochastic process X = {Xt : t G R+} on a standard filtered space (Q, 5, {$t}, P) whose components X ( 1 ) , . . . , X w are quasimartingales on the filtered space, that is, X ( , ) = X$ + M w + V® where XQ1' is a real valued ^-measurable random variable, M w £ MJ ° c (£2,5, {5«}, -P). and 7 ® G V C ''-(Q, 5, {&}, P),/or i = 1 , . . . , d. Let C 2 (R d ) be the collection of all real valued functions F on Rd whose first order partial derivatives Ff, i = 1 , . . . , d and second order partial derivatives i^"-,», j = 1,... d, are all continuous on Rd. Theorem 15.22. (Multidimensional Ito's Formula) Let X = {Xt : t G R+} be a d-dimensional quasimartingale on a standard filtered space (Q, 3 , {5t}> P) with compo nents given by X ( i ) = Xf + M (i) + V(i> for i = 1 , . . . , d. Let F G C 2 ^ ) . Twera F o X is a l-dimensional quasimartingale given by (F o X)t - (F o X) 0 = £
;=i
I
(Ft o X)(s)
dM{%)
J
\M
+ 32 [ (F^oX)(3)dV^(s)+1-32 I
(i^oX)( 5 )d[M ( '\M«]( 5 )
o/i R+ x Ac wnere A is a null set in (Q., 5, P). Proof. Theorem 15.22 can be proved in the same way as Theorem 15.3 using Taylor's Theorem for F G C^R*). that is, for any a, b G R d , F(b) - F(a) = 32F!(a)(b, - a,)+ i £ i=i
^ = 1 ( c ) ( 6 ; - a,)(6,- - aj),
i,j=i
where c = (cu ■ • •, cd) G Rd with Ci G (a,- A &,-, a,- V 6,)for i ~ 1 , . . . , d, ■ Let us note that if we write MXc> for the martingale part of the quasimartingale X ( i ) in Theorem 15.22 then in the notations introduced in Definition 15.11 the multidimensional Ito's Formula takes the following form: (F o X)t - (F o X) 0 = 3JF! o X) . X w + \ £ ( F £ o X)(s) . ;=i
* i,j=i
[MxV»tMxw].
%15. LTO'S FORMULA
341
Theorem 15.23. Let B = ( P ( 1 ) , . . . , P
X«\t) = Xf + j^f k=l
■/[0,*]
Y^k\s)dBik\s)+
I
Z«\s)mL{ds).
J[0,t]
Then X is a d-dimensional quasimartingale and for any F € C2(R
(Fo X)t - (Fo X)0 = E E / + W fc? •'[0,0
(l!?°J0«5'ft*)M<*Bwto (^'°^)(s)Z W (5)m L (d 5 )
+ J E E / (fI>i)(S)rW)(S)ytti»(S)mL(
»,i=l t=i •/[°'i]
c
on R+ x A where A is a null set in (Q, 5, P). Proof. Let us show first that XM is a quasimartingale. Now since Y{l'k) 6 L2°L(R+ x Q., mL x P ) , the stochastic integral y(i'*> • B(*> is defined and is in M;;''0C(Q, 5, {&}, P). Then (3)
M w = E^(',A:)»BWeM2'°c(«,5.{Srt},-P)
fori = l , . . . , d .
ifc=l
For i = 1 , . . . ,
Z (,) (s, w)mi(ds)
for (i, w) e R+ x Q.
•/[o,<]
Every sample function of V{*> is absolutely continuous and in particular of bounded varia tion on every finite interval in R+. The continuity of the sample functions also implies that F ( , ) is a measurable process. By Theorem 10.11, V (,) is an adapted process. Thus (4)
7 ( , ) 6 V ^ ( I 2 , 5 , { 5 , } , P ) far » « ! , . . . , *
342
CHAPTER 3. STOCHASTIC
INTEGRALS
By (3) and (4), X ( l ) = Xf + M(i) + V ( 0 is a quasimartingale for i = 1 , . . . , d and therefore X is a d-dimensional quasimartingale. By Theorem 15.22, we have (5)
(F!oX)(s)dM(i)(s)
(FoI),-(FoI), =W {F'ioX){s)dV&{s)+l-Jz
+ y-j
[
(P/>X)(s)d[M('\Mw](s),
on R+ x Ac where A is a null set in ( 0 , 5 , P). Now by Theorem 14.21, we have
(Ff o X) • M (,) = J2(F! o X) . {Y{i'k) • P w ) = £ ( * ? o X)F (i ' 4) . B w
(6)
Next by the Lebesgue-Radon-Nikodym Theorem we have (7)
/
(F;oX)(s)dV-'\s)=
[
J[0,t]
(F! o
X)(s)Z{i)(s)mL(ds).
J[0,t]
Finally
[Af®, M°>] = f ) £[y<«» . B«, y W> . B<«] i=l «=1
= E E /
y(,'1*)(a)y°'A(a)d[Bw,B{0](s)
= T f i*• \s)Y k
(jA
(s)mL(ds),
where the second equality is by Theorem 14.20 and the third equality is by Proposition 13.33. Therefore (8)
/
■/[0,t]
(F!'i3oX)(s)d[M^\M^](s)
= J2 I
Using (6), (7), and (8) in (5) we have (2).
(F^oX)(s)Y^k\s)Y^%)mL(dS).
J[0,t]
k=l
|
Theorem 15.24. Let X be a d-dimensional quasimartingale on a standard filtered space (Q, $, {& }, P) with components given by X^ = X 0 0 ) +M 0 ) where X00> is an do-measurable
%15.
343
ITO'SFORMULA
real valued random variable and MU) G M^0C(Q,5, {$t},P)forj = l,...,d. Then X is a d-dimensional {$t}-adapted Brownian motion if and only if every sample function of X is continuous and [M(j\Mm]t
(1)
= Sjikt
forteR+andj,k
=
l,...,d.
Proof. If X is a d-dimensional {5(}-adapted Brownian motion then every sample function of X is continuous and M = (Mm,...,Mw) is a d-dimensional {5<}-adapted null at 0 Brownian motion so that (1) holds according to Proposition 13.33. To prove the converse, according to Proposition 13.25 it suffices to show that for every s , t e R t such that s < t and y G Rd we have E[e .
_ e-^C-s)
a .e.
on(Q,3s,P),
or equivalently E l y f o ' * ' - * ' ^ ] = P(A)e-^v-s)
(2)
for A G £,.
Consider the function F(x) = Sv'^ for x G Rd We have F G C2(Md) with Fj(x) = iy^'*) and F?k(x) = -yjykeiM for j , k = 1,..., d. Thus by Theorem 15.22 and by (1) we have
(3)
ei<**)
_ e '(^> = i V y, / ~^
e'<^<*» dM0)(«) - ^
J(S,(]
2
Let yl G 5 S . Multiplying both sides of (3) by e~'^'x'hA
(4) E \e'^x'-x'h y
A - P(A) = zJ2y^U"{yMs))^A '
. j
L
/
e^^W**)-
7( s ,t]
and integrating we have
I e'^M»» dM«\u) J(s,«]
_ Ji^ E \1A f e^XM-x^mL(du) 2
L
.
J(s,t]
By (1), [M 0 ) ] ( = i and then E([M 0 ) ] ( ) = t < oo for every t G R+. Thus by Theorem 12.36, M 0 ) G M£(Q,5, {&}, P ) for j = 1,... ,d. The process e'"<s,:?> is continuous and hence predictable. It is also bounded so that it is in la,c<,(M+ x £2, M[MW], -P) by Observation 11.19. Then the stochastic integral e'<*'x'«Af °"' is defined and is in M|(£i, 5, {3<}, P). Thus
CHAPTER 3. STOCHASTIC
344
INTEGRALS
we have E [/{s t] e*
E {e^x'-x°hA}
- P(A) = - ^
j
E [e'^W-^lx]
mL(du).
Now if we define a real valued function tp on [s, oo) by setting ip(t) = E [e^y-x,~x'hA]
(6)
for t G [5,00),
then by (5) we have (7)
V(t)-P(A)
=- ^ - f 2
J(s,t]
The unique solution of this integral equation is given by (8)
By (6) and (8) we have (2). I As a particular case of Theorem 15.24, we have Corollary 15.25. Let X e M 2 (£2,5,{fo},P). Then X is an {^-adapted motion if and only if
Brownian
1°. every sample function of X is continuous, 2°. {X? —i:t€
R+} is a right-continuous null at 0 martingale.
§16 Ito's Stochastic Calculus [I] The Space of Stochastic Differentials For a real valued function / with a continuous derivative / ' on R, we have /(i 2 ) — f(tj) = ft? f'(t)dt for any t\,t2 € K such that i] < t 2 according to the Fundamental Theorem
§/6". LTO'S STOCHASTIC
CALCULUS
345
of Integral Calculus. We write df for / ' dt and thus f(t2) - }{tx) = //* df(t). Every antiderivative g of / ' satisfies the condition g(t2) - g(U) = ffi f{t)di = f(t2) - f(h) for any t\,t% G R such that t\ < t2. Also every antiderivative g of / ' has the form g = / + C for some C G R. For a quasimartingale X on a standard filtered space (£1,5, {$t},P) the derivative is undefined. Instead we define the stochastic differential dX of X as the collection of every quasimartingale Y on the filtered space satisfying the condition that Y(t2) - Y(ti) = Y(t2) - Y(ti) any tut2 g E+ such that t\ < t2 almost surely. Equiva lent^, dX is the collection of every the quasimartingales Y on the filtered space of the form Y = X + C where C is a real valued 50-measurable random variable on (D, 5, F). Note then that X £ dX andifY E dX then dX = dY Definition 16.1. Let Q(Q,5, {5t},P) fee £/ie collection of all quasimartingales X on a standard filtered space (Q,5, {3i},P), tfiaf js, X = X0 + Mx + Vx where Mx G M j ' o c ( n , 5 , { 5 t } , P ) , Vx 6 V c - , o c (n,5,{5 f },PX anrfXo G (&,), the collection of all real valued ^Q-measurable random variables on (Q, 5, P)Observation 16.2. By Proposition 15.2, the decomposition of X € Q(fi, 5, {5t}, P) as a sum of the random variable X0 G (5o), the martingale part Mx G M2lcc(Q.,$, {&},P), and the bounded variation part V* G VC,,0C(Q, 5, {5<},P) is unique. Let us call this decom position the canonical decomposition of a quasimartingale. Since M2 °c, Vc''oc, and (#0) are linear spaces, so is Q. Furthermore for X,Y e Q and a, b G R, if we let MaX+bY and Vax+bY be the martingale part and bounded variation part of the quasimartingale aX + bY, that is, aX + bY = (aX0 + bY0) + (aMx + bMY) + (aVx + bVy), then by the uniqueness of decomposition we have MaX+bY = aMx + bMY VaX+bY =aVx + bVY. Let X e QiQd, {dt}, P) given as X = X0 + Mx + Vx. 
For a predictable process <£ on the filtered space which is in B(R+ x Q), we have by Remark 15.10 and Definition 15.11 O . Mx
={[
<5(s) dMx(s) : t G R + } G M£' oc ,
J[0,t]
®,VX
={[
*(«) dVx(s) : t G R + } G Vc'loc,
J[0,t]
®,X
=
CHAPTER 3. STOCHASTIC
346
INTEGRALS
and as an alternate notation for (O • X) t for t G K+, we have / J[0,t]
3>(s)dX(s)=
[
®(s)dMx(s)+[
J[0,t]
0(s)dVx(s). J[0,t]
If X, Y G Q and if we assume that every sample function of Y is continuous then Y is a predictable process which is also in B(K+ x Q) so that Y • X exists in Q. Definition 16.3. We say that X, Y G Q ( A 5, {&}, P ) are equivalent as quasimartingales and write X ~ Y iff/iere ex^te a WK/Z .ye* A in (Q, 5, P ) such that for w G Ac we have X(i,w)-X(s,w) = Y(t,w)-Y(s,w)
for every s,teR+,
s
For X G Q(Q, 5 , {&}, P), we write dXfor the equivalence class in Q(Q,#, {&}, P ) vWtfi respect to the equivalence relation ~ fo w/ii'c/i X belongs and call it the stochastic differ ential ofX. We write AQfor the collection of the equivalence classes in Q(£2, 5 , {3t}j P) with respect to ~. For two stochastic processes X and Y on a probability space (Q, 5 , P ) we write X = Y to indicate their equivalence in the sense of Definition 2.1, that is, there exists a null set A in ( Q , 3 , P ) such that X(t,u) - Y(i,w) for (i,w) G R+ x Ac. If £ is a real valued random variable on (Q,§, P), then we write X = f to indicate that X(i,u>) = £(u>) for ( i , u ; ) € R + X Ac. Remark 16.4. Let X, Y G Q(Q,3,{5 4 },P) given by X = X 0 + Mx + Vx and Y = Y0 + My + Yy respectively. Then (1) (2) (3)
X tY & X-Y = X0-Y0, X^Y & MX=MY and VX = VY, X X Y <* X - Y = Z0 for some Z 0 G (&,)•
Proof. 1) (1) is immediate from Definition 16.3 by considering 5 = 0 . 2) If X £ Y then by (1) we have X - Y = X 0 - Y0 G (So)- Since X - Y g Q this implies M*_y = 0 and VX-Y = 0 by Proposition 15.2. But according to Observation 16.2, MX-Y = Mx -MY and VX-Y = Vx - VY. Thus Mx = MY and Vx =VY. Conversely, if Mx = MY and Vx = VY, then X - Y = X0 - Y0 and thus X ~ Y by (1). This proves (2). 3) By (1) the implication =4- in (3) holds. Conversely if X - Y = Z0 G (j 0 ) then by Proposition 15.2 we have MX-Y = 0 and VX-Y = 0. Then by Observation 16.2, we have Mx = MY and VX = VY. Thus by (2) we have X ~ Y This proves (3). ■
§ 16. LTO'S STOCHASTIC
347
CALCULUS
Observation 16.5. 1) Since (5o) contains the identically vanishing random variable 0 and M2'' oc (Q,3, {&}, P ) and VC''°C(Q,5, {&}, P) contain the identically vanishing stochastic process 0, M = 0 + M + 0 G Q ( n , 5 , {&},P) for any M G M^^Q, £, {&},P) and similarly V = 0 + 0 + V G Q(Q,3, {&},P) for any V G V c ''° c (n,3,{S ( },P). Thus M ^ C Q ^ - f & ^ P ) and VC''°C(Q,5, {3 ( },P) are linear subspaces of Q(Q,5,{5,},P). Note a l s o M ^ o c ( n , 5 , {5,},P) D V c '' o c (n,5, {&},P) = {0} by Proposition 15.2. 2) Let X G Q(£2,5, {5<},P) be given in its canonical decomposition as X = Xo + Mx + Vx . If X & M for some M G M=''oc(n, 5, {&}, P), then Mx = M and V* = 0 by (2) of Remark 16.4. Similarly if X ~ V for some V G V c '' oc (Q,5, {&},P), then M* = 0 and Vx = V. 3)SincelVf 2 '' oc (Q,5,{5 t },P) C Q«2,5, {&},P), for M G Mj' DC (Q,5,{5<},P) the stochastic differential dM is defined and consists of all X G Q(£2,5, {dt},P) such that X = Z0 + M where Z 0 £ (3o) according to (3) of Remark 16.4. Similarly for V G V c '' o c (Q,5,{5 ( },P), dV is defined and consists of all X G Q(fi,5, {5 ( },P) such that X = Z0 + V where Z 0 G (5o)Definition 16.6. Wfe define dMj' 0 0 as f/ie subcollection of dQ consisting of dM for M G M2ioc(Si, 5, {5t}, P)- Similarly we define d\c,loc as the subcollection of dQ consisting of dVforV e\c-'°c(Q.,3,{dt},P). Definition 16.7. We definefour operations in dQ. ForX,Y G Q ( A 5 , {3t},P)andc Zer addition, scalar multiplication and product be defined respectively by (1) (2) (3)
dX + dY = d(X+Y), cdX = rf(cX), dY = d[Mx,MY],
andfor a predictable process ® G L ^ R * x Q, p[Mxh •-multiplication o/O fry c/X fee defined by (4)
G K,
<6«cfX
=
P) D L ^ R * X Q, ^ ^Vx |, P), to
d(*.X).
Sometimes dX ■ dY is written as dXdY and <E> • dX is written as OdX. Regarding (3) and (4), note that since [Mx, MY] G \cM and since $ i l £ Q , d(<£ • X ) is defined.
C Q, d[Mx, MY] is defined
CHAPTER 3. STOCHASTIC
348
INTEGRALS
With addition and scalar multiplication as defined above dQ is a linear space over R. We show next that dQ is a commutative ring with respect to addition and product. If we define addition and product in B(R+ x Q.) by pointwise addition and product of its elements on R+ x Q, then B(R+ x Q) is a commutative ring with identity. We show below that with respect to addition and •-multiplication, dQ is a commutative algebra over the ring B(R+ x fi). Observation 16.8. For X,Y,Z (1)
£ Q(fi, 5, {&}, P), we have
(dX + dY) ■ dZ = dX -dZ +
(2)
dX-dY
dY-dZ,
= dY ■ dX.
Thus dQ is a commutative ring with respect to addition and product in Definition 16.7. Proof. To prove (1), note that by (1) and (3) of Definition 16.7 we have {dX + dY) ■ dZ = d(X + Y)-dZ = d[Mx+Y,Mz) = d[Mx + My, Mz] = d([Mx, Mz] + [MY,MZ]) = d[Mx,Mz] + d[MY, Mz] = dX-dZ + dY ■ dZ. To prove (2), note that by (3) of Definition 16.7 we have dX ■ dY = d[Mx,MY] Theoreml6.9. LetX,Y filtered space such that
6 Q(£l,$,{$t},P)
= d[MY, Mx] = dY ■ dX.
■
and® and ¥ be predictable processes on the
L^(R+xa,WMxl,P)nLi%(R+xQ,^|Vx|,P)
n L^(R + xn,fHMYhP)nL[°c00(R+ x Q l M | ^ , , P ) . Then (1) (2) (3)
Q>»(dX + dY) = »dY, (® + *V)»dX = ®»dX + V*dX,
In particular, (1), (2), and (3) hold for ,¥ <= B(R+ x Q). Thus dQ is a commutative algebra over B(R+ x £2) with respect to addition and •-multiplication in Definition 16.7. Also for 0>, *F 6 B(R+ x O) we have (4)
O T • dX =
§ 16. ITO'S STOCHASTIC
CALCULUS
349
Proof. 1) To prove (1), note that by (1) and (4) of Definition 16.7 and by the linearity of the stochastic integral we have
= d(9»X)+dQ¥»X)
=
^dX+VmdX.
3) To prove (4), note that by (3) and (4) of Definition 16.7, we have on one hand (5)
= d ( * • [Mx, My]),
and on the other hand (6)
(O • dX) ■ dY = d(0 *X)-dY
= d[M^x,MY]
= d[<5 • Mx,
My],
where the last equality is from the fact that the martingale part of the stochastic integral of G> with respect to X is the stochastic integral of <£ with respect to the martingale part of X. By Theorem 14.20, we have [<5 • Mx, MY]t = fm «Mx,My] = d(4>» [Mx,MY]). Using this in (6), we have (4) from (5) and (6). 4) To prove (4), let us observe first that 4>*F • X = <X>*P • Mx + **P • Vx ■ By Theorem 14.21, we have O^P • Mx = <£ • (*P • Mx). Also for every t e R+ we have (V»Vx)t=[
4>(sms)dVx(s) J[0,t]
=
f J[0,t]
®(s)d\ [ I-'[0,5]
V(.u)dVx(u)\
= (
by the Lebesgue-Radon-Nikodym Theorem so that <J>Y • Vx = $ • 0? • Vx)- Thus O)^ • X =
CHAPTER 3. STOCHASTIC
350
INTEGRALS
This proves (4). ■ Corollary 16.10. Under the same assumptions as in Theorem 16.9, we have (1)
(2)
d(<X> • X) ■ d(¥ • Y) =
and for predictable processes <£],..., O n G B(E+ x Q), we fcave « ! . . . 4»„ • dX = * , • ( ■ • ■ (*»_i • (*„ • dX)) ■ • •).
(2)
Proof. The first equality is by (3) of Theorem 16.9. The remaining equalities are from the fact that the product in dQ is commutative and the fact that O • dX, O • dY € dQ. To prove (2), note first that by (3) of Definition 16.7 and by Definition 15.11, we have d(* • X) ■ dQ¥ • Y) = d[M^tX, MWmY] = d [ 0 • Mx, V • MY]. Now [<S • Mx, ¥ • My] = O T • [Mx, My] by Theorem 14.20 so that by (4) and (3) of Definition 16.7 we have d[<& • Mx,¥
• My] = d(OY • [M x , My]) = 0»P • d[Mx,MY]
= <£¥ • (dX • dY).
Therefore d(<£ • X) • d(»F • Y) =
dQ dQ C dV c ' 0C , dV c ' / o c -dQ={dO}, dQ dQ dQ = {dO}.
§ 16. FT&S STOCHASTIC
CALCULUS
351
Proof. To prove (1), note that if X, Y g Q then by (3) of Definition 16.7 we have dX ■ dY = d[Mx,MY] € dVc''0C. To prove (2), note that if V £ dVc''DC, then Mv = 0 so that [MV,MX] = 0 for any X e Q and thus dV ■ dX = dO. Finally for X,Y,Z eQwe have dX-dY £ dV c ' ,oc by (1) and then dX ■ dY ■ dZ = dO by (2). This proves (3). ■ Definition 16.12. Let X € Q(Q, 5, {& }, P). For the stochastic differential dX € dQ and for s, t 6 R+ SMC/I f/iaf s < ( w define (1)
/
dX(u) = X(t) - X(s).
J(s,t]
For dX, dY edQ and a,be K, we
/
(adX + bdY)(u) = a (
J[0,t]
dX(u) + b f
J[0,t]
dY(u).
J[0,t]
Remark 16.13. 1) Recall that the stochastic differential dX consists of all quasimartingales Y e Q ( i i , 5 , {5(},P) such that Y ~ X, that is, Y(t,u) - F{s,w) = X(t,ui) - X(s,u) for every s, t e R+, s < i, and u £ Ac where A is a null set in (Q., 5, P). Thus if Y is an arbitrary representative of dX, then we have L t] dX{u) = Y(t) — Y(s) = X(t) — X(s). This shows that / (s t] dX(u) in (1) of Definition 16.12 is well defined. 2) According to Definition 15.11, for the stochastic integral of the process 1, that is, the process 4>(i,w) = 1 for (t,uj) 6 R.+ x Q, with respect to a quasimartingale X S Q(«, 3, {&}, P) g^en by X = X0 + Mx + Vx, we have /
lM(u)dX(u)=
J(s,t]
= {Mx(t)
I Jte.t]
lM](u)dMx(u)+
I
\{s,t](u)dVx{u)
J(s,t]
- Mx(s)} + {Vx{t) - Vx(s)} = X(t) - X(s).
Thus / [01] dX(u) in Definition 16.12 is equal to/ [ 0 ( ] 1 dX(u) in Definition 15.11. In partic ular for the stochastic differential d(& • X) where 3> is a predictable process in L ^ C K t x ii,/i [ M x ],P)nL' 1 %(lR + x a , M | V x | , P ) , wehave / J(.s,t]
(O . dX){u) = ( J(s,t]
®(u) dX(u).
J(.s,t]
Observation 16.14. Let X = (X ( 1 ) ,... ,X<«) where X ' 1 ' , . . . ,X^ e Q(Q,$,{ft},P) andletP € C 2 (R' i ). According to Theorem 15.21 there exists a null set A in (Q, 5, P ) such
CHAPTER 3. STOCHASTIC
352
INTEGRALS
that on Ac we have for any s, t G K+ such that s < t (F o X)(i) - (F o X)(s) = £ { ( ( J V o X) . X (i) )(i) - ((Jf o X) . X » ) ( 5 ) } i=i
=
J2 li((F"j .',j=i
° X > • Wfcw, AfjcwDM - « J ^ o X ) . [M*<.„ Afjr«])(a)}.
2
By Definition 16.3 and Theorem 16.9, we have i
d
d
d(F o X) = £
i=\
.,j=i
[II] Fisk-Stratonovich Integrals L e t X , F G Q ( Q , 5 , { 5 J , P ) . ThenF G BtR.xfl) C L ^ ( R t x « , / i [ M j r ] , P ) n L ' , % ) ( R t x Q, ^ | V x | , P ) so that F • X exists in Q(Q,5, { 5 J , P) by Definition 15.11 and d(Y • X ) = F • dX by Definition 16.7. Lemma 16.15. 7 / X , F G Q(Q,3, {&},P),tfiercX Y G Q(Q,$,{3t},P)
and
d(XY) = X»dY + Y»dX + dX-dY. Proof. Consider the 2-dimensional quasimartingale (X (1) , X (1) ) = (X, Y) and F G C 2 (R 2 ) defined by F(x, y) = xy for (x, y) G K2. By the multidimensional Ito's Formula we have F o (X (1) , X (2) ) G Q, that is, XY G Q, and furthermore since F{{x, y) = y, F±(x, y) = x, F[\(x,y) = F£2{x,y) = 0, and F;[2(x,y) = Fi'A(z,y) = 1 for (x,y) G R2, we have according to the multidimensional Ito's Formula as given in Observation 16.14 d(XY)
= X (2) . dXm + X(1> . dXm + \{dXm = Y*dX
+ X»dY
+ dX-dY.
■ dXm + dXi2) ■ dXm}
■
Definition 16.16. Let X, Y G Q(Q, 5, {&}, P). The symmetric multiplication ofY by dX is defined by YodX
= Y*dX
+ \dY ■ dX. 2
§/6~. ITO'S STOCHASTIC Note that YodX
353
CALCULUS
e dQ.
Lemma 16.17. LetX,Y
£ Q(Q, 5, {&}, P). ffcen d(Xr) = X o d F + F o d X
Proof. By Definition 16.16, we have Y o dX = Y • dX + \dY ■ dX and X o dY = X*dY + \dX ■ dY. Adding the two equalities side by side and recalling Lemma 16.15 we have X o dY + Y o dX = X • dY + Y • dX + dX ■ dY = d(XY). ■ Theorem 16.18. For X,Y,Z (1) (2) (3) (4) (5)
e Q(Q,5, {&}, P), we have
Xo(dY + dZ) = XodY+XodZ, (X+Y)odZ = XodZ + YodZ, XYodZ = Xo(Yo dZ), X o (dY ■ dZ) = X • (dY ■ dZ), Xo(dY ■ dZ) = (Xo dY) ■ dZ.
Proof. To prove (1), note that X o(dY + dZ) = X od(Y + Z) = X • d(Y + Z) + -dX ■ d(Y + Z) = X»dY
+ X»dZ
+ -(dX
dY + dX -dZ) = XodY
+
XodZ,
where the first equality is by (1) of Definition 16.7, the second equality is by Definition 16.16, the third equality is by (1) of Definition 16.7 and Observation 16.8, and the last equality is by Definition 16.16. The equality (2) is proved likewise. To prove (3), note that (6)
XY o dZ = XY • dZ + -d(XY) = XY»dZ
■ dZ
+ -(X • dY + Y • dX + dX ■ dY) ■ dZ
354
CHAPTER 3. STOCHASTIC
by Lemma 16.15. Since YodZ (7)
Xo(YodZ)
= XY»dZ
e dQ, we have by Definition 16.16
= X*(YodZ)
+
)-dX-(YodZ)
+ ]-dY ■ dZ) + X-dX (Y»dZ
= X»(Y»dZ
INTEGRALS
+ X-dY ■ dZ)
+ -(X • dY) ■ dZ + )-{Y • dX) ■ dZ + jdX
-dY ■ dZ
by (4) of Theorem 16.9 and (1) of Corollary 16.10. By (6), (7), and the fact that dX ■ dY dX = dO, we have (3). To prove (4), note that since dY ■ dZ £ dQ we have by Definition 16.16 and (3) of Theorem 16.11 X o (dY • dZ) = X • (dY ■ dZ) + -dX ■ (dY ■ dZ) = X • (dY ■ dZ). To prove (5), note that (X o dY) -dZ = (X»dY = (X»dY)-dZ
+ ^dX ■ dY) ■ dZ
=X» (dY ■ dZ) = X o (dY ■ dZ),
where the second equality is by (3) of Theorem 16.11, the third equality is by (3) of Theorem 16.9, and the last equality is by (4). ■ Theorem 16.19. Let X = (X<«,... , I < « ) where X^ G Q(fi,£, { & } , P ) / o r i = l , . . . , d and let F e C 3 ^ ) . Then FoX e Q(Q, 5, {&}, P ) and d
d(F oX) = J2(Fl o X) o dX{i) i=\
Proof. By Definition 16.16, we have (1)
(F! o X) o dX{i) = (F[ o X) • dX(i) + l-d(Fl o X) ■ dXm.
Since F- £ C2(Rd), by Ito's Formula as given in Observation 16.14 we have
d(F; o X) = £(*& o x). ax™ +l~£ Wkk ° X). dx& ■ dx«l
§ 16. LTO'S STOCHASTIC
CALCULUS
355
Substituting this in (1) and recalling that dX(i) ■ dXu) ■ dX{k) = dO by Theorem 16.11, we have
£(F;oi)odx" = Yfift°x)• «ur w +JE^°^)' d x & -dx^ = d{FoX). ■ Definition 16.20. Ler X,Y € Q(£2,3, {&},P) fee given by X = X0 + Mx + Vx and Y = YQ + My + Vy. The Fisk-Stratonovich integral ofY with respect to X,Y o X, is the quasimartingale Y o X defined by (1)
YoX
= Y •X +
hLMy,Mx}
and thus (2)
d(Y oX) = d(Y»X)
+ ^ d[MY ,Mx] = Y»dX
+ -dY-dX
=
YodX.
We also use the notation J[ot] Y o dX for (Y o X)t, that is, f[ot] Y o dX = / [0 (] d(Y o I ) ( s ) forteR+. Example 16.21. Let B be an {g(}-adapted null at 0 Brownian motion on a standard filtered space (£2,5, {5,}, P). Then / BodB= Jio.t]
f B(s)dB(s) + - f d[B,B](s) J[o,t] 2 J[o,t]
Since / [0 (] B(s) dB(s) = \{B2(t) — t) as we showed in Example 15.7 and since [B, B]t = t, we have /
BodB
=
)-B\t\
which resembles /0' sds = \t2 Theorem 16.22. Let X € M|(£2, #, {St}, P) and let Ybea bounded continuous martingale on (£2,3, {5i}, P). For n G N, let t^bea partition o/E+ into subintervals by a strictly increasing sequence {tn,k : k e Z+} in R+ such that tnfi = 0 and tn
356
CHAPTER 3. STOCHASTIC INTEGRALS
lim |A„ | = 0 where |A„ | = sapk^(tn,k — tn,k-1 )■ Let t 6 K+ befixed.For each n E N, Ze?
F o dX = P • lim £ ^{F((„,i) + K(<„1*_i)}{A-(tBii) - X(t n , t _,)}.
Proof. We have £ ^{F(i„,fc) + F(tn,,_,)}{X(in,A) - *(*„,*_,)} = Y.Y{in,k-i){X{tn,k)
- X(tn,k_,)}
k=\
+ \ Y.{Y(tn,k) - Y(tn,k-i)}{X(tn:k)
- *(*„,*_,)}.
By Proposition 12.16 P ■ lim £ r(i n ,n){X(i n ,i) - X(tn^)}
= (Y . X)(t).
By Theorem 12.27 p
>)" - F(*n>-■t)}W*«.,*)- - -X" (*«,*-■.)} == [y,x](i).
Thus
*=1
•i)H*(*M>-Jrcw- .)}
Z
= (Y . X)(<) + - [F, X](i) = / 2
J[0,t]
F o dX.
Chapter 4 Stochastic Differential Equations §17
The Space of Continuous Functions
[I] Function Space Representation of Continuous Processes Let W d be the collection of all Revalued continuous functions w on R+. Our objective is to introduce a cr-algebra 2Urf of subsets of Wd and an increasing system of sub-a-algebras of 13d, {2U? : t E R+}, with the following properties: 1°. For every t e R+, the mapping qt of Wd into Rd defined by qt(w) = w(t) for tii 6 Wrf is a 2Ud/*BMd-measurable mapping so that g = {qt : t g R + } is a d-dimensional {iHJ^-adapted stochastic process on the filtered measurable space (Wd, Wd, {Wd}). 2°. For an arbitrary continuous d-dimensional stochastic process X = {Xt : < G M+} on an arbitrary probability space ( 0 , 5 , P), the mapping X of Q into W d defined by A"(a;) = X(-,u>) for w € Q is S/aU^-measurable. Under the assumption of 1° and 2° let P% be the probability distribution of X on the measurable space (Wd,
358
CHAPTER 4. STOCHASTIC DIFFERENTIAL
EQUATIONS
Definition 17.1. Let Wd be the collection of all Rd-valued continuous functions on R+. For t e R+, let qt be a mapping of\Vd into Rd defined by qt(w) = w(t)for w £ Yfd. Let T = { i , , . . . , i„} be a finite strictly increasing sequence in R+ and let E\,...,En £ 55B
= {we\Vd:
(iu(*i), ■ ■ ■, w(tn)) £ £ , x ■ ■ ■ x E n }
Let 3 be the collection of all cylinder sets in Wd, and for t £ E+ let 3[o,«] be the collection of all cylinder sets with indices r = {t\,... ,<„} preceding t, that is , tn < t. We write 3t for the collection of all cylinder sets with index r = {t}, that is, 3t = qt (®]g<<)Lemma 17.2. 3 is a semialgebra of subsets o / W d semialgebra of subsets ofVfd.
For every t £ K+, 3[o,«] is also a
Proof. Since 0 = g~l(0) and Wd = g("'(Rd) for an arbitrary t £ R+, 0 and W* are in 3Clearly 3 is closed under intersections. Therefore it remains to show that if Z £ 3 then there exists a finite disjoint collection {Z* : k = 0 , . . . , n } in 3 such that Zo = Z and U|=0Zj £ 3 for k = 0 , . . . , n with U]^Zj = W Let Z = g^'CEi) n • • ■ n q^(En) with 0 < t\ < ■ • ■ < tn and Ej £ 33Md for j = 1 , . . . , n. Let
zk = qrkl(BQ n qrkl(EM)
n • ■ • n ?,-„'(£„) for k = l , . . . , n
and in particular Zn = q^ (E„). Then {Zk : k = 0 , . . . , n} is a disjoint collection in 3- Also
Zo u • • • u ^ i =ft-'CJ?*)n • ■ • n q;n\En) e 3 so that in particular
Zo u • • ■ u z„_, u z n = g^CE,,) u 3 ( ;'(^) = 9r„'(Rd) = w d This completes the proof that 3 is a semialgebra of subsets of Wrf. The fact that 3[o,<] is a semialgebra of subsets of W d can be shown in the same way. ■ Since 3 is a semialgebra of subsets of Y?d, the algebra of subsets of Wd generated by 3 is the collection of all finite unions of members of 3- Since 3[o,s] C 3[o,t] C 3 we have ff(3[o,»]) C
§77. THE SPACE OF CONTINUOUS FUNCTIONS
359
Definition 173. Let W1 =
(2) a(uielE+3f) = arr*. (3) <x(U,e[0,t]3.) = 2nf /or every t 6 R+. Lef (fl,g) fee a« arbitrary measurable space and let T be a mapping ofil into Wd. Then (4) T is g/W*-measurable if and only ifqt o T is 3793^ -measurable for every t G R+, (5) For fixed t G R+, T is g/W*-measurable if and only ifq, o T is ^/^d-measurable for every s g [0, t\ Proof. To prove (1), note that for t G R+ we have 3[o,«] C 3, c(3[o,*]) C cr(3), and thus Ut€]Rto-(3[oi(]) C
air'. To prove (2), note first that Utej^5t C 3 so that
^ 1
x • • • x
x • ■ ■ x QS.,))
=
■
360
CHAPTER 4. STOCHASTIC DIFFERENTIAL
EQUATIONS
Vfd is a linear space over R if we define addition and scalar multiplication by (u>i + w2)(t) = wi(t) + w2(t) for t G R+ and wuw2 G Wd and (Xw)(t) = Xw(t) for t G R+, A G R, and w G W Consider the translate of a subset E of W* by an element w0 of Wrf defined by E + w0 = {w + w0 : w G E} and the A-multiple of E defined by XE = {Xw :w G E). Remark 17.6.
w(ti) G £,•} + wo = {w G Wrf : w(U) eEt + w0(U)} G 3
since JEJi + «)Q(<,) G 25md. Therefore Z + w0 G 3 C cr(3). This shows that Z E & and thus 3 C 65. Since <S is a d-class containing the 7r-class 3, we have
361
§77. THE SPACE OF CONTINUOUS FUNCTIONS
3[o,i] is the collection of all finite intersections of members of Use[o,f]g71(931
on(W,arr',{aD?}). 3) We have g(t, tu) = g((tu) = tu(t) for (t, w) G R+ X W*. Thus g(-, w) = w. ■ Theorem 17.8. Let X = {Xt : t G R+} be a continuous d-dimensional stochastic process on a probability space (Cl,$,P). Then the mapping XofQ. into W d defined by X(UJ) = X(-,u>)for Lj 6 O. is 51'2tT-measurable. Let Px be the probability distribution of X on (Wd,'Wd). Then the d-dimensional {2U*}-adapted process q = {qt : t G R+} on the filtered space (Wd,2Bd, {W*},Px) and the d-dimensional process X on (fi, 3 , P ) have identical finite dimensional probability distributions, that is, for anyfinitestrictly increasing sequence r = { f j , . . . , tn} in R + the probability distribution (Px)qT on (Knd, 23and) of the random vector qT = (qtl,..., qt„) and the probability distribution PXT on (Knd, 331"d) of the random vector XT = (Xtl,...,Xin) are identical. Also, for every E G 23ffind and the cylinder set Z = {w 6 W : (iu(
X~\Z)
= A - , ( n ^ g - | ( £ , ) ) = nr =1 A- 1 (g t - 1 (£,))
= ntiCft, o *)"'(£.) = n?=1x-'(£,-) 6 5. This shows that r ' ( 3 ) C j . To show that (Px)qr = PXr on (R'"', ® r * ) , let £ G ! 8 r * . Then (P*), r (S) = P^(g71(JE)) = - P ° ^ " 1 ° g T " , ( ^ ) = P o (,T o # ) - ' ( £ ) = P o JCT-'(E) =
PXr(E).
To show that P ^ C E ) = Px(Z), note that P X r ( £ ) = P o X 7 ' ( £ ) = P{u G a : Xr(E) G E}
362
CHAPTER 4. STOCHASTIC DIFFERENTIAL = P{u e i2 : X(-,w) £ Z} = Po X~\Z)
EQUATIONS
= Px(Z).
When X is adapted to a filtration {& : i € R+}, we show that X is &/2U?-measurable by starting with Z G 3[o,<] in (1). I Definition 17.9. Let B be a d-dimensional null at 0 Brownian motion on a probability space (£2,3, P). Let B be the mapping ofQ. into XVd defined by B(OJ) = B(-,u>)for iv <E £1 We call the probability distribution PB ofB on (Wd, Z0d) the Wiener measure on (Wd, %3d) and write mwfor it. We call (Wd, Wd, m ^ ) the d-dimensional Wiener space. For a cylinder set Z in W J with index r = {tu..., we have by Proposition 13.16 mdw(Z) = PB(Z) = PB{weV/d: (w(*i),...,io(tJ) eE}=
tn } where ti > 0 and base E € 9 3 ^
P{(B(ti),...,B{tn))
72
|g
e E}
l|2
= {wfi^-^o}-* / exp(-^E ;~^- }m2^(g,,...,^)) j=i
-'■E
[
z
j=i
'J
r
j-i
J
where to = 0. Theorem 17.10. Let W be a stochastic process on (Wd, 2Ud, mfy) defined by setting W(t, w) = w(t)
for (t, w) e R+ x Wd
Then W is a d-dimensional null at 0 Brownian motion on (Wd, 2Ud, m^). Proof. Let us verify that W satisfies the conditions in Definition 13.14. Clearly every sample function of W is an Rd-valued continuous function on R+. Let us show that for every s,t £ R+, s < t, the probability distribution Q on (Rd, 93Kd) of the random vector Wt — Ws is the d-dimensional normal distribution Nd(0, (t — s) ■ T). Let x = T(y) be the nonsingular linear mapping of R2d onto R2d defined by x ( = y\ and x2 = y\ + 3/2 for y = (.V\,yi) 6 ^2d Then for any E e Q3md we have Q(E) = mdw 0 (Wt - WST\E) = mdw o {Ws, Wt - Wsy\Rd = mfyiw eWd :(w(s),w(t)-w(s))£Rd xE} d d d = m v{w€'W :(w(s),w(t))eT(R xE)}
x E)
§77. THE SPACE OF CONTINUOUS FUNCTIONS = {(2n)h(t-s)}-^
exp(-^-^f^}m2M^,x2))
f JTVXE)
{(2^) 2 .<*-
=
s)}-^[d
[
{2TT(<-
2s
2 expj- M
2s
d
Jm xE =
363
2(t - s) J
llbl2 1
2(t-
*>J'
n2i{d{yu
yi))
2(t - .•
This shows that Q = JVd(0, (t - s) • / ) . To show that W is a process with independent increments, we show that if { t i , . . . , *„} is a strictly increasing sequence in R + , Q is the probability distribution of the random vector (W t „ Wtl - Wt>,..., W,„ - Wtn_,) on (R™*, «8m„d), and QuQt,...,Qn are the probability distributions of the random vectors Wt,, W<2 — W t | , . . . , Wtn — Wtn_, on (Rd, SB^) respec tively, then Q(E) = (Qi x ■ • • x Qn)(E) for every £ G ©K»<<- For this it suffices according to Corollary 1.8 to show Q(Ei x • • • x En) = Qi(-Bi) ■ • • Qn(En)
for £,- e SB,*, j = 1,... ,n.
Let i = T(y) be the nonsingular linear mapping of Rnd onto R nd defined by x\ = y\, x2 = yx + y2,...
,xn = yx + • ■ ■ + yn
for y = (t/,,... ,y n ) e R"
so that 2/i = * i , 2/2 = £2 - x : , ...,!/n = i » -xn-i
f o r i = ( i i , . . . , i „ ) G R"
Let us write E = Ei x ■•■ x En. Then Q(E, x ■ • • x En) = rniy o (W„, W,2 - Wt = m^{u; G W d : (w(t,),io(i 2 ) - w(t{),...,w(tn)
,Wu - Wtn_xy\E) - u>(t„_,)) G E]
= m^{u) G W : (w(*i),..., «»(*„)) G T(E)}
= {(2^-n^-*,--,)}-^/ exp ( - 1 £ |T; " *J~l''1 *#(*»„ ■..,«.)) ■ - , & ) )
m ) =ft{2^ - *-.>r* jEj ^{-\T^T-;} n{2^-*i-.)}-^/ ^ Bj «p{4J-l,}<^ 3=1
364
CHAPTER 4. STOCHASTIC DIFFERENTIAL
This proves the independence of increments for W
EQUATIONS
■
[II] Metrization of the Space of Continuous Functions Our next objective is to introduce a metric to Wd with respect to which Wd is a complete separable metric space and the Borel cr-algebra of this metric space is equal to the cr-algebra Wd = (7(3). Definition 17.11. For every t € R+, let us define a seminorm vt on Wd by setting vt(w) = max |ui(s)| A 1 for w e Wd s€[0,t]
where \-\is the Euclidean norm on Rd, and let pt(wuw2)
= i/t(wi-w2)
forwuw2£\Vd.
Let us define a quasinorm v^ on Wd by ^oo(u0 = Yl 2~mvm(w)
forw 6 Wd
mgN d
and define a metric p^ on W by pxi(wi,w2)
= ^ooOi - w2) = ^2 2~mpm(wuw2)
for wuw2
G W*.
mgN
Lemma 17.12. Let {wn : n 6 N} be a sequence and w be an element in Wrf. Then lim poo(wn, w) = 0 if and only if lim wn(s) = w(s) uniformly in s £ [0, i] for every t G n—*oo
n—*oo
R+. Proof. It suffices to show that lim poo(wn, w) = 0 if and only if lim pm(wn, w) = 0 for 71—i-OO
ri—»-oo
every m G N. Clearly lim p^Wn, W) = 0 impl ies that lim pm{wn. w) — 0 for every m G N. Conversely suppose lim pm{wn, w) = 0 for every m G N. Given e > 0 let N G N be n—^oo
so large that Em>Ar+i 2 _ m < e. Then since pm(wn, w) < 1 for every m, n G N, we have Em>N+i 2" m p m (iu„, u>) < e for every n e N so that lim sup £ } 2~"Vm(i/)n,u>) < e. n
-"x
m>N+l
§ ; 7. THE SPACE OF CONTINUOUS FUNCTIONS
365
Then limsupp 0o (u) n ,u;) = l i m s u p i ^ 2 - m p m ( u ; „ , u ; ) +
T
2~mpm(v>n,w)\
N
<
J|i^£2- m /> m (u> n ) u>) + lirnsup S
2 _ m /) m (w n ,«;)<e
since by assumption lim pm(wn, tu) = 0 for every m e N. By the arbitrariness of e > 0 we have lim sup Poo(wn, w) = 0 and thus lim ^ ( u i , , w) = 0. ■ n->oo
n-too
Theorem 17.13. (W"', p^) is a complete separable metric space. Proof. To show that (W d , p^,) is a complete metric space, let {wn : n G N} be a Cauchy sequence in (Wd, p^). By Lemma 17.12, it suffices to show that there exists w € W1 such that lim p m (w„, w) = 0 for every m 6 N. Let e € (0,1). Let m € N be fixed. Since n—KX>
{wn : n e N} is a Cauchy sequence there exists N 6 N such that Poo{wi,wn) < 2~me when £,n > N. Then 2~mpm(wt, wn) < 2~me when £,n > N and thus pm{wi,wn) < e when £,n > JV. Thereforemaxse[o,m] \w((s)—wn(s)\/\l < e, orequivalently,maxse[o,m] \w((s)— tw„(s)| < e, when £,n> N. This shows that the restrictions of wn to [0, m] for n 6 N is a Cauchy sequence with respect to the metric of the uniform norm on the space of continuous E.d-valued functions on [0, m]. By the completeness of this metric space there exists a continuous R''-valued functions / ( m ) on [0, m] such that the restrictions of w„, n 6 N, to [0,m] converge uniformly on [0,m] to / ( m ) . For mi,m2 6 N, mi < m,2, we have / ( m i ) = / ( m 2 ) on [0, mi] by the uniqueness of the limit of the convergence. Thus there exists a continuous Rd-valued functions / on R+ such that / = / <m) on [0, m] for every m € N. Since lim max \wn(s) - / ( m ) (s)| = 0 and / = / ( m ) on [0,m] for every m £ N, n—*oo s€[0,m]
we have lim max \wn(s) - / ( s ) | A 1 = 0, that is, lim pm(wn, f) = 0. This proves the n-wo s6[0,m]
n
-*°°
completeness of the metric space (W*, Poo)To show the separability of the metric space (Wd, Poo) let us show that it has a countable dense subset. For every k G N let Vk be the collection of all Rd-valued polygonal functions v on R+ with vertices occurring at £2~k where £ G Z+ and with v(£2~k) equal to rational points in R"*. Then Vk is a countable subset of W d and so is V = Ujt6N Vi. To show that V is dense in Vid we show that for every w € W and e > 0 there exists some o e V such that Pooiwjv) < e. LetiV € N be so large that Em>/v+i 2 _ m < 2 _ l £ - By the uniform continuity
366
CHAPTER 4. STOCHASTIC DIFFERENTIAL
EQUATIONS
of w on [0, N] there exists v G V such that max(e[o,/v] \u>(t) - v(t)\ < 2 le. Then AT
= V ; 2 - m { m a x \w(t) - v(t)\ A 1}+ V 2" m { max > ( t ) - v(*)| A 1} t€[0 m] ^ 'el0'"1' ' m iN + i
Poo(w,v)
N
< ^ 2 - m 2 - ' e + Y. m=l m=\
2" m <e.
m>N+l m>N+\
Lemma 17.14. LetOMdbe the collection of all open sets in Rd and let Dy/d be the collection of all open sets in (Wd, p^). Then q^\0Md) C Oy/dfor every t G R+. Proof. LetG G DBd. Consider ql'(G) for an arbitrary t G R+. If G = 0, then g ( - '(G) = 0 G Dy/d. Consider the case where G =f $ and thus q^l{G) j - 0. To show that qTx(G) G Ow« we show that for every w0 G ft-'(G) there exists an open sphere in Wd with center wQ and radius 77, S(w0,r]), contained in gt~'(G). Now since wQ G q7l(G), we have u>o(£) G G. Then there exists 6 G (0,1) such that the open sphere in Rd with center u>o(t) and radius 8, S(w0(t), 6), is contained in G. Let TV G N be so large that t G [0, N]. We proceed to show that S(w0,2~N8) C gt_1(G). Let w e S(w0,2~N6). Then P^W^WQ) < 2"N<5 and thus 2-7V{max36[o,w] \w(s) - w0(s)\ A 1} < 2~NS. Since 6 G (0,1) we have maxse[o,N] \w(s) — w0(s)\ < 6. Since t G [0, N] we have \w(t) — w0(t)\ < S. Thus w(t) G S(wo(*);^) C G. Therefore u; G q^i(G). Since UJ is an arbitrary element in S(w0,2~NS), we have S{w0,2~N8) C q^(G). ■ Theorem 17.15. Let Dy/d be the collection of all open sets in (Wd, p^). Consider the Borel a-algebra 55Wd = CT(DWM) in Wd. Then
i)3c<8w«. 2) D w „ C <J(3), 3)
gr'CV) =
367
§77. THE SPACE OF CONTINUOUS FUNCTIONS
2) To show D W J C CT(3), let us show first that for every w0 g W* and e > 0 the closed sphere in Wd with center w0 and radius e, ir(tuo,e) = {w £ Wd : poo(iu,u>o) < e}, is in cr(3)- Now N
Poo(u), w0) < e O 5 3 2~mpm(w, w0) < e
for every TV € N.
m=l
Thus X"(wo,e)= p |
L e W ^ ^ - ^ K ^ ^ E l
W6N I
m=l
J
Since tr(3) is closed under countable intersections, to show that K{w0, e) G <J(3), it suffices to show that | w S W* ; Yl 2-mpm(w, wo) < e l 6 0-Q)
(1)
for every TV e N.
Note that (2)
{w e W : pm(u>, u>0) < e} = {u> £ W : max |u;(s) - w0(s)\ A 1 < e}. sG[0,m]
If e e [1, co), then the last set is equal to W d , which is a member of 3- On the other hand if e 6 (0,1) then the set on the right side of (2) is equal to {io € W : max \w(s) - w<,(s)\ < e} = Se[0,ml
D
f]
{w eWd:
\w(r) - w0(r)\ < e]}
r6[0,m]nQ
q;\K(wo(r),e)),
r6[0,m]nQ
where Q is the collection of all nonnegative rationals and K(wo(r), e) is the closed sphere in Rd with center wo(r) and radius e, by the continuity of wo and w. Since q~} (K(wo(r), e)) is a member of 3 the last countable intersection above is in o-(3)- Thus we have shown that {w e W d : pm(w, wo) <e} e
for any w0 £ W , e > 0, and m 6 N.
Consequently we have {w G V?d : pm(w, wo) < e} = \J{w 6 W : pm(u;,u>o) < e - r 1 } e
368
CHAPTER 4. STOCHASTIC DIFFERENTIAL
EQUATIONS
and then (3)
{w e W : 2-mPm(w,
wo) <s} = {weWd:
Pm(w,
w0) < 2 m e} G
Let (ri,..., rN) be an N-tuple of positive rational numbers r i , . . . , r^ such that n + • • • + r w < e. Let Q be the countable collection of all such iV-tuples. Then \w G Wd : £ 2-> m (w,u>6) < e l
(4)
N
U
( 1 { » e W ' : 2 - > m ( u ; , wo) < rm } G oQ)
(ri,...,rN)6Qm=l
by (3). From this we have \weWi:Yi^mpm(w,wo)<e} =
H \w G W : Y, 2-mpm(w,w0)
< e + r 1 } G aQ).
This proves (1) and therefore K(wo, e) GCT(3)for every u>o G W d and e > 0. For an open sphere in W* with center wo G W* and radius e > 0, we have S(wo, e) = Uee^K(w0, e — l~l) G cr(3). Since (Wd, p^) is a separable metric space, it has a countable dense subset D. Every open set in Wd can be given as the union of open spheres with centers in D and with rational radii. Thus as a countable union of open spheres, an open set in Wd is always in CT(3). Therefore Oy/d C c(3). 3) By 1) we have <x(3) C 23w<<- By 2) we have 35 wd =
§18 Definition and Function Space Representation of So lution [I] Definition of Solutions Consider stochastic differential equations, that is, equations involving stochastic differen tials, of the type T
(1)
dX\=Y<x){t,X)dB1(t)
+ pi(t,X)dt
far»
= l,...,d,
§ 18. DEFINITION AND FUNCTION SPACE REPRESENTATION
369
where B = (J5 1 ,..., Br) is an r-dimensional {5t}-adapted Brownian motion, X = (X1,...,Xd) is a d-dimensional continuous {5«}-adaptedprocess on a standard fil tered space ( 0 , 5 , {5(}, P), and the coefficients aj and ft for t = 1 , . . . , d, j = 1 , . . . , r, are real valued functions on R+ x Wd satisfying certain measurability and integrability con ditions to ensure the existence of the stochastic integrals {/[01] a)(s, X) dB'{s) : t G R+} and {J[01] /3'(s, X)mL(ds) : t G R+}. The first and the second term on the right side of (1) are called the diffusion term and the drift term respectively of the stochastic differential equation. Note that at any ( £ R t the coefficients a) and /?' depend on X(-, u), not just X{t, UJ), for w G Q. The particular case of (1) where at any t G R+ the coefficients depend only on J ( ( , UJ), that is, a stochastic differential equation of the type r
(2)
dX\ = J2 a){t, X{t)) dB\t) + b\t, X(t)) dt for i = 1,..., d,
is called Markovian. Here aj and 6' are real valued functions on R+ x Rd satisfying some measurability and integrability conditions. As a further specialized case we have T
(3)
dX\ = Ytfi(X{t))dBi{f)
+ g\X{t))dt
far»
= l,...,d,
where /j and g' are real valued functions on Rd satisfying some measurability and integra bility conditions. This type of equation is called time-homogeneous Markovian. In (2), if a!- = 0 for i = 1 , . . . , d, j = 1 , . . . , r, then we have dX't=b\t,X(t))dt
fori =
\,...,d,
which is a randomization of a dynamical system. Definition 18.1. Let M.d ® R r be the collection of all d x r matrices of real numbers. We identify Rd ® RT with Rdr and consider the measurable space (Rd ® RT, ©m'*')Definition 18.2. Let MdXT(\Vd, Wd, {Wd}) be the collection of all Rd ® KT-valued {Wdt}progressively measurable processes a on the filtered measurable space (Wd,Wi, {Wd}), that is, every a in this collection is a mapping ofR+ x Wd into Rd ® Rr such that for every t G R+ the restriction of a to [0,i] x Wd is aff(Q3[o,ox Wd)/<8Wdr-measurable mapping of[0,t]x Wd into Rd®RT. Remark 18.3. According to Observation 2.12, a progressively measurable process is al ways an adapted measurable process. rThus if a G M d x r ( W , 2Bd, {Wd}) then
370
CHAPTER 4. STOCHASTIC DIFFERENTIAL
EQUATIONS
1°. a is a σ(𝔅_{R_+} × 𝔚^d)/𝔅_{R^{dr}}-measurable mapping of R_+ × W^d into R^d ⊗ R^r, and a is a {𝔚_t^d}-adapted process. Moreover, if a is a σ(𝔅_{R_+} × 𝔅_{R^d})/𝔅_{R^{dr}}-measurable mapping of R_+ × R^d into R^d ⊗ R^r and we define

  ā(t, w) = a(t, w(t))  for (t, w) ∈ R_+ × W^d,

then ā ∈ M^{d×r}(W^d, 𝔚^d, {𝔚_t^d}). In particular, if f is a 𝔅_{R^d}/𝔅_{R^{dr}}-measurable mapping of R^d into R^d ⊗ R^r, then the mapping ā of R_+ × W^d into R^d ⊗ R^r defined by

  ā(t, w) = f(w(t))  for (t, w) ∈ R_+ × W^d

is a member of M^{d×r}(W^d, 𝔚^d, {𝔚_t^d}).
Proof. 1) With t ∈ R_+ fixed, consider the restriction of a to [0, t] × R^d. Since s ↦ s for s ∈ [0, t] is a 𝔅_{[0,t]}/𝔅_{[0,t]}-measurable mapping of [0, t] into [0, t] and w ↦ w(s) for w ∈ W^d is a 𝔚_t^d/𝔅_{R^d}-measurable mapping of W^d into R^d for each s ∈ [0, t], the mapping (s, w) ↦ (s, w(s)) is a σ(𝔅_{[0,t]} × 𝔚_t^d)/σ(𝔅_{[0,t]} × 𝔅_{R^d})-measurable mapping of [0, t] × W^d into [0, t] × R^d. Then since the restriction of a to [0, t] × R^d is a σ(𝔅_{[0,t]} × 𝔅_{R^d})/𝔅_{R^{dr}}-measurable mapping of [0, t] × R^d into R^d ⊗ R^r, the composite (s, w) ↦ a(s, w(s)) is a σ(𝔅_{[0,t]} × 𝔚_t^d)/𝔅_{R^{dr}}-measurable mapping of [0, t] × W^d into R^d ⊗ R^r.
§18. DEFINITION AND FUNCTION SPACE REPRESENTATION
defined by X(ω) = X(·, ω) for ω ∈ Ω is an F/𝔚^d-measurable mapping, and if X is adapted then X is F_t/𝔚_t^d-measurable for every t ∈ R_+. Let us show that for every t ∈ R_+, the restriction of α_X to [0, t] × Ω is a σ(𝔅_{[0,t]} × F_t)/𝔅_{R^{dr}}-measurable mapping of [0, t] × Ω into R^d ⊗ R^r.

Definition 18.6. Let α ∈ M^{d×r}(W^d, 𝔚^d, {𝔚_t^d}) and β ∈ M^{d×1}(W^d, 𝔚^d, {𝔚_t^d}), and consider the stochastic differential equation

(1)  dX_t^i = Σ_{j=1}^r α_j^i(t, X) dB^j(t) + β^i(t, X) dt  for i = 1, ..., d.

We say that the stochastic differential equation has a solution if there exists a standard filtered space (Ω, F, {F_t}, P) with a pair of stochastic processes (B, X) on it satisfying the following conditions.
1°. B is an r-dimensional {F_t}-adapted null at 0 Brownian motion on (Ω, F, {F_t}, P).
2°. X is a continuous d-dimensional adapted process on (Ω, F, {F_t}, P).
3°. The {F_t}-progressively measurable processes (α_X)_j^i for i = 1, ..., d, j = 1, ..., r, defined by (α_X)_j^i(t, ω) = α_j^i(t, X(·, ω)) for (t, ω) ∈ R_+ × Ω, are in L_{2,∞}(R_+ × Ω, m_L × P), and the {F_t}-progressively measurable processes (β_X)^i for i = 1, ..., d, defined by (β_X)^i(t, ω) = β^i(t, X(·, ω)) for (t, ω) ∈ R_+ × Ω, are in L_{1,∞}(R_+ × Ω, m_L × P).
4°. There exists a null set Λ in (Ω, F, P) such that on Λ^c we have for every t ∈ R_+

  X^i(t) − X^i(0) = Σ_{j=1}^r ∫_[0,t] (α_X)_j^i(s) dB^j(s) + ∫_[0,t] (β_X)^i(s) m_L(ds)  for i = 1, ..., d.
In this case we say that (B, X) is a solution of the stochastic differential equation.
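Definition 18.6 concerns exact solutions, but the Markovian special case (2) also suggests how such equations are approximated numerically. The following is an illustrative sketch (not part of the text) of the Euler–Maruyama scheme for a one-dimensional equation dX_t = a(t, X_t) dB(t) + b(t, X_t) dt; the particular coefficient functions passed in are hypothetical examples.

```python
import math
import random

def euler_maruyama(a, b, x0, t_end, n_steps, rng):
    """Simulate dX_t = a(t, X_t) dB(t) + b(t, X_t) dt on [0, t_end]
    with the Euler-Maruyama scheme; returns the sampled path."""
    dt = t_end / n_steps
    x, path = x0, [x0]
    for k in range(n_steps):
        t = k * dt
        dB = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x = x + a(t, x) * dB + b(t, x) * dt
        path.append(x)
    return path

# Example run with Ornstein-Uhlenbeck-type coefficients (hypothetical choice).
rng = random.Random(0)
path = euler_maruyama(a=lambda t, x: 0.3, b=lambda t, x: -x, x0=1.0,
                      t_end=1.0, n_steps=1000, rng=rng)
print(len(path), path[0])
```

The scheme freezes the coefficients on each small interval, mirroring the simple-process approximations used later in this section.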
Note that the fact that X^i satisfies condition 4° above implies that X^i is the sum of a process in M_2^c(Ω, F, {F_t}, P), a process in V^c(Ω, F, {F_t}, P), and an F_0-measurable real-valued random variable, and is therefore a quasimartingale on (Ω, F, {F_t}, P). Hence the stochastic differential dX^i is defined.

Remark 18.7. If X satisfies condition 2° of Definition 18.6, then by Lemma 18.5, (α_X)_j^i and (β_X)^i are {F_t}-progressively measurable processes on (Ω, F, {F_t}, P).
1) If (α_X)_j^i are in L_{2,∞}(R_+ × Ω, m_L × P) as required by condition 3° of Definition 18.6, then by Theorem 13.38 there exist equivalent processes which are {F_t}-predictable. Let us understand (α_X)_j^i to be {F_t}-predictable versions so that the stochastic integrals {∫_[0,t] (α_X)_j^i(s) dB^j(s) : t ∈ R_+} exist in M_2^c(Ω, F, {F_t}, P). Condition 4° in Definition 18.6 refers to these stochastic integrals. Note also that by Theorem 12.8 there exist sequences {(α^n)_j^i : n ∈ N} in L_0(Ω, F, {F_t}, P) such that lim_{n→∞} ||(α^n)_j^i − (α_X)_j^i||_{2,∞}^{m_L×P} = 0.
2) If (β_X)^i are in L_{1,∞}(R_+ × Ω, m_L × P) as required by condition 3° of Definition 18.6, then the stochastic integrals {∫_[0,t] (β_X)^i(s) m_L(ds) : t ∈ R_+} exist in V^c(Ω, F, {F_t}, P). By Theorem 13.38, (β_X)^i have {F_t}-predictable versions. Let us understand (β_X)^i to be {F_t}-predictable versions. Note also that by Theorem 12.8 there exist sequences {(b^n)^i : n ∈ N} in L_0(Ω, F, {F_t}, P) such that lim_{n→∞} ||(b^n)^i − (β_X)^i||_{1,∞}^{m_L×P} = 0.

Whether or not a solution exists for the stochastic differential equation (1) in Definition 18.6 depends entirely on the given coefficients α ∈ M^{d×r}(W^d, 𝔚^d, {𝔚_t^d}) and β ∈ M^{d×1}(W^d, 𝔚^d, {𝔚_t^d}). We shall show this by showing that whenever the stochastic differential equation has a solution on some standard filtered space, it has a solution on the function space W^{r+d}.
[II] Function Space Representation of Solutions

Our objective here is to show that if the stochastic differential equation (1) in Definition 18.6 has a solution (B, X) on some standard filtered space (Ω, F, {F_t}, P), then it has a solution (W, Y) on a standard filtered space constructed on the function space W^{r+d}, and furthermore (B, X) and (W, Y) have the same probability distribution on (W^{r+d}, 𝔚^{r+d}).

Proposition 18.8. Let W^{r+d} = W^r × W^d. Let 𝔚^{r+d}, 𝔚^r, and 𝔚^d be the σ-algebras generated by the collections of the cylinder sets ℨ, ℨ′, and ℨ″ in W^{r+d}, W^r, and W^d respectively. Then

(1)  𝔚^{r+d} = σ(𝔚^r × 𝔚^d).

Similarly, for the σ-algebras 𝔚_t^{r+d}, 𝔚_t^r, and 𝔚_t^d generated by the collections of the cylinder sets with indices preceding t ∈ R_+, ℨ_{[0,t]}, ℨ′_{[0,t]}, and ℨ″_{[0,t]} in W^{r+d}, W^r, and W^d, we have

(2)  𝔚_t^{r+d} = σ(𝔚_t^r × 𝔚_t^d).

Proof. To prove (1), it suffices to show that

(3)  σ(ℨ) = σ(ℨ′ × ℨ″).

Note first that ℨ′ × ℨ″ ⊂ ℨ, so that σ(ℨ′ × ℨ″) ⊂ σ(ℨ). To show the reverse inclusion, let w ∈ W^{r+d} be given as w = (w′, w″) with w′ ∈ W^r and w″ ∈ W^d. For t ∈ R_+, let q_t, q′_t, and q″_t be the mappings of W^{r+d}, W^r, and W^d into R^{r+d}, R^r, and R^d defined by q_t(w) = w(t) for w ∈ W^{r+d}, q′_t(w′) = w′(t) for w′ ∈ W^r, and q″_t(w″) = w″(t) for w″ ∈ W^d. Then for E′ ∈ 𝔅_{R^r} and E″ ∈ 𝔅_{R^d}, we have q_t^{-1}(E′ × E″) = (q′_t)^{-1}(E′) × (q″_t)^{-1}(E″) ∈ ℨ′ × ℨ″. Thus q_t^{-1}(𝔅_{R^r} × 𝔅_{R^d}) ⊂ ℨ′ × ℨ″, and then by Proposition 13.2 and Theorem 1.1 we have q_t^{-1}(𝔅_{R^{r+d}}) = q_t^{-1}(σ(𝔅_{R^r} × 𝔅_{R^d})) ⊂ σ(ℨ′ × ℨ″). Since every set in ℨ is a finite intersection of sets of this type, σ(ℨ) ⊂ σ(ℨ′ × ℨ″), and (3) holds. The proof of (2) is similar. ∎

Observation 18.9. Let W and Y be the processes on (W^{r+d}, 𝔚^{r+d}, {𝔚_t^{r+d}}) defined by

  W(t, w) = w′(t) for (t, w) ∈ R_+ × W^{r+d},  Y(t, w) = w″(t) for (t, w) ∈ R_+ × W^{r+d},

where w = (w′, w″) with w′ ∈ W^r and w″ ∈ W^d. Then W is an r-dimensional continuous {𝔚_t^{r+d}}-adapted process and Y is a d-dimensional continuous {𝔚_t^{r+d}}-adapted process on (W^{r+d}, 𝔚^{r+d}, {𝔚_t^{r+d}}).
Suppose the stochastic differential equation (1) in Definition 18.6 has a solution (B, X) on a standard filtered space (Ω, F, {F_t}, P). According to Theorem 17.8, the mapping (B, X) of Ω into W^{r+d} defined by (B, X)(ω) = (B(·, ω), X(·, ω)) for ω ∈ Ω is F/𝔚^{r+d}-measurable and F_t/𝔚_t^{r+d}-measurable for every t ∈ R_+. Let P_{(B,X)} be the probability distribution of (B, X) on (W^{r+d}, 𝔚^{r+d}). We define a standard filtered space on the probability space (W^{r+d}, 𝔚^{r+d}, P_{(B,X)}) as follows.

Definition 18.10. Consider the probability space (W^{r+d}, 𝔚^{r+d}, P_{(B,X)}) where (B, X) is a solution of the stochastic differential equation (1) in Definition 18.6 on some standard filtered space. Let
1°. 𝔚^{r+d,*} be the completion of 𝔚^{r+d} with respect to P_{(B,X)};
2°. 𝔚_t^{r+d,0} = σ(𝔚_t^{r+d} ∪ 𝔑), where 𝔑 is the collection of all the null sets in the complete measure space (W^{r+d}, 𝔚^{r+d,*}, P_{(B,X)});
3°. 𝔚_t^{r+d,*} = ∩_{ε>0} 𝔚_{t+ε}^{r+d,0}.
We then have a standard filtered space (W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}) in which the σ-algebra, the filtration, as well as the probability measure depend on (B, X). Let us call this standard filtered space the standard filtered space on (W^{r+d}, 𝔚^{r+d}) generated by P_{(B,X)}. Our aim is to show that, with W and Y defined in Observation 18.9, (W, Y) is a solution of the stochastic differential equation on (W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}). This is done in Theorem 18.13 below. For this we show in Lemma 18.11 that W is a Brownian motion satisfying condition 1° of Definition 18.6. In Lemma 18.12 we show that α_Y and β_Y satisfy condition 3° of Definition 18.6.

Lemma 18.11. The r-dimensional continuous adapted process W on the filtered space (W^{r+d}, 𝔚^{r+d}, {𝔚_t^{r+d}}) defined in Observation 18.9 is an r-dimensional {𝔚_t^{r+d,*}}-adapted null at 0 Brownian motion on the standard filtered space (W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}).

Proof. As we noted in Observation 18.9, W is an r-dimensional continuous {𝔚_t^{r+d}}-adapted process on (W^{r+d}, 𝔚^{r+d}, {𝔚_t^{r+d}}).
Since 𝔚_t^{r+d} is a sub-σ-algebra of 𝔚_t^{r+d,*}, W is a {𝔚_t^{r+d,*}}-adapted process on (W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}). To show that W is null at 0, let q_0 be the 𝔚^{r+d}/𝔅_{R^{r+d}}-measurable mapping of W^{r+d} into R^{r+d} defined by q_0(w) = w(0) for w ∈ W^{r+d}. Then

(1)  P_{(B,X)}{W_0 = 0} = P_{(B,X)}{(W, Y)(0) ∈ {0} × R^d} = P{q_0 ∘ (B, X) ∈ {0} × R^d}
= P{(B, X)(0) ∈ {0} × R^d} = P{B_0 = 0} = 1,

by the fact that B is an r-dimensional {F_t}-adapted null at 0 Brownian motion on the standard filtered space (Ω, F, {F_t}, P). Thus W is null at 0. To show that W is an r-dimensional {𝔚_t^{r+d,*}}-Brownian motion on (W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}), it remains according to Theorem 13.26 to verify that for any s, t ∈ R_+, s < t, and y ∈ R^r we have

(2)  E[e^{i⟨y, W_t − W_s⟩} | 𝔚_s^{r+d,*}] = e^{−|y|²(t−s)/2}  a.e. on (W^{r+d}, 𝔚_s^{r+d,*}, P_{(B,X)}).

Since the right side of (2) is 𝔚_s^{r+d,*}-measurable, it remains to verify

(3)  ∫_A e^{i⟨y, W_t − W_s⟩} dP_{(B,X)} = P_{(B,X)}(A) e^{−|y|²(t−s)/2}  for A ∈ 𝔚_s^{r+d,*},

or equivalently, according to the Image Probability Law,

(4)  ∫_{(B,X)^{-1}(A)} e^{i⟨y, B_t − B_s⟩} dP = P((B, X)^{-1}(A)) e^{−|y|²(t−s)/2}  for A ∈ 𝔚_s^{r+d,*}.

To verify (2), consider first the case where A ∈ 𝔚_s^{r+d}. Since B is an r-dimensional continuous {F_t}-adapted process, B is an F_t/𝔚_t^r-measurable mapping of Ω into W^r for every t ∈ R_+ by Theorem 17.8. By condition 2° of Definition 18.6, X is an F_t/𝔚_t^d-measurable mapping of Ω into W^d. Then since 𝔚_t^{r+d} = σ(𝔚_t^r × 𝔚_t^d) by Proposition 18.8, (B, X) is an F_t/𝔚_t^{r+d}-measurable mapping of Ω into W^{r+d} for every t ∈ R_+. Thus we have (B, X)^{-1}(𝔚_s^{r+d}) ⊂ F_s. Since B is an r-dimensional {F_t}-adapted Brownian motion on (Ω, F, {F_t}, P), (4) holds by Theorem 13.26 for our A ∈ 𝔚_s^{r+d}. Then since 𝔚_s^{r+d,0} = σ(𝔚_s^{r+d} ∪ 𝔑) and since (3) holds for every A ∈ 𝔚_s^{r+d}, (3) holds for every A ∈ 𝔚_s^{r+d,0}. Thus we have shown that

(5)  E[e^{i⟨y, W_t − W_s⟩} | 𝔚_s^{r+d,0}] = e^{−|y|²(t−s)/2}  a.e. on (W^{r+d}, 𝔚_s^{r+d,0}, P_{(B,X)}).

Now 𝔚_s^{r+d,*} = ∩_{ε>0} 𝔚_{s+ε}^{r+d,0} = ∩_{n∈N} 𝔚_{s+1/n}^{r+d,0}. Let n ∈ N be so large that s + 1/n < t. Then

(6)  E[e^{i⟨y, W_t − W_{s+1/n}⟩} | 𝔚_s^{r+d,*}] = E[E[e^{i⟨y, W_t − W_{s+1/n}⟩} | 𝔚_{s+1/n}^{r+d,0}] | 𝔚_s^{r+d,*}] = E[e^{−|y|²(t−(s+1/n))/2} | 𝔚_s^{r+d,*}]  a.e. on (W^{r+d}, 𝔚_s^{r+d,*}, P_{(B,X)}),

where the second equality is by (5). Letting n → ∞ in (6) and applying the Conditional Bounded Convergence Theorem we have (2). ∎
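The characterization of Brownian motion used in the proof above rests on the identity E[e^{i⟨y, W_t − W_s⟩} | 𝔚_s] = e^{−|y|²(t−s)/2}, whose unconditional form can be checked numerically. A minimal Monte Carlo sketch (not from the text), assuming only that the increment W_t − W_s is N(0, t − s):

```python
import cmath
import math
import random

# Check E[exp(i y (W_t - W_s))] = exp(-y^2 (t - s) / 2) by sampling the
# Gaussian increment W_t - W_s ~ N(0, t - s) and averaging the exponential.
rng = random.Random(1)
s, t, y, n = 0.5, 2.0, 0.7, 200_000
acc = 0j
for _ in range(n):
    inc = rng.gauss(0.0, math.sqrt(t - s))  # one sample of W_t - W_s
    acc += cmath.exp(1j * y * inc)
estimate = acc / n
exact = math.exp(-y * y * (t - s) / 2)
print(abs(estimate - exact))  # small Monte Carlo error
```

The imaginary part of the sample mean vanishes only up to sampling noise; the error decreases at the usual n^{-1/2} Monte Carlo rate.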
Lemma 18.12. For the d-dimensional continuous adapted process Y on the filtered space (W^{r+d}, 𝔚^{r+d}, {𝔚_t^{r+d}}) defined in Observation 18.9 and for the coefficients α and β in the stochastic differential equation (1) in Definition 18.6 for which (B, X) is a solution on a standard filtered space (Ω, F, {F_t}, P), the {𝔚_t^{r+d,*}}-progressively measurable processes (α_Y)_j^i for i = 1, ..., d, j = 1, ..., r, defined by (α_Y)_j^i(t, w) = α_j^i(t, Y(·, w)) for (t, w) ∈ R_+ × W^{r+d}, are in L_{2,∞}(R_+ × W^{r+d}, m_L × P_{(B,X)}), and the {𝔚_t^{r+d,*}}-progressively measurable processes (β_Y)^i for i = 1, ..., d, defined by (β_Y)^i(t, w) = β^i(t, Y(·, w)) for (t, w) ∈ R_+ × W^{r+d}, are in L_{1,∞}(R_+ × W^{r+d}, m_L × P_{(B,X)}).

Proof. We noted in Observation 18.9 that Y is a d-dimensional {𝔚_t^{r+d}}-adapted process on (W^{r+d}, 𝔚^{r+d}, {𝔚_t^{r+d}}). Thus Y is a {𝔚_t^{r+d,*}}-adapted process on the standard filtered space (W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}). Then the mapping Y of W^{r+d} into W^d defined by Y(w) = Y(·, w) for w ∈ W^{r+d} is 𝔚_t^{r+d}/𝔚_t^d-measurable for every t ∈ R_+ by Theorem 17.8, and hence 𝔚_t^{r+d,*}/𝔚_t^d-measurable for every t ∈ R_+. From this we have the {𝔚_t^{r+d,*}}-progressive measurability of (α_Y)_j^i for i = 1, ..., d, j = 1, ..., r, and of (β_Y)^i for i = 1, ..., d, by Lemma 18.5. To show that (α_Y)_j^i are in L_{2,∞}(R_+ × W^{r+d}, m_L × P_{(B,X)}), we show that for every t ∈ R_+ we have

(1)  ∫_{[0,t] × W^{r+d}} |α_j^i(s, Y(·, w))|² (m_L × P_{(B,X)})(d(s, w)) < ∞.

Consider the measure space ([0, t] × Ω, σ(𝔅_{[0,t]} × F_t), m_L × P). Let ι be the identity mapping of [0, t] onto [0, t]. Since (B, X) is an F_t/𝔚_t^{r+d}-measurable mapping of Ω into W^{r+d} for every t ∈ R_+, as we noted in the proof of Lemma 18.11, the mapping (ι, (B, X)) of [0, t] × Ω into [0, t] × W^{r+d} is σ(𝔅_{[0,t]} × F_t)/σ(𝔅_{[0,t]} × 𝔚_t^{r+d})-measurable. For E ∈ 𝔅_{[0,t]} and F ∈ 𝔚_t^{r+d} we have

  P_{(ι,(B,X))}(E × F) = (m_L × P) ∘ (ι, (B, X))^{-1}(E × F) = m_L(E) · P ∘ (B, X)^{-1}(F) = m_L(E) · P_{(B,X)}(F).

This shows that the two finite measures P_{(ι,(B,X))} and m_L × P_{(B,X)} on σ(𝔅_{[0,t]} × 𝔚_t^{r+d}) are equal on the π-class 𝔅_{[0,t]} × 𝔚_t^{r+d} and therefore equal on σ(𝔅_{[0,t]} × 𝔚_t^{r+d}) by Corollary 1.8. Thus for any extended real-valued σ(𝔅_{[0,t]} × 𝔚_t^{r+d})-measurable function v on [0, t] × W^{r+d} we have

(2)  ∫_{[0,t]×Ω} v(s, (B, X)(ω)) (m_L × P)(d(s, ω)) = ∫_{[0,t]×W^{r+d}} v(s, w) (m_L × P_{(B,X)})(d(s, w))
in the sense that the existence of one side implies that of the other and the equality of the two. Now since w ↦ w″ for w = (w′, w″) ∈ W^{r+d} is a 𝔚_t^{r+d}/𝔚_t^d-measurable mapping of W^{r+d} into W^d for every t ∈ R_+, (s, w) ↦ (s, w″) for (s, w) ∈ [0, t] × W^{r+d} is a σ(𝔅_{[0,t]} × 𝔚_t^{r+d})/σ(𝔅_{[0,t]} × 𝔚_t^d)-measurable mapping of [0, t] × W^{r+d} into [0, t] × W^d. Then since (s, w″) ↦ α_j^i(s, w″) for (s, w″) ∈ [0, t] × W^d is σ(𝔅_{[0,t]} × 𝔚_t^d)/𝔅_R-measurable, (2) yields

(3)  ∫_{[0,t]×Ω} |α_j^i(s, X(·, ω))|² (m_L × P)(d(s, ω)) = ∫_{[0,t]×W^{r+d}} |α_j^i(s, Y(·, w))|² (m_L × P_{(B,X)})(d(s, w))

in the sense that the existence of one side implies that of the other and the equality of the two. But the left side of (3) exists and is finite since (α_X)_j^i are in L_{2,∞}(R_+ × Ω, m_L × P). This proves (1). Thus (α_Y)_j^i are in L_{2,∞}(R_+ × W^{r+d}, m_L × P_{(B,X)}). Similarly (β_Y)^i are in L_{1,∞}(R_+ × W^{r+d}, m_L × P_{(B,X)}). ∎

Theorem 18.13. Consider the r-dimensional continuous {𝔚_t^{r+d}}-adapted process W and the d-dimensional continuous {𝔚_t^{r+d}}-adapted process Y on (W^{r+d}, 𝔚^{r+d}, {𝔚_t^{r+d}}) defined by W(t, w) = w′(t) and Y(t, w) = w″(t) for w = (w′, w″) ∈ W^{r+d}, w′ ∈ W^r, w″ ∈ W^d, and t ∈ R_+. Suppose the stochastic differential equation (1) in Definition 18.6 has a solution (B, X) on a standard filtered space (Ω, F, {F_t}, P). Then (W, Y) is a solution of the stochastic differential equation on the standard filtered space (W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}) in Definition 18.10. Furthermore the two processes (W, Y) and (B, X) have identical probability distributions on (W^{r+d}, 𝔚^{r+d}), and the two processes Y and X have identical probability distributions on (W^d, 𝔚^d).

Proof. Since (B, X) is a solution, there exists a null set Λ in (Ω, F, P) such that on Λ^c and for every t ∈ R_+ we have

  X^i(t) − X^i(0) = Σ_{j=1}^r ∫_[0,t] (α_X)_j^i(s) dB^j(s) + ∫_[0,t] (β_X)^i(s) m_L(ds)  for i = 1, ..., d,

or equivalently, writing (α_X)_j^i • B^j and (β_X)^i • m_L respectively for the stochastic integrals {∫_[0,t] (α_X)_j^i(s) dB^j(s) : t ∈ R_+} and {∫_[0,t] (β_X)^i(s) m_L(ds) : t ∈ R_+}, we have

(1)  Φ^i(t) ≡ Σ_{j=1}^r ((α_X)_j^i • B^j)(t) + ((β_X)^i • m_L)(t) + X^i(0) − X^i(t) = 0  for i = 1, ..., d.
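The proofs in this section repeatedly invoke the Image Probability Law: the integral of v ∘ T with respect to P equals the integral of v with respect to the image measure P ∘ T^{-1}. A small discrete illustration of this identity (not from the text), using exact rational arithmetic:

```python
from fractions import Fraction
from collections import defaultdict

# A three-point probability space, a map T into {a, b}, and a function v.
P = {"w1": Fraction(1, 2), "w2": Fraction(1, 3), "w3": Fraction(1, 6)}
T = {"w1": "a", "w2": "b", "w3": "a"}
v = {"a": Fraction(5), "b": Fraction(-2)}

# Integral of v(T(w)) with respect to P ...
lhs = sum(v[T[w]] * p for w, p in P.items())

# ... equals the integral of v with respect to the image measure P o T^{-1}.
image = defaultdict(Fraction)
for w, p in P.items():
    image[T[w]] += p
rhs = sum(v[x] * q for x, q in image.items())

print(lhs == rhs)  # True
```

On general spaces the same identity is proved by the usual extension from indicators to simple functions to measurable functions.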
By Lemma 18.11, W is an r-dimensional {𝔚_t^{r+d,*}}-adapted null at 0 Brownian motion on the standard filtered space (W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}). By Lemma 18.12, Y is a d-dimensional continuous {𝔚_t^{r+d,*}}-adapted process, (α_Y)_j^i are in L_{2,∞}(R_+ × W^{r+d}, m_L × P_{(B,X)}), and (β_Y)^i are in L_{1,∞}(R_+ × W^{r+d}, m_L × P_{(B,X)}). It remains to show that

(2)  Ψ^i(t) ≡ Σ_{j=1}^r ((α_Y)_j^i • W^j)(t) + ((β_Y)^i • m_L)(t) + Y^i(0) − Y^i(t) = 0  for i = 1, ..., d.

Since Y, (α_Y)_j^i • W^j, and (β_Y)^i • m_L are all continuous processes, it suffices to show that for every t ∈ R_+ there exists a null set Λ_t in (W^{r+d}, 𝔚^{r+d,*}, P_{(B,X)}) such that on Λ_t^c the equality (2) holds. For this we show that for every i = 1, ..., d the random variables Ψ^i(t) and Φ^i(t) in (1) have identical probability distributions on (R, 𝔅_R). By Theorem 12.8 there exist sequences {(α_Y^n)_j^i : n ∈ N} and {(β_Y^n)^i : n ∈ N} in L_0(W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}) such that

(3)  lim_{n→∞} ||(α_Y^n)_j^i − (α_Y)_j^i||_{2,∞}^{m_L × P_{(B,X)}} = 0,  lim_{n→∞} ||(β_Y^n)^i − (β_Y)^i||_{1,∞}^{m_L × P_{(B,X)}} = 0.

Thus by Definition 12.9, we have lim_{n→∞} |(α_Y^n)_j^i • W^j − (α_Y)_j^i • W^j|_∞ = 0 in the space M_2^c(W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}), and then by Remark 11.5 for every t ∈ R_+ we have

  lim_{n→∞} ∫_{W^{r+d}} |((α_Y^n)_j^i • W^j)(t) − ((α_Y)_j^i • W^j)(t)|² dP_{(B,X)} = 0.

Similarly we have

  lim_{n→∞} ∫_{W^{r+d}} |((β_Y^n)^i • m_L)(t) − ((β_Y)^i • m_L)(t)| dP_{(B,X)} = 0.

Since convergence of a sequence in L_2(W^{r+d}, 𝔚^{r+d}, P_{(B,X)}) implies convergence in L_1(W^{r+d}, 𝔚^{r+d}, P_{(B,X)}), which then implies convergence in the probability measure P_{(B,X)}, the last two equalities imply

(4)  P_{(B,X)}-lim_{n→∞} {Σ_{j=1}^r ((α_Y^n)_j^i • W^j)_t + ((β_Y^n)^i • m_L)_t + Y_0^i − Y_t^i} = Σ_{j=1}^r ((α_Y)_j^i • W^j)_t + ((β_Y)^i • m_L)_t + Y_0^i − Y_t^i.

Now since (α_Y^n)_j^i and (β_Y^n)^i are in L_0(W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}) we have

(5)  (α_Y^n)_j^i(s, w) = (f_0^n)_j^i(w) 1_{{0}}(s) + Σ_{k∈N} (f_k^n)_j^i(w) 1_{I_{n,k}}(s),
   (β_Y^n)^i(s, w) = (g_0^n)^i(w) 1_{{0}}(s) + Σ_{k∈N} (g_k^n)^i(w) 1_{I_{n,k}}(s)  for (s, w) ∈ R_+ × W^{r+d},

where I_{n,k} = (t_{n,k−1}, t_{n,k}] for n, k ∈ N; {t_{n,k} : k ∈ Z_+} are strictly increasing sequences in R_+ such that t_{n,0} = 0 and lim_{k→∞} t_{n,k} = ∞; {(f_k^n)_j^i : k ∈ Z_+} and {(g_k^n)^i : k ∈ Z_+} are bounded sequences of real-valued random variables on (W^{r+d}, 𝔚^{r+d,*}, P_{(B,X)}) such that (f_0^n)_j^i and (g_0^n)^i are 𝔚_0^{r+d,*}-measurable and (f_k^n)_j^i and (g_k^n)^i are 𝔚_{t_{n,k−1}}^{r+d,*}-measurable for k ∈ N. If we replace (f_k^n)_j^i and (g_k^n)^i with E[(f_k^n)_j^i | 𝔚_{t_{n,k−1}}^{r+d}] and E[(g_k^n)^i | 𝔚_{t_{n,k−1}}^{r+d}] respectively in (5), then (3) and (4) still hold. Thus we may and do assume that (f_k^n)_j^i and (g_k^n)^i are 𝔚_{t_{n,k−1}}^{r+d}-measurable for k ∈ N. Consider the mapping (B, X) of Ω into W^{r+d} defined by (B, X)(ω) = (B(·, ω), X(·, ω)) for ω ∈ Ω. Since (B, X) is {F_t}-adapted, (B, X) is F_{t_{n,k−1}}/𝔚_{t_{n,k−1}}^{r+d}-measurable for every
t_{n,k−1} by Theorem 17.8. Thus (f_k^n)_j^i((B, X)(ω)) and (g_k^n)^i((B, X)(ω)) for ω ∈ Ω are F_{t_{n,k−1}}-measurable. Then the random variables (a_k^n)_j^i and (b_k^n)^i on (Ω, F, P) defined by

  (a_k^n)_j^i(ω) = (f_k^n)_j^i((B, X)(ω)),  (b_k^n)^i(ω) = (g_k^n)^i((B, X)(ω))  for ω ∈ Ω,

are F_{t_{n,k−1}}-measurable. This implies that the stochastic processes (α_X^n)_j^i and (β_X^n)^i on the standard filtered space (Ω, F, {F_t}, P) defined by

(6)  (α_X^n)_j^i(s, ω) = (a_0^n)_j^i(ω) 1_{{0}}(s) + Σ_{k∈N} (a_k^n)_j^i(ω) 1_{I_{n,k}}(s),
(7)  (β_X^n)^i(s, ω) = (b_0^n)^i(ω) 1_{{0}}(s) + Σ_{k∈N} (b_k^n)^i(ω) 1_{I_{n,k}}(s)  for (s, ω) ∈ R_+ × Ω,

are in L_0(Ω, F, {F_t}, P). Let us show

(8)  lim_{n→∞} ||(α_X^n)_j^i − (α_X)_j^i||_{2,∞}^{m_L × P} = 0,  lim_{n→∞} ||(β_X^n)^i − (β_X)^i||_{1,∞}^{m_L × P} = 0.

Note that according to Remark 11.18, the first of these two equalities is equivalent to the condition that lim_{n→∞} ||(α_X^n)_j^i − (α_X)_j^i||_{2,t}^{m_L × P} = 0 for every t ∈ R_+. Now for fixed t ∈ R_+, letting t_{n,p_n} = t for n ∈ N, we have

  ||(α_X^n)_j^i − (α_X)_j^i||_{2,t}^{m_L × P}
  = ∫_{[0,t]×Ω} |(a_0^n)_j^i(ω) 1_{{0}}(s) + Σ_{k=1}^{p_n} (a_k^n)_j^i(ω) 1_{I_{n,k}}(s) − α_j^i(s, X(·, ω))|² (m_L × P)(d(s, ω))
  = ∫_{[0,t]×W^{r+d}} |(f_0^n)_j^i(w) 1_{{0}}(s) + Σ_{k=1}^{p_n} (f_k^n)_j^i(w) 1_{I_{n,k}}(s) − α_j^i(s, w″)|² (m_L × P_{(B,X)})(d(s, w))
  = ||(α_Y^n)_j^i − (α_Y)_j^i||_{2,t}^{m_L × P_{(B,X)}}

by the Image Probability Law. But according to Remark 11.18, the first equality in (3) is equivalent to having lim_{n→∞} ||(α_Y^n)_j^i − (α_Y)_j^i||_{2,t}^{m_L × P_{(B,X)}} = 0 for every t ∈ R_+. Thus the first equality in (8) holds. The second equality in (8) is proved likewise. From (8) we derive

(9)  P-lim_{n→∞} {Σ_{j=1}^r ((α_X^n)_j^i • B^j)_t + ((β_X^n)^i • m_L)_t + X_0^i − X_t^i} = Σ_{j=1}^r ((α_X)_j^i • B^j)_t + ((β_X)^i • m_L)_t + X_0^i − X_t^i
in the same way we derived (4) from (3). We show next that the sum on the left side of (4) and the sum on the left side of (9) have identical probability distributions on (R, 𝔅_R). For brevity, let these two sums be denoted by S_{(W,Y)}^n and S_{(B,X)}^n respectively. Then by (5), we have

(10)  S_{(W,Y)}^n(w) = Σ_{j=1}^r ((α_Y^n)_j^i • W^j)_t(w) + ((β_Y^n)^i • m_L)_t(w) + Y_0^i(w) − Y_t^i(w)
   = Σ_{j=1}^r Σ_{k=1}^{p_n} (f_k^n)_j^i(w){W_{t_{n,k}}^j(w) − W_{t_{n,k−1}}^j(w)} + Σ_{k=1}^{p_n} (g_k^n)^i(w)(t_{n,k} − t_{n,k−1}) + Y_0^i(w) − Y_t^i(w)  for w ∈ W^{r+d},

and similarly by (7) and (6) we have

(11)  S_{(B,X)}^n(ω) = Σ_{j=1}^r ((α_X^n)_j^i • B^j)_t(ω) + ((β_X^n)^i • m_L)_t(ω) + X_0^i(ω) − X_t^i(ω)
   = Σ_{j=1}^r Σ_{k=1}^{p_n} (f_k^n)_j^i((B, X)(ω)){B_{t_{n,k}}^j(ω) − B_{t_{n,k−1}}^j(ω)} + Σ_{k=1}^{p_n} (g_k^n)^i((B, X)(ω))(t_{n,k} − t_{n,k−1}) + X_0^i(ω) − X_t^i(ω)  for ω ∈ Ω.

Consider the two (2r + 1)(p_n + 1) + 2-dimensional random vectors V_{(W,Y)} on the probability space (W^{r+d}, 𝔚^{r+d,*}, P_{(B,X)}) and V_{(B,X)} on (Ω, F, P) defined by

  V_{(W,Y)} = ×_{j=1}^r ×_{k=0}^{p_n} W_{t_{n,k}}^j × ×_{j=1}^r ×_{k=0}^{p_n} (f_k^n)_j^i × ×_{k=0}^{p_n} (g_k^n)^i × Y_{t_{n,0}}^i × Y_{t_{n,p_n}}^i,
  V_{(B,X)} = ×_{j=1}^r ×_{k=0}^{p_n} B_{t_{n,k}}^j × ×_{j=1}^r ×_{k=0}^{p_n} (a_k^n)_j^i × ×_{k=0}^{p_n} (b_k^n)^i × X_{t_{n,0}}^i × X_{t_{n,p_n}}^i.

Let us show that their probability distributions P_{(B,X)} ∘ V_{(W,Y)}^{-1} and P ∘ V_{(B,X)}^{-1} on (R^{(2r+1)(p_n+1)+2}, 𝔅_{R^{(2r+1)(p_n+1)+2}}) are equal. For A = ×_{j=1}^r ×_{k=0}^{p_n} E_{n,k}^j × ×_{j=1}^r ×_{k=0}^{p_n} F_{n,k}^j × ×_{k=0}^{p_n} G_{n,k} × H_{n,0} × H_{n,p_n} with E_{n,k}^j, F_{n,k}^j, G_{n,k}, H_{n,0}, H_{n,p_n} ∈ 𝔅_R, we have

  P_{(B,X)} ∘ V_{(W,Y)}^{-1}(A) = P_{(B,X)}( ∩_{j=1}^r ∩_{k=0}^{p_n} (W_{t_{n,k}}^j)^{-1}(E_{n,k}^j) ∩ ∩_{j=1}^r ∩_{k=0}^{p_n} ((f_k^n)_j^i)^{-1}(F_{n,k}^j) ∩ ∩_{k=0}^{p_n} ((g_k^n)^i)^{-1}(G_{n,k}) ∩ (Y_{t_{n,0}}^i)^{-1}(H_{n,0}) ∩ (Y_{t_{n,p_n}}^i)^{-1}(H_{n,p_n}) )
and

  P ∘ V_{(B,X)}^{-1}(A) = P( ∩_{j=1}^r ∩_{k=0}^{p_n} (B_{t_{n,k}}^j)^{-1}(E_{n,k}^j) ∩ ∩_{j=1}^r ∩_{k=0}^{p_n} ((a_k^n)_j^i)^{-1}(F_{n,k}^j) ∩ ∩_{k=0}^{p_n} ((b_k^n)^i)^{-1}(G_{n,k}) ∩ (X_{t_{n,0}}^i)^{-1}(H_{n,0}) ∩ (X_{t_{n,p_n}}^i)^{-1}(H_{n,p_n}) ).

Since B_{t_{n,k}}^j(ω) = W_{t_{n,k}}^j((B, X)(ω)) and X_{t_{n,k}}^i(ω) = Y_{t_{n,k}}^i((B, X)(ω)) for ω ∈ Ω, we have

  (B_{t_{n,k}}^j)^{-1}(E_{n,k}^j) = (B, X)^{-1}((W_{t_{n,k}}^j)^{-1}(E_{n,k}^j))

and

  (X_{t_{n,k}}^i)^{-1}(H_{n,k}) = (B, X)^{-1}((Y_{t_{n,k}}^i)^{-1}(H_{n,k})).

Also from (6), we have

  ((a_k^n)_j^i)^{-1}(F_{n,k}^j) = ((f_k^n)_j^i ∘ (B, X))^{-1}(F_{n,k}^j) = (B, X)^{-1}(((f_k^n)_j^i)^{-1}(F_{n,k}^j)),

and similarly

  ((b_k^n)^i)^{-1}(G_{n,k}) = (B, X)^{-1}(((g_k^n)^i)^{-1}(G_{n,k})).

Substituting these equalities in the expression above for P ∘ V_{(B,X)}^{-1}(A) and recalling P_{(B,X)} = P ∘ (B, X)^{-1}, we have P_{(B,X)} ∘ V_{(W,Y)}^{-1}(A) = P ∘ V_{(B,X)}^{-1}(A). Now since ×_{m=1}^{(2r+1)(p_n+1)+2} 𝔅_R is a π-class and σ(×_{m=1}^{(2r+1)(p_n+1)+2} 𝔅_R) = 𝔅_{R^{(2r+1)(p_n+1)+2}}, the two probability distributions are equal on 𝔅_{R^{(2r+1)(p_n+1)+2}} by Corollary 1.8. A point of R^{(2r+1)(p_n+1)+2} may be written as

  ×_{j=1}^r ×_{k=0}^{p_n} x_{n,k}^j × ×_{j=1}^r ×_{k=0}^{p_n} y_{n,k}^j × ×_{k=0}^{p_n} z_{n,k} × u_{n,0} × v_{n,p_n}

with x_{n,k}^j, y_{n,k}^j, z_{n,k}, u_{n,0}, v_{n,p_n} ∈ R. Let us define a mapping T of R^{(2r+1)(p_n+1)+2} into R by setting

  T(×_{j=1}^r ×_{k=0}^{p_n} x_{n,k}^j × ×_{j=1}^r ×_{k=0}^{p_n} y_{n,k}^j × ×_{k=0}^{p_n} z_{n,k} × u_{n,0} × v_{n,p_n}) = Σ_{j=1}^r Σ_{k=1}^{p_n} y_{n,k}^j (x_{n,k}^j − x_{n,k−1}^j) + Σ_{k=1}^{p_n} z_{n,k}(t_{n,k} − t_{n,k−1}) + u_{n,0} − v_{n,p_n}.

Clearly T is a 𝔅_{R^{(2r+1)(p_n+1)+2}}/𝔅_R-measurable mapping of R^{(2r+1)(p_n+1)+2} into R. Thus T ∘ V_{(W,Y)} and T ∘ V_{(B,X)} are real-valued random variables on (W^{r+d}, 𝔚^{r+d,*}, P_{(B,X)}) and (Ω, F, P) respectively. Furthermore, by (10) and (11), we have T ∘ V_{(W,Y)} = S_{(W,Y)}^n and T ∘ V_{(B,X)} = S_{(B,X)}^n. Thus for the probability distributions of S_{(W,Y)}^n and S_{(B,X)}^n on (R, 𝔅_R),
which we denote by μ_{S_{(W,Y)}^n} and μ_{S_{(B,X)}^n}, we have μ_{S_{(W,Y)}^n} = P_{(B,X)} ∘ (T ∘ V_{(W,Y)})^{-1} = P_{(B,X)} ∘ (V_{(W,Y)})^{-1} ∘ T^{-1} and μ_{S_{(B,X)}^n} = P ∘ (T ∘ V_{(B,X)})^{-1} = P ∘ (V_{(B,X)})^{-1} ∘ T^{-1}. But P_{(B,X)} ∘ (V_{(W,Y)})^{-1} = P ∘ (V_{(B,X)})^{-1} on 𝔅_{R^{(2r+1)(p_n+1)+2}} as we showed above. Therefore μ_{S_{(W,Y)}^n} = μ_{S_{(B,X)}^n} on (R, 𝔅_R).

Let μ_n, n ∈ N, and μ be probability measures on a measurable space (S, 𝔅_S) where S is a metric space and 𝔅_S is the Borel σ-algebra of subsets of S. We say that μ_n converges weakly to μ, and write w-lim_{n→∞} μ_n = μ, if lim_{n→∞} ∫_S f dμ_n = ∫_S f dμ for every real-valued bounded continuous function f on S. The limit of weak convergence of probability measures is unique in the sense that if ν is a probability measure on (S, 𝔅_S) and w-lim_{n→∞} μ_n = μ as well as w-lim_{n→∞} μ_n = ν, then μ = ν on (S, 𝔅_S). If ξ_n, n ∈ N, and ξ are S-valued random variables on a probability space and their probability distributions on (S, 𝔅_S) are denoted by μ_n, n ∈ N, and μ respectively, then we say that ξ_n converges in distribution to ξ if w-lim_{n→∞} μ_n = μ. If ξ_n converges in probability to ξ, then ξ_n converges in distribution to ξ.

Now according to (4), the random variables S_{(W,Y)}^n on (W^{r+d}, 𝔚^{r+d,*}, P_{(B,X)}) defined by (10) converge in probability P_{(B,X)} to the random variable Ψ^i(t) defined by (2), and therefore their probability distributions μ_{S_{(W,Y)}^n} on (R, 𝔅_R) converge weakly to the probability distribution of Ψ^i(t). On the other hand, according to (9), the random variables S_{(B,X)}^n on (Ω, F, P) defined by (11) converge in probability P to the random variable Φ^i(t) defined by (1), and thus their probability distributions μ_{S_{(B,X)}^n} on (R, 𝔅_R) converge weakly to the probability distribution of Φ^i(t). But μ_{S_{(W,Y)}^n} = μ_{S_{(B,X)}^n} on (R, 𝔅_R) for every n ∈ N as we showed above. Thus by the uniqueness of the limit in weak convergence of probability measures, the probability distributions of Ψ^i(t) and
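The weak-convergence facts just recalled can be illustrated numerically: if ξ_n → ξ in probability, then E[f(ξ_n)] → E[f(ξ)] for every bounded continuous f. A sketch (not from the text) using the deterministic perturbation ξ_n = ξ + 1/n, which trivially converges to ξ in probability:

```python
import math
import random

# Empirical distribution of xi from 50,000 samples; xi_n = xi + 1/n.
rng = random.Random(2)
samples = [rng.gauss(0.0, 1.0) for _ in range(50_000)]
f = math.tanh  # a bounded continuous test function

def mean_f(shift):
    """Empirical E[f(xi + shift)] over the fixed sample set."""
    return sum(f(x + shift) for x in samples) / len(samples)

target = mean_f(0.0)
errs = [abs(mean_f(1.0 / n) - target) for n in (1, 10, 100)]
print(errs)  # strictly decreasing toward 0
```

Since tanh is strictly increasing, each empirical mean exceeds the target and the errors decrease monotonically as the shift 1/n shrinks.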
Φ^i(t) are identical. Since Φ^i(t) = 0 a.e. on (Ω, F, P), we have Ψ^i(t) = 0 a.e. on (W^{r+d}, 𝔚^{r+d,*}, P_{(B,X)}). This proves (2), so that (W, Y) is a solution. Finally, since (W, Y)(w) = w for w ∈ W^{r+d}, the probability distribution of (W, Y) on (W^{r+d}, 𝔚^{r+d}) is P_{(B,X)} itself, which is the probability distribution of (B, X); in particular P_{(B,X)}(Y^{-1}(A)) = P(X^{-1}(A)) for A ∈ 𝔚^d. This shows that Y and X have identical probability distributions. ∎

As a converse of Theorem 18.13 we have the following.

Theorem 18.14. Let B be an r-dimensional {F_t}-adapted null at 0 Brownian motion and X be a d-dimensional {F_t}-adapted continuous process on a standard filtered space (Ω, F, {F_t}, P). Let P_{(B,X)} be the probability distribution of (B, X) on (W^{r+d}, 𝔚^{r+d}), let (W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}) be the standard filtered space generated by P_{(B,X)} as in Definition 18.10, and let W and Y be the two processes on this filtered space defined by setting

  W(t, w) = w′(t) for (t, w) ∈ R_+ × W^{r+d},  Y(t, w) = w″(t) for (t, w) ∈ R_+ × W^{r+d},

where we write w = (w′, w″) with w′ ∈ W^r and w″ ∈ W^d. If (W, Y) is a solution of the stochastic differential equation (1) in Definition 18.6 on (W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}), then (B, X) is a solution of the stochastic differential equation on (Ω, F, {F_t}, P).

Proof. To show that (α_X)_j^i ∈ L_{2,∞}(R_+ × Ω, m_L × P), note that since (W, Y) is a solution of the stochastic differential equation, we have (α_Y)_j^i ∈ L_{2,∞}(R_+ × W^{r+d}, m_L × P_{(B,X)}), that is, for every t ∈ R_+,

  ∫_{[0,t]×W^{r+d}} |α_j^i(s, Y(·, w))|² (m_L × P_{(B,X)})(d(s, w)) < ∞.

Then by the Image Probability Law, as in the proof of Lemma 18.12, we have

  ∫_{[0,t]×Ω} |α_j^i(s, X(·, ω))|² (m_L × P)(d(s, ω)) < ∞,

and thus (α_X)_j^i ∈ L_{2,∞}(R_+ × Ω, m_L × P). Similarly from (β_Y)^i ∈ L_{1,∞}(R_+ × W^{r+d}, m_L × P_{(B,X)}) we have (β_X)^i ∈ L_{1,∞}(R_+ × Ω, m_L × P). Then the stochastic integrals (α_X)_j^i • B^j and (β_X)^i • m_L exist. For t ∈ R_+, let

  Φ^i(t) = Σ_{j=1}^r ((α_X)_j^i • B^j)(t) + ((β_X)^i • m_L)(t) + X^i(0) − X^i(t)  for i = 1, ..., d,

and

  Ψ^i(t) = Σ_{j=1}^r ((α_Y)_j^i • W^j)(t) + ((β_Y)^i • m_L)(t) + Y^i(0) − Y^i(t)  for i = 1, ..., d.

The fact that Φ^i(t) and Ψ^i(t) have identical probability distributions on (R, 𝔅_R) can be verified by the same argument as in the proof of Theorem 18.13. Since (W, Y) is a solution
of the stochastic differential equation, we have Ψ^i(t) = 0 a.e. on (W^{r+d}, 𝔚^{r+d}, P_{(B,X)}) and thus Φ^i(t) = 0 a.e. on (Ω, F, P). This shows that (B, X) is a solution of the stochastic differential equation on (Ω, F, {F_t}, P). ∎

Combining Theorem 18.13 and Theorem 18.14 we have the following.

Theorem 18.15. Let B^{(q)} be an r-dimensional {F_t^{(q)}}-adapted null at 0 Brownian motion and X^{(q)} be a d-dimensional {F_t^{(q)}}-adapted continuous process on a standard filtered space (Ω^{(q)}, F^{(q)}, {F_t^{(q)}}, P^{(q)}) for q = 1, 2. If (B^{(1)}, X^{(1)}) is a solution of the stochastic differential equation (1) in Definition 18.6 on (Ω^{(1)}, F^{(1)}, {F_t^{(1)}}, P^{(1)}), and (B^{(1)}, X^{(1)}) and (B^{(2)}, X^{(2)}) have identical probability distributions on (W^{r+d}, 𝔚^{r+d}), then (B^{(2)}, X^{(2)}) is a solution of the stochastic differential equation on (Ω^{(2)}, F^{(2)}, {F_t^{(2)}}, P^{(2)}).

Proof. Let P_{(B^{(q)},X^{(q)})} be the probability distribution of (B^{(q)}, X^{(q)}) on (W^{r+d}, 𝔚^{r+d}). Consider for q = 1, 2 the standard filtered space (W^{r+d}, 𝔚^{r+d,*,(q)}, {𝔚_t^{r+d,*,(q)}}, P_{(B^{(q)},X^{(q)})}) generated by P_{(B^{(q)},X^{(q)})}. Since P_{(B^{(1)},X^{(1)})} = P_{(B^{(2)},X^{(2)})} by our assumption, these two standard filtered spaces are identical. Let W and Y be the two processes on this standard filtered space defined by W(t, w) = w′(t) and Y(t, w) = w″(t) for (t, w) ∈ R_+ × W^{r+d}, where w = (w′, w″) with w′ ∈ W^r and w″ ∈ W^d. Since (B^{(1)}, X^{(1)}) is a solution of the stochastic differential equation on (Ω^{(1)}, F^{(1)}, {F_t^{(1)}}, P^{(1)}), (W, Y) is a solution of the stochastic differential equation on (W^{r+d}, 𝔚^{r+d,*,(1)}, {𝔚_t^{r+d,*,(1)}}, P_{(B^{(1)},X^{(1)})}) by Theorem 18.13. Then by Theorem 18.14, (B^{(2)}, X^{(2)}) is a solution of the stochastic differential equation on (Ω^{(2)}, F^{(2)}, {F_t^{(2)}}, P^{(2)}). ∎

[III] Initial Value Problems

In Theorem 18.13 we showed that if the stochastic differential equation (1) of Definition 18.6 has a solution (B, X) on some standard filtered space (Ω, F, {F_t}, P), then (W, Y), where W and Y are defined by W(t, w) = w′(t) and Y(t, w) = w″(t) for t ∈ R_+ and w = (w′, w″) ∈ W^{r+d} with w′ ∈ W^r and w″ ∈ W^d, is a solution of the stochastic differential equation on (W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}). Let μ be the probability distribution on (R^d, 𝔅_{R^d}) of the d-dimensional random vector X_0 on (Ω, F, P). We shall show that for μ-a.e. x in (R^d, 𝔅_{R^d}), the mapping (W, Y) is a solution on a certain standard filtered space (W^{r+d}, 𝔚^{r+d,*,x}, {𝔚_t^{r+d,*,x}}, P_{(B,X)}^x) of the initial value problem

  dX_t = α(t, X) dB(t) + β(t, X) dt,  X_0 = x.
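The decomposition pursued in this subsection says, informally, that the law of a solution with random initial value X_0 ~ μ is the μ-mixture of the laws of solutions started at the fixed points x. A simulation sketch of this mixture identity (coefficients and the three-point μ are hypothetical, not from the text), comparing sample means of X(1):

```python
import math
import random

def terminal_value(x0, rng, n_steps=100, t_end=1.0):
    """Euler-Maruyama value X(t_end) for dX = 0.5 dB(t) - X dt, X(0) = x0.
    (Hypothetical coefficients, for illustration only.)"""
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += 0.5 * rng.gauss(0.0, math.sqrt(dt)) - x * dt
    return x

rng = random.Random(3)
mu = [-1.0, 0.0, 2.0]  # initial distribution: uniform on three points
n = 5_000

# Random start drawn from mu on each run ...
m_mixed = sum(terminal_value(rng.choice(mu), rng) for _ in range(n)) / n
# ... versus the mixture over the fixed starting points x of mu.
per_x = [terminal_value(x, rng) for x in mu for _ in range(n // len(mu))]
m_strat = sum(per_x) / len(per_x)
print(abs(m_mixed - m_strat))  # close to 0
```

Both estimators target the same quantity, the μ-average of the mean of X(1) started at x, in line with the disintegration of P_{(B,X)} carried out below.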
Observation 18.16. Consider the solution (W, Y) of the stochastic differential equation on (W^{r+d}, 𝔚^{r+d,*}, {𝔚_t^{r+d,*}}, P_{(B,X)}) in Theorem 18.13. Now Y is a d-dimensional {𝔚_t^{r+d}}-adapted process on (W^{r+d}, 𝔚^{r+d}, {𝔚_t^{r+d}}, P_{(B,X)}) and Y_0 is a 𝔚^{r+d}/𝔅_{R^d}-measurable mapping of W^{r+d} into R^d. There exists a conditional probability P_{(B,X)}^{·|Y_0}(·, ·) of P_{(B,X)} given Y_0, that is, a function on 𝔚^{r+d} × R^d satisfying the following conditions.
1°. There exists a null set N_1 in (R^d, 𝔅_{R^d}, μ) such that P_{(B,X)}^{·|Y_0}(·, x) is a probability measure on 𝔚^{r+d} for every x ∈ N_1^c.
2°. P_{(B,X)}^{·|Y_0}(A, ·) is a 𝔅_{R^d}-measurable function on R^d for every A ∈ 𝔚^{r+d}.
3°. P_{(B,X)}(A ∩ Y_0^{-1}(G)) = ∫_G P_{(B,X)}^{·|Y_0}(A, x) μ(dx) for every A ∈ 𝔚^{r+d} and G ∈ 𝔅_{R^d}.
By Theorem C.14, for an extended real-valued random variable ξ on (W^{r+d}, 𝔚^{r+d}, P_{(B,X)}) we have for every G ∈ 𝔅_{R^d}

4°.  ∫_{Y_0^{-1}(G)} ξ dP_{(B,X)} = ∫_G {∫_{W^{r+d}} ξ(w) P_{(B,X)}^{·|Y_0}(dw, x)} μ(dx)

in the sense that the existence of one side implies that of the other and the equality of the two. According to Theorem C.15, there exists a null set N_2 in (R^d, 𝔅_{R^d}, μ) such that

5°.  P_{(B,X)}^{·|Y_0}(Y_0^{-1}({x}), x) = 1  for x ∈ N_2^c.

Let N_{Y_0} = N_1 ∪ N_2. Let us define

(1)  P_{(B,X)}^x(·) = P_{(B,X)}^{·|Y_0}(·, x) for x ∈ N_{Y_0}^c,  and P_{(B,X)}^x(·) = 0 for x ∈ N_{Y_0}.

Thus defined, P_{(B,X)}^x(A) is a 𝔅_{R^d}-measurable function of x on R^d for every A ∈ 𝔚^{r+d}, and P_{(B,X)}^x is a measure on 𝔚^{r+d} for every x ∈ R^d; in particular, for x ∈ N_{Y_0}^c, P_{(B,X)}^x(·) is equal to the probability measure P_{(B,X)}^{·|Y_0}(·, x) on 𝔚^{r+d}. From 3°, 4°, and 5°, we have

(2)  P_{(B,X)}(A ∩ Y_0^{-1}(G)) = ∫_G P_{(B,X)}^x(A) μ(dx)
for every A ∈ 𝔚^{r+d} and G ∈ 𝔅_{R^d},

(3)  ∫_{Y_0^{-1}(G)} ξ dP_{(B,X)} = ∫_G {∫_{W^{r+d}} ξ(w) P_{(B,X)}^x(dw)} μ(dx)

in the sense that the existence of one side implies that of the other and the equality of the two, and

(4)  P_{(B,X)}^x(Y_0^{-1}({x})) = 1  for x ∈ N_{Y_0}^c.
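Conditions such as (2) and (4) say that the family {P_{(B,X)}^x : x ∈ R^d} disintegrates P_{(B,X)} over the distribution μ of Y_0. A finite discrete illustration (not from the text) of the disintegration identity P(A ∩ {Y = x}) = P^x(A) μ({x}):

```python
from fractions import Fraction

# Joint law P on labelled points (omega, y), where y plays the role of Y(omega).
P = {("w1", "a"): Fraction(1, 4), ("w2", "a"): Fraction(1, 4),
     ("w3", "b"): Fraction(1, 2)}
mu = {}
for (w, y), p in P.items():
    mu[y] = mu.get(y, Fraction(0)) + p  # distribution of Y

def cond(x):
    """Conditional probability P^x, supported on {Y = x}."""
    return {(w, y): p / mu[x] for (w, y), p in P.items() if y == x}

A = {("w1", "a"), ("w3", "b")}
for x in mu:
    lhs = sum(p for (w, y), p in P.items() if (w, y) in A and y == x)
    rhs = sum(cond(x).get(pt, Fraction(0)) for pt in A) * mu[x]
    assert lhs == rhs
print("disintegration identity verified")
```

Each conditional measure is carried by its own fiber {Y = x}, the discrete counterpart of condition (4).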
Definition 18.17. For each x ∈ N_{Y_0}^c, let
1°. 𝔚^{r+d,*,x} be the completion of 𝔚^{r+d} with respect to P_{(B,X)}^x;
2°. 𝔚_t^{r+d,0,x} = σ(𝔚_t^{r+d} ∪ 𝔑), where 𝔑 is the collection of all the null sets in the complete measure space (W^{r+d}, 𝔚^{r+d,*,x}, P_{(B,X)}^x);
3°. 𝔚_t^{r+d,*,x} = ∩_{ε>0} 𝔚_{t+ε}^{r+d,0,x}.
Thus (W^{r+d}, 𝔚^{r+d,*,x}, {𝔚_t^{r+d,*,x}}, P_{(B,X)}^x) is a standard filtered space for each x ∈ N_{Y_0}^c.

Observation 18.18. The Borel σ-algebra 𝔚^d of subsets of W^d for any d ∈ N is countably determined according to Proposition C.8, since W^d is a separable metric space. We show here that for any t ∈ R_+, the σ-algebra 𝔚_t^d is also countably determined. Let V^d be the space of all continuous mappings of [0, t] into R^d. With the metric

  ρ(v_1, v_2) = max_{s∈[0,t]} |v_1(s) − v_2(s)|  for v_1, v_2 ∈ V^d,

V^d is a complete separable metric space. Let 𝔙 be the Borel σ-algebra of subsets of V^d. Let ℭ be the collection of the cylinder sets in V^d, that is, sets of the type

  C = {v ∈ V^d : (v(t_1), ..., v(t_n)) ∈ E_1 × ··· × E_n},

where 0 ≤ t_1 < ··· < t_n ≤ t and E_i ∈ 𝔅_{R^d} for i = 1, ..., n. The fact that 𝔙 = σ(ℭ) is verified as for W^d.
Let T be the mapping of W^d into V^d defined by letting T(w) be the restriction of w ∈ W^d to [0, t]. For τ = (t_1, ..., t_n), where 0 ≤ t_1 < ··· < t_n ≤ t, and E = E_1 × ··· × E_n, where E_i ∈ 𝔅_{R^d} for i = 1, ..., n, let

  Z_{τ,E} = {w ∈ W^d : (w(t_1), ..., w(t_n)) ∈ E_1 × ··· × E_n} ∈ ℨ_{[0,t]},
  C_{τ,E} = {v ∈ V^d : (v(t_1), ..., v(t_n)) ∈ E_1 × ··· × E_n} ∈ ℭ.

Clearly T(Z_{τ,E}) = C_{τ,E} and T^{-1}(C_{τ,E}) = Z_{τ,E}. Thus T establishes a one-to-one correspondence between ℨ_{[0,t]} and ℭ. This implies T^{-1}(ℭ) = ℨ_{[0,t]} and then T^{-1}(𝔙) = T^{-1}(σ(ℭ)) = σ(T^{-1}(ℭ)) = σ(ℨ_{[0,t]}) = 𝔚_t^d. This shows that T is a 𝔚_t^d/𝔙-measurable mapping of W^d into V^d. Now since V^d is a separable metric space, its Borel σ-algebra 𝔙 is countably determined according to Proposition C.8. Let 𝔇 = {D_n : n ∈ N} be a countable collection of determining sets for 𝔙. Let 𝔈 = {E_n : n ∈ N} where E_n = T^{-1}(D_n) for n ∈ N. Let us show that 𝔈 is a countable collection of determining sets for 𝔚_t^d. Let P_1 and P_2 be two probability measures on (W^d, 𝔚_t^d) and suppose P_1 = P_2 on 𝔈. Let Q_1 and Q_2 be the probability distributions of T on (V^d, 𝔙) relative to the two probability measures P_1 and P_2 on (W^d, 𝔚_t^d), that is, Q_1 = P_1 ∘ T^{-1} and Q_2 = P_2 ∘ T^{-1} on 𝔙. Now the fact that P_1 = P_2 on 𝔈 implies that Q_1(D_n) = P_1 ∘ T^{-1}(D_n) = P_1(E_n) = P_2(E_n) = P_2 ∘ T^{-1}(D_n) = Q_2(D_n) for every n ∈ N. Thus Q_1 = Q_2 on 𝔇 and therefore Q_1 = Q_2 on 𝔙, since 𝔇 is a countable collection of determining sets for 𝔙. Take an arbitrary A ∈ 𝔚_t^d. Since 𝔚_t^d = T^{-1}(𝔙), there exists B ∈ 𝔙 such that A = T^{-1}(B). Then P_1(A) = P_1(T^{-1}(B)) = Q_1(B) = Q_2(B) = P_2(T^{-1}(B)) = P_2(A). Thus P_1 = P_2 on 𝔚_t^d. This shows that 𝔈 is a countable collection of determining sets for 𝔚_t^d. ∎
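The determining-set argument above is an instance of the uniqueness part of Dynkin's π-λ theorem: two probability measures agreeing on a π-system agree on the σ-algebra it generates. A finite illustration (not from the text):

```python
from fractions import Fraction

# A four-point space, a one-set pi-system, and the sigma-algebra it generates.
space = frozenset({1, 2, 3, 4})
gen = frozenset({1, 2})
sigma = {frozenset(), gen, space - gen, space}

# Two probability measures that differ on atoms ...
P1 = {1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 4), 4: Fraction(1, 4)}
P2 = {1: Fraction(1, 2), 2: Fraction(0), 3: Fraction(1, 8), 4: Fraction(3, 8)}

def measure(P, A):
    return sum(P[x] for x in A)

# ... yet agree on the pi-system, and hence on the generated sigma-algebra.
assert measure(P1, gen) == measure(P2, gen) == Fraction(1, 2)
agree = all(measure(P1, A) == measure(P2, A) for A in sigma)
print(agree)
```

The measures are distinguishable on the full power set but not on σ({1, 2}); a determining class for a σ-algebra plays exactly this pinning-down role.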
Lemma 18.19. There exists a null set N in (R^d, 𝔅_{R^d}, μ) such that for every x ∈ N^c, the r-dimensional continuous adapted process W on (W^{r+d}, 𝔚^{r+d}, {𝔚_t^{r+d}}) is an r-dimensional {𝔚_t^{r+d,*,x}}-adapted null at 0 Brownian motion on the standard filtered space (W^{r+d}, 𝔚^{r+d,*,x}, {𝔚_t^{r+d,*,x}}, P_{(B,X)}^x).
Thus $W$ is a $\{\mathfrak{W}^{r+d,*,x}_t\}$-adapted process on $(W^{r+d}, \mathfrak{W}^{r+d,*,x}, \{\mathfrak{W}^{r+d,*,x}_t\}, P^x_{(B,X)})$ for $x \in N_0^c$. We show next that there exists a null set $N_0$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that for $x \in N_0^c$, $W$ is a null at 0 process on $(W^{r+d}, \mathfrak{W}^{r+d,*,x}, \{\mathfrak{W}^{r+d,*,x}_t\}, P^x_{(B,X)})$. Now since $P^x_{(B,X)}\{W_0 = 0\}$ is a $\mathfrak{B}_{\mathbb{R}^d}$-measurable function of $x$ on $\mathbb{R}^d$, integrating with respect to $\mu$ we have
(1) $\int_{\mathbb{R}^d} P^x_{(B,X)}\{W_0 = 0\}\,\mu(dx) = P_{(B,X)}(W_0^{-1}(\{0\}) \cap Y_0^{-1}(\mathbb{R}^d)) = P_{(B,X)}(W_0^{-1}(\{0\})) = 1,$
by (2) of Observation 18.16 and (1) in the proof of Lemma 18.11. Now since $P^x_{(B,X)}\{W_0 = 0\} \in [0,1]$ for $x \in \mathbb{R}^d$ and since $\int_{\mathbb{R}^d}\mu(dx) = 1$, (1) implies that there exists a null set $N_0$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ containing the earlier null set such that $P^x_{(B,X)}\{W_0 = 0\} = 1$, that is, $W$ is null at 0, when $x \in N_0^c$. To show that there exists a null set $N$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that for every $x \in N^c$ the process $W$ is an $r$-dimensional $\{\mathfrak{W}^{r+d,*,x}_t\}$-adapted Brownian motion on the standard filtered space $(W^{r+d}, \mathfrak{W}^{r+d,*,x}, \{\mathfrak{W}^{r+d,*,x}_t\}, P^x_{(B,X)})$, it suffices to show that for $0 \le s < t$ and $y \in \mathbb{R}^r$ we have
(2) $E\big[e^{i\langle y,\, W_t - W_s\rangle}\,\big|\,\mathfrak{W}^{r+d,*,x}_s\big] = e^{-\frac{|y|^2}{2}(t-s)}$ a.e. on $(W^{r+d}, \mathfrak{W}^{r+d,*,x}, P^x_{(B,X)})$,
that is,
(3) $\int_A e^{i\langle y,\, W_t - W_s\rangle}\,dP^x_{(B,X)} = P^x_{(B,X)}(A)\,e^{-\frac{|y|^2}{2}(t-s)}$ for $A \in \mathfrak{W}^{r+d,*,x}_s$.
Now for $A \in \mathfrak{W}^{r+d}_s$ and $G \in \mathfrak{B}_{\mathbb{R}^d}$,
(4) $\int_{\mathbb{R}^d}\Big[\int_{A \cap Y_0^{-1}(G)} e^{i\langle y,\, W_t - W_s\rangle}\,dP^x_{(B,X)}\Big]\mu(dx) = \int_{A \cap Y_0^{-1}(G)} e^{i\langle y,\, W_t - W_s\rangle}\,dP_{(B,X)}$
by (3) of Observation 18.16. By Lemma 18.11, $W$ is an $r$-dimensional $\{\mathfrak{W}^{r+d,*}_t\}$-adapted Brownian motion on $(W^{r+d}, \mathfrak{W}^{r+d,*}, \{\mathfrak{W}^{r+d,*}_t\}, P_{(B,X)})$. Also since $A \in \mathfrak{W}^{r+d}_s$ and since $Y_0^{-1}(G) \in \mathfrak{W}^{r+d}_0$, we have $A \cap Y_0^{-1}(G) \in \mathfrak{W}^{r+d}_s \subset \mathfrak{W}^{r+d,*}_s$. Thus by (3) in the proof of Lemma 18.11 and (2) of Observation 18.16, we have
(5) $\int_{A \cap Y_0^{-1}(G)} e^{i\langle y,\, W_t - W_s\rangle}\,dP_{(B,X)} = e^{-\frac{|y|^2}{2}(t-s)}\int_G P^x_{(B,X)}(A)\,\mu(dx).$
CHAPTER 4. STOCHASTIC DIFFERENTIAL
EQUATIONS
Combining (4) and (5) we have for $A \in \mathfrak{W}^{r+d}_s$ and $G \in \mathfrak{B}_{\mathbb{R}^d}$
$$\int_G \Big\{\int_A e^{i\langle y,\, W_t - W_s\rangle}\,dP^x_{(B,X)}\Big\}\,\mu(dx) = e^{-\frac{|y|^2}{2}(t-s)}\int_G P^x_{(B,X)}(A)\,\mu(dx).$$
Since this equality holds for every $G \in \mathfrak{B}_{\mathbb{R}^d}$ and the integrands are $\mathfrak{B}_{\mathbb{R}^d}$-measurable functions, there exists a null set $N_{s,t,y,A}$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ containing $N_0$ such that for $x \in N^c_{s,t,y,A}$ we have
(6) $\int_A e^{i\langle y,\, W_t - W_s\rangle}\,dP^x_{(B,X)} = e^{-\frac{|y|^2}{2}(t-s)}\,P^x_{(B,X)}(A).$
Let $\mathfrak{D}$ be a countable collection of determining sets of the countably determined $\sigma$-algebra $\mathfrak{W}^{r+d}_s$. Let $\mathbb{Q}_+$ be the collection of all nonnegative rational numbers and let $\mathbb{Q}^r$ be the collection of all rational points in $\mathbb{R}^r$. Let $N = \bigcup_{s,t \in \mathbb{Q}_+,\, s<t}\bigcup_{y \in \mathbb{Q}^r}\bigcup_{A \in \mathfrak{D}} N_{s,t,y,A}$, a null set in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$. For $x \in N^c$, (6) holds for all rational $0 \le s < t$, all $y \in \mathbb{Q}^r$ and all $A \in \mathfrak{D}$, and hence, by the continuity of both sides in $s$, $t$ and $y$ and by the fact that $\mathfrak{D}$ is a determining class for $\mathfrak{W}^{r+d}_s$, we have
$$\int_A e^{i\langle y,\, W_t - W_s\rangle}\,dP^x_{(B,X)} = e^{-\frac{|y|^2}{2}(t-s)}\,P^x_{(B,X)}(A) \quad \text{a.e. on } (W^{r+d}, \mathfrak{W}^{r+d,*,x}, P^x_{(B,X)}).$$
Then by the same argument as in the last part of the proof of Theorem 18.11 we have (2). ■

According to Lemma 18.11, $W$ is an $r$-dimensional $\{\mathfrak{W}^{r+d,*}_t\}$-adapted null at 0 Brownian motion on $(W^{r+d}, \mathfrak{W}^{r+d,*}, \{\mathfrak{W}^{r+d,*}_t\}, P_{(B,X)})$. According to Lemma 18.19 there exists a null set $N$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that for every $x \in N^c$, $W$ is an $r$-dimensional $\{\mathfrak{W}^{r+d,*,x}_t\}$-adapted null at 0 Brownian motion on $(W^{r+d}, \mathfrak{W}^{r+d,*,x}, \{\mathfrak{W}^{r+d,*,x}_t\}, P^x_{(B,X)})$. If
$\{P_{(B,X)} \cdot \int_{[0,t]} (\alpha_Y)^i_j(s)\,dW^j(s) : t \in \mathbb{R}_+\}$ respectively. We write $|\cdot|^{(B,X)}_\infty$ and $|\cdot|^{(B,X),x}_\infty$ for the quasinorm $|\cdot|_\infty$ in Definition 11.4 for the two spaces $\mathbf{M}_2(W^{r+d}, \mathfrak{W}^{r+d,*}, \{\mathfrak{W}^{r+d,*}_t\}, P_{(B,X)})$ and $\mathbf{M}_2(W^{r+d}, \mathfrak{W}^{r+d,*,x}, \{\mathfrak{W}^{r+d,*,x}_t\}, P^x_{(B,X)})$ respectively.

Lemma 18.20. There exist a null set $N_0$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ and, for every $x \in N_0^c$, a null set $\Lambda_x$ in $(W^{r+d}, \mathfrak{W}^{r+d}, P^x_{(B,X)})$ such that
(1) $P^x_{(B,X)} \cdot (\alpha_Y)^i_j \bullet W^j = P_{(B,X)} \cdot (\alpha_Y)^i_j \bullet W^j$ and $P^x_{(B,X)} \cdot (\beta_Y)^i \bullet m_L = P_{(B,X)} \cdot (\beta_Y)^i \bullet m_L$ on $\Lambda_x^c$.
Proof. According to Lemma 18.19 there exists a null set $N_1$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that for every $x \in N_1^c$, $W$ is an $r$-dimensional $\{\mathfrak{W}^{r+d,*,x}_t\}$-adapted null at 0 Brownian motion on $(W^{r+d}, \mathfrak{W}^{r+d,*,x}, \{\mathfrak{W}^{r+d,*,x}_t\}, P^x_{(B,X)})$. Since $Y$ is a $d$-dimensional $\{\mathfrak{W}^{r+d}_t\}$-adapted continuous process, $w \mapsto Y(\cdot,w)$ for $w \in W^{r+d}$ is $\mathfrak{W}^{r+d}_t/\mathfrak{W}^d_t$-measurable and hence $\mathfrak{W}^{r+d,*,x}_t/\mathfrak{W}^d_t$-measurable for every $t \in \mathbb{R}_+$. Thus by Lemma 18.5, $(\alpha_Y)^i_j$ and $(\beta_Y)^i$ are $\{\mathfrak{W}^{r+d,*,x}_t\}$-progressively measurable processes on $(W^{r+d}, \mathfrak{W}^{r+d,*,x}, \{\mathfrak{W}^{r+d,*,x}_t\}, P^x_{(B,X)})$ for every $x \in N_1^c$. Now for every $m \in \mathbb{N}$, we have
$$\int_{\mathbb{R}^d}\Big[\int_{[0,m]\times W^{r+d}} |(\alpha_Y)^i_j|^2\,d(m_L \times P^x_{(B,X)})\Big]\mu(dx) = \int_{\mathbb{R}^d}\Big[\int_{W^{r+d}}\Big\{\int_{[0,m]} |(\alpha_Y)^i_j|^2\,dm_L\Big\}\,dP^x_{(B,X)}\Big]\mu(dx)$$
$$= \int_{W^{r+d}}\Big[\int_{[0,m]} |(\alpha_Y)^i_j|^2\,dm_L\Big]\,dP_{(B,X)} = \int_{[0,m]\times W^{r+d}} |(\alpha_Y)^i_j|^2\,d(m_L \times P_{(B,X)}) < \infty,$$
where the first equality is by applying the Fubini-Tonelli Theorem to the measurable process $(\alpha_Y)^i_j$, the second equality is by (3) of Observation 18.16, and the finiteness of the last integral is by (1) in the proof of Lemma 18.12. Thus there exists a null set in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$, whose union with $N_1$ we call $N_2$, such that
$$\int_{[0,m]\times W^{r+d}} |(\alpha_Y)^i_j|^2\,d(m_L \times P^x_{(B,X)}) < \infty$$
for all $m \in \mathbb{N}$ when $x \in N_2^c$. This shows that $(\alpha_Y)^i_j$ is in $L_{2,\infty}(\mathbb{R}_+ \times W^{r+d}, m_L \times P^x_{(B,X)})$ for $x \in N_2^c$. According to Lemma 18.12, $(\alpha_Y)^i_j$ is in $L_{2,\infty}(\mathbb{R}_+ \times W^{r+d}, m_L \times P_{(B,X)})$ so that there exists a sequence $\{(\alpha_Y)^{i,(n)}_j : n \in \mathbb{N}\}$ in $L_0(W^{r+d}, \mathfrak{W}^{r+d}, \{\mathfrak{W}^{r+d}_t\}, P_{(B,X)})$ such that
(2) $\lim_{n\to\infty}\big\|(\alpha_Y)^{i,(n)}_j - (\alpha_Y)^i_j\big\|^{P_{(B,X)}}_\infty = 0.$
Recall that $(\alpha_Y)^{i,(n)}_j$ can be chosen from $L_0(W^{r+d}, \mathfrak{W}^{r+d}, \{\mathfrak{W}^{r+d}_t\}, P_{(B,X)})$ as we showed in the proof of Theorem 18.13. By Definition 12.9, for the stochastic integral $P_{(B,X)} \cdot (\alpha_Y)^i_j \bullet W^j$ of $(\alpha_Y)^i_j$ with respect to the $\{\mathfrak{W}^{r+d,*}_t\}$-adapted null at 0 Brownian motion $W^j$ on the standard filtered space $(W^{r+d}, \mathfrak{W}^{r+d,*}, \{\mathfrak{W}^{r+d,*}_t\}, P_{(B,X)})$ we have
$$\lim_{n\to\infty}\big|(\alpha_Y)^{i,(n)}_j \bullet W^j - P_{(B,X)} \cdot (\alpha_Y)^i_j \bullet W^j\big|^{(B,X)}_\infty = 0,$$
or equivalently $\lim_{n\to\infty}|(\alpha_Y)^{i,(n)}_j \bullet W^j - P_{(B,X)} \cdot (\alpha_Y)^i_j \bullet W^j|^{(B,X)}_m = 0$ for every $m \in \mathbb{N}$ by Remark 11.5, that is,
$$\lim_{n\to\infty}\int_{[0,m]} (\alpha_Y)^{i,(n)}_j(s)\,dW^j(s) = P_{(B,X)} \cdot \int_{[0,m]} (\alpha_Y)^i_j(s)\,dW^j(s) \quad \text{in } L_2(W^{r+d}, \mathfrak{W}^{r+d,*}, P_{(B,X)})$$
for every $m \in \mathbb{N}$. Now convergence in $L_2$ implies the existence of a subsequence which converges a.e. Note that a null set in $(W^{r+d}, \mathfrak{W}^{r+d,*}, P_{(B,X)})$ is always a subset of a null set in $(W^{r+d}, \mathfrak{W}^{r+d}, P_{(B,X)})$ since $\mathfrak{W}^{r+d,*}$ is the completion of $\mathfrak{W}^{r+d}$ with respect to $P_{(B,X)}$. Thus there exist a null set $\Lambda$ in $(W^{r+d}, \mathfrak{W}^{r+d}, P_{(B,X)})$ and a collection of subsequences $\{(m,k) : k \in \mathbb{N}\}$ of $\{n\}$ for $m \in \mathbb{N}$, in which $\{(m+1,k) : k \in \mathbb{N}\}$ is a subsequence of $\{(m,k) : k \in \mathbb{N}\}$ for $m \in \mathbb{N}$, such that
(3) $\lim_{k\to\infty}\int_{[0,m]} (\alpha_Y)^{i,(m,k)}_j(s)\,dW^j(s) = P_{(B,X)} \cdot \int_{[0,m]} (\alpha_Y)^i_j(s)\,dW^j(s)$
for every $m \in \mathbb{N}$ on $\Lambda^c$. On the other hand,
(4) $\int_{\mathbb{R}^d}\Big[\int_{[0,m]\times W^{r+d}} |(\alpha_Y)^{i,(m,k)}_j - (\alpha_Y)^i_j|^2\,d(m_L \times P^x_{(B,X)})\Big]\mu(dx)$
$= \int_{\mathbb{R}^d}\Big[\int_{W^{r+d}}\Big\{\int_{[0,m]} |(\alpha_Y)^{i,(m,k)}_j - (\alpha_Y)^i_j|^2\,dm_L\Big\}\,dP^x_{(B,X)}\Big]\mu(dx)$
$= \int_{W^{r+d}}\Big[\int_{[0,m]} |(\alpha_Y)^{i,(m,k)}_j - (\alpha_Y)^i_j|^2\,dm_L\Big]\,dP_{(B,X)} = \int_{[0,m]\times W^{r+d}} |(\alpha_Y)^{i,(m,k)}_j - (\alpha_Y)^i_j|^2\,d(m_L \times P_{(B,X)}),$
where the second equality is by (3) of Observation 18.16. Since (2) is equivalent to $\lim_{n\to\infty}\|(\alpha_Y)^{i,(n)}_j - (\alpha_Y)^i_j\|^{[m],P_{(B,X)}} = 0$ for every $m \in \mathbb{N}$ by Remark 11.18, (4) implies that there exist a null set in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$, whose union with $N_2$ we call $N_3$, and a subsequence $\{(m,\ell) : \ell \in \mathbb{N}\}$ of $\{(m,k) : k \in \mathbb{N}\}$ for $m \in \mathbb{N}$, chosen so that $\{(m+1,\ell) : \ell \in \mathbb{N}\}$ is a subsequence of $\{(m,\ell) : \ell \in \mathbb{N}\}$, such that
$$\lim_{\ell\to\infty}\int_{[0,m]\times W^{r+d}} |(\alpha_Y)^{i,(m,\ell)}_j(s) - (\alpha_Y)^i_j(s)|^2\,d(m_L \times P^x_{(B,X)}) = 0$$
for every $m \in \mathbb{N}$ when $x \in N_3^c$. If we take the diagonal sequence $\{(\ell,\ell)\}$ and write $\{\ell\}$ for this subsequence of $\{n\}$, then we have
$$\lim_{\ell\to\infty}\int_{[0,m]\times W^{r+d}} |(\alpha_Y)^{i,(\ell)}_j(s) - (\alpha_Y)^i_j(s)|^2\,d(m_L \times P^x_{(B,X)}) = 0,$$
that is, $\lim_{\ell\to\infty}\|(\alpha_Y)^{i,(\ell)}_j - (\alpha_Y)^i_j\|^{[m],P^x_{(B,X)}} = 0$ for every $m \in \mathbb{N}$ when $x \in N_3^c$. Thus by Remark 11.18 we have
(5) $\lim_{\ell\to\infty}\big\|(\alpha_Y)^{i,(\ell)}_j - (\alpha_Y)^i_j\big\|^{P^x_{(B,X)}}_\infty = 0$ for $x \in N_3^c$.
Recall that our $(\alpha_Y)^{i,(\ell)}_j$ are in $L_0(W^{r+d}, \mathfrak{W}^{r+d}, \{\mathfrak{W}^{r+d}_t\}, P_{(B,X)})$. Since the spaces $L_0$ do not actually depend on the probability measure in the underlying filtered space, $(\alpha_Y)^{i,(\ell)}_j$ are also in $L_0(W^{r+d}, \mathfrak{W}^{r+d}, \{\mathfrak{W}^{r+d}_t\}, P^x_{(B,X)})$ for $x \in N_3^c$. Thus by Definition 12.9, (5) implies
$$\lim_{\ell\to\infty}\big|(\alpha_Y)^{i,(\ell)}_j \bullet W^j - P^x_{(B,X)} \cdot (\alpha_Y)^i_j \bullet W^j\big|^{(B,X),x}_\infty = 0 \quad \text{for } x \in N_3^c,$$
and then by Remark 11.5,
$$\lim_{\ell\to\infty}\int_{[0,m]} (\alpha_Y)^{i,(\ell)}_j(s)\,dW^j(s) = P^x_{(B,X)} \cdot \int_{[0,m]} (\alpha_Y)^i_j(s)\,dW^j(s) \quad \text{in } L_2(W^{r+d}, \mathfrak{W}^{r+d,*,x}, P^x_{(B,X)})$$
for every $m \in \mathbb{N}$ when $x \in N_3^c$. Since convergence in $L_2$
implies the existence of a subsequence which converges a.e., and since a null set in $(W^{r+d}, \mathfrak{W}^{r+d,*,x}, P^x_{(B,X)})$ is always contained in a null set in $(W^{r+d}, \mathfrak{W}^{r+d}, P^x_{(B,X)})$, for every $x \in N_3^c$ there exist a null set $\Lambda'_x$ in $(W^{r+d}, \mathfrak{W}^{r+d}, P^x_{(B,X)})$ and a subsequence $\{k\}$ of $\{\ell\}$ such that
(6) $\lim_{k\to\infty}\int_{[0,m]} (\alpha_Y)^{i,(k)}_j(s)\,dW^j(s) = P^x_{(B,X)} \cdot \int_{[0,m]} (\alpha_Y)^i_j(s)\,dW^j(s)$
for all $m \in \mathbb{N}$ on $(\Lambda'_x)^c$ when $x \in N_3^c$. From (3) we have
(7) $\lim_{k\to\infty}\int_{[0,m]} (\alpha_Y)^{i,(k)}_j(s)\,dW^j(s) = P_{(B,X)} \cdot \int_{[0,m]} (\alpha_Y)^i_j(s)\,dW^j(s)$
for all $m \in \mathbb{N}$ on $\Lambda^c$. Now by (2) of Observation 18.16, we have
(8) $\int_{\mathbb{R}^d} P^x_{(B,X)}(\Lambda)\,\mu(dx) = P_{(B,X)}(\Lambda) = 0.$
Thus there exists a null set $N_4$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that $P^x_{(B,X)}(\Lambda) = 0$ for $x \in N_4^c$. Let $N_0 = N_3 \cup N_4$. For every $x \in N_0^c$, let $\Lambda_x = \Lambda'_x \cup \Lambda$. Then for every $x \in N_0^c$, $\Lambda_x$ is a null set in $(W^{r+d}, \mathfrak{W}^{r+d}, P^x_{(B,X)})$, and by (6) and (7),
$$\lim_{k\to\infty}\int_{[0,m]} (\alpha_Y)^{i,(k)}_j(s)\,dW^j(s) = P^x_{(B,X)} \cdot \int_{[0,m]} (\alpha_Y)^i_j(s)\,dW^j(s) = P_{(B,X)} \cdot \int_{[0,m]} (\alpha_Y)^i_j(s)\,dW^j(s)$$
for all $m \in \mathbb{N}$ on $\Lambda_x^c$, that is, $P^x_{(B,X)} \cdot (\alpha_Y)^i_j \bullet W^j = P_{(B,X)} \cdot (\alpha_Y)^i_j \bullet W^j$ on $\Lambda_x^c$. Following a similar line of argument we have $P^x_{(B,X)} \cdot (\beta_Y)^i \bullet m_L = P_{(B,X)} \cdot (\beta_Y)^i \bullet m_L$ on $\Lambda_x^c$. ■

Theorem 18.21. Suppose the stochastic differential equation (1) in Definition 18.6 has a solution $(B,X)$ on some standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Let $\mu$ be the probability distribution of $X_0$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$. Then there exists a null set $N$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that for every $x \in N^c$ the pair of processes $(W,Y)$ on $(W^{r+d}, \mathfrak{W}^{r+d})$ defined by $W(t,w) = w'(t)$ and $Y(t,w) = w''(t)$ for $t \in \mathbb{R}_+$ and $w = (w',w'') \in W^{r+d}$ with $w' \in W^r$ and $w'' \in W^d$ is a solution of the stochastic differential equation (1) in Definition 18.6 on the standard filtered space $(W^{r+d}, \mathfrak{W}^{r+d,*,x}, \{\mathfrak{W}^{r+d,*,x}_t\}, P^x_{(B,X)})$ satisfying the condition $Y_0 = x$ a.e. on $(W^{r+d}, \mathfrak{W}^{r+d}, P^x_{(B,X)})$. □
Proof. Let $N_0$ and $\Lambda_x$ for $x \in N_0^c$ be as specified in Lemma 18.20. For $x \in N_0^c$, let
(1) $\Psi^{x,i}_t = \sum_{j=1}^r (P^x_{(B,X)} \cdot (\alpha_Y)^i_j \bullet W^j)_t + (P^x_{(B,X)} \cdot (\beta_Y)^i \bullet m_L)_t + Y^i_0 - Y^i_t.$
Let
(2) $\Psi^i_t = \sum_{j=1}^r (P_{(B,X)} \cdot (\alpha_Y)^i_j \bullet W^j)_t + (P_{(B,X)} \cdot (\beta_Y)^i \bullet m_L)_t + Y^i_0 - Y^i_t.$
According to Theorem 18.13, $(W,Y)$ is a solution of the stochastic differential equation on $(W^{r+d}, \mathfrak{W}^{r+d,*}, \{\mathfrak{W}^{r+d,*}_t\}, P_{(B,X)})$ and thus there exists a null set $\Lambda$ in $(W^{r+d}, \mathfrak{W}^{r+d,*}, P_{(B,X)})$ such that $\Psi^i = 0$ on $\Lambda^c$. Since a null set in $(W^{r+d}, \mathfrak{W}^{r+d,*}, P_{(B,X)})$ is always contained in a null set in $(W^{r+d}, \mathfrak{W}^{r+d}, P_{(B,X)})$, we can choose $\Lambda$ to be a null set in $(W^{r+d}, \mathfrak{W}^{r+d}, P_{(B,X)})$. Then as we saw in (8) in the proof of Lemma 18.20, there exists a null set $N_1$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that $\Lambda$ is a null set in $(W^{r+d}, \mathfrak{W}^{r+d}, P^x_{(B,X)})$ for every $x \in N_1^c$. Let $N = N_0 \cup N_1$ and $\Lambda'_x = \Lambda_x \cup \Lambda$ for $x \in N^c$. Applying (1) of Lemma 18.20 to (1), we have $\Psi^{x,i} = \Psi^i = 0$ on $(\Lambda'_x)^c$ for $x \in N^c$. This shows that $(W,Y)$ is a solution of the stochastic differential equation on $(W^{r+d}, \mathfrak{W}^{r+d,*,x}, \{\mathfrak{W}^{r+d,*,x}_t\}, P^x_{(B,X)})$ for every $x \in N^c$. Finally, according to (4) of Observation 18.16 we have $P^x_{(B,X)}(Y_0^{-1}(\{x\})) = 1$ for $x \in N^c$, that is, $Y_0 = x$ a.e. on $(W^{r+d}, \mathfrak{W}^{r+d}, P^x_{(B,X)})$. ■
§19 Existence and Uniqueness of Solutions

[I] Uniqueness in Probability Law and Pathwise Uniqueness of Solutions

Let $(B^{(q)}, X^{(q)})$ be a solution of the stochastic differential equation (1) in Definition 18.6 on a standard filtered space $(\Omega^{(q)}, \mathfrak{F}^{(q)}, \{\mathfrak{F}^{(q)}_t\}, P^{(q)})$ for $q = 1, 2$. Consider the standard filtered space $(W^{r+d}, \mathfrak{W}^{r+d,*,(q)}, \{\mathfrak{W}^{r+d,*,(q)}_t\}, P_{(B^{(q)},X^{(q)})})$
identical probability distributions on $(W^d, \mathfrak{W}^d)$. We say that the solution is unique in the sense of probability law under deterministic initial conditions if whenever $X^{(1)}_0 = x$ a.e. on $(\Omega^{(1)}, \mathfrak{F}^{(1)}, P^{(1)})$ and $X^{(2)}_0 = x$ a.e. on $(\Omega^{(2)}, \mathfrak{F}^{(2)}, P^{(2)})$ for some $x \in \mathbb{R}^d$, then $X^{(1)}$ and $X^{(2)}$ have identical probability distributions on $(W^d, \mathfrak{W}^d)$.

Lemma 19.2. Uniqueness of solution in the sense of probability law for the stochastic differential equation (1) in Definition 18.6 is equivalent to uniqueness in the sense of probability law under deterministic initial conditions.

Proof. Clearly uniqueness in the sense of probability law contains uniqueness in the sense of probability law under deterministic initial conditions as particular cases in which the initial probability distributions are unit masses. Conversely, assume that whenever $X^{(1)}_0 = x$ a.e. on $(\Omega^{(1)}, \mathfrak{F}^{(1)}, P^{(1)})$ and $X^{(2)}_0 = x$ a.e. on $(\Omega^{(2)}, \mathfrak{F}^{(2)}, P^{(2)})$ for some $x \in \mathbb{R}^d$, then $X^{(1)}$ and $X^{(2)}$ have identical probability distributions on $(W^d, \mathfrak{W}^d)$. Suppose that $X^{(1)}_0$ and $X^{(2)}_0$ have identical probability distribution $\mu$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$. We are to show that $X^{(1)}$ and $X^{(2)}$ have identical probability distributions on $(W^d, \mathfrak{W}^d)$, that is,
(1) $P^{(1)} \circ (X^{(1)})^{-1}(A) = P^{(2)} \circ (X^{(2)})^{-1}(A)$ for $A \in \mathfrak{W}^d$,
where $(X^{(q)})$ is the mapping of $\Omega^{(q)}$ into $W^d$ defined by $(X^{(q)})(\omega) = X^{(q)}(\cdot,\omega)$ for $\omega \in \Omega^{(q)}$. For the mapping $(B^{(q)}, X^{(q)})(\omega) = (B^{(q)}(\cdot,\omega), X^{(q)}(\cdot,\omega))$ for $\omega \in \Omega^{(q)}$ into $W^{r+d}$ we have
(2) $P^{(q)} \circ (X^{(q)})^{-1}(A) = P^{(q)} \circ (B^{(q)}, X^{(q)})^{-1}(W^r \times A)$ for $A \in \mathfrak{W}^d$.
Now since the process $(W^{(q)}, Y^{(q)})$ on $(W^{r+d}, \mathfrak{W}^{r+d,*,(q)}, \{\mathfrak{W}^{r+d,*,(q)}_t\}, P_{(B^{(q)},X^{(q)})})$ and the process $(B^{(q)}, X^{(q)})$ on $(\Omega^{(q)}, \mathfrak{F}^{(q)}, \{\mathfrak{F}^{(q)}_t\}, P^{(q)})$ have identical probability distributions on $(W^{r+d}, \mathfrak{W}^{r+d})$ according to Theorem 18.13, we have
(3) $P^{(q)} \circ (B^{(q)}, X^{(q)})^{-1}(W^r \times A) = P_{(B^{(q)},X^{(q)})} \circ (W^{(q)}, Y^{(q)})^{-1}(W^r \times A) = P_{(B^{(q)},X^{(q)})}(W^r \times A)$ for $A \in \mathfrak{W}^d$,
where the last equality is by the fact that $(W^{(q)}, Y^{(q)})(w) = (W^{(q)}(\cdot,w), Y^{(q)}(\cdot,w)) = w$ for $w \in W^{r+d}$, that is, $(W^{(q)}, Y^{(q)})$ is the identity mapping of $W^{r+d}$ onto $W^{r+d}$. By (2) and (3), to prove (1) it suffices to show
(4) $P_{(B^{(1)},X^{(1)})}(W^r \times A) = P_{(B^{(2)},X^{(2)})}(W^r \times A)$ for $A \in \mathfrak{W}^d$.
Now since $(W^{(q)}, Y^{(q)})$ and $(B^{(q)}, X^{(q)})$ have identical probability distributions on $(W^{r+d}, \mathfrak{W}^{r+d})$, $Y^{(q)}$ and $X^{(q)}$ have identical probability distributions on $(W^d, \mathfrak{W}^d)$. This implies according to Theorem 17.8 that $Y^{(q)}_0$ and $X^{(q)}_0$ have identical probability distributions on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$. Therefore both $Y^{(1)}_0$ and $Y^{(2)}_0$ have $\mu$ as their probability distributions on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$. According to Theorem 18.21, there exists a null set $N$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that for every $x \in N^c$, $(W^{(q)}, Y^{(q)})$ are solutions of the stochastic differential equation on $(W^{r+d}, \mathfrak{W}^{r+d,*,x,(q)}, \{\mathfrak{W}^{r+d,*,x,(q)}_t\}, P^x_{(B^{(q)},X^{(q)})})$ satisfying the condition $Y^{(q)}_0 = x$ a.e. on $(W^{r+d}, \mathfrak{W}^{r+d}, P^x_{(B^{(q)},X^{(q)})})$. Then by the assumed uniqueness in the sense of probability law under deterministic initial conditions, $Y^{(1)}$ and $Y^{(2)}$ have identical probability distributions on $(W^d, \mathfrak{W}^d)$ under these measures. Thus for $x \in N^c$ and $A \in \mathfrak{W}^d$, we have
$$P^x_{(B^{(1)},X^{(1)})} \circ (Y^{(1)})^{-1}(A) = P^x_{(B^{(2)},X^{(2)})} \circ (Y^{(2)})^{-1}(A),$$
and consequently
$$P^x_{(B^{(1)},X^{(1)})} \circ (W^{(1)}, Y^{(1)})^{-1}(W^r \times A) = P^x_{(B^{(2)},X^{(2)})} \circ (W^{(2)}, Y^{(2)})^{-1}(W^r \times A).$$
Since $(W^{(q)}, Y^{(q)})$ are identity mappings, the last equality reduces to
(5) $P^x_{(B^{(1)},X^{(1)})}(W^r \times A) = P^x_{(B^{(2)},X^{(2)})}(W^r \times A).$
But according to (2) of Observation 18.16, we have
(6) $P_{(B^{(q)},X^{(q)})}(W^r \times A) = \int_{\mathbb{R}^d} P^x_{(B^{(q)},X^{(q)})}(W^r \times A)\,\mu(dx)$ for $A \in \mathfrak{W}^d$.
Using (5) in (6), we have (4). ■

Definition 19.3. We say that the solution of the stochastic differential equation (1) in Definition 18.6 is pathwise unique if whenever $(B, X^{(1)})$ and $(B, X^{(2)})$ are two solutions on the same standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $X^{(1)}_0 = X^{(2)}_0$ a.e. on $(\Omega, \mathfrak{F}, P)$, then there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that $X^{(1)}(\cdot,\omega) = X^{(2)}(\cdot,\omega)$ for $\omega \in \Lambda^c$. We say that the solution satisfies the pathwise uniqueness condition under deterministic initial conditions if whenever $X^{(1)}_0 = X^{(2)}_0 = x$ a.e. on $(\Omega, \mathfrak{F}, P)$ for some $x \in \mathbb{R}^d$, then there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that $X^{(1)}(\cdot,\omega) = X^{(2)}(\cdot,\omega)$ for $\omega \in \Lambda^c$.

Unlike uniqueness in the sense of probability law, pathwise uniqueness assumes that $(B, X^{(1)})$ and $(B, X^{(2)})$ are two solutions on the same standard filtered space with the same Brownian motion $B$. Nevertheless pathwise uniqueness implies uniqueness in the sense of probability law. This will be shown in Lemma 19.19 below. Also the equivalence of
pathwise uniqueness and pathwise uniqueness under deterministic initial conditions will be shown in Theorem 20.20. Let us define two conditions on the coefficients $\alpha$ and $\beta$ of the stochastic differential equation. The first is a Lipschitz condition which will imply the pathwise uniqueness of solutions. The second is a growth condition which, together with the Lipschitz condition, will imply the existence of solutions.

Definition 19.4. We define two conditions on the coefficients $\alpha$ and $\beta$ in the stochastic differential equation (1) in Definition 18.6 as follows: there exists a measure $\lambda$ on $(\mathbb{R}_+, \mathfrak{B}_{\mathbb{R}_+})$ which is finite for every compact subset of $\mathbb{R}_+$ and satisfies the condition that for every $T \in \mathbb{R}_+$ there exists $L_T \in \mathbb{R}_+$ such that for $t \in [0,T]$ and $w_1, w_2 \in W^d$ we have
(1) $|\alpha(t,w_1) - \alpha(t,w_2)|^2 + |\beta(t,w_1) - \beta(t,w_2)|^2 \le L_T\Big\{\int_{[0,t]} |w_1(s) - w_2(s)|^2\,\lambda(ds) + |w_1(t) - w_2(t)|^2\Big\},$
and for $t \in [0,T]$ and $w \in W^d$ we have
(2) $|\alpha(t,w)|^2 + |\beta(t,w)|^2 \le L_T\Big\{\int_{[0,t]} |w(s)|^2\,\lambda(ds) + |w(t)|^2 + 1\Big\}.$
Note that in the definition above, $|\cdot|$ is the generic Euclidean norm for all finite-dimensional Euclidean spaces. Thus $|\alpha(t,w)| = \{\sum_{i=1}^d\sum_{j=1}^r |\alpha^i_j(t,w)|^2\}^{1/2}$ and $|\beta(t,w)| = \{\sum_{i=1}^d |\beta^i(t,w)|^2\}^{1/2}$.

Lemma 19.5. Let $Z$ be a nonnegative left-continuous adapted process on a standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ which is increasing in the sense that $Z_s \le Z_t$ for any $s, t \in \mathbb{R}_+$ such that $s \le t$. For $N \in \mathbb{N}$ and $t \in \mathbb{R}_+$, let
(1) $\Lambda^N_t = \{\omega \in \Omega : Z(t,\omega) \le N\}.$
The process $I^N$ on $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ defined by
(2) $I^N(t,\omega) = \mathbf{1}_{\Lambda^N_t}(\omega)$ for $(t,\omega) \in \mathbb{R}_+ \times \Omega$
is a $\{0,1\}$-valued decreasing left-continuous adapted process. For every $\omega \in \Omega$, the sample function $I^N(\cdot,\omega)$, if it is not identically equal to 0 or 1, is given by
(3) $I^N(t,\omega) = \begin{cases} 1 & \text{for } t \in [0, \tau(\omega)] \\ 0 & \text{for } t \in (\tau(\omega), \infty) \end{cases}$
for some $\tau(\omega) \in \mathbb{R}_+$.

Proof. Since $Z$ is an increasing process, $\Lambda^N_t$ decreases as $t$ increases. Since $Z$ is an adapted process, $\Lambda^N_t \in \mathfrak{F}_t$ for $t \in \mathbb{R}_+$. Thus $\mathbf{1}_{\Lambda^N_t}$ is an $\mathfrak{F}_t$-measurable random variable and $I^N$ is an adapted process. Since $\Lambda^N_s \supset \Lambda^N_t$ for $s \le t$, we have $\mathbf{1}_{\Lambda^N_s} \ge \mathbf{1}_{\Lambda^N_t}$ and thus $I^N$ is a decreasing process. To show that every sample function of $I^N$ is left-continuous, let $\omega \in \Omega$ and $t \in \mathbb{R}_+$ be fixed. If $I^N(t,\omega) = 1$, then $\mathbf{1}_{\Lambda^N_t}(\omega) = 1$ so that $\omega \in \Lambda^N_t$. This implies that $\omega \in \Lambda^N_s$ for every $s \in [0,t]$ so that $\mathbf{1}_{\Lambda^N_s}(\omega) = 1$, that is, $I^N(s,\omega) = 1$ for $s \in [0,t]$, proving the left-continuity of $I^N(\cdot,\omega)$ at $t$. If on the other hand $I^N(t,\omega) = 0$, then $\mathbf{1}_{\Lambda^N_t}(\omega) = 0$, that is, $\omega \notin \Lambda^N_t$. This implies by (1) that $Z(t,\omega) > N$. Then by the left-continuity of $Z(\cdot,\omega)$ there exists $\delta > 0$ such that $Z(s,\omega) > N$ for $s \in (t-\delta, t]$. Thus $\omega \notin \Lambda^N_s$ and $\mathbf{1}_{\Lambda^N_s}(\omega) = 0$, that is, $I^N(s,\omega) = 0$ for $s \in (t-\delta, t]$, proving the left-continuity of $I^N(\cdot,\omega)$ at $t$. This shows the left-continuity of every sample function of $I^N$. Now since every sample function of $I^N$ is a $\{0,1\}$-valued, left-continuous and decreasing function on $\mathbb{R}_+$, it is given by (3) unless it is identically equal to 0 or 1 on $\mathbb{R}_+$. ■

Lemma 19.6. Let $J$ be an adapted process on a standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that every sample function is of the form
(1) $J(t,\omega) = \begin{cases} 1 & \text{for } t \in [0, \tau(\omega)] \\ 0 & \text{for } t \in (\tau(\omega), \infty) \end{cases}$
for some $\tau(\omega) \in \mathbb{R}_+$ unless $J(\cdot,\omega)$ is identically equal to 0 or 1 on $\mathbb{R}_+$. Then
(2) $J(t,\omega)^2 = J(t,\omega)$ for $t \in \mathbb{R}_+$ and $\omega \in \Omega$, and $J(s,\omega)J(t,\omega) = J(t,\omega)$ for $s, t \in \mathbb{R}_+$ with $s \le t$ and $\omega \in \Omega$.
For $M \in \mathbf{M}_2(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, $X \in L_{1,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$ and $t \in \mathbb{R}_+$, we have
(3) $J(t)\,|(X \bullet m_L)(t)| \le |(JX \bullet m_L)(t)|$ on $\Omega$.
If $X$ is a predictable process and is in $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$, then there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that
(4) $J(t)\,|(X \bullet M)(t)| \le |(JX \bullet M)(t)|$ on $\Lambda^c$.

Proof. The equalities in (2) are immediate from the properties of the sample functions of $J$. Regarding (3) and (4), let us note that since $J$ is a left-continuous adapted process, it
is a predictable process and in particular a measurable process. Thus if $X$ is a process in $L_{1,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$, then the measurability and the boundedness of $J$ imply that $JX$ too is in $L_{1,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$. Similarly, if $X$ is a predictable process and is in $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$, then so is $JX$. To prove (3), note that if $J(t,\omega) = 0$, then trivially
$$J(t,\omega)\Big|\int_{[0,t]} X(s,\omega)\,m_L(ds)\Big| \le \Big|\int_{[0,t]} J(s,\omega)X(s,\omega)\,m_L(ds)\Big|,$$
and if $J(t,\omega) = 1$, then $J(s,\omega) = 1$ for $s \in [0,t]$ so that
$$J(t,\omega)\Big|\int_{[0,t]} X(s,\omega)\,m_L(ds)\Big| \le \Big|\int_{[0,t]} J(s,\omega)X(s,\omega)\,m_L(ds)\Big|.$$
Therefore (3) holds. To prove (4), let us consider first the case where $X$ is a bounded left-continuous adapted process. In this case $JX$ too is a bounded left-continuous adapted process. Thus by Proposition 12.16 and by the fact that convergence in probability implies the existence of a subsequence which converges almost surely, for $t \in \mathbb{R}_+$ there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that
(5) $\lim_{n\to\infty}\sum_k X(t_{n,k-1})\{M(t_{n,k}) - M(t_{n,k-1})\} = (X \bullet M)(t)$ on $\Lambda^c$
and
(6) $\lim_{n\to\infty}\sum_k J(t_{n,k-1})X(t_{n,k-1})\{M(t_{n,k}) - M(t_{n,k-1})\} = (JX \bullet M)(t)$ on $\Lambda^c$,
where $\{t_{n,k} : k \in \mathbb{Z}_+\}$ is a strictly increasing sequence in $\mathbb{R}_+$ with $t_{n,0} = 0$ and $t_{n,k} \uparrow \infty$ as $k \to \infty$ for each $n \in \mathbb{N}$, $\lim_{n\to\infty}\sup_k (t_{n,k} - t_{n,k-1}) = 0$, and $t_{n,p_n} = t$ and $t_{n,k} < t$ for $k = 0, \dots, p_n - 1$ for $n \in \mathbb{N}$. From (5) we have
(7) $\lim_{n\to\infty} J(t)\sum_k X(t_{n,k-1})\{M(t_{n,k}) - M(t_{n,k-1})\} = J(t)(X \bullet M)(t)$ on $\Lambda^c$.
Let $\omega \in \Lambda^c$ be fixed. If $J(t,\omega) = 1$, then $J(t_{n,k-1}) = 1$ for $k = 1, \dots, p_n$ so that the left side of (7) and that of (6) are identical, and thus we have $J(t,\omega)(X \bullet M)(t,\omega) = (JX \bullet M)(t,\omega)$ by (7) and (6). On the other hand, if $J(t,\omega) = 0$, then $J(t,\omega)|(X \bullet M)(t,\omega)| \le |(JX \bullet M)(t,\omega)|$. Thus (4) holds in this case.
For the general case where $X$ is a predictable process and is in $L_{2,\infty}(\mathbb{R}_+ \times \Omega, \mu_{[M]}, P)$, there exists by Theorem 12.8 a sequence $\{X^{(n)} : n \in \mathbb{N}\}$ in $L_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $\lim_{n\to\infty}\|X^{(n)} - X\|^{[M],P}_\infty = 0$, and thus $\lim_{n\to\infty}|X^{(n)} \bullet M - X \bullet M|_\infty = 0$ by Definition 12.9. Now $\lim_{n\to\infty}\|X^{(n)} - X\|^{[M],P}_\infty = 0$ implies that $\lim_{n\to\infty}\|JX^{(n)} - JX\|^{[M],P}_\infty = 0$, as can be verified easily. This convergence implies $\lim_{n\to\infty}|JX^{(n)} \bullet M - JX \bullet M|_\infty = 0$ according to Proposition 12.13. Now since the $X^{(n)}$ are bounded left-continuous adapted processes, there exists according to our result above a null set $\Lambda_0$ in $(\Omega, \mathfrak{F}, P)$ for our $t \in \mathbb{R}_+$ and all $n \in \mathbb{N}$ such that
(8) $J(t)\,|(X^{(n)} \bullet M)(t)| \le |(JX^{(n)} \bullet M)(t)|$ on $\Lambda_0^c$.
Now $\lim_{n\to\infty}|X^{(n)} \bullet M - X \bullet M|_\infty = 0$ implies $\lim_{n\to\infty}E(|X^{(n)} \bullet M - X \bullet M|^2_t) = 0$ according to Remark 11.5, and thus the existence of a subsequence $\{n'\}$ of $\{n\}$ such that $\lim_{n'\to\infty} X^{(n')} \bullet M = X \bullet M$ on $\Lambda_1^c$, where $\Lambda_1$ is a null set in $(\Omega, \mathfrak{F}, P)$. Similarly the convergence $\lim_{n'\to\infty}|JX^{(n')} \bullet M - JX \bullet M|_\infty = 0$ implies the existence of a subsequence $\{n''\}$ of $\{n'\}$ such that $\lim_{n''\to\infty} JX^{(n'')} \bullet M = JX \bullet M$ on $\Lambda_2^c$, where $\Lambda_2$ is a null set in $(\Omega, \mathfrak{F}, P)$. Let $\Lambda = \bigcup_{i=0}^2 \Lambda_i$. Now (8) holds for every $n''$ on $\Lambda^c$. Letting $n'' \to \infty$ we have (4). ■

Lemma 19.7. Let $c$ be a bounded nonnegative Lebesgue measurable function on a finite interval $[0,T]$. If for some $a, b \ge 0$ we have
(1) $c(t) \le a + b\int_{[0,t]} c(s)\,m_L(ds)$ for $t \in [0,T]$,
then
(2) $c(t) \le a\,e^{bt}$ for $t \in [0,T]$.

Proof. By substituting the inequality (1) into its right side and by repeating this substitution we obtain for every $n \in \mathbb{N}$ the inequality
$$c(t) \le a\sum_{k=0}^{n}\frac{(bt)^k}{k!} + b^{n+1}\int_{[0,t]}\int_{[0,t_1]}\cdots\int_{[0,t_n]} c(t_{n+1})\,m_L(dt_{n+1})\cdots m_L(dt_2)\,m_L(dt_1).$$
This inequality can then be proved by mathematical induction on $n$. Let $M \in \mathbb{R}_+$ be a bound of $c$ on $[0,T]$. Then
$$c(t) \le a\sum_{k=0}^{n}\frac{(bt)^k}{k!} + M\frac{(bt)^{n+1}}{(n+1)!} \quad \text{for } t \in [0,T].$$
Letting $n \to \infty$, we have (2). ■

Theorem 19.8. If the coefficients $\alpha$ and $\beta$ in the stochastic differential equation (1) in Definition 18.6 satisfy the Lipschitz condition (1) in Definition 19.4, then the solution, if it exists, is pathwise unique.

Proof. Suppose $(B, X^{(1)})$ and $(B, X^{(2)})$ are two solutions on the same standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $X^{(1)}_0 = X^{(2)}_0$ a.e. on $(\Omega, \mathfrak{F}, P)$. Assume that $\alpha$ and $\beta$ satisfy the Lipschitz condition. We are to show that there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that $X^{(1)}(\cdot,\omega) = X^{(2)}(\cdot,\omega)$ for $\omega \in \Lambda^c$. Now since $X^{(1)}$ and $X^{(2)}$ are continuous processes, it suffices to show that for every $t \in \mathbb{R}_+$ there exists a null set $\Lambda_t$ in $(\Omega, \mathfrak{F}, P)$ such that $X^{(1)}_t = X^{(2)}_t$ on $\Lambda_t^c$. To show this we show equivalently that for every $t \in \mathbb{R}_+$ we have
(1) $P\{|X^{(1)}_t - X^{(2)}_t| > 0\} = 0.$
Now since $(B, X^{(1)})$ and $(B, X^{(2)})$ are two solutions and $X^{(1)}_0 = X^{(2)}_0$ a.e. on $(\Omega, \mathfrak{F}, P)$, condition 4° of Definition 18.6 implies that for $i = 1, \dots, d$ we have
$$X^{(1),i} - X^{(2),i} = \sum_{j=1}^r \big((\alpha_{X^{(1)}} - \alpha_{X^{(2)}})^i_j \bullet B^j\big) + \big((\beta_{X^{(1)}} - \beta_{X^{(2)}})^i \bullet m_L\big).$$
For brevity in notation, let us define
(2) $\Delta_\alpha(t,\omega) = \alpha_{X^{(1)}}(t,\omega) - \alpha_{X^{(2)}}(t,\omega) = \alpha(t, X^{(1)}(\cdot,\omega)) - \alpha(t, X^{(2)}(\cdot,\omega))$ and $\Delta_\beta(t,\omega) = \beta_{X^{(1)}}(t,\omega) - \beta_{X^{(2)}}(t,\omega) = \beta(t, X^{(1)}(\cdot,\omega)) - \beta(t, X^{(2)}(\cdot,\omega))$
for $(t,\omega) \in \mathbb{R}_+ \times \Omega$. We write $(\Delta_\alpha)^i_j$ and $(\Delta_\beta)^i$ for the component processes of $\Delta_\alpha$ and $\Delta_\beta$. Then
$$X^{(1),i} - X^{(2),i} = \sum_{j=1}^r \big((\Delta_\alpha)^i_j \bullet B^j\big) + \big((\Delta_\beta)^i \bullet m_L\big).$$
For any real numbers $c_1, \dots, c_n$, we have $\{\sum_{j=1}^n c_j\}^2 \le n\sum_{j=1}^n c_j^2$. Thus
(3) $|X^{(1),i}_t - X^{(2),i}_t|^2 \le (r+1)\Big\{\sum_{j=1}^r \big((\Delta_\alpha)^i_j \bullet B^j\big)^2_t + \big((\Delta_\beta)^i \bullet m_L\big)^2_t\Big\}.$
Let us define a nonnegative continuous increasing process $Z$ by setting
$$Z(t,\omega) = \sup_{s \in [0,t]}\{|X^{(1)}(s,\omega)|^2 + |X^{(2)}(s,\omega)|^2\} \quad \text{for } (t,\omega) \in \mathbb{R}_+ \times \Omega.$$
By the continuity of $X^{(1)}$ and $X^{(2)}$, $\sup_{s \in [0,t]}$ is equal to $\sup_{s \in [0,t] \cap \mathbb{Q}}$ where $\mathbb{Q}$ is the countable collection of the rational numbers, and thus $Z(t)$ is $\mathfrak{F}$-measurable. Actually since $X^{(1)}$ and $X^{(2)}$ are adapted processes, $Z(t)$ is $\mathfrak{F}_t$-measurable and thus $Z$ is an adapted process. The continuity of $X^{(1)}$ and $X^{(2)}$ also implies that of $Z$. As in Lemma 19.5, for $N \in \mathbb{N}$ and $t \in \mathbb{R}_+$ let $\Lambda^N_t = \{\omega \in \Omega : Z(t,\omega) \le N\}$ and define a $\{0,1\}$-valued decreasing left-continuous adapted process by setting $I^N(t,\omega) = \mathbf{1}_{\Lambda^N_t}(\omega)$ for $(t,\omega) \in \mathbb{R}_+ \times \Omega$. By Lemma 19.6, for every $t \in \mathbb{R}_+$ we have
$$I^N(t)\,\big|\big((\Delta_\alpha)^i_j \bullet B^j\big)(t)\big| \le \big|\big(I^N(\Delta_\alpha)^i_j \bullet B^j\big)(t)\big|, \qquad I^N(t)\,\big|\big((\Delta_\beta)^i \bullet m_L\big)(t)\big| \le \big|\big(I^N(\Delta_\beta)^i \bullet m_L\big)(t)\big|.$$
Multiplying both sides of (3) by $I^N(t)$, using the inequalities above and the fact that $I^N(t)^2 = I^N(t)$, we have
$$I^N(t)\,|X^{(1),i}_t - X^{(2),i}_t|^2 \le (r+1)\Big\{\sum_{j=1}^r \big(I^N(\Delta_\alpha)^i_j \bullet B^j\big)^2_t + \big(I^N(\Delta_\beta)^i \bullet m_L\big)^2_t\Big\}$$
and hence
(4) $E\big[I^N(t)\,|X^{(1),i}_t - X^{(2),i}_t|^2\big] \le (r+1)\Big\{\sum_{j=1}^r E\big[\big(I^N(\Delta_\alpha)^i_j \bullet B^j\big)^2_t\big] + E\big[\big(I^N(\Delta_\beta)^i \bullet m_L\big)^2_t\big]\Big\}.$
By Observation 12.11,
$$E\big[\big(I^N(\Delta_\alpha)^i_j \bullet B^j\big)^2_t\big] = \int_{[0,t]} E\big[I^N(s)\,|(\Delta_\alpha)^i_j(s)|^2\big]\,m_L(ds) \le \int_{[0,t]} E\big[I^N(s)\,|\Delta_\alpha(s)|^2\big]\,m_L(ds)$$
for $i = 1, \dots, d$ and $j = 1, \dots, r$. Also, applying the Schwarz Inequality we have
$$E\big[\big(I^N(\Delta_\beta)^i \bullet m_L\big)^2_t\big] = E\Big[\Big|\int_{[0,t]} I^N(s)(\Delta_\beta)^i(s)\,m_L(ds)\Big|^2\Big] \le t\int_{[0,t]} E\big[I^N(s)\,|\Delta_\beta(s)|^2\big]\,m_L(ds)$$
for $i = 1, \dots, d$. Using these estimates in (4), we have
(5) $E\big[I^N(t)\,|X^{(1)}_t - X^{(2)}_t|^2\big] = \sum_{i=1}^d E\big[I^N(t)\,|X^{(1),i}_t - X^{(2),i}_t|^2\big] \le d(r+1)(r+t)\int_{[0,t]} E\big[I^N(s)\{|\Delta_\alpha(s)|^2 + |\Delta_\beta(s)|^2\}\big]\,m_L(ds).$
Let $T \in \mathbb{R}_+$ be arbitrarily fixed. By the Lipschitz condition we have for $s \in [0,t] \subset [0,T]$ and $\omega \in \Omega$
$$|\Delta_\alpha(s,\omega)|^2 + |\Delta_\beta(s,\omega)|^2 = |\alpha(s, X^{(1)}(\cdot,\omega)) - \alpha(s, X^{(2)}(\cdot,\omega))|^2 + |\beta(s, X^{(1)}(\cdot,\omega)) - \beta(s, X^{(2)}(\cdot,\omega))|^2$$
$$\le L_T\Big\{\int_{[0,s]} |X^{(1)}(u,\omega) - X^{(2)}(u,\omega)|^2\,\lambda(du) + |X^{(1)}(s,\omega) - X^{(2)}(s,\omega)|^2\Big\}.$$
Substituting this in (5) and recalling $I^N(s) \le I^N(u)$ for $u \in [0,s]$, we have
(6) $E\big[I^N(t)\,|X^{(1)}_t - X^{(2)}_t|^2\big] \le d(r+1)(r+T)L_T\int_{[0,t]}\Big\{\int_{[0,s]} E\big[I^N(u)\,|X^{(1)}(u) - X^{(2)}(u)|^2\big]\,\lambda(du)\Big\}\,m_L(ds)$
$\qquad + d(r+1)(r+T)L_T\int_{[0,t]} E\big[I^N(s)\,|X^{(1)}(s) - X^{(2)}(s)|^2\big]\,m_L(ds).$
Since $|X^{(1)}(u) - X^{(2)}(u)|^2 \le 2\{|X^{(1)}(u)|^2 + |X^{(2)}(u)|^2\} \le 2Z(u)$ and $I^N(u) = \mathbf{1}_{\Lambda^N_u}$ with $\Lambda^N_u = \{\omega \in \Omega : Z(u,\omega) \le N\}$, we have $I^N(u)\,|X^{(1)}(u) - X^{(2)}(u)|^2 \le 2N$. Thus
(7) $E\big[I^N(t)\,|X^{(1)}_t - X^{(2)}_t|^2\big] \le d(r+1)(r+T)L_T\,2N\{\lambda([0,T]) + 1\}\,T$ for $t \in [0,T]$.
Define a function $c$ on $[0,T]$ by setting
(8) $c(t) = \sup_{s \in [0,t]} E\big[I^N(s)\,|X^{(1)}(s) - X^{(2)}(s)|^2\big]$ for $t \in [0,T]$.
By (7), $c$ is bounded on $[0,T]$. From (6) and (8) we have
$$E\big[I^N(t)\,|X^{(1)}_t - X^{(2)}_t|^2\big] \le d(r+1)(r+T)L_T\int_{[0,t]}\{c(s)\lambda([0,s]) + c(s)\}\,m_L(ds) \le d(r+1)(r+T)L_T\{\lambda([0,T]) + 1\}\int_{[0,t]} c(s)\,m_L(ds)$$
for $t \in [0,T]$. Substituting this in the right side of (8), we have
$$c(t) \le d(r+1)(r+T)L_T\{\lambda([0,T]) + 1\}\int_{[0,t]} c(s)\,m_L(ds) \quad \text{for } t \in [0,T].$$
Applying Lemma 19.7 with $a = 0$ and $b = d(r+1)(r+T)L_T\{\lambda([0,T]) + 1\}$, we have $c(t) = 0$ for $t \in [0,T]$. Then by (8), $\sup_{s \in [0,t]} E[I^N(s)\,|X^{(1)}(s) - X^{(2)}(s)|^2] = 0$ for $t \in [0,T]$, in particular $E[I^N(t)\,|X^{(1)}(t) - X^{(2)}(t)|^2] = 0$ for $t \in [0,T]$, and then for every $t \in \mathbb{R}_+$ by the
arbitrariness of $T \in \mathbb{R}_+$. Thus for every $t \in \mathbb{R}_+$ we have $E[\mathbf{1}_{\Lambda^N_t}\,|X^{(1)}(t) - X^{(2)}(t)|^2] = 0$. This implies
$$|X^{(1)}(t) - X^{(2)}(t)|^2 = 0 \quad \text{a.e. on } \Lambda^N_t.$$
Thus $P(\Lambda^N_t \cap \{|X^{(1)}(t) - X^{(2)}(t)| > 0\}) = 0$ and then
(9) $P\{|X^{(1)}(t) - X^{(2)}(t)| > 0\} \le P((\Lambda^N_t)^c)$ for $N \in \mathbb{N}$.
Since $\Lambda^N_t \uparrow$ as $N \to \infty$, $(\Lambda^N_t)^c \downarrow$ and $P((\Lambda^N_t)^c) \downarrow$ as $N \to \infty$. To show $P((\Lambda^N_t)^c) \downarrow 0$ as $N \to \infty$, assume the contrary, that is, there exists $\varepsilon > 0$ such that $P((\Lambda^N_t)^c) \ge \varepsilon$ for all $N \in \mathbb{N}$. Then $P(\bigcap_{N \in \mathbb{N}}(\Lambda^N_t)^c) = \lim_{N \to \infty} P((\Lambda^N_t)^c) \ge \varepsilon$, that is,
$$P\Big\{\omega \in \Omega : \sup_{s \in [0,t]}\{|X^{(1)}(s,\omega)|^2 + |X^{(2)}(s,\omega)|^2\} > N \text{ for all } N\Big\} \ge \varepsilon,$$
in other words,
$$P\Big\{\omega \in \Omega : \sup_{s \in [0,t]}\{|X^{(1)}(s,\omega)|^2 + |X^{(2)}(s,\omega)|^2\} = \infty\Big\} \ge \varepsilon,$$
which is impossible since the continuity of $|X^{(1)}|^2 + |X^{(2)}|^2$ implies that for every $\omega \in \Omega$ we have $\sup_{s \in [0,t]}\{|X^{(1)}(s,\omega)|^2 + |X^{(2)}(s,\omega)|^2\} < \infty$. This shows that $P((\Lambda^N_t)^c) \downarrow 0$ as $N \to \infty$. Using this in (9), we have (1). ■
[II] Simultaneous Representation of Two Solutions on a Function Space

Our immediate goal here is to show that any two solutions of the stochastic differential equation can be represented on one function space with a common Brownian motion on it. We need this to show that pathwise uniqueness implies uniqueness in the sense of probability law. To be specific, if $(B^{(q)}, X^{(q)})$, $q = 1, 2$, are two solutions of the stochastic differential equation (1) in Definition 18.6 on two standard filtered spaces $(\Omega^{(q)}, \mathfrak{F}^{(q)}, \{\mathfrak{F}^{(q)}_t\}, P^{(q)})$, $q = 1, 2$, then we can introduce a probability measure and a filtration on the measurable space $(W^{r+2d}, \mathfrak{W}^{r+2d})$ in such a way that the three processes $W$ and $Y^{(q)}$, $q = 1, 2$, on the resulting standard filtered space, defined by $W(t,w) = v(t)$, $Y^{(1)}(t,w) = v'(t)$ and $Y^{(2)}(t,w) = v''(t)$ for $(t,w) \in \mathbb{R}_+ \times W^{r+2d}$ where $w = (v, v', v'')$ with $v \in W^r$ and $v', v'' \in W^d$, are such that $(W, Y^{(q)})$ is a solution of the stochastic differential equation for $q = 1, 2$ and $(W, Y^{(1)})$ and $(W, Y^{(2)})$ have identical probability distributions on $(W^{r+d}, \mathfrak{W}^{r+d})$.
Observation 19.9. Let $(B,X)$ be a solution of the stochastic differential equation (1) in Definition 18.6 on a standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Consider the probability space $(W^{r+d}, \mathfrak{W}^{r+d}, P_{(B,X)})$ where $P_{(B,X)}$ is the probability distribution of $(B,X)$ on $(W^{r+d}, \mathfrak{W}^{r+d})$. Let $\pi_0$ be the projection of $W^{r+d}$ onto $W^r$. The probability distribution $(P_{(B,X)})_{\pi_0}$ of $\pi_0$ on $(W^r, \mathfrak{W}^r)$ is given by
(1) $(P_{(B,X)})_{\pi_0} = P_{(B,X)} \circ \pi_0^{-1} = P \circ (B,X)^{-1} \circ \pi_0^{-1} = P \circ (\pi_0 \circ (B,X))^{-1} = P \circ B^{-1} = P_B = m^r_W$
by Definition 17.9.

Definition 19.10. On the $r$-dimensional Wiener space $(W^r, \mathfrak{W}^r, m^r_W)$, we define a filtered space $(W^r, \mathfrak{W}^{r,w}, \{\mathfrak{W}^{r,w}_t\}, m^r_W)$ by letting $\mathfrak{W}^{r,w}$ be the completion of $\mathfrak{W}^r$ with respect to the Wiener measure $m^r_W$ and letting $\mathfrak{W}^{r,w}_t = \sigma(\mathfrak{W}^r_t \cup \mathfrak{N})$ for every $t \in \mathbb{R}_+$, where $\mathfrak{N}$ is the collection of all the null sets in $(W^r, \mathfrak{W}^{r,w}, m^r_W)$.

Lemma 19.11. $\{\mathfrak{W}^{r,w}_t : t \in \mathbb{R}_+\}$ is a right-continuous filtration on $(W^r, \mathfrak{W}^{r,w}, m^r_W)$ and $(W^r, \mathfrak{W}^{r,w}, \{\mathfrak{W}^{r,w}_t\}, m^r_W)$ is a standard filtered space.
Proof. The stochastic process $W$ on $(W^r, \mathfrak{W}^r, m^r_W)$ defined by $W(t,w) = w(t)$ for $(t,w) \in \mathbb{R}_+ \times W^r$ is an $r$-dimensional null at 0 Brownian motion on $(W^r, \mathfrak{W}^r, m^r_W)$ by Theorem 17.10. Since the completion of $(W^r, \mathfrak{W}^r, m^r_W)$ to $(W^r, \mathfrak{W}^{r,w}, m^r_W)$ has no effect on the probability distributions of random vectors defined on $(W^r, \mathfrak{W}^r, m^r_W)$, our $W$ remains an $r$-dimensional null at 0 Brownian motion on $(W^r, \mathfrak{W}^{r,w}, m^r_W)$. Thus by Proposition 13.22, $\sigma(\sigma\{W_s : s \in [0,t]\} \cup \mathfrak{N})$ for $t \in \mathbb{R}_+$ is a right-continuous filtration on $(W^r, \mathfrak{W}^{r,w}, m^r_W)$. But according to Proposition 17.7, $\sigma\{W_s : s \in [0,t]\} = \mathfrak{W}^r_t$ for $t \in \mathbb{R}_+$. Thus $\mathfrak{W}^{r,w}_t = \sigma(\mathfrak{W}^r_t \cup \mathfrak{N})$ for $t \in \mathbb{R}_+$ is a right-continuous filtration. Since $\mathfrak{W}^{r,w}_t$ is augmented for every $t \in \mathbb{R}_+$, $(W^r, \mathfrak{W}^{r,w}, \{\mathfrak{W}^{r,w}_t\}, m^r_W)$ is a standard filtered space. ■

Let $(W^{r+d}, \mathfrak{W}^{r+d,*}, \{\mathfrak{W}^{r+d,*}_t\}, P_{(B,X)})$ be the standard filtered space generated by $P_{(B,X)}$ as in Definition 18.10. On the probability space $(W^{r+d}, \mathfrak{W}^{r+d,*}, P_{(B,X)})$, consider a regular image conditional probability given the projection $\pi_0$ of $W^{r+d}$ onto $W^r$, $P^{\pi_0}_{(B,X)}(A,v)$ for $(A,v) \in \mathfrak{W}^{r+d,*} \times W^r$. (See Definition C.11.) For brevity let us write $Q^v(A)$ for $P^{\pi_0}_{(B,X)}(A,v)$. Then
1° there exists a null set $N$ in $(W^r, \mathfrak{W}^{r,w}, m^r_W)$ such that $Q^v$ is a probability measure on $(W^{r+d}, \mathfrak{W}^{r+d,*})$ for $v \in N^c$,
2° $Q^{(\cdot)}(A)$ is a $\mathfrak{W}^{r,w}$-measurable function on $W^r$ for every $A \in \mathfrak{W}^{r+d,*}$,
3° for every $A \in \mathfrak{W}^{r+d,*}$ and $A_0 \in \mathfrak{W}^{r,w}$ we have
$$P_{(B,X)}(A \cap \pi_0^{-1}(A_0)) = \int_{A_0} Q^v(A)\,m^r_W(dv).$$
In particular if $A \in \mathfrak{W}^{r+d,*}$ is of the type $A = W^r \times A_1$ with $A_1 \in \mathfrak{W}^d$, then $A \cap \pi_0^{-1}(A_0) = (W^r \times A_1) \cap (A_0 \times W^d) = A_0 \times A_1$. Thus by 3° we have
4° for $A_0 \in \mathfrak{W}^{r,w}$ and $A_1 \in \mathfrak{W}^d$,
$$P_{(B,X)}(A_0 \times A_1) = \int_{A_0} Q^v(W^r \times A_1)\,m^r_W(dv).$$
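Properties 3° and 4° are the disintegration identity characterizing a regular conditional probability. On a finite product space the identity can be verified by direct summation; in the Python sketch below (a toy stand-in with hypothetical names: a discrete substitute for $(W^{r+d}, P_{(B,X)})$, with $\pi_0$ the first-coordinate projection and $m$ the marginal playing the role of $m^r_W$) the function `check` tests $P(A \cap \pi_0^{-1}(A_0)) = \sum_{v \in A_0} Q^v(A)\,m(v)$.

```python
from itertools import product

V = ["v1", "v2"]                      # stand-in for W^r
X = ["x1", "x2", "x3"]                # stand-in for W^d
Omega = list(product(V, X))           # stand-in for W^{r+d}

# A joint probability measure P on Omega; m is its first marginal
# (playing the role of the Wiener measure, P o pi_0^{-1}).
P = {("v1", "x1"): 0.10, ("v1", "x2"): 0.25, ("v1", "x3"): 0.15,
     ("v2", "x1"): 0.20, ("v2", "x2"): 0.05, ("v2", "x3"): 0.25}
m = {v: sum(P[(v, x)] for x in X) for v in V}

def Q(v, A):
    """Regular conditional probability Q^v(A) = P(A | pi_0 = v)."""
    return sum(P[(u, x)] for (u, x) in A if u == v) / m[v]

def check(A, A0):
    """Verify P(A ∩ pi_0^{-1}(A0)) = sum_{v in A0} Q^v(A) m(v), i.e. property 3°."""
    lhs = sum(P[w] for w in A if w[0] in A0)
    rhs = sum(Q(v, A) * m[v] for v in A0)
    assert abs(lhs - rhs) < 1e-12

check({("v1", "x1"), ("v2", "x3")}, {"v1"})   # a generic A and A0
check(set(Omega), {"v1", "v2"})               # total mass
check({(v, "x2") for v in V}, {"v2"})         # product-type A = W^r x A_1, i.e. 4°
```

In the discrete setting $Q^v$ is simply elementary conditioning on $\{\pi_0 = v\}$; the measure-theoretic content of 1°–3° is that such a family $\{Q^v\}$ still exists, up to a null set of $v$, when the fibers have probability zero.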
Let an element of $W^{r+d}$ be arbitrarily chosen and redefine $Q^v(\cdot)$ to be the unit mass at that arbitrarily chosen element for $v \in N$. With this redefinition $Q^v(\cdot)$ is a probability measure on $\mathfrak{W}^{r+d,*}$ for every $v \in W^r$. Property 2° is unaffected by this redefinition.

Lemma 19.12. For every $A_1 \in \mathfrak{W}^d$, $Q^{(\cdot)}(W^r \times A_1)$ is a $\mathfrak{W}^{r,w}$-measurable function on $W^r$. Furthermore if $A_1 \in \mathfrak{W}^d_t$ for some $t \in \mathbb{R}_+$, then $Q^{(\cdot)}(W^r \times A_1)$ is a $\mathfrak{W}^{r,w}_t$-measurable function on $W^r$.

Proof. If $A_1 \in \mathfrak{W}^d$, then $W^r \times A_1 \in \mathfrak{W}^{r+d} \subset \mathfrak{W}^{r+d,*}$ so that by 2° above $Q^{(\cdot)}(W^r \times A_1)$ is a $\mathfrak{W}^{r,w}$-measurable function on $W^r$. Suppose $A_1 \in \mathfrak{W}^d_t$ for some $t \in \mathbb{R}_+$. Then $A_1 \in \mathfrak{W}^d$ and thus $Q^{(\cdot)}(W^r \times A_1)$ is a $\mathfrak{W}^{r,w}$-measurable function on $W^r$. The projection $\pi_0$ of $W^{r+d}$ onto $W^r$ is a $\mathfrak{W}^{r,w} \times \mathfrak{W}^d/\mathfrak{W}^{r,w}$-measurable mapping of $W^{r+d}$ into $W^r$. Thus the composite mapping $Q^{\pi_0(\cdot)}(W^r \times A_1)$ is a $\mathfrak{W}^{r,w} \times \mathfrak{W}^d$-measurable function on $W^{r+d}$. By 4° above, (1) in Observation 19.9 and the Image Probability Law we have
(1) $P_{(B,X)}((W^r \times A_1) \cap (A_0 \times W^d)) = \int_{A_0} Q^v(W^r \times A_1)\,m^r_W(dv) = \int_{A_0 \times W^d} Q^{\pi_0(w)}(W^r \times A_1)\,P_{(B,X)}(dw)$
for $A_0 \in \mathfrak{W}^{r,w}$ and $A_1 \in \mathfrak{W}^d_t$. In the probability space $(W^{r+d}, \mathfrak{W}^{r+d,*}, P_{(B,X)})$ consider the conditional probability $P_{(B,X)}(W^r \times A_1\,|\,\mathfrak{W}^{r,w} \times W^d)$. By (1),
(2) $Q^{\pi_0(\cdot)}(W^r \times A_1) \in P_{(B,X)}(W^r \times A_1\,|\,\mathfrak{W}^{r,w} \times W^d)$,
that is, $Q^{\pi_0(\cdot)}(W^r \times A_1)$ is a version of $P_{(B,X)}(W^r \times A_1\,|\,\mathfrak{W}^{r,w} \times W^d)$.
Let us show that for our $A_1 \in \mathfrak{W}^d_t$ we have
(3) $P_{(B,X)}(W^r \times A_1\,|\,\mathfrak{W}^{r,w}_t \times W^d) = P_{(B,X)}(W^r \times A_1\,|\,\mathfrak{W}^{r,w} \times W^d)$.
Now if $\mathfrak{A}_1$, $\mathfrak{A}_2$, and $\mathfrak{A}_3$ are sub-$\sigma$-algebras of $\mathfrak{F}$ in a probability space $(\Omega, \mathfrak{F}, P)$ such that $\sigma(\mathfrak{A}_1 \cup \mathfrak{A}_2)$ and $\mathfrak{A}_3$ are independent with respect to $P$, then
(4) $P(A\,|\,\mathfrak{A}_2) = P(A\,|\,\sigma(\mathfrak{A}_2 \cup \mathfrak{A}_3))$ for $A \in \mathfrak{A}_1$.
=
{Wt,,-Wt,
: < < t ' < i" < oo} U 9t)) x V?d
W'wxWd
This shows that (3) is a particular case of (4). Thus $Q^{\pi_0(\cdot)}(W^r \times A_1)$, which is a version of $P_{(B,x)}(W^r \times A_1 \mid \mathfrak{W}^{r,w} \times \mathfrak{W}^d)$, is also a version of $P_{(B,x)}(W^r \times A_1 \mid \mathfrak{W}^{r,w}_t \times \mathfrak{W}^d)$, and the Lemma follows. ■

Observation 19.13. For $q = 1, 2$, let

(1) $Q^{v,(q)}(\Lambda)$, for $v \in W^r$ and $\Lambda \in \mathfrak{W}^{r+d}$, be determined from $P_{(B^{(q)}, X^{(q)})}$ as in Observation 19.9.

According to (1) of Observation 19.9, the probability distribution $P_{(B^{(q)}, X^{(q)})} \circ \pi_0^{-1}$ of $\pi_0$ on $(W^r, \mathfrak{W}^r)$ is the Wiener measure $m_w^r$ on $(W^r, \mathfrak{W}^r)$. By our convention, $Q^{v,(q)}$ is a probability measure on $\mathfrak{W}^{r+d}$ for every $v \in W^r$. If we let

(2) $R^{v,(q)}(A_1) = Q^{v,(q)}(W^r \times A_1)$ for $A_1 \in \mathfrak{W}^d$,
§19. EXISTENCE AND UNIQUENESS OF SOLUTIONS   409
then $R^{v,(q)}$ is a probability measure on $(W^d, \mathfrak{W}^d)$ for every $v \in W^r$.

Lemma 19.14. There exists a probability measure $Q$ on $(W^{r+2d}, \mathfrak{W}^{r+2d})$ such that for every $A_0 \times A_1 \times A_2 \in \mathfrak{W}^{r,w} \times \mathfrak{W}^{d,(1)} \times \mathfrak{W}^{d,(2)}$ we have

(1) $Q(A_0 \times A_1 \times A_2) = \int_{A_0} R^{v,(1)}(A_1)\, R^{v,(2)}(A_2)\, m_w^r(dv)$.

Proof. For every $v \in W^r$, define a set function $\mu(\cdot, v)$ on $\mathfrak{W}^{d,(1)} \times \mathfrak{W}^{d,(2)}$ by setting

(2) $\mu(A_1 \times A_2, v) = R^{v,(1)}(A_1)\, R^{v,(2)}(A_2)$ for $A_1 \times A_2 \in \mathfrak{W}^{d,(1)} \times \mathfrak{W}^{d,(2)}$.
Clearly $\mu(\emptyset, v) = 0$, and it can be verified readily that $\mu(\cdot, v)$ is a countably additive set function on the semialgebra $\mathfrak{W}^{d,(1)} \times \mathfrak{W}^{d,(2)}$ of subsets of $W^{d,(1)} \times W^{d,(2)}$. The finiteness of the two measures $R^{v,(q)} = Q^{v,(q)}(W^r \times \cdot)$ for $q = 1, 2$ then implies that $\mu(\cdot, v)$ can be extended uniquely to a probability measure on $(W^{d,(1)} \times W^{d,(2)}, \sigma(\mathfrak{W}^{d,(1)} \times \mathfrak{W}^{d,(2)}))$ for every $v \in W^r$. Since $Q^{v,(q)}(\Lambda)$ is a $\mathfrak{W}^{r,w}$-measurable function of $v$ on $W^r$ for every $\Lambda \in \mathfrak{W}^{r+d}$, the function $R^{v,(q)}(A_1)$ defined by (2) of Observation 19.13 is a $\mathfrak{W}^{r,w}$-measurable function of $v$ on $W^r$ for every $A_1 \in \mathfrak{W}^d$. Thus $\mu(A_1 \times A_2, \cdot)$ is a $\mathfrak{W}^{r,w}$-measurable function on $W^r$ for every $A_1 \times A_2 \in \mathfrak{W}^{d,(1)} \times \mathfrak{W}^{d,(2)}$. To show that $\mu(E, \cdot)$ is a $\mathfrak{W}^{r,w}$-measurable function for every $E \in \sigma(\mathfrak{W}^{d,(1)} \times \mathfrak{W}^{d,(2)})$, let $\mathfrak{G}$ be the collection of all members $E$ of $\sigma(\mathfrak{W}^{d,(1)} \times \mathfrak{W}^{d,(2)})$ such that $\mu(E, \cdot)$ is $\mathfrak{W}^{r,w}$-measurable. Clearly $W^{d,(1)} \times W^{d,(2)}$ is in $\mathfrak{G}$. Let $E_n \in \mathfrak{G}$ for $n \in \mathbb{N}$ with $E_n \uparrow$. The fact that $\mu(\cdot, v)$ is a measure for every $v \in W^r$ implies that $\mu(\lim_{n \to \infty} E_n, v) = \lim_{n \to \infty} \mu(E_n, v)$ for every $v \in W^r$, and thus the $\mathfrak{W}^{r,w}$-measurability of $\mu(E_n, \cdot)$ for every $n \in \mathbb{N}$ implies the $\mathfrak{W}^{r,w}$-measurability of $\mu(\lim_{n \to \infty} E_n, \cdot)$. Thus $\lim_{n \to \infty} E_n \in \mathfrak{G}$. Similarly, if $E, F \in \mathfrak{G}$ and $E \subset F$, then $F - E \in \mathfrak{G}$. This shows that $\mathfrak{G}$ is a d-class of subsets of $W^{d,(1)} \times W^{d,(2)}$. As we have pointed out above, $\mathfrak{G}$ contains the $\pi$-class $\mathfrak{W}^{d,(1)} \times \mathfrak{W}^{d,(2)}$. Thus it contains $\sigma(\mathfrak{W}^{d,(1)} \times \mathfrak{W}^{d,(2)})$ by Theorem 1.7, and $\mu(E, \cdot)$ is a $\mathfrak{W}^{r,w}$-measurable function on $W^r$ for every $E \in \sigma(\mathfrak{W}^{d,(1)} \times \mathfrak{W}^{d,(2)})$. Therefore the set function defined by

$$Q(A_0 \times E) = \int_{A_0} \mu(E, v)\, m_w^r(dv) \quad \text{for } A_0 \times E \in \mathfrak{W}^{r,w} \times \sigma(\mathfrak{W}^{d,(1)} \times \mathfrak{W}^{d,(2)})$$
can be extended to be a measure on $\mathfrak{W}^{r+2d}$; its extension is the probability measure $Q$ of the Lemma. ■

Definition 19.15. Let $Q$ also denote the extension of the measure of Lemma 19.14 to the augmented $\sigma$-algebra $\mathfrak{W}^{r+2d,*}$, and on the standard filtered space $(W^{r+2d}, \mathfrak{W}^{r+2d,*}, \{\mathfrak{W}^{r+2d,*}_t\}, Q)$ define the coordinate processes $W(t, w) = (\pi_0 \circ w)(t)$, $Y^{(1)}(t, w) = (\pi_1 \circ w)(t)$, and $Y^{(2)}(t, w) = (\pi_2 \circ w)(t)$ for $(t, w) \in \mathbb{R}_+ \times W^{r+2d}$.
Proposition 19.16. Let $W$, $Y^{(1)}$, and $Y^{(2)}$ be as in Definition 19.15. Then $W$ is an $r$-dimensional $\{\mathfrak{W}^{r+2d,*}_t\}$-adapted null at 0 Brownian motion on the standard filtered space $(W^{r+2d}, \mathfrak{W}^{r+2d,*}, \{\mathfrak{W}^{r+2d,*}_t\}, Q)$, $(W, Y^{(q)})$ and $(B^{(q)}, X^{(q)})$ have identical probability distributions on $(W^{r+d}, \mathfrak{W}^{r+d})$, and $(W, Y^{(q)})$ is a solution of the stochastic differential equation (1) in Definition 18.6, for $q = 1, 2$.

Proof. Clearly $W$ is an $r$-dimensional continuous $\{\mathfrak{W}^{r+2d}_t\}$-adapted, and hence $\{\mathfrak{W}^{r+2d,*}_t\}$-adapted, process on $(W^{r+2d}, \mathfrak{W}^{r+2d,*}, \{\mathfrak{W}^{r+2d,*}_t\}, Q)$. To show that $W$ is null at 0, let $Z = \{v \in W^r : v(0) = 0\}$. Then by (1) of Lemma 19.14 we have

$$Q\{W_0 = 0\} = Q(Z \times W^{d,(1)} \times W^{d,(2)}) = \int_Z R^{v,(1)}(W^{d,(1)})\, R^{v,(2)}(W^{d,(2)})\, m_w^r(dv) = m_w^r(Z).$$

Now the stochastic process $V$ defined on the probability space $(W^r, \mathfrak{W}^{r,w}, m_w^r)$ by $V(t, v) = v(t)$ for $(t, v) \in \mathbb{R}_+ \times W^r$ is a null at 0 Brownian motion, so that $m_w^r(Z) = 1$. Thus $Q\{W_0 = 0\} = 1$. This shows that $W$ is null at 0. To show that $W$ is an $r$-dimensional $\{\mathfrak{W}^{r+2d,*}_t\}$-adapted Brownian motion, it suffices, according to Theorem 13.26, to verify that for every $s, t \in \mathbb{R}_+$ such that $s < t$ and every $y \in \mathbb{R}^r$ we have

(1) $\int_A e^{i \langle y,\, W_t - W_s \rangle}\, dQ = Q(A)\, e^{-|y|^2 (t-s)/2}$ for $A \in \mathfrak{W}^{r+2d,*}_s$.
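The identity behind criterion (1), namely $E[e^{iy(W_t - W_s)}] = e^{-y^2(t-s)/2}$ for a Brownian increment, can be spot-checked by Monte Carlo. This is only an illustration; the sample size, seed, and parameter values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
s, t, y = 0.5, 1.5, 1.2
n = 200_000

# W_t - W_s is N(0, t - s) for a standard 1-dimensional Brownian motion,
# so its characteristic function is exp(-y**2 * (t - s) / 2).
incr = rng.normal(0.0, np.sqrt(t - s), size=n)
empirical = np.exp(1j * y * incr).mean()
exact = np.exp(-y**2 * (t - s) / 2)
print(abs(empirical - exact) < 0.01)  # -> True
```

Taking $A$ smaller than the whole space in (1) additionally encodes independence of the increment from the past, which is what the factorization in the proof below exploits.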
Consider first the case where $A = A_0 \times A_1 \times A_2 \in \mathfrak{W}^{r,w}_s \times \mathfrak{W}^{d,(1)}_s \times \mathfrak{W}^{d,(2)}_s$. In this case we have

$$\int_A e^{i \langle y,\, W_t - W_s \rangle}\, dQ = \int_{W^{r+2d}} e^{i \langle y,\, W_t - W_s \rangle}\, \mathbf{1}_{A_0 \times A_1 \times A_2}\, dQ = \int_{W^r} e^{i \langle y,\, v(t) - v(s) \rangle}\, \Phi(v)\, m_w^r(dv),$$

where

$$\Phi(v) = \mathbf{1}_{A_0}(v)\, R^{v,(1)}(A_1)\, R^{v,(2)}(A_2) = \mathbf{1}_{A_0}(v)\, Q^{v,(1)}(W^r \times A_1)\, Q^{v,(2)}(W^r \times A_2)$$

for $v \in W^r$. Since $A_0 \in \mathfrak{W}^{r,w}_s$, $\mathbf{1}_{A_0}$ is a $\mathfrak{W}^{r,w}_s$-measurable function on $W^r$. Since $A_1, A_2 \in \mathfrak{W}^d_s$, $Q^{v,(1)}(W^r \times A_1)$ and $Q^{v,(2)}(W^r \times A_2)$ are $\mathfrak{W}^{r,w}_s$-measurable functions of $v$ by Lemma 19.12. Thus $\Phi$ is a $\mathfrak{W}^{r,w}_s$-measurable function on $W^r$. Now if we define a process $V$ on $(W^r, \mathfrak{W}^{r,w}, m_w^r)$ by setting $V(t, v) = v(t)$ for $(t, v) \in \mathbb{R}_+ \times W^r$, then $V$ is an $r$-dimensional $\{\mathfrak{W}^{r,w}_t\}$-adapted Brownian motion. Thus for any $s, t \in \mathbb{R}_+$ with $s < t$, the $\sigma$-algebra $\mathfrak{W}^{r,w}_s$ and the random vector $v(t) - v(s)$ are independent in the probability space $(W^r, \mathfrak{W}^{r,w}, m_w^r)$ by Definition 13.17. The $\mathfrak{W}^{r,w}_s$-measurability of $\Phi$ then implies the independence of the two random variables $\Phi$ and $e^{i \langle y,\, v(t) - v(s) \rangle}$. Thus

$$\int_{W^r} e^{i \langle y,\, v(t) - v(s) \rangle}\, \Phi(v)\, m_w^r(dv) = \Big\{\int_{W^r} e^{i \langle y,\, v(t) - v(s) \rangle}\, m_w^r(dv)\Big\} \Big\{\int_{W^r} \Phi(v)\, m_w^r(dv)\Big\} = e^{-|y|^2 (t-s)/2}\, Q(A_0 \times A_1 \times A_2) = e^{-|y|^2 (t-s)/2}\, Q(A)$$
by Lemma 19.14. This verifies (1) for the case $A \in \mathfrak{W}^{r,w}_s \times \mathfrak{W}^{d,(1)}_s \times \mathfrak{W}^{d,(2)}_s$. By applying Corollary 1.8, we then have (1) for $A \in \mathfrak{W}^{r+2d}_s = \sigma(\mathfrak{W}^{r,w}_s \times \mathfrak{W}^{d,(1)}_s \times \mathfrak{W}^{d,(2)}_s)$. By the same argument as in the proof of Lemma 18.11, we then have (1) holding for $A \in \mathfrak{W}^{r+2d,*}_s$. This completes the verification that $W$ is an $r$-dimensional $\{\mathfrak{W}^{r+2d,*}_t\}$-adapted null at 0 Brownian motion.

Let us show next that $(W, Y^{(q)})$ and $(B^{(q)}, X^{(q)})$ have identical probability distributions on $(W^{r+d}, \mathfrak{W}^{r+d})$, that is, $Q \circ (W, Y^{(q)})^{-1}(E) = P_{(B^{(q)}, X^{(q)})}(E)$ for $E \in \mathfrak{W}^{r+d}$. For $A_0 \times A_1 \in \mathfrak{W}^{r,w} \times \mathfrak{W}^d$ we have, say for $q = 1$,

$$Q \circ (W, Y^{(1)})^{-1}(A_0 \times A_1) = Q\big(W^{-1}(A_0) \cap (Y^{(1)})^{-1}(A_1)\big) = Q\big((A_0 \times W^d \times W^d) \cap (W^r \times A_1 \times W^d)\big) = Q(A_0 \times A_1 \times W^d)$$
$$= \int_{A_0} Q^{v,(1)}(W^r \times A_1)\, Q^{v,(2)}(W^r \times W^d)\, m_w^r(dv) = \int_{A_0} Q^{v,(1)}(W^r \times A_1)\, m_w^r(dv) = P_{(B^{(1)}, X^{(1)})}(A_0 \times A_1),$$
by 4° of Observation 19.9. This shows that $Q \circ (W, Y^{(1)})^{-1}$ and $P_{(B^{(1)}, X^{(1)})}$ agree on $\mathfrak{W}^{r,w} \times \mathfrak{W}^d$. Then by Corollary 1.8 they agree on $\mathfrak{W}^{r+d} = \sigma(\mathfrak{W}^{r,w} \times \mathfrak{W}^d)$, that is, $(W, Y^{(1)})$ and $(B^{(1)}, X^{(1)})$ have identical probability distributions on $(W^{r+d}, \mathfrak{W}^{r+d})$; likewise for $q = 2$. Finally, since $(W, Y^{(q)})$ and the solution $(B^{(q)}, X^{(q)})$ have identical probability distributions, $(W, Y^{(q)})$ is a solution of the stochastic differential equation by Theorem 18.15. ■

Lemma 19.17. Let $S$ be a complete separable metric space with metric $\rho$ and Borel $\sigma$-algebra $\mathfrak{B}_S$, let $\mu$ and $\nu$ be probability measures on $(S, \mathfrak{B}_S)$, and let $D$ be the diagonal in $S \times S$, that is, $D = \{(s_1, s_2) \in S \times S : s_1 = s_2\}$. If $(\mu \times \nu)(D) = 1$, then $\mu = \nu$ on $\mathfrak{B}_S$ and furthermore there exists a unique $s_0 \in S$ such that $\mu(\{s_0\}) = \nu(\{s_0\}) = 1$.

Proof. Let $S \times S$ be topologized by the product topology and let $\mathfrak{B}_{S \times S}$ be the Borel $\sigma$-algebra of subsets of $S \times S$. Then $\rho(s_1, s_2)$ for $(s_1, s_2) \in S \times S$ is a continuous function on $S \times S$ and is thus $\mathfrak{B}_{S \times S}$-measurable. This implies that the diagonal $D$, which is the subset of $S \times S$ on which the $\mathfrak{B}_{S \times S}$-measurable function $\rho$ is equal to 0, is a member of $\mathfrak{B}_{S \times S}$. Since $S$ is a separable metric space it satisfies the second axiom of countability, and therefore $\mathfrak{B}_{S \times S} = \sigma(\mathfrak{B}_S \times \mathfrak{B}_S)$. For each $n \in \mathbb{N}$, cover $S$ by countably many closed spheres of radius at most $n^{-1}$; any two such spheres of $\mu$-measure 1 must intersect, for
otherwise we would have $\mu(S) \ge 2$. Let $K_n$ be the intersection of all those closed spheres with $\mu$-measure 1. Then $\mu(K_n) = 1$ and the radius $r(K_n) \le n^{-1}$. Consider the sequence of closed sets $K_n$, $n \in \mathbb{N}$. For the same reason as above we have $K_n \cap K_m \neq \emptyset$ for $n \neq m$. If we let $C_n = \bigcap_{m=1}^n K_m$, then we have a decreasing sequence of closed sets $C_n$, $n \in \mathbb{N}$, with $\mu(C_n) = 1$ and $r(C_n) \le n^{-1}$ for every $n \in \mathbb{N}$. Since $S$ is a complete metric space and $r(C_n) \downarrow 0$ as $n \to \infty$, there exists $s_0 \in S$ such that $\bigcap_{n \in \mathbb{N}} C_n = \{s_0\}$. Then $\mu(\{s_0\}) = \lim_{n \to \infty} \mu(C_n) = 1$. Since $\mu(S) = 1$, such $s_0 \in S$ is unique. ■

Theorem 19.18. Let $x \in \mathbb{R}^d$ be fixed. Suppose that the initial value problem

$$dX_t = \alpha(t, X)\, dB(t) + \beta(t, X)\, dt, \qquad X_0 = x,$$

for the stochastic differential equation (1) in Definition 18.6 has a solution on some standard filtered space, and suppose that the solution of the stochastic differential equation is pathwise unique under deterministic initial conditions. Then there exists a mapping $F_x$ of $W^r$ into $W^d$, unique up to a null set in $(W^r, \mathfrak{W}^{r,w}, m_w^r)$, such that

1°. $F_x$ is $\mathfrak{W}^{r,w}_t/\mathfrak{W}^d_t$-measurable for every $t \in \mathbb{R}_+$;

2°. if $(B, X)$ is a solution of the initial value problem on some standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, then there exists a null set $N_0$ in $(W^r, \mathfrak{W}^{r,w}, m_w^r)$ such that

$$X(\cdot, \omega) = F_x[B(\cdot, \omega)] \quad \text{for } \omega \in \Omega \text{ with } B(\cdot, \omega) \in N_0^c,$$

and therefore there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that the equality holds for $\omega \in \Lambda^c$.

Conversely, let $(\Omega, \mathfrak{F}, P)$ be an arbitrary complete probability space on which an $r$-dimensional null at 0 Brownian motion $B$ exists, let $(\Omega, \mathfrak{F}, \{\mathfrak{F}^B_t\}, P)$ be the standard filtered space generated by $B$, and let $X$ be the continuous $d$-dimensional process on $(\Omega, \mathfrak{F}, \{\mathfrak{F}^B_t\}, P)$ defined by

$$X(\cdot, \omega) = F_x[B(\cdot, \omega)] \quad \text{for } \omega \in \Omega.$$

Then $(B, X)$ is a solution of the initial value problem on $(\Omega, \mathfrak{F}, \{\mathfrak{F}^B_t\}, P)$.

Proof. Let $(B^{(q)}, X^{(q)})$ be a solution of the initial value problem on a standard filtered space $(\Omega^{(q)}, \mathfrak{F}^{(q)}, \{\mathfrak{F}^{(q)}_t\}, P^{(q)})$ for $q = 1, 2$. Let $P_{(B^{(q)}, X^{(q)})}$ be the probability distribution of
$(B^{(q)}, X^{(q)})$ on $(W^{r+d}, \mathfrak{W}^{r+d})$ for $q = 1, 2$. Let $Q^{(\cdot),(q)}$ and $R^{(\cdot),(q)}$ be as in Observation 19.13, and let $(W^{r+2d}, \mathfrak{W}^{r+2d,*}, \{\mathfrak{W}^{r+2d,*}_t\}, Q)$, $W$, $Y^{(1)}$, $Y^{(2)}$ be as in Definition 19.15. According to Proposition 19.16, $(W, Y^{(q)})$ is a solution of the stochastic differential equation on $(W^{r+2d}, \mathfrak{W}^{r+2d,*}, \{\mathfrak{W}^{r+2d,*}_t\}, Q)$, and its probability distribution on $(W^{r+d}, \mathfrak{W}^{r+d})$ is identical to that of $(B^{(q)}, X^{(q)})$ for each of $q = 1, 2$. Since $X^{(q)}_0 = x$ a.e. on $(\Omega^{(q)}, \mathfrak{F}^{(q)}, P^{(q)})$, we have $Y^{(q)}_0 = x$ a.e. on $(W^{r+2d}, \mathfrak{W}^{r+2d}, Q)$ for $q = 1, 2$. Then, since $(W, Y^{(1)})$ and $(W, Y^{(2)})$ are two solutions of the stochastic differential equation on the same standard filtered space $(W^{r+2d}, \mathfrak{W}^{r+2d,*}, \{\mathfrak{W}^{r+2d,*}_t\}, Q)$, pathwise uniqueness of the solution under deterministic initial conditions implies that there exists a null set $N$ in $(W^{r+2d}, \mathfrak{W}^{r+2d,*}, Q)$ such that $Y^{(1)}(\cdot, w) = Y^{(2)}(\cdot, w)$, that is, $\pi_1(w) = \pi_2(w)$, for $w \in N^c$. Thus

$$1 = Q(N^c) = Q\big(\{w \in W^{r+2d} : \pi_1(w) = \pi_2(w)\}\big) = \int_{W^r} \big(R^{\pi_0(w),(1)} \times R^{\pi_0(w),(2)}\big)(D_{\pi_0(w)})\, m_w^r\big(d(\pi_0(w))\big),$$

where $D_{\pi_0(w)} = \{(\pi_1(w), \pi_2(w)) \in W^{d,(1)} \times W^{d,(2)} : \pi_1(w) = \pi_2(w)\}$ is the diagonal in $W^{d,(1)} \times W^{d,(2)}$. Since the integrand in the last integral is nonnegative and bounded by 1, the fact that the integral is equal to 1 implies that there exists a null set $N_0$ in $(W^r, \mathfrak{W}^{r,w}, m_w^r)$ such that for $\pi_0(w) \in N_0^c$ we have

(1) $\big(R^{\pi_0(w),(1)} \times R^{\pi_0(w),(2)}\big)(D_{\pi_0(w)}) = 1$.
If we let $N_1 = \{w \in W^{r+2d} : \pi_0(w) \in N_0\}$, then

$$Q(N_1) = \int_{N_0} R^{\pi_0(w),(1)}(W^{d,(1)})\, R^{\pi_0(w),(2)}(W^{d,(2)})\, m_w^r\big(d(\pi_0(w))\big) = \int_{N_0} m_w^r\big(d(\pi_0(w))\big) = 0,$$

so that $N_1$ is a null set in $(W^{r+2d}, \mathfrak{W}^{r+2d}, Q)$ and (1) holds for $w \in N_1^c$. Thus by Lemma 19.17, for every $w \in N_1^c$ we have

(2) $R^{\pi_0(w),(1)} = R^{\pi_0(w),(2)}$ on $\mathfrak{W}^d$, $\pi_1(w) = \pi_2(w)$, and $R^{\pi_0(w),(1)}(\{\pi_1(w)\}) = R^{\pi_0(w),(2)}(\{\pi_2(w)\}) = 1$.

Then for every $v \in N_0^c$ there exists a unique $\psi(v) \in W^d$ such that $R^{v,(1)}(\{\psi(v)\}) = 1$. Define a mapping $F_x$ of $W^r$ into $W^d$ by setting

(3) $F_x(v) = \psi(v)$ for $v \in N_0^c$, and $F_x(v) = 0$ for $v \in N_0$.
Since only one singleton can have probability measure 1, the last equality in (2) implies that

(4) $F_x(\pi_0(w)) = \pi_1(w) = \pi_2(w)$ for $w \in N_1^c$, and $F_x(\pi_0(w)) = 0$ for $w \in N_1$.

Let us verify that the mapping $F_x$ defined by (3) satisfies conditions 1° and 2°. To show that $F_x$ is $\mathfrak{W}^{r,w}_t/\mathfrak{W}^d_t$-measurable for every $t \in \mathbb{R}_+$, note that since $\mathfrak{W}^{r,w}_t$ contains every null set in $(W^r, \mathfrak{W}^{r,w}, m_w^r)$, and in particular $N_0$, and since $F_x(v) = 0$ for $v \in N_0$, it suffices to show that the restriction of $F_x$ to $N_0^c$ is $\mathfrak{W}^{r,w}_t/\mathfrak{W}^d_t$-measurable. Now for every $A_1 \in \mathfrak{W}^d_t$ we have

$$F_x^{-1}(A_1) \cap N_0^c = \{v \in N_0^c : \psi(v) \in A_1\} = \{v \in N_0^c : R^{v,(1)}(A_1) = 1\} = \{v \in N_0^c : Q^{v,(1)}(W^r \times A_1) = 1\} \in \mathfrak{W}^{r,w}_t,$$
since according to Lemma 19.12, $Q^{v,(1)}(W^r \times A_1)$ is a $\mathfrak{W}^{r,w}_t$-measurable, and hence $\mathfrak{W}^{r,w}$-measurable, function of $v$. To verify 2°, let $(B, X)$ be a solution of the initial value problem on a standard filtered space, say our $(B^{(1)}, X^{(1)})$ on $(\Omega^{(1)}, \mathfrak{F}^{(1)}, \{\mathfrak{F}^{(1)}_t\}, P^{(1)})$. Recall that the probability distribution $P_{B^{(1)}}$ of $B^{(1)}$ on $(W^r, \mathfrak{W}^r)$ is given by $m_w^r$ by Definition 17.9. Let $\Lambda = (B^{(1)})^{-1}(N_0)$. Then

$$P^{(1)}(\Lambda) = P^{(1)}\big((B^{(1)})^{-1}(N_0)\big) = P_{B^{(1)}}(N_0) = m_w^r(N_0) = 0,$$

that is, $\Lambda$ is a null set in $(\Omega^{(1)}, \mathfrak{F}^{(1)}, P^{(1)})$. For $\omega \in \Lambda^c$,

$$F_x\big(B^{(1)}(\cdot, \omega)\big) = F_x\big(\pi_0 \circ (B^{(1)}, X^{(1)})(\omega)\big) = \pi_1 \circ (B^{(1)}, X^{(1)})(\omega) = X^{(1)}(\cdot, \omega),$$

where the second equality is by (4). Similarly for $\omega \in \Lambda$ we have $F_x(B^{(1)}(\cdot, \omega)) = 0$ by (4). Let $(\Omega, \mathfrak{F}, P)$ be an arbitrary complete probability space on which an $r$-dimensional null at 0 Brownian motion $B$ exists. Let $(\Omega, \mathfrak{F}, \{\mathfrak{F}^B_t\}, P)$ be the standard filtered space generated by $B$ as in Theorem 13.23. Then $B$ is an $\mathfrak{F}/\mathfrak{W}^r$-measurable mapping of $\Omega$ into $W^r$, and furthermore it is $\mathfrak{F}^B_t/\mathfrak{W}^r_t$-measurable for every $t \in \mathbb{R}_+$ by Theorem 17.8. The probability distribution of $B$ on $(W^r, \mathfrak{W}^r)$ is given by $m_w^r$ by Definition 17.9. Consider $\mathfrak{W}^{r,w}_t$ as in Definition 19.10. If $E_0$ is a null set in $(W^r, \mathfrak{W}^r, m_w^r)$, then $0 = m_w^r(E_0) = P \circ B^{-1}(E_0)$, so that $B^{-1}(E_0)$ is a null set in $(\Omega, \mathfrak{F}, P)$.
If $E_1 \subset E_0$, then, as a subset of the null set $B^{-1}(E_0)$ in the complete probability space $(\Omega, \mathfrak{F}, P)$, $B^{-1}(E_1)$ is a null set in $(\Omega, \mathfrak{F}, P)$, and thus $B^{-1}(E_1) \in \mathfrak{F}^B_t$ for every $t \in \mathbb{R}_+$ since $\mathfrak{F}^B_t$ is augmented. This and the fact that $B^{-1}(\mathfrak{W}^r_t) \subset \mathfrak{F}^B_t$ imply that $B^{-1}(\mathfrak{W}^{r,w}_t) \subset \mathfrak{F}^B_t$, that is, $B$ is $\mathfrak{F}^B_t/\mathfrak{W}^{r,w}_t$-measurable for every $t \in \mathbb{R}_+$. The mapping $F_x$ is $\mathfrak{W}^{r,w}_t/\mathfrak{W}^d_t$-measurable. The mapping $q_t$ of $W^d$ into $\mathbb{R}^d$ defined by $q_t(w) = w(t)$ for $w \in W^d$ is $\mathfrak{W}^d_t/\mathfrak{B}_{\mathbb{R}^d}$-measurable, as we saw in Proposition 17.7. Thus $q_t \circ F_x \circ B$ is an $\mathfrak{F}^B_t/\mathfrak{B}_{\mathbb{R}^d}$-measurable mapping of $\Omega$ into $\mathbb{R}^d$. But

$$q_t \circ F_x \circ B(\omega) = q_t \circ F_x[B(\cdot, \omega)] = q_t \circ X(\cdot, \omega) = X(t, \omega) \quad \text{for } \omega \in \Omega.$$

Thus $X_t$ is $\mathfrak{F}^B_t/\mathfrak{B}_{\mathbb{R}^d}$-measurable, that is, $X$ is an $\{\mathfrak{F}^B_t\}$-adapted process. Let us show next that the process $(B, X)$ on $(\Omega, \mathfrak{F}, \{\mathfrak{F}^B_t\}, P)$ and the process $(B^{(1)}, X^{(1)})$ on $(\Omega^{(1)}, \mathfrak{F}^{(1)}, \{\mathfrak{F}^{(1)}_t\}, P^{(1)})$ have identical probability distributions on $(W^{r+d}, \mathfrak{W}^{r+d})$. For the probability distribution $P_{(B,X)}$ of $(B, X)$, writing $I$ for the identity mapping on $W^r$, we have by Definition 17.9 for $m_w^r$

$$P_{(B,X)} = P \circ (B, X)^{-1} = P \circ (B, F_x \circ B)^{-1} = P \circ (I \circ B, F_x \circ B)^{-1} = P \circ \big((I, F_x) \circ B\big)^{-1} = P \circ B^{-1} \circ (I, F_x)^{-1} = m_w^r \circ (I, F_x)^{-1}.$$

On the other hand the probability distribution $P_{(B^{(1)}, X^{(1)})}$ is given by

$$P_{(B^{(1)}, X^{(1)})} = P^{(1)} \circ (B^{(1)}, X^{(1)})^{-1} = P^{(1)} \circ (B^{(1)})^{-1} \circ (I, F_x)^{-1} = m_w^r \circ (I, F_x)^{-1},$$

by 2°. Thus $P_{(B^{(1)}, X^{(1)})} = P_{(B,X)}$. This shows that $(B^{(1)}, X^{(1)})$ and $(B, X)$ have identical probability distributions. Then, since $(B^{(1)}, X^{(1)})$ is a solution of the stochastic differential equation on $(\Omega^{(1)}, \mathfrak{F}^{(1)}, \{\mathfrak{F}^{(1)}_t\}, P^{(1)})$, $(B, X)$ is a solution of the stochastic differential equation on $(\Omega, \mathfrak{F}, \{\mathfrak{F}^B_t\}, P)$ by Theorem 18.15. Also, $X^{(1)}_0 = x$ a.e. on $(\Omega^{(1)}, \mathfrak{F}^{(1)}, P^{(1)})$ implies that $X_0 = x$ a.e. on $(\Omega, \mathfrak{F}, P)$. Thus $(B, X)$ is a solution of the initial value problem on $(\Omega, \mathfrak{F}, \{\mathfrak{F}^B_t\}, P)$. ■

Lemma 19.19. Pathwise uniqueness under deterministic initial conditions of the solution of the stochastic differential equation (1) in Definition 18.6 implies uniqueness of the solution in the sense of probability law.

Proof. According to Lemma 19.2, uniqueness in the sense of probability law is equivalent to uniqueness in the sense of probability law under deterministic initial conditions.
Therefore it suffices to show that pathwise uniqueness under deterministic initial conditions implies uniqueness in the sense of probability law under deterministic initial conditions. Now let $(B^{(1)}, X^{(1)})$ and $(B^{(2)}, X^{(2)})$ be two solutions on two standard filtered spaces $(\Omega^{(1)}, \mathfrak{F}^{(1)}, \{\mathfrak{F}^{(1)}_t\}, P^{(1)})$ and $(\Omega^{(2)}, \mathfrak{F}^{(2)}, \{\mathfrak{F}^{(2)}_t\}, P^{(2)})$ respectively, and suppose $X^{(1)}_0 = x$ a.e. on $(\Omega^{(1)}, \mathfrak{F}^{(1)}, P^{(1)})$ and $X^{(2)}_0 = x$ a.e. on $(\Omega^{(2)}, \mathfrak{F}^{(2)}, P^{(2)})$ for some $x \in \mathbb{R}^d$. Let $F_x$ be the mapping of $W^r$ into $W^d$ defined in Theorem 19.18. Then by 2° of Theorem 19.18 there exists a null set $\Lambda^{(q)}$ in $(\Omega^{(q)}, \mathfrak{F}^{(q)}, P^{(q)})$ such that

$$X^{(q)}(\cdot, \omega) = F_x[B^{(q)}(\cdot, \omega)] \quad \text{for } \omega \in (\Lambda^{(q)})^c$$

for $q = 1, 2$. Now for the probability distribution $P_{X^{(q)}}$ of $X^{(q)}$ on $(W^d, \mathfrak{W}^d)$ we have

$$P_{X^{(q)}} = P^{(q)} \circ (X^{(q)})^{-1} = P^{(q)} \circ (F_x \circ B^{(q)})^{-1} = P^{(q)} \circ (B^{(q)})^{-1} \circ F_x^{-1} = m_w^r \circ F_x^{-1}.$$

Thus $P_{X^{(1)}} = P_{X^{(2)}}$ on $(W^d, \mathfrak{W}^d)$. This proves uniqueness of the solution in the sense of probability law under deterministic initial conditions. ■

Theorem 19.20. For the stochastic differential equation (1) in Definition 18.6, pathwise uniqueness of the solution and pathwise uniqueness of the solution under deterministic initial conditions are equivalent.

Proof. Since the former contains the latter as a particular case, it suffices to show that the latter implies the former. Let us assume the latter. Let $(B, X^{(1)})$ and $(B, X^{(2)})$ be two solutions of the stochastic differential equation on a standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ such that $X^{(1)}_0 = X^{(2)}_0$ a.e. on $(\Omega, \mathfrak{F}, P)$. The representation $(W^{(q)}, Y^{(q)})$ of $(B, X^{(q)})$ on $(W^{r+d}, \mathfrak{W}^{r+d,*,(q)}, \{\mathfrak{W}^{r+d,*,(q)}_t\}, P_{(B, X^{(q)})})$ is a solution of the stochastic differential equation, $(W^{(q)}, Y^{(q)})$ and $(B, X^{(q)})$ have identical probability distributions on $(W^{r+d}, \mathfrak{W}^{r+d})$, and $Y^{(q)}$ and $X^{(q)}$ have identical probability distributions on $(W^d, \mathfrak{W}^d)$ for each of $q = 1, 2$, according to Theorem 18.13. Since $X^{(1)}_0 = X^{(2)}_0$ a.e. on $(\Omega, \mathfrak{F}, P)$, $X^{(1)}_0$ and $X^{(2)}_0$ have identical probability distributions on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$; let $\mu$ denote this common distribution, and let $P^x_{(B, X^{(q)})}(\Lambda)$, for $(\Lambda, x) \in \mathfrak{W}^{r+d} \times \mathbb{R}^d$, be a regular conditional distribution of $(B, X^{(q)})$ given $X^{(q)}_0 = x$, as in Observation 18.16.
Now pathwise uniqueness under deterministic initial conditions implies uniqueness in the sense of probability law, according to Lemma 19.19. Therefore $Y^{(q)}_0 = x$ implies that $Y^{(1)}$ and $Y^{(2)}$ have identical probability distributions on $(W^d, \mathfrak{W}^d)$. As we noted above, the probability distribution $P_{(B, X^{(q)})}$ of $(B, X^{(q)})$ is also the probability distribution of $(W^{(q)}, Y^{(q)})$. Thus $P_{(B, X^{(q)})}(W^r \times A_1)$ for $A_1 \in \mathfrak{W}^d$ is the probability distribution of $Y^{(q)}$ on $(W^d, \mathfrak{W}^d)$. By (2) and (4) of Observation 18.16, we have

(1) $P_{(B, X^{(q)})}(W^r \times A_1) = \int_{\mathbb{R}^d} P^x_{(B, X^{(q)})}(W^r \times A_1)\, \mu(dx)$ for $A_1 \in \mathfrak{W}^d$,

(2) $P^x_{(B, X^{(q)})}\big(\{w \in W^{r+d} : \pi_1(w)(0) = x\}\big) = 1$ for $x \in N^c$,

where $N$ is a null set in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$. Thus for $x \in N^c$, $P^x_{(B, X^{(q)})}(W^r \times A_1)$ for $A_1 \in \mathfrak{W}^d$ is the probability distribution of $Y^{(q)}$ when $Y^{(q)}_0 = x$. Therefore we have

(3) $P^x_{(B, X^{(1)})}(W^r \times A_1) = P^x_{(B, X^{(2)})}(W^r \times A_1)$ for $A_1 \in \mathfrak{W}^d$ when $x \in N^c$.
To show that pathwise uniqueness holds, we show that there exists a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that $X^{(1)}(\cdot, \omega) = X^{(2)}(\cdot, \omega)$ for $\omega \in \Lambda^c$. Suppose no such null set in $(\Omega, \mathfrak{F}, P)$ exists. The continuity of the sample functions of $X^{(1)}$ and $X^{(2)}$ then implies that there exists some $t_0 \in \mathbb{R}_+$ such that $P\{\omega \in \Omega : X^{(1)}_{t_0}(\omega) \neq X^{(2)}_{t_0}(\omega)\} > 0$. Then the probability distributions $P_{X^{(1)}_{t_0}}$ and $P_{X^{(2)}_{t_0}}$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$ are not equal. Thus there exists $E \in \mathfrak{B}_{\mathbb{R}^d}$ such that $P_{X^{(1)}_{t_0}}(E) \neq P_{X^{(2)}_{t_0}}(E)$. Since $P_{X^{(q)}_{t_0}}(E) = P_{(B_{t_0}, X^{(q)}_{t_0})}(\mathbb{R}^r \times E)$, we have

(4) $P_{(B_{t_0}, X^{(1)}_{t_0})}(\mathbb{R}^r \times E) \neq P_{(B_{t_0}, X^{(2)}_{t_0})}(\mathbb{R}^r \times E)$.
Let $q_{t_0}$ be the mapping of $W^{r+d}$ into $\mathbb{R}^{r+d}$ defined by $q_{t_0}(w) = w(t_0)$ for $w \in W^{r+d}$. Then

$$P_{(B_{t_0}, X^{(q)}_{t_0})} = P \circ \big(B_{t_0}, X^{(q)}_{t_0}\big)^{-1} = P \circ \big(q_{t_0} \circ (B, X^{(q)})\big)^{-1} = P \circ (B, X^{(q)})^{-1} \circ q_{t_0}^{-1} = P_{(B, X^{(q)})} \circ q_{t_0}^{-1},$$

so that $P_{(B_{t_0}, X^{(q)}_{t_0})}(\mathbb{R}^r \times E) = P_{(B, X^{(q)})} \circ q_{t_0}^{-1}(\mathbb{R}^r \times E)$. Thus from (4) we have

(5) $P_{(B, X^{(1)})}\big(q_{t_0}^{-1}(\mathbb{R}^r \times E)\big) \neq P_{(B, X^{(2)})}\big(q_{t_0}^{-1}(\mathbb{R}^r \times E)\big)$.
Since $q_{t_0}^{-1}(\mathbb{R}^r \times E)$ is a set of the type $W^r \times A_1$ with $A_1 \in \mathfrak{W}^d$, (1) and (5) imply

$$\int_{\mathbb{R}^d} P^x_{(B, X^{(1)})}\big(q_{t_0}^{-1}(\mathbb{R}^r \times E)\big)\, \mu(dx) \neq \int_{\mathbb{R}^d} P^x_{(B, X^{(2)})}\big(q_{t_0}^{-1}(\mathbb{R}^r \times E)\big)\, \mu(dx).$$

Thus there exists $F \in \mathfrak{B}_{\mathbb{R}^d}$ with $\mu(F) > 0$ such that for $x \in F$ we have

$$P^x_{(B, X^{(1)})}\big(q_{t_0}^{-1}(\mathbb{R}^r \times E)\big) \neq P^x_{(B, X^{(2)})}\big(q_{t_0}^{-1}(\mathbb{R}^r \times E)\big),$$

contradicting (3). This shows that pathwise uniqueness holds. ■
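Informally, the uniqueness results of this section say that the solution path is a measurable functional of the driving Brownian path. A discrete-time sketch of that point: an Euler scheme is an explicit deterministic map from a sampled driving path to a sampled solution path, so the same driver and the same initial value always produce the same path. The coefficients below ($\alpha(x) = 0.5x$, $\beta(x) = -x$) are hypothetical Lipschitz choices, not taken from the text, and the map is only a stand-in for the functional $F_x$ of Theorem 19.18.

```python
import numpy as np

# Euler scheme as an explicit map from a discretized driving path to a
# discretized solution path, for the hypothetical 1-d equation
# dX = -X dt + 0.5 X dB with deterministic initial value x0.
def euler_map(x0, brownian_path, dt):
    x = np.empty(len(brownian_path))
    x[0] = x0
    for k in range(len(brownian_path) - 1):
        dB = brownian_path[k + 1] - brownian_path[k]
        x[k + 1] = x[k] + (-x[k]) * dt + 0.5 * x[k] * dB
    return x

rng = np.random.default_rng(1)
n, T = 1000, 1.0
dt = T / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# Same initial value and same driving path: identical solution paths,
# the discrete analogue of pathwise uniqueness.
same = np.array_equal(euler_map(1.0, B, dt), euler_map(1.0, B, dt))
print(same)  # -> True
```

The mathematical content of Theorem 19.18 is of course stronger: a single measurable map works for almost every driving path simultaneously.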
§20 Strong Solutions

[I] Existence of Strong Solutions

Consider the product measure space $(\mathbb{R}^d \times W^r, \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r), \mu \times m_w^r)$, where $\mu$ is an arbitrary probability measure on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$. Let $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)^{\mu \times m_w^r}$ be the completion of $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$ with respect to $\mu \times m_w^r$.

Definition 20.1. A solution $(B, X)$ of the stochastic differential equation (1) in Definition 18.6 on a standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, with initial distribution $\mu = P \circ X_0^{-1}$, is called a strong solution if there exists a mapping $F_\mu$ of $\mathbb{R}^d \times W^r$ into $W^d$ such that

1°. $F_\mu$ is $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)^{\mu \times m_w^r}/\mathfrak{W}^d$-measurable,

2°. for every $x \in \mathbb{R}^d$, $F_\mu[x, \cdot]$ is a $\mathfrak{W}^{r,w}_t/\mathfrak{W}^d_t$-measurable mapping of $W^r$ into $W^d$ for every $t \in \mathbb{R}_+$,

3°. $X(\cdot, \omega) = F_\mu[X_0(\omega), B(\cdot, \omega)]$ for a.e. $\omega$ in $(\Omega, \mathfrak{F}, P)$.

In Theorem 20.5 below we show that if the coefficients $\alpha$ and $\beta$ in the stochastic differential equation (1) in Definition 18.6 satisfy the Lipschitz condition (1) and the growth condition (2) in Definition 19.4, then a strong solution exists on any standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ on which an $r$-dimensional $\{\mathfrak{F}_t\}$-adapted null at 0 Brownian motion exists.

Definition 20.2. Let $L^{c,ad}_{2,\infty}(\mathbb{R}_+ \times \Omega, \{\mathfrak{F}_t\}, m_L \times P)$, or briefly $L^{c,ad}_{2,\infty}$, be the collection of all $d$-dimensional continuous $\{\mathfrak{F}_t\}$-adapted processes $X$ on a standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ satisfying the condition that $\sup_{s \in [0,t]} E(|X(s)|^2) < \infty$ for every $t \in \mathbb{R}_+$.
Lemma 20.3. Let $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ be a standard filtered space on which an $r$-dimensional $\{\mathfrak{F}_t\}$-adapted null at 0 Brownian motion $B$ exists. Assume that the coefficients $\alpha$ and $\beta$ in the stochastic differential equation (1) in Definition 18.6 satisfy condition (2) in Definition 19.4. With fixed $x \in \mathbb{R}^d$, let us define a mapping $\Gamma$ on $L^{c,ad}_{2,\infty}$ by setting, for $X \in L^{c,ad}_{2,\infty}$,

(1) $(\Gamma X)(t) = x + \int_{[0,t]} \alpha(s, X)\, dB(s) + \int_{[0,t]} \beta(s, X)\, ds$ for $t \in \mathbb{R}_+$.

Then $\Gamma X \in L^{c,ad}_{2,\infty}$. If we define a sequence $\{X^{(q)} : q \in \mathbb{Z}_+\}$ in $L^{c,ad}_{2,\infty}$ by letting

(2) $X^{(0)}(t) = x$ for $t \in \mathbb{R}_+$, and $X^{(q)}(t) = (\Gamma X^{(q-1)})(t)$ for $t \in \mathbb{R}_+$ and $q \in \mathbb{N}$,

then for every $T \in \mathbb{R}_+$ there exists $K_T \in \mathbb{R}_+$ such that

(3) $\sup_{t \in [0,T]} E(|X^{(q)}(t)|^2) \le K_T$ for $q \in \mathbb{Z}_+$.
Proof. To show the existence of the stochastic integrals in (1), we show that $\alpha^i_j(\cdot, X)$ and $\beta^i(\cdot, X)$ are in $L_{2,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ and $L_{1,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ respectively. Now since $\alpha$ and $\beta$ satisfy condition (2) in Definition 19.4 and since $X \in L^{c,ad}_{2,\infty}$, for every $T \in \mathbb{R}_+$ we have

$$E\Big[\int_{[0,T]} \big\{|\alpha(t, X)|^2 + |\beta(t, X)|^2\big\}\, m_L(dt)\Big] \le L_T \int_{[0,T]} E\Big[\int_{[0,t]} |X(s)|^2\, \lambda(ds) + |X(t)|^2 + 1\Big]\, m_L(dt)$$
$$\le L_T \Big[\sup_{t \in [0,T]} E(|X(t)|^2)\, \{\lambda([0,T]) + 1\} + 1\Big]\, T < \infty.$$

This shows that $\alpha^i_j(\cdot, X)$ and $\beta^i(\cdot, X)$ are all in $L_{2,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$, and hence the stochastic integrals in (1) exist. To show that $\Gamma X$ is in $L^{c,ad}_{2,\infty}$, it remains to verify that for every $t \in \mathbb{R}_+$ we have

(4) $\sup_{s \in [0,t]} E(|(\Gamma X)(s)|^2) < \infty$.

Let $c_0 = \max\{|x|^2, 1\}$. For $X \in L^{c,ad}_{2,\infty}$ and $T \in \mathbb{R}_+$, let $A(t; X)$ be a real nonnegative valued monotone increasing function of $t \in [0,T]$ such that

(5) $\max\big\{\sup_{s \in [0,t]} E(|X(s)|^2),\, 1\big\} \le A(t; X)$ for $t \in [0,T]$.
Since $\sup_{s \in [0,t]} E(|X(s)|^2) < \infty$ for every $t \in \mathbb{R}_+$, such a function $A(\cdot; X)$ always exists. Let us show that with $A(\cdot; X)$ we have, for $\Gamma X$ defined by (1) and $T \in \mathbb{R}_+$, the estimate

(6) $E(|(\Gamma X)(t)|^2) \le 3 \Big\{c_0 + C_T \int_{[0,t]} A(s; X)\, m_L(ds)\Big\}$ for $t \in [0,T]$,

where

(7) $C_T = (1 + T)\, L_T\, \{\lambda([0,T]) + 2\}$.

Note that (6) and (7) imply $E(|(\Gamma X)(t)|^2) \le 3\, \{c_0 + C_T\, A(T; X)\, T\} < \infty$, which is (4). To prove (6), note that since $\{\sum_{i=1}^n a_i\}^2 \le n \sum_{i=1}^n a_i^2$ for any real numbers $a_1, \ldots, a_n$, we have from (1)

$$E(|(\Gamma X)(t)|^2) \le 3 \Big\{c_0 + E\Big[\big|\int_{[0,t]} \alpha(s, X)\, dB(s)\big|^2\Big] + E\Big[\big|\int_{[0,t]} \beta(s, X)\, ds\big|^2\Big]\Big\}.$$

Now by (3) of Proposition 13.39

$$E\Big[\big|\int_{[0,t]} \alpha(s, X)\, dB(s)\big|^2\Big] = \int_{[0,t]} E\big[|\alpha(s, X)|^2\big]\, m_L(ds),$$

and by the Schwarz Inequality

$$E\Big[\big|\int_{[0,t]} \beta(s, X)\, ds\big|^2\Big] \le t \int_{[0,t]} E\big[|\beta(s, X)|^2\big]\, m_L(ds).$$

Thus for $t \in [0,T]$ we have, by (2) of Definition 19.4 and (5),

$$E(|(\Gamma X)(t)|^2) \le 3 \Big\{c_0 + (1 + T) \int_{[0,t]} E\big[|\alpha(s, X)|^2 + |\beta(s, X)|^2\big]\, m_L(ds)\Big\}$$
$$\le 3 \Big\{c_0 + (1 + T)\, L_T \int_{[0,t]} E\Big[\int_{[0,s]} |X(u)|^2\, \lambda(du) + |X(s)|^2 + 1\Big]\, m_L(ds)\Big\} \le 3 \Big\{c_0 + (1 + T)\, L_T\, \{\lambda([0,T]) + 2\} \int_{[0,t]} A(s; X)\, m_L(ds)\Big\},$$

proving (6) and in particular (4). Consider the sequence defined by (2). Clearly $X^{(0)} \in L^{c,ad}_{2,\infty}$. Then, since $\Gamma$ is a mapping of $L^{c,ad}_{2,\infty}$ into $L^{c,ad}_{2,\infty}$, we have $X^{(q)} \in L^{c,ad}_{2,\infty}$ for $q \in \mathbb{N}$. To prove (3), let us show first that for every $T \in \mathbb{R}_+$ we have, for $t \in [0,T]$ and $q \in \mathbb{Z}_+$, the estimate

(8) $E(|X^{(q)}(t)|^2) \le 3 c_0 \sum_{k=0}^{q} \dfrac{(3 C_T t)^k}{k!}$.
Now for $q = 0$ we have $E(|X^{(0)}(t)|^2) = |x|^2 \le c_0$, so that (8) holds. Suppose (8) holds for some $q \in \mathbb{Z}_+$. Let $A(t; X^{(q)})$ be defined to be equal to the right side of the inequality (8) for $t \in [0,T]$. Then $\max\{\sup_{s \in [0,t]} E(|X^{(q)}(s)|^2), 1\} \le A(t; X^{(q)})$ for $t \in [0,T]$, so that $A(\cdot; X^{(q)})$ satisfies (5). Therefore by (6) we have

$$E(|X^{(q+1)}(t)|^2) = E(|(\Gamma X^{(q)})(t)|^2) \le 3 \Big\{c_0 + C_T \int_{[0,t]} A(s; X^{(q)})\, m_L(ds)\Big\}$$
$$= 3 \Big\{c_0 + C_T \cdot 3 c_0 \sum_{k=0}^{q} \frac{(3 C_T)^k\, t^{k+1}}{(k+1)!}\Big\} = 3 c_0 \Big\{1 + \sum_{k=1}^{q+1} \frac{(3 C_T t)^k}{k!}\Big\} = 3 c_0 \sum_{k=0}^{q+1} \frac{(3 C_T t)^k}{k!} \quad \text{for } t \in [0,T],$$

that is, (8) holds for $q + 1$. Thus by induction (8) holds for all $q \in \mathbb{Z}_+$. From (8) we then have, for $t \in [0,T]$ and $q \in \mathbb{Z}_+$, the bound

$$E(|X^{(q)}(t)|^2) \le 3 c_0 \sum_{k=0}^{\infty} \frac{(3 C_T T)^k}{k!} = 3 c_0\, e^{3 C_T T}.$$
With $K_T = 3 c_0\, e^{3 C_T T}$ we have (3). ■

Lemma 20.4. Suppose $\alpha$ and $\beta$ satisfy both (1) and (2) in Definition 19.4. Then for every $T \in \mathbb{R}_+$ there exists $M_T \in \mathbb{R}_+$ such that

(1) $E\Big[\sup_{s \in [0,t]} |(\Gamma X)(s) - (\Gamma Y)(s)|^2\Big] \le M_T \int_{[0,t]} \sup_{u \in [0,s]} E\big(|X(u) - Y(u)|^2\big)\, m_L(ds) \le M_T \int_{[0,t]} E\Big[\sup_{u \in [0,s]} |X(u) - Y(u)|^2\Big]\, m_L(ds)$

for $t \in [0,T]$ and $X, Y \in L^{c,ad}_{2,\infty}$.
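The successive approximation scheme of Lemma 20.3 and the contraction behavior quantified in Lemma 20.4 can be sketched numerically by discretizing $\Gamma$ on one fixed Brownian path. The coefficients $\alpha(s, X) = 0.3\, X_s$ and $\beta(s, X) = -0.5\, X_s$, the horizon, and the step count are all hypothetical choices, and left-endpoint sums only approximate the integrals in (1) of Lemma 20.3.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, x = 2000, 1.0, 1.0
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)

# Picard iteration X^(q+1) = Gamma X^(q), where (Gamma X)(t) is
# x + int_[0,t] alpha(s, X) dB(s) + int_[0,t] beta(s, X) ds,
# discretized by left-endpoint sums on a fixed Brownian path.
X = np.full(n + 1, float(x))      # X^(0)(t) = x
gaps = []
for q in range(8):
    incr = 0.3 * X[:-1] * dB + (-0.5 * X[:-1]) * dt
    Y = np.concatenate(([x], x + np.cumsum(incr)))
    gaps.append(float(np.max(np.abs(Y - X))))
    X = Y
print(gaps)  # sup-distance between successive iterates
```

The printed gaps shrink rapidly with $q$, mirroring the $(M_T T)^q / q!$ decay that the iterated application of (1) produces in the proof of Theorem 20.5 below.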
Proof. For $X, Y \in L^{c,ad}_{2,\infty}$ we have $\Gamma X, \Gamma Y \in L^{c,ad}_{2,\infty}$ by Lemma 20.3. By (1) of Lemma 20.3 we have

$$|(\Gamma X)(s) - (\Gamma Y)(s)|^2 \le 2 \Big|\int_{[0,s]} \{\alpha(u, X) - \alpha(u, Y)\}\, dB(u)\Big|^2 + 2 \Big|\int_{[0,s]} \{\beta(u, X) - \beta(u, Y)\}\, du\Big|^2,$$

and thus

(2) $E\Big[\sup_{s \in [0,t]} |(\Gamma X)(s) - (\Gamma Y)(s)|^2\Big] \le 2\, E\Big[\sup_{s \in [0,t]} \big|\int_{[0,s]} \{\alpha(u, X) - \alpha(u, Y)\}\, dB(u)\big|^2\Big] + 2\, E\Big[\sup_{s \in [0,t]} \big|\int_{[0,s]} \{\beta(u, X) - \beta(u, Y)\}\, du\big|^2\Big]$.
Now $E[\sup_{s \in [0,t]} |M_s|^2] \le 4\, E(|M_t|^2)$ for a continuous $L_2$-martingale $M$ on $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, according to (2) of Theorem 6.16. Since

$$\Big|\int_{[0,s]} \{\alpha(u, X) - \alpha(u, Y)\}\, dB(u)\Big|^2 = \sum_{i=1}^d \Big|\sum_{j=1}^r \int_{[0,s]} \{\alpha^i_j(u, X) - \alpha^i_j(u, Y)\}\, dB^j(u)\Big|^2$$

and $\big\{\sum_{j=1}^r \int_{[0,s]} \{\alpha^i_j(u, X) - \alpha^i_j(u, Y)\}\, dB^j(u) : s \in \mathbb{R}_+\big\}$ is a continuous $L_2$-martingale on $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$, we have

(3) $E\Big[\sup_{s \in [0,t]} \big|\int_{[0,s]} \{\alpha(u, X) - \alpha(u, Y)\}\, dB(u)\big|^2\Big] \le 4\, E\Big[\big|\int_{[0,t]} \{\alpha(u, X) - \alpha(u, Y)\}\, dB(u)\big|^2\Big] = 4\, E\Big[\int_{[0,t]} |\alpha(u, X) - \alpha(u, Y)|^2\, m_L(du)\Big]$.
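The maximal inequality $E[\sup_{s \le t} |M_s|^2] \le 4\, E(|M_t|^2)$ invoked for (3) can be illustrated by simulation with a simple discrete martingale; the walk length, sample count, and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
# 20,000 symmetric random walks of 200 steps: each path is a martingale.
paths = rng.choice([-1.0, 1.0], size=(20_000, 200)).cumsum(axis=1)

# Doob's L2 maximal inequality: E[max_k M_k^2] <= 4 * E[M_n^2].
lhs = (np.abs(paths).max(axis=1) ** 2).mean()
rhs = 4 * (paths[:, -1] ** 2).mean()
print(lhs <= rhs)  # -> True
```

In this symmetric-walk case the left side is in fact well below the bound; the constant 4 is what makes the inequality hold uniformly over all $L_2$-martingales.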
Similarly, by the Schwarz Inequality

$$\Big|\int_{[0,s]} \{\beta(u, X) - \beta(u, Y)\}\, du\Big|^2 \le s \int_{[0,s]} |\beta(u, X) - \beta(u, Y)|^2\, m_L(du),$$

so that

(4) $E\Big[\sup_{s \in [0,t]} \big|\int_{[0,s]} \{\beta(u, X) - \beta(u, Y)\}\, du\big|^2\Big] \le t\, E\Big[\int_{[0,t]} |\beta(u, X) - \beta(u, Y)|^2\, m_L(du)\Big]$.
Using (3) and (4) in (2) and applying (1) of Definition 19.4, we have

$$E\Big[\sup_{s \in [0,t]} |(\Gamma X)(s) - (\Gamma Y)(s)|^2\Big] \le (8 + 2T) \int_{[0,t]} E\big[|\alpha(s, X) - \alpha(s, Y)|^2 + |\beta(s, X) - \beta(s, Y)|^2\big]\, m_L(ds)$$
$$\le (8 + 2T)\, L_T \int_{[0,t]} E\Big[\int_{[0,s]} |X(u) - Y(u)|^2\, \lambda(du) + |X(s) - Y(s)|^2\Big]\, m_L(ds)$$
$$\le (8 + 2T)\, L_T \int_{[0,t]} \sup_{u \in [0,s]} E\big(|X(u) - Y(u)|^2\big)\, \{\lambda([0,s]) + 1\}\, m_L(ds) \le (8 + 2T)\, L_T\, \{\lambda([0,T]) + 1\} \int_{[0,t]} \sup_{u \in [0,s]} E\big(|X(u) - Y(u)|^2\big)\, m_L(ds).$$
With $M_T = (8 + 2T)\, L_T\, \{\lambda([0,T]) + 1\}$ we have the first inequality in (1). The second inequality is immediate from $E(|X(u) - Y(u)|^2) \le E[\sup_{u \in [0,s]} |X(u) - Y(u)|^2]$ for $u \in [0,s]$, and then $\sup_{u \in [0,s]} E(|X(u) - Y(u)|^2) \le E[\sup_{u \in [0,s]} |X(u) - Y(u)|^2]$. ■

Theorem 20.5. Suppose the coefficients $\alpha$ and $\beta$ in the stochastic differential equation (1) of Definition 18.6 satisfy conditions (1) and (2) of Definition 19.4. Let $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ be a standard filtered space on which an $r$-dimensional $\{\mathfrak{F}_t\}$-adapted null at 0 Brownian motion $B$ exists. Then there exists a mapping $F$ of $\mathbb{R}^d \times W^r$ into $W^d$ such that

(1) $F$ is $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)/\mathfrak{W}^d$-measurable,

(2) for every $x \in \mathbb{R}^d$, $F[x, \cdot]$ is a $\mathfrak{W}^{r,w}_t/\mathfrak{W}^d_t$-measurable mapping of $W^r$ into $W^d$ for every $t \in \mathbb{R}_+$,

(3) for every $x \in \mathbb{R}^d$, the process $X(\cdot, \omega) = F[x, B(\cdot, \omega)]$, $\omega \in \Omega$, is a solution of the stochastic differential equation with $X_0 = x$;

in particular, a strong solution exists on $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$.

Proof. With fixed $x \in \mathbb{R}^d$, define a sequence $\{X^{(q)} : q \in \mathbb{Z}_+\}$ by letting

(4) $X^{(0)}(t) = x$ for $t \in \mathbb{R}_+$, and $X^{(q)}(t) = (\Gamma X^{(q-1)})(t)$ for $t \in \mathbb{R}_+$ and $q \in \mathbb{N}$,

where $\Gamma$ is the mapping of $L^{c,ad}_{2,\infty}$ into $L^{c,ad}_{2,\infty}$ defined by setting

(5) $(\Gamma X)(t) = x + \int_{[0,t]} \alpha(s, X)\, dB(s) + \int_{[0,t]} \beta(s, X)\, ds$ for $t \in \mathbb{R}_+$
for $X \in L^{c,ad}_{2,\infty}$. By iterated application of (1) in Lemma 20.4 we have, for $t_1 \in [0,T]$,

$$E\Big[\sup_{t \in [0,t_1]} |X^{(q+1)}(t) - X^{(q)}(t)|^2\Big] \le M_T \int_0^{t_1} \sup_{t \in [0,t_2]} E\big[|X^{(q)}(t) - X^{(q-1)}(t)|^2\big]\, dt_2$$
$$\le M_T^2 \int_0^{t_1} \int_0^{t_2} \sup_{t \in [0,t_3]} E\big[|X^{(q-1)}(t) - X^{(q-2)}(t)|^2\big]\, dt_3\, dt_2 \le \cdots \le M_T^q \int_0^{t_1} \int_0^{t_2} \cdots \int_0^{t_q} \sup_{t \in [0,t_{q+1}]} E\big[|X^{(1)}(t) - X^{(0)}(t)|^2\big]\, dt_{q+1} \cdots dt_2.$$
By (3) in Lemma 20.3 we have

$$\sup_{t \in [0,t_1]} E\big[|X^{(1)}(t) - X^{(0)}(t)|^2\big] \le \sup_{t \in [0,t_1]} E\big[2|X^{(1)}(t)|^2 + 2|X^{(0)}(t)|^2\big] \le 4 K_T,$$

and therefore

(6) $E\Big[\sup_{t \in [0,T]} |X^{(q+1)}(t) - X^{(q)}(t)|^2\Big] \le 4 K_T\, M_T^q \int_0^T \int_0^{t_1} \cdots \int_0^{t_{q-1}} dt_q \cdots dt_1 = 4 K_T\, \dfrac{(M_T T)^q}{q!}$.

For $q \in \mathbb{Z}_+$ let

$$\Lambda_q = \Big\{\omega \in \Omega : \sup_{t \in [0,T]} |X^{(q+1)}(t) - X^{(q)}(t)| > \frac{1}{2^q}\Big\}.$$
By the Chebyshev Inequality we have

$$P(\Lambda_q) \le (2^q)^2 \cdot 4 K_T\, \frac{(M_T T)^q}{q!} = 4 K_T\, \frac{(4 M_T T)^q}{q!}.$$

Since $\sum_{q \in \mathbb{Z}_+} 4 K_T (4 M_T T)^q (q!)^{-1} < \infty$, we have $P(\liminf_{q \to \infty} \Lambda_q^c) = 1$ by the Borel-Cantelli Lemma. Thus for a.e. $\omega \in \Omega$ we have $\sup_{t \in [0,T]} |X^{(q+1)}(t, \omega) - X^{(q)}(t, \omega)| \le 2^{-q}$ for all but finitely many $q \in \mathbb{Z}_+$. Then for an arbitrary $\varepsilon > 0$, for a.e. $\omega \in \Omega$ there exists $N(\omega) \in \mathbb{Z}_+$ such that

$$\sup_{t \in [0,T]} |X^{(m)}(t, \omega) - X^{(n)}(t, \omega)| \le \sum_{q=n}^{m-1} \sup_{t \in [0,T]} |X^{(q+1)}(t, \omega) - X^{(q)}(t, \omega)| \le \sum_{q=n}^{m-1} 2^{-q} < \varepsilon$$
for $m > n \ge N(\omega)$. This shows that $\{X^{(q)}(\cdot, \omega) : q \in \mathbb{Z}_+\}$ converges uniformly on $[0,T]$ for a.e. $\omega \in \Omega$. Considering $T = n$ for $n \in \mathbb{N}$, we have a null set $\Lambda$ in $(\Omega, \mathfrak{F}, P)$ such that $\{X^{(q)}(\cdot, \omega) : q \in \mathbb{Z}_+\}$ converges uniformly on every finite interval in $\mathbb{R}_+$ for $\omega \in \Lambda^c$. Let us define a process $X$ on $(\Omega, \mathfrak{F}, P)$ by

(7) $X(t, \omega) = \lim_{q \to \infty} X^{(q)}(t, \omega)$ for $(t, \omega) \in \mathbb{R}_+ \times \Lambda^c$, and $X(t, \omega) = 0$ for $(t, \omega) \in \mathbb{R}_+ \times \Lambda$.
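The convergence mechanism above rests on two elementary numerical facts: the factorial series $\sum_q c^q/q!$ converges (to $e^c$) for every constant $c$, which drives the Borel-Cantelli step, and the geometric tail $\sum_{q \ge n} 2^{-q} = 2^{1-n}$ is small, which gives the uniform Cauchy property. A quick check, with a hypothetical constant $c$ standing in for $4 M_T T$:

```python
import math

c = 8.0   # hypothetical stand-in for 4 * M_T * T
terms = [c**q / math.factorial(q) for q in range(80)]
series_ok = abs(sum(terms) - math.exp(c)) < 1e-6   # sum_q c^q/q! = e^c
tail_ok = terms[-1] < 1e-40                        # q! eventually beats c^q
geom_tail = sum(2.0**-q for q in range(10, 100))   # sum_{q>=10} 2^-q
print(series_ok, tail_ok, abs(geom_tail - 2.0**-9) < 1e-12)  # -> True True True
```

However large $4 M_T T$ is for a given horizon $T$, the factorial in (6) eventually dominates, which is why the argument needs no smallness assumption on $T$.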
By the uniform convergence of $\{X^{(q)}(\cdot, \omega) : q \in \mathbb{Z}_+\}$ to $X(\cdot, \omega)$ on every finite interval for $\omega \in \Lambda^c$, and by the continuity of $X^{(q)}(\cdot, \omega)$ on $\mathbb{R}_+$ for every $\omega \in \Omega$, $X$ is a continuous process. The uniform convergence also implies, according to Lemma 17.12, that for $\omega \in \Lambda^c$

(8) $X^{(q)}(\cdot, \omega)$ converges to $X(\cdot, \omega)$ in the metric $\rho$ on $W^d$.

By the fact that $\Lambda \in \mathfrak{F}_t$ for every $t \in \mathbb{R}_+$, since $\mathfrak{F}_t$ is augmented, and by the fact that $X^{(q)}(t, \cdot)$ is $\mathfrak{F}_t$-measurable, $X(t, \cdot)$ is $\mathfrak{F}_t$-measurable; that is, $X$ is an $\{\mathfrak{F}_t\}$-adapted process. Regarding the convergence of $X^{(q)}$ to $X$ we have furthermore

(9) $\lim_{q \to \infty} \Big\{\sup_{t \in [0,T]} E\big[|X^{(q)}(t) - X(t)|^2\big]\Big\} = 0$.
< £E
SU
P l*(5+1)w-xw(t)\:
<2^r£
»
by (6). Since ^ , 6 z + \J(MTT)i(q\)-^ < oo, the Cauchy Criterion for uniform convergence implies that there exists X*(t) G ta(Q, 5, P) for i G [0, T] such that
M^*^"**®1*'}*0, Since convergence in L2 of X{q)(t) to X*(i) implies the existence of a subsequence which converges a.e. to X*(t) and since Xiq)(t) converges a.e. to X(t), we have X*(<) = X(t) a.e. on (Q, 5, P) for every i 6 [0, T\. Thus (9) holds. To show that X e L2'^0 note that by Fatou's Lemma and by (3) of Lemma 20.3 E(|X t | 2 ) < liminfE(|X ( (,) | 2 ) < liminf sup E(|X ( (,) | 2 ) < KT so that sup ( 6 [ 0 T ] E(|X ( | 2 ) < KT < oo. Thus X G L ^ and r X is defined. Let us show that TX = X for our X defined by (7). Now for every T G R+ we have sup | ( r X ) ( i ) - X ( i ) | 2 < 2 sup | ( r X ) ( i ) - X ( ' + 1 ) ( i ) | 2 + 2 S up |(rX) (?+1) (i) - X(t)| 2 <6[0,t] (£[0,4] t6[0,(]
§20. STRONG
SOLUTIONS
All
for an arbitrary q 6 Z + . Since {X{*\-,ui) : q 6 Z + } converges uniformly on [0,T] to X ( - , « ) f o r w e Ac we have ,_l lim sup |(rX) (,+1) (t,a;) - X(i,u;)| 2 = 0fora; G AC. Thus °° ie[o,(] sup |(rX)(f) - X(t)\2 < 21imirrf ( sup |(rX)(t) - X ( ' + 1 ) ( t ) | 2 j . <e[0,t] te[o,t] Then by Fatou's Lemma and Lemma 20.4 E
sup |(rX)(t) -
X(i)f
te[o,t]
<21iminfE q—»-co
/ < 2M T liminf 9^00 y
sup |(rX)(t)-X<' + 1 ) (0| 2 *e[o,t]
sup E[|X(s) - X (,) (s)| 2 ]mi,(di)
[ 0 ,r] s 6 [ 0 ,i]
and therefore (10) E sup | ( r X ) ( t ) - X ( i ) | 2 < 2M r limsup f i€[0,i]
sup E[|X(s)-X w (s)| 2 ]m I ,(dt).
Now since X, X (9) g L ^ , we have by (3) of Lemma 20.3 sup E [ | X ( s ) - X ( 9 ) ( s ) | 2 ] < 2 J sup E(|X(s)| 2 ) + sup E(|X ( "(s)| 2 )l
»e[o,<]
<
Ue[o,<]
»e[o,t]
J
2 I sup E(|X(s)| 2 ) + Kr \ < oo. Ue[o,(] J
This shows that the integrands on the right side of (10) are bounded on [0, T] uniformly in q £ Z+. Thus Fatou's Lemma for limit superior applies and we have sup
(6[0,t]
<
\(rX)(t)-X(t)\'
2MT f lim sup { sup E[|X(s) - X(q)(s)\2] \ mL{dt) = 0 J[0,T) 9—<x. [ss[0,t] )
where the last equality is by (9). Therefore sup t6(0 (] |(rX)(t) - X(t)\ = 0 a.e. on (£2, g, P). Thus (rX)(t) = X(t) for * e [0, T] a.e. on (Q,5, P). By considering T = n for n € N, we have a null set A in (Q,5, P ) such that (rX)(i, w) = X(i, w) for (f, w) € R+ x Ac.
Recalling the definition of $\Gamma X$ by (5), we have

(11) $X_t = x + \int_{[0,t]} \alpha(s, X)\, dB(s) + \int_{[0,t]} \beta(s, X)\, ds$ for $t \in \mathbb{R}_+$,

that is, $X$ is a solution of the stochastic differential equation (1) in Definition 18.6 with $X_0 = x$. Let us show that there exists a mapping $F$ of $\mathbb{R}^d \times W^r$ into $W^d$ satisfying (1) and (2), and that for $X$ defined by (4) and (7) there exists a null set $\Lambda_\infty$ in $(\Omega, \mathfrak{F}, P)$ such that we have

(12) $X(\cdot, \omega) = F[x, B(\cdot, \omega)]$ for $\omega \in \Lambda_\infty^c$.

For this we show first, by induction, that for every $q \in \mathbb{Z}_+$ there exists a mapping $F^{(q)}$ of $\mathbb{R}^d \times W^r$ into $W^d$ satisfying (1) and (2), and that for $X^{(q)}$ defined by (4) there exists a null set $\Lambda_q$ in $(\Omega, \mathfrak{F}, P)$ such that

(13) $X^{(q)}(\cdot, \omega) = F^{(q)}[x, B(\cdot, \omega)]$ for $\omega \in \Lambda_q^c$.
=x + I
a{s,X{,l))dB(s)+
J[0,t]
f
P(.s,X^})ds
fortGR+.
J[0,t)
For brevity let us use the notations

$$\alpha(\cdot, X^{(q)}) \bullet B = \Big[\textstyle\sum_{j=1}^r \alpha^i_j(\cdot, X^{(q)}) \bullet B^j : i = 1, \ldots, d\Big], \qquad \beta(\cdot, X^{(q)}) \bullet m_L = \big[\beta^i(\cdot, X^{(q)}) \bullet m_L : i = 1, \ldots, d\big].$$

Then $X^{(q+1)} = x + \alpha(\cdot, X^{(q)}) \bullet B + \beta(\cdot, X^{(q)}) \bullet m_L$. Since $x = X^{(0)}(\cdot, \omega) = F^{(0)}[x, B(\cdot, \omega)]$ for $\omega \in \Omega$, if we show that there exist mappings $G^{(q)}$ and $H^{(q)}$ of $\mathbb{R}^d \times W^r$ into $W^d$ satisfying (1), (2) and

(14) $\big(\alpha(\cdot, X^{(q)}) \bullet B\big)(\omega) = G^{(q)}[x, B(\cdot, \omega)]$ and $\big(\beta(\cdot, X^{(q)}) \bullet m_L\big)(\omega) = H^{(q)}[x, B(\cdot, \omega)]$ for $\omega \in \Lambda_q^c$,
§20. STRONG SOLUTIONS
where $\Lambda_q$ is a null set in $(\Omega, \mathfrak{F}, P)$, then by setting $F^{(q+1)} = F^{(0)} + G^{(q)} + H^{(q)}$ we have a mapping $F^{(q+1)}$ of $\mathbb{R}^d \times W^r$ into $W^d$ satisfying (1), (2) and $F^{(q+1)}[x, B(\cdot,\omega)] = x + (\alpha(\cdot,X^{(q)}) \bullet B)(\cdot,\omega) + (\beta(\cdot,X^{(q)}) \bullet m_L)(\cdot,\omega) = X^{(q+1)}(\cdot,\omega)$ for $\omega \in \Lambda_q^c$, verifying (13). Thus it remains to show the existence of $G^{(q)}$ and $H^{(q)}$. Let $\Phi^{(q)}$ be a mapping of $\mathbb{R}_+ \times \Omega$ into $\mathbb{R}^{d \times r}$ defined by
(15) $\Phi^{(q)}(t,\omega) = \alpha(t, X^{(q)}(\cdot,\omega)) \quad \text{for } (t,\omega) \in \mathbb{R}_+ \times \Omega$.
Since $F^{(q)}$ satisfies (13) we have
(16) $\Phi^{(q)}(t,\omega) = \alpha(t, F^{(q)}[x, B(\cdot,\omega)]) \quad \text{for } (t,\omega) \in \mathbb{R}_+ \times \Lambda_q^c$.
By Lemma 18.5, $\Phi^{(q)}$ is an $\{\mathfrak{F}_t\}$-progressively measurable, and in particular $\{\mathfrak{F}_t\}$-adapted and measurable, process on $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Also, since $X^{(q)} \in \mathbf{L}^d_{2,\infty}$, we have $\alpha^i_j(\cdot,X^{(q)}) \in \mathbf{L}_{2,\infty}(\mathbb{R}_+ \times \Omega, m_L \times P)$ as we saw in the proof of Lemma 20.3. Let $\varphi^{(q)}$ be a mapping of $\mathbb{R}_+ \times \mathbb{R}^d \times W^r$ into $\mathbb{R}^{d \times r}$ defined by
(17) $\varphi^{(q)}(t,x,w) = \alpha(t, F^{(q)}[x,w]) \quad \text{for } (t,x,w) \in \mathbb{R}_+ \times \mathbb{R}^d \times W^r$.
Let us consider first the case where the components $(\Phi^{(q)})^i_j$ of $\Phi^{(q)}$ are in $\mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. In this case we have for $(t,\omega) \in \mathbb{R}_+ \times \Lambda_q^c$
(18) $\displaystyle (\Phi^{(q)} \bullet B)^i(t,\omega) = \sum_{j=1}^r \sum_{k=1}^m \alpha^i_j(t_{k-1}, X^{(q)}(\cdot,\omega))\{B^j(t_k,\omega) - B^j(t_{k-1},\omega)\} = \sum_{j=1}^r \sum_{k=1}^m \alpha^i_j(t_{k-1}, F^{(q)}[x, B(\cdot,\omega)])\{B^j(t_k,\omega) - B^j(t_{k-1},\omega)\}$,
where $\{t_k : k \in \mathbb{Z}_+\}$ is a strictly increasing sequence in $\mathbb{R}_+$ with $t_0 = 0$ and $\lim_{k\to\infty} t_k = \infty$, and $t_m = t$. Define a mapping $(\varphi^{(q)} \bullet W)^i$ of $\mathbb{R}_+ \times \mathbb{R}^d \times W^r$ into $\mathbb{R}$ by
(19) $\displaystyle (\varphi^{(q)} \bullet W)^i(t,x,w) = \sum_{j=1}^r \sum_{k=1}^m \alpha^i_j(t_{k-1}, F^{(q)}[x,w])\{w^j(t_k) - w^j(t_{k-1})\}$
for $(t,x,w) \in \mathbb{R}_+ \times \mathbb{R}^d \times W^r$, where $w^j$ is the $j$-th component of $w$. Let us show first that $(\varphi^{(q)} \bullet W)^i(\cdot,x,w)$, as a mapping of $\mathbb{R}^d \times W^r$ into $W$, is $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)/\mathfrak{W}$-measurable. According to (4) of Proposition 17.4, it suffices to show that for every $t \in \mathbb{R}_+$ the mapping
$(\varphi^{(q)} \bullet W)^i(t,\cdot,\cdot)$ of $\mathbb{R}^d \times W^r$ into $\mathbb{R}$ is $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$-measurable. But for fixed $t \in \mathbb{R}_+$, $(\varphi^{(q)} \bullet W)^i(t,x,w)$ is a finite sum of products of $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$-measurable functions of $(x,w)$ and is therefore $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$-measurable. Thus if we define a mapping $G^{(q)}$ of $\mathbb{R}^d \times W^r$ into $W^d$ by setting
(20) $G^{(q)}[x,w] = (\varphi^{(q)} \bullet W)(\cdot,x,w) \quad \text{for } (x,w) \in \mathbb{R}^d \times W^r$,
then $G^{(q)}$ satisfies conditions (1) and (2), and for $\omega \in \Lambda_q^c$ we have $G^{(q)}[x, B(\cdot,\omega)] = (\varphi^{(q)} \bullet W)(\cdot, x, B(\cdot,\omega)) = (\Phi^{(q)} \bullet B)(\cdot,\omega) = (\alpha(\cdot,X^{(q)}) \bullet B)(\cdot,\omega)$ by (20), (19), (18) and (15). This verifies (14) in the particular case where the components of $\Phi^{(q)}$ are in $\mathbf{L}_0(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. For the general case there exists a sequence $\{\Phi^{(q)}_n : n \in \mathbb{N}\}$ whose components are in $\mathbf{L}_0$ such that
(21) $(\Phi^{(q)}_n \bullet B)(\cdot,\omega)$ converges to $(\Phi^{(q)} \bullet B)(\cdot,\omega)$ in the metric $\rho_\infty$ on $W^d$
for $\omega$ outside a null set. Let $G^{(q)}_n$ be the mapping of $\mathbb{R}^d \times W^r$ into $W^d$ corresponding to $\Phi^{(q)}_n$ as defined by (20). Then $G^{(q)}_n$ satisfies (1) and (2) and
(22) $G^{(q)}_n[x, B(\cdot,\omega)] = (\Phi^{(q)}_n \bullet B)(\cdot,\omega)$
for $\omega \in \Lambda_{q,n}^c$, where $\Lambda_{q,n}$ is a null set in $(\Omega, \mathfrak{F}, P)$. Let $\Lambda_q = \bigcup_{n \in \mathbb{Z}_+} \Lambda_{q,n}$ and let
(23) $E_q = \{(x,w) \in \mathbb{R}^d \times W^r : \lim_{n\to\infty} G^{(q)}_n[x,w] \text{ exists in } W^d\}$.
Since $G^{(q)}_n$ is $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)/\mathfrak{W}^d$-measurable for every $n \in \mathbb{N}$, $E_q \in \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$. Define a mapping $G^{(q)}$ of $\mathbb{R}^d \times W^r$ into $W^d$ by
(24) $G^{(q)}[x,w] = \lim_{n\to\infty} G^{(q)}_n[x,w]$ for $(x,w) \in E_q$, and $G^{(q)}[x,w] = 0 \in W^d$ for $(x,w) \in E_q^c$.
Since $G^{(q)}_n$ satisfies (1) and (2) for every $n \in \mathbb{N}$, so does $G^{(q)}$. Let $K_q$ be the subset of $\mathbb{R}^d \times W^r$ covered by the mapping $\omega \mapsto (x, B(\cdot,\omega))$ of $\Lambda_q^c \subset \Omega$. Then for every $(x,w) \in K_q$ there exists some $\omega \in \Lambda_q^c$ such that $(x,w) = (x, B(\cdot,\omega))$, so that by (22) and (21), for $\omega \in \Lambda_q^c$ we have
$\lim_{n\to\infty} G^{(q)}_n[x,w] = \lim_{n\to\infty} G^{(q)}_n[x, B(\cdot,\omega)] = \lim_{n\to\infty} (\Phi^{(q)}_n \bullet B)(\cdot,\omega) = (\Phi^{(q)} \bullet B)(\cdot,\omega)$.
This shows that $K_q \subset E_q$. Then by (24) we have
(25) $\lim_{n\to\infty} G^{(q)}_n[x, B(\cdot,\omega)] = G^{(q)}[x, B(\cdot,\omega)] \quad \text{for } \omega \in \Lambda_q^c$.
By (15), (21), (22) and (25) we have
(26) $(\alpha(\cdot,X^{(q)}) \bullet B)(\cdot,\omega) = (\Phi^{(q)} \bullet B)(\cdot,\omega) = \lim_{n\to\infty} (\Phi^{(q)}_n \bullet B)(\cdot,\omega) = \lim_{n\to\infty} G^{(q)}_n[x, B(\cdot,\omega)] = G^{(q)}[x, B(\cdot,\omega)]$ for $\omega \in \Lambda_q^c$,
verifying (14). This proves the existence of $G^{(q)}$. The existence of $H^{(q)}$ is proved likewise. With this we have shown that the existence of $F^{(q)}$ implies that of $F^{(q+1)}$. Therefore by induction we have a sequence of mappings $\{F^{(q)} : q \in \mathbb{Z}_+\}$ where $F^{(q)}$ satisfies (1), (2), and (13). To show the existence of a mapping $F$ of $\mathbb{R}^d \times W^r$ into $W^d$ satisfying (1), (2), and (12), let
(27) $E = \{(x,w) \in \mathbb{R}^d \times W^r : \lim_{q\to\infty} F^{(q)}[x,w] \text{ exists in } W^d\}$.
The $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$-measurability of $F^{(q)}$ for $q \in \mathbb{Z}_+$ implies that $E$ is in $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$. Define a mapping $F$ of $\mathbb{R}^d \times W^r$ into $W^d$ by
(28) $F[x,w] = \lim_{q\to\infty} F^{(q)}[x,w]$ for $(x,w) \in E$, and $F[x,w] = 0 \in W^d$ for $(x,w) \in E^c$.
Since $F^{(q)}$ satisfies the measurability condition (1) for $q \in \mathbb{Z}_+$, so does $F$. For fixed $x \in \mathbb{R}^d$, $F^{(q)}[x,\cdot]$ is $\mathfrak{W}^{r,w}_t/\mathfrak{W}^d_t$-measurable for every $t \in \mathbb{R}_+$ and $q \in \mathbb{Z}_+$, so that $\{w \in W^r : \lim_{q\to\infty} F^{(q)}[x,w] \text{ exists in } W^d\} \in \mathfrak{W}^{r,w}_t$ for every $t \in \mathbb{R}_+$. Then by (28), $F[x,\cdot]$ is $\mathfrak{W}^{r,w}_t/\mathfrak{W}^d_t$-measurable for every $t \in \mathbb{R}_+$. This shows that $F$ satisfies (2). To show that $F$ satisfies (12), let $\Lambda_\infty = (\bigcup_{q \in \mathbb{Z}_+} \Lambda_q) \cup \Lambda$ where $\Lambda$ is the null set in (8). For $\omega \in \Lambda_\infty^c$, $\lim_{q\to\infty} F^{(q)}[x, B(\cdot,\omega)]$ exists in $W^d$ by (8) and (13). Thus by (28), (13), and (8) we have for $\omega \in \Lambda_\infty^c$
$F[x, B(\cdot,\omega)] = \lim_{q\to\infty} F^{(q)}[x, B(\cdot,\omega)] = \lim_{q\to\infty} X^{(q)}(\cdot,\omega) = X(\cdot,\omega)$.
This proves (12). Finally, if $Z$ is an $\mathfrak{F}_0$-measurable $d$-dimensional random variable on $(\Omega, \mathfrak{F}, P)$ and we let $X(\cdot,\omega) = F[Z(\omega), B(\cdot,\omega)]$ for $\omega \in \Omega$, then $(B,X)$ is a solution of the stochastic differential equation on $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ satisfying $X_0 = Z$. ∎
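The conclusion — that the solution is a fixed measurable functional $F$ of the initial point and the driving Brownian path, into which an $\mathfrak{F}_0$-measurable $Z$ may then be substituted — can be mimicked in a discrete setting. A minimal sketch, with an Euler scheme standing in for the functional $F$ and purely illustrative coefficients:

```python
import math
import random

def F(x, dB, dt=0.01):
    """Discrete stand-in for the path functional F[x, B(., w)]:
    an Euler scheme for dX = -X dt + 0.5 dB (illustrative coefficients)."""
    X = [x]
    for inc in dB:
        X.append(X[-1] - X[-1] * dt + 0.5 * inc)
    return X

random.seed(1)
dt = 0.01
# several "sample points" w: each carries an F_0-measurable initial value Z(w)
# and a driving Brownian path B(., w); the same deterministic routine F is
# applied to every pair, and the resulting process starts at Z(w).
solutions = []
for _ in range(5):
    Z = random.uniform(-1.0, 1.0)
    dB = [random.gauss(0.0, math.sqrt(dt)) for _ in range(50)]
    solutions.append((Z, F(Z, dB, dt)))

# X_0 = Z at every sample point
assert all(path[0] == z for z, path in solutions)
```

The point of the sketch is only that nothing about `F` depends on the sample point: it consumes the initial value and the increments of the path, exactly as the mapping $F$ of (12) consumes $(x, B(\cdot,\omega))$.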
[II] Uniqueness of Strong Solutions

Definition 20.6. Let $\mathbf{S}(\mathbb{R}^d \times W^r)$ be the collection of all mappings $F$ of $\mathbb{R}^d \times W^r$ into $W^d$ satisfying the following conditions:
1°. for every probability measure $\mu$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$ there exists a null set $N_\mu$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that $F[x,\cdot] = F_\mu[x,\cdot]$ a.e. on $(W^r, \mathfrak{W}^{r,w}, m_W^r)$ when $x \in N_\mu^c$, where $F_\mu$ is a mapping of $\mathbb{R}^d \times W^r$ into $W^d$ such that
2°. $F_\mu$ is $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^{r,w})/\mathfrak{W}^d$-measurable,
3°. for every $x \in \mathbb{R}^d$, $F_\mu[x,\cdot]$ is a $\mathfrak{W}^{r,w}_t/\mathfrak{W}^d_t$-measurable mapping of $W^r$ into $W^d$ for every $t \in \mathbb{R}_+$.
Definition 20.7. We say that the stochastic differential equation (1) in Definition 18.6 has a unique strong solution if there exists $F \in \mathbf{S}(\mathbb{R}^d \times W^r)$ such that
1°. if $(B,X)$ is a solution of the stochastic differential equation on a standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ and $\mu$ is the probability distribution of $X_0$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$, then $X(\cdot,\omega) = F_\mu[X_0(\omega), B(\cdot,\omega)]$ for a.e. $\omega$ in $(\Omega, \mathfrak{F}, P)$,
2°. if $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ is a standard filtered space on which an $r$-dimensional $\{\mathfrak{F}_t\}$-adapted null at 0 Brownian motion $B$ exists, $Z$ is an arbitrary $d$-dimensional $\mathfrak{F}_0$-measurable random variable on $(\Omega, \mathfrak{F}, P)$ with probability distribution $P_Z$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$, and if we define a stochastic process $X$ by setting $X(\cdot,\omega) = F_{P_Z}[Z(\omega), B(\cdot,\omega)]$ for $\omega \in \Omega$, then $(B,X)$ is a solution satisfying the condition $X_0 = Z$ a.e. on $(\Omega, \mathfrak{F}, P)$.
Observation 20.8. Suppose the stochastic differential equation (1) in Definition 18.6 has a solution $(B,X)$ on some standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Let $P_{(B,X)}$ be the probability distribution of $(B,X)$ on $(W^{r+d}, \mathfrak{W}^{r+d})$ and let $\mu$ be the probability distribution of $X_0$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$. Let $(W^{r+d}, \mathfrak{W}^{r+d,*}, \{\mathfrak{W}^{r+d,*}_t\}, P_{(B,X)})$ be the standard filtered space generated by $P_{(B,X)}$, that is, $\mathfrak{W}^{r+d,*}$ is the completion of $\mathfrak{W}^{r+d}$ with respect to $P_{(B,X)}$, $\mathfrak{W}^{r+d,0}_t = \sigma(\mathfrak{W}^{r+d}_t \cup \mathfrak{N})$ where $\mathfrak{N}$ is the collection of all the null sets in $(W^{r+d}, \mathfrak{W}^{r+d,*}, P_{(B,X)})$, and $\mathfrak{W}^{r+d,*}_t = \bigcap_{\varepsilon>0} \mathfrak{W}^{r+d,0}_{t+\varepsilon}$. Let $\pi_0$ and $\pi_1$ be the projections of $W^{r+d}$ onto $W^r$ and $W^d$ respectively, and let $W$ and $Y$ be two processes defined on $(W^{r+d}, \mathfrak{W}^{r+d,*}, \{\mathfrak{W}^{r+d,*}_t\}, P_{(B,X)})$ by setting $W(t,w) = (\pi_0(w))(t)$ and $Y(t,w) = (\pi_1(w))(t)$ for $(t,w) \in \mathbb{R}_+ \times W^{r+d}$. We showed in Theorem 18.13 that $(W,Y)$ is a solution of the stochastic differential equation on $(W^{r+d}, \mathfrak{W}^{r+d,*}, \{\mathfrak{W}^{r+d,*}_t\}, P_{(B,X)})$, that $(W,Y)$ and $(B,X)$ have the identical probability distribution $P_{(B,X)}$ on $(W^{r+d}, \mathfrak{W}^{r+d})$, and that $Y_0$ and $X_0$ have the identical probability distribution $\mu$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$. Let $Q = \mu \times P_{(B,X)}$ and consider the product measure space $(\mathbb{R}^d \times W^{r+d}, \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^{r+d}), Q)$. Now let $\pi_2$ be the projection of $\mathbb{R}^d \times W^{r+d}$ onto $\mathbb{R}^d \times W^r$ and let $Q_{\pi_2}$ be its probability distribution on $(\mathbb{R}^d \times W^r, \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r))$. Then for $E \times A_0 \in \mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r$ we have
$Q_{\pi_2}(E \times A_0) = (\mu \times P_{(B,X)})(E \times A_0 \times W^d) = \mu(E)\,P_{(B,X)}(A_0 \times W^d) = \mu(E)\,P_B(A_0) = \mu(E)\,m_W^r(A_0)$.
From this it follows that $Q_{\pi_2} = \mu \times m_W^r$ on $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$. Let $Q^{(x,v)}(A)$ for $A \in \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^{r+d})$ and $(x,v) \in \mathbb{R}^d \times W^r$ be a regular image conditional probability of $Q$ given $\pi_2$. Then
1°. there exists a null set $N$ in $(\mathbb{R}^d \times W^r, \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r), \mu \times m_W^r)$ such that $Q^{(x,v)}$ is a probability measure on $(\mathbb{R}^d \times W^{r+d}, \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^{r+d}))$ for $(x,v) \in N^c$,
2°. $Q^{(\cdot,\cdot)}(A)$ is a $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$-measurable function on $\mathbb{R}^d \times W^r$ for $A \in \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^{r+d})$,
3°. $Q(A \cap \pi_2^{-1}(G)) = \int_G Q^{(x,v)}(A)\,(\mu \times m_W^r)(d(x,v))$ for $A \in \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^{r+d})$ and $G \in \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$.
If we define
(1) $\overline{Q}^{(x,v)}(A_1) = Q^{(x,v)}(\mathbb{R}^d \times W^r \times A_1) \quad \text{for } A_1 \in \mathfrak{W}^d$,
then $\overline{Q}^{(x,v)}$ is a probability measure on $(W^d, \mathfrak{W}^d)$ for $(x,v) \in N^c$, and by 3° we have
(2) $\displaystyle Q(E \times A_0 \times A_1) = \int_{E \times A_0} \overline{Q}^{(x,v)}(A_1)\,(\mu \times m_W^r)(d(x,v))$
for $E \times A_0 \times A_1 \in \mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r \times \mathfrak{W}^d$ when $(x,v) \in N^c$. On the probability space $(\mathbb{R}^d \times W^{r+d}, \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^{r+d}), Q)$ consider the projection $\pi_3$ of $\mathbb{R}^d \times W^{r+d}$ onto $\mathbb{R}^d$. The probability distribution $Q_{\pi_3}$ of $\pi_3$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$ is equal to $\mu$ since for every $E \in \mathfrak{B}_{\mathbb{R}^d}$ we have
$Q_{\pi_3}(E) = Q \circ \pi_3^{-1}(E) = (\mu \times P_{(B,X)})(E \times W^{r+d}) = \mu(E)$.
Let $Q^x(A)$ for $A \in \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^{r+d})$ and $x \in \mathbb{R}^d$ be a regular image conditional probability of $Q$ given $\pi_3$. Then
4°. there exists a null set $N_0$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that $Q^x$ is a probability measure on $(\mathbb{R}^d \times W^{r+d}, \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^{r+d}))$ for $x \in N_0^c$,
5°. $Q^{(\cdot)}(A)$ is a $\mathfrak{B}_{\mathbb{R}^d}$-measurable function on $\mathbb{R}^d$ for $A \in \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^{r+d})$,
6°. $Q(A \cap \pi_3^{-1}(G)) = \int_G Q^x(A)\,\mu(dx)$ for $A \in \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^{r+d})$ and $G \in \mathfrak{B}_{\mathbb{R}^d}$.
If we define
(3) $\overline{Q}^x(A) = Q^x(\mathbb{R}^d \times A) \quad \text{for } A \in \mathfrak{W}^{r+d}$,
then $\overline{Q}^x$ is a probability measure on $(W^{r+d}, \mathfrak{W}^{r+d})$ for $x \in N_0^c$, and by 6° we have
(4) $\displaystyle Q(E \times A_0 \times A_1) = \int_E \overline{Q}^x(A_0 \times A_1)\,\mu(dx)$.
Consider the probability space $(W^{r+d}, \mathfrak{W}^{r+d}, \overline{Q}^x)$ for $x \in N_0^c$. By Theorem 18.21, $(W,Y)$ is a solution of the stochastic differential equation on $(W^{r+d}, \mathfrak{W}^{r+d,*,x}, \{\mathfrak{W}^{r+d,*,x}_t\}, \overline{Q}^x)$ satisfying the condition $Y_0 = x$, and in particular $W$ is an $r$-dimensional $\{\mathfrak{W}^{r+d,*,x}_t\}$-adapted
null at 0 Brownian motion. Thus for the projection $\pi_0$ of $W^{r+d}$ onto $W^r$, the probability distribution $(\overline{Q}^x)_{\pi_0}$ of $\pi_0$ on $(W^r, \mathfrak{W}^r)$ is that of the process $W$, which is the $r$-dimensional Wiener measure $m_W^r$ by Definition 17.9. Let $(\overline{Q}^x)^v(A)$ for $A \in \mathfrak{W}^{r+d}$ and $v \in W^r$ be a regular image conditional probability of $\overline{Q}^x$ given $\pi_0$ when $x \in N_0^c$. Then
7°. there exists a null set $N_x$ in $(W^r, \mathfrak{W}^r, m_W^r)$ such that $(\overline{Q}^x)^v$ is a probability measure on $(W^{r+d}, \mathfrak{W}^{r+d})$ for $v \in N_x^c$,
8°. $(\overline{Q}^x)^{(\cdot)}(A)$ is a $\mathfrak{W}^r$-measurable function on $W^r$ for every $A \in \mathfrak{W}^{r+d}$,
9°. $\overline{Q}^x(A \cap \pi_0^{-1}(G)) = \int_G (\overline{Q}^x)^v(A)\,m_W^r(dv)$ for $A \in \mathfrak{W}^{r+d}$ and $G \in \mathfrak{W}^r$.
Thus if we define
(5) $\overline{(\overline{Q}^x)^v}(A_1) = (\overline{Q}^x)^v(W^r \times A_1) \quad \text{for } A_1 \in \mathfrak{W}^d$,
then $\overline{(\overline{Q}^x)^v}$ is a probability measure on $(W^d, \mathfrak{W}^d)$ for every $v \in N_x^c$. Furthermore, by 9° and (5) we have for every $x \in N_0^c$
(6) $\displaystyle \overline{Q}^x(A_0 \times A_1) = \int_{A_0} \overline{(\overline{Q}^x)^v}(A_1)\,m_W^r(dv) \quad \text{for } A_0 \times A_1 \in \mathfrak{W}^r \times \mathfrak{W}^d$.
Lemma 20.9. There exists a null set $N$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ and a collection $\{N_x : x \in N^c\}$ of null sets in $(W^r, \mathfrak{W}^r, m_W^r)$ such that
$\overline{Q}^{(x,v)} = \overline{(\overline{Q}^x)^v}$ on $(W^d, \mathfrak{W}^d)$ when $v \in N_x^c$ and $x \in N^c$.
Proof. Let $E \times A_0 \times A_1 \in \mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r \times \mathfrak{W}^d$. Then
(1) $\displaystyle \int_E \Big\{\int_{A_0} \overline{Q}^{(x,v)}(A_1)\,m_W^r(dv)\Big\}\,\mu(dx) = \int_{E \times A_0} \overline{Q}^{(x,v)}(A_1)\,(\mu \times m_W^r)(d(x,v)) = Q(E \times A_0 \times A_1) = \int_E \overline{Q}^x(A_0 \times A_1)\,\mu(dx) = \int_E \Big\{\int_{A_0} \overline{(\overline{Q}^x)^v}(A_1)\,m_W^r(dv)\Big\}\,\mu(dx)$,
where the second equality is by (2), the third equality is by (4), and the last equality is by (6) of Observation 20.8. Thus corresponding to $A_0 \times A_1 \in \mathfrak{W}^r \times \mathfrak{W}^d$ there exists a null set $N_{A_0 \times A_1}$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that for the two functions $\int_{A_0} \overline{Q}^{(x,v)}(A_1)\,m_W^r(dv)$ and $\int_{A_0} \overline{(\overline{Q}^x)^v}(A_1)\,m_W^r(dv)$ of $x \in \mathbb{R}^d$ we have
$\displaystyle \int_{A_0} \overline{Q}^{(x,v)}(A_1)\,m_W^r(dv) = \int_{A_0} \overline{(\overline{Q}^x)^v}(A_1)\,m_W^r(dv) \quad \text{for } x \in N^c_{A_0 \times A_1}$.
Consider the two measures $\nu_1^x$ and $\nu_2^x$ on $(W^{r+d}, \mathfrak{W}^{r+d})$ determined by
$\nu_1^x(A_0 \times A_1) = \int_{A_0} \overline{Q}^{(x,v)}(A_1)\,m_W^r(dv)$ and $\nu_2^x(A_0 \times A_1) = \int_{A_0} \overline{(\overline{Q}^x)^v}(A_1)\,m_W^r(dv)$
for $A_0 \times A_1 \in \mathfrak{W}^r \times \mathfrak{W}^d$. Let $\mathfrak{D}$ be the collection of all $A \in \mathfrak{W}^{r+d}$ such that $\nu_1^x(A) = \nu_2^x(A)$ for a.e. $x$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$. Then $\mathfrak{W}^r \times \mathfrak{W}^d \subset \mathfrak{D}$ and it is easily verified that $\mathfrak{D}$ is a d-class. Thus by Theorem 1.7, $\mathfrak{W}^{r+d} = \sigma(\mathfrak{W}^r \times \mathfrak{W}^d) \subset \mathfrak{D}$, so that there exists a null set $N$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that
$\displaystyle \int_{A_0} \overline{Q}^{(x,v)}(A_1)\,m_W^r(dv) = \int_{A_0} \overline{(\overline{Q}^x)^v}(A_1)\,m_W^r(dv) \quad \text{for } A_0 \times A_1 \in \mathfrak{W}^r \times \mathfrak{W}^d \text{ when } x \in N^c$.
Thus corresponding to every $x \in N^c$ and $A_1 \in \mathfrak{W}^d$ there exists a null set $N_{x,A_1}$ in $(W^r, \mathfrak{W}^r, m_W^r)$ such that $\overline{Q}^{(x,v)}(A_1) = \overline{(\overline{Q}^x)^v}(A_1)$ when $v \in N_{x,A_1}^c$. Since $\mathfrak{W}^d$ is countably determined, corresponding to every $x \in N^c$ there exists a null set $N_x$ in $(W^r, \mathfrak{W}^r, m_W^r)$ such that $\overline{Q}^{(x,v)}(A_1) = \overline{(\overline{Q}^x)^v}(A_1)$ for all $A_1 \in \mathfrak{W}^d$ when $v \in N_x^c$. ∎

Lemma 20.10. Suppose the stochastic differential equation (1) in Definition 18.6 has a solution $(B,X)$ on some standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$. Let $\mu$ be the probability distribution of $X_0$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$. Assume further that the stochastic differential equation satisfies the pathwise uniqueness condition under deterministic initial conditions. Then there exists a null set $N_\mu$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ and a collection $\{N_x : x \in N_\mu^c\}$ of null sets in $(W^r, \mathfrak{W}^r, m_W^r)$ such that
$\overline{Q}^{(x,v)} = \delta_{F_x(v)}$ on $(W^d, \mathfrak{W}^d)$ when $v \in N_x^c$ and $x \in N_\mu^c$,
where $F_x$ is the mapping of $W^r$ into $W^d$ defined in Theorem 19.18; that is, the probability measure $\overline{Q}^{(x,v)}$ on $(W^d, \mathfrak{W}^d)$ is the unit mass at $F_x(v) \in W^d$.

Proof. According to Theorem 18.21 there exists a null set $N$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that for every $x \in N^c$ the stochastic differential equation with the deterministic initial condition $X_0 = x$
has a solution on the function space $W^{r+d}$. If we assume further that the stochastic differential equation satisfies the pathwise uniqueness condition under deterministic initial conditions, then by Theorem 19.18 the mapping $F_x$ of $W^r$ into $W^d$ exists for every $x \in N^c$. Let $N_\mu$ be the union of this null set and the null set $N$ in Lemma 20.9. Then for every $x \in N_\mu^c$ there exists a null set $N'_x$ in $(W^r, \mathfrak{W}^r, m_W^r)$ such that $Y(\cdot,w) = F_x[W(\cdot,w)]$ for $w \in W^{r+d}$ with $W(\cdot,w) \in (N'_x)^c$. In other words, for the projections $\pi_0$ and $\pi_1$ of $W^{r+d}$ onto $W^r$ and $W^d$ we have
(1) $\pi_1(w) = F_x[\pi_0(w)] \quad \text{for } w \in W^{r+d} \text{ with } \pi_0(w) \in (N'_x)^c$.
Now for $x \in N_\mu^c$ there exists a null set $N''_x$ in $(W^r, \mathfrak{W}^r, m_W^r)$ such that for $v \in (N''_x)^c$, $(\overline{Q}^x)^v$ is a regular image conditional probability of $\overline{Q}^x$ given the projection $\pi_0$ of $W^{r+d}$ onto $W^r$, restricted from $W^{r+d}$ to $W^d$ by (5) of Observation 20.8. But according to (1), for every $w \in W^{r+d}$ with $\pi_0(w) \in (N'_x)^c$, $\pi_1(w)$ is uniquely determined by $\pi_0(w)$, and thus $\overline{(\overline{Q}^x)^v} = \delta_{F_x(v)}$. Let $N_x = N'_x \cup N''_x$. Then $\overline{Q}^{(x,v)} = \delta_{F_x(v)}$ for $v \in N_x^c$ and $x \in N_\mu^c$. ∎

Lemma 20.11. Under the same assumptions as in Lemma 20.10, let $\Lambda = \{(x,v) \in \mathbb{R}^d \times W^r : \overline{Q}^{(x,v)} \text{ is not a unit mass on } (W^d, \mathfrak{W}^d)\}$. Then $\Lambda$ is a null set in $(\mathbb{R}^d \times W^r, \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r), \mu \times m_W^r)$.

Proof. Since $W^d$ is a separable metric space, for every $n \in \mathbb{N}$ there exists a countable collection of closed spheres $\{S_{n,i} : i \in \mathbb{N}\}$ in $W^d$, each with radius $r(S_{n,i}) = n^{-1}$, such that $W^d = \bigcup_{i \in \mathbb{N}} S_{n,i}$. Let us show that for an arbitrary probability measure $\nu$ on $(W^d, \mathfrak{W}^d)$ we have
(1) $\nu$ is a unit mass $\Leftrightarrow$ $\nu(S_{n,i}) = 0$ or $1$ for every $n$ and $i$.
Now if $\nu$ is a unit mass then clearly $\nu(S_{n,i}) = 0$ or $1$ for every $n$ and $i$. Conversely, assume that $\nu(S_{n,i}) = 0$ or $1$ for every $n$ and $i$. For fixed $n$, no two closed spheres $S_{n,i}$ with $\nu$ measure $1$ can be disjoint, for otherwise we would have $\nu(W^d) \geq 2$. Let $K_n$ be the closed set which is the intersection of those closed spheres in the collection $\{S_{n,i} : i \in \mathbb{N}\}$ which have $\nu$ measure $1$. Then $\nu(K_n) = 1$ and the diameter $\delta(K_n) \leq 2n^{-1}$. For the collection of closed sets $\{K_n : n \in \mathbb{N}\}$ we have $K_n \cap K_m \neq \emptyset$, for otherwise we would have $\nu(W^d) \geq 2$. If we let $C_n = \bigcap_{m=1}^n K_m$ for $n \in \mathbb{N}$, then $C_n$, $n \in \mathbb{N}$, is a decreasing sequence of closed sets with $\nu(C_n) = 1$ and $\delta(C_n) \leq 2n^{-1}$. Since $W^d$ is a complete metric space and since $\delta(C_n) \downarrow 0$ as $n \to \infty$, there exists a unique $w \in W^d$ such that $\bigcap_{n \in \mathbb{N}} C_n = \{w\}$. Then $\nu(\{w\}) = \lim_{n \to \infty} \nu(C_n) = 1$. Thus $\nu$ is a unit mass. This proves (1).
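The covering criterion (1) has a direct discrete analogue. A minimal sketch for a finitely supported measure on $\mathbb{R}$ (the supports, weights, and resolution bound are illustrative assumptions; for a discrete measure it suffices to test balls centered at support points):

```python
def ball_measures_zero_one(support, probs, n):
    """Check that every closed ball of radius 1/n centered at a support point
    has measure 0 or 1 (centers at support points suffice for a discrete measure)."""
    r = 1.0 / n
    for c in support:
        m = sum(p for x, p in zip(support, probs) if abs(x - c) <= r)
        if not (m < 1e-12 or abs(m - 1.0) < 1e-12):
            return False
    return True

def is_unit_mass(support, probs, n_max=100):
    # discrete stand-in for "nu(S_{n,i}) in {0, 1} for every n and i"
    return all(ball_measures_zero_one(support, probs, n) for n in range(1, n_max + 1))

# a unit mass at 0.3 (written redundantly over three support points)
assert is_unit_mass([0.3, 1.7, 2.5], [1.0, 0.0, 0.0])
# a genuine two-point mixture already fails at radius 1
assert not is_unit_mass([0.3, 1.7], [0.5, 0.5])
```

For the mixture, the ball of radius $1$ around $0.3$ has measure $1/2$, which is neither $0$ nor $1$ — exactly the failure mode the proof's intersection-of-spheres argument rules out for a unit mass.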
By (1) we have
$\Lambda^c = \{(x,v) \in \mathbb{R}^d \times W^r : \overline{Q}^{(x,v)} \text{ is a unit mass on } (W^d, \mathfrak{W}^d)\} = \{(x,v) \in \mathbb{R}^d \times W^r : \overline{Q}^{(x,v)}(S_{n,i}) = 0 \text{ or } 1 \text{ for every } n \text{ and } i\}$.
Since $S_{n,i} \in \mathfrak{B}_{W^d} = \mathfrak{W}^d$, $\overline{Q}^{(x,v)}(S_{n,i})$ is a $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$-measurable function of $(x,v) \in \mathbb{R}^d \times W^r$ by 2° and (1) of Observation 20.8. Thus $\Lambda^c \in \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$. To show that $(\mu \times m_W^r)(\Lambda) = 0$, let $N_\mu$ be a null set in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ and $\{N_x : x \in N_\mu^c\}$ be a collection of null sets in $(W^r, \mathfrak{W}^r, m_W^r)$ as specified in Lemma 20.10. Then for $x \in N_\mu^c$ and $v \in N_x^c$ we have $\overline{Q}^{(x,v)} = \delta_{F_x(v)}$, so that $(x,v) \in \Lambda^c$ and thus $v \in (\Lambda^c)_x$, the section of $\Lambda^c \subset \mathbb{R}^d \times W^r$ at $x \in \mathbb{R}^d$ defined by $(\Lambda^c)_x = \{v \in W^r : (x,v) \in \Lambda^c\} \in \mathfrak{W}^r$. Therefore $x \in N_\mu^c$ implies $N_x^c \subset (\Lambda^c)_x$, or equivalently $\Lambda_x \subset N_x$. Thus
$\displaystyle (\mu \times m_W^r)(\Lambda) = \int_{\mathbb{R}^d} m_W^r(\Lambda_x)\,\mu(dx) = \int_{N_\mu^c} m_W^r(\Lambda_x)\,\mu(dx) \leq \int_{N_\mu^c} m_W^r(N_x)\,\mu(dx) = 0$,
since $\mu(N_\mu) = 0$ and $m_W^r(N_x) = 0$ for $x \in N_\mu^c$. This shows that $(\mu \times m_W^r)(\Lambda) = 0$. ∎

Lemma 20.12. Under the same assumptions on the stochastic differential equation (1) in Definition 18.6 as in Lemma 20.10, let $F_\mu$ be a mapping of $\mathbb{R}^d \times W^r$ into $W^d$ defined by
(1) $\delta_{F_\mu[x,v]} = \overline{Q}^{(x,v)}$ for $(x,v) \in (N_\mu^c \times W^r) \cap \Lambda^c$,
(2) $F_\mu[x,v] = 0 \in W^d$ for $(x,v) \in (N_\mu^c \times W^r) \cap \Lambda$,
(3) $F_\mu[x,v] = 0 \in W^d$ for $(x,v) \in N_\mu \times W^r$.
Then $F_\mu$ satisfies conditions 2° and 3° of Definition 20.6, that is,
(4) $F_\mu$ is $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^{r,w})/\mathfrak{W}^d$-measurable,
(5) for every $x \in \mathbb{R}^d$, $F_\mu[x,\cdot]$ is a $\mathfrak{W}^{r,w}_t/\mathfrak{W}^d_t$-measurable mapping of $W^r$ into $W^d$ for every $t \in \mathbb{R}_+$.
Assume further that for every $x \in \mathbb{R}^d$ the stochastic differential equation has a solution $(B,X)$ with $X_0 = x$ on some standard filtered space, so that the mapping $F_x$ of $W^r$ into $W^d$ defined in Theorem 19.18 exists for every $x \in \mathbb{R}^d$. Let $F$ be a mapping of $\mathbb{R}^d \times W^r$ into $W^d$ defined by
(6) $F[x,v] = F_x(v)$ for $(x,v) \in \mathbb{R}^d \times W^r$.
Then $F_\mu$ and $F$ satisfy condition 1° of Definition 20.6, that is, there exists a null set $N_\mu$ in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, \mu)$ such that
(7) $F[x,\cdot] = F_\mu[x,\cdot]$ a.e. on $(W^r, \mathfrak{W}^{r,w}, m_W^r)$ when $x \in N_\mu^c$.
Proof. To prove (4), note that by Lemma 20.11, $\Lambda^c \in \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$. For every $A_1 \in \mathfrak{W}^d$ we have
$F_\mu^{-1}(A_1) \cap (N_\mu^c \times W^r) \cap \Lambda^c = \{(x,v) \in (N_\mu^c \times W^r) \cap \Lambda^c : F_\mu[x,v] \in A_1\} = \{(x,v) \in (N_\mu^c \times W^r) \cap \Lambda^c : \overline{Q}^{(x,v)}(A_1) = 1\} \in \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)$
by the definition of $\Lambda$ in Lemma 20.11 and by 2° of Observation 20.8; on $(N_\mu^c \times W^r) \cap \Lambda$ and on $N_\mu \times W^r$, $F_\mu$ is identically $0$. Thus $F_\mu$ is $\sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r)/\mathfrak{W}^d$-measurable, proving (4). To prove (5), note that for $x \in N_\mu^c$ and $A_1 \in \mathfrak{W}^d$ we have
$F_\mu[x,\cdot]^{-1}(A_1) \cap (\Lambda^c)_x = \{v \in (\Lambda^c)_x : \overline{Q}^{(x,v)}(A_1) = 1\} \in \mathfrak{W}^r$,
while $F_\mu[x,\cdot]^{-1}(A_1) \cap \Lambda_x = \Lambda_x$ or $\emptyset$ according as $0 \in A_1$ or $0 \in A_1^c$. Thus $F_\mu[x,\cdot]$ is $\mathfrak{W}^{r,w}_t/\mathfrak{W}^d_t$-measurable when $x \in N_\mu^c$. When $x \in N_\mu$, $F_\mu[x,\cdot] = 0$ so that $F_\mu[x,\cdot]$ is again $\mathfrak{W}^{r,w}_t/\mathfrak{W}^d_t$-measurable. To prove (7), note that
$x \in N_\mu^c$ and $v \in N_x^c$ $\Rightarrow$ $\overline{Q}^{(x,v)} = \delta_{F_x(v)}$ $\Rightarrow$ $(x,v) \in \Lambda^c$ $\Rightarrow$ $\delta_{F_\mu[x,v]} = \overline{Q}^{(x,v)}$,
so that $F_x(v) = F_\mu[x,v]$ for $x \in N_\mu^c$ and $v \in N_x^c$. Since $N_x$ is a null set in $(W^r, \mathfrak{W}^r, m_W^r)$, this verifies (7). ∎
Theorem 20.13. The stochastic differential equation (1) of Definition 18.6 has a unique strong solution if and only if
(I) for every probability measure $\mu$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$ there exists a solution $(B,X)$ of the stochastic differential equation on some standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ with $\mu$ as the probability distribution of $X_0$,
(II) the stochastic differential equation satisfies the pathwise uniqueness condition.
Proof. 1) Necessity. Suppose a unique strong solution $F$ exists. To verify (I), let $\mu$ be an arbitrary probability measure on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$. Consider the product measure space $(\mathbb{R}^d \times W^r, \sigma(\mathfrak{B}_{\mathbb{R}^d} \times \mathfrak{W}^r), \mu \times m_W^r)$ and the standard filtered space generated by $\mu \times m_W^r$. On this space, $B(t,(x,v)) = v(t)$ for $(t,(x,v)) \in \mathbb{R}_+ \times \mathbb{R}^d \times W^r$ defines an $r$-dimensional adapted null at 0 Brownian motion, and $Z(x,v) = x$ defines an $\mathfrak{F}_0$-measurable random variable with probability distribution $\mu$. Then by 2° of Definition 20.7, setting $X(\cdot,(x,v)) = F_\mu[Z(x,v), B(\cdot,(x,v))]$ yields a solution $(B,X)$ of the stochastic differential equation with $\mu$ as the probability distribution of $X_0$. This verifies (I). To verify (II), let $(B,X)$ and $(B,X')$ be two solutions on one standard filtered space $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ with respect to the same Brownian motion $B$ and with $X_0 = X'_0$ a.e. on $(\Omega, \mathfrak{F}, P)$. If $\mu$ is the common probability distribution of $X_0$ and $X'_0$, then by 1° of Definition 20.7 we have $X(\cdot,\omega) = F_\mu[X_0(\omega), B(\cdot,\omega)] = F_\mu[X'_0(\omega), B(\cdot,\omega)] = X'(\cdot,\omega)$ for a.e. $\omega$ in $(\Omega, \mathfrak{F}, P)$. This verifies (II).
2) Sufficiency. Assume (I) and (II). Let $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ be an arbitrary standard filtered space on which an $r$-dimensional $\{\mathfrak{F}_t\}$-adapted null at 0 Brownian motion $B$ exists, and let $Z$ be an arbitrary $\mathfrak{F}_0$-measurable $d$-dimensional random variable on $(\Omega, \mathfrak{F}, P)$ with probability distribution $P_Z$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$. By (I) there exists a solution $(B',X')$ of the stochastic differential equation on some standard filtered space with $P_Z$ as the probability distribution of $X'_0$; let $(W,Y)$ be the corresponding solution on the function space $(W^{r+d}, \mathfrak{W}^{r+d,*}, \{\mathfrak{W}^{r+d,*}_t\}, P_{(B',X')})$ as in Observation 20.8. By (II) and Lemma 20.12 the mappings $F$ and $F_{P_Z}$ of Definition 20.6 exist and
(1) $Y(\cdot,w) = F_{P_Z}[Y_0(w), W(\cdot,w)]$
for a.e. $w$ in $(W^{r+d}, \mathfrak{W}^{r+d,*}, P_{(B',X')})$. If we define
(2) $X(\cdot,\omega) = F_{P_Z}[Z(\omega), B(\cdot,\omega)]$
for $\omega \in \Omega$, then $(B,X)$ and $(W,Y)$ have an identical probability distribution $P_{(B',X')}$ on $(W^{r+d}, \mathfrak{W}^{r+d})$, so that $(B,X)$ is a solution of the stochastic differential equation on $(\Omega, \mathfrak{F}, \{\mathfrak{F}_t\}, P)$ satisfying $X_0 = Z$ a.e. on $(\Omega, \mathfrak{F}, P)$. Moreover, by Lemma 20.12 we have
(3) $X(\cdot,\omega) = F[Z(\omega), B(\cdot,\omega)]$
for $\omega$ with $Z(\omega) \in N^c$, where $N$ is a null set in $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d}, P_Z)$. Thus conditions 1° and 2° of Definition 20.7 are satisfied, and the stochastic differential equation has a unique strong solution. ∎
Appendix A. Stochastic Independence

Definition A.1. With an arbitrary index set $A$, let $\mathfrak{E}_\alpha$ be a subcollection of $\mathfrak{F}$ in a probability space $(\Omega, \mathfrak{F}, P)$ for $\alpha \in A$. We say that the system $\{\mathfrak{E}_\alpha : \alpha \in A\}$ is stochastically independent with respect to $P$, or simply independent when $P$ is understood, if for every finite subset $\{\alpha_1,\dots,\alpha_n\}$ of $A$ and arbitrary $E_{\alpha_j} \in \mathfrak{E}_{\alpha_j}$ for $j = 1,\dots,n$ we have
$P(\bigcap_{j=1}^n E_{\alpha_j}) = \prod_{j=1}^n P(E_{\alpha_j})$.
In particular, $\{E_\alpha : \alpha \in A\}$ where $E_\alpha \in \mathfrak{F}$ is said to be independent if for every finite subset $\{\alpha_1,\dots,\alpha_n\}$ of $A$ we have $P(\bigcap_{j=1}^n E_{\alpha_j}) = \prod_{j=1}^n P(E_{\alpha_j})$.

Note that for a finite index set $A$, say $A = \{1,\dots,n\}$, and collections $\{\mathfrak{E}_j : j = 1,\dots,n\}$, the assumption that $P(\bigcap_{j=1}^n E_j) = \prod_{j=1}^n P(E_j)$ for arbitrary $E_j \in \mathfrak{E}_j$ for $j = 1,\dots,n$ does not imply that $P(\bigcap_{j=1}^k E_j) = \prod_{j=1}^k P(E_j)$ for arbitrary $E_j \in \mathfrak{E}_j$ for $j = 1,\dots,k$ for $k < n$. For instance, consider the particular case $\{E_j : j = 1,\dots,n\}$ with $n \geq 3$, where $E_j = E$ for $j = 1,\dots,n-1$ for some $E \in \mathfrak{F}$ with $P(E) = 1/2$ and $E_n = \emptyset$. In this case we have $P(\bigcap_{j=1}^n E_j) = 0 = \prod_{j=1}^n P(E_j)$, but $P(\bigcap_{j=1}^{n-1} E_j) = P(E) = 1/2 \neq 1/2^{n-1} = \prod_{j=1}^{n-1} P(E_j)$. There is an exception to this when $\Omega \in \mathfrak{E}_j$, and in particular when $\mathfrak{E}_j$ is a $\sigma$-algebra.

Proposition A.2. If $\Omega \in \mathfrak{E}_j$ for $j = 1,\dots,n$ and
(1) $P(\bigcap_{j=1}^n E_j) = \prod_{j=1}^n P(E_j)$
for arbitrary $E_j \in \mathfrak{E}_j$ for $j = 1,\dots,n$, then $\{\mathfrak{E}_j : j = 1,\dots,n\}$ is an independent system.

Proof. Take an arbitrary subset of $\{1,\dots,n\}$, say $\{1,\dots,k\}$ where $k < n$. Then for arbitrary $E_j \in \mathfrak{E}_j$ for $j = 1,\dots,k$ we have
$P(E_1 \cap \dots \cap E_k) = P(E_1 \cap \dots \cap E_k \cap \Omega_{k+1} \cap \dots \cap \Omega_n) = \Big\{\prod_{j=1}^k P(E_j)\Big\}\Big\{\prod_{j=k+1}^n P(\Omega)\Big\} = \prod_{j=1}^k P(E_j)$
by (1). This shows that $\{\mathfrak{E}_j : j = 1,\dots,n\}$ is an independent system. ∎

Observation A.3. As immediate consequences of Definition A.1 we have the following.
1) If $\{\mathfrak{E}_\alpha : \alpha \in A\}$ is an independent system, then for any subset $A_0$ of $A$, $\{\mathfrak{E}_\alpha : \alpha \in A_0\}$ is an independent system.
2) If $\{\mathfrak{E}_\alpha : \alpha \in A\}$ is an independent system and $\mathfrak{D}_\alpha \subset \mathfrak{E}_\alpha$ for every $\alpha \in A$, then $\{\mathfrak{D}_\alpha : \alpha \in A\}$ is an independent system.

Theorem A.4. If $\{\mathfrak{E}_\alpha : \alpha \in A\}$ is an independent system in which every $\mathfrak{E}_\alpha$ is a $\pi$-class, then $\{\sigma(\mathfrak{E}_\alpha) : \alpha \in A\}$ is an independent system.

Proof. It suffices to show that for an arbitrary finite subset of $A$, say $\{1,\dots,n\}$, and arbitrary $F_j \in \sigma(\mathfrak{E}_j)$ for $j = 1,\dots,n$, we have
(1) $P(\bigcap_{j=1}^n F_j) = \prod_{j=1}^n P(F_j)$.
We prove (1) by induction. Thus let us establish first that for any $F_1 \in \sigma(\mathfrak{E}_1)$ and $E_j \in \mathfrak{E}_j$ for $j = 2,\dots,n$ we have
(2) $P(F_1 \cap \bigcap_{j=2}^n E_j) = P(F_1)\prod_{j=2}^n P(E_j)$.
With fixed $E_j \in \mathfrak{E}_j$ for $j = 2,\dots,n$, define two finite measures $\mu_1$ and $\nu_1$ on $\sigma(\mathfrak{E}_1)$ by
(3) $\mu_1(F_1) = P(F_1 \cap \bigcap_{j=2}^n E_j)$ and $\nu_1(F_1) = P(F_1)\prod_{j=2}^n P(E_j)$ for $F_1 \in \sigma(\mathfrak{E}_1)$.
Then for $E_1 \in \mathfrak{E}_1$ we have $\mu_1(E_1) = P(\bigcap_{j=1}^n E_j) = \prod_{j=1}^n P(E_j) = \nu_1(E_1)$, where the second equality is by the independence of $\{\mathfrak{E}_\alpha : \alpha \in A\}$. Thus $\mu_1 = \nu_1$ on the $\pi$-class $\mathfrak{E}_1$. We have also $\mu_1(\Omega) = P(\Omega \cap \bigcap_{j=2}^n E_j) = P(\Omega)\prod_{j=2}^n P(E_j) = \nu_1(\Omega)$, where the second equality is by the independence of $\{\mathfrak{E}_\alpha : \alpha \in A\}$. Thus by Corollary 1.8 we have $\mu_1 = \nu_1$ on $\sigma(\mathfrak{E}_1)$, which proves (2). Assume now that for some $k$, $1 \leq k < n$, we have
(4) $P(\bigcap_{j=1}^k F_j \cap \bigcap_{j=k+1}^n E_j) = \prod_{j=1}^k P(F_j)\prod_{j=k+1}^n P(E_j)$
for $F_j \in \sigma(\mathfrak{E}_j)$ for $j = 1,\dots,k$ and $E_j \in \mathfrak{E}_j$ for $j = k+1,\dots,n$. With fixed $F_j \in \sigma(\mathfrak{E}_j)$ for $j = 1,\dots,k$ and $E_j \in \mathfrak{E}_j$ for $j = k+2,\dots,n$, define two finite measures $\mu_{k+1}$ and $\nu_{k+1}$ on $\sigma(\mathfrak{E}_{k+1})$ by
(5) $\mu_{k+1}(F_{k+1}) = P(\bigcap_{j=1}^k F_j \cap F_{k+1} \cap \bigcap_{j=k+2}^n E_j)$ and $\nu_{k+1}(F_{k+1}) = \prod_{j=1}^k P(F_j)\,P(F_{k+1})\prod_{j=k+2}^n P(E_j)$ for $F_{k+1} \in \sigma(\mathfrak{E}_{k+1})$.
Then for $E_{k+1} \in \mathfrak{E}_{k+1}$ we have
$\mu_{k+1}(E_{k+1}) = P(\bigcap_{j=1}^k F_j \cap \bigcap_{j=k+1}^n E_j) = \prod_{j=1}^k P(F_j)\prod_{j=k+1}^n P(E_j) = \nu_{k+1}(E_{k+1})$,
where the second equality is by the induction hypothesis (4). Thus $\mu_{k+1} = \nu_{k+1}$ on the $\pi$-class $\mathfrak{E}_{k+1}$. We have also
$\mu_{k+1}(\Omega) = P(\bigcap_{j=1}^k F_j \cap \Omega \cap \bigcap_{j=k+2}^n E_j) = \prod_{j=1}^k P(F_j)\,P(\Omega)\prod_{j=k+2}^n P(E_j) = \nu_{k+1}(\Omega)$,
where the second equality is by (4). Therefore by Corollary 1.8 we have $\mu_{k+1} = \nu_{k+1}$ on $\sigma(\mathfrak{E}_{k+1})$, that is,
$P(\bigcap_{j=1}^{k+1} F_j \cap \bigcap_{j=k+2}^n E_j) = \prod_{j=1}^{k+1} P(F_j)\prod_{j=k+2}^n P(E_j)$ for $F_j \in \sigma(\mathfrak{E}_j)$ for $j = 1,\dots,k+1$ and $E_j \in \mathfrak{E}_j$ for $j = k+2,\dots,n$.
Thus by induction (1) holds, and $\{\sigma(\mathfrak{E}_\alpha) : \alpha \in A\}$ is an independent system. ∎

Corollary A.5. Let $\{E_\alpha : \alpha \in A\}$ be an independent system of members of $\mathfrak{F}$ and let $F_\alpha \in \{E_\alpha, E_\alpha^c, \emptyset, \Omega\}$ for each $\alpha \in A$. Then $\{F_\alpha : \alpha \in A\}$ is an independent system.
Proof. Let $\mathfrak{E}_\alpha = \{E_\alpha\}$ for every $\alpha \in A$. Then $\mathfrak{E}_\alpha$ is trivially a $\pi$-class. Thus the independence of $\{E_\alpha : \alpha \in A\}$ implies that of $\{\sigma(\mathfrak{E}_\alpha) : \alpha \in A\}$ by Theorem A.4. Now $\sigma(\mathfrak{E}_\alpha) = \{E_\alpha, E_\alpha^c, \emptyset, \Omega\}$. Thus $\{F_\alpha : \alpha \in A\}$ is an independent system by 2) of Observation A.3. ∎

Let $X$ be a $d$-dimensional random vector on a probability space $(\Omega, \mathfrak{F}, P)$, that is, $X$ is an $\mathfrak{F}/\mathfrak{B}_{\mathbb{R}^d}$-measurable mapping of $\Omega$ into $\mathbb{R}^d$. Let $\sigma(X)$ be the $\sigma$-algebra generated by $X$, that is, the smallest sub-$\sigma$-algebra of $\mathfrak{F}$ with respect to which $X$ is measurable. In other words, $\sigma(X) = X^{-1}(\mathfrak{B}_{\mathbb{R}^d})$.

Definition A.6. Let $\{X_\alpha : \alpha \in A\}$ be a system of random vectors on a probability space $(\Omega, \mathfrak{F}, P)$, the dimension of $X_\alpha$ being $d_\alpha \in \mathbb{N}$. We say that $\{X_\alpha : \alpha \in A\}$ is an independent system if $\{\sigma(X_\alpha) : \alpha \in A\}$ is an independent system in the sense of Definition A.1.

Remark A.7. A system $\{E_\alpha : \alpha \in A\}$ of members of $\mathfrak{F}$ in a probability space $(\Omega, \mathfrak{F}, P)$ is independent if and only if the system of random variables $\{1_{E_\alpha} : \alpha \in A\}$ is independent.

Proof. By Definition A.6, $\{1_{E_\alpha} : \alpha \in A\}$ is an independent system of random variables if and only if $\{\sigma(1_{E_\alpha}) : \alpha \in A\}$ is an independent system of sub-$\sigma$-algebras of $\mathfrak{F}$. But $\sigma(1_{E_\alpha}) = \{E_\alpha, E_\alpha^c, \emptyset, \Omega\} = \sigma(\{E_\alpha\})$, and according to Corollary A.5 the independence of $\{\sigma(\{E_\alpha\}) : \alpha \in A\}$ is equivalent to the independence of $\{E_\alpha : \alpha \in A\}$. ∎

Theorem A.8. Let $\{X_\alpha : \alpha \in A\}$ be an independent system of random vectors on a probability space $(\Omega, \mathfrak{F}, P)$, the dimension of $X_\alpha$ being $d_\alpha \in \mathbb{N}$. For each $\alpha \in A$, let $T_\alpha$ be a $\mathfrak{B}_{\mathbb{R}^{d_\alpha}}/\mathfrak{B}_{\mathbb{R}^{k_\alpha}}$-measurable mapping of $\mathbb{R}^{d_\alpha}$ into $\mathbb{R}^{k_\alpha}$ where $k_\alpha \in \mathbb{N}$. Then $\{T_\alpha \circ X_\alpha : \alpha \in A\}$ is an independent system of random vectors, the dimension of $T_\alpha \circ X_\alpha$ being $k_\alpha$.

Proof. Since $X_\alpha$ is an $\mathfrak{F}/\mathfrak{B}_{\mathbb{R}^{d_\alpha}}$-measurable mapping of $\Omega$ into $\mathbb{R}^{d_\alpha}$ and $T_\alpha$ is a $\mathfrak{B}_{\mathbb{R}^{d_\alpha}}/\mathfrak{B}_{\mathbb{R}^{k_\alpha}}$-measurable mapping of $\mathbb{R}^{d_\alpha}$ into $\mathbb{R}^{k_\alpha}$, $T_\alpha \circ X_\alpha$ is an $\mathfrak{F}/\mathfrak{B}_{\mathbb{R}^{k_\alpha}}$-measurable mapping of $\Omega$ into $\mathbb{R}^{k_\alpha}$, that is, a $k_\alpha$-dimensional random vector on $(\Omega, \mathfrak{F}, P)$.
To show that $\{T_\alpha \circ X_\alpha : \alpha \in A\}$ is an independent system of random vectors, we show that $\{\sigma(T_\alpha \circ X_\alpha) : \alpha \in A\}$ is an independent system of sub-$\sigma$-algebras of $\mathfrak{F}$ according to Definition A.6. Now
(1) $\sigma(T_\alpha \circ X_\alpha) = (T_\alpha \circ X_\alpha)^{-1}(\mathfrak{B}_{\mathbb{R}^{k_\alpha}}) = X_\alpha^{-1}(T_\alpha^{-1}(\mathfrak{B}_{\mathbb{R}^{k_\alpha}}))$.
Since $T_\alpha$ is $\mathfrak{B}_{\mathbb{R}^{d_\alpha}}/\mathfrak{B}_{\mathbb{R}^{k_\alpha}}$-measurable, we have $T_\alpha^{-1}(\mathfrak{B}_{\mathbb{R}^{k_\alpha}}) \subset \mathfrak{B}_{\mathbb{R}^{d_\alpha}}$. Thus from (1) we have
(2) $\sigma(T_\alpha \circ X_\alpha) \subset X_\alpha^{-1}(\mathfrak{B}_{\mathbb{R}^{d_\alpha}}) = \sigma(X_\alpha)$.
Now since $\{X_\alpha : \alpha \in A\}$ is an independent system of random vectors, $\{\sigma(X_\alpha) : \alpha \in A\}$ is an independent system of sub-$\sigma$-algebras of $\mathfrak{F}$. Then (2) implies that $\{\sigma(T_\alpha \circ X_\alpha) : \alpha \in A\}$ is an independent system of sub-$\sigma$-algebras of $\mathfrak{F}$ by 2) of Observation A.3. ∎

Theorem A.9. Let $X_j$ be a $d_j$-dimensional random vector on a probability space $(\Omega, \mathfrak{F}, P)$ for $j = 1,\dots,n$. Consider the $d$-dimensional random vector $X = (X_1,\dots,X_n)$ where $d = d_1 + \dots + d_n$. Let $P_{X_1},\dots,P_{X_n}$, and $P_X$ be the probability distributions of $X_1,\dots,X_n$ and $X$ on $(\mathbb{R}^{d_1}, \mathfrak{B}_{\mathbb{R}^{d_1}}),\dots,(\mathbb{R}^{d_n}, \mathfrak{B}_{\mathbb{R}^{d_n}})$ and $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$ respectively. Then
(1) $\{X_1,\dots,X_n\}$ is an independent system $\Leftrightarrow$ $P_X = P_{X_1} \times \dots \times P_{X_n}$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$.
Proof. Since $\mathfrak{B}_{\mathbb{R}^{d_1}} \times \dots \times \mathfrak{B}_{\mathbb{R}^{d_n}}$ is a semialgebra of subsets of $\mathbb{R}^d$ and since $\sigma(\mathfrak{B}_{\mathbb{R}^{d_1}} \times \dots \times \mathfrak{B}_{\mathbb{R}^{d_n}}) = \mathfrak{B}_{\mathbb{R}^d}$ by Proposition 13.2, we have
(2) $P_X = P_{X_1} \times \dots \times P_{X_n}$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$
$\Leftrightarrow$ $P_X(B_1 \times \dots \times B_n) = \prod_{j=1}^n P_{X_j}(B_j)$ for $B_j \in \mathfrak{B}_{\mathbb{R}^{d_j}}$, $j = 1,\dots,n$
$\Leftrightarrow$ $(P \circ X^{-1})(B_1 \times \dots \times B_n) = \prod_{j=1}^n (P \circ X_j^{-1})(B_j)$ for $B_j \in \mathfrak{B}_{\mathbb{R}^{d_j}}$, $j = 1,\dots,n$.
On the other hand
$\{X_1,\dots,X_n\}$ is an independent system
$\Leftrightarrow$ $\{\sigma(X_1),\dots,\sigma(X_n)\}$ is an independent system
$\Leftrightarrow$ $P(\bigcap_{j=1}^n E_j) = \prod_{j=1}^n P(E_j)$ for $E_j \in \sigma(X_j)$, $j = 1,\dots,n$, by Proposition A.2
$\Leftrightarrow$ $P(\bigcap_{j=1}^n X_j^{-1}(B_j)) = \prod_{j=1}^n P(X_j^{-1}(B_j))$ for $B_j \in \mathfrak{B}_{\mathbb{R}^{d_j}}$, $j = 1,\dots,n$,
since $\sigma(X_j) = X_j^{-1}(\mathfrak{B}_{\mathbb{R}^{d_j}})$. Now the fact that $\bigcap_{j=1}^n X_j^{-1}(B_j) = X^{-1}(B_1 \times \dots \times B_n)$ implies that $P(\bigcap_{j=1}^n X_j^{-1}(B_j)) = (P \circ X^{-1})(B_1 \times \dots \times B_n)$. Thus
(3) $\{X_1,\dots,X_n\}$ is an independent system $\Leftrightarrow$ $(P \circ X^{-1})(B_1 \times \dots \times B_n) = \prod_{j=1}^n P(X_j^{-1}(B_j))$ for $B_j \in \mathfrak{B}_{\mathbb{R}^{d_j}}$, $j = 1,\dots,n$.
By (2) and (3) we have (1). ∎

The characteristic function of a probability measure $\mu$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$ is the complex valued function on $\mathbb{R}^d$ defined by
$\varphi(\eta;\mu) = \int_{\mathbb{R}^d} \exp\{i\langle\xi,\eta\rangle\}\,\mu(d\xi)$ for $\eta \in \mathbb{R}^d$,
where $\langle\xi,\eta\rangle = \sum_{j=1}^d \xi_j\eta_j$ for $\xi,\eta \in \mathbb{R}^d$. It is well known that the characteristic function determines the measure uniquely, that is, if $\mu$ and $\nu$ are two probability measures on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$ and $\varphi(\cdot;\mu) = \varphi(\cdot;\nu)$ on $\mathbb{R}^d$, then $\mu = \nu$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$. Let $\mu_1,\dots,\mu_n$ be probability measures on $(\mathbb{R}^{d_1}, \mathfrak{B}_{\mathbb{R}^{d_1}}),\dots,(\mathbb{R}^{d_n}, \mathfrak{B}_{\mathbb{R}^{d_n}})$ and let $d = d_1 + \dots + d_n$. Then by Fubini's Theorem the characteristic function of the product measure $\mu_1 \times \dots \times \mu_n$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$ satisfies
$\varphi(\eta; \mu_1 \times \dots \times \mu_n) = \prod_{j=1}^n \varphi(\eta_j; \mu_j)$ for $\eta = (\eta_1,\dots,\eta_n) \in \mathbb{R}^{d_1} \times \dots \times \mathbb{R}^{d_n}$.
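The product formula just stated can be verified exactly for discrete measures, where both sides are finite sums. A minimal sketch with two illustrative two-point measures on $\mathbb{R}$:

```python
import cmath

def char_fn(points, probs, eta):
    """Characteristic function of a discrete measure sum_j p_j * delta_{x_j} at eta."""
    return sum(p * cmath.exp(1j * x * eta) for x, p in zip(points, probs))

# two illustrative one-dimensional measures
mu1 = ([0.0, 1.0], [0.5, 0.5])
mu2 = ([-1.0, 2.0], [0.3, 0.7])

def char_fn_product(m1, m2, eta1, eta2):
    """Characteristic function of the product measure on R^2: its atoms are
    (x, y) with weight p*q, and <(x, y), (eta1, eta2)> = x*eta1 + y*eta2."""
    return sum(p * q * cmath.exp(1j * (x * eta1 + y * eta2))
               for x, p in zip(*m1) for y, q in zip(*m2))

eta1, eta2 = 0.7, -1.3
lhs = char_fn_product(mu1, mu2, eta1, eta2)
rhs = char_fn(*mu1, eta1) * char_fn(*mu2, eta2)
assert abs(lhs - rhs) < 1e-12
```

The equality holds term by term: expanding the product of the two finite sums on the right reproduces the double sum on the left, which is exactly the Fubini computation in the text.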
Theorem A.10. (M. Kac) Consider a system of random vectors $\{X_1,\dots,X_n\}$ on a probability space $(\Omega, \mathfrak{F}, P)$, the dimension of $X_j$ being $d_j \in \mathbb{N}$. Let $P_{X_j}$ be the probability distribution of $X_j$ on $(\mathbb{R}^{d_j}, \mathfrak{B}_{\mathbb{R}^{d_j}})$ for $j = 1,\dots,n$, and let $P_X$ be the probability distribution of $X = (X_1,\dots,X_n)$ on $(\mathbb{R}^d, \mathfrak{B}_{\mathbb{R}^d})$ where $d = d_1 + \dots + d_n$. Then $\{X_1,\dots,X_n\}$ is an independent system if and only if
(1) $\varphi(\eta; P_X) = \prod_{j=1}^n \varphi(\eta_j; P_{X_j})$ for $\eta = (\eta_1,\dots,\eta_n) \in \mathbb{R}^{d_1} \times \dots \times \mathbb{R}^{d_n}$,
or equivalently
(2) $\mathbf{E}\big[\prod_{j=1}^n \exp\{i\langle X_j, \eta_j\rangle\}\big] = \prod_{j=1}^n \mathbf{E}\big[\exp\{i\langle X_j, \eta_j\rangle\}\big]$ for $\eta = (\eta_1,\dots,\eta_n) \in \mathbb{R}^{d_1} \times \dots \times \mathbb{R}^{d_n}$.
Proof. Note that (1) can be written as
(3) $\displaystyle \int_{\mathbb{R}^d} \exp\{i\langle\xi,\eta\rangle\}\,P_X(d\xi) = \prod_{j=1}^n \int_{\mathbb{R}^{d_j}} \exp\{i\langle\xi_j,\eta_j\rangle\}\,P_{X_j}(d\xi_j)$,
and (3) is equivalent to (2) by the Image Probability Law. Now suppose $\{X_1,\dots,X_n\}$ is an independent system. Then by Theorem A.9 we have $P_X = P_{X_1} \times \dots \times P_{X_n}$. Fubini's Theorem then implies (3). Conversely, suppose (3) holds. To show the independence of $\{X_1,\dots,X_n\}$ it suffices to show that $P_X = P_{X_1} \times \dots \times P_{X_n}$ according to Theorem A.9. Let $Q = P_{X_1} \times \dots \times P_{X_n}$. If we show that $\varphi(\eta; P_X) = \varphi(\eta; Q)$ for $\eta \in \mathbb{R}^d = \mathbb{R}^{d_1} \times \dots \times \mathbb{R}^{d_n}$, then $P_X = Q$ and we are done. Now
$\displaystyle \varphi(\eta; P_X) = \int_{\mathbb{R}^d} \exp\{i\langle\xi,\eta\rangle\}\,P_X(d\xi) = \prod_{j=1}^n \int_{\mathbb{R}^{d_j}} \exp\{i\langle\xi_j,\eta_j\rangle\}\,P_{X_j}(d\xi_j) = \int_{\mathbb{R}^d} \exp\{i\langle\xi,\eta\rangle\}\,Q(d\xi) = \varphi(\eta; Q)$,
where the second equality is by (3) and the third equality is by the fact that $Q = P_{X_1} \times \dots \times P_{X_n}$ and Fubini's Theorem. ∎

Theorem A.11. Consider a system of random vectors on a probability space $(\Omega, \mathfrak{F}, P)$ given by (1)
$\{X_{k,q_k} : q_k = 1,\dots,n_k;\ k = 1,\dots,m\}$,
the dimension of $X_{k,q_k}$ being $d_{k,q_k} \in \mathbb{N}$. For each $k = 1,\dots,m$, consider the subsystem
(2) $\{X_{k,1},\dots,X_{k,n_k}\}$.
Let $X_k$ be the $d_{k,1} + \dots + d_{k,n_k}$-dimensional random vector defined by $X_k = (X_{k,1},\dots,X_{k,n_k})$ and consider the system of random vectors
(3) $\{X_k : k = 1,\dots,m\}$.
1) If the system (1) is independent, then so is the system (3).
2) Conversely, if the system (3) is independent and furthermore for every $k = 1,\dots,m$ the system (2) is independent, then the system (1) is independent.
Proof. Suppose the system (1) is independent. Then
$\displaystyle \mathbf{E}\Big[\prod_{k=1}^m \exp\{i\langle X_k, \eta_k\rangle\}\Big] = \mathbf{E}\Big[\exp\Big\{i\sum_{k=1}^m \langle X_k, \eta_k\rangle\Big\}\Big] = \mathbf{E}\Big[\exp\Big\{i\sum_{k=1}^m \sum_{q_k=1}^{n_k} \langle X_{k,q_k}, \eta_{k,q_k}\rangle\Big\}\Big] = \prod_{k=1}^m \prod_{q_k=1}^{n_k} \mathbf{E}\big[\exp\{i\langle X_{k,q_k}, \eta_{k,q_k}\rangle\}\big] = \prod_{k=1}^m \mathbf{E}\big[\exp\{i\langle X_k, \eta_k\rangle\}\big]$,
where the third equality is by the independence of the system (1) and Theorem A.10, and the last equality is by the independence of the system (2) as a subsystem of the independent system (1) and by Theorem A.10. According to Theorem A.10, this shows the independence of the system (3). Conversely, suppose the system (3) is independent and for every $k = 1,\dots,m$ the system (2) is independent. Then
$\displaystyle \mathbf{E}\Big[\prod_{k=1}^m \prod_{q_k=1}^{n_k} \exp\{i\langle X_{k,q_k}, \eta_{k,q_k}\rangle\}\Big] = \mathbf{E}\Big[\exp\Big\{i\sum_{k=1}^m \sum_{q_k=1}^{n_k} \langle X_{k,q_k}, \eta_{k,q_k}\rangle\Big\}\Big] = \mathbf{E}\Big[\prod_{k=1}^m \exp\{i\langle X_k, \eta_k\rangle\}\Big] = \prod_{k=1}^m \mathbf{E}\big[\exp\{i\langle X_k, \eta_k\rangle\}\big] = \prod_{k=1}^m \prod_{q_k=1}^{n_k} \mathbf{E}\big[\exp\{i\langle X_{k,q_k}, \eta_{k,q_k}\rangle\}\big]$,
where the second from last equality is by the independence of the system (3) and Theorem A.10, and the last equality is by the independence of the system (2) and Theorem A.10. This shows the independence of the system (1) according to Theorem A.10. ∎

Definition A.12. Let $\mathfrak{E}_\alpha$ be a subcollection of $\mathfrak{F}$ for $\alpha \in A$ in a probability space $(\Omega, \mathfrak{F}, P)$ and let $X_\beta$ be a $d_\beta$-dimensional random vector on $(\Omega, \mathfrak{F}, P)$ for $\beta \in B$. We say that $\{\mathfrak{E}_\alpha, X_\beta : \alpha \in A, \beta \in B\}$ is an independent system if $\{\mathfrak{E}_\alpha, \sigma(X_\beta) : \alpha \in A, \beta \in B\}$ is an independent system in the sense of Definition A.1.

Theorem A.13. Let $\mathfrak{G}_j$ be a sub-$\sigma$-algebra of $\mathfrak{F}$ in a probability space $(\Omega, \mathfrak{F}, P)$ for $j = 1,\dots,m$ and let $X_k$ be a $d_k$-dimensional random vector on $(\Omega, \mathfrak{F}, P)$ for $k = 1,\dots,n$. Then the following three conditions are equivalent:
1°. $\{\mathfrak{G}_j, X_k : j = 1,\dots,m;\ k = 1,\dots,n\}$ is an independent system in the sense of Definition A.12, that is, $\{\mathfrak{G}_j, \sigma(X_k) : j = 1,\dots,m;\ k = 1,\dots,n\}$ is an independent system.
To show that 3° implies 1°, it suffices to verify that for arbitrary $G_j \in \mathfrak{G}_j$ for $j = 1,\dots,m$ and $F_k \in \sigma(X_k)$ for $k = 1,\dots,n$ we have
(1) $P(\bigcap_{j=1}^m G_j \cap \bigcap_{k=1}^n F_k) = \prod_{j=1}^m P(G_j)\prod_{k=1}^n P(F_k)$.
Now $1_{G_j}$ is a $\mathfrak{G}_j$-measurable random variable and $1_{F_k}$ is a $\sigma(X_k)$-measurable random variable. Thus by 3°, $\{1_{G_j}, 1_{F_k} : j = 1,\dots,m;\ k = 1,\dots,n\}$ is an independent system, and thus by Definition A.6 the system $\{\sigma(1_{G_j}), \sigma(1_{F_k}) : j = 1,\dots,m;\ k = 1,\dots,n\}$ is independent. Then since $G_j \in \sigma(1_{G_j})$ and $F_k \in \sigma(1_{F_k})$, (1) follows.
Appendix B. Conditional Expectations

[I] Definition and Basic Equalities

Let $\mathfrak{G}$ be a sub-$\sigma$-algebra of $\mathfrak{F}$ in a probability space $(\Omega, \mathfrak{F}, P)$. Two $\mathfrak{G}$-measurable extended real valued random variables $X_1$ and $X_2$ are said to be $(\mathfrak{G}, P)$-a.e. equal if $X_1 = X_2$ on $\Lambda^c$ where $\Lambda \in \mathfrak{G}$ with $P(\Lambda) = 0$. In the collection of all $\mathfrak{G}$-measurable extended real valued random variables on $(\Omega, \mathfrak{F}, P)$, $(\mathfrak{G}, P)$-a.e. equality is an equivalence relation.

Proposition B.1. Let $\mathfrak{G}$ be a sub-$\sigma$-algebra of $\mathfrak{F}$ in a probability space $(\Omega, \mathfrak{F}, P)$. Consider the equivalence relation of $(\mathfrak{G}, P)$-a.e. equality in the collection of all $\mathfrak{G}$-measurable extended real valued random variables on $(\Omega, \mathfrak{F}, P)$. For every integrable extended real valued random variable $X$ on $(\Omega, \mathfrak{F}, P)$ there exists a unique equivalence class consisting of all $\mathfrak{G}$-measurable extended real valued random variables $Y$ on $(\Omega, \mathfrak{F}, P)$ such that
$\int_G Y\,dP = \int_G X\,dP$ for every $G \in \mathfrak{G}$.

Proof. For an arbitrary integrable extended real valued random variable $X$ on $(\Omega, \mathfrak{F}, P)$, define a set function $\mu$ on $\mathfrak{G}$ by setting $\mu(G) = \int_G X\,dP$ for $G \in \mathfrak{G}$. Then $\mu$ is a signed measure on $\mathfrak{G}$. Since $\mu(G) = 0$ for every $G \in \mathfrak{G}$ with $P(G) = 0$, $\mu$ is absolutely continuous with respect to $P$ on $(\Omega, \mathfrak{G})$. The Radon-Nikodym derivative of $\mu$ with respect to $P$ on $(\Omega, \mathfrak{G})$ is then an equivalence class with respect to $(\mathfrak{G}, P)$-a.e. equality consisting of all $\mathfrak{G}$-measurable extended real valued functions $Y$ on $\Omega$ such that $\int_G Y\,dP = \mu(G)$, that is, $\int_G Y\,dP = \int_G X\,dP$, for every $G \in \mathfrak{G}$. ∎

Definition B.2. Let $X$ be an integrable extended real valued random variable on a proba-
bility space (Q, 5, P) and let <& be a sub-a-algebra o / J . By the conditional expectation of X given 0, denoted by E(X | 0 ) , we mean the unique equivalence class of all 0-measurable extended real valued random variables Y on (Q, 5, P) such that (YdP=j Ja
XdP
for every G € <5.
JG
The members of the equivalence class E(X\<3) are called versions of the conditional ex pectation of X given 0. Thus an extended real valued random variable Y is a version of E(X 10) if and only if it satisfies the following two conditions: 1°. Y is &-measurable. 2°. fG Y dP = fa X dPfor every G € 0. For convenience the notation E ( X | 0 ) is also used for an arbitrary version of the con ditional expectation. For an arbitrary i 6 5, the conditional probability of A given 0, denoted by P(A\<8), is defined as the conditional expectation of the random variable 1A given 0 , that is, F ( A | 0 ) = ECU |®). Note that every version of Y of E(X | 0 ) is an integrable random variable since O g 0 and thus fnY dP = / n X dP £ R. This implies that every version is finite a.e. on (Q, 0 , P) and thus there exists a real valued version of E(X | 0 ) . Since the same notation E(X 10) is used for both an equivalence class and an arbitrary representative of it, what is meant by the notation E(X 10) should be determined from the context. For instance in expressions such as " JG E(X | <8)dP " and "E(X 2 10) > E(X] 10) a.e. on (Q.,®,P)", E ( X | 0 ) , E ( X i | 0 ) , a n d E ( X 2 | 0 ) are arbitrary versions of the con ditional expectations, and in expressions such as " Z £ E ( X | 0 ) " and "ECX^©!) C E(X2102)", the notations E(X \ 0 ) , E(X, 10,), and E(X 2 10 2 ) are for equivalence classes. For a random variable Z the expression " E ( X | 0 ) = Z" is also used to indicate that Z is a version of E(X 10). The notation ZE(X \ 0 ) may mean the collection of all versions multiplied by Z of an arbitrary version multiplied by Z. Observation B.3. Let X\ and X2 be two integrable extended real valued random variables on a probability space {SI, 5, P) and let 0 be a sub-cr-algebra of J. 
If there exists a 𝔊-measurable extended real valued random variable Y₀ on (Ω, 𝔉, P) which is a version of both E(X₁|𝔊) and E(X₂|𝔊), then E(X₁|𝔊) = E(X₂|𝔊); that is, if E(X₁|𝔊) ∩ E(X₂|𝔊) ≠ ∅ then E(X₁|𝔊) = E(X₂|𝔊).
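For a σ-algebra generated by a finite partition of Ω, conditions 1° and 2° of Definition B.2 pin E(X|𝔊) down concretely: on each atom of the partition a version equals the P-average of X over that atom. The following sketch illustrates this on a four-point sample space; the helper `cond_exp` and all numeric values are ours, not from the text.

```python
from fractions import Fraction as F

# Sample space {0,1,2,3} with probabilities; G is the sigma-algebra
# generated by the partition {{0, 1}, {2, 3}} (our illustrative choice).
P = {0: F(1, 4), 1: F(1, 4), 2: F(1, 4), 3: F(1, 4)}
X = {0: F(2), 1: F(4), 2: F(-1), 3: F(3)}
partition = [{0, 1}, {2, 3}]

def cond_exp(X, P, partition):
    """A version of E(X|G): constant on each atom, equal to the
    P-average of X over that atom (conditions 1 and 2 of Definition B.2)."""
    Y = {}
    for atom in partition:
        p_atom = sum(P[w] for w in atom)
        avg = sum(X[w] * P[w] for w in atom) / p_atom
        for w in atom:
            Y[w] = avg
    return Y

Y = cond_exp(X, P, partition)

# Condition 2: the integrals of Y and X agree on every atom of G.
for atom in partition:
    assert sum(Y[w] * P[w] for w in atom) == sum(X[w] * P[w] for w in atom)
```

On the first atom the version takes the value (2·¼ + 4·¼)/½ = 3, on the second (−1·¼ + 3·¼)/½ = 1, consistent with Observation B.7 below: E(Y) = E(X) = 2.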
Proof. Let Y_j be an arbitrary version of E(X_j|𝔊) for j = 1, 2. Then Y_j = Y₀ a.e. on (Ω, 𝔊, P) for j = 1, 2, so that Y₁ = Y₂ a.e. on (Ω, 𝔊, P). Thus the two equivalence classes with respect to (𝔊, P)-a.e. equality, E(X₁|𝔊) and E(X₂|𝔊), are identical. ■

Observation B.4. Let 𝔊 be a sub-σ-algebra of 𝔉 in a probability space (Ω, 𝔉, P). Then for any c ∈ ℝ we have c ∈ E(c|𝔊), that is, {c} ⊂ E(c|𝔊), but in general {c} ≠ E(c|𝔊) since E(c|𝔊) consists of all 𝔊-measurable extended real valued random variables which are (𝔊, P)-a.e. equal to c on Ω.

Theorem B.5. Let X, X₁, and X₂ be integrable extended real valued random variables on a probability space (Ω, 𝔉, P) and let 𝔊 be a sub-σ-algebra of 𝔉. Then
1) X₁ = X₂ a.e. on (Ω, 𝔉, P) ⇒ E(X₁|𝔊) = E(X₂|𝔊),
2) X is 𝔊-measurable ⇒ X ∈ E(X|𝔊),
3) X = c a.e. on (Ω, 𝔉, P) ⇒ c ∈ E(X|𝔊),
4) X ≥ 0 a.e. on (Ω, 𝔉, P) ⇒ E(X|𝔊) ≥ 0 a.e. on (Ω, 𝔊, P),
5) E(cX|𝔊) ⊃ cE(X|𝔊) for c ∈ ℝ, and E(cX|𝔊) = cE(X|𝔊) when c ≠ 0,
6) E(c₁X₁ + c₂X₂|𝔊) ⊃ c₁E(X₁|𝔊) + c₂E(X₂|𝔊), with equality when c₁ and c₂ are not both equal to 0,
7) X₁ ≤ X₂ a.e. on (Ω, 𝔉, P) ⇒ E(X₁|𝔊) ≤ E(X₂|𝔊) a.e. on (Ω, 𝔊, P).

Proof. 1) and 2) are immediate from Definition B.2.
3) A constant c is always 𝔊-measurable. If X = c a.e. on (Ω, 𝔉, P), then for any G ∈ 𝔊 ⊂ 𝔉 we have ∫_G c dP = ∫_G X dP. Thus c satisfies conditions 1° and 2° in Definition B.2 and is therefore a version of E(X|𝔊).
4) Suppose X ≥ 0 a.e. on (Ω, 𝔉, P). Let Y ∈ E(X|𝔊). Then ∫_G Y dP = ∫_G X dP ≥ 0 for every G ∈ 𝔊 and therefore Y ≥ 0 a.e. on (Ω, 𝔊, P).
5) Let Y ∈ E(X|𝔊). Then Y is 𝔊-measurable and so is cY. For every G ∈ 𝔊 we have ∫_G cY dP = c∫_G Y dP = c∫_G X dP = ∫_G cX dP, and thus cY ∈ E(cX|𝔊). This shows that cE(X|𝔊) ⊂ E(cX|𝔊) for any c ∈ ℝ. Suppose c ≠ 0. To show that in this case we have E(cX|𝔊) ⊂ cE(X|𝔊), let Z ∈ E(cX|𝔊). Then Z is 𝔊-measurable and, with Y ∈ E(X|𝔊), ∫_G Z dP = ∫_G cX dP = c∫_G X dP = c∫_G Y dP for every G ∈ 𝔊, and thus Z = cY a.e.
on (Ω, 𝔊, P). This shows that Z/c = Y a.e. on (Ω, 𝔊, P) and thus Z/c ∈ E(X|𝔊), that is, Z ∈ cE(X|𝔊). Therefore E(cX|𝔊) ⊂ cE(X|𝔊) and thus E(cX|𝔊) = cE(X|𝔊) when c ≠ 0.
6) Let Y₁ ∈ E(X₁|𝔊) and Y₂ ∈ E(X₂|𝔊). Then c₁Y₁ + c₂Y₂ is 𝔊-measurable and for every G ∈ 𝔊 we have

∫_G (c₁Y₁ + c₂Y₂) dP = c₁∫_G Y₁ dP + c₂∫_G Y₂ dP = c₁∫_G X₁ dP + c₂∫_G X₂ dP = ∫_G (c₁X₁ + c₂X₂) dP.

Thus c₁Y₁ + c₂Y₂ ∈ E(c₁X₁ + c₂X₂|𝔊). This shows that c₁E(X₁|𝔊) + c₂E(X₂|𝔊) ⊂ E(c₁X₁ + c₂X₂|𝔊). Suppose at least one of c₁ and c₂ is not equal to 0, say c₂ ≠ 0. To show E(c₁X₁ + c₂X₂|𝔊) ⊂ c₁E(X₁|𝔊) + c₂E(X₂|𝔊), let Z ∈ E(c₁X₁ + c₂X₂|𝔊). Let Y₁ ∈ E(X₁|𝔊) and Y₂ = (1/c₂){Z − c₁Y₁}. Since Z and Y₁ are 𝔊-measurable, so is Y₂. Also for every G ∈ 𝔊 we have

∫_G Y₂ dP = (1/c₂){∫_G Z dP − c₁∫_G Y₁ dP} = (1/c₂)∫_G {c₁X₁ + c₂X₂} dP − (c₁/c₂)∫_G X₁ dP = ∫_G X₂ dP.

This shows that Y₂ ∈ E(X₂|𝔊). Thus we have shown that for an arbitrary version Z of E(c₁X₁ + c₂X₂|𝔊) we have Z = c₁Y₁ + c₂Y₂ where Y₁ is a version of E(X₁|𝔊) and Y₂ is a version of E(X₂|𝔊). Thus Z ∈ c₁E(X₁|𝔊) + c₂E(X₂|𝔊) and then E(c₁X₁ + c₂X₂|𝔊) ⊂ c₁E(X₁|𝔊) + c₂E(X₂|𝔊). Consequently E(c₁X₁ + c₂X₂|𝔊) = c₁E(X₁|𝔊) + c₂E(X₂|𝔊).
7) If X₂ ≥ X₁ a.e. on (Ω, 𝔉, P), then X₂ − X₁ ≥ 0 a.e. on (Ω, 𝔉, P) so that by 4) we have E(X₂ − X₁|𝔊) ≥ 0 a.e. on (Ω, 𝔊, P). By 6) we have E(X₂ − X₁|𝔊) = E(X₂|𝔊) − E(X₁|𝔊). Thus E(X₂|𝔊) ≥ E(X₁|𝔊) a.e. on (Ω, 𝔊, P). ■

Remark B.6. Regarding 5) of Theorem B.5, note that cE(X|𝔊) = E(cX|𝔊) does not hold in general when c = 0, since in this case we have 0 · E(X|𝔊) = {0} while E(0 · X|𝔊) = E(0|𝔊), which consists of all 𝔊-measurable extended real valued random variables on (Ω, 𝔉, P) which are (𝔊, P)-a.e. equal to 0.

Observation B.7. Let X be an integrable extended real valued random variable on a probability space (Ω, 𝔉, P) and let 𝔊 be a sub-σ-algebra of 𝔉. Then E[E(X|𝔊)] = E(X), that is, for every version Y of E(X|𝔊) we have E(Y) = E(X).
Proof. If Y ∈ E(X|𝔊), then ∫_G Y dP = ∫_G X dP for every G ∈ 𝔊, and in particular with Ω ∈ 𝔊 we have ∫_Ω Y dP = ∫_Ω X dP, that is, E(Y) = E(X). ■

Theorem B.8. Let 𝔊 be a sub-σ-algebra of 𝔉 in a probability space. Let X be an integrable extended real valued random variable and Z be a 𝔊-measurable extended real valued random variable on (Ω, 𝔉, P) such that ZX is integrable. Then E(ZX|𝔊) = Z·E(X|𝔊).

Proof. If we show that for an arbitrary version Y of E(X|𝔊), ZY is a version of E(ZX|𝔊), then E(ZX|𝔊) ⊃ Z·E(X|𝔊). The fact that ZY is a version of E(ZX|𝔊) also implies that for an arbitrary version V of E(ZX|𝔊) we have V = ZY a.e. on (Ω, 𝔊, P), and thus V is in Z·E(X|𝔊), so that E(ZX|𝔊) ⊂ Z·E(X|𝔊) and therefore E(ZX|𝔊) = Z·E(X|𝔊). Thus it remains to show that for an arbitrary version Y of E(X|𝔊), ZY is a version of E(ZX|𝔊). Since ZY is 𝔊-measurable, it remains to show that

(1) ∫_G ZY dP = ∫_G ZX dP  for every G ∈ 𝔊.

To prove (1), consider first the case where Z = 1_{G₀} where G₀ ∈ 𝔊. In this case we have

∫_G ZY dP = ∫_G 1_{G₀} Y dP = ∫_{G∩G₀} Y dP = ∫_{G∩G₀} X dP = ∫_G 1_{G₀} X dP = ∫_G ZX dP,

where the third equality is by the fact that G ∩ G₀ ∈ 𝔊. This verifies (1) for the particular case Z = 1_{G₀} where G₀ ∈ 𝔊. If Z is a simple function on (Ω, 𝔊, P) then (1) holds by the linearity of the integral with respect to the integrand. Next consider the case where Z ≥ 0 and X ≥ 0 on Ω. Since Z ≥ 0 on Ω there exists an increasing sequence {Zₙ : n ∈ ℕ} of nonnegative simple functions on (Ω, 𝔊, P) such that Zₙ ↑ Z on Ω. Since Zₙ is a simple function on (Ω, 𝔊, P) we have by our result above

(2) ∫_G ZₙY dP = ∫_G ZₙX dP  for G ∈ 𝔊.
Since X ≥ 0 on Ω we have Y ≥ 0 a.e. on (Ω, 𝔊, P) by 4) of Theorem B.5. Thus Zₙ ↑ Z on Ω implies ZₙY ↑ ZY a.e. on (Ω, 𝔊, P). Since X ≥ 0 on Ω we have also ZₙX ↑ ZX on Ω. Letting n → ∞ in (2) we have (1) by the Monotone Convergence Theorem for nonnegative functions. Thus we have shown that for the case where Z ≥ 0 and X ≥ 0 on Ω, (1) holds and therefore we have

(3) Z ≥ 0 and X ≥ 0 on Ω ⇒ E(ZX|𝔊) = Z·E(X|𝔊).

Let us remove the condition that Z ≥ 0 on Ω but retain the condition that X ≥ 0 on Ω. In this case we have

E(ZX|𝔊) = E({Z⁺ − Z⁻}X|𝔊) = E(Z⁺X|𝔊) − E(Z⁻X|𝔊) = Z⁺E(X|𝔊) − Z⁻E(X|𝔊) = Z·E(X|𝔊),

where the third equality is by (3). Finally if we remove the condition that X ≥ 0 on Ω, then

E(ZX|𝔊) = E(Z{X⁺ − X⁻}|𝔊) = E(ZX⁺|𝔊) − E(ZX⁻|𝔊) = Z·E(X⁺|𝔊) − Z·E(X⁻|𝔊) = Z{E(X⁺|𝔊) − E(X⁻|𝔊)} = Z·E(X|𝔊),

where the third equality is from the result above for nonnegative X and the last equality is by 6) of Theorem B.5. ■

As a particular case of Theorem B.8, let us observe that if X and Z are two extended real valued random variables on a probability space (Ω, 𝔉, P), Z is 𝔊-measurable where 𝔊 is a sub-σ-algebra of 𝔉, X ∈ L_p(Ω, 𝔉, P) and Z ∈ L_q(Ω, 𝔉, P) where p, q ∈ (1, ∞) with 1/p + 1/q = 1, then ZX is integrable so that E(ZX|𝔊) = Z·E(X|𝔊).

Definition B.9. Let X be an integrable extended real valued random variable on a probability space (Ω, 𝔉, P) and let 𝔊 and ℌ be sub-σ-algebras of 𝔉. With an arbitrary version Y of E(X|𝔊), we define

E[E(X|𝔊)|ℌ] = E(Y|ℌ).

An alternate notation E(X|𝔊|ℌ) is used for E[E(X|𝔊)|ℌ]. The fact that the equivalence class E[E(X|𝔊)|ℌ] of random variables does not depend on the choice of the version Y of E(X|𝔊) in Definition B.9 is shown in the following proposition.

Proposition B.10. Let X be an integrable extended real valued random variable on a probability space (Ω, 𝔉, P) and let 𝔊 and ℌ be sub-σ-algebras of 𝔉. Let Y₁ and Y₂ be two versions of E(X|𝔊). Then E(Y₁|ℌ) = E(Y₂|ℌ)
as equivalence classes.

Proof. For j = 1, 2, E(Y_j|ℌ) consists of all ℌ-measurable extended real valued random variables Z such that ∫_H Z dP = ∫_H Y_j dP for every H ∈ ℌ. Now since Y₁ and Y₂ are versions of E(X|𝔊), we have Y₁ = Y₂ a.e. on (Ω, 𝔊, P), which implies that Y₁ = Y₂ a.e. on (Ω, 𝔉, P), so that ∫_A Y₁ dP = ∫_A Y₂ dP for every A ∈ 𝔉 and in particular ∫_H Y₁ dP = ∫_H Y₂ dP for every H ∈ ℌ. If W₁ is a version of E(Y₁|ℌ) and W₂ is a version of E(Y₂|ℌ), then for any H ∈ ℌ we have ∫_H W₁ dP = ∫_H Y₁ dP = ∫_H Y₂ dP = ∫_H W₂ dP, and thus W₁ = W₂ a.e. on (Ω, ℌ, P). From this we have E(Y₁|ℌ) = E(Y₂|ℌ). ■

Observation B.11. For E[E(X|𝔊)|ℌ] defined above, note that

E[E[E(X|𝔊)|ℌ]] = E[E(X|𝔊)] = E(X).

Note also that if Z ∈ E[E(X|𝔊)|ℌ] and W is an ℌ-measurable extended real valued function such that W = Z a.e. on (Ω, ℌ, P), then W ∈ E[E(X|𝔊)|ℌ].

Theorem B.12. Let X be an integrable extended real valued random variable on a probability space (Ω, 𝔉, P) and let 𝔊₁ be a sub-σ-algebra of 𝔉 and 𝔊₂ be a sub-σ-algebra of 𝔊₁. Then E(X|𝔊₁|𝔊₂) = E(X|𝔊₂).

Proof. Let Y be an arbitrary version of E(X|𝔊₁) and let Z be an arbitrary version of E(Y|𝔊₂). To show E(Y|𝔊₂) = E(X|𝔊₂), it suffices according to Observation B.3 to show that there exists a 𝔊₂-measurable extended real valued random variable which is a version of both E(Y|𝔊₂) and E(X|𝔊₂). Since Z is a version of E(Y|𝔊₂), it suffices to show that Z is also a version of E(X|𝔊₂). Now Z is 𝔊₂-measurable. Also for every G₂ ∈ 𝔊₂ we have ∫_{G₂} Z dP = ∫_{G₂} Y dP = ∫_{G₂} X dP, where the first equality is from the fact that Z is a version of E(Y|𝔊₂) and the second equality is from the fact that Y is a version of E(X|𝔊₁) and G₂ ∈ 𝔊₂ ⊂ 𝔊₁. Thus we have shown that for an arbitrary version Y of E(X|𝔊₁) we have E(Y|𝔊₂) = E(X|𝔊₂). Then E[E(X|𝔊₁)|𝔊₂] = E(X|𝔊₂) by Definition B.9. ■

Theorem B.13.
Let X be an integrable extended real valued random variable on a probability space (Ω, 𝔉, P) and let 𝔊₁ be a sub-σ-algebra of 𝔉 and 𝔊₂ be a sub-σ-algebra of 𝔊₁. Then

(1) E(X|𝔊₂|𝔊₁) ⊃ E(X|𝔊₂).
If every version of E(X|𝔊₂|𝔊₁) is 𝔊₂-measurable, then

(2) E(X|𝔊₂|𝔊₁) = E(X|𝔊₂).
Proof. 1) To prove (1), let Y be an arbitrary version of E(X|𝔊₂). Then Y is 𝔊₂-measurable and hence 𝔊₁-measurable. According to Observation B.11, to show that Y is a version of E(X|𝔊₂|𝔊₁) it suffices to show that there exists a version Z of E(X|𝔊₂|𝔊₁) such that Z = Y a.e. on (Ω, 𝔊₁, P). Now let Z be an arbitrary version of E(X|𝔊₂|𝔊₁). Then Z ∈ E(W|𝔊₁) for some W ∈ E(X|𝔊₂). Thus for an arbitrary G₁ ∈ 𝔊₁ we have ∫_{G₁} Z dP = ∫_{G₁} W dP. Since W and Y are versions of E(X|𝔊₂), we have W = Y a.e. on (Ω, 𝔊₂, P), which implies W = Y a.e. on (Ω, 𝔉, P) and consequently ∫_{G₁} W dP = ∫_{G₁} Y dP. Thus ∫_{G₁} Z dP = ∫_{G₁} Y dP. Since both Z and Y are 𝔊₁-measurable, this implies that Z = Y a.e. on (Ω, 𝔊₁, P). This proves (1).
2) Suppose every version of E(X|𝔊₂|𝔊₁) is 𝔊₂-measurable. To prove (2), we show that an arbitrary version Z of E(X|𝔊₂|𝔊₁) is a version of E(X|𝔊₂). Since Z is 𝔊₂-measurable, it suffices to show that there exists a version Y of E(X|𝔊₂) such that Z = Y a.e. on (Ω, 𝔊₂, P), according to Observation B.11. For this it suffices to show that ∫_{G₂} Z dP = ∫_{G₂} Y dP for every G₂ ∈ 𝔊₂. Now if G₂ ∈ 𝔊₂, then G₂ ∈ 𝔊₁ so that

∫_{G₂} Z dP = ∫_{G₂} E(X|𝔊₂) dP = ∫_{G₂} X dP = ∫_{G₂} Y dP.

This proves (2). ■

According to (1) of Theorem B.13, we have E(X|𝔊₂|𝔊₁) ⊃ E(X|𝔊₂) if 𝔊₂ ⊂ 𝔊₁ ⊂ 𝔉. For an example of E(X|𝔊₂|𝔊₁) ≠ E(X|𝔊₂), see Example B.14 below.

Example B.14. Let (Ω, 𝔉, P) = ((0, 1], 𝔐_{(0,1]}, m_L), where 𝔐_{(0,1]} is the σ-algebra of all Lebesgue measurable subsets of (0, 1] and m_L is the Lebesgue measure. Let f(x) = x for x ∈ (0, 1]. Consider 𝔊₁ = 𝔉 = 𝔐_{(0,1]} and 𝔊₂ = {∅, (0, 1]}. Since the only 𝔊₂-measurable extended real valued functions on (0, 1] are the constant functions on (0, 1], and since ∫_{(0,1]} c dm_L = ∫_{(0,1]} f dm_L = 1/2 for c ∈ ℝ implies that c = 1/2, E(f|𝔊₂) consists of the constant function 1/2 on (0, 1]. Thus E(f|𝔊₂|𝔊₁) consists of all Lebesgue measurable extended real valued functions g on (0, 1] such that ∫_{G₁} g dm_L = ∫_{G₁} (1/2) dm_L = (1/2)m_L(G₁) for every G₁ ∈ 𝔐_{(0,1]}. The constant function 1/2 is such a function but it is not the only one. Thus we have E(f|𝔊₂|𝔊₁) ⊃ E(f|𝔊₂) and E(f|𝔊₂|𝔊₁) ≠ E(f|𝔊₂). Since E(f|𝔊₁) consists of all Lebesgue measurable extended real valued functions g on (0, 1] such that
∫_{G₁} g dm_L = ∫_{G₁} f dm_L for every G₁ ∈ 𝔐_{(0,1]}, and since the constant function 1/2 does not satisfy this condition, we have E(f|𝔊₂) ∩ E(f|𝔊₁) = ∅.
The condition 𝔊₂ ⊂ 𝔊₁ implies that every version of E(X|𝔊₂) is 𝔊₁-measurable. However in general there is no inclusion relation between E(X|𝔊₂) and E(X|𝔊₁), since every version Y of E(X|𝔊₂) satisfies the condition ∫_G Y dP = ∫_G X dP for G ∈ 𝔊₂ but not necessarily for G ∈ 𝔊₁. Example B.14 above presents a case where E(X|𝔊₂) ∩ E(X|𝔊₁) = ∅. See Example B.15 for a case where E(X|𝔊₂) ⊂ E(X|𝔊₁) but E(X|𝔊₂) ≠ E(X|𝔊₁).

Example B.15. Let (Ω, 𝔉, P) = ((0, 1], 𝔐_{(0,1]}, m_L) and let f be an integrable Lebesgue measurable but not Borel measurable function on (0, 1]. Let 𝔊₁ = 𝔉 = 𝔐_{(0,1]} and 𝔊₂ = 𝔅_{(0,1]}, the σ-algebra of Borel measurable sets in (0, 1]. We have 𝔊₂ ⊂ 𝔊₁. Let us show that E(f|𝔊₂) ⊂ E(f|𝔊₁) but E(f|𝔊₂) ≠ E(f|𝔊₁).
Now E(f|𝔊₁) consists of all 𝔐_{(0,1]}-measurable extended real valued functions g such that

(1) ∫_{G₁} g dm_L = ∫_{G₁} f dm_L  for G₁ ∈ 𝔐_{(0,1]},

and E(f|𝔊₂) consists of all 𝔅_{(0,1]}-measurable extended real valued functions h such that

(2) ∫_{G₂} h dm_L = ∫_{G₂} f dm_L  for G₂ ∈ 𝔅_{(0,1]}.

Let h be an arbitrary member of E(f|𝔊₂). Then h is 𝔅_{(0,1]}-measurable and hence 𝔐_{(0,1]}-measurable. To show that h is a member of E(f|𝔊₁), it remains to show that

(3) ∫_{G₁} h dm_L = ∫_{G₁} f dm_L  for G₁ ∈ 𝔐_{(0,1]}.

Let 𝔐_ℝ be the σ-algebra of all Lebesgue measurable sets in ℝ. Recall that the measure space (ℝ, 𝔐_ℝ, m_L) is the completion of the measure space (ℝ, 𝔅_ℝ, m_L). If G₁ is in 𝔐_{(0,1]} it is in 𝔐_ℝ and thus can be given as a disjoint union G₁ = B ∪ N₀ where B ∈ 𝔅_ℝ and N₀ ⊂ N where N ∈ 𝔅_ℝ with m_L(N) = 0. Note that since B ∈ 𝔅_ℝ and B ⊂ (0, 1] we have B ∈ 𝔅_{(0,1]}, and that N₀ ∈ 𝔐_{(0,1]} with m_L(N₀) = 0. Thus

∫_{G₁} h dm_L = ∫_B h dm_L + ∫_{N₀} h dm_L = ∫_B f dm_L + 0 = ∫_B f dm_L + ∫_{N₀} f dm_L = ∫_{G₁} f dm_L,
proving (3). This shows that an arbitrary member h of E(f|𝔊₂) is a member of E(f|𝔊₁) and therefore E(f|𝔊₂) ⊂ E(f|𝔊₁). Now since f is 𝔐_{(0,1]}-measurable and since 𝔊₁ = 𝔉 = 𝔐_{(0,1]}, f is a member of E(f|𝔊₁). On the other hand, since f is not 𝔅_{(0,1]}-measurable, f is not a member of E(f|𝔊₂). This shows that E(f|𝔊₂) ≠ E(f|𝔊₁).

Theorem B.16. Let X be an integrable extended real valued random variable on a probability space (Ω, 𝔉, P) and let 𝔊₁ be a sub-σ-algebra of 𝔉 and 𝔊₂ be a sub-σ-algebra of 𝔊₁.
1) E(X|𝔊₂) ⊃ E(X|𝔊₁) if and only if every version of E(X|𝔊₁) is 𝔊₂-measurable.
2) E(X|𝔊₂) ⊂ E(X|𝔊₁) if and only if for every version Y of E(X|𝔊₂) we have

(1) ∫_{G₁} Y dP = ∫_{G₁} X dP  for G₁ ∈ 𝔊₁.
3) If (Ω, 𝔊₁, P) is a complete measure space, 𝔊₂ contains all the null sets in (Ω, 𝔊₁, P), and condition (1) is satisfied, then E(X|𝔊₂) = E(X|𝔊₁).

Proof. 1) If E(X|𝔊₂) ⊃ E(X|𝔊₁), then every version of E(X|𝔊₁) is 𝔊₂-measurable. Conversely, assume that an arbitrary version Y of E(X|𝔊₁) is 𝔊₂-measurable. To show that Y is a version of E(X|𝔊₂), it remains to verify that ∫_{G₂} Y dP = ∫_{G₂} X dP for every G₂ ∈ 𝔊₂. Now if G₂ ∈ 𝔊₂ then G₂ ∈ 𝔊₁, so that ∫_{G₂} Y dP = ∫_{G₂} X dP and thus Y is a version of E(X|𝔊₂). Therefore E(X|𝔊₂) ⊃ E(X|𝔊₁).
2) Since every version Y of E(X|𝔊₂) is 𝔊₂-measurable and thus 𝔊₁-measurable, Y is a version of E(X|𝔊₁) if and only if (1) holds. Thus E(X|𝔊₂) ⊂ E(X|𝔊₁) if and only if (1) holds.
3) If (1) is satisfied then E(X|𝔊₂) ⊂ E(X|𝔊₁) by 2). If we show that an arbitrary version Y₁ of E(X|𝔊₁) is 𝔊₂-measurable, then by 1) we have E(X|𝔊₂) ⊃ E(X|𝔊₁) and therefore E(X|𝔊₂) = E(X|𝔊₁). Thus it remains to show the 𝔊₂-measurability of Y₁. Let Y₂ be an arbitrary version of E(X|𝔊₂). Since Y₂ is a version of E(X|𝔊₁) by 2), we have Y₂ = Y₁ a.e. on (Ω, 𝔊₁, P). Thus there exists a null set Λ₁ in (Ω, 𝔊₁, P) such that Y₂ = Y₁ on Λ₁ᶜ. Since 𝔊₂ contains all the null sets in (Ω, 𝔊₁, P), Λ₁ is a null set in (Ω, 𝔊₂, P) and thus Λ₁ᶜ ∈ 𝔊₂. The equality of Y₁ and Y₂ on Λ₁ᶜ then implies the 𝔊₂-measurability of Y₁ on Λ₁ᶜ. Since (Ω, 𝔊₁, P) is a complete measure space and since 𝔊₂ contains all the null sets in (Ω, 𝔊₁, P), (Ω, 𝔊₂, P) too is a complete measure space. Thus Y₁, which is 𝔊₂-measurable on Λ₁ᶜ, is indeed 𝔊₂-measurable on Ω. ■

[II] Conditional Convergence Theorems

Theorem B.17. (Conditional Monotone Convergence Theorem) Let Xₙ, n ∈ ℕ, and X
be integrable extended real valued random variables on a probability space (Ω, 𝔉, P) and let 𝔊 be a sub-σ-algebra of 𝔉. If Xₙ ↑ X (resp. Xₙ ↓ X) a.e. on (Ω, 𝔉, P), then E(Xₙ|𝔊) ↑ E(X|𝔊) (resp. E(Xₙ|𝔊) ↓ E(X|𝔊)) a.e. on (Ω, 𝔊, P).

Proof. Consider the case where Xₙ ↑ X a.e. on (Ω, 𝔉, P). Let Yₙ, n ∈ ℕ, and Y be arbitrary versions of E(Xₙ|𝔊), n ∈ ℕ, and E(X|𝔊) respectively. Let us show that Yₙ ↑ Y a.e. on (Ω, 𝔊, P). Now since Xₙ₊₁ ≥ Xₙ a.e. on (Ω, 𝔉, P), we have E(Xₙ₊₁|𝔊) ≥ E(Xₙ|𝔊) a.e. on (Ω, 𝔊, P) by 7) of Theorem B.5. Thus Yₙ ↑ a.e. on (Ω, 𝔊, P). Since Xₙ ↑ X a.e. on (Ω, 𝔉, P) and since X₁ is integrable, we have by the Monotone Convergence Theorem for an increasing sequence

(1) lim_{n→∞} ∫_A Xₙ dP = ∫_A X dP  for A ∈ 𝔉,

and similarly, since Yₙ ↑ a.e. on (Ω, 𝔊, P) and since Y₁ is integrable, we have

(2) lim_{n→∞} ∫_G Yₙ dP = ∫_G lim_{n→∞} Yₙ dP  for G ∈ 𝔊.

Since Yₙ is a version of E(Xₙ|𝔊) we have ∫_G Yₙ dP = ∫_G Xₙ dP for G ∈ 𝔊. Thus from (1) and (2) we have

(3) ∫_G lim_{n→∞} Yₙ dP = ∫_G X dP  for G ∈ 𝔊.

Since Y is a version of E(X|𝔊) we have ∫_G Y dP = ∫_G X dP for G ∈ 𝔊. Using this in (3) we have

∫_G lim_{n→∞} Yₙ dP = ∫_G Y dP  for G ∈ 𝔊.

Therefore lim_{n→∞} Yₙ = Y a.e. on (Ω, 𝔊, P). This shows that E(Xₙ|𝔊) ↑ E(X|𝔊) for an
increasing sequence. By a parallel argument we have E(Xₙ|𝔊) ↓ E(X|𝔊) for a decreasing sequence. ■

Theorem B.18. (Conditional Fatou's Lemma) Let Xₙ, n ∈ ℕ, be integrable extended real valued random variables on a probability space (Ω, 𝔉, P) and 𝔊 be a sub-σ-algebra of 𝔉.
1) If liminf_{n→∞} Xₙ is integrable and if there exists an integrable extended real valued random variable X such that Xₙ ≥ X a.e. on (Ω, 𝔉, P) for n ∈ ℕ, then

(1) E(liminf_{n→∞} Xₙ|𝔊) ≤ liminf_{n→∞} E(Xₙ|𝔊)  a.e. on (Ω, 𝔊, P).
2) If limsup_{n→∞} Xₙ is integrable and if there exists an integrable extended real valued random variable X such that Xₙ ≤ X a.e. on (Ω, 𝔉, P) for n ∈ ℕ, then

(2) E(limsup_{n→∞} Xₙ|𝔊) ≥ limsup_{n→∞} E(Xₙ|𝔊)  a.e. on (Ω, 𝔊, P).

Proof. To prove (1), recall that liminf_{n→∞} Xₙ = lim_{n→∞} {inf_{k≥n} X_k}. Since X ≤ inf_{k≥n} X_k ≤ Xₙ a.e. on (Ω, 𝔉, P) and since X and Xₙ are integrable, so is inf_{k≥n} X_k. Thus inf_{k≥n} X_k, n ∈ ℕ, is an increasing sequence of integrable random variables. Since inf_{k≥n} X_k ↑ liminf_{n→∞} Xₙ on Ω as n → ∞, Theorem B.17 implies that

E(liminf_{n→∞} Xₙ|𝔊) = lim_{n→∞} E(inf_{k≥n} X_k|𝔊) = liminf_{n→∞} E(inf_{k≥n} X_k|𝔊)

a.e. on (Ω, 𝔊, P). Since inf_{k≥n} X_k ≤ Xₙ on Ω, we have E(inf_{k≥n} X_k|𝔊) ≤ E(Xₙ|𝔊) a.e. on (Ω, 𝔊, P) by 7) of Theorem B.5. Thus (1) holds. By a parallel argument we have (2).
■
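The monotone behavior asserted in Theorem B.17 can be checked on a finite example, where conditioning on a partition-generated σ-algebra is just atom-averaging. The helper `cond_exp` and the data below are ours, not from the text.

```python
from fractions import Fraction as F

# Four-point space, uniform P; G is generated by the partition
# {{0, 1}, {2, 3}} (our illustrative choice).
P = {w: F(1, 4) for w in range(4)}
X = {0: F(4), 1: F(8), 2: F(1), 3: F(3)}
atoms = [{0, 1}, {2, 3}]

def cond_exp(Z):
    """A version of E(Z|G): the P-average of Z on each atom."""
    out = {}
    for atom in atoms:
        avg = sum(Z[w] * P[w] for w in atom) / sum(P[w] for w in atom)
        for w in atom:
            out[w] = avg
    return out

# X_n = min(X, n) increases to X pointwise; Theorem B.17 says the
# conditional expectations E(X_n|G) then increase to E(X|G).
def Xn(n):
    return {w: min(v, F(n)) for w, v in X.items()}

Y = cond_exp(X)
for w in range(4):
    vals = [cond_exp(Xn(n))[w] for n in range(1, 9)]
    assert vals == sorted(vals)     # E(X_n|G) is monotone in n at w
    assert vals[-1] == Y[w]         # and reaches E(X|G) once X_n = X
```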
Theorem B.19. (Conditional Dominated Convergence Theorem) Let Xₙ, n ∈ ℕ, and X be integrable extended real valued random variables on a probability space (Ω, 𝔉, P) such that lim_{n→∞} Xₙ = X a.e. on (Ω, 𝔉, P), and let 𝔊 be a sub-σ-algebra of 𝔉. If there exists an integrable extended real valued random variable X₀ such that |Xₙ| ≤ X₀ a.e. on (Ω, 𝔉, P) for n ∈ ℕ, then lim_{n→∞} E(Xₙ|𝔊) = E(X|𝔊) a.e. on (Ω, 𝔊, P).

Proof. The condition |Xₙ| ≤ X₀ a.e. on (Ω, 𝔉, P) for n ∈ ℕ implies that |liminf_{n→∞} Xₙ| ≤ X₀ a.e. on (Ω, 𝔉, P). The integrability of X₀ then implies that of liminf_{n→∞} Xₙ. We also have −X₀ ≤ Xₙ a.e. on (Ω, 𝔉, P) for n ∈ ℕ. Thus the conditions in 1) of Theorem B.18 are satisfied and consequently we have

E(liminf_{n→∞} Xₙ|𝔊) ≤ liminf_{n→∞} E(Xₙ|𝔊)  a.e. on (Ω, 𝔊, P).

Now since X = liminf_{n→∞} Xₙ a.e. on (Ω, 𝔉, P), we have E(X|𝔊) = E(liminf_{n→∞} Xₙ|𝔊) a.e. on (Ω, 𝔊, P) by 1) of Theorem B.5. Thus we have

E(X|𝔊) ≤ liminf_{n→∞} E(Xₙ|𝔊)  a.e. on (Ω, 𝔊, P).
Similarly by 2) of Theorem B.18 we have

E(X|𝔊) ≥ limsup_{n→∞} E(Xₙ|𝔊)  a.e. on (Ω, 𝔊, P),

and therefore E(X|𝔊) = lim_{n→∞} E(Xₙ|𝔊) a.e. on (Ω, 𝔊, P). ■
Remark B.20. Let Xₙ, n ∈ ℕ, and X be integrable extended real valued random variables on a probability space (Ω, 𝔉, P) and let 𝔊 be a sub-σ-algebra of 𝔉. Convergence of Xₙ to X on Ω alone does not imply lim_{n→∞} E(Xₙ|𝔊) = E(X|𝔊). See the following example.

Example B.21. Let (Ω, 𝔉, P) = ((0, 1], 𝔅_{(0,1]}, m_L). For n ∈ ℕ, let Xₙ be defined by

Xₙ(ω) = 2ⁿ for ω ∈ (0, 2⁻ⁿ],  Xₙ(ω) = 0 for ω ∈ (2⁻ⁿ, 1],

and let X be identically equal to 0 on Ω. We have lim_{n→∞} Xₙ(ω) = X(ω) for every ω ∈ Ω. Consider a sub-σ-algebra 𝔊 of 𝔉 given by 𝔊 = {∅, (0, 1/2], (1/2, 1], (0, 1]}. For n ∈ ℕ, let Yₙ be defined by

Yₙ(ω) = 2 for ω ∈ (0, 1/2],  Yₙ(ω) = 0 for ω ∈ (1/2, 1],

and let Y be identically equal to 0 on Ω. Since Yₙ is 𝔊-measurable and ∫_G Yₙ dP = ∫_G Xₙ dP for every G ∈ 𝔊, Yₙ is a version of E(Xₙ|𝔊). Furthermore, since ∅ is the only null set in (Ω, 𝔊, P), any 𝔊-measurable extended real valued function that is equal to Yₙ a.e. on (Ω, 𝔊, P) is actually equal to Yₙ everywhere on Ω. Thus Yₙ is the only version of E(Xₙ|𝔊). Similarly Y is the only version of E(X|𝔊). We have however lim_{n→∞} Yₙ(ω) ≠ Y(ω) for ω ∈ (0, 1/2].
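Example B.21 can be checked by direct computation: the integral of Xₙ over the left atom (0, 1/2] is 2ⁿ · 2⁻ⁿ = 1 for every n ≥ 1, so the conditional expectations never move. The helper names below are ours.

```python
from fractions import Fraction as F

# Example B.21 in numbers: G is generated by the atoms (0, 1/2] and
# (1/2, 1], each of Lebesgue measure 1/2.
def version_on_atoms(integral_left, integral_right):
    """E(.|G) is constant on each atom: integral over atom / (1/2)."""
    return (integral_left / F(1, 2), integral_right / F(1, 2))

def Yn(n):
    # X_n = 2^n on (0, 2^-n] and 0 elsewhere: its integral over
    # (0, 1/2] is 2^n * 2^-n = 1 for n >= 1, and 0 over (1/2, 1].
    return version_on_atoms(F(1), F(0))

# Y_n = (2, 0) for every n, although X_n -> 0 pointwise and E(0|G) = (0, 0):
assert all(Yn(n) == (F(2), F(0)) for n in range(1, 20))
```

The domination hypothesis of Theorem B.19 fails here: sup_n Xₙ is not integrable.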
[III] Conditional Convexity Theorems

Let φ be a convex function on ℝ. Then the left derivative (D⁻φ)(ξ) and the right derivative (D⁺φ)(ξ) exist and satisfy (D⁻φ)(ξ) ≤ (D⁺φ)(ξ) for every ξ ∈ ℝ; for any ξ₀ ∈ ℝ and m ∈ [(D⁻φ)(ξ₀), (D⁺φ)(ξ₀)] we have φ(ξ) ≥ m(ξ − ξ₀) + φ(ξ₀) for ξ ∈ ℝ, and there exists a countable collection {ℓₙ : n ∈ ℕ} of affine functions, ℓₙ(ξ) = αₙξ + βₙ with αₙ, βₙ ∈ ℝ, such that

(1) φ(ξ) = sup_{n∈ℕ} ℓₙ(ξ) = sup_{n∈ℕ} {αₙξ + βₙ}  for ξ ∈ ℝ.

Theorem B.22. (Conditional Jensen's Inequality) Let X be an integrable extended real valued random variable on a probability space (Ω, 𝔉, P) and let 𝔊 be a sub-σ-algebra of 𝔉. Let φ be a convex function on ℝ such that φ(X) is integrable. Then E[φ(X)|𝔊] ≥ φ(E(X|𝔊)) a.e. on (Ω, 𝔊, P).

Proof. Since X is integrable, X(ω) ∈ ℝ for a.e. ω in (Ω, 𝔉, P). Since a convex function is a continuous function, φ is Borel measurable, and thus by (1)

(2) φ(X(ω)) = sup_{n∈ℕ} {αₙX(ω) + βₙ} ≥ αₙX(ω) + βₙ

for all n ∈ ℕ for a.e. ω in (Ω, 𝔉, P). Applying 7) of Theorem B.5 to (2) we have E[φ(X)|𝔊] ≥ αₙE(X|𝔊) + βₙ for all n ∈ ℕ a.e. on (Ω, 𝔊, P). Recalling that every version of E(X|𝔊) is real valued a.e. on (Ω, 𝔊, P), we have by (1)

E[φ(X)|𝔊] ≥ sup_{n∈ℕ} {αₙE(X|𝔊) + βₙ} = φ(E(X|𝔊))

a.e. on (Ω, 𝔊, P). ■

Remark B.23. Let X be an integrable extended real valued random variable on a probability space (Ω, 𝔉, P). Since ξ ↦ ξ⁺ = max{ξ, 0} is a convex function on ℝ, Theorem B.22 implies E(X|𝔊)⁺ ≤ E(X⁺|𝔊) a.e. on (Ω, 𝔊, P). The inequality may be strict; see Example B.24 below.
Example B.24. Let (Ω, 𝔉, P) = ((0, 1], 𝔉, m_L) where 𝔉 = {∅, (0, 1/2], (1/2, 1], (0, 1]} and let 𝔊 = {∅, (0, 1]}. Define an integrable random variable X by setting

X(ω) = 2 for ω ∈ (0, 1/2],  X(ω) = −1 for ω ∈ (1/2, 1].

If we define Y(ω) = ∫_Ω X dP = 1/2 for ω ∈ Ω, then as a constant function Y is 𝔊-measurable. Y also satisfies the condition ∫_G Y dP = ∫_G X dP for G = ∅ and G = Ω. The fact that ∅ is the only null set in (Ω, 𝔊, P) then implies that Y is the unique version of E(X|𝔊). From the nonnegativity of Y we have E(X|𝔊)⁺ = Y⁺ = Y. If we define Z(ω) = ∫_Ω X⁺ dP = 1 for ω ∈ Ω, then for the same reason as above Z is the unique version of E(X⁺|𝔊). Thus we have E(X|𝔊)⁺ < E(X⁺|𝔊) on Ω.

Corollary B.25. Let X be an integrable extended real valued random variable on a probability space (Ω, 𝔉, P) and let 𝔊 be a sub-σ-algebra of 𝔉. Then

(1) |E(X|𝔊)| ≤ E(|X| |𝔊)  a.e. on (Ω, 𝔊, P).

If X ∈ L_p(Ω, 𝔉, P) for some p ∈ [1, ∞), then

(2) E(|X| |𝔊) ≤ E(|X|ᵖ|𝔊)^{1/p}  a.e. on (Ω, 𝔊, P)

and

(3) ‖E(X|𝔊)‖_p ≤ ‖E(|X| |𝔊)‖_p ≤ ‖X‖_p.

Proof. Since ξ ↦ |ξ| for ξ ∈ ℝ is a convex function on ℝ, (1) is a particular case of Theorem B.22. Suppose X ∈ L_p(Ω, 𝔉, P) for some p ∈ [1, ∞). Since ξ ↦ |ξ|ᵖ is a convex function on ℝ for p ∈ [1, ∞), we have by Theorem B.22 the inequality |E(X|𝔊)|ᵖ ≤ E(|X|ᵖ|𝔊) and then |E(X|𝔊)| ≤ E(|X|ᵖ|𝔊)^{1/p} a.e. on (Ω, 𝔊, P). Applying the last inequality to |X| ∈ L_p(Ω, 𝔉, P) we have (2). By (1) and (2) we have

∫_Ω |E(X|𝔊)|ᵖ dP ≤ ∫_Ω E(|X| |𝔊)ᵖ dP ≤ ∫_Ω E(|X|ᵖ|𝔊) dP = ∫_Ω |X|ᵖ dP.

Taking the p-th roots of the members in these inequalities we have (3). ■
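The numbers in Example B.24, the strictness in Remark B.23, and inequality (1) of Corollary B.25 can all be verified directly. A small sketch with our own variable names:

```python
from fractions import Fraction as F

# Example B.24 in numbers: X = 2 on (0, 1/2], -1 on (1/2, 1], and G is
# the trivial sigma-algebra, so E(.|G) is the plain expectation.
X_vals = [(F(2), F(1, 2)), (F(-1), F(1, 2))]          # (value, probability)

EX = sum(v * p for v, p in X_vals)                    # E(X|G) = E(X)
E_abs = sum(abs(v) * p for v, p in X_vals)            # E(|X| | G)
E_pos = sum(max(v, 0) * p for v, p in X_vals)         # E(X^+ | G)

assert EX == F(1, 2) and E_pos == F(1)
assert max(EX, 0) < E_pos          # E(X|G)^+ < E(X^+|G): strict inequality
assert abs(EX) <= E_abs            # inequality (1) of Corollary B.25
```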
Theorem B.26. Let (Ω, 𝔉, P) be a probability space, Xₙ ∈ L_p(Ω, 𝔉, P) for n ∈ ℕ and X ∈ L_p(Ω, 𝔉, P) for some p ∈ [1, ∞). If lim_{n→∞} Xₙ = X in L_p(Ω, 𝔉, P), then lim_{n→∞} E(Xₙ|𝔊) = E(X|𝔊) in L_p(Ω, 𝔉, P) for an arbitrary sub-σ-algebra 𝔊 of 𝔉.

Proof. By (3) of Corollary B.25 we have ‖E(Xₙ|𝔊)‖_p ≤ ‖Xₙ‖_p < ∞, so that E(Xₙ|𝔊) ∈ L_p(Ω, 𝔉, P) for n ∈ ℕ. Similarly E(X|𝔊) ∈ L_p(Ω, 𝔉, P). By 6) of Theorem B.5 and by (3) of Corollary B.25 we have

‖E(Xₙ|𝔊) − E(X|𝔊)‖_p = ‖E(Xₙ − X|𝔊)‖_p ≤ ‖Xₙ − X‖_p.

Thus if lim_{n→∞} ‖Xₙ − X‖_p = 0, then lim_{n→∞} ‖E(Xₙ|𝔊) − E(X|𝔊)‖_p = 0. ■
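The mechanism of Theorem B.26 is that conditioning is an L_p contraction, by (3) of Corollary B.25. A finite sketch (atoms, data values, and helper names are ours):

```python
import math

# Four-point uniform space; G is generated by atoms {0,1} and {2,3}; p = 3.
X = [2.0, -4.0, 1.0, 5.0]
atoms = [(0, 1), (2, 3)]
p = 3

def cond_exp(Z):
    """A version of E(Z|G): average of Z on each atom (uniform P)."""
    out = [0.0] * len(Z)
    for atom in atoms:
        avg = sum(Z[w] for w in atom) / len(atom)
        for w in atom:
            out[w] = avg
    return out

def norm_p(Z):
    return (sum(abs(z) ** p for z in Z) / len(Z)) ** (1.0 / p)

Y = cond_exp(X)
assert norm_p(Y) <= norm_p(X) + 1e-12        # ||E(X|G)||_p <= ||X||_p

# Hence L^p convergence X_n -> X forces E(X_n|G) -> E(X|G) in L^p:
Xn = [x + 0.25 for x in X]                   # a perturbed approximant
diff = [a - b for a, b in zip(cond_exp(Xn), Y)]
err = [a - b for a, b in zip(Xn, X)]
assert norm_p(diff) <= norm_p(err) + 1e-12
```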
Theorem B.27. (Conditional Hölder's Inequality) Let 𝔊 be a sub-σ-algebra of 𝔉 in a probability space (Ω, 𝔉, P). If X ∈ L_p(Ω, 𝔉, P) and Y ∈ L_q(Ω, 𝔉, P), where p, q ∈ (1, ∞) and 1/p + 1/q = 1, then

E(|XY| |𝔊) ≤ E(|X|ᵖ|𝔊)^{1/p} E(|Y|^q|𝔊)^{1/q}  a.e. on (Ω, 𝔊, P).

Proof. For n ∈ ℕ, let Xₙ = |X| + 1/n and Yₙ = |Y| + 1/n. Then Xₙ, Yₙ ≥ 1/n on Ω, Xₙ ∈ L_p(Ω, 𝔉, P) and Yₙ ∈ L_q(Ω, 𝔉, P). By 7) of Theorem B.5 we have α := E(Xₙᵖ|𝔊)^{1/p} ≥ 1/n and β := E(Yₙ^q|𝔊)^{1/q} ≥ 1/n a.e. on (Ω, 𝔊, P). By the inequality |ξη| ≤ p⁻¹|ξ|ᵖ + q⁻¹|η|^q for ξ, η ∈ ℝ, we have

(XₙYₙ)/(αβ) ≤ (1/p)(Xₙᵖ/αᵖ) + (1/q)(Yₙ^q/β^q)  a.e. on (Ω, 𝔊, P).

Note that since α, β ≥ 1/n > 0 a.e. on (Ω, 𝔊, P), the divisions in the last inequality are possible a.e. on (Ω, 𝔊, P). By 7) and 6) of Theorem B.5,

E((XₙYₙ)/(αβ)|𝔊) ≤ (1/p)E(Xₙᵖ/αᵖ|𝔊) + (1/q)E(Yₙ^q/β^q|𝔊)  a.e. on (Ω, 𝔊, P).

The fact that α and β are 𝔊-measurable implies, according to Theorem B.8,

E(XₙYₙ|𝔊)/(αβ) ≤ E(Xₙᵖ|𝔊)/(pαᵖ) + E(Yₙ^q|𝔊)/(qβ^q) = 1/p + 1/q = 1  a.e. on (Ω, 𝔊, P),
and thus

E(XₙYₙ|𝔊) ≤ αβ = E(Xₙᵖ|𝔊)^{1/p} E(Yₙ^q|𝔊)^{1/q}  a.e. on (Ω, 𝔊, P).

Letting n → ∞ and applying Theorem B.17 (Conditional Monotone Convergence Theorem), we complete the proof. ■

[IV] Conditioning and Independence

Theorem B.28. Let X be an integrable extended real valued random variable on a probability space (Ω, 𝔉, P) and let 𝔊 be a sub-σ-algebra of 𝔉. If {X, 𝔊} is an independent system, then E(X|𝔊) = E(X) a.e. on (Ω, 𝔊, P).

Proof. Let {X, 𝔊} be an independent system, that is, {σ(X), 𝔊} is an independent system of sub-σ-algebras of 𝔉. We are to show that every version of E(X|𝔊) is equal to the constant E(X) a.e. on (Ω, 𝔊, P). Let E ∈ σ(X). For every G ∈ 𝔊 we have

∫_G 1_E dP = P(E ∩ G) = P(E)P(G) = ∫_G E(1_E) dP,

where the second equality is by the independence of {σ(X), 𝔊}. Thus we have shown that for an arbitrary E ∈ σ(X) we have

(1) E(1_E|𝔊) = E(1_E)  a.e. on (Ω, 𝔊, P).

Let us write X = X⁺ − X⁻ for the integrable extended real valued random variable X on (Ω, 𝔉, P). Approximating X⁺ and X⁻ by increasing sequences of nonnegative simple σ(X)-measurable functions and applying (1), the linearity in 6) of Theorem B.5, and Theorem B.17, we obtain E(X⁺|𝔊) = E(X⁺) and E(X⁻|𝔊) = E(X⁻) a.e. on (Ω, 𝔊, P), and therefore E(X|𝔊) = E(X⁺) − E(X⁻) = E(X) a.e. on (Ω, 𝔊, P). ■
Remark B.29. The converse of Theorem B.28 is false; that is, E(X|𝔊) = E(X) a.e. on (Ω, 𝔊, P) does not imply the independence of the system {X, 𝔊}. See Example B.30 below. See also Corollary B.33.

Example B.30. Let (Ω, 𝔉, P) = ((0, 1], 𝔅_{(0,1]}, m_L). Consider G_{1,1} = (0, 1/4], G_{1,2} = (1/4, 1/2], G₁ = (0, 1/2], G₂ = (1/2, 1], and let 𝔊 = {∅, G₁, G₂, (0, 1]}. Define X = 1_{G_{1,1}} − 1_{G_{1,2}}. Then E(X) = 0 and ∫_G X dP = 0 for every G ∈ 𝔊, so that the constant 0 is a version of E(X|𝔊), that is, E(X|𝔊) = E(X) a.e. on (Ω, 𝔊, P). On the other hand, G_{1,1} ∈ σ(X) and

P(G_{1,1} ∩ G₁) = 1/4 ≠ 1/8 = P(G_{1,1})P(G₁),

so that {σ(X), 𝔊} is not an independent system.
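The computation behind Example B.30 reduces to arithmetic on four atoms; the encoding below (our own) makes both claims explicit.

```python
from fractions import Fraction as F

# Example B.30 in numbers on the four atoms (0,1/4], (1/4,1/2],
# (1/2,3/4], (3/4,1], each of probability 1/4; X = 1, -1, 0, 0 on these
# atoms, and G is generated by G1 = (0,1/2] and G2 = (1/2,1].
P = [F(1, 4)] * 4
X = [F(1), F(-1), F(0), F(0)]
G1, G2 = {0, 1}, {2, 3}

# E(X|G) = E(X) = 0: X integrates to 0 over G1, G2, and Omega.
assert sum(X[w] * P[w] for w in G1) == 0
assert sum(X[w] * P[w] for w in G2) == 0

# Yet sigma(X) and G are not independent: with A = {X = 1} = (0, 1/4],
A = {0}
lhs = sum(P[w] for w in A & G1)                       # P(A ∩ G1)
rhs = sum(P[w] for w in A) * sum(P[w] for w in G1)    # P(A)P(G1)
assert lhs == F(1, 4) and rhs == F(1, 8) and lhs != rhs
```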
Theorem B.31. Let X be an integrable extended real valued random variable on a probability space (Ω, 𝔉, P) and let 𝔊 be a sub-σ-algebra of 𝔉. Then the following two conditions are equivalent:
1°. E(X|𝔊) = E(X) a.e. on (Ω, 𝔊, P).
2°. E(XZ) = E(X)E(Z) for every bounded 𝔊-measurable random variable Z.

Proof. Assume 1°. Then

E(XZ) = E[E(XZ|𝔊)] = E[Z·E(X|𝔊)] = E(X)E(Z),

where the second equality is by Theorem B.8 and the third equality is by 1°. This proves 2°. Conversely assume 2°. Then for every bounded 𝔊-measurable random variable Z we have

(1) E[Z·E(X|𝔊)] = E[E(ZX|𝔊)] = E(ZX) = E(X)E(Z) = E[Z·E(X)],

where the first equality is by Theorem B.8 and the third equality is by 2°. Now for an arbitrary G ∈ 𝔊, 1_G is a bounded 𝔊-measurable random variable, so that by (1) we have ∫_G E(X|𝔊) dP = ∫_G E(X) dP. Since both E(X|𝔊) and the constant E(X) are 𝔊-measurable, the arbitrariness of G ∈ 𝔊 implies that E(X|𝔊) = E(X) a.e. on (Ω, 𝔊, P). This proves 1°. ■

Theorem B.32. Let 𝔊₁ and 𝔊₂ be two sub-σ-algebras of 𝔉. Then the following two conditions are equivalent:
1°. {𝔊₁, 𝔊₂} is an independent system.
2°. E(X₁|𝔊₂) = E(X₁) a.e. on (Ω, 𝔊₂, P) for every integrable 𝔊₁-measurable extended real valued random variable X₁.

Proof. Assume 1°. Let X₁ be an integrable 𝔊₁-measurable extended real valued random variable. Since σ(X₁) ⊂ 𝔊₁, the independence of {𝔊₁, 𝔊₂} implies the independence of {σ(X₁), 𝔊₂}. Then by Theorem B.28 we have E(X₁|𝔊₂) = E(X₁) a.e. on (Ω, 𝔊₂, P). Conversely assume 2°. Then for every integrable 𝔊₁-measurable extended real valued random variable X₁ and bounded 𝔊₂-measurable random variable X₂, X₁X₂ is integrable and

E(X₁X₂) = E[E(X₁X₂|𝔊₂)] = E[X₂E(X₁|𝔊₂)] = E[X₂E(X₁)] = E(X₁)E(X₂),

where the second equality is by Theorem B.8 and the third equality is by 2°. Let G₁ ∈ 𝔊₁ and G₂ ∈ 𝔊₂. Then with X₁ = 1_{G₁} and X₂ = 1_{G₂} we have P(G₁ ∩ G₂) = E(1_{G₁}1_{G₂}) = E(1_{G₁})E(1_{G₂}) = P(G₁)P(G₂), proving the independence of {𝔊₁, 𝔊₂}. ■

Corollary B.33. Let X be an integrable extended real valued random variable on a probability space (Ω, 𝔉, P) and let 𝔊 be a sub-σ-algebra of 𝔉. Then the following two conditions are equivalent:
1°. {X, 𝔊} is an independent system.
2°. E(Y|𝔊) = E(Y) a.e. on (Ω, 𝔊, P) for every integrable σ(X)-measurable extended real valued random variable Y.
Theorem B.34. Let %, 2l2, and 2l3 be three sub-a-algebras o / 5 in a probability space (Q, 5, P). //(T(21I U 2l2) and 2l3 are independent, then for every At e 2li we have (1)
P ( A , | 3 W c P ( A , \crQXi U2l 3 )),
tfwffc,every veraow o/P(A, |2l 2 ) is a version o/P(Ai |cr(2l2 U 2l3)). //(O, a(2l 2 U 2l3), P ) & a complete measure space and i/2l2 contains all the null sets in (Q, <J(Q12 U 2l3), P), then (2)
P(A,|2t 2 ) = P ( A , | ( 7 ( a 2 u a 3 ) ) .
Proof. Since 2l2 is a sub-a-algebras of cr(2l2U2l3), to prove (1) it suffices to show according to 2) of Theorem B.16 that for every version Y of P(A\ |2l 2 ) we have (3)
I YdP = P(AX n B)
for B e
To prove (3), let us prove first that for A₁ ∈ 𝔄₁ and A₃ ∈ 𝔄₃ we have
(4) P(A₁ ∩ A₃|𝔄₂) = P(A₁|𝔄₂) P(A₃|𝔄₂) a.e. on (Ω, 𝔄₂, P).
Now for A₂ ∈ 𝔄₂ we have
(5) ∫_{A₂} P(A₁ ∩ A₃|𝔄₂) dP = P(A₁ ∩ A₂ ∩ A₃) = P(A₁ ∩ A₂) P(A₃) = P(A₃) ∫_{A₂} P(A₁|𝔄₂) dP = ∫_{A₂} P(A₁|𝔄₂) P(A₃) dP,
where the first and the third equalities are by the definition of conditional probability, and the second equality is by the independence of σ(𝔄₁ ∪ 𝔄₂) and 𝔄₃. Since P(A₁ ∩ A₃|𝔄₂) and P(A₁|𝔄₂)P(A₃) are both 𝔄₂-measurable, (5) implies
P(A₁ ∩ A₃|𝔄₂) = P(A₁|𝔄₂) P(A₃) a.e. on (Ω, 𝔄₂, P).
Now the independence of σ(𝔄₁ ∪ 𝔄₂) and 𝔄₃ implies that of 𝔄₂ and 𝔄₃, and consequently we have P(A₃|𝔄₂) = P(A₃) a.e. on (Ω, 𝔄₂, P) by Theorem B.28. Therefore we have P(A₁ ∩ A₃|𝔄₂) = P(A₁|𝔄₂) P(A₃|𝔄₂) a.e. on (Ω, 𝔄₂, P). This proves (4). Next let us observe that
E[P(A₁|𝔄₂) 1_{A₃}|𝔄₂] = P(A₁|𝔄₂) E(1_{A₃}|𝔄₂) = P(A₁ ∩ A₃|𝔄₂) = E[P(A₁ ∩ A₃|σ(𝔄₂ ∪ 𝔄₃))|𝔄₂] = E[1_{A₃} E(1_{A₁}|σ(𝔄₂ ∪ 𝔄₃))|𝔄₂],
where the first equality is by the 𝔄₂-measurability of P(A₁|𝔄₂) and by Theorem B.8, the second equality is by (4), the third equality is by Theorem B.12, and the last equality is by the fact that 1_{A₃} is σ(𝔄₂ ∪ 𝔄₃)-measurable. Thus we have
(6) ∫_{A₂∩A₃} P(A₁|𝔄₂) dP = ∫_{A₂∩A₃} E[1_{A₁}|σ(𝔄₂ ∪ 𝔄₃)] dP = ∫_{A₂∩A₃} 1_{A₁} dP = P(A₁ ∩ A₂ ∩ A₃),
where the second equality is from the fact that A₂ ∩ A₃ ∈ σ(𝔄₂ ∪ 𝔄₃). Now {A₂ ∩ A₃ : A₂ ∈ 𝔄₂, A₃ ∈ 𝔄₃} is a π-class of subsets of Ω and σ({A₂ ∩ A₃ : A₂ ∈ 𝔄₂, A₃ ∈ 𝔄₃}) = σ(𝔄₂ ∪ 𝔄₃).
Appendix C

Regular Conditional Probabilities

Given a probability space (Ω, 𝔉, P), let 𝔊 be a sub-σ-algebra of 𝔉, let A ∈ 𝔉, and consider the conditional probability of A given 𝔊, that is, P(A|𝔊) = E(1_A|𝔊). Let {Aₙ : n ∈ ℕ} be a countable disjoint collection in 𝔉. By the linearity of the conditional expectation and by the Conditional Monotone Convergence Theorem, there exists a null set Λ, depending on {Aₙ : n ∈ ℕ}, in (Ω, 𝔊, P) such that P(∪ₙ∈ℕ Aₙ|𝔊)(ω) = Σₙ∈ℕ P(Aₙ|𝔊)(ω) for ω ∈ Λᶜ. Since there may be uncountably many countable disjoint collections of members of 𝔉, we may have to exclude uncountably many null sets whose union may not be an 𝔉-measurable set. Thus a subset E of Ω such that P(·|𝔊)(ω) is a measure on 𝔉 for ω ∈ E may be an empty set, or it may not be an 𝔉-measurable set. To overcome this difficulty, let us consider regular conditional probabilities.

Definition C.1. Let (Ω, 𝔉, P) be a probability space and let 𝔊 be a sub-σ-algebra of 𝔉. By a regular conditional probability given 𝔊 we mean a function P^{𝔉|𝔊} on 𝔉 × Ω such that
1°. there exists a null set Λ in (Ω, 𝔊, P) such that P^{𝔉|𝔊}(·, ω) is a probability measure on 𝔉 for every ω ∈ Λᶜ;
2°. P^{𝔉|𝔊}(A, ·) is a 𝔊-measurable function on Ω for every A ∈ 𝔉;
3°. P(A ∩ G) = ∫_G P^{𝔉|𝔊}(A, ω) P(dω) for every A ∈ 𝔉 and G ∈ 𝔊.
We say that the regular conditional probability given 𝔊 is unique if for any two functions p and p′ on 𝔉 × Ω satisfying conditions 1°, 2°, and 3° there exists a null set Λ in (Ω, 𝔊, P) such that p(·, ω) = p′(·, ω) on 𝔉 for ω ∈ Λᶜ.
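On a finite probability space the difficulty described above disappears: the formula P^{𝔉|𝔊}(A, ω) = P(A ∩ G(ω))/P(G(ω)), with G(ω) the cell of the 𝔊-generating partition containing ω, satisfies 1°, 2°, and 3° simultaneously for all A. A minimal sketch (hypothetical six-point example, not from the text):

```python
from fractions import Fraction

omega = list(range(6))
P = {w: Fraction(1, 6) for w in omega}
cells = [{0, 1}, {2, 3, 4}, {5}]      # partition generating the sub-sigma-algebra G

def cell_of(w):
    return next(c for c in cells if w in c)

def rcp(A, w):
    """P^{F|G}(A, w) on a finite space: conditional probability given the cell of w."""
    c = cell_of(w)
    return sum(P[v] for v in c & A) / sum(P[v] for v in c)

A = {1, 2, 5}
# condition 1: for each fixed w, rcp(., w) has total mass 1 (additivity is clear)
assert all(rcp(set(omega), w) == 1 for w in omega)
# condition 2: rcp(A, .) is constant on each cell, hence G-measurable
assert all(len({rcp(A, w) for w in c}) == 1 for c in cells)
# condition 3: P(A n G) equals the integral of rcp(A, .) over G, for G a cell
for G in cells:
    assert sum(P[w] for w in A & G) == sum(rcp(A, w) * P[w] for w in G)
```

No exceptional null set is needed here because the space is finite; the theory below is about recovering this picture on general spaces.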
We show in Theorem C.6 below that if Ω is a complete separable metric space, 𝔉 is the Borel σ-algebra of subsets of Ω, and P is an arbitrary probability measure on 𝔉, then the regular conditional probability given an arbitrary sub-σ-algebra 𝔊 of 𝔉 exists uniquely. Regular conditional probability is known to exist for a standard measurable space, of which a complete separable metric space with its Borel σ-algebra of subsets is an example. See [26] K. R. Parthasarathy.

Observation C.2. Conditions 2° and 3° in Definition C.1 are equivalent to the single condition P^{𝔉|𝔊}(A, ·) ∈ P(A|𝔊) for every A ∈ 𝔉, that is, for every A ∈ 𝔉, the function P^{𝔉|𝔊}(A, ·) on Ω is a version of P(A|𝔊).

Proof. By 2°, P^{𝔉|𝔊}(A, ·) is a 𝔊-measurable function on Ω. By 3°, we have for every G ∈ 𝔊 the equality ∫_G P^{𝔉|𝔊}(A, ω) P(dω) = P(A ∩ G) = ∫_G 1_A(ω) P(dω). This shows that P^{𝔉|𝔊}(A, ·) is a version of P(A|𝔊). ∎

Proposition C.3. If P^{𝔉|𝔊} is a regular conditional probability given a sub-σ-algebra 𝔊 in a probability space (Ω, 𝔉, P), then for every integrable random variable X on (Ω, 𝔉, P) we have
(1) E(X|𝔊)(ω) = ∫_Ω X(ω′) P^{𝔉|𝔊}(dω′, ω) a.e. on (Ω, 𝔊, P),
that is, the right side of (1) as a function on Ω is a version of E(X|𝔊). In particular for every G ∈ 𝔊 we have
(2) ∫_G X dP = ∫_G [∫_Ω X(ω′) P^{𝔉|𝔊}(dω′, ω)] P(dω).
Proof. Consider the case where the integrable random variable X is nonnegative. Let us show first that the right side of (1) is a 𝔊-measurable function on Ω. Now there exists an increasing sequence of nonnegative simple functions {φₙ : n ∈ ℕ} on (Ω, 𝔉) such that φₙ ↑ X. Let φₙ = Σ_{k=1}^{Kₙ} c_{n,k} 1_{A_{n,k}} where A_{n,k} ∈ 𝔉. Then
(3) ∫_Ω φₙ(ω′) P^{𝔉|𝔊}(dω′, ω) = Σ_{k=1}^{Kₙ} c_{n,k} P^{𝔉|𝔊}(A_{n,k}, ω),
which, being a finite sum of 𝔊-measurable functions according to 2° of Definition C.1, is 𝔊-measurable. Then by the Monotone Convergence Theorem,
∫_Ω X(ω′) P^{𝔉|𝔊}(dω′, ω) = limₙ→∞ ∫_Ω φₙ(ω′) P^{𝔉|𝔊}(dω′, ω),
which, as the limit of a sequence of 𝔊-measurable functions, is 𝔊-measurable. To show that ∫_Ω X(ω′) P^{𝔉|𝔊}(dω′, ·) is a version of E(X|𝔊), it remains to show that for every G ∈ 𝔊 we have
(4) ∫_G [∫_Ω X(ω′) P^{𝔉|𝔊}(dω′, ω)] P(dω) = ∫_G X(ω) P(dω).
Now by the Monotone Convergence Theorem, (3), and 3° of Definition C.1, we have
∫_G [∫_Ω X(ω′) P^{𝔉|𝔊}(dω′, ω)] P(dω) = ∫_G limₙ→∞ [∫_Ω φₙ(ω′) P^{𝔉|𝔊}(dω′, ω)] P(dω)
= limₙ→∞ ∫_G Σ_{k=1}^{Kₙ} c_{n,k} P^{𝔉|𝔊}(A_{n,k}, ω) P(dω) = limₙ→∞ Σ_{k=1}^{Kₙ} c_{n,k} P(A_{n,k} ∩ G)
= limₙ→∞ ∫_G φₙ dP = ∫_G X dP.
Therefore (4) holds. This proves (1) for a nonnegative integrable random variable X. For an arbitrary integrable random variable X, let X = X⁺ − X⁻. Since E(X|𝔊) = E(X⁺|𝔊) − E(X⁻|𝔊) a.e. on (Ω, 𝔊, P), and since (1) holds for E(X⁺|𝔊) and similarly for E(X⁻|𝔊), it holds for E(X|𝔊). Finally (2) is an immediate consequence of (1). ∎

Proposition C.4. Let 𝔊 be a sub-σ-algebra of 𝔉 in a probability space (Ω, 𝔉, P). Suppose a regular conditional probability P^{𝔉|𝔊} exists. Let (Ω, 𝔉*, P) be the completion of (Ω, 𝔉, P). Then a regular conditional probability P^{𝔉*|𝔊} exists.

Proof. 𝔉* consists of subsets of Ω of the form E = A ∪ N₀ where A ∈ 𝔉 and N₀ is a subset of a null set N in (Ω, 𝔉, P). Let us define a function φ on 𝔉* × Ω by setting
(1) φ(E, ω) = P^{𝔉|𝔊}(A, ω) for (E, ω) ∈ 𝔉* × Ω,
where A ∈ 𝔉 is as specified above for the given set E ∈ 𝔉*. To show that φ(E, ·) is uniquely defined up to a null set in (Ω, 𝔊, P) for each E ∈ 𝔉*, suppose E is given as E = A′ ∪ N₀′ as well as E = A″ ∪ N₀″, where A′, A″ ∈ 𝔉, N₀′ ⊂ N′, N₀″ ⊂ N″, and N′, N″ are null sets
in (Ω, 𝔉, P). We assume without loss of generality that A′ ∩ N₀′ = ∅ and A″ ∩ N₀″ = ∅, and thus A′ = E − N₀′ and A″ = E − N₀″. By (1) and condition 3° of Definition C.1, we have
∫_G P^{𝔉|𝔊}(A′, ω) P(dω) = P(A′ ∩ G) = P((E − N₀′) ∩ G) = P((E − N₀″) ∩ G) = P(A″ ∩ G) = ∫_G P^{𝔉|𝔊}(A″, ω) P(dω)
for every G ∈ 𝔊. Thus P^{𝔉|𝔊}(A′, ·) = P^{𝔉|𝔊}(A″, ·) a.e. on (Ω, 𝔊, P).
Let us show that φ satisfies the conditions in Definition C.1. Since P^{𝔉|𝔊} satisfies condition 1° of Definition C.1, there exists a null set Λ in (Ω, 𝔊, P) such that P^{𝔉|𝔊}(·, ω) is a probability measure on 𝔉 for ω ∈ Λᶜ. From this and from (1) it follows immediately that φ(·, ω) is a probability measure on 𝔉* for ω ∈ Λᶜ. By (1), φ(E, ·) is a 𝔊-measurable function on Ω for every E ∈ 𝔉*. For E ∈ 𝔉*, writing E = A ∪ N₀ with A ∈ 𝔉 and N₀ ⊂ N where N is a null set in (Ω, 𝔉, P), we have
P(E ∩ G) = P(A ∩ G) = ∫_G P^{𝔉|𝔊}(A, ω) P(dω) = ∫_G φ(E, ω) P(dω)
for every G ∈ 𝔊 by condition 3° of Definition C.1 satisfied by P^{𝔉|𝔊}. This shows that φ satisfies all the conditions in Definition C.1 and is therefore a regular conditional probability on the probability space (Ω, 𝔉*, P) given 𝔊. ∎

Lemma C.5. Let μ be a finite, nonnegative, finitely additive set function on an algebra 𝔇 of subsets of a Hausdorff space S. Let 𝔎 be the collection of all compact sets in S and let 𝔄 be a sub-algebra of 𝔇. If μ satisfies the condition that
(1) μ(A) = sup{μ(K) : K ⊂ A, K ∈ 𝔎 ∩ 𝔇} for every A ∈ 𝔄,
then μ is countably additive on 𝔄.

Proof. Since 𝔄 is a subalgebra of 𝔇, μ is a finite, nonnegative, finitely additive set function on 𝔄. A finite, nonnegative, finitely additive set function μ on an algebra is countably additive if and only if for every decreasing sequence {Aₙ : n ∈ ℕ} in the algebra such that Aₙ ↓ ∅ we have μ(Aₙ) ↓ 0. Thus if μ is not countably additive on 𝔄, then there exist δ > 0 and a decreasing sequence {Aₙ : n ∈ ℕ} in 𝔄 such that Aₙ ↓ ∅ and μ(Aₙ) ≥ δ for every n ∈ ℕ. Let us show that this contradicts (1). Now according to (1), for every n ∈ ℕ there exists Kₙ ∈ 𝔎 ∩ 𝔇 such that Kₙ ⊂ Aₙ and μ(Aₙ − Kₙ) < δ/3ⁿ. Then we have
Aₙ − ∩ᵢ₌₁ⁿ Kᵢ ⊂ ∪ᵢ₌₁ⁿ (Aₙ − Kᵢ) ⊂ ∪ᵢ₌₁ⁿ (Aᵢ − Kᵢ)
so that μ(∩ᵢ₌₁ⁿ Kᵢ) ≥ μ(Aₙ) − Σᵢ₌₁ⁿ δ/3ⁱ ≥ δ/2. Then Fₙ = ∩ᵢ₌₁ⁿ Kᵢ ≠ ∅. Since S is a Hausdorff space, the compact set Fₙ is a closed set. Consider the decreasing sequence of closed sets {Fₙ : n ∈ ℕ}. Let us show that ∩ₙ∈ℕ Fₙ ≠ ∅. Suppose ∩ₙ∈ℕ Fₙ = ∅. Then ∪ₙ∈ℕ Fₙᶜ = S, so that in particular {Fₙᶜ : n ∈ ℕ} is an open covering of the compact set K₁ and thus K₁ ⊂ ∪ₙ₌₁ᴺ Fₙᶜ for some N ∈ ℕ. Since {Fₙᶜ : n ∈ ℕ} is an increasing sequence, we have K₁ ⊂ F_Nᶜ and thus K₁ ∩ F_N = ∅. This contradicts the fact that F_N ≠ ∅ and F_N = ∩ᵢ₌₁ᴺ Kᵢ ⊂ K₁. Therefore ∩ₙ∈ℕ Fₙ ≠ ∅. But Fₙ ⊂ Kₙ ⊂ Aₙ and ∩ₙ∈ℕ Aₙ = ∅, so that ∩ₙ∈ℕ Fₙ = ∅. This is a contradiction. Therefore μ is countably additive on 𝔄. ∎

Let (S, 𝔅, μ) be a measure space in which S is a topological space and 𝔅 is an arbitrary σ-algebra of subsets of S. Let 𝔎 be the collection of all compact subsets of S. We say that μ is regular if for every A ∈ 𝔅 we have μ(A) = sup{μ(K) : K ⊂ A, K ∈ 𝔎 ∩ 𝔅}. According to Ulam's Theorem, if S is a complete separable metric space, 𝔅_S is the Borel σ-algebra of subsets of S, and μ is a finite measure on (S, 𝔅_S), then μ is regular. For a proof of Ulam's Theorem we refer to [4] R. M. Dudley.

Theorem C.6. Let (S, 𝔅_S, P) be a probability space in which S is a complete separable metric space and 𝔅_S is the Borel σ-algebra of subsets of S. Then for every sub-σ-algebra 𝔊 of 𝔅_S, the regular conditional probability given 𝔊, P^{𝔅_S|𝔊}, exists uniquely.

Proof. 1) Let us show first that there exists a countable algebra 𝔄 of subsets of S such that 𝔅_S = σ(𝔄). For an arbitrary collection 𝔈 of subsets of S let us write a(𝔈) for the algebra generated by 𝔈. Let 𝔒 be the collection of all open sets in S. Since S is a separable metric space, it has a countable dense subset. Let 𝔒₀ = {Oₙ : n ∈ ℕ} be the countable collection of all open spheres in S having centers in a countable dense subset of S and having rational radii. Every O ∈ 𝔒 is then a union of members of 𝔒₀. Let 𝔄 = a(𝔒₀). Since every member of a(𝔒₀) is obtained from finitely many members of the countable collection 𝔒₀ by finitely many set operations, 𝔄 is a countable algebra. Then 𝔒 ⊂ σ(𝔄), so that 𝔅_S = σ(𝔒) ⊂ σ(𝔄); since also 𝔄 ⊂ 𝔅_S, we have 𝔅_S = σ(𝔄). Write 𝔄 = {Aₙ : n ∈ ℕ}.
2) By Ulam's Theorem, P is regular, and thus for each Aₙ ∈ 𝔄 there exists a sequence of compact sets K_{n,m} ⊂ Aₙ, m ∈ ℕ, with P(K_{n,m}) ↑ P(Aₙ).
Since a finite union of compact sets is a compact set, we can choose K_{n,m} so that K_{n,m} ↑ as m → ∞. Let 𝔇 = a({Aₙ, K_{n,m} : m ∈ ℕ, n ∈ ℕ}). Since 𝔇 is an algebra generated by a countable collection, it is a countable collection by the same argument as in 1) showing the countability of 𝔄 = a(𝔒₀). Now
1°. P(D|𝔊) ≥ 0 a.e. on (S, 𝔊, P) for every D ∈ 𝔇;
2°. P(S|𝔊) = 1 a.e. on (S, 𝔊, P);
3°. P(∪ᵢ₌₁ᵏ Dᵢ|𝔊) = Σᵢ₌₁ᵏ P(Dᵢ|𝔊) a.e. on (S, 𝔊, P) for disjoint Dᵢ ∈ 𝔇, i = 1, ..., k;
4°. P(K_{n,m}|𝔊) ↑ P(Aₙ|𝔊) a.e. on (S, 𝔊, P).
Let p be a function on 𝔅_S × S defined by letting p(A, ·) be equal to an arbitrarily fixed version of P(A|𝔊) for every A ∈ 𝔅_S. Now 𝔇 ⊂ 𝔅_S is a countable collection and the collection of all finite combinations of members of 𝔇 is countable. Therefore there exists a null set Λ_∞ in (S, 𝔊, P) such that conditions 1°, 2°, 3°, and 4° hold at x for x ∈ Λ_∞ᶜ. Thus for x ∈ Λ_∞ᶜ, p(·, x) is a nonnegative, finitely additive set function on the algebra 𝔇 with p(S, x) = 1 by 1°, 2°, and 3°. Writing 𝔎 for the collection of all compact sets in S, we have by 4°
p(Aₙ, x) = P(Aₙ|𝔊)(x) = sup{P(K|𝔊)(x) : K ⊂ Aₙ, K ∈ 𝔎 ∩ 𝔇} = sup{p(K, x) : K ⊂ Aₙ, K ∈ 𝔎 ∩ 𝔇}.
Thus by Lemma C.5, p(·, x) is countably additive on the algebra 𝔄 when x ∈ Λ_∞ᶜ. Therefore when x ∈ Λ_∞ᶜ, the restriction of p(·, x) to 𝔄 can be extended uniquely to a measure q(·, x) on 𝔅_S = σ(𝔄), which is a probability measure since p(S, x) = 1 by 2°. Let q be a function on 𝔅_S × S defined by setting q(·, x) to be equal to this extension when x ∈ Λ_∞ᶜ, and by setting q(·, x) = 0 when x ∈ Λ_∞. Thus defined, q satisfies condition 1° of Definition C.1. To show that q satisfies 2° and 3° of Definition C.1, we show equivalently, according to Observation C.2, that q(A, ·) is a version of P(A|𝔊) for every A ∈ 𝔅_S. Let 𝔈 be the collection of all members A of 𝔅_S such that q(A, ·) is a version of P(A|𝔊). Now if A ∈ 𝔄 then q(A, ·) = p(A, ·) on Λ_∞ᶜ and q(A, ·) = 0 on Λ_∞. But p(A, ·) is a version of P(A|𝔊). Thus q(A, ·) is a version of P(A|𝔊). This shows that 𝔄 ⊂ 𝔈. Let us show that 𝔈 is a d-class of subsets of S. Since S ∈ 𝔄 we have S ∈ 𝔈. Let A and B be two members of 𝔈 such that A ⊂ B. Then there exist null sets Λ_A and Λ_B in (S, 𝔊, P) such that q(A, x) = P(A|𝔊)(x) for x ∈ Λ_Aᶜ
and q(B, x) = P(B|𝔊)(x) for x ∈ Λ_Bᶜ. Also there exists a null set Λ_{A,B} in (S, 𝔊, P) such that P(B − A|𝔊)(x) = P(B|𝔊)(x) − P(A|𝔊)(x) for x ∈ Λ_{A,B}ᶜ. Let Λ = Λ_A ∪ Λ_B ∪ Λ_{A,B} ∪ Λ_∞. Then for x ∈ Λᶜ we have, recalling that q(·, x) is a measure on 𝔅_S for x ∈ Λ_∞ᶜ,
q(B − A, x) = q(B, x) − q(A, x) = P(B|𝔊)(x) − P(A|𝔊)(x) = P(B − A|𝔊)(x).
This shows that B − A ∈ 𝔈. Next let {Aₙ : n ∈ ℕ} be an increasing sequence in 𝔈. Then for every n ∈ ℕ there exists a null set Λₙ in (S, 𝔊, P) such that q(Aₙ, x) = P(Aₙ|𝔊)(x) for x ∈ Λₙᶜ. By the Conditional Monotone Convergence Theorem there exists a null set Λ₀ in (S, 𝔊, P) such that P(limₙ→∞ Aₙ|𝔊)(x) = limₙ→∞ P(Aₙ|𝔊)(x) for x ∈ Λ₀ᶜ. Let Λ = (∪ₙ∈ℕ Λₙ) ∪ Λ₀ ∪ Λ_∞. Then for x ∈ Λᶜ we have
q(limₙ→∞ Aₙ, x) = limₙ→∞ q(Aₙ, x) = limₙ→∞ P(Aₙ|𝔊)(x) = P(limₙ→∞ Aₙ|𝔊)(x).
This shows that limₙ→∞ Aₙ ∈ 𝔈. Thus 𝔈 is a d-class of subsets of S. Then 𝔅_S = σ(𝔄) = d(𝔄) ⊂ 𝔈 by Theorem 1.7. Therefore q(A, ·) is a version of P(A|𝔊) for every A ∈ 𝔅_S. This shows that q is a regular conditional probability given 𝔊.
3) To prove the uniqueness of the regular conditional probability given a sub-σ-algebra in our probability space (S, 𝔅_S, P), let q₁ and q₂ be two regular conditional probabilities given a sub-σ-algebra 𝔊 of 𝔅_S. Then there exists a null set Λ₀ in (S, 𝔊, P) such that q₁(·, x) and q₂(·, x) are probability measures on 𝔅_S when x ∈ Λ₀ᶜ. For every A ∈ 𝔅_S, and in particular for every Aₙ in the countable algebra 𝔄, q₁(Aₙ, ·) and q₂(Aₙ, ·) are versions of P(Aₙ|𝔊) by Observation C.2, and thus there exists a null set Λₙ in (S, 𝔊, P) such that q₁(Aₙ, x) = q₂(Aₙ, x) when x ∈ Λₙᶜ. Let Λ = ∪ₙ∈ℤ₊ Λₙ. Then for x ∈ Λᶜ, q₁(·, x) and q₂(·, x) are probability measures on 𝔅_S and are equal on 𝔄. For fixed x ∈ Λᶜ, let 𝔈 be the collection of all members A of 𝔅_S such that q₁(A, x) = q₂(A, x). As we saw above, 𝔄 ⊂ 𝔈. By the fact that q₁(·, x) and q₂(·, x) are measures, it is easily verified that 𝔈 is a d-class of subsets of S. Then by Theorem 1.7, 𝔅_S ⊂ 𝔈. Therefore q₁(A, x) = q₂(A, x) for A ∈ 𝔅_S when x ∈ Λᶜ. This completes the proof of the uniqueness of the regular conditional probability. ∎

Let (Ω, 𝔉, P) be a probability space and let 𝔊 be a sub-σ-algebra of 𝔉. Consider P^{𝔉|𝔊}, a regular conditional probability given 𝔊. For every A ∈ 𝔊 we have, according to 3° of Definition C.1, the equality ∫_G P^{𝔉|𝔊}(A, ω) P(dω) = P(A ∩ G) = ∫_G 1_A(ω) P(dω) for every G ∈ 𝔊, so that P^{𝔉|𝔊}(A, ·) = 1_A a.e. on (Ω, 𝔊, P), where the exceptional null set may depend on A. The next theorem concerns
the existence of a null set Λ in (Ω, 𝔊, P), not depending on A, such that for a given sub-σ-algebra ℌ of 𝔊 we have P^{𝔉|𝔊}(A, ω) = 1_A(ω) for every A ∈ ℌ when ω ∈ Λᶜ.

Definition C.7. Let (Ω, 𝔉) be a measurable space. We say that 𝔉 is countably determined if there exists a countable subcollection 𝔇 of 𝔉 such that whenever two probability measures P and Q are equal on 𝔇 then they are equal on 𝔉. We call 𝔇 a countable collection of determining sets for 𝔉.

Proposition C.8. Let S be a separable metric space. Then the Borel σ-algebra 𝔅_S of subsets of S is always countably determined.

Proof. Let 𝔒 be the collection of all open sets in S. We have 𝔅_S = σ(𝔒). Since S is a separable metric space, there is a countable subcollection 𝔒₀ of 𝔒 such that every member of 𝔒 is the union of some members of 𝔒₀. The collection 𝔇 of all finite unions of members of 𝔒₀ is a countable subcollection of 𝔅_S. Suppose P and Q are two probability measures on 𝔅_S such that P(D) = Q(D) for every D ∈ 𝔇. Let O ∈ 𝔒. Then O = ∪ₙ∈ℕ Oₙ where Oₙ ∈ 𝔒₀ for n ∈ ℕ. Let Gₙ = ∪ₖ₌₁ⁿ Oₖ ∈ 𝔇 for n ∈ ℕ. Then Gₙ ↑ and O = ∪ₙ∈ℕ Gₙ = limₙ→∞ Gₙ. Since Gₙ ∈ 𝔇, we have P(Gₙ) = Q(Gₙ) for every n ∈ ℕ and therefore P(O) = limₙ→∞ P(Gₙ) = limₙ→∞ Q(Gₙ) = Q(O). Thus P = Q on 𝔒. Since 𝔒 is a π-class, this implies that P = Q on σ(𝔒) according to Corollary 1.8. Thus the equality of P and Q on 𝔇 implies the equality of P and Q on 𝔅_S. Therefore 𝔇 is a countable collection of determining sets for 𝔅_S. ∎

Theorem C.9. Let (Ω, 𝔉, P) be a probability space, let 𝔊 be a sub-σ-algebra of 𝔉, and suppose a regular conditional probability given 𝔊, P^{𝔉|𝔊}, exists. Let ℌ be a countably determined sub-σ-algebra of 𝔊. Then there exists a null set Λ in (Ω, 𝔊, P) such that
P^{𝔉|𝔊}(A, ω) = 1_A(ω) for every A ∈ ℌ when ω ∈ Λᶜ.
Proof. Let 𝔇 ⊂ ℌ be a countable collection of determining sets for ℌ, that is, any two probability measures on ℌ that are equal on 𝔇 are equal on ℌ. Let A ∈ 𝔇. Then since A ∈ 𝔊, there exists a null set Λ_A in (Ω, 𝔊, P) such that P^{𝔉|𝔊}(A, ω) = 1_A(ω) for ω ∈ Λ_Aᶜ. Let Λ_∞ = ∪_{A∈𝔇} Λ_A. Since 𝔇 is a countable collection, Λ_∞ is a null set in (Ω, 𝔊, P). Then
(1) P^{𝔉|𝔊}(A, ω) = 1_A(ω) for every A ∈ 𝔇 when ω ∈ Λ_∞ᶜ.
Now by 1° of Definition C.1 there exists a null set Λ₀ in (Ω, 𝔊, P) such that for every ω ∈ Λ₀ᶜ, P^{𝔉|𝔊}(·, ω) is a probability measure on 𝔉 and hence a probability measure on the sub-σ-algebra ℌ. Let Λ = Λ₀ ∪ Λ_∞. With fixed ω ∈ Ω, the set function 1_A(ω) for A ∈ ℌ is a probability measure on ℌ which assigns the value 1 to every A ∈ ℌ which contains ω and the value 0 to every A ∈ ℌ which does not contain ω. For ω ∈ Λᶜ, the two probability measures P^{𝔉|𝔊}(·, ω) and 1_{(·)}(ω) on ℌ are equal on 𝔇 according to (1). Since 𝔇 is a countable collection of determining sets for ℌ, the two measures are equal on ℌ for ω ∈ Λᶜ. ∎

Corollary C.10. Let (Ω, 𝔉, P) be a probability space, let 𝔊 be a sub-σ-algebra of 𝔉, and suppose a regular conditional probability given 𝔊, P^{𝔉|𝔊}, exists. Let (S, 𝔅) be a measurable space such that 𝔅 is countably determined and {x} ∈ 𝔅 for every x ∈ S. Let ξ be a 𝔊/𝔅-measurable mapping of Ω into S. Then there exists a null set Λ in (Ω, 𝔊, P) such that P^{𝔉|𝔊}(ξ⁻¹({ξ(ω)}), ω) = 1 for ω ∈ Λᶜ, that is, P^{𝔉|𝔊}({ω′ ∈ Ω : ξ(ω′) = ξ(ω)}, ω) = 1 for ω ∈ Λᶜ.

Proof. Note that since ξ is 𝔊/𝔅-measurable and {ξ(ω)} ∈ 𝔅, we have ξ⁻¹({ξ(ω)}) ∈ 𝔊 for every ω ∈ Ω. Let 𝔇 be a countable collection of determining sets for 𝔅. Let ℌ = ξ⁻¹(𝔅). Since ξ is 𝔊/𝔅-measurable, ℌ is a sub-σ-algebra of 𝔊. Let 𝔈 = ξ⁻¹(𝔇). To show that 𝔈 is a countable collection of determining sets for ℌ, let μ and ν be two probability measures on ℌ which are equal on 𝔈. Let μ_ξ and ν_ξ be two probability measures on 𝔅 defined by μ_ξ(B) = μ(ξ⁻¹(B)) and ν_ξ(B) = ν(ξ⁻¹(B)) for B ∈ 𝔅. For D ∈ 𝔇, we have μ_ξ(D) = μ(ξ⁻¹(D)) = ν(ξ⁻¹(D)) = ν_ξ(D) since μ = ν on 𝔈. Thus μ_ξ = ν_ξ on 𝔇. Since 𝔇 is a countable collection of determining sets for 𝔅, we have μ_ξ = ν_ξ on 𝔅. Thus μ(ξ⁻¹(B)) = ν(ξ⁻¹(B)) for B ∈ 𝔅, that is, μ(H) = ν(H) for H ∈ ℌ. This shows that ℌ is countably determined.
Then by Theorem C.9 there exists a null set Λ in (Ω, 𝔊, P) such that for every A ∈ ℌ we have P^{𝔉|𝔊}(A, ω) = 1_A(ω) for ω ∈ Λᶜ. Now for every ω ∈ Ω we have {ω′ ∈ Ω : ξ(ω′) = ξ(ω)} = ξ⁻¹({ξ(ω)}) ∈ ℌ since {ξ(ω)} ∈ 𝔅. Thus
P^{𝔉|𝔊}({ω′ ∈ Ω : ξ(ω′) = ξ(ω)}, ω) = 1_{{ω′∈Ω : ξ(ω′)=ξ(ω)}}(ω) = 1 for ω ∈ Λᶜ. ∎
Let (Ω, 𝔉, P) be a probability space and let (S, 𝔅) be an arbitrary measurable space. Let X be an 𝔉/𝔅-measurable mapping of Ω into S. Then X⁻¹(𝔅) is a sub-σ-algebra of 𝔉, so that a regular conditional probability given X, P^{𝔉|X⁻¹(𝔅)}, is defined as a function on 𝔉 × Ω. We can also define a regular image conditional probability given X as a function on 𝔉 × S as follows.
Definition C.11. Let (Ω, 𝔉, P) be a probability space and (S, 𝔅) be a measurable space. Let X be an 𝔉/𝔅-measurable mapping of Ω into S and let P_X be the probability distribution of X on (S, 𝔅). By a regular image conditional probability given X we mean a function P^{𝔉|X} on 𝔉 × S satisfying the following conditions.
1°. There exists a null set Λ in (S, 𝔅, P_X) such that P^{𝔉|X}(·, x) is a probability measure on 𝔉 for every x ∈ Λᶜ.
2°. P^{𝔉|X}(A, ·) is a 𝔅-measurable function on S for every A ∈ 𝔉.
3°. P(A ∩ X⁻¹(B)) = ∫_B P^{𝔉|X}(A, x) P_X(dx) for every A ∈ 𝔉 and B ∈ 𝔅.
We say that the regular image conditional probability given X is unique if for any two functions p and p′ on 𝔉 × S satisfying 1°, 2°, and 3° there exists a null set Λ in (S, 𝔅, P_X) such that p(A, x) = p′(A, x) for every A ∈ 𝔉 when x ∈ Λᶜ.

Remark C.12. The regular image conditional probability defined above exists uniquely when the probability space (Ω, 𝔉, P) is a complete separable metric space with the Borel σ-algebra of subsets. This can be shown by the same argument as in the proof of Theorem C.6.

Proposition C.13. Let (Ω, 𝔉, P) be a probability space, (S, 𝔅) be a measurable space, X be an 𝔉/𝔅-measurable mapping of Ω into S, and P_X be the probability distribution of X on (S, 𝔅). Assume the existence of a regular image conditional probability given X, P^{𝔉|X}. Let f be an extended real valued 𝔅-measurable function on S. Then for the extended real valued random variable f ∘ X on (Ω, 𝔉, P) we have for every B ∈ 𝔅
∫_{X⁻¹(B)} f ∘ X dP = ∫_S f(x) P^{𝔉|X}(X⁻¹(B), x) P_X(dx),
in the sense that the existence of one side implies that of the other and the equality of the two.

Proof. Consider first the particular case where f = 1_E with E ∈ 𝔅. Then
∫_{X⁻¹(B)} f ∘ X dP = ∫_Ω 1_E(X(ω)) 1_{X⁻¹(B)}(ω) P(dω) = ∫_Ω 1_{X⁻¹(B)}(ω) 1_{X⁻¹(E)}(ω) P(dω)
= P(X⁻¹(B) ∩ X⁻¹(E)) = ∫_E P^{𝔉|X}(X⁻¹(B), x) P_X(dx)    by 3° of Definition C.11
= ∫_S 1_E(x) P^{𝔉|X}(X⁻¹(B), x) P_X(dx) = ∫_S f(x) P^{𝔉|X}(X⁻¹(B), x) P_X(dx).
The general case follows by writing f = f⁺ − f⁻ and by the fact that there exist two increasing sequences of nonnegative simple functions on (S, 𝔅) which converge to f⁺ and f⁻ respectively on S. ∎

Proposition C.14. Let (Ω, 𝔉, P) be a probability space, (S, 𝔅) be a measurable space, X be an 𝔉/𝔅-measurable mapping of Ω into S, and P_X be the probability distribution of X on (S, 𝔅). Assume the existence of a regular image conditional probability given X, P^{𝔉|X}. Let ξ be an extended real valued random variable on (Ω, 𝔉, P). Then for every B ∈ 𝔅 we have
∫_{X⁻¹(B)} ξ dP = ∫_B [∫_Ω ξ(ω) P^{𝔉|X}(dω, x)] P_X(dx),
in the sense that the existence of one side implies that of the other and the equality of the two.

Proof. Consider first the particular case where ξ = 1_A with A ∈ 𝔉. Then
∫_{X⁻¹(B)} ξ dP = ∫_{X⁻¹(B)} 1_A(ω) P(dω) = P(A ∩ X⁻¹(B))
= ∫_B P^{𝔉|X}(A, x) P_X(dx) = ∫_B [∫_Ω 1_A(ω) P^{𝔉|X}(dω, x)] P_X(dx) = ∫_B [∫_Ω ξ(ω) P^{𝔉|X}(dω, x)] P_X(dx),
where the second equality is by 3° of Definition C.11. The general case follows by writing ξ = ξ⁺ − ξ⁻ and by the fact that there exist two increasing sequences of nonnegative simple functions on (Ω, 𝔉) which converge to ξ⁺ and ξ⁻ respectively on Ω. ∎

Theorem C.15. Let (Ω, 𝔉, P) be a probability space, (S, 𝔅) be a measurable space, X be an 𝔉/𝔅-measurable mapping of Ω into S, and P_X be the probability distribution of X on (S, 𝔅). Assume the existence of a regular image conditional probability given X, P^{𝔉|X}. Assume further that 𝔅 is countably determined and {x} ∈ 𝔅 for every x ∈ S. Then there exists a null set Λ in (S, 𝔅, P_X) such that
(1) P^{𝔉|X}(X⁻¹(B), x) = 1_B(x) for every B ∈ 𝔅 when x ∈ Λᶜ.
In particular we have
(2) P^{𝔉|X}(X⁻¹({x}), x) = 1 for every x ∈ Λᶜ.
Proof. Let 𝔇 = {Dₙ : n ∈ ℕ} be a countable collection of determining sets for 𝔅. By choosing A = X⁻¹(Dₙ) and B = Dₙ in 3° of Definition C.11, we have
∫_{Dₙ} P_X(dx) = P(X⁻¹(Dₙ)) = ∫_{Dₙ} P^{𝔉|X}(X⁻¹(Dₙ), x) P_X(dx).
By 1° of Definition C.11, P^{𝔉|X}(X⁻¹(Dₙ), x) ∈ [0, 1] for x ∈ S and in particular for x ∈ Dₙ. Thus the last equality implies that there exists a null set Λₙ in (S, 𝔅, P_X) such that P^{𝔉|X}(X⁻¹(Dₙ), x) = 1_{Dₙ}(x) for x ∈ Λₙᶜ. Let Λ = ∪ₙ∈ℕ Λₙ. Then we have
(3) P^{𝔉|X}(X⁻¹(Dₙ), x) = 1_{Dₙ}(x) for every n ∈ ℕ when x ∈ Λᶜ.
Now for every x ∈ S, P^{𝔉|X}(X⁻¹(B), x) as a function of B ∈ 𝔅 is a probability measure on 𝔅 by 1° of Definition C.11. For fixed x ∈ S, 1_B(x) as a function of B ∈ 𝔅 is also a probability measure on 𝔅. According to (3), when x ∈ Λᶜ these two probability measures are equal on 𝔇. Then since 𝔇 is a countable collection of determining sets for 𝔅, these two probability measures are equal on 𝔅 when x ∈ Λᶜ. This proves (1). When B = {x} for some x ∈ S, we have 1_{x}(x) = 1 and thus (2) holds. ∎

Definition C.16. Let (Ω, 𝔉, P) be a probability space and (S, 𝔅) be a measurable space. Let X be an 𝔉/𝔅-measurable mapping of Ω into S. Let 𝔊 be a sub-σ-algebra of 𝔉. By a regular conditional probability distribution of X given 𝔊 we mean a function P^{X|𝔊} on 𝔅 × Ω satisfying the following conditions.
1°. There exists a null set Λ in (Ω, 𝔊, P) such that P^{X|𝔊}(·, ω) is a probability measure on 𝔅 for every ω ∈ Λᶜ.
2°. P^{X|𝔊}(B, ·) is a 𝔊-measurable function on Ω for every B ∈ 𝔅.
3°. P(X⁻¹(B) ∩ G) = ∫_G P^{X|𝔊}(B, ω) P(dω) for every B ∈ 𝔅 and G ∈ 𝔊.
We say that the regular conditional probability distribution of X given 𝔊 is unique if for any two functions p and p′ on 𝔅 × Ω satisfying 1°, 2°, and 3° there exists a null set Λ in (Ω, 𝔊, P) such that p(B, ω) = p′(B, ω) for every B ∈ 𝔅 when ω ∈ Λᶜ.

Remark C.17. The regular conditional probability distribution defined above exists and is unique if the measurable space (S, 𝔅) is a complete separable metric space with the Borel σ-algebra of subsets. The proof of this parallels that of Theorem C.6.
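For a random variable X taking finitely many values, a regular image conditional probability is given by P^{𝔉|X}(A, x) = P(A ∩ {X = x})/P(X = x), and conclusion (2) of Theorem C.15, namely that the fibre X⁻¹({x}) carries full conditional mass at x, can be checked directly. A sketch (hypothetical example, not from the text):

```python
from fractions import Fraction

omega = list(range(6))
P = {w: Fraction(1, 6) for w in omega}
X = {0: 'a', 1: 'a', 2: 'b', 3: 'b', 4: 'b', 5: 'c'}   # a discrete S-valued map

levels = set(X.values())
PX = {x: sum(P[w] for w in omega if X[w] == x) for x in levels}   # distribution of X

def rcp_given_X(A, x):
    """P^{F|X}(A, x) for discrete X: conditional probability given {X = x}."""
    return sum(P[w] for w in A if X[w] == x) / PX[x]

# Theorem C.15 (2): the fibre X^{-1}({x}) has conditional probability 1 at x.
for x in levels:
    fibre = {w for w in omega if X[w] == x}
    assert rcp_given_X(fibre, x) == 1

# 3 deg of Definition C.11 with B = {x}: P(A n X^{-1}(B)) = rcp(A, x) * PX(x).
A = {0, 2, 3, 5}
for x in levels:
    assert sum(P[w] for w in A if X[w] == x) == rcp_given_X(A, x) * PX[x]
```

The null set Λ of the theorem is empty here because every level of X has positive probability.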
Appendix D

Multidimensional Normal Distributions

Definition D.1. Let Φ be a probability measure on (ℝᵈ, 𝔅_{ℝᵈ}) where d ∈ ℕ. The mean vector of Φ, namely M(Φ) = (M(Φ)ⱼ, j = 1, ..., d), is defined by
(1) M(Φ)ⱼ = ∫_{ℝᵈ} xⱼ Φ(dx) for j = 1, ..., d,
provided all the d integrals exist and are finite. The covariance matrix of Φ, namely V(Φ) = [V(Φ)_{j,k} : j, k = 1, ..., d], is defined by
(2) V(Φ)_{j,k} = ∫_{ℝᵈ} {xⱼ − M(Φ)ⱼ}{xₖ − M(Φ)ₖ} Φ(dx) for j, k = 1, ..., d,
provided all of these integrals exist and are finite. The characteristic function φ of Φ is defined by
(3) φ(y) = ∫_{ℝᵈ} e^{i⟨y,x⟩} Φ(dx) for y ∈ ℝᵈ.

Observation D.2. The covariance matrix V(Φ) of a probability measure Φ on (ℝᵈ, 𝔅_{ℝᵈ}), if it exists, is always a nonnegative definite matrix. Indeed for every y ∈ ℝᵈ we have
⟨V(Φ)y, y⟩ = Σⱼ₌₁ᵈ Σₖ₌₁ᵈ V(Φ)_{j,k} yⱼ yₖ
= Σⱼ₌₁ᵈ Σₖ₌₁ᵈ ∫_{ℝᵈ} {xⱼ − M(Φ)ⱼ} yⱼ {xₖ − M(Φ)ₖ} yₖ Φ(dx)
= ∫_{ℝᵈ} [Σⱼ₌₁ᵈ {xⱼ − M(Φ)ⱼ} yⱼ]² Φ(dx) ≥ 0.
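The computation above can be reproduced numerically for the empirical distribution of a finite point set: the quadratic form ⟨V(Φ)y, y⟩ is then exactly the mean of [Σⱼ{xⱼ − M(Φ)ⱼ}yⱼ]², hence nonnegative. A pure-Python sketch with illustrative data:

```python
# Empirical mean vector and covariance matrix of a finite distribution on R^2,
# and a check of <V y, y> >= 0 on a grid of vectors y (Observation D.2).
pts = [(0.0, 1.0), (2.0, -1.0), (1.0, 3.0), (-2.0, 0.5)]   # equally weighted points
n, d = len(pts), 2

m = [sum(p[j] for p in pts) / n for j in range(d)]          # mean vector M(Phi)
V = [[sum((p[j] - m[j]) * (p[k] - m[k]) for p in pts) / n   # covariance V(Phi)
      for k in range(d)] for j in range(d)]

def quad(y):
    """The quadratic form <V y, y>."""
    return sum(V[j][k] * y[j] * y[k] for j in range(d) for k in range(d))

ys = [(a / 4.0, b / 4.0) for a in range(-8, 9) for b in range(-8, 9)]
assert all(quad(y) >= -1e-12 for y in ys)   # nonnegative definite (up to rounding)
assert abs(V[0][1] - V[1][0]) < 1e-12       # symmetric
```

The tiny tolerances only absorb floating-point rounding; mathematically the form is exactly a mean of squares.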
Under the assumption that ∫_{ℝᵈ} |x₁|^{q₁} ⋯ |x_d|^{q_d} Φ(dx) < ∞ for some q₁, ..., q_d ∈ ℕ, we have
(1) (∂^{p₁+⋯+p_d} φ / ∂y₁^{p₁} ⋯ ∂y_d^{p_d})(y) = ∫_{ℝᵈ} (ix₁)^{p₁} ⋯ (ix_d)^{p_d} e^{i⟨y,x⟩} Φ(dx) for y ∈ ℝᵈ,
for p₁ = 1, ..., q₁; ...; p_d = 1, ..., q_d, and in particular
(2) (∂^{p₁+⋯+p_d} φ / ∂y₁^{p₁} ⋯ ∂y_d^{p_d})(0) = ∫_{ℝᵈ} (ix₁)^{p₁} ⋯ (ix_d)^{p_d} Φ(dx).
Formula (1) is obtained by differentiating the integral in (3) of Definition D.1 under the integral sign, which is justified by the Dominated Convergence Theorem.

Definition D.3. A probability measure Φ on (ℝᵈ, 𝔅_{ℝᵈ}) is called a d-dimensional normal distribution if it is absolutely continuous with respect to the Lebesgue measure m_L^d on (ℝᵈ, 𝔅_{ℝᵈ}) with the Radon-Nikodym derivative given by
(dΦ/dm_L^d)(x) = (2π)^{−d/2} (det V)^{−1/2} exp{−(1/2)⟨V⁻¹(x − m), x − m⟩} for x ∈ ℝᵈ,
where m ∈ ℝᵈ and V is a d × d positive definite symmetric matrix. We write N_d(m, V) for this probability measure.

Observation D.4. From the well known formula
(1) ∫_{−∞}^{∞} e^{−u²} du = √π,
we have by a simple substitution
(2) (2πv)^{−1/2} ∫_{−∞}^{∞} exp{−(u − m)²/(2v)} du = 1 for m ∈ ℝ and v > 0.
We obtain for a > 0 and b real or imaginary
(3) ∫_{−∞}^{∞} e^{−au² + bu} du = √(π/a) exp{b²/(4a)},
by completing the square for u and, in case b is imaginary, by contour integration. Also
(4) ∫_{−∞}^{∞} |u|^p e^{−u²} du < ∞ for p ∈ ℕ.
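Formulas (1)-(3) of Observation D.4 can be spot-checked numerically with a simple midpoint rule over a truncated range (real b only in (3); a sketch using only the standard library):

```python
import math

def integrate(f, lo=-20.0, hi=20.0, n=200000):
    """Midpoint-rule approximation of the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

# (1): integral of e^{-u^2} equals sqrt(pi)
assert abs(integrate(lambda u: math.exp(-u * u)) - math.sqrt(math.pi)) < 1e-6

# (2): the N(m, v) density integrates to 1
m, v = 1.5, 2.0
g = lambda u: math.exp(-(u - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
assert abs(integrate(g) - 1.0) < 1e-6

# (3) with real b: integral of e^{-a u^2 + b u} equals sqrt(pi/a) e^{b^2/(4a)}
a, b = 0.7, 1.3
lhs = integrate(lambda u: math.exp(-a * u * u + b * u))
rhs = math.sqrt(math.pi / a) * math.exp(b * b / (4 * a))
assert abs(lhs - rhs) < 1e-6
```

The integrands decay so fast that the truncation to [−20, 20] contributes error far below the stated tolerance.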
Theorem D.5. Let Φ be the d-dimensional normal distribution N_d(m, V) and let φ be its characteristic function. Then 1) Φ is a probability measure; 2) ∫_{ℝᵈ} |x₁|^{p₁} ⋯ |x_d|^{p_d} Φ(dx) < ∞ for any p₁, ..., p_d ∈ ℕ; 3) φ(y) = exp{i⟨y, m⟩ − (1/2)⟨Vy, y⟩} for y ∈ ℝᵈ; 4) M(Φ) = m; 5) V(Φ) = V.

Proof. 1) Since V is positive definite and symmetric, so is V⁻¹, and there exists a nonsingular d × d matrix C such that V⁻¹ = CᵗC. With the substitution z = C(x − m) we have
⟨V⁻¹(x − m), x − m⟩ = ⟨CᵗC(x − m), x − m⟩ = ⟨z, z⟩,
and the Jacobian of the inverse mapping x = C⁻¹z + m is given by |det C⁻¹| = (det CᵗC)^{−1/2} = (det V)^{1/2}. Thus we have
Φ(ℝᵈ) = (2π)^{−d/2} (det V)^{−1/2} ∫_{ℝᵈ} exp{−(1/2)⟨V⁻¹(x − m), x − m⟩} m_L^d(dx)
= (2π)^{−d/2} ∫_{ℝᵈ} exp{−(1/2)⟨z, z⟩} m_L^d(dz)
= ∏ⱼ₌₁ᵈ (2π)^{−1/2} ∫_ℝ exp{−zⱼ²/2} m_L(dzⱼ) = 1,
by (2) of Observation D.4.
2) Let p₁, ..., p_d ∈ ℕ. Then, by the same substitution,
∫_{ℝᵈ} |x₁|^{p₁} ⋯ |x_d|^{p_d} Φ(dx) = (2π)^{−d/2} ∫_{ℝᵈ} |x₁|^{p₁} ⋯ |x_d|^{p_d} exp{−(1/2)⟨z, z⟩} m_L^d(dz),
where x = C⁻¹z + m. Now since x = C⁻¹z + m, the component xⱼ is a linear combination of z₁, ..., z_d and mⱼ. Thus |x₁|^{p₁} ⋯ |x_d|^{p_d} is bounded by a polynomial in |z₁|, ..., |z_d|. But for any q₁, ..., q_d ∈ ℕ we have
∫_{ℝᵈ} |z₁|^{q₁} ⋯ |z_d|^{q_d} exp{−(1/2)⟨z, z⟩} m_L^d(dz) = ∏ⱼ₌₁ᵈ ∫_ℝ |zⱼ|^{qⱼ} exp{−zⱼ²/2} m_L(dzⱼ) < ∞,
by (4) of Observation D.4. Therefore ∫_{ℝᵈ} |x₁|^{p₁} ⋯ |x_d|^{p_d} Φ(dx) < ∞.
3) For the characteristic function φ of Φ we have
φ(y) = ∫_{ℝᵈ} exp{i⟨y, x⟩} Φ(dx) = (2π)^{−d/2} ∫_{ℝᵈ} exp{i⟨y, C⁻¹z + m⟩} exp{−(1/2)⟨z, z⟩} m_L^d(dz)
= (2π)^{−d/2} exp{i⟨y, m⟩} ∫_{ℝᵈ} exp{−(1/2)⟨z, z⟩ + i⟨(C⁻¹)ᵗy, z⟩} m_L^d(dz).
For brevity let us write w = (C⁻¹)ᵗy. Then
φ(y) = (2π)^{−d/2} exp{i⟨y, m⟩} ∏ⱼ₌₁ᵈ ∫_ℝ exp{−(1/2)zⱼ² + i wⱼ zⱼ} m_L(dzⱼ)
= (2π)^{−d/2} exp{i⟨y, m⟩} ∏ⱼ₌₁ᵈ (2π)^{1/2} exp{−(1/2)wⱼ²}    by (3) of Observation D.4
= exp{i⟨y, m⟩} exp{−(1/2)⟨w, w⟩}
= exp{i⟨y, m⟩} exp{−(1/2)⟨(C⁻¹)ᵗy, (C⁻¹)ᵗy⟩}
= exp{i⟨y, m⟩} exp{−(1/2)⟨Vy, y⟩},
since ⟨(C⁻¹)ᵗy, (C⁻¹)ᵗy⟩ = ⟨C⁻¹(C⁻¹)ᵗy, y⟩ = ⟨(CᵗC)⁻¹y, y⟩ = ⟨Vy, y⟩.
4) Let m = (mⱼ : j = 1, ..., d) and V = [v_{j,k} : j, k = 1, ..., d]. By 3),
φ(y) = exp{i Σ_{p=1}^d m_p y_p − (1/2) Σ_{p=1}^d Σ_{q=1}^d v_{p,q} y_p y_q},
and thus
(∂φ/∂yⱼ)(y) = φ(y) [ i mⱼ − (1/2){ Σ_{p=1}^d v_{p,j} y_p + Σ_{q=1}^d v_{j,q} y_q } ].
According to (2) of Observation D.2, for j = 1, ..., d we have
M(Φ)ⱼ = ∫_{ℝᵈ} xⱼ Φ(dx) = −i (∂φ/∂yⱼ)(0) = mⱼ.
This shows that M(Φ) = m.
5) Since M(Φ) = m, we have
V(Φ)_{j,k} = ∫_{ℝᵈ} {xⱼ − mⱼ}{xₖ − mₖ} Φ(dx)
= (2π)^{−d/2} (det V)^{−1/2} ∫_{ℝᵈ} {xⱼ − mⱼ}{xₖ − mₖ} exp{−(1/2)⟨V⁻¹(x − m), x − m⟩} m_L^d(dx)
= (2π)^{−d/2} (det V)^{−1/2} ∫_{ℝᵈ} xⱼ xₖ exp{−(1/2)⟨V⁻¹x, x⟩} m_L^d(dx),
by the translation invariance of the Lebesgue integral. Let Ψ be the d-dimensional normal distribution N_d(0, V). Then M(Ψ) = 0 by 4) and therefore
V(Ψ)_{j,k} = (2π)^{−d/2} (det V)^{−1/2} ∫_{ℝᵈ} xⱼ xₖ exp{−(1/2)⟨V⁻¹x, x⟩} m_L^d(dx) = V(Φ)_{j,k} for j, k = 1, ..., d.
To compute V(Ψ)_{j,k}, let ψ be the characteristic function of Ψ. By 3), ψ(y) = exp{−(1/2)⟨Vy, y⟩}, and thus
(∂²ψ/∂yⱼ∂yₖ)(y) = ψ(y) [ {Σ_{p=1}^d v_{j,p} y_p}{Σ_{q=1}^d v_{k,q} y_q} − v_{j,k} ] for y ∈ ℝᵈ.
By (2) of Observation D.2, we have
V(Ψ)_{j,k} = ∫_{ℝᵈ} yⱼ yₖ Ψ(dy) = i⁻² (∂²ψ/∂yⱼ∂yₖ)(0) = v_{j,k}.
This shows that V(Φ) = V. ∎
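The characteristic-function formula of part 3) can be spot-checked in dimension one by Monte Carlo; the tolerance below reflects sampling error of order n^{-1/2} (a sketch using only the standard library, with illustrative parameter values):

```python
import cmath
import math
import random

random.seed(0)
m, v = 0.7, 1.6        # mean and variance of a 1-dimensional normal distribution
n = 200000
samples = [random.gauss(m, math.sqrt(v)) for _ in range(n)]

for y in (0.5, 1.0, 2.0):
    # empirical characteristic function vs. exp{i y m - v y^2 / 2}
    empirical = sum(cmath.exp(1j * y * x) for x in samples) / n
    exact = cmath.exp(1j * y * m - 0.5 * v * y * y)
    assert abs(empirical - exact) < 0.02
```

Since |e^{iyX}| = 1, the Monte Carlo error of the empirical average is about n^{-1/2} ≈ 0.002, well inside the tolerance.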
Let X₁, ..., X_d be real valued random variables on a probability space (Ω, 𝔉, P). The joint probability distribution Φ of X₁, ..., X_d is by definition the probability distribution of the d-dimensional random vector X = (X₁, ..., X_d) on (ℝᵈ, 𝔅_{ℝᵈ}), that is, Φ is the probability measure on (ℝᵈ, 𝔅_{ℝᵈ}) defined by Φ(B) = P(X⁻¹(B)) for B ∈ 𝔅_{ℝᵈ}. We say that X₁, ..., X_d are jointly normally distributed if Φ is a d-dimensional normal distribution.
Theorem D.6. Let X₁, ..., X_d be real valued random variables on a probability space (Ω, 𝔉, P) which are jointly normally distributed, and let the probability distribution of X = (X₁, ..., X_d) be given by N_d(m, V) with mean vector m = (m₁, ..., m_d) and covariance matrix V = [v_{j,k} : j, k = 1, ..., d]. Then
1) the probability distribution of Xⱼ is given by N₁(mⱼ, vⱼⱼ) for j = 1, ..., d;
2) {X₁, ..., X_d} is an independent system if and only if the matrix V is diagonal.

Proof. 1) Let P_{X₁}, ..., P_{X_d} and P_X be the probability distributions of X₁, ..., X_d and X respectively. Let φ₁, ..., φ_d and φ be the corresponding characteristic functions. Then
(1) φ(y) = ∫_{ℝᵈ} exp{i⟨x, y⟩} P_X(dx) = ∫_Ω exp{i⟨X, y⟩} dP for y ∈ ℝᵈ,
and for every j = 1, ..., d we have
(2) φⱼ(yⱼ) = ∫_ℝ exp{i x yⱼ} P_{Xⱼ}(dx) = ∫_Ω exp{i Xⱼ yⱼ} dP for yⱼ ∈ ℝ.
By selecting y = (0, ..., 0, yⱼ, 0, ..., 0) in (1), we have
(3) φ(0, ..., 0, yⱼ, 0, ..., 0) = φⱼ(yⱼ) for yⱼ ∈ ℝ.
Now if the probability distribution of X = (X₁, ..., X_d) is given by N_d(m, V), then by Theorem D.5 we have
φ(y) = exp{i⟨m, y⟩ − (1/2)⟨Vy, y⟩} for y = (y₁, ..., y_d) ∈ ℝᵈ.
Then by (3), φⱼ(yⱼ) = exp{i mⱼ yⱼ − (1/2) vⱼⱼ yⱼ²} for yⱼ ∈ ℝ, which is the characteristic function of N₁(mⱼ, vⱼⱼ). Since a probability distribution is determined by its characteristic function, the probability distribution of Xⱼ is given by N₁(mⱼ, vⱼⱼ).
2)
Suppose X\,..., Xd are jointly normally distributed and the probability distribution of X is given by Nd(m, V). Then according to 1), the probability distribution of X3 is given by N\{mi,Vjj)sottu&
1 - -vhJy)}
d
= JJ^(Vj)-
Thus by Kac's Theorem, $\{X_1, \dots, X_d\}$ is an independent system. Conversely, if $\{X_1, \dots, X_d\}$ is an independent system, then by Kac's Theorem we have
$$\exp\Big\{ i \langle m, y \rangle - \frac{1}{2} \langle Vy, y \rangle \Big\} = \prod_{j=1}^{d} \exp\Big\{ i m_j y_j - \frac{1}{2} v_{j,j} y_j^2 \Big\} \quad \text{for } y = (y_1, \dots, y_d) \in \mathbb{R}^d.$$
Equating the absolute values of the two sides of the last equality gives $\exp\{ -\frac{1}{2} \langle Vy, y \rangle \} = \exp\{ -\frac{1}{2} \sum_{j=1}^{d} v_{j,j} y_j^2 \}$ and thus $\langle Vy, y \rangle = \sum_{j=1}^{d} v_{j,j} y_j^2$ for every $y \in \mathbb{R}^d$. Taking $y = e_j + e_k$ with $j \neq k$, where $e_j$ is the $j$-th standard unit vector, yields $v_{j,k} = 0$. This shows that $V$ is diagonal. ■
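Both parts of Theorem D.6 can be illustrated by simulation. This is a sketch, not from the text: the sample size and the particular $m$ and $V$ are our choices, and NumPy's multivariate normal sampler stands in for $N_d(m, V)$.

```python
import numpy as np

rng = np.random.default_rng(0)
m = np.array([1.0, -1.0])
V_diag = np.array([[2.0, 0.0],
                   [0.0, 0.5]])  # diagonal covariance matrix

X = rng.multivariate_normal(m, V_diag, size=200_000)

# 1) The marginal of X_j is N_1(m_j, v_{j,j}): empirical means and variances
#    should match m and the diagonal of V.
assert np.allclose(X.mean(axis=0), m, atol=0.02)
assert np.allclose(X.var(axis=0), np.diag(V_diag), atol=0.05)

# 2) With V diagonal the coordinates are independent, so by Kac's Theorem the
#    empirical characteristic function factors:
#    E[e^{i<X,y>}] ~ E[e^{i X_1 y_1}] * E[e^{i X_2 y_2}].
y = np.array([0.7, -1.3])
joint = np.exp(1j * X.dot(y)).mean()
product = np.exp(1j * X[:, 0] * y[0]).mean() * np.exp(1j * X[:, 1] * y[1]).mean()
assert abs(joint - product) < 0.02
```

The tolerances reflect Monte Carlo error of order $n^{-1/2}$ for the chosen sample size.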
Bibliography
[1] Chung, K. L., A Course in Probability Theory, 2nd ed., Academic Press, New York, 1974.
[2] Chung, K. L., and Williams, R. J., Introduction to Stochastic Integration, 2nd ed., Birkhäuser, Boston, 1990.
[3] Doob, J. L., Stochastic Processes, John Wiley and Sons, New York, 1953.
[4] Dudley, R. M., Real Analysis and Probability, Wadsworth & Brooks/Cole, 1989.
[5] Fisk, D. L., Quasi-martingales and stochastic integrals, Tech. Rep. 1, Dept. Math., Michigan State University, 1963.
[6] Halmos, P., Measure Theory, Van Nostrand, New York, 1950.
[7] Ikeda, N., and Watanabe, S., Stochastic Differential Equations and Diffusion Processes, 2nd ed., North-Holland-Kodansha, New York, 1989.
[8] Itô, K., Stochastic integral, Proc. Imp. Acad. Tokyo, 20 (1944), 519-524.
[9] Itô, K., On a stochastic integral equation, Proc. Imp. Acad. Tokyo, 22 (1946), 32-35.
[10] Itô, K., On stochastic differential equations, Mem. Am. Math. Soc., 4 (1951).
[11] Itô, K., On a formula concerning stochastic differentials, Nagoya Math. J., 3 (1951), 55-65.
[12] Itô, K., Theory of Probability, Iwanami, Tokyo, 1953 (in Japanese).
[13] Itô, K., Lectures on Stochastic Processes, Tata Institute of Fundamental Research, Bombay, 1960.
[14] Itô, K., Stochastic Processes, Lecture Notes Series, 16, Aarhus Univ., 1969.
[15] Itô, K., and Watanabe, S., Introduction to stochastic differential equations, Proc. Intern. Symp. SDE Kyoto 1976 (ed. by K. Itô), Kinokuniya, Tokyo, 1978.
[16] Johnson, G., and Helms, L. L., Class (D) supermartingales, Bull. Am. Math. Soc., 69 (1963), 59-62.
[17] Kallianpur, G., Stochastic Filtering Theory, Springer-Verlag, New York, 1980.
[18] Kolmogoroff, A. N., Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung, Math. Ann., 104 (1931), 415-458.
[19] Kunita, H., and Watanabe, S., On square integrable martingales, Nagoya Math. J., 30 (1967), 209-245.
[20] McKean, H. P., Stochastic Integrals, Academic Press, New York, 1969.
[21] Meyer, P. A., Probability and Potentials, Blaisdell, Waltham, Massachusetts, 1966.
[22] Meyer, P. A., Intégrales stochastiques I-IV, Séminaire de Prob. (Univ. de Strasbourg) I, Lecture Notes in Math., 39, 72-162, Springer-Verlag, Berlin, 1967.
[23] Meyer, P. A., Martingales and stochastic integrals I, Lecture Notes in Math., 284, Springer-Verlag, Berlin, 1972.
[24] Meyer, P. A., Un cours sur les intégrales stochastiques, Séminaire de Prob. (Univ. de Strasbourg) X, Lecture Notes in Math., 511, 245-400, Springer-Verlag, Berlin, 1976.
[25] Neveu, J., Bases Mathématiques du Calcul des Probabilités, Masson et Cie, Paris, 1964.
[26] Parthasarathy, K. R., Probability Measures on Metric Spaces, Academic Press, New York, 1967.
[27] Rao, K. M., On decomposition theorems of Meyer, Math. Scand., 24 (1969), 66-78.
[28] Watanabe, S., Stochastic Differential Equations, Sangyo Tosho, Tokyo, 1975 (in Japanese).
[29] Williams, D., Probability with Martingales, Cambridge University Press, New York, 1991.
[30] Yamada, T., and Watanabe, S., On the uniqueness of solutions of stochastic differential equations, J. Math. Kyoto Univ., 11 (1971), 155-167.
[31] Yeh, J., Stochastic Processes and the Wiener Integral, Marcel Dekker, New York, 1973.
Index

A
augmented: filtered space, 149; filtration, 149; σ-algebra, 11

B
Borel σ-algebra, 1
bounded variation: locally, 207; a.s. locally, 207
Brownian motion: 1-dimensional, 271; adapted, 273; multidimensional, 286; quadratic variation process, 288; strong Markov property, 284

C
class (D), 141
class (DL), 141
conditional dominated convergence theorem, 464
conditional expectation, 454
conditional Fatou's lemma, 463
conditional Hölder's inequality, 468
conditional Jensen's inequality, 466
conditional monotone convergence theorem, 462
conditional probability, 454: regular, 475; regular image, 483
countably generated σ-algebra, 10
cylinder set, 358

D
d-class, 4
determining sets of σ-algebra, 481
deterministic initial condition, 396, 397
discontinuity, time of, 217
discrete time: a.s. increasing process, 79; increasing process, 79; L2-martingale, 119, 120, 121; predictable process, 79; stopping time, 39
Doob decomposition theorem, 79: discrete time L2-martingale, 121
Doob-Kolmogorov inequality, 97, 100, 101
Doob-Meyer decomposition theorem, 182
downcrossing, 102, 108: number of, 102, 108
downcrossing inequalities: submartingale, 104, 107, 108; supermartingale, 106, 107, 109

E
exponential quasimartingale, 335

F
filtered space, 14, 274: augmented, 149; right-continuous, 14; right-continuous modification, 149; standard, 197
filtration, 14: augmented, 149; generated by a process, 14, 274; right-continuous, 14; right-continuous modification, 149
final element, 73, 116: existence, 116; uniformly integrable submartingale, 123
Fisk-Stratonovich integral, 355
Fubini's theorem, extended, 164

H
Hermite polynomial, 337

I
increasing process, 158: almost surely, 158; natural, 173; with discrete time, 79
independence, 443: collections of sets, 443; events, 443; π-classes, 444; random vectors, 446
independent increments, 268
Itô's formula: 1-dimensional, 320; multidimensional, 340

K
Kac's theorem, 448

L
Lp-bounded, 71
Lebesgue-Stieltjes measure, 157: signed, 205
Lipschitz condition, 398, 402

M
martingale: definition, 71; local, 298; local L2-, 298; property, 71; regular, 187; reversed time, 124
martingale convergence theorem: continuous time, 115; discrete time, 112; submartingale with reversed time, 125; uniformly integrable submartingale, 122
martingale transform, 81
maximal and minimal inequalities, 93, 98, 99: continuous case, 99; discrete case, 98; finite case, 93
measure: on a semialgebra, 155; on an algebra, 155
measurable process, 12
monotone class theorem: for functions, 7, 8; for sets, 5
•-multiplication, 348

N
natural increasing process, 173
normal distribution, 488
null at 0, 71

O
optional sampling theorem: with bounded stopping times, 90, 131; with unbounded stopping times, 138, 140
optional stopping theorem, 87, 89
outer measure, 156

P
pathwise uniqueness, 397
π-class, 4
predictable: process, 21; process with discrete time, 79; rectangle, 20; σ-algebra, 19
progressively measurable process, 18

Q
quadratic variation process, 214: discrete time L2-martingale, 121; Brownian motion, 288
quasimartingale, 318: bounded variation part, 318; equivalent, 346; exponential, 335; martingale part, 318
quasinorm, 50

R
random variable, 12: at a stopping time, 141
random vector, 262
regular conditional probability, 475
regular image conditional probability, 483
regular submartingale, 186
reversed time, 124
right-continuous modification, 149

S
semialgebra, 155
semimartingale, 318
seminorm, 199
separable σ-algebra, 10
σ-algebra: at stopping time, 26; augmented, 11; Borel, 1; countably determined, 481; countably generated, 10; predictable, 19; separable, 10; well-measurable, 19
signed Lebesgue-Stieltjes measure, 205
standard filtered space, 197
stochastic differential, 346
stochastic differential equation, 368: definition of solution, 371; deterministic initial condition, 396; initial value problem, 385; strong solution, 419
stochastic independence, 443
stochastic integral: of a bounded left-continuous adapted simple process, 223; of a predictable process, 238
stochastic process, 12: adapted, 14; a.s. continuous, 12; a.s. left-continuous, 12; a.s. right-continuous, 12; bounded, 71; continuous, 12; equivalent, 12; left-continuous, 12; left-continuous simple, 15; Lp, 71; Lp-bounded, 71; nonnegative, 71; right-continuous, 12; right-continuous simple, 15; measurable, 12; predictable, 21; progressively measurable, 18; simple, 15; stopped, 45; truncated, 47; uniformly integrable, 71; well-measurable, 21
stopped process, 45
stopped submartingale, 132, 134
stopping time, 25: discrete time, 39; random variable at, 141; σ-algebra at, 26
submartingale: definition, 71; property, 71; regular, 187
supermartingale: definition, 71; property, 71
symmetric multiplication, 352

T
time of discontinuity, 217
total variation function, 206
total variation measure, 206
truncated process, 47

U
Ulam's theorem, 479
uniformly integrable, 54: random variables at stopping times, 141
uniqueness of solution: in probability law, 395; pathwise, 397
upcrossing, 101, 108: number of, 101, 108
upcrossing inequalities: submartingale, 104, 107, 108; supermartingale, 106, 107, 109

W
well-measurable: process, 21; σ-algebra, 19
Wiener measure, 362
Wiener space, 362