LONDON MATHEMATICAL SOCIETY STUDENT TEXTS
Managing Editor: Professor E.B. Davies, Department of Mathematics, King's College, Strand, London WC2R 2LS, England

1 Introduction to combinators and lambda-calculus, J.R. HINDLEY & J.P. SELDIN
2 Building models by games, WILFRID HODGES
3 Local fields, J.W.S. CASSELS
4 An introduction to twistor theory, S.A. HUGGETT & K.P. TOD
5 Introduction to general relativity, L. HUGHSTON & K.P. TOD
6 Lectures on stochastic analysis: diffusion theory, DANIEL W. STROOCK

London Mathematical Society Student Texts 6

Lectures on Stochastic Analysis: Diffusion Theory
DANIEL W. STROOCK
Massachusetts Institute of Technology
CAMBRIDGE UNIVERSITY PRESS
Cambridge London New York New Rochelle Melbourne Sydney

CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, Sao Paulo, Delhi

Cambridge University Press, The Edinburgh Building, Cambridge CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521333665
© Cambridge University Press 1987
This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 1987 Re-issued in this digitally printed version 2008
A catalogue record for this publication is available from the British Library
Library of Congress Cataloguing in Publication data
Stroock, Daniel W.
Lectures on stochastic analysis.
(London Mathematical Society Student Texts; 6)
Includes index.
1. Diffusion processes. I. Title. II. Series.
QA274.75.S85 1987 519.2'33 86-20782

ISBN 978-0-521-33366-5 hardback
ISBN 978-0-521-33645-1 paperback
Contents

Introduction vii

1 Stochastic processes and measures on function space
1.1 Conditional probabilities and transition probability functions 1
1.2 The weak topology 4
1.3 Constructing measures on $C([0,\infty);\mathbb{R}^N)$ 12
1.4 Wiener measure, some elementary properties 15

2 Diffusions and martingales
2.1 A brief introduction to classical diffusion theory 19
2.2 The elements of martingale theory 27
2.3 Stochastic integrals, Ito's formula and semi-martingales 49

3 The martingale problem formulation of diffusion theory
3.1 Formulation and some basic facts 73
3.2 The martingale problem and stochastic integral equations 87
3.3 Localization 101
3.4 The Cameron-Martin-Girsanov transformation 106
3.5 The martingale problem when a is continuous and positive 112

Appendix 120
Index 127
Introduction
These notes grew out of lectures which I gave during the fall semester of 1985 at M.I.T.
My purpose has been to
provide a reasonably self-contained introduction to some stochastic analytic techniques which can be used in the study of certain analytic problems, and my method has been to concentrate on a particularly rich example rather than to attempt a general overview.
The example which I have chosen
is the study of second order partial differential operators of parabolic type.
This example has the advantage that it
leads very naturally to the analysis of measures on function space and the introduction of powerful probabilistic tools like martingales.
At the same time, it highlights the basic
virtue of probabilistic analysis: the direct role of intuition in the formulation and solution of problems.
The material which is covered has all been derived from my book [S.&V.] (Multidimensional Diffusion Processes, Grundlehren #233, Springer-Verlag, 1979) with S.R.S. Varadhan.
However, the presentation here is quite different.
In the first place, the emphasis there was on generality and detail; here it is on conceptual clarity.
Secondly, at the
time when we wrote [S.&V.], we were not aware of the ease
with which the modern theory of martingales and stochastic integration can be presented.
As a result, our development
of that material was a kind of hybrid between the classical ideas of K. Ito and J.L. Doob and the modern theory based on the ideas of P.A. Meyer, H. Kunita, and S. Watanabe.
In
these notes the modern theory is presented; and the result is,
I believe, not only more general but also more
understandable. In Chapter I, I give a quick review of a few of the
important facts about probability measures on Polish spaces: the existence of regular conditional probability distributions and the theory of weak convergence.
The
chapter ends with the introduction of Wiener measure and a brief discussion of some of the basic elementary properties of Brownian motion.
Chapter II starts with an introduction to diffusion theory via the classical route of transition probability functions coming from the fundamental solution of parabolic equations.
At the end of the first section, an attempt is
made to bring out the analogy between diffusions and the theory of integral curves of a vector field.
In this way I
have tried to motivate the formulation (made precise in Chapter III) of diffusion theory in terms of martingales,
and, at the same time, to indicate the central position which martingales play in stochastic analysis.
The rest of Chapter
II is devoted to the elements of martingale theory and the development of stochastic integration theory.
(The
presentation here profited considerably from the
incorporation of some ideas which I learned in the lectures
given by K. Ito at the opening session of the I.M.A. in the fall of 1985.) In Chapter III,
I formulate the martingale problem and
derive some of the basic facts about its solutions.
The
chapter ends with a proof that the martingale problem corresponding to a strictly elliptic operator with bounded continuous coefficients is well-posed.
This proof turns on
an elementary fact about singular integral operators, and a derivation of this fact is given in the appendix at the end of the chapter.
I. Stochastic Processes and Measures on Function Space:

1. Conditional Probabilities and Transition Probability Functions:

We begin by recalling the notion of conditional expectation. Namely, given a probability space $(E,\mathcal{F},P)$ and a sub-$\sigma$-algebra $\mathcal{F}'$, the conditional expectation $E^P[X|\mathcal{F}']$ of a function $X \in L^2(P)$ is that $\mathcal{F}'$-measurable element of $L^2(P)$ such that

(1.1)   $\int_A X(\xi)\,P(d\xi) = \int_A E^P[X|\mathcal{F}'](\xi)\,P(d\xi)$,   $A \in \mathcal{F}'$.

Clearly $E^P[X|\mathcal{F}']$ exists: it is nothing more or less than the projection of $X$ onto the subspace of $L^2(P)$ consisting of $\mathcal{F}'$-measurable $P$-square integrable functions. Moreover, $E^P[X|\mathcal{F}'] \geq 0$ (a.s., $P$) if $X \geq 0$ (a.s., $P$). Hence, if $X$ is any non-negative $\mathcal{F}$-measurable function, then one can use the monotone convergence theorem to construct a non-negative $\mathcal{F}'$-measurable $E^P[X|\mathcal{F}']$ for which (1.1) holds; and clearly, up to a $P$-null set, there is only one such function. In this way, one sees that for any $\mathcal{F}$-measurable $X$ which is either non-negative or in $L^1(P)$ there exists a $P$-almost-surely unique $\mathcal{F}'$-measurable $E^P[X|\mathcal{F}']$ satisfying (1.1).
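The projection description of $E^P[X|\mathcal{F}']$ can be checked concretely on a finite sample space. The following sketch (the space, weights, partition, and random variable are invented for illustration and are not from the text) computes the conditional expectation given the $\sigma$-algebra generated by a two-cell partition and verifies both (1.1) and the $L^2$-orthogonality of $X - E^P[X|\mathcal{F}']$ to $\mathcal{F}'$-measurable functions.

```python
import numpy as np

# Hypothetical finite sample space {0,...,5} with probabilities P and a
# random variable X; the sub-sigma-algebra F' is generated by the partition
# {0,1,2} | {3,4,5}.  E^P[X|F'] is constant on each cell, equal to the
# P-weighted average of X over that cell.
P = np.array([0.1, 0.2, 0.1, 0.25, 0.25, 0.1])
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
cells = [np.array([0, 1, 2]), np.array([3, 4, 5])]

condX = np.empty_like(X)
for c in cells:
    condX[c] = (P[c] * X[c]).sum() / P[c].sum()

# (1.1): the integrals of X and E^P[X|F'] agree over every A in F'.
for c in cells:
    assert np.isclose((P[c] * X[c]).sum(), (P[c] * condX[c]).sum())

# Projection property: X - condX is P-orthogonal to every F'-measurable Y
# (here: any vector constant on each cell of the partition).
Y = np.where(np.arange(6) < 3, 7.0, -2.0)
assert np.isclose((P * (X - condX) * Y).sum(), 0.0)
print(condX)
```

On this example the conditional expectation takes the value 2.0 on the first cell and 4.75 on the second.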
Because $X \mapsto E^P[X|\mathcal{F}']$ is linear and preserves non-negativity, one might hope that for each $\xi \in E$ there is a $P_\xi \in M_1(E)$ (the space of probability measures on $(E,\mathcal{F})$) such that $E^P[X|\mathcal{F}'](\xi) = \int X(\eta)\,P_\xi(d\eta)$. Unfortunately, this hope is not fulfilled in general. However, it is fulfilled when one imposes certain topological conditions on $(E,\mathcal{F})$. Our first theorem addresses
this question.

(1.2) Theorem: Suppose that $\Omega$ is a Polish space (i.e. $\Omega$ is a topological space which admits a complete separable metrization), and that $\mathcal{A}$ is a sub-$\sigma$-algebra of $\mathcal{B}_\Omega$ (the Borel field over $\Omega$). Given $P \in M_1(\Omega)$, there is an $\mathcal{A}$-measurable map $\omega \mapsto P_\omega \in M_1(\Omega)$ such that $P(A \cap B) = \int_A P_\omega(B)\,P(d\omega)$ for all $A \in \mathcal{A}$ and $B \in \mathcal{B}_\Omega$. Moreover, $\omega \mapsto P_\omega$ is uniquely determined up to a $P$-null set $\Lambda \in \mathcal{A}$. Finally, if $\mathcal{A}$ is countably generated, then $\omega \mapsto P_\omega$ can be chosen so that $P_\omega(A) = \chi_A(\omega)$ for all $\omega \in \Omega$ and $A \in \mathcal{A}$.

Proof: Assume for the present that $\Omega$ is compact; the general case is left for later (cf. Exercise (2.2) below). Choose $\{\varphi_n: n \geq 0\} \subseteq C(\Omega)$ to be a linearly independent set of functions whose span is dense in $C(\Omega)$, and assume that $\varphi_0 = 1$. For each $n \geq 0$, let $\psi_n$ be a bounded version of $E^P[\varphi_n|\mathcal{A}]$, and choose $\psi_0 \equiv 1$. Next, let $\Lambda$ be the set of $\omega$'s such that there is an $n \geq 0$ and $\alpha_0,\dots,\alpha_n \in \mathbb{Q}$ (the rationals) such that $\sum_{m=0}^n \alpha_m\varphi_m \geq 0$ but $\sum_{m=0}^n \alpha_m\psi_m(\omega) < 0$, and check that $\Lambda$ is an $\mathcal{A}$-measurable $P$-null set. For $n \geq 0$ and $\alpha_0,\dots,\alpha_n \in \mathbb{Q}$, define

$\Lambda_\omega\Big(\sum_{m=0}^n \alpha_m\varphi_m\Big) = E^P\Big[\sum_{m=0}^n \alpha_m\varphi_m\Big]$ if $\omega \in \Lambda$, and $\Lambda_\omega\Big(\sum_{m=0}^n \alpha_m\varphi_m\Big) = \sum_{m=0}^n \alpha_m\psi_m(\omega)$ otherwise.

Check that, for each $\omega \in \Omega$, $\Lambda_\omega$ determines a unique non-negative linear functional on $C(\Omega)$ and that $\Lambda_\omega(1) = 1$. Further, check that $\omega \mapsto \Lambda_\omega(\varphi)$ is $\mathcal{A}$-measurable for each $\varphi \in C(\Omega)$. Finally, let $P_\omega$ be the measure on $\Omega$ associated with $\Lambda_\omega$ by the Riesz representation theorem and check that $\omega \mapsto P_\omega$ satisfies the required conditions.

The uniqueness assertion is easy. Moreover, since $P_\omega(A) = \chi_A(\omega)$ (a.s., $P$) for each $A \in \mathcal{A}$, it is clear that, when $\mathcal{A}$ is countably generated, $\omega \mapsto P_\omega$ can be chosen so that this equality holds for all $\omega \in \Omega$. Q.E.D.
Referring to the set-up described in Theorem (1.2), the map $\omega \mapsto P_\omega$ is called a conditional probability distribution of $P$ given $\mathcal{A}$ (abbreviated by c.p.d. of $P|\mathcal{A}$). If $\omega \mapsto P_\omega$ has the additional property that $P_\omega(A) = \chi_A(\omega)$ for all $\omega \in \Omega$ and $A \in \mathcal{A}$, then $\omega \mapsto P_\omega$ is called a regular c.p.d. of $P|\mathcal{A}$ (abbreviated by r.c.p.d. of $P|\mathcal{A}$).

(1.3) Remark: The Polish space which will be the center of most of our attention in what follows is the space $\Omega = C([0,\infty);\mathbb{R}^N)$ of continuous paths from $[0,\infty)$ into $\mathbb{R}^N$ with the topology of uniform convergence on compact time intervals. Letting $x(t,\omega) = \omega(t)$ denote the position of $\omega \in \Omega$ at time $t \geq 0$, set $\mathcal{M}_t = \sigma(x(s): 0 \leq s \leq t)$ (the smallest $\sigma$-algebra over $\Omega$ with respect to which all the maps $\omega \mapsto x(s,\omega)$, $0 \leq s \leq t$, are measurable). Given $P \in M_1(\Omega)$, Theorem (1.2) says that for each $t \geq 0$ there is a $P$-essentially unique r.c.p.d. $\omega \mapsto P_\omega^t$ of $P|\mathcal{M}_t$. Intuitively, the representation $P = \int P_\omega^t\,P(d\omega)$ can be thought of as a fibering of $P$ according to how the path $\omega$ behaves during the initial time interval $[0,t]$. We will be mostly concerned with $P$'s which are Markov in the sense that for each $t \geq 0$ and $B \in \mathcal{B}_\Omega$ which is measurable with respect to $\sigma(x(s): s \geq t)$, $\omega \mapsto P_\omega^t(B)$ depends $P$-almost surely only on $x(t,\omega)$ and not on $x(s,\omega)$ for $s < t$.
2. The Weak Topology:

(2.1) Theorem: Let $\Omega$ be a Polish space and let $\rho$ be a metric on $\Omega$ for which $(\Omega,\rho)$ is totally bounded. Suppose that $\Lambda$ is a non-negative linear functional on $U(\Omega;\rho)$ (the space of $\rho$-uniformly continuous functions on $\Omega$) satisfying $\Lambda(1) = 1$. Then there is a (unique) $P \in M_1(\Omega)$ such that $\Lambda(\varphi) = \int\varphi\,dP$ for all $\varphi \in U(\Omega;\rho)$ if and only if for each $\epsilon > 0$ there is a $K_\epsilon \subset\subset \Omega$ ("$\subset\subset$" is used to abbreviate "compact subset of") with the property that $\Lambda(\varphi) \geq 1 - \epsilon$ whenever $\varphi \in U(\Omega;\rho)$ satisfies $\varphi \geq \chi_{K_\epsilon}$.

Proof: Suppose that $P$ exists. Choose $\{\omega_k\}$ to be a countable dense subset of $\Omega$ and for each $n \geq 1$ choose $N_n$ so that $P\big(\bigcup_{k=1}^{N_n} B(\omega_k,1/n)\big) \geq 1 - \epsilon/2^n$, where the balls $B(\omega,r)$ are defined relative to a complete metric on $\Omega$. Set $K_\epsilon = \bigcap_{n\geq 1}\bigcup_{k=1}^{N_n} \overline{B(\omega_k,1/n)}$. Then $K_\epsilon \subset\subset \Omega$ and $P(K_\epsilon) \geq 1 - \epsilon$.

Next, suppose that $\Lambda(\varphi) \geq 1 - \epsilon$ whenever $\varphi \geq \chi_{K_\epsilon}$. Clearly we may assume that $K_\epsilon$ increases with decreasing $\epsilon$. Let $\overline{\Omega}$ denote the completion of $\Omega$ with respect to $\rho$. Then $\varphi \in U(\Omega;\rho) \mapsto \overline{\varphi}$, the unique extension of $\varphi$ to $\overline{\Omega}$ in $C(\overline{\Omega})$, is an isometry from $U(\Omega;\rho)$ onto $C(\overline{\Omega})$. Hence, $\Lambda$ induces a unique $\overline{\Lambda} \in C(\overline{\Omega})^*$ such that $\overline{\Lambda}(\overline{\varphi}) = \Lambda(\varphi)$, and so there is a $\overline{P} \in M_1(\overline{\Omega})$ such that $\Lambda(\varphi) = E^{\overline{P}}[\overline{\varphi}]$, $\varphi \in U(\Omega;\rho)$. Clearly, $\overline{P}(\Omega') = 1$ where $\Omega' = \bigcup_{\epsilon>0} K_\epsilon$, and so $P(\Gamma) = \overline{P}(\Gamma \cap \Omega')$ determines an element of $M_1(\Omega)$ with the required property. The uniqueness of $P$ is obvious. Q.E.D.
(2.2) Exercise: Using the preceding, carry out the proof of Theorem (1.2) when $\Omega$ is not compact.

Given a Polish space $\Omega$, the weak topology on $M_1(\Omega)$ is the topology generated by sets of the form $\{\nu: |\nu(\varphi) - \mu(\varphi)| < \epsilon\}$, for $\mu \in M_1(\Omega)$, $\varphi \in C_b(\Omega)$, and $\epsilon > 0$ (here $\mu(\varphi)$ abbreviates $\int\varphi\,d\mu$). Thus, the weak topology on $M_1(\Omega)$ is precisely the relative topology which $M_1(\Omega)$ inherits from the weak* topology on $C_b(\Omega)^*$.
(2.3) Exercise: Let $\{\omega_k\}$ be a countable dense subset of $\Omega$. Show that the set of convex combinations of the point masses $\delta_{\omega_k}$ with non-negative rational coefficients is dense in $M_1(\Omega)$. In particular, conclude that $M_1(\Omega)$ is separable.
(2.4) Lemma: Given a net $\{\mu_\alpha\}$ in $M_1(\Omega)$, the following are equivalent:

i) $\mu_\alpha \to \mu$ weakly;
ii) $\lim_\alpha \mu_\alpha(\varphi) = \mu(\varphi)$ for every $\varphi \in U(\Omega;\rho)$, where $\rho$ is a metric on $\Omega$ for which $(\Omega,\rho)$ is totally bounded;
iii) $\overline{\lim}_\alpha\,\mu_\alpha(F) \leq \mu(F)$ for every closed $F$ in $\Omega$;
iv) $\underline{\lim}_\alpha\,\mu_\alpha(G) \geq \mu(G)$ for every open $G$ in $\Omega$;
v) $\lim_\alpha \mu_\alpha(\Gamma) = \mu(\Gamma)$ for every $\Gamma \in \mathcal{B}_\Omega$ with $\mu(\partial\Gamma) = 0$.

Proof: Obviously i) implies ii), and iii) implies iv) implies v). To prove that ii) implies iii), set

$\varphi_\epsilon(\omega) = \rho(\omega,(F^{(\epsilon)})^c)\big/\big[\rho(\omega,F) + \rho(\omega,(F^{(\epsilon)})^c)\big]$,

where $F^{(\epsilon)}$ is the set of $\omega$'s whose $\rho$-distance from $F$ is less than $\epsilon$. Then $\varphi_\epsilon \in U(\Omega;\rho)$, $\chi_F \leq \varphi_\epsilon \leq \chi_{F^{(\epsilon)}}$, and so:

$\overline{\lim}_\alpha\,\mu_\alpha(F) \leq \lim_\alpha \mu_\alpha(\varphi_\epsilon) = \mu(\varphi_\epsilon) \leq \mu(F^{(\epsilon)})$.

After letting $\epsilon \to 0$, one sees that iii) holds.

Finally, assume v) and let $\varphi \in C_b(\Omega)$ be given. Noting that $\mu(a < \varphi < b) = \mu(a \leq \varphi \leq b)$ for all but at most a countable number of $a$'s and $b$'s, choose for a given $\epsilon > 0$ a finite collection $a_0 < \dots < a_N$ so that $a_0 < \varphi < a_N$, $a_n - a_{n-1} < \epsilon$, and $\mu(a_{n-1} < \varphi < a_n) = \mu(a_{n-1} \leq \varphi \leq a_n)$ for $1 \leq n \leq N$. Then:

$|\mu_\alpha(\varphi) - \mu(\varphi)| \leq 2\epsilon + 2\|\varphi\|_{C_b(\Omega)}\sum_{n=1}^N \big|\mu_\alpha(a_{n-1} < \varphi < a_n) - \mu(a_{n-1} < \varphi < a_n)\big|$,

and so, by v), $\overline{\lim}_\alpha\,|\mu_\alpha(\varphi) - \mu(\varphi)| \leq 2\epsilon$. Q.E.D.
(2.5) Remark: $M_1(\Omega)$ admits a metric. Indeed, let $\rho$ be a metric on $\Omega$ with the property that $(\Omega,\rho)$ is totally bounded. Then, since $U(\Omega;\rho)$ is isometric to $C(\overline{\Omega})$, there is a countable dense subset $\{\varphi_n\}$ of $U(\Omega;\rho)$. Define

$R(\mu,\nu) = \sum_{n=1}^\infty \frac{|\mu(\varphi_n) - \nu(\varphi_n)|}{2^n\big(1 + \|\varphi_n\|_{C_b(\Omega)}\big)}$.

Clearly $R$ is a metric for $M_1(\Omega)$, and so (in view of (2.3)) we now see that $M_1(\Omega)$ is a separable metric space. Actually, with a little more effort, one can show that $M_1(\Omega)$ is itself a Polish space. The easiest way to see this is to show that $M_1(\Omega)$ can be embedded in $M_1(\overline{\Omega})$ as a $G_\delta$. Since $M_1(\overline{\Omega})$ is compact, and therefore Polish, it follows that $M_1(\Omega)$ is also Polish. In any case, we now know that convergence and sequential convergence are equivalent in $M_1(\Omega)$.
(2.6) Theorem (Prokhorov & Varadarajan): A set $\Gamma \subseteq M_1(\Omega)$ is relatively compact if and only if for each $\epsilon > 0$ there is a $K_\epsilon \subset\subset \Omega$ such that $\mu(K_\epsilon) \geq 1 - \epsilon$ for every $\mu \in \Gamma$.

Proof: First suppose that $\Gamma \subset\subset M_1(\Omega)$. Given $\epsilon > 0$ and $n \geq 1$, choose for each $\mu \in \Gamma$ a $K_n(\mu) \subset\subset \Omega$ so that $\mu(K_n(\mu)) \geq 1 - \epsilon/2^n$ and set $G_n(\mu) = \{\nu: \nu(K_n(\mu)^{(1/n)}) > 1 - \epsilon/2^n\}$, where distances are taken with respect to a complete metric on $\Omega$. Next, choose $\mu_{n,1},\dots,\mu_{n,N_n} \in \Gamma$ so that $\Gamma \subseteq \bigcup_{k=1}^{N_n} G_n(\mu_{n,k})$, and set $K = \bigcap_{n=1}^\infty\bigcup_{k=1}^{N_n}\overline{K_n(\mu_{n,k})^{(1/n)}}$. Clearly $K \subset\subset \Omega$ and $\mu(K) \geq 1 - \epsilon$ for every $\mu \in \Gamma$.

To prove the opposite implication, think of $M_1(\Omega)$ as a subset of the unit ball in $C_b(\Omega)^*$. Since the unit ball in $C_b(\Omega)^*$ is compact in the weak* topology, it suffices for us to check that every weak* limit $\Lambda$ of $\mu$'s from $\Gamma$ comes from an element of $M_1(\Omega)$. But $\Lambda(\varphi) \geq 1 - \epsilon$ for all $\varphi \in C_b(\Omega)$ satisfying $\varphi \geq \chi_{K_\epsilon}$, and so Theorem (2.1) applies to $\Lambda$. Q.E.D.
(2.7) Example: Let $\Omega = C([0,\infty);E)$, where $(E,\rho)$ is a Polish space and we give $\Omega$ the topology of uniform convergence on finite intervals. Then $\Gamma \subseteq M_1(\Omega)$ is relatively compact if and only if for each $T > 0$ and $\epsilon > 0$ there exist $K \subset\subset E$ and $\delta(\cdot)$ satisfying $\lim_{t\downarrow 0}\delta(t) = 0$ such that:

$P\big(\{\omega: x(t,\omega) \in K,\ t \in [0,T],\ \text{and}\ \rho(x(t,\omega),x(s,\omega)) \leq \delta(|t-s|),\ s,t \in [0,T]\}\big) \geq 1 - \epsilon$ for every $P \in \Gamma$.

In particular, if $\rho$-bounded subsets of $E$ are relatively compact, then it suffices that:

$P\big(\{\omega: \rho(x_0,x(0,\omega)) \leq R\ \text{and}\ \rho(x(t,\omega),x(s,\omega)) \leq \delta(|t-s|),\ s,t \in [0,T]\}\big) \geq 1 - \epsilon$ for every $P \in \Gamma$,

for some $R > 0$ and reference point $x_0 \in E$.
The following basic real-variable result was discovered by Garsia, Rodemich, and Rumsey.

(2.8) Lemma (Garsia et al.): Let $p$ and $\Psi$ be strictly increasing continuous functions on $[0,\infty)$ satisfying $p(0) = \Psi(0) = 0$ and $\lim_{t\to\infty}\Psi(t) = \infty$. For given $T > 0$ and $\varphi \in C([0,T];\mathbb{R}^N)$, suppose that:

$\int_0^T\!\!\int_0^T \Psi\big(|\varphi(t) - \varphi(s)|/p(|t-s|)\big)\,ds\,dt \leq B < \infty$.

Then, for all $0 \leq s \leq t \leq T$:

$|\varphi(t) - \varphi(s)| \leq 8\int_0^{t-s}\Psi^{-1}(4B/u^2)\,p(du)$.

Proof: Define $I(t) = \int_0^T \Psi\big(|\varphi(t) - \varphi(s)|/p(|t-s|)\big)\,ds$, $t \in [0,T]$. Since $\int_0^T I(t)\,dt \leq B$, there is a $t_0 \in (0,T)$ such that $I(t_0) \leq B/T$. Next, choose $t_0 > d_0 > t_1 > d_1 > \dots > t_n > d_n > \dots$ as follows. Given $t_{n-1}$, define $d_{n-1}$ by $p(d_{n-1}) = \tfrac{1}{2}p(t_{n-1})$ and choose $t_n \in (0,d_{n-1})$ so that $I(t_n) \leq 2B/d_{n-1}$ and

$\Psi\big(|\varphi(t_n) - \varphi(t_{n-1})|/p(|t_n - t_{n-1}|)\big) \leq 2I(t_{n-1})/d_{n-1}$.

Such a $t_n$ exists because each of the specified conditions can fail on a set of measure at most $d_{n-1}/2$.

Clearly $2p(d_{n+1}) = p(t_{n+1}) < p(d_n)$; thus $t_n \downarrow 0$ as $n\to\infty$. Also, $p(t_n - t_{n+1}) \leq p(t_n) = 2p(d_n) = 4\big(p(d_n) - \tfrac{1}{2}p(d_n)\big) \leq 4\big(p(d_n) - p(d_{n+1})\big)$. Hence, with $d_{-1} \equiv T$:

$|\varphi(t_{n+1}) - \varphi(t_n)| \leq \Psi^{-1}(2I(t_n)/d_n)\,p(t_n - t_{n+1}) \leq \Psi^{-1}(4B/d_{n-1}d_n)\cdot 4\big(p(d_n) - p(d_{n+1})\big) \leq 4\int_{d_{n+1}}^{d_n}\Psi^{-1}(4B/u^2)\,p(du)$,

and so $|\varphi(t_0) - \varphi(0)| \leq 4\int_0^T \Psi^{-1}(4B/u^2)\,p(du)$. By the same argument going in the opposite time direction, $|\varphi(T) - \varphi(t_0)| \leq 4\int_0^T \Psi^{-1}(4B/u^2)\,p(du)$. Hence, we now have:

(2.9)   $|\varphi(T) - \varphi(0)| \leq 8\int_0^T \Psi^{-1}(4B/u^2)\,p(du)$.

To complete the proof, let $0 \leq \sigma < \tau \leq T$ be given and apply (2.9) to $\tilde\varphi(t) = \varphi(\sigma + (\tau-\sigma)t/T)$ and $\tilde p(t) = p((\tau-\sigma)t/T)$. Since

$\int_0^T\!\!\int_0^T \Psi\big(|\tilde\varphi(t) - \tilde\varphi(s)|/\tilde p(|t-s|)\big)\,ds\,dt = (T/(\tau-\sigma))^2\int_\sigma^\tau\!\!\int_\sigma^\tau \Psi\big(|\varphi(t) - \varphi(s)|/p(|t-s|)\big)\,ds\,dt \leq (T/(\tau-\sigma))^2 B \equiv \tilde B$,

we conclude that:

$|\varphi(\tau) - \varphi(\sigma)| \leq 8\int_0^T \Psi^{-1}(4\tilde B/u^2)\,\tilde p(du) = 8\int_0^{\tau-\sigma}\Psi^{-1}(4B/u^2)\,p(du)$. Q.E.D.
(2.10) Exercise: Generalize the preceding as follows. Let $(L,\|\cdot\|)$ be a normed linear space, $r > 0$, $d \in \mathbb{Z}^+$, and $\varphi: \mathbb{R}^d \to L$ a weakly continuous map. Set $B(r) = \{x \in \mathbb{R}^d: |x| < r\}$ and suppose that

$\int_{B(r)}\!\int_{B(r)} \Psi\big(\|\varphi(y) - \varphi(x)\|/p(|y-x|)\big)\,dx\,dy \leq B < \infty$.

Show that

$\|\varphi(y) - \varphi(x)\| \leq 8\int_0^{|y-x|}\Psi^{-1}\big(4^{d+1}B/\gamma u^{2d}\big)\,p(du)$,

where $\gamma = \inf\{|(x + B(r)) \cap B(1)|/r^d: |x| \leq 1\ \text{and}\ r \leq 2\}$. A proof can be made by mimicking the argument used to prove Lemma (2.8) (cf. 2.4.1 on p. 60 of [S.&V.]).
(2.12) Theorem (Kolmogorov's Criterion): Let $(\Omega,\mathcal{F},P)$ be a probability space and $f$ a measurable map of $\mathbb{R}^d\times\Omega$ into the normed linear space $(L,\|\cdot\|)$. Assume that $x \in \mathbb{R}^d \mapsto f(x,\omega)$ is weakly continuous for each $\omega \in \Omega$ and that for some $q \in [1,\infty)$, $r > 0$, $\alpha > 0$, and $A < \infty$:

(2.13)   $E^P\big[\|f(y) - f(x)\|^q\big] \leq A|y - x|^{d+\alpha}$,   $x,y \in B(r)$.

Then, for all $\lambda > 0$:

(2.14)   $P\Big(\sup_{x\neq y \in B(r)}\|f(y) - f(x)\|/|y - x|^\beta \geq \lambda\Big) \leq AB/\lambda^q$,

where $\beta = \alpha/2q$ and $B < \infty$ depends only on $d$, $q$, $r$, and $\alpha$.

Proof: Let $p = 2d + \alpha/2$. Then:

$\int_{B(r)}\!\int_{B(r)} E^P\big[\big(\|f(y) - f(x)\|/|y - x|^{p/q}\big)^q\big]\,dx\,dy \leq AB'$,

where

$B' = \int_{B(r)}\!\int_{B(r)} \frac{dx\,dy}{|y - x|^{d-\alpha/2}} < \infty$.

Next, set $Y(\omega) = \int_{B(r)}\!\int_{B(r)} \big[\|f(y,\omega) - f(x,\omega)\|/|y - x|^{p/q}\big]^q\,dx\,dy$. Then, by Fubini's theorem, $E^P[Y] \leq AB'$, and so: $P(Y \geq \lambda^q) \leq AB'/\lambda^q$, $\lambda > 0$. In addition, by (2.10):

$\|f(y,\omega) - f(x,\omega)\| \leq 8\int_0^{|y-x|}\big(4^{d+1}Y(\omega)/\gamma u^{2d}\big)^{1/q}\,d(u^{p/q}) \leq CY(\omega)^{1/q}|y - x|^\beta$. Q.E.D.
(2.15) Corollary: Let $\Omega = C([0,\infty);\mathbb{R}^N)$ and suppose that $\Gamma \subseteq M_1(\Omega)$ has the properties that:

$\lim_{R\to\infty}\sup_{P\in\Gamma} P(|x(0)| \geq R) = 0$

and that for each $T > 0$:

$\sup_{P\in\Gamma}\ \sup_{0\leq s<t\leq T} E^P\big[|x(t) - x(s)|^q\big]/(t - s)^{1+\alpha} < \infty$

for some $q \in [1,\infty)$ and $\alpha > 0$. Then $\Gamma$ is relatively compact.
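For the Wiener measure constructed in the next two sections, the moment hypothesis of Corollary (2.15) holds with $q = 4$ and $\alpha = 1$, since in one dimension $E\big[|x(t)-x(s)|^4\big] = 3(t-s)^2$. A Monte Carlo sketch (the seed, sample size, and tolerances are my choices) checking this moment identity for Gaussian increments:

```python
import numpy as np

# Monte Carlo check (fixed seed, loose tolerance) that one-dimensional
# Gaussian increments satisfy E|x(t) - x(s)|^4 = 3 (t-s)^2, i.e. the
# moment condition of Corollary (2.15) with q = 4 and alpha = 1.
rng = np.random.default_rng(0)
t, s, n = 0.7, 0.2, 200_000
increments = rng.normal(0.0, np.sqrt(t - s), size=n)  # x(t) - x(s) ~ N(0, t-s)
fourth_moment = np.mean(increments ** 4)
assert abs(fourth_moment - 3 * (t - s) ** 2) < 0.03   # target value 0.75
```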
3. Constructing Measures on $C([0,\infty);\mathbb{R}^N)$:

Throughout this section, and very often in what follows, $\Omega$ denotes the Polish space $C([0,\infty);\mathbb{R}^N)$, $\mathcal{M} = \mathcal{B}_\Omega$, and $\mathcal{M}_t = \sigma(x(s): 0 \leq s \leq t)$, $t \geq 0$.

(3.1) Exercise: Check that $\mathcal{M} = \sigma\big(\bigcup_{t>0}\mathcal{M}_t\big)$. In particular, conclude that if $P,Q \in M_1(\Omega)$ satisfy $P(x(t_0) \in \Gamma_0,\dots,x(t_n) \in \Gamma_n) = Q(x(t_0) \in \Gamma_0,\dots,x(t_n) \in \Gamma_n)$ for all $n \geq 0$, $0 \leq t_0 <\dots< t_n$, and $\Gamma_0,\dots,\Gamma_n \in \mathcal{B}_{\mathbb{R}^N}$, then $P = Q$.
Next, for each $n \geq 0$ and $0 \leq t_0 <\dots< t_n$, suppose that $P_{t_0,\dots,t_n} \in M_1\big((\mathbb{R}^N)^{n+1}\big)$ and assume that the family $\{P_{t_0,\dots,t_n}\}$ is consistent in the sense that:

(3.2)   $P_{t_0,\dots,t_{k-1},t_{k+1},\dots,t_n}(\Gamma_0\times\dots\times\Gamma_{k-1}\times\Gamma_{k+1}\times\dots\times\Gamma_n) = P_{t_0,\dots,t_n}(\Gamma_0\times\dots\times\Gamma_{k-1}\times\mathbb{R}^N\times\Gamma_{k+1}\times\dots\times\Gamma_n)$

for all $n \geq 0$, $0 \leq k \leq n$, $0 \leq t_0 <\dots< t_n$, and $\Gamma_0,\dots,\Gamma_n \in \mathcal{B}_{\mathbb{R}^N}$.
(3.3) Example: One of the most important sources of consistent families are (Markov) transition probability functions. Namely, the function $P(s,x;t,\cdot)$, defined for $0 \leq s < t$ and $x \in \mathbb{R}^N$ and taking values in $M_1(\mathbb{R}^N)$, is called a transition probability function on $\mathbb{R}^N$ if it is measurable and satisfies the Chapman-Kolmogorov equation:

(3.4)   $P(s,x;t,\Gamma) = \int_{\mathbb{R}^N} P(u,y;t,\Gamma)\,P(s,x;u,dy)$

for all $0 \leq s < u < t$, $x \in \mathbb{R}^N$, and $\Gamma \in \mathcal{B}_{\mathbb{R}^N}$. Given an initial distribution $\mu_0 \in M_1(\mathbb{R}^N)$, we associate with $\mu_0$ and $P(s,x;t,\cdot)$ the consistent family $\{P_{t_0,\dots,t_n}\}$ determined by

$P_{t_0,\dots,t_n}(\Gamma_0\times\dots\times\Gamma_n) = \int_{\Gamma_0}\mu_0(dx_0)\int_{\Gamma_1}P(t_0,x_0;t_1,dx_1)\dots\int_{\Gamma_n}P(t_{n-1},x_{n-1};t_n,dx_n)$.
(3.5) Theorem: Let $\{P_{t_0,\dots,t_n}\}$ be a consistent family and assume that for each $T > 0$:

(3.6)   $\sup_{0\leq s<t\leq T}\int\!\!\int |y - x|^q\,P_{s,t}(dx\times dy)\big/(t-s)^{1+\alpha} < \infty$

for some $q \in [1,\infty)$ and $\alpha > 0$. Then there exists a unique $P \in M_1(\Omega)$ such that $P_{t_0,\dots,t_n} = P\circ(x(t_0),\dots,x(t_n))^{-1}$ for all $n \geq 0$ and $0 \leq t_0 <\dots< t_n$. (Throughout, $P\circ\Phi^{-1}(\Gamma) \equiv P(\Phi^{-1}(\Gamma))$.)

Proof: The uniqueness is immediate from Exercise (3.1). To prove existence, define for $m \geq 0$ the map $\Phi_m: (\mathbb{R}^N)^{4^m+1}\to\Omega$ so that $x(t,\Phi_m(x_0,\dots,x_{4^m})) = x_k + 2^m(t - k/2^m)(x_{k+1} - x_k)$ if $k/2^m \leq t \leq (k+1)/2^m$ and $0 \leq k < 4^m$, and $x(t,\Phi_m(x_0,\dots,x_{4^m})) = x_{4^m}$ if $t \geq 2^m$. Next, set $P_m = P_{t_0,\dots,t_{n_m}}\circ\Phi_m^{-1}$, where $n_m = 4^m$ and $t_k = k/2^m$. Then, by Corollary (2.15), $\{P_m: m \geq 0\}$ is relatively compact in $M_1(\Omega)$. Moreover, if $P$ is any limit of $\{P_m: m \geq 0\}$, then

(3.7)   $E^P\big[\varphi_0(x(t_0))\dots\varphi_n(x(t_n))\big] = \int\varphi_0(x_0)\dots\varphi_n(x_n)\,P_{t_0,\dots,t_n}(dx_0\times\dots\times dx_n)$

for all $n \geq 0$, dyadic $0 \leq t_0 <\dots< t_n$, and $\varphi_0,\dots,\varphi_n \in C_b(\mathbb{R}^N)$. Since both sides of (3.7) are continuous with respect to $(t_0,\dots,t_n)$, it follows that $P$ has the required property. Q.E.D.
(3.8) Exercise: Use (3.6) to check the claims, made in the preceding proof, that $\{P_m: m \geq 0\}$ is relatively compact and that the right hand side of (3.7) is continuous with respect to $(t_0,\dots,t_n)$. Also, show that if $P(s,x;t,\cdot)$ is a transition probability function which satisfies:

(3.9)   $\sup_{\substack{0\leq s<t\leq T\\ x\in\mathbb{R}^N}}\int |y - x|^q\,P(s,x;t,dy)\big/(t-s)^{1+\alpha} < \infty$

for each $T > 0$ and some $q \in [1,\infty)$ and $\alpha > 0$, then the family associated with any initial distribution is consistent and satisfies (3.6).
4. Wiener Measure, Some Elementary Properties:

We continue with the notation used in the preceding section. The classic example of a measure on $\Omega$ is the one constructed by N. Wiener. Namely, set $P(s,x;t,dy) = g(t-s,y-x)\,dy$, where

(4.1)   $g(t,y) = (2\pi t)^{-N/2}\exp\big(-|y|^2/2t\big)$

is the (standard) Gauss (or Weierstrass) kernel. It is an easy computation to check that:

(4.2)   $\int \exp\Big[\sum_{j=1}^N \theta_j y_j\Big]g(t,y)\,dy = \exp\Big[\frac{t}{2}\sum_{j=1}^N \theta_j^2\Big]$

for any $t > 0$ and $\theta_1,\dots,\theta_N \in \mathbb{C}$; and from (4.2), one can easily show that $P(s,x;t,\cdot)$ is a transition probability function which satisfies

(4.3)   $\int |y - x|^q\,P(s,x;t,dy) = C_N(q)(t - s)^{q/2}$

for each $q \in [1,\infty)$. In particular, (3.9) holds with $q = 4$ and $\alpha = 1$. The measure $P \in M_1(\Omega)$ corresponding to an initial distribution $\mu_0$ and this $P(s,x;t,\cdot)$ is called the (N-dimensional) Wiener measure with initial distribution $\mu_0$ and is denoted by $W_{\mu_0}$. In particular, when $\mu_0 = \delta_x$, we use $W_x$ in place of $W_{\delta_x}$ and refer to $W_x$ as the (N-dimensional) Wiener measure starting at $x$; and when $x = 0$, we will use $W$ (or, when dimension is emphasized, $W^{(N)}$) instead of $W_0$ and will call $W$ the (N-dimensional) Wiener measure. In this connection, we introduce here the notion of an N-dimensional Wiener process. Namely, given a probability space $(E,\mathcal{F},P)$, we will say that $\beta: [0,\infty)\times E\to\mathbb{R}^N$ is an (N-dimensional) Wiener process under $P$ if $\beta$ is measurable, $t \mapsto \beta(t)$ is $P$-almost surely continuous, and $P\circ\beta^{-1} = W^{(N)}$.
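Identity (4.2) can be checked numerically for $N = 1$ and real $\theta$, where it reads $\int e^{\theta y} g(t,y)\,dy = e^{t\theta^2/2}$; the grid and parameter values below are my choices.

```python
import numpy as np

# Numerical check of (4.2) with N = 1 and real theta:
#   integral of exp(theta * y) g(t, y) dy = exp(t theta^2 / 2),
# where g(t, y) = (2 pi t)^(-1/2) exp(-y^2 / 2t) is the Gauss kernel (4.1).
t, theta = 0.5, 1.3
y = np.linspace(-30.0, 30.0, 200_001)
g = np.exp(-y ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
lhs = (np.exp(theta * y) * g).sum() * (y[1] - y[0])   # Riemann sum
rhs = np.exp(t * theta ** 2 / 2)
assert abs(lhs - rhs) < 1e-8
```

Because the integrand is smooth and decays rapidly, the equally spaced sum converges extremely fast, so the crude quadrature above is already accurate to many digits.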
(4.4) Exercise: Identifying $C([0,\infty);\mathbb{R}^N)$ with $C([0,\infty);\mathbb{R})^N$, show that $W^{(N)} = (W^{(1)})^N$. In addition, show that $x(\cdot)$ is a Wiener process under $W$ and that $W_x = W\circ T_x^{-1}$, where $T_x: \Omega\to\Omega$ is given by $x(t,T_x\omega) = x + x(t,\omega)$. Finally, for a given $s \geq 0$, let $\omega\mapsto W_\omega^s$ be the r.c.p.d. of $W|\mathcal{M}_s$. Show that, for $W$-almost all $\omega$, $x(s+\cdot) - x(s,\omega)$ is a Wiener process under $W_\omega^s$, and use this to conclude that $W_\omega^s\circ\theta_s^{-1} = W_{x(s,\omega)}$ (a.s., $W$), where $\theta_s: \Omega\to\Omega$ is the time shift map given by
$x(t,\theta_s\omega) = x(s+t,\omega)$.

Thus far we have discussed Wiener measure from the Markovian point of view (i.e. in terms of transition probability functions). An equally important way to approach this subject is from the Gaussian standpoint. From the Gaussian standpoint, $W$ is characterized by the equation:

(4.5)   $E^W\Big[\exp\Big(i\sum_{k=1}^n (\theta_k,x(t_k))_{\mathbb{R}^N}\Big)\Big] = \exp\Big(-\frac{1}{2}\sum_{k,\ell=1}^n (t_k\wedge t_\ell)(\theta_k,\theta_\ell)_{\mathbb{R}^N}\Big)$

for all $n \geq 1$, $t_1,\dots,t_n \in [0,\infty)$, and $\theta_1,\dots,\theta_n \in \mathbb{R}^N$.
(4.6) Exercise: Check that (4.5) holds and that it characterizes $W$. Next, define $\Phi_\lambda: \Omega\to\Omega$ by $x(t,\Phi_\lambda(\omega)) = \lambda^{1/2}x(t/\lambda,\omega)$ for $\lambda > 0$; and, using (4.5), show that $W = W\circ\Phi_\lambda^{-1}$. This invariance property of $W$ is often called the Brownian scaling property. In order to describe the time inversion property of Wiener processes, one must first check that $W\big(\lim_{t\to\infty} x(t)/t = 0\big) = 1$. To this end, note that:

$W\Big(\sup_{t\geq m}|x(t)|/t \geq \epsilon\Big) \leq \sum_{n=m}^\infty W\Big(\sup_{n\leq t\leq n+1}|x(t)| \geq n\epsilon\Big)$

and that, by Brownian scaling:

$W\Big(\sup_{n\leq t\leq n+1}|x(t)| \geq n\epsilon\Big) \leq W\Big(\sup_{0\leq t\leq 2}|x(t)| \geq n^{1/2}\epsilon\Big)$.

Now combine (4.3) with (2.14) to conclude that $W\big(\sup_{n\leq t\leq n+1}|x(t)| \geq n\epsilon\big) \leq C/n^2\epsilon^4$ and therefore that $W\big(\lim_{t\to\infty} x(t)/t = 0\big) = 1$. The Brownian time inversion property can now be stated as follows. Define $\beta(0) = 0$ and, for $t > 0$, set $\beta(t) = tx(1/t)$. Using the preceding and (4.5), check that $\beta$ is a Wiener process under $W$.
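The scaling invariance $W = W\circ\Phi_\lambda^{-1}$ can be seen empirically: the scaled variable $\lambda^{1/2}x(t/\lambda)$ has the same (Gaussian, mean-zero, variance $t$) law as $x(t)$. The seed, sample sizes, and tolerances below are my choices.

```python
import numpy as np

# Empirical check of Brownian scaling: if x is a Brownian path then
# y(t) = lambda^{1/2} x(t/lambda) is again Brownian, so Var y(t) = t.
rng = np.random.default_rng(2)
lam, t, n_paths, n_steps = 4.0, 1.0, 100_000, 64

# Simulate x on [0, t/lam], enough to evaluate x(t/lam).
dt = (t / lam) / n_steps
x_at = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)).sum(axis=1)  # x(t/lam)
y_at = np.sqrt(lam) * x_at                                           # y(t)

assert abs(y_at.mean()) < 0.02
assert abs(y_at.var() - t) < 0.03      # Var y(t) = t, as for x(t)
```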
We close this chapter with a justly famous result due to Wiener. In the next chapter we will derive this same result from a much more sophisticated viewpoint.

(4.6) Theorem: $W$-almost no $\omega \in \Omega$ is Lipschitz continuous at even one $t \geq 0$.

Proof: In view of Exercise (4.4), it suffices to treat the case when $N = 1$ and to show that $W$-almost no $\omega$ is Lipschitz continuous at any $t \in [0,1)$. But if $\omega$ were Lipschitz continuous at $t \in [0,1)$, then there would exist $\ell,m \in \mathbb{Z}^+$ such that $|x((k+1)/n) - x(k/n)|$ would be less than $\ell/n$ for all $n \geq m$ and three consecutive $k$'s between $0$ and $n+2$. Hence, it suffices to show that the sets

$B(\ell,m) = \bigcap_{n=m}^\infty\bigcup_{k=1}^{n+2}\bigcap_{j=0}^2 A(\ell,n,k+j)$,   $\ell,m \in \mathbb{Z}^+$,

where $A(\ell,n,k) \equiv \{|x((k+1)/n) - x(k/n)| \leq \ell/n\}$, have $W$-measure $0$. But, by (4.1) and Brownian scaling:

$W(B(\ell,m)) \leq \overline{\lim_{n\to\infty}}\,nW\big(|x(1/n)| \leq \ell/n\big)^3 = \overline{\lim_{n\to\infty}}\,nW\big(|x(1)| \leq \ell/n^{1/2}\big)^3 \leq \overline{\lim_{n\to\infty}}\,n\Big(\int_{-\ell/n^{1/2}}^{\ell/n^{1/2}} g(1,y)\,dy\Big)^3 = 0$. Q.E.D.
(4.7) Remark: P. Levy obtained a far sharper version of the preceding; namely, he showed that:

(4.8)   $W\Big(\lim_{\delta\downarrow 0}\ \sup_{\substack{0\leq s<t\leq 1\\ t-s\leq\delta}}\frac{|x(t) - x(s)|}{(2\delta\log(1/\delta))^{1/2}} = 1\Big) = 1$.

The lower bound on the lim sup is a quite elementary application of the Borel-Cantelli lemma (cf. p. 14 of H.P. McKean's Stochastic Integrals, Academic Press, 1969), but the upper bound is a little more difficult. A derivation of the less sharp estimate in which the upper bound 1 is replaced by 8 can be based on the reasoning used to prove (2.14). See 2.4.8 in [S.&V.] for more details.
II. DIFFUSIONS AND MARTINGALES:

1. A Brief Introduction to Classical Diffusion Theory:

We continue with the notation used in section 1.3. Let $S^+(\mathbb{R}^N)$ denote the space of non-negative definite symmetric matrices, and for given bounded measurable functions $a: [0,\infty)\times\mathbb{R}^N\to S^+(\mathbb{R}^N)$ and $b: [0,\infty)\times\mathbb{R}^N\to\mathbb{R}^N$, define the operator valued map $t \in [0,\infty)\mapsto L_t$ by

(1.1)   $L_t = \frac{1}{2}\sum_{i,j=1}^N a^{ij}(t,x)\,\partial_{x_i}\partial_{x_j} + \sum_{i=1}^N b^i(t,x)\,\partial_{x_i}$.

The following theorem can be proved using quite elementary analytic methods (cf. Chapter 3 in [S.&V.]).

(1.2) Theorem: Assume that $a \in C_b^{1,3}([0,\infty)\times\mathbb{R}^N;S^+(\mathbb{R}^N))$ and that $b \in C_b^{1,2}([0,\infty)\times\mathbb{R}^N;\mathbb{R}^N)$. Then there is a unique transition probability function $P(s,x;t,\cdot)$ on $\mathbb{R}^N$ such that for each $T > 0$ and all $f \in C_b^{1,2}([0,T]\times\mathbb{R}^N)$:

(1.3)   $\int f(T,y)P(s,x;T,dy) - f(s,x) = \int_s^T dt\int(\partial_t + L_t)f(t,y)\,P(s,x;t,dy)$,   $(s,x) \in [0,T]\times\mathbb{R}^N$.

Moreover, if $T > 0$ and $\varphi \in C_c^\infty(\mathbb{R}^N)$, then $(s,x) \in [0,T]\times\mathbb{R}^N\mapsto u^{T,\varphi}(s,x) = \int\varphi(y)P(s,x;T,dy)$ is an element of $C_b^{1,2}([0,T]\times\mathbb{R}^N)$.

(1.4) Remark: Notice that when $L_t = \frac{1}{2}\Delta$ (i.e. $a = I$ and $b \equiv 0$), $P(s,x;t,dy) = g(t-s,y-x)\,dy$, where $g$ is the Gauss kernel given in (1.4.1).
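The statement of Remark (1.4) can be verified by finite differences: the Gauss kernel satisfies the heat equation $\partial_t g = \frac{1}{2}\partial_y^2 g$ (here in one dimension; the step sizes and evaluation point are my choices).

```python
import numpy as np

# Finite-difference check that the Gauss kernel of Remark (1.4) satisfies
# d_t g = (1/2) d_y^2 g, i.e. that a = I, b = 0 gives L_t = (1/2) Laplacian.
def g(t, y):
    return np.exp(-y ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

t, y, h = 1.0, 0.3, 1e-4
dt_g = (g(t + h, y) - g(t - h, y)) / (2 * h)                     # time derivative
dyy_g = (g(t, y + h) - 2 * g(t, y) + g(t, y - h)) / h ** 2       # second space derivative
assert abs(dt_g - 0.5 * dyy_g) < 1e-6
```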
Throughout the rest of this section we will be working with the situation described in Theorem (1.2). We first observe that when $\varphi \in C_c^\infty(\mathbb{R}^N)$, $u^{T,\varphi}$ is the unique $u \in C_b^{1,2}([0,T]\times\mathbb{R}^N)$ such that

(1.5)   $(\partial_s + L_s)u = 0$ in $[0,T)\times\mathbb{R}^N$,   $\lim_{s\uparrow T} u(s,\cdot) = \varphi$.

The uniqueness follows from (1.3) upon taking $f = u$. To prove that $u = u^{T,\varphi}$ satisfies (1.5), note that

$\partial_s u(s,x) = \lim_{h\downarrow 0}\big(u(s+h,x) - u(s,x)\big)/h = \lim_{h\downarrow 0}\Big[u(s+h,x) - \int u(s+h,y)P(s,x;s+h,dy)\Big]\Big/h = -\lim_{h\downarrow 0}\frac{1}{h}\int_s^{s+h}dt\int\big[L_t u(s+h,\cdot)\big](y)\,P(s,x;t,dy) = -L_s u(s,x)$,

where we have used the Chapman-Kolmogorov equation ((1.3.4)), followed by (1.3). We next prove an important estimate for the tail distribution of the measure $P(s,x;T,\cdot)$.
(1.6) Lemma: Let $A = \sup_{t,x}\|a(t,x)\|_{op}$ and $B = \sup_{t,x}|b(t,x)|$. Then for all $0 \leq s < T$ and $R \geq N^{1/2}B(T-s)$:

(1.7)   $P(s,x;T,B(x,R)^c) \leq 2N\exp\big[-(R - N^{1/2}B(T-s))^2/2NA(T-s)\big]$.

In particular, for each $T > 0$ and $q \in [1,\infty)$, there is a $C(T,q) < \infty$, depending only on $N$, $A$, and $B$, such that

(1.8)   $\int|y - x|^q\,P(s,x;t,dy) \leq C(T,q)(t-s)^{q/2}$,   $0 \leq s < t \leq T$.
Proof: Let $A$ and $B$ be any pair of numbers which are strictly larger than the ones specified, and let $T > 0$ and $x \in \mathbb{R}^N$ be given. Choose $\eta \in C_c^\infty(\mathbb{R})$ so that $0 \leq \eta \leq 1$, $\eta \equiv 1$ on $[-1,1]$, and $\eta \equiv 0$ off of $(-2,2)$. Given $M \geq 1$, define $\psi_M: \mathbb{R}^N\to\mathbb{R}^N$ by

$(\psi_M(y))^i = \int_0^{y^i - x^i}\eta(\xi/M)\,d\xi$,   $1 \leq i \leq N$;

and consider the function

$f_{M,\theta}(t,y) = \exp\big[(\theta,\psi_M(y))_{\mathbb{R}^N} + (A|\theta|^2/2 + B|\theta|)(T-t)\big]$

for $\theta \in \mathbb{R}^N$ and $(t,y) \in [0,T]\times\mathbb{R}^N$. Clearly $f_{M,\theta} \in C_b^{1,2}([0,T]\times\mathbb{R}^N)$. Moreover, for sufficiently large $M$'s, $(\partial_t + L_t)f_{M,\theta} \leq 0$. Thus, by (1.3):

$\int f_{M,\theta}(T,y)P(s,x;T,dy) \leq f_{M,\theta}(s,x)$

for all sufficiently large $M$'s. After letting $M\to\infty$ and applying Fatou's lemma, we now get:

(1.9)   $\int\exp\big[(\theta,y-x)_{\mathbb{R}^N}\big]P(s,x;T,dy) \leq \exp\big[(A|\theta|^2/2 + B|\theta|)(T-s)\big]$.

Since (1.9) holds for all choices of $A$ and $B$ strictly larger than those specified, it must also hold for the ones which we were given. To complete the proof of (1.7), note that:

$P(s,x;T,B(x,R)^c) \leq \sum_{i=1}^N P\big(s,x;T,\{y: |y^i - x^i| \geq R/N^{1/2}\}\big) \leq 2N\max_{\theta\in S^{N-1}} P\big(s,x;T,\{y: (\theta,y-x)_{\mathbb{R}^N} \geq R/N^{1/2}\}\big)$

and, by (1.9):

$P\big(s,x;T,\{y: (\theta,y-x)_{\mathbb{R}^N} \geq R/N^{1/2}\}\big) \leq e^{-\lambda R/N^{1/2}}\int\exp\big[\lambda(\theta,y-x)_{\mathbb{R}^N}\big]P(s,x;T,dy) \leq \exp\big[\lambda^2 A(T-s)/2 - \lambda(R - N^{1/2}B(T-s))/N^{1/2}\big]$

for all $\theta \in S^{N-1}$ and $\lambda > 0$. Hence, if $R \geq N^{1/2}B(T-s)$ and we take $\lambda = (R - N^{1/2}B(T-s))/N^{1/2}A(T-s)$, we arrive at (1.7). Q.E.D.
In view of (1.8), it is now clear from Exercise (1.3.8) that for each $s \geq 0$ and each initial distribution $\mu_0 \in M_1(\mathbb{R}^N)$ there is a unique $P_{s,\mu_0} \in M_1(\Omega)$ such that

(1.10)   $P_{s,\mu_0}(x(t_0)\in\Gamma_0,\dots,x(t_n)\in\Gamma_n) = \int_{\Gamma_0}\mu_0(dx_0)\int_{\Gamma_1}P(s+t_0,x_0;s+t_1,dx_1)\dots\int_{\Gamma_n}P(s+t_{n-1},x_{n-1};s+t_n,dx_n)$

for all $n \geq 0$, $0 = t_0 <\dots< t_n$, and $\Gamma_0,\dots,\Gamma_n \in \mathcal{B}_{\mathbb{R}^N}$. We will use the notation $P_{s,x}$ in place of $P_{s,\delta_x}$.
(1.11) Theorem: The map $(s,x) \in [0,\infty)\times\mathbb{R}^N\mapsto P_{s,x} \in M_1(\Omega)$ is continuous and, for each $\mu_0 \in M_1(\mathbb{R}^N)$, $P_{s,\mu_0} = \int P_{s,x}\,\mu_0(dx)$. Moreover, $P_{s,x}$ is the one and only $P \in M_1(\Omega)$ which satisfies:

(1.12)   $P(x(0) = x) = 1$ and $P\big(x(t_2) \in \Gamma\,|\,\mathcal{M}_{t_1}\big) = P(s+t_1,x(t_1);s+t_2,\Gamma)$ (a.s., $P$)

for all $0 \leq t_1 < t_2$ and $\Gamma \in \mathcal{B}_{\mathbb{R}^N}$. Finally, if $t \geq 0$ and $\omega\mapsto(P_{s,x})_\omega^t$ is a r.c.p.d. of $P_{s,x}|\mathcal{M}_t$, then $(P_{s,x})_\omega^t\circ\theta_t^{-1} = P_{s+t,x(t,\omega)}$ for $P_{s,x}$-almost every $\omega$.
Proof: First observe that, by the last part of Theorem (1.2), $(s,x) \in [0,T]\times\mathbb{R}^N\mapsto\int\varphi(y)P(s,x;T,dy)$ is bounded and continuous for all $\varphi \in C_c^\infty(\mathbb{R}^N)$. Combining this with (1.7), one sees that this continues to be true for all $\varphi \in C_b(\mathbb{R}^N)$. Hence, by (1.10), for all $n \geq 1$, $0 < t_1 <\dots< t_n$, and $\varphi_1,\dots,\varphi_n \in C_b(\mathbb{R}^N)$, $E^{P_{s,x}}[\varphi_1(x(t_1))\dots\varphi_n(x(t_n))]$ is a bounded continuous function of $(s,x) \in [0,\infty)\times\mathbb{R}^N$. Now suppose that $(s_k,x_k)\to(s,x)$ in $[0,\infty)\times\mathbb{R}^N$ and observe that, by (1.8) and (1.2.15), the sequence $\{P_{s_k,x_k}\}$ is relatively compact in $M_1(\Omega)$. Moreover, if $\{P_{s_{k'},x_{k'}}\}$ is a convergent subsequence and $P$ is its limit, then:

$E^P[\varphi_1(x(t_1))\dots\varphi_n(x(t_n))] = \lim_{k'\to\infty}E^{P_{s_{k'},x_{k'}}}[\varphi_1(x(t_1))\dots\varphi_n(x(t_n))] = E^{P_{s,x}}[\varphi_1(x(t_1))\dots\varphi_n(x(t_n))]$

for all $n \geq 1$, $0 < t_1 <\dots< t_n$, and $\varphi_1,\dots,\varphi_n \in C_b(\mathbb{R}^N)$. Hence $P = P_{s,x}$, and so we conclude that $P_{s_k,x_k}\to P_{s,x}$. The fact that $P_{s,\mu_0} = \int P_{s,x}\,\mu_0(dx)$ is elementary now that we know that $(s,x)\mapsto P_{s,x}$ is measurable.

Our next step is to prove the final assertion concerning $(P_{s,x})_\omega^t$. When $t = 0$, there is nothing to do. Assume that $t > 0$. Given $m,n \in \mathbb{Z}^+$, $0 < \sigma_1 <\dots< \sigma_m \leq t$, $0 < \tau_1 <\dots< \tau_n$, and $\Delta_1,\dots,\Delta_m,\Gamma_1,\dots,\Gamma_n \in \mathcal{B}_{\mathbb{R}^N}$, set $A = \{x(\sigma_1)\in\Delta_1,\dots,x(\sigma_m)\in\Delta_m\}$ and $B = \{x(\tau_1)\in\Gamma_1,\dots,x(\tau_n)\in\Gamma_n\}$. Then:

$\int_A P_{s+t,x(t,\omega)}(B)\,P_{s,x}(d\omega)$
$= \int_{\Delta_1}P(s,x;s+\sigma_1,dx_1)\dots\int_{\Delta_m}P(s+\sigma_{m-1},x_{m-1};s+\sigma_m,dx_m)\times\int_{\mathbb{R}^N}P(s+\sigma_m,x_m;s+t,dy_0)\times\int_{\Gamma_1}P(s+t,y_0;s+t+\tau_1,dy_1)\dots\int_{\Gamma_n}P(s+t+\tau_{n-1},y_{n-1};s+t+\tau_n,dy_n)$
$= P_{s,x}\big(x(\sigma_1)\in\Delta_1,\dots,x(\sigma_m)\in\Delta_m,x(t+\tau_1)\in\Gamma_1,\dots,x(t+\tau_n)\in\Gamma_n\big) = \int_A (P_{s,x})_\omega^t\circ\theta_t^{-1}(B)\,P_{s,x}(d\omega)$.

Hence, for all $A \in \mathcal{M}_t$ and $B \in \mathcal{M}$:

$\int_A (P_{s,x})_\omega^t\circ\theta_t^{-1}(B)\,P_{s,x}(d\omega) = \int_A P_{s+t,x(t,\omega)}(B)\,P_{s,x}(d\omega)$.

Therefore, for each $B \in \mathcal{M}$, $(P_{s,x})_\omega^t\circ\theta_t^{-1}(B) = P_{s+t,x(t,\omega)}(B)$ (a.s., $P_{s,x}$). Since $\mathcal{M}$ is countably generated, this, in turn, implies that $(P_{s,x})_\omega^t\circ\theta_t^{-1} = P_{s+t,x(t,\omega)}$ (a.s., $P_{s,x}$).

Finally, we must show that $P_{s,x}$ is characterized by (1.12). That $P_{s,x}$ satisfies (1.12) is a special case of the result proved in the preceding paragraph. On the other hand, if $P \in M_1(\Omega)$ satisfies (1.12), then one can easily work by induction on $n \geq 0$ to prove that $P$ satisfies (1.10) with $\mu_0 = \delta_x$. Q.E.D.
(1.13) Corollary: For each $(s,x) \in [0,\infty)\times\mathbb{R}^N$, $P_{s,x}$ is the unique $P \in M_1(\Omega)$ which satisfies $P(x(0) = x) = 1$ and:

(1.14)   $E^P\big[\varphi(x(t_2)) - \varphi(x(t_1))\,\big|\,\mathcal{M}_{t_1}\big] = E^P\Big[\int_{t_1}^{t_2}[L_{s+t}\varphi](x(t))\,dt\,\Big|\,\mathcal{M}_{t_1}\Big]$ (a.s., $P$)

for all $0 \leq t_1 < t_2$ and $\varphi \in C_c^\infty(\mathbb{R}^N)$.
Proof: To see that $P_{s,x}$ satisfies (1.14), note that, by (1.12) and (1.3):

$$E^{P_{s,x}}[\varphi(x(t_2)) - \varphi(x(t_1))\,|\,\mathcal{M}_{t_1}] = \int_{t_1}^{t_2} E^{P_{s,x}}\big[[L_{s+t}\varphi](x(t))\,\big|\,\mathcal{M}_{t_1}\big]\,dt \quad (\text{a.s.},P_{s,x}).$$

Conversely, if $P$ satisfies (1.14), then it is easy to check that

$$E^P[f(t_2,x(t_2)) - f(t_1,x(t_1))\,|\,\mathcal{M}_{t_1}] = E^P\Big[\int_{t_1}^{t_2} [(\partial_t + L_{s+t})f](t,x(t))\,dt\,\Big|\,\mathcal{M}_{t_1}\Big] \quad (\text{a.s.},P) \tag{1.15}$$

for all $0 \le t_1 < t_2$ and $f \in C_b^{1,2}([0,\infty)\times\mathbb{R}^N)$. In particular, if $\varphi \in C_b(\mathbb{R}^N)$ and $u(t,y) = \int \varphi(\eta)\,P(s+t,y;s+t_2,d\eta)$, then, by the last part of Theorem (1.2) together with (1.5), $u \in C_b^{1,2}([0,t_2)\times\mathbb{R}^N)$, $(\partial_t + L_{s+t})u = 0$ for $t \in [0,t_2)$, and $u(t_2,\cdot) = \varphi$. Hence, from (1.15) with $f = u$:

$$E^P[\varphi(x(t_2))\,|\,\mathcal{M}_{t_1}] = u(t_1,x(t_1)) \quad (\text{a.s.},P).$$

Combined with $P(x(0) = x) = 1$, this proves that $P$ satisfies the condition in (1.12) characterizing $P_{s,x}$. Q.E.D.
(1.16) Remark: The characterization of $P_{s,x}$ given in Corollary (1.13) has the great advantage that it only involves $L_t$ and does not make direct reference to $P(s,x;t,\cdot)$. Since, in most situations, $L_t$ is a much more primitive quantity than the associated quantity $P(s,x;t,\cdot)$, it should be clear that there is considerable advantage to having $P_{s,x}$ characterized directly in terms of $L_t$ itself. In addition, (1.14) has great intuitive appeal. What it says is that, in some sense, $P_{s,x}$ sees the paths $\omega$ as the "integral curves of $L_t$ initiating from $x$ at time $s$." Indeed, (1.14) can be converted into the statement that

$$E^P[\varphi(x(t+h)) - \varphi(x(t))\,|\,\mathcal{M}_t] = h[L_t\varphi](x(t)) + o(h), \quad h\downarrow 0,$$

which, in words, says that "based on complete knowledge of the past up until time $t$, the best prediction about the $P$-value of $\varphi(x(t+h)) - \varphi(x(t))$ is, up to lower order terms in $h$, $h[L_t\varphi](x(t))$." This intuitive idea is expanded upon in the following exercise.
(1.17) Exercise: Assume that $a \equiv 0$ and that $b$ is independent of $t$. Show, directly from (1.14), that in this case $P_{0,x} = \delta_{X(\cdot,x)}$, where $X(\cdot,x)$ is the integral curve of the vector field $b$ starting at $x$. In fact, you can conclude this fact about $P_{0,x}$ from $P(x(0) = x) = 1$ and the unconditional version of (1.14):

$$E^P[\varphi(x(t_2)) - \varphi(x(t_1))] = E^P\Big[\int_{t_1}^{t_2} [L_{s+t}\varphi](x(t))\,dt\Big]. \tag{1.14$'$}$$

Finally, when $L_t = \tfrac{1}{2}\Delta$, show that the unconditional statement is not sufficient to characterize $\mathcal{W}_x$. (Hint: let $X$ be an $\mathbb{R}^N$-valued Gaussian random variable with mean $0$ and covariance $I$, denote by $P \in M_1(\Omega)$ the distribution of the paths $t \mapsto t^{1/2}X$, and check that (1.14$'$) holds with this $P$ but that $P \ne \mathcal{W}_0$.)
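The hint admits a quick numerical check. The sketch below (my own illustration, not part of the text) takes $N = 1$ and $\varphi(x) = x^2$, for which $\tfrac{1}{2}\varphi'' \equiv 1$, so (1.14$'$) asserts $E[x(t_2)^2 - x(t_1)^2] = t_2 - t_1$; both Wiener measure and the law of $t \mapsto t^{1/2}X$ satisfy this, while their two-point correlations differ.

```python
import numpy as np

# With phi(x) = x^2 the unconditional relation (1.14') reads
# E[x(t2)^2 - x(t1)^2] = t2 - t1.  Both Wiener measure and the law of the
# degenerate paths t |-> sqrt(t) X, X ~ N(0,1), satisfy it, yet
# E[x(t1) x(t2)] is t1 /\ t2 for the Wiener process but sqrt(t1 t2) for
# the degenerate paths -- so (1.14') cannot characterize Wiener measure.

rng = np.random.default_rng(0)
n_paths, t1, t2 = 200_000, 1.0, 4.0

# Wiener paths sampled at t1 and t2 (independent increments).
w1 = rng.normal(0.0, np.sqrt(t1), n_paths)
w2 = w1 + rng.normal(0.0, np.sqrt(t2 - t1), n_paths)

# Paths t |-> sqrt(t) X, one Gaussian X per path.
x = rng.normal(0.0, 1.0, n_paths)
g1, g2 = np.sqrt(t1) * x, np.sqrt(t2) * x

print(np.mean(w2**2 - w1**2))   # both sides of (1.14'): ~ t2 - t1 = 3.0
print(np.mean(g2**2 - g1**2))   # ~ 3.0 as well
print(np.mean(w1 * w2))         # ~ t1 = 1.0
print(np.mean(g1 * g2))         # ~ sqrt(t1*t2) = 2.0
```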
2. The Elements of Martingale Theory:

Let $P_{s,x}$ be as in section 1. Then (1.14) can be re-arranged to be the statement that:

$$E^{P_{s,x}}[X_\varphi(t_2)\,|\,\mathcal{M}_{t_1}] = X_\varphi(t_1) \quad (\text{a.s.},P_{s,x}), \quad 0 \le t_1 < t_2,$$

where

$$X_\varphi(t) \equiv \varphi(x(t)) - \varphi(x(0)) - \int_0^t [L_{s+\tau}\varphi](x(\tau))\,d\tau. \tag{2.1}$$
Loosely speaking, (2.1) is the statement that $X_\varphi$ is "conditionally constant" under $P_{s,x}$ in the sense that $X_\varphi(t_1)$ is "the best prediction about the $P_{s,x}$-value of $X_\varphi(t_2)$ given perfect knowledge of the past up to time $t_1$" (cf. Remark (1.16)). Of course, this is another way of viewing the idea that $P_{s,x}$ sees the path $\omega$ as an "integral curve of $L_t$." Indeed, if $\omega$ were "truly an integral curve of $L_t$," we would have that $X_\varphi(\cdot,\omega)$ is "truly constant." The point is that we must settle for $X_\varphi$ being constant only in the sense that it is "predicted to be constant." Since these conditionally constant processes arise a great deal and have many interesting properties, we will devote this section to explaining a few of the basic facts about them.

Let $(E,\mathcal{F},P)$ be a probability space and $\{\mathcal{F}_t: t \ge 0\}$ a non-decreasing family of sub-$\sigma$-algebras of $\mathcal{F}$. A map $X$ on $[0,\infty)\times E$ into a measurable space is said to be ($\{\mathcal{F}_t\}$-)progressively measurable if its restriction to $[0,T]\times E$ is $\mathcal{B}_{[0,T]}\times\mathcal{F}_T$-measurable for each $T \ge 0$.
A map $X$ on $[0,\infty)\times E$ with values in a topological space is said to be, respectively, right continuous ($P$-a.s. right continuous) or continuous ($P$-a.s. continuous) if for every ($P$-almost every) $\xi \in E$ the map $t \mapsto X(t,\xi)$ is right continuous or continuous.
(2.2) Exercise: Show that the notion of progressive measurability coincides with the notion of measurability with respect to the $\sigma$-algebra of progressively measurable sets (i.e. those subsets $\Gamma$ of $[0,\infty)\times E$ for which $\chi_\Gamma$ is a progressively measurable function). In addition, show that if $X$ is $\{\mathcal{F}_t\}$-adapted, in the sense that $X(t,\cdot)$ is $\mathcal{F}_t$-measurable for each $t \ge 0$, then $X$ is $\{\mathcal{F}_t\}$-progressively measurable if it is right continuous.
A $\mathbb{C}$-valued map $X$ on $[0,\infty)\times E$ is called a martingale if $X$ is a right-continuous, $\{\mathcal{F}_t\}$-progressively measurable function such that $X(t) \in L^1(P)$ for all $t \ge 0$ and

$$X(t_1) = E^P[X(t_2)\,|\,\mathcal{F}_{t_1}] \quad (\text{a.s.},P), \quad 0 \le t_1 < t_2. \tag{2.3}$$

Unless it is stated otherwise, it should be assumed that all martingales are real-valued. An $\mathbb{R}^1$-valued map $X$ on $[0,\infty)\times E$ is said to be a sub-martingale if $X$ is a right continuous, $\{\mathcal{F}_t\}$-progressively measurable function such that $X(t) \in L^1(P)$ for every $t \ge 0$ and

$$X(t_1) \le E^P[X(t_2)\,|\,\mathcal{F}_{t_1}] \quad (\text{a.s.},P), \quad 0 \le t_1 < t_2. \tag{2.4}$$

We will often summarize these statements by saying that $(X(t),\mathcal{F}_t,P)$ is a martingale (sub-martingale).
(2.5) Example: Besides the source of examples provided by (2.1), a natural way in which martingales arise is the following. Let $X \in L^1(P)$ and define $X(t) = E^P[X\,|\,\mathcal{F}_{[t]}]$ (we use $[r]$ to denote the integer part of an $r \in \mathbb{R}^1$). Then it is an easy matter to check that $(X(t),\mathcal{F}_t,P)$ is a martingale. More generally, let $Q$ be a totally finite measure on $(E,\mathcal{F})$ and assume that $Q|_{\mathcal{F}_t} \ll P|_{\mathcal{F}_t}$ for each $t \ge 0$. Then $(X(t),\mathcal{F}_t,P)$ is a martingale when $X(t)$ denotes the Radon-Nikodym derivative of $Q|_{\mathcal{F}_{[t]}}$ with respect to $P|_{\mathcal{F}_{[t]}}$. It turns out that these examples generate, in a reasonable sense, all the examples of martingales.

The following statement is an easy consequence of Jensen's inequality.
(2.7) Lemma: Let $(X(t),\mathcal{F}_t,P)$ be a martingale (sub-martingale) with values in the closed interval $I$, and let $\varphi$ be a continuous function on $I$ which is convex (and non-decreasing). If $\varphi\circ X(t) \in L^1(P)$ for every $t \ge 0$, then $(\varphi\circ X(t),\mathcal{F}_t,P)$ is a sub-martingale. In particular, if $q \in [1,\infty)$ and $(X(t),\mathcal{F}_t,P)$ is an $L^q$-martingale (non-negative $L^q$-sub-martingale) (i.e. $X(t) \in L^q(P)$ for all $t \ge 0$), then $(|X(t)|^q,\mathcal{F}_t,P)$ is a sub-martingale.
(2.8) Theorem (Doob's Inequality): Let $(X(t),\mathcal{F}_t,P)$ be a non-negative sub-martingale. Then, for each $T > 0$:

$$P(X^*(T) \ge R) \le \tfrac{1}{R}\,E^P[X(T),\,X^*(T) \ge R], \quad R > 0, \tag{2.9}$$

where

$$X^*(T) \equiv \sup_{0\le t\le T} X(t), \quad T \ge 0. \tag{2.10}$$

In particular, for each $T > 0$, the family $\{X(t): t \in [0,T]\}$ is uniformly $P$-integrable; and so, for every $s \ge 0$, $X(t) \to X(s)$ in $L^1(P)$ as $t\downarrow s$. Finally, if $X(T) \in L^q(P)$ for some $T > 0$ and $q \in (1,\infty)$, then

$$E^P[X^*(T)^q]^{1/q} \le \frac{q}{q-1}\,E^P[X(T)^q]^{1/q}. \tag{2.11}$$
Proof: Given $n \ge 1$, note that:

$$P\big(\max_{0\le k\le n} X(kT/n) > R\big) = \sum_{\ell=0}^n P\big(X(\ell T/n) > R \ \&\ \max_{0\le k<\ell} X(kT/n) \le R\big)$$
$$\le \frac{1}{R}\sum_{\ell=0}^n E^P\big[X(\ell T/n),\,X(\ell T/n) > R \ \&\ \max_{0\le k<\ell} X(kT/n) \le R\big]$$
$$\le \frac{1}{R}\sum_{\ell=0}^n E^P\big[X(T),\,X(\ell T/n) > R \ \&\ \max_{0\le k<\ell} X(kT/n) \le R\big] \le \frac{1}{R}\,E^P[X(T),\,X^*(T) \ge R].$$

Since $\max_{0\le k\le n} X(kT/n) \to X^*(T)$ as $n\to\infty$, the proof of (2.9) is complete.

To prove the uniform $P$-integrability statement, note that for $t \in [0,T]$:

$$E^P[X(t),\,X(t) > R] \le E^P[X(T),\,X(t) > R] \le E^P[X(T),\,X^*(T) > R].$$

Since $X(T) \in L^1(P)$ and $P(X^*(T) > R) \to 0$ as $R\to\infty$, we see that:

$$\lim_{R\to\infty}\ \sup_{t\in[0,T]} E^P[X(t),\,X(t) \ge R] = 0.$$

Finally, to prove (2.11), we show that for any pair of non-negative random variables $X$ and $Y$ satisfying $P(Y > R) \le \tfrac{1}{R}E^P[X,\,Y > R]$, $R > 0$:

$$\|Y\|_{L^q(P)} \le \frac{q}{q-1}\,\|X\|_{L^q(P)}, \quad q \in (1,\infty).$$

Clearly, we may assume ahead of time that $Y$ is bounded. The proof then is a simple integration by parts:

$$E^P[Y^q] = q\int_0^\infty R^{q-1}P(Y > R)\,dR \le q\int_0^\infty R^{q-2}E^P[X,\,Y > R]\,dR$$
$$= q\int_0^\infty R^{q-2}\,dR\int_0^\infty P(X \ge r,\,Y > R)\,dr = \frac{q}{q-1}\int_0^\infty E^P[Y^{q-1},\,X \ge r]\,dr$$
$$= \frac{q}{q-1}\,E^P[Y^{q-1}X] \le \frac{q}{q-1}\,E^P[Y^q]^{(q-1)/q}E^P[X^q]^{1/q}.$$

Q.E.D.
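As a quick illustration (my own, not part of the text), here is a Monte Carlo check of (2.11) with $q = 2$, using the non-negative sub-martingale $X(t) = |S_t|$ obtained by composing a simple random walk with the convex function $|\cdot|$ (cf. Lemma (2.7)):

```python
import numpy as np

# Monte Carlo sanity check of Doob's inequality (2.11) with q = 2:
#   E[X*(T)^2]^(1/2) <= 2 E[X(T)^2]^(1/2),
# for X(t) = |S_t|, S a simple random walk observed at t = 1, ..., T.

rng = np.random.default_rng(1)
n_paths, T = 100_000, 64

steps = rng.choice([-1.0, 1.0], size=(n_paths, T))
walk = np.cumsum(steps, axis=1)            # S_1, ..., S_T along each path
X = np.abs(walk)                           # the sub-martingale
X_star = X.max(axis=1)                     # the running maximum X*(T)

lhs = np.sqrt(np.mean(X_star**2))          # E[X*(T)^2]^(1/2)
rhs = 2.0 * np.sqrt(np.mean(X[:, -1]**2))  # (q/(q-1)) E[X(T)^2]^(1/2)
print(lhs, rhs)                            # lhs is below rhs
```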
A function $\tau: E \to [0,\infty]$ is said to be a ($\{\mathcal{F}_t\}$-)stopping time if $\{\tau \le t\} \in \mathcal{F}_t$ for every $t \ge 0$. Given a stopping time $\tau$, define $\mathcal{F}_\tau$ to be the collection of sets $\Gamma \subseteq E$ such that $\Gamma\cap\{\tau \le t\} \in \mathcal{F}_t$ for all $t \ge 0$.

(2.12) Exercise: In the following, $\sigma$ and $\tau$ are stopping times and $X$ is a progressively measurable function. Prove each of the statements:
i) $\mathcal{F}_\tau$ is a sub-$\sigma$-algebra of $\mathcal{F}$, $\mathcal{F}_\tau = \mathcal{F}_T$ if $\tau \equiv T$, and $\tau$ is $\mathcal{F}_\tau$-measurable;
ii) $\xi \in E \mapsto X(\tau,\xi) \equiv X(\tau(\xi),\xi)$ is $\mathcal{F}_\tau$-measurable;
iii) $\sigma + \tau$, $\sigma\vee\tau$, and $\sigma\wedge\tau$ are stopping times;
iv) if $\Gamma \in \mathcal{F}_\sigma$, then $\Gamma\cap\{\sigma \le \tau\}$ and $\Gamma\cap\{\sigma < \tau\}$ are in $\mathcal{F}_{\sigma\wedge\tau}$;
v) if $\sigma \le \tau$, then $\mathcal{F}_\sigma \subseteq \mathcal{F}_\tau$.
If you get stuck, see 1.2.4 in [S.&V.].
(2.13) Theorem (Hunt): Let $(X(t),\mathcal{F}_t,P)$ be a martingale (non-negative sub-martingale). Given stopping times $\sigma$ and $\tau$ which satisfy $\sigma \le \tau \le T$ for some $T > 0$, $X(\sigma) = E^P[X(\tau)|\mathcal{F}_\sigma]$ ($X(\sigma) \le E^P[X(\tau)|\mathcal{F}_\sigma]$) (a.s., $P$). In particular, if $(X(t),\mathcal{F}_t,P)$ is a non-negative sub-martingale, then, for any $T > 0$, $\{X(\tau): \tau$ is a stopping time $\le T\}$ is uniformly $P$-integrable.

Proof: Let $\mathfrak{S}$ denote the set of all stopping times $\sigma: E \to [0,T]$ such that $X(\sigma) = E^P[X(T)|\mathcal{F}_\sigma]$ ($X(\sigma) \le E^P[X(T)|\mathcal{F}_\sigma]$) (a.s., $P$) for every martingale (non-negative sub-martingale) $(X(t),\mathcal{F}_t,P)$. Then, for any non-negative sub-martingale $(X(t),\mathcal{F}_t,P)$:

$$\lim_{R\to\infty}\sup_{\sigma\in\mathfrak{S}} E^P[X(\sigma),\,X(\sigma) \ge R] \le \lim_{R\to\infty}\sup_{\sigma\in\mathfrak{S}} E^P[X(T),\,X(\sigma) \ge R] \le \lim_{R\to\infty} E^P[X(T),\,X^*(T) \ge R] = 0;$$

and so $\{X(\sigma): \sigma \in \mathfrak{S}\}$ is uniformly $P$-integrable. In particular, if $\sigma$ is a stopping time which is the non-increasing limit of elements of $\mathfrak{S}$, then $\sigma \in \mathfrak{S}$.

We next show that if $\sigma$ is a stopping time which takes on only a finite number of values $0 = t_0 < t_1 < \cdots < t_n = T$, then $\sigma \in \mathfrak{S}$. To this end, let $\Gamma \in \mathcal{F}_\sigma$ be given and set $\Gamma_k = \Gamma\cap\{\sigma = t_k\}$. Then $\Gamma_k \in \mathcal{F}_{t_k}$, and so:

$$E^P[X(\sigma),\Gamma] = \sum_{k=0}^n E^P[X(t_k),\Gamma_k] \underset{(\le)}{=} \sum_{k=0}^n E^P[X(T),\Gamma_k] = E^P[X(T),\Gamma].$$

Now let $\sigma: E \to [0,T]$ be any stopping time, and, for $n \ge 0$, set $\sigma_n = (([2^n\sigma]+1)/2^n)\wedge T$. By the preceding, $\sigma_n \in \mathfrak{S}$ for each $n \ge 1$. In addition, $\sigma_n\downarrow\sigma$. Hence, we now know that every stopping time bounded by $T$ is an element of $\mathfrak{S}$. In particular, if $(X(t),\mathcal{F}_t,P)$ is a non-negative sub-martingale, then the set of $X(\sigma)$ as $\sigma$ runs over stopping times bounded by $T$ is uniformly $P$-integrable. Also, if $\sigma \le \tau \le T$ are stopping times, then for any martingale $(X(t),\mathcal{F}_t,P)$ we have:

$$E^P[X(\tau)|\mathcal{F}_\sigma] = E^P[E^P[X(T)|\mathcal{F}_\tau]|\mathcal{F}_\sigma] = E^P[X(T)|\mathcal{F}_\sigma] = X(\sigma) \quad (\text{a.s.},P).$$

It remains to show that if $(X(t),\mathcal{F}_t,P)$ is a non-negative sub-martingale and $\sigma \le \tau \le T$ are stopping times, then $E^P[X(\tau)|\mathcal{F}_\sigma] \ge X(\sigma)$ (a.s., $P$). Notice that, by the uniform integrability property already proved, we need only do this when $\sigma$ and $\tau$ take values in a finite set $0 = t_0 < t_1 < \cdots < t_n = T$. To handle this case, define

$$A(t) = \sum_{k=0}^{\ell-1} E^P[X(t_{k+1})-X(t_k)|\mathcal{F}_{t_k}] \ \text{for } t \in [t_\ell,t_{\ell+1}) \text{ and } 0 \le \ell < n, \quad A(t) = \sum_{k=0}^{n-1} E^P[X(t_{k+1})-X(t_k)|\mathcal{F}_{t_k}] \ \text{for } t \ge T.$$

Then $t \mapsto A(t)$ is $P$-almost surely non-decreasing and $(M(t),\mathcal{F}_t,P)$ is a martingale, where $M(t) \equiv X(t_\ell) - A(t)$ for $t \in [t_\ell,t_{\ell+1})$ and $0 \le \ell < n$. Hence:

$$E^P[X(\tau)|\mathcal{F}_\sigma] = E^P[M(\tau)+A(\tau)|\mathcal{F}_\sigma] = X(\sigma) + E^P[A(\tau)-A(\sigma)|\mathcal{F}_\sigma] \ge X(\sigma) \quad (\text{a.s.},P).$$

Q.E.D.
(2.14) Corollary (Doob's Stopping Time Theorem): If $(X(t),\mathcal{F}_t,P)$ is a martingale (non-negative sub-martingale) and $\tau$ is a stopping time, then $(X(t\wedge\tau),\mathcal{F}_t,P)$ is a martingale (sub-martingale).

Proof: Let $0 \le s \le t$ and $\Gamma \in \mathcal{F}_s$. Then, since $\Gamma\cap\{\tau > s\} \in \mathcal{F}_{s\wedge\tau}$:

$$E^P[X(t\wedge\tau),\Gamma] = E^P[X(t\wedge\tau),\Gamma\cap\{\tau>s\}] + E^P[X(t\wedge\tau),\Gamma\cap\{\tau\le s\}]$$
$$\underset{(\ge)}{=} E^P[X(s\wedge\tau),\Gamma\cap\{\tau>s\}] + E^P[X(s\wedge\tau),\Gamma\cap\{\tau\le s\}] = E^P[X(s\wedge\tau),\Gamma],$$

where the middle step follows from Hunt's theorem applied to the stopping times $s\wedge\tau \le t\wedge\tau$, together with the observation that $X(t\wedge\tau) = X(s\wedge\tau)$ on $\{\tau \le s\}$. Q.E.D.
(2.15) Exercise: Let $\psi: [0,\infty) \to \mathbb{R}^1$ be a right continuous function. Given $a < b$ and $T \in (0,\infty]$, we say that $\psi$ upcrosses $[a,b]$ at least $n$ times during $[0,T)$ if there exist $0 \le s_1 < t_1 < \cdots < s_n < t_n < T$ such that $\psi(s_m) < a$ and $\psi(t_m) > b$ for each $1 \le m \le n$. Define $U(a,b;T) = \inf\{n \ge 0: \psi$ does not upcross $[a,b]$ at least $(n+1)$ times during $[0,T)\}$ to be the number of times that $\psi$ upcrosses $[a,b]$ during $[0,T)$. Show that $\psi$ has a left limit (in $[-\infty,\infty]$) at every $t \in (0,T]$ if and only if $U(a,b;T) < \infty$ for all rational $a < b$. Also, check that if $U_m(a,b;T)$ is defined relative to the function $t \mapsto \psi([2^mt]/2^m)$, then $U_m(a,b;T)\uparrow U(a,b;T)$ as $m\to\infty$.
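For a finitely sampled path, the upcrossing count can be computed by a direct left-to-right scan; the sketch below (the function name is my own) simply waits for the path to drop below $a$ and then rise above $b$:

```python
# A direct transcription of the upcrossing count U(a,b;T) of Exercise
# (2.15) for a finitely sampled path.

def upcrossings(path, a, b):
    """Number of completed upcrossings of [a, b] by the sampled path."""
    assert a < b
    count, below = 0, False
    for value in path:
        if not below and value < a:
            below = True           # found an s_m with path(s_m) < a
        elif below and value > b:
            count += 1             # found the matching t_m with path(t_m) > b
            below = False
    return count

# A path that falls below 0 and then climbs above 1 exactly twice:
print(upcrossings([2, -1, 0.5, 2, -1, 2, 0.5], a=0, b=1))  # 2
```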
(2.16) Theorem (Doob's Upcrossing Inequality): Let $(X(t),\mathcal{F}_t,P)$ be a sub-martingale; and, for $a < b$, $\xi \in E$, and $T \in (0,\infty]$, define $U(a,b;T)(\xi)$ to be the number of times that $t \mapsto X(t,\xi)$ upcrosses $[a,b]$ during $[0,T)$. Then:

$$E^P[U(a,b;T)] \le E^P[(X(T)-a)^+]/(b-a), \quad T \in (0,\infty). \tag{2.17}$$

In particular, if $\sup_{T>0} E^P[X(T)^+] < \infty$ ($\sup_{T>0} E^P[|X(T)|] < \infty$), then, for $P$-almost all $\xi \in E$, $t \mapsto X(t,\xi)$ has a left limit (in $[-\infty,\infty]$) at each $t \in (0,\infty)$; in addition, $\lim_{t\to\infty} X(t)$ exists in $[-\infty,\infty)$ ($(-\infty,\infty)$) (a.s., $P$).

Proof: In view of Exercise (2.15), it suffices to prove that (2.17) holds with $U(a,b;T)$ replaced by $U_m(a,b;T)$ (cf. the last part of (2.15)). Given $m \ge 0$, set $X_m(t) = X([2^mt]/2^m)$ and $\tau_0 = 0$, and define $\sigma_n$ and $\tau_n$ inductively for $n \ge 1$ by:

$$\sigma_n = (\inf\{t \ge \tau_{n-1}: X_m(t) < a\})\wedge T \quad \text{and} \quad \tau_n = (\inf\{t \ge \sigma_n: X_m(t) > b\})\wedge T.$$

Clearly the $\sigma_n$'s and $\tau_n$'s are stopping times which are bounded by $T$. Thus, if $Y_m(t) \equiv (X_m(t)-a)^+/(b-a)$, then:

$$U_m(a,b;T) \le \sum_{n=0}^{[2^mT]} \big(Y_m(\tau_n) - Y_m(\sigma_n)\big);$$

and so:

$$Y_m(T) \ge Y_m(T) - Y_m(0) \ge U_m(a,b;T) + \sum_{n=0}^{[2^mT]} \big(Y_m(\sigma_{n+1}) - Y_m(\tau_n)\big).$$

But $(Y_m(t),\mathcal{F}_t,P)$ is a non-negative sub-martingale and therefore:

$$E^P[Y_m(\sigma_{n+1}) - Y_m(\tau_n)] \ge 0.$$

At the same time, $((X(t)-a)^+,\mathcal{F}_t,P)$ is a sub-martingale and therefore

$$E^P[Y_m(T)] \le E^P[(X(T)-a)^+]/(b-a).$$

Q.E.D.
(2.18) Corollary: If $(X(t),\mathcal{F}_t,P)$ is a $P$-almost surely continuous sub-martingale, then $\lim_{t\to\infty} X(t)$ exists (in $[-\infty,\infty)$) (a.s., $P$) on the set $B \equiv \{\xi\in E: \sup_{t\ge0} X(t,\xi) < \infty\}$. In the case of $P$-almost surely continuous martingales, the conclusion is that the limit exists $P$-almost surely in $(-\infty,\infty)$ on $B$.

Proof: Without loss in generality, we assume that $X(0) \equiv 0$. Given $R > 0$, set $\tau_R = \inf\{t \ge 0: \sup_{0\le s\le t} X(s) > R\}$ and define $X_R(t) = X(t\wedge\tau_R)$. Then $\tau_R$ is a stopping time and $X_R(t) \le R$, $t \ge 0$, (a.s., $P$). Hence, $(X_R(t),\mathcal{F}_t,P)$ is a sub-martingale and $E^P[X_R(T)^+] \le R$ for all $T > 0$. In particular, $\lim_{t\to\infty} X(t)$ exists (in $[-\infty,\infty)$) (a.s., $P$) on $\{\tau_R = \infty\}$. Since this is true for every $R > 0$, we now have the desired conclusion in the sub-martingale case. The martingale case follows from this, the observation that

$$E^P[|X_R(T)|] = 2E^P[X_R(T)^+] - E^P[X_R(0)],$$

and Fatou's lemma. Q.E.D.
(2.19) Exercise: Prove each of the following statements.
i) $(X(t),\mathcal{F}_t,P)$ is a uniformly $P$-integrable martingale if and only if $X(\infty) \equiv \lim_{t\to\infty} X(t)$ exists in $L^1(P)$, in which case $X(t) = E^P[X(\infty)|\mathcal{F}_t]$, $t \ge 0$, (a.s., $P$) and $X(\tau) = E^P[X(\infty)|\mathcal{F}_\tau]$ (a.s., $P$) for each stopping time $\tau$.
ii) If $q \in (1,\infty)$ and $(X(t),\mathcal{F}_t,P)$ is a martingale, then $(X(t),\mathcal{F}_t,P)$ is $L^q$-bounded (i.e. $\sup_t E^P[|X(t)|^q] < \infty$) if and only if $X(\infty) \equiv \lim_{t\to\infty} X(t)$ exists in $L^q(P)$, in which case $X(t) = E^P[X(\infty)|\mathcal{F}_t]$, $t \ge 0$, (a.s., $P$) and $X(\tau) = E^P[X(\infty)|\mathcal{F}_\tau]$ (a.s., $P$) for each stopping time $\tau$.
iii) Suppose that $X: [0,\infty)\times E \to \mathbb{R}^1$ ($[0,\infty)$) is a right continuous progressively measurable function and that $X(t) \in L^1(P)$ for each $t$ in a dense subset $D$ of $[0,\infty)$. If $X(s) = (\le)\ E^P[X(t)|\mathcal{F}_s]$ (a.s., $P$) for all $s,t \in D$ with $s < t$, then $(X(t),\mathcal{F}_t,P)$ is a martingale (non-negative sub-martingale).
iv) Let $\Omega$ be a Polish space, $P \in M_1(\Omega)$, and $\mathcal{A}$ a countably generated sub-$\sigma$-algebra of $\mathcal{B}_\Omega$. Then there exists a nested sequence $\{\Pi_n\}$ of finite partitions of $\Omega$ into $\mathcal{A}$-measurable sets such that $\bigcup_{n=1}^\infty \sigma(\Pi_n)$ generates $\mathcal{A}$. In addition, if $\omega \mapsto P_\omega^n$ is defined by

$$P_\omega^n(B) = \sum_{A\in\Pi_n} [P(B\cap A)/P(A)]\,\chi_A(\omega)$$

for $B \in \mathcal{B}_\Omega$ ($0/0 \equiv 0$ here), then there is a $P$-null set $\Lambda \in \mathcal{A}$ such that $P_\omega^n \to P_\omega$ in $M_1(\Omega)$ for each $\omega \notin \Lambda$. Finally, $P_\omega$ can be defined for $\omega \in \Lambda$ so that $\omega \mapsto P_\omega$ becomes a r.c.p.d. of $P|\mathcal{A}$.
(2.20) Theorem: Assume that $\mathcal{F}_t$ is countably generated for each $t \ge 0$. Let $\tau$ be a stopping time and suppose that $\omega \mapsto P_\omega$ is a r.c.p.d. of $P|\mathcal{F}_\tau$. Let $X: [0,\infty)\times E \to \mathbb{R}^1$ be a right continuous progressively measurable function and assume that $X(t) \in L^1(P)$ for all $t \ge 0$. Then $(X(t),\mathcal{F}_t,P)$ is a martingale if and only if $(X(t\wedge\tau),\mathcal{F}_t,P)$ is a martingale and there is a $P$-null set $\Lambda \in \mathcal{F}_\tau$ such that $(X(t) - X(t\wedge\tau),\mathcal{F}_t,P_\omega)$ is a martingale for each $\omega \notin \Lambda$.

Proof: Set $Y(t) = X(t) - X(t\wedge\tau)$. Assuming that $(X(t\wedge\tau),\mathcal{F}_t,P)$ is a martingale and that $(Y(t),\mathcal{F}_t,P_\omega)$ is a martingale for each $\omega$ outside of an $\mathcal{F}_\tau$-measurable $P$-null set, we have, for each $s < t$ and $\Gamma \in \mathcal{F}_s$:

$$E^P[X(t)-X(s),\Gamma] = E^P[Y(t)-Y(s),\Gamma] + E^P[X(t\wedge\tau)-X(s\wedge\tau),\Gamma] = \int E^{P_\omega}[Y(t)-Y(s),\Gamma]\,P(d\omega) = 0.$$

That is, $(X(t),\mathcal{F}_t,P)$ is a martingale.

Next, assume that $(X(t),\mathcal{F}_t,P)$ is a martingale. Then $(X(t\wedge\tau),\mathcal{F}_t,P)$ is a martingale by Doob's stopping time theorem. To see that $(Y(t),\mathcal{F}_t,P_\omega)$ is a martingale for each $\omega$ outside of a $P$-null set $\Lambda \in \mathcal{F}_\tau$, we proceed as follows. Given $0 \le s < t$, $\Gamma \in \mathcal{F}_s$, and $A \in \mathcal{F}_\tau$, we have:

$$\int_A E^{P_\omega}[Y(t),\Gamma]\,P(d\omega) = E^P[Y(t),\Gamma\cap A] = E^P[Y(t),\Gamma\cap A\cap\{\tau\le s\}] + E^P[Y(t),\Gamma\cap A\cap\{s<\tau\}].$$

Note that, by Exercise (2.12), $\Gamma\cap A\cap\{\tau\le s\} \in \mathcal{F}_s$, and so, since both $X$ and $X(\cdot\wedge\tau)$ are martingales under $P$, $E^P[Y(t),\Gamma\cap A\cap\{\tau\le s\}] = E^P[Y(s),\Gamma\cap A\cap\{\tau\le s\}]$; at the same time, $E^P[Y(t),\Gamma\cap A\cap\{s<\tau\}] = 0 = E^P[Y(s),\Gamma\cap A\cap\{s<\tau\}]$, because $Y(s) = 0$ on $\{s<\tau\}$. Hence:

$$\int_A E^{P_\omega}[Y(t),\Gamma]\,P(d\omega) = E^P[Y(s),\Gamma\cap A] = \int_A E^{P_\omega}[Y(s),\Gamma]\,P(d\omega).$$

Since $\mathcal{F}_s$ is countably generated, we conclude from this that there is a $P$-null set $\Lambda \in \mathcal{F}_\tau$ such that for all $\omega \notin \Lambda$: $\{Y(t): t\in\mathbb{Q}\cap[0,\infty)\} \subseteq L^1(P_\omega)$ and $Y(s) = E^{P_\omega}[Y(t)|\mathcal{F}_s]$ (a.s., $P_\omega$) for all rational $s < t$. In view of iii) in Exercise (2.19), this completes the proof. Q.E.D.
(2.21) Exercise: Let everything be as in Theorem (2.20), only this time assume that $X(t) \ge 0$ for all $t \ge 0$. Show that $(X(t),\mathcal{F}_t,P)$ is a sub-martingale if and only if $(X(t\wedge\tau),\mathcal{F}_t,P)$ is a sub-martingale and there is a $P$-null set $\Lambda \in \mathcal{F}_\tau$ such that $(X(t)\chi_{[0,t]}(\tau),\mathcal{F}_t,P_\omega)$ is a sub-martingale for all $\omega \notin \Lambda$.
The rest of this section is devoted to a particularly useful special case of the renowned Doob-Meyer decomposition theorem. What their theorem says is that, under mild technical hypotheses, every sub-martingale $(X(t),\mathcal{F}_t,P)$ is the sum of a martingale $(M(t),\mathcal{F}_t,P)$ and a right-continuous, progressively measurable function $A: [0,\infty)\times E \to [0,\infty)$ having the property that $t \mapsto A(t)$ is $P$-almost surely non-decreasing. Moreover, $A$ can be chosen so that $A(0) = 0$ and $t \mapsto A(t)$ is "nearly left-continuous" (precisely, $A$ is "$\{\mathcal{F}_t\}$-predictable"); and, among such non-decreasing processes, there is only one whose difference from $X$ is a $\{\mathcal{F}_t\}$-martingale under $P$. We have already seen a special case of this decomposition in our proof of Theorem (2.13) (cf. the construction of the processes $M$ and $A$ made there). The idea used in the proof of Theorem (2.13) is due to Doob, and it works whenever $t \mapsto X(t)$ is piece-wise constant. What Meyer did is show that, in general, $A$ can be realized as the limit of $A$'s constructed for piece-wise constant approximations to $X$. Simple as this procedure may sound, it is fraught with technical difficulties. To avoid these difficulties, and because we will not have great need for the general result, we will content ourselves with the special case of sub-martingales $(X(t)^2,\mathcal{F}_t,P)$ where $(X(t),\mathcal{F}_t,P)$ is a real-valued, $P$-almost surely continuous, $L^2(P)$-martingale. Our proof of existence for this case is based on ideas of K. Itô. The uniqueness assertion will be a consequence of the
following simple lemma.

(2.22) Lemma: Let $(X(t),\mathcal{F}_t,P)$ be a martingale and $A: [0,\infty)\times E \to \mathbb{R}^1$ a right-continuous progressively measurable function which is $P$-almost surely continuous and of local bounded variation (i.e. for each $T > 0$, the total variation $|A|(T,\xi)$ of $A(\cdot,\xi)$ on $[0,T]$ is finite for $P$-almost every $\xi \in E$). Then, assuming that

$$E^P\Big[\sup_{0\le t\le T}|X(t)|\,\big(|A|(T)+|A(0)|\big)\Big] < \infty$$

for all $T > 0$, $(X(t)A(t) - \int_0^t X(s)A(ds),\mathcal{F}_t,P)$ is a martingale.

Proof: Let $0 \le s < t$ and $\Gamma \in \mathcal{F}_s$ be given. Then:

$$E^P[X(t)A(t)-X(s)A(s),\Gamma] = \sum_{k=0}^{n-1} E^P\big[X(u_{n,k+1})A(u_{n,k+1})-X(u_{n,k})A(u_{n,k}),\Gamma\big] = \sum_{k=0}^{n-1} E^P\big[X(u_{n,k+1})(A(u_{n,k+1})-A(u_{n,k})),\Gamma\big],$$

where $u_{n,k} \equiv s + k(t-s)/n$. Since $u \mapsto X(u)$ is right continuous and $A$ is $P$-almost surely continuous and of local bounded variation,

$$\sum_{k=0}^{n-1} X(u_{n,k+1})\big(A(u_{n,k+1})-A(u_{n,k})\big) \to \int_s^t X(u)\,A(du) \quad (\text{a.s.},P);$$

and our integrability assumption allows us to conclude that this convergence takes place in $L^1(P)$. Q.E.D.
(2.23) Theorem: Let $(X(t),\mathcal{F}_t,P)$ be a $P$-almost surely continuous martingale and define $\zeta = \sup\{t>0: |X|(t) < \infty\}$, where $|X|(t)$ denotes the total variation of $X$ on $[0,t]$. Then, $P$-almost surely, $X(t\wedge\zeta) = X(0)$, $t \ge 0$. In particular, if $P(X(t) = X(s)) = 0$ for all $0 \le s < t$, then $t \mapsto X(t)$ is $P$-almost never of bounded variation on any interval.

Proof: Without loss of generality, we assume that $X(0) \equiv 0$ and that $X(\cdot,\xi)$ is continuous for all $\xi \in E$. For $R > 0$, define $\zeta_R = \sup\{t>0: |X|(t) < R\}$. Then $\zeta_R$ is a stopping time for each $R > 0$ and $\zeta_R\uparrow\zeta$ as $R\to\infty$. Moreover, by Lemma (2.22) with $X$ and $A$ both replaced by $X(\cdot\wedge\zeta_R)$:

$$\Big(X(t\wedge\zeta_R)^2 - \int_0^{t\wedge\zeta_R} X(s)X(ds),\,\mathcal{F}_t,\,P\Big) \text{ is a martingale;}$$

and therefore

$$E^P[X(t\wedge\zeta_R)^2] = E^P\Big[\int_0^{t\wedge\zeta_R} X(s)X(ds)\Big].$$

On the other hand, since $X(\cdot\wedge\zeta_R)$ is $P$-almost surely continuous and of local bounded variation,

$$X(t\wedge\zeta_R)^2 = 2\int_0^{t\wedge\zeta_R} X(s)X(ds) \quad (\text{a.s.},P),$$

and therefore $E^P[X(t\wedge\zeta_R)^2] = 2E^P\big[\int_0^{t\wedge\zeta_R} X(s)X(ds)\big]$. Hence, $E^P[X(t\wedge\zeta_R)^2] = 0$ for all $t \ge 0$, and so $X(t\wedge\zeta_R) = 0$, $t \ge 0$, (a.s., $P$). Clearly this leads immediately to the conclusion that $X(t\wedge\zeta) = 0$, $t \ge 0$, (a.s., $P$).

To prove the last assertion, it suffices to check that, for each $0 \le s < t$, $|X_s|(t) = \infty$ (a.s., $P$), where $X_s \equiv X - X(\cdot\wedge s)$. But, if $\zeta_s = \sup\{u>0: |X_s|(u) < \infty\}$, then $P(|X_s|(t)<\infty) = P(\zeta_s>t)$; and, by the preceding with $X_s$ replacing $X$, $P(\zeta_s>t) \le P(X(t)=X(s)) = 0$. Q.E.D.
(2.24) Corollary: Let $X: [0,\infty)\times E \to \mathbb{R}^1$ be a right continuous progressively measurable function. Then there is, up to a $P$-null set, at most one right continuous progressively measurable $A: [0,\infty)\times E \to \mathbb{R}^1$ such that: $A(0) \equiv 0$, $t \mapsto A(t)$ is $P$-almost surely continuous and of local bounded variation, and, in addition, $(X(t)-A(t),\mathcal{F}_t,P)$ is a martingale.

Proof: Suppose that there were two, $A$ and $A'$. Then $(A(t)-A'(t),\mathcal{F}_t,P)$ would be a $P$-almost surely continuous martingale which is $P$-almost surely of local bounded variation. Hence, by Theorem (2.23), we would have $A(t) - A'(t) = A(0) - A'(0) = 0$, $t \ge 0$, (a.s., $P$). Q.E.D.
Before proving the existence part of our special case of the Doob-Meyer decomposition theorem, we mention a result which addresses an extremely pedantic issue. Namely, for technical reasons (e.g. countability considerations), it is often better not to complete the $\sigma$-algebras $\mathcal{F}_t$ with respect to $P$. At the same time, it is convenient to have the processes under consideration right continuous for every $\xi \in E$, not just $P$-almost every one. In order to make sure that we can make our processes everywhere right continuous and, at the same time, progressively measurable with respect to possibly incomplete $\sigma$-algebras $\mathcal{F}_t$, we will sometimes make reference to the following lemma, whose proof may be found in 4.3.3 of [S.&V.]. On the other hand, since in most cases there is either no harm in completing the $\sigma$-algebras or the asserted conclusion is clear from other considerations, we will not bother with the proof here.

(2.25) Lemma: Let $\{X_n: n\ge1\}$ be a sequence of right continuous ($P$-almost surely continuous) progressively measurable functions with values in a Banach space. If

$$\lim_{m\to\infty}\sup_{n\ge m} P\big(\sup_{0\le t\le T}\|X_n(t)-X_m(t)\| \ge \varepsilon\big) = 0$$

for every $T > 0$ and $\varepsilon > 0$, then there is a $P$-almost surely unique right-continuous ($P$-almost surely continuous) progressively measurable function $X$ such that

$$\lim_{n\to\infty} P\big(\sup_{0\le t\le T}\|X_n(t)-X(t)\| \ge \varepsilon\big) = 0$$

for all $T > 0$ and $\varepsilon > 0$.

(2.26) Theorem (Doob-Meyer): Let $(X(t),\mathcal{F}_t,P)$ be a $P$-almost surely continuous real-valued $L^2(P)$-martingale. Then there is a $P$-almost surely unique right continuous progressively measurable $A: [0,\infty)\times E \to [0,\infty)$ such that: $A(0) \equiv 0$, $t \mapsto A(t)$ is non-decreasing and $P$-almost surely continuous, and $(X(t)^2-A(t),\mathcal{F}_t,P)$ is a martingale.
Proof: The uniqueness is clearly a consequence of Corollary (2.24). In proving existence, we assume, without loss in generality, that $X(0) \equiv 0$. Define $\tau_k^0 \equiv k$ for $k \ge 0$; and, given $n \ge 1$, define $\tau_0^n \equiv 0$ and, for $\ell \ge 0$:

$$\tau_{\ell+1}^n = \Big(\inf\big\{t \ge \tau_\ell^n: \sup_{\tau_\ell^n\le s\le t}|X(s)-X(\tau_\ell^n)| \ge 1/n\big\}\Big)\wedge(\tau_\ell^n+1/n)\wedge\tau_{k+1}^{n-1} \quad \text{if } \tau_k^{n-1} \le \tau_\ell^n < \tau_{k+1}^{n-1}.$$

Clearly the $\tau_\ell^n$'s are stopping times, $\tau_\ell^n \le \tau_{\ell+1}^n$, and $\{\tau_k^n\} \subseteq \{\tau_k^{n+1}\}$ (a.s., $P$). In addition, by $P$-almost sure continuity, $\tau_k^n\to\infty$ (a.s., $P$) as $k\to\infty$, and $|X(\tau_{k+1}^n)-X(\tau_k^n)| \le 1/n$, $k \ge 0$, (a.s., $P$). Choose $K_n < \infty$ so that $P(A_n^c) \le 1/n$, where $A_n \equiv \{\tau_{K_n}^n > n\}$, and define:

$$M_n(t) \equiv \sum_{k=0}^{K_n} X(\tau_k^n)\big(X(t\wedge\tau_{k+1}^n)-X(t\wedge\tau_k^n)\big)$$

and

$$A_n(t) \equiv \sum_{k=0}^{K_n} \big(X(t\wedge\tau_{k+1}^n)-X(t\wedge\tau_k^n)\big)^2.$$

Clearly, for all $n \ge 0$: $M_n(0) = A_n(0) \equiv 0$; $(M_n(t),\mathcal{F}_t,P)$ is a $P$-almost surely continuous martingale; $A_n$ is a right-continuous, non-negative, progressively measurable function which is $P$-almost surely continuous; and $A_n(s) \le A_n(t)$ if $t \ge s + 1/n$. Moreover, for $n \ge 1$, $0 \le t \le n$, and $\xi \in A_n$:

$$X(t,\xi)^2 = 2M_n(t,\xi) + A_n(t,\xi). \tag{2.27}$$

Given $T > 0$ and $\varepsilon > 0$, we are now going to show that

$$\lim_{m\to\infty}\sup_{n\ge m} P\big(\sup_{0\le t\le T}|M_n(t)-M_m(t)| \ge \varepsilon\big) = 0.$$

To this end, let $T \le m < n$ and set $\zeta = T\wedge\tau_{K_m}^m\wedge\tau_{K_n}^n$. Then:

$$P\big(\sup_{0\le t\le T}|M_n(t)-M_m(t)| \ge \varepsilon\big) \le P(A_m^c\cup A_n^c) + P\big(\sup_{0\le t\le T}|M_n(t\wedge\zeta)-M_m(t\wedge\zeta)| \ge \varepsilon\big) \le \frac{2}{m} + \frac{1}{\varepsilon^2}E^P[(M_n(\zeta)-M_m(\zeta))^2],$$

where we have used Doob's inequality. Define $\rho_k = \tau_k^m\wedge\zeta$ and $\sigma_\ell = \tau_\ell^n\wedge\zeta$. Note that:

$$M_n(\zeta)-M_m(\zeta) = \sum_{k=0}^\infty\sum_{\ell=0}^\infty \chi_{[\rho_k,\rho_{k+1})}(\sigma_\ell)\big(X(\sigma_\ell)-X(\rho_k)\big)\big(X(\sigma_{\ell+1})-X(\sigma_\ell)\big)$$

(a.s., $P$) and that the terms in this double sum are mutually $P$-orthogonal. Hence:

$$E^P[(M_n(\zeta)-M_m(\zeta))^2] = E^P\Big[\sum_{k=0}^\infty\sum_{\ell=0}^\infty \chi_{[\rho_k,\rho_{k+1})}(\sigma_\ell)\big(X(\sigma_\ell)-X(\rho_k)\big)^2\big(X(\sigma_{\ell+1})-X(\sigma_\ell)\big)^2\Big]$$
$$\le \frac{1}{m^2}E^P\Big[\sum_{k=0}^\infty\sum_{\ell=0}^\infty \chi_{[\rho_k,\rho_{k+1})}(\sigma_\ell)\big(X(\sigma_{\ell+1})-X(\sigma_\ell)\big)^2\Big] \le \frac{1}{m^2}E^P\Big[\sum_{\ell=0}^\infty\big(X(\sigma_{\ell+1})-X(\sigma_\ell)\big)^2\Big] = \frac{1}{m^2}E^P[X(\zeta)^2] \le \frac{1}{m^2}E^P[X(T)^2],$$

where we have used the $P$-orthogonality of $\{X(\sigma_{\ell+1})-X(\sigma_\ell)\}$ in the last line.

We now apply Lemma (2.25) and conclude that there is a right continuous, $P$-almost surely continuous, progressively measurable $M: [0,\infty)\times E \to \mathbb{R}^1$ such that $M_n \to M$, uniformly on finite intervals, in $P$-measure. Furthermore, our argument shows that $M_n(t) \to M(t)$ in $L^1(P)$ for each $t \ge 0$. Hence, $(M(t),\mathcal{F}_t,P)$ is a $P$-almost surely continuous martingale. Finally, set $A' = X^2 - 2M$. Then, as a consequence of (2.27), we see that $A_n \to A'$, uniformly on finite intervals, in $P$-measure. In particular, $A'(0) = 0$ and $t \mapsto A'(t)$ is non-decreasing $P$-almost surely. Thus, we are done once we define $A(t) = \sup_{0\le s\le t}(A'(s)-A'(0))$ for $t \ge 0$. Q.E.D.
(2.28) Exercise: Let $(\beta(t),\mathcal{F}_t,P)$ be a one-dimensional Wiener process. Show that $(\beta(t),\mathcal{F}_t,P)$ is an $L^2(P)$-martingale and that the associated non-decreasing process constructed in the preceding is $A(t) = t$, $t \ge 0$. In addition, show that for each $T > 0$,

$$\sum_{k=0}^{2^n-1} \big(\beta((k+1)T/2^n) - \beta(kT/2^n)\big)^2 \to T \quad (\text{a.s.},P) \text{ as } n\to\infty.$$
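The last part of the exercise is easy to see numerically; the sketch below (my own illustration) computes the dyadic sums of squared Brownian increments over $[0,T]$ and watches them concentrate at $T$ (for simplicity each level $n$ samples a fresh path rather than refining a single one, which is enough to see the concentration):

```python
import numpy as np

# Dyadic quadratic-variation sums for Brownian motion on [0, T]: the sum
# of squared increments over 2^n intervals concentrates at T as n grows.

rng = np.random.default_rng(2)
T = 2.0
qv = None
for n in (4, 8, 12):
    k = 2**n                                    # 2^n dyadic intervals
    incr = rng.normal(0.0, np.sqrt(T / k), k)   # beta((j+1)T/2^n) - beta(jT/2^n)
    qv = float(np.sum(incr**2))
    print(n, qv)                                # -> tends to T = 2.0
```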
Let $\mathrm{Mart}^2$ ($\equiv \mathrm{Mart}^2(\{\mathcal{F}_t\},P)$) denote the space of all real-valued $P$-almost surely continuous $L^2(P)$-martingales $(X(t),\mathcal{F}_t,P)$. Clearly $\mathrm{Mart}^2$ is a linear space. Given $X \in \mathrm{Mart}^2$, we will use $\langle X\rangle$ to denote the associated process $A$ constructed in Theorem (2.26). In addition, given $X,Y \in \mathrm{Mart}^2$, define $\langle X,Y\rangle$ by:

$$\langle X,Y\rangle \equiv \tfrac{1}{4}\big[\langle X+Y\rangle - \langle X-Y\rangle\big].$$

Clearly, $\langle X,Y\rangle$ is a right-continuous progressively measurable function which is not only of local bounded variation, with $|\langle X,Y\rangle|(T) \in L^1(P)$ for all $T > 0$, but also $P$-almost surely continuous.
(2.29) Exercise: Given a stopping time $\tau$ and an $X \in \mathrm{Mart}^2$, define $X_\tau(t) = X(t\wedge\tau)$ and $X^\tau(t) = X(t) - X_\tau(t)$, $t \ge 0$. Show that $X_\tau$ and $X^\tau$ are elements of $\mathrm{Mart}^2$ and that $\langle X_\tau\rangle(t) = \langle X\rangle(t\wedge\tau)$ and $\langle X^\tau\rangle(t) = \langle X\rangle(t) - \langle X\rangle(t\wedge\tau)$, $t \ge 0$, (a.s., $P$).
(2.30) Theorem (Kunita & Watanabe): Given $X,Y \in \mathrm{Mart}^2$, $\langle X,Y\rangle$ is the $P$-almost surely unique right continuous progressively measurable function which is of local bounded variation, is $P$-almost surely continuous, and has the properties that $\langle X,Y\rangle(0) \equiv 0$ and $(X(t)Y(t)-\langle X,Y\rangle(t),\mathcal{F}_t,P)$ is a martingale. In particular, $\langle X\rangle = \langle X,X\rangle$ (a.s., $P$) for all $X \in \mathrm{Mart}^2$; and, for all $X,Y,Z \in \mathrm{Mart}^2$, $\langle aX+bY,Z\rangle = a\langle X,Z\rangle + b\langle Y,Z\rangle$, $a,b \in \mathbb{R}^1$, (a.s., $P$). Finally, for all $X,Y \in \mathrm{Mart}^2$:

$$|\langle X,Y\rangle|(\Gamma) \le \langle X\rangle(\Gamma)^{1/2}\langle Y\rangle(\Gamma)^{1/2}, \quad \Gamma \in \mathcal{B}_{[0,\infty)}, \ (\text{a.s.},P);$$
$$\langle X+Y\rangle(\Gamma)^{1/2} \le \langle X\rangle(\Gamma)^{1/2} + \langle Y\rangle(\Gamma)^{1/2}, \quad \Gamma \in \mathcal{B}_{[0,\infty)}, \ (\text{a.s.},P); \text{ and}$$
$$|\langle X,Y\rangle|(dt) \le \big(\langle X\rangle(dt) + \langle Y\rangle(dt)\big)/2 \quad (\text{a.s.},P).$$

Proof: To prove the first assertion, simply note that $XY = \tfrac{1}{4}[(X+Y)^2 - (X-Y)^2]$, and apply Corollary (2.24). The equality $\langle X\rangle = \langle X,X\rangle$ as well as the linearity assertion follow easily from uniqueness. In order to prove the rest of the theorem, it suffices to show that $|\langle X,Y\rangle|(\Gamma) \le \langle X\rangle(\Gamma)^{1/2}\langle Y\rangle(\Gamma)^{1/2}$, $\Gamma \in \mathcal{B}_{[0,\infty)}$, (a.s., $P$); and to do this we need only check that, for each $0 \le s < t$,

$$|\langle X,Y\rangle(t) - \langle X,Y\rangle(s)| \le \big(\langle X\rangle(t)-\langle X\rangle(s)\big)^{1/2}\big(\langle Y\rangle(t)-\langle Y\rangle(s)\big)^{1/2} \quad (\text{a.s.},P).$$

Furthermore, by replacing $X$ and $Y$ with $X^s$ and $Y^s$, respectively (cf. Exercise (2.29)), we see that it is enough to prove that $|\langle X,Y\rangle(t)| \le \langle X\rangle(t)^{1/2}\langle Y\rangle(t)^{1/2}$ (a.s., $P$) for each $t \ge 0$. But, by the linearity property,

$$0 \le \langle\lambda X \pm Y/\lambda\rangle(t) = \lambda^2\langle X\rangle(t) \pm 2\langle X,Y\rangle(t) + \langle Y\rangle(t)/\lambda^2, \quad \lambda > 0, \ (\text{a.s.},P).$$

Hence the desired inequality follows by the same argument as one uses to prove the ordinary Schwarz inequality. Q.E.D.
(2.31) Exercise: Given $X,Y \in \mathrm{Mart}^2(\{\mathcal{F}_t\},P)$ and an $\{\mathcal{F}_t\}$-stopping time $\tau$, show that $\langle X_\tau,Y\rangle(\cdot) = \langle X,Y\rangle(\cdot\wedge\tau)$ (a.s., $P$). Next, set $\mathcal{F}_t^X = \sigma(X(s): 0\le s\le t)$ and $\mathcal{F}_t^Y = \sigma(Y(s): 0\le s\le t)$. Show that $X,Y \in \mathrm{Mart}^2(\{\mathcal{F}_t^X\vee\mathcal{F}_t^Y\},P)$ and that, up to a $P$-null set, $\langle X,Y\rangle$ defined relative to $(\{\mathcal{F}_t^X\vee\mathcal{F}_t^Y\},P)$ coincides with $\langle X,Y\rangle$ defined relative to $(\{\mathcal{F}_t\},P)$. Conclude that, if for some $T > 0$, $\mathcal{F}_T^X$ and $\mathcal{F}_T^Y$ are $P$-independent, then $\langle X,Y\rangle(t) = 0$, $0 \le t \le T$, (a.s., $P$).
3. Stochastic Integrals, Itô's Formula, and Semi-martingales:

We continue with the notation with which we were working in section 2. Given a right continuous, non-decreasing, $P$-almost surely continuous, progressively measurable function $A: [0,\infty)\times E \to [0,\infty)$, denote by $L^2_{loc}(A,P) \equiv L^2_{loc}(\{\mathcal{F}_t\},A,P)$ the space of progressively measurable $\alpha: [0,\infty)\times E \to \mathbb{R}^1$ such that

$$E^P\Big[\int_0^T \alpha(t)^2 A(dt)\Big] < \infty \quad \text{for all } T > 0.$$

Clearly, $L^2_{loc}(A,P)$ admits a natural metric with respect to which it becomes a Fréchet space. Given $X \in \mathrm{Mart}^2$ and $\alpha \in L^2_{loc}(\langle X\rangle,P)$, note that there is at most one $I$ such that:

$$\text{i) } I(0) \equiv 0 \text{ and } I \in \mathrm{Mart}^2; \qquad \text{ii) } \langle I,Y\rangle = \alpha\langle X,Y\rangle \ (\text{a.s.},P) \text{ for all } Y \in \mathrm{Mart}^2. \tag{3.1}$$

(Given a measure $\mu$ and a measurable function $\alpha$, $\alpha\mu$ denotes the measure $\nu$ such that $d\nu/d\mu = \alpha$.) Indeed, if there were two, say $I$ and $I'$, then we would have $\langle I-I',Y\rangle \equiv 0$ (a.s., $P$) for all $Y \in \mathrm{Mart}^2$. In particular, taking $Y = I - I'$, we would conclude that $E^P[(I(T)-I'(T))^2] = 0$, $T \ge 0$, and therefore that $I = I'$ (a.s., $P$). Following Kunita and Watanabe, we will say that, if it exists, the $P$-almost surely unique $I$ satisfying (3.1) is the (Itô) stochastic integral of $\alpha$ with respect to $X$, and we will denote $I$ by $\int_0^\cdot \alpha(s)dX(s)$. Observe that if $\alpha,\beta \in L^2_{loc}(\langle X\rangle,P)$ and if both $\int_0^\cdot \alpha(s)dX(s)$ and $\int_0^\cdot \beta(s)dX(s)$ exist, then $\int_0^\cdot [a\alpha(s)+b\beta(s)]dX(s)$ exists and is equal to $a\int_0^\cdot \alpha(s)dX(s) + b\int_0^\cdot \beta(s)dX(s)$ (a.s., $P$) for all $a,b \in \mathbb{R}^1$, and

$$E^P\Big[\sup_{0\le t\le T}\Big(\int_0^t \alpha(s)dX(s) - \int_0^t \beta(s)dX(s)\Big)^2\Big] \le 4E^P\Big[\int_0^T (\alpha(t)-\beta(t))^2\langle X\rangle(dt)\Big], \quad T \ge 0. \tag{3.2}$$

From this it is easy to see that the set of $\alpha$'s for which $\int_0^\cdot \alpha(s)dX(s)$ exists is a closed linear subspace of $L^2_{loc}(\langle X\rangle,P)$.
(3.3) Exercise: Let $X \in \mathrm{Mart}^2$ be given and suppose that $\sigma \le \tau$ are stopping times. Let $\eta$ be an $\mathcal{F}_\sigma$-measurable function satisfying $E^P[\eta^2(\langle X\rangle(T\wedge\tau)-\langle X\rangle(T\wedge\sigma))] < \infty$ for all $T > 0$, and set $\alpha(t) = \chi_{[\sigma,\tau)}(t)\,\eta$, $t \ge 0$. Show that $\alpha \in L^2_{loc}(\langle X\rangle,P)$ and that $\int_0^\cdot \alpha(s)dX(s)$ exists and is equal to $\eta\big(X(\cdot\wedge\tau)-X(\cdot\wedge\sigma)\big)$ (see Exercise (2.29) for the notation here).
We want to show that $\int_0^\cdot \alpha(s)dX(s)$ exists for all $\alpha \in L^2_{loc}(\langle X\rangle,P)$. To this end, first suppose that $\alpha$ is simple in the sense that there is an $n \ge 1$ for which $\alpha(t) = \alpha([nt]/n)$, $t \ge 0$. Set:

$$I(t) \equiv \sum_{k=0}^\infty \alpha(k/n)\big(X(t\wedge((k+1)/n))-X(t\wedge(k/n))\big), \quad t \ge 0.$$

Then $I \in \mathrm{Mart}^2$. Moreover, if $k/n \le s < t \le (k+1)/n$, then:

$$E^P[I(t)Y(t)-I(s)Y(s)|\mathcal{F}_s] = E^P[(I(t)-I(s))(Y(t)-Y(s))|\mathcal{F}_s]$$
$$= \alpha(k/n)\,E^P[(X(t)-X(s))(Y(t)-Y(s))|\mathcal{F}_s] = \alpha(k/n)\,E^P[\langle X,Y\rangle(t)-\langle X,Y\rangle(s)|\mathcal{F}_s]$$
$$= E^P\Big[\int_s^t \alpha(u)\,\langle X,Y\rangle(du)\,\Big|\,\mathcal{F}_s\Big] \quad (\text{a.s.},P).$$

In other words, $\langle I,Y\rangle = \alpha\langle X,Y\rangle$ (a.s., $P$), and so $\int_0^\cdot \alpha(s)dX(s)$ exists and is given by $I$. Knowing that $\int_0^\cdot \alpha(s)dX(s)$ exists for simple $\alpha$'s, one can easily show that $\int_0^\cdot \alpha(s)dX(s)$ exists for bounded progressively measurable $\alpha$'s which are $P$-almost surely continuous; indeed, simply take $\alpha_n(t) = \alpha([nt]/n)$, $t \ge 0$, and note that $\alpha_n \to \alpha$ in $L^2_{loc}(\langle X\rangle,P)$. Thus, we will have completed our demonstration that $\int_0^\cdot \alpha(s)dX(s)$ exists for all $\alpha \in L^2_{loc}(\langle X\rangle,P)$ once we have proved the following approximation result.
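The simple-integrand sums above are easy to compute on a grid; the following sketch (my own illustration) specializes to $X = $ Brownian motion and $\alpha = X$ itself, where the left-endpoint sums converge to $\int_0^T \beta\,d\beta = (\beta(T)^2 - T)/2$:

```python
import numpy as np

# Left-endpoint (Ito) Riemann sums for the integral of B dB on one path:
# sum B_k (B_{k+1} - B_k)  ~  (B(T)^2 - T)/2  for a fine grid.

rng = np.random.default_rng(4)
T, n = 1.0, 200_000
dB = rng.normal(0.0, np.sqrt(T / n), n)
B = np.concatenate(([0.0], np.cumsum(dB)))   # B(0), B(T/n), ..., B(T)

ito_sum = np.sum(B[:-1] * dB)                # left endpoints: the Ito choice
reference = (B[-1]**2 - T) / 2.0

print(ito_sum, reference)                    # close for large n
```

The left-endpoint choice is what makes the sum a martingale; a right-endpoint or midpoint choice would converge to a different (Stratonovich-type) limit.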
(3.4) Lemma: Let A: [0,w)xE->[0,-) be a non-decreasing, P-almost surely continuous, progressively measurable function with A(0) a 0. Lloc(A,P)
C
Given a E
Lloc(A,P),
there is a sequence {an}
of bounded, P-almost surely continuous functions
which tend to a in
2loc(A.P).
Proof: Since the space of bounded elements of Lloc(A'P) are obviously dense in
Lloc(A,P),
we will assume that a is
bounded.
We first handle the special case when A(t) = tr for all t
Z 0.
To this end, choose p E CO((0,1)) so that
Jp(t)dt =
1, and extend a to IR 1xE by setting a(t) = 0 for t < 0.
Next,
52
define an(t) = nfa(t-s)p(ns)ds for t > 0 and n >
1.
Then it
is easy to check that {an} will serve.
To handle the general case, first note that it suffices for us to show that for each $T > 0$ and $\epsilon > 0$ there exists a bounded $P$-almost surely continuous $\alpha' \in L^2_{\mathrm{loc}}(A,P)$ such that

$E^P\Big[\int_0^T (\alpha(t)-\alpha'(t))^2\,A(dt)\Big]^{1/2} < \epsilon.$

Given $T$ and $\epsilon$, choose $M > 1$ so that $E^P\big[\int_0^T \alpha(t)^2\,A(dt),\ A(T) > M-1\big] < (\epsilon/2)^2$ and $\eta \in C_0^\infty(\mathbb{R}^1)$ so that $\chi_{[0,M-1]} \le \eta \le \chi_{[-1,M]}$. Set $B(t) = \int_0^t \eta(A(s))^2\,A(ds) + t$, $t \ge 0$, and $T(t) = B^{-1}(t)$. Then $\{T(t): t \ge 0\}$ is a non-decreasing family of bounded stopping times. Set $\mathcal{G}_t = \mathcal{F}_{T(t)}$ and $\beta(t) = \alpha(T(t))$. Then $\beta$ is a bounded $\{\mathcal{G}_t\}$-progressively measurable function; and so, by the preceding, we can find a bounded continuous $\{\mathcal{G}_t\}$-progressively measurable $\beta'$ such that

$E^P\Big[\int_0^{T+M} (\beta(t)-\beta'(t))^2\,dt\Big]^{1/2} < \epsilon/2.$

Finally, define $\alpha'(t) = \beta'(B(t))\eta(A(t))$. Then $\alpha'$ is a bounded $P$-almost surely continuous element of $L^2_{\mathrm{loc}}(A,P)$, and:

$E^P\Big[\int_0^T (\alpha(t)-\alpha'(t))^2\,A(dt)\Big]^{1/2} \le \epsilon/2 + E^P\Big[\int_0^T (\alpha(t)-\beta'(B(t)))^2\,B(dt)\Big]^{1/2} \le \epsilon/2 + E^P\Big[\int_0^{T+M} (\beta(t)-\beta'(t))^2\,dt\Big]^{1/2} < \epsilon.$

Q.E.D.
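The mollification step in the special case $A(t) = t$ can be watched numerically. The sketch below (the names, the particular bump $\rho$, and the discontinuous integrand are our choices, not the book's) convolves an indicator function with the rescaled bump and checks that the $L^2(dt)$ error shrinks as $n$ grows.

```python
import numpy as np

# Numerical sketch of the mollification in the proof of Lemma (3.4), in the
# special case A(t) = t:  alpha_n(t) = n * int alpha(t - s) rho(n s) ds,
# where rho is a smooth bump supported in (0,1) with integral 1.

def rho_unnormalized(u):
    # smooth bump supported in (0,1); normalized discretely below
    inside = (u > 0.0) & (u < 1.0)
    v = np.zeros_like(u)
    v[inside] = np.exp(-1.0 / (u[inside] * (1.0 - u[inside])))
    return v

def alpha(t):
    # a bounded, discontinuous integrand: the indicator of [0.5, 1]
    return ((t >= 0.5) & (t <= 1.0)).astype(float)

def alpha_n(t, n, m=2000):
    # Riemann-sum approximation of n * int_0^{1/n} alpha(t - s) rho(n s) ds
    s = (np.arange(m) + 0.5) / (m * n)          # midpoints of [0, 1/n]
    w = rho_unnormalized(n * s)
    w = w / w.sum()                              # discrete normalization
    return alpha(t[:, None] - s[None, :]) @ w    # smooth in t, values in [0,1]

t = np.linspace(0.0, 1.5, 1501)
dt = t[1] - t[0]
errs = [np.sum((alpha_n(t, n) - alpha(t)) ** 2) * dt for n in (4, 16, 64)]
print(errs)  # the L^2(dt) errors decrease as n grows
```

The error is concentrated in windows of width $1/n$ around the two jumps of the indicator, which is why it decays like $1/n$.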
As a consequence of the preceding, we now know that $\int_0^\cdot \alpha(s)\,dX(s)$ exists for all $X \in \mathrm{Mart}^2_c$ and $\alpha \in L^2_{\mathrm{loc}}(\langle X\rangle,P)$.
(3.5) Exercise: Let $X \in \mathrm{Mart}^2_c$ be given.

i) Given stopping times $\sigma \le \tau$ and $\alpha \in L^2_{\mathrm{loc}}(\langle X\rangle,P)$, show that $\chi_{[\sigma,\tau)}(t)\alpha(t) \in L^2_{\mathrm{loc}}(\langle X\rangle,P)$ and that:

$\int_0^T \chi_{[\sigma,\tau)}\alpha(t)\,dX(t) = \int_0^{T\wedge\tau} \alpha(t)\,dX(t) - \int_0^{T\wedge\sigma} \alpha(t)\,dX(t)$ (a.s., $P$).

ii) Given $\beta \in L^2_{\mathrm{loc}}(\langle X\rangle,P)$ and $\alpha \in L^2_{\mathrm{loc}}(\beta^2\langle X\rangle,P)$, show that:

$\int_0^t \alpha(s)\,d\Big(\int_0^s \beta(u)\,dX(u)\Big) = \int_0^t \alpha(s)\beta(s)\,dX(s)$ (a.s., $P$).
Our next project is the derivation of the renowned Itô formula. (Our presentation again follows that of Kunita and Watanabe.)
Namely, let $X = (X^1,\dots,X^M) \in (\mathrm{Mart}^2_c)^M$ and let $Y: [0,\infty)\times E \to \mathbb{R}^N$ be a $P$-almost surely continuous progressively measurable function of local bounded variation such that $|Y|(T) = \big(\sum_{j=1}^N |Y^j|(T)^2\big)^{1/2} \in L^1(P)$ for each $T > 0$. Given $f \in C_b^{2,1}(\mathbb{R}^M\times\mathbb{R}^N)$, Itô's formula is the statement that:

(3.6) $f(Z(T)) - f(Z(0)) = \sum_{i=1}^M \int_0^T \partial_{x_i}f(Z(t))\,dX^i(t) + \sum_{j=1}^N \int_0^T \partial_{y_j}f(Z(t))\,Y^j(dt) + \frac{1}{2}\sum_{i,i'=1}^M \int_0^T \partial_{x_i}\partial_{x_{i'}}f(Z(t))\,\langle X^i,X^{i'}\rangle(dt)$ (a.s., $P$),

where $Z \equiv (X,Y)$.
It is clear that, since (3.6) is just an identification statement, we may assume that $t \mapsto Z(t,\xi)$ and $t \mapsto \langle\langle X,X\rangle\rangle(t,\xi) \equiv \big(\big(\langle X^i,X^{i'}\rangle(t,\xi)\big)\big)_{i,i'=1}^M$ are continuous for all $\xi \in E$. In addition, it suffices to prove (3.6) when $f \in C_0^\infty(\mathbb{R}^{M+N})$. Thus, we will proceed under these assumptions.
Given $n \ge 1$, define $\tau_k^n$, $k \ge 0$, so that $\tau_0^n = 0$ and

$\tau_{k+1}^n = \big[\inf\{t > \tau_k^n: |\langle\langle X,X\rangle\rangle(t) - \langle\langle X,X\rangle\rangle(\tau_k^n)| \vee |Z(t)-Z(\tau_k^n)| > 1/n\}\big]\wedge(\tau_k^n + 1/n)\wedge T.$

Then, for each $T > 0$ and $\xi \in E$, $\tau_k^n(\xi) = T$ for all but a finite number of $k$'s. Hence, $f(Z(T)) - f(Z(0)) = \sum_{k=0}^\infty \big(f(Z_{k+1}^n) - f(Z_k^n)\big)$, where $Z_k^n = (X_k^n,Y_k^n) = Z(\tau_k^n)$.
Clearly:

$f(Z_{k+1}^n) - f(Z_k^n) = \big[f(X_{k+1}^n,Y_k^n) - f(X_k^n,Y_k^n)\big] + \big[f(X_{k+1}^n,Y_{k+1}^n) - f(X_{k+1}^n,Y_k^n)\big]$
$= \sum_{i=1}^M \partial_{x_i}f(Z_k^n)\,\Delta_k^n X^i + \frac{1}{2}\sum_{i,i'=1}^M \partial_{x_i}\partial_{x_{i'}}f(Z_k^n)\,\Delta_k^n\langle X^i,X^{i'}\rangle + \sum_{j=1}^N \int_{\tau_k^n}^{\tau_{k+1}^n} \partial_{y_j}f(X_{k+1}^n,Y(t))\,Y^j(dt) + R_k^n,$

where $\Delta_k^n\zeta \equiv \zeta(\tau_{k+1}^n) - \zeta(\tau_k^n)$ and

$R_k^n \equiv \frac{1}{2}\sum_{i,i'=1}^M \big(\partial_{x_i}\partial_{x_{i'}}f(\bar Z_k^n) - \partial_{x_i}\partial_{x_{i'}}f(Z_k^n)\big)\,\Delta_k^n X^i\,\Delta_k^n X^{i'} + \frac{1}{2}\sum_{i,i'=1}^M \partial_{x_i}\partial_{x_{i'}}f(Z_k^n)\big(\Delta_k^n X^i\,\Delta_k^n X^{i'} - \Delta_k^n\langle X^i,X^{i'}\rangle\big),$

with $\bar Z_k^n$ a point on the line joining $(X_{k+1}^n,Y_k^n)$ to $(X_k^n,Y_k^n)$.
By exercise (3.3),

$\sum_{k=0}^\infty \partial_{x_i}f(Z_k^n)\,\Delta_k^n X^i = \int_0^T \partial_{x_i}f(Z^n(s))\,dX^i(s),$

where $Z^n(s) = Z_k^n$ for $s \in [\tau_k^n,\tau_{k+1}^n)$ and $Z^n(s) = Z(T)$ for $s \ge T$. Since $Z^n(s) \to Z(s)$ uniformly for $s \in [0,T]$, we conclude that

$\sum_{k=0}^\infty \partial_{x_i}f(Z_k^n)\,\Delta_k^n X^i \to \int_0^T \partial_{x_i}f(Z(s))\,dX^i(s)$ in $L^2(P)$.

Also, from standard integration theory,

$\sum_{k=0}^\infty \partial_{x_i}\partial_{x_{i'}}f(Z_k^n)\,\Delta_k^n\langle X^i,X^{i'}\rangle \to \int_0^T \partial_{x_i}\partial_{x_{i'}}f(Z(s))\,\langle X^i,X^{i'}\rangle(ds)$

and

$\sum_{k=0}^\infty \int_{\tau_k^n}^{\tau_{k+1}^n} \partial_{y_j}f(X_{k+1}^n,Y(t))\,Y^j(dt) \to \int_0^T \partial_{y_j}f(Z(s))\,Y^j(ds)$

in $L^1(P)$. It therefore remains only to check that $\sum_k R_k^n \to 0$ in $P$-measure.
First observe that

$\big|\big(\partial_{x_i}\partial_{x_{i'}}f(\bar Z_k^n) - \partial_{x_i}\partial_{x_{i'}}f(Z_k^n)\big)\,\Delta_k^n X^i\,\Delta_k^n X^{i'}\big| \le C\big[(\Delta_k^n X^i)^2 + (\Delta_k^n X^{i'})^2\big]/n$

and therefore that

$E^P\Big[\Big|\sum_k \big(\partial_{x_i}\partial_{x_{i'}}f(\bar Z_k^n) - \partial_{x_i}\partial_{x_{i'}}f(Z_k^n)\big)\,\Delta_k^n X^i\,\Delta_k^n X^{i'}\Big|\Big] \le 2C\,E^P\big[|X(T)-X(0)|^2\big]/n \to 0.$

At the same time:

$E^P\Big[\Big(\sum_{k=0}^\infty \partial_{x_i}\partial_{x_{i'}}f(Z_k^n)\big(\Delta_k^n X^i\,\Delta_k^n X^{i'} - \Delta_k^n\langle X^i,X^{i'}\rangle\big)\Big)^2\Big] = \sum_{k=0}^\infty E^P\Big[\Big(\partial_{x_i}\partial_{x_{i'}}f(Z_k^n)\big(\Delta_k^n X^i\,\Delta_k^n X^{i'} - \Delta_k^n\langle X^i,X^{i'}\rangle\big)\Big)^2\Big]$
$\le C\sum_{k=0}^\infty E^P\Big[\big(\Delta_k^n X^i\,\Delta_k^n X^{i'} - \Delta_k^n\langle X^i,X^{i'}\rangle\big)^2\Big] \le C'\sum_{k=0}^\infty E^P\Big[(\Delta_k^n X^i)^4 + (\Delta_k^n X^{i'})^4 + (\Delta_k^n\langle X^i\rangle)^2 + (\Delta_k^n\langle X^{i'}\rangle)^2\Big]$
$\le C''\sum_{k=0}^\infty E^P\big[|\Delta_k^n X|^2\big]/n = C''\,E^P\big[|X(T)-X(0)|^2\big]/n \to 0.$

Combining these, we now see that (3.6) holds.
The applications of Itô's formula are innumerable. One particularly beautiful one is the following derivation, due to Kunita and Watanabe, of a theorem proved originally by Lévy.
(3.7) Theorem (Lévy): Let $\beta \in (\mathrm{Mart}^2_c)^N$ and assume that $\langle\langle\beta,\beta\rangle\rangle(t) = tI$, $t \ge 0$ (i.e. $\langle\beta^i,\beta^j\rangle(t) = t\delta^{i,j}$). Then $(\beta(t)-\beta(0),\ \sigma(\beta(s): 0\le s\le t),\ P)$ is an $N$-dimensional Wiener process.
Proof: We assume, without loss of generality, that $\beta(0) \equiv 0$. What we must show is that $P\circ\beta^{-1} = \mathcal{W}$; and, by Corollary (1.13), this comes down to showing that

$\Big(\varphi(\beta(t)) - \frac{1}{2}\int_0^t \Delta\varphi(\beta(s))\,ds,\ \sigma(\beta(s): 0\le s\le t),\ P\Big)$

is a martingale for every $\varphi \in C_0^\infty(\mathbb{R}^N)$. Clearly this will follow if we show that $\big(\varphi(\beta(t)) - \frac{1}{2}\int_0^t \Delta\varphi(\beta(s))\,ds,\ \mathcal{F}_t,\ P\big)$ is a martingale. But, by Itô's formula:

$\varphi(\beta(t)) - \varphi(\beta(0)) - \frac{1}{2}\int_0^t \Delta\varphi(\beta(s))\,ds = \sum_{i=1}^N \int_0^t \partial_{x_i}\varphi(\beta(s))\,d\beta^i(s),$

and so the proof is complete. Q.E.D.
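Lévy's characterization can be illustrated crudely by simulation: a martingale built from scaled coin tosses has discrete bracket exactly $t$, and its marginals look Gaussian. The sketch below is only an illustration (all names and sample sizes are ours), not a proof.

```python
import numpy as np

# Crude numerical illustration of Levy's theorem (3.7): a martingale with
# bracket <X>(t) = t should be Brownian.  X is built from independent +-1
# coin tosses scaled by 1/sqrt(n), so the discrete bracket at time 1 is
# exactly 1; we compare the law of X(1) with the standard Gaussian.

rng = np.random.default_rng(0)
n, paths = 400, 20000
steps = rng.choice([-1.0, 1.0], size=(paths, n)) / np.sqrt(n)
X1 = steps.sum(axis=1)                 # X(1) along each sample path

mean, var = X1.mean(), X1.var()
cdf0 = (X1 <= 0.0).mean()              # empirical P(X(1) <= 0); Gaussian: 1/2
print(mean, var, cdf0)
```

Of course, the theorem says far more: every continuous local martingale with bracket $tI$, however constructed, is a Wiener process.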
Given a right continuous, $P$-almost surely continuous, $\{\mathcal{F}_t\}$-progressively measurable function $\beta: [0,\infty)\times E \to \mathbb{R}^N$, we will say that $(\beta(t),\mathcal{F}_t,P)$ is an $N$-dimensional Brownian motion if $\beta \in (\mathrm{Mart}_c(\{\mathcal{F}_t\},P))^N$, $\beta(0) = 0$, and $\langle\langle\beta,\beta\rangle\rangle(t) = tI$, $t \ge 0$, (a.s., $P$).
(3.8) Exercise:

i) Let $\beta: [0,\infty)\times E \to \mathbb{R}^N$ be a right continuous, $P$-almost surely continuous, progressively measurable function with $\beta(0) = 0$. Show that $(\beta(t),\mathcal{F}_t,P)$ is an $N$-dimensional Brownian motion if and only if

$P(\beta(t)\in\Gamma\mid\mathcal{F}_s) = \int_\Gamma g(t-s,\,y-\beta(s))\,dy,\quad 0 \le s < t \ \text{and}\ \Gamma \in \mathcal{B}_{\mathbb{R}^N},$

where $g$ denotes the $N$-dimensional Gauss kernel.

ii) Generalize Lévy's theorem by showing that if $a$ and $b$ are as in section 1 and $\{P_{s,x}: (s,x)\in[0,\infty)\times\mathbb{R}^N\}$ is the associated family of measures on $\Omega$, then, for each $(s,x)$, $P_{s,x}$ is the unique $P \in M_1(\Omega)$ such that: $P(x(0)=x) = 1$, $x(\cdot) - \int_0^\cdot b(s+t,x(t))\,dt \in (\mathrm{Mart}_c(\{\mathcal{M}_t\},P))^N$, and

$\Big\langle\Big\langle x(\cdot) - \int_0^\cdot b(s+t,x(t))\,dt\Big\rangle\Big\rangle(T) = \int_0^T a(s+t,x(t))\,dt,\quad T \ge 0,\ \text{(a.s., } P).$
Although the class $\mathrm{Mart}^2_c$ has many pleasing properties, it is not invariant under changes of coordinates: even if $f \in C_0^\infty(\mathbb{R}^1)$, $f\circ X$ will seldom be an element of $\mathrm{Mart}^2_c$ simply because $X$ is. There are two reasons for this, the first of which is the question of integrability. To remove this first problem, we introduce the class $\mathrm{Mart}^{\mathrm{loc}}_c$ ($= \mathrm{Mart}^{\mathrm{loc}}_c(\{\mathcal{F}_t\},P)$) of $P$-almost surely continuous local martingales. Namely, we say that $X \in \mathrm{Mart}^{\mathrm{loc}}_c$ if $X: [0,\infty)\times E \to \mathbb{R}^1$ is a right continuous, $P$-almost surely continuous function for which there exists a non-decreasing sequence of stopping times $\sigma_n$ with the properties that $\sigma_n \to \infty$ (a.s., $P$) and $(X^{\sigma_n}(t),\mathcal{F}_t,P)$ is a bounded martingale for each $n$ (recall that $X^{\sigma_n} \equiv X(\cdot\wedge\sigma_n)$). It is easy to check that $\mathrm{Mart}^{\mathrm{loc}}_c$ is a linear space. Moreover, given $X \in \mathrm{Mart}^{\mathrm{loc}}_c$, there is a $P$-almost surely unique non-decreasing, $P$-almost surely continuous, progressively measurable function $\langle X\rangle$ such that $\langle X\rangle(0) = 0$ and $X^2 - \langle X\rangle \in \mathrm{Mart}^{\mathrm{loc}}_c$. The uniqueness is an easy consequence of Corollary (2.24) (cf. part ii) of exercise (3.9) below). To prove existence, simply take $\langle X\rangle(t) = \sup_n \langle X^{\sigma_n}\rangle(t)$, $t \ge 0$. Finally, given $X,Y \in \mathrm{Mart}^{\mathrm{loc}}_c$, $\langle X,Y\rangle \equiv \frac{1}{4}\big(\langle X+Y\rangle - \langle X-Y\rangle\big)$ is the $P$-almost surely unique progressively measurable function of local bounded variation which is $P$-almost surely continuous and satisfies $\langle X,Y\rangle(0) = 0$ and $XY - \langle X,Y\rangle \in \mathrm{Mart}^{\mathrm{loc}}_c$.
(3.9) Exercise:

i) Let $X: [0,\infty)\times E \to \mathbb{R}^1$ be a right continuous $P$-almost surely continuous progressively measurable function. Show that $(X(t),\mathcal{F}_t,P)$ is a martingale ($X \in \mathrm{Mart}_c$) if and only if $X \in \mathrm{Mart}^{\mathrm{loc}}_c$ and there is a non-decreasing sequence of stopping times $\tau_n$ such that $\tau_n \to \infty$ (a.s., $P$) and $\{X(t\wedge\tau_n): n \ge 1\}$ is uniformly $P$-integrable (e.g. $\sup_n E^P[X(t\wedge\tau_n)^2] < \infty$) for each $t \ge 0$.

ii) Show that if $X \in \mathrm{Mart}^{\mathrm{loc}}_c$ and $\zeta \equiv \sup\{t \ge 0: \langle X\rangle(t) = 0\}$, then $X(t\wedge\zeta) = X(0)$, $t \ge 0$, (a.s., $P$).

iii) Let $X \in \mathrm{Mart}^{\mathrm{loc}}_c$ and let $\alpha: [0,\infty)\times E \to \mathbb{R}^1$ be a progressively measurable function satisfying $\int_0^T \alpha(t)^2\,\langle X\rangle(dt) < \infty$ (a.s., $P$) for all $T \ge 0$. Show that there exists an element $\int_0^\cdot \alpha(s)\,dX(s)$ of $\mathrm{Mart}^{\mathrm{loc}}_c$ such that $\big\langle\int_0^\cdot \alpha(s)\,dX(s),\,Y\big\rangle = \int_0^\cdot \alpha\,d\langle X,Y\rangle$ for all $Y \in \mathrm{Mart}^{\mathrm{loc}}_c$ and that, up to a $P$-null set, there is only one such element of $\mathrm{Mart}^{\mathrm{loc}}_c$. The quantity $\int_0^\cdot \alpha(s)\,dX(s)$ is again called the (Itô) stochastic integral of $\alpha$ with respect to $X$.

iv) Suppose that $X \in (\mathrm{Mart}^{\mathrm{loc}}_c)^M$ and that $Y: [0,\infty)\times E \to \mathbb{R}^N$ is a right-continuous $P$-almost surely continuous progressively measurable function of local bounded variation. Set $Z = (X,Y)$ and let $f \in C^{2,1}(\mathbb{R}^M\times\mathbb{R}^N)$ be given. Show that all the quantities in (3.6) are still well-defined and that (3.6) continues to hold. We will continue to refer to this extension of (3.6) as Itô's formula.
(3.10) Lemma: Let $X \in \mathrm{Mart}^{\mathrm{loc}}_c$ and let $\sigma \le \tau$ be stopping times such that $\langle X\rangle(\tau) - \langle X\rangle(\sigma) \le A$ for some $A < \infty$. Then,

(3.11) $P\Big(\sup_{\sigma\le t\le\tau}|X(t)-X(\sigma)| \ge R\Big) \le 2\exp(-R^2/2A).$

In particular, there exists for each $q \in (0,\infty)$ a universal $C_q < \infty$ such that

(3.12) $E^P\Big[\sup_{\sigma\le t\le\tau}|X(t)-X(\sigma)|^q\Big] \le C_q A^{q/2}.$

Proof: By replacing $X$ with $X(\cdot\vee\sigma) - X(\sigma)$, we see that it suffices to treat the case when $X(0) \equiv 0$, $\sigma \equiv 0$ and $\tau \equiv \infty$. For $n \ge 1$, define $\zeta_n = \inf\{t \ge 0: \sup_{0\le s\le t}|X(s)| > n\}$ and set $X_n = X^{\zeta_n}$ and $Y_n^\lambda = \exp\big(\lambda X_n - \frac{\lambda^2}{2}\langle X_n\rangle\big)$. Then, by Itô's formula, $Y_n^\lambda = 1 + \lambda\int_0^\cdot Y_n^\lambda(s)\,dX_n(s) \in \mathrm{Mart}_c$. Hence, by Doob's inequality:

$P\Big(\sup_{0\le t\le T}X_n(t) \ge R\Big) \le P\Big(\sup_{0\le t\le T}Y_n^\lambda(t) \ge \exp(\lambda R - \lambda^2 A/2)\Big) \le \exp(-\lambda R + \lambda^2 A/2)$

for all $T > 0$ and $\lambda > 0$. After minimizing the right hand side with respect to $\lambda > 0$, letting $n$ and $T$ tend to $\infty$, and then repeating the argument with $-\lambda$ replacing $\lambda$, we obtain the required estimate. Clearly, (3.12) is an immediate consequence of (3.11). Q.E.D.
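The Gaussian tail bound (3.11) can be checked by Monte Carlo for Brownian motion, where $\langle X\rangle(T) = T$. The simulation below (discretization and sample sizes are our choices) compares the empirical tail of the running supremum with the bound; discretization only makes the empirical supremum smaller, so the comparison is conservative.

```python
import numpy as np

# Monte-Carlo check of the exponential estimate (3.11): for Brownian X with
# <X>(T) = T = A = 1, P(sup_{0<=t<=T}|X(t)| >= R) <= 2 exp(-R^2/(2A)).
# This only illustrates the bound; it does not prove it.

rng = np.random.default_rng(1)
A, n, paths = 1.0, 500, 10000
dX = rng.normal(0.0, np.sqrt(A / n), size=(paths, n))
sup_abs = np.abs(np.cumsum(dX, axis=1)).max(axis=1)

emps, bounds = [], []
for R in (1.5, 2.0, 2.5):
    emps.append(float((sup_abs >= R).mean()))
    bounds.append(float(2.0 * np.exp(-R * R / (2.0 * A))))
print(list(zip(emps, bounds)))   # empirical tail vs. the bound from (3.11)
```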
(3.13) Exercise:

i) Suppose that $X \in (\mathrm{Mart}^{\mathrm{loc}}_c)^M$ and that $\tau$ is a stopping time for which $\sum_{i=1}^M \langle X^i\rangle(\tau) \le A$ (a.s., $P$) for some $A < \infty$. Let $\alpha: [0,\infty)\times E \to \mathbb{R}^M$ be a progressively measurable function with the property that

$|\alpha(t)| \le B\exp\Big[\sup_{0\le s\le t}|X(s)-X(0)|^\gamma\Big]$ (a.s., $P$)

for each $t \ge 0$ and some $\gamma \in [0,2)$ and $B < \infty$. Show that $\big(\sum_{i=1}^M \int_0^{t\wedge\tau}\alpha^i(s)\,dX^i(s),\ \mathcal{F}_t,\ P\big)$ is a martingale which is $L^q(P)$-bounded for every $q \in [1,\infty)$. In particular, show that if $\sum_{i=1}^M \langle X^i\rangle(T)$ is $P$-almost surely bounded for each $T > 0$ and if $f \in C^{2,1}(\mathbb{R}^M\times\mathbb{R}^N)$ satisfies the estimate $|\nabla_x f(x,y)| \le A\exp(B|x|^\gamma)$, $(x,y) \in \mathbb{R}^M\times\mathbb{R}^N$, for some $A,B \in (0,\infty)$ and $\gamma \in [0,2)$, then the stochastic integrals occurring in Itô's formula are elements of $\mathrm{Mart}^2_c$.
ii) Given $X \in \mathrm{Mart}^{\mathrm{loc}}_c$, set $\mathscr{E}_X(t) = \exp\big[X(t) - \frac{1}{2}\langle X\rangle(t)\big]$ for $t \ge 0$. Show that $\mathscr{E}_X$ is the $P$-almost surely unique $Y \in \mathrm{Mart}^{\mathrm{loc}}_c$ such that $Y(t) = 1 + \int_0^t Y(s)\,dX(s)$, $t \ge 0$, (a.s., $P$). (Hint: To prove uniqueness, consider $Y(t)/\mathscr{E}_X(t)$.) Also, if $\|\langle X\rangle(\tau)\|_{L^\infty(P)} < \infty$ for some finite stopping time $\tau$, show that $(\mathscr{E}_X(t\wedge\tau),\mathcal{F}_t,P)$ is a martingale and that, for each $q \in [1,\infty)$,

$\|\mathscr{E}_X(\tau)\|_{L^q(P)} \le \exp\Big[\frac{q-1}{2}\,\|\langle X\rangle(\tau)\|_{L^\infty(P)}\Big].$

The quantity $\mathscr{E}_X$ is sometimes called the Itô exponential of $X$.
The following exercise contains a discussion of an important and often useful representation theorem. Loosely speaking, what it says is that an $X \in \mathrm{Mart}^{\mathrm{loc}}_c$ "has the paths of a Brownian motion and uses $\langle X\rangle$ as its clock."
(3.14) Exercise:

i) Let $(X(t),\mathcal{F}_t,P)$ be a martingale and for each $t \ge 0$ let $\bar{\mathcal{F}}_{t+}$ denote the $P$-completion of $\mathcal{F}_{t+} = \bigcap_{\epsilon>0}\mathcal{F}_{t+\epsilon}$. Show that $(X(t),\bar{\mathcal{F}}_{t+},P)$ is again a martingale.

ii) Let $Y: [0,\infty)\times E \to \mathbb{R}^1$ be a measurable function such that $t \mapsto Y(t,\xi)$ is right-continuous for every $\xi \in E$. Assuming that for each $T > 0$ there is a $C_T < \infty$ such that $E^P[|Y(t)-Y(s)|^4] \le C_T(t-s)^2$, $0 \le s < t \le T$, show that $t \mapsto Y(t,\xi)$ is continuous for $P$-almost every $\xi \in E$. (Hints: For $n \ge 0$ let

$Y_n(t) = Y(k/2^n) + 2^n(t-k/2^n)\big(Y((k+1)/2^n) - Y(k/2^n)\big)$ for $t \in [k/2^n,(k+1)/2^n)$ and $k \ge 0$.

Show that for any $\beta \in (0,1]$ and $T \in \mathbb{Z}^+$:

$\sup_{0\le s<t\le T}\frac{|Y_n(t)-Y_n(s)|}{(t-s)^\beta} \le \sup_{0\le s<t\le T}\frac{|Y_{n+1}(t)-Y_{n+1}(s)|}{(t-s)^\beta}.$

Next, using Theorem (I.2.12), show that for each $T \in \mathbb{Z}^+$ there is a $\beta \in (0,1]$ such that:

$\lim_{R\to\infty}\ \sup_{n\ge 0}\ P\Big(\sup_{0\le s<t\le T}\frac{|Y_n(t)-Y_n(s)|}{(t-s)^\beta} > R\Big) = 0;$

and conclude that

$\lim_{R\to\infty} P\Big(\sup_{n\ge 0}\ \sup_{0\le s<t\le T}\frac{|Y_n(t)-Y_n(s)|}{(t-s)^\beta} > R\Big) = 0.)$

iii) Let $X \in \mathrm{Mart}^{\mathrm{loc}}_c(\{\mathcal{F}_t\},P)$ and define $T(t) = \sup\{s \ge 0: \langle X\rangle(s) < t\}$. Show that $t \mapsto T(t)$ is right-continuous and non-decreasing and that, for each $t \ge 0$, $T(t)$ is an $\{\bar{\mathcal{F}}_{s+}\}$-stopping time. Next, set $\mathcal{G}_t = \bar{\mathcal{F}}_{T(t)+}$ ($\equiv \{A: A\cap\{T(t)<s\} \in \bar{\mathcal{F}}_{s+}$ for all $s \ge 0\}$); and define $Z: [0,\infty)\times E \to \mathbb{R}^1$ so that:

$Z(t) = \begin{cases} X(T(t)) - X(0) & \text{if } T(t) < \infty \\ X(\infty) - X(0) & \text{if } T(t) = \infty, \end{cases}$

where $X(\infty) = \lim_{s\to\infty}X(s)$ when this limit exists in $\mathbb{R}^1$ and $X(\infty) = X(0)$ otherwise. Show that: $\langle X\rangle(s)$ is a $\{\mathcal{G}_t\}$-stopping time for each $s \ge 0$, $Z(0) = 0$ (a.s., $P$), $(Z(t),\mathcal{G}_t,P)$ is a martingale, and that $E^P[|Z(t)-Z(s)|^4] \le C_4(t-s)^2$, $0 \le s < t < \infty$ (where $C_4$ is the constant in (3.12)). Conclude that $Z \in \mathrm{Mart}_c(\{\mathcal{G}_t\},P)$ and show that $\langle Z\rangle(dt) \le dt$ (a.s., $P$). Finally, show that, for each $T > 0$, $\langle Z\rangle(t) = t$, $t \in [0,T]$, (a.s., $P$) on $\{\langle X\rangle(\infty) > T\}$. In particular, if $\langle X\rangle(\infty) = \infty$ (a.s., $P$), set $B = Z - Z(0)$ and conclude that $(B(t),\mathcal{G}_t,P)$ is a one-dimensional Brownian motion and that

(3.15) $X(t) = X(0) + B(\langle X\rangle(t)),\quad t \ge 0,\ \text{(a.s., } P).$
iv) To extend the representation in (3.15) to cases in which $\langle X\rangle(\infty) = \infty$ (a.s., $P$) may fail, proceed as follows. Let $\mathcal{W}$ denote one-dimensional Wiener measure on $(\Omega,\mathcal{M})$ and set $Q = P\times\mathcal{W}$ and $\mathcal{H}_t = \mathcal{G}_t\times\mathcal{M}_t$, $t \ge 0$. Let $Z$ be as in iii); but now define $B: [0,\infty)\times E\times\Omega \to \mathbb{R}^1$ by:

$B(t,(\xi,\omega)) = Z(t\wedge\langle X\rangle(\infty,\xi),\xi) - Z(0) + x(t,\omega) - x(t\wedge\langle X\rangle(\infty,\xi),\omega).$

Show that $(B(t),\mathcal{H}_t,Q)$ is a one-dimensional Brownian motion and that (3.15) holds (a.s., $Q$).
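The "clock" picture behind (3.15) is easy to see numerically for a martingale with a deterministic bracket. In the sketch below (the integrand $\sigma$ and all parameters are our choices) $X(t) = \int_0^t \sigma(s)\,dB(s)$, so $\langle X\rangle(t) = \int_0^t \sigma(s)^2\,ds$, and in particular $X(1)$ should have the variance of $B(\langle X\rangle(1))$, namely $\langle X\rangle(1)$.

```python
import numpy as np

# Numerical sketch of the time-change idea in (3.14)/(3.15): for
# X(t) = int_0^t sigma(s) dB(s) with deterministic sigma, the bracket is
# <X>(t) = int_0^t sigma(s)^2 ds and X(1) has the law of B(<X>(1)),
# so in particular Var X(1) = <X>(1).

rng = np.random.default_rng(2)
n, paths, T = 1000, 5000, 1.0
t = np.linspace(0.0, T, n + 1)[:-1]                 # left endpoints
sigma = 1.0 + np.sin(2.0 * np.pi * t)               # deterministic integrand
dB = rng.normal(0.0, np.sqrt(T / n), size=(paths, n))
X1 = (sigma * dB).sum(axis=1)                       # Ito sum for X(1)

bracket = float((sigma ** 2).sum() * (T / n))       # <X>(1) = int sigma^2 ds
print(X1.var(), bracket)                            # both near 1.5
```

For random $\sigma$ the same identity holds path by path, which is exactly what the exercise's time change exploits.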
As an application of the preceding, consider the following.

(3.16) Exercise: Let $X \in \mathrm{Mart}^{\mathrm{loc}}_c$. Show that

$\{\lim_{s\to\infty}X(s) \text{ exists in } \mathbb{R}^1\} = \{\langle X\rangle(\infty) < \infty\}$ (a.s., $P$).

(Hint: Prove that if $\beta$ is a one-dimensional Brownian motion, then $\limsup_{s\to\infty}\beta(s) = -\liminf_{s\to\infty}\beta(s) = \infty$ almost surely.)
Remark: As a consequence of (3.16), we see that there is no hope of defining $\int_0^T \alpha(s)\,dX(s)$ for $\alpha$'s which fail to satisfy $\int_0^T \alpha(s)^2\,\langle X\rangle(ds) < \infty$ (a.s., $P$).
We have seen in Lemma (3.10) that

$E^P\Big[\sup_{0\le t\le T}|X(t)-X(0)|^q\Big] \le C_q\,\big\|\langle X\rangle(T)\big\|_{L^\infty(P)}^{q/2}$

for $X \in \mathrm{Mart}^{\mathrm{loc}}_c$. At least when $q \in [2,\infty)$, we are now going to prove a refinement of this result. The inequalities which we have in mind are referred to as Burkholder's inequality; however, the proof which we are about to give is due to A. Garsia and takes full advantage of the fact that we are dealing with continuous martingales.
(3.17) Theorem (Burkholder's Inequality): For each $q \in [2,\infty)$, all $X \in \mathrm{Mart}^{\mathrm{loc}}_c$, and all stopping times $\tau$:

(3.18) $a_q\,\big\|\langle X\rangle(\tau)^{1/2}\big\|_{L^q(P)} \le \big\|(X-X(0))^*(\tau)\big\|_{L^q(P)} \le A_q\,\big\|\langle X\rangle(\tau)^{1/2}\big\|_{L^q(P)},$

where $a_q = (2q)^{-1/2}$ and $A_q = 2^{-1/2}\,q\,(q')^{(q-1)/2}$ ($1/q' \equiv 1 - 1/q$).
Proof: First note that it suffices to prove (3.18) when $X(0) = 0$ and $\tau$, $X(\tau)$, and $\langle X\rangle(\tau)$ are all bounded. Second, by replacing $X$ with $X^\tau$ if necessary, we can reduce to the case when $\tau \equiv T < \infty$ and $X$ and $\langle X\rangle$ are bounded. Hence, we will prove (3.18) under these conditions. In particular, this means that $X \in \mathrm{Mart}^2_c$.
To prove the right hand side, apply Itô's formula to write:

$|X(T)|^q = q\int_0^T \mathrm{sgn}(X(t))|X(t)|^{q-1}\,dX(t) + \frac{q(q-1)}{2}\int_0^T |X(t)|^{q-2}\,\langle X\rangle(dt).$

Then, by (2.11):

$(1/q')^q\,E^P[X^*(T)^q] \le E^P[|X(T)|^q] = \frac{q(q-1)}{2}\,E^P\Big[\int_0^T |X(t)|^{q-2}\,\langle X\rangle(dt)\Big] \le \frac{q(q-1)}{2}\,E^P\big[X^*(T)^{q-2}\langle X\rangle(T)\big] \le \frac{q(q-1)}{2}\,E^P[X^*(T)^q]^{1-2/q}\,E^P[\langle X\rangle(T)^{q/2}]^{2/q},$

from which the right hand side of (3.18) is immediate.
To prove the left hand side of (3.18), set $Y(T) = \int_0^T \langle X\rangle(t)^{(q-2)/4}\,dX(t)$ and note that, by Itô's formula:

$X(T)\langle X\rangle(T)^{(q-2)/4} = Y(T) + \int_0^T X(t)\,d\big(\langle X\rangle(t)^{(q-2)/4}\big).$

Hence:

$|Y(T)| \le 2X^*(T)\langle X\rangle(T)^{(q-2)/4}.$

At the same time:

$\langle Y\rangle(T) = \int_0^T \langle X\rangle(t)^{(q-2)/2}\,\langle X\rangle(dt) = \frac{2}{q}\,\langle X\rangle(T)^{q/2}.$

Thus:

$E^P[\langle X\rangle(T)^{q/2}] = \frac{q}{2}\,E^P[\langle Y\rangle(T)] = \frac{q}{2}\,E^P[Y(T)^2] \le 2q\,E^P\big[X^*(T)^2\langle X\rangle(T)^{(q-2)/2}\big] \le 2q\,E^P[X^*(T)^q]^{2/q}\,E^P[\langle X\rangle(T)^{q/2}]^{1-2/q}.$

Q.E.D.
Remark: It turns out that (3.18) actually holds for all $q \in (0,\infty)$ with appropriate choices of $a_q$ and $A_q$. When $q \in (1,2]$, this is again a result due to D. Burkholder; for $q \in (0,1]$, it was first proved by D. Burkholder and R. Gundy using a quite intricate argument. However, for continuous martingales, A. Garsia showed that the proof for $q \in (0,2]$ can be again greatly simplified by clever application of Itô's formula (cf. Theorem 3.1 in Stochastic Differential Equations and Diffusion Processes by N. Ikeda and S. Watanabe, North Holland, 1981).
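For Brownian motion on $[0,1]$ the bracket is deterministic, so (3.18) pins the $L^4$ norm of the running supremum between two explicit constants. The Monte-Carlo sketch below (sampling choices are ours) checks this for $q = 4$.

```python
import numpy as np

# Monte-Carlo illustration of Burkholder's inequality (3.18) for q = 4 and
# Brownian motion on [0,1], where <X>(1) = 1 is deterministic, so (3.18)
# reads a_4 <= ||X*(1)||_{L^4} <= A_4 with a_q = (2q)^{-1/2} and
# A_q = 2^{-1/2} q (q')^{(q-1)/2}, 1/q' = 1 - 1/q.

rng = np.random.default_rng(3)
q = 4.0
qprime = q / (q - 1.0)
a_q = (2.0 * q) ** -0.5
A_q = q * qprime ** ((q - 1.0) / 2.0) / np.sqrt(2.0)

n, paths = 1000, 10000
X = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), size=(paths, n)), axis=1)
Xstar = np.abs(X).max(axis=1)              # X*(1) = sup_{t<=1}|X(t)|
norm4 = float((Xstar ** q).mean() ** (1.0 / q))
print(a_q, norm4, A_q)                     # a_4 < ||X*(1)||_4 < A_4
```

The constants are far from sharp for this particular martingale; the content of the theorem is that they work uniformly over all of $\mathrm{Mart}^{\mathrm{loc}}_c$.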
Before returning to our main line of development, we will take up a particularly beautiful application of Itô's formula to the study of Brownian paths.
(3.19) Theorem: Let $(\beta(t),\mathcal{F}_t,P)$ be a one-dimensional Brownian motion and assume that the $\mathcal{F}_t$'s are $P$-complete. Then there exists a $P$-almost surely unique function $\ell: [0,\infty)\times\mathbb{R}^1\times E \to [0,\infty)$ such that:

i) For each $x \in \mathbb{R}^1$, $(t,\xi) \mapsto \ell(t,x,\xi)$ is progressively measurable; for each $\xi \in E$, $(t,x) \mapsto \ell(t,x,\xi)$ is continuous; and, for each $(x,\xi) \in \mathbb{R}^1\times E$, $\ell(0,x,\xi) = 0$ and $t \mapsto \ell(t,x,\xi)$ is non-decreasing.

ii) For all bounded measurable $\varphi: \mathbb{R}^1 \to \mathbb{R}^1$:

(3.20) $\int \varphi(y)\,\ell(t,y)\,dy = \frac{1}{2}\int_0^t \varphi(\beta(s))\,ds,\quad t \ge 0,\ \text{(a.s., } P).$

Moreover, for each $y \in \mathbb{R}^1$:

(3.21) $\ell(t,y) = \beta(t)\vee y - 0\vee y - \int_0^t \chi_{[y,\infty)}(\beta(s))\,d\beta(s),\quad t \ge 0,\ \text{(a.s., } P).$
Proof: Clearly i) and ii) uniquely determine $\ell$. To see how one might proceed to construct $\ell$, note that (3.20) can be interpreted as the statement that "$\ell(t,y) = \frac{1}{2}\int_0^t \delta(\beta(s)-y)\,ds$", where $\delta$ is the Dirac $\delta$-function. This interpretation explains the origin of (3.21). Indeed, $\frac{d}{dx}(x\vee y) = \chi_{[y,\infty)}(x)$ and $\frac{d^2}{dx^2}(x\vee y) = \delta(x-y)$. Hence, (3.21) is precisely the expression for $\ell$ predicted by Itô's formula. In order to justify this line of reasoning, it will be necessary to prove that there is a version of the right hand side of (3.21) which has the properties demanded by i).
To begin with, for fixed $y$, let $t \mapsto \ell(t,y)$ be the right hand side of (3.21). We will first check that $t \mapsto \ell(t,y)$ is $P$-almost surely non-decreasing. To this end, choose $\rho \in C_0^\infty(\mathbb{R}^1)^+$ having integral 1, and define $f_n(x) = n\int \rho(n(x-\zeta))(\zeta\vee y)\,d\zeta$ for $n \ge 1$. Then, by Itô's formula:

$f_n(\beta(t)) - f_n(0) - \int_0^t f_n'(\beta(s))\,d\beta(s) = \frac{1}{2}\int_0^t f_n''(\beta(s))\,ds$ (a.s., $P$).

Because $f_n'' \ge 0$, we conclude that the left hand side of the preceding is $P$-almost surely non-decreasing as a function of $t$. In addition, an easy calculation shows that the left hand side tends, $P$-almost surely, to $\ell(\cdot,y)$ uniformly on finite intervals. Thus $t \mapsto \ell(t,y)$ is $P$-almost surely non-decreasing.
We next show that, for each $y$, $\ell(\cdot,y)$ can be modified on a set of $P$-measure 0 in such a way that the modified function is continuous with respect to $(t,y)$. Using (I.2.12) in the same way as was suggested in the hint for part ii) of (3.14), one sees that this reduces to checking that:

$E^P\Big[\sup_{0\le t\le T}|\ell(t,y)-\ell(t,x)|^4\Big] \le C_T(y-x)^2$

for some $C_T < \infty$ and all $(T,x,y) \in [0,\infty)\times\mathbb{R}^2$. But, by (2.11), this comes down to estimating $E^P\big[\big(\int_0^T \chi_{[x,y)}(\beta(s))\,d\beta(s)\big)^4\big]$ for $x < y$; and, by (3.18), this, in turn, reduces to estimating $E^P\big[\big(\int_0^T \chi_{[x,y)}(\beta(s))\,ds\big)^2\big]$. But

$E^P\Big[\Big(\int_0^T \chi_{[x,y)}(\beta(s))\,ds\Big)^2\Big] = 2\int_0^T dt\int_0^t ds\int_x^y d\zeta\,g(s,\zeta)\int_x^y g(t-s,\eta-\zeta)\,d\eta,$

where $g$ is the one-dimensional Gauss kernel, and the required estimate is immediate from here.
We now know that there is an $\ell$ which satisfies both i) and (3.21). To prove that it also satisfies (3.20), set $M(t,y) = \beta(t)\vee y - 0\vee y - \ell(t,y)$. Then

$M(\cdot,y) = \int_0^\cdot \chi_{[y,\infty)}(\beta(s))\,d\beta(s)$ (a.s., $P$)

for each $y$. Hence, if $\varphi \in C_0^\infty(\mathbb{R}^1)$ and we use Riemann approximations to compute $\int \varphi(y)M(t,y)\,dy$, then it is clear that $\int \varphi(y)M(\cdot,y)\,dy \in \mathrm{Mart}^{\mathrm{loc}}_c$. In particular, we now see that

$\Phi(\beta(\cdot)) - \Phi(\beta(0)) - \int \varphi(y)\,\ell(\cdot,y)\,dy \in \mathrm{Mart}^{\mathrm{loc}}_c,$

where $\Phi(x) = \int (x\vee y)\varphi(y)\,dy$. On the other hand, by Itô's formula:

$\Phi(\beta(\cdot)) - \Phi(\beta(0)) - \frac{1}{2}\int_0^\cdot \Phi''(\beta(s))\,ds \in \mathrm{Mart}^{\mathrm{loc}}_c.$

Thus, by part ii) of (3.9), $\int \varphi(y)\,\ell(\cdot,y)\,dy = \frac{1}{2}\int_0^\cdot \varphi(\beta(s))\,ds$ (a.s., $P$), since $\Phi'' = \varphi$; and clearly (3.20) follows from this. Q.E.D.

Remark: The function $\ell$ described above is called the local time of $\beta$ at $y$. It was first discussed by P. Lévy and its existence was first proved by H. Trotter. The beautiful and simple development given above is the idea of H. Tanaka, and (3.21) is sometimes referred to as Tanaka's formula.
(3.22) Exercise: The notation is the same as that in Theorem (3.19).

i) Show that:

(3.23) $|\beta(t)| = \int_0^t \mathrm{sgn}(\beta(s))\,d\beta(s) + 2\ell(t,0),\quad t \ge 0,\ \text{(a.s., } P).$

ii) Show that if $A \in \mathcal{M}_{0+}$ ($= \bigcap_{\epsilon>0}\mathcal{M}_\epsilon$), then $\mathcal{W}(A) \in \{0,1\}$. (Hint: Note that, for any $\Phi \in C_b(\Omega)$ and $t > 0$, $E^{\mathcal{W}}[\Phi\circ\Theta_t, A] = E^{\mathcal{W}}\big[E^{\mathcal{W}_{x(t)}}[\Phi], A\big]$. Let $t \downarrow 0$ and conclude that $E^{\mathcal{W}}[\Phi, A] = E^{\mathcal{W}}[\Phi]\,\mathcal{W}(A)$ for all $\Phi \in C_b(\Omega)$ and therefore that $A$ is independent of $A$.)

iii) Set $B(t) = \int_0^t \mathrm{sgn}(\beta(s))\,d\beta(s)$ and note that $(B(t),\mathcal{F}_t,P)$ is a one-dimensional Brownian motion. Using ii), show that $P\big(\inf_{0<s\le t}B(s) < 0$ for all $t > 0\big) = 1$; and conclude from this and (3.23) that $P(\ell(t,0) > 0$ for all $t > 0) = 1$.

iv) Show that, for each $y$, $\ell(\{t \ge 0: \beta(t,\xi) \ne y\},y,\xi) = 0$ for $P$-almost every $\xi$, and conclude that $\ell(dt,y,\xi)$ is singular with respect to $dt$ for $P$-almost every $\xi$. In particular, $t \mapsto \ell(t,0)$ is $P$-almost surely a continuous, non-decreasing function on $[0,\infty)$ such that $\ell(dt,0)$ is singular with respect to $dt$ and $\ell(t,0) > 0$ for all $t > 0$.
We at last pick up the thread which we dropped after introducing the class $\mathrm{Mart}^{\mathrm{loc}}_c$. Recall that we were attempting to find a good description of the class of processes generated by $\mathrm{Mart}^2_c$ under changes of coordinates. We are now ready to give that description. Denote by $\mathrm{B.V.}_c$ the class of right continuous, $P$-almost surely continuous, progressively measurable $Y: [0,\infty)\times E \to \mathbb{R}^1$ which are of local bounded variation. We will say that $(Z(t),\mathcal{F}_t,P)$ is a $P$-almost surely continuous semi-martingale, and will write $Z \in \mathrm{S.Mart}_c$, if $Z$ can be written as the sum of a martingale part $X \in \mathrm{Mart}^{\mathrm{loc}}_c$ and a locally bounded variation part $Y \in \mathrm{B.V.}_c$. Note that, up to a $P$-null set, the martingale part $X$ and the locally bounded variation part $Y$ of a $Z \in \mathrm{S.Mart}_c$ are uniquely determined (cf. part ii) of (3.9)). Moreover, by Itô's formula, if $Z \in (\mathrm{S.Mart}_c)^N$ and $f \in C^2(\mathbb{R}^N)$, then $f\circ Z \in \mathrm{S.Mart}_c$. Thus $\mathrm{S.Mart}_c$ is certainly invariant under changes of coordinates. Given $Z \in \mathrm{S.Mart}_c$ with martingale part $X$ and locally bounded variation part $Y$, we will use $\langle Z\rangle$ to denote $\langle X\rangle$; and if $Z'$ is a second element of $\mathrm{S.Mart}_c$ with associated parts $X'$ and $Y'$, we use $\langle Z,Z'\rangle$ to denote $\langle X,X'\rangle$. Also, if $\alpha: [0,\infty)\times E \to \mathbb{R}^1$ is a progressively measurable function
satisfying

(3.24) $\int_0^T \alpha(t)^2\,\langle Z\rangle(dt) \vee \int_0^T |\alpha(t)|\,|Y|(dt) < \infty,\quad T \ge 0,\ \text{(a.s., } P),$

we define $\int_0^\cdot \alpha(s)\,dZ(s)$
Notice that in this notation. Ito's formula for P-almost surely continuous semi-martingales becomes N
f(Z(t)) - f(Z(0)) _
8 if(Z(s))dZi(s) t i=lJ o
z
(3.25)
+ 1/2
N .
t
8 i8 jf(Z(s))(ds)
i,j=l J 0
z
z
for Z E (S.Martc)N and f E C2(IRN)
(3.26) Exercise: Let $Z \in (\mathrm{S.Mart}_c)^N$ and $f \in C^2(\mathbb{R}^N)$. Show that, for any $Y \in \mathrm{S.Mart}_c$,

$\langle f\circ Z, Y\rangle(dt) = \sum_{i=1}^N \big(\partial_{z_i}f\big)\circ Z\ \langle Z^i,Y\rangle(dt)$ (a.s., $P$).
We conclude this section with a brief discussion of the Stratonovich integral as interpreted by Itô. Namely, given $X,Y \in \mathrm{S.Mart}_c$, define the Stratonovich integral $\int_0^\cdot X(s)\circ dY(s)$ of $X$ with respect to $Y$ (the "$\circ$" in front of the $dY(s)$ is put there to emphasize that this is not an Itô integral) to be the element of $\mathrm{S.Mart}_c$ given by

$\int_0^\cdot X(s)\,dY(s) + \frac{1}{2}\langle X,Y\rangle(\cdot).$
The origin of all its virtues is
72
contained in the form which Ito's formula takes when Namely, from (3.25) and
Stratonovich integrals are used.
(3.26), we see that Ito's formula becomes the fundamental theorem of calculus: N rt e f(Z(t)) - f(Z(O)) = X i=1 J 0
i
f(Z(s))odZ(s)
(3.27)
z
for all Z E (S.Martc)N and f E C3(RN).
The major drawback to
the Stratonovich integral is that it requires that the integrand be a semimartingale (this is the reason why we restricted f to lie in C3 (RN )).
However, in some
circumstances, Ito has shown how even this drawback can be overcome.
(3.28) Exercise: Given $X,Y,Z \in \mathrm{S.Mart}_c$, show that:

(3.29) $\int_0^t X(s)\circ d\Big(\int_0^s Y(u)\circ dZ(u)\Big) = \int_0^t (XY)(s)\circ dZ(s).$
III. THE MARTINGALE PROBLEM FORMULATION OF DIFFUSION THEORY:

1. Formulation and Some Basic Facts:

Recall the notation $\Omega$, $\mathcal{M}$, and $\{\mathcal{M}_t\}$ introduced at the beginning of section I.3. Given bounded measurable functions $a: [0,\infty)\times\mathbb{R}^N \to S^+(\mathbb{R}^N)$ (cf. the second paragraph of (II.1)) and $b: [0,\infty)\times\mathbb{R}^N \to \mathbb{R}^N$, define $t \mapsto L_t$ by

$L_t = \frac{1}{2}\sum_{i,j=1}^N a^{i,j}(t,\cdot)\,\partial_{x_i}\partial_{x_j} + \sum_{i=1}^N b^i(t,\cdot)\,\partial_{x_i}.$

Motivated by the results in Corollary (II.1.13), we now pose the martingale problem for $\{L_t\}$.
Namely, we say that $P \in M_1(\Omega)$ solves the martingale problem for $\{L_t\}$ starting from $(s,x) \in [0,\infty)\times\mathbb{R}^N$, and write $P \in \mathrm{M.P.}((s,x);\{L_t\})$, if:

(M.P.) i) $P(x(0)=x) = 1$; ii) $\big(\varphi(x(t)) - \int_0^t [L_{s+u}\varphi](x(u))\,du,\ \mathcal{M}_t,\ P\big)$ is a martingale

for every $\varphi \in C_0^\infty(\mathbb{R}^N)$.
Given $\varphi \in C^2(\mathbb{R}^N)$, set

(1.1) $X_{s,\varphi}(t) = \varphi(x(t)) - \int_0^t [L_{s+u}\varphi](x(u))\,du.$

If $P \in \mathrm{M.P.}((s,x);\{L_t\})$, then $X_{s,\varphi} \in \mathrm{Mart}_c(\{\mathcal{M}_t\},P)$ for every $\varphi \in C_0^\infty(\mathbb{R}^N)$. To compute $\langle X_{s,\varphi}\rangle$, note that:

$X_{s,\varphi}(t)^2 = \varphi(x(t))^2 - 2\varphi(x(t))\int_0^t [L_{s+u}\varphi](x(u))\,du + \Big[\int_0^t [L_{s+u}\varphi](x(u))\,du\Big]^2$
$= X_{s,\varphi^2}(t) + \int_0^t [L_{s+u}\varphi^2](x(u))\,du - 2X_{s,\varphi}(t)\int_0^t [L_{s+u}\varphi](x(u))\,du - \Big[\int_0^t [L_{s+u}\varphi](x(u))\,du\Big]^2.$

Applying Lemma (II.2.22) (or Itô's formula), we see that:

$\Big(X_{s,\varphi}(t)^2 - \int_0^t \big([L_{s+u}\varphi^2](x(u)) - 2[\varphi L_{s+u}\varphi](x(u))\big)\,du,\ \mathcal{M}_t,\ P\Big)$

is a martingale. Noting that $[L_{s+u}\varphi^2](y) - 2[\varphi L_{s+u}\varphi](y) = (\nabla\varphi, a\nabla\varphi)(s+u,y)$, we conclude that:

(1.2) $\langle X_{s,\varphi}\rangle(t) = \int_0^t (\nabla\varphi, a\nabla\varphi)(s+u,x(u))\,du$ (a.s., $P$).
Remark: As an immediate consequence of (1.2), we see that when $a \equiv 0$, $X_{s,\varphi}(t) = \varphi(x)$, $t \ge 0$, (a.s., $P$) for each $\varphi \in C_0^\infty(\mathbb{R}^N)$. From this it is clear that:

$x(t,\omega) = x + \int_0^t b(s+u,x(u,\omega))\,du,\quad t \ge 0,$

for $P$-almost every $\omega \in \Omega$. In other words, when $a \equiv 0$ and $P \in \mathrm{M.P.}((s,x);\{L_t\})$, $P$-almost every $\omega \in \Omega$ is an integral curve of the time dependent vector field $b(s+\cdot,\cdot)$ starting from $x$. This is, of course, precisely what we would expect on the basis of (II.1.16) and (II.1.17).
Again suppose that $P \in \mathrm{M.P.}((s,x);\{L_t\})$. Given $\varphi \in C^2(\mathbb{R}^N)$, note that the quantity $X_{s,\varphi}$ in (1.1) is an element of $\mathrm{Mart}^{\mathrm{loc}}_c(\{\mathcal{M}_t\},P)$. Indeed, by an easy approximation procedure, it is clear that $X_{s,\varphi} \in \mathrm{Mart}_c(\{\mathcal{M}_t\},P)$ when $\varphi \in C_b^2(\mathbb{R}^N)$. For general $\varphi \in C^2(\mathbb{R}^N)$, set $\sigma_n = \inf\{t \ge 0: |x(t)| \ge n\}$ and choose $\eta_n \in C_0^\infty(\mathbb{R}^N)$ so that $\eta_n(y) = 1$ for $|y| \le n+1$, $n \ge 1$. Then $X_{s,\varphi}^{\sigma_n} = X_{s,\eta_n\varphi}^{\sigma_n}$ and $X_{s,\eta_n\varphi} \in \mathrm{Mart}^2_c(\{\mathcal{M}_t\},P)$. At the same time, it is clear that (1.2) continues to hold for all $\varphi \in C^2(\mathbb{R}^N)$. In particular, if:

(1.3) $\bar x(t) = x(t) - \int_0^t b(s+u,x(u))\,du,\quad t \ge 0,$

then $\bar x \in (\mathrm{Mart}^{\mathrm{loc}}_c(\{\mathcal{M}_t\},P))^N$ and

(1.4) $\langle\langle \bar x,\bar x\rangle\rangle(t) = \int_0^t a(s+u,x(u))\,du,\quad t \ge 0,\ \text{(a.s., } P).$
Using (1.4) and applying Lemma (II.3.10) (cf. the proof of (II.1.6) as well), we now see that:

(1.5) $P\Big(\sup_{s\le t\le T}|\bar x(t)-\bar x(s)| \ge R\Big) \le 2N\exp\big[-R^2/2\Lambda N(T-s)\big],\quad 0 \le s < T,$

where $\Lambda = \sup\|a(t,y)\|_{\mathrm{op}}$. From (1.5) it follows that for each $q \in (0,\infty)$ there is a universal $C(q,N) \in [1,\infty)$ such that:

(1.6) $\Big\|\sup_{s\le t\le T}|\bar x(t)-\bar x(s)|\Big\|_{L^q(P)} \le C(q,N)\big(\Lambda(T-s)\big)^{1/2},\quad 0 \le s < T.$

In particular, this certainly means that $\bar x \in (\mathrm{Mart}_c(\{\mathcal{M}_t\},P))^N$. In addition, by Itô's formula, for any $f \in C^{1,2}([0,\infty)\times\mathbb{R}^N)$, $P$-almost surely it is true that:

(1.7) $f(t,x(t)) - f(0,x) - \int_0^t \big[(\partial_u + L_{s+u})f\big](u,x(u))\,du = \sum_{i=1}^N \int_0^t \partial_{x_i}f(u,x(u))\,d\bar x^i(u);$

and, by part i) of exercise (II.3.13), if $|\nabla_y f(t,y)| \le A\exp(B|y|^\gamma)$, $(t,y) \in [0,\infty)\times\mathbb{R}^N$, for some $A,B \in [0,\infty)$ and $\gamma \in [0,2)$, then the right hand side of (1.7) is an element of $\mathrm{Mart}_c(\{\mathcal{M}_t\},P)$.
(1.8) Exercise: Show that $P \in \mathrm{M.P.}((s,x);\{L_t\})$ if and only if $P(x(0)=x) = 1$, $\bar x \in (\mathrm{Mart}^{\mathrm{loc}}_c(\{\mathcal{M}_t\},P))^N$, and (1.4) holds.
(1.9) Exercise: The considerations discussed thus far in this section can all be generalized as follows. Let $a: [0,\infty)\times\Omega \to S^+(\mathbb{R}^N)$ and $b: [0,\infty)\times\Omega \to \mathbb{R}^N$ be bounded $\{\mathcal{M}_t\}$-progressively measurable functions and define $t \mapsto L_t$ by analogy with the preceding. Define $\mathrm{M.P.}(x;\{L_t\})$ to be the set of $P \in M_1(\Omega)$ such that $P(x(0)=x) = 1$ and $\big(\varphi(x(t)) - \int_0^t [L_u\varphi](x(u))\,du,\ \mathcal{M}_t,\ P\big)$ is a martingale for all $\varphi \in C_0^\infty(\mathbb{R}^N)$. Define $\bar x(t) = x(t) - \int_0^t b(u)\,du$ and show that $P \in \mathrm{M.P.}(x;\{L_t\})$ if and only if $P(x(0)=x) = 1$, $\bar x \in (\mathrm{Mart}^{\mathrm{loc}}_c(\{\mathcal{M}_t\},P))^N$ and $\langle\langle \bar x,\bar x\rangle\rangle(\cdot) = \int_0^\cdot a(u)\,du$ (a.s., $P$). Also, show that if $P \in \mathrm{M.P.}(x;\{L_t\})$, then (1.5), (1.6), and (1.7) continue to hold and that the right hand side of (1.7) is an element of $\mathrm{Mart}_c(\{\mathcal{M}_t\},P)$ under the same conditions as those given following (1.7).
(1.10) Remark: When $a$ and $b$ do not depend on $t \in [0,\infty)$, denote $L_0$ by $L$ and note that $\mathrm{M.P.}((s,x);\{L_t\})$ is independent of $s \in [0,\infty)$. Thus we will use $\mathrm{M.P.}(x;L)$ in place of $\mathrm{M.P.}((s,x);\{L_t\})$ for time-independent coefficients $a$ and $b$.
(1.11) Exercise: Time-dependent martingale problems can be thought of as time-independent ones via the following trick. Set $\hat\Omega = C([0,\infty);\mathbb{R}^{N+1})$ and identify $C([0,\infty);\mathbb{R}^{N+1})$ with $C([0,\infty);\mathbb{R}^1)\times\Omega$. Show that $P \in \mathrm{M.P.}((s,x);\{L_t\})$ if and only if $\hat P \in \mathrm{M.P.}(\hat x;\hat L)$, where $\hat x = (s,x)$, $\hat L = \partial_t + L_t$, $\hat P = \delta_{s+\cdot}\times P$, and $\delta_{s+\cdot} \in M_1(C([0,\infty);\mathbb{R}^1))$ denotes the delta mass at the path $t \mapsto s+t$.

The following result is somewhat technical and can be avoided in most practical situations. Nonetheless, it is of some theoretical interest and it smooths the presentation of the general theory.

(1.12) Theorem: Assume that for each $(s,x) \in [0,\infty)\times\mathbb{R}^N$
the set $\mathrm{M.P.}((s,x);\{L_t\})$ contains precisely one element $P_{s,x}$. Then $(s,x) \mapsto P_{s,x} \in M_1(\Omega)$ is a Borel measurable map.

Proof: In view of (1.11), we lose no generality by assuming that $a$ and $b$ are independent of time. Thus we do so, and we will show that $x \mapsto P_x$ is measurable under the assumption that $P_x$ is the unique element of $\mathrm{M.P.}(x;L)$ for each $x \in \mathbb{R}^N$. Define $\Gamma$ to be the subset of $M_1(\Omega)$ consisting of those $P$ with the property that $\big(\varphi(x(t)) - \int_0^t [L\varphi](x(u))\,du,\ \mathcal{M}_t,\ P\big)$ is a martingale for all $\varphi \in C_0^\infty(\mathbb{R}^N)$. Clearly, there is a sequence $\{X_n\}$ of bounded measurable functions on $\Omega$ such that $P \in \Gamma$ if and only if $E^P[X_n] = 0$ for all $n \ge 1$. Thus $\Gamma$ is a Borel measurable subset of $M_1(\Omega)$. In the same way, one can show that the set $\Delta$ consisting of those $P \in M_1(\Omega)$ such that $P\circ(x(0))^{-1} = \delta_x$ for some $x \in \mathbb{R}^N$ is a Borel subset of $M_1(\Omega)$. Thus, $\Gamma_0 \equiv \bigcup\{\mathrm{M.P.}(x;L): x \in \mathbb{R}^N\} = \Gamma\cap\Delta$ is a Borel subset of $M_1(\Omega)$. At the same time, $P \in \Gamma_0 \mapsto P\circ(x(0))^{-1} \in M_1(\mathbb{R}^N)$ is clearly a measurable mapping, and, by assumption, it is one-to-one. In particular, since $M_1(\Omega)$ and $M_1(\mathbb{R}^N)$ are Polish spaces (see (I.2.5)), the general theory of Borel mappings on Polish spaces says that the inverse map $\delta_x \mapsto P_x$ is also a measurable mapping (cf. Theorem 3.9 in K. R. Parthasarathy's Probability Measures on Metric Spaces, Academic Press, 1967). But $x \mapsto \delta_x$ is certainly measurable, and therefore we have now shown that $x \mapsto P_x$ is measurable. Q.E.D.
(1.13) Exercise: Suppose that $\tau$ is an $\{\mathcal{M}_t\}$-stopping time. Show that $\mathcal{M}_\tau = \sigma(x(t\wedge\tau): t \ge 0)$ and conclude that $\mathcal{M}_\tau$ is countably generated. (See Lemma 1.3.3 in [S.&V.] for help.)

Warning: Unless it is otherwise stated, stopping times will be $\{\mathcal{M}_t\}$-stopping times.
Our next result plays an important role in developing criteria for determining when $\mathrm{M.P.}((s,x);\{L_t\})$ contains at most one element (cf. Corollary (1.15) below) as well as allowing us to derive the strong Markov property as an essentially immediate consequence of such a uniqueness statement.

(1.14) Theorem: Let $P \in \mathrm{M.P.}((s,x);\{L_t\})$ and a stopping time $\tau$ be given. Suppose that $\mathcal{G}$ is a sub $\sigma$-algebra of $\mathcal{M}_\tau$ with the property that $\omega \mapsto x(\tau(\omega),\omega)$ is $\mathcal{G}$-measurable, and let $\omega \mapsto P_\omega$ be a r.c.p.d. of $P|\mathcal{G}$. Then there is a $P$-null set $\Lambda \in \mathcal{G}$ such that $P_\omega\circ\Theta_{\tau(\omega)}^{-1} \in \mathrm{M.P.}((s+\tau(\omega),x(\tau(\omega),\omega));\{L_t\})$ for each $\omega \notin \Lambda$.

Proof: Let $\omega \mapsto P_\omega^\tau$ be a r.c.p.d. of $P|\mathcal{M}_\tau$. Then, by Theorem (II.2.20), for each $\varphi \in C_0^\infty(\mathbb{R}^N)$:

$\Big(\varphi(x(t)) - \int_0^t [L_{s+\tau(\omega)+u}\varphi](x(u))\,du,\ \mathcal{M}_t,\ P_\omega^\tau\circ\Theta_{\tau(\omega)}^{-1}\Big)$

is a martingale for all $\omega$ outside of a $P$-null set $\Lambda(\varphi) \in \mathcal{M}_\tau$; and from this it is clear that there is one $P$-null set $\Lambda \in \mathcal{M}_\tau$ such that $P_\omega^\tau\circ\Theta_{\tau(\omega)}^{-1} \in \mathrm{M.P.}((s+\tau(\omega),x(\tau(\omega),\omega));\{L_t\})$ for all $\omega \notin \Lambda$. To complete the proof, first note that $P_\omega(\Lambda) = 0$ for all $\omega$ outside of a $P$-null set $\Lambda' \in \mathcal{G}$. Second, note that $P_\omega = \int P_{\omega'}^\tau\,P_\omega(d\omega')$ for all $\omega$ outside of a $P$-null set $\Lambda'' \in \mathcal{G}$. Finally, since $P_\omega\circ\Theta_{\tau(\omega)}^{-1}(x(0)=x(\tau(\omega),\omega)) = 1$ for all $\omega$ outside of a $P$-null set $\Lambda''' \in \mathcal{G}$, we see that $P_\omega\circ\Theta_{\tau(\omega)}^{-1} \in \mathrm{M.P.}((s+\tau(\omega),x(\tau(\omega),\omega));\{L_t\})$ for all $\omega \notin \Lambda'\cup\Lambda''\cup\Lambda'''$. Q.E.D.
(1.15) Corollary: Suppose that for all $(s,x) \in [0,\infty)\times\mathbb{R}^N$, $t \ge 0$, and $P,Q \in \mathrm{M.P.}((s,x);\{L_t\})$, $P\circ x(t)^{-1} = Q\circ x(t)^{-1}$. Then, for each $(s,x) \in [0,\infty)\times\mathbb{R}^N$, $\mathrm{M.P.}((s,x);\{L_t\})$ contains at most one element.

Proof: Let $P,Q \in \mathrm{M.P.}((s,x);\{L_t\})$ be given. We will prove by induction that for all $n \ge 1$ and $0 \le t_1 < \cdots < t_n$, $P\circ(x(t_1),\dots,x(t_n))^{-1} = Q\circ(x(t_1),\dots,x(t_n))^{-1}$. When $n = 1$, there is nothing to prove. Next, assume that this equality holds for $n$ and let $0 \le t_1 < \cdots < t_{n+1}$ be given. Set $\mathcal{G} = \sigma(x(t_1),\dots,x(t_n))$ and let $\omega \mapsto P_\omega$ and $\omega \mapsto Q_\omega$ be r.c.p.d.'s of $P|\mathcal{G}$ and $Q|\mathcal{G}$, respectively. Since $P$ and $Q$ agree on $\mathcal{G}$, there is a $\Lambda \in \mathcal{G}$ such that $P(\Lambda) = Q(\Lambda) = 0$ and both $P_\omega\circ\Theta_{t_n}^{-1}$ and $Q_\omega\circ\Theta_{t_n}^{-1}$ are elements of $\mathrm{M.P.}((s+t_n,x(t_n,\omega));\{L_t\})$ for all $\omega \notin \Lambda$. In particular, $P_\omega\circ x(t_{n+1})^{-1} = Q_\omega\circ x(t_{n+1})^{-1}$ for all $\omega \notin \Lambda$; and so the inductive step is complete. Q.E.D.
(1.16) Remark: A subset $\mathfrak{F}$ of bounded measurable functions on a measurable space $(E,\mathcal{B})$ is called a determining set if, for all $\mu,\nu \in M_1(E)$, $\int\varphi\,d\mu = \int\varphi\,d\nu$ for all $\varphi \in \mathfrak{F}$ implies that $\mu = \nu$. Now suppose that there is a determining set $\mathfrak{F} \subseteq C_b(\mathbb{R}^N)$ such that for each $T > 0$ and $\varphi \in \mathfrak{F}$ there is a $u = u_{T,\varphi} \in C_b^{1,2}([0,T)\times\mathbb{R}^N)$ satisfying $(\partial_t + L_t)u = 0$ in $[0,T)\times\mathbb{R}^N$ and $\lim_{t\uparrow T}u(t,\cdot) = \varphi$. Given $P \in \mathrm{M.P.}((s,x);\{L_t\})$ and $T > 0$, we see that, for all $\varphi \in \mathfrak{F}$: $E^P[\varphi(x(T))] = u_{s+T,\varphi}(s,x)$. In particular, $P\circ x(T)^{-1}$ is uniquely determined for all $T > 0$ by the condition that $P \in \mathrm{M.P.}((s,x);\{L_t\})$. Hence, for each $(s,x) \in [0,\infty)\times\mathbb{R}^N$, $\mathrm{M.P.}((s,x);\{L_t\})$ contains at most one element.

Similarly, suppose that for each $\varphi \in \mathfrak{F}$ there is a $u = u_{T,\varphi} \in C_b^{1,2}([0,T)\times\mathbb{R}^N)$ satisfying $(\partial_t + L_t)u = -\varphi$ in $[0,T)\times\mathbb{R}^N$ and $\lim_{t\uparrow T}u(t,\cdot) = 0$. Then $P \in \mathrm{M.P.}((s,x);\{L_t\})$ implies that:

$E^P\Big[\int_0^T \varphi(x(t))\,dt\Big] = u_{s+T,\varphi}(s,x)$

for all $T > 0$ and $\varphi \in \mathfrak{F}$. From this it is clear that if $P,Q \in \mathrm{M.P.}((s,x);\{L_t\})$, then $P\circ x(t)^{-1} = Q\circ x(t)^{-1}$ for all $t > 0$, and so $\mathrm{M.P.}((s,x);\{L_t\})$ contains at most one element. It should be clear that this last remark is simply a re-statement of the result in Corollary (II.1.13).
We now introduce a construction which will serve us well when it comes to proving the existence of solutions to martingale problems as well as reducing the question of uniqueness to local considerations.
(1.17) Lemma: Let $T > 0$ and $\psi \in C([0,T];\mathbb{R}^N)$ be given. Suppose that $Q \in M_1(\Omega)$ satisfies $Q(x(0)=\psi(T)) = 1$. Then there is a unique $R = \delta_\psi\otimes_T Q \in M_1(\Omega)$ such that $R(A\cap\Theta_T^{-1}B) = \chi_A(\psi\restriction[0,T])\,Q(B)$ for all $A \in \mathcal{M}_T$ and $B \in \mathcal{M}$.

Proof: The uniqueness assertion is clear. To prove the existence, set $\hat R = \delta_\psi\times Q$ on $\Omega\times\Omega$ and define $\Phi: \Omega\times\Omega \to \Omega$ so that

$x(t,\Phi(\omega,\omega')) = \begin{cases} x(t,\omega) & \text{if } t \in [0,T] \\ x(t-T,\omega') - x(0,\omega') + x(T,\omega) & \text{if } t > T. \end{cases}$

Then $R = \hat R\circ\Phi^{-1}$ has the required property. Q.E.D.
(1.18) Theorem: Let $\tau$ be a stopping time and suppose that $\omega \in \Omega \mapsto Q_\omega \in M_1(\Omega)$ is an $\mathcal{M}_\tau$-measurable map satisfying $Q_\omega(x(0)=x(\tau(\omega),\omega)) = 1$ for each $\omega \in \Omega$. Given $P \in M_1(\Omega)$, there is a unique $R = P\otimes_\tau Q_\cdot \in M_1(\Omega)$ such that $R\restriction\mathcal{M}_\tau = P\restriction\mathcal{M}_\tau$ and $\omega \mapsto \delta_\omega\otimes_{\tau(\omega)}Q_\omega$ is a r.c.p.d. of $R|\mathcal{M}_\tau$. In addition, suppose that $(\omega,t,\omega') \in \Omega\times[0,\infty)\times\Omega \mapsto Y_\omega(t,\omega') \in \mathbb{R}^1$ is a map such that, for each $T > 0$, $(\omega,t,\omega') \in \Omega\times[0,T]\times\Omega \mapsto Y_\omega(t,\omega')$ is $\mathcal{M}_\tau\times\mathcal{B}_{[0,T]}\times\mathcal{M}_T$-measurable, and, for each $\omega,\omega' \in \Omega$, $t \mapsto Y_\omega(t,\omega')$ is a right continuous function with $Y_\omega(0,\omega') = 0$. Given a right continuous progressively measurable function $X: [0,\infty)\times\Omega \to \mathbb{R}^1$, define $Z = X\otimes_\tau Y_\cdot$ by:

$Z(t,\omega) = \begin{cases} X(t,\omega) & \text{if } t \in [0,\tau(\omega)) \\ X(\tau(\omega),\omega) + Y_\omega(t-\tau(\omega),\Theta_{\tau(\omega)}\omega) & \text{if } t \ge \tau(\omega). \end{cases}$

Then $Z$ is a right continuous progressively measurable function. Moreover, if $Z(t) \in L^1(R)$ for all $t \ge 0$, then $(Z(t),\mathcal{M}_t,R)$ is a martingale if and only if $(X(t\wedge\tau),\mathcal{M}_t,P)$ is a martingale and $(Y_\omega(t),\mathcal{M}_t,Q_\omega)$ is a martingale for $P$-almost every $\omega \in \Omega$.

Proof: The uniqueness of $R$ is obvious; to prove existence, set $R = \int \delta_\omega\otimes_{\tau(\omega)}Q_\omega\,P(d\omega)$ and note that $R(A\cap B) = \int_A \delta_\omega\otimes_{\tau(\omega)}Q_\omega(B)\,P(d\omega)$ for all $A \in \mathcal{M}_\tau$ and $B \in \mathcal{M}$. To see that $Z$ is progressively measurable, define $\tilde Z(t,\omega,\omega') = X(t\wedge\tau(\omega),\omega) + Y_\omega((t-\tau(\omega))\vee 0,\Theta_{\tau(\omega)}\omega')$ and check that $\tilde Z$ is $\{\mathcal{M}_t\times\mathcal{M}_t\}$-progressively measurable and that $Z(t,\omega) = \tilde Z(t,\omega,\omega)$. To complete the proof, simply apply Theorem (II.2.20). Q.E.D.
We now turn to the problem of constructing solutions. Let a: [0,∞)×R^N → S⁺(R^N) and b: [0,∞)×R^N → R^N be bounded continuous functions and define {L_t} accordingly. For (s,x) ∈ [0,∞)×R^N, define Ψ^{a,b}_{s,x}: Ω → Ω so that:
   x(t,Ψ^{a,b}_{s,x}(ω)) = x + a(s,x)^{1/2}x(t,ω) + b(s,x)t, t ≥ 0,
and define W^{a,b}_{s,x} = W∘(Ψ^{a,b}_{s,x})^{-1}. It is then easy to check that x̄ ∈ (Mart_c({M_t},W^{a,b}_{s,x}))^N, where x̄(t) ≡ x(t) - b(s,x)t, and that ⟨⟨x̄,x̄⟩⟩(t) = a(s,x)t, t ≥ 0. Next, for n ≥ 1, define P_{n,k} for k ≥ 0 so that P_{n,0} = W^{a,b}_{0,x} and
   P_{n,k} = P_{n,k-1} ⊗_{(k-1)/n} W^{a,b}_{(k-1)/n, x((k-1)/n)} for k ≥ 1.
Set P_n = P_{n,n²} and define:
   L^n_t = (1/2)Σ_{i,j=1}^N a^{ij}(([nt]∧n²)/n, x(([nt]∧n²)/n))∂_{x_i}∂_{x_j} + Σ_{i=1}^N b^i(([nt]∧n²)/n, x(([nt]∧n²)/n))∂_{x_i}.
Then P_n ∈ M.P.((0,x);{L^n_t}) (cf. Exercise (1.9) above).
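The measures P_n just constructed are the laws of an Euler scheme: on each interval [(k-1)/n, k/n] the coefficients are frozen at the left endpoint, so each increment is conditionally Gaussian with mean b·Δt and covariance a·Δt. A minimal sampling sketch (Python/numpy; the signatures for a and b, the regularizing term inside the Cholesky factorization, and all parameter values are illustrative assumptions, not part of the text):

```python
import numpy as np

def euler_path(a, b, x0, n, horizon, rng):
    """Sample one path of the frozen-coefficient (Euler) approximation:
    on [k/n, (k+1)/n] the increment is Gaussian with mean b(t,x)*dt and
    covariance a(t,x)*dt, both evaluated at the left endpoint."""
    dt = 1.0 / n
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for k in range(int(horizon * n)):
        t = k * dt
        # small multiple of I keeps the factorization well defined
        sigma = np.linalg.cholesky(a(t, x) + 1e-12 * np.eye(len(x)))
        x = x + b(t, x) * dt + sigma @ rng.standard_normal(len(x)) * np.sqrt(dt)
        path.append(x.copy())
    return np.array(path)

rng = np.random.default_rng(0)
a = lambda t, x: np.eye(1)          # unit diffusion coefficient
b = lambda t, x: np.zeros(1)        # zero drift
p = euler_path(a, b, [0.0], n=100, horizon=1.0, rng=rng)
```

Tightness of {P_n} is what the fourth-moment estimate provides; the sketch only illustrates how a single P_n-path is produced.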
In particular, for each T > 0 there is a C(T) < ∞ such that:
   sup_{n≥1} sup_x E^{P_n}[|x(t) - x(s)|⁴] ≤ C(T)(t - s)², 0 ≤ s ≤ t ≤ T.
Hence, {P_n: n ≥ 1} is relatively compact in M_1(Ω) for each x ∈ R^N. Because, for each φ ∈ C_0^∞(R^N), [L^n_tφ](x(t,ω)) → [L_tφ](x(t,ω)) uniformly for (t,ω) in compact subsets of [0,∞)×Ω, our construction will be complete once we have available the result contained in the following exercise.
(1.19) Exercise: Let E be a Polish space and let Φ ⊆ C_b(E) be a uniformly bounded family of functions which are equi-continuous on each compact subset of E. Show that if µ_n → µ in M_1(E), then ∫φ dµ_n → ∫φ dµ uniformly for φ ∈ Φ. In particular, if {φ_n} ⊆ C_b(E) is uniformly bounded and φ_n → φ uniformly on compacts, then ∫φ_n dµ_n → ∫φ dµ whenever µ_n → µ in M_1(E).
Referring to the paragraph preceding Exercise (1.19), we now see that if {P_{n'}} is any convergent subsequence of {P_n} and if P denotes its limit, then P(x(0)=x) = 1 and, for all 0 ≤ t_1 < t_2, all M_{t_1}-measurable Φ ∈ C_b(Ω), and all φ ∈ C_0^∞(R^N):
   E^P[(φ(x(t_2)) - φ(x(t_1)))Φ] = E^P[(∫_{t_1}^{t_2} [L_tφ](x(t))dt)Φ].
From this it is clear that P ∈ M.P.((0,x);{L_t}). By replacing a and b with a(s+·,·) and b(s+·,·), we also see that there is at least one P ∈ M.P.((s,x);{L_t}) for each (s,x) ∈ [0,∞)×R^N. In other words, we have now proved the following existence theorem.
(1.20) Theorem: Let a: [0,∞)×R^N → S⁺(R^N) and b: [0,∞)×R^N → R^N be bounded continuous functions and define {L_t} accordingly. Then, for each (s,x) ∈ [0,∞)×R^N there is at least one element of M.P.((s,x);{L_t}).
(1.21) Exercise: Suppose that a and b are bounded, measurable and have the property that x ∈ R^N ↦ ∫_0^T a(t,x)dt and x ∈ R^N ↦ ∫_0^T b(t,x)dt are continuous for each T > 0. Show that the corresponding martingale problem still has a solution starting at each (s,x) ∈ [0,∞)×R^N.
We now have the basic existence result and uniqueness criteria for martingale problems coming from diffusion operators (i.e. operators of the sort in (II.1.13)). However, before moving on, it may be useful to record a summary of the rewards which follow from proving that a martingale problem is well-posed in the sense that precisely one solution exists for each starting point (s,x).
(1.22) Theorem: Let a and b be bounded measurable functions, suppose that the martingale problem for the corresponding {L_t} is well-posed, and let {P_{s,x}: (s,x) ∈ [0,∞)×R^N} be the associated family of solutions. Then (s,x) ↦ P_{s,x} is measurable; and, for all stopping times τ, P_{s,x} = P_{s,x} ⊗_τ P_{s+τ,x(τ)}. In particular, ω ↦ δ_ω ⊗_{τ(ω)} P_{s+τ(ω),x(τ(ω),ω)} is an r.c.p.d. of P_{s,x}|M_τ for each stopping time τ. Finally, if x ∈ R^N ↦ ∫_0^T a(t,x)dt and x ∈ R^N ↦ ∫_0^T b(t,x)dt are continuous for each T > 0, then (s,x) ∈ [0,∞)×R^N ↦ P_{s,x} ∈ M_1(Ω) is continuous.
Proof: The measurability of (s,x) ↦ P_{s,x} is proved in Theorem (1.12). Next, let τ be a stopping time. Then, by Theorem (1.18), it is easy to check that, for all φ ∈ C_0^∞(R^N), (X_{s,φ}(t),M_t,P_{s,x} ⊗_τ P_{s+τ,x(τ)}) is a martingale (cf. (1.1) for the notation X_{s,φ}). Hence P_{s,x} ⊗_τ P_{s+τ,x(τ)} ∈ M.P.((s,x);{L_t}); and so, by uniqueness, P_{s,x} ⊗_τ P_{s+τ,x(τ)} = P_{s,x}.
Finally, suppose that x ∈ R^N ↦ ∫_0^T a(t,x)dt and x ∈ R^N ↦ ∫_0^T b(t,x)dt are continuous for each T > 0. Then, for each φ ∈ C_0^∞(R^N), (s,t,ω) ∈ [0,∞)×[0,∞)×Ω ↦ X_{s,φ}(t,ω) is continuous. Now let (s_n,x_n) → (s,x) and assume that P_{s_n,x_n} → P. Then, by Exercise (1.19):
   ∫ X_{s_n,φ}(t)Φ dP_{s_n,x_n} → ∫ X_{s,φ}(t)Φ dP
for all t ≥ 0, φ ∈ C_0^∞(R^N), and Φ ∈ C_b(Ω). Hence P ∈ M.P.((s,x);{L_t}), and so, by uniqueness, P = lim_n P_{s_n,x_n} = P_{s,x}. At the same time, by (1.6) and Kolmogorov's criterion, {P_{s,x}: s ≥ 0 and |x| ≤ R} is relatively compact in M_1(Ω) for each R ≥ 0; and combined with the preceding, this leads immediately to the conclusion that (s,x) ↦ P_{s,x} is continuous.
Q.E.D.
(1.23) Exercise: For each n ≥ 1, let a_n and b_n be given bounded measurable coefficients and let P_n be a solution to the corresponding martingale problem starting from some point (s_n,x_n). Assume that (s_n,x_n) → (s,x) and that a_n → a and b_n → b uniformly on compacts, where a and b are bounded measurable coefficients such that x ↦ ∫_0^T a(t,x)dt and x ↦ ∫_0^T b(t,x)dt are continuous for each T > 0. If the martingale problem corresponding to a and b starting from (s,x) has precisely one solution P_{s,x}, show that P_n → P_{s,x}.
2. The Martingale Problem and Stochastic Integral Equations:
Let a: [0,∞)×R^N → S⁺(R^N) and b: [0,∞)×R^N → R^N be bounded measurable functions and define t ↦ L_t accordingly. When a ≡ 0, we saw (cf. the remark following (1.2)) that P ∈ M.P.((s,x);{L_t}) if and only if
   x(T) = x + ∫_0^T b(s+t,x(t))dt, T ≥ 0, (a.s.,P).
We now want to see what can be said when a does not vanish identically. In order to understand what we have in mind, assume that N = 1 and that a never vanishes. Given P ∈ M.P.((s,x);{L_t}), define
   β(T) = ∫_0^T a^{-1/2}(s+t,x(t))dx̄(t), T ≥ 0,
where x̄(T) = x(T) - ∫_0^T b(s+t,x(t))dt, T ≥ 0. Then ⟨β,β⟩(dt) = (a^{-1/2}(s+t,x(t)))²a(s+t,x(t))dt = dt, and so (β(t),M_t,P) is a 1-dimensional Brownian motion. In addition:
   x̄(T) - x = ∫_0^T dx̄(t) = ∫_0^T a^{1/2}(s+t,x(t))dβ(t), T ≥ 0, (a.s.,P);
and so x(·) satisfies the stochastic integral equation:
   x(T) = x + ∫_0^T a^{1/2}(s+t,x(t))dβ(t) + ∫_0^T b(s+t,x(t))dt, T ≥ 0, (a.s.,P).
Our first goal in this section is to generalize the preceding representation theorem. However, before doing so, we must make a brief digression into the theory of stochastic integration with respect to vector-valued martingales.
Referring to the notation introduced in section 2 of Chapter II, let d ∈ Z⁺ and X ∈ (Mart_c({F_t},P))^d be given. Define L²_loc({F_t},⟨⟨X,X⟩⟩,P) to be the space of {F_t}-progressively measurable θ: [0,∞)×E → R^d such that:
   E^P[∫_0^T (θ(t),⟨⟨X,X⟩⟩(dt)θ(t))_{R^d}] < ∞, T ≥ 0.
Note that (L²_loc({F_t},Trace⟨⟨X,X⟩⟩,P))^d can be identified as a dense subspace of L²_loc({F_t},⟨⟨X,X⟩⟩,P) (to see the density, simply take θ_n(t) ≡ χ_{[0,n)}(|θ(t)|)θ(t) to approximate θ in L²_loc({F_t},⟨⟨X,X⟩⟩,P)). Next, for θ ∈ (L²_loc({F_t},Trace⟨⟨X,X⟩⟩,P))^d, define:
(2.1)   ∫_0^T θ(t)dX(t) = Σ_{i=1}^d ∫_0^T θ_i(t)dX_i(t), T ≥ 0;
and observe that:
   E^P[sup_{0≤t≤T}|∫_0^t θdX|²] ≤ 4E^P[|∫_0^T θdX|²] = 4E^P[∫_0^T (θ(t),⟨⟨X,X⟩⟩(dt)θ(t))_{R^d}].
Hence there is a unique continuous mapping θ ∈ L²_loc({F_t},⟨⟨X,X⟩⟩,P) ↦ ∫_0 θdX ∈ Mart_c({F_t},P) such that ∫_0 θdX is given by (2.1) whenever θ ∈ (L²_loc({F_t},Trace⟨⟨X,X⟩⟩,P))^d.
(2.2) Exercise: Given θ ∈ L²_loc({F_t},⟨⟨X,X⟩⟩,P), show that ∫_0 θdX is the unique Y ∈ Mart_c({F_t},P) such that:
   ⟨Y, ∫_0 ηdX⟩(dt) = (θ(t),⟨⟨X,X⟩⟩(dt)η(t))_{R^d}, (a.s.,P), for all η ∈ L²_loc({F_t},⟨⟨X,X⟩⟩,P).
In particular, conclude that if θ ∈ L²_loc({F_t},⟨⟨X,X⟩⟩,P) and τ is an {F_t}-stopping time, then:
   ∫_0^{T∧τ} θdX = ∫_0^T χ_{[0,τ)}(t)θ(t)dX(t), T ≥ 0, (a.s.,P).
Next, suppose that σ: [0,∞)×E → Hom(R^d;R^N) is an {F_t}-progressively measurable map which satisfies:
(2.3)   E^P[∫_0^T Trace(σ(t)⟨⟨X,X⟩⟩(dt)σ(t)^t)] < ∞, T ≥ 0.
We then define ∫_0 σdX ∈ (Mart_c({F_t},P))^N so that:
   (θ, ∫_0 σdX)_{R^N} = ∫_0 (σ^tθ)dX, (a.s.,P), for each θ ∈ R^N.
(2.4) Exercise: Let X ∈ (Mart_c({F_t},P))^d and an {F_t}-progressively measurable σ: [0,∞)×E → Hom(R^d;R^N) satisfying (2.3) be given.
i) If Y ∈ (Mart_c({F_t},P))^e and τ: [0,∞)×E → Hom(R^e;R^N) is an {F_t}-progressively measurable function which satisfies:
   E^P[∫_0^T Trace(τ(t)⟨⟨Y,Y⟩⟩(dt)τ(t)^t)] < ∞, T ≥ 0,
show that
   ∫_0 [σ,τ]d(X,Y) = ∫_0 σdX + ∫_0 τdY, (a.s.,P),
and that
   ⟨⟨∫_0 σdX, ∫_0 τdY⟩⟩(dt) = σ(t)⟨⟨X,Y⟩⟩(dt)τ(t)^t, (a.s.,P),
where ⟨⟨X,Y⟩⟩ ≡ ((⟨X_i,Y_j⟩))_{1≤i≤d,1≤j≤e}.
ii) Show that ∫_0 σdX is the unique Y ∈ (Mart_c({F_t},P))^N such that Y(0) = 0 and
   ⟨⟨(Y,X),(Y,X)⟩⟩(dt) = [σ(t); I]⟨⟨X,X⟩⟩(dt)[σ(t); I]^t, (a.s.,P),
where [σ(t); I] denotes the element of Hom(R^d;R^{N+d}) obtained by stacking σ(t) above the identity.
iii) Next, show that if τ: [0,∞)×E → Hom(R^N;R^M) is an {F_t}-progressively measurable function such that (2.3) is satisfied both by σ and by τσ, then:
   ∫_0 τd(∫_0 σdX) = ∫_0 τσdX, (a.s.,P).
The following lemma addresses a rather pedantic measurability question.
(2.5) Lemma: Let a ∈ S⁺(R^N), denote by π the orthogonal projection of R^N onto Range(a), and let ā be the element of S⁺(R^N) satisfying āa = aā = π. Then π = lim_{ε↓0}(a + εI)^{-1}a and ā = lim_{ε↓0}(a + εI)^{-1}π. Next, suppose that σ ∈ Hom(R^d;R^N) and that a = σσ^t. Let π_σ denote the orthogonal projection of R^d onto Range(σ^t). Then Range(σ) = Range(a) and σ^tāσ = π_σ. In particular, a ↦ π and a ↦ ā are measurable functions of a, and σ ↦ π_σ is a measurable function of σ.
Proof: Set a_ε ≡ (a + εI)^{-1} and π_ε ≡ a_εa. Then 0 ≤ π_ε ≤ I. Moreover, if η ∈ Range(a)^⊥, then η ∈ Null(a) and so π_εη = 0; whereas, if η ∈ Range(a), then there is a ξ such that η = aξ, and so π_εη = η - επ_εξ → η as ε↓0. Hence, π_ε → π as ε↓0. Also, if η ∈ Range(a)^⊥, then a_επη = 0; and if η ∈ Range(a), then a_επη = a_εη → āη as ε↓0. Hence, a_επ → ā as ε↓0. Since, for each ε > 0, a ↦ a_ε is a smooth map, we now see that a ↦ π and a ↦ ā are measurable maps.
Now suppose that a = σσ^t. Clearly Range(a) ⊆ Range(σ). On the other hand, if η ∈ Range(σ), then there exists a ξ ∈ Null(σ)^⊥ = Range(σ^t) such that η = σξ. Hence, choosing η' so that ξ = σ^tη', we then have η = σσ^tη' = aη'; from which we conclude that Range(σ) = Range(a). Finally, to see that σ^tāσ = π_σ, note that if η ∈ Range(σ^t)^⊥, then η ∈ Null(σ) and so σ^tāση = 0. On the other hand, if η ∈ Range(σ^t), then there is a ξ ∈ Null(σ^t)^⊥ = Range(σ) = Range(a) such that η = σ^tξ, and so σ^tāση = σ^tāaξ = σ^tπξ = σ^tξ = η. Hence, σ^tāσ = π_σ.
Q.E.D.
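Both limits in Lemma (2.5) are easy to check numerically. The sketch below (numpy; the particular rank-deficient σ and the small ε standing in for the limit ε↓0 are illustrative) computes the projection π and the generalized inverse ā for a singular a = σσᵗ:

```python
import numpy as np

sigma = np.array([[1.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 0.0]])        # element of Hom(R^2; R^3) of rank 1
a = sigma @ sigma.T                   # singular element of S+(R^3)

eps = 1e-6
pi = np.linalg.solve(a + eps * np.eye(3), a)       # ~ (a + eps I)^{-1} a
a_bar = np.linalg.solve(a + eps * np.eye(3), pi)   # ~ (a + eps I)^{-1} pi

# pi approximates the orthogonal projection onto Range(a), and a_bar the
# generalized inverse: a_bar @ a = a @ a_bar = pi.  Moreover,
# sigma.T @ a_bar @ sigma approximates the projection onto Range(sigma^t).
```

The same formulas applied entrywise to a(t,y) are what make the integrand in the proof of Theorem (2.6) measurable.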
We are at last ready to prove the representation theorem alluded to above.
(2.6) Theorem: Let a: [0,∞)×R^N → S⁺(R^N) and b: [0,∞)×R^N → R^N be bounded measurable functions and suppose that P ∈ M.P.((s,x);{L_t}) for some (s,x) ∈ [0,∞)×R^N. Given a measurable σ: [0,∞)×R^N → Hom(R^d;R^N) satisfying a = σσ^t, there is a d-dimensional Brownian motion (β(t),F_t,Q) on some probability space (E,F,Q) and a continuous {F_t}-progressively measurable function X: [0,∞)×E → R^N such that
(2.7)   X(T) = x + ∫_0^T σ(s+t,X(t))dβ(t) + ∫_0^T b(s+t,X(t))dt, T ≥ 0, (a.s.,Q),
and P = Q∘X(·)^{-1}. In particular, if d = N and a is never singular (i.e. a(t,y) > 0 for all (t,y) ∈ [0,∞)×R^N), then we can take E = Ω, Q = P, X = x(·), and
   β(T) = ∫_0^T σ^ta^{-1}(s+t,x(t))dx̄(t), T ≥ 0, where x̄(T) = x(T) - ∫_0^T b(s+t,x(t))dt, T ≥ 0.
Proof: Let E = C([0,∞);R^N×R^d) = C([0,∞);R^N)×C([0,∞);R^d), F = B_E, and Q = P×W, where W denotes d-dimensional Wiener measure on Ω_d = C([0,∞);R^d). Given ξ ∈ E, let Z(T,ξ) = (X(T,ξ),Y(T,ξ)) ∈ R^N×R^d denote the position of the path ξ at time T, set F_t = σ(Z(u): 0 ≤ u ≤ t), t ≥ 0, and note that (by the second part of Exercise (II.2.31)) Z̄ ∈ (Mart_c({F_t},Q))^{N+d} with
   ⟨⟨Z̄,Z̄⟩⟩(dt) = [a(s+t,X(t)) 0; 0 I_d]dt,
where Z̄(T) = (X̄(T),Y(T)) and X̄(T) = X(T) - ∫_0^T b(s+t,X(t))dt. Next, define π(t,y) and π_σ(t,y) to be the orthogonal projections of R^N and R^d onto Range(a(t,y)) and Range(σ^t(t,y)), respectively. Set π_σ^⊥ = I_d - π_σ, and define
   β(T) = ∫_0^T [σ^tā, π_σ^⊥](s+t,X(t))dZ̄(t), T ≥ 0.
Then:
   ⟨⟨β,β⟩⟩(dt) = (σ^tāaāσ + π_σ^⊥)(s+t,X(t))dt = I_d dt,
since σ^tāaāσ = (σ^tāσ)(σ^tāσ) = π_σπ_σ = π_σ. Hence, (β(t),F_t,Q) is a d-dimensional Brownian motion. Moreover, since σπ_σ^⊥ = 0 and σσ^tā = aā = π, we see that:
   ∫_0^T σ(s+t,X(t))dβ(t) = ∫_0^T π(s+t,X(t))dX̄(t) = X(T) - x - ∫_0^T b(s+t,X(t))dt - ∫_0^T π^⊥(s+t,X(t))dX̄(t),
where π^⊥ = I_N - π. At the same time,
   ⟨⟨∫_0 π^⊥dX̄, ∫_0 π^⊥dX̄⟩⟩(dt) = π^⊥aπ^⊥(s+t,X(t))dt = 0,
and so ∫_0 π^⊥dX̄ = 0 (a.s.,Q). We have therefore proved that X(·) satisfies (2.7) with this choice of (β(t),F_t,Q); and clearly P = Q∘X(·)^{-1}. Moreover, if N = d and a is never singular, then π^⊥ = 0, and so we could have carried out the whole procedure on (Ω,M,P) instead of (E,F,Q).
Q.E.D.
(2.8) Remark: It is not true in general that the X(·) in (2.7) is a measurable function of the β(·) in that equation. To dramatize this point, we look at the case when N = d = 1, a = 1, b = 0, s = 0, x = 0, and σ(x) = sgn(x). Obviously, (x(t),M_t,P) is a 1-dimensional Brownian motion; and, by i) in Exercise (II.3.22), we see that in this case: β(T) = |x(T)| - 2ℓ(T,0) (a.s.,P), where ℓ(·,0) is the local time of x(·) at 0. In particular, since ℓ(T,0) = lim_{ε↓0}(1/2ε)∫_0^T χ_{[0,ε)}(|x(t)|)dt (a.s.,P), β(·) is measurable with respect to the P-completion 𝒜 of σ(|x(t)|: t ≥ 0). On the other hand, if x(·) were 𝒜-measurable, then there would exist a measurable function Φ: C([0,∞);[0,∞)) → R^1 such that x(1,ω) = Φ(|x(·,ω)|) for every ω ∈ Ω which is not in a P-null set Λ. Moreover, since P(-Λ) = P(Λ), we could assume that Λ = -Λ. But this would mean that x(1,ω) = Φ(|x(·,ω)|) = Φ(|x(·,-ω)|) = x(1,-ω) = -x(1,ω) for ω ∉ Λ; and so we would have that P(x(1)=0) = 1, which is clearly false. Hence, x(·) is not 𝒜-measurable.
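The positive half of this remark, that β is a function of |x(·)| alone, shows up immediately in simulation: the discretized Itô sum for β(T) = ∫₀ᵀ sgn(x(t))dx(t) returns exactly the same value for the paths x and -x, so β cannot recover the sign of x. A sketch (numpy; step count and seed are arbitrary):

```python
import numpy as np

def beta_from_path(x):
    """Discretized beta(T) = int_0^T sgn(x(t)) dx(t); the Ito sum uses
    the sign at the left endpoint of each increment (sgn(0) = 0 here,
    which only affects the single initial point x(0) = 0)."""
    return np.sum(np.sign(x[:-1]) * np.diff(x))

rng = np.random.default_rng(1)
steps = 10_000
dx = rng.standard_normal(steps) * np.sqrt(1.0 / steps)
x = np.concatenate([[0.0], np.cumsum(dx)])

b1 = beta_from_path(x)
b2 = beta_from_path(-x)
# b1 and b2 coincide: sgn(-x) * d(-x) = sgn(x) * dx term by term
```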
In spite of the preceding remark, equation (2.7) is often a very important tool with which to study solutions to martingale problems. Its usefulness depends on our knowing enough about the smoothness of σ and b in order to conclude from (2.7) that X(·) can be expressed as a measurable functional of β(·). The basic result in this direction is contained in the next statement.
(2.9) Theorem (Itô): Let σ: [0,∞)×R^N → Hom(R^d;R^N) and b: [0,∞)×R^N → R^N be measurable functions with the property that, for each T > 0, there exists a C(T) < ∞ such that:
(2.10)   sup_{0≤t≤T}(‖σ(t,0)‖_{H.S.} ∨ |b(t,0)|) ≤ C(T) and ‖σ(t,y') - σ(t,y)‖_{H.S.} ∨ |b(t,y') - b(t,y)| ≤ C(T)|y' - y|, t ∈ [0,T] and y,y' ∈ R^N.
(Here, and throughout, ‖·‖_{H.S.} denotes the Hilbert-Schmidt norm.) Denote by W the standard d-dimensional Wiener measure on (Ω,M). Then there is for each (s,x) ∈ [0,∞)×R^N a right continuous, {M_t}-progressively measurable map Φ_{s,x}: [0,∞)×Ω → R^N such that
   Φ_{s,x}(T) = x + ∫_0^T σ(s+t,Φ_{s,x}(t))dx(t) + ∫_0^T b(s+t,Φ_{s,x}(t))dt, T ≥ 0, (a.s.,W).
Moreover, if (β(t),F_t,Q) is any d-dimensional Brownian motion on some probability space (E,F,Q) and if X: [0,∞)×E → R^N is a right continuous, {F_t}-progressively measurable function for which (2.7) holds Q-almost surely, then X(·,ξ) = Φ_{s,x}(·,β(·,ξ)) (a.s.,Q) on {ξ: β(·,ξ) is continuous}. In particular, Q∘X^{-1} = W∘Φ_{s,x}^{-1}.
Proof: We may and will assume that s = 0 and that x = 0. In order to construct the mapping Φ ≡ Φ_{0,0} on Ω, we begin by setting Φ_0 ≡ 0 and, for n ≥ 1, we define Φ_n inductively by:
   Φ_n(T) = ∫_0^T σ(t,Φ_{n-1}(t))dx(t) + ∫_0^T b(t,Φ_{n-1}(t))dt, T ≥ 0.
Set Δ_n(T) = sup_{0≤t≤T}|Φ_n(t) - Φ_{n-1}(t)| for T ≥ 0, and observe that:
   E^W[Δ_1(T)²] ≤ 2E^W[sup_{0≤t≤T}|∫_0^t σ(u,0)dx(u)|²] + 2E^W[sup_{0≤t≤T}|∫_0^t b(u,0)du|²]
      ≤ 8E^W[|∫_0^T σ(u,0)dx(u)|²] + 2E^W[T∫_0^T |b(u,0)|²du]
      = 8E^W[∫_0^T ‖σ(t,0)‖²_{H.S.}dt] + 2T²C(T)² ≤ (8 + 2T)C(T)²T.
Similarly:
   E^W[Δ_{n+1}(T)²] ≤ 8E^W[∫_0^T ‖σ(t,Φ_n(t)) - σ(t,Φ_{n-1}(t))‖²_{H.S.}dt] + 2TE^W[∫_0^T |b(t,Φ_n(t)) - b(t,Φ_{n-1}(t))|²dt]
      ≤ (8 + 2T)C(T)²∫_0^T E^W[|Φ_n(t) - Φ_{n-1}(t)|²]dt ≤ (8 + 2T)C(T)²∫_0^T E^W[Δ_n(t)²]dt.
Hence, by induction on n ≥ 1, E^W[Δ_n(T)²] ≤ K(T)^n/n!, where K(T) ≡ (8 + 2T)C(T)²T; and so:
   E^W[sup_{0≤t≤T}|Φ_n(t) - Φ_m(t)|²]^{1/2} ≤ Σ_{ν=m+1}^n (K(T)^ν/ν!)^{1/2} → 0 as m → ∞.
We can therefore find (cf. Lemma (II.2.25)) a right continuous, W-almost surely continuous, {M_t}-progressively measurable Φ such that, for all T ≥ 0, E^W[sup_{0≤t≤T}|Φ(t) - Φ_n(t)|²] → 0 as n → ∞. In particular,
   Φ(T) = ∫_0^T σ(t,Φ(t))dx(t) + ∫_0^T b(t,Φ(t))dt, T ≥ 0, (a.s.,W).
Finally, suppose that a Brownian motion (β(t),F_t,Q) on (E,F,Q) and a right continuous, {F_t}-progressively measurable solution X to (2.7) are given. Without loss in generality, we assume that β(·,ξ) is continuous for all ξ ∈ E. Set Y(·,ξ) = Φ(·,β(·,ξ)). Then, as a consequence of ii) in Exercise (2.4) and the fact that Q∘β^{-1} = W:
   Y(T) = ∫_0^T σ(t,Y(t))dβ(t) + ∫_0^T b(t,Y(t))dt, T ≥ 0, (a.s.,Q).
Hence, proceeding in precisely the same way as we did above, we arrive at:
   E^Q[sup_{0≤t≤T}|X(t) - Y(t)|²] ≤ (8 + 2T)C(T)²∫_0^T E^Q[sup_{0≤u≤t}|X(u) - Y(u)|²]dt;
from which it is easy to conclude (via Gronwall's inequality) that X(·) = Y(·) (a.s.,Q).
Q.E.D.
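The Picard scheme used in this proof can be run directly on a discretized Brownian path. One structural feature is easy to observe: since Φ_n(t_k) only depends on Φ_{n-1} at strictly earlier grid points, the first n grid values are already exact after n iterations. A sketch with illustrative bounded Lipschitz coefficients (σ = cos and b = -sin are stand-ins, not from the text):

```python
import numpy as np

def picard_iterates(sigma, b, w, dt, n_iter):
    """Iterates Phi_n(T) = int sigma(Phi_{n-1}) dw + int b(Phi_{n-1}) dt
    on a fixed discretized path w, with Ito sums at left endpoints."""
    phi = np.zeros_like(w)                   # Phi_0 = 0
    iterates = [phi]
    dw = np.diff(w)
    for _ in range(n_iter):
        incr = sigma(phi[:-1]) * dw + b(phi[:-1]) * dt
        phi = np.concatenate([[0.0], np.cumsum(incr)])
        iterates.append(phi)
    return iterates

rng = np.random.default_rng(2)
steps = 1000
dt = 1.0 / steps
w = np.concatenate([[0.0], np.cumsum(rng.standard_normal(steps) * np.sqrt(dt))])
its = picard_iterates(np.cos, lambda x: -np.sin(x), w, dt, n_iter=12)
# its[n] and its[n-1] agree on the first n grid points
```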
(2.11) Corollary: Let σ and b be as in the statement of Theorem (2.9), set a = σσ^t, and define t ↦ L_t accordingly. Then, for each (s,x) ∈ [0,∞)×R^N there is precisely one P_{s,x} ∈ M.P.((s,x);{L_t}). In fact, if Φ_{s,x} is the map described in Theorem (2.9), then P_{s,x} = W∘Φ_{s,x}^{-1}.
Proof: Note that, by Itô's formula, for any φ ∈ C_0^∞(R^N):
   φ(Φ_{s,x}(T)) = φ(x) + ∫_0^T (σ^t(s+t,Φ_{s,x}(t))∇φ(Φ_{s,x}(t)),dx(t))_{R^d} + ∫_0^T [L_{s+t}φ](Φ_{s,x}(t))dt, T ≥ 0, (a.s.,W).
Hence, W∘Φ_{s,x}^{-1} ∈ M.P.((s,x);{L_t}). On the other hand, if P ∈ M.P.((s,x);{L_t}), then, by Theorem (2.6), there is a d-dimensional Brownian motion (β(t),F_t,Q) on some probability space (E,F,Q) and a right continuous, {F_t}-progressively measurable function X: [0,∞)×E → R^N with the properties that P = Q∘X^{-1} and (2.7) holds (a.s.,Q). But, by Theorem (2.9), Q∘X^{-1} = W∘Φ_{s,x}^{-1}, and so P = W∘Φ_{s,x}^{-1}.
Q.E.D.
The hypotheses in these results involve σ and b, but not a directly. Furthermore, it is clear that when a can be singular, no σ satisfying a = σσ^t need be as smooth as a is itself (e.g. when N = 1 and a(x) = |x|, there is no choice of σ which is Lipschitz continuous at 0, even though a itself is). The next theorem addresses this problem. In this theorem, and throughout, a^{1/2} denotes the unique σ ∈ S⁺(R^N) satisfying a = σ².
(2.12) Theorem: Given 0 < ε ≤ 1, set S_ε⁺(R^N) = {a ∈ S⁺(R^N): εI ≤ a ≤ I}. Then, for a ∈ S_ε⁺(R^N):
(2.13)   a^{1/2} = (1/ε)^{1/2} Σ_{n=0}^∞ binom(1/2,n)(εa - I)^n,
where binom(1/2,n) is the coefficient of z^n in the Taylor series expansion of (1 + z)^{1/2} in |z| < 1. Hence, a ∈ S_ε⁺(R^N) ↦ a^{1/2} has bounded derivatives of every order. Moreover, for each a ∈ S⁺(R^N), a^{1/2} = lim_{ε↓0}(a + εI)^{1/2}, and so a ∈ S⁺(R^N) ↦ a^{1/2} is a measurable map.
Next, suppose that ξ ∈ R^1 ↦ a(ξ) ∈ S⁺(R^N) is a map which satisfies a(ξ) ≥ εI and ‖a(η) - a(ξ)‖_{H.S.} ≤ C|η - ξ| for some ε > 0 and C < ∞ and for all ξ,η ∈ R^1. Then:
(2.14)   ‖a^{1/2}(η) - a^{1/2}(ξ)‖_{H.S.} ≤ (C/2ε^{1/2})|η - ξ| for all ξ,η ∈ R^1.
Finally, suppose that ξ ∈ R^1 ↦ a(ξ) ∈ S⁺(R^N) is a twice continuously differentiable map and that ‖a''(ξ)‖_{op} ≤ C < ∞ for all ξ ∈ R^1. Then:
(2.15)   ‖a^{1/2}(η) - a^{1/2}(ξ)‖_{H.S.} ≤ N(2C)^{1/2}|η - ξ| for all ξ,η ∈ R^1.
Proof: Equation (2.13) is a simple application of the spectral theorem, as is the equation a^{1/2} = lim_{ε↓0}(a + εI)^{1/2}. From these, it is clear that a ∈ S_ε⁺(R^N) ↦ a^{1/2} has the asserted regularity properties and that a ∈ S⁺(R^N) ↦ a^{1/2} is measurable.
In proving (2.14), we assume, without loss in generality, that ξ ↦ a(ξ) is continuously differentiable and that ‖a'(ξ)‖_{H.S.} ≤ C for all ξ ∈ R^1; and we will show that, for each ξ ∈ R^1:
(2.16)   ‖(a^{1/2})'(ξ)‖_{H.S.} ≤ (1/2ε^{1/2})‖a'(ξ)‖_{H.S.}.
In proving (2.16), we may and will assume that a(ξ) is diagonal at ξ. Noting that a(η) = a^{1/2}(η)a^{1/2}(η), we see that:
(2.17)   ((a^{1/2})'(ξ))_{ij} = a'(ξ)_{ij} / (a_{ii}^{1/2}(ξ) + a_{jj}^{1/2}(ξ));
and clearly (2.16) follows from this, since a_{ii}^{1/2}(ξ) + a_{jj}^{1/2}(ξ) ≥ 2ε^{1/2}.
Finally, to prove (2.15), we will show that if a(ξ) > 0, then ‖(a^{1/2})'(ξ)‖_{H.S.} ≤ N(2C)^{1/2}; and obviously (2.15) follows from this, after an easy limit procedure. Moreover, we will again assume that a(ξ) is diagonal. But, in this case, it is clear, from (2.17), that we need only check that:
(2.18)   |a'(ξ)_{ij}| ≤ (2C)^{1/2}(a_{ii}(ξ) + a_{jj}(ξ))^{1/2},
since then |((a^{1/2})'(ξ))_{ij}| ≤ (2C)^{1/2}(a_{ii}(ξ) + a_{jj}(ξ))^{1/2}/(a_{ii}^{1/2}(ξ) + a_{jj}^{1/2}(ξ)) ≤ (2C)^{1/2}. To this end, set φ_±(η) = (1/4)(e_i ± e_j, a(η)(e_i ± e_j))_{R^N} ((e_1,…,e_N) is the standard basis in R^N), and note that a'(ξ)_{ij} = φ_+'(ξ) - φ_-'(ξ). Hence, (2.18) will follow once we show that |φ_±'(ξ)| ≤ (C(a_{ii}(ξ) + a_{jj}(ξ))/2)^{1/2}; and this, in turn, will follow if we show that for any non-negative φ ∈ C²(R^1) satisfying |φ''(η)| ≤ K, η ∈ R^1, one has φ'(ξ)² ≤ 2Kφ(ξ). But, for any such φ, 0 ≤ φ(η) ≤ φ(ξ) + φ'(ξ)(η - ξ) + (K/2)(η - ξ)² for all ξ,η ∈ R^1; and so, by the elementary theory of quadratic inequalities, φ'(ξ)² ≤ 2Kφ(ξ).
Q.E.D.
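The series (2.13) is directly computable; the binomial coefficients satisfy binom(1/2,n+1) = binom(1/2,n)·(1/2 - n)/(n + 1). The numpy sketch below applies it to an illustrative 2×2 matrix whose spectrum lies in [ε,1]:

```python
import numpy as np

def sqrt_series(a, eps, terms=200):
    """a^{1/2} via (2.13): eps^{-1/2} * sum_n binom(1/2, n) (eps*a - I)^n,
    valid for eps*I <= a <= I (then ||eps*a - I|| < 1)."""
    m = eps * a - np.eye(a.shape[0])
    total = np.zeros_like(a)
    power = np.eye(a.shape[0])
    coeff = 1.0                       # binom(1/2, 0)
    for n in range(terms):
        total += coeff * power
        power = power @ m
        coeff *= (0.5 - n) / (n + 1)  # binom(1/2, n+1)
    return total / np.sqrt(eps)

a = np.array([[0.5, 0.2],
              [0.2, 0.8]])            # eigenvalues 0.4 and 0.9, so 0.3*I <= a <= I
r = sqrt_series(a, eps=0.3)
# r is symmetric and r @ r recovers a
```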
In view of the preceding results, we have now proved the following existence and uniqueness result for martingale problems.
(2.19) Theorem: Let a: [0,∞)×R^N → S⁺(R^N) and b: [0,∞)×R^N → R^N be bounded measurable functions. Assume that there is a C < ∞ such that
(2.20)   |b(t,y) - b(t,y')| ≤ C|y - y'| for all t ≥ 0 and y,y' ∈ R^N,
and that either
(2.21)   a(t,y) ≥ εI and ‖a(t,y) - a(t,y')‖_{H.S.} ≤ C|y - y'| for all t ≥ 0 and y,y' ∈ R^N and some ε > 0,
or that y ∈ R^N ↦ a(t,y) is twice continuously differentiable for each t ≥ 0 and that
(2.22)   sup{‖∂_y²a(t,y)‖_{op}: t ≥ 0 and y ∈ R^N} ≤ C.
Let t ↦ L_t be the operator-valued map determined by a and b. Then, for each (s,x) ∈ [0,∞)×R^N there is precisely one P_{s,x} ∈ M.P.((s,x);{L_t}). In particular, (s,x) ↦ P_{s,x} is continuous and, for all stopping times τ, P_{s,x} = P_{s,x} ⊗_τ P_{s+τ,x(τ)}.
(2.23) Remark: The preceding cannot be considered to be a particularly interesting application of stochastic integral equations to the study of martingale problems. Indeed, it does little more than give us an alternative (more probabilistic) derivation of the results in section 1 of chapter I. For a more in depth look at this topic, the interested reader should consult Chapter 8 of [S.&V.], where the subject is given a thorough treatment.
3. Localization:
Thus far the hypotheses which we have had to make about a and b in order to check uniqueness are global in nature. The purpose of this section is to prove that the problem of checking whether a martingale problem is well-posed is a local one. The key to our analysis is contained in the following simple lemma.
(3.1) Lemma: Let a,ã: [0,∞)×R^N → S⁺(R^N) and b,b̃: [0,∞)×R^N → R^N be bounded measurable functions, and let t ↦ L_t and t ↦ L̃_t be defined accordingly. Assume that the martingale problem for {L̃_t} is well-posed, and denote by {P̃_{s,x}: (s,x) ∈ [0,∞)×R^N} the corresponding family of solutions. If G is an open subset of [0,∞)×R^N on which a coincides with ã and b coincides with b̃, then, for every (s,x) ∈ G and P ∈ M.P.((s,x);{L_t}), P↾M_ζ = P̃_{s,x}↾M_ζ, where ζ ≡ inf{t ≥ 0: (s+t,x(t)) ∉ G}.
Proof: Set Q = P ⊗_ζ P̃_{s+ζ,x(ζ)}. Then, by Theorem (1.18), Q ∈ M.P.((s,x);{L̃_t}). Hence, by uniqueness, Q = P̃_{s,x}; and so P↾M_ζ = Q↾M_ζ = P̃_{s,x}↾M_ζ.
Q.E.D.
Given bounded measurable a: [0,∞)×R^N → S⁺(R^N) and b: [0,∞)×R^N → R^N and the associated map t ↦ L_t, we say that the martingale problem for {L_t} is locally well-posed if [0,∞)×R^N can be covered by open sets U with the property that there exist bounded measurable a_U: [0,∞)×R^N → S⁺(R^N) and b_U: [0,∞)×R^N → R^N such that a↾U and b↾U coincide with a_U↾U and b_U↾U, respectively, and the martingale problem associated with a_U and b_U is well-posed. Our goal is to prove that, under these circumstances, the martingale problem for {L_t} is itself well-posed. Thus, until further notice, we will be assuming that the martingale problem for {L_t} is locally well-posed, and we will be attempting to prove that there is exactly one element of M.P.((0,0);{L_t}). The following lemma is a standard application of the Heine-Borel theorem.
(3.2) Lemma: There is a sequence {(a_ℓ,b_ℓ,U_ℓ): ℓ ∈ Z⁺} such that:
i) {U_ℓ: ℓ ∈ Z⁺} is a locally finite cover of [0,∞)×R^N;
ii) for each ℓ ∈ Z⁺, a_ℓ: [0,∞)×R^N → S⁺(R^N) and b_ℓ: [0,∞)×R^N → R^N are bounded measurable functions for which the associated martingale problem is well-posed;
iii) for each m ∈ Z⁺ there is an ε_m > 0 with the property that whenever (s,x) ∈ [m-1,m)×(B(0,m)\B(0,m-1)) there exists an ℓ ∈ Z⁺ for which [s,s+ε_m]×B(x,ε_m) ⊆ U_ℓ.
Referring to the notation introduced in Lemma (3.2), define ℓ(s,x) = min{ℓ ∈ Z⁺: [s,s+ε_m]×B(x,ε_m) ⊆ U_ℓ} if (s,x) ∈ [m-1,m)×(B(0,m)\B(0,m-1)). Next, define stopping times σ_n, n ≥ 0, inductively so that: σ_0 ≡ 0; σ_n(ω) = ∞ if σ_{n-1}(ω) = ∞; and, if σ_{n-1}(ω) < ∞, then
   σ_n(ω) = inf{t ≥ σ_{n-1}(ω): (t,x(t,ω)) ∉ U_{ℓ_{n-1}(ω)}}, where ℓ_{n-1}(ω) ≡ ℓ(σ_{n-1}(ω),x(σ_{n-1}(ω),ω)).
Finally, define: Q_0 = δ_{ω_0}, where ω_0 is the path ω_0(t) ≡ 0; and
   Q_{n+1} = [Q_n ⊗_{σ_n} P^·_n]∘(x(·∧σ_{n+1}))^{-1}, where P^ω_n ≡ P^{ℓ_n(ω)}_{σ_n(ω),x(σ_n(ω),ω)} if σ_n(ω) < ∞,
and where {P^ℓ_{s,x}} denotes the family of solutions to the martingale problem for a_ℓ and b_ℓ.
(3.3) Lemma: For each n ≥ 0 and all φ ∈ C_0^∞(R^N):
   (φ(x(t)) - ∫_0^t [L^n_uφ](x(u))du, M_t, Q_n) is a martingale, where L^n_t ≡ χ_{[0,σ_n)}(t)L_t, t ≥ 0.
Proof: We work by induction on n ≥ 0. Clearly there is nothing to prove when n = 0. Now assume the assertion for n, and note that, by Theorem (1.18), we will know it holds for n+1 as soon as we show that for each ω ∈ {σ_n < ∞} and δ_ω ⊗_{σ_n(ω)} P^ω_n-almost every ω' ∈ Ω: a(t,x(t,ω')) = a_{ℓ_n(ω)}(t,x(t,ω')) and b(t,x(t,ω')) = b_{ℓ_n(ω)}(t,x(t,ω')) for t ∈ [σ_n(ω),σ_{n+1}(ω')). But, if ω ∈ {σ_n < ∞}, then for δ_ω ⊗_{σ_n(ω)} P^ω_n-almost every ω' ∈ Ω: σ_n(ω') = σ_n(ω) and, therefore, σ_{n+1}(ω') = inf{t ≥ σ_n(ω): (t,x(t,ω')) ∉ U_{ℓ_n(ω)}}. Since a and b coincide with a_{ℓ_n(ω)} and b_{ℓ_n(ω)}, respectively, on U_{ℓ_n(ω)}, the proof is complete.
Q.E.D.
(3.4) Lemma: For each T > 0, lim_{n→∞} Q_n(σ_n ≤ T) = 0. In particular, there is a unique Q ∈ M_1(Ω) such that Q↾M_{σ_n} = Q_n↾M_{σ_n} for all n ≥ 0. Finally, Q ∈ M.P.((0,0);{L_t}).
Proof: By Lemma (3.3) and Exercise (1.9), there exists for each T > 0 a C(T) < ∞ (depending only on the bounds on a and b) such that E^{Q_n}[|x(t) - x(s)|⁴] ≤ C(T)(t - s)², 0 ≤ s ≤ t ≤ T. Hence, since Q_n(x(0)=0) = 1 for all n ≥ 0, {Q_n: n ≥ 0} is relatively compact in M_1(Ω). Next, note that σ_n(ω) ↑ ∞ uniformly fast for ω's in a compact subset of Ω. Hence, Q_n(σ_n ≤ T) → 0 for each T > 0. Also, observe that Q_n↾M_{σ_m} = Q_m↾M_{σ_m} for all 0 ≤ m ≤ n. Combining this with the preceding, we now see that for all T > 0 and all Γ ∈ M_T, lim_{n→∞} Q_n(Γ) exists. Thus, {Q_n} has precisely one limit Q; and clearly Q↾M_{σ_n} = Q_n↾M_{σ_n} for each n ≥ 0. Finally, we see from Lemma (3.3) that
   E^Q[φ(x(t∧σ_n)) - φ(x(s∧σ_n)),Γ] = E^Q[∫_{s∧σ_n}^{t∧σ_n} [L_uφ](x(u))du,Γ]
for all φ ∈ C_0^∞(R^N), 0 ≤ s < t, and Γ ∈ M_s; and, since σ_n ↑ ∞ (a.s.,Q), it follows that Q ∈ M.P.((0,0);{L_t}).
Q.E.D.
(3.5) Theorem: Let a: [0,∞)×R^N → S⁺(R^N) and b: [0,∞)×R^N → R^N be bounded measurable functions and define t ↦ L_t accordingly. If the martingale problem for {L_t} is locally well-posed, then it is in fact well-posed.
Proof: Clearly, it is sufficient for us to prove that M.P.((0,0);{L_t}) contains precisely one element. Since we already know that the Q constructed in Lemma (3.4) is one element of M.P.((0,0);{L_t}), it remains to show that it is the only one. To this end, suppose that P is a second one. Then, by Lemma (3.1), P↾M_{σ_1} = P^{ℓ(0,0)}_{0,0}↾M_{σ_1} = Q↾M_{σ_1}. Next, assume that P↾M_{σ_n} = Q↾M_{σ_n}, and let ω ↦ P_ω be an r.c.p.d. of P|M_{σ_n}. Then, for P-almost every ω ∈ {σ_n < ∞}, P_ω ∈ M.P.((σ_n(ω),x(σ_n(ω),ω));{L_t}) and therefore, by Lemma (3.1),
   P_ω↾M_{ζ_ω} = (δ_ω ⊗_{σ_n(ω)} P^{ℓ_n(ω)}_{σ_n(ω),x(σ_n(ω),ω)})↾M_{ζ_ω},
where ζ_ω(ω') ≡ σ_n(ω) + τ_ω(θ_{σ_n(ω)}ω') and τ_ω(ω') ≡ inf{t ≥ 0: (t+σ_n(ω),x(t,ω')) ∉ U_{ℓ_n(ω)}}. (Recall that θ_t: Ω → Ω is the time shift map.) At the same time, if σ_n(ω) < ∞, then σ_{n+1}(ω') = σ_n(ω) + τ_ω(θ_{σ_n(ω)}ω') for P_ω-almost every ω'. Combining these, we see that
   P_ω↾M_{σ_{n+1}} = (δ_ω ⊗_{σ_n(ω)} P^{ℓ_n(ω)}_{σ_n(ω),x(σ_n(ω),ω)})↾M_{σ_{n+1}}
for P-almost every ω ∈ {σ_n < ∞}. Since, by the induction hypothesis, we already know that P↾M_{σ_n} = Q↾M_{σ_n}, we can now conclude that P↾M_{σ_{n+1}} = Q↾M_{σ_{n+1}}. Because σ_n ↑ ∞ (a.s.,Q), we have therefore proved that P = Q.
Q.E.D.
The following is a somewhat trivial application of the preceding. We will have a much more interesting one in section 5 below.
(3.6) Corollary: Suppose that a: [0,∞)×R^N → S⁺(R^N) and b: [0,∞)×R^N → R^N are bounded measurable functions with the properties that y ↦ a(t,y) has two continuous derivatives and y ↦ b(t,y) has one continuous derivative for each t ≥ 0. Further, assume that the second derivatives of a(t,·) and the first derivatives of b(t,·) are uniformly bounded for t in compact intervals. Then, the martingale problem for the associated {L_t} is well-posed and the corresponding family {P_{s,x}: (s,x) ∈ [0,∞)×R^N} is continuous.
4. The Cameron-Martin-Girsanov Transformation:
It is clear on analytic grounds that if the coefficient matrix a is strictly positive definite then the first order part of the operator L_t is a lower order perturbation of its principal part
(4.1)   L_t^0 = (1/2)Σ_{i,j=1}^N a^{ij}(t,y)∂_{y_i}∂_{y_j}.
Hence, one should suspect that, in this case, the martingale problems corresponding to {L_t^0} and {L_t} are closely related. In this section we will confirm this suspicion. Namely, we are going to show that when a is uniformly positive definite, then, at least over finite time intervals, P's in M.P.((s,x);{L_t}) differ from P's in M.P.((s,x);{L_t^0}) by a quite explicit Radon-Nikodym factor.
(4.2) Lemma: Let (R(t),M_t,P) be a non-negative martingale with R(0) ≡ 1. Then there is a unique Q ∈ M_1(Ω) such that Q↾M_T = R(T)P↾M_T for each T ≥ 0.
Proof: The uniqueness assertion is obvious. To prove the existence, define Q_n ∈ M_1(Ω) by Q_n(A) = E^P[R(n),A], A ∈ M, for n ≥ 0. Then Q_{n+1}↾M_n = Q_n↾M_n; from which it is clear that {Q_n: n ≥ 0} is relatively compact in M_1(Ω). In addition, one sees that any limit of {Q_n: n ≥ 0} must have the required property.
Q.E.D.
(4.3) Lemma: Let (R(t),M_t,P) be a P-almost surely continuous strictly positive martingale satisfying R(0) ≡ 1. Define Q ∈ M_1(Ω) accordingly as in Lemma (4.2) and set ℛ = log R. Then (1/R(t),M_t,Q) is a Q-almost surely continuous strictly positive martingale, and P↾M_T = (1/R(T))Q↾M_T for all T ≥ 0. Moreover, ℛ ∈ S.Mart_c({M_t},P),
(4.4)   ℛ(T) = ∫_0^T (1/R(t))dR(t) - (1/2)∫_0^T (1/R(t))²⟨R,R⟩(dt), (a.s.,P), for T ≥ 0;
and X ∈ Mart_c^loc({M_t},P) if and only if X - ⟨X,ℛ⟩ ∈ Mart_c^loc({M_t},Q). In particular, S.Mart_c({M_t},P) = S.Mart_c({M_t},Q). Finally, if X,Y ∈ S.Mart_c({M_t},P), then, up to a P,Q-null set, ⟨X,Y⟩ is the same whether it is computed relative to ({M_t},P) or to ({M_t},Q). In particular, given X ∈ S.Mart_c({M_t},P) and an {M_t}-progressively measurable α: [0,∞)×Ω → R^1 satisfying ∫_0^T α(t)²⟨X,X⟩(dt) < ∞ (a.s.,P) for all T ≥ 0, the quantity ∫_0 αdX is, up to a P,Q-null set, the same whether it is computed relative to ({M_t},P) or to ({M_t},Q).
Proof: The first assertion requiring comment is that (4.4) holds; from which it is immediate that ℛ ∈ S.Mart_c({M_t},P). But applying Itô's formula to log(R(t)+ε) for ε > 0 and then letting ε↓0, we obtain (4.4) in the limit.
In proving that X ∈ Mart_c^loc({M_t},P) implies that X - ⟨X,ℛ⟩ ∈ Mart_c^loc({M_t},Q), we may and will assume that R, 1/R, and X are all bounded. Given 0 ≤ t_1 < t_2 and A ∈ M_{t_1}, we have:
   E^Q[X(t_2) - ⟨X,ℛ⟩(t_2),A] = E^P[R(t_2)X(t_2) - R(t_2)⟨X,ℛ⟩(t_2),A],
and similarly with t_1 in place of t_2. Since, by (4.4), ⟨X,ℛ⟩(dt) = (1/R(t))⟨X,R⟩(dt), Itô's formula gives:
   R(T)X(T) = X(0) + ∫_0^T R(t)dX(t) + ∫_0^T X(t)dR(t) + ⟨X,R⟩(T)
and
   R(T)⟨X,ℛ⟩(T) = ∫_0^T ⟨X,ℛ⟩(t)dR(t) + ∫_0^T R(t)⟨X,ℛ⟩(dt) = ∫_0^T ⟨X,ℛ⟩(t)dR(t) + ⟨X,R⟩(T);
and so R(X - ⟨X,ℛ⟩) ∈ Mart_c^loc({M_t},P). Hence the two expectations above coincide, which is exactly the statement that X - ⟨X,ℛ⟩ ∈ Mart_c^loc({M_t},Q).
We have now shown that X - ⟨X,ℛ⟩ ∈ Mart_c^loc({M_t},Q) whenever X ∈ Mart_c^loc({M_t},P), and therefore that S.Mart_c({M_t},P) ⊆ S.Mart_c({M_t},Q). Because the roles of P and Q are symmetric (note that log(1/R) = -ℛ), we will have proved the opposite inclusion as soon as we show that ⟨X,Y⟩ is the same under ({M_t},P) and ({M_t},Q) for all X,Y ∈ S.Mart_c({M_t},P).
Finally, let X,Y ∈ Mart_c^loc({M_t},P) and set X^ℛ = X - ⟨X,ℛ⟩ and Y^ℛ = Y - ⟨Y,ℛ⟩. To see that ⟨X,Y⟩ is the same for ({M_t},P) and ({M_t},Q), we must show that X^ℛY^ℛ - ⟨X,Y⟩_P ∈ Mart_c^loc({M_t},Q) (where the subscript P is used to emphasize that ⟨X,Y⟩ has been computed relative to ({M_t},P)). However, by Itô's formula:
   X^ℛY^ℛ(T) = XY(0) + ∫_0^T X^ℛ(t)dY^ℛ(t) + ∫_0^T Y^ℛ(t)dX^ℛ(t) + ⟨X^ℛ,Y^ℛ⟩_P(T), T ≥ 0, (a.s.,P),
and ⟨X^ℛ,Y^ℛ⟩_P = ⟨X,Y⟩_P. Thus, it remains to check that ∫_0 X^ℛdY^ℛ and ∫_0 Y^ℛdX^ℛ are elements of Mart_c^loc({M_t},Q). But:
   ∫_0 X^ℛdY^ℛ = ∫_0 X^ℛdY - ∫_0 X^ℛd⟨Y,ℛ⟩ = ∫_0 X^ℛdY - ⟨∫_0 X^ℛdY,ℛ⟩ ∈ Mart_c^loc({M_t},Q),
and, by symmetry, the same is true of ∫_0 Y^ℛdX^ℛ.
Q.E.D.
(4.5) Exercise: A more constructive proof that ⟨X,Y⟩ is the same under P and Q can be based on the observation that ⟨X,X⟩(T) can be expressed in terms of the quadratic variation of X over [0,T] (cf. Exercises (II.2.28) and (II.3.14)).
(4.6) Theorem (Cameron-Martin-Girsanov): Let a: [0,∞)×ℝ^N → S⁺(ℝ^N) and b,c: [0,∞)×ℝ^N → ℝ^N be bounded measurable functions, and let t ↦ L_t and t ↦ L̃_t be the operators associated with a and b and with a and b+ac, respectively. Then Q ∈ M.P.((s,x);{L̃_t}) if and only if there is a P ∈ M.P.((s,x);{L_t}) such that Q↾ℳ_T = R(T)·P↾ℳ_T, T ≥ 0, where

(4.7) R(T) = exp[∫₀^T c(s+t,x(t))·dx̄(t) − (1/2)∫₀^T ⟨c,ac⟩_{ℝ^N}(s+t,x(t)) dt]

with x̄(T) ≡ x(T) − ∫₀^T b(s+t,x(t)) dt, T ≥ 0. In particular, for each (s,x) ∈ [0,∞)×ℝ^N, there is a one-to-one correspondence between M.P.((s,x);{L_t}) and M.P.((s,x);{L̃_t}).

Proof: Suppose that P ∈ M.P.((s,x);{L_t}) and define R by (4.7).
By part ii) of exercise (3.13), (R(t),ℳ_t,P) is a martingale; and, clearly, R(0) = 1 and R is P-almost surely positive. Thus, by Lemmas (4.2) and (4.3), there is a unique Q ∈ M₁(Ω) such that Q↾ℳ_T = R(T)·P↾ℳ_T, T ≥ 0. Moreover, X ∈ Mart_c^loc({ℳ_t},P) if and only if X − ⟨X,𝔯⟩ ∈ Mart_c^loc({ℳ_t},Q), where 𝔯 ≡ log R. In particular, since

⟨X,𝔯⟩(dt) = Σ_{i=1}^N c_i(s+t,x(t))⟨x_i,X⟩(dt),
if φ ∈ C_0^∞(ℝ^N), then

⟨φ(x(·)),𝔯⟩(dt) = Σ_{i,j=1}^N (c_i a_{ij} ∂_{x_j}φ)(s+t,x(t)) dt = Σ_{j=1}^N ((ac)_j ∂_{x_j}φ)(s+t,x(t)) dt,

and so

(φ(x(t)) − ∫₀^t [L̃_{s+u}φ](x(u)) du, ℳ_t, Q)

is a martingale. In other words, Q ∈ M.P.((s,x);{L̃_t}).
Conversely, suppose that Q ∈ M.P.((s,x);{L̃_t}) and define R as in (4.7) relative to Q. Then:

1/R(T) = exp[−∫₀^T c(s+t,x(t))·dx̃(t) − (1/2)∫₀^T ⟨c,ac⟩_{ℝ^N}(s+t,x(t)) dt],

where x̃(T) ≡ x(T) − ∫₀^T (b+ac)(s+t,x(t)) dt. Hence, by the preceding paragraph applied to Q and {L̃_t}, with c replaced by −c, we see that there is a unique P ∈ M.P.((s,x);{L_t}) such that P↾ℳ_T = (1/R(T))·Q↾ℳ_T, T ≥ 0. Since stochastic integrals are the same whether they are defined relative to P or Q, we now see that Q↾ℳ_T = R(T)·P↾ℳ_T, T ≥ 0, where R is now defined relative to P. Q.E.D.
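The martingale assertion for R quoted at the beginning of the proof comes from the standard exponential computation; the following sketch (our own, written in the notation of (4.7)) records the Ito step:

```latex
% Sketch: R of (4.7) as a stochastic exponential.  Under P \in M.P.((s,x);\{L_t\}),
% \bar{x}(T) = x(T) - \int_0^T b(s+t,x(t))\,dt is a continuous local martingale with
% \langle \bar{x}_i,\bar{x}_j\rangle(dt) = a_{ij}(s+t,x(t))\,dt.  Set
%   Y(T) = \int_0^T c(s+t,x(t))\cdot d\bar{x}(t),
% so that \langle Y\rangle(T) = \int_0^T \langle c,ac\rangle_{\mathbb{R}^N}(s+t,x(t))\,dt
% and R = \exp\bigl[Y - \tfrac{1}{2}\langle Y\rangle\bigr].  Ito's formula then gives
\[
dR(t) = R(t)\,dY(t) + \tfrac{1}{2}R(t)\,d\langle Y\rangle(t)
        - \tfrac{1}{2}R(t)\,d\langle Y\rangle(t)
      = R(t)\,c(s+t,x(t))\cdot d\bar{x}(t),
\]
% so R is a positive local martingale with R(0) = 1; because a and c are bounded,
% it is in fact a martingale on each bounded time interval, which is the content
% of the exercise cited in the proof.
```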
(4.8) Corollary: Let a: [0,∞)×ℝ^N → S⁺(ℝ^N) and b: [0,∞)×ℝ^N → ℝ^N be bounded measurable functions, and assume that a is uniformly positive definite on compact subsets of [0,∞)×ℝ^N. Define t ↦ L_t⁰ as in (4.1), and let t ↦ L_t be the operator associated with a and b. Then the martingale problem for {L_t⁰} is well-posed if and only if the martingale problem for {L_t} is well-posed.

Proof: In view of Theorem (3.5), we may and will assume that a is uniformly positive definite on the whole of [0,∞)×ℝ^N. But we can then take c = a⁻¹b and apply Theorem (4.6). Q.E.D.
5. The Martingale Problem when a is Continuous and Positive:

Let a: ℝ^N → S⁺(ℝ^N) be a bounded continuous function satisfying a(x) > 0 for each x ∈ ℝ^N, and let b: [0,∞)×ℝ^N → ℝ^N be a bounded measurable function. Our goal in this section is to prove that the martingale problem associated with a and b is well-posed. In view of Corollary (4.8), we may and will assume that b = 0, in which case existence presents no problem. Moreover, because of Theorem (3.5), we may and will assume in addition that

(5.1) ‖a(x) − I‖_{H.S.} ≤ ε, x ∈ ℝ^N,

where ε > 0 is as small as we like. Set

L = (1/2) Σ_{i,j=1}^N a_{ij}(y) ∂_{y_i}∂_{y_j}.

What we are going to do is show that when the ε in (5.1) is sufficiently small then, for each λ > 0, there is a map S_λ from the Schwartz space 𝒮(ℝ^N) into C_b(ℝ^N) such that

(5.2) E^P[∫₀^∞ e^{−λt} f(x(t)) dt] = S_λ f(x)

for all f ∈ 𝒮(ℝ^N), whenever P ∈ M.P.(x;L). Once we have proved (5.2), the argument is easy. Namely, if P and Q are elements of M.P.(x;L), then (5.2) allows us to say that

E^P[∫₀^∞ e^{−λt} f(x(t)) dt] = E^Q[∫₀^∞ e^{−λt} f(x(t)) dt]

for all λ > 0 and f ∈ 𝒮(ℝ^N). But, by the uniqueness of the Laplace transform, this means that P∘x(t)⁻¹ = Q∘x(t)⁻¹ for all t ≥ 0; and so, by Corollary (1.15), P = Q. Hence, everything reduces to proving (5.2).
(5.3) Lemma: Let γ(t,y) denote the standard Gauss kernel on ℝ^N, set γ_t = γ(t,·), and, for λ > 0, define

R_λ f = ∫₀^∞ e^{−λt} γ_t * f dt

for f ∈ 𝒮(ℝ^N) ("*" denotes convolution). Then R_λ maps 𝒮(ℝ^N) into itself and (λI − (1/2)Δ)∘R_λ = R_λ∘(λI − (1/2)Δ) = I on 𝒮(ℝ^N). Moreover, if p ∈ (N/2,∞), then there is an A = A(λ,p) ∈ (0,∞) such that

(5.4) ‖R_λ f‖_{C_b(ℝ^N)} ≤ A‖f‖_{L^p(ℝ^N)}, f ∈ 𝒮(ℝ^N).

Finally, for every p ∈ (1,∞) there is a C = C(p) ∈ (0,∞) (i.e. independent of λ > 0) such that

(5.5) ‖(Σ_{i,j=1}^N (∂_{y_i}∂_{y_j} R_λ f)²)^{1/2}‖_{L^p(ℝ^N)} ≤ C‖f‖_{L^p(ℝ^N)}

for all f ∈ 𝒮(ℝ^N).

Proof: Use ℱf to denote the Fourier transform of f. Then it is an easy computation to show that ℱR_λf(ξ) = (λ + |ξ|²/2)⁻¹ℱf(ξ). From this it is clear that R_λ maps 𝒮(ℝ^N) into itself and that R_λ is the inverse on 𝒮(ℝ^N) of (λI − (1/2)Δ). To prove the estimate (5.4), note that

‖γ_t * f‖_{C_b(ℝ^N)} ≤ ‖γ_t‖_{L^q(ℝ^N)} ‖f‖_{L^p(ℝ^N)},

where 1/q = 1 − 1/p, and that ‖γ_t‖_{L^q(ℝ^N)} ≤ B_N t^{−N/(2p)} for some B_N ∈ (0,∞). Thus, if p ∈ (N/2,∞), then

‖R_λ f‖_{C_b(ℝ^N)} ≤ B_N (∫₀^∞ e^{−λt} t^{−N/(2p)} dt) ‖f‖_{L^p(ℝ^N)} = A‖f‖_{L^p(ℝ^N)},

where A ∈ (0,∞). The estimate (5.5) is considerably more sophisticated.
What it comes down to is the proof that for each p ∈ (1,∞) there is a K = K(p) ∈ (0,∞) such that

(5.6) ‖(Σ_{i,j=1}^N (∂_{y_i}∂_{y_j} f)²)^{1/2}‖_{L^p(ℝ^N)} ≤ K‖(1/2)Δf‖_{L^p(ℝ^N)}

for f ∈ 𝒮(ℝ^N). Indeed, suppose that (5.6) holds. Then, since (1/2)ΔR_λ = λR_λ − I, we would have

‖(Σ_{i,j=1}^N (∂_{y_i}∂_{y_j} R_λ f)²)^{1/2}‖_{L^p(ℝ^N)} ≤ K‖(1/2)ΔR_λ f‖_{L^p(ℝ^N)} ≤ K‖f‖_{L^p(ℝ^N)} + K‖λR_λ f‖_{L^p(ℝ^N)} ≤ 2K‖f‖_{L^p(ℝ^N)},

since ‖γ_t‖_{L¹(ℝ^N)} = 1 and so ‖λR_λ f‖_{L^p(ℝ^N)} ≤ ‖f‖_{L^p(ℝ^N)}. Except when p = 2, (5.6) has no elementary proof and depends on the theory of singular integral operators. Rather than spend time here developing the relevant theory, we will defer the proof to the appendix which follows this section. Q.E.D.
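The integrability condition p > N/2 enters (5.4) only through the convergence of ∫₀^∞ e^{−λt} t^{−N/(2p)} dt. A quick numerical sanity check (our own sketch; the parameter choices λ = 1, N = 3, p = 2 are purely illustrative) compares a quadrature of this integral with its closed form Γ(1 − N/(2p)):

```python
import math

# For lambda = 1 the integral I = ∫_0^∞ e^{-t} t^{-N/(2p)} dt equals
# Γ(1 - N/(2p)), and it is finite precisely when N/(2p) < 1, i.e. p > N/2.
# Illustrative choice: N = 3, p = 2, so the exponent is alpha = 3/4.

alpha = 3.0 / 4.0  # N/(2p)

# Substituting t = u^4 turns ∫_0^∞ e^{-t} t^{-3/4} dt into ∫_0^∞ 4 e^{-u^4} du,
# which has no singularity at the origin and can be handled by Simpson's rule.
def simpson(f, a, b, n=20000):
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

I = simpson(lambda u: 4.0 * math.exp(-u ** 4), 0.0, 10.0)  # tail beyond 10 is negligible
closed_form = math.gamma(1.0 - alpha)  # Γ(1/4) ≈ 3.62561

print(I, closed_form)
```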
Choose and fix a p ∈ (N/2,∞), and take the ε in (5.1) to lie in the interval (0, 1/(2∨C(p))), where C(p) is the constant C in (5.5); in particular, ε ≤ 1/2 and εC(p) ≤ 1. We can now define the operator S_λ. Namely, set D_λ = (L − (1/2)Δ)∘R_λ. Then, for f ∈ 𝒮(ℝ^N),

‖D_λ f‖_{L^p(ℝ^N)} = ‖(1/2) Σ_{i,j=1}^N (a − I)_{ij} ∂_{y_i}∂_{y_j} R_λ f‖_{L^p(ℝ^N)} ≤ (ε/2)‖(Σ_{i,j=1}^N (∂_{y_i}∂_{y_j} R_λ f)²)^{1/2}‖_{L^p(ℝ^N)} ≤ (εC(p)/2)‖f‖_{L^p(ℝ^N)} ≤ (1/2)‖f‖_{L^p(ℝ^N)}.

Hence, D_λ admits a unique extension as a continuous operator on L^p(ℝ^N) with bound not exceeding 1/2. Using D_λ again to denote this extension, we see that I − D_λ admits a continuous inverse K_λ with bound not larger than 2. We now define S_λ = R_λ∘K_λ.
Note that if K_λ f ∈ 𝒮(ℝ^N), then

(λI − L)S_λ f = (λI − (1/2)Δ)R_λ K_λ f − (L − (1/2)Δ)R_λ K_λ f = K_λ f − D_λ K_λ f = (I − D_λ)K_λ f = f.

Thus, if 𝒟_λ ≡ {f ∈ L^p(ℝ^N): K_λ f ∈ 𝒮(ℝ^N)}, then we have that:

(5.7) (λI − L)S_λ f = f, f ∈ 𝒟_λ.

In particular, we see that 𝒟_λ ⊆ C_b(ℝ^N). Moreover, since 𝒮(ℝ^N) is dense in L^p(ℝ^N) and K_λ is invertible, it is clear that 𝒟_λ is also dense in L^p(ℝ^N). We will now show that (5.2) holds for all P ∈ M.P.(x;L) and f ∈ 𝒟_λ. Indeed, if f ∈ 𝒟_λ, then an easy application of Ito's formula in conjunction with (5.7) shows that

(e^{−λt} S_λ f(x(t)) + ∫₀^t e^{−λs} f(x(s)) ds, ℳ_t, P)

is a martingale for every P ∈ M.P.(x;L). Thus, (5.2) follows by letting t→∞ in

E^P[e^{−λt} S_λ f(x(t))] − S_λ f(x) = −E^P[∫₀^t e^{−λs} f(x(s)) ds].
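In outline, the Ito computation behind this martingale runs as follows (a sketch; M below stands for the generic martingale term):

```latex
% For f \in \mathscr{D}_\lambda and P \in M.P.(x;L), (5.7) gives
% (\lambda I - L)S_\lambda f = f, and so
\[
d\bigl(e^{-\lambda t}S_\lambda f(x(t))\bigr)
  = e^{-\lambda t}\bigl[(L - \lambda I)S_\lambda f\bigr](x(t))\,dt + dM(t)
  = -\,e^{-\lambda t}f(x(t))\,dt + dM(t),
\]
% with (M(t),\mathcal{M}_t,P) a martingale.  Integrating over [0,t] and taking
% E^P yields the displayed identity, and t \to \infty then yields (5.2).
```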
At first sight, we appear to be very close to a proof of (5.2) for all f ∈ 𝒮(ℝ^N). However, after a little reflection, one realizes that we still have quite a long way to go. Indeed, although we now know that (5.2) holds for all f ∈ 𝒟_λ and that 𝒟_λ is a dense subset of L^p(ℝ^N), we must still go through a limit procedure before we can assert that (5.2) holds for all f ∈ 𝒮(ℝ^N). The right hand side of (5.2) presents no problems at this point. In fact, (5.4) says that f ∈ L^p(ℝ^N) ↦ R_λ f(x) is a continuous map for each x ∈ ℝ^N, and therefore S_λ has the same property. On the other hand, we know nothing about the behavior of the left hand side of (5.2) under convergence in L^p(ℝ^N). Thus, in order to complete our program we have still to prove an a priori estimate which says that for each P ∈ M.P.(x;L) there is a B ∈ (0,∞) such that

(5.8) |E^P[∫₀^∞ e^{−λt} f(x(t)) dt]| ≤ B‖f‖_{L^p(ℝ^N)}, f ∈ 𝒮(ℝ^N).

To prove (5.8), let P ∈ M.P.(x;L) be given. Then, by Theorem (2.6), there is an N-dimensional Brownian motion (β(t),ℳ_t,P) such that

x(T) = x + ∫₀^T σ(x(t)) dβ(t), T ≥ 0 (a.s.,P),

where σ ≡ a^{1/2}. Set σ_n(t,ω) = σ(x(([nt]/n)∧n, ω)) and

X_n(T) = x + ∫₀^T σ_n(t) dβ(t), T ≥ 0.

Note that, for each T ≥ 0,

E^P[sup_{0≤t≤T} |x(t) − X_n(t)|²] ≤ 4E^P[∫₀^T ‖σ(x(t)) − σ(x(([nt]/n)∧n))‖²_{H.S.} dt] → 0

as n→∞. Hence, if μ_n ∈ M₁(ℝ^N) is defined by

∫_{ℝ^N} f dμ_n = λE^P[∫₀^∞ e^{−λt} f(X_n(t)) dt], f ∈ C_b(ℝ^N),

then μ_n → μ in M₁(ℝ^N), where

∫_{ℝ^N} f dμ = λE^P[∫₀^∞ e^{−λt} f(x(t)) dt], f ∈ C_b(ℝ^N).

In particular, if

(5.9) |∫ f dμ_n| ≤ λB‖f‖_{L^p(ℝ^N)}, f ∈ 𝒮(ℝ^N),

for some B ∈ (0,∞) and all n ≥ 1, then (5.8) holds for the same B.
(5.10) Lemma: For all n ≥ 1, the estimate (5.9) holds with B = 2A, where A = A(λ,p) is the constant in (5.4).

Proof: Choose and fix n ≥ 1. We will first show that if (5.9) holds for some B ∈ (0,∞), then it holds with B = 2A. To this end, note that X_n ∈ (Mart_c²({ℳ_t},P))^N and that ⟨⟨X_n,X_n⟩⟩(dt) = a_n(t) dt, where a_n(t,ω) ≡ a(x(([nt]/n)∧n, ω)). Hence, by Ito's formula, for f ∈ 𝒮(ℝ^N),

(e^{−λt} R_λ f(X_n(t)) + ∫₀^t e^{−λs}[f(X_n(s)) − Ψ(s)] ds, ℳ_t, P)

is a martingale, where

Ψ(t,ω) = (1/2) Σ_{i,j=1}^N (a_n(t,ω) − I)_{ij} (∂_{y_i}∂_{y_j} R_λ f)(X_n(t,ω)).

Hence, we have that

λR_λ f(x) + λE^P[∫₀^∞ e^{−λt} Ψ(t) dt] = ∫ f dμ_n.

Noting that

|Ψ(t,ω)| ≤ (ε/2)(Σ_{i,j=1}^N (∂_{y_i}∂_{y_j} R_λ f)²)^{1/2}(X_n(t,ω)),

we see that

|λE^P[∫₀^∞ e^{−λt} Ψ(t) dt]| ≤ (ε/2)∫_{ℝ^N} (Σ_{i,j=1}^N (∂_{y_i}∂_{y_j} R_λ f)²)^{1/2} dμ_n ≤ (ελMC(p)/2)‖f‖_{L^p(ℝ^N)} ≤ (λM/2)‖f‖_{L^p(ℝ^N)},

where M denotes the smallest B for which (5.9) holds. Using this and (5.4) in the preceding, we obtain λM ≤ λA + λM/2, from which M ≤ 2A is immediate. We must still check that (5.9) holds for some B ∈ (0,∞).
Let f ∈ 𝒮(ℝ^N)⁺ be given. Then

∫ f dμ_n = Σ_{m=0}^{n²−1} λe^{−λm/n} E^P[∫₀^{1/n} e^{−λt} f(X_n(m/n + t)) dt] + λe^{−λn} E^P[∫₀^∞ e^{−λt} f(X_n(n + t)) dt].

At the same time,

E^P[∫₀^{1/n} e^{−λt} f(X_n(m/n + t)) dt]
= ∫₀^{1/n} e^{−λt} E^P[f(X_n(m/n) + σ(x(m/n))(β(m/n + t) − β(m/n)))] dt
= ∫₀^{1/n} e^{−λt} E^P[∫_{ℝ^N} f(σ(x(m/n))y) γ(t, y − σ(x(m/n))⁻¹X_n(m/n)) dy] dt
≤ ∫ [R_λ f_m(·,ω)](σ(x(m/n,ω))⁻¹X_n(m/n,ω)) P(dω),

where f_m(y,ω) ≡ f(σ(x(m/n,ω))y). Note that, because of (5.1) and the fact that ε ≤ 1/2, there is a K ∈ (0,∞) such that ‖f_m(·,ω)‖_{L^p(ℝ^N)} ≤ K‖f‖_{L^p(ℝ^N)} for all m ≥ 0 and ω ∈ Ω. Hence, by (5.4), we have now proved that

E^P[∫₀^{1/n} e^{−λt} f(X_n(m/n + t)) dt] ≤ KA‖f‖_{L^p(ℝ^N)};

and the same argument shows that

E^P[∫₀^∞ e^{−λt} f(X_n(n + t)) dt] ≤ KA‖f‖_{L^p(ℝ^N)}.

Combining these, we conclude that

∫ f dμ_n ≤ (n² + 1)λKA‖f‖_{L^p(ℝ^N)}

for all f ∈ 𝒮(ℝ^N)⁺; that is, (5.9) holds with B = (n² + 1)KA. Q.E.D.
With Lemma (5.10), we have now completed the proof of the following theorem.

(5.11) Theorem: Let a: ℝ^N → S⁺(ℝ^N) be a bounded continuous function satisfying a(x) > 0 for each x ∈ ℝ^N. Then, for every bounded measurable b: [0,∞)×ℝ^N → ℝ^N, the martingale problem for the associated L is well-posed.
(5.12) Remark: There are several directions in which the preceding result can be extended. In the first place, if a depends on (t,y) ∈ [0,∞)×ℝ^N in such a way that, for each T > 0 and K ⊂⊂ ℝ^N, the family {a(t,·)↾K: t ∈ [0,T]} is uniformly positive and equicontinuous, then the martingale problem for a and any bounded measurable b is well-posed. Moreover, when N = 1, this continues to be true without any continuity assumption on a, so long as a is uniformly positive on compact subsets of [0,∞)×ℝ. Finally, if N = 2 and a is independent of t, then the martingale problem for a and any bounded measurable b is well-posed so long as a is a bounded measurable function of x which is uniformly positive on compact subsets of ℝ². All these results can be proved by variations on the line of reasoning which we have presented here. For details, see 7.3.3 and 7.3.4 on pages 192 and 193 of [S.&V.].
Appendix:

In this appendix we will derive the estimate (5.6). There are various approaches to such estimates, and some of these approaches are surprisingly probabilistic. In order to give a hint about the role that probability theory can play, we will base our proof on Burkholder's inequality. However, there is a bit of preparation which we must make before we can bring Burkholder's inequality into play.

Given 1 ≤ j ≤ N, define ℛ_j on 𝒮(ℝ^N) to be the operator given by ℱℛ_j f(ξ) = i(ξ_j/|ξ|)ℱf(ξ). (Recall that we use ℱ to denote the Fourier transform.) Then (5.6) comes down to showing that for each p ∈ (1,∞) there is a C_p < ∞ such that

(A.1) max_{1≤j≤N} ‖ℛ_j f‖_{L^p(ℝ^N)} ≤ C_p‖f‖_{L^p(ℝ^N)}, f ∈ 𝒮(ℝ^N).

Indeed, suppose that (A.1) has been proved. Then we would have that

‖∂_{y_i}∂_{y_j} f‖_{L^p(ℝ^N)} = ‖ℛ_iℛ_j Δf‖_{L^p(ℝ^N)} ≤ C_p²‖Δf‖_{L^p(ℝ^N)},

since ℱ∂_{y_i}∂_{y_j} f(ξ) = −ξ_iξ_jℱf(ξ) = (ξ_iξ_j/|ξ|²)ℱΔf(ξ) = −ℱℛ_iℛ_jΔf(ξ).

(A.2) Lemma: There is a c_N ∈ ℂ such that, for each 1 ≤ j ≤ N, the mapping

φ ∈ 𝒮(ℝ^N) ↦ lim_{ε↓0} c_N ∫_{|x|>ε} (x_j/|x|^{N+1}) φ(x) dx

determines the tempered distribution T_j whose Fourier transform is i(ξ_j/|ξ|) (≡ 0 if ξ = 0). In particular, ℛ_j f = f * T_j, f ∈ 𝒮(ℝ^N) (again, "*" is used to denote convolution).

Proof: Note that for φ ∈ 𝒮(ℝ^N):

∫_{|x|>ε} (x_j/|x|^{N+1}) φ(x) dx = ∫_{ε<|x|≤1} (x_j/|x|^{N+1})(φ(x) − φ(0)) dx + ∫_{|x|>1} (x_j/|x|^{N+1}) φ(x) dx

(here we have used the fact that x_j/|x|^{N+1} is odd and therefore integrates to 0 over the annulus ε < |x| ≤ 1) and

lim_{ε↓0} ∫_{ε<|x|≤1} (x_j/|x|^{N+1})(φ(x) − φ(0)) dx = ∫_{|x|≤1} (x_j/|x|^{N+1})(φ(x) − φ(0)) dx.

Since

φ ∈ 𝒮(ℝ^N) ↦ ∫_{|x|≤1} (x_j/|x|^{N+1})(φ(x) − φ(0)) dx + ∫_{|x|>1} (x_j/|x|^{N+1}) φ(x) dx

is obviously a continuous linear functional on 𝒮(ℝ^N), we will have completed the proof once we show that, for some k ∈ ℝ∖{0} independent of j,

lim_{ε↓0} lim_{R→∞} ∫_{ε<|x|<R} (x_j/|x|^{N+1}) e^{i(ξ,x)} dx

is a non-zero multiple, independent of j, of i(ξ_j/|ξ|). Clearly there is nothing to do when ξ = 0. When ξ ≠ 0, standard calculations show that

lim_{ε↓0} lim_{R→∞} ∫_{ε<|x|<R} (x_j/|x|^{N+1}) e^{i(ξ,x)} dx = lim_{ε↓0} lim_{R→∞} i ∫_{S^{N−1}} ω_j (∫_ε^R sin(r(ξ,ω)) dr/r) dω = (iπ/2) ∫_{S^{N−1}} ω_j sgn((ξ,ω)) dω.

At the same time, if (e₁′,...,e_N′) is an orthonormal basis of ℝ^N with e₁′ = ξ/|ξ|, then

∫_{S^{N−1}} ω_j sgn((ξ,ω)) dω = Σ_{ν=1}^N (e_j,e_ν′) ∫_{S^{N−1}} (e_ν′,ω) sgn((e₁′,ω)) dω = k(ξ_j/|ξ|),

where k = ∫_{S^{N−1}} |(e₁′,ω)| dω ∈ (0,∞) is independent of 1 ≤ j ≤ N. (In the preceding and in what follows, (e₁,...,e_N) is used to denote the standard orthonormal basis in ℝ^N.) Q.E.D.
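When p = 2, the estimates in question are elementary consequences of Plancherel's theorem: the multiplier computed above satisfies |ξ_iξ_j/|ξ|²| ≤ 1, and so ‖∂_{y_i}∂_{y_j} f‖_{L²} ≤ ‖Δf‖_{L²}. A numerical sanity check of this inequality on ℝ² (our own sketch; f(y) = e^{−|y|²/2} is an arbitrary test function whose derivatives we computed by hand):

```python
import math

# For f(y) = exp(-|y|^2/2) on R^2 one computes by hand:
#   d1 d2 f = y1*y2*f         with squared L2 norm  pi/4
#   Laplacian f = (|y|^2-2)*f with squared L2 norm  2*pi
# so the p = 2 inequality ||d1 d2 f||_2 <= ||Lap f||_2 should hold with room to spare.

def l2_norm_sq(g, R=8.0, n=400):
    # midpoint rule for the double integral of g(y1,y2)^2 over [-R,R]^2;
    # the integrands decay like Gaussians, so truncation at R = 8 is harmless
    h = 2 * R / n
    total = 0.0
    for i in range(n):
        y1 = -R + (i + 0.5) * h
        for j in range(n):
            y2 = -R + (j + 0.5) * h
            total += g(y1, y2) ** 2
    return total * h * h

f = lambda y1, y2: math.exp(-(y1 * y1 + y2 * y2) / 2)
d12 = l2_norm_sq(lambda y1, y2: y1 * y2 * f(y1, y2))                  # expect pi/4
lap = l2_norm_sq(lambda y1, y2: (y1 * y1 + y2 * y2 - 2) * f(y1, y2))  # expect 2*pi

print(d12, lap)
```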
Now define r_j(x) = c_N(x_j/|x|^{N+1}) on ℝ^N∖{0}; and, for ε > 0, set r_j^{(ε)}(x) = χ_{(ε,∞)}(|x|) r_j(x) and define ℛ_j^{(ε)} f = f * r_j^{(ε)} for f ∈ 𝒮(ℝ^N). In view of Lemma (A.2), what we must show is that for each p ∈ (1,∞) there is a C_p < ∞ such that

(A.3) sup_{ε>0} ‖ℛ_j^{(ε)} f‖_{L^p(ℝ^N)} ≤ C_p‖f‖_{L^p(ℝ^N)}, f ∈ 𝒮(ℝ^N).

To this end, note that

ℛ_j^{(ε)} f(x) = c_N ∫_{S^{N−1}} ω_j (∫_{r>ε} f(x − rω) dr/r) dω = (c_N/2) ∫_{S^{N−1}} ω_j (∫_{|r|>ε} f(x − rω) dr/r) dω.

Next, choose a measurable mapping ω ∈ S^{N−1} ↦ U_ω ∈ O(N) so that U_ω e₁ = ω for all ω; and, given f ∈ 𝒮(ℝ^N), set f_ω(y) = f(U_ω y). Then:

ℛ_j^{(ε)} f(x) = (πc_N/2) ∫_{S^{N−1}} ω_j [H^{(ε)}f_ω](U_ω⁻¹x) dω,

where

H^{(ε)}g(x) ≡ (1/π) ∫_{|r|>ε} g(x − re₁) dr/r, x ∈ ℝ^N, g ∈ 𝒮(ℝ^N).

In particular, set h^{(ε)}(x) = (1/π)χ_{(ε,∞)}(|x|)(1/x) for x ∈ ℝ¹, so that H^{(ε)} acts as convolution with h^{(ε)} in the e₁-direction alone, and suppose that we show that

(A.4) sup_{ε>0} ‖ψ * h^{(ε)}‖_{L^p(ℝ¹)} ≤ K_p‖ψ‖_{L^p(ℝ¹)}, ψ ∈ 𝒮(ℝ¹),

for each p ∈ (1,∞) and some K_p < ∞. Then we would have that

sup_{ε>0} ‖H^{(ε)}f_ω‖_{L^p(ℝ^N)} ≤ K_p‖f_ω‖_{L^p(ℝ^N)} = K_p‖f‖_{L^p(ℝ^N)}

for all ω ∈ S^{N−1}; and so, since Lebesgue measure is rotation invariant, we could conclude from the preceding that (A.3) holds with C_p = (π|c_N|ω_{N−1}/2)K_p, ω_{N−1} being the surface area of S^{N−1}. In other words, everything reduces to proving (A.4). (Note that the preceding reduction allows us to obtain (A.3) for arbitrary N ∈ ℤ⁺ from the case when N = 1.)
For reasons which will become apparent in a moment, it is better to replace the kernel h^{(ε)} with

h_ε(x) ≡ (1/π) x/(x² + ε²).

Noting that

π‖h^{(ε)} − h_ε‖_{L¹(ℝ¹)} = 2∫₀^ε x/(x² + ε²) dx + 2∫_ε^∞ ε²/(x(x² + ε²)) dx = 2∫₀^1 x/(x² + 1) dx + 2∫_1^∞ 1/(x(x² + 1)) dx,

we see that (A.4) will follow as soon as we show that

(A.5) sup_{ε>0} ‖ψ * h_ε‖_{L^p(ℝ¹)} ≤ K_p‖ψ‖_{L^p(ℝ¹)}, ψ ∈ 𝒮(ℝ¹),

for each p ∈ (1,∞) and some K_p < ∞. In addition, because

∫ φ(x)(ψ * h_ε)(x) dx = −∫ ψ(y)(φ * h_ε)(y) dy, φ,ψ ∈ 𝒮(ℝ¹),

an easy duality argument allows us to restrict our attention to p ∈ [2,∞); and therefore we will do so.

Set p_y(x) = (1/π) y/(x² + y²) for (x,y) ∈ ℝ²₊ ≡ {(x,y) ∈ ℝ²: y > 0}. Given ψ ∈ C_0^∞(ℝ¹;ℝ) (we have emphasized here that ψ is real-valued), define u_ψ(x,y) = ψ*p_y(x) and v_ψ(x,y) = ψ*h_y(x).

(A.6) Lemma: Referring to the preceding, u_ψ and v_ψ are conjugate harmonic functions on ℝ²₊ (i.e. they satisfy the Cauchy-Riemann equations). Moreover, there is a C = C_ψ < ∞ such that |u_ψ(x,y)| ∨ |v_ψ(x,y)| ≤ C/(x² + y²)^{1/2} for all (x,y) ∈ ℝ²₊. Finally,

lim_{δ↓0} sup{|u_ψ(x,y) − ψ(ξ)|: ξ ∈ ℝ¹ and |x − ξ| ∨ y ≤ δ} = 0.

Proof: Set F(z) = u_ψ(x,y) + iv_ψ(x,y) for z = x + iy with (x,y) ∈ ℝ²₊, and note that

F(z) = (i/π) ∫_{ℝ¹} ψ(ξ)/(z − ξ) dξ.

Clearly, all the assertions about u_ψ and v_ψ, except the last one about u_ψ, follow immediately from the preceding representation of F. On the other hand, the asserted behavior of u_ψ as y↓0 can be easily derived from elementary estimates. Q.E.D.
We are at last ready to see how Burkholder's inequality enters the proof of (A.5). Namely, for (x,y) ∈ ℝ²₊, let 𝒲_{x,y} denote the two-dimensional Wiener measure starting from (x,y). Using z(t,ω) = (x(t,ω),y(t,ω)) to denote the position in ℝ² of the path ω ∈ Ω at time t ≥ 0, define

τ_ε(ω) = inf{t ≥ 0: y(t,ω) ≤ ε}.

We must first check that 𝒲_{x,y}(τ_ε < ∞) = 1 for all 0 ≤ ε ≤ y; and, obviously, it is enough to do so in the case when ε = 0. But, by Ito's formula,

(exp[−λ(t∧τ₀) − (2λ)^{1/2} y(t∧τ₀)], ℳ_t, 𝒲_{x,y})

is a martingale for each λ ≥ 0; and so, after letting t→∞,

E^{𝒲_{x,y}}[exp(−λτ₀), τ₀ < ∞] = exp[−(2λ)^{1/2} y].

Hence,

𝒲_{x,y}(τ₀ < ∞) = lim_{λ↓0} E^{𝒲_{x,y}}[exp(−λτ₀), τ₀ < ∞] = lim_{λ↓0} exp[−(2λ)^{1/2} y] = 1.

Next, let ψ ∈ C_0^∞(ℝ¹) be given and define

M_ε(t,ω) = u_ψ(z(t∧τ_ε(ω),ω)) − u_ψ(x,y)

and
N_ε(t,ω) = v_ψ(z(t∧τ_ε(ω),ω)) − v_ψ(x,y),

where u_ψ and v_ψ are defined relative to ψ as in the preceding. Then, by Ito's formula and Lemma (A.6), (M_ε(t),ℳ_t,𝒲_{x,y}) and (N_ε(t),ℳ_t,𝒲_{x,y}) are bounded martingales for each (x,y) ∈ ℝ²₊ and 0 < ε ≤ y. In addition, since (by the Cauchy-Riemann equations) |∇u_ψ| = |∇v_ψ|, we have

⟨N_ε⟩(t) = ∫₀^{t∧τ_ε} |∇v_ψ(z(s))|² ds = ∫₀^{t∧τ_ε} |∇u_ψ(z(s))|² ds = ⟨M_ε⟩(t) (a.s.,𝒲_{x,y}).

Hence, by Burkholder's inequality (cf. (3.18)), we see that for each p ∈ [2,∞) there is a C_p < ∞ such that

‖N_ε(τ_ε)‖_{L^p(𝒲_{x,y})} ≤ C_p‖M_ε(τ_ε)‖_{L^p(𝒲_{x,y})} ≤ (p/(p−1))C_p‖M₀(τ₀)‖_{L^p(𝒲_{x,y})},

where we have used Doob's inequality in order to get the last relation. Since y(τ_ε) = ε (a.s.,𝒲_{x,y}) and u_ψ(·,0) = ψ, we conclude from the preceding that

E^{𝒲_{x,y}}[|v_ψ(x(τ_ε),ε) − v_ψ(x,y)|^p]^{1/p} ≤ K_p(|u_ψ(x,y)| + E^{𝒲_{x,y}}[|ψ(x(τ₀))|^p]^{1/p}),

with K_p ≡ (p/(p−1))C_p; and so

E^{𝒲_{x,y}}[|v_ψ(x(τ_ε),ε)|^p]^{1/p} ≤ |v_ψ(x,y)| + K_p|u_ψ(x,y)| + K_p E^{𝒲_{x,y}}[|ψ(x(τ₀))|^p]^{1/p}

for all (x,y) ∈ ℝ²₊ and 0 < ε ≤ y. Noting that the distribution of x(τ_ε) under 𝒲_{x,y} is the same as that of x + x(τ_ε) under 𝒲_{0,y}, raising the preceding to the power p, and integrating with respect to x ∈ ℝ¹, we obtain:

‖v_ψ(·,ε)‖^p_{L^p(ℝ¹)} ≤ 2^{p−1}K_p^p‖ψ‖^p_{L^p(ℝ¹)} + 2^{p−1}‖ |v_ψ(·,y)| + K_p|u_ψ(·,y)| ‖^p_{L^p(ℝ¹)}

for all 0 < ε ≤ y. But, by the estimate on u_ψ and v_ψ in Lemma (A.6), it is clear that ‖u_ψ(·,y)‖_{L^p(ℝ¹)} ∨ ‖v_ψ(·,y)‖_{L^p(ℝ¹)} → 0 as y→∞. In other words, we have now proved (A.5), with 2K_p replacing K_p, so long as ψ ∈ C_0^∞(ℝ¹). Obviously, the same result for all ψ ∈ 𝒮(ℝ¹) follows immediately from this; and so we are done.
INDEX

B
Brownian scaling, p.16
Burkholder's inequality, p.64

C
Cameron-Martin-Girsanov theorem, p.109
Chapman-Kolmogorov equation, p.12
conditional expectation E^P[X|𝒢], p.1
conditional probability distribution (c.p.d.) and regular c.p.d. (r.c.p.d.), p.3
consistent family of distributions, p.12

D
determining set, p.80
Doob-Meyer theorem, pp.39,43
Doob's inequality, p.29
Doob's stopping time theorem, p.33
Doob's upcrossing inequality, p.34

G
Gauss kernel, p.15

I
initial distribution, p.12
Ito's exponential of X, p.61
Ito's formula, pp.53,71,72
Ito's stochastic integral: for X ∈ Mart_c², p.49; for X ∈ Mart_c^loc, p.59; for Z ∈ S.Mart_c, pp.70,71; for X ∈ (Mart_c²)^d, pp.88,89

K
Kolmogorov's criterion, p.10

L
Levy's theorem, p.56
local martingale, p.58: Mart_c^loc, p.58; ⟨X⟩ and ⟨X,Y⟩, p.58
local time, p.69

M
ℳ, p.12; ℳ_t, p.12
M₁(E), p.1
martingale, p.28: L^q-martingale, p.29; L^q-bounded, p.36; Mart_c², p.46; ⟨X⟩ and ⟨X,Y⟩, p.46; X^T and X_T, p.46
martingale problem, p.73: M.P.((s,x);{L_t}), p.73; M.P.(x;L), p.77; well-posed, p.85; locally well-posed, p.101

P
P∘θ_t⁻¹, p.13
P⊗_τQ, p.81
Polish space, p.2
progressively measurable function, p.27: (a.s.) continuous, p.28; right continuous, p.27; of locally bounded variation, pp.40,70
Prokhorov-Varadarajan theorem, p.7

S
semi-martingale, p.70: S.Mart_c, p.70; ⟨Z⟩ and ⟨Z,W⟩, p.70
stochastic integral, see Ito
stochastic integral equation, p.87
stopping time, p.31: ℳ_τ, p.31
Stratonovitch integral, p.71
sub-martingale, p.28

T
Tanaka's formula, p.69
time inversion, p.16
time shift θ_t, p.16
transition probability function, p.12

W
weak topology on M₁(E), p.5
Wiener measure 𝒲, p.15
Wiener process, p.15

X
x(t,ω), p.3