Lectures on Stochastic Analysis: Diffusion Theory
Daniel W. Stroock
Department of Mathematics, Massachusetts Institute of Technology, USA
E-mail address: [email protected]
URL: http://math.mit.edu/~dws/
2000 Mathematics Subject Classification. Primary: 60J60, 60J65, 60J70; Secondary: 60G05, 60G15, 60G17, 60G40, 60G44, 60G51
Contents

Frequently Used Notation
Introduction
Chapter I. Stochastic Processes and Measures on Function Space
    §I.1. Conditional probabilities and transition probability functions
    §I.2. The weak topology
    §I.3. Constructing measures on C([0, ∞[, R^N)
    §I.4. Wiener measure: some elementary properties
Chapter II. Diffusions and Martingales
    §II.1. A brief introduction to classical diffusion theory
    §II.2. The elements of martingale theory
    §II.3. Stochastic integrals, Itô's formula, and semi-martingales
Chapter III. The Martingale Problem Formulation of Diffusion Theory
    §III.1. Formulation and some basic facts
    §III.2. The martingale problem and stochastic integral equations
    §III.3. Localization
    §III.4. The Cameron-Martin-Girsanov transformation
    §III.5. The martingale problem when a is continuous and positive
Appendix
Bibliography
Frequently Used Notation
R                     The real numbers ]−∞, ∞[
Z                     The integers {…, −2, −1, 0, 1, 2, …}
Z+                    The positive integers {1, 2, …}
N                     The natural numbers {0, 1, 2, …}
Q                     The rational numbers
C                     The complex numbers
B(S)                  The Borel σ-algebra on a topological space S
N                     The normal distribution
I_{d×d}               The d × d identity matrix
√−1                   The imaginary unit
1_A                   The indicator function of a set A
B(c, r)               The open ball of radius r centered at c
C = C([0, ∞[, R^N)    The R^N-valued continuous functions on [0, ∞[
M₁(Ω)                 The set of probability measures on Ω (equipped with σ-algebra F)
U(Ω, ρ)               The ρ-uniformly continuous functions on Ω
⟨·, ·⟩_{L²(X)}         The inner product in L²(X)
⟨·, ·⟩_{R^N}           The inner product in R^N
cl(·), int(·), bd(·)  Closure, interior, and boundary, respectively
⋐                     "Compact subset of"
Introduction

These notes grew out of lectures which I gave during the Fall semester of 1985 at MIT. My purpose has been to provide a reasonably self-contained introduction to some stochastic analytic techniques which can be used in the study of certain analytic problems, and my method has been to concentrate on a particularly rich example rather than to attempt a general overview. The example which I have chosen is the study of second order partial differential operators of parabolic type. This example has the advantage that it leads very naturally to the analysis of measures on function space and the introduction of powerful probabilistic tools like martingales. At the same time, it highlights the basic virtue of probabilistic analysis: the direct role of intuition in the formulation and solution of problems.

The material which is covered has all been derived from my book [Stroock and Varadhan, 2006] with S.R.S. Varadhan. However, the presentation here is quite different. In the first place, the emphasis there was on generality and detail; here it is on conceptual clarity. Secondly, at the time when we wrote [Stroock and Varadhan, 2006], we were not aware of the ease with which the modern theory of martingales and stochastic integration can be presented. As a result, our development of that material was a kind of hybrid between the classical ideas of K. Itô and J.L. Doob and the modern theory based on the ideas of P.A. Meyer, H. Kunita, and S. Watanabe. In these notes the modern theory is presented, and the result is, I believe, not only more general but also more understandable.

In Chapter I I give a quick review of a few of the important facts about probability measures on Polish spaces: the existence of regular conditional probability distributions and the theory of weak convergence. The chapter ends with the introduction of Wiener measure and a brief discussion of some of the most elementary properties of Brownian motion.
Chapter II starts with an introduction to diffusion theory via the classical route of transition probability functions coming from the fundamental solution of parabolic equations. At the end of the first section, an attempt is made to bring out the analogy between diffusions and the theory of integral curves of a vector field. In this way I have tried to motivate the formulation (made precise in Chapter III) of diffusion theory in terms of martingales, and at the same time, to indicate the central position which martingales play in stochastic analysis. The rest of Chapter II is devoted to the elements of martingale theory and the development of stochastic integration theory. (The presentation here profited considerably from the incorporation of some ideas which I learned in the lectures given by K. Itô at the opening session of the I.M.A. in the Fall of 1985.)

In Chapter III I formulate the martingale problem and derive some of the basic facts about its solutions. The chapter ends with a proof that the martingale problem corresponding to a strictly elliptic operator with bounded continuous coefficients is well-posed. This proof turns on an elementary fact about singular integral operators, and a derivation of this fact is given in the appendix at the end of the chapter.
CHAPTER I
Stochastic Processes and Measures on Function Space

§I.1. Conditional probabilities and transition probability functions

We begin by recalling the notion of conditional expectation. Namely, given a probability space (E, F, P) and a sub-σ-algebra F₀, the conditional expectation E^P[X | F₀] of a function X ∈ L²(P) is that F₀-measurable element of L²(P) such that

(1.1)    ∫_A X(ξ) P(dξ) = ∫_A E^P[X | F₀](ξ) P(dξ),    A ∈ F₀.

Clearly, E^P[X | F₀] exists: it is nothing more or less than the projection of X onto the subspace of L²(P) consisting of F₀-measurable P-square integrable functions. Moreover, E^P[X | F₀] ≥ 0 (a.s. P) if X ≥ 0 (a.s. P). Hence, if X is any non-negative F-measurable function, then one can use the monotone convergence theorem to construct a non-negative F₀-measurable E^P[X | F₀] for which (1.1) holds; and clearly, up to a P-null set, there is only one such function. In this way, one sees that for any F-measurable X which is either non-negative or in L¹(P), there exists a P-almost surely unique F₀-measurable E^P[X | F₀] satisfying (1.1). Because X ↦ E^P[X | F₀] is linear and preserves non-negativity, one might hope that for each ξ ∈ E there is a P_ξ ∈ M₁(E) (the space of probability measures on (E, F)) such that E^P[X | F₀](ξ) = ∫ X(η) P_ξ(dη). Unfortunately, this hope is not fulfilled in general. However, it is fulfilled when one imposes certain topological conditions on (E, F). Our first theorem addresses this question.

(1.2). THEOREM. Suppose that Ω is a Polish space (i.e., Ω is a topological space which admits a complete separable metrization) and that A is a sub-σ-algebra of B(Ω) (the Borel field over Ω). Given P ∈ M₁(Ω), there is an A-measurable map ω ↦ P_ω ∈ M₁(Ω) such that P(A ∩ B) = ∫_A P_ω(B) P(dω) for all A ∈ A and B ∈ B(Ω). Moreover, ω ↦ P_ω is uniquely determined up to a P-null set A ∈ A. Finally, if A is countably generated, then ω ↦ P_ω can be chosen so that P_ω(A) = 1_A(ω) for all ω ∈ Ω and A ∈ A.

PROOF. Assume for the present that Ω is compact; the general case is left for later (cf. Exercise (2.2) below). Choose (φ_n)_{n∈Z+} ⊂ C(Ω) to be a linearly independent set of functions whose span is dense in C(Ω), and take φ₀ ≡ 1. For each k ∈ Z+, let ψ_k be a bounded version of E^P[φ_k | A], and choose ψ₀ ≡ 1. Next, let A be the set of ω's such that there exist an n ∈ Z+ and a₀, …, a_n ∈ Q such that Σ_{m=0}^n a_m φ_m ≥ 0 but Σ_{m=0}^n a_m ψ_m(ω) < 0, and check that A is an A-measurable P-null set. For n ∈ N and a₀, …, a_n ∈ Q, define

    Λ_ω( Σ_{m=0}^n a_m φ_m ) := E^P[ Σ_{m=0}^n a_m φ_m ]  if ω ∈ A,
    Λ_ω( Σ_{m=0}^n a_m φ_m ) := Σ_{m=0}^n a_m ψ_m(ω)      otherwise.
Check that, for each ω ∈ Ω, Λ_ω determines a unique non-negative linear functional on C(Ω) and that Λ_ω(1) = 1. Further, check that ω ↦ Λ_ω(φ) is A-measurable for each φ ∈ C(Ω). Finally, let P_ω be the measure on Ω associated with Λ_ω by the Riesz representation theorem, and check that ω ↦ P_ω satisfies the required conditions.

The uniqueness assertion is easy. Moreover, since P_·(A) = 1_A(·) (a.s. P) for each A ∈ A, it is clear that, when A is countably generated, ω ↦ P_ω can be chosen so that this equality holds for all ω ∈ Ω.

Referring to the setup described in Theorem (1.2), the map ω ↦ P_ω is called a conditional probability distribution of P given A (abbreviated by c.p.d. of P | A). If ω ↦ P_ω has the additional property that P_ω(A) = 1_A(ω) for all ω ∈ Ω and A ∈ A, then ω ↦ P_ω is called a regular c.p.d. of P | A (abbreviated by r.c.p.d. of P | A).

(1.3). REMARK. The Polish space which will be the center of most of our attention in what follows is the space Ω := C([0, ∞[, R^N) of continuous paths from [0, ∞[ into R^N with the topology of uniform convergence on compact time intervals. Letting x(t, ω) := ω(t) denote the position of ω ∈ Ω at time t ≥ 0, set M_t := σ( x(s) : 0 ≤ s ≤ t ) (the smallest σ-algebra over Ω with respect to which all the maps ω ↦ x(s, ω), 0 ≤ s ≤ t, are measurable). Given P ∈ M₁(Ω), Theorem (1.2) says that for each t ≥ 0 there is a P-essentially unique r.c.p.d. ω ↦ P^t_ω of P | M_t. Intuitively, the representation P = ∫ P^t_ω P(dω) can be thought of as a fibration of P according to how the path ω behaves during the initial time interval [0, t]. We will be mostly concerned with P's which are Markov, in the sense that for each t ≥ 0 and B ∈ B(Ω) which is measurable with respect to σ( x(s) : s ≥ t ), ω ↦ P^t_ω(B) depends P-almost surely only on x(t, ω) and not on x(s, ω) for s < t.

§I.2. The weak topology

(2.1). THEOREM. Let Ω be a Polish space and let ρ be a metric on Ω for which (Ω, ρ) is totally bounded.¹ Suppose that Λ is a non-negative linear functional on U(Ω, ρ) (the set of ρ-uniformly continuous functions on Ω) satisfying Λ(1) = 1. Then there is a (unique) P ∈ M₁(Ω) such that Λ(φ) = E^P[φ] for all φ ∈ U(Ω, ρ) if and only if for each ε > 0 there is a K_ε ⋐ Ω with the property that Λ(φ) ≥ 1 − ε whenever φ ∈ U(Ω, ρ) satisfies φ ≥ 1_{K_ε}.

PROOF. Suppose that P exists. Choose (ω_k)_{k∈N} to be a countable dense subset of Ω and for each n ∈ Z+ choose N_n ∈ N such that P( ∪_{k=1}^{N_n} B(ω_k, 1/n) ) ≥ 1 − ε/2ⁿ, where the balls B(ω, r) are defined with respect to a complete metric on Ω. Define the set K_ε := ∩_{n∈Z+} ∪_{k=1}^{N_n} cl( B(ω_k, 1/n) ). Then K_ε ⋐ Ω and P(K_ε) ≥ 1 − ε.

Next, suppose that Λ(φ) ≥ 1 − ε whenever φ ≥ 1_{K_ε}. Clearly, we may assume that K_ε increases with decreasing ε. Let Ω̄ denote the completion of Ω with respect to ρ. Then U(Ω, ρ) ∋ φ ↦ φ̄, the unique extension of φ to Ω̄ in C(Ω̄), is a surjective isometry from U(Ω, ρ) to C(Ω̄). Hence, Λ induces a unique Λ̄ ∈ C(Ω̄)* such that Λ̄(φ̄) = Λ(φ), and so there is a P̄ ∈ M₁(Ω̄) such that Λ(φ) = E^{P̄}[φ̄], φ ∈ U(Ω, ρ). Clearly, P̄(Ω₀) = 1 where Ω₀ := ∪_{ε>0} K_ε, and so P(Γ) := P̄(Γ ∩ Ω₀) determines an element of M₁(Ω) with the required property. The uniqueness of P is obvious.

¹Recall that a subset Ω₀ ⊂ Ω of a metric space (Ω, ρ) is totally bounded if for every ε > 0 there exists
N_ε ∈ Z+ and a set {ξ₁, …, ξ_{N_ε}} ⊂ Ω₀ such that Ω₀ ⊂ ∪_{i=1}^{N_ε} B_ρ(ξ_i, ε). Note that, given a metric ρ₀ on Ω, one can always construct another metric (e.g., ρ := ρ₀/(1 + ρ₀)) such that (Ω, ρ) is totally bounded.
(2.2). EXERCISE. Using the preceding, carry out the proof of Theorem (1.2) when Ω is not compact.

Given a Polish space Ω, the weak topology on M₁(Ω) is the topology generated by sets of the form { ν : |ν(φ) − µ(φ)| < ε } for µ ∈ M₁(Ω), φ ∈ C_b(Ω), and ε > 0. Thus, the weak topology on M₁(Ω) is precisely the relative topology which M₁(Ω) inherits from the weak-* topology on C_b(Ω)*.

(2.3). EXERCISE. Let (ω_k)_{k∈N} be a countable dense subset of Ω. Show that the set of convex combinations of the point masses δ_{ω_k} with non-negative rational coefficients is dense in M₁(Ω). In particular, conclude that M₁(Ω) is separable.

(2.4). LEMMA. Given a net (µ_α) in M₁(Ω), the following are equivalent:
i) µ_α → µ;
ii) µ_α(φ) → µ(φ) for every φ ∈ U(Ω, ρ), where ρ is a metric on Ω for which (Ω, ρ) is totally bounded;
iii) lim sup_α µ_α(F) ≤ µ(F) for every closed F ⊂ Ω;
iv) lim inf_α µ_α(G) ≥ µ(G) for every open G ⊂ Ω;
v) lim_α µ_α(Γ) = µ(Γ) for every Γ ∈ B(Ω) with µ(bd(Γ)) = 0.

PROOF. Obviously i) ⇒ ii) and iii) ⇒ iv) ⇒ v). To prove that ii) ⇒ iii), set

    φ_ε(ω) := ρ( ω, Ω \ F^{(ε)} ) / ( ρ(ω, F) + ρ( ω, Ω \ F^{(ε)} ) ),

where F^{(ε)} is the set of ω's whose ρ-distance from F is less than ε. Then φ_ε ∈ U(Ω, ρ), 1_F ≤ φ_ε ≤ 1_{F^{(ε)}}, and so

    lim sup_α µ_α(F) ≤ lim_α µ_α(φ_ε) = µ(φ_ε) ≤ µ( F^{(ε)} ).

After letting ε → 0, one sees that iii) holds. Finally, assume v) and let φ ∈ C_b(Ω) be given. Noting that µ(a < φ < b) = µ(a ≤ φ ≤ b) for all but at most a countable number of a's and b's, choose for a given ε > 0 a finite collection a₀ < ··· < a_N so that a₀ < φ < a_N, a_n − a_{n−1} < ε, and µ(a_{n−1} < φ < a_n) = µ(a_{n−1} ≤ φ ≤ a_n) for n = 1, …, N. Then

    |µ_α(φ) − µ(φ)| ≤ 2ε + 2‖φ‖_{C_b(Ω)} Σ_{n=1}^N | µ_α(a_{n−1} < φ ≤ a_n) − µ(a_{n−1} < φ ≤ a_n) |,

and so, by v), lim sup_α |µ_α(φ) − µ(φ)| ≤ 2ε.
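The dichotomy in Lemma (2.4) between test functions and sets can already be seen for the point masses µ_n := δ_{1/n} converging weakly to µ := δ_0 on R. The following sketch (our illustration, not part of the text) checks ii) for a bounded uniformly continuous φ, exhibits strict inequality in iii) for the closed set F = {0}, and verifies v) for a set whose boundary is µ-null.

```python
# Weak convergence of mu_n = delta_{1/n} to mu = delta_0 on R:
# ii) holds for bounded uniformly continuous test functions, while the
# closed-set inequality iii) is strict for F = {0}.  (Toy illustration.)

def mu_n(f, n):          # integral of f against the point mass at 1/n
    return f(1.0 / n)

def mu(f):               # integral of f against the point mass at 0
    return f(0.0)

phi = lambda x: 1.0 / (1.0 + x * x)          # bounded, uniformly continuous
assert abs(mu_n(phi, 10**6) - mu(phi)) < 1e-6

ind_F = lambda x: 1.0 if x == 0.0 else 0.0   # indicator of the closed set {0}
assert mu_n(ind_F, 10**6) == 0.0 and mu(ind_F) == 1.0   # strict: 0 < 1

# v): Gamma = ]-1, 1[ has mu(bd(Gamma)) = mu({-1, 1}) = 0, and indeed
ind_G = lambda x: 1.0 if -1.0 < x < 1.0 else 0.0
assert mu_n(ind_G, 2) == mu(ind_G) == 1.0
```

Note that no continuous function can witness the failure in iii): only the indicator of the closed set does.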
(2.5). REMARK. M₁(Ω) admits a metric. Indeed, let ρ be a metric on Ω with the property that (Ω, ρ) is totally bounded. Then, since U(Ω, ρ) is isometric to C(Ω̄), there is a countable dense subset (φ_n)_{n∈N} of U(Ω, ρ). Define

    R(µ, ν) := Σ_{n=1}^∞ 2^{−n} |µ(φ_n) − ν(φ_n)| / ( 1 + ‖φ_n‖_{C_b(Ω)} ).

Clearly, R is a metric for M₁(Ω), and so (in view of (2.3)) we now see that M₁(Ω) is a separable metric space. Actually, with a little more effort, one can show that M₁(Ω) is itself a Polish space. The easiest way to see this is to show that M₁(Ω) can be embedded in M₁(Ω̄) as a G_δ. Since M₁(Ω̄) is compact, and therefore Polish, it follows that M₁(Ω) is also Polish. In any case, we now know that convergence and sequential convergence are equivalent in M₁(Ω).
(2.6). THEOREM (Prokhorov & Varadarajan). A set Γ ⊂ M₁(Ω) is relatively compact if and only if for each ε > 0 there is a K_ε ⋐ Ω such that µ(K_ε) ≥ 1 − ε for every µ ∈ Γ.

PROOF. First suppose that Γ is relatively compact in M₁(Ω). Given ε > 0 and n ∈ Z+, choose for each µ ∈ Γ a K_n(µ) ⋐ Ω such that µ(K_n(µ)) > 1 − ε/2ⁿ, and set G_n(µ) := { ν : ν( K_n(µ)^{(1/n)} ) > 1 − ε/2ⁿ }, where distances are taken with respect to a complete metric on Ω. Next, choose µ_{n,1}, …, µ_{n,N_n} ∈ Γ so that Γ ⊂ ∪_{k=1}^{N_n} G_n(µ_{n,k}), and set K := ∩_{n=1}^∞ ∪_{k=1}^{N_n} cl( K_n(µ_{n,k})^{(1/n)} ). Clearly, K ⋐ Ω, and µ(K) ≥ 1 − ε for every µ ∈ Γ.

To prove the opposite implication, think of M₁(Ω) as a subset of the unit ball in C_b(Ω)*. Since the unit ball in C_b(Ω)* is compact in the weak-* topology, it suffices for us to check that every weak-* limit Λ of µ's from Γ comes from an element of M₁(Ω). But Λ(φ) ≥ 1 − ε for all φ ∈ C_b(Ω) satisfying φ ≥ 1_{K_ε}, and so Theorem (2.1) applies to Λ.

(2.7). EXAMPLE. Let Ω = C([0, ∞[, E), where (E, ρ) is a Polish space, and give Ω the topology of uniform convergence on finite intervals. Then Γ ⊂ M₁(Ω) is relatively compact if and only if for each T > 0 and ε > 0 there exist K ⋐ E and δ : ]0, ∞[ → ]0, ∞[ satisfying lim_{τ↓0} δ(τ) = 0 such that

    inf_{P∈Γ} P( ω : x(t, ω) ∈ K for t ∈ [0, T], and ρ(x(t, ω), x(s, ω)) ≤ δ(|t − s|) for s, t ∈ [0, T] ) ≥ 1 − ε.

In particular, if ρ-bounded subsets of E are relatively compact, then it suffices that

    lim_{R→∞} inf_{P∈Γ} P( ω : ρ(x, x(0, ω)) ≤ R and ρ(x(t, ω), x(s, ω)) ≤ δ(|t − s|) for s, t ∈ [0, T] ) ≥ 1 − ε

for some reference point x ∈ E.

The following basic real-variable result was discovered by Garsia, Rodemich, and Rumsey.

(2.8). LEMMA (Garsia et al.). Let p and ψ be strictly increasing continuous functions on [0, ∞[ satisfying p(0) = ψ(0) = 0 and lim_{t→∞} ψ(t) = ∞. For given T > 0 and φ ∈ C([0, T], R^N), suppose that

    ∫₀^T ∫₀^T ψ( ‖φ(t) − φ(s)‖ / p(|t − s|) ) ds dt ≤ B < ∞.

Then, for all 0 ≤ s ≤ t ≤ T:

    ‖φ(t) − φ(s)‖ ≤ 8 ∫₀^{t−s} ψ^{−1}( 4B/u² ) p(du).
PROOF. Define

    I(t) := ∫₀^T ψ( ‖φ(t) − φ(s)‖ / p(|t − s|) ) ds,    t ∈ [0, T].

Since ∫₀^T I(t) dt ≤ B, there is a t₀ ∈ ]0, T[ such that I(t₀) ≤ B/T. Next, choose t₀ > d₀ > t₁ > d₁ > ··· > t_n > d_n > ··· as follows. Given t_{n−1}, define d_{n−1} by p(d_{n−1}) = ½ p(t_{n−1}),
and choose t_n ∈ ]0, d_{n−1}[ so that

    I(t_n) ≤ 2B/d_{n−1}    and    ψ( ‖φ(t_n) − φ(t_{n−1})‖ / p(t_{n−1} − t_n) ) ≤ 2 I(t_{n−1}) / d_{n−1}.

Such a t_n exists because each of the specified conditions can fail on a set of measure strictly less than ½ d_{n−1}. Clearly 2 p(d_{n+1}) = p(t_{n+1}) ≤ p(d_n); thus t_n ↓ 0 as n → ∞. Also,

    p(t_n − t_{n+1}) ≤ p(t_n) = 2 p(d_n) = 4 p(d_n) − 2 p(d_n) ≤ 4( p(d_n) − p(d_{n+1}) ).

Hence, with d_{−1} := T,

    ‖φ(t_{n+1}) − φ(t_n)‖ ≤ ψ^{−1}( 2 I(t_n)/d_n ) p(t_n − t_{n+1})
                          ≤ ψ^{−1}( 4B/(d_n d_{n−1}) ) · 4( p(d_n) − p(d_{n+1}) )
                          ≤ 4 ∫_{d_{n+1}}^{d_n} ψ^{−1}( 4B/u² ) p(du),

so that ‖φ(t₀) − φ(0)‖ ≤ 4 ∫₀^T ψ^{−1}( 4B/u² ) p(du). By the same argument going in the opposite direction, ‖φ(T) − φ(t₀)‖ ≤ 4 ∫₀^T ψ^{−1}( 4B/u² ) p(du). Hence, we now have

(2.9)    ‖φ(T) − φ(0)‖ ≤ 8 ∫₀^T ψ^{−1}( 4B/u² ) p(du).

To complete the proof, let 0 ≤ σ ≤ τ ≤ T be given and apply (2.9) to φ̃(t) := φ( σ + (τ − σ)t/T ) and p̃(t) := p( (τ − σ)t/T ). Since

    ∫₀^T ∫₀^T ψ( ‖φ̃(t) − φ̃(s)‖ / p̃(|t − s|) ) ds dt = ( T/(τ − σ) )² ∫_σ^τ ∫_σ^τ ψ( ‖φ(t) − φ(s)‖ / p(|t − s|) ) ds dt ≤ ( T/(τ − σ) )² B =: B̃,

we conclude that

    ‖φ(τ) − φ(σ)‖ ≤ 8 ∫₀^T ψ^{−1}( 4B̃/u² ) p̃(du) = 8 ∫₀^{τ−σ} ψ^{−1}( 4B/u² ) p(du).
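As a sanity check on Lemma (2.8), take ψ(t) := t⁴ and p(u) := u, for which the conclusion reads ‖φ(t) − φ(s)‖ ≤ 16 (4B)^{1/4} √(t − s). The following numerical sketch (our choice of ψ, p, and φ, not the text's) estimates B for φ = sin on [0, 1] by a Riemann sum and verifies the bound on a few pairs.

```python
import math

# Numerical sanity check of the Garsia-Rodemich-Rumsey inequality (2.8)
# with psi(t) = t^4 and p(u) = u, for phi(t) = sin t on [0, 1].
# (Illustrative choices, not taken from the text.)

n = 200
h = 1.0 / n
grid = [(i + 0.5) * h for i in range(n)]    # midpoint grid, avoids s == t

# Riemann-sum estimate of B = double integral of psi(|phi(t)-phi(s)| / |t-s|)
B = sum(((math.sin(t) - math.sin(s)) / abs(t - s)) ** 4 * h * h
        for t in grid for s in grid if s != t)

# with p(u) = u the right-hand side of (2.8) is
#   8 * int_0^{t-s} (4B/u^2)^{1/4} du = 16 * (4B)^{1/4} * sqrt(t - s)
for (s, t) in [(0.1, 0.2), (0.0, 1.0), (0.3, 0.9)]:
    lhs = abs(math.sin(t) - math.sin(s))
    rhs = 16.0 * (4.0 * B) ** 0.25 * math.sqrt(t - s)
    assert lhs <= rhs
```

The slack is large here; the strength of the lemma is that the same right-hand side controls every pair s, t simultaneously in terms of a single integrated quantity B.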
(2.10). EXERCISE. Generalize the preceding as follows. Let (L, ‖·‖) be a normed linear space, r > 0, d ∈ Z+, and φ : R^d → L a weakly continuous map. Set B(r) := { x ∈ R^d : ‖x‖ < r }, and suppose that

    ∫_{B(r)} ∫_{B(r)} ψ( ‖φ(y) − φ(x)‖ / p(‖y − x‖) ) dx dy ≤ B < ∞.

Show that

    ‖φ(y) − φ(x)‖ ≤ 8 ∫₀^{‖y−x‖} ψ^{−1}( 4^{d+1} B / (γ u^{2d}) ) p(du),

where γ = γ_d := inf{ (1/r^d) |(x + B(r)) ∩ B(1)|_Leb : ‖x‖ ≤ 1 and r ≤ 2 } (here |·|_Leb is the Lebesgue measure). A proof can be made by mimicking the argument used to prove Lemma (2.8) (cf. [Stroock and Varadhan, 2006, §2.4.1, p. 60]).
(2.11). THEOREM (Kolmogorov's Criterion). Let (Ω, F, P) be a probability space, and ξ a measurable map of R^d × Ω into the normed linear space (L, ‖·‖). Assume that R^d ∋ x ↦ ξ(x, ω) is weakly continuous for each ω ∈ Ω and that for some q ∈ [1, ∞[, r, α > 0 and A < ∞:

(2.12)    E^P[ ‖ξ(y) − ξ(x)‖^q ] ≤ A ‖y − x‖^{d+α},    x, y ∈ B(r).

Then for all λ > 0:

(2.13)    P( sup_{x,y∈B(r)} ‖ξ(y) − ξ(x)‖ / ‖y − x‖^β ≥ λ ) ≤ A B / λ^q,

where β = α/(2q) and B < ∞ depends only on d, q, r and α.
PROOF. Let ρ := 2d + α/2. Then

    E^P[ ∫_{B(r)} ∫_{B(r)} ( ‖ξ(y) − ξ(x)‖ / ‖y − x‖^{ρ/q} )^q dx dy ] ≤ A B′,

where

    B′ := ∫_{B(r)} ∫_{B(r)} ‖y − x‖^{−d+α/2} dx dy < ∞.

Next, set

    Y(ω) := ∫_{B(r)} ∫_{B(r)} ( ‖ξ(y, ω) − ξ(x, ω)‖^q / ‖y − x‖^ρ ) dx dy.

Then, by Fubini's theorem, E^P[Y] ≤ A B′, and so:

    P( Y ≥ λ^q ) ≤ A B′ / λ^q,    λ > 0.

In addition, by (2.10) applied with ψ(t) := t^q and p(u) := u^{ρ/q}:

    ‖ξ(y, ω) − ξ(x, ω)‖ ≤ 8 ∫₀^{‖y−x‖} ( 4^{d+1} Y(ω) / (γ u^{2d}) )^{1/q} d( u^{ρ/q} ) ≤ C Y(ω)^{1/q} ‖y − x‖^β,

since (ρ − 2d)/q = α/(2q) = β. Combining this with the preceding estimate on P( Y ≥ λ^q ) yields (2.13).
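For a one-dimensional Brownian increment B_t − B_s, which is Gaussian with variance t − s, one has E|B_t − B_s|⁴ = 3(t − s)², so (2.12) holds with d = 1, q = 4, α = 1, and A = 3; this is the case relevant to Wiener measure in §I.4. A seeded Monte Carlo sketch (ours, not the text's):

```python
import random, math

# E|B_t - B_s|^4 = 3 (t - s)^2 for a Gaussian increment with variance t - s,
# i.e. hypothesis (2.12) of Kolmogorov's criterion with d = 1, q = 4, alpha = 1.
# Seeded Monte Carlo estimate of the fourth moment.

rng = random.Random(0)
t_minus_s = 0.37
n = 200_000
# sample the increment directly: sqrt(t - s) times a standard normal
m4 = sum(rng.gauss(0.0, math.sqrt(t_minus_s)) ** 4 for _ in range(n)) / n

exact = 3.0 * t_minus_s ** 2
assert abs(m4 - exact) / exact < 0.1   # statistical error is about 1% here
```

With q = 4 and α = 1, the criterion gives Hölder continuity of exponent β = α/(2q) = 1/8 on this quantitative formulation; sharper exponents follow from the same moment bound via the usual Kolmogorov-Chentsov refinement.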
(2.14). COROLLARY. Let Ω = C([0, ∞[, R^N) and suppose that Γ ⊂ M₁(Ω) has the properties that

    lim_{R→∞} sup_{P∈Γ} P( ‖x(0)‖ ≥ R ) = 0,

and that for each T > 0:

    sup_{P∈Γ} sup_{0≤s<t≤T} E^P[ ‖x(t) − x(s)‖^q ] / (t − s)^{1+α} < ∞

for some q ∈ [1, ∞[ and α > 0. Then Γ is relatively compact.

§I.3. Constructing measures on C([0, ∞[, R^N)

Throughout this section, and very often in what follows, Ω denotes the Polish space C([0, ∞[, R^N), M := B(Ω), and M_t := σ( x(s) : 0 ≤ s ≤ t ), t ≥ 0.

(3.1). EXERCISE. Check that M = σ( ∪_{t≥0} M_t ). In particular, conclude that if P, Q ∈ M₁(Ω) satisfy

    P( x(t₀) ∈ Γ₀, …, x(t_n) ∈ Γ_n ) = Q( x(t₀) ∈ Γ₀, …, x(t_n) ∈ Γ_n )

for all n ∈ N, 0 ≤ t₀ < ··· < t_n, and Γ₀, …, Γ_n ∈ B(R^N), then P = Q.
Next, for each n ∈ N and 0 ≤ t₀ ≤ ··· ≤ t_n, suppose that P_{t₀,…,t_n} ∈ M₁( (R^N)^{n+1} ), and assume that the family { P_{t₀,…,t_n} } is consistent in the sense that

(3.2)    P_{t₀,…,t_{k−1},t_{k+1},…,t_n}( Γ₀ × ··· × Γ_{k−1} × Γ_{k+1} × ··· × Γ_n ) = P_{t₀,…,t_n}( Γ₀ × ··· × Γ_{k−1} × R^N × Γ_{k+1} × ··· × Γ_n )

for all n ∈ N, 0 ≤ k ≤ n, t₀ < ··· < t_n, and Γ₀, …, Γ_n ∈ B(R^N).

(3.3). EXAMPLE. One of the most important sources of consistent families is the class of (Markov) transition probability functions. Namely, the function P(s, x; t, ·), defined for 0 ≤ s < t and x ∈ R^N and taking values in M₁(R^N), is called a transition probability function on R^N if it is measurable and satisfies the Chapman-Kolmogorov equation:

(3.4)    P(s, x; u, ·) = ∫_{R^N} P(t, y; u, ·) P(s, x; t, dy)

for all 0 ≤ s < t < u and x ∈ R^N. Given an initial distribution µ₀ ∈ M₁(R^N), we associate with µ₀ and P(s, x; t, ·) the consistent family { P_{t₀,…,t_n} } determined by

    P_{t₀,…,t_n}( Γ₀ × ··· × Γ_n ) = ∫_{Γ₀} µ₀(dx₀) ∫_{Γ₁} P(t₀, x₀; t₁, dx₁) ··· ∫_{Γ_n} P(t_{n−1}, x_{n−1}; t_n, dx_n).
(3.5). THEOREM. Let { P_{t₀,…,t_n} } be a consistent family, and assume that for each T > 0:

(3.6)    sup_{0≤s<t≤T} ∫ ( ‖y − x‖^q / (t − s)^{1+α} ) P_{s,t}(dx × dy) < ∞

for some q ∈ [1, ∞[ and α > 0. Then there exists a unique P ∈ M₁(Ω) such that P_{t₀,…,t_n} = P ∘ (x(t₀), …, x(t_n))^{−1} for all n ∈ N and 0 ≤ t₀ < ··· < t_n. (Throughout, P ∘ φ^{−1}(Γ) := P( φ^{−1}(Γ) ).)

PROOF. The uniqueness is immediate from (3.1). To prove existence, define for m ∈ N the map Φ_m : (R^N)^{4^m+1} → Ω so that

    x( t, Φ_m(x₀, …, x_{4^m}) ) = x_k + (2^m t − k)(x_{k+1} − x_k)  if t ∈ [k/2^m, (k+1)/2^m[, k = 0, …, 4^m − 1,
    x( t, Φ_m(x₀, …, x_{4^m}) ) = x_{4^m}  if t ≥ 2^m,

and set P_m := P_{0, 1/2^m, …, 4^m/2^m} ∘ Φ_m^{−1}. Then, by Corollary (2.14), (P_m)_{m∈N} is relatively compact in M₁(Ω). Moreover, if P is any limit of (P_m)_{m∈N}, then

(3.7)    E^P[ φ₀(x(t₀)) ··· φ_n(x(t_n)) ] = ∫ φ₀(x₀) ··· φ_n(x_n) P_{t₀,…,t_n}(dx₀ × ··· × dx_n)

for all n ∈ N, dyadic 0 ≤ t₀ < ··· < t_n, and φ₀, …, φ_n ∈ C_b(R^N). Since both sides of (3.7) are continuous with respect to (t₀, …, t_n), it follows that P has the required property.

(3.8). EXERCISE. Use (3.6) to check the claims, made in the preceding proof, that (P_m)_{m∈N} is relatively compact and that the right-hand side of (3.7) is continuous with respect to (t₀, …, t_n). Also, show that if P(s, x; t, ·) is a transition probability function that satisfies

(3.9)    sup_{0≤s<t≤T, x∈R^N} ∫ ( ‖y − x‖^q / (t − s)^{1+α} ) P(s, x; t, dy) < ∞

for each T > 0 and some q ∈ [1, ∞[ and α > 0, then the family associated with any initial distribution and P(s, x; t, ·) satisfies (3.6).
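The map Φ_m in the proof of Theorem (3.5) is just piecewise-linear interpolation of the values x₀, …, x_{4^m} placed at the dyadic times k/2^m, frozen after time 2^m. A minimal one-dimensional sketch of the construction (our implementation, not the text's):

```python
# Phi_m from the proof of Theorem (3.5): given values x_0, ..., x_{4^m} at the
# dyadic times k / 2^m, produce the continuous path that is linear on each
# interval [k/2^m, (k+1)/2^m] and constant after time 2^m.

def phi_m(values, m):
    two_m = 2 ** m
    K = len(values) - 1              # K = 4^m, the last grid index

    def path(t):
        if t >= K / two_m:           # beyond the grid: frozen at the last value
            return values[K]
        k = int(t * two_m)           # t lies in [k/2^m, (k+1)/2^m[
        frac = t * two_m - k
        return values[k] + frac * (values[k + 1] - values[k])

    return path

m = 1                                # grid 0, 1/2, 1, 3/2, 2 (4^1 + 1 points)
vals = [0.0, 1.0, -1.0, 2.0, 0.5]
x = phi_m(vals, m)

for k, v in enumerate(vals):         # the path interpolates the grid values
    assert abs(x(k / 2 ** m) - v) < 1e-12
assert x(10.0) == vals[-1]           # constant after time 2^m
assert abs(x(0.25) - 0.5) < 1e-12    # linear between 0 and 1/2
```

Pushing the finite-dimensional distribution P_{0, 1/2^m, …, 4^m/2^m} forward through this map produces the measures P_m on path space whose compactness is then checked with Corollary (2.14).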
§I.4. Wiener measure: some elementary properties

We continue with the notation used in the preceding section. The classic example of a measure on Ω is the one constructed by N. Wiener. Namely, set P(s, x; t, dy) := g(t − s, y − x) dy, where

(4.1)    g(s, x) := (2πs)^{−N/2} exp( −‖x‖²/(2s) )

is the (standard) Gauss (or Weierstrass) kernel. It is an easy computation to check that

(4.2)    ∫ exp( Σ_{j=1}^N θ_j y_j ) g(t, y) dy = exp( (t/2) Σ_{j=1}^N θ_j² )

for any t > 0 and θ₁, …, θ_N ∈ C; and from (4.2) one can easily show that P(s, x; t, ·) is a transition probability function which satisfies

(4.3)    ∫ ‖y − x‖^q P(s, x; t, dy) = C_N(q) (t − s)^{q/2}

for each q ∈ [1, ∞[. In particular, (3.9) holds with q = 4 and α = 1. The measure P ∈ M₁(Ω) corresponding to an initial distribution µ₀ and this P(s, x; t, ·) is called the N-dimensional Wiener measure with initial distribution µ₀, and is denoted by W_{µ₀}. In particular, when µ₀ = δ_x, we use W_x in place of W_{δ_x} and refer to W_x as the N-dimensional Wiener measure starting at x; and when x = 0, we shall use W (or, when the dimension is emphasized, W^{(N)}) instead of W₀ and will call W the N-dimensional Wiener measure. In this connection, we introduce here the notion of an N-dimensional Wiener process. Namely, given a probability space (E, F, P), we shall say that β : [0, ∞[ × E → R^N is an N-dimensional Wiener process under P if β is measurable, t ↦ β(t) is P-almost surely continuous, and P ∘ β(·)^{−1} = W^{(N)}.

(4.4). EXERCISE. Identifying C([0, ∞[, R^N) with the product C([0, ∞[, R)^N, show that W^{(N)} = (W^{(1)})^N. In addition, show that −x(·) is a Wiener process under W, and that W_x = W ∘ T_x^{−1}, where T_x : Ω → Ω is given by x(·, T_x(ω)) = x + x(·, ω). Finally, for a given s ≥ 0, let ω ↦ W^s_ω be the r.c.p.d. of W | M_s. Show that, for W-almost all ω, x(· + s) − x(s, ω) is a Wiener process under W^s_ω, and use this to conclude that W^s_ω ∘ θ_s^{−1} = W_{x(s,ω)} (a.s. W), where θ_s : Ω → Ω is the time shift map given by x(·, θ_s ω) = x(· + s, ω).

Thus far we have discussed the Wiener measure from the Markovian point of view (i.e., in terms of transition probability functions). An equally important way to approach this subject is from the Gaussian standpoint. From the Gaussian standpoint, W is characterized by the equation
(4.5)    E^W[ exp( √−1 Σ_{k=1}^n ⟨θ_k, x(t_k)⟩_{R^N} ) ] = exp( −½ Σ_{k,ℓ=1}^n (t_k ∧ t_ℓ) ⟨θ_k, θ_ℓ⟩_{R^N} )

for all n ∈ Z+, t₁, …, t_n ∈ [0, ∞[, and θ₁, …, θ_n ∈ R^N.

(4.6). EXERCISE. Check that (4.5) holds and that it characterizes W. Next, define Φ_λ : Ω → Ω for λ > 0 by x(t, Φ_λ(ω)) := λ^{−1/2} x(λt, ω), and using (4.5), show that W = W ∘ Φ_λ^{−1}. This invariance property of W is often called the Brownian scaling property.

In order to describe the time inversion property of Wiener processes, one must first
check that W( lim_{t→∞} x(t)/t = 0 ) = 1. To this end, note that

    W( sup_{t≥m} ‖x(t)‖/t ≥ ε ) ≤ Σ_{n=m}^∞ W( sup_{n≤t≤n+1} ‖x(t)‖ ≥ nε ),

and that, by Brownian scaling,

    W( sup_{n≤t≤n+1} ‖x(t)‖ ≥ nε ) ≤ W( sup_{0≤t≤2} ‖x(t)‖ ≥ ε√n ).

Now combine (4.3) with (2.13) to conclude that

    W( sup_{n≤t≤n+1} ‖x(t)‖ ≥ nε ) ≤ C / (n² ε⁴),

and therefore that W( lim_{t→∞} x(t)/t = 0 ) = 1. The Brownian time inversion property can now be stated as follows. Define β(0) := 0, and for t > 0, set β(t) := t x(1/t). Using the preceding and (4.5), check that β(·) is a Wiener process under W.

We close this chapter with a justly famous result due to Wiener. In the next chapter we shall derive this same result from a much more sophisticated viewpoint.

(4.7). THEOREM. W-almost no ω ∈ Ω is Lipschitz continuous at even one t ≥ 0.

PROOF. In view of (4.4), it suffices to treat the case when N = 1 and to show that W-almost no ω is Lipschitz continuous at any t ∈ [0, 1[. But if ω were Lipschitz continuous at t ∈ [0, 1[, then there would exist ℓ, m ∈ Z+ such that |x((k+1)/n) − x(k/n)| would be less than ℓ/n for all n ≥ m and three consecutive k's between 0 and n + 2. Hence, it suffices to show that the sets

    B(ℓ, m) := ∩_{n=m}^∞ ∪_{k=1}^{n} ∩_{j=0}^{2} { |x((k+j+1)/n) − x((k+j)/n)| ≤ ℓ/n },    ℓ, m ∈ Z+,

have W-measure 0. But by (4.1) and Brownian scaling:

    W( B(ℓ, m) ) ≤ lim sup_{n→∞} n W( |x(1/n)| ≤ ℓ/n )³ = lim sup_{n→∞} n W( |x(1)| ≤ ℓ/√n )³ = lim sup_{n→∞} n ( ∫_{−ℓ/√n}^{ℓ/√n} g(1, y) dy )³ = 0.
(4.8). REMARK. P. Lévy obtained a far sharper version of the preceding; namely, he showed that

(4.9)    W( lim_{δ↓0} sup_{0≤s<t≤1, t−s≤δ} |x(t) − x(s)| / √( 2δ log(1/δ) ) = 1 ) = 1.

The lower bound on the lim sup is a quite elementary application of the Borel-Cantelli lemma (cf. [McKean, 1969, p. 14]), but the upper bound is a little more difficult. A derivation of the less sharp estimate in which the upper bound 1 is replaced by 8 can be based on the reasoning used to prove (2.13). See [Stroock and Varadhan, 2006, §2.4.8] for more details.
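As a concrete check of the Chapman-Kolmogorov equation (3.4) for the Wiener transition function, one can verify numerically that the one-dimensional Gauss kernel (4.1) satisfies ∫ g(t − s, y − x) g(u − t, z − y) dy = g(u − s, z − x). The grid and parameters below are our illustrative choices:

```python
import math

# Chapman-Kolmogorov for the Gauss kernel (4.1) in one dimension:
#   int g(t - s, y - x) g(u - t, z - y) dy = g(u - s, z - x),
# checked by a Riemann sum over a wide y-grid.

def g(s, x):                       # standard one-dimensional Gauss kernel
    return math.exp(-x * x / (2.0 * s)) / math.sqrt(2.0 * math.pi * s)

s, t, u = 0.0, 0.7, 1.5            # intermediate time t between s and u
x, z = 0.2, -0.4                   # start and end points

h, L = 0.01, 12.0                  # step and half-width of the y-grid
ys = [-L + i * h for i in range(int(2 * L / h) + 1)]
conv = sum(g(t - s, y - x) * g(u - t, z - y) for y in ys) * h

assert abs(conv - g(u - s, z - x)) < 1e-6
```

The identity is, of course, just the statement that the sum of independent Gaussians with variances t − s and u − t is Gaussian with variance u − s, which is also how (4.2) is used in the text.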
CHAPTER II
Diffusions and Martingales

§II.1. A brief introduction to classical diffusion theory

We continue with the notation used in §I.3. Let S⁺(R^N) denote the space of positive semi-definite matrices, and for given bounded measurable functions a : [0, ∞[ × R^N → S⁺(R^N) and b : [0, ∞[ × R^N → R^N, define the operator-valued map

(1.1)    [0, ∞[ ∋ t ↦ L_t := ½ Σ_{i,j=1}^N a^{ij}(t, x) ∂_{x_i} ∂_{x_j} + Σ_{i=1}^N b^i(t, x) ∂_{x_i}.
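The action of L_t on a smooth f can be checked against finite differences. The constant coefficients a and b below are illustrative choices of ours (any symmetric positive semi-definite a would do), not values taken from the text:

```python
# The operator (1.1) applied to a smooth test function, checked against a
# second-order finite-difference approximation.  a and b are illustrative.

N = 2
a = [[2.0, 0.5],
     [0.5, 1.0]]                   # symmetric, positive definite
b = [0.3, -0.2]

f = lambda x: x[0] * x[0] + x[0] * x[1]      # smooth (quadratic) test function

def L_exact(x):
    # (1/2)[a11 d11f + 2 a12 d12f + a22 d22f] + b . grad f, computed by hand:
    # d11f = 2, d12f = 1, d22f = 0; d1f = 2 x0 + x1, d2f = x0
    return 0.5 * (a[0][0] * 2.0 + 2.0 * a[0][1] * 1.0) \
           + b[0] * (2.0 * x[0] + x[1]) + b[1] * x[0]

def L_finite_diff(x, h=1e-4):
    def shift2(i, si, j, sj):      # x shifted by si*h and sj*h in coords i, j
        y = list(x); y[i] += si * h; y[j] += sj * h
        return y
    def d2(i, j):                  # central second difference (works for i == j)
        return (f(shift2(i, 1, j, 1)) - f(shift2(i, 1, j, -1))
                - f(shift2(i, -1, j, 1)) + f(shift2(i, -1, j, -1))) / (4 * h * h)
    def d1(i):
        return (f(shift2(i, 1, i, 0)) - f(shift2(i, -1, i, 0))) / (2 * h)
    return 0.5 * sum(a[i][j] * d2(i, j) for i in range(N) for j in range(N)) \
           + sum(b[i] * d1(i) for i in range(N))

x0 = [0.7, -1.3]
assert abs(L_exact(x0) - L_finite_diff(x0)) < 1e-5
```

For a quadratic f the central differences are exact up to rounding, which is why so tight a tolerance is safe.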
The following theorem can be proved using quite elementary analytic methods (cf. [Stroock and Varadhan, 2006, Chapter 3]).

(1.2). THEOREM. Assume that a ∈ C_b^{0,3}( [0, ∞[ × R^N, S⁺(R^N) ) and b ∈ C_b^{0,2}( [0, ∞[ × R^N, R^N ). Then there is a unique transition probability function P(s, x; t, ·) on R^N such that for each T > 0 and all f ∈ C_b^{1,2}([0, T] × R^N):

(1.3)    ∫ f(T, y) P(s, x; T, dy) − f(s, x) = ∫_s^T dt ∫ (∂_t + L_t) f(t, y) P(s, x; t, dy),    (s, x) ∈ [0, T] × R^N.

Moreover, if T > 0 and φ ∈ C₀^∞(R^N), then

    [0, T] × R^N ∋ (s, x) ↦ u_{T,φ}(s, x) := ∫ φ(y) P(s, x; T, dy)

is an element of C_b^{1,2}([0, T] × R^N).

(1.4). REMARK. Notice that when L_t := ½Δ (i.e., when a := I and b := 0), P(s, x; t, dy) = g(t − s, y − x) dy, where g is the Gauss kernel given in I-(4.1).

Throughout the rest of this section we shall be working with the situation described in Theorem (1.2). We first observe that when φ ∈ C₀^∞(R^N), u_{T,φ} is the unique u ∈ C_b^{1,2}([0, T] × R^N) such that

(1.5)    (∂_s + L_s)u = 0 on [0, T] × R^N,  and  lim_{s↑T} u(s, ·) = φ.

The uniqueness follows from (1.3) upon taking f = u. To prove that u = u_{T,φ} satisfies (1.5), note that

    ∂_s u(s, x) = lim_{h↓0} (1/h) ( u(s + h, x) − u(s, x) )
                = lim_{h↓0} (1/h) ( u(s + h, x) − ∫ u(s + h, y) P(s, x; s + h, dy) )
                = −lim_{h↓0} (1/h) ∫_s^{s+h} dt ∫ [ L_t u(s + h, ·) ](y) P(s, x; t, dy)
                = −L_s u(s, x),
where we have used the Chapman-Kolmogorov equation I-(3.4) followed by (1.3).

We next prove an important estimate for the tail distribution of the measure P(s, x; t, ·).

(1.6). LEMMA. Let A := sup_{t,x} ‖a(t, x)‖_op and B := sup_{t,x} ‖b(t, x)‖. Then for all 0 ≤ s < T and R > √N B(T − s):

(1.7)    P( s, x; T, R^N \ B(x, R) ) ≤ 2N exp( −( R − √N B(T − s) )² / ( 2N A(T − s) ) ).

In particular, for each T > 0 and q ∈ [1, ∞[, there is a C(T, q) < ∞, depending only on N, A, and B, such that

(1.8)    ∫ ‖y − x‖^q P(s, x; t, dy) ≤ C(T, q) (t − s)^{q/2},    0 ≤ s < t ≤ T.

PROOF. Let A and B be any numbers which are strictly greater than the ones specified, and let T > 0 and x ∈ R^N be given. Choose η ∈ C^∞(R) so that 0 ≤ η ≤ 1, η|_{[−1,1]} = 1, and η|_{R \ ]−2,2[} = 0. Given M ∈ Z+, define Φ_M : R^N → R^N by

    Φ_M(y)^i := ∫₀^{y^i − x^i} η(ξ/M) dξ,    i = 1, 2, …, N,

and consider the function

    f_{M,θ}(t, y) := exp( ⟨θ, Φ_M(y)⟩_{R^N} + ( A‖θ‖²/2 + B‖θ‖ )(T − t) )

for θ ∈ R^N and (t, y) ∈ [0, T] × R^N. Clearly, f_{M,θ} ∈ C_b^∞([0, T] × R^N). Moreover, for sufficiently large M's, (∂_t + L_t) f_{M,θ} ≤ 0. Thus, by (1.3),

    ∫ f_{M,θ}(T, y) P(s, x; T, dy) ≤ f_{M,θ}(s, x)

for all sufficiently large M's. After letting M → ∞ and applying Fatou's lemma, we get:

(1.9)    ∫ exp( ⟨θ, y − x⟩_{R^N} ) P(s, x; T, dy) ≤ exp( ( A‖θ‖²/2 + B‖θ‖ )(T − s) ).

Since (1.9) holds for all choices of A and B strictly larger than those specified, it must hold for the ones which were given. To complete the proof of (1.7), note that

    P( s, x; T, R^N \ B(x, R) ) ≤ Σ_{i=1}^N P( s, x; T, { y : |y^i − x^i| ≥ R/√N } ) ≤ 2N max_{θ∈S^{N−1}} P( s, x; T, { y : ⟨θ, y − x⟩_{R^N} ≥ R/√N } ),

and, by (1.9),

    P( s, x; T, { y : ⟨θ, y − x⟩_{R^N} ≥ R/√N } ) ≤ e^{−λR/√N} ∫ exp( λ⟨θ, y − x⟩_{R^N} ) P(s, x; T, dy) ≤ exp( −λ( R − √N B(T − s) )/√N + λ² A(T − s)/2 )

for all θ ∈ S^{N−1} and λ > 0. Hence, if R > √N B(T − s) and we take λ = ( R − √N B(T − s) )/( √N A(T − s) ), we arrive at (1.7).
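Specializing (1.7) to one-dimensional Brownian motion (a ≡ 1, b ≡ 0, so A = 1, B = 0, N = 1), the exact tail P( s, x; T, {y : |y − x| > R} ) = erfc( R/√(2(T − s)) ) should be dominated by 2 exp( −R²/(2(T − s)) ). A quick numerical confirmation (our sketch):

```python
import math

# Lemma (1.6) for one-dimensional Brownian motion: A = 1, B = 0, N = 1.
# Exact Gaussian tail versus the bound (1.7), which here reads
#   P(|y - x| > R) <= 2 exp(-R^2 / (2 (T - s))).

T_minus_s = 0.5
for R in [0.5, 1.0, 2.0, 4.0]:
    exact = math.erfc(R / math.sqrt(2.0 * T_minus_s))   # P(|B_{T-s}| > R)
    bound = 2.0 * math.exp(-R * R / (2.0 * T_minus_s))
    assert exact <= bound
```

In this Gaussian case the bound captures the correct exponential rate; the factor 2N and the drift correction √N B(T − s) are the price paid for general bounded coefficients.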
In view of (1.8), it is clear from I-(3.8) that for each s ≥ 0 and each initial distribution µ₀ ∈ M₁(R^N) there is a unique P_{s,µ₀} ∈ M₁(Ω) such that

(1.10)    P_{s,µ₀}( x(t₀) ∈ Γ₀, …, x(t_n) ∈ Γ_n ) = ∫_{Γ₀} µ₀(dx₀) ∫_{Γ₁} P(s + t₀, x₀; s + t₁, dx₁) ··· ∫_{Γ_n} P(s + t_{n−1}, x_{n−1}; s + t_n, dx_n)

for all n ∈ N, 0 = t₀ < ··· < t_n and Γ₀, …, Γ_n ∈ B(R^N). We will use the notation P_{s,x} in place of P_{s,δ_x}.
(1.11). THEOREM. The map [0, ∞[ × ℝ^N ∋ (s, x) ↦ P_{s,x} ∈ M₁(Ω) is continuous, and for each μ₀ ∈ M₁(ℝ^N), P_{s,μ₀} = ∫ P_{s,x} μ₀(dx). Moreover, P_{s,x} is the one and only P ∈ M₁(Ω) which satisfies P(x(0) = x) = 1 and

(1.12)  P( x(t₂) ∈ Γ | M_{t₁} ) = P(s + t₁, x(t₁); s + t₂, Γ)  (a.s. P)

for all 0 ≤ t₁ < t₂ and Γ ∈ B(ℝ^N). Finally, if t ≥ 0 and ω ↦ P^t_{s,x,ω} is a r.c.p.d. of P_{s,x} | M_t, then P^t_{s,x,ω} ∘ θ_t^{−1} = P_{s+t, x(t,ω)} for P_{s,x}-almost every ω.
PROOF. First observe that, by the last part of Theorem (1.2), [0,T] × ℝ^N ∋ (s,x) ↦ ∫ ϕ(y) P(s,x;T,dy) is bounded and continuous for all ϕ ∈ C₀^∞(ℝ^N). Combining this with (1.7), one sees that this continues to be true for all ϕ ∈ C_b(ℝ^N). Hence, by (1.10), for all n ∈ ℤ⁺, 0 < t₁ < ⋯ < t_n, and ϕ₁, …, ϕ_n ∈ C_b(ℝ^N), E^{P_{s,x}}[ϕ₁(x(t₁)) ⋯ ϕ_n(x(t_n))] is a bounded continuous function of (s,x) ∈ [0,∞[ × ℝ^N. Now suppose that (s_k, x_k) → (s,x) in [0,∞[ × ℝ^N and observe that, by (1.8) and I-(2.14), the sequence (P_{s_k,x_k})_{k∈ℕ} is relatively compact in M₁(Ω). Moreover, if (P_{s_{k′},x_{k′}})_{k′∈ℕ} is a convergent subsequence and P is its limit, then

E^P[ϕ₁(x(t₁)) ⋯ ϕ_n(x(t_n))] = lim_{k′→∞} E^{P_{s_{k′},x_{k′}}}[ϕ₁(x(t₁)) ⋯ ϕ_n(x(t_n))] = E^{P_{s,x}}[ϕ₁(x(t₁)) ⋯ ϕ_n(x(t_n))]

for all n ∈ ℤ⁺, 0 < t₁ < ⋯ < t_n, and ϕ₁, …, ϕ_n ∈ C_b(ℝ^N). Hence P = P_{s,x}, and so we conclude that P_{s_k,x_k} → P_{s,x}. The fact that P_{s,μ₀} = ∫ P_{s,x} μ₀(dx) is elementary now that we know that (s,x) ↦ P_{s,x} is measurable.

Our next step is to prove the final assertion concerning P^t_{s,x,ω}. When t = 0, there is nothing to do. Assume that t > 0. Given m, n ∈ ℤ⁺, 0 < σ₁ < ⋯ < σ_m < t, 0 < τ₁ < ⋯ < τ_n, and Λ₁, …, Λ_m, Γ₁, …, Γ_n ∈ B(ℝ^N), set A := {x(σ₁) ∈ Λ₁, …, x(σ_m) ∈ Λ_m} and B := {x(τ₁) ∈ Γ₁, …, x(τ_n) ∈ Γ_n}. Then

∫_A P_{s+t,x(t,ω)}(B) P_{s,x}(dω)
  = ∫_{Λ₁} P(s, x; s+σ₁, dx₁) ⋯ ∫_{Λ_m} P(s+σ_{m−1}, x_{m−1}; s+σ_m, dx_m)
    × ∫_{ℝ^N} P(s+σ_m, x_m; s+t, dy₀)
    × ∫_{Γ₁} P(s+t, y₀; s+t+τ₁, dy₁) ⋯ ∫_{Γ_n} P(s+t+τ_{n−1}, y_{n−1}; s+t+τ_n, dy_n)
  = P_{s,x}( x(σ₁) ∈ Λ₁, …, x(σ_m) ∈ Λ_m, x(t+τ₁) ∈ Γ₁, …, x(t+τ_n) ∈ Γ_n )
  = ∫_A P^t_{s,x,ω} ∘ θ_t^{−1}(B) P_{s,x}(dω).

Hence, for all A ∈ M_t and B ∈ M,

∫_A P^t_{s,x,ω} ∘ θ_t^{−1}(B) P_{s,x}(dω) = ∫_A P_{s+t,x(t,ω)}(B) P_{s,x}(dω).

Therefore, for each B ∈ M,

P^t_{s,x,ω} ∘ θ_t^{−1}(B) = P_{s+t,x(t,ω)}(B)  (a.s. P_{s,x}).
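The Chapman-Kolmogorov identity underlying such computations can be checked numerically for the one-dimensional heat kernel (an illustration of mine, not from the text; the kernel p below is the classical Gaussian transition density of Brownian motion, i.e. the case L_t = ½ d²/dx²).

```python
import math

# Sketch (mine): verify numerically that
#     integral of p(s, x; u, z) * p(u, z; t, y) dz  ==  p(s, x; t, y)
# for the heat kernel p(s, x; t, y) = exp(-(y-x)^2/(2(t-s))) / sqrt(2*pi*(t-s)).
def p(s, x, t, y):
    v = t - s
    return math.exp(-(y - x) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

x, y, s, u, t = 0.3, -0.7, 0.0, 0.4, 1.0

# trapezoidal rule over a generous window; the integrand is a Gaussian tail
# outside [-12, 12], so the truncation error is negligible
lo, hi, n = -12.0, 12.0, 4000
h = (hi - lo) / n
zs = [lo + k * h for k in range(n + 1)]
vals = [p(s, x, u, z) * p(u, z, t, y) for z in zs]
integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
direct = p(s, x, t, y)
```

Because the integrand decays to zero with all its derivatives at the endpoints, the trapezoidal rule is effectively exact here.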
Finally, we must show that P_{s,x} is characterized by (1.12). That P_{s,x} satisfies (1.12) is a special case of the result proved in the preceding paragraph. On the other hand, if P ∈ M₁(Ω) satisfies (1.12), then one can easily work by induction on n ∈ ℕ to prove that P satisfies (1.10) with μ₀ = δ_x.

(1.13). COROLLARY. For each (s,x) ∈ [0,∞[ × ℝ^N, P_{s,x} is the unique P ∈ M₁(Ω) which satisfies P(x(0) = x) = 1 and

(1.14)  E^P[ ϕ(x(t₂)) − ϕ(x(t₁)) | M_{t₁} ] = E^P[ ∫_{t₁}^{t₂} (L_{s+t}ϕ)(x(t)) dt | M_{t₁} ]  (a.s. P)

for all 0 ≤ t₁ ≤ t₂ and ϕ ∈ C₀^∞(ℝ^N).

PROOF. To see that P_{s,x} satisfies (1.14), note that, by (1.12) and (1.3),

E^{P_{s,x}}[ ϕ(x(t₂)) − ϕ(x(t₁)) | M_{t₁} ]
  = ∫ ϕ(y) P(s+t₁, x(t₁); s+t₂, dy) − ϕ(x(t₁))
  = ∫_{t₁}^{t₂} ( ∫ (L_{s+t}ϕ)(y) P(s+t₁, x(t₁); s+t, dy) ) dt
  = ∫_{t₁}^{t₂} E^{P_{s,x}}[ (L_{s+t}ϕ)(x(t)) | M_{t₁} ] dt
  = E^{P_{s,x}}[ ∫_{t₁}^{t₂} (L_{s+t}ϕ)(x(t)) dt | M_{t₁} ]  (a.s. P_{s,x}).

Conversely, if P satisfies (1.14), then it is easy to check that

(1.15)  E^P[ f(t₂, x(t₂)) − f(t₁, x(t₁)) | M_{t₁} ] = E^P[ ∫_{t₁}^{t₂} ((∂_t + L_{s+t})f)(t, x(t)) dt | M_{t₁} ]  (a.s. P)

for all 0 ≤ t₁ < t₂ and f ∈ C_b^{1,2}([0,∞[ × ℝ^N). In particular, if ϕ ∈ C₀^∞(ℝ^N) and u(t,y) := ∫ ϕ(η) P(s+t, y; s+t₂, dη), then, by the last part of Theorem (1.2) together with (1.5), u ∈ C_b^{1,2}([0,∞[ × ℝ^N), (∂_t + L_{s+t})u = 0 for t ∈ [0,t₂[, and u(t₂, ·) = ϕ. Hence, from (1.15) with f = u, E^P[ϕ(x(t₂)) | M_{t₁}] = u(t₁, x(t₁)) (a.s. P). Combined with P(x(0) = x) = 1, this proves that P satisfies the condition in (1.12) characterizing P_{s,x}.
(1.16). REMARK. The characterization of P_{s,x} given in Corollary (1.13) has the great advantage that it only involves L_t and does not make direct reference to P(s,x;t,·). Since, in most situations, L_t is a much more primitive quantity than the associated quantity P(s,x;t,·), it should be clear that there is a considerable advantage to having P_{s,x} characterized directly in terms of L_t itself. In addition, (1.14) has great intuitive appeal. What it says is that, in some sense, P_{s,x} sees the paths ω as the "integral curves of L_t initiating from x at time s." Indeed, (1.14) can be converted into the statement that

E^P[ ϕ(x(t+h)) − ϕ(x(t)) | M_t ] = h (L_tϕ)(x(t)) + o(h),  h ↓ 0,

which, in words, says that "based on complete knowledge of the past up until time t, the best prediction about the P-value of ϕ(x(t+h)) − ϕ(x(t)) is, up to lower order terms in h, h(L_tϕ)(x(t))." This intuitive idea is expanded upon in the following exercise.

(1.17). EXERCISE. Assume that a ≡ 0 and that b is independent of t. Show, directly from (1.14), that in this case P_{0,x} = δ_{X(·,x)}, where X(·,x) is the integral curve of the vector field b starting from x. In fact, you can conclude this fact about P_{0,x} from P(x(0) = x) = 1 and the unconditional version of (1.14):

(1.14)′  E^P[ ϕ(x(t₂)) − ϕ(x(t₁)) ] = E^P[ ∫_{t₁}^{t₂} (L_{s+t}ϕ)(x(t)) dt ].

Finally, when L := ½Δ, show that the unconditional statement is not sufficient to characterize W_x. (Hint: let X be an ℝ^N-valued Gaussian random variable with mean 0 and covariance I; denote by P ∈ M₁(Ω) the distribution of the paths t ↦ √t X, and check that (1.14)′ holds with this P but that P ≠ W.)

§II.2. The elements of martingale theory

Let P_{s,x} be as in §II.1. Then (1.14) can be rearranged to be the statement that

E^{P_{s,x}}[ X_ϕ(t₂) | M_{t₁} ] = X_ϕ(t₁)  (a.s. P_{s,x}),  0 ≤ t₁ < t₂,

where

(2.1)
X_ϕ(t) := ϕ(x(t)) − ϕ(x(0)) − ∫₀ᵗ (L_{s+τ}ϕ)(x(τ)) dτ.
Loosely speaking, (2.1) is the statement that t ↦ X_ϕ(t) is "conditionally constant" under P_{s,x} in the sense that X_ϕ(t₁) is "the best prediction about the P_{s,x}-value of X_ϕ(t₂) given perfect knowledge of the past up to time t₁" (cf. Remark (1.16)). Of course, this is another way of viewing the idea that P_{s,x} sees the path ω as an "integral curve of L_t." Indeed, if ω were "truly an integral curve of L_t", we would have that X_ϕ(·, ω) is "truly constant." Since these conditionally constant processes arise a great deal and have many interesting properties, we shall devote this section to explaining a few of the basic facts about them.

Let (E, F, P) be a probability space and (F_t)_{t≥0} a non-decreasing family of sub-σ-algebras of F. A map X on [0,∞[ × E into a measurable space is said to be (F_t)_{t≥0}-progressively measurable if its restriction to [0,T] × E is B([0,T]) × F_T-measurable for each T ≥ 0. A map X on [0,∞[ × E with values in a topological space is said to be, respectively, right-continuous (P-a.s. right-continuous) or continuous (P-a.s. continuous) if for every (P-almost every) ξ ∈ E the map t ↦ X(t,ξ) is right-continuous or continuous.
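To make the "conditionally constant" process (2.1) concrete, here is a small simulation sketch of mine (not from the text), under the assumption L = ½Δ in one dimension (Brownian motion) and ϕ(x) = x², so that (Lϕ) ≡ 1 and the compensated process is x(t)² − x(0)² − t. Its sample mean should stay near 0 at every time.

```python
import math
import random

random.seed(1)

# Hedged sketch: simulate Brownian paths started at 0 and track the
# compensated process X_phi(t) = x(t)**2 - t (phi(x) = x**2, L = (1/2)Delta,
# so (L phi) = 1).  A martingale null at 0 has mean 0 at all times.
paths, steps, dt = 4000, 200, 0.005
checkpoints = (50, 100, 200)          # step indices, i.e. times 0.25, 0.5, 1.0
sums = {c: 0.0 for c in checkpoints}
for _ in range(paths):
    x = 0.0
    for k in range(1, steps + 1):
        x += math.sqrt(dt) * random.gauss(0.0, 1.0)
        if k in checkpoints:
            sums[k] += x * x - k * dt
means = [sums[c] / paths for c in checkpoints]
```

Conditional constancy is of course stronger than having constant mean; the simulation only illustrates the unconditional consequence (1.14)′.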
(2.2). EXERCISE. Show that the notion of progressively measurable coincides with the notion of measurability with respect to the σ-algebra of progressively measurable sets (i.e., those subsets Γ of [0,∞[ × E for which 1_Γ is a progressively measurable function). In addition, show that if X is (F_t)_{t≥0}-adapted in the sense that X(t,·) is F_t-measurable for each t ≥ 0, then X is (F_t)_{t≥0}-progressively measurable if it is right-continuous.

A ℂ-valued map X on [0,∞[ × E is called a martingale if X is a right-continuous, (F_t)_{t≥0}-progressively measurable function such that X(t) ∈ L¹(P) for all t ≥ 0, and

(2.3)  X(t₁) = E^P[ X(t₂) | F_{t₁} ]  (a.s. P),  0 ≤ t₁ < t₂.

Unless it is stated otherwise, it should be assumed that all martingales are real-valued. An ℝ-valued map X on [0,∞[ × E is said to be a submartingale if X is a right-continuous, (F_t)_{t≥0}-progressively measurable function such that X(t) ∈ L¹(P) for every t ≥ 0, and

(2.4)  X(t₁) ≤ E^P[ X(t₂) | F_{t₁} ]  (a.s. P),  0 ≤ t₁ < t₂.

We will often summarize these statements by saying that ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale (submartingale).

(2.5). EXAMPLE. Besides the source of examples provided by (2.1), a natural way in which martingales arise is the following. Let X ∈ L¹(P) and define X(t) := E^P[X | F_t]. Then it is an easy matter to check that ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale. More generally, let Q be a totally finite measure on (E,F) and assume that Q|_{F_t} ≪ P|_{F_t} for each t ≥ 0. Then ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale when

(2.6)  X(t) := dQ|_{F_t} / dP|_{F_t},  t ≥ 0.
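Example (2.5) can be made completely finite (an illustration of mine, not from the text): on E = {−1,+1}³ with the uniform measure and F_n generated by the first n coordinates, conditional expectations are exact averages over atoms, and the tower property makes X(n) := E^P[X | F_n] a martingale, with no approximation involved.

```python
import itertools

# Sketch (mine): E = {-1,+1}^3, P uniform, F_n = sigma(first n coordinates).
omega = list(itertools.product((-1, 1), repeat=3))

def X(w):
    # an arbitrary integrable "terminal" random variable (my choice)
    return w[0] + 2 * w[1] * w[2] + w[2]

def cond_exp(n, prefix):
    # E^P[X | F_n] evaluated on the atom determined by the first n coordinates
    atoms = [w for w in omega if w[:n] == tuple(prefix)]
    return sum(X(w) for w in atoms) / len(atoms)

# Martingale property: averaging X(n+1) over the two one-step continuations
# of an atom of F_n reproduces X(n) exactly (the tower property).
ok = True
for n in range(3):
    for prefix in itertools.product((-1, 1), repeat=n):
        avg = 0.5 * (cond_exp(n + 1, prefix + (-1,)) + cond_exp(n + 1, prefix + (1,)))
        if abs(avg - cond_exp(n, prefix)) > 1e-12:
            ok = False
```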
It turns out that these examples generate, in a reasonable sense, all the examples of martingales.

The following statement is an easy consequence of Jensen's inequality.

(2.7). LEMMA. Let ((X(t))_{t≥0}, (F_t)_{t≥0}, P) be a martingale (submartingale) with values in the closed interval I. Let ϕ be a continuous function on I which is convex (and non-decreasing). If ϕ∘X(t) ∈ L¹(P) for every t ≥ 0, then ((ϕ∘X(t))_{t≥0}, (F_t)_{t≥0}, P) is a submartingale. In particular, if q ∈ [1,∞[ and ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is an L^q-martingale (non-negative L^q-submartingale) (i.e., X(t) ∈ L^q(P) for all t ≥ 0), then ((|X(t)|^q)_{t≥0}, (F_t)_{t≥0}, P) is a submartingale.

(2.8). THEOREM (Doob's inequality). Let ((X(t))_{t≥0}, (F_t)_{t≥0}, P) be a non-negative submartingale. Then, for each T > 0:

(2.9)  P( X*(T) ≥ R ) ≤ (1/R) E^P[ X(T) 1_{{X*(T)≥R}} ],  R > 0,

where

(2.10)  X*(T) := sup_{0≤t≤T} X(t),  T ≥ 0.

In particular, for each T > 0, the family { X(t) : t ∈ [0,T] } is uniformly P-integrable, and so, for every s ≥ 0, X(t) → X(s) in L¹(P) as t ↓ s. Finally, if X(T) ∈ L^q(P) for some T > 0 and q ∈ ]1,∞[, then

(2.11)  E^P[ X*(T)^q ]^{1/q} ≤ (q/(q−1)) E^P[ X(T)^q ]^{1/q},

i.e., ‖X*(T)‖_{L^q(P)} ≤ (q/(q−1)) ‖X(T)‖_{L^q(P)}.
PROOF. Fix R > 0. Given n ∈ ℤ⁺, note that

P( max_{0≤k≤n} X(kT/n) ≥ R ) = Σ_{ℓ=0}^n P( X(ℓT/n) ≥ R and max_{0≤k<ℓ} X(kT/n) < R )
  ≤ (1/R) Σ_{ℓ=0}^n E^P[ X(ℓT/n) 1_{{X(ℓT/n)≥R}} 1_{{max_{0≤k<ℓ} X(kT/n)<R}} ]
  ≤ (1/R) Σ_{ℓ=0}^n E^P[ X(T) 1_{{X(ℓT/n)≥R}} 1_{{max_{0≤k<ℓ} X(kT/n)<R}} ]
  ≤ (1/R) E^P[ X(T) 1_{{X*(T)≥R}} ].

Since max_{0≤k≤n} X(kT/n) → X*(T) as n → ∞ (by right-continuity), the proof of (2.9) is complete.

To prove the uniform P-integrability statement, note that for t ∈ [0,T],

E^P[ X(t) 1_{{X(t)≥R}} ] ≤ E^P[ X(T) 1_{{X(t)≥R}} ] ≤ E^P[ X(T) 1_{{X*(T)≥R}} ].

Since X(T) ∈ L¹(P) and P(X*(T) ≥ R) → 0 as R → ∞, we see that

lim_{R→∞} sup_{t∈[0,T]} E^P[ X(t) 1_{{X(t)≥R}} ] = 0.

Finally, to prove (2.11), we show that for any pair of non-negative random variables X and Y satisfying P(Y ≥ R) ≤ (1/R) E^P[X 1_{{Y≥R}}], R > 0, one has ‖Y‖_{L^q(P)} ≤ (q/(q−1)) ‖X‖_{L^q(P)}, q ∈ ]1,∞[. Clearly, we may assume ahead of time that Y is bounded. The proof then is a simple integration by parts:

E^P[Y^q] = q ∫₀^∞ R^{q−1} P(Y ≥ R) dR ≤ q ∫₀^∞ R^{q−2} E^P[ X 1_{{Y≥R}} ] dR
  = q E^P[ X ∫₀^Y R^{q−2} dR ] = (q/(q−1)) E^P[ X Y^{q−1} ]
  ≤ (q/(q−1)) E^P[X^q]^{1/q} E^P[Y^q]^{(q−1)/q},

where Hölder's inequality is used in the last step.

A function τ : E → [0,∞] is said to be an (F_t)_{t≥0}-stopping time if {τ ≤ t} ∈ F_t for every t ≥ 0. Given a stopping time τ, define F_τ to be the collection of sets Γ ⊂ E such that Γ ∩ {τ ≤ t} ∈ F_t for all t ≥ 0.

(2.12). EXERCISE. In the following, σ and τ are stopping times and X is a progressively measurable function. Prove each of the statements:

i) F_τ is a sub-σ-algebra of F, F_τ = F_T if τ ≡ T, and τ is F_τ-measurable;
ii) E ∋ ξ ↦ X(τ(ξ), ξ) is F_τ-measurable;
iii) σ + τ, σ ∨ τ, and σ ∧ τ are stopping times;
iv) if Γ ∈ F_σ, then Γ ∩ {σ ≤ τ} and Γ ∩ {σ < τ} are in F_{σ∧τ};
v) if σ ≤ τ, then F_σ ⊂ F_τ.
If you get stuck, see [Stroock and Varadhan, 2006, §1.2.4].
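As an empirical illustration of Doob's inequality (2.11) with q = 2 (my own sketch, not part of the text), one can use the simple random walk S_n, a martingale whose absolute value is a non-negative submartingale; (2.11) then gives E[(max_{k≤n}|S_k|)²] ≤ 4 E[S_n²].

```python
import random

random.seed(2)

# Hedged sketch: simple random walk of n = 100 steps, estimating both sides
# of Doob's L^2 inequality  E[(max |S_k|)^2] <= 4 * E[S_n^2]  (= 4n here).
trials, n = 5000, 100
lhs = rhs = 0.0
for _ in range(trials):
    s, m = 0, 0
    for _ in range(n):
        s += random.choice((-1, 1))
        m = max(m, abs(s))
    lhs += m * m
    rhs += s * s
lhs /= trials
rhs /= trials
```

Empirically the ratio lhs/rhs hovers near 2, so the constant 4 = (q/(q−1))² is comfortable but of the right order.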
(2.13). THEOREM (Hunt). Let ((X(t))_{t≥0}, (F_t)_{t≥0}, P) be a martingale (non-negative submartingale). Given stopping times σ and τ which satisfy σ ≤ τ ≤ T for some T > 0, X(σ) = E^P[X(τ) | F_σ] (X(σ) ≤ E^P[X(τ) | F_σ]) (a.s. P). In particular, if ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a non-negative submartingale, then, for any T > 0, { X(τ) : τ is a stopping time ≤ T } is uniformly P-integrable.

PROOF. Let G denote the set of all stopping times σ : E → [0,T] such that X(σ) = E^P[X(T) | F_σ] (X(σ) ≤ E^P[X(T) | F_σ]) (a.s. P) for every martingale (non-negative submartingale) ((X(t))_{t≥0}, (F_t)_{t≥0}, P). Then, for any non-negative submartingale ((X(t))_{t≥0}, (F_t)_{t≥0}, P):

lim_{R→∞} sup_{σ∈G} E^P[ X(σ) 1_{{X(σ)≥R}} ] ≤ lim_{R→∞} sup_{σ∈G} E^P[ X(T) 1_{{X(σ)≥R}} ]
  ≤ lim_{R→∞} E^P[ X(T) 1_{{X*(T)≥R}} ] = 0,

and so { X(σ) : σ ∈ G } is uniformly P-integrable. In particular, if σ is a stopping time which is the non-increasing limit of elements of G, then σ ∈ G.

We next show that if σ is a stopping time which takes on only a finite number of values 0 = t₀ < ⋯ < t_n = T, then σ ∈ G. To this end, let Γ ∈ F_σ be given, and set Γ_k := Γ ∩ {σ = t_k}. Then Γ_k ∈ F_{t_k}, and so

E^P[ X(σ) 1_Γ ] = Σ_{k=0}^n E^P[ X(t_k) 1_{Γ_k} ],

which equals (is dominated by, in the submartingale case) Σ_{k=0}^n E^P[ X(T) 1_{Γ_k} ] = E^P[ X(T) 1_Γ ].

Now let σ : E → [0,T] be any stopping time, and, for n ∈ ℕ, set σ_n := ((⌊2ⁿσ⌋+1)/2ⁿ) ∧ T. By the preceding, σ_n ∈ G for each n ∈ ℤ⁺. In addition, σ_n ↓ σ. Hence, we know that every stopping time bounded by T is an element of G. In particular, if ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a non-negative submartingale, then the set of X(σ) as σ runs over stopping times bounded by T is uniformly P-integrable. Also, if σ ≤ τ ≤ T are stopping times, then for any martingale ((X(t))_{t≥0}, (F_t)_{t≥0}, P) we have

E^P[ X(τ) | F_σ ] = E^P[ E^P[X(T) | F_τ] | F_σ ] = E^P[ X(T) | F_σ ] = X(σ)  (a.s. P).

It remains to show that if ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a non-negative submartingale and σ ≤ τ ≤ T are stopping times, then E^P[X(τ) | F_σ] ≥ X(σ) (a.s. P). Notice that, by the uniform integrability property already proved, we need only do this when σ and τ take values in a finite set 0 = t₀ < ⋯ < t_n = T. To handle this case, define

A(t) := Σ_{k=0}^{ℓ−1} E^P[ X(t_{k+1}) − X(t_k) | F_{t_k} ]  if t ∈ [t_ℓ, t_{ℓ+1}[ and ℓ = 0, …, n−1,
A(t) := Σ_{k=0}^{n−1} E^P[ X(t_{k+1}) − X(t_k) | F_{t_k} ]  if t ≥ T.

Then t ↦ A(t) is P-almost surely non-decreasing, and ((M(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale, where

M(t) := X(t_ℓ) − A(t)  if t ∈ [t_ℓ, t_{ℓ+1}[ and ℓ = 0, …, n−1,
M(t) := X(T) − A(t)  if t ≥ T.
Hence, P-almost surely, E^P[X(τ) | F_σ] = E^P[M(τ) + A(τ) | F_σ] = X(σ) + E^P[A(τ) − A(σ) | F_σ] ≥ X(σ).
(2.14). COROLLARY (Doob’s stopping time theorem). If (X (t )) t ¾0 , (F t ) t ¾0 , P is a martingale (non-negative submartingale) and τ is a stopping time, then the process (X (t ∧ τ)) t ¾0 , (F t ) t ¾0 , P is a martingale (submartingale). PROOF. Let 0 ¶ s ¶ t and Γ ∈ F s . Then, since Γ ∩ {τ > s } ∈ F s , EP [X (t ∧ τ)1Γ ] = EP [X (t ∧ τ)1Γ∩{τ>s } ] + EP [X (t ∧ τ)1Γ∩{τ¶s } ] (¾)
= EP [X (s ∧ τ)1Γ∩{τ>s } ] + EP [X (s ∧ τ)1Γ∩{τ¶s } ]
= EP [X (s ∧ τ)1Γ ].
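Doob's stopping time theorem can be observed numerically (a sketch of mine, not from the text): stopping the simple random walk at the first exit from ]−5, 5[, truncated at n steps, is a bounded stopping time, and the stopped walk is again a martingale, so its mean stays at the initial value 0.

```python
import random

random.seed(3)

# Hedged sketch: S(n ^ tau) for tau = first time |S| reaches 5 (or n steps,
# whichever comes first).  By Corollary (2.14), E[S(n ^ tau)] = S(0) = 0.
a, b, n, trials = 5, 5, 400, 4000
total = 0.0
for _ in range(trials):
    s = 0
    for _ in range(n):
        s += random.choice((-1, 1))
        if s <= -a or s >= b:
            break            # path is frozen from the stopping time on
    total += s
mean_stopped = total / trials
```

Note that |S(n∧τ)| ≤ 5 here, so the Monte Carlo error of the mean is tiny.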
(2.15). EXERCISE. Let ϕ : [0,∞[ → ℝ be a right-continuous function. Given a < b and T ∈ ]0,∞], we say that ϕ upcrosses [a,b] at least n times during [0,T[ if there exist 0 ≤ s₁ < t₁ < ⋯ < s_n < t_n < T such that ϕ(s_m) < a and ϕ(t_m) > b for each m = 1, …, n. Define

U(a,b;T) := inf{ n ∈ ℕ : ϕ does not upcross [a,b] at least n+1 times during [0,T[ }

to be the number of times that ϕ upcrosses [a,b] during [0,T[. Show that ϕ has a left limit (in [−∞,∞]) at every t ∈ ]0,T] if and only if U(a,b;T) < ∞ for all rational a < b. Also, check that if U_m(a,b;T) is defined relative to the function t ↦ ϕ(⌊2^m t⌋/2^m), then U_m(a,b;T) ↑ U(a,b;T) as m → ∞.

(2.16). THEOREM (Doob's upcrossing inequality). Let ((X(t))_{t≥0}, (F_t)_{t≥0}, P) be a submartingale, and for a < b, ξ ∈ E, and T ∈ ]0,∞], define U(a,b;T)(ξ) to be the number of times that t ↦ X(t,ξ) upcrosses [a,b] during [0,T[. Then

(2.17)  E^P[ U(a,b;T) ] ≤ E^P[ (X(T) − a)⁺ ] / (b − a),  T ∈ ]0,∞[.

In particular, for P-almost all ξ ∈ E, X(·,ξ) has a left limit (in [−∞,∞[) at each t ∈ ]0,∞[. In addition, if sup_{T>0} E^P[X(T)⁺] < ∞ (sup_{T>0} E^P[|X(T)|] < ∞), then lim_{t→∞} X(t) exists in [−∞,∞[ (]−∞,∞[) (a.s. P).
PROOF. In view of (2.15), it suffices to prove that (2.17) holds with U(a,b;T) replaced by U_m(a,b;T) (cf. the last part of (2.15)).

Given m ∈ ℕ, set X_m(t) := X(⌊2^m t⌋/2^m) and τ₀ ≡ 0, and define σ_n and τ_n inductively for n ∈ ℤ⁺ by:

σ_n := inf{ t ≥ τ_{n−1} : X_m(t) < a } ∧ T  and  τ_n := inf{ t ≥ σ_n : X_m(t) > b } ∧ T.

Clearly, the σ_n's and τ_n's are stopping times which are bounded by T, and U_m(a,b;T) = max{ n ∈ ℕ : τ_n < T }. Thus, if Y_m(t) := (X_m(t) − a)⁺/(b − a), then

U_m(a,b;T) ≤ Σ_{n=0}^{⌊2^m T⌋} ( Y_m(τ_n) − Y_m(σ_n) );

and so

Y_m(T) ≥ Y_m(T) − Y_m(0) ≥ U_m(a,b;T) + Σ_{n=0}^{⌊2^m T⌋} ( Y_m(σ_{n+1}) − Y_m(τ_n) ).
But ((Y_m(t))_{t≥0}, (F_t)_{t≥0}, P) is a non-negative submartingale, and therefore, by Hunt's theorem, E^P[Y_m(σ_{n+1}) − Y_m(τ_n)] ≥ 0. At the same time, (((X(t) − a)⁺)_{t≥0}, (F_t)_{t≥0}, P) is a submartingale, and therefore

E^P[ U_m(a,b;T) ] ≤ E^P[ Y_m(T) ] ≤ E^P[ (X(T) − a)⁺ ] / (b − a).
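The upcrossing inequality (2.17) can be tested empirically (my own sketch, not from the text), for the simple random walk, which is a martingale and hence a submartingale, with [a,b] = [0,3]; the small tolerance in the comparison only accounts for Monte Carlo error on both sides.

```python
import random

random.seed(4)

# Hedged sketch: count upcrossings of [a, b] by a simple random walk over n
# steps (state machine: wait for S < a, then for S > b), and compare the mean
# count with the upcrossing bound  E[(S_n - a)^+] / (b - a).
a, b, n, trials = 0, 3, 200, 3000
up_total = pos_total = 0.0
for _ in range(trials):
    s, up, seeking_low = 0, 0, True
    for _ in range(n):
        s += random.choice((-1, 1))
        if seeking_low and s < a:
            seeking_low = False          # strict drop below a observed
        elif (not seeking_low) and s > b:
            up += 1                      # completed one upcrossing
            seeking_low = True
    up_total += up
    pos_total += max(s - a, 0)
mean_up = up_total / trials
bound = pos_total / trials / (b - a)
```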
(2.18). COROLLARY. If ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a P-almost surely continuous submartingale, then lim_{t→∞} X(t) exists (in [−∞,∞[) (a.s. P) on the set

B := { ξ ∈ E : sup_{t≥0} X(t,ξ) < ∞ }.

In the case of P-almost surely continuous martingales, the conclusion is that the limit exists P-almost surely in ]−∞,∞[ on B.

PROOF. Without loss of generality, we assume that X(0) ≡ 0. Given R > 0, set τ_R := inf{ t ≥ 0 : sup_{0≤s≤t} X(s) ≥ R }, and define X_R(t) := X(t∧τ_R). Then τ_R is a stopping time, and X_R(t) ≤ R, t ≥ 0 (a.s. P). Hence, ((X_R(t))_{t≥0}, (F_t)_{t≥0}, P) is a submartingale, and E^P[X_R(T)⁺] ≤ R for all T ≥ 0. In particular, lim_{t→∞} X(t) exists (in [−∞,∞[) (a.s. P) on {τ_R = ∞}. Since this is true for every R > 0, we now have the desired conclusion in the submartingale case. The martingale case follows from this, the observation that E^P[|X_R(T)|] = 2E^P[X_R(T)⁺] − E^P[X_R(0)], and Fatou's lemma.

(2.19). EXERCISE. Prove each of the following statements:

i) ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a uniformly P-integrable martingale if and only if X(∞) := lim_{t→∞} X(t) exists in L¹(P), in which case X(t) → X(∞) (a.s. P) and X(τ) = E^P[X(∞) | F_τ] (a.s. P) for each stopping time τ.

ii) If q ∈ ]1,∞[ and ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale, then ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is L^q(P)-bounded (i.e., sup_{t≥0} E^P[|X(t)|^q] < ∞) if and only if X(∞) = lim_{t→∞} X(t) exists in L^q(P), in which case X(t) → X(∞) (a.s. P) and X(τ) = E^P[X(∞) | F_τ] (a.s. P) for each stopping time τ.

iii) Suppose that X : [0,∞[ × E → ℝ ([0,∞[) is a right-continuous progressively measurable function, and that X(t) ∈ L¹(P) for each t in a dense subset D of [0,∞[. If X(s) = E^P[X(t) | F_s] (X(s) ≤ E^P[X(t) | F_s]) (a.s. P) for all s, t ∈ D with s < t, then ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale (non-negative submartingale).

iv) Let Ω be a Polish space, P ∈ M₁(Ω), and A a countably generated sub-σ-algebra of B(Ω). Then there exists a nested sequence (Π_n)_{n∈ℤ⁺} of finite partitions of Ω into A-measurable sets such that ⋃_{n=1}^∞ σ(Π_n) generates A. In addition, if ω ↦ P^n_ω is defined by

P^n_ω(B) := Σ_{A∈Π_n} ( P(B∩A)/P(A) ) 1_A(ω)

for B ∈ B(Ω) (0/0 := 0 here), then there is a P-null set Λ ∈ A such that P^n_ω → P_ω in M₁(Ω) for each ω ∉ Λ. Finally, P_ω can be defined for ω ∉ Λ so that ω ↦ P_ω becomes a r.c.p.d. of P | A.
(2.20). THEOREM. Assume that F_t is countably generated for each t ≥ 0. Let τ be a stopping time and suppose that ω ↦ P_ω is a c.p.d. of P | F_τ. Let X : [0,∞[ × E → ℝ be a right-continuous progressively measurable function, and assume that X(t) ∈ L¹(P) for all t ≥ 0. Then ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale if and only if ((X(t∧τ))_{t≥0}, (F_t)_{t≥0}, P) is a martingale and there is a P-null set Λ ∈ F_τ such that ((X(t) − X(t∧τ))_{t≥0}, (F_t)_{t≥0}, P_ω) is a martingale for each ω ∉ Λ.

PROOF. Set Y(t) := X(t) − X(t∧τ). Assuming that ((X(t∧τ))_{t≥0}, (F_t)_{t≥0}, P) is a martingale and that ((Y(t))_{t≥0}, (F_t)_{t≥0}, P_ω) is a martingale for each ω outside of an F_τ-measurable P-null set, we have, for each s < t and Γ ∈ F_s:

E^P[ (X(t) − X(s)) 1_Γ ] = E^P[ (Y(t) − Y(s)) 1_Γ ] + E^P[ (X(t∧τ) − X(s∧τ)) 1_Γ ]
  = ∫ E^{P_ω}[ (Y(t) − Y(s)) 1_Γ ] P(dω) = 0.

That is, ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale.

Next, assume that ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale. Then the process ((X(t∧τ))_{t≥0}, (F_t)_{t≥0}, P) is a martingale by Doob's stopping time theorem. To see that ((Y(t))_{t≥0}, (F_t)_{t≥0}, P_ω) is a martingale for each ω outside a P-null set Λ ∈ F_τ, we proceed as follows. Given 0 ≤ s < t, Γ ∈ F_s, and A ∈ F_τ, we have

∫_A E^{P_ω}[ Y(t) 1_Γ ] P(dω) = E^P[ Y(t) 1_{Γ∩A} ] = E^P[ Y(t) 1_{Γ∩A∩{τ≤s}} ] + E^P[ Y(t) 1_{Γ∩A∩{s<τ≤t}} ].

Note that, by (2.12), Γ ∩ A ∩ {s < τ ≤ t} = (Γ ∩ {s < τ}) ∩ (A ∩ {τ ≤ t}) ∈ F_{t∧τ}. Thus, by Hunt's theorem,

E^P[ Y(t) 1_{Γ∩A∩{s<τ≤t}} ] = E^P[ (X(t) − X(τ)) 1_{Γ∩A∩{s<τ≤t}} ] = 0.

At the same time, Γ ∩ A ∩ {τ ≤ s} ∈ F_s, and so

E^P[ Y(t) 1_{Γ∩A∩{τ≤s}} ] = E^P[ (X(s) − X(τ)) 1_{Γ∩A∩{τ≤s}} ] = E^P[ Y(s) 1_{Γ∩A} ] = ∫_A E^{P_ω}[ Y(s) 1_Γ ] P(dω).

Since F_s is countably generated, we conclude from this that there is a P-null set Λ ∈ F_τ such that, for all ω ∉ Λ, { Y(t) : t ∈ ℚ ∩ [0,∞[ } ⊂ L¹(P_ω) and Y(s) = E^{P_ω}[Y(t) | F_s] (a.s. P_ω) for all rational s < t. In view of (2.19)-iii), this completes the proof.

(2.21). EXERCISE. Let everything be as in Theorem (2.20), only this time assume that X(t) ≥ 0 for all t ≥ 0. Show that ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a submartingale if and only if ((X(t∧τ))_{t≥0}, (F_t)_{t≥0}, P) is a submartingale and there is a P-null set Λ ∈ F_τ such that ((X(t)1_{[0,t]}(τ))_{t≥0}, (F_t)_{t≥0}, P_ω) is a submartingale for all ω ∉ Λ.

The rest of this section is devoted to a particularly useful special case of the renowned Doob-Meyer decomposition theorem. What their theorem says is that, under mild technical hypotheses, every submartingale ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is the sum of a martingale ((M(t))_{t≥0}, (F_t)_{t≥0}, P) and a right-continuous, progressively measurable function A : [0,∞[ × E → [0,∞[ having the property that t ↦ A(t) is P-almost surely non-decreasing. Moreover, A can be chosen so that A(0) = 0 and t ↦ A(t) is "nearly left-continuous" (precisely, A is "(F_t)_{t≥0}-predictable"); and, among such non-decreasing processes, there is only one whose difference from X is an (F_t)_{t≥0}-martingale under P. We have already seen a special case of this decomposition in our proof of Theorem (2.13) (cf. the construction of the processes M and A made there). The idea used in the proof of Theorem (2.13) is due to Doob, and it works whenever t ↦ X(t) is piecewise constant. What Meyer demonstrated is that, in general, A can be realized as the limit of A's
constructed for piecewise constant approximations to X. Simple as this procedure may sound, it is fraught with technical difficulties. To avoid these difficulties, and because we shall not have great need for the general result, we shall content ourselves with the special case of submartingales ((X(t)²)_{t≥0}, (F_t)_{t≥0}, P) where ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a real-valued, P-almost surely continuous, L²(P)-martingale. Our proof of existence for this case is based on ideas of K. Itô. The uniqueness assertion will be a consequence of the following simple lemma.

(2.22). LEMMA. Let ((X(t))_{t≥0}, (F_t)_{t≥0}, P) be a martingale, and A : [0,∞[ × E → ℝ a right-continuous progressively measurable function which is P-almost surely continuous and of locally bounded variation (i.e., for each T > 0, the total variation |A|(T,ξ) of A(·,ξ)|[0,T] is finite for P-almost every ξ ∈ E). Then, assuming that for every T > 0, E^P[ sup_{0≤t≤T} |X(t)| (|A|(T) + |A(0)|) ] < ∞, the process

( ( X(t)A(t) − ∫₀ᵗ X(s) A(ds) )_{t≥0}, (F_t)_{t≥0}, P )

is a martingale.

PROOF. Let 0 ≤ s < t and Γ ∈ F_s be given. Then

E^P[ (X(t)A(t) − X(s)A(s)) 1_Γ ] = E^P[ Σ_{k=0}^{n−1} ( X(u_{n,k+1})A(u_{n,k+1}) − X(u_{n,k})A(u_{n,k}) ) 1_Γ ]
  = E^P[ Σ_{k=0}^{n−1} X(u_{n,k+1}) ( A(u_{n,k+1}) − A(u_{n,k}) ) 1_Γ ],

where u_{n,k} := s + (t−s)k/n; in the second equality we have used the fact that A(u_{n,k})1_Γ is F_{u_{n,k}}-measurable, so that E^P[(X(u_{n,k+1}) − X(u_{n,k}))A(u_{n,k})1_Γ] = 0. Since u ↦ X(u) is right-continuous and u ↦ A(u) is P-almost surely continuous and of locally bounded variation,

Σ_{k=0}^{n−1} X(u_{n,k+1}) ( A(u_{n,k+1}) − A(u_{n,k}) ) → ∫_s^t X(u) A(du)  (a.s. P);

and our integrability assumption allows us to conclude that this convergence takes place in L¹(P).
X(·∧ζ_R) and A(·∧ζ_R), respectively,

( ( X(t∧ζ_R)² − ∫₀^{t∧ζ_R} X(s) X(ds) )_{t≥0}, (F_t)_{t≥0}, P )

is a martingale; therefore,

E^P[ X(t∧ζ_R)² ] = E^P[ ∫₀^{t∧ζ_R} X(s) X(ds) ].

On the other hand, since X(·∧ζ_R) is P-almost surely continuous and of locally bounded variation, X(t∧ζ_R)² = 2∫₀^{t∧ζ_R} X(s) X(ds) (a.s. P), and therefore E^P[X(t∧ζ_R)²] = 2E^P[∫₀^{t∧ζ_R} X(s) X(ds)]. Hence, E^P[X(t∧ζ_R)²] = 0 for all t ≥ 0, and so X(t∧ζ_R) = 0, t ≥ 0, (a.s. P). Clearly this leads immediately to the conclusion that X(t∧ζ) = 0, t ≥ 0, (a.s. P).

To prove the last assertion, it suffices to check that, for each 0 ≤ s < t, |X^s|(t) = ∞ (a.s. P), where X^s(·) := X(·) − X(·∧s). But, if ζ^s := sup{ u ≥ 0 : |X^s|(u) < ∞ }, then P(|X^s|(t) < ∞) = P(ζ^s > t), and by the preceding with X^s replacing X, P(ζ^s > t) ≤ P(X(t) = X(s)) = 0.

(2.24). COROLLARY. Let X : [0,∞[ × E → ℝ be a right-continuous progressively measurable function. Then there is, up to a P-null set, at most one right-continuous progressively measurable A : [0,∞[ × E → ℝ such that A(0) ≡ 0, t ↦ A(t) is P-almost surely continuous and of locally bounded variation, and, in addition, the process ((X(t) − A(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale.

PROOF. Suppose that there were two, A and A′. Then ((A(t) − A′(t))_{t≥0}, (F_t)_{t≥0}, P) would be a P-almost surely continuous martingale which is P-almost surely of locally bounded variation. Hence, by Theorem (2.23), we would have A(t) − A′(t) = A(0) − A′(0) = 0, t ≥ 0, (a.s. P).

Before proving the existence part of our special case of the Doob-Meyer decomposition theorem, we mention a result which addresses an extremely pedantic issue. Namely, for technical reasons (e.g., countability considerations), it is often better not to complete the σ-algebras F_t with respect to P. At the same time, it is convenient to have the processes under consideration right-continuous for every ξ ∈ E, not just P-almost every one.
In order to make sure that we can make our processes everywhere right-continuous and, at the same time, progressively measurable with respect to possibly incomplete σ-algebras F_t, we shall sometimes make reference to the following lemma, whose proof may be found in [Stroock and Varadhan, 2006, §4.3.3]. On the other hand, since in most cases there is either no harm in completing the σ-algebras or the asserted conclusion is clear from other considerations, we shall not bother with the proof here.

(2.25). LEMMA. Let (X_n)_{n∈ℤ⁺} be a sequence of right-continuous (P-almost surely continuous) progressively measurable functions with values in a Banach space (B, ‖·‖). If

lim_{m→∞} sup_{n≥m} P( sup_{0≤t≤T} ‖X_n(t) − X_m(t)‖ ≥ ε ) = 0

for every T > 0 and ε > 0, then there is a P-almost surely unique right-continuous (P-almost surely continuous) progressively measurable function X such that

lim_{n→∞} P( sup_{0≤t≤T} ‖X_n(t) − X(t)‖ ≥ ε ) = 0

for all T > 0 and ε > 0.

(2.26). THEOREM (Doob-Meyer). Let ((X(t))_{t≥0}, (F_t)_{t≥0}, P) be a P-almost surely continuous real-valued L²(P)-martingale. Then there is a P-almost surely unique right-continuous progressively measurable A : [0,∞[ × E → [0,∞[ such that A(0) ≡ 0, t ↦ A(t) is non-decreasing, P-almost surely continuous, and ((X(t)² − A(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale.
PROOF. The uniqueness is clearly a consequence of Corollary (2.24). In proving existence, we assume, without loss of generality, that X(0) ≡ 0.

Define τ⁰_k ≡ k for k ∈ ℕ, and, given n ∈ ℤ⁺, define τⁿ₀ ≡ 0 and, for ℓ ∈ ℕ:

τⁿ_{ℓ+1} := ( inf{ t ≥ τⁿ_ℓ : sup_{τⁿ_ℓ≤s≤t} |X(s) − X(τⁿ_ℓ)| ≥ 1/n } ) ∧ (τⁿ_ℓ + 1/n) ∧ τ^{n−1}_{k+1}

if τ^{n−1}_k ≤ τⁿ_ℓ < τ^{n−1}_{k+1}. Clearly, the τⁿ_k's are stopping times, τⁿ_k < τⁿ_{k+1}, and {τⁿ_k} ⊂ {τ^{n+1}_k} (a.s. P). By P-almost sure continuity, τⁿ_k → ∞ as k → ∞ (a.s. P) and |X(τⁿ_{k+1}) − X(τⁿ_k)| ≤ 1/n, k ∈ ℕ, (a.s. P). Choose K₀ < ⋯ < K_n < ⋯ so that P(E ∖ Λ_n) ≤ 1/n, where Λ_n := {τⁿ_{K_n} > n}, and define

M_n(t) := Σ_{k=0}^{K_n} X(τⁿ_k) ( X(t∧τⁿ_{k+1}) − X(t∧τⁿ_k) )

and

A_n(t) := Σ_{k=0}^{K_n} ( X(t∧τⁿ_{k+1}) − X(t∧τⁿ_k) )².

Clearly, for all n ∈ ℕ, M_n(0) = A_n(0) ≡ 0, ((M_n(t))_{t≥0}, (F_t)_{t≥0}, P) is a P-almost surely continuous martingale; A_n is a right-continuous, non-negative, progressively measurable function which is P-almost surely continuous, and A_n(s) ≤ A_n(t) if t ≥ s + 1/n. Moreover, for n ∈ ℤ⁺, 0 ≤ t ≤ n, and ξ ∈ Λ_n:

(2.27)  X(t,ξ)² = 2M_n(t,ξ) + A_n(t,ξ).
Given T > 0 and ε > 0, we are now going to show that

lim_{m→∞} sup_{n≥m} P( sup_{0≤t≤T} |M_n(t) − M_m(t)| ≥ ε ) = 0.

To this end, let T ≤ m < n, and set ζ := T ∧ τᵐ_{K_m} ∧ τⁿ_{K_n}. Then

P( sup_{0≤t≤T} |M_n(t) − M_m(t)| ≥ ε )
  ≤ P( E ∖ (Λ_m ∪ Λ_n) ) + P( sup_{0≤t≤T} |M_n(t∧ζ) − M_m(t∧ζ)| ≥ ε )
  ≤ 2/m + (1/ε²) E^P[ ( M_n(ζ) − M_m(ζ) )² ],

where we have used Doob's inequality. Define ρ_k := τᵐ_k ∧ ζ and σ_ℓ := τⁿ_ℓ ∧ ζ. Note that

M_n(ζ) − M_m(ζ) = Σ_{k=0}^∞ Σ_{ℓ=0}^∞ 1_{[ρ_k,ρ_{k+1}[}(σ_ℓ) ( X(σ_ℓ) − X(ρ_k) ) ( X(σ_{ℓ+1}) − X(σ_ℓ) )

(a.s. P) and that the terms in this double sum are mutually P-orthogonal. Hence,

E^P[ ( M_n(ζ) − M_m(ζ) )² ] = E^P[ Σ_{k=0}^∞ Σ_{ℓ=0}^∞ 1_{[ρ_k,ρ_{k+1}[}(σ_ℓ) ( X(σ_ℓ) − X(ρ_k) )² ( X(σ_{ℓ+1}) − X(σ_ℓ) )² ]
  ≤ (1/m²) E^P[ Σ_{ℓ=0}^∞ ( X(σ_{ℓ+1}) − X(σ_ℓ) )² ]
  = (1/m²) E^P[ X(ζ)² ] ≤ (1/m²) E^P[ X(T)² ],

where we have used |X(σ_ℓ) − X(ρ_k)| ≤ 1/m when σ_ℓ ∈ [ρ_k, ρ_{k+1}[, and the P-orthogonality of the increments X(σ_{ℓ+1}) − X(σ_ℓ) to arrive at the last step.

We now apply Lemma (2.25) and conclude that there is a right-continuous, P-almost surely continuous, progressively measurable M : [0,∞[ × E → ℝ such that M_n → M, uniformly on finite intervals, in P-measure. Furthermore, our argument shows that M_n(t) → M(t) in L¹(P) for each t ≥ 0. Hence, ((M(t))_{t≥0}, (F_t)_{t≥0}, P) is a P-almost surely continuous martingale. Finally, set A₀ := X² − 2M. Then, as a consequence of (2.27), we see that A_n → A₀, uniformly on finite intervals, in P-measure. In particular, A₀(0) = 0 and t ↦ A₀(t) is non-decreasing P-almost surely. Thus, we are done once we define A(t) := sup_{0<s≤t} A₀(s).

(2.28). EXERCISE. Let β be a one-dimensional Brownian motion. Show that, for each T > 0,

Σ_{k=0}^{2ⁿ−1} ( β((k+1)T/2ⁿ) − β(kT/2ⁿ) )² → T  (a.s. P)  as n → ∞.
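The dyadic quadratic-variation limit in the exercise above is easy to observe numerically (a simulation sketch of mine, not from the text), using 2¹² Gaussian increments of a Brownian path over [0, T].

```python
import math
import random

random.seed(5)

# Hedged sketch: along the dyadic partition of mesh T / 2**n, the sum of
# squared Brownian increments should be close to T (its variance is
# 2 * T**2 / 2**n, so the fluctuation here is of order 0.04).
T, n = 2.0, 12
steps = 2 ** n
dt = T / steps
incs = [math.sqrt(dt) * random.gauss(0.0, 1.0) for _ in range(steps)]
qv = sum(d * d for d in incs)
```

Doubling n shrinks the fluctuation of qv around T by a factor of √2, which is the quantitative content behind the almost sure convergence along dyadic partitions.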
Let Mart_C² (= Mart_C²((F_t)_{t≥0}, P)) denote the space of all real-valued P-almost surely continuous L²(P)-martingales ((X(t))_{t≥0}, (F_t)_{t≥0}, P). Clearly, Mart_C² is a linear space. Given X ∈ Mart_C², we shall use 〈X〉 to denote the associated process A constructed in Theorem (2.26). In addition, given X, Y ∈ Mart_C², define 〈X,Y〉 by

〈X,Y〉 := (1/4)( 〈X+Y〉 − 〈X−Y〉 ).

Clearly, 〈X,Y〉 is a right-continuous progressively measurable function which is not only of locally bounded variation, with |〈X,Y〉|(T) ∈ L¹(P) for all T ≥ 0, but also P-almost surely continuous.

(2.29). EXERCISE. Given a stopping time τ and an X ∈ Mart_C², define X^τ(t) := X(t∧τ) and X_τ(t) := X(t) − X^τ(t), t ≥ 0. Show that X^τ and X_τ are elements of Mart_C² and that 〈X^τ〉(t) = 〈X〉(t∧τ) and 〈X_τ〉(t) = 〈X〉(t) − 〈X〉(t∧τ), t ≥ 0, (a.s. P).

(2.30). THEOREM (Kunita-Watanabe). Given X, Y ∈ Mart_C², 〈X,Y〉 is the P-almost surely unique right-continuous progressively measurable function which has locally bounded variation, is P-almost surely continuous, and has the properties that 〈X,Y〉(0) ≡ 0 and ((X(t)Y(t) − 〈X,Y〉(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale. In particular, 〈X〉 = 〈X,X〉 (a.s. P) for all X ∈ Mart_C², and, for all X, Y, Z ∈ Mart_C², 〈aX + bY, Z〉 = a〈X,Z〉 + b〈Y,Z〉, a, b ∈ ℝ, (a.s. P). Finally, for all X, Y ∈ Mart_C²:

|〈X,Y〉|(Γ) ≤ √(〈X〉(Γ)) √(〈Y〉(Γ)),  Γ ∈ B([0,∞[), (a.s. P),
√(〈X+Y〉(Γ)) ≤ √(〈X〉(Γ)) + √(〈Y〉(Γ)),  Γ ∈ B([0,∞[), (a.s. P),
|〈X,Y〉|(dt) ≤ (1/2)( 〈X〉(dt) + 〈Y〉(dt) )  (a.s. P).
28
II. DIFFUSIONS AND MARTINGALES
PROOF. To prove the first assertion, simply note that XY = ¼((X + Y)² − (X − Y)²) and apply Corollary (2.24). The equality ⟨X⟩ = ⟨X, X⟩ as well as the linearity assertion follow easily from uniqueness.

In order to prove the rest of the theorem, it suffices to show that |⟨X, Y⟩|(Γ) ≤ √(⟨X⟩(Γ))√(⟨Y⟩(Γ)), Γ ∈ B([0,∞[), (a.s. P), and to do this we need only check that, for each 0 ≤ s < t,

  |⟨X, Y⟩(t) − ⟨X, Y⟩(s)| ≤ √(⟨X⟩(t) − ⟨X⟩(s)) √(⟨Y⟩(t) − ⟨Y⟩(s))  (a.s. P).

Furthermore, by replacing X and Y with X_s and Y_s, respectively (cf. (2.29)), we see that it is enough to prove |⟨X, Y⟩(t)| ≤ √(⟨X⟩(t))√(⟨Y⟩(t)) (a.s. P) for each t ≥ 0. But, by the linearity property,

  0 ≤ ⟨λX ± (1/λ)Y⟩(t) = λ²⟨X⟩(t) ± 2⟨X, Y⟩(t) + (1/λ²)⟨Y⟩(t),  λ > 0,  (a.s. P).

Hence, the desired inequality follows by the same argument as one uses to prove the ordinary Schwarz inequality.

(2.31). EXERCISE. Given X, Y ∈ MartC²((F_t)_{t≥0}, P) and an (F_t)_{t≥0}-stopping time τ, show that ⟨X^τ, Y⟩(dt) = 1_{[0,τ[}(t)⟨X, Y⟩(dt). Next, set F_t^X := σ(X(s) : 0 ≤ s ≤ t) and F_t^Y := σ(Y(s) : 0 ≤ s ≤ t). Show that X, Y ∈ MartC²((F_t^X ∨ F_t^Y)_{t≥0}, P) and that, up to a P-null set, ⟨X, Y⟩ defined relative to ((F_t^X ∨ F_t^Y)_{t≥0}, P) coincides with ⟨X, Y⟩ defined relative to ((F_t)_{t≥0}, P). Conclude that, if for some T > 0, F_T^X and F_T^Y are P-independent, then ⟨X, Y⟩(t) = 0, 0 ≤ t ≤ T, (a.s. P).

§II.3. Stochastic integrals, Itô's formula, and semi-martingales

We continue with the notation with which we were working in §II.2. Given a right-continuous, non-decreasing, P-almost surely continuous, progressively measurable function A : [0,∞[ × E → [0,∞[, denote by L²loc(A, P) (= L²loc((F_t)_{t≥0}, A, P)) the space of progressively measurable α : [0,∞[ × E → R such that E^P[∫_0^T α(t)² A(dt)] < ∞ for all T > 0. Clearly, L²loc(A, P) admits a natural metric with respect to which it becomes a Fréchet space.

Given X ∈ MartC² and α ∈ L²loc(⟨X⟩, P), note that there exists at most one I : [0,∞[ × E → R such that

(3.1)  i) I(0) ≡ 0 and I ∈ MartC²,
       ii) ⟨I, Y⟩ = α⟨X, Y⟩ (a.s. P) for all Y ∈ MartC².

(Given a measure µ and a measurable function α, αµ denotes the measure ν such that dν/dµ = α.) Indeed, if there were two, say I and I′, then we would have ⟨I − I′, Y⟩ ≡ 0 (a.s. P) for all Y ∈ MartC². In particular, taking Y = I − I′, we would conclude that E^P[(I(T) − I′(T))²] = 0, T ≥ 0, and therefore that I = I′ (a.s. P). Following Kunita and Watanabe, we shall say that, if it exists, the P-almost surely unique I satisfying (3.1) is the (Itô) stochastic integral of α with respect to X, and we shall denote I by ∫_0^· α(s) dX(s).

Observe that if α, β ∈ L²loc(⟨X⟩, P) and if both ∫_0^· α(s) dX(s) and ∫_0^· β(s) dX(s) exist, then ∫_0^· (aα(s) + bβ(s)) dX(s) exists, and is equal to a∫_0^· α(s) dX(s) + b∫_0^· β(s) dX(s) (a.s.
P) for all a, b ∈ R, and

(3.2)  E^P[ sup_{0≤t≤T} ( ∫_0^t α(s) dX(s) − ∫_0^t β(s) dX(s) )² ]
         ≤ 4 E^P[ ( ∫_0^T α(s) dX(s) − ∫_0^T β(s) dX(s) )² ]
         = 4 E^P[ ∫_0^T (α(t) − β(t))² ⟨X⟩(dt) ],  T ≥ 0.

From this it is easy to see that the set of α's for which ∫_0^· α(s) dX(s) exists is a closed linear subspace of L²loc(⟨X⟩, P).
(3.3). EXERCISE. Let X ∈ MartC² be given and suppose that σ ≤ τ are stopping times. Let γ be an F_σ-measurable function satisfying E^P[γ²(⟨X⟩(T ∧ τ) − ⟨X⟩(T ∧ σ))] < ∞ for all T > 0, and set α(t) := 1_{[σ,τ[}(t)γ, t ≥ 0. Show that α ∈ L²loc(⟨X⟩, P), and that ∫_0^· α(s) dX(s) exists and is equal to γ(X^τ(·) − X^σ(·)) (see (2.29) for the notation here).

We want to show that ∫_0^· α(s) dX(s) exists for all α ∈ L²loc(⟨X⟩, P). To this end, first suppose that α is simple in the sense that there is an n ∈ Z⁺ for which α(t) = α(⌊nt⌋/n), t ≥ 0. Set

  I(t) := Σ_{k=0}^∞ α(k/n)( X(t ∧ (k+1)/n) − X(t ∧ k/n) ),  t ≥ 0.

Then I ∈ MartC². Moreover, if k/n ≤ s ≤ t ≤ (k+1)/n, then

  E^P[ I(t)Y(t) − I(s)Y(s) | F_s ] = E^P[ (I(t) − I(s))(Y(t) − Y(s)) | F_s ]
    = α(k/n) E^P[ (X(t) − X(s))(Y(t) − Y(s)) | F_s ]
    = α(k/n) E^P[ ⟨X, Y⟩(t) − ⟨X, Y⟩(s) | F_s ]
    = E^P[ ∫_s^t α(u) ⟨X, Y⟩(du) | F_s ]  (a.s. P).

In other words, ⟨I, Y⟩ = α⟨X, Y⟩ (a.s. P), and so ∫_0^· α(s) dX(s) exists and is given by I. Knowing that ∫_0^· α(s) dX(s) exists for simple α's, one can easily show that ∫_0^· α(s) dX(s) exists for bounded progressively measurable α's which are P-almost surely continuous; indeed, simply take α_n(t) := α(⌊nt⌋/n), t ≥ 0, and note that α_n → α in L²loc(⟨X⟩, P). Thus, we shall have completed our demonstration that ∫_0^· α(s) dX(s) exists for all α ∈ L²loc(⟨X⟩, P) once we have proved the following approximation result:

(3.4). LEMMA. Let A : [0,∞[ × E → [0,∞[ be a non-decreasing, P-almost surely continuous, progressively measurable function with A(0) ≡ 0. Given α ∈ L²loc(A, P), there is a sequence (α_n)_{n∈Z⁺} ⊂ L²loc(A, P) of bounded, P-almost surely continuous functions which tend to α in L²loc(A, P).

PROOF. Since the bounded elements of L²loc(A, P) are obviously dense in L²loc(A, P), we shall assume that α is bounded.
We first handle the special case when A(t) ≡ t for all t ≥ 0. To this end, choose ρ ∈ C₀^∞(]0,1[) so that ∫_0^1 ρ(t) dt = 1, and extend α to R × E by setting α(t) ≡ 0 for t < 0. Next, define α_n(t) := n ∫ α(t − s)ρ(ns) ds for t ≥ 0 and n ∈ Z⁺. Then it is easy to check that (α_n)_{n∈Z⁺} will serve.

To handle the general case, first note that it suffices for us to show that for each T > 0 and ε > 0 there exists a bounded P-almost surely continuous α′ ∈ L²loc(A, P) such that E^P[∫_0^T (α(t) − α′(t))² A(dt)] < ε². Given T and ε, choose M > 1 such that

  E^P[ ∫_0^T α(t)² A(dt) 1_{{A(T) ≥ M − 1}} ] < (ε/2)²

and η ∈ C^∞(R) so that 1_{[0,M−1]} ≤ η ≤ 1_{[−1,M]}. Set B(t) := ∫_0^t η(A(s))² A(ds) + t, t ≥ 0, and τ(t) := B⁻¹(t). Then (τ(t))_{t≥0} is a non-decreasing family of bounded stopping times. Set J_t := F_{τ(t)} and β(t) := α(τ(t)). Then β is a bounded (J_t)_{t≥0}-progressively measurable function, and so, by the preceding, we can find a bounded continuous (J_t)_{t≥0}-progressively measurable β′ such that

  E^P[ ∫_0^{T+M} ( β(t) − β′(t) )² dt ] < ε²/4.

Finally, define α′(t) := β′(B(t))η(A(t)). Then α′ is a bounded P-almost surely continuous element of L²loc(A, P), and

  ( E^P[ ∫_0^T (α(t) − α′(t))² A(dt) ] )^{1/2}
    ≤ ε/2 + ( E^P[ ∫_0^T ( α(t) − β′(B(t)) )² B(dt) ] )^{1/2}
    ≤ ε/2 + ( E^P[ ∫_0^{T+M} ( β(t) − β′(t) )² dt ] )^{1/2}
    < ε.

As a consequence of the preceding, we now know that ∫_0^· α(s) dX(s) exists for all X ∈ MartC² and α ∈ L²loc(⟨X⟩, P).
(3.5). EXERCISE. Let X ∈ MartC² be given.
i) Given two stopping times σ ≤ τ and α ∈ L²loc(⟨X⟩, P), show that 1_{[σ,τ[}(t)α(t) ∈ L²loc(⟨X⟩, P) and that

  ∫_0^T 1_{[σ,τ[}(t)α(t) dX(t) = ∫_{T∧σ}^{T∧τ} α(t) dX(t) := ∫_0^{T∧τ} α(t) dX(t) − ∫_0^{T∧σ} α(t) dX(t)  (a.s. P).

ii) Given β ∈ L²loc(⟨X⟩, P) and α ∈ L²loc(β²⟨X⟩, P), show that

  ∫_0^T α(t) d( ∫_0^t β(s) dX(s) ) = ∫_0^T α(s)β(s) dX(s)  (a.s. P).
Our next project is the derivation of the renowned Itô's formula. (Our presentation again follows that of Kunita and Watanabe.) Namely, let X := (X¹, …, X^M) ∈ (MartC²)^M and let Y : [0,∞[ × E → R^N be a P-almost surely continuous progressively measurable function of locally bounded variation such that |Y|(T) := ( Σ_{j=1}^N |Y^j|(T)² )^{1/2} ∈ L²(P) for each T > 0. Given f ∈ C_b^{2,1}(R^M × R^N), Itô's formula is the statement that

(3.6)  f(Z(T)) − f(Z(0)) = Σ_{i=1}^M ∫_0^T ∂_{x^i} f(Z(t)) dX^i(t) + Σ_{j=1}^N ∫_0^T ∂_{y^j} f(Z(t)) Y^j(dt)
                + ½ Σ_{i,i′=1}^M ∫_0^T ∂_{x^i}∂_{x^{i′}} f(Z(t)) ⟨X^i, X^{i′}⟩(dt)  (a.s. P),
where Z := (X, Y). It is clear that, since (3.6) is just an identification statement, we may assume that t ↦ Z(t, ξ) and t ↦ ⟨⟨X, X⟩⟩(t, ξ) := ( ⟨X^i, X^{i′}⟩(t, ξ) )_{i,i′=1}^M are continuous for all ξ ∈ E. In addition, it suffices to prove (3.6) when f ∈ C_b^∞(R^{M+N}). Thus, we shall proceed under these assumptions.

Given n ∈ Z⁺, define (τ_k^n)_{k∈N} so that τ_0^n := 0 and

  τ_{k+1}^n := ( τ_k^n + 1/n ) ∧ T ∧ inf{ t ≥ τ_k^n : max_{i=1,…,M} ( |X^i(t) − X^i(τ_k^n)| ∨ ( ⟨X^i⟩(t) − ⟨X^i⟩(τ_k^n) ) ) ∨ ‖Z(t) − Z(τ_k^n)‖ ≥ 1/n }.

Then, for each T > 0 and ξ ∈ E, τ_k^n(ξ) = T for all but a finite number of k's. Hence, f(Z(T)) − f(Z(0)) = Σ_{k=0}^∞ ( f(Z_{k+1}^n) − f(Z_k^n) ), where Z_k^n := (X_k^n, Y_k^n) := Z(τ_k^n). Clearly,

  f(Z_{k+1}^n) − f(Z_k^n) = ( f(X_{k+1}^n, Y_k^n) − f(X_k^n, Y_k^n) ) + ( f(X_{k+1}^n, Y_{k+1}^n) − f(X_{k+1}^n, Y_k^n) )
    = Σ_{i=1}^M ∂_{x^i} f(Z_k^n) Δ_k^n X^i + ½ Σ_{i,i′=1}^M ∂_{x^i}∂_{x^{i′}} f(Z_k^n) Δ_k^n ⟨X^i, X^{i′}⟩
      + Σ_{j=1}^N ∫_{τ_k^n}^{τ_{k+1}^n} ∂_{y^j} f(X_{k+1}^n, Y(t)) Y^j(dt) + R_k^n,

where Δ_k^n Ξ := Ξ(τ_{k+1}^n) − Ξ(τ_k^n) and

  R_k^n := ½ Σ_{i,i′=1}^M ( ∂_{x^i}∂_{x^{i′}} f(Ẑ_k^n) − ∂_{x^i}∂_{x^{i′}} f(Z_k^n) ) Δ_k^n X^i Δ_k^n X^{i′}
         + ½ Σ_{i,i′=1}^M ∂_{x^i}∂_{x^{i′}} f(Z_k^n) ( Δ_k^n X^i Δ_k^n X^{i′} − Δ_k^n ⟨X^i, X^{i′}⟩ ),

with Ẑ_k^n a point on the line joining (X_{k+1}^n, Y_k^n) to (X_k^n, Y_k^n). By (3.3),

  Σ_{k=0}^∞ ∂_{x^i} f(Z_k^n) Δ_k^n X^i = ∫_0^T ∂_{x^i} f(Z^n(s)) dX^i(s),

where Z^n(s) := Z_k^n for s ∈ [τ_k^n, τ_{k+1}^n[ and Z^n(s) := Z(T) for s ≥ T. Since Z^n(s) → Z(s) uniformly for s ∈ [0, T], we conclude that

  Σ_{k=0}^∞ ∂_{x^i} f(Z_k^n) Δ_k^n X^i → ∫_0^T ∂_{x^i} f(Z(s)) dX^i(s)  in L²(P).

Also, from standard integration theory,

  Σ_{k=0}^∞ ∂_{x^i}∂_{x^{i′}} f(Z_k^n) Δ_k^n ⟨X^i, X^{i′}⟩ → ∫_0^T ∂_{x^i}∂_{x^{i′}} f(Z(s)) ⟨X^i, X^{i′}⟩(ds)

and

  Σ_{k=0}^∞ ∫_{τ_k^n}^{τ_{k+1}^n} ∂_{y^j} f(X_{k+1}^n, Y(t)) Y^j(dt) → ∫_0^T ∂_{y^j} f(Z(s)) Y^j(ds)

in L¹(P). It therefore remains only to check that Σ_{k∈N} R_k^n → 0 in P-measure.

First observe that

  |( ∂_{x^i}∂_{x^{i′}} f(Ẑ_k^n) − ∂_{x^i}∂_{x^{i′}} f(Z_k^n) ) Δ_k^n X^i Δ_k^n X^{i′}| ≤ (C/n)( (Δ_k^n X^i)² + (Δ_k^n X^{i′})² )

and therefore that

  E^P[ Σ_{k=0}^∞ |( ∂_{x^i}∂_{x^{i′}} f(Ẑ_k^n) − ∂_{x^i}∂_{x^{i′}} f(Z_k^n) ) Δ_k^n X^i Δ_k^n X^{i′}| ] ≤ (2C/n) E^P[ |X(T) − X(0)|² ] → 0.
At the same time,

  E^P[ ( Σ_{k=0}^∞ ∂_{x^i}∂_{x^{i′}} f(Z_k^n) ( Δ_k^n X^i Δ_k^n X^{i′} − Δ_k^n ⟨X^i, X^{i′}⟩ ) )² ]
    = Σ_{k=0}^∞ E^P[ ( ∂_{x^i}∂_{x^{i′}} f(Z_k^n) )² ( Δ_k^n X^i Δ_k^n X^{i′} − Δ_k^n ⟨X^i, X^{i′}⟩ )² ]
    ≤ C Σ_{k=0}^∞ E^P[ ( Δ_k^n X^i Δ_k^n X^{i′} − Δ_k^n ⟨X^i, X^{i′}⟩ )² ]
    ≤ C′ Σ_{k=0}^∞ E^P[ (Δ_k^n X^i)⁴ + (Δ_k^n X^{i′})⁴ + ( Δ_k^n ⟨X^i⟩ )² + ( Δ_k^n ⟨X^{i′}⟩ )² ]
    ≤ (C″/n) Σ_{k=0}^∞ E^P[ (Δ_k^n X^i)² + (Δ_k^n X^{i′})² + Δ_k^n ⟨X^i⟩ + Δ_k^n ⟨X^{i′}⟩ ]
    = (2C″/n) E^P[ (X^i(T) − X^i(0))² + (X^{i′}(T) − X^{i′}(0))² ] −→ 0  as n → ∞,

where, in the next-to-last step, we have used the fact that, by the choice of the τ_k^n's, |Δ_k^n X^i| ∨ Δ_k^n⟨X^i⟩ ≤ 1/n.
Combining these, we now see that (3.6) holds.

The applications of Itô's formula are innumerable. One particularly beautiful one is the following derivation, due to Kunita and Watanabe, of a theorem proved originally by Lévy.

(3.7). THEOREM (Lévy). Let β ∈ (MartC²)^N and assume that ⟨⟨β, β⟩⟩(t) = tI, t ≥ 0 (i.e., ⟨β^i, β^j⟩(t) = tδ^{i,j}). Then ((β(t) − β(0))_{t≥0}, (σ(β(s) : 0 ≤ s ≤ t))_{t≥0}, P) is an N-dimensional Wiener process.

PROOF. We assume, without loss of generality, that β(0) ≡ 0. What we must show is that P ∘ β⁻¹ = W; and, by Corollary (1.13), this comes down to showing that

  ( ( ϕ(x(t)) − ½ ∫_0^t Δϕ(x(s)) ds )_{t≥0}, (M_t)_{t≥0}, P ∘ β⁻¹ )

is a martingale for every ϕ ∈ C₀^∞(R^N). Clearly this will follow if we show that

  ( ( ϕ(β(t)) − ½ ∫_0^t Δϕ(β(s)) ds )_{t≥0}, (F_t)_{t≥0}, P )

is a martingale. But, by Itô's formula:

  ϕ(β(t)) − ϕ(β(0)) − ½ ∫_0^t Δϕ(β(s)) ds = Σ_{i=1}^N ∫_0^t ∂_{x^i} ϕ(β(s)) dβ^i(s),

and so the proof is complete.
Given a right-continuous, P-almost surely continuous, (F_t)_{t≥0}-progressively measurable function β : [0,∞[ × E → R^N, we shall say that ((β(t))_{t≥0}, (F_t)_{t≥0}, P) is an N-dimensional Brownian motion if β ∈ (MartC²((F_t)_{t≥0}, P))^N, β(0) = 0, and ⟨⟨β, β⟩⟩(t) ≡ tI, t ≥ 0, (a.s. P).

(3.8). EXERCISE.
i) Let β : [0,∞[ × E → R^N be a right-continuous, P-almost surely continuous, progressively measurable function with β(0) ≡ 0. Show that ((β(t))_{t≥0}, (F_t)_{t≥0}, P) is an N-dimensional Brownian motion if and only if P(β(t) ∈ Γ | F_s) = ∫_Γ g(t − s, y − β(s)) dy, 0 ≤ s ≤ t and Γ ∈ B(R^N), where g(·,·) denotes the N-dimensional Gaussian kernel.
ii) Generalize Lévy's theorem by showing that if a and b are as in §II.1 and (P_{s,x} : (s,x) ∈ [0,∞[ × R^N) is the associated family of measures on Ω, then, for each (s,x), P_{s,x} is the unique P ∈ M₁(Ω) such that: P(x(0) = x) = 1,

  x(·) − ∫_0^· b(s + t, x(t)) dt ∈ ( MartC²((M_t)_{t≥0}, P) )^N,

and

  ⟨⟨ x(·) − ∫_0^· b(s + t, x(t)) dt , x(·) − ∫_0^· b(s + t, x(t)) dt ⟩⟩(T) = ∫_0^T a(s + t, x(t)) dt
for T ≥ 0, (a.s. P).

Although the class MartC² has many pleasing properties, it is not invariant under changes of coordinates. (To wit, even if f ∈ C^∞(R), f ∘ X will seldom be an element of MartC² simply because X is.) There are two reasons for this, the first of which is the question of integrability. To remove this first problem, we introduce the class MartCloc (= MartCloc((F_t)_{t≥0}, P)) of P-almost surely continuous local martingales. Namely, we say that X ∈ MartCloc if X : [0,∞[ × E → R is a right-continuous, P-almost surely continuous function for which there exists a non-decreasing sequence of stopping times (σ_n)_{n∈N} with the properties that σ_n → ∞ (a.s. P) and ((X^{σ_n}(t))_{t≥0}, (F_t)_{t≥0}, P) is a bounded martingale for each n (recall that X^σ(·) := X(· ∧ σ)). It is easy to check that MartCloc is a linear space. Moreover, given X ∈ MartCloc, there is a P-almost surely unique non-decreasing, P-almost surely continuous, progressively measurable function ⟨X⟩ such that ⟨X⟩(0) ≡ 0 and X² − ⟨X⟩ ∈ MartCloc. The uniqueness is an easy consequence of Corollary (2.24) (cf. (3.9)-ii) below). To prove existence, simply take ⟨X⟩(t) := sup_{n∈N} ⟨X^{σ_n}⟩(t), t ≥ 0. Finally, given X, Y ∈ MartCloc, ⟨X, Y⟩ := ¼(⟨X + Y⟩ − ⟨X − Y⟩) is the P-almost surely unique progressively measurable function of locally bounded variation which is P-almost surely continuous and satisfies ⟨X, Y⟩(0) := 0 and XY − ⟨X, Y⟩ ∈ MartCloc.

(3.9). EXERCISE.
i) Let X : [0,∞[ × E → R be a right-continuous, P-almost surely continuous, progressively measurable function. Show that ((X(t))_{t≥0}, (F_t)_{t≥0}, P) is a martingale (resp. X ∈ MartC²) if and only if X ∈ MartCloc and there is a non-decreasing sequence of stopping times (τ_n)_{n∈Z⁺} such that τ_n → ∞ (a.s. P) and (X(t ∧ τ_n))_{n∈Z⁺} is uniformly P-integrable (resp. sup_{n∈Z⁺} E^P[X(t ∧ τ_n)²] < ∞) for each t ≥ 0.
ii) Show that if X ∈ MartCloc and ζ := sup{t ≥ 0 : |X|(t) < ∞}, where |X|(t) denotes the total variation of X on [0, t], then X(t ∧ ζ) = X(0), t ≥ 0, (a.s. P).
iii) Let X ∈ MartCloc and let α : [0,∞[ × E → R be a progressively measurable function satisfying ∫_0^T α(t)² ⟨X⟩(dt) < ∞ (a.s. P) for all T ≥ 0. Show that there exists a ∫_0^· α(s) dX(s) ∈ MartCloc such that ⟨∫_0^· α(s) dX(s), Y⟩ = α⟨X, Y⟩ for all Y ∈ MartCloc, and that, up to a P-null set, there is only one such element of MartCloc. The quantity ∫_0^· α(s) dX(s) is again called the (Itô) stochastic integral of α with respect to X.
iv) Suppose that X ∈ (MartCloc)^M and that Y : [0,∞[ × E → R^N is a right-continuous, P-almost surely continuous, progressively measurable function of locally bounded variation. Set Z := (X, Y) and let f ∈ C^{2,1}(R^M × R^N) be given. Show that all the
quantities in (3.6) are still well-defined and that (3.6) continues to hold. We will continue to refer to this extension of (3.6) as Itô's formula.

(3.10). LEMMA. Let X ∈ MartCloc and let σ ≤ τ be stopping times such that ⟨X⟩(τ) − ⟨X⟩(σ) ≤ A for some A < ∞. Then

(3.11)  P( sup_{σ≤t≤τ} |X(t) − X(σ)| ≥ R ) ≤ 2 e^{−R²/2A}.

In particular, there exists for each q ∈ ]0,∞[ a universal C_q < ∞ such that

(3.12)  E^P[ sup_{σ≤t≤τ} |X(t) − X(σ)|^q ] ≤ C_q A^{q/2}.

PROOF. By replacing X with X_σ := X − X^σ, we see that it suffices to treat the case when X(0) ≡ 0, σ ≡ 0, and τ ≡ ∞. For n ∈ Z⁺, define ζ_n := inf{ t ≥ 0 : sup_{0≤s≤t} |X(s)| ≥ n }, and set X_n := X^{ζ_n} and Y_λ^n := exp( λX_n − (λ²/2)⟨X_n⟩ ). Then, by Itô's formula,

  Y_λ^n(·) = 1 + λ ∫_0^· Y_λ^n(s) dX_n(s) ∈ MartC².

Hence, by Doob's inequality:

  P( sup_{0≤t≤T} X_n(t) ≥ R ) ≤ P( sup_{0≤t≤T} Y_λ^n(t) ≥ e^{λR − λ²A/2} ) ≤ e^{−λR + λ²A/2}

for all T > 0 and λ > 0. After minimizing the right-hand side with respect to λ > 0, letting n, T → ∞, and then repeating the argument with −X replacing X, we obtain the required estimate. Clearly, (3.12) is an immediate consequence of (3.11).

(3.13). EXERCISE.
i) Suppose that X ∈ (MartCloc)^M and that τ is a stopping time for which Σ_{i=1}^M ⟨X^i⟩(τ) ≤ A (a.s. P) for some A < ∞. Let α : [0,∞[ × E → R^M be a progressively measurable function with the property that sup_{0≤s≤t} |α(s)| ≤ B exp( sup_{0≤s≤t} ‖X(s) − X(0)‖^γ ) (a.s. P) for each t ≥ 0 and some γ ∈ [0,2[ and B < ∞. Show that

  ( ( Σ_{i=1}^M ∫_0^{t∧τ} α^i(s) dX^i(s) )_{t≥0}, (F_t)_{t≥0}, P )

is a martingale which is L^q(P)-bounded for every q ∈ [1,∞[. In particular, show that if Σ_{i=1}^M ⟨X^i⟩(T) is P-almost surely bounded for each T ≥ 0 and if f ∈ C^{2,1}(R^M × R^N) satisfies the estimate

  max_{i=1,…,M} |∂_{x^i} f(x, y)| ≤ A e^{B‖x‖^γ},  (x, y) ∈ R^M × R^N,

for some A, B ∈ ]0,∞[, then the stochastic integrals occurring in Itô's formula are elements of MartC².
ii) Given X ∈ MartCloc, set E_X(t) := exp( X(t) − ½⟨X⟩(t) ) for t ≥ 0. Show that E_X is the P-almost surely unique Y ∈ MartCloc such that Y(t) = 1 + ∫_0^t Y(s) dX(s), t ≥ 0, (a.s. P). (Hint: To prove uniqueness, consider Y(t)/E_X(t).) Also, if ‖⟨X⟩(τ)‖_{L^∞(P)} < ∞ for some finite stopping time τ, show that ((E_X(t ∧ τ))_{t≥0}, (F_t)_{t≥0}, P) is a martingale, and that, for each q ∈ [1,∞[, ‖E_X(τ)‖_{L^q(P)} ≤ exp( (q − 1)‖⟨X⟩(τ)‖_{L^∞(P)} ).
The quantity E_X is sometimes called the Itô exponential of X.

The following exercise contains a discussion of an important and often useful representation theorem. Loosely speaking, what it says is that an X ∈ MartCloc "has the paths of a Brownian motion and uses ⟨X⟩ as its clock."

(3.14). EXERCISE.
i) Let ((X(t))_{t≥0}, (F_t)_{t≥0}, P) be a martingale, and for each t ≥ 0 let F̄_{t+} denote the P-completion of F_{t+} := ∩_{ε>0} F_{t+ε}. Show that ((X(t))_{t≥0}, (F̄_{t+})_{t≥0}, P) is again a martingale.
ii) Let Y : [0,∞[ × E → R be a measurable function such that t ↦ Y(t, ξ) is right-continuous for every ξ ∈ E. Assuming that, for each T > 0,

  sup_{0≤s<t≤T} E^P[ |Y(t) − Y(s)|^q ] / (t − s)^{1+α} < ∞

for some q ∈ [1,∞[ and α ∈ ]0,∞[, show that Y is P-almost surely continuous. (Hint: apply Theorem I-(2.11).)
iii) Let X ∈ MartCloc((F_t)_{t≥0}, P) and define τ(t) := sup{ s ≥ 0 : ⟨X⟩(s) ≤ t }. Show that t ↦ τ(t) is right-continuous and non-decreasing, and that, for each t ≥ 0, τ(t) is an (F_{s+})_{s≥0}-stopping time. Next, set J_t := F_{τ(t)+} (:= { A : A ∩ {τ(t) ≤ s} ∈ F_{s+} for all s ≥ 0 }), and define Z : [0,∞[ × E → R so that

  Z(t) := X(τ(t)) − X(0) if τ(t) < ∞,  and  Z(t) := X(∞) − X(0) if τ(t) = ∞,

where X(∞) := lim_{s→∞} X(s) when this limit exists in ]−∞,∞[ and X(0) otherwise. Show that ⟨X⟩(s) is a (J_t)_{t≥0}-stopping time for each s ≥ 0, that Z(0) = 0 (a.s. P), that ((Z(t))_{t≥0}, (J_t)_{t≥0}, P) is a martingale, and that E^P[|Z(t) − Z(s)|⁴] ≤ C₄(t − s)², 0 ≤ s < t < ∞ (where C₄ is the constant in (3.12)). Conclude that Z ∈ MartC²((J_t)_{t≥0}, P), and show that ⟨Z⟩(dt) ≤ dt (a.s. P). Finally, show that, for each T > 0, ⟨Z⟩(t) = t, t ∈ [0, T], (a.s. P) on {⟨X⟩(∞) > T}. In particular, if ⟨X⟩(∞) = ∞ (a.s. P), set B := Z − Z(0) and conclude that ((B(t))_{t≥0}, (J_t)_{t≥0}, P) is a one-dimensional Brownian motion and that

(3.15)  X(t) = X(0) + B(⟨X⟩(t)),  t ≥ 0,  (a.s. P).

iv) To extend the representation in (3.15) to cases in which ⟨X⟩(∞) = ∞ (a.s. P) may fail, proceed as follows. Let W denote the one-dimensional Wiener measure on (Ω, M)
and set Q := P ⊗ W and H_t := J_t ⊗ M_t, t ≥ 0. Let Z(·) be as in iii), but now define B : [0,∞[ × E × Ω → R by

  B(t, (ξ, ω)) := Z( t ∧ ⟨X⟩(∞, ξ), ξ ) − Z(0) + x(t, ω) − x( t ∧ ⟨X⟩(∞, ξ), ω ).

Show that ((B(t))_{t≥0}, (H_t)_{t≥0}, Q) is a one-dimensional Brownian motion and that (3.15) holds (a.s. Q).

As an application of the preceding, consider the following:

(3.16). EXERCISE. Let X ∈ MartCloc. Show that, (a.s. P),

  { lim_{s→∞} X(s) exists in ]−∞,∞[ } = { lim_{s→∞} X(s) exists in [−∞,∞] } = { ⟨X⟩(∞) < ∞ }.

(Hint: Prove that if β(·) is a one-dimensional Brownian motion, then limsup_{s→∞} β(s) = −liminf_{s→∞} β(s) = ∞ almost surely.)

REMARK. As a consequence of (3.16), we see that there is no hope of defining the quantity ∫_0^T α(s) dX(s) for α's which fail to satisfy ∫_0^T α(s)² ⟨X⟩(ds) < ∞.

We have seen in Lemma (3.10) that

  E^P[ sup_{0≤t≤T} |X(t) − X(0)|^q ] ≤ C_q ‖⟨X⟩(T)‖_{L^∞(P)}^{q/2}

for X ∈ MartCloc.
At least when q ∈ [2,∞[, we are now going to prove a refinement of this result. The inequalities which we have in mind are referred to as Burkholder's inequality; however, the proof which we are about to give is due to A. Garsia and takes full advantage of the fact that we are dealing with continuous martingales.

(3.17). THEOREM (Burkholder's inequality). For each q ∈ [2,∞[, all X ∈ MartCloc, and all stopping times τ:

(3.18)  a_q ‖ √(⟨X⟩(τ)) ‖_{L^q(P)} ≤ ‖ (X − X(0))*(τ) ‖_{L^q(P)} ≤ A_q ‖ √(⟨X⟩(τ)) ‖_{L^q(P)},

where a_q := 1/√(2q) and A_q := √(2q(q−1)) / ( 2(1 − 1/q)^{q/2} ).
PROOF. First note that it suffices to prove (3.18) when X(0) ≡ 0 and τ, X(τ), and ⟨X⟩(τ) are all bounded. Second, by replacing X by X^τ if necessary, we can reduce to the case when τ ≡ T < ∞ and X and ⟨X⟩ are bounded. Hence, we shall prove (3.18) under these conditions. In particular, this means that X ∈ MartC².

To prove the right-hand side, apply Itô's formula to write

  |X(T)|^q = q ∫_0^T sgn(X(t)) |X(t)|^{q−1} dX(t) + (q(q−1)/2) ∫_0^T |X(t)|^{q−2} ⟨X⟩(dt).

Then, by (2.11):

  E^P[ X*(T)^q ]^{1/q} ≤ (q/(q−1)) E^P[ |X(T)|^q ]^{1/q},

and

  E^P[ |X(T)|^q ] = (q(q−1)/2) E^P[ ∫_0^T |X(t)|^{q−2} ⟨X⟩(dt) ]
    ≤ (q(q−1)/2) E^P[ X*(T)^{q−2} ⟨X⟩(T) ]
    ≤ (q(q−1)/2) ( E^P[ X*(T)^q ] )^{1−2/q} ( E^P[ ⟨X⟩(T)^{q/2} ] )^{2/q},

from which the right-hand side of (3.18) is immediate.

To prove the left-hand side of (3.18), note that, by Itô's formula:

  X(T)⟨X⟩(T)^{(q−2)/4} = Y(T) + ∫_0^T X(t) ( ⟨X⟩^{(q−2)/4} )(dt),

where Y(·) := ∫_0^· ⟨X⟩(t)^{(q−2)/4} dX(t). Hence, |Y(T)| ≤ 2X*(T)⟨X⟩(T)^{(q−2)/4}. At the same time:

  ⟨Y⟩(T) = ∫_0^T ⟨X⟩(t)^{(q−2)/2} ⟨X⟩(dt) = (2/q) ⟨X⟩(T)^{q/2}.

Thus,

  E^P[ ⟨X⟩(T)^{q/2} ] = (q/2) E^P[ ⟨Y⟩(T) ] = (q/2) E^P[ Y(T)² ]
    ≤ 2q E^P[ X*(T)² ⟨X⟩(T)^{(q−2)/2} ]
    ≤ 2q ( E^P[ X*(T)^q ] )^{2/q} ( E^P[ ⟨X⟩(T)^{q/2} ] )^{1−2/q},

from which the left-hand side of (3.18) follows.
REMARK. It turns out that (3.18) actually holds for all q ∈ ]0,∞[ with appropriate choices of a_q and A_q. When q ∈ ]1,2], this is again a result due to D. Burkholder; for q ∈ ]0,1] it was first proved by D. Burkholder and R. Gundy using a quite intricate argument. However, for continuous martingales, A. Garsia showed that the proof for q ∈ ]0,2] can be again greatly simplified by clever applications of Itô's formula (cf. [Ikeda and Watanabe, 1981, Theorem 3.1]).

Before returning to our main line of development, we shall take up a particularly beautiful application of Itô's formula to the study of Brownian paths.

(3.19). THEOREM. Let ((β(t))_{t≥0}, (F_t)_{t≥0}, P) be a one-dimensional Brownian motion, and assume that the F_t's are P-complete. Then there exists a P-almost surely unique function ℓ : [0,∞[ × R × E → [0,∞[ such that
i) for each x ∈ R, (t, ξ) ↦ ℓ(t, x, ξ) is progressively measurable, for each ξ ∈ E, (t, x) ↦ ℓ(t, x, ξ) is continuous, and for each (x, ξ) ∈ R × E, ℓ(0, x, ξ) = 0 and t ↦ ℓ(t, x, ξ) is non-decreasing;
ii) for all bounded measurable ϕ : R → R,

(3.20)  ∫ ϕ(y) ℓ(t, y) dy = ½ ∫_0^t ϕ(β(s)) ds,  t ≥ 0,  (a.s. P).

Moreover, for each y ∈ R:

(3.21)  ℓ(t, y) = β(t) ∨ y − 0 ∨ y − ∫_0^t 1_{[y,∞[}(β(s)) dβ(s),  t ≥ 0,  (a.s. P).
PROOF. Clearly, i) and ii) uniquely determine ℓ. To see how one might proceed to construct ℓ, note that (3.20) can be interpreted as the statement that

  "ℓ(t, y) = ½ ∫_0^t δ( β(s) − y ) ds",

where δ is the Dirac δ-function. This interpretation explains the origin of (3.21). Indeed, (· ∨ y)′ = 1_{[y,∞[}(·) and (· ∨ y)″ = δ(· − y). Hence, (3.21) is precisely the expression for ℓ predicted by Itô's formula. In order to justify this line of reasoning, it will be necessary to prove that there is a version of the right-hand side of (3.21) which has the properties demanded by i).

To begin with, for fixed y, let t ↦ k(t, y) be the right-hand side of (3.21). We will first check that k(·, y) is P-almost surely non-decreasing. To this end, choose ρ ∈ C₀^∞(R)⁺ having integral 1, and define f_n(x) := n ∫_R ρ(n(x − ζ))(ζ ∨ y) dζ for n ∈ Z⁺. Then, by Itô's formula:

  f_n(β(t)) − f_n(0) − ∫_0^t f_n′(β(s)) dβ(s) = ½ ∫_0^t f_n″(β(s)) ds  (a.s. P).

Because f_n″ ≥ 0, we conclude that the left-hand side of the preceding is P-almost surely non-decreasing as a function of t. In addition, an easy calculation shows that the left-hand side tends, P-almost surely, to k(·, y) uniformly on finite intervals. Thus, k(·, y) is P-almost surely non-decreasing.

We next show that, for each y, k(·, y) can be modified on a set of P-measure 0 in such a way that the modified function is continuous with respect to (t, y). Using Theorem I-(2.11) in the same way as was suggested in the hint for (3.14)-ii), one sees that this reduces to checking

  E^P[ sup_{0≤t≤T} ( ∫_0^t 1_{[y,∞[}(β(s)) dβ(s) − ∫_0^t 1_{[x,∞[}(β(s)) dβ(s) )⁴ ] ≤ C_T (y − x)²

for some C_T < ∞ and all (T, x, y) ∈ [0,∞[ × R × R. But, by (2.11), this comes down to estimating E^P[ ( ∫_0^T 1_{[x,y[}(β(s)) dβ(s) )⁴ ] for x < y, and by (3.18), this in turn reduces to estimating E^P[ ( ∫_0^T 1_{[x,y[}(β(s)) ds )² ]. But

  E^P[ ( ∫_0^T 1_{[x,y[}(β(s)) ds )² ] = 2 E^P[ ∫_0^T 1_{[x,y[}(β(t)) dt ∫_0^t 1_{[x,y[}(β(s)) ds ]
    = 2 ∫_0^T dt ∫_0^t ds ∫_x^y dζ g(s, ζ) ∫_x^y g(t − s, η − ζ) dη,

where g is the one-dimensional Gauss kernel, and the required estimate is immediate from here.

We now know that there is an ℓ which satisfies both i) and (3.21). To prove that it also satisfies (3.20), set M(t, y) := β(t) ∨ y − 0 ∨ y − ℓ(t, y). Then

  M(·, y) = ∫_0^· 1_{[y,∞[}(β(s)) dβ(s)  (a.s. P)

for each y. Hence, if ϕ ∈ C₀(R) and we use Riemann approximations to compute ∫ ϕ(y)M(t, y) dy, then it is clear that ∫ ϕ(y)M(·, y) dy ∈ MartCloc. In particular, we now see that

  Φ(β(·)) − Φ(β(0)) − ∫ ϕ(y) ℓ(·, y) dy ∈ MartCloc,

where Φ(x) := ∫ (x ∨ y)ϕ(y) dy. On the other hand, by Itô's formula,

  Φ(β(·)) − Φ(β(0)) − ½ ∫_0^· ϕ(β(s)) ds ∈ MartCloc.

Thus, by (3.9)-ii), ∫ ϕ(y) ℓ(·, y) dy = ½ ∫_0^· ϕ(β(s)) ds (a.s. P), and clearly (3.20) follows from this.
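Both (3.20) and (3.21) suggest simulation-based estimates of ℓ(1, 0); since the stochastic integral in (3.21) has mean 0, E^P[ℓ(1,0)] = E^P[β(1) ∨ 0] = 1/√(2π) ≈ 0.399. The sketch below (ours; Euler discretization) compares the occupation-time estimate from (3.20), with ϕ = 1_{[−ε,ε]}/(2ε), against the Tanaka-formula estimate from (3.21) with y = 0.

```python
import math
import random

random.seed(6)

n_steps, n_paths, T, eps = 1000, 1200, 1.0, 0.05
dt = T / n_steps

occ_mean = tanaka_mean = 0.0
for _ in range(n_paths):
    b = occ = stoch = 0.0
    for _ in range(n_steps):
        db = random.gauss(0.0, math.sqrt(dt))
        if abs(b) <= eps:
            occ += dt       # occupation time of [-eps, eps]
        if b >= 0:
            stoch += db     # left-endpoint sum for int 1_[0,inf[(beta) dbeta
        b += db
    occ_mean += occ / (4 * eps)          # (3.20) with phi = 1_[-eps,eps]/(2 eps)
    tanaka_mean += max(b, 0.0) - stoch   # (3.21) with y = 0
occ_mean /= n_paths
tanaka_mean /= n_paths
# both estimate E[l(1,0)] = E[beta(1) v 0] = 1/sqrt(2 pi) ~ 0.399
print(round(occ_mean, 3), round(tanaka_mean, 3))
```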
REMARK. The function ℓ(·, y) described above is called the local time of β at y. The quantity ℓ was first discussed by P. Lévy, and its existence was first proved by H. Trotter. The beautiful and simple development given above is the idea of H. Tanaka, and (3.21) is sometimes referred to as Tanaka's formula.

(3.22). EXERCISE. The notation is the same as that in Theorem (3.19).
i) Show that

(3.23)  |β(t)| = ∫_0^t sgn(β(s)) dβ(s) + 2ℓ(t, 0),  t ≥ 0,  (a.s. P).

ii) Show that if A ∈ M_{0+} (:= ∩_{ε>0} M_ε), then W(A) ∈ {0,1}. (Hint: Note that, for any Φ ∈ C_b(Ω) and t > 0, E^W[Φ ∘ θ_t 1_A] = E^W[E^{W_{x(t)}}[Φ] 1_A]. Let t ↓ 0 and conclude that E^W[Φ 1_A] = E^W[Φ]W(A) for all Φ ∈ C_b(Ω), and therefore that A is independent of M.)
iii) Set B(·) := ∫_0^· sgn(β(s)) dβ(s) and note that the process ((B(t))_{t≥0}, (F_t)_{t≥0}, P) is a one-dimensional Brownian motion. Using ii), show that

  P( inf_{0<s≤t} B(s) < 0 for all t > 0 ) = 1,

and conclude from this and (3.23) that P( ℓ(t, 0) > 0 for all t > 0 ) = 1.
iv) Show that, for each y, ℓ({t ≥ 0 : β(t, ξ) ≠ y}, y, ξ) = 0 for P-almost every ξ, and conclude that ℓ(dt, y, ξ) is singular with respect to dt for P-almost every ξ. In particular, ℓ(·, 0) is P-almost surely a continuous, non-decreasing function on [0,∞[ such that ℓ(dt, 0) is singular with respect to dt and ℓ(t, 0) > 0 for all t > 0.

We at last pick up the thread which we dropped after introducing the class MartCloc. Recall that we were attempting to find a good description of the class of processes generated by MartC² under changes of coordinates. We are now ready to give that description. Denote by BVC the class of right-continuous, P-almost surely continuous, progressively measurable Y : [0,∞[ × E → R which are of locally bounded variation. We will say that ((Z(t))_{t≥0}, (F_t)_{t≥0}, P) is a P-almost surely continuous semimartingale, and will write Z ∈ S.MartC, if Z can be written as the sum of a martingale part X ∈ MartCloc and a locally bounded variation part Y ∈ BVC. Note that, up to a P-null set, the martingale part X and the locally bounded variation part Y of a Z ∈ S.MartC are uniquely determined (cf. (3.9)-ii)). Moreover, by Itô's formula, if Z ∈ (S.MartC)^N and f ∈ C²(R^N), then f ∘ Z ∈ S.MartC. Thus, S.MartC is certainly invariant under changes of coordinates.

Given Z ∈ S.MartC with martingale part X and locally bounded variation part Y, we shall use ⟨Z⟩ to denote ⟨X⟩; and if Z′ is a second element of S.MartC with associated parts X′ and Y′, we use ⟨Z, Z′⟩ to denote ⟨X, X′⟩. Also, if α : [0,∞[ × E → R is a progressively measurable function satisfying

(3.24)  ∫_0^T α(t)² ⟨Z⟩(dt) ∨ ∫_0^T |α(t)| |Y|(dt) < ∞,  T > 0,  (a.s. P),

we define ∫_0^· α(s) dZ(s) := ∫_0^· α(s) dX(s) + ∫_0^· α(s) Y(ds). Notice that, in this notation, Itô's formula for P-almost surely continuous semimartingales becomes

(3.25)  f(Z(t)) − f(Z(0)) = Σ_{i=1}^N ∫_0^t ∂_{z^i} f(Z(s)) dZ^i(s) + ½ Σ_{i,j=1}^N ∫_0^t ∂_{z^i}∂_{z^j} f(Z(s)) ⟨Z^i, Z^j⟩(ds)
for Z ∈ (S.MartC)^N and f ∈ C²(R^N).

(3.26). EXERCISE. Let Z ∈ (S.MartC)^N and f ∈ C²(R^N). Show that, for any Y ∈ S.MartC, ⟨f ∘ Z, Y⟩ = Σ_{i=1}^N (∂_{z^i} f ∘ Z)⟨Z^i, Y⟩ (a.s. P).

We conclude this section with a brief discussion of the Stratonovic integral as interpreted by Itô. Namely, given X, Y ∈ S.MartC, define the Stratonovic integral ∫_0^· X(s) ∘ dY(s) of X with respect to Y (the "∘" in front of the dY(s) is put there to emphasize that this is not an Itô integral) to be the element of S.MartC given by ∫_0^· X(s) dY(s) + ½⟨X, Y⟩(·). Although the Stratonovic integral appears to be little more than a strange exercise in notation, it turns out to be a very useful device. The origin of all its virtues is contained in the form which Itô's formula takes when Stratonovic integrals are used. Namely, from (3.25) and (3.26), we see that Itô's formula becomes the fundamental theorem of calculus:

(3.27)  f(Z(t)) − f(Z(0)) = Σ_{i=1}^N ∫_0^t ∂_{z^i} f(Z(s)) ∘ dZ^i(s)

for all Z ∈ (S.MartC)^N and f ∈ C³(R^N). The major drawback to the Stratonovic integral is that it requires that the integrand be a semimartingale (this is the reason why we restricted f to lie in C³(R^N)). However, in some circumstances, Itô has shown how even this drawback can be overcome.

(3.28). EXERCISE. Given X, Y, Z ∈ S.MartC, show that

(3.29)  ∫_0^t X(s) ∘ d( ∫_0^s Y(u) ∘ dZ(u) ) = ∫_0^t (XY)(s) ∘ dZ(s).
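The relation between the two integrals is transparent for X = Y = β: (3.27) with f(z) = z²/2 gives ∫_0^T β ∘ dβ = β(T)²/2, while the Itô integral is ∫_0^T β dβ = (β(T)² − T)/2. In a discretization (sketch below, ours), the Stratonovic integral corresponds to the trapezoidal sum and the Itô integral to the left-endpoint sum.

```python
import math
import random

random.seed(7)

n, T = 4000, 1.0
dt = T / n
b = [0.0]
for _ in range(n):
    b.append(b[-1] + random.gauss(0.0, math.sqrt(dt)))
bT = b[-1]

# left-endpoint sum: the Ito integral  int_0^T beta d(beta)
ito = sum(b[k] * (b[k + 1] - b[k]) for k in range(n))
# trapezoidal sum: the Stratonovic integral (telescopes exactly to beta(T)^2/2)
strat = sum(0.5 * (b[k] + b[k + 1]) * (b[k + 1] - b[k]) for k in range(n))

print(round(ito, 4), round((bT * bT - T) / 2, 4))  # Ito limit: (beta(T)^2 - T)/2
print(round(strat, 4), round(bT * bT / 2, 4))      # Stratonovic: beta(T)^2 / 2
```

The difference strat − ito is the discrete quadratic variation ½Σ(Δβ)² ≈ T/2 = ½⟨β⟩(T), exactly the correction term in the definition of the Stratonovic integral.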
CHAPTER III
The Martingale Problem Formulation of Diffusion Theory

§III.1. Formulation and some basic facts

Recall the notation Ω, M, and (M_t)_{t≥0} introduced in §I.3. Given bounded measurable functions a : [0,∞[ × R^N → S⁺(R^N) (cf. the second paragraph of §II.1) and b : [0,∞[ × R^N → R^N, define t ↦ L_t by II-(1.1). Motivated by the results in Corollary II-(1.13), we now pose the martingale problem for (L_t)_{t≥0}. Namely, we say that P ∈ M₁(Ω) solves the martingale problem for (L_t)_{t≥0} starting from (s, x) ∈ [0,∞[ × R^N, and write P ∈ MP((s, x); (L_t)_{t≥0}), if

(MP)  i) P(x(0) = x) = 1;
      ii) ( ( ϕ(x(t)) − ∫_0^t (L_{s+u}ϕ)(x(u)) du )_{t≥0}, (M_t)_{t≥0}, P ) is a martingale for every ϕ ∈ C₀^∞(R^N).

Given ϕ ∈ C²(R^N), set

(1.1)  X_{s,ϕ}(t) := ϕ(x(t)) − ∫_0^t (L_{s+u}ϕ)(x(u)) du.
If P ∈ MP (s, x); (L t ) t ¾0 , then X s,ϕ ∈ MartC2 ((M t ) t ¾0 , P) for every ϕ ∈ C0∞ (RN ). To ¬ ¶ compute X s ,ϕ , note that Zt Z t 2 2 2 Ls+u ϕ (x(u)) du + Ls +u ϕ (x(u)) du X s ,ϕ (t ) = ϕ(x(t )) − 2ϕ(x(t ))
= ϕ(x(t ))2 − 2X s ,ϕ (t ) = X s ,ϕ2 (t ) +
Z
Z
0 t
Ls+u ϕ (x(u)) du −
0
Z
0 t
0
Ls +u ϕ (x(u)) du
2
t 2
Ls +u ϕ (x(u)) du
0
− 2X s ,ϕ (t )
Z 0
t
Ls+u ϕ (x(u)) du −
Ls +u ϕ (x(u)) du .
0
Applying Itô’s formula, we see that Z t X s ,ϕ (t )2 − Ls +u ϕ 2 (x(u)) − 2 ϕ Ls+u ϕ (x(u)) du 0
2
t
Z
t ¾0
, (M t ) t ¾0 , P
is a martingale. Noting that L s +u ϕ 2 (y) − 2 ϕ L s +u ϕ (y) = 〈∇ϕ, a∇ϕ〉 (s + u, y), we conclude that Z· ¬ ¶ (1.2) X s ,ϕ (·) = 〈∇ϕ, a∇ϕ〉 (s + u, x(u)) du, (a.s. P). 0
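A concrete sanity check of (1.1): take P to be one-dimensional Wiener measure (a ≡ 1, b ≡ 0, so L = ½ d²/dy²) and φ(y) = y² (not compactly supported, but harmless for this illustration). Then X_{s,φ}(t) = x(t)² − t, and the martingale property forces E^P[X_{s,φ}(t)] = 0 for every t. A minimal Monte Carlo sketch, with all discretization choices arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
paths, n, T = 20_000, 200, 1.0
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), (paths, n))
x = np.cumsum(dW, axis=1)            # Brownian paths, x(0) = 0

# phi(y) = y^2 and L = (1/2) d^2/dy^2 give (L phi) = 1, hence the
# compensated process X_phi(t) = x(t)^2 - t should be a martingale.
X = x**2 - dt * np.arange(1, n + 1)

# martingale starting at 0  =>  E[X_phi(t)] = 0 uniformly in t
print(np.abs(X.mean(axis=0)).max())  # small, of order paths**(-1/2)
```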
REMARK. As an immediate consequence of (1.2), we see that when a ≡ 0, X_{s,φ}(t) = X_{s,φ}(0) = φ(x), t ≥ 0, (a.s. P) for each φ ∈ C₀^∞(R^N). From this it is clear that

    x(t, ω) = x + ∫₀^t b(s+u, x(u,ω)) du, t ≥ 0,

for P-almost every ω ∈ Ω. In other words, when a ≡ 0 and P ∈ MP((s,x); (L_t)_{t≥0}), P-almost every ω ∈ Ω is an integral curve of the time-dependent vector field b(·,·) starting from (s,x). This is, of course, precisely what we should expect on the basis of II-(1.16) and II-(1.17).

Again suppose that P ∈ MP((s,x); (L_t)_{t≥0}). Given φ ∈ C²(R^N), note that the quantity X_{s,φ} in (1.1) is an element of Mart_C^loc((M_t)_{t≥0}, P). Indeed, by an easy approximation procedure, it is clear that X_{s,φ} ∈ Mart_C²((M_t)_{t≥0}, P) when φ ∈ C_b²(R^N). For general φ ∈ C²(R^N), set σ_n := inf{t ≥ 0 : |x(t)| ≥ n}, and choose η_n ∈ C₀^∞(R^N) so that η_n(y) = 1 for |y| ≤ n+1, n ∈ Z⁺. Then σ_n ↑ ∞ and X_{s,φ}(· ∧ σ_n) = X_{s,η_nφ}(· ∧ σ_n) ∈ Mart_C²((M_t)_{t≥0}, P). At the same time, it is clear that (1.2) continues to hold for all φ ∈ C²(R^N). In particular, if

(1.3)    x̄(t) := x(t) − ∫₀^t b(s+u, x(u)) du, t ≥ 0,

then x̄ ∈ Mart_C^loc((M_t)_{t≥0}, P)^N and

(1.4)    ⟨⟨x̄, x̄⟩⟩(t) = ∫₀^t a(s+u, x(u)) du, t ≥ 0, (a.s. P).
Using (1.4) and applying Lemma II-(3.10) (cf. the proof of Lemma II-(1.6) as well), we now see that

(1.5)    P(sup_{s≤t≤T} ‖x̄(t) − x̄(s)‖ ≥ R) ≤ 2N exp(−R² / (2AN(T−s))), 0 ≤ s < T,

where A := sup_{t,y} ‖a(t,y)‖_op. From (1.5) it follows that for each q ∈ ]0,∞[ there is a universal C(q,N) ∈ [1,∞[ such that

(1.6)    ‖ sup_{s≤t≤T} ‖x̄(t) − x̄(s)‖ ‖_{L^q(P)} ≤ C(q,N) (A(T−s))^{1/2}, 0 ≤ s < T.

In particular, this certainly means that x̄ ∈ Mart_C²((M_t)_{t≥0}, P)^N. In addition, by Itô's formula, for any f ∈ C^{1,2}([0,∞[×R^N), P-almost surely it is true that

(1.7)    f(·, x(·)) − f(0, x) − ∫₀^· ((∂_u + L_{s+u})f)(u, x(u)) du = ∫₀^· ∇_x f(u, x(u)) · dx̄(u),

where ∫₀^t ∇_x f(u, x(u)) · dx̄(u) = Σ_{i=1}^N ∫₀^t ∂_{x_i} f(u, x(u)) dx̄^i(u), and, by (3.13)-i), if we have ‖∇_y f(t,y)‖ ≤ A e^{B‖y‖^γ}, (t,y) ∈ [0,∞[×R^N, for some A, B ∈ [0,∞[ and γ ∈ [0,2[, then the right-hand side of (1.7) is an element of Mart_C²((M_t)_{t≥0}, P).

(1.8). EXERCISE. Show that P ∈ MP((s,x); (L_t)_{t≥0}) if and only if P(x(0) = x) = 1, x̄ ∈ Mart_C^loc((M_t)_{t≥0}, P)^N, and (1.4) holds.

(1.9). EXERCISE. The considerations discussed thus far in this section can all be generalized as follows. Let a : [0,∞[×Ω → S⁺(R^N) and b : [0,∞[×Ω → R^N be bounded
(M_t)_{t≥0}-progressively measurable functions, and define t ↦ L_t by analogy with II-(1.1). Define MP(x; (L_t)_{t≥0}) to be the set of P ∈ M₁(Ω) such that P(x(0) = x) = 1 and

    (φ(x(t)) − ∫₀^t (L_uφ)(x(u)) du)_{t≥0}, (M_t)_{t≥0}, P

is a martingale for all φ ∈ C₀^∞(R^N). Define x̄(t) := x(t) − ∫₀^t b(u) du, and show that P ∈ MP(x; (L_t)_{t≥0}) if and only if P(x(0) = x) = 1, x̄ ∈ Mart_C^loc((M_t)_{t≥0}, P)^N, and ⟨⟨x̄, x̄⟩⟩(·) = ∫₀^· a(u) du (a.s. P). Also, show that if P ∈ MP(x; (L_t)_{t≥0}), then (1.5), (1.6), and (1.7) continue to hold and that the right-hand side of (1.7) belongs to Mart_C²((M_t)_{t≥0}, P) under the same conditions as those given following (1.7).
(1.10). REMARK. When a and b do not depend on t ∈ [0,∞[, denote L₀ by L and note that MP((s,x); (L_t)_{t≥0}) is independent of s ∈ [0,∞[. Thus, we shall use MP(x; L) in place of MP((s,x); (L_t)_{t≥0}) for time-independent coefficients a and b.

(1.11). EXERCISE. Time-dependent martingale problems can be converted into time-independent ones via the following trick. Set Ω̃ := C([0,∞[, R^{N+1}), and identify the set C([0,∞[, R^{N+1}) with C([0,∞[, R) × Ω. Show that P ∈ MP((s,x); (L_t)_{t≥0}) if and only if P̃ ∈ MP(x̃; L̃), where x̃ := (s,x), L̃ := ∂_t + L_t, P̃ := δ_{s+·} ⊗ P, and δ_{s+·} ∈ M₁(C([0,∞[, R)) denotes the delta mass at the path t ↦ s+t.

The following result is somewhat technical and can be avoided in most practical situations. Nonetheless, it is of some theoretical interest, and it smoothes the presentation of the general theory.

(1.12). THEOREM. Assume that for each (s,x) ∈ [0,∞[×R^N the set MP((s,x); (L_t)_{t≥0}) contains precisely one element P_{s,x}. Then (s,x) ↦ P_{s,x} ∈ M₁(Ω) is a Borel measurable map.

PROOF. In view of (1.11), we lose no generality by assuming that a and b are independent of time. Thus we do so, and we shall show that x ↦ P_x is measurable under the assumption that P_x is the unique element of MP(x; L) for each x ∈ R^N.

Define Γ to be the subset of M₁(Ω) consisting of those P with the property that (φ(x(t)) − ∫₀^t (Lφ)(x(u)) du)_{t≥0}, (M_t)_{t≥0}, P is a martingale for all φ ∈ C₀^∞(R^N). Because the martingale property need only be checked for φ running over a countable dense subset of C₀^∞(R^N), pairs of rational times, and a countable generating family for each M_t, we know that there is a sequence (X_n)_{n∈Z⁺} of bounded measurable functions on Ω such that P ∈ Γ if and only if E^P[X_n] = 0 for all n ∈ Z⁺. Thus, Γ is a Borel measurable subset of M₁(Ω). In the same way, one can show that the set Λ consisting of those P ∈ M₁(Ω) such that P ∘ (x(0))⁻¹ = δ_x for some x ∈ R^N is a Borel subset of M₁(Ω). Thus, Γ₀ := ∪_{x∈R^N} MP(x; L) = Γ ∩ Λ is a Borel subset of M₁(Ω).

At the same time, Γ₀ ∋ P ↦ P ∘ (x(0))⁻¹ ∈ M₁(R^N) is clearly a measurable mapping, and, by assumption, it is injective. In particular, since M₁(Ω) and M₁(R^N) are Polish spaces (see I-(2.5)), the general theory of Borel mappings on Polish spaces says that the inverse map δ_x ↦ P_x is also a measurable mapping (cf. [Parthasarathy, 2005, Theorem 3.9]). But x ↦ δ_x is certainly measurable, and therefore we have now shown that x ↦ P_x is measurable. □
(1.13). EXERCISE. Suppose that τ is an (M_t)_{t≥0}-stopping time. Show that M_τ = σ(x(t ∧ τ) : t ≥ 0), and conclude that M_τ is countably generated. (See [Stroock and Varadhan, 2006, Lemma 1.3.3] for help.) Caveat: unless it is otherwise stated, stopping times will be (M_t)_{t≥0}-stopping times.
Our next result plays an important role in developing criteria for determining when MP((s,x); (L_t)_{t≥0}) contains at most one element (cf. Corollary (1.15) below), as well as allowing us to derive the strong Markov property as an essentially immediate consequence of such a uniqueness statement.

(1.14). THEOREM. Let P ∈ MP((s,x); (L_t)_{t≥0}) and a stopping time τ be given. Suppose that A is a sub-σ-algebra of M_τ with the property that ω ↦ x(τ(ω), ω) is A-measurable, and let ω ↦ P_ω be a r.c.p.d. of P | A. Then there is a P-null set Λ ∈ A such that P_ω ∘ θ_{τ(ω)}⁻¹ ∈ MP((s+τ(ω), x(τ(ω),ω)); (L_t)_{t≥0}) for each ω ∉ Λ.

PROOF. Let ω ↦ P^τ_ω be a r.c.p.d. of P | M_τ. Then, by Theorem II-(2.20), for each φ ∈ C₀^∞(R^N),

    (φ(x(t)) − ∫₀^t (L_{s+τ(ω)+u}φ)(x(u)) du)_{t≥0}, (M_t)_{t≥0}, P^τ_ω ∘ θ_{τ(ω)}⁻¹

is a martingale for all ω outside of a P-null set Λ(φ) ∈ M_τ, and from this it is clear that there is a P-null set Λ ∈ M_τ such that P^τ_ω ∘ θ_{τ(ω)}⁻¹ ∈ MP((s+τ(ω), x(τ(ω),ω)); (L_t)_{t≥0}) for all ω ∉ Λ.

To complete the proof, first note that P_ω(Λ) = 0 for all ω outside of a P-null set Λ′ ∈ A. Second, note that P_ω = ∫ P^τ_{ω′} P_ω(dω′) for all ω outside of a P-null set Λ″ ∈ A. Finally, since P_ω ∘ θ_{τ(ω)}⁻¹(x(0) = x(τ(ω),ω)) = 1 for all ω outside of a P-null set Λ‴ ∈ A, we see that P_ω ∘ θ_{τ(ω)}⁻¹ ∈ MP((s+τ(ω), x(τ(ω),ω)); (L_t)_{t≥0}) for all ω ∉ Λ′ ∪ Λ″ ∪ Λ‴. □
(1.15). COROLLARY. Suppose that, for all (s,x) ∈ [0,∞[×R^N, P, Q ∈ MP((s,x); (L_t)_{t≥0}), and t ≥ 0, P∘x(t)⁻¹ = Q∘x(t)⁻¹. Then, for each (s,x) ∈ [0,∞[×R^N, MP((s,x); (L_t)_{t≥0}) contains at most one element.

PROOF. Let P, Q ∈ MP((s,x); (L_t)_{t≥0}) be given. We will prove by induction that, for all n ∈ Z⁺ and 0 ≤ t₁ < ⋯ < t_n,

    P ∘ (x(t₁), …, x(t_n))⁻¹ = Q ∘ (x(t₁), …, x(t_n))⁻¹.

When n = 1, there is nothing to prove. Next, assume that this equality holds for n, and let 0 ≤ t₁ < ⋯ < t_n < t_{n+1} be given. Set A := σ(x(t₁), …, x(t_n)), and let ω ↦ P_ω and ω ↦ Q_ω be r.c.p.d.'s of P | A and Q | A, respectively. Since P and Q agree on A, there is a Λ ∈ A such that P(Λ) = Q(Λ) = 0 and both P_ω ∘ θ_{t_n}⁻¹ and Q_ω ∘ θ_{t_n}⁻¹ are elements of MP((s+t_n, x(t_n,ω)); (L_t)_{t≥0}) for all ω ∉ Λ. In particular, P_ω ∘ x(t_{n+1})⁻¹ = Q_ω ∘ x(t_{n+1})⁻¹ for all ω ∉ Λ, and so the inductive step is complete. □

(1.16). REMARK. A subset J of bounded measurable functions on a measurable space (E,B) is called a determining set if, for all μ, ν ∈ M₁(E), ∫φ dμ = ∫φ dν for all φ ∈ J implies that μ = ν. Now suppose that there is a determining set J ⊂ C_b(R^N) such that for every T > 0 and φ ∈ J there is a u = u_{T,φ} ∈ C_b^{1,2}([0,T[×R^N) satisfying (∂_t + L_t)u = 0 in [0,T[×R^N and lim_{t↑T} u(t,·) = φ. Given P ∈ MP((s,x); (L_t)_{t≥0}) and T > 0, we see that for all φ ∈ J:

    E^P[φ(x(T))] = u_{s+T,φ}(s, x).

In particular, P ∘ x(T)⁻¹ is uniquely determined for all T > 0 by the condition that P ∈ MP((s,x); (L_t)_{t≥0}). Hence, for each (s,x) ∈ [0,∞[×R^N, MP((s,x); (L_t)_{t≥0}) contains at most one element. Similarly, suppose that for each φ ∈ J there is a u = u_{T,φ} ∈
C_b^{1,2}([0,T[×R^N) satisfying (∂_t + L_t)u = −φ in [0,T[×R^N and lim_{t↑T} u(t,·) = 0. Then P ∈ MP((s,x); (L_t)_{t≥0}) implies that

    E^P[∫₀^T φ(x(t)) dt] = u_{s+T,φ}(s, x)

for all T > 0 and φ ∈ J. From this it is clear that if P, Q ∈ MP((s,x); (L_t)_{t≥0}), then P∘x(t)⁻¹ = Q∘x(t)⁻¹ for all t > 0, and so MP((s,x); (L_t)_{t≥0}) contains at most one element. It should be clear that this last remark is simply a re-statement of the result in Corollary II-(1.13).

We now introduce a construction which will serve us well when it comes to proving the existence of solutions to martingale problems, as well as reducing the question of uniqueness to local considerations.

(1.17). LEMMA. Let T > 0 and ψ ∈ C([0,T], R^N) be given. Suppose that Q ∈ M₁(Ω) satisfies Q(x(0) = ψ(T)) = 1. Then there is a unique R = δ_ψ ⊗_T Q ∈ M₁(Ω) such that

    R(A ∩ θ_T⁻¹B) = 1_A(ψ) Q(B)

for all A ∈ M_T and B ∈ M (for A ∈ M_T, membership in A depends only on a path's restriction to [0,T], so 1_A(ψ) makes sense).

PROOF. The uniqueness assertion is clear. To prove the existence, set R̃ := δ_ψ × Q on Ω × Ω and define Φ : Ω × Ω → Ω so that

    x(t, Φ(ω,ω′)) = { x(t,ω) if t ∈ [0,T];  x(t−T, ω′) − x(0, ω′) + x(T, ω) if t > T }.

Then R := R̃ ∘ Φ⁻¹ has the required property. □
(1.18). THEOREM. Let τ be a stopping time, and suppose that Ω ∋ ω ↦ Q_ω ∈ M₁(Ω) is an M_τ-measurable map satisfying Q_ω(x(0) = x(τ(ω),ω)) = 1 for each ω ∈ Ω. Given P ∈ M₁(Ω), there is a unique R = P ⊗_τ Q· ∈ M₁(Ω) such that R|_{M_τ} = P|_{M_τ} and ω ↦ δ_ω ⊗_{τ(ω)} Q_ω is a r.c.p.d. of R | M_τ. In addition, suppose that Ω × [0,∞[ × Ω ∋ (ω, t, ω′) ↦ Y_ω(t, ω′) ∈ R is a map such that, for each T > 0, (ω, t, ω′) ↦ Y_ω(t, ω′) is M_τ ⊗ B([0,T]) ⊗ M_T-measurable and, for each ω, ω′ ∈ Ω, t ↦ Y_ω(t, ω′) is a right-continuous function with Y_ω(0, ω′) = 0. Given a right-continuous progressively measurable function X : [0,∞[×Ω → R, define Z := X ⊗_τ Y· by

    Z(t, ω) := { X(t,ω) if t ∈ [0, τ(ω)];  X(τ(ω), ω) + Y_ω(t − τ(ω), θ_{τ(ω)}ω) if t > τ(ω) }.

Then Z is a right-continuous progressively measurable function. Moreover, if Z(t) ∈ L¹(R) for all t ≥ 0, then (Z(t))_{t≥0}, (M_t)_{t≥0}, R is a martingale if and only if (X(t∧τ))_{t≥0}, (M_t)_{t≥0}, P is a martingale and (Y_ω(t))_{t≥0}, (M_t)_{t≥0}, Q_ω is a martingale for P-almost every ω ∈ Ω.

PROOF. The uniqueness of R is obvious; to prove existence, set R(·) := ∫ δ_ω ⊗_{τ(ω)} Q_ω(·) P(dω), and note that

    R(A ∩ B) = ∫_A δ_ω ⊗_{τ(ω)} Q_ω(B) P(dω)

for all A ∈ M_τ and B ∈ M. To see that Z is progressively measurable, define

    Z̃(t, ω, ω′) := X(t ∧ τ(ω), ω) + Y_ω((t − τ(ω)) ∨ 0, θ_{τ(ω)}ω′),
and check that Z̃ is (M_t ⊗ M_t)_{t≥0}-progressively measurable and that Z(t, ω) = Z̃(t, ω, ω). To complete the proof, simply apply Theorem II-(2.20). □

We now turn to the problem of constructing solutions. Let a : [0,∞[×R^N → S⁺(R^N) and b : [0,∞[×R^N → R^N be bounded continuous functions, and define (L_t)_{t≥0} accordingly. For (s,x) ∈ [0,∞[×R^N, define Ψ^{a,b}_{s,x} : Ω → Ω so that

    x(t, Ψ^{a,b}_{s,x}ω) := x + √(a(s,x)) x(t, ω) + b(s,x) t, t ≥ 0,

and define W^{a,b}_{s,x} := W ∘ (Ψ^{a,b}_{s,x})⁻¹. It is easily seen that x̄ ∈ Mart_C²((M_t)_{t≥0}, W^{a,b}_{s,x})^N, where x̄(t) := x(t) − b(s,x) t, and that ⟨⟨x̄, x̄⟩⟩(t) = a(s,x) t, t ≥ 0. Next, for n ∈ Z⁺, define P^{n,k}_x for k ∈ N so that

    P^{n,0}_x := W^{a,b}_{0,x}  and  P^{n,k}_x := P^{n,k−1}_x ⊗_{(k−1)/n} W^{a,b}_{(k−1)/n, x((k−1)/n)} for k ∈ Z⁺.

Set P^n_x := P^{n,n²}_x and define

    L^n_t := ½ Σ_{i,j=1}^N a_{i,j}((⌊nt⌋∧n²)/n, x((⌊nt⌋∧n²)/n)) ∂_{x_i}∂_{x_j} + Σ_{i=1}^N b_i((⌊nt⌋∧n²)/n, x((⌊nt⌋∧n²)/n)) ∂_{x_i}.

Then P^n_x ∈ MP((0,x); (L^n_t)_{t≥0}) (cf. (1.9) above). In particular, for each T > 0 there is a C(T) < ∞ such that

    sup_{n∈Z⁺} E^{P^n_x}[‖x(t) − x(s)‖⁴] ≤ C(T)(t − s)², 0 ≤ s, t ≤ T.

Hence, (P^n_x)_{n∈Z⁺} is relatively compact in M₁(Ω) for each x ∈ R^N. Because, for each φ ∈ C₀^∞(R^N),

    (L^n_t φ)(x(t, ω)) → (L_t φ)(x(t, ω)) as n → ∞,

uniformly for (t, ω) in compact subsets of [0,∞[×Ω, our construction will be complete once we have available the result contained in the following exercise.

(1.19). EXERCISE. Let E be a Polish space, and suppose that F ⊂ C_b(E) is a uniformly bounded family of functions which are equicontinuous on each compact subset of E. Show that if μ_n → μ in M₁(E), then ∫φ dμ_n → ∫φ dμ uniformly for φ ∈ F. In particular, if (φ_n)_{n∈Z⁺} ⊂ C_b(E) is uniformly bounded and φ_n → φ uniformly on compacts, then ∫φ_n dμ_n → ∫φ dμ whenever μ_n → μ in M₁(E).

Referring to the paragraph preceding (1.19), we now see that if (P^{n′}_x)_{n′∈Z⁺} is any convergent subsequence of (P^n_x)_{n∈Z⁺}, and if P denotes its limit, then P(x(0) = x) = 1 and, for all 0 ≤ t₁ < t₂, all M_{t₁}-measurable Φ ∈ C_b(Ω), and all φ ∈ C₀^∞(R^N):

    E^P[(φ(x(t₂)) − φ(x(t₁))) Φ] = E^P[(∫_{t₁}^{t₂} (L_t φ)(x(t)) dt) Φ].

From this it is clear that P ∈ MP((0,x); (L_t)_{t≥0}). By replacing a and b with a(s+·,·) and b(s+·,·), we also see that there is at least one P ∈ MP((s,x); (L_t)_{t≥0}) for each (s,x) ∈ [0,∞[×R^N. In other words, we have now proved the following existence theorem:
(1.20). THEOREM. Let a : [0,∞[×R^N → S⁺(R^N) and b : [0,∞[×R^N → R^N be bounded continuous functions, and define (L_t)_{t≥0} accordingly. Then, for each (s,x) ∈ [0,∞[×R^N, there is at least one element of MP((s,x); (L_t)_{t≥0}).
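The construction behind Theorem (1.20) freezes the coefficients at the left endpoint of each time interval, which in modern terminology is the Euler (Euler-Maruyama) scheme. A minimal one-dimensional sketch; the Ornstein-Uhlenbeck coefficients below are illustrative only, not taken from the text:

```python
import numpy as np

def euler_paths(x0, b, sigma, T, n_steps, n_paths, rng):
    """Euler scheme: freeze b and sigma at the left endpoint of each step."""
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    for k in range(n_steps):
        t = k * dt
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + b(t, x) * dt + sigma(t, x) * dW
    return x

# illustrative coefficients: dx = -x dt + dW (Ornstein-Uhlenbeck)
rng = np.random.default_rng(2)
ends = euler_paths(1.0, lambda t, y: -y, lambda t, y: np.ones_like(y),
                   T=1.0, n_steps=256, n_paths=5000, rng=rng)
print(ends.mean())   # the exact mean of x(1) is e^{-1} ~ 0.368
```

As the coefficient-freezing interval shrinks, the laws of these approximations are tight, and any limit point solves the martingale problem, exactly as in the proof above.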
(1.21). EXERCISE. Suppose that a and b are bounded, measurable, and have the property that R^N ∋ x ↦ ∫₀^T a(t,x) dt and R^N ∋ x ↦ ∫₀^T b(t,x) dt are continuous for each T > 0. Show that the corresponding martingale problem still has a solution starting at each (s,x) ∈ [0,∞[×R^N.

We now have the basic existence result and uniqueness criteria for martingale problems coming from diffusion operators (i.e., operators of the sort in Corollary II-(1.13)). However, before moving on, it may be useful to record a summary of the rewards which follow from proving that a martingale problem is well-posed in the sense that precisely one solution exists for each starting point (s,x).

(1.22). THEOREM. Let a and b be bounded measurable functions, suppose that the martingale problem for the corresponding (L_t)_{t≥0} is well-posed, and let {P_{s,x} : (s,x) ∈ [0,∞[×R^N} be the associated family of solutions. Then (s,x) ↦ P_{s,x} is measurable, and for all stopping times τ, P_{s,x} = P_{s,x} ⊗_τ P_{τ, x(τ)}. In particular, ω ↦ δ_ω ⊗_{τ(ω)} P_{τ(ω), x(τ(ω),ω)} is a r.c.p.d. of P_{s,x} | M_τ for each stopping time τ. Finally, if R^N ∋ x ↦ ∫₀^T a(t,x) dt and R^N ∋ x ↦ ∫₀^T b(t,x) dt are continuous for each T > 0, then [0,∞[×R^N ∋ (s,x) ↦ P_{s,x} ∈ M₁(Ω) is continuous.

PROOF. The measurability of (s,x) ↦ P_{s,x} is proved in Theorem (1.12). Next, let τ be a stopping time. Then, by Theorem (1.18), it is easy to check that, for all φ ∈ C₀^∞(R^N), (X_{s,φ}(t))_{t≥0}, (M_t)_{t≥0}, P_{s,x} ⊗_τ P_{τ,x(τ)} is a martingale (cf. (1.1) for the notation X_{s,φ}). Hence, P_{s,x} ⊗_τ P_{τ,x(τ)} ∈ MP((s,x); (L_t)_{t≥0}), and so, by uniqueness, P_{s,x} ⊗_τ P_{τ,x(τ)} = P_{s,x}.

Finally, suppose that R^N ∋ x ↦ ∫₀^T a(t,x) dt and R^N ∋ x ↦ ∫₀^T b(t,x) dt are continuous for each T > 0. Then, for each φ ∈ C₀^∞(R^N), the map [0,∞[×[0,∞[×Ω ∋ (s, t, ω) ↦ X_{s,φ}(t, ω) is continuous. Now let (s_n, x_n) → (s,x) and assume that P_{s_n,x_n} → P. Then, by (1.19),

    ∫ X_{s,φ}(t) Φ dP = lim_{n→∞} ∫ X_{s_n,φ}(t) Φ dP_{s_n,x_n}

for all t ≥ 0, φ ∈ C₀^∞(R^N), and Φ ∈ C_b(Ω). Hence, P ∈ MP((s,x); (L_t)_{t≥0}), and so P = lim_{n→∞} P_{s_n,x_n}. At the same time, by (1.6) and Kolmogorov's criterion, {P_{s,x} : s ≥ 0 and ‖x‖ ≤ R} is relatively compact in M₁(Ω) for each R ≥ 0; combined with the preceding, this leads immediately to the conclusion that (s,x) ↦ P_{s,x} is continuous. □

(1.23). EXERCISE. For each n ∈ Z⁺, let aⁿ and bⁿ be given bounded measurable coefficients, and let P_n be a solution to the corresponding martingale problem starting from some point (s_n, x_n). Assume that (s_n, x_n) → (s,x) and that aⁿ → a and bⁿ → b uniformly on compacts, where a and b are bounded measurable coefficients such that x ↦ ∫₀^T a(t,x) dt and x ↦ ∫₀^T b(t,x) dt are continuous. If the martingale problem corresponding to a and b starting from (s,x) has precisely one solution P_{s,x}, show that P_n → P_{s,x} as n → ∞.
§III.2. The martingale problem and stochastic integral equations

Let a : [0,∞[×R^N → S⁺(R^N) and b : [0,∞[×R^N → R^N be bounded measurable functions, and define t ↦ L_t accordingly. When a ≡ 0, we saw (cf. the remark following (1.2)) that P ∈ MP((s,x); (L_t)_{t≥0}) if and only if x(T) = x + ∫₀^T b(s+t, x(t)) dt,
T ≥ 0, (a.s. P). We now want to see what can be said when a does not vanish identically. In order to understand what we have in mind, assume that N = 1 and that a never vanishes. Given P ∈ MP((s,x); (L_t)_{t≥0}), define

    β(T) := ∫₀^T a(s+t, x(t))^{−1/2} dx̄(t), T ≥ 0,

where x̄(T) := x(T) − ∫₀^T b(s+t, x(t)) dt, T ≥ 0. Then ⟨β, β⟩(dt) = (a(s+t, x(t))^{−1/2})² a(s+t, x(t)) dt = dt, and so (β(t))_{t≥0}, (M_t)_{t≥0}, P is a one-dimensional Brownian motion. In addition,

    x̄(T) − x = ∫₀^T dx̄(t) = ∫₀^T √(a(s+t, x(t))) dβ(t), T ≥ 0, (a.s. P),

and so x(·) satisfies the stochastic integral equation

    x(T) = x + ∫₀^T √(a(s+t, x(t))) dβ(t) + ∫₀^T b(s+t, x(t)) dt, T ≥ 0, (a.s. P).
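This time-change construction of β is easy to test numerically: simulate a path of dx = √(a(x)) dW for a strictly positive a, form the increments of β = ∫ a(x)^{−1/2} dx̄, and check that β accumulates quadratic variation at unit rate. A sketch; the choice a(y) = 1 + y² (with b ≡ 0) is illustrative only, and in this discretization dβ recovers exactly the Gaussian increments used to build the path:

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 1.0, 100_000
dt = T / n

# a(y) = 1 + y^2 never vanishes, b = 0: Euler path of dx = sqrt(a(x)) dW
x = np.empty(n + 1)
x[0] = 0.0
dW = rng.normal(0.0, np.sqrt(dt), n)
for k in range(n):
    x[k + 1] = x[k] + np.sqrt(1.0 + x[k]**2) * dW[k]

# beta(T) = int_0^T a(x(t))^{-1/2} dxbar(t); here xbar = x since b = 0
dbeta = np.diff(x) / np.sqrt(1.0 + x[:-1]**2)
print(np.sum(dbeta**2))   # quadratic variation ~ T, i.e. beta is a Brownian motion
```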
Our first goal in this section is to generalize the preceding representation theorem. However, before doing so, we must make a brief digression into the theory of stochastic integration with respect to vector-valued martingales.

Referring to the notation defined in §II.2, let d ∈ Z⁺ and X ∈ Mart_C²((F_t)_{t≥0}, P)^d be given. Define L²_loc((F_t)_{t≥0}, ⟨⟨X,X⟩⟩, P) to be the space of (F_t)_{t≥0}-progressively measurable θ : [0,∞[×E → R^d such that

    E^P[∫₀^T ⟨θ(t), ⟨⟨X,X⟩⟩(dt) θ(t)⟩_{R^d}] < ∞, T ≥ 0.

Note that L²_loc((F_t)_{t≥0}, tr⟨⟨X,X⟩⟩, P)^d can be identified as a dense subspace of L²_loc((F_t)_{t≥0}, ⟨⟨X,X⟩⟩, P) (to see the density, simply take θ_n(t) := 1_{[0,n[}(‖θ(t)‖) θ(t) to approximate θ ∈ L²_loc((F_t)_{t≥0}, ⟨⟨X,X⟩⟩, P)). Next, for θ ∈ L²_loc((F_t)_{t≥0}, tr⟨⟨X,X⟩⟩, P)^d, define

(2.1)    ∫₀^T θ(t) dX(t) := Σ_{i=1}^d ∫₀^T θ_i(t) dX^i(t), T ≥ 0,

and observe that

    E^P[sup_{0≤t≤T} |∫₀^t θ(u) dX(u)|²] ≤ 4 E^P[|∫₀^T θ(t) dX(t)|²] = 4 E^P[∫₀^T ⟨θ(t), ⟨⟨X,X⟩⟩(dt) θ(t)⟩_{R^d}].

Hence, there is a unique continuous mapping

    L²_loc((F_t)_{t≥0}, ⟨⟨X,X⟩⟩, P) ∋ θ ↦ ∫₀^· θ(t) dX(t) ∈ Mart_C²((F_t)_{t≥0}, P)

such that ∫₀^· θ(t) dX(t) is given by (2.1) whenever θ ∈ L²_loc((F_t)_{t≥0}, tr⟨⟨X,X⟩⟩, P)^d.

(2.2). EXERCISE. Given θ ∈ L²_loc((F_t)_{t≥0}, ⟨⟨X,X⟩⟩, P), show that ∫₀^· θ(t) dX(t) is the unique Y ∈ Mart_C²((F_t)_{t≥0}, P) such that

    ⟨Y, ∫₀^· η(t) dX(t)⟩(dt) = ⟨θ(t), ⟨⟨X,X⟩⟩(dt) η(t)⟩_{R^d}, (a.s. P),
for all η ∈ L²_loc((F_t)_{t≥0}, ⟨⟨X,X⟩⟩, P). In particular, show that if θ ∈ L²_loc((F_t)_{t≥0}, ⟨⟨X,X⟩⟩, P) and τ is an (F_t)_{t≥0}-stopping time, then

    ∫₀^{T∧τ} θ(t) dX(t) = ∫₀^T 1_{[0,τ[}(t) θ(t) dX(t), T ≥ 0, (a.s. P).
Next, suppose that σ : [0,∞[×E → Hom(R^d, R^N) is an (F_t)_{t≥0}-progressively measurable map which satisfies

(2.3)    E^P[∫₀^T tr(σ(t) ⟨⟨X,X⟩⟩(dt) σ(t)†)] < ∞, T ≥ 0.

We then define ∫₀^· σ(t) dX(t) ∈ Mart_C²((F_t)_{t≥0}, P)^N so that

    ⟨θ, ∫₀^· σ(t) dX(t)⟩_{R^N} = ∫₀^· (σ(t)†θ) dX(t), (a.s. P),
for each θ ∈ R^N.

(2.4). EXERCISE. Let X ∈ Mart_C²((F_t)_{t≥0}, P)^d and an (F_t)_{t≥0}-progressively measurable σ : [0,∞[×E → Hom(R^d, R^N) satisfying (2.3) be given.

i) If Y ∈ Mart_C²((F_t)_{t≥0}, P)^e and τ : [0,∞[×E → Hom(R^e, R^N) is an (F_t)_{t≥0}-progressively measurable function which satisfies

    E^P[∫₀^T tr(τ(t) ⟨⟨Y,Y⟩⟩(dt) τ(t)†)] < ∞, T > 0,

show that

    ⟨⟨∫₀^· σ(t) dX(t), ∫₀^· τ(t) dY(t)⟩⟩(dt) = σ(t) ⟨⟨X,Y⟩⟩(dt) τ(t)†, (a.s. P),

where ⟨⟨X,Y⟩⟩ := (⟨X^i, Y^j⟩)_{1≤i≤d, 1≤j≤e}.

ii) Show that ∫₀^· σ(t) dX(t) is the unique Y ∈ Mart_C²((F_t)_{t≥0}, P)^N such that Y(0) = 0 and

    ⟨⟨(X, Y), (X, Y)⟩⟩(dt) = [ I ; σ(t) ] ⟨⟨X,X⟩⟩(dt) [ I  σ(t)† ], (a.s. P),

where [ I ; σ(t) ] denotes the (d+N)×d block matrix with blocks I and σ(t).

iii) Next, show that if τ : [0,∞[×E → Hom(R^N, R^M) is an (F_t)_{t≥0}-progressively measurable function such that (2.3) is satisfied by both σ and τσ, then

    ∫₀^· τ(t) d(∫₀^t σ(s) dX(s)) = ∫₀^· τ(t)σ(t) dX(t), (a.s. P).
The following lemma addresses a rather pedantic measurability question.

(2.5). LEMMA. Let a ∈ S⁺(R^N), denote by π the orthogonal projection of R^N onto Range(a), and let ã be the element of S⁺(R^N) satisfying ãa = aã = π and ã ≡ 0 on Range(a)^⊥. Then π = lim_{ε↓0} (a + εI)⁻¹a, and ã = lim_{ε↓0} (a + εI)⁻¹π. Next, suppose that σ ∈ Hom(R^d, R^N) and that a = σσ†. Let π_σ denote the orthogonal projection of R^d onto Range(σ†). Then Range(a) = Range(σ) and σ†ãσ = π_σ. In particular, a ↦ π and a ↦ ã are measurable functions of a, and σ ↦ π_σ is a measurable function of σ.

PROOF. Set a_ε := (a + εI)⁻¹ and π_ε := a_ε a. Then 0 ≤ π_ε ≤ I. Moreover, if η ∈ Range(a)^⊥, then η ∈ Null(a), and so π_ε η = 0; whereas, if η ∈ Range(a), then there is a ξ such that η = aξ, and so π_ε η = η − ε π_ε ξ → η as ε ↓ 0. Hence, π_ε → π. Also, if η ∈ Range(a)^⊥, then a_ε π η = 0; if η ∈ Range(a), then there is a ξ ∈ Range(a) such that η = aξ, and so a_ε π η = a_ε η = π_ε ξ → ξ = ãη as ε ↓ 0. Hence, a_ε π → ã. Since, for each ε > 0, a ↦ a_ε is a smooth map, we now see that a ↦ π and a ↦ ã are measurable maps.

Now suppose that a = σσ†. Clearly, Range(a) ⊆ Range(σ). On the other hand, if η ∈ Range(σ), then there exists a ξ ∈ Null(σ)^⊥ = Range(σ†) such that η = σξ. Hence, choosing η′ so that ξ = σ†η′, we then have η = σσ†η′ = aη′; from this we conclude that Range(a) = Range(σ). Finally, to see that σ†ãσ = π_σ, note that if η ∈ Range(σ†)^⊥, then η ∈ Null(σ), and so σ†ãση = 0. On the other hand, if η ∈ Range(σ†), then there is a ξ ∈ Null(σ†)^⊥ = Range(σ) = Range(a) such that η = σ†ξ, and so σ†ãση = σ†ãaξ = σ†πξ = σ†ξ = η. Hence, σ†ãσ = π_σ. □

We are at last ready to prove the representation theorem alluded to above.

(2.6). THEOREM. Let a : [0,∞[×R^N → S⁺(R^N) and b : [0,∞[×R^N → R^N be bounded measurable functions, and suppose that P ∈ MP((s,x); (L_t)_{t≥0}) for some (s,x) ∈ [0,∞[×R^N. Given a measurable σ : [0,∞[×R^N → Hom(R^d, R^N) satisfying a = σσ†, there is a d-dimensional Brownian motion (β(t))_{t≥0}, (F_t)_{t≥0}, Q on some probability space (E, F, Q) and a continuous (F_t)_{t≥0}-progressively measurable function X : [0,∞[×E → R^N such that

(2.7)    X(T) = x + ∫₀^T σ(s+t, X(t)) dβ(t) + ∫₀^T b(s+t, X(t)) dt, T ≥ 0, (a.s. Q),

and P = Q ∘ X(·)⁻¹. In particular, if d = N and a is never singular (i.e., a(t,y) > 0 for all (t,y) ∈ [0,∞[×R^N), then we can take E = Ω, Q = P, X(·) = x(·), and

    β(T) = ∫₀^T (σ†a⁻¹)(s+t, x(t)) dx̄(t), T ≥ 0,

where x̄(T) := x(T) − ∫₀^T b(s+t, x(t)) dt, T ≥ 0.

PROOF. Let E = C([0,∞[, R^N × R^d) = C([0,∞[, R^N) × C([0,∞[, R^d), F = B(E), and Q = P ⊗ W, where W is the d-dimensional Wiener measure on Ω_d := C([0,∞[, R^d). Given ξ ∈ E, let Z(T,ξ) := (X(T,ξ), Y(T,ξ)) ∈ R^N × R^d denote the position of the path ξ at time T, set F_t := σ(Z(u) : 0 ≤ u ≤ t) for t ≥ 0, and note that (by the second part of II-(2.31)) Z̄ ∈ Mart_C²((F_t)_{t≥0}, Q)^{N+d} with

    ⟨⟨Z̄, Z̄⟩⟩(dt) = [ a(s+t, X(t))  0 ; 0  I_{d×d} ] dt,

where Z̄(T) := (X̄(T), Y(T)) and X̄(T) := X(T) − ∫₀^T b(s+t, X(t)) dt. Next, define π(t,y) and π_σ(t,y) to be the orthogonal projections of R^N and R^d onto Range(a(t,y)) and Range(σ†(t,y)), respectively. Set π_σ^⊥ := I_{d×d} − π_σ, and define β(·) by

    β(T) := ∫₀^T [σ†ã  π_σ^⊥](s+t, X(t)) dZ̄(t) = ∫₀^T (σ†ã)(s+t, X(t)) dX̄(t) + ∫₀^T π_σ^⊥(s+t, X(t)) dY(t), T ≥ 0.

Then

    ⟨⟨β, β⟩⟩(dt) = [σ†ã  π_σ^⊥] [ a  0 ; 0  I_{d×d} ] [ãσ ; π_σ^⊥](s+t, X(t)) dt = (σ†ã a ã σ + π_σ^⊥)(s+t, X(t)) dt = I_{d×d} dt,

since σ†ã a ã σ = σ†π ã σ = σ†ã σ = π_σ. Hence, (β(t))_{t≥0}, (F_t)_{t≥0}, Q is a d-dimensional Brownian motion. Moreover, since σσ†ã = aã = π and σπ_σ^⊥ = 0, we see that

    ∫₀^T σ(s+t, X(t)) dβ(t) = ∫₀^T π(s+t, X(t)) dX̄(t)
      = X(T) − x − ∫₀^T b(s+t, X(t)) dt − ∫₀^T π^⊥(s+t, X(t)) dX̄(t),

where π^⊥ := I_{N×N} − π. At the same time,

    ⟨⟨∫₀^· π^⊥ dX̄, ∫₀^· π^⊥ dX̄⟩⟩(dt) = (π^⊥ a π^⊥)(s+t, X(t)) dt = 0,

and so ∫₀^· π^⊥ dX̄ = 0 (a.s. Q). We have therefore proved that X(·) satisfies (2.7) with this choice of (β(t))_{t≥0}, (F_t)_{t≥0}, Q, and clearly P = Q ∘ X(·)⁻¹. Moreover, if N = d and a is never singular, then π_σ^⊥ ≡ 0, and so we could have carried out the whole procedure on (Ω, M, P) instead of (E, F, Q). □
(2.8). REMARK. It is not true in general that the X(·) in (2.7) is a measurable function of the β(·) in that equation. To dramatize this point, we look at the case when N = d = 1, a ≡ 1, b ≡ 0, s = 0, x = 0, and σ(x) = sgn(x). Obviously, (x(t))_{t≥0}, (M_t)_{t≥0}, P is a one-dimensional Brownian motion, and, by (3.22)-i), we see that in this case β(T) = |x(T)| − 2ℓ(T, 0) (a.s. P), where ℓ(·, 0) is the local time of x(·) at 0. In particular, since ℓ(T, 0) = lim_{ε↓0} (1/4ε) ∫₀^T 1_{[0,ε[}(|x(t)|) dt (a.s. P), β(·) is measurable with respect to the P-completion A of σ(|x(t)| : t ≥ 0). On the other hand, if x(·) were A-measurable, then there would exist a measurable function Φ : C([0,∞[, [0,∞[) → R such that x(1, ω) = Φ(|x(·, ω)|) for every ω ∈ Ω which is not in a P-null set Λ. Moreover, since P(−Λ) = P(Λ), we could assume that Λ = −Λ. But this would mean that x(1, ω) = Φ(|x(·, ω)|) = Φ(|x(·, −ω)|) = x(1, −ω) = −x(1, ω) for ω ∉ Λ, and so we would have P(x(1) = 0) = 1, which is clearly false. Hence, x(·) is not A-measurable.

In spite of the preceding remark, (2.7) is often a very important tool with which to study solutions to martingale problems. Its usefulness depends on our knowing enough about the smoothness of σ and b to conclude from (2.7) that X(·) can be expressed as a measurable functional of β(·). The basic result in this direction is contained in the next statement.

(2.9). THEOREM (Itô). Let σ : [0,∞[×R^N → Hom(R^d, R^N) and b : [0,∞[×R^N → R^N be measurable functions with the property that for each T > 0 there exists a C(T) < ∞ such that

(2.10)    sup_{0≤t≤T} ‖σ(t,0)‖_HS ∨ ‖b(t,0)‖ ≤ C(T)  and  sup_{0≤t≤T} ‖σ(t,y′) − σ(t,y)‖_HS ∨ ‖b(t,y′) − b(t,y)‖ ≤ C(T) ‖y′ − y‖, y, y′ ∈ R^N.

Denote by W the standard d-dimensional Wiener measure on (Ω, M). Then there is for each (s,x) ∈ [0,∞[×R^N a right-continuous, (M_t)_{t≥0}-progressively measurable map
Φ_{s,x} : [0,∞[×Ω → R^N such that

    Φ_{s,x}(T) = x + ∫₀^T σ(s+t, Φ_{s,x}(t)) dx(t) + ∫₀^T b(s+t, Φ_{s,x}(t)) dt, T ≥ 0, (a.s. W).

Moreover, if (β(t))_{t≥0}, (F_t)_{t≥0}, Q is any d-dimensional Brownian motion on some probability space (E, F, Q), and if X : [0,∞[×E → R^N is a right-continuous, (F_t)_{t≥0}-progressively measurable function for which (2.7) holds Q-almost surely, then X(·) = Φ_{s,x}(·, β(∗)) (a.s. Q) on {ξ ∈ E : β(∗, ξ) is continuous}. In particular, Q ∘ X⁻¹ = W ∘ Φ_{s,x}⁻¹.

PROOF. We may and will assume that s = 0 and x = 0. In order to construct the mapping Φ := Φ_{0,0} on (Ω, M, W), we begin by setting Φ₀(·) ≡ 0, and for n ∈ Z⁺ we define Φ_n inductively by

    Φ_n(T) := ∫₀^T σ(t, Φ_{n−1}(t)) dx(t) + ∫₀^T b(t, Φ_{n−1}(t)) dt, T ≥ 0.

Set Δ_n(T) := sup_{0≤t≤T} ‖Φ_n(t) − Φ_{n−1}(t)‖ for T ≥ 0, and observe that

    E^W[Δ₁(T)²] ≤ 2E^W[sup_{0≤t≤T} ‖∫₀^t σ(u,0) dx(u)‖²] + 2E^W[sup_{0≤t≤T} ‖∫₀^t b(u,0) du‖²]
      ≤ 8E^W[‖∫₀^T σ(u,0) dx(u)‖²] + 2E^W[T ∫₀^T ‖b(u,0)‖² du]
      ≤ 8 ∫₀^T ‖σ(t,0)‖²_HS dt + 2T²C(T)²
      ≤ (8 + 2T) C(T)² T.

Similarly,

    E^W[Δ_{n+1}(T)²] ≤ 8E^W[∫₀^T ‖σ(t, Φ_n(t)) − σ(t, Φ_{n−1}(t))‖²_HS dt] + 2T E^W[∫₀^T ‖b(t, Φ_n(t)) − b(t, Φ_{n−1}(t))‖² dt]
      ≤ (8 + 2T) C(T)² ∫₀^T E^W[‖Φ_n(t) − Φ_{n−1}(t)‖²] dt
      ≤ (8 + 2T) C(T)² ∫₀^T E^W[Δ_n(t)²] dt.

Hence, by induction on n ∈ Z⁺, E^W[Δ_n(T)²] ≤ (K(T)T)ⁿ/n!, where K(T) := (8 + 2T) C(T)², and so

    sup_{n≥m} E^W[sup_{0≤t≤T} ‖Φ_n(t) − Φ_m(t)‖²] ≤ Σ_{i,j=m+1}^∞ E^W[Δ_i(T) Δ_j(T)] ≤ (Σ_{i=m+1}^∞ ((K(T)T)^i / i!)^{1/2})² → 0 as m → ∞.
We can therefore find (cf. Lemma II-(2.25)) a right-continuous, W-almost surely continuous, (M_t)_{t≥0}-progressively measurable Φ such that, for all T > 0,

    E^W[sup_{0≤t≤T} ‖Φ(t) − Φ_n(t)‖²] → 0 as n → ∞.

In particular,

    Φ(T) = ∫₀^T σ(t, Φ(t)) dx(t) + ∫₀^T b(t, Φ(t)) dt, T ≥ 0, (a.s. W).

Finally, suppose that a Brownian motion (β(t))_{t≥0}, (F_t)_{t≥0}, Q on (E, F, Q) and a right-continuous, (F_t)_{t≥0}-progressively measurable solution X to (2.7) are given. Without loss of generality, we may assume that β(·, ξ) is continuous for all ξ ∈ E. Set Y(·, ξ) := Φ(·, β(∗, ξ)). Then, as a consequence of (2.4)-ii) and the fact that Q ∘ β⁻¹ = W,

    Y(T) = ∫₀^T σ(t, Y(t)) dβ(t) + ∫₀^T b(t, Y(t)) dt, T ≥ 0, (a.s. Q).

Hence, proceeding in precisely the same way as we did above, we arrive at

    E^Q[sup_{0≤t≤T} ‖X(t) − Y(t)‖²] ≤ K(T) ∫₀^T E^Q[sup_{0≤u≤t} ‖X(u) − Y(u)‖²] dt,

from which it is easy to conclude that X(·) = Y(·) (a.s. Q). □
(2.11). COROLLARY. Let σ and b be as in the statement of Theorem (2.9), set a := σσ†, and define t ↦ L_t accordingly. Then, for each (s,x) ∈ [0,∞[×R^N, there is precisely one P_{s,x} ∈ MP((s,x); (L_t)_{t≥0}). In fact, if Φ_{s,x} is the map described in Theorem (2.9), then P_{s,x} = W ∘ Φ_{s,x}⁻¹.

PROOF. Note that, by Itô's formula, for any φ ∈ C₀^∞(R^N) and T ≥ 0,

    φ(Φ_{s,x}(T)) = φ(x) + ∫₀^T (σ(s+t, Φ_{s,x}(t))† ∇φ(Φ_{s,x}(t))) dx(t) + ∫₀^T (L_{s+t}φ)(Φ_{s,x}(t)) dt

(a.s. W). Hence, we have W ∘ Φ_{s,x}⁻¹ ∈ MP((s,x); (L_t)_{t≥0}). On the other hand, if P ∈ MP((s,x); (L_t)_{t≥0}), then, by Theorem (2.6), there is a d-dimensional Brownian motion (β(t))_{t≥0}, (F_t)_{t≥0}, Q on some probability space (E, F, Q) and a right-continuous, (F_t)_{t≥0}-progressively measurable function X : [0,∞[×E → R^N with the properties that P = Q ∘ X⁻¹ and (2.7) holds (a.s. Q). But, by Theorem (2.9), X(·) = Φ_{s,x}(·, β(∗)) (a.s. Q), and so P = Q ∘ X⁻¹ = W ∘ Φ_{s,x}⁻¹. □

The hypotheses in these results involve σ and b directly, but not a. Furthermore, it is clear that when a can be singular, no σ satisfying a = σσ† need be as smooth as a itself (e.g., when N = 1 and a(x) = |x|, there is no choice of σ which is Lipschitz continuous at 0, even though a itself is). The next theorem addresses this problem. In this theorem, and throughout, a^{1/2} denotes the unique σ ∈ S⁺(R^N) satisfying a = σ².

(2.12). THEOREM. Given ε ∈ ]0,1[, set S⁺_ε(R^N) := {a ∈ S⁺(R^N) : εI < a < (1/ε)I}. Then, for a ∈ S⁺_ε(R^N):

(2.13)    a^{1/2} = √(1/ε) Σ_{n=0}^∞ (1/2 choose n) (εa − I)ⁿ,
III. THE MARTINGALE PROBLEM FORMULATION OF DIFFUSION THEORY
where $\binom{1/2}{n}$ is the coefficient of $z^n$ in the Taylor series expansion of $(1+z)^{1/2}$ in $|z|<1$. Hence, $S_+^\varepsilon(\mathbb R^N)\ni a\mapsto a^{1/2}$ has bounded derivatives of every order. Moreover, for each $a\in S_+(\mathbb R^N)$, $a^{1/2}=\lim_{\varepsilon\downarrow 0}(a+\varepsilon I)^{1/2}$, and so $S_+(\mathbb R^N)\ni a\mapsto a^{1/2}$ is a measurable map. Next, suppose that $\mathbb R\ni\xi\mapsto a(\xi)\in S_+(\mathbb R^N)$ is a map which satisfies $a(\xi)\ge\varepsilon I$ and $\|a(\eta)-a(\xi)\|_{\mathrm{HS}}\le C|\eta-\xi|$ for some $\varepsilon>0$ and $C<\infty$ and for all $\xi,\eta\in\mathbb R$. Then
\[
(2.14)\qquad \bigl\|a^{1/2}(\eta)-a^{1/2}(\xi)\bigr\|_{\mathrm{HS}}\le\frac{C}{2\sqrt\varepsilon}\,|\eta-\xi|
\]
for all $\xi,\eta\in\mathbb R$. Finally, suppose that $\mathbb R\ni\xi\mapsto a(\xi)\in S_+(\mathbb R^N)$ is a twice continuously differentiable map and that $\|a''(\xi)\|_{\mathrm{op}}\le C<\infty$ for all $\xi\in\mathbb R$. Then
\[
(2.15)\qquad \bigl\|a^{1/2}(\eta)-a^{1/2}(\xi)\bigr\|_{\mathrm{HS}}\le N\sqrt C\,|\eta-\xi|
\]
for all $\xi,\eta\in\mathbb R$.

PROOF. Equation (2.13) is a simple application of the spectral theorem, as is the equation $a^{1/2}=\lim_{\varepsilon\downarrow 0}(a+\varepsilon I)^{1/2}$. From these, it is clear that $S_+^\varepsilon(\mathbb R^N)\ni a\mapsto a^{1/2}$ has the asserted regularity properties and that $S_+(\mathbb R^N)\ni a\mapsto a^{1/2}$ is measurable.

In proving (2.14), we assume, without loss of generality, that $\xi\mapsto a(\xi)$ is continuously differentiable and that $\|a'(\xi)\|_{\mathrm{HS}}\le C$ for all $\xi\in\mathbb R$; and we shall show that, for each $\xi\in\mathbb R$,
\[
(2.16)\qquad \bigl\|\bigl(a^{1/2}\bigr)'(\xi)\bigr\|_{\mathrm{HS}}\le\frac1{2\sqrt\varepsilon}\,\|a'(\xi)\|_{\mathrm{HS}}.
\]
In proving (2.16), we may and will assume that $a(\xi)$ is diagonal at $\xi$. Noting that $a(\eta)=a^{1/2}(\eta)a^{1/2}(\eta)$, we see that
\[
(2.17)\qquad \bigl(a^{1/2}\bigr)'(\xi)^{i,j}=\frac{a'(\xi)^{i,j}}{a^{1/2}(\xi)^{i,i}+a^{1/2}(\xi)^{j,j}},
\]
and clearly (2.16) follows from this.

Finally, to prove (2.15), we shall show that if $a(\xi)>0$, then $\bigl\|\bigl(a^{1/2}\bigr)'(\xi)\bigr\|_{\mathrm{HS}}\le N\sqrt C$; and obviously (2.15) follows from this after an easy limit procedure. Moreover, we shall again assume that $a(\xi)$ is diagonal. But in this case it is clear, from (2.17), that we need only check that
\[
(2.18)\qquad \bigl|a'(\xi)^{i,j}\bigr|\le\sqrt C\,\sqrt{a^{i,i}(\xi)+a^{j,j}(\xi)}.
\]
To this end, set $\varphi_\pm(\eta):=\frac14\bigl\langle e_i\pm e_j,\,a(\eta)(e_i\pm e_j)\bigr\rangle_{\mathbb R^N}$ (where $(e_1,\dots,e_N)$ is the standard basis in $\mathbb R^N$), and note that $a'(\xi)^{i,j}=\varphi_+'(\xi)-\varphi_-'(\xi)$ while $\varphi_+(\xi)+\varphi_-(\xi)=\frac12\bigl(a^{i,i}(\xi)+a^{j,j}(\xi)\bigr)$. Hence, since $\sqrt{\varphi_+}+\sqrt{\varphi_-}\le\sqrt{2(\varphi_++\varphi_-)}$, (2.18) will follow once we show that $|\varphi_\pm'(\xi)|\le\sqrt{C\varphi_\pm(\xi)}$; and, because $\varphi_\pm''\le\frac C2$ and $\varphi_\pm\ge 0$, this in turn will follow if we show that, for any $\varphi\in C^2(\mathbb R)^+$ satisfying $\varphi''(\eta)\le K$, $\eta\in\mathbb R$, one has $\varphi'(\xi)^2\le 2K\varphi(\xi)$. But, for any such $\varphi$,
\[
0\le\varphi(\eta)\le\varphi(\xi)+\varphi'(\xi)(\eta-\xi)+\tfrac12K(\eta-\xi)^2
\]
for all $\xi,\eta\in\mathbb R$, and so, by the elementary theory of quadratic inequalities, $\varphi'(\xi)^2\le 2K\varphi(\xi)$.
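Formula (2.13) lends itself to a quick numerical check. The following sketch (Python/NumPy; an illustration of ours, not part of the text) sums the binomial series for a symmetric matrix with spectrum inside $]\varepsilon,\frac1\varepsilon[$ and compares it with the square root supplied by the spectral theorem.

```python
import numpy as np

def sqrtm_series(a, eps, terms=500):
    """Matrix square root via the binomial series (2.13); assumes eps*I < a < (1/eps)*I."""
    dim = a.shape[0]
    m = eps * a - np.eye(dim)          # spectrum lies inside the unit disc
    term, coef = np.eye(dim), 1.0      # (eps*a - I)^0 and the coefficient C(1/2, 0)
    total = coef * term
    for n in range(1, terms):
        coef *= (0.5 - (n - 1)) / n    # recurrence for the binomial coefficient C(1/2, n)
        term = term @ m
        total += coef * term
    return total / np.sqrt(eps)

def sqrtm_spectral(a):
    """Reference square root via the spectral theorem."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(w)) @ v.T

# a random symmetric matrix with spectrum inside ]0.3, 1.3], so eps = 0.3 works
rng = np.random.default_rng(0)
b = rng.standard_normal((4, 4))
a = (b @ b.T + 4 * np.eye(4))
a = a / np.linalg.norm(a, 2) + 0.3 * np.eye(4)
root = sqrtm_series(a, 0.3)
```

The two computations agree to machine precision, and `root @ root` recovers `a`.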
In view of the preceding results, we have now proved the following existence and uniqueness result for martingale problems.
(2.19). THEOREM. Let $a:[0,\infty[\times\mathbb R^N\to S_+(\mathbb R^N)$ and $b:[0,\infty[\times\mathbb R^N\to\mathbb R^N$ be bounded measurable functions. Assume that there is a $C<\infty$ such that
\[
(2.20)\qquad \|b(t,y)-b(t,y')\|\le C\|y-y'\|
\]
for all $t\ge 0$ and $y,y'\in\mathbb R^N$, and that either
\[
(2.21)\qquad a(t,y)\ge\varepsilon I\quad\text{and}\quad\|a(t,y)-a(t,y')\|\le C\|y-y'\|
\]
for all $t\ge 0$ and $y,y'\in\mathbb R^N$ and some $\varepsilon>0$, or that $\mathbb R^N\ni y\mapsto a(t,y)$ is twice continuously differentiable for each $t\ge 0$ and that
\[
(2.22)\qquad \max_{i=1,\dots,N}\bigl\|\partial_{y^i}^2a(t,y)\bigr\|_{\mathrm{op}}\le C,\qquad(t,y)\in[0,\infty[\times\mathbb R^N.
\]
Let $t\mapsto L_t$ be the operator-valued map determined by $a$ and $b$. Then, for each $(s,x)\in[0,\infty[\times\mathbb R^N$ there is precisely one $P_{s,x}\in\mathrm{MP}((s,x);(L_t)_{t\ge 0})$. In particular, $(s,x)\mapsto P_{s,x}$ is continuous, and, for all stopping times $\tau$, $P_{s,x}=P_{s,x}\otimes_\tau P_{\tau,x(\tau)}$.

(2.23). REMARK. The preceding cannot be considered to be a particularly interesting application of stochastic integral equations to the study of martingale problems. Indeed, it does little more than give us an alternative (more probabilistic) derivation of the results in §I.2. For a more in-depth look at this topic, the interested reader should consult [Stroock and Varadhan, 2006, Chapter 8], where the subject is given a thorough treatment.

§III.3. Localization

Thus far the hypotheses which we have had to make about $a$ and $b$ in order to check uniqueness are global in nature. The purpose of this section is to prove that the problem of checking whether a martingale problem is well-posed is a local one. The key to our analysis is contained in the following simple lemma.

(3.1). LEMMA. Let $a,\hat a:[0,\infty[\times\mathbb R^N\to S_+(\mathbb R^N)$ and $b,\hat b:[0,\infty[\times\mathbb R^N\to\mathbb R^N$ be bounded measurable functions, and let $t\mapsto L_t$ and $t\mapsto\hat L_t$ be defined accordingly. Assume that the martingale problem for $(\hat L_t)_{t\ge 0}$ is well-posed, and denote by $\{\hat P_{s,x}\mid(s,x)\in[0,\infty[\times\mathbb R^N\}$ the corresponding family of solutions. If $G$ is an open subset of $[0,\infty[\times\mathbb R^N$ on which $a$ coincides with $\hat a$ and $b$ coincides with $\hat b$, then, for every $(s,x)\in G$ and $P\in\mathrm{MP}((s,x);(L_t)_{t\ge 0})$, $P|_{\mathcal M_\zeta}=\hat P_{s,x}|_{\mathcal M_\zeta}$, where $\zeta:=\inf\{t\ge 0\mid(s+t,x(t))\notin G\}$.

PROOF. Set $Q:=P\otimes_\zeta\hat P_{\zeta,x(\zeta)}$. Then, by Theorem (1.18), $Q\in\mathrm{MP}((s,x);(\hat L_t)_{t\ge 0})$. Hence, by uniqueness, $Q=\hat P_{s,x}$, and so $P|_{\mathcal M_\zeta}=Q|_{\mathcal M_\zeta}=\hat P_{s,x}|_{\mathcal M_\zeta}$.

Given bounded measurable $a:[0,\infty[\times\mathbb R^N\to S_+(\mathbb R^N)$ and $b:[0,\infty[\times\mathbb R^N\to\mathbb R^N$ and the associated map $t\mapsto L_t$, we say that the martingale problem for $(L_t)_{t\ge 0}$ is locally well-posed if $[0,\infty[\times\mathbb R^N$ can be covered by open sets $U$ with the property that there exist bounded measurable $a_U:[0,\infty[\times\mathbb R^N\to S_+(\mathbb R^N)$ and $b_U:[0,\infty[\times\mathbb R^N\to\mathbb R^N$ such that $a|_U$ and $b|_U$ coincide with $a_U|_U$ and $b_U|_U$, respectively, and the martingale problem associated with $a_U$ and $b_U$ is well-posed. Our goal is to prove that, under these circumstances, the martingale problem for $(L_t)_{t\ge 0}$ is itself well-posed. Thus, until further notice, we shall be assuming that the martingale problem for $(L_t)_{t\ge 0}$ is locally well-posed, and we shall be attempting to prove that there is exactly one element of $\mathrm{MP}((0,0);(L_t)_{t\ge 0})$. The following lemma is a standard application of the Heine-Borel theorem.
(3.2). LEMMA. There is a sequence $(a_\ell,b_\ell,U_\ell)_{\ell\in\mathbb Z^+}$ such that:

i) $(U_\ell)_{\ell\in\mathbb Z^+}$ is a locally finite cover of $[0,\infty[\times\mathbb R^N$;

ii) for each $\ell\in\mathbb Z^+$, $a_\ell:[0,\infty[\times\mathbb R^N\to S_+(\mathbb R^N)$ and $b_\ell:[0,\infty[\times\mathbb R^N\to\mathbb R^N$ are bounded measurable functions for which the associated martingale problem is well-posed;

iii) for each $m\in\mathbb Z^+$ there is an $\varepsilon_m>0$ with the property that whenever $(s,x)\in[m-1,m[\times\bigl(B(0,m)\setminus B(0,m-1)\bigr)$, there exists $\ell\in\mathbb Z^+$ for which $[s,s+\varepsilon_m]\times\mathrm{cl}\bigl(B(x,\varepsilon_m)\bigr)\subset U_\ell$.

Referring to the notation introduced in Lemma (3.2), define $\ell(s,x):=\min\bigl\{\ell\in\mathbb Z^+\bigm|[s,s+\varepsilon_m]\times\mathrm{cl}(B(x,\varepsilon_m))\subset U_\ell\bigr\}$ if $(s,x)\in[m-1,m[\times\bigl(B(0,m)\setminus B(0,m-1)\bigr)$. Next, define stopping times $(\sigma_n)_{n\in\mathbb Z^+}$ inductively, so that $\sigma_0\equiv 0$, $\sigma_n(\omega)=\infty$ if $\sigma_{n-1}(\omega)=\infty$, and, if $\sigma_{n-1}(\omega)<\infty$, then $\sigma_n(\omega):=\inf\bigl\{t\ge\sigma_{n-1}(\omega)\bigm|(t,x(t,\omega))\notin U_{\ell_{n-1}(\omega)}\bigr\}$, where $\ell_n(\omega):=\ell\bigl(\sigma_n(\omega),x(\sigma_n(\omega),\omega)\bigr)$ if $\sigma_n(\omega)<\infty$. Finally, set $Q^0:=\delta_{\omega_0}$, where $x(\cdot,\omega_0)\equiv 0$, and $Q^{n+1}:=Q^n\otimes_{\sigma_n}P^n_\cdot\circ x(\cdot\wedge\sigma_{n+1})^{-1}$, where $P^n_\omega:=P^{\ell_n(\omega)}_{\sigma_n(\omega),x(\sigma_n(\omega),\omega)}$ if $\sigma_n(\omega)<\infty$.

(3.3). LEMMA. For each $n\in\mathbb N$ and all $\varphi\in C_0^\infty(\mathbb R^N)$:
\[
\Bigl(\Bigl(\varphi(x(t))-\int_0^t\bigl(L^n_u\varphi\bigr)(x(u))\,du\Bigr)_{t\ge 0},(\mathcal M_t)_{t\ge 0},Q^n\Bigr)
\]
is a martingale, where $L^n_t:=\mathbf 1_{[0,\sigma_n[}(t)\,L_t$, $t\ge 0$.

PROOF. We work by induction on $n\in\mathbb N$. Clearly, there is nothing to prove when $n=0$. Now assume the assertion for $n$, and note that, by Theorem (1.18), we shall know that it holds for $n+1$ as soon as we show that, for each $\omega\in\{\sigma_n<\infty\}$ and $\delta_\omega\otimes_{\sigma_n(\omega)}P^n_\omega$-almost every $\omega'\in\Omega$, $a(t,x(t,\omega'))=a_{\ell_n(\omega)}(t,x(t,\omega'))$ and $b(t,x(t,\omega'))=b_{\ell_n(\omega)}(t,x(t,\omega'))$ for $t\in[\sigma_n(\omega),\sigma_{n+1}(\omega')[$. But if $\omega\in\{\sigma_n<\infty\}$, then for $\delta_\omega\otimes_{\sigma_n(\omega)}P^n_\omega$-almost every $\omega'\in\Omega$: $\sigma_n(\omega')=\sigma_n(\omega)$, $x(\cdot\wedge\sigma_n(\omega'),\omega')=x(\cdot\wedge\sigma_n(\omega),\omega)$, and, therefore, $\sigma_{n+1}(\omega')=\inf\bigl\{t\ge\sigma_n(\omega)\bigm|(t,x(t,\omega'))\notin U_{\ell_n(\omega)}\bigr\}$. Since $a$ and $b$ coincide with $a_{\ell_n(\omega)}$ and $b_{\ell_n(\omega)}$, respectively, on $U_{\ell_n(\omega)}$ whenever $\sigma_n(\omega)<\infty$, the proof is complete.

(3.4). LEMMA. For each $T>0$, $\lim_{n\to\infty}Q^n(\sigma_n\le T)=0$. In particular, there is a unique $Q\in M_1(\Omega)$ such that $Q|_{\mathcal M_{\sigma_n}}=Q^n|_{\mathcal M_{\sigma_n}}$ for all $n\in\mathbb N$. Finally, $Q\in\mathrm{MP}((0,0);(L_t)_{t\ge 0})$.
PROOF. By Lemma (3.3) and (1.9), there exists for each $T>0$ a $C(T)<\infty$ (depending only on the bounds on $a$ and $b$) such that $E^{Q^n}\bigl[\|x(t)-x(s)\|^4\bigr]\le C(T)(t-s)^2$, $0\le s<t\le T$. Hence, since $Q^n(x(0)=0)=1$ for all $n\in\mathbb N$, $(Q^n)_{n\in\mathbb N}$ is relatively compact in $M_1(\Omega)$. Next, note that $\sigma_n(\omega)\uparrow\infty$ uniformly fast for $\omega$'s in a compact subset of $\Omega$. Hence, $Q^n(\sigma_n\le T)\to 0$ as $n\to\infty$ for each $T>0$. Also, observe that $Q^n|_{\mathcal M_{\sigma_m}}=Q^m|_{\mathcal M_{\sigma_m}}$ for all $0\le m\le n$. Combining this with the preceding, we now see that, for all $T>0$ and all $\Gamma\in\mathcal M_T$, $\lim_{n\to\infty}Q^n(\Gamma)$ exists. Thus, $(Q^n)_{n\in\mathbb N}$ can have precisely one limit $Q$, and clearly $Q|_{\mathcal M_{\sigma_n}}=Q^n|_{\mathcal M_{\sigma_n}}$ for each $n\in\mathbb N$. Finally, we see from Lemma (3.3) that
\[
E^{Q^n}\Bigl[\bigl(\varphi(x(t\wedge\sigma_n))-\varphi(x(s\wedge\sigma_n))\bigr)\mathbf 1_\Gamma\Bigr]=E^{Q^n}\Bigl[\Bigl(\int_{s\wedge\sigma_n}^{t\wedge\sigma_n}(L_u\varphi)(x(u))\,du\Bigr)\mathbf 1_\Gamma\Bigr]
\]
for all $n\in\mathbb N$, $\varphi\in C_0^\infty(\mathbb R^N)$, $0\le s<t$, and $\Gamma\in\mathcal M_s$. Thus, $Q\in\mathrm{MP}((0,0);(L_t)_{t\ge 0})$.
(3.5). THEOREM. Let $a:[0,\infty[\times\mathbb R^N\to S_+(\mathbb R^N)$ and $b:[0,\infty[\times\mathbb R^N\to\mathbb R^N$ be bounded measurable functions, and define $t\mapsto L_t$ accordingly. If the martingale problem for $(L_t)_{t\ge 0}$ is locally well-posed, then it is in fact well-posed.

PROOF. Clearly, it is sufficient for us to prove that $\mathrm{MP}((0,0);(L_t)_{t\ge 0})$ contains precisely one element. Since we already know that the $Q$ constructed in Lemma (3.4) is one element of $\mathrm{MP}((0,0);(L_t)_{t\ge 0})$, it remains to show that it is the only one. To this end, suppose that $P$ is a second one. Then, by Lemma (3.1), $P|_{\mathcal M_{\sigma_1}}=P^{\ell(0,0)}_{0,0}|_{\mathcal M_{\sigma_1}}=Q|_{\mathcal M_{\sigma_1}}$.

Next, assume that $P|_{\mathcal M_{\sigma_n}}=Q|_{\mathcal M_{\sigma_n}}$, and let $\omega\mapsto P_\omega$ be an r.c.p.d. of $P\,|\,\mathcal M_{\sigma_n}$. Then, for $P$-almost every $\omega\in\{\sigma_n<\infty\}$, $P_\omega\circ\theta_{\sigma_n(\omega)}^{-1}\in\mathrm{MP}\bigl((\sigma_n(\omega),x(\sigma_n(\omega),\omega));(L_t)_{t\ge 0}\bigr)$, and, therefore, by Lemma (3.1),
\[
\bigl(P_\omega\circ\theta_{\sigma_n(\omega)}^{-1}\bigr)\Bigm|_{\mathcal M_{\zeta_\omega}}=P^{\ell_n(\omega)}_{\sigma_n(\omega),x(\sigma_n(\omega),\omega)}\Bigm|_{\mathcal M_{\zeta_\omega}},
\]
where $\zeta_\omega(\omega'):=\inf\bigl\{t\ge 0\bigm|(\sigma_n(\omega)+t,\,x(t,\omega'))\notin U_{\ell_n(\omega)}\bigr\}$. (Recall that $\theta_t:\Omega\to\Omega$ is the time-shift map.) At the same time, if $\sigma_n(\omega)<\infty$, then $\sigma_{n+1}(\omega')=\sigma_n(\omega)+\zeta_\omega\bigl(\theta_{\sigma_n(\omega)}\omega'\bigr)$ for $P_\omega$-almost every $\omega'$. Combining these, we see that
\[
P_\omega\Bigm|_{\mathcal M_{\sigma_{n+1}}}=\Bigl(\delta_\omega\otimes_{\sigma_n(\omega)}P^{\ell_n(\omega)}_{\sigma_n(\omega),x(\sigma_n(\omega),\omega)}\Bigr)\Bigm|_{\mathcal M_{\sigma_{n+1}}}
\]
for $P$-almost every $\omega\in\{\sigma_n<\infty\}$. Since, by the induction hypothesis, we already know that $P|_{\mathcal M_{\sigma_n}}=Q|_{\mathcal M_{\sigma_n}}$, we can now conclude that $P|_{\mathcal M_{\sigma_{n+1}}}=Q|_{\mathcal M_{\sigma_{n+1}}}$. Because $\sigma_n\uparrow\infty$ (a.s. $Q$), we have therefore proved that $P=Q$.
The following is a somewhat trivial application of the preceding. We will have a much more interesting one in §III.5 below.

(3.6). COROLLARY. Suppose that $a:[0,\infty[\times\mathbb R^N\to S_+(\mathbb R^N)$ and $b:[0,\infty[\times\mathbb R^N\to\mathbb R^N$ are bounded measurable functions with the properties that $a(t,\cdot)$ has two continuous derivatives and $b(t,\cdot)$ has one continuous derivative for each $t\ge 0$. Further, assume that the second derivatives of $a(t,\cdot)$ and the first derivatives of $b(t,\cdot)$ are uniformly bounded for $t$ in compact intervals. Then the martingale problem for the associated $(L_t)_{t\ge 0}$ is well-posed, and the corresponding family $\{P_{s,x}\mid(s,x)\in[0,\infty[\times\mathbb R^N\}$ is continuous.

§III.4. The Cameron-Martin-Girsanov transformation

It is clear on analytic grounds that if the coefficient matrix $a$ is strictly positive definite, then the first order part of the operator $L_t$ is a lower order perturbation of its principal part
\[
(4.1)\qquad L^0_t:=\frac12\sum_{i,j=1}^N a^{i,j}(t,y)\,\partial_{y^i}\partial_{y^j}.
\]
Hence, one should suspect that, in this case, the martingale problems corresponding to $(L^0_t)_{t\ge 0}$ and $(L_t)_{t\ge 0}$ are closely related. In this section we shall confirm this suspicion. Namely, we are going to show that when $a$ is uniformly positive definite, then, at least over finite time intervals, $P$'s in $\mathrm{MP}((s,x);(L_t)_{t\ge 0})$ differ from $P$'s in $\mathrm{MP}((s,x);(L^0_t)_{t\ge 0})$ by a quite explicit Radon-Nikodym factor.

(4.2). LEMMA. Let $((R(t))_{t\ge 0},(\mathcal M_t)_{t\ge 0},P)$ be a non-negative martingale with $R(0)\equiv 1$. Then there is a unique $Q\in M_1(\Omega)$ such that $Q|_{\mathcal M_T}=R(T)P|_{\mathcal M_T}$ for each $T\ge 0$.
PROOF. The uniqueness assertion is obvious. To prove the existence, define $Q^n:=R(n)P\circ x(\cdot\wedge n)^{-1}$ for $n\in\mathbb N$. Then $Q^{n+1}|_{\mathcal M_n}=Q^n|_{\mathcal M_n}$, from which it is clear that $(Q^n)_{n\in\mathbb N}$ is relatively compact in $M_1(\Omega)$. In addition, one sees that any limit of $(Q^n)_{n\in\mathbb N}$ must have the required property.

(4.3). LEMMA. Let $((R(t))_{t\ge 0},(\mathcal M_t)_{t\ge 0},P)$ be a $P$-almost surely continuous, strictly positive martingale satisfying $R(0)\equiv 1$. Define $Q\in M_1(\Omega)$ accordingly, as in Lemma (4.2), and set $\mathcal R:=\ln R$. Then $\bigl((\tfrac1{R(t)})_{t\ge 0},(\mathcal M_t)_{t\ge 0},Q\bigr)$ is a $Q$-almost surely continuous, strictly positive martingale, and $P|_{\mathcal M_T}=\tfrac1{R(T)}Q|_{\mathcal M_T}$ for all $T\ge 0$. Moreover, $\mathcal R\in\mathrm{S.Mart}_{\mathrm C}((\mathcal M_t)_{t\ge 0},P)$,
\[
(4.4)\qquad \mathcal R(T)=\int_0^T\frac1{R(t)}\,dR(t)-\frac12\int_0^T\Bigl(\frac1{R(t)}\Bigr)^2\langle R,R\rangle(dt),\qquad T\ge 0,\quad(\text{a.s. }P),
\]
and $X\in\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}((\mathcal M_t)_{t\ge 0},P)$ if and only if $X^{\mathcal R}:=X-\langle X,\mathcal R\rangle\in\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}((\mathcal M_t)_{t\ge 0},Q)$. Finally, if $X,Y\in\mathrm{S.Mart}_{\mathrm C}((\mathcal M_t)_{t\ge 0},P)$, then, up to a $P,Q$-null set, $\langle X,Y\rangle$ is the same whether it is computed relative to $((\mathcal M_t)_{t\ge 0},P)$ or to $((\mathcal M_t)_{t\ge 0},Q)$. In particular, given $X\in\mathrm{S.Mart}_{\mathrm C}((\mathcal M_t)_{t\ge 0},P)$ and an $(\mathcal M_t)_{t\ge 0}$-progressively measurable $\alpha:[0,\infty[\times\Omega\to\mathbb R$ satisfying $\int_0^T\alpha(t)^2\langle X,X\rangle(dt)<\infty$ (a.s. $P$) for all $T>0$, the quantity $\int_0^T\alpha\,dX$ is, up to a $P,Q$-null set, the same whether it is computed relative to $((\mathcal M_t)_{t\ge 0},P)$ or to $((\mathcal M_t)_{t\ge 0},Q)$.

PROOF. The first assertion requiring comment is that (4.4) holds, from which it is immediate that $\mathcal R\in\mathrm{S.Mart}_{\mathrm C}((\mathcal M_t)_{t\ge 0},P)$. But, applying Itô's formula to $\ln(R(t)+\varepsilon)$ for $\varepsilon>0$ and then letting $\varepsilon\downarrow 0$, we obtain (4.4) in the limit.

In proving that $X\in\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}((\mathcal M_t)_{t\ge 0},P)$ implies that $X^{\mathcal R}\in\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}((\mathcal M_t)_{t\ge 0},Q)$, we may and will assume that $R$, $\tfrac1R$, and $X$ are all bounded. Given $0\le t_1<t_2$ and $A\in\mathcal M_{t_1}$, we have
\[
E^Q\Bigl[\Bigl(X(t_2)-\frac1{R(t_2)}\langle X,R\rangle(t_2)\Bigr)\mathbf 1_A\Bigr]=E^P\Bigl[\bigl(R(t_2)X(t_2)-\langle X,R\rangle(t_2)\bigr)\mathbf 1_A\Bigr]=E^P\Bigl[\bigl(R(t_1)X(t_1)-\langle X,R\rangle(t_1)\bigr)\mathbf 1_A\Bigr]=E^Q\Bigl[\Bigl(X(t_1)-\frac1{R(t_1)}\langle X,R\rangle(t_1)\Bigr)\mathbf 1_A\Bigr],
\]
the middle equality holding because, by Itô's formula, $RX-\langle X,R\rangle$ is a $P$-martingale. Hence, since, by (4.4), $\langle X,\mathcal R\rangle(dt)=\frac1{R(t)}\langle X,R\rangle(dt)$, we shall be done once we show that
\[
\frac1{R(\cdot)}\langle X,R\rangle(\cdot)-\int_0^\cdot\frac1{R(s)}\langle X,R\rangle(ds)\in\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}\bigl((\mathcal M_t)_{t\ge 0},Q\bigr).
\]
However, by Itô's formula,
\[
\frac1{R(T)}\langle X,R\rangle(T)=\int_0^T\langle X,R\rangle(t)\,d\Bigl(\frac1{R(t)}\Bigr)+\int_0^T\frac1{R(t)}\langle X,R\rangle(dt),
\]
and so the required conclusion follows from the fact that $\bigl((\tfrac1{R(t)})_{t\ge 0},(\mathcal M_t)_{t\ge 0},Q\bigr)$ is a martingale. Thus, we have now shown that $X^{\mathcal R}\in\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}((\mathcal M_t)_{t\ge 0},Q)$ whenever $X\in\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}((\mathcal M_t)_{t\ge 0},P)$, and, therefore, that $\mathrm{S.Mart}_{\mathrm C}((\mathcal M_t)_{t\ge 0},P)\subset\mathrm{S.Mart}_{\mathrm C}((\mathcal M_t)_{t\ge 0},Q)$. Because the roles of $P$ and $Q$ are symmetric, we shall have proved the opposite implication as soon as we show that $\langle X,Y\rangle$ is the same under $((\mathcal M_t)_{t\ge 0},P)$ and $((\mathcal M_t)_{t\ge 0},Q)$ for all $X,Y\in\mathrm{S.Mart}_{\mathrm C}((\mathcal M_t)_{t\ge 0},P)$.

Finally, let $X,Y\in\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}((\mathcal M_t)_{t\ge 0},P)$. To demonstrate that $\langle X,Y\rangle$ is the same under $((\mathcal M_t)_{t\ge 0},P)$ and $((\mathcal M_t)_{t\ge 0},Q)$, we must show that
\[
X^{\mathcal R}Y^{\mathcal R}-\langle X,Y\rangle_P\in\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}\bigl((\mathcal M_t)_{t\ge 0},Q\bigr)
\]
(where the subscript $P$ is used to emphasize that $\langle X,Y\rangle$ has been computed relative to $((\mathcal M_t)_{t\ge 0},P)$). However, by Itô's formula,
\[
\bigl(X^{\mathcal R}Y^{\mathcal R}\bigr)(T)=\bigl(X^{\mathcal R}Y^{\mathcal R}\bigr)(0)+\int_0^TX^{\mathcal R}(t)\,dY^{\mathcal R}(t)+\int_0^TY^{\mathcal R}(t)\,dX^{\mathcal R}(t)+\bigl\langle X^{\mathcal R},Y^{\mathcal R}\bigr\rangle_P(T),
\]
$T\ge 0$, (a.s. $P$). Thus, it remains to check that $\int_0^\cdot X^{\mathcal R}\,dY^{\mathcal R}$ and $\int_0^\cdot Y^{\mathcal R}\,dX^{\mathcal R}$ are elements of $\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}((\mathcal M_t)_{t\ge 0},Q)$. But
\[
\int_0^\cdot X^{\mathcal R}\,dY^{\mathcal R}=\int_0^\cdot X^{\mathcal R}\,dY-\int_0^\cdot X^{\mathcal R}\,d\langle Y,\mathcal R\rangle_P=\int_0^\cdot X^{\mathcal R}\,dY-\Bigl\langle\int_0^{\,*}X^{\mathcal R}\,dY,\;\mathcal R\Bigr\rangle(\cdot)=\Bigl(\int_0^\cdot X^{\mathcal R}\,dY\Bigr)^{\mathcal R}\in\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}\bigl((\mathcal M_t)_{t\ge 0},Q\bigr),
\]
and, by symmetry, the same is true of $\int_0^\cdot Y^{\mathcal R}\,dX^{\mathcal R}$.
(4.5). EXERCISE. A more constructive proof that $\langle X,Y\rangle$ is the same under $P$ and $Q$ can be based on the observation that $\langle X\rangle(T)$ can be expressed in terms of the quadratic variation of $X(\cdot)$ over $[0,T]$ (cf. II-(2.28) and II-(3.14)).

(4.6). THEOREM (Cameron-Martin-Girsanov). Let $a:[0,\infty[\times\mathbb R^N\to S_+(\mathbb R^N)$ and $b,c:[0,\infty[\times\mathbb R^N\to\mathbb R^N$ be bounded measurable functions, and let $t\mapsto L_t$ and $t\mapsto\tilde L_t$ be the operators associated with $a$ and $b$ and with $a$ and $b+ac$, respectively. Then $Q\in\mathrm{MP}((s,x);(\tilde L_t)_{t\ge 0})$ if and only if there is a $P\in\mathrm{MP}((s,x);(L_t)_{t\ge 0})$ such that $Q|_{\mathcal M_T}=R(T)P|_{\mathcal M_T}$, $T\ge 0$, where
\[
(4.7)\qquad R(T):=\exp\Bigl[\int_0^T\bigl\langle c(s+t,x(t)),\,d\bar x(t)\bigr\rangle-\frac12\int_0^T\langle c,ac\rangle_{\mathbb R^N}(s+t,x(t))\,dt\Bigr]
\]
and $\bar x(T):=x(T)-\int_0^Tb(s+t,x(t))\,dt$, $T\ge 0$. In particular, for each $(s,x)\in[0,\infty[\times\mathbb R^N$, there is a bijection between $\mathrm{MP}((s,x);(L_t)_{t\ge 0})$ and $\mathrm{MP}((s,x);(\tilde L_t)_{t\ge 0})$.
PROOF. Suppose that $P\in\mathrm{MP}((s,x);(L_t)_{t\ge 0})$, and define $R(\cdot)$ by (4.7). By II-(3.13)-ii), $((R(t))_{t\ge 0},(\mathcal M_t)_{t\ge 0},P)$ is a martingale and, clearly, $R(0)\equiv 1$ and $R(\cdot)$ is $P$-almost surely positive. Thus, by Lemmas (4.2) and (4.3), there is a unique $Q\in M_1(\Omega)$ such that $Q|_{\mathcal M_T}=R(T)P|_{\mathcal M_T}$, $T\ge 0$. Moreover, $X\in\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}((\mathcal M_t)_{t\ge 0},P)$ if and only if $X-\langle X,\mathcal R\rangle\in\mathrm{Mart}^{\mathrm{loc}}_{\mathrm C}((\mathcal M_t)_{t\ge 0},Q)$, where $\mathcal R:=\ln R$. In particular, since
\[
\langle X,\mathcal R\rangle(dt)=\sum_{i=1}^Nc^i(s+t,x(t))\,\bigl\langle x^i,X\bigr\rangle(dt),
\]
if $\varphi\in C_0^\infty(\mathbb R^N)$, then
\[
\bigl\langle\varphi(x(\cdot)),\mathcal R\bigr\rangle(dt)=\sum_{i=1}^Nc^i(s+t,x(t))\,\bigl\langle x^i,\varphi(x(\cdot))\bigr\rangle(dt)=\sum_{i,j=1}^N\bigl(c^ia^{i,j}\,\partial_{x^j}\varphi\bigr)(s+t,x(t))\,dt=\sum_{j=1}^N(ac)^j(s+t,x(t))\,\partial_{x^j}\varphi(x(t))\,dt,
\]
and so $\bigl(\bigl(\varphi(x(t))-\int_0^t(\tilde L_{s+u}\varphi)(x(u))\,du\bigr)_{t\ge 0},(\mathcal M_t)_{t\ge 0},Q\bigr)$ is a martingale. In other words, $Q\in\mathrm{MP}((s,x);(\tilde L_t)_{t\ge 0})$.

Conversely, suppose that $Q\in\mathrm{MP}((s,x);(\tilde L_t)_{t\ge 0})$, and define $R(\cdot)$ as in (4.7) relative to $Q$. Then
\[
\frac1{R(T)}=\exp\Bigl[-\int_0^T\bigl\langle c(s+t,x(t)),\,d\tilde x(t)\bigr\rangle-\frac12\int_0^T\langle c,ac\rangle(s+t,x(t))\,dt\Bigr],
\]
where $\tilde x(T):=x(T)-\int_0^T(b+ac)(s+t,x(t))\,dt$. Hence, by the preceding paragraph applied to $Q$ and $(\tilde L_t)_{t\ge 0}$, we see that there is a unique $P\in\mathrm{MP}((s,x);(L_t)_{t\ge 0})$ such that $P|_{\mathcal M_T}=\frac1{R(T)}Q|_{\mathcal M_T}$, $T\ge 0$. Since stochastic integrals are the same whether they are defined relative to $P$ or to $Q$, we now see that $Q|_{\mathcal M_T}=R(T)P|_{\mathcal M_T}$, $T\ge 0$, where $R(\cdot)$ is now defined relative to $P$.
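In the simplest situation — $N=1$, $a\equiv 1$, $b\equiv 0$, and a constant $c$ — (4.7) reduces to $R(T)=\exp\bigl(c\,x(T)-\tfrac12c^2T\bigr)$, and the theorem says that reweighting Wiener measure by $R(T)$ produces a Brownian motion with drift $c$. A Monte Carlo sketch (Python; the parameter values are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
T, c, n_paths = 1.0, 0.7, 200_000

# x(T) under Wiener measure P; for constant c the stochastic integral in (4.7)
# collapses to c * x(T), since b = 0 and a = 1
xT = rng.standard_normal(n_paths) * np.sqrt(T)

# Radon-Nikodym factor (4.7): R(T) = exp(c * x(T) - c**2 * T / 2)
R = np.exp(c * xT - 0.5 * c**2 * T)

mean_P = xT.mean()                  # E^P[x(T)], close to 0
mean_Q = np.average(xT, weights=R)  # E^Q[x(T)], close to c * T
```

Since the weights $R$ average to $1$, $Q$ is again a probability measure, and under $Q$ the canonical path has acquired the drift $c$.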
(4.8). COROLLARY. Let $a:[0,\infty[\times\mathbb R^N\to S_+(\mathbb R^N)$ and $b:[0,\infty[\times\mathbb R^N\to\mathbb R^N$ be bounded measurable functions, and assume that $a$ is uniformly positive definite on compact subsets of $[0,\infty[\times\mathbb R^N$. Define $t\mapsto L^0_t$ as in (4.1), and let $t\mapsto L_t$ be the operator associated with $a$ and $b$. Then the martingale problem for $(L^0_t)_{t\ge 0}$ is well-posed if and only if the martingale problem for $(L_t)_{t\ge 0}$ is well-posed.

PROOF. In view of Theorem (3.5), we may and will assume that $a$ is uniformly positive definite on the whole of $[0,\infty[\times\mathbb R^N$. But we can then take $c=a^{-1}b$ and apply Theorem (4.6).

§III.5. The martingale problem when $a$ is continuous and positive

Let $a:\mathbb R^N\to S_+(\mathbb R^N)$ be a bounded continuous function satisfying $a(x)>0$ for each $x\in\mathbb R^N$, and let $b:[0,\infty[\times\mathbb R^N\to\mathbb R^N$ be a bounded measurable function. Our goal in this section is to prove that the martingale problem associated with $a$ and $b$ is well-posed. In view of Corollary (4.8), we may and will assume that $b\equiv 0$, in which case existence presents no problem. Moreover, because of Theorem (3.5), we may and will assume in addition that
\[
(5.1)\qquad \|a(x)-I\|_{\mathrm{HS}}\le\varepsilon,
\]
where $\varepsilon>0$ is as small as we like.

Set $L:=\frac12\sum_{i,j=1}^Na^{i,j}(y)\,\partial_{y^i}\partial_{y^j}$. What we are going to do is show that when the $\varepsilon$ in (5.1) is sufficiently small, then, for each $\lambda>0$, there is a map $S_\lambda$ from the Schwartz space $\mathcal S(\mathbb R^N)$ into $C_{\mathrm b}(\mathbb R^N)$ such that
\[
(5.2)\qquad E^P\Bigl[\int_0^\infty e^{-\lambda t}f(x(t))\,dt\Bigr]=S_\lambda f(x)
\]
for all $f\in\mathcal S(\mathbb R^N)$, whenever $P\in\mathrm{MP}(x;L)$. Once we have proved (5.2), the argument is easy. Namely, if $P$ and $Q$ are elements of $\mathrm{MP}(x;L)$, then (5.2) allows us to say that
\[
E^P\Bigl[\int_0^\infty e^{-\lambda t}f(x(t))\,dt\Bigr]=E^Q\Bigl[\int_0^\infty e^{-\lambda t}f(x(t))\,dt\Bigr]
\]
for all $\lambda>0$ and $f\in C_{\mathrm b}(\mathbb R^N)$. But, by the uniqueness of the Laplace transform, this means that $P\circ x(t)^{-1}=Q\circ x(t)^{-1}$ for all $t\ge 0$; and so, by Corollary (1.15), $P=Q$. Hence, everything reduces to proving (5.2).
(5.3). LEMMA. Set $\gamma_t(\cdot):=g(t,\cdot)$, where $g(t,y)$ denotes the standard Gauss kernel on $\mathbb R^N$, and, for $\lambda>0$, define $R_\lambda f:=\int_0^\infty e^{-\lambda t}\gamma_t\star f\,dt$ for $f\in\mathcal S(\mathbb R^N)$. Then $R_\lambda$ maps $\mathcal S(\mathbb R^N)$ into itself, and $\bigl(\lambda I-\frac12\Delta\bigr)\circ R_\lambda=R_\lambda\circ\bigl(\lambda I-\frac12\Delta\bigr)=I$ on $\mathcal S(\mathbb R^N)$. Moreover, if $p\in\,]\frac N2,\infty[$, then there is an $A=A(\lambda,p)\in\,]0,\infty[$ such that
\[
(5.4)\qquad \|R_\lambda f\|_{C_{\mathrm b}(\mathbb R^N)}\le A\|f\|_{L^p(\mathbb R^N)},\qquad f\in\mathcal S(\mathbb R^N).
\]
Finally, for every $p\in\,]1,\infty[$ there is a $C=C(p)\in\,]0,\infty[$ (i.e., independent of $\lambda>0$) such that
\[
(5.5)\qquad \Bigl\|\Bigl(\sum_{i,j=1}^N\bigl(\partial_{y^i}\partial_{y^j}R_\lambda f\bigr)^2\Bigr)^{1/2}\Bigr\|_{L^p(\mathbb R^N)}\le C\|f\|_{L^p(\mathbb R^N)}
\]
for all $f\in\mathcal S(\mathbb R^N)$.

PROOF. Use $\mathcal Ff$ to denote the Fourier transform of $f$. Then it is an easy computation to show that $\mathcal F(R_\lambda f)(\xi)=\bigl(\lambda+\frac12\|\xi\|^2\bigr)^{-1}\mathcal Ff(\xi)$. From this it is clear that $R_\lambda$ maps $\mathcal S(\mathbb R^N)$ into itself and that $R_\lambda$ is the inverse on $\mathcal S(\mathbb R^N)$ of $\lambda I-\frac12\Delta$.

To prove (5.4), note that
\[
\|\gamma_t\star f\|_{C_{\mathrm b}(\mathbb R^N)}\le\|\gamma_t\|_{L^q(\mathbb R^N)}\|f\|_{L^p(\mathbb R^N)}\qquad\Bigl(\tfrac1p+\tfrac1q=1\Bigr),
\]
and that
\[
\|\gamma_t\|_{L^q(\mathbb R^N)}\le B_N\,t^{-\frac N{2p}}\qquad\text{for some }B_N\in\,]0,\infty[.
\]
Thus, if $p\in\,]\frac N2,\infty[$, then
\[
\|R_\lambda f\|_{C_{\mathrm b}(\mathbb R^N)}\le B_N\int_0^\infty e^{-\lambda t}t^{-\frac N{2p}}\,dt\;\|f\|_{L^p(\mathbb R^N)}=A\|f\|_{L^p(\mathbb R^N)},
\]
where $A\in\,]0,\infty[$.

The estimate (5.5) is considerably more sophisticated. What it comes down to is the proof that for each $p\in\,]1,\infty[$ there is a $K=K(p)\in\,]0,\infty[$ such that
\[
(5.6)\qquad \Bigl\|\Bigl(\sum_{i,j=1}^N\bigl(\partial_{y^i}\partial_{y^j}f\bigr)^2\Bigr)^{1/2}\Bigr\|_{L^p(\mathbb R^N)}\le K\bigl\|\tfrac12\Delta f\bigr\|_{L^p(\mathbb R^N)}
\]
for $f\in\mathcal S(\mathbb R^N)$. Indeed, suppose that (5.6) holds. Then, since $\frac12\Delta R_\lambda=\lambda R_\lambda-I$, we would have
\[
\Bigl\|\Bigl(\sum_{i,j=1}^N\bigl(\partial_{y^i}\partial_{y^j}R_\lambda f\bigr)^2\Bigr)^{1/2}\Bigr\|_{L^p(\mathbb R^N)}\le K\bigl\|\tfrac12\Delta R_\lambda f\bigr\|_{L^p(\mathbb R^N)}\le K\|f\|_{L^p(\mathbb R^N)}+K\|\lambda R_\lambda f\|_{L^p(\mathbb R^N)}\le 2K\|f\|_{L^p(\mathbb R^N)},
\]
since $\|\gamma_t\|_{L^1(\mathbb R^N)}=1$ and so $\|\lambda R_\lambda f\|_{L^p(\mathbb R^N)}\le\|f\|_{L^p(\mathbb R^N)}$. Except when $p=2$, (5.6) has no elementary proof and depends on the theory of singular integral operators. Rather than spend time here developing the relevant theory, we shall defer the proof to the appendix which follows this section.
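For readers who wish to experiment, the resolvent $R_\lambda$ of the proof is precisely the Fourier multiplier $(\lambda+\frac12\|\xi\|^2)^{-1}$, which is easy to realize on a periodic grid (only a discrete stand-in for the operator on $\mathbb R^N$, of course). The one-dimensional sketch below (Python; grid parameters are our choices) applies the multiplier, then applies $\lambda I-\frac12\Delta$ back on the Fourier side, and also checks the trivial bound $\sup R_\lambda f\le\lambda^{-1}\sup f$.

```python
import numpy as np

# periodic grid as a stand-in for the line; lam > 0, f a Gaussian test function
n, length, lam = 1024, 40.0, 1.5
x = (np.arange(n) - n // 2) * (length / n)
xi = 2 * np.pi * np.fft.fftfreq(n, d=length / n)
f = np.exp(-x**2)

# R_lambda as the Fourier multiplier (lam + |xi|^2 / 2)^{-1}
Rf = np.fft.ifft(np.fft.fft(f) / (lam + 0.5 * xi**2)).real

# apply (lam * I - Delta / 2) back on the Fourier side; this should recover f
g = np.fft.ifft((lam + 0.5 * xi**2) * np.fft.fft(Rf)).real
```

The roundtrip recovers `f` to machine precision, and `Rf` stays non-negative with supremum at most `f.max() / lam`, as the probabilistic representation predicts.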
Choose and fix a $p\in\,]\frac N2,\infty[$, and take the $\varepsilon$ in (5.1) to lie in the interval $\bigl]0,\frac1{2\vee C(p)}\bigr[$, where $C(p)$ is the constant $C$ in (5.5). We can now define the operator $S_\lambda$. Namely, set $D_\lambda:=\bigl(L-\frac12\Delta\bigr)R_\lambda$. Then, for $f\in\mathcal S(\mathbb R^N)$:
\[
\|D_\lambda f\|_{L^p(\mathbb R^N)}=\Bigl\|\frac12\sum_{i,j=1}^N(a-I)^{i,j}\,\partial_{y^i}\partial_{y^j}R_\lambda f\Bigr\|_{L^p(\mathbb R^N)}\le\frac\varepsilon2\Bigl\|\Bigl(\sum_{i,j=1}^N\bigl(\partial_{y^i}\partial_{y^j}R_\lambda f\bigr)^2\Bigr)^{1/2}\Bigr\|_{L^p(\mathbb R^N)}\le\frac12\|f\|_{L^p(\mathbb R^N)}.
\]
Hence, $D_\lambda$ admits a unique extension as a continuous operator on $L^p(\mathbb R^N)$ with bound not exceeding $\frac12$. Using $D_\lambda$ again to denote this extension, we see that $I-D_\lambda$ admits a continuous inverse $K_\lambda$ with bound not larger than $2$. We now define $S_\lambda:=R_\lambda\circ K_\lambda$. Note that if $K_\lambda f\in\mathcal S(\mathbb R^N)$, then
\[
(\lambda I-L)S_\lambda f=\bigl(\lambda I-\tfrac12\Delta\bigr)R_\lambda K_\lambda f-D_\lambda K_\lambda f=(I-D_\lambda)K_\lambda f=f.
\]
Thus, if $F_\lambda:=\bigl\{f\in L^p(\mathbb R^N)\bigm|K_\lambda f\in\mathcal S(\mathbb R^N)\bigr\}$, then we have that
\[
(5.7)\qquad (\lambda I-L)S_\lambda f=f,\qquad f\in F_\lambda.
\]
In particular, we see that $F_\lambda\subset C_{\mathrm b}(\mathbb R^N)$. Moreover, since $\mathcal S(\mathbb R^N)$ is dense in $L^p(\mathbb R^N)$ and $K_\lambda$ is invertible, it is clear that $F_\lambda$ is also dense in $L^p(\mathbb R^N)$.

We will show that (5.2) holds for all $P\in\mathrm{MP}(x;L)$ and $f\in F_\lambda$. Indeed, if $f\in F_\lambda$, then an easy application of Itô's formula in conjunction with (5.7) shows that
\[
\Bigl(\Bigl(e^{-\lambda t}S_\lambda f(x(t))+\int_0^te^{-\lambda s}f(x(s))\,ds\Bigr)_{t\ge 0},(\mathcal M_t)_{t\ge 0},P\Bigr)
\]
is a martingale for every $P\in\mathrm{MP}(x;L)$. Thus, (5.2) follows by letting $t\to\infty$ in
\[
E^P\bigl[e^{-\lambda t}S_\lambda f(x(t))\bigr]-S_\lambda f(x)=-E^P\Bigl[\int_0^te^{-\lambda s}f(x(s))\,ds\Bigr].
\]
At first sight, we appear to be very close to a proof of (5.2) for all $f\in\mathcal S(\mathbb R^N)$. However, after a little reflection, one will realize that we still have quite a long way to go. Indeed, although we now know that (5.2) holds for all $f\in F_\lambda$ and that $F_\lambda$ is a dense subset of $L^p(\mathbb R^N)$, we must still go through a limit procedure before we can assert that (5.2) holds for all $f\in\mathcal S(\mathbb R^N)$. The right-hand side of (5.2) presents no problems at this point. In fact, (5.4) says that $L^p(\mathbb R^N)\ni f\mapsto R_\lambda f(x)$ is a continuous map for each $x\in\mathbb R^N$, and, therefore, $S_\lambda$ has the same property. On the other hand, we know nothing about the behavior of the left-hand side of (5.2) under convergence in $L^p(\mathbb R^N)$. Thus, in order to complete our program, we have still to prove an a priori estimate which says that, for each $P\in\mathrm{MP}(x;L)$, there is a $B\in\,]0,\infty[$ such that
\[
(5.8)\qquad \Bigl|E^P\Bigl[\int_0^\infty e^{-\lambda t}f(x(t))\,dt\Bigr]\Bigr|\le B\|f\|_{L^p(\mathbb R^N)},\qquad f\in\mathcal S(\mathbb R^N).
\]
To prove (5.8), let $P\in\mathrm{MP}(x;L)$ be given. Then, by Theorem (2.6), there exists an $N$-dimensional Brownian motion $((\beta(t))_{t\ge 0},(\mathcal M_t)_{t\ge 0},P)$ with $x(T)=x+\int_0^T\sigma(x(t))\,d\beta(t)$, $T\ge 0$, (a.s. $P$), where $\sigma:=a^{1/2}$. Set $\sigma_n(t,\omega):=\sigma\bigl(x\bigl(\frac{\lfloor nt\rfloor}n\wedge n,\omega\bigr)\bigr)$ and $X_n(T):=x+\int_0^T\sigma_n(t)\,d\beta(t)$, $T\ge 0$. Note that, for each $T\ge 0$,
\[
E^P\Bigl[\sup_{0\le t\le T\wedge n}\|x(t)-X_n(t)\|^2\Bigr]\le 4E^P\Bigl[\int_0^T\Bigl\|\sigma(x(t))-\sigma\Bigl(x\Bigl(\frac{\lfloor nt\rfloor}n\wedge n\Bigr)\Bigr)\Bigr\|_{\mathrm{HS}}^2\,dt\Bigr]\xrightarrow[n\to\infty]{}0.
\]
Hence, if $\mu_n\in M_1(\mathbb R^N)$ is defined by
\[
\int_{\mathbb R^N}f\,d\mu_n=\lambda E^P\Bigl[\int_0^\infty e^{-\lambda t}f(X_n(t))\,dt\Bigr],\qquad f\in C_{\mathrm b}(\mathbb R^N),
\]
then $\mu_n\to\mu$ in $M_1(\mathbb R^N)$, where
\[
\int_{\mathbb R^N}f\,d\mu=\lambda E^P\Bigl[\int_0^\infty e^{-\lambda t}f(x(t))\,dt\Bigr],\qquad f\in C_{\mathrm b}(\mathbb R^N).
\]
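The approximation $X_n$ freezes the diffusion coefficient on each time interval of length $\frac1n$ along the given solution $x(\cdot)$. The classical Euler scheme does the same thing along its own trajectory and is the natural way to simulate such processes; the scalar sketch below (Python; the coefficient is an arbitrary illustrative choice of ours, with $b=0$) checks that the scheme keeps the martingale property $E[X(T)]=x$.

```python
import numpy as np

def euler_paths(sigma, x0, T, n, n_paths, rng):
    # Freeze the coefficient at the left endpoint of each interval [m/n, (m+1)/n],
    # in the spirit of the approximation X_n of the text (scalar case, b = 0).
    dt = T / n
    x = np.full(n_paths, float(x0))
    for _ in range(n):
        x = x + sigma(x) * rng.standard_normal(n_paths) * np.sqrt(dt)
    return x

rng = np.random.default_rng(2)
end = euler_paths(lambda x: np.sqrt(1.0 + 0.5 * np.sin(x)), 0.0, 1.0, 200, 100_000, rng)
```

With a bounded, uniformly positive coefficient like this one, the empirical mean of `end` stays near the starting point and the empirical variance near $\int_0^TE[\sigma^2(X_s)]\,ds$.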
In particular, if
\[
(5.9)\qquad \Bigl|\int f\,d\mu_n\Bigr|\le\lambda B\|f\|_{L^p(\mathbb R^N)},\qquad f\in\mathcal S(\mathbb R^N),
\]
for some $B\in\,]0,\infty[$ and all $n\in\mathbb Z^+$, then (5.8) holds for the same $B$.

(5.10). LEMMA. For all $n\in\mathbb Z^+$, the estimate (5.9) holds with $B=2A$, where $A=A(\lambda,p)$ is the constant in (5.4).

PROOF. Choose and fix $n\in\mathbb Z^+$. We will first show that if (5.9) holds for some $B\in\,]0,\infty[$, then it holds with $B=2A$. To this end, note that $X_n\in\mathrm{Mart}^2_{\mathrm C}((\mathcal M_t)_{t\ge 0},P)$ and that $\langle\langle X_n,X_n\rangle\rangle(dt)=a_n(t)\,dt$, where $a_n(t,\omega):=a\bigl(x\bigl(\frac{\lfloor nt\rfloor}n\wedge n,\omega\bigr)\bigr)$. Hence, by Itô's formula, for $f\in\mathcal S(\mathbb R^N)$:
\[
\Bigl(\Bigl(e^{-\lambda t}R_\lambda f(X_n(t))+\int_0^te^{-\lambda s}\bigl(f(X_n(s))-\psi(s)\bigr)\,ds\Bigr)_{t\ge 0},(\mathcal M_t)_{t\ge 0},P\Bigr)
\]
is a martingale, where
\[
\psi(t,\omega):=\frac12\sum_{i,j=1}^N\bigl(a_n(t,\omega)-I\bigr)^{i,j}\,\bigl(\partial_{y^i}\partial_{y^j}R_\lambda f\bigr)\bigl(X_n(t,\omega)\bigr).
\]
Hence, we have that
\[
\lambda R_\lambda f(x)+\lambda E^P\Bigl[\int_0^\infty e^{-\lambda t}\psi(t)\,dt\Bigr]=\int f\,d\mu_n.
\]
Noting that
\[
|\psi(t,\omega)|\le\frac\varepsilon2\Bigl(\sum_{i,j=1}^N\bigl(\partial_{y^i}\partial_{y^j}R_\lambda f\bigr)^2\bigl(X_n(t,\omega)\bigr)\Bigr)^{1/2},
\]
we see that
\[
\lambda E^P\Bigl[\int_0^\infty e^{-\lambda t}|\psi(t)|\,dt\Bigr]\le\frac\varepsilon2\int_{\mathbb R^N}\Bigl(\sum_{i,j=1}^N\bigl(\partial_{y^i}\partial_{y^j}R_\lambda f\bigr)^2\Bigr)^{1/2}d\mu_n\le\frac{\varepsilon CM}2\|f\|_{L^p(\mathbb R^N)}\le\frac M2\|f\|_{L^p(\mathbb R^N)},
\]
where $M$ denotes the smallest constant for which $\bigl|\int g\,d\mu_n\bigr|\le M\|g\|_{L^p(\mathbb R^N)}$, and where we have used (5.5) and the fact that $\varepsilon C\le 1$. Using this and (5.4) in the preceding, we obtain $M\le\lambda A+\frac M2$, from which $M\le 2\lambda A$ — that is, (5.9) with $B=2A$ — is immediate.
We must still check that (5.9) holds for some $B\in\,]0,\infty[$. Let $f\in\mathcal S(\mathbb R^N)^+$ be given. Then
\[
\int f\,d\mu_n=\sum_{m=0}^{n^2-1}\lambda e^{-\lambda\frac mn}E^P\Bigl[\int_0^{\frac1n}e^{-\lambda t}f\Bigl(X_n\Bigl(\frac mn+t\Bigr)\Bigr)\,dt\Bigr]+\lambda e^{-\lambda n}E^P\Bigl[\int_0^\infty e^{-\lambda t}f\bigl(X_n(n+t)\bigr)\,dt\Bigr].
\]
At the same time,
\[
E^P\Bigl[\int_0^{\frac1n}e^{-\lambda t}f\Bigl(X_n\Bigl(\frac mn+t\Bigr)\Bigr)\,dt\Bigr]=\int_0^{\frac1n}e^{-\lambda t}E^P\Bigl[f\Bigl(X_n\Bigl(\frac mn\Bigr)+\sigma\Bigl(x\Bigl(\frac mn\Bigr)\Bigr)\Bigl(\beta\Bigl(\frac mn+t\Bigr)-\beta\Bigl(\frac mn\Bigr)\Bigr)\Bigr)\Bigr]\,dt
\]
\[
=\int_\Omega\int_0^{\frac1n}e^{-\lambda t}\int_{\mathbb R^N}f_m(y,\omega)\,g\bigl(t,y-\bar y_m(\omega)\bigr)\,dy\,dt\,P(d\omega)\le\int_\Omega\bigl[R_\lambda f_m(\cdot,\omega)\bigr]\bigl(\bar y_m(\omega)\bigr)\,P(d\omega),
\]
where $f_m(y,\omega):=f\bigl(\sigma\bigl(x\bigl(\frac mn,\omega\bigr)\bigr)y\bigr)$ and $\bar y_m(\omega):=\sigma\bigl(x\bigl(\frac mn,\omega\bigr)\bigr)^{-1}X_n\bigl(\frac mn,\omega\bigr)$. Note that, because of (5.1) and the fact that $\varepsilon\le\frac12$, there is a $K\in\,]0,\infty[$ such that $\|f_m(\cdot,\omega)\|_{L^p(\mathbb R^N)}\le K\|f\|_{L^p(\mathbb R^N)}$ for all $m\in\mathbb N$ and $\omega\in\Omega$. Hence, by (5.4), we have now proved that
\[
E^P\Bigl[\int_0^{\frac1n}e^{-\lambda t}f\Bigl(X_n\Bigl(\frac mn+t\Bigr)\Bigr)\,dt\Bigr]\le KA\|f\|_{L^p(\mathbb R^N)},
\]
and the same argument shows that
\[
E^P\Bigl[\int_0^\infty e^{-\lambda t}f\bigl(X_n(n+t)\bigr)\,dt\Bigr]\le KA\|f\|_{L^p(\mathbb R^N)}.
\]
Combining these, we conclude that $\int f\,d\mu_n\le(n^2+1)\lambda KA\|f\|_{L^p(\mathbb R^N)}$ for all $f\in\mathcal S(\mathbb R^N)^+$.
With Lemma (5.10) we have now completed the proof of the following theorem:

(5.11). THEOREM. Let $a:\mathbb R^N\to S_+(\mathbb R^N)$ be a bounded continuous function satisfying $a(x)>0$ for each $x\in\mathbb R^N$. Then, for every bounded measurable $b:[0,\infty[\times\mathbb R^N\to\mathbb R^N$, the martingale problem for the associated $L$ is well-posed.

(5.12). REMARK. There are several directions in which the preceding result can be extended. In the first place, if $a$ depends on $(t,y)\in[0,\infty[\times\mathbb R^N$ in such a way that, for each $T>0$ and compact $K\subset\subset\mathbb R^N$, the family $\{a(t,\cdot)|_K\mid t\in[0,T]\}$ is uniformly positive and equicontinuous, then the martingale problem for $a$ and any bounded measurable $b$ is well-posed. Moreover, when $N=1$, this continues to be true without any continuity assumption on $a$, so long as $a$ is uniformly positive on compact subsets of $[0,\infty[\times\mathbb R^N$. Finally, if $N=2$ and $a$ is independent of $t$, then the martingale problem for $a$ and any bounded measurable $b$ is well-posed so long as $a$ is a bounded measurable function of $x$ which is uniformly positive on compact subsets of $\mathbb R^N$. All of these results can be proved by variations on the line of reasoning which we have presented here. For details, see [Stroock and Varadhan, 2006, §§7.3.3, 7.3.4].
APPENDIX
In this appendix we shall derive the estimate III-(5.6). There are various approaches to such estimates, and some of these approaches are surprisingly probabilistic. In order to give a hint about the role that probability theory can play, we shall base our proof on Burkholder's inequality. However, there is a bit of preparation which we must take care of before we can bring Burkholder's inequality into play.

Given $j\in\{1,\dots,N\}$, define $R_j$ on $\mathcal S(\mathbb R^N)$ to be the operator given by $\mathcal F(R_jf)(\xi):=\frac{\sqrt{-1}\,\xi_j}{\|\xi\|}\mathcal Ff(\xi)$. (Recall that we use $\mathcal F$ to denote the Fourier transform.) Then (5.6) comes down to showing that for each $p\in\,]1,\infty[$ there is a $C_p<\infty$ such that
\[
(\mathrm A.1)\qquad \max_{j=1,\dots,N}\|R_jf\|_{L^p(\mathbb R^N)}\le C_p\|f\|_{L^p(\mathbb R^N)},\qquad f\in\mathcal S(\mathbb R^N).
\]
Indeed, suppose that (A.1) has been proved. Then we would have that
\[
\bigl\|\partial_{y^j}\partial_{y^{j'}}f\bigr\|_{L^p(\mathbb R^N)}=\bigl\|R_jR_{j'}\Delta f\bigr\|_{L^p(\mathbb R^N)}\le C_p^2\|\Delta f\|_{L^p(\mathbb R^N)},
\]
since
\[
\mathcal F\bigl(\partial_{y^j}\partial_{y^{j'}}f\bigr)(\xi)=-\xi_j\xi_{j'}\mathcal Ff(\xi)=\frac{\xi_j\xi_{j'}}{\|\xi\|^2}\,\mathcal F(\Delta f)(\xi)=-\mathcal F\bigl(R_jR_{j'}\Delta f\bigr)(\xi).
\]

(A.2). LEMMA. There is a $c_N\in\mathbb C\setminus\{0\}$ such that, for each $j=1,\dots,N$, the mapping
\[
\mathcal S(\mathbb R^N)\ni\varphi\longmapsto\lim_{\varepsilon\downarrow 0}c_N\int_{\|x\|>\varepsilon}\frac{x_j}{\|x\|^{N+1}}\,\varphi(x)\,dx
\]
determines the tempered distribution $T_j$ whose Fourier transform is $\frac{\sqrt{-1}\,\xi_j}{\|\xi\|}$ ($:=0$ if $\xi=0$). In particular, $R_jf=f\star T_j$, $f\in\mathcal S(\mathbb R^N)$.
PROOF. Note that, for $\varphi\in\mathcal S(\mathbb R^N)$:
\[
\int_{\|x\|>\varepsilon}\frac{x_j}{\|x\|^{N+1}}\varphi(x)\,dx=\int_{\varepsilon<\|x\|\le 1}\frac{x_j}{\|x\|^{N+1}}\bigl(\varphi(x)-\varphi(0)\bigr)\,dx+\int_{\|x\|>1}\frac{x_j}{\|x\|^{N+1}}\varphi(x)\,dx,
\]
and
\[
\lim_{\varepsilon\downarrow 0}\int_{\varepsilon<\|x\|\le 1}\frac{x_j}{\|x\|^{N+1}}\bigl(\varphi(x)-\varphi(0)\bigr)\,dx=\int_{\|x\|\le 1}\frac{x_j}{\|x\|^{N+1}}\bigl(\varphi(x)-\varphi(0)\bigr)\,dx.
\]
Since
\[
\mathcal S(\mathbb R^N)\ni\varphi\longmapsto\int_{\|x\|\le 1}\frac{x_j}{\|x\|^{N+1}}\bigl(\varphi(x)-\varphi(0)\bigr)\,dx+\int_{\|x\|>1}\frac{x_j}{\|x\|^{N+1}}\varphi(x)\,dx
\]
is obviously a continuous linear functional on $\mathcal S(\mathbb R^N)$, we shall have completed the proof once we show that
\[
\lim_{\varepsilon\downarrow 0}\lim_{R\uparrow\infty}\int_{\varepsilon<\|x\|<R}\frac{x_j}{\|x\|^{N+1}}\,e^{\sqrt{-1}\,\langle\xi,x\rangle_{\mathbb R^N}}\,dx=c\,\frac{\xi_j}{\|\xi\|}
\]
for some $c\in\mathbb C\setminus\{0\}$. Clearly, there is nothing to do when $\xi=0$. When $\xi\ne 0$, standard calculations show that
\[
\lim_{\varepsilon\downarrow 0}\lim_{R\uparrow\infty}\int_{\varepsilon<\|x\|<R}\frac{x_j}{\|x\|^{N+1}}\,e^{\sqrt{-1}\,\langle\xi,x\rangle_{\mathbb R^N}}\,dx=k\int_{S^{N-1}}\omega_j\,\mathrm{sgn}\bigl(\langle\xi,\omega\rangle_{\mathbb R^N}\bigr)\,d\omega=k\sum_{n=1}^N\bigl\langle e'_n,e_j\bigr\rangle_{\mathbb R^N}\int_{S^{N-1}}\bigl\langle e'_n,\omega\bigr\rangle_{\mathbb R^N}\,\mathrm{sgn}\bigl(\bigl\langle e'_1,\omega\bigr\rangle_{\mathbb R^N}\bigr)\,d\omega=\frac{\xi_j}{\|\xi\|}\,k\int_{S^{N-1}}\bigl|\bigl\langle e'_1,\omega\bigr\rangle_{\mathbb R^N}\bigr|\,d\omega
\]
(here $(e'_1,\dots,e'_N)$ is an orthonormal basis of $\mathbb R^N$ with $e'_1=\frac\xi{\|\xi\|}$, and the terms with $n\ne 1$ vanish by symmetry),
where $k\in\mathbb C\setminus\{0\}$ is independent of $j\in\{1,\dots,N\}$.

Define $r_j(x):=c_N\frac{x_j}{\|x\|^{N+1}}$ on $\mathbb R^N\setminus\{0\}$, and, for $\varepsilon>0$, set $r_j^{(\varepsilon)}(x):=\mathbf 1_{[\varepsilon,\infty[}(\|x\|)\,r_j(x)$ and define $R_j^{(\varepsilon)}f:=f\star r_j^{(\varepsilon)}$ for $f\in\mathcal S(\mathbb R^N)$. In view of Lemma (A.2), what we must show is that for each $p\in\,]1,\infty[$ there is a $C_p<\infty$ such that
\[
(\mathrm A.3)\qquad \sup_{\varepsilon>0}\bigl\|R_j^{(\varepsilon)}f\bigr\|_{L^p(\mathbb R^N)}\le C_p\|f\|_{L^p(\mathbb R^N)},\qquad f\in\mathcal S(\mathbb R^N).
\]
To this end, note that
\[
R_j^{(\varepsilon)}f(x)=c_N\int_{S^{N-1}}\omega_j\,d\omega\int_\varepsilon^\infty f(x-r\omega)\,\frac{dr}r=\frac{c_N}2\int_{S^{N-1}}\omega_j\,d\omega\int_{|r|>\varepsilon}f(x-r\omega)\,\frac{dr}r.
\]
Next, choose a measurable mapping $S^{N-1}\ni\omega\mapsto U_\omega\in O(N)$ so that $U_\omega e_1=\omega$ for all $\omega$, and, given $f\in\mathcal S(\mathbb R^N)$, set $f_\omega(y):=f(U_\omega y)$. Then
\[
R_j^{(\varepsilon)}f(x)=\frac{c_N}2\int_{S^{N-1}}\omega_j\Bigl(\int_{|r|>\varepsilon}f_\omega\bigl(U_\omega^{-1}x-re_1\bigr)\,\frac{dr}r\Bigr)d\omega=\frac{\pi c_N}2\int_{S^{N-1}}\omega_j\,\bigl(K^{(\varepsilon)}f_\omega\bigr)\bigl(U_\omega^{-1}x\bigr)\,d\omega,
\]
where
\[
K^{(\varepsilon)}g(x):=\frac1\pi\int_{|r|>\varepsilon}g(x-re_1)\,\frac{dr}r,\qquad g\in\mathcal S(\mathbb R^N).
\]
In particular, set $h^{(\varepsilon)}(x):=\frac1\pi\mathbf 1_{[\varepsilon,\infty[}(|x|)\,\frac1x$, $x\in\mathbb R$, and suppose that we show that
\[
(\mathrm A.4)\qquad \sup_{\varepsilon>0}\bigl\|\psi\star h^{(\varepsilon)}\bigr\|_{L^p(\mathbb R)}\le K_p\|\psi\|_{L^p(\mathbb R)},\qquad\psi\in\mathcal S(\mathbb R)
\]
for each $p\in\,]1,\infty[$ and some $K_p<\infty$. Then we would have that
\[
\sup_{\varepsilon>0}\bigl\|K^{(\varepsilon)}f_\omega\bigr\|_{L^p(\mathbb R^N)}\le K_p\|f_\omega\|_{L^p(\mathbb R^N)}=K_p\|f\|_{L^p(\mathbb R^N)}
\]
for all $\omega\in S^{N-1}$, and so, from the preceding, we could conclude that (A.3) holds with $C_p=\frac\pi2|c_N|\,\bigl|S^{N-1}\bigr|\,K_p$. In other words, everything reduces to proving (A.4). (Note that the preceding reduction allows us to obtain (A.3) for arbitrary $N\in\mathbb Z^+$ from the case $N=1$.)
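When $p=2$, estimates like (A.4) follow from Parseval's identity alone, since the limiting operator is the Fourier multiplier $\sqrt{-1}\,\mathrm{sgn}\,\xi$ (the case $N=1$ of Lemma (A.2), i.e., the Hilbert transform). The sketch below (Python, on a periodic grid — only a discrete stand-in of ours for the operator on $\mathbb R$) exhibits the isometry and the identity $H\circ H=-I$ on mean-zero functions.

```python
import numpy as np

n, length = 4096, 80.0
x = (np.arange(n) - n // 2) * (length / n)
xi = 2 * np.pi * np.fft.fftfreq(n, d=length / n)

f = x * np.exp(-x**2)            # an odd (hence mean-zero) rapidly decreasing function
mult = 1j * np.sign(xi)          # the multiplier sqrt(-1) * sgn(xi) of the N = 1 case
Hf = np.fft.ifft(mult * np.fft.fft(f)).real
HHf = np.fft.ifft(mult * np.fft.fft(Hf)).real
```

Because `|mult| = 1` away from the zero mode, the transform preserves the $\ell^2$ norm of `f` and squares to $-I$, which is exactly the $p=2$ content of (A.1).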
For reasons which will become apparent in a moment, it is better to replace the kernel $h^{(\varepsilon)}$ with $h_\varepsilon(x):=\frac1\pi\frac x{x^2+\varepsilon^2}$. Noting that
\[
\pi\bigl\|h^{(\varepsilon)}-h_\varepsilon\bigr\|_{L^1(\mathbb R)}\le 2\int_\varepsilon^\infty\frac{\varepsilon^2}{x(x^2+\varepsilon^2)}\,dx+2\int_0^\varepsilon\frac x{x^2+\varepsilon^2}\,dx=2\int_1^\infty\frac{dx}{x(x^2+1)}+2\int_0^1\frac x{x^2+1}\,dx,
\]
we see that (A.4) will follow as soon as we show that
\[
(\mathrm A.5)\qquad \sup_{\varepsilon>0}\|\psi\star h_\varepsilon\|_{L^p(\mathbb R)}\le K_p\|\psi\|_{L^p(\mathbb R)},\qquad\psi\in\mathcal S(\mathbb R)
\]
for each $p\in\,]1,\infty[$ and some $K_p<\infty$. In addition, because
\[
\int\varphi(x)\,\bigl(\psi\star h_\varepsilon\bigr)(x)\,dx=-\int\psi(y)\,\bigl(\varphi\star h_\varepsilon\bigr)(y)\,dy,\qquad\varphi,\psi\in\mathcal S(\mathbb R),
\]
an easy duality argument allows us to restrict our attention to $p\in[2,\infty[$, and therefore we shall do so.

Set $p_y(x):=\frac1\pi\frac y{x^2+y^2}$ for $(x,y)\in\mathbb R\times\,]0,\infty[$. Given $\psi\in C_0^\infty(\mathbb R;\mathbb R)$ (we have emphasized here that $\psi$ is real-valued), define $u_\psi(x,y):=\psi\star p_y(x)$ and $v_\psi(x,y):=\psi\star h_y(x)$.
(A.6). LEMMA. Referring to the preceding, $u_\psi$ and $v_\psi$ are conjugate harmonic functions on $\mathbb R\times\,]0,\infty[$ (i.e., they satisfy the Cauchy-Riemann equations). Moreover, there is a $C=C_\psi<\infty$ such that
\[
\bigl|u_\psi(x,y)\bigr|\vee\bigl|v_\psi(x,y)\bigr|\le\frac C{\sqrt{x^2+y^2}},\qquad(x,y)\in\mathbb R\times\,]0,\infty[.
\]
Finally,
\[
\lim_{\delta\downarrow 0}\,\sup_x\,\sup_{|\xi-x|\vee y<\delta}\bigl|u_\psi(\xi,y)-\psi(\xi)\bigr|=0.
\]
PROOF. Set F(z) := u_ψ(x, y) + √−1 v_ψ(x, y) for z = x + √−1 y with (x, y) ∈ ℝ × ]0, ∞[, and note that
\[
F(z) = \frac{\sqrt{-1}}{\pi} \int_{-\infty}^{\infty} \frac{\psi(\xi)}{z - \xi} \, d\xi.
\]
Clearly, all the assertions about u_ψ and v_ψ, except the final one about u_ψ, follow immediately from this representation of F. As for that final assertion, the asserted behavior of u_ψ(·, y) as y ↓ 0 can be easily derived from elementary estimates.
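To make the identification underlying the proof explicit (a routine computation, recorded here for completeness), one separates the Cauchy-type kernel into its real and imaginary parts, which are precisely p_y and h_y:

```latex
% For z = x + \sqrt{-1}\,y with y > 0:
\frac{\sqrt{-1}}{\pi\,(z - \xi)}
  = \frac{\sqrt{-1}\,\bigl((x-\xi) - \sqrt{-1}\,y\bigr)}
         {\pi\,\bigl((x-\xi)^2 + y^2\bigr)}
  = \frac{1}{\pi}\,\frac{y}{(x-\xi)^2 + y^2}
    + \sqrt{-1}\,\frac{1}{\pi}\,\frac{x-\xi}{(x-\xi)^2 + y^2}.
% Integrating against \psi(\xi)\,d\xi therefore gives
F(z) = \psi \star p_y(x) + \sqrt{-1}\,\psi \star h_y(x)
     = u_\psi(x,y) + \sqrt{-1}\,v_\psi(x,y),
```

and since F is analytic in z, its real and imaginary parts u_ψ and v_ψ are conjugate harmonic.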
We are at last ready to see how Burkholder's inequality enters the proof of (A.5). Namely, for (x, y) ∈ ℝ × ]0, ∞[, let W_{x,y} denote the two-dimensional Wiener measure starting from (x, y). Using z(t, ω) := (x(t, ω), y(t, ω)) to denote the position ω(t) ∈ ℝ² of the path ω ∈ Ω at time t ≥ 0, define τ_ε(ω) := inf{t ≥ 0 : y(t, ω) ≤ ε}. We must first check that W_{x,y}(τ_ε < ∞) = 1 for all 0 ≤ ε ≤ y, and, obviously, it is enough to do so in the case when ε = 0. But, by Itô's formula,
\[
\Bigl( \bigl( e^{-\lambda(t \wedge \tau_0) - \sqrt{2\lambda}\, y(t \wedge \tau_0)} \bigr)_{t \ge 0},
\, (M_t)_{t \ge 0}, \, W_{x,y} \Bigr)
\]
is a martingale for each λ ≥ 0, and so
\[
e^{-\sqrt{2\lambda}\, y}
= \mathbb{E}^{W_{x,y}} \bigl[ e^{-\lambda(t \wedge \tau_0) - \sqrt{2\lambda}\, y(t \wedge \tau_0)} \bigr]
\le \mathbb{E}^{W_{x,y}} \bigl[ e^{-\lambda(t \wedge \tau_0)} \bigr].
\]
Hence,
\[
W_{x,y}(\tau_0 < \infty)
= \lim_{\lambda \downarrow 0} \mathbb{E}^{W_{x,y}} \bigl[ e^{-\lambda \tau_0} \bigr]
= \lim_{\lambda \downarrow 0} \lim_{t \uparrow \infty} \mathbb{E}^{W_{x,y}} \bigl[ e^{-\lambda(t \wedge \tau_0)} \bigr]
\ge \lim_{\lambda \downarrow 0} e^{-\sqrt{2\lambda}\, y} = 1.
\]
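As a quick check (the computation Itô's formula delivers, not spelled out in the text), the exponential used above is space–time harmonic for the heat operator governing the y-coordinate of the Wiener path:

```latex
% With g(t, y) := e^{-\lambda t - \sqrt{2\lambda}\, y}:
\Bigl(\partial_t + \tfrac{1}{2}\,\partial_y^2\Bigr) g(t, y)
  = \Bigl(-\lambda + \tfrac{1}{2}\,(2\lambda)\Bigr)\, g(t, y) = 0,
% so g(t \wedge \tau_0,\, y(t \wedge \tau_0)) is a martingale under W_{x,y};
% it is bounded because y(t \wedge \tau_0) \ge 0 keeps g \le e^{-\lambda t} \le 1.
```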
Next, let ψ ∈ C₀^∞(ℝ) be given, and define
\[
M_\varepsilon(t, \omega) := u_\psi\bigl( z(t \wedge \tau_\varepsilon(\omega), \omega) \bigr) - u_\psi(x, y)
\quad \text{and} \quad
N_\varepsilon(t, \omega) := v_\psi\bigl( z(t \wedge \tau_\varepsilon(\omega), \omega) \bigr) - v_\psi(x, y),
\]
where u_ψ and v_ψ are defined relative to ψ as in the preceding. Then, by Itô's formula and Lemma (A.6), ((M_ε(t))_{t≥0}, (M_t)_{t≥0}, W_{x,y}) and ((N_ε(t))_{t≥0}, (M_t)_{t≥0}, W_{x,y}) are bounded martingales for each (x, y) ∈ ℝ × ]0, ∞[ and 0 < ε ≤ y. In addition, since (by the Cauchy–Riemann equations) |∇u_ψ| = |∇v_ψ|, we have
\[
\langle N_\varepsilon \rangle(t)
= \int_0^{t \wedge \tau_\varepsilon} \bigl| \nabla v_\psi(z(s)) \bigr|^2 \, ds
= \int_0^{t \wedge \tau_\varepsilon} \bigl| \nabla u_\psi(z(s)) \bigr|^2 \, ds
= \langle M_\varepsilon \rangle(t)
\quad (\text{a.s. } W_{x,y}).
\]
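The equality of the two quadratic variations rests on the following elementary consequence of the Cauchy–Riemann equations, recorded here for completeness:

```latex
% If \partial_x u_\psi = \partial_y v_\psi and
% \partial_y u_\psi = -\partial_x v_\psi (Cauchy--Riemann), then
|\nabla u_\psi|^2
  = (\partial_x u_\psi)^2 + (\partial_y u_\psi)^2
  = (\partial_y v_\psi)^2 + (\partial_x v_\psi)^2
  = |\nabla v_\psi|^2,
% so the integrands defining \langle M_\varepsilon \rangle and
% \langle N_\varepsilon \rangle coincide along the Wiener path.
```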
Hence, by Burkholder's inequality, we see that for each p ∈ [2, ∞[ there is a C_p < ∞ such that
\[
\| N_\varepsilon(\tau_\varepsilon) \|_{L^p(W_{x,y})}
\le \| N_\varepsilon^*(\tau_\varepsilon) \|_{L^p(W_{x,y})}
\le C_p \| M_\varepsilon^*(\tau_\varepsilon) \|_{L^p(W_{x,y})}
\le C_p \| M_0^*(\tau_0) \|_{L^p(W_{x,y})}
\le \frac{p}{p-1}\, C_p \| M_0(\tau_0) \|_{L^p(W_{x,y})}
= K_p \| M_0(\tau_0) \|_{L^p(W_{x,y})},
\]
where K_p := p C_p/(p − 1) and we have used Doob's inequality in order to get the last relation. Since y(τ_ε) = ε (a.s. W_{x,y}), we conclude from the preceding that
\[
\mathbb{E}^{W_{x,y}} \Bigl[ \bigl| v_\psi(x(\tau_\varepsilon), \varepsilon) - v_\psi(x, y) \bigr|^p \Bigr]^{\frac{1}{p}}
\le K_p \, \mathbb{E}^{W_{x,y}} \Bigl[ \bigl| \psi(x(\tau_0)) - u_\psi(x, y) \bigr|^p \Bigr]^{\frac{1}{p}},
\]
and so
\[
\mathbb{E}^{W_{x,y}} \Bigl[ \bigl| v_\psi(x(\tau_\varepsilon), \varepsilon) \bigr|^p \Bigr]^{\frac{1}{p}}
\le K_p \, \mathbb{E}^{W_{x,y}} \Bigl[ \bigl| \psi(x(\tau_0)) \bigr|^p \Bigr]^{\frac{1}{p}}
+ K_p \bigl| u_\psi(x, y) \bigr| + \bigl| v_\psi(x, y) \bigr|
\]
for all (x, y) ∈ ℝ × ]0, ∞[ and 0 < ε ≤ y. Noting that the distribution of x(τ_ε) under W_{x,y} is the same as that of x + x(τ_ε) under W_{0,y}, raising the preceding to the power p, and integrating with respect to x ∈ ℝ, we obtain
\[
\bigl\| v_\psi(\cdot, \varepsilon) \bigr\|_{L^p(\mathbb{R})}^p
\le 2^{p-1} K_p^p \, \| \psi \|_{L^p(\mathbb{R})}^p
+ 2^{p-1} \Bigl( K_p \bigl\| u_\psi(\cdot, y) \bigr\|_{L^p(\mathbb{R})}
+ \bigl\| v_\psi(\cdot, y) \bigr\|_{L^p(\mathbb{R})} \Bigr)^p
\]
for all 0 < ε ≤ y. But, by the estimate on u_ψ and v_ψ in Lemma (A.6), it is clear that
\[
\bigl\| u_\psi(\cdot, y) \bigr\|_{L^p(\mathbb{R})}^p
\vee
\bigl\| v_\psi(\cdot, y) \bigr\|_{L^p(\mathbb{R})}^p
\xrightarrow[y \uparrow \infty]{} 0.
\]
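The asserted decay follows by integrating the pointwise bound of Lemma (A.6); with C = C_ψ the constant appearing there, a change of variables makes it explicit:

```latex
\bigl\| u_\psi(\cdot, y) \bigr\|_{L^p(\mathbb{R})}^p
  \le C^p \int_{-\infty}^{\infty} \frac{dx}{(x^2 + y^2)^{p/2}}
  = C^p \, y^{1-p} \int_{-\infty}^{\infty} \frac{dt}{(1 + t^2)^{p/2}}
  \xrightarrow[y \uparrow \infty]{} 0
% (substitute x = yt; the last integral is finite since p \ge 2 > 1),
```

and the same bound applies verbatim to v_ψ.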
In other words, we have now proved (A.5), with 2K_p replacing K_p, so long as ψ ∈ C₀^∞(ℝ). Obviously, the same result for all ψ ∈ S(ℝ) follows immediately from this, and so we are done.
Bibliography

Ikeda, N. and Watanabe, S. [1981], Stochastic Differential Equations and Diffusion Processes, Vol. 24 of North-Holland Mathematical Library, North-Holland Publishing Co., Amsterdam.

McKean, H. P. [1969], Stochastic Integrals, Academic Press, New York.

Parthasarathy, K. R. [2005], Probability Measures on Metric Spaces, AMS Chelsea Publishing, Providence, RI. Reprint of the 1967 original.

Stroock, D. W. and Varadhan, S. R. S. [2006], Multidimensional Diffusion Processes, Classics in Mathematics, Springer-Verlag, Berlin. Originally published as Vol. 233 in the series Grundlehren der Mathematischen Wissenschaften; reprint of the 1st ed., 1979.