MARKOV PROCESSES AND
POTENTIAL THEORY
MARKOV PROCESSES AND
POTENTIAL THEORY

R. M. BLUMENTHAL
DEPARTMENT OF MATHEMATICS
UNIVERSITY OF WASHINGTON
SEATTLE, WASHINGTON

and

R. K. GETOOR
DEPARTMENT OF MATHEMATICS
UNIVERSITY OF CALIFORNIA AT SAN DIEGO
LA JOLLA, CALIFORNIA
1968
ACADEMIC PRESS New York and London
COPYRIGHT © 1968, BY ACADEMIC PRESS INC.
ALL RIGHTS RESERVED. NO PART OF THIS BOOK MAY BE REPRODUCED IN ANY FORM, BY PHOTOSTAT, MICROFILM, OR ANY OTHER MEANS, WITHOUT WRITTEN PERMISSION FROM THE PUBLISHERS.

ACADEMIC PRESS INC.
111 Fifth Avenue, New York, New York 10003

United Kingdom Edition published by
ACADEMIC PRESS INC. (LONDON) LTD.
Berkeley Square House, London W.1

LIBRARY OF CONGRESS CATALOG CARD NUMBER: 68-18659

PRINTED IN THE UNITED STATES OF AMERICA
PREFACE

The study of the relationship between Markov processes and potential theory began in the early 1930's. Initially attention was directed toward analytic aspects such as differential equations satisfied by the transition probabilities and questions concerning existence and uniqueness of solutions of these equations. Then, through the work of Doob, Feller, Kac, and Kakutani, among others, it emerged that this relationship has a deep probabilistic basis. An extensive study of the relationship between Brownian motion and classical potential theory was given by Doob [4] in 1954. This paper marks the beginning of the modern era in the subject. Then in 1956 and 1957 Hunt [2-4] laid bare a large part of the entire subject: he gave a rather general definition of "potential theory" and associated with each such theory a Markov process in terms of which potential-theoretic objects and operations (superharmonic functions, balayage, etc.) have probabilistic interpretations. He used this relationship to generalize and reinterpret (often in a more illuminating form) many facts from classical potential theory.
The purpose of this book is to collect within one cover most of the contents of Hunt's fundamental papers, portions of the theory of additive functionals, and some closely related matters. We hope that we have presented the material in such a way that it is accessible to a diligent advanced graduate student. The reader is assumed to be familiar with general measure theory at the level of a basic graduate course. Aside from this, the book is very nearly self-contained. Still it is unrealistic to recommend it to a reader who does not have some background in probability theory or at least a bit of intuition concerning random variables, independence, conditioning, and the like.
Unfortunately the material is somewhat top-heavy; that is, one must start with some extensive (and perhaps dull) measure-theoretic preliminaries. Frequently the need for a particular level of generality (e.g., taking as sample space a general measure space rather than some specific function space) is not immediately apparent. However, we have tried to select the lowest
level of abstraction that allows one firstly to operate freely and rigorously in subsequent developments, and secondly to avoid unnecessary restrictions on the transformations of processes one is willing to consider. Most of the measure-theoretic preliminaries are presented in Chapter I, while the more interesting developments begin in Chapter II; we ask the reader to go at least this far before passing judgement on the entire presentation. The reader especially interested in potential theory can go directly from the end of Chapter II to Section 6 of Chapter III and from there to Section 1 of Chapter V and Sections 1, 2, and 4 of Chapter VI. These sections are more or less independent of the rest of the book, and they contain the basic potential-theoretic facts.
Most sections are followed by exercises designed to further the theory as well as give practice. We do not hesitate to use the results of an exercise in subsequent sections. Historical references and credits are collected in the "Notes and Comments." We hope that we always have told the truth, but realize that it is seldom the whole truth. It is not our intention to give anyone less than his full measure of credit and we apologize in advance to anyone who may feel slighted.
This book does not cover all of the theory of Markov processes or even all of probabilistic potential theory. We have not said anything about infinitesimal generators (which, however, receive excellent treatment in Dynkin [2] and in Ito-McKean [1]). Neither have we said anything about the problem of constructing a Markov semigroup starting from a potential operator (or a class of excessive functions), nor about the boundary theory for the representation of harmonic functions, to mention only a few omissions.
Finally we have the pleasure of acknowledging a few of our debts. We benefited greatly from the interest which P. A. Meyer took in this project; he read most of the manuscript and set us straight on a number of important matters. Harry Dym and Frank Knight read portions of the manuscript and made useful comments. We received financial support from the National Science Foundation and from the Air Force Office of Scientific Research during part of the writing. We are particularly indebted to our typists Donna Thompson and Olive Lee for the capable manner in which they handled the job. They lightened considerably our work in preparing the manuscript.

February, 1968
R. M. BLUMENTHAL
R. K. GETOOR
CONTENTS

Preface

Chapter 0. PRELIMINARIES
  1. Notation
  2. The Monotone Class Theorem
  3. Topological Spaces

Chapter I. MARKOV PROCESSES
  1. General Definitions
  2. Transition Functions
  3. General Definitions Continued
  4. Equivalent Processes
  5. The Measures $P^\mu$
  6. Stopping Times
  7. Stopping Times for Markov Processes
  8. The Strong Markov Property
  9. Standard Processes
  10. Measurability of Hitting Times
  11. Further Properties of Hitting Times
  12. Regular Step Processes

Chapter II. EXCESSIVE FUNCTIONS
  1. Introduction
  2. Excessive Functions
  3. Exceptional Sets
  4. The Fine Topology
  5. Alternative Characterization of Excessive Functions

Chapter III. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
  1. Multiplicative Functionals
  2. Subordinate Semigroups
  3. Subprocesses
  4. Resolvents and Strong Multiplicative Functionals
  5. Excessive Functions
  6. A Theorem of Hunt

Chapter IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
  1. Additive Functionals
  2. Potentials of Additive Functionals
  3. Potentials of Continuous Additive Functionals
  4. Potentials of Natural Additive Functionals
  5. Classification of Excessive Functions

Chapter V. FURTHER PROPERTIES OF CONTINUOUS ADDITIVE FUNCTIONALS
  1. Reference Measures
  2. Continuous Additive Functionals
  3. Fine Supports and Local Times
  4. Balayage of Additive Functionals
  5. Processes with Identical Hitting Distributions

Chapter VI. DUAL PROCESSES AND POTENTIAL THEORY
  1. Dual Processes
  2. Potentials of Measures
  3. Potentials of Additive Functionals
  4. Capacity and Related Topics

Notes and Comments
Bibliography
Index of Notation
Subject Index
0
PRELIMINARIES
This chapter contains preliminary material that will be used in the main text. Section 1 establishes the basic notation that will be used without further explanation in the sequel. As a result this section should be read before the main text is attempted. The remaining sections of this chapter need not be read by everyone. It will suffice to refer to them whenever necessary. Section 2 contains the so-called "monotone class" theorems, which are perhaps the most useful results from set theory for us. They will be used extensively in the sequel. In Section 2 we also give a number of typical applications of these theorems in order to illustrate their use to readers who are unfamiliar with this type of argument. Often we will omit such arguments in the body of the text. Finally in Section 3 we collect some results connecting topology and measure theory.

1. Notation
Let $\mathbf{R}$ denote the real numbers, $\mathbf{Z}$ the integers, and $\overline{\mathbf{R}}$ the extended real numbers $\{-\infty\} \cup \mathbf{R} \cup \{+\infty\}$ with the usual topology. If $a$ and $b$ are in $\mathbf{R}$, then $a \vee b = \max(a, b)$, $a \wedge b = \min(a, b)$, and $a^+ = a \vee 0$. If $E$ is a set, a numerical (real-valued) function on $E$ is a function from $E$ to $\overline{\mathbf{R}}$ ($\mathbf{R}$). For any numerical function $f$ on $E$, $\|f\| = \sup_{x \in E} |f(x)|$. A $\sigma$-algebra $\mathscr{E}$ on $E$ is a collection of subsets of $E$ such that $E \in \mathscr{E}$ and $\mathscr{E}$ is closed under complements and countable unions. We will use the usual notation "$\cap$" and "$\cup$" to denote set intersection and union. If $A \subset E$, then $A^c$ will denote the complement of $A$. Finally $A - B = A \cap B^c$ and $A \,\Delta\, B = (A - B) \cup (B - A)$ will denote set difference and symmetric difference, respectively. If $\mathscr{A}$ is any collection of subsets of $E$, $\sigma(\mathscr{A})$ denotes the $\sigma$-algebra generated by $\mathscr{A}$, that is, the smallest $\sigma$-algebra on $E$ containing $\mathscr{A}$. Clearly $\sigma(\mathscr{A})$ is the intersection of all $\sigma$-algebras
on $E$ containing $\mathscr{A}$. A pair $(E, \mathscr{E})$ consisting of a set $E$ and a $\sigma$-algebra $\mathscr{E}$ on $E$ is called a measurable space. Let $(E, \mathscr{E})$ and $(F, \mathscr{F})$ be measurable spaces. A function $f\colon E \to F$ will be called measurable relative to $\mathscr{E}$ and $\mathscr{F}$ provided that for every $B \in \mathscr{F}$, $f^{-1}(B) \in \mathscr{E}$. Following Chung and Doob [1] we will write simply $f \in \mathscr{E}/\mathscr{F}$ if $f$ is measurable relative to $\mathscr{E}$ and $\mathscr{F}$. In the case where $F = \mathbf{R}$ and $\mathscr{F}$ is the class of all Borel subsets of $\mathbf{R}$ we write simply $f \in \mathscr{E}$ for a numerical function $f \in \mathscr{E}/\mathscr{F}$. If $A$ is a subset of $E$ we denote the indicator function of $A$ by $I_A$, so that, for example, the statements $I_A \in \mathscr{E}$ and $A \in \mathscr{E}$ have the same meaning. We denote the collection of all real-valued $\mathscr{E}$ measurable functions by $r\mathscr{E}$. If $\mathscr{H}$ is any collection of numerical functions, then $\mathscr{H}^+$ denotes the set of nonnegative functions in $\mathscr{H}$ and $b\mathscr{H}$ denotes the set of bounded functions in $\mathscr{H}$. If $f$ is a numerical function on $E$, then $f = 0$, $f \geq 0$, and $f > 0$ mean $f(x) = 0$ for all $x \in E$, $f(x) \geq 0$ for all $x \in E$, and $f(x) > 0$ for all $x \in E$, respectively. If $E$, $F$, and $G$ are three sets and if $f\colon E \to F$ and $g\colon F \to G$, then we write $g \circ f$ for the composition map from $E$ to $G$, $g \circ f(x) = g[f(x)]$. In particular if $(E, \mathscr{E})$, $(F, \mathscr{F})$, and $(G, \mathscr{G})$ are measurable spaces and if $f \in \mathscr{E}/\mathscr{F}$ and $g \in \mathscr{F}/\mathscr{G}$, then $g \circ f \in \mathscr{E}/\mathscr{G}$.

Let $\Omega$ be a set and let $(E_i, \mathscr{E}_i)_{i \in I}$ be a collection of measurable spaces indexed by some set $I$. For each $i$ let $f_i$ be a map from $\Omega$ to $E_i$; then $\sigma(f_i, \mathscr{E}_i;\, i \in I)$ denotes the $\sigma$-algebra on $\Omega$ generated by the sets $\{f_i^{-1}(A_i)\colon A_i \in \mathscr{E}_i\}$ as $i$ ranges over $I$. Clearly this is the smallest $\sigma$-algebra on $\Omega$ relative to which all the $f_i$ are measurable. We will write merely $\sigma(f_i;\, i \in I)$ when the $E_i$ and $\mathscr{E}_i$ are clearly understood from the context.

By a measure on $(E, \mathscr{E})$ we always will mean a nonnegative measure. A measure space is a triple $(E, \mathscr{E}, \mu)$ where $(E, \mathscr{E})$ is a measurable space and $\mu$ is a measure on $(E, \mathscr{E})$. We recall that $\mu$ is finite if $\mu(E) < \infty$ and $\mu$ is $\sigma$-finite if there exists a sequence $\{E_n\}$ of elements of $\mathscr{E}$ such that $E = \bigcup E_n$ and $\mu(E_n) < \infty$ for all $n$. We assume the reader is familiar with the theory of integration on a measure space. Often we write $\mu(f)$ instead of $\int f \, d\mu$ for those $f \in \mathscr{E}$ for which the integral exists. A measure space $(E, \mathscr{E}, \mu)$ is called complete if $B \in \mathscr{E}$ and $\mu(B) = 0$ implies that every subset of $B$ is in $\mathscr{E}$. Given any measure space $(E, \mathscr{E}, \mu)$ we define $\mathscr{E}^\mu$ to consist of all sets $B \subset E$ for which there exist $B_1, B_2 \in \mathscr{E}$ such that $B_1 \subset B \subset B_2$ and $\mu(B_2 - B_1) = 0$. It is evident that $\mathscr{E}^\mu$ is a $\sigma$-algebra and that the measure $\mu$ has a unique extension to $\mathscr{E}^\mu$. We denote the extension again by $\mu$. $\mathscr{E}^\mu$ is called the completion of $\mathscr{E}$ with respect to $\mu$. One verifies easily that $\mathscr{E} = \mathscr{E}^\mu$ if and only if $(E, \mathscr{E}, \mu)$ is complete. Define $\mathscr{E}^* = \bigcap_\mu \mathscr{E}^\mu$ where the intersection is over all finite measures $\mu$ on $(E, \mathscr{E})$. It is easy to see that $\mathscr{E}^*$ is a $\sigma$-algebra and every finite measure on $\mathscr{E}$ has a unique extension to $\mathscr{E}^*$. $\mathscr{E}^*$ is called the $\sigma$-algebra of universally measurable sets over $(E, \mathscr{E})$. If $\mu$ is a measure on $(E, \mathscr{E})$ we say that $\mu$ does not charge a set $A$ provided $A \in \mathscr{E}^\mu$ and $\mu(A) = 0$.
Let $(E, \mathscr{E}, \mu)$ be a measure space and let $(F, \mathscr{F})$ be a measurable space. If $f \in \mathscr{E}/\mathscr{F}$, then the formula $\nu(A) = \mu[f^{-1}(A)]$ for $A \in \mathscr{F}$ defines a measure $\nu$ on $(F, \mathscr{F})$. We will write $\nu = \mu f^{-1}$. If $h \in \mathscr{F}$, then $h \circ f \in \mathscr{E}$ and
\[
(1.1) \qquad \int_F h \, d\nu = \int_E (h \circ f) \, d\mu
\]
in the sense that if either integral exists then so does the other and they are equal.

If $(E_i, \mathscr{E}_i)_{i \in I}$ is a family of measurable spaces, then the product $\prod_{i \in I} (E_i, \mathscr{E}_i) = (E, \mathscr{E})$ is a measurable space where $E$ is the ordinary Cartesian product of the $E_i$'s and $\mathscr{E}$ is the $\sigma$-algebra on $E$ generated by sets of the form $\prod_i A_i$ with $A_i \in \mathscr{E}_i$ for all $i$ and only finitely many $A_i$ different from $E_i$. We will write $\mathscr{E} = \prod_i \mathscr{E}_i$ and call $\mathscr{E}$ the product of the $\sigma$-algebras $\mathscr{E}_i$. We assume the reader to be familiar with the theory of product measures and the various forms of the Fubini theorem.

A probability space $(\Omega, \mathscr{F}, P)$ is a measure space with $P(\Omega) = 1$. If $(E, \mathscr{E})$ is a measurable space then a map $X\colon \Omega \to E$ is called an $(E, \mathscr{E})$ random variable provided $X \in \mathscr{F}/\mathscr{E}$. The distribution of $X$ is the probability measure $\mu_X = P X^{-1}$ on $(E, \mathscr{E})$ and we often write $P(X \in B)$ for $\mu_X(B)$. A numerical random variable is by definition an element of $\mathscr{F}$. If $X$ is a numerical random variable, then the expectation of $X$, $E(X)$, is defined by $E(X) = \int X \, dP$ provided the integral in question exists. If $X$ is an $(E, \mathscr{E})$ random variable and $f \in b\mathscr{E}$, then (1.1) becomes
\[
(1.2) \qquad E(f \circ X) = \int_E f(x) \, \mu_X(dx),
\]
and this extends to any $f \in \mathscr{E}$ for which either side exists, in particular to $f \in \mathscr{E}^+$. Often we will establish formulas such as (1.2) for $f \in b\mathscr{E}$ and then use these formulas for nonnegative $f \in \mathscr{E}$ without special mention. Obviously this is permissible in view of the monotone convergence theorem. Frequently when $X$ is a function in $\mathscr{F}$ and $\Lambda$ is a set in $\mathscr{F}$ we write $E(X; \Lambda)$ in place of $\int_\Lambda X \, dP$.

We assume that the reader is familiar with the concepts of independence and conditional expectation as set forth, for example, in Loève [1] or Neveu [1]. In particular if $\mathscr{G}$ is a $\sigma$-algebra contained in $\mathscr{F}$ and $X \in \mathscr{F}$ is $P$-integrable we use $E(X \mid \mathscr{G})$ to denote any function $f$ such that: (1) $f \in \mathscr{G}$ and (2) $E(f; \Lambda) = E(X; \Lambda)$ for every $\Lambda \in \mathscr{G}$. $E(X \mid \mathscr{G})$ is called the conditional expectation of $X$ given $\mathscr{G}$. When $X = I_\Lambda$, $\Lambda \in \mathscr{F}$, we write $P(\Lambda \mid \mathscr{G})$ in place of $E(I_\Lambda \mid \mathscr{G})$. Of course conditions (1) and (2) determine $f$ up to a set in $\mathscr{G}$ of $P$ measure 0. On the other hand, when we write $f = E(X \mid \mathscr{G})$ we mean simply that $f$ is a function satisfying (1) and (2); consequently, relationships involving conditional expectations are not enhanced by the qualifying phrase "almost everywhere relative
to $P$." When $\mathscr{G}$ is of the form $\sigma(f_i, \mathscr{E}_i;\, i \in I)$ we sometimes write $E(X \mid f_i;\, i \in I)$ instead of $E(X \mid \mathscr{G})$.

Finally we summarize the basic results on supermartingales that will be needed in later chapters. Let $(\Omega, \mathscr{F}, P)$ be a probability space and let $T$ be a subset of the extended real numbers. For each $t \in T$, let $X_t$ be a numerical random variable over $(\Omega, \mathscr{F}, P)$ and let $\mathscr{F}_t$ be a sub-$\sigma$-algebra of $\mathscr{F}$ such that $\mathscr{F}_t \subset \mathscr{F}_s$ whenever $t \leq s$, $t$ and $s$ in $T$. Then $\{X_t, \mathscr{F}_t;\, t \in T\}$ is a supermartingale (over $(\Omega, \mathscr{F}, P)$) provided
\[
(1.3) \qquad
\begin{aligned}
&\text{(i)}\quad X_t \in \mathscr{F}_t, \quad t \in T,\\
&\text{(ii)}\quad E(|X_t|) < \infty, \quad t \in T,\\
&\text{(iii)}\quad E\{X_s \mid \mathscr{F}_t\} \leq X_t; \quad s, t \in T,\ s > t.
\end{aligned}
\]
If the $\sigma$-algebras $\mathscr{F}_t$ are omitted from the definition, then it is understood that $\mathscr{F}_t = \sigma(X_s;\, s \in T,\ s \leq t)$. We say that $\{X_t, \mathscr{F}_t\}$ is a submartingale provided $\{-X_t, \mathscr{F}_t\}$ is a supermartingale, and that $\{X_t, \mathscr{F}_t\}$ is a martingale provided that it is both a supermartingale and a submartingale. If $I$ is an arbitrary index set, then a family $\{Y_i;\, i \in I\}$ of numerical random variables over $(\Omega, \mathscr{F}, P)$ is said to be uniformly integrable provided that $\lim_n E\{|Y_i|;\, |Y_i| > n\} = 0$ uniformly in $i \in I$. We refer the reader to Meyer [1] or Neveu [1] for the properties of uniformly integrable families. The next two theorems contain the basic results which will be needed in later chapters. Proofs may be found in Doob [1], Loève [1], Meyer [1], or Neveu [1].

(1.4) THEOREM. Let $N = \{1, 2, \ldots\}$ and $\overline{N} = N \cup \{\infty\}$. Let $\{X_n, \mathscr{F}_n;\, n \in N\}$ be a supermartingale such that $\sup_n E(|X_n|) < \infty$. Then $X_\infty = \lim_n X_n$ exists and is finite almost surely. Moreover if $\{X_n;\, n \in N\}$ is uniformly integrable then $\{X_n, \mathscr{F}_n;\, n \in \overline{N}\}$ is a supermartingale where $\mathscr{F}_\infty = \sigma(\bigcup_n \mathscr{F}_n)$.

Note that if $X_n \geq 0$ for all $n$, then (1.3) implies that $E(|X_n|) = E(X_n) \leq E(X_1)$, and so a nonnegative supermartingale $\{X_n, \mathscr{F}_n;\, n \in N\}$ always satisfies the hypothesis of Theorem 1.4.

(1.5) THEOREM. (a) If $\{X_t, \mathscr{F}_t;\, t \in (0, \infty)\}$ is a supermartingale such that $t \to X_t(\omega)$ is right continuous on $(0, \infty)$ for almost all $\omega$, then $t \to X_t(\omega)$ has left-hand limits on $(0, \infty)$ for almost all $\omega$; that is, for almost all $\omega$, $\lim_{s \uparrow t} X_s(\omega)$ exists (and is finite) for all $t \in (0, \infty)$.
(b) Let $\{X_t, \mathscr{F}_t;\, t \in [0, \infty)\}$ be a supermartingale, and let $D$ be a countable dense subset of $[0, \infty)$. Then for almost all $\omega$, $\lim_{s \uparrow t,\, s \in D} X_s(\omega)$ and $\lim_{s \downarrow t,\, s \in D} X_s(\omega)$ exist (and are finite) for all $t \in [0, \infty)$.
The following fact will also be needed in Chapter I. We leave its proof to the reader as an exercise.

(1.6) PROPOSITION. Let $\{X_t, \mathscr{F}_t;\, t \in [0, \infty)\}$ be a nonnegative supermartingale and let $D$ be a countable dense subset of $[0, \infty)$. Then for each $t \in [0, \infty)$, $P\{X_t > 0,\ \inf_{s \in D,\, s \leq t} X_s = 0\} = 0$.
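As a concrete illustration of Theorem 1.4, consider the product $M_n = Y_1 \cdots Y_n$ of independent factors taking the values $1/2$ and $3/2$ with probability $1/2$ each; $\{M_n\}$ is a nonnegative martingale with $E(M_n) = 1$, so Theorem 1.4 applies. The short simulation below (an illustrative sketch; the process and parameters are arbitrary choices, not part of the text) exhibits the almost sure convergence, here to the limit $0$; since the limit has expectation $0$ while $E(M_n) = 1$ for all $n$, the family is not uniformly integrable.

```python
import numpy as np

# Illustration of Theorem 1.4 (illustrative example only):
# M_n = Y_1 * ... * Y_n with i.i.d. Y_k in {1/2, 3/2}, each with probability 1/2,
# so E[Y_k] = 1 and {M_n} is a nonnegative martingale with sup_n E|M_n| = 1.
rng = np.random.default_rng(0)
n_steps, n_paths = 2000, 5
Y = rng.choice([0.5, 1.5], size=(n_paths, n_steps))
M = np.cumprod(Y, axis=1)          # M[:, k] = M_{k+1} along each path
print(M[:, -1])                    # terminal values: all extremely close to 0
```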
2. The Monotone Class Theorem
In this section we will discuss the monotone class theorem in the form we find most useful for application in probability theory.

(2.1) DEFINITION. Let $\Omega$ be a set and $\mathscr{S}$ a collection of subsets of $\Omega$; then
(i) $\mathscr{S}$ is a $\pi$-system (on $\Omega$) if $\mathscr{S}$ is closed under finite intersections;
(ii) $\mathscr{S}$ is a $d$-system (on $\Omega$) if (a) $\Omega \in \mathscr{S}$, (b) if $A, B \in \mathscr{S}$, $A \subset B$, then $B - A \in \mathscr{S}$, (c) if $\{A_n\}$ is an increasing sequence of elements of $\mathscr{S}$, then $\bigcup A_n \in \mathscr{S}$.

Obviously any $\sigma$-algebra is both a $d$-system and a $\pi$-system. Conversely, one easily verifies that if $\mathscr{S}$ is both a $d$-system and a $\pi$-system, then $\mathscr{S}$ is a $\sigma$-algebra. If $\mathscr{M}$ is any collection of subsets of $\Omega$, then we define $d(\mathscr{M})$ to be the smallest $d$-system on $\Omega$ containing $\mathscr{M}$. The existence of $d(\mathscr{M})$ is clear since the intersection of an arbitrary number of $d$-systems is again a $d$-system. We will say that $d(\mathscr{M})$ is the $d$-system generated by $\mathscr{M}$. We come now to the main result on $d$-systems.

(2.2) THEOREM. Let $\mathscr{S}$ be a $\pi$-system on $\Omega$; then $d(\mathscr{S}) = \sigma(\mathscr{S})$.

Proof. Since $d(\mathscr{S}) \subset \sigma(\mathscr{S})$ it suffices to show that $d(\mathscr{S})$ is a $\sigma$-algebra, and for this it suffices to show that $d(\mathscr{S})$ is a $\pi$-system. To this end define
\[
\mathscr{G}_1 = \{B \in d(\mathscr{S})\colon B \cap A \in d(\mathscr{S}) \text{ for all } A \in \mathscr{S}\}.
\]
The reader will easily verify that $\mathscr{G}_1$ is a $d$-system and that $\mathscr{G}_1 \supset \mathscr{S}$ since $\mathscr{S}$ is a $\pi$-system. Hence $\mathscr{G}_1 \supset d(\mathscr{S})$. But by definition $\mathscr{G}_1 \subset d(\mathscr{S})$ and so $\mathscr{G}_1 = d(\mathscr{S})$. Next define
\[
\mathscr{G}_2 = \{B \in d(\mathscr{S})\colon B \cap A \in d(\mathscr{S}) \text{ for all } A \in d(\mathscr{S})\}.
\]
Again one shows without difficulty that $\mathscr{G}_2$ is a $d$-system. If $A \in \mathscr{S}$, then $B \cap A \in d(\mathscr{S})$ for all $B \in \mathscr{G}_1 = d(\mathscr{S})$, and consequently $\mathscr{S} \subset \mathscr{G}_2$. Hence $d(\mathscr{S}) = \mathscr{G}_2$, and this is just the statement that $d(\mathscr{S})$ is closed under finite intersections. Thus Theorem 2.2 is established.

We next give a version of Theorem 2.2 that deals with functions rather than sets.
(2.3) THEOREM. Let $\Omega$ be a set and $\mathscr{S}$ a $\pi$-system on $\Omega$. Let $\mathscr{H}$ be a vector space of real-valued functions on $\Omega$ satisfying:
(i) $1 \in \mathscr{H}$ and $I_A \in \mathscr{H}$ for all $A \in \mathscr{S}$;
(ii) if $\{f_n\}$ is an increasing sequence of nonnegative functions in $\mathscr{H}$ such that $f = \sup_n f_n$ is finite (bounded), then $f \in \mathscr{H}$.
Under these assumptions $\mathscr{H}$ contains all real-valued (bounded) functions on $\Omega$ that are $\sigma(\mathscr{S})$ measurable.

Proof. Let $\mathscr{D} = \{A\colon I_A \in \mathscr{H}\}$. According to Assumption (i), $\Omega \in \mathscr{D}$ and $\mathscr{S} \subset \mathscr{D}$. If $A_1 \subset A_2$ are in $\mathscr{D}$, then $I_{A_2 - A_1} = I_{A_2} - I_{A_1} \in \mathscr{H}$ since $\mathscr{H}$ is a vector space, and so $A_2 - A_1 \in \mathscr{D}$. Finally if $\{A_n\}$ is an increasing sequence of sets in $\mathscr{D}$, then $I_{\cup A_n} = \sup_n I_{A_n}$, which is in $\mathscr{H}$ by (ii). Thus $\mathscr{D}$ is a $d$-system on $\Omega$ containing $\mathscr{S}$ and hence $\mathscr{D} \supset \sigma(\mathscr{S})$ by Theorem 2.2. If $f \in \sigma(\mathscr{S})$ is real-valued then $f = f^+ - f^-$ with $f^+$ and $f^-$ being nonnegative, real-valued, and $\sigma(\mathscr{S})$ measurable. On the other hand, if $f \in \sigma(\mathscr{S})$ is nonnegative then $f$ is an increasing limit of simple functions $f_n = \sum_i a_i^n I_{A_i^n}$ with each $A_i^n \in \sigma(\mathscr{S})$. Hence each $f_n \in \mathscr{H}$, and using (ii) the result is now immediate.
Theorems 2.2 and 2.3 will be used in the following form. Let $\Omega$ be a set and $(E_i, \mathscr{E}_i)_{i \in I}$ be a family of measurable spaces indexed by an arbitrary set $I$. For each $i \in I$, let $\mathscr{S}_i$ be a $\pi$-system generating $\mathscr{E}_i$ and let $f_i$ be a map from $\Omega$ to $E_i$. Using this setup we state the following two propositions.

(2.4) PROPOSITION. Let $\mathscr{S}$ consist of all sets of the form $\bigcap_{i \in J} f_i^{-1}(A_i)$ where $A_i \in \mathscr{S}_i$ for $i \in J$ and $J$ ranges over all finite subsets of $I$. Then $\mathscr{S}$ is a $\pi$-system on $\Omega$ and $\sigma(\mathscr{S}) = \sigma(f_i;\, i \in I)$.

(2.5) PROPOSITION. Let $\mathscr{H}$ be a vector space of real-valued functions on $\Omega$ such that
(i) $1 \in \mathscr{H}$;
(ii) if $\{h_n\}$ is an increasing sequence of elements of $\mathscr{H}^+$ such that $h = \sup_n h_n$ is finite (bounded), then $h \in \mathscr{H}$;
(iii) $\mathscr{H}$ contains all products of the form $\prod_{i \in J} I_{A_i} \circ f_i$ where $J$ is a finite subset of $I$ and $A_i \in \mathscr{S}_i$ for $i \in J$.
Under these assumptions $\mathscr{H}$ contains all real-valued (bounded) functions in $\sigma(f_i;\, i \in I)$.

Proposition 2.4 is immediate and (2.5) follows from (2.4) and (2.3). In the sequel any of the results (2.2)-(2.5) will be referred to as the Monotone Class Theorem (MCT). We now give some applications of these results. We begin with the following uniqueness theorem for finite measures. (See Exercise 3.2 for an extension to $\sigma$-finite measures.)

(2.6) PROPOSITION. Let $\mu$ and $\nu$ be finite measures on a measurable space $(E, \mathscr{E})$. Let $\mathscr{S} \subset \mathscr{E}$ be a $\pi$-system containing $E$ and generating $\mathscr{E}$. If $\mu$ and $\nu$ agree on $\mathscr{S}$, then they agree on $\mathscr{E}$.
This is an immediate consequence of (2.2) since $\{A \in \mathscr{E}\colon \mu(A) = \nu(A)\}$ is clearly a $d$-system containing $\mathscr{S}$. We close this section with two useful facts.

(2.7) PROPOSITION. Let $\Omega$ be a set and $(E, \mathscr{E})$ a measurable space. Let $f\colon \Omega \to E$; then $\varphi\colon \Omega \to \overline{\mathbf{R}}$ is $\sigma(f)$ measurable if and only if there exists $h \in \mathscr{E}$ such that $\varphi = h \circ f$.

Proof. If $\varphi = h \circ f$ with $h \in \mathscr{E}$ then obviously $\varphi \in \sigma(f)$. To prove the converse let $\mathscr{H}$ be all real functions on $\Omega$ of the form $h \circ f$ with $h \in r\mathscr{E}$. Clearly $\mathscr{H}$ is a vector space containing the constants. Suppose $\{h_n \circ f\}$ is an increasing sequence in $\mathscr{H}^+$ such that $\psi = \sup_n (h_n \circ f)$ is finite. Let $A = \{x \in E\colon \sup_n h_n(x) < \infty\}$; then $A \in \mathscr{E}$ and $f(\Omega) \subset A$. Define $h = \sup_n h_n$ on $A$ and $h = 0$ on $A^c$. Then $h \in r\mathscr{E}$ and $\psi = h \circ f$. If $C \in \sigma(f)$ then $C = f^{-1}(A)$ for some $A \in \mathscr{E}$ and so $I_C = I_A \circ f \in \mathscr{H}$. Thus $\mathscr{H}$ contains all the real-valued functions in $\sigma(f)$ according to (2.5). If $\varphi \in \sigma(f)$ is numerical valued then $\varphi' = \arctan \varphi \in \sigma(f)$ and is real valued. So $\varphi' = h' \circ f$ for some $h' \in \mathscr{E}$. Clearly we may assume that $h'(E) \subset [-\pi/2, \pi/2]$ because $\varphi'$ takes values only in this interval. Then $\varphi = h \circ f$ with $h = \tan h'$ and of course $h \in \mathscr{E}$.

We should point out that if $\varphi$ is real valued (bounded) we may assume that $h$ is real valued (bounded).

(2.8) PROPOSITION. Let $\Omega$ be a set, $\{\mathscr{G}_i;\, i \in I\}$ a family of $\sigma$-algebras on $\Omega$, and $\mathscr{G} = \sigma(\mathscr{G}_i;\, i \in I)$. Then for every $A \in \mathscr{G}$ there is a countable subset $J \subset I$ such that $A \in \sigma(\mathscr{G}_i;\, i \in J)$.

Proof. The class of sets $A$ having this property is clearly a $\sigma$-algebra and it contains each $\mathscr{G}_i$.
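To illustrate the uniqueness proposition (2.6) with a standard special case: take $E = \mathbf{R}$ and
\[
\mathscr{S} = \{(-\infty, a]\colon a \in \mathbf{R}\} \cup \{\mathbf{R}\}.
\]
Since $(-\infty, a] \cap (-\infty, b] = (-\infty, a \wedge b]$, $\mathscr{S}$ is a $\pi$-system containing $E$ and generating $\mathscr{B}(\mathbf{R})$, and (2.6) yields the familiar fact that a finite measure on the line is determined by its total mass together with its distribution function $a \to \mu((-\infty, a])$.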
The following corollary is an easy consequence of (2.7) and (2.8). We leave the details to the reader as an exercise.

(2.9) COROLLARY. Let $\Omega$ be a set and let $(E_i, \mathscr{E}_i)_{i \in I}$ be a family of measurable spaces. For each $i$ let $f_i\colon \Omega \to E_i$ and set $\mathscr{G} = \sigma(f_i;\, i \in I)$. If $\varphi\colon \Omega \to \overline{\mathbf{R}}$ is in $\mathscr{G}$, then $\varphi$ depends on at most countably many coordinates in the sense that there exist a countable subset $J$ of $I$ and a function $h\colon \prod_{i \in J} E_i \to \overline{\mathbf{R}}$ which is $\prod_{i \in J} \mathscr{E}_i$ measurable and such that $\varphi = h \circ f_J$, where $f_J$ is the map $\omega \to (f_i(\omega))_{i \in J}$ from $\Omega$ to $\prod_{i \in J} E_i$.
3. Topological Spaces
A topological space $E$ will always mean a Hausdorff space unless explicitly stated otherwise. If $E$ is a topological space the Borel sets of $E$ form the smallest $\sigma$-algebra containing the open sets of $E$. We write $\mathscr{B}(E)$ (or just $\mathscr{B}$) for the Borel sets of $E$. If $E$ is locally compact, then $C(E)$, $C_0(E)$, $C_c(E)$ denote, respectively, the bounded real-valued continuous functions on $E$, the real-valued continuous functions vanishing at $\infty$, and the real-valued continuous functions with compact support. Clearly $C_c \subset C_0 \subset C$, and $C_0$ and $C$ are Banach spaces under the supremum norm $\|\cdot\|$. Moreover $C_c$ is dense in $C_0$. The Baire sets of $E$ form the $\sigma$-algebra $\sigma(C_c) = \mathscr{B}_0(E)$. Obviously $\mathscr{B}_0 \subset \mathscr{B}$. In most of our work we will be concerned with locally compact spaces with a countable base (LCCB). We summarize some of the relevant properties of such a space.

(3.1) Let $E$ be an LCCB; then
(i) $\mathscr{B} = \mathscr{B}_0$;
(ii) $E$ is metrizable, and one can choose a metric $d$ compatible with the topology such that $(E, d)$ is a separable complete metric space in which every closed and $d$-bounded set is compact;
(iii) if $\mathscr{U}$ is a countable base for the topology of $E$, then $\sigma(\mathscr{U}) = \mathscr{B}$;
(iv) $C_0$ is separable (as a Banach space).

Let $E$ be an LCCB. A measure $\mu$ on $(E, \mathscr{B}(E))$ is called a Radon measure on $E$ if $\mu(K) < \infty$ for all compact $K$. In particular a Radon measure on an LCCB is $\sigma$-finite. If $L$ is a nonnegative linear functional on $C_c$, then there exists a unique Radon measure $\mu$ on $E$ such that $L(f) = \mu(f)$ for all $f \in C_c$. Every Radon measure is regular; that is, if $B \in \mathscr{B}$ then
\[
\mu(B) = \sup\{\mu(K)\colon K \subset B,\ K \text{ compact}\} = \inf\{\mu(G)\colon G \supset B,\ G \text{ open}\}.
\]
EXERCISES
Let M+ be the set of all Radon measures p on E such that p ( E ) c co and let M = M + - M, be the vector space of all real-valued set functions I on E which have the form I = p - v with p, v E M+ . For 1 E M we define 11111 = sup I1(Ai)l where the supremum is taken over all finite partitions { A l , . . . , A,,} of E into Bore1 sets. Under this norm M is a Banach space and, in fact, M = C z . We will call the topology induced on M by C, (the weak*topology on M) the weak (or vague) topology on M.
1
Exercises
(3.2) Let p and v be two u-finite measures on the measurable space (E, 8). Let Y c d be a n-system generating d and suppose Y contains a sequence {B,,} with UB,, = E. If p and v agree on 9’ and p(B,,) = v(B,,) < co for all n, then p and v agree on 8. (3.3) Let ( E i , &‘J, i = 1, 2, be measurable spaces. If f e
then
f E s:/s:. (3.4) Let (E, 8,p) be a measure space. Show that f E 8’ if and only if there existsf, and fie d such thatf, If I f2 and p({fi f2}) = 0.
-=
(3.5) Prove that W ( E ) = Wo(E)when E is LCCB. (3.6) Let R be a set and E an LCCB. Let {Xi; i E I } be a family of functions from R to E. Let 2 be a vector space of real-valued functions on R satisfying: (i) 1 E &‘; (ii) If {F,,}is an increasing sequence in &’+ such that F = sup F,, is bounded, then F E 2 ; (iii) % . ‘ contains all functions of the form X i where J is a finite subset of I and f i E C, for i E J. Show that &’ contains all bounded functions in u(Xi;i E I).
ni,f,
0
(3.7) Let (R, 9)be a measurable space and E be a metric space. Let {A} be a sequence in .F/B(E).Prove the following statements: (i) iff,(w) +f ( w ) for all o,thenfe .F/W(E). (ii) If E is complete and x, a point in E, then f(o)= limfn(w)
when the limit exists
n
= xo
when the limit does not exist
is in F/&?(E). The completeness hypothesis cannot be entirely eliminated.
10
0. PRELIMINARIES
What happens if W is replaced by W*(the universally measurable sets over (E, W)) throughout ?
(3.8) Let (E, 8 ) be a measurable space and let T be a positive linear map from b b to bb such that whenever {f,}is a decreasing sequence in bb with inf f.= 0 then inf Tf, = 0. Show that there exists a function P(x, A) defined for x E E and A E 6 such that (i) A -+ P(x, A) is a finite measure on b for each x E E; (ii) x -+ P(x, A ) is in bb for each A E 8 ; (iii) T f ( x ) = J P(x, d y ) f ( y )for eachfe b b and x E E. [Hint: let P(x, A) = TZ,(x).]
I MARKOV PROCESSES
1. General Definitions
In this section we will give the general definition of a Markov process following more or less the definition in Doob [I]. However this definition is too general for the purposes of this book, and so very shortly we will restrict ourselves to a more tractable class of processes. Let (Q, 9, P) be a probability space, (E, 8) a measurable space, and T an arbitrary index set. Then a stochastic process with values in ( E , 8 ) andparameter set T (over (Q, 9,P))is a collection X = { X , : t E T} of maps X , from Q to E each of which belongs to 9/8. (See Section 1 of Chapter 0 for notation.) For each w E Q the map t -,X,(w) from T to E is called the path or trajectory corresponding to w. It will sometimes be convenient to write X ( t , w ) for X , ( o ) and also X ( t ) for X , . The measure P clearly plays no role in this definition and so one can define in exactly the same way a stochastic This will sometimes be useful. process over a measurable space (Q, 9). From now on we assume that T is a subset of R (the extended real numt E T} is an increasing family of sub-a-algebras of 9, that is, bers). If {9,; 9,c PSwhenever t and s are in T and t c s, we say that the process X is adapted to {9, provided } X , E F,/& for each t in T. In particular Xis always adapted to the family of sub-a-algebras 9, = a(X,: s I t). Of course in expressions such as this the letters t and s refer to elements of T. (1.1) DEFINITION. Let X = { X , : t E T} be a stochastic process with values in (E, a). One says that X is a Markov process with respect to an increasing t E T} of sub-a-algebras of 9provided (1) X is adapted to family {9*; {9, and } (2) for each t E T the a-algebras 9,and a(Xs:s 2 t) are conditionally independent given X , , that is, 11
12 (1.2)
1. MARKOV
PROCESSES
P ( A n B I XI) = P ( A I Xr)P(B I X t )
whenever A is in F, and B is in o(Xs; s 2 t ) . One says that X is a Markov process (without specifying the a-algebras g1) if it is a Markov process with respect to the family of a-algebras Y, = a(Xs;s It ) . Notice that if X is a Markov process with respect to some family {9,} of a-algebras, then it is also a Markov process with respect to the family {Yr = a(X,; s It ) } . If X is a Markov process with respect to Y, = a(Xs;s It), then the a-algebras a(X,; s It) and o(Xs;s 2 t ) enter symmetrically in the definition. Consequently the Markov property is preserved if one reverses the order in T. The intuitive meaning of this condition should be clear: namely, given the present, X,,the past, o(X,; s It ) , and the future, a(X,; s 2 t) are independent. In discussing conditional expectations the following situation will often arise. Let (a,9,P) be a probability space, and Yl and Yz a-algebras contained in f with Y1 c Y2. In this situation i f f € F, then the equality
should be interpreted as meaning that there exists a function g E Y1 such that E ( f ; A) = E(g; A) for all A E Yz . We now give some equivalent formulations of Definition (1.1). (1.3) THEOREM: (a) Let X = { X , ; t E T} be a stochastic process adapted to the family {Ft}. Then the following statements are equivalent: (i) X is a Markov process with respect t o { P f;} (ii) for each t E T and Y in bo(X,: s 2 t) one has (1.4)
E(Y I 5,) = E(Y1 xr);
(iii) if t and s are in T and t Is then (1.5)
E(f0
xs 1
9 1 )
= E(f0
xs
IX , )
for allfe bb. (b) X is a Markov process (with respect to a(Xs; s It ) ) if for each finite collection tl I. . . 5 t, It from T andfE bb one has (1.6)
E(f0 Xr I XI, . . . > XrJ = E(f0 Xc I Xtn)3
Proof. We will first establish the equivalence of (i) and (ii). Suppose (i) holds. It is immediate from MCT that it suffices to prove (1.4) when Y = I, with B E a(X,: s 2 t ) . If A E F 1 then ,
13
1. GENERAL DEFINITIONS
P ( A n B) = E { P ( A n BI X,)} = E { P ( A I Xr) P(B I Xr)> = ECE{I, P(B I Xr) I Xrll
P(B I Xr)l*
=
Hence (i) implies (ii). Conversely suppose (ii) holds. If A €9, and B E o(X,;s 2 t ) , then P ( A n B I X,) = E { P ( A n B I 9,) I X,}
P(B I 9,) I Xr> = E { I , P(B I Xr) I XI> = P ( A I XI) P(B I XI), =E{I,
and so (ii) implies (i). We next establish the equivalence of (ii) and (iii). Clearly (iii) is a special case of (ii) and so we need only show that (iii) implies (ii). Therefore assume (iii) holds. First observe that if H denotes those elements Y Ebo(X,; s 2 t ) for which (1.4) holds, then H is a vector space containing the constants and closed under bounded monotone limits. Thus by MCT in order to show that H = bo(Xs; s 2 t ) it suffices to show that H contains all Y of the form Y = nl=l ,fi(Xsi)where t Is1 < . .. < s,, and f i E bB for 1 I i In. We will establish this last statement by induction on n. When n = 1 this reduces to (iii). If n > 1, we may write for such a Y E(YI 9,) = E(E(Y I 9 s n - l ) 19,)
From(iii)and(2.7)ofChapterOwehaveE(f,(Xsn)I 9sn-l) = E(fn(Xs,)I XSn-,) = g(XSn_ ,) where g E B may be assumed bounded since f, is. Consequently applying the induction hypothesis to the functions evaluated at X,, , .. . , Xsn-, we obtain E(Y I 9,) = ~ ( i“= f1 i l ~ ( xgs(~x s . -
A, .. ., f n - 2 ,
f,-lg
I xr)
= E( Y I X,).
Thus Part (a) of Theorem 1.3 is established. If X is a Markov process then clearly (1.6) follows from ( l S ) , and so in order to establish (b) it will suffice to show that (1.6) implies E(f X , I 3,) = 0
14
I. MARKOV PROCESSES
I s and f E b6 where 9,= a(X,: s I t). This amounts to showing that for all A E 9,one has
E(f0 X,l A',) for t (1.7)
Let 9 be the collection of all sets A E 9,for which (1.7) holds. Plainly 9 is a d-system. On the other hand it follows from (1.6) that if A = n;=l{Xf, E A j } where 0 I tl < . .. < t,, = t and A j E 6 for 1 5 j 5 n, then A E 9. But A's of this form are a n-system generating 9,and hence 9 =9,completing the proof of Theorem 1.3. 2. Transition Functions In this section we will assume that the parameter set T of any stochastic process under discussion is R, = [0, co) unless explicitly stated otherwise. This will be the case of most interest to us and the reader should have no difficulty in adapting the definitions to other cases of interest: for example, T = Z, = (0, 1,2, ...}. (2.1) DEFINITION.Let (E, 8 ) be a measurable space; then a function Pr,,(x, A) defined for 0 I t < s < co,x in E, and A in &, is called a Markou transition function on ( E , &) provided (i) A -,Pf,,(x,A) is a probability measure on 8 for each t , s, and x; (ii) x -,Pf,,(x,A) is in & for each t, s, and A ; (iii) if 0 I t s < u, then
-=
(2.2)
j
P,,"(X, A ) = Pf,,(X, d Y ) P,,"(Y,A )
for all x and A. The relationship (2.2) is called the Chapman-Kolmogorov equation.
(2.3) DEFINITION.A Markov transition function Pf,,(x,A) on (E, 9)is said to be temporally homogeneous provided there exists a function P,(x, A) defined for t > 0, x E E, and A E I such that Pf,,(x,A) = P s - f ( x ,A) for all t, s, x , and A. In this case P,(x, A) is called a temporally homogeneous Markov transition function over ( E , 8)and the Chapman-Kolmogorov equation becomes
2. TRANSITION FUNCTIONS
15
for all t, s, x, and A. In actual fact the transition functions that we will deal with in this book are the temporally homogeneous ones and we will very shortly restrict ourselves to consideration of these only. DEFINITION.Let X be a stochastic process with values in (E, 6) that is adapted to {F,} and let P,,,(x, A) be a Markov transition function on (E, 8).One says that X is a Markov process with respect to {6,} having P,,,(x, A) as transition function provided (2.5)
(2.6)
E(S
O
x,I 9,)= P,.,(X,
9Sh
for all 0 I t < s and f i n b 6 . ( P , , s ( x , f )= P,,,(x, d y ) f ( y ) . ) Taking conditional expectations with respect to o(X,) in (2.6) one obtains (2.7)
W Xs I Xt) = Pr,s(xt S) 0
9
and upon combining (2.6) and (2.7) with Theorem 1.3 we see that the above definition is consistent with Definition 1.1. We remark in passing that there exist processes satisfying (2.7) but not (2.6). Moreover there exist Markov processes (in the sense of Definition 1.1) which do not possess transition functions. The fact that a Markov process has a transition function means that it is possible to define " nice " conditional probability distributions for the conditional probabilities P ( X , E A I Fr), 0 5 t < s, A E 8.As usual if we say that X is a Markov process with transition function Pf,s(x,A) without it is understood that 6, = o(X,; s 5 t ) . specifying the o-algebras F,, The intuitive meaning of the transition function should be clear: Pf,s(x,A) is " the conditional probability that X , E A given that X , = x when 0 5 t < s." Although this statement is very attractive, one must beware of such statements since conditional probabilities are determined only almost surely. Perhaps we should remark that without further restrictions a Markov process may have more than one transition function. But of course (2.6) implies that if PI and P 2 are transition functions for X then for each fixed t , s, and A, P&(x, A) = P,!,(x, A) for almost all x relative to the distribution in E of X , . A Markov process X with respect to {Pr} is called temporally homogeneous provided it possesses a temporally homogeneous transition function. In particular we will apply the adjective temporally homogeneous only to those Markov processes which have transition functions. Let X be a Markov process with values in (E, 6) and transition function P r , s (A). ~ , Let p be the distribution of X , ; that is, p is the probability measure on d defined by p ( A ) = P ( X , E A). The measure p is called the initial distribution of X . Using the Markov property repeatedly one easily obtains the following formula for the finite-dimensional distributions of X : if 0 It , < t2 c . . . < t, and f E bS" where 8"= 6 x .. . x 6 (n factors), then
16
I. MARKOV PROCESSES
where one integrates first on x,, , then on x,,-~,. .., and finally on xo, and if t , = 0, P0,,(x, * ) is taken to be unit mass at x. Thus all of the finite-dimensional distributions of X are expressible in terms of its initial measure and transition function. It is now natural to ask the following question: Suppose we are given a probability measure p and a transition function P J x , A) on a measurable space (E, 8).Then does there exist a Markov process X with values in (E, 8) which has p for its initial measure and P J x , A) as transition function? If T = Z, , then there is always such a process, but if T = R, the most general conditions on (E, 8)for which the answer is affirmative are unknown. However, if E is a a-compact Hausdorff space and 8 the a-algebra of topological Bore1 sets, then the celebrated theorem of Kolmogorov guarantees the existence of a Markov process with the desired properties. The proof will be outlined in the following paragraphs. Let (E, 8) be a measurable space and let T be an arbitrary index set. Define R = E' and 9 = I' so that (R,9)is the usual product measurable space, For each t E T, let X I : R + E be the coordinate map X,(w) = a(?).In particular 9 = a(X,:t E T). If J is a finite subset of T we will write (E', aJ) for the appropriate product space, and we will denote the natural projection of R on E' by n,; that is, if J = (tl, . . ., t,) nJw = (w(tl),
.. ., w(t,,))E E'.
If J = {t} then nJ = X I . Finally if I and J are finite subsets of T with I c J we will denote the natural projection of E' on E' by ni . Note that n; E 8J/b' and that if K c Jc Ithen ni = nin;.
(2.9) DEFINITION. Let q ( T )be the class of all finite subsets of T and suppose that for each J E q(T) we are given a probability measure P J on (E', 8'). Then the system { P J : J E q ( T ) } is called a projective system over (E, 8) provided (2.10)
PJ(ni)-' = PI
for I c J
E
rp(T).
Let 8: be the algebra of alljinite-dimensional cylinder sets in R, that is, all subsets A of R for which J E q(T)and B E I' exist such that A = n;' B. It is immediate that ~ ( 8 :=) IT.Moreover using condition (2.10) it is easy to construct ajinitely additive probability measure P on 8: such that Pn;' = P I for all J E q ( T ) . If P is countably additive on & ,: then it can be extended uniquely to a probability measure on 8' = 9,which we again denote by P,
2.
TRANSITION FUNCTIONS
17
so that (R, 9, P) is a probability space. Finally X = { X I ;t E T} is a stochastic P) with values in (E, 8)and the finite-dimensional distriprocess over (R, 9, butions of X are the P,'s we started with; that is, if J = ( t l , ..., t,,) and B E 8, then PC(X,, * . Xrn)E B ] = PJ(B). 9
-
9
The following theorem of Kolmogorov gives sufficient conditions under which P is countably additive on 8:. We omit the proof, which may be found in any standard text on probability theory. THEOREM(Kolmogorov). Let E be a a-compact Hausdorff space and 8 be the topological Borel sets of E. If {P,:J E rp(T)} is a projective system over (E, &'), then the finitely additive measure P on 8: constructed above is actually countably additive and hence can be extended to bT. (2.11)
Let us now apply this result to the Markov process situation. Let T = R, and suppose that a probability measure p and a transition function PI,s(x,A) are given on a measurable space (E, 8).If J = {tl f 2 < . . . < t,,} is a finite subset of T, then it is not difficult to see that (2.8) defines a probability measure P, on ( E J ,B J ) , and that {P,:J E rp(T)} is a projective system over (E, 8). Thus if (E, 8)satisfies the conditions of Kolmogorov's theorem we obtain a probability measure P on (R, 9)where R = ET and 9 = BT. The reader should now verify that the coordinate mappings { X I }form a Markov process over (Q, 9, P) with values in (E, 8) which has p as initial measure and Pr,s(x,A) as transition function. We will illustrate this result with an important class of examples. Let E = R" be Euclidean n-space and 8 be the Borel sets of E. Let {pI;t > 0 ) be a semigroup (under convolution) of probability measures on (E, 8) such that pI + E weakly as t + 0 where go is unit mass at the origin. I f f € b 8 and t > 0 define
-=
(2.12)
It is then immediate that P,(x, A) is a temporally homogeneous transition function on (E, 8).Consequently if p is any probability measure on (E, 8) it follows from the above discussion that there exists a (temporally homogeneous) Markov process X with values in (E, &') which has p as initial measure and PI(x,A) as transition function. Note that PI(x,A) is translution invariant; that is, P,(x y , A y ) = P,(x, A) for all y E E. The reader should check for himself that the translation invariance of the transition function implies that the process X has independent increments; that is, if to < r , < . . . < r,,, then the (E, 8) random variables XI,, X,, - XI,, . . ., Xrn- A',"-, are independent.
+
+
18
1. MARKOV PROCESSES
It is well known (for example, Levy [l] or Bochner [I]) that such a semigroup { p , } may be characterized as follows: Let cp,(x) = ei(x*y) p,(dy) be the Fourier transform of p, . Then cp,(x) = exp[ - t $(x)] where
s
with a E R",S a nonnegative definite symmetric operator on R",( S x , x ) the usual quadratic form associated with S, and v a measure on R" satisfying Ix12(1 + Ixl')-' v(dx) < co. Here ( x , y ) is the usual inner product in R" and 1x1 = ( x , x)ll' is the distance between x and 0. Conversely if $ is defined by (2.13) with a, S,and v as described, then there exists a semigroup {p,} of probability measures on R" such that
for all t > 0 and p, -,E,, weakly as z + 0. Some important special cases of such transition functions will be described in the exercises at the end of this section. Other important examples of Markov processes will be introduced later.
Exercises (2.14) Let X be a Markov process over (!2,9, P) with respect to {f,}. Suppose A is a a-algebra contained in f and that for every t, A and f ,are Prove that independent, ( P ( A n 8)= P ( A ) P ( B ) for all A E A and B E 9,). X is a Markov process relative to the family { c ( A u 9,)). (2.15) Let X be a Markov process with values in ( E , 8)and having a transition function Pt,s(x,A), which for each A is jointly measurable in (t, s, x). Prove that the process Y given by Y,(w)= (Xt(w),t) with values in (E x R, , 8 x S?(R+)) is a temporally homogeneous Markov process. (2.16) Let ( 9 , ; t > 0)be a convolution semigroup of probability measures on (0, a).Let P,(x, A) be a transition function on (E, 8) such that for all A the mapping (t, x ) + Pt(x, A ) is jointly measurable. Prove that Q,(x, A) = P,(x, A ) q,(du) defines a transition function on (E, 8). Prove that if P,(x, A) is of the form (2.12) then Q,(x, A) is also of this form. (2.17) Let E = R",g,(x) = (4nt)-"/' exp( - 1xI2/4t) for r > 0, x E R".Verify that P,(x, A) = j A g,(y - x ) dy is a transition function (the Brownian motion transition function in R") of the form (2.12) and that in the representation (2.13), a = 0, v = 0, and +(Sx, x ) = 1x1'. The 'd denotes the element of Lebesgue measure in R".The function g,(x)is called the Gauss kernel for R".
2.
TRANSITION FUNCTIONS
19
(2.18) Given positive numbers d and A let X be a (Poisson) random variable with the distribution P(X = nd) = exp(-A)[A"/n!], n = 0, 1, . .. . Prove that the Laplace transform E(exp( - u X ) ) = exp[ -A( 1 - exp( - du))]. Prove that if v is a measure on W(0, GO)such that 52 [x/(l + x)] v(dx) < 00, then 8(u) = jg { 1 - exp( - x u ) } v(dx), u 2 0, defines a function 8 such that exp( - 8(u)) is the Laplace transform of a probability measure q on ( 0 , ~ ) . [Hint: approximate the integral by a sum and use the result of the first part together with the convolution and continuity theorems for Laplace transforms.] (2.19) Prove that the measure q in (2.18) is q, in a convolution semigroup { q , } of probability measures on (0, GO),and that the measure v is the same one appearing in the representation (2.13). Prove that for E (0, 1) the function 8(u) = uBis a special case of (2.18). (If we denote the corresponding semigroup of measures by {qfl} then the transition function defined by (2.12) with 111 in place of p , is called the one-sided stable transition function of index B.) Prove that the measures qfl have continuous density functions. (2.20) Let E = R". Given a E (0, 2) define a transition function Pa by P,"(x, A) = Pu(x,A)tfZ(du), as in (2.16), where Pu(x, A) is the transition function in (2.17) and the semigroup qP/* is the one in (2.19). (Pa is called the symmetric stable transition function of index u in R".) By (2.16) Pa is of the form (2.12). Verify that in this example $(x) = 1x1' and that in the representation (2.13) of $(x) the first two terms are absent while v(dx) = c I x I - " - ~ dx with
5
(2.21) In the first part of (2.18) take d = 1. Prove that the distribution of X there is p , in a convolution semigroup on R. (The transition function (2.12) in this case is called the Poisson transition function with parameter A.) Find the quantities a, S, and v in the representation (2.13). (2.22) Let (Q, F, P) be a probability space. Let A be a positive number and let { U,,} be a sequence of independent identically distributed random variables on 51 with P( U,, > x) = exp( -Ax), x > 0. Define a stochastic process { X , ; t 2 O} by
X,(w)=O
if
t<
=n
if
V , ( w )+ ... + V,(w) It < U , ( w ) + ... + Un+l(o).
U,(w),
20
I. MARKOV PROCESSES
Prove directly that X is a Markov process and that the transition function arising in (2.21) is a transition function for this process.
(2.23) Let Pr,s(x,A ) be a transition function on a measurable space (E, 8). Let {XI;t E R,} be a stochastic process with values in (E, 8)and suppose that for all f E b8" and 0 Itl < . . . < t, Eq. (2.8) holds. Prove that { X , } is a Markov process with transition function Pr,s(x,A ) and initial distribution p. 3. General Definitions Continued
In this section we will introduce the type of Markov process that we will deal with in this book. Roughly speaking it will be a temporarily homogeneous Markov process defined on a " suitable " R. In this section the parameter set T will be [O,001 and as in Section 2 R + will denote the interval [O,oo). As will become clear, our point of view in this section is somewhat different from that in the previous sections. Consider the following objects : (i) A measurable space (E, 8 ) and point A not in E. We write E A = be the a-algebra in EA generated by 8. Note that E u {A} and let {A} E &A* (ii) A measurable space (Q, A!) and an increasing family {A,:t E T} of sub-a-algebras of .d.Also a distinguished point w,, of R. (iii) For each t E T a map XI : R + EA such that if X,(w) = A then X,(w) = A for all s 2 t, X,(w) = A for all w E 0,and X0(wA)= A. We will sometimes write X(r, w ) for X,(w) and X ( t ) for X,. (iv) For each t E T a map 8, : R + R such that 8, w = wAfor all w. (v) For each x in EA a probability measure P" on (Q, A'). (3.1) DEFINITION. The collection X = (Q, .H, A',, X , , 8,,P") is called a (temporally homogeneous) Markov process (with translation operators) and with state space (E, S) (augmented by A) provided the following axioms hold :
Axiom R (Regularity Conditions) (a) For each t E R, , X,E (b) The map x + P"(X, E B ) from E to [0, 11 is in d for each t E R, and BE&. (c) PA(Xo= A) = 1. Axiom H (Homogeneity) For all t, h E T, X , o Oh = X,+h.Note that this is consistent if either t or h is co.
3.
GENERAL DEFINITIONS CONTINUED
21
Axiom M (Markov Property) P"(X, +,
(3.2) for all x
E E,,
EB
I A,)= PX'"(X, E B),
B E I , ,and t , S E T.
Comments on the axioms. One may check immediately that if Axiom R holds then (b) remains valid when we replace E, I , and R, by E , , d', and T. Define 9:= a(X,: s I t) and 9 '= o(X,: s E T). These are o-algebras in R and it follows from Axiom R(a) that 9:c A, and hence that 9 'c A. Clearly {9: is} an increasing family. It will sometimes be convenient to write 9:for 9'. It follows from Axiom H that if B is in 8 , then X;'(B) = Xs-.h(B)E 9:+,, , and consequently
(3.3) eh F:+h19? for all t, h E T. In particular 8, E 9'/9'. Iff (x) = P"(X, E B), then f E I , and so P x c r (X, ) E B) =f 0 X , is in 9:. In addition the reader should check that (3.2) is consistent with Axiom R(c) if x = A and with (iii) and (iv) if either t or s is equal to 00. Equation (3.2) displays the conditional probability on the left as a measurable function of X I , and so Axiom M does indeed imply that for each x E 8 , the family {X,; t E T} is a Markov process (in the sense of Definition (1.1)) over (Q, A, P") with respect to { A , ;t E T} taking values in ( E , , 8,J.In particular Definition 3.1 requires not one Markov process but a family of them, one for each of the measures P". We will shortly show (Proposition 3.5) that each of them has the same (temporally homogeneous) transition function. Usually we will omit the phrases in parentheses in Definition 3.1 ;thus the collection X = (Q, A, A , ,X , , Or, P") will be called simply a Markov process with state space (E, 8).If the a-algebras A or A, are omitted, then it is understood that we are taking A to be 9 'or A, to be 9:. Here is an intuitive way of thinking about Definition 3.1. If we regard t -,X,(w)as the path (or trajectory) of a particle moving in the space E, then P" should be thought of as the probability law of the particle assuming it starts from x at time t = 0. The particle moves in E until it "dies" at which time it is transported to A where it remains forever. (The point A may be thought of as a "cemetery" or '' heaven " depending on one's point of view.) Roughly speaking Axiom M states that if we know the history of the particle up to time t , then probabilistically its future behavior is exactly the same as that of a particle starting at X,(o), Finally the existence of the translation operators, O f , says that the underlying space R is "sufficiently rich." With this interpretation in mind we define [(o) = inf{t : X,(w) = A}
22
I. MARKOV PROCESSES
provided the set in braces is not empty and c(o)= 00 if it is empty. Since (Q denotes the rationals) (3.4) [ is a numerical random variable. It is called the lifetime of the process X.
(3.5) PROPOSITION. Define Nt(x, A) = P"(X, E A) for ? E T, x E E d , and A E gA.Then N,(x, A), 0 < t < 00, is a (temporally homogeneous) transition function on (Ed, bA). Moreover for each x E E,, N is a transition function for the Markov process {A',; I E T} over (Q, A, P") with values in ( E A ,&A). Proof. The fact that P x is a probability measure on A implies that Nt(x, A) is a probability measure in A. Axiom R(b) and the comments following the axioms imply that N,(x, A) is in as a function of x. As to the ChapmanKolmogorov equation we have N , + Ax, 4 = P " ( K + s E 4 = EX{PX'(X,E A ) } = JWNAX, 9
4)
Finally Eq. (3.2) with the right side replaced by N , ( X , , B) is just the statement that N is the required transition function. It is immediate from Axiom R(c) that "(A, {A}) = 1 for all t and that N,(x, A) is completely determined by its restriction to (E, 8). We will denote the restriction of N,(x, A) to (E, b) by P,(x, A) and call P,(x, A) the (subMarkov) transition function of X. It satisfies all the conditions of a temporally homogeneous Markov transition function on ( E , 8) except that A -+ P,(x, A) need not be a probability measure on 8 ;one can assert only that P,(x, E) I 1. From now on the term transition function with no qualifying adjectives will mean such an object. I f f € b b we write
Clearly P,is a positive linear operator from b& to bb. The following result is analagous to Theorem 1.3 and may be proved in a similar manner. (3.6) THEOREM. (a) x + Ex( Y)is bA measurable for all Y E b 9 ' .
3.
23
GENERAL DEFINITIONS CONTINUED
(b) Under Axioms R and H, Axiom M is equivalent to each of the following : (MA
E " ( f 0 X I + ,1 AI}= EX"'f
0
X,
for all x, t , s, andfE bb,; (M2)
E"{Y
e, I A,}= E ~ ( ~ ) Y
for all x, t , and Y E bFo. (c) If A, = F:,then under Axioms R and H, Axiom M is equivalent to: (M3) Given 0 It , < . . . < t, andf,,
. . . ,f n
E
MAthen
In the remainder of this book a Markov process is always understood in the sense of Definition 3.1 unless explicitly stated otherwise.
Exercises
(3.7) Let Q = R u {A} where A is the usual point at
03.
Let A = A r=
g(Q), the usual a-algebra of Bore1 sets on R u {A}. Let P" denote unit mass at x and for t < 03 and o E R define X,(w) = o t , 0,o = o t. Complete
+
+
the definitions of X_t and θ_t as required for (3.1) and prove that the resulting process is a Markov process with state space (R, 𝓑(R)) (called uniform motion to the right). Show that the resulting transition function, when restricted to R, is of the form (2.12).
(3.8) Let X = (Ω, ℳ, ℳ_t, X_t, θ_t, P^x) be a Markov process with state space (E, ℰ) and let Ω̃ = Ω × [0, ∞) (with points ω̃ = (ω, r), ω ∈ Ω, r ≥ 0). Let ℳ̃ = ℳ × 𝓑[0, ∞) and let ℳ̃_t consist of all sets Λ ∈ ℳ̃ such that Λ ∩ {Ω × (t, ∞)} is of the form A × (t, ∞) for a suitable A ∈ ℳ_t. Let P̃^x = P^x × λe^{-λr} dr (λ a positive constant), and finally if ω̃ = (ω, r) define
X̃_t(ω̃) = X_t(ω),  t < r,
        = Δ,        t ≥ r.
Define translation operators θ̃_t and a point ω̃_Δ as required for Definition 3.1 and prove that the resulting X̃ is a Markov process with state space (E, ℰ).
Prove that its transition function is e^{-λt} P_t(x, A) if P_t(x, A) is the transition function for X defined following the proof of (3.5). (A more general construction along these lines will be given in Chapter III.)

4. Equivalent Processes
Again in this section all processes have T = [0, ∞] as parameter set.
(4.1) DEFINITION. Two Markov processes with the same state space (E, ℰ) are equivalent if they have the same transition function.
In this section we will single out from a given equivalence class of Markov processes a particularly "nice" representative defined over a function space where the meaning of the translation operators θ_t will be transparent. Let (E, ℰ) be a measurable space which we extend to (E_Δ, ℰ_Δ) as in Section 3. Consider the following objects:
(i) W: the space of all maps w : T → E_Δ such that w(∞) = Δ and if w(t) = Δ then w(s) = Δ for all s ≥ t. Let w_Δ be the constant map w_Δ(t) = Δ for all t ≥ 0.
(ii) Let Y_t, t ∈ T, be the coordinate maps Y_t(w) = w(t), and define in W the σ-algebras 𝒢^0 = σ(Y_t : t ∈ T), 𝒢_t^0 = σ(Y_s : s ≤ t).
(iii) Let φ_t : W → W be defined by φ_t w(s) = w(t + s). Note that φ_t w_Δ = w_Δ and that Y_s ∘ φ_t = Y_{t+s} for all t, s ∈ T.
(4.2) DEFINITION. A Markov process X = (Ω, ℳ, ℳ_t, X_t, θ_t, P^x) with state space (E, ℰ) is said to be of function space type provided Ω = W, ℳ ⊃ 𝒢^0, ℳ_t ⊃ 𝒢_t^0, X_t = Y_t, and θ_t = φ_t.
(4.3) THEOREM. Any Markov process X = (Ω, ℳ, ℳ_t, X_t, θ_t, P^x) with state space (E, ℰ) is equivalent to a Markov process Y of function space type with (E, ℰ) as state space.
Proof. Using the notation developed above (4.2) we define a map π : Ω → W as follows: (πω)(t) = X_t(ω). Hence Y_t ∘ π = X_t, and consequently
π^{-1} Y_t^{-1}(A) = X_t^{-1}(A)
if A ∈ ℰ_Δ. Therefore π^{-1} 𝒢_t^0 ⊂ ℱ_t^0 ⊂ ℳ_t; that is, π ∈ ℳ_t/𝒢_t^0 for all t ∈ T, where as usual ℱ_∞^0 = ℱ^0 and 𝒢_∞^0 = 𝒢^0. We now define measures P̃^x on (W, 𝒢^0) by P̃^x = P^x π^{-1} and we claim that Y = (W, 𝒢^0, 𝒢_t^0, Y_t, φ_t, P̃^x) is a
Markov process equivalent to X. Once this is established Theorem 4.3 will be proved since Y is clearly of function space type. Now
P̃^x(Y_t ∈ A) = P^x π^{-1} Y_t^{-1}(A) = P^x X_t^{-1}(A) = P^x(X_t ∈ A),
and so it only remains to show that Y is a Markov process. It is clear that Y satisfies Axioms R and H, and so only Axiom M needs to be checked. First note that
(π θ_t ω)(s) = X_s(θ_t ω) = X_{t+s}(ω) = (πω)(t + s) = (φ_t πω)(s);
that is, π θ_t = φ_t π. We next claim that if H : W → R is in b𝒢^0, then H ∘ π ∈ bℱ^0 and Ẽ^x(H) = E^x(H ∘ π). The first statement is obvious, while if H = I_Λ for Λ ∈ 𝒢^0, then
Ẽ^x(H) = P̃^x(Λ) = P^x π^{-1}Λ = E^x(I_{π^{-1}Λ}) = E^x(I_Λ ∘ π),
and so the result follows for general H ∈ b𝒢^0. To check Axiom M for the process Y we must show that
(4.4)  P̃^x[{Y_{t+s} ∈ A} ∩ Λ] = Ẽ^x{P̃^{Y_t}(Y_s ∈ A); Λ}
for all Λ ∈ 𝒢_t^0 and t, s, x, and A ∈ ℰ_Δ. Using the facts that π^{-1}Λ ∈ ℱ_t^0 and that X is a Markov process we see that the left side of (4.4) equals
P^x[π^{-1} Y_{t+s}^{-1}(A) ∩ π^{-1}Λ] = P^x[X_{t+s}^{-1}(A) ∩ π^{-1}Λ] = E^x{P^{X_t}(X_s ∈ A); π^{-1}Λ},
while according to the above remark the right side of (4.4) becomes
Ẽ^x{P̃^{Y_t}(Y_s ∈ A); Λ} = E^x{P̃^{X_t}(Y_s ∈ A); π^{-1}Λ} = E^x{P^{X_t}(X_s ∈ A); π^{-1}Λ}.
Thus (4.4) is established and so the proof of Theorem 4.3 is completed.
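(The construction in this proof can be mimicked numerically. The sketch below is an added illustration, not part of the original text: a sampled trajectory is mapped to an element of the path space W, the shift operators act by φ_t w(s) = w(t + s), and the identity Y_s ∘ φ_t = Y_{t+s} is checked on the grid. The grid spacing, path length, and random-walk dynamics are assumptions of the example.)

```python
import random

DT = 0.1      # grid spacing of the discretized time axis
N = 100       # number of grid points kept

def sample_path():
    """The map pi: record a trajectory of a simple random walk as a path w in W."""
    x, w = 0.0, []
    for _ in range(N):
        w.append(x)
        x += random.choice([-1.0, 1.0]) * DT
    return w

def Y(s, w):
    """Coordinate map Y_s(w) = w(s) on the discretized path."""
    return w[int(round(s / DT))]

def shift(t, w):
    """(phi_t w)(s) = w(t + s); the tail lost by shifting is padded with the last
    recorded value, standing in for the absorbed part of the path."""
    k = int(round(t / DT))
    return w[k:] + [w[-1]] * k

w = sample_path()
t, s = 1.0, 2.0
assert Y(s, shift(t, w)) == Y(t + s, w)   # Y_s(phi_t w) = Y_{t+s}(w)
print(Y(s, shift(t, w)), Y(t + s, w))
```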
5. The Measures P^μ
In this section we will extend the basic σ-algebras ℱ^0 and ℱ_t^0 by completing them in an appropriate manner. We assume throughout this section that X = (Ω, ℳ, ℳ_t, X_t, θ_t, P^x) is a given Markov process with state space (E, ℰ) and that T = [0, ∞].
We have already remarked that x → P^x(Λ) is ℰ_Δ measurable whenever Λ is in ℱ^0 (Theorem 3.6). Therefore, given a finite measure μ on ℰ_Δ, we can define for any Λ in ℱ^0
(5.1)  P^μ(Λ) = ∫ P^x(Λ) μ(dx).
It is immediate that P^μ is a finite measure on ℱ^0 (P^μ is a probability on ℱ^0 if and only if μ is a probability on ℰ_Δ). Moreover if ε_x denotes unit mass at x then P^{ε_x} = P^x on ℱ^0. In general the measures P^μ cannot be defined on ℳ unless we make some measurability assumption on the functions x → P^x(Λ) for Λ in ℳ.
We now pause from our main development in order to introduce some terminology that is essential in the sequel. Let (E, ℰ) be a measurable space. If μ is a measure on ℰ and 𝒢 is a sub-σ-algebra of ℰ, then 𝒢^μ, as in Chapter 0, denotes the completion of 𝒢 with respect to μ. Of course 𝒢^μ ⊂ ℰ^μ and the measure μ extends uniquely to ℰ^μ. This extension will again be denoted by μ. Suppose now that U is a family of finite measures on ℰ; then the completion, 𝒢^U, of 𝒢 with respect to U is defined by
(5.2)  𝒢^U = ⋂_{μ ∈ U} 𝒢^μ.
The reader should verify that (𝒢^U)^U = 𝒢^U, and that f ∈ 𝒢^U if and only if for each μ ∈ U there exist f_1 and f_2 in 𝒢 such that f_1 ≤ f ≤ f_2 and μ(f_1 ≠ f_2) = 0. If U is the family of all finite measures on ℰ, then ℰ^U = ℰ*, the σ-algebra of universally measurable sets over (E, ℰ). See Chapter 0.
(5.3) DEFINITION. Let (E, ℰ) and U be as above. If 𝒢 is any σ-algebra contained in ℰ^U we define the completion of 𝒢 in ℰ^U with respect to U, which we denote for the moment by 𝒢̃, as follows: A ∈ 𝒢̃ if and only if for each μ ∈ U there exists A_μ ∈ 𝒢 such that A − A_μ and A_μ − A are in ℰ^U and μ(A − A_μ) = μ(A_μ − A) = 0.
The following characterization of 𝒢̃ is useful in some proofs. The notation is that of (5.3).
(5.4) PROPOSITION. A is in 𝒢̃ if and only if for each μ ∈ U there exist D_μ in 𝒢, A_μ and B_μ in ℰ such that D_μ − A_μ ⊂ A ⊂ D_μ ∪ B_μ and μ(A_μ) = μ(B_μ) = 0.
Proof. Let A have the property of Proposition 5.4. Since D_μ − A_μ and D_μ ∪ B_μ are in ℰ^U and μ[(D_μ ∪ B_μ) − (D_μ − A_μ)] ≤ μ(A_μ ∪ B_μ) = 0, it follows that A is in ℰ^U. Hence A − D_μ and D_μ − A are in ℰ^U. But μ(A − D_μ) ≤ μ(B_μ) = 0 and μ(D_μ − A) ≤ μ(A_μ) = 0. Consequently A is in 𝒢̃. Conversely if A is in 𝒢̃, then given μ there exists D_μ in 𝒢 such that A − D_μ and D_μ − A are in ℰ^U and are μ null. Hence there exist A_μ and B_μ in ℰ such that D_μ − A ⊂ A_μ and A − D_μ ⊂ B_μ with μ(A_μ) = μ(B_μ) = 0. But
D_μ − (D_μ − A) ⊂ A ⊂ D_μ ∪ (A − D_μ)
and so D_μ − A_μ ⊂ A ⊂ D_μ ∪ B_μ. Thus A meets the requirements of Definition 5.3.
The reader should now have no difficulty in checking the following elementary properties:
(5.5)  (i) 𝒢̃ is a σ-algebra, 𝒢 ⊂ 𝒢̃ ⊂ ℰ^U;
       (ii) 𝒢^U ⊂ 𝒢̃;
       (iii) (𝒢̃)~ = 𝒢̃.
The following measurability lemma will be of frequent use in the sequel. (This generalizes Exercise 3.3 of Chapter 0.)
(5.6) LEMMA. Let (E_i, ℰ_i) be measurable spaces and U_i be families of finite measures on ℰ_i, i = 1, 2. Let 𝒢_i be σ-algebras contained in ℰ_i^{U_i}, i = 1, 2. Suppose f is in 𝒢_1/𝒢_2 and in ℰ_1/ℰ_2. If μf^{-1} ∈ U_2 for all μ ∈ U_1, then f ∈ 𝒢̃_1/𝒢̃_2.
Proof. If A ∈ 𝒢̃_2 we must show that f^{-1}(A) is in 𝒢̃_1. Given μ ∈ U_1 then ν = μf^{-1} ∈ U_2 and so there exist D_ν ∈ 𝒢_2, A_ν and B_ν in ℰ_2 such that D_ν − A_ν ⊂ A ⊂ D_ν ∪ B_ν and ν(A_ν) = ν(B_ν) = 0. Defining D_μ = f^{-1}(D_ν), A_μ = f^{-1}(A_ν), and B_μ = f^{-1}(B_ν) it is clear that D_μ − A_μ ⊂ f^{-1}(A) ⊂ D_μ ∪ B_μ and that μ(A_μ) = μ(B_μ) = 0. Consequently f^{-1}(A) ∈ 𝒢̃_1.
In the special case that 𝒢_1 = ℰ_1 and 𝒢_2 = ℰ_2, the lemma states that if f ∈ ℰ_1/ℰ_2 and U_1 f^{-1} ⊂ U_2 then f ∈ ℰ_1^{U_1}/ℰ_2^{U_2}. In particular ℰ_1/ℰ_2 ⊂ ℰ_1*/ℰ_2*. If we assume only 𝒢_2 = ℰ_2, then the above argument actually shows that f ∈ 𝒢̃_1/ℰ_2^{U_2}.
Let us return now to the consideration of a Markov process X = (Ω, ℳ, ℳ_t, X_t, θ_t, P^x) with state space (E, ℰ). We define ℳ̄ to be the completion of ℳ with respect to the family of measures {P^x, x ∈ E_Δ}, and ℱ to be the completion of ℱ^0 with respect to the family {P^μ; μ a finite measure on ℰ_Δ}. We next define ℳ̄_t to be the completion of ℳ_t in ℳ̄ with respect to {P^x; x ∈ E_Δ} and ℱ_t to be the completion of ℱ_t^0 in ℱ with respect to {P^μ; μ a finite measure on ℰ_Δ}. Since ℳ ⊃ ℱ^0 and ℳ_t ⊃ ℱ_t^0 it is immediate that ℳ̄ ⊃ ℱ and ℳ̄_t ⊃ ℱ_t. These completions turn out to be the appropriate ones for our theory. See, for example, Proposition 8.12. The next definition is of central importance.
(5.7) DEFINITION. Let q(ω) be a property of ω; then q is said to hold almost surely (a.s.) on Λ ∈ ℱ if the set Λ_0 of ω in Λ for which q(ω) fails to
hold is in ℱ and P^x(Λ_0) = 0 for all x. If Λ = Ω, we simply say "almost surely."
The following propositions, (5.10)–(5.12), are designed to show that, roughly speaking, one can replace the σ-algebras ℳ_t and ℱ_t^0 by ℳ̄_t and ℱ_t provided one replaces ℰ_Δ by ℰ_Δ*. (Clearly ℰ_Δ*, the universally measurable sets over (E_Δ, ℰ_Δ), is just the σ-algebra in E_Δ generated by ℰ*.) In particular it will follow from our discussion that (Ω, ℳ̄, ℳ̄_t, X_t, θ_t, P^x) is a Markov process with state space (E, ℰ*).
(5.8) PROPOSITION. If Y ∈ bℱ, then x → E^x(Y) is ℰ_Δ* measurable.
Proof. Given μ there exist Y_1 and Y_2 in bℱ^0 such that Y_1 ≤ Y ≤ Y_2 and E^μ(Y_2 − Y_1) = 0. Clearly E^x(Y_1) ≤ E^x(Y) ≤ E^x(Y_2) for all x and x → E^x(Y_j) is ℰ_Δ measurable, j = 1, 2. But
∫ [E^x(Y_2) − E^x(Y_1)] μ(dx) = E^μ(Y_2 − Y_1) = 0,
and hence x → E^x(Y) is in ℰ_Δ^μ. Since μ is arbitrary this yields Proposition 5.8.
(5.9) REMARK. It follows from the proof of Proposition 5.8 that if Y ∈ bℱ then E^μ(Y) = ∫ E^x(Y) μ(dx) for each μ. This relation then holds for any nonnegative Y in ℱ.
(5.10) PROPOSITION. For each t, X_t ∈ ℱ_t/ℰ_Δ*.
Proof. This is an immediate consequence of Lemma 5.6. Actually X_t ∈ (ℱ_t^0)^{P^μ}/ℰ_Δ* for each μ. See the remark following (5.6).
Combining (5.9) and (5.10) we see that x → E^x f(X_t) is ℰ_Δ* measurable for all t and f ∈ bℰ_Δ*. In particular x → N_t(x, A) = P^x(X_t ∈ A) is ℰ_Δ* measurable if A ∈ ℰ_Δ*.
(5.11) PROPOSITION. For all t, h ∈ T, θ_h^{-1}ℱ_t ⊂ ℱ_{t+h}, and so θ_h^{-1}ℱ ⊂ ℱ.
Proof. Since θ_h is in ℱ_{t+h}^0/ℱ_t^0 and in ℱ^0/ℱ^0, this will follow from Lemma 5.6 once we show that for each finite μ on ℰ_Δ there exists a finite ν on ℰ_Δ such that P^μ θ_h^{-1} = P^ν. We define a finite measure ν on ℰ_Δ by
ν(A) = P^μ(X_h ∈ A) = ∫ P^x(X_h ∈ A) μ(dx),
and check that for Λ ∈ ℱ^0
P^μ(θ_h^{-1}Λ) = ∫ P^μ[X_h ∈ dy] P^y(Λ) = ∫ ν(dy) P^y(Λ) = P^ν(Λ).
(5.12) PROPOSITION. If Y ∈ bℱ then for each x and t
E^x(Y ∘ θ_t | ℳ̄_t) = E^{X_t}(Y).
Proof. From (5.8) and (5.10) it follows that the real-valued function ω → E^{X_t(ω)}(Y) is in ℱ_t, which is contained in ℳ̄_t, and so we need only check that for each Λ ∈ ℳ̄_t
(5.13)  E^x(Y ∘ θ_t; Λ) = E^x[E^{X_t}(Y); Λ].
Clearly, from the definition of ℳ̄_t, it suffices to prove this for Λ ∈ ℳ_t. Given x and t define a measure μ on ℰ_Δ* by μ(A) = P^x(X_t ∈ A). The computation made in the proof of (5.11) shows that for B ∈ ℱ^0 we have P^μ(B) = P^x(θ_t^{-1}B), and of course for any Y ∈ bℱ we have
(5.14)  E^μ(Y) = ∫ E^y(Y) μ(dy) = E^x[E^{X_t}(Y)].
If Y ∈ bℱ let Z ∈ bℱ^0 be such that {Y ≠ Z} ⊂ Γ where Γ ∈ ℱ^0 and P^μ(Γ) = 0. Then {Y ∘ θ_t ≠ Z ∘ θ_t} ⊂ θ_t^{-1}Γ and P^x(θ_t^{-1}Γ) = P^μ(Γ) = 0. Also by (5.14)
E^x(E^{X_t}|Y − Z|) = E^μ|Y − Z| = 0,
and so E^{X_t}Y = E^{X_t}Z almost surely relative to P^x. Thus we may replace Y by Z on each side of (5.13) without altering its validity. But with this replacement the validity of (5.13) is a consequence of Theorem 3.6(b), so the proof is complete.
It is evident now that (Ω, ℳ̄, ℳ̄_t, X_t, θ_t, P^x) is a Markov process with state space (E, ℰ*), and hence, in this case, with state space (E, ℰ). Thus nothing is lost by enlarging the σ-algebras ℳ and ℳ_t to ℳ̄ and ℳ̄_t. Consequently we will assume unless explicitly stated otherwise that
(5.15)  ℳ = ℳ̄;  ℳ_t = ℳ̄_t.
In particular then ℱ ⊂ ℳ = ℳ̄ and ℱ_t ⊂ ℳ_t = ℳ̄_t. So far we have made no assumption to insure that P^x is actually the probability measure of the process starting from x, although we have stated that P^x should be thought of in this manner. We now make this precise.
(5.16) DEFINITION. The Markov process X is called normal provided that {x} ∈ ℰ and that P^x[X_0 = x] = 1 for all x in E.
The assumption of normality will always be explicitly stated in the hypotheses of those results for which it is needed. We close this section with the following simple, but very useful, result.
(5.17) PROPOSITION (Zero-One Law). Let X be normal. If Λ ∈ ℱ_0, then P^x(Λ) is either zero or one.
Proof. We first observe that if B ∈ ℱ_0^0 then θ_0^{-1}B = B, and consequently if Λ ∈ ℱ_0 then P^μ(Λ − θ_0^{-1}Λ) = P^μ(θ_0^{-1}Λ − Λ) = 0 for all μ. Therefore
P^x(Λ) = P^x(Λ ∩ θ_0^{-1}Λ) = E^x[P^{X_0}(Λ); Λ] = [P^x(Λ)]^2.
Exercises
(5.18) In the notation of Definition 5.3 show that f ∈ 𝒢̃ if and only if for each μ ∈ U there exist g ∈ 𝒢 and Λ ∈ ℰ such that {f ≠ g} ⊂ Λ and μ(Λ) = 0.
(5.19) Consider the process of uniform motion to the right discussed in Exercise 3.7. Find ℳ̄ and ℱ for this example.
(5.20) Let X be a Markov process with state space (E, ℰ) where E is a metric space and ℰ is the Borel sets of E. Suppose X satisfies the conclusion of Proposition 5.17. Let B = {x ∈ E_Δ : P^x(X_0 = x) = 1}. Show that B ∈ ℰ_Δ*, but that B need not equal E_Δ (that is, X need not be normal). On the other hand show that, for each x and t, P^x(X_t ∈ B) = 1.
(5.21) Let E = R, ℰ = 𝓑(R), 𝒢 = {∅, R}, and U = {μ} where μ is a fixed nonzero finite measure on (E, ℰ). Using the notation of (5.2) and (5.3) show that 𝒢^U ≠ 𝒢̃.
6. Stopping Times
In this section we will introduce a class of random variables that will play a fundamental role in the remainder of this book. Let (Ω, ℱ) be a measurable space and let {ℱ_t : t ∈ T} be a fixed increasing family of sub-σ-algebras of ℱ.
Again in this section T = [0, ∞] and it will be convenient to assume that ℱ_∞ = ℱ. We will leave to the reader the task of developing the analogous (but simpler) situation in which T = {0, 1, …, ∞}.
(6.1) DEFINITION. A map T : Ω → T is called a stopping time (with respect to {ℱ_t}) provided {T ≤ t} ∈ ℱ_t for all t in [0, ∞).
Note that {T = ∞} = {T < ∞}^c ∈ ℱ = ℱ_∞, and so {T ≤ t} ∈ ℱ_t for all t ∈ T. Hence T ∈ ℱ. Of course {T ≤ t} ∈ ℱ_t if and only if {T > t} ∈ ℱ_t. Clearly any nonnegative constant is a stopping time. Given the family {ℱ_t} we define for each t ∈ [0, ∞) a new σ-algebra ℱ_{t+} = ⋂_{s > t} ℱ_s. We have used the notation ℱ_{t+} for this σ-algebra because it is the standard notation. Unfortunately in our notation this same symbol could be used to denote the nonnegative ℱ_t measurable functions. We will never use ℱ_{t+} in this latter sense. For convenience we set ℱ_{∞+} = ℱ. Similarly we define for 0 < t < ∞, ℱ_{t−} = σ(⋃_{s < t} ℱ_s), and set ℱ_{0−} = ℱ_0, ℱ_{∞−} = ℱ. It is immediate that {ℱ_{t+}} and {ℱ_{t−}} are increasing families of sub-σ-algebras of ℱ, and ℱ_{t−} ⊂ ℱ_t ⊂ ℱ_{t+} for all t.
(6.2) DEFINITION. The family {ℱ_t} is right continuous if ℱ_t = ℱ_{t+} for each t < ∞.
Note that {ℱ_{t+}} is always right continuous.
(6.3) PROPOSITION. T is a stopping time with respect to {ℱ_{t+}} if and only if {T < t} ∈ ℱ_t for all t < ∞.
Proof. This follows from the identities
{T < t} = ⋃_n {T ≤ t − 1/n},   {T ≤ t} = ⋂_n {T < t + 1/n}.
Note that this actually shows that T is a stopping time with respect to {ℱ_{t+}} if and only if {T < t} ∈ ℱ_{t−} for all t. If T is a stopping time with respect to {ℱ_t}, then from the above identity {T < t} ∈ ℱ_{t−}, and so T is a stopping time with respect to {ℱ_{t+}}. The converse is not true in general.
Example. If X is a Markov process then according to (3.4) ζ is a stopping time with respect to {ℱ_{t+}^0}.
(6.4) PROPOSITION. If T and S are stopping times (with respect to {ℱ_t}), then so are sup(T, S), inf(T, S), and T + S.
Proof. The following identities establish the first two statements:
{sup(T, S) ≤ t} = {T ≤ t} ∩ {S ≤ t},   {inf(T, S) ≤ t} = {T ≤ t} ∪ {S ≤ t}.
For the third statement we write
{T + S > t} = {0 < T < t, T + S > t} ∪ {T = 0, T + S > t} ∪ {T > t, S = 0} ∪ {T ≥ t, S > 0} = A_1 ∪ A_2 ∪ A_3 ∪ A_4.
We observe that (Q = rationals)
A_1 = ⋃_{r ∈ (0,t) ∩ Q} ({r < T < t} ∩ {S > t − r}) ∈ ℱ_t,
A_2 = {T = 0} ∩ {S > t} ∈ ℱ_t.
Similarly A_3 ∈ ℱ_t, and finally A_4 = {T ≥ t} ∩ {S = 0}^c ∈ ℱ_t. Consequently T + S is a stopping time.
(6.5) PROPOSITION. If {T_n} is a sequence of stopping times with respect to {ℱ_t}, then sup T_n is such a stopping time. If {ℱ_t} is right continuous, then inf T_n, lim sup T_n, and lim inf T_n are stopping times with respect to {ℱ_t}.
Proof. The proposition results from the following identities:
{sup_n T_n ≤ t} = ⋂_n {T_n ≤ t},   {inf_n T_n < t} = ⋃_n {T_n < t},
lim sup T_n = inf_k sup_{n ≥ k} T_n,   lim inf T_n = sup_k inf_{n ≥ k} T_n.
nzk
lim inf T,, = sup inf T,. k
n2k
(6.6) DEFINITION. If T is a stopping time with respect to (9,) we define F T to consist of all sets A in f such that A n {TI t } E 9, for all t < 00. The reader should have no difficulty in verifying that F Tis a a-algebra and that if T(w)= a 2 0 for all w then F T= F, ., Intuitively one should think of F, as containing all the information in some physical process up to time t ; then F Tcontains all the information up to the random time T. The defining property of a stopping time means that we can tell whether or not T is greater than t knowing only the information up to time t.
6.
33
STOPPING TIMES
If T is a stopping time with respect to { R t + }we write F T +for the corresponding object, that is, the collection of all A in F such that A n {TI t} E F,+ for all t < 00. It is easy to see that A E F T +if and only if A E F and A n { T < t } €9, for all f
(6.7) PROPOSITION. Let {Fr} be given. (i) If T is a stopping time, then T is FTmeasurable. (ii) If T and S are stopping times and T IS, then RFT c Fs . (iii) If {F, is} right continuous and {T,,}is a sequence of stopping times, then if T = inf Tnone has f T = OnFTn. Proof. (i) We must show that { T I a } E FTfor all a 2 0. But
{ T 5 a } n { T It } = { T Ia
A t}E
FaAr c Ft,
and so T E F T . (ii) If A E FTthen
A n { S Ir } = (A n { T It } ) n { S It } E Fr, and so A E fs. (iii) According to (6.5) T is a stopping time and from (ii) we have F Tc FTn. On the other hand, if A E FT,,then
n,,
0.
and so A E F T . PROPOSITION.Let {R,}be given and suppose that T and S are stopping times; then each of the sets {T c S}, { S < T } ,{ T IS}, {S IT}, and {S= T} is in both P Tand Fs.
(6.8)
Proof.
We have the equality {T < S } n { S It} =
U< ( { T < r } n { r < S It } ) E Rt,
rEQ.r
I
and so { T < S}e R S .If D, = (Q n [0, t ] ) u {r}, then { T < S } n { T It } =
U { T Ir } n { r < S } E 9, , rsDt
and so {T < S}E .FT.By symmetry {S< T} is in both RTand Fs, and Proposition 6.8 now follows by taking complements and differences.
The following approximation procedure will be useful in some proofs. Given H : Ω → [0, ∞] we define (n = 1, 2, …; k = 0, 1, …)
(6.9)  H^{(n)}(ω) = (k + 1)/2^n  on {k/2^n ≤ H < (k + 1)/2^n},
                 = ∞             on {H = ∞}.
It is obvious that H^{(n)} ↓ H for all ω and that H^{(n)} > H on {H < ∞}. Moreover if H ∈ ℱ then so is each H^{(n)}. Finally it is easy to see that if H is a stopping time so is each H^{(n)}. Thus any stopping time T can be approached pointwise from above by stopping times taking on at most countably many values.
Let (Ω, ℱ) and {ℱ_t; t ∈ T} be given and suppose that for each t we are given a map X_t from Ω into a measurable space (E, ℰ). Let us call X = (Ω, ℱ, ℱ_t, X_t) a random function taking values in (E, ℰ) if X_t ∈ ℱ_t/ℰ for each t ∈ T. Given H : Ω → [0, ∞] we denote the map ω → X_{H(ω)}(ω) by X_H. Finally we write 𝓑_t for the ordinary Borel sets of the interval [0, t] and set 𝓑 = 𝓑_∞.
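(The dyadic approximation described in (6.9) above is easy to compute. The following sketch is an added illustration; the particular random time is an assumption of the example. It forms H^{(n)}(ω) = (k + 1)/2^n on {k/2^n ≤ H < (k + 1)/2^n} and exhibits the announced properties: H^{(n)} > H and H^{(n)} ↓ H, with each H^{(n)} taking at most countably many values.)

```python
import math
import random

def dyadic_approx(h, n):
    """H^(n) = (k+1)/2^n on {k/2^n <= H < (k+1)/2^n}, and = infinity on {H = infinity}."""
    if math.isinf(h):
        return math.inf
    k = math.floor(h * 2**n)
    return (k + 1) / 2**n

H = random.uniform(0.0, 10.0)                      # a sample value H(omega)
approximations = [dyadic_approx(H, n) for n in range(1, 8)]
assert all(a > H for a in approximations)          # H^(n) > H on {H < infinity}
assert all(a >= b for a, b in zip(approximations, approximations[1:]))  # decreasing in n
print(H, approximations)
```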
(6.10) DEFINITION. Let X = (a,9,9,,X , ) be a random function taking values in (E, 8).For each t E T let 0,be the map (u, w ) + X J w ) from [O, t ] x R to E and let (D = Om.Then (i) Xis measurable if 0 E (Se x F)/& and (ii) X is progressively measurable (with respect to {F,}) if @, E (9,x 9 , ) / S for each t .
If E is a metric space and d the a-algebra of Borel sets of E and if t + X,(w) is right continuous for all w, then X is progressively measurable. The proof of this useful fact is left to the reader as Exercise 6.13. (6.11) THEOREM. Let X be progressively measurable and let T be an {9,} stopping time; then X , is in pT/8. Proof. If B E d we must show that {X,E B} E F,,or equivalently { X , E B } ~ { T t} < € 9 ,for all t. Let Y t : {TI t } + [0,t] x R be defined by w + (T(w),w), and let 0,be the map defined in (6.10). If Y is the restriction of X , to {T It } , then Y = 0,0 Y, Now by hypothesis 0,E (9,x F,)/S. Moreover Y, E 9 , / ( S e , x 9,) since if 0 Iu, < u2 I t and A E 9, then Y; ' ( ( u l ,u2] x A) = {ul < T 5 u 2 } n A E 9,.Thus Y E 9,/8. But { X , E B } n {TI t } = Y-'(B) E 9,,and this establishes (6.1 1).
.
(6.12) COROLLARY. If X is measurable and H : R + [0, 001 is in f,then
x, E 9"l.
Proof. Let gr= 9 for all t and apply (6.1 I) to the random function (Q, X,). 9
3
3
, 9
Exercises (6.13)
Prove the assertion following Definition 6.10.
Let ( E , b) be a measurable space and U a family of finite measures on 8.Let { b r :t E B + }be an increasing family of sub-a-algebras of 8" and let a superscript tilde denote completion in 8'' with respect to U (Definition 5.3). Prove that (b,,)" = (&J+ for all t . (6.14)
Let (E, b) be a measurable space and suppose that d # (0,E } . Let R = ET and let X,(w) = w(t) be the coordinate mappings. Here, as usual, T = [0, 001. Let 9:= a(X,; s I t). Show that for each t < co,F: # 9:+ . (6.15)
Let ( Q 9 be ) a measurable space, E a metric space, and d the a-algebra of Bore1 sets of E. Let X , : R + E be in 9 / S for each t E R + and suppose that t + Xr(w)is continuous for each o.If A is a subset of E define D A ( o )= inf{t 2 0: X,(w) E A } where the infimum of the empty set is + co by convention. Finally let 9:= o(X,: s 5 t ) . (a) If A is open show that DA is an {S:,} stopping time. (b) Give an example to show that DA need not be an (F:}stopping time when A is open. (c) If A is closed show that D A is an {S:} stopping time. Note that (a) remains valid if one only assumes that t + X,(w) is right continuous for each w. (6.16)
(6.17) Let R be a space and ( E , 8)be a measurable space. Suppose that for each t E R, we are given a map X , : R + E. As usual define 9:= a(X,: s E R,) and 9:= .(A',: s I t ) . Suppose further that for each t and w there exists w' such that Xs(w')= X,, ,(w) for all s E R+ . (a) Show that A E 9: if and only if (i) A E 9: and (ii) wo E A and X,(w) = X,(wo) for all s I t together imply that w E A. (b) T 2 0 is an {9:stopping } time if and only if T E9:and T(wo)I t, X,(w) = X,(wo) for all s I t imply T ( o )I t. (c) Let T be an {S:} stopping time. Show that A E if and only if (i) A E 9: and (ii) w,, E A and X,(wo) = Xs(w) for all s I T(wo) imply that w E A. [Hint : use Corollary 2.9 of Chapter 0.1
Let (R, 9) and (9,; t E T} be as in the first paragraph of this section. stopping time is the infimum of a countable family of Show that any {S,} stopping times T each of which has the following special form: T = a on AE.F',and T = 00 on A'. (6.18)
7. Stopping Times for Markov Processes
In this section X = ( O , A ,Af, X I ,8,,P") will be a fixed Markov process with state space (E, 8).We assume without loss of generality that A = A and J?, = Af, We will consider stopping times relative to {A,}and (9,) for the most part. In particular we will say that T is a stopping time for X provided it is an (9,) stopping time. If T is an { A , } ( { F stopping ,}) time then of course AT(9,)consists of those sets A e A ( 9 ) such that A n {T I r} E Af(9,) for all t. The reader should have no difficulty in verifying that (7.1)
2,= A
T
(9,= 9,)
where A?,($,) is the completion of A#,) in A(9)with respect to the family {P"; x E EA}({Pp;p a finite measure on 8,)).We say that X is progressively measurable provided that the random function (a,A, A,,X,) taking values in (E,, 8,) is progressively measurable. It then follows from (6.11) that X,E&,/&, whenever T is an {A,}stopping time. Moreover making use of (7.1) and (5.6) it is easy to see that for such a stopping time (7.2)
XT E AT/&:.
The following situation will often arise. Suppose (E, 8) is a given measurA,A,,X , ,e,, P") is a Markov process with state space able space and X = (0, (E, 8*).Clearly A,/&:c &,/&A and so we may consider (a,&, A,, X , ) as a random function taking values in (E4, 8,) as well as in (E,, 8:). If this random function (with values in (EA, 86))is progressively measurable, then for any { A f }stopping time T, X , E & T / 8 A and consequently (7.2) is again valid. For example, let EA be a metric space and 8, = W(E,) (the Bore1 sets). If X is a Markov process with state space (E, b*) such that t + X,(o) is right continuous for each w , then (0, A, A,,X,) is progressively measurable if we regard it as taking values in ( E , , 8,) and so (7.2) holds. However if we regard (a, A, A , , XI)as taking values in (E,, 82)it will not, in general, be progressively measurable. When considering {9,} stopping times the following theorem allows us to stopping times in many situations. restrict our attention to {9:+}
(7.3) THEOREM.Let T be an { S t +stopping } time. Then for each p there such }that P'(T # T,,) = 0. exists a stopping time T,, relative to {9:+ Proof. For each n define T'") as in (6.9) Then each T'") is an {F,} stopping time taking on the discrete set of values {k/2";k = 1,2, , .. , 00). Suppose the measure p is given. Fix n for the moment, let A, = {T'")= k/2"}, and let A , E St12,, be such that P'(AkA Ak) = 0; this is possible because Ak E Sk12,, .
8. THE STRONG MARKOV PROPERTY Let B, = A , , Bk = Ak - u
j < k
R'"'(W)
37
A j for k = 1, ..., 00, and define R'"' by
k 2
Bk
= 5,
W E
=00,
WER-UBk. k
One checks immediately that R'") is an (9:stopping ) time and that P'(R(") # T'"))= 0. If we set S,, = inf,,,, R'k', then according to Propositions (6.4) and (6.5) {S,,}is a sequence of {Pf}stopping times decreasing to a limit, T,, , which is an IF:+}stopping time. Since T'") decreases to Tit is clear that P"(S,,# T'")) = 0 for all n, and consequently Pp(T # T,,) = 0. If H : R + [0, 001 we define the translation operator 0, as follows:
e,
(7.4)
W
=
eH(o)w .
Obviously X , o 8, = X,+,for each fixed t. (7.5) PROPOSITION. If X is a Markov process and for all {A,}stopping times T, X , E A T l b b ,then for any such T, O i ' 9 f c Af+Tand hence e;lso A.
nl=,
Proof. If A = { XfJE B j } with Bj E &A and t, < t , < . . . c t,, 5 t, then 8,'A = n ; = l { X ( J + TE B j } and so O i ' A E A T + The ( . desired conclusion is now obvious.
8. The Strong Markov Property

In many arguments it is necessary to use the Markov property (3.2) for certain stopping times T as well as for fixed times t. In early work it was more or less tacitly assumed that this was possible. However, this requires proof and in fact is not true for all Markov processes (see Exercise 8.20). In this section we will develop a sufficient condition (Theorem 8.11) for this "strong" or "extended" Markov property. In this section all stopping times will be {ℳ_t} stopping times unless explicitly mentioned otherwise.
(8.1) DEFINITION. Let X = (Ω, ℳ, ℳ_t, X_t, θ_t, P^x) be a Markov process with state space (E, ℰ). Then X is said to have the strong Markov property, or simply to be strong Markov, provided that for each stopping time T and f ∈ bℰ_Δ one has
(S.R.)  X_T ∈ ℳ_T/ℰ_Δ*
and
(S.M.)  E^x{f(X_{t+T}) | ℳ_T} = E^{X(T)}[f(X_t)]
for all t and x.
REMARK. x → E^x f(X_t) is ℰ_Δ measurable and so the function E^{X(T)} f(X_t) on Ω is ℳ_T measurable. Hence under (S.R.) the right side of (S.M.) has at least the appropriate measurability property. Note that (S.R.) is satisfied whenever X is progressively measurable.
We now introduce a convention which will prove extremely useful in the remainder of this book: any numerical function f on E will automatically be extended to E_Δ by setting f(Δ) = 0 unless explicitly mentioned otherwise. From this point of view bℰ is identified with the subspace of bℰ_Δ that consists of all elements (in bℰ_Δ) vanishing at Δ. Note also that we may now write P_t f(x) = E^x{f(X_t); X_t ∈ E} = E^x f(X_t) if f is in bℰ or in ℰ_+. Clearly P_t f(Δ) = 0 and so our convention is consistent.
(8.2) PROPOSITION. If X satisfies (S.R.), then X is strong Markov if and only if for each stopping time T and f i n bb one has (S.M.)'
E"f(X~+ T ) = E"{EX'T'[f(Xl)I}
for all t and x .
Proof. First of all let us suppose that (S.M.)' holds for all f E bb,. Let T be a stopping time and let A E d TDefine . T A as follows: TA = T o n A, TA = 00 on A'. Since {TA 5 t} = {T It} n A E A t ,TA is a stopping time. Iff E bb, then E"f(Xl
+ T A ) = E X { f ( X l + T ) ; A} -k f(A) px(Ac)
and EX{EX(TA)
Cf(X1)I)= E"{EX'T'rf(Xl)I ; A> +f(N P ( A C ) .
By assumption the left sides of the above equations are equal and so (S.M.) holds. Thus it remains to show that (S.M.)' holds for all f = bb,. Now { fbb,: ~ (S.M.)' holds} is a vector space which contains IEa and, by hypothesis, any f E bb, which vanishes at A. But any f E bb, is a linear combination of two such functions, and so (S.M.)' holds for ally€ bb, . Thus Proposition 8.2 is established since the converse is obvious.
(8.3) REMARK.If Af = S t ,then because of Theorem 7.3 it suffices that (S.R.) and (S.M.)' hold for all {S:,} stopping times, and when this is the case then (Q, 9,F t + ,X I , O f , P") is strong Markov. The reader should recall that & t + = Sf+(Exercise 6.14). (8.4) PROPOSITION. Let X be strong Markov and Y E b9'. E x [Y 0 flT I &T] = EX(T)(Y) for all x and all stopping times T.
Then
Proof. By Proposition 7.5, O ; ' F 0 c A , and so Y o 8 , ~ b A . Also o -,EXT(0)( Y ) is in AT and so it suffices to prove that E"[ Y 0 8,; A] = E"[EX(")(Y); A] for all A E AT. As usual it is enough to consider only Y of the form Y = fj0 X t , withfjE bb, and 0 I tl < . . . < f n . Weargue by induction on n. When n = 1 the required equality is just (S.M.). Consider the stopping time R = T tn-l. Since A E ATc ARand n ~ ~ ~ f j [ X f E, +b A T ]R we have
n'&l
+
."[ fi
j =1 f j ( X f j + T ) ;
n
n- 1
A] = E x [
fj(x,,+T)f(xR+fn-fn-~); A]
j=1
If we set gj =fi, 1 < j I n - 2, and gn-l(x)=fn-l(x) EXLf(Xfn-fn-,)], then each gj is in bb, and the last displayed expression may be written E x [ n ; : : gj(Xf,+r);A]. Thus using the induction hypothesis we obtain
completing the proof of (8.4). (8.5) COROLLARY. If X is strong Markov then for each stopping time T and t E T one has O; IFf c A T + and consequently 0; '9 c 4. Proof. Making use of (7.1) this will follow from Lemma 5.6 once we show that for each x E EAthere exists a finite measure v on 8, such that PxO; = P'. But using the strong Markov property one checks exactly as in the proof of (5.1 1) that v(A) = Px[XTE A ] is such a measure.
'
(8.6) COROLLARY. Let X be strong Markov. Then for each Y E b 9 one has E X [Y o 8 T I A T ] = EX(T)(Y) for all stopping times T and all x.
Proof. Corollary 8.5 implies that Y o E b A and as before EX(T’(Y) is ATmeasurable as a function of w. The proof of (8.6) then goes exactly as the proof of Proposition 5.12. Suppose R and T are stopping times; then one can form R + T O8,. Intuitively if we think of R and T as the times at which certain physical events, o! and B, occur, then R T 0 8 R would be the first time p occurs after o! has occurred and hence should be a stopping time itself. The following theorem makes this precise.
+
(8.7) THEOREM. Suppose X is strong Markov and let R be an { A , + } stopping time and T be an {St+} stopping time. Then R + To 8, is an { A f +stopping } time.
Proof. If S = R
+T
0
8R
then
{S
u
{R
- r; T
0
8R < r}.
rEQ
But {T O R < r } = 8,’{T < r } E d R +byr (8.5), and if A E A R +then r A n {R < t - r } = A n { R r < t } € A fThus . { S < t } E At and so S i s an { A , +stopping } time. 0
+
Of course, one can replace {A,+} and {9,+} by {Af}and {Sf}, respectively, in the hypothesis of (8.7) without altering the conclusion. However it is not clear (and perhaps not true) that with this replacement one can also replace { A , +by } {Af}in the conclusion. We are now going to give some sufficient conditions for the strong Markov property. We begin by introducing the “potential operators ” which will play a central role in the remainder of this book. In Section 3 we defined the transition operators PI f o r f e bd by
(8.8) = E ” [ f ( X , ) ;X,E El
for t 2 0.
It follows from the Chapman-Kolmogorov equations that {P,;t 2 0) is a semigroup of (linear) operators on either b& or b8* to itself. Both 6 8 and bb* are Banach spaces under the supremum norm, )I -11, and llPfII I1. If X is normal, then Po = I. Recall that any numerical function f on E is extended to EA by settingf(A) = 0.
Let us now assume that X is measurable relative to 9'; that is, the map 0 : ( t , o)+ X,(w) is in (W x Fo)/S,, W denoting as usual the Borel sets of [O,oo). It now follows from Lemma 5.6 that if I is a finite measure on W and p is a finite measure on b,, then 0 E (9 x Fo)A*p/&X where (Wx Fo)A*p is the completion of W x 9 'with respect to the product measure I x P". In particular one can take , Ito be equivalent to Lebesgue measure. Therefore iffis in &, (&':), then (t, w ) +f[X,(w)] is in W x 9 '((9x for all I , p), respectively. It is now evident that i f f E b b then ( t , x ) -+ P , f ( x ) = E " f ( X , ) is in b(W x b), while i f f € bB* then the above map is (W x measurable for all I and p where the a-algebra in question is the completion of W x &' with respect to the product measure 2 x p. Therefore if u > 0 and f E bb* we can define 00
(8.9)  U^α f(x) = ∫_0^∞ e^{-αt} P_t f(x) dt = E^x ∫_0^∞ e^{-αt} f(X_t) dt.
One verifies easily that U^α maps bℰ (bℰ*) into bℰ (bℰ*) and that ‖U^α‖ ≤ α^{-1}. The operator U^α is called the α-potential operator of X and the family {U^α; α > 0} is called the resolvent of the semigroup {P_t; t ≥ 0}. It is easy to check, using the semigroup property of {P_t; t ≥ 0}, that the resolvent equation
(8.10)  U^α − U^β = (β − α) U^α U^β,  α, β > 0,
> 0,
holds, and consequently U"Up= U p U " . We now assume that E, is a metric space and that 8, is the a-algebra of Borel sets in E, . A Markov process X is said to be right continuous if almost surely the mapping t + X , ( o ) is a right continuous function from [0,co] to E,. If X is a right continuous Markov process with state space (E, &*), then the random function (Q, A, A f ,X,) with values in ( E , , 8,) is progressively measurable and so the above discussion applies. We come now to the main theorem of this section. (8.11) THEOREM.Let (E, , 8,)be as above and let X = (Q, A,A,,X , ,8, ,P") be a right continuous Markov process with state space (E, &'*). Suppose there exists a linear space L of bounded continuous functions on E such that (i) for every f E L and u > 0 the map f + U " f ( X , ) is right continuous on [0, 4') almost surely and (ii) whenever G is an open subset of E there exists an increasing sequence {f,} in L with f,f Z, . Then (Q, A, At+, X , , O f ,P") is strong Markov. REMARKS. In particular X is then Markov with respect to { A r +Note } . that if CJ" : C , ( E ) + C ( E ) for each a > 0, then L = C,(E) certainly satisfies the conditions in (8.1 I).
42
1.
Proof,
MARKOV PROCESSES
The discussion following (7.2) implies that (S.R.) holds. Let T be an
{ A r +stopping } time and define T(")as in (6.9) for each n. I f f € L and a > 0,
then 00
Ex
joe - " f ( X T + , ) dt = lim Ex n
I
m
e-"'f[X(T("'
0
+ t)] dt
m
1
= lim n k = l EX(jome-.f[X(t
+ k2-")] d t ; T'")= k 2 - " ) .
But (7"")= k2-"} = {(k - 1)2-" IT < k2-"} E A,,-",and so using the ) last displayed expression becomes Markov property (relative to { A , }the lim n
2 Ex(Ex(k'-")[jome-.'f(Xr) d t 1;T'"'
= k2-"
k=l
= lim E x U a f [ X ( T ' " ) ) ] . n
But t -+ U e f ( X r )is right continuous on [0, () and equal to zero on [(, and hence is right continuous on [0,00](as.). Since T'"' J. T we obtain
oc)]
jOme-",E x [ f ( x t + T ) l dt = E " [ ~ " f ( x T ) I = /ome-ulE x { E X ( T ) [ f ( X r )d] t}
for all a > 0. The functions t --t E"lf(X,+T)] and t + E x { E X ( T ) [ f ( X , ) are ]} both right continuous and the above calculation shows that they have the same Laplace transform. Therefore by the uniqueness theorem for Laplace transforms, they are equal. Now using the second property of L we obtain P X [ X , + ,E C ] = Ex{PX'T'(XrE C ) } for all open sets G in E and hence by MCT for all G in 8.An application of Proposition 8.2 now completes the proof of Theorem 8.1 1. Since F,+ c M I + we see that X is Markovian with respect to {.F,+} under the hypotheses of Theorem 8.1 1. In this situation things are very nice indeed as the following proposition shows.
(8.12) each t.
PROPOSITION.
If X i s Markov relative to {9:+}, then F,= F,+for
Proof. By assumption for any Y E bFo,t, and p we have
E"[ Y 0 8,l F:+]= Ex(')(Y ) = E"[Y 0, I F:]. 0
8.
43
THE STRONG MARKOV PROPERTY
Suppose we have established the fact that for all p and Y E b9' Efl( Y I 9;+> = E'( Y I 9;).
(8.13)
If we set Y = I , with A E 9;+, then (8.13) implies that, for each p, I,, differs from an 9;measurable function on a P' null set. (See the discussion above c 9, this implies that A E $7 = 9,.ConseTheorem 1.3.) Since A E 9:+ quently 9;+ c 9,. But (S;+)= 9,+ by (6.14), and so 9,+ c 9,. Thus Proposition 8.12 follows from (8.13). It evidently suffices to prove (8.13) ... for Y = fl'j=lfj X,,, with f j E bbA and 0 Itl < ... < t i It c < 1,. Such a Y can be written as (f15=l,fj X,,)(G 0,) where G = = + f j o X,, - ,. Consequently 0
0
fl;
0
j= 1
= Ep( Y
I 9;).
This completes the proof of (8.12). It is clear that whenever the hypotheses of Theorem 8.11 hold we can assume without loss of generality that A,+= d l
(8.14)
and
2,= A,.
Whenever (8.14) is in force, (8.12) also obtains and in this situation Theorem 8.7 appears more natural. However we will explicitly mention it when we assume (8.14). (8.15) REMARK. If X = (R, d,&,,,'A , 6 , , P") is strong Markov and if X , E 9,./b* whenever Tisan {F,} stoppingtime, then (R, 9,F I ,X , , 0,, P") is also strong Markov. In particular this will be the case whenever E A is a metric space (8,= W(E,) or B(EA)*)and X is right continuous.
Exercises
Let X = (Q, A, A , ,X , , O r , P") be a strong Markov process with state space ( E , 8).Let T be a stopping time and let G : R x R + R be in b ( 4 , x 9). Let H(m) = G(o,8,m). Show that (8.16)
E"{H 1 A , } ( W )
=
1G(o,
m')
PXT@)(dW')
for each x. (8.17)
Let X be as in (8.16) and assume further that X is 9 measurable.
Let T be a stopping time and S 2 0 be ATmeasurable. I f f € bbz, then E"{f(XT+,) I A T l ( 4 = ~ S ( m ) ( X T ( 4 * . f )
for all x where N , ( x , f ) = E xf ( X , ) . [Hint: suppose first that f E b b , and use (8.16) with G(o,a')=f [Xscm,(w')].] (8.18) Let E be a metric space and I, = a(EA). Let X = (Q, A, A , , X,,Or, P") be a right continuous Markov process with state space (E, a). For each x E E let T, = inf{t: XI # x}. Show that for each x there exists A(x), 0 5 A(x) Im, such that P"(T, > I ) = e-A(x)r.[Hint: consider instead h(t) = Px(Tx2 t) and use the fact that h(t) = Px(Tx> t ) except for countably
many values of
I.]
Let E = R" and I = W(R"). Let X = (Q, A, A,,XI, O,, P") be a Markov process with state space (E, I ) whose transition function is given by (2.12) and which has right continuous paths. (The existence of such a process for given {p,;t > 0} will be established in Section 9.) (a) Show that X is strong Markov with respect to {A,+}. (b) Show that P"(X, E B) = P o ( X , + x E B) for all x, t, and B E I. (c) Let T be a finite stopping time and let Y(t) = X(t + T) - X(T) for t 2 0. Let Y = a( Y , ; t E R+). Show that Y and f T are independent with respect to each PP.[Hint: begin by showing that for each p (8.19)
(8.20) Consider the process defined in (2.22). For each t 2 0 define Z,(w) = limsTIX,(w). Show that P ( 2 , # XI) = 0 for each t and hence that { Z , } is a Markov process (in the sense of Definition 1.1) having as a transition function the one defined in (2.21) (which we denote by N,(x, A)). Let 9:= ~ ( 2s, ;I t). Find an { f : , } stopping time Tand a bounded continuous f such that E f ( Z ~ + T#) E { N ~ ( Z T , f 9. Standard Processes
In this section we introduce the class of Markov processes with which we will be mainly concerned in the remainder of this book-the " standard " processes. In particular we will give conditions for the existence of a standard process with a prescribed transition function. We begin with some definitions of fundamental importance.
(9.1) DEFINITION. Let X = (Q, 4, A,,X , , O f , P*) be a Markov process with state space ( E , 8).We assume that EA is a metric space and that &,= a(EA), the class of Borel sets in EA. Suppose that X , E A,/&,*for all {&,} stopping times T-this is just condition (S.R.) of (8.1). Under these conditions we say that X is quasi-left-continuous provided that whenever {T,,}is an increasing sequence of {A,} stopping times with limit T, then almost surely X(T,,) + X(T)on {T < [}.
It is useful to introduce a property somewhat stronger than (9.1) : namely, we will say that Xis quasi-left-continuous on [0, co) provided the convergence X(T,,)+ X ( T ) holds almost surely on { T < co} rather than just on { T < [}. In general quasi-left-continuity will refer to Definition 9.1 ; when confusion is possible we will refer to the property of (9.1) as quasi-left-continuity on [0, [) as opposed to quasi-left-continuity on [0, a).We are, of course, using the topology of EA in discussing the convergence of {X(T,,)}. We come now to the definition of a standard process. (9.2) DEFINITION. A normal Markov process X = (Q, A, MI,X , , O f , P") with state space (E, 8)is called a standard process provided : (i) E is a locally compact space with a countable base and A is adjoined to E as the point at infinity if E is noncompact and as an isolated point if E is compact. Furthermore &, is the a-algebra of Borel sets of EA (or, equivalently, 8 is the a-algebra of Borel sets of E). (ii) A,+= A!, = J Ifor all t . (iii) The paths functions t -+ X l ( o ) are right continuous on [0, 00) and have left-hand limits on [0, 5) almost surely. (iv) X is strong Markov. (v) Xis quasi-left-continuous.
REMARKS.Proposition 8.12 implies that, for a standard process, 9, + = 9, for all t. Also the assumption in (iii) that t + X,(w) has left-hand limits on [0, 4') almost surely is in fact a consequence of the other hypotheses (see Exercise 9.15). A standard process which is quasi-left-continuous on [0, CO) is called a Hunt process. Recall that a subset A of E is bounded provided that there exists a compact subset of K of E such that A c K . The next proposition implies that almost surely the left-hand limits of t --t X , on [0, [) must lie in E. (9.3) PROPOSITION.If X is a standard process, then for each t the set A ( w ) = {X,(w):0 I s I I, t < [ ( w ) } is almost surely bounded.
Proof. Let { K,,} be an increasing sequence of compacts in E such that K,, c K:, for all n and U K , , = E. (Here KO denotes the interior of K . ) Let T,, = inf{t: XI $ K,,}. Then T,, is an (9,) stopping time because EA - K,, is open. The sequence {T,,}is increasing and if T = lim T,,,then, because of the quasileft-continuity, XTn+ X , almost surely on {T c [}. However, by the right continuity of the paths X,,,+, is not in K,, , and so X , = A almost surely on {T < [} and, hence, also on R. Thus T = [ almost surely. The conclusion of (9.3) is now immediate because { X , ( o ) : 0 I s < T,,(w)}is contained in K,,.
We come now to the basic existence theorem for standard processes. This theorem will be used in the sequel only in the discussion of examples and so a reader might skip the details of the proof on a first reading. However, the techniques of the proof are of general interest. (9.4)
THEOREM. Let E be locally compact with a countable base and let
d be the Bore1 sets of E. Let P,(x, A ) be a sub-Markov transition function on (E, 8)with P o ( x , ) = 8,. Let Co denote the space of continuous functions on E which vanish at co and suppose that (1) P ICo c C, for each t 2 0 ,and (2) P,f -f uniformly on E as t 0 for eachfc Co . Then there exists a standard process (in fact, a Hunt process) with state space ( E , 8) and transition function the given P,(x, A). --f
REMARK. It follows from standard semigroup considerations (see Exercise 9.13) that, in the presence of Condition (l), Condition (2) is equivalent to the apparently weaker requirement that P , f +f pointwise as t + 0 for each fEC0.
Proof. Define E A as in (9.2i) and extend P,(x, A ) to a Markov transition function, N,(x, A ) on ( E A ,gA),by setting for A E d A N t ( x , A ) = Pt(x, A n E ) = lA(A),
+ I,(A)
(I
- P,(x, E ) ) ,
x E E,
x = A.
One checks immediately that N is indeed a transition function on (EA, 8,) with N,(x, EA) = I for all t 2 0, x E E,. Let C = C(E,) and observe that if f E C and if g is the restriction off -f(A) to E, then N , f ( x ) = P , g ( x ) + f ( A ) for x E E and N , f(A) =f(A). It follows from this that, for every f E C , N , f E C and that N , f +f uniformly on EA as t + 0. Also if f E C then N t + ,f - N , f = N,(N, f - f) 0 as s J. 0 and so t + Nt f ( x ) is right continuous. Therefore given a > 0 we may form U"f ( x ) = j; e-"' N , f ( x ) dt and U"C c C. Since aU"f(x>= j; e-' N,,, f ( x ) dt, it follows that aU"f +f uniformly as a co for each f E C. We may regard Co as the subspace of C
-
--f
consisting of allfe C satisfyingf(A) = 0. With this interpretation UaCo c Co for each a > 0. In particular i f f € C, is strictly positive on E and if a > 0, then U'f is strictly positive on E. These remarks will be used in the course of the proof. Next let T = [0, a),W = EZ, and 4 = 8': so that ( W , 4)is the usual product measurable space. By Kolmogorov's theorem, (2.1 l), and the ensuing discussion there exists for each x E EA a probability measure P" on ( W , Y) such that over ( W , 9,P") the coordinate mappings X,(w) = w ( t ) form a temporally homogeneous Markov process in the sense of Section 2 with N as transition function and initial distribution E, . Let 4, = a(X,; s I t ) , and let Q denote the rationals in [0, 00). Let A consist of all those w in W which have the following two properties: (a) limsrt,SEQ w(s) exists (in EA) for each t > 0, limslt,sEQw ( s ) exists (in EA) for each t 2 0; (b) w(Q n [0,t ] )is bounded in E for every t E Q such that w ( t ) E E. Denote by A, and j \ b the set of functions defined by conditions (a) and (b), respectively, so that A = A, n A b . We now assert that A E Y and that P"(A) = 1 for all x E EA. To get started on this letfE C, and g = U a f , 01 > 0. Then
1
03
(9.5)
e-a' N,g(x) =
e-au
~ , j ( x du ) Ig ( x ) .
-1
Thus given 0 It < s,
rE
EX{e-asg(X,);
and x
E EA
we have, using (2.6),
r}= e-a-l Ex{e-'(s-r) N I E"{e-"'g(X,); r}.
s--1
g(X-1);l-1
Therefore for each x E EA the family {e-a-lg ( X , ) , Yt; t 2 0 } is a nonnegative supermartingale over ( W , 4, P"). Choosefc C, so thatfis strictly positive on E, and let g = CJaA01 > 0. Then, as noted above, g E Co and g is strictly positive on E. Observe that (Ab)e is precisely the union over all t E Q of the sets
But each Tr E and so Ab E Y. Moreover according to (1.6) of Chapter 0, P"(T,) = 0 for each x E EA and t 2 0. Therefore P"(hb) = 1 for all x E EA. Next let d be a metric for E, and define for E > 0, h,(x, y ) = 1 if d(x, y ) 2 E
and h,(x, y) = 0 if d(x, y) c E. If U is a finite subset of [0, 00) with an even number of elements, say u1 < .. . < u Z n ,define
Clearly He(U)E 93. For each D c [0, 00) define He(D) = sup He(U)where the supremum is taken over all finite subsets U of D with an even number of elements. If D is countable, then H,(D) is again in 93. Next observe that Aa =
fi {Hl/n(Q n LO, ml) < 001,
n=l m=l
and so Aa E 93. According to Theorem 1.5 of Chapter 0, for each f E C+ the restriction to Q of r + e-"'g(X,) has right- and left-hand limits at all points of [0, co) almost surely P" for all x . This assertion is then also valid for any f E C. (Of course a > 0 and g = U'j) Let A(a, f)denote the set of w's such that the restriction of t + g [ X , ( w ) ] to Q has right- and left-hand limits at each point of [0, co). By the above argument A(a, f)E Y and P"[A(a,f)]= 1 for all x E EA , a > 0, and f E C. Let {a"} be a sequence approaching co and let {f k } be a countable dense subset of C. Then {a, U""fk; n 2 1, k 2 1) is uniformly dense in C since aU'f +f uniformly as a --t 00 whenever f~ C. Now let A' = { w : for everyf E C the restriction o f t + f [ X , ( w ) ] to Q has rightA ( a n , h ) and and left-hand limits at all points of [0, 00)). Clearly A' = so P"(A') = 1 for all x. But it is also evident that A' = Aa and so the assertion that P"(A) = 1 for all x E EA is established. We now simply delete the set A' from W; that is, we define W = W n A , 93; = { A n A : A E ?I 93',} =, { A n A: A E Y}, and we take (P")' to be the restriction of P" to Y' and A',' to be the restriction of X,to W'. Clearly for each x , X,'is a Markov process with respect to (93;) over ( W', Y', (P")')having the original N as transition function. At this point we will drop the primes from our notation; that is, we will assume that A' is deleted from W so that W = A. Now for each t 2 0 and w in W set Z,(w) = lim X,(w). slt.seQ
Of course this limit exists since W = A. Furthermore it is easy to see that for each w the mapping t + Z,(w) is everywhere right continuous and has lefthand limits, and is such that Z,(w) = A whenever Z,(w) = Aand s 2 r. Clearly each 2, is in (Y,+)/cF'~.We next assert that for each x , { Z , ; t 2 0} is a Markov process with respect to {Y,,} over (W, Y, P") having N as transition function, and that P"{X, = Z , } = 1 for all r. Indeed suppose we are given x, r, E Y, , s > t with s - t rational, and f E C. Let {r,} be a sequence of rationals
+
decreasing to t; then s, = s - t t, is a sequence of rationals decreasing to s - t. Now r E 9," for each n, and so using the Markov property for X
(9.7)
E"{f(X,); r}= EX{Ns-,f(Xrn); r>
for each n. But Xsn+ Z , and Xtn+ Z, as n -+ co while x + N,-,f(x) is continuous, and so letting n -, 00 in (9.7) we obtain (9.8)
EX{f(Zs);r>= E"{Ns-,f(Z,); l-1.
The restriction that s - t be rational is removed by observing that each side of (9.8) is right continuous in s. Finally, the validity of (9.8) for all f E C implies its validity for all f E b6,. Thus we have established that {Z,, Y, ; t 2 0} is a Markov process over (W, Y, P") with transition function N for each x . Now suppose that g E bd',, f E C, t E [0, m), and {s,} is a sequence of rationals decreasing to t . Then
~X{f(Xs.) g(X,))= E"{N,"-,f(X,) g(X,)), and letting n -, 00 we obtain EX{f(Z,dX,)) ) = E"{f(X,) g(X,)I.
It now follows from MCT that E"{h(Z,, X,)}= E"{h(X,, X,)} for all h E b(6, x 6J, and this implies that P"{X, = Z,} = 1 (take, for example, h ( x , y ) to be a bounded metric for E J . Of course we need this conclusion only for t = 0 since we already know that X and Z have the same transition A) for x in E and function N. We remark at this point that P"(2, E A) = Pt(x, A in 8. For the next step in the construction let R denote the set of all functions w from [0, co] to EA such that t -,w ( t ) is right continuous and has left-hand limits throughout [0, 00), w(00) = A, and w(s) = A if s 2 t and w ( t ) = A. By construction for each w the functions t + Z , ( w ) (with Z,(w) set equal to A) is in R. Let Y,(w) = o(t)and 9; = a( Y,; s 5 t) for all t E [0, a]. Define a mapping II : W - , R by II w ( t ) = Z , ( w ) and note that ~ - ' 9c:Y and c ;Y,, for each t E [0, m). Let px= P 3 r - l . Define translation that I I - ' ~ operators 8, on R by 8, w(s) = w(t + s). It is then easy to see that (0, S", 9;, Y,, O,, p) is a Markov process with state space (E, 8 ) in the sense of Definition 3.1 and having the given P,(x, A) as transition function. as in Section 5, then Y = Moreover if we form the "completions" S and 9, ( R , 9 , 9,+ , Y,, t?,, P")is a strong Markov process because the hypotheses = S,+ by (8.12). We of Theorem 8.11 are certainly satisfied. In particular 9, will drop the """ from bx at this point. We now assert that Y is quasi-leftcontinuous on [0, 00). Once this is established the proof of Theorem 9.4 will be complete.
To this end let {Tn}be an increasing sequence of stopping times with limit T. In trying to prove that YTn+ Y , almost surely on { T < co} there is no loss of generality in assuming that T is bounded, as we now do. Let L = lirn Y(TJ and for t > 0, L, = lirn Y(Tn+ t), these limits existing by the definition of Y. Since Tn+ t is in [T, T + t ] for all large n and t -+ Y, is right continuous, it follows that lim,io L, = Y,. Letfand g be elements of C.Then for each x
E"{f(L) S(YT)} = lim lim ~ " { f ( Y , , 9(YTn+,)> ) n-tm
1-0
= lim lim EX{f( Y,.) N , g( Y,,)} 1-0 n - t m
= lim 1-0
E"{f(L) N , g(L)l = EX{S(L)g(L)}.
An application of MCT then shows that E"{hW, Yd} = EX{h(L,L))
(9.9)
for all h E b(8, x 8,J,and consequently L = Y , almost surely. The proof of Theorem 9.4 is now complete. We will conclude this section by giving a condition under which the paths of the process constructed above may be assumed to be continuous on [0, c). In order to obtain a simple statement we will retain the assumptions of Theorem 9.4 and work with the process Y constructed in the proof of (9.4). (9.10) PROPOSITION. Assume the hypothesis of (9.4), and in addition suppose that for each compact subset K of E and neighborhood G of K, t-' P,( x , E - C) -+ 0 as t 0 uniformly on K. Then almost surely the sample functions t -, Y , are continuous on [0, 5). --f
Proof. Let d be a metric for E. By (9.3) the set {Y,(o):0 Is It , t c ((o)} is almost surely bounded. Consequently it will suf-lice to show that, for each x , each t > 0, each compact K c E, and each E > 0, (9.11)
P'[U'{d[Y(tk/n), Y(t(k k=O
+ l)/n)] > E } , Y([O,
t]) c
K]
approaches zero as n --f co. Define (9.12)
d t ,
K ) = SUP PLx, E XEK
- B,(x))
-=
where B,(x) = { y E E : d ( x , y ) E } . The reader will verify easily that the additional hypothesis in (9.10) (in addition to the assumptions of (9.4)) is equivalent to the assertion that t-' ctL(f, K) 0 as t -,0 for each E > 0 and --f
compact K c E. Now the expression in (9.11) is dominated by
as n -+ co, and so (9.10) is established.
Exercises (9.13) Verify the Remark following the statement of Theorem 9.4. [Hint: let P,(x, A ) satisfy the hypotheses of Theorem 9.4. First show that U"Co c Co and that L = LITois independent of LY > 0. (See (8.9) and the remarks 0 as t -+ 0 for all f~ L and hence following it.) Next check that llP, f -fll for allfE L', the uniform closure of L. Finally use the Hahn-Banach theorem to show that L' = Co.] -+
Show that any transition function of the form (2.12) satisfies the hypotheses of Theorem 9.4, and that if X is the resulting Hunt process one may assume that c(w) = 00 whenever X o ( o ) E RN.
(9.14)
Show that the condition that t + X , has left-hand limits on [0, 5) almost surely in the definition of a standard process is a consequence of the other hypotheses. [Hint: let d be a metric for E A . Show that for each E > 0, T = inf{t: d ( X o , X , ) > E } is an (9,) stopping time, and then consider the sequence of stopping times To = 0, T,,, = T, + T 0 &-".I Note that the same argument shows that paths of a Hunt process have left-hand limits (in Ed)on [0, co) almost surely.
(9.15)
(9.16)
Let E = R, d = g ( R ) , and define P,(x,
a)
= ex+, =+E"+,
if x 2 0 or x + t < 0, if x < O and x + t > O .
Verify that P,(x, A ) is a transition function, but note that UuCois not contained in Co. Prove that any standard process with state space ( E , 8) and this transition function will fail to be quasi-left-continuous on [0, 00). (The existence of a standard process with this transition function will follow from the results of Chapter 111.) Show that for the Brownian motion transition function in RN (see (2.17)) the hypotheses of (9.4) and of (9.10) are satisfied. The resulting Hunt process is called the Brownian motion process in RN.In view of (9.10) we may assume that all paths of this process are continuous on [0, co).
(9.17)
(9.18) Let X be a Hunt process with state space (RN,W(RN)) and a transition function of the form (2.12). Prove that if almost surely t + X,(w) is continuous, then the transition function must satisfy the condition of (9.10). [Hint: in this situation we must show that t-’ p,(Gc) + 0 as t + 0 where {p,} is the semigroup appearing in (2.12) and G is a neighborhood of the origin. Let {t,} be a sequence of numbers decreasing to 0 and let r, be the greatest integer in t,’. Use the equality
Po{IX(kt,) - X ( ( k
+ I)t,,)l s E for all k I r,} = [ p , , ( { x : 1x1 < E})]‘”
to deduce the desired conclusion.] (9.19) Let Xbe a standard process and define j ( t ) = sups t,xsE Ps(x, E A - {x}). Prove that if j ( t ) + O as t + 0 , then almost surely the sample functions t + X, are step functions with only finitely many jumps in any bounded t interval. [Hint: recall from (8.18) the equality Px(T, > t) = ,-Icx)‘ where T, = inf{t: X,# x}. Obtain the estimate Px{T, I t} I 2 j ( t ) by considering separately P”{T, I t ; X,# x} and P”{T, I t; X,= x}. Conclude from this estimate that A(x) is bounded. Define TI = inf{t: A’, # X,} and T,,, = T, + TI o ern. Prove that T, is a stopping time and that almost surely T, + 00.1
10. Measurability of Hitting Times

In this section we will introduce certain stopping times which will be of central importance in the following chapters. Suppose X = (Ω, ℳ, ℳ_t, X_t, θ_t, P^x) is a Markov process with state space (E, ℰ). If A is a subset of E_Δ we define two functions

(10.1)   D_A(ω) = inf{t ≥ 0 : X_t(ω) ∈ A},
         T_A(ω) = inf{t > 0 : X_t(ω) ∈ A},

where in both cases the infimum of the empty set is understood to be +∞. We call D_A the first entry time of A and T_A the first hitting time of A. It is obvious that these definitions are valid for any stochastic process X. We often omit the adjective "first." The following properties are immediate consequences of the definitions and we leave their verification to the reader.

(10.2) PROPOSITION.
   (a) s + D_A(θ_s ω) = inf{t ≥ s : X_t(ω) ∈ A};
   (b) s + T_A(θ_s ω) = inf{t > s : X_t(ω) ∈ A};
   (c) t + D_A ∘ θ_t = D_A on the set {D_A ≥ t};
   (d) t + T_A ∘ θ_t = T_A on the set {T_A > t};
   (e) D_A ≤ T_A, and D_A(ω) = T_A(ω) if X_0(ω) ∉ A.

It is sometimes convenient to refer to s + D_A ∘ θ_s (s + T_A ∘ θ_s) as the first entry (hitting) time of A after s. It is immediate from (10.2) that these are increasing functions of s and that

(10.3)   lim_{s↓0} (s + D_A ∘ θ_s) = lim_{s↓0} (s + T_A ∘ θ_s) = T_A.
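The definitions (10.1) are straightforward to approximate along a path observed on a finite time grid. In the sketch below the simulated motion, the set A, and the grid are illustrative choices; the strict inequality t > 0 in the definition of T_A is respected by never using the time 0 grid point for T_A. Property (e) of (10.2) is visible here: when X_0 ∉ A the two computed times coincide.

    import numpy as np

    # Illustrative grid approximation of the entry time D_A and hitting time T_A.
    def entry_and_hitting_times(times, path, in_A):
        """times: increasing grid with times[0] == 0; path: states X_{times[k]};
        in_A: membership test for A.  Returns (D_A, T_A), +inf if never in A."""
        D_A, T_A = np.inf, np.inf
        for t, x in zip(times, path):
            if in_A(x):
                if D_A == np.inf:
                    D_A = t          # first grid time t >= 0 with X_t in A
                if t > 0:
                    T_A = t          # first grid time t > 0 with X_t in A
                    break
        return D_A, T_A

    # A Brownian-like path on [0, 1] and A = [1, infinity), purely for illustration.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 1001)
    x = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(t[1]), 1000))])
    print(entry_and_hitting_times(t, x, lambda y: y >= 1.0))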
The next proposition contains additional elementary properties of D_A and T_A.

(10.4) PROPOSITION. Let A and B be subsets of E_Δ. Then
   (a) A ⊂ B implies D_A ≥ D_B and T_A ≥ T_B;
   (b) D_{A∪B} = inf(D_A, D_B); T_{A∪B} = inf(T_A, T_B);
   (c) D_{A∩B} ≥ sup(D_A, D_B); T_{A∩B} ≥ sup(T_A, T_B);
   (d) if {A_n} is a sequence of subsets of E_Δ and A = ∪_n A_n, then D_A = inf_n D_{A_n} and T_A = inf_n T_{A_n}.

Proof. Since the first three statements are obvious we restrict our attention to the fourth, and in particular to T_A. Clearly inf_n T_{A_n} ≥ T_A. On the other hand, if T_A(ω) < ∞ then for each ε > 0 there exists a t such that T_A(ω) ≤ t < T_A(ω) + ε and X_t(ω) ∈ A. (Choose t > 0 if T_A(ω) = 0.) But then X_t(ω) is in some A_n, and so inf_n T_{A_n} ≤ T_A + ε, which completes the proof.
We are interested in formulating conditions under which D_A and T_A are stopping times for a large class of sets A. To this end we introduce the theory of capacity. See Brelot [1], Bourbaki [1], Dynkin [1], or Meyer [1] for a more detailed discussion.

(10.5) DEFINITION. Let E be a locally compact separable metric space and let 𝒦 be the class of all compact subsets of E. A function φ : 𝒦 → R is called a Choquet capacity provided:
   (i) if A, B ∈ 𝒦 and A ⊂ B, then φ(A) ≤ φ(B);
   (ii) given A ∈ 𝒦 and ε > 0 there exists an open set G ⊃ A such that for every B ∈ 𝒦 with A ⊂ B ⊂ G one has φ(B) − φ(A) < ε;
   (iii) φ(A ∪ B) + φ(A ∩ B) ≤ φ(A) + φ(B) for all A, B ∈ 𝒦.

Given such a capacity φ one defines the inner capacity, φ_*(A), of an arbitrary set A ⊂ E by φ_*(A) = sup_{K ⊂ A} φ(K), where the supremum is taken over all compact subsets of A. One next defines the outer capacity, φ*(A), of an arbitrary set A ⊂ E by φ*(A) = inf_{G ⊃ A} φ_*(G), where the infimum is
taken over all open sets containing A. An arbitrary set A is said to be capacitable provided φ*(A) = φ_*(A), and we define the capacity of A to be the common value, which we denote by φ(A). In view of (10.5) the notation is consistent; that is, if K ∈ 𝒦 then K is capacitable and φ_*(K) = φ*(K) = φ(K). We are now ready to state Choquet's fundamental theorem. We refer the reader to Brelot [1], Bourbaki [1], or Dynkin [1] for a proof. Also a very general discussion in an abstract setting will be found in Meyer [1].
(10.6) THEOREM (Choquet). Every Borel (more generally, analytic) subset of E is capacitable.

We are now going to apply (10.6) to the study of hitting and entry times. We assume in the remainder of this section that E is a locally compact separable metric space, that Δ is adjoined to E as the point at infinity if E is noncompact or as an isolated point if E is compact, and that ℰ (ℰ_Δ) is the σ-algebra of Borel subsets of E (E_Δ). We further assume that the given Markov process X has state space (E, ℰ*), is right continuous and quasi-left-continuous, and that ℳ_t ⊃ ℱ_{t+}. Consequently ℱ_t = ℱ_{t+}. For convenience of exposition we will actually assume that t → X_t(ω) is right continuous for all ω. This causes no loss of generality, for we can always accomplish this by removing from Ω a fixed set in ℱ having P^μ measure zero for all μ. The next theorem and Theorem 10.19 are the main results of this section.
(10.7) THEOREM. Under the above assumptions, D_A and T_A are {ℱ_t} stopping times for all Borel (more generally, analytic) subsets A of E_Δ.

We will break up the proof into several steps. First note that if D_A is an {ℱ_t} stopping time then so is s + D_A ∘ θ_s for all s ∈ R_+ (see the proof of (8.7)), and consequently, in light of (10.3), we may restrict our attention to D_A.
This results from the right continuity of the paths, which implies {DG
U
{X,E G} E 9:.
reQn[O,t)
Of course, if t -+ X , ( o ) is only almost surely right continuous one can only assert that D G = T , almost surely and that D G is an {F,} stopping time. Until the proof of Theorem 10.7 is completed A, B, K , G, etc., will always
denote subsets of E. If A ⊂ E we define

(10.9)   d_A(ω) = D_A(ω) ∧ ζ(ω);

that is, d_A = D_A on {D_A < ζ} and d_A = ζ on {D_A ≥ ζ}.

LEMMA. D_A is an {ℱ_t} stopping time if and only if d_A is.

Proof. Since ζ is an {ℱ°_{t+}} stopping time the "only if" statement is immediate. On the other hand D_A = d_A on {d_A < ζ} and D_A = ∞ on {d_A = ζ}, and so

    {D_A < t} = {d_A < t} ∩ {d_A < ζ},

which is in ℱ_t if d_A is an {ℱ_t} stopping time.
For any A ⊂ E and t ≥ 0 let

    R_t(A) = {ω : X_s(ω) ∈ A ∪ {Δ} for some s, 0 ≤ s ≤ t},
    R*_t(A) = {ω : X_s(ω) ∈ A for some s, 0 ≤ s ≤ t}.

Obviously R_t(G) and R*_t(G) are in ℱ°_t if G is open. Moreover for any set A, {d_A < t} = ∪_n R_{t−1/n}(A), and so to show that d_A is an {ℱ_t} stopping time it suffices to show that R_t(A) ∈ ℱ_t for all t. Finally note that one always has R_t(A) ⊂ {d_A ≤ t}.

(10.10) LEMMA. Let K ⊂ E be compact; then
   (i) R_t(K) ∈ ℱ_t for all t;
   (ii) if {G_n} is a decreasing sequence of open subsets of E with G_n ⊃ Ḡ_{n+1} ⊃ K for all n and such that K = ∩_n G_n = ∩_n Ḡ_n, then d_{G_n} ↑ d_K a.s. and P^μ(R_t(G_n)) ↓ P^μ(R_t(K)) for all μ and t.

Proof. Let {G_n} be as in (ii); then {d_{G_n}} is an increasing sequence of {ℱ°_{t+}} stopping times, and so its limit, which we denote by T, is an {ℱ°_{t+}} stopping time. Clearly T ≤ d_K ≤ ζ. If T = ζ then T = d_K, while on {T < ζ}, d_{G_n} = D_{G_n} for all n and X(D_{G_n}) → X(T) a.s. on {T < ζ} by the quasi-left-continuity. But X(D_{G_n}) ∈ Ḡ_n on {D_{G_n} < ∞}, and consequently X(T) ∈ ∩_n Ḡ_n = K a.s. on {T < ζ}. Hence d_K ≤ T a.s. on {T < ζ}. Therefore d_{G_n} ↑ d_K a.s., and so d_K is an {ℱ_t} stopping time.

If ω ∈ {d_K ≤ t}, then either X_s(ω) ∈ K ∪ {Δ} for some s ≤ t or there exists a sequence {s_n} decreasing to t with X_{s_n}(ω) ∈ K ∪ {Δ} for all n. But K ∪ {Δ} is closed, and so the right continuity of the paths implies that in this case X_t(ω) ∈ K ∪ {Δ}. In other words {d_K ≤ t} ⊂ R_t(K), and hence these two sets are equal. Therefore R_t(K) ∈ ℱ_t. Finally {d_{G_n} ≤ t} decreases to {d_K ≤ t} a.s. as n → ∞, and since
    {d_{G_n} ≤ t} ⊃ R_t(G_n) ⊃ R_t(K) = {d_K ≤ t},
it follows that P^μ[R_t(G_n)] ↓ P^μ[R_t(K)]. Thus Lemma 10.10 is established.

(10.11) REMARK. If X is assumed quasi-left-continuous on [0, ∞) rather than on [0, ζ), then one sees by exactly the same argument that (10.10) remains valid if we replace R by R* and d by D throughout.

(10.12) LEMMA. Let μ and t be fixed; then φ(K) = P^μ[R_t(K)] is a Choquet capacity on the compact subsets of E.
Proof. We must check that the three properties of Definition 10.5 are valid. Property (i) is obvious. Property (ii) is an immediate consequence of Lemma (10.10ii). As to Property (iii), if A and B are compact, then

    R_t(A ∪ B) − R_t(B)
      = {ω : X_s(ω) ∈ A ∪ B ∪ {Δ} for some s ≤ t, but X_s(ω) ∉ B ∪ {Δ} for any s ≤ t}
      ⊂ {ω : X_s(ω) ∈ A for some s ≤ t, but X_s(ω) ∉ (A ∩ B) ∪ {Δ} for any s ≤ t}
      = R_t(A) − R_t(A ∩ B),

and therefore φ(A ∪ B) − φ(B) ≤ φ(A) − φ(A ∩ B). Hence φ is a Choquet capacity.

(10.13) REMARK. Under the assumption of (10.11) the set function K → P^μ[R*_t(K)] is a Choquet capacity for all μ and t.
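The set function K → P^μ[R_t(K)] of (10.12) can be estimated by straightforward Monte Carlo. In the sketch below the process (a discretized planar Brownian motion), the compact set K (a closed disc), and all numerical parameters are illustrative choices, and the time discretization can only underestimate the event that some s ≤ t has X_s ∈ K.

    import numpy as np

    # Illustrative Monte Carlo estimate of P^x[ R_t(K) ] for a disc K.
    def estimate_visit_probability(x0, t=1.0, radius=0.25, center=(1.0, 0.0),
                                   n_steps=1000, n_paths=2000, seed=0):
        rng = np.random.default_rng(seed)
        dt = t / n_steps
        c = np.asarray(center)
        hits = 0
        for _ in range(n_paths):
            pos = np.asarray(x0, dtype=float)
            if np.linalg.norm(pos - c) <= radius:      # the time s = 0 already counts
                hits += 1
                continue
            for _ in range(n_steps):
                pos = pos + rng.normal(0.0, np.sqrt(dt), size=2)
                if np.linalg.norm(pos - c) <= radius:
                    hits += 1
                    break
        return hits / n_paths

    print(estimate_visit_probability((0.0, 0.0)))

Monotonicity and the strong subadditivity of Property (iii) can be checked empirically on such estimates, although of course the proof above does not rely on any computation.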
(10.14) LEMMA. For any Borel set B, R_t(B) ∈ ℱ_t.

Proof. Let φ be the capacity φ(K) = P^μ[R_t(K)]. If G is open and K is any compact contained in G, then φ(K) = P^μ[R_t(K)] ≤ P^μ[R_t(G)], and hence φ_*(G) = sup_{K ⊂ G} φ(K) ≤ P^μ[R_t(G)]. Let {K_n} be an increasing sequence of compact subsets of G whose union is G. Clearly R_t(K_n) ↑ R_t(G), and so φ(K_n) ↑ P^μ[R_t(G)]. Thus φ_*(G) = P^μ[R_t(G)] for any open set G. If B is a Borel set, then by Choquet's theorem (10.6), φ_*(B) = φ*(B). Hence for each n there exist a compact set K_n ⊂ B and an open set G_n ⊃ B such that

    φ_*(G_n) − φ(K_n) = P^μ[R_t(G_n)] − P^μ[R_t(K_n)] < 1/n.

Let Λ_1 = ∪_n R_t(K_n) and Λ_2 = ∩_n R_t(G_n); then Λ_1 and Λ_2 are in ℱ_t and Λ_1 ⊂ R_t(B) ⊂ Λ_2. But P^μ(Λ_2 − Λ_1) ≤ P^μ[R_t(G_n)] − P^μ[R_t(K_n)] < 1/n for each n, and hence P^μ(Λ_2 − Λ_1) = 0. Since μ is arbitrary it is now immediate that R_t(B) ∈ ℱ_t.
(10.15) REMARK. The proof of (10.14) actually shows that φ(B) = P^μ[R_t(B)] for any Borel set B, where φ(B) is the capacity of B. Finally note that under the assumptions of (10.11), if ψ(K) is defined to be P^μ[R*_t(K)], then again ψ is a Choquet capacity and ψ(B) = P^μ[R*_t(B)] for any Borel set B.

Theorem 10.7 is now an immediate corollary of the above lemmas. Thus we now know that T_A, D_A, and d_A are {ℱ_t} stopping times for any Borel set A. However useful this information may be, the following approximation theorems are even more useful.

(10.16) THEOREM. If B is a Borel subset of E, then for each μ there exist an increasing sequence {K_n} of compact subsets of B and a decreasing sequence {G_n} of open sets containing B such that d_{K_n} ↓ d_B and d_{G_n} ↑ d_B almost surely P^μ.
u,,
{ d , < q n < TI c { d <~ q n < d ~ , } c
Rq,(B)
- Rq,(Km)*
But if m > n then K,,, 2 K Z and so
Id, < 4. < TI c Rq,(B) - Rq,(K%)* Consequently Pp[dB< q,, < T] = 0, and this implies that T = lim dKm = d, almost surely P’. The required sequence {G,,} of open sets is constructed in a similar manner. See Exercise 10.22. COROLLARY. If B is a Borel subset of E, then for each p there exist an increasing sequence {K,,} of compact subsets of B and a decreasing sequence {G,,} of open sets containing B such that DKnJD , almost surely P’ on R and DG,f D , almost surely P” on { D , < m}. (10.17)
Proof.
By (10.16) there exists such a sequence { K,,} with dKn1d, almost surely
58
I. MARKOV PROCESSES
P’. Now on {D, < co} = {D, < [} one has d, = DBand hence dKn< [ for n sufficiently large. Rut then dKn= DKnandconsequently DKn1DBon {D, < co} a.s. P”.But on {D, = co} each DKn = co since B 3 K,, . Again from (10.16) there exists an appropriate sequence {G,,} with dGn7 dB= D , A [ as. P”, and on {D, < oo} = { D , c c} one has dCn= DGn < [. Therefore DG,t D, a.s. P p on {D, < 00). (10.18) REMARK.If X is quasi-left-continuous on [0, 00) then making use of (lO.ll), (10.13), and (10.15) one can repeat the proof of (10.16) to obtain for a given p a decreasing sequence {G,,} of open sets containing B such that DGnt DBalmost surely P”. (10.19) THEOREM.If B is a Bore1 subset of E then for each p there exists an increasing sequence {K,,} of compact subsets of B such that TK, 1TBa.s. P”. Proof. Let { f k } be a strictly decreasing sequence of positive numbers with limit 0. We know that t k D, 0 Or, 1TA as k + co for any set A. Suppose that we could find an increasing sequence {K.} of compact subsets of B such that DKn 0 Of, 1D, elk a s . P” as n + 00 for each k. Then
+
0
+ DKn O,k) = lim lirn ( t k + DKn elk) n k
TB= lirn lim ( f k k
0
n
o
= lirn
TKn a.s. P”,
n
where the interchange of limits is justified since the double sequence in question is decreasing in both n and k. Thus the proof of (10.19) is reduced to constructing a sequence {K,,} with the above property. For the given p define for A E d vk(A) = l p ( d x ) p”[X(tk) E A ]
= P”[X(fk)E A ] .
According to Corollary 10.17 there exists for each k an increasing sequence {K;} of compact subsets of B such that DKk 5. DBa.s. Pvk.Define K,, = K,’ u K: u . .. K; . Clearly {K,,} is an increasing sequence of compact subsets of B. Moreover K,k c K,, c B if n 2 k and so D, I DKn I DKk for n 2 k. Thus (which exists since {K,,}is increasing), DKn1D, a.s. Pvkfor all k. If T = lim,,DKn then we have shown so far that Pvk[[T# D,] = 0 for all k. Clearly DKn 0 O,, 1 TOO,, asn+coand P ~ T elk z D,
elk]= E” P ~ ( ‘ ~ )z( D,) T = Pvk(T# DB) = 0.
10. MEASURABILITY OF HITTING TIMES
59
Therefore DKn0 Or, 1DB 0 Or, as n -,co a s . Pfl for all k, and thus the construction is complete. Let us emphasize that the approximating sequences constructed in (10.16)(10.19) depend in general on the measure p. See Exercise 10.24 for a sharpening of (10.19) under an additional hypothesis. The only place that we have used the Markov property in the present section is in the proof of (10.19). We should also point out that the analogous approximation theorem by open sets Gn 3 B is not valid in general. A simple example is given by uniform motion to therightalong thereal axis(see(3.7)). IfB = {O}thenPo(T, m) = 0, while if G is any open set containing 0 then Po(TG= 0) = 1. In the remainder of this section we will assume that Xis a standard process although somewhat weaker hypotheses would suffice. The above discussion of course applies to any standard process.
-=
PROPOSITION.Let X be a standard process and suppose that B is a Borel set. Define
(10.20)
R;(B) = R , ( B ) LJ {w : X , - ( w ) exists and is in B u {A} for some s, 0 I s It } .
Then Pp[[R;(B)- R,(B)]= 0 for all p. In particular R;(B)E 9,. Proof. Clearly R,(A) c Ri(A) for any set A . For the given B and p there exists a decreasing sequence {G,,} of open sets containing B such that P’[R,(C,,)- R,(B)]I l/n for each n. Also for any open set G, R,(G) = R;(G). Therefore c RKB) c MGn) n
and Proposition 10.26 now follows since the extreme members in this string of inclusions have the same P p measure. Let c’(o) = inf{r: X,(w)= A or X,-(w) exists and equals A}. Clearly
5’ I iand
{c’ < r < (}
c {s
-+
X , is unbounded in E on [0, r ] , r < i}.
Consequently according to (9.3), = I’ almost surely. Combining this with (10.20) one easily sees that almost surely D , = inf{t: X , E B or X , - exists and is in B } for any Borel set B. In many situations it is necessary to deal with a class of sets that lies between the Borel sets and the universally measurable sets. (10.21) DEFINITION.A set A c EA is a nearly Borel set (relative to the given process X ) if for each p there exist Borel subsets B and B’ of EA such that
60
I. MARKOV PROCESSES
B c A c B and Pj"X, E B' - B for some t 2 01 = 0. This is equivalent to P"[DB,- < 001 = 0. Roughly speaking a set is nearly Borel if the process can not distinguish it from a Borel set. Clearly the above definition makes sense for any process and not just for standard processes. The reader should verify for himself that the class of nearly Borel sets is a o-algebra in E A . We denote it by 8;.The o-algebra of nearly Borel subsets of E will be denoted by 8". Obviously 8 c 8"c 8* and a similar relationship holds for EA. A functionfE 8"is called nearly Borel measurable. Such functions are characterized by the property that for each p there exist fi,f2 € 8 such thatf, 5 f If2 and Pp[cfi(X,)# j 2 ( X , ) for some t 2 01 = 0.
It is easy to check that all of the results of this section which were proved for Borel sets extend to nearly Borel sets, and we will use them for nearly Borel sets without special mention. We leave the details to the reader.
Exercises Give a detailed construction of the sequence of open sets {G,} in Theorem 10.16.
(10.22)
Let X be a standard process having as transition function the one defined in (9.16). Let B = (0).Show that if p is a measure with p[( - 00, O)] > 0, then there is no sequence {G,} of open sets containing B and such that DGn + DB almost surely P '. Consequently (10.17) can not in general be strengthened. (10.23)
(10.24) Let X be a standard process with transition function P,(x, A) and suppose that there exists a o-finite measure 1 on 8 such that for each x in E and t > 0 the measure P,(x, .) is absolutely continuous with respect to 1. Show that if B is a Borel set then there is an increasing sequence {K,,} of compact subsets of B such that TK. decreases to T,, almost surely. [Hint: let p be a finite measure equivalent to A and choose the sequence {K,} such that TKn4 TBalmost surely P".] (10.25) Let X be a standard process and let A be a nearly Borel subset of E. Let T = T,, and let a(t) = supxEEP"[T > t ] . Show that for each x E E and t > 0 one has E X ( T )s r[l - a(t)]-'. [Hint: if a(r) = 1, there is nothing to
11. FURTHER PROPERTIES OF HITTING TIMES
61
prove. If a(t) < 1, show that P"[T > n t ] I [a(t)l", and then write E"(T) = PX(T2 s) ds.] Prove that the class of nearly Borel sets in contained in 82.
(10.26)
EA
forms a a-algebra
11. Further Properties of Hitting Times Throughout this section X will denote a given standard process with state space ( E , I ) . If T is any { M I }stopping time and a 2 0 we define the kernel P; as follows: Iff E b b z then
PTf(x) = E"{e-"Tf(XT);T c a}. In particular if f E bb*, then P+ f ( x ) = E"{e-aTf(XT); T < 5). If T is an {AI} stopping time this need not be a measurable function of x. However if T is an {SI}stopping time then P; is a bounded linear operator from bb,* to bbz (or from bb* to bb*). As usual the definition extends to any nonnegative function in 82.If a = 0 we write PT in place of PF . If T = TA where A E 8; we write P i in place of P;", and P A when a = 0. The measure P i ( x , * ) is called the a-order hitting distribution of A starting from x or the a-order harmonic measure of A relative to x. When a = 0 we drop it from our terminology as usual. If A is a nearly Borel subset of E, then the right continuity of the paths implies that all of the measures P i ( x , * ) are concentrated on A (the closure of A in E ) . In actual fact one can make a good deal more precise statement than this, and we next develop the necessary machinery for this more precise statement. If T is an {PI}stopping time then { T = 0) E So,and so, for each x E E A , P"(T = 0) is either zero or one according to the zero-one law (5.17). The point x is said to be irregular for T if this probability is zero and regular for T if this probability is one. (11.1) DEFINITION. If A
E 8; then a point x is regular for A provided it is regular for T A ,and x is irregular for A provided it is irregular for TA .
Thus x is regular for A if P"(TA = 0) = 1 and irregular for A if P"(T, > 0) = 1. Intuitively x is regular for A if the process starting from x is in A at arbitrarily small strictly positive times with probability one. If A' is the set of all points which are regular for the nearly Borel set A, then A' = {x: P"(TA= 0 ) = 1 ) and consequently A' is universally measurable. In
62
I. MARKOV PROCESSES
fact A‘ is itself nearly Bore1 but we cannot prove this until later. If A’ denotes the interior of A, then plainly A’ c A‘ c A. We can now complete the approximation theorems given in Section 10. THEOREM.Let A€&’” and let p be a finite measure such that p(A - A‘) = 0. Then there exists a decreasing sequence {G,,} of open sets containing A such that TGn1TA almost surely P p on {TA < co} and TGnA l t TA A ( almost surely P p (on Q).
(11.2)
Proof. According to (10.16) and (10.17) there exists such a sequence of open sets for which a s . P” one has DG,,fDA on {DA < 03) and D,,, A 5 1DA A l. But for any open set G, DG = T, , and so to complete the proof we need only show that P’((DA TA) = 0. But
+
P”(DA < T A ) = J p ( d x ) P”(DA < TA). If x E A‘ then P”(DA = TA)= 1, while if x E A‘ then 0 I DA I TA = 0 almost surely P x . Since p attributes all its mass to A‘ u A‘ it follows that P’(DA < TA)= 0 and so the proof of (1 1.2) is complete. REMARK.If X is quasi-left-continuous on [0, co), then using (10.18) and the above argument one can strengthen the conclusion of (1 1.2) to TGn T A almost surely Pr.*
(11.3)
The next result gives us the precise information alluded to earlier regarding p x x , .). THEOREM.Let A E 8;; then (i) X(TJ E A u A‘ a s . on { T A < a}; (ii) for each x and u, P X x , * ) is concentrated on A u A’.
(11.4)
Proof. We will prove only (i), for (ii) is an obvious con ,equence of (i). Since X , 4 A for 0 < t < TA it is the case that if X,, 4 A then ‘’‘A + TA 8,, = TA . Consequently for any p 0
P”(TA < 00, X T ,
4 A ) = P”(TA
< 00, X , , 4 A ) = E’{pX‘TA’(TA = 0 ) ; T A < 00, X T , 4 A } . OT,
= 0,
TA
Therefore almost surely P’ on {TA < 00, X,, 4 A} we must have
* Theorem 6.1 of Chapter 111 contains another important extension of ( I 1.2).
12. REGULAR STEP PROCESSES
63
p x y q = 0)= 1 ; that is, X,,
E A‘.
This clearly yields Statement (i) of Theorem 11.4.
Exercises (11.5)
If Tand R are stopping times show that X ,
(11.6) Let A E 8: and a > 0. Prove that x (11.7) Let A
E 8:. Prove that if x
E A‘
o
8, = XR+T.BR
if and only if P;(x, E,J = 1.
4 A then x E A‘ if and only if P,(x,
-)=
8,.
(11.8) Show by an example that the condition “ x $ A ” in (11.7) cannot be eliminated. [Hint: consider the Hunt process “ uniform motion around a circle.” The definition of this process should be clear by analogy with Exercise 3.7.1 (11.9) Let A and B be in 8: with A c B. Show that if X,, $ A - A‘ almost surely on { T , > 0} then P i p : = P: for all a 2 0, and conversely that if P i P: = P i for some a > 0 then X,, $ A - A‘ almost surely on { T B> O } . Show that P:Pi = P i for all a 2 0 provided that X,, E B‘ almost surely on { T , a}, and that if P i P i = P: for some a > 0, then X,, E B‘ almost surely on { T A< a}.
-=
12. Regular Step Processes
In this section we will introduce an important class of Markov processes. However, the material in this section will not be needed in the sequel except as a source of examples, and so the reader may omit a detailed reading of the proofs. The processes to be constructed have a very simple intuitive description; however, we will go into some detail regarding the construction. By a Markou kernel Q on a measurable space ( E , 8) we mean a function Q(x, A ) defined for x E E and A E d such that (a) for each A E 8,x + Q(x, A ) is in I , and (b) for each x E E, A + Q(x, A ) is a probability measure on 8. If in (b) above we only assume that, for each x E E, A + Q(x, A ) is a measure on & with Q(x, E ) I 1, then the resulting object Q is called a sub-Murkov kernel on ( E , &). We now fix a measurable space ( E , d), and in this section we will assume that { x } E d for each x E E. The basic data from which we will construct the process are (i) a function , IE d satisfying 0 < A(x) < co for all x E E, and (ii) a Markov kernel Q(x, A ) on (E, 8)satisfying Q(x, { x } ) = 0
64
I. MARKOV PROCESSES
for all x. We caD now give an intuitive description of the process to be constructed. A particle starting from a point xo E E remains there for an exponentially distributed holding time T, with parameter A(xo) at which time it "jumps " to a new position x1 according to the probability distribution Q(xo, .). It then remains at xi for a length of time T, which is exponentially distributed with parameter A(xl), but which is conditionally independent, given x,, of T,. Then it jumps to X, according to Q(x,, .), and so on. Most of this section is devoted to a rigorous construction of such a process. Let F = E x R, and 9 = d x g ( R + ) , where R, = [0, 00) as usual. Let N = (0,1,2, . ..}, and set R = F N and Y = fN. Thus (F,9) is a measurable space and (R, Y) is the usual infinite product measurable space over (F, 9). The points o E R are sequences {(x,,, f,,); n 2 0} with x,, E E and t,, E R+ for all n. If o = {(x,,, t,,); n 2 0) let Y,,(o)= (x,,, t,,), Z,,(w) = x,,, and T,,(o) = r,, . Thus Y,, is the nth coordinate map and Y,, = (Z,,,z,,). Obviously Y,,E 3/9, Z,, E Y/ 8, and z,, E Y for each n. We next define a Markov kernel n on the measurable space (F, 9)as follows :
Note that n is translation invariant in the R, variables. We now invoke a theorem of Ionescu Tulcea (see Doob [l, pp. 613-6151 or Neveu [I]) which states the following in the present situation: if y is a probability measure on (F,F),then there exists a probability measure P y on (R, 9)such that the coordinate mappings { Y,,; n E N} form a (temporally homogeneous) Markov process over (R, Y, Py)with y as initial measure and II as transition function; that is,
for all A E 9 and n 2 0. If y is unit mass at (x, 0) E F we will write P" for Py. One easily calculates that for each y EY{exPC-a(zn+1 - zn)l} = E'{A(zn)/Ca + 4Zn)l}, and hence letting a + 00 we see that Py(z,,+, 5 r,,) = 0. Moreover from the assumption that Q(x, {x}) = 0 for all x it easily followsthat Py(Z,,+l= 2,) = 0 for all n and y. Finally Px(zo = 0) = 1 for all x. Consequently if R' = {Z,,,, # Z,, and ,z, > T,, for all n and r0 = 0}, then R' E Y and P"(f2') = 1 for all x. If Y' denotes the trace of Y on R', then, for each x, { Y,,; n E N} is a Markov process over (R', Y', P") with E , , ~ as initial measure and IT as transition function. We now drop the prime from our notation; that is, we now let R denote the set of all sequences {(x,, , r,,); n 2 0) with 0 = t o < z, < . . . and
12. REGULAR STEP PROCESSES
65
x , + ~# x, for all n 2 0, and let 9 be the o-algebra in R generated by the coordinate mappings { Y,;n E N}. The measures P" are now regarded as probability measures on this (R, 9). Finally we are ready to define the process which interests us. First let [ ( w ) = limnT,,(w)and then define for t 2 0 X,(w) = Z,(w) (12.3)
=A
if q,(o)I t < z,+ l(w), if [(w) I t.
where A is a point adjoined to E in the usual manner. Of course we set X,(o) = A for all w. Clearly t + X,(w) is a step function on [0, [(w)) for each o.Consequently if E is given the discrete topology then t + X , ( o ) is right continuous on [0, 00) and has left-hand limits on [0, [(w)) for each o. Moreover X , E for each t. We adjoin a point oAto R and set X,(W,) = A for all t. We put {wA}E 9 and set P X ( { w A }= ) 0 for all x E E. Also we set Z,(W,) = A and ?,(@A) = 00 for all n. Next we define translation operators as follows: e,oA= oAfor all t ; if o = {(x,, t,,);n 2 0) and t 2 [(o) = limn t, then 8, o = w A , while if tk I t < t k + l , k 2 0, then 0,w = {(x,+~,[ t n + k- t ] v0); n 2 O}. One checks immediately that X , o 8, = X , + , for all s and t. Finally we let P A be unit mass at oA. We now come to the main result of this section. THEOREM.X = (R, 9,X,,01,P") is a Markov process with state space (E, 8).
(12.4)
Proof. We will carry out the proof in a series of observations and lemmas, leaving detailed verification to the reader. First of all let 9,= a( Y,; 0< n) and let 9:= o(X,; s I t). We assert that if A E 9:, then for each n there is a set An E 9,such that A n {t, I t < z , , + ~ }= A,, n {t < T , + ~ } .
Indeed, the class of sets A with this property is a a-algebra and obviously it contains the sets { X , E A} with A E 8 and s I t. We pause to introduce some additional notation. If u 2 0, we define for xEEandAE8 (12.5)
Q.(x, A) = -Q(x, A ) .
+ A(x)
If K(r, B), r E C,B E g,is any sub-Markov kernel on a measurable space (C, %) we define the iterates, K", of K by Ko(r, B ) = E,(B)and K"+'(r, B) = K(s, B ) K"(r,ds) for n 2 0. In the present case, and with this notation, a simple induction argument shows that
66
I. MARKOV PROCESSES
jOae-_ nn(x, 0 ; A , ds) = Q X ~A,)
(12.6)
for all u 2 0, n 2 0, x E E, and A E 8 . It also follows by induction that nn is translation invariant in its R+ variables. The following lemma contains the basic calculations which we will need. The proof is a straightforward calculation which we will leave to the reader. It makes use of (12.6) and the Markov property for { Yn}(that is, the fact that the finite-dimensional distributions of { Y,} can be written down in terms of the iterates of n).
(12.7) LEMMA.Let g E 8, and a 2 0. Then for each x (i) EX{@,) [e-"'* - e-"rk+l]I Sn}
(ii) EX{e-Uh+I
d z n + l ) ; rn+1 > 11Sn)
= QAzn9 9 ) expC4zn)
rn
- (a + l(zn))(tv
~n)],
for each n 2 0 and t 2 0. Next given u 2 0 and f~ I +(with f extended to A by f(A)= 0) we define U"f(x) = Ex
j
m
e-"'f(X,) dr.
0
Since by definition m
joe-"'f(X,) dt =
c
e-"'f(X,) dt 0
an application of (12.79 yields (12.8)
(12.9) LEMMA. I f f € 8, , t 2 0, a 2 0, and x E E, then
Ex(J)me-a"f(XJ19:) = e-"' U " f ( X , ) .
12. REGULAR STEP PROCESSES
67
Proof. We must show that for A E F G I )we have
I,
m
(12.10)
Ex(
e-auf(Xu) d u ; A
e-af Uaf(X,); A).
c}
c}
We need not concern ourselveswith {t 2 and so we may write A n {t < = u,,(A n {t,,S t < 7,+1}). But A n ( 7 , I t < T , , + ~ } = A,, n {t < T , , + ~ } with A,, E Y,, and An c (7,s t}. Thus t o establish (12.9) we need show only that (12.10) is valid when A is replaced by A,, n {t < T ~ + ~ with } , A,, E Y,, and A,, c (7, I t}. With this replacement we may calculate the right-hand side of (12.10) using (12.7ii) with c1 = 0 and g = 1. The result is e-af Ex(e-Uzn)(f-rn) U af (z 1. I\ ). (12.11) n , n We calculate the left side of (12.10) by writing the integral as + cp=n+lJ:;+I. Using (12.7ii) twice, the first term becomes
e-"'E"{f(Z,,) [c1
+ A(Z,,)]-'
Jp+'
e-A(zn)(f-'n); A,,}.
For the second term we use (12.7i) and (12.7ii) in that order to obtain
Combining these calculations with (12.8) we obtain (12.1 1) for the left side of (12.10) also. Hence Lemma 12.9 is established. The conclusion of Theorem 12.4 is now immediate. Indeed if f E b l + , then Lemma 12.9 implies that
whenever rp is a linear combination of exponentials and hence, by uniform approximation, whenever rp is continuous and vanishes at a.Since f(X,,) is right continuous and bounded, given s 2 0 one can choose a sequence {rp,} such that upon replacing rp by rp,, the left side of (12.12) approaches E"{f(X,+,) I G.' I)} while the right side approaches Ex('){f(X,)}. Thus (3.2) holds and so the proof of Theorem 12.4 is complete. Observe that 71 = inf{t: X , # X,,} and hence 71 is an {F:+} stopping time. Moreover T n + l = T,, + 71 o 8," and each 7, is an {F:+} stopping time. If T is an {PI}stopping time, then X , = Z,, on (7, I T < T . + ~ } and so X , E.A&/'. Iff is any bounded function on E,, then t - f ( X f ) is right continuous since t + XI is right continuous when E, is given the discrete topology. Therefore the proof of Theorem 8.1 1 can be repeated essentially word for word to show that X is a strong Markov process relative to {Fr+}.
68
I. MARKOV PROCESSES
Finally let us show that X is quasi-left-continuous when EA is given the discrete topology. Let {T,,} be an increasing sequence of {Fr} stopping times with limit T. We may assume that T I [. In view of the form of the path functions it is clear that X,,,+ X , for all o in { T < c} {T = z,}. Consequently for a given path f -+ Xr(o)with T ( o )< [ the convergence of X,,, to X , can fail only if for some k we have T = z,+~ and T , I T,,< T , + ~ for all large n. Let k and an initial measure p be given. On the set { T , I T,, < z , + ~ } we have 7 k + l = T,, z1 0 0,. and X(T,,)= 2,. Thus if A = { T k IT,, < z , + ~ for all large n}, then
u,
+
P"(T = z k + l ;A) = lim Efi(e-"('k+I-T) ;A) a+ m
lim lim inf Efi(e-"(zk+l-Tn). a+m
9
zk
n
= lim lim inf EQ(e-"'IoeTn; 7, I a+m
5
Tn
< 7k+l)
T , < zk+1)
n
Ilim E"{(EZk(e-"")} (1'
m
This establishes the quasi-left-continuity of X. If E is a locally compact space with a countable base and B is the Bore1 sets of E then it is an immediate consequence of the above results that X is a standard process.
Exercises (12.13) Given a > 0 and x E E show that a necessary and sufficient condition that P"(C = 00) = 1 is that Qx x , E) -P 0 as n + a.Conclude from this that if 1 is bounded then, for all x, P"([ = 00) = 1. [Hint: use (12.6).] (12.14) Show that relative to each Pyif h E bB, then z,,+~ - z, Z,,+l, and h are conditionally independent given Z,, . (12.15) Let E be locally compact with a countable base and let d = %?(E). Show that if 1 is a bounded continuous function and Q maps C, into C, , then, for each a > 0, U"maps C, into C o wOne may replace C, by C in the hypothesis and conclusion. [Hint: show that if Q maps C, into C, , then so does each Q:. Then use (12.8).]
II EXCESSIVE FUNCTIONS
1. Introduction
Most of the paragraphs which follow have as fixed data a given standard process X = (Q, A, A , ,X,, O f ,P") with state space (E, 8). Often the definitions and theorems involve only the action of the corresponding semigroup {P,}of transition operators and nothing more about X. However our viewpoint is that the process Xas well as the semigroup { P I }is of fundamental interest, and so we will not attempt to separate statements involving only the semigroup from those which also involve the process. When starting with a standard process X it is to be understood that objects such as nearly Bore1 sets, potential operators U", hitting operators P ; , and so forth are defined relative to X. Frequently we will make calculations of the following sort: Let T be an { A , }stopping time, f~ S*, , and a a nonnegative number. Recall that m ~ a j ( x= )
e - a ' j ( x , ) dt.
EX
0
We will refer to U y a s the a-potential o f f . As usual when u = 0 we will drop it from our notation and terminology. Observe that U ( x , B) = U o ( x ,B) is the expected amount of time that the process starting from x spends in B. The integral on r in the above expression for U a f ( x )can be written + j?. If T(o) 00, then using the change of variable t - T = u the second integral becomes
-=
69
70
11. EXCESSIVE FUNCTIONS
and this is valid also if T = co since f(A) = 0. The function in braces is an element of .F and so by the strong Markov property (1.1)
E x /Tae-arj(X,) dt = Ex(e-aT E X ( T /oae-atj(X,) ) dt) = EX{e-aTvaj(XT)>
= PTU"f(x).
Consequently
I
T
(1.2)
~ a j ( x=) E X
e-a'j(X,) dt
+
~ + ~ a j ( x ) .
0
This shows, for example, that P ; V " f l U"fand that P;V"fdecreases as T increases. Generally we will simply write down the results of such calculations, leaving the intermediate steps to the reader.
(1.3) THEOREM. (a) I f f € 8: and A is nearly Borel, then P i U " f l U l f . (b) If in additionfvanishes off A, then PiU"f= Vlf. (c) Iff and g are in S*, with f'vanishing off the nearly Borel set A and U " f i Vag on A, then U " f l V'g everywhere. Proof. The first two statements are immediate consequences of (I .2) applied to the stopping time TA, We will prove (c) as an application of some of the results from Chapter I. If K is a compact subset of A then Pt;U"fl; Pt;U"g, for the measures Pg are carried by K. Fix x and let {K.} be an increasing sequence of compact subsets of A such that TK,1TA a s . P". If B is nearly Borel, then
Ex /T:e-a'f(X,) dt = Pf,U"j(x), and so Lebesgue's monotone convergence theorem shows that Pt;,U"f(x) PplU"f(x). Then from (a) and (b) we have
t
U a j ( x )= P i U " j ( x ) = lim P ; m V a j ( x )s lim PknU' g(x) n
n
5 U"g(x),
which establishes (c). For a > 0 we write Pp = e-"' P , . Note that this agrees with our definition of P; if T = t. The reader should note that, for each a 2 0, { P : ; t 2 0} is a semigroup of bounded operators on bS (or bS*) whose resolvent { Va; /? > 0)
1. INTRODUCTION
71
is given by V B= U a + B In . particular V o = U" is a bounded operator if U a = J0"Pp d f . This notation will be used in the sequel without special mention. u > 0. Note that we now have
Exercises (1.4)
Prove that iff E bB* and T is an {A,}stopping time then
/+
00
~ P ~ , U Q ~ (=X E)X
f
e-a"f(x,,) du T.&
for all x , f , and u 2 0. Let X be the Brownian motion process in R" (see (2.17) and (9.17) of Chapter I). (a) Prove that if n 2 3 the potential operator for this process is given by
(1.5)
for f 2 0, where the integral is over R" and 'd denotes integration with respect to n-dimensional Lebesgue measure. (The precise value of the constant appearing before the integral sign is, of course, relatively unimportant.) Specifically, Uf is the ordinary Newtonian potential off in R".(b) Prove that if n = 1 or 2 then, for f 2 0, Uf = 0 iff = 0 a.e. (Lebesgue measure) and otherwise Vf = 03.
(1.6) Let X be the one-sided stable process of index /I, 0 < /? < 1 , in R (see (2.19) of Chapter I). Show that
[Hint: let hB(r,x) be the continuous density function for the measure qf (Chapter I, (2.19)). Then evaluate the integral k&) = Jgh,(f, x) dt by taking its Laplace transform.] (1.7) Let X be the symmetric stable process of index u, 0 < u < 2, in Rn (see (2.20) of Chapter I). If u < n prove that for f 2 0
Specifically Uf is the Riesz potential of order u. (b) Prove that if u 2 n then Uf has the same description as in Part (b) of Exercise 1.5. [Hint: because of
72
11. EXCESSIVE FUNCTIONS
Exercise 2.20 of Chapter I it suffices to calculate u ) g u w du, Jomdtjrnh.,z(t, 0
where h,,, is defined in (1.6) above and gu is defined in (2.17) of Chapter I. Make this calculation by interchanging the order of integration and then using the results of (1.6).]
2. Excessive Functions
In this section X = (Ω, ℳ, ℳ_t, X_t, θ_t, P^x) will be a fixed standard process with state space (E, ℰ).

(2.1) DEFINITION. Let α ≥ 0. A function f ∈ ℰ*_+ is called α-excessive (relative to X) if (a) P^α_t f ≤ f for every t ≥ 0, and also (b) P^α_t f → f pointwise as t → 0.
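For a process with finitely many states the conditions of (2.1) can be checked numerically. In the following sketch the two-state generator, the function g, and the value of α are illustrative choices only; it exhibits the inequality P^α_t(U^α g) ≤ U^α g for the α-potential U^α g = (αI − A)^{-1} g, together with convergence as t → 0, in line with the fact noted below that α-potentials of nonnegative functions are α-excessive.

    import numpy as np
    from scipy.linalg import expm

    # Illustrative check of (2.1) for a two-state chain with generator A,
    # semigroup P_t = exp(tA), and alpha-potential U^a g = (a I - A)^{-1} g.
    A = np.array([[-1.0, 1.0],
                  [ 2.0, -2.0]])        # conservative generator: rows sum to zero
    g = np.array([3.0, 1.0])            # a nonnegative function on the two states
    alpha = 0.7
    Ua_g = np.linalg.solve(alpha * np.eye(2) - A, g)

    for t in [1.0, 0.1, 0.01]:
        lhs = np.exp(-alpha * t) * expm(t * A) @ Ua_g   # P^a_t (U^a g)
        print(t, lhs, "<=", Ua_g)

As t decreases the left-hand side increases to U^α g, which is condition (b) of the definition.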
Of course Condition (a) implies that Prf increases as t decreases. If a = 0 we simply say " excessive" rather than " 0-excessive." Let 9" denote the class of functions which are a-excessive. Since the process X is fixed in each discussion it need not be referred to in the notation. The next four propositions collect some elementary but important properties of a-excessive functions. Statements involving a are understood to hold for all a 2 0 unless explicitly stated otherwise. (2.2)
PROPOSKION. (a) Constant nonnegative functions are in 9". (b) If
f,g E 9"then f + g E 9"and cf E 9"(c a nonnegative constant). (c) If {f.} E 9" andf, If, I .. . , then limf, E 9". (d) 9" = flyp, the intersection being over all fl > a. (e) I f f € 8: then U " f e 9". Proof. Properties (a)-(d) are immediate consequences of the definition. For Property (e) we write
The term on the right is less than U y = f$P: f du and approaches it as t decreases to 0.
(2.3) PROPOSITION. I f f € ti?*, thenfE 9'" if and only if (a) pU@'ysf for all p 2 0 and (b) pUa'"f-+ f as j3 -+ 00.
2. Proof.
EXCESSIVE FUNCTIONS
73
Iff E 9" then clearly D U a ' s f s f , while
as j-+ co by the monotone convergence theorem. For the sufficiency we first note that iff satisfies (a) and (b) for a particular value of tl, then it follows readily from the resolvent equation (8.10) of Chapter I that f satisfies (a) and (b) for every larger value of a. Thus in view of Property (d) of (2.2) we may (and do) assume that a > 0 without loss of generality. Let us first suppose that f E Bf satisfies (a) only (with a > 0). Then f, = min(f, n) satisfies (a) also, and the resolvent equation implies that for q > p we have
pufl+y,,= puq+y,+ (q - p)u~+=(pus+y,) I puq+y, + (q - p)uq+y,= quq+y,. Letting n -+ 00 we obtain pUa+ysqU"yand consequently p + pUs+Yis an increasing function of p whenever f e 8; satisfies (a). Again from the (2.4)
resolvent equation we obtain (2.5)
us+y,,= u=(f,,- pus'y,).
Consequently from Property (e) of (2.2) we see that PUP'% E 9"because f, - pUp+y, Ebb*,. As p -+ co, pUs+ynincreases to g, which is in 9'"by (c) of (2.2), and, as n -+ co, g,,increases to g E 9'". Clearly g, sf,and so g ~ f But . for each-P> 0, g 2 limn pus'% = / ? U p + yThus iff also satisfies (b) we obtain, on letting p + 00, g 2f.Therefore f = g E 9". (2.6) PROPOSITION. Iff E 9'"and a > 0 there is a sequence {h,} E bbf such that U"h, increases to f as n -+ 00. Proof.
Taking /3 = n in (2.5) we have
nU"+y,, = Ua(n[f, - nU"f%]). Since flUp+y, is increasing in both and n it follows from the proof of (2.3) that n U n + y , increases to f as n -+ 03. Hence we obtain (2.6) by setting h, = n(f,,- nU"+y,,). (2.7) REMARK. The condition in (2.6) that a be strictly positive cannot generally be removed. For example, if X is Brownian motion in R the results of Exercise 1.5 state that, forf 2 0, Uf is either identically infinite or identically 0. For conditions under which (2.6) is valid when a = 0 see Exercise 2.19.
stopping time, then P ; f If. (2.8) PROPOSITION. Let f E Y".If T is an { A , } If A is a nearly Bore1 subset of E,, then P i f E 9".
14
11. EXCESSIVE FUNCTIONS
Proof. Since P ; f increases to P$f as a decreases to /? it follows from (2.2) that we may assume a > 0. Then by Proposition 2.6 it will suffice to consider only f of the form f = Uag with g E b l * ,. The first statement in (2.8) is now an immediate consequence of Eq. (1.2). Furthermore we have (Exercise 1.4) (2.9)
CfiU"g ( x ) = Ex
e-'S g(X,) d s .
1
+ To&
Recall that if A is in 8: then T A is an {F,} stopping time for which t + TA o O r > TA and t + TAo 8, 1TA as t 10. Combining this with (2.9) we see that PiU'g E Y",completing the proof of Proposition 2.8. It follows from Proposition 2.8 taking f = 1 on E that if A is a nearly Bore1 subset of E,, then @ i ( x ) = EX(e-OITA;TA < is a-excessive. The main theorem in this section, Theorem 2.12, concerns the behavior of an excessive function composed with the process X . Before stating it we need some preliminary remarks. Recall from Section 11 of Chapter I, that if A E b", then A' denotes the set of points regular for A and that A' E 6*.
c)
(2.10)
PROPOSITION. Suppose f E Y", A
E P,
and x E A'. Then
inf{f(y): y E A} ~ f ( x I) sup{f(y): y E A}. Proof. To prove the left inequality let I be the infimum in question and K be a compact subset of A. By (2.8)
f ( x ) 2 f i f ( x ) 2 I EX(e-aTK;T~ < c), because X ( T K )E A if TK < 00. By (10.19) of Chapter I and the fact that x E A' we may choose K so that E x ( e - a T K ;TKc [) is close to 1. Hence f ( x ) 2 I. In proving the second inequality we may assume that a > 0 and then by (2.6) that f = U'g, g E b l f . By a now familiar calculation, if K is a compact subset of A
Since there is a sequence {K,} of compacts in A such that Px(TK,+ 0) = 1 and since E XJ;e-"'g(XJ dt < 00 it follows that the first term on the right may be made arbitrarily close to 0. Hence the desired inequality follows. LetfE 8"take on only finite values. For E > 0 define T(w) = W t : If(XO(4) -f(X,(w))l > E l .
If we set Ak,#= { f c k/n - E } u {f> (k
+ I)/n +
E},
for n = 1,2, . . . , and
2. k
= 0, + I , .
75
EXCESSIVE FUNCTIONS
. ., then for any 1 > 0 we have k+l n
n=l k=-m
3
TAk,.
<
1).
The sets Ak," are nearly Borel and so T is an {Ft} stopping time. (2.11) PRoPoSrTroN. Let f~ 9" be nearly Borel measurable and with only finite values, and let T be defined as above. Then no point is regular for T. Also I f ( X o ) -f(X,)l 2 E on { T < 0 0 ) almost surely.
+
Proof. Given x let Bl = {y : f ( y )>f(x) E } , B, = {y : f ( y ) TA. Proof. The regularity assertions in (b) are of course relative to the topology of [0, 001. To get started assumefe 9"and thatfis nearly Borel measurable. Let us first show that as far as (b) is concerned there is no loss of generality in assuming that f is bounded. Indeed define q : [0, co] + [0, 11 by q(x) = 1
- e-x,
x < co,
q(o0) = 1.
Then q is continuous, concave, strictly increasing, and satisfies q(xy) 2 x q(y) for y 2 0 and 0 5 x s 1. From this and Jensen's inequality we have m e - "'qCf(X,)I1 5 m q Ce-"'f(xI)l> qC~"{e-"'f(X,)Il
qCf (x)I;
that is, Pp(q of) 5 q o f for all t. Given x and E > 0 let A = {y: q V ( y ) ]> qLf(x)] + E } . Then A is empty iff(x) = co while iff(x) 00
=-
inf{f(y): Y E A } 2 4-"qCf(x)l
+ E l >fW,
76
11. EXCESSIVE FUNCTIONS
and so x is not regular for A according to Proposition 2.10. Similarly x is c q l f ( x ) ] - E } . Therefore q l f ( X , ) ] + q l f ( x ) ] as not regular for {y: t + 0 almost surely P". Hence q 0 f E 9" whenever f E 9'".We may analyze the truth of Statement (b) using q 0 f as readily asf. So we will assume f is bounded. Now given E > 0 define T as in (2.11) and then define
quo)]
To = 0,
T,+, T,, + T
0
ern,
n = 0, 1 , . . . .
Given x E E and A E F T ,we have EX{e-aTn+*f(XTnt,); A} = EX{e-"Tn P"~f(x,,,); A} E"{e-"Tnf(XTn);A}. That is, relative to P" the family of random variables {e-'Tnf ( X T n ) ;FTn} is a supermartingale. It is bounded since f is bounded and hence has a limit almost surely P" as n + co ; see Theorem 1.4 of Chapter 0.On the other hand If(XT.) -f(XTntl)l 2 E a.s. on {T,,,, < 00) according to (2,11), and so P"(lim T,, = co) = 1. Since x is arbitrary and {lim T,, = co} E 9 it follows that lim T,, = 00 almost surely. This argument can be repeated for E = l/k, k = 1, 2, .. ., with the corresponding iterates being denoted by T : . Almost surely limn T: = 00 for all k. On the other hand it is clear that if w is such that Ti(w)+ co as n + co for all k, then t +f(X,(w)) is right continuous and has left-hand limits on [0, co). This proves Asseition (b) iffis known to be nearly Borel measurable. In proving (a) we may assume that tl > 0. Then in view of Proposition 2.6 it is enough to prove (a) f o r f = U'g, g E b b f . Given a measure p on I define a measure v on b by v(B)= p(dx) U"(x,B). Then there exist gl,g 2 in bI+ such that g1 I g I g2 and v(g2 - gl)= 0. Now U'gl I U'g I Uag2 and the extreme members of this inequality are in bb,. For each fixed c 2 0
= eQ'v(g2
- g l ) = 0.
It now follows that Uagl(Xr) = Uag2(Xr)for all rational 12 0 almost surely P'. But we have already shown that t + U" g i ( X , ) is right continuous in t almost surely since U'gi is Borel measurable. Consequently U" gl(X,) = U" g2(Xr) for all t 2 0 almost surely P', and since ,u was arbitrary this implies
77
2. EXCESSIVE FUNCTIONS
that U"g is nearly Borel measurable. Thus the proofs of (a) and (b) are complete. Let A = {f< co} and B = { f = co}. We now know that A and B are nearly Borel sets. If x E B', then x E B by (2. lo). From (2.8), f ( x ) 2 Pi f ( x ) and this implies that P"(TB < 00) = 0 if x is in A. Consequently if K is a compact subset of A, then P"[TB0 OTK < co] = Ex{PX(TK'(TB < 00)) = 0. For a given x we can find an increasing sequence {K,,} of compact subsets of A such that TKn1T, almost surely P x . But then TB O, + TB 0 OTA almost surely P", and hence Px[TB0 O,, = co] = 1. Since x is arbitrary this yields Statement (c) of Theorem 2.12. 0
The proof of (2.12) yields a little more when a = 0. Specifically, suppose f E 9. Then the exponential factor in the supermartingale e-"=, f (X,,) is absent. It follows that almost surely T,, = co for some n and so lim f ( X J exists also as t -+co. It is easy to see that this limit is finite almost surely on {T, < co} where A = { f < co}. (2.13)
COROLLARY.If A E 6",then A' E 6".
Proof. Let T = T, . If a > 0, then @i(x) = E"(e-"=) is a-excessive according to the remark following Proposition 2.8, and hence is nearly Borel measurable. Since A' = {@> = 1) we obtain Corollary 2.13. (2.14)
COROLLARY.Iff, g E 9" then min(f, g) E 9".
Proof. Let h = min(f, 9). Obviously h 2 Pph. The fact that Pph + h as t + 0 is an immediate consequence of Theorem 2.12(b).
Exercises (2.15) Let f be in 9". Let R and T be { M r }stopping times with R I T. If A E M R show that
E"{e-""f(XR); A} 2 E x { e - " T f ( X , ) ; A} for all x. (2.16) A standard process X is called a strong Feller process if U"f is continuous for all f E 6 6 , having compact support and all a > 0. Show that in this case any a-excessive function (a 2 0) is lower semicontinuous. (2.17) A function f E S*, satisfying (a) of Definition 2.1 is called a-supermean-aalued. If f is a-super-mean-valued show that = lim,+o Ppf exists
78
11. EXCESSIVE FUNCTIONS
(pointwise), that fsf, and that f is a-excessive. Also show that, for all b 2 0 , U s f = U@f.[Hint: for each x the mapping t + P , " f ( x ) has only countably many discontinuities; if t is a continuity point then P , f ( x ) = P If ( x ) . ] Furthermore characterize f as the largest a-excessive function dominated byf. The function f is called the (a-excessiue)regularization off. (2.18) An {Fl}stopping time T is called a terminal time provided t T 0 8,= T almost surely on {T > t} for all r 2 0. Note that if A is a and T nearly Borel set, then both TA and DA are terminal times. I f f € 9"
+
is a terminal time show that P%f is a-super-mean-valued. If in addition T has the property that whenever {t,} is a sequence decreasing to zero one has t. T 0 8," + T almost surely, then PF f E 9". Note that if A is a nearly Borel set, then TA always has this additional property while DA may fail to have it.
+
Let X be a standard process such that U ( x , K) = j;P,(x, K ) dt is bounded in x for each compact subset K of E. Show that in this situation if f i s excessive, then there exists a sequence {g,,}of bounded functions in S*, such that {Vg,} increases tofand each Ug,,is bounded. [Hint: (i) let f be a bounded excessive function such that limf-m P,f = 0. If g 1 = I -'(f- Pf) show that Ug, increases to f a s t + 0. (ii) Let {K,} be an increasing sequence of compact subsets of E whose union is E. If h, = nZKn show that Uh, t 00 and each Uh, is bounded. (iii) Iff is excessive show that f,, = min(f, Uh,,) satisfies the hypotheses of (i). Use this to construct the desired sequence {g,,}.] (2.19)
(i) Let (E, 8 ) and (F, 9) be measurable spaces, p a measure on 9 which is a countable sum of finite measures, and u : E + F be such that (a) u E S/F, (b) there exists a mapping T : F+ E such that z E F/S and u[r(x)]= x for all x in F. Let v = p~-'. Then show that J E ( f o u ) d v = j F f d p for eachfE F+.In particular derive from this the fact that if u is a continuous nondecreasing function on the interval [a, b] and if F(s) is a right continuous nondecreasing numerical function defined on [u(a), u(b)],then (2.20)
fCu(01 dFCu(01 = J
Iu(4),u(b)l
f(0 d F ( 0
for all nonnegative Borel functionsf on [u(a), u(b)]. (ii) Let a > 0 and let h and cp be nonnegative functions in bb*. Define
Show that
JI = U"[h(cp- $)I. [Hint: use (i) to show that
3.
EXCEPTIONAL SETS
79
1 - exp[ - j)(Xs) ds] = j;h(X,) exp[ - /,'h(X.) d u ] ds, and substitute this into the expression defining 49.1 (iii) If cp = U"f with f a nonnegative function in bd*, then show that (notation as in (ii))
Consequently II/ Icp and t+b increases as h increases in this case. Finally show that this last statement continues to hold if cp is only assumed to be a bounded a-excessive function.
3. Exceptional Sets In this section we are going to introduce various classes of sets that will play the role of exceptional sets in the theory to be developed. The importance and the appropriateness of these notions will only become apparent later. In the present section we will merely give the definitions and develop some elementary consequences of them. Again in this section X = (Q, A, Ar, X , , O f ,P") will be a fixed standard process with state space (E, 8).Our definitions will involve the process X, or at least its semigroup, but since we are regarding X as fixed the qualifying phrase "relative to X " will generally be omitted. (3.1) DEFINITION. (a) A set A E b* is of potential zero (or simply "null") if U ( x , A ) = 0 for all x E E. (b) A set A c E is polar if there exists a set D E 8" such that A c D and P"(TD < co) = P"(TD < c) = 0 for all x. (c) A set A c E is thin if there exists a set D E I"such that A c D and D' is empty, that is P"(T, = 0) = 0 for all x. (d) A set A c E is semipolar if A is contained in a countable union of thin sets. (e) A set A c E is thin at x if there exists a set D E 8" such that D 2 A and x is irregular for D; that is, P"(TD = 0) = 0. Since U"(x,A) = e-"' P,(x, A ) dt, it is clear that A is of potential zero if and only if U a ( x ,A) = 0 for all x for some, and hence for all, a 2 0. Intuitively a polar set is one which the process never enters at a strictly positive time alniost surely. If A E I", then A is polar if and only if @:(x) = E x ( e - a T A TA ; < [) = E*(e-uTA;TA < 00) = 0 for all x for some, and hence for all, a 2 0. On the other hand a thin set is one which the process avoids during some initial open time interval almost surely. If A E b", then A is thin if and only if 05 < 1 for some, and hence for all, a > 0. Clearly a polar set
11.
80
EXCESSIVE FUNCTIONS
is thin and a thin set is semipolar. In addition, it will follow from Proposition 3.4 that a semipolar set A in B* is of potential zero. It is evident that a countable union of polar (semipolar) sets is again polar (semipolar). A set may be thin at every point without being a thin set. (See Exercise 3.14.) Often instead of saying A is thin at x we will say that x is irregular for A.
(3.2) PROPOSITION.Iff and g are in 9'"and f 2 g except on a null set, then f 2 g everywhere. In particular iff = g except on a null set then f = g everywhere. Proof. It is immediate from the definition of a null set that pU"'sfr flLJ"'@g for all /?> 0. Letting /3 + 00 and using Proposition 2.3 yields the desired conclusion. The next proposition is a very useful result.
(3.3) PROPOSITION.If A E B", then A - A' is semipolar. Proof. Let a > 0, and let A, = A n {@: I 1 - l/n}. Clearly A - A' is the union of the A,. If @:(x) < 1 , then x is not regular for A and hence is not regular for A,. If @i(x) = 1 , then by (2.10) x is not regular for {a: I 1 - l/n}. Thus no point is regular for A,; that is, A, is thin. This yields Proposition 3.3. An immediate consequence of our next result is the fact, mentioned earlier, that a semipolar set in B* is null. (3.4) PROPOSITION. Let A be semipolar. Then almost surely X , E A for only countably many values of 1.
Proof. We may suppose that A is thin and in 8". Pick a > 0, fl < I , and let B = A n {WAI fl}. The set A is a countable union of sets such as B and so it will suffice to prove X , E B only countably often. Let T , = T , and T,,, = T, + TB &.,. Since B c A and A is thin, B' is empty. Therefore almost surely XTnE B on {T, < a}.Now 0
E*(e-uTn+l) = E ~ [ Q ~ ( Xe-IIT,] ~,) I p Ex(e-aTn). Consequently EX(e-aTn) + 0 as n + 00 and hence T, --t co almost surely. Since X , $ B if T, < t < T,,, the proof of Proposition 3.4 is complete. PROPOSITION.Let f~ 9'"and A = { f = m}. Then A is polar if and only if A is null.
(3.5)
3.
EXCEPTIONAL SETS
81
Proof: Assume A is null and let x E E. Since U ( x , A) = 0 there is a sequence { t,,} approaching 0 such that P "( XrnE A) = 0. By Theorem 2.12(c), P"(X, E A for some r > t,,) = 0. Thus P"(X, E A for some t > 0) = 0. Since x is arbitrary and A E b", this implies that A is polar.
Recall (see Exercise 2.17) that a function f in 8: is a-super-mean-valued provided Ppf 5 f for all t, and that for such a n f , f = limfJ0Ppf is the largest a-excessive function which is dominated by f. Moreover for each /3 2 0, W a f = U a f . Consequently { f # f }= { f < f } is of potential zero. However the following theorem of Doob which generalizes an important theorem of Cartan in classical potential theory allows us to say much more in an important special case.
(3.6) THEOREM.Let {f,} be a decreasing sequence of a-excessive functions and let f = lim,,f,. Thenf is a-super-mean-valued and {f a, and /3 - a < E. According to (2.10), x is not regular for {f 5 a} and by what was said above x is not regular for (12b}. Consequently A: c A,. To complete the proof of Theorem 3.6 it clearly suffices to show that each A, is thin. This is accomplished by the following lemma.
a,
(3.7) LEMMA. Suppose that x is regular for A,, that /3 2 0, and that there stopping times such that (a) P"(T,,-,0) = 1 and is a sequence {T,,}of {9,} (b) f ( x ) 2 P T n f ( x + ) /3 for all n. Then there is a sequence {Q,,} of (9,) stopping times such that (a') P"(Q, 0) = 1 and (b') f ( x ) 2 PQnf(x) + + 4 2 for all n.
82
11. EXCESSIVE FUNCTIONS
Indeed repeated application of (3.7) with B first equal to 0, then equal to ~ / 2 ,and so on, shows that if x E A: then f ( x ) = 00 which contradicts the fact thatfis bounded. Thus (3.7) implies that A, is thin and hence the proof of (3.6) will be complete as soon as (3.7) is established. Proof of' (3.7). Let 6 > 0 and 0 < q < 1, and let R = TA,. Since almost surely P", T,,--t 0 and X, E A, at arbitrarily small strictly positive values of t (xEA:), we may choose n so large that S = T, + R of&-,, satisfies P"(S 2 6/2) < q/2. Moreover X ( S ) E A, almost surely on {S< 03) because A: c A,. Let p(dy) = P"[X(S)E dy; S < 001. Since A' = E, P'(TA > 0) = p(dy) Py(TA> 0) = 0. Consequently we can choose a compact subset K of A such that
If Q = S
+ TK
0
OS, then
But f(XT,) = f ( X T , ) almost surely on {TK < co} because X(T,) E K c A. Therefore the following inequalities obtain: Exf(XQ) = E"{E"'S'[f(XT,)l)
5 EXf(Xs) I E X f ( X s )- & P"(S < 00) 5 E"f(XT,,) - & P"(S < a)
If(4- B - 41 - 1/2) I f ( x ) - B - 42. In view of the fact that 6 and q are arbitrary, these estimates yield Lcmma 3.7.
Exercises
(3.8) (a) Give an example of a thin set which is not polar. (b) Give an example of a semipolar set which is not thin. (c) Give an example of a null set which is not semipolar. [Hint: consider the process of uniform motion to the right.] (3.9)
Let A c E be null. Show that each point of E is regular for A'.
3. EXCEPTIONAL SETS
83
(3.10) Let X be a standard process and assume that there exist a p > 0 and a Radon measure 1 on 6 such that f~ Y pand f = 0 almost everywhere 1 implies t h a t f = 0. Show that there exists a finite measure t on 6 such that A E 6* is null if and only if &A) = 0. Show that under this assumption any a-excessive function is Bore1 measurable. (See Section 1 of Chapter V.) (3.11) Let A E I n and IetfE 9". Show that if every point of A is regular for A, then P; f is the smallest u-excessive function dominatingf on A. Give an example to show that P i fneed not dominate f on A if A - A' is not empty. (3.12) Let A E 6"and let x E E be fixed. Let {T,,} be an increasing sequence of { F l }stopping times with lirn T, = T A almost surely P". If 0 = @> show that @(X,J + 1 almost surely P" on {T,, < TA for all n, TA< a}and hence that lirn,,,, @(X,) = I almost surely P" on this set. [Hint: since 0;increases as a decreases it suffices to consider a > 0. Use (2.15) to show that {e-"Tn@(XT,);9,") is a supermartingale relative to P" and hence that limn e-",. @(XT,) = L exists and L 5 e P T A almost surely P". Next use the strong Markov property to show that if A = {T,, < T A for all n}, then E"(L; A) = Ex(e-uTA; A). Conclude from this that limn@(A',") = 1 almost surely P" on A n {T, < co).]
(3.13) Let A
E 6" and
suppose that lim suplloP,(x, A) > 0. Then show that
x is regular for A.
(3.14) Construct an example of a set A which is thin at every point without being a thin set. [Hint: let X be uniform motion to the right in RZ;that is, starting from any point (xo,yo) E R2 the process moves to the right along the line y = yo with speed one. The reader should have no difficulty writing down a rigorous definition of this process. Let A be a nonmeasurable set (with respect to two-dimensional Lebesgue measure) which intersects each horizontal line in at most one point. See Hewitt and Stromberg [l, 21-27] for the construction of such a set.] (3.15) Let X be the symmetric stable process of index u in R. If a 5 1 show that for each x the singleton {x} is polar. [Hint: using the notation of (1.7) let f , ( t , x ) = @ l ~ " , ~u) ( t g,(x) , du. If a < 1 show y -,( x - yIu-' = c J ; f u ( t , y - x) dt is excessive for each x, and then use Proposition 3.5. If u = 1 and p > 0 let Up(x)= e - p ' f i ( t ,x) dr; then show that U,(O)= co, U,,(x) < co for x # 0, and that y + Up(x- y ) is p-excessive for each x . ]
jr
(3.16) Let X be the Brownian motion process in R. Show that for all points x, x is regular for {x}. Consequently the only thin set is the empty set. [Hint:
84
11. EXCESSIVE FUNCTIONS
use (3.13) and the continuity of the paths.] This result is also valid for stable processes in R of index a with 1 < a < 2 as will be proved later.
(3.17) Show that iffis a-super-mean-valued then {fcf}need not be semipolar. [Hint: use the result of (3.16).] (3.18) Let X be Brownian motion in R. I f f = ILO,m), T = T { o , ,and x < 0, then compute
to obtain PX(Ts t ) = 2P"(X, 2 0). Now use the explicit form of the Gauss kernel g,(x) (see (2.17) of Chapter I) to show that E"{exp(-aTy)} = e-J'lx-YI for all x, y E R and a > 0, where Ty = T{yr, In particular show that Px(Ty< CO) = 1.
(3.19) The notation is that of (3.18). Let a .c x c b and let a > 0. Let T = min(T,, Tb) = inf{t : X,4 (a, b)} almost surely P" and let J, = E"(e-'=; X ( T ) = a) and Jb = E"(e-""; X ( T ) = b). Express EX(e-"To)and EX(e-(ITb) in terms of J, and Jb and solve the resulting equations to obtain J, =
sinhJa(b - x). sinh&(b - a)'
Jb
=
sinh&(x - a ) sinh&(b - a)'
Conclude from this that P"(T, < Tb)= (b - x)(b - a)-' and that E X ( T )= +(x - u)(b - x).
(3.20) Letfe 9"and let {B,,} be a decreasing sequence of nearly Bore1 subsets of E, such that BL c B,,, whenever n > m. Then { P f , f } decreases to an a-super-mean-valued function g. Show that g = S except possibly at those points x in E such that either g(x) = co or x E EL for all n. [Hint: first show that for each n, g = P i n g on { g < 00). Next show that limtloP f g ( x ) 2 lim inftloP:Pin g(x) 2 P i , g(x) if x is not regular for B,, .]
4. The Fine Topology
Once again, in this section the basic datum is a given standard process X = (Q, A, A , ,X , , Or,P " ) with state space (E, 8), and all definitions are relative to this given process.
4.
THE FINE TOPOLOGY
85
(4.1) DEFINITION. A set A c EA is calledjnely open if A' = EA - A is thin at each x in A. In other words for each x E A there is a set D E 8: such that A ' c D and P"(T, > 0) = 1.
Intuitively a set A is finely open provided the process remains in A for an initial interval of time almost surely P" for each ~ E ABy. the right continuity of the paths any open set is finely open. Let O(9,) be the collection of all finely open subsets of E(EA). One checks without difficulty that O(0,) is a topology on E(EA). It is called thejine topology on E(EA). We already observed that the fine topology on E(EJ is finer than the original topology on E(EA). Note that 9 is just the topology that E inherits when we regard E as a subspace of (EA, OA), and that A is isolated in (EA, OA). Consequently we will restrict our attention to the fine topology on E. In this section we will give an alternate description of 0 which will show that in reality 0 depends only on the transition function of X. We will also discuss the behavior o f t + f ( X , ) whenfis nearly Bore1 measurable and finely continuous, that is, continuous relative to 9. As usual the continuity of numerical functions is defined relative to the topology of R, the extended real line. Let us also recall that an elementfof 9" is regarded as a function on E, and hence sets of the formf-'(D) where D c R are subsets of E. Of course in certain formulas we must set f(A) = 0; but {f< I}, for example, is a subset of E. (4.2) PROPOSITION. If YE Y",then f is finely continuous. Proof. Let Z be an open interval in R and let B = f - ' ( I ) . By Proposition 2.10 no point in B is regular for B'. Consequently B is finely open.
(4.3) PROPOSITION.Let A c E and suppose that x is in A and A' is thin at x. Then there exists a compact set K such that x E K c A and K' is thin at x. Proof. By definition there exists a B E 8"such that B c A andP"(T,, > 0) = 1. Since x E A we may assume that x E B, and also that B has compact closure B in E. Of course B' = EA - B. By Theorem 11.2 of Chapter I there exists a decreasing sequence {G,} of open subsets of E containing B'- {A} = E - B 1 E - B such that min(TGn,[) 7 min(T,-,, [) almost surely P". But min(T,-,, C) = TEE, min(TGn,C) = TGnv(A),and G,, u {A} is open in Ed since G,, 1 E - B. Consequently, by the zero-one law, there exists an open set G in EA such that A E G, G' c B, and Px(TG> 0) = 1. If we let K = G ' the proof is complete.
86
11. EXCESSIVE FUNCTIONS
(4.4) PROPOSITION. Let 9 = {B : B E 0 n b" and Then % is a base for the fine topology.
B is compact in E}.
Proof. Let A be a fine neighborhood of the point x. By (4.3) there is a compact subset K of E such that x E K c A and P"(T,, > 0) = 1. Let u > 0 and @'"(x)= and let B = {a" < l}. Since ma E Y",the set B is in 0 n b", and of course x E B. If y E K' then @"(y)= 1 because K' is open, so B c K. Finally B is compact in E because K is compact in E. This completes the proof. (4.5) THEOREM.Let u > 0. Then the fine topology is the coarsest topology relative to which the elements of 9" are continuous.
Proof. In the proof of (4.4) we produced a base for the fine topology consisting of sets of the form {@: < 1) where G ranges over the open sets of E , containing the point A. Hence we obtain Theorem 4.5.
REMARK.We have actually shown that the fine topology is the coarsest topology rendering the functions @,: G as above, upper semicontinuous. (4.6) THEOREM.Suppose that U ( x ,K) is finite for all x whenever K is a compact subset of E. Then the fine topology is the coarsest topology relative to which the elements of Y = Y oare continuous.
Proof. Let Y be the coarsest topology relative to which the elements of Y are continuous. By (4.2), Y c 0, and, by the preceding remark, it will suffice to show that < l} E 9-for all open sets G in E,. Let {G,,} be a sequence of open sets with compact closures such that G,,c G,,, for n 2 1 and UG,, = E. If
{@a
(4.7)
cpn(x) =
u IG,(x) - PGU IG,(x)
then cp, is 9-continuous since both UIGnand P, UIGnare excessive and finite. Clearly cp, increases with n. If cp = limn cp,, then cp is Y-lower-semicontinuous. Thus to complete the proof it suffices to show that {@A < l} = {cp > 0). If rp,(x) > O then Px(TG>O) = 1, and so Od(x) < 1. Hence {cp > 0} c < 1). On the other hand if @a(x)c 1 and x E G,, then cp,,(x) > 0. Consequently {@Q < l} c {cp > 0}, completing the proof of Theorem 4.6.
{@a
(4.8) THEOREM.Suppose f~ 8".Then f is finely continuous if and only if the mapping t + f ( X , ) is right continuous almost surely.
87
4. THE FINE TOPOLOGY
Proof. The implication from the right continuity of the composition to the fine continuity off is immediate; in fact only the right continuity at t = 0 is needed. Coming to the converse implication, suppose that t + f [ X , ( o ) ] is not right continuous and that t 4 X , ( o ) is right continuous. Then there is a point to < c(o),a sequence {t,,} decreasing to to with t, < C(o)for all n, and a pair of intervals Z,, Z2 of the form [ - CQ, r), (s, 001 (perhaps not in that order) with r and s rational and r < s, such thatf[Xt0(w)]E Zland f [ X J w ) ] E Z, for all n 2 1. Let A =f -'(Z1) and B =f -'(Z2) so that A and B are disjoint finely open sets in 8".If cp = @,, then cp[XrO(o)] < 1 and (p[X,,(w)]= 1 for all n. But cp E 9" and hence almost surely t + cp(Xr) is right continuous. Since there are only countably many pairs (I,,I,) of the above form and t + X , ( o ) is right continuous almost surely, the proof of Theorem 4.8 is complete.
Exercises Let A E 8".Show that A is finely closed if and only if A' c A , and that the fine closure of A is A u A'. [Hint: first show that almost surely TAI TAP .] Consequently if x # A , then A is thin at x if and only if x is not in the fine closure of A . Show that in this last statement the condition " x # A '* may not be eliminated. See (1.8) of Chapter V for an extension of this result to arbitrary subsets A of E. (4.9)
(4.10) Assume that {x} is polar for each x E E. Under this assumption show that a sequence {x,,} approaches x in the fine topology if and only if x,, = x for all large n. (4.11)
Let B be a null set. Show that E - B is finely dense in E.
(4.12)
Show that ( E , 0)is a completely regular Hausdorff space.
(4.13) Let A E b", x E A (the closure of A in E ) , and consider the following statements: (i) x is not in A and A is thin at x ; (ii) there exists an f E 9" such that f ( x ) < lim inf,,,,, f ( y ) . Show that (ii) implies (i) and that (i) implies (ii) if c1 > 0. [Hint: to show that (i) implies (ii) for c1 A 0, use Proposition 4.3. Make use of Proposition 2.10 for the other assertion.] (4.14) Let Y E b 4 . Suppose that for each p, Y O,, + Y almost surely P" whenever {T,,} is a sequence of (9,) stopping times decreasing to zero almost surely P'. Show that f ( x ) = E x ( Y) is nearly Bore1 measurable and finely continuous. [Hint: one may assume Y 2 0. Show that 0rUY-t f and so 0
88
11. EXCESSIVE FUNCTIONS
f is nearly Bore1 measurable. If E > 0 show that x is not regular for A = {y : f ( y )2 f ( x ) + E } and B = {y : f ( y )I f ( x ) - E } , and conclude from this that f is finely continuous.] (4.15) Let a > 0 and cp E 9". If A E 0 n &", show that there is a sequence {g,,} in 8; such that each gnvanishes off A and Uagnf P>q.Each g,,may be
taken to have compact support in A, if A is a countable union of closed sets. [Hint: assume first that cp is bounded. Let {h,,} be an increasing sequence of nonnegative elements of b F such that limh,, is infinite on A and each h, vanishes off A-or off some variable compact subset of A if A is a countable union of closed sets. Let $,, be defined as in Exercise 2.20 using rp and h,,. Show that
and use this to show that $,, P>cp. Use the results of (2.20) to complete the proof when cp is bounded. If cp is unbounded apply the above argument to min(cp, a) and let a -,co.] --f
Let X be Brownian motion in R. Prove that CPE,(x) 1 as y + x. [Hint: see (3.18).] Prove that the fine topology coincides with the usual topology of R. [Hint: by the remark following (4.5) it suffices to show that CP; is upper semicontinuous when a > 0 and G is open. For this use the result of the first part of (4.16).] (4.16)
--f
Let X be a standard process such that for all B E 8"either UI, = 0 or UI, = co;that is, VIE= co whenever B is not of potential zero. Show that if B E c f n and UI, # 0, then CP, = 1. Of course CP&) = P"(T, < () = P"(T, < 00) is regarded as a function on E and CP, E 9. [Hint: let YE= limf-m P,@, and O,=CPB-YB. Then P I Y E = Y E .Use (2.19i) and the hypothesis to show that 0,= 0, and hence that P,@, = 0,.Show that there exists an xo such that CPB(xo)= 1. Finally show that if D = {CP, < l}, then U ( x o , D) = 0 and conclude from this that D must be empty.]
(4.17)
Let X be as in (4.17). Show that if B E b" and 0,# 0, then CP, = 1. [Hint: assume B compact and let q = sup 0,> 0. If 0 c 6 c 11 and A = {a, > 6 ) use (4.17) to show CPA = 1. Conclude from this that CP, = q. Finally show that 0,= 1.1
(4.18)
Let X be a standard process such that CP, = 1 whenever B E d'n is nonvoid and finely open. Show that the only excessive functions are constants. Combine this with (4.17) (or (4.18)), (1.5b), and (1.7b) to conclude that the
(4.19)
5. ALTERNATIVE CHARACTERIZATION OF EXCESSIVE FUNCTIONS
89
only excessive functions for Brownian motion in R or R2,or the symmetric stable process in R with u 2 I , are constants. (4.20) Let X be as in (4.19) and assume that E contains at least two distinct points. Show that P x ( [ < 00) = 0 for all x E E. [Hint: show that $(x) = E"(1 - e - [ ) is excessive and hence constant. Use this to show that E X ( e - r )= 0 for all x in E.]
Let X be as in (4.20). If B E &" show that UI, = 0 or UI, = 00. [Hint: suppose that q = sup U I , > 0. If 0 < 6 < q and A = { U I , > S}, then @ A = 1. Use this and (4.20) to show that U I , = 00.1 Note that if excessive functions are lower semicontinuous, then one need only assume @, = 1 whenever B is a nonvoid open subset of E. (4.21)
(4.22) Combine the results of (4.17)-(4.21) to show the equivalence of the following statements. (Assume that E contains at least two distinct points.) (i) UZ, = 0 or U I , = co for B E 8".(ii) 0,= 1 whenever B E &" is nonvoid and finely open (or whenever B is a nonvoid open subset of E if excessive functions are lower semicontinuous). (iii) @, = 1 whenever B is in b" and is not polar. (iv) The only excessive functions are constants. A process satisfying any, and hence all. of these conditions is called recurrent. (4.23) Let X be a standard process such that 111, = 0 or a, > 0 for all B E b". Then if B in b" is not polar, 0,> 0. [Hint: suppose @,(x0) = q > 0 for some xo . Show that if D = {@, > q/2}, then QD > 0. Conclude from this that 0,> 0.1 Note that if there exists a measure ( on & such that P,(x, dy) = f ( t , x , y ) ((dy) with f strictly positive, then X satisfies the above condition.
In particular it is satisfied for Brownian motion or the symmetric stable process in R". Let X be a standard process and assume (i) for some u , U a : C, + C and (ii) for each compact subset K of E, U ( x , K ) is everywhere finite. Show that almost surely X , + A as f + co. [Hint: let K be a compact subset of E, let G be an open set containing K and having compact closure in E, and let u be such that U " : C, + C. Then the assertion is a consequence of the following three observations: (i) U ( x , C ) 2 U " ( x ,C ) 2 6 > 0 for x in K , (ii) P,U I J x ) 2 6 P"(T, 0 0, < 00). and (iii) P , UIG-+ 0 as t + 00.3 (4.24)
5. Alternative Characterization of Excessive Functions
According to (2.8) iff'€ Y " ,then P i f I f for every compact K . In this section we will derive various converses to this assertion. These are useful
90
11. EXCESSIVE FUNCTIONS
for finding the functions excessive relative to specific semigroups and for proving that in various situations excessiveness is a local property. The fundamental theorem, which we will now prove, is due to Dynkin [2] and [4]. As usual X = (0, A, A l ,X,,Or, P " ) is a fixed standard process with state space (E, a). (5.1) THEOREM. Suppose f E 8; and f 2 P i f for every compact subset K of E. Then q U q + " f I f f o r all q > 0.
Proof. We will take c( = 0 to simplify notation. Clearly it is enough to prove the theorem for boundedf. Let p > 0, q > 0, and define h = U q + p fF, = qh -f. The resolvent equation implies that for y > 0
h = U'[(y - P)h - F ]
=
Uyg,
where we have put g = ( y - j ) h - F. Since h is the y potential of the bounded function g we have for any stopping time T
1
T
h(x) = EX
e-7'
g(x,)dt + P ;
h(x),
0
from which we obtain
1
T
(5.2)
Pk F(x) - F ( x ) = -qE"
0
e-" g ( X , ) dt + f ( x ) - P k f ( x ) .
Let A = { F < 0). Choose y < fl and let T = TK, K any compact subset of A. By hypothesis, f - P , f is positive and so f - Pf f is also positive. Of course ( p - y)h is positive and P$ F is negative because XT E K c A almost surely on {T < a}.Consequently, using the definition of g,
/ e - y t F ( X , ) dt. T
F ( x ) I -qE"
0
But E x e - y t F ( X , ) dr 2 0 and we can find compacts K , c A such that P"(TK,+ TA)= 1. Hence F ( x ) I 0; that is, qU"+Bfl f. Letting p decrease to 0 completes the proof of Theorem 5.1. COROLLARY.I f f satisfies the hypotheses of Theorem 5.1 and also lim inf,,, Ppf 2 f,then f E Y".
(5.3)
Proof. By (5.1) and the argument in (2.3), qU,+"fincreases as r] 4 00. Let =f.It follows from the condition in (5.3) and Fatou's lemma that lim inf,,, q U v + " f f 2 f ,and since g s f t h i s implies that g =j:
g = lim, qU"+"f. By (2.3), f~ Y" if and only if g
5.
ALTERNATIVE CHARACTERIZATION OF EXCESSIVE FUNCTIONS
91
Before coming to the applications of (5.3) we need some definitions. DEFINITION. Let T = inf{t: X , # X o } . A point x E EA is called a holding point (for X ) provided P " ( T > 0) = 1. In the alternative case, P X ( T= 0) = 1, the point x is called an instantaneous point. (5.4)
Clearly A is a holding point. Define a sequence of stopping times {S,,} by So = 0 and S,,, = S,, + T OBs,. (5.5) as n
-
DEFINITION.The process X is called a regular step process if S,, --f co almost surely P" for all x in E.
The definition implies of course that for a regular step process every point is a holding point, and that the path functions r + X , are step functions taking the value X(S,,) on [S,,, S,,,) for all n almost surely. These processes are just the ones constructed in Section 12 of Chapter I. Most of the familiar processes which one encounters are either regular step processes or else all points are instantaneous. The next two theorems treat these two cases. THEOREM.Let X be a regular step process. Then f E 8: is in 9"if and only iff 2 Ptf (where T is defined in (5.4)). (5.6)
Proof. First note that for each x E E the set {x} is finely open so that f is finely continuous. Next we observe that f E 8"; indeed given an initial measure p, let p,, be the measure pc,(n) = P"(X(Sn) E A ) ,
and let u,, u, E d be such that u,, sf I u,, and p,,({v,, < u,}) = 0. Of course, we set u,,(A) = u,(A) = 0. If v = sup u,, and u = inf u,, then u, u E 8,v sf Iu, and since X , = X(S,,) on [S,,, S,,,,) we have P"[u(X,) < u ( X , ) for some t ]
Cn P"Cu,(X(S,)) < un(X(Sn))I
= 0.
Consequentlyf'e b". Let K be a compact subset of E and let Q = T , . We next claim that for each n20 (5.7)
f ( x ) 2 E"(e-"Qf(XQ);Q IS,)
+ E"(e-"Snf(XS,);Q > S,,), We will prove this by an induction argument. Clearly (5.7) is valid when n = 0.
92
11. EXCESSIVE
FUNCTIONS
Now assume that (5.7) holds for some fixed value of n. Denoting the second summand on the right side of (5.7) by J and using the inequalityf(Xsn) 2 P;f(X,,,) we have J 2 E"{e-"'" EX'Sn'[e-"Tf(X,)];Q > S,} = E X { e - u S n + l f ( X S ~Q+ ,>) ;S,}.
But { Q > S,} differs from
{ Q > Sn; Q 0s.
=T
0
0
es,,) u { Q > S,;Q
0
0s. > T 0 esn}
by a set having P " measure 0, and Q = S, + Q 0 Osn on { Q > S,}. Consequently J 2 E"{e-"Qf(X,); Q > S , , Q = S,,
,>
+
EX{e-&+
I
f ( X s n t l ) ;Q > % + I } ,
and since { S , < Q I S,,,} = {S, < Q ; Q = S,,,} almost surely, we obtain (5.7) with n replaced by n 1. Thus (5.7) is valid for all n. Now letting n + 00 in (5.7) we o b t a i n f r Pkffor all compact subsets K of E. Finally the fine continuity off and Fatou's lemma imply that lim inffl, PBf r J and so Theorem 5.6 follows from Corollary 5.3.
+
Before stating the next theorem we let d be a fixed metric on E compatible with the topology and define (5.8)
T, = inf{t: d ( X , , X , ) > l/n}.
(5.9) THEOREM. Suppose all points of E are instantaneous. Let f E 8; be finely continuous and suppose that for every compact K c E there is an integer N such thatf(x) 2 P;, f ( x ) for all x E K and n 2 N . ThenfE 9".
Proof. Once again we assume a = 0 to simplify the writing. Let K be a compact subset of E and pick a positive integer N such thatf(x) 2 P T n f ( x ) for all x E K and n 2 N. Fix n 2 N and define R = T,, =0,
if X , E K , if X , # K ,
+
and Ro = 0, Rk+l = Rk R 0 e R , , . Obviouslyfr PRJ and because d(X(Rk), X ( R k + , ) )2 l / n if R k + l < T K E 4 5 it follows from the quasi-left-continuity of X that limk,, Rk 2 T K E almost surely. Given an integer k and E FRkwe have from the strong Markov property E"{f[X(Rk)l;
r}2 E " { P R f ( X ( R k ) ) ;
l-1
= E X { f ( X ( R k +1)); l-1.
5.
ALTERNATIVE CHARACTERIZATION OF EXCESSIVE FUNCTIONS
93
From this it follows easily that for any given t m
Recalling the dependence of Rk on the integer n, define S,,=Rk+l,
= a3,
if R k s t < R k + l , if Rk t for all k.
Then for all n 2 N (5.10)
f(x) 2 EX{S(X(Sn));t < 7 " C l .
It is clear from the definitions that if t < TKc then S,,, is in the interval + T,, o O,].As there are no holding points 7,, 8, approaches 0 as n -, a3 almost surely on {( > t } . By hypothesis and Theorem 1.8 the mapping s + f ( X , ) is continuous on the right almost surely, and so letting n approach co in (5.10) we obtainf(x) 2 E " ( f ( X , ) ; t < TKC). By choosing Klarge enough we may make TKC arbitrarily close to ( and so finallyf2 P, f for all t. This inequality, along with the fact thatfis finely continuous, implies thatfE 9'. Thus the proof of Theorem 5.9 is complete. (t,t
0
We have not attempted to find the best way of combining (5.6) and (5.9). Rather we will content ourselves with the following result in the general situation. THEOREM.Let f be defined, nonnegative, and lower semicontinuous on E, and suppose that, for each x in E, f ( x ) 2 P;" f(x) for a sequence of values of n approaching infinity (the sequence may depend on x; T,, is defined in (5.8)). Then f~ Y a .
(5.11)
Once again we assume c( = 0 to simplify the notation. Clearly f is Bore1 measurable in the present case. Let K be a compact subset of E. We are going to show t h a t f 2 P K J To this end define
Proof.
A,, = {Y E E : 4 y , K ) > l/n andf(y) 2 PTnf(y)).
It then follows from our hypothesis that E - K = R1=T,,,
U,, A , .
Let
if X , , E A , - U A k , k
=0,
if X o # E - K .
Clearly f 2 P R ,.f. Fix x E E - K . If Q is an {F,} stopping time, A E 2FQ, and R = Q + R, o 0,, then
94 (5.12)
11. EXCESSIVE FUNCTIONS
E " { f ( X Q ) ; A> 2 E " { P R , f ( X p ) ; A} = E"{f(x~); A}-
Let /? be any countable ordinal and suppose that R, has been defined for all y < /?.If /? has a predecessor, /? - 1, define R, = R,-l R , 0 O R # - , . If /? is a limit ordinal define R, = sup,,,, R , . In either case R, is an {S,} stopping time and R, 2 R , whenever y /?.Since R, > 0 almost surely on { X , E E - K } , we have for y < /?
+
P X [ R ,= R,; R , < min((, TK)] = 0.
+
For each countable ordinal /?,let q(p) = E"[R,/(I R&]. Since x E E - K , 0 < q(p) I 1 for all /? and q ( y ) I q(p) whenever y I /?.Moreover if y < /? and P X [ R ,< min((, TK)] > 0, then q(y) < q(/?). Consequently there exists a countable ordinal Po such that P"[R,, = min((, TK)]= 1. We know f ( x ) 2 P R I f ( x ) Suppose . that f ( x ) 2 PRYf ( x ) for all y < b. If j has a predecessor, /? - 1, then (5.12) implies thatf(x) 2 P R g f ( x ) If. /? is a limit ordinal, then we can find an increasing sequence {p,} such that R," t R, almost surely P", and consequently X(RB,)+ X ( R B )on { R , < 5) almost surely P". Therefore using the lower semicontinuity o f f and Fatou's lemma we obtain f ( x ) 2 P R g f ( x ) In . particularf(x) 2 P R p , f ( x )= P K f ( x ) . So far we have shown that f ( x ) 2 P K f ( x )for any compact subset K of E and x E E - K . If x is regular for K , then P K f ( x )= f ( x ) . Finally suppose that x E K - K'. Then x must be regular for E - K and so there exists an increasing sequence {K,} of compact subsets of E - K such that Px(TK,J.O) = 1. Since X(TK,)E E - K almost surely on {TK,< 0 0 } and x E E - K, we obtain with two applications of what has already been established E"{f[X(TKn
+
TK
e,Kn)l}
=
EX{P,f[X(TK,)l} PK,f(X)
On the other hand X ( T K ,+ TKo Or,) + X ( T K )almost surely P";in fact there is equality as soon as TKn < TK. Thus Fatou's lemma implies that f ( x ) 2 P K f ( x ) for x E K - K' as well. One final appeal to the lower semicontinuity off and Fatou's lemma yields lim inf,,, P If 2S,and so Theorem 5.11 follows from Corollary 5.3. We are now going to identify some of the objects defined in this chapter with objects from classical potential theory when the process X in question is Brownian motion. Therefore let X = (R, A, A , ,XI, O,,P") be the Brownian motion process in R".We assume that Xis given in its natural function space representation; that is, R consists all maps w : [0, co3 + (R"u {A}) which are continuous on [0, a),w ( w ) = A, and o(t)E R" for all t c 0 0 ;
5. ALTERNATIVE CHARACTERIZATION OF EXCESSIVE FUNCTIONS
95
+
X,(w) = w ( t ) , 0, w(s) = w(s t ) ; and and A, are equal to 9 and 9,, respectively, for all t. If r > 0 let Tr = inf { t : IX, - X,l > r } where Ix - yl is the usual Euclidean metric on R".The key fact which we need is that PTr(x,.) is the uniform distribution of unit mass on S,.(x) = { y : Ix - yl = r } . Clearly the continuity of the paths implies that P,,(x, . ) is carried by Sr(x). First of all let us show that Px(Tr< 03) = 1, which implies that PTr(x,Sr(x))= 1. Fix x and r > 0 and let T be the first hitting time of { y : ly - XI > r } . Let g,(x) be the Gauss kernel defined in (2.17) of Chapter I. Then P"(T, It ) = P"(T It ) 2 P"[lX, - XI > r ] =
j,y,&(Y) d Y , >r
and using the form of the Gauss kernel it is easy to see that this last expression approaches one as t approaches infinity. Hence Px(Tr< co) = 1. Next let 0 be a distance-preserving transformation of R".Then 0 induces a transformation 0 on R by (Oo)(t)= O w ( t ) if t < co and (Ow)(co) = A. We now claim that P x 0 - ' = P o x for any such 0 and all x. Indeed first observe that if P , ( x , A ) is the transition function of X , then P,(x, O - ' A ) = P,(Ox, A ) for any such 0. Consequently if 0 It, < . . . < f k and A l , . . . , A , are in g(R"), then
Thus P " 0 - I = P o x on F0, and hence on 9. Finally note that T,(Ow) = inf{t: IX,(Ow) - X,(Ow)I > r } = inf { I : =
IOX,(o) - OX,(w)/ > r }
TAW),
and so XTr 0 = O X T, .Let x and r > 0 be fixed and suppose Ox = x. Then if r is a Bore1 set 0
P , ~ ( ~0, lr)= P"[x(T,) E 0- '1-1
rl = P O ~ [ X ( T , )E rl
= P X [ O X ( T , )E
= P,,(x,
r);
that is, the measure PTr(x, .) is invariant under all distance-preserving transformations of R" which leave x fixed, Combining this with our previous
96
11. EXCESSIVE FUNCTIONS
observations it follows that PT,(x, .) is indeed the uniform distribution of unit mass on S,(x). In this paragraph we assume that the reader is familiar with classical potential theory as expounded, for example, in Brelot’s monograph [l]. Clearly the Brownian motion process X is a strong Feller process, (2.16), and hence excessive functions are lower semicontinuous. It now follows from Theorem 5.11 and the above evaluation of Ps,(.Jx, .) that f E Y if and only i f f = 00 or f is a nonnegative superharmonic function. In particular, since the only nonnegative superharmonic functions in R or RZ are the constants, we see that the only excessive functions for Brownian motion in one or two dimensions are the nonnegative constants. On the other hand if n 2 3, then X satisfies the condition of (4.6) and so the fine topology is the coarsest topology relative to which the nonnegative superharmonic functions are continuous; that is, it coincides with the Cartan fine topology of classical potential theory; see Brelot [ l , p. 901. The reader familiar with classical potential theory should now have no difficulty in checking the following facts (n 2 3): (i) A is thin at x as defined in this chapter if and only if A is thin at x in the potential theoretic sense; and (ii) the concept of polar set in this theory and in potential theory agree. Moreover it follows from results of potential theory that a thin set, and hence a semipolar set, is polar. Finally we remark that a set is polar if and only if it has capacity zero. These last two statements will be proved in much greater generality by probabilistic methods in Chapter VI.
MULT
AND SUBPROCESSES
In this chapter we develop the fundamental properties of multiplicative functionals of a Markov process. Multiplicative functionals arise naturally when one “kills” a given process according to some rule (see Sections 2 and 3). The reader could skip Section 3 and the second half of Section 5 on a first reading. Also Section 6 stands somewhat apart from the remainder of the chapter. This section contains Hunt’s characterization of the hitting operators Pi. This result is one of the most important in probabilistic potential theory and could have been included in Chapter 11. However we deferred it to the present chapter in order to discuss it in somewhat greater generality than would have been possible in Chapter 11. The reader particularly interested in this result might well go immediately from Section 6 of this chapter to Section 1 of Chapter V where related questions are discussed.
1. Multiplicative Functionals
Let X = (Q, A, A , ,X , , O r , P“) be a Markov process with state space (E, 8).We will shortly impose more conditions on X but the following definition makes sense for any Markov process. DEFINITION. A family M = { M , ; 0 5 t < a}of real-valued random is called a rnultiplicatitie functional of X provided : variables on (Q, 9) (i) M I E Frfor each t 2 0; (ii) M I + ,= M , ( M , 0,) a s . for each t , s 2 0; (iii) 0 I M,(w) I 1 for all t and w.
(1.1)
0
97
98
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
Most authors do not assume that multiplicative functionals satisfy Condition (iii). Thus we should perhaps call a multiplicative functional satisfying (iii) a multiplicative functional with values in [0, 11; however, this is the only type of multiplicative functional we will consider in this book. From now on we will write “ M F ” in place of “ multiplicative functional.” We emphasize that the exceptional set in (ii) depends on t and s in general. Also it follows from (ii) and (iii) that, for each t and s, M , , , I M , almost surely. Sometimes it will be convenient to write M ( t , w ) for M,(w) and M ( t ) for M I . We say that a multiplicative functional M is measurable provided the family ( M I }is progressively measurable relative to {F,}. We say that M is right continuous (or continuous) provided t --f M,(w) is right continuous (or continuous) almost surely. In particular a right continuous multiplicative functional is measurable, It will be convenient to let M , = 0 for any MF. The relationship Mo = Mo(Mo 0 8,) = M i a s . implies that almost surely M , is either zero or one. We call a point x in Epermanent for M if P X ( M ,= 1) = 1, and we let E M denote the set of permanent points. Clearly E M E d *. In case X is normal, the zero-one law implies that x E E - E M if and only if P”(Mo = 0) = 1. We next give a few examples. The reader should verify for himself that these are indeed MF’s. (1.2)
For each a 2 0, MI = e-“’ is a M F and
EM
= E.
Recall from (2.18) of Chapter 11 that a terminal time T is an {F, stopping } time satisfying, for each t 2 0,
(1.3) T = t
+T
0
8, almost surely on {T > t ) ,
Let T be a terminal time and define M , ( o ) = 1 if t < T(w)and M,(w) = 0 if r 2 T ( w ) ;that is, M, = (t). In this case EMconsists of those points x in E which are irregular for T, that is, such that P”(T > 0) = 1. (1.4)
Terminal times will play an important role in this and later chapters. Intuitively one should think of a terminal time as thejirst time some physical event occurs. Since t + T 0 8, is then the first time the event occurs after t , the relationship (1.3) becomes intuitively clear. (1.5) Let X be progressively measurable with respect to {F,} and let f E 8 , . Define
M ,= exp( - J ‘ ~ ( xds). ~) 0
99
1. MULTIPLICATIVE FUNCTIONALS
Obviously Example (1.2) is continuous and (1.4) is right continuous. Example (1.5) is continuous i f f is bounded, but need not be even right is right continuous and we continuous i f f i s unbounded. However if {F,} define
then T is a terminal time and Nt
= I [ O , T , ( f ) exp(-S'f(X,) 0
ds)
is a right continuous multiplicative functional. In addition for each o the functions t + N,(w) and t + M,(w) agree for all values of t except possibly t = T(o).
In general we will only be interested in a M F during the time the trajectory t + X, is in E. With this in mind we make the following definition.
(1.6) DEFINITION. Two MF's of X , say M and N, are equivalent provided P"[M, # N , ; X , E El = 0 for all t and x.
If EA is a metric space and Xis right continuous, and if, in addition, both
M and N are right continuous, then this is equivalent to the statement that t + M,(w) and t + N,(w) are identical functions on [0, c(w)) almost surely. Observe that if M is a MF, then N , = M , I E ( X , )is a M F and that M and N are equivalent. We will ordinarily be interested in MF's only up to equivalence and consequently when this is the case we may assume without loss of generality that M,(w) = 0 whenever XI(@)= A . Finally note that if EA is a metric ) equivalent right space and X is right continuous then I E ( X , )and l L 0 , [ ) ( tare continuous MF's. The following proposition is of some interest. We leave its proof to the reader as Exercise 1.10.
PROPOSITION.Let M be a M F of X and let (9,) be right continuous. Then a necessary and sufficient condition that M be equivalent to a right continuous M F of X i s that t + E"(M,) be right continuous at t = 0 for all x.
(1.7)
If M is a M F of X we define for each t 2 0 an operator Q, on bb* by (1.8)
Qrf(x) = E"{f(X,) Mrl
= E " { f ( X t ) Mi;
Xi E El
(recall that numerical functions on E are extended to EA by setting f(A) = 0). Clearly Q, is a positive linear map from bS* to bb* such that Q, IPI where
100
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
PI is the transition operator of X . Moreover Qt
+ s f ( x ) = E"Cf(xt + s) = E"C(f0
Xt
0
Mi
+s1
0s) Ms(Mr 0 031
= E"{M,EX("[f(X,) = Qs
M,]}
Qrf(x),
and so { Q , ; t 2 0) is a semigroup-the semigroup generated by M . Obviously equivalent multiplicative functionals generate the same semigroup. The following proposition contains the converse of this statement.
(1.9) PROPOSITION. Two multiplicative functionals are equivalent if and only if they generate the same semigroup { Q , ; t 2 0). Proof. Let M and N be MF's. We must show that if E " { f ( X , ) M I }= E x {f ( X t )N,}for allfE bb*, then M and N are equivalent. For this it suffices to show that if t and x are fixed, then for each H E b F , one has E X { H M , ; XI E E } = E"{HN,; X I E E ) . But the set of H E bFt for which this last relation holds is a vector space containing the constants and closed under monotone convergence. Thus by MCT we need only consider H of the form n ; = , f i ( X , , ) where 0 It , < . . . < t, = t and eachfi E bb*. If n = 1 the desired equality is just our hypothesis. Suppose 0 It, < .. . c t, c t , + , = t ; then
If we let g,(x) =fn(x) Q t - , n f n + l ( ~ )the , induction step is clear. Thus (1.9) is proved. Let us point out that Q, need not be the identity operator; in fact Q o f ( x )= E"{J(X,) M , } . In particular if X is normal Q , f = I E M $ Finally, if Ed is a metric space and Xis right continuous and if M is right continuous, then Q,l(x) = E"{M,; X I E E } approaches Qol(x) = E X { M , ;X , E E } as t 40. In Section 3 we will give conditions under which there exists a Markov process whose transition semigroup is { Q,}. However, before that, we are going to show in the next section that under mild restrictions any semigroup that is dominated by { P , } is generated by a multiplicative functional.
2.
SUBORDINATE SEMIGROUPS
101
Exercises
(1.10) Prove Proposition 1.7. [Hint: show that, under the hypotheses of ~ , ~a ~right ~ continuous M ~ M F equivalent to M.] (1.7), N , = S U ~ ~ , defines (1.11) Let M be a M F such that almost surely the mapping t -,M , is nonincreasing. Let T = inf{t: M , = O}. Show that T is an {F",} stopping time and that T = t T 0 8, almost surely on { T > 1 ) for all t. In particular if {%,} is right continuous, then T is a terminal time.
+
2. Subordinate Semigroups Let X = (Q, A,A , ,X , , 8,, P") be a Markov process with state space ( E ,8).Let {f,}denote the semigroup of transition operators of X and B = bb*. We have seen in Chapter I that P,B c B for each t 2 0.
DEFINITION.A semigroup { Q,: t 2 0 ) of nonnegative linear operators on B is subordinate to {P,}if Q , f r P,ffor each tr 0 andfE B +.
(2.1)
It is an immediate consequence of the definition that IIQ,ll I llPrII I 1. Since P , f ( x ) = P,(x, dy)f(y) f o r f e B, it follows that for each t and x there exists a measure Q,(x, on 6* such that Q,(x, *) I P,(x, *) and Q , f ( x ) = Q,(x, dy),f(y) for allfe B. In particular Q,(x, A ) is a transition function on ( E , b*).We showed in Section 1 that if M is a MF of X , then the semigroup generated by M is subordinate to {P,}.The next theorem, which is due to Meyer [ 2 ] ,states that under mild regularity assumptions on X any semigroup subordinate to {P,}is generated by a MF. Before stating this theorem we introduce the following condition on X : 0
)
(a) There exists a countable family &? c 8 such that a(&?)* = b* and {x} E & for each x E E. (b) Given t > 0 and J, a countable dense subset of [0, t ] containing t , then a(X,: s E J ) " = %, . Recall that if Q c 9 then 9 is
(2.2)
(See (5.3) of Chapter I.) the completion of 9 in 9relative to the family {P}. Observe that if EA is a separable metric space with &?(E,) c bAc 9?(EA)* and if X is right continuous, then (2.2) is certainly satisfied. In particular (2.2) is satisfied if X is a standard process.
(2.3) THEOREM. Let X be normal'and satisfy (2.2). If { Q,} is a semigroup subordinate to {P,},then there exists a multiplicative functional M of X which generates { Q,}. If {F,} is right continuous and t + Q,l(x) is right continuous at t
=0
for all x, then one may take M to be right continuous.
102
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
Since X i s normal P,(x, .) = E , . But Qo(x, .) I P,(x, .) and therefore = c(x) E,. Plainly the fact that Q , = Q: implies that c(x) is either zero or one. Let E, = { x : Q o l ( x ) = l } ; then E, E b* and Q,(x, .) = ZEo(x)E , . The points in E, will be called permanent for { Q , } . Observe that for any t and x
Proof.
Qo(x,
a)
so that all of the measures Q,(x, -) are concentrated on E, . Also Q,(x, E) = Qo(x,dy) Q,(y, E) = 0 if x E E - E,; that is, Q,(x, = 0 for all t if x # Eo . Since Q,(x, .) I P r ( x , for each fixed x and t , it follows from the RadonNikodym theorem that there exists a function 9,(x, y) defined on R, x E x E such that Q,(x, dy) = 9?(x,y ) P , ( x , dy). We may assume that 0 I 9,(x, y ) I 1 for all ( x , y ) . Since x + P , ( x , A) and x--, Q,(x, A) are b* measurable for each t > 0 and A E I *it follows from (2.2a) and a theorem of Doob [I, p. 3441 (see also Dynkin [2, p. 2181) that we may assume that 9 , E I * x I*for each t. By the remarks in the preceding paragraph we may suppose that 9 , vanishes off E, x E , . It will be convenient to extend 9 1 to EA x EA by letting 91 be zero if either of its arguments is A. The extended function is again denoted by 9 , . Evidently q1 E I,*x I,* and vanishes off E, x Eo . Given t > 0 let V = {0 = t o < t , < .. . < t, = t} be a finite subset of [0, t] containing 0 and t. Let 9(U)= .(X,: s E V ) " ,and define 9)
a)
Clearly M , ( V ) is in 9 ( V ) c 9,, and 0 I M , ( U ) I 1. Also note that M , ( U ) (0) = 0 if X,(w) = A. As in Eq. (2.8) of Chapter I the formula
In particular, for any f E B and x E E
As we observed just following the statement of Theorem 2.1 I of Chapter I, the family { Q"(x, as U ranges over the finite subsets of [0, t ] containing 0 and r , is a projective system. In particular, if V = (0 = to < t , < .. . < t, = t } , V is a finite subset of [0, t ] containing V, and (D is an w function of the form a)},
2.
103
SUBORDINATE SEMIGROUPS
@ = h(X,, , . . . , X,") with h E bb"", then
EX{@ M , ( U ) } = EX{@M , ( V ) } . Consequently we have (2.6)
E"{M,(V) I m u ) } =
whenever U c V . Let { U,,} be an increasing sequence of finite subsets of [0, r], each containing 0 and t , and such that D = U,, is dense in [0, t ] . Let M: = M,( U,,) and 9" = 9 ( U n ) . It is immediate from (2.6) that { M : ; 9"} is a martingale relative to the measure P" for each x. Since each M: is in 9, and 0 I M: I 1, it follows from the martingale convergence theorem (( 1.4) of Chapter 0) that measurable random variable M , such that 0 I M , I 1 and there exists an 9, M: --+ M , as n + 03 almost surely. Moreover we may assume that M , = 0 if X , = A since each M: has this property. Suppose {On}is another increasing sequence of finite subsets of [0, t ] , each containing 0 and t, and having D = UO,, dense in [O, t ] . If R, is defined relative to the sequence {On}in the same manner as M , is defined relative to { U,,}, then we claim that M , = M t almost surely. To prove this it clearly suffices to suppose that U,, c o,, for each n. But then (2.6) implies that for p > n we have
u
EX(Rp I F n )= M:
= EX(Mp
I 9")
for all x. Letting p + 03 we obtain E"(R,- M , ; A) = 0 for all A E U,,.W' and hence, from (2.2b), for any A E 9,. Therefore M , = a, almost surely since both M , and R, are in 9,. For each t > 0 we define M , as above using an increasing sequence {U,,}. If t = 0 we define Mo = lEo(Xo).Observe that IEo(x)= Qo(x,{x}) = qo(x, x) P,(x, {x}) = qo(x,x). We will now show that { M , } is a M F and that (2.7)
Qtf(x) = E"Cf(x,>Mil ;
f e B.
Obviously 0 5 M , I1 and M , E F, for all t . It is also clear from (2.5) and the definition of M , that (2.7) holds. Moreover M , ( w ) = 0 if X,(w) = A . Thus it remains to verify that (2.8)
M I + ,= M , ( M , 0 0,) almost surely.
Assume first that t > 0 and s > 0. Let D consist of the rationals in [0, t + s], t + s, and all numbers of the form s + r with r a rational in [0, t ] . Let {U,,} be an increasing sequence of finite subsets of D, each of which contains 0, s, and t + s and such that D = U,,. Let V,, = U,, n [0, s] and let W , = { r 2 0 : r + s E U,,}.Clearly { W,,} is an increasing sequence of finite subsets of [0,1], each of which contains 0 and t , whose union is dense in
u
104
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
[0, t]. If U,, = (0 = to < tl < . . . < ti < s < is easy to verify that
< . . . < tp = t
Mt+s(Vn) = Ms(vn) CMt(Wn) 0
+ s},
then it
Osl,
and so letting n + 03 weobtain (2.8) in this case. If t = s = 0 then(2.8) isclear. Finally if t > 0 and s = 0 or t = 0 and s > 0, then using the fact that qu vanishes off Eo x Eo one can easily verify that (2.8) holds. We have now proved the first assertion of Theorem 2.3. But E X { M , }= Q , l ( x ) since M , = 0 if X , = A, and so if t + Q , l ( x ) is continuous at t = 0 for all x so is t --t E " { M , } . Therefore we can apply Proposition 1.7 to obtain a right continuous MF, N, equivalent to M provided {F, is} right continuous. This establishes the last sentence of Theorem 2.3. REMARK. Note that Eo is the set of permanent points of M , that is Eo = E M .
Exercises (2.9) Let X be normal and satisfy (2.2). Let Q,(x, A) be a transition function on (E, 8*)such that for each x E E and t 2 0 the measure Q,(x, -)is absolutely continuous with respect to P,(x, .). Let qt(x,y) be a density for Q,(x, .) with respect to P,(x, that is jointly measurable in x and y, and assume that q,(x, y ) = 0 if either x or y equals A. If U is a finite subset of [0, t ] containing 0 and t , then define M I (17) as in (2.4). Show that for each t 2 0 there exists a nonnegative M , E 9, such that M,( V,,)+ M , almost surely whenever { V,,} is an increasing sequence of finite subsets of [0, t], each containing 0 and t , such that UU,, is dense in [0, t]. Show that { M , ; t 2 0 ) satisfies (i) and (ii) of Definition 1.1. Let us call such a family a generalized nonnegatiuemultiplicatiue a)
functional (GMF for short). Show that Q , f ( x ) 2 E " { f ( X , ) M , } if f E B,. Finally show that Q , f ( x ) = E " ( f ( X , )M I }for allfe B, ifand only if { M , ( U ) ; U a finite subset of [0, t] containing 0 and t} is uniformly integrable relative to P".
Let X = (a,Po,F:,A', O f , P") be a Markov process of function space type having (E, 8) as state space (see (4.2) of Chapter 1). Assume X is F:,XI, O,, p) be another normal and satisfies (2.2). Let 9 = (a,Fo, Markov process of function space type with the same state space (E, 8). Show that the following two conditions are necessary and sufficient that the restriction of p to 9 :be absolutely continuous with respect to the restriction of P" to 9:for all x and t: (i) fi,(x, -) 4 P l ( x , .) for all t and x , and (ii) using the notation of (2.9) with Q,(x,A) = fi,(x, A), for each t and x, { M , ( U ) ; U a finite subset of [0, t ] containing 0 and t } is uniformly integrable relative to (2.10)
3. SUBPROCESSES
105
P". Moreover there exists a GMF, { M I } ,of X such that, for each x, M I is a version of the Radon-Nikodym derivative of b"I 9:with respect to P" 1 9:.
3. Subprocesses Let X = (R, A, A , ,X , , O r , P") be a Markov process with state space ( E , a).Let Eo E I* and let 8: be the trace of I *on E, . A Markov process Y with state space ( E , , 8:) is called a subprocess of X provided its transition semigroup { Q , } is dominated by {P,} in the following sense: Q , f ( x ) I P , f ( x ) for all x E E, a n d f 2 0 in bb:, wheref is taken to vanish on E - E, in forming P, f. If we define 0, on B = bI* by setting Q, f ( x ) = Q r f E o ( xfor ) x E E, wheref,, is the restriction o f f t o E, , and Q r f ( x )= 0 for x E E - E , , then { Q,} is a semigroup of nonnegative linear operators on B which agrees with Q , on bb:. The above definition is then equivalent to the statement that the semigroup { Q,} is subordinate to { P I }as defined in Section 2. We will now drop the notation 0, and write Q , for the extended operators defined on B. In the remainder of this section we will assume that X i s normal although this is not necessary for the validity of all that follows. If X satisfies the conditions of Theorem 2.3 and Y is a subprocess of X , then there exists a multiplicative functional M of Xsuch that for each x E E , , t 2 0, and f~ bd* vanishing off Eo one has ,!?"{f(Y,)} = E " { f ( X , ) M , } where fi is the expectation operator corresponding to Y. In this section we are going to establish a partial converse; namely, given a right continuous MF, M , of X we are going to construct a subprocess Y satisfying the above condition. Roughly speaking Y , is obtained by " killing" X I at a rate -dM,/M,; that is, - d M , / M , is the "conditional probability" that XI is killed in the interval ( t , t + dt) given that it is " still alive " at time t. We will now make this discussion precise. A) We begin with some definitions. Let fi = R x [0, co] and write & = (0, for the generic point in fi. Let g be the a-algebra of Bore1 subsets of [0, co] and set .k = A x 9. Let R : fi 4 i2 and y : fi 4 [0, 001 be the natural projections; that is, R ( W , A) = o and y ( o , A) = A. Let &A = (oA, 0) and A) = [(u)A A. Define 8,(&) = X,(w) if set = ([ o n) A y ; that is, ((o, t < A and XI(&) = A if t 2 A ; here, of course, 03 = ( 0 ,A). Note that p = inf{t: 8,= A}. We will often write simply o and A for n(03) and y(&). Define 8,03 = (O,w, (A - t ) v 0) where co - 03 is taken to be zero. Note that 8, 03 = dAand 8,o a,, = 8,+,for all t and h. Let fir= R x ( t , co] E A@, and define A@, to consist of all sets A E .k for which there exists a A E A, such that n fir= A x (1, 031.
e
(3.1) PROPOSITION. {A@,}is an increasing sequence of sub-a-algebras of
106
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
4 and 2,E &,/S, for all t. Moreover if
{ A , }is right continuous so is
{-/%,I. Proof. The reader will easily verify that {A,} is an increasing sequence of sub-a-algebras of A. If A E 8, then (8,E A} = { X , E A } x ( t , m] E J?,. Thus the first sentence of (3.1) is established. Regarding the second, fix t < 00 and suppose A E A,+ . Then for each n 2 1, A E and so there is a set A, E such that A n = An x ( t l/n, 001. It is easy to check that all the sets A,, are the same. If we set A = A1 then A E .MI+llnfor all n, and since { A , }is right continuous A E A,. Clearly then A n 6, = A x ( t , co] and so E A,. Thus the proof of (3.1) is complete.
+
We now suppose that we are given a right continuous multiplicative functional M of X . Let R, be the set of w's such that t -,M,(w) is right continuous, nonincreasing, and M,(w) is either 1 or 0. By definition Px(R,) = 1 for all x. For w E R, we define a measure a, on [0, m] by a,((& 001) = M,(w) for 1 E [0, co] and a,({O}) = 0. This defines a unique measure, a,, on the Bore1 sets of [0, m]. Since M , = 0 by convention, a, is a probability measure if M,(w) = I and a, = 0 if M,(w) = 0. For convenience we define aU = E, if o # R, . Since M AE 9, and R, E 9it is immediate that w + a,( I-) is 9 measurable for any r E 9. We next define measures p on as follows: if E d? = M x 9 and if = {A: (0, A) E A}, then A'" E 9 and a,(A'") exists for each w E Q. Moreover w --t a,(A") is in 9 since this is true for rectangles in A x W. (Use MCT.) Recall that E M = { X E E : P"(M, = 1) = I } is the set of permanent points of M. We now define
P(A)= E"[a,(A")] if X E E ~ , p = unit mass at dA= (u,, 0) if x E E , - EM. Clearly each p is a probability measure on (0,A).In order to avoid trivial
(3.2)
notational difficulties we will now assume, as we may without loss of generality, that R, = R. We come now to the basic result of this section. The notation is that established above.
(3.3) THEOREM.2 = (0,A@, A,, 8 , ,8,, p) is a Markov process with , )E"{J(X,) } M,} for all J E bb*; state space (E, b*) such that ~ " { f ( ~ = that is, 8 is a subprocess of X whose transition semigroup is generated by M. = A) = I . If A Proof. Since A 4 EM it is clear that fiA(8, {g,E A } = {XIE A } x (t, m] and so
(3.4)
P12 E A ) = E"{M,; x,E A } .
E b*,
then
107
3. SUBPROCESSES
Therefore x + p ( 8 , ~ A is ) b* measurable for each A E&*. Since (3.4) remains true if one replaces IA by any f E bb*, the only thing that remains to be checked is the Markov property of 8 ;that is, for all B E B*, t , s, and x we must show that (3.5)
F{lB(8,+s); A} = p{fii(f)[1B(8s)]; A}
for all A E A,.If x 4 EM both sides of (3.5) are zero, and so we may suppose that X E E M . Observe first of all that MCT implies that if Y Eb.& and x E EM then
F(Y ) =
E X {
1
Y ( 0 , I ) cc,(dl)).
By definition of d t ,A n f i , = A x ( t , co] where A E A , . Since B c E, {$,+, E B } c and so {8,+, E B } n A = ( ( X , + , E B }n A) x ( t s, co]. Therefore the left side of (3.5) may be written
+
Ex{lB(xt+s)
Mt+s;
A> = E x { E X " ' [ ~ B ( x s )
Ms]Mt;
A}*
On the other hand if F ( y ) = E y [ I B ( 8 J = ] Ey[IB(Xs)M,],then since F ( A ) = 0 the right side of (3.5) becomes
E x [ F ( 8 , ) ;A
n fit] = E X { F ( X , )M,;A},
and hence (3.5) is verified. Thus Theorem 3.3 is established. In the sequel we will refer to the process 8 constructed above as the canonical subprocess corresponding to M . Note that we do not complete the a-algebras A and A?, in the appropriate manner relative to {P.'; x E E A } . Thus for canonical subprocesses we do not assume condition (5.15) of Chapter I. Of course, p is the lifetime of 8. It is immediate that if M and N are equivalent right continuous multiplicative functionals, then the corresponding canonical subprocesses are equivalent processes. Also note that if x E EM, then p[fi,= x] = P x [ X , = x] = 1 while p [ 8 ,E E l = 0 if x E E - EM. Consequently it is natural to ask if X can be considered as a Markov process with state space ( E M ,8;) where 6'; is the trace of b* on EM. Unfortunately this is not always possible. See Exercise 3.19.
(3.6) REMARK.Let A* = A x {S, [0, a]}.One should think of A* as the a-algebra in fi containing the information in the original process X. From the definition of the measures p it follows immediately that
P*{y > t I A*}= M ,
0
n
for all x E E,. In particular the conditional probability depends only on the behavior of the original process in the time interval [0, t ] . Also it is this
108
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
equality that is taken as justification for the intuitive statement that obtained by " killing " X at a rate - d M , / M , .
8 is
(3.7) REMARK.Suppose that T is an {St,} stopping time such that t + T 8, = T on {T > t} for each t 2 0. This is slightly stronger than assum0
ing T is a terminal time for X , which requires that the above identity should hold only almost surely for each t 2 0. For simplicity assume that T I (. This can always be accomplished by replacing T by min(T, (). If M , = ILO,T)(t), then M , is a right continuous MF. In this case EM is the set of points in E which are not regular for T, and u, is easily seen to be unit mass at T ( w ) if T(w) > 0. If @ : Q + 6 is the map w + ( w , T ( w ) ) ,then @ E .A?/&. Finally define f , ( w ) = X,(w) if t < T ( w ) and f , ( w ) = A if t 2 T ( w ); 8, w = 8,w if t < T ( w ) and 0,w = w,, if t 2 T(w). It follows from the definitions that ft, O @ = 8 , and 8,+s o as. Also if A E .A? and A = A x (A, 031, then @-'A = A n {T > A} and consequently P W - ' A = P"(A n {T > A}) = E"(M,; A) = p ( A ) . Therefore P" @-' = on & and this makes it clear that 8 = (Q, 9, f , ,8,, P") is a Markov process equivalent to 8. It is usually simpler to work with 8 than with 8 when M has this special form.
=xi
(3.8) REMARK.If T is merely assumed to be a terminal time for X and M ,= I L O , T ) (one f ) can still carry out the above construction o f 8 , and it is evident that, for each x, 8 is a Markov process over (Q, A, P") in the sense of Definition 1.1 of Chapter I and that it is equivalent to 8.The only drawback is that f , , ,= 8, g,, only almost surely in this case. This is an unimportant difficulty, but as a result 8 does not satisfy the assumptions of Definition 3.1 of Chapter I as it stands. 0
We are now going to investigate the regularity properties of 8. For example if EA is a metric space and X is right continuous (or has left-hand limits), then it is obvious from the construction that 8 has the same properties. In order to discuss the strong Markov property for 2 or the quasi-leftcontinuity of 8 we prepare the following lemma. We use the notation already developed in this section. (3.9) LEMMA.Let ?be a stopping time relative to {J?,}.Then: (a) There is a unique { A , }stopping time T (defined on Q) such that T A y = (T 0 n) A y. (b) Let p a n d T be as above. Then given A E J?? there is a set A E &IT such that A n { < y } = (A x [0, 001) n ((0,A): T ( w )< A}. REMARKS.y is an {d,} stopping time since {y It} n 6 , is empty for each t. Observe that y 0 8 , = y - t on { y > t } . The relationship whose
3.
109
SUBPROCESSES
existence is asserted in (a) may, of course, be written
p(0,A) A A
(3.10)
= T(w)A
A
for all w and 1. Proof of (a). First of all suppose that p i s some numerical function on fi and that T is a numerical function on R which is related to $by (3.10); then it is evident that T is uniquely determined. Suppose in addition that p i s an {J?,} stopping time. If A = { w : T ( w ) I a}, then A x (a, co] = {PI a} n fia. But by the definition of {A,} and the fact that pis an {d,} stopping time this last intersection is of the form A, x (a, co] with A, E Aa.Consequently T is an { A , }stopping time. Let f denote the class of all {d,} stopping times p f o r which there exists a numerical function T on R related to by (3.10). Write T = dD(T). We have already seen that T is uniquely determined by T and that T is an { M I }stopping time. Next let { p,} be any family of elements o f f and T, = @(pa). If 5'= inf pa is known to be an {A,} stopping time, then obviously FEf and @(p)= inf T,. Finally suppose that f' is an {A,} stopping time of the form
T(w, A) = a = 03
if
(0,
A) E fa,
if ( w , , ~ ) # f , ,
where faE 2,. Now f,nfi, = rax (a, 001 with r,E M a . If we define T(w) = a for w E r, and T ( w ) = 00 for w 4 r,, then T is an { A , }stopping time and one checks easily that p and Tare related by (3.10). But any { A t } stopping time pis the infimum of a countable family of stopping times of this special form and so from our previous remarks it follows that TE'?. Thus Assertion (a) is established. Proof of (b). Recall that saying A E d7, means that { p I a } E dafor all a E [O, 00). Given A let
ii, = A n { ( w , A): ~
A Ed
and that
An
( wI ) u , A > a } = ii n { T I a } n fia
ua
for each a 2 0, so that A n { p< y } = A,, the union being over the noneach A, is of the form negative rationals, for example. By the definition of d, A, x (a, 001 with A, E A,. Plainly A, c { T I a} for all a, h b c A, if b I a, and A, n { T I b} = n { T I b} if b 5 a. Consequently if b 2 a then A a n { T I b} = A , E .Mac& b , while if b I a, A a n { T I b} = A b n { T I b} E M b. Therefore each A, is in AT. If A = Aa , the union being over the nonnegative rationals, then A E AT and A n {T I a} = A, for any nonnegative rational a. Therefore
ua
110
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
(A x [0, m]) n {(a,A): T(a)c A} = u ( A n ( T Ia } ) x ( a , a] a
= U
=Uha=hn{f'
completing the proof of (b). Let us assume now that X is strong Markov (as well as normal) and that M is a (right continuous) M F of X , and let 8 be the canonical subprocess corresponding to M. We are now interested in conditions that insure that 8 is strong Markov. Simple examples show that this is not always the case. (see Exercise 3.19). First note that since M is right continuous { M I } is proand hence with respect to { A , } gressively measurable with respect to {F,}, since we are, of course, assuming that A, 2 F t . See (5.1 5 ) of Chapter I. If T is an { A , }stopping time then Theorem 6.11 of Chapter I implies that M T E A T .
(3.11) DEFINITION.A MF, M, of X is said to be a strong multiplicative functional (SMF) provided that M is right continuous and satisfies E"[f(xt+
7') M I +
TI = E"{EX'T'[f(Xt)
for all x, 1, { A , } stopping times T, and f
E
MT)
bd.
PROPOSITION. Let X be a strong Markov process and let M be a SMF of X. Then the canonical subprocess 8 corresponding to M is strong Markov.
(3.12)
Proof. Let f' be an {A@,}stopping time. By (3.9) there exists an {.A',} stopping time T such that '?(a, A) = T(a)on {f'(a,A) < A}. Therefore if A E b*, then ( 8 ,E A} n { It } n fi, = ({A', E A } n { T I t } ) x ( t , a] and so 8, E dT/&:. Hence according to (8.2) of Chapter I it suffices to show that W ( % + T H
for allfe bS*. Sincef(A)
= 0 we
= Ex(~i"'f(ft,)}
may assume that '?
Bx{f(8~+?)} = B x { f ( x ~ + T ) ; + < 71 = E"(f(Xt+T)
MI+T}
= E"{Ex"'[f(X,) h f , ] M T } =E"{EX(T)[f(8,)]hfT} =
and Theorem 3.12 is established.
&{E"9f(rZI)]},
3.
111
SUBPROCESSES
(3.13) PROPOSITION. Suppose EA is a metric space, 8, =I 9(EA), and X is quasi-left-continuous; then so is 8.
Proof. Let { p,,} be an increasing sequence of {A@,} stopping times with limit p; then we must show that 8(p,,)+ 8 ( p ) almost surely on { T < p } . Since ( = A y we may assume PI y. Thus there exist { A , }stopping times T and T,, such that ?= T A y and p,, = T,, A y. Consequently T,,t T, and therefore if XEE,
P[8(in)i+r7(f); T < [] = li"[X(T,,)t,X(T);T =E"[MT;
<5
A
y]
X(T,,)*X(T); T <
If x 4 E M , then 8(p,,)= A = 8(f) almost surely Proposition 3.13 is complete. In order to study the quasi-left-continuity of the following condition.
c] =o.
p", and
so the proof of
8 on [0, 00) we introduce
Let { R,,} be an increasing sequence of { A r }stopping times with limit R ; then M,,, .+ M, almost surely on { R <
(3.14)
c}.
PROPOSITION. Suppose EA is a metric space, b,= 9(E,), and M satisfies condition (3.14). Then if X is quasi-left-continuous on [0, a),so is 8.
(3.15)
Proof. Let {p,,} be an increasing sequence of {A?,} stopping times with limit F, Let T,, and T be { A r stopping } times such that p,, A y = Tn A y and P Ay = T A y. A simple inspection of the situation, using the definition of shows that it suffices and the fact that Xis quasi-left-continuous on [0, a), to verify that
r = {(a,A): ((a)> A, T,,(a)< A for all n, T ( a )= A} has P " measure zero for all x
E E M .But
for each n
r = w , ~ ~) :~ (<0A I ) ~ ( w ) , and so for each x
E
<
c(m,
EM
p(r)5 E x { M T n- M T ; T < [}
+O
as n + 03 by (3.14). Thus Proposition 3.15 is established. COROLLARY. Suppose X i s a standard process and M is a SMF of X satisfying:
(3.16)
112
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
(i) EM = E, i.e., all points are permanent for M; (ii) M, E 9 7 for each t . Then fz = 9, @, , 2,, 0, , "), where @ and 3@, have their usual meanings relative to the process 8,is a standard process (with the same state space as X). If in addition Xis a Hunt process and M satisfies (3.14), then ft is a Hunt process.
(a,
REMARK.Theorem 4.12 of the next section implies that if M is a right continuous MF with EM = E, then M is a SMF. Consequently in the hypothesis of Corollary 3.16 one need not assume that M is a strong MF; this actually follows from Condition (i) of (3.16). Proof. Since X is a standard process, {.,+'if} is right continuous and hence so is {dt}. But @: c d, and so fz is Markov relative to {&:+}. Consequently it follows from (8.12) of Chapter I that {@,} is right continuous. Clearly 2 is right continuous and has left-hand limits on [0, [) almost surely since X enjoys these properties. Next, in view of Theorem 7.3 of Chapter I, in checking the strong Markov property and the quasi-left-continuity of ft it suffices to consider {@:+} stopping times; since @+: c &, these properties follow from Propositions 3.12 and 3.13. So far we have not made any use of the two assumptions on M in our theorem. However (i) implies that ft is normal. (See the discussion following the proof of Theorem 3.3.) Finally if A E 8 thenP(fz, E A) = EX(M,;X , E A ) and so (ii) implies that x + p(8,EA) is 8 measurable. Therefore we may regard (E, 8) as the state space offt, and the first sentence of (3.16) is proved. The second sentence follows immediately from (3.15) of this chapter and (7.3) of Chapter I. (3.17) EXAMPLE.Let X be a standard process and let M, = e - p r , P > 0. Let X B denote the process in (3.16) corresponding to this MF. Then X p is called the b-subprocess of X. It follows from Corollary 3.16 that X a is a standard process, and if Xis a Hunt process then so is X p . If Pf denotes the transition semigroup of X p , then obviously Pf = e-prP,. In particular a function is a-excessive for X Bif and only if it is a + P excessive for X . The use of the P-subprocess will turn out to be an important tool in studying the original process X . Let us close this section by giving conditions under which (EM, b:), where 8; is the trace of b* on EM, may be taken as the state space of 2.An obvious necessary condition is that P x ( f t ,E E - EM for some t < [) = 0 for all x in EM. If we are willing to delete from fi a set which is almost surely be the hitting null, this is also sufficient. With this in mind let R = T E - E M time (for X) of E - EM and let us assume that R is an (9,) stopping time. Since 8,= X , if t < p the probability in question is just
4. RESOLVENTS AND STRONG MULTIPLICATIVE FUNCTIONALS
P[R < c]
= P[R < y ; R = E X { M R ;R
113
< []
< c).
Thus if R = TE-Ew is a stopping time, then a necessary and sufficient condition that we may regard ( E M ,)&; as the state space for 8 (after a trivial modification of is that M R = 0 almost surely on { R < [}, See Proposition 4.21 in this connection.
a)
Exercises
Show that there exists a standard process having the transition function defined in (9.16) of Chapter I. [Hint: let X be uniform motion to the right in R and obtain the desired standard process as a subprocess of X.] Show directly that condition (3.14) is not satisfied by the M F involved in the construction. (3.18)
(3.19) Let X be the Brownian motion process in R. Let M , ( o ) = 1 for all t if Xo(w) # 0 and let M,(w) = 0 for all t if Xo(w) = 0. Show that M = { M I }
is a continuous M F of X . Show that M i s not a SMF and that the corresponding canonical subprocess 8 is not strong Markov. [Hint: consider T = inf{t: X , = 0) and use (3.18) of Chapter 11.1 Describe 8 in this case. Note that E - E M = (0) and that dX[8,E E M for all t < p] = 0 if x # 0. Consequently E M can not be taken as the state space of 2. The notation is that of Proposition 3.1. Show that if f(o,I ) is A, measurable, then f ( w , A) =f(w, p ) whenever I , p > t. [Hint: show that if 3. b d , , then there exists an f E b A I such that f(o, A ) Zfit(o, I ) = (3.20)
f(44l,rn](~).l (3.21) Let X be a standard process and let G be an open subset of E such that each x in EA - G is regular for Eb - G. Show that TEA-fi= TEA-G almost surely. Let T = TEA-aand let r? be constructed as in (3.7) using this T. Show that we may regard (G, %) as the state space of r? where 9 = W(G) is the c-algebra of Bore1 subsets of G.If A is redefined as the point at infinity of G, then show that f is a standard process. f is called the restriction of X to G.
4. Resolvents and Strong Multiplicative Functionals In this section we are going to develop some criteria to insure that a right continuous M F is a SMF. Our main tool is a result about resolvents (Theorem
114
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
4.9) which is due to Meyer. Therefore we will first develop some general facts about resolvents and prove Theorem 4.9, after which we will return to the main problem of this section-the characterization of strong multiplicative functionals. We will assume throughout this section that X = (a,A, A , , X , , B , , P") is a given standard process with state space (E, b), although slightly less stringent assumptions would suffice. We will let B denote the Banach space b&*. Let M = {M,} be a right continuous M F of X and let Q , f ( x ) = E X { f ( X , ) M,} denote the corresponding semigroup on B. If YE C = C ( E ) , then t -+ Q, f ( x ) is right continuous for each x and hence ( 8 , x) -P Q, f ( x ) from [0, t) x E -+ R is in b(W, x &*) for each t when f~ C, where, as usual, W, denotes the Bore1 sets of [O, t ) . This last statement then extends to any f~ bb by use of MCT. Thus for any a > 0 we can define V " f ( x )= e-"'Qt f ( x ) dt, and V" maps b b into B. If Y Ebb and f 2 0, then V " ~ I U"fand so V" is given by a measure on (E, b) which we denote by V"(x,dy). Evidently V " f ( x )= j V"(x,d y ) f ( y ) and V" I(x) Ia - l . Each of the measures V"(x, -) extends uniquely to b* and hence we can define V u . f ( x )= V"(x,d y ) f ( y )f o r f e B. We next show that V% c B. To this end let p be a finite measure on b*. Then p V a ( B ) = j p(dx) V"(x,B) defines a finite measure on 8.Thus givenfE B there exist g , h E bB with g sf I h such that pV"g = pV"h. But this implies that V"g IV " ~ I V"h and V'g = V"h, a.e. p. Consequently V ~ B. E Let us next show that V " f ( x )= e-"' Q , f ( x ) dt forfE B. We know that (1, x) -P Q , j ( x ) is in b(W x b*) and that this formula holds for f~ bC. Given a finite measure v on b* and a finite measure 1 on 9 we can define for B E 8
5;
PL(B) =
J[ Qr(x,B ) 4 d t ) v(dx),
and obviously p is a finite measure on 8. Now i f f € B there exist g, h E bb such that g sfs h and p(g) = p(h). Thus Q , g ( x ) IQ r f ( x )IQ , h(x) for all ( t , x) and
Therefore ( 1 , x) -+ Q , f ( x ) is in (W x b*)*," (this is the completion of W x d* with respect to A x v) for all 1 and v. Thus for each x a n d f e B we can form e-"' Q , f ( x ) dt, and since this is a measure in f and agrees with V " f ( x ) forfE b b we must have V " f ( x )= e- Q , f ( x ) dr forfE B. In particular the above argument shows that t Q , f ( x ) is Lebesgue measurable for each x. Finally note that we have enough joint measurability in Q , f ( x ) to use Fubini's theorem.
4.
RESOLVENTS AND STRONG MULTIPLICATIVE FUNCTIONALS
115
There remains one more measurability detail to discuss: namely, to show that V " f ( x )= E x
Jo
e - " ' f ( X , )MI dt
for all f~ B. I f f € C it follows from the right continuity of the paths that W x %, and this continues to hold for f E bB. One now shows by an argument similar to that used above that (t, w)+ f ( X , ( w ) ) is in (9 x %)A,r for all 1and p whenfis in B, where the a-algebra in question is the completion of W x % with respect to 1 x P p , ,Ia finite measure on 9 and p a finite measure on Thus we can form the expression on the right side of (4.1) for anyfE B and since it is a measure which agrees with the left side of (4.1) on C the equality in (4.1) must obtain f o r f e B. Again we have enough joint measurability inf[X,(w)] to use the Fubini theorem. The family { V " ;a > 0 ) has the following properties: (t, w ) + f ( X , ( w ) ) is in
(4.2)
(i)
IIv"~~
I
a-l,
(ii) V " ~ I U"fiffEB+; (iii) 1/" -
V B = (fi - a ) 1 / " V , a, fi
> 0.
This last relation is the resolvent equation and is an easy consequence of the fact that {QI; t 2 0} is a semigroup (or (4.1) and the Markov property). We now make the following definition. DEFINITION. A family { V " ;a > 0 ) of positive linear operators on B is called a resolvent subordinate to { U"}provided it satisfies (4.2).
(4.3)
In particular it follows from the above discussion that the resolvent of a semigroup generated by a right continuous M F is subordinate t o { V " } . It follows from (4.2iii) that V " V B= V B V "for a, fi > 0. Sincef+ U " f ( x ) is a finite measure, Condition (4.2ii) implies thatf- V " f ( x )is a finite measure also. As usual, we denote it by V"(x, Hence iff 2 0 is in B*, V " f = V"(.,f) exists and is in 8: . Also i f f € B and a > 0, j 2 0, then (4.2) implies a).
(4.4)
I]v+Bv"f= V"f- V"+Bf+ ( a - fi)V+BYolf +
V"f(in norm) as I] -,00.
(4.5) DEFINITION. A function f E 8: is called a - V supermedian if fiV"+pS 5.f for all fi > 0, and is called u - V excessive if, in addition, fiVa+pf+f pointwise as j+ co.
Obviously the minimum of two a - V supermedian functions is again
116
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
- V supermedian. Also pV"+BI 5 /?U"+Bl5 /I/( +.p), implies that nonnegative constants are a - V supermedian for any a.
a
(4.6) PROPOSITION : (i) Iff E S*,then V y i s a - V excessive. (ii) Iff is a - V supermedian, then p + pV"+pf is increasing and f = lima,, pVu'pf is the largest a- V excessive function dominated by f . We callfthe a - V excessive regularization off, and we have VBf = Vpffor any p > 0. Proof. For bounded f the first statement is an immediate consequence of (4.4). This statement for unbounded f then follows from (4.6ii) if we approximatefbyf, =f A n. Thus we need only prove (4.6ii). If p > q, then p ~ l + S- ?y'+q = pv"+q + p(q - p ) I / " + B j / " + s - qV"+w = ( p - q)V"+V[I
- PV"+@].
It is immediate from this identity that iff is a bounded a - V supermedian function, then p -, V U f af ( x ) is increasing. For general a - V supermedian fletf, = f A n ; thenf, t f and eachf, is a - V supermedian. Hence for each n if p > q, then jVfl+y, 2 q V q + y , and letting n -+ 00 we obtain the first conclusion of (4.6ii). Thus f ( x ) = limfl+,, p V " ' B f ( x ) exists when f is a - V supermedian. Clearly f~ a*, and f If.Suppose g is a - V excessive and g s f; then pV"+flg jV"+pfand letting p-+ 00 we find that g sf.Thus to complete the proof of Proposition 4.6 we must show that f is a - V excessive. Suppose first that f is bounded; then from the monotone convergence theorem and (4.4) we have
Pf=~
f Iim l [q+m
1VY,
qV+"j
= lim q ~ f l V " + ' J f = rl+m
for any > 0. Consequently BV"+Pf= pV"+@ft j as j?-,00, and so f is a - V excessive. I f f is unbounded let f, =f A n again; then pV"+pf,, is increasing in both p and n. Therefore f = limb limnpVa+pf, = lim, 1,. But it is easy to see from what has already been proved that the limit of an increasing sequence of a - V excessive functions is a - V excessive, and so f is a - V excessive. We already know that Vpf,, = Vsf,for each n and p > 0, and hence letting n --f 00 we obtain the last conclusion of (4.6). (4.7) PROPOSITION. I f f E B+, then U . f - Vo'f is a - U supermedian for any a > 0.
4. Proof.
Iff
RESOLVENTS AND STRONG MULTIPLICATIVE FUNCTIONALS E
117
B+, then (U'f - V " f )E B+ for any a > 0. Moreover
uy- vy- pup+yuy- vy>2 uy- vy- pu@+"u"f+ flP'"vy = U"++
va+Sf20,
and so Proposition 4.7 is proved. Recall from (2.3) of Chapter 11 that f is a-excessive (for X) if and only iff is u - U excessive in the sense of Definition 4.5. (4.8) DEFINITION. A resolvent { V"} is exactly subordinate to { U"} provided it is subordinate and, in addition, U y - V y i s a-excessive (relative to X) for allfe B+ and u > 0. Example. Let B be a nearly Bore1 subset of E and T = T B . If V " f ( x )= Exj: e - " ' f ( X t )dt for f~ B, then { V " } is a resolvent subordinate to { U"}; in fact it is just the resolvent corresponding to the right continuous MF, MI = IIO,T)(t).I f f € B+, then one easily computes that U y - V " f = Pi Uy, and so { V"} is exactly subordinate to { U " } according to Proposition 2.8 of Chapter 11.
(4.9) THEOREM. Let { V"} be aresolvent subordinate to { U"}.Then W " f ( x )= PUPV"f ( x ) exists for all f ' B,~ u > 0, and x E E and the family { W"}is a resolvent exactly subordinate to { U " } . Moreover for each u > 0, V"(X, I W a ( x , for all x, with equality precisely for those x for which /3UpVyl(x) + V y I(x) as /3 + 00 for some y > 0. In particular equality holds at any x for which /3Va I(x) + 1 as /3 + co. a)
a)
Proof. We first prove the existence of the limit in question. From the resolvent equation u a = U t + a + fluB+aum UP = UP+" + UUP+"Ufl,
and so iff E B+ we have Uo'f - pusv"f=V+"f+ fluB+"U"f - flUS'"V"f- / 3 u ~ P + + ' ~ P v " f =
/3uP+yuy- vy] + O(l/P)
since IIUyll I y - ' . Because of this equality, Propositions 4.6 and 4.7 imply that W y = JimP,, pUPV"fexistsfor f E B+ and that U y - W y i s the u - U excessive regularization of U y - V y Consequently W y = FUPVy
118
111. MULTIPLICATIVE FUNCTlONALS AND SUBPROCESSES
exists for allfE B. Clearly V p t f ~Wptfr Uptffor f E B,, and hence, for each a > 0, W" is a bounded positive linear operator on B which is given by measures Wa(x,.) satisfying Va(x,.) I W a ( x , I Ua(x,.) for all x in E. Let us next show that the family { W " } is a resolvent. If ~ E B ,it follows from the above remarks and Proposition 4.6 that, for any p > 0, U f l ( U . f - Vptf) = U f l ( U . f - Waf), and hence that U p V " f = U pW y for any f E B,. Therefore U p V " = U BW". Furthermore for f E B, one has a)
0 = Ufl[W" -
V"lf'2
V q W " - V " l f 2 0,
and consequently V pW" = V pV". Now V" - Vp = (B - a)VpV" = (p - a)VpW", and operating on this relation by q U q and letting q + co one obtains W" - W @= ( p - a)WflWa. Thus { W " } is a resolvent and since U Y - W y i s a-excessive (it is the a - U excessive regularization of Uaf - V"f and hence is a-excessive) for any f~ B,, it follows that { W " } is a resolvent exactly subordinate to { U'}. We have already seen that V a ( x ,.) IW a ( x ,.) for all x and a. Suppose /?UflVyl(y) + V yl(y) as p + co for some fixed y > 0 and y E E. By definition /?UflVYl + Wyl pointwise as p + co and hence V y ( y ,.) = W y ( y , Therefore as /?+ co, / ? U f l V Y f ( y ) +W y f ( y )= V y f ( y )for all ~ E B Now . given any a > 0, Val - Vyl = (y - a)VyV"l and operating on this by flupand letting /? -+ 00 we obtain W" l(y) = V y l(y) (y - cr)VyVal(y) = V" l(y). In other words if /?UflVyl(y) + V y I(y) for some y > 0, then this convergence holds for all y > 0. We have now proved all except the last sentence of Theorem 4.9. If /?VBl(y) + 1 as p-+ 03, then PUP l(y) - p V B l(y) -0. Let a > 0. Then aV"1 I1 and so ( P U P- b V p ) V al(y) I (l/a)(pUp - P V p ) l(y) -0. But /?VflV"l+ Val and hence p U p V " l(y) + V" l(y). Therefore W a ( y ,.) = V a ( y ,.) and the proof of Theorem 4.9 is now complete. 9).
+
(4.10) COROLLARY.Let M be a right continuous M F of X and let {Q,} and {V'} denote the corresponding semigroup and resolvent. Let { W " } be the exactly subordinate resolvent corresponding to { V " }; then V " ( X , .) = Wa(x,.) for all x E E M .
Proof. If X E EM then Q, l(x) = E X { M , ;t BVfl l(x) + 1 as p + 00, proving (4.10).
-= [} + 1
as t + 0 , and so
We turn now to the main topic of this section-the characterization of strong multiplicative functionals, which were defined in (3.11). We remind the reader that for any MF, M, we have M , = 0 by convention.
4.
RESOLVENTS AND STRONG MULTIPLICATIVE FUNCTIONALS
119
(4.11) DEFINITION. A right continuous MF, M = { M , ] , is called regular (RMF) provided P X [ X T € E- E M ; M T > O ]= O for all x and all {A,} stopping times T. Perhaps we should point out explicitly that the properties of being a regular MF or a strong MF depend only on the equivalence class (in the set of all right continuous MF's) to which a MF belongs. In other words if M is a RMF (SMF) and N is a right continuous M F equivalent to M , then N is a RMF (SMF). We will eventually show that these two notions are equivalent; that is, M is a RMF if and only if it is a SMF. We begin with the implication in one direction. (4.12) THEOREM. Any RMF is a SMF. Proof. Let M be a RMF and let { Q , } and { Va} be the corresponding semigroup and resolvent. Let { Wa} be the exactly subordinate resolvent associated with { Va} in Theorem 4.9. It follows from (4.10) that for each f E C , and a > 0 the function ga = Was= U y - ( U ' f - Waf) is bounded, nearly Bore1 measurable, finely continuous, and agrees with V"f on E M . Given an {.MI} stopping time T recall that T'")= (k 1)2-" if k2-" 5 T < ( k + 1)2-" and T'")= co if T = co, k = 0, 1 , . , is an { A , }stopping time and T(")1T. Consider for a > 0 andfE C,
..
/ e-"' f ( x +f
+
m
qa(x) = Ex
T)
Mf+ T dt
0
m
= lim
EX
Jo e-a'f[X(t
n
+T
~~ I(
+t
~ ( n ) dt )
cEx(/owe-a~~(xf+,,-n) dt; lim c jOme-a'/(X,) M , dt]hfk,-n;
= lim n
= n
= Iim
Mt+kz-l
k
k ~ x(~ x(, z-n)[
~ ( n= ) k2-"
1
~ ( n= ) k2-")
E " { v " ~ [ X ( T ' " ' ) ] M(T'"')}.
n
Since M is regular this last expression becomes lim EX(ya[X(T'"')] M(T'"')}= Ex{ga(X(T)) MT} 1,
and so
= E x { V a f ( X T )M T } ,
120
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
q"(x) = E"[I/"f(XT)
hfT]
= E'P("[
/ome-a'f(XI) hf, d t ] M T ) .
Consequently the functions q(t) = E " { f ( X , + , ) M I + T } and @ ( t ) = E x { E X ( T ) L f (M,]kfT} ~,) have the same Laplace transforms, namely qa(x), and since cp and @ are both right continuous the uniqueness theorem for Laplace transforms implies that they must be identical. But this is just the statement that (3.1 1) holds for any f E C+ and consequently for any f E B. Thus Theorem 4.12 is proved.
(4.13) DEFINITION. A right continuous MF is said to be exact (EMF) provided the corresponding resolvent { V'} is exactly subordinate to { U"}. The following proposition is a consequence of the proof of Theorem 4.12.
(4.14) PROPOSITION. Any EMF is a SMF. Let M be a SMF. Then for all {A,}stopping times (4.15) PROPOSITION, T and Y E b 9 one has E"{(Y 0 e T ) h f f + T ; A} = Ex{EX(T)(YM,)M T ; A} for all A E A T ,t, and x. Proof. As usual it suffices to consider the case A = 0 and Y of the form f i ( X , , ) with t, . .. t, and fjE 68: ,j = 1, .. ., n. Of course, Proposition 4.15 must essentially express the fact that the canonical subprocess 9 corresponding to M is a strong Markov process, and we will deduce (4.15) from this fact. Suppose first of all that we have proved (4.15) for Y's of the above form with 1, It. If t, < ... < t j I t < z ~ <+ ~. < t , , then the strong Markov property for X yields
-= -=
..
which, according to the supposition we have just made, becomes
Thus the general case is reduced to the case t , I2.
4. RESOLVENTS AND STRONG MULTIPLICATIVE
FUNCTIONALS
121
To handle the case t , 5 t we use the fact that 2,the canonical subprocess corresponding to M, is a strong Markov process. The notation is that of stopping time, and so Section 3. If $= T A y, then p i s an {A,}
and hence Proposition 4.15 is proved. (4.16) COROLLARY.Let M be a SMF, T an { A , }stopping time, Y E b 9 , and R E 9, R 2 0. Then
Ex{( Y BT)M(T + R 0 O T ) ; A} = E x { E X ( T )Y( M R ) M T A} ; 0
for all A E ATand x. Proof. We leave it to the reader to check that MR E 9 and M T + R o BETA under the stated conditions. Let R(") have its usual meaning; then {R(') = k2-"} E 9 a n d {R'")0 e T = k2-"} = 0; '{R'")= k 2 - " } . Recalling that M , = 0 by assumption, the left side of the desired equality equals
lim E"{(Y 0 BT)M(T + R(")0 OT); A} n
= lim n
=lim n
C Ex{( Y
0
eT)kfT+kZ-n;
R(")0 e T = k2-"; A}
k
C E x { E X ( T ) [ Y M k yR'"' n ; = k 2 - " ] h f ~A} ; k
= Ex{EX'T'[YhfR]MT; A},
completing the proof of Corollary (4.16). (4.17) THEOREM.Let M be a SMF, T a n (9,) stopping time, and R E A, R 2 0. Then (4.18)
almost surely.
M T ( ~ ) + R ( ~ )=( oM) T ( ~ ) ( UM)R ( ~ ) ( ~ T W )
122
Ill. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
Proof. In general we will omit the o ' s when writing expressions such as (4.18). For example (4.18) will be written M T + R = M , MR(&). The reader should distinguish carefully between MR(OT)= MR(u)(OTo)and M R o 0 , = MR(~To)(eT~ First ). note that (4.18) holds on { R = a} since M , = 0 by convention. Therefore it suffices to consider the case R(o)= t since we can approximate R from above in the usual manner by countably valued random = M,(M, o 0,) almost surely for variables. Thus we must show that each t . The fact that T is an {F,} stopping time implies that M , + T , M,, and M , 0 OT are all in 9. Let X T = a{X,+,; s 2 O } - and Y = a ( X Tu FT)-. We will show that (4.19)
E"{M'r+i;A> = E"{MT(MI 0 ~ )A; }
for all A E Y, 1, and x. It suffices to consider A = D n { X T + , ,E A , } n . ., n {X,+,"EA,,} where D € F T and A i e 8 * , 1 I j I n. But for such a A Proposition 4.15 implies that the left side of (4.19) reduces to while the strong Markov property for X implies that the right side of (4.19) reduces to the same thing. Thus in order to complete the proof it suffices to show that Y = 9. Since this is of some independent interest we formulate it as a proposition. (4.20)
PROPOSITION.Let Y be as above. Then Y
= 9.
Proof. Iff E C and a > 0, then
and so ;j e - " f ( X , ) dt E bY. It now follows from the Stone-Weierstrass approximation theorem that ;j g ( t ) e - ' f ( X , ) dr E bY for any bounded continuous function g on [0, 00). Since t - t f ( X f ) is right continuous almost surely, for each u 2 0 we can find a sequence {g,,}in C([O, 00)) such that gn(t)e - ' f ( X , ) dt + f ( X U ) almost surely. Consequently 9 't Y c 9, and since go= 9 we obtain Proposition 4.20. We next complete the characterization of strong multiplicative functionals. (4.21)
PROPOSITION. M is a SMF if and only if M is a RMF.
Proof. In view of Theorem 4.12 we need only show that a SMF is regular. stopLet M be a SMF and define R = inf{t: M I= 0). Clearly R is an (9,)
4.
RESOLVENTS AND STRONG MULTIPLICATIVE FUNCTIONALS
123
ping time and the right continuity of M implies that M R = 0 almost surely (M, = 0). Let T be any { A , }stopping time; then H = T R 0, i s also a n { A , }stopping time according t o (8.7) of Chapter I. Using Corollary 4.16 we have for each x
+
0
E"{M,; T < (} = E"{EX'T'[hfR] M , ; T < (} = 0.
Consequently T + R 0 OT = H 2 R almost surely on {T < (}, and so P"[X,
EE
- E M ; M, > 01 = P x [ X , E E - E M ; T < R] I P"[X,
EE
- EM ;R 0, > 01 0
= E"{PX'T'(R> 0); X T E E
- EM)
= 0,
since PY(R> 0) = 0 for all y E E - E M . (4.22)
PROPOSITION. Suppose that M is a right continuous M F and that
EM is nearly Bore1 measurable. If T = T E V E M then , M is regular if and only if M ,
=0
almost surely.
Proof. Let x be fixed. Then there exists a n increasing sequence {K,,} of compact subsets of E - EM such that T,, = T K , l T almost surely P". Since X(T,,) E K,, c E - EM almost surely on {T,,< co} one has M(T,,) = 0 almost surely on IT,, < co} if M is regular. But M(T,,) + M ( T ) almost surely P" and hence M ( T ) = 0 almost surely P" on {T < co}. Since x is arbitrary and M, = 0 we obtain M , = 0 almost surely. Conversely suppose M , = 0 almost surely. If R is any { A , }stopping time and X , E E - E M ,then either R 2 Tor R = 0. If R 2 T, then M , = 0 almost surely. On the other hand
P"(X, E E - E M ,
MR
> 0; R = O ) I P"(X0 E E
-E M ,
M o > 0)
= 0,
from the definition of EM and the fact that X is normal. Consequently M is regular. REMARK.It is sometimes convenient t o normalize a right continuous MF, M = {MI},as follows: Define N , ( o ) = 1 for all t if Xo(w) = A, N , ( o ) = M , ( u ) if t < [(o)and X o ( o ) E E, and finally N , ( o ) = infs<,(o) M s ( o ) if t 2 ( ( 0 )and X o ( o ) E E. Then N, = N,- = M , - almost surely on {( 5 t}. The reader should check for himself that N = { N , } is a right continuous M F equivalent to M , although N, need not be zero. We will say that a right (4.23)
124
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
continuous MF, M, is normalized provided t --t M,(o) and t -+ N,(w) are identical functions of t almost surely where N is defined from M as above. The definitions of strong, regular, or exact M F do not depend on the convention M , = 0, and, in fact, among right continuous MF's they depend only on the equivalence class to which the M F belongs. The only results above which appear possibly to depend on the convention M, = 0 are (4.15), (4.16), and (4.17). However the reader can easily check that these results hold for normalized MF's. For example let us check (4.17) for a normalized MF, M. It follows from (4.17) that M T + R = MT MR(8,) on {T + R < co}. Here and in the rest of this paragraph equality means equality almost surely. But writing this equality for R, = R A n and letting n -+ co we obtain the desired equality on {T < 00, R = co}. Finally since 8,w = wA,M , 0 8, = 1 for all t on {T = a}, and so the desired equality holds on this set also. We close this section with several definitions that will prove useful in the sequel. Let T be a terminal time. Then, as we have observed several times, M, = I[O,T)(f)is a right continuous MF. We say that Tis an exact terminal time provided the corresponding M F is exact. The reader should check that a terminal time Tis exact if and only if Pi;T U Y E 9'" for allfe 8: and ct > 0. A terminal time T is a strong terminal time provided that R + T 8, = T almost surely on { R < T} for all {S,} stopping times R. Again the reader should check that, under the assumption that A, = 9, for all t , T is a strong terminal time if and only if the corresponding M F is a SMF (see Exercise 4.26). 0
Exercises (4.24) Let X be a standard process such that there is a set A E d which is nonempty and polar. Let M,(w) = 1 for all t if Xo(w)$ A and let M , ( w ) = 0 for all t if X,(w) E A. Show that {M,}is a RMF, and hence a SMF, but that it is not exact. What is the corresponding exactly subordinate resolvent { W'} ? (4.25) Let X be a standard process and let M be a right continuous MF. Let { Q,} be the semigroup generated by M and let { V " } be the corresponding resolvent. (a) Show that if r < s < t , then M , - , 0 8, I 0 BS almost surely. (b) Use (a) to show that if t > 0, x E E, and f E B, then p , f ( x ) = lim,,o P,Q,-r .f(x) exists and that 0, f I P , f i f f ~ B, . (c) Show that if t > 0 and s 2 0, then Q,,, = 0,Q,. (d) Show that iff is a bounded nearly Bore1 measurable finely continuous function, then t + Q, f ( x ) is right continuous on (0, co). (e) Let { W'} be the exactly subordinate resolvent corresponding
5. EXCESSIVE FUNCTIONS to { V " } . Show that P , V f - W y as t -+ 0 for each f
125 E B.
(f) Show W y =
sr e-"' Q, f dt for each.fE B. (8) Use (f) to show that {Q,;t > 0) is a semigroup. (h) Show that there exists a MF, M', of X such that Q, f ( x ) = E " { f ( X , )MI} for t > 0, X E E , and f E B . (i) Show that Q o f ( x ) = lim,,o,rsaQrf(x) forfe C,(E)defines a measure on&. Prove that h(x) = limfio Q, l ( x ) exists, that Qo(x, = h(x) E, for each x, and that h(x) is either zero or one. Showthat {Of; t 2 O}isasemigroupsubordinatetoP,andthat{Q,; t 2 0 } is generated by a right continuous MF, say (j) Finally show that EM c EM and that if x E E M ,then P" almost surely M , = M , for all t 2 0. a)
a,
Let X be a standard process with 4, = 9, for all t and 4 = 9. Show that T is a strong terminal time if and only if M , = JLO,T)(t)is a SMF.
(4.26)
5. Excessive Functions A',, X , , O f , P") will be a fixed Throughout this section X = (R,4, standard process with state space ( E , 8).The main purpose of this section is to study functions which are excessive for a subordinate semigroup {Q,}; in particular to study the regularity properties of such a function composed with X . We will always assume that { Q,} is generated by a right continuous multiplicative functional M , which in view of Theorem 2.3 is equivalent to assuming that t -+ Q, I(x) is right continuous at t = 0. As in Section 4, { V " } will denote the corresponding resolvent and we will write V for V o . We will very shortly assume that M is exact; however, the basic definition and elementary properties do not depend on this assumption and so we will not introduce it for awhile.
(5.1) DEFINITION. Let a 2 0. A nonnegative function f in b* is said to be cr-excessive for ( X , M ) (or a - ( X , M ) excessive) provided that (i) e-"' Q, f If for all t 2 0 and (ii) Q,f-fpointwise as t -+ 0. As usual when a = 0 we will drop it entirely from our notation and terminology. Let Y " ( M ) denote the class of all a - ( X , M ) excessive funcr , . Since Propositions 2.2, 2.3, and 2.6 of Chapter 11 tions and let QP = F u Q depend only on the semigroup in question for their statement and proof they U",and remain valid (both statements and proofs) provided one replaces P,", 9" throughout by Q;, V " , and 9'"(M), respectively. We will use these results without special mention. Note that f E 9 ' " ( M ) if and only i f f is a - V excessive in the sense of Definition 4.5. Also note that if j3 > 0, then M! = c - ~ ' M is, a M F and that Y u ( M P=) Y p " ' P ( M for) any cr 2 0. Moreover the resolvent corresponding to M P is given by V " + P .Using this device we can
126
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
often assume a = 0 and V bounded when discussing a-(A', M) excessive functions. If T is an {Af}stopping time we define P T f ( x )= E x { e - u Tf(X,.) M,.} for u 2 0 and f~ B = bB* or f E S*,; in particular if T is an {F,} stopping time, then Q; is a bounded operator on B whose norm does not exceed one. If A E 8: we write PAfor pT,. We assume now that M is exact. There is no loss of generality in assuming that M I = 0 whenever t 2 C since we can always replace MI by the equivalent right continuous MF, MI Zro,s,(t). These assumptions will be in force without special mention in the remainder of this section. Consequently M is a SMF. By Theorem 4.17 if T is an {F,} stopping time, then MT+f= M , ( M f 0,) almost surely for each t and hence this relation holds for all t almost surely (consider rational t and use the right continuity of M). It is now easy to check that 0
00
Q;V' g(x)
= Ex
e-'" g ( X , ) Mudu T
whenever T is an {sf) stopping time and g E a*,. Hence Proposition 2.8 of , and P; by Y " ( M ) ,{Ff}, and Q; Chapter I1 is valid if one replaces 9'*{Af}, respectively. Since Qf(x,.) = 0 if x 4 EM it is clear that any u - (A', M) excessive function (a 2 0) vanishes off E M . One can now prove the analog of Proposition 2.10 of Chapter 11. We will state this explicitly but omit the proof since it is exactly the same as that given in Chapter 11. (5.2) PROPOSITION. Suppose SEY " ( M ) , A
E
8",and x E (A' n EM).Then
inf{f(y): Y E A ) I f ( x ) 5 sup{f(y): Y E A ) . Note that the right-hand inequality holds for any x E A'. By the very definition of exactness V"f is nearly Borel measurable and finely continuous for a > 0 and f E B, , and hence any f E 9 " ( M ) , a 2 0, is nearly Borel measurable and finely lower semicontinuous. Let rp = V' 1 ; that is, rp(x) = E x e-' MI dt. Since MI = 0 if t 2 [, q ( A ) = 0 and so EM = {rp > 0). Therefore EM is nearly Borel measurable and finfly open. Define En = { rp > l / n }; then E M = u E n. We now introduce three stopping times associated with M which will be of importance in the sequel:
:s
Tn= TEA-€,,
n 2 1,
T = lim T,, n
R =T E ~ - E ~ S = infit: MI = 0). 3
5. Note that P"(T
127
EXCESSIVE FUNCTIONS
= 0) = P"(S = 0) = 1
if x 4 E M .
PROPOSITION : (i) R , S, and T a r e {F,} stopping times; (ii) R , S, and T are strong terminal times (see the last paragraph of Section 4) ; (iii) S I T I R almost surely.
(5.3)
Prooj Since {T,,}is an increasing sequence of hitting times, T is a terminal time which does not exceed R . If H is a hitting time of a nearly Bore1 set, then t + H o 0, = H on { H > t } without an exceptional set and so Q + H 0 6Q = H on { H > Q } for any nonnegative function Q . Consequently R and Tare strong stopping time. Given an (9,) stopping terminal times. Clearly S is an (9,) time Q one has I V , , +=~M,(M, 0 6,) for all u 2 0 almost surely, and if Q < S then M Q > 0. Thus almost surely on { Q < S } , Mu 8Q = 0 if and only if M u + , = 0, and it is now easy to see that S is a strong terminal time. The only thing remaining to be checked is that S I Talmost surely. First note that M , + , I M,,,, = M,,(M, o 6,") I M , o 0," for all t almost surely. Therefore 0
1
-I1 2 E X { d X , , ) ; T,, < S )
2 E"(/ome-, M,+,d t ; T < s),
and so ~ ; e - f M , + , d t = Oalmost surely on { T < S } . But MT > 0 if T < S and consequently, by the right continuity of M , j; e-' M , , , dt > 0. Therefore P"(T < S ) = 0 for all x and this completes the proof of Proposition 5.3. It is intuitively clear that if q ( x ) 2 a > 0, then r + M , must be bounded away from zero on some interval [0, t o ] with positive P" probability. In fact this is precisely the reason for introducing the sets E n . The following lemma gives a precise meaning to this statement. The argument is one that is often used, and is generally omitted. (5.4) LEMMA.There exist positive constants z and y independent of x such that P " ( M , > y) 2 a/2 whenever q ( x ) 2 a.
Proof. If for a fixed x we let G,(y) = P " ( M , > y), then for every positive and y < 1
T
128
111. MULTIPLICATlVE FUNCTIONALS AND SUBPROCESSES m
p(x) = E x
e-'
M,dt
e-' EX(M,)dt
=
+
0
1 e-'E"(M,) W
dt
r
+ e-'
s (1 - e-')
E"(M,),
while
E"(M,)
= E"{M,;
5 rC1
M,I y }
- Gr(y)l
+ E X { M , ;M, > y }
+ Gr(y).
Thus whenever p(x) 2 a G,(y) 2 1 - er(l - a)/(l - y).
By choosing t and y small enough we may make the right side of this inequality exceed 4 2 , so the proof is complete.
s$
(5.5) REMARK.Let @(x) = E x e-BrM , dt for /? > 0. If P"(M, 2 y) 2 a/2 then pa@)2 a(y(1 - e-'9/2/?}. Consequently if TE is the hitting time of {pa I I/n} and T B= limn TE, then TB= T, as defined just before (5.3). In particular if MfP = edBtM , , then the stopping times S, T, and R are the same regardless of whether they are defined relative to M, or to Mf .
Define S,, = inf{t: M , I l / n } . Clearly {S,,}is an increasing sequence of
{F,} stopping times and S,, I S for all n. Moreover it is immediate that S,, 7 S. Note that if Ms-= lim,qs M,, then { M s - = 0} = {S,, c S for all n} and {MS> 0)= {S,, = S for some n}. The next result shows that T = S almost surely on { M s - = 0). (5.6)
PROPOSITION. For all x, P"(S,, c S for all n, S c T ) = 0.
ProoJ Since T,, 7 T it suffices to show that, for all x E E M ,P"(S,, < S for all n, S < T,,) = 0 for all p. But if 0 < S,, < T,,, then rp(Xsn)> l/p and consequently by Lemma 5.4 there exist positive constants T and y such that Pxcsn)[Mr> y ] 2 1/2p. Thus given x E EM and n 2 2 one has
P ( S , c s, s, < T,,)5 2 p E"{PX'S"'(M, > 7); Sn < S , Sn < Tp) I 2 p P ~ M ,esn> y ; s, < sl. But almost surely M,+sn= Msn(M, 0 Bs,) and Msn > 0 on {S,, < S}. Therefore
ws,,< s, s, c T,) I2 p PX(M,+sn> 01, and this last probability approaches zero as n + m since S,, S and Hence we obtain Proposition 5.6.
t
> 0.
5.
129
EXCESSIVE FUNCTIONS
We come now to the main result of this section.
THEOREM. Let f be ci - ( X , M ) excessive. Then (i) { e - " ' f ( X , ) M , , F,, P"} is a right continuous supermartingale for any p such that p ( f ) < 03. (ii) Almost surely f -f( X,) is right continuous on [0, S ) and has left-hand limits on [0, T ) . In addition t + f ( X , ) has a left-hand limit at S if Ms- > O . (iii) t + f ( X , ) is right continuous on [0, R ) almost surely P" if x E E M u (E3. (iv) Almost surely t - f ( X , ) is finite on [u, S ) iff(X,) c 00.
(5.7)
Proof. First of all in light of (5.5) and the remarks following Definition 5.1 we may assume that ci = 0. Secondly in proving (ii) and (iii) we may assume, as explained in the proof of Theorem 2.12 of Chapter 11, thatfis bounded. Of course, the regularity assertions in (ii) and (iii) are relative to the topology of [0, oo]. Considering (iii) first of all, if x E (E,&)' then P"(R = 0) = 1 and so there is nothing to prove. If x E EM and X o ( o ) E E M ,then t + f [ X , ( w ) ]fails to be right continuous at a point to < R(w) only if there exist a sequence {t,} decreasing to to and a pair of intervals 11, I2 of the form [- 00, r ) , (s, 001 (perhaps not in that order) with r and s rational and r < s such that X,,(w) E f-'(Il)n EM and X t n ( o ) ~ f - ' ( J n Z )EM for each n2 1. The two sets ,f-'(ll) n EM andf-'(Z,) n EM are disjoint nearly Bore1 sets, and Proposition 5.2 implies that they are finely open. The proof of (iii) may now be completed by appealing to the argument which finishes the proof of 4.8 of Chapter 11. Since P"(S = 0) = 1 if x 4 E M , S I R , and M , = 0 if t 2 S, it follows that t - + f ( X , ) M , is right continuous on [0, 00) almost surely. Moreover
E"CM,+sf(X,+s) ISfl = M , EX"'CMSf(XS)1 I M,f(X,),
and consequently (i) is established (we are assuming ci = 0). In proving (ii) we assume that f is bounded. Then for each x, {M,f(X,); 9,. P"} is a right continuous bounded supermartingale, and so a standard supermartingale theorem (( 1.5) of Chapter 0) implies that t - f ( X , ) M , has left-hand limits almost surely. Therefore t - f ( X , ) is right continuous and has left-hand limits almost surely on [0, S ) . If M,- = 0 then, by Proposition 5.6, S = T and (ii) is established in this case. But if M,- > 0 then t + f ( X , ) has a left limit at t = S. Now define Q , = 0 and Qn+'= Q, + S Ban for n 2 0. Clearly {Q,} is an increasing sequence of {S,} stopping times. Let Q = lim Q,. It is an immediate consequence of Proposition 5.3 that Q,+, IT almost surely on { Q , < T } . We will next show that almost surely Q 2 T. 0
130
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
Now to show that Q 2 T almost surely it sufficesto show that P"( Q < T,) = 0 for all p and x E E M . Let p be fixed; then according to Lemma 5.4 there exists t > 0 such that P x ( S 2 T) 2 1/2p whenever q ( x ) 2 p - l . If k is a positive integer we have for x E EMand n 2 1
PX{Qnti- Q, 2 T ; Q n < T p A k } = E"{PX'Qn'(S 2 t); Q, < T, A k } 1 2 - P"(Q, < T, 2P
Let A, = {Q.,, - Q, 2
T ; Q, <
A
k).
Tp A k}; then
-
P"(A,) I P"[Q,+l - Q, 2 t; Q < .o]
and consequently PX(An)+ 0 as n
00.
+ P"[Q, < k ; Q = a]
But
for each n and so letting n 3 co and then k + co we obtain P " ( Q < T,) = 0. Therefore Q 2 T almost surely. Suppose that t + f ( X , ) has left-hand limits on [0, Q,) and that either Q, = T or t + f ( X , ) has a left-hand limit at t = Q,. Since Q, = S we have already seen that this is indeed the case when n = 1. If Q, < T, then t -f( X I ) has left-hand limits on [0, Q, + S 6Q,). Moreover if Ms-o Ban = 0 then S o 0," = T 6Qn= T - Q, and so Q,,, = T, while if M,- Ban > 0 then t + f ( X , ) has a left limit at t = Q, + S 6," = Q,,,. Combining this with the results of the previous paragraph clearly yields (ii). To prove (iv) let A = {f< co} and B = {f= co}. If x E B' n E M , then Proposition 5.2 implies that x E B. Sincef(x) 2 QBf(x) one sees, just as in the proof of Theorem 2 . 1 2 ~of Chapter 11, that PX(TB< S) = 0 for all x E A n E M , a n d h e n c e f o r a l l x E A s i n c e P " ( S = O ) = li f x $ E M . I t n o w follows as in Chapter I1 that X , E A almost surely on (T, , s>, and this plainly yields Conclusion (iv) of (5.7). Thus the proof of Theorem 5.7 is now complete. 0
0
0
0
(5.8) REMARK.It follows from this theorem (or Proposition 5.2) that the restriction of an a - (A', M) excessive function to EMis finely continuous on E M . Another consequence is the fact that the minimum of two ct - ( X , M ) excessive functions is a - (X,M) excessive.
We will next give a few examples to illustrate some of the various possibilities. However, inasmuch as our main results are proved under the assumption that the M F is exact, let us first give a simple criterion for exactness. PROPOSITION. Let M be a right continuous MF. Then M is exact provided that for each u > 0 and x 4 EM one has limf,oE"{M,- , Of} = 0.
(5.9)
0
5.
131
EXCESSIVE FUNCTIONS
Proof. Let { W a }be the exact resolvent constructed from { V " } in Theorem 4.9. According to (4.9) and (4.10) in order to show that V " = W" it suffices to show that (b l ) U B + 'V' I(x) .+ 0 as fl-+ co for each x E E - E M . But
+
( p + l)UB+' V ' ( x ) = ( p + 1)E"
/
'rJ
e-(B
+
1)'
M , 0, du dt 0
0
=
(0 + 1) /me-Br
[me-'
E"(M,-,
0
0,) du dt
0
which under the hypotheses of (5.9) approaches zero as fi + co if x establishing Proposition 5.9.
4 EM,
We begin with an example to show that S < T < R is possible. Let E = (0) u [ I , co) and let X be uniform motion to the right a t speed one if X ( 0 ) 2 1, while X starting from 0 remains there for an exponential holding time after which it jumps to 1 and then moves to the right at speed one. Let Q be the first hitting time of { 1, 2) and M , = I I o , p ) ( t )Clearly . E M = E so that R = co. On the other hand S = Q and it is easy to see that T is the hitting time of {2}. Thus S < T < R almost surely Po. Next consider the following example: X is uniform motion to the right on the real line. Let h(0) = 0 and h(x) = IxI-' if x # 0 and set M , = exp( h ( X J ds).* This defines a right continuous M F of X , and obviously E - EM = (0). It is also easy to see that M is exact (use (5.9) for example). Finally one can check without difficulty that R = T{o,(the first hitting time of {0)), while S = T = D ( o l (the first entry time of (0)). Note that this shows that S need not be an exact terminal time. I f f = IEMthenf is ( X , M ) excessive, but t - + f ( X , ) is not right continuous at t = 0 almost surely P o . This shows that one may not eliminate the exceptional set of x's in (5.7iii). However in this example t -f(A',) is right continuous on the open interval (0, R ) almost surely, and the reader should have no trouble in proving that this is the case in general. Finally we give an example to show that the assertion concerning left-hand limits in Theorem 5.7 is the best that one can do in general. Let E = { - 1, -4, - 3. . . .} u [0, co). Let {H,,} be a sequence of independent exponential holding times such that CE(H,,) < 03. We describe the process X as follows: if X starts at - l/n it remains there for a time H,, after which it jumps to - l/(n + I ) and remains there a time H,,,, and so on until it reaches 0 after which it moves to the right at unit speed. If X ( 0 ) 2 0 the process moves to the right at unit speed. Note P"{T,,, < m} = 1 , for each x < 0. Let A , be the number of jumps of t -+ A', in the interval [0, t ] and let M , = exp( - A , ) . It is immediate that M is a right continuous multiplicative functional and that
-yo
Of course, we set Mo(w)= 0 if Xo(w)= 0.
132
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
E M = E which implies that M is exact and R = 00. Moreover it is easy to check that S = T = T { o , . Let f be defined as follows: f ( x ) = 0 if x 2 0, f( - l/n) = 1 if n is odd, andf( - l/n) = 2 if n is even. The reader should have no difficulty in verifying that f is (X, M ) excessive. However it is clear that t + f ( X , ) does not have a left limit at T when X ( 0 ) < 0, although this function is everywhere right continuous. In a certain sense the preceding examples are artificial. However it seems to be difficult to formulate simple and reasonable hypotheses that rule out such examples. Let us return for a moment to the canonical subprocess 8 corresponding to M. Of course, a functionfis a (X,M) excessive if and only if it is a-excessive relative to 8. Recall that p(w, A) = c(w) A 1 and that y(w, 1) = A. Using the notation developed above and in Section 4 we see that
-
(5.10)
P ( S < y ) = E"(M,) = 0,
and so it follows from Theorem 5.7 that t + f ( 8 , ) has left-hand limits on [0, p) and is right continuous on [0, 00) almost surely wheneverfis a-excessive for 2. In order to have a useful theory of excessive functions (a = 0) it is necessary to impose some finiteness condition on the potential operator. See, for example, (2.19) and (4.6) of Chapter 11. We will now discuss some conditions under which V(x, B) has appropriate finiteness properties. These conditions are most useful when U does not have reasonable finiteness properties; for example, when X is Brownian motion in one or two dimensions. As in the previous paragraphs X is a standard process and M is an exact MF vanishing on [c, 001. We begin with the following simple result. (5.11) PROPOSITION.Suppose that there exists a sequence (9,) in 8: such that, for each n and x, Vg,(x) < co, Vgl I Vg, 5 ., and V g , ( x ) 7 co as n + 03 for each x in E M . Then for each f e Y ( M ) there exists a sequence {f,} of bounded functions in 8: such that Vf, is bounded for each n and Vf, If as n -,co.
..
Proof. Let h, = min( Vg,, n , f ) . Then h, E Y ( M ) and Q,h, + 0 as t --t co since h, IVg, < co. Iff, = n[h, h,], then Vf, = n JA/n Q, h, du and the reader will easily check that {f,}has the desired properties.
el/,
We will now formulate (following Hunt [3]) some conditions which imply the hypothesis of Proposition 5.11. As in the paragraph above (5.3) let q(x) = V' l(x) = E x e-' M , dt. For 0 < p < 1 define Js = {cp > p}. (Note that J,,, is what we previously called En .) Each J, is finely open and nearly Bore1 measurable; if 0 < < y < 1 then Jy c J, c E M , and U J , = E M .
5.
133
EXCESSIVE FUNCTIONS
(5.12) DEFINITION. A set D E ~ is" called special provided (i) D is finely open, (ii) D has compact closure in E, and (iii) D is contained in some J,,O
We next introduce two conditions on M .
+ TD
0,) + 0 almost surely as t -+ a. (E) If D is special, then V ( . , D ) is bounded. (D) If D is special, then M(t
0
Under (D) or (E) the hypothesis of Proposition 5.11 (5.13) PROPOSITION. is satisfied. Proof. Assume (E). Let {G,} be an increasing sequence of open sets with compact closures in E whose union is E. Let D, = G, n J1,, and g, = nZDn. Since each D, is special and D, = E M ,this sequence {g,} has the required properties. Assume (D). If D is special and qD(x)= E"{M(TD)}, then one easily checks that qDE Y ( M ) . Moreover Q , q D ( x ) = E " { M ( t + TD 0,)) + 0 as r + 00, and so V [ t - ' ( q , - Q , P D ) ] 7 qDas t 0. But qD= 1 on D and EM is a countable union of special sets, and so we can construct the desired sequence { g,} .
u
0
Note that (D) holds whenever M , + 0 almost surely as r -+ co. In particular this is the case if almost surely 5 is finite. In order to give some conditions under which (E) holds we prepare some lemmas. LEMMA.If /? < 1, then there exist t > 0 and y < 1 such that E"(M,) < y for all x 4 J , . Moreover for all x E EA and n 2 1, E X { M ( n t ) ; nz < TjB}5 y"-'. (5.14)
Proof. If x 4 J,, then for any T > 0 we have /l> q ( x ) 2 E x 1'e-l M , dt 2 ( I - e-') E"(M,). 0
Choosing z large enough that p(1 - e-')-' < I proves the first assertion. For the second let T, = T J B Then . for any n 2 1 and x we have E"{M[(n
+ l ) ~ ] (;n + 1 ) t < T,} I E"{M,,(M,
0,'); n t c 7 ' ' ) = EX{EX(n')(Mr) Mnr;f l T < T,} 0
5 )'E"{M,,; n t < Tp},
since X ( n t ) 4 J, if 0 < nt < T, . This establishes the second assertion. (5.15)
LEMMA.Let T, = T J B. Then ExSOT@M, dr is bounded in x. Moreover
134
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
if 0 < 6 < j < 1, then for all x 4 Ja one has E "[M(T,)]I S/P. Proof. Let y and z be as in (5.14). Then
m
I
1z E X { M ( n t ) ;nz < T,} n=O
proving the first assertion. If x
4 Ja, then
TP
e-' M , dt
6 I q ( x ) = Ex
+ Ex
m
e-'M, dt T,
0
2 B EX{M(T,)),
establishing the second assertion. (5.16)
PROPOSITION.If 6 < 1, then V ( x ,J:) is a bounded function of x.
Proof. Pick and
p such that 6 < B < 1. Let T = Tj, and R = Tj; , Define To = 0 T'n + 1 = T2n T2n+2
+T +
= T2n+l
Q
OT,,
9
OTzntl *
for n 2 0. It follows from (5.15) that, for n 2 1, E x { M( T 2 n + 2))
(6/P)Ex{M(Td),
and consequently M(T,,) + 0 as n -,00 almost surely. Therefore limnTZn2 S almost surely. Also since X, is not in J i for t E (TZn-',Tzn)we have, letting A be a bound for E x j l M t df,
c M,dt 2 E x [M(T2,) A[1 + 2 (6/py-']. Tzntl
m
V(x,J@I
EX/
n=O
Tzn
=n=O
I
This proves (5.16).
Ex(Tzn)[
n= 1
lOTMtdi])
5.
EXCESSIVE FUNCTIONS
135
The next result is an immediate consequence of (5.16). (5.17) COROLLARY. If for each special set D there exists /3 < 1 such that D n J, is empty, then (E) holds. In particular (E) holds if J, is empty for some /3 < 1, or if each compact subset of E is disjoint from some J, .
We close this section with a result that is applicable in many situations. (5.18) THEOREM. Assume that (i) functions in 9'"are lower semicontinuous, (ii) the process Xis such that either U ZB(x) = 0 for all x E E or @,(x) > 0 for all x E E whenever B E b",and (iii) M is such that q ( x ) < 1 for some x in E. Then (E) holds.
Proof. Choose 6 c 1 so that B = { q < 6) n E is not empty. Consequently OB(x)= Ux(TB< co)> 0 for all x E E, and so is strictly positive and lower semicontinuous. Therefore given a compact subset K of E there exists q 0 such that E"(e-TB)= @L(x) 2 q for all x in K. Thus for such an x
@A
I Ex(1
=-
- e-TD) + GEX(e-TB)
5 1 - q(l - 61, and so K is disjoint from some J, . This establishes Theorem 5.18.
Exercises (5.19) The following three statements are the analogs of Theorems 5.1, 5.9, and 5.1 1 of Chapter 11. Prove each statement by adapting the proof of the corresponding result in Chapter 11. In each case X i s a given standard process, M an exact M F vanishing on [[, 001, and f is a nonnegative function in 8" which vanishes off EM. (i) Iff'> Qkffor all compact subsets K of E M ,then /3VP'"f 0. (ii) Let T,, = inf{r : d ( X , , X , ) > l / n } where d is a metric for E. Suppose each point of E M is instantaneous for Xand thatfis finely continuous on E M . If for each compact K c EM there exists an integer N such thatf(x) 2 Q;, f ( x ) for all x E K and n 2 N , thenfE Y a ( M ) . (iii) Let T, be as in (ii) and assume that the restriction o f f t o EM is lower semicontinuous. Suppose that for each x E EM, f ( x ) 2 Qn; f ( x ) for a sequence of values of n approaching infinity. ThenfE Y " ( M ) .
136
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
(5.20) Prove that if X is a standard process and M an exact MF, then for each u > 0 and x 4 E M ,E"{M,- 0 Or} approaches 0 as t + 0. Use this and Proposition 5.9 to conclude that the product of two exact MF's is exact. (5.21) Let X be a standard process and M an exact MF. GivenfE Y ( M ) with f < co define Y = limnf(X,,,) MTn(the limit exists by Theorem 5.7, T,, is defined just above Proposition 5.3). Define g(x) = E X (Y). Show (a) g E Y ( M ) ,(b) g sf,(c) g = Q K g whenever K is the complement of a special set. Also show that iff is bounded, then g is the largest function satisfying (a), (b), and (c). 6. A Theorem of Hunt
In this sectionwe are goin to give a characterizationof Q, u for u E Y " ( M ) and D E 8"which exhibits the relationship between the operator Q", and the operation of '' balayage " or " reduite " in classical potential theory (see Brelot [l]). This characterization, which is due to Hunt [2], is one of the most important results in probabilistic potential theory. Before discussing Hunt's theorem it will be necessary to establish a very useful extension of Theorem 11.2 of Chapter I. As in previous sections X = (a,4, 4,) A', O,, P") will denote a fixed standard process with state space (E, 8). For simplicity we assume that 4 = 9 and MI = .Frfor all t 2 0. Stopping times are then {Fr} stopping times. Here is the extension of Theorem 11.2 of Chapter I mentioned above. (6.1) THEOREM.Let B E & " and let p be a finite measure on & * such that p(B - B') = 0. Then there exists a decreasing sequence {B,,} of finely open nearly Bore1 subsets of E, each containing B, and such that TBnt TB almost surely Pg. (6.2) REMARK.Note that if X is quasi-left-continuous on [0, a),then (11.3) of Chapter I is a sharper result than (6.1). The importance of (6.1) is
that it is valid for general standard processes. We will break up the proof of Theorem 6.1 into several steps which we will establish as lemmas or propositions. First note that it suffices to prove (6.1) in the case p ( B ) = 0. To see this suppose we have established (6.1) in this case and write p = p, + p 2 where pl is the restriction of p to E - B and p2 is the restriction of p to B n B'. Then pl(B) = 0 and so there exists {B,,} with the desired properties such that TBm 7 T B almost surely P r l . But TB = T B . = 0 almost surely Prz and so TB. t TB almost surely Pp.Thus we will suppose that
6.
137
A THEOREM OF HUNT
p(B) = 0 and, in addition, we will assume, as we may, that p is a probability measure. Recall that Opl(x) = EX(e-aTA) = EX(e-aTA; TA< 5) for any A E 8".The following notation will be used in our discussion without special mention. We let cpA = 0: and T(A) = cpA(x)p(dx) = E P ( e W T Afor ) any A E 8".We also define ~ ( x = ) E"(e-[) for x E E. Both cpA and cp are considered as functionsonEandare1-excessive.NotethatU' l(x) = E " f $ e - ' d t = 1 - cp(x) for x E E and so cp is actually Borel measurable.
(6.3) LEMMA.Let {A,,} and {B,,} be two sequences of sets in 8" such that, for every n, A,, c B,,. Let A = UA,, and B = UB,,. Then for each x E E, CPEW - ( P A @ ) I [ c p ~ ~ (-x )cp~,(x)I. In particular W )- U A ) 5 E n [ U B J - r(An)I*
Cn
Proof. If T B ( o )< t < TA(w)then TEn(w) c t < TA,(w) for some n. Therefore
and this establishes (6.3). LEMMA.Let {T,,} be an increasing sequence of stopping times with limn T,, 2 5 almost surely PI. Let A = {T,,< 5 for all n } . Then
(6.4)
PP{(lim,,cp(XTn)< I , A, [ < CO} = 0.
Proof. Since { e - T ncp(XTn),FT,,P"} is a supermartingale for each x E E, there exists an L E F with 0 5 L 1 and such that e-Tncp(XTn)+e-cL almost surely. Now EP{e-<;T,,< 5) = E p { e - T ne - ( 5 - T n-) *Tn < c } = E"{e-T" d X T , , ) ; T,, <
0,
and letting n + co we obtain E p { e - : ;A} = EP{e-SL;A}. Consequently PP(L < 1, A, [ < m) = 0 establishing (6.4). (6.5) PROPOSITION.Let D E 8" .with p ( D ) = 0. Suppose that for some < 1, D c {cp < a } , and that PP(TDc 5) = 0. Then there exists a decreasing sequence { D,,} of finely open Borel subsets of E, each containing D,and such that l-( D,,) -+ 0.
a
138
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
Proof. By Theorem I I .2 of Chapter I there exists a decreasing sequence {G,} of open sets containing D such that lim T,, 2 [ almost surely P'. Let 0, = G, n {cp < a } . (Each D, is a Borel subset of E since cp is a Borel measurable function on E.) Now D c D,c C, for each n and so lim TDn2 [ almost surely Pp.But (p[x(TD,)]I a almost surely on {TD,,< and hence, by Lemma 6.4, Pp{TD,< [ for all n, [ < m} = 0. If TD,,2 [ then TD,,= 00 since D,c E, and combining this with the preceding two sentences yields lim TDn= 00 almost surely P p . Hence T(D,) + 0.
c}
c)
(6.6) COROLLARY. Let D E 8"with p ( D ) = 0. Suppose that P"(TD < = 0. Then there exists a decreasing sequence { D,} of finely open Borel subsets of E, each containing D,such that T(D,) + 0. Proof. Let A, = D n {cp < 1 - l/k}. Let E > 0. By Proposition 6.5 there exists for each k a finely open Borel set Ck with A, c Ck c E and r(c,)< 42'. If C = u C , , then since D = U A , it follows from Lemma 6.3 that T(C) - T ( D ) < E. By hypothesis T ( D ) = 0 and so we obtain (6.6).
(6.7) PROPOSITION. Let A E 8"and assume that P! cp -+ cp as t -+ 0 uniformly on A. If p(A) = 0, then there exists a decreasing sequence {A,} of finely open Borel subsets of E, each containing A, such that T(A,) + T(A). Proof. Of course, P!cp + rp pointwise on E since cp is 1-excessive. By hypothesis p ( A ) = 0 and so there exists a sequence {G,} of open sets containing A such that (TG,A [) T (TAA [) almost surely P '. Given E > 0 choose s with 0 < s < 1 such that cp(y) - P,' ~ ( y<) E for all Y E A. If C = { y : cp(y) P,' cp(y) < E}, then C is finely open, Borel measurable, and A c C c E. (cp E 8 implies P,'cp E 8.)Define C, = G, n C so that each C, is a finely open '. Next Borel set containing A and (Ten A [) 7 (TAA [) almost surely P observe that if z is such that cp(z) - P,' cp(z) I E, in particular if z E C', then E
2 cp(z) - P,' cp(z) = E'(e-c) - fi'(e-c; [ > s) = E'(e-[; [ I s) 2 e-'
P'([
I s);
that is, P'([I s) I eSEI eE. Now lim[r(c,) - T(A)] = lim E'{e-TCn - e-TA} n
n
1TCn< [ for all
I Pp
n , lim
TCn= [ < m ) ,
n
c.
since (Ten A C) (TAA [) almost surely Pp and TA= m if TA2 Denote this last displayed probability by 6. If 6 > 0, then for any 6' with 0 < 6' < 6
6. A there exists an n such that y,,
139
THEOREM OF HUNT
5 - TCnI s; Tcn< C} > 6’. But
= Pp{
y, = E ” { P ~ ( ~ I ~ ~s];) [TCn [ < [}
I eE
since X(Tc,) E C; c C‘ almost surely if TCn< 5. Hence 6‘ I ee and so 6 I eE. This proves that lim,[T(C,,) - T(A)] I eE. Thus for each k we can find a finely open Borel set D , with A c D , c E and such that T ( D , ) - T(A) < l/k.This clearly yields the conclusion of (6.7). The following corollary is obtained from (6.7) by an appeal to Lemma 6.3 in the same manner that Corollary 6.6 was obtained from (6.5). (6.8) COROLLARY.Let A E 8” with p(A) = 0. Suppose that A = U A , , where, for each n , P:cp + cp uniformly on A,,. Then given E > 0 there exists a finely open Borel set D with A c D c E and such that T ( D ) - T(A) < E. (6.9) LEMMA.Let D , C E 8”with C c D. Suppose q I1 and that cpD(y) I q forally E D . Iflp(dx) Ph(x, C ) = E p { e - , , ZJX,,)} = 0, then T(C) Iq T ( D ) .
+
ProoJ Obviously T, 2 T D and P p [ X , , E C] = 0. Therefore T, = TD T, o O,, almost surely P”.Now the hypothesis implies that cpD(y) I q for all y e D v D’and so
T(C) = E ” { e - T C }= E”{e-TDcpc(XT,))
< E’{e-TD cpD(X,,)} I q I-@). We come now to the key step in the proof of Theorem 6.1.
(6.10) PROPOSITION. Let B E &“andsuppose that, for some q < 1, cpB(y)I q for every y E B (and hence also for every y E B‘). If p ( B ) = 0 then for every E > 0 there exists a finely open Borel set A with B c A c E and such that T(A) - T(B) < E . Proof. Let E > 0 be given. Let v be the measure on b* defined by v ( F ) = f p(dx) Pk(x, F ) and let v, be the restriction of v to B. Since cp - P,‘cp + 0 as t + 0 we may, by Egorov’s theorem, find an increasing sequence {B,} of subsets of B such that for each n, B, E 8,cp - P,‘ cp + 0 as t + 0 uniformly on B, , and v,( B - B,,) = 0. Let M , = B,, and D, = B - Bn. Since M , c B, p ( M , ) = 0 and so by (6.8) there exists a finely open Borel set A , such that M I c A , c E and T(A,) - T ( M , ) < ~ / 4 .Furthermore p(dx) Pk(x, D l ) = v(D,) = v , ( D , ) = 0 and so, by Lemma 6.9, T ( D , ) I q T(B) I q. Finally D, c B and hence cp,,(y) I cpB(y)Iq for all y E D , .
u
u
u 5
140
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
5
Next let v, be the measure F - r p(dx) P&(x, F) restricted to D,.According to the argument just given we can find a Borel measurable subset M , of D, such that if D, = D , - M , , then (i) jp(dx) P&(x, D,) = 0 and (ii) there is a finely open Borel set A , such that M , c A, c Eand T(A,) T ( M , ) < 4 8 . By Lemma 6.9, T(D,) 5 q T ( D , ) 5 1’. Plainly we may continue this procedure: that is, we can find a sequence {M,} of Borel subsets of B and a sequence {D,} of nearly Borel subsets of B such that (a) for each n there is a finely open Borel set A, with M , c A, c E and such that I‘(A,) - T(M,) c ~/2”+’,and (b) D, = B - U ; = , M k and T(D,) I qn. Now let D = (7, D,, M = O M , , and 2 = U A , . Then 2 is a finely open Borel subset of E containing M and, by Lemma 6.3, T(2) - T ( M ) c ~ / 2 .Moreover T ( D )= 0 since q c 1, and so P”(T,< () = 0. Also p ( D ) = 0 since D c B. Thus by Corollary 6.6 there exists a finely open Borel set C with D c C c E and such that T(C) - T ( D ) c ~ / 2 .Let A = 2 u C. Then A 3 B, A is a finely open Borel subset of E, and T(A) - T(B) c E by Lemma 6.3.
-
(6.11) PROPOSITION. Let B E 6“and suppose that p(B) = 0. Then for every E > 0 there exists a finely open nearly Borel set A with B c A c E and such that T(A) - T(B) c E . Proof, Let E > 0 be given. Choose q c 1 such that (l/q - 1) < ~ / 2 .Let B, = {rp, > q } and B, = B n {rp,,~ q } . The set B, satisfies the hypotheses of (6.10) and so there is a finely open Borel set A , with B2 c A , c E such that T(A,) - T(B,) c ~ / 2 .Let A = A, u B,. Plainly A is a finely open nearly Borel set and B c A c E. Now
Tsl)]} - E”{e-TBI;T,,< TA2}+ EC{e-TAz;TA25 TBI}.
T(A)=
= Ep(exp[-(TA2
A
Also T,, + TB e T B , 2TB and so 0
E ” { e - T B l ;T,, < T A ~ } < (l/q) E ” { e - T BC~P B ( X T B , ) ; TB1< T A J
< (l/q) E ” { e - T B ;TB,c TAz}. In addition B, c B and hence El((e-T-42 - e - T B ; TA2I TB,}I Ep{e-TAz- e-TBz} = T(A,)
- T(BJ < 42.
+
Therefore EP{eKTA2;TA2< TBl} IE p { e - T B ;TA2I TB,} ~ / and 2 upon combining these estimates we obtain
r(A)= Ep(e-TA)IE N ( e - T B+) (I/q - 1) + 4 2 ,
6.
A THEOREM OF HUNT
141
or T(A)5 T(B)+ E by the choice of q. Thus the proof of Proposition 6.11 is complete. Now Theorem 6.1 is an immediate consequence of (6.1 1). Indeed, using (6.1l), we can find a decreasing sequence {B,} of finely open nearly Bore1 subsets of E, each containing B, such that T(B,) + T(B). Let T = lim TB,. Then T 5 TB and E p ( e - T )= limn T(B,) = E'(e-Tfl). Consequently T = TB almost surely P',and so finally the proof of Theorem 6.1 is complete. We are now ready to formulate and prove Hunt's fundamental theorem. We assume that M is an exact MF of X and that M ,= 0 whenever t 2 c. As in Section 5, Q, and V" denote the semigroup and resolvent associated with (X, M).
(6.12) THEOREM.Assume that there exists a sequence {h,} of nonnegative functions in bb* such that Vh, is bounded for each n and Vh, 00 on EM. Let f E Y ( M ) and A E 8".Let 9 = {u E Y ( M ) : u 2 f on A } and let f A denote the lower envelope of 9 ;i.e., f A = inf{u : u E a}.Then QAf 5 f A and QAf ( x ) = f A ( x ) except possibly for those points x E EM n ( A - Ar). Once again we will break up the proof into a sequence of lemmas and propositions. First note that according to Proposition 5.13 the hypothesis of (6.12) is satisfied whenever condition (D) or (E) of Section 5 holds. (Take f = 00 on E M ,f= 0 off EM in the statement of 5.1 1.) Next observe that the hypothesis of (6.12) is equivalent to the assertion that there exists an h E b 8 f such that Vh is bounded and strictly positive on EM.Indeed if such an h exists and h, = nh, then Vh, t co on E M . Conversely if the hypothesis of (6.12) is satisfied and ifh = hn(2"b,)-' where b,, = sup(1, lh,II, II Vh.ll), then h and Vh are bounded and Vh > 0 on EM. This is often a useful way to formulate the hypothesis in (6.12). Finally note that if a > 0 then the MF, M,"= e-"' M , , automatically satisfies the hypothesis of (6.12) according to Proposition 5.13. As a result we have the following corollary which we state explicitly for ease of future reference.
(6.13) COROLLARY. Let a > O , f E Y " ( M ) , and A E ~ " .Let f j = inf{u:uE Y " ( M ) , u 2 f onA}. Then PAf i f j and PAf ( x ) =f i ( x ) exceptpossibly for those x E EM n ( A - Ar). Obviously in proving (6.12) we may assume that A c EM since all the functions involved vanish off EM . Our proof of (6.12) is essentially a repetition of the original proof in Hunt [2].
142
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
(6.14) PROPOSITION. k t f , A, and fA be as in (6.12). Then QAf If A and QAf ( x ) =fA(x) if x E A'. Here it is not necessary to assume the special hypothesis contained in the first sentence of (6.12). Proof. If u E Y ( M ) and u 2 f on A , then u 2 f on ( A u A') n E M since u and f are finely continuous on E M ,Hence u 2 f on A u A' since both u and f vanish off E M . Therefore Q A f 5 Q A u Iu, and so Q A f I fA .If x E EM n A', then Q A f ( x ) =f ( x ) . But f E $! and so fA Sf. Hence Q A f ( X ) =fA(x) if x E E M n A', and this yields (6.14) since both fA and Q A f vanish off E M . The next result is very similar to (6.4). (6.15) PROPOSITION. Let A E I"and let {T,,} be an increasing sequence of stopping times with limit T. Let p be an initial measure and suppose that P p ( T # TA)= 0. Then Qi(XTn)-+ 1 almost surely P p on {T,, < TA for all n, TA < a}.As usual (Di(x) = EX{e-aTA; T A < 0. Proof. Since {e-aTn(D;(XTn),STn, P") is a nonnegative supermartingale for each x , there exists an L E S with 0 I L I1 such that e - I T n@i(XTn)-+ e-aTA L almost surely. Now E"{e-aTn(D~(~T,,); T,,< T ~=}E p { e - I I T A ; T,, < TA}
and letting n -+ 00 we obtain
Elr{e-aTAL ; A} = Elc{e-aTA;A} where A = {T,, < T A for all n}. It follows from this that L = 1 almost surely P' on A n {TA < a},and so one obtains (6.15).
(6.16) PROPOSITION. Let A E 8"and let p be an initial measure such that p ( A - A') = 0. Then there exists a decreasing sequence {A,,} of sets in P, each containing A, and such that the following statements hold: (i) lim TA, = TA almost surely PI,(ii) TAn= TA for sufficiently large n almost surely P" on {TA< GO},and (iii) A,, c A', for each n. Proof. For the purposes of this proof we write q B = (DL for any B E I". Assume first of all that there is an q < 1 such that A c { q A < q}. Then A' is empty and so p ( A ) = 0. By Theorem 6.1 we can find a decreasing sequence {B,,} of finely open nearly Bore1 sets containing A such that TBnt TA almost surely P". Let A,, = 8, n { p A< q } . Then each A,, is finely open and {A,,} has all the properties asserted in (6.16) except possibly (ii). To see that (ii) holds let A = {TA, < TA for all n, TA < GO}.Then q A ( X T A , ) I q almost surely on A, while vA(XTA,;) + 1 almost surely P' on A according to (6.15). Conse-
6.
A THEOREM OF HUNT
143
quently Pr(A) = 0, so (6.16) is proved in this case, and one may even assert that each A, is finely open in this case. Bk where Bo = { q A= 1) n A and In the general case we write A = Bk = { ' P A I 1 - l/k} n A fork 2 1 . Fork 2 1, Bk C A - A'andSOP(Bk) = 0. Also qBkI qA< 1 - I / k on B,. Thus by what was proved above we can find for each k 2 1 a decreasing sequence {B:} of finely open sets in 8",each containing Bk, and such that almost surely P", TI t Tk and T[ = Tk for large enough n on {Tk< 00). Here we have written Ti for the hitting time of Bi and Tkfor the hitting time of B, . For each pair of positive integers k and n we can find a positive integer m(k, n) such that pr{Tm(k.n) < Tk , T , " ( k , n ) < n } < 2 - ( n + k ) , k
ukm,o
and we may assume that m(k, n ) is increasing with n. Now define
Plainly {A,} is a decreasing sequence of sets in b", each containing A. Moreover TAn= min(TB, , inf,,, TF(k,n)),and since TA = inf,,, T B k we see that if TA,< TA then TAn< TBo and so TA, = inf,,, TF(k*n). Consequently m
2
PF{TA,< TA;TA,< n} I 2 - ( n + k = ) 2-". k= 1
Therefore almost surely P" for all large n either TA, = TA or TA. 2 n. Hence almost surely P", lim TA. = TA and TAn= TA for sufficiently large n on {TA < a}.We must still show that for each n, A, c A:. If x E B, then x is regular for A and hence also for A,, n 2 1. Each B: is finely open and so if x E B;(k*n)c A,, then x E A ; . Thus A, c A; for each n, completing the proof of (6.16). In light of Proposition 6.14 the proof of Theorem 6.12 will be complete as soon as the following lemma is established. LEMMA.The hypotheses and notation are those of (6.12). Given > 0 and x E EM - A there exists a u E Y ( M ) such that u 2 f on A and 4 x 1 IQA f(x) + E (6.17) E
Proof. As mentioned above we may assume that A c E M .As a first step we suppose that there exists an h E S*,such that Vh is bounded and f I Vh on A. Let {A,} be the sequence of sets constructed in (6.16) relative to the given set A and with E, as initial measure. Define B, = A, n { f < Vh + e / 2 } . Clearly A c B, c EM and each point of Bn is regular for B, . Also by (6.16), TB, t TA and TBn= TA for large enough n on {TA < a}.Here and in the remainder
144
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
of the proof of Lemma 6.17 we omit the phrase " almost surely P" " in those places where it is clearly appropriate. Recall that S = inf{t: M , = O}. We write Tfor TAand T,,for TB.. If T < S then T,, < S and so f ( X T n )I Vh(XT,) &/2II1 Vh)I 4 2 . Thus by the bounded convergence theorem
+
+
< s}
EX{f(XT,,)
+E
"{f(XT)
= QA
MT;
< s)
f
By Theorem (5.7i), limr-,w M,Vh(X,) = L exists almost surely, and, since Vh is bounded, E y { L } = lim E y { M , V h ( X , ) }= lim EY r-m
r+m
1
h(X,) Mu du = 0
wr!(
for any y. Thus lim,-,m M,Vh(X,) = 0 almost surely. Next observe that E X { f ( X T nMT,,; ) T = S } = E X { f ( X T n )MT,,; T = S < a, T,, < S }
+ E X { f ( X T n MT,; ) T =S =
00,
T,, < S } .
The integral over {T= S < cu,T,, < S } approaches zero as n + 00 since this set decreases to the empty set and the integrand is bounded. The integral over { T = S = 00, T,, < S } is dominated by E " { V ~ ( X T ,M ) T , ; T,, < 00; T = s = CO}
+ &/2
and by the above remark this integral approaches zero as n -,cu. Combining these estimates we have ) Q ~ f ( x+) &/2, lim Q E , ~ ( x 5 n
and so for a sufficiently largevalue of n, QE,f(x) I Q A . f ( x )+ E. But B,, c B," and B,, c E M .Therefore QB, f =f on B,, 2 A. Thus u = QEnfhasthe desired properties and so (6.17) is proved under the assumption thatfis dominated on A by a bounded potential. In the next step we suppose that f is finite on A. If Q A f ( x )= GO, then u = f has the desired properties. Therefore we suppose that Q A f ( x )< co. By hypothesis there is a sequence {Vh,,} of bounded potentials such that V h , t 00 on EM.Let A,, = A n { f I Vh,,}. Then A = U A , . A l s o f s V h , on A,. Therefore, by the proof in the previous paragraph, we can find for each n a nearly Bore1 set B,, 2 A,, such that B,, c E M , B,, c BL, and QB,f(x) - QA,f(x) I 42". Let B = UB,, c EM. Clearly each point of B is regular for B and B 3 A. We claim that W
(6.18)
Q ~ f ( x-) Q A f(x) IC QE, f(x) - QA. f(x). n= 1
6.
145
A THEOREM OF HUNT
Assuming (6.18) for the moment, it then follows that Q,f(x) I QAf(x) + E and thatf(x) = Q B f ( x ) on B 3 A . Thus u = Q,fhas the desired properties. As to (6.18) it suffices to prove that a,
(6.19)
QaSVa g ( x ) - QIva d x )I
1 QaS,vad x ) - Q:.vUg ( x )
n= 1
for each a > 0 and g E bb*, with Vag bounded. The left side of (6.19) is equal to
, t < TAn}. Thus(6.19)and hence since for each t , { T , < t < TA}c u F = l { T B< (6.18) is established. Note that (6.18) is merely an extension of Lemma 6.3. We have now established (6.17) under the assumption that f is finite on A . We will need the following lemma when discussing the general case of (6.17).
LEMMA. Assume the hypothesis of (6.12). Let A be a nearly Bore1 subset of EM and let E > 0. If x E EM - A and QA l ( x ) = 0, then there exists a u E Y ( M ) such that u = co on A and u(x) < E . (6.20)
By what we have already proved applied to the function f = I E M , there exists for each n 2 1 a u, E Y ( M ) with u, 2 1 on A and u,,(x) < QA I(x) + ~ 2 - "= ~ 2 - " Plainly . u = C u, has the desired properties. Proof.
We are now ready to complete the proof of (6.17) in the general case. Once again we may assume that Q A f ( x ) < co. Let B = A n { f < co} and C = A n { f = co}. Note that f ( X T , ) = co almost surely on {T, < S } . Observe that
a > Q ~ f ( x2) Qcf(x) = E " { ~ ( X T , M ) T~), and consequently Q, I(x) = E"{MT,} = 0. Now by Lemma 6.20 we can find a u E Y ( M ) such that u = co on C and u(x) < 4 2 . On the other hand f is finite on Band so by what we have already established there exists a g E Y ( M ) with g 2 f on B and g(x) 5 Q , f ( x ) + 4 2 . Let u = g u. Then u 2 f on A and
+
U(X)
=g(X)
+
U(X)
IQBf(x)
+
E
I QAf(x)
+
E.
Thus the proof of Lemma 6.17 and hence the proof of Theorem 6.12 is finished.
146
111. MULTIPLICATIVE FUNCTIONALS AND SUBPROCESSES
In Section 1 of Chapter V we will discuss various implications and refinements of Theorem 6.12 under an additional hypothesis.
Exercises Let X be a standard process such that (i) there exists a nonempty polar set and (ii) the only excessive functions are constants. Show that in this situation the conclusion of (6.12) is false (take M , = I,,x,(t)). Use (1.5b) and (4.22) of Chapter I1 to show that Brownian motion in R2 satisfies these conditions.
(6.21)
(a) Let X satisfy the hypothesis of (6.12) with M , = ZL,,c)(f). If p is a finite measure on b* show that pU determines p. Here p U is the measure A -,J p(dx) U(x, A) on b*. (Hint: let h E bb*, be such that Uh is strictly positive and bounded. Let g = U'h. Use the resolvent equation to show that Ug is bounded. Furthermore show that g is strictly positive. Let f be a continuous function with compact support and let f, =f A (ng). Show that /3Upf.-,fn boundedly as /3 -, co and use this to show that p( f,)is determined by pU.) (b) Let X and X * be standard processes with the same state space (E, b) and assume that both X and X * are as in Part (a). Assume further that they have the same class of excessive functions ( a = 0). If B is a Bore1 set, then use (6.12) to show that P, f = P i f for any excessive function f. (This makes sense since Y ( X ) = Y ( X * ) . ) Combine this with (a) to show that the measures P,(x, .) and P;(x, *) are identical for each x E E and B E 8". (6.22)
Iv ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
Let X = (Q, A, A!,, X , , O f , P") be a standard process with state space ( E , 8).Let f be a nonnegative bounded Bore1 measurable function on E and consider A,(w) = J"/cX,(w)] cls. 0
Clearly t A,(w) is continuous, nondecreasing, and A , = 0. Moreover, for Also observe that t -+ A , is constant on the interval [l,031 each t , A , E 9,. since X , = A on this interval and f(A) = 0. In particular A , = j i f ( X , ) dt. Finally an elementary calculation yields the following fundamental property of A : A , + , = A , A, 0,. (*) -+
+
0
The potential off'is easily expressed in terms of A ,
and, more generally,
joe - " ' f ( X , ) dr = EX1 e-"I dA,. m
V'J(x) = Ex
m
0
In this chapter we are going to study functionals A , which satisfy (*) and appropriate regularity conditions. These will be called additive functionals of X . In light of the above discussion the function uA(x)= E"(A,) is called the potential of A . The main purpose of this chapter is to show that a large class of excessive functions can be represented as the potentials of additive functionals. In later chapters we will investigate further properties of additive 147
148
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
functionals. In particular we will see that there is a strong analogy between additive functionals in the present theory and measures in classical potential theory. We will actually discuss additive functionals of ( X , M ) for a suitable class of multiplicative functionals M . However, we will state our definitions and theorems in terms of the process X and the multiplicative functional M rather than make any explicit use of the canonical subprocess of Section 3 of Chapter 111.
1. Additive Functionals As in the introduction, X = (0, A, A r ,X r , 0, , P") is a fixed standard process with state space (E, &). Throughout this chapter M = { M , ; f 2 0} will denote a fixed multiplicative functional (MF) of X . We will assume in this chapter, without special mention, that for all w the mapping t + M , ( w ) is right continuous and that M,(w) = 0 if t 2 [(w).The reader should recall that any right continuous M F is equivalent to one having these properties. The following notation will be used throughout this chapter:
S =inf{t: M , =O}.
Clearly S is a terminal time, S s [, and EM consists of those x which are irregular for S. We come now to the basic definition of this chapter. (1.1) DEFINITION. A family A = { A r :t 2 0} of functions from !2 to [0, co] is called an additive functional (AF) of ( X , M ) provided the following three conditions are satisfied: (a) almost surely the mapping t -,A , is nondecreasing, right continuous, continuous at t = S,and satisfies A,, = 0; (b) for each t , A , E Pt; (c) for each t and s, A,+t = A , M,(A, 0 0,) almost surely.
+
Of course the exceptional set in (1.1~)depends, in general, on t and s. However, in view of the right continuity of A , for each fixed t 2 0, (1.1~) holds for all s almost surely. Approximating S from above by countable valued stopping times, (1.1) implies that almost surely A , = A , for all t 2 S. We will sometimes write A(t, w) for A,(w) and A ( t ) for A , . Obviously by redefining A , on a set r E 9 with P"(T) = 0 for all x we may assume that the regularity properties in (1.1a) hold for all w. This we will do without special mention. Suppose that A satisfies all of the conditions of Definition 1.1 except that t -+ A , is not assumed to be continuous at t = S. If we assume that the other conditions in (l.la) hold for all w, as we may, then As- = lim,Ts A , exists. It is easy to check, using the fact that S is a
1.
149
ADDITIVE FUNCTIONALS
terminal time, that if we redefine A , so that A , = A,- for t 2 S, then (I.lb) and (1. lc) remain valid. In particular the redefined functional satisfies all the conditions of Definition 1.1. We define A,(w) = lim,+mA,(w). We may assume that this limit always Clearly A , = A , = A s - . exists in [0, a]. In case T is a terminal time such that T I and M , = IcO,T)(t),an A F of ( X , M )will be called also an AF of ( X , T). We will call an A F of ( X , 5) simply an A F of X . The reader should check that in this case (1. lc) reduces to (*) of the introduction. At first reading it might be reasonable to consider only additive functionals of X . This will eliminate some technical difficulties and is easily accomplished by replacing M , by 1 and S by [ throughout.
(1.2) DEFINITION.Two additive functionals A and B of ( X , M ) are equivalent provided that, for each t 2 0, A , = B, almost surely on {S> t } . In view of the right continuity of AF’s, this is equivalent to the statement that almost surely t A,(w) and t -,B,(w) are identical functions on [O,S(w)). Clearly one may replace [0, S(w)) by [0, a] in the preceding sentence. Equality between AF‘s will always be understood to mean equivalence. It will be convenient for later work to introduce a stronger condition than ( I .1c). (1.3) DEFINITION.An AF, A , of ( X , M ) is perfect provided there exists a set A in 9 with P”(A) = 1 for all x and such that A,+,(w)= A,(w) + M,(w) [A,(B, w ) ] for all t and s whenever w E A.
Similarly we will say that an MF, M , of X is perfect provided there exists a set A E 9with P”(A) = 1 for all x and such that M,+,(w) = M,(w)[M,(B,w)] for all t and s whenever w E A. We will show in Chapter V, under a slight additional hypothesis, that if M is perfect then any continuous AF of ( X , M ) is equivalent to a perfect AF of ( X , M ) . We remark that our use of the term “perfect” is different from that of Dynkin [2]. Let M and N be two MF’s of X and let B be an AF of ( X , N ) . Let S = inf{t: M ,= 0) and R = inf{t: N , = 0). Suppose that f E S*,and that a 2 0. Frequently we will need an integral of the form (1.4)
S(X.1 e-‘” M udB,
=J(f,
0,
s,O,t,
and so we need some discussion of its meaning and properties. In the first place if w is such that u + X,(w) is right continuous then, by an argument we have used several times already, the mapping u +f(X,(w)) is B*measurable. (W* is the o-algebra of universally measurable subsets of [0,a].) Consequently
150
IV.
ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
the integral of f ( X u ) e - a " M uagainst the measure induced by the nondecreasing function u + B,, is defined almost surely. This is the meaning of (1.4). Observe that dB,, attributes no mass to the interval [R, 00) according to (1.la). Next we wish to show that J ( f , t ) E 9,.This is obvious if t = 0 and so we assume that t > 0. Suppose first that f is bounded and continuous and define a(n, k ; w) =
inf
f(X,(w)) e-" M,(w),
k k+t 2"<us2"
k ( k + 1) Y,,(u, W ) = a(t1, k ; W ) if - < u I2" 2"
for n 2 1, k 2 0. Plainly a(n, k ) E @ ( k + l ) , 2 n . by K/2" < t I (K 1)/2". Then
+
$-
For a fixed value of n define K
a(n, K ) (B, - B K / 2 " ) *
But the right side of (1.5) is in 9,+2..m, and as t z + co the left side of (1.5) implies that increases to J(f; t ) . Thus the right continuity of the family IS,} J ( f , t ) E 9,.In (1.4) we may replace (0, t ] by (0, co). Let us then write J ( f ) in place of J ( f , t ) . Our discussion applies to J(f) as well and shows that J ( f ) E 9 iff is positive and continuous. In the extension of our results to an arbitrary f~ &*, we will simplify matters by assuming that J(f, t ) is bounded as a function of o for each bounded continuousf. This assumption can be dispensed with by another passage to a limit. (Consider the integral over the interval (0, t A T,) where T,, = inf{u: Bu 2 n } and then let n + co.) It follows from this assumption in the usual way that J ( f , t ) E 9, iff E b+ . Finally suppose f E b l * , . Given a finite measure p on 8,choose g, h E b b such that g I f 5 h and v(g) = v(h) where v is the measure v(A) = E p { J ( I At, ) } , A E 8.Then J(g, t ) =J(h, t ) almost surely P p and hence J ( f , t ) E 9,.The extension to arbitrary f E S*,is by monotone convergence. All that we have said remains valid if t is replaced by an arbitrary {@,} stopping time. We leave the details to the reader as Exercise (1.16). (1.6) PROPOSITION. Let R be a terminal time with S IR I almost surely, let B be an AF of ( X , R),and suppose that f~ b&*,. Define
If a.s., ( A , < co, t < S } c {B,< a}then ( I .7) defines an A F of (A', M ) .
1.
ADDITIVE FUNCTIONALS
151
Proof. The supplementary hypothesis certainly holds i f f is bounded away from zero. The validity of Conditions (a) and (b) of Definition 1.1 is clear. In checking (c) we must consider A(t
+
S)
- A(t) =
1 (1.1
+ sl
J ( X , , ) Mud B , .
By a change of variable this integral becomes
This equals 0 if t 2 S. Using the remark following Definition 1.1 and the fact that R 2 S almost surely, this last expression equals
almost surely on { S > t } , completing the proof. The hypothesis in Proposition 1.6 that f be bounded is only for the purpose of ensuring that t + A , be right continuous. Of course it is not needed if t + A , is known from other considerations to be right continuous, for example if A , is finite for all t. The most important special case of Proposition 1.6 and the remarks following its proof is this: suppose f E 8: and
Then { A , } is an AF of X , at least if A , is finite for all t or even if almost surely t + A , has the property that A , + , = co for all E > 0 implies A , = 00. (1.9)
PROPOSITION.Let A be an AF of (A', M ) and define
(1.10)
(where we set (Mu)-' = 0 if M u= 0). Then B is an AF of (A',S ) . The proof is straightforward, and so we omit it. Of course, the functionals A and B are related by (1.7) when we take f there equal to 1 on E. DEFINITION. An AF, A , of (A', M ) is called a strong additioe functional (or is said to have the strong Markov property) if whenever T is an {g,} stopping time and R E A, R 2 0, then almost surely
(1.11)
152 (1.12)
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS AT(o)+R(o)(W)
= AT(m)(W)
+ MT(m)(m)
[AR(o)(eTW)l*
Obviously any perfect AF is a strong AF. Usually when writing expressions such as (1.12) we will omit the w's. For example, (1.12) will be written AT+R
= AT
+ MT[AR(e7')l*
Observe that A , 0 0, = AR.@,(&) # A R ( & ) in general. The next result gives a simple sufficient condition for an AF to be a strong AF. PROPOSITION.Let A be an A F of (X,M )and suppose that M is an SMF (strong multiplicative functional). Then A is a strong additive functional.
(1.13)
Proof. Define an AF, B, of (X, S ) by (1.10). Since B, and A, are related by (1.7) (withf= 1) and M is an SMF by hypothesis, it is clear that we need stopping time T and u 2 0 show only that for every {F,} (1.14)
BT+" = B ,
+ Ico,s,(T)( B ,
8,)
almost surely. One easily checks that N, = e-Btlco,s,(t)defines an MF, N,of X. Obviously EN = E M because Bo = 0 and t --t B, is right continuous. According to Proposition 4.21 of Chapter 111 an M F is strong if and only if it is regular. Thus if Q is any {MI} stopping time we have for each x P"[XQE E
- E N , N , > 01 IP"[XQE E - E M , Q < S ] =P"[X,€E-E,,
MQ>0] =O,
and so N is regular and hence strong. From this the validity of (1.14) is immediate. Any AF of X has the strong Markov property because the relevant MF, M, = lLo,c)(f), is clearly an SMF. In particular (1 -8) defines a strong A F of X. In actual fact (1 3)defines a perfect AF of X. Moreover we can say a bit more concerning the right continuity requirement in (1 3).Let A be defined by (1.8) and let R = inf{t: A , = co}. Let us suppose that no point is regular for R. Then P"(AR < 00, R < 5) < Ex{pX'R'(R= 0); R < [} = 0.
From this it follows that almost surely t + A, is continuous (in the topology of the extended reals) throughout [0, co]. We close ithis section by introducing the two classes of AF's which will prove to be of fundamental importance in the remainder of this book. (1.15)
DEFINITION.An AF, A, of ( X , M) is said to be continuous (CAF) if
1.
ADDITIVE FUNCTIONALS
153
almost surely t + A , is continuous. An AF, A , of ( X , M ) is said to be natural ( N A F ) if almost surely t + A , and t + X , have no common discontinuities. Obviously any CAF is an NAF. Since t + A , is constant on [S, 001 and is continuous at S, the discontinuities of t + A , can occur only at points in (0, S ) . The importance of the class of NAF's will emerge in the next section.
Exercises (1.16) The notation and assumptions are those of (1.4). If T is an {9,} stopping time and f E S*,show that J(f, T ) = J ( o , T ] f ( X J e - a u M udBu is FT measurable. (1.17) An {9,} stopping time T is called a perfect terminal time provided there exists a set A E 9 with P"(A) = I for all x and such that, for each w E A, T(0,w ) = T(w) - t whenever t < T(w).Let T be a perfect terminal time . Define To = 0 and such that no point in EA is regular for T. Let f E S*, T,,,, = T,, T Or, for n 2 0, and define A , = f ( X T , ) where the sum is over all n 2 1 such that T,,It (if no such n exist then A , = 0). Prove that A is a perfect AF of X . [Hint: show that i f w E A and Tk(w)I t < Tk+l(w), then t T,,(O,w)= T,,+k(0) for k 2 0 and n 2 1.3
+
1
0
+
(1.18) The purpose of this exercise is to illustrate the relationship between additive functionals of ( X , M ) as defined in this section and additive functionals of the canonical subprocess 8 corresponding to M . In order to avoid technical details which may obscure the basic idea we impose rather strong conditions. Assume that A = 9 and A, = 9, for all t 2 0. Let M be an M F of X and let 2 be the corresponding canonical subprocess. We use the notation of Section 3, Chapter 111, without special mention. Let A^ be an AF of 2 such that 2,E A, for each t . Of course 8 need not be a standard process but this is of no importance in the present discussion. By (3.20) of Chapter I11 for each t there exists A: E A, = 9, such that A,(w, A) = A:(w) if t < A. Show that A,(w, A) = A,*-(w) if t 2 A. Let B,(w)= A:(w) if t < S ( o ) and B,(w)= A,*-(w) if t 2 S ( w ) . Show that B = { B , } is an A F of ( X , S ) . Let A , = j(o,,lM , d B , so that A = { A , } is an A F of ( X , M ) . Show that A , = j(o,,l M udA,*. If A* is the a-algebra defined in (3.6) of Chapter 111, then show that P{A,I A*}= A , IC for each x. In other words if we "integrate out" the auxiliary variable A in a , ( w , A) we obtain an A F of ( X , M ) . Conversely if A is an A F of ( X , M )and B is related to A by ( I . lo), then putting A,(w, A) = B,(w) if t < [(w, A) _< A and A,(&) = At-(&) if t 2 [ defines an AF of 2. 0
154
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
2. Potentials of Additive Fuuctionals
As in Section 1, X = (Q, 4, A,,XI, O , , P")will denote a fixed standard process with state space (E, 8).From now on we will assume that M is an S M F of X ; then, according to Proposition 1.13, any AF of (X, M) is a strong AF. As in Section 1, Swill always denote inf{t: M , = 0}, which, in the present situation, is a strong terminal time. Of course we still assume, as in Section 1, that M,(w) = 0 if t 2 ( ( w ) and that t + M , ( o ) is right continuous for all w . Consequently S I c.
DEFINITION.Let A be an A F of (X, M ) . Suppose u 2 0 and f E 8: We define the a-potential off relative to A , Ujf,by
(2.1)
.
It is immediate from our discussion of the integral (1 -4) that U: : Sf + Sf . When f = 1 we write & for U i l . The function u i is called the or-potential of A . If 4 is finite then the operators U: are given by finite measures, which we denote by U>(x, .) as usual. We recall some notation from Chapter 111: namely, Qff(x1 = e-"' E"{f(XJ Mr},
S,
m
v j ( X )
=
Q;~(X>
dt = E X
J
m
e - " ' j ( X , )M ,d t ,
0
Q; f(x> = E " { e - O T f ( x ~M) T } for a stopping time T, and Y " ( M )for the set of u - (X, M) excessive functions. The family of operators { U:} is not, in general, a resolvent. However, the following relationship is somewhat analogous to the resolvent equation. We formulate it as a proposition but leave its proof to the reader as Exercise 2.18. PROPOSITION. Let f E Bf and let u, p 2 0. If U i f ( x ) and U5 f ( x ) are finite and almost surely A , < 00 on { t < S}, then
(2.3)
U i j ( X ) - U 5 f ( x ) = ( p - u ) V ~ U f : f ( x=) ( p - u ) v ~ u : j ( x ) .
(2.4)
PROPOSITION. If f E Bf and a s . , A , < 00 on { r c S},then
(i) U: f E Y " ( M ) . (ii) If Tis an (9,) stopping time, then
2.
(iii) If D
E 8"and f
POTENTIALS OF ADDITIVE FUNCTIONALS
155
vanishes off D,then
u : f ( x ) = Q"D ua,m
+w-"TDf(X,,)
(ATD
- A T D - 11.
(iv) If A is natural, D open, and f vanishes off some compact subset of D, then U: f = p D U ; f .
Proof. The first two assertions are routine calculations with which the reader is familiar by now. Assertion (iii) is an immediate consequence of (i). Iff vanishes off a compact subset of an open set D and if f(XTD)> 0, then t + X, must have a discontinuity at TD. Consequently if A is natural, then t + A, is continuous at TDand so (iv) follows from (iii). (2.5) PROPOSITION. Let A be an AF of (X, M) such that, for a fixed c( 2 0, u: is finite. Then A is natural if and only if for every f E CK and open set G containing the support off one has
QEZJ: f = U: f .
(2.6)
Proof. The necessity of this condition was established in (2.4iv). If (2.6) holds for all continuous f with compact support in G, then it also holds for f = ZG by montone convergence. Since each side of (2.6) is finite it follows from (2.4iii) with f = ZG that almost surely on {TG< co} either X(TG)# G or A(T,) = A(T,-). Now let B be the AF of (A', S ) defined in (1.10). Plainly A and B have the same points of discontinuity and so it will suffice to show that B is natural. Given E > 0 define
7 ' = inf{t: B, - B,- > E } . A moment's thought shows that T, is an (9,) stopping time, and {T, < co} = {T, < S}since t + B, is constant on [S, co]and continuous at S. It is also easy to check that T, is a strong terminal time. Moreover B(T,) - B(T,-) 2 E if T, c co. Define T f = T, and, for n 2 1, T:" = Tf + T, 0 8,:. The discontinuities of t + B, occur only at the points T: as E ranges over the positive rationals, and so to prove that B is natural it suffices to prove that almost surely t -+ X, is continuous at T, when T, c S. Suppose that, for some E > 0 and x E E, P x { t --t X , is discontinuous at T,; T, c S } > 0. Then since E has a countable base there will exist a nonnegative h E CK and positive numbers p and r such that (2.7)
a
P"{h(X,) < for t E [ r , Tz);h ( X , ) >
a; T, c S} > 0.
If we denote by A the set whose probability is being computed in (2.7) and
156
1V. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
let G = {h > fl}, then on A, T, = r + TGo 8,. Since t + A, and t - t B, have the same discontinuities we see that P"(A) does not exceed
P"{A(r + TG 0,) # A[(r + TG Or)-]; 0
0
X T c 8,
=E"{Px'"[A(T~)# A(TG-); X(T,)
0
E
E
G ; r + TG 8, < S } 0
G ; TG < S ] ; r < S } ,
and this last expression is zero by the first part of the proof. Thus Proposition 2.5 is established. The main result of this section is that a natural additive functional A of (X, M) is determined by its potential 4 provided that the potential is finite. The next few propositions lead up to this result.
(2.8) PROPOSITION. Let A be an AF of (X, M) and let B be related to A by (1.10). Suppose for a fixed a 2 0, & is everywhere finite. Then for every fE8*,.
Proof. It suffices to carry out the proof for a > 0 and nonnegative continuous fwith compact support, for each side of (2.9) is a measure infand each side is right continuous in a. Assumefand a are so. The left side of (2.9) is
We may assume that the expression in braces is continuous on the right in t . Indeed, let W"f be defmed as in (4.9) of Chapter 111. The resolvent { W'} is exactly subordinate to {U"}and so W y i s continuous in the fine topology. But W"f(x)= V y ( x ) for all x E EM, while for all t < S, X , E EM. Thus the expression in braces is just W"f(X,)if t < S. This being so we have (2.10)
ud,V " f ( x )
where indeterminate quotients O/O are set equal to 0. The last limit in (2.10) is just EX/:
[ /,se-'uj(x,,) M,
d u ) B,
2.
POTENTIALS OF ADDITIVE FUNCTIONALS
157
The equality (2.9) now follows from an integration by parts in this last expression. (2.11) PROPOSITION. Let B' and B2 be AF's of (A', S). Suppose that, for all t 2 0, x E E, and f E b bf , Ex{f ( X , ) B:M,} = Ex{f (X,)B:M,} < 00. Then B' and BZ are equivalent.
Proof. A straightforward induction argument shows that, for 0 Itl < .. . < t,, 5 t and fl, ...,f,, E bbf , Ex( ,&(A',,) B:'M,) is independent of i. Consequently for every E 9,, E"(B;M, ; r) is independent of i and so almost surely B: = B: on {t < S}.
n'&
(2.12) PROPOSITION. Let A' and A' be AF's of (X, M). Suppose that, for some fixed a 2 0, G I< 00, and that, for every nonnegative continuous f with compact support in E, U i l f = U:2f. Then A' and A 2 are equivalent.
Proof. It follows as usual that, for every f E Sf , U j i f is independent of i. Then from Proposition 2.3 it follows that, for every /3 2 a, U$,f is independent of i, and hence so is U $ i V s f . According to Proposition 2.8 then for all Bra
Ex
j
m
0
Bf M,f ( X , ) e-sr dt
=
e-B1E"(B:'M,f ( X , ) ) dt
is independent of i, where B' is related to A' by (1.10). Now B; M , 5 A: and, by hypothesis, 00 > IpAi(x) 2 e-',EX(A:). Consequently iff is bounded and continuous, then E"(B,'M,f(A',)) is right continuous in t. From this and the uniqueness theorem for Laplace transforms it follows that E"(B:M, f( X , ) ) is independent of i for f bounded and continuous. The equality extends immediately to f E bd'f because of the finiteness estimate we have just made. Proposition 2.1 1 now implies that B' and B2 are equivalent, and hence so are A' and A'. We are now able to state the main theorem of this section, a uniqueness theorem for additive functionals. (2.13) THEOREM.Let A and B be natural additive functionals of (X, M) and suppose that, for some fixed a 2 0, 4 < co. If u; = G,then A and B are equivalent.
Proof. Because of (2.12) it will suffice to show that for every nonnegative f in CK we have Ud,f = U i f . Given such an f and a number E > 0 let us define T, = inf{t : If(A',) - f ( X , ) l 2 E }
158
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
+
and To = 0, T,,,, = T,, T, BTn for n 2 0. If T,,,, c 00 then If(XTn+l)f(XTn)I 2 E by the right continuity of the paths and the fact that f is continuous. Because the paths have left limits it follows that lim T,, 2 [. Now If(X,) -f(XTn)l 5 E for all t in [T,,, T,,, l). This inequality is valid also for t = T,,,,if the path t -,X,is continuous there. If the path is not continuous at T,,,, < 00 or if T,,,, = 00, the measure dA, puts no mass at T,,,, almost surely by the hypothesis that A is natural. So in any event almost surely
Ij
0
e - a ' f ( x f ) dAf - f ( X T n )
(TnvTn + 11
I
(TnBTn + I 1
e-"' dAf
1
j(T n,
e-"' d A , . n+ll
Consequently
(2.14) where ILI I u:(x). A typical summand on the right of (2.14) is
Now for each y E E we have
(2.15)
e-" dAf = u i ( y ) - Q;= u:(y).
EY '(O.Te1
But all of these calculations are also valid when A is replaced by B throughout, while by hypothesis the right side of (2.15) is unchanged if one replaces A by B. Therefore the infinite sum in (2.14) is also unchanged if one replaces A by B and since E is arbitrary it follows that U; f = U;f.This completes the proof of Theorem 2.13. The example in Exercise 2.16 shows that Theorem 2.13 is not valid without the assumption that both A and B are natural.
Exercises (2.16) Let X be the Poisson process on the integers with parameter 1, that is, the process constructed in Section 12of Chapter I with E = {. . ., - 1, 0, 1, . , .}, l ( x ) = 1, and Q(x, .) = E,, for all x E E. Let A, be the number of jumps of U-P Xu in the interval [0, t] and let B, = r. Clearly A and B are additive for all a > 0. functionals of X . Show that 4 = = CL-'
,
2.
POTENTIALS OF ADDITIVE FUNCTIONALS
159
If A is an A F of ( X , M ) andJ'E S*,is bounded away from zero, then U: f E 9'"(M). In particular u i is a - ( X , M ) excessive. Show by an example that Uifneed not be in Y " ( M )even if it is bounded ( f 8: ~).
(2.17)
(2.18)
Prove Proposition 2.3.
Let A be an AF of ( X , M ) . The functionf,(x) = Ex(A,) is called the characteristic of A . (i) Show thatf,(x) has the following properties: (a) f,E 8: for each t 2 0, (b) t +f,(x) is nondecreasing, fo(x) = 0, and t +fr(x) is right continuous on [0, u ) providedf,(x) < 00, (c) Sr+s(x) =f,(x) + Qr L(x) for all t and 3. (ii) Show that &(x) = a e-"'f,(x) dt for each a 2 0. (iii) If EM is nearly Borel measurable (since we are not assuming that M is exact this is not automatic) show that, for each t 2 0, f,is nearly Bore1 measurable and finely lower semicontinuous on EM and even finely continuous on EM iff, is finite for some u > t . [Hint: for a fixed t let Y,, = A , A n. If g,,(x) = Ex( Y,,) show that aU'g,, +g,, on EM and conclude from this that f, is nearly Borel measurable. Adapt the argument of (4.14), Chapter 11, to show that g,, is finely continuous at each point of EM and that if fu is finite for some u > t, then f,is finely upper semicontinuous at each point of EM .] (iv) Suppose that f,(x) is finite for all 1 and x and that g is a bounded continuous function on E. If A is natural show that for each t and x
(2.19)
jr
1
n- 1
dxu)dAu = lim C Qkt/n(Sf/n)* Ex (OJI n k=O Let A and B be NAF's of ( X , M) and assume that EM E 8".If A and B have the same finite characteristic, than A and B are equivalent. [Hint: to begin assume that A is an AF of ( X , M ) withf,(x) = Ex(A,)finite, and hence f, E :'6 and is finely continuous on EM for each t 2 0 by (2.19iii). Let Bk = {x:fi(x) > k } and let Tk = T,, . If Mk(t)= M , { o, Tk)(t),then show that A k ( t )= A(t A T,) defines an AF of ( X , M k )which is natural i f A is. In addition show that A k has a bounded a-potential for any a > 0. Next show that almost surely lim, Tk 2 S and so Ak(r)+ A ( t ) for all t as k + m. Now let A and B be two AF's of ( X , M ) with the same finite characteristicf,(x). Show that if T is a bounded stopping time than Ex(AT)= Ex(&) for all x by approximating T from above by stopping times taking on only finitely many values. Conclude that if Ak and Bk are defined as above, then Ak and Bk have the same bounded characteristic. Finally prove the assertion of (2.20).] (2.20)
160
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
(2.21) Let A be a CAF of ( X , M).Let B be related to A by (1.10). (i) Let cp(x) = Ex j: e-' M , exp( - B,) dt. Use (2.20i) of Chapter I1 to show that if R = inf{t : A , = co}, then
V: q ( x ) = E x
e-'M,(1 - e-Bt)dr I 1. jOR
(ii) Let h, be the indicator function of {cp > l/n} and let A,(t) = yoh,(X,)dA,. Show that, for each n, A , is a CAF of (A', M ) with a bounded 1-potential and that A,(t) A ( t ) as n -+ co. (iii) If, in addition, M is exact show that cp is nearly Bore1 measurable and finely continuous. (2.22) Let A , B, and M be as in (2.21) with M exact. Assume that for a fixed is finite. For p 2 0 andf define a 2 0,
ES*, m
vj j ( x ) = E X
S,
e-BBt e-a'
f (X,) dAf
*
Show that, for each /?2 0, N! = M , exp( -PB,) is an exact MF of X and that Vzfis a ( X , NB)excessive for each f E b*,. Show that the family { V:; p 2 0 } satisfies the resolvent equation and that 11 V$ll I p-' for p > 0. In particular note that Ui V$ = V{ Ui = ( l / p ) [ U ; - V j ] for each p > 0. Let { V y }be the resolvent corresponding to M and let { W y }be the resolvent corresponding to NB. Show that BUj W" = V" - W"= pv,BV".
-
3. Potentials of Continuous Additive Functionals In this section we are going to characterize the potential of a continuous additive functional of ( X , M ) . The following basic assumptions will be in force throughout this section. X = (Q, A, A,,X , , O , , P") will be a fixed standard process with state space (E, b), and, in order to simplify the terminology, we will assume that A = f and A, = 9,. Therefore a stopping time will always be an (9,) stopping time. M will be a fixed exact multiplicative functional of X with M , = 0 if t 2 c. As in Section 1, S = inf{t: M , = 0). In the present case S is a strong terminal time and S I C. We will denote the semigroup and resolvent generated by M by { Q,} and { V " } ,respectively. As before uA(x)= E"A(co) = E"A(S) is the potential of A . We saw in Section 2 that A(t + T ) = A ( T ) + M T A , 0 , almost surely whenever T is a stopping time. It follows easily from this that for R E 9+ and T a stopping time we have A , = AT + MTAR-T((&) almost surely on { R 2 T } . If A is an 0
3.
POTENTIALS OF CONTINUOUS ADDITIVE FUNCTIONALS
161
additive functional of ( X , M )with finite potential uAand Tis a stopping time, then it follows from (2.4)that
(3.1)
QT
~ A ( x= ) E"{A(a)= E"{A(S) - A ( T ) } .
Consequently if A is continuous and {T,,}is an increasing sequence of stopping times with limit T then QT,uA+ QTuA as n + co. Motivated by this fact we make the following definition.
(3.2) DEFINITION, A finite ( X , M ) excessive function f is called a regular potential (of ( X , M ) )provided that Q T , f + QT f whenever { T,,}is an increasing sequence of stopping times with limit T. We have just observed that the finite potential of a CAF of ( X , M )is a regular potential of ( X , M).The main purpose of this section is to prove the converse of this statement. Let f be a fixed finite ( X , M )excessive function and define for each n 2 1 (3.3) Each f, is ( X , M )excessive and {f,,} increases to f as (3.4) PROPOSITION. n-co.
Pro05 We may write f, = Ji Q,,,,f dt and Q,f,= Qs+,,,, f dt. Since f is ( X , M ) excessive, Q s + , , , , f lQ,,,,f and Q,,,, f increases to f as n +oo. This makes both assertions in the proposition obvious. Given E > 0 we define
(3.5)
B, = ( x : f ( x ) -f,(x)
2 E}.
Each B,, is nearly Bore1 measurable and, since f and f, vanish on E&, B,, c E M . Moreover {B,,} is a decreasing sequence of sets with empty intersection, and each B,, is finely closed in E M .As in Section 5 of Chapter I11 let S, = inf { t : M , I l/p}; then {S,} is an increasing sequence of stopping times with limit S. (3.6) PROPOSITION.Let T,, = TBnand let T = lim T,,. If limn Q T n f = then limn P"(T, < S,,) = 0 for all p and x.
QTf,
Proof. Recall that if g is ( X , M ) excessive, then t + g ( X , ) is right continuous a s . on [0, S), and so f ( X T , ) - f , ( X T n ) 2 E almost surely on {T,,< S}. Thus
162
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
for a fixed x and p we have E
- P"[T, < S, for all n] P lim inf[f(XTk) -fk(xTk)]; T,,< S , for all n lim inf E"{[f(xTk) -fk(xTk)l MTk} k-tm
5 lim inf E"{[f(X,k) -fj('Tk)] k+
MTk)
m
for each j. Since QTnf j 2 QTfjwe obtain E
- P"[T,,c S, for all n] I lim inf[QTkf(x) - QTfj(x)] P k = Q ~ f ( x) QTfj(x),
and lettingj- co yields P"[T, < S,, for all n] = 0. Since {T,,} is an increasing sequence this evidently completes the proof of Proposition 3.6.
(3.7) REMARK. Since f -f. = n j:ln (f- Q,f) dt ~f - Ql,,,f. if we let A, = {x:f(x) - QlInf ( x ) 2 E } then B,, c A,. Therefore TA,I T,, and so the conclusion of Proposition 3.6 holds whenever limnP"[T,, < S,] = 0. An obvious consequence of Proposition 3.6 is that limn T,, 2 S and that {T,,< S for all n} is contained in { M s - = 0} almost surely. It is equally clear from Definition 3.2 that Q, f --+ 0 as t + co whenever f is an (X,M) regular potential. The next result is the main step in our task of characterizing the potentials of continuous additive functionals of (X,M). (3.8) THEOREM. Let f be a bounded (X,M) excessive function and suppose (i) Q ,f + 0 as t --+ 03, (ii) for each E > 0, limnP"(T,, < S,) = 0 for all x and p where T,, = TBnand B,, is defined in (3.5). Then there exists a CAF, A, of (A', M) whose potential i s 5
REMARK.The preceding discussion shows that any bounded ( X , M) regular potential satisfies the hypotheses of Theorem 3.8. Moreover A must be unique up to equivalence according to Theorem 2.13. Proof. Let g,,= n( f - Q,,,, f ) ; then the fact that Q, f + 0 as t + co implies that Vg,, =f.wheref,, is defined in (3.3). We next define
(3.8 bis)
A Y ~ )= S'sn(xu) 0
~u
du,
3.
POTENTIALS OF CONTINUOUS ADDITIVE FUNCTIONALS
163
and clearly A" = {A"(t)} is a CAF of (X, M) for each n. If u,, denotes the potential of A", then u,, = Vg,,=f n T f as n -+ 03, and so it is clear that the CAF that we are searching for is in some sense the limit of the A". For each x and t
Consequently if we let e,,(t)= A"((t)+ M, f . ( X , ) , then {e,,(t),P t ,P"} is a nonnegative martingale for each n and x. Since o ( U r 9,)=9we have en(03) =
lim en(t) = E"{A"(co) I 9} = A"(co) = A"(S) t+m
almost surely. It now follows from the extension of Kolmogorov's inequality to martingales (Doob [I, p. 3151) that for each 6 > 0 le,,(t)
- e,(r)l
I
26 I =
We now estimate this last expectation:
Jnm
E"{[e,,(oo)- e,(03)]~} E"{[A"(oo)- A"(co)]2}.
164
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
However arguing as above we find
I2llf
1I2Y
and so i f n 2 m Jn,m I 2 3 / 2II f II {Ex(Hm)I'/2* (3.9) Of course E"(H,,,) = 0 if x # EM. Recalling the definition of S, we define
Hm.p -
SUP o
{ ~ t C f ( x-frn(xt)II2* J
Note that if x E EM and p 2 2, then P"(S, > 0) = 1. Also note that the suprema defining Hm,, and H,,, are almost surely unchanged i f we replace the condition t > 0 by t 2 0. Clearly H,,, I H,,,,, + ~ - ~ I l f l l ~ . On the other hand given E > 0 and recalling the definition of Tmwe may write for x E EM and P22 E"(Hm,p)
c S p ) + EX(Hm,p;T m 2 S p ) I llf1I2 P"(T,,,< S,) + E'. = E"(Hm,p; T m
It is now clear that E"(H,,,)+ 0 as m 3 co for all x, and hence J,,, -,0 as n and m approach infinity. It is immediate from this estimate that, for all 6 > 0, P"[supr2,(en(t) em(t)l 2 6]+ 0 as n, m + co. Also the Tchebychev inequality implies that for any S > 0 and n > m
which approaches zero as m obtain
--f
00.
Combining these two statements we
for each 6 > 0 and x E E A . Thus { A : } converges in measure uniformly in t with respect to each P", and hence also with respect to P' whenever p is a finite measure on 8 : . Let 9 be the class of all measures on 82 with mass less than or equal to one. Then for each I E [0, 00) and p E 9 there exists A: 2 0 in Srsuch that A: 5 A"(t) A; in P ' measure as n -P co. But E " { [ A " ( c ~ )5] ~211f1I2 } and so for each t and p E 9 the family {A:} is P' uniformly integrable. Therefore E'{A: Y}+ E'{A: Y} as n + 00 for all Y E bF. We will write A: for A ; when --f
3.
165
POTENTIALS OF CONTINUOUS ADDITIVE FUNCTIONALS
< a,and p be given. Let X , = 6;' 9. Then by (2.7) of Chapter 0, Y E bX', if and only if there exists 2 E 6 9 such that Y = 2 o 0, . Consequently, given Y E b 9 there is a 2 E b 9 such that E'( Y10s-19}= 2 0 OS. Therefore if v = pPs we have p = E,. Let 0 It, s
E'{(A:
es)}
8,) Y 1= E ~ A :e,)(z = E'{E*'""A:Z]} -+
= E'{Af Z }
E V { A ; Z } = E ~ ( A ;e,)(z 0 0,)) 0 0,)Y).
= E'{(AI
But
and so letting n -+
(3.11)
00
we obtain
A,",, = A:
+ M,(A;
a.s. P",
O,),
0
where v = pl', . If Y E b 9 , then x -i Ex{ YA:} is S,*measurable, and hence so is its limit x - + Ex{ YA:}. Now for a fixed t, let @ = A:PX so that is a finite measure on (R, 9) and x -+ @(A) is 8::measurable for each A E 9. Obviously 4 P" for each x . (Also x -iP"(A) is in S,*for each A E P}.) In view of the right continuity of the paths, it is clear that 9 is countably generated up to completion; that is, there exists a countably generated a-algebra 9 such that 9 = B p r with p ranging over 8.(Lf t -i X,(w) is right continuous for all w , then we may take B = Po.) Thus by the theorem of Doob used in the proof of (2.3), Chapter 111, we can find a function &(x, w ) which is S,* x 9 measurable and such that w &(x, o)is a density for relative to P" for each x. Consequently for each x
e.
0,
e.
-+
~ ; ( w= ) B,(x, w),
a s . P".
Define B,(w) =B,(X,-,(o), w ) ; then B, E 9. Now if Y E b F then E x { A : Y } + E'{A:Y} = E X { B , Y } ,
and integrating with respect to p we see that for each p E 8 a.s. P'.
A: = B,,
In particular this implies that B, E F,for all t. Next if v = pPs then P'{A)
0
8, # B, 0 0,)
= P'{A," #
B,} = 0,
and so it follows from (3.11) that for each fixed t and s
(3.12)
B, + s = B,
almost surely. In addition since
+
0
0s)
166
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
andf, ff, we see thatf(x) - Q,f(x) = E"{B,} for all x and t < co.It follows from (3.12) that B, IB, almost surely if t 5 s. Now define A, = inf B, s>r SSQ
where Q denotes the rationals. Then A, E F,+= 9, and B, 5 A, almost surely for each t . Clearly t + A, is right continuous and nondecreasing almost surely. But for each t < co E"{A,} = lim EX{B,} = lim[f(x) - Q , f ( x ) ] sLt saQ
84,
seQ
=f(x) - Q, f(x) = EX{Brl, and so A, = B, almost surely for each t . Let A, = sup, A , . Then A, = lirnfTmA, almost surely and so EX(Am)= lirnftm E"{A,} = lirn,?,[f(x) Q,f(x)] = f ( x ) . Finally P X { A ,o 8, # B,0 0,) = E"{PX'S'[A,# B,]}= 0,
and so it follows that (3.12) holds with B replaced by A, thatf(x) = E X { A m } , and that t + A,(w) is almost surely right continuous. (Plainly A, = 0, a s . ; however, we do not as yet know that t + A, is almost surely continuous at t = S.)We will complete the proof of Theorem 3.8 by showing that t + A, is almost surely continuous on [0, co). Let x E E, be fixed; then using (3.10) one can find a sequence {nk}, depending on x in general, such that
L
I
P" sup IA"'(t) - A"'(2)l 2 2-k 5 2-k provided n, 2 nk . It now follows by standard reasoning that (A"'(t, w ) } converges uniformly on [0, co] to a finite limit almost surely P". Let A be the set of those w's such that {Ank(t,w ) } converges uniformly in t on [0, a]; then A E 9 and P"(A) = 1. Let B"(t, w ) = lim, Ank(t,w ) if w E A and B"(t, w ) = 0 for all t i f w $A. Thus t + B"(t, w ) is continuous for all w andAnk(t,.) -+ B"(t, .) almost surely P" for each t. But by our previous construction if Y E b 9 and t < 00 then Ex{ YA:} + E X {YA,}, and so it follows that, for each fixed t < coy A, = B: almost surely P". Finally since t + A, is almost surely right continuous, we see that the functions t -+ A,(w) and t + BX(t,0 )agree almost surely P". But x was arbitrary and so t + A, is almost surely continuous. This completes the proof of Theorem 3.8. We now remove the restriction that f be bounded.
3.
167
POTENTIALS OF CONTINUOUS ADDITIVE FUNCTIONALS
(3.13) THEOREM.Let f be a regular ( X , M )potential. Then there exists a CAF of ( X , M) whose potential is$ For each positive integer k let R; = T{/,,) and R, = min(R;, S). Thus { R , ) is an increasing sequence of strong terminal times. Now f is a finite (X, M) excessive function and so, for each x, {MRkf(XRk),FRk ,P"} is a nonnegative supermartingale. Hence almost surely MRkf(XRk)approaches a limit L as k + 00 and EX(L)<j"(x). It is now evident that Px(R, S,, for all k) = 0 for each x and p . In particular R, t S almost surely and Rk = S for > 0 ) almost surely. We now define gk =f - Q R k J large enough k on (Msand sincefis a regular ( X , M )potential it follows that g k t f a s k + 00. Let N k ( t )= M ,ZLO,Rk)(t)= M,z [ o , R i ) ( f ) . In the remainder of this paragraph k will be held fixed and so we will drop it from our notation. Plainly N = (N,} is a right continuous MF of X. Moreover both M and Zco,Rr) ( t ) are exact and hence so is N according to (5.20) of Chapter 111. We are now going to show thatg =f - Q,fisaboundedregular(X, N)potential. Let {K,}denote the semigroup generated by N. First note that E N c {x: P"(R > 0) = 1) c {fsk}. If R, = min(R, t ) , then using the fact that R is a terminal time we find Pro05
-=
If x E E N ,then K fd X )
+
E"{MOf(XO)
-MRf(XR);
> O)
- Q R ~ ( x ) =g(x> as t -+ 0. On the other hand if x # EN, then it is easy to see that g(x) = 0. Thus g is ( X , N) excessive. Moreover g(x) s f ( x ) I k on {flk} =I E N and g = 0 off E N .Therefore g is bounded. Finally we must show that g is a regular ( X , N) potential. Let {Tn)be an increasing sequence of stopping times with limit T. If Ti = T, A R and T' = T A R, then TA t T' and so KT,,d x ) = E"{[f(x~,,) - Q R ~ ( X T , , ) ] M TTn ,; < R} = E " ( M T , ~ ( X T , ,-) M R ~ ( X R ) T, ;
Q T , ~ ( x-) Q R ~ ( x )= K T ~ ( x ) -
Hence g is a bounded regular ( X , N)potential.
168
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
We now bring the parameter k back into our notation. Since g, is a bounded regular (X, Nk)potential there exists a CAF, Ak, of (X, Nk)whose potential is g, , Recalling the definition of N kwe have
+ s) = ~ k ( t )+ M ,
e,
almost surely on {t < R,}. If n > k it is easy to check that t + A"(? A A,) is a CAF of (X, N,), and that E"{A"(R,)} = EX{A"(Rn) - [A"(R,) - A " ( R , ) ] } = g,(x)
- EX{A"(Rn) - A"(R,); R, < R,,}.
But gn =f - Q R , f while the term to be subtracted from it is just E " { M R k A"(Rn) 0 ORk; R, < Rn}
- Q R n f ( x R k ) l M R k ; Rk < = E " { M R , ~ ( X R , ) - M ~ , , f ( x ~ t . ) ;R, < R,}
=E"{[f(XRk)
=QRkf(x) - Q ~ , f ( x ) -
Thus the potential relative to (X, N') of A"(t A R,) is g, and so the uniqueness theorem implies that A"(?)= A k ( t ) on [0, R,. almost surely. Now R, T S and so if we define A ( t ) = limnA"(?), then this exists on [0, S ) almost surely and the above compatibility implies that A is continuous and finite on [0, S) and that A(t s) = A(t) M , A(s) 0 8, almost surely on {t + s c S}. If we now define A ( t ) = limuTsA(u) when t 2 S, then it is routine that A is a CAF of (X, M). (Of course A ( t ) = 0 for all t if S = 0.) Finally A(&) T A ( S ) and consequently
+
+
E"{A(S)} = lim E"{A(R,)} n
= lim E"{A"(R,)} n
= lim g,(x) = f ( x ) . n
Thus the proof of Theorem (3.13) is complete. Suppose that f is a - (X, M) excessive. Then f is (X, N) excessive where N , = e-" M, . Moreover if A is a CAF of (X,N),then B, = yoeuUdA, defines a CAF of (X, M). Consequently Theorem 3.13 implies the following result. (3.14)
that
COROLLARY. Let f be a finite a - (X, M) excessive function such
pTn f pTf whenever {T,,}is an increasing sequence of stopping times
3.
POTENTIALS OF CONTINUOUS ADDITIVE FUNCTIONALS
169
with limit T. Then there exists a unique CAF, A , of ( X , M )whose a-potential is f ; that is,
Suppose now that M is a perfect MF. Then it is immediate that each of the approximating additive functionals A" defined in (3.8bis) is perfect, and it is natural to ask if A may also be taken to be perfect. We close this section with a preliminary result in this direction. Further results on this problem will be given in Chapter V. (3.15) DEFINITION. A function f in &'*, is said to be uniformly (X,M ) excessive provided that (i) f is bounded, (ii) f € Y ( M ) , and (iii) QJ+f uniformly on E as t + 0. (3.16) THEOREM. Let M be perfect (in addition to being exact and vanishSuppose that f is uniformly ( X , M )excessive and that Q,f + 0 ing on [(, a]). as t + 00. Then there exists a perfect CAF of (A', M )whose potential is$ Proof. First note that f satisfies the hypotheses of Theorem 3.8 since for each E 0 the set B, defined in (3.5) is empty if n is large enough. Thus exactly as in the proof of Theorem 3.8 one obtains the estimate (3.9). But now H,,, I 11 f -f,,,II2 + O as m + co and so (3.10) may be strengthened to
=-
lr
P" sup IA"(t) - A"(t)l 2 6
as n, m + a
uniformly in x for each 6 > 0. Therefore one can find a sequence {n,} independent of x such that
L
P" sup IAk(t) - A'(t)l 2 2-11 5 2-' for all x provided j 2 k where, for notational convenience, we have written Ah for Ank. Hence { A k ( t ) } converges uniformly for t E [0, 001 almost surely. Let A be the set of those w for which A k ( t ,w ) converges for all t E [0, S(w)). Then we define A(r, w) = lim, A''(f, w ) if t < S(w) and w E A, A(t, w ) = limutS((I,)A(u,w) for t 2 S(w) and w E A , and A ( t , w ) = 0 for all t if w $ A . Since 6;'A n { S > t } c A for each t and since ZKo,s)(t)is also perfect, it follows that A is a perfect A F of ( X , M ) . Clearly A is continuous and ziA = J Thus Theorem 3.16 is established.
170
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
Exercises
(3.17) Let f,(x) be a nonnegative finite function defined for t 2 0 and x E E. Suppose that: (i) f,( * ) E b* for each t 2 0, (ii)fo(x)= 0 and t +f,(x) is right continuous at 0 for all x, and (iii) f r + s =fr + Q,Lfor all t and s. Show that t +f,(x) is nondecreasing and right continuous on [0, a).If Q)
e-"'ft(x) dt,
ua(x)= a 0
then show that u" E Y " ( M ) . Let a 2 0 be fixed and suppose that whenever {T,,} is an increasing sequence of stopping times with limit T one has pT, f,+ pT f,as n IXI for almost all t (Lebesgue measure). If, in addition, f,(x) is bounded, then show that there exists a CAF, A, of ( X , M ) such that f,(x) = E"(A,) for all t and x .
(3.18) Give an example of a bounded regular potential which is not uniformly excessive. [Hint: let X be uniform motion on the line and M , = ZLO,T)(t) where T is the time of hitting a suitable set.] 4. Potentials of Natural Additive Functionals In this section we are going to investigate those excessive functions which may be represented as the finite potential of a natural additive functional. The most satisfactory approach to this problem is to make use of P. A. Meyer's theory of the decomposition of supermartingales. Since Meyer [l] contains a definitive treatment of this theory there would be no purpose in our reproducing it here. Rather we will take a more constructive approach (also due to Meyer [2]) under an additional hypothesis on X (Assumption 4.1 below). However for the reader familiar with Meyer's theory we will sketch the approach based on the decomposition theorem for supermartingales in the Notes and Comments for this section. In particular the main result of the present section, Theorem 4.22, is valid without Assumption 4.1. The techniques of Sections 4 and 5 are of interest, but the results are not particularly essential for the remainder of the book. These two sections might be omitted at a first reading. As in the previous section X = (a,A, A,,X , , Or, P")is a fixed standard As before, process with state space (E, 8) such that A = 9 and A, = 9,. M denotes a fixed exact M F of X with M , = 0 if t 2 C. We will use the notation developed in the first paragraph of Section 3 without special mention. We come now to the special assumption to be imposed on Xin this section.
4.
POTENTIALS OF NATURAL ADDITIVE FUNCTIONALS
171
Suppose that {T,,} is an increasing sequence of stopping times; we then let is the following:
V,, STndenote the o-algebra r~(u,, ST,)“. Our special assumption
(4.1) If {T,,) is an increasing sequence of stopping times with limit T, then X(T) E ( V n RttTn)/&A *
Note that (4.1) is certainly satisfied if X is a Hunt process, since, in that case, X(T,,)-+ X(T) almost surely on {T < CO}and X , = A on {T = co} while {T < CO}= U k {T. < k } E F,,,Consequently X(T) E STn)/&A. If X is only quasi-left-continuous on [0, c), then X(T,,)-+ X ( T ) almost surely on {T < c} while X, = A on {T 2 c}. But the above argument breaks down since it is not true that {T < c} E R,,(see Exercise 4.34). However note that {T Ic } is in 9,” In . the remainder of this section we will assume that (4.1) holds. As we have seen this is no restriction if X is a Hunt process. Before coming to the study of potentials of NAF’s it will be necessary to develop certain preliminary material about stopping times. These results are useful in other situations as well.
n,,
v,,
v,,
(v,,
v,,
(4.2) PROPOSITION. Let {T,,} be an increasing sequence of stopping times with limit T. Then (under (4.1)) PTn =ST.
v,,
=v,,
Proof. For the purposes of this proof let Y STn. Since T,,--t T, T E Y ;and X ( T ) E Y/ 8,by assumption. Since T,, A t f T A t and S,,, c FTn for all t 2 0 it also follows that X(T A t) E for all t 2 0. Let p be a finite measure on ( E A , 8,) and let f and a with or without subscripts denote, respectively, a bounded continuous function on EA and a positive constant. Suppose H : R --t R is a finite product of the form
ft
If we write each integral in this product as + f,: then H may be written as a finite sum of products where each summand has the form
In view of the remarks at the beginning of this proof the product over i is Y measurable. On the other hand the strong Markov property yields
172
IV. ADDITIVE FUNCTIONAL3 AND THEIR POTENTIALS
where
Clearly rp E therefore
&'A
and so q(X,) exp( - T
E'(H
(4.4)
a,) is in Y. Of course Y c .FT and
I 9 T ) = E"(H
I Y)
for all H of the form (4.3). In order to complete the proof of Proposition 4.2 it will certainly suffice to show that (4.4) holds whenever H is a finite product of the form n , f , ( X , , ) where 0 < ti < . .. < t, and eachf, E C(E,). I f f € C(EA) then t + f ( X , ) is right continuous. Therefore given to one can find a sequence {y,(t)} of continuous functions on [0, 00) with compact support such that for each f~ C(EA)almost surely 12 y , ( t ) f ( X , ) dt + f ( X , , ) boundedly as n + 00. Consequently it will suffice to show that (4.4) holds whenever H is a finite 12 y,(t)fi(X,) dt. But any such y is a uniform limit of product of the form polynomials in e-' by the Stone-Weierstrass theorem and since we know that (4.4) holds whenever H is a linear combination of products of the form (4.3) it is now evident that (4.4) also holds when H = 12 y,(t)f,(X,) dt. This completes the proof of Proposition 4.2.
n,
n,
REMARK.
Obviously (4.2) implies (4.1) and so, in fact, (4.1) and (4.2) are
equivalent. We now introduce a concept that will be very important in the remainder of this section.
(4.5) DEFINITION. Let T be a stopping time and A a set in 9. Then T is said to be accessible on A if for each initial measure p on I A there exists an increasing sequence {T,,}of stopping times such that lim T, = Ton A and T, < T for all n on A n {T > 0}, both statements holding ' P almost surely, When A = R we say simply that T is accessible. If T is accessible on A, then the mapping t + X , is continuous from the left (as well as from the right) at T almost surely on A n {T < [}. This is an immediate consequence of quasi-left-continuity of the process. Meyer [5] (or [lo]) has proved the remarkable converse that if X is a Hunt process, then any stopping time T is accessible on {t + X , is continuous at T; T < a}.We wi!l be concerned primarily with terminal times and will be content to prove directly that certain ones are accessible. The interested reader should consult Meyer [l] for a complete discussion of accessible stopping times.
4.
173
POTENTIALS OF NATURAL ADDITIVE FUNCTIONALS
Before coming to the main fact we shall need concerning accessibility we must introduce another concept involving terminal times. (4.6) DEFINITION. Let T be a terminal time and define the iterates, T,,, of T by To = 0, T,,,, = T,, T 0 8,". Then T is called complete if for each k 2 0, n 2 1, and stopping time R we have T,,,, = R + T,, 0 8 R almost surely On {Tk R < Tk+i}.
+
It is likely that every strong terminal time is complete. However, rather than attempt to prove a general theorem we will simply give a sufficient condition for completeness that can be checked easily in the cases of interest to us. We will give some additional discussion at the end of this section for the benefit of the interested reader (see Exercise 4.36). (4.7) PROPOSITION. Let T be a strong terminal time and as in (4.6) let T,, k 2 0, denote the iterates of T. Suppose that for every stopping time R and every k 2 0, R T 0 8 R = Tk+lalmost surely on {TkIR < Tk+l}.Then T is complete.
+
Proof. Suppose for the moment that T, R, and Q are any three stopping times, that A E F ,and that T = R Q 8 R almost surely on A. Then (Exercise 4.35) it follows that, for every F E 9, F o OT = F 0 8, o OR almost surely on A. In particular returning to the T of the proposition we have for each FE9
+
F
(4.8)
0
8T
0
0
8Tk
=F
0
8Tk+l
almost surely. Now the relation T,,,, = T, + T,, 0 8 T k is valid for all k if n = 1, by definition of the iterates. If this relation is known to hold almost surely for all k and all n I m with a given m 2 1, then using (4.8) we obtain Tm+l+k= T k + l = Tk
Tm
8Tk+l
+ Tm+l
= Tk
+ T o8Tk + Tm
OT
OTk
'Tk
almost surely. So we have proved by induction that (4.9)
Tn+k
= Tk
+ Tn
eTk
almost surely for all n and k. Our main hypothesis is that T,,, = R + To 8, almost surely on {Tk IA < Tk+l}.Using this and (4.9) twice we obtain for any n and k Tn+k= T k + l =R
+ T,,-i
+ T,,
0
0
8Tk+,
=R
+7
0 8R
+ Tn-l
OR
almost surely on {Tk IR < T,,,} completing the proof.
0
8T
0
OR
174
1V. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
We come now to the main fact that we need concerning accessibility of terminal times. Let us first establish some notation. If Tis a terminal time and we define $(x) = for x in E, then $ is 1-super-mean-valued and so J(x) = Iim,?, P: $(x) = 1irnt,, Ex{e-('+T is 1-excessive with J I II/. Moreover if $(x) < 1, then by the zero-one law P"(T> 0) = 1 and hence $(x) = 1,6(x). Thus I$ = $ on {$ < 1). If Tis a strong terminal time and Q is a stopping time, then EX{PX'Q'(T= 0);Q < T}= P"(T - Q = 0;Q < T ) = 0,
and so PxcQ)(T= 0) = 0 almost surely on {Q < T}. Therefore $(XQ)= $(XQ) almost surely on {Q < T}. Let B = {$ = 1) and R = T, A 5. This notation will be used in the statement and proof of the next proposition. (4.10) PROPOSITION. Let T be a strong terminal time which is also complete. Let Q be an exact terminal time with Q I R and suppose that, for each x, (a) Px{$(XT)= 1 ; T < Q} = 0; (b) P"{X,- # A',; 0 < T < Q}= 0. If B, = {J > 1 - l/n} and R, = T,., then almost surely R, < T for all n on (0< T < Q} and lim R, = Ton {R, < T for all n } . In particular T is accessible on {T < Q}.
Proof. First note that Ex{e-'T-Rn'; R , < T } = EX{$(XRn);R, < T } 2 (1 - l/n) PX(R,< T), and so lim R, = T almost surely on {R, < T for all n } . Thus to complete the proof we must show that R, < T for all n almost surely on (0 < T < Q}. To this end suppose that, for some y and n, Py(O < T < Q;R, 2 T) > 0. Since $(X,) 5 +(A',) < 1 almost surely on (0 < T < Q} by hypothesis, we may actually assume that for some m, Py(O < T < Q;R, > T) > 0. We will derive a contradiction from this. Let 6 = 1 - l/m, and H = R, A Q. Then H I 5 since Q IR I c. Now both R, and Q are exact and so it follows from (5.20) of Chapter I11 that H is also exact. As before let To = 0 and T,+, = T, + T OOTn denote the iterates of T. Next observe that (4.10a) implies that Px{$(XTn)= 1 ; T, < Q} = 0 for each x and n and hence
EX{e-Tn+l; T, < H }
T, < H) < 6 EX{e-Tn;Tn-l < H } .
= EX{$(XTn)e-,";
4.
POTENTIALS OF NATURAL ADDITIVE FUNCTIONALS
175
Thus E X { e - T n +T,, l ; < H } I a", and so almost surely T,,+ co if H = co, and T,, 2 H for some n if H < 00. We define an AF of ( X , H ) by setting A, = n , =limAr
if T,,It < T,,, and t < H , if t 2 H .
rtH
The fact that T is complete yields readily the fact that A is indeed an AF of (A', H ) . The discontinuities of A occur only at the points T,, < H ; but P"{X(T,,-) # x(T,);T,, < H } IEX{PX(Tn-I)[X(T-) # X ( T ) ;T < HI} = 0.
and consequently A is natural. The 1-potential of A is m
00
u:(x) = E x
e-'dA(t) = 0
C E"{e-Tn; T,, < H } n= 1
and so ui is bounded. In addition we assert that u i is a regular I-potential of (A', H ) . Let us suppose this last statement has been established and use it to complete the proof. If u i is known to be a regular 1-potential, then according to Corollary 3.14 there is a continuous AF, say B, of ( X , H ) such that u i = u:. Of course B is natural since it is continuous, but A is also natural and so by the uniqueness theorem, (2.13), A and B are equivalent. In particular A is continuous; but P Y ( Ahas a discontinuity) 2 PY(T< H ) > 0, a contradiction. Thus to complete the proof we need only show that u i is a regular 1-potential. To this end suppose {J,,} is an increasing sequence of stopping times with limit J. Then, with Q, denoting the transition operators for ( X , H ) , we have
and to prove that this approaches 0 as n + 00 it suffices to prove that for all x and k (4.11)
P"{Tk < H , J,, < Tk for all n, J
= Tk} = 0.
Since (4 E y',{e-," J(X,,), s,.} is a positive supermartingale relative to each measure P" and so we can define a random variable L, 0 I L I 1, such that
e-, L
= lim
e-Jn
J(x,.>
n
almost surely P" for each x. Because T is complete, if W is any stopping time,
176
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
then T,+, = W + T OOw on {T, IW < T,+l}. Now fix x and k. Given any n, m 2 n, and A E 4tJ,we have EX(e-Tk
t 1. 7
A n { Tk I
Jm
< Tk+1
A
= E X ( e - J m$(XJ"); A n {Tk I
HI) J , < Tk+l A H } ) .
r = {TkIJ,,, c Tk + lA H for all large m } , then E X { e - T k +Al ; n T} = E"{e-JL;A n r}.
If we let m + 00 and define
This equality holds for all A E P J n for all n and hence for all A E V, .FJn. Butaccordingto(4.2)thiso-algebraissimply 5FJ.ThuswecanletA = {Tk+ = J, J c H} and obtain the conclusion that L = 1 almost surely P" on A n r = {Tk+l= J , Tk+lc H , TkIJ, c Tk+lfor all large m}.
On the other hand $(XJ,) I 6 if J, c H and so L I6 on A n r. Hence P"(A n r) = 0, which certainly implies (4.11). The proof of (4.10) is complete.
For use in Section 5 we need an additional fact concerning accessibility. As before let T be a terminal time with T I[ and let $(x) = E"(e-T)for x in E and $ = limtL, P:$. (4.12) PROPOSITION.Let T be a strong terminal time, let R, be the hitting time of {$ > 1 - I/n}, and let A = {R, c T for all n}. Then (a) lim R, = T almost surely on A, (b) lim,TT$(X,) = 1 almost surely on A n {T c m}, and (c) if {Q,} is an increasing sequence of stopping times with limit Q7 then almost surely { Q, c T for all n, Q = T c m} is contained in A. Proof. The first sentence in the proof of (4.10)establishes (a). We next prove (c). Let L E 9be such that e-Qn$(XQn)-+ e-QLalmost surely as n + 00. Given positive integers k and j, a set r E gQ,, and n 2 j we have, since $(XQJ = JI(XQJ on { Q n < TI7
E"[e-Qn$(XQn); r n { Q , c T
A
k } ] = E"[e-T; n { Q , c T
A
k}].
If we fix k and let B = { Q, c T A k for all n}, then letting n -+ co in the above equality we obtain (4.13)
E"{e-QL;r n B } = E"{e-T;r n B } .
This holds for all r E .FQ4for each j and hence by (4.2)for all r E F QIn . particular r = {T = Q} is in .FQ.Since L 1 almost surely on B, this choice of r in (4.13)implies that L = I almost surely on B n r. Consequently $(XQn)+ 1 and so R, c T = Q for all n almost surely on B n r. If we let k --t 00 we obtain (c). Finally applying the above proof to the sequence {R,}
4. POTENTIALS OF NATURAL ADDITIVE
FUNCTIONALS
177
rather than { Q,,}we see that limn $(A',,) = 1 on A n {T c co} and this implies (b) since limrtT $(XI) exists almost surely on { T < co}. (4.14) REMARK.If Tis an exact terminal time and Jl"(x) = E"{e-"T;T < c}, then the above arguments show that whenever { Q,,} is an increasing sequence of stopping times with limit Q one has limn (cIa(Xon)= 1 almost surely on { Q, < T for all n, Q = T < 00) for each a > 0. The same conclusion then holds for a = 0 since Jl" increases as a decreases.
We turn next to the analysis of a particular stopping time. Let g be a finite ( X , M ) excessive function. Recall that almost surely the mapping t + g ( X , ) is right continuous and has left-hand limits on [0, S) where S = inf{r : M I = 0 ) .With thisg fixed let r denote the set of o such that t -P g(X,(o)) and t -P X,(w) are right continuous and have left limits on [0, S ) and [0, c), respectively. Let Y , = g ( X J M I ; we use Y,- and g ( X , ) - to denote limstr Y, and lirnstf g(X,), respectively, whenever these limits exist. Given E > 0 define (4.15) T ( w )= inf{t < S(w): lY, -
=o,
w+
K-1 > EM,;X , = X , - } ,
w E r,
r.
Finally let T(w) = S(w) if w E r and the set in braces is empty. In view of the right continuity of r -P Y , the infimum in (4.15) is actually attained. In particular X, = X T - on {T < S } . PROPOSITION.T is a complete terminal time and T is accessible on {T < S } .
(4.16)
Proof. Suppose w E r and a > 0. Using the regularity properties of Y , ( o ) and X,(w) one checks without difficulty that T(w) < a if and only if the following holds: either S(o) < a or S(w) 2 a and for some positive integer k and every positive integer n there are rational numbers r,, and s,, such that (a) r, < s,,< a - 1/k, (b) s,,- r,, c 1 / ~ , (c) d [ X r , ( 4 , X,.(w)I < l/n, and ( 4 I Y,.(w) - Ysn(w)l 2 ( E l/k)Msn(w), where d is a metric for E,. This describes l- n { T < a } in terms of a countable number of sets, each of which is in 9,, , Since P x ( Q - r) = 0 for all x this proves that {T < a } E Fa. In view of the explicit description of T and the fact that M is a strong multiplicative functional one obtains immediately that T and its iterates satisfy the hypothesis of (4.7), so T is complete. Now P"(T = 0) = 1 if and only if P"(S = 0) = 1. Hence, in the notation of Proposition 4.10, {Jl c l } = E M , and so R 2 T E A T E. As M in Section 5 of Chapter 111 let cp(x) = E x e-' M I dr, E,, = {cp > l/n}, and Q,, = TEA-En (in Section 5 of Chapter 111 we used T,, in place of Q,,).According to Proposition 5.3 of Chapter 111, limn Q. 2 Salmost
+
178
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
surely. Now fix k and set Q = Qk A S. If P"(Q = 0)= 1, then x is regular for Qk or p(x) = 0. But if q ( x ) = 0 then x is regular for Qk since cp iS finely continuous. Thus P x ( Q = 0) = 1 implies x is regular for Qk, and hence t + Q o 0 , s t + Qk 0 0, --t 0 as t + 0 almost surely P". Thus it follows from Proposition 5.9 of Chapter 111 that Q is exact. We now apply Proposition 4.10 with this Q = Qk A S. The hypotheses of (4.10) are obviously satisfied. Consequently with the R,'s defined there we have almost surely R, < T for all n on (0 < T < Qk A S}and lim R, = Ton {R,< Tfor all n}. Finally letting k + 00 we see that T is accessible on {T < S}.We have actually proved much more: namely, that almost surely lim R, 2 T and that {R,}increases to T strictly from below on (0 < T < S} = { T S}.
-=
We finally have the necessary tools at our disposal for studying the potentials of natural additive functionals. We will use the following fact repeatedly : if f e Y ( M ) , Q and R stopping times, and A E ~ , then , for each x, E"{MQf ( X Q ) ;Q 5 R ; A} 2 EX{MRf ( x R ) ; Q 5 R ; A } .(This iS obvious for potentials and hence extends to elements of Y ( M ) . ) DEFINITION.A finite (X,M )excessive function u is called a natural potential if for each x whenever {T,} is an increasing sequence of stopping times with limit T 2 S almost surely P x , then limn QT, u(x) = 0. (4.17)
If A is any AF of (X, M) with a finite potential u,, then the fact that uA(x)= Ex{A(S)- A(T)} implies that uA is a natural potential. The remainder of this section is devoted to proving that any natural potential is the potential of an NAF. QT
PROPOSITION.Let u be a natural potential and let p be a measure on EA such that u dp c co. Then the family of random variables {MTu ( X T ) ; T a stopping time} is P" uniformly integrable.
(4.18)
Proof. Let R, = inf{t: M , u(X,) > n}. Clearly {R,} is an increasing sequence of stopping times and since almost surely MRnu(XRn)2 n on {R, < co} we have n PX(R, < s) 5 E X { M ~~, (, X R , , < ) } u ( x ) < 00,
and so R, 2 S for large n almost surely. If T is any stopping time and A = {MTu(XT) > n}, then almost surely A c {R, IT } so (4.19)
E"{(MT u ( x T ) ; A}
5 / E x { M R nu(xR,)} d d x ) .
But E x { M R ,u(XRn)}+ 0 since u is a natural potential and the integrands are
4.
179
POTENTIALS OF NATURAL ADDITIVE FUNCTIONALS
dominated by u(x) with Ju dp < co, so the right side of (4.19) approaches 0. The lack of dependence on T i n this estimate yields the result. (4.20) PROPOSITION. Let u be a natural potential and {J,} an increasing sequence of stopping times with limit J I S almost surely P". Then E"{M,, u(XJn);J = S}-,0 as n --t co. Proof. Let Y = limn M,,, u(XJn).This limit exists almost surely and in fact, since u(x) < 00, (4.18) implies that the convergence also takes place in L'(P"). Of course Y = 0 on {J, = S for some n}. Let us bring in Proposition 4.12 applied to the strong terminal time S. If {R,}is the sequence defined there and r = {J,, < J for all n}, then Q, = R, A k in creases to S strictly from below, and so for each k
lim E"{MJnu(XJn);J = S} = lim EX{MJnu ( X , J ; n
r; J
= S)
n
n
as k -+ 00, since lim Q, 2 Sand u is a natural potential. The proof is complete.
(4.21) PROPOSITION.Let {T,,}be an increasing sequence of stopping times with limit T. If a > 0 and f~ b b f , then limn M T n V u f ( X T n=) M T V u f ( X T ) almost surely on {T < a}. If g is in Y " ( M ) , a 2 0, then limn MTng(XTn) 2 M T g(X,) almost surely on {T < C O } .
Proof. If x is fixed, then {e-uTnMTnVuf(XTn), P"} is a nonnegative supermartingale and hence U = limn e-urn MTnV a f ( X T , ) exists P" almost surely. In particular L = limnM T m V u f ( X T nexists ) almost surely on { T < a}. If A E 9," and k 2 n one has
and consequently if we subtract the corresponding expression with Tk re; = 0. placed T and then let k -+ co one obtains Ex{U - e - u T M T V u f ( X T )A} This then holds for all A E F Tbecause of Proposition 4.2 and hence U = e- U T M T V a f ( X T ) almost surely. In particular L = MTV"f(XT) almost surely on {T < a}. If g is ( X , M ) excessive, then g is ct - ( X , M ) excessive for some positive a. Thus there exists a sequence {f,}of elements of b b f such that V"s, t g . Now limn M T ng(XTn)exists almost surely on {T < a}, and hence almost surely on {T<
180
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
which establishes Proposition 4.21. We are now ready to prove the main theorem on representation of natural potentials. (4.22) THEOREM.Let u be a natural (X,M) potential. Then there exists a unique natural additive functional A of (X, M ) whose potential is u.
Proof. Given E > 0 let T be the stopping time defined in (4. 15) with this E and of course the (X,M )excessive function g there replaced by u. Define as usual To = 0 and T,,, = min(T, + T OOr,, S). In the remainder of the proof we will omit the phrase "almost surely" in places where there is no doubt as to its appropriateness. Since J[MTn u(XTn)]- - MTmu(XT,)I exceeds &MTn it follows from Theorem 5.6 of Chapter 111 that lirn T, = S and in fact that T,,= S for large n on {Ms> 0). We will use the following notation: Y,,= MTnu(XTn) and Z, = [MT, u(XTn)]-. Of course Y,, = 0 if T,, = S. Furthermore, since T is a complete terminal time according to (4.16), if K is any stopping time then Yn+k = MK(Yk 0 8), and z , , + k = MK(Zk 8), on {T,,2 K < T , + l } . Let us define 0
where of course we are setting 2, = Yo = u(Xo).Also we adopt the convention [Msu(Xs)]- = 0, so that in reality we are summing only over those n for which T,, 2 f and T,, < S. Now it follows from (4.16) and (4.21) that each of the summands Z,, - Y,,is nonnegative. Suppose we are given positive numbers f and r and that Tk 5 t < T k + ,. Then Tkt It + r if and only if Ti o Or Ir, and T k + j< S if and only if Ti 0 Of c S 0 O f . Since for any such j we have Zk+j - Yk+j = M,(Zj - Yj) 0 8,it follows that on {Tk 5 f < Tk+i},Ae(f + 24)' A'(t) 4- M,(Ae((u)0 of). But {s > f } = U k { T k5 f < T k + l } and COIlSeqUeIltly A' = {Ae((t)}is an AF of (X, M ) . The discontinuities of Ae occur only at the points T,, < S. By construction the path is continuous at all such points and so A' is an NAF of (X, M).Now Tis accessible on {T < S}; in fact, according to the last sentence of the proof of Proposition 4.16 there is an increasing sequence {R,,} of stopping times with limit T and such that R, T for all n on { T c S } . Moreover u(x) 2 E"MR, U(XRn)for all n and x, since u is (X,M ) excessive.Also MR, U(XR,)= 0 if R, = Sand, by (4.20), MRn U(XRn)approaches
-=
4.
POTENTIALS OF NATURAL ADDITIVE FUNCTIONALS
181
0 on { R , < S for all n, R,, + S}. On the remaining set, MRnu(XRn)approaches Z1 and so in light of (4.18) (4.23)
0I
U(X)
- E"MR,
u ( X R , ) + U(X)
- E"Z1.
Having made these observations we continue the proof by comparing the potential we(x) = E"A'(S) with the excessive function u. Since u is a natural potential we have EXM,, u(X,,) = E"Yn + 0 as n + 00 and of course we(x)= limn ( z k - Y k ) } . Consequently
~"(2;
I
n
u(x)
- E"Y, - C Ex(& - Yk) 1
According to (4.23) each summand in this last expression for u - w' is positive and so w eI u. Let uE = u - wE. We are next going to show that ue is (X,M) excessive. Since u and we are ( X , M) excessive we have Q, ue -,u' as t + 0, and so it is enough to show that Q,u'< ue. Let us begin by separating out the basic inequality we need. We will show that for each x, n, and t (424) E " { Y n - Z n + ~ ~ T n < t } ~ E " { [ u ( X ~ ) M ~ - Z n + ~Tn+i}. ]~TnIt<
Indeed, let u k = Vgk with uk increasing to u (this is possible since Q,u + 0 as t + 03, u being a natural potential) and let { R j } be a sequence of stopping times increasing to Tn+land strictly less than Tn+,on {T,,,, < S}. Moreover we may assume R j 2 T,, for all j . Then for each k a n d j (4.25)
E X { U k ( X T n ) MT,,
= EX{Uk(X,)
- uk(xR,)
MR,;
Tn
5 t}
M,- u k ( x R , ) M R , ; Ta I f < R j } .
If in (4.25) we first let k -+ 00 and then j + 00 the extreme members approach the left and right sides of (4.24), and so (4.24) is established. Since lim T, = S it is clear, writing u for ZI', that the relationship Q, u I o will be established if we can show that for each n
182
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS U(X)
+
2 E"(U(X,)M , ; t < Tn} E X { u ( X ~ ,MT,,; , ) Tn S t } .
Obviously this is true when n = 0. To pass from its validity for a given value of n to the next larger one it suffices to show that (4.26)
EX{u(XT,,)MT.; Tn I t } 2 E"{o(X,) M , ; Tn S t < Tn+1} + E X { 4 X T n + l ) M T . + ITn+, ; I t}.
E"{C:(
But we have seen that u(x) = Yk - &+,)} and so the left side of (4.26) is E X M T , E X V [~f ( y k - z k + l ) ] ; Tns
2)
(
0
= E x ( f0 [ ( Y , - Z k + l ) o e T , , ] MT,; TnS t = E x ( f (yk n
1
- zk+1); Tn5 2).
In the second summand on the right of (4.26) we first express the range of integration {T,,,, 5 t } as {T,, 5 t } - {T. I t < Tn+l},Bringing in the expression for u again we see that this second summand is
Consequently subtracting the right side of (4.26) from the left we obtain (4.27)
Ex{ Yn - Zn+1 ; T, It }
-E"{u(Xt) - u(XT,,+,)MTncl;Tn 5 t < Tn+,J* Now u = u - we and w"(x) = x y E"(Zk - Yk).Using this for the first occurrence of u in (4.27) and our previous expression for the second, we see that (4.27) reduces to E X { Y n - Z n + , ;T n ~ t } - E x { u ( X , ) M , - Z , + , T ; n~t 0 we have u = we + ue. It is clear from the construction that we increases as E decreases to 0. Thus w = lime,o we exists and is an (X, M) excessive function dominated by u. Consequently ue must decrease and if u = lim ue, then it is obvious that Q,u I u, and hence u is (A', M) excessive being the difference of two finite (X,M) excessive functions. It is also clear from the definitions and what we have already proved that A"?) A'J(t) is an ( X , M ) additive functional provided E < q. We define A,(t) = lime+,, A'(?) for 0 5 t I 00. Now w(x) = limewe(x)= lim, E x Ae(oo) =
-
4.
POTENTIALS OF NATURAL ADDITIVE FUNCTIONALS
183
E"A,(co), and hence A,(oo) is finite. But if E < then 0 I Ae(t) - M ( t ) I A'(co) - Aq(co), and therefore A'(t) + A,(t) uniformly on [0,a]. This implies that A , is right continuous on [0, co] and continuous at S. It is now clear that A , is an NAF of ( X , M ) whose potential is w. We will complete the proof of Theorem (4.22) by showing that u is a regular ( X , M )potential. Let {R,} be an increasing sequence of stopping times with limit R ; we must then show E"{MR, v(XR,)} -,E X { M Ru(X,)}. Clearly it is no restriction to assume that R IS.Now 0 I EX{MR,,u(XR,,) - M
R u(XR)}
= E"{MR, u ( X R , , ) ;
R =S}
+ EX{M,,,~ ( X R , -, ) MR ~ ( X R R) ; < S } , and Proposition 4.20 and the fact that u I u imply that the first term on the right approaches zero as n + co. If A = {R, < R for all n}, then the second term approaches E"{[MR u ( x R ) ] - - MR u(XR);A, R < S}.But R is accessible on A n {R c S} and hence t + X , is continuous at R on A n {R < S} because of the quasi-left-continuity of X . We also claim that (4.28)
[MR
u ( X R ) ] - - MR
u(XR)
= CMR u(XR>l - CA,(R)
-M
R u(XR)
- AJ(R -11
on A n { R < S ) . We will prove a statement slightly more general than this in a moment. But assuming this, the right side of (4.28) is zero since by construction the discontinuities of A , are precisely the discontinuities of M , u(X,) at which X , is continuous. Consequently u is a regular ( X , M ) potential and so by Theorem 3.13 there exists a CAF, A c , of ( X , M ) whose potential is u, and thus u is the potential of A , + Ac which is natural. Thus the proof of Theorem 4.22 will be complete as soon as (4.28) is established. We formulate this as a proposition since we will want to refer to it in the sequel. (4.29) PROPOSITION. Let A be an ( X , M ) additive functional with finite potential u. Let {R,} be an increasing sequence of stopping times with limit R and let A = {R, < R for all n } . Then lim,[A(R) - A(R,)] = limn MRnu(XRn)M R u(XR) almost surely. In particular limn M R mu(XRn)= [ M Ru(XR)]- and limn A(&) = A ( R - ) on A n {R < S}.
r E 5FRkand n > k, then E X { A ( R )- A(R,); r}= E ~ { M , ,U(X,.)
Proof. If
- M~
u(x,);r}.
Using Propositions 4.18 and 4.2 we'obtain lim[A(R) - A(R,)] n
= lim
M R nu(XRn)- MR u ( X R )
n
almost surely, which establishes (4.29).
184
1V. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
We close this section with a characterization of CAF's that is analogous to that given for NAF's in Proposition 2.5. (4.30) THEOREM.Let A be an additive functional of (X, M ) with finite potential. Then A is continuous if and only if Q, U, I K = U, I K for all compact K. Recall that U , ~ ( X=) E " J Z f ( X , )dA(t) is the potential operator corresponding to A.
Proof. We first observe that
while "W
Therefore (4.31)
U,IK(x) - QK U A f d x ) = E " { ~ T K-)N T K - ) } ,
and so if A is continuous then U, I , = QK U, I K. We now turn to the proof of the converse. If G is open and K a compact subset of G, then T, 5 TK and so U A I K5 QG U A I K5 U A I K But by hypothesis QK U,IK = UAIK and SO QGU A I K= U A I K . Hence QGUAI,= U, I,. By Proposition 2.5, or more precisely its proof, this implies that A is natural. Now given E > 0 define QK
T = inf{t : A,
- A,-
> &MI}
where as usual we set T = S if the set in braces is empty. A discussion similar to that in (4.16) shows that T is a complete terminal time. Because A is natural, the path is continuous at T if T < S, and so, arguing as in the second part of the proof of (4.16), T is accessible on {T < S}. Somewhat more precisely, the proof of (4.12) shows that if $ ( x ) = E x e - T , then almost surely on {T < S } , $(A',) -+ 1 as t increases to T. Of course almost surely $ ( X T ) < 1 on { T < S}. To complete the proof of (4.30) it will suffice to show that P"(T < S ) = 0 for all x, since E in the definition of T is arbitrary. Suppose, to the contrary that, for some x , P"(T < S ) > 0. Because $ ( X T ) < 1 if T < S there is an q < 1 and a compact subset K of {$ < q } such that P"(X, E K, T < S) > 0. Let {T,,}be a sequence of stopping times increasing to T such that almost surely T,, < T for all n on {T < S}. If $(A',) 2 q for all ? E [T,, T ) ,
4. then T = T,, large n,
POTENTIALS OF NATURAL ADDITIVE FUNCTIONALS
+ TK
0
8,.
on { X ,
E K,
T < S } . Consequently for sufficiently
o < PX{A(T,+ TK ern) - A([T,,+ T~ o,,] -1 0
-~x{pX(Tn) <
0
-
185
-)I
> 01
> O> = O ,
since, by (4.31), Ey{A(TK)- A ( T K - ) } = 0 for all y . This contradiction completes the proof.
REMARK.We have actually proved that if A is a NAF of (X,M),then either A is continuous or else there is a compact set K and a point x E EM such that t -+ A, is discontinuous at TK with positive P" probability. Exercises (4.32)
Prove that (4.1) is satisfied if and only if U a f ( X T , ) approaches
Uaf(XT) almost surely on { T < co} for every a =- 0 , f c CK(E),and increasing
sequence {T,,} of stopping times (where T = lim T,,). (4.33) Prove that the following condition is sufficient for the validity of (4.1) : whenever { T,,} is an increasing sequence of stopping times with limit
T, then almost surely on { T < co} either X(T,,)+ X(T) or lim X(T,,) does not exist in E and T = l. (4.34) Show that for the process considered in (9.16) of Chapter I and again in (3.18) of Chapter 111 condition (4.1) is not satisfied.
Let T, R, Q be three stopping times, A a set in F,and suppose that T = R + Q o 8, almost surely on A. Prove that for every FEF we have F o 8, = F O8, o OR almost surely on A. [Hint: check directly that the conclusion is valid when F = nl=f , ( X , , ) with fi E b*. Use linearity and the MCT to conclude that it is valid for every F E 8'. Extend the conclusion to F E 9 by using the definition of 9 and the strong Markov property.]
(4.35)
(4.36) Let T be an exact terminal time with A = { x : x is regular for T ) , let R be a stopping time and p be a finite measure on 8 , . Define {R,,} by R, = (k + 1)/2" if k/2" I R < (k 1)/2" and R,, = 00 if R = co. (a) Prove that as n -+ 00, E P { f ( T o8,") h o OR,; A } + E p { f ( To 8,) h o 8;, A } for A E 9,, f a continuous function on [0, co] and h E 9 of the form h = nl=lfi(X,,) with eachf; bounded and continuous on EA. [Hint: use the strong Markov property and the fact that (Chapter 11, (4.14)) x + E " { f ( T )h }
+
is finely continuous.]
186
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
(b) Show that R,, + T O.8, = R + T o 8, in 'P measure on E A)* (c) Show that if n 2 m, then R,, + T O8,. = R, + T O8Rm almost surely on {R, < R, T OOR,,}, hence almost surely on { T o ORn 2 2-"}. (d) Use (a) and the fact that T o 8, > 0 almost surely on { X , $ A } to show that {xR
+
lim sup P p { X R$ A, T o 8,. < 2 - 9 = 0. m+w n z m
(e) Use (d) and (c) to show that limnT OOR, exists in P" measure on { R < 00, xR$ A } and use (a) to identify this limit as T o 8., cf) Combine what we have done so far to conclude that R,, + T 0 8,. approaches R + T 0 8, in 'P measure as n -+ 00. (g) Show that T is a complete terminal time. [Hint: check that the hypotheses of (4.7) are satisfied by approximating Tk and R with stopping times which take values in a fixed countable set and then using (f).] (4.37) Let {R,,} be an increasing sequence of stopping times with limit R, let T be a stopping time and let A = {R, < T for all n, R = T}. Prove that for each p there is an increasing sequence {J,,} of stopping times such that J,, < T for all n and lim J, = T almost surely P p on A, and such that also J,, = 00 for P on Q - A. [Hint: use (4.2) to conclude that for all large n almost surely ' each integer n there is an integer k,, and a set r,,EFRknSUChthatPr{r,, A A} I 2-". Assume (as we may) that kl < k2 < .. ., and define stopping times Qn by QAm) = Rk,,(w) =0O
if w E r n if w $ r , .
3
Finally set J,, = infj2, Q, and check that {J,,} has the required properties.] (4.38) Prove that if a stopping time T is accessible on each of a sequence {A,,} of sets, then T is accessible on UA,,. [Hint: start by using (4.37) to conclude that if T is accessible on A1 and A2 then it is also accessible on A1 vA2 *I
5. Classification of Excessive Functions The purpose of this section is to define various special classes of excessive functions and establish some important properties of these classes. In particular we will relate them to the regular potentials and natural potentials defined in Sections 3 and 4. As usual X = (Q, A, A t ,,'A , O f , P") denotes a
5.
CLASSIFICATION OF EXCESSIVE FUNCTIONS
187
fixed standard process with state space (E, b), and again in this section we assume for simplicity that A = 9 and A, = Fttfor each t. M denotes a fixed exact M F of X with M, = 0 if t 2 [, and S = inf{t: M , = 0). Thus S is a strong terminal time and S I [. Finally we assume that (4.1) holds throughout this section, although some of the elementary results do not depend on this assumption. According to Proposition 4.12 we can find an increasing sequence {T,} of finite stopping times such that 7, t S almost surely, and if Rs = {T, < S for all n ; S < 00) then whenever {R,} is an increasing sequence of stopping times with limit R we have {R, < R for all n ; R
=S
< a}c R,
almost surely. Indeed if $(x) = lim,J.oE"{e-('+S"et)} for x E E and $(A) = 0 and T,, is the hitting time of {$ > 1 - l/n}, then we may take 7, to be min(T,,, n, S). We now fix such a sequence {T,} for the remainder of this section. Our definitions will be in terms of this fixed sequence. We restrict our attention tofinite (X, M) excessive functions. This is a serious restriction since the extension to general (X, M) excessive functions can be quite delicate. (5.1) DEFINITION. Letfbe a finite (X, M) excessive function: (i) f is M uni$ormly integrable provided that the family { M T f ( X T ) ; T a stopping time} is P" uniformly integrable for all x; (ii) f is M-harmonic provided Q,, f =f for all n ; (iii) f is an M-pseudopotential provided that limn M,, f ( X , , ) = 0 almost surely; (iv) f is M-regular provided that t + M , f ( X , ) is continuous wherever t + X , is continuous on [0, S) almost surely.
Since M will be fixed throughout our discussion we will drop it from our terminology. Thus we will say that anfE Y ( M )is harmonic or is a pseudopotential rather than M-harmonic or an M-pseudopotential. Recall that t + M , f ( X , ) is right continuous and has left-hand limits almost surely. Also { M , f ( X , ) , 9, , P"} is, for each x, a nonnegative supermartingale and so is { M R ,f ( X R , ) , F R m P"} , whenever {R,} is an increasing sequence of stopping times. Thus (5.liii) makes sense. Next observe that f i s a pseudopotential if and only if lim M,f(X,) = 0 on 0, u {S = 0 0 ) Its
almost surely, which, using the properties of the sequence {T,}, is equivalent to the statement that whenever {R,} is an increasing sequence of stopping times with limit R then MR,f(XRn) + 0 almost surely on {R = S } .In particular
188
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
this shows that the definition of a pseudopotential does not depend on the specific choice of the sequence {T,}. By contrast the definition of a harmonic function does depend on the choice of the sequence {z,}, as we will show by an example at the end of this section. We now introduce some notation that will be used in the remainder of this section. Whenever f is a finite (X, M) excessive function we set Y, = M , f ( X , ) and R, = inf{t: Y , > n}. Since t 4 Y, is almost surely right continuous each R, is a stopping time and YRn2 n almost surely on {R, < co}. More generally we write Y, for M,f(xT)when T is a stopping time. Finally let As = R, u {S = co} = {T,, < S for all n}. Note that modulo the class of sets Jf = {A E 9 :P"(A) = 0 for all x } R, and A, do not depend on the specific choice of the sequence {z,}.
LEMMA. If T is any stopping time, then 0;' As n { T < S} = As n {T < S } almost surely.
(5.2)
Proof. As in the second paragraph of this section let T, be the hitting time for of {J;> 1 - l/n} and let H , = T, A S where $(x) = lim,$o E"{e-('+S"et)} x E E and $(A) = 0. Then according to (4.12), R, = { H , < S for all n, S < 0 0 } almost surely. Now using the fact that each H, is a strong terminal time it is easy to see that 0; R, n {T < S } = R, n {T < S} almost surely. It is also clear that 0; '{S= 00) n { T < S} = {S= a}n {T < S} almost surely, and combining these statements yields Lemma 5.2.
(5.3)
Let f be a finite (X, M) excessive function and let = 00 for large enough n, and f is uniformly integrable if and only if Q R n f + 0 as n + 00. In this case the family { Y,; T a stopping time} is P" uniformly integrable whenever p ( f ) < 00. PROPOSITION.
R, = inf{t: Y, > n}. Then almost surely R,
Proof. The only part of the conclusion not already proved in (4.18) is that iff is uniformly integrable then QR,, f -+ 0 as n + co. Butf (XRn)MRn= 0 if R, = 00 and the uniform integrability allows us to interchange limits and expectations, so this part of the conclusion follows immediately from the fact that almost surely R, = co for large n.
(5.4) PROPOSITION. Iff is a finite (X, M )excessive function then f = h + p where h is the largest uniformly integrable harmonic function dominated by f a n d p is a pseudopotential. Proof. Since { Yrn,P"} is a nonnegative supermartingale for each x , limn Yrn = L exists almost surely. Recall that As = R, u {S = 0 0 } = {z, < S for all n}. Then L = Y, = 0 on Ksand L = limtts Y, on A,; that is, L = limits (IAs Y,)
5.
189
CLASSIFICATION OF EXCESSIVE FUNCTIONS
almost surely. If T is a stopping time it is easy to see using (5.2) that L o 0, = ( M T ) - ' L almost surely on {T < S } . Define h(x) = EX(L)I f ( x ) . Clearly h E b* and h vanishes off E M .Now
Q~h(x) = E X { M , L e, ; t < S } = E"{L; t < S} and so Q , h I h and Q , h + h as t -,0. Thus h is (X, M) excessive. Moreover for each n, 7, < S almost surely on As and hence Qr,
h(x) = E"{Mr, L 0 ern;7, < S} = E"{L; 7, < S ; As} = E"{L; As} = h(x).
Therefore h is harmonic. Now let R,
= inf{t:
Q R , h(x) = EX{L; R,
M , h(X,) > n}; then
<S},
According to Proposition 5.3, R, = co for large enough n almost surely and hence Q R , h(x) + 0 as n + 00. In view of (5.3) this implies that h is uniformly integrable. Let g be a uniformly integrable harmonic function with g I J If E = limn M,, g(Xr,), then E I L almost surely. Consequently, since g is uniformly integrable, h(x) = EX(L)2 Ex( = limn EX{Mr,g(Xrn)} = g(x). Thus h is the largest uniformly integrable harmonic function dominated by$ If L' = limn Mrnh(X,,), then L' I L almost surely since h I J But h is a uniformly integrable harmonic function and so EX@')= limn h(x) = h(x) = Ex(,!.). Consequently L = L' almost surely. Thus if we define p =f - h it is obvious that limn Mrnp ( X J = 0 almost surely. So to complete the proof of (5.4) we need show only that p is (X, M) excessive. First observe that if T is any stopping time and T 5 7, for some n, then h 2 Q T h 2 Q,,h = h and so Q T h = h. Consequently QT( f - h) h = Q T f SS, and so Q T p I p for any such T. Given t 2 0 let T,, = t A 7,. We have just observed that QTn p Ip, so in particular
z)
ern
+
AX) 2 E"(p(X3 Mr; t < 7,). Since p ( X r ) M , = 0 if t 2 7, for all n, it follows that p ( x ) 2 EX(p(Xr)M , ) = Qrp ( x ) ; that is, p is ( X , M) super-mean-valued. But also p is the difference of two finite (X, M) excessive functions, so p is itself (X, M) excessive.
(5.5) PROPOSITION. (i) An (X, M) excessive function f is a uniformly integrable pseudopotential if and only if it is a natural potential in the sense of (4.17); that is, for each x whenever {T,} is an increasing sequence of stopping times with limit T and with PX(T2 S ) = 1, then Q T . f ( x ) -,0 as n + co.
190
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
(ii) A uniformly integrable (X, M) excessive function f is harmonic if and only if QTf=f for all stopping times T such that T < S almost surely on As.
ProoJ Let {T,,} be as in (i). Iff is a pseudopotential we have already seen in the remarks following (5.1) that YTn-+Oalmost surely P x , and so iff is uniformly integrable Ex( YTn)+ 0 as n + a,Conversely supposefis a natural potential. Proposition 4.18 states thatfis Uniformly integrable, and the usual supermartingale argument implies that L = limn Y7,,exists almost surely. But z, t S almost surely and f i s uniformly integrable, hence EX(L)= limn Qrnf(x) = 0. Thus L = 0 almost surely and so f is a pseudopotential. Regarding (ii) we need only show that any uniformly integrable harmonic function has the stated property. We may assume T I S without loss of generality. If T,, = T A T,, t h e n f 2 QT.f> Q,,, f =fand so f = QTnJ On the other hand since T < S on As it is clear that T,,= T for sufficiently large n, and using the uniform integrability o f f this implies that QT,f+ QTf as n + co. Hence Q T f = f a n d (5.5) is established. Note that (5.5) states that within the class of uniformly integrable (X, M) excessive functions the definition of harmonicity is independent of the particular sequence {T,} being used. DEFINITION.Let f and g be (X, M ) excessive functions. Then f dominates g in the strong order, written f % g , provided there exists an ~ E Y ( M ) such that f = g + h.
(5.6)
We are now going to decompose a pseudopotential into a “ maximal” uniformly integrable part and a residual part which in a certain sense “contains ’’ no uniformly integrable piece.
+
(5.7) PROPOSITION. Let f be a pseudopotential. Then f = p q where p is a uniformly integrable pseudopotential and q is a pseudopotential with the property that the only uniformly integrable (X,M) excessivf function which q dominates in the strong order is zero. Moreover p is the largest (in the strong order) uniformly integrable (X, M) excessive function dominated in the strong order by J
Proof. Let R,, = inf{t: Y, > n} where Y , = M , f ( X , ) and let T,, be the hitting time of {f> n } . Clearly {T,} is increasing and T,, I R,,. Since f(x) 2 E X { f ( X T nMTn} ) 2 n Ex{MTn}we must have lim T,, 2 Salmost surely. Nowf is a pseudopotential and so the remarks following (5.1) imply that almost ) 0 as n -+ a.Finallyfr QT,f> QRnfforeach n. Define surely M T n f ( X T n+
191
5. CLASSIFICATION OF EXCESSIVE FUNCTIONS
q = limn QT,f. Since { Q T n f }is a decreasing sequence of ( X , M ) excessive functions, q is super-mean-valued relative to the semigroup { Q , } ; that is, Q , q I q for all t 2 0. Moreover i j = lim,+o Q t q is ( X , M ) excessive and i j I q. In order to show that q E Y ( M ) we will prove that q = g. Let n be fixed and let T be a stopping time such that T 5 T,,. Now {f> n} is finely open and so X(T,,) is regular for {f> n} almost surely on {T,,< a}.Therefore T T,, OT = T,,almost surely on { T I T,,} = R and not just on {T < T,,}. Consequently
+
QT QT,
f
=EX{MT+ = QT.
T . BT 0
f( X T + T ,
B e T ) }
f.
Thus if T I Tkfor some fixed k, one obtains upon letting n + co in the above equality Q T q = q. In particular QTnq= q for each n. We are now prepared to show that q = 4. For a fixed x E E choose n such that f ( x ) < n. Then P"(T,, > 0) = 1 and so q(x) = lim 1-0
Qt
d x ) = lim Qt 1-0
QT,
dx)
= lim J W M f + T n o e q, ( X , + T , . e * ) } f-tO
2 lim E*{MT,q(XT,); f < K > = Q T , 4 ( X ) = dx).
u{f
f-0
But E = < n} and hence ij 2 q. Therefore i j = q and so q E Y ( M ) . Finally q is pseudopotential since q I f. We now define p = f - q. In order to show that p E 904)it suffices to show that Q , p I p for all t 2 0, and exactly the same argument as that used in the last paragraph of the proof of Proposition 5.4 yields this fact (replace T,,by the T,, of the present proof). Thus p E Y ( M ) and p is a pseudopotential since p If: Now Q T , p = Q T n ( f - q ) + O as n -, co. If H,, = inf{t : M , p ( X , ) > n},then T,, I H,, and so Q H , p I Q T , p + 0. Thus Proposition 5.3 implies that p is uniformly integrable. Next suppose that q = g + h with g, h E 9 ( M ) and g uniformly integrable. Now Q T . g Ig , QT,h s h, and Q T n q= q for each n, and as a result QT, g = g and QT,h = h for each n. But g Iq and so almost surely MTng(XT,)+ 0 as n + co. Since g is uniformly integrable we obtain g = QT,,g -,0 as n + 00. In other words the only uniformly integrable function in Y ( M ) that q dominates in the strong order is zero. To establish the last sentence of Proposition 5.7 suppose t h a t f = u + u with u, u E Y ( M ) and u uniformly integrable. Plainly QT,u + 0 and hence q = limn Q T , f = limn Q T n u I u. Exactly the same argument as that used to show that p = f - q is ( X , M ) excessive shows that u - q is (A', M ) excessive and consequently f = u + u = u (u - q) q. Therefore p = u (u q) and so p + u. This completes the proof of Proposition 5.7.
+
+
+ -
192
1V. ADDITIVE FUNCTIONALS AND THEIR WTENTlALS
REMARK.The reader should verify that q has the following property: if u E Y ( M ) and u 2-q then u - q E Y ( M ) ; that is, u E 9 ( M ) and u 2 q imply % q.
Combining Propositions 5.4 and 5.7 one readily obtains the following result.
+ +
THEOREM. Iff is a finite (X,M) excessive function, then f = h p q where h is the largest uniformly integrable harmonic function dominated byf, p is the largest (in the strong order) uniformly integrable pseudopotential dominated in the strong order by f - h, and q is a pseudopotential which dominates only zero in the strong order. (5.8)
We turn now to a study of regularity. We begin with the following alternate characterization of regularity.
PROPOS~TION. Let f be a finite (X,M) excessive function; then f is regular if and only if whenever {T,} is an increasing sequence of stopping times with limit T one has limn YTn= Y , almost surely on { T < S } . (5.9)
Proof. Let us first suppose that f is regular. On the set of w's for which t --f X,(w) is continuous at T(w)< S(w) one has YTm Y , almost surely by the definition of regularity. On the set of w's for which t -+ X,(w) is discontinuous at T(w)< S(w) one has Tn(w)= T ( o ) for sufficiently large n almost surely by the quasi-left-continuity of X(recal1 S I C) and so YTn+ Y , almost surely on this set also. Thus regularity implies the condition of Proposition 5.9. Conversely suppose f is not regular; then there exists an E > 0 and a y in E such that Py(Te< S) > 0 where Teis the stopping time defined by --f
T ' = i n f { t < S : lY,- Y,-I>cM,andX,=X,-} and Te = S if the set in braces is empty. According to Proposition 4.16, T cis accessible on {TC< S}.Hence there exists an increasing sequence of stopping times {Tn}which increases to T" strictly from below on { T e< S}almost surely PY, and consequently if T = limn T,, it is not the case that Y,, + Y , almost surely on {T S}. Therefore the condition in (5.9) implies regularity.
-=
The following is just a restatement of Proposition 4.21 in the present situation. PROPOSITION. Let f be an ( X , M) excessive function and {T,} an increasing sequence of stopping times with limit T. Then limn Y T n 2Y , almost surely.
(5.10)
193
5. CLASSIFICATION OF EXCESSIVE FUNCTIONS
(5.11) COROLLARY.Let f be a finite regular (X, M ) excessive function. Then any (X, M) excessive function that f dominates in the strong order is itself regular.
+
Proof. Suppose f = g h with g and h in Y ( M )and let {T,,}be an increasing sequence of stopping times with limit T. Now
+ h(XT,)l
lim M T , b ( X T , ) n
= MTf(XT) = MTb(XT)
+ h(XT)l
almost surely on {T < S} and combining this with (5.10) we can conclude = M, g(XT) almost surely on {T < S} with a similar that limn MTng(XTm) statement for h. Hence g and h are regular by Proposition 5.9. (5.12)
COROLLARY.Any harmonic function is regular.
Proof. Let {Tk}be an increasing sequence of stopping times with limit T and suppose that h is harmonic. Let Y , = M, h(X,) and for fixed n let H = T A 7, and Hk = Tk A 7,,. Since T,,t S, in order to show that h is regular it suffices to show that Y,, + Y, almost surely. But lim, Y,, 2 Y, by Proposition 5.10 and if R is any stopping time dominated by T,,, then Q R h = h, since h is harmonic. Therefore by Fatou's lemma E x lim ,Y ( k
I
Ilim QHkh(x) = h(x) = Q, h(x) = Ex{ Y,}, k
and consequently lim, Y,, = Y, almost surely. Thus (5.12) is established. Recall that in Section 3 a finite (X, M) excessive function f was called a regular potential provided QTnf + Q T f whenever {T.} is an increasing sequence of stopping times with limit T. We can now given an alternate form of this definition. Recall, however, that we are assuming (4.1) in this section. PROPOSITION. A finite (X, M) excessive function is a regular potential if and only if it is uniformly integrable, regular, and a pseudopotential.
(5.13)
Proof. Letfbe a regular potential. Then it follows from Proposition 5.5 that f i s a uniformly integrable pseudopotential, and so we need only show thatf is regular. Suppose {T,,}is an increasing sequence of stopping times with limit T. Then limn YTn2 YT and using the uniform integrability off
I
E"(YT)= Q T f ( x ) = lim QT,f(x) = E x lim YTn. n
( n
194
IV. ADDITIVE FUNCTIONALS AND THEIR POTENTIALS
Hence f is regular. Conversely iff is a uniformly integrable, regular, pseudopotential and {T,} and T are as above, then almost surely YTn+ Y , on {T c S} and YT,+ 0 = Y , on {T2 S}. That is, Y,,, + Y , almost surely and hence f, being uniformly integrable, is a regular potential. Thus Proposition 5.13 is established.
+
Letfbe a finite (X, M) excessive function. By Theorem 5.lO,f= h p + q. Since h is harmonic it is regular, and in fact h is uniformly integrable. Now p is a natural potential and so by the proof of Theorem 4.22 there exist a unique CAF, A c , of (X,M) and a “pure jump” NAF, A,, of (X, M) such that p = uAC+ u A J . Since pc = uAc is a regular potential it is regular. Clearly pJ = uAJhas the property that the only regular potential which p J dominates in the strong order is zero and p c is the largest, in the strong order, regular potential strongly dominated by p. Finally let us show that q is regular. Recall the definition of q from (5.7) and (5.8). Namely if u =f- h and T, is the hitting time of {u > n}, then q = lim Q,”U, and it was shown in the course of the proof of (5.7) that QTnq= q for all n. Now using the fact that almost surely T, t S one can show that q is regular by exactly the same argument that was used in the proof of Corollary 5.12 to show that harmonic functions are regular. We summarize this discussion in the following theorem. (5.14) THEOREM.Letfbe a finite (X, M )excessive function. Thenfcan be written uniquely in the following manner:f= h + p c + p J + q where pc is the potential of a CAF of (X, M), p J is the potential of a pure jump NAF of (X, M), h is the largest uniformly integrable harmonic function dominated by f, and q is a pseudopotential with the property that the only uniformly integrable ( X , M) excessive function dominated in the strong order by q is zero. Finally h, q, and pc are regular.
We conclude this section with an example. Let E = R3 - ( 0 ) and let X be Brownian motion on E. Since {0} is polar for Brownian motion in R3 this makes sense. Also 5 = 03 almost surely. We set M, = IL0,[)(t)so that S = 5 = 03 almost surely. Let f ( x ) = [ X I - ’ where I I denotes the distance from the origin. Sincefis harmonic in the ordinary sense on E, it follows from Theorem 5.11 of Chapter I1 that f is excessive. Let T, be the hitting time of { x E E: 1x1 > n or 1x1 < n-’} and letQ,be the hitting time of { x E E: 1x1 > n}. Both sequences are of the type under consideration; that is, they both increase to S = oc) strictly from below. Clearly f is {z,}-harmonic since it is harmonic in the ordinary sense, but not {Q,}-harmonic. In fact f(X,,) = l/n + 0. Thus f is a pseudopotential relative to {t,}and hence also relative to {T,} since this concept does not depend on the specific sequence in question. In other words f is both harmonic and a pseudopotential relative to
5.
CLASSIFICATION OF EXCESSIVE FUNCTIONS
195
{T"}. In particular this implies that f can not be uniformly integrable. On the other hand it is not difficult to see that, for each x in E, E " { f ( X J Z }is uniformly bounded in t . (Of course, the bound depends on x . ) Consequently the family { f ( X , ) : t 2 0) is P" uniformly integrable for each x E E. Thus the P" uniform integrability of { f ( X , ) : t 2 0} for all x does not imply that f is uniformly integrable. This example is due to Helms and Johnson [l].
V FURTHER PROPERTIES OF CONTINUOUS ADDITIVE FUNCTIONALS
In this chapter we will derive and discuss a number of important properties of continuous additive functionals. We will also give some applications of these results. In many respects additive functionals are analogous to measures, and this analogy, which goes quite deep, will become clear in the light of the results of this chapter. Some of these results will be proved only under an additional regularity assumption, (1.3), on X , while others take an especially nice form under this assumption. Therefore Section 1 is devoted to a discussion of this assumption with the main line of development of the chapter beginning in Section 2. The reader whose main interest is additive functionals might proceed from Theorem 1.6 to Section 2 and refer back to the rest of Section 1 only as needed. 1. Reference Measures Let X = (0, A, A ?X, , , Or, P") be a standard process with state space (E, 8 ) and, for ease of exposition, assume that A = F and A?= 9, for all t 2 0. (1.1) DEFINITION.A measure L on b* which is a countable sum of finite measures is called a reference measure for X provided that a set A E b* is of potential zero if and only if 1(A) = 0.
Note that if L is any countable sum of finite measures then there is a finite measure p such that p(A) = 0 if and only if L(A) = 0. Hence if there exists a reference measure, then there exists a finite reference measure. 1%
1.
REFERENCE MEASURES
197
It follows from (3.2) of Chapter I1 that if 1 is a reference measure for X and f and g are a-excessive (a 2 0), thenf= g a.e. I implies that f and g are identical. (1.2) PROPOSITION. Suppose that for some fixed a > 0 there exists a measure
< on (E, d*)which is a countable sum of finite measures and is such that given f E bd: with U y = 0 a.e. (, then U " f = 0. Then there exists a reference measure for X. Proof. If p is a measure and f~ 8: , then it will be convenient to write ( p , f ) = p ( f ) . We may assume that ( is finite and hence that <(,U'l) < 0 0 ; that is, the measure 1 = t U " is finite. Clearly I ( A ) = 0 if A is of potential zero. On the other hand if I ( A ) = 0 then UuZA= 0 a.e. ( and hence UuZA= 0. Thus A is of potential zero. Consequently 1 is a reference measure for X.
We can now state the regularity assumption that we will impose from time to time. (1.3)
There exists a reference measure for X.
When ( I .3) is being assumed to hold we will explicitly say so. Note that if the functions in 9'"are lower semicontinuous for some a > 0, then the measure ( = n = 1 2-"~,,, where {x,} is a countable dense subset of E, satisfies the hypotheses of (1.2), and so (1.3) is satisfied in this situation. In fact (1.3) is a very mild assumption in the sense that it is valid for practically any standard process of interest (see, however, Exercise 1.19). The following simple result begins to indicate the importance of (1.3).
c"
(1.4)
PROPOSITION.
Under (1.3) any a-excessive function is Borel measurable.
Proof. It suffices to show that U y i s Borel measurable for any f E 8; and a > 0. Let I be a reference measure for X such that the measure p = IU" is finite. Then there exist g, h E 8+ such that g If 5 h and p { g c h } = 0. Therefore U'g I U'f 5 U"h and the extreme members of this inequality agree 1 a.e. and hence are identical. But U"h and U'g are Borel measurable and consequently so is U "f.
Note that if A is a nonempty set in b* and A is finely open, then 1(A) > 0 for any reference measure 1. Hence if 1 ( B ) = 0, the complement of B is finely dense in E. If M is an exact MF of X and f, g are in Y " ( M ) , then { , f # g ] is finely open and nearly Borel measurable, and so iff = g a.e. I then j = 9. Before coming to the main result of this section it is necessary to establish some notation. In the sequel M will always denote a fixed exact M F of X
198
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
which vanishes on [r, 001. Recall that a function f~ S*,is said to be u (X, M) super-mean-valued provided f vanishes off EM and QFfsf for all t 2 0. In this casef = lim,&,, Q;lfexists andf is the largest function in Y " ( M ) which is dominated byf. We callfthe u - (X, M) excessive regularization of f. If {f,}is a decreasing sequence of a - (X, M) excessive functions, then it is immediate that f = lim,f, is u - (X, M) super-mean-valued, and the reader should have no difficulty in modifying the proof of Theorem 3.6 of Chapter I1 to show that {f < f } is semipolar in this case.* Since the infimum of two elements in Y " ( M ) is again in Y " ( M ) , it follows that if IS.}is any countable family of functions in Y " ( M )and f = inf, f,,thenfis u - (X, M) super-meanvalued and { f < f } is semipolar. The next two theorems are the main results of this section. Of these, Theorem 1.6 is the more important one. Both of them are due to Meyer [2]. (1.5) THEOREM. Assume (1.3) holds. Let F be a family of a - (X,M) excessive functions which is filtering upward; that is, i f f , g E F then there exists h E F such that f s h and g 5 h. Then there exists an increasing sequence {f,} of functions in F such that sup,f, = sup{fi f~ F}. In particular sup{f:f~ F} is u - (X, M) excessive.
Proof. First note that the last conclusion follows immediately from the first since the limit of an increasing sequence of functions in Y " ( M ) is in Y " ( M ) . Let A be a finite reference measure and a = sup{(A,f(l + f ) - ' ) : f ~ F } . Since F is filtering upward and t --+ t( 1 + t ) - is a strictly increasing continuous function from [0, 001 onto [0, I], one can find an increasing sequence {f,} of functions in F such that lim,(A, f.(l +f,)-') = a. Let u = lim f,. If f~ F then one can find an increasing sequence {g,} of functions in F such that g, 2 sup(f,f,) for each n. Let g = lim g, Then g 2f,g 2 u, and clearly (A, g(l + g)-l) = (A, u(l + u ) - ' ) . Hence g = u a.e. 1 and, since both u and g are in Y " ( M ) , g = u. Consequentlyf 5 u, and since f~ F is arbitrary the proof is complete.
.
THEOREM.Assume (1.3) holds. Let F be a family of ct - ( X , M) excessive functions and let u = inf{f:fE F}. Then there exists a countable subset {f,} of F such that if u = inf,f,, then ij s u I u. We have already remarked that {ij < u} is semipolar. (1.6)
* Observe that if B is a nearly Bore1 subset of E M and no point of E M is regular for B, then B is semipolar. To see this write B = ( B n En) where En is defined above (5.3) of Chapter 111 and note that B n En is thin for each n. Thus a subset of EM is "semipolar relative to (X,M)" if and only if it is semipolar.
u.
1.
REFERENCE MEASURES
199
Proof. There is no loss of generality in assuming that F is filtering downward. Arguing as in the proof of (1.5) we can find a decreasing sequence {f,} of functions in F such that if u = lim,f,, then for any f E F we have u Ifa.e. 1, where A is a finite reference measure. Hence V ~ f and , since f E F is arbitrary the proof is complete.
We may use (1.6) to give a very useful characterization of polar sets. (1.7) PROPOSITION. Assume (1 -3)and let A be a polar set. If u > 0 then there exists a function f E Y" such that f is finite except on a set of potential zero and A c {f = co}. This is also true when CI = 0 provided that there is a sequence {h,} of functions in S*,such that Uh, is bounded for each n and lim Uh, = co on E.
Proof. We will carry out the proof under the second hypothesis. Note that it is equivalent to the statement that there is an h E S*,such that Uh is bounded and Uh > 0 on E. There is no loss of generality in assuming that A is nearly Borel. Let F = { f E 9: f 2 Uh on A}. The family F is filtering downward, and if g = inf{f : f E F} then by (6.12) of Chapter 111, g ( x ) = PAUh(x) for every x # A. But PAUh = 0 and consequently by (1.6) there is a decreasing sequence { f , } of functions in F such that lim f , ( x ) = 0 except for x in a Borel semipolar set. We may assume each f, I I( Uh 11 and so if A is a finite reference measure then ( A , f , ) + 0. By passing to a subsequence we may assume that (A, f , ) I2-" for each n. Iff = Efn then (A, f) c 00 and f = co on A since Uh > 0. This completes the proof. REMARK. In view of Proposition (3.5) of Chapter I1 this gives a complete characterization of polar sets when (1.3) holds. The name " polar" comes from this characterization in the classical case (i.e., Brownian motion), a polar set being contained in the set of "poles" of a superharmonic function.
We turn next to some applications of (1.3) (or more precisely Theorem 1.6) to the fine topology. These results are due to Doob [3] and we follow his discussion quite closely. Once again we will make it explicit when we are assuming (1.3). If A is any subset of E we say that x is irregular for A provided A is thin at x ; that is ((3.1) of Chapter 11) there exists a set B E 8"such that A c B and x is irregular for B. We say that x is regular for A provided that x is regular for every nearly Borel set containing A. Thus x is irregular for A if and only if it is not regular for A. We let A' denote the set of all points which are regular for A. The following proposition generalizes (4.9) of Chapter 11. (1.8) PROPOSITION. Let A c E. Then (i) A is finely closed if and only if A' c A, (ii) (A u A'), = A', and (iii) A u A' is the fine closure of A.
200
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
Proof. Suppose A is finely closed. Then A is thin at each x E E - A ; that is, given x E E - A there exists a nearly Borel set B 3 A such that x is not in B'. Hence x is not in A', or A' c A. Conversely if A' c A and x E E - A, then x is irregular for A. Hence A is finely closed. Thus (i) is established. Let us first prove (ii) under the assumption that A E 8".In this case A' E 8"and we claim that TA I TAP almost surely. To see this fix an x and let {K,,} be an increasing sequence of compact subsets of A' such that TK,, 1TAr almost surely P". Then
P"(T', < T A ) = lim P"(T,, < T A ) n
- lim E"{PX'T"n'[TA > 03 ; TK, < TA}= 0, n
establishing the claim. Now TA, = min(TA, TAr) = T A almost surely, and so A' = ( A u A'), if A is nearly Borel measurable. In the general case A c A u A' implies that A' c (A u A')'. If x 4 A' then there exists a B E 8" containing A such that x 4 B'. But B u B' E 8"and contains A u A', and by what was proved above x 4 (B u B'),. Hence x is irregular for A u A', or ( A u A'), c A'. Thus (ii) holds. Next suppose that F is finely closed and A cF. Then by (i), A u A' c F u F' = F. On the other hand (ii) and (i) imply that A u A' is finely closed, and so (iii) is proved. We now introduce a " regularization " of an arbitrary numerical function which will play an important role in our development. (1.9) DEFINITION. Let u be a numerical function on E. Then we define u+ by u + ( x ) = sup{c: x E {u 2 c}'} for each x in E.
The reader should check that if A is a subset of E then I:
= ZAP.
(1.10) PROPOSITION. If u E I", then for each x u + ( x ) = lim sup u ( X , )
a s . P".
110
Proof. It is understood that t = 0 is excluded from the limiting procedure under consideration. Also it is not a priori clear that the limit in question is in any sense a measurable function of w . Let u'(x) = a. Then x E { u 2 c}' for all c < a, and so P" almost surely u(X,) 2 c for arbitrarily small strictly positive t. Hence lim sup,$ u(X,) 2 a almost surely P". If c > a, then x 4 { u 2 c}' and so u(X,) c on some open interval (0, T ( w ) ) almost surely P". Thus lim supt$ u(X,) I a almost surely P" establishing ( I . 10).
-=
Note that the proof of (1.10) actually shows that the set on which the desired equality fails to hold is in 9.Also note that if u is finely continuous
1.
REFERENCE MEASURES
201
and nearly Borel measurable, then u = u + . In particular if u is a-excessive, then u = u'. (1.11) PROPOSITION.Let {u,} be a decreasing sequence of a-excessive functions with limit u. Then U = u'. Proof. Of course, U is the a-excessive regularization of u. Since E s u and U E 9'" we have U = 6' I u'. On the other hand given E > 0 it was shown in the proof of (3.6) of Chapter I1 that A , = {u 2 U E } is thin. Now u E 8"and so u'(x) = lim suplLou(XJ almost surely P" for each x. If x is fixed then x i s not regular for A , , and so almost surely P"
+
u +(x) I lim sup
~ ( x +, )E = ~ ( x +) E ,
140
which implies u'
< U.
Thus ( 1 . 1 1 ) is proved.
The following corollary is an immediate consequence of Theorem 1.6 and Proposition 1 . 1 1. (1.12) COROLLARY. Assume (1.3). If {ui: i E I } is a family of a-excessive functions, then there exists a countable subset J of I such that if u, = inf, E, uiand u, = inf, ui then u: = u: = U, < uI S u, and the set on which the extreme members of this inequality differ is semipolar. In particular u; is a-excessive. In the remainder of this section we will assume that (1.3) holds. If A c E and c( 2 0 then we define
Fi
={
f 9'": ~ f 2 1 on A }
and e i = inf{f:fE Fi},
We now fix a positive value of a, say a = I, and write eA and F A in place of e i and Fi , Note that it follows from (1.12) that e i E 9" and that { e i < eA}is semipolar. In addition, if B E 8"we write q B ( x ) for E X ( e - T BT; B< l). (1.13) PROPOSITION. If A is a subset of E, then there is a Borel set B such that e i = q B .
=I A
Proof. According to (1.12) we can find a decreasing sequence {f,} of functions in FAsuch that iff = limf, then e i =f '= f l eA Sf:Let B, = {f.2 I} and B = B, . Clearly j ; 2 PBnf, 2 PB,1 = qB,2 q B ,and so q Br f =e i
n
202
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
because cpB E 9" and cpB ~ f On . the other hand according to (6.13) of Chapter I11 if x # B - B' then cpB(x)= eB(x)> e,(x), so that cpB > e i except on the semipolar set B - B'. Consequently ( p B 2 e i since both functions are in 9",and SO cpB = e i proving (1.13). REMARK.It is easy to see that if D E I", then cpD = e t . As a result, using the notation of ( m ) , if C E 8"and C 3 A, then qcn = c p B . There are several interesting corollaries to (1.13) and the remark following it. The sets A and B are those in the statement of (1.13). (1.14) COROLLARY. A' = B'; consequently, if A is any subset of E, then A' is a Borel set and A - A' is semipolar.
Proof. Since A c B it follows that A' c B'. If x # A', then there is a set C E I"such that C 3 A and cpc(x) < 1. By the remark following (1.13) we have 1
'cpc(x>
(PcndX)
= cp&)
and so x 6 B'. Thus A' = B'. Of course B' = { c p B = 1) and under (1.3) this is a Borel set. Finally A - A' = A - B' c B - B' which is, of course, semipolar. REMARK.It is apparent from (1.13) and (1.14) that A' = { e i = I } for any subset A of E. COROLLARY. If A is semipolar then A is contained in a semipolar Borel set.
(1.15)
Proof: It suffices to treat the case in which A is thin. If A is thin and B is a Borel set related to A as in (1.13), then by (1.14) A' and B' are both empty. Consequently B is thin.
REMARK.Note that the proof of (1.15) actually shows that a set which is thin at every point is itself thin. Compare this with (3.14) of Chapter 11. COROLLARY.If u is any numerical function on E, then u+ is Borel measurable and {u' < u } is semipolar.
(1.16)
Proof.
It is immediate from the definition of u'
that {u' 2 c }
=
n,,{u 2 c - l/n}r for any c. Consequently (1.14) implies that u+ is Borel
measurable. If A = { x : u(x) 2 a > b 2 u'(x)} and x
E A',
then x
E
{u 2 a}'
1.
203
REFERENCE MEASURES
and so u ' ( x ) 2 a ; that is, A' c A'. Therefore (1.14) implies that A is semipolar, and this yields the fact that {u' < u } is semipolar.
=A
- A'
We come now to the main result of this development. (1.17) THEOREM.Let { u i : i E I}be a family of finely upper semicontinuous (u.s.c.) functions on E (that is, U.S.C. in the fine topology). Then there exists a countable subset J of I such that if u, = inf, ,I ui and u, = inf, , u, , then uJ' I u, I u, . In particular u, and u, differ on a semipolar set. We emphasize that (1.3) is being assumed here.
,
Proof. Let us suppose that we have established (1.17) in the special case in which each u iis the indicator function of a finely closed set A , . Since I: = I", , the conclusion would read in this case that there exists a countable subset J of I such that A; c A, c A, where A, = A , and A, = ( - ) , , , A i . We now show that this special case implies the general result. For each a, consider the family of finely closed sets {ui 2 a } as i ranges over I. Then there exists a countable subset J, of Isuch that {u,, 2 u}' c {u, 2 a } where uJa= infie,. u,. Let J = U n e Q J , .Then {u, 2 a} c {u,, 2 a } and so {u, 2 u}' c {u, 2 a } for each a E Q. Thus if u i ( x ) = c, then x E {u, 2 a}' c {u, 2 a } for all rational a c c, and so uI(x)2 c. Hence uJ' I u,, and consequently to complete the proof of Theorem 1.17 it will suffice to treat the above special case. Suppose that { A i ; i E I } is a family of finely closed subsets of E and let A= , A . Now {el,; i E I} is a family of 1-excessive functions and so by Corollary 1.12 there exists a countable subset J of I such that if u, = inf,., e l , and u, = infie, e i , , then uJ' = U, = u: Iu,. Define B = Ai. Plainly e i I el, for i E J and hence e i 5 u, . Consequently e i 5 UJ since ii, is the largest 1-excessive function which u, dominates. Therefore using the remark following (1.13) we obtain
ni,,
ni , ,
nie,
B' = {e;
= i} c
{u,
= i} c
n { e l , = I}
i E I
=
n A;
A.
isl
Thus the first assertion in Theorem 1.17 is established. The second follows from the first and (1.16). REMARK.There is a result dual to (1.17) about families of finely lower semicontinuous functions. In particular if {I?,;i E I}is a family of finely open sets, ,I B,) - E J Bi) is then there exists a countable subset J of I such that semipolar.
(u
(ui
204
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
We will give a number of applications of Theorem 1.17 in the exercises. We close this section with following interesting result of Doob [3]. (1.18) PROPOSITION. Let 8/ be the a-algebra of fine Borel sets on E, that is, the a-algebra generated by the finely open sets. Assume that (1.3) holds. Then A E 8’ if and only if there exists a Borel set B and a semipolar set N such A=BuN.
Proof. If A is finely closed, then A = A‘ u ( A - A‘) where A‘ is Borel and A - A‘ is semipolar. Let 9 be the class of all subsets of E which can be written as the union of a Borel set and a semipolar set. Clearly 9 is closed under countable unions and by the above remark contains all finely closed sets. If A E 9 and A = B u N , then there is a Borel semipolar set D 3 N according to (1.15). Consequently, letting “ prime ” denote complement in E, we have A’ = (B’ - D)u [B’ n ( D - N ) ] . Thus S@ is closed under complements and so 8’ c 9.Conversely any set of the form A = B u N with B E B and N semipolar is in Bf since a thin set is finely closed.
Exercises (1.19) Consider the transition function P,(x, A) = IA(x) on (15, 8).The corresponding standard process is said to be “ constant in time.” Prove that there is no reference measure unless E is countable. (1.20) Assume (1.3). If A is nearly Borel show that there exists an increasing sequence {K,}of compact sets contained in A such that lim TK,= TA almost surely. [Hint: use (1.5).] (1.21) Assume (1.3) and let p be a measure on B* which vanishes on semipolar sets. Show that there exists a smallest finely closed set A such that p ( E - A) = 0. The set A is called thefine support of p. [Hint: use (1.17) and ( 1.1 8).] (1.22) If A is an additive functional of (A’, M), then we say that A vanishes on a set D provided UAI, = 0. We say that D carries A if A vanishes on E - D. Assume (1.3). Show that if A is a continuous additive functional of ( X , M), then there is a smallest set Ffinely closed in EM (i.e., closed in the fine topology restricted to EM)which carries A. F is called the fine support of A and will be studied in some detail in Section 3. (1.23)
Suppose that there exists a finite measure 1 on b* such that if
2.
CONTINUOUS ADDITIVE FUNCTIONALS
2Q5
A E 8"and P"(T, < 00) = 0, a.e. A, then Px(TA< 00) = 0 for all x. Show that X has a reference measure. [Hint: use (1.2).] (1.24) Assume (1.3). Show that any OL - (X, M) excessive function is Bore1 measurable. We are, of course, supposing that M is exact.
2. Continuous Additive Functionals
In this section we are going to establish two very important properties of continuous additive functionals under the assumption that (1.3) holds. We d o not know to what extent (1.3) is necessary for the validity of these properties, but we will make crucial use of (1.3) in our discussion. As usual X = ( Q -X, M f X ,, ,O f ,P") denotes a fixed standard process with state space (E, 8) and M a fixed exact M F of X vanishing on [C, a]. Again we will assume that A = 9 and A, = 9, for all t. We begin with the following extension of Theorem 3.16 of Chapter IV. Recall from (1.3) of Chapter IV the definition of a perfect AF or MF. (2.1) THEOREM. Assume (1.3) and that M is perfect. Then any CAF of (X, M) is equivalent to a perfect CAF of (X, M).
ProoJ Let A be a CAF of (X,M) and let us assume to begin with that A has a bounded potential; that is, u(x) = E " A ( c o ) is bounded. Let Iz be a finite reference measure for X and let U,(X, r)= U, I,(x) = E"j: Z,-(X,) dA for I'E b*. Plainly UA(x,.) is a finite measure on 6*for each x. Since (A, u ) is finite, p( r) = j U,(X, r)A(dx) defines a finite measure on t*.Now Q, u u as t + 0 and so it follows from Egorov's theorem that we can find an increasing sequence {K,,} of compact subsets of E such that (i) Q, u -,u as t + 0 uniformly on K,, for each n and (ii) p(E - K,,) J. 0 as n + 00. Clearly 17, IE-K, decreases with n and 0 there exists 6 > 0 such that u I Q fu + E on Kn provided t < 6. Hence on K, we have for t c 6
+
206
V. PROPERTIES OF CONTINUOUS FUNCTIONALS un+Q,UAIE-KnIUn+
U,I,-Kn=U
+ Q, + and so u,, 5 Q,un + on K,, if t < 6 . But Q,u, + is (X, M) excessive and +
I Q ~ u
E
= Qf U,
UAIE-K,
E
E,
E
hence Un
= QK,Un
QK,(QtUn
+ &)
I Qr
Un
+E
everywhere provided t < 6. Thus u,, is uniformly (X, M) excessive. If we define u, = 0, then plainly f, = u, - u,,-~is (X, M ) uniformly excessive for By Theorem 3.16 of Chapter IV for each n there is each n 2 I and u = a perfect CAF, A", of (X, M) whose potential isf,. If we define B(t) = A"(t) for 0 I t 5 a,then E " B ( a ) = u(x)and so B ( a ) < co almost surely. Therefore A"(t) converges uniformly on [0, a]almost surely and so B is a CAF of (X, M). Clearly B is perfect. Since A and B have the same bounded potential u; they are equivalent. Thus (2.1) is proved under the assumption that A has a bounded potential. To treat the general case we make use of (2.21) of Chapter IV which asserts that there exists a sequence {A"} of CAF's of (X, M) with bounded l-potentials such that A"((t)t A ( t ) for all t as n + co. Define N, = e-' M r and B"(t) = yoe-'dA"((s). Then N is a perfect M F of X and B" is a CAF of (X, N) with a bounded potential. Thus by what was proved above there exist perfect CAF's, B" of (X,N) such that B" and B" are equivalent for each n. If we define P(t)= yoes LIB"@), then each 2" is a perfect CAF of (X, M) equivalent t o A". It follows easily from this that A is equivalent to a perfect CAF of (X, M), completing the proof of Theorem 2.1.
cf,.
c
c
Let A ( M ) be the collection of all continuous additive functionals of (X, M). We will write simply A for the collection of all CAF's of X. Equality in A ( M ) is understood to be equivalence. Under the usual pointwise definitions of A Band aA for a 2 0 the set A ( M ) becomes a cone. This introduces an order relation " I " in A ( M ) as follows: A I B provided there exists C E A ( M ) such that A C = B. The reader should observe that if A, B E A ( M ) have finite a-potentials g ,u:, then A I B if and only if u i < $, . Given a 2 0 let A"(M) denote those elements in A ( M ) which have finite a-potentials. Then if 0 I a < fl, A"(M)c A B ( M )c A ( M ) . We are now going to study the cone A ( M ) in some detail. However it will first be necessary to prepare some tools which will also be used in Section 3.
+
+
(2.2) LEMMA.Let a(t) be a function from [0, a]to [0, co] which is nondecreasing and right continuous and satisfies a(0) = 0, a ( w ) = lim,, a(t). Define r(t) = inf{s: a(s) > t }
2.
207
CONTINUOUS ADDITIVE FUNCTIONALS
for 0 I t < 00 where as usual we set ~ ( t=) 00 if the set in braces is empty. The function z from [0, 00) to [0, co] is called the inverse of a. It is right continuous and nondecreasing. Define z(co) = lim, z(t). Then a(s) = inf{r: z ( t )
(i)
> s},
0 I s < 00;
and iffis a nonnegative Bore1 measurable function on [0, a]vanishing at co one has
j(o f(t>d d t ) = jrnfC~(I)Idt.
(ii)
0
.m)
Proof. It is easy to verify that ~ ( tis) nondecreasing and right continuous. As to (i) we first note that if a(s) < 00, then t[a(s)]= inf{u : a(u) > a@)}2 s with equality if and only if s is a point of right increase of a ; that is, a(s + E ) > a(s) for all E > 0. Thus ?[a(s E)] 2 s + E > s if a(s + E ) < co and so inf{t: ~ ( t>) s} I a(s E ) in any case. Since u is right continuous, this yields inf{t: ? ( t ) > s} I a(s). Suppose this last inequality is strict. Then there exists an r < u(s) such that T ( r ) > s. But the definition of 7 then implies that a(s) I r. Thus (i) holds. It suffices to prove (ii) when a(co) < co. In this case the class of boundedf for which (ii) holds is clearly a linear space closed under increasing limits and so it suffices to prove (ii) when f is the indicator function of [0, s], s < 00. But then j ( O , m ) f ( da(t) t ) = a(s) - a(0) = a(s).Of course, da assigns no mass to (0) or ( 0 0 ) and so we could just as well integrate over [0, co]. On the other hand if t<< a(s) = inf{u: T ( U ) > s} then t(t) I s, while if t > a(s) then r ( t ) > s. Therefore f[t(t)] is one on [0, a(s)) and zero on (a(s), 001. Consequently S:f[t(t)] dt = a($).This completes the proof of (2.2).
+
+
Although (2.2ii) is closely related to (2.20i) of Chapter I1 it is not a consequence of that result.
[c,
(i) Let M be an SMF of X vanishing on 001. (For (2.3) PROPOSITION. this proposition only we do not assume that M is exact.) Let A be an A F of ( X , M). For each t, 0 I t < co, define ?,(a) = inf{u: A , ( o ) > t } .
Then each 5 , is a stopping time. (ii) Suppose that M , = Zco,s)(t)where S is a strong terminal time and that A is continuous. Then for each u, u 2 0 one has 7Ut"
almost surely on Proof.
{T,,
= '5,
+ T"
8,"
< S}.
We may assume that t + A,(w)is right continuous and nondecreasing
208
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
for all w . Then Lemma 2.2 implies that {T, < u} = Un{Au-l,n > t } and so each T , is a stopping time. Note that if t + A,(w) is continuous, then u -,T,(o) is strictly increasing on [0, A,(w)) and that T,(w) = max{t: A,(w) = u} provided the set in braces is not empty, while T,(w)= 00 otherwise. In particular A[z,(w), w ] = u $T,(o) < 00. In proving (ii) we may assume that t A , ( o ) is continuous for all w . Let u, u 2 0 be fixed. Proposition 1.13 of Chapter IV and the right continuity of A, imply that --f
+ s) = A(?,) +
0,") for all s, almost surely on {T, < S}.Since A(?,) = u this implies that, almost surely on {T, < S } , A(?, + s) > u + u if and only if A(s, 0,") > v. Proposition 2.3 follows immediately from this and the definition of T , . 4 7 ,
In general we will call T the inverse of A. We now assume again that M is exact. We next introduce some notation. If A is a CAF of (X, M) we write A*(t) = (M,)-' dA, so that A* is a CAF of (X, S). For short we will write dA: = ( A 4 J - I dA, and dA, = M , dA: for this relationship. We let T denote the inverse of A and T* the inverse of A*. Since A, I A: for all t one has T: I T , for all t. I f f € 8: we write .fA for the family of random variables (fA)(t)= fof(X,) dA, . Under appropriate continuity or finiteness assumptionsfA will be a CAF of (X, M). Note that (fA)* =fA*. We come now to the key step in our study of A ( M ) . We assume that (1.3) holds in the remainder of this section.
yo
(2.4) PROPOSITION. Let A be a CAF of (X, M) with finite a-potential. Then necessary and sufficient conditions that an f~ Y " ( M ) have the representation f = U i g where g is Bore1 measurable and 0 Ig I1 are (i) f I4 and ( i i ) f ( x ) - Q f f ( x ) I E Xfo e-" dA, for all t and x.
Proof. The necessity of these conditions is immediate. By replacing M, with e-"' M, and A , with e-", dA, we reduce the general case to the case ci = 0 in the usual way. Thus in proving the sufficiency it suffices to consider the case a = 0. Now
yo
Q,(uA
-f>(x)
= UA(X)
- E X A ( t )- Q t f ( ~ )I AX) -f(x),
and so u, -f is (X, M ) excessive. Since u, is a regular (X,M) potential the relationshipf+ (u, -f) = u, with both f and u, -fin Y ( M ) implies that f and u, -fare regular (X, M ) potentials. Hence there exist CAF's, B and C, of (X,M) such that f = us and u, -f = u c , and the uniqueness theorem implies that A = B C. Consequently if T is any stopping time
+
f(x) - Q T f ( x ) = E x B(T) I E x A(T) for each x. Now A[?(?)]= t if z ( t ) < 00 and A(00o) It if ~ ( t=) 00. Since
2.
209
CONTINUOUS ADDITIVE FUNCTIONALS
z*(t) I T ( t ) this implies that A [ t * ( t ) ]I A [ r ( t ) ]5 t for all Q,,,,,f(x) I r for all t and x . If h E 8: define for f l > 0
t.
Hence f ( x ) -
m
W Bh(x) = E x
joe-@‘ ~ I ( X , . ( ~M,.(,) ) ) dt.
Making use of A*[z*(t)]= 1 if T * ( t ) < 00 and (2.2$, this may be written m
W Bk ( x ) = E X
j
e-BA*(‘)
h ( X , ) d ~. ,
0
It now follows from (2.22) of Chapter IV (or is easily verified in the present situation) that the family { W B ;fl 2 0) is a resolvent on bS* with W o = U,. Taking Laplace transforms one obtains from f - Q,.(,)f It the inequality (2.5)
O < f l ( f - f l W B f ) < 1.
In particular flWBf+fas + co. Since uA is finite we may choose a reference measure I. so that p(r)= AU,,(r) = f I(dx) U,(x, r) is a finite measure on b*. Let L, = Lm(p)and L, = L,(p). Since L, is the dual of L, (as Banach spaces), a bounded set in L, is relatively compact in the weak * topology, i.e., the topology on L, induced by L,. Also L, is separable since E has a countable base and so the weak * topology is metrizable on bounded subsets of L, . It now follows from (2.5) that one can find a sequence (8.) increasing to infinity such that /?,(f-flnWB”f)converges to an element g of L, in the weak * topology. We may assume that g is Bore1 measurable and 0 Ig I 1. (Here we are making the usual “identification” of an element of L, with a “function.”) Let gn = &(f- fl,WB”f).Since U,, = W o on bounded functions. the resolvent equation yields
+f
u A g n = flnwsnf
pointwise as n + 00. On the other hand for each x the measure U,(X, .) is absolutely continuous with respect to p, because if p(r)= I.(dy) UA(y,r) = 0, then the (A’, M ) excessive function UA( r) vanishes a.e. A and hence is identically zero. For fixed x let U,(x, dy) = h,(y) p(dy) where h, E L,. Then a ,
Consequently f = U,g and the proof of Proposition 2.4 is complete.
210
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
(2.6) COROLLARY. If A , B E A"(M), then A I B if and only if for all and x
t
If A , B E A ( M ) , then A I B if and only if there exists a Borel measurable f with 0 If I 1 such that A =fB. Proof. If A , B E A"(M) and A I B, then clearly (2.7) holds. If (2.7) holds then, since the left side of (2.7) is just &(x) - Q:&(x), it follows from (2.4) that 4 = U : f = u;,with 0 I f I 1. Hence A =fBand so B = A + (1 -f ) B , or A I B. Note that i f f 2 0 is bounded and B E A"(M), then fB E A"(M).It remains to show that if A , B E A ( M ) and A I B, then A =.fB with f Borel measurable and 0 I f I 1. It follows from (2.21) of Chapter IV that if q ( x ) = E x sg e-' M , exp( - B): dt and g,, is the indicator function of { l/(n 1) < cp I l/n} for n 2 1, then B" = g,, B has a bounded one potential for each nand 19. = l E M Of. course each g,, E 8: . Let A" = g n A . If A C = B then g n A gn C = gnB and so A" I B". Clearly A" has a bounded one potential and = A . By the first part of the proof, then, there is for each n a function h, E 8: with 0 I h, I 1 such that A" = h,g,B. Let I be a finite reference measure and let p be the measure p ( D ) = E Ajg e-' l D ( X t )d E , . Then p(h,g,,) = I(dx) U:n(X) < 00 and so there is a function f.E 8, such that f n I hngnand p(f,) = p(h,,gJ. Now the two bounded functions ujn, and uinenB are in 9 " ( M ) and are equal almost surely I. Hence they are identical, so by the uniqueness theorem for additive functionalsf, B = h,g, B = A". If we set f= then f E 8, and A =f B . It is obvious that 0 sf I 1 and so the proof of (2.6) is complete.
+
+ IA"
+
If,
(2.8) COROLLARY. Let A and B be in A ( M ) . Then a necessary and sufficient condition that there exists a Borel measurable f 2 0 such that A =f B is that, for any g E 8, , U B g= 0 implies U,g = 0. Proof. The necessity is obvious. To prove the sufficiency there is no loss of generality in assuming that A and B have bounded one potentials. If C = A + B,then by(2.6)thereexistL g E 8 , ,O ~ f g ,5 I , suchthat A =fCand E = gC. Formally A = (f/g)B, and we will now justify this relation. Note that UAh = 0 if and only if U B h= 0. If N, = { g = 0 } and I, is the indicator function of N, , then ULI, = Ui(gI,) = 0 and so, using the hypothesis, U;(fI,) = UiI, = 0. Hence f = 0 on N, a.e. Ui(x, .) for all x . Define h(x) = f ( x ) / g ( x ) if x q! N, and k(x) = 0 if x E N , . Then for each x, hg =f a.e. Ui(x, .) and consequently :u = U ; f = UL gh = U i h . The uniqueness theorem now yields A = hB completing the proof of (2.8).
2.
CONTINUOUS ADDITIVE FUNCTIONALS
211
We are now going to give an example to show that (2.4), and hence (2.6) and (2.8), are not valid for natural additive functionals. Let E bethefollowing subset of the Euclidean plane where (x, y ) is the generic point in R2: E = El v E2 v
E3
v E4
where E,
= {(x,y):
y = 0, x I O } ,
E , = { ( x , ~ )y : = - x
+ 1,0 I X< l},
E3 = { ( x , ~ )Y: = X - 1 , O S x < l}, E,={(x,y):y=O;x>l}. The process X is described as follows: a particle moves to the right at unit speed until reaching (0,O) which is a holding point with parameter 1 from which the particle jumps to (0, 1) or (0, - 1) with probability 3, respectively, and then continues to move to the right with unit speed. The reader should write down a formal definition of this process and verify that it is a Hunt process. Since excessive functions are right continuous in the obvious sense, condition (1.3) holds. If T is the hitting time of the point (1, 0), then upon defining B ( t ) = 0 for t < Tand B(r) = 1 for t 2 Twe obtain a natural additive functional of X . Define u as follows: u = 1 on E l , u = 0 on E, u E 4 , and u = 1 on E 3 . Then A ( t ) = 0 if t < T and A(T) = u(X,)- - u(X,) if t 2 T also defines a natural additive functional of X . Clearly A IB. If A = J B , then A ( T ) - A ( T - ) = f ( X T ) [ B ( T )- B(T-)I = f [ ( I , O)] almost surely PcoS0). But A ( T ) -- A ( T - ) = u(X,)- - u(X,) is not constant PcO*O) almost surely.
Exercises (2.9) Assume (1.3.) (i) Prove that A ( M ) is a lattice under " I." (ii) Prove that A ( M ) is boundedly complete; that is, if {Ai}is a family of elements of A ( M ) then i n f A i exists in A ( M ) , and if there exists B E A ( M ) such that A i IB for all i, then sup A i exists in A ( M ) . [Hint: use the fact that L,(c() is boundedly complete under the ordering given by f Ig provided A { f >9))= 0.1 (2.10) Assume (1.3). Let M be an exact M F of X vanishing on [c, m] and 'for all 1. Note that M , = I ~ o ~ c always ~ ( t ) satisfies this consuch that M , E 9 dition. Show that any CAF, A , of ( X , M ) is equivalent to a CAF, B, of ( X , M ) such that B, E 9 ' for all t 2 0. [Hint: use (1.24) and imitate the proofs of (3.16) of Chapter 1V and (2.1).] In particular when dealing with a CAF, A ,
212
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
of X we may assume without loss of generality, under (1.3), that A is perfect 'measurable. and that each A, is 9 (2.11) Let A be a CAF of X and let z be the inverse of A. Assume that t -,A , ( o ) is continuous and nondecreasing for all w. Then t -,z,(w) is right continuous and strictly increasing for all o.(i) Show {z, < a } = {A, > t } for all a and t, 0 Ia, t < a.(ii) Show that 8 = (a, 9, Sf(,), Xr(,),er(,), P") is a Markov process with right continuous paths having (E, $*) as state space and whose resolvent W" is given by W " f ( x )= E x e - a A ( t ) f ( X tdA,. ) (iii) Let $9, = Sf(,) for t 2 0 and Y = 9. Note that {Y,: t 2 0} is right continuous. Show that if T is a {Y,} stopping time then zT (zT(w)= T ~ ( ~ ) ( O ) ) is an {9,} stopping time. Also show that Y, = 9r(T). (iv) Show that if Tis a {Y,} stopping time then t,+ = z T z, 0 Or(T) almost surely. Use this to show that the process 2 defined in (ii) is a strong Markov process. (v) If A is strictly increasing, that is, u < v, u < [(o), and A J w ) < 00 imply that A,,(w)< A,(w), show that 8 is normal and quasi-left-continuous and that the lifetime p of 8 is given by p = A,. If, in addition, (1.3) holds use (2.10) to show that x -, E " f ( 8 , ) is Bore1 measurable for each t 2 0 and f E bb, and hence 8 is a standard process. See (4.11) for further properties of 8.Here we have written for t < 00, and, of course, 8 , = A. 8,=
+
3. Fine Supports and Local Times In this section we are going to investigate the set of time points on which a CAF, A, is increasing. Roughly speaking the situation is this: there exists a subset F of E such that t -P A,(w) is increasing at to if and only if Xr, E F. (See Theorem 3.8 for the precise statement.) If (1.3) holds, then F turns out to be just the fine support of A defined in (1.22). However we will proceed somewhat more generally and will give an explicit definition of F. We will also discuss in some detail CAF's for which the corresponding F reduces to a single point xo . Such a CAF is called a local time for X at xo , We will give a simple characterization of those points xo which admit a local time in this sense, and will investigate the dependence of the local time on the point xo . As usual X = (Q, A, A , ,A', O , , P") denotes a fixed standard process with state space (E, 8)and with A = 9, dr= 9, for all 1. Also M denotes a fixed exact M F of X which vanishes on [[, 003 and S = inf{t: M , = O}. Let A be a CAF of ( X , M) and let A* be the CAF of (X, S) defined by dA: = (M,)-' dA, where the notation is that of Section 2. In particular zr and T: denote the inverses of A , and A: ,respectively. It will be convenient to assume that t + M , ( w ) is right continuous and nonincreasing for all w and that t A,(w) is continuous and nondecreasing for all w. Define
3.
213
FINE SUPPORTS AND LOCAL TlMES
R(w) = inf{t: A,(w) > 0}
(3.1)
provided the set in braces is not empty and R(o)= 03 if it is empty. Clearly R is a stopping time and R = sup{?: A, = O}. Since A , ( o ) = 0 if and only if A:(o) = 0, we also have R(w) = inf{t: A:(o) > 0}
(3.2)
provided the set in braces is not empty and R(o)= 03 if it is empty. It follows from (3.2) that if T i s a stopping time, then T R 0 8, = R almost surely on {T < R A S}. In the notation of Section 2, R = T,, = TO*. Finally observe that A, = A: = 0 since A and A* are continuous. We next define
+
q A ( x ) = E"(e-,; R < S}.
(3.3)
Since R = 03 on {R 2 S}, it is evident that (~"(x)= E"(e-,). It is also clear that (P" is in 9 ' ( M ) . We now define Supp(A) = {x: q A ( x ) = l},
(3.4)
and we call Supp(A) the support of A. Clearly Supp(A) is nearly Bore1 measurable, Supp(A) c E M , and Supp(A) is finely closed in EM , that is, closed in the relative fine topology on EM.Moreover Supp(A) = (x: P"(R = 0) = l}, and so intuitively Supp(A) consists of those points x such that, starting from x , t + A, begins to increase immediately with probability one. Obviously Supp(A) = Supp(A*).
PROPOSITION. Let T be the hitting time of Supp(A). Then each of the following holds almost surely: T I R, {R < S } = {T c S}, and T = R on { R < S}. In addition each x in Supp(A) is regular for Supp(A).
(3.5)
Proof. First observe that X T E Supp(A) almost surely on { T c S}. Consequently for any x
P"(T < R , R < S ) = P"(T < R , R 0 OT > 0, R < S ) < E"{PX'T'(R> 0);T < S } = 0. Similarly Px(T < R, T < S) = 0. Therefore R I T almost surely on {T A R S}. For notational convenience let F = Supp(A). Now for any t > 0 we have
-=
P"{R < 7') = P"{A(R + t ) > 0; R < T } = E"{M, P x c R ) ( A> , 0); R < T
A
S}.
If x E F' this last expression is zero since P"(T = 0) = 1. If x # F then X, # F
214
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
almost surely P" on { R c T } ,while if y
4F
P'(A, > 0)I PY(R c t ) + 0 as t + 0. Thus letting t + 0 we obtain P"(R c T ) = 0 provided x $ F - F'. Let $(x) = E"(e-T; R c S). Then using the easily verified fact that t + R o Bt J. R almost surely P" for each x E E M ,one sees that $ is in 9 ' ( M ) . But by what was proved above, 1(1 and qAagree except possibly on the semipolar set F - F', and hence, since they are both in Y ' ( M ) , they are identical. It now follows that R = T almost surely on { R c S } . As a result it is clear that F - F' must be empty and this yields the last assertion of (3.5). In view of what was proved above this implies that T I R almost surely. Combining this with the statement that R I T almost surely on { T A R c S } completes the proof of Proposition 3.5.
(3.6) REMARK.Note that R = 00 almost surely on { R 2 S}.If S = [ then T = 00 on {T2 S},and so in the case S = [ we obtain T = R almost surely. An immediate consequence of Proposition 3.5 is the fact that A = 0 if and only if Supp(A) is empty. We are now going to investigate the set on which t + A, increases. To this end we define the following sets, each ofwhich depends on w :
(3.7)
I = { t : A(t + E ) - A(t) > 0 for all E > 0)
J = { t : A(t + E ) - A(t - E) > 0 for all E > 0)
z = { t c S; x,E Supp(A)} Q = {t c
00
;z(u) = t for some u } .
In the definition of J it is understood that we set A(u) = 0 if u < 0. We also define I*, J*, and Q* by replacing A and z by A* and z*, respectively, in (3.7). We call I the set of points of right increase of A and J the set of points of increase of A. The reader should check that the assumed regularity of A implies that J is the closure of I and that Q = I. Of course, the same relationships hold among I*, J*, and Q*. Finally the assumed regularity of M implies that I =I* and J = J*; hence, Q = Q*. We come now to our main result.
(3.8) THEOREM. Almost surely I c Z c J. Proof. If T is the hitting time of Supp(A), then almost surely we have { w : Z # J} c U { A ( t )- A ( r ) = 0 , r r
+ To 0, c t c S }
3.
FlNE SUPPORTS AND LOCAL TIMES
215
where the union is over all pairs (r, t ) of rationals with 0 5 r < t . But for each x
+T
8, < t < S } I Ex{PX"'[A(t - r ) = 0, T < t - r < S ] ; r < S }
PX{A(t) A(r) = 0, r
0
and Proposition 3.5 implies that this last expression is zero. To show that I c Z we first observe that if t E I and t 4 Z, then t c S and (pA(Xl)< 1. Using the right continuity of u + q A ( X , ) on [0, S ) we then have
{I $ Z} c
u
{ A , - A , > 0, r
+T
0
0, 2 q , r < S }
r
almost surely where the union is over all pairs (r, q)of rationals with 0 5 r < q. As before, Proposition 3.5 implies that the above union has Y" measure zero for all x. Hence Theorem 3.8 is established.
(3.9) COROLLARY.Let g A be the indicator function of Supp(A). Then A = gAA. Proof. For any o,A induces a measure dA,(o) on the Borel sets of [0, co) whose support is J(w). Consider an w such that I(w) c Z(w) t J(w). Now J(w) - I(w) is countable and so, t + A , ( o ) being continuous, the set I(w) carries all of the mass of dA,(w). Therefore
JO
In light of (3.8) this proves the equality (i.e., equivalence) of A and g A A . (3.10) COROLLARY.Supp(A) is the smallest nearly Borel subset of EM which is finely closed in EM and on whose complement A vanishes.
Proof: Recall that A vanishes on a set D provided UAZ, = 0 or, equivalently, I , A = 0. A certainly vanishes on the complement of Supp(A) inview of (3.9). To complete the proof of (3.10) it suffices to show that if D is nearly Borel and finely open and A vanishes on D, then D n Supp(A) is empty. Suppose x E D n Supp(A). Since A vanishes on D,A = ZDcA, and this and the ( X 1=) 0. Consequently T D c I R. continuity of A imply A(TD,)= ~ ~ D C Z D e dA,
216
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
But x E D n Supp(A) implies P"(R = 0) = 1 and P"(T,. > 0) = 1. This contradiction establishes (3.10). We leave it to the reader as Exercise 3.33 to show that if (1.3) holds, then Supp(A) is the fine support of A as defined in (1.22). We are now going to study a very special situation, but one which is of importance in the applications. We begin with the following special case of Corollary 2.8 which is easily verified directly even when (1.3) is not assumed to hold. We leave the details to the reader as Exercise 3.32. PROPOSITION. Let A and B be CAF's of (X, M) and suppose that Supp(B) is countable. Then a necessary and sufficient condition that A = f B for some f E 8 +is that ifg E b+ and U B g = 0 then U,g = 0. Onemaychoose f so that it vanishes off Supp(B). (3.11)
In the remainder of this section we will deal with CAF's A of X , In this case Supp(A) is finely closed and, according to (3.6), T = R almost surely, where the notation is that of (3.5). (3.12) DEFINITION.Let x, be a fixed point of E. Then a CAF, A, of X is called a local time for X at x, provided that Supp(A) = {x,}.
THEOREM.A necessary and sufficient condition that there exists a local time for Xat x, is that xo be regular for {x,}. Let T be the hitting time of {x,}. Then if A is any local time for X at x, ,there exists a positive constant k such that u:(x) = k E"(e-T)for all x . Moreover A is equivalent to a perfect CAF of X.
(3.13)
Proof. If there exists a CAF, A with Supp(A) = {x,}, then x, is regular for {x,} according to Proposition 3.5. Conversely suppose x, is regular for {x,} and define +(x) = E"(e-T)where T = T{xo,.Then E 9' and $(x,) = 1. Now X , = xo almost surely on {T < 0 0 ) and so
+
P i +(x)
=Ex{e-TEx(T)(e-T)} = +(xo) +(x) = $b).
Given E > 0 there exists a 6 > 0 such that P: $(xo) I $(xo) I P: +(xo) + E whenever t I S . But P i ( x , dy) is concentrated on {x,}, and consequently if t I6 we have
+
+ = P i + IP;(P:+ +
E)
IP;+
+
E.
Therefore is uniformly 1-excessive. Also P:$ + 0 as t + 00 since $ is bounded, and so it follows from Theorem 3.16 of Chapter IV that there
3.
FINE SUPPORTS AND LOCAL TIMES
217
exists a pefect CAF, B, of Xsuch that u i ( x ) = $(x). We will assume, as we may, that t + B,(w) is continuous and nondecreasing for all w. We will next show that Supp(B) = {xo}; that is, B is a local time for X at xo . Let R = inf{t : B, > O } . Then
1
m
= P+ $(x> = E X
e-' d ~, ,
T
and so Ex 1; e-' dB, = 0 for all x. Consequently T I R almost surely. Now if x # xo then P"(T > 0) = 1 and so x is not in Supp(B) since R 2 T almost surely. But Supp(B) is not empty because B is not zero (uA(xo) = Il/(xo)= 1). Hence Supp(B) = {xo}. In particular R = T almost surely. To complete the proof of Theorem 3.13 it will suffice to show that if A is any local time for X then there exists a constant k > 0 such that A = kB. If g E € + and U Bg = 0, then g(xo) must be zero. Consequently U,g = 0 since Supp(A) = {xo}.Therefore according to (3.1 1) there exists anfcz € + such that A =.fB. Let k = f ( x o ) . Then obviously A = kB completing the proof of Theorem 3.13. Theorem 3.13 states that when a local time at xo exists it is unique up to a multiplicative constant. Therefore whenever xo is regular for {xo} we define the local time of X at xo to be the unique CAF, L, of X satisfying uL(x) = E x ( e - T )for every x.Here T = T{x,r.We assume, as we may, that L is perfect and that t + L,(w) is continuous and nondecreasing for all w. We let z,(w) = z(t, o)be the functional inverse to L. It follows from Proposition 2.3 that for each u, u 2 0 one has (3.14)
L+"= TI4 + 5" 0," < [} = {T, < 03). But (3.14) is obviously valid if T, = cu. O
almost surely on {z, Thus (3.14) holds almost surely and moreover the exceptional set may be chosen independent of u and u since L is perfect. Therefore (3.14) remains valid if u and u are replaced by nonnegative random variables (I and V . In addition u 4 T,(w)is strictly increasing for all w on [0,L,) and Theorem 3.8 implies that X ( 7 , ) = xo almost surely on (7, < 03) = {u < L, = L m } . Finally note that T, < 03 for all u < 03 if and only if L , = 00. We are now going to investigate the stochastic properties of z, considered as a function of t. We will use the notation developed above without special mention. For notational simplicity we write for a > 0
1
03
(3.15)
ua(x) = uT(x) = E x
e-" dL,,
0
so that in our previous notation ( T = T[,o,) u ' ( x ) = $(x) = E x ( e - T ) .
218
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
Observe that 00
(3.16)
e-"' dL, = EX{e-IITua(XT)}
u"(x) = E x T
= ua(xo)Ex(e-aT).
We are now in a position to compute the Laplace transform of the distribution of z,. (3.17) THEOREM. For each x, t, and a > 0 EX{e-"r(r)}= Ex(e-aT)exp[ - t / u a ( x o ) ] .
Since t + f ( t ) is bounded and right continuous, f ( t ) = e-k' for some k 2 0. By (3.18), j2 f ( t ) dt = tp(xo) and so k = [ t p ( x 0 ) ] - ' . Now for a general x we have EX{e-ar(l)}= E"{exp[ -a(zo
- Ex{e-aT(o)} -
+ T,
0
O,)]}
Exo{e-ar(f)}
- E X { e - a r ( 0 ) 1 exPC - t/u"(xo>I, and since z0 = T this completes the proof of (3.17). The fact that X [ z ( t ) ]= xoalmost surely on { z ( t ) < co} has as aconsequence the fact that, roughly speaking, the process { ~ ( t )t :2 0} has stationary independent increments. The following result is the precise statement. (3.19) PROPOSITION. If 0 = to < t , < . . . < t, and if B1,.. . , En are Bore1 subsets of [0, a),then Pxo{z(tj)- T ( ? j - l )
EB
j ; j = 1,
. . . , n ; t ( t , ) < co} =
n n
PXo{r(tj-
j= 1
E Bj}.
3. FINE SUPPORTS AND
LOCAL TIMES
219
Proof. Recall that t o= inf{t: L ( t ) > 0} and so Pxo[r(0)= 01 = 1 . Thus the above assertion is obvious when n = 1 since B1 is a subset of [0, co). Let { t ( t j )- t ( r j W l ) E B j } . Then using (3.14) and the fact that A, = X [ t ( t ) ]= xo on ( ~ ( 2 )< 03) we find
n;=
Note that Theorem 3.17 implies that Pxo[T(t)< co] = 1 if and only if uL(xo)= lima+oIjl(xo) = a.In this case the process ( ~ ( 2 ) ;P""} has stationary independent increments in the ordinary sense. Let Y = { Y ( t ) ;r 2 0 } be a real-valued stochastic process defined over some probability space (Q*, 9,P). Then Y is called a subordinator provided Y has right continuous paths, Y(0)= 0, and Y has stationary independent nonnegative increments. It is known that for such a process E{e-aY(')}= e-'g(a) for all t 2 0 and CI 2 0 where
(3.20)
g(a) = ba
+
m
(1 - e-'") v(du). 0
In (3.20), b is a nonnegative constant and v is a Bore1 measure on (0, co) satisfying f; u( 1 + u)-' v(du) < co. It is easy to see that b = lima+QI a - l g(a). The function g is called the exponent of Y and v is called the LPuy measure of Y. Conversely given such a b and v there exists a subordinator whose exponent is given by (3.20). In this connection, see (2.18)and (2.19) of Chapter I. For a general discussion of subordinators, see Blumenthal and Getoor [ I ] , Bochner [ I ] , or Ito and McKean [I, p. 311. (What we call a subordinator Ito and McKean call a homogeneous diflerential process with increasing (and right continuous) paths.) According to (3.19, a + tp(xo) is a decreasing function and so y = [u"(x0)]-' exists with 0 I y c 00. This notation is used in the statement of the next theorem, which gives the stochasticstructure of t ( t )underPxo. (3.21) THEOREM.There exists a subordinator Y and a nonnegative random variable Z defined on some probability space (Q*, 9,P ) such that (i) { Y , ; t 2 0 } and Z are independent, (ii) P ( Z 2 t ) = e-yt for all t 2 0, and (iii) if one defines t*(t) = Y ( t ) for t < Z and t * ( t ) = co for t 2 Z , then {z*(t), P} and { r ( t ) ,P""} are stochastically equivalent, i.e., they have the same finitedimensional distributions. Moreover the exponent g of Y is given by g(a) = [~(Xo)l-
- 7.
220
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
REMARKS.In fact {r*(t),P} and { ~ ( t P""} ) , are temporally homogeneous Markov processes (in the sense of Section 2 of Chapter I) with values in (T, W(T)), where T = [0, a], having the same transition function. If y = 0 then { ~ ( t P""} ) , is itself a subordinator with exponent g(a) = [u"(xO)]-'. Proofof(3.21). Let R + = [0, co). For each t > 0 define a measure p, on W(R+) by p,(B) = P""[[z(t>E B].A straightforward computation making use of (3.19) yields P , + ~= p, * ps for all t, s > 0 where " *" denotes the ordinary convolution of measures on R. Since ~ ( t+) 0 a5 t J 0 almost surely PxO,it is clear that p, + eo weakly as t + 0. Finally using (3.17) we see that p,(R+) = lim exp[ - t/ua(xo)]= e-Yt. a-0
Therefore if we define v, = e Y t p , ,the family { v , ; t > 0} is a semigroup (under convolution) of probability measures on R + such that v,--reo weakly as t + 0. Under these circumstances it follows from (9.14) of Chapter I that there exists a subordinator Y on a probability space (a*,9,P) such that P[Y(t)E B]= v,(B) for all t > 0 and B E O(R+). Next observe that
E(e-''(')) = fe-." v,(du) = eyf e-'" h ( d u ) = eYt Exo(e-aT('))= exp{ -t([ua(xo)]-' - y)},
and so the exponent of Y is [Ip(xO)]-'- y. We may assume without loss of generality that there exists a nonnegative random variable Z on (a*,9,P) that is independent of Y and such that P(Z 2 t ) = e - y t for all t 2 0. Now define z* as in the statement of (3.21). Then for any Bore1 subset B of R + and t 0 we have
=-
P[T*(t) E B]
= P [ Y ( t )E B , 2
> t]
= e-7' P [ Y ( t ) E B ] = e-Yf
v m
= P""[T(t ) E B ] .
Combining this with (3.19) and the fact that Y has stationary independent , and increments one easily obtains the stochastic equivalence of { ~ ( t )P""} {r*(t),P} since P[T*(O)= 01 = Px0[z(O)= 01 = 1. This completes the proof of Theorem 3.21. The local time can be a powerful tool in studying the "local" properties of X. In many situations it reduces the study of Z = { t : X , = x o } to a study of Q = { t < a:~ ( u=) t for some u}, and, in view of Theorem 3.21, the set Q
3.
FINE SUPPORTS AND LOCAL TIMES
221
is much more tractable than Z. However we will not go into such applications in this book. The interested reader should consult Blumenthal and Getoor [3]. Also Ito and McKean [I, Ch. 61 contains a very deep study of the local times for one-dimensional diffusion processes. We will return to the subject of local times in Chapter VI. There we will discuss a number of examples. We next turn to a discussion of how the local time, L", of X at x varies with x. These results, in the present context, are due to Meyer [8], and our discussion will follow his closely. Many deep results are known about (x,t ) + L"(t) in various special situations. See, for example, Trotter [I], McKean [I], Ray [3], and Boylan [I]. Let a and b be fixed points in E each satisfying the condition of Theorem 3.13. Let To and Tb denote the hitting times of { a } and { b } and let Lo and Lb denote the local times at a and b (for X ) . Let B, = L; - Lf , Then B = {B,} has all the properties of a CAF of Xexcept that t + B, is of bounded variation on each finite interval rather than nondecreasing and, of course, B, may have arbitrary sign. Let us define a family of operators on the space of bounded complex-valued B* measurable functions as follows :
where L is a real parameter. Since IdB,I IdLi + dLf each W Ais a bounded linear operator. The next lemma is essentially (2.22ii) of Chapter IV.
(3.23) LEMMA.For each real I , iL W oW' = W A- W o = iI W' W o Proof. This is obvious if I = 0, and so we consider only the case I # 0. If f E bB* then (4.14) of Chapter II implies that W' f is nearly Bore1 measurable and finely continuous. Let f be a bounded continuous function on E. Then making use of the right continuity of t --t W ' f ( X , ) the following formal calculations are easily justified. m
WoW'j(x) = Ex
j0 e-' W'j(X,) dB,
m
= E"
joei*Buf(XU)e-"
dBt d B , . 0
But B, is continuous and so the integration on t yields (l/iL)[l- e-i'Bu].
222
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
(See Exercise 3.42.) Thus the first identity in (3.23) is proved. The second is proved by a similar calculation.
If f b is the indicator function of the two-point set {a, b}, then clearly W'#bf= W Af for each real A; that is, the (complex) measures W"(x,.) are supported by {a, 6). Let $,(x) = E"(e-Ta) and $ b ( x ) = E"(e-Tb).Iff is a
.);;(
function on E we write (f)for the column vector
Using this notation
and the properties of La and Lb we may write ( W of) = Wo( f) where Wo is the matrix (3.24)
(Recall that $&) = $ b ( b ) = 1.) Also, for general A, ( W'f) = WA(f) where W' is a square matrix of order two. In this notation (3.23)becomes iAWoWA= W' - Wo and so (I- iAWo)WA= Wo. But the determinant of (I - iAWo) is given by 1 A2[1 - $#(b)$b(a)] 2 1, and so I - iAWo is invertible. Consequently
+
(3.25) I
+ iAWA= ( I - iAWo)-'
= (1
+ A2y2)-'
1
+ iA
( i A $,(b)
- iA & , ( a ) ) 1 - iA
We will use these calculations to make the following basic estimate. (3.26) LEMMA.For each x E E let 6" be the measure defined on the Bore1 sets of the real line by P(r)= P X ( B tE r) e-'dr. Then for each 6 > 0, 6"({u: Iu) 2 6 ) ) 5 e-''U.
Proof.
First observe that integrating by parts we have
1
00
iAW' l(x)
= iAE"
0
and so
eiABre-' dB,
3.
FINE SUPPORTS AND LOCAL TIMES
223
Therefore (3.25) yields
+ in[1 - ll/b(a)l,
= 1 - iA[l - $,(b)] 1 + l2yZ 1 + 12y2 Inverting these Fourier transforms one sees that 8"(du) and Ob(du)have densities with respect to Lebesgue measure given, respectively, by
@(A)
=
where 11' = 1 - $&a), 11 = 1 - $,(b), and Y(u) = 1 for u 2 0, Y(u)= 0 for u < 0. One now obtains the conclusion of (3.26) when x = a or b by a simple computation. (There actually is equality in this case.) Let T = T, A Tb be the hitting time of the two-point set {a, 6 ) . Then for any x , since dB, = 0 on [0, TI and B T = 0, we have
&(A)
- 1 = iAWa l(x) = ilE"
= i l p,(x) Wa l(a)
where we have written p,(x) Thus
@(A)
=1
/ye-iaBre-' dB,
+ i l Pb(X) Wa l(b),
= E"(e-T; T = T,)
- pa(x) - PdX) + p,(x)
and pb(x)= E X ( e - T ;T = Tb).
+ P b ( X ) eb((n),
and consequently = [I - pa(x) - Pb(X)IEO+ 8" + Pb(X) eb, where E~ is unit mass at the origin. The conclusion of (3.26) for general x now follows from the results in the cases x = a and x = b. Of course (3.27) may also be obtained by a simple direct computation beginning with the definition of the measure 8".
(3.27)
We now apply Lemma 3.26 to obtain the following result.
(3.28) PROPOSITION. Let N and S be positive numbers. Then for any x,
where the notation is that used above.
224
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
Proof. Let T = inf{t: lBJ > 26). Obviously T is a stopping time. Let fa(u) = 1 if IuI c 6 and fa(u) = 0 if IuI 2 6, and let ga= 1 fa. Then for a fixed x we have
-
IT e - ' d t m
E"(e-T) = E x
m
=Ex T
.m
I E x J, e-' ga(B,) dt
e-'[fa(BI)
+ gd(Br)]dt
.m
+ E x J T e-'fa(Bt) d t .
If By (3.26) the integral involving ga is equal to Ox({u:IuI 2 6)) I t > T and lBJ < 6, then, since lBTl 2 26, one must have 6 c IB, - BTI = IB,- T(eT)I. Therefore EXjTrne-'/a(Bt)dt I E x
IEx[e-T
j, UI
C
e-'ga[B,-T(&)]
dt
1
O X ( T ) ( d ~I ) 26
and so E"(e-T) I 2e-*ly. Now for any N
e - N P"(T I N ) I E"(e-T; T I N ) I 2e-'lY. Finally P"
[ sup IB'I > 2 6 1 < P X I T IN ] OStSN
I 2eNe-'/Y,
proving (3.28), since B, = L: - Lp . (3.29) COROLLARY. Let a be a fixed point in E and suppose that the local time for Xexists at all points b in some neighborhood of a. If limb+,, +,,(a)= 1, then for any N, S U IL: - LpJ ~ approaches ~ zero ~ in ~P" probability ~ ~as b + a for all x . Proof. One first observes that limb-,,,+,,(b) = 1 also. (See Exercise 3.36.) Hence (3.29) is an immediate consequence of (3.28) since y = [I +a(b)@b(a)]l'z-+ 0 as b + a.
We will close this section by applying Proposition 3.28 to obtain a result which is essentially due to Boylan [l]. Again we follow Meyer [8]. We will
3. FINE SUPPORTS AND
LOCAL TIMES
225
assume that E is an interval of the real line R since this seems to be the only case of interest, and for simplicity we will actually assume that E = R-the extension to the more general situation being obvious. Thus we assume that Xis a standard process with state space (R, g(R)) and we further assume that, for each x E R, x is regular for {x} so that the local time L" exists for all x. As above T, is the hitting time of {x} and $,(y) = EY(e-Tx).Finally we assume that there exists a monotone increasing function h on [0, a ]with h I 1, limU$,h(u) = 0, and with Enn[h(2-")]'lZ co and such that for each integer M > 0 there exists a K for which 1 - $,(y) I K h ( ( x - yl) whenever x and y are in [ - M , M I .
-=
(3.30) THEOREM.Under the above assumptions it is possible to choose the local time L" in such a manner that ( t , x) -+ L:(w) is continuous for all w and L" is perfect for all x. Proof. To begin we may assume that for each x, L" is perfect and that t + L:(w) is continuous for all w . Let D be the set of all dyadic rationals. For each from D n [ - M , M ] pair of positive integers ( N , M ) consider the map to C[O, N ] which assigns to each a E D n [ - M , M I the continuous function t 4L:(w) on [0, N ] . Theorem 3.30 will be established once the following statement is proved.
(3.31) For almost all w , @!3M is uniformly continuous on D n [ - M , M I .
Indeed let R, be in 9 with P"(R,) = 1 for all x and such that, for each is uniformly continuous for all N and M . Then for each w E R, , @*:" can be extended to a continuous map from [ - M , M ] to C[O,N ] , and this clearly defines a continuous function L:(w) of the pair ( t , x), t 2 0, x E R. If we set L:(w) = 0 for all ( t , x) if w E R - R,, then ( t , x) + L:(w) is continuous for all w . We next claim that, for each x, L" is a perfect CAF of X. To see this let R , be in 9 with P"(R,) = 1 for all x and such that L:+,(w) = L ~ ( o ) + L ~ ( B , w ) f o r a l l w ~ R , , r , s ~ O , a n d a ~ D . I f0, o ~t hRe ,nnf o r a fixed t , L:(B, w ) = Lp+,(w) - Lp(w) for all a E D and s 2 0, and so 8,w E R, . If w E R, n R , , t , s 2 0, and x are fixed, then we can find a sequence {a") of elements in D such that Lan(w)+ L"(w) and Lan(B,w) + L"(8,w) in C[O, t + s + 1 1 . This clearly implies that ,T;:+s(w)= L:(o) + L;(B,o) and so 1" is a perfect CAF. Finally (3.29) implies that, for each x, Lx and Lx are equivalent, which establishes Theorem 3.30. Thus it remains to prove (3.31). Suppose 1 > (2K)'/' log 2, and let 6, = An[h(2-")]"2. If a = i2-" and b = (i + 1)2-" are in D n [ - M , M I , then y = [ I - $,(b) I + ! I ~ ( ~ ) ]I ' / ~[2Kh(2-")]'12.Now using (3.28) we obtain w E R,,
226
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
I2 eNexp[-An/(2~)'/~].
We next define sup
sup
ILfZ-"
- Ly+')2-"I > 26.).
OsrsN
-MZ"sI<M2"
Then it follows from the estimate we have just made that
-
pn I2M2" 2 eNexp[-h1/(2K)''~].
Consequently x p n < 00 because of our choice of A. It now follows from the Borel-Cantelli lemma that for almost all o there exists an integer v such that sup sup ILr12-" - L('+l)2-" r 1 526, 04tsN
-MZ"SiIiM2"
for all n 2 v. Finally this implies (3.31). Indeed if a, b E D n [ - M , M I and la - bl < 2-", then we may write
+ p12-"-' + p22-"-' + b = bo + q12-"'-' + q22-"-' +
a = a,
where each p i and qj has the value zero or one and only finitely many are different from zero, and where a, and b, are of the form a, = i2-", bo = j2-" withj = i - 1, i, or i 1 . Consequently
+
W
I2 2 6 , + 2 6 , n=m
a,
+ C26, n=m
provided m 2 v, and since 26, < 00 this establishes (3.31). Thus the proof of (3.30) is complete. Exercises
(3.32) Give a proof of Proposition 3.1 1 . [Hint: use (3.9) to show that it suffices to consider the case in which Supp(B) consists of a single point xo . If u: is finite, then f ( x o ) = u ~ ( x , ) / u ~ ( x , )f, ( x ) = 0 if x # x, has the desired properties.] (3.33) Assume (1.3). If A is a CAF of (A', M ) show that Supp(A) is the fine support of A as defined in (1.22).
3.
FINE SUPPORTS AND LOCAL TIMES
227
(3.34) Let L be the local time for X at x, and v be the LCvy measure of the corresponding subordinator Y constructed in Theorem 3.21. Show that v(R+) < 00 if and only if x, is a holding point. [Hint: use the fact that v[(a, 00)] is the expected number of jumps of Y in unit time which exceed a in magnitude. See Blumenthal and Getoor [3, p. 601.1 (3.35) Let x, be a holding point for X. Show that the local time for X at x, exists and that Z = I in this case. See (3.7) for the definitions of I, Z, and J. If X has continuous paths and the local time for X exists at x,, then show that Z = J. Finally observe that I = J if and only if x, is a trap. Thus Theorem 3.8 cannot be improved in general. Lamperti [l] has shown that almost surely Z = J for a wide class of processes with independent increments. (3.36) Let $,(b) = Eb(e-TE).Suppose $a(u)= 1. Show that if {b,} is a sequence of points and $b,(a) + I , then Tb, + Ta OTbn + 0 in P a probability. Conclude that in this situation $,,(b,) --t 1. Here T, is the hitting time of {x}. 0
(3.37) Let X be Brownian motion in R. According to (3.16) of Chapter I1 the local time at x, exists for all x, . Let x, = 0 and T = T{,,). By (3.18) of Chapter 11, E"(e-T) = e-lxl. Use (2.3) of Chapter IV and the explicit form of is a the Gauss kernel to show that u"(0) = (a)-"*. Conclude that { r ( t ) ,Po} stable subordinator of index in this case. See (2.19) of Chapter I. Show that the hypothesis of Theorem 3.30 is satisfied in this case.
+
(3.38) Let A be a CAF of ( X , M ) with a finite a-potential for some fixed a 2 0. Let D be a nearly Bore1 subset of EM which is finely closed in E M . Show that ,Q J @, = @, if and only if D 3 Supp(A). This gives another characterization of Supp(A) when A E A". (3.39) Let x, be a holding point for X and assume that xo is not a trap. Let Q and T be the hitting times of E , - {x,} and {x,}, respectively. Then there exists a constant A,, 0 < A, < 00, such that P 0 ( Q > t ) = e-lor for all t 2 0. See (8.18) of Chapter I. Let .f be the indicator function of {xo} and define B, = Y,f(X,,) du. Let k = (1 + A,)(l - yo) where yo = EXo{e-QE X ( Q ) ( e - T ) } . Show that L , = kB, is the local time for X at x,. Thus in this case the local time at x, is just a constant multiple of the Lebesgue measure of {u I t : Xu = x,}-the actual time X spends at x,. (3.40) Let x, be a fixed point in E and assume that {x,} is not polar and that x, is not regular for {x,}, i.e., {x,} is thin but not polar. Let T = T{,,, and To = 0, T,+, = T,, + T O0," for n 2 0. Prove that T,,+ 00 almost surely. Define A , to be the number of visits to {x,} by X i n the interval [0, t]; that is,
228
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
A , = n if T,, It < T,,,. Show that if @"(x) = EX(e-aT),then U j f ( x ) = f ( x o ) [l - @"(x0)]-' @"(x) for ct 0. Thus we might call an appropriate multiple of A the local time at xo . Show by an example that A need not be natural.
=-
Let X satisfy the assumptions of Theorem 3.30 and also condition (1.3). If t is a reference measure for X , then show that there exists a nonnegative Bore1 function g on R such that for each D E W(R) one has almost surely
(3.41)
for all t . The left side is the " amount of time " the process spepds in D up to time t, and so, for all 1, x --t L: is a continuous density for this "occupation time" with respect to the measure gt for almost all w. [Hint: we may assume without loss of generality that t is finite. Show that L, =J L: t ( d x ) is a CAF of X with a finite one potential. If A , = t A 5, then by computing one potentials and using (2.8) show that A = g L for some g E 9(R)+ . Show that this g has the desired property.] Compute g explicitly if X is Brownian motion in R and t is Lebesgue measure. Let b be a continuous function of bounded variation on [0, a] with b(0) = 0 and letfbe a continuous function on R. Show that J;f[b(t)] db(t) = J ~ @ " ' fdt. ( t ) [Hint: first consider the case in which b is absolutely continuous.] (3.42)
(3.43) Let A be a CAF of X and suppose that t -+ A,(w) is continuous for all w. Let 7 be the inverse of A . Then if T is a stopping time, A[T(T)] = T if 7 ( T ) < co.Show that z [ A ( T ) ]= T almost surely on { X , E Supp(A)}.
4. Balayage of Additive Functionals
Let X be a fixed standard process with state space (E, 8) and let A be a CAF of X . It is an immediate consequence of (3.8) and the remarks preceding it that the "time changed " process J? defined in (2.1 1) " lives " on D = Supp(A) in the sense that almost surely XT(,)E D for all t < = A , . In some sense J? is the process X " sampled " when it is in D . Therefore it is often useful to know whether or not a particular set D is the support of a CAF. If A is in A" and D E 8"and if P i @, = & for some B E A", then it is not unreasonable to hope that Supp(B) = D at least if Supp(A) 3 D. (See (4.8)for the precise statement.) Therefore we will begin by studying the sets D for which such a B exists and the relationship between A and B.
p
4. BALAYAGE OF ADDITIVE FUNCTIONALS
229
Suppose for the moment that A is an NAF of X with a finite potential u, . If D E &" then u = PDuA is obviously a natural potential and so by Theorem 4.22 of Chapter IV there exists a unique NAF, B, of X such that u = u B .We will call B the projection (or buluyuge) of A on D and write A , for B. (We will always write A,(t) or A,(t, o)and never use t as a subscript on A , . ) As explained above, our main interest is in the projection of CAF's and so we will discuss CAF's directly rather than as a specialization of NAF's. Most of the results of this section are due to M. Motoo [2]. Let X be as above with A = 9 and A, = 9, for all t. Let M be a fixed exact M F of X with M , = 0 for t 2 5. As in Section 2, A ( M ) denotes the set of all CAF's of ( X , M ) and A"(M) those elements in A ( M ) having finite a-potentials. (4.1) DEFINITION. Let A E A ( M ) and let D E 8".If there exists a B E A"(M)such that 4 = Q," g ,then B is called the u-projection (or baluyuge) of A on D.
Note that if such a B exists it is necessarily unique. We will denote it by A ; ; in particular, we will write A , for A : . Also by Corollary 3.14 of Chapter IV, A ; exists if and only if pDuiis a regular u - ( X , M ) potential; that is, Q:g is finite and, whenever {T,,}is an increasing sequence of stopping times p , 4 + p, pD~ 1 . with limit T, pTn DEFINITION. A set D E 8" is called M-projective provided that, for each u 2 0 and A E A"(M), A: exists. We say that D is projective if it is Mprojective for M , = Zr0,Jt).
(4.2)
(4.3) PROPOSITION. A set D E &" is M-projective if and only if whenever { T,,}is an increasing sequence of stopping times with limit T we have lim,(T, + TD o 0,") = T TD 0 d, almost surely on {lim,,(Tn T D 0,") < S}.
+
+
0
Proof. Let A E A"(M) and u = p,u;. Clearly u is a finite u - ( X , M ) excessive function, and, as we noted earlier, u = 4 with B E A"(M)if and only if p,,u 4 pdTu whenever {T,,}is an increasing sequence of stopping times with limit T. Now T, + T, o BTn increases to a limit R, and in any event R I T To 0,. If, for the moment, T is any stopping time, then
+
0
1
S
Q;
v ( x ) = & Q"D u ; ( x ) = E x
e-"' d A , .
T+TDoBT
Thus in the present case we have e-"' dA,.
230
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
Consequently if R = T + TD 0 8,. almost surely on {R < S } , then the right side of (4.4) equals 0. Suppose, on the contrary, that for some {T,} we have P"(R < T + TD0 8,, R < S ) > 0 for some x. If we let A, =& M , ds then A E A"(M) provided ci 0, and t + A, is strictly increasing on [0, S ) . Consequently the right side of (4.4) i s strictly positive. This completes the proof of (4.3).
=-
Since S < 5 it follows from (4.3) that if D is projective then D is M-projective for all M. Also note that if we let A, = (t A C), then the proof of (4.3) actually shows that D is projective if and only if A: exists for some ci > 0.
(4.5) PROPOSITION. Let D E 8" and assume that (D - D') n EM is polar where D is the closure of D in E (or Ed).Then D is M-projective. Proof. We will verify that the condition in (4.3) holds. Let T,, T, and R be as in the proof of (4.3) and let R, = T, + TD 0 OTn. One easily sees that { R < T TD e T ,R < S } is contained in
+
0
A = { T, < T for all n, T, 0 OTn + 0, T,
0
OT > 0,T < S } .
But on A, R, T and so X ( R , ) --t X ( T ) almost surely on A. Since X ( R , ) E this implies that X ( T ) E B almost surely on A. Thus if x E EM we have
D
P"(A) < P"{T,, 0 OT > 0,X(T)E b , 0 < T < S ) = E"{PX'T'(T,
> 0); X(T)E D,0 < T < S }
= 0,
since (D- D') n E M is polar. Consequently R = T + TD o OT almost surely on {R < S } and so Proposition 4.5 is established. Let A E A ( M ) and D E 8" and assume that, for some fixed ci2 0, A t exists. We are going to study the relationships among D, Supp(A), and Supp(A:). It is possible for D and Supp(A:) to be disjoint even if D is closed and M-projective and A E A"(M). See Exercise 4.14. However we have the following simple result.
(4.6) PROPOSITION. Let A and D be as above and assume that (D- D') n E M is polar. Then Supp(A:) c D'. Proof. If (D - a)n E M is polar then obviously TD, = TO almost surely on { T , < S } a n d s o Q"Drf=Q"Dfforall f E 8 * , . LetB=A",ndlet u=u;= Q",U;. Suppose that there is a point x E Supp(B) such that x 4 D'. Then P"(TD, > 0) = 1 so that Px(E(TDp)> 0) = 1 and hence u(x) - pD, v(x) = EXJpe - a t dB(t) > 0. Now To. TD 0 OTDr = To. almost surely and so
+
4. BALAYAGE OF ADDITIVE FUNCTIONALS
PDr &f that u = (4.7)
D'
= Q&
. From this and the definition
f for all f E 67:
231
of u it follows
u. This contradiction completes the proof.
PROPOSITION.Let A and D satisfy the conditions above (4.6). Then
n Supp(A) c Supp(A:).
Proof. Lf B = A: and R* = inf{t: B, > 0 } , then u i = PR& = Qi;pD&. Now if x E D' then u:(x) = PDu:(x) = ui(x), and so &(x) = PR.Q:&(x). It follows from this that A[R* + T D o OR,] = 0 and hence R* + TDo 8,. 5 R almost surely P" where, as before, R = inf{t: A , > O } . If, in addition, x E Supp(A), then P X ( R= 0) = 1, and consequently PX(R*= 0) = 1. Therefore x E Supp(B) and (4.7)is established. The following corollary is an immediate consequence of (4.6) and (4.7). (4.8) COROLLARY.Let A and D be as in (4.6) and assume, in addition, that that EM n D' c Supp(A). Then Supp(A:) = D' n EM.
We next study the manner in which A: varies with a. Let Al = & M , ds. Then A is a CAF of ( X , M ) and A E A " ( M )for all a > 0. If M I = Zro,c)(t),then A, = ( t A 0. (4.9) PROPOSITION.Let A E A"(M), D E b", and suppose that A; exists. e-"' d A , . Then fuA E A B ( M ) , Then A$ exists for any P > a. Let fu(x)= E x (faA)$ exists, and A: = A$ + (P - a)(fJ)$.
Proof. Since D is fixed in this proposition we will write A" for A:. Fix ~ $ u:5< 00 and u : ~5 &. < co. Now f , = & - P,u: 5 u: and soyu < 00. Also by (2.3) of Chapter IV we have u i = u$ (fi - c1)V'u: and hence V p f , < co. Therefore (faA) E A p ( M ) .We next compute the Ppotential of A". Using (2.3) of Chapter 1V we have
P > a. Then
+
u$m= U i a =Q
+ (a - p)V'ua,,
L u +~ ( a - fi)V'QLui.
Also a straightforward computation yields
and combining this with the above equality we obtain
232
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
Using (2.3) of Chapter IV once again the above becomes
u{# = Q!u$
(4.10)
+ ( P - ci)QbVa(u> - Q"Du>)
It follows from this that Qtu$ and Q$VVa are regular B - ( X , M ) potentials since u$- is. Thus A$ and (faA)$ exist. Finally (4.10) and the uniqueness theorem yield the conclusion of Proposition 4.9.
In the remainder of this section we will consider only CAF's of X ; that is, we assume that M , = Ir0,[)(t).If D = Supp(A) for some A E A, then D E I" and D = D'. It is natural to ask if any such set D is the support of a CAF. We are unable to answer this question at present. However if, in addition, B - D' is polar, then the answer is in the affirmative. For example if At = t A l then Supp(A) = E, and so it follows from (4.5) and (4.8) that A: exists for each a > 0 and that Supp(A:) = D. Also E x jrDe-'' dAt Ict-'. Consequently (4.9) implies that A$ I A: 5 (P/ct)A$ whenever 0 < ci < p. In particular if (1.3) holds then A$ = gA: and A: = g-'At where g E B and ci/B I g I 1. If D consists of just one point xo , then Ag is a multiple of the local time at xo . For general sets D subject to the above conditions one might call any of the functionals A: a local time on D for X. However, at present there seems to be no uniqueness theorem which would give an intrinsic definition of local time on D. We close this section with the following result which is often useful in discussing the time changed process 8.
PROPOSITION. Let A be a CAF of X and let 8 be the time changed process defined in (2.1 1). Let D = Supp(A) and suppose that D is projective. Then 8 is quasi-left-continuous. Note that by (4.5) the hypothesis on D is certainly satisfied if b - D is polar, where D is the closure of D in E. (4.11)
Proof. Let {p,} be an increasing sequence of stopping times for 8 with limit '? By (2.1 l), T,, = z(p,,) and T = T( p) are stopping times for X . Moreover T, t R I T. If f'< = A, then T < l, and so X , , F ~= , XTn+ X , almost surely on {p< f } . The proof of (4.11) will be complete provided we show that R = T almost surely on { p < [}. In the remainder of the proof we omit the phrase " almost surely " in those places where it is clearly appropriate. Now X(T,,) = X [ T ( ~ , ,E) ]D on {T,, < (} and so T, + To o 8," = T,, on { p < [} since D = D'. Letting n + co and using the fact that D is projective we see that R = R + TD 8, and hence ,'A E D on { $ < t } .Therefore by (3.43), z(AR) = R on { p < t ] .On the other hand A, = limn A(T,,) = limn p,, = 2 and so R = T(?) = T on {f'<[} completing the proof of (4.1 1).
e
0
5. PROCESSES WITH
IDENTICAL HITTING DISTFUBUTIONS
233
REMARK. If (I .3) holds and A is a CAF of X and if D = Supp(A) is closed in E, then according to (2.10), (2.11), (3.8), and (4.11) we may regard 2 as a standard process with state space (D,93(D)).
Exercises The assumptions are those of Proposition 4.9. Show that Supp(A$) = SUPP(A3.
(4.12)
Let A be a CAF of X and let 8 be the corresponding time changed process. (i) If X satisfies (1.3), then there exists a finite measure p such that a set has potential zero relative to 8 if and only if p(r)= 0. [Hint: use (2.21) of Chapter IV to show that AUj is a a-finite measure carried by D = Supp(A) whenever I is a finite reference measure for X . Take p to be a finite measure equivalent to AUi .] (ii) If is a nearly Borel (relative to X ) subset of D show that Px[Tr < 001 = P"[T, < 001 for all x. (iii) Use (ii) to show that a subset of D is nearly Borel relative to X if and only if it is nearly Borel relative to 8. (4.13)
(4.14) Consider the following process: E = (-m, 01 u [l, 00) and starting from any point x 2 1 one translates to the right at unit speed, 0 is a holding point with parameter 1 from which one jumps to 1, and starting from x < 0 one translates to the right at unit speed until reaching zero. Let D = (1). Show D is projective even though D - D' = (1) is not polar. [Hint: use (4.3).] Show that Supp(A:) = (0) for any CI > 0 where A, = t .
Let X be uniform motion to the right on R.Let D = [0, I). Show that D is not projective although D = D'. Show that there is an A E A' such that Supp(A) = D.
(4.15)
Let Xbe as in (4.15). Let D = [0, 1) u [2, 3) andlet A , = yoZD(Xs)ds. Show that the corresponding time changed process 8 is not quasi-leftcontinuous. (4.16)
5. Processes with Identical Hitting Distributions
Let X = (a,A, A , ,X , , 0, ,P") be a standard process with state space (E, a). Let A be a CAF of X which is strictly increasing and finite on [0, () and let T be the inverse of A . If 8 =(Q, 9, Frc,,, Xr(t),Of(,), P") then, according to (2.1 l), 8 is a standard process with state space (E, b), at least if
234
V. PROPERTIES OF CONTlNUOUS FUNCTIONALS
(1.3) holds. Moreover it is easy to see that X and 8 have the same hitting distributions; that is, for all D E &A and x E E A , P,(x, = b,(x, *). The purpose of this section is to prove the converse of this statement. We now formulate this converse precisely. Let X be as above with A! = 9 and A, = 9,. Let 8 =(a,J?,d r 8 , ,, 8,, px)be another standard process with the same state space (E, 8)and with J? = 9,d, = @, . For typographical convenience we will omit the tilde ' L " in those places where it is clearly applicable. For example in place of ,?"{f(8,,); T, c I } we will write simply E x { f ( X T B )T; , < 0.We can now state the main result of this section. a )
-
x
(5.1) THEOREM. Let X and be as above, and suppose that, for each compact subset K of EA, PK(x,.) = pK(x,.) for all x. Then there exists a CAF, A , of X which is strictly increasing and finite on [0, l ) such that if z is the inverse of A and 8 is defined as in the preceding paragraph, then 8 and 8 are equivalent. We will break up the proof of Theorem 5.1 into a number of steps. Roughly speaking, the content of Theorem 5.1 is that if X and 8 have the same hitting distributions (as in the hypothesis of (5.1)), then they have the same "set-theoretic" paths but they move along these paths with different "speeds." First note that Xand 8 must have the same traps; for simplicity we assume that there are no traps except A. Second, it is clear that the hypothesis implies that P,(x, = H,(x, * ) for every Borel set B in EA since the hitting time of B can be approximated by the hitting times of compact subsets of B. Finally if B E &A and x 4 B, then x is regular for B relative to X ( 8 ) if and only if P,(x, .) = E, (H,(x, .) = E,). It follows from this and (4.3) of Chapter I1 that X and induce the same fine topology on E, and so we may speak of the fine topology without confusion. Let G be a Borel subset of E and suppose that G' = E A - G is finely open. Let T ( T ) be the hitting time of G' for X ( 8 ) . Then M , = ILO,T)(t) (fir= ZE0,,)(t)) defines an exact perfect M F of X ( 8 ) vanishing on [l,001 GO]). If E , denotes the set of points not regular for G', then EM = EM = E, and E G is a finely open Borel set contained in G. The fact that E G is a Borel set is not completely obvious but follows from the following assertion. See Exercise 5.30. a )
(re,
(5.1 bis) If B is a finely open Borel set, then x -,P"[TB c t ] is Borel measurable for each t .
Indeed (5.1 bis) implies that x -+ E"(eCT)is Borel measurable and so E ,
=
5.
PROCESSES WITH IDENTICAL HITTING DISTRIBUTIONS
235
{ x : E"(e-T) c I} is Borel measurable. This notation will be used without special mention in the next few paragraphs.
(5.2)
LEMMA. If D is a Borel subset of E , , then for eachfe b€ and x E E E"{f(XT,); To < TI
=E"{f(xT,,);
TD< TI.
Proof. It suffices to prove this when D is compact and x is in EG . Since D is a compact subset of E , and ((3')' = (EG)C,P"[T = TD< co] = 0. Therefore if X E E , and F = D v G'we have P D
f(x-1 = E " { f ( X T , ) ;
T < TD}
+ E"{f(X,,);
= J E G PF(X,d y ) PD f ( Y )
Since X and
TD < TI
+ E"{f(x~,); TD< TI.
8 have the same hitting distributions this yields Lemma 5.2.
We will write ( X , T)and(8, T)inplaceof(X, M)and(8, fi),respectively. It follows from (5.2) and Dynkin's theorem (5.19i) of Chapter I11 that (A', T ) and ( 8 , T)have the same excessive functions; that is, Y ( T )=Y(T). Indeed if YE g(T)then (5.2) and Dynkin's theorem imply that c r V y l f for all c1 > 0 where V" is the resolvent corresponding to ( X , T ) . In addition f is finely continuous on E , and so f E Y ( T ) . A set G as above (C E € and C' finely open) is called an exit set provided P"(T < 00) = I for all x . Since p x ( T < co) = PGc(x,E,) it follows that whenever G is an exit set P " ( T < co) = 1 for all x also. We now assume that G will denote the semiis an exit set. In what follows, Q,(O,) and V"(p) group and resolvent associated with ( X , T ) ((8,T)),respectively. We next define u(x> = Ex(l - e - T ) , y ( x ) = 1 - u(x) = Ex(e-T). (5.3) It is immediate that 0 < u < 1 on E , , and u E Y('f).Consequently u is ( X , T ) excessive. We come now to the key step in our construction.
There exists a CAF, A , of (A', T ) such that u = u A . (5.4) PROPOSITION. Proof. According to Theorem 3.13 of Chapter IV it suffices to show that u is a regular (A', T ) potential. It is clear from the definition that u is a regular (8,T)potential. In fact a simple computation shows that u = Vg. Suppose for the moment that the following statement has been established. (5.5) For each fixed x E E , there exists an increasing sequence {u,} of regular ( X , T ) potentials such that lim u,.(x) = u ( x ) and, for each n, u - u, E Y ( T ) .
236
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
Then u must be a regular ( X , T )potential. Indeed let {R,} be an increasing sequence of stopping times with limit R and let x be fixed. Let {u,} be the sequence whose existence is assumed in ( 5 . 5 ) for this fixed x. Then &,uk -+ Q R u k as n -+ 03 and Q R , u(x) - Q R n u k ( x ) Iu(x) - u k ( x ) -+ 0 as k -+ 03, uniformly in n. Therefore lim QR,u(x) n
= lim
lim Q R n U k ( X )
n
k
= lirn QR u k ( x ) IQR u(x). k
But limn Q R , u(x) 2
QR
u(x), and so to prove (5.4) it suffices to establish (5.5).
Let x be fixed and let p be the measure p ( r ) = VIr g(x) = E"
I
Zr(X,) g ( X J dt.
Then p ( E ) = p(E,) = u(x) I1 . Let cp,(x) = P"(T < t ) and @r(x)=P""(T< t ) . Then cpl and GI decrease to zero pointwise on E , as t 10, and so by Egorov's theorem we can find an increasing sequence { D , } of compact subsets of E , such that, on each D , , cpI and @, approach zero uniformly as t 10 and p ( E , - D,,) 1 0. Define u,, = VzDng. Then u, 5 u and u - unE g(T) = Y(T). If F = D , , then p ( E , - F ) = 0 and so un(x)t VZFg(x)= vg(x) = u(x) by the definition of p. Thus ( 5 . 5 ) will be established if we can show that each 11, is a regular ( X , T ) potential. Fix n and let D = D , , h = u,. Thus h = VIDg I1 where D is compact, and cpl and approach zero uniformly on D . We must show that h is a regular ( X , T ) potential. It is, of course, clear that h is a regular (X,T)potential. First observe that h = Q",h = Q D h . Let h, = n j;ln Q l h dt. Then according to (3.4) of Chapter IV each h, E Y ( T ) and h, 7 h. Next fix E > 0 and let R,(8,) be the hitting time by X ( x ) of Bn = { x : h(x) - hn(x)2 E } , and let R = lim R , , = lirn 8,. Finally let S, = R, To 0 O R n , $,= 8, TD0 OR, and S = lirn S, , g = lirn 3,,. Now given q > 0 there exists t > 0 such that Py(T < t ) = cpl(y) < r] for all y E D, and so for any z E E ,
u
+
+
P z [ S , < T for all n , S = T ] 5 lirn P'[T - S , < t , S , < 7'1 n
= Iim E ' { P ~ ( ' ~ ' ( T < t ) ; S, < T}< r] n
since X(S,) E D almost surely on {S,,< T } . Therefore P'(S, < T for all n) = P'(S < T ) for all z E E , (and hence in E ) . This argument of course yields the same statement with P' replaced by P"'. It now follows from the quasi-leftcontinuity of X and that, for each z, Qs,(z, .) -+ Qs(z, *), &(z, *)-+ &(z, -) weakly as n 00. But Qsn = Q R n Q D = Q"R,Q"~ = 0 s. and so Qs =
5.
PROCESSES WITH IDENTICAL HITTING DISTRIBUTIONS
237
os.
Consequently QRnh= Qs,h = Qsnh --t osh = Qsh and since S 2 R we obtain limn QRnhI QRh. On the other hand limn QRmh2 QRh and so we have QRnh+ QRh. Also for any z, Qt h(z) I P Z ( f< T ) -t 0 as t -t co since P'(T < GO) = 1 . Therefore (3.6) and (3.8) of Chapter IV imply that h is a regular (A', T ) potential, and so (5.5) and hence (5.4) are proved. (5.6) LEMMA.Let A be the CAF from (5.4) and let D be a Bore1 subset of E G . If R = To A T, = To A 4: then E"(A,) = EX{e-,(eR - 1)).
a
Proof. Since E i = (G'y, T - R = T 0 O R almost surely, and so, by (1.13) of Chapter IV, A , = A , A , 0 8, almost surely on {R T } . In addition A , 0 O R = 0 almost surely on { R = T } . Therefore
-=
+
E"(A,) = u ( X ) - E"{U(X,)} = U ( X ) - J ! ? { U ( X , ) } = J!?{1
-e-T
- 1 + e-(T-R)
= EX{e-,(eR -
1
I)}.
LEMMA.Let A be as in (5.4). Then almost surely t - t A , is strictly increasing on [0, T ) .
(5.7)
Proof. Obviously it suffices to show that Supp(A) = E,. Let F = Supp(A) and suppose that x E E , - F. If R = T A T F ,a = T A TF,then since E , - F is finely open P(R > 0) = 1. But A , = 0 and so we obtain from (5.6)
o = E"(A,)
= J!?{e-T(eR - 1)) > 0,
and this contradiction establishes (5.7). The next result is the fundamental property of A for our construction. PROPOSITION.Let A be as in (5.4) and IetfE bb*. Then U, f = r(fg) where g is defined in (5.3).
(5.8)
Proof. I f f = 1 we have already seen that u, = u = rg I 1. Consequently it suffices to establish (5.8) when f is continuous and has compact support. Given such an f and E > 0 define T, = inf{t: If(X,) -f ( X J 2 E } and then let To = 0, T,,, = T,, T, 8,- for n 2 0. Let T, and T,,be defined similarly relative to 3. Then arguing as in the proof of Theorem 2.13 of Chapter IV we find that
,
uAf(x)
+
=
0
EX{f(XT,)
where IL(x)l I Ilfll, and similarly
[u(xTn)
- QT,
+
u(xTn)]} &
L(x)
238
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
p.fg(x) =
EX{.f(XTn) [u(xT.)
- @Tc
u(xTn)l}
+
z(x)
I~(Y)
Ilfll. If Bx = { Y : -f(x)l 2 then QTE(x,'1 = QB,(x, *) .) = QTc(x, and so, since E is arbitrary, the proof will be complete provided we show that p,,= jZ,, where p,,(jZ,,) is the distribution of X(T,,) where IE(x)l =
QB&,
a),
(r?(T,))
under Px(p'x). We have just observed that this is true when n = 1, while, for any E &, p,,+,(r)= j ,u,,(dz) QTS(z,r) with a similar statement for Pn+,. As a result p,,= jZ,, for all n and so (5.8) is established.
So
(5.9) LEMMA. Let A be as in (5.4) and define B, = [g(X,)]-' dA,. Then BT is almost surely finite and B is a CAF of ( X , T ) which is strictly increasing on [0, T ) . REMARK.By (5.8), U , j =
rffor allfE &*, .
Proof. The last two assertions follow immediately from the first, (5.7), and the fact that g < I on E G . In order to show that B T is almost surely finite it suffices to show that almost surely t +g(X,) is bounded away from zero on [0, T ) . To this end let T,, be the hitting time of {x: g(x) < l/n}, R = lim T,,, and let 6 = P"[T,,< T for all n] where x is a fixed point in EG . Since u is a regular ( X , T ) potential, QT,,u+ Q,u. But u = 1 - g and so Q T , u(x) 2 (1 - l/n) P"(T,, < T ) . Letting n + co this yields
6 I E"{u(X,); R < T } I P"(R < T ) < 6 , and consequently 6 = P"(R < T ) . Therefore E"{u(X,); R < T } = P"(R < T ) and since u is strictly less than one on E G this implies that 6 = P"(R < 7')= 0. Thus (5.9) is established. We now interrupt our main development in order to prove the following uniqueness result which we need. (5.10) PROPOSITION. Let {R"; u 2 0 } and { W " ;u 2 0} be two families of nonnegative bounded linear operators on b&*, each of which satisfies the resolvent equation. If W o = Ro then W" = R" for all u 2 0. Proof. One sees easily that 11 W"II I 11 Wall and llR"ll I IIRoll for all u 2 0. Let M = 11 Wo(I= IIROII. For any u and p, R"[I - (p - .)RBI = RB and so m
R" =
C ( p - u)"(R')"+' n=O
provided 10 - U J < M -'. A similar statement holds for p = 0, i A 4 - 1 , M - ' , , . . we obtain Proposition 5.10.
W" and so letting
5. PROCESSES WITH IDENTICAL HITTING DISTRIBUTIONS
239
It is worthwhile to point out what we have achieved so far. Suppose that not only is P“(T < 00) = 1 for all x, but that actually ,f?(T) is a bounded function of x. Then it is clear that E x j c f ( X , ) dB, = v f ( x ) for all x and positive f, where B is the CAF of ( X , T ) from Lemma 5.9. Let t be the inverse of B. Since B, is strictly increasing and finite on [0, T ) , t, is continuous and strictly increasing on [0, BT),T, = a if t 2 B,, and t, increases to T strictly from below as t increases to B,. Of course if these statements are to hold for all w we must assume that t + B,(w) is continuous and strictly increasing on [0, T ( w ) ) for all w, and this we may do. Now it follows from (2.3) that 8, = (a,9, Sr(,), Or(,, ,P ” ) may be regarded as a Markov process with right continuous paths and state space ( E , , where 82.is the trace of b* on E,. We claim that this process and T) are equivalent; that is, for all x E E,, t 2 0, and r E b*
&a),
(x,
P { x [ T ( ~ ) ] E r}= P [ x ,E r; t < T I .
jr
Indeed, if we define W a f ( x )= E x e - ” f ( X r ( , ) ) dt then by the right continuity of the processes it suffices to show that Waf= p f f o r all a and all continuousfwith compact support. Now it follows from (2.22) of Chapter IV or is easily checked in the present case that { W “ ;a 2 0} satisfies the resolvent equation. But W o f ( x )= E x j ; f f ( X , )dB, and this is equal to pf(x). Consequently by (5.10) W y = p f f o r all a. Note that this establishes Theorem 5.1 in the special case in which b ( c ) is bounded in x, for in this case we may take G = E so that T = [ and T = 1. In the general case according to the preceding paragraph we know that “locally” (that is, on any exit set for which E”(T) is bounded) r? and X run with the appropriate “clock” are equivalent. It remains to “piece together” these local results in order to obtain Theorem 5.1 in the general case. The procedure for this “piecing together” is obvious enough in outline, but the details are rather involved. We will carry out the complete argument because some of the techniques are of interest and may be useful in other contexts. To start with we need a compatability result. Let GI and G2 be exit sets, Tl and T2 the hitting times of (3; and G; ,and B’ and B the CAF’s of ( X , T I ) and ( X , T2), respectively, defined in (5.9). The resolvent corresponding to Tj)will be denoted by j = 1,2.
(x,
e,
(5.11) PROPOSITION. Almost surely B:
= B:
on the interval [0, TI A T2].
Proof. Since GI n G2is an exit set and TI A T2is the hittingtime of (GI n G2>. it will suffice to treat the case in which GI c G 2 , so that Tl I T 2 . Define B, = B: if t I Tl and B, = B;, if t > T I .Then B i s a CAF of ( X , Tl), and we wish to show that B and B’ are equivalent. Since g = g,g2 is strictly positive
240
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
on E,, and UBlg Iv l g l c CO it will suffice to show that U B l g= UBg, because of the uniqueness theorem for CAF's. We compute uBZg(x)
=
uBg(x)
= uBg(x)
so that U , g = v 2 g - PTIv 2 g =
Tz
+
g(xt) dB: TI
+ pT,
uBzg(x)
vlg = UBlg, establishing the proposition.
We need some discussion of hitting times before proceeding. The arguments used here have appeared many times already, so we will not separate off these facts as propositions. We will call a set G c E a strong exit set (for 8 )if G is a Borel set, G' is finely open, and l?(Tc.) is bounded in x . Let x be any point of E. By assumption 8 has no traps and so there is an open set N with compact closure such that x E N and pX(T,, < co) > 0. Since P is open it follows that u(x) = j: e-' p , ( x , p)dt > 0. The function u is Borel measurable. Given E > 0 let G = N n { u 2 E } . Then G'= P u { u < E } which is Borel and finely open. In addition G is a strong exit set. Indeed if u(z) 2 E and b is such that e - b < ,512, then E
I u(z) Ii3'(TNrIb )
+ e-b
and so p ( T c c 5 b) 2 P"'(T,, I b) 2 812. Then for any z p { T G e> ( n + l)b} = ~ { ~ X ( n b ) > ( Tb); G T,, c > nb} I
P ( T ~>=nb){ 1 - ~ / 2 )
because ~(2,~) 2 E if TCe> nb and nb c 1. So for every z , P(TGe) I b C n(1 - 42)" + b, proving the assertion. Compare this with (10.25) of Chapter I. Now let { N i } be a countable base for the topology of E consisting of open sets with compact closures. If ui(x) = 0 'I,;(x) = Jg e-' pf(x, &)dt and W i j= N i n { u i 2 lo}, then what we have just said implies that each W i jis a strong exit set and furthermore W i j= E. Next let G be any exit set and let p ( x ) = Ex(e-Tcc).If q < 1 define K , = {x: p ( x ) < q}. Then each K,, is a finely open Borel set and E , = K , - l,n. Fix q < 1 and define
ui,j
u,,
To = O T2n+1 = T 2 n + TGc T2n+2 = T2n+1
0
e,,,,
+ TK,
0
O T ~ " +.
The sequence {T,,}of stopping times is increasing, and for any x and n 2 1
5. PROCESSES WITH IDENTICAL HITTING DISTRIBUTIONS
241
E X ( e - T 2 n + 1T~~ ; < 5 ) IEX{e-T2nq(xTZn); T~~< C} Ir] E x { e - T 2 n TZn ; <
c}
because (p(XTZn)5 '1 if T,, < 5 and n 2 1. It follows that lim Tn= 00 almost surely (relative to X ) . We will use these notions in the next proposition.
(5.12) PROPOSITION. Let G be an exit set, let T = T G c and , let A be a CAF of ( X , T ) such that E"A(T) is bounded in x. Let K = {x: E x ( e - T ) < q } for a fixed q < 1. Then there is a CAF 2 of X such that
for every x and every f E a*,which vanishes off K . Proof. Let
so that B is a CAF of ( X , T ) . Define
where the sequence {Tn}is the one constructed in the paragraph before (5.12). Note that $ is bounded and that z ( x ) 5111)Il C:=OE X ( e - T 2,nT*2 n < 0I Il$ll(l + r]"). Thus z is also bounded. The main part of the proof is in showing that z is a regular 1-potential of .'A We will break up the argument into steps. Also we will omit the phrase " almost surely " where it is clearly applicable. (a) z
=Pkz.
Proof. We will write R in place of TK. Now P ~ Z ( X )= E x { e - R z(XR)}
-
Cm EX{e-(T2no@~+R)
$(x(T2n
n=O
OR
+ R))}*
Let us break up each summand into an integral over { R < T l } and one over {R 2 T I } Now . observe that for n 2 1, T2, 0 0, R = TZn on { R < T,} while R = T2(n+l) on { R 2 T I } .If we use these facts we find for n 2 0, T,,, 0 0, that
+
+
242
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
P ~ z ( x=) E"(e-R $(XR); R < Tl) +
m
C EX(e-TZn $(XTz,,)), n= 1
and so Z(X)
-P i Z(X)
= $(x)
- E"(e-R $(-XR); R < 7'1).
This last difference is 0 because R = TK ,Tl = T, and B vanishes off K . (b) If J is any Bore1 set, then PiuJ z
= z.
Proof. Since K is finely open we have R = R 0 OTKUJ + TKuJ, where as above R = TK . Consequently using (a) P i u J z = PkuJP i z = P i z = z . (c) If J is any compact set, then P i z I z. Proof. Let S = TJ + R 6 T J where R = TK. Then X ( S ) E K u K' on { S < oo}, and so if S < oo then T2kI S < T2k+lfor some k. Now from the definition of z it is immediate that 0
(5.13)
But S
Z(X)
+ T,,
0
+ EX{e-Tzz(XT,)}.
= $(x)
6, = T,, on {S < Tl} for n 2 1 and { S < Tl} E PTZ. Therefore
E " { e - S z ( ~ s s) ;< T ~ } = E"{e-S
+(xS); s < T ~+} 1 e-Tzn $(xTZn); s < T1) EX
= E"{e+ $ ( x S ) ;
('p
s < T ~+}E X { e - T zz(x,,);s < T ~ } ,
and using (5.13) and the fact that $ E Y ' ( T ) we obtain (5.14)
+
E"{e+ z(x,);S < T ~ } EX{e-Tzz(x,,); s 2 T ~ } I $(x)
+ EX{e-Tzz ( x , , ) }
= z(x).
We make the induction hypothesis (5.15)
Z(X)
2 E"{e-S z ( X S ) ; S < T2,}
+ EX{e-Tznz ( X T Z n )S; 2 T,,}
which, by (5.14), is valid when n = I since S lies in some interval [TZk,T2k+,).The second summand on the right in (5.15) may be written
+ Ex{e-T2n z(xTZn); s > T,,}. that T,, + S OT2, = S on {T,, < S}
E"{e-S z ( x , ) ; S = T,,} Now using (5.14) and the fact obtain
a
we
5.
PROCESSES WITH IDENTICAL HITTING DISTRIBUTIONS
243
Proof. In view of (c) and Dynkin's theorem ((5.3) of Chapter 11) it will suffice to show that lim inf, P: z(x) 2 z(x) for all x . Suppose x is not regular for K. Then almost surely P", t + R 0 8, = R for t sufficiently small, and since z = P i z this yields
lim P: z ( x ) = lim EX{e-('+Raer) Z(xr+R*eJ} 140
1-0
= E"{e-R z ( x , ) } = P; Z ( X )= z(x).
Suppose on the other hand that x is regular for K. Then in particular P"(t
P:
z ( x ) 2 EX{e-' z ( X , ) ; t = E'{(e-'$(X,); t
< TI} < Tl} + EX{eCT2z(X,,);
Since I) is 1 - (A', T ) excessive this approaches $ ( x ) z(x) as f + 0. The proof of (d) is complete.
t
< Tl}.
+ E X { e - T 2z ( X T 2 ) }=
(e) z is a regular I-potential. Proof. We must show that if {S,,} is any increasing sequence of stopping times with limit S, then Pj,z + Piz. A glance at the development in Section 3 of Chapter 1V shows that we need consider only the case in which S,, = TB, where {B,,} is a decreasing sequence of sets in 8".In particular each S,, will then be a strong terminal time and hence so will S. Also in checking Pi, z ( x ) +Pi z(x) we may assume that Px(Sn> 0) = 1, for if S,, = 0 almost surely P" for all n there is nothing to prove. Now fix x and let Umk
=E"{e-Snz(Xs,,); Tk < S,, 5
&+I}
uk = E X { e - Sz ( X s ) ; Tk < S I Tk+1)
244
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
SO that P i n Z ( X ) = c k a,,, and P i Z ( X ) = E k a,. It Will Suffice to show that, for each k,ank+ak a s n + c c because x k r N a n k < llzll E X ( e - T N ) + Oas N-,co. Suppose first that k is even, say k = 2j. Now if Q is any strong terminal time, then on {T2j< Q IT 2 j + l }we have Q = T Z j+ Q o OT,, and Q + T, BQ = T 2 j + 2and , so using z ( x ) = $(x) + E X ( e - T 2z(X,,)) we obtain for any such Q 0
(5.17)
EX{e-Qz(X,); T Z j< Q IT2j+l} = Ex{e-Q$ ( X Q ) ; T2j < Q , Q
0
4-,, IT b,,} 0
+ EX{e-QEX(Q)(e-Tz z ( X T z ) ) ;T2j< Q IT2j+ - EX{e-T2' EX(T2j)(e-Q + ( X Q ) ; Q IT ) ; T2, < Q } + E X { e - T z I + z z ( X T , , + , ) ; T2j < Q IT2j+1}*
Now in (5.17) we may replace Q by either S,, or S. Observe that the set {T2j< S,,} approaches { T 2 j< S} and that {T2j< S,, I T 2 j + l }approaches {T2j< S IT 2 j + l as } n + 00. Furthermore for any y we have
EY(e-,"$ ( X S n ) ;S,, IT ) = Ey(e-," $(Xs,); S , < T ) + EY(e-'
$(X,); S c T )
= EY(e -'t+b(X,>;
S 5 T),
because S,, + S and $, being the 1-potential of a CAF of (X, T ) , is a regular (X, T) potential. It follows that + as n + 00. Next consider the case in which k = 2j + 1. Here we use the fact that z = PLz to obtain
+
= E"{e- ( S n + 7 K o e s ~ ' z ( X ( S , ,TK
8,")); T 2 j + l< S,,
T2j+2} with the same expression for a2j+l upon replacing S,, with S. NOW on { T 2 j +< 1 S,, IT 2 j + 2n } < S IT 2 j + 2 } we have S,, TK 8,,, = S + TKo 8,. From this and the fact that S,, t S it is immediate that + a 2 j + , as n + co. This establishes (e). an,2j+ I
0
+
0
Finally we can complete the proof of (5.12). Since z is a regular I-potential there is, according to (3.14) of Chapter IV a CAF 2 of X such that m
z(x)=E*!
0
e-'dA,.
Now D, = r o h T d & defines a CAF of ( X , T ) and E x J ; e - ' d D , = z ( x ) - E x { e - T 1~ ( x , , )But } . $ vanishes off E , , and using (a) this last expression is z ( x ) - E X { e - T zz(T2)}= $(x). Hence D and B are equivalent, and so E Xj ; f ( X , ) dB, = E X j t f ( X , ) d 2 , for all x and f E S*,. This completes the proof of (5.12) because j ; f ( X , ) dB, = j r f ( X , ) dA, iffvanishes off K . We will now specialize to the case in which G is a strong exit set (for 8)
5.
PROCESSES WITH IDENTICAL HITTING DISTRIBUTIONS
245
and A is the finite, continuous, strictly increasing on [0, T,,], CAF of ( X , TGE) satisfying E x J,'c'f(X,) dA, = p f ( x ) for all x andfE S*, . With this A , pick a K as in (5.12) and let A be the CAF of (5.12) going with this A and K . NOW let H be a strong exit set (for and let C , be the CAF of ( X , T H f )which satisfies
x)
E x joTH>(X,)d C , = P , jT">(X,) 0 dt.
(5.18) PROPOSITION.E x ~ ~ H c f (dC, X,= ) E x I r H c f ( X , dA", ) for all x and all positive f'vanishing off K . Proof. There is no loss of generality in assuming that f vanishes off H as well. According to (5.1 1) we have fof(Xs) dC, = fof(Xs) dA, on [0, TI where T is the hitting time of H' u C". Let R be the hitting time of ( K n H ) u H' and let To = 0
G n + 1 = T2n + T OO T ~ , , T2n+2 =T2n+l + R O ~ T ~ , , + I * Clearly To 5 TI I . . . ,J z ; : : f ( X , ) dC, = 0 for all n, and furthermore lim Tn= THc because for ii 2 1, T2n+l= T2, TGc0 OTZn on { T Z n +< , THc} and so ~ x ( '~h n-t 1 , T2n+l< THE) I q EX(e-T2n;T2, < THc)
+
1
where q < 1 is the constant appearing in the definition of K ; that is, E"{e-TG'} < for all x E K . As a result
JO
This completes the proof of (5.18).
246
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
Given a strong exit set G we will write A‘ for the CAF of ( X , TGC) satisfying (5.19)
E x j T0 c c j ( X f )dAf = E x j o T G c j ( X ,dt )
for all x a n d f r 0. The next proposition gives the desired synthesis of the various A‘. There is a CAF, A , of X such that for every strong (5.20) PROPOSITION. exit set G, every x, and everyfe dPf E x j 0T G c j ( X , )dAf = E x / 0T c c f ( X , )d A , .
Furthermore A is strictly increasing and finite almost surely on [0, [). Proof. Let { ( K , ,G i ) ;i 2 l} be a family of pairs of sets, each of which satisfies these conditions: (1) G i is a strong exit set; (2) K , is a finely open Bore1 set, K , c G i ,and for some q i < 1, < q i for every x E K i . Also assume that U K , = E. We have seen that such a family always exists. Let be a CAF of X such that E x j?Tf’(X,) dAFi = E x j p f f ( X , ) for all x and all positivefvanishing off K i . The existence of such d i is the content of (5.12). Now disjoint the K i : J , = K , , J i = K i - Uj:: K j , and define the CAF‘s, A’, by Af = / ‘ I J i ( X s )da:. 0
Finally, set A, =
1Af i
so that A satisfies the measurability and additivity conditions of an A F of X . It is also clear t + A , is everywhere left continuous and is right continuous at to if A,,,, < 00 for some E > 0. Moreover A is constant on [[, 001. Letf’be a function in S*,and let G be any strong exit set. Thenf= C f l J i, f I J , vanishes off K , , and so by (5.18) (5.21)
= Ex
J
~
0
f ( X , )dA,.
5.
PROCESSES WITH IDENTICAL HITTING DISTRIBUTIONS
247
It follows from (5.21) that A ( T p ) = Ac(TG,) < ca whenever G is a strong exit set and so lim,,o A, = 0. If S = inf{t: A,+ > A,}, then by the above discussion A s < a and A s + , = ca for all E > 0 when S < a. But A s + , = A, + A, 0 Os and this leads to a contradiction unless S = a.Thus t + A, is everywhere continuous. It is now clear that A, = A: on [0, T G o ] for any strong exit set G, and this certainly implies that t A, is strictly increasing on [0, R) where R = inf{I : A ,= a}. Of course in this discussion we have been omitting the qualifying phrase " almost surely." In order to complete the proof of Proposition 5.20 it will suffice to show that R 2 c. This requires an argument that will also be needed later on and so we formulate it as a lemma. Coming to this, let G be any open subset of E, L = G', and G, = G n {@,2 E}where &(x) = EX(e-TL).Then G: is a finely open Bore1 set by (5.1 bis). If b is so large that e - b < ~ / 2 then , P'(T, 5 b) 2 E - e - b > ~ / whenever 2 z E G, , and so it follows from the familiar iteration estimate that G, is a strong exit set for each E, 0 < E < 1. Now let T,,(T,) be the hitting time of Gi,, = L u {@,, < l/n} by X(x). Then T,(T,) increases to a limit T(T).If Q(&)denotes the hitting time of L u { @,, = 0)by X ( 8 ) , then obviously T 5 Q and T I &. We can now state the fact which we require. We continue to omit the phrase " almost surely." -+
(5.22)
LEMMA. T = Q and ?' = &.
Before coming to the proof of (5.22) let us use it to complete the proof of (5.20). First observe that if (pL(x)= EX(e-TL),then because of the identity of the hitting distributions { q L= 0) = {@,, = 0). Now for each n, GI,,is a strong exit set and hence by what we have already established A(T,) is finite. Consequently t -+ A, is finite on [0, T). Let R, = inf{t: A, 2 n}. Since t -+ A, is continuous, A(R,) = n and {R,} increases to R strictly from below. Suppose that for some x, P"(R < c) > 0. Then by an argument similar to that which follows the proof of (5.11) we can find an open set G c E such that P " [ X , E G n {q,,> O}] > 0 where as above L = G'. Lemma 5.22 and the ensuing discussion applies then to this open set G. Since XRn+ X , on { R < 5) we must have X , E G for all t in [R,, R ] provided n is sufficiently large. If (pL(y)= 0, then Py(TL= ca) = 1 and so Py[(pL(X,)> 01 = Py[TL0 8, < a ]= 0 for all t . Hence if S is the hitting time of {q, = 0}, then (pL(Xs+,)= 0 on {S< a}. Consequently R < S if (pL(XR)> 0 and R < 3 and SO X , E G n {q,, > 0} for all r in [R, ,R ] for all sufficiently large n provided X , E G n { (p, > 0).Therefore there exists an n such that P"[R R, + Q 0 ORn] > 0 where Q is the hitting time of L u { ( p L = O}. Hence E"{PX(Rn)(R < Q)} > 0, but this is a contradiction since t -+ A, is finite on [0, T) which, by (5.22), is the same as [0, Q). Thus the proof of (5.20) will be complete as soon as we establish Lemma 5.22.
-=
248
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
Proof of (5.22). We will first show that T = &. Since L is open, Q and & are also the hitting times of L' u {qL= 0) = L' u { q L= 0) = { q L= 0 or l } = {qL= 0 or 1) by X and 8,respectively. If 8(Tn)EL' u { Q L = 0} for some n, then Tm= T = T,= 0 for all m 2 n. On the other hand if 8(Tn)$ L' then I l/n. Since T+ TL0 OT 2 T,+ ' 7 0 8," we have for any x
&(zTn)
Ex{e-T ~ L ( x T ) ; xTn $ L'} = Ex{e-(T+TLoeT)* > xTn 4 1
I E x { e - T " ~ L ( X T , ) ; X T , $ ~ }I-.
n
Now letting n + 00 we see that &(8,)= 0 on {Tc 00, XTn4 L' for all n}. Thus T = almost surely f'" for any x. We come now to the proof that Q = T. Recall that L' = {(pL = 1) = {qL= I}. Fix6 and r] such that 0 c 6 c r] < 1 and let D = { x : 6 I&(x) I q } . Let S ( s ) be the hitting time of D u L' u {& = 0} by X ( 8 ) . We first claim that for all x
Ex{HX(Tn)[XS E D ;Q < CO]}
(5.23)
+0
as n + co. As above if qL(y)= 0 then qL(Rt)= 0 for all t almost surely Py. As a result 2, is regular for L' u {qL= 0). In particular & + 0 0, = (z. Therefore we can write
s
Ex{HX(Tn)[XS E D ;Q < co]} = Ex{Px(Tn)[Xs E D ;Q
5 P{X(T,
+S
o
< CO] ; T, < Q )
ern)E D ;Q
< OO}.
But t + e-' &(f,) is a bounded right continuous supermartingale with respect to f'" for each x and so t + qL(8,)has left-hand limits on [0, 00). Now T, OF,, < 0 if X(T, 0 &,) E D and so the last displayed probability does not exceed
+s
0
+s
P"{S I q L ( X t )Ir] for some t E [T,, Q), T, < Q < co}, and this must approach zero as n 00 because T,+ 0, Q L ( f T n )I I/n, and t + &((R,) has a left-hand limit at & when & < co. Thus (5.23) is established. Next observe that since 3 0 0 & = (z ( S Q 0 OS = Q) one has
+
P y [ X s E D ;Q < CO] =
+
S P~(x,d z ) P ~ ( zEd) , D
for each y. Now using the fact that X and r? have the same hitting distributions it follows from (5.23) (since Q, S, T, are all hitting times) that
5. (5.24)
PROCESSES WITH IDENTICAL HITTING DISTRIBUTIONS
249
lim E"{PX(T"'[X,E D , Q < a]}= 0 n
for all x. Now suppose that for some x, Px{O < (PL(XT)< 1) > 0. Since 0 < (pL < 1 if and only if 0 c g L < 1 there must then exist 6 and q, 0 < 6 < q < 1, such that if D = (6 I Iq } , then P"{XT E D } > 0. Consequently E x { ~ - Qx;T 01 p { e - v + T r o e T ) . X T E D )
eL
5
= E X { e - T(PL(XT);XT
ED}
> 0,
since T < co if X , E D ;and so P"[Q < co ; XT E D ] > 0. Finally observe that if l/n < 6 then Tn< T when X,. E D and hence for such n we have P"[Q < 00; X T E D ] = P x { Q < 00, X T E D , Tn< T < Q } < Px{Q 0
oTn< CO, s eT, < Q 0
< E"{PX(Tn'[S< Q ; Q
- ~ x { p X ( T [d X -
0
OT,,; Tn< T }
,~D;Q
as n + 00 by (5.24). This is a contradiction and so ( P L ( X T ) = 0 or 1 almost surely and this implies that Q IT. Thus Lemma 5.22 and consequently Proposition 5.20 are established. We are finally ready to complete the proof of Theorem 5.1. The CAF, A , is the one we have just constructed in Proposition 5.20 and z is the inverse of A . The process 2 is of course the "time changed " process 2,= Xr(f).
(5.25)
PROPOSITION.
Let G be a strong exit set and T = T G c .Then for
f E bb*, ct > 0, and x E E
E " { e - a A ( T ' f ( X T )= } Ex{e-aTf(XT)>. Proof. Let x be fixed and then denote the left and right sides of the desired equality by L ( f ) and R(f), respectively. Clearly it suffices to show that L(f) = R ( f ) for f~ C,. For such an f, t + P T f ( X r )is right continuous by (4.14) of Chapter I1 and so the following manipulations are easily justified:
1 Jb
T
p T f ( x )- ~ ( f=)aE"
e-aA(r)f(XT)d ~ ,
0
T
=c t ~ "
e-aA(t) P , . ~ ( X Jd ~ ,
=uW"pTf(x)
where, as in the discussion preceding (5.1 l), { W";c1 2 0) is the resolvent of
250
V. PROPERTIES OF CONTINUOUS FUNCTIONALS
the process Xt(r)terminated when it first leaves G; that is, W"f ( x ) = E x e-aA(t) f ( X , ) dA Similarly P, f (x) - R(f ) = ci ppTf (x) where { p} is the resolvent of (8,T).Now P , = H, by the identity of the hitting distributions, while the identity of W" and p has been established in the discussion preceding (5.11). Consequently L ( f ) = R(f), and the proof of (5.25) is complete.
REMARK.An important consequence of (5.25) is that the joint distribution of ( A T , X T ) under P x is the same as the joint distribution of (T x p ) under p". Also the equality T
(5.26)
T
e-"'f(X,) dt
E x Jo e-"Atf(X,)dA, = Ex 0
is simply the equality W" =
which we established prior to (5.11).
Proof of Theorem 5.1. We must show that for all positive f XEE m
(5.27)
Ex
joe-"'f(Xl(,))dt =?!,
1 e-"'f(X,)
E
t*, ci > 0, and
m
dt.
0
Of course the left side of (5.27) is simply
and we will use the latter expression instead. Let G be a strong exit set and K a finely open Bore1 subset of G such that, for some r] < 1, Ex(e-Tc') < r] and EX(e-TGc) < 1 for all x E K. Let SK(x)= E X ( e d T Kand ) q K ( x )= E"(e-TK).Pick a number 6 > 0 and let J = { S K 2 6). From past discussions it is clear that J n K' is a strong exit set. With this 6 fixed, define R = T G f V J S E ,= T K u j c , and To = O
T2n+I = 7'2,
+R
0 9 ~ 2 ,
Tzn+z=Tzn+l SOOT^^+,* The identity of the hitting distributions readily yields the fact that P ( T , =T,.) = P"(T,,= TJE)for all x and n. On the other hand the condition on K implies that the event (7'. < Tjc for all n, lim T,, c oo} has probability 0 under each P" and a similar statement holds for the process 8. Consequently lim T,,= T,, almost surely for X and We next observe that, for each g E b&*, ci > 0, x E E, and n 2 0,
x.
5. PROCESSES WITH (5.28)
IDENTICAL HITTING DISTRIBUTIONS
251
E X { e - Q A ( T n ) g(XTn)} = ,??{e-QTn g ( X T n ) > .
Indeed by (5.25) this is valid when Tnis replaced by R or S, and so (5.28) follows by an easy induction, whose details we omit. Now suppose f E bb* and f vanishes off K . We will prove (5.27) for such an$ Since f(XJ vanishes for T 2 , + ,5 t < T2n+2 we have
= E"
TJC
jo e-a' ~ ( X dt.J
Now recall that Jdepends on a positive constant S. Write Rd for the hitting time of {& < S} so that we already have shown (5.29)
As S decreases to 0, R, increases to a stopping time R, and, arguing as in the proof of Lemma 5.22 one sees that R is the hitting time of {& = 0} = { ' p K = O}. By monotone convergence and the fact that A is continuous we
may replace Rd by R in (5.29). But then we may replace R by 00 in (5.29) because f vanishes off K and neither process hits K at a time exceeding R . In other words, we have proved (5.27) for any j vanishing off such a K . But this establishes (5.27), for E can be covered with a countable family of sets such as K . The proof of (5.1) is complete.
Exercises
Prove (5.1 bis). [Hint: if A P"[TB< t ] = P"(A) for all x.] (5.30)
=ul
show that
AND POTENTIAL THEORY
1. Dual Processes
In this chapter we will study Markov processes for which an "adjoint" or "dual" Markov process exists. This additional structure will allow us to introduce and study some notions from classical potential theory. In particular we will define potentials of measures and study their relationship with potentials of additive functionals. As before X = (0,A, .At,X I , 0 1 ,P") will be a fixed standard process with state space (E, 8).In the first part of this section only the potential operators U" associated with X will play a role. (1.1) DEFINITION. A family { V " ; a > 0) of positive linear operators from
b8* to bB* is called a resolvent if it satisfies: (i) 1) V"ll I l / a , a > 0 ; (ii) V " f ( x )is a measure inffor each x E E and a > 0; (iii) V" - Y P = (b - a) V"v P , for all a, f l > 0.
Of course Conditions (i) and (iii) are simply Conditions (4.2i) and (4.2iii) from Chapter 111. Condition (ii) is satisfied by any resolvent subordinate to the resolvent { U " ; a > 0) of X. Given any resolvent { V " ;a > 0}, not necessarily subordinate to U", we define a - V supermedian and u - V excessive functions as in (4.5) of Chapter 111. The reader will have no difficulty in checking that the propositions of Section 4, Chapter 111, most notably Proposition 4.6, are valid in the present more general situation. We will use them without further elaboration. 252
1.
253
DUAL PROCESSES
One fact not explicitly mentioned in Chapter I11 is that f i s a - V excessive if and only if it is p - Vexcessive for all > a.
(1.2) DEFINITION.Let { U " ;a > 0) and {o"; > 0} be any two resolvents on bb* and let 5 be a o-finite measure on b*. Then we say that { U'} and (0') are in duality relative to 5 provided that for each a > 0 there is a nonnegative function ff E b* x b* with the following properties: (i) the function x 4 f f ( x ,y ) is a-excessive relative to the resolvent { U"} for each y E E and a > 0; (ii) the function y + f f ( x ,y ) is @-excessiverelative to the resolvent {l?"} for each x E E and a > 0; (ii) U " f ( x )= g ( x , y ) f ( y ) W u ) and @fb) = 5 g ( x , y ) f ( x ) 5(dx) for all a > 0 , f ~ bb* and x and y in E. REMARK.As the notation suggests, { U " ;a > 0} usually will be the resolvent of a standard process X . However we will not introduce this assumption for the moment. Suppose that { U " } and {o"} are in duality relative to 5. In most discussions the measure t will be fixed and expressions like " almost everywhere " are to be regarded as defined relative to t. Frequently in writing integrals against 5 we will write dx in place of 5(dx). We will write (f,g ) for j f ( x ) g(x) dx. The functions u' are by definition nonnegative but they may perhaps take on the value 03. We will write f rather than 0:f for the action of the operator 0" on the functionf, and we will write 0"(A,x ) for the measures associated with these operators. Thus we have the expressions f o " ( y ) = Jj'Cx) o"(dx, Y ) = /.fCx)
UYX,U) d x .
If G(x, y ) is a positive or bounded function in b* x b* we write G o"(x,y ) for J G(x, z ) o"(dz,y ) and U"G(x,y ) for j U"(x,dz) C(z,y). With this guide the reader should have no trouble unscrambling similar notation not specifically mentioned here. (1.3) PROPOSITION.Let { V " } be a resolvent and 8 a measure such that V"(X, is absolutely continuous with respect to 8 for each a > 0 and X E E . If f and g are a - Vexcessive and f = g a.e. 8, then f and g are identical. a )
Proof. The hypotheses imply that pV"+Pf= pV''pg for each conclusion then follows from Definition 4.5 of Chapter 111.
p > 0. The
An obvious consequence of (1.3) is that the functjons u" appearing in Definition I .2 are uniquely determined.
254
VI. DUAL PROCESSES AND POTENTIAL THEORY
{o"}
Let {U'} and be in duality relative to 5. We will write 9'0 (PB)for the class of all functions which are p-excessive relative to { U " } ({o"}). By (4.6) of Chapter 111, U ~ 9" E and f @ E 9"whenever f E 8: . If p is a (nonnegative) measure on 6*,then we define U"&) = J u"(x, y) p(dy) and p@(y) = J p(dx) u"(x, y). Note that /3Uu+BUup(x) = J p U u + au"(x, y ) p(dy) and since x + u"(x, y) is in 9" for each y it is clear that U"p E 9'"Similarly . p t P E 9".
We will now give a condition which guarantees that a pair of resolvents {oa}are in duality relative to a given measure (.
{ U"} and
<
THEOREM.Let {U"} and {o"} be resolvents on bb* and let be a o-finite measure on b*. Then {U"} and {oa}are in duality relative to if and only if the following two conditions are satisfied: (a) The measures U'(x, .) and p(., x ) are absolutely continuous with respect to 5 for each a 0 and x E E, (b) (f, U'g) = g) for each a > 0 and allf, g E a*,. Moreover if (a) and (b) hold then (in the notation of Definition 1.2) for 0 < a I p we have (1.4)
(fo",
=-
(1.5)
<
uYx, y ) = u%, Y ) =uqx, y)
+ (P - w e 4 x , y )
+ (p - oI)uflo"(x, y).
Proof. If { U " } and {@} are in duality relative to g, then the validity of (a) and (b) are immediate. Furthermore, in this event the resolvent equation U" = U B+ (D - a)U"UB= U B (1- a)UBU" implies that for each x the first equation in (1.5) is valid for almost all y (relative to 5, of course). But each term in this equation is, as a function of y, in QB.Consequently, by Proposition 1.3, the first equality in (1.5) is valid for ally. The second equality follows in a similar manner. So now we turn our attention to proving that conditions (a) and (b) imply duality. As before we write @ ( A , x ) for the measures associated with 001 and use the notation described above. By hypothesis, U"(x, is absolutely continuous with respect to 5 and, for each A E 6, U'(x, A ) is in a*, as a function of x. Since I is countably generated we can find a nonnegative function w" E &* x &'* such that
+
a)
for all f E 8: (see the proof of (2.3) of Chapter 111). For the moment let us fix a and x and denote by w the function y + w'(x, y). Using Condition (b) and (1.6) we obtain for any f E b: (WO"+B,
f) = ( w , U"+Bf)
= U"U"+Bf (x).
1.
By the resolvent equation U ' U ' + p f l for all f E B*, , and this implies that pw'
(1.7)
255
DUAL PROCESSES
p-'UY
Thus fl(wff'+p, f ) I U"f ( x )
o"+qx, z ) I w"(x, z )
o'+y(-,
for almost all z. The measure y ) is absolutely continuous with respect to ( and so it follows from (1.7) that (1.8)
Pw'@+'
y ) = PW'@'~
@"(x,
o"+'(x, y ) 2
W'
@+y(x, y )
for all x and y. Thus the function y + w'o '"(x, y ) is CI supermedian relative to the resolvent {o'}, and so, according to (4.6) of Chapter 111, as p + co the left side of (1.8) increases to a limit L,(x, y ) which is in 9"as a function of y and is b* x b* measurable as a function of ( x , y ) . Let f E S*,and suppose f is bounded. Then applying hypothesis (b) and the resolvent equation we obtain
(pwO'+YO'+P, f) = ( w , pu"+yu"+Bf)
and so letting
p + co ( L , f ) = ( w , U'+Yf> 9
where L, is the function y we see that (1.9)
+ L,(x, y )
(yL, ,f>= )JU"'+'f(X)
with x fixed. Using the definition of w =
U"f (x) - U'+Y f (x),
and so y,L,,(x, y ) 5 yzLy2(x,y ) for almost all y if y , < yz . But then this inequality is valid for all y by the fact that y + L,(x, y ) is in p.Define the function u' by u'(x, y ) = lim y L,(x, y). Y+m
As a function of y , u'(x, y ) is in L@ since it is an increasing limit of functions in 9'. Also it follows from (1.9) that
for a l l f e 8; . We have now defined the functions rr' on E x E for all CI > 0. All that remains is to check that x + rr'(x, y ) is in 9'"and that this function is a density relative to t for the measure 8"(.,y ) . Letfand g be in S*,. By what we have already proved, and Fubini's theorem,
256
VI. DUAL PROCESSES AND POTENTIAL THEORY
and so for each f
f 0%) = p
x m UYX, Y)
for almost all y. Operating on each side of this relationship with obtain
O"+pwe
Pfo" Ou+s(y) = P j d x f ( r ) u" o " + s ( x ,y) for all y. NowfO" E @ and y --t f f ( x ,y) E @ for all x and so letting fi -,00 we have by (4.6) of Chapter 111 and the monotone convergence theorem
f W Y ) =p
x m U"(% Y )
for all y, and of course also for all a. Thus for each y and a, x -+ u"(x, y ) is the desired density function. Finally, for each y we have that for all x
Pu" P + O ( x , y ) = fi Ju'(x, z ) u"+@(z,y ) d z ; that is, x + Pff o"+B(x,y) is U " f ( x ) where f ( z ) = fi U " + ~ ( Z ,y). Hence it is in 9". But as /3 + 00 this increases to u'(x, y) and so x -+ f f ( x ,y) is, for each y, in 9". This completes the proof of Theorem 1.4. We now assume that { U'} is the resolvent of ourfixed standardprocess X and that { U"}and a resolvent { are in duality relative to a measure 5. The functions ff are those of Definition 1.2. Before coming to the main point of this section we will make a few observations based on what we have already derived. These will be used later on without special reference. First of all, by (1.3) of this chapter, and (1.2) of Chapter V, X has a reference measure. Hence by (1.4) of Chapter V for each a and y the function x ua(x,y) is Bore1 measurable. If a I P then by (1.5) we have u'(x, y ) 2 up(,, y). As jl decreases to a, up(x,y ) increasps to a function rp(x, y ) and for each x we have u"(x,y ) = ~ ( xy), for almost all y because by monotone convergence f rp(x, y)f(y) dy is equal to Ultf(x). 3ut this implies that ua and rp are identical because each is in as a functim of y. Another application of (1.5) shows that if P increases to a then us(x, y) decreases to u"(x,y) for all ( x , y ) such that d ( x , y) c 00 for some P < a. As a decreases to 0, u'(x, y) increases to a function u(x, y) which is excessive in x for each y and 0-excessive, relative to { p } in , y for each x . In addition we have by monotone convergence
o"}
--f
1. DUAL
257
PROCESSES
Of course it is entirely possible that u is identically infinite, but if Uh is finite for some strictly positive h then for each x, u(x, y ) < 00 for almost all y . The transformation 0 = 0' can always be defined by monotoneity alsofO= 1irna+' for f ' &*, ~ ; the relationship h ) = (f, Uh) shows that if Uh is finite for some strictly positive h, then@ is almost everywhere finite for some strictly positive$ For each y and ct > 0, u"(x, y ) co for almost all x; but by (3.5) of Chapter I1 this implies that {x: u"(x, y ) = co} is polar. This observation will be used many times. Finally, note that (1.5) was stated for a 5 p. This was done merely to ensure that the sum is defined. Of course it is valid for any a and p provided the sums on the right side are defined. In particular for any a and j3 we have
fo"
(fo,
-=
u a u q x , y ) = up
P(x,y )
for any (x, y) such that uy(x,y ) < 00 where y
=u
v
p.
Given a measure p on & we write p P t for the measure PP,(A) = J P(dX)Pt(X, 4
and pU" for the measure pU"(A) = J p ( 4 U"(X,
4= j p
( 4
JA
u"(x, Y ) 4.
Warning. Distinguish carefully between the measure p U a and the function U " P ( X ) = J u " k u) P(dY).
(1.10) DEFINITION. A measure pPp I p for all t 2 0.
on & is a-excessive if p is o-finite and
Note that this definition makes sense for any standard process and in no way depends upon the existence of the dual resolvent {ti"}. If p is a-excessive then obviously pPp increases setwise as t 10 to a measure v with v I p. We will show that v = p. First note that, for /3 > 0, P U " + ~ ( AI) J: e - p r p ( A )dr, and so if p ( A ) < 00 then P I I " + ~ ( A<) 00. Fix such an A . Then for any p > 0 and E > 0 we have p { x : U"+B(X,A ) > E } I
(p&)-1
p ( A ) < 00.
Let {A,,} be an increasing sequence of sets whose union is E and such that p(A,) < 00 for all n, and let B,, = {x: U n + P (A,,) ~ , > l / n } . Then each B,, is a finely open Bore1 set, B, = E and p(B,,) < co. Iffis any positive continuous function then lim inftl Pp(f'ZBn)2 fI B n and so
u
v(flB,)
= lim 11'
pp:(fzB,)
258
VI. DUAL PROCESSES AND POTENTIAL THEORY
Consequently p = v on Bnand so p = v on E. The reader will verify easily that, given a measure q, the measure ? U p is /3-excessive if it is a-finite. It follows then from arguments like those in Section 2 of Chapter I1 that a a-finite measure p is a-excessive if and only if PpUp'" 5 p for all P > 0; in particular, if p is a-excessive then /?pup+" increases setwise to p as p -,00. The next proposition establishes an important relationship between excessive measures and excessive functions.
(1.11) PROPOSITION. A measure pis a-excessive if and only if p(dy) =f( y ) dy for some YE 9' which is almost everywhere finite. Given such anf; the measure p is certainly a-finite. Moreover the relationship
Proof.
Ppup"(4 =
1( p f w A
uB+'(x, Y ) d x ) dY
shows that BpUp+"is absolutely continuous, with derivative flf op+". Since fE this implies that PpUp+' Ip ; that is, p is a-excessive. Conversely suppose we are given an a-excessive p. Given fl > 0 define the function4 by
Clearly fpis a density with respect to for the measure / 3 p U a f aand ~ 0 . is finite almost everywhere. It is also evident that fpE 9 " + p . We next assert that, in fact, fpE 9 for each P > 0. Indeed, given 1 > 0 and any y such that fp(y) < co it is the case that rPfp(x, y) co for almost all x relative to p and so, in particular, r ~ + p@+' (x, y ) = U"' rP+@(x,y ) for this y and almost all x (p). Consequently, since I2pUU+' increases to p as 1 co,
-=
Ifp almost everywhere, and hence In particular, for each Iz > 0, 7 where y = max(a + A, everywhere since both of these functions are in 9 a j).Consequently fp is a - 0 supermedian and so gp = limA.+m A&@+' is in 9'. But by the above computation gp =faalmost everywhere, and hence it follows that fp, < everywhere. Thusfa E 9'. Since flpU"'p increases with /I fp2 almost everywhere if P , s B2. But the inequality is then valid everywhere
+
f ~
1. DUAL PROCESSES
259
because fa E 901. Finally as /l' 4 co,fa increases to a function f E 9. Since /l'pUp+"increases to p it is obvious that p(dy) =f ( y ) dy. This completes the proof. (1.12)
COROLLARY.The measure ( is excessive.
Proof. ( ( d y ) = 1 dy where 1 denotes the function on E identically equal to 1. Of course /l'l@ I 1 and increases as p 4 co to a function h which is excessive relative to {od}. Now iff'is a positive bounded continuous function on E then limp,, PUT= J: Consequently using Fatou's lemma we obtain
and so h = I almost everywhere. Thus ((dy) = h(y) dy and so by ( I . I I), ( is excessive. (1.13) REMARK.It is now evident that ( itself is a reference measure for X . Indeed, on the one hand if ( ( A ) = 0 then U"(x,A ) = J A u"(x, y ) ( ( d y ) = 0 for all x, while on the other hand if A is of potential zero then 0 = p(US(A) t ( ( A ) .
Before proceeding let us repeat that our basic data is a standard process X with resolvent {U'} and a resolvent {o"} which is in duality with { U " } relative to a measure (. We are now going to assume that (0") is also the 2,A,, 8 , ,fit, p) with the sume resolvent of a standard process 8 = state space ( E , a). This assumption will be in force throughout the rest of the chapter. The processes X and 8 are said to be in duality relative to <. Potential-theoretic objects defined in terms of 8 will be designated by the prefix co-. For example a function f E p awill now also be called a-coexcessive and the function go" will be called the a-copotential of g. We will denote by {p,} the semigroup of transition operators for 8,and, in keeping with our previous notation, will writefp, for the action of p , onf, and, p,(A, x) for the associated measures. If A is a set, we will write FA for the time that 8 hits A and (when A is nearly Borel relative to 8)pA(r,x) = p[8(PA)E r, FA < co] for the associated hitting measure. For example, to say that a Borel set A is copolar is the same as saying p(FA< 03) = 0 for all x E E. With this much introduction the reader should have no difficulty interpreting notation not specifically mentioncd here. Finally, in keeping with our previous convention, for each LX we extend u a ( x , y ) to EA x EA by setting it equal to zero when either argument equals A.
(a,
260
VI. DUAL PROCESSES AND POTENTIAL THEORY
(1.14) REMARK.It is of course possible to impose restrictions on the resolvent { @} which will guarantee the existence of a corresponding standard process 8. For example iff @ E C,(E) for every a > 0 and f E C,(E) and if also afo" 4fpointwise as a 4 co for everyfE C,(E), then the Hille-Yosida theorem yields a strongly continuous semigroup {P,} of operators on C , ( E ) such that.f@ = j;fp, e-" dt. The discussion in Section 9 of Chapter I then shows that {P,} is the semigroup of transition operators of a standard process 8. The process is in fact quasi-left-continuous on [0, 00). However we will postpone imposing further restrictions on the resolvents until the need arises.
Clearly, either process, X or 8, may be regarded as basic. Consequently some of the statements we have already made have a valid dual version. For example, (1.1 1) may be extended to read " a measure p is a-coexcessive if and only if p(dy) = f ( y ) dy for some f e 9"where f is almost everywhere finite." In particular the measure 5 is coexcessive. Also for each x and a the function 1p(x,y) is Bore1 measurable in y because 5 is a reference measure for 8. Finally let us mention that, for each x and a > 0, {y: u"(x, y) = co} is copolar. Recall that if p is a measure we have defined the .function U"p(x) = J u"(x, y ) p(dy), and that U'p E 9'".Plainly U'p is just the density of the measure @p(r) = j Oyr,x ) p(dx) relative to t. In particular if u a p is almost everywhere finite then o d p is a-coexcessive. Similarly the function p@ is the density of the measure p U a relative to t. Because of this duality between excessive measures for one of our processes and excessive functions for the other we will in the future discuss only excessive functions. The following proposition is essentially (6.22a) of Chapter 111. However we will give a proof since it is of basic importance in this chapter. (1.15) PROPOSITION. Let p be a measure. If U'p is almost everywhere finite, then U'p determines p. Proof. Suppose p and v are measures such that, for some a, U a p = U"v < 00 a.e. The second equality in (1.5) then shows that U B p= UBv,a.e., and hence everywhere, for all p 2 a. Let > 0 be such that ( l a p = UBv< 00 a.e., and let h be a strictly positive bounded function such that O
A")OB+yl=
1.
261
DUAL PROCESSES
Since g E p p it follows that fg is cofinely continuous and so as y-, m, y(fg) o p + y +fg. Furthermore y(fg) Iygop+y5 g, so letting y -, 00 and applying the dominated convergence theorem we obtain p(fg) = v( fg) for all bounded continuousf. This implies that the finite measures gp and gv are the same, and so p = v since g > 0. We come now to the key technical fact of this chapter. (1.16) THEOREM.Let A be a Borel subset of EA. Then for each a 2 0,
Pi f f ( x ,y ) = ff &(x, y ) for all (x, y ) E EA x EA .
Proof. By monotoneity it suffices to consider the case a > 0. Let a > 0 be fixed. Then we must show that (1.17)
1
P X X , dz) u'(z,
y) = [u'(x, z) &(dz, y )
for all (x, y ) and all Borel sets A . First note that both sides of (1.17) reduce to zero if either x or y equals A. Next let G be an open subset of Ed and letfbe a bounded nonnegative Borel function which vanishes off G. Then P;U"f= Uaf. If g E bb then (Sl GUY) = (9, UY) = (&',f >. ,x) = E, if x is in G and so g@pG = 960'on G . Consequently since But f vanishes off G,(go', f) = (go'&, f ). As a result
for almost all x, and hence for all x since both sides are a-excessive functions of x. Consequently for each fixed x, (1.17) (with A = G) holds for almost all y in G n E, and hence for all y in G since both sides are a-coexcessive functions of y and G is open. That is P; f f ( x ,y ) = u' e ( x , y ) on EA x G, and by duality this also holds on G x EA. For a fixed y let u(x) = Pz f f ( x ,y). Then v(x) = ff &(x, y ) on G v G'. But PzPz = Pz and 88 = 8 since G is open, and so for each fixed x u ( x ) = P: u ( x ) = / P : ( x , d z ) u'&(z,
y)
= I'P: u'(x, z) &(dz, y).
However for fixed x, Pz f f ( x ,z) = ff bG(x, z) for z in G and hence also i f z is coregular for G since both expressions are a-coexcessive in z. Now the y ) is carried by G and the points coregular for G , and commeasure y). Thus we bining these observations this last integral reduces to ff have proved ( I . 17) for open sets A .
e(.,
e(~,
262
VI. DUAL PROCESSES AND POTENTIAL THEORY
In the general case it clearly suffices to show that //g(x)
pi
m,Y ) f ( Y ) dx d y = //m u = m , Y ) f ( Y ) d x dY
wheneverfand g are nonnegative bounded functions in b* with {(f)and { ( g ) finite. Let v(dx) = g(x) dx and p(dx) = f ( x ) dx. Then the desired equality takes the form (1.18)
EV{e-ETA C J ~ ~ ( X , ,=) }B{e-a'A g P(.?fA)}
and by the first part of the argument we know that this holds whenever A is open. Let 'A denote the points which are coregular for A. Then A - A' and A - 'A are semipolar and cosemipolar, respectively, and hence both have 5 measure zero. Therefore we can find a decreasing sequence of open sets {G,} containing A such that TGnt (7'" A c) almost surely Pv and TGn ( P A A p ) almost surely plr. But U"fand g @ are regular @-potentialsfor X and 2,respectively and so the validity of (1.18) when A is replaced by G, implies the validity of (1.18) for general A. Thus we have established Theorem 1.16.
REMARK.For typographical convenience we will omit the hat "'"in those places where it is clearly appropriate. For example we will write Ep{e-OITAg@(XTA)}for the expression appearing on the right side of (1.18). We will give several simple consequences of (1.16). Deeper consequences will appear in later sections. (1.19) PROPOSITION. (i) A set is polar if and only if it is copolar. (ii) A set is semipolar if and only if it is cosemipolar. Proof. If A is polar then it follows from (1.7) of Chapter V that A is contained in polar Borel set B. Let a > 0. Then using (1,16), #(x, z) pi(dz, y ) = P: u"(x, y ) = 0. If, for a fixed y , v is the measure * ,y), then this says that U"v = 0 and so, by (1.15), v = 0. As a result By and hence A, is copolar. Dually a copolar set is polar. In proving that a semipolar set A is cosemipolar we may assume that A is a thin Borel set (see the proof of (1.15) of Chapter V) and that A c E. Let q < 1 and let B = A n {@A I q}. (As usual @i(x) = E"{e-TA;TA< l}.) Then (PA)"@, s q"-' and @; 2 PAU'l. Consequently as n + 00, (PA)"CJ'l 10. Let g E bb be strictly positive and satisfy { ( g ) < 00. Let 3, = go'(p;)"; then {I,} is a decreasing sequence of l-coexcessive functions. If 3 = limf, then using (1.16) one has
fi(
((3,)
= (9,
( P # c J ' ~ ) 10
s
1.
DUAL PROCESSES
263
as n -, co, and sof = 0 almost everywhere. Therefore the regularization o f f is the zero function, and hence, according to (3.6) of Chapter I1 {f> 0} is cosemipolar.* If x is coregular for B, then &( * , x) = E, and so, for each / I , f,,(x) = g O'(x) > 0. Thus if ' B denotes the set of points coregular for B, then ' B c {f > 0} and hence ' B is cosemipolar. But B = ( B - ' B ) u ( B n ' B ) and since B - 'B is cosemipolar ((3.3) of Chapter II), it follows that B, and hence A , is cosemipolar. By duality this yields (1.19). REMARK. It is not true that a set is thin if and only if it is cothin (see Exercise 1.21.)
From now on we will speak just of polar sets or semipolar sets. For example if ct > 0 both {x: f f ( x ,y ) = co} and { y : f f ( x ,y ) = co} are polar (or copolar). If A is any set, then ' A denotes the set of points which are coregular for A . It follows from (1.14) of Chapter V and ( I . 19) that ' A is Borel and that A - ' A is semipolar. Also (1.18) of Chapter V and (1.19) imply that the a-algebra E' of fine Borel sets and the a-algebra 8' of cofine Borel sets coincide. The following is a closely related result. However it is not particularly important since in the present case excessive functions are Borel measurable. (1.20) PROPOSITION. A set is nearly Borel relative to X if and only if it is nearly Borel relative to 8;that is, 8"= b". Proof. Let B E 8".Then there exist Borel sets B, and B, with B, c B c B, and such that Pc(TB2-B, < co) = 0. Consequently Px(TB2-B, < a)= 0 for all x and so B, - B , is polar. Since B is universally measurable, given an initial measure p one can find Borel sets A , and A , such that A , c B c A , and p ( A , - A , ) = 0. Let C , = A , u B , and C , = A , n B , . Then C,c B c C , and C , - C , , being contained in B, - B , , is polar. Since p(C, - C , ) = 0, Pp{TtE C , - C , for some r 2 0} = 0, and hence B is nearly Borel relative to 8. Thus (1.20) is established.
Of course it is now obvious that the conclusion of Theorem 1.16 is valid for nearly Borel sets as well as for Borel sets.
Exercises (1.21) Let X be translation to the right at unit speed in R and let 8 be translation to the Zefr at unit speed in R. Show that X and 8 are in duality
* In fact (3.2) (a much more elementary result than (3.6) of Chapter 11) implies that {f> 01 is copolar.
264
VI. DUAL PROCESSES AND POTENTIAL THEORY
relative to Lebesgue measure. Compute zf(x, y) explicitly. Describe the fine and cofine topologies. Exhibit a thin set which is not cothin. Note that the transition functions P,(x, A) and b,(A,x) are not absolutely continuous with respect to Lebesgue measure. Let X be a Hunt process with state space (RN,B(RN))whose transition function P,(x, A) has the form (2.12) of Chapter I. Let fl,(A) = p,( -A) where {p,} is the semigroup of measures appearing in (2.12) of Chapter I. Show that f P , ( x ) = + y) fi,(dy) defines a transition function on RN and that there exists a Hunt process 8 on RN with this transition function. If p ( A ) = j; e-'p,(A) dt is absolutely continuous with respect to (N-dimensional) Lebesgue measure 5 show that X and 8 are in duality relative to 5. If t+bR denotes the real part of the function t+b appearing in (2.13) of Chapter I, then show that a sufficient condition that p be absolutely continuous with respect to 5 is that, for some a > 0, j [a J / R ( x ) ] - ' dx < 00. Whenever X is a process in R"' with transition function of the form (2.12) of Chapter I, 8 will always denote the process decribed above. (1.22)
If(.
+
(Special cases of (1.22).) If X is the symmetric stable process in RN of index a with 0 < a < N,then show that
(1.23)
See (1.7) of Chapter 11. If X is the one-sided stable process in R of index p with 0 < p < 1, then show that u ( x , y ) = [r(p)]-l ( y - x)B-l
=O
if x < y, if x 2 y.
See (1.6) of Chapter 11. Let E = Z, the integers, and let d be all subsets of E. Let q be a probability measure on E. Define Q(x, A) = EYE,, q(y - x) and &(A, x) = CysAq(x - y). Let A(x) = A where 0 < A < 00. Using the notation of Section 12, Chapter I, let X and 8 be the regular step processes constructed from (Q, A) and (0, A), respectively. Show that X and 8 are in duality relative to the counting measure ( on E; that is, t ( A ) is the cardinality of A. If q(1) = 1 and q(x) = 0 for x # 1, describe the processes X and 2. (1.24)
Let X and 8 be in duality relative to (. (i) Let A be any subset of E. Show that A' - 'A and 'A - A' are semipolar. [Hint: let F = 'A - A'. Then F is a Borel set and F u 'F c 'A. Let B be a Borel set containing A such that (1.25)
2.
POTENTIALS OF MEASURES
265
B' = A'. See (1.13) and (1.14) of Chapter V. Let a > 0 and let O;(x) = EX(e-OTTp; TF< c) = P; I(x). Let {f,,}be a sequence of bounded functions such that U"f, t 1. Now use (1.16) to show that O; =Pi@; and conclude from this that @; < 1 on F. Use this to show that F i s semipolar.] (ii) Use (i) to show that if A is any subset of E then the fine closure and the cofine closure of A differ by a semipolar set. Also show that the fine interior and the cofine interior of A differ by a semipolar set. (1.26) Let X and 2 be in duality relative to 4;. (i) Let p be a measure and B be a cofinely open Bore1 set carrying p. If M = supxEBU" p(x), then show that U"p 5 M everywhere. (Here a 2 0.) If p does not charge semipolar sets and K is the support of p, then show that IIcI"pII = supzeKU" p ( x ) . (ii) If the fine and cofine topologies coincide and p is any measure with support K, then show that IIU"pII = supzEKU" p ( x ) . Note that this last result is applicable whenever X and 2 are equivalent-for example, Brownian motion or the symmetric stable processes. (iii) Let X be the one-sided stable process in R of index p, 0 < /3 < 1. Exhibit a finite measure p such that Up = 0 on the support of p and such that U p is unbounded.
(1.27) Suppose X and 9 are in duality relative to a measure 5. Let A be an open subset of E and suppose A' = (A'Y = '(A'). (a) Show that ( X , TAE) and (2,PAC)are in duality relative to the restriction o f t to A . (b) Show that X and 8 are equivalent if and only if u"(x, y ) = u"(y, x ) for all x and y. (c) Show that if X and 2 are equivalent, then ( X , TAE) and (2,PA,) are equivalent also.
2. Potentials of Measures
In this section we are going to establish the analog of the classical Riesz decomposition theorem; that is, under suitable assumptions an excessive function f will be written in the form f = U p + h where h is in some sense "harmonic." We assume that X and 2 are standard processes with the same state space ( E , 8) and that they are in duality relative to a fixed a-finite measure t. The notation and terminology of Section 1 will be used without special mention. In addition the following smoothness and boundedness conditions will be imposed throughout this section except in certain propositions where explicitly stated otherwise. (2.1) If a > 0 and f~ bb * vanishes outside a compact subset of E, then f O " ( y ) = J ua(x,y ) f ( x ) dx is a continuous function of y on E. Of c o u r s e f o is bounded.
266 (2.2)
VI. DUAL PROCESSES AND POTENTIAL THEORY
Iff€ bd' * vanishes outside a compact set, then
are bounded functions of y and x, respectively, and in addition ,f0 is continuous on E. We will assume both (2.1) and (2.2) and state and prove theorems about excessive functions (a = 0). However, if only (2.1) holds, then obviously for any a > 0 the a-subprocess ((3.17) of Chapter 111) satisfies both (2.1) and (2.2). Consequently all of our results are valid for a-excessive functions when a > 0 just under condition (2.1). These results often will be used in this form in later sections. Before proceeding we will draw some elementary consequences of (2.1) which do not depend on (2.2). An immediate consequence of (2.1) is that a-coexcessive functions are lower semicontinuous for any a 2 0. See (2.6) and (2.16) of Chapter 11. Also (2.1) implies that 5 is a Radon measure; that is, ( is finite on compact subsets of E. To see this let {A,,} be an increasing ,)co. We may also assume sequence of Bore1 sets with E = U A , , and &I,< that each A,, has compact closure in E. Then y + f i ' ( A , , y ) is continuous for each n. Let yo be fixed and choose n so that O'(A,,, yo) > 0. Consequently there exist a neighborhood G of yo and an q > 0 such that O'(A,, ,y) 2 q on G.Now g is coexcessive and so
v
1
I;
G
o1(An Y ) ( ( d y ) 5 t(An) < 00.
<
Thus every point has a neighborhood on which 5 is finite and so is a Radon measure. Here is another useful consequence of (2.1). Let K be a compact subset of E and G a neighborhood of K. Then for any a 2 0, y + P ( G , y) is lower semicontinuous and strictly positive on K, and so inf,,, P ( G , y ) > 0. In the present situation the various finiteness assumptions on excessive functions and potentials of measures take a particularly nice form. Recall that f~ &'* is said to be locally integrable provided If1 dt < 00 for all compact K c E.
I,
(2.3) PROPOSITION. Let f be an excessive function. Then the following conditions are equivalent under (2.1): (Condition 2.2 is not assumed for this proposition.) (i) f i s locally integrable; (ii) f i s finite a.e.; (iii) f i s finite except on a polar set.
2.
POTENTIALS OF MEASURES
267
Proof: The equivalence of (ii) and (iii) follows from Proposition 3.5 of Chapter I1 and the fact that is a reference measure for A'. Plainly (i) implies (ii) and so we need only show that (ii) implies (i). Assume (ii) and let y o be a fixed point in E. Then there exists an xo E E such that u'(xo,yo) > 0 and f(xo) co. Since y -+ u ' ( x o , y ) is lower semicontinuous one can find an q > 0 and a neighborhood G of yo such that u l ( x o ,y ) 2 r] for all y in G. Consequently
-=
>S(XO)
2 p ( x o Y ) f ( Y ) dY 2 '1 9
jc f(y) dY.
Thusfis integrable over some neighborhood of each fixed point y o and hence f is locally integrable.
(2.4) PROPOSITION. Let p be a measure on b*. Then Up is locally integrable if and only if P, p is a finite measure for each compact subset K of E. In this case p is finite on compact subsets of E. Proof. Suppose b,p is a finite measure for compact K . Given a compact K, JK U p d( = (ZKO) dp. NOWZ K O = Z K O P K and J ( Z K O B K ) dp = J (ZKQ dpKp which is finite because P , p is a finite measure and ZO , is bounded. Next suppose that U p is locally integrable. Let K be compact, G a neighborhood of K with compact closure, and r] = inf,,,, O(G, y ) > 0. Then
5
and so p is finite on compact subsets of E. In particular Up, p = P K U p5 Up, and so if U p is locally integrable then p , p is finite on compacts and hence is finite since it is supported by K . Consequently the proof of Proposition 2.4 is complete. Let M denote the set of all measures p on b* such that Up is locally integrable. This will turn out to be the appropriate class of measures for our discussion. In particular, according to (1.15) and (2.3), U p determines p when p is in M. Finally (2.4) implies that any finite measure is in M and that any measure in M with compact support in E is finite. We will introduce some terminology before stating the next proposition. Recall from Section 4 of Chapter I11 that an f E S*,is called supermedian if aU"f 5 f for all c1 > 0 and that in this situation uU"fincreases as a -+ co to an excessive function 1', the regularization of f: Moreover f is the largest excessive function dominated by f and t({/ 0, aUugIg, a.e. Let D, = {aUag> g} and D = D , , the union being over all rational a. Define
u
268
VI. DUAL PROCESSES AND POTENTIAL THEORY
f = g on E - D and f = co on D. Then g IA g = f a.e., so that Uy= U'g for all u, and u U u f f~ if u is rational. It follows immediately that u U y < f for all a, so f is supermedian. Consequently uU'g increases to an excessive function which we again call the regularization of g, and g = g a.e. Moreover S is the largest excessive function dominated almost everywhere by g. (2.5) THEOREM. Let {p,,} be a sequence of measures in M and suppose that there exists a locally integrable function f such that Up, If for all n. Then the following conclusions hold. (i) {p,,} contains at least one weakly convergent subsequence {v,,}. If v is the weak limit of {v,}, then v E M. (ii) Let {v,} and v be as in (i) and assume that Uv, converges to a function g a.e.; then g is nearly supermedian and S 2 Uv. If, in addition for each compact subset K of E and each E > 0 there exists a compact subset J of E such that j,. 0 ( K , x) v,(dx) < E for all n, then S = Uv.
Proof. In order to show that {p,} contains a weakly convergent subsequence it suffices to show that { p , ( K ) } is bounded in n for each compact subset K of E. For such a K let G be a compact neighborhood of K and let r] = inf,,, O(G, x). Then
Now let {v,} be a weakly convergent subsequence with limit v. If h is a bounded nonnegative function vanishing off a compact subset of E, then h 0 is in C + and so, for any continuous function k with compact support and 0 5 k I 1,
/ k ( h O ) dv = lim / k ( h O ) dv, I( h , f )
-= co.
n
Consequently
( h , U V ) = S h O dv 5 ( h , f ) and hence (i) is established. As for (ii), using the same notation as above we obtain with the aid of the bounded convergence theorem
( h , g)
= lim(h,
Uv,) 2 lim S k ( h O ) dv,
n
=
fl
S k ( h O) dv.
Letting k run through a sequence in (CK)+increasing to 1 we obtain ( h , g ) 2
5 h O dv = ( h , Uv). Hence g 2 Uv a.e. Finally using Fatou's lemma we have
2.
POTENTIALS OF MEASURES
269
almost everywhere g = lim Uv, 2 lim inf aU"Uv,
2 tl U"(lim Uv,,) = aU"g,
and so g is nearly supermedian. Clearly S 2 Uv since g 2 Uv almost everywhere. We have now established all of (2.5) except for the last assertion. Let K be a compact subset of E and given E > 0 let J be a compact set in E such that jJc0 ( K , x ) v,,(dx)c E for all n. Let k be a continuous function with compact support such that 0 5 k 5 1, and k = 1 on J . Then g d(
= lim
1Uv, K
d5 = lim
1D ( K , y ) v,(dy)
Since E is arbitrary this implies that ( I Kg) , I ( I , , Uv). On the other hand Uv 5 g a.e. and so Uv = g a.e. on K , Since K is arbitrary, Uv = g a.e., and so Uv = S, completing the proof of (2.5). REMARK. The assumption in the last sentence of (2.5) certainly holds whenever there is a fixed compact subset J of E containing the supports of all the p,. We now introduce some notation that will be used in the rest of this section. Let { K , } be an increasing sequence of compact subsets of E such that K,, c K:+, for each n 2 1 and E = U K , . Here K: denotes the interior of K,, . If T,, = TK; then clearly T,, t ( almost surely, and if f,,= fK;then f,, p almost surely also. Iff is an excessive function, then { P T , f } decreases to a limit g as n -+ 00 and this limit is obviously independent of the particular sequence {K,,} used-subject to the above conditions of course. It will be convenient to write limKrEP,, f for this limit g in order to emphasize its independence of the sequence {K"}.Obviously g is super-mean-valued and it differs from its regularization 9 on a semipolar set, by Theorem 3.6 of Chapter IJ. Of course g = inf, PKe f where the infimum is over all compact subsets K of E. REMARK. In fact, by (3.20) of Chapter 11, S = g except possibly on the set { g = a}, and whenfis locally integrable this set is polar since it is contained in {f= 00). Suppose that f = Up with p compact subset of E. Then
E
M and let h be the indicator function of a
270
VI. DUAL PROCESSES AND POTENTIAL THEORY
= Ell
fT"c h(X,) d t ,
and this last expression approaches zero as n + 00 since p,, t p almost surely. Consequently in this case g = 0 almost everywhere and so S = 0. It now follows from the above remark that g = 0 on { U p < a}.The exceptional set cannot be eliminated in general. See Exercise 2.17. The following corollaries of Theorem 2.5 are perhaps the most useful forms of this theorem for our later applications.
(2.6) COROLLARY. Let {p,,}be a sequence of measures in M such that the sequence {Up,,}is increasing and suppose that f = lim Up,,is locally integrable. Suppose further that either the supports of all the p,, are contained in some f = 0,a.e. Then {p,,}converges fixed compact subset of E or that limKtEPKE weakly to a measure p in M and f = Up. Proof. Let us show first that under these hypotheses the condition in the last sentence of Theorem 2.5 is satisfied. It is certainly satisfied if the supports of all the p,,are contained in some fixed compact subset of K . So suppose J is a compact subset of E and y E J'. Then for any x
Consequently if K and J are compact subsets of E
and under the second assumption in the second sentence of (2.6) this last integral approaches zero as J runs through the sequence {K,,}. Thus in either case the condition in the last sentence of Theorem 2.5 is satisfied. Since {Up,,} is increasing, f is excessive. If {v,,} is a weakly convergent subsequence of {p,,} with limit p, then f = lim Uv,,. Since f is excessive, Theorem 2.5 and the preceding paragraph imply that f = f = Up. If v is the limit of any other weakly convergent subsequence of {p,,}, then by the above argument Uv =f = U p . Consequently p = v and so the entire sequence {p,,} must converge weakly to p. (2.7) COROLLARY. Let {p,,} be a sequence of measures in M such that
2.
271
POTENTIALS OF MEASURES
{Up,} decreases. Then {p,,}converges weakly to a measure p in M and {Up,,} decreases to Up except on a semipolar set.
Proof. Since Up1is locally integrable and limKTE PKFUpl= 0 a.e., the argument in the first paragraph of the proof of (2.6) shows that Theorem 2.5 is applicable. Let f = lim Up,. Then under the present assumptions f is super-mean-valued and f =f;except on a semipolar set by Theorem 3.6 of Chapter 11. If {v,} is a weakly convergent subsequence of {p,} with limit p, then f = lim Uv, and, by Theorem 2.5, f= Up. It now follows as in the proof of (2.6) that the entire sequence {p,} converges weakly to p. REMARK.If in (2.6) or (2.7) all of the measures p, have their supports contained in some fixed compact subset of E, then the limit measure p is finite since it is in M and has compact support. We come now to one of the important results of this section. THEOREM.Let B be a Borel set with compact closure in E and let f be a locally integrable excessive function. Then PBf = Up where p is a finite measure concentrated on the union of B and the set of points coregular for B. (2.8)
Proof. Under the present assumptions one can find an increasing sequence { Ug,} of bounded potentials with limit f. Let p, = g,<. Then each p, is in M ,and PBUp,,= UP,p, . Let v,, = a B p , , . Then { Uv,} increases to PBf and each v, is a measure in M which is concentrated on 8.Since Uv, sf,Corollary 2.6 is applicable, and hence there exists a finite measure p concentrated on such that P, f = Up.It remains to show that p is concentrated on B u ‘B. To this end let G be a compact neighborhood of B. By what we have proved so far, P, f = Uv where v is a finite measure concentrated on G. But PEP, = PB and so P, f = P, P , f = P,UV = u a , v . Hence p = P,v and this representation of p clearly shows that pis concentrated on B u ‘B. Thus Theorem 2.8 is established. We are going to derive a very useful consequence of Theorem 2.8 before proceeding to the main result of this section.
(2.9) PROPOSITION.For this proposition we assume only (2.1) and not (2.2). Let A and B be Borel sets with B c A and assume that B c ‘A. (This is certainly satisfied if A is a neighborhood of B.)Then TB OTA = 0 almost surely on {TA= T, < 00). 0
212
VI. DUAL PROCESSES AND POTENTIAL THEORY
Proof. Let u > 0. Since TA I TBthe conclusion of (2.9) is equivalent to the statement that TA + T , o OTA = T , almost surely, which in turn is equivalent to Pf;P:l = P:l. In light of (1.20) of Chapter V (or (10.19) of Chapter I applied to the measures E, and Pf;(x, .)) it suffices to prove this when B is compact. In this case Pi1 = U"lr; where R; is a finite measure concentrated on B, according to Theorem 2.8. (Note that as explained above we are applying (2.8) when u > 0 assuming only condition (2. l).) Since B c ' A , = n$ and so
p i p i 1 = P i u a n t = UapA
= Van: = P i 1,
completing the proof of (2.9).
REMARK.In view of the remark following the proof of (1.20), the sets A and B appearing in (2.8) and (2.9) need only be nearly Bore1 measurable. As mentioned before, this is not of much interest in the present situation. Note also that (2.9) can be stated in the form Pf;P: = P: for all a 2 0. As a result if G is a neighborhood of B, then PgP: = P i = P i p ; for all u 2 0. Yet another way of formulating the conclusion of (2.9) is that X(TJ E B' almost surely on {TA = T , < a}. (2.10) PROPOSITION. A locally integrable excessive function f is the potential, Up, of a measure p in M if and only if limKf,PKcf = 0 a.e. Proof. We have already seen that if p E M then U p satisfies this condition. Conversely suppose f satisfies the hypotheses of (2.10). By (2.2), U ( x , K ) is a bounded function of x if K is compact, and so according to (2.19) of Chapter I1 there is a sequence {g,,} of bounded functions such that Ug,,is bounded for each n and Ug,,increases to j : Let p,, be the measure g,,t. It is then an immediate consequence of (2.6) that p,, converges weakly to a measure p in M and that U p =f.
We are now ready to state and prove the analog ofthe Riesz decomposition theorem. THEOREM.Let f be a locally integrable excessive function. Then f has a unique representation of the form f = U p + h where p E M and h is an excessive function with the property that P A = h whenever D is the complement of a compact subset of E. (2.11)
+
Proof. Suppose f has two such representations- f = U p , h , = U p 2 + h 2 . Operating on this last equality by P,, and letting K t E we find, since P,,Up + 0 a.e. if p E M, that h, = h 2 , a.e. and hence everywhere. Therefore
2.
273
POTENTIALS OF MEASURES
U p , = Up2 and this implies that p1 = p 2 . Thus we have established the uniqueness of such a representation. We will next show that f = g + h where h is as in (2.11) and g is an excessive function with the property that limKT,P,, g = 0 a.e. By Proposition 2.10 this will complete the proof of Theorem 2.1 1. Let {K,,} be our fixed increasing sequence of compacts with K,, c K f + , and U K , , = E. Define h, = limn PKLJ Then h, is super-mean-valued and h, I J If J is any Borel set with compact closure in E, then J c K," for all large n and PjcP,; = PKF,for such n. Therefore PJch,(x) = h,(x)'at each x satisfying f(x) < co. Next define g , ( x ) = f ( x ) - h,(x) if f(x) co and g l ( x ) = co iff(x) = co. Clearlyf = g , h , , and limKT,PKEgl = Oon{f< co}. Moreover if J,is any Borel set with compact closure in E, then PJCg, Ig, everywhere. Now let K be a fixed compact subset of E and let J = K C n K, . Then J is compact and J' = K u K,' so that PKUKEg1 I g , for each n. Let T and T,, be the hitting times of K and K,', respectively. Then
+
-=
E"{gi(XT); T I Tnl5 PKUK:gi(x) I gi(x),
and letting n 4 co we obtain PKg1 I g 1 since T,, 5 almost surely. Thus, by Theorem 5.1 of Chapter 11, g , is supermedian. Let g = lima+mCrU'g, and h = 1ima+* aU"h,. Then g and h are excessive and f = g + h. Since g I g l , limKT,P,,g = 0 on {f< co}; in particular, this limit is zero almost everywhere. Finally we have already observed in the remark preceding (2.6) that h = h, except on a polar set, and hence P,,h = PKChlexcept on a polar set, whenever K is a Borel set. In particular if K is compact then we already know that P,,h, = h, a.e., so that P,,h = 12 a.e. and hence everywhere because h and P,,h are excessive. The proof of (2.1 1) is complete. We will next formulate and prove another very important theorem due to Hunt, which complements the result in Section 6 of Chapter 111. Iff is excessive and B E 1 we define %(A B) to be the family of all excessive functions g which dominate f on a neighborhood (depending on g ) of B and f B to be the infimum of the functions in %(A B). (2.12) THEOREM.Let f be a locally integrable excessive function and B E 1.Then f " coincides, except possibly at the points outside B where f is infinite and at the points of B - B', with the supremum of the potentials U p where p is a finite measure carried by a compact subset of B and U p 5 f everywhere. There is a sequence of such measures pn such that {Up,,]increases to the supremum in question. This supremum is itself the potential, UV,, of a
measure v g in M which is concentrated on B u 'B, whenever either f is the potential of a measure in M, or B has compact closure in E. Finally if B is open f" and the supremum in question coincide everywhere.
214
VI. DUAL PROCESSES AND POTENTIAL THEORY
Proof. The proof of this theorem is rather long and so we will proceed in steps. First of all if g 2 f on a neighborhood of B, then g 2 P B g 2 P , f everywhere and so f B 2 P B f .Of course f
"
(2.13)
f " = inf {P,f: G open, G
2
B}.
It is, of course, no restriction in (2.13) to assume that G c E. In the remainder of this proof all sets are understood to be subsets of E unless explicitly mentioned otherwise. An immediate consequence of (2.13) is that f = PGf whenever G is open. If x E B' then f ( x ) 2 f "(x) 2 P , f ( x ) =f ( x ) , and so f =f B = P B fOn B'. If p is a finite measure with compact support contained in B such that Up s f and if g E %(f, B ) dominates f on a neighborhood G of B, then g 2 P , g 2 P , U p = @,p = U p since G is also a neighborhood of the support of p. Thus the supremum in question nowhere exceeds f".Let {K,,} be an increasing sequence of compact sets contained in B such that, for all x, TKn1T B almost surely Px. (See (1.20) of Chapter V for the existence of such a sequence.) Certainly PKnf IPBf while by the right continuity of t f ( X , ) and Fatou's lemma we have lim inf,,+mP K nf 2 P,f. Therefore limnPKnf = P , f and so limnPKnf ( x ) =f "(x) at any point x such that P , f ( x ) =f "(x), in particular at any point X E B'. Finally, according to Theorem 2.8, PKnf = Up,, where p,,is a finite measure carried by K,, . Thus we have proved the assertion in the last sentence of Theorem 2.12 and that part of the assertion in the second sentence dealing with points inside B. Note also for future reference that if G is open and K a compact subset of G, then P Kf If K < P , f =f ', and so it follows from what we have proved that f = sup{f K : K a compact subset of G}. Next suppose that B is compact and let {G,,} be a decreasing sequence of open sets with compact closures C,,in E and such that G,,+, c G,,, B = nG, . Then { PGnf } decreases to a super-mean-valued function f o 2 PBJ But if G is any neighborhood of B then G 3 G,, for all large n and so, by (2.13), fo =f " . Now by Theorem 2.8 PGnf = Up,, where p,, is a finite measure with support contained in C,,c GI, and so, by Corollary 2.7, {p,,} converges weakly to a measure p and U p =f o =f B a.e. In particular Up is the regularization off ",and, according to (3.20) of Chapter 11,f "(x) = Up(x) at each x outside B at which f " is finite. Moreover the support of p is contained in C,, for each n, and hence p is carried by B. Obviously Up If and so the second sentence of Theorem 2.12 is proved whenever B is a compact subset of E. Note that we have actually proved more in this case: namely, there exists a measure p with support in B such that U p I f and Up(x) =f "(x) at any x E B' at which f ' ( x ) < 00. --f
2.
275
POTENTIALS OF MEASURES
We will complete the proof by using Choquet’s extension theorem for capacities (Theorem 10.6 of Chapter I). Let x be a fixed point at which f ( x ) < 03 and define q ( K ) = f K ( x ) for all compact subsets K of E. Then q ( K ) is finite, cp(K) < cp(L)if K c L, and it follows from (2.13) that cp is right continuous, that is, satisfies condition (10.5) of Chapter I. Let K and L be compact and E > 0. Using (2.13) we can choose open sets C and H containing K and L, respectively, such that P C f ( x )+ P H f ( x )< q ( K ) cp(L) E . Since G u H and G n H are open sets containing K u L and K n L, respectively, in order to show that q ( K n L) cp(K n L) I cp(K) q(L) it will suffice to show that PGvHf+ PGnHf< P G f + P H f .For this it is enough to consider the case in which f is the bounded potential of a bounded nonnegative function 9.But then
+
+
1
+
+
TH
pGuHj(x) - P
H ~ ( x ) = EX
g(xr) dt
T c A TH
Thus we have checked that cp is a Choquet capacity on the class of compact subsets of E. Also it follows from (2.13) and the remark at the end of the second paragraph of the proof that cp(B) = f B ( x )for all Bore1 sets where now cp has been extended to d by Theorem 10.6 of Chapter I. Finally suppose in addition that x 4 B. Given E > 0 we can find, using Choquet’s theorem, a compact subset K of B such that f B ( x )- f K ( x ) < E. But by what was proved in the preceding paragraph there exists a measure p E M with support in K such that Up(x) = f K ( x )and U p
(2.14) PROPOSITION. Let p l and p2 be measures in M. Then the smallest excessive function dominating both U p , and Up2 is the potential Uv of a measure V E Mwhose support is contained in the union of the supports of PI and P 2 .
+
Proof. Let p = p l p 2 . Then p E M and Up dominates both U p , and U p 2 . Consider the collection V of all excessive functions f such that f dominates U p l and Up2 and f is dominated by Up, and let u denote the infimum of this collection. Then according to Theorem 1.6 of Chapter V we can find a decreasing sequence {f.}of elements of V such that if u = inf, f, then ij < u I v. Moreover, since any excessive function dominated by the potential of a measure in M is itself the potential of measure in M, ij = Uv with v E M.
216
VI. DUAL PROCESSES AND POTENTIAL THEORY
Clearly ij dominates both Up, and Up2 almost everywhere and hence everywhere, and so ij is the smallest excessive function dominating Up, and U p 2 . Let G be any open set containing the supports of pl and p 2 . Then P,ij 2 PGUpj = U p , p j = U p j for j = 1,2, so P,ij dominates Upl and U p 2 . Consequently ij = P,C, and so Uv = P,Uv = Up,v. Therefore v = p,v and hence the support of v must be contained in the union of the supports of p1 and p 2 . This establishes Proposition 2.14. We return now to the proof of Theorem 2.12. Let W be the collection of all Up where p is a finite measure with compact support in B and Up sf. Proposition 2.14 implies that W is filtering upward, and so by Theorem 1.5 of Chapter V we can find an increasing sequence {Up,} of potentials in W such that sup,, Up,, = sup{ Up: Up E W } .This establishes the third sentence of Theorem 2.12. It remains only to check the fourth sentence of Theorem 2.12. Let u be the supremum in question, which, in view of the preceding paragraph, is excessive. We also know that u =f B except possibly on a semipolar set. Suppose that f = Uv with v E M. Let v , be the restriction of v to B and v2 = v - v l . By (2.13) and Theorem 1.6 of Chapter V we can find a decreasing sequence {G,} of open sets containing B such that if u = limnP G f then C If B Iu, and, since v 2 ( B )= 0 we may also assume that (TGn A p) (?, A p ) almost surely pvz.Now Pen Uv, = Up,, v2 decreases to a super-mean-valued function g and if h 2 0 is a bounded function vanishing outside a compact set, then
+ pVz
c
h ( X , ) dt
=(h,
UbBv2).
TB
Therefore g = U ~ , Va.e. , But B c G, and so
+
and letting n + co we obtain u = Uv, g . But u = f = u a.e., and g = ufiBv2 a.e. Therefore u = Uv, u p B v 2 a.e., and hence everywhere. Thus u = Uv, where v B = v , + p B v 2 , and clearly V , is carried B u 'B. Finally if B has compact closure in E and G is a compact neighborhood of B, then P , f = Uv for some v E M by Theorem 2.8, and the same argument as above shows that u = Uv, for v B = v 1 + p,v2 where again v1 is the restriction of v to B and v2 = v - v I . Thus, at last, the proof of Theorem 2.12 is complete.
+
2.
POTENTIALS OF MEASURES
271
We close this section by investigating the relationship between P, f and f when B E 1.The notation is that of (2.12). Since condition (2.2) certainly implies the hypothesis of Theorem 6.12 of Chapter I11 (with M , = ZLOn(t)) we know that P, f agrees with the infimum of the family of all excessive functions dominating f on B except possibly at the points of B - B', and so there must be a close relationship between P , f and f '. In the course of the proof of (2.12) we showed that P , f =f B =f on B' and P , f =f if G is open. Sincef B is only super-mean-valued and P , f is excessive we cannot hope to show that they are equal in general. The most one could hope for is that P , f is the regularization of f B and this is certainly the case if f B = P E A a.e. The next result characterizes those excessive functions f for which this is true. The following concept is used in its statement. A locally integrable excessive function f is said to be admissible if whenever K is a compact subset of E there exists a decreasing sequence {G,} of open sets containing K and such that P G n f 1p K f a*e* (2.15) PROPOSITION: (i) Let f be a locally integrable excessive function. Then for each B E 8,f B = P E A a.e. if and only iff is admissible. In this case f B = P, f except possibly on ( B - B') u (BCn { f = a}).(ii) If f = U p with EM, then f is admissible if and only if p ( K - ' K ) = 0 for all compact subsets K of E. In this case, for all B E 6,P, f = Uv, where V, is the measure defined in Theorem 2.12. Proof. Suppose f is admissible, K is compact, and {G,} is a decreasing sequence of open sets containing K such that PGnf 4 P , f a.e. Clearly we may assume that no point of G, is distant from K by more than l/n and so if G is any open neighborhood of K, then C, c C for large n. Since by (2.13), f K = inf P , f as C ranges over such neighborhoods we have f = lim P G , f . Consequently P , , f = f a.e. and P , f is the regularization of f K . Thus by (3.20) We now assert of Chapter 11,f = P, f on K' n {f < a}=I K' n {f < a}. that, for any B E 8,P,f =f B except perhaps on ( B - B') u (B' n {f= a}). Indeed we have already observed that P , f ( x ) = f B ( x )if x E B'. Now suppose x E B' n { f < a}.Then by the proof of (2.12) there is a sequence {K,} of compact subsets of B such that f Kn(x) -,f "(x). We may also suppose the K, are such that TKn1 T , a.s. P", so that P,, f ( x ) + P , f (x). By what was proved above P K n,f(x) = f K"(x)for all n, and so P , f ( x ) =f "(x). Conversely if K is compact we can find a decreasing sequence {C,} of open sets containing Ksuch that P,, f 1f '. Indeed f = inf P, f as G ranges over the open neighborhoods of K and so we have merely to choose a sequence {C,} which is ultimately contained in any such G . Now if P , f =f a.e. and P G , f l f', then P," f 1P, f a.e. Consequently if, for every cornpact K, P, f = f a.e. then f is admissible. This proves assertion (i) in (2.15).
,
,
278
VI. DUAL PROCESSES AND POTENTIAL THEORY
Coming to assertion (ii),
let K be a compact set and let G, =
(x: p(x, X) < l/n}, where p(x, K ) is the distance from x to K . It is obvious
that P H , f l PKf a.e. for some sequence {H,} of open sets containing K if and only if PGnf 1PKf a.e. Since { P , , f } decreases, this will occur if and only if
( h , pGn up) 1( h , PK u p )
(2.16)
whenever h E bb* is positive with compact support. Now for any set J 5
( h , PJ U p ) = B” J h(XJ dt TJ
and this makes it obvious that (2.16) holds for all relevant h if and only if FGnA p t fK A p almost surely p”. Now this will happen if and only if p(K - ‘ K ) = 0, and so U p is admissible if and only if p(K - ‘ K ) = 0 whenever Kis compact. Finally when f = Up with p E M then, by (2.12),fB = U V ,a.e. and iff is also admissible then U V ,= P,f a.e., and hence everywhere. This completes the proof. It is easy to see that p(K - ‘ K ) = 0 for all compact subsets K of E if and only if p ( B ) = 0 for every semipolar set B. REMARK.
Exercises (2.17) Let A c R4 be the set of points, all of whose coordinates are rational, and let v be a probability measure carried by A and such that v({q})= v({ -4)) > 0 for all q E A . Let g,(x, y ) be the transition density for Brownian motion in R4 (see (2.17) of Chapter I). Define vo to be unit mass at 0 E R4 and v k + l = v * vk for k 2 0. Let pf = e-‘ (tk/k!)vk. (a) Show that {p,; t 2 0) is a convolution semigroup of probability measures on R4. (b) Show that pt(x, y ) = g,(x - q, y ) p,(dq) defines a transition density with respect to Lebesgue measure for a Hunt process X in R4. (c) Show that p,(x, y ) =p,(y, x) so that this process is in duality with itself relative to Lebesgue measure. Show that conditions (2.1) and (2.2) are satisfied. [Hint: show that p f ( x ,y ) I t - 2 . ] (d) Show that if x - y E A then p,(x, y ) 2 k t - ’ (k > 0) for small t and conclude that u(x, y ) = 00 whenever x - y E A . (e) Observe that v E M and use (d) to show that PGU v(x) = 00 for all x E A whenever G is open and nonempty. Conclude that the exceptional set in the discussion preceding (2.6) cannot in general be eliminated.
Cr=o
5
(2.18)
Let B be a thin Bore1 set. Show that T , is accessible on { T , < c}.
3.
POTENTIALS OF ADDITIVE FUNCTIONALS
279
[Hint: given an initial measure p with p ( B ) = 0 let {G,} be a decreasing sequence of neighborhoods of B such that T,. A 5 T TB A a.e. P". Use (2.9) to conclude that Pp(T,, = TB < a)= 0 for all n. For the general case use the first part to conclude that if K is a compact subset of BEthen TB is accessible on {TK < TB < C}. Finally use the fact that there is an increasing sequence {K,} of compact subsets of B' such that Px(TKn10) = 1 for all x . ] (2.19) Let X be the process (2, TAc)where 2 is Brownian motion in R" and A = { x : 1x1 < I } (see (1.27)). Show that in this case the decomposition (2.11)
is the classical Riesz decomposition of a positive superharmonic function into the sum of a harmonic function and the potential of a measure.
3. Potentials of Additive Functionals
In this section we are going to study the relationship between the potentials of measures in M and the potentials of additive functionals. In particular we will characterize those measures p E M having the property that Up is either a natural potential or a regular potential as defined in Chapter IV. The assumptions in this section are the same as those in Section 2. Thus X and 2 are standard processes in duality relative to <,and (2.1) and (2.2) are assumed to hold. As in Section 2 we will state and prove our results in the case a = 0. However these results are valid also for positive a under (2.1) alone. (3.1) THEOREM.Let p be a measure in M and let A be an NAF of X such we have U , f = U (fp). that Up = u,. Then for any f E
Proof. Recall that u,(x) = U , l ( x ) = E x A ( a ) . Also
Let x be a fixed point in E at which Up(x) < co. Since both sides of the desired equality are excessive functions it will suffice to establish the desired equality for such x . On the other hand, with such an x fixed, U , f ( x ) and U ( f p ) ( x )are finite measures in f and so it will be enough to establish the equality when,f= I, where G is an open set with compact closure in E whose boundary has p measure zero. Let g = I E - , and let cp = U , f and $ = UAg. Since cp and $ are dominated by uA = Up it follows from (2.10) that cp = Uv and $ = U1 where v and 1are measures in M. Now
280
VI. DUAL PROCESSES AND POTENTIAL THEORY
and, since f ( X r )vanishes on [0,TG),
d x )- pG d x )= E X { f ( X T ~ ) CAT^) - A(TG-111. If A has a jump at TG < 00, then since A is natural the path X i must be continuous at T G ,and consequently X(T,) is in G'. Therefore cp = PGcp and so Uv = PGUv = UPGv. But this implies that v = PGv,and hence v is carried by G. Coming to $, let H be an open neighborhood of the closed set E - G. Then P H $(X)
J " . m d x dAt i)
= EX
9
and since g(Xr) vanishes on [0, TE-G)3 [0, TH) one obtains, as above, II/ = PH$.As a result A = P,A and so A is carried by R. But H was an arbitrary neighborhood of E - G and hence E - G must carry A. Next observe that up = U A = u A ( f
+ 9) = + $ = u(v + A),
and so p = v + A. But v is carried by G,A is carried by E - G , and the boundary of G has p measure zero. Plainly this yields v =f p and A = gp. Therefore U,f = cp = Uv = Uup), and so Theorem 3.1 is established. We are next going to characterize those measures p E M such that Up is a natural potential. In the course of the discussion we will need the following fact which is of considerable interest in itself.
(3.2) PROPOSITION.Let X be a standard process-no assumptions about the existence of a dual process or even the existence of a reference measure are made. Let {h}be a decreasing sequence in 9'"(ct 2 0) with limit f and suppose that f = 0 except on a set of potential zero. Then f = 0 except on a polar set. Proof. The argument is the same for all ct and so for convenience we assume that ct = 0. If T is any {Fr} stopping time, then PTfns f , , and so by the bounded convergence theorem P , f ( x ) If ( x ) at any x for which f ( x ) < 00. In particular PTf =f on {f = O } . Let 8 > 0 and let K be a compact subset of {f 2 E } . Then iff(x) = 0 we have
0 = f ( x ) 2 P , f ( X ) r & P X ( T K
and so P"(T, < co)= O . But {f>0) is of potential zero and since the excessive function x P"(TK < co) vanishes except on this set it follows that, for all x, P"(TK < 00) = 0. Therefore K is polar and this implies that {f> 0) is polar proving (3.2).
3.
POTENTIALS OF ADDITIVE FUNCTIONALS
281
(3.3) REMARK. If X has a reference measure 5, then Proposition 3.2 may be restated as follows: a decreasing sequence {f,} in 9“which approaches zero a.e. must, in fact, approach zero except on a polar set. THEOREM.Let p be a measure in M. Then necessary and sufficient conditions that U p be a natural potential are (i) U p is everywhere finite and (ii) p charges no polar set. (3.4)
Proof. If Up is a natural potential, then U p is finite by Definition 4.17 of Chapter IV. On the other hand if p charges some polar set then pmust charge a compact polar set K. Let v be the restriction of p to K.Then Uv s Up and Uv is a natural potential. Let {G,,} be a decreasing sequence of open sets containing Ksuch that n G n = n G n = K. Since v is supported by K,P,,UV = U~,,V= Uv. But if x 4 K then lim TGn2 5 almost surely Px,and so by the definition of natural potential P,,UV(X)+ 0. Consequently Uv = 0 except possibly on the polar set K and so Uv vanishes. Therefore v = 0 and so p charges no polar set. This argument used only the definition of natural potential If one makes use of the theory developed in Chapter IVYthen we know that there exists an NAF, A , of X such that U p = u, . If D is a polar Bore1 set, then by (3.1)
1
OD
u(lDp)(x)
= uA
ID(x)
= EX
lD(xt)
dAt
Y
0
and this last expression is clearly zero. This gives an alternate proof of the fact that p does not charge polar sets. Next suppose that p does not charge polar sets and that Up is finite. Given x let {T,,} be an increasing sequence of stopping times such that lim T, 2 5 almost surely Px.Since z --t u(z, y ) is excessive, it follows that for each fixed y and z, PTnu(z, y) decreases with n. I f f € bb*, and vanishes off a compact subset of E, then PT,
f ( X t ) dt -+ 0
U f ( x )= Ex J-:n
as n + co. Therefore PTnu(x, .) decreases to zero a.e. But for each n, y + P,,u(x, y ) is coexcessive and so, by (3.3), PTnu(x, decreases to zero except on a polar set. Since p doesn’t charge polar sets and Up is finite, we see that PTnUp(x)= PTnu(x,y ) p(dy) decreases to zero. Consequently U p is a natural potential and Theorem 3.4 is proved. a)
(3.5) THEOREM.Let p be a measure in M. Then necessary and sufficient conditions that U p be a regular potential are that (i) U p is everywhere finite and (ii) p charges no semipolar set.
282
VI. DUAL PROCESSES AND POTENTIAL THEORY
Proof. If U p is a regular potential, then U p is finite and there exists, by Theorem 3.13 of Chapter IV, a CAF, A of X such that U p = u,. Any semipolar set is contained in a Bore1 semipolar set D, and by (3.1) we have
P)(x) = u, ID(x) = E X
J"
m
zD(xr)
0
d ~* t
Since D is semipolar X , is in D for at most countably many values o f t almost surely, and since A , is continuous the measure dA, induced by A , on [0, co] vanishes on countable sets. Therefore the last displayed expression is zero, and consequently p charges no semipolar set. Conversely suppose U p is finite and p charges no semipolar set. Let u = Up.If K is a compact subset of E and p K is the restriction of p to K, then since K - ' K is semipolar it follows that PK UpK = U P Kp K = U p K .NOWlet A be a measure equivalent to 5 such that J u dA c 03, and let t j be the measure given by q(r)= j,-A0 dp for r E b*. Then q ( E ) = u dA < 00. Since P,u t u as t + 0 we can find, using Egorov's theorem, an increasing sequence {K,} of compact subsets of E such that (i) u is bounded on each K, , (ii) P,u + u uniformly on each K,, as t + 0, and (iii) q(E - K,) 10 as n + co. For each n let p,,be the restriction of p to K, and v, be the restriction of p to E - K, . Then Uv, decreases with n and since j Uv,dA = q(E - K,) it follows that Uv, decreases to zero a.e. (A). But u = Up,+ Uv, and hence Up,,increases to u a.e. (A), which implies that lim Up, = u everywhere since both u and limn Up, are excessive. Also - Up, = Up; where p; is the restriction of p to K,, - K, . Therefore the sequence {Up,} is increasing in the strong order; that is, Up,+ - Up, is excessive for each n. We next claim that, for each n, u, = Up, is uniformly excessive. We already have observed that P K p , = u, for each n. Since u, < u and uis bounded on K, this implies that each u, is bounded. Now fix n. Then given E > 0 there exists a 6 > 0 such that u IP,u + E on K, provided t c 6. Hence on K, we have for t < 6
+ P,uv,I u, + uv,= u IP,u + E = P, u, + P,uv,+ E. and so u, IP,u, + E on K,, . But P, u, + E is excessive and so u,
u, = PK,u, IPK,(P, u,
+
8)
p , u,
+
everywhere provided t < 6. Consequently u, is uniformly excessive. If we define uo = 0, then plainly f;, = u, - u , , - ~is uniformly excessive for each n and u = S,. By (3.4), u is a natural potential and so, for each n, P,f,I P,u + 0 as t -,co. Therefore according to Theorem 3.16 of Chapter IV there exists for each n a CAF, A", of X such that f,= u,. . If we define A = A", then E x ( A m )= u(x) < co and so A: converges uniformly on [0, co]
1
1
1
4.
CAPACITY AND RELATED TOPICS
283
almost surely. As a result A is a CAF of X whose potential is u = Up, and consequently U p is a regular potential. This establishes Theorem 3.5.
REMARK.The reader should note that the proof of the second half of Theorem 3.5 is essentially the same as the proof of Theorem 2.1. of Chapter V. The following decomposition complements Theorems 3.4 and 3.5.
(3.6) PROPOSITION.Let p be a measure in M. Then p can be written uniquely in the form p = p , + p t + p, where p l is carried by a polar (Borel) set, /it is carried by a semipolar (Borel) set but charges no polar set, and p 3 charges no semipolar set. In particular p,, p t , and p3 are carried by disjoint Borel sets. Proof. Recall that any polar set is contained in a polar Borel set. Let p B denote the restriction of p to B whenever B is a Borel set. Define
9 = { U p B :B a polar Borel set}. Clearly 9 is filtering upward and so by Theorem 1.5 of Chapter V we can find an increasing sequence { U p B , } in 9 such that sup UpB,= sup 9.Let B = B,, and p l = p B . Plainly B is a polar Borel set and p - p 1 charges no polar set. One next constructs p z in a similar manner starting from the measure 11 - i l l , and then p 3 = p - pl - p 2 . Moreover it is evident from this construction that this decomposition is unique. (The uniqueness may also be established very easily by a direct argument.) Thus Proposition 3.6 is proved.
u
An immediate consequence of (3.6) and (2.11) is the following decomposition which should be compared with Theorem 5.14 of Chapter IV. A locally integrable excessive function f can be written uniquely in the form f = Up1 Upz Up, h where p,,p z , and p3 are as in Proposition 3.6 and h has the property that P,h = h whenever D is the complement of a compact subset of E.
+
+
+
4. Capacity and Related Topics
In this section we will develop a theory of capacity which in the classical case reduces to the ordinary Newtonian capacity. We will also explore the situation in which every semipolar set is polar-this additional hypothesis being necessary if one wishes to obtain some of the strongest results of classical potential theory. We will assume throughout this section that X
284
VI. DUAL PROCESSES AND POTENTIAL THEORY
and 8 are standard processes in duality relative to (, that (2.1) and (2.2) hold, and, in addition, that X satisfies the same smoothness hypotheses as 8. To be more explicit we assume: (4.1) If M > 0 and f~ bb* vanishes off a compact subset of E, then V y ( x )= ff(x, y ) f ( y )dy is continuous on E. (4.2)
Withfas in (4.l), U''(X) = u(x, y) f ( y ) dy is continuous on E.
We separate (4.1) and (4.2) just as we did (2.1) and (2.2) because the results which we will establish hold for strictly positive a just under (2.1) and (4.1). Again, however, we will formulate and prove these results mainly in the case M = 0. Under these assumptions there is a complete duality between X and 8 and hence the results of Sections 2 and 3 apply to 8 as well as X. In particular the elements of 9'"(a 2 0) are lower semicontinuous (only (4.1) is needed for this fact). Also (4.1) implies that condition (4.1) of Chapter 1V holds. To see this suppose that {T,,} is an increasing sequence of stopping times with limit T. Let a > 0 and f E C,, f 2 0 be fixed. Then there exists a nonnegative random variable L E 9 such that e-aTnU y ( X T , )-+ eWaTL almost surely. Clearly L 2 V a f ( X T )almost surely. On the other hand m EX{
e - a r j ( x , ) dt
e - a T n u ~ ~ ( x T J )= E X ! T. 4
Ex JTme-"'f(X,) dt
= EX{ePuT uaj(xT)),
and therefore limn U a f ( X T n= ) U a f ( X,) almost surely on { T < a}.By (4.32) of Chapter IV this implies that (4.1) of Chapter IV holds. Consequently all of the results of Sections 4 and 5 of Chapter JV are applicable to X and 8. As in Section 1 of Chapter V we let eB denote the lower envelope of the family of all excessive functions which dominate 1 on B, where B is an arbitrary subset of E. In the present situation eB is also the lower envelope of the family of all excessive functions which dominate 1 on some (variable) neighborhood of B. To see this let ez denote this second infimum for the e ; . On the other hand i f f € Y and f 2 1 on B, then moment. Clearly eB I {f > 1 - E } is an open set containing B since f is lower semicontinuous. Thus for any E with 0 < E < 1 the excessive function (1 - E ) - tfdominates 1 on a neighborhood of B and so (1 - E)ez sf.Letting E 1 0 we obtain ef I f , and hence ei I e B . Thus e B = ez in the present situation. As a result we may apply either Theorem 6.12 of Chapter I11 or Theorem 2.12 to e , when B is a Bore1 set to obtain the following statement.
4.
285
CAPACITY AND RELATED TOPICS
(4.3) PROPOSITION. Let B be a Borel set. Then eB= P B1 except perhaps on B - B', and e , agrees except perhaps on B - B' with the supremum of all potentials U p where U p I 1 and p has compact support contained in B. Since this supremum is excessive it agrees everywhere with P , 1. Finally if B has compact closure P,1 = Un, where n, is a finite measure carried by B u 'B. An immediate, but important, consequence of (4.3) is the fact that if Up is bounded then p charges no polar set. Indeed if v is the restriction of p to a compact polar set K and v # 0, then r] = v/ll Uvll is a measure on K with Ur] I 1 and Uq # 0. Consequently P,1 2 Ur] and this is a contradiction since K is polar. This also shows that a set B is polar if and only if Up is unbounded whenever p is a finite nonzero measure with compact support contained in B. Let B be a Borel set with compact closure in E. Then by (4.3) or (2.8) we where n i have, for each cc 2 0, (D; = Pt1 = Uan: and 6;= I& = and fii are finite measures carried by B u ' B and B u B', respectively. The measure n i is called the u-capacitary measure of B, and fiithe u-cocapacitary measure of B. As usual when u = 0 we will drop it entirely from our notation and terminology. If A is a Borel set with compact closure in E and if B c A' then Un, = 1 on B u B' and so P , 1 = P , Un, . Hence n, = p , n, ,and similarly 2 , = AAPB if B c 'A. On the other hand if B c 'A and B c A then P A P B= P , by (2.9) and so nB = P An,. Similarly IZB = WBPAif B c A and B c A'. All of these relationships are obviously valid for positive u also.
A:oa
(4.4) PROPOSITION. If B is a Borel set with compact closure in E, then the measures n; and fi; have the same mass. Proof. Let G be a neighborhood of B having compact closure in E. Then using the above remarks we have
which proves (4.4) since the same computation is valid for u > 0. (4.5) DEFINITION. The common value n i ( E ) = W:(E) is called the (natural) cc-capacity of B. We denote it by Ca(B).
286
VI. DUAL PROCESSES AND POTENTIAL THEORY
In the following discussion we assume that a = 0. However the results obviously are valid for arbitrary a 2 0. Let B denote a Borel set with compact closure in E. If K is a compact subset of B and p is carried by K, then p ( K ) I C ( B )provided Up 5 1. Indeed by (4.3), Up I Un, and so if {f,}is a sequence of bounded nonnegative functions such that {f,o} increases to 1 we have p ( K ) = lim l ( j n0) d p = lim(fn, UP> n
n
I lim(fn, Un,) n
= lim n
l(j,0 )dn,
= C(@.
On the other hand by (1.20) of Chapter V we can find an increasing sequence {K,,} of compact subsets of B such that UnK,= PKn1 t P B1 = UnB.It then follows from (2.6) that {nKn)converges weakly to n B and consequently C(Kn)t C ( B ) . Thus we have shown that C ( B ) = sup p ( E ) where the supremum is over all measures p with support in B such that Up I 1 (or dually p o I 1). In particular there is an increasing sequence {K,,} of compact subsets of B such that C(Kn)t C ( B ) .It is also clear that C ( B ) = 0 if and only if B is polar. So far we have defined the capacity, C ( B ) ,of a set B only when B has compact closure in E. If B is any Borel subset of E we define the capacity, C ( B ) ,of B to be the supremum of C ( K )as K varies over all compact subsets of B. This definition agrees with the previous one if B has compact closure. We leave it to the reader as Exercise 4.14 to check that K - r C ( K ) defines a Choquet capacity on the compact subsets of E. Obviously its extension to d is just C. Moreover C is countably subadditive. See Exercise 4.14. Since a set is polar if and only if every compact subset of it is polar, it remains true that a Borel set B is polar if and only if C ( B ) = 0. Suppose now that B E d and that there exists a measure ~l, such that @, = UK,-here B need not have compact closure in E. Let us show that it is still the case that C ( B ) is the total mass of ng and that n, is carried by B u 'B. As in the preceding paragraph we can find an increasing sequence {K.} of compact subsets of B such that UnK,t Un,,and, in addition, we may suppose that C(Kn)T C ( B ) . If {f,}is a sequence of bounded nonnegative functions such that {f,0 } increases to one, then C(B) = lim c ( K ,= ) lim lim J(jm 0) dnK, n
= lim m
n
m
lim(fm, UnKn) n
= lim(jm, Un,) = n,(E) m
where the interchange of limits is justified since
(fmo) dnK,= ( f , , UnKn>
4.
287
CAPACITY AND RELATED TOPICS
is increasing in both m and n. Let v' be the restriction of 7~~ to B and v = IT^ -v'. Let {C,} be a decreasing sequence of open sets containing B such A [) T (T, A almost surely P". Then that (FGn
4)
@B
= PG,@B =
uv'+ U f i G , v,
and, as in the proof of (2.12), Upcnv = P,,Uv decreases to P, Uv = UP,v a.e. Consequently 0,= U(v' P,v) and so IT^ = v' P,v. Therefore nB is concentrated on B u 'B. Whenever IT^ exists it will be called the capacitary measure of B. If both IT^ and i2, exist, then they must have the same massnamely, C ( B ) . We are now going to develop some of the implications of the following hypothesis :
+
+
(4.6) Every semipolar set is polar Obviously (4.6) is equivalent to the statement that if a compact set K is not polar, then some point of K must be regular for K . This last statement is hypothesis (H) of Hunt's memoir (Hunt [4], p. 193). Note that (4.6) is not satisfied by the process "uniform motion to the right in R," so that in one sense it is a strong restriction. On the other hand we shall see that (4.6) is satisfied by many of the familiar processes, in particular by Brownian motion and by the symmetric stable processes. Obviously X satisfies condition (4.6) if and only if for some a > 0 the a-subprocess ((3.17) of Chapter 111) satisfies (4.6). We will state explicitly when (4.6) is being assumed to hold. Let us begin with the following characterization of the capacitary measure nK of a compact set K .
(4.7) PROPOSITION. Assume (4.6). If K is a compact subset of E, then IT, is the unique measure carried by K and having the following two properties: (i) UIT,I 1 and (ii) { UIT,< 1 } n K is polar. = P"(T, < CO) and @, = 1 on K', it is clear that, under (4.6), IT, has properties (i) and (ii). Now let p be any measure with support in K and satisfying (i) and (ii). If x $ K then P,(x, charges no polar set and so, for such an x, PKUp(x)= PKl(x)= UIT,(X).Also U p is bounded and so p charges no polar set. But under (4.6), K - ' K is polar and so PKUp = UP,^ = up. Consequently = UIT, except possibly on a polar set and hence p = nK proving (4.7).
Proof. Since U ~ , ( X=) @,(X)
a)
We are now going to formulate some conditions which are equivalent to (4.6). A locally integrable a-excessive function f is said to be regular provided that almost surely the mapping t - f ( X , ) is continuous wherever
288
VI. DUAL PROCESSES AND POTENTIAL THEORY
t + X , is continuous on [0, 0.If f is finite this is clearly equivalent to the definition of regularity given in (5.1) of Chapter IV. The requirement that f is everywhere finite is not appropriate here-the local integrability off is the appropriate finiteness condition. Of course, since f is locally integrable, {f= co} is polar and so almost surely t + f ( X , ) is finite on (0, 001. In particular the proof of (5.9) of Chapter IV remains valid when f is merely assumed to be locally integrable. As a result, a locally integrable f i n 9'" is regular if and only if whenever {T,,} is an increasing sequence of stopping times with limit T we have f ( X , J +f(X,) almost surely on {T < 0.We continue to state our results in the case a = 0, but emphasize again that they carry over to positive a just under (2.1) and (4.1).
(4.8) PROPOSITION. A necessary and sufficient condition that all locally integrable excessive functions are regular is that Up be regular whenever p has compact support and Up is bounded. Proof. Only one of the implications requires proof. Suppose f ~ 5 is@ locally integrable and that, for some x E E and some increasing sequence {T,,} of stopping times with limit T, P " { f ( X T nt)t f ( X , ) ; T < > 0. Then obviously there is a positive number k and an open set G with compact closure in E such that P * { ( ~ kA) ( X T n ) * ( f ~k)(X,); T < TGc}> 0. Now P G ( f A k ) = (fA k ) on G and P,(f A k ) is the bounded potential Up of a measure p carried on C. Hence P"{ U p ( X T n ) t ,V p ( X , ) ; T < [} > 0, contrary to the hypothesis that every bounded potential of a measure with compact support is regular.
c}
The next theorem, which is due to Hunt, is one of the most important results in this section.
(4.9) THEOREM. All locally integrable excessive functions are regular if and only if (4.6) holds. Proof. Suppose (4.6) holds and Up is a bounded potential. According to (3.6), p = pl p z p 3 where p1 p z is carried by a semipolar, and hence by a polar, set and p 3 charges no semipolar set. Hence p, + p z = 0 because Up is bounded. Consequently p charges no semipolar set and so, by (3.5), Up is a regular potential. It follows from (5.13) of Chapter IV then that Up is regular and so one half of (4.9) follows from (4.8). In proving the converse we first observe that if all locally integrable excessive functions are regular, then all locally integrable a-excessive functions are regular for each a > 0. Indeed let p be a measure with compact support such that U"p is bounded. Clearly p is finite since p is in M a and so p is in M
+ +
+
4.
CAPACITY AND RELATED TOPICS
289
also. Now Up = U"p + uUU"p and, since p is in M, Up and UU"p are locally integrable. This and (4.8) plainly yield the assertion in the first sentence of this paragraph. Now suppose there is a compact subset K of E which is thin but not polar. Let u > 0 be fixed and let @i(x) = EX(e-uTK). Then Qg is everywhere less than one and @; is a regular u-excessive function. Since K is not polar 0: is not identically 0, and hence, since C(K) = 0, there is a point x 4 K with @i(x) > 0. Let {G,,}be a decreasing sequence of open sets containing K with nc,, = K. Let T,,= T G n .Then Proposition 2.9 and the fact that K is thin imply that almost surely T,, < TK on {TK < 00). Clearly lim T,, = TK almost surely P" on {TK < co }, and so, almost surely P", {TK < co} c {T,, < TK for all n, lim T,, = TK< co}. But by (4.14) of Chapter IV, @ i ( X T n ) - )1 almost surely on {T,, < TK for all n, lim T,, = TK < co}. Consequently the regularity of @iand the fact that @ i ( X T K )< 1 imply that P"(TK< 00) = 0, proving (4.9). It now follows easily from Theorems 3.4 and 3.5 and the results of Chapter IV that (4.6) holds if and only if every NAF with a finite potential (or a finite u-potential for some c1 > 0) is actually continuous. A slightly more complicated argument shows that under (4.6) every NAF is continuous. See Exercise 4.17. In particular if X has continuous paths and (4.6) holds, then every (nonnegative, right continuous, continuous at I) additive functional of X is continuous. The most important example is, of course, Brownian motion. We are next going to give some conditions which imply (4.6). In particular Proposition 4.10 will show that (4.6) is satisfied whenever u"(x, y ) = u"(y, x) for all x, y in E and u > 0, that is, whenever X and 8 are equivalent.
(4.10) PROPOSITION. Condition (4.6) holds if and only if, for every finely closed B E 8,B and the cofine closure of B differ by a polar set. Proof. According to (1.25) the fine and cofine closures of a set in d differ by a semipolar set, so that if (4.6) holds then this difference is a polar set. Coming to the converse let K be a compact thin set and with c( > 0 fixed; let @;(x) = EX{e-uTKjand B,, = {@i 2 1 - l / n } . Then each I?,,is finely closed and r)B,iS empty. The last part of the proof of (4.9) shows that limttTKO i ( X t ) = 1 almost surely P" on {TK< 0 0 } for each x not in K. Therefore if R,, = TBn and x E K', then R,, < TK almost surely P" on {TK< I}. Consequently
P i , Oi(x)
+ TK OR,,)]; TK< 51 < c} = @i(x>,
= EX{exp[- u(R,
- E X { e - a T K ; TK
0
290
VI. DUAL PROCESSES AND POTENTIAL THEORY
for x E K', and, since K is thin, Pin @: = @: everywhere. Now (&= U a n i and so = finni . Thus n; is carried by B, v 'B, . But B,, v 'B, is the cofine closure of B, and so, by hypothesis, it differs from B, by a polar set. Therefore n; must be carried by B, for each n since n; can not charge a polar set. But OB, is empty and so n: = 0. Consequently 0: = 0 and hence K is polar. Of course when X and 2 are equivalent then the fine and cofine topologies coincide so the hypothesis of (4.10) is certainly satisfied in this case. An interesting situation in which (4.6)obviously holds is when there are no semipolar sets except, of course, the empty set. Plainly this is equivalent to the condition that, for each x in E, x is regular for {x}. (Recall from Theorem 3.13 of Chapter V that this is also the condition that the local time for X at x exists.) PROPOSITION.Suppose that for some a > 0 the function x -,u"(x, x,) is bounded and is continuous at x = x, . Then x, is regular for { x , } . In particular if this condition holds for all xo in E then (4.6)holds. (4.11)
Proof. If p is unit mass at x o , then Uap(x) = ff(x, x,) and, since this is bounded by assumption, {x,} can not be polar. Let T be the hitting time of {x,} and suppose that x, is not regular for {x,}. Then since { x , } is not polar there exists a y # x o such that Py(T< 00) > 0. Applying the argument in the third paragraph of the proof of Theorem 4.9 to the compact set K = { x , } , we obtain an increasing sequence {T,} of hitting times such that almost surely Py,{T,} increases to T strictly from below on { T < a},and @ i ( X T n )-, 1 as n-, 00 on { T < a}.But @: = Van: and, since K = { x , } , n; = C a ( K )eXo. Therefore @:(x) = C " ( K )ua(x, x,) which is continuous at x = x, , and so @:(XTm)-,@:(XT) = @:(xo) almost surely P y on {T < 00). This is a contradiction since cD",x,) < I and hence Proposition 4.1 1 is proved. The reader can easily check that the condition in (4.11) is satisfied for all x, by any stable process in R of index a > 1. More generally. using the notation in (1.22), if X is any standard process in R with station iry independent increments such that [a + I/I~(X)]-' dx < 00 for some a > 0, then the condition in (4.1 1) is satisfied at all points in R. Thus (4.6) holds for these processes and, in addition, the local time Lx exists at all x. See the exercises for further information about these local times. Under slightly stronger hypotheses, condition (4.6) is equivalent to two other familiar statements from classical potential theory. We are now going to formulate and prove these results following the development in Hunt [4, Sec. 201.
4.
291
CAPACITY AND RELATED TOPICS
(4.12) THEOREM.Assume, in addition to (2.1), (2.2), (4.1), and (4.2), that U a f is continuous on E whenever ci > 0 and f~ bb. Then (4.6) is equivalent to each of the following statements: (i) Let p be a finite measure with compact support K. Then Up is continuous if it is bounded and if its restriction to K is continuous. (ii) If f i s a locally integrable excessive function and E > 0, then there is an open set G with C ( G ) < E such that the restriction off to E - G is finite and continuous.
ProoJ First we show that (4.6) implies (i). Since U p is bounded, p does not charge polar sets and so, under (4.6), p ( K - ' K ) = 0. Therefore for any ci 2 0, PI;U"p = Ua&p = U a p and consequently U a p is bounded on E by its maximum on K. Now if ci > 0 then U p = U a p + ciUaUp. But U p is bounded so U a U p is continuous by hypothesis, and hence the restriction of U"p to K is continuous. Clearly U a pdecreases as ci + 00 and, since ciUaUp + U p as ci+ co, it follows that U a p decreases to iero as ci 00. By Dini's theorem U a p decreases to zero uniformly on K and hence on E since 11 V p l l = supxEKUap(x). Therefore ciU"Up + U p as CY + 00 uniformly on E, and since UaUp is continuous this yields Statement (i). Next (i) implies (ii). Suppose first of all that f is the bounded potential U p of a measure p with compact support. Then p is a finite measure which vanishes on polar sets and if v is the restriction of p to a compact set K then, by (i), Uv is continuous if its restriction to K is continuous. We will need a bound for the capacity of the (open) set B on which the potential Uq of a measure in M exceeds a positive constant p . If D is a compact subset of B, then -f
(4.13)
p C(D) I/ii,(dx) U ~ ( X=)16. dq I q ( E )
and so C ( B )Iq(E)/p. Obviously this bound is useful only when q is a finite measure. Given 6 > 0 we apply Lusin's theorem to the finite measure p to obtain an open set B such that p ( B ) < 6 and the restriction of U p to E - B is continuous. Let v , be the restriction of p to E - Band p, the restriction of p to B. The support of v, is a compact subset of E - B and the restriction of Uv,to E - B is continuous since Uv,and U p , are lower semicontinuous on E and their sum U p = Uv, U p , has a continuous restriction to E - B. Therefore Uv,is continuous on E. The set H on which U p , exceeds p is open - ~p,, = 2-" and apply the above and by (4.13), C ( H ) < 6 / p . Let 6, = ~ 4 and is continuous and U p - Uv,,s argument. Then U p = Uv,, Up,, where Uv,, 2-" on E - G, where G, is open and C(G,,) < ~ 2 - " If . G = U G n ,then G is open, C(G) < E , and Uv, converges to U p uniformly on E - G . Thus (ii) is proved when f is the bounded potential of a measure with compact support.
+
+
292
VI. DUAL PROCESSES AND POTENTIAL THEORY
Next suppose that f is bounded. If D is an open set with compact closure, then P, f is the bounded potential of a finite measure with compact support and P, f = f on D. Consequently one can find an open set G of arbitrarily small capacity such that the restriction off to D - G is continuous. Clearly this yields (ii) for boundedf. Iff is locally integrable then {f= co} is polar and hence there exists an open set Go containing {f= oo} such that C ( C o )< ~ / 2 Let . f,= f n.~ By what has already been proved there exists for each n 2 1 an open set G, such that the restriction off, to E - G, is continuous and C(C,) < e2-(”+’).Then G = uF=oG, is open, C(G) < E, and it is easy to see that the restriction offto E - G is finite and continuous. Therefore (i) implies (ii). Finally we will show that (ii) implies (4.6) by using (4.8) and (4.9). Let f be a bounded excessive function and let v = gt where g is a bounded nonnegative function vanishing outside a compact set. The potential g 8 is bounded, say by M. If K is a compact subset of E, then
=j g o
dn, I M C ( K ) .
Using (1.20) of Chapter V it now follows that, for any Bore1 set B, jg(x) (DB(x)dx I M C ( B ) . Let G be the set mentioned in (ii). Since t - f ( X , ) is continuous wherever t + X I is continuous on (0, T,) and since
Pv[TG< 51 = jg(x) QG(x) d x I A4 C ( G ) I M E ,
it follows, E being arbitrary, that, almost surely Pv, r - f ( X , ) is continuous wherever t - t A’, is continuous on (0, 5). Given 6 > 0 let T be the smallest value of r such that I f ( X , ) - f ( X l ) - l > 6 and X , = X t - . Then Tis a terminal time (see (4.16) of Chapter IV) and by what was proved above P’(T < = 0 whenever v = gt with g as above. Therefore Px(T < 4‘) = 0 a.e., and hence everywhere since x + P”(T < 5) is excessive. But 6 is arbitrary and so f must be regular. This completes the proof of Theorem 4.12.
c)
REMARK. If X is such that 1) Up11 = supxsKU p(x) whenever p is a measure with support K (by (1.26ii) this is the case whenever the fine and cofine topologies coincide), then one may replace the second sentence in (4.12i) by “Then Up is continuous if its restriction to K is finite and continuous.”
Exercises
In the exercises Conditions (2. l), (2.2), (4. I), and (4.2) are assumed to hold unless explicitly stated otherwise.
4.
CAPACITY AND RELATED TOPICS
293
(a) Show that the set function cp defined on compact subsets of E by cp(K) = n K ( E )is a Choquet capacity. (b) Let cp be any Choquet capacity on the compact subsets of E with cp(0)= 0 and use cp also to denote the extension of this capacity to d. Show that cp is countably subadditive on 8.[Hint: the first step is to show that cp(A, u A , ) _< cp(A,) cp(A,) if A , and A , are open. For this, show first that if K is a compact subset of A , u A , then K = K , u K , where Kiis a compact subset of Ai (i = 1,2).] (4.14)
+
Let u 2 0 and let B be a Borel set for which ni exists. If p > u show that "8, exists and that ni = ni pi^^ where v E = (p - u)Oi(. [Hint: use the argument of (4.9) of Chapter V.]
(4.15)
+
Let B be a Borel set with compact closure. Show that for every Borel set D,n:(D) is a continuous nondecreasing function of a-in particular C"(B)varies continuously with a. [Hint: use (4.15) and the fact that ni is a finite measure to establish the assertion in the interval (0,GO). Next use the fact that Un8, = O8,+ U(j?Og)to obtain the continuity at 0.1 (4.16)
(4.17) Show that under (4.6) every NAF of X is continuous. [Hint: given E > 0 let T be the smallest value of t such that A , - A , - 2 E and X , = X , - .
Use (4.10) of Chapter IV to show that T is accessible on {T < 5) and hence by (4.38) of Chapter IV, T is accessible since T = co on ( T 25). Let cp(x) = E"(e-=). Show that cp E 9" and so rp = U ' p + h by (2.11). Then adapt the argument of (4.10) to show that p is carried by a polar set and hence p = 0. Now if K is a compact subset of E, P&cp = q. Show finally that this implies that T 2 l.] Suppose that xo is regular for {xo} and let L be the local time at xo . Let K = {xo} and let T = TK. (a) Using the notation of (3.15) of Chapter V show that u"(x) = c(1) ua(x, xo)and that f f ( x o ) = c(l)/c(u) where ~ ( u = ) Ca(K) for a > 0. (b) Use (4.15) to show that in the notation of (3.20) and (3.21) of Chapter V, b = lima+mg(u)/u = ((K)/c(l). Thus the corresponding subordinator Y has no linear term if ( ( K ) = 0. (4.18)
(4.19) Let X be a stable process in R with index u > 1 ; that is, X is a Hunt process in R of the type described in (2.12) of Chapter I with the corresponding $ of (2.13) of Chapter I given by
1
+ ip sgn(x) tan -
where p is a parameter with - 1 I pI 1. As mentioned after (4.1 1) these processes satisfy the condition of (4.11) at all x o E R and so a local time
294
VI. DUAL PROCESSES AND POTENTIAL THEORY
exists at each xo , Let p(t, x ) = (h)-' jZrn e-ixy e-rS(y) dy so thatf(t, x , y ) = p ( t , y - x ) is the transition density for X with respect to Lebesgue measure and ul(x, y ) = e - A rp(t, y - x ) dt is the potential kernel-here we use 1 as the parameter in the potential kernel since LY is reserved for the index of X. Show that if a > 0 then p(at, x ) = a-'/'p(t, a - ' / " x ) for all t and x. Use this for all 1> 0 and all x. to show that d ( x , x ) = p(1,O) r(l - l/a) Using the notation of (4.18) show that ul(xo) = c(l)/c(A) = A('/")-' for each x o . If z ( t ) denotes the inverse of the local time at x o , then use (3.21) of Chapter V to show that ( z ( t ) ,Po)is a stable subordinator of index 1 - ] / a . Compare this with (3.37) of Chapter V. Show that the hypothesis of Theorem 3.30 of Chapter V is satisfied with h(x) = min(1, IxIa-') so that the local time L: is jointly continuous in t and x almost surely. (4.20) Let X satisfy the hypothesis of Theorem 3.30 of Chapter V, and let Lx denote the local time at x . Let h(x) = [C'({x})]-' = d ( x , x). Then show that if D is a Bore1 set, j D h(x)L; dx = yoZD(Xu)du for all t almost surely. Compare with (3.41) of Chapter V.
(4.21) The assumptions and notation are as in (4.20). Let A be a CAF of X with a finite 1-potential. Show that there exists a measure v such that A, = f L: v(dx) for all t almost surely. [Hint: show that u i = U ' p . Then show that v = hp does the job.] Extend this result to arbitrary CAF's of X.
Chapter 0 The reader will find a treatment of measure theory especially designed for probabilists in Neveu [l]. In particular the development given there of martingale theory is more than adequate for our purposes. For considering examples the reader will need a bit of Fourier transform theory. We refer him to Bochner [I] or Feller [ I ] for this. The particular form of Theorem 2.2 given in the text appeared for the first time in Dynkin [I].
Chapter I SECTION 2. We introduce the processes with stationary independent increments to provide examples illustrating various points in the general theory; also they enter into the discussion of local times (Chapter V). Of course these processes have an extensive theory of their own going back to the work of Kolmogorov and Ltvy in the early 1930’s. For an elegant treatment of the measure theoretic properties of these processes we refer the reader to Ito [2]. The result in (2.15) may be used to reduce the study of sample function properties in the general case to the temporally homogeneous case. It has not been used for much else however. The operation of passing from the transition function P,to Q, in (2.16), see also (2.20), is called “subordination” by Bochner [I]. The term “subordinate to” is used in a much different sense in Chapter 111. SECTION 3. A definition of Markov process as general as the one given here is needed if one is to avoid endless additional qualifications in the future. 295
296
NOTES AND COMMENTS
Even so it does not include some processes which arise in a natural manner. See, for example, (3.8) of Chapter 111. A reader new to the subject may find the extensive axiomatization of this section annoying; he may, if he wishes, start with the function space representation described in Section 4, altering this later on to achieve the desired sample function regularities. However in some instances this is not the most natural representation of a process; for example, when discussing time changes in Chapter V. SECTION 5. In some papers (including those of the present authors) 9* is defined as r),(9p)p”, the intersection being over all finite measures p on B A S However this turns out not to be the appropriate completion-it is necessary to complete 9:in the larger a-algebra 9 as we have done in the text. Note that (5.17) doesn’t say much unless 9, is considerably larger than 9; (as it will be in the cases of interest to us). SECTION 6. We consider only those properties of stopping times which are immediately relevant to their use in the theory of Markov processes. For a much more thorough investigation see Chung and Doob [I].
SECTION 8. To the best of our knowledge the first formulation and proof of the strong Markov property for a class of Markov chains appeared in Doob [2]. Our main criterion for the strong Markov property (Theorem 8.1 1) was proved by Hunt [l] for processes with independent increments. The general case was obtained by Blumenthal [l] and by Dynkin and Yushekevich [l] independently. However, the use of the resolvent { Ua} rather than the semigroup {P,} in this condition seems to be due to Ito [I]. The difficult problem of finding more or less necessary conditions for the strong Markov property was first considered by Ray [I]. Two more references to related work are Ray [2] and Knight [l]. SECTION 9. The basic results concerning the absence of oscillatory discontinuities in the sample functions are due to Kinney [l] and Dynkin [3]. See also Blumenthal [l]. The idea of using martingale theory to study the trajectories of a Markov process goes back to Kinney [ I ] and to Doob [4] and [5]. The notion of quasi-left-continuity and the observation that it is the appropriate substitute for the continuity of the paths is due to Hunt. The fact that the conditions in Theorem 9.4 imply quasi-left-continuity appeared in Blumenthal [I]. The fact that Definition 9.2 delineates the appropriate class of processes on which to base a potential theory is due to Hunt. In fact what we call a Hunt process is just a process satisfying Hypothesis (A) of Hunt [2]. Also in [3] Hunt considered standard processes (implicitly if not explicitly). The terminology “standard process” is due to Dynkin. In [2]
CHAPTER I1
297
Hunt points out that the equality Po+ = F,, turns the zero-one law (5.17) into a useful result. Theorem 9.4 on the existence of a Hunt process with a prescribed transition function is not the best result available. This is not important to us because in this book we assume that a standard process is given as the basic data. However in some other work an improvement of (9.4) is essential; see, for example, Ray [2] and Hansen [l]. Most of the familiar standard processes are actually Hunt processes. But even so, in passing to subprocesses the quasi-left-continuity on [0, co) may be lost (see (9.16)) while the quasi-left-continuity on [0, [) is not. Assuming the process X to be standard rather than Hunt leads to an increase in the complexity of some proofs. From a technical point of view the possible failure of (1 1.3) of Chapter I and of (4.2) of Chapter IV for standard processes causes the most difficulty. However, (6.1) of Chapter I11 may be used in place of (1 1.3) of Chapter I in most situations in which (1 1.2) of Chapter I does not suffice. SECTION10. The idea of using Choquet’s capacitability theorem to establish the measurability of hitting times, and, more important, the approximation theorems (10.l6)-(10.l9) is due to Hunt [2].
SECTION12. Sometimes regular step processes are used as approximations to general standard processes; see, for example, T. Watanabe [l].
Chapter II SECTIONS 1-4. With a few exceptions, most notably (3.6), the definitions and theorems in these sections are contained in Hunt [2]. However, some of the terminology and proofs differ a little from those given by Hunt. The probabilistic approach to the fine topology and to regular points is due to Doob [4] and [6] for Brownian motion and the heat process. The extension to general Markov processes was straightforward. Theorem 5.1 was announced in Dynkin [4] and a proof appeared in Dynkin [2]. Exercises 4.17-4.23 are designed to be a collection of the folklore concerning recurrence and the existence of nonconstant excessive functions. See AzCma et al. [l] for recent work in this direction. SECTION5. For results similar to those in (5.6), (5.9), and (5.11) see T. Watanabe [ l ] and Sur [2].
298
NOTES AND COMMENTS
Chapter III SECTIONS 1-4. Special types of MF's and some important analytic features of their relationship to processes have been studied (by physicists as well as mathematicians) for many years; see, for example, Kac [I] and Darling and Siegert [l]. In [3] Hunt used a rather wide class of MF's as the basis for the relative theory. Apparently Dynkin [I] was the first to axiomatize the properties of Definition 1.1 and systematically study the relationship between MF's and subprocesses. The construction of subprocesses and the discussion of their properties in Section 3 is due to Dynkin [I] and [2]. Most of the remainder of Sections 1-4 is due to Meyer [2]. Let M be a nonnegative M F satisfying E"(M,) 5 1 for all x and z but which need not satisfy the stronger condition M , I 1. Then (1.8) still defines a subMarkov transition function and much work has been directed towards the construction of a process with this transition function. In this case the desired process can no longer be obtained simply by '' killing" X at an appropriate time. We refer the reader to Dynkin [2], Ito and S. Watanabe [I], and Kunita and T. Watanabe [I] for different treatments of this interesting problem. In [6] and [9] Meyer showed that if {M,} is a complex-valued MF, then (M,} has the strong Markov property if and only if the MF, {m,}, defined by m, = 0 if M , = 0 and m , = 1 if M , # 0 has the strong Markov property. One can then easily obtain a criterion similar to (4.21) for a complex-valued M F to have the strong Markov property.
SECTION 5. Most of this material appears (in a slightly less general form) in the first half of Hunt [3]. SECTION6. As mentioned in the text Theorem 6.12 is due to Hunt [2], at least for Hunt processes. In order to carry over Hunt's proof to the case of standard processes it seems to be necessary to have some result such as (6. I ) available. Theorem 6.1 appeared in Blumenthal and Getoor [5].
Chapter IV
yo
SECTIONS 1-3. The study of AF's of the form t + f ( X J ds has about the same history as the study of MF's of the form f + exp[-fo f(XJds]. Many interesting results about such functionals are given in Kac [I]. Blanc-Lapierre and Fortet [l] contains several sections devoted to the study of additive and multiplicative functionals. Darling [11 contains a bibliography of some of the
CHAPTER IV
299
earlier papers on this subject. Already in 1956 Ito and McKean had developed the theory of CAF‘s of a linear diffusion and the theory of time changes based on such AF’s. See Sections 3 and 4 of Chapter V. The existence of local times for these processes enabled Ito and McKean to represent the additive functionals in question as integrals of the local time (see (4.21) of Chapter VI). Their work finally was published in Ito and McKean [I]. In the meantime many of their results were rediscovered by the Russian school. See especially, Volkonskii [I]. The study of the relationship between excessive functions and additive functionals for general standard processes was carried out by the Russian school and by P. A. Meyer in the late 1950’s and early 1960’s. The representation f ( x ) = E”(A =) for a uniformly excessive f as the potential of a CAF was first given by Volkonskii [2]. Then Sur [ l ] made a clever extension of Volkonskii’s method to treat general regular potentials. Theorem 3.8 is essentially Sur’s result, and the proof given here is his with some modifications due to McKean and Meyer. Independently Meyer also extended Volkonskii’s result to the general case in [2]. His method is used in the proofs of Theorem 2.1 of Chapter V and Theorem 3.5 of Chapter VI. In addition, in [2] Meyer introduced the concept of a natural additive functional, proved the uniqueness theorem (2.13), and extended the representation theorem to natural potentials (4.22). In the case of Brownian motion the basic representation theorem was found independently by McKean and Tanaka [ I ] and by Ventcel’ [I]. The definition of the characteristic of an AF and the results contained in (2.19), (2.20), and (3.18) are due to Dynkin [2]. Motoo and S. Watanabe [ I ] contains an important extension of these results. Usually one studies only AF’s of ( X , M ) for M of the form M , = ZLo,T)(t) where T is an exact terminal time, the most important case being T = 5. The increased generality of Definition 1 . 1 does not seem to complicate the discussion in any significant way and is useful in some situations. The results and techniques of these sections have found their ultimate generalization in P. A. Meyer’s work on the decomposition of a supermartingale into the sum of a martingale and a decreasing process. See Meyer [4], [5], and especially [ 11. SECTION4. The proof of Theorem 4.22 given in the text is due to Meyer [2]. We will now give an alternate proof based on Meyer’s work on the decomposition of supermartingales. This proof does not require the assumption (4.1). For simplicity we consider the case M , = I r o , J t ) and leave the straightforward extension to more general M F s to the reader. Thus let u be a natural potential of X (Definition 4.17). Let 42’ denote the set of all finite measures p on 8: satisfying f u dp < co. It follows immediately from the definition of a natural potential and (4.18), which is independent of (4.l), that u has the following two properties :
SECTION 4. The proof of Theorem 4.22 given in the text is due to Meyer [2]. We will now give an alternate proof based on Meyer's work on the decomposition of supermartingales. This proof does not require the assumption (4.1). For simplicity we consider the case M_t = I_[0,ζ)(t) and leave the straightforward extension to more general MF's to the reader. Thus let u be a natural potential of X (Definition 4.17). Let 𝒰 denote the set of all finite measures μ on ℰ* satisfying ∫ u dμ < ∞. It follows immediately from the definition of a natural potential and (4.18), which is independent of (4.1), that u has the following two properties:

(1) The family {u(X_T); T a stopping time} is uniformly integrable with respect to P^μ for each μ ∈ 𝒰.

(2) For each μ ∈ 𝒰, lim_{t→∞} E^μ{u(X_t)} = 0.

We will use the terminology of Meyer [1] without special mention. Fix an element μ ∈ 𝒰 for the moment. Then (1) and (2) state that the supermartingale {u(X_t), ℱ_t, P^μ} is a right continuous potential of the class (D). Let g_n = n(u - P_{1/n}u) and A^n_t = ∫_0^t g_n(X_s) ds. Then according to Meyer's theorem [1, VII, T 29] there exists a unique natural increasing process {B^μ_t} such that for each stopping time T and Y ∈ bℱ_T, E^μ(Y A^n_T) → E^μ(Y B^μ_T) as n → ∞, and such that E^μ(B^μ_∞) = ∫ u dμ. Now exactly as in the proof of Theorem 3.8 we can find a family A = {A_t; t ≥ 0} of nonnegative random variables that has all the properties of an AF of X (Definition 1.1) except that t → A_t may be discontinuous at ζ, and such that t → A_t and t → B^μ_t are identical functions almost surely P^μ for each μ ∈ 𝒰. In particular we may assume that t → A_t is constant on [ζ, ∞] since each A^n has this property. Since ε_x ∈ 𝒰 we see that u(x) = E^x(A_∞) = E^x(A_ζ) for each x in E_Δ. Next we will show that almost surely t → A_t and t → X_t have no common discontinuities on [0, ζ). If this were not the case there would exist an ε > 0 and an x in E such that if T = inf{t: d(X_t, X_{t-}) > ε}, then

P^x(A_T - A_{T-} > 0; T < ζ) > 0.

Here d is a metric for E_Δ. Let R = T on {T < ζ} and R = ∞ on {T ≥ ζ}. The quasi-left-continuity of X implies that R is totally inaccessible (relative to P^x). But B^{ε_x}, and hence A, is a natural increasing process and so A_R - A_{R-} = 0 almost surely P^x (Meyer [1, VII, T 49]). Thus the assertion in the first sentence of this paragraph is established. A has all the properties of an NAF of X except that t → A_t may have a jump at ζ. Moreover t → A_t is a natural increasing process relative to P^μ for each μ ∈ 𝒰. For the purposes of the present discussion let us call such an A a generalized NAF of X. So far we have proved that a finite excessive function, u, satisfying (1) and (2) has a unique representation u(x) = E^x(A_ζ) where A is a generalized NAF of X. Finally we will show that if u is a natural potential, then we can modify the generalized NAF constructed above to obtain an NAF whose potential is u. To this end we need the following lemma whose proof is just the same as that of (4.37). Since we sketched this proof in the text we will omit the proof of the lemma.
LEMMA. Let μ be a fixed initial measure and let {R_n} be an increasing sequence of stopping times with limit R. Suppose that Λ ∈ ⋁_n ℱ_{R_n} and let Γ = Λ ∩ {R_n < R for all n, R < ∞}. Then there exists an increasing sequence of stopping times {T_n} such that P^μ almost surely lim T_n = R on Γ, T_n < R for all n on Γ, and T_n = ∞ for all large n on Γ'.
Let f be a finite excessive function satisfying (1) and (2) and let A be the generalized NAF such that f(x) = E^x(A_ζ). Define A*_t = A_t if t < ζ, A*_t = A_{ζ-} if t ≥ ζ, and J = A_ζ - A_{ζ-}. Then A* = {A*_t} is an NAF of X. Define g_f(x) = E^x(A*_ζ); h_f(x) = E^x(J).
It is easy to check that h_f is excessive and that it satisfies (1) and (2) since h_f ≤ f. Thus f = g_f + h_f where g_f is the potential of the NAF, A*, and h_f again satisfies (1) and (2). Now for our given natural potential u define u_1 = g_u, v_1 = h_u and u_n = g_{v_{n-1}}, v_n = h_{v_{n-1}} for n > 1. Let f_n = u_1 + ··· + u_n. Clearly u = f_n + v_n for each n, and as n → ∞, {f_n} increases to an excessive function f while {v_n} decreases to a super-mean-valued function v which is excessive since u = f + v. By construction for each n, u_n(x) = E^x(A^n_ζ) where A^n is an NAF of X, and so if A_t = Σ_n A^n_t, then f(x) = E^x(A_ζ). It is easy to check that A = {A_t} is an NAF of X. Thus in order to complete the proof of Theorem 4.22 it will suffice to show that v = 0. To this end note that, for any j, v_j - v = lim_n (v_j - v_n) = lim_n (u_{j+1} + ··· + u_n) is excessive. Both v_j - v and v satisfy conditions (1) and (2) and since v_j = (v_j - v) + v it follows from the uniqueness of the above representation that h_{v_j} = h_{v_j - v} + h_v. But h_{v_j} = v_{j+1} ↓ v and h_{v_j - v} ≤ v_j - v → 0 as j → ∞. Therefore v = h_v, and this implies v(x) = E^x(B_ζ) where B is a generalized NAF with the property that E^x(B_{ζ-}) = 0 for all x; that is, B is constant except for a possible jump at ζ. Fix x and let ζ_I and ζ_A denote the totally inaccessible part and the accessible part of ζ relative to P^x, respectively. Since t → B_t is a natural increasing process, B_ζ = B_{ζ-} = 0 almost surely P^x on {ζ_I < ∞}. Recall that ζ = ζ_I on {ζ_I < ∞}. But {ζ_A < ∞} is a countable union (up to a set of P^x measure zero) of sets of the form K[(R_n)]; here {R_n} is an increasing sequence of stopping times with R = lim R_n ≤ ζ and

K[(R_n)] = {R_n < R for all n, R = ζ < ∞}.

See the remark following VII, T 44, of Meyer [1]. Since {ζ < ∞} = {ζ_I < ∞} ∪ {ζ_A < ∞}, in order to show that v(x) = E^x(B_ζ) = 0 it suffices to show that B_ζ = 0 almost surely P^x on K[(R_n)]. Let Λ = {B_R > 0}; then Λ ∈ ⋁_n ℱ_{R_n} (Meyer [1, VII, T 49]). Moreover Λ ⊂ {R = ζ < ∞}. Let Γ = Λ ∩ {R_n < R for all n, R < ∞}. By the lemma there exists an increasing sequence {T_n} of stopping times such that {T_n} increases to R strictly from below on Γ and lim T_n = ∞ on Γ' almost surely P^x. Because Γ ⊂ {R = ζ < ∞}, lim T_n ≥ ζ almost surely P^x. But v is a natural potential since v ≤ u and so P_{T_n} v(x) → 0 as n → ∞, while
P_{T_n} v(x) = E^x{B_ζ - B_{T_n}; T_n < ζ} → E^x{B_ζ - B_{ζ-}; T_n < ζ for all n}.
Consequently B_ζ = 0 almost surely P^x on {T_n < ζ for all n} ⊃ Γ, and since Γ = {B_R > 0} ∩ K[(R_n)] this completes the proof. Thus we have established Theorem 4.22 for arbitrary standard processes. It is easy to give examples of excessive functions which satisfy (1) and (2) but which are not natural potentials. Such an excessive function, u, has the representation u(x) = E^x(A_ζ) with A a generalized NAF but cannot be represented as the potential of an NAF. Finally consider the following example. Let X be uniform motion to the right killed with probability 1/2 as it passes through zero, i.e., the process discussed in (4.34) of Chapter IV, (3.18) of Chapter III, and (9.16) of Chapter I. Let u(x) = 1 if x < 0 and u(x) = 0 if x ≥ 0. Then u is an excessive function which satisfies (1) and (2). Also it is not hard to see that u is a natural potential; however, this will become obvious in a moment. As in the first part of the above construction let g_n = n(u - P_{1/n}u); then g_n(x) = 0 if x ≥ 0 or if x < -1/n and g_n(x) = n if -1/n ≤ x < 0. Consequently if T_0 is the hitting time of {0}, then A^n_t = ∫_0^t g_n(X_s) ds approaches A_t as n → ∞ where A_t = 0 if t < T_0 ∧ ζ and A_t = 1 if t ≥ T_0 ∧ ζ. Clearly A = {A_t} is the unique generalized NAF such that u(x) = E^x(A_ζ). But if x < 0, P^x(T_0 ∧ ζ = ζ) = 1/2 and so A has a jump at ζ. Define B_t by B_t = 0 if t < T_0 and B_t = 2 if t ≥ T_0. It is easily verified that B = {B_t} is an NAF such that u(x) = E^x(B_ζ). Thus u is a natural potential and B is the unique NAF whose potential is u. The important point is that A and B are quite different functionals. S. Watanabe [1] has studied the structure of general discontinuous additive functionals (not necessarily natural). See also Motoo and Watanabe [1].
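For the reader's convenience, here is the arithmetic behind this example written out as a reader's aid; it uses the killing probability 1/2 and the convention, implicit in the assertion above that B is an NAF, that the paths killed at 0 contribute nothing to B. For x < 0 the motion reaches 0 at time -x; with probability 1/2 it is killed there (ζ = -x) and with probability 1/2 it survives (T_0 = -x, ζ = ∞). Hence

\[
E^x(A_\zeta) = 1\cdot\tfrac12 + 1\cdot\tfrac12 = 1 = u(x),
\qquad
E^x(B_\zeta) = 0\cdot\tfrac12 + 2\cdot\tfrac12 = 1 = u(x),
\]
\[
P^x(A_\zeta - A_{\zeta-} > 0) = P^x(T_0\wedge\zeta = \zeta) = \tfrac12,
\qquad
P^x(B_\zeta - B_{\zeta-} > 0) = 0 .
\]

Both functionals have potential u, but A jumps at the lifetime with probability 1/2 while B does not; this is exactly why A is only a generalized NAF while B is an NAF.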
SECTION 5. Most of the results of this section are due to Meyer [2].
Chapter V

SECTION 1. Meyer [2] was the first to introduce the condition (1.3) and to use it systematically in the study of Markov processes. It is somewhat surprising that such an innocuous regularity condition allows rather far-reaching simplifications in the theory. The results embodied in (1.5), (1.6), and (1.7) are due to Meyer [2]. Most of the remaining results in this section are due to Doob [3].

SECTION 2. Proposition 2.4 is due to Motoo [1]. The idea of using the inverse of a CAF as a "time change" in the process X has a long history. Already in 1956 it was used by Ito and McKean in discussing linear diffusions. See also
Section 15 of Hunt [3]. The results contained in (2.11) may be found in Volkonskii [1].

SECTION 3. The notion of the support of a CAF was introduced in Getoor [1]. These lectures also contained Theorem 3.8. The idea of a local time for the one-dimensional Brownian motion goes back to Lévy. See the discussion in Ito and McKean [1]. Trotter [1] gave a complete construction of the local time L_t^x for Brownian motion including the joint continuity in t and x. Additional properties of such local times may be found in Knight [2], McKean [1], and Ray [3]. Furthermore in their monograph [1] Ito and McKean develop many deep results about such local times and their inverses. Theorems 3.13, 3.17, and 3.21 appeared in Blumenthal and Getoor [3], although the definition of local time used in the text (3.12) appears here for the first time. As mentioned in the text, most of the results from (3.23) on are due to Meyer [8], although Theorem 3.30 is essentially due to Boylan [1]. Boylan's proof is much different from Meyer's proof, which is presented here. In [2] Kac used techniques not unrelated to those of Meyer.
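For orientation, two standard formulas summarize the objects discussed in the notes on Sections 2 and 3 above; they are reader's aids in the usual notation (W a Brownian motion), not quotations of the numbered definitions in the text. The time change by the inverse of a CAF A is

\[
\tau_t = \inf\{s : A_s > t\}, \qquad \tilde X_t = X_{\tau_t},
\]

and in the Brownian case the local times L_t^x of Trotter's theorem can be described through the occupation times formula

\[
\int_0^t f(W_s)\,ds = \int_{-\infty}^{\infty} f(x)\,L_t^x\,dx \qquad \text{for all bounded Borel } f \text{ and } t \ge 0,
\]

with (t, x) → L_t^x jointly continuous and, for fixed x, t → L_t^x a CAF that increases only when W_t = x.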
SECTION 4. Most of the results of this section are due to Motoo [2], although some of the terminology and proofs differ a little from those given by Motoo.

SECTION 5. Theorem 5.1, at least for Hunt processes, appeared in Blumenthal et al. [1] and [2]. However in these papers the "piecing together" of the local time changes was somewhat slurred over and, in fact, the method outlined there is probably valid only under an assumption such as (1.3) which guarantees that every CAF is perfect. The procedure for carrying out this piecing together given in the text is new.
Chapter VI

SECTIONS 1 AND 2. The description of the dual process, or more precisely the demonstration of its relevance to the development of a strong potential theory, is due to Hunt [4]; nearly all the results of Sections 1, 2, and 4 are taken from that paper. We describe the relationship between X and X̂ only analytically (the basic duality relation is recalled in a display at the end of these notes), although the interpretation of X̂ as "X run backwards" suggests many theorems and proofs. Probabilistic descriptions of the relationship between the process and its dual are complicated; see, for example, Nagasawa [1]. The fundamental equality (1.17) more or less expresses the fact that the subprocesses (X, T_A) and (X̂, T̂_A) are in duality also. Hunt establishes this fact for subprocesses corresponding to a much larger class of MF's; the
proof is quite involved. The formulation and proof for general MF's have not yet been given. Discussions similar to ours are given by Kunita and Watanabe [2] and by Meyer [3]. The equivalence of semipolar and cosemipolar sets was first pointed out by Meyer in [2].

SECTION 3. Theorems 3.1, 3.4, and 3.5 are due to Meyer [2]. The proofs of (3.1) and (3.4) given here are different from the original proofs of Meyer. An alternate proof of (3.5) in the same spirit as the proof of (3.4) is given in Blumenthal and Getoor [2]. Also that paper contains an extension of these results to certain classes of measures and CAF's which do not possess finite potentials.

SECTION 4. The results of this section are taken from Hunt [4]. One exception is (4.10), which appears here for the first time.
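For orientation, the analytic duality referred to in the notes on Sections 1 and 2 above can be summarized as follows; the notation (ξ for the duality measure, U^α and Û^α for the resolvents of X and X̂, u^α for the density) is the standard one, and the display is a reader's aid rather than a quotation of a numbered hypothesis from the text:

\[
\int g\,U^\alpha f\;d\xi \;=\; \int f\,\hat U^\alpha g\;d\xi \qquad (f, g \ge 0),
\]

equivalently, there is a density u^α(x, y) with

\[
U^\alpha f(x) = \int u^\alpha(x,y)\,f(y)\,\xi(dy),
\qquad
\hat U^\alpha g(y) = \int g(x)\,u^\alpha(x,y)\,\xi(dx),
\]

so that X and X̂ play symmetric roles relative to ξ.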
BIBLIOGRAPHY
IJM = Illinois J. Math.; TAMS = Trans. Am. Math. Soc.; TV = Teor. Veroyatnostei i ee Primeneniya; TP = Theory of Probability and its Applications (translation of TV); ZW = Z. Wahrscheinlichkeitstheorie verw. Geb.
J. AZÉMA, M. KAPLAN-DUFLO, and D. REVUZ:
[1] Récurrence fine des processus de Markov. Ann. Inst. Henri Poincaré B2, 185-220 (1966).
H. BAUER:
[1] "Markoffsche Prozesse." Vorlesung an der Universität Hamburg (1963).
A. BLANC-LAPIERRE and R. FORTET:
[1] "Théorie des fonctions aléatoires." Masson, Paris, 1953.
R. M. BLUMENTHAL:
[1] An extended Markov property. TAMS 85, 52-72 (1957).
R. M. BLUMENTHAL and R. K. GETOOR:
[1] Sample functions of stochastic processes with stationary independent increments. J. Math. Mech. 10, 493-516 (1961).
[2] Additive functionals of Markov processes in duality. TAMS 112, 131-163 (1964).
[3] Local times for Markov processes. ZW 3, 50-74 (1964).
[4] A theorem on stopping times. Ann. Math. Statist. 35, 1348-1350 (1964).
[5] Standard processes and Hunt processes. Proc. Symp. Markov Processes and Potential Theory, Madison, 1967, pp. 13-22. Wiley, New York, 1967.
R. M. BLUMENTHAL, R. K. GETOOR, and H. P. MCKEAN, JR.:
[1] Markov processes with identical hitting distributions. IJM 6, 402-420 (1962).
[2] A supplement to "Markov processes with identical hitting distributions." IJM 7, 540-542 (1963).
S. BOCHNER:
[1] "Harmonic Analysis and the Theory of Probability." Univ. of California Press, Berkeley, California, 1955.
N. BOURBAKI:
[1] "Éléments de mathématique, Livre III, topologie générale," 2nd ed., Chapter 9. Hermann, Paris, 1956.
E. S. BOYLAN:
[1] Local times for a class of Markov processes. IJM 8, 19-39 (1964).
M. BRELOT:
[1] "Éléments de la théorie classique du potentiel," 3rd ed. Centre de Documentation Universitaire, Paris, 1965.
K. L. CHUNG and J. L. DOOB:
[1] Fields, optionality, and measurability. Am. J. Math. 87, 397-424 (1965).
P. COURRÈGE and P. PRIOURET:
[1] Temps d'arrêt d'une fonction aléatoire: Relations d'équivalence associées et propriétés de décomposition. Publ. Inst. Statist. Univ. Paris 14, 245-274 (1965).
[2] Recollements de processus de Markov. Publ. Inst. Statist. Univ. Paris 14, 275-377 (1965).
D. A. DARLING:
[1] Étude des fonctionnelles additives des processus Markoviens. Le calcul des probabilités et ses applications. Colloq. Intern. Centre Natl. Rech. Sci. (Paris) 87, 69-80 (1959).
D. A. DARLING and A. J. F. SIEGERT:
[1] On the distribution of certain functionals of Markov chains and processes. Proc. Natl. Acad. Sci. U.S. 42, 525-529 (1956).
J. L. DOOB:
[1] "Stochastic Processes." Wiley, New York, 1953.
[2] Markoff chains-denumerable case. TAMS 58, 455-473 (1945).
[3] Applications to analysis of a topological definition of smallness of a set. Bull. Am. Math. Soc. 72, 579-600 (1966).
[4] Semimartingales and subharmonic functions. TAMS 77, 86-121 (1954).
[5] Martingales and one dimensional diffusion. TAMS 78, 168-208 (1955).
[6] A probability approach to the heat equation. TAMS 80, 216-280 (1955).
E. B. DYNKIN:
[1] "Foundations of the Theory of Markov Processes." Moscow, 1959 (in Russian). English translation: Prentice-Hall, Englewood Cliffs, New Jersey, 1961.
[2] "Markov Processes." Moscow, 1963. English translation (in two volumes): Springer, Berlin, 1965.
[3] Criteria of continuity and lack of discontinuities of the second kind for trajectories of a Markov stochastic process (in Russian). Izv. Akad. Nauk SSSR, Ser. Mat. 16, 563-572 (1952).
[4] Intrinsic topology and excessive functions connected with a Markov process (in Russian). Dokl. Akad. Nauk SSSR 127, 17-19 (1959).
E. B. DYNKIN and A. A. YUSHKEVICH:
[1] Strong Markov processes (in Russian). TV 1, 149-155 (1956). English translation: TP 1, 134-139 (1956).
W. FELLER:
[1] "An Introduction to Probability Theory and its Applications," Vol. II. Wiley, New York, 1966.
R. K. GETOOR:
[1] Additive functionals of a Markov process. Lecture notes, University of Hamburg (1964).
[2] Additive functionals and excessive functions. Ann. Math. Statist. 36, 409-422 (1965).
[3] Continuous additive functionals of a Markov process with applications to processes with independent increments. J. Math. Anal. Appl. 13, 132-153 (1966).
W. HANSEN:
[1] Konstruktion von Halbgruppen und Markoffschen Prozessen. Invent. Math. 3, 179-214 (1967).
L. L. HELMS and G. JOHNSON:
[1] Class D supermartingales. Bull. Am. Math. Soc. 69, 59-62 (1963).
E. HEWITT and K. STROMBERG:
[1] "Real and Abstract Analysis." Springer, Berlin, 1965.
G. A. HUNT:
[1] Some theorems concerning Brownian motion. TAMS 81, 294-319 (1956).
[2] Markoff processes and potentials. I. IJM 1, 44-93 (1957).
[3] Markoff processes and potentials. II. IJM 1, 316-369 (1957).
[4] Markoff processes and potentials. III. IJM 2, 151-213 (1958).
N. IKEDA, M. NAGASAWA, and S. WATANABE:
[1] A construction of Markov processes by piecing out. Proc. Japan Acad. 42, 370-375 (1966).
K. ITO:
[1] "Stochastic Processes." Iwanami Shoten, Tokyo, 1957 (in Japanese). English translation of Chapters 4 and 5 by Y. Ito, Yale University, 1961.
[2] "Lectures on Stochastic Processes." Tata Institute of Fundamental Research, Bombay (1961).
K. ITO and H. P. MCKEAN, JR.:
[1] "Diffusion Processes and their Sample Paths." Springer, Berlin, 1965.
K. ITO and S. WATANABE:
[1] Transformation of Markov processes by multiplicative functionals. Ann. Inst. Fourier 15, 13-30 (1965).
M. KAC:
[1] On some connections between probability theory and differential and integral equations. Proc. 2nd Symp. Math. Statist. Probability, Berkeley, 1950, pp. 189-215. Univ. of California Press, Berkeley, California, 1951.
[2] Some remarks on stable processes. Publ. Inst. Statist. Univ. Paris 6, 303-306 (1957).
J. R. KINNEY:
[1] Continuity properties of sample functions of Markov processes. TAMS 74, 280-302 (1953).
F. B. KNIGHT:
[1] Markov processes on an entrance boundary. IJM 7, 322-336 (1963).
[2] Random walks and a sojourn density process of Brownian motion. TAMS 109, 56-86 (1963).
H. KUNITA and T. WATANABE:
[1] Notes on transformations of Markov processes connected with multiplicative functionals. Mem. Fac. Sci., Kyushu Univ. A17, 181-191 (1963).
[2] Markov processes and Martin boundaries. I. IJM 9, 485-526 (1965).
J. LAMPERTI:
[1] An invariance principle in renewal theory. Ann. Math. Statist. 33, 685-696 (1962).
P. LÉVY:
[1] "Théorie de l'addition des variables aléatoires," 2nd ed. Gauthier-Villars, Paris, 1954.
[2] "Processus stochastiques et mouvement Brownien," 2nd ed. Gauthier-Villars, Paris, 1965.
M. LOÈVE:
[1] "Probability Theory," 2nd ed. Van Nostrand, Princeton, New Jersey, 1960.
H. P. MCKEAN, JR.:
[1] A Hölder condition for Brownian local time. J. Math., Kyoto Univ. 1, 195-201 (1962).
H. P. MCKEAN, JR. and H. TANAKA:
[1] Additive functionals of the Brownian path. Mem. Coll. Sci., Univ. Kyoto A33, 479-506 (1961).
P. A. MEYER:
[1] "Probability and Potentials." Ginn (Blaisdell), Boston, Massachusetts, 1966.
[2] Fonctionnelles multiplicatives et additives de Markov. Ann. Inst. Fourier 12, 125-230 (1962).
[3] Semi-groupes en dualité. Séminaire de théorie du potentiel, directed by M. Brelot, G. Choquet, and J. Deny. Inst. Henri Poincaré, University of Paris, 5th year (1960/1961).
[4] A decomposition theorem for supermartingales. IJM 6, 193-205 (1962).
[5] Decomposition of supermartingales: the uniqueness theorem. IJM 7, 1-17 (1963).
[6] La propriété de Markov forte des fonctionnelles multiplicatives. TV 8, 349-356 (1963). English translation: TP 8, 328-334 (1963).
[7] Sur les relations entre diverses propriétés des processus de Markov. Invent. Math. 1, 59-100 (1966).
[8] Sur les lois de certaines fonctionnelles additives: Applications aux temps locaux. Publ. Inst. Statist. Univ. Paris 15, 295-310 (1966).
[9] Quelques résultats sur les processus. Invent. Math. 1, 101-115 (1966).
[10] "Processus de Markov." Springer, Berlin, 1967.
M. MOTOO:
[1] Representations of a certain class of excessive functions and a generator of Markov process. Sci. Papers Coll. Gen. Educ., Univ. Tokyo 12, 143-159 (1962).
[2] The sweeping-out of additive functionals and processes on the boundary. Ann. Inst. Statist. Math. 16, 317-345 (1964).
M. MOTOO and S. WATANABE:
[1] On a class of additive functionals of Markov processes. J. Math., Kyoto Univ. 4, 429-469 (1965).
M. NAGASAWA:
[1] Time reversions of Markov processes. Nagoya Math. J. 24, 177-204 (1964).
J. NEVEU:
[1] "Mathematical Foundations of the Calculus of Probability." Holden-Day, San Francisco, California, 1965.
D. RAY:
[1] Stationary Markov processes with continuous paths. TAMS 82, 452-493 (1956).
[2] Resolvents, transition functions, and strongly Markovian processes. Ann. Math. 70, 43-72 (1959).
[3] Sojourn times of diffusion processes. IJM 7, 615-630 (1963).
C. J. STONE:
[1] The set of zeros of a semi-stable process. IJM 7, 631-637 (1963).
M. G. SUR:
[1] Continuous additive functionals of a Markov process (in Russian). Dokl. Akad. Nauk SSSR 137, 800-803 (1961). English translation: Soviet Math. 2, 365-368 (1961).
[2] A localization of the concept of an excessive function connected with a Markov process (in Russian). TV 7, 191-196. English translation: TP 7, 185-189 (1962).
H. F. TROTTER:
[1] A property of Brownian motion paths. IJM 2, 425-433 (1958).
A. D. VENTCEL':
[1] Nonnegative additive functionals of Markov processes (in Russian). Dokl. Akad. Nauk SSSR 137, 17-20 (1961). English translation: Soviet Math. 2, 218-221 (1961).
V. A. VOLKONSKII:
[1] Random time changes in strong Markov processes (in Russian). TV 3, 332-350 (1958). English translation: TP 3, 310-326 (1958).
[2] Additive functionals of Markov processes (in Russian). Tr. Mosk. Mat. Obshch. 9, 143-189 (1960). English translation: Selected translations in mathematical statistics and probability. Inst. Math. Statist. Am. Math. Soc. 5, 127-178 (1965).
S. WATANABE:
[1] On discontinuous additive functionals and Lévy measures of a Markov process. Japan J. Math. 34, 53-70 (1964).
T. WATANABE:
[1] On the equivalence of excessive functions and superharmonic functions in the theory of Markov processes. I and II. Proc. Japan Acad. 38, 397-401 and 402-407 (1962).
INDEX OF NOTATION
SUBJECT INDEX
Accessible stopping time, 172 Adapted to, 11 Additive functional, 147, 148 of (X, M), 148 of (X, T), 149 of X, 149 continuous, 152 equivalence of, 149 natural, 153 perfect, 149, 169, 205 strong, 151 Almost surely, 27 Balayage, 136 of additive functionals, 229 Brownian motion, 18, 51, 71, 73, 83, 84, 88, 94, 146, 221
Canonical subprocess, 107 Capacitary measure, 285 Capacity, 53 capacitable, 54 natural capacity, 285 Characteristic of an additive functional, 159 co- (as a prefix), 259 Completion (with respect to a family of measures), 26 d-system, 5 Doob's theorem, 81 Entry time, 52 Exactly subordinate, 117 Exit set, 235 strong, 240 Excessive function, 72 (X, M) excessive, 125 α-V excessive, 115, 252 admissible, 277 regular, 187, 287 uniformly, 169
uniformly integrable, 187 Excessive measure, 257 Excessive regularization, 78, 116 Exponent of a subordinator, 219 Feller process (strong), 77 Fine topology, 85 closure, 87, 199 support of a measure, 204 support of a continuous additive functional, 204, 213 Gauss kernel, 18 Harmonic function, 187 Harmonic measure, 61 Hitting distribution, 61 time, 52 Holding point, 91 Hunt process, 45 Independent increments, 17, 51, 52, 219, 264, 290
Initial distribution, 15 Instantaneous point, 91 Inverse of an additive functional, 208 of an increasing function, 207 Irregular, 61, 80, 199 Lifetime, 22 Lévy measure, 219 Local time, 216, 293, 294 on a set, 232 Locally integrable, 266 Markov process, 11 equivalence of, 24 in duality, 259 of function space type, 24 right continuous, 41 temporally homogeneous, 15 with transition function, 15 with translation operators, 20
Markov kernel, 63 Measurable stochastic process, 34 progressively, 34 Monotone class theorem, 5, 6, 7 Multiplicative functional, 97 equivalence of, 98 exact, 120 normalized, 124 perfect, 149 regular, 119 right continuous, 98 strong, 110 Nearly Borel set, 59 Nearly supermedian, 267 Normal Markov process, 30 Null set, 79 Path, 21 Permanent point for a multiplicative functional, 98 for a semigroup, 102 π-system, 5 Poisson process, 19, 20 Polar set, 79 Potential natural, 178 Newtonian, 71 of an additive functional, 154 of a function, 41, 69 of a measure, 254, 257 pseudo, 187 regular, 161 Riesz, 71 theory (classical), 94 zero (set of), 79 Projective system, 16 set, 229 Quasi-left-continuous, 45 on [0, ∞), 45 on [0, ζ), 45 Recurrence, 88, 89 Réduite, 136 Reference measure, 196 Regular point, 61, 199 step process, 63, 91, 264 Resolvent, 41, 252
equation, 41 exactly subordinate, 117 in duality, 253 subordinate, 115 Riesz decomposition, 272, 279 Right continuous family of σ-algebras, 31 Semigroup generated by a multiplicative functional, 100 subordinate, 101 Semipolar set, 79 Special set, 133 Stable process, 293 one-sided, 19, 71, 227, 264, 294 symmetric, 19, 71, 83, 264 Standard process, 45 State space, 20 Stochastic process, 11 adapted to a family of σ-algebras, 11 Stopping time, 31 for X, 36 Strong Markov property, 37, 38 for an additive functional, 151 Strong order, 190 Subordinator, 219 Subprocess, 105 Superharmonic, 96 Supermartingale, 4 Super-mean-valued, 77, 81 Supermedian, 115, 252 Support of a continuous additive functional, 213 Terminal time, 78, 98 complete, 173 exact, 124 perfect, 153 strong, 124 Thin set, 79 at a point, 79 Time changed process, 212, 228, 232, 233 Translation operators, 21 Transition function, 14 sub-Markov, 22 temporally homogeneous, 14 Uniform motion, 23 Zero-one law, 30