Stochastic Systems In Merging Phase Space
This page intentionally left blank
Stochastic Systems In Merging Phase Space Vladimir S. Koroliuk lnstitute of Mathematics, National Academy of Sciences, Ukraine
Nikolaos Limnios Applied Mathematics Laboratory, University of Technology of Compiegne, France
N E W JERSEY
- LONDON
d i' World Scientific *
SINGAPORE
- BElJlNG
SHANGHAI
- HONG KONG
*
TAIPEI
*
CHENNAI
Published by World Scientific Publishing Co. Re. Ltd. 5 Toh Tuck Link, Singapore 596224 USA once: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK oflcc: 57 Shefton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-PublicationData A catalogue record for this book is available from the British Library.
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE Copyright 0 2005 by World Scientific Publishing Co. Re. Ltd All rights reserved. This book, or parts tlzereoJ may not be reprodured in any fonn or by any means,
electronic or mecltanical, inrluding photocopying, rerording or any information storage and retrieval system now known or to be invented, withoui written pennission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN 981-256-591-4
Printed in Singapore by World Scientific Printers ( S )Pte Ltd
Preface
"
... the theory of systems should be built on the methods of simplification and is, essentially, the science of simplification". Walter Ashby (1969)
The actual problem of systems theory is the development of mathematically justified methods of simplification of complicated systems whose mathematical analysis is difficult to perform even with help of modern computers. The main difficulties are caused by the complexity of the phase (state) space of the system, which leads to virtually boundedless mathematical models. A simplified model for a system must satisfy the following conditions: (i) The local characteristics of the simplified model are determined by rather simple functions of the local characteristics of original model. (ii) The global characteristics describing the behavior of the stochastic system can be effectively calculated on large enough time intervals. (iii) The simplified model has an effective mathematical analysis, and the global characteristics of the simplified model are close enough to the corresponding characteristics of the original model for application. Stochastic systems considered in the present book are evolutionary systems in random medium, that is, dynamical systems whose state space is subject to random variations. From a mathematical point of view, such systems are naturally described by operator-valued stochastic processes on Banach spaces and are nowadays known as Random evolution. This book gives recent results on stochastic approximation of systems by weak convergence techniques. General and particular schemes of proofs for average, diffusion, diffusion with equilibrium, and Poisson approximations of stochastic systems are presented. The particular systems studied here V
vi
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
are stochastic additive functionals, dynamical systems, stochastic integral functionals, increment processes, impulsive processes. As application we give the cases of absorption times, stationary phase merging, semi-Markov random walk, LBvy approximation, etc. The main mathematical object of this book is a family of coupled stochastic processes [ € ( t )z'(t), , t 2 0 ,E > 0 (where E is the small series parameter) called the switched and switching processes. The switched process ['(t),t 2 0 , describes the system evolution, and, in general, is a stochastic functional of a third process. The switching process z'(t),t 2 0, also called the driving or modulation process, is the perturbing process or the random medium, and can represent the environment, the technical structure, or any perturbation factor. The two modes of switching considered here are Markovian and semiMarkovian. Of course, we could present only the semi-Markov case since the Markov is a special case. But we present both mainly for two reasons: the first is that proofs are simpler for the Markov case, and the second that most of the readers are mainly interested by the Markov case. The switching processes are considered in phase split and merging scheme. The phase merging scheme is based on the split of the phase space into disjoint classes
E = UkEVEk,
Ek n Eki = 8 , k
# k'
(0.1)
v,
and further merging these classes Ek, k E into distinct states k E V . So the merged phase space of the simplified model of system is E = V (see Figure 4. l).' The transitions (connections) between the states of the original system S are merged to yield the transitions between merged states of the merged system S . The analysis of the merged system is thus significantly simplified. It is important to note that the additional supporting system So with the same phase space E but without connections between classes of states Ek is used. Split the phase space (0.1) just means introducing a new supporting system consisting of isolated subsystems s k , k E V , defined on classes of states Ek, k 6 V . The merged system S is constructed with respect to the ergodic property of the support system So. It is worth noticing that the initial processes in the series scheme contain no diffusion part. Diffusion processes appear only as limit processes. 'Figures, theorems, lemmas, etc. are numbered by x.y, where x is the number of chapter, and y is the number of figure, theorem, etc. into t h e chapter.
PREFACE
vii
The general scheme of proof of weak convergence for stochastic processes in series scheme is the following.
I. Limit compensating operator: 1. Construction of the compensating operator ILEof the Markov additive process J E ( t )t, 2 0. 2. Asymptotic form of ILE acting on some kind of test functions ' p E . 3. Singular perturbation problem: LevE= Q EO".
+
11. Tightness: 1. Compact containment condition lim M-%x
sup O<E<EO
B( sup l~'(t)l>. M ) o
= 0.
2. Submartingale condition. The process
V E ( 4 := cp(tE(t))+ Cd, is a nonnegative submartingale with respect to some filtration. In the case of semimartingales, used in proofs of Poisson approximation results, we follow the same scheme of weak convergence proof, but for their predictable characteristics presented under integral functional forms. For Markov switching processes, step I1 is performed in a simpler and more adequate way by using predictable square characteristics of martingale characterization. It is worth noticing that most of results presented in this book are new, or generalize results published previously by the authors. The book is organized as follows. In Chapter 1, we present shortly the basic families of stochastic processes used in this book. More specifically, we present the Markov, semi-Markov processes, and some of their subfamilies, and semimartingales. In Chapter 2, we present the notions of switching and switched processes via additive functionals of processes with locally independent increments switched by Markov or semi-Markov processes. In Chapter 3, we present stochastic systems in the series scheme. More specifically, we present the basic results of average and diffusion approximations; proofs are postponed until the next chapters. We start from the simplest case of integral functionals to end by the more complicated case of general additive functionals.
...
Vlll
STOCHASTIC S Y S T E M S I N M E R G I N G PHASE SPACE
In Chapter 4, we present results concerning average and diffusion approximations, as in Chapter 3, but with the addition that the state space of the switching process is asymptotically merged (single and double merged). Ergodic and non-ergodic switching processes are considered. In the next two chapters we present detailed proofs of the results stated in Chapters 3 and 4. In Chapter 5, we present the algorithmic part of the proofs, realized by the singular perturbing reducible-invertible operator technique. This part corresponds to the finite-dimensional distribution convergence and is performed as in step I above. In Chapter 6, we give the last step of the proofs concerning the tightness of the probability distribution of the processes. This is performed as in step I1 above. In Chapter 7, we give Poisson approximation results for impulsive processes switched by Markov processes and for stochastic additive functionals switched by semi-Markov processes. The next two chapters include applications of the theory presented in the previous chapters. In Chapter 8, we present several applications: absorption time distributions, stationary phase merging, superposition of two independent renewal processes, semi-Markov random walks. In Chapter 9, we present birth-and-death processes, repairable systems, and L6vy approximation of impulsive systems. Finally, we give some problems to solve. These problems include results stated without proofs in previous chapters, additional results, and some extensions. Three appendices are also included. The first gives some definitions about weak convergence of stochastic processes. The second gives some known basic theorems on convergence of semimartingales and composed processes needed in proofs. The third includes some additional results concerning intermediate proofs of theorems. Of course, even if these theorems are included in order that this book becomes as autonomous as possible, we encourage the readers to find the corresponding information directly to bibliography. This book should serve its a textbook for graduate students, postdoctoral seminars or courses for applied scientists and engineers in stochastic approximation of complex systems: queuing, reliability, risk, finance,
PREFACE
ix
biology systems, etc. Acknowledgment. Authors express their gratitude to DFG for a financial support of the project 436 UKR 113/70/2-5 and thank particularly Sergio
Albeverio of the Institute of Applied Mathematics at University of Bonn, and Yuri Kondrutzev of the BiBos Center of the University of Bielelfeld for their hospitality. They are also grateful to the University of Technology of Compibgne for some support and hospitality. They are also grateful to Vladimir Anisimov and Anatoli Skorokhod for several discussions about these problems. Last, thanks are due also to Esther Tan Leng-Leng, Desk Editor in World Scientific, for her useful collaboration. Kiev CompiBgne
July 2005
V.S. Koroliuk N. Limnios
This page intentionally left blank
Contents
Preface
V
1. Markov and Semi-Markov Processes
1
1.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Markov Processes . . . . . . . . . . . . . . . . . . . . . . . . 1.2.1 Markov Chains . . . . . . . . . . . . . . . . . . . . . 1.2.2 Continuous-Time Markov Processes . . . . . . . . . . 1.2.3 Diffusion Processes . . . . . . . . . . . . . . . . . . . 1.2.4 Processes with Independent Increments . . . . . . . . 1.2.5 Processes with Locally Independent Increments . . . 1.2.6 Martingale Characterization of Markov Processes . . 1.3 Semi-Markov Processes . . . . . . . . . . . . . . . . . . . . 1.3.1 Markov Renewal Processes . . . . . . . . . . . . . . . 1.3.2 Markov Renewal Equation and Theorem . . . . . . . 1.3.3 Auxiliary Processes . . . . . . . . . . . . . . . . . . . 1.3.4 Compensating Operators . . . . . . . . . . . . . . . . 1.3.5 Martingale Characterization of Markov Renewal Processes . . . . . . . . . . . . . . . . . . . . . . . . 1.4 Semimartingales . . . . . . . . . . . . . . . . . . . . . . . . 1.5 Counting Markov Renewal Processes . . . . . . . . . . . . . 1.6 Reducible-Invertible Operators . . . . . . . . . . . . . . . . 2. Stochastic Systems with Switching 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Stochastic Integral Functionals . . . . . . . . . . . . . . . . 2.3 Increment Processes . . . . . . . . . . . . . . . . . . . . . . xi
1 2 2 6 10 11 14 15 19 19 21 23 24 25 25 28 31 35 35 36 40
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
xii
Stochastic Evolutionary Systems . . . . . . . . . . . . . . . Markov Additive Processes . . . . . . . . . . . . . . . . . . Stochastic Additive Functionals . . . . . . . . . . . . . . . . Random Evolutions . . . . . . . . . . . . . . . . . . . . . . 2.7.1 Continuous Random Evolutions . . . . . . . . . . . . 2.7.2 Jump Random Evolutions . . . . . . . . . . . . . . . 2.7.3 Semi-Markov Random Evolutions . . . . . . . . . . . 2.8 Extended Compensating Operators . . . . . . . . . . . . . . 2.9 Markov Additive Semimartingales . . . . . . . . . . . . . . 2.9.1 Impulsive Processes . . . . . . . . . . . . . . . . . . . 2.9.2 Continuous Predictable Characteristics . . . . . . . .
2.4 2.5 2.6 2.7
3. Stochastic Systems in the Series Scheme 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Random Evolutions in the Series Scheme . . . . . . . . . . . 3.2.1 Continuous Random Evolutions . . . . . . . . . . . . 3.2.2 Jump Random Evolutions . . . . . . . . . . . . . . . 3.3 Average Approximation . . . . . . . . . . . . . . . . . . . . 3.3.1 Stochastic Additive Functionals . . . . . . . . . . . . 3.3.2 Increment Processes . . . . . . . . . . . . . . . . . . 3.4 Diffusion Approximation . . . . . . . . . . . . . . . . . . . . 3.4.1 Stochastic Integral Functionals . . . . . . . . . . . . 3.4.2 Stochastic Additive Functionals . . . . . . . . . . . . 3.4.3 Stochastic Evolutionary Systems . . . . . . . . . . . 3.4.4 Increment Processes . . . . . . . . . . . . . . . . . . Diffusion Approximation with Equilibrium . . . . . . . . . . 3.5 3.5.1 Locally Independent Increment Processes . . . . . . 3.5.2 Stochastic Additive Functionals with Equilibrium . . 3.5.3 Stochastic Evolutionary Systems with Semi-Markov Switching . . . . . . . . . . . . . . . . . . . . . . . .
4 . Stochastic Systems with Split and Merging 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Phase Merging Scheme . . . . . . . . . . . . . . . . . . . . . 4.2.1 Ergodic Merging . . . . . . . . . . . . . . . . . . . . 4.2.2 Merging with Absorption . . . . . . . . . . . . . . . . 4.2.3 Ergodic Double Merging . . . . . . . . . . . . . . . . 4.3 Average with Merging . . . . . . . . . . . . . . . . . . . . .
43 46 47 50 50 54 56 59 61 61 63 67 67 68 68 72 74 74 79
81 81 84 88 89 90 90 93 97 103 103 104 104 110 112 116
CONTENTS
4.3.1 Ergodic Average . . . . . . . . . . . . . . . . . . . . 4.3.2 Average with Absorption . . . . . . . . . . . . . . 4.3.3 Ergodic Average with Double Merging . . . . . . 4.3.4 Double Average with Absorption . . . . . . . . . 4.4 Diffusion Approximation with Split and Merging . . . . 4.4.1 Ergodic Split and Merging . . . . . . . . . . . . . 4.4.2 Split and Merging with Absorption . . . . . . . . 4.4.3 Ergodic Split and Double Merging . . . . . . . . 4.4.4 Double Split and Merging . . . . . . . . . . . . . 4.4.5 Double Split and Double Merging . . . . . . . . . 4.5 Integral F'unctionals in Split Phase Space . . . . . . . . 4.5.1 Ergodic Split . . . . . . . . . . . . . . . . . . . . . . 4.5.2 Double Split and Merging . . . . . . . . . . . . . 4.5.3 Triple Split and Merging . . . . . . . . . . . . . .
xiii
.. .. .. .. .. .. .. .. .. ..
.. ..
5. Phase Merging Principles 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Perturbation of Reducible-Invertible Operators . . . . . . . 5.2.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . 5.2.2 Solution of Singular Perturbation Problems . . . . . 5.3 Average Merging Principle . . . . . . . . . . . . . . . . . . . 5.3.1 Stochastic Evolutionary Systems . . . . . . . . . . . 5.3.2 Stochastic Additive F'unctionals . . . . . . . . . . . . 5.3.3 Increment Processes . . . . . . . . . . . . . . . . . . 5.3.4 Continuous Random Evolutions . . . . . . . . . . . . 5.3.5 Jump Random Evolutions . . . . . . . . . . . . . . . 5.3.6 Random Evolutions with Markov Switching . . . . . 5.4 Diffusion Approximation Principle . . . . . . . . . . . . . . 5.4.1 Stochastic Integral F'unctionals . . . . . . . . . . . . 5.4.2 Continuous Random Evolutions . . . . . . . . . . . . 5.4.3 Jump Random Evolutions . . . . . . . . . . . . . . . 5.4.4 Random Evolutions with Markov Switching . . . . . 5.5 Diffusion Approximation with Equilibrium . . . . . . . . . . 5.5.1 Locally Independent Increment Processes . . . . . . 5.5.2 Stochastic Additive F'unctionals . . . . . . . . . . . . 5.5.3 Stochastic Evolutionary Systems with Semi-Markov Switching . . . . . . . . . . . . . . . . 5.6 Merging and Averaging in Split State Space . . . . . . . . . 5.6.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . .
117 119 120 121 122 123 126 128 130 132 134 134 137 138 139 139 140 140 141 150 151 152 154 156 157 159 160 161 165 169 172 173 174 175 176 182 182
xiv
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
5.6.2 Semi-Markov Processes in Split State Space . . . . . 5.6.3 Average Stochastic Systems . . . . . . . . . . . . . . 5.7 Diffusion Approximation with Split and Merging . . . . . . 5.7.1 Ergodic Split and Merging . . . . . . . . . . . . . . . 5.7.2 Split and Double Merging . . . . . . . . . . . . . . . 5.7.3 Double Split and Merging . . . . . . . . . . . . . . . 5.7.4 Double Split and Double Merging . . . . . . . . . . . 6. Weak Convergence
184 186 188 188 189 190 191 193
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3 Pattern Limit Theorems . . . . . . . . . . . . . . . . . . . . 6.3.1 Stochastic Systems with Markov Switching . . . . . . 6.3.2 Stochastic Systems with Semi-Markov Switching . . 6.3.3 Embedded Markov Renewal Processes . . . . . . . . 6.4 Relative Compactness . . . . . . . . . . . . . . . . . . . . . 6.4.1 Stochastic Systems with Markov Switching . . . . . . 6.4.2 Stochastic Systems with Semi-Markov Switching . . 6.4.3 Compact Containment Condition . . . . . . . . . . . 6.5 Verification of Convergence . . . . . . . . . . . . . . . . . .
193 193 196 196 201 205 209 209 212 213 216
Poisson Approximation
219
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.2 Stochastic Systems in Poisson Approximation Scheme . . . 7.2.1 Impulsive Processes with Markov Switching . . . . . 7.2.2 Impulsive Processes in an Asymptotic Split Phase Space . . . . . . . . . . . . . . . . . . . . 7.2.3 Stochastic Additive F'unctionals with Semi-Markov Switching . . . . . . . . . . . . . . . . . . . . . . . . 7.3 Semimartingale Characterization . . . . . . . . . . . . . . . 7.3.1 Impulsive Processes as Semimartingales . . . . . . . 7.3.2 Stochastic Additive F'unctionals . . . . . . . . . . . .
219 220 220
8. Applications I 8.1 8.2 8.3 8.4
Absorption Times . . . . . . . . . . . . . . . . . . . . . . . Stationary Phase Merging . . . . . . . . . . . . . . . . . . . Superposition of Two Renewal Processes . . . . . . . . . . . Semi-Markov Random Walks . . . . . . . . . . . . . . . . .
225 228 231 232 237 243 243 249 253 258
CON TENTS
8.4.1 8.4.2 8.4.3 8.4.4 8.4.5
Introduction . . . . . . . . . . . . . . . . . . . . . . . The algorithms of approximation for SMRW . . . . . Compensating Operators . . . . . . . . . . . . . . . . The singular perturbation problem . . . . . . . . . . Stationary Phase Merging Scheme . . . . . . . . . .
9 . Applications I1
xv
258 259 262 265 267 269
9.1 Birth and Death Processes and Repairable Systems . . . . . 9.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 9.1.2 Diffusion Approximation . . . . . . . . . . . . . . . . 9.1.3 Proofs of the Theorems . . . . . . . . . . . . . . . . . 9.2 L6vy Approximation of Impulsive Processes . . . . . . . . . 9.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 9.2.2 L6vy Approximation Scheme . . . . . . . . . . . . . . 9.2.3 Proof of Theorems . . . . . . . . . . . . . . . . . . .
269 269 270 272 276 276 278 282
Problems t o Solve
287
Appendix A Weak Convergence of Probability Measures
30 1
A.l Weak Convergence . . A.2 Relative Compactness Appendix B
..................... .....................
Some Limit Theorems for Stochastic Processes
301 303 305
B . l Two Limit Theorems for Semimartingales . . . . . . . . . . 305 B.2 A Limit Theorem for Composed Processes . . . . . . . . . . 308 Appendix C Some Auxiliary Results
311
C . l Backward Recurrence Time Negligibility . . . . . . . . . . . 311 C.2 Positiveness of Diffusion Coefficients . . . . . . . . . . . . . 312 Bibliography
315
Notation
325
Index
329
This page intentionally left blank
Chapter 1
Markov and Semi-Markov Processes
1.1
Preliminaries
The aim of this chapter is to give a brief account of basic notions on Markov and semi-Markov processes, together with martingale characterization, needed throughout this book. Further introduction to semimartingales and stochastic linear operators is also given. Let ( E , r ) be a complete, separable metric space, (that is, a Polish space), and let € be its Borel a-algebra of subsets of E 5 2 . Throughout this book, we will call the measurable space (El€)a standard state space. The space Rd, with the Euclidean metric, is a Polish space. We will denote by Bd its Borel a-algebra, d > 1, with B := B1. In this book, the space where the trajectories of processes are considered is D[O,co),the space of right-continuous functions having left side limits with Skorokhod metric We call them cadlag trajectories and cadlag processes. Let also C[O,co)be the subspace of D[O,oo), of continuous functions with the sup-norm, llzll = supt>, IIL:(t)l,IL: E C[O,co). These two spaces are Polish spaces (see Appendix AT. Let B be the Banach space, that is a complete linear normed space, of all bounded real-valued measurable functions on E , with the sup-norm 1694517011321153.
II'Pll = SUPzEE1'P(IL:)(,'P E B.
Let us consider also a fixed stochastic basis '3 = (52,F,F = (Ft, t 2 O),P),where (Ft,t 2 0) (for discrete time note that F = (Fn, n 2 0)) is a filtration of sub-a-algebras of F ,that is FsC Ft C F,for all s < t and t 2 0. The filtration F = (&, t 2 0) is said to be complete, if FOcontains 2 0. If for any t 2 0, 3 t = Ft+, all the P-null sets. Set Ft+ = ns>tFs,t then the filtration Ft,t 2 0 , is said to be right-continuous. If a filtration is complete and right-continuous, we say that it satisfies the usual conditions.
1
CHAPTER 1. MARKOV A N D SEMI-MARKOV PROCESSES
2
A mapping T : R -+ [O,+oo], such that {T 5 t } E Ft,is called a stopping time. If T is a stopping time, we denote by 3~the collection of all sets A E F such that A n {T 5 t } E Ft. An (E,E)-valued stochastic process x ( t ) , t E I , ( I = R+ or I = N), defined on the stochastic basis 3,is adapted, if for any t E I , z ( t ) is Ftmeasurable. The set of values E is said to be the state (or phase) space of the process z ( t ) ,t 2 0. Given a probability measure on (E,&), we define the probability measure P,, by
where P,(B):= B(B 1 x ( 0 ) = x). We denote by E, and E, the expectations corresponding respectively to P, and P,. We will also consider the following spaces endowed by the corresponding sup-norms:
-
B is the Banach space of real-valued measurable bounded functions cp(u,x),u E Rd, x E E ; B' := C' (Bd x E ) n B is the Banach space of continuously differentiable functions on u E Bd, uniformly on x E E , with bounded first derivative; B2 := C2(Rd x E) n B is the Banach space of twice continuously differentiable functions on u E Bd, uniformly on x € E, with bounded first two derivatives.
1.2 1.2.1
Markov Processes Markov Chains
Definition 1.1 A positive-valued function P ( z , B ) , z E E , B E &, is called a Markov transition function or a Markov kernel or transition (probability) kernel, if 1) for any fixed x E E , P ( x , . ) is a probability measure on ( E , E ) ,and 2) for any fixed B E E , P ( . , B ) is a Bore1 measurable function, that is, an ( I B)-measurable , function. If P ( x , E ) 5 1, for a x E E , then P is said to be a sub-Markov kernel. If for fixed x E E , the P ( x , .) is a signed measure, then it is said to be a
1.2. MARKOV PROCESSES
3
signed kernel. In that case, we will suppose that the signed kernel P is of bounded variation, that is,
1PI ( x , E ) < +m.
(1.1)
If E is a finite or countable set, we take E = P ( E ) (the set of all subsets of E ) , the Markov kernel is determined by the matrix ( P ( i , j ) ; i , j E E ) , with P ( i ,B ) = CjEB P ( i , j ) ,B E E .
Definition 1.2 A time-homogeneous Markov chain associated to a Markov kernel P ( x ,B ) is an adapted sequence of random variables x,, n 2 0 , defined on some stochastic basis S, satisfying, for every n E N, x E E , and B E E , the following relation
P(Z,+I
E
B I F,) = P(x,+l
E
B
I 2),
=: P ( x n , B ) ,
(as.),
(1.2)
which is called the Markov property. In most cases we consider the Markov property with respect to 3, := n ( x k , k 5 n ) , n 2 0, the natural filtration generated by the chain x,, n 2 0. The Markov property (1.2) is satisfied for any finite 3,-stopping time and it is called strong Markov property and the chain a strong Markov chain. The product of two Markov kernels P and Q defined on ( E , & ) is , also a Markov kernel, defined by
Let us denote by P ( x ,B ) = P(x, E B I xo = x) = P(x,+, E B I x, = x) the n-step transition probability which is defined inductively by (1.3). By the Markov property we get, for n, m E N ,
which is the Chapman-Kolmogorov equation. A subset B E E , is called accessible from a state x E E , if
Pz(x,
E
or equivalently P”(2,B ) > 0.
B , for some n 2 1) > 0,
CHAPTER 1. M A R K O V A N D SEMI-MARKOV PROCESSES
4
Definition 1.3 1) A Markov chain X n , n 1 0 , is called Harris recurrent if there exists a a-finite measure $ on ( E , E ) ,with + ( E )> 0, such that Px('-Jn>l{xnE A } ) = 1, z E E ,
(1.4)
for any A E & with +(A)> 0. 2) If the probability (1.4) is positive, the Markov chain is called $I-
irreducible. 3) The Markov chain is said to be uniformly irreducible if, for any A SUpPz(7A > N ) X
where
TA
:= inf{n
-
0,
N
m,
EI,
(1.5)
2 0 : zn E A } , is the hitting time of set A E E.
Definition 1.4 A Markov chain x n , n 2 0, is said t o be d-periodic (d > I), if there exists a cycle, that is a sequence (c1,..., c d ) of sets, ci E &, 1 5 i 5 d , with P(z,Cj+l)= 1, x E Cj, 1 I j 5 d - 1, and P ( z , C l ) = 1, x E Cd, such that: - the set E \ U%lCi is $-null; - if (Ci, ...,C;,) is another cycle, then d' divides d and C,! differs from a union of dld' members of (C1,..., Cd) only by a +-null set which is of type UirlV,, where, for any i 2 1, Pz(limsup{xn E K}) = 0. If d = 1 then the Markov chain is said to be aperiodic. Definition 1.5 A probability measure p on ( E ,E ) , is said to be a stationary distribution or invariant probability for the Markov chain xn, n 0, (or for the Markov kernel P ( z ,B ) )if, for any B E E ,
>
Definition 1.6 1) If a Markov chain is $-irreducible and has an invariant probability, it is called positive, otherwise it is called null. 2) If a Markov chain is Harris recurrent and positive it is called Harris positive. 3) If a Markov chain is aperiodic and Harris positive it is called (Harris) ergodic. Proposition 1.1 Let x n , n 2 0 , be a n ergodic Markov chain, then: 1 ) for a n y probability measure a o n ( E , E ) ,we have
IlaP
- pII
--f
0,
n -+
00;
1.2. MARKOV PROCESSES
5
2) f o r any cp E B, we have
for any probability measure p o n ( E , & ) . Let us denote by P the operator of transition probabilities on B defined by W X ) = E[cp(Xn+l)
I xn = I.
=
L
P ( Z , dY)cp(Y),
and denote by Pn the n-step transition operator corresponding t o P ( x , B ) . The Markov property (1.2) can be represented in the following form
Definition 1.7 Let us denote by II the stationary projector in B defined by the stationary distribution p ( B ) , B E & of the Markov chain x,, as follows
where 1 ( z )= 1 for all x E E. Of course, we have 112 = l-I.
Definition 1.8
The Markov chain xn is called uniformly ergodic if
Note that uniform ergodicity implies Harris recurrence 117,80,137,139. Moreover, the convergence in (1.7) is of exponential rate (see, e.g. 137). So the series 00
Ro
:= C [ P n -
n],
n=O
is convergent and defines the potential operator of the Markov chain x,, n 2 0, satisfying the property (see Section 1.6)
&[I-PI = [ I - P ] & = I - I I .
CHAPTER 1. MARKOV A N D SEMI-MARKOV PROCESSES
6
1.2.2
Continuous- Time Markov Processes
Let us consider a family of Markov kernels (Pt = Pt(x,B ) , t E R+) on ( E , E ) . Let an adapted (E,E)-valued stochastic process x ( t ) , t 2 0, be defined on some stochastic basis 3.
Definition 1.9 A stochastic process x ( t ) , t 2 0, is said t o be a timehomogeneous Marlcov process, if, for any fixed s , t E R+ and B E €, P(z(t+s) E B I F 3 ) = P ( x ( t + s ) E B l z ( s ) ) = P t ( z ( s ) , B ) ,(as.). (1.8) When the Markov property (1.8) holds for any finite F-stopping time 7, instead of a deterministic time s, we say that the Markov process z ( t ) ,t 1 0 , satisfies the strong Marlcov property, and that the process x ( t ) is a strong
Markov process. Definition 1.10 On the Banach space B, the operator Pt of transition probability, is defined by
This is a contractive operator (that is, llPtcpll I llcpll). The Chapman-Kolmogorov equation is equivalent t o the following semigroup property of Pt,
PtP,
= Pt+3,for
all t , s E R+.
(1.10)
The Markov process x ( t ) , t 2 0 , has a stationary (or invariant) distribution, x say, if, for any B E E ,
Definition 1.11 The Markov process a;(t),t 2 0 , is said to be ergodic, if for every cp E B, we have
for any probability measure p on ( E ,E ) . The stationary projector IT, of an ergodic Markov process with stationary distribution x, is defined as follows (see Definition 1.7)
1.2. MARKOV PROCESSES
7
where 1(x) = 1 for all x E E. Of course, we have 112 = II. Let us consider a Markov process x ( t ) , t 2 0 , on the stochastic basis 3, with trajectories in D[O,co),and semigroup (Pt, t 2 0 ) . There exists a linear operator Q acting on B, defined by 1
(1.11)
lim i(Ptcp- cp) = Qcp, tl0
t
where the limit exists in norm. Let D ( Q ) be the subset of B for which the above limit exists, this is the domain of the operator Q. The operator Q is called a (strong) generator or (strong) infinitesimal operator.
Definition 1.12 A Markov semigroup Pt,t 2 0, is said to be uniformly continuous on B, if
where I is the identity operator on B.
A time-homogeneous Markov process is said t o be (purely) discontinuous or of jump type, if its semigroup is uniformly continuous. In that case, the process stays in any state for a positive (strict) time, and after leaving a state it moves directly t o another one. We will call it a jump Murkow
process
34,56,153,165
Let x ( t ) , t 2 0 , be a time-homogeneous jump Markov process. Let n 2 0 be the jump times for which we have 0 = TO 5 TI 5 . . . 5 T, 5 . . . . A Markov process is said to be regular (non explosive),if 7, -+00, as n + 00 T,,
(see, e.g.
56).
Definition 1.13
The stochastic process x,, n 2 0 defined by 2 ,
= x(~,),
n 2 0,
is called the embedded Markow chain of the Markov process x ( t ) , t 2 0. Let P ( x ,B ) be the transition probability of x,, n 2 0. The generator Q of the jump Markov process x ( t ) ,t 2 0 , is of the form (see, e.g. 56134),
Q 4 z ) = q(x)
/
E
P(x,dy)[cp(y) - cp(x)l,
(1.12)
where the kernel P ( x ,d y ) is the transition kernel of the embedded Markov chain, and q ( x ) ,x E E , is the intensity of jumps function.
CHAPTER 1 . MARKOV AND SEMI-MARKOV PROCESSES
8
Proposition 1.2 (see, e.g. ) Let (Pt, t L 0 ) be a uniformly continuous semigroup on B, and Q its generator with domain V ( Q ) c B. Then: 1 ) the limit in (1.11) exists, and the operator Q is bounded with B(Q)= B; 2) dPtldt = QPt = PtQ; 3) Pt = exp(tQ) = I Ck-l(tQ)k/k!. 165952
+
If x ( t ) has a stationary distribution, T , then x, also has a stationary distribution, p, and we have
Let us consider the counting process
v ( t ) = max{n
: T,
5 t},
(1.13)
with maxO = 0. That gives the number of jumps of the Markov process in ( O , t ] .
Example 1.1. The generator IL of a Poisson process, with intensity X > 0, is D
D Example 1.2. Let Tn,n 2 O, be a renewal process on R+, TO = O, with distribution function F , and hazard rate of inter-arrival times 8, := 7, - Tn-1,
Let v ( t ) ,t 2 0 be the corresponding counting process, that is v(t) := sup{n : Tn
I t}.
The generator of the Markov process x ( t ) := t--7(t),t 2 0 , T(t) := T v ( t ) , is given by
M z ) = cp’(x>+ X ( X ) [ c p ( O ) - c p b ) ] ,
E
N,t
E R+,
where F ( t ) := 1 - F ( t ) . The domain of this generator is D(L) = C’(R).
1.2. M A RKOV PROCESSES
9
D Example 1.3. Let z ( t ) , t 0, be a nonhomogeneous jump Markov process, the generator of the coupled Markov process t , z ( t ) t, 2 0, is defined as follows
with D(L) = C’I’(IW+ x E ) . D Example 1.4. Let z ( t ) , t 2 0, be a pure jump Markov process (that is, without drift and diffusion part) with state space E and generator Q, and let v ( t ) , t 2 0, be the corresponding counting process of jumps and z,,n L 0 the embedded Markov chain. Let a be a real-valued measurable function on the state space E , and consider the increment process
k=l
Then the generator of the coupled Markov process a ( t ) ,z ( t ) ,t 2 0 is
IL = Q + Qo[r(z)- I ] ,
(1.14)
where:
and I is the identity operator. D Example 1.5. For the jump Markov process z ( t ) , t 2 0, as in the previous example, let us consider the process
<(t):= s 0’ a ( z ( s ) ) d s . Then the generator of the coupled process <(t), z ( t ) ,t 2 0 is
where A(z)cp(u) := a(z)cp’(u).
10
1.2.3
CHAPTER 1. MARKOV A N D SEMI-MARKOV PROCESSES
Diffusion Processes
A diffusion process is a strong Markov process in continuous time with almost surely continuous paths. This process is used in order to describe mathematically the physical phenomenon of diffusion, that is the movement of particles in a chaotic environment. In this book we will use diffusions as limit processes of stochastic systems switched by Markov and semi-Markov processes.
Definition 1.14 A Markov nonhomogeneous in time process z ( t ) , t 2 0, defined on a stochastic basis 3, with values in the Euclidean space Rd, d 2 1, with transition function P,,t(z,B) := P(z(t) E B I z(s) = x), for 0 5 s < t < 00, x E Rd, B E t&, is said to be a diffusion process, with generator ILt, if (a) it has continuous paths, and (b) for any x E Rd, and any cp E C’(W~),
Let a be a real-valued measurable function, defined on Rd x R,, and B a function defined on Wd x R+ with values in the space of symmetric positive operators from R~ to IW~. For any t 2 0, the generator Lt, of the diffusion x ( t ) , t 2 0, acts on functions cp in C2(Rd) as follows ILt(P(%)= a(x,t ) P W
1
+ p ( x ,t)(P”(x).
(1.15)
The above means that the drift coeficient a(x,t ) ,and the diffusion operator (or coefficient) B ( z ,t ) , apply as follows:
and
For a time-homogeneous diffusion the generator does not depend on t , that means functions a and B are free o f t . I f a ( z ,t ) = 0, and B ( z ,t ) = I , the x ( t ) ,t 2 0, is a Wiener process or a
Brownian motion.
1.2. MARKOV PROCESSES
11
A diffusion process with a constant shift vector and a diffusion operator B has the following representation
z ( t ) = sc(0) + t a + aw(t), where w(t), t L. 0, is a Wiener process, with IEw(t) = 0, l E ( ~ w ( t ) )=~t llz/12, z E Wd, and (T := B1I2( B = (TO*) is the positive symmetric square root of the matrix B . For the existence and weak uniqueness of a diffusion with generator (1.15), we suppose that (see, e.g. 567140i151i165 ) (a) a ( z ,t ) and B ( z ,t ) are continuous, and (b) Ila(z,t)ll+l(B(z,t)ll 5 C(l+IIzII),where C i s some positive constant. Weak uniqueness here means uniqueness of the transition function.
1.2.4
Pmcesses with Independent Increments
Let S be a stochastic basis, and x ( t ) ,t 2 0, a stochastic process adapted to S with values in Wd.
Definition 1.15 The process z ( t ) , t 2 0, is said to be with independent increments (PII), if for any s,t E R+, with s < t , z ( t )-z(s) is independent of FS. This property is equivalent to the increment independence property of the process z ( t ) ,when the filtration (Ft)is the natural filtration of the process. That is, for any n 2 1, and any 0 5 t o < tl < . . . < t, < 00, the random variables z(to),z(tl)- z(tO),...,z(tn)- z(tn-l) are independent. If, moreover, z ( t ) has stationary increments, that is, the law of z ( t )z(s) depends only on t - s, then the process z ( t )is said to be a process with stationary independent increments (PSII). The main properties of PSII are that their distributions are infinitely divisible and have the Markov property. Let
$ t ( A ) := Eexp(iA(z(t
+ s) - ~ ( s ) ) ,
be the characteristic function of increments. Then it satisfies the semigroup property
12
CHAPTER 1. MARKOV AND SEMI-MARKOV PROCESSES
The characteristic function has the following natural representation
4t (A)
= exp/t+(A)I.
The cumulant +(A), has the well known Le'vy-Khintchine formula ixz
-
137164
1 - i A ~ l ~ l ~ l < ~ } ] H ((1.16) d~),
where H is the spectral measure and satisfies the following conditions Jzl
lzl>l
H ( d z ) < m.
z 2 H W < m7
In the case where the PSII process z ( t )is of finite variation (that is, it has trajectories of finite variation), the cumulant has the following form
and the spectral measure satisfies the condition
The most important PSII processes are Brownian motion, Poisson process, and LBvy process. l 3
o Example 1.6. The Compound Poisson Process is defined by v(t)
k=l
where v ( t ) ,t 2 0, is a homogeneous Poisson process with intensity X > 0, and &, k 2 1, is an i.i.d. sequence of real random variables, independent of v ( t ) , t 2 0, with common distribution function F . Then we have F ( A ) = H(A)/X = IP(& E A ) . And the cumulant of the compound Poisson process has the form
$(A) = X
l[eixz
- l]F(dz).
(1.17)
As stated above, the PSII satisfy the Markov property, that is, the transition probabilities are generated by the Markov semigroup
1.2. MARKOV PROCESSES
Lemma 1.1 representation
(l" The ) generator ll?
13
of semigroup (1.18) has the following
(1.19)
ll?cp(u)= JR eixu$(X)@(X)dX,
for ~ ( u=)
sReixu@(X)dX,where @(A) and X2@(X) are integrable functions.
PROOF. Let us consider the semigroup (1.18) ,iX(u+z(t))
@(W
Note that, according to the L6vy-Khintchine formula, the cumulant has the asymptotic form $(A) = O(X2), as 1x1 + 00. Hence, the latter integral is convergent, uniformly on t . So, we get the derivative
By the evolutionary equation for the semigroup, we have
d --rtcp(u) = FtT(P(u).
dt
Comparing the two latter formulas, we get (1.19). The meaning of representation (1.19) is that the cumulant $(A) of a PSI1 is the symbol of the generator lr. In the particular case of a drift process z ( t ) = at, the corresponding generator is lrcp(u)= acp'(u) since for the corresponding semigroup
$(A)
It is well-known that the standard Wiener process with the cumulant = -a2X2/2 has the generator lrp(u) = a2cp"(u)/2.
Corollary 1.1 The generator of the semigroup (1.18) with the cumulant (1.16) has the following representation a2
T(P('LL)= acp'b)
-
-p%) + [cp('IL +
- cp('1L) -
vcp'(u)l(lvl~l)lH(dv).
14
CHAPTER 1. MARKOV AND SEMI-MARKOV PROCESSES
In the particular case of a compound Poisson process with the cumulant (1.17) the generator has the form
I’cp(u)= A
1.2.5
s,
+ i)- cp(u)]J’(dv).
[cp(u
Processes with Locally Independent Increments
We consider now the Markov processes with locally independent increments (PLII). It is worth noticing that such processes include strictly the independent increment processes. Here we will restrict our interest to PLII without diffusion part. These processes are called also “Piecewisedeterministic Markov processes” 34, or “jump Markov process with drift”, or “weak differentiable Markov processes” and have the same local structure as the PI1 (see 56). Roughly speaking, these processes are jump Markov processes with drift and without diffusion part. These processes are of increasing interest in the literature because of their importance in applications, for which they constitute an alternative to diffusion processes. For their detailed presentation and applications see 34. These processes are defined by the generator I’as follows
+
I’cp(u) = a(u)cp‘(u) with the intensity kernel
1 Wd
+ v)
r(u,dv) satisfying
-
cp(u)lr(ul dv),
(1.20)
the boundedness property:
r ( u , R d ) E R+. We can also write the above generator in the following form, by extracting the drift, due to the jump part, and add it into the initial drift a to obtain the drift coefficient g, that is, g ( u ) := a(.) JRd v r ( u ,dv),
+
Let us be given the Euclidean space Rd with the Bore1 a-algebra 23d and the compact measurable space ( E ,E ) . We consider the family of timehomogeneous Markov processes q ( t ; x ) ,t 2 0 , z E El with trajectories in D[O,a), with locally independent increments. These processes take values in the Euclidean space Rd ( d 2 l), and depend on the state z E El and
1.2. MARKOV PROCESSES
15
their generators are given by
These processes will be used in Chapter 2 as switched processes. A complete characterization of the above generators is given in 34. Note also that q(t;x) contains no diffusion part (see, e.g. 1. Of course, it is understood that when d > 1, we have 34951156,107
d k=l
It is worth noting that slightly changed conditions allow one to include a locally compact space of values for the switched process. The drift velocity a ( u ; x ) and the measure of the random jumps r ( u , d u ; x ) depend on the state IC E E . The time-homogeneous cadlag jump Markov process x ( t ) , t 2. 0, taking values in the state space ( E , & ) , is given by its generators (1.12)
Q 4 z ) = 4 x 1 L P ( x , d ~ ) l c p (~ )cp(z)l, where q is the intensity of jumps, which is a nonnegative element of the Banach space B(E)of real bounded functions defined on the state space E , with the sup-norm, that is llcpll := supzEElp(x)l. We will consider additive functionals of the process q ( t ; x ) ,of the following form
with generator
IL = Q + lr(z).
1.2.6 Martingale Characterization of Markov Processes Let S = (0,F ,F = (Ft, t 2 0), P) be a stochastic basis. Let ~ ( tt)2, 0, be a process, defined on S and adapted to F.
Definition 1.16 (see, e.g. A real-valued process z ( t ) , t 2 0, adapted to the filtration F , is called an F-martingale (submartingale, su70,132993)
CHAPTER 1 . MARKOV AND SEMI-MARKOV PROCESSES
16
permartingale), if IE Iz(t)l < 00, for all t
2 0, and, for s < t ,
W t ) I F8]= ~ ( s ) , ( W t )1 F8]L 4 s ) , E [ z ( t )1 Fs]I ~(s)), a.s. (1.23)
A martingale pt,t 2 0, is called square integrable, if supIEp: < 00.
(1.24)
t2o
Then the process p:, t 2 0, is an 3t-submartingale. The Doob-Meyer decomposition 132 of a square integrable martingale pt, t 2 0, is as follows CL:
=
(P)t + 4 t h
where z ( t ) ,t 2 0, is a martingale, and the increasing process ( p ) t ,t called the square characteristic of the martingale pt.
2 0, is
Definition 1.17 (see, e.g. 132) A process z ( t ) , t 2 0, adapted to F is called an F-local martingale if there exists an increasing to infinity sequence of stopping times, T, 4 +GO, such that the stopped process z"(t) = z ( t A T ~ )t ,2 0 , is a martingale for each n 2 1. It is clear that any martingale is a local martingale, by setting n 2 1. The converse is false.
T, =
n,
Let x(t), t 2 0, be a Markov process with a standard state space ( E ,I ) , defined on a stochastic basis 9.Let Pt(x, B ) , x E E , B E E , t 2 0 , be its transition function, and Pt,t 2 0, its strongly continuous semigroup defined on the Banach space B of real-valued measurable functions defined on E , with the sup-norm, (see Definition 1.10). Let Q be the generator of the semigroup Pt, t L 0, with the dense domain of definition D ( Q ) c B. For any function cp E V ( Q )and any t > 0, we have the Dynkin formula 45,34 rt
(1.25) From this formula, using conditional expectation, we get
1.2. MARKOV PROCESSES
17
Thus, the process t := cp(z(t>) - cp(z) -
is an 3:
= u(x(s),s
Qcp(z(s))ds
(1.26)
5 t)-martingale.
The following theorem gives the martingale characterization of Markov processes. Theorem 1.1 (45) Let ( E ,E ) be a standard state space and let z ( t ) ,t 2 0 , be a stochastic process o n it, adapted to the filtration F = (Ft,t >_ 0 ) . Let Q be the generator of a strongly continuous semigroup Pt,t 2 0, o n the Banach space B, with dense domain D ( Q ) c B. If for any cp E D ( Q ) , the process p ( t ) ,t 2 0, defined b y (1.26) is a n Ft-martingale, then z(t),t 2 0, is a Markov process generated by the infinitesimal generator Q .
The process z ( t ) , t2 0, is said to solve the martingale problem for the generator Q . The martingale (1.26) is a square integrable one whose the square integrable characteristic is the process:
t32)
The square characteristic of the martingale p ( t ) ,t 2 Theorem 1.2 0, (see 1.26), denoted b y ( ~ ) ~ 2 0, ,t is the process rt
PROOF. Let us denote at := s,” Q p ( z ( s ) ) d s . Then, from the representation of the martingale p ( t ) , we have p2 =
where L := 2 p a
(p
+ a)2 = p2 + 2 p a + a2= p2 + L ,
+ a2.Differentiating L , we get d L = 2dpa + 2 p Q p d s + 2 a Q p d s .
Now the martingale representation p = cp - a , gives d L = 2dpa
and, by integration,
+ 2cpQcpd~,
18
CHAPTER I. MARKOV AND SEMI-MARKOV PROCESSES
The first term is a martingale, since it is the integral with respect t o a martingale p ( s ) . It is obvious that
PI(^) := cp2(4t))-
/
t
Qp2(4s))ds,
0
is a martingale. Hence
where p2 is a martingale. So, the latter relation gives the square characteristic of the martingale p ( t ) . Let x,, n 2 0, be a Markov chain on a measurable state space ( E ,E ) induced by a stochastic kernel P ( z , B ) , 1c E E , B E E. Let P be the corresponding transition operator defined on the Banach space B. Let us construct now the following martingale as a sum of martingale differences n-1
p n = C [ ( P ( X ~ -+E~( ()P ( x ~ +I~-7=k)I. )
(1.27)
k=O
By using the Markov property and the rearrangement of terms in (1.27) the martingale takes the form n- 1
p n = cp(xn) - ~ ( x o-)
C[P - IICP(S~)*
(1.28)
k=l
This representation of the martingale is associated with a Markov chain characterization.
Lemma 1.2 Let x,,n 2 0, be a sequence of random variables taking n 2 0. values in a measurable space ( E ,E ) and adapted to the filtration 3n, Let P be a bounded linear positive operator o n the Banach space B induced by a transition probability kernel P ( x ,B ) on ( E ,E ) . If for every cp E B, the right hand side of (1.28) is a martingale pn,Fn,n L. 0, then the sequence xn, n 2 0 , is a Markov chain with transition probability kernel P ( x ,B ) induced b y the operator P .
19
1.3. SEMI-MARKOV PROCESSES
PROOF. Using (1.28) we have n- 1
E[Pn I 372-11 = wP(zn) I
Fn-11 - d z o ) -
C [ P-
IIdZk)
k=l
I
= ~ [ c p ( z n )Fn-11 - W Z k - 1 )
= Pn-I
+ W(P(zn)I 3,-11-
So, the martingale property E[pn Markov property
I
IEE[(P(zn+dI Fnl = wP(zn+l)
=
+ P(GL-1)- c p ( ~ o >
PP(2k-1). ~ ~ - is 1 equivalent ,
I znl = P d z n ) .
to the (1.29)
0 By the definition of square characteristic of martingale it is easy to check that n-1
(P)n =
C[PP2(Xk) - (Pcp(Zk))21.
(1.30)
k=O
1.3
Semi-Markov Processes
The semi-Markov process is a generalization of the Markov and renewal processes. We will present shortly definitions and basic properties of semiMarkov process useful in the sequel of the book (see, e.g. lZ7,ll6). 1.3.1
Markov Renewal Processes
Definition 1.18 A positive-valued function Q ( z , B , t ) ,z E E , B E E , t E R+, is called a semi-Markov kernel on ( E ,E ) if (i) Q ( z , B , . ) ,for z E E , B E E , is a non-decreasing, right continuous real function, such that Q(z, B , 0) = 0; (ii) Q(., ., t ) , for any t E R+, is a sub-Markov kernel on ( E ,E ) ; (iii) P ( . ,.) = &(., ., m) is a Markov kernel on ( E , E ) . For any fixed z E E , the function F,(t) := Q(z, E , t ) is a distribution function on R+. By Radon-Nikodym theorem, as Q << P there exists a positive-valued function F ( z ,y, .), such that (1.31)
20
CHAPTER 1 . MARKOV AND SEMI-MARKOV PROCESSES
We consider a special class of semi-Markov processes where F ( z ,y, t ) does not depend on the second argument y, we have F ( z ,y, t ) =: F,(t). Nevertheless, any semi-Markov process can be transformed in one of the above kind, (see, e.g. 127), by representing the semi-Markov kernel Q as follows (1.32)
Q(z,B , t ) = p ( z ,B)Fz(t).
Let us consider a (ExIR+,€@B+)-valued stochastic process (zn, 7,; n 2 0), with TO I T I I . .. I T, I T,+I I . . . . Definition 1.19 A Markov renewal process is a two component Markov chain, x,,~,, n L 0 , homogeneous with respect to the second component with transition probability defined by a semi-Markov kernel Q as follows,
P(%+l E
B,Tn+1 -7,
5 t I Fn) = p(zn+l E B,Tn+l
-7,
= Q(zn,B , t ) (a.s.1,
for any n L 0, t 2 0, and B E
I t 1%) (1.33)
B+.
Let us define the counting process of jumps v ( t ) , t 1 0, by
v(t) = sup{n 2 O : 7, 5 t } , that gives the number of jumps of the Markov renewal process in the time interval (0,t ] . Definition 1.20 ing relation
A stochastic process z ( t ) , t 2 0 , defined by the follows ( t )= X u ( t ) ,
t 2. 0,
is called a semi-Markov process, associated to the Markov renewal process zn77-71, 2. 0.
Remark 1.1. Markov jump processes are special cases of semi-Markov processes with semi-Markov kernel Q(z,B , t ) = P ( z ,B ) [ 1- e-q(z)t]. n + xk=l
Let On := T, - ~ ~ - 1that , is T, = TO &. The random variable O, 2 E E , will denote the sojourn time in state z. The process xn,On,n 2 0 , will be called also a Markov renewal process.
1.3. SEMI-MARKOV PROCESSES
21
It is worth noticing that jump Markov and semi-Markov processes consider in this book are regular, that is 127756,
P(v(t) < m) = 1, for every t 2 0.
1.3.2
Marlcov Renewal Equation and Theorem
Let Q1 and Qz be two semi-Markov kernels on tion, denoted by Q1 * Q 2 , is defined by
(I?,&).Then their convolu-
where x E E l t E R+, B E E. The function Q1 * Qz is also a semi-Markov kernel. For a semi-Markov kernel Q on ( E ,E ) , we set by induction Q1 = Q ,
Qnfl =
&*&",
n 2 0,
(1.35)
and
(1.36) We prove easily that
(1.37) Note that
Q"(x, B , t ) = P(x, E B , 7, 5 t I xo = x). Let us now consider two real-valued functions U ( x ,t ) and V ( x ,t ) defined on E x R+. Definition 1.21
U ( X t, ) -
The Markov renewal equation is defined as follows
/ /'
Q ( x ,dy, d s ) U ( y , t - S) = V ( Xt,) , x E E l
(1.38)
E O
where U is an unknown function and V a given function. The above Markov renewal equation can be also written as follows
U ( t ) - l Q ( d s ) U ( t- s ) = V ( t ) ,
CHAPTER 1. MARKOV AND SEMI-MARKOV PROCESSES
22
which is the usual form in the scalar case of the classical renewal equation on the half-real line t 2 0. By using convolution *, this equation can be written as follows [I-Q]*U=V
where I is the identity operator: I Theorem 1.3
or
U=V+Q+U,
+U
=U
+I
(Markov renewal theorem 15’)
= U.
Let the following conditions
hold: C1: the stochastic kernel P ( z ,B ) = Q(z,B , 00) induces an irreducible ergodic Markov chain with the stationary distribution p, C 2 : the mean sojourn times are uniformly bounded, that is: 03
m ( z ):= and m :=
F x ( t ) d t F C < +m,
s,
p ( d z ) r n ( z )> 0 ,
C3: the distribution functions F x ( t ) := Q ( x ,E , t ) , 3: E E , are non arithmetic (that is, not concentrated on a set { n u : n E N}, where a > 0, is a constant; the largest a is called the span of distribution). C4: the nonnegative function V ( x lt ) is direct Riemann integrable47 on R+, so
Then Equation (1.38) has a unique solution U ( x , t ) , and the following limit result holds
Let us apply the above theorem in order to obtain the limit distribution of the semi-Markov process. The transition probabilities of the semi-Markov process
P t ( z , B )= P ( x ( t ) E B I x ( 0 ) = z), satisfy the following Markov renewal equation
(1.39)
1.3. SEMI-MARKOV PROCESSES
23
Now, applying the renewal limit theorem t o the above equation, we get
r ( B ):= lim P t ( z , B )= t-w
p(dz)m(s)/m, B
E E.
The limit distribution r ( B )is said t o be the stationary distribution of the semi-Markov process z(t). 1.3.3
Auxiliary Processes
The following auxiliary processes will be used:
T(t)= T v ( t ) , T+(t)= T u + ( t ) , v+(t) = v(t)+ 1, ')'(t)= t - T(t), ')'+(t) = T+(t)- t.
(1.41)
The distributions of the auxiliary processes (1.41) satisfy the Markov renewal equation (1.38) with certain function V ( z , t ) .Let us consider the distribution function of the remaining sojourn time y+(t),which will be used in the phase merging principle (see Chapter 4),
aZ(u,t ) := P(y+(t)5 u I s(0) = x). The right-hand side of the Markov renewal equation (1.38) is calculated as follows:
Now, the Markov renewal theorem yields the following limit result:
=
Lu
F(t)dt/m,
24
CHAPTER 1. MARKOV A N D SEMI-MARKOV PROCESSES
where, by definition,
It is worth noticing that the limit remaining sojourn time y+ distribution, naturally called the stationary renewal t i m e distribution, is defined by
F+(u):= P(y+ 5 u)=
IU-
F(t)dt/m,
with density
1.3.4
Compensating Operators
The compensating operator is a basic important device in our analysis of stochastic systems switched by semi-Markov processes. Let us consider a Markov renewal process
where z,
= Z(T,),
and
~,+1 = T,
+ &+I,
n 2 0, and
Definition 1.22 (174) The compensating operator IL of the Markov renewal process (1.42) is defined by the following relation
where q(x) = l/rn(z), m ( x ) = EB, = J,"Fz(t)dt,
and
Of course, by the homogeneity property of the Markov renewal process, we have
1.4. SEMIMARTINGALES
25
Proposition 1.3 The compensating operator (1.43) of the Markov renewal process (1.42) can be defined by the relation
ILdx, t ) = q ( x )
O
/
E
Q(xld y , ds) [CP(Y,t + s> - d x ,41
If q(x) = 0, then lL'p(x,t)= 0. The claim of Proposition 1.3 follows directly from Definition 1.22.
Martingale Processes
1.3.5
Characterization of Markov Renewal
Let x ( t ) ,t 2 0 , be a semi-Markov process and let xn,T ~n ,2 0 , be the corresponding Markov renewal process, and IL be the compensating operator. Define the process Cn, n 2 0 , by n
Cn := v(xn,Tn) -
z(~i
- ~ i - l ) ~ ( z i - l y ~ i - l )n ,
2 0,
i=l
and
Bn
:= ~ ( x k~ , k k ;5 n ) ,n
2 0.
p74)
Proposition 1.4 The process Cn,n 2 0, is a &-martingale sequence for any function 'p E B ( E x R+), such that iE, Ip(x1, TI)^ < +m.
1.4
Semimartingales
Let us consider an adapted stochastic process x ( t ) , t 2 0 , on the stochastic basis 9, with trajectories in the space D[O,+m). Set Ro = R \ (0).
Definition 1.23
(70) The
stochastic process z(t), t 2 0, is said to be:
(1) a semimartingale, if it has the following representation
z ( t )= 2 0
+p ( t ) + a(t),
t 2 0,
(1.46)
where zo = x(0) is a finite &-measurable random variable, p ( t ) is a local martingale, with po = 0, and a ( t )is a bounded variation process, with a(0)= 0; (2) a special semimartingale, if it has the representation (1.46) with a(t)a predictable proce~s.'~
CHAPTER 1. MARKOV AND SEMI-MARKOV PROCESSES
26
Then we note x E S and x E S, respectively. The representation (1.46), for a special semimartingale is unique. A semimartingale with bounded jumps is a special semimartingale. While the representation (1.46) for a semimartingale is not unique, the continuous martingale part is unique. Let h be a truncation function, that is, h : R,d R,d, bounded with compact support and h ( z ) = x in a neighborhood of 0. Let x ( t ) , t 2 0 , be a d-dimensional semimartingale. For a h e d truncation function h, let us consider the processes: ---f
X h ( t ) = x ( t ) - &(t),
t 2 0,
where A z ( s ) := x ( s )-z(s-), and ~ ( d sdu) , is the measure of jumps of x ( t ) . Since 2 h ( t ) is of bounded variation, the process xh(t) is a semimartingale and has bounded jumps, consequently, it is a special semimartingale, and has the canonical representation Zh(t)
=20
+ Mh(t) + Bh(t),
t 1 0,
(1.47)
where Mh(t) is a local martingale, with Mh(0) = 0 and Bh(t) is a predictable process of bounded variation. Definition 1.24 (70) For a fixed truncation function h, we call a triple of predictable characteristics, with respect to h, the triplet T = ( B , C , v ) , of which the semimartingale z ( t ) , t 2 0, has the following representation 70,132.
where:
- B - C
is the predictable process in (1.47), = ( d j ) is a continuous process of bounded variation, with cij = < 2zC,zjc >, which is the predictable process of (xic,xjc),that is, the . . process x’cx~c-< zit, zjc > is a local martingale; = Bh,
14. SEMIMA RTINGA LES
27
compensator of the measure p of jumps of x ( t ) ,that is a predictable measure on W+ x Wt.
- u is the
It is convenient to introduce the second modified characteristic ?i =
($), by
We will use semimartingales as a tool to establish Poisson approximation results (see Chapter 7).

▷ Example 1.7. Brownian motion. Let w(t), t ≥ 0, be a Wiener process with w(0) = 0. It is a local martingale with ⟨w, w⟩_t = σ²(t). Its predictable characteristics are (B, C, ν) = (0, σ²(t), 0).

▷ Example 1.8. Gaussian process. Let x(t), t ≥ 0, be a Gaussian process. We have (B, C, ν) = (E x(t), E(x(t) − E x(t))², 0).

▷ Example 1.9. Generalized diffusion (62). Let us consider Borel functions a ≥ 0 and b defined on R_+ × R, and a family of transition kernels K_t, t ≥ 0, on (R, B), satisfying the condition

∫_R (1 ∧ y)² K_t(x, dy) < +∞.

Let x(t), t ≥ 0, be a semimartingale with predictable characteristics (B, C, ν) given by

B(t) = ∫_0^t b(s, x(s)) ds,  C(t) = ∫_0^t a(s, x(s)) ds,  ν(dt, dy) = dt K_t(x(t), dy).

In that case, the semimartingale x(t), t ≥ 0, is said to be a generalized diffusion. If a(t, x), b(t, x) and K_t(x, B) do not depend upon t, then it is called a time-homogeneous generalized diffusion.
For a time-homogeneous generalized diffusion x(t), t ≥ 0, the infinitesimal generator L acts on functions φ ∈ C²(R) as follows

Lφ(x) = b(x)φ′(x) + (1/2) a(x) φ″(x) + ∫_R K(x, dy)[φ(x + y) − φ(x) − h(y)φ′(x)].   (1.48)

The triplet (b, a, K) is called the infinitesimal characteristics of the time-homogeneous generalized diffusion.

▷ Example 1.10. Processes with stationary independent increments. For the processes with stationary independent increments given in Section 1.2.4, with cumulant function ψ(λ) given in (1.16), we have (B_t, C_t, ν_t(dv)) = (at, σ²t, t H(dv)).
1.5 Counting Markov Renewal Processes
In this section, we consider counting processes as semimartingales. Let x_n, θ_n, n ≥ 0, be a Markov renewal process taking values in E × [0, +∞), defined by the semi-Markov kernel Q(x, B, t). So, the components x_{n+1} and θ_{n+1} are conditionally independent given x_n. The renewal moments are defined by τ_n = θ_1 + ... + θ_n, n ≥ 1, τ_0 = 0. The counting process is defined by

ν(t) = max{n ≥ 1 : τ_n ≤ t}.

Definition 1.25 (see, e.g. 21, 70, 91, 133) An integer-valued random measure
for the Markov renewal process x_n, τ_n, n ≥ 0, is defined by the relation

μ(dx, dt) = Σ_{n≥1} δ_{(x_n, τ_n)}(dx, dt) 1(τ_n < +∞),   (1.49)

where δ_a is the Dirac measure concentrated at the point a. It is worth noticing that the random measure (1.49) defines the multivariate point process x_n, τ_n, n ≥ 0 (see, e.g. 70). By Theorem III.1.26 of 70, there exists a unique predictable random measure μ̃(dx, dt) which is the compensator of the measure μ(dx, dt), that is, for any nonnegative continuous function w(x, t), the process

∫_0^t ∫_E w(x, s)[μ(dx, ds) − μ̃(dx, ds)],  t ≥ 0,

is a local martingale (Section 1.2.6) with respect to the natural filtration

F_t := σ(x(s), s ≤ t),  t ≥ 0,   (1.50)

of the corresponding semi-Markov process x(t), t ≥ 0. Certainly, there exists a unique predictable increasing process ν̃(t) which is the compensator of the counting process ν(t), t ≥ 0, that is, the process μ(t) = ν(t) − ν̃(t), t ≥ 0, is a local martingale with respect to the filtration (1.50). Note that

ν̃(dt) = μ̃(E, dt).   (1.51)
Moreover, it is possible to give a constructive representation of the compensator (1.51) for the Markov renewal process, at least when the family of distributions F_x(t), x ∈ E, is absolutely continuous with respect to the Lebesgue measure on R_+ and has the representation

F̄_x(t) := 1 − F_x(t) = exp[−Λ(x, t)],  Λ(x, t) = ∫_0^t λ(x, s) ds.   (1.52)
Proposition 1.5 The compensator ν̃(t), t ≥ 0, of the counting process ν(t), t ≥ 0, of a Markov renewal process can be represented as follows

ν̃(t) = ∫_0^t λ(x(s), γ(s)) ds,

where γ(s) := s − τ(s), τ(s) := τ_{ν(s)}.
It is worth noticing that the compensator of the counting Markov renewal process is a stochastic integral functional of the Markov process x(t), γ(t), t ≥ 0 (see Section 2.2).

PROOF. Introduce the conditional distributions of the Markov renewal process x_n, τ_n, n ≥ 0. By Theorem III.1.33 of 70, the compensating measure of the multivariate point process (1.49) is represented as a sum over the renewal intervals (τ_n, τ_{n+1}], n ≥ 0, of the compensators of the conditional distributions of (x_{n+1}, τ_{n+1}) given F_n. The compensator of the counting process ν(t), t ≥ 0, is then obtained by integrating this measure over x ∈ E. Under the absolute-continuity assumption (1.52), on each interval τ_n ≤ t < τ_{n+1} the corresponding intensity equals λ(x_n, t − τ_n) = λ(x(t), γ(t)).
Hence, the representation of Proposition 1.5 follows. □

Corollary 1.2 The compensator of the counting process of a Markov jump process with intensity function q(x), x ∈ E, is represented as follows

ν̃(t) = ∫_0^t q(x(s)) ds.
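As a quick numerical illustration of Proposition 1.5 and Corollary 1.2, the following sketch (a minimal Python example with a hypothetical two-state jump intensity q) simulates a Markov jump process and checks that ν(T) − ∫_0^T q(x(s)) ds behaves like a mean-zero martingale fluctuation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state Markov jump process on E = {0, 1}
q = np.array([1.0, 3.0])          # sojourn intensities q(x)
P = np.array([[0.0, 1.0],         # embedded chain transition matrix
              [1.0, 0.0]])

def compensated_count(T=50.0, x0=0):
    """Return nu(T) - integral_0^T q(x(s)) ds for one trajectory."""
    t, x, nu, comp = 0.0, x0, 0, 0.0
    while True:
        theta = rng.exponential(1.0 / q[x])   # sojourn time in state x
        if t + theta > T:
            comp += q[x] * (T - t)            # last, incomplete sojourn
            return nu - comp
        t += theta
        comp += q[x] * theta                  # integral of q(x(s)) ds
        nu += 1                               # one counted renewal (jump)
        x = rng.choice(2, p=P[x])             # next state of the embedded chain

samples = [compensated_count() for _ in range(2000)]
print("mean of nu(T) - compensator:", np.mean(samples))   # should be close to 0
```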
1.6 Reducible-Invertible Operators
Let B be the Banach space of real-valued measurable functions on the state space E, with the sup-norm ‖·‖. Let Q : B → B be a linear operator acting on B, and denote by

D_Q := {φ : φ ∈ B, Qφ ∈ B},

the domain of Q, by

R_Q := {ψ : ψ = Qφ, φ ∈ B},

the range of Q, and by

N_Q := {φ : Qφ = 0, φ ∈ B},

the null space (kernel) of Q. An operator Q is said to be bounded if there exists a constant C > 0 such that ‖Qφ‖ ≤ C‖φ‖, φ ∈ D_Q. The least of these constants is called the norm of the operator Q and is denoted by ‖Q‖. The operator norm is

‖Q‖ = sup{‖Qφ‖ : φ ∈ D_Q, ‖φ‖ ≤ 1}.   (1.55)
Let Q : B → B be a linear operator that maps D_Q onto R_Q one-to-one. Then a linear operator Q⁻¹ is defined as a map of R_Q onto D_Q which satisfies the following conditions:

Q⁻¹Qφ = φ,  φ ∈ D_Q,   QQ⁻¹ψ = ψ,  ψ ∈ R_Q.

The operator Q⁻¹ is defined uniquely and is called the inverse operator.

Definition 1.26 (116)

(1) An operator Q is said to be densely defined if its domain of definition is dense in B, that is, D̄_Q = B (D̄_Q is the closure of D_Q).
(2) An operator Q is said to be closed if for every convergent sequence x_n → x with Qx_n → y, as n → ∞, it follows that x ∈ D_Q and Qx = y.
(3) A bounded linear operator Q is said to be reducible-invertible if the Banach space B can be decomposed into a direct sum of two subspaces, that is,

B = N_Q ⊕ R_Q,   (1.56)
where the null space has nontrivial dimension, dim N_Q ≥ 1.
(4) A densely defined operator Q : B → B is said to be normally solvable if its range R_Q is closed.

Remark 1.2. 1) A reducible-invertible operator is normally solvable. 2) The decomposition (1.56) generates the projector Π onto the subspace N_Q; the operator I − Π is the projector onto the subspace R_Q, where I is the identity operator in B.

Lemma 1.3 (100) If the linear operator Q is normally solvable, then the operator Q + Π has an inverse.
PROOF. Applying the projector Π to both sides of the equation

[Q + Π]φ = ψ,   (1.57)

and since ΠQφ = QΠφ = 0, we get Πφ = Πψ. On the other hand, rewriting (1.57), we have

Qφ = [I − Π]ψ ∈ R_Q.

This equation, since Q is a normally solvable operator, has a solution, which is the solution of (1.57). □

Definition 1.27 (116) Let Q be a reducible-invertible operator. The operator

R_0 := Π − (Q + Π)⁻¹   (1.58)

is called the potential operator, or the potential, of Q. It is easy to check that the potential can also be written as follows

R_0 = (Π − Q)⁻¹ − Π.   (1.59)

Proposition 1.6 The following equalities hold:

QR_0 = R_0Q = Π − I,   (1.60)
ΠR_0 = R_0Π = 0,   (1.61)
QR_0ⁿ = R_0ⁿQ = −R_0ⁿ⁻¹,  n ≥ 2,   (1.62)
‖R_0‖ = ‖Q_0⁻¹‖.   (1.63)
Equation (1.60) is called the Poisson equation. The right-hand side of this equation is sometimes defined as I − Π (see, e.g. 116).

Proposition 1.7 Let Q : B → B be a reducible-invertible operator. Then the equation

Qφ = ψ,   (1.64)

under the solvability condition Πψ = 0, has a general solution with the representation

φ = −R_0ψ + φ_0,  φ_0 ∈ N_Q.
If, moreover, the condition Πφ = 0 holds, Equation (1.64) has a unique solution represented by

φ = −R_0ψ.   (1.65)
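For a finite state space the objects above reduce to matrices, and the Poisson equation can be checked numerically. The sketch below (a minimal numpy example with a hypothetical three-state generator) builds the stationary projector Π, the potential R_0 = (Π − Q)⁻¹ − Π, and verifies (1.60), (1.61) and the solution formula (1.65).

```python
import numpy as np

# Hypothetical generator of an ergodic three-state Markov jump process
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 2.0,  2.0, -4.0]])

# Stationary distribution pi: solves pi Q = 0, sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.concatenate([np.zeros(3), [1.0]])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

Pi = np.outer(np.ones(3), pi)            # projector: (Pi f)(x) = sum_y pi(y) f(y)
R0 = np.linalg.inv(Pi - Q) - Pi          # potential operator, formula (1.59)

I = np.eye(3)
print(np.allclose(Q @ R0, Pi - I))                        # Poisson equation (1.60)
print(np.allclose(Pi @ R0, 0), np.allclose(R0 @ Pi, 0))   # equalities (1.61)

# Solve Q phi = psi with Pi psi = 0; the solution with Pi phi = 0 is -R0 psi
psi = np.array([1.0, -1.0, 0.0])
psi = psi - Pi @ psi                     # enforce the solvability condition
phi = -R0 @ psi
print(np.allclose(Q @ phi, psi))         # True, cf. (1.65)
```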
For a uniformly ergodic Markov chain, the operator Q := P − I (where I is the identity operator) is reducible-invertible (100), that is, B = N_Q ⊕ R_Q, with dim N_Q = 1. For a uniformly ergodic Markov process with generator Q and semigroup P_t, t ≥ 0, the potential R_0 is a bounded operator defined by

R_0 = ∫_0^∞ (P_t − Π) dt,   (1.66)

where the projector operator Π is defined as follows

Πφ(x) := ∫_E ρ(dy) φ(y) 1(x),

with ρ(B), B ∈ E, the stationary distribution of the Markov chain, and the indicator function 1(x) = 1, for any x ∈ E.
Definition 1.28 Let Q^ε : B → B, ε > 0, be a family of linear operators. We say that it converges in the strong sense, as ε → 0, to the operator Q, if

lim_{ε→0} ‖Q^ε φ − Qφ‖ = 0,  φ ∈ B,

and we write s-lim_{ε→0} Q^ε = Q.
Chapter 2
Stochastic Systems with Switching

2.1 Introduction
This chapter deals with switched-switching processes. When real systems are studied, an important problem arises, connected to the fact that the local characteristics of the systems are not fixed but depend upon random factors. To handle this problem, one describes the random changes of the local characteristics by a stochastic process, called a switching process, or driving or modulation process (2, 3). In applications, switching processes can represent the environment or, in the particular case of dynamic reliability, the structure of the system (36). Usually, the switching process is assumed to be an ergodic process. Nevertheless, in many practical problems non-ergodic switching processes have to be considered, for example when the system is observed up to the hitting time of some subset of the state space. Switched processes were first considered by Ezhov and Skorokhod (46). Starting with a basic process, say η(t; x), t ≥ 0, x ∈ E, where x is a parameter, one considers an additional process, say x(t), t ≥ 0, with values in the set E. Then one considers the composed process η(t; x(t)), t ≥ 0, meaning that the process η(t; x) depends on the current state of the process x(t) at time t. The process η(t; x) is said to be a switched process, and x(t) a switching process. The switched process is also called a modulated process, or a process driven by the process x(t) (100, 91, 46). Such a composition of stochastic processes is very useful for studying real problems. A switching process can represent the variation of the environment, the structural evolution of systems, etc. A switched process can represent the level of performance of stochastic systems, the flux of materials or information, the values of parameters such as temperature, pressure, etc. A very
large literature exists concerning this kind of problem. It includes additive stochastic functionals, stopped processes, hidden processes in statistics, etc. In our case, the switching process x(t) will be a Markov or a semi-Markov process. The present chapter includes stochastic evolutionary systems, stochastic additive functionals, increment processes, Markov additive semimartingales, impulsive processes, and finally random evolutions, which generalize all the previous processes. The main object of this chapter is to introduce the generators and compensating operators of the coupled switched-switching processes, which are the main tools for the proofs of weak convergence presented in Chapters 3-6.

2.2 Stochastic Integral Functionals
A stochastic model of systems with switching is constructed using a real-valued measurable bounded function a(x), x ∈ E, and a regular càdlàg semi-Markov process x(t), t ≥ 0, on the measurable space (E, E), given by a semi-Markov kernel (see Section 1.3.1)

Q(x, B, t) = P(x_{n+1} ∈ B, θ_{n+1} ≤ t | x_n = x) = P(x, B)F_x(t),  x ∈ E, B ∈ E, t ≥ 0.
Definition 2.1 A stochastic integral functional with semi-Markov switching is represented by

U(t) = u + ∫_0^t a(x(s)) ds,  t ≥ 0,  u ∈ R.   (2.1)

Using the Markov renewal process x_n, τ_n, n ≥ 0 (see Section 1.3.1), the stochastic integral functional (2.1) can be represented as follows

U(t) = u + Σ_{k=0}^{ν(t)−1} a(x_k) θ_{k+1} + γ(t) a(x(t)),  t ≥ 0,  u ∈ R,   (2.2)

where γ(t) := t − τ(t) and τ(t) := τ_{ν(t)}.
Uz(t;uo) = uo
+ a(z)t,
t 2 0,
(2.4)
and satisfy the semigroup property
+
U z ( t t';uo) = U,(t; U,(t';uo)).
(2.5)
This property can be described by the family of semigroup operators
A,(z)cp(u):= cp(Uz(G u)), t 2 0, z E E ,
(2.6)
defined on the Banach space C(R) of real-valued bounded continuous functions cp(u),u E R, with the sup norm llcpll := supuERIcp(u)I. The semigroup property of the operators in (2.6) take the form A t + y ( ~ := ) A , ( z ) A t ) ( z ) , t,t' 2 0 , E~E ,
(2.7)
which can be verified by using (2.5). It is worth noticing that the semigroup (2.6) is continuous and contractive. That is, for any z E E , limAt(z) = I, and IIAt(z)ll 5 1.
t+O
Hence, the generators (Section 1.2.2) h ( z ) p ( u ):= limt-l[At(z) - I]cp(u), z E E , tl0
(2.8)
exist on the test functions cp E C1(R),the real-valued functions with bounded continuous first derivatives.
Lemma 2.1 by
The generators (2.8) of the semigroups (2.6) are represented (2.9)
W.)cp(u) = a(z)cp'(u).
PROOF. Indeed, by definitions (2.6) and (2.4), we get A(z)cp(u) = limt-l[cp(u tl0
+ a ( z ) t )- cp(u)]
= a(z)cp'(u).
0 Relation (2.6) determines the evolution
@T(u) := At(z)cp(u) = p(UZ(t;u)), t 2 O,Z %(u> = cp(u>,
E
E , u E R,
(2.10)
in the Banach space C(R). The operators At(z) transform an initial test function ~ ( uinto ) @P;(u).The evolution (2.10) can be determined by a solution of the evolution equation = A(z)@P;(u),
(2.11)
%(u) = (P(u), or, in another form (see (2.9))
$@W = (2.12)
@E().
= cp(u>*
Definition 2.2 The random evolution for integral functional is defined on a test function cp E C(R) by the relation
apt(.) := p ( U ( t ) ) ,
t 2 0,u E R,
(2.13)
where V ( t ) , t 0, is the integral functional (2.1).
Lemma 2.2 lowing form
The random evolution (2.13) can be represented in the fol-
n
u(t)-l
at(.) = A ~ ( t ) ( z ( t ) )
(zk)(P(u),
t 2 0,
E R,
(2.14)
k=O
and satisfies the evolution equation %@t(.)
=A ( 4 W t ( 4 ,
t L 0,
@o(u)= cp(u>.
Indeed, if v(t) = 0, then y ( t ) = t , z ( t )= 2,hence in (2.14) at(.) = At(z)cp(u)= (P(U(t)>.
Next, the formula (2.14) can be proved by induction using (2.2). The characterization of the stochastic integral functional with semiMarkov switching is realized by using the compensating operator for the extended Markov renewal process
U, := U(T,),
z, := Z(T,),
T,,
n 2 0,
(2.15)
where ~ , , n2 0, are the renewal jump times of the semi-Markov process z ( t ) t, 2 0.
Definition 2.3 The compensating operator of the extended Markov renewal process (2.15) is defined by the relation
Lp(u, 2, t ) = E[p(U1,X l , 71)- p(.,
2, t )
I uo = ,.
20
= x,70 = tI/m(x).
(2.16) It is easy to verify that the compensating operator has the following homogeneous property ~ ( u2, t, ) = E[p(un+1,zn+l, ~ n + 1) ~ ( uz,, t ) I u n = u,x n = z, 7, = t I / m ( z ) ,
where m ( x ) := EB, =
s,”[l
- Fx(t)]dt.
Lemma 2.3 The compensating operator (2.16) can be represented in the following f o n n
where A,(x),s 1 0 , x E E , are the semigroups defined in (2.6) by the generators A(x),x E E in (2.9), and q(x) = l/m(x). The transformation of the compensating operator can be realized as follows. By definition the compensating operator, acting on functions p(u, x), is given by the following relation
or, in a symbolic form
IL = q[IF(x)P- I ] , where 00
P(x) :=
F,(ds)A,(z).
The first step of transformation is the following
IL = Q -I-[p(x) - I]Qo, where Q := q[P- I ] ,
(2.17)
is the generator of the associated Markov process, and
1
Qocp(z)= 4 2 )
P(zldy)cp($).
E
The second step of transformation consists in using the integral equation for the semigroup
A,(z) - I = A(z)
A,(z)dv.
For the second term in (2.17), we obtain
F,(ds)[A,(z) -I]
1
00
= A(z)
Fz(s)A3(z)ds.
So, we get the equivalent representation
F(s)
- I = A(Z)F(~)(Z),
where, by definition,
So doing, we have proven the following result. Lemma 2.4 The compensating operator of the extended Markov renewal process (2.15) is represented as follows
L = Q + A(x)F^{(1)}(x)Q_0.   (2.18)

2.3 Increment Processes
The discrete analogue of the integral functional considered in Section 2.2 is the increment process defined by the sum over the embedded Markov chain x_n, n ≥ 0,

α(t) = α(0) + Σ_{n=1}^{ν(t)} a(x_n),  t ≥ 0,   (2.19)

with the given real-valued measurable bounded function a(x), x ∈ E. The counting process

ν(t) := max{n ≥ 0 : τ_n ≤ t},  t ≥ 0,   (2.20)

is defined by the renewal moments τ_n, n ≥ 0, of the switching semi-Markov process x(t), t ≥ 0. Introduce the family of shift linear operators on the Banach space C(R)

D(x)φ(u) = φ(u + a(x)),  x ∈ E,  u ∈ R.   (2.21)
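A minimal simulation sketch of (2.19)-(2.20) (hypothetical two-state example, Python): the increment process jumps by a(x_n) at each renewal moment of the switching semi-Markov process, so it can be generated from the embedded chain and the sojourn times alone.

```python
import numpy as np

rng = np.random.default_rng(2)

a = np.array([1.0, -2.0])                   # hypothetical increment function a(x)
P = np.array([[0.2, 0.8], [0.5, 0.5]])      # embedded Markov chain kernel

def increment_process(T, alpha0=0.0, x0=0):
    """alpha(T) = alpha(0) + sum_{n=1}^{nu(T)} a(x_n), cf. (2.19)-(2.20)."""
    t, x, alpha = 0.0, x0, alpha0
    while True:
        t += rng.exponential(1.0)           # hypothetical sojourn law F_x = Exp(1)
        if t > T:
            return alpha
        x = rng.choice(2, p=P[x])           # x_n: next state of the embedded chain
        alpha += a[x]                       # jump of size a(x_n) at renewal moment tau_n

print(increment_process(T=20.0))
```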
Definition 2.4 The random evolution associated with the increment process a(t),t 2 0, is defined by the relation
at(.)
:= cp(a(t)), a(0)= 21.
(2.22)
Clearly, the random evolution (2.22) can be represented in the following form
Indeed, for t < 7 1 , by definition
Next, for
T,
5 t < T,+I, from (2.23) and (2.21)
that is Equation (2.22). The recursive relation for the random evolution (2.23) (2.24)
provides the following additive representation of the random evolution (2.23)
44 t 2 0. @t(u)= cp(u)+ C [ D ( S k )- 4@7k--1(4,
(2.25)
k=l
In what follows it will be useful t o characterize the increment process by the generator of the coupled increment process
Let the switching process x ( t ) , t L 0, be Markovian and defined by the generator (2.27)
Proposition 2.1 The coupled increment process (2.26) is also Markovian and can be defined by the generator
where
Let the switching semi-Markov process z ( t ) , t 2 0, associated t o the Markov renewal process x,, T,, n 2 0 , be given by the semi-Markov kernel
Q ( x , B , t )= P ( z , B ) F , ( t ) ,
x
E
E , B E E , t 2 0.
(2.29)
Introduce the extended Markov renewal process
a, : = a ( ~ , ) , z,
T,,
n 2 0.
(2.30)
Proposition 2.2 The compensating operator of the extended Marlcov renewal process (2.30) can be represented as follows
PROOF. Let Pu,z,tbe the conditional probability on t ) , and Eu,z,tthe corresponding expectation.
(a0 = u, zo
=x
, =~
Then we have
But
so,
and the conclusion follows from Definition 2.3. It is easy t o verify the following result.
0
Corollary 2.1 The compensating operator IL acts o n test functions cp(u,x) as follows
ILP(u,
2) =
[Q+ Qo(D(2) - I)lcp(u, 2 ) .
(Compare with (2.28).)

2.4 Stochastic Evolutionary Systems
Various stochastic systems can be described by evolutionary processes with Markov or semi-Markov switching.
Definition 2.5 The evolutionary switched process U(t), t ≥ 0, in R^d, is defined as a solution of the evolutionary equation

d/dt U(t) = a(U(t); x(t)),   (2.31)
U ( 0 ) = u. The local velocity is given by the Wd-valued continuous function a ( u ; z ) , u E R d , z E E. The switching regular semi-Markov process x ( t ) , t 2 0, is considered in the standard phase space ( E , E ) , given by the semi-Markov kernel (see Section 1.3.1) Q(z,B , t ) = P ( x ,B)F,(t).
The integral form of the evolutionary equation is

U(t) = u + ∫_0^t a(U(s); x(s)) ds.   (2.32)
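A minimal way to see (2.31)-(2.32) in action is to integrate the switched ODE numerically, freezing the velocity field between the jumps of the switching process. The sketch below (Python) uses a hypothetical scalar velocity a(u; x) and a two-state Markov switching process; it is an Euler-type illustration under these assumptions, not the book's construction.

```python
import numpy as np

rng = np.random.default_rng(3)

def a(u, x):                       # hypothetical switched velocity a(u; x)
    return -u + (1.0 if x == 0 else -1.0)

q = np.array([0.5, 1.5])           # jump intensities of the switching Markov process

def switched_ode(T, u0=0.0, x0=0, dt=1e-3):
    """Euler integration of dU/dt = a(U(t); x(t)), cf. (2.31)."""
    u, x, t = u0, x0, 0.0
    next_jump = rng.exponential(1.0 / q[x])
    while t < T:
        if t >= next_jump:                         # switching moment: change regime
            x = 1 - x
            next_jump = t + rng.exponential(1.0 / q[x])
        u += a(u, x) * dt                          # frozen velocity between jumps
        t += dt
    return u

print(switched_ode(T=10.0))
```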
In what follows, it is assumed that the velocity a ( u ; z ) satisfies the condition of the unique global solvability of the deterministic problems (2.33). That is Lipschitz condition on u E Rd, with a constant which is independent of x E E. In order to emphasize the dependence of U ( t ) on the initial condition u,let us write
$U(t; 2,u)= a ( U ( t ;2,u); z), (2.33)
U ( 0 ;z, u)= u,
for all
2 E
E.
The well-posedness of the stochastic process U ( t ) , t 2 0, by the solution of Equation (2.31) follows from the fact that this solution can be represented in the following recursive form by using the solution of Problem (2.33)
The initial values for Problem (2.33) are defined by the following recursive relation
The recursive relation (2.34) can be represented in the following form
U ( t ) = U ( t - 7,; Z,, U(‘T,)),
7,
5 f! < ‘Tnfl,
72
2 0.
The existence of a global solution U ( t ) , t 2 0, for arbitrary timeinterval [O,T]follows from the regular property of the switching semiMarkov process z ( t ) , t > 0 (see Section 1.3). It is well known (see, e.g. loo) that the solution of the deterministic problem (2.33) under fixed value of 2 E E has a semigroup property which can be expressed as follows
+
U ( t t’;z, u)= U(t’;2,U ( t ;2,u)).
(2.36)
It means that the trajectory at time t + t’ with initial value u can be obtained by extending the trajectory at time t‘ of the trajectory with initial value U ( t ;5,u).
The semigroup property (2.36) can be reformulated for the semigroup operators in abstract form by the relation rt(z)cp(u) := cp(U(t;2, u)), t 2 0 ,
(2.37)
in the Banach space C(Rd) of continuous bounded real-valued functions cp(u),u E Rd.
It is easy to see that the operators property
rt(z),t
2 0, satisfy a semigroup
rt+tl(z)= Ft‘(Z)Ft(Z).
Indeed: rt,(z)rt(z)cp(u) = rt,(z)cp(U(t;2,u)) = p(U(t’; z, U ( t ;2,u ) ) = cp(U(t
+ t‘;z, u)) by
(2.36)
= rt+tj(z)cp(u).
Definition (2.37) of the semigroup rt(z),t 2 0 , implies the contraction property ‘18
Ilrt(z)II 5 1, and their uniform continuity Iim
t-0
(pyz)- I ( (= 0.
Proposition 2.3 The generator T(z) of the semigroup I’t(z), t 2 0, Zs defined b y the following relation
Uz)cp(u) = du;z)v’(u).
(2.38)
PROOF. We have ~r(z)cp(u) = lim t-l t-0
= lim t-l t+O
[r&)
- I ]cp(u)
I’
a ( ~ ( s z)dscp’(u) );
= a(u;z)cp’(u).
For sake of simplicity, we have written U ( s ) := U ( s ;z, u).
0
Remark 2.1. In the vector case we have to consider the scalar product, that is, a(u; x)φ′(u) = Σ_k a_k(u; x) ∂φ(u)/∂u_k. Note that the domain of definition D(Γ(x)) of the generator Γ(x) contains C¹(R^d), the continuously differentiable functions φ(u) with bounded first derivative.

2.5 Markov Additive Processes
Markov additive processes (MAP) constitute a very large family of processes including semi-Markov processes as a particular case. Of course, since the MAP are a generalization of the Markov renewal (semi-Markov) processes, of Markov processes, and of renewal processes, the field of their applications is very large: reliability, survival analysis, queuing theory, risk process, etc.
Definition 2.6 An Rd x E-valued coupled stochastic process E(t),z ( t ) , t 2 0 is called a MAP if 1) the coupled process [ ( t ) z, ( t ) t, 2 0 is a Markov process; 2) and, on {E(t) = u},we have, (as.),
for all A E B ( Rd ) ,B E E , t
2 0 and s 2 0, where Ft := a ( ~ ( s ) , z ( s )s; I t ) ,
t 2 0. From 2., it is clear that z(t),t 2 0, is a Markov process. A typical example of a MAP is the Markov renewal process when the time t is discrete and < (t), t 2 0, is an increasing sequence of R+-valued random variables. Let us define also the transition function Pt(x, A, B), A E B d , B E E,t 2 0 , by P t ( Z , A , B ) :=
I?'(<(t+ S) - <(s)
E A , z(t
+
S)
EB
I X ( S ) = z).
The semigroup property is written
In the discrete case, the MAP e n , x n , n 2 0, can be represented as follows
where
Let us present some examples of Markov additive processes. D
Example 2.1. Sums of i.i.d. random vectors in Rd.
cr==l
Example 2.2. The increment process. Define En = a(xk),where x,,n 2 0, is a Markov chain on ( E , E ) and a : E --f Rd a measurable function. In applications, ,fn can be considered in the following form = D
<,
c;=,
a(2k-1,zk).
A Markov renewal process additive component = r,, n 2 0.
D Example 2.3.
<,
t,, x,,
n 2 0 , is a MAP with
6
D Example 2.4. The integralfunctional. Define <(t)= a(x(s))ds,where z ( t ) t, 2 0 is a cadlag Markov process on ( E ,E ) , and a : E Rd a measurable function. When z ( t ) ,t 2 0, is a semi-Markov process, then the extended Markov renewal process = <(rn),7,, x,, n 2 0, is a MAP with additive component the coupled process ,fn, r,, n 2 0.
<,
2.6
Stochastic Additive Fhnctionals
An evolutionary system with switching considered in Section 2.3 is characterized by a deterministic trajectory with velocity depending on the state of the switching process between successive renewal moments. Various stochastic systems can be characterized by a stochastic trajectory under fixed state of switching process. The Markovian locally independent increment processes (see Section 1.2.4) constitute a wide class of such processes in Rd with semigroup property. Definition 2.7 The Markov additive process <(t), z ( t ) t, 2 0, in Rd x E , with Markov switching x ( t ) ,t 2 0, defined by the relation (2.39) is called a stochastic additive functional.
The cadlag Markov processes q ( t ; x ) ,t 2 0 , x E E , are of locally independent increment processes determined by its generators
r(x)cp(U) = 4 . u ;z)cp'(u) [cpb +4 - c p ( 4 where the positive kernels tions: the functions
r(u,dv;z), x
-
~ ' ( 4 l rdv; ( ~4,,
(2.40)
E E , satisfy the following condi-
R(u;x) := r(u,Rd; x), b ( ~x); := IRd
v q u , dv;z),
L
C(u;v) :=
vv*r(u, dv;x),
are continuous and bounded on u E Rd, uniformly on x E E. Note that the common domain of definition D(1r) = n,cED(lr(x)) contains the full and complete class of functions C2(Rd) 45. The generators of locally independent increment processes q(t;x), t 2 0, x E E l can be represented in the following form
Il'(x)cp(u)= ao(u;z)cp'(u)
+
s,.
+
[cp(u ). - cp(.u)lr(u,dv;z), (2.41)
with ~ ( u x) ; = a(u;x) - b(u;x). According to (2.41) the Murkov additive process [ ( t ) ,x ( t ) , t 2 0 , has a deterministic drift defined by a solution of the evolutionary equation
d -U(t;x) = ao(U(t;x);z), dt and a pure jump part defined by the generators
~o(~)(P= ( uA(u; ) x)
s,.
[P(U+ v) - du)IF(u,dv;x),
with intensity of jump moments h(u;x) and with distribution functions of jump values
F ( u , dv;z)
=
r(u,dv;x)/A(u;x).
In the particular important case where a(u;x) = a(.)
and
r(u, dv;x) = I?(&;
z),
the family of Markov processes η(t; x), t ≥ 0, x ∈ E, are processes with independent increments. This is why such a process is said to be with locally independent increments (100). In this case the pure jump part is defined by the compound Poisson process with generators

Γ_0(x)φ(u) = λ(x) ∫_{R^d} [φ(u + v) − φ(u)] F(dv; x).

It is well known (100) that the compound Poisson process can be represented as a sum of i.i.d. random variables

η(t; x) = Σ_{k=1}^{ν(t;x)} α_k(x),  t ≥ 0,  x ∈ E,   (2.42)
where ν(t; x), t ≥ 0, x ∈ E, are Poisson processes with intensity λ(x), and α_k(x), k ≥ 1, x ∈ E, is a sequence of i.i.d. random variables, under fixed x, with distribution function F(dv; x). The switching jump Markov process x(t), t ≥ 0, in (2.39), is defined by its generator

Qφ(x) = q(x) ∫_E P(x, dy)[φ(y) − φ(x)].

Lemma 2.5 The Markov additive process ξ(t), x(t), t ≥ 0, is determined by the generator

Lφ(u, x) = Qφ(u, x) + Γ(x)φ(u, x),

where Γ(x), x ∈ E, is the family of generators (2.40).
IE [du+ W t ) ,4 t + At)) I r ( t )= 'LL, x(t>= XI = IE [p(u+ Aq(t;x),z)l(O, > At)] +lE [p(u + Aq(t;x),z(t + A t ) ) 1 x ( t ) = 4 l(0, 5 At) + o(At) = Ep(u Aq(t;x),x)(1 - Atq(x))
+
+E [cp(u,z(t + At)]Atq(x)+ o(At) = Ep(u + Aq(t;x),x) + AtQcp(u,x) + o(At)
= p(., z)
+ A t [ r ( x )+ Q]cp(u,z) + o(At).
The last equality leads t o the desired result.
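In the locally-independent-increments case (2.42), the switched component is a compound Poisson process whose intensity λ(x) and jump law depend on the current state of the switching Markov process. The following is a minimal Python simulation sketch of the Markov additive process ξ(t), x(t) of Definition 2.7, under hypothetical two-state data (Gaussian jump sizes, chosen only for illustration).

```python
import numpy as np

rng = np.random.default_rng(4)

q      = np.array([1.0, 2.0])      # switching intensities q(x)
lam    = np.array([3.0, 0.5])      # compound-Poisson intensities lambda(x)
jump_m = np.array([0.2, -1.0])     # hypothetical mean jump sizes per state

def map_trajectory(T, x0=0):
    """Simulate (xi(T), x(T)) for the Markov additive process."""
    t, x, xi = 0.0, x0, 0.0
    while t < T:
        dt = min(rng.exponential(1.0 / q[x]), T - t)   # sojourn in the switching state
        n = rng.poisson(lam[x] * dt)                   # jumps of eta(t; x) during the sojourn
        xi += rng.normal(jump_m[x], 1.0, size=n).sum() # i.i.d. jump values alpha_k(x)
        t += dt
        if t < T:
            x = 1 - x                                  # two-state switching chain
    return xi, x

print(map_trajectory(T=10.0))
```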
2.7 Random Evolutions
The stochastic systems considered in the above Sections 2.2-2.6, can be described by the abstract mathematical model in the Banach space B(Rd) of functions p(u), u E Rd, called random evolution model, introduced Griego and Hersh Some particular examples of random evolution models were considered in Sections 2.2 and 2.3. Now we will introduce two models of random evolution: continuous and jump random evolution. 59,63364.
2.7.1 Continuous Random Evolutions
Let the family of continuous semigroup operators r t ( z ) , t 2 0 , x E E , be given on the Banach space C(Rd),of real-valued continuous bounded (with respect to the sup-norm) functions p(u), u E Rd. Let the switching semi-Markov process z ( t ) , t 2 0, be given by the semi-Markov kernel
which determines the Markov renewal process x ~ , T 2~ 0,, associated ~ with the semi-Markov process z ( t ) ,t 2 0, and the Markov renewal process xn,On,n L 0, associated with the semi-Markov (or Markov) process z ( t ) ,t 2 0.
Definition 2.8 The semi-Markov continuous random evolution on the Banach space C(Rd) of continuous bounded functions p(u),u E Rd, is determined by the relation
Particularly, in the renewal moments
Proposition 2.4 Xhe random evolution (2.43) can be determined by a solution of the evolutionary equation
t > 0, $@(t)= lr(z(t))@(t), (2.45) @(O) = I ,
where r ( z ) ,z E E , is the family of the generators for semigroups r t ( z ) , t 2 0 , x E E . The integral equivalent of (2.45) is (2.46)
PROOF. The semigroups rt(z),t 2 0,z
E E , satisfy the integral equation rt
By Definition 2.8, the random evolution (2.43), on the interval [0,T I ] is represented by
q t ) = rt(zo), 0 5 t < T1,
(2.48)
and, in general form by
@(t)= rt-Tn(zn)@(7n), 7, 5 t < Tn+l.
(2.49)
By substitution o f (2.47) into (2.48) we get (2.50) since T(z(s))= lr(x0) for 0 5 s < 71. Now, by induction, let relation (2.46) hold f o r t 5 for -rn 5 t < ~ , + 1 ,we get: @ @ ) = rt-Tn(zTZ)@(TTl), 771 5 < T7L+1 t = @(Tn) l r ( z ( s ) ) @ ( s ) d s ,(by
+
=I
1 +1 +
/
Then, from (2.49),
substitution of (2.47))
Tn
Tn
+
lr(z(s))@(s)ds
t
L
lr(z(s))@(s)ds,
(because lr(z(s))= lr(z,), for t
=I
7,.
lr(Z(S))@(S)dS.
T,
5 s < T,+I)
0 Definition 2.9 The coupled random evolution is determined on the Banach space C(Rd x E ) , by the relation q t ,x ( t ) ):= @(t)p(u,x ( t ) ) , t 2 0.
(2.51)
Proposition 2.5 The mean value of the coupled random evolution (2.51), defined by the family of semigroups rt(x),t 2 0 , x E E , o n the Banach space C(Rd x E ) of (p(u,x),uE Rd,Z E E ,
U ( t ,x) := lE,[ia(t,z(t))]:= E[ia(t,z ( t ) )1 2(0)= x] is determined by a solution of the Markov renewal equation
(2.52)
PROOF. Let us use the following recursive relation for random evolution (see (2.49) and (2.48))
@(t)= r t - T 1
(x)ia(T1)l(Ti
+ rt(x)l(Ti>t)?
where TI is the first renewal moment of the switching semi-Markov process. Now calculate the first term by using the Markov property in the renewal moments
0 Hence, the Markov renewal equation (2.52) holds. Let the switching process z ( t ) ,t 2 0 , be Markovian and defined by the generator
(2.53) The stochastic evolutionary switched process U ( t ) ,t 2 0, is defined by a solution of the evolutionary equation (2.31) (see Section 2.4).
Then the coupled process (2.54) is also Markovian.
Remark 2.2. The coupled random evolution is
Lemma 2.6 generator
The coupled Markov process (2.54) can be determined by the
on the test functions cp E C1(R x E ).
PROOF. See proof of Lemma 2.5 which includes the present case.
0
Corollary 2.2 The mean value of the coupled Markov random evolution is determined by a solution of the evolutionary equation
$u(t,z) = [Q + r(z)lv(t, z),
(2.56)
U ( 0 ,). = ( P ( % .).
So, the mean value of the Markov random evolution is characterized by the generator
IL = Q + ~I'(z).
(2.57)
PROOF. The mean value of the Markov random evolution satisfies the Markov renewal equation (2.52) with the distribution function of jumps
F,(ds) = q(z)e-q(z)sds, z E E ,
so,
or
nOW, DIFFERENTIATING OUT, WE GET
where
Hence,
0 that is exactly (2.56). It is worth noticing that the generator L in (2.55) characterizes the coupled Markov process U ( t ) , x ( t ) , t2 0, (see Section 2.2), and also the switched evolutionary stochastic system U ( t )defined in Section 2.5.
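When the state space of the switching process is finite and each Γ(x) is a d×d matrix, the continuous random evolution is simply a product of matrix exponentials taken along the trajectory of the switching process, and its mean value solves (2.56) with the generator Q + Γ(x). The following Python sketch (hypothetical two-state switching and 2×2 matrices Γ(x), chosen only for illustration) implements the product representation (2.43), (2.49).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)

Gamma = {0: np.array([[0.0, 1.0], [-1.0, 0.0]]),   # hypothetical generators Gamma(x)
         1: np.array([[-0.5, 0.0], [0.0, -0.5]])}
q = {0: 1.0, 1: 2.0}                               # switching intensities

def random_evolution(T, x0=0):
    """Phi(T) = Gamma_{T - tau_n}(x_n) ... Gamma_{theta_1}(x_0)."""
    Phi, x, t = np.eye(2), x0, 0.0
    while True:
        theta = rng.exponential(1.0 / q[x])
        dt = min(theta, T - t)
        Phi = expm(Gamma[x] * dt) @ Phi            # semigroup over the current sojourn
        t += dt
        if t >= T:
            return Phi
        x = 1 - x                                  # Markov switching (two states)

print(random_evolution(T=5.0))
```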
2.7.2 Jump Random Evolutions
The random evolution model for the increment process (see Section 2.3) is constructed similarly by using the family of bounded operators lLD(x), x E E , characterizing jumps of stochastic system in state x E E , and the Markov renewal process xn,T ~n , 2 0, associated with the switching semi-Markov process x ( t ) ,t 2 0.
Definition 2.10 The semi-Markov jump random evolution on the Banach space C ( R d )of continuous bounded functions cp(u),u E R d ,is determined by the relation (2.59)
Particularly, in the renewal moments, we have
n n
@(7-n)=
D ( x k ) , n 2 1.
(2.60)
k=l
Proposition 2.6 following f o r m
The j u m p random evolution can be represented in the 4t)
@(t)= I
+C[D(Sk)-
I]@(Tk-I),
t 2 0.
(2.61)
k=l
PROOF. From (2.59) it follows that
a(‘%)- a ( 7 k - 1 )
= [ D ( x k )- 1 ] @ ( 7 k - 1 ) .
(2.62)
The sum in (2.61) gives (compare with (2.25), (2.46) and (2.63) instead of relation (2.60)) n a(Tn>- I
= x [ ~ ( x k-) 1 1 @ ( 7 ~ - 1 ) 7
(2.63)
k= 1
0 that is an equivalent form of (2.60). The linear forms of the relations (2.46) and (2.61) provide the most effective asymptotic analysis of stochastic systems in the series scheme considered in the next Chapter 3. Proposition 2.7 The mean value of the coupled random evolution defined by the family of bounded operators JD(x),x E E , is determined b y a solution of the Markow renewal equation
PROOF. We use the following recursive relation for the jump random evolution
z ( t ) )= D ( z l ) a ( t -
x(t - 7 1 ) ) 1 ( T l < t )
+
Y(u7x)1(T1>t)’
The mean value of (2.65) gives the Markov renewal equation (2.64).
(2’65)
0
Corollary 2.3 The mean value of the Markov jump random evolution is determined b y a solution of the evolutionary equation
$u@, x) = [Q+ Qo(D(x) - I ) l U ( t ,x), U ( 0 ,). = 4%.).
So, the mean value of the Markov j u m p random evolution is characterized by the generator
Lo = Q + Qo[D(z) - 11. It is worth noticing that the generator LO characterizes the coupled Markov process cr(t),z(t),t2 0, (see Section 2.3), with the switched increment process cr(t),t 2 0. The operator Qo, defined in Section 2.3, is
Q o 4 ~= ) q(2) 2.7.3
P ( x ,dy)cp(y).
Semi-Markov Random Evolutions
The semi-Markov random evolution can be characterized by the compensating operator.
Definition 2.11 The compensating operator of the continuous coupled random evolution (2.51) is determined by the relation
I
m ( t , X ) := { I E [ ~ T ~ , X To ~ )= t , X O =
4
-
q t , z ) ) / ~ [ e , ~ . (2.66)
It is worth noticing that the compensating operator (2.66) satisfies the homogeneous condition
L@(t,X ) := {IE[@(T,+i,
%+I)
I 7,
= t ,% = X] - @(t, z))/E[&+l
I 5,
= XI.
(2.67)
Proposition 2.8 The compensating operator of the continuous coupled random evolution @(t, x ( t ) ) , (2.51), can be represented as follows
where q ( x ) := l/rn(z) := l/IEO,. pROOF. lET US CALCULATE BY USING(2.51)
Hence, (2.68) follows from definition (2.67).
Proposition 2.9 The compensating operator (2.68) acting o n the test functions cp E Ck(Rdx E ) ,k 2 3, can be transformed into
Lcp(u,x) = Qd., 2) + r ( x ) P i ( x ) Q o ~ ( u x), Lcp(u, x) = QCP(., z)
+ r ( x ) W ( ux) , + r2(x)Pz(z)&odu,x)
(2.69) (2.70)
and
+
b ( u , x) = Qd., x) r(x)Pcp(u,x) + ~ 2 ( x ) r ~ ( x ) Mz)u , + r 3 ( x ) ~ 3 ( x )x). ~~~(~, (2.71)
The operator Qcp(x) := 4 2 )
p ( x ,d l / ) " P ( ~-) c p ( ~ ) l ,
is the generator of the associated Markov process x o ( t ) t, 2 0, with the same embedded Markov chain x,, n 2 0 , as in the semi-Markov process, and the intensity of sojourn times q(z) = l/m(x), m ( z )=
w,:= l m F , ( t ) d t .
sE
As usual, Qocp(x) := q(x) P ( x ,dy)cp(y). The bounded operators E , k = 1,2,3, are given by the relation
Pk(Z),z E
where 00
-(k)
F,
(3)
-(k-1)
F,
:=
( t ) d t , k 2 2,
4 1 )
F , ( t )= FZ(t),
and a3
p2(z) := rn2(2)/[2m(z)],
m2(z) := 2 1 F r ) ( t ) d t .
PROOF. Integration by parts gives
1
oo
P(z) :=
F,(ds)r,(z)ds
=I
+
Using the differential equation for semigroups
(2.72)
58
CHAPTER 2. STOCHASTIC SYSTEMS WITH SWITCHING
we get
F(z) = I
+ r(z)FI(z),
where
In the same way, we get (2.73)
where
that is (2.74)
where
Successively putting the Pk(z),k = 1 , 2 , 3 , given by formulas (2.73) and (2.74), into (2.72), we get the representations (2.69)-(2.71). Proposition 2.10 T h e compensating operator f o r the j u m p random evolution (2.59) can be represented as follows
w t ,).
= 4x1
[ / P(., E
d Y ) W Y ) @ ( t Y, ) - q t , 41.
(2.75)
PROOF. From Definition 2.9, we calculate:
0 Now, by Definition 2.11 we obtain (2.75). It is worth noticing that the compensating operator for the random evolution can be considered directly on the test functions p(u,z) as represented in Proposition 2.9.
Corollary 2.4 The compensating operator of the j u m p random evolution can be represented as follows
2.8 Extended Compensating Operators
The semi-Markov continuous random evolution can be characterized by the extended compensating operator which is constructed by using the extended Markov renewal process
where x,,~,, n 2 0 , is the Markov renewal process associated with the switching semi-Markov process x ( t ) ,t >_ 0 , defined by the semi-Markov kernel (see Section 1.3.1)
with en+, := T,+I - Tn, n 2 0. The first component in (2.77) is the continuous part of the random evolution generated by the values of semigroup (x,),t 2 0 , such that
where Atn := &+I - tn1 n >_ 0. For example, the evolutionary stochastic system defined in Section 2.3 is generated by the first component of the extended Markov renewal process (2.77) by the semigroup
where U ( t ) ,t 2 0, is a solution of the evolutionary equation
and
Analogously the first component of the extended Markov renewal process (2.77) can be defined for other stochastic systems considered in above Sections 2.2-2.4.
Definition 2.12 The extended compensating operator of the Markov renewal process (2.77) is defined by the relation
L P ( % X l t ) := ~ [ P ( J l , ~ l , n( P) ( % Z , t )
1x0
= 2 7 7 - = t]/ m ( x > ,
where m ( x ) := EB, = JomF,(t)dt.
Proposition 2.11 The extended compensating operator of the Markov renewal process (2.77)is represented as follows
(2.79)
where rS(x), s 2 0 , x E E , is the family of semigroups with generators K'(x), x E E , defined in (2.40). The proof of Proposition 2.11 is based on representation (2.78) for the increments of the first component and on the homogeneous property of the compensating operator W ( U , x,t ) =
E [P(Jn+l,%+lr
Tn+1
- P(
I en = 21, zn = z, 7-n = tl.
Proposition 2.12 The compensating operator (2.79), acting on the test functions φ ∈ C^k(R^d × E), k ≥ 3, does not depend on time t and can be transformed into

Lφ(u, x) = Qφ + Γ(x)F^{(1)}(x)Q_0 φ,

where

F̄_x^{(k+1)}(t) := ∫_t^∞ F̄_x^{(k)}(s) ds,  k = 1, 2, ...,

μ_2(x) := m_2(x)/2m(x) = ∫_0^∞ F̄_x^{(2)}(s) ds / m(x),  m_2(x) = ∫_0^∞ s² F_x(ds).

2.9 Markov Additive Semimartingales

2.9.1 Impulsive Processes
Impulsive processes constitute a very active area of research since they are involved in many applications, especially in risk theory (7, 44, 149, 55). They are in some respects related to the stochastic additive functionals considered in 107. The increment processes and the compound Poisson processes are very particular cases of impulsive processes. These are also a typical case of stopped processes (169). A large literature exists concerning risk processes, but few papers concern the functional-type limit theorems that we are interested in here (see, e.g. 53, 7). The impulsive process ξ(t), t ≥ 0, considered here (see Definition 2.13 below) is the stochastic part of a risk process switched by a Markov process. To be specific, if we define the process
lo7. For example, if the impulsive process gives the amount of damages for an insurance company in the interval of time [ O , t ] , that is, if a,(z,s)is the amount of the n-th damage, 1 5 n 5 ~ ( t )under , 2 , = 2 , where z denotes the state of the environment, for example weather, time, etc., and s the time elapsed from the last damage, then the amount is strongly dependent of the environment. The particular class of switched semimartingale processes is defined over a switching Markov renewal process. Definition 2.13 defined by
The impulsive process with semi-Markov switching is
where 8,+1 = T,+I - T,, n 2 0 , and x,,~,,n 1 0, is a Markov renewal process defined by the semi-Markov kernel
The counting process of jump times is
v ( t ) = max{n : T, 5 t } ,
t 2 0.
The family of random variables
a,(x,t),
t 2 0, n 2 1, x E E ,
is supposed t o be i.i.d. under fixed x and t , defined by the distribution function
@ ( B ; x , t= ) P ( a n ( x , t )E B ) , and such that for every fixed sequence zn E E , s, E R+, n 2 0, a,(z,, sn), n 2 1, are independent r.v. Lemma 2.7 The impulsive process (2.80) can be characterized by its predictable characteristics represented in the following form:
-
the predictable process is
where:
(2.81)
-
the second modified characteristic is dt)
WHERE
THE COMPENSATING MEASURE OF JUMPS IS
n=l
where
r g ( z )=
im Q ( z ,dy, ds)
g ( z ) @ ( d zy, ; s).
PROOF. The proof of Lemma 2.6 implies the standard formulas for predictable characteristics of increment processes 70. For Fn := a ( z k , r k ; O 5 k 5 n), we have: 4t)
C IE[an(zn,
~ ( t=)
1
en) ~ n - 1 1
n= 1
n=1
17
where b(z)is defined by (2.81). 2.9.2
Continuous Predictable Characteristics
The real-valued semimartingales [ ( t ) t, 2 0 , defined for a Markov process z(t),t 2 0 , on a stochastic basis 8 = ( R , F , F = (Ft,t 2 O),P), with some standard state space ( E ,E ) constitute the general class of stochastic processes considered here as mathematical models of real stochastic systems. Let 6,, s 2 0 be the shift operator.
Definition 2.14 (”) A Markow additive semimartingale [ ( t ) , t 2 0 , defined over 8,is characterized by the additivity property
(0) [ ( O )
=0
and
6,[(t) = [ ( t )- [(s)
(a.s.) for all 0 5 s 5 t.
It is well known that if (Ft, t 2 0) is a strong Markov filtration, the process [ ( t ) ,t 2 0, satisfies the strong additive property, that is, (a.s.) for any (0s) [(O) = 0 and d S t ( t ) = “ ( t )- E(s)ll[s,oo)(t) Ft-stopping time S.
Remark 2.3. The exceptional null set in the additivity properties (0)or ( 0 s ) can be chosen in order not to depend on s , t . The switching Markov process x ( t ) , t 2 0, considered here is defined by the following generator
Q d z ) = q(z)
/
E
P(., dy)[cp(y) - cp(z)l.
The factorization theorem (see 27 ) implies the following canonical representation of the Markov additive semimartingale
Here w(s), s 2 0, is a standard Wiener process, p(ds,dw) is the measure of jumps of the semimartingale [ ( t ) ,and h(v) := l(lv1 5 1) is a truncation function, and tps):= (&,O 5 u < s). The canonical representation (2.82) is characterized by the triplet of characteristics:
-
the predictable process
(2.83)
-
the second predictable characteristic
(2.84)
-
and the compensating measure
The Markov additive semimartingale (2.82) has the following representation
t ( t )= t o +
1 t
(2.86)
r l( d s ; z ( s ) ) , t 2 0,
0
where
rl(ds,4 s ) ) = b ( t ( s ) 4, s ) ) d s
+L
+ c(E(s), z(s))dw(s)
V I P - h(v)lr(t(s), dv; 4s))lds.
In the particular case where r](t;z), t 2 0, z E E , are locally independent increment processes defined by their generators (2.40), the predictable characteristics are represented as follows:
B,(t) =
I'
b(rl(s;z); z)ds, (2.87)
The predictable characteristics (2.87) of the Markov additive semimartingale (2.86), with switching Markov process z ( t ) ,t 0, can be characterized by the tripled Markov processes
>
t ( t ) ,A(% z(t), t 2 0, defined by the generator
Lv(u, 21,
=
[Q + r ( z ) + A(z)lv(u, v, z),
where A ( t ) is one of the predictable characteristics (2.83)-(2.85) defined by the generator
A(z)(P(u) := a(u; z)cp'(u). The function a(u; z) is one of the local characteristics b(u;z), c(u; z) and
Chapter 3
Stochastic Systems in the Series Scheme

3.1 Introduction
This chapter deals with the stochastic systems presented in Chapter 2, in a series scheme. That is, for a process ξ(t), t ≥ 0, as in the previous chapter, we consider here a family of processes ξ^ε(t), t ≥ 0, ε > 0, where 0 < ε < ε_0 is the series parameter, defined on a stochastic basis B = (Ω, F, F = (F_t, t ≥ 0), P). We are interested in the weak convergence of the probability measures P ∘ (ξ^ε)⁻¹, as ε → 0. Here ε is supposed to be a sequence ε_n → 0, as n → ∞. Instead of a common probability space, we could consider different spaces for each ε. Two different schemes are considered here, the average approximation and the diffusion approximation. The switching semi-Markov process is considered with fast time-scale t/ε for the average approximation and t/ε² for the diffusion approximation. The ergodic property of the switching process is used in the average and diffusion approximation algorithms. The main results presented in this chapter concern the asymptotic representation of the compensating operators of the coupled switched-switching processes. First we give results for random evolutions (Propositions 3.1-3.5), on which the average and diffusion approximation results will be based. The average approximation is presented for stochastic additive functionals and increment processes (Theorems 3.1-3.2). The diffusion approximation is presented in two different schemes. The first one is the usual one, whose equilibrium point is the average limit fixed point (Theorems 3.3-3.5); the second one is a scheme whose equilibrium point is a deterministic function (Theorems 3.6-3.8). In the next chapter we will also present results for a diffusion approximation whose equilibrium point is a random process.
3.2 Random Evolutions in the Series Scheme
The characterization of random evolutions in the series scheme is considered with two different switching processes, semi-Markov and Markov, with different algorithms.

3.2.1 Continuous Random Evolutions
The continuous random evolution with semi-Markov switching in the average scheme with the small series parameter e > O,E + 0 , is given by a solution of the evolutionary equation (compare with Proposition 2.4), p ( t ) = Ir(z(t/e))@E(t),t
2 0, (3.1)
V ( 0 )=I.
Here r ( z ) , z E E , is the family of generators of the semigroup operators I't(z),t 2 0,. E E , which determines the random evolution in the following form (compare with Definition 2.8) V(t/E)
@[)"(t) = r&y(t/E)(z(t/&))
reOk(zk),
t > 0 , Q E ( 0 )= 1.
(3.2)
k=l
The semi-Markov continuous random evolution @"(t), t 2 0, in the average series scheme can be characterized by the compensating operator on the test functions cp E C(Rd x E ) , given by the following relation (compare with Proposition 2.8)
(3.3) The normalized factor "E-"' corresponds to the fast time-scaling of the switching semi-Markov process in (3.1). The small time-scaling "E" in the semigroup rss(x)provides the representation (3.2) for the random evolution in the series scheme. As usual, we will suppose that the domain D r ( z ) contains the Banach space c'(R~).
Proposition 3.1 The compensating operator (3.3) in the average scheme o n the test functions cp E C2i0(Rdx E ) has the following asymptotic
representation (compare with Proposition 2.9):
where:
for
k
= 1,2, and, as usual,
PROOF. The same transformation as in the proof of Proposition 2.9 is used with one essential difference. The equation for semigroup is now
I'
rES(z)= I + E ~ ( z )
rEv(z)dv,
that is, in differential form,
dr,,
= &qz)r,,(z)ds.
0 The continuous random evolution in the diffusion approximation scheme with accelerated switching is represented by a solution of the evolutionary equation
%(t)
=lrE(z(t/&2))@P"(t),
t 2 0, (3.6)
W(O) =I. The family of generators S E ( z )z, E E , has the following representation
re(,)= & - l r ( Z ) + r l ( z ) .
(3.7)
Note that the generalization of the average scheme in such a way would not be productive. The compensating operator of the random evolution (3.6) on the test functions cp E C(Rd x E ) is given by the relation (in symbolic form, see Section 2.8) LEv(u,z)= &(z"E(4p
- IIV,
(3.8)
where
Proposition 3.2 The compensating operator (3.8)-(3.9),in the diflusion approximation scheme, acting on the test functions cp E C3(Rd x E ) has the following asymptotic representation:
+ E-'T(z)P + Qz(x)P+ d;(z)]cp = + E - ~ T ( ~+) P = [E-~Q +~ - l e ; ( ~ ) ] ~ ,
L'(P(u,X) = [&-2Q
E ~ ; ( E ) ] ~
[ E - 2 ~
(3.10)
where
and the remaining terms are:
+
e;(x) := [r2(~)FJ2)(x) Tl(x)P]Qo,
(3.13)
+
(3.14) eg(x) := r,(x)[T2(2)F!3)(x) I~~(E)F!~)(z)]Q~.
+
Here, by definition T,(x) := Ir(z) ~ T l ( x ) .
PROOF. The starting point is the integral equation for semigroup
or, in differential form, dI'Zz,
(E)
= ET,(x)I'z2, ( z ) d s .
There we use the following relation &2lr"(Z) = & l r & ( X ) .
The initial representation of the compensating operator is
IL" = E - ~ Q + E-~[P~(Z) - I]Qo,
(3.16)
where
P,(z) =
1"
Fz(ds)I'ZzS(z)ds
(3.17)
is transformed, by using (3.15), into
P&)
- I =&rE(z)Ip(z),
with
Now, by using (3.16), an integration by parts gives
@)(z) = m ( z ) l +&rE(z)IFp(z) 1 IFz"'(z) = -mz(s)I 2
(3.18)
+ &k,(z))Fp(z),
where, by definition:
(s)r:z,(z)ds,
and
mz(z):=
Jd
k = 1 , 2 , ...,
(3.19)
co
s2F,(ds).
Now by putting (3.18) and (3.19) into (3.15) and then into (3.8) and by 0 using (3.7), we get (3.10). The following result concerns the coupled random evolution defined in Definition 2.9.
The coupled Markov random evolution, with the switching Markov process z ( t ) , t 0, in the average scheme can be characterized by the generator
Proposition 3.3
>
+
ILE'p(u,z)= &-lQp lr(z)cp.
The coupled Marlcov random evolution in the diffusion approximation scheme can be characterized by the generator I L E ~ ( Z) u , = &-'Q'p
+ &-'k(z)'p+ II'l(~)cp.
72
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
It is worth noticing that the characterization of the Markov random evolution is comparatively simpler than the characterization of the semiMarkov random evolution (see Propositions 3.1 and 3.2). 3.2.2
Jump Random Evolutions
The jump random evolution in the average series scheme is represented with fast-scaling (3.20) The family of bounded operators D E ( x ) , zE E , is supposed to have the following asymptotic representation DE(2)
=I
+ eD(2) + D;(2),
2
E E,
(3.21)
on the space Bo dense in C,"(Rd x E ) , with the negligible term
ll~;(~)cPll
+
0,
E
+
0,
'p E
Bo.
(3.22)
The compensating operator of the semi-Markov jump random evolution in the average scheme is represented as follows (see Proposition 2.10)
Proposition 3.4 The compensating operator (3.23) has the following asymptotic representations: LE'p(u, 2) = =
[E-~Q + QoD(z)
+ QoD;(z)]Au, 2)
[&-'Q + QoJ%(z)l~(u,z),
(3.24)
with
D;(z) := D(z) +ID!(%) and the negligible term (3.25) WHERE, AS USUAL,
PROOF.The proof is obtained by putting the expansion (3.21) in (3.23). 0 The j u m p random evolution in the diffusion approximation scheme is considered in the accelerated fast-scaling scheme:
n
4tlE2)
V ( t )=
DE(xi),
t > 0, W(0) = I .
(3.26)
k=l
The family of bounded operators D E ( x ) , zE E , has the following asymptotic expansion
W ( s )= I
+ eD(x) + &2D1(2)+ &2D",(z),
(3.27)
on the test functions cp E Bo, a dense subset of C2(Rd),with the negligible term
The compensating operator acting on the test functions cp(u,x) is
Proposition 3.5 The compensating operator of the jump random evolution in the diffusion approximation scheme has the following asymptotic representation
0 PROOF.The proof is obtained by putting (3.27) in (3.29). The Markov jump random evolutions in the average and diffusion approximation schemes are respectively characterized by the generators ILE represented in (3.24) and (3.30), with the generator Q of the switching Markov process. It is worth noticing that the semi-Markov random evolution is characterized by the compensating operators in asymptotic forms (3.24) and (3.30) with the generator Q of the associated Markov process x ( t ) ,t 2 0. The intensity function of the renewal times is q(x) = l/rn(x), where m ( x ) := EB,, is the mean value of renewal times of the switching semi-Markov process.
3.3 Average Approximation
The phase merging effect for stochastic systems can be achieved under different scaling of the stochastic system and of the switching semi-Markov process. Let US consider the main model of stochastic systems, presented in the previous Chapter 2, that is the stochastic additive functionals model.
3.3.1 Stochastic Additive Functionals
Stochastic additive functionals are considered in the following scaling scheme
C ( t )= E“(0)
+/
t
qE(ds;Z ( S / E ) ) ,
t 2 0.
(3.31)
0
The switching semi-Markov process z ( t ) t, 2 0, on the standard phase space ( E ,E ) is given by the semi-Markov kernel Q(z,B , t ) ,
Q ( z , B , t )= P(z,B)F,(t), z E E , B E E , t 2 0.
(3.32)
The family of Markov processes with locally independent increments v E ( tz), ; t 2 0,z E E , with values in the Euclidean space Rd, d 2 1, is given
by the generators
+
l r , ( ~ ) ~ p ( ~=) U(U,z)c~’(u)E
s,.
- ~
[V(U
+
E W ) - V ( U ) ] ~ ( U ,d
~z), ; (3.33)
defined on the Banach space C’(Rd). The fast time-scaling for the switching process in (3.31) corresponds to the scale factor E for the increments EZ, of the switched processes $(t ; z), t 2 0. This explains why the large-scale intensity of the switching process is compensated by the small-scale of increments of the switched processes. By subtracting the first moment of the jump values in (3.33), the generator takes the form
r&(z)Cp(u) = r(z)cp(u)+ 7&(~C)Cp(4
(3.34)
where:
lr(z)cp(u):= d”; z)Cp’(u),
(3.35)
3.3. AVERAGE APPROXIMATION
Here g(u;3) := a(u;z)
+ b(u;z),
b(u;z) :=
Ld
vr,(u, d v ;z).
75
(3.37)
Let us consider the following assumptions.
A l : The switching semi-Markov process z ( t ) , t 2 0, is uniformly ergodic with stationary distribution 7r(B),B E E. A2: The function g(u;z),u E Rd, z E E , is (globally) Lipschitz continuous on u E Rd, with common Lipschitz constant L for all z E E. So, there exists a global solution to the evolutionary systems d -Uz(t) = g ( U z ( t ) , z ) , z E E. dt A3: The operators y E ( z )are negligible for cp E Ilr&(z)cpII 0, -+
E
C:(Rd), that is,
-+
0.
A4: The initial value condition is
lE IJ"(0)l I c < +m.
JE(0)2J ( O ) ,
The average phase merging principle is formulated as follows.
Under Assumptions Al-Ad, the stochastic additive functional (3.31) converges weakly, as E + 0, to the average evolutionary deterministic system G ( t ) ,t 2 0, determined by a solution of the evolutionary equation
Theorem 3.1
(3.38)
where the average velocity
is given by G(u) =
where: A
a(.)
=
s,
qu)+I+), h
n(dz)a(u;z),
b(u)=
(3.39)
s,
7r(dz)b(u;z).
(3.40)
Remark 3.1. The weak convergence ξ^ε(t) ⟹ Û(t), ε → 0, means, in particular, that for every finite time T > 0,

sup_{0≤t≤T} |ξ^ε(t) − Û(t)| → 0 in probability,  ε → 0.   (3.41)
The verification of the average merging principle (3.38)-(3.40) is made in Chapter 5. The weak convergence (3.41) is investigated in Chapter 6. The proof of Theorem 3.1 is based on the representation of the stochastic additive functional (3.31) by the associated continuous random evolution (3.1). The corresponding family of generators IFE(z),z E E , E > 0, is r e p resented in (3.34). Setting (3.34) in the asymptotic representation (3.4) of the compensating operators (3.3) (see Proposition 3.1) we get the following form of the compensating operator for the stochastic additive functional: ILEv(u,Z) = [ E - ~ Q = k-lQ
+ l r ( ~ )+P ~ e f ( z ) ] q + G(x)lvl
(3.42)
where
+
:= Y , ( ~ ) P E e ; ( z ) ,
(3.43)
and the remaining terms O;(z), k = 1,2, are given in (3.19) with the generators IF,(x) and the semigroups rEs(x),depending on the parameter series & > 0. The family of generators IF(z),z E E , is represented in (3.35), and the negligible term y,(z),z E El is represented in (3.36). Let us now give some heuristic explanation about the phase merging effect of Theorem 3.1. The average algorithm in Theorem 3.1 is evident from the ergodic theorem point of view. The problem is how does the ergodicity principle works? In order to explain this, let us consider the Markov additive process t & ( t ) , z ( t /t ~2)0,, which can be characterized by the following generator, ILEv(u,z) = [ E - ~ Q
+ T(x)lv(u,z),
on the Banach space C1(Rd x E ) of cp(u,x),where the generator r ( z ) is defined in (3.35), and the negligible operator (3.36) is neglected. The
uniform ergodicity of the switching Markov process with the generator Q provides the definition of the projection operator 11 (see Section 1.6), which satisfies the following property
IIQ = Q11= 0. The projector 11 acts on the functions cp E B(E)as follows
ndx) =
s,
7r(dx)(P(x)l(z)= W X ) ,
where l ( x ) = 1, for all x E E , and
Since 11v(u) = cp(u),the generator ILE of the Markov additive process acts on a function 'p E C'(Rd), which does not depend on x E E , as follows, LEcp(u)= Ir(x)cp(u).
Note that, since IIQ = 0,
+
IIILEp(u,X) = [ E - ~ I I Q 1 1 l l ? ( ~ ) ] c p ( ~X) , = ITn'(~)cp(u,~).
Hence, we have
rILErIcp(u,x) = rIIr(x)IIcp(u, x) = IIn?(x)II@(u),
s,
where b(u):= 7r(dz)cp(u;x). The average evolution in Theorem 3.1 is characterized by the main part of the average generator
FrI = rIIr(Z)rI. Note that the problem of verification of such a scheme is still open (see Chapters 5 and 6). The stochastic homogeneous additive functional in series scheme
<'(t)= (0
+
/
t
"(ds; x(s/e)),
0
t 2 0,
(3.44)
is given by the family of Markov processes with independent increments
('(t;x),t 2 0, x E E , defined by the generators
78
CHAPTER 3. STOCHASTIC SYSTEMS I N THE SERIES SCHEME
The intensity kernel r ( d v ; x ) , x E E , is supposed t o satisfy the average condition (3.45)
with b ( x ) ,x E E , a bounded function. The average algorithm for the stochastic functional t E ( t t) ,1 0 , given in (3.44)-(3.45),is formulated as follows.
Corollary 3.1
Under Assumptions A l - A 4 , the weak convergence
C ( t )===+ 6(t)= &I +jit, t 2 0 ,
E -+ 0,
holds true. The average deterministic velocities of the drift are defined by the relations: h
2=
5=2+b, Corollary 3.2
s,
h
r(dx)a(z), b =
s,
r(dz)b(x).
The stochastic integral functional in the series scheme
5'(t) converges weakly, as
E
=
--f
50 +
rt
a ( z ( s / E ) ) d s , t 2 0,
(3.46)
0
0 , to the deterministic linear drift h
U ( t ) = 50
+a, t 2 0,
where Z = JE r ( d z ) a ( z ) . Corollary 3.3 Under Assumptions A l , A2, the stochastic evolutiona y system, defined by a solution of the evolutionary equation
$ U W = g ( U E ( t )Z; ( t / & ) ) , (3.47) UE(0) = uo
converges weakly t o the solution of the average equation
$@t)
=
#m,
A
U ( 0 ) = uo.
3.3. AVERAGE APPROXIMATION
79
Increment Processes
3.3.2
The increment process considered here in the series scheme has the following form
c
4 t l E )
r"(t)= b + E
t 2 0,
a(%),
(3.48)
n=1
where v ( t ) := max{n 2 0 : 7, 5 t } , t 2 0 , is the counting process of renewal moments ~ ~2 0, , ofnthe switching semi-Markov process z(t),t 2 0, given by the semi-Markov kernel
Q(z, B , t ) = P ( z , B ) F , ( t ) ,
z E E , B E &,t2 0.
The average algorithm for the increment process (3.48) is formulated as follows. Theorem 3.2 Let the switching semi-Markov process x ( t ) ,t 2 0, be uniformly ergodic with stationary distribution 7r(B),B E E , satisfying the relation
T ( d z )= p ( d z ) m ( z ) / m , where p ( d x ) is the stationary distribution of the embedded Markov chain x n , n 2 0, defined by the stochastic kernel P ( x ,B ) . The increment function a ( x ) , x E E is supposed to be bounded, that is, a E B(E). Then the weak convergence
" ( t ) ===+5 0 +zit, t 2 0 ,
E -+
0,
holds true. The average velocity is 2=
IE
p(dx)a(z)/m.
(3.49)
The proof of Theorem 3.2 is based on the representation of the increment process (3.48) by the associated jump random evolution (3.20). The family of shift operators D " ( x ) , x E E , is given by the relation DE(z)p(.) = p(.
+m(z)).
By using Taylor's expansion we get the following asymptotic representation on
'p
E C,~(IW~)
D E ( Z ) P ( U )= [I
+ m z ) + &W(Z)lP(U),
80
CHAPTER 3. STOCHASTIC SYSTEMS I N THE SERIES SCHEME
where
and the negligible term lD;(z) satisfies the condition
IlJD;(~)cpll
+
0,
E
+
0,
cp E C,”(W.
The compensating operator of the jump random evolution associated with the increment process (3.48) is represented in asymptotic form (3.24) in Proposition 3.4.
Remark 3.2. The formula (3.49) can be explained as follows. The increment process (3.48) takes its increments in the renewal moments T,, n 2 0, connected by the recursive relation
The sojourn times
On,n 2
1, are defined by the distribution functions
Hence, it is almost evident that the average value of jumps have t o be calculated by using the stationary distribution p(dz) of the embedded Markov chain. The average approximation process [ ( t )= Zit, t 2 0, is continuous in time. Hence, the average velocity 2 of the limit process have to be normalized by the average mean value of the sojourn times
+
that is according to (3.49) The increment process can be considered in more general forms, for example, as follows 5‘(t) = 50
+E
c
4 t I E )
4Xn-1,
4,
t 2 0.
(3.50)
n=l
It is easy t o formulate the average algorithm for the increment process (3.50) by taking into account that the coupled sequence ~,-1,x,,n 2 0, is
3.4. DIFFUSION APPROXIMATION
81
also a Markov chain with the stationary distribution p'(dx, dx') = p(dx)P(x,dx').
Corollary 3.4 Under the conditions of Theorem 3.2 for the bounded increment function a(x,z'), x,x' E E , the weak convergence
C(t) I&J+ait, t 2 0,
& + 0,
holds. The average velocity of the limit linear drift 2 is calculated by
ii =
s, s, p(dz)
P ( x ,dx')a(z,x')/rn.
(3.51)
An additional interesting form of the increment process is
["(t) = t o
+
c
V(tIE)
&
a(%
t 2 0,
ek+l),
(3.52)
k=l
for which we have the following result. Corollary 3.5 Let us consider the process (3.52). Then, under the conditions of Theorem 3.2, and the additional condition that the function
-a(.)
1
00
:=
F,(dt)a(x,t),
zE
El
is bounded, the average velocity of the limit linear drift 2, is calculated by
3.4 3.4.1
Diffusion Approximation
Stochastic Integral Functionals
The diffusion approximation scheme applies to the integral functionals in the series scheme with accelerated time-scaling of the switching semiMarkov process in the following form (3.53) The velocity function a a ( z ) is supposed t o depend on the parameter series as follows
a"(.)
+
= &-la(x) a1(x).
(3.54)
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
82
The first term in (3.54) satisfies the balance condition
7r(dz)a(z)= 0.
(3.55)
The accelerated scaling of the switching process and the balance condition (3.55) provide the diffusion approximation of fluctuation of the integral functionals (3.53). Under the balance condition the average approximation scheme (Section 3.3.1) in Corollary 3.2 provides the weak convergence
I"
a(z(s,e))ds
+0,
E
-+
0.
Let us state the following conditions.
D1: The switching semi-Markov process z ( t ) ,t 2 0 , is uniformly ergodic with the stationary distribution 7r(B),B E E . D2: The second moments of the sojourn times are uniformly bounded:
1 Srn
00
m2(2)=
and sup xEE
t2Fx(dt)5 M < +m,
-
t2Fx(dt)
T
0,
T
-+
00.
D3: The velocity functions a ( z ) and a1(z)are bounded and a(.) the balance condition (3.55).
satisfies
Let us denote by w ( t ) , t 2 0, the standard Wiener process, that is, lEw(t) = 0 and E w ( s ) w ( t )= s A t . Theorem 3.3 holds
Under Conditions 0 1 - 0 3 the following weak convergence
cr"(t)==+ ao(t):= a0
+ a l t + aw(t),
E
provided that a2 > 0. The variance a2 is calculated by (T2
where
= uo"+ u p ,
-+
0,
83
3.4. DIFFUSION APPROXIMATION
and the velocity of the drift is
The potential operator Ro (see Section 1.6) corresponds t o the generator Q associated t o the Markov process
where q ( z ) := l/m(z), m ( z ):= J,"F(t)dt.
Remark 3.3. The function p ( z ) is positive, if the density f, (with respect to Lebesgue measure on R+)of F, is a completely monotone function. That means if the derivatives of fzn', ( for n = 1,2, ..., exist and 2 0. This class of distribution function is included in the class of decreasing failure rate distribution functions. (See 84). In the case where fz is of Polya frequency function of infinite order (PF,), we have p ( x ) 5 0. This class of distribution functions is a subset of the class of increasing failure rate distribution functions. We have p ( z ) = 0 for exponential distributed renewal times, that is, for switching Markov processes.
(-l)"fp'(z)
Corollary 3.6 Under Conditions D l - D 3 , the integral functionals (3.53) with the switching Markov process, converge weakly [ " ( t ) ==+ aO(t):= a0
+ U l t +aow(t),
&
4
0,
where a: is defined as an Theorem 3.3. Let us give here some heuristic explanation of the diffusion approximation of the integral functional. By using the representation (3.54), the integral functional takes the form:
a , ( z ( s / c 2 ) ) d s = c L l E 2a , ( x ( s ) ) d s
=
&lo
+
a(x(s))ds
84
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
It is easy to see that the second term satisfies the average principle
al(x(s/E2))ds==+ Zit,
E -+0.
The first term requires a more thorough explanation. The integral functional with time-scaling
cl!'(t)= E LlE2 a(x(s))ds, under the balance condition
induces fluctuations comparable to the accelerated moving determined by the velocity = a(z)Roa(x).
go(.)
Indeed, the potential kernel Ro(x,d y ) can be interpreted as an intensity of transition between state x and d y . Now, the variance of the Wiener process
can be interpreted as a characteristic of the accelerated moving of Wiener process. 3.4.2
Stochastic Additive Functionals
The diffusion approximation is applied to the stochastic additive functionals (Section 3.3) in the series scheme with accelerated switching
+ / $ ( d s ; z ( s / E 2 ) ) , t 2 0. t
("(t)= (0
(3.56)
0
The family of processes with locally independent increments $(t; x),t 2 0 , x E E , depends also on the series parameter E and is determined by the generators
The process x ( t / E 2 ) ,t 2 0, is a semi-Markov process as described in the previous section.
3.4. DIFFUSION APPROXIMATION
85
The selection of the first two moments of the jump values in (3.57) transforms the generators into the following form
1 + fdu; z)cp"(u) + -Y,(z)cp(u),
r,(z)cp(u) = g,(u; z)cp'(u)
where
Here:
The intensity kernel has the representation
r,(u,dv;z)= r(u,dv;z) + E ~ ~ ( U , ~ V ; Z ) .
(3.58)
The velocity of the deterministic drift has the representation g E ( u ; z )= g(u;z)+ E g l ( U ; z ) .
(3.59)
The time-scaling of the increments in (3.57) is made for the same reasons as in the average scheme (Section 3.3.1). The time-scaling of the intensity kernel is connected with the finiteness of the second moments of increments. The balance condition (3.60) provides the compensation of the velocity &-lg(u;x) in the average scheme and appears in the diffusion scheme. Let us state here the following additional conditions.
D3': The velocity functions g ( u ; z) and g l ( u ; s )belong to C1(Rd x E ) , and the balance condition is fulfilled (3.60) D4: The operators Y&(Z)(P(U)
:= E - l
E2V2
[(P(u+Ev) -(P(u)-~vcp'(u) - -v2cp"(u)]r& 2
(u, dv;z),
CHAPTER 3. STOCHASTIC SYSTEMS I N THE SERIES SCHEME
86
are negligible for cp E Ci(IRd),that is,
Theorem 3.4 Under Assumptions D1, 0 2 , D5” and 04,the following weak convergence holds
E‘(t) ===+EO(t),
E -+
0,
provided that the diffusion coeficient g(u) is positive for u E Rd. The limit diflusion process (‘(t), t 2 0, is defined by the generator
Lcp(u)= &)cpl(u)
+ p1(-u ) c p ’ l ( U ) .
The velocity of the drift is h
g ( u ) = 51(u)
+ &(.) + F3(U),
where
AND
WHERE
Let us consider a stochastic additive functional
( t ) t, 2 0, represented
by
(“(t)= t o
+/
0
t
C“(ds;x(s/E2)), t 2 0.
(3.61)
3.4. DIFFUSION APPROXIMATION
The family of Markov processes with independent increments 0, x E E l is determined by the generators
87
c(t;x),t 2
Subtracting the first two moments of jump values, the transformed generators take the following form: 1
+ ,c€(4cp”(u)+ Y € ( 4 ( P ( 4 .
r&(Z)cP(u)= 9 & ( Z ) ( P f ( 4
Here
CE(x) :=
Ld
vv*l?,(dv;z).
The velocity of the deterministic drift has the representation g“(x)
= c-lg(x)
+ 91(x).
(3.63)
+ EI’l(dv;x).
(3.64)
The intensity kernel
r e ( d v ;X) = l?(dv;Z)
Then the following balance condition holds
The first two moments of the increments are bounded functions:
Corollary 3.7 gence holds
Under Assumptions D1-D.2, the following weak conver-
I’
CE(ds;x(s/E2)) ==+ (‘(t),
E + 0.
88
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
The limit diffusion process ['(t), t 2 0 is determined b y the generator Lop(.)
1+ +'(u).
= &p'(u)
Here:
3.4.3
Stochastic Evolutionary Systems
The evolutionary stochastic system in the diffusion approximation scheme is given by the evolutionary equation
$ U & ( t )= g"(U"(t);.(t/&2)), U"(0)= 21. The velocity gE has the following representation gE(u; ). = e-lg(u; ).
+ g 1 ( u ;x).
The balance condition (3.60) holds. Corollary 3.8 Under Assumptions D1,0 2 , and 03: the following weak convergence holds
U " ( t ) ===+< O ( t ) ,
E -+ 0,
provided that the diffusion coeflcient B^ is positive. The limit diffusion process Co(t),t 2 0, is determined by the generator
Lop(.)
= g(u)p'(.)
+;s(u)pyu).
The velocity of the drift is
F(u)= &(.) where
+ F2((.) + ?3(u),
3.4. DIFFUSION APPROXIMATION
89
The covariance function is
where:
3.4.4
Increment Processes
The diffusion approximation for the increment processes in the series scheme is considered with the following time-scaling V(tlEZ)
F ( t )= P O
+E
C
as(zn),
t 2 0.
(3.66)
n=l
The values of jumps are a,(z) = a(.>
+
EUl(Z).
The following balance condition holds
where p ( d x ) is the stationary distribution of the embedded Markov chain xn, n 2 0.
Theorem 3.5 gence holds
Under Assumptions Dl-D3, the following weak conver-
F(t)
b
+ at + a w ( t ) ,
e +0,
provided that u 2 > 0. The variance u2 is calculated by u2 = a; +Is;,
where:
90
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
Co(z) := C(z)&C(z),
C(z) := b(z)/m(z),
The drift velocity is
Remark 3.4. As in the averaging scheme (Section 3.3.2), since the increment process (3.66) has its jumps at the renewal moments, the average effect is realized by using the stationary distribution of the embedded Markov chain p(dz). The normalized factor l / m transforms the discrete jumps of the increment process into the continuous characteristics of the limit process.
3.5
Diffusion Approximation with Equilibrium
The balance condition in the diffusion approximation for the stochastic additive functional in the series scheme, considered in Section 3.4.2, provides the homogeneous in time limit diffusion process. In applications there are situations in which the average approximation is not trivial, that is, the limit process must be considered as an equilibrium process, very often deterministic, determining the main behavior of the stochastic systems on the increasing time intervals. The problem of approximation of fluctuations of stochastic systems with respect to equilibrium is considered in this section.
3.5.1
Locally Independent Increment Processes
First we consider a stochastic system in series scheme with small series parameter E > O,E + 0, described by a Markov process with locally independent increments q E ( t )t, 2 0, on the Euclidean space Rd, d 2 1, given by the generator
91
3.5. DIFFUSION A P P R O X I M A T I O N WITH EQUILIBRIUM
The main condition in the average scheme is the asymptotic representation of the first moment of jumps b E ( u ) :=
LdwrE(u,
+ &eyu),
dV) = qu)+ Ebl(u)
with bounded continuous functions b ( u ) , bl(u)and with the negligible term
lleq
+ 0,
E + 0.
Then the Markov process ~ " ( tt )2, 0, converges weakly
to the solution of the evolutionary equation
& d t )= b ( P ( t ) ) , p(0) = q"(0) = u.
If there exists an equilibrium point p for the velocity b(u),that is,
b ( p ) = 0, and the initial value of the process is close to the point p, (see (3.75)) then the weak convergence
$(t)
* p,
t
E -+ 0,
--+
co,
(3.68)
holds. Approximation of the fluctuation ~ " ( -t p) is considered in the following centered and normalized scheme
("(t):= l;l"(t/&)- E-lp,
t 2 0.
(3.69)
Such a normalization can be explained by noticing that
( " ( t ):= [ E $ ( t / E )
-
PI/&.
(3.70)
The convergence (3.68) provides the weak convergence EVE(t/E)
===+p,
E + 0,
t
--+
00.
Hence, the normalized scheme (3.70) is productive.
92
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
Theorem 3.6 Let the intensity of the j u m p values of the Markov process q E ( t )t, 2 0, given by the generator (3.67), have the asymptotic representations of the first two moments of jumps as E 4 0: bE(zu l ) :=
ld s,.
+ EU, d v ) = b ( z ) + Eb(z, u)+ E q ( z , u ) ,
vr,(z
~ ~ (U 2 ) := ,
+ E u , d v ) = qz)+ e;(z, u),
vv*rE(z
(3.71)
(3.72)
with the negligible residual terms
Ilefll
-+
0,
E
-+
0,
i = 1,2.
(3.73)
T h e n the normalized centered process (3.69) converges weakly, as E+O, t o the digusion process ['(t), t 2 0, given by the following generator LOV(U>= b(p,U)V/(U)
1
+ p(p)V%).
(3.74)
The initial value of the limit diffusion is
['(o) = E'O lim[EqE(0)- p ] / ~ , that is ~ q " ( 0 ) p N
(3.75)
+ ~['(o).
Remark 3.5. Let the intensity kernel be represented by r E ( u , d v )= r ( u , d v )
+ &I'1(u,dv),
(3.76)
and the kernel r ( u , d v ) have continuous derivative in u.Then the asymptotic representation (3.71) has the following form
v ( z , u ) = qz)+&[bl(z)
+ uqu)l +&eyz,u),
where r
Corollary 3.9 Under the conditions of Theorem 3.6 and the additional condition (3.76), the limit diffusion process Co(t),t2 0 , i s defined by the generator
3.5.DIFFUSION APPROXIMATION WITH EQUILIBRIUM
93
where
that is the Ornstein- Uhlenbeck diffusion process. 3.5.2
Stochastic Additive finctionals with Equilibrium
More complicated but some what similar is the diffusion approximation of the stochastic additive functional (Section 3.3.1) in the series scheme satisfying the average approximation conditions with non-zero average limit processes. That is, the stochastic additive functional with Markov switching in the average approximation scheme is represented as follows
&(t)= 50“+
t
$(dS;z(s/E)),
t 2 0.
0
The family of Markov processes with locally independent increments
f ( t ;x),t 1. 0, x E E , with values in the Euclidean space Rd, d 3 1, is given by the generators (Section 3.3.1)
+
]r&)cp(u) = &J(%xC>Cp’(u) & - l % ( 4 ( P ( 4 ,
(3.77)
defined on the Banach space C1(Rd). The switching Markov process x ( t ) , t 2 0 , on the standard state space ( E , E ) is given by the generator Q d z ) = q(z)
1 E
P ( x ,~ Y ) [ V ( Y )cp(z)l.
(3.79)
According to Theorem 3.1, the following weak convergence holds
r“(t)---r. Cg(t),
&
--+
0.
The limit process f ( t ) , t 2 0, is a solution of the deterministic evolution equation
$At) = Xm, (3.80)
CgP) = &.
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
94
The average velocity c(u),u E Rd, is defined as follows
Now we consider the centered stochastic additive functional
with the re-scaled switching Markov process as follows rt
(3.82) and with the more general of the stochastic additive functional qe(t;x),t 2 0, x El
where
This generalization means that the velocity of drift g E ( u ; x ) and the intensity kernel r,(u,dv;x)now depend on the parameter series in the following way S E ( Y
). = s(u;).
+ Egl(U; 21,
(3.85)
and
Subtracting the second moment of jump values in (3.83)-3.84) gives the representation
Here:
3.5.DIFFUSION APPROXIMATION W I T H EQUILIBRIUM
and CE(u;x) :=
ld
vv*rE(u, d v ; x).
95
(3.88)
From (3.86), we get in (3.88) c E ( u ; x= ) C ( u ; x )+&cl(u;x),
where
Theorem 3.7 (Diflusion approximation without balance condition). Let the following conditions be fulfilled. D1’: The velocity and the intensity kernel are represented by (3.85) and (5’.86). D2’: The velocity functions and the second moments of jumps have the following asymptotic expansion:
+ E u ; x) = C(v; x) + e;(v,
C(?J
21;
x),
with the negligible terms ei(v, u;x),k = 1 , 2 , 3 satisfying the condition, f o r any R > 0 , sup
u;
+ 0,
-+0.
ZEE IuI
The negligible term $(x) satisfies the following condition
(Ir,o(z)cpII 0 , +
E
-+
0,
cp E
C3(Rd).
D3”: T h e initial values satisfy
c; =+ ro, supIEl(;l (6 = & +&(lo”, E>O
5 c < +m.
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
96
Then the weak convergence holds
C ( t )===+a t > ,
E
4
0,
provided that B ( v ) > 0. The limit diffusion process t 2 0, is determined by the generator of the coupled Markov process C(t),r(t),t 2 0,
t(t),
1 W u , v) = b(v,u)cpl(u,). + f(")cp:,(%
v) + %J)(P1(% v),
where: b(v,'LL)= &(v)
+ uY(v),
The covariance function is
where:
-
C(v) =
s,
7r(dz)C(v;z).
Here
j(v;z)
:= g(v;z) -
gv),
and & is the potential operator of Q (Section 1.6). This means that the coupled Markov process c(t),r(t),t 2 0, can be defined as a solution of the system of stochastic differential equations
+0(5^(WW(t),
d a t ) = b ( t @ ) ,F(t))dt
d r ( t ) = g^(F(t))dt. The covariance function g(v) is determined from the representation
B ( v ) = O-(v)a*(v).
97
3.5. DIFFUSION APPROXIMATION WITH EQUILIBRIUM
Remark 3.6. The limit diffusion process r ( t ) , t 2 0, is not homogeneous in time and is determined by the generator
The limit diffusion process is switched by the equilibrium process r ( t ) t, 2 0.
Remark 3.7. The stationary regime in the averaged process (3.80) is obtained when the velocity has an equilibrium point p, that is, c ( p ) = 0. Then the limit diffusion process c(t), t 2 0, is of the Ornstein-Uhlenbeck type with generator Cop(.)
+ p1 d y u ) ,
= b(u)cp'(u)
where: b ( u ) = bo
bo = &), 3.5.3
bl
+
Ubl,
= Z'(p),
B = B(p).
Stochastic Evolutionary Systems with Semi-Markov Switching
Now the stochastic evolutionary systems in diffusion approximation scheme considered in Section 3.4.3 is investigated without balance condition (3.60) but under assumption of average approximation conditions of Corollary 3.3, (Section 3.3). The centered and normalized process is considered as follows
C'(t) = E-l[U'(t) - G ( t ) ] ,
(3.89)
The stochastic evolutionary system V ( t )is described by a solution of the evolutionary equation in Rd
-U'(t) d dt
= aE(U'(t);Z(t/E2)),
(3.90)
with a,(u;x) = a ( u ; x )+cal(u;Z), where u E Rd and x E E.
(3.91)
98
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
The switching semi-Markov process x ( t ) ,t 2 0 , on the standard state space ( E ,E ) , is given by the semi-Markov kernel
Q(., B , t ) = p(., B ) F z ( t ) ,
(3.92)
for x E E, B E E , and t 2 0, supposed t o be uniformly ergodic with the stationary distribution 7r(B),B E E , satisfying the relation
r ( d x ) = p(d.c)m(.)/m,
(3.93)
where p ( B ) ,B E E , is the stationary distribution of the embedded Markov chain x,, n 2 0, given by the stochastic kernel
P(.,B) := P(.,+1
E
B 1 x, = x).
(3.94)
As usual: M
m ( z ):=
Fx(t)dt, F z ( t ):= 1 - F z ( t ) , m := L p ( d x ) m ( x ) . ( 3 . 9 5 )
The deterministic average process of the average evolutionary equation d-
-V(t) dt
6(t),t 2 0, is defined by a solution
= 2(6(t)),
(3.96)
r ( d z ) u ( u ;2).
(3.97)
with the average velocity h
a(.)
= /E
Theorem 3.8 Let the stochastic evolutionary system (3.89) be defined by relations (3.89)-(3.97) and the following conditions be fulfilled. C1: The switching semi-Murkov process x ( t ) , t 2 0, is uniformly ergodic with stationary distribution x(dx) on the compact phase space E . C2: The following asymptotic expansions take place:
+
a(w
EU;
where, for any R > 0 ,
= a(w;x)
+ Eua;(w;). + e;(w,u;.).
3.5. DIFFUSION A P P R O X I M A T I O N W I T H EQUILIBRIUM
99
Moreover, the velocity functions a(u;x) and a l ( u ;x) satisfy the global solution of equations (3.90) and (3.91). Then the weak convergence for 0 5 t 5 T ,
C“(t)===+ c O ( t ) ,
E
+
0,
takes place. The limit diffusion process cO(t),t2 0, is determined by the generator of the coupled process co(t),6(t), t 2 0,
ILv(u, V ) = b ( ~W,) ( P ; ( U, W)
1 + -B(v)p;,(u, + G(W)~:(U,w). 2 W)
(3.98)
Here:
b(u,v)= 2 1 ( W ) + u2’(v), U(W)
=
L
(3.99)
,.
n ( d x ) a ( v ; x ) , a1(v) =
The covariance matrix B ( v ) ,v
E Rd,
L
r(dx)al(v;x).
is determined b y the relations:
B ( v ) = Bo(v) + B i ( v ) ,
(3.100)
r ( d x ) Z ( v x)RoZ(v; ; z), Bl(V) =
s,
n ( d x ) p ( x ) Z ( vx)Z*(w; ; x)
(3.101)
4 2 ) = b z ( x ) - 2m2(41/m(4 -a(v; x) = a(v;x) - q.).
In the particular case of Markov switching, we have p(x) = 0 (see Remark 3.3, page 83). The limit diffusion process co(t),t 2 0 , is nonhomogeneous in time and is solution of the following SDE
+
+
dCO(t)= [ a l ( C ( t ) ) E’(C(t))CO(t)]dt B1’2(6(t))dw(t), (3.102) where w ( t ) ,t 2 0 is the standard Wiener process in Rd. The stationary regime for the average process 6(t),t2 0, is obtained when the average velocity 2(v) has an equilibrium point p, that is, 2 ( p ) = 0. Then the limit diffusion process c(t),t2 0, is an Ornstein-Uhlenbeck process with the following generator 1 t p ( u ) = b(u)p’(u) ~ B p ” ( u ) ,
+
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
100
where
b ( ~=) bl + t h o ,
bl = Zl(p),
B
bo = E’(p),
= B(p).
PROOF. The proof of Theorem 3.8 is divided into several steps. First, the extended Markov chain h
Uz = U(E~T,), Z, = z(T,),
[: = [ ‘ ( E ~ T , ) ,
n 2 0,
(3.103)
is considered, where ~ ~ 2 , 0, nis the sequence of the Markov renewal moments (moments of jumps of the semi-Markov process x ( t ) , t 2 0), that is:
F S ( t )= p(e,+l I t I 2,
=
Let us introduce the following families of semigroups:
rz(z)cp(u) = cp(U:(t)),
U:(o)
=uE
Rd,
(3.104)
where U i ( t ) , t 2 0, is a solution of the evolutionary system
d
-UG(t) dt
= ae(U:(t);Z),
zE E,
(3.105)
and, similarly,
Xt&)
= cp(G(t)), G(0)= 2, E Rd,
(3.106)
where C(t),t L: 0, is the solution of the average evolutionary system (3.96). It is worth noticing that the generators of semigroups (3.104) and (3.106) are respectively: IrE(5)cp(U)
icp(2,)
= a,(u; z)cp’(4, = 2(2,)cp’(.).
The following generators will be also used: T(z)cp(u) = a ( u ;Zc)cp’(4, F(Z)y7(U)
= qu;Z)cp’(U),
-a(u;Z):= a(u;Z)- E(u).
The main object in asymptotic analysis with semi-Markov processes is the compensating operator of the extended embedded Markov chain (3.103) given here in the next lemma.
101
3.5. DIFFUSION APPROXIMATION W I T H EQUILIBRIUM
Lemma 3.1 The compensating operator of the extended embedded Markov chain (3.103) is determined by the relation
(3.107)
where the semigroup I'Z(xlv),t 2 0, is defined by the generator:
+
A E ( vz)cp(u) ; = [ d ( EU; ~ z) - Z(v)]cp'(u), a"(u;x) := &-la& z) = c-la(u; x) a1(u; z),
+
(3.108) (3.109)
It is worth noticing that the generator AE(v; x) in (3.108) can be transformed by using condition C2 of Theorem 3.8, as follows (3.110)
A'(.; x) = E-'A&(v; x),
+
A,(v; x)cp(u) := [a,(v EU; x) - Z(v)]cp'(~) = a(v;x)cp'(u) cb(u,u; x)cp'(u) where by definition:
+ F(v,u;z)p(u),
+
I
-a(v;z) = a(v;x) - Z(v), b ( v , U ; x )= al(v;z) +uu;(v;x).
PROOF OF LEMMA 3.1. The proof of this lemma is based on the conditional expectation of the extended embedded Markov chain (3.103) which is calculated by using (3.89)-(3.91) and (3.96): E[cp(C:+,,
1
-
u:+1,2,+1)
00
=
F,(dt)E[cp(u
+
I C:
=
u,u; = v,z,
= x]
1
E2
E - ~
aE(Ug(s);x)ds-
t
2(6(s))ds],
0 The next step in the asymptotic analysis is to construct the asymptotic expansion of the compensating operator with respect to c , (see Lemmas 5.3-5.4, Section 5.5.3).
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
102
Lemma 3.2 The compensating operator (3.1Or)[email protected]) has the following asymptotic representation o n test functions 'p E c:J(w~ x I W ~ )
+ &-'X(V; v)P'p(u,*, .) +[LO(Z,V ) P P ( * , .>+ &J)PP(% ., .)I
ILE'p(u,V ,X) = cW2Q'p(., .,x)
21,
+OfP(%v,x),
(3.111)
with the negligible term
Here, by definition
QdxC) = q(x)[P- Ilc~(x),
(3.112)
is the generator of the associated Markov process xo(t),t 1 0 , with the intensity function
The generator X(V;x), and the operator ILO(v; x) are defined as follows: X(v; z)p(u) = q v ;z)'p'(u),
(3.113)
and
+
b(v, u;x) := a l ( v ;x) ua:(v; x), B1(v;x) := p2(x)iz(w;x)ii*(v; x),
(3.115) (3.116)
pz(x) := m a ( x ) / m ( z ) ,
1
00
mz(2) :=
t2Fz(dt).
The proof of Lemma 3.2 is given in Section 5.5.3.
(3.117)
0
Chapter 4
Stochastic Systems with Split and Merging
4.1 Introduction In the study of real systems a special problem arises, connected to the generally high complexity of the state space. Concerning this problem, in order to be able to give analytical or numerical tractable models, the state space must be simplified via a reduction of the number of states. This is possible when some subsets are connected between them by small transition probabilities and the states within such subsets are asymptotically connected. That is typically the case of reliability -and in most applications involving hitting time models, for which the state space is naturally cut into two subsets (the up states set and the down states set) In this case, transitions between the subsets are slow compared with those within the subsets. In the literature, the reduction of state space is also called aggregation, lumping, or consolidation of state space. This chapter deals with average and diffusion approximations with single and double asymptotic phase split and merging of the switching process. The asymptotic merging provides a simpler process and for that reason is important for applications, as for example in reliability where in general two subsets of states are of interest: up and down states. 1009127.
The main object studied here is the following stochastic additive functional (see Sections 2.6, 3.3.1, 3.4.2, and 3.5.2)
["(t)= ['(O)
+
/
t
q"(ds; z " ( s / E ) ) ,
t 2 0 , E > 0.
0
The switching semi-Markov process z ( t ) is considered in two cases: er103
104
CHAPTER 4 . STOCHASTIC SYSTEMS WITH SPLIT A N D MERGING
godic and absorbing. Particular cases of the above additive functional that will be studied are the following three: 1. Integral Functional
d ( t )=
I"
a " ( z E ( s / & ) ) d s , t 2 0,
E
> 0.
2. Dynamical System d -dtU & ( t )
= CE(U&(t); X&(S/&)),
t 2 0, & > 0.
(44
3. Compound Poisson Process V(t/&)
cE(t)= E
C aE(z;),
t 2 0, t > 0,
(4.3)
k=l
where ~ ( t t) 2 , 0 , is a Poisson process. The above functional F ( t ) , t 2 0, can also be written in the following form V E ( t / & ) -1
( " (t)=
1
+ $(&ee(t);zE(t/&)),t 2 0,
'$(&ek;z;-i)
&
> 0.
k=l
The generators r,(z),z E E , of the Markov processes with locally independent increments $(t; z), are given in Section 3.3, that is ~,(z)Cp(u)= a&(Kz)Cp'(u) +&-I
4.2
S,.
[Cp(u
+ &v) - Cp(u)
-
&vcpyu)lr,(u, dv;z). (4.4)
Phase Merging Scheme
4.2.1 Ergodic Merging The general scheme of phase merging, described in the introduction, now will be realized for the semi-Markov processes zc"(t),t 2 0, with the standard phase (state) space ( E , E ) ,in the series scheme with the small series parameter E 0, E > 0, on the split phase space (see Fig. 4.1) --f
N
E= UEk, k= 1
EknEp=@,
k#k'.
(4.5)
4.2. PHASE MERGING SCHEME
105
Remark 4.1. More general split schemes can be used without essential changes in formulation, for example
E
=
u E,,
E V p v =r 0,
#
VEV
where the factor space (V,V) is a compact measurable space. The case where V is a finite set is of particular interest in applications. The semi-Markov kernel is
wher e xE E , B ~ & , t 2 0 . Let us introduce the following assumptions:
ME1: The transition kernel of the embedded Markov chain x;, n 2 0, has the following representation
P(z, B ) = P ( z ,B ) + EPI(IC, B).
(4.7)
The stochastic kernel P ( x ,B ) is coordinated with the split phase space (4.5) as follows
The stochastic kernel P ( z ,B ) determines the support Markov chain z,,n >_ 0, on the separate classes Ek, 1 5 k 5 N , (see Fig. 4.1 (b)). Moreover, the perturbing signed kernel Pl(s, B ) satisfies the conseruative condition
which is a direct consequence of (4.7) and P E ( zE , ) = P ( z ,E ) = 1. M E 2 : The associated Markov process z o ( t ) t, 2 0 , given by the generator
where q ( z ) := l/rn(z), is uniformly ergodic in every class Ek, 1 5 k 5 N , with the stationary distributions 'ITk(dx),1 5 k I N , satisfying the
106
CHAPTER 4. STOCHASTIC SYSTEMS WITH SPLIT A N D MERGING
relations:
As a consequence, the Markov chain z,,n 2 0, is uniformly ergodic with the stationary distributions p k ( B ) , B E &k = & f l Ek,1 5 k 5 N , satisfying the integral equations
ME3: The average exit probabilities
are positive, and the merged mean values
are positive and bounded. The perturbing signed kernel Pl(x,B ) in (4.7) defines the transition probabilities between classes Ek, 1 5 k I N . So, relation (4.7) means that the embedded Markov chain xE,n 2 0, spends a long time in every class Ek and jumps from one class to another with the small probabilities E P ~ ( ~ , E \ E It ~ )is. worth noticing that under fast time-scaling the initial semi-Markov process can be approximated by some merged stochastic process on the merged phase space E = (1,..., N } . The particularity of phase merging effect is that the approximating process will be Markovian. Introduce the merging function (see Fig. 4.1. (c)) A
.(z) = k,
z EEk,
15 k 5 N ,
(4.14)
and the merged process T ( t ):= w(z"(t/&)),t 2 0,
(4.15)
h
on the merged phase space E = (1, ...,N } . The phase merging principle establishes the weak convergence, as E of the merged process (4.15) to the limit Markov process.
4
0,
107
4.8. PHASE MERGING SCHEME
(a) Initial System S,
(b) Supporting System S
3 A
(c) Merged System
S
Fig. 4.1 Asymptotic ergodic merging scheme
108
CHAPTER 4. STOCHASTIC S Y S T E M S W I T H SPLIT A N D MERGING
Theorem 4.1 (Ergodic Phase merging principle). Under Assumptions MEl-MES, the following weak convergence holds
2&(t)==+ q t ) ,
(4.16)
& -+ 0.
The limit Markov process 2(t),t 1 0, o n the merged phase space ?!, = (1, ...,N ) is determined by the generating matrix (j = (&, 1 5 k , r 5 N ) , where:
a
First let us precise the matrix which is the generating matrix of some conservative Markov process. Indeed, from (4.7) and (4.8), we calculate: &p^kT =
L,
/'k(dx)&Pl(x,
Er)
where d k r is the Kronecker symbol. Hence, p^kT 2 0, if r # k and p^I& 5 0. Using the conservative condition in ME1: P l ( z ,E ) = 0, and taking into account (4.12) we obtain the following:
Fk = -
P k ( d x ) P l ( x , E k ) = -pkk
=I P k r , T#k
cT+
and after dividing by p^k , we get p^kr = 1. After multiplying by &, together with (4.17), it gives (4.18) h
That is the condition of conservation of the generating matrix Q. Condition (4.12) ensures that all states in E are stable. h
We will introduce the following additional assumption.
4.2. PHASE MERGING SCHEME
109
ME4: The merged Markov process .^(t),t 2 0, is ergodic, with the stationary distribution ?? = ( X k , k E E ) . h
In the particular case of the Markov initial process zc"(t), t 2 0, with the semi-Markov kernel Q E ( zB , , t ) = P E ( zB, ) [ 1 - e - q ( z ) t ] ,
the statement of the phase merging process theorem is valid with m ( z )= l / q ( z ) , z E E . Using the equation for the stationary distributions of the support Markov process n k ( d z ) q ( z ) = qkPk(dz),
we calculate:
that iS qk = l / f % k . Hence, the intensity of the merged Markov process can be represented as in (4.17) with the merged intensity T k = qkgk,
15
5 N.
(4.19)
From the heuristic point of view the merging formulas (4.17) and (4.19) are natural. Indeed, in order to calculate an average exit prcrbability by using the stationary distribution of the semi-Markov process defined by the semi-Markov kernel (4.6)-(4.7),we have to calculate:
- &gk.
The relations = &k/%k,l 5 k 5 N , also are natural, since the intensity of the limit Markov process has to be directly proportional to the average intensity qk = l / & k 7 in the class Ek with factor p k which is the
CHAPTER
110
4 . STOCHASTIC S Y S T E M S W I T H SPLIT A N D MERGING
exit probability. Only one question remain, why are the stationary distributions of the support Markov process used? It is a natural consequence of the limit merging effect in the fast time-scaling scheme. Equalities (4.17) are obtained by using the phase merging principle based on a solution of singular perturbation problem (see Section 5.2). 4.2.2
Merging with Absorption
We will study here the merging scheme with an absorbing state. The semiMarkov process z"(t),t 2 0, is considered on a split phase space
uEk, N
Eo = E U { O } , E =
Ek n E p = 8, k # k',
(4.20)
k=l
with absorbing state 0. For example, in Fig. 4.2., we have: Eo = El U E2 U E3 U ( 0 ) ; E = El U EzU E3; and = {1,2,3}. Let us introduce the following assumptions.
MA1: The semi-Markov kernel
of which stochastic transition kernel is perturbed, that is
P E ( xB l ) = P ( x ,B ) 4-E P l ( Z , B ) , where P(xlB ) satisfies relation (4.8). MA2: The perturbing kernel P ~ ( Z B ,) satisfies the following absorption condition. There exists at least one Ic E El such that the absorption probability from Ic is positive, that is (4.21)
MA3: The stochastic kernel P ( x , B ) defines the support embedded Markov chain x,, n 2 0 , P ( Z ,B ) = P(Z,+l E B
1 2,
= x),
which is uniformly ergodic in every class Ek, 1 5 k 5 N , with the stationary distributions & ( B ) , 1 5 k 5 N , defined by a solution of the
4.2. PHASE MERGING SCHEME
111
equations:
Fig. 4.2
Asymptotic merging scheme with absorption
Theorem 4.2 (Absorbing phase merging scheme) Under split phase merging scheme (4.20) and Assumptions MA1-MA3 (Section 4.2.1) for the support Markov process, the following weak convergence takes place
*q t ) ,
V(Zc'(t/&))
-
E
4
c,
0,
where the limit Markov process i?(t),O 5 t 5 is defined o n the merged phase space Eo = U { 0 } , 5= (1, ..., N } by the generating matrix A
Q = [ T k r ; oI k , r I N ] ,
112
CHAPTER 4. STOCHASTIC S Y S T E M S W I T H SPLIT A N D MERGING
where:
qk = l / m k .
Tk = pkqk,
c7
The random time is the absorption (stoppage) time of the merged Markov process, that is, := inf{t 2 0 : 2(t)= 0).
<
The following result concerns the weak convergence of the absorption time of the initial process, that is,
<' Corollary 4.1
:= inf{t
1 O : z E ( t / E ) = 0).
Suppose that N = 1, and the following conditions hold:
and
Then
where
-
I"
d
-
<,
E
+
o,
&(p/m),that is,
I P ( ~t>) = eWAtl with A = p/m. (See also Section 8.1).
4.2.3 Ergodic Double Merging The stochastic systems can also be investigated in a double merging and averaging scheme. In fact, this kind of scheme is useful in practice when several orders of small transition probabilities arise. This also allows us to obtain several averaging and diffusion approximation results. Consider a family of semi-Markov processes, z'(t), t 2 0 , E > 0, on a standard state space (El&), with semi-Markov kernel
QE(zlB , t) = P(z, B)F,(t).
113
4.2. PHASE MERGING SCHEME
Fig. 4.3 Double merging
We consider the following finite split of the state space E (see Fig. 4.3): Nk
N
E = U E k , Ek= U E L , I I k S N , k=l
EL
r=l
E;: = 0, IC # IC' or T #
TI.
(4.22)
Let us introduce the following assumptions, specific to the double phase merging.
MD1: The stochastic kernel P E ( zdy) , has the following representation p E ( d~y ,) = p ( z ,d y ) 4 ~ P i ( 5d y, )
+ c2pz(2,d y ) ,
(4.23)
where the stochastic kernel P ( z ,dy) defines the associated Markov chain z, n 2 0, and Pl(z,dy) and P2(2,dy) are perturbing signed kernels. The first one concerns transitions between classes EL and the second one between classes Ek. The perturbing kernels PI and P2 satisfy the following conservative merging conditions:
Pl(z,Ek)= 0 , z E Ek, P ~ ( zE,) = 0 , z E E .
15 k 5 N
(4.24) (4.25)
MD2: The associated Markov process z o ( t ) , t 2 0, is uniformly ergodic with generator Q, defined by
Q 4 z )= q(z)
P ( z 7d y ) [CP(Y)
-
cp(z)l,
(4.26)
CHAPTER 4. STOCHASTIC SYSTEMS WITH SPLIT AND MERGING
114
with stationary distribution 7r;(dz), 1 5 r L Nk, 1 5 k I N . As a consequence, the associated Markov chain xn, n 2 0, is also uniformly ergodic in every class E l , 1 5 r 5 Nk, 1 5 k 5 N . MD3: The merged Markov process 2(t),t 1. 0 is uniformly ergodic, with stationary distribution (?;, 1 5 r 5 Nk, 1 5 k 5 N ) . The perturbing operators Qk, Ic = 1,2 are defined as follows
Let us define the following two merging functions:
G(z) = w,L,
if
z E EL,
and h
G(%) = k, if
X E
Ek.
Theorem 4.3 (Ergodic double merging) Assume that the merging conditions MDl-MDS hold, then the following weak convergences take place: G ( Z E ( t / & ) ) ===+ 2(t), E + 0,
* $(t)
A
G(."(t/&2))
& -+
(4.27)
0.
(4.28)
The limit process ?(t) has the state space E =
u?=l,!?k,
,!?k
=
{v;
:
A
5 N k } , and $(t) the state space 2 = {1,2, ..., N } . The generators of processes 2(t) and 2(t) are respectively and which are defined 15
T
h
G2,
h
below. The contracted operators 61 and
A
6,
are defined as follows:
IIQlII = GlII and
a&. A
A
h
h
IIQ2II = The projectors II and
fi are defined as follows:
ncp(.)
=
c N
Nk
k=l
r=l
Glrk(2)
4.2. PHASE MERGING SCHEME
115
where:
and
(4.29)
where:
Thus we have
0 2 =
(g), where
6 A
Moreover Qz =
($ek)r
where
and
Now, we have: N
N
116
CHAPTER 4 . STOCHASTIC SYSTEMS WITH SPLIT AND MERGING
and
m= 1
5 0.
( ‘‘1
Qg),
h
Thus we have Q1 = diag(Qij, ..., where Qfj = qki . Let us introduce the following additional assumption, needed in the sequel.
MD4: The merged Markov process $(t),t 2 0, is ergodic with stationary distribution ( G k , 1 5 Ic 5 N ) . 4.3
Average with Merging
In this section we consider switched stochastic systems with split and merging of the switching semi-Markov process. Let the processes ~ “ (x), t; t 2 0, x E E , E > 0, be given by the generators (4.4), = a&(u;x)(P’(u)
+ &v) - +)
+&-I Ld[p(u
Let the stochastic evolutionary system,
-
&up’(u)lrE(u, dv;4.(4.30)
P(t),be represented
by
rt
[“(t)= [ “ ( O )
+ J0
V E ( d s ;z & ( s / E ) ) ,
t 2 0,
E
> 0.
(4.31)
Let us introduce the following conditions.
A l : The drift velocity a(u; x) belongs to the Banach space B1, with a,(u;
x) =
x) + eyu; x),
where F ( u ;x) goes to 0 as E + 0 uniformly on (u; x). And rE(u, dv;x) = r ( u ,dv;x) is independent of E . A2: The operator
4.3. AVERAGE WITH MERGING
117
is negligible on B 1 ,that is, SUP IpEB'
Ilr&(xc>Pll 0, --+
E
+
0.
A3: Convergence in probability of the initial values of p ( t )v(x"(t/E)), , t2 0 hold, that is,
and there exists a constant c E
E%+,
such that
supE I"(0)l 5 c < +w. E>O
Remark 4.2.The operator y,(z) is the jump part after extraction of the drift part due to the jumps of the process $(t, x). 4.3.1 Ergodic Average In this section we will give a theorem for the averaging of the evolutionary system p ( t ) t, 2 0 , in ergodic single split of the switching semi-Markov process z"(t),t L 0. The switching semi-Markov process x E ( t ) t, 2 0 , is considered in a split phase space
and supposed t o satisfy the phase merging assumptions ME1-ME3 (Section 4.2.1).
Theorem 4.4 (Ergodic Average) Let the switching semi-Markov process x"(t), t 2 0 , satisfies the phase merging conditions ME1-ME3. Then, under Assumptions A l - A 3 , the stochastic evolutionary system J E ( t )t, 2 0 , (4.31), converges weakly to the averaged stochastic system
6(t): ( " ( t )=+ G ( t ) ,
&
4
0.
CHAPTER 4 . STOCHASTIC SYSTEMS WITH SPLIT AND MERGING
118
The limit process G(t), t 2 0 , is defined by a solution of the evolutionary equation d- U ( t ) = Z ( G ( t ) ;Z ( t ) ) , G(0) = dt where the averaged velocity is determined by 7rk(dz)a(u;z),
a(u;k) = L
m,
15k
(4.32)
IN.
k
The following corollary gives particular results of Theorem 4.4,in the three cases described in Section 4.1.
Corollary 4.2 1) The stochastic integral functional (4.1) converges weakly as follows
l
l--
a ( z ( s ) ) d s e 4 0,
u(z‘(s/e))ds =+
where q(dx)a(x).
a(k) = L
k
2) The dynamical system defined by (4.2), with
cE(u; ). = c
( ).~ +; eyu; z),
where F ( u ;x) is the negligible term IleE(U;2)ll
0,
E
0,
converges weakly to a dynamical system with switching process .^(t),t 2 0, d-
- U ( t ) = C(G(t);.^(t)), dt where
C(u;k) =
lk
7rk(dx)C(u;x).
3) The compound Poisson process with Murkov switching defined by (4.3) converges weakly as follows
(“(t)+ fZ(Z(s))ds, 0
e t 0.
4.3. AVERAGE WITH MERGING
119
Average with Absorption
4.3.2
The switching semi-Markov process z"(t), t 2 0 , is considered in a split phase space (see relation (4.20), Section 4.2.2)
u N
Eo = E U { O } ,
€3 =
Ek,
Ek
n E p = 8, k # k',
(4.33)
k=l with absorbing state 0, and supposed t o satisfy the phase merging scheme, that is, Assumptions MA1-MA2 (Section 4.2.2).
Theorem 4.5 (Average with Absorption) Let the switching semi-Markov process x"(t), t 2 0 , satisfy the phase merging Assumptions M A l - M A 2 , (Section 4.2.2). Then, under Assumptions A1-A3, the stochastic evolutionary system ,t 2 0 , (4.3l), converges weakly to the averaged stochastic system U ( tA
q(t),
c):
<"(t)+ 6(tA The limit process equation
c),
& -+
0.
6(t),t 2 0 , is defined by a solution of the evolutionary
I
g ( t )= Z ( G ( t ) , Z ( t ) ) , t 2 0
c, (c
o n the time interval 0 5 t 5 is the absorption time of the merged Markov process Z ( t ) , t 2 0). The averaged velocity is determined by Tk(dz)a(u;z), 15 k 5 N ,
Z(u;O) = 0.
The following corollary gives particular results of Theorem 4.5, in the three cases described in Section 4.1.
Corollary 4.3 1 ) The stochastic integral functional (4.1) converges weakly as follows
1 t
a ( z " ( s / c ) ) d s+
1
tAr
Z(Z(s))ds,
E
---f
0,
C H A P T E R 4 . STOCHASTIC S Y S T E M S W I T H SPLIT A N D MERGING
120
where A
a(k)=
L,
q(dz)a(z).
I n the particular case where N = 1, the stochastic integral functional converges weakly as follows:
I"
a(z"(s/E))ds + Zi. (t A
t),
E
4
0 , Zi =
T(dz)a(z).
2) The dynamical system defined by (4.2) converges weakly to a dynamical system with a simpler switching process .^(t),0 I: t 5 f, than the initial one z E ( t )t ,1 0. 3) The compound Poisson process with Markov switching defined b y (4.3) converges weakly as follows
("(t)+ f A ' i i ( 2 ( s ) ) d s ,
E + 0.
0
4.3.3
Ergodic Average with Double Merging
The following theorem concerns averaging results for the evolutionary system in the double merging scheme (4.22), (Section 4.2.3), satisfying Assumptions MD1-MD3.
c(t)
Theorem 4.6 (Double average) Let the switching semi-Markov process z E ( t ) t, >_ 0 , satisfy the conditions of double merging scheme MDI-MD4. Let the stochastic system be represented as follows
t " ( t )= ("(0)
+
/
t
v E ( d s z; " ( s / E 2 ) ) ,
(4.34)
0
where the processes q E ( t ; x ) ,t >_ 0 , x (4.4). Let Assumptions A1-A3 hold. Then the weak convergence
E
E , are given by the generator of
C"(t)=+ 6 ( t ) ,
E +0 h
takes place. The limit double averaged system C(t),$(t),t>_ 0 , is defined also equivalently by a solution of the equation d= dt
-V(t)
A
= ?@(t),$(t)),
(4.35)
4.3. AVERAGE WITH MERGING
121
where
Remark 4.3. The stochastic ergodic system (4.35) can be considered in ergodic average scheme (see Section 4.3.1). The same ergodic average result can be obtained for the initial stochastic system (4.34) with time-scaling c3 instead of c2.
Remark 4.4. A result analogous to Corollary 4.1 can be obtained for the double merged process .^(t),t 2 0 , in the cases of stochastic integral functional (4.1), of dynamical system (4.2), and of the compound Poisson process with Markov switching (4.3) loo. 4.3.4
Double Average with Absorption
The following result is an averaging result for the evolutionary system CE(t) in the double merging scheme (4.22). Define
c
the absorption time of the process $(t),by
c = min{t 2 o : $(t>= 0). Corollary 4.4 (Double average) Let the switching Markov process x " ( t ) , t 2 0 , satisfy the conditions of double merging scheme (4.22). Let the stochastic system be represented as follows
( " ( t )= C"(0)+
t
J 77"(d.;xE(s/E2)), 0
where the processes q E ( t ; x ) ,t 2 0, x E E , are given b y the generator of (4.4). Let Conditions A l - A 3 (Section 4.3) hold. Then the weak convergence
C'(t) + @(tA
c)
E -+ 0,
takes place. The limit double averaged system c(t),t 2 0 , is defined by a
CHAPTER 4. STOCHASTIC SYSTEMS WITH SPLIT AND MERGING
122
solution of the equation
where
r=l
c,
h
The stopping time, for N = 1, parameter
has an exponential distribution with the
h
= 9P,
where:
and p ( x ) is defined as follows PE(x, (0)) = -&2P2(2,E ) = E 2 P ( Z ) . 4.4
Diffusion Approximation with Split and Merging
In this section we consider the additive functional c"(t),t 2 0, under the following time-scaling of the switching process (4.36) The processes q"(t;x),t 2 0 , x E E , e (compare with (4.30))
> 0, are given
Let us consider the following conditions.
by the generator
4.4. DIFFUSION APPROXIMATION W I T H SPLIT A N D MERGING
123
D1: The drift velocity function has the following representation
a'(u; x) = €-la(u; x)
+ al(u;x),
where a ( u ; x )and al(u;z) belong to the Banach space BC1: Balance condition
r ( d x ) a ( u ; x ) 0.
B2. (4.38)
D2: The operators
are negligible on B2,that is,
4.4.1 Ergodic Split and Merging The switching Markov processes x c ' ( t ) t, 2. 0, are considered on the split phase space (4.5) and the support Markov process x ( t ) ,t 2 0, defined by the generator (4.9)is uniformly ergodic in every class E k , 1 5 k _< N , with the stationary distributions r k ( d x ) ,1 5 k 5 N , satisfying relation (4.10). The stochastic additive functional <"(t),t2 0 , given by relation (4.36) satisfies conditions Dl-D2 and the balance condition BC1 (4.38). Let the following assumption holds.
MS1: Merging and split condition. The transition kernel of the embedded Markov chain xk of the switching Markov process z"(t),t 2. 0, has the following representation
P " ( z , B )= P ( x , B )+&2P&,B), where the kernels P(xlB ) and the phase merging scheme.
PI(z, B ) satisfy conditions ME1-ME3 of
Theorem 4.7 (Ergodic split and merging) Let the switching Markov processes x"(t),t 2 0 , satisfy Condition MS1. Then the weak convergence
124
CHAPTER 4. STOCHASTIC SYSTEMS WITH SPLIT AND MERGING
takes place. The limit diffusion process Rt),t 2 0, switched by the merged Markov process .^(t),t 2 0 , is defined by the generator of the coupled Markov process .^(t),t 1 0,
FW,
6
where the generator defines the merged process .^(t),t 2 0, o n the merged phase space E = {1,2, ..., N } (see Theorem 4.1). The drift coefficient is defined by h
6
b(u; k) = i?l(u; k) +&(u; k) +&(u;k), with: Xk (Wa1(u;).
The covariance function is defined by
with:
,
4.4. DIFFUSION APPROXIMATION WI T H SPLIT A N D MERGING
125
Operator & is the potential of generator Q (Section 1.6).
Remark 4.5. The limit diffusion process r ( t ) ,t 2 0 , can be defined by a solution of the following stochastic differential equation
where the variance function is
qu;k)l?*(u;k) = Z(u;k), and w ( t ) ,t 2 0 , is the standard Wiener process. The following corollary concerns particular cases of the above theorem.
Corollary 4.5 1 ) The stochastic integral functional (4.1) converges weakly /ta'(z'(s/E2))ds Jo
+ r(t),
E -+0.
The limit process r(t),t 2 0, is a diffusion process with generator (4.41), where: A
b(u;k)
r k ( d z ) a l ( z ) ,and B^(u;k) E
Z 1 ( k )= s,k
rk(dz)a(z)&a(z). L
k
2) The dynamical system
d - U C ( t ) = a e ( U e ( t )z; E ( t / E 2 ) ) , t 2 O , E > 0, dt
(4.40)
converges weakly to a diffusion as in the above theorem. 3) The compound Poisson process with Markov switching defined by generators (4.3) converges weakly
6 " ( t / E 2 ) +-r^ct),
E
-4
0,
where the limit process r(t),t 2 0 , is a diffusion process with generator (4.41), with drijl
CHAPTER 4 . STOCHASTIC SYSTEMS WITH SPLIT A N D MERGING
126
and covariance coefficient
with:
4.4.2
Split and Merging with Absorption
Here, the switching Markov process z'(t),t 2 0, E > 0, is supposed to be as in the previous Section 4.4.1 but with N = 1,for simplification and without the conservative condition ME1, Section 4.2.1. The stochastic additive functionals c(t),t 2 0, are given in (4.36)(4.37) and satisfy Conditions Dl-D2, and the balance condition BC1.
Theorem 4.8 (Split and merging with absorption) Let the switching Markov processes x"(t),t 2 O , E > 0, satisfy Condition M S l . Then the weak convergence
E " W =+ tw,
0
I t 5 t, E
takes place. The limit diffusion process c(t),0 5 t 5
c,
+
0,
is defined by the generator
+
Ecp(u) =%(u)cp'(u) Iij(u)p"(u) - ;ip(u). 2
(4.41)
The drift coefficient is defined by A
+&(u),
b ( u ) =;I(.) where i?l(u)= L ? i ( d z ) a l ( u ; z ) and
-b l ( u ) =
The covariance function is defined by
where
s,
?i(dZ)a(u;x)&aL(u;z).
4.4. DIFFUSION APPROXIMATION WITH SPLIT A N D MERGING
127
where v* is the transpose of the vector v. The absorption time is exponentially distributed with intensity
r^
-
A = qp? The following corollary concerns particular cases of Theorem 4.8 in the three cases given in Section 4.1.
Corollary 4.6 1) The stochastic integral functional (4.1) converges weakly
The limit process r(t),t where b(u) = 21
=
2 0, is a diffusion process with generator (4.41),
T ( d z ) a l ( z ) and , B ( u )3
n(dz)a(z)fia(x).
2) The dynamical system (4.40) converges weakly to a diffusion as in the above theorem. 3 ) The compound Poisson process with Marlcov switching defined by generators (4.3) converges weakly <"(t/e2)
+ ;(t
A
c),
e + 0,
where the limit process r(t),t 2 0, is a diffusion process with generator (4.41), with drift
and covariance coefficient
with:
CHAPTER 4. STOCHASTIC SYSTEMS WITH SPLIT AND MERGING
128
4.4.3
Ergodic Split and Double Merging
The switching Markov processes xc'(t), t >_ 0, E > 0, are considered on the split phase space (4.5) and satisfy conditions ME1-ME4, Section 4.2.1. The stochastic additive functionals SE(t),t2 0, are given with accelerated switching
["(t)= c'(0)
+
1 t
$ ( d s ; z : " ( t / ~ ~t) 2 ) ,0, E > 0.
(4.42)
0
The following condition will be used next.
BC2: Balance condition
Cf==, ?&(u)
E
0 , where, by definition,
(4.43)
(Digusion approximation in ergodic split and double rnergi n s ) Under Conditions M D l - M D 2 and balance condition BC2, the following weak convergence holds
Theorem 4.9
<"(t)=3 ?(t),
E + 0.
h
The limit diffusion process r(t),t 2 0 , is defined by the generator h
Lp(u) =i;(u)pt(u)
1% + -B(u)pt'(u). 2
The drift coeficient is defined by
--
h
b(u) = %(u) +&(u),
where: N
N
The covariance matrix is defined by
(4.44)
4 . 4 . DIFFUSION A P P R O X I M A T I O N W I T H S P L I T A N D MERGING
129
where:
Corollary 4.7 1) The stochastic integral functional (4.1) converges weakly
h
The limit process &t), t 2 0 , is a diffusion process with generator where the drift coeficient is defined by A
(4.441,
A
b(u) = 21 (211, with
The covariance matrix is defined by
where ~ ( uk );= U ( U ; k)&a(u;
k).
2) The compound Poisson processes with Markov switching defined by generators (4.3) converge weakly < " ( t / & 3 ) =+
&),
&
-
0,
A
where the limit process r(t),t 2 0 , is a dinusion process having generator (4.44), with: h A
b 3 0,
A
1y
6 = z?ki?(k), k=l
e ( k )=
Lk
7rk(dx)C(x).
C H A P T E R 4 . STOCHASTIC S Y S T E M S W I T H SPLIT A N D MERGING
130
4.4.4
Double Split and Merging
Here, the switching Markov processes z"(t),t 2 0, are considered in the double split phase space (4.22) and satisfy conditions MD1-MD4, Section 4.2.3. The stochastic additive functionals c(t),t 2 0, are considered with the accelerated switching
+ / 0t f ( d s ; z " ( t / ~ ~ )t)2, 0,
<"(t)= c(0)
E
> 0.
(4.45)
The following condition will be used next.
BC3: Balance condition Nk
(4.46)
where
a k ( u ):=
-T
L,
7r;(dz)a(u;z).
Theorem 4.10 (Double merging) Under conditions Dl-D3, and the balance condition BC3, the following weak convergence holds
S'(t)
=+ ?(t),
&
4
0.
h
The limit diffusion process {(t), t 2 0, switched by the twice merged Markov process $(t), t 2 0, is defined by the generator of the coupled A
Markov process: {(t),$(t),t 2 0,
h h
h
where the generating matrix Q of the double merged Markov process E ( t ) , t 2 0, is defined by the relations in Section 4.2.3. The drift function is defined by A
A h
b(u;k ) = i?i(u; k) +&(u;k ) ,
4.4. DIFFUSION A P P R O X I M A T I O N W I T H S P L I T A N D M E R G I N G
131
where:
r=l
b;(u) = %(u)iioq ' ( u ) ,
The covariance function as defined by
where:
Here, the operator & is the potential operator of the merged Markov process
.^(t),t 2 0, defined by the generating matrix
s.
Corollary 4.8 1) The stochastic integral functional (4.1) converges weakly A
l a " ( x " ( s / ~ ~ ) ) d r(t), s
E + 0.
A
The limit process r(t),t L. 0 , is a diffusion process with generator (4.47), where:
and
CHAPTER 4. STOCHASTIC SYSTEMS WITH SPLIT A N D MERGING
132
2) The compound Poisson processes with Markov switching defined by generators (4.3) converge weakly: J E ( t / E 3 ) =+
&),
E -+ 0,
h
where the limit process r(t),t 2 0 , is a diffusion process with generator (4.47), where Nk
h
b 30,
e: =
Fie;,
B^(k) = r=l
4.4.5
L;
Fi(dz)C(z).
Double Split and Double Merging
Here, the switching Markov processes z"(t),t 2 O , E > 0, are considered as in the previous Section 4.4.4, and satisfy conditions MD1-MD3, Section 4.2.3. The stochastic additive functionals t 2 0 , are considered with the more accelerated switching
c(t),
+
<"(t)= cE(0)
1 t
q ' ( d s ; z " ( t / ~ * ) ) , t 2 O,E
> 0.
0
Theorem 4.11 (Double split and double merging) Under Conditions D103, and the balance condition BC2 (Section 4.4.4), the following weak convergence takes place h
*
JE(t) $(t),
E + 0.
h
The limit diffusion process r ( t ) t, 2 0 , is defined by the generator h
ILp(u) = i;(u)cp'(u)
1% + -B(u)p"(u). 2
The drift coefficient is defined by A h
b(u)
where:
h
=Z&)
h
+2o(u),
(4.48)
4.4. DIFFUSION APPROXIMATION WI T H SPLIT A N D MERGING
h
C(u;k) =
Nk
1?fC(u;k,r ) ,
1
a(u;k,r ) =
r=l
133
L;
r f ( d z ) a ( u ;x).
The covariance matrix is defined by h
N - 2
h
B^(U)
??kB(u;k),
=2 k=l
where:
h
h
&(u; k) = 2(u;k)&2(u; k),
r=l
Eo(u;k , r ) =
L;
?f(dx)Co(u;z).
.. h
Here, the potential operator & corresponds to the twice contracted operator that is
G2,
A
h
A A
h h
h h
&2k,= RoQz = II - I. Corollary 4.9 1 ) The stochastic integral functional (4.1) converges weakly
134
The limit process $\hat{\hat\zeta}(t)$, $t\ge0$, is a diffusion process with generator (4.48), where:
\[ \widehat{\widehat B} = \sum_{k=1}^{N}\hat{\hat\pi}_k\,\widehat B_k,\qquad \widehat B_k = \tilde a(k)\,\widehat R_0\,\tilde a(k),\qquad \tilde a(k) = \sum_{r=1}^{N_k}\hat\pi_k^r\int_{E_k^r}\pi_k^r(dx)\,a(x). \]
2) The compound Poisson processes with Markov switching defined by generators (4.3) converge weakly
\[ \zeta^\varepsilon(t/\varepsilon^4) \Rightarrow \hat{\hat\zeta}(t),\qquad \varepsilon\to0, \]
where the limit process $\hat{\hat\zeta}(t)$, $t\ge0$, is a diffusion process with generator (4.48), with coefficients given by the corresponding double merged characteristics.
4.5
Integral Functionals in Split Phase Space
In this section we will consider the integral functional
\[ \alpha^\varepsilon(t) = \alpha_0 + \int_0^t a(x^\varepsilon(s))\,ds, \tag{4.49} \]
with Markov switching $x^\varepsilon(t)$, $t\ge0$, in single and double ergodic split.
4.5.1
Ergodic Split
Let us define the potential matrix
\[ \widehat R_0 = [\hat r_{kl};\ 1\le k,l\le N], \]
by the following relations:
\[ \widehat Q\,\widehat R_0 = \widehat R_0\,\widehat Q = \widehat\Pi - I = [\hat\pi_k - \delta_{lk};\ 1\le l,k\le N], \]
where the generator $\widehat Q$ is defined in Theorem 4.1.
The centering shift-coefficient $\hat a$ is defined by the relation
\[ \hat a = \sum_{k=1}^{N}\hat\pi_k\,a_k, \]
or, in an equivalent form, through the stationary distribution of the embedded Markov chain $\hat x_n$, $n\ge0$. The vector $\tilde a$ is defined as $\tilde a = (\tilde a_k := a_k - \hat a,\ 1\le k\le N)$.

Proposition 4.1 Let the merging condition of Theorem 4.3 be fulfilled, and let the limit merged Markov process $\hat x(t)$, $t\ge0$, be ergodic with the stationary distribution $\hat\pi = (\hat\pi_k,\ 1\le k\le N)$. Then the normalized centered integral functional
\[ \zeta^\varepsilon(t) = \varepsilon^2\alpha^\varepsilon(t/\varepsilon^3) - \varepsilon^{-1}t\,\hat a \]
converges weakly, as $\varepsilon\to0$, to a diffusion process $\zeta^0(t)$, $t\ge0$, with zero mean and with variance $\sigma^2$ given by (4.51).
The variance (4.51) can be represented by Liptser's formula [132] in the form (4.52),
where $(b_1,\dots,b_N)$ is the solution of the equations
\[ \sum_{j=1}^{N}\widetilde Q_{ij}\,b_j = a(i) - a(0),\qquad i=1,\dots,N, \tag{4.53} \]
and $\widetilde Q = [\widetilde Q_{ij};\ 1\le i,j\le N]$ is the nonsingular matrix defined by
\[ \widetilde Q_{ij} = q_{ij} - q_{0j}. \tag{4.54} \]
As was shown in [101], the variance (4.51) can be represented in the following form ($q_i := -q_{ii}$)
\[ \sigma^2 = 2\sum_{i=1}^{N}\pi_i q_i b_i^2 - \sum_{i\ne j\ge1} b_i b_j\,[\pi_i q_{ij} + \pi_j q_{ji}]. \tag{4.55} \]
From (4.53)–(4.54), and using the balance condition
\[ \sum_{i=0}^{N}\pi_i\,\tilde a_i = 0 \tag{4.56} \]
in the right-hand side of (4.53), we obtain relation (4.57); combining (4.53) and (4.57) then yields relation (4.58).
Let us consider the negative term in (4.55):
\[ \sum_{i\ne j} b_i b_j\,[\pi_i q_{ij} + \pi_j q_{ji}] = 2\sum_{i=1}^{N}\pi_i b_i\sum_{j\ne i} q_{ij}\,b_j, \]
which, by (4.58), is expressed through the terms $\pi_i q_i b_i^2$. Thus the variance (4.52) is transformed into the form (4.55).
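Relations (4.53)–(4.55) can be evaluated directly for a finite merged chain. The following is a minimal numerical sketch of that computation (not from the text; the generator q and the values a(i) below are arbitrary illustrative choices):

import numpy as np

# Generator q of the merged Markov chain on states {0, 1, ..., N} and the
# values a(i) of the integrand; both are illustrative assumptions.
q = np.array([[-1.0,  0.5,  0.5],
              [ 0.3, -0.8,  0.5],
              [ 0.4,  0.6, -1.0]])
a = np.array([0.2, -0.1, 0.4])
N = q.shape[0] - 1

# Stationary distribution pi: pi q = 0, sum(pi) = 1.
pi = np.linalg.lstsq(np.vstack([q.T, np.ones(N + 1)]),
                     np.r_[np.zeros(N + 1), 1.0], rcond=None)[0]

# (4.53)-(4.54): solve sum_j tilde_q[i, j] b[j] = a(i) - a(0), i = 1..N.
tilde_q = q[1:, 1:] - q[0, 1:]
b = np.linalg.solve(tilde_q, a[1:] - a[0])   # b[i-1] corresponds to state i

# (4.55): sigma^2 = 2 sum_{i>=1} pi_i q_i b_i^2
#                 - sum_{i != j >= 1} b_i b_j (pi_i q_ij + pi_j q_ji).
qi = -np.diag(q)
sigma2 = 2.0 * np.sum(pi[1:] * qi[1:] * b**2)
for i in range(1, N + 1):
    for j in range(1, N + 1):
        if i != j:
            sigma2 -= b[i - 1] * b[j - 1] * (pi[i] * q[i, j] + pi[j] * q[j, i])
print(sigma2)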
4.5.2
Double Split and Merging
Let $\widehat R_0 = (\hat r_{kl},\ 1\le k,l\le N)$ be the potential matrix of the operator $\widehat Q$ (see Theorem 4.3), defined by the relations
\[ \widehat Q\,\widehat R_0 = \widehat R_0\,\widehat Q = \widehat\Pi - I. \]
In the same way, $\widehat{\widehat R}_0$ is the potential matrix of the operator $\widehat{\widehat Q}$. Let $w(t)$, $t\ge0$, be the standard Wiener process. Then the following result takes place.

Proposition 4.2 If the merging condition of Theorem 4.3 holds true and the limit merged Markov process $\hat x(t)$, $t\ge0$, has a stationary distribution $\hat\pi = (\hat\pi_k^r,\ 1\le r\le N_k,\ 1\le k\le N)$, then, under the balance condition $\widehat A = 0$, the following weak convergence takes place,
\[ \varepsilon^{-1}\int_0^t a(x^\varepsilon(s/\varepsilon^4))\,ds \Longrightarrow \sigma w(t),\qquad \varepsilon\to0, \]
with variance $\sigma^2$ given by relation (4.59).
4.5.3
Triple Split and Merging
Proposition 4.3 If the merging condition of Theorem 4.3 holds true and the limit merged Markov process $\hat{\hat x}(t)$, $t\ge0$, has a stationary distribution $\hat{\hat\pi} = (\hat{\hat\pi}_k,\ 1\le k\le N)$, then, under the balance condition $\widehat{\widehat A} = 0$, the following weak convergence takes place,
\[ \varepsilon^{-1}\int_0^t \hat a(\hat x(s/\varepsilon^2))\,ds \Rightarrow \sigma w(t),\qquad \varepsilon\to0, \]
with variance $\sigma^2$ given by relation (4.59).
Proposition 4.4 If the merging condition of Theorem 4.3 holds true and the support Markov process $x(t)$, $t\ge0$, has a stationary distribution $\pi(dx) = (\pi_k^r(dx),\ 1\le r\le N_k,\ 1\le k\le N)$, then the following weak convergence takes place,
\[ \eta^\varepsilon(t) := \varepsilon^{-1}\int_0^t \tilde a(x^\varepsilon(s/\varepsilon^3))\,ds \Longrightarrow \sigma w(t),\qquad \varepsilon\to0, \]
provided that the balance condition holds
\[ \widehat\Pi\,\Pi\,\tilde a(x) = 0, \]
where $\tilde a(x) := a(x) - \hat a$, and with variance $\sigma^2$ defined by the corresponding double-sum formula.
Concerning positiveness of the variances $\sigma^2$ defined by the above formulas, see Appendix C.
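A quick way to see why such variances are non-negative (a standard argument, independent of Appendix C, assuming a uniformly ergodic switching process with stationary distribution $\pi$) is the covariance representation of the potential:
\[ 2\int_E\pi(dx)\,\tilde a(x)\,R_0\tilde a(x) = 2\int_0^{\infty}\mathbf{E}_\pi\big[\tilde a(x(0))\,\tilde a(x(t))\big]\,dt = \lim_{T\to\infty}\frac1T\,\mathbf{Var}\Big(\int_0^T\tilde a(x(s))\,ds\Big)\ \ge\ 0. \]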
Chapter 5
Phase Merging Principles
5.1
Introduction
The phase merging principles for switching processes constructed in this chapter are based on a solution of a singular perturbation problem for an asymptotic representation of singular perturbed operators. Solving singular perturbation problems will also be basic in constructing and verifying phase merging principles, averaging, and diffusion approximation schemes. Therefore, first, the singular perturbation problems for the reducibleinvertible operators of stochastic systems presented in Chapters 3 and 4 are solved. The solution of these problems will yield the first part of the proof of weak convergence of stochastic processes, corresponding to the convergence of finite-dimensional distributions of the laws of the stochastic processes. The main assumption is that the switching processes are strongly ergodic, which implies that the generators are reducible-invertible. This means that the singular perturbation problem has a solution provided that some additional non restrictive conditions hold. The basic singular perturbation problems used here are given in Propositions 5.1-5.5. Particular stochastic systems switched by a semi-Markov process are especially considered. The average approximation is given in Propositions 5.6-5.7. The diffusion approximation is given in Propositions 5.8-5.17.
5.2
Perturbation of Reducible-Invertible Operators
5.2.1
Preliminaries
Here, we will give the main steps of the solution of the singular perturbation problem. Let $x(t)$, $t\ge0$, be a uniformly ergodic Markov process with state space $(E,\mathcal{E})$, generator $Q$, and stationary distribution $\pi$. We suppose that $E$ is split into $N$ finite ergodic classes, say $E_1,\dots,E_N$, with
\[ E = \bigcup_{k=1}^{N}E_k,\qquad E_k\cap E_{k'} = \varnothing,\quad k\ne k', \tag{5.1} \]
with stationary distributions $\pi_k$, $1\le k\le N$, on each class. Let $\Pi$ be the projector onto the null space $N_Q$ of the generator $Q$ (Section 1.6), acting as follows on the test functions $\varphi$:
\[ \Pi\varphi(x) = \sum_{k=1}^{N}\hat\varphi_k\,\mathbf{1}_k(x),\qquad \mathbf{1}_k(x) := \begin{cases}1, & x\in E_k,\\ 0, & x\notin E_k,\end{cases} \]
and
\[ \hat\varphi_k := \int_{E_k}\pi_k(dx)\,\varphi(x). \]
As a consequence, the contracted space $\widehat N_Q$ is an $N$-dimensional Euclidean space $\mathbb{R}^N$; so, the contracted vector $\widehat\varphi := \Pi\varphi$ is $\widehat\varphi := (\hat\varphi_k,\ 1\le k\le N)$. The problems of singular perturbation for reducible-invertible operators are the main tools for achieving phase merging principles. Let us recall that a reducible-invertible operator is normally solvable (see Section 1.6). Let $Q$ be a bounded reducible-invertible operator on a Banach space $B$. Then we have the following representation
\[ B = N_Q\oplus R_Q. \tag{5.2} \]
The null-space $N_Q$ is not empty, $\dim N_Q\ge1$. The decomposition (5.2) generates the projection $\Pi$ onto the subspace $N_Q$.
The operator $I - \Pi$ is the projector onto the subspace $R_Q$, where $I$ is the identity operator in $B$. Let $Q$ be a reducible-invertible operator with potential (operator) $R_0$ (see Section 1.6). The solution of the equation $Q\varphi = \psi$ in the space $R_Q$ is represented by means of the potential, where:
\[ QR_0 = R_0Q = \Pi - I. \]
For a uniformly ergodic Markov process with generator $Q$ and semigroup $P_t$, $t\ge0$, the potential $R_0$ is a bounded operator defined by
\[ R_0 = \int_0^{\infty}(P_t - \Pi)\,dt, \]
where the projector operator $\Pi$ is defined as follows
\[ \Pi\varphi(x) = \hat\varphi\,\mathbf{1}(x),\qquad \hat\varphi := \int_E\pi(dx)\,\varphi(x),\qquad \mathbf{1}(x)\equiv1,\ x\in E, \]
with $\pi(B)$, $B\in\mathcal{E}$, the stationary distribution of the Markov process.
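For a finite state space these objects can be computed explicitly; the following minimal numerical sketch (an illustration, not from the text, using the identity $R_0 = (\Pi - Q)^{-1} - \Pi$ valid for a finite ergodic generator) checks the defining relations above:

import numpy as np

# A small ergodic generator Q on a 3-state space (rows sum to zero);
# the matrix is an arbitrary illustrative choice.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 2.0,  1.0, -3.0]])

# Stationary distribution pi: pi Q = 0, sum(pi) = 1.
pi = np.linalg.lstsq(np.vstack([Q.T, np.ones(3)]),
                     np.r_[np.zeros(3), 1.0], rcond=None)[0]

# Projector onto the null space of Q: every row equals pi.
Pi = np.tile(pi, (3, 1))

# Potential operator R0 = integral of (P_t - Pi) dt = (Pi - Q)^{-1} - Pi.
R0 = np.linalg.inv(Pi - Q) - Pi

# Check the defining relations Q R0 = R0 Q = Pi - I.
I = np.eye(3)
assert np.allclose(Q @ R0, Pi - I)
assert np.allclose(R0 @ Q, Pi - I)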
5.2.2
Solution of Singular Perturbation Problems
A solution of the asymptotic singular perturbation problem for the reducible-invertible operator $Q$ in the series scheme with small series parameter $\varepsilon>0$, $\varepsilon\to0$, and perturbing operator $Q_1$ is formulated in the following way. We have to construct the vector $\varphi^\varepsilon = \varphi + \varepsilon\varphi_1$ and the vector $\psi$ which satisfy the asymptotic representation
\[ [\varepsilon^{-1}Q + Q_1]\varphi^\varepsilon = \psi + \varepsilon\theta^\varepsilon, \tag{5.5} \]
with uniformly bounded in norm vector $\theta^\varepsilon$, that is,
\[ \|\theta^\varepsilon\| \le C,\qquad \varepsilon\to0. \]
It is worth noticing that in such a problem the operator $Q$ corresponds to the generator of a uniformly ergodic Markov process. Usually the operator $Q_1$ is a generator too, but it may be just a perturbing jump kernel operator. Such a problem amounts to the asymptotic solution of the corresponding equation for a given vector $\psi^\varepsilon$. A similar equation appears when the inverse operator of a singular operator is constructed, that is,
\[ [Q + \varepsilon Q_1]^{-1} = \varepsilon^{-1}\mathfrak{Q}_0 + \mathfrak{Q}_1 + \cdots \]
While there exist many situations which cannot be classified, it is possible to find some logically complete variants of these problems. Equation (5.5) can be represented as follows
\[ [\varepsilon^{-1}Q + Q_1](\varphi + \varepsilon\varphi_1) = \varepsilon^{-1}Q\varphi + [Q\varphi_1 + Q_1\varphi] + \varepsilon Q_1\varphi_1. \tag{5.6} \]
In order to obtain the right-hand side of (5.5), we set:
\[ Q\varphi = 0,\qquad Q\varphi_1 + Q_1\varphi = \psi,\qquad Q_1\varphi_1 = \theta^\varepsilon. \tag{5.7} \]
From (5.7) we get $\varphi\in N_Q$. The third equality in (5.7) means that the vector $\theta^\varepsilon$ is independent of $\varepsilon$. Hence, the boundedness of the remaining vector in (5.5) is provided by the boundedness of the function $Q_1\varphi_1$. Now, the main problem is to solve the second equation of (5.7), that is,
\[ Q\varphi_1 = \psi - Q_1\varphi. \tag{5.8} \]
The solvability condition for (5.8) with the reducible-invertible operator $Q$ has the following form
\[ \Pi(\psi - Q_1\varphi) = 0, \tag{5.9} \]
where $\Pi$ is the projector onto $N_Q$. Taking into account that $\varphi\in N_Q$, that is $\Pi\varphi = \varphi$, (5.9) leads to
\[ \Pi Q_1\Pi\varphi = \Pi\psi. \tag{5.10} \]
The decisive step of the singular perturbation problem (5.5) comes now. The operator $\Pi Q_1\Pi$ acts in the subspace $N_Q$, and $\Pi Q_1\Pi\varphi = 0$ if $\varphi\in R_Q$. Let us introduce the contracted operator $\widehat Q_1$ on the contracted space $\widehat N_Q$ by relation (5.11), and set also $\widehat\psi := \Pi\psi\in\widehat N_Q$. So, equality (5.10) becomes
\[ \widehat\psi = \widehat Q_1\widehat\varphi. \tag{5.12} \]
Relation (5.12) establishes a connection between the two vectors $\widehat\varphi$ and $\widehat\psi$ in $\widehat N_Q$. Now, by formula (5.4), we get from Equation (5.8):
\[ \varphi_1 = R_0(Q_1\varphi - \psi). \tag{5.13} \]
Substituting (5.12) into (5.13), we get
\[ \varphi_1 = R_0\widetilde Q_1\varphi,\qquad \widetilde Q_1 := Q_1 - \widehat Q_1. \tag{5.14} \]
Finally, the vector $\theta^\varepsilon$ has the following representation:
\[ \theta^\varepsilon = Q_1\varphi_1 = Q_1R_0\widetilde Q_1\varphi. \tag{5.15} \]
Equations (5.12)–(5.15) give the solution of the singular perturbation problem (5.5). Let us now calculate the contracted operator $\widehat Q_1$ associated to the kernel $Q_1(x,dy)$ on $E$, which acts on $B$ as follows
\[ Q_1\varphi(x) = \int_E Q_1(x,dy)\,\varphi(y), \tag{5.16} \]
where we suppose that $Q_1(x,E) = q_1(x)$ satisfies the corresponding boundedness condition. The contracted operator $\widehat Q_1$ is defined by the relation (5.17).
Thus from (5.1) and (5.16), we conclude that the contracted operator $\widehat Q_1$ is determined by the matrix
\[ \widehat Q_1 = [\hat q_{kr};\ 1\le k,r\le N],\qquad \hat q_{kr} := \int_{E_k}\pi_k(dx)\,Q_1(x,E_r), \]
and acts on the Euclidean space $\mathbb{R}^N$ of vectors $\widehat\varphi = (\hat\varphi_k,\ 1\le k\le N)$ as follows
\[ \widehat Q_1\widehat\varphi(k) = \sum_{r=1}^{N}\hat q_{kr}\,\hat\varphi_r. \]
For future reference, we formulate the solution of the singular perturbation problem (5.5) as follows.

Proposition 5.1 Let the bounded operator $Q$, on the Banach space $B$, be reducible-invertible with projector $\Pi$ on the null-space $N_Q$, $\dim N_Q\ge1$, and potential operator $R_0$. Let the perturbing operator $Q_1$ on $B$ be closed with a dense domain $B_0\subset B$, $\overline{B_0} = B$, and a non-zero contracted operator $\widehat Q_1$. Then the asymptotic representation
\[ [\varepsilon^{-1}Q + Q_1](\varphi + \varepsilon\varphi_1) = \widehat Q_1\varphi + \varepsilon\theta^\varepsilon, \tag{5.19} \]
is realized by the vectors $\varphi\in N_Q$ and
\[ \varphi_1 = R_0\widetilde Q_1\varphi,\qquad \theta^\varepsilon = Q_1R_0\widetilde Q_1\varphi. \]
Here $R_0$ is the potential of $Q$, and $\widetilde Q_1$ is given by (5.14).
PROOF. The proof was given above in the case of the bounded operator $Q_1$. For the case of the closed operator $Q_1$ densely defined on $B$, see [116]. □

Corollary 5.1 Under the assumptions of Proposition 5.1, the asymptotic representation
\[ [\varepsilon^{-1}Q + Q_1 + \varepsilon\theta_2^\varepsilon](\varphi + \varepsilon\varphi_1) = \widehat Q_1\varphi + \varepsilon\theta^\varepsilon\varphi, \tag{5.20} \]
is realized by the vectors $\varphi\in N_Q$ and $\varphi_1 = R_0\widetilde Q_1\varphi$. The remaining term $\theta^\varepsilon$ is represented as follows
\[ \theta^\varepsilon\varphi = [\theta_2^\varepsilon + Q_1^\varepsilon R_0\widetilde Q_1]\varphi. \tag{5.21} \]
Here by definition: $Q_1^\varepsilon := Q_1 + \varepsilon\theta_2^\varepsilon$.

PROOF. By taking into account the following equalities, the proof becomes straightforward:
\[ [\varepsilon^{-1}Q + Q_1 + \varepsilon\theta_2^\varepsilon](\varphi + \varepsilon\varphi_1) = [\varepsilon^{-1}Q + Q_1 + \varepsilon\theta_2^\varepsilon]\varphi + \varepsilon[\varepsilon^{-1}Q + Q_1^\varepsilon]\varphi_1 = \widehat Q_1\varphi + \varepsilon[\theta_2^\varepsilon\varphi + Q_1^\varepsilon\varphi_1]. \qquad\square \]
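The exact algebra behind Proposition 5.1 can be checked numerically on a finite state space. Below is a minimal sketch (the generator $Q$ and the perturbation $Q_1$ are arbitrary illustrative matrices, and the single-class case is taken so that $N_Q$ consists of constants):

import numpy as np

# Illustrative 3-state ergodic generator Q and perturbing operator Q1.
Q  = np.array([[-2.0,  1.0,  1.0],
               [ 1.0, -3.0,  2.0],
               [ 2.0,  1.0, -3.0]])
Q1 = np.array([[-1.0,  1.0,  0.0],
               [ 0.5, -0.5,  0.0],
               [ 0.0,  2.0, -2.0]])

# Stationary distribution, projector Pi and potential R0 of Q.
pi = np.linalg.lstsq(np.vstack([Q.T, np.ones(3)]),
                     np.r_[np.zeros(3), 1.0], rcond=None)[0]
Pi = np.tile(pi, (3, 1))
R0 = np.linalg.inv(Pi - Q) - Pi

# A test vector phi in N_Q (constants) and the corrector of Proposition 5.1:
# phi1 = R0 (Q1 - Pi Q1) phi, with remaining term theta = Q1 phi1.
phi   = np.ones(3)
phi1  = R0 @ (Q1 @ phi - Pi @ (Q1 @ phi))
theta = Q1 @ phi1

# [eps^{-1} Q + Q1](phi + eps*phi1) = Pi Q1 phi + eps*theta, for every eps.
for eps in (1.0, 0.1, 0.01):
    lhs = (Q / eps + Q1) @ (phi + eps * phi1)
    rhs = Pi @ (Q1 @ phi) + eps * theta
    assert np.allclose(lhs, rhs)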
Solving the singular perturbation problem (5.19) is trivial if the following balance condition holds
\[ \Pi Q_1\Pi = 0. \tag{5.22} \]
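Indeed, under (5.22) the contracted operator vanishes on $N_Q$:
\[ \widehat Q_1\widehat\varphi = \Pi Q_1\Pi\varphi = 0,\qquad \varphi\in N_Q, \]
so the leading term $\widehat Q_1\varphi$ in the representation (5.19) disappears, and a non-degenerate limit can only appear at the next order in $\varepsilon$; this is the subject of the following proposition.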
Nevertheless, there are cases of a non-trivial solution of the singular perturbation problem under the balance condition.
Proposition 5.2 Let the operator $Q$ on $B$ be a bounded reducible-invertible operator with the projection operator $\Pi$ and the potential $R_0$. Assume that the operator $Q_1$ satisfies the balance condition (5.22), that $Q_1$ and $Q_2$ are closed with common domain $B_0$ dense in $B$, and that the operator $Q_0 := Q_2 + Q_1R_0Q_1$ has a non-zero contraction $\widehat Q_0$ on the null-space $N_Q$. Then the asymptotic representation
\[ [\varepsilon^{-2}Q + \varepsilon^{-1}Q_1 + Q_2](\varphi + \varepsilon\varphi_1 + \varepsilon^2\varphi_2) = \widehat Q_0\varphi + \varepsilon\theta^\varepsilon, \tag{5.23} \]
is realized by the vectors determined by the equations:
\[ \varphi_1 = R_0Q_1\varphi, \tag{5.24} \]
\[ \varphi_2 = R_0[Q_0 - \widehat Q_0]\varphi, \tag{5.25} \]
\[ \theta^\varepsilon = Q_1\varphi_2 + Q_2\varphi_1 + \varepsilon Q_2\varphi_2. \tag{5.26} \]

PROOF. From (5.23), we get:
\[ Q\varphi = 0,\qquad Q\varphi_1 + Q_1\varphi = 0,\qquad Q\varphi_2 + Q_1\varphi_1 + Q_2\varphi = \psi. \tag{5.27} \]
The first equation in (5.27) gives $\varphi\in N_Q$. The second equation together with the balance condition (5.22) gives (5.24). Rewriting the third equation using (5.24) we get
\[ Q\varphi_2 + Q_0\varphi = \psi. \]
The solvability condition for this equation gives
\[ \widehat\psi = \widehat Q_0\widehat\varphi, \]
that is the main part in (5.23). □
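As with Proposition 5.1, the representation (5.23) becomes an exact algebraic identity once the correctors are chosen as above. A minimal numerical sketch (the matrices $Q$, $M$, $Q_2$ are arbitrary illustrative choices, and $Q_1$ is built from $M$ so that the balance condition holds):

import numpy as np

# Ergodic generator Q, a perturbation Q1 satisfying Pi Q1 Pi = 0, and Q2.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 2.0,  1.0, -3.0]])
M = np.array([[ 0.5, -1.0,  0.2],
              [ 0.3,  0.4, -0.6],
              [-0.2,  0.1,  0.7]])
Q2 = np.array([[0.1,  0.2, -0.1],
               [0.0, -0.3,  0.2],
               [0.4,  0.1,  0.3]])

pi = np.linalg.lstsq(np.vstack([Q.T, np.ones(3)]),
                     np.r_[np.zeros(3), 1.0], rcond=None)[0]
Pi = np.tile(pi, (3, 1))
R0 = np.linalg.inv(Pi - Q) - Pi

one = np.ones(3)
Q1 = M - (pi @ (M @ one)) * np.eye(3)        # enforces Pi Q1 Pi = 0

phi = one                                     # phi in N_Q
phi1 = R0 @ (Q1 @ phi)                        # corrector (5.24)
Q0phi = Q2 @ phi + Q1 @ (R0 @ (Q1 @ phi))     # Q0 = Q2 + Q1 R0 Q1 applied to phi
phi2 = R0 @ (Q0phi - Pi @ Q0phi)              # second-order corrector

for eps in (0.5, 0.1, 0.02):
    lhs = (Q / eps**2 + Q1 / eps + Q2) @ (phi + eps * phi1 + eps**2 * phi2)
    theta = Q1 @ phi2 + Q2 @ phi1 + eps * (Q2 @ phi2)
    rhs = Pi @ Q0phi + eps * theta            # hat Q_0 phi + eps * theta^eps
    assert np.allclose(lhs, rhs)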
Remark 5.1. Proposition 5.2 is valid for a closed operator $Q$ with the common dense domain $B_0$. In this case, the potential operator $R_0$ is a closed densely defined operator [116].
In various situations the contracted operator $\widehat Q_1$ is reducible-invertible [116]. The phase merging principles constructed in the next sections are based on a solution of the singular perturbation problem for an asymptotic representation of a singularly perturbed operator with a remaining term.
Proposition 5.3 Under the conditions of Proposition 5.1, assume that the contracted operator 01 is reducible-invertible with null-space c J%Q
f l ~ ~
and projection operator fi o n SQ. The operator Q2, under the conditions of Proposition 5.2 has a twice contracted non-zero operator Q2 o n NQ,, determined by the relations: h h
h
(5.28)
Then the asymptotic representation
V2Q + 6%
+ Q ~ ] ( P+
EW
+~
+
~= Q2cp p ~E e E ,)
(5.29)
i s realized by the vectors given by the relations: cp E NQ,
-
(5.30)
(pi = 2 0 6 2 P
oE = [Qi+ ~ Q 2 j c p 2+ Q 2 ~ 1 where
& is the potential of the reducible-invertible operator 6 1 ,
0, := (32 - 6 , .
(5.32) and
A
PROOF. From (5.29), we obtain:
\[ Q\varphi = 0, \tag{5.33} \]
\[ Q\varphi_1 + Q_1\varphi = 0, \tag{5.34} \]
\[ Q\varphi_2 + Q_1\varphi_1 + Q_2\varphi = \widehat{\widehat Q}_2\varphi, \tag{5.35} \]
\[ (Q_1 + \varepsilon Q_2)\varphi_2 + Q_2\varphi_1 = \theta^\varepsilon. \tag{5.36} \]
By (5.33), we get that cp E NQ.The solvability condition for (5.34) gives
nQincp= 0, A
or, after contraction on the subspace NQ,
ola = 0,
(5.37)
#a,.
that is @ E Writing the solvability condition for from (5.35)
cp2
by using projector I'I, we obtain
h
SlPl+ 6 2 P = 6 2 v , or in another form A A
6iFi = - Q ~ ' P . Now, the solvability condition for
@I
holds because of relation (5.28).
0,
The solvability condition for the operator is verified. Hence, the vector @I is defined by (5.30). Now, (5.35), for the vector cp2, can be written h
QCPZ = -[Q2 - 6
2 1~ Qi(~1.
So that the solution (PZ is represented by (5.31), and (5.32) is obvious.
A solution of the singular perturbation problem (5.29) is trivial if the twice contracted operator Qz is null. However, there are cases of non trivial A A
solution of singular perturbation problem under this condition. Proposition 5.4 Let the bounded operator Q o n the Banach space B be reducible-invertible with projector Il and potential &. Assume that: (1) the operators Q1, Q2 and Q3 are closed with common domain Bo dense
in B;
01
on the null-space .h'~ is reducible-invertible with projector 6 and potential &;
(2) the contracted operator
0124)
=
=
fi - I,
A
A
(3) the twice contracted operator $\widehat{\widehat Q}_2$ is null.
Then the asymptotic representation
\[ [\varepsilon^{-3}Q + \varepsilon^{-2}Q_1 + \varepsilon^{-1}Q_2 + Q_3](\varphi + \varepsilon\varphi_1 + \varepsilon^2\varphi_2 + \varepsilon^3\varphi_3) = \widehat{\widehat Q}_0\varphi + \varepsilon\theta^\varepsilon\varphi, \tag{5.38} \]
is realized by the contracted operator:
00:= 03 + Q 2 h Q 2 , 0ofi= HQon, h
A
A
A
A
A
A
(5.39)
A
on the null-space $N_{\widehat Q_1}$ and by the following vectors:
\[ \varphi_1 = \widehat R_0\widehat Q_2\varphi, \tag{5.40} \]
the vector $\varphi_2$ given by (5.41),
\[ \varphi_3 = R_0\big[[Q_0 - \widehat Q_0] + Q_1\widehat R_0\widehat Q_0\big]\varphi, \tag{5.42} \]
with the negligible term
\[ \theta^\varepsilon = Q_3\varphi_1 + (Q_2 + \varepsilon Q_3)\varphi_2 + (Q_1 + \varepsilon Q_2 + \varepsilon^2Q_3)\varphi_3. \tag{5.43} \]
PROOF. Comparing the coefficients, with respect to the degrees of the parameter $\varepsilon$, of the expansion of the left-hand side of (5.38) with those of the right-hand side, we get the following relations:
\[ Q\varphi = 0, \tag{5.44} \]
\[ Q\varphi_1 + Q_1\varphi = 0, \tag{5.45} \]
\[ Q\varphi_2 + Q_1\varphi_1 + Q_2\varphi = 0, \tag{5.46} \]
\[ Q\varphi_3 + Q_1\varphi_2 + Q_2\varphi_1 + Q_3\varphi = \psi. \tag{5.47} \]
From (5.44) we get cp E NQ. Equation (5.45) gives cp E N Q ~ Hence . Q 1 p = 0, Qcpl = 0, that is (pi E NQ.Let us now investigate Equation (5.46). The solvability condition is
+ Q2cpI = 0,
n[Qipi
or, in another form (5.48)
Qlcpl+ Q2cp = 01
from which we get cp1
(5.49)
= 2OQ2V.
The solvability condition for (5.48) is by Assumption (3), h h
fiQ2cp = Q2cp = 0.
Hence, the vector cp1 is defined by Equation (5.40). Finally, the solvability condition for Equation (5.47) is
+ Q3cp = q,
Qi(p2 +&(pi
This equation for the vector
cp2
$J E NQ.
has t o satisfy the solvability condition
or, in another form, using the representation of the vector cp1,
That is the representation of the main part of the asymptotic relation (5.38), with the contracted operator in (5.39). 0
60
Now the singular perturbation problem can be solved under some additional assumption on the perturbing operators Q1, Q2, .... We propose only the next result.
Proposition 5.5 Let the bounded operator $Q$ on the Banach space $B$ be reducible-invertible with projector $\Pi$ and potential $R_0$. Assume that:
(1) the operators $Q_k$, $k=1,2,3,4$, are closed with the common domain $B_0$ dense in $B$;
(2) the contracted operator $\widehat Q_1$ on the null-space $\widehat N_Q$ is reducible-invertible with projector $\widehat\Pi$ and potential $\widehat R_0$;
(3) the twice contracted operator $\widehat{\widehat Q}_2$ on the null-space $N_{\widehat Q_1}$ is reducible-invertible with projector $\widehat{\widehat\Pi}$ and potential $\widehat{\widehat R}_0$.
Then the asymptotic representation
\[ [\varepsilon^{-4}Q + \varepsilon^{-3}Q_1 + \varepsilon^{-2}Q_2 + \varepsilon^{-1}Q_3 + Q_4](\varphi + \varepsilon\varphi_1 + \varepsilon^2\varphi_2 + \varepsilon^3\varphi_3 + \varepsilon^4\varphi_4) = \widehat{\widehat Q}_0\varphi + \theta^\varepsilon\varphi, \tag{5.50} \]
is realized with the contracted operator
\[ \widehat{\widehat Q}_0 := \widehat{\widehat Q}_4 + \widehat{\widehat Q}_3\,\widehat{\widehat R}_0\,\widehat{\widehat Q}_3, \]
on the null space $N_{\widehat{\widehat Q}_2}$, and with the negligible term
\[ \|\theta^\varepsilon\varphi\| \to 0,\qquad \text{as } \varepsilon\to0. \]
PROOF. Similarly to the proof of Proposition 5.4, the last step is obtained from the corresponding final equation; the solvability condition for this equation gives the statement of Proposition 5.5. □

The formulated Propositions 5.1–5.5 have an adequate interpretation as phase merging principles for the stochastic systems considered in the next sections.
5.3
Average Merging Principle
Averaging is an important step in stochastic approximation of systems. We present in this section averaging results for switched stochastic systems: stochastic evolutionary systems, additive functionals, random evolu-
tions, where the switching semi-Markov or Markov process is time-scaled by $\varepsilon^{-1}$.
5.3.1
Stochastic Evolutionary Systems
First, the stochastic evolutionary system with the switching ergodic Markov process in the series scheme is considered (see Section 3.3.1)
(5.51)
I U"(0) = u.
The switching Markov process z(t),t 2 0 , is defined by the generator
(5.52) The stationary distribution n(dz) of the ergodic process with generator (5.52) defines the projector rIp(2) = ?l(z),
@ :=
J,n(dz)cp(z), l(z) = 1.
(5.53)
The velocity function g ( u ; z), u E I t d , z E E , is supposed to provide the global solution of the deterministic evolutionary equations:
$L(t) UJO)
= g(Udt);
, XEE.
(5.54)
=1 '1
The coupled Markov process U"(t),z"(t):= z ( t / ~ ) ,2t 0 , can be characterized by the generator (see Proposition 3.3)
+ r ( z ) I d ~z),,
LEv(u,x) = [&-lQ
(5.55)
where the generator lI'(z),z E E , is defined by the relation r(z)(P(u)= g ( u ; z)v'(u).
(5.56)
Note that the generator (5.56) is induced by the family of semigroups
rt(z)cp(u):= v(Uz(t)), U m = 1' 1.
(5.57)
The average merging principle in ergodic merging scheme is realized by a solution of the singular perturbation problem for the generator (5.55).
Proposition 5.6 The solution of the singular perturbation problem for generator (5.55) is given by the relation ]L~(P€(U,Z =)
FV(U) + EeE(Z)V(U),
(5.58)
on the perturbed functions cp"(u,z) = p(u) t E ( P ~ ( u , zwith ) , cp E Ci(Rd). The average operator 5 is determined by the relations &+)
= Z(.)cp'(u>,
A
du) :=
J,4 d z ) d u ; z ) .
(5.59)
The remaining term O E ( x ) is defined by
e y z ) = r(z)~,,jF(z), iF(z) := r ( ~ - r. ) A
(5.60)
It is worth noticing that the average operator (5.59) provides the average evolutionary system
%(t) dt
=
Z(C(t)),
that is exactly as in Corollary 3.3. PROOF. According to the solution of the singular perturbation problem, given in Proposition 5.1, we obtain (5.58) and (5.60) with the average operator f,given by the relation r I I r ( Z ) r I ( P ( U ) = rIIF(z)cp(u).
(5.61)
Now calculate the average operator in (5.61):
nw$&w
=
rw-Mu)
= b ( u ;4 ( P ' ( 4 = Z('IL)(PW A
= lr(P(u).
The average velocity is defined in (5.59). 5.3.2
0
Stochastic Additive Functionals
The average phase merging principle for the stochastic additive functionals (see Section 3.3.1) (5.62)
with switching ergodic semi-Markov process z ( t ) t, 2 0, on the standard phase space ( E ,E ) , given by the semi-Markov kernel Q(z,B , t ) = P ( z ,B)F,(t),
z E E ,B E E , t
2 0,
(5.63)
is realized by a solution of the singular perturbation problem for the compensating operator of the extended Markov renewal process represented in the following asymptotic forms (see Proposition 3.1): ILE'p(u,z) =
+ T(z)P+ + 8F(z)]q.
~e;(~)]'p
= [e-'Q
(5.64)
The generator Q of the associated Markov process is defined by (5.52) with the intensity functions:
1
00
q(z) = l/m(z),
m ( z ):=
q = l/m,
m :=
F,(t)dt,
z E E,
p(dz)m(z).
The generators lC'(z),z E E , are defined by relation (5.56). The remaining terms ei(z), k = 1 , 2 , are given by the following relations (see Proposition 3.1) (5.65)
+
Here l?,(z) := T(z) ~-y&(z).The remaining term -yE(z)is represented in (3.36).
Proposition 5.7 The solution of the singular perturbation problem f o r the operator (5.64) is given b y the relation I L ~ P ~ ( U , Z= )
F'c~(u) + Ee;(z)p(u),
(5.66)
o n the perturbed test functions 'p"(u,z) = p(u) f E ( P ~ ( U , Z ) , with 'p E C;(Rd).The average operator f is determined by relation (5.59). The remaining term ef(z) is defined by ef(z) = e:(z) where f'(z)
:= r ( z ) -
F.
+ e:(z)R0fyz),
(5.67)
CHAPTER 5. PHASE MERGING PRINCIPLES
154
It is worth noticing that the average merging principle gives the same result (5.59) for Markov and semi-Markov switching. In the semi-Markov case, the difference lies only in the definition of the intensity function q(z) = l/m(z), and the remaining terms. PROOF. According to the solution of singular perturbation problem, given in Proposition 5.1, we obtain (5.66) with the average operator f', given by the relation:
The remaining operator ef(x) is calculated as follows:
IL"cp" = [ E - ~ Q
+ ]r(z)P+ ee;(~)]cp(u)+ E [ E - ~ Q + O~(z)]cpl(u,x)
= fcp(u>
+ G(z)cp('1L)+ W ) c p 1 ( u , 4 .
Hence, due to the relation = &f'(s)cp, we get (5.67). Now, the calculation of the average operator in (5.68) gives us the same result (5.59) as in the case of the Markov switching, for the stochastic evolutionary system (5.51). 0
Remark 5.2. The average merging principle for stochastic evolutionary systems with semi-Markov switching can be represented as a corollary of Proposition 5.7, formulated for the stochastic additive functionals. 5.3.3
Increment Processes
The increment process in the series scheme with the semi-Markov switching, associated to the jump random evolution, considered in Section 3.2.2, is defined by the family of bounded operators DE(z),x E E ,
D'(z)cp(~) := cp(u
+
EU(X)),
x E E,
(5.69)
which has the asymptotic expansion (3.21) on the test functions cp E Cg (I@), that is
D"(z) = I
+ E D ( Z ) + Df(z).
The definition (5.69) provides that
(5.70)
The compensating operator of the extended Markov renewal process
is represented on the test functions cp(u,z) as follows, (see (3.23)-(3.25)):
where
DE(z)= D(z) + D?(z),
(5.72)
with the negligible term
The average merging principle in the ergodic merged scheme can be obtained by using Proposition 5.1 for the truncated compensating operator
Considering the truncated operator (5.73) on the perturbed test functions @(ti, z) = cp(u) E ( P ~ ( u ,z), with cp E C,"(Rd), we get, by Proposition 5.1,
+
where the negligible term is &j(z)(p(u)= QoD(z)cp1(u), or, in explicit form:
The average operator
60is determined by the relation 6oII = IIQoD(z)II.
(5.75)
Let us calculate the average operator
60:
where:
0 .
Hence, the average operator Do is represented as follows
that is exactly the result of Theorem 3.2. 5.3.4
Continuous Random Evolutions
The continuous random evolution with semi-Markov switching in the average approximation scheme is given by a solution of the evolutionary equation in the Banach space C(Wd)(see Section 3.2.1),
$@“(t)= I r ( Z ( t / & ) ) W ( t ) , (5.76) W(0) = I ,
with a given family of generators I’(z),z E Elgenerating the semigroups rt(+ 2 o , E~E. The coupled random evolution (see Section 2.7)
on the Banach space C(Wdx E ) can be characterized by the compensating
5.3. AVERAGE MERGING PRINCIPLE
157
operator of the extended Markov renewal process (Section 3.2.1)
(5.78) The factors "E-'" and "ES" concern the fast time-scaling in (5.76). The average merging principle in ergodic merging scheme for the continuous random evolutions with semi-Markov switching in the series scheme can be obtained by using Proposition 5.1 for solving the singular perturbation problem for the truncated compensating operator --E
IL
=E - ~ Q
+ K'(z)P.
(5.79)
According to Proposition 5.1, the operator (5.79) on the perturbed test functions cp"(u,z) = p(u) E ( P ~ ( u , ~ with ), cp E Bo, dense in C(Rd), has the representation
+
--E
L @(u, z) = &(u)
+
(5.80)
EBE(rc)cp(U),
where OE(z)cp(u) = r(z)&Pii?(z)cp(~), and f'(z) := r ( z ) - ?. The limit average generator is defined by
?rI
= IIIr(2)PrI= rIH'(2)rI.
(5.81)
Hence A
H' =
S,
T(dz)r(z).
(5.82)
The remaining term P ( z ) is supposed to be bounded on Bo, that is,
Ile"(z)(~IlI b < +m, 5.3.5
cp E Bo.
Jump Random Evolutions
The jump random evolution in the average merging scheme with semiMarkov switching is determined by a solution of the difference equation (see Section 2.7.2) on the Banach space C(Wd)
a"(.:) and
=
[ D E ( Z i) I]@"(T;),
72
2 0,
@"(Ti) =
= I , (5.83)
given by the family of the bounded operators lW(x),z E E . The coupled jump random evolution
on the Banach space C(Rd x E ) functions ~ ( ux), , u E Rd,z E E , can be characterized by the compensating operator (see Section 3.2.2)
(5.86) The main assumption in what follows is that the family of bounded operators DE(x),x E E , has the following asymptotic expansion on the test functions 'p E Bo, dense in C(Rd),
DE(x)= I
+
+ Dt(x),
&ID(Z)
(5.87)
with the family of generators D(x),x E E , having common dense domain of definition on Bo. The negligible term is supposed to satisfy (5.88)
The average merging principle in ergodic merging scheme for the jump random evolution with semi-Markov switching in series scheme can be obtained by using Proposition 5.1, applied to the truncated operator (see (5.73)).
JLE
= &-'Q
+ QOD(Z).
(5.89)
According to Proposition 5.1, the generator (5.89) on the perturbed test functions 'pE(u,z) = 'p(u) c'p1(u,z), has the representation
+
+
L ~ ~X) =~ ( ~ efi(u,x). , The limit average generator
(5.90)
6is determined by the relation (5.91)
Taking into account that Qocp(z) = q(z)Pcp(z),we'calculate:
Hence
(5.92) 5.3.6
Random Evolutions with Markov Switching
The average merging principle for the continuous random evolution with Markov switching can be obtained by using Proposition 5.1 applied t o the generator
IL" = E - ~ Q + lr(z),
(5.93)
characterizing the coupled random evolution (see Section 3.2.1, Proposition 3.3)
W ( t ,z " ( t ) )= W ( t ) ( P ( U ,z ( t / c ) ) .
(5.94)
According t o Proposition 5.1, the average generator is defined by the relation
ell that is
r= h
= IIlr(z)II,
s,
r(dz)F(z).
(5.95)
(5.96)
The jump random evolution with Markov switching in average scheme is characterized by the generator (see Section 3.2.2, Proposition 3.4)
ILZ, = E - ~ [ Q + Qo(DE(z)- I ) ] , where, as usual, Qocp(z)= q ( z ) P p ( z ) .
(5.97)
The main assumption in the asymptotic expansion, as E DE(z) - I = ED(2)
+ D;(z).
-+0,
is
(5.98)
The negligible term is supposed to satisfy
on cp E Bo. The average phase merging principle for the jump random evolution with Markov switching can be obtained by using Proposition 5.1, applied to the truncated generator
IL; = E - ~ Q + QoD(z).
(5.99)
The average generator is determined from the relation
6II = IIQ,D(z)II.
(5.100)
Let us calculate:
Hence
(5.101)
5.4
Diffusion Approximation Principle
In this section we verify the algorithms of diffusion approximation for the stochastic systems in ergodic merging scheme formulated in Section 3.4 (Theorems 3.3-3.5, and Corollaries 3.5-3.7 ). This is obtained by using solution of the singular perturbation problem given in Proposition 5.2, Section 5.2.
5.4.1
Stochastic Integral Functionals
Let us show that the diffusion approximation principle is satisfied by the stochastic integral functionals in series scheme with accelerated ergodic Markov switching (see Section 3.4.1)
cu"(t)= a0
+
1 t
a,(z(s/E2))ds, t 2 0.
(5.102)
The velocity function is supposed t o depend on the series parameter follows
a"(z)= & - l a ( z )
+ a1(z),
z E E.
E
as
(5.103)
The first term of the right hand side in (5.103) satisfies the balance condition
(5.104) The switching process z ( t ) , t 2 0 , is supposed to be Markovian and given by the generator
(5.105) The coupled Markov process aY"(t),zC"(t) := z(t/~~),t 2 0 , can be determined by the generator (see Section 3.4.1)
LEcp(u,z) = [ E - ~ Q
+ & ( z ) l ~ ( u21, ,
(5.106)
where the family of generators A,(z),x E El is defined by the velocity (5.103) (see Section 3.2.1, Proposition 3.3): &(z)cp(u) = a'(z)cpW
+
= &-la(z)(p'(u> a1(z)(p'(u) =
[E-lA(z)
+ A1(z)]cp(u),
(5.107)
where:
A(z)cp(u) = a(z)cp'(u), and Al(z)cp(u) = al(z)cp'(u). (5.108) The balance condition (5.104) can be expressed in terms of the projector
II of the generator Q of the switching uniformly ergodic Markov process z(t),t2 0 , as in Theorem 3.3.
The limit generator IL in Corollary 3.5, obtained by a solution of the singular perturbation problem for the generator (5.106):
+ + = I L ~ (+~ eyu, ) x),
+
ILEvE(u, Z) = [C2Q E - ~ A ( x )A ~ ( z ) ] [ ( P ( ~ )E ( P ~ ( u ,x)
+ E ~ P Z ( x)] ~ , (5.109)
has the following form (see Proposition 5.2, Section 5.2)
lL = I I A ~ ( z ) I I+ rIh(~)RoA(z)rI,
(5.110)
with negligible term P ( u ,x)
and where & is the potential of Q (Section 1.6). Now, we calculate using representation (5.107):
& W ( u ) = l-&(z)Wu) = ITAl(Z)P(.) = rIa1 ( W ( u )
= alP/(U),
where a1 := SE7r(dz)a~(x). By a similar calculus, we have:
l-IA(z)RoA(z)ncp(u)= nA(Z)RoA(Z)m(u) = ~A(Z)RoA(Z)P(u) = ~A(z)Roa(z)(p'(u) = nu(Z) &a( Z)(p'I (u) 1
2
/I
= ZOOP (u),
where
a; := 2
s,
7r(dz)a0(x), ao(x) := a(z)Roa(x).
Note that actually a: 2 0 (see Appendix C). Therefore, the limit generator IL in Corollary 3.5 is represented by b ( u ) = a1cpYu)
+ ZaoP 1 2
I/
(u>,
(5.111)
that is the limit diffusion process is
ao(t)= a0
+ a l t + aow(t),
t 2 0,
(5.112)
exactly as in Corollary 3.5.
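For a finite switching state space the limit coefficients in (5.111)–(5.112) can be computed directly. A minimal numerical sketch (not from the text; the generator and the velocity values below are arbitrary illustrative choices, with a re-centered so that the balance condition $\int\pi(dx)\,a(x)=0$ holds):

import numpy as np

# Illustrative generator Q of the switching Markov process and velocities
# a1(x), a(x); a is adjusted below so that the balance condition pi a = 0 holds.
Q = np.array([[-1.0,  0.7,  0.3],
              [ 0.4, -0.9,  0.5],
              [ 0.6,  0.4, -1.0]])
a1 = np.array([0.3, -0.2, 0.1])
a  = np.array([1.0, -0.5, 0.2])

pi = np.linalg.lstsq(np.vstack([Q.T, np.ones(3)]),
                     np.r_[np.zeros(3), 1.0], rcond=None)[0]
a = a - pi @ a                          # enforce the balance condition pi a = 0

Pi = np.tile(pi, (3, 1))
R0 = np.linalg.inv(Pi - Q) - Pi         # potential operator of Q

drift = pi @ a1                         # hat a_1 = int pi(dx) a_1(x)
sigma0_sq = 2.0 * pi @ (a * (R0 @ a))   # a_0^2 = 2 int pi(dx) a(x) R0 a(x) >= 0
print(drift, sigma0_sq)                 # limit: alpha_0 + drift*t + sqrt(sigma0_sq)*w(t)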
The proof of Theorem 3.3 is based on the representation of the stochastic integral functional (5.102) by the associated semi-Markov random evolution given by a solution of the evolutional equation (3.6). The corresponding family of generators IC"(z),z E E , in (3.7), is given in (5.107), that is ly'(z) = &(z). Here A:(%), s 2 0 , z E E , is the family of semigroups, determined by the evolutions az(s) = u
+sa,(~),
s
2 O,Z
E E,
(5.113)
that is
A:(x)(P(U)= cp(a:(s)>.
(5.114)
The compensating operator of the extended Markov renewal process
xi,
a: = Q'(T;),
T:
= E 2T
~ ,n
2 0.
(5.115)
is given by relations (3.8)-(3.9), that is
[ 1 F,(ds)A:,,(z)Pp(u, co
IL'cp(u, x) := e-2q(z)
x) - p(u, z)] . (5.116)
0
Now we can use the asymptotic representation (3.10) given in Proposition 3.2 with obvious changes r(z) = A(z), and lyl(z) = A,(z). 0 Proposition 5.8 The generator IL of the limit diffusion process in Theorem 3.3 is calculated by a solution of the singular perturbation problem for the truncated generator
IL$ = E-'Q
+ E-'A(z)P + Qz(z)P,
as follows
lL = IIQ:!(z)II + IIA(z)P&A(z)PII, where
Qdz)= Ai(z) + p2(z)A2(z),
P Z ( ~=) mz(z)/2m(z).
(5.117)
The operator & is the potential of Q (see Section 1.6)
Q&
zz
&Q = I2 - I,
(5.118)
or, equivalently
q[P - I]& = rI - I. Hence (5.119)
PRO = & + r n ( r I - I ) , where q(z) = l / m ( z). PROOF. We calculate:
where: a1
:=
s,
7r(dz)al(z), and B1 := 2
s,
7r(dz)p2(z)u2(z). (5.120)
Next:
A(X )'p(u) = IIA(z) P&u (Z)cp' (U )
IIA(X )P &A( Z)PIIp(U ) = IIA (Z)P&
= rIa(z)P&a(z)cp'(u)
= nu(z)&a( z)(p"( u)- rIrn(z)a2(z)cp" (u),
(by using (5.119))
where, by definition :
Bo := 2
s,
7r(dz)ao(z)/rn, ao(z) := u(z)&a(z),
Hence, we get the following representation of the limit generator W ( U ) = alcp'(4
1 + pcp"(u),
where:
+
al := L n ( d z ) a l ( z ) , B := BO BOO,
with:
Bo := 2
s,
7r(dz)ao(z),
p ( z ) := [m&)
Boo
:=
s,
7r(dz)p(z)a2(z),
- 2m2(z)]/m(z).
Note that p(z) can be seen as a distance from exponential distributions of F,(t) of sojourn times (see Remark 3.3, page 83). 5.4.2
Continuous Random Evolutions
The diffusion approximation principle in ergodic merging scheme can be verified for a continuous random evolution in the series scheme with accelerated semi-Markov switching given by a solution of the evolutionary equation (Section 3.2.1)
$@"(t)= ] r " ( Z ( t / & 2 ) ) @ " ( t ) , t 2 0, (5.121) P(0)
=I,
on the Banach space B , with the given family of generators K"(z),z E E , of the semigroup rg(z),t 2 0, z E E. The generators lr"(z),z E E , have the following form K',(z) = E - ' ~ ( z )
+ K',(z),
5
E E.
(5.122)
In what follows, the generators r(z) and rl(z),z E E , are supposed to have a common domain of definition Bo,dense in B. The couppled random evolution (Section 2.7.3, Definition 2.11) W ( t ,Z ( t / E 2 ) ) := W ( t ) c p ( U , Z ( t / C 2 ) > ,
z(0) = z,
can be characterized by the compensating operator (Section 3.2.1)
L“cp(v,
[/
/
t
= &-2q(z)
4.
F3C(dS)r:ZS(z) P ( z ,dY)cp(.u,Y) - 4% E
0
(5.123) The factor “ E - ~ ” corresponds to the accelerating time-scaling of the switched semi-Markov process z(t),t 2 0, in (5.121), given by the semiMarkov kernel
Q(z,dy, ds) = P ( z ,d~)F,(ds). The fast time-scaling “ E ~ ” of the semigroup in (5.123) provides a diffusion approximation of the increments under the balance condition for the first term in (3.10),
rIIr(z)rI= 0,
(5.124)
where rI is the projector of the associated ergodic Markov process z o ( t ) ,t 2 0, defined by the generator
(5.125) with
The key problem in the diffusion approximation is t o construct an asymptotic representation for the compensating operator (5.123), by using Proposition 5.2 and Assumptions (5.122) and (5.124). The diffusion approximation principle for the semi-Markov continuous random evolution in the series scheme (5.121) with the switching ergodic semi-Markov process z(t),t 2 0, is realized by a solution of the singular perturbation problem for the truncated operator (see Proposition 5.2)
The limit generator is given by
or, in our case
where:
Qi(z)cp(u,z) := r(z)cp(u, 21, Q z ( z ) ~ (z) u ,= [ri(z)+ ~ 2 ( z ) ~ ~ ( z ) Iz). c~(u, Let us now compute the limit generator in explicit form. The first term in (5.127) gives:
nQ2(z)n= nri(z) + p 2 ( z ) r 2 ( z ) n = (F1+FOl)rI,
(5.128)
where, by definition:
Recall that the potential operator & satisfies the equation (see Section 1.6):
Q& = RoQ =
- I , Q = g[P - I ] .
Hence,
P&=Ro+rn[rI-I]. Next, we calculate the second term in (5.127):
rIr(z)P&s(z)l-I= rIIr(z)RoIT(z)rI- rIrn(z)s"z)n =
( f o - Fo2)n,
where, by definition, (5.129) Gathering all the above calculations we get
IL = $0
+el,
+&
(5.130)
where, by definition: h
A
h
r o o := r01 - lr02 =
s,
7r(dz)p(z)r2(z),
p ( z ) := [rnz(z)- 2m2(z)]/rn(z).
(5.131) (5.132)
It is worth noticing that the formulas (5.127)-(5.132) give the preliminary “blank-cheque” for constructing the limit generator in the diffusion approximation scheme for stochastic systems with ergodic semi-Markov switching considered in Section 3.4. The generators of the limit diffusion processes are constructed by formulas (5.129)-(5.132) with the following generators of the corresponding continuous random evolutions (Section 3.4). 1. In Theorem 3.4:
2. In Corollary 3.6:
3. In corollary 3.7:
Now, the generators of the limit diffusion processes for stochastic systems in Section 3.4 can be calculated in explicit form by formulas (5.129)and First, we calculate the generator for stochastic evolutionary system (Section 3.4.3, Corollary 3.7):
where, as in Theorem 3.4:
Next, we calculate:
Now, we get, by using (5.129)-(5.132):
where, by definition:
(5.137) and
(5.138) Gathering (5.136)-(5.138),we obtain the generator of the limit diffusion process in Corollary 3.7. Analogous calculation can be done for the stochastic additive functionals considered in Sections 3.4.2 (Theorem 3.4).
5.4.3
Jump Random Evolutions
The increment process in the series scheme with the semi-Markov switching, considered in Section 3.4.4, in the diffusion approximation scheme, is here considered with the following accelerated scaling 4tlE2)
P(t)= P O
as(xk),
+&
t 2 0.
(5.139)
k=l
The values of jumps are defined by the bounded deterministic function
u,(x),x E E , which takes values in the Euclidean space Rd, and has the following representation a&)
= u(z)
+ EUl(5).
(5.140)
The first term satisfies the balance condition (5.141) where p(dx) is the stationary distribution of the embedded Markov chain x,, n 2 0. To verify the algorithm of diffusion approximation formulated in Theorem 3.5, the associated jump random evolution is considered (Section 3.2.2), defined by the family of bounded operators
+
DE(x)cp(u):= cp(u EaE(x)),
2
E El
(5.142)
given on the test functions cp E C,3(Wd). By using representation (5.140), the following asymptotic expansion is valid := I
+
+ E ~ D ~ +( x )
E2eE(X),
(5.143)
where, by definition:
and
Proposition 5.9 The diffusion approximation principle for the semiMarkov jump random evolution in the series scheme with the switching ergodic semi-Markov process, satisfying Conditions D1-D3 of Theorem 3.3 (Section 3.4.1), and the family of jump operators IDE(x),xE E , is realized b y a solution of the singular perturbation problem for the truncated operator
ILg = E - ~ Q
+ E - ~ Q o D ( x+)QoDl(x).
(5.147)
PROOF. The proof is obtained by using the asymptotic representation (3.30) of the compensating operator (3.29) for the jump random evolution (3.26)(3.28).
Considering the operator (5.147) on the perturbed test functions p‘(u,z) = ‘ ~ ( ~ ) + E ( P ~ ( u , z ) + E ~ ’with ~ ( ~ ‘p , zE) Ci(Rd) , we get, by Proposition 5.2,
with the negligible term
llee(z)cplI
+
0,
E
-+
0,
‘p E
caw.
The limit operator lL is calculated by the formula (see Proposition 5.2)
ILII = ~ Q o D(z)n I + nQ0D(z)&QoD(5)IT7 where Ro is the potential of operator Q (see Section 5.2). Now we calculate using representation (5.144)-(5.145) and Qo’p(z) = q(x)P’p(x):
ILin‘p = nQo&(z)n‘p(u) = nQob( 5 M u ) 1
= nQ oa i(z)d (u ) -t 5 n Qo a2 (z)d r(u )
Hence
Here (compare with Theorem 3.5)
Next, the operator is calculated as follows:
Hence
where, by definition,
with:
b(z):= P a ( z ) =
L
P(z,dy)a(y),
that is exactly as in Theorem 3.5. Note that according to Appendix C, cl: 2 0. 5.4.4
Random Evolutions with Markov Switching
The diffusion approximation principle for random evolution with Markov switching can be obtained from the results presented in Sections 5.3.2 and 5.3.3, for the semi-Markov random evolutions, by putting the “distance from exponential distribution” parameter p ( z ) equal to 0 (see Remark 3.3, p. 83). According to Propositions 5.8 and 5.9 we can formulate as a corollary the following result.
Proposition 5.10 The diffusion approximation principle for the Markov random evolution given b y a solution of the evolutionary equation (5.121), with the switching Markov process x ( t ) , t 2 0, defined by the generator (5.125), is realized by a solution of the singular perturbation problem for the generator (see Proposition 3.3)
The limit generator is given by (see Proposition 5.2)
Thus we obtain the preliminary “blank-cheque” t o construct the limit generator in diffusion approximation scheme for stochastic systems with ergodic Markov switching considered in Section 3.4 in the following form
where:
The diffusion approximation principle for jump random evolution with Markov switching coincides with the analogous one for the semi-Markov jump random evolution. We have t o keep in mind that, in case of switching Markov process, q(z)is the true intensity of the exponential distribution of renewal times. Hence, for example, the parameter is
without transformation from the equality
which was used in the case of switching semi-Markov processes.
5.5
Diffusion Approximation with Equilibrium
The main problem in constructing the diffusion approximation principle for stochastic systems with equilibrium considered in Section 3.5 is the representation of the generator of the centered and normalized process in a suitable asymptotic form. Certainly, the situations considered in Sections 3.5.1 and 3.5.2 are completely different. The centered and normalized process (3.69) is Markovian, which essentially simplify the problem. While, the centered and normalized process (3.81) has t o be extended t o the Markov process by two components: the deterministic shift-process ( ( t ) t, 2 0, defined by a solution of the evolutionary equation (3.80), and the switching Markov process z(t),t 2 0, defined by the generator (3.79). A
5.5.1
Locally Independent Increment Processes
The generator of the centered and normalized process (3.69) is constructed by using the generator (3.67) and the following relation (see (3.70)) EV"(t/E) = p
Lemma 5.1
+ EC"(t).
(5.148)
The generator of the Markov process
<"(t)= f ( t / E )
t 2 0,
- E-lp,
(5.149)
is represented as follows
+
(5.150)
q c p ( U ) = LE(P EU)(P(U),
with the generators lL"(p
+
EU)(P(U)
+
+
[(P(u E U ) - (P(u)]I',(~ EU,dw). (5.151)
= EJId
PROOF. The statement of Lemma 5.1 provides the equality (5.148) and the representation of the generator
of the initial stochastic system described by the Markov process with locally independent increments r f ( t ) ,t 1 0 , E > 0 (see Section 3.5.1). Now conditions (3.71)-(3.73) of Theorem 3.6 provide the following asymptotic representation of generator (5.150) qP(4
= Lo+)
+ @"Cp(U),
(5.153)
with the negligible term
pE(PII + 0,
E -+
0,
(P
E
c~(w~).
(5.154)
Here, the generator Lo, given by (3.74), defines the limit diffusion process <'(t),t 2 0, in Theorem 3.6. The verification of convergence (5.155) can be obtained by using the Skorokhod limit theorem (see Chapter 6, and Appendix A). It is worth noticing that in Theorem 3.7 the additional term 6(w)in diffusion coefficient, from the jump part of the stochastic additive functionals
(see (3.87)), is ignored in order to simplify the construction of the diffusion approximation principle for Markov stochastic systems with equilibrium (without balance condition). 5.5.2
Stochastic Additive Functionals
As was mentioned in the introduction of Section 5.4.4, the centered and normalized stochastic additive functional (see relation (3.81)) (5.156) with the time re-scaled switching Markov process is as follows (see (3.82))
(5.157) where the family of Markov processes q d ( t x), ; t by the generators (see (3.83))
2 0 ,E > 0, x
E
r&(x)cp('zL) = S&(Wx)cp'(u).
E , is given (5.158)
For simplification we dropped the remaining term in (3.83). The decisive step in the asymptotic analysis of the considered problem is the construction of the generator of the three component Markov process
r"(t), E^(t), x; Lemma 5.2
:= z(t/e2),
t 2 0.
(5.159)
The generator of the Markov process (5.159) is represented
as follows
+
+
IL;(P(u, v,x) = [ E - ~ Q r E ( v
EU;
x)]cp(u,v,x)
+ m c p ( U , 21,
(5.160)
Here Q is the generator of the switching Markov process x ( t ) , t 2 0, F(v)cp(v) := g(v)cp'(v).
(5.161)
The generator Ira is defined by
r&(v+ au;x)cp(u):= [g&(v+ E U ; x) - g(v)]cp'(U).
(5.162)
PROOF. The representation (5.162) provides the following equality (compare with (5.156))
F ( t )= a t , + E C E ( t ),
(5.163)
that is, under conditions:
r(t)= v , ["(t)= u, we have <"(t)= v
+~ u ,
(5.164)
which are, exactly those used in (5.162). The first and last terms on the right hand side of (5.160) are obtained by using independence of processes t ( t ) and x; from C ( t ) . The asymptotic representation (5.165) is obtained by using a solution of the singular perturbation problem in Proposition 5.2 for the truncated operator
+
lL3p(u,21, Z) = E - ~ Q + ~ - ~ Q i ( x )Q2(x), where:
Q I ( ~ ) v ( v) w = X u ;x)(~L(u, v), Q ~ ( ~ ) ( P v) ( u= , d u , v)cpl(u, v).
Z(Y x) := d u ;x) - ?(u),
Now the conditions of Theorem 3.7 (Section 3.5.2) and Proposition 5.2 provide the following asymptotic representation of the generator (5.160) h
q w ( u , 21, ). = W u ,
+ O"(z)cp(u,v), .
(5.165)
with the negligible term IlO'(~c>(PII
The generator
M u ,).
+o,
&
-+
0.
2 is given in Theorem 3.7 in the following form
= d ~ , + P L ( u ,).
1
+ p(4cp;u(u, ). + ?(v)cp:(.u,v),
that is the limit generator of diffusion process r ( t ) , t2 0, and the equilibrium process c(t),t 2 0. Representations (5.165), and (5.165) are obtained by using Condition D2' of Theorem 3.7. 5.5.3
Stochastic Evolutionary Systems with Semi-Markov Switching
The diffusion approximation principle for stochastic evolutionary systems with semi-Markov switching can be constructed as in the previous sections by using the suitable asymptotic form of the extended compensating operator of the centered and normalized process considered in Section 3.5.3.
We constructed the generators of the centered and normalized processes
G ( t ):= E-'[U,"(t)
- @ ( t ) ] ,t
2 0,
x
E
E,
where the evolutionary systems U,"(t),t 2 0 , x E E , are defined by a solution of the equations
d
-U,"(t) dt
z E E.
= a,(U,E(t);x),
Using the relation
+ECZ(t),
U,'(t) = @ ( t )
it is easy to obtain the generator of the two component process [ z ( t ) ,@ ( t )t ,2 0, that is
A"(v;z)cp(u,V)
+
= E-'&(v
~)cpd(u, v),
EU;
(5.166)
where, by definition,
-a,(w;X) := a,(v;
Z)
-qV).
Under the condition of Theorem 3.8, the following asymptotic representation of generator (5.166) holds
A'(v;
~ ) c p ( ~V), = E - ' ~ ( v ; z ) ( P ( u W) ,
+ AI(w;~)cp(u,W )
+eyV;X ) c p ( u , w).
(5.167)
By definition we have:
(5.168) i ( w ; z)cp(u) = q w ;.)cp'(u), A~(w;X)cp(u)= b(w,u;x)cp'(u),z(V;Z) := a ( w ; z )-Z(w), the generator of the average component f i ( t ) t, &)cp(V)
2 0, is the following (5.169)
= q.)cp'(.),
and the negligible remaining term is
IPb(v; X)cpll
-+
0,
E
0,
i
with cp E C2?l(IWd x, Rd). Let us now define the extended compensating operator.
Lemma 5.3 The extended compensating operator of the three-component process c(t),zZ:= x(t/E2),t2 0 , is determined by the relation
c(t),
(5.170) Here, the semigroups A: and &, t 2 0, are defined by the generators (5.166)-(5.168) and (5.169). The next step in the asymptotic analysis is to construct the asymptotic expansion of the compensating operator (5.170) with respect to E .
Lemma 5.4 The extended compensating operator (5.170) has the following asymptotic representation on the test functions $\varphi\in C^{3,2}(\mathbb{R}^d\times\mathbb{R}^d)$
(5.171)
with: (5.172) (5.173) (5.174) (5.175)
PROOF. The compensating operator is transformed as follows ILE = E - ~ Q
+
E - 2 q ( Z ) [ I F E ( z ) - I]P.
(5.176)
Now, the following algebraic identity is used
ab - 1 = ( a - 1)
+ ( b - 1)+ (a - l ) ( b - 1).
(5.177)
Setting a := Az,t(v; z), b := AE2t,
(5.178)
the terms in (5.176) with (5.177) and (5.178) are transformed by using the
integral equation for semigroup:
F,E(z):=
I
03
F,(dt)[AEzt(v; x) - I]
1
00
Fx(t)AEzt(v; x)dt
=
E ~ I " v) (z,
=
~~m(z)r"(z,v) + ~ ~ [ l I ".)I2( x ,
1
03
P,(t)AE,z,(w; x)dt
0
where:
F&(z) :=
I"
F,(t)AZz,(v;x)dt. -3
Taking into account (5.166) the following expansion is obtained
F,E(X)= E ~ ( ~ ) A , ( w , I c + ) E ~ m2 - [ A2(x) ,(W,~)]~
+ &2e:(z, v),
(5.179)
with the negligible term e:(x,v) := ~ [ r w~) ](~ ~ F :, ~(x),
on the test functions cp E C,"(Rd). Similarly, the asymptotic expansion can be obtained for the next two terms in (5.177)-(5.178)
1
w
F [ ( z ) := =
Fz(dt)[A,zt(v) -I]
E ~ v L ( ~ ) X+(E2eg(x), ~)
with the negligible term
1
w
Og(z) := [ X ( V ) ] ~ F & ( ZFi2(x) ), :=
on the test functions cp E
Ci(Rd).
F?'(t)&zt(v)dt,
(5.180)
Finally, we analyze the third term:
where:
Hence, by
we get
AND
with the negligible term e E b l ( x ) on the test functions 'p E C3(]Wd). Gathering expressions (5.179)-(5.181), the asymptotic extension (5.171) 0 for the compensating operator is obtained. The final step in the diffusion approximation principle for the stochastic systems with the semi-Markov switching is t o determine the limit generator in Theorem 3.8 calculated by using a solution of the singular perturbation problem for the truncated operator
IL;'p(u,
V , Z) =
[&-'Q + & - ' X ( V ; x ) P + Q ~ ( v~ ;) P ] ' p (21,ux). , (5.182)
According t o Proposition 5.2 we get the following result.
Lemma 5.5 (3.111)
A solution of singular perturbation problem for the generator v,
I L E ~ E ( ~ ,
=I
+ e;(.,
L~(~,
V , x),
(5.183)
on the test functions ' p E ( u , v , x= ) 'p(u,v)+ E ( P ~ ( u , w,x) +c2'p2(u,v1x),and negligible term Oe(ulv,x), is realized b y the generator IL given in Theorem 3.8, formulas (3.98)-(3.101).
PROOF. According to Proposition 5.2, Section 5.2, the limit generator in (5.183) is represented as follows
ILrI = rI&
z ) P & i ( t J ;2)PrI
+ rIlLo(v; x)PrI + rIX(v)PrI,
II is defined as follows
where the projector
Let us calculate:
By the definition of the potential operator Ro (Section 5.2), we have
QRo = &Q = II- I , or
q(z)[P- I]& = IT - I,
+
hence, PRO = & m(z)[II - I]. So, we can write:
IL1l-I = I I q v ; z)[& - m(z)I]Z(v; z)cp’(u) = JJX(v; z)RoE(v;~)cp’(u) - E(w; z)m(.)G*(~;~)cp“(u) = E ( w ; z)R&T(v; Z)“”(U) - IIm(z)Z(w;z)Z*(v; z)cp”(u). Hence, the first term, on the right hand side of the latter equality, is
where:
s, s,
&(v) := 2
X&J) The second term is:
:= 2
7r(dz)Z(v; z)RoZi(v;z),
.rr(dz)m(z)Z(w;z)Z*(v;z).
where
and
The functions Bl(v;z), b(v, u ) , b(v, u;x) and pz(z) are defined respectively in (3.116), (3.99), (3.115) and (3.117). Hence, putting together (5.184) and (5.185) we obtain the generator JL of Theorem 3.8. 0
Merging and Averaging in Split State Space
5.6 5.6.1
Preliminaries
Here, the phase merging principles are formulated in split state space for semi-Markov processes (Section 5.2.2) and for stochastic systems in average scheme (Section 5.6.3). As in the previous sections of Chapter 5, the phase merging principles are constructed by using the corresponding propositions from Section 5.2 and the asymptotic representation of the extended compensating operators for the stochastic systems in split state space considered in Chapter 4. The new additional particularity connected with the switching semiMarkov processes considered in the series scheme also with the small parameter series E + 0, ( E > 0) disappears. The split phase space ( E ,E ) is considered as follows (see Section 4.2.1):
E = Ur=lEk, Ek n Ep = 8 , k
# k‘.
(5.186)
The stochastic kernel P(z,d y ) is coordinated with split (5.186) in the following way:
The semi-Markov kernel
(5.187)
depends on the parameter series E as follows
B)+ EPl(2,B).
P ( Z ,B ) = P(z,
The perturbing kernel Pl(z,B ) is a signed kernel with additional conservative condition
which is basic in the ergodic merging principle of Section 4.2.1, and
is a bounded function, in the absorbing merging principle. The embedded Markov chain x:, n 2 0 , defined by the stochastic kernel P ( x ,B ) , is supposed t o be uniformly ergodic with stationary distributions p k ( B ) , B E E k , 1 5 k 5 N , satisfying the equations:
Moreover, the associated support Markov process zo(t),t 2 0 , defined by the generator (5.189) with q(z) := l / m ( z ) , m(z) := S,"F,(t)dt, z E E , also is supposed t o be uniformly ergodic with stationary distributions rk(B),B E &k, satisfying the relations:
or, in the equivalent form:
The projector I2 on the Banach space B is defined as follows:
Remark 5.3. In the absorbing split case (5.188), the additional assumption
will be used. That means that absorption with positive probability in the split state space takes place. 5.6.2
Semi-Markov Processes in Split State Space
The phase merging principle for the semi-Markov processes z"(t),t 2 0, in the series scheme with the small series parameter c --+ 0, ( E > 0) in the split state space (5.186) and the semi-Markov kernel (5.187) satisfying the conditions of Theorem 4.1 (Section 4.2.1) is formulated by using the compensating operator of the coupled process z E ( t / c ) ?(t) , := v ( z " ( t / c ) )t, 2 0 (see Section 4.2.1) given in the following form
+ Qi142, k),
LEcp(z, 5) = K'Q
(5.190)
on the test functions p(z, k), k E E , k E ?!, = (1, ..., N } . Here Q is the generator (5.189) of the associated Markov process, and Q1 the operator (5.191) The representation (5.190)-(5.191) can be obtained from the definition of the compensating operator in the following form (see Section 1.3.4)
and then considering on the test functions cp(z, k ) which do not depend on
t. It is worth noticing that the compensating operator (5.190) coincides with the generator (4.9) (Section 4.2.1) of the Markov process. Hence the phase merging principle for the semi-Markov processes gives the same result as for the Markov processes.
6
Proposition 5.11 The limit generator of the limit Markov process ?(t),t 2 0 , in Theorem 4.1, is defined by a solution of the singular perturbation problem given in Proposition 5.1 for the compensating operator
of the coupled process z E ( t / E ) , w ( x E ( t / & )t )2 , 0. According to Proposition 5.1 the limit generator lows (compare with (4.17), Section 4.2.1)
6is calculated as fol-
that is exactly as in Theorem 4.1.
Corollary 5.2 I n the absorbing case (5.188) the generating matrix = (&Ti 1 L: k , r 5 N ) has the same representation (4.17), Section 4.2.1, but the conservative relation does not take place, precisely,
where $\hat q_{k0}$, $k\in\widehat E$, are the absorbing probabilities of the limit merged Markov process $\hat x(t)$, $t\ge0$, considered on the extended merged state space $\widehat E^0 = \{0; 1,\dots,N\}$.
Z"(t/&2),
F ( t / & 2:= ) G(X"(t/&2)),
L E p ( sw, ) =
t 2 0,
[ E - ~ Q + ~ - l Q i+ Q z ] ~ ( z ,v),
(5.192)
where (see (4.23))
Qicp(z) = q(z)pi(p(z), &(z)
:=
J'i(x,d~)cp(~),
= L2.
The stochastic kernel P ( x ,d y ) is coordinated with the split state space (4.22) as follows: P ( z ,EL) = I ; ( x ) :=
1,a:EEL 0 , x q! EL.
It is worth noticing that the split state space (4.22) and uniform ergodicity of the associated Markov process zo(t),t_> 0, with generator (4.26)
and stationary distributions n;(dz),1 I kI N,1 I rI Nk, in every class E,T provides the projector on the Banach space B,
In the same way, the first convergence in (4.27) and the uniform ergodicity of the limit merged Markov process x̂(t), t ≥ 0, on the merged state space Ê = {ê_k^r; 1 ≤ k ≤ N, 1 ≤ r ≤ N_k}, with the stationary distributions π̂_k = (π̂_k^r; 1 ≤ r ≤ N_k), 1 ≤ k ≤ N, provide the projector Π̂ on the Banach space B.
According to Theorem 4.2 and the additional assumption ME4 (Section 4.2.1), the limit merged Markov process x̂(t), t ≥ 0, defined by the generating matrix Q̂_1, is determined by the following relation:
Q̂_1 Π = Π Q_1 Π.
Hence, the generator Q̂_1 is reducible-invertible with the projector Π̂ on its null space. This means that the asymptotic representation (5.192) of the compensating operator can be analyzed by using Proposition 5.3. The double merging principle is given in the following proposition, exactly as in Theorem 4.3.

Proposition 5.12 The generator Q̂_2 of the double merged Markov process x̂̂(t), t ≥ 0, is determined by the relations (see (5.28))
(5.194)

5.6.3 Average Stochastic Systems
The phase merging principles for the stochastic systems with split and merging of the switching semi-Markov process, considered in Section 4.3, are constructed by using the asymptotic representation of the extended compensating operator of the stochastic system and a solution of the suitable singular perturbation problem from Section 5.2.1. The new feature in the analysis of the compensating operator arises from the dependence of the generator of the associated Markov process on the series parameter ε. According to Assumptions A1-A3, Section 4.3, and the additional assumptions stated in Section 4.3.1, the compensating operator of the continuous
random evolution in the split and merging scheme can be written in the following form (compare with (5.121), Section 5.3.4):
L^ε φ(u, v, x) = q_ε(x) [Γ_x^ε P^ε φ(u, v, x) − φ(u, v, x)],    (5.195)
with P^ε = P + ε P_1.
It is easy to obtain the following asymptotic representation for the generator (5.195) (compare with (5.79)):
L^ε = ε^{-1} Q + [Q_1 + Γ(x) P] + θ_ε(x),    (5.196)
where Q is the generator of the associated Markov process x^0(t), t ≥ 0, and
Q_1 φ(x) := q(x) ∫_E P_1(x, dy) φ(y).
Proposition 5.13 The generator L̂ of the limit coupled Markov process x̂(t), û(t), t ≥ 0, can be represented in the following form:
L̂ = Q̂_1 + Γ̂.    (5.197)
PROOF. According to Proposition 5.1, the solution of the singular perturbation problem for the generator (5.196) has the following representation:
L̂ Π = Π [Q_1 + Γ(x) P] Π = [Q̂_1 + Γ̂] Π.    □
The generator (5.197) of the coupled Markov process x̂(t), û(t), t ≥ 0, defines the limit stochastic system û(t), given by a solution of the evolutionary equation
(d/dt) û(t) = â(û(t); x̂(t)),
with the average velocity â(u; k), that is, exactly as in Theorem 4.4, Section 4.3.1. Using Theorem 4.5, Section 4.3.2, we can solve a similar problem for merging of the semi-Markov processes.
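As a purely illustrative sketch, not taken from the text, the averaged evolutionary equation above can be integrated numerically once a merged generating matrix and averaged velocities are prescribed; the matrix Q_hat, the function a_hat, and all numerical parameters below are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical merged generating matrix Q_hat on two merged states {0, 1}
Q_hat = np.array([[-0.5, 0.5],
                  [0.3, -0.3]])

def a_hat(u, k):
    # Hypothetical averaged velocities a_hat(u; k) for each merged state k
    return -u + 1.0 if k == 0 else -0.5 * u - 1.0

def simulate(T=20.0, dt=1e-3, u0=0.0, k0=0):
    """Euler integration of du/dt = a_hat(u; x_hat(t)) driven by the merged chain."""
    t, u, k = 0.0, u0, k0
    hold = rng.exponential(1.0 / -Q_hat[k, k])   # time to the next jump of x_hat(t)
    ts, us = [t], [u]
    while t < T:
        if hold <= 0.0:                          # jump of the merged chain
            k = 1 - k                            # two states: deterministic switch
            hold = rng.exponential(1.0 / -Q_hat[k, k])
        u += a_hat(u, k) * dt
        t += dt
        hold -= dt
        ts.append(t); us.append(u)
    return np.array(ts), np.array(us)

ts, us = simulate()
print(us[-1])

The switching times of the merged chain are drawn from its exponential sojourn distribution, and the averaged system is advanced by an Euler step between them.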
5.7 Diffusion Approximation with Split and Merging
The phase merging principles for the stochastic systems with split and merging of the switching semi-Markov process considered in Section 4.4 are constructed, as in the previous Section 5.6.2, by using the asymptotic expansion of the extended compensating operator of the stochastic systems and a solution of the singular perturbation problem from Section 5.2. In the diffusion approximation scheme with split and merging of the state space, the new special feature arises from the dependence of the switching semi-Markov process on the series parameter ε → 0 (ε > 0). The diffusion approximation schemes in Sections 4.4.2-4.4.5 are considered with Markov switching, which simplifies the asymptotic analysis. The generators of the Markov stochastic systems are considered instead of the compensating operator. Generalization to switching semi-Markov processes can also be obtained following the analysis considered in Section 5.7.1.
5.7.1 Ergodic Split and Merging
According to Conditions D1-D3 in Section 4.4 and the additional assumptions of Theorem 4.7, Section 4.4.1, the compensating operator of the continuous random evolution in the split and merging scheme is represented in the following form (compare with (5.123), Section 5.4.2):
L^ε φ(u, x) = ε^{-2} q(x) [Γ^ε(x) P^ε φ(u, x) − φ(u, x)].    (5.198)
Here, by definition,
(5.199)
and
P^ε(x, B) = P(x, B) + ε² P_1(x, B).    (5.200)
Proposition 5.14 The compensating operator (5.198)-(5.200), acting on the test functions φ ∈ C³(R^d × E), has the following asymptotic representation (compare with Proposition 3.2, Section 3.2.1):
where G2(2)=
[ri(z)+ P ~ ( ~ ) F ~ (+ x Q) II. P
PROOF. In representation (3.10) put Q^ε = Q + ε² Q_1, with Q_1 = q P_1.    □
Now, the solution of the singular perturbation problem, given in Proposition 5.2, for the truncated operator
L^ε = ε^{-2} Q + ε^{-1} Γ(x) P + Q_2(x),
gives the limit generator of the coupled Markov process ζ(t), x̂(t), t ≥ 0, in the form given in Theorem 4.7, Section 4.4.1. The calculation of the limit generator L̂ almost coincides with the calculation of the limit generator in Section 5.4.2. As a result we obtain the following construction of the limit generator (compare with (5.130)):
L̂ = Q̂_1 + Γ̂_0 + Γ̂_00(k) + Ĉ(k),   k ∈ Ê,    (5.201)
with the operators depending on the state of the limit merged Markov process x̂(t), t ≥ 0, that is,
Γ̂_0 = ∫_E π(dx) Γ_0(x),   Γ_0(x) = Γ(x) R_0 Γ(x),    (5.202)
and Q̂_1 defined by Q̂_1 Π = Π Q_1 Π. Formulas (5.201)-(5.202) allow us to construct the limit generator for the stochastic systems considered in the diffusion approximation scheme with split and merging of the state space of the switching semi-Markov process. The limit generator L̂ of the limit diffusion process ζ(t), t ≥ 0, switched by the merged Markov process x̂(t), t ≥ 0, in Theorem 4.7, is calculated by formulas (5.201)-(5.202), setting:
Γ(x) φ(u) = a(u; x) φ'(u),   Γ_1(x) φ(u) = a_1(u; x) φ'(u) + C_0(u; x) φ''(u).
5.7.2 Split and Double Merging
According to Conditions D1-D2 in Section 4.4 and the additional Condition BC2, Section 4.4.3, the generator of the random evolution
ζ^ε(t), x^ε(t/ε²), t ≥ 0, described in Theorem 4.9, is represented in the following form:
L^ε = ε^{-2} Q + ε^{-1} Q_1 + ε^{-1} A(x) + B(x) + θ^ε(x),    (5.203)
where, by definition,
A(x) φ(u) := a(u; x) φ'(u),    (5.204)
B(x) φ(u) := a_1(u; x) φ'(u) + C_0(u; x) φ''(u),    (5.205)
with the negligible term ‖θ^ε(x) φ‖ → 0, as ε → 0, for φ ∈ C³(R^d).
Proposition 5.15 The generator L̂ of the limit diffusion process in Theorem 4.9 is calculated by using a solution of the singular perturbation problem for the generator (5.203), given in Proposition 5.4, in the following form:
L̂ = B̂ + Â R_0 Â.    (5.206)
The calculation of L̂ in formula (5.206), using (5.204) and (5.205), leads to the representation of the limit generator in Theorem 4.9.

5.7.3 Double Split and Merging
Conditions MD1-MD4, Section 4.2.3, and Conditions D1-D3, BC3, Section 4.4.4, in Theorem 4.10, provide the generator L^ε of the Markov process
ζ^ε(t),   x_t^ε := x(t/ε³),   x̂_t^ε := v(x_t^ε),   t ≥ 0,
represented in the following form
(5.207)
where, by definition,
(5.208)
with the negligible term
Proposition 5.16 The generator L̂ of the limit diffusion process ζ(t), t ≥ 0, switched by the twice merged Markov process x̂̂(t), t ≥ 0, is defined by a solution of the regularly perturbed problem for the truncated operator
L_0^ε = ε^{-3} Q + ε^{-2} Q_1(x) + ε^{-1} A(x) + [Q_2 + B(x)],    (5.209)
in the following form:
L̂ = Q̂̂_2 + B̂̂ + Â R_0 Â,    (5.210)
where the generator Q̂̂_2 of the twice merged Markov process x̂̂(t), t ≥ 0, is given in Condition MD4, Section 4.2.3. The potential R̂_0 is defined for the generator Q̂_1 by R̂_0 Q̂_1 = Q̂_1 R̂_0 = Π̂ − I. The twice averaged operators in (5.210) are calculated by
B̂̂ = Π̂ B̂ Π̂,   B̂ = Π B(x) Π,    (5.211)
and, analogously,
(5.212)
PROOF. The formulas (5.210)-(5.212) are obtained straightforwardly from Proposition 5.4, Section 5.2. Calculation by formulas (5.210)-(5.212), with (5.208), gives the representation of the limit generator L̂ in (4.47) of Theorem 4.10.    □

5.7.4 Double Split and Double Merging
Under the conditions of Theorem 4.11, Section 4.4.5, and taking into account the conditions of Theorem 4.3, we can calculate the generator L^ε of the coupled Markov process ζ^ε(t), x_t^ε := x^ε(t/ε⁴), t ≥ 0, with the first component ζ^ε(t), t ≥ 0, given in Theorem 4.11, in the following form:
L^ε φ(u, x) = [ε^{-4} Q + ε^{-3} Q_1(x) + ε^{-2} Q_2(x) + ε^{-1} A(x) + B(x)] φ(u, x) + θ^ε(x) φ,    (5.213)
where, by definition, Q is the generator of the support Markov process x^0(t), t ≥ 0, given by (4.26), and Q_i(x) := q(x) P_i(x, B), i = 1, 2 (see Section 4.2). The operators are (compare with Section 5.7.2):
A(x) φ(u) := a(u; x) φ'(u),
B(x) φ(u) := a_1(u; x) φ'(u) + C_0(u; x) φ''(u),
with the negligible term ‖θ^ε(x) φ‖ → 0, as ε → 0, for φ ∈ C³(R^d).
Proposition 5.17 The generator L̂ of the limit diffusion process in Theorem 4.11 is represented in the following form:
(5.214)
The calculation in formula (5.214) leads to the representation of the limit generator in the form given in Theorem 4.11.
PROOF. According to the conditions of Theorem 4.3, we can verify that Conditions (i)-(iii) of Proposition 5.5 hold. The contracted operator Q̂_1, defined by the relation Q̂_1 Π = Π Q_1(x) Π, defines the ergodic merged Markov process x̂(t), t ≥ 0. That is, the generator Q̂_1 is reducible-invertible with the projector Π̂, as in Assumption (ii) of Proposition 5.5. The second convergence in Theorem 4.3 implies that the limit twice merged Markov process x̂̂(t), t ≥ 0, is ergodic, and its stationary distribution defines the projector Π̂̂ on the null space of its generator Q̂̂_2, as in Assumption (iii) of Proposition 5.5. Assumption (i) in Proposition 5.5 is obviously valid. Hence the solution of the singular perturbation problem given in Proposition 5.5 gives the preliminary representation of the limit generator in Theorem 4.10.    □
Chapter 6
Weak Convergence
6.1 Introduction
The present chapter presents the complete proofs of weak convergence. The algorithmic part of the proofs, concerning convergence of the generators and providing convergence of finite-dimensional distributions, was given in the previous Chapter 5. Here we prove the tightness of the probability measures of the stochastic processes. Of course, in a Polish space, as is our case here, relative compactness and tightness are equivalent. The proof of relative compactness is given via the scheme of proof of Stroock and Varadhan 173 for the space C[0, ∞). More precisely, we use the extension of Sviridenko to the space D[0, ∞), that is, the compact containment condition and a nonnegative submartingale condition (see Theorem 6.2). The proofs concerning specific systems are based on the pattern limit theorems given in Section 6.2: for stochastic systems with Markov switching see Theorem 6.3; with asymptotic merging for averaging see Theorem 6.4; for diffusion approximation see Theorem 6.5; for semi-Markov switching see Theorem 6.6. Theorem 6.7 gives an alternative proof to the previous theorems, based on the continuous-time extended Markov renewal process. Appendix A gives general definitions and basics on weak convergence related to this chapter.
6.2 Preliminaries
Phase merging, averaging, diffusion and Poisson approximation algorithms for stochastic systems with Markov and semi-Markov switching are based on limit theorems for stochastic processes in the series scheme. The processes arising in applications and of interest to us here are
Markovian (or have an embedded Markov chain) with values in a standard state space (E, ℰ). Recall that if two probability measures are limits of the same sequence in a common standard state space, then they are equal. Every probability measure on a standard state space is tight 52. The functional space D_E[0, ∞) of right-continuous functions x : R_+ := [0, ∞) → E with left limits (cadlag functions) is considered as the space of sample paths of stochastic processes. It is well known that, for a standard state space E, the space D_E[0, ∞), with the Skorokhod topology, is also a standard state space. Of course, the space C_E[0, ∞) of continuous functions with the uniform topology is also considered as a space of sample paths of stochastic processes, especially for limit processes 56. The main type of convergence of stochastic processes is the weak convergence of finite-dimensional distributions: for the family of stochastic processes x^ε(t), t ≥ 0, ε > 0, in the series scheme with the small series parameter ε > 0, ε → 0, there is a process x(t), t ≥ 0, such that
lim_{ε→0} E φ(x^ε(t_1), ..., x^ε(t_N)) = E φ(x(t_1), ..., x(t_N)),    (6.1)
for any test function φ(x_1, ..., x_N) ∈ C(E^N), the space of real-valued bounded continuous functions on E^N, and for any finite set {t_1, ..., t_N} ⊂ S, where S is a dense subset of R_+. We will denote this type of convergence by
x^ε(t) →_D x(t),   ε → 0.
A more general type of convergence of stochastic processes is the weak convergence of the associated measures, that is, of the probability distributions (see Appendix A)
P_ε(A) := P(x^ε(·) ∈ A),   A ∈ 𝒟_E,
where 𝒟_E is the Borel σ-algebra on D_E[0, ∞). So, the following limit takes place:
lim_{ε→0} E φ(x^ε(·)) = E φ(x(·)),    (6.2)
for all φ ∈ C(D_E), the space of continuous real-valued functions on D_E[0, ∞) in the Skorokhod topology. We will denote the weak convergence of processes and of associated measures as follows:
x^ε ⇒ x,   P_ε ⇒ P,   ε → 0.
The connection between the above two types of convergence of stochastic processes is realized by means of the relative compactness of the family of associated measures (P_ε, ε > 0): for any sequence, say (P_{ε_n}), from (P_ε, ε > 0) there exists a weakly convergent subsequence (P_{ε_k}) ⊂ (P_{ε_n}).
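The defining relation (6.1) can also be illustrated numerically. The sketch below is not part of the book's argument: the toy functional ξ^ε(t) = ∫_0^t a(x(s/ε)) ds, the two-state switching chain, and all parameters are hypothetical, and the script merely estimates the left-hand side of (6.1) by Monte Carlo for decreasing ε.

import numpy as np

rng = np.random.default_rng(1)

def xi_at_times(eps, times, lam=1.0, mu=2.0, a=(1.0, -1.0)):
    """Values of xi_eps(t) = int_0^t a(x(s/eps)) ds at the given sorted times,
    where x is a two-state Markov chain with rates lam, mu (hypothetical example)."""
    t, state, xi = 0.0, 0, 0.0
    out, idx = [], 0
    while idx < len(times):
        rate = lam if state == 0 else mu
        hold = eps * rng.exponential(1.0 / rate)     # sojourn time of x(t/eps)
        t_next = t + hold
        while idx < len(times) and times[idx] <= t_next:
            out.append(xi + a[state] * (times[idx] - t))
            idx += 1
        xi += a[state] * (t_next - t)
        t = t_next
        state = 1 - state
    return out

def fdd_expectation(eps, times=(0.5, 1.0), n=2000):
    """Monte Carlo estimate of E[phi(xi_eps(t_1), ..., xi_eps(t_N))] with phi = product,
    i.e. the left-hand side of (6.1) for this toy functional."""
    return float(np.mean([np.prod(xi_at_times(eps, list(times))) for _ in range(n)]))

for eps in (0.5, 0.1, 0.02):
    print(eps, fdd_expectation(eps))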
Theorem 6.1 (45) Let x^ε(t), t ≥ 0, ε > 0, and x(t), t ≥ 0, be processes with sample paths in D_E[0, ∞).
(a) If x^ε ⇒ x as ε → 0, then x^ε →_D x on the set S = {t > 0 : P(x(t) = x(t−)) = 1}.
(b) If x^ε, ε > 0, is relatively compact and there exists a dense set S ⊂ R_+ such that x^ε →_D x, as ε → 0, on the set S, then x^ε ⇒ x, as ε → 0.
By Prohorov's theorem 16, relative compactness of a family of probability measures on D_E[0, ∞) is equivalent to tightness of this family. Many different procedures are available for verifying the relative compactness of a family of probability measures associated with given stochastic processes. In the case of Markov processes the martingale characterization approach seems to be the most effective one, in particular the one developed by Stroock and Varadhan 173. It is worth noticing that the relative compactness conditions formulated in 173 for stochastic processes with sample paths in the space C_E[0, ∞) are valid for the space D_E[0, ∞).
Theorem 6.2 (173) Let the family of R^d-valued stochastic processes x^ε(t), t ≥ 0, ε > 0, be such that:
H1: For every nonnegative function φ ∈ C_0^∞(R^d) there exists a constant A_φ ≥ 0 such that (φ(x^ε(t)) + A_φ t, F_t^ε) is a nonnegative submartingale.
H2: Given a nonnegative φ ∈ C_0^∞(R^d), the constant A_φ can be chosen so that it does not depend on the translates of φ.
Then, under the initial condition
sup_{ε>0} E |x^ε(0)| ≤ C < +∞,
the family of associated probability measures P_ε, ε > 0, is relatively compact.
CHAPTER 6. WEAK CONVERGENCE
In order t o verify the weak convergence of a family of stochastic processes on DE[O,m), we have to establish the relative compactness and the weak convergence of finite-dimensional distributions. Both these problems for a family of Markov processes in DE[O,00) can be solved by using the martingale characterization of Markov processes and convergence of generators. The particularity of the fast time-scaling switching processes is that the convergence of generators or compensating operators cannot be obtained in a direct way because the generators are considered in a singular perturbed form. But, as was shown in Chapter 5, the phase merging and averaging algorithms as well as diffusion and Poisson approximation schemes can be obtained by using a solution of the singular perturbation problem for reducible-invertible operators. Such an approach will be used in the following. Another approach t o verifying the relative compactness of a family of Markov processes consists in using the martingale characterization and the compactness conditions for square integrable martingales 132. The uniqueness of the limit measure follows from the uniqueness of solution of the martingale problem 45.
6.3
6.3 Pattern Limit Theorems

6.3.1 Stochastic Systems with Markov Switching
can be characterized by the martingale rt
> 0, have the common domain of definition D(IL), The generators which is supposed t o be dense in C,"(Rdx E ) . The limit Markov process ( ( t ) t, 2 0, is considered on Rd,characterized
6.3. PATTERN LIMIT THEOREMS
197
by t h e martingale
-
where t h e closure D(L) of t h e domain D ( L ) of t h e generator L is a convergence-determining class (see Appendix A). (loo) Let the following conditions hold for a family of Markov processes ( " ( t ) t, 2 0, E > 0:
Theorem 6.3
C1: There exists a family of test functions p E ( u , x )in C,"(Wdx E ) , such that
uniformly o n u,IC.
lim p E ( ux) , = p(u),
&-iO
C2: The following convergence holds
l i m ILEpE(u, x ) = ILp(u),
uniformly o n u, x.
€40
The family of functions IL&p", E > 0, is uniformly bounded, and ILp and LEp" belong to C(Rd x E ) . C3: The quadratic characteristics of the martingales (6.3) have the representation
where the random functions < " , E > 0, satisfy the condition
C4: The convergence of the initial values holds, that is,
('(0)
5
m,
E
+
0,
and supIE I["(O)l
5 c < +oo.
E>O
Then the weak convergence E"(t) ===+ E(t),
&
---f
0,
takes place. The limit Marlcov process J ( t ) , t2 0 , is characterized by the martingale (6.4).
CHAPTER 6. WEAK CONVERGENCE
198
In the particular case where Condition (6.5) is replaced by sup IEI<'(t)l+ 0,
E
-+
O
0,
the limit process J ( t ) , t2 0 , is given b y the solution of the deterministic equation cp(S(t))-
I"
Q(E(s))ds = cp(J(O)),
or, in the equivalent form
PROOF. Condition C3 of the theorem means that the quadratic characteristics of the martingale (6.3) is relatively compact (see 70,100),that is, there exists a sequence E~ 4 0, such that the weak convergence PZ"
=w:,
n+
00,
takes place. Now, in order to prove that the martingale characterization of the limit process is (6.4), let us calculate:
1 t
EP:
= E[cp"(S"(t), z"(t))-
~'cp'(S'(s>,.'(s))d~I
= E[cp"(S'(t),z"(t))- cp(J"(t))l
+Wcp(JE(t))-
I"
~cp(SE(s))dsl
t
+ El [b(J'(t)) - L E c p " ( J E ( S ) ,x"(s))dsl. The first and third terms on the right hand side tend t o zero, as E -+ 0, by Conditions C1 and C2 of the theorem. Due t o the relative compactness of the family of stochastic processes c(t),t O , E > 0, the following convergence
>
takes place. Now, due to Condition C3, We calculate:
199
6.3. PATTERN LIMIT THEOREMS
Hence, the following convergence holds EP;"
-
Ecp(J(O)),
+
00.
Consequently, we get that the limit process [ ( t ) ,t 2 0, is characterized by the martingale (6.4), with EPt = Ecp(E(0)).
0 The following convergence theorem is an adaptation of Theorem 8.2, p. 226 and Theorem 8.10, p. 234, in 45, t o our conditions with the solution of singular perturbation problem.
e5)
Theorem 6.4 Suppose that for the generator IL of the coupled Markov process [ ( t ) , 2(t),t 2 0, o n the state space Rd x V , with V a finite set, there is at most one solution of the martingale problem in D R d x v[O, m), and that the closure of the domain V(IL)is a convergence-determining class. Suppose that the family of Markov processes J " ( t / E ) , x:, t 2 O , E > 0 o n Rd x E defined by the generators IL",E > 0 , with domains V(IL")dense in C,"(Rd x E ) , satisfies the following conditions: C1: The family of probability measures (P", E > 0 ) associated to the processes ( F ( t )v, ( z E ( t / E ) )t, 2 0, E > 0) is relatively compact. C2: There exists a collection of functions cp"(u,z) E C(Rd x E ) , such that the following uniform convergence takes place
lim (pE(u, z) = cp(u,v ( x ) ) E c(@ x V)
(6.8)
E-0
and such that for every T > 0
C3: The uniform convergence of generators
lim ILEpE(u, x) = Lcp(u,v(z)),
(6.10)
0'"
takes place, and the functions IL"q9, E > 0, are uniformly bounded on E > 0 , and ILq E C(Rd x V). C4: The convergence in probability of the initial values holds, that is, ( J " ( 0 ) , v ( z E ( O ) )-5 ) (E(o),qo>),
E
0,
CHAPTER 6 . WEAK CONVERGENCE
200
with uniformly bounded expectation
I c < +m.
supE I("(0)l E>O
Then the weak convergence in D R d (€&(t)l
[0, m)
v ( f ( t > )==+ ) (C(t),.^(t>>,E
+
0,
takes place. The limit Markov process ( ( t ) .^(t),t , 2 0 , is defined b y the generator L.
Remark 6.1. The main algorithmic conditions (8.53) and (8.54) of Theorem 8.10 in 45 are represented in conditions of Theorem 6.4, respectively (6.8) and (6.10). Additional conditions of boundedness (8.51) and (8.52) correspond to additional conditions C2 and C3. All other conditions of Theorem 8.10 are represented in the convergence Theorem 6.4 in the same form. We will use also the following theorem which is a compilation of Theorem 9.4, p. 145, and Corollary 8.6, p. 231 in 45 under our conditions in diffusion approximation schemes.
Theorem 6.5
(") Let us consider the family of coupled Markov processes
('(t), Z"(t/E2),
t 2 0 , E > 0,
(6.11)
with state space Rd x E , and generators IL",E > 0, with domains D ( L E )dense in C(Rd x E ) . Let ((t),.^(t),t 1 0 , be a Markov process with state space Rd x V , and generator L with domain D(IL), and let be a convergence class. Consider also the test functions
Suppose that the following conditions are fulfilled: C1: The family of processes ('(t),t 2 0 , c > 0, satisfies the compact containment condition lim
sup^(
C + ~ E > O
sup o < t g
~<'(t)l> c )
= 0.
6.3. PATTERN LIMIT THEOREMS
201
C2: There exists a collection of functions cpE E C(Rd x
any T
E ) , such that, f o r
> 0 , we have
(6.12) and (6.13)
v(z(t/E2)))1
* where llVl/,,T = SUPO
(t"(O),
v9),
v(z"(0))) 5 (5(0),
E
-+
0,
with uniformly bounded expectation supE I["(O)l
5 c < +m.
E>O
Then, the weak convergence E (cE(t),v(xE(tlE2)>) ===+ (,3t>,.^(t>>,
-+
0,
takes place. 6.3.2
Stochastic Systems with Semi-Markov Switching
The stochastic systems with semi-Markov switching in the series scheme are determined by the embedded Markov renewal process (see Section 3.2) := ('(TE),
X E := Z E ( T E ) ,
n >_ 0,
(6.14)
which can be characterized by the martingale n
PE+1 = d G + l J E + l ) - pi+lILEP(E;,x;), k=l
72
2 0,
(6.15)
with respect t o the filtration F .: := a(<;, x;, 7;; 0 5 k 5 n), n 2 0. The compensating operator of the extended Markov renewal process (6.14) in the average approximation scheme is defined by (see Section 3.3) ILE = E - ~ Q
+ l r ( z ) P+ &Q;(x),
on the test functions cp(u,z) in C,"(Rd x E ) .
(6.16)
CHAPTER 6. WEAK CONVERGENCE
202
In the diffusion approximation scheme (see Section 5.4), it is defined by
+
+
+
ILE = &02Q ~-'lt-'(x)P Q2(x)P &&(x),
(6.17)
on the test functions cp(u,x)in C,"(Rd x E ) . The limit Markov process e ( t ) ,t 2 0 , is supposed to be characterized by the martingale rt
(6.18)
-
where the closure D(!L) of the domain D(IL) of the generator convergence-determining class.
Theorem 6.6
lL is a
Let the following conditions hold:
C1: The family of stochastic processes cE(t),t2 O,E > 0, is relatively compact. C2: Thew. exists a family of test functions cp"(u,x) in C,"(Rd x E ) , such that lim (pE(u,x ) = cp(u), uniformly on u,x .
E+O
C3: The following uniform convergence holds lim ILEcpE(u, x ) = ILcp(u),
uniformly on u,x.
E'O
The family of functions ILEcpE,&> 0, is uniformly bounded, and IL"cp" and ILcp belong to C(Rd x E ) . C4: The convergence of the initial values holds, that is, t " ( 0 )5 a o > ,
E
+
0,
and supE I("(0)l 2
c < +oo.
E>O
Then the weak convergence
* E(t),
S"(t)
&
+
0,
(6.19)
takes place. The limit process < ( t )t, 2 0, is characterized by the martingale (6.18).
6.3. PATTERN LIMIT THEOREMS
203
I n the particular case where the martingale is constant pt = po = const., then the limit process [ ( t ) ,t 2 0, is given by the solution of the deterministic equation
or, in an equivalent form d
--cp(J(t)) = L-cp(J). dt PROOF.Let us introduce the following random variables
v"(t):= max{n 2 0 : T: 5 t } , V;(t)
:= V " ( t )
+ 1,
T ; ( t ) := T,,;(t),
T E ( t ):= Tye(t).
Recall that the time-scaled semi-Markov process in the average scheme is considered as ~ ' ( s ) := z ( s / E ) , and in the diffusion approximation scheme as Z'(S) := Z(S/&2). Note also that the random variables v$(t) are stopping times with respect t o the filtration
For the proof of theorem we need the following lemma. In what follows, we consider the embedded stochastic system with piecewise trajectories as follows
Lemma 6.1
The process
has the martingale property E[CE(t)- C(S)I F:] = 0,
0 5 s 5 t 5 T.
(6.23)
PROOF. It is worth noticing that
c'(t) = c " ( ~ ' ( t )=) and
<"(T:)
=
n 2 0.
for P ( t )I t < ~ : ( t ) ,
CHAPTER 6. WEAK CONVERGENCE
204
The truncated random variables v;(t) AN, for all positive integer value of N > 0, are finite. Hence the following martingale property takes place174, 'bL',;(t)AN
- PL',;(s)AN
I Fzl
(6.24)
= O'
Taking the limit in (6.24) for N + w, we get that for N large enough, v;(t) A N = v;(t). Hence, we obtain
I CI = 0.
E[PL',;(t) -
(6.25)
Since, by construction, ( " ( t )= c ( ~ f (= t )PL'),;(~), the martingale property (6.23) of the process (6.22) is proved. 0 Now, for verifying that the weak convergence (6.19), in Theorem 6.6, holds, we have to estimate the expectation of the following process t
P: = cp(JE(t))-
ILcp(JE(s))ds,
by using the conditions of Theorem 6.6, and the martingale property (6.23) of the piecewise process (6.22). Let us calculate:
1 t
E[PZI = W J E ( t ) )-
ILcp(J"(S))dSI
0
= E[cp(S"(t))- c p " ( J E ( t ) l z"(t))l
+E[cp"(F(t),z"(t)> - cp'(J:(t),z$(t))l
First note that the third term of the sum is exactly equal to (6.22)), and the fourth term is equal t o 7;
El
(t)
IL"cp"([f(s),z"(s))ds
-
0,
6 + 0,
c(t)(see
6.3. PATTERN LIMIT THEOREMS
205
thanks to the property of the renewal moments
uniformly on every finite time interval [0,TI (see Appendix C). The first two terms tend to zero by Condition C2 of Theorem 6.6. The fifth and sixth terms tend to zero by Condition C3 of Theorem 6.6 and finally the dominated convergence theorem allows us to get the limit with expectation. Now we have to estimate the third term by using the martingale property (6.23) of the process (6.22) as follows:
The last term tends to zero by Condition C2 of Theorem 6.2. In conclusion we obtain
where the negligible term BE + 0, as E + 0. Hence, by Condition C4, we get
Now, by Condition C1 of Theorem 6.6, there exists a sequence cn 4 0, such that the weak convergence (6.19) takes place and simultaneously the limit process <(t),t2 0, is characterized by the martingale (6.18). 0
6.3.3
Embedded Marlcov Renewal Processes
We present here an alternative approach to the previous pattern limit theorems based on the continuous time embedded Markov renewal process. In the average scheme, the scaled in continuous time embedded Markov renewal process is considered in the following form
where [t]means the integer part of the real number t 2 0.
CHAPTER 6. W E A K CONVERGENCE
206
The corresponding coupled random evolution is characterized by the martingale PLf = c p ( t , " l
ZLf) - E
-1
c
WE1
LEcp(G1x i ) , t 2 0,
(6.27)
k=O
where the generator of the coupled Markov renewal process (6.26) is defined by (see Section 3.3)
L"= E - ~ Q + lr(x)P+ d ? ( x ) ,
(6.28)
where
Q := P - I ,
P c p ( ~:= )
s,
P ( z ld y ) p ( y ) .
(6.29)
The negligible term is
e;(z)
=I~(~)(~)F;~)(~)P.
(6.30)
In the diffusion approximation scheme, the scaled in continuous time embedded Markov renewal process is considered in the following form
'$ := E ; / , z ] ,
x; := q t / , 2 1 ,
t L 0.
(6.31)
The corresponding coupled random evolution is characterized by the martingale [t/2]-1
PLf = cp(E:,.Lf)
- E2
c
IL"P(G,G)l
t 2 01
(6.32)
k=O
where the generator of the coupled Markov renewal process (6.31) is represented in the following asymptotic form (see Section 5.4)
IL" = E - ~ Q + E - ~ F ( x ) + P Q2(2)P+ &O;(X). The limit Markov process the martingale
Etl t
(6.33)
2 0 , is supposed to be characterized by
(6.34) with the generator determining class.
-
IL whose closure D(L) of domain D ( L ) is a convergence-
Theorem 6.7 Let the following conditions hold:
6.3. PATTERN LIMIT THEOREMS
207
C1: The family of embedded Markov renewal processes I f , x f , t 2 0,c > 0, is relatively compact. C2: There exists a family of test functions p " ( u , x ) in Cp(Wd x E ) , such that lim p E ( u x, ) = p ( u ) , uniformly o n u, 2.
E+O
C3: The following convergence holds
lim ILEpE(u, x ) = % p ( u ) , uniformly o n u, x . E+O
The family of functions ILEpE,& > 0, is uniformly bounded, and LEpE and ILp belong to C(Rdx E ) . C4: The convergence of the initial values holds, that is,
to'
P
to,
0,
E
and
Then the weak convergence
I; *tt,
E
+
0,
(6.35)
takes place. The limit process process &,t 2 0 is characterized by the martingale (6.34). I n the particular case, where the martingale is constant p t = po = const., the limit process is determined by a solution of the deterministic equation
or, in equivalent f o r m
PROOF. In order to verify the weak convergence (6.35), we have to estimate the expectation of the following process (6.36)
CHAPTER 6. WEAK CONVERGENCE
208
by using the conditions of Theorem 6.7, and the martingale property of the piecewise process (6.27). Let us calculate:
w: = =
1M J 3 4 WS:) +W"(JPl.Z) c t
W(P(@) -
0
- Cp'(~:l~P>l
[tl"I---l
-E
4 1
~"V"(E&
k=O
we1 - 1
c
[LEP"(Gl4- JL(f(G)I
k=O
Due to Condition C2 of theorem, the first and third terms of the sum tend t o zero as E + 0. The forth term also tends to zero, since in the square brackets we have the difference between the integral and the corresponding integral sum. The second term is exactly the martingale (6.27). In conclusion we obtain:
Finally,
with the negligible term
Hence, the limit process in (6.36),
is the martingale (6.34).
6.4. RELATIVE COMPACTNESS
6.4
209
Relative Compactness
In this sections the relative compactness of the family of stochastic systems c'(t), t 2 0, E > 0, is realized by using the Stroock and Varadhan criteria formulated in Theorem 6.2. 6.4.1
Stochastic Systems with Markov Switching
The stochastic systems with Markov switching in the series scheme with the small parameter E > 0, E -+ 0, is characterized by the martingale (6.37) where the generators L € , E> 0, have the common domain of definition D(IL), is supposed to be dense in C,"(IRdx E). Lemma 6.2
Let the generators IL',
E
> 0, have the following estimation
ILEcp(u)I 5
c,,
(6.38)
f o r any real-valued nonnegative function cp E C,"(Rd),where the constant C, depends o n the norm of cp, but not on E > 0, nor on shifts of cp 45. Suppose that the compact containment condition holds
(6.39) Then the family of stochastic processes t E ( t t) ,2 0, E > 0, is relatively compact.
PROOF. Let us consider the process
V E ( t ):= cp(J'(t))
+ c,t,
t L 0,
and prove that it is an Ff-nonnegative submartingale. To see this, let us calculate, by using the martingale characterization (6.37), for 0 5 s 5 t: E [ V E ( t I)
el
I el + c,s
= E[cp(EE(t))
CHAPTER 6. WEAK CONVERGENCE
210
So, the following equality takes place
q
t
E [ V E ( t I) F3 = V E ( S ) +
+
(LE'p($(u)) C d d u
I el,
where the last term is nonnegative due to the estimation (6.54). Hence
IE[qE(t)IF,']2 ~ " ( s ) , for s < t. Now, we can see that both hypotheses of Theorem 6.2 are valid. We need the following lemma for the proof of Lemma 6.4 below.
0
Lemma 6.3 (Lemma 3.2, p. 174, in 45) Let z(t), t 2 0 , be a Marlcov process defined by the generator L,and Gt 3 FF. Then for any jixed X E R and 'p E D(L) t
+
e-x"(4t>>
e-X"[X'p(.(s>>
- IL'p(4s))lds
is a Gt-rnartingale. Lemma 6.4 Let the generators f o r 'po(u) = d G 2 ,
> 0, have the following estimation
LE'po(u)I Cl'pO(~>l Iul I4
where the constant Ci depends o n the function 'pol but not o n E > 0 , and
Then the compact containment condition holds lim s u p F
PROOF. Since (P~('(L) =u 1
+ J u Jwe , have
I(Pd(4l
( sup IF(t)l2 t1
=O.
(6.41)
/ d w ' ,and ( ~ " ( u=) (1+
and 'po(u) 5
e+mE>0
OltlT
5 1I 'po(u), Ivb'(4l 51 5 vo(u),
Let us define the stopping time re",by
'(L
E It.
6.4.RELATIVE COMPACTNESS
211
By Lemma 6.3, applied to a stopping time instead of a fixed time t , we have that
(6.43) is a martingale. We get, for s 5 t A re',
and from (6.43) we obtain
JE [e-C1tArtcpo((E(tA
?$)I
5 Ep:
=
= Epo(CE(0)).
(6.45)
The convexity of po and the inequality PO(.) 2 1, together provide the estimation: p; : = PE
(
sup IC"(t>l2 l )
OltlT
and Chebichev's inequality yields Pd
(6.46)
I IE [cpo(CE(7a)l /cpo(Q
From inequality (6.46), together with inequality e-r; 2 e-T (since r; 5 T ) ,we obtain PeE <
[ e - ~ l r i p o ( ~ ~ ( r ;/cpo(e), ))]
(6.47)
lcpo(tE(0))l/ c p O ( O
(6.48)
and from (6.45) we get Pe <
Now, by the inequalities
4-
pz 5 eCIT(b
Corollary 6.1
5 1+ I u I and (6.40), we get
-
+ l)/po(~)
0,
e
4
00.
(6.49)
Let the generators ILE,E > 0 , have the following estimation
CHAPTER 6. WEAK CONVERGENCE
212
for any real-valued nonnegative function 'p E C;(Rd), where the constant C, depends on the norm of 'p, and for 'po(u) =
d m ,
L''po(u) I Cl'pO(U),
14 5 1,
where the constant Cl depends on the function 9 0 , but not on E > 0 . Then the family of processes [ " ( t ) , t2 O,E > 0 , is relatively compact. 6.4.2
Stochastic Systems with Semi-Markov Switching
The stochastic systems with semi-Markov switching in the series scheme with the small series parameter E > 0, E + 0, is characterized by the process (see Lemma 6.1)
1
7; ( t )
S ( t >= ' p ( G ( t )x"(t)) , -
where the compensating operators IL", representation (see Proposition 3.1)
E
w4m,s"(s))ds,
(6.51)
> 0, have the following asymptotic
+
+
IL''p(u,x) = ~ - l Q ' p lr(x)P'p ~ B ; ( x ) ' p ,
(6.52)
on the test functions 'p(u,x) in C,"(Rdx E ) . The remaining term is
Bg(x) = S2(z)FL2)(x)Qo. The process c ( t ) in (6.51) has the martingale property (see Lemma 6.1). The relative compactness of the family of the stochastic systems c ( t ) , t 2 O , E > 0, is realized by using the Stroock and Varadhan criteria, formulated in Theorem 6.2.
Lemma 6.5 estimation
Let the compensating operators IL', c > 0 , have the following (6.53)
for any real-valued nonnegative functions ~ ( uin) C r ( R d ) ,where the constant C, depends only on the norm of 'p, but not on E nor on shifts of 'p45.
Let the compact containment condition (6.39) holds, and 1<"(0)1 5 c < +m.
6.4. RELATIVE COMPACTNESS
213
Then the family of the stochastic systems g(t),t2 0 , c > 0, is relatively compact.
PROOF. The proof of this lemma is similar as the proof of Lemma 6.2. 0 Corollary 6.2 estimation
Let the compensating operator (6.52) has the following IILEP(U)I5
c,,
(6.54)
for any real-valued nonnegative function p E C i ( R d ) ,where the constant C, depends only on the norm of p, and for po(u) = d m , LEvo(u)I Cwo(u),
14
511
where the constant Cl depends o n the function po, but not on E > 0. Then the family of processes e E ( t ) ,2 t O , E > 0, is relatively compact. 6.4.3
Compact Containment Condition
e(t),
The relative compactness of the family of stochastic processes t 2 0, & > 0, with simple paths in the space D R d [0, oo),is obtained provided that the compact containment condition holds 45 lim s u p P l+oOE>O
(o s t g I<"(t)l> ) sup
1
= 0,
(6.55)
together with the additional condition that p ( e ( t ) ) is relatively compact for each test function p(u) in a dense set, say H , in C ( E ) ,in the topology of uniform convergence on compact set. The unique solution of the martingale problem for the limit generator of Markov process together with Condition (6.55) provides the weak convergence of the processes (see Theorem 9.1, c h . 3 in 45).
The family of processes ('(t), t 2 0 , 0 < E 5 EO, characterized by the compensating operator (6.52), with bounded initial value IE I$(O)l 5 b < 00, satisfies the compact containment condition (see 45) Lemma 6.6
(6.56) PROOF. We will use the function cp~(u) = d m .The asymptotic representation (6.52) for the compensating operator and the following properties
CHAPTER 6. W E A K CONVERGENCE
214
of
yield the inquality
Let us use now the process
defined by (see Lemma 7.8)
First, the following inequality is satisfied
for any large enough c > 0. Using the martingale property of the process (E(t)and inequality (6.58), we obtain the following inequality
The left hand side in
is estimated as follows
The second term is estimated using property
Hence,
We will use below the following property of random sojourn times y'(T) (see Appendix C), for all S > 0, (6.62)
6.4. RELATIVE COMPACTNESS
215
Note that the function d ( s ) = bse-cs is bounded in s E R+
0 5 d ( s ) I e < +w.
(6.63)
Let us estimate:
Similarly, using property (6.62), we estimate: E[e-cT'(T)
I FT ]= E [ e - c ~ ' ( T ) [ ~ ( y t 2( TS)) + I(~'(T) < s)] I F@] 2 e-csP(-y'(T) < S) = e-" [I - ~ ( y( "T ) 2 s)].
(6.65)
By (6.62), we have ~ [ e - ~ y ' (I ~~ )$ 2 1 h
> 0 , for
o < E 5 EO.
Inequality (6.59) can be now transformed into the following
he-cTEcpo(tE(T))I WO(J"(O)). The convexity of cpo(u) = d vide the estimation:
and Chebishev's inequality yields
Now, Inequality (6.66) is used:
(6.66)
m ,and the inequality cpo(u) 2 1, pro-
CHAPTER 6. W E A K CONVERGENCE
216
6.5
Verification of Convergence
The verification of convergence of stochastic systems with semi-Markov switching in the average merging scheme (see Theorem 3.1) is based on the determination of the pattern limit Theorem 6.6 conditions, by using the explicit representation of the solution of the singular perturbation problem given in Proposition 5.7. First, the explicit representation of the remaining term in (5.65)-(5.67) and Condition A2 in Section 3.3.1 (Theorem 3.1) provide that Condition (6.53) in Lemma 6.5 is valid. Next, the compact containment condition (6.56) is realized by Lemma 6.6. So, Condition C1 of Theorem 6.6 is true. Conditions C2 and C3 also are true from the same explicit representation (5.65)-(5.67) of the limit generator and the remaining term and, of course, Conditions A2 and A3 in Section 3.3.1 (Theorem 3.1). The characterization of the limit process in (6.20) with the limit generator IL = I?, lrp(u) = ij(u)(p'(u),(see (5.59)), due t o Condition C4, completes the proof of Theorem 3.1.
--
Verification of weak convergence of stochastic additive functionals (3.56) in Theorem 3.4 is achieved following an analogous scheme t o Theorem 3.1. First, Conditions C2 and C3 of Theorem 6.6 are obtained by using asymptotic representation (3.10) in Proposition 3.2. Next, compact containment condition (6.39) is realized by Lemma 6.6. Condition (6.53) in Lemma 6.5 is proved for the perturbed test functions (pE(u,z)= p(u) + E ( P ~ ( u , z ) ,such that
where the constant C, depends only on cp(u),but not on
E
nor on shifts of
9. The characterization of the limit generator IL,in Proposition 5.8, completes the proof of Theorem 3.4. The proof of Theorem 3.3 can be obtained as a particular case of that of Theorem 3.4. The verification of weak convergence of Theorems 3.2 and 3.5 is obtained similarly by using Propositions 3.4 and 5.9 respectively.
6.5. VERIFICATION OF CONVERGENCE
217
The convergence in distribution of the coupled Markov process
C'(t),f(t),t2 O , E > 0 , is made by the Pattern Limit Theorem 6.3. For the weak convergence, we propose to the interested reader to calculate the square characteristic of the martingale characterization of the coupled Markov process C E ( t )c(t), , t 2 0 , and verify the relative compactness of the family t 2 0, E > 0 , as E + 0.
c(t),
Theorem 3.6 can be considered as a particular case of Theorem 3.7. The weak convergence in Theorems 4.7-4.11 is based on the solutions of the singular perturbation problems given in Propositions 5.14-5.17 and on the Pattern Limit Theorem 6.4 in average merging scheme and Theorem 6.5 in diffusion approximation scheme. The switching semi-Markov processes is considered in Theorem 6.6. The verification of relative compactness is made by using the estimations of generator (or compensating operator) on the test functions given in Lemmas 6.2 and 6.4. The relative compactness of the processes on the series scheme is shown by using Stroock-Varadhan approach given in Theorem 6.2.
Chapter 7
Poisson Approximation
7.1
Introduction
The Poisson approximation merging scheme is presented here for two kinds of stochastic systems: impulsive processes with Markov switching (Sections 7.2.1 and 7.2.2) and stochastic additive functionals with semi-Markov switching (Section 7.2.3). The average and diffusion approximation merging principles are constructed for stochastic systems in the series scheme with the small series parameter ε → 0 (ε > 0) normalizing the values of the jumps. In the Poisson approximation scheme, the jump values of the stochastic system are split into two parts: a small jump taking values with probabilities close to one, and a big jump taking values with probabilities tending to zero together with the series parameter ε → 0. So, in the Poisson merging principle the probabilities (or intensities) of jumps are normalized by the series parameter ε. The main assumption in the Poisson merging principle is the asymptotic representation of the probability measure on the measure-determining class of functions φ ∈ C_3(R), which are real-valued, bounded, and such that 70 φ(u)/u² → 0, |u| → 0 (see Appendix B). The techniques of proofs developed here are quite different from those used in the previous chapters for diffusion and average approximations. The proofs of theorems in the present chapter make use of semimartingale theory. Theorems 7.1 and 7.2 concern impulsive processes, with and without state space merging of the switching Markov process. Theorem 7.3 concerns additive functionals with semi-Markov switching. The main framework of proofs is that of Theorems VIII.2.18 and IX.3.27 in 70 (see Appendix B, Theorems B.1 and B.2). But the main point here is to prove convergence of predictable characteristics of semimartingales which are integral functionals
of switching Markov processes. This is done by the techniques given in Chapters 5 and 6. The Poisson merging principle is constructed similarly to the average approximation principle (see Section 5.4), with some special devices. As usual, there are four different schemes: the continuous and jump random evolutions, considered with Markov and semi-Markov switching. The associated continuous random evolution in the Poisson approximation scheme is given by the family of generators Γ_ε(x), x ∈ E, which defines the switched Markov processes with locally independent increments η^ε(t; x), t ≥ 0, x ∈ E, with values in R^d, d ≥ 1, and the switching Markov renewal process x_n^ε, τ_n^ε, n ≥ 0, which determines the states x_k ∈ E and the renewal times by the transition probabilities given by the Markov kernel. The starting point of the construction of the Poisson approximation principle is the compensating operator of the continuous random evolution in the series scheme (see Section 5.3). For easier understanding, we consider Markov switching in the first part, but the same approach can be used for semi-Markov switching when we replace the generator by the compensating operator.
7.2 Stochastic Systems in Poisson Approximation Scheme

7.2.1 Impulsive Processes with Markov Switching
(7.1) The semi-Markov kernel
Q ( z , B , t )= P ( x , B ) ( l- e-q(”)t),
x E E , B E E , t 2 0,
defines the associated Markov renewal process X k , T k , k L 0 , where 0, is the embedded Markov chain defined by the stochastic kernel
P ( x ,B ) = p ( X k t 1
EB
xk
(7.2) Xk,
k L
=x),
and T k , Ic 2 0 , is the point process of jump times defined by the distribution function of sojourn times 6&1 = T k + l - r k , k 2 0 ,
7.2. POISSON APPROXIMATION SCHEME
221
We suppose that the Markov process z(t),t 2 0, is uniformly ergodic with stationary distribution .rr(B),B E E . Thus the embedded Markov chain z k , k 2 0 , is uniformly ergodic too, with stationary distribution p(B), B E E l connected by the following relations
In the sequel we will suppose that
0 < 40
I q(z)5 4 1 < +OO,
z E E.
(7.4)
The impulsive process with Markov switching is defined by
+
4tlc)
[ ' ( t ) := c(0)
&(zk)l
(7.5)
t 20,
k=l
where v ( t ) = max{k : Tk 5 t } is the counting process of jumps. The family of random variables az(z), k 2 1, z E El is considered in the series scheme with a small series parameter E > 0, and is defined by the following distribution functions on the real line EX
Analogous results can be obtained for the impulsive processes in E X d , d 2 1. In the sequel, we will suppose that for any fixed sequence ( z k ) in El the sequence a;(&),k 2 1, is constituted of independent random variables. Let the following conditions hold.
A l : The switching jump Markov process z(t), t 2 0, is uniformly ergodic with the stationary distributions (7.3). A2: The family of random variables az(z),k 2 1,z E El is uniformly square integrable, that is, sup sup E>OZEE
J
u2@jc(du)- 0 ,
IUI>C
A3: Approximation of mean values
with sup
c-+
00.
CHAPTER 7. POISSON APPROXIMATION
222
A4: Poisson approximation condition
and SUPZEE1@’2(g)15 @ ( g )< A5: Square-integrability condition sup xEE
/
u 2 a x ( d u ) < +m.
W
where the measure @,(du) is defined on the measure-determining class C, (R),by the relation
The negligible terms t9t(z), eE(z) and eg(z) in the above conditions satisfy
Theorem 7.1 Under Assumptions A l - A 5 , the impulsive process (7.5) converges weakly to the compound Poisson process
c
vo(t)
<“t)
:=
a; +tqao,
t 2 0.
(7.7)
k= 1
The distribution function a’(.) of the i.i.d. random variables a:, k >_ 1, is defined o n the measure-determining class C3(R) of functions g b y the relation
(7.8) where: (7.9)
The counting Poisson process vo(t) is defined by the intensity qo := q&(l).
The drijl parameter
a0
(7.10)
is defined by
(7.11)
7.2. POISSON APPROXIMATION SCHEME
223
The following corollary gives an adaptation of the above theorem in the case of finite valued random variables a:((.).
Corollary 7.1 values:
The impulsive process (7.5) with a finite number of j u m p
P(a,Ek(z)= a,)
= ~p,(z),
15 m 5 M,
(7.12) M
m=l converges weakly to the compound Poisson process (7.7) determined by the distribution function of jumps:
P(ai = a m ) = p;,
15 m 5 M ,
where:
M
-
and Fo := pm. The intensity of the counting Poisson process vO(t), t 2 0 , is defined b y 40 := 4POl
the drift parameter
a0
(7.14)
is given in (7.12).
Remark 7.1. Assumptions A3 and A4 together split jumps into two parts. The first part gives the deterministic drift, and the second part gives the jumps of the limit Poisson process. The small jumps of the initial process characterized by the function a(.) in A3, are transformed into deterministic drift 2 for the limit process.
Remark 7.2. The stochastic exponential process for the impulsive process
CHAPTER 7. POISSON APPROXIMATION
224
(7.5) is defined as follows lo6
n
4 t / g )
f ( x ( " ) t :=
+
[1 x a ; ( z k ) ] ,
t 2 0.
(7.15)
t 2 0.
(7.16)
k=l
The weak limit of the process (7.15), as E
0, is
(t)
I-I[1+
yo
:=
€(X[O))t
-+
XQ3&-t9ao,
k=l
D
Example 7.1. Let us consider a two state ergodic Markov process
z ( t ) , t 2 0, with generating matrix Q, and the transition matrix P of the embedded Markov chain
Thus, the stationary distributions of z(t),t 2 0, and zn,n 2 0, are respectively: P
lr=
(-,-), x+p
1 1
A x+p
P=
(2'2)
Now, suppose that, for each E > 0, the random variables a;(z), z = 1 , 2 , k 2 1, take values in { E ~ o a, l } with probabilities depending on the state z, @;(eao) = P(a; = ~ a o= ) 1- ~ p , and @ z ( a l )= P(a; = a ~=) cp,, for z E E. We have
where
Oi(z) := ~ a z g ( ~ a o )-( ~l p , ) / ~ ~=a ;E a z . o(1) = o ( E ) , for E + 0, and
/
+
u q + = ~ &[(ao alp,)
+~ W I ,
where ee(z) = - m o p z . For the limit process, we have P(G0 = a l ) = 1, thus
+ alvO(t), with Evo(t)= qot, q = A + p, qo = qpo = q(p1 + p 2 ) / 2 . r o ( t )= qaot
7.2. POISSON APPROXIMATION SCHEME
225
Let us now take λ = μ = 0.01, p_1 = 0.5, p_2 = 0.6, a_1 = 100, a_0 = −2, ε = 0.1. Then we get q_0 = 0.0165, and Fig. 7.1 gives two trajectories in the time interval [0, 4500], one for the initial process and the other for the limit process.
Fig. 7.1 Trajectories of the initial and limit processes, and of the drift
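A short simulation sketch in the spirit of Fig. 7.1 (illustrative only, not the authors' code): it draws one trajectory of the impulsive process of this example and one trajectory of the limit process ξ^0(t) = q a_0 t + a_1 ν^0(t), with q and q_0 computed from the formulas quoted above and all other values taken from the example as printed.

import numpy as np

rng = np.random.default_rng(7)

# Parameters as printed in the example
lam, mu = 0.01, 0.01
p = {1: 0.5, 2: 0.6}              # p_z: probability of the big jump a_1 in state z
a0, a1, eps, T = -2.0, 100.0, 0.1, 4500.0
q = lam + mu
q0 = q * (p[1] + p[2]) / 2.0      # intensity of the limit Poisson process, as in the text

def impulsive_path():
    """One trajectory of the impulsive process: a jump alpha_k at each renewal of x(t/eps)."""
    t, z, xi, ts, xs = 0.0, 1, 0.0, [0.0], [0.0]
    while t < T:
        rate = lam if z == 1 else mu
        t += eps * rng.exponential(1.0 / rate)    # sojourn time of the time-scaled chain
        z = 2 if z == 1 else 1
        xi += a1 if rng.random() < eps * p[z] else eps * a0
        ts.append(min(t, T)); xs.append(xi)
    return np.array(ts), np.array(xs)

def limit_path():
    """One trajectory of xi0(t) = q*a0*t + a1*nu0(t), nu0 a Poisson process of rate q0."""
    jumps = np.cumsum(rng.exponential(1.0 / q0, size=500))
    grid = np.linspace(0.0, T, 500)
    return grid, q * a0 * grid + a1 * np.searchsorted(jumps, grid)

print(impulsive_path()[1][-1], limit_path()[1][-1])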
7.2.2 Impulsive Processes in an Asymptotic Split Phase Space Now the switching Markov process z " ( t ) ,t 2 0, is considered in the series scheme with a small series parameter E > 0, on an asymptotic split state space:
u N
E=
E,,
E,
p,) = 0,
21
# 21',
(7.17)
VEV
where (V, V ) is the factor compact measurable space. The generator is given by the relation
S,
~ ~ c p (= z)
(7.18)
QE(z,dy)[cp(y) - cp(z)~.
The transition kernel Q, has the following representation
QE(z, B ) = q(z)P'(z,B ) = Q(z,B )
+ EQI(X,
B),
(7.19)
226
CHAPTER 7. POISSON APPROXIMATION
with the stochastic kernel P" representation
P"(5,B ) = P ( x ,B )
+ EPl(2,B ) .
(7.20)
The stochastic kernel P ( z , B ) is coordinated to the split state space (7.17) as follows: (7.21) In the sequel we suppose that the signed kernel PI is of bounded variation, that is,
lP1l(z,E)< +m.
(7.22)
According to (7.20) and (7.21), the Markov process z"(t),t 2 0, spends a long time in every class E,, and the probability of transition from one class t o another is O(e). The state space merging scheme (7.17) is realized under the condition that the support Markov process z ( t ) , t 2 0, with generator (7.1) is uniformly ergodic in every class E,,, v E V, with the stationary distributions (7.23) Let us define the merging function
v(z)= v, z
(7.24)
E E,.
By the state merging scheme, the merged Markov process converges weakly (see Section 4.2), V ( Z " ( t / & ) ) ===+q t ) ,
&
4
0,
(7.25)
to the merged Markov process .^(t), t 2 0, defined on the merged state space V by the generating kernel
The counting process of jumps, denoted by C ( t ) ,t 2 0, can be obtained as the following limit & V E ( t / & ) ===+
i;(t),
&
4
0.
7.2. POISSON APPROXIMATION SCHEME
227
Theorem 7.2 Under conditions A l - A 5 , in the state space merging scheme the impulsive process with Markov switching in series scheme
c
VC(t/E)
r"(t):=
(7.27)
a;(z;), t 2 0,
k=l
converges weakly to the additive semimartingale
to(t), t 2 0, (7.28)
or, in the equivalent increment form P(t)
(7.29)
k=l The compound Poisson processes ,Et(t)are defined by the generators
and v t ( t ) are the counting Poisson processes characterized b y the intensity h
q: = qW@,,(1),or, in an expZicit form
4 (t) <:(t) =
1 k=l
atk
+ tqwat,
21
E
v,
for fixed w E V , where a t k , k 2 1, are i.i.d. random variables with common distribution function defined by @39) = & h ) / E w ( 1 ) .
The drijl parameter is given b y h
u: = c(v) = Z(W) - @w(l)Ea!:l.
Remark 7.3. In applications, the limit semimartingale (7.28) can be considered in the following form G(t)
Eo(t)=
+
gkc(zk-1) -k ? ( t ) c ( z ( t ) ) / J O ( t ) , k=l
(7.30)
CHAPTER 7. POISSON APPROXIMATION
228
where po(t) is a martingale fluctuation. The predictable term in (7.30) is a linear deterministic drift between jumps of the merged switching Markov process 2 ( t ) , t 2 0 . 7.2.3
Stochastic Additive Fzlnctionals with Semi-Markov Switching
Let us consider an additive functional with semi-Markov switching depending on the small series parameter E > 0, namely (7.31)
t ; t 2 0, z E ElE > 0, is a family of Markov jump processes in where ~ “ (z), the series scheme defined by the generators F , ( z ) c ~ ( u=) E
+
[V(U W) - p ( ~ ) ] r ~x()d, z~ E; E ,
- ~ L
(7.32)
d
switched by the semi-Markov process z ( t ) , t 1 0, defined on a standard state space ( E ,E ) by the semi-Markov kernel Q ( z ,B , t ) = P ( z ,B)F,(t), z E ElB E E , t 2 0 , which defines the associated Markov renewal process x,,
T,,
~ , e , +I ~ t I 2, = E B 1 z, = z)p(en+lI t I z,
(7.33)
n20:
~ ( z , ~ =, P(Z,+~ t ) E = IP(%,+~
=
(7.34)
Remark 7.4. Here we do not consider drift for processes $(t; z),t 2 0 , as was the case in diffusion approximation (see Section 4.1), since only random jumps can be transformed into jumps of limit Poisson processes. Let the following conditions hold.
C l : The switching semi-Markov process z ( t ) , t 2 0 , is uniformly ergodic with the stationary distribution: 4 d z ) = p(dz)m(z)/m,
7.2. POISSON APPROXIMATION SCHEME
p(B) =
/
E
229
p(dz)P(z,B), P(E) = 1.
C2: Approximation of the mean jumps: (7.35)
and (7.36) and a ( z ) ,c(z) are bounded, that is, Ia(z)I I a < +m, Ic(z)1 I c < +m. C3: Poisson approximation condition
for all g E C3(Wd),and the kernel rg(z)is bounded for all g E C3(Rd), that is,
The negligible terms in (7.35)-(7.37) satisfy the condition (7.38) C4: Uniform square-integrability
where the kernel r ( d v ;z) is defined on the measuredetermining class C3(Wd)by the relation
C5: CramBr’s condition
Now, we get the following result.
CHAPTER 7.POISSON APPROXIMATION
230
Theorem 7.3 Under Assumptions Cl-C5, the additive functional (7.31) converges weakly to the Markov process <‘(t), t 2 0, defined by the generator
where
2=
r(dz)a(s),
(7.40)
and ?(dv)
s, +1
=
.n(dz)I’(dv; z).
(7.41)
The generator of the limit process can also be represented as follows
Fcp(U) = Zov’(u)
Rd
+
[cp(u ). - cp(u)lr^(dv).
(7.42)
The first term of the sum in (7.42) determines the deterministic drift and the second one gives the representation of the generator of the stochastic part of the limit process. It is worth noticing that under the Poisson approximation condition C3, the small jumps of the initial process characterized by the function a(.) in Condition C2, are transformed into a deterministic drift with velocity 2 which is the average value of a(.) with respect t o the stationary distribution of the switching ergodic semi-Markov process. Due t o both the representations (7.39) and (7.42) of the limit generator, and the approximation conditions C2 and C3, the small jumps of the initial functional are transformed into the deterministic drift U o ( t ) = Got, where A
A
h
a0 = a - b,
h
b :=
ld
vf;(dv).
(7.43)
The big jumps of the initial functional (7.31) are distributed following the averaged distribution function
F ( d v ) := F ( d v ) / F ( R d ) ,
(7.44)
with the intensity of jump moments := ?(Titd).The limit Markov process has the representation [‘(t) = U0(t)+C0(t),where the jump Markov process
7.3. SEMIMA RTINGA LE CHAR A CTERIZA TION
231
co(t)has the following generator
7.3
Semimartingale Characterization
The stochastic systems in Poisson approximation scheme are here considered as additive semimartingales characterized by their predictable characteristics 70 (see Section 1.4). The limit predictable characteristics determine the limit semimartingale for the corresponding stochastic system. So, the limit theorems for semimartingales are used 70 (see Appendix B). Since Theorem 7.1 is a particular case of Theorem 7.2, we will prove only the latter one. The setting for proving convergence in Theorem 7.2 is the following. Consider a family of switching Markov processes
in the series scheme with a small series parameter E > 0, on the product space Rd x El defined by the generators L", E > 0. The domains of definition D(lLE)are supposed t o be dense in the space C(Rd x E ) of real-valued, bounded, continuous functions cp(u,x), u E Rdl x E E , with sup-norm
llvll = SUPUEJRd, Z E E 14% .)I.
c(t)
takes values in the Euclidean space The first switched component Rd, d 2 1. The second switching Markov component x " ( t / ~is) defined on the standard state space ( E ,E ) by the generator
in perturbed form, with the kernel:
t 2 0, is considered on the asympThe switched Markov process xc"(t), totic split state space (7.17). The merged state space V is defined by the merging function (7.24).
CHAPTER 7. POISSON APPROXIMATION
232
The limit Markov process
at),
q4, t 2 0,
(7.46)
is considered on the product space Rd x V and is defined by the generator IL,with domain D(L) dense in C(Rd x V ) . 7.3.1
Impulsive Processes as Semimartingales
Let 3; := a(z(s),0 I s 5 t ) ,t 2 0 , be the natural filtration of the Markov process z(t),t 2 0. Let us define also the filtration IFE = ( F f , t 2 0 ) , Ff := a ( z 6 ( s )a, ; ( z k ) , 0 5 s 5 t , k 5 v'(t)), and the discrete time filtration :=a(z;,CYg(Zk),O 5 S 5 t , k 5 n),n 2 0. The semimartingale characterization of the impulsive process with Markov switching (7.27) is given by the predictable characteristics as follows. Lemma 7.1 Under Assumptions A 1-A5, the predictable characteristics (B'(t),C E ( t ) , of the semimartingale
@i(t))
c
VE(t/&)
"(t) =
t10,
&(z;),
(7.47)
k=l
are defined as follows. The first predictable characteristic is VE(t/E)
B E @= )E
C
+
b ( ~ ; - ~ )e;(t), t 2 0,
k=l
where b(z)= P a ( z ) =
s,
P ( z ,d y ) a ( y ) , z
c
VE(tlE)
=E
P@z;-,(g)
E
E , and the predictable measure
+ O,.(t;9),
t 2 0,
(7.48)
k= 1
where The modified second characteristic is
(7.49)
where
7.3.SEMIMA RTINGALE CHARACTERIZATION
233
The continuous part of the second predictable characteristic is C,"(t)G 0. The negligible terms satisfy the following asymptotic conditions for every finite T > 0:
sup leE(t)l 3 0,
O
E + 0,
PROOF.Concerning the semimartingale (7.47) we have (see Theorem 11.3.11, in 70),
c
VE(tl&)
F ( t )=
k=l
E[ai(xi)1 F L J .
(7.51)
In particular
By Condition A3, we get:
Now, relation (7.51) becomes
where V'(tl&)
eE(t= ) E
C
eE(x;-,),
k=l
which is a negligible term satisfying (7.50). In the same way, as in (7.51), we get:
c k=l
c m;-l(d?
VE(t/&)
VE(t/E)
=
E [ g ( a ; ( 4 ) )I E - 1 1 =
k=l
234
CHAPTER 7. POISSON APPROXIMATION
where, by Condition A4, we have: @:(9)= W a E ( z ) ) l=
Jw g('LL)@j(du)= & [ @ & I ) + mJ)1.(7.52)
Thus the predictable measure @ J ( tis) represented by (7.48). By Condition A5, the modified second characteristic is represented as follows70:
k=l
0 Lemma 7.2
The coupled Markov Process
(7.53) characterised by the martingale
with respect t o the filtration F/.,t 2 0, and cp E D ( L & ) ,i s defined by the generator IL', which i s represented in explicit form as follows L"cp(u,x) = &-lQ
and in asymptotic form
where:
+ [Qi+ QoB'(z)] ~ ( uz),,
(7.54)
7.3. SEMIMARTINGALE CHARACTERIZATION
235
and
O'(x)cp(u,.) = & - y ( P ( U + E b ( x ) , . )
- cp(u,.)-Eb(x)cp:(u,.)].
0 PROOF.For the proof of this lemma, see Section 5.3.3. A standard calculus, via a stochastic perturbation problem, gives us the following result (see Section 4.5). Lemma 7.3
T h e generator
B ( t )=
I"
L of the limit coupled Markov process b(Z(s))ds, q t ) , t 2 0,
i s given by (7.56) where
The generator of the coupled Markov process (7.45) is represented in the following singular perturbed form LEv(u,z) = E
[Q" + Q M u , x),
- ~
(7.57)
where the operator Qt corresponds to the first switched component in (7.45).
PROOF OF THEOREM 7.2. The coupled Markov processes B,E(t),
Z"(t/&),
t 2 0,& > 0,
(7.58)
in the series scheme are defined by the generators (7.54) in Lemma 7.2. For p(u, w(z))E C:(R x V) the generator (7.54) is represented under the asymptotic form as
L"= E - ~ Q + [Qi + QoB(z)] + QoOb(z)
(7.59)
236
CHAPTER 7. POISSON APPROXIMATION
with, for
and the residual operator q x ) c p ( u ) = E-l"p(u
For cp(u) E
+Eqx)) - P(u) - E q 4 P y u ) l .
(7.60)
Ci(W) the following negligible condition holds lQE(x)cp(u)l5
Eb
llcp"II
-+
0,
E
(7.61)
0.
We will use a solution of the following singular perturbation problem given in Proposition 5.1 for the test functions cpE(u,x)= cp(.U,vb))
+ EPl(U,x),
(7.62)
that is,
+
= J L ~ EeE,
I L E ~ E
(7.63)
where @ = @(u,v) E C,"(R x V ) . The limit operator JL is given by the contracted operator
JL = 61 +Q2,
61 and QoB are defined by: h
where the contracted operators
61II= IIQ1II,
and
Q-I
= IIQoBII.
Condition A3 is fulfilled for the coupled Markov process (7.58) (see Appendix C). The limit generator h
Lv(u,v) = Qicp(., v) +%(v)cpd(u, a>,
satisfies the preliminary conditions of Theorem 6.4 (see Theorem 2 in 174). The limit generator lL given by relation (7.56) is obtained by a singular perturbation approach. Then, from Lemma 7.3, the following weak convergence, in Dwxv[O,CO),
-
(B"(t),4ZE(t/E)))
( B ( t ) , Z ( t ) ) ,E
-+
0,
takes place, where B ( t )= s,' b(f(s))ds is a continuous trajectory stochastic process.
7.3. SEMIMA RTINGA L E CHARACTERIZATION
237
Due to the fact that the process B ( t ) has continuous trajectories (as.), the weak convergence in Dw[O,cm) is equivalent t o uniform convergence in probability in Cw[O,00) on every finite time interval [0,TI 16, that is sup IBE(t)- B(t)I O
P
0,
&
--f
0.
In the same way, we obtain the weak convergence in Dw[O,oo) of the predictable measures, that is, for any fixed function g E C3(R), (a;@)
5 vt(g),
& -+
0.
(7.64)
The modified second characteristic C E ( t )given , by (7.49) converges t o A
C ( t )=
t
e(qS))d%
as stated in Theorem 7.2. All the conditions of the limit theorem, Theorem B.2 (see Appendix B) for semimartingales, are fulfilled. Due t o Condition A2, in particular, the square integrability Condition B.l holds true. The strong domination hypothesis is valid with the dominating function Ft = t F , for some constant F , due to the boundedness of all functions b(z), C ( x ) and QZ(g) for g E C3(IR) and the known inequality 60: E V " ( ~ / E )I ct, c > 0 (see Appendix C). Condition A5 implies the condition on big jumps for the last predictable measure of Theorem B.2 (Appendix B). Conditions (iv) and (v) of Theorem B.2 are obviously fulfilled. The weak convergence of predictable characteristics (B.3), (B.4) and (B.5) (Theorem B.2, Appendix B) are proved in Lemmas 7.1-7.2. The last Condition (B.6) of Theorem B.2 holds due to the uniformly square integrable Condition A2. Hence, we get the weak convergence in Dw[O,cm) of the impulsive process p(t)to the process t o ( t )which is defined by the predictable characteristics, as stated in the above Theorem 7.2. 0 7.3.2
Stochastic Additive Functionals
For semi-Markov switching we need the compensating operator of the Markov renewal process. The additive functional (7.31) is first considered as an additive semimartingale defined by its predictable characteristics (see Section 1.4).
238
CHAPTER 7' . POISSON APPROXIMATION
PROOF OF THEOREM 7.3 In order to prove this theorem, the following lemmas are needed. Lemma 7.4 Under the assumptions of Theorem 7.3, the predictable characteristics (B"(t),C E ( t )y,' ( t ) ) of the semimartingale
C"(t)= 66 +
1 t
rl"(ds;Z ( S / E ) ) ,
0
are defined b y the following relations: - the predictable process is
- the modified second characteristic is
- the predictable measure is
where C ( x ) = JR,uu*I'(du;x), and 0;) 0:) and 0; satisfy the negligible condition IlO'll
4
0,
E --+
0.
PROOF. The statements of this lemma are almost direct consequences of 0 the factorization theorem (see Section 2.9). In what follows, we will study only the convergence of (B,E(t),C,E(t), y6(t;g ) ) , where:
B,E(t)=
I'
a(x(s/E))ds, t 2 0,
1
t
%(t;g) =
~ , ( W W s ,
t 2 0.
In the sequel the process A"(t) will denote one of the above predictable , characteristics B,E(t),C t ( t ) y,E(t).
7.3.SEMIMA RTINGAL E CHARACTERIZATION
239
The extended Markov renewal process is considered as a threecomponent Markov chain
A: =A~(T:), z,:
.;
where z; = Z€(T;), z"(t):= z ( t / e ) and
qe;,, I t I
=
T:, T;+~
=q
n 2 0, = 7;
+ ee;,
(7.65)
n 2 0, and
t )= q e , It).
We are using here the compensating operator of the extended Markov renewal process (7.65) (see Section 1.3.4). Let At(z), t 2 0, z E E , be a family of semigroups determined by the generators
(7.66)
N z M u ) = a(z)(P'(u).
Lemma 7.5 The compensating operator of the extended Markov renewal process (7.65) can be defined by the relation (see Section 2.8)
(7.67) PROOF.The proof of this lemma follows directly from Definition 2.12.
0
Lemma 7.6 The extended Markov renewal process (7.65) is characterized by the martingale n
CL~+l=(P(A~+l,z:+l,T~+l) - -pE1+1LEcp(Ai,ZiJL),
n
2 0. (7.68)
k=O
PROOF. The martingale property of (7.68) follows from the homogeneous Markov property
E[P(A:+,,z:+1,7:+1) I A; = 21, z; = 277; = t] = E[cp(Af,zf,~,E) I A; = u,zg = Z , T ; = t ] .
(7.69)
0 In what follows, the martingale property will be used for the process
<'(t) = cp ( A E ( 7 ; ( t ) ) , x E ( 7 ; (Jt )f)( t ) )
+
where ? - f ( t ):= ? - Y S ( t ) , v t ( t ):= v"(t) 1.
CHAPTER 7. POISSON APPROXIMATION
240
Note that the following relations hold:
C(Tn)= &+I,
72
L 0,
(7.71)
and
('(t) = [ " ( ~ ' ( t ) ) for , ~ " ( 5t )t < T ; ( t ) .
(7.72)
The random numbers v:(t) are stopping times for
3 : Lemma 7.7
= ~ ( A ; , Z ~ ; ,0T5; ;k
5 n),
TI
2 0.
The process (7.72) has the martingale property
IE[C'(t)- ("(s) I 3:] = 0, f o r 0 5 s < t 5 T , where
Ff
:= ~(A"(s),z"(s),T'(s); 05s5
t).
PROOF. The proof of this lemma is based on relation (7.71) and on the martingale property of the sequence &, n 2 0, defined in (7.68). 0
c(t),
Remark 7.5. Note that the process t 2 0 , is not a martingale since it is not 3;-adapted. The next lemma is basic in the proof of the compact containment condition for the additive functionals AE(t), t 2 0. (Compare with Lemma 3.2, Ch. 4, in 4 5 ) . Lemma 7.8
The process
(,"(t)= e-cs;(t)cp (A'(T;(~)),z'(T;(~)) ,TT(t))
-e - C T E
( S )L E
(7.73)
has the martingale property for every c E R, that is,
IE[C:(t)
- I,"(.) 1 3 f ] = 0,
for 0 5 s < t 5 T.
PROOF. The proof of this lemma is based on the representation
(7.74)
7.3. SEMIMA RTINGAL E CHARACTERIZATION
241
0 where the process ['(s), s 2 0, is defined by (7.70). The algorithm of Poisson approximation given in Theorem 7.3 provides the asymptotic representation of the compensating operator. Lemma 7.9 The compensating operator (7.67) applied to functions cp E C2(R) x B(E)has the asymptotic representation
+
+
IL'cp(u, Z)= &-'Q'p(u,Z) A(~)Pcp(u, Z) d3'(Z)Pcp(u,z), (7.75)
where:
/
Qd., Z) = d ~ )P ( z ,~ Y ) [ P (Y) . , - cp(., ~ 1 1 , q(z) := 1 / 4 ~ ) , E
The negligible operator is defined as follows
where
The asymptotic expansion (7.75) can be obtained as in Section 5.3.2.
PROOF OF THEOREM 7.3. Let us first consider a solution of singular perturbation problem for the compensating operator ILE using the asymptotic representation (7.75).
Corollary 7.2 According to Proposition 5.1, a solution of the singular perturbation problem
+ &(Pl(U,.)I
LE[du)
=b
(u)
+ %(Z)cp(u),
(7.76)
whose negligible term in (7.76) satisfies
Ile3~)PII
+
0,
is the limit operator which is defined as a contracted operator
ILq(u)= rIA(z)rIcp(u) = I-r&(u),
where A
A=
.rr(dz)A(z).
(7.77)
C H A P T E R 7. POISSON A P P R O X I M A T I O N
242
and given by
Lp(u) = Hence, we get from (7.66) Zp(u) = ;;Cp’(u),
ii :=
s,
7r(dz)a(z).
Now, the proof of the theorem is achieved in the same way as in Theorem 7.2, by using Lemmas 7.6, 6.4, and Theorem B.l in Appendix B. 0 In this chapter we have considered processes with independent increments, impulsive processes and stochastic additive functionals in order to simplify proofs. Processes with locally independent increments, considered in Chapters 3, 4 and 5, can be studied in the same way. A more general approximation by a LBvy process is given in Section 9.2.
Chapter 8
Applications I
In this chapter we present some applications of the previous results encountered in real application problems as the absorption time distributions, stationary phase merging, superposition of two independent renewal processes and semi-Markov random walks.
8.1
Absorption Times
Absorption times constitute a concept very useful in many applications: reliability, survival analysis, queuing theory, risk analysis, etc. The phase merging effect can be observed for an “almost ergodic” Markov process with an absorbing state. Let x“(t), t 2 0, E > 0, be a family of Markov jump processes on a measurable phase space Eo = E U (0) with absorbing state 0 (see Fig. 8.1), considered in a series scheme with the small series parameter E > 0, defined by the generator
on the Banach space B of real-valued bounded functions p(x) with supnorm llcpll := SUPZEE Icp(z)l.
@--?
1
-00
Fig. 8.1 Merging with absorption
The basic assumption is that the stochastic kernel P Ehas the following 243
CHAPTER 8. APPLICATIONS I
244
representation P ( 2 ,B ) = P ( z ,B ) - EPl(2,B ) ,
where the stochastic kernel P ( z ,B ) on E defines the support Markov chain z, n 2- 0, uniformly ergodic with stationary distribution p ( B ) , B E E . The perturbing kernel Pl(z,B ) provides the probability of absorption of the embedded Markov chain xk, n 2 0:
P(Zi+,= 0 I z;
= z) = P E ( z(, 0 ) ) = 1 - PE(z,E ) = &Pl(Z,E) =: E P ( Z ) .
(8.2)
The hitting time to absorption for the Markov process z"(t),t 2 0, is defined by
'C Lemma 8.1
:= inf{t : z " ( t )= 0).
The transition probabilities
@ & ( t ,Bz ;) = P(ZE(t/E)E B , E C E > t I z"(0) = z), satisfy the evolutionary equation
d -@&(t,z;B ) = E-lQ'@E(t,Z;B ) dt with initial value: @€(O, 2;B ) = l B ( 2 ) :=
{i::;;.
PROOF. By using the representation, on ( ~ ' ( 0 ) = z}, z E E , 1(Z"(t/E)E B,&CE> t ) = l ( Z " ( t / &E) El,&<€> t , & &> t ) +i(zE(tIE)E B , & C E > t,EeZ 5 t ) , we get the following relation
The first term is calculated in an obvious way
(8.3)
8.1. ABSORPTION TIMES
245
The second term is calculated by using the total probabilities formula: ~ ( z € ( t / EE ) B,ECE
> t,Ee, I t I x y o ) = z)
Now differentiating with respect to t leads to Equation (8.4).
0
Remark 8.1. We used the fact that the first sojourn time in state x E E is exponentially distributed with intensity q(z):
a(e, > t ) = e - q ( z ) t .
Remark 8.2. Equation (8.4) has the following singular perturbing form
where the perturbing operator Q1 acts as follows
s,
Q~(P(z)= 4%) 4 ( x ,~ Y ) P ( Y ) .
(8.6)
The generator
Qcp(x) = dx)
1 E
P ( x ,d ~ ) [ c p ( y-) c p ( ~ ) l ,
defines the support Markov process x ( t ) , t 2 0, on ( E l&), uniformly ergodic with stationary distribution
74dxMz) = qddx),
Q=
s
ddx)q(x).
The problem of singular perturbation considered in Chapter 5, can be applied for constructing a solution of Equation (8.5). The formal scheme of the asymptotic solution of Equation (8.5) is the following. Let us consider the following asymptotic representation for a solution of Equation (8.5)
@€(t, x) = @(t)+ E@l(t,x) + EO&(t,x),
(8.7)
CHAPTER 8. APPLICATIONS I
246
with negligible term Q,(t, z) such that
l@,(t,z)I
--f
0,
E -+
0.
Substituting (8.7) in (8.5) and comparing the terms with the same degrees in E , yields the following equations:
Q@(t> = 0,
(8.8)
d Q@l(t,z)= z@(t) - C?i@(t).
(8.9)
The first equation (8.8) is satisfied evidently because, by assumption,
@ ( t does ) not depend on z. The second equation (8.9) is valid if the following solvability condition is satisfied (see Section 1.6),
[it
(8.10)
II -@(t) - Ql@(t)] = 0, or, in another form
1
[it
II -@(t) - &@(t) where the contracted operator 5.2)
(8.11)
= 0,
01 is determined by the relation (see Section
QirI = IIQlrI.
The projector
(8.12)
II is defined by the relation
IIcp(z) = L * ( d z ) p ( z ) = @l(z)= 1, z
E
E.
Calculation in (8.12) by using (8.6) gives the following
where
is the stationarp averaged absorbing probability.
8.1. ABSORPTION TIMES
247
Now, Equation (8.10) can be written d
-@(t) + A@(t) = 0 , A dt
= 45.
(8.13)
Its initial value has to be chosen as follows Q(0) = rI@€(O,
x) = rIl,(x)
=r(B).
(8.14)
The solution of Equation (8.13) under initial condition (8.14) can be represented as follows Q ( t )= r(B)e-"t,
that is the limit representation of the transition probabilities (8.3) lim P ( x E ( t / E )E
B,E<'
E'O
> t I ~ " ( 0= ) x) = r(B)e-"t.
This scheme can be formally verified (see for instance '16). In the next section this result will be obtained as a corollary in a more general scheme. The following result is obvious. CoroIIary 8.1
T h e following limits hold:
lim P ( z " ( t / ~E) B
E+O
I
>t/E)
lim P(E$ > t ) = e P A t , A
E'O
=r(B),
= 45.
It is worth noticing that the limit conditional probability is the so-called quasi-stationary di~tribution.'~~ Heuristic principle. Corollary 8.1 can be interpreted as the heuristic principle of reliability for almost ergodic stochastic systems with stationary distribution 7r(B),B E E , on the set E of working states. 1. The stopping time has exponential distribution
<
P(C> t ) = e-ht, 2. The intensity of the stopping time is =
where
45,
t 2 0.
CHAPTER 8. APPLICATIONS 1
248
is the stationary intensity of sojourn times in working states,
is the stationary probability of absorption, and p(x) are the probabilities of absorption in state x E E . D Example 8.1. Consider a three state Markov process, Eo = {0,1,2}, with generator matrix
The transition matrix of the embedded Markov chain is
(l1:& 1 ) =(EBZ).E(;:l-:). -0 0
PE=
&
0
0
1-&
P
Pl
Now, for the ergodic process x ( t ) ,t 2 0, taking values in E = {1,2}, and generator Q, we have .rr = &-). For the ergodic embedded Markov chain xn,n 2 0,we have p = (1/2,1/2). Thus, since we have p(1) = --Pl(l,E) = 1 and p(2) = -P1(2,E) = 1, the stoppage probability is p = 1. On the other hand, we have q(1) = A, and q(2) = p. Hence
(6,
and
The limit of the distribution of the normalized absorption time is
IF'(< > t ) = exp(-At).
8.2. STATIONARY PHASE MERGING
249
Remark 8.3. In order to obtain Equation (8.11) for the limit term @(t) in the asymptotic representation (8.7) we have to write Equation (8.9) for the next asymptotic term @ l ( tx) , and use the solvability condition for this equation. This is the main scheme in the problem of singular perturbation approach (see Chapter 5). 8.2
Stationary Phase Merging
A surprising possibility of the phase merging scheme is that the simplification model can be constructed for the ergodic semi-Markov process x ( t ) ,t 2 0, given by the semi-Markov kernel
where x E E , B E E , t 2 0, and xn, r,, n 2 0, is the Markov renewal process associated with the semi-Markov process x ( t ) ,t 2 0, (see Section 1.3). The main assumption is that the semi-Markov process x ( t ) ,t 2 0, has a unique stationary distribution .rr(B),B E E. Let p ( B ) ,B E E , be the stationary distribution of the embedded Markov chain xn, n 2 0. As we have already point out, these two distributions are connected by the following relations:
Let the split of phase space:
be given such that the inequality holds
p(Ek) > 0,
15 k 5 N .
(8.17)
Introduce the merging function
Definition 8.1 The stationary phase merging process is a Markov renewal process 2,,&,n 1 0, on the merged phase space E = {l,.,.,N}, A
A
CHAPTER 8. APPLICATIONS 1
250
defined by the merged semi-Markov matrix
6 ( t )= [6kr(t);1 _< k , r 5 N], given by the transition probabilities A
&^kr(t) = p(Zn+1 = T , en+1 I t I 2, = k) =
L.
p(dx)Q(x,4 - 7 t)/p(Ek), 1 I k , r 5 N*
(8.18)
Particularly, the transition probabilities of the embedded Markov chain Zn,n 2 0 , are given by the relation p^kr := p(Zn+l = r
I 2,
=k)
The distribution functions of the merged sojourn times are represented
as follows Fk(t)= qe,+l 5 t h
1 zn = L)
The problem is to determine how the Markov renewal process 2,, ? ,, n 2 0 , is associated with the ergodic Markov renewal process xn,T,, n 2 0. The solution of this problem is formulated as follows. First define the restriction of the embedded Markov chain xn,n 2 0, t o the set Ek, which is a Markov chain also denoted by x y g ) m , 2 0, where
v g ) := inf{n > vm-l (k)
: z, E & } ,
v r ) = 0, m L 1,
are the observed jump times into Ek.
Theorem 8.1 (Stationary phase merging 6, Under the ergodicity condition for the embedded Markov chain xn,n 2 0 , and the additional condition (8.17) the weak convergence holds lim p(xyg)+lE Er,evg)+lI t ) = Qkr(t), 1 I k,r 5
m+cx
Especially,
N. (8.21)
251
8.2. STATIONARY PHASE MERGING
Remark 8.4. Introduce the merged process
iqt) := w(x(t)), t 2 0,
(8.22)
A
on the merged phase space E = (1, ..., N } , and define its renewal moments
Consider now the two-component process
The statement of Theorem 8.1 means that the weak convergence of
-
-
-~
On := T~ - ~
Zn,
- 1 ,n
2 1,
(8.23)
h
to the Markov renewal process Zn, On, n 2 0, takes place as follows h
I
(?n+mr en+m)
==+
(Zn,e n ) , m 4 00.
That is, in the steady-state regime, the merged process (8.22) can be considered as being associated with the stationary merged MRP Zn,Fn,n L 0.
PROOF OF THEOREM 8.1. The proof is based on considering the two component ergodic Markov chain
on the phase space E x E , with the stationary distribution
ij(%
dy) = p(dz)P(z,dY).
(8.25)
Set Ak = Ek x E. The hitting times':Y of the set Ek for the Markov chain zn,n 2 0, are the hitting times of the set Ak for the Markov chain Cn,n 2 0. The crucial property is that the thinned sequence Cvc),m2 0, is the ergodic Markov chain on the set Ak, with the stationary distribution
From (8.25) and (8.26) this yields
CHAPTER 8. APPLICATIONS I
252
Now the ergodicity condition gives:
= Pkr .
Taking into account the Markov property of renewal times ve', m 2 1, we calculate
WvpflE =L
with I'm(&) Hence
- t)
ET,evg,k)+l
P(z,,g)+lE
I t I xvg)= z ) f ' m ( d z ) ,
ET, e,,g)+l
k
= P(~,,gnk) E dz).
P(zvg,k)+l E ET,evg,k)+l< t)=
Er, t)f'm(dz),
Q(z1
L
(8.28)
k
but the Markov chain x,,:), m 2 0, is ergodic on the set Ek with stationary distribution p(dx)/p(Ek). 0 Passing to the limit in (8.28) we get (8.21). It is worth noticing that the merged embedded Markov chain &, n 2 0, has, in general, a virtual jumps, which can be extracted in the well-known way. The transition probabilities of the merged embedded Markov chain .",, n 2 0, without virtual passages are represented as follows
The stationary phase merging principle introduced in Theorem 8.1 gives important hints for simplifying the analysis of semi-Markov systems. The Markov renewal process with the ergodic embedded Markov chain can be merged for any arbitrary split phase space (8.16) which satisfies Condition (8.17). The merged process will again be a Markov renewal process given by the semi-Markov kernel calculated by (8.18). As usual, the split phase space (8.16) is considered under some technical arguments, for example, states connected by some criterion (see Section 8.3).
8.3. SUPERPOSITION OF TWO RENEWAL PROCESSES
253
The semi-Markov model of stochastic system can be constructed by expanding the physical (technical) phase space up t o the “semi-Markov” phase states. In other words, the technical states yield a natural split of the semi-Markov phase space (see Section 8.3).
8.3
Superposition of Two Renewal Processes
Let us consider two independent renewal processes, say r1 and r 2 ,given by (see Fig. 8.2),
c.;, n
r: =
n 2 0, i = 1,2,
(8.29)
k=O
where a;, k 2 0, i = 1 , 2 are i.i.d. positive random variables, with distribution functions
Fi(t) := P(o; 5 t ) , Fi(0) = 0,
k 2 0, i = 1,2.
(8.30)
The counting renewal processes are defined by the relation vi(t) := max{n 1 o : T: 5 t } , t 2 0.
(8.31)
The superposition of the two independent renewal processes (8.29) is defined by
v ( t ) := V l ( t )
+ vz(t),
t L 0.
(8.32)
Denote by Tn, n L 0 , the renewal moments of the superposed process. Certainly, the counting process (8.32) is a renewal process only in the case of exponential distributions (8.30). In that case, the three processes are homogeneous Poisson processes. Nevertheless, the superposition (8.32) can be described by the Markov renewal process xnrrn,n 2 0, on the phase space E = El U E2, Ei = ((2,s): z 10 } , i = 1,2, with the following formulas for sojourn times on+1 := Tn+1 - 7 7 x 7 n 2 0, 8: = a’ A X .
The simplest way t o explain this formula is t o consider Fig. 8.2.
(8.33)
CHAPTER 8. APPLICATIONS I
254
Fig. 8.2
Semi-Markov random walk
In order to fix the state of the embedded Markov chain z,,n 2 0, we have to introduce a residual time z from one renewal time 7-A to the next nearly renewal time which can be if 5 z, or 7: if > Z. So, renewal moments T,, n 2 0, can be described by the Markov chain x,, Cn, n 2 0, with values in E , and which transition probabilities are given in the matrix (8.34)
where: Fl(Z
- dy) := P(Z,+l = 1,G + 1 E dy I z, = 1,Cn = S),
F ~ (x d y ) := P(z,+~ = 2,
E dy
I Z,
= 2, Cn = z).
Here the transition ((1, z), (1,dy)) means that a’ E 2 - dy, that is, z - y < a1 5 z - y dy. Similar interpretation holds for the other transitions. The particularity of the Markov chain zn,C,,n 2 0, is that it has a stationary distribution determined by
+
pl(dz) = ~ F , ( z ) d z , p 2 ( d ~ = ) ~Fl(~)dz. where a = l/(al
+ a2), ai = Ed,i = 1,2.
(8.35)
8.3. SUPERPOSITION OF TWO RENEWAL PROCESSES
255
It is worth noticing that the densities (8.35) can be defined by the stationary residual times a'* according to the renewal theorem: p1(dz) = plf;(z)dz, p2(dz) = pzfi*(z)dz,
where:
f:(z) = Fi(z)/ai. The semi-Markov kernel of the Markov renewal process z n , ~ n , 2 n 0, can be calculated starting from (8.33) as follows. Set Qij(z, dy, t ) instead of Q((i,z),(j,dy),t). We have:
+
QIz(x,dy,t) = P(al> x , a l E z dy,Oi = P(a1 E z+dy,z I t) = Fl(Z
It )
+ dY)l(,lt),
and, similarly,
Q21(2, dy, t ) = F2(z + dy)l(z
and, similarly, Q22(z,
dy,t) = F2(z - dy)l(xlt+y)*
In particular, we need to calculate in the stationary phase merging theorem
Qiz(2, Ez, t ) = Fl(z)l(Z
(8.36)
Qzi(2, El, t ) = Fz(z)l(Z
Z
= 1,2.
Now, we can construct the stationary phase merging Markov renewal process describing the superposition of two independent renewal processes.
CHAPTER 8. APPLICATIONS 1
256
Proposition 8.1 (lo The o stationary ) phase merging Markov renewal process Zn, n 1 0, o n the phase merged space = { 1,2} is defined by the equation for the sojourn times in states
Cn,
el = a1A a2*, e2 = ( ~ l *A a2, A
where the upper index
h
u*’’
P(aZ* E d x ) = f : ( z ) d x ,
(8.37)
means the stationary residual tame, that is,
f:(z)
= Fi(z)qi, qi := l/ai,
i = 1,2.
PROOF. The semi-Markov kernel for the stationary merged Markov renewal process is calculated by using representation (8.36) and the stationary distributions (8.35). According t o (8.18), we get:
(8.38)
where, by definition,
F ( t ) :=
l-
F~(x)F~(z)~x.
Similarly, (8.39)
&iZ(t)= &i(t)- & i j ( t ) , with:
-* F j ( t ) := P ( d * > t ) = qj
4
00
Fj(S)dS,
j =2,l.
Next:
(8.40)
8.3. SUPERPOSITION OF T W O RENEWAL PROCESSES
257
Particularly, the transition probabilities of the embedded Markov chain
&, n 2 0, are represented as follows F12
=Q
1 2 ( + ~ )
= 424,
Fzi = Q
2 1 ( + ~ ) = qi4,
and, obviously, F12
= 1- 5 2 1 ,
522 = 1 - 3 2 1 .
Verification that the semi-Markov kernel o i j ( t ) , z , j = 1,2, calculated in (8.37), of Proposition 8.1, coincides with that calculated in formulas (8.38)-(8.40),can be easily obtained in the backward way. Let us calculate the semi-Markov kernel of the Markov renewal process given by the sojourn times (8.37): &(t)
5 t , a 1 > a2*) = P(a2* 5 t , a 2 * < a1) = P(&
= q2
I”
772(4771(4dx
=: Q2F(t),
that is exactly (8.38). Next:
&(t)
L t,a2* > a l ) = P(a1 5 t , a1 < a2*) = P(&
1 t
= q2
772(z)dFdz)
t
= Q2
F2(3$71(Z)dZ,
that is exactly (8.40). 0 Comparing the stochastic representations (8.33) and (8.37) for the initial Markov renewal process z, On,n 2 0, and the stationary merged Markov renewal process Z,,e^,,n 2 0 , we can conclude that going from (8.35) to (8.37) is very simple. The residual time in (8.35) is replaced by the stationarily distributed residual time a*. That constitutes the heuristic principle of stationary phase merging scheme.
CHAPTER 8. APPLICATIONS I
258
In the superposition of Markov renewal processes the residual times are transformed into the stationary residual times.
8.4
Semi-Markov Random Walks
The semi-Markov random walk (SMRW) is defined and the average, diffusion and Poisson approximation results are presented here. The SMRWs, obtained by the superposition of two independent renewal processes constitute a special field in the theory of semi-Markov processes. 8.4.1
Introduction
Let v+(t),t 2 0 and v- ( t ) ,t 2 0, be the counting processes of the two renewal processes defined as follows (see Section 8.3):
,
n
v*(t) = max{n :
Ca: 5 t ) ,
t 2 0.
(8.41)
k=l
The time intervals a:, Ic 2 1, are jointly independent and are given by the distribution functions
P*(t) = P ( a ; 5 t ) , t 2 0. The superposition of the two renewal processes is given by
v ( t )= v+(t)+ v - ( t ) , t 2 0.
(8.42)
The SMRW is defined by the following sums
(8.43) r=l
The jumps p,", r ution functions
r=l
2 1, are jointly independent and are given by the distribG*(u) = P(p,' 5 u), u 2 0.
This kind of processes are interesting for various applied problems. They model the number of customers in the queue systems with given distribution functions of arrival and service times, and they can be interpreted
8.4. SEMI-MARKOV RANDOM WALKS
259
as storage processes with arbitrary distribution of time intervals between arrivals and departures of goods. The process (8.43) can also be considered as a mathematical model of risk with arbitrary distributions of intervals between times of payment of claims and the premium income. The superposition of two renewal processes (8.41) can be described by the counting process v(t) = max{n : T, I t } , for the Markov renewal process x , , T , , ~ 2 0, on the phase space E = E+UE-, Ef = {(k,~) :x 2 0}, with the following formula for sojourn times (&+I := T ~ + -I T ~ , T I2 0).
e,f =
AX.
The transition probabilities of the embedded Markov chain xn,n 2. 0, is defined by the matrix (8.44) The stationary distribution of embedded Markov chain has the density P*(t) = m t > / P l
P=P+ f p - ,
P+. =Elf ,
(8.45)
where, as usual, P(t):= 1 - P ( t ) . T h e embedded SMRW = <(r,),n 2 0, is defined by the relation
cn
<,+I=
G + l(z,+l
= +)P,',,
- 1(~,+1=
-)P;+~,
n 2. 0,
where 1(A) is the indicator of a random event A. The SMRW (8.43) can be defined as follows: [ ( t )= <,,v(t), t 2 0. 8.4.2
The algorithms of approximation f o r SMRW
The algorithms of approximation for the SMRW (8.43) in the series scheme with the small series parameter E -+ 0 ( E > 0) are considered here. The average, diffusion and Poisson approximation schemes are investigated. The approximation algorithms are constructed by using the asymptotic expansion of the compensating operator, for the extended Markov renewal process < , , x , , T ~ , ~ 2 0, and a solution of the singular perturbation problem for the generator of associated Markov process. Introduce the notation for the first moments:
CHAPTER 8. APPLICATIONS 1
260
The average drift per unit time for SMRW (8.43) is defined by the value b = b+/p+ - b-/p-.
Note that we have by the limit theorem for renewal processes
t 4 00.
IEv*(t)/t -+ l/p*,
Hence, the mean value of both sums on the right hand side of (8.43) have the following asymptotic evaluations
"* ( t ) lE
c P,'lt
t
b*/P*,
--+
00.
r=l
Hence, the average drift per unit time for SMRW (8.43) is evaluated by
-
lE<(t) u + bt,
t
+ 00.
The average algorithm. This algorithm is obtained for SMRWs in the following series scheme
c a - "-c ].
"+
.,t)=u+E[
(tie)
(t/E)
k= 1
7
t2o.
(8.46)
k=l
Theorem 8.2 (Average) Under the condition b # 0 and the finiteness of the second moments of ,@, k 2 1, the weak convergence
* ( o ( t ) = + bt,
&(t)
'1~
E + 0,
takes place. The algorithm of Poisson approximation. This algorithm is obtained for SMRWs in the following series scheme
L
k=l
k=l
The distribution function G",u) of the random variables ,BE'+,lc 2 1, satisfies the following Poisson approximation conditions:
PA1:
26 1
8.4. SEMI-MARKOV RANDOM WALKS
on the measure determining class C3(IR+); PA2:
Jd"
vG",dv)
= E[b+
+ Oil,
Jdm
v2G",dv)
= E[C+
+ OE],
The negligible terms Oi,Og, OE, satisfy the condition
Theorem 8.3 (Poisson Approximation) Under Conditions PA1 and P A 2 and the finiteness of the third moments of Pi'+, k 2 1, the weak convergence vow
c"(t)===+('(t) = u
+ bot + C p i , t 2 0 , k=l
takes place. The distribution function of the jumps pi, k 2 1, is determined by
The intensity of the counting Poisson process u o ( t ) ,t 2 0 , is defined by A = g+/P+. The deterministic drift bo of the compound Poisson process co(t),t2 0 , is defined b y (8.47)
Diffusion approximation scheme. This scheme is obtained for SMRWs in the following series scheme
(Diffusion approximation) Under the balance condition b = 0, and the finiteness of the third moments of @ , k 2 1, the weak convergence
Theorem 8.4
+
?;(t) ==+ &(t)= 21 ow(t),
E
-4
0,
CHAPTER 8. APPLICATIONS I
262
takes place. The process w ( t ) ,t 2 0 , as the standard Wiener process. The variance is determined by the formulas: CJ2
=2 4
OO(Z)
+a;,
CJ;
= c+/p+
+ c-/p-,
= &(Z)ROZO(Z).
The vectors &,(x) are defined by
-
bo(x) = (b+P+(x)- b - P + ( x ) , b + P - ( x ) - b-P-(z)).
The potential operator Ro for the generator Q = q ( z ) [ P- I ] of the embedded Markov chain x,, n 2 0 , is defined by the relation
where the projector I3 acts as follows on the real-valued vector-function cp(x)= (cp+(x),cp-(z))1
8.4.3
Compensating Operators
The compensating operator on the real-valued vector test functions cp(u,x) = (cp+(u1x),cp-(ulx))is given by the relation
.>
b(% = q(x)IPG - IIcp(%x), where the operator P is defined by the transition probabilities (8.44),and the integral operator G is defined by
The product operator q is given by
where
8.4. SEMI-MARKOV RANDOM WALKS
263
Note that the following relation holds:
=
Jdz +
P* ( d t )
Jdm
&(dt)
G* (dv)cp*(u f 21, x - t )
Jlu"
G+(dv)cp,(u T v, t - x).
So, the operator PG - I is the generator of the embedded Markov chain In,xn, n 2 0. The approximation algorithms, introduced in Theorems 8.2-8.4, are constructed by using the asymptotic expansion for the compensating operator in the corresponding series scheme.
The compensating operator in the average scheme (8.46) is represented in the following form LEV = &-lq[PG'- I]cp,
(8.49)
where
Under the condition of Theorem 8.2, the compensating operator (8.49) o n the test function cp(u,.) E Cz((R+)has the following asymptotic expansion
Lemma 8.2
ILEcp(u, Z) = C 1 Q p ( . ,Z)
+ PBcpL(u,X) -I-&Oicp(u,z),
(8.50)
where I5 is the product operator
s=[b+ 0 -b-O ] Note that the remaining operator 0: can be represented in an explicit form.
The compensating operator in the Poisson approximation scheme (8.47) is represented in the following form LEcp(u,z) = &-1q[PGE- I]cp(u,x),
(8.51)
264
CHAPTER 8. APPLICATIONS I
where the integral operator GEis given by
where:
1 1
00
qP(u)=
cp(u
+ v)G;(dv),
00
GE_(p(u)=
(p(u - ev)G-(dv).
Lemma 8.3 Under the conditions of Theorem 8.3, the compensating operator (8.51) on the test function (p(u,-)E C3(R+)has the following asymptotic representation IL'cp(u,
+
2) = ~ - ~ Q ( px) ( u , P[Bv;(u,
x) + G + ( P (x)] ~ , -I-&@;CP(~, 51, (8.52)
where
The remaining operator 0; can be represented in an explicit form.
The compensating operator in the diffusion approximation scheme (8.48) is represented in the following form (8.53)
LEV = e-2q(x)[PGE - I]p,
where the matrix integral operator 6"is the same as in the average scheme.
Lemma 8.4 Under the conditions of Theorem 8.4, the compensating operator (8.53) on the test function p(u, .) E C3(R) has the asymptotic representation
+
+2
1
+
I L ~ ~ Z) ( u=, E - ~ Q ~ (X)u , E - ~ P B ~ ~ Z) : ( u ,-PC:CP"(U, Z) @p(u, s), (8.54)
with the product matrix operator C =
p; 3.
The remaining operator 0; can be represented in an explicit form.
8.4. SEMI- M A R KOV RANDOM W A L K S
8.4.4
265
The singular perturbation p r o b l em
T h e average operator IL for a SMRW in the average series scheme (8.50) is calculated by using a solution of the singular perturbation problem for the truncated operator
IL; = E - ~ Q+ PB. According to Proposition 5.1, the average operator IL is determined in the asymptotic representation of the compensating operator on the perturbed test function
where ILp(u) = IIPBnpyU). After some computations we obtain ILp(u) = b p ' ( ~ )b,
The generator
= rIPBII.
IL defines the dynamical system = b,
t 2 0, (8.55)
Co(0) = 21.
Hence, Co(t) = u
+ bt,
t 2 0,
which is the limit process in the average scheme.
The Poisson limit operator in the series scheme (8.52) is calculated by using a solution of the singular perturbation problem for the truncated operator
According to Proposition 5.1 the limit operator is calculated by
ILp(u) = rIIPBrIpf(u)
+ rIGrIp(u).
CHAPTER 8. APPLICATIONS I
266
Hence
or, in another form, by using bo in (8.47)
+
ILp(u) = b O p ’ ( ~ ) A
Irn +
[ p ( ~V) - p(u)]Go(dv).
This is the generator of the compound Poisson process with drift in Theorem 8.3.
The limit diffusion process in the approximation scheme (8.48) is defined by the generator calculated by using a solution of the perturbation problem for the truncated operator in Lemma 8.4
According to Proposition 5.2 the limit operator IL is determined in the asymptotic representation of the action of lL6 on the perturbed test function
du)+ E(P1(u, ).
The operator
+ E2V2(U1).
IL has the following representation Lp(u) = [l-IPIB&PBrI
1 + -rIPCl-I]cp”(u). 2
After some computation we get
where the variance g2 is represented by the formulas in Theorem 8.4. This is the generator of the limit process t ( t )= u a m ( t ) in Theorem 8.4.
+
Martingale characterization of the Markov renewal process. The proof of Theorem 8.2-8.4 is based on the following martingale characterization of the Markov renewal process c;, x:, n 2 0.
8.4. SEMI-MARKOV RANDOM W A L K S
267
Lemma 8.5 T h e Markov renewal process <E,x;,n 2 0. is characterized by the martingale n
&+I
= (P(ch+i, .:+I)
ek+lLE(P(
- (P(% X ) - & k=l
See Section 1.3.5, Proposition 1.4. The concluding step of the proof of Theorems 8.2-8.4 follows some familiar procedures adapted to the switching semi-Markov processes from Chapter 6. 0 8.4.5
Stationary Phase Merging Scheme
The algorithm of the stationary phase merging scheme is realized by using the stationary distribution of the embedded Markov chain (8.45) and is based on the formulas given in Section 8.2. According to Section 8.2 the stationary phase merged superposition of two renewal processes is given in the merged phase space E = {+, -}, by the formulas for sojourn times
e;
(8.56)
= a* A a T * ,
where a** are the stationary remaining times with densities
P;(t)
= A,P*(t),
= l/P*.
(8.57)
The stationary merged superposition of two renewal processes on the merged phase space E with given sojourn times (8.56) can be interpreted as being in the stationary regime. The stationary merged IMC x;, n 2 0 is determined by the matrix of transition probabilities
[
P * = 1-P+ P+ p1-p-
1,
where
The stationary merged embedded SMRW is defined by the relation c+1 =
c: + 1(2;+, = +)P,',l
-
1K+,= -)Pi+,*
The stationary merged SMRW with continuous time can be defined as follows
CHAPTER 8. APPLICATIONS I
268
C * ( t > := C&),
t 2 0,
where v*(t) , t 2 0, is the counting process of renewal moments for the stationary merged superposition of two renewal processes. The algorithm of asymptotic approximation for the stationary merged SMRW can be formulated in a form analogous to the one presented in Section 8.4.2 by using the compensating operator of the extended Markov chain
VCp(u)= q*[P*G- J]Cp(u), where p(u) = (p+(u),p-(u)). The product operator is
The integral operator G is the same as in Section 8.4.3. The asymptotic representations of the compensating operator in the approximation schemes are obtained in a form similar to Sections 8.4.3, but are actually simpler. The asymptotic representations of the compensating operator can be realized in the same form as in Lemmas 8.1-8.4, with transition matrix P* instead of P . The average and Poisson approximation algorithms give the same result as in Theorems 8.2-8.3. In the diffusion approximation scheme the variance o* of the limit diffusion in Theorem 8.4 is defined as follows g*2
= 20:'
+ 022,
oT2 = b R J .
The potential operator Rt; is defined for the generator Q* -* -* b- = (b+, b-) b = b+/p+ - b - / p - ,
b; = f(q+b*
-
~+b,).
= q* [P*- I],and
Chapter 9
Applications I1
In this chapter we continue the presentation of applied topics as in Chapter 8. In particular, we give a diffusion approximation for birth-and-death processes applied to storage and repairable systems theory, and a LQvy approximation result for impulsive processes.
9.1 9.1.1
Birth and Death Processes and Repairable Systems
Introduction
The Markov renewal system with finite identical devices working independently with intensities of working and repairing times depending on the switching ergodic Markov process is considered here. The diffusion approximation by the Ornstein-Uhlenbeclc diffusion process when the number of devices tends to infinity is established. The Markov Renewal System with finite devices working independently was considered in 47 and is called the supplying energy system. The functioning of each device is described by the alternative renewal process with exponentially distributed working and repairing times with respective intensities X and p. In this section we consider the Markov renewal system with Markov switching in the diffusion approximation scheme when the number of devices tends t o infinity. Let ~ ( t t) 2 , 0 be an ergodic jump Markov process in a standard state space ( E , E )with generator Q acting as follows
269
CHAPTER 9. APPLICATIONS II
270
where q(z) is the intensity of jumps and P ( x , d y ) is a stochastic kernel defining the transition probabilities of jumps from the state x to the set of states dy. The stationary distribution x ( d z ) of the process z(t),t 2 0, defines the projector ll which acts as follows (94 where l(z) = 1 for all z E E . The main assumption is that the intensities A and p depend on the state of the switching Markov process z ( t ) as follows:
x = X(Z(t/&)),
p = p(z(t/e)),
&
:= l/&.
The states of the Markov renewal system are defined by the birth-anddeath process ~ “ ( tt)2, 0 , which describes the number of working devices at time t. The state space of the process v“(t)is the finite set En = {0,1, ...,n}. Let pn(u)be its stationary distribution. 9.1.2
Digusion Approximation
The diffusion approximation scheme for the Markov renewal system with Markov switching can be obtained for the corresponding normalized process. Let us introduce the following notation:
and p ( z ) = 1 - q ( 2 ) = X(z)/.(.).
Theorem 9.1
The normalized process [€
:= & V “ ( t ) - &-lq(z(t/&))
converges weakly, together with the stationary distribution, as E Ornstein- Uhlenbeck diflusion process [ ( t ) with the generator
ILq(u) = bcp”(u) - auq’(u)
(9.3) 4
0 , to the
(9.4)
9.1. BIRTH AND DEATH PROCESSES AND REPAIRABLE SYSTEMS
271
Note that the stationary distribution of the Ornstein-Uhlenbeck process with generator (9.4) has the density
Remark 9.1. The simple form of the stationary distribution of the limit process allows us t o use this distribution as an approximation for the stationary distribution of the Markov Renewal System and calculate the functionals as follows
1
f(U)Pn(U)dU
"
1
f(U)P(U)dU.
This estimation formula gives us an exact enough result when the number of devices n is large enough. The Markov renewal system can be considered with a finite number m < n of repairing tools. It means that the number of repaired devices of u,-(t) satisfies a boundary condition such as u-,
( t )=
n - u " ( t ) , if u " ( t ) 2 n - m if v'(t) < n - m
In the case where u"(t) < n - m, the n - m - u"(t) devices are waiting to service. For such Markov renewal systems the diffusion approximation takes another form 99. Theorem 9.2 The normalized process (9.3) under the boundary condition (9.6) for m = c P 2 [ q ( x )- ECO] converges weakly, together with the stationary distribution, as c 4 0 , to the diffusion process &,(t)with the generator
where
, ~(dz)X(x),po = JE ~ ( d z ) , u ( z )and , with A0 = J nq, 5 m.
co is a constant such that
CHAPTER 9. APPLICATIONS 11
272
Remark 9.2. The density of stationary distribution of the limit process [ ~ ( tcan ) be written
where K = [ p 1 ( ( - c o , ~ ] ) + p 2 ( ( ~ 0 3, ) ) ] - ~ , with pl(dx), pz(dz) the stationary distributions of the above diffusion process corresponding to the two branches of a0 (u) . This density of stationary distribution pn can be used in the optimization problem to choose the value of GJ by
with some (smooth) function f(u). D Example 9.1. Suppose that z(t),t 2 0, is a 2-state Markov process with generator matrix Q = ( q i j ; i , j = 1 , 2 ) and ~ ( i = ) pi, X ( i ) = X i , i = 1,2. Thena=nl(X1+~l)+n2(Xz+~~ b =) , Xo =nlX1+n2X2 and PO = ~ P I ~+ 2 ~ 2 As a numerical application for -qll = q12 = 0.1, q21 = 3 2 2 = 0.2, 2p1 = p2 = 0.2 and 2x1 = Xz = 0.02, we get a = 0.0333, b = 0.0083 and v = 0.25.
*+*,
9.1.3
Proofs of the Theorems
PROOF OF THEOREM 9.1. The proof is divided into three steps. As a first step we calculate the generator operator of the birth-anddeath process (z(t) for a fixed value of z. The state space of the process (l(t)is the following E, = {
~ = k ~
( -kE - ' ~ ( x ) ): 0 5 k 5 n}.
It is worth noticing that the number of working devices is
k
+
= n[q(z) E U ~ ] , 0
Hence, the number of repairing devices is
5 k I n.
9.1. BIRTH A N D DEATH PROCESSES A N D REPAIRABLE SYSTEMS
273
The main step is the calculation of the jump intensities of
+
The intensity of jumps from U k to U k E is determined by the intensity of the repairing time in the state ?& defined by the formula
where ,@, j 2 1, are independent and identically exponentially distributed random variables with intensity p(z). Hence the intensity of the repairing time is defined by the formula = np(x)[p(z)- &.k]*
a:(.k)
The intensity of jumps from U k to u k - E is determined by the intensity of working time in state U k defined as follows k
a p = A a; i= 1
where a;, i 2 1, are independent and identically exponentially distributed random variables with intensity X(x). Hence the intensity of jumps from ‘uk to 'ilk - E is defined as = np(”)[q(z)
a,(.k)
+&uk].
Further we shall use the next notation:
a,(u)
= a;(u)
+ u,(u)
= n[2b(z)f E C ( Z ) U ]
and
a:(.)
- a,(.)
= -&-1c(z)..
The next step is the calculation of the generator of the coupled Markov process C‘(t), z ( t / c ) ,t 2 0,
LEqJ(u, x) = [ E - Q where
A&(.)
+ AE(z)]cp(u,x)
is the generator operator of the process
follows
AE(.)cp(u) = a,+(u)cp(.
+
E)
f
a,(u)cp(. - €1 - az(.)d.).
acts as
274
CHAPTER 9. APPLICATIONS I1
Now we calculate the asymptotic representation of the generator AE(x) using the Taylor's expansion for a twice differentiable function q(u)
A"(z)Cp(u)= A(x)'p(u) + e'(z)(P('LL) where
and the operator P ( x ) satisfies the following asymptotic condition
on twice continuously differentiable functions cp(u) Now the martingale approach can be applied to complete the proof of theorem. Let us introduce the martingale
with respect to the natural filtration 3:' t 2 0 , generated by the process C E ( t )X, ( t / E ) , t L 0. The main step is the construction of an asymptotic representation of the integral term in the martingale (9.8) by choosing the test functions cpE
+ EVJ1(u,x),
P E ( %2 ) = 'p(u)
where cpl(u,x) is defined by the solution of Qcp l( u ,X)
= [L- N ~ ) I ' p ( u ) '
(9.9)
in which the operator A is determined by the relation (see Proposition 5.1)
ILl-I = rIA(2)l-I This relation provides the solvability condition by using Equation (9.9). The function cp(u) is chosen smooth enough. Here it is sufficient to consider a four times differentiable function. Applying the generator ILE to the test functions 'p" and taking into account Equation (9.9), we obtain the following representation of the martingale (9.8) rt
(9.10)
9.1. BIRTH AND DEATH PROCESSES AND REPAIRABLE SYSTEMS
275
where the last term in the sum satisfies the following condition
Now we can use the standard arguments to establish the compactness of the family of the processes E > 0 (see Chapter 6 ) . Hence the weak limit
-
c(t),
C"(t>
C(t),
E
+
0,
holds. The limit process C(t) is the solution of the following martingale problem cp(C(t))
-
b ( C ( s ) ) d s = Pt*
Thus, the process ( ( t ) , t 2 0, is the Ornstein-Uhlenbeck process with generator IL given in (9.4). Now, in order to get the weak convergence of the stationary distributions, we establish the stochastic boundedness of the processes CE(t)loo. For the Lyapounov function
with % > 0 and V1
>
soooe-'(Y)dy,
b(z) = -
a(u)du/P2,we have
Hence we get x E =+ xo. 0 PROOF OF THEOREM 9.2. The proof of this theorem follows the same lines as that of Theorem 9.1. In this case, relation (9.11) becomes (9.11)
Thus we have: a,'(.)
-a,(.)
=
{
-&-
1 a (x)u,
-E-"coX(z)
+ p(x).],
u > co u 5 co
CHAPTER 9. APPLICATIONS II
276
and
a,+(.)
+a,(.)
=
n[2b(z) - EC(Z)U], u>Co n[2b(z)- E(coX(x) p(x)u)],'LL I co
+
where C(II:)= X(x) - p(z). From these, we can proceed as previously.
LBvy Approximation of Impulsive Processes
9.2 9.2.1
Introduction
The impulsive processes considered here are switched by Markov processes (see Sections 2.9.1, 7.2.1, 7.2.2 and 7.3.1). Let us consider a family of random sequences a; (x), 5 = 1 , 2 , ...,II: E E , where E is a non-empty set, indexed by the small parameter E > 0, and a family of jump Markov processes zc"(t), t 2 0, with embedded Markov renewal process xi,T ; , k 2 0, and counting processes of jumps v"(t),t2 0. Thus, times T;, k 2 0, are jump times, xi := xC'(7;),and v'(t) := max{k 2 0 : 7; 5 t } . Define now the impulsive process as partial sums in a series scheme, with series parameter E > 0, by
c
v'(tla)
E&(t):=
a;(.;).
k=l
The limit LQvy process, obtained here, has been used directly in55 in order t o model the time of ruin via defective renewal equation. So, results of the present section can be used directly in order to take into account a more general real situation, and results of 55 can be used in order t o get ruin time probabilities for the limit LBvy process. Since L6vy processes are now standard, L6vy approximation is quite useful for analyzing complex systems (see, e.g. Moreover they are involved in many applications, e.g., risk theory, finance, queueing, physics, etc. For a background on L6vy process see, e.g. 137155).
13,155156.
Let ( E ,E ) be a standard state space. Let us consider an E-valued cadlag
9.2. L E V Y A P P R O X I M A T I O N OF IMPULSIVE PROCESSES
277
Markov jump process z ( t ) ,t 2 0, with generator Q , that is,
p ( z 7dy)[cp(y) - cp(z)l,
Qcp(z)= q ( z )
and z , , ~ ~ ,2n 0, the associated Markov renewal process to z ( t ) , t 2 0. The transition probability kernel of z, n 2 0, is P ( z ,B ) , z E E , B E €. Let ~ ( t ) ,2t 0, be the counting process of jumps of z(t),t 2 0, that is, Y ( t ) = Sup{’??2 0 : 7, 5 t } . We suppose here that the process z ( t ) ,t 2 0, is uniformly ergodic with stationary probability 7r(B),B E E. Thus the embedded Markov chain is uniformly ergodic too. Let p(B),B E E l denote the stationary probability measure of the embedded Markov chain x,,n 2 0. These two probability measures are related by the following relation
Define the projector ll by
where 1(z)= 1 for all z E E . Let us denote by Ro the potential operator defined by (see Section 1.6) RoQ
= Q&
=II
-
(9.12)
I.
Let E > 0 be a small parameter and define the family of Markov processes z E ( t ):= z(t/~~),t 2 0. We formulate here a new result of approximation by a L&y process of the following impulsive processes
c ai(zi),
v E( t / € 2 )
“ ( t ) := (5
+
t 2 O,& > 0.
(9.13)
k=l
For any E > 0, and any sequence Z k , k 2 1,of elements of El the random variables a;(&),Ic 1 1 are supposed to be independent. Let us denote by G: the distribution function of a i ( x ) ,that is,
G:(dv):= P ( a i ( z )E dv), Ic 2 O , E > 0,z
E E.
It is worth noticing that the coupled process F ( t ) , z e ( t ) 2 , t 0, is a
Markov additive process (see Section 2.5).
CHAPTER 9. APPLICATIONS I I
278
Let c(t),t 2 0 , be a LQvy process with characteristic exponent (cumulant) given by the Liwy-Khintchine formula 1
1
t
2
+
$(u) := -Eei"c(t) = ibu - - - a ~
/
[eius - 1 - iuzl{lul<1)1p(dz),
R
(9.14) where b E R, (T 2 0 , and the positive measure p , on R - {0}, such that l ( l A y 2 ) p ( d y ) < 00, is the Le'vy measure. The law of the process c(t) is completely determined by the function $(u), that is, from the triplet ( b , -a, p ) , called the characteristics of [ ( t ) . The infinitesimal generator IL of the LQvyprocess [ ( t ) ,with exponent function $(u) (9.14), is defined as follows (56113)
ILcp(z) = b c p W
1 + Z-a2d1(") +
s,
[dz+ Y) - cp(z) - cpl(")Yl{lvl
where cp is a twice continuously differentiable function which vanishes a t infinity. The functions cp' and ip" are first and second derivatives of cp, respectively.
9.2.2
Lkvvy Approximation Scheme
The results presented here concern the weak convergence of the R-valued impulsive processes cE(t), t 2 0 ,E > 0 , defined by (9.13). Generalization of these results t o the Rd case (d > 1) is straightforward. We will need the following assumptions.
L1: We suppose that the Markov process z ( t ) t, 2 0 , is uniformly ergodic. L2: Initial value condition supIE
5 c < 00.
E>O
L3: Approximation of the mean value
where functions bl and b are bounded.
L4: Approximation of the second moment
where the function c is bounded.
9.2. LEVY APPROXIMATION 0F.IMPULSIVE PROCESSES
279
L5: Poisson approximation condition
where g belongs to the class of functions C3(R), (see Section 7.2)). The above negligible terms, 6;(x),0; (x),:)6 (x),fulfill the following negligibility condition sup
le:(z)l
0,
E
-, 0.
xEE
L6: Balance condition r
Remark 9.3. Assumptions L3, L5, and L6 together split jumps into three parts. The first terms in L3, together with L6, give the diffusion component, the second term in L3 gives the deterministic drift, and L5 gives the jumps of the limit process. The balance condition L6 means that the values of order O ( E )are compensated in order to yield the additional diffusion part to the limit process. The following theorem states the main result of this section. Theorem 9.3 gence holds
Under Assumptions Ll-L6, the following weak conver-
C"(t)==-+
so(%
E
0,
provided that <; 5 Co(0), as E + 0. The limit process r o ( t ) , t2 0, is a L i v y process defined by the generator IL as follows
with u2 2 0, where:
280
CHAPTER 9. APPLICATIONS I1
G(dw) =
p(dz)G,(dv),
X
Go(dw) = G(dv)/G(R).
= qG(R),
Remark 9.4. The jump part of the above limit process t o ( t ) , t2 0, is a compound Poisson process, that is,
k=l
where v o ( t ) t, 2 0, is a Poisson process with intensity A, and a:, k L 1, are i.i.d. random variables with common distribution function Go(dv).
Remark 9.5. The characteristics of the above limit LQvy process are the drift coefficient b - bo, the diffusion coefficient c2,and the Lkvy measure G defined on Ro. In the case of a finite set of values for the r.v.s a;(z),we get the following result. D
Example 9.2. Let:
P(aE(z)= E U l ( 2 ) )
= po
P(a"(z)= E 2 U ( Z ) ) = 40,
- E2pl,
qo +po = 1,
and
P(ay"(z) = d ( z ) )= 2 p 1 . We suppose that the balance condition J, p ( d z ) a l ( z ) = 0 is fulfilled. Calculation of the first two moments gives:
b'(z) := E a E ( z )= &2[E-'al(z)po
+ [a(z)qo + d(z)pl]] +
O(E2),
9.2. L E V Y APPROXIMATION OF IMPULSIVE PROCESSES
28 1
Hence:
Assumption L5 gives:
and hence
Gz(d v) = dc(z) (dv)plNow, we get the characteristics of the limit operator in (9.15):
So, the deterministic drift is defined by
b - bo = q[aqo
+ (d - C)PlI.
The diffusion coefficient is
The Poisson part of limit process is determined by the generator
W ( u ) = X[(P('IL + c) - (P(u)I,
= qPl.
The increasing jumps ~ a l ( zare ) transformed into diffusion part with variance g 2 . The big jumps c(z) with small probability e2p1 are transformed into Poisson process. The small jumps &'a(z) are transformed into deterministic drift. Let us give now a result on the rate of convergence in the LBvy approximation scheme of Theorem 9.3.
CHAPTER 9. APPLICATIONS 11
282
Theorem 9.4 ity holds
Under assumptions of Theorem 9.3, the following inequal-
OltlT
IE[v(tE(t)) - (p(So(t))]l 5 ECT,
where cp E Cz(R), bounded twice continuously differentiable functions defined on R, and CT is a constant. 9.2.3
Proof of Theorems
Here we give the proofs of Theorems 9.3 and 9.4. The method of proof of Theorem 9.3 is as follows. We construct the compensating operator of the Markov additive process t E ( t )x"(t), , t 2 0 (Lemmas 9.1-9.2), and obtain the generator of the limit process (Lemma 9.3) by a singular perturbation technique.
PROOF OF THEOREM 9.3. Let us start by the construction of the generator of the Markov additive process given in the following lemma. Lemma 9.1 is given by
The generator IL" of the impulsive process
ILEcp(u,x) = E-24(x)
[J P ( x , d y )w/ -G",d+P(u +
217
r(t), x"(t),t 2 0,
Y ) - c p ( ~ , 4 ] .(9.16)
E
PROOF. By Definition 1.22 of the compensating operator and from a stan0 dard calculus we get the desired result. Lemma 9.2 The main part in the asymptotic representation of the generator ILE is as follows (with the same notation, ILE)
+
+
E-lQobi(z)cpL(u, &o[b(x)- b o ( ~ > ] d *) (u, 1 (9.17) + ~ Q o c ( x ) ~ ' p z.), ( ~Q o ~ ~ ( P*),( u ,
ILEcp(u, X) = ~ - ~ Q c p ( . , X)
0 )
+
where: Qocp(z) := d x )
1 E
P k ,dy)cp(y), bo(x) := L v G d d v ) ,
9.8. LEVY APPROXIMATION OF IMPULSIVE PROCESSES
PROOF From
283
we can write
an
In order to apply Assumptions
let us consider the operator
which is transformed as follows
where the function
belongs t o C3(lR),and
byy)
:= L v ~ : ; ( d v ) , c&(Y) :=
s,
v2~;(dv) = E~[c(Z)
+ e:(x)].
Then, using Assumptions L3-L5, we get 1 qfw = 2 { 1w Gy(dv)L7,(4+[E-lbl (Y)+b(Y)lcp’(u)+Zc(Y)cp//(u))+o(E2) 1
or, in another form q/cp(u) = E
{
H‘,cp(U) +&-
1b l (Y>Cp’(u) +[b(Y)
1 -bo (Y)l ‘PI (u) +5C(Y)cp!I (u)}+ 0 ( E 2 ) .
Putting this representation in (9.16), we get (9.17). 0 We will now obtain the limit generator by solving the following singular perturbation problem for the reducible-invertible operator Q, according to Proposition 5.2,
+
(u, x) = ~ p ( u ) &eyx),
for a test function cpE(u,x) = cp(u) Let us define the operators: Q1 := QoBl(x)
and
(9.18)
+ ecpl(u, x) + c2p2(u,x).
Q2 := Qo[B(x)
+ rz+ C(x)l,
where Bl(z)p(u) := b~(z)cp‘(u), and B(z)cp(u) = [b(x)- bo(x)]p’(u), and: Q3
:= Q2
+ Q1RoQ1,
1 and C(x)cp(u)= -c(x)cp”(u). 2
CHAPTER 9. APPLICATIONS II
284
Note that under the balance condition we have
IIQiII = 0. Lemma 9.3
T h e asymptotic representation
[ E - ~ Q +&-'&I
+ Q2][cp + &pi+ ~
~
= b~ p
+ e"(z), 2
1
i s verified by 'PI = RoQicp,
with negligible t e r m eE(z>= [Qi
+ EQzI'P~ + Q 2 ' ~ i .
T h e limit operator L can be obtained by Proposition 5.2
W ( U=) ~ [ Q + z QiRoQi]n~(u).
(9.19)
Calculation of the limit operator. Taking into account RoP = Ro+H-I, and the balance condition L6, the limit operator (9.19) is represented by (9.15). Specifically, by a straightforward calculus, we obtain:
nQoWz)cp(u) ( b - bo)cp'(u),
and
In order to prove the relative compactness of the family of the processes ['(t),t 2 0, E > 0, we can follow the lines of Chapter 6.
PROOF OF THEOREM 9.4. For the coupled Markov process T ( t ) , z ' ( t ) t, 2 0, with generator L', we have the following equation
+
cp"(["(t),Z"(t)) = 'PE('lL,z)
I'
L E ' P E ( J E ( S ) ,z ' ( s ) ) d s
+y"(t),
(9.20)
285
9.2. LEVY APPROXIMATION OF IMPULSIVE PROCESSES
where y'(t) is an Ff-martingale, and the test functions cp" considered here are of the form V E ( %).
= cp(v)
+W1(%
(9.21)
z)
For the limit process e0(t),t 2 0, in Theorem 9.4, we get (9.22) = c ( r o ( s ) ,s 5 t)-martingale. where y o ( t ) , t 2 0, is an From (9.20) and (9.21), we get
where
Now, from (9.22) and (9.23), we get
+yE(t)
- yo(t)
+ ce;(t).
(9.25)
From Lemma 9.3, we have
b ( t O ( t ) )+@(S).
J J E P E ( E E ( S ) , Z E ( 5= -))
(9.26)
From (9.25) and (9.26), we get cp(tE(t)) - cp(tO(t>> = Y'(4
+ Jot
- Yo@)
+ @(t),
where e;(t) := O;(t) O;(s)ds. Since E[y"(t) -yo@)] = 0, the result follows from the latter equality.
This page intentionally left blank
Problems to Solve
Here we give about fifty problems for the reader to solve. These problems propose results stated without proofs in the previous chapters, alternative proofs, and some extensions. They are classified following chapters.
Chapter 1
D Problem 1. Prove that the generators in examples in Section 1.2.2 are as stated there. D
Problem 2. Prove Proposition 1.3.
D
Problem 3. Prove Proposition 1.4.
D
Problem 4. Prove the Markov Renewal Equation (1.40).
D
Problem 5. Prove identities of Proposition 1.6 for Ro given by (1.66).
D
Problem 6. Prove Proposition 1.7.
D
Problem 7. Prove identity (1.59).
D
Problem 8. Prove that the generator of the backward Markov process
~ ( t=)t - ~ ( t t) 2, 0, of a renewal process on R+, considered in Example 1.2, is a reducible-invertible operator. 287
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
288
Problem 9. Let x ( t ) , t 2 0, be a regular semi-Markov process. 1) Prove that the process x ( t ) , y ( t ) , t 2 0, is a Markov process, (see Section 1.3.3). 2) Calculate its generator and prove that it is a reducible-invertible operator. D
Problem 10. Let z(t),t L 0 , be a jump Markov process with a standard state space ( E , & ) . Let q ( x ) be the intensity function, m ( z ) the mean jump value at x, r n z ( x ) the second moment of jump at x, and Q ( x ,.) the distribution of jumps at x. Prove that: 1) the compensator of the process x ( t ) ,t 2 0, is D
2) the square characteristic of the local martingale p ( t ) = x ( t ) - x ( 0 ) -
a ( t ) ,t 2 0, is
provided that m z ( x ) is finite for any x E E .
Chapter 2
D
Problem 11. Prove Lemma 2.2.
D
Problem 12. Prove Lemma 2.3.
D
Problem 13. Prove Proposition 2.1.
D
Problem 14. Prove Corollary 2.1.
D
Problem 15. Prove Corollary 2.3.
D
Problem 16.
z ~ , T 2~0,, be ~
(174) Let x ( t ) , t 2 0 , be a semi-Markov process, let the corresponding Markov renewal process, and IL be
PROBLEMS TO SOLVE
289
its compensating operator. Define the process c(t),t 2 0, by
1
+
T"Y(t) 1
r ( t ) := cp(Zv(t)+lr G ( t ) + l ) -
JLcp(zv(s)Jv(s))ds,
and 3 t := n ( { 5 ~ t~} n { ( m ,...,z,) E B;'-'};n measurable, and cp and ILcp are bounded. Prove that: 1) for t 2 0 and s 2 0, &[J(t
+ s) - r ( t ) I Ft] = 0,
2 0).
7?,
2 0,
The function cp is
as.
2) the process C(t),t 2 0, is not measurable with respect to 3 t . So, the process <(t)is not an Ft-martingale. D Problem 17. Let g : Rd --t Rd, d 2 1, be a globally Lipschitz continuous function, f : Rd + lR be a function in C1(Rd) and let A be a differential operator defined by
where gbc) = (91(z),...lgd(x)). Consider the ordinary differential equations d
-z(t) = g(z(t)), z(0) dt
= 2 E Rd,
(9.27)
1) Prove that x ( t ) is a solution of (9.27) if and only if it is a solution of (9.28). 2) Prove that the family (q5t,t E R), where q5t : z H q5(t,x) =: z ( t ) ,is a group, that is, 4tfs
that means q5(t mapping.
+ s,z)
= q5t
04Sl
= $(t,q5(s,z)), z E
D
Problem 18. Prove Corollary 2.4.
D
Problem 19. Prove Proposition 2.11.
Rd; and q5t is a one-to-one
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
290
D
Problem 20. Prove Proposition 2.12.
Chapter 3
D
Problem 21. Prove Corollary 3.5.
D Problem 22. Let f be a probability density function on R+, whose first and second moments are finite and denoted by m and m2 respectively. Show that m2 2 2 m if f is completely monotone (see Theorem 3.3). Prove that in Theorem 3.3 we have o2 > 0.
Problem 23. (Liptser’s formula). Let z ( t ) , t 2 0, be an irreducible jump Markov process with finite state space E = (1, ..., d} and generating matrix Q. Denote by 7 r i , l 5 i 5 d its stationary distribution. Let a be a real-valued function defined on E . Let us define the following family of integral functionals D
s,
t
cr‘(t) = &-I Let
(vl,...,vd-1)
[.(z(s/E2)) - Ea(z(s/&2))]ds,t
2 0,
> 0.
be the solution of the following system of equations d- 1
6ij.j
= .(i)
- a(d),
i
= 1,...,d
j=1
where
E
6is a non-singular matrix defined as follows
Deduce from Theorem 3.3 that
*bw(t),
E -+ 0,
where w ( t ) ,t 2 0 , is a standard Wiener process, and
-
1,
PROBLEMS TO SOLVE
291
D Problem 24. For the process in the previous problem, define the occupation time process of states
Ai(t) = rneas{s : z(s) = i , O 5 s 5 t}, and the vector A* = (A:,
...,A;),
t 2 0,
with
Deduce from Liptser’s formula that the vector A converges weakly to a multivariate normal distribution with mean zero and covariance matrix to be defined. D
Problem 25. Let us consider a family of contraction semigroups
Ft(z),t 2 0,z E E , on a Banach space B, and the random evolution operator
4(t) = r T 1
(z(o))rT2-T~
(z(T1))
rt-T(t)
(5(T(t))).
1) Prove that 4(t)is a contraction operator on B. 2) Prove that for any f E B the mapping t H 4(t)f is continuous, and differentiable at points t # ~ 1 , 7 2..., , and satisfies the following differential equation d
-4(t)f = 4(W(z(t))f, dt where T(z) is the generator of rt(z), z E E. 3) Define the expectation semigroup as follows
( W t ) f ) ( z := ) K[4(t)f(z(t))l. Prove that II(t),t 2 0, is a contraction semigroup. Chapter 4
D
Problem 26. Construct the generator of the limit stochastic system
f i ( t )?(t), , t L 0, in Theorem 4.4. D
P r o b l e m 27. State and prove the result of Theorem 4.6, when the limit h
merged Markov process $(t),t 2 0, has null generator
6,= 0.
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
292
Problem 28. By ad hoc time scaling, give averaging results for system (4.35) in Theorem 4.6. D
Problem 29. State and prove the result of Theorem 4.10, when the limit merged Markov process 2(t),t >_ 0, is not conservative. D
h
Chapter 5
D
Problem 30. Prove Lemma 5.5 by using the following formula (Propo-
sition 5.2).
LII = IIQz(v;z)II + IIK(v; x)PR&(v;z)II. D Problem 31. Derive results of Proposition 5.17, for the following representation of the generator of the switching Markov process
Q'
=Q
+ E Q ~+ E
~
+Q E ~~ Q ~ .
Give an interpretation of this scheme. D Problem 32. Give a conclusion such as in Proposition 5.5 for the following generator ILE
for k
= &-kQ
+ &-k+l
Qi
+ ... +
&-
2
Qk-2
+ E - l Q k - 1 + &k
(9.29)
> 4.
Problem 33. Show that the generator of the random evolution in Proposition 4.1 is represented as follows D
IL" = E - ~ Q +C 2 Q 1
+ E-'&(x),
where the operator
K(.)cp(.)
= E(.)cp'(.),
-
a(.)
satisfies the balance condition
flX(Z)fl = 0.
h
:= a(.)
- 2,
293
PROBLEMS TO SOLVE
Prove that the limit generator is the following h - - A
-
-c
Lfi = IIARoAII.
Hint. Use Proposition 5.4. D Problem 34. Prove that the generator of the random evolution in Proposition 4.2 is represented as follows
IL& - E -4 Q + E - ~ Q + E~ - ~ Q +~ ~ - l A ( z ) , where the generator
satisfies the balance condition
fifirrA(+Ififi
= 0.
Prove that the limit generator is the following
Hint. Use Proposition 5.5. D Problem 35. Prove that
the generator of the random evolution in Proposition 4.3 is represented as follows
L“=.= E - 2 6 +
+z,
6
h
where is the generator of twice merged Markov process .^(t),t2 0, and the operator
z
= fiIIA(z)IIL, - A h - - A
satisfies the balance condition IIAII = 0. The limit generator has the following form A h - - A
Ah..
Lfi = IIARoAII. Hint. Use Proposition 5.3.
294
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
D Problem 36. Prove that the generator of the random evolution in Proposition 4.4 is represented as follows
LE= E - ~ Q
+ E - ~ Q +~ E - ~ Q +~ ~ - l A ( z ) ,
where the operator
-a(.)
= ir(Z)Cp’(U),
&.)Cp(U>
:= a(.)
- i?,
satisfies the balance condition h
fifinxnfifi = 0. Prove that the limit generator is the following
- _--_A
- - A
-
- h
Lfi = nARrJAn.
Hint. Use Proposition 5.5. D Problem 37. Prove Theorem 4.6, by using the development in Section 5.6.1.
Chapter 6
D Problem 38. Let z,,n 2 1, be a sequence of i.i.d. centered random variables, and define the family of stochastic processes z E ( t ) , t2 O,E > 0, by
k=l
Show that x E ( t )==+ w ( t ) ,where w(t), t 2 0, is a standard Wiener process. D Problem 39. Prove the diffusion approximation result in Theorem 3.4, following calculus in Section 5.5.2 and Chapter 6. D Problem 40. Let x E ( t / & ) , t2 O,E > 0, be a family of semi-Markov processes with phase space El split as follows N
E = U+IEk,
Ek n Eki = 8,
k # k‘.
295
PROBLEMS TO SOLVE
Let w be the merging function on E with values in {1,2, ...,N } . Suppose that the following averaging principles are fulfilled: W(Z"(tl&))
* .^(t),
& V " ( t / & ) ===+ q
t),
where .^(t),t 2 0 is a Markov process. 1) Show that the compensating operator of
-v"(t) =
I"
U"
is
X(z"(S),y(S))dS,
and that of i; is
-v ( t )=
Jnt
q^(.^(s))ds.
2) Show that we have
where %(dz x ds)X(z, s),
q^(k) = Ek X
b
and ? is the stationary distribution of the Markov process zc"(t), t--7(t),t 0, on E x R+.
2
Chapter 7
D Problem 41. Formulate
the corresponding stochastic singular perturbation problem and prove Lemma 7.3.
Problem 42. Show that the predictable characteristics ( B ( t )c^(t), , vt(g)) of the semimartingale ('(t),t 2 0, in Theorem 7.2, relation (7.28), are given by: D
t
B ( t )=
b(f(s))ds, b(w) = qv2(w),
2(w)
:=
s,,
p,(dz)a(z).
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
296
The modified second characteristic is t
e(t)= where
e ( v ) = q,,
L,
p,(dz)Co(z),
E(?(s))ds,
Y
E V
and Co(z) =
LU'@~(~U).
The predictable measure is
where
D Problem 43. Prove that the compensating operator formula is as stated in Lemma 7.5.
Problem 44. Prove that under conditions of Theorem 7.1, the following convergence takes place D
E(Xc'>t
* E(Xco)t =
n
yo
(t)
[I
+ X ( Y exp(-tqao), ~
E .+ 0.
k=l
The limit stochastic exponential process &(X
is defined by the limit
Chapter 8
D
Problem 45. Show that the stationary distribution of the MRP {k,z>_ 0}, of the SMRW is given
xn,&,n 2 0, with phase space E =
by
+
p*(dz) = F*(z)dz/(a+ a - ) ,
*
where ah = lEa, .
297
PROBLEMS T O SOLVE D
Problem 46. Consider a centered SMRW defined as follows
where b = b+/p+ - b-/p-. Let b # 0 and the third moments E[P,fl3< 00. For notation see Section 8.4. Show that the weak convergence
*
("(t) ( O ( t )
=u
+
OW(t),
E
-+
0
takes place, and that the variance u2 is
where: 00
00" =
2 l
[F-(Z)x;(Z)
+P+(s)i;O(z)]dz,
-bO,(x):=X* (x)ROf& (x), 0::= j
p + ( X ) C - ( X ) +P-(z)C+(z)]dz,
C*(X) := E[Y:+~~Z, = z], x
E E*.
The potential operators R t are defined for the semi-Markov kernel Q = q ( z ) [ P- I].The process w(t) is the standard Wiener process.
Chapter 9
D Problem 47. Let zc"(t),t 1 O , E > 0, be a family of Markov processes, with embedded Markov chain xL,n 2 0, with the standard state space ( E ,E ) ; let the process v ( t ) ,t 1 0, be a Poisson process with intensity q. Let the following autoregressive real-valued process a"(t),t 2 0, be defined by
a"(t)= ayE(0) +E
4 t / E )
C a(ayEk;zyEk), k=l
where a is a fixed real-valued function defined on R x E.
(9.30)
STOCHASTIC S Y S T E M S IN MERGING PHASE SPACE
298
1) Prove that the generator of the coupled Markov process a"(t),x'(t),t 2 0, is
+
where [D"(x)- I]cp(u)= cp(u ~ a ( ux)) ; - cp(u). 2) Formulate the singular perturbation problem and find out the limit generator. 3) Prove the following weak convergence result
a"(t)=+ aO(t), where the limit process ao(t), t 1 0, (deterministic), is defined as a solution of the following evolutionary equation d
-a0@) dt = i;(aO(t)), where
Z(u)= q
s,
p(dx)a(u;x).
D Problem 48. Let the process a"(t),t2 0, in the previous problem be scaled as follows
a"(t)= a'(0) + &
c
V(tlE2)
a&(a;;xi),
(9.31)
k=l
with a,(u;
x) = a(u;x) + &al(u;x).
Prove that the following weak convergence takes place
a"(t)=+ aO(t), where the limit diffusion process aO(t),t 2 0, is defined by the generator L, defined as follows
+ f1( U ) ' p " ( " ) ,
Lcp(u) = b ( u ) d ( u ) where, the drift coefficient is defined by
PROBLEMS TO SOLVE
299
with b~,(u; x) = a(u;x)&a:(u; x), and the diffusion coefficient is
B ( u )= Q
s,
p(dx)uo(u;x),
with ao(u;x) = a(u;x)&a(u; x).
General problems
Problem 49. Let v"(t),t 2 O,E > 0, be a family of counting processes with intensities D
2 0,
X ( t ) = &-1C(t;&v'(t)),t
&
> 0.
Prove that the following convergence holds & Y ' ( t ) ===+ xO(t),
&
4
0,
where xo(t),t 2 0 is the solution of d
-x(t) dt D
= C ( t ; x ( t ) ) , x(0) = 0.
Problem 50. Let us consider a birth and death process with state space
EN = {0,1, ..., N } and jumps intensities: Q(i,+1) = ( N - i ) X , 0 5 i < N , and Q(Z, -1) = ip,0 < i I N. 1) Put
E =
1/N and define the normalized family of processes
v'(t)
:= & V ( t / & ) ,
t 2 0,
&
> 0,
on the state spaces E E= {u = ie : 0 5 i 5 N } . Prove that v"(t)+ p ( t ) , as E + 0, where the limit process p ( t ) , t >_ 0, is a deterministic function obtained as a solution of the following evolutional equation
d -p(t)
dt
= CMt)),
with C ( u )= A( 1 - u)- pu. 2) Put E := N-l/' and consider the normalized family of processes
" ( t ) := & V ( t / & ' ) where p = X/(X
+p ) .
-
&-lp,
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
300
Prove that <“(t)+ <‘(t),where <‘(t),t 2 0 is the Ornstein-Uhlembeck diffusion process with generator defined as follows
+ ~ ) . ~ L ( P+’ ~(1uB)V ” ( . ( I ) ,
Lop(.) = -(A where B = 2pp.
D Problem 51. Suppose that the process z ( t ) ,t 2 0, satisfies the following mixing condition
where .Ft := a ( z ( s ) ;s 5 t ) and c > 0,
p+’:= a(z(u);u 1. t + s), and, for some
Lrn
ecu4(u)du< +oo.
Then the stochastic system U“(t)defined by the solution of d
-U&(t) = C(z(t/,))UE(t),
dt converges weakly, as E --t 0, t o the solution of the averaged system d
-U(t) dt with
:= EoC(z(t/~)).
= EU(t),
U ( 0 ) = u,
Appendix A
Weak Convergence of Probability Measures
In this appendix we present some results on weak convergence of probability measures in Polish spaces, that is a complete and separable metric space. For example, the Euclidean space Rd, the space C[O,m) with the topology of uniform convergence on bounded sets, the space D[O,00) with the Skorokhod metric are Polish spaces.
A.l
Weak Convergence
Let C ( E ) be the set of real-valued bounded continuous functions on E , with the sup-norm. Let us consider a Polish space E , with its Bore1 a-algebra E. SO, the measurable space ( E ,E ) is a standard space. Let M I ( E ) be the space of all probability measures on ( E , E ) ,endowed with the weak topology. In this topology the mappings p -+ pf are continuous for all f E C ( E ) . The space MI(,!?),is a Polish space for the weak topology.
Definition A . l Let P,,n 2 1, and P , be in MI(,!?). Then we say that (Pn) converges weakly t o P , if, for any real-valued bounded continuous function cp on E , we have
It is denoted by
P,+P,
nAm.
Theorem A . l (Portmanteau) Let Pn, n 2 1, and P in M l ( E ) . Then the following assertions are equivalent. 301
S T O C H A S T I C S Y S T E M S IN M E R G I N G P H A S E S P A C E
302
-
1 ) P, + P, n -+ 00. 2) S'pdP, ScpdP, n -+ 00, for any bounded and uniformly continuous function cp on E . 5') J pdP, S pdP, n -+ 00, for any bounded and measurable function 'p on E . 4 ) limsup,,, P,(C) 5 P ( C ) ,for any closed subset C of E . 5) liminf,,, P,(O) 2 P(O),for any open subset 0 of E . 6) limn,, P,(A) 5 P ( A ) ,for any A of E , for which P(BA)= 0.
-
Let S be a subset of C ( E ) .
Definition A.2 The set S is said to be measure-determining (class), if, whenever P and Q belong to MI ( E ) ,
/
'pdP =
/
cpdQ, for all 'p E S,
we have P = Q.
Definition A.3 The set S is said to be convergence-determining(class), if, whenever P,,n 2 1, and P belong to M l ( E ) ,
we have P,
P.
A convergence-determining class is also a measure-determining class. Definition A.4 For (E,E)-valued stochastic elements x,,n 2 1, and x , we say that x, converges weakly to x , if P, + P , where P, is the probability distribution of x, and P of x . We denote that by xn + x. It is not necessary for the above stochastic elements x , and x to be defined on the same probability space. In case they are defined on the same probability space, say (0,F,P), we have P, = P o x;' and P = P o x-'. Corollary A . l Let P, Q E M l ( E ) . 1 ) If ScpdP = ScpdQ, for any bounded and uniformly continuous function 'p on E , then P = Q . 2) If P and Q are limits of the same sequence in M l ( E ) , then P = Q.
APPENDIX A
A.2
303
Relative Compactness
Definition A.5 A subset M of M 1 ( E ) is said to be relatively compact, if every sequence in M has a convergent subsequence. Theorem A.2 Let P,, n >_ 1, be a relatively compact sequence in M I ( E ) , such that every convergent subsequence has the same limit P . Then P, =+ P. Definition A.6 1) The probability measure P E M l ( E ) is tight if for every E > 0 there exists a compact subset K of E l such that P(K") < E. 2) A subset A4 of M l ( E ) , is tight if, for every E > 0, there exists a compact subset K of E such that P ( K C )< E , for every P in M . Theorem A.3 (Prohorov) A subset M of M l ( E ) is relatively compact (for the weak topology) if and only if it is tight. Theorem A.4 Let x,(t),t 2 0,n 2 0, a sequence of processes and let a process x ( t ) ,t 2 0 , be with simple paths in D[O,00). 1) If xn(t) =+ x ( t ) , then
* ( X n ( t l ) r ..*ixn(tk)), n
(xn(tl),...,xn(tk))
(A.1)
f o r any finite set { t l , ..., t k } c D, := {t 2 0 : P ( x ( t ) = x ( t - ) ) = 1). 2) If the sequence xn(t) of processes is relatively compact and there exists a dense set D c [0,00) such that ( A . 1 ) holds for every finite set { t l ,..., t k } C D , then
xn(t)+ x(t), n
4 00.
Theorem A.5 (Skorokhod representation) Let x,,n >_ 1, and x be E valued stochastic elements, and suppose that x , =+-x . Then there exist stochastic elements Z n l n 2 1, and 5, all defined o n a common probability space, such that Z, has the same distribution as xn, and Z as x , and 5,
a.s.
x.
Remark A . l . The above definitions and more detailed results can be found, e.g., in 163563703132.
This page intentionally left blank
Appendix B
Some Limit Theorems for Stochastic
Processes
The present appendix gives three theorems used in proofs of theorems in Poisson approximation of Chapter 7, (Theorems B.l-B.2, from 7 0 ) and in LBvy approximation of SMRW in Chapter 9 (Theorem B.3, from .)'61
B.l
Two Limit Theorems for Semimartingales
Let us consider the classes of functions C,(Rd), C2(Rd),and C3(Rd) defined as follows (see 70, p. 354).
C2(Rd) is the set of all real-valued continuous bounded functions defined on Rd which are zero around 0 and have a limit at infinity. C1(Rd) is the subclass of Cz(Rd) of all nonnegative functions g a ( z ) = ( a 1x1 - 1)+ A 1 for all positive rationals a , and with the following property: let p n , p be positive measures on Rd \ {0}, finite on any complement of neighborhood of 0; then p n f -+ pf for all f E C1(Rd) implies pn f + p f for all f E C2(Rd). So, it is a convergence-determining class. C3(Rd) is the measure-determining class of functions cp, which are realvalued, bounded, and such that 4 ' 1 1 ) / 1'1112
+
0,
11 ' 11 -+
0.
The above three classes satisfy the following inclusion relations:
C1(Rd) c C2(Rd) c C3(Rd). Integral process ( 7 0 ) . First we consider a random measure v = { v ( w ; d t , d z ) ; wE R} on (R+ x E , B + x E ) , such that v({O} x E ) = 0. 305
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
306
Let R be a measurable function on (0 x R+ x optional a-algebra on 0 x R+. Define the integral process R * v ( w ,t ) by
E , 8 x E ) , where 6 is the
r
R ( w ;s, z)v(w;ds, d z ) , xE
when
J& x E IR(w;s, .)I
v(w;ds, d z ) < 00; and R * v ( w , t ) = 0 otherwise.
Theorem B . l (Theorem VIII2.18, p . 423, an 70) Let x ( t ) , t >_ 0 be a semimartingale of an independent increment process continuous in probability and let v"(t),t2 0 and v ( t ) , t 2 0 , be such that
1x1 * v'(t) < +co, 1x1 * v ( t ) < +co for all t 2 0. Define B t s ( t ) , t2 0, and B ' ( t ) , t L 0 , as follows
+ (Z - h ( z ) )* v"(t),
B'&(t)= B E ( t )
and ?(t),t 2 0 , C ' ( t ) , t2 0, as follows
+(dzk)
2;1.>jk(t) = cs,jk(t)
sst
If P
sup (B'"(s)- B'(s)l + 0,
for all t 2 0,
s
and %(t)
5 ?(t),
for all t 2 0,
lim lim sup then
* z(t).
z"(t) Moreover, in this case we have
307
APPENDIX B
and
Theorem B.2 (Theorem IX.3.27, p. 507, in 'O) Let D be a dense set of R+, and assume that C1(Rd) contains the positive and negative parts of the following functions 9 m = (xi- hi(.))(l - g a ( x ) ) gij(x) = ( z ~z h~i ( x ) h j ( x ) ) ( l ga(x)). where a E Q+ and g a ( z ) = ( a 1x1 - 1)+ A 1). Assume also that uE and u are such that
1x1 * u " ( t ) < +oo, 1x1 * u ( t ) < +m for all t 2 0.
(B.1)
Let also the following conditions hold. (1) The strong majoration hypothesis: there is a continuous and determin-
istic increasing cadlag function F which strongly dominates the functions V a r ( B ' Z ( a ) ) and - VaT(@(a)). (2) The condition of big jumps:
(3) The uniqueness condition: there is a unique probability measure P on ( R , 3 ) such that the canonical process x ( t ) , t L 0 , is a semimartingale on (0,F ,F, P) with characteristics ( B ,C, u ) and initial distribution 7. (4) The continuity condition: for any t E D , g E Cl(Wd)the functions w H B'(t,w), e ' ( t , w ) , g * v ( t , w ) are Skorokhod continuous on D(Rd). ( 5 ) L(x'(0)) + PO,where C ( z E ( 0 ) )is the law of the r.v. x'(O), and PO is the initial distribution of the limit process. ( 6 ) The following convergence in probability holds
g
* u'(t) - (9 * u ( t ) ) o x E
P
0 for all t E D,g E C,(Rd), (B.3)
-
and the three following conditions hold: sup IB"(s) - B'(s) o x E ( s
P
0, for all t 2 0 ,
(B.4)
STOCHASTIC S Y S T E M S IN MERGING PHASE SPACE
308
lim limsupP"( 1 x 1l{lll>a) ~ * uE(t) > E ) ,
atm
f o r all E > 0 , t
E>O
E R+.
Then C ( x " ) =+ IP,
&
4
0,
where ,C(z') is the law of the process x"(t),t 2 0
B.2
A Limit Theorem for Composed Processes
Let n E , &> 0, be a family of positive non random numbers, such that nE 00, as E + 0; let a;,k = 1 , 2 , . . . , &> 0, be a family of real-valued random variables and let a family of stochastic processes Jb(t),t 2 0 , E > 0 be defined as follows --f
b'l
t'(t)=
c
a;,
t 2 O , & > 0.
k=l
Let further p E ,E > 0, be a family of non negative random variables, and set u" := pLE/nE. Define now the cadlag process
<"(t):=
c
a;,
&
> 0.
k<tpE
Then
c(t)= J L ( t u E )is a composition of the two stochastic processes:
r(t)and u"(t) = t uE ,and we have the following theorem 16'. Theorem B.3 tions hold:
(Theorem 4.2.1, p. 241, in)'61
Let the following condi-
I. (v",Jb(t),tE U ) + (uo,to(t),tE V ) , E -+ 0, where: (1) uo is a non-negative random variable. (2) to(t), t 2 0, is a cadlag processes. (3) U is dense in R+ and contains the point 0 .
APPENDIX B
309
11.
)
l i m l i m s ~ p ~ ( A j ( ~ E , c>, 6T ) = 0, c-0
6 > O,T > 0.
(B.9)
E+O
Then
(<"(t), t E w> ==+ ( < O ( t ) , t E
w>,E
--+
01
where W = R+ \ A , A is any set at most countable, and 0 A j ( J 8 , c , T ) is the modulus of compactness, defined as follows: Aj(<', c,") :=
oV (t -c) 5 t'
SUP
st st" 5 ( t+c) A T
min
( It"(t')- E" (t>l IE' 7
E
W , and
(9 - tE(t")I ) .
Remark B . l . Instead of relation (B.9), the process t E ( t ) , t2 0, has to satisfy the compact containment condition (see Chapter 6 ) .
This page intentionally left blank
Appendix C
Some Auxiliary Results
In this appendix we present some results useful in several theorems presented in this book. The first one (Lemma C.1) establishes the negligibility condition of the backward recurrence time of the jump times process of a semi-Markov family of processes. The second one (Lemma C.2) proves the positiveness of diffusion coefficients in limit theorems.
C.l
Backward Recurrence Time Negligibility
Lemma C . l Let the family of sojourn times Ox7xE E , with distribution function F,, satisfy the following conditions. A l : Uniform integrability sup] xEE
A2: For any x E E , and
E
&(t)dt
4
0,
T
-+
00.
T
> 0,
Then the following estimation takes place
for any
S > 0 , and any T > 0 , where y'(t)
:= t - ~ ' ( t ) ,(see
Section 1.3.3).
PROOF. The regular property of the semi-Markov process provides the following estimation P(T&,E
5 T ) --+ 0 , N 311
-i 00.
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
312
for
E > 0. Indeed, let us calculate
Using Condition A2, we estimate
Under Condition A1 we estimate N/ E '(1
max e k 2 5k< N / E
S/E)
I x P
( e k
26/~)
k=l
N
< - sup & xEE
5
00
Fx(t)dt 6 / ~
NE
-- sup Fx(b/E) E bxEE
N = - sup Fz(6/E) 6 xEE
-+
0,
& --+
0.
Now. we can estimate
C.2
Positiveness of Diffusion Coefficients
For an ergodic Markov process z ( t ) ,t 0, with state space (ElE ) , generator Q 1stationary distribution IT, and potential operator Ro, defined in Section 1.6, let us prove the following result.
h
Let us consider two functions, say cp and $, such that Lemma C.2 Qcp = $. Then we have b :=
n(dz)$(z)Ro$(z)I 0.
Remark C . l . This result is important since it proves that the variance, expressed as 0 = 4,in the diffusion approximation scheme is nonnegative. PROOF. From the martingale characterization of Markov processes, we have that
Mt
:= cp(z(t))-
I’
Qcp(z(s))ds,
is a martingale with square characteristic (see Theorem 1.2),
(Wt = f[Qcp2(z(s)) 0
Let us calculate
Hence
and, using the fact thatthat
- 2cp(z(s))Qcp(z(s))Ids,
t L 0.
This page intentionally left blank
Bibliography
1. Anantharam V., Konstantopoulos T. (1995). A functional central limit theorem for the jump counts of Markov processes with an application to Jackson networks, Adv. Appl. Prob., 27, 476-509. 2. Anisimov V.V. (1977). Switching processes, Cybernetics, 13 (4), 590-595. 3. Anisimov V.V. (1995). Switching processes: averaging principle, diffusion approximation and applications, Acta Aplicandae Mathematica, 40, 95-141. 4. Anisimov V.V. (2000). J-convergence for switching processes with rare perturbations to diffusion processes with Poisson type jumps. In Skorokhod’s Ideas in Probability Theory, V.S. Korolyuk, N. Portenko, H. Syta (Eds), Institute of Mathematics, Kiev, pp 81-98. 5. Anisimov V.V. (2004). Averaging in Markov models with fast semi-Markov switches and applications, Commun. Statist.- Theory Meth., 33(3), 517-531. 6. Arjas E., Korolyuk V.S. (1980). Stationary phase merging of Markov renewal processes, (in Russian), DAN of Ukraine, 8, 3-5. 7. Asmussen S. (1987). Applied Probability and Queues, Wiley, Chichester. 8. Asmussen S. (2000). Ruin probabilities, World Scientific. 9. Assing S., Schmidt W.M. (1998). Continuous Strong Markov Processes in Dimension One, Lecure Notes in Mathematics, 1688, Springer, Berlin. 10. Barbour A.D., Chryssaphinou 0. (2001). Compound Poisson approximation: a user’s guide, Ann. Appl. Probab., vol. 11, no. 3, 964-1002. 11. Barbour A.D., Holst L., Janson S. (1992). Poisson Approximation, Oxford University Press, Oxford. 12. Bensoussan A., Lions J.-L., Papanicolaou G.C. (1978). Asymptotic Analysis of Periodic Structures, North-Holland, Amsterdam. 13. Bertoin J. (1996). Lkvy processes. Cambridge Tracts in Mathematics, 121. Cambridge University Press, Cambridge. 14. Bhattacharya R.N. (1982). On the functional central limit theorem and the law of the iterated logarithm for Markov processes, Z. Wahrsch. verw. Gebiete, 60, 185-201. 15. Bhattacharya R.N., Waymire E.C. (1990). Stochastic Processes with Applicat ions, W iley, N .Y. 16. Billingsley P. (1968). Convergence of Probability Measures, Wiley, New
315
316
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
York. 17. Blankenship G.L., Papanicolaou G.C. (1978). Stability and control of stochastic systems with wide band noise disturbances, I, SIAM J . Appl. Math., 34, 437-476. 18. Boccara N. (2004). Modeling Complex Systems, Springer, Berlin. 19. Borovkov A.A. (1998). Ergodicity and Stability of Stochastic Processes, Wiley, Chichester. 20. Borovkov K., Novikov A. (2001). On a piece-wise deterministic Markov processes model, Statist. Probab. Lett., 53, 421-428. 21. Borovshkikh Y. V., Korolyuk V. S. (1997). Martingale Approximation, VSP, Utrecht, The Netherlands. 22. Bouleau N., Lbpingle D. (1994). Numerical methods f o r stochastic processes, Wiley, N.Y. 23. Breiman L. (1968). Probability, Addison-Wesley, Mass. 24. Brbmaud P. (1981). Point Processes and Queues, Springer, Berlin. 25. Chetouani H., Korolyuk V.S. (2000). Stationary distribution for repairable systems, Appl. Stoch. Models Bus. Ind., 16, 179-196. 26. Chung K.L. (1982). Lectures from Markov Processes to Brownian Motion, Springer, N.Y. 27. Cinlar E., Jacod J., Protter P. and Sharpe M.J. (1980). Semimartingale and Markov processes, Z. Wahrschein. verw. Gebiete, 54, 161-219. 28. Cogburn E. (1984). The ergodic theory of Markov chains in random environnement, 2. Wahrsch. Verw. Gebiete, 66, 109-128. 29. Cogburn R., Hersh R. (1973). The limit theorems for random differential equations, Indiana Univ. Math. J., 22, 1067-1089. 30. Comets F., Pardoux E. (Eds.) (2001). Milieux Ale'atoires, Socibtb Mathbmatique de France, No 12. 31. Crauel H., Gundlach M. (Eds) (1999). Stochastic Dynamics, Springer, N.Y. 32. Dautray R., Cessenat M., Ledanois G., Lions P.-L., Pardoux E., Sentis R. (1989). Me'thodes Probabilistes pour les Equations de la Physique, Eyrolles, Paris. 33. Davis M.H.A. (1984). Piecewise-deterministic Markov processes: A general class of non-diffusion stochastic processes, J. Roy. Statist. SOC.,B46, 353388. 34. Davis M.H.A. (1993), Markov Models and Optimization, Chapman & Hall. 35. Dellacherie C., Meyer P. A. (1982). Probabilities and Potential, B. NothHolland. 36. Devooght J. (1997). Dynamic reliability, Advances in Nuclear Science and Technology, 25, 215-278. 37. Doob J.L. (1954). Stochastic Processes, Wiley, N.Y. 38. Ducan P. (1994). Mixing: Properties and Examples, LNS no 85, Springer,
N.Y. 39. Dudley R.M. (1989). Real Analysis and Probability, Wadsworth. 40. Dunford N., Schwartz J . (1958). Linear Operators. General Theory, Interscience. 41. Dynkin E.B. (1965). Markov Processes, Springer-Verlag.
BIBLIOGRAPHY
317
42. Elliott R.J. (1982). Stochastic Calculus and applications, Springer, Berlin. 43. Ellis R., Rosenkrantz W. (1977). Diffusion approximation for transport processes with boundary conditions, Preprint, University of Massachussetts. 44. Embrechts P., Kluppelberg C., Mikosc T. (1999). Modelling Extremal Events for Insurance and Finance, Springer, Berlin. 45. Ethier S.N., Kurtz T.G. (1986). Markow Processes: Characterization and convergence, J. Wiley & Sons, New York. 46. Ezhov I.I., Skorokhod A.V. (1969). Markov processes homogeneous with respect to the second component I, Theory Probab. Appl., 14, 3-14; 11, ibid, 679-692. 47. Feller W. (1966). An introduction to probability theory and its applications, vol. 1 and 2, J. Wiley, NY. 48. Fleming W.H., Mete Soner H. (1993). Controlled Markow Processes and Viscosity Solutions, Springer-Verlag, N.Y. 49. Freidlin, M.I. (Ed.) (1994). The Dynkin Festschrift: Markow Processes and their Applications, Birkauser, Boston. 50. Freidlin M.I. (1996). Markov Processes and Differntial Equations: Asymptotic Problems, Lectures in Mathematics, ETH Zurich, Birkhauser. 51. Freidlin, M.I. and Wentzell, A.D. (1998). Random Perturbations of Dynamical Systems, 2nd Edition, Springer, N.Y. 52. Fristedt B., Gray L. (1997). A Modern Approach to Probability Theory, Birkhauser, Boston. 53. Furrer H., Michna Z., Weron A. (1997). Stable LBvy motion approximation in collective risk theory, Insurance: Mathem. €4 Econ., 20, 97-114. 54. Gamier L. (1997). Multi-scaled diffusion-approximation. Applications to wave propagation in random media, ENSAIM: Probab. Statist., 1, 183-206. 55. Gerber H.U. (1970). An extension of the renewal equation and its applications in the collective theory of risk, Skandinawisk Aktuarietidskrift, 205-210. 56. Gihman, I.I., Skorohod, A.V. (1974). Theory of stochastic processes, vol. 1,2, & 3, Springer, Berlin. 57. Glynn, P.W. (1990). Diffusion approximation, In Hanbook in Operations Research and Management Science, Vol. 2, Stochastic Models, D.P. Heyman, M.J. Sobel (Eds), Noth Holland, Amsterdam, pp 145-198. 58. Glynn, P.W., Haas P.J. (2004). On functional central limit theorems for semi-Markov and related processes, Commun. Statist.- Theory Meth., 33(3), 487-506. 59. Griego, R., Hersh, R. (1969). Random evolutions, Markov chains, and Systems of partial differentialequations, PTOC. Nat. Acad. Sci. U.S.A., 62, 305308. 60. Gut A. (1988). Stopped Random Walks, Springer-Verlag, N.Y.. 61. Hall P., Heyde C. (1980). Martingale Limit Theorems and its Applications, Academic Press, N.Y. 62. He S.W., Wang J.G. and Yan J.A. (1992). Semimartingale theory and stochastic calculus, Science Press and CRC Press, Hong Kong. 63. Hersh R. (1974). Random evolutions: a Survey of results and problems, Rocky Mountain J . Math., 4, 443-477.
318
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
64. Hersh R. (2003). The birth of random evolutions, Mathematical Intelligencer, 25( 1), 53-60. 65. Hersh R., Papanicolaou G. (1972). Non-commuting random evolutions, and an operator-valued Feynman-Kac formula, Comm. Pure Appl. Math., 30, 337-367. 66. Hersh R., Pinsky M. (1972). Random evolutions are asymptotically Gaussian, Comm. Pure Appl. Math., 25, 33-44. 67. Iglehart D.L. (1969). Diffusion approximation in collective risk theory, J. Appl. Probab., 6, 285-292. 68. Iscoe I., McDonald D. (1994). Asymptotics of exit times for Markov jump processes I, Ann. Prob., 22(1), 372-397. 69. Ito K., McKean J r M.P. (1996). Digusion Processes and their Sample Paths, Springer, Berlin. 70. Jacod J., Shiryaev A.N. (1987). Limit Theorems for Stochastic Processes, Springer-Verlang, Berlin. 71, Jacod J., Skorokhod A.V. (1996). Jumping Markov processes, Ann. Inst. H. Poincare', 32, (l),pp 11-67. 72. Jefferies B. (1996). Evolution Processes and the Feynman-Kac Formula, Kluwer, Dordrecht. 73. Kabanov Y., Pergamenshchiskov S. (2003). Two-Scale Stochastic Systems: Asymptotic Analysis and Control, Springer, Berlin. 74. Kallenberg (1975). Random Measures, Akademie, Verlag, Berlin. 75. Kaniovski Yu., Pflug G. (1997). Limit theorems for stationary distributions of birth-and-death processes, Interim Report IR-97-O41/July, IIASA, Laxenburg, Austria. 76. Kaniovski Yu. (1998). On misapplications of diffusion approximations in birth and death processes of noisy evolution, Interim Report IR-9805O/August, IIASA, Laxenburg, Austria. 77. Kannan D., Lakshmikantham V. (Eds.) (2002). Handbook of Stochastic Analysis and Applications, M. Dekker, N.Y. 78. Karatzas I., Shreve S.E. (1988). Brownian Motion and Stochastic Calculus, Springer-Verlag. 79. Karlin S., Taylor H.M. (1981). A Second Course in Stochastic Processes, Academic Press, San Diego. 80. Kartashov N.V. (1996). Strong Stable Markov Chains, VSP Utrecht, TBiMC Kiev. 81. Karr A. F. (1991). Point Processes and their Statistical Inference, 2nd Edition, Marcel Dekker, N.Y. 82. Kato T. (1980). Perturbation Theory for Linear Operators, Springer, Berlin. 83. Kazamaki N. (1994). Continuous Exponential Martingales and BMO, LNM no 1579, Springer, Berlin. 84. Keilson J. (1979). Markov Chains Models - Rarity and Exponentiality, Springer-Verlag, N.Y. 85. Keepler M. (1998). Random evolutions processes induced by discrete time Markov chains, Portugaliae Mathematica, 55(4), 391-400. 86. Khasminskii R. (1969). Stability of the solutions of systems of differential
BIBLIOGRAPHY
319
equations under random disturbance of their parameters, Nauka, MOSCOW. 87. Khoshnevisan D. (2002). Multiparameter Processes, Springer, N.Y. 88. Kifer Y . (1988). Random Perturbations of Dynamical Systems, Birkhauser, Boston. 89. Kimura M. (1964). Diffusion models in population genetics, J. Appl. Prob., 1, 177-232. 90. Kingman J.F.C. (1993). Poisson Processes, Clarendon Press, Oxford. 91. Kipnis C., Varadhan S.R.S. (1986). Central limit theorem for additive functionals of reversible Markov processes and applications to simple excursions, Commun. Math. Phys., 104(1), 1-19. 92. Kleinrock L. (1975). Qeueing Systems. Vol. 1: Theory, Wiley, N.Y. 93. Klebaner F. C. (1998). Introduction to Stochastic Calculus with Applications, Imperial College Press, London. 94. Kluppelberg C., Mikosch T. (1995). Explosive Poisson shot processes with applications t o risk reserves, Bernoulli, 1,(1& 2), pp 125-147. 95. KokotoviC P., Khalil H.K., O’Reilly J. (1999). Singular Perturbation Methods in Control: Analysis and Design, SIAM, Classics in Applied Mathematics, Philadelphia. 96. Korolyuk V.S. (1990). Central limit theorem for semi-Markov random e v e lutions, Comp. Math. Appl., 83-88. 97. Korolyuk V.S. (1998). Stability of stochastic systems in diffusion approximation scheme, Ukrainian Math. J., 5 0 , N 1, 36-47. 98. Korolyuk V. S. (1999). Semi-Markov random walk, In Semi-Markov Models and Applications, J. Janssen, N. Limnios (Eds.), pp 61- 75, Kluwer, Dordrecht. 99. Korolyuk V. S., Derzko N.A., Korolyuk V. V. (1999). Markovian repairman problems. Classification and approximation, In Statistical and Probabilistic Models in Reliability, D.C. Ionescu, N. Limnios (Eds), pp 143-151, Birkhauser, Boston. 100. Korolyuk V. S., Korolyuk V. V. (1999). Stochastic Models of Systems, Kluwer, Dordrecht. 101. Korolyuk V.S., Limnios N. (1999). A singular perturbation approach for Liptser’s functional limit theorem and some extensions, Theory Probab. and Math. Statist., 58, pp 83-87. 102. Korolyuk V.S., Limnios N. (1999). Diffusion approximation of integral functionals in merging and averaging scheme, Theory Probab. and Math. Statist., 59, pp 91-98. 103. Korolyuk V.S., Limnios N. (2000). Diffusion approximation of integral functionals in double merging and averaging scheme, Theory Probab. and Math. Statist. 60,pp 87-94. 104. Korolyuk V.S., Limnios N. (2000). Evolutionary systems in an a s y m p totic split state space, in Recent Advances in Reliability Theory: Methodology, Practice and Inference, N. Limnios & M. Nikulin (Eds), pp 145-161, Birkhauser, Boston. 105. Korolyuk V.S., Limnios N. (2002). Poisson Approximation of Homogeneous Stochastic Additive Functionals with Semi-Markov Switching, Theory of
320
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
Probability and Mathematical Statistics, 64, pp 75-84. 106. Korolyuk V.S., Limnios N. (2003). Increment Processes and its Stochastic Exponential with Markov Switching in Poisson Approximation Scheme, Computers and Mathematics Applications, 46 (7), 1073-1080. 107. Korolyuk V.S., Limnios N. (2004). Average and diffusion approximation for evolutionary systems in an asymptotic split phase space, Annals Appl. Probab., 14(1), pp 489-516. 108. Korolyuk V.S., Limnios N. (2002). Markov additive processes in a phase merging scheme, Theory Stochastic Processes, vol. 8, no 24, pp 213-226. 109. Korolyuk V.S., Limnios N. (2004). Semi-Markov random walk in Poisson approximation scheme, Communication i n Statistics - Theory and Methods, 33(3), pp 507-516. 110. Korolyuk V.S., Limnios N. (2004). LBvy approximation of increment processes with Markov switching, Stochastics and Stochastic Reports, 76(5), 383-374. 111. Korolyuk V.S., Limnios N. (2004). Poisson Approximation of Increment Processes with Markov Switching, Theory Probab. Appl., 49(4), 1-18. 112. Korolyuk V.S., Limnios N. (2005). Diffusion approximation for evolutionary systems with equilibrium in asymptotic split phase space, Theory of Probability and Mathematical Statistics, 70. 113. Korolyuk V.S., Limnios N. (2005). Centered semi-Markov random walk in diffusion approximation scheme, Proc. Intern. Symposium on Applied Stoch. Proc. Data Anal., ASMDA-2005, Brest. 114. Korolyuk V S , Momonova A. (2003). DAN of Ukraine. 115. Korolyuk V.S., Portenko N., Syta H. (Eds.) (2000). Skorokhod’s Ideas in Probability Theory, Institute of Mathematics, Kiev. 116. Korolyuk V.S., Turbin A.F. (1993). Mathematical foundations of the state lumping of large systems, Kluwer Academic Publ., Dordtrecht. 117. Korolyuk V.S., Swishchuk A. (1995), Semi-Markov random evolution, Kluwer Academic Publ., Dordrecht. 118. Korolyuk V.S., Swishchuk A. (1995), Evolution of System in Random Media, CRC Press. 119. Kovalenko I.N., Kuznetsov N.Yu., Pegg P. A. (1997). Mathematical Theory of Reliability of Time Dependent Systems with Practical Applications, Wiley, Chichester . 120. Kurtz T.G. (1972). A random trotter formula, Proceeding of the American Mathematical Society, 35(1), 147-154. 121. Kushner H.J. (1990). Weak Convergence Methods and Singular Perturbed Stochastic Control and Filtering Problems, Birkhauser, Boston. 122. Kushner H.J., Clark D.S. (1978). Stochastic Approzimation Methods for Constrained and Unconstrained Systems, Springer-Verlag, N.Y., 1978. 123. Kushner H.J., Dupuis P.G. (1992). Numerical Methods for Stochastic Control Problems in Continuous Time, Springer-Verlag, N.Y. 124. Iglehart D.L. (1969). Diffusion approximations in collective risk theory, J. Appl. Prob., 6, 285-292. 125. Lamperti J. (1977). Stochastic Processes, Springer-Verlag, N.Y.
BIBLIOGRAPHY
321
126. Lapeyre B., P a r d o n E., Sentis R. (1998). Me‘thodes de Monte-Carlo pour les Equations de Transport et de Diffusion, Springer, Paris. 127. Limnios N., Opriaan G. (2001). Semi-Markov Processes and Reliability, Birkhauser, Boston. 128. Lindvall T. (1973). Weak convergence of probability measures and random function space D[O,+m), J. Appl. Prob., 10, 109-121. 129. Lindvall T. (1992). Lectures on the Coupling Method, Wiley, N.Y. 130. Liptser R. Sh. (1984), On a functional limit theorem for finite state space Markov processes, in Steklov Seminar on Statistics and Control of Stochastic Processes, pp 305-316, Optimization Software, Inc., N.Y. 131. Liptser R. Sh. (1994). The Bogolubov averaging principle for semimartinNo 4, gales, Proceedings of the Steklov Institute of Mathematics, MOSCOW, 12 pages. 132. Liptser R. Sh., Shiryayev A. N. (1989). Theory of Martingales, Kluwer Academic Publishers, Dordrecht, The Netherlands. 133. Liptser R. Sh., Shiryayev A. N. (1991). Martingale and limit problems for theorems for stochastic processes, in Encyclopaedia of Mathematical Sciences. Probability Theory 111, Yu. Prokhorov and A.N. Shiryaev (Eds), Springer, pp. 158-247. 134. Maglaras C., Zeevi A. (2004). Diffusion approximation for a multiclass markovian service system with ”guaranteed” and ” best-effort” service level, Math. Oper. Res., 29(4), 786-813. 135. Maxwell M., Woodroofe M. (2000). Central limit theorems for additive functionals of Markov chains, Ann. Probab., 28(2), 713-724. 136. MBtivier M. (1982). Semimartingales. A course on Stochastic Processes, Walter de Gruyter, Berlin. 137. Meyn S.P., Tweedie R.L. (1993). Markov Chains and Stochastic Stability, Springer, N.Y. 138. Murdock J.A. (1999). Perturbations: Theory and Methods, SIAM, Classics in Applied Mathematics, Philadelphia. 139. Nummelin E. (1984). General Irreducible Markov Chains and Non-negative Operators, Cambridge University Press, Cambridge. 140. Oksendal B. (1998). Stochastic Differential Equations, Fifth Edition, Springer, Berlin. 141. Orey S. (1971). Lecture Notes on Limit Theorems for Markov Chain Transition Probabilities, Van Nostrand Reinhold, London. 142. Papanicolaou G. (1987). Random media, Springer-Verlag, Berlin. 143. Papanicolaou G., Kohler W., White B. (1991). Random media, Lectures in Applied Math., 27, SIAM, Philadelphia. 144. Parzen E. (1999). Stochastic Processes, SIAM Classics, Philadelphia. 145. Petrov V.V (1995). Limit Theorems of Probability Theory, Clarendon, Oxford. 146. Pinsky M. (1991), Lectures on Random Evolutions, World Scientific, Singapore. 147. Pollard D. (1984). Convergence of Stochastic Processes, Springer, N.Y. 148. Pollett P.K. (1996). Quasistationary distributions bibliography.
322
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
149. Prabhu N.U. (1980). Stochastic Storage Processes, Springer-Verlag, Berlin. 150. Revuz D. (1975). Markov Chains, North-Holland, Amsterdam. 151. Revuz D., Yor M. (1999). Continuous Martingales and Brownian Motion, Springer, 3rd Edition, Berlin. 152. Royden H.L. (1988). Real Analysis, 3rd Ed., McMilan, N.Y. 153. Rogers L.C.G., Williams D. (1994). Diffusions, Markov Processes, and Martingales, vol. l & 2, J. Wiley & Sons, Chichester, U.K. 154. Rudin W. (1991). Functional Analysis, McGraw-Hill, N.Y. 155. Sat0 K.-I. (1999). L h y processes and infinitely divisible distributions. Cambridge Studies in Advanced Mathematics, 68. Cambridge University Press, Cambridge. 156. Shanbhag D.N., Rao C.R. (Eds.) (2001). Stochastic Processes: Theory and Methods, Handbook of Statistics, Vol. 19, Elsevier. 157. Shiryaev A.N. (1999). Essentials of Stochastic Finance: Facts, Models, Theory, World Scientific, Singapore. 158. Shurenkov V.M. (1984). On the theory of Markov renewal, Theor. Probab. Appl., 19(2), 247-265. 159. Shurenkov V.M. (1989). The Ergodic Theory of Markov Processes, (in Russian), Nayka, Moscow. 160. Shurenkov V.M. (1998). Ergodic Theorems and Related Problems, VSP, Utrecht. 161. Skorokhod A.V. (1984). Random Linear Operators, R. Reidel-Kluwer, Dordrecht. 162. Skorokhod A.V. (1988). Stochastic Equations for Complex Systems, R. Reidel-Kluwer , Dordrecht. 163. Skorokhod A.V. (1989). Asymptotic Methods in the Theory of Stochastic Differential Equations, AMS, vol. 78, Providence. 164. Skorokhod A.V. (1991). Random Processes with Independent Increments, Kluwer, Dordrecht. 165. Skorokhod A.V. (1991). Lectures on the Theory of Stochastic Processes, VSP, Utrecht. 166. Skorokhod A.V., Borovskikh Yu. V. (Eds.) (1995). Exploring Stochastic Laws, VSP. 167. Skorokhod A.V., Hoppensteadt F.C., Salehi H. (2002). Random Perturbation Methods with Applications in Science and Engineering, SpringerVerlang, N.Y. 168. Silvestrov D. S. (2001). The perturbed renewal equation and diffusion type approximation for risk processes, Theory of Probability and Mathematical Statistics, 62, 145-156. 169. Silvestrov D. S. (2004). Limit Theorems for Randomly Stopped Stochastic Processes. Series: Probability and its Applications, Springer. 170. Sobsczyk K. (1991). Stochastic Differential Equations, Kluwer, Dordrecht. 171. Stone C. (1963). Weak convergence of stochastic processes defined on semiinfinite time intervals, Proc. Amer. Math. SOC.,14, 694-696. 172. Stroock D.W. (1993). Probability Theory: An Analytic View, Cambridge University Press, Cambridge.
BIBLIOGRAPHY
323
173. Stroock D.W., Varadhan S.R.S. (1979). Multidimensional Diffusion Processes, Springer-Verlag, Berlin. 174. Sviridenko M.N. (1998). Martingale characterization of limit distributions in the space of functions without discontinuities of second kind, Math. Notes, 43, NO 5, pp 398-402. 175. Sviridenko M.N. (1986). Martingale approach to limit theorems for semiMarkov processes, Theor. Probab. Appl., pp 540-545. 176. Swishchuk A. (1997). Random evolutions and their applications, Kluwer, Dordrecht. 177. Van Pul M.C.J. (1991). Statistical analysis of software reliability models, CWI Tract, Amsterdam. 178. Vishyk M.I., Lyusternik L.A. (1960). On solutions of some problems related to perturbations in the case of matrices and selfadjoint or non-selfadjoint differential equations, Uspekhi Mat. Nauk, 15(3), 3-80, (in Russian). 179. Watanabe H. (1984). Diffusion approximation of some stochastic difference equations, Adv. Probab. Appl., 7, 439-546. 180. Watkins J. (1984). A central limit theorem in random evolution, Ann. Probab., 12, 480-514. 181. Whitt W. (2002). Stochastic-Process Limits: An Introduction to StochasticProcess Limits and their Applications to Queues, Springer, N.Y. 182. Ye J.J. (1997). Dynamic programming and the maximum principle for control of piecewise deterministic Markov processes, In Mathematics of Stochastic Manufacturing Systems, Yin G.G., Zhang Q. (Eds), AMS, Lectures in Applied Mathematics, vol. 33, Providence. 183. Yin G.G. (2001). On limit results for a class of singularly perturbed switching diffusions, J . Theor. Probab., 14(3), 673-697. 184. Yin G.G., Zhang Q. (1998). Continuous-Time Markov Chains and Applications. A singular perturbation approach, Springer, N.Y. 185. Yin G.G., Zhang Q. (2005). Discrete-Time Markov Chains. Two-Time-Scale Methods and Applications, 3pringer, N.Y. 186. Yosida K. (1980). Functional Analysis, Springer, Berlin. 187. Yu P.S. (1977). On accuracy improvement and applicability conditions of diffusion approximation with applications to modelling of computer systems, Technical Report No. 129, Digital Systems Lab., Stanford University. 188. Zhenting H., Qingfeng C. (1988). Homogeneous Denumerable Markov Processes, Springer, Berlin and Science Press, Beijing.
This page intentionally left blank
Notation
the set the set the set the set
of of of of
natural numbers relative integers rational numbers real numbers
R \ (01 the set [-00, +m] the set of real positive numbers [0, +m) distribution function of the sojourn time in state x E E F x = 1 - F, characteristic function of increments cumulant function the transpose of vector w E Rd Dirac measure in a E Rd probability space indicator function of set A mathematical expectation of X variance of X random evolution filtration stochastic basis 9= (R, 3,F, P) state space, a measurable space transition kernel of Markov chain transition function of a Markov process transition operator associated to P(x,B) transition operator associated t o Pt(x, B)
325
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
generator of a Markov process, semi-Markov kernel potential operator of the generator Q stationary distribution of (semi-) Markov processes stationary distribution of the Markov chain z, n 2 0 projection operator on the null space of generator Q a class of split state space E = Ur=v=, Ek stationary distribution of Markov process on Ek mean sojourn time in state z mean times, pmean: m = J, p(dz)m(z) embedded Markov chain of a semi-Markov process jump times of a semi-Markov process, TO = 0 inter-jump times of a semi-Markov process sojourn time in state z E E of a semi-Markov process counting process of jumps the time of the last jump before t , r ( t ) = T ~ ( ~ ) the number of the first jump after t , v+(t)= v(t) 1 the time of the first jump after t , r+(t)= T ~ + ( ~ ) square characteristic of the martingale M T = ( B ( t )C , ( t ) I’,(t)) , the predictable characteristics of semimartingales the Banach space of real measurable bounded functions defined on E the space of real-valued continuous bounded functions defined on E the space of bounded continuous functions on E vanishing a t infinity the space of bounded continuous functions on E having continuous derivatives of order up to and including k vanishing a t infinity the space of bounded continuous functions on E having continuous derivatives of all orders vanishing a t infinity the space of k-th times continuously differentiable functions on E with compact support the space of twice differentiable function in the first argument and continuous in the second the class of real-valued bounded continuous functions g with g(u)/u2-+ 0 as I u I +0 the space of E-valued continuous functions defined on R+
+
C2>’(lRdx E) c 3(R)
cE[o,
INDEX
a.e. a.s. i.i.d. r.v.(s)
PI1 PSI1
PLII MAP SMP MRP a.s. d
P
----t
d
--+ LP
+ 4
327
the space of E-valued right continuous functions having left limits (cadlag), defined on R+ the domain of the operator Q maximum of a and b minimum of a and b the integer part of the real number z increment of sequence ( x n ) ,Axn = x, - xn-l jump of the process x ( t ) at t , Ax(t) = z ( t )- x ( t - ) the shift operator on the sequences: q z o , 51,. . . ) = ( 2 1 , x 2 , . . . ) small o of x in xo E R, limz-zo o ( x ) / x = O big 0 of x in 20 E R: lim,,,o O ( x ) / z= c E R a test function; cp'(u),( ~ " ( u ... ) , are derivatives of function cp, cpI(u,w), cpE,(u, w), ... derivatives with respect t o the first variable almost everywhere almost surely independent and identically distributed random variable(s) process with independent increment process with stationary independent increment process with locally independent increments Markov additive process Semi-Markov process Markov renewal process almost sure convergence convergence in probability convergence in distribution convergence in norm LP weak convergence in the sense of Skorokhod topology in the space D[O,00) or in Sup norm in C[O, 00).
This page intentionally left blank
Index
extended -, 59 compound Poisson process, 12, 104 convergence-determining class, 302, 305 counting process, 8, 20, 28 CramBr’s condition, 229 cumulant, 13
absorption condition, 110 absorption time, 112, 121, 243 additive functional, 15, 36, 47, 61, 67, 74-77, 84, 86, 90, 94, 104, 122, 123, 126, 128, 130, 132, 150, 152, 154, 175, 228, 230, 237, 240 with equilibrium, 93 arithmetic distribution, 22 auxiliary processes, 23 average approximation, 67-69, 71-76, 79, 80, 82, 84, 85, 90, 91, 93, 97, 103, 116, 117, 119-121, 139, 150-152, 154-160, 182, 186, 201, 203, 205, 216, 217, 219, 220, 258, 259, 264, 265, 268
diffusion coefficient, 10 operator, 10 time-homogeneous -, 10 diffusion approximation, 67, 70, 71, 73, 81,82, 84, 88, 89, 95, 97, 112, 122, 128, 139, 160, 161, 165, 166, 168, 170, 172, 173, 175, 176, 180, 188, 189, 200, 202, 203, 206, 217, 219, 228, 261, 264, 268, 270 with equilibrium, 90, 173 diffusion process, 10 direct Riemann integrable, 22 domain, 7 domain of generator, 7 drift coefficient, 10 dynamical system, 104 Dynkin formula, 16
Banach space, 1 birth and death process, 269, 270 Brownian motion, 10, 12 Chapman-Kolmogorov equation, 3, 6 characteristic exponent, 278 characteristic function, 12 compact containment condition, 200, 209, 210, 240, 309 compensating measure of jumps, 63 compensating operator, 24, 25, 36, 39, 40, 42, 56-60, 67-70, 72, 73, 76, 80, 100-102, 153, 155, 157, 163, 166, 176-178, 180, 182, 184-186, 188, 196, 201, 212, 213, 220, 237, 262-265, 268, 282, 289, 295
equilibrium process, 97 ergodic merging, 104 evolutionary equation, 13, 36 evolutionary system, 43 exponential process, 224 329
330
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
extended Markov renewal process, 25 Factorization theorem, 64 filtration, 1 generalized diffusion, 27 generator, 7, 10 impulsive process, 61, 221-223, 225, 276 increment process, 40 integral functional, 36, 104, 134 invariant probability, 4 LBvy approximation, 276 Levy process, 12 characteristics -, 280 LBvy-Khintchine formula, 12, 13, 278 limit distribution, 23 Liptser formula, 135, 290 local martingale, 16 Lyapounov function, 275 Markov additive process, 46, 277 Markov additive semimartingale, 61, 63, 65 Markov chain, 3 +-irreducible -, 4 aperiodic -, 4 embedded -, 7 ergodic -, 4 Harris positive -, 4 periodic -, 4 positive -, 4 strong -, 3 uniformly irreducible -, 4 Markov kernel, 2, 3 Markov process, 7 egodic -, 6 jump -, 7 martingale characterization, 17 pure jump -, 9 regular -, 7, 21 time-homogeneous -, 6 weak differentiable -, 14
with locally independent increments, 14 Markov property, 3, 5, 6 strong -, 3, 6 Markov renewal equation, 21, 23 Markov renewal process, 20 extended -, 59 martingale characterization - , 25 Markov renewal theorem, 21-23 Markov semigroup, 7 Markov transition function, 2 martingale, 16 Doob-Mayer decomposition, 16 problem, 17 square integrable, 16 measure-determining class, 305 measure-determining class, 219, 302 merging condition, 123 double -, 112, 121, 137, 185 function, 106, 114, 226, 231, 249 state space, 103 with absorption, 110 modulated process, 35 operator closed -, 32 densely defined -, 32 inverse -, 32 normally solvable -, 32 potential -, 33 reducible-invertible -, 32 Ornstein-Uhlenbeck process, 97, 269, 270, 275 phase merging principle, 75, 139 phase space, 2 piecewise-deterministic Markov processes, 14 Poisson equation, 33 Poisson approximation, 193, 219, 220, 230, 231, 241, 258, 260, 261, 263, 268 condition, 222, 229, 260, 279 Poisson process, 12
33 1
INDEX
Polish space, 1 potential operator, 5 predictable characteristics, 26, 27, 63 predictable process, 25 process adapted, 2 impulsive, 61 independent increments -, 11 LII, 14 locally independent increment -, 47 stationary and independent increments -, 11 projector, 32 random evolution, 38, 50 continuous -, 50 coupled -, 165 coupled -, 52, 71 jump -, 50, 54 Markov jump -, 55 semi-Markov -, 56 reducible-invertible operator, 31, 139 relatively compact, 303 reliability, 243 renewal process, 258 renewal processes superposition, 253 repairable system, 269 second modified characteristic, 62 semi-Markov kernel, 19 random walk, 258 semi-Markov process, 19, 20 regular -, 21 semigroup uniformly continuous -, 7 semimartingale, 25, 64, 65 special -, 25 shift operator, 41 signed kernel, 3, 106 singular perturbation, 139 solvability condition, 33 split double -, 121 split and merging
ergodic -, 123 split condition, 123 split state space, 103 square-integrability condition, 222 standard state space, 1, 301 state space, 2 stationary distribution, 4, 6, 8 stationary phase merging, 249 stationary projector, 5, 6 stochastic basis, 1 stochastic integral functional, 30 stopping time, 2 sub-Markov kernel, 2 switched process, 35 switching Markov -, 42, 47, 49, 52, 64, 65, 71, 73, 77, 83, 93, 94, 99, 118, 120, 121, 123, 125-130, 132, 134, 151, 154, 159-161, 172, 173, 175, 188, 193, 196, 209, 219-221, 225, 227, 228, 231, 232, 269, 270 process, 35 semi-Markov -, 36, 38, 41-44, 50, 52, 54, 59, 61, 67, 68, 73-75, 79, 81, 82, 98, 104, 116, 117, 119, 120, 153, 154, 156-158, 165, 166, 168-170, 173, 176, 180, 182, 186, 188, 189, 193, 201, 212, 216, 217, 219, 220, 228, 230, 237, 267 tight, 303 truncation function, 26 uniform square-integrability, 229 usual conditions, 1 weak convergence, 301 weak topology, 301 Wiener process, 10