Random Integral Equations with Applications to Life Sciences and Engineering
To A. T. Bharucha-Reid
. . .
Socrates
This is Volume 108 in MATHEMATICS IN SCIENCE AND ENGINEERING, a series of monographs and textbooks edited by RICHARD BELLMAN, University of Southern California. The complete listing of books in this series is available from the Publisher upon request.
RANDOM INTEGRAL EQUATIONS WITH APPLICATIONS TO LIFE SCIENCES AND ENGINEERING
Chris P. Tsokos DEPARTMENT OF MATHEMATICS, UNIVERSITY OF SOUTH FLORIDA, TAMPA, FLORIDA
W. J. Padgett DEPARTMENT OF MATHEMATICS AND COMPUTER SCIENCE, UNIVERSITY OF SOUTH CAROLINA, COLUMBIA, SOUTH CAROLINA
ACADEMIC PRESS New York and London 1974, A Subsidiary of Harcourt Brace Jovanovich, Publishers
COPYRIGHT © 1974, BY ACADEMIC PRESS, INC.
ALL RIGHTS RESERVED. N O PART OF THIS PUBLICATION MAY BE REPRODUCED OR TRANSMITTED IN ANY FORM OR BY ANY MEANS, ELECTRONIC OR MECHANICAL, INCLUDING PHOTOCOPY, RECORDING, OR ANY INFORMATION STORAGE AND RETRIEVAL SYSTEM, WITHOUT PERMISSION IN WRITING FROM THE PUBLISHER.
ACADEMIC PRESS, INC.
111 Fifth Avenue, New York, New York 10003
United Kingdom Edition published by ACADEMIC PRESS, INC. (LONDON) LTD. 24/28 Oval Road, London NW1
Library of Congress Cataloging in Publication Data: Tsokos, Chris P. Random integral equations with applications to life sciences and engineering. (Mathematics in science and engineering, v. 108) Bibliography: p. 1. Stochastic integral equations. I. Padgett, W. J., joint author. II. Title. III. Series. QA274.27.T76 5w.45 73-2079 ISBN 0-12-702150-7
AMS (MOS) 1970 Subject Classifications: 60H20, 45G99, 93E15
PRINTED IN THE UNITED STATES OF AMERICA
Contents

PREFACE ix

General Introduction 1

Chapter I. Preliminaries and Formulation of the Stochastic Equations
1.0 Introduction 6
1.1 Basic Definitions and Theorems from Functional Analysis 7
1.2 Probabilistic Definitions 12
1.3 The Stochastic Integral Equations and Stochastic Differential Systems 18
Appendix 1.A 22
Chapter II. Some Random Integral Equations of the Volterra Type with Applications
2.0 Introduction 29
2.1 The Random Integral Equation 30
2.1.1 Existence and Uniqueness of a Random Solution 30
2.1.2 Some Special Cases 33
2.1.3 Asymptotic Stability of the Random Solution 38
2.2 Some Applications of the Equation 39
2.2.1 Generalization of Poincaré-Lyapunov Stability Theorem 40
2.2.2 A Problem in Telephone Traffic Theory 42
2.2.3 A Stochastic Integral Equation in Hereditary Mechanics 46
2.3 The Random Integral Equation 49
2.4 Applications of the Integral Equation 55
2.4.1 A Stochastic Integral Equation in Turbulence Theory 55
2.4.2 Stochastic Models for Chemotherapy 57
Chapter III. Approximate Solution of the Random Volterra Integral Equation and an Application to Population Growth Modeling
3.0 Introduction 65
3.1 The Method of Successive Approximations 66
3.1.1 Convergence of the Successive Approximations 68
3.1.2 Rate of Convergence and Error of Approximation 71
3.1.3 Combined Error of Approximation and Numerical Integration 74
3.2 A New Stochastic Formulation of a Population Growth Problem 78
3.2.1 The Deterministic Model 79
3.2.2 The Stochastic Model 81
3.2.3 Existence and Uniqueness of a Random Solution 84
3.3 Method of Stochastic Approximation 87
3.3.1 A Stochastic Approximation Procedure 87
3.3.2 Solution of Eq. (3.0.1) by Stochastic Approximation 89
3.3.3 Numerical Solution for a Hypothetical Population 94

Chapter IV. A Stochastic Integral Equation of the Fredholm Type and Some Applications
4.0 Introduction 97
4.1 Existence and Uniqueness of a Random Solution 98
4.2 Some Special Cases 110
4.3 Stochastic Asymptotic Stability of the Random Solution 113
4.4 An Application in Stochastic Control Systems 115
4.5 A Random Perturbed Fredholm Integral Equation 120
Chapter V. Random Discrete Fredholm and Volterra Systems
5.0 Introduction 132
5.1 Existence and Uniqueness of a Random Solution of System (5.0.1) 133
5.2 Special Cases of Theorem 5.1.2 136
5.3 Stochastic Stability of the Random Solution 139
5.4 An Approximation to System (5.0.1) 141
5.5 Application to Stochastic Control Systems 148
5.5.1 A Discrete Stochastic System 148
5.5.2 Another Discrete Stochastic System 152
Chapter VI. Nonlinear Perturbed Random Integral Equations and Application to Biological Systems
6.0 Introduction 156
6.1 The Random Integral Equation 157
6.1.1 Existence and Uniqueness of a Random Solution 157
6.1.2 Some Special Cases 159
6.2 Applications to Biological Systems 165
6.2.1 A Random Integral Equation in a Metabolizing System 165
6.2.2 A Stochastic Physiological Model 170
6.2.3 A Stochastic Model for Communicable Disease 176

Chapter VII. On a Nonlinear Random Integral Equation with Application to Stochastic Chemical Kinetics
7.0 Introduction 180
7.1 Mathematical Preliminaries 181
7.2 An Existence and Uniqueness Theorem 194
7.3 A Stochastic Chemical Kinetics Model 197
7.3.1 The Concept of Chemical Kinetics 198
7.3.2 Stochastic Interpretation of the Rate of Reaction 201
7.3.3 Rate Functions of General Reaction Systems 201
7.3.4 A Stochastic Integral Equation Arising in Chemical Kinetics 204
Chapter VIII. Stochastic Integral Equations of the Itô Type
8.0 Introduction 207
8.1 Preliminary Remarks 208
8.2 On an Itô Stochastic Integral Equation 212
8.3 On Itô-Doob-Type Stochastic Integral Equations 214
8.3.1 An Existence Theorem 215
Chapter IX. Stochastic Nonlinear Differential Systems
9.0 Introduction 217
9.1 Reduction of the Stochastic Differential Systems 219
9.1.1 Stochastic System (9.0.1)-(9.0.2) 219
9.1.2 The Random Differential System (9.0.3)-(9.0.4) 220
9.1.3 The Stochastic System (9.0.5)-(9.0.6) 221
9.1.4 The Random Differential System (9.0.7)-(9.0.8) 222
9.2 Stochastic Absolute Stability of the Differential Systems 225
Appendix 9.A 239
9.A.1 Stochastic Differential System (9.0.1)-(9.0.2) 239
9.A.2 Stochastic Differential System (9.0.3)-(9.0.4) 240
9.A.3 The Reduced Stochastic Integral Form of Systems (9.0.1)-(9.0.2) and (9.0.3)-(9.0.4) 240
Chapter X. Stochastic Integrodifferential Systems
10.0 Introduction 241
10.1 The Stochastic Integrodifferential Equation 243
10.1.1 Asymptotic Behavior of the Random Solution 247
10.1.2 Application to a Stochastic Differential System 250
10.2 Reduction of the Stochastic Nonlinear Integrodifferential Systems with Time Lag 251
10.2.1 The Integrodifferential System (10.0.2)-(10.0.3) 251
10.2.2 The Random Integrodifferential System (10.0.4)-(10.0.5) 253
10.3 Stochastic Absolute Stability of the Systems 255

Bibliography 260

Index 275
Preface
Random or stochastic integral equations arise in virtually every field of scientific endeavor. Recently, attempts have been made by many scientists and mathematicians to develop and unify the theory of stochastic or random equations using the concepts and methods of probability theory and functional analysis. We have two main objectives in this book. First, we wish to give a complete presentation of various aspects of some of the most general forms of nonlinear stochastic integral equations of the Volterra and Fredholm types which have been studied, including the problems of existence, uniqueness, stochastic stability, and approximation of random solutions of the equations. In addition, we investigate stochastic integral equations of the Itô-Doob type. The second objective is to apply the theory developed to some very important problems in the life sciences and engineering. With respect to the applications, stochastic models for various phenomena in the biological, engineering, and physical sciences are obtained. For example, applications of the theory to the following areas are given: telephone traffic theory, hereditary mechanics, turbulence theory, chemotherapy, population growth, stochastic control systems, metabolizing systems, physiological models, communicable diseases, chemical kinetics, and stochastic integrodifferential systems. The book will be of value to mathematicians, probabilists, statisticians, and engineers who are working in the theoretical and applied aspects of random integral equations. The book can be used for a beginning graduate
course on random integral equations with emphasis being placed on probabilistic modeling of various problems in life sciences and engineering. It should be pointed out that this book differs in purpose considerably from the book of A. T. Bharucha-Reid [7]. He is concerned primarily with the overall theory of random integral equations, whereas we emphasize the stochastic modeling aspects and applications of certain types of random or stochastic integral equations. Thus, the two books are complementary. This book was written with the direct and indirect help of many people. We are grateful to Dr. J. Susan Milton for her helpful and stimulating discussions during the preparation of the manuscript. We would also like to acknowledge Dr. A. N. V. Rao for his assistance and valuable comments and Ms. Debbi Beach for her excellent typing of the manuscript. In addition we would like to express our appreciation to Professor Richard Bellman for his encouragement and interest in the subject matter. Finally, we would like to express our sincere thanks to our families for their understanding and patience during the writing of the book.
General Introduction†
Due to the nondeterministic nature of phenomena in the general areas of the biological, engineering, oceanographic, and physical sciences, the mathematical descriptions of such phenomena frequently result in random or stochastic equations. It is the aim of this book to present theoretical results concerning certain classes of stochastic or random equations and then to apply those results to problems that arise in the general areas just mentioned. In order to understand better the importance of developing such a theory and its application, it is of interest to consider first the various ways in which these equations may arise. Usually the mathematical models or equations used to describe physical phenomena contain certain parameters or coefficients which have specific physical interpretations but whose values are unknown. As examples, we have the diffusion coefficient in the theory of diffusion, the volume-scattering coefficient in underwater acoustics, the coefficient of viscosity in fluid mechanics, the propagation coefficient in the theory of wave propagation, and the modulus of elasticity in the theory of elasticity, among others. The mathematical equations are solved using as the value of the parameter or coefficient the mean value of a set of observations experimentally obtained. However, if the
† Adapted from Padgett and Tsokos [12] with permission of Taylor and Francis, Ltd.
experiment is performed repeatedly, then the mean values found will vary, and if the variation is large, the mean value actually used may be quite unsatisfactory. Thus in practice the physical constant is not really a constant, but a random variable whose behavior is governed by some probability distribution. It is thus advantageous to view these equations as being random rather than deterministic, to search for a random solution, and to study its statistical properties. There are many other ways in which random or stochastic equations arise. Stochastic differential equations appear quite naturally in the study of diffusion processes and Brownian motion (Gikhman and Skorokhod [1]). The classical Itô stochastic integral equation (Itô [1]) may be found in many texts, for example, Doob [1]. Integral equations with random kernels arise in random eigenvalue problems (Bharucha-Reid [7]). Stochastic integral equations describe wave propagation in random media (Bharucha-Reid [7]) and the total number of conversations held at a given time in telephone traffic theory (Fortet [1] and Padgett and Tsokos [4]). In the theory of statistical turbulence, stochastic integral equations arise in describing the motion of a point in a continuous fluid in turbulent motion (Bharucha-Reid [7], Lumley [1], Padgett and Tsokos [3]). Integral equations were used in a deterministic sense by Bellman, Jacquez, and Kalaba [1-3] in the development of mathematical models of chemotherapy. However, due to the nondeterministic nature of diffusion processes from the blood plasma into the body tissue, the stochastic versions of these equations are more realistic and should be used (Padgett and Tsokos [1, 2, 10]). Stochastic integral equations also arise in problems in chemical kinetics and metabolizing systems (Milton and Tsokos [1, 4]). Random equations are also frequently encountered in a natural way in systems theory (for example, Morozan [1-5] and Tsokos [1-5]).
These examples point out the importance of random equations in diverse areas. However, in many instances the scientist tends to use a deterministic model to represent a process under investigation with the philosophy that there is a deterministic but unknown function x(t) which describes the phenomenon he observes. He then attempts by experimental methods to determine as accurately as possible the form of this function. A standard procedure is to obtain, at several specified values of t, observations on the value of x(t) and then to use as the "true" value of x(t) some estimate based on these observations, the usual estimate being the mean. In this way a single trajectory can be constructed which is then taken as the true form of x(t) and is subsequently used in working with the model. This general technique characterizes the deterministic approach to a physical situation. However, if this procedure were repeated many times, even under the most carefully controlled conditions, the trajectories so obtained will differ, and in most cases
this variation could be quite significant. If this is indeed the case, then there is evidence that there is more than mere measurement error entering into the picture and that, in fact, the function which governs the process is not a fixed unknown entity, but a random one. Thus it is more realistic in this situation to construct a stochastic model for the system rather than a deterministic model. This entails the basic assumption that at each point t, x(t) is not a fixed unknown value which should be estimated, but rather a random variable which we denote by x(t; ω), where ω ∈ Ω, the supporting set of a complete probability measure space (Ω, A, P). An important point to be made is that if a stochastic model is assumed when a deterministic model could be justified, nothing is lost; but if a deterministic model is assumed when in fact the process is random, then the results obtained could be quite unsatisfactory. Recently attempts have been made by many scientists and mathematicians to develop and unify the theory of stochastic or random equations: Adomian [1], Ahmed [1], Anderson [1, 2], Bharucha-Reid [1-7], Hans [1], Tsokos [4], Padgett and Tsokos [1, 3, 5-9, 11, 12, 15], Rao [1], Milton et al. [1]. It was Antonín Špaček from Prague, Czechoslovakia, who began this work, utilizing the concepts and methods of probability theory and functional analysis. In fact, Bharucha-Reid [5] refers to probabilistic functional analysis as being concerned with the applications and extensions of the methods of functional analysis to the study of the various concepts, processes, and structures which arise in the theory of probability and its applications. Random or stochastic equations have been categorized into four main classes as follows:

(1) Random or stochastic algebraic equations.
(2) Random differential equations.
(3) Random difference equations.
(4) Random or stochastic integral equations.
In this book we will be concerned with certain classes of random or stochastic integral equations of the Volterra and Fredholm types and a class of random integrodifferential equations of the Volterra type. For example, in Chapters II and III we will study various aspects of the stochastic integral equation of the Volterra type in the form

x(t; ω) = h(t; ω) + ∫_0^t k(t, τ; ω) f(τ, x(τ; ω)) dτ   (0.1)

for t ≥ 0, where the integral is interpreted as a mean-square integral. We will consider the existence, uniqueness, asymptotic behavior, and approximation of random solutions of each type of stochastic integral equation studied in this book. In addition, the second aim of the book is to present numerous
applications of the theory of such equations to the problems in chemotherapy, chemical kinetics, physiological systems, population growth, telephone engineering, turbulence, and systems theory as previously mentioned. Furthermore, these equations are more general than any random Volterra or Fredholm equations of these forms that have been studied to date. The generality consists primarily in the choice of the stochastic kernel and the nonlinearity of the equations. This book includes the recent work of the authors, Padgett and Tsokos [1-15], and generalizes the work of Hans [1], Bharucha-Reid [1, 3-5], and Anderson [1, 2]. In the area of systems theory we will apply the general theory which is presented concerning random integral equations to certain problems in random differential systems and random integrodifferential systems (Tsokos [1-3, 5], Tsokos and Hamdan [1]). The nonlinear stochastic differential and integrodifferential systems will be reduced in a unified way to stochastic nonlinear integral equations. Then in each case the existence and uniqueness of a random solution of the stochastic system will be investigated. In addition, we will consider the concept of stochastic absolute stability of the systems. This type of stability has been studied in the deterministic case by many scientists, but to the knowledge of the authors it has not been utilized for random systems. The concept of absolute stability arose in the context of differential control systems and the general theory of stability of motion. The primary mathematical technique which was universally used to study absolute stability was Lyapunov's direct method. However, in the late 1950's, when Lyapunov's method appeared to be exhausted, V. M. Popov developed a new approach, obtaining very elegant and powerful results. Popov's method is known as the frequency response method.
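The successive-approximations treatment of Eq. (0.1) can be sketched numerically: sample a few outcomes ω, iterate x_{m+1} = h + ∫ k f(·, x_m) pathwise with a quadrature rule, and average over the sampled paths to estimate mean-square quantities. The fragment below is a minimal sketch under those assumptions; the function names and the linear test case mentioned afterward are illustrative, not taken from the book.

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal rule, written out so the sketch does not depend on a
    particular NumPy version's name for it."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def solve_random_volterra(h, k, f, t_grid, omegas, n_iter=25):
    """Pathwise Picard iteration for x(t; w) = h(t; w) + int_0^t k(t, s; w) f(s, x(s; w)) ds.

    h(t, w), k(t, s, w), f(s, x) are scalar functions; omegas is a list of
    sampled outcomes w.  Returns an array of shape (len(omegas), len(t_grid))
    whose row j is the approximate sample path x(.; w_j)."""
    paths = []
    for w in omegas:
        x = np.array([h(t, w) for t in t_grid])        # x_0 = h, the free term
        for _ in range(n_iter):                        # x_{m+1} = h + K(x_m)
            x_new = np.empty_like(x)
            for i, t in enumerate(t_grid):
                if i == 0:
                    x_new[0] = h(t, w)                 # integral over [0, 0] vanishes
                    continue
                s = t_grid[: i + 1]
                integrand = np.array([k(t, sj, w) * f(sj, x[j]) for j, sj in enumerate(s)])
                x_new[i] = h(t, w) + _trapezoid(integrand, s)
            x = x_new
        paths.append(x)
    return np.array(paths)
```

For the linear test case h(t; ω) = ω, k ≡ a, f(s, x) = x, the exact solution is x(t; ω) = ω e^{at}, which the iteration reproduces to quadrature accuracy; statistics of the random solution, such as the mean-square norm at each t, can then be estimated by averaging over the rows.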
In Chapter IX we successfully utilize the frequency response method with a random parameter to investigate the stochastic absolute stability of several stochastic differential systems. These results generalize the recent results of Morozan [1] in that he chose a specific form of the stochastic kernel, namely an exponential form. In Chapter I we present preliminary notations, definitions, lemmas, and theorems which are essential to the aims of this book. Further, we define and formulate the types of stochastic integral equations and the stochastic differential systems which will be investigated in later chapters. Finally, in Appendix 1.A of Chapter I the proofs of some of the fixed-point theorems that are utilized throughout the book are given. As already mentioned, in Chapters II and III we will investigate the existence, uniqueness, asymptotic properties, and approximation of random solutions of certain stochastic integral equations of the Volterra type. In addition, several applications and examples of such equations will be presented in the areas of chemotherapy, telephone traffic theory, turbulence theory, hereditary mechanics, and
stochastic systems theory. Chapter IV will be devoted to stochastic integral equations of the Fredholm type. Certain random or stochastic discrete Volterra and Fredholm equations will be considered in Chapter V along with their approximate solutions and applications. Chapter VI will deal with perturbed versions of the random equations studied in Chapters II and IV and their application to biological systems. In Chapter VII an application of a nonlinear random integral equation to a problem in stochastic chemical kinetics is presented. A connection between Itô's equation and certain classes of stochastic integral equations studied in earlier chapters is given in Chapter VIII; that is, Itô's equation is studied by applying some aspects of the "theory of admissibility" (Corduneanu [1]). Several stochastic nonlinear differential systems and their stochastic absolute stability are studied in Chapter IX. Chapter X is devoted to the investigation of a class of random or stochastic integrodifferential equations and its application to nonlinear stochastic systems.
CHAPTER I
Preliminaries and Formulation of the Stochastic Equations
1.0 Introduction
In an attempt to make this book essentially self-contained, one purpose of the present chapter is to give some of the basic definitions and theory of functional analysis which will be used throughout the text. Therefore Section 1.1 will consist of the statements of several definitions concerning linear topological spaces and some important theorems which are needed in later discussions. In Appendix 1.A we will give the proofs of some of the classical fixed-point theorems of functional analysis for the interested reader, but otherwise most of the theorems will be stated without proof for the sake of brevity. The second purpose of this chapter is to present the probabilistic definitions, basic assumptions, and notations that are essential to the development of the theory in later chapters. These will be given in Sections 1.2 and 1.3. Section 1.2 will be devoted to certain probabilistic definitions and notations, while the specific types of stochastic or random integral equations which will
be investigated are given in Section 1.3. Some definitions and notations concerning the stochastic differential systems to be studied also will be presented in Section 1.3.

1.1 Basic Definitions and Theorems from Functional Analysis
The following basic definitions and theorems are stated for the convenience of the reader.

Definition 1.1.1 A real-valued measurable function f(x) defined on a closed interval [a, b] is said to be a square-summable function if

∫_a^b |f(x)|² dx < +∞.

We shall designate the class of all such functions by the symbol L₂.

Definition 1.1.2 A real number associated with f ∈ L₂, denoted by

‖f‖ = ( ∫_a^b |f(x)|² dx )^{1/2},

is called the norm of f.

Definition 1.1.3 The element f of the space L₂ is called a limit of the sequence f₁, f₂, f₃, . . . of elements of the same space if for every ε > 0 there exists a nonnegative integer N such that ‖fₙ − f‖ < ε for all n > N.

Definition 1.1.4 A nonempty set H is called a metric space if to an arbitrary pair x, y of elements in H there corresponds a real number ρ(x, y) possessing the following properties:
(i) ρ(x, y) ≥ 0, where ρ(x, y) = 0 if and only if x = y.
(ii) ρ(x, y) = ρ(y, x).
(iii) ρ(x, z) ≤ ρ(x, y) + ρ(y, z) for any x, y, z ∈ H (triangle inequality).

The number ρ(x, y) is the distance between the elements x and y.

Definition 1.1.5 A sequence {xₙ} of elements in a metric space is said to be convergent in itself or a Cauchy sequence if for every ε > 0 there exists an
N such that for n > N and m > N we have ρ(xₘ, xₙ) < ε.
Definition 1.1.6 A metric space H is said to be complete if every sequence of its elements which is convergent in itself has a limit in H.

Definition 1.1.7 A set H of elements x, y, z, . . . is said to be a linear space if:

(i) To every pair of elements x and y of H there corresponds a third element of H, z = x + y, called the sum of x and y.
(ii) To every element x ∈ H and every scalar a there corresponds an element, ax ∈ H, which is called the product of a and x.
(iii) The operations introduced possess the following properties for every x, y, z ∈ H and scalars a and b:

(1) x + y = y + x, i.e., addition is commutative.
(2) (x + y) + z = x + (y + z), i.e., addition is associative.
(3) x + y = x + z implies y = z.
(4) 1x = x.
(5) a(bx) = (ab)x.
(6) (a + b)x = ax + bx.
(7) a(x + y) = ax + ay.
Definition 1.1.8 A linear space H is said to be normed if to each x ∈ H there corresponds a real number ‖x‖, called the norm of this element, possessing the following properties for every y ∈ H and every scalar a:

(i) ‖x‖ ≥ 0, where ‖x‖ = 0 if and only if x = 0.
(ii) ‖ax‖ = |a| · ‖x‖, and in particular ‖−x‖ = ‖x‖.
(iii) ‖x + y‖ ≤ ‖x‖ + ‖y‖.

Definition 1.1.9 A complete normed linear space is called a Banach space. A Fréchet space is a complete linear metric space.

Definition 1.1.10 Let H be a given set, and let B be a set of subsets of H having the following properties:

(i) H ∈ B.
(ii) ∅ ∈ B.
(iii) The union of any nonempty family of sets from B belongs to B.
(iv) The intersection of any two sets of B belongs to B.

The ordered pair (H, B) is then called a topological space and B is the topology of the space. The sets belonging to B are called open sets.

Definition 1.1.11 If the set H in Definition 1.1.10 is a linear space and the two basic operations (addition and multiplication by a constant) are continuous, then H is a linear topological space or vector space.
Definition 1.1.12 Let H and H₁ be metric spaces and let T be a rule which associates some point y ∈ H₁ with every point x ∈ H. Such a rule is called an operator which is defined on the space H and maps H into the space H₁. If y ∈ H₁ is the point which the operator T assigns to the point x ∈ H, we write y = T(x) and call y the value of the operator T at the point x.

Definition 1.1.13 Let the operator T map the metric space H into itself. If there exists a real number q, 0 < q < 1, such that for arbitrary points x and x′ of the space H we have

ρ(T(x), T(x′)) ≤ q · ρ(x, x′),

then we call T a (strict) contraction operator.

Definition 1.1.14 Let f, g ∈ L₁(−∞, ∞). The function

h(x) = ∫_{−∞}^{∞} f(x − y)g(y) dy = ∫_{−∞}^{∞} f(y)g(x − y) dy

is defined almost everywhere on the real axis and is called the convolution of the functions f and g.

Definition 1.1.15 Let (H, B) be a topological space. We shall say that a collection of subsets B′ ⊂ B is a base for this topology if for every O ∈ B there exists a subset O′ ∈ B′ such that O′ ⊂ O.

Definition 1.1.16 A linear topological space is said to be locally convex if it possesses a base for its topology consisting of convex sets.

We now state several theorems which will be needed in later chapters. The proofs of the classical fixed-point theorems are presented in Appendix 1.A for the convenience of the interested reader.

Theorem 1.1.1 (Minkowski's inequality) (Natanson [1]) If f(x) ∈ L₂ and g(x) ∈ L₂, then f(x) + g(x) ∈ L₂ and ‖f + g‖ ≤ ‖f‖ + ‖g‖.
Theorem 1.1.2 (S. Banach's fixed-point principle) (Natanson [2]) If T is a contraction operator on a complete metric space H, then there exists a unique point x* ∈ H for which T(x*) = x*.
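Banach's principle is constructive: iterating T from any starting point converges to the unique fixed point, and the contraction constant q yields the computable stopping bound ρ(x_{n+1}, x*) ≤ q/(1 − q) · ρ(xₙ, x_{n+1}). A minimal sketch on the real line (the helper name and the choice of cos as the contraction are illustrative, not from the book):

```python
import math

def banach_fixed_point(T, x0, q, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = T(x_n) for a contraction T with constant q < 1.

    The a-posteriori estimate rho(x_{n+1}, x*) <= q/(1-q) * rho(x_n, x_{n+1})
    serves as the stopping rule, so the returned point is within tol of the
    unique fixed point guaranteed by the theorem."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if q / (1.0 - q) * abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos maps [0, 1] into itself and |cos'(x)| = |sin x| <= sin 1 < 1 there,
# so it is a strict contraction with q = sin 1 ~ 0.842.
root = banach_fixed_point(math.cos, 0.5, math.sin(1.0))
```

The iterate converges to the unique solution of cos x = x regardless of the starting point in [0, 1], exactly as the theorem predicts.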
Theorem 1.1.3 (Closed-graph theorem) (Goldberg [1]) A closed linear operator mapping a Banach space into a Banach space is continuous.
Theorem 1.1.4 (Halanay [1]) If f(x), g(x) ∈ L₁(−∞, ∞), then the convolution h(x) is defined for almost every x, h(x) ∈ L₁(−∞, ∞), and we have

∫_{−∞}^{∞} |h(x)| dx ≤ ∫_{−∞}^{∞} |f(x)| dx · ∫_{−∞}^{∞} |g(x)| dx.
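The bound ∫|h(x)| dx ≤ ∫|f(x)| dx · ∫|g(x)| dx on the convolution can be checked numerically by replacing each integral with a Riemann sum on a uniform grid, with np.convolve playing the role of the convolution integral. A minimal sketch (the helper name is illustrative):

```python
import numpy as np

def l1_convolution_check(f_vals, g_vals, dx):
    """Compare ||f*g||_1 with ||f||_1 * ||g||_1 for samples on a uniform grid.

    np.convolve(f, g) * dx is a Riemann-sum approximation of the convolution
    integral; the three L1 norms are likewise Riemann sums."""
    h_vals = np.convolve(f_vals, g_vals) * dx
    norm_h = np.sum(np.abs(h_vals)) * dx
    return norm_h, (np.sum(np.abs(f_vals)) * dx) * (np.sum(np.abs(g_vals)) * dx)
```

For nonnegative f and g the two sides agree (the discrete sums factor exactly), while a sign-changing factor such as sin(3x)·e^{−x²} makes the inequality strict, mirroring the fact that cancellation can only decrease the L₁ norm of the convolution.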
Theorem 1.1.5 (Halanay [1]) The Fourier transform of the convolution h(x) is the product of the Fourier transforms of the functions f(x) and g(x).

Theorem 1.1.6 (Parseval equality) (Bochner [1]) Let f(t) ∈ L₁(−∞, ∞) ∩ L₂(−∞, ∞) and

f̂(λ) = ∫_{−∞}^{∞} e^{−iλt} f(t) dt,   for λ real.

Then

∫_{−∞}^{∞} |f̂(λ)|² dλ = 2π ∫_{−∞}^{∞} |f(t)|² dt.

Lemma 1.1.7 (Barbalat [1]) If

(i) f(t) is a continuous function, and its derivative f′(t) is bounded for t ≥ 0;
(ii) G(x) is a continuous function, G(x) > 0 for any x ≠ 0, G(0) = 0; and
(iii) ∫_0^∞ G[f(t)] dt < ∞;

then

lim_{t→∞} f(t) = 0.
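The Parseval equality above can be verified for a concrete f ∈ L₁ ∩ L₂. The Gaussian below is an illustrative choice (its transform is again a Gaussian, so truncating the line at ±10 costs only terms of order e⁻⁵⁰), and all integrals are approximated by Riemann sums:

```python
import numpy as np

# f(t) = exp(-t^2/2) lies in L1 ∩ L2.
t = np.linspace(-10.0, 10.0, 2001)
lam = np.linspace(-10.0, 10.0, 2001)
dt = t[1] - t[0]
f = np.exp(-t**2 / 2.0)

# f_hat(lam) = ∫ exp(-i*lam*t) f(t) dt, evaluated by quadrature for each lam
f_hat = np.array([np.sum(np.exp(-1j * l * t) * f) * dt for l in lam])

lhs = np.sum(np.abs(f_hat) ** 2) * (lam[1] - lam[0])   # ∫ |f_hat|^2 d(lam)
rhs = 2.0 * np.pi * np.sum(np.abs(f) ** 2) * dt        # 2π ∫ |f|^2 dt
```

Both sides come out equal to 2π√π ≈ 11.137, since ∫ e^{−t²} dt = √π; the agreement is limited only by the quadrature, not by the identity.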
Definition 1.1.17 A continuous operator T from a Banach space H into a Banach space H₁ such that the image of closed bounded sets in H is compact is called a completely continuous operator.
Two other useful fixed-point theorems are due to Schauder and Krasnosel'skii.

Theorem 1.1.8 (Schauder's fixed-point principle) (Krasnosel'skii [1]) Let W be a closed, bounded convex set in a Banach space, and let T be a completely continuous operator on W such that T(W) ⊂ W. Then T has at least one fixed point in W. That is, there is at least one x* ∈ W such that T(x*) = x*.
Theorem 1.1.9 (Krasnosel'skii's fixed-point theorem) (Krasnosel'skii [1]) Let S be a closed, bounded convex subset of a Banach space, and let U and V be operators on S satisfying:

(i) U(x) + V(y) ∈ S whenever x, y ∈ S.
(ii) U is a contraction operator on S.
(iii) V is completely continuous.

Then there is at least one point x* ∈ S such that U(x*) + V(x*) = x*. That is, there is at least one point in S which is a fixed point of the operator U + V.
Note that Schauder's fixed-point theorem is a special case of Theorem 1.1.9.

Definition 1.1.18 Let H be a linear space. A mapping (x, y) taking points x and y in H into the real (or complex) numbers is called an inner product if for each x, y, z ∈ H and scalar a we have

(i) (x + y, z) = (x, z) + (y, z).
(ii) (ax, y) = a(x, y).
(iii) (x, y) = (y, x)‾, the bar denoting complex conjugate.
(iv) (x, x) > 0 if x is not the zero element of H.

In this case H is called an inner product space. The norm of an element x ∈ H may be defined in terms of the inner product by

‖x‖ = (x, x)^{1/2}.
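A concrete finite-dimensional instance of this definition is complex coordinate space with (x, y) = Σₖ xₖ ȳₖ, for which the induced norm is the usual Euclidean one. A quick numerical check (note that NumPy's vdot conjugates its first argument, so (x, y) corresponds to vdot(y, x)):

```python
import numpy as np

x = np.array([1.0 + 2.0j, 3.0 - 1.0j])
y = np.array([2.0 - 1.0j, 0.0 + 1.0j])

ip = lambda u, v: np.vdot(v, u)        # (u, v) = sum_k u_k * conj(v_k)

herm = ip(x, y) == np.conj(ip(y, x))   # property (iii): conjugate symmetry
norm_x = np.sqrt(ip(x, x).real)        # ||x|| = (x, x)^{1/2}
```

The induced norm agrees with np.linalg.norm, and (x, x) is real and strictly positive for nonzero x, as properties (iii) and (iv) require.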
Definition 1.1.19 A Banach space H whose norm is defined in terms of the inner product as just given is called a Hilbert space.

The following theorems will also be needed in the sequel.

Theorem 1.1.10 (Dunford and Schwartz [1]) If {Tₙ} is a sequence of continuous linear operators from a Fréchet space H into a Fréchet space H₁ such that for each x ∈ H, lim_{n→∞} Tₙ(x) = T(x) exists, then lim_{x→0} Tₙ(x) = 0 uniformly for n = 1, 2, . . . , and T is a continuous linear operator from H into H₁.
Theorem 1.1.12 (Yosida [l, p. 761) A linear space X can be topologized by a family of semi-norms satisfying the axiom of separation in such a way that the space is locally convex.
12
I
PRELIMINARIES AND FORMULATION OF THE EQUATIONS
Theorem 2.2.23 (Horvath [I, p. 961) The locally convex topology defined on a linear space X by a family of semi-norms is Hausdorff if and only if the family satisfies the axiom of separation. 1.2
Probabilistic Definitions
We shall denote by (R, d,9) a probability measure space; that is, R is a nonempty abstract set, d is a a-algebra of subsets of R, and 9 is a complete probability measure on d. The following spaces of functions are basic to this investigation.
) denote the space of all Definition 1.2.1 C = C ( R + , L z ( R , d , 9 )will continuous and bounded functions on R + = [0, CQ) with values in L,(Cl, d, 9). Definition 1.2.2 We shall denote by C, = C,(R+, L,(R, d, b))the space such that of all continuous functions from R + into L,(R, d,9)
where Z is a positive number andg(t) is a positive continuous function defined on R , .
Definition 1.2.3
We shall further define the following space: C,
= C,(R+, L,(R, d,9)) is the space of all continuous functions from R ,
into L,(R, d,9) with the topology of uniform convergence on the intervals [0, TI for T > 0. This space, C,, is a locally convex space (Yosida [l, pp. 24-26]) whose topology will be defined by means of the following family of semi-norms :
These semi-norms satisfy the following conditions :
2 0, for n = 1 , 2 , 3 , . . .; if Ilx(t; w)ll, = 0 for all n, then i.e., x ( t ;w ) is the zero element of C,.
(i)
Ilx(t; w)ll,
(4
IIctx(t;w)II. = 14. Ilx(t;w)ll,. Ilx(t; 4 y ( t ;4 1 1 , G Ilx(t; w)ll.
x ( t ; w ) = 0 a.e.,
(i$ + + Ilv(t; 411,. We now proceed to verify that the manner in which we have defined the semi-norms (1.2.1) in the space C, satisfies Conditions (iHiii). Condition (i) is obviously satisfied from the definition of semi-norm. Condition (ii) can be
shown as follows:
||αx(t; ω)||_n = sup_{0≤t≤n} { ∫_Ω |αx(t; ω)|² dP(ω) }^{1/2} = |α| · ||x(t; ω)||_n.
Next we must show that the triangle inequality is satisfied, that is,
||x(t; ω) + y(t; ω)||_n ≤ ||x(t; ω)||_n + ||y(t; ω)||_n.   (1.2.2)
Applying Minkowski's inequality and the fact that
sup_{0≤t≤n} [f(t) + g(t)] ≤ sup_{0≤t≤n} f(t) + sup_{0≤t≤n} g(t),
it follows from the definition of the semi-norm that (1.2.2) holds.
Therefore we have shown that it is valid to define the semi-norms by (1.2.1). One can define the topology on this space by the following distance function:
ρ(x, y) = Σ_{n=1}^∞ 2^{-n} ||x − y||_n / [1 + ||x − y||_n].
With respect to this distance function ρ(x, y), the space C_c(R⁺, L₂(Ω, A, P)) is a complete metric space. That is, every Cauchy sequence is convergent. In this space a sequence of functions is convergent if and only if it is convergent on every compact interval [0, T], 0 < T.
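The distance function ρ(x, y) can be illustrated numerically. The following sketch (Python; the Monte Carlo discretization of the L₂(Ω) norm, the grid spacing `dt`, and the truncation of the series at `terms` summands are assumptions made for the example, not part of the text) approximates the semi-norms ||·||_n from an array of simulated sample paths and sums the series defining ρ:

```python
import numpy as np

def seminorm(x, n, dt):
    """||x||_n: sup over [0, n] of the L2(Omega) norm, with x an array of
    sample paths, x[i, j] = x(t_j; omega_i) on the grid t_j = j * dt."""
    m = int(round(n / dt)) + 1                   # grid points covering [0, n]
    l2 = np.sqrt((x[:, :m] ** 2).mean(axis=0))   # {E|x(t)|^2}^(1/2) at each t_j
    return l2.max()

def metric(x, y, dt, terms=20):
    """rho(x, y) = sum_{n>=1} 2^(-n) ||x - y||_n / (1 + ||x - y||_n)."""
    total = 0.0
    for n in range(1, terms + 1):
        s = seminorm(x - y, n, dt)
        total += 2.0 ** (-n) * s / (1.0 + s)
    return total
```

Since each summand is less than 2^{-n}, the truncated sum underestimates ρ by at most 2^{-terms}; the metric is bounded by 1, and ρ(x, y) = 0 exactly when all the semi-norms of x − y vanish.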
Note that the following inclusions hold: C ⊂ C_g ⊂ C_c. Note also that the space C = C(R⁺, L₂(Ω, A, P)) is the space of all second-order stochastic processes (Prabhu [1]) defined on R⁺ which are bounded and continuous in mean square.
Let B and D be a pair of Banach spaces such that B, D ⊂ C_c, and let T be a linear operator from C_c into itself. Then with respect to B, D, and T we state the following definitions.
Definition 1.2.4 The pair of spaces (B, D) will be called admissible with respect to the operator T: C_c(R⁺, L₂(Ω, A, P)) → C_c(R⁺, L₂(Ω, A, P)) if and only if T(B) ⊂ D.
Definition 1.2.5 An operator T is said to be closed if from x_n(t; ω) → x(t; ω) in B and (Tx_n)(t; ω) → y(t; ω) in D it follows that (Tx)(t; ω) = y(t; ω).
Definition 1.2.6 The operator T is said to be continuous on C_c(R⁺, L₂(Ω, A, P)) if and only if (Tx_n)(t; ω) → (Tx)(t; ω) in C_c(R⁺, L₂(Ω, A, P)) for every sequence {x_n(t; ω)} such that x_n(t; ω) → x(t; ω) in the same space.
Definition 1.2.7 By the space L_∞(Ω, A, P) we mean the space of all measurable and P-essentially bounded functions, i.e., a function is in L_∞(Ω, A, P) if it is bounded in the ordinary sense on a set Ω − Ω₀, where P(Ω₀) = 0.
Definition 1.2.8 By stating that the Banach space B is stronger than the space C_c(R⁺, L₂(Ω, A, P)) we mean that every sequence which converges in B, with respect to its norm, will also converge in C_c (but the converse is not true in general).
Definitions 1.2.9–1.2.11 and Theorem 1.2.1 are due to Bharucha-Reid, Mukherjea, and Tserpes [1].
Definition 1.2.9 By the space L_p(Ω, A, P), 1 ≤ p < ∞, we mean the space of all measurable functions defined on Ω such that for each t ∈ R⁺ we have
∫_Ω |x(t; ω)|^p dP(ω) < +∞.
The norm in this space is defined by
||x(t; ω)||_p = { ∫_Ω |x(t; ω)|^p dP(ω) }^{1/p} < ∞.
If x(t; ω) is a vector-valued function with m components, then we define, as usual,
|x(t; ω)| = [x₁²(t; ω) + x₂²(t; ω) + ··· + x_m²(t; ω)]^{1/2}.
Definition 1.2.10 Let q = p/(p − 1), 1 < p < ∞. The sequence {x_n(ω)}, ω ∈ Ω, converges weakly to x(ω) in L_p(Ω, A, P) if
lim_{n→∞} ∫_Ω x_n(ω) h(ω) dP(ω) = ∫_Ω x(ω) h(ω) dP(ω)
for every h ∈ L_q(Ω, A, P). If p = 1, then {x_n(ω)} converges weakly to x(ω) in L₁(Ω, A, P) if
lim_{n→∞} ∫_Ω x_n(ω) h(ω) dP(ω) = ∫_Ω x(ω) h(ω) dP(ω)
for every bounded measurable function h on Ω.
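A concrete illustration of weak (but not strong) convergence: take Ω = [0, 1] with Lebesgue measure and let x_n(ω) = sign(sin(2ⁿπω)), the Rademacher functions. Then ∫ x_n h dP → 0 for every h ∈ L₂(Ω), while ||x_n||₂ = 1 for all n, so {x_n} converges weakly to 0 but not in norm. The following sketch (Python; the midpoint-grid quadrature and the particular choice h(ω) = ω are assumptions made for the example) checks this numerically:

```python
import numpy as np

N = 2 ** 16
w = (np.arange(N) + 0.5) / N             # midpoint grid on Omega = [0, 1]
h = w                                     # a fixed element h of L2(Omega)

def rademacher(n, w):
    """x_n(omega) = sign(sin(2^n pi omega)), the n-th Rademacher function."""
    return np.sign(np.sin((2 ** n) * np.pi * w))

# integral of x_n h dP -> 0 (weak convergence), while ||x_n||_2 = 1 for all n
weak = [abs(np.mean(rademacher(n, w) * h)) for n in range(1, 9)]
norms = [np.sqrt(np.mean(rademacher(n, w) ** 2)) for n in range(1, 9)]
```

For this h one computes |∫₀¹ x_n(ω) ω dω| = 2^{-(n+1)}, which tends to zero, whereas the L₂ norms stay equal to 1.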
Definition 1.2.11 A family of measurable functions {x_n(t; ω)}, n = 1, 2, …, is said to be an equicontinuous family if for every ε > 0 there is a δ > 0 such that ||x_n(t₁; ω) − x_n(t₂; ω)|| < ε whenever |t₁ − t₂| < δ, for all n = 1, 2, ….
The following random version of the Arzelà–Ascoli theorem will be used in Chapter II.
Theorem 1.2.1 Let Φ be the class of all functions x(t; ω) which are product-measurable on [0, 1] × Ω and satisfy |x(t; ω)| ≤ N₀. Suppose that for every n the map x_n(t; ω) ∈ Φ is continuous from [0, 1] into L₁(Ω, A, P). Assume further that the family of maps {x*[x_n(t; ω)]}, n = 1, 2, …, from [0, 1] into the reals R is eventually equicontinuous for every x*; that is, given ε > 0, there is an M and a δ > 0 such that |t − t₀| < δ, n ≥ M, implies that |x*[x_n(t; ω)] − x*[x_n(t₀; ω)]| < ε. Then there exists a subsequence {x_{n_k}(t; ω)} such that for some map x(t; ω) ∈ Φ this subsequence converges in the weak topology of L₁(Ω, A, P) to x(t; ω) for every t ∈ [0, 1].
The following definitions will also be needed in the study.
Definition 1.2.12 Let H be the set of all functions x(t; ω) in C_c(R⁺, L₂(Ω, A, P)) such that:
(i) ||x(t; ω)||²_{L₂(Ω,A,P)} is integrable on R⁺.
(ii) For any function y(t; ω) satisfying (i), y(t; ω) ∈ H if the inner product (x(t; ω), y(t; ω))_{L₂(Ω,A,P)} is integrable on R⁺.
For M > 0 let B_M, D_M ⊂ H be Hilbert spaces with the inner product on B_M defined by
(x, y)_{B_M} = ∫_0^M (x(t; ω), y(t; ω))_{L₂(Ω,A,P)} dt,
and that on D_M, (x, y)_{D_M}, defined likewise. These are valid inner products, as can easily be shown. Since L₂(Ω, A, P) is an inner product space, we have for any scalar α
(αx, y)_{B_M} = α (x, y)_{B_M}.
Also
(x, y)_{B_M} = (y, x)_{B_M},
and if x(t; ω) ≠ 0 for almost all ω ∈ Ω and t ∈ R⁺,
(x, x)_{B_M} > 0.
The norm of an element of B_M is then defined by
||x(t; ω)||_{B_M} = { ∫_0^M ||x(t; ω)||²_{L₂(Ω,A,P)} dt }^{1/2},
and that of an element of D_M, ||x(t; ω)||_{D_M}, is similarly defined. Since we have that ||x(t; ω)||²_{L₂(Ω,A,P)} is integrable on R⁺, for every M > 0 the norms
defined here exist and are finite. If M → ∞, then the norm of an element of B_∞ is given by
||x(t; ω)||_{B_∞} = { ∫_0^∞ ||x(t; ω)||²_{L₂(Ω,A,P)} dt }^{1/2} < ∞,
and the norm of an element of D_∞ is defined likewise. Note that Hilbert spaces such as those just given exist, since we may take C_g, with g(t) = e^{-pt}, p > 0, t ∈ R⁺, and with the appropriate inner product, as the space B_∞ (or D_∞).
In order to study random discrete equations in Chapter V, we must define the following spaces.
Definition 1.2.13 We denote by X the space of all functions x from N, the positive integers, into L₂(Ω, A, P). That is, for each n = 1, 2, …, the value of x at n is x_n(ω) ∈ L₂(Ω, A, P). The topology of X is the topology of uniform convergence on every set N_m = {1, 2, …, m}; that is, x_i → x as i → ∞ in X if and only if
lim_{i→∞} ||x_{i,n}(ω) − x_n(ω)||_{L₂(Ω,A,P)} = 0
uniformly on every set N_m, m = 1, 2, …. Note also that X is a locally convex space (Yosida [1, pp. 24–26]), with the topology defined by the following family of semi-norms:
||x||_m = max_{1≤n≤m} ||x_n(ω)||_{L₂(Ω,A,P)}, m = 1, 2, ….
Definition 1.2.14 We let X_g be the Banach space of sequences in X for which there exist positive numbers g_n < ∞ and some constant Q > 0 such that
||x_n(ω)||_{L₂(Ω,A,P)} ≤ Q g_n, n = 1, 2, ….
The norm in X_g is defined by
||x||_{X_g} = sup_n { ||x_n(ω)||_{L₂(Ω,A,P)} / g_n }.
When g_n = 1 for n = 1, 2, … we obtain the Banach space X₁ of all bounded functions from N into L₂(Ω, A, P). The norm in X₁ is defined by
||x||_{X₁} = sup_n ||x_n(ω)||_{L₂(Ω,A,P)}.
Definition 1.2.15 We shall denote by X_bv the Banach space of all functions in X of bounded variation, that is,
||x||_{X_bv} = ||x₁(ω)||_{L₂(Ω,A,P)} + Σ_{i=1}^∞ ||x_{i+1}(ω) − x_i(ω)||_{L₂(Ω,A,P)} < ∞,
which defines the norm in X_bv. The definitions of X_g, X₁, and X_bv are stochastic generalizations of some spaces considered by Petrovanu [1] in the nonstochastic case.

1.3 The Stochastic Integral Equations and Stochastic Differential Systems
In this section we will give the main types of stochastic integral equations to be investigated and state some of the assumptions that are made. Also, certain definitions concerning the stochastic differential systems to be studied will be presented.
The main types of equations which will be studied in Chapters II, III, and IV are those of the Volterra type in the form
x(t; ω) = h(t; ω) + ∫_0^t k(t, τ; ω) f(τ, x(τ; ω)) dτ   (1.3.1)
and those of the Fredholm type in the form
x(t; ω) = h(t; ω) + ∫_0^∞ k₀(t, τ; ω) e(τ, x(τ; ω)) dτ,   (1.3.2)
where t ≥ 0 and
(i) ω is a point of Ω;
(ii) h(t; ω) is the stochastic free term or free random variable defined for 0 ≤ t and ω ∈ Ω;
(iii) x(t; ω) is the unknown random variable for each t ≥ 0;
(iv) the stochastic kernel k(t, τ; ω) is defined for 0 ≤ τ ≤ t < ∞ and ω ∈ Ω;
(v) the stochastic kernel k₀(t, τ; ω) is defined for 0 ≤ t < ∞, 0 ≤ τ < ∞, and ω ∈ Ω;
(vi) f(t, x) and e(t, x) are scalar functions defined for 0 ≤ t and scalars x.
The integrals in Eqs. (1.3.1) and (1.3.2) will be interpreted as mean-square integrals (Prabhu [1]). We shall assume that x(t; ω) and h(t; ω) are functions of the argument t ∈ R⁺ with values in the space L₂(Ω, A, P). The functions f(t, x(t; ω)) and e(t, x(t; ω)) under convenient conditions will also be functions of t ∈ R⁺ with values in L₂(Ω, A, P). The stochastic kernels k(t, τ; ω) and k₀(t, τ; ω) will be essentially bounded functions with respect to P for every t and τ such that 0 ≤ τ ≤ t < ∞ and 0 ≤ t < ∞, 0 ≤ τ < ∞, respectively. The values of the
stochastic kernel k(t, τ; ω) for fixed t and τ will be in L_∞(Ω, A, P), so that the product of k(t, τ; ω) and f(τ, x(τ; ω)) will always be in L₂(Ω, A, P). A similar assumption holds for k₀(t, τ; ω) for fixed t and τ.
With respect to the stochastic kernel k(t, τ; ω), we shall assume that the mapping
(t, τ) → k(t, τ; ω)
from the set Δ = {(t, τ): 0 ≤ τ ≤ t < ∞} into L_∞(Ω, A, P) is continuous. That is, whenever (t_n, τ_n) → (t, τ) as n → ∞ we have
P-ess sup_{ω∈Ω} |k(t_n, τ_n; ω) − k(t, τ; ω)| → 0   as n → ∞
or, equivalently,
inf_{Ω₀} { sup_{ω∈Ω−Ω₀} |k(t_n, τ_n; ω) − k(t, τ; ω)| } → 0   as n → ∞,
where P(Ω₀) = 0. Likewise, for the stochastic kernel k₀(t, τ; ω) we will assume that the mapping
(t, τ) → k₀(t, τ; ω)
from the set Δ₀ = {(t, τ): 0 ≤ t < ∞, 0 ≤ τ < ∞} into L_∞(Ω, A, P) is continuous. Further assumptions will be given at appropriate points in the text.
We will also study in Chapter IV a stochastic integral equation of the mixed Volterra–Fredholm type of the form
x(t; ω) = h(t; ω) + ∫_0^t k(t, τ; ω) f(τ, x(τ; ω)) dτ + ∫_0^∞ k₀(t, τ; ω) e(τ, x(τ; ω)) dτ   (1.3.3)
for t ≥ 0. Equation (1.3.3) is of interest because Eqs. (1.3.1) and (1.3.2) arise as special cases of it.
The following perturbed random Volterra and Fredholm integral equations will be investigated in addition to the above types:
x(t; ω) = h(t, x(t; ω)) + ∫_0^t k(t, τ; ω) f(τ, x(τ; ω)) dτ   (1.3.4)
and
x(t; ω) = h(t, x(t; ω)) + ∫_0^∞ k₀(t, τ; ω) e(τ, x(τ; ω)) dτ,   (1.3.5)
where t ≥ 0 and h(t, x) is a scalar function of t and x possessing certain continuity properties which will be stated later. We now give the following definitions.
Definition 1.3.1 By a random solution of any one of the stochastic integral equations (1.3.1)–(1.3.5) we will mean a function x(t; ω) which belongs to C_c(R⁺, L₂(Ω, A, P)) and satisfies the equation P-a.e.
Definition 1.3.2 A random solution x(t; ω) is said to be stochastically asymptotically exponentially stable if there exist constants ρ > 0 and β > 0 such that
{E|x(t; ω)|²}^{1/2} = { ∫_Ω |x(t; ω)|² dP(ω) }^{1/2} ≤ ρ e^{-βt}, t ∈ R⁺.
Definition 1.3.3 A random solution x(t; ω) is said to be asymptotically stable in mean square if
lim_{t→∞} E|x(t; ω)|² = 0.
Definitions 1.3.2 and 1.3.3 are important in applications in which the behavior of a stochastic system as time becomes large is of interest. That is, conditions are needed for which the system remains stable in some sense.
In many applications random nonlinear Volterra integral equations arise in the form
x(t; ω) = h(t; ω) + ∫_0^t K(u, x(u; ω); ω) du,   (1.3.6)
where t ≥ 0 and (i) K(u, x; ω) is the random kernel defined for 0 ≤ u ≤ t < ∞ and ω ∈ Ω; (ii) the random function x(t; ω) is unknown, t ∈ R⁺; and (iii) the random function h(t; ω) is known, t ∈ R⁺. Equations such as (1.3.6) have been studied by Bharucha-Reid, Mukherjea, and Tserpes [1] utilizing the spaces L_p(Ω, A, P) given in Definition 1.2.9. In Chapter II we will present some theory concerning the random equation (1.3.6) and two important applications of such equations.
Definition 1.3.4 A random function x(t; ω) is said to be a random solution of Eq. (1.3.6) if for every t ∈ R⁺ it satisfies the equation P-a.e.
In Chapter V we will use the spaces given by Definitions 1.2.13–1.2.15 in order to investigate the existence and asymptotic behavior of random solutions of stochastic discrete equations of the Volterra type in the form
x_n(ω) = h_n(ω) + Σ_{j=1}^n C_{n,j}(ω) f_j(x_j(ω))   (1.3.7)
and of the Fredholm type in the form
x_n(ω) = h_n(ω) + Σ_{j=1}^∞ C_{n,j}(ω) f_j(x_j(ω))   (1.3.8)
for n = 1, 2, 3, …. These equations may be interpreted as discrete versions of the stochastic integral equations (1.3.1) and (1.3.2).
Definition 1.3.5 A random solution x_n(ω) of Eq. (1.3.7) or (1.3.8) is said to be stochastically geometrically stable if there exist a β > 0 and 0 < α < 1 such that
{E|x_n(ω)|²}^{1/2} = { ∫_Ω |x_n(ω)|² dP(ω) }^{1/2} ≤ β αⁿ, n = 1, 2, 3, ….
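Geometric stability can be observed numerically for a small discrete Volterra system. In the sketch below (Python; the explicit recursion with the sum taken over j < n, the linear choice f_j(x) = x, and the particular h_n(ω) and C_{n,j}(ω) are assumptions made for the example), the coefficients decay fast enough that |x_n(ω)| ≤ 2·(1/2)ⁿ holds samplewise, so {E|x_n(ω)|²}^{1/2} ≤ β αⁿ with β = 2, α = 1/2:

```python
import numpy as np

rng = np.random.default_rng(1)
N, paths = 15, 2000
x = np.zeros((paths, N + 1))                 # x[:, n] holds samples of x_n(omega)
for n in range(1, N + 1):
    acc = rng.uniform(-1, 1, paths) * 0.5 ** n              # h_n(omega)
    for j in range(1, n):
        C = rng.uniform(-0.1, 0.1, paths) * 0.4 ** (n - j)  # C_{n,j}(omega)
        acc = acc + C * x[:, j]              # f_j(x) = x (linear, Lipschitz)
    x[:, n] = acc

ms = np.sqrt((x[:, 1:] ** 2).mean(axis=0))   # {E|x_n|^2}^(1/2), n = 1..N
n_idx = np.arange(1, N + 1)
assert np.all(ms <= 2.0 * 0.5 ** n_idx)      # geometric stability: beta=2, alpha=1/2
```

An induction on n gives |x_n| ≤ (1/2)ⁿ [1 + 0.2 Σ_{k≥1} (0.8)^k] = 1.8·(1/2)ⁿ for every sample path, which is the bound asserted above.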
An important application of the theory which has been obtained for stochastic or random integral equations such as those given earlier is in the area of nonlinear stochastic systems. Throughout this book, particularly in Chapters IX and X, we will apply the theory obtained to stochastic differential systems or stochastic integrodifferential systems in various forms. As representative examples of such systems, we mention the following three:
ẋ(t; ω) = A(ω)x(t; ω) + b(ω)φ(σ(t; ω))
with
σ(t; ω) = (c(ω), x(t; ω));
ẋ(t; ω) = A(ω)x(t; ω) + b(ω)φ(σ(t; ω))
with
σ(t; ω) = f(t; ω) + ∫_0^t (c(t − τ; ω), x(τ; ω)) dτ;
and
ẋ(t; ω) = A(ω)x(t; ω) + ∫_0^t b(t − τ; ω)φ(σ(τ; ω)) dτ
with
σ(t; ω) = f(t; ω) + ∫_0^t (c(t − τ; ω), x(τ; ω)) dτ;
where (x, y) denotes the scalar product in Euclidean space, ˙ = d/dt, and
(i) A(ω) is an n × n matrix whose elements are measurable functions;
(ii) x(t; ω) is an n × 1 vector whose elements are random variables;
(iii) b(ω) and c(ω) are n × 1 vectors whose elements are measurable functions;
(iv) σ(t; ω) is a scalar random variable for each t ∈ R⁺; and (v) f(t; ω) is a scalar random variable for t ∈ R⁺.
Schematic diagrams illustrating some of the important stochastic differential systems that are presented in Chapter IX will be given in Appendix 9.A. Such diagrams are useful in the physical interpretation of stochastic differential systems. Finally, we state two definitions concerning systems such as those just given.
Definition 1.3.6 A matrix A(ω) whose elements are measurable functions is said to be stochastically stable if
P{ω: Re λ_k(ω) < −ε, k = 1, 2, …, n} = 1,
where ε > 0. That is, the characteristic roots λ_k(ω), k = 1, 2, …, n, have negative real parts P-a.e.
Definition 1.3.7 A stochastic differential system is said to be stochastically absolutely stable if there exists a random solution x(t; ω) of the system such that
P{ω: lim_{t→∞} x(t; ω) = 0} = 1.
Appendix 1.A
In order for this work to be more self-contained and complete, we will give in this appendix proofs of the fixed-point theorems of Banach and Schauder and the lemma of Barbalat which were introduced in Section 1.1. Several further definitions, lemmas, and theorems which are needed in the proofs but which were not given earlier will also be presented. We will begin with a proof of the fixed-point theorem of Banach.
Theorem 1.A.1 (S. Banach's fixed-point theorem) If a contraction operator U is defined on a complete metric space E, then there exists a unique point x* in this space for which U(x*) = x*.
PROOF Let x₀ be any point in the space E. Construct a sequence as follows:
x₁ = U(x₀), x₂ = U(x₁), …, x_n = U(x_{n−1}), ….
(This is called a sequence of successive approximations.) This sequence exists because U is defined on E into itself. Let ρ be the metric defined on E. Since
U is a contraction operator on E, we have
ρ(U(x_n), U(x_{n−1})) = ρ(x_{n+1}, x_n) ≤ q ρ(x_n, x_{n−1}),   0 ≤ q < 1.
However,
ρ(x_n, x_{n−1}) ≤ q ρ(x_{n−1}, x_{n−2}),
and hence
ρ(x_{n+1}, x_n) ≤ q² ρ(x_{n−1}, x_{n−2}).
Applying this argument repeatedly, we obtain
ρ(x_{n+1}, x_n) ≤ q^n ρ(x₁, x₀).   (1.A.1)
Using the triangle inequality, we have for m > n
ρ(x_n, x_m) ≤ ρ(x_n, x_{n+1}) + ρ(x_{n+1}, x_{n+2}) + ··· + ρ(x_{m−1}, x_m).
Therefore, using (1.A.1), we obtain
ρ(x_n, x_m) ≤ ρ(x₁, x₀)(q^n + q^{n+1} + ··· + q^{m−1}) = q^n ρ(x₁, x₀)(1 + q + ··· + q^{m−n−1}) ≤ [q^n/(1 − q)] ρ(x₁, x₀),
since the sum of the series 1 + q + q² + ··· is 1/(1 − q) for 0 ≤ q < 1. This shows that {x_n} is a Cauchy sequence. Since E is complete,
lim_{n→∞} x_n = x*
exists and x* ∈ E. Since U is a contraction mapping from E into itself, we have
ρ(x_{n+1}, U(x*)) = ρ(U(x_n), U(x*)) ≤ q ρ(x_n, x*),
which implies that x_{n+1} → U(x*) as n → ∞, from the fact that ρ(x_n, x*) → 0 as n → ∞. Since the limit of {x_n} is unique in E, we see that
x* = lim_{n→∞} x_{n+1} = U(x*).
Therefore a fixed point of U exists in E. To show that x* is unique, suppose that y is another fixed point of U. Then
ρ(y, x*) = ρ(U(y), U(x*)) ≤ q ρ(y, x*) < ρ(y, x*)
since 0 ≤ q < 1, a contradiction. Thus x* is the unique fixed point of U in E, completing the proof.
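The successive approximations of the proof are easy to carry out numerically. The following sketch (Python; the particular contraction U(x) = (1/2)cos x on the complete metric space R, with constant q = 1/2, is an assumption made for the example) iterates x_{n+1} = U(x_n) and checks the a priori estimate ρ(x_n, x*) ≤ [q^n/(1 − q)] ρ(x₁, x₀) obtained by letting m → ∞ in the proof:

```python
import math

def iterate(U, x0, steps):
    """Successive approximations x_{n+1} = U(x_n)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(U(xs[-1]))
    return xs

U = lambda x: 0.5 * math.cos(x)      # contraction on R with constant q = 1/2
xs = iterate(U, 2.0, 60)
x_star = xs[-1]                      # numerical fixed point

q = 0.5
# a priori error bound from the proof: |x_n - x*| <= q^n/(1-q) |x_1 - x0|
for n in range(50):
    assert abs(xs[n] - x_star) <= (q ** n / (1 - q)) * abs(xs[1] - xs[0]) + 1e-12

assert abs(U(x_star) - x_star) < 1e-12   # x* is (numerically) a fixed point
```

The bound holds because |U(a) − U(b)| = (1/2)|cos a − cos b| ≤ (1/2)|a − b| by the mean value theorem, so U is a contraction with q = 1/2 regardless of the starting point.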
In order to prove the fixed-point theorem of Schauder, we will need a theorem of Brouwer which we now discuss.
Definition 1.A.1 A retraction is a continuous mapping from a space onto a subspace such that the subspace remains fixed.
Theorem 1.A.2 (Brouwer's fixed-point theorem) Any continuous map f: Dⁿ → Dⁿ has a fixed point x, f(x) = x, where Dⁿ = {x ∈ Eⁿ: |x| ≤ 1}.
Theorem 1.A.2 is an immediate consequence of the following theorem.
Theorem 1.A.3 There exists no retraction of a closed ball onto its boundary.
PROOF OF THEOREM 1.A.2 FOR n ≤ 2 Suppose f: Dⁿ → Dⁿ has no fixed point; that is, for all x ∈ Dⁿ, f(x) ≠ x. For x ∈ D² we can draw a straight line through f(x) and x which intersects the circumference S¹ of the unit disk at some point p*. Then the transformation x → p* would be a continuous transformation of the whole disk D² onto its circumference S¹ and would leave each point of the circumference fixed, which contradicts Theorem 1.A.3. Therefore f: D² → D² has a fixed point. If n = 1, then D¹ = [−1, 1] and S⁰ = {−1, 1}. Applying the same argument, we have that f: D¹ → D¹ has a fixed point.
The proof for n > 2 is essentially the same. By assuming that f: Dⁿ → Dⁿ has no fixed point, it is shown that S^{n−1} is a retract of Dⁿ, which, by Theorem 1.A.3, is a contradiction.
Definition 1.A.2 Let B be a subset of a metric space and ε > 0. A finite set of points {q₁, q₂, …, q_s} is called an ε-net for B if for every point x ∈ B there exists a q_i such that ρ(x, q_i) < ε.
Definition 1.A.3 Let M be a compact set in a normed linear space and let M̄ be its closure. Let v₁, …, v_p be an ε-net of M̄, and for x ∈ M̄ define
F_ε(x) = [ Σ_{i=1}^p m_i(x) v_i ] / [ Σ_{i=1}^p m_i(x) ],   (1.A.2)
where
m_i(x) = max{0, ε − ||x − v_i||}.
Using this definition, we may obtain the following theorem, which is used in the proof of Schauder's fixed-point theorem.
Theorem 1.A.4 Let T be a compact transformation with domain W, a bounded subspace of H, a normed linear space, and let T(W) ⊂ M. Let F_ε be defined on M̄ as given by Eq. (1.A.2). If x ∈ M, then
||T(x) − F_ε(T(x))|| < ε.
PROOF By definition,
||T(x) − F_ε(T(x))|| = || Σ_i m_i(T(x)) [T(x) − v_i] || / Σ_i m_i(T(x)) ≤ [ Σ_i m_i(T(x)) ||T(x) − v_i|| ] / Σ_i m_i(T(x)) < ε,
since m_i(T(x)) = 0 whenever ||T(x) − v_i|| ≥ ε, by definition of m_i(x), completing the proof.
Theorem 1.A.5 (Schauder's fixed-point theorem) Let M be a convex, bounded, closed set in a Banach space H and let T be a compact transformation such that T(M) ⊂ M. Then T has a fixed point in M. That is, there exists an x₀ ∈ M so that T(x₀) = x₀.
PROOF Since T(M) ⊂ M, we have T(M) ⊂ M̄. The fact that M is closed implies that M = M̄, and hence T(M̄) ⊂ M. Let {ε_n} be a monotone decreasing sequence such that lim_{n→∞} ε_n = 0. Let T_n = F_{ε_n} T be defined on M as described in Theorem 1.A.4. For x ∈ M we have
T_n(x) = F_{ε_n}(T(x)).
But T(x) = y ∈ M; therefore F_{ε_n}(T(x)) = F_{ε_n}(y). Suppose {v₁, …, v_{p_n}} is an ε_n-net of T(M). Then
F_{ε_n}(y) = [ m₁(y)v₁ + ··· + m_{p_n}(y)v_{p_n} ] / [ m₁(y) + ··· + m_{p_n}(y) ].
Set
μ_i = m_i(y) / [ m₁(y) + ··· + m_{p_n}(y) ],
so that F_{ε_n}(y) = Σ_i μ_i v_i with μ_i ≥ 0 and Σ_i μ_i = 1. Therefore, since M is convex, F_{ε_n}(y) ∈ M, which implies that T_n(M) ⊂ M. Let H_n be the finite-dimensional subspace of H which is spanned by {v₁, …, v_{p_n}}, and let M_n = M ∩ H_n. Now M is closed and H_n is closed since it is a
finite-dimensional space. This implies that M_n is closed, since the intersection of two closed sets is closed. Also M is convex, and H_n is convex, since for x₁ = Σ_{i=1}^{p_n} α_i v_i ∈ H_n and x₂ = Σ_{i=1}^{p_n} β_i v_i ∈ H_n we have
λx₁ + (1 − λ)x₂ = Σ_{i=1}^{p_n} [λα_i + (1 − λ)β_i] v_i ∈ H_n,
where 0 ≤ λ ≤ 1. Hence M_n is convex (the intersection of two convex sets is convex). Therefore M_n is a closed convex subset of H_n.
The transformation T_n is defined on M and M_n ⊂ M, which implies that T_n is defined on M_n. Also, T_n(M_n) ⊂ H_n, for if x ∈ M_n, then
T_n(x) = F_{ε_n}(T(x)) = [ Σ_{i=1}^{p_n} m_i[T(x)] v_i ] / [ Σ_{i=1}^{p_n} m_i[T(x)] ] = Σ_{i=1}^{p_n} μ_i v_i ∈ H_n.
Also, T_n(x) ∈ M, since T_n(M) ⊂ M and x ∈ M_n ⊂ M imply T_n(x) ∈ M. Thus
T_n(x) ∈ M ∩ H_n = M_n,
which means that T_n(M_n) ⊂ M_n. Since F_{ε_n} is continuous and T is continuous, T_n is continuous, and by Brouwer's fixed-point theorem (Theorem 1.A.2) there is a point x_n ∈ M_n such that
T_n(x_n) = x_n.
REMARK The set {T(x_n)} is contained in the compact closure of T(M); this closure is compact since the ε_n-net gives a finite covering.
Now T(M) ⊂ M implies {T(x_n)} ⊂ M. Thus {T(x_n)} has a limit point x₀, and x₀ ∈ M since M is closed. Either the sequence {T(x_n)} converges to x₀ or there is a subsequence of {T(x_n)} which converges to x₀. For simplicity of notation, assume that {T(x_n)} converges to x₀. Then
||T(x_n) − x₀|| < ε   for n > n(ε).   (1.A.3)
Also from the definition of T_n we have
||T_n(x_n) − T(x_n)|| < ε_n.   (1.A.4)
Then from (1.A.3) and (1.A.4) we have
||T_n(x_n) − x₀|| ≤ ||T_n(x_n) − T(x_n)|| + ||T(x_n) − x₀|| < ε_n + ε.
Since T_n(x_n) = x_n, we obtain
||x_n − x₀|| < ε_n + ε.
Let ε′ be given. T is continuous, and so there exists a δ(ε′) > 0 such that
||T(x_n) − T(x₀)|| < ε′
whenever ||x_n − x₀|| < δ(ε′). To make ||x_n − x₀|| < δ(ε′), choose n large enough so that ε_n + ε < δ(ε′). We can do this since lim_{n→∞} ε_n = 0. Hence we have shown that
||T(x_n) − T(x₀)|| < ε′
whenever n is large enough, which means that T(x_n) → T(x₀) as n → ∞. Since the limit of the sequence {T(x_n)} is unique, we must have
T(x₀) = x₀,
completing the proof.
Finally, we present the proof of Barbalat's lemma, which was stated in Section 1.1.
Lemma 1.A.6 (Barbalat) If
(i) f(t) is a continuous function and its derivative f′(t) is bounded for t ≥ 0;
(ii) G(x) is a continuous function, G(x) > 0 for x ≠ 0, and G(0) = 0;
(iii) ∫_0^∞ G[f(t)] dt < ∞;
then
lim_{t→∞} f(t) = 0.
PROOF We shall prove this lemma by contradiction. From the hypothesis of the lemma, we have for every t ≥ 0
|f′(t)| ≤ b < ∞   and   ∫_0^∞ G(f(t)) dt = c < ∞.
Let us assume that lim_{t→∞} f(t) ≠ 0. This implies that there exists a sequence {t_k}, t_k > 0 for k = 1, 2, …, and some ε > 0, such that
|f(t_k)| ≥ ε > 0.
We can further assume that for all k
t_{k+1} − t_k ≥ m > 0;   (1.A.5)
that is, the elements of the sequence are distinct, t₁ < t₂ < ··· < t_k < ···. If this condition does not hold, then we can choose a subsequence which satisfies (1.A.5). Moreover, shrinking m if necessary, we may take m ≤ ε/b. Since f′(t) is bounded for t ≥ 0, using the mean value theorem we can write
|f(t) − f(t_k)| ≤ b|t − t_k|   for all k.
It is given that G(x) > 0 for x ≠ 0, so we have
∫_0^∞ G[f(t)] dt ≥ Σ_{k=1}^∞ ∫_{t_k−(m/2)}^{t_k+(m/2)} G[f(t)] dt.
For the length of the interval m we can write
t_k − m/2 ≤ t ≤ t_k + m/2   for all k.   (1.A.6)
On this interval (1.A.6) we can construct the following inequality: choosing the t_k (by continuity of f) so that |f(t_k)| = ε, we have
|f(t)| ≥ |f(t_k)| − |f(t) − f(t_k)| ≥ ε − b|t − t_k| ≥ ε − b(m/2) ≥ ε/2,
and also |f(t)| ≤ ε + b(m/2) ≤ 3ε/2. Let
r = min_{ε/2 ≤ |x| ≤ 3ε/2} G(x) > 0;
then we can write
∫_{t_k−(m/2)}^{t_k+(m/2)} G[f(t)] dt ≥ rm.   (1.A.7)
Taking the sum of both sides of (1.A.7) over all k, this implies that
∫_0^∞ G[f(t)] dt ≥ Σ_{k=1}^∞ rm = ∞;
but by hypothesis
∫_0^∞ G[f(t)] dt = c < ∞;
hence, a contradiction. Therefore we conclude that
lim_{t→∞} f(t) = 0.
CHAPTER II
Some Random Integral Equations of the Volterra Type with Applications

2.0 Introduction

In this chapter we shall present some of the most general results which have been obtained to date concerning random integral equations of the Volterra type. In Section 2.1 some results of Tsokos [4] will be given for the random integral equation
x(t; ω) = h(t; ω) + ∫_0^t k(t, τ; ω) f(τ, x(τ; ω)) dτ.   (2.0.1)
We shall investigate the existence and uniqueness of a random solution of Eq. (2.0.1). The asymptotic behavior of the random solution and its stability properties also will be considered. In Section 2.2 several applications of Eq. (2.0.1) will be presented in the areas of telephone traffic theory, hereditary
mechanics, and a generalization of the classical Poincaré–Lyapunov theorem (Tsokos [3]). In Section 2.3 some recent results of Bharucha-Reid, Mukherjea, and Tserpes [1] will be given along with theorems of the authors concerning the existence of a random solution of the random Volterra integral equation of the form
x(t; ω) = h(t; ω) + ∫_0^t K(u, x(u; ω); ω) du.   (2.0.2)
Then in Section 2.4 applications of (2.0.2) in the theory of turbulence and in the theory of chemotherapy will be given. In later chapters we shall consider some stochastic differential systems which reduce to stochastic integral equations of the form just given.
2.1 The Random Integral Equation x(t; ω) = h(t; ω) + ∫_0^t k(t, τ; ω) f(τ, x(τ; ω)) dτ

This equation seems to be more general than any random Volterra integral equation which has been studied to date. The generality consists primarily in the choice of the stochastic kernel. The origin and the importance of this random integral equation have already been discussed. In this section we shall investigate the existence of a random solution and its uniqueness and asymptotic behavior, and shall consider a number of special cases as corollaries of the main theorems. Finally, the stability properties of the random solution will be investigated. To accomplish our objectives here, we employ certain aspects of the methods of "admissibility theory," which has been utilized quite recently in the theory of deterministic integral equations by Corduneanu [4], as presented in Chapter I.

2.1.1 Existence and Uniqueness of a Random Solution

Let B and D be a pair of Banach spaces and T a linear operator. With respect to the study of this section, we state the following lemma, which will be used in the main theorems.
Lemma 2.1.1 Let T be a continuous operator from C_c(R⁺, L₂(Ω, A, P)) into itself. If B and D are Banach spaces stronger than C_c and the pair (B, D) is admissible with respect to T, then T is a continuous operator from B to D.
PROOF First we will prove that the operator T is closed from B to D. Let us consider a sequence x_n(t; ω) ∈ B such that x_n(t; ω) → x(t; ω) in B as n → ∞, and let us assume (Tx_n)(t; ω) → y(t; ω) in D as n → ∞. Now we must show that (Tx)(t; ω) = y(t; ω). Since x_n(t; ω) → x(t; ω) in B, x_n(t; ω) → x(t; ω) in C_c. But since T: C_c → C_c is continuous, we have (Tx_n)(t; ω) → (Tx)(t; ω) in C_c. On the other hand, (Tx_n)(t; ω) → y(t; ω) in D, which implies that (Tx_n)(t; ω) → y(t; ω) in C_c. Hence (Tx)(t; ω) = y(t; ω), because the limit is unique in C_c. Therefore the operator T is closed. Then by the closed-graph theorem (Theorem 1.1.3) it follows that T is a continuous operator from B to D.
REMARK Since T is a closed and continuous linear operator, it is also bounded (Yosida [1]). Then it follows that we can find a constant K > 0 such that
||(Tx)(t; ω)||_D ≤ K ||x(t; ω)||_B
(see Definition 1.1.13).
With respect to our aims here, we state and prove the following theorems.
Theorem 2.1.2 Let us consider Eq. (2.0.1) under the following conditions:
(i) B and D are Banach spaces stronger than C_c(R⁺, L₂(Ω, A, P)) such that (B, D) is admissible with respect to the operator
(Tx)(t; ω) = ∫_0^t k(t, τ; ω) x(τ; ω) dτ,
where k(t, τ; ω) behaves as in Chapter I.
(ii) x(t; ω) → f(t, x(t; ω)) is a continuous operator on
S = {x(t; ω): x(t; ω) ∈ D, ||x(t; ω)||_D ≤ ρ}
with values in B, also satisfying
||f(t, x(t; ω)) − f(t, y(t; ω))||_B ≤ λ ||x(t; ω) − y(t; ω)||_D
with x(t; ω), y(t; ω) ∈ S and λ a constant.
(iii) h(t; ω) ∈ D.
Then there exists a unique random solution of the random integral equation (2.0.1), provided that
λ < K^{-1}   and   ||h(t; ω)||_D + K ||f(t, 0)||_B ≤ ρ(1 − λK),
where K is the norm of the operator T (see Remark to Lemma 2.1.1).
PROOF
Let us define an operator U on S into D as follows:
(Ux)(t; ω) = h(t; ω) + ∫_0^t k(t, τ; ω) f(τ, x(τ; ω)) dτ.   (2.1.1)
Now we must show that U is a contracting operator and U(S) ⊂ S. Consider a function y(t; ω) in S. We can write
(Uy)(t; ω) = h(t; ω) + ∫_0^t k(t, τ; ω) f(τ, y(τ; ω)) dτ.   (2.1.2)
Subtracting Eq. (2.1.2) from Eq. (2.1.1), we have
(Ux)(t; ω) − (Uy)(t; ω) = ∫_0^t k(t, τ; ω) [f(τ, x(τ; ω)) − f(τ, y(τ; ω))] dτ.
Since U(S) ⊂ D and D is a Banach space, (Ux)(t; ω) − (Uy)(t; ω) ∈ D. By assumptions (i) and (ii), [f(τ, x(τ; ω)) − f(τ, y(τ; ω))] ∈ B. From Lemma 2.1.1 we have seen that T is a continuous operator from the Banach space B into D, which implies that we can find a constant K > 0 such that
||(Tx)(t; ω)||_D ≤ K ||x(t; ω)||_B.
That is,
||(Ux)(t; ω) − (Uy)(t; ω)||_D ≤ K ||f(t, x(t; ω)) − f(t, y(t; ω))||_B.
Now, applying the Lipschitz condition given in (ii), we have
||(Ux)(t; ω) − (Uy)(t; ω)||_D ≤ λK ||x(t; ω) − y(t; ω)||_D.
Using the condition that λK < 1, the operator U is a contracting operator.
It now remains to be shown that U(S) ⊂ S. For every function x(t; ω) ∈ S we have
(Ux)(t; ω) = h(t; ω) + ∫_0^t k(t, τ; ω) f(τ, x(τ; ω)) dτ.   (2.1.3)
Applying Condition (iii) and Lemma 2.1.1, we can write expression (2.1.3) as follows:
||(Ux)(t; ω)||_D ≤ ||h(t; ω)||_D + K ||f(t, x(t; ω))||_B.   (2.1.4)
Using the Lipschitz condition, we have
||f(t, x(t; ω))||_B ≤ λ ||x(t; ω) − 0||_D + ||f(t, 0)||_B.
We can now write expression (2.1.4) as follows:
||(Ux)(t; ω)||_D ≤ ||h(t; ω)||_D + Kλ ||x(t; ω)||_D + K ||f(t, 0)||_B.   (2.1.5)
Since x(t; ω) ∈ S and ||x(t; ω)||_D ≤ ρ, (2.1.5) can be written as
||(Ux)(t; ω)||_D ≤ ||h(t; ω)||_D + Kλρ + K ||f(t, 0)||_B.   (2.1.6)
Applying the condition of the theorem that
||h(t; ω)||_D + K ||f(t, 0)||_B ≤ ρ(1 − λK),
(2.1.6) becomes
||(Ux)(t; ω)||_D ≤ ρ(1 − λK) + Kρλ = ρ,
which implies that (Ux)(t; ω) ∈ S for all random variables x(t; ω) ∈ S, or U(S) ⊂ S. Therefore, since U is a contracting operator and U(S) ⊂ S (inclusion property), applying Banach's fixed-point theorem (Theorem 1.1.2), there exists a unique random solution x(t; ω) ∈ S such that
(Ux)(t; ω) = h(t; ω) + ∫_0^t k(t, τ; ω) f(τ, x(τ; ω)) dτ = x(t; ω).
2.1.2 Some Special Cases

Now we shall derive some particular cases of Theorem 2.1.2 by choosing the spaces B and D in a convenient manner. Recall that a C_g space is a space of all continuous functions from R⁺ into L₂(Ω, A, P) such that
||x(t; ω)||_{L₂(Ω,A,P)} ≤ Z g(t),
where t ∈ R⁺, Z is a number greater than zero, and g(t) is a continuous function greater than zero. Also, C_c(R⁺, L₂(Ω, A, P)) is the space of all continuous functions from R⁺ into L₂(Ω, A, P) with the topology of uniform convergence on the interval [0, T] for any T > 0, and the norm of the stochastic kernel of the integral equation can be defined as follows:
K(t, τ) = ||k(t, τ; ω)|| = P-ess sup_{ω∈Ω} |k(t, τ; ω)|,
that is,
K(t, τ) = inf_{Ω₀} { sup_{ω∈Ω−Ω₀} |k(t, τ; ω)| }   with P(Ω₀) = 0.
34
I1
SOME VOLTERRA TYPE EQUATIONS WITH APPLICATIONS
Theorem 2.1.3 Let us consider the random integral equation (2.0.1) under the following conditions:
(i) There exist a number A > 0 and a continuous function g(t) > 0 on R⁺ such that
∫_0^t ||k(t, τ; ω)|| g(τ) dτ ≤ A, t ∈ R⁺.
(ii) f(t, x) is a continuous vector-valued function for t ∈ R⁺, ||x(t; ω)|| ≤ ρ, such that f(t, 0) ∈ C_g and
||f(t, x(t; ω)) − f(t, y(t; ω))|| ≤ λ g(t) ||x(t; ω) − y(t; ω)||.
(iii) h(t; ω) is a continuous bounded function on R⁺ whose values are in L₂(Ω, A, P).
Then there exists a unique random solution x(t; ω) ∈ C of the random integral equation (2.0.1) such that ||x(t; ω)|| ≤ ρ for t ∈ R⁺, as long as ||h(t; ω)||, λ, and ||f(t, 0)||_{C_g} are small enough.
PROOF We must show that under Condition (i) of the theorem the pair of Banach spaces (C_g, C) is admissible. That is, (C_g, C) is admissible with respect to the integral operator
(Tx)(t; ω) = ∫_0^t k(t, τ; ω) x(τ; ω) dτ.
For x(t; ω) ∈ C_g we have
||(Tx)(t; ω)|| = || ∫_0^t k(t, τ; ω) x(τ; ω) dτ || ≤ ∫_0^t ||k(t, τ; ω)|| ||x(τ; ω)|| dτ.   (2.1.7)
Since, by the definition of the norm in C_g, ||x(τ; ω)|| ≤ ||x(t; ω)||_{C_g} g(τ)
2.1
THE RANDOM INTEGRAL EQUATION
35
for g ( t ) > 0, we can write (2.1.7) as follows:
1 1Q II(Tx)(t; 4
Jo
IIk(6 t ;(411IIx(7; o)llcg(t) dt
s’ IIW,
Q Ilx(t; &,
0
t ;w)llg(d dr
Q AIlx(t; w)Ilc,.
Therefore II(Tx)(t;o)ll is boundedandhence(Tx)(t;w ) C ~f o r a l l x ( t ; w ) ~ C,. Hence TC, c C , which implies that the pair of Banach spaces (Cg,C ) is admissible with respect to the integral operator as defined here. The remainder of the proof is analogous to that of Theorem 2.1.2 and is omitted. For the special case where g(t) = 1 we state and prove the following corollary. Corollary 2.2.4 Let us assume that the random integral equation (2.0.1) satisfies the following conditions :
(i) ∫₀ᵗ ‖k(t, τ; ω)‖ dτ ≤ A, t ∈ R₊, where A is some constant greater than zero.
(ii) f(t, x) is a continuous function from R₊ × R into R, uniformly in x, such that

|f(t, x) − f(t, y)| ≤ λ|x − y|.

(iii) h(t; ω) is a continuous bounded function from R₊ into L₂(Ω, 𝒜, 𝒫).

Then there exists a unique bounded random solution on R₊ of the random integral equation (2.0.1) if A is small enough.

PROOF We must show that under Condition (i) of the corollary the pair of Banach spaces (C, C) is admissible. For a function x(t; ω) ∈ C we have

(Tx)(t; ω) = ∫₀ᵗ k(t, τ; ω) x(τ; ω) dτ,

or

‖(Tx)(t; ω)‖ ≤ ∫₀ᵗ ‖k(t, τ; ω)‖ ‖x(τ; ω)‖ dτ.    (2.1.8)

Applying the definition of the norm as used in Theorem 2.1.3, inequality (2.1.8) can be written as follows:

‖(Tx)(t; ω)‖ ≤ ‖x(t; ω)‖ ∫₀ᵗ ‖k(t, τ; ω)‖ dτ ≤ A‖x(t; ω)‖,  t ∈ R₊.
Therefore (Tx)(t; ω) ∈ C for every random variable x(t; ω) ∈ C, or TC ⊂ C, which implies that (C, C) is admissible. The remainder of the proof follows from Theorem 2.1.2. The following two corollaries are particular cases of Theorem 2.1.3.

Corollary 2.1.5 Assume that the random integral equation (2.0.1) satisfies the following conditions:

(i) ‖k(t, τ; ω)‖ ≤ A₁ for 0 ≤ τ ≤ t < ∞, and ∫₀^∞ g(t) dt < ∞.
(ii) Same condition as in Theorem 2.1.3, Condition (ii).
(iii) Same condition as in Theorem 2.1.3, Condition (iii).

Then there exists a unique random solution of Eq. (2.0.1) bounded on R₊ if ‖h(t; ω)‖, λ, and ‖f(t, 0)‖ are sufficiently small.

PROOF It is only necessary to show that the pair of Banach spaces (C_g, C) is admissible with respect to the integral operator

(Tx)(t; ω) = ∫₀ᵗ k(t, τ; ω) x(τ; ω) dτ,    (2.1.9)

along with Condition (i) of the corollary. For a function x(t; ω) ∈ C_g, expression (2.1.9) implies that

‖(Tx)(t; ω)‖ ≤ ∫₀ᵗ ‖k(t, τ; ω)‖ ‖x(τ; ω)‖ dτ.    (2.1.10)

Applying hypothesis (i) of the corollary, we have

‖(Tx)(t; ω)‖ ≤ A₁ ∫₀ᵗ [‖x(τ; ω)‖/g(τ)] g(τ) dτ ≤ A₁‖x(t; ω)‖_{C_g} ∫₀ᵗ g(τ) dτ,    (2.1.11)

but, applying Condition (i) of the corollary again, (2.1.11) yields

‖(Tx)(t; ω)‖ ≤ M  for all  t ≥ 0,

for some constant M > 0. Therefore x(t; ω) ∈ C_g implies that (Tx)(t; ω) ∈ C, or TC_g ⊂ C. Hence the pair (C_g, C) is admissible, and, since Conditions (ii) and (iii) are the same as in Theorem 2.1.3, the proof is complete.
Corollary 2.1.6 Let us consider the random integral equation (2.0.1) under the following conditions:

(i) ‖k(t, τ; ω)‖ ≤ A₂ e^{−α(t−τ)} for 0 ≤ τ ≤ t < +∞, and

sup_{t∈R₊} {∫₀ᵗ e^{−α(t−τ)} g(τ) dτ} < ∞,

where A₂ and α are positive numbers.
(ii) Same as Condition (ii) of Theorem 2.1.3.
(iii) Same as Condition (iii) of Theorem 2.1.3.

Then there exists a unique random solution of the random integral equation (2.0.1) bounded on R₊ if ‖h(t; ω)‖, λ, and ‖f(t, 0)‖ are small enough.

PROOF We must show that the pair of Banach spaces (C_g, C) is admissible with respect to the integral operator

(Tx)(t; ω) = ∫₀ᵗ k(t, τ; ω) x(τ; ω) dτ    (2.1.12)

along with Condition (i) of the corollary. Taking the norms of both sides of (2.1.12), we have

‖(Tx)(t; ω)‖ ≤ ∫₀ᵗ ‖k(t, τ; ω)‖ ‖x(τ; ω)‖ dτ,    (2.1.13)

but ‖k(t, τ; ω)‖ ≤ A₂ e^{−α(t−τ)}, which implies that (2.1.13) can be written as

‖(Tx)(t; ω)‖ ≤ A₂ ∫₀ᵗ e^{−α(t−τ)} [‖x(τ; ω)‖/g(τ)] g(τ) dτ.    (2.1.14)

From the definition of the norm as used in Corollary 2.1.5, inequality (2.1.14) can be written as follows:

‖(Tx)(t; ω)‖ ≤ A₂‖x(t; ω)‖_{C_g} ∫₀ᵗ e^{−α(t−τ)} g(τ) dτ.    (2.1.15)

But

sup_{t∈R₊} {∫₀ᵗ e^{−α(t−τ)} g(τ) dτ} < ∞

implies that (2.1.15) can be written as

‖(Tx)(t; ω)‖ ≤ A₂ M ‖x(t; ω)‖_{C_g},

where M denotes this supremum. Therefore (Tx)(t; ω) ∈ C, or TC_g ⊂ C, which implies that the pair of Banach spaces (C_g, C) is admissible. Since Conditions (ii) and (iii) are the same as in Theorem 2.1.3 and admissibility has been shown, the proof is complete.
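The admissibility hypothesis in Condition (i) of Corollary 2.1.6 can be checked numerically for a concrete weight. The sketch below uses g(t) = e^{−βt} with 0 < β < α, for which the integral has the closed form (α − β)⁻¹(e^{−βt} − e^{−αt}); both the weight and the constants are illustrative choices, not from the text.

```python
import math

def kernel_weight_integral(alpha, beta, t, n=400):
    """Trapezoidal value of int_0^t e^{-alpha(t-s)} g(s) ds for g(s) = e^{-beta s}."""
    if t == 0.0:
        return 0.0
    ds = t / n
    vals = [math.exp(-alpha * (t - i * ds) - beta * i * ds) for i in range(n + 1)]
    return ds * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

alpha, beta = 2.0, 1.0
grid = [0.05 * i for i in range(401)]                      # t in [0, 20]
sup_num = max(kernel_weight_integral(alpha, beta, t) for t in grid)
# closed form (alpha - beta)^{-1}(e^{-beta t} - e^{-alpha t}) on the same grid
sup_exact = max((math.exp(-beta * t) - math.exp(-alpha * t)) / (alpha - beta)
                for t in grid)
```

For these constants the supremum is attained near t = ln 2 and equals 1/4, so the finiteness condition of the corollary holds comfortably for this weight.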
2.1.3 Asymptotic Stability of the Random Solution

With respect to the asymptotic behavior of the random solution of the stochastic integral equation (2.0.1), we state and prove the following theorem, the objective of which is to investigate the possibility of the random solution being asymptotically exponentially stable.
Theorem 2.1.7 Let us consider the stochastic integral equation (2.0.1) under the following conditions:

(i) ‖k(t, τ; ω)‖ ≤ A₂ e^{−α(t−τ)} for 0 ≤ τ ≤ t < +∞, A₂ > 0, and α > 0.
(ii) f(t, x) is a continuous function from R₊ × R into R such that f(t, 0) = 0 and

|f(t, x) − f(t, y)| ≤ λ|x − y|.

(iii) ‖h(t; ω)‖ ≤ ρ e^{−βt}, where ρ and β are positive numbers such that 0 < β < α.

Then there exists a unique random solution of Eq. (2.0.1) such that

{∫_Ω |x(t; ω)|² d𝒫(ω)}^{1/2} ≤ ρ e^{−βt},

as long as λ is small enough.

PROOF We must show that the pair of Banach spaces (C_g, C_g), with g(t) = e^{−βt}, is admissible under Conditions (i) and (iii) of the theorem. That is, (C_g, C_g) is admissible with respect to the operator defined by

(Tx)(t; ω) = ∫₀ᵗ k(t, τ; ω) x(τ; ω) dτ.    (2.1.16)

The norm of expression (2.1.16) can be written as

‖(Tx)(t; ω)‖ ≤ ∫₀ᵗ ‖k(t, τ; ω)‖ ‖x(τ; ω)‖ dτ.    (2.1.17)

Applying Condition (i) of the theorem, we have

‖(Tx)(t; ω)‖ ≤ A₂ ∫₀ᵗ e^{−α(t−τ)} ‖x(τ; ω)‖ dτ.    (2.1.18)

Hence

‖(Tx)(t; ω)‖ ≤ A₂ ∫₀ᵗ e^{−α(t−τ)} [‖x(τ; ω)‖/g(τ)] g(τ) dτ.    (2.1.19)

Using the definition of the norm on the C_g space, inequality (2.1.19) can be written as

‖(Tx)(t; ω)‖ ≤ M ∫₀ᵗ e^{−α(t−τ)} g(τ) dτ = M ∫₀ᵗ e^{−α(t−τ)} e^{−βτ} dτ
  = M e^{−αt} (α − β)⁻¹ (e^{(α−β)t} − 1) = M(α − β)⁻¹ (e^{−βt} − e^{−αt}),    (2.1.20)

where M = A₂‖x(t; ω)‖_{C_g}. Since 0 < β < α, we can majorize inequality (2.1.20) as follows:

‖(Tx)(t; ω)‖ ≤ M(α − β)⁻¹ (e^{−βt} − e^{−αt}) ≤ M(α − β)⁻¹ e^{−βt},  t ∈ R₊,

which implies that (Tx)(t; ω) ∈ C_g for a function x(t; ω) ∈ C_g. Therefore the pair of Banach spaces (C_g, C_g) is admissible with respect to the operator T, where g(t) = e^{−βt}. Condition (iii) of the theorem means that h(t; ω) ∈ C_g. Applying Condition (ii), we have

‖f(t, x(t; ω)) − f(t, y(t; ω))‖_{C_g} ≤ λ‖x(t; ω) − y(t; ω)‖_{C_g}.

Hence all the conditions of Theorem 2.1.3 have been satisfied, which implies that there exists a unique random solution of the integral equation (2.0.1) such that

{∫_Ω |x(t; ω)|² d𝒫(ω)}^{1/2} ≤ ρ e^{−βt}.

REMARK It is now clear that under these conditions there exists a random solution of the random integral equation (2.0.1) which is exponentially asymptotically stable, that is,

lim_{t→∞} {∫_Ω |x(t; ω)|² d𝒫(ω)}^{1/2} = 0.
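A discretized illustration of Theorem 2.1.7: with a kernel bounded by A₂e^{−α(t−τ)}, a free term ρe^{−βt}, and a λ-Lipschitz nonlinearity with f(t, 0) = 0, successive approximations produce a solution whose weighted amplitude sup_t |x(t)| e^{βt} stays close to ρ, which is the exponential decay asserted by the theorem. The numerical values and the choice f(t, x) = λ sin x are illustrative assumptions, not data from the text.

```python
import math

# Solve x(t) = p e^{-beta t} + int_0^t A2 e^{-alpha(t-s)} lam*sin(x(s)) ds
# by successive approximations on a uniform grid (illustrative constants).
p, beta, alpha, A2, lam = 1.0, 0.5, 2.0, 1.0, 0.1

T, n = 10.0, 400
dt = T / n
ts = [i * dt for i in range(n + 1)]
x = [p * math.exp(-beta * t) for t in ts]
for _ in range(20):            # contraction factor ~ A2*lam/(alpha - beta) = 1/15
    new = []
    for i, t in enumerate(ts):
        vals = [A2 * math.exp(-alpha * (t - ts[j])) * lam * math.sin(x[j])
                for j in range(i + 1)]
        integral = dt * (sum(vals) - 0.5 * (vals[0] + vals[-1])) if i > 0 else 0.0
        new.append(p * math.exp(-beta * t) + integral)
    x = new

# weighted amplitude sup_t |x(t)| e^{beta t}: finite and close to p = 1
ratio = max(abs(x[i]) * math.exp(beta * ts[i]) for i in range(n + 1))
```

The computed ratio stays near 1, consistent with the bound ρe^{−βt} inflated by the fixed-point factor (1 − A₂λ/(α − β))⁻¹.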
2.2 Some Applications of the Equation

x(t; ω) = h(t; ω) + ∫₀ᵗ k(t, τ; ω) f(τ, x(τ; ω)) dτ

In this section we shall present some applications of the results of the previous section. We shall first consider a generalization of the classical stability theorem of Poincaré and Lyapunov. We shall then study a stochastic
integral equation arising in the theory of telephone traffic, a related study of which was done by Fortet [1]. A problem in hereditary mechanics will also be considered, which results in a pair of stochastic integral equations of the Volterra type, for which Distefano [1] showed existence of a solution by the method of successive approximations. In each case we shall briefly describe the problem and indicate that a random solution exists by applying the results of Section 2.1.

2.2.1 Generalization of the Poincaré-Lyapunov Stability Theorem
As an example to illustrate our results, we shall generalize the classical stability theorem of Poincaré and Lyapunov (Tsokos [3]). That is, consider the following random differential system:

ẋ(t; ω) = A(ω) x(t; ω) + f(t, x(t; ω)),  t ≥ 0,    (2.2.1)

where (i) x(t; ω) is the unknown n × 1 random vector; (ii) A(ω) is an n × n matrix whose elements are measurable functions; and (iii) f(t, x) is, for t ∈ R₊ and x ∈ Rⁿ, an n × 1 vector-valued function. Now we shall reduce the random differential system (2.2.1) to a stochastic integral equation which will be a special case of the stochastic integral equation (2.0.1). Multiplying the random system (2.2.1) by e^{−A(ω)t}, we have

e^{−A(ω)t} ẋ(t; ω) − A(ω) e^{−A(ω)t} x(t; ω) = e^{−A(ω)t} f(t, x(t; ω)).

But

(d/dt){e^{−A(ω)t} x(t; ω)} = e^{−A(ω)t} (d/dt) x(t; ω) − A(ω) e^{−A(ω)t} x(t; ω).

Therefore

(d/dt){e^{−A(ω)t} x(t; ω)} = e^{−A(ω)t} f(t, x(t; ω)).    (2.2.2)

Integrating both sides of Eq. (2.2.2) from t₀ to t, we have

e^{−A(ω)t} x(t; ω) − e^{−A(ω)t₀} x(t₀; ω) = ∫_{t₀}^t e^{−A(ω)τ} f(τ, x(τ; ω)) dτ.    (2.2.3)

Multiplying Eq. (2.2.3) by e^{A(ω)t} and letting t₀ = 0, it reduces to

x(t; ω) = e^{A(ω)t} x₀(ω) + ∫₀ᵗ e^{A(ω)(t−τ)} f(τ, x(τ; ω)) dτ,    (2.2.4)

where x₀(ω) = x(0; ω). Hence, if we let

h(t; ω) = e^{A(ω)t} x₀(ω)  and  k(t, τ; ω) = e^{A(ω)(t−τ)},  0 ≤ τ ≤ t < ∞,

Eq. (2.2.4) can be written as

x(t; ω) = h(t; ω) + ∫₀ᵗ k(t, τ; ω) f(τ, x(τ; ω)) dτ.
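The reduction above is the variation-of-constants formula, and it can be checked numerically on a sample path: integrate the differential system (2.2.1) directly and compare the result with the right-hand side of (2.2.4). The scalar coefficient (a one-dimensional stand-in for A(ω)), the nonlinearity, and the initial value below are illustrative assumptions.

```python
import math
import random

random.seed(7)

# One sample path: a scalar stand-in a = A(omega) < 0 for the random matrix,
# and an illustrative Lipschitz nonlinearity f(t, x) = 0.2*sin(x).
a = -abs(random.gauss(1.0, 0.2))
f = lambda t, x: 0.2 * math.sin(x)
x0 = 1.0

# Integrate dx/dt = a*x + f(t, x) with a fine Euler scheme.
T, n = 2.0, 20000
dt = T / n
ts = [i * dt for i in range(n + 1)]
xs = [x0]
for i in range(n):
    xs.append(xs[-1] + dt * (a * xs[-1] + f(ts[i], xs[-1])))

# Right-hand side of (2.2.4): e^{a t} x0 + int_0^t e^{a(t - s)} f(s, x(s)) ds.
t = T
rhs = math.exp(a * t) * x0 + sum(
    math.exp(a * (t - ts[i])) * f(ts[i], xs[i]) * dt for i in range(n))
```

Both quantities approximate the same trajectory to first order in the step size, so they agree up to discretization error; repeating over many sampled values of a gives sample paths of the random solution of (2.2.4).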
Hence, the stochastic differential system (2.2.1) reduces to the stochastic integral equation (2.2.4), which is a special form of Eq. (2.0.1). Now we state the following theorem.

Theorem 2.2.1 Let us assume that the following conditions hold with respect to the stochastic integral equation (2.2.4):

(i) The matrix A(ω) is stochastically stable; that is, there exists an α > 0 such that

P{ω: Re ψₖ(ω) < −α, k = 1, 2, …, n} = 1,

where ψₖ(ω), k = 1, 2, …, n, are the characteristic roots of the matrix.
(ii) f(t, x) is a continuous function from R₊ × Rⁿ into Rⁿ such that

|f(t, x) − f(t, y)| ≤ λ|x − y|

with f(t, 0) = 0 and λ sufficiently small.

Then there exists a unique random solution x(t; ω) of the stochastic integral equation (2.2.4) such that

{∫_Ω |x(t; ω)|² d𝒫(ω)}^{1/2} ≤ ρ e^{−βt},  t ∈ R₊,

for some ρ > 0 and 0 < β < α.
PROOF To prove this result, we want to show that the pair of Banach spaces (C_g, C_g) is admissible under Conditions (i) and (ii) with g(t) = e^{−βt}, and then apply Theorem 2.1.3. Recall that the norm in the space C_g(R₊, L₂(Ω, 𝒜, 𝒫)) is defined by

‖x(t; ω)‖_{C_g} = sup_{t∈R₊} {‖x(t; ω)‖/g(t)},

and for any function x(t; ω) ∈ C_g(R₊, L₂(Ω, 𝒜, 𝒫)) let us define the following integral operator:

(Tx)(t; ω) = ∫₀ᵗ k(t, τ; ω) x(τ; ω) dτ.    (2.2.5)

Since k(t, τ; ω) = e^{A(ω)(t−τ)}, 0 ≤ τ ≤ t < ∞, Eq. (2.2.5) becomes

(Tx)(t; ω) = ∫₀ᵗ e^{A(ω)(t−τ)} x(τ; ω) dτ,

or

‖(Tx)(t; ω)‖ ≤ ∫₀ᵗ ‖e^{A(ω)(t−τ)}‖ ‖x(τ; ω)‖ dτ.    (2.2.6)

It has been shown by Morozan [2, 3] that there exists a subset D of Ω such that 𝒫(D) = 1 and

‖e^{A(ω)(t−τ)}‖ ≤ K e^{−α(t−τ)},  ω ∈ D.    (2.2.7)

Hence

‖(Tx)(t; ω)‖ ≤ K ∫₀ᵗ e^{−α(t−τ)} [‖x(τ; ω)‖/g(τ)] g(τ) dτ    (2.2.8)

and

‖(Tx)(t; ω)‖ ≤ K‖x(t; ω)‖_{C_g} (α − β)⁻¹ (e^{−βt} − e^{−αt}),  t ≥ 0.    (2.2.9)

Inequality (2.2.9) can be majorized as follows:

‖(Tx)(t; ω)‖ ≤ K‖x(t; ω)‖_{C_g} (α − β)⁻¹ e^{−βt},    (2.2.10)

because 0 < β < α. Dividing inequality (2.2.10) by e^{−βt}, we have

‖(Tx)(t; ω)‖_{C_g} ≤ K(α − β)⁻¹ ‖x(t; ω)‖_{C_g}.

Hence for x(t; ω) ∈ C_g(R₊, L₂(Ω, 𝒜, 𝒫)) we have TC_g(R₊, L₂(Ω, 𝒜, 𝒫)) ⊂ C_g(R₊, L₂(Ω, 𝒜, 𝒫)), and the pair of Banach spaces (C_g, C_g) is admissible. The rest of the proof follows from Theorem 2.1.3.
2.2.2 A Problem in Telephone Traffic Theory

In this subsection we shall examine a stochastic integral equation arising in the study of telephone traffic. We shall describe the problem in detail and then apply Corollary 2.1.5 to show the existence of a unique random solution. Consider a telephone exchange, and suppose that calls arrive at the exchange at time instants t₁, t₂, …, tₙ, …, where 0 < t₁ < t₂ < ⋯ < tₙ < ∞. These arrival times must be considered as random instants, so we denote their distribution function on the time axis by A(t). For a call arriving at time t, let the random variable H(t; ω) denote the holding time, that is, the length of time that a "conversation" is held for a call arriving at the exchange at time t. The H(t₁; ω), H(t₂; ω), … are considered as being mutually independent for the different times t₁, t₂, … and as being independent of the state of the exchange, where the state of the exchange is the number of busy channels.
The number m of trunks or channels of the exchange is assumed to be finite and large, so that we approximate a continuous process. It is also assumed that any channel not being used may be utilized by an incoming call and that the holding time for a channel begins at the time instant that the call arrives at the exchange. A conversation (or connection) is realized if a channel is not busy at the time a call arrives. If all channels are busy at the time t that a call arrives, then either the call is lost or a queueing problem develops. Only the first case will be considered here. Various problems have been studied in this situation. For example, the probability Pₖ(t) that at time t, k of the m channels are busy has been examined in detail (Fortet [1]). We are concerned with the total number of "conversations" held (the number of busy channels) at time t, which for each t is a random variable and may be described by a stochastic integral equation. Let x(t; ω) be the total number of conversations held at time t. That is, x(t; ω) is a random variable for each t ∈ R₊, and x(0; ω) = 0. Let J(t; ω) be a random function with value one if a call arriving at time t > 0 is not lost and value zero if the call is lost. Let

K(t, τ; ω) = 1  if t − τ ∈ [0, H(τ; ω)],
K(t, τ; ω) = 0  if t − τ ∉ [0, H(τ; ω)],

such that K(t, τ; ω) is equal to one if a conversation from a call arising at time τ is still being held at time t ≥ τ and is equal to zero otherwise. Thus we may write

x(t; ω) = ∫₀ᵗ J(τ; ω) K(t, τ; ω) dA(τ).    (2.2.11)

Equation (2.2.11) is interpreted as the total number of telephone calls arising at times τ, 0 ≤ τ ≤ t, that were not lost and such that the conversation is still being held at time t. Suppose that V(k) is any function such that

V(k) = 1  if k = 0, 1, …, m − 1,
V(k) = 0  otherwise.

Clearly, x(t; ω) ≤ m for all t ∈ R₊ and ω ∈ Ω. Hence we may write

V[x(t; ω)] = 1  if x(t; ω) = 0, 1, …, m − 1,
V[x(t; ω)] = 0  otherwise,

which means that V[x(t; ω)] has value one if a call arising at time t is not lost and value zero otherwise. Then Eq. (2.2.11) may be written as the
nonlinear stochastic integral equation

x(t; ω) = ∫₀ᵗ K(t, τ; ω) V[x(τ; ω)] dA(τ),    (2.2.12)

which Bharucha-Reid [7] refers to as the Fortet integral equation. Suppose that the distribution A(t) of arrival times has a density function a(t). Then Eq. (2.2.12) reduces to a stochastic integral equation of the Volterra type

x(t; ω) = ∫₀ᵗ K(t, τ; ω) V[x(τ; ω)] a(τ) dτ.

If we let

f(τ, x(τ; ω)) = V[x(τ; ω)] a(τ) = a(τ)  if x(τ; ω) = 0, 1, …, m − 1,  and  0 otherwise,

then we obtain a stochastic Volterra integral equation of the form of (2.0.1), with the stochastic free term identically zero,

x(t; ω) = ∫₀ᵗ K(t, τ; ω) f(τ, x(τ; ω)) dτ.    (2.2.13)
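The content of (2.2.12) can be simulated directly for one sample path ω: a call arriving at time τ is accepted iff V[x(τ; ω)] = 1, that is, fewer than m channels are busy, and an accepted call then holds a channel for H(τ; ω) time units. The Poisson arrivals and exponential holding times below are illustrative distributional assumptions; the text only requires an arrival distribution A(t) and independent holding times.

```python
import random

random.seed(42)

def busy_channels(t, arrivals, holds, m):
    """x(t; omega) of (2.2.12) for one sample path: a call arriving at tau is
    accepted iff fewer than m channels are busy (V[x(tau)] = 1) and then holds
    a channel on [tau, tau + H).  Returns the number of busy channels at t."""
    in_service = []                       # departure times of accepted calls
    for tau, H in zip(arrivals, holds):
        if tau > t:
            break
        in_service = [d for d in in_service if d > tau]   # still-held conversations
        if len(in_service) < m:                           # call not lost
            in_service.append(tau + H)
    return sum(1 for d in in_service if d > t)

# Illustrative choices: Poisson arrivals of rate 5, exponential holding times
# of mean 1, and m = 3 channels.
m, rate, T = 3, 5.0, 10.0
arrivals, s = [], 0.0
while s < T:
    s += random.expovariate(rate)
    arrivals.append(s)
holds = [random.expovariate(1.0) for _ in arrivals]

x_T = busy_channels(T, arrivals, holds, m)
```

Averaging x_T over many seeds estimates the mean number of busy channels, and the fraction of rejected calls estimates the loss probability studied by Fortet.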
K(t, τ; ω) is the stochastic kernel defined for 0 ≤ τ ≤ t < ∞ and taking only the values zero and one. Before showing that (2.2.13) possesses a unique random solution, we observe that the above description applies to many systems. If we replace the words "telephone exchange" with "serving mechanism," and the words "channel," "call," and "conversation" with the words "server," "customer," and "service," respectively, then we are dealing with a general system in which "customers" are being "served" by m < ∞ "servers." If we assume that a customer does not wait when he finds all m servers busy, so that no queue develops, then the random solution of the stochastic integral equation (2.2.13) gives the total number of "services" being performed at time t. Also, the functions in (2.2.13) may be any functions which behave as given in Chapter I and describe the physical situation. For example, the stochastic kernel may be of the form

K(t, τ; ω) = I_{X(ω)}(t − τ) e^{−(t−τ)},

where I_{X(ω)}(·) is the indicator function of a random set X(ω), which means that solutions at earlier times τ ≤ t have a decaying effect on the system. We now show that the stochastic integral equation (2.2.13) satisfies the conditions of Corollary 2.1.5. We first show that f(t, x(t; ω)) ∈ L₂(Ω, 𝒜, 𝒫) and K(t, τ; ω) ∈ L₂(Ω, 𝒜, 𝒫).
Let t ≥ 0 be fixed. Since a(t) is a density function, it may be assumed to be bounded for all t except on a set of measure zero. Hence for some M > 0 and all t, 0 ≤ a(t) ≤ M < ∞, and we have

∫_Ω |f(t, x(t; ω))|² d𝒫(ω) ≤ ∫_Ω |a(t)|² d𝒫(ω) ≤ M² < ∞

by definition of f(t, x(t; ω)), so that f(t, x(t; ω)) ∈ L₂(Ω, 𝒜, 𝒫) for each t ∈ R₊. By definition of K(t, τ; ω), 0 ≤ τ ≤ t < ∞, we obviously have that the 𝒫-measure of

{ω: |K(t, τ; ω)| > 1},  0 ≤ τ ≤ t < ∞,

is zero, that is, a 𝒫-null set. Hence K(t, τ; ω) is bounded 𝒫-a.e. and is in L₂(Ω, 𝒜, 𝒫). Also, if (tₙ, τₙ) → (t, τ) as n → ∞, we have

𝒫{ω: |K(tₙ, τₙ; ω) − K(t, τ; ω)| > 0} → 0

as n → ∞, since K(t, τ; ω) has value zero or one only. That x(t; ω) ∈ C, a continuous bounded function for each ω, is easily shown. Let ω ∈ Ω and choose t ∈ R₊. For ε > 0 and h > 0,

|x(t + h; ω) − x(t; ω)| = |∫_t^{t+h} K(t, τ; ω) f(τ, x(τ; ω)) dτ| ≤ ∫_t^{t+h} a(τ) dτ.

Thus for ε > 0 there is a δ > 0 such that when |(t + h) − t| < δ,

∫_t^{t+h} a(τ) dτ < ε.

Therefore x(t; ω) is continuous on R₊ for each ω ∈ Ω and bounded by m. Also,

∫_Ω |x(t; ω)|² d𝒫(ω) ≤ m² < ∞,  t ∈ R₊,

which means that x(t; ω) ∈ L₂(Ω, 𝒜, 𝒫) for each t. We note that A₁ = 1 in Corollary 2.1.5, since

‖K(t, τ; ω)‖ = 𝒫-ess sup_{ω∈Ω} |K(t, τ; ω)| ≤ 1.

Since h(t; ω) = 0 𝒫-a.e., trivially, it belongs to C. Since a(t) is a density function, f(t, 0) = a(t) is continuous, bounded, and nonnegative, and ∫₀^∞ a(t) dt < ∞. Suppose g(t) is any function satisfying Condition (i) of
Corollary 2.1.5, that is, g(t) > 0 is continuous and

0 ≤ ∫₀^∞ g(t) dt < ∞,

and such that a(t) ≤ g(t), t ∈ R₊. Then, with λ = 1/m,

|f(t, x) − f(t, y)| = 0 ≤ λg(t)|x − y|  if x = y,
|f(t, x) − f(t, y)| = 0 ≤ λg(t)(m − 1)  if x ≠ y; x, y ∈ {0, 1, …, m − 1},
|f(t, x) − f(t, y)| = a(t) ≤ λg(t)m  if x ≠ m and y = m, or x = m and y ≠ m.

This restriction is not too severe, since a(t) is a density function. We may take g(t) = C e^{−qt}, C, q > 0, for example. Therefore, since ‖h(t; ω)‖_C = 0, ‖f(t, 0)‖ = a(t) ≤ M, and λ = 1/m is small for large m, there exists a unique random solution of equation (2.2.13) if a(t) ≤ g(t), where g(t) satisfies the given conditions.
2.2.3 A Stochastic Integral Equation in Hereditary Mechanics

A simple example of a stochastic integral equation of the Volterra type comes from the field of hereditary mechanics, where the "forcing term" depends on the deviation of the system from a natural position of equilibrium as well as on an external source of excitation (Distefano [1]). When forces α and η are applied to two hinged bars, deflection of the bars is prevented by a viscoelastic spring reacting with an upward force S. Here η is the axial load and α is the resulting downward force on the spring. The bars are deflected a certain amount s, in general a nonlinear function of η (see Figure 2.2.1). The forces and deflection are functions of time t ≥ 0, and the displacement at time t ≥ 0, s(t), is some functional of the force S(τ) exerted by the spring for τ ≤ t. However, S(t) is a function of α(t), η(t), and S(τ) for each τ ≤ t, which accounts for the hereditary effects of the system. Distefano [1] considered the stochastic version of a system of two Volterra integral equations which arises in this problem. We are interested in s(t), the displacement at time t ≥ 0, of the linear hereditary phenomenon. The linearized version of this problem leads to the integral equation

(2.2.14)
Figure 2.2.1. From Distefano [1].
where f(t, τ) is the "memory function" for the hereditary phenomenon and

g(t) = α(t) + ∫₀ᵗ f(t, τ) α(τ) dτ,

assuming initial straightness of the bars, that is, s(t) = 0 at t = 0. When the axial load has a small, randomly fluctuating component, that is, η(t; ω), ω ∈ Ω, is a random variable for each fixed t ≥ 0, then Eq. (2.2.14) reduces to the two stochastic Volterra integral equations

u(t; ω) = g(t; ω) + ∫₀ᵗ φ(τ; ω) f(t, τ) u(τ; ω) dτ,
v(t; ω) = G(t; ω) + φ(t; ω) ∫₀ᵗ f(t, τ) v(τ; ω) dτ,

where it is assumed that 0 ≤ η(t; ω) ≤ p < 1 𝒫-a.e. for all t ∈ R₊. Also, g(t; ω) and G(t; ω) are random variables for each t ∈ R₊. Then the quantity of interest s(t; ω) is the sum of the two random solutions u(t; ω) + v(t; ω). Let

φ(t; ω) = η(t; ω)/[1 − η(t; ω)]

and

k₁(t, τ; ω) = φ(τ; ω) f(t, τ),  k₂(t, τ; ω) = φ(t; ω) f(t, τ).

Then the random integral equations reduce to two stochastic integral equations, each of the form (2.0.1) with f(t, x(t; ω)) = x(t; ω), that is, linear
stochastic integral equations,

u(t; ω) = g(t; ω) + ∫₀ᵗ k₁(t, τ; ω) u(τ; ω) dτ,
v(t; ω) = G(t; ω) + ∫₀ᵗ k₂(t, τ; ω) v(τ; ω) dτ.    (2.2.15)

Suppose the "memory function" is of the form

f(t, τ) = Q e^{−Q(t−τ)},  Q > 0.

Since 0 ≤ η(t; ω) ≤ p < 1 𝒫-a.e. for all t ∈ R₊, φ(t; ω) is bounded 𝒫-a.e. on R₊, and we may assume α(t; ω) and s(t; ω) are bounded 𝒫-a.e. and continuous on R₊ by the nature of the physical situation. Thus there is some M > 0 such that

φ(t; ω) = η(t; ω)/[1 − η(t; ω)] ≤ M  𝒫-a.e.  for all t ∈ R₊.

Hence

‖k₁(t, τ; ω)‖ ≤ MQ e^{−Q(t−τ)}  and  ‖k₂(t, τ; ω)‖ ≤ MQ e^{−Q(t−τ)}.

Also, f(t, 0) = 0, since f(t, u(t; ω)) = u(t; ω) and f(t, v(t; ω)) = v(t; ω) by comparison of (2.0.1) and (2.2.15), and G(t; ω) and g(t; ω) are continuous and bounded 𝒫-a.e. on R₊ since φ(t; ω) and α(t; ω) are assumed to have these properties. If in Corollary 2.1.6 we take the function g(t) = 1 for all t ∈ R₊, then

sup_{t∈R₊} {∫₀ᵗ e^{−Q(t−τ)} g(τ) dτ} ≤ 1/Q < ∞.

Also,

|f(t, x) − f(t, y)| = |x − y|,

so that λ = 1. We may take A₂ = MQ > 0 and α = Q > 0, and then all conditions of the corollary are satisfied. Therefore there exists a unique random solution of each of the equations (2.2.15), provided that ‖g(t; ω)‖ and ‖G(t; ω)‖ are small enough.
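A sample-path sketch of the first equation of (2.2.15): with a bounded random load ratio φ(t; ω) = η(t; ω)/[1 − η(t; ω)] and an exponentially fading memory kernel, the linear Volterra equation can be stepped forward by simple quadrature. The kernel form Qe^{−Q(t−τ)}, the forcing g, and the distribution of η below are all illustrative assumptions.

```python
import math
import random

random.seed(1)

# Sample path of the load ratio phi(t) = eta(t)/(1 - eta(t)) with
# 0 <= eta(t) <= p < 1, a fading memory kernel f(t, tau) = Q*exp(-Q*(t - tau)),
# and a small forcing g(t); all three are illustrative choices.
p, Q = 0.4, 1.5
T, n = 5.0, 500
dt = T / n
eta = [random.uniform(0.0, p) for _ in range(n + 1)]
phi = [e / (1.0 - e) for e in eta]
g = lambda t: 0.1 * math.exp(-t)

# Step u(t) = g(t) + int_0^t phi(tau) Q e^{-Q(t - tau)} u(tau) d tau forward
# with a left-endpoint rule (the right-hand side only uses already-known u).
u = [g(0.0)]
for i in range(1, n + 1):
    t = i * dt
    integral = sum(phi[j] * Q * math.exp(-Q * (t - j * dt)) * u[j] * dt
                   for j in range(i))
    u.append(g(t) + integral)
```

Since sup_t φ(t) ∫₀ᵗ Qe^{−Q(t−τ)} dτ ≤ M < 1 here, the computed solution stays bounded by sup|g|/(1 − M), mirroring the smallness condition of the corollary.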
2.3 The Random Integral Equation

x(t; ω) = h(t; ω) + ∫₀ᵗ K(u, x(u; ω); ω) du
This equation occurs in many important situations, as has already been stated. To complete the objectives of this part of the book, we shall investigate the existence and uniqueness of a random solution of the stochastic integral equation (2.0.2). Some results of Bharucha-Reid, Mukherjea, and Tserpes [1] will be given which use methods of probabilistic functional analysis from Chapter I. Also, a theorem of the authors will be presented which utilizes some of the concepts of "admissibility theory" as discussed in Chapter I and employed in Section 2.1. We shall first assume that the functions x(t; ω), h(t; ω), and K(u, x; ω) in Eq. (2.0.2) are real-valued, x and h are product-measurable on R₊ × Ω, and K(u, x; ω) is product-measurable on R₊ × Ω for each x.

Theorem 2.3.1 Equation (2.0.2) with t ∈ R₊ has a random solution if the following conditions are satisfied:
(i) The kernel K(u, x; ω) is a continuous function of u and x, u ∈ R₊ and x ∈ R, for each ω ∈ Ω, and K(u, x; ω) is a random variable for each u and x.
(ii) sup_x |K(u, x; ω)| ≤ g(ω) ∈ L_p(Ω, 𝒜, 𝒫), 1 ≤ p < ∞.
(iii) h(t; ω) is a continuous map from R₊ into L_p(Ω, 𝒜, 𝒫), 1 ≤ p < ∞, and

|h(t; ω)| + ∫₀ᵗ sup_x |K(u, x; ω)| du < ∞

for each t ∈ R₊ and ω ∈ Ω.
(iv) The kernel satisfies the following weak continuity property: if xₙ(ω) converges weakly to x(ω) in L_p(Ω, 𝒜, 𝒫), then K(u, xₙ(ω); ω) converges weakly to K(u, x(ω); ω) whenever they are in L_p(Ω, 𝒜, 𝒫) for each u.

PROOF We first consider t ∈ [0, M]. Define the sequence

x₀(t; ω) = h(t; ω),
xₙ(t; ω) = x₀(t; ω)  if 0 ≤ t ≤ 1/n,
xₙ(t; ω) = x₀(t; ω) + ∫₀^{t−(1/n)} K(u, xₙ₋₁(u; ω); ω) du  if m/n ≤ t ≤ (m + 1)/n, 1 ≤ m ≤ Mn − 1, n ≥ 1.

Let 1 < p < ∞ throughout the proof.
Since h(t; ω) ∈ L_p(Ω, 𝒜, 𝒫), x₀(t; ω) is in L_p(Ω, 𝒜, 𝒫). Thus xₙ(t; ω), n ≥ 1, is in L_p(Ω, 𝒜, 𝒫), since by Condition (ii) and an extension of Minkowski's inequality (Beckenbach and Bellman [1, p. 22]), for t ∈ [0, M] we have

‖xₙ(t; ω)‖_{L_p} ≤ ‖h(t; ω)‖_{L_p} + t‖g(ω)‖_{L_p} < ∞.

Let ε > 0 and n ≥ 0 be fixed but arbitrary. Then for t₁, t₂ ∈ [0, M],

|xₙ(t₁; ω) − xₙ(t₂; ω)| ≤ |h(t₁; ω) − h(t₂; ω)| + |∫₀^{t₁−(1/n)} K(u, xₙ₋₁(u; ω); ω) du − ∫₀^{t₂−(1/n)} K(u, xₙ₋₁(u; ω); ω) du|

for ω ∈ Ω. Thus we can find δ > 0 such that for |t₁ − t₂| < δ,

|h(t₁; ω) − h(t₂; ω)| < ε/2

and

|∫₀^{t₁−(1/n)} K(u, xₙ₋₁(u; ω); ω) du − ∫₀^{t₂−(1/n)} K(u, xₙ₋₁(u; ω); ω) du| < ε/2.

Since n is arbitrary, the functions xₙ(t; ω) are equicontinuous from [0, M] into L_p(Ω, 𝒜, 𝒫). Also, since h(t; ω) is product-measurable on [0, ∞) × Ω and K(u, x; ω) is product-measurable on [0, ∞) × Ω for each x, xₙ(t; ω) is product-measurable on [0, M] × Ω, n ≥ 0. Now, for t ∈ [0, M], we have by Condition (iii)

|xₙ(t; ω)| ≤ |h(t; ω)| + ∫₀ᵗ sup_x |K(u, x; ω)| du < ∞.

Then the sequence {xₙ(t; ω)} satisfies the conditions of the Arzelà–Ascoli-type Theorem 1.2.1, and there exists a subsequence {xₙᵢ(t; ω)} that converges weakly to a product-measurable function x(t; ω) in L_p(Ω, 𝒜, 𝒫), for t ∈ [0, M], that satisfies

x(t; ω) = h(t; ω) + ∫₀ᵗ K(u, x(u; ω); ω) du,  𝒫-a.e.

Thus for each M > 0 we have a random solution x_M(t; ω) of (2.0.2) such that for each t ∈ [0, M],

x_M(t; ω) = h(t; ω) + ∫₀ᵗ K(u, x_M(u; ω); ω) du,  𝒫-a.e.

We then define z(t; ω) = x_M(t; ω) for M − 1 < t ≤ M, so that z(t; ω) is product-measurable and is a random solution of Eq. (2.0.2). The proof of the following theorem is similar to that of Theorem 2.3.1 and is therefore omitted.

Theorem 2.3.2 Equation (2.0.2) with t ∈ [0, M], for 0 < M < ∞, has a random solution if the following conditions are satisfied:
(i) The kernel K(u, x; ω) is a continuous function of u and x, u ∈ R₊, x ∈ R, for each ω ∈ Ω, and K(u, x; ω) ∈ L_p(Ω, 𝒜, 𝒫) for each u and x, 1 ≤ p < ∞.
(ii) sup_x |K(u, x; ω)| ≤ g(ω) ∈ L_p(Ω, 𝒜, 𝒫), 1 ≤ p < ∞.
(iii) h(t; ω) is a continuous function from [0, M] into L_p(Ω, 𝒜, 𝒫), and for each t ∈ [0, M], 0 < M < ∞, there exists a constant N_t > 0 such that

|h(t; ω)| + ∫₀ᵗ sup_x |K(u, x; ω)| du ≤ N_t < ∞,  ω ∈ Ω.
(iv) Same as Condition (iv) in Theorem 2.3.1.

The following theorem presents conditions under which a random solution exists and is unique.

Theorem 2.3.3 Equation (2.0.2) with t ∈ [0, ∞) has a unique random solution if the following conditions are satisfied:

(i) For each ω ∈ Ω and u ∈ [0, ∞) the kernel K(u, x; ω) is a continuous function of x in (−∞, ∞), and for each x, K(u, x; ω) is product-measurable on [0, ∞) × Ω and ∫₀ᵗ sup_x |K(u, x; ω)| du < ∞.
(ii) The kernel K(u, x; ω) satisfies the following Lipschitz condition: for each u,

|K(u, x; ω) − K(u, y; ω)| ≤ a(ω)|x − y|,

where a(ω) is a nonnegative, real-valued function on Ω.
(iii) h(t; ω) is continuous in t for each ω ∈ Ω.

PROOF We shall obtain a solution first on an arbitrary closed, bounded interval [0, M]. Let C[0, M] be the space of all continuous functions on [0, M], where for each ω ∈ Ω we introduce a norm ‖·‖_ω in the following way:

‖x‖_ω = sup_{0≤t≤M} e^{−b(ω)t} |x(t)|,

where b(ω) > a(ω). Under this norm C[0, M] is a Banach space. Now let E be the set of all mappings x(t; ω) from Ω into C[0, M] such that x(t; ω) is a random variable for every t. Then x(t; ω) ∈ E is product-measurable on [0, M] × Ω, and so ∫₀ᵗ K(u, x(u; ω); ω) du is defined and is a random variable for every t. Hence if we define

T[x(t; ω)] = h(t; ω) + ∫₀ᵗ K(u, x(u; ω); ω) du,

then clearly T[x(t; ω)] is continuous in t for each ω ∈ Ω, and hence product-measurable on [0, M] × Ω. If z(t; ω) ∈ E, then one can check easily that

‖T[x(t; ω)] − T[z(t; ω)]‖_ω ≤ C(ω)‖x(t; ω) − z(t; ω)‖_ω,
where C(ω) = a(ω)/b(ω) < 1. Then, defining

x₀(t; ω) = h(t; ω),
xₙ₊₁(t; ω) = h(t; ω) + ∫₀ᵗ K(u, xₙ(u; ω); ω) du,

we see that the xₙ(t; ω) are all in E and, for each ω ∈ Ω, xₙ(t; ω) is a Cauchy sequence in the Banach space C[0, M] with the metric induced by ‖·‖_ω. Hence by the completeness property there exists an x(t; ω) ∈ E such that limₙ→∞ xₙ(t; ω) = x(t; ω) in C[0, M], and

x(t; ω) = h(t; ω) + ∫₀ᵗ K(u, x(u; ω); ω) du

for t ∈ [0, M]. Now, as in Theorem 2.3.1, we can find a random solution x(t; ω) such that

x(t; ω) = h(t; ω) + ∫₀ᵗ K(u, x(u; ω); ω) du

for all t ≥ 0. Suppose that y(t; ω) is a random solution different from x(t; ω). Then y(t; ω) ∈ E as before, and

‖x(t; ω) − y(t; ω)‖_ω ≤ C(ω)‖x(t; ω) − y(t; ω)‖_ω < ‖x(t; ω) − y(t; ω)‖_ω,

a contradiction. Hence x(t; ω) is the unique random solution of (2.0.2), which completes the proof.

We now suppose that x(t; ω) and h(t; ω) are m-component vector-valued continuous random functions from R₊ into L₂(Ω, 𝒜, 𝒫) as defined in Chapter I. Also, the stochastic kernel K(u, x(u; ω); ω) is an m-component vector-valued function such that for each u ∈ R₊ and ω ∈ Ω, K is continuous in x; for each x and ω, K is a continuous function of u; and for each u and x, K(u, x; ω) ∈ L₂(Ω, 𝒜, 𝒫). Then for each t and ω, x(t; ω) and h(t; ω) are points in m-dimensional Euclidean space, and for each x, u, and ω, K(u, x; ω) is a point in m-dimensional Euclidean space. Let C_m(R₊, L₂(Ω, 𝒜, 𝒫)) = C_m be the space of all m-component vector-valued continuous functions from R₊ into L₂(Ω, 𝒜, 𝒫) with the topology of
uniform convergence on each compact interval [0, T], T > 0, as defined in Chapter I. We shall use the concepts of "admissibility theory" and Banach's fixed-point theorem from Chapter I to prove the following existence theorem.
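Before stating the theorem, it is worth illustrating the fixed-point machinery numerically. The weighted norm introduced in the proof of Theorem 2.3.3, ‖x‖_ω = sup_t e^{−b(ω)t}|x(t)|, makes the integral operator a contraction with constant a(ω)/b(ω) < 1 even on a long interval. The deterministic sketch below checks this estimate on a grid; the kernel K(u, x) = a sin x and free term h are illustrative choices.

```python
import math

# Discretized check of the contraction estimate of Theorem 2.3.3: for
# T[x](t) = h(t) + int_0^t K(u, x(u)) du with |K(u,x) - K(u,y)| <= a|x - y|,
# the weighted norm sup_t e^{-b t}|x(t)| gives
# ||T[x] - T[y]|| <= (a/b)||x - y|| whenever b > a.
a, b, M, n = 2.0, 4.0, 3.0, 3000
dt = M / n
ts = [i * dt for i in range(n + 1)]
K = lambda u, x: a * math.sin(x)          # Lipschitz constant a (illustrative)
h = lambda t: math.cos(t)                 # illustrative free term

def T(x):
    out, integral = [], 0.0
    for i, t in enumerate(ts):
        out.append(h(t) + integral)       # left-endpoint rule for int_0^t
        integral += K(t, x[i]) * dt
    return out

def weighted_norm_diff(x, y):
    return max(math.exp(-b * t) * abs(xi - yi) for t, xi, yi in zip(ts, x, y))

x = [h(t) for t in ts]
y = [h(t) + 1.0 for t in ts]
ratio = weighted_norm_diff(T(x), T(y)) / weighted_norm_diff(x, y)
```

Under the plain sup norm the same operator need not contract on a long interval; the exponential weight e^{−b(ω)t}, with b(ω) > a(ω), is exactly what makes the Picard iteration of the proof converge on all of [0, M].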
Theorem 2.3.4 Under the assumptions just given, if Eq. (2.0.2) satisfies the following conditions, then there exists a unique random solution of (2.0.2):

(i) D, E ⊂ C_m are Banach spaces stronger than C_m, and the pair (E, D) is admissible with respect to the operator T given by

(Tx)(t; ω) = ∫₀ᵗ x(u; ω) du.

(ii) K(t, x(t; ω); ω) is a mapping from the set

W = {x(t; ω): x(t; ω) ∈ D, ‖x(t; ω)‖_D ≤ ρ}

into the space E, for some ρ ≥ 0, satisfying

‖K(t, x(t; ω); ω) − K(t, y(t; ω); ω)‖_E ≤ λ‖x(t; ω) − y(t; ω)‖_D

for x(t; ω), y(t; ω) ∈ W, where λ ≥ 0 is a constant.
(iii) ‖h(t; ω)‖_D + Q‖K(t, x(t; ω); ω)‖_E ≤ ρ, where Q is the norm of the operator T, and λQ < 1.
PROOF Define the operator U from W into D by

(Ux)(t; ω) = h(t; ω) + ∫₀ᵗ K(u, x(u; ω); ω) du.

Since the function K is continuous in x, U is a continuous mapping from W into D. We must show that U(W) ⊂ W (inclusion property) and that U is a contraction mapping on W. We first show that U is a contraction mapping. Let x(t; ω) and y(t; ω) be in W. Since the difference of two elements of a Banach space is in the Banach space, we have (Ux)(t; ω) − (Uy)(t; ω) ∈ D. Thus

‖(Ux)(t; ω) − (Uy)(t; ω)‖_D ≤ Q‖K(t, x(t; ω); ω) − K(t, y(t; ω); ω)‖_E,    (2.3.1)

since K(t, x(t; ω); ω) − K(t, y(t; ω); ω) ∈ E, and the operator T is continuous from E to D and hence bounded, from the remark following Lemma 2.1.1. Using the Lipschitz condition in (ii) of the theorem, we have from inequality (2.3.1)

‖(Ux)(t; ω) − (Uy)(t; ω)‖_D ≤ Qλ‖x(t; ω) − y(t; ω)‖_D.

Since λQ < 1 by hypothesis, U is a contraction mapping on W. Now we must show inclusion. Let x(t; ω) ∈ W. We have

‖(Ux)(t; ω)‖_D ≤ ‖h(t; ω)‖_D + Q‖K(t, x(t; ω); ω)‖_E ≤ ρ

by Condition (iii), so that (Ux)(t; ω) ∈ W. Then, since x(t; ω) is arbitrary, U(W) ⊂ W. Therefore, by Banach's fixed-point theorem, there exists a unique point x(t; ω) in W such that

(Ux)(t; ω) = h(t; ω) + ∫₀ᵗ K(u, x(u; ω); ω) du = x(t; ω)

for each t ∈ R₊, completing the proof.

2.4
Applications of the Integral Equation

x(t; ω) = h(t; ω) + ∫₀ᵗ K(u, x(u; ω); ω) du
We shall now present two important problems in which stochastic integral equations of the form (2.0.2) arise, one in the study of the turbulent motion of fluids and the other in the study of drug distribution in one- and two-organ biological systems. In order to illustrate the fruitfulness of the results of the previous section, we shall use some of the theorems to ensure the existence of random solutions for the equations in these applications.

2.4.1 A Stochastic Integral Equation in Turbulence Theory
A theoretical approach to the turbulent motion of fluids is virtually impossible except in a stochastic framework, since the velocity fluctuations in turbulent flow are random.
II  SOME VOLTERRA TYPE EQUATIONS WITH APPLICATIONS
Consider a tagged point in a continuous fluid in turbulent motion, as described by Lumley [1]. Let the position at time $t > 0$ of the designated point which was located at position $s$ at time $t = 0$ be a random variable $r(s, t; \omega)$ defined on $\Omega$ for each fixed $t > 0$ and initial position vector $s$. Then $r(s,t;\omega)$ and $s$ are three-component vectors; that is, for each $s$, $t > 0$, and $\omega \in \Omega$, $r(s,t;\omega)$ is a vector. The velocity of a point at position $r(s,t;\omega)$ is given by the Eulerian velocity field $u(t, r(s,t;\omega); \omega)$, which is a vector-valued random variable for each $t \ge 0$ and $r$. That is, for each $t \in R_+$, $r$, and $\omega \in \Omega$, $u(t, r; \omega)$ is a three-component vector.

The Eulerian approach to describing flow in fluid mechanics expresses, at a fixed position, the velocity of a point in the continuous fluid moving past this position at any time $t > 0$. However, if the flow is turbulent, the velocity is random, and the problem is to determine $r(s,t;\omega)$ when the statistical properties of $u(t,r;\omega)$ are known. Since the initial position of the point can be taken as the origin, put $s = 0$. Then the position of the designated point is given by the stochastic integral equation
$$r(t;\omega) = \int_0^t u(\tau, r(\tau;\omega); \omega)\,d\tau, \qquad (2.4.1)$$
where
(i) $\omega \in \Omega$;
(ii) $r(t;\omega)$ is the unknown vector-valued random function giving the coordinates $(r_1(t;\omega), r_2(t;\omega), r_3(t;\omega))$ of the position of the tagged point in the fluid for each time $t > 0$, given that at $t = 0$ it was at the origin;
(iii) $u(t, r(t;\omega); \omega)$ is the Eulerian velocity field, that is, a random vector function whose components record the random rate of change of the coordinates of the tagged point at time $t$, and whose length $|u(t, r;\omega)|$ records the random speed of the tagged point at time $t > 0$.

Here we have three-component vector-valued functions and may apply Theorem 2.3.4 to obtain conditions under which a random solution of Eq. (2.4.1) exists. Because of the physical situation, we may assume that $r(t;\omega)$ is a continuous vector-valued function from $R_+$ into $L_2(\Omega, \mathscr{A}, \mathscr{P})$, and that the stochastic kernel $u(t, r(t;\omega); \omega)$ is a three-component vector-valued function which, for each $t \in R_+$ and $\omega \in \Omega$, is continuous in $r$ and which, for each $t \in R_+$ and vector $r(t;\omega)$, is in $L_2(\Omega, \mathscr{A}, \mathscr{P})$. Also, since $h(t;\omega) = 0$, it is trivially in $L_2(\Omega, \mathscr{A}, \mathscr{P})$. Hence, if $\|r(t;\omega)\| \le \rho$ for some $\rho > 0$, and if $u$ satisfies a Lipschitz condition with respect to $r(t;\omega)$ with Lipschitz constant $\lambda$ such that $\lambda Q < 1$ and
$$\|u(t, r(t;\omega); \omega)\| \le \rho/Q$$
(since $\|h(t;\omega)\| = 0$), then by Theorem 2.3.4 there exists a unique random solution of (2.4.1). As in Chapter I, $\|\cdot\| = \sup_{t \ge 0} \|\cdot\|_{L_2}$.
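As a concrete illustration, Eq. (2.4.1) can be simulated for a single realization $\omega$ by discretizing the integral. The sketch below is hypothetical: the text assumes only that $u$ is continuous, bounded, and Lipschitz in $r$, so a simple drift-plus-Gaussian-fluctuation field stands in for the velocity field here.

```python
import random

def simulate_tagged_point(T=1.0, n=1000, sigma=0.5, seed=0):
    """One sample path of r(t; w) = integral_0^t u(tau, r(tau; w); w) dtau,
    computed by an Euler (left-endpoint) discretization of the integral.

    The velocity field used here is a hypothetical stand-in: a constant
    mean drift plus independent Gaussian fluctuations in each component.
    """
    rng = random.Random(seed)
    dt = T / n
    r = [0.0, 0.0, 0.0]          # the tagged point starts at the origin (s = 0)
    drift = [1.0, 0.0, 0.0]      # assumed mean flow along the first coordinate
    for _ in range(n):
        u = [drift[i] + sigma * rng.gauss(0.0, 1.0) for i in range(3)]
        r = [r[i] + u[i] * dt for i in range(3)]
    return r
```

Averaging many such paths (over many draws of $\omega$) would estimate moments of the random solution, which is all the existence theory guarantees without distributional assumptions.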
2.4.2 Stochastic Models for Chemotherapy

We shall now consider two models for chemotherapy which were developed deterministically by Bellman, Jacquez, and Kalaba [2]. We will present stochastic versions of these models that describe the distribution of a drug in one-organ and two-organ biological systems and which lead to semistochastic integral equations of the form (2.0.2) (Padgett and Tsokos [1, 10]). That is, the resulting integral equations have deterministic solutions for $0 \le t < \tau$, for some fixed $\tau > 0$, and random solutions for $\tau \le t$. Solutions of this type are called semirandom solutions.

2.4.2.a A STOCHASTIC MODEL FOR DRUG DISTRIBUTION IN A ONE-ORGAN BIOLOGICAL SYSTEM

Consider a closed system with a simplified heart, one organ or capillary bed, and recirculation of the blood with a constant rate of flow, where the heart is considered as a mixing chamber of constant volume. The flow of blood is assumed to be "slug" flow; that is, no mixing occurs in the vessels. It is assumed that an injection of drug is given directly at the entrance of the heart, producing a known concentration in the blood plasma. Also, as the blood passes through the capillary bed or organ, the particles of drug are assumed to enter the extracellular space only by the process of diffusion through the capillary walls.

Since it is impossible to know the concentration of drug at every point in the plasma at any given time after injection, for a given experiment measurements of drug concentration in the plasma should be made at several points in a particular area of the system at the same instant of time, and the mean value of these measurements should be used as the drug concentration in that area of the system. For example, several measurements may be taken at points between the heart exit and the entrance to the capillary bed at time $t > 0$, since the concentration is considered fairly uniform in certain areas of the system, as assumed by Bellman, Jacquez, and Kalaba [1, 2].
It is realistic to assume that this mean value estimates the true state of nature at time $t > 0$, and if another initial injection is given in the same system and the experiment is repeated under the same conditions, then a different mean value will result. Thus the concentration of drug in the plasma in given areas of the system is more realistically considered as a random function of time rather than a deterministic one.

We use the following notation for $t \ge 0$: $u(s, t;\omega)$ is the concentration in moles per unit volume at point $s$ in the capillary at time $t$, $\omega \in \Omega$ (a random variable for each $t$); $c$ is the constant volume flow rate of plasma in the capillary bed; and $l$ is the mean length of capillary in the organ. We assume that all capillaries in the capillary bed or organ are lumped together into
one capillary of length $l$, total volume flow rate $c$, and total surface area equal to that of all of the capillaries combined, and that the blood enters at the zero end and exits at the $l$ end.

For a one-organ system maintained by a simplified heart (see Fig. 2.4.1), let the heart be considered as a mixing chamber of constant volume given by
$$V^* = V_e/[\ln(1 + V_e/V_r)],$$
where $V_r$ is the residual volume of the heart and $V_e$ is the ejection volume, and let the heart have constant entering and leaving flow rates $c$. This is obtained by representing the concentration $y(t)$ in plasma leaving the heart at time $t > 0$ as a function of $V_r$, $V_e$, and the initial concentration of drug at time zero, $y_0$, in the residual blood of the heart,
$$y(t) = y_0 \exp(-ct/V^*), \qquad t \ge 0.$$
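The mixing-chamber volume $V^* = V_e/\ln(1 + V_e/V_r)$ and the exponential decay $y(t) = y_0\exp(-ct/V^*)$ are straightforward to compute. A minimal sketch, in which the volume and rate values are illustrative and not taken from the text:

```python
import math

def effective_heart_volume(V_r, V_e):
    """Constant mixing-chamber volume V* = V_e / ln(1 + V_e / V_r)."""
    return V_e / math.log(1.0 + V_e / V_r)

def leaving_concentration(t, y0, c, V_star):
    """Concentration in plasma leaving the heart: y(t) = y0 exp(-c t / V*)."""
    return y0 * math.exp(-c * t / V_star)

# Illustrative (hypothetical) values: residual volume 60, ejection volume 70.
V_star = effective_heart_volume(60.0, 70.0)
```

The decay rate $c/V^*$ shows how a larger effective mixing volume slows the washout of drug from the heart.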
We assume that an initial injection is given at the entrance of the heart, resulting in a concentration $u_I(t)$, $0 \le t \le t_1$, of drug in plasma entering the heart, where $t_1$ is the duration of injection. Let $\tau > 0$ denote the time required for the blood to flow from the heart exit to the entrance of the organ, and also the time required for blood to flow from the exit of the organ to the heart entrance. Then plasma containing drug particles reaches the organ $\tau$ time units after injection, and while it flows through the organ, diffusion of drug through the capillary walls into the organ tissue occurs. Therefore after time $\tau > 0$ the concentration of drug in blood plasma in the system is a random variable, due to the random nature of the diffusion process and the recirculation of the blood. Hence the concentration of drug in plasma entering the heart at time $t \ge \tau$, $u_R(t;\omega)$, is a random variable and is given by
$$u_R(t;\omega) = \begin{cases} 0, & t < 0,\\ u_I(t), & 0 \le t < \tau,\\ u_I(t) + u(l, t - \tau;\omega), & t \ge \tau, \end{cases} \qquad (2.4.2)$$

Figure 2.4.1  One-organ model.
where $u_I(t) = 0$ for $t > t_1$, and $u(l, t;\omega)$ is the concentration of drug in plasma leaving the organ at time $t$. The concentration of drug in plasma leaving the heart, $u_L(t;\omega)$, satisfies the integral equation (see Bellman, Jacquez, Kalaba, and Kotkin [1])
$$u_L(t;\omega) = (c/V^*)\int_0^t [u_R(s;\omega) - u_L(s;\omega)]\,ds, \qquad t \ge 0. \qquad (2.4.3)$$
Then the concentration of drug entering the organ at time $t$ is given by
$$u(0, t;\omega) = \begin{cases} 0, & 0 \le t < \tau,\\ u_L(t - \tau;\omega), & t \ge \tau. \end{cases}$$
Equation (2.4.3) is a semistochastic integral equation as stated. The solution is deterministic for $0 \le t < \tau$, that is, $u_L(t;\omega) = \xi(t)$, $0 \le t < \tau$, and $u_L(t;\omega)$ is a random function of $t$ for $t \ge \tau$. This is due to the diffusion process in the organ and the recirculation of the blood. Substituting Eq. (2.4.2) into (2.4.3), we have
$$u_L(t;\omega) = \int_0^t (c/V^*)\bigl[u_I(s) + u(l, s-\tau;\omega) - u_L(s;\omega)\bigr]\,ds$$
$$= \int_0^{\min(t,t_1)} (c/V^*)\,u_I(s)\,ds - (c/V^*)\int_0^t \bigl[u_L(s;\omega) - u(l, s-\tau;\omega)\bigr]\,ds$$
$$= G(t) + \int_0^t k(s, u_L(s;\omega); \omega)\,ds, \qquad (2.4.4)$$
where
$$k(s, u_L(s;\omega); \omega) = (-c/V^*)\bigl[u_L(s;\omega) - u(l, s-\tau;\omega)\bigr],$$
and $u(l, s;\omega) = 0$ if $s < 0$. Let the initial concentration be given by
$$u_I(t) = \begin{cases} u^*, & 0 \le t \le t_1,\\ 0, & \text{otherwise}. \end{cases}$$

We shall now show that a semirandom solution of (2.4.4) exists, that is, a deterministic solution $\xi(t)$ for $0 \le t < \tau$ and a random solution $u_L(t;\omega)$ for $\tau \le t \le M$, where $\tau < M < \infty$, $t_1 < M$. To show that a deterministic solution $\xi(t)$, $0 \le t < \tau$, exists, we shall use the ordinary method of successive approximations for deterministic integral equations (Mikhlin [1]). To show that a random solution $u_L(t;\omega)$, $\tau \le t \le M < \infty$, exists, we employ Theorem 2.3.2.
Suppose that $0 \le t < \tau$. Since $u(l, t;\omega) = 0$ for $t < \tau$, Eq. (2.4.4) becomes a linear (deterministic) Volterra integral equation with $u_L(t;\omega) = \xi(t)$,
$$\xi(t) = G(t) - (c/V^*)\int_0^t \xi(s)\,ds. \qquad (2.4.5)$$
Let the kernel be given by
$$K(t,y) = \begin{cases} 1, & 0 \le y \le t,\\ 0, & \text{otherwise}. \end{cases}$$
Then the $n$th approximation is given by
$$\xi_n(t) = G(t) + \sum_{m=1}^{n} (-c/V^*)^m \int_0^t K_m(t,y)\,G(y)\,dy,$$
obtained by successive substitution in the sequence of approximations and then interchanging the order of summation and integration; here $K_m$ denotes the $m$th iterated kernel. As $n \to \infty$, the sequence $\xi_n(t)$ converges to a solution of (2.4.5) if the series on the right converges, that is,
$$\xi(t) = G(t) + \sum_{m=1}^{\infty} (-c/V^*)^m \int_0^t K_m(t,y)\,G(y)\,dy. \qquad (2.4.6)$$
To see that the series in (2.4.6) converges, we note that since $K(t,y) \le 1$ for all $t$ and $y$, there exists a $Q > 0$ such that $|K(t,y)| \le Q$. By definition, we have
$$|K_1(t,y)| = |K(t,y)| \le Q$$
and
$$|K_2(t,y)| = \left|\int_y^t K(t,s)\,K(s,y)\,ds\right| \le Q^2(t-y).$$
Assume that
$$|K_m(t,y)| \le \frac{Q^m (t-y)^{m-1}}{(m-1)!}. \qquad (2.4.7)$$
Then
$$|K_{m+1}(t,y)| \le \int_y^t |K(t,s)|\,|K_m(s,y)|\,ds \le \frac{Q^{m+1}}{(m-1)!}\int_y^t (s-y)^{m-1}\,ds = \frac{Q^{m+1}(t-y)^m}{m!}.$$
Therefore, by induction, inequality (2.4.7) holds for all $m \ge 1$, and from (2.4.6) we have
$$|\xi(t)| \le G(t) + \sum_{m=1}^{\infty} (c/V^*)^m \left|\int_0^t K_m(t,y)\,G(y)\,dy\right| \le G(t) + \sum_{m=1}^{\infty} (c/V^*)^m\,\frac{Q^m t^m}{m!}\cdot\frac{c u^* t_2}{V^*},$$
where $t_2 = \min(t, t_1)$, $0 \le t < \tau$. The series on the right converges by the ratio test. Hence there exists a solution of (2.4.4) for $0 \le t < \tau$.

Now suppose that $\tau \le t \le M$, $\tau < M < \infty$. Then Eq. (2.4.4) is
$$u_L(t;\omega) = G(t) + \int_0^t k(s, u_L(s;\omega); \omega)\,ds.$$
For $\omega \in \Omega$ the concentrations $u_L(t;\omega)$ and $u(l, t;\omega)$ can be considered as continuous functions of time, and hence $k(t, u_L(t;\omega); \omega)$ is continuous in $t$ and $u_L$. For each $t$ and $u_L$, $k(t, u_L;\omega)$ is a constant. Hence $k(t, u_L;\omega) \in L_p(\Omega, \mathscr{A}, \mathscr{P})$, $1 \le p < \infty$, trivially, and Condition (i) of Theorem 2.3.2 is satisfied.

By the nature of the physical situation, the maximum value of $u_L(t;\omega)$ and $u(l, t-\tau;\omega)$ is $u^*$, and the minimum value is zero for all $t \in [\tau, M]$. Thus
$$\sup_{u_L} |k(t, u_L;\omega)| = \sup_{u_L}\bigl\{(c/V^*)|u_L(t;\omega) - u(l, t-\tau;\omega)|\bigr\} = (c/V^*)u^*.$$
Define
$$g(\omega) = cu^*/V^*, \qquad \omega \in \Omega,$$
and then $g(\omega) \in L_p(\Omega, \mathscr{A}, \mathscr{P})$, since
$$\int_\Omega |g(\omega)|^p\,d\mathscr{P}(\omega) = (cu^*/V^*)^p < \infty.$$
Therefore Condition (ii) of Theorem 2.3.2 is satisfied.
The stochastic free term $h(t;\omega)$ is the deterministic function $G(t)$, which is bounded,
$$|G(t)| \le \int_0^{\min(t,t_1)} (c/V^*)\,u_I(s)\,ds \le t_1 c u^*/V^* < \infty$$
for all $t \in [0, M]$. Hence $h(t;\omega)$ is in $L_p(\Omega, \mathscr{A}, \mathscr{P})$. Obviously, $h(t;\omega)$ is continuous in $t$, since $G(t)$ is an integral. Let $t \in [\tau, M] \subset [0, M]$. Then
$$|h(t;\omega)| + \int_0^t \sup_{u_L} |k(s, u_L;\omega)|\,ds \le t_1(cu^*/V^*) + t(cu^*/V^*) \le (cu^*/V^*)(t_1 + M).$$
Take
$$N_1 = \bigl[(cu^*/V^*)(t_1 + M)\bigr] + 1 < \infty,$$
where $[\,\cdot\,]$ is the greatest integer function. Then Condition (iii) of Theorem 2.3.2 holds.

Suppose the sequence $\{u_{L,n}(t;\omega)\}$ converges to $u_L(t;\omega)$ in $L_p(\Omega, \mathscr{A}, \mathscr{P})$ for each $t \in [\tau, M]$. Then for every $\varepsilon_1 > 0$ there exists an $N > 0$ such that whenever $n > N$,
$$\left\{\int_\Omega |u_{L,n}(t;\omega) - u_L(t;\omega)|^p\,d\mathscr{P}(\omega)\right\}^{1/p} < \varepsilon_1.$$
Therefore, letting $\varepsilon = (c/V^*)\varepsilon_1 > 0$, we have for $n > N$
$$\left\{\int_\Omega |k(t, u_{L,n}(t;\omega);\omega) - k(t, u_L(t;\omega);\omega)|^p\,d\mathscr{P}(\omega)\right\}^{1/p} = (c/V^*)\left\{\int_\Omega |u_{L,n}(t;\omega) - u_L(t;\omega)|^p\,d\mathscr{P}(\omega)\right\}^{1/p} < (c/V^*)\varepsilon_1 = \varepsilon.$$
That is, $k(t, u_{L,n}(t;\omega);\omega)$ converges to $k(t, u_L(t;\omega);\omega)$ in $L_p(\Omega, \mathscr{A}, \mathscr{P})$. Hence Condition (iv) of Theorem 2.3.2 is satisfied.

Therefore for $\tau \le t \le M < \infty$, $t_1 < M$, there exists a random solution of Eq. (2.4.4). Thus we have shown that the semistochastic integral equation (2.4.4) possesses a semirandom solution for all $t \in [0, M]$, where $0 < M < \infty$, $\tau < M$, and $t_1 < M$.
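The deterministic part of this argument can be checked numerically. For $0 \le t < \tau$ with $t \le t_1$, Eq. (2.4.5) reads $\xi(t) = G(t) - (c/V^*)\int_0^t \xi(s)\,ds$ with $G(t) = (c/V^*)u^* t$, whose exact solution is $\xi(t) = u^*(1 - e^{-ct/V^*})$. The sketch below is our illustration, not from the text: it runs the method of successive approximations with a trapezoidal quadrature and can be compared against this closed form.

```python
import math

def successive_approx(G, a, t, n_steps=2000, n_iter=40):
    """Picard iteration for xi(t) = G(t) - a * integral_0^t xi(s) ds.

    The integral is evaluated by the composite trapezoidal rule on a
    uniform grid; each sweep replaces xi by the right-hand side.
    Returns the iterate's value at the endpoint t."""
    h = t / n_steps
    grid = [i * h for i in range(n_steps + 1)]
    xi = [G(s) for s in grid]          # zeroth approximation: xi_0 = G
    for _ in range(n_iter):
        new = []
        integral = 0.0
        prev = xi[0]
        for i, s in enumerate(grid):
            if i > 0:
                integral += 0.5 * h * (prev + xi[i])
            prev = xi[i]
            new.append(G(s) - a * integral)
        xi = new
    return xi[-1]
```

With $c/V^* = 2$ and $u^* = 1$, for example, the iterate at $t = 1$ agrees with $1 - e^{-2}$ to within about $10^{-4}$, consistent with the convergence of the Neumann series established above.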
2.4.2.b TWO-ORGAN BIOLOGICAL SYSTEMS

In this section we will extend the results of the previous section to biological systems which consist of two organs.
Consider a closed system with a simplified heart, two organs or capillary beds, and recirculation of the blood with a constant rate of flow, where the heart is considered as a mixing chamber of constant volume. With respect to the flow of blood, we assume as before that there is no mixing in the vessels. This system is not as simplified as it may appear at a casual glance, since we may consider one organ as a particular organ of the body, such as the lungs, and the other as a collection of the remaining capillary beds or organs of the body. See Fig. 2.4.2 for a schematic description of the system.

As in Section 2.4.2.a, it is assumed that an injection of drug, or a chemical agent, is given to the system directly at the entrance of the heart, producing a known concentration of drug in the blood plasma entering the heart; that is, drug particles are dissolved in the blood plasma. Also, as the blood passes through each of the organs, the particles of drug are assumed to enter the extracellular space only by the process of diffusion through the capillary walls. Due to the nondeterministic nature of a diffusion process, a random amount of drug is removed from the blood plasma in the organs, and hence the concentration of drug in the blood is reduced by a random amount. Therefore it is impossible to know exactly the point-by-point concentration of the drug in the blood plasma and in the extracellular space of the organs at any given time after the blood containing the drug passes through the organs.

In addition to the notation already introduced in the previous section, the following will be used: $u_j(s,t;\omega)$ is the random concentration of drug in organ $j$ at point $s$ in the capillary at time $t$, for $j = 1, 2$; $c_j$ is the constant-volume flow rate of blood in organ $j$, $j = 1, 2$; and $c = c_1 + c_2$ is the total constant-volume flow rate of blood in the system.
Again, suppose that the injection of drug produces an initial concentration $u_I(t)$, $0 \le t \le t_1$, entering the heart, and let $\tau$ denote the time lag due to the
circulation of the blood as described in Section 2.4.2.a. Thus the drug enters each organ after a time $\tau$, and due to the diffusion process in the organs, the concentration of drug entering the heart after time $\tau$, $u_R(t;\omega)$, is a random variable for each $t \ge \tau$ and is given by
$$u_R(t;\omega) = \begin{cases} 0, & t < 0,\\ u_I(t), & 0 \le t < \tau,\\ u_I(t) + \bigl[c_1 u_1(l, t-\tau;\omega) + c_2 u_2(l, t-\tau;\omega)\bigr]/c, & \tau \le t \le M, \end{cases}$$
where $u_j(l, t;\omega)$ is the concentration of drug in plasma leaving organ $j$ (at the $l$ end) at time $t$, $j = 1, 2$, and $u_I(t) = 0$ for $t > t_1$. Also, the concentration of drug in plasma leaving the heart at time $t \ge 0$ is given by the semirandom solution $u_L(t;\omega)$ of the semistochastic integral equation (2.4.3). We showed in the previous section that this semistochastic integral equation has a semirandom solution for each $t$, $0 \le t \le M$, where $M < \infty$ is a constant. Then the concentration of drug in plasma entering organ $j$ at time $t$ is given by
$$u_j(0, t;\omega) = \begin{cases} 0, & 0 \le t < \tau,\\ u_L(t-\tau;\omega), & t \ge \tau, \end{cases}$$
where $j = 1, 2$.

These models for chemotherapy are realistic in that they retain such properties as recirculation of the blood, mixing in the heart, the presence of more than one organ, and randomness in the diffusion of the drug into the organ tissues, even though several simplifying assumptions were made in order to deal with the mathematics involved. Such assumptions seem necessary in obtaining mathematical descriptions of biological systems, since biological systems are in general very complex.
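The flow-weighted mixing at the heart entrance is easy to encode directly. A hedged sketch — the function name and test inputs below are ours, not the book's:

```python
def u_R_two_organ(t, tau, u_I, u1_exit, u2_exit, c1, c2):
    """Concentration entering the heart in the two-organ model:
    0 for t < 0, u_I(t) before the lag tau, and afterwards u_I(t) plus
    the flow-weighted average of the two organ exit concentrations
    evaluated at t - tau."""
    if t < 0:
        return 0.0
    if t < tau:
        return u_I(t)
    c = c1 + c2                      # total constant-volume flow rate
    return u_I(t) + (c1 * u1_exit(t - tau) + c2 * u2_exit(t - tau)) / c
```

Weighting by $c_1/c$ and $c_2/c$ reflects that the two organ outflows recombine in proportion to their flow rates before re-entering the heart.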
CHAPTER III

Approximate Solution of the Random Volterra Integral Equation and an Application to Population Growth Modeling

3.0 Introduction

In this chapter we shall present some methods of approximating the unique random solution $x(t;\omega)$ of a stochastic integral equation of the Volterra type of the form
$$x(t;\omega) = h(t;\omega) + \int_0^t k(t,\tau;\omega)\,f(\tau, x(\tau;\omega))\,d\tau, \qquad (3.0.1)$$
where $x(t;\omega)$, $h(t;\omega)$, $k(t,\tau;\omega)$, and $f(t,x)$ behave as described in Chapters I and II. We shall consider the problem of obtaining an approximation to a realization of $x(t;\omega)$ by the method of successive approximations at each $t \in R_+$, and also by applying some of the theory of stochastic approximation.
A new stochastic formulation of a population growth model will also be given, along with a numerical example. In the method of successive approximations we shall investigate the convergence of a generated sequence of random variables to the unique random solution $x(t;\omega)$ at each $t \in R_+$. Also, the rate of convergence, the maximum error of approximation, and the combined error of approximation and of numerically evaluating the integral are considered. A general theorem of Burkholder [1] in the theory of stochastic approximation is also applied to Eq. (3.0.1), resulting in conditions under which a sequence of approximations converges with probability one to the unique random solution $x(t;\omega)$ at each $t \in R_+$.

3.1 The Method of Successive Approximations
Let $C_c(R_+, L_2(\Omega, \mathscr{A}, \mathscr{P}))$ be the space described in Chapter I, and let $B, D \subset C_c(R_+, L_2(\Omega, \mathscr{A}, \mathscr{P}))$ be Banach spaces, with the norm in $D$ defined so that
$$\|x(t;\omega)\|_{L_2(\Omega,\mathscr{A},\mathscr{P})} \le \|x(t;\omega)\|_D \quad \text{for each } t \in R_+.$$
Let
$$S = \{x(t;\omega) : x(t;\omega) \in D,\ \|x(t;\omega)\|_D \le \rho\},$$
as in Theorem 2.1.2, with $x(t;\omega)$, $h(t;\omega)$, $f(t, x(t;\omega))$, and $k(t,\tau;\omega)$, for $0 \le \tau \le t < \infty$, behaving as described previously. As in the existence proof of Theorem 2.1.2, let $U$ be the contraction mapping from $S$ into $S$ defined by
$$(Ux)(t;\omega) = h(t;\omega) + \int_0^t k(t,\tau;\omega)\,f(\tau, x(\tau;\omega))\,d\tau,$$
which has the unique fixed point $x(t;\omega)$.

It is assumed here that the distribution of the random variable $h(t;\omega)$ is known at each $t \in R_+$, or that a value of $h(t;\omega)$ can be observed at each $t \in R_+$. Define the sequence of successive approximations $\{x_n(t;\omega)\}$ by
$$x_0(t;\omega) = h(t;\omega), \qquad x_{n+1}(t;\omega) = (Ux_n)(t;\omega), \qquad n \ge 0. \qquad (3.1.1)$$
The sequence defined recursively here is contained in the set $S$, which is a result of the following lemma.

Lemma 3.1.1  $h(t;\omega) \in S$ and hence $x_n(t;\omega) \in S$, $n = 0, 1, 2, \ldots$. Also, $x_n(t;\omega) \to x(t;\omega) \in S$.
PROOF
From the condition of Theorem 2.1.2,
$$\|h(t;\omega)\|_D \le \rho(1 - \lambda K) - K\|f(t,0)\|_B \le \rho(1 - \lambda K) < \rho,$$
since $\lambda K < 1$. Hence $h(t;\omega) \in S$. For an arbitrary integer $r > 0$ consider
$$\|x_{n+r}(t;\omega) - x_n(t;\omega)\|_D \le \lambda K\,\|x_{n+r-1}(t;\omega) - x_{n-1}(t;\omega)\|_D,$$
by Condition (ii) of Theorem 2.1.2 and the boundedness of the operator $T$. Repeating the same argument $n - 1$ times gives
$$\|x_{n+r}(t;\omega) - x_n(t;\omega)\|_D \le (\lambda K)^2\|x_{n+r-2}(t;\omega) - x_{n-2}(t;\omega)\|_D \le \cdots \le (\lambda K)^n\|x_r(t;\omega) - x_0(t;\omega)\|_D \le (\lambda K)^n\bigl[\|x_r(t;\omega)\|_D + \|h(t;\omega)\|_D\bigr] \le (\lambda K)^n 2\rho.$$
But as $n \to \infty$, $(\lambda K)^n \to 0$ since $\lambda K < 1$, and hence
$$\lim_{n\to\infty}\|x_{n+r}(t;\omega) - x_n(t;\omega)\|_D = 0.$$
Since $r > 0$ is arbitrary, $\{x_n(t;\omega)\}$ is a Cauchy sequence in $S \subset D$. Thus, since $D$ is complete, $\{x_n(t;\omega)\}$ converges to a point of $D$. But the unique solution $x(t;\omega)$ is in $S \subset D$, and since $\|x_n(t;\omega) - x(t;\omega)\|_D \to 0$ as $n \to \infty$, we have $x_n(t;\omega) = (Ux_{n-1})(t;\omega) \to x(t;\omega) = (Ux)(t;\omega) \in S$.
3.1.1 Convergence of the Successive Approximations†

We shall now investigate the mode of convergence of the successive approximations defined by Eq. (3.1.1). We shall use the definition of almost sure convergence (or convergence with probability one) and the Markov inequality as given by Loève [1].

Definition 3.1.1  Let $X_n(\omega)$ be a sequence of random variables defined on the probability measure space $(\Omega, \mathscr{A}, \mathscr{P})$, and let $X(\omega)$ be a random variable defined on $(\Omega, \mathscr{A}, \mathscr{P})$. The sequence $X_n(\omega)$ converges almost surely (a.s.) to $X(\omega)$, written $X_n(\omega) \to_{\mathrm{a.s.}} X(\omega)$, if $X_n(\omega) \to X(\omega)$ except perhaps on a set of probability zero. Equivalently, $X_n(\omega) \to_{\mathrm{a.s.}} X(\omega)$ if for every $\varepsilon > 0$
$$\mathscr{P}\Bigl\{\bigcup_{k \ge n}[\omega : |X_k(\omega) - X(\omega)| \ge \varepsilon]\Bigr\} \to 0 \quad \text{as } n \to \infty.$$

Markov Inequality  For $a \ge 0$, $r > 0$, we have
$$\mathscr{P}\{\omega : |X(\omega)| \ge a\} \le E|X(\omega)|^r/a^r,$$
provided $E|X(\omega)|^r$ exists.

Theorem 3.1.2 (Loève [1], p. 173)  If for some $r > 0$
$$\sum_{n=1}^{\infty} E|X_n(\omega) - X(\omega)|^r < \infty,$$
then $X_n(\omega) \to_{\mathrm{a.s.}} X(\omega)$.
PROOF  By the Markov inequality, for every $\varepsilon > 0$,
$$\mathscr{P}\{\omega : |X_n(\omega) - X(\omega)| \ge \varepsilon\} \le E|X_n(\omega) - X(\omega)|^r/\varepsilon^r$$
for every $n \ge 1$, $r > 0$. Hence
$$\sum_{n=1}^{\infty}\mathscr{P}\{\omega : |X_n(\omega) - X(\omega)| \ge \varepsilon\} \le \sum_{n=1}^{\infty}\bigl[E|X_n(\omega) - X(\omega)|^r/\varepsilon^r\bigr] < \infty$$
for some $r > 0$, by hypothesis. But
$$\mathscr{P}\Bigl\{\bigcup_{k \ge n}[\omega : |X_k(\omega) - X(\omega)| \ge \varepsilon]\Bigr\} \le \sum_{k=n}^{\infty}\mathscr{P}\{\omega : |X_k(\omega) - X(\omega)| \ge \varepsilon\}, \qquad (3.1.2)$$
and since
$$\sum_{n=1}^{\infty}\mathscr{P}\{\omega : |X_n(\omega) - X(\omega)| \ge \varepsilon\} < \infty$$
for every $\varepsilon > 0$, the sum on the right in (3.1.2) must tend to zero as $n \to \infty$. Hence
$$\mathscr{P}\Bigl\{\bigcup_{k \ge n}[\omega : |X_k(\omega) - X(\omega)| \ge \varepsilon]\Bigr\} \to 0 \quad \text{as } n \to \infty,$$
and $X_n(\omega) \to_{\mathrm{a.s.}} X(\omega)$, by definition.

† Sections 3.1.1-3.1.3 are adapted from Padgett and Tsokos [5] with permission of Taylor and Francis, Ltd.

We shall now show that the sequence of successive approximations (3.1.1) converges a.s. to the unique random solution of (3.0.1).

Theorem 3.1.3  For each $t \in R_+$, $x_n(t;\omega) \to_{\mathrm{a.s.}} x(t;\omega)$ under the conditions of Theorem 2.1.2.

PROOF  By definition, for $t \in R_+$,
repeating the argument $n - 1$ times as in the proof of Lemma 3.1.1, we have
$$\|x_{n+r}(t;\omega) - x_n(t;\omega)\|_D \le (\lambda K)^n 2\rho,$$
and letting $r \to \infty$ gives, for each $n$,
$$\|x_n(t;\omega) - x(t;\omega)\|_D \le (\lambda K)^n 2\rho.$$
Hence
$$\sum_{n=0}^{\infty} E|x_n(t;\omega) - x(t;\omega)|^2 \le \sum_{n=0}^{\infty}(\lambda K)^{2n} 4\rho^2 = \frac{4\rho^2}{1 - (\lambda K)^2} < \infty,$$
since $\lambda K < 1$; in particular, the $n$th term of the series converges to zero as $n \to \infty$. Therefore, by Theorem 3.1.2 (with $r = 2$), for each $t \in R_+$,
$$x_n(t;\omega) \xrightarrow{\ \mathrm{a.s.}\ } x(t;\omega).$$
Thus the sequence of successive approximations converges to the unique random solution $x(t;\omega)$ with probability one for each $t \in R_+$. Therefore the sequence $\{x_n(t;\omega)\}$ also converges to $x(t;\omega)$ in probability and in distribution for each $t \in R_+$. As a by-product of this theorem, we obtain as well that $x_n(t;\omega)$ converges to $x(t;\omega)$ in mean-square (quadratic mean) for each $t \in R_+$, since
$$\sum_{n=0}^{\infty} E|x_n(t;\omega) - x(t;\omega)|^2 < \infty.$$
3.1.2 Rate of Convergence and Error of Approximation

We now consider the rate of convergence of the sequence of successive approximations given by (3.1.1) and obtain the maximum error of approximating the true solution $x(t;\omega)$ by the $n$th successive approximation $x_n(t;\omega)$ at each $t \in R_+$.

For the investigation of the rate of convergence, let $t \in R_+$ be fixed. We now obtain a bound on the norm in $L_2(\Omega, \mathscr{A}, \mathscr{P})$ of the difference between the $n$th and $(n+1)$th successive approximations, giving the speed of convergence of (3.1.1) for each $t \in R_+$. We have
$$\|x_{n+1}(t;\omega) - x_n(t;\omega)\|_{L_2(\Omega,\mathscr{A},\mathscr{P})} \le \lambda K\,\|x_n(t;\omega) - x_{n-1}(t;\omega)\|_D,$$
as in the previous section. Repeating the argument $n - 1$ times, we obtain
$$\|x_{n+1}(t;\omega) - x_n(t;\omega)\|_{L_2(\Omega,\mathscr{A},\mathscr{P})} \le (\lambda K)^n\|x_1(t;\omega) - x_0(t;\omega)\|_D.$$
But since $x_0(t;\omega) = h(t;\omega)$, we have $x_1(t;\omega) = (Ux_0)(t;\omega) = (Uh)(t;\omega)$, and
$$(\lambda K)^n\|x_1(t;\omega) - x_0(t;\omega)\|_D = (\lambda K)^n\left\|\int_0^t k(t,\tau;\omega)\,f(\tau, h(\tau;\omega))\,d\tau\right\|_D \le (\lambda K)^n\rho(1 - \lambda K),$$
from the assumption of Theorem 2.1.2 that $\lambda K < 1$ and the fact that $f(t,x)$ satisfies a Lipschitz condition. Therefore for each $n \ge 0$ and $t \in R_+$ we have
$$\|x_{n+1}(t;\omega) - x_n(t;\omega)\|_{L_2(\Omega,\mathscr{A},\mathscr{P})} \le (\lambda K)^n\rho(1 - \lambda K),$$
where $\lambda K < 1$ and $\rho > 0$.

Now, to find a bound on the error of approximating the random solution $x(t;\omega)$ at $t \in R_+$ by the $n$th successive approximation given by (3.1.1), we use a technique similar to that used by Rall [1] in the nonstochastic case. As before,
$$\|x(t;\omega) - x_n(t;\omega)\|_{L_2(\Omega,\mathscr{A},\mathscr{P})} \le \|x(t;\omega) - x_n(t;\omega)\|_D.$$
An upper bound on the quantity on the right-hand side is found as follows. Since $x_n(t;\omega) \to x(t;\omega)$ by Lemma 3.1.1, let $p > 0$ and note that, as $p \to \infty$, for every $n \ge 0$,
$$\|x_{n+p}(t;\omega) - x_n(t;\omega)\|_D \to \|x(t;\omega) - x_n(t;\omega)\|_D.$$
Hence
$$\|x(t;\omega) - x_n(t;\omega)\|_D \le \sum_{m=n}^{\infty}\|x_{m+1}(t;\omega) - x_m(t;\omega)\|_D \le (\lambda K)^n\rho(1 - \lambda K)\,[1/(1 - \lambda K)] = (\lambda K)^n\rho,$$
since $\lambda K < 1$. Therefore the error of approximation of $x(t;\omega)$ by $x_n(t;\omega)$ for each $t \ge 0$ is less than $(\lambda K)^n\rho$; that is,
$$\|x(t;\omega) - x_n(t;\omega)\|_{L_2(\Omega,\mathscr{A},\mathscr{P})} \le (\lambda K)^n\rho. \qquad (3.1.4)$$
We may also remark that these results support the fact that $x_n(t;\omega)$ converges to $x(t;\omega)$ in mean-square, as was shown in Section 3.1.1, and we
have a bound on $E\{|x(t;\omega) - x_n(t;\omega)|^2\}$ for each $t \in R_+$ and $n \ge 0$ from (3.1.4), given by
$$E\{|x(t;\omega) - x_n(t;\omega)|^2\} \le [(\lambda K)^n\rho]^2.$$
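The bound (3.1.4) yields an a priori stopping rule: to guarantee $\|x - x_n\|_{L_2} < \varepsilon$ it suffices that $(\lambda K)^n\rho < \varepsilon$, i.e. $n > \log(\varepsilon/\rho)/\log(\lambda K)$. A small sketch (ours, for illustration):

```python
import math

def iterations_needed(lam_K, rho, eps):
    """Smallest n with (lam_K)**n * rho < eps, from the bound (3.1.4).
    Requires the contraction condition 0 < lam_K < 1."""
    assert 0.0 < lam_K < 1.0 and rho > 0.0 and eps > 0.0
    if rho < eps:
        return 0
    n = math.ceil(math.log(eps / rho) / math.log(lam_K))
    while lam_K ** n * rho >= eps:   # guard against floating-point edge cases
        n += 1
    return n
```

Because the error decays geometrically, each additional decimal digit of accuracy costs the same fixed number of further iterations.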
3.1.3 Combined Error of Approximation and Numerical Integration

In this section we shall consider the error of approximation of a random solution $x(t;\omega)$ of (3.0.1) when the integral is evaluated numerically. We first state a definition given by Rall [1].

Definition 3.1.2  The operator $U$ on a Banach space $X$ into itself such that for some $x \in X$, $U(x) = x$, is said to pose an arithmetic fixed-point problem if the function $U(x)$ can be calculated to any desired accuracy by a finite number of arithmetic operations.

As before, we write Eq. (3.0.1) as a fixed-point problem,
$$x(t;\omega) = (Ux)(t;\omega) = h(t;\omega) + \int_0^t k(t,\tau;\omega)\,f(\tau, x(\tau;\omega))\,d\tau, \qquad t \ge 0. \qquad (3.1.5)$$
We shall consider the discrete approximation of (3.1.5). That is, we obtain a solution at each of the discrete points $0 = t_0 < t_1 < \cdots < t_n < \cdots < \infty$, where $t_i - t_{i-1} = r$, $i = 1, 2, \ldots$, and $t_n = t_0 + nr = nr$. For fixed $t = t_n$ in $R_+$ the interval from zero to $t$ is divided into $n$ subintervals, $0 = t_0 < t_1 < \cdots < t_n = t$. Note that as $r \to 0$, then for fixed $t$ such that $t = t_n = nr$, we must have $n \to \infty$. This discrete version is equivalent to writing the integral in (3.1.5) as a finite sum which approaches the true value of the integral as $r \to 0$ for fixed $t = t_n$. Thus we transform (3.1.5) into an arithmetic fixed-point problem.

For $i = 0, 1, 2, \ldots$ we use the following notation throughout this section:
$$x_i(\omega) = x(t_i;\omega), \quad h_i(\omega) = h(t_i;\omega), \quad k_{n,i}(\omega) = k(t_n, t_i;\omega), \quad f_i(x_i(\omega)) = f(t_i, x_i(\omega)),$$
and
$$\int_0^{t_n} k(t_n,\tau;\omega)\,f(\tau, x(\tau;\omega))\,d\tau = \sum_{i=0}^{n} W_{n,i}\,k_{n,i}(\omega)\,f_i(x_i(\omega)) - \delta^{(n)}(\omega),$$
where the $W_{n,i}$ are appropriate weights (such as those of the composite trapezoidal rule), and the error of approximation $\delta^{(n)}(\omega)$ can be made as small as desired by choosing $r = t_i - t_{i-1}$, $0 < t_i \le t_n = t$, appropriately. Then we may write the discrete version of (3.1.5) exactly as
$$x_n(\omega) = h_n(\omega) + \sum_{i=0}^{n} W_{n,i}\,k_{n,i}(\omega)\,f_i(x_i(\omega)) - \delta^{(n)}(\omega) = (Ux_n)(\omega).$$
However, if we ignore the error of approximating the integral by the sum, then for each $t = t_n$ we obtain an approximate value of $x_n(\omega)$, denoted by $\tilde{x}_n(\omega)$, where
$$\tilde{x}_n(\omega) = h_n(\omega) + \sum_{i=0}^{n} W_{n,i}\,k_{n,i}(\omega)\,f_i(\tilde{x}_i(\omega)). \qquad (3.1.6)$$
Let $F$ denote the operator so defined. Then
$$(F\tilde{x}_n)(\omega) = U(\tilde{x}_n)(\omega) + \delta^{(n)}(\omega).$$
Define the sequence of successive approximations to $x_n(\omega)$ for each $t = t_n$ by
$$\tilde{x}_n^{(0)}(\omega) = h_n(\omega) = x_n^{(0)}(\omega), \qquad \tilde{x}_n^{(m+1)}(\omega) = (U\tilde{x}_n^{(m)})(\omega) + \delta^{(n)}(\omega) = (F\tilde{x}_n^{(m)})(\omega), \qquad m \ge 0.$$
Suppose the set $S$ is given by
$$S = \{x(t;\omega) : x(t;\omega) \in D,\ \|x(t;\omega)\|_D \le \rho\},$$
as in Theorem 2.1.2. The operator $U$ is a contraction operator on $S$, and there exists a unique fixed point $x_n(\omega) = (Ux_n)(\omega)$ in $S$ at $t = t_n \ge 0$; that is, a unique random solution of (3.1.5) exists at each $t = t_n$. We assume that the error $\delta^{(n)}(\omega) \in L_2(\Omega, \mathscr{A}, \mathscr{P})$ for each $t_n$, and we define the norm
$$\|\delta^{(n)}(\omega)\| = \|\delta^{(n)}(\omega)\|_D$$
for $n$ (the number of subintervals) fixed. Also, from Theorem 2.1.2 we have that
$$\|f_n(x_n(\omega)) - f_n(y_n(\omega))\|_B \le \lambda\,\|x_n(\omega) - y_n(\omega)\|_D$$
for $x_n(\omega), y_n(\omega) \in S$ and $\lambda > 0$ a constant. If we have
$$\rho \ge \rho_1 + \bigl[\|\delta^{(n)}(\omega)\|/(1 - \lambda K)\bigr]$$
and
$$\|h_n(\omega)\|_D + K\|f_n(0)\|_B \le \rho_1(1 - \lambda K),$$
then the sequence $x_n^{(m)}(\omega) = x_m(t_n;\omega)$ defined in Section 3.1.2 is in the set
$$S_1 = \{x(t;\omega) : x(t;\omega) \in D,\ \|x(t_n;\omega)\|_D \le \rho_1\} \subset S.$$
This can be shown as follows:
$$x_n^{(0)}(\omega) = h(t_n;\omega) = h_n(\omega) \in S_1$$
by assumption. Then the successive approximations defined by $x_n^{(m)}(\omega) = (Ux_n^{(m-1)})(\omega)$ are in $S_1$, since
$$\|x_n^{(m)}(\omega)\|_D = \|(Ux_n^{(m-1)})(\omega)\|_D \le \|h_n(\omega)\|_D + \lambda K\,\|x_n^{(m-1)}(\omega)\|_D + K\|f_n(0)\|_B \le \rho_1(1 - \lambda K) + \rho_1\lambda K = \rho_1.$$
Hence all $x_n^{(m)}(\omega) \in S_1 \subset S$, and they converge to the unique random solution $x_n(\omega) \in S_1$ as $m \to \infty$, as shown in the previous section.

We now show that the sequence $\tilde{x}_n^{(m)}(\omega)$ of approximations to $\tilde{x}_n(\omega)$ is in $S$ for all $m \ge 0$ under the given conditions on $h_n(\omega)$ and $f$. We have
$$\tilde{x}_n^{(0)}(\omega) = h_n(\omega) \in S_1 \subset S.$$
Assume that $\tilde{x}_n^{(1)}(\omega), \ldots, \tilde{x}_n^{(m)}(\omega)$ are in $S$. We wish to show that $\tilde{x}_n^{(m+1)}(\omega) \in S$, and hence that all $\tilde{x}_n^{(m)}(\omega)$ are in $S$. We have
$$\|\tilde{x}_n^{(m+1)}(\omega) - x_n^{(m+1)}(\omega)\|_D = \|(U\tilde{x}_n^{(m)})(\omega) + \delta^{(n)}(\omega) - (Ux_n^{(m)})(\omega)\|_D \le \lambda K\,\|\tilde{x}_n^{(m)}(\omega) - x_n^{(m)}(\omega)\|_D + \|\delta^{(n)}(\omega)\|,$$
since $U$ is a contraction operator on $S$. Continuing in this manner, we obtain
$$\|\tilde{x}_n^{(m+1)}(\omega) - x_n^{(m+1)}(\omega)\|_D \le (\lambda K)^2\|\tilde{x}_n^{(m-1)}(\omega) - x_n^{(m-1)}(\omega)\|_D + (\lambda K)\|\delta^{(n)}(\omega)\| + \|\delta^{(n)}(\omega)\|$$
$$\le \cdots \le (\lambda K)^{m+1}\|\tilde{x}_n^{(0)}(\omega) - x_n^{(0)}(\omega)\|_D + \|\delta^{(n)}(\omega)\|\bigl[1 + \lambda K + \cdots + (\lambda K)^m\bigr]$$
$$= (\lambda K)^{m+1}\|h_n(\omega) - h_n(\omega)\|_D + \|\delta^{(n)}(\omega)\|\bigl[1 + \lambda K + \cdots + (\lambda K)^m\bigr] = \|\delta^{(n)}(\omega)\|\,\frac{1 - (\lambda K)^{m+1}}{1 - \lambda K}. \qquad (3.1.7)$$
Hence, for an arbitrary $m \ge 0$,
$$\|\tilde{x}_n^{(m+1)}(\omega)\|_D \le \|\tilde{x}_n^{(m+1)}(\omega) - x_n^{(m+1)}(\omega)\|_D + \|x_n^{(m+1)}(\omega)\|_D \le \|\delta^{(n)}(\omega)\|\,\frac{1 - (\lambda K)^{m+1}}{1 - \lambda K} + \rho_1 \le \frac{\|\delta^{(n)}(\omega)\|}{1 - \lambda K} + \rho_1 \le \rho,$$
since $\lambda K < 1$. Hence for all $m$, $\tilde{x}_n^{(m)}(\omega) \in S$. We shall now give the following theorem.

Theorem 3.1.4  If
$$\|h_n(\omega)\|_D + K\|f_n(0)\|_B \le \rho_1(1 - \lambda K), \qquad \lambda K < 1,$$
and
$$\rho \ge \rho_1 + \bigl[\|\delta^{(n)}(\omega)\|/(1 - \lambda K)\bigr],$$
then the sequence of successive approximations defined by
$$\tilde{x}_n^{(0)}(\omega) = h_n(\omega) = x_n^{(0)}(\omega), \qquad \tilde{x}_n^{(m+1)}(\omega) = (U\tilde{x}_n^{(m)})(\omega) + \delta^{(n)}(\omega) = (F\tilde{x}_n^{(m)})(\omega), \qquad m \ge 0,$$
converges in the Banach space $D$ to within an error of $\|\delta^{(n)}(\omega)\|/(1 - \lambda K)$ of the true solution $x_n(\omega)$ at $t = t_n = nr$. That is,
$$\|\tilde{x}_n^{(m+1)}(\omega) - x_n(\omega)\|_D \le (\lambda K)^{m+1}\rho + \bigl[\|\delta^{(n)}(\omega)\|/(1 - \lambda K)\bigr]\bigl[1 - (\lambda K)^{m+1}\bigr] \to \frac{\|\delta^{(n)}(\omega)\|}{1 - \lambda K}$$
as $m \to \infty$. Also, since we can choose $r$ as small as desired to make $\|\delta^{(n)}(\omega)\|$ small, this error can be made to go to zero as $r \to 0$ and $n \to \infty$ in such a way that $t = nr$ remains fixed.

PROOF
From inequality (3.1.7) we have
$$\|\tilde{x}_n^{(m+1)}(\omega) - x_n^{(m+1)}(\omega)\|_D \le \|\delta^{(n)}(\omega)\|\,\frac{1 - (\lambda K)^{m+1}}{1 - \lambda K}.$$
From Section 3.1.2 and the argument that $x_n^{(m)}(\omega) \in S_1 \subset S$, we also have that
$$\|x_n^{(m+1)}(\omega) - x_n(\omega)\|_D \le (\lambda K)^{m+1}\rho.$$
Hence
$$\|\tilde{x}_n^{(m+1)}(\omega) - x_n(\omega)\|_D \le \|\tilde{x}_n^{(m+1)}(\omega) - x_n^{(m+1)}(\omega)\|_D + \|x_n^{(m+1)}(\omega) - x_n(\omega)\|_D \le \|\delta^{(n)}(\omega)\|\,\frac{1 - (\lambda K)^{m+1}}{1 - \lambda K} + (\lambda K)^{m+1}\rho.$$
Thus as $m \to \infty$, $\|\tilde{x}_n^{(m+1)}(\omega) - x_n(\omega)\|_D$ becomes smaller than $\|\delta^{(n)}(\omega)\|/(1 - \lambda K)$.
Since we may choose $r = t_i - t_{i-1}$ so as to calculate the integral as accurately as desired, for every $\varepsilon_1 > 0$ we may choose $r$ so small ($n$ large) that $\|\delta^{(n)}(\omega)\|/(1 - \lambda K) < \varepsilon_1$. Also, for every $\varepsilon_2 > 0$ we may choose $m$ so large that $(\lambda K)^{m+1}\rho < \varepsilon_2$. Thus for every $\varepsilon_1 > 0$ and $\varepsilon_2 > 0$ we may choose $r$ so small ($n$ large) and $m$ so large that
$$\|\tilde{x}_n^{(m+1)}(\omega) - x_n(\omega)\|_D < \varepsilon_1 + \varepsilon_2 = \varepsilon.$$

As an example, suppose that we use a quadrature formula such as the composite trapezoidal rule (Kopal [1], pp. 397-410). The error of approximating the integral at each $\omega \in \Omega$ is then of the order of $r^3$ on each subinterval. Hence, using $r$ as small as possible gives a good approximation for sufficiently large $m$. In fact, if the interval $[0, t]$ is divided into $n$ subintervals such that $r = t/n$, then the error of integration of a function $g(t;\omega)$ by the composite trapezoidal rule is of the form
$$\delta^{(n)}(\omega) = -\tfrac{1}{12}\,nr^3 g''(\xi;\omega),$$
where $\xi$ is some point in $[0, t]$ and the double prime indicates the second derivative. Hence we must assume here that $k(t,\tau;\omega)\,f(\tau, x(\tau;\omega))$ has a second derivative with respect to $\tau$, and then as $r \to 0$ ($n \to \infty$), $\|\delta^{(n)}(\omega)\| \to 0$.
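This error behavior is easy to observe numerically: with $r = t/n$ the per-panel error is $O(r^3)$ and the total over $n$ panels is $O(nr^3) = O(r^2)$, so halving $r$ should cut the total error by roughly a factor of four. A minimal check, with the integrand $e^x$ on $[0,1]$ as our own choice of test function:

```python
import math

def trapezoid(g, a, b, n):
    """Composite trapezoidal rule with n subintervals of width r = (b - a)/n."""
    r = (b - a) / n
    return r * (0.5 * (g(a) + g(b)) + sum(g(a + i * r) for i in range(1, n)))

exact = math.e - 1.0                      # integral of e^x over [0, 1]
err_50 = abs(trapezoid(math.exp, 0.0, 1.0, 50) - exact)
err_100 = abs(trapezoid(math.exp, 0.0, 1.0, 100) - exact)
# The ratio err_50 / err_100 is close to 4, confirming the O(r^2) total error.
```

In the combined scheme of Theorem 3.1.4, this quadrature error is exactly the quantity $\|\delta^{(n)}(\omega)\|$ that the final error bound inherits.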
3.2 A New Stochastic Formulation of a Population Growth Problem
In mathematical models for biological processes it is usually the case that a complex biological system is replaced by a simpler, idealized, hypothetical one (Bartlett [1, 2], Moran [1], Chiang [1], Bharucha-Reid [5]). Even with many simplifying assumptions on the actual biological system, a very complicated mathematical or statistical model may still result. In obtaining a mathematical model of a biological system, either the random or stochastic changes in the system may be ignored and a deterministic model reflecting "averages" of the random phenomena used, or the random variations may be accounted for, resulting in a stochastic model. The latter case is more realistic than the former and should be preferred, even though in most situations stochastic models are much more difficult to work with mathematically.

In formulating a mathematical model for the growth of a biological population consisting of a single species, such complications as age structure and random changes must be dealt with. In much of the classical theory of population growth in which the age structure is considered, the changes produced in the population growth rate by the phenomena of birth, aging,
and death are assumed to be deterministic (Feller [1], Kendall [1], Bartlett [1, 2], Moran [1]). In this deterministic theory the expected birth rate satisfies an integral equation which Feller [1] calls the renewal equation. This integral equation involves the expected reproduction rate of female individuals of a given age and the expected rate of reproduction of females at a given time by members of the parent population.

In this section we shall present a formulation of a population growth model which results in a stochastic integral equation that is similar in form to the deterministic integral equation just mentioned. However, the solution of the stochastic integral equation is a stochastic process giving the birth rate, instead of the expected birth rate, in the population. We will also show that the expected value of the birth rate process satisfies the deterministic integral equation previously mentioned. The stochastic integral equation obtained is of the Volterra type given by Eq. (3.0.1). The theory of random or stochastic integral equations of this type given in Chapter II and Section 3.1 permits one to show existence and uniqueness and to obtain an approximation to a realization of a random solution x(t; ω) of the process (also see Anderson [1], Tsokos [4], Padgett and Tsokos [5, 6, 11], Milton and Tsokos [2, 3], Hardiman and Tsokos [2], A. N. V. Rao and Tsokos [2]). This theory allows one to obtain the existence and uniqueness of a random solution x(t; ω) without specifying the exact distributions of the stochastic processes at each time t. That is, the stochastic processes which constitute Eq. (3.0.1) may be very general processes. In Section 3.2.1 we shall discuss the deterministic model, and we give the stochastic formulation of the population growth problem in Section 3.2.2.
We shall show that it completely specifies the state of the population at each time t and indicate the connection between the present formulation and the deterministic formulations mentioned earlier. In Section 3.2.3 we will show that the stochastic integral equation obtained in Section 3.2.2 has a unique random solution under certain conditions. Finally, we will give an example to indicate the fruitfulness of the stochastic integral equation formulation.
3.2.1 The Deterministic Model

Consider the effect of age structure on the growth of a population consisting of a single species, in the absence of external influences such as emigration or immigration, dependence on the density of the population, and disease. In the classical theory the size of the population is treated as a continuous variable, and it is supposed that the modifications produced in the population by the phenomena of birth, aging, and death are deterministic (Kendall [1]).
III SOLUTION OF THE RANDOM VOLTERRA INTEGRAL EQUATION
The variables such as the birth rate and the number of individuals in the population are usually interpreted only for the female or reproducing portion of the biological population, the possibility of an unequal sex ratio being ignored. Let λ(t) dt be the expected number of females born during time (t, t + dt) to females aged t, as in Feller [1], Bartlett [1, 2], and Kendall [1]. Also, b(t) dt is the expected number of female births occurring in the time interval (t, t + dt) (the expected birth rate), and n(s, t) ds is the expected number of females in the population at time t in the age group (s, s + ds). Then the total number of females in the population, that is, the size of the female population at time t, is the quantity

n(t) = ∫₀^t n(s, t) ds,    (3.2.1)

where l(t) is the probability that an individual born at time zero is alive at time t > 0 and is given by

l(t) = exp[−∫₀^t μ(s) ds],    (3.2.2)

where μ(s) is the death rate of individuals aged s, 0 < s < t. Then we have that (Kendall [1])

n(s, t) = l(s)b(t − s)    (3.2.3)

and that

b(t) = g(t) + ∫₀^t l(s)λ(s)b(t − s) ds,    (3.2.4)

where

g(t) = l(t) ∫₀^∞ λ(t + u)n(u, 0) du.    (3.2.5)

The function g(t) is completely specified if the age structure and population size are known at the time observation of the population begins, that is, at epoch t = 0. The integral equation (3.2.4) was studied in detail by Feller [1]. He showed that under certain conditions Eq. (3.2.4) possesses a unique solution, and he also investigated the asymptotic behavior of the solution. He suggested the method of successive approximations in order to approximate the expected birth rate b(t) at each time t > 0, after presenting a treatment of the equation by Lotka [1]. The paper by Feller [1] contains an extensive list of the research papers concerning Eq. (3.2.4) up to 1941.
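As an aside, the structure of the renewal equation (3.2.4) is clarified by a discretization. The sketch below is only an illustration with invented rates: taking λ(s) ≡ 1, a constant death rate μ, and g(t) = e^{−μt}, one can verify directly that b(t) = e^{−(μ−1)t} satisfies (3.2.4), which gives a check on the scheme.

```python
import math

def solve_renewal(g, lam, mu, T, n):
    """Approximate b(t) on [0, T] from the renewal equation
        b(t) = g(t) + int_0^t e^{-mu*s} lam(s) b(t - s) ds,
    cf. Eq. (3.2.4) with l(s) = exp(-mu*s).  The convolution is discretized
    with the composite trapezoidal rule; the s = 0 endpoint involves b(t_i)
    itself, so each step solves a scalar linear equation."""
    r = T / n
    K = [math.exp(-mu * j * r) * lam(j * r) for j in range(n + 1)]
    b = [g(0.0)]
    for i in range(1, n + 1):
        acc = 0.5 * K[i] * b[0] + sum(K[j] * b[i - j] for j in range(1, i))
        b.append((g(i * r) + r * acc) / (1.0 - 0.5 * r * K[0]))
    return b

# Invented illustration: lam(s) = 1, mu = 2.1, g(t) = e^{-mu t}; then
# b(t) = e^{-(mu - 1) t} solves (3.2.4) exactly, so the output can be checked.
mu = 2.1
b = solve_renewal(lambda t: math.exp(-mu * t), lambda s: 1.0, mu, 5.0, 500)
print(b[100], math.exp(-1.1))   # grid value at t = 1 versus the exact solution
```

Since λ < μ here, the computed birth rate decays toward zero, the subcritical behavior discussed later in Section 3.2.3.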
3.2.2 The Stochastic Model

It should be evident that the deterministic theory of population growth does not provide an adequate and realistic description of the processes involved, because it does not take into account the random fluctuations which occur as the process develops. Using the theory of stochastic integral equations given in Chapter II, we will formulate stochastic versions of Eqs. (3.2.3)–(3.2.5). It will not be required to specify the exact behavior of the processes, because of the very general theory involved. However, we will show that the stochastic formulation is related to the "expected value" (deterministic) formulation under the usual kind of assumptions that are made in stochastic population models.

Let m(t; ω) be a stochastic process which enumerates the number of offspring (female) produced by individuals (females) aged t at epoch t > 0. Then m(t; ω) dt is the number of offspring produced by individuals aged t in the interval (t, t + dt). It is clear that m should be treated as a stochastic process, since at any epoch t the number of offspring produced by individuals aged t is random. We will consider m to be a continuous function of t for almost all ω, which approximates the number of offspring at each t ≥ 0. That is, for almost all ω we assume that the stochastic process m(t; ω) has continuous sample functions. Hence for each t ≥ 0 the number of female births at that epoch in the population is a random variable b(t; ω). That is to say, b(t; ω) is a stochastic process, and the number of individuals aged s at epoch t in the population is given by
n(s, t; ω) = l(s)b(t − s; ω),    (3.2.6)

and the process b(t; ω) must satisfy the stochastic integral equation of the type (3.0.1) given by

b(t; ω) = g(t; ω) + ∫₀^t l(s)m(s; ω)b(t − s; ω) ds,    (3.2.7)

where

g(t; ω) = l(t) ∫₀^∞ m(t + u; ω)n(u, 0; ω) du.    (3.2.8)

The process g(t; ω) is completely specified if the distribution of the population size at time zero and the behavior of the process m(t; ω) are known. Thus the size of the population at time t ≥ 0 is a random variable and is given by the stochastic version of Eq. (3.2.1),

n(t; ω) = ∫₀^t n(s, t; ω) ds = ∫₀^t l(s)b(t − s; ω) ds.    (3.2.9)
Before investigating the stochastic integral equation with respect to the conditions which guarantee a unique random solution, we will consider its relation to the classical stochastic theory of population growth. We assume that the process N(t; ω) is a discrete-valued stochastic process giving the number of individuals in the population at time t. Assume the following:
(i) The subpopulations which are generated by two coexisting individuals develop independently of each other.
(ii) An individual (female) aged x existing at time t has a chance λ(x) dt + o(dt) of producing a single offspring (female) during any time interval of length dt, where λ(x) is the same function for all individuals in the population.

Let dN(x, t; ω) denote the number of individuals in the small age interval (x, x + dx) at time t, and

E[dN(x, t; ω)] = a(x, t) dx + o(dx).

This is also the variance of dN(x, t; ω) to the first order. [That is, dN(x, t; ω) is of Poisson character.] It is assumed that

dN(x, t; ω) = 0  with probability  1 − a(x, t) dx + o(dx)
            = 1  with probability  a(x, t) dx + o(dx)
            ≥ 2  with probability  o(dx).

Also, let dM(x; ω) denote the number of offspring produced by individuals aged x in the interval (x, x + dx). It is assumed that

dM(x; ω) = 0  with probability  1 − λ(x) dx + o(dx)
         = 1  with probability  λ(x) dx + o(dx)
         ≥ 2  with probability  o(dx),

where

E[dM(x; ω)] = λ(x) dx + o(dx).
If λ(x) = λ, a constant, then dM(x; ω) has the same probability distribution at each x. If the process dB(t; ω) denotes the number of new births (integer-valued) in the population in the time interval (t, t + dt), then dN(x, t; ω) = l(x) dB(t − x; ω). Assume that the death rate μ(s) is the same for all ages s.
Then from Eq. (3.2.2) we have l(t) = e^{−μt}. Hence

dN(x, t; ω) = e^{−μx} dB(t − x; ω).    (3.2.10)

The joint probability distribution of dN(x, t; ω) and dM(x; ω) is given to the first order by

                                    dN(x, t; ω)
                        0                             1                          Total
dM(x; ω)   0   [1 − λ(x) dx][1 − a(x, t) dx]   [1 − λ(x) dx] · a(x, t) dx   1 − λ(x) dx
           1   λ(x) dx · [1 − a(x, t) dx]      λ(x) dx · a(x, t) dx          λ(x) dx
Total          1 − a(x, t) dx                  a(x, t) dx                    1

Thus

E[dM(x; ω) dN(x, t; ω)] = λ(x) dx · a(x, t) dx,

and hence cov[dM(x; ω), dN(x, t; ω)] = 0; that is, dM(x; ω) and dN(x, t; ω) are uncorrelated. Then

cov[dM(x; ω)l(x), dB(t − x; ω)] = cov[dM(x; ω)e^{−μx}, dN(x, t; ω)e^{μx}] = 0
from Eq. (3.2.10) and the given covariance. For the continuous case, therefore, we make the similar assumption that the processes m(x; ω) and n(x, t; ω) are uncorrelated. Hence if we take the expectation of the stochastic integral equation (3.2.7), we obtain

E[b(t; ω)] = E[g(t; ω)] + ∫₀^t E[l(s)m(s; ω)b(t − s; ω)] ds,

or

b(t) = g(t) + ∫₀^t l(s)E[m(s; ω)] · E[b(t − s; ω)] ds
     = g(t) + ∫₀^t l(s)λ(s)b(t − s) ds,    (3.2.11)

since

E[g(t; ω)] = l(t) ∫₀^∞ E[m(t + u; ω)n(u, 0; ω)] du = l(t) ∫₀^∞ λ(t + u)n(u, 0) du = g(t).
Equation (3.2.11) is the deterministic integral equation (3.2.4) given by Feller [1], Bartlett [1, 2], Moran [1], and Kendall [1]. We make the assumption on the correlation of m and n in order to obtain (3.2.11). This assumption is not necessary in the general theory of stochastic integral equations of the form (3.0.1) as given in Chapter II.
3.2.3 Existence and Uniqueness of a Random Solution

As Feller [1] did for the deterministic integral equation (3.2.4), we will now obtain conditions under which the equation (3.2.7) possesses a unique random solution b(t; ω), the population birth rate, bounded for t ≥ 0. By changing the variable of integration we may rewrite the stochastic integral equation (3.2.7) as

b(t; ω) = g(t; ω) + ∫₀^t l(t − s)m(t − s; ω)b(s; ω) ds.    (3.2.12)

Then the stochastic kernel in Eq. (3.2.12) is, for 0 ≤ τ ≤ t < ∞,

k(t, τ; ω) = l(t − τ)m(t − τ; ω).    (3.2.13)

If m(t; ω) is assumed to be bounded for almost all ω by some positive constant A, that is, if there is an upper bound on the number of offspring produced at epoch t by females aged t ≥ 0, then since l(t) = e^{−μt} for a constant death rate μ, we have from (3.2.13)

|||k(t, τ; ω)||| = P-ess sup_ω |l(t − τ)m(t − τ; ω)| ≤ A|l(t − τ)| = Ae^{−μ(t−τ)}.

We will now state and prove the following theorem, which gives conditions that guarantee the existence of a unique random solution of Eq. (3.2.12).

Theorem 3.2.2 Consider the stochastic integral equation (3.2.12) subject to the following conditions:

(i) |||l(t − τ)m(t − τ; ω)||| ≤ Ae^{−μ(t−τ)} for 0 ≤ τ ≤ t < ∞, where μ and A are positive constants.
(ii) g(t; ω) is in the space C.

Then there exists a unique random solution b(t; ω) ∈ C such that ||b(t; ω)||_C ≤ ρ for some ρ > 0, provided that

||g(t; ω)||_C ≤ ρ[1 − (A*/μ)]  and  A* < μ,

where A* is the infimum of the set of all constants A satisfying (i).
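Numerically the hypotheses above amount to simple arithmetic. In the sketch below, A* = 2 matches the Uniform(0, 2) offspring bound used later in Section 3.3.3 and μ = 2.1 the death rate there, while the cap N₀ on the initial population size is invented for the example.

```python
# Checking the existence conditions for invented data: A* = 2.0 (offspring
# bound), mu = 2.1 (death rate), N0 = 60.0 (assumed cap on initial population).
A_star, mu, N0 = 2.0, 2.1, 60.0
assert A_star < mu                        # the subcriticality condition
g_norm_bound = A_star * N0                # since ||g(t; omega)||_C <= A* N0
rho = g_norm_bound / (1.0 - A_star / mu)  # smallest rho with ||g||_C <= rho (1 - A*/mu)
print(rho)
```

Any ρ at least this large satisfies the bound ||g(t; ω)||_C ≤ ρ[1 − (A*/μ)]; note how close A*/μ is to one here, which makes ρ large.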
PROOF It has been shown in Chapter II that under conditions similar to (i) and (ii) there exists a unique random solution of Eq. (3.0.1) if the Banach spaces involved are admissible with respect to the integral operator

(Tx)(t; ω) = ∫₀^t k(t, τ; ω)x(τ; ω) dτ

and if the function f in Eq. (3.0.1) satisfies a Lipschitz condition on the set

S = {x(t; ω): x(t; ω) ∈ C, ||x(t; ω)||_C ≤ ρ}

for some constant ρ > 0. In Eq. (3.2.12) f is the identity function with respect to x(t; ω), and hence

|f(t, x) − f(t, y)| = |x − y|    (3.2.14)

for x, y ∈ S. That is, f satisfies a Lipschitz condition on S with Lipschitz constant equal to one. Therefore if we show that the pair of spaces (C, C) is admissible with respect to the operator T given by

(Tx)(t; ω) = ∫₀^t l(t − τ)m(t − τ; ω)x(τ; ω) dτ    (3.2.15)

for x(t; ω) ∈ S, and verify that g(t; ω) given in Section 3.2.2 is in the space C, then the existence of a unique random solution of Eq. (3.2.12) follows from the result of Theorem 2.1.2 under Conditions (i) and (ii) of the theorem. Taking the norm in L₂(Ω, 𝒜, P) of both sides of Eq. (3.2.15), we have

||(Tx)(t; ω)||_{L₂(Ω,𝒜,P)} ≤ ∫₀^t ||l(t − τ)m(t − τ; ω)x(τ; ω)||_{L₂(Ω,𝒜,P)} dτ
  ≤ ∫₀^t |||l(t − τ)m(t − τ; ω)||| · ||x(τ; ω)||_{L₂(Ω,𝒜,P)} dτ
  ≤ sup_{t≥0} {||x(t; ω)||_{L₂(Ω,𝒜,P)}} ∫₀^t |||l(t − τ)m(t − τ; ω)||| dτ
  ≤ ||x(t; ω)||_C · A ∫₀^t e^{−μ(t−τ)} dτ

by definition of the norm in C and Condition (i). Thus

||(Tx)(t; ω)||_{L₂(Ω,𝒜,P)} ≤ ||x(t; ω)||_C (A/μ)(1 − e^{−μt}),  t ≥ 0,

since

∫₀^t e^{−μ(t−τ)} dτ = (1/μ)(1 − e^{−μt}).
Therefore (Tx)(t; ω) is bounded in mean square and is in C by definition. Hence the pair (C, C) is admissible with respect to the operator T given by Eq. (3.2.15). Since

||(Tx)(t; ω)||_C ≤ ||x(t; ω)||_C (A/μ) sup_{t≥0} (1 − e^{−μt}) = (A/μ)||x(t; ω)||_C,

we see that the norm of T is the quantity A*/μ, where A* is the least upper bound of m(t; ω) for almost all ω.

Now we must verify that g(t; ω) is in C. Note that ∫₀^∞ n(u, 0; ω) du is the number of female individuals in the population at time zero, that is, the size of the initial population. We may assume without loss of generality that there is a finite number N₀ > 0 such that

∫₀^∞ n(u, 0; ω) du < N₀  for almost all ω ∈ Ω.

Then

g(t; ω) = l(t) ∫₀^∞ m(t + u; ω)n(u, 0; ω) du < e^{−μt}A*N₀ < ∞  for almost all ω

by the assumption that m(t; ω) is bounded by A* and l(t) = e^{−μt}. Therefore

∫_Ω |g(t; ω)|² dP(ω) ≤ ∫_Ω |e^{−μt}A*N₀|² dP(ω) = (e^{−μt}A*N₀)² < ∞,  t ∈ R₊,

and so by definition g(t; ω) is in the space L₂(Ω, 𝒜, P). Also,

||g(t; ω)||_{L₂(Ω,𝒜,P)} = {∫_Ω |g(t; ω)|² dP(ω)}^{1/2} ≤ e^{−μt}A*N₀,  t ∈ R₊,

which means that g(t; ω) is bounded in L₂(Ω, 𝒜, P). Since m(t; ω) is a continuous function of t in L₂(Ω, 𝒜, P), we have

||g(t + s; ω) − g(t; ω)||_{L₂(Ω,𝒜,P)}
  ≤ ∫₀^∞ ||l(t + s)m(t + u + s; ω)n(u, 0; ω) − l(t)m(t + u; ω)n(u, 0; ω)||_{L₂(Ω,𝒜,P)} du
  ≤ ∫₀^∞ ||n(u, 0; ω)||_{L₂(Ω,𝒜,P)} · |||l(t + s)m(t + u + s; ω) − l(t)m(t + u; ω)||| du.
But as s → 0,

e^{−μ(t+s)} → e^{−μt},

and since n(u, 0; ω) is in L₂(Ω, 𝒜, P) for u ≥ 0, we have that the integrand approaches zero as s → 0. Hence g(t; ω) is continuous in mean square. We then have that g(t; ω) is in C by definition. Therefore there exists a unique random solution b(t; ω) ∈ S of the stochastic integral equation (3.2.12), provided

A* < μ  and  ||g(t; ω)||_C ≤ ρ[1 − (A*/μ)],

since we have

||g(t; ω)||_C = sup_{t≥0} ||g(t; ω)||_{L₂(Ω,𝒜,P)} ≤ A*N₀*,

where N₀* is the infimum of {N₀}, completing the proof.

The assumption that A* < μ means that the upper bound on the number of offspring of individuals of a given age is less than the death rate. This may be interpreted as meaning that the population will eventually die out as t → ∞. Also, the effect of g(t; ω) tends to be negligible as t → ∞. In Section 3.3.3 we shall give a numerical example for a hypothetical population model such as that given here.

3.3 Method of Stochastic Approximation
We shall now approach the problem of obtaining an approximation to the random solution of the stochastic integral equation (3.0.1) by applying the theory of stochastic approximation. We shall first discuss the technique of stochastic approximation that will be used and present a general theorem due to Burkholder [1] which describes the convergence of the approximations to the true solution. We shall then obtain the conditions which the functions in Eq. (3.0.1) must satisfy in order for the method of stochastic approximation to be applicable.
3.3.1 A Stochastic Approximation Procedure

Stochastic approximation was first introduced in 1951 in a paper by Robbins and Monro [1], in which they considered the problem of approximating the root of the equation M(x) = α for an unknown regression function M(x), where α is a known constant. Their results were generalized and extended by Wolfowitz [1], Blum [1, 2], Kallianpur [1], Kiefer and Wolfowitz [1], Burkholder [1], and
others (see Wasan [1]). Morozan [4] has applied stochastic approximation techniques to the theory of Lyapunov functions. Robbins and Monro showed that under certain conditions on M(x), where

M(x) = E[Y(x(ω)) | x(ω) = x],

the sequence defined by

x_{n+1}(ω) = x_n(ω) + a_n[α − Y(x_n(ω))],  n ≥ 1,    (3.3.1)

where x₁(ω) is an arbitrary real random variable, converges to the root of M(x) = α in mean square and in probability. We shall discuss a theorem of Burkholder [1] which requires somewhat weaker conditions than those of Robbins and Monro but results in the sequence {x_n(ω)} converging with probability one.

It is assumed that a value of x(ω) can be fixed, x₁ say, and the value Y(x₁) can be found or observed. Then the next value x₂ is found from (3.3.1), and so on. Under the conditions to be given, the sequence (3.3.1) converges with probability one to a real number θ such that M(θ) = 0 (α = 0); that is, θ is a root of M(x) = 0.

Let M(·) be a function from R (the real numbers) into R. For each x ∈ R let Y(x) be a random variable with probability distribution function G(·|x) such that E[Y(x)] = M(x). Let {a_n} be a sequence of positive numbers, and let x₁(ω) be a random variable. If n is a positive integer, let

x_{n+1}(ω) = x_n(ω) − a_n y_n(ω),

where y_n(ω) is a random variable with conditional distribution function G(·|x_n), given x₁, ..., x_n, y₁, ..., y_{n−1}. The sequence of random variables {x_n(ω)} so defined is a stochastic approximation process which Burkholder [1] refers to as type A₀. We denote by V(x) the function V(x) = var[Y(x)] for x ∈ R. We shall now state the following theorem.

Theorem 3.3.1 (Burkholder [1]) Suppose {x_n(ω)} is a stochastic approximation process of type A₀ and θ is a real number such that the following conditions hold:

(i) For every ε > 0, if |x − θ| > ε, then (x − θ)M(x) > 0.
(ii) sup_x {|M(x)|/(1 + |x|)} < ∞.
(iii) sup_x V(x) < ∞.
(iv) If 0 < δ₁ < δ₂ < ∞, then inf_{δ₁≤|x−θ|≤δ₂} |M(x)| > 0.
(v) Σ_{n=1}^∞ a_n = ∞ and Σ_{n=1}^∞ a_n² < ∞.
(vi) M(x) and V(x) are Borel measurable.

Then

P{ω: lim_{n→∞} x_n(ω) = θ} = 1.
The conditions in Theorem 3.3.1 seem to be more general than those given by Blum [1], although the proofs are similar. In the next section this theorem will be applied to the stochastic integral equation (3.0.1).

3.3.2 Solution of Eq. (3.0.1) by Stochastic Approximation

We have that for each t and τ satisfying 0 ≤ τ ≤ t < ∞ the variances exist for h(t; ω), k(t, τ; ω), and x(t; ω), since for each t ∈ R₊, h(t; ω) and x(t; ω) are in L₂(Ω, 𝒜, P), and for each t and τ, 0 ≤ τ ≤ t < ∞, k(t, τ; ω) is in L₂(Ω, 𝒜, P). Again we assume that the distribution function of h(t; ω) is known for each t ∈ R₊ or that a value of h(t; ω) can be observed for each t. It is also assumed that x(t; ω), h(t; ω), and k(t, τ; ω) are mutually independent, real-valued random variables at each t and τ, 0 ≤ τ ≤ t < ∞. As in Section 3.1, we write

(Ux)(t; ω) = h(t; ω) + ∫₀^t k(t, τ; ω)f(τ, x(τ; ω)) dτ

for t ∈ R₊. By Theorem 2.1.2 there exists a unique random solution x(t; ω) of (3.0.1), that is, (Ux)(t; ω) = x(t; ω). Let

(Yx)(t; ω) = x(t; ω) − (Ux)(t; ω)

for x a continuous, real-valued function from R₊ into L₂(Ω, 𝒜, P) contained in the set S of Theorem 2.1.2. For fixed t ∈ R₊ define

M[x(t)] = E[(Yx)(t; ω) | x(t; ω) = x(t), x(τ; ω) = θ(τ) for τ < t]
        = E[x(t; ω) − (Ux)(t; ω) | x(t; ω) = x(t), x(τ; ω) = θ(τ) for τ < t],

where θ(τ) is a realization of the unique random solution of Eq. (3.0.1). That is, we have already obtained values θ(τ) of the unique random solution for τ < t. Hence we now wish to find the value θ(t) such that

M[θ(t)] = 0.
That is, a value of the unique random solution is now to be obtained at t ∈ R₊. Suppose that at the fixed t ∈ R₊ we choose

x₁(t; ω) = h(t; ω),    x_{n+1}(t; ω) = x_n(t; ω) − a_n y_n(t; ω),  n ≥ 1,    (3.3.2)

where y_n(t; ω) is the random variable (Yx_n)(t; ω). We apply Theorem 3.3.1 to obtain conditions under which the sequence {x_n(t; ω)} defined by (3.3.2) converges to θ(t) with probability one. We have

M[x(t)] = E[x(t; ω) − (Ux)(t; ω) | x(t; ω) = x(t), x(τ; ω) = θ(τ) if τ < t]
        = E[x(t; ω) − h(t; ω) − ∫₀^t k(t, τ; ω)f(τ, x(τ; ω)) dτ | x(t; ω) = x(t), x(τ; ω) = θ(τ) if τ < t]
        = x(t) − μ_h(t) − ∫₀^t μ_k(t, τ)f(τ, θ(τ)) dτ,    (3.3.3)

where μ_h(t) and μ_k(t, τ) denote the means of h(t; ω) and k(t, τ; ω), which exist by assumption.

We will now show that Condition (i) of Theorem 3.3.1 holds at fixed t ∈ R₊. Let x(t) < θ(t). Then from (3.3.3) we have

M[x(t)] = x(t) − μ_h(t) − ∫₀^t μ_k(t, τ)f(τ, θ(τ)) dτ
        < θ(t) − μ_h(t) − ∫₀^t μ_k(t, τ)f(τ, θ(τ)) dτ = 0,

since θ(t) is a value of the unique random solution of (3.0.1) at t ∈ R₊. Likewise, if x(t) > θ(t), then

M[x(t)] = x(t) − μ_h(t) − ∫₀^t μ_k(t, τ)f(τ, θ(τ)) dτ
        > θ(t) − μ_h(t) − ∫₀^t μ_k(t, τ)f(τ, θ(τ)) dτ = 0.

Hence Condition (i) of Burkholder's theorem holds.
We now must show that Condition (ii) of the theorem is satisfied; that is,

sup_{x(t)} {|M[x(t)]|/[1 + |x(t)|]} < ∞.

By definition, we have that

M[x(t)] = x(t) − μ_h(t) − ∫₀^t μ_k(t, τ)E[f(τ, x(τ; ω)) | x(t; ω) = x(t), x(τ; ω) = θ(τ) if τ < t] dτ.

But by assumption the means of k(t, τ; ω), h(t; ω), and f(τ, x(τ; ω)) exist, and hence

|M[x(t)]|/[1 + |x(t)|]
  ≤ |x(t)|/[1 + |x(t)|] + {|μ_h(t)| + ∫₀^t |μ_k(t, τ)E[f(τ, x(τ; ω)) | x(t; ω) = x(t), x(τ; ω) = θ(τ) if τ < t]| dτ}/[1 + |x(t)|]
  ≤ 1 + |μ_h(t)| + ∫₀^t |μ_k(t, τ)E[f(τ, x(τ; ω)) | x(t; ω) = x(t), x(τ; ω) = θ(τ) if τ < t]| dτ < ∞

for all x(t), since |x(t)|/[1 + |x(t)|] ≤ 1 and 1/[1 + |x(t)|] ≤ 1. Thus

sup_{x(t)} {|M[x(t)]|/[1 + |x(t)|]} < ∞,

and Condition (ii) holds.

To show that Condition (iii) holds, we proceed as follows. Let

V[x(t)] = var[x(t; ω) − (Ux)(t; ω) | x(t; ω) = x(t), x(τ; ω) = θ(τ) if τ < t]
        = var[(Ux)(t; ω) | x(t; ω) = x(t), x(τ; ω) = θ(τ) if τ < t]

from the property that a constant [here x(t)] plus a random variable has the same variance as the random variable. But from the preceding,

E[(Ux)(t; ω) | x(t; ω) = x(t), x(τ; ω) = θ(τ) if τ < t] = μ_h(t) + ∫₀^t μ_k(t, τ)f(τ, θ(τ)) dτ.
Therefore

V[x(t)] = var[h(t; ω)] + var[∫₀^t k(t, τ; ω)f(τ, x(τ; ω)) dτ | x(t; ω) = x(t), x(τ; ω) = θ(τ) if τ < t],

since h(t; ω) and the kernel k(t, τ; ω) are independent, and the conditional variance of the integral term may be written as a double integral over [0, t] × [0, t].
However, since

E[k(t, s; ω)f(s, x(s; ω)) | x(t; ω) = x(t), x(u; ω) = θ(u) if u < t] = μ_k(t, s)f(s, θ(s)),

the integrand of the double integral becomes

E{[k(t, τ; ω)f(τ, x(τ; ω)) · k(t, s; ω)f(s, x(s; ω))] | x(t; ω) = x(t), x(u; ω) = θ(u) if u < t} − μ_k(t, τ)μ_k(t, s)f(τ, θ(τ))f(s, θ(s))
  = {E[k(t, τ; ω)k(t, s; ω)] − μ_k(t, τ)μ_k(t, s)}f(τ, θ(τ))f(s, θ(s)).

Hence the double integral becomes

∫₀^t ∫₀^t cov[k(t, τ; ω), k(t, s; ω)]f(τ, θ(τ))f(s, θ(s)) ds dτ.

Therefore, if f(τ, x) is uniformly bounded for x ∈ R and τ ∈ [0, t] for each fixed t ∈ R₊, and if

∫₀^t ∫₀^t cov[k(t, τ; ω), k(t, s; ω)] ds dτ < Γ,  some constant,

then V[x(t)] < ∞ for all x(t) values, since var[h(t; ω)] < ∞ by the assumption that h(t; ω) ∈ L₂(Ω, 𝒜, P). Hence

sup_{x(t)} V[x(t)] < ∞,

and Condition (iii) is satisfied. Note that the covariance exists for each fixed t, τ, and s, since k(t, τ; ω) and k(t, s; ω) are in L₂(Ω, 𝒜, P) for each (t, τ) and (t, s).

To show that Condition (iv) of Theorem 3.3.1 holds, let 0 < δ₁ < δ₂ < ∞. By definition,
M[x(t)] = x(t) − μ_h(t) − ∫₀^t μ_k(t, τ)f(τ, θ(τ)) dτ = x(t) − θ(t),

since θ(t) is a realization of the unique random solution of (3.0.1). Hence

inf_{δ₁≤|x(t)−θ(t)|≤δ₂} |M[x(t)]| = inf_{δ₁≤|x(t)−θ(t)|≤δ₂} |x(t) − θ(t)| = δ₁ > 0,

and Condition (iv) holds. Thus we have the following theorem.

Theorem 3.3.2 If x(τ; ω), h(t; ω), and k(t, τ; ω) are mutually independent, real-valued random variables for each t and τ, 0 ≤ τ ≤ t < ∞, the conditions of Theorem 2.1.2 hold so that a unique random solution of (3.0.1) exists, and for fixed t ∈ R₊:

(i) f(τ, x) is uniformly bounded for x ∈ R and τ ∈ [0, t];
(ii) ∫₀^t ∫₀^t cov[k(t, τ; ω), k(t, s; ω)] ds dτ < Γ, some constant;
(iii) Σ_{n=1}^∞ a_n = ∞ and Σ_{n=1}^∞ a_n² < ∞;
(iv) M(x) and V(x) are Borel measurable;

then

P{ω: lim_{n→∞} x_n(t; ω) = θ(t)} = 1,

where θ(t) is a value of the unique random solution x(t; ω) of the stochastic integral equation (3.0.1) and {x_n(t; ω)} is defined by (3.3.2).

Therefore the stochastic approximation procedure defined by (3.3.2) converges to a value of the unique random solution at each t ∈ R₊ with probability one if the given conditions are satisfied. This provides a very useful technique for numerically obtaining an approximation to a realization of the unique random solution of Eq. (3.0.1) in practical situations.
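In modern notation, the scheme (3.3.2) at a single fixed t can be sketched as follows. The toy instance is entirely invented: f is taken as the identity, the past realization θ(τ), τ < t, is absorbed into the integral term, and the distributions of h(t; ω) and of the integral are chosen arbitrarily, so that the root is θ(t) = E[h] + E[I] = 0.8.

```python
import random

def approximate_solution_at_t(draw_h, draw_integral, a, n_steps):
    """Stochastic approximation scheme (3.3.2) at a fixed t:
        x_1 = h(t; omega),   x_{n+1} = x_n - a_n * y_n,
    where y_n = x_n - (U x_n)(t; omega) uses fresh random draws of h(t; omega)
    and of the integral term at each step."""
    x = draw_h()
    for n in range(1, n_steps + 1):
        y = x - (draw_h() + draw_integral())
        x = x - a(n) * y
    return x

# Invented distributions: E[h(t; omega)] = 0.3 and the integral term has
# mean 0.5, so M[x(t)] = x(t) - 0.8 and the root is theta(t) = 0.8.
random.seed(1)
draw_h = lambda: 0.3 + random.gauss(0.0, 0.2)
draw_I = lambda: 0.5 + random.gauss(0.0, 0.2)
theta_t = 0.8
x_hat = approximate_solution_at_t(draw_h, draw_I, lambda n: 1.0 / n, 20000)
print(abs(x_hat - theta_t))
```

The gains a_n = 1/n satisfy Condition (iii) of Theorem 3.3.2, and the bounded noise variances play the role of Conditions (i) and (ii), so the iterates settle near θ(t).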
3.3.3 Numerical Solution for a Hypothetical Population

We will now illustrate the fruitfulness of the results obtained in the foregoing sections by using a hypothetical population and choosing certain distributions for the processes m(t; ω) and n(u, 0; ω) given in Section 3.2. Then values will be obtained from these distributions, and a realization of the unique random solution of (3.2.12), the birth rate b(t; ω) at each t ≥ 0, will be found by the method of stochastic successive approximations presented in Section 3.1. Feller [1] used successive approximations in solving the deterministic equation (3.2.4). Let

(Ub)(t; ω) = g(t; ω) + ∫₀^t k(t, τ; ω)b(τ; ω) dτ,  t ≥ 0.    (3.3.4)

Define a sequence of random variables at each fixed value of t ∈ R₊ as follows:

b₁(t; ω) = g(t; ω),    b_{n+1}(t; ω) = (Ub_n)(t; ω),  n ≥ 1.    (3.3.5)
The sequence {b_n(t; ω)} for each t ≥ 0 can be shown to converge to the unique random solution of Eq. (3.2.12) with probability one and in mean square as n → ∞ under the conditions of Theorem 3.2.1. We use Theorem 3.1.2.
Theorem 3.3.3 The sequence of random variables b_n(t; ω), n = 1, 2, ..., converges to b(t; ω) with probability one for each t ≥ 0 under the conditions of Theorem 3.2.1.

PROOF The proof follows an argument similar to that for Theorem 3.1.3 and will be omitted.
We use a hypothetical population as an example. Suppose that the death rate, that is, the number of individuals dying per unit time, is μ = 2.1. To obtain a value of the stochastic kernel k(t, τ; ω), a value of m(t − τ; ω), the number of offspring produced at time t − τ by females aged t − τ ≥ 0, is needed. We suppose that the biological population consists of organisms which produce offspring at the same rate at all ages from age zero until death. That is, for all t, m(t; ω) has the same distribution with mean λ(t) = λ. It is assumed that m(t; ω) has the uniform distribution on the interval zero to two, so that the mean number of offspring produced per unit time by individuals aged t is 1.0 for all t. Therefore, to find a value of the stochastic kernel, we may generate a value from this uniform distribution.

Also, in order to use the stochastic successive approximation method to approximate a realization of b(t; ω), we need a value of g(t; ω) at each t. The process g(t; ω) is given by Eq. (3.2.8), and thus we need values of n(u, 0; ω) in order to obtain values of g(t; ω). We choose as an approximate distribution of n(u, 0; ω), the number of organisms of age u in the population at time zero, a gamma distribution with a mean which appropriately tends to zero as u → ∞, where P₀ is the mean number of organisms in the population of age zero at time zero (that is, the average number of organisms aged zero at the time observation of the population is begun). Therefore we use values from the family of gamma densities as values of n(u, 0; ω), and then obtain values of g(t; ω) from Eq. (3.2.8) with m(t; ω) behaving as described and

l(t) = e^{−μt} = e^{−(2.1)t}.

We assume the mean number of organisms aged zero to be P₀ = 50 for convenience.
To evaluate the integrals in Eqs. (3.2.8) and (3.2.12), the composite trapezoidal rule was used (Kopal [1]), and an approximate realization of the birth rate b(t; ω) was obtained by the method of successive approximations with the aid of an electronic computer. Iteration of Eqs. (3.3.5) was continued until the absolute difference between the values of b_n(t; ω) and b_{n−1}(t; ω) at each t was less than the specified accuracy 0.001. The results of two simulations of this hypothetical population are shown in Fig. 3.3.1. The hypothetical population serves to illustrate the usefulness of the formulations given here. In a more realistic situation the distributions of n(u, 0; ω) and m(t; ω) may be known or approximated and the same techniques applied. Also, once these distributions are known and the values of b(t; ω) are found, the number of individuals (female or reproducing organisms) in the population may be obtained from Eq. (3.2.9), and the growth process is completely specified. It is possible that more general growth processes than that given here may be formulated in terms of stochastic integral equations, and if so, the general theory discussed in Chapter II may apply. The method of stochastic approximation given in Section 3.3.2 may also be used to obtain the realizations shown in Fig. 3.3.1.
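A self-contained modern sketch of one such simulation follows. Several ingredients are assumptions flagged in the comments: the initial age density stands in for the unspecified gamma choice, ages are truncated at T, and m beyond the grid is replaced by its mean; the successive-approximation loop with the 0.001 stopping rule is as described above.

```python
import math
import random

mu, T, n = 2.1, 5.0, 200          # death rate from the text; grid is invented
r = T / n
random.seed(2)

# One realization of m(s; omega), sampled on the age grid from Uniform(0, 2)
# (mean number of offspring 1.0 per unit time, as in the text).
m = [random.uniform(0.0, 2.0) for _ in range(n + 1)]

# Invented stand-in for n(u, 0; omega): about P0 = 50 organisms of age near
# zero, decaying in u (the text's gamma density is not fully specified here).
P0 = 50.0
n0 = lambda u: P0 * math.exp(-u)

def g_at(i):
    # g(t; omega) = l(t) * int_0^inf m(t + u; omega) n(u, 0; omega) du,
    # truncated at u = T; m beyond the grid is replaced by its mean 1.0.
    f = lambda j: (m[i + j] if i + j <= n else 1.0) * n0(j * r)
    s = 0.5 * (f(0) + f(n)) + sum(f(j) for j in range(1, n))
    return math.exp(-mu * i * r) * r * s

gv = [g_at(i) for i in range(n + 1)]
K = [math.exp(-mu * j * r) * m[j] for j in range(n + 1)]   # kernel l(s) m(s; omega)

def apply_U(b):
    # (U b)(t_i) = g(t_i) + int_0^{t_i} K(t_i - s) b(s) ds  (trapezoidal rule)
    out = [gv[0]]
    for i in range(1, n + 1):
        acc = 0.5 * (K[i] * b[0] + K[0] * b[i]) + sum(K[i - j] * b[j] for j in range(1, i))
        out.append(gv[i] + r * acc)
    return out

# Successive approximations (3.3.5): b_1 = g, b_{k+1} = U b_k, stopping when
# the largest change over the grid is below the tolerance 0.001.
b = gv[:]
for _ in range(2000):
    nb = apply_U(b)
    done = max(abs(x - y) for x, y in zip(nb, b)) < 0.001
    b = nb
    if done:
        break
print(b[0], b[-1])   # the realized birth rate decays, since A* = 2 < mu = 2.1
```

Because the kernel bound A* = 2 is below μ = 2.1, the operator U is a contraction and the iteration converges quickly; repeating the run with a different seed gives another realization, as in Fig. 3.3.1.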
Figure 3.3.1 Two realizations of birth rate for hypothetical population (birth rate plotted against time t).
CHAPTER IV

A Stochastic Integral Equation of the Fredholm Type and Some Applications
4.0 Introduction†
In this chapter we shall consider a stochastic integral equation of the Fredholm type of the form

x(t; ω) = h(t; ω) + ∫₀^∞ k₀(t, τ; ω)e(τ, x(τ; ω)) dτ,    (4.0.1)
which was presented in Section 1.3. We shall study the existence and uniqueness of a random solution of Eq. (4.0.1) using the concepts of admissibility introduced in Chapters I and II. We will also consider the asymptotic properties of the unique random
† Sections 4.0 and 4.1 adapted with permission of the publisher, The American Mathematical Society, from Padgett and Tsokos [11], Proceedings of the American Mathematical Society, Copyright © 1972, Volume 33, Number 2, pp. 534–542.
solution x(t; ω). In order to study the existence and uniqueness of a random solution of Eq. (4.0.1), we shall first consider the existence and uniqueness of a random solution of the stochastic integral equation of the mixed Volterra–Fredholm type of the form

x(t; ω) = h(t; ω) + ∫₀^t k(t, τ; ω)f(τ, x(τ; ω)) dτ + ∫₀^∞ k₀(t, τ; ω)e(τ, x(τ; ω)) dτ,    (4.0.2)
where the stochastic kernel k(t, τ; ω) and the function f(t, x) behave as in the first three chapters. Then Eq. (4.0.1) is a special case of Eq. (4.0.2); that is, we obtain (4.0.1) from (4.0.2) when k(t, τ; ω) is identically equal to zero for almost all ω ∈ Ω. Likewise, for k₀(t, τ; ω) equal to zero for all t, τ ∈ R₊ and almost all ω ∈ Ω, we obtain the random Volterra integral equation of Chapters II and III. A nonstochastic version of Eq. (4.0.2) has been studied by Miller, Nohel, and Wong [1], Petrovanu [1], and Corduneanu [5], among others.

An application of Eq. (4.0.1) will be presented in stochastic control theory. A linear system which was considered in the deterministic sense by Corduneanu [3] is generalized to the nonlinear stochastic case, and conditions are given that guarantee the existence of a unique random solution of the resulting stochastic control system such that the random solution is stochastically asymptotically exponentially stable.

4.1 Existence and Uniqueness of a Random Solution
We shall make the following assumptions throughout this chapter: The random solution $x(t;\omega)$ and the free term $h(t;\omega)$ are functions of $t \in R_+$ with values in the space $L_2(\Omega,\mathcal{A},\mathcal{P})$. The function $e(t,x(t;\omega))$ is a function of $t$ with values in $L_2(\Omega,\mathcal{A},\mathcal{P})$. For each $t$ and $\tau$ such that $0 \le t < \infty$ and $0 \le \tau < \infty$, the stochastic kernel $k_0(t,\tau;\omega)$ will be an essentially bounded function with respect to $\mathcal{P}$-measure, that is, for each $t,\tau \in R_+$, $k_0(t,\tau;\omega)$ will be in $L_\infty(\Omega,\mathcal{A},\mathcal{P})$. Then the product of $k_0(t,\tau;\omega)$ and $e(t,x(t;\omega))$ will always be in $L_2(\Omega,\mathcal{A},\mathcal{P})$. Furthermore, with respect to the behavior of $k_0(t,\tau;\omega)$, we assume that the mapping $(t,\tau) \to k_0(t,\tau;\omega)$ from the set
$$\Delta_1 = \{(t,\tau): 0 \le t < \infty,\ 0 \le \tau < \infty\}$$
into $L_\infty(\Omega,\mathcal{A},\mathcal{P})$ is continuous. That is,
$$\mathcal{P}\text{-ess sup}_{\omega\in\Omega}\,|k_0(t_n,\tau_n;\omega) - k_0(t,\tau;\omega)| \to 0$$
as $n \to \infty$ whenever $(t_n,\tau_n) \to (t,\tau)$ as $n \to \infty$. Denote the norm of $k_0(t,\tau;\omega)$ in $L_\infty(\Omega,\mathcal{A},\mathcal{P})$ by
$$|||k_0(t,\tau;\omega)||| = \mathcal{P}\text{-ess sup}_{\omega\in\Omega}\,|k_0(t,\tau;\omega)| = \|k_0(t,\tau;\omega)\|_{L_\infty(\Omega,\mathcal{A},\mathcal{P})}.$$
We also assume that for each $t \in R_+$, $|||k_0(t,\tau;\omega)|||$ and $|||k_0(t,\tau;\omega)||| \cdot \|x(\tau;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})}$ are integrable with respect to $\tau \in R_+$. We let $C_g = C_g(R_+, L_2(\Omega,\mathcal{A},\mathcal{P}))$, $C_c$, and $C$ denote the various spaces of functions defined in Chapter I. Also, let $B, D \subset C_c$ be a pair of Banach spaces. Define the operators $\mathcal{K}$ and $\mathcal{L}$ from $B$ into $C_c$ by
$$(\mathcal{K}x)(t;\omega) = \int_0^t k(t,\tau;\omega)x(\tau;\omega)\,d\tau, \qquad (\mathcal{L}x)(t;\omega) = \int_0^\infty k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau, \qquad t \in R_+.$$
If $B$ and $D$ are stronger than $C_c$ and the pair $(B,D)$ is admissible with respect to $\mathcal{K}$, then from Lemma 2.1.1 we have that $\mathcal{K}$ is continuous from $B$ to $D$. Therefore there is a constant $K_1 > 0$ such that $\|(\mathcal{K}x)(t;\omega)\|_D \le K_1\|x(t;\omega)\|_B$.
The following lemma shows that the operator $\mathcal{L}$ is continuous from $C_c(R_+, L_2(\Omega,\mathcal{A},\mathcal{P}))$ into itself.

Lemma 4.1.1 The operator $\mathcal{L}$ defined here is a continuous linear operator from $C_c(R_+, L_2(\Omega,\mathcal{A},\mathcal{P}))$ into itself.

PROOF As stated previously, the space $C_c(R_+, L_2(\Omega,\mathcal{A},\mathcal{P}))$ is a Fréchet space with distance function given by
$$d(x,y) = \sum_{n=1}^{\infty} (1/2^n)\big[\|x-y\|_n/(1+\|x-y\|_n)\big],$$
where $\|\cdot\|_n$ is the family of seminorms defined in Chapter I. By the assumption made on the stochastic kernel $k_0(t,\tau;\omega)$, the integral
$$(\mathcal{L}x)(t;\omega) = \int_0^\infty k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau$$
exists for all $x(t;\omega) \in C_c(R_+, L_2(\Omega,\mathcal{A},\mathcal{P}))$. Now, define the integral operators $\mathcal{L}_M$, $M = 1,2,\ldots$, by
$$(\mathcal{L}_M x)(t;\omega) = \int_0^M k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau, \qquad t \in R_+,$$
and hence $(\mathcal{L}_M x)(t;\omega) \to (\mathcal{L}x)(t;\omega)$ as $M \to \infty$.
We shall show that $\{\mathcal{L}_M\}$ is indeed a sequence of continuous linear operators from $C_c(R_+, L_2(\Omega,\mathcal{A},\mathcal{P}))$ into itself. Obviously, $\mathcal{L}_M$ is linear, $M = 1,2,\ldots$. Let $x_m(t;\omega) \to x(t;\omega)$ in $C_c(R_+, L_2(\Omega,\mathcal{A},\mathcal{P}))$ as $m \to \infty$. Then by definition
$$\|x_m(t;\omega) - x(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \to 0$$
uniformly on every interval $[0,Q]$, $Q > 0$. Therefore for each $n = 1,2,\ldots$ the seminorms
$$\|x_m(t;\omega) - x(t;\omega)\|_n = \sup_{0\le t\le n}\|x_m(t;\omega) - x(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \to 0$$
as $m \to \infty$, and $d(x_m, x) \to 0$ as $m \to \infty$. Hence for any $n = 1,2,\ldots$ we have
$$\|(\mathcal{L}_M x_m)(t;\omega) - (\mathcal{L}_M x)(t;\omega)\|_n \le \sup_{0\le t\le n}\int_0^M |||k_0(t,\tau;\omega)|||\,\|x_m(\tau;\omega) - x(\tau;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})}\,d\tau.$$
Since $k_0(t,\tau;\omega)$ is continuous from $\Delta_1$ into $L_\infty(\Omega,\mathcal{A},\mathcal{P})$, the norm $|||k_0(t,\tau;\omega)|||$ is bounded on the set $\{(t,\tau): 0 \le t \le n,\ 0 \le \tau \le M\}$ by some number $K_{n,M}$. Thus, independent of $t \in [0,n]$, for $m > N_M$ we have
$$\|(\mathcal{L}_M x_m)(t;\omega) - (\mathcal{L}_M x)(t;\omega)\|_n \le \varepsilon_M K_{n,M}.$$
Therefore for each $M$ and $\varepsilon_M^* > 0$ there exists an $N_M$ such that $m > N_M$ implies
$$\|(\mathcal{L}_M x_m)(t;\omega) - (\mathcal{L}_M x)(t;\omega)\|_n < \varepsilon_M^*,$$
which means that $(\mathcal{L}_M x_m)(t;\omega) \to (\mathcal{L}_M x)(t;\omega)$ in $L_2(\Omega,\mathcal{A},\mathcal{P})$-norm uniformly on $[0,n]$. Since $n = 1,2,\ldots$ is arbitrary, we may take $Q \le n$ so that
$$\sup_{0\le t\le Q}\|(\mathcal{L}_M x_m)(t;\omega) - (\mathcal{L}_M x)(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \le \sup_{0\le t\le n}\|(\mathcal{L}_M x_m)(t;\omega) - (\mathcal{L}_M x)(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} < \varepsilon_M^*$$
for $m > N_M$, and thus we have uniform convergence on every set $[0,Q]$, $Q > 0$. Hence $(\mathcal{L}_M x_m)(t;\omega)$ converges to $(\mathcal{L}_M x)(t;\omega)$ in the metric $d$ by definition; that is, $\mathcal{L}_M$ is a continuous mapping from the Fréchet space $C_c(R_+, L_2(\Omega,\mathcal{A},\mathcal{P}))$ into itself for each $M = 1,2,\ldots$. Now, applying Theorem 1.1.10 to the sequence of continuous linear operators $\{\mathcal{L}_M\}$, we obtain the fact that $\mathcal{L}$ is a continuous linear operator from $C_c(R_+, L_2(\Omega,\mathcal{A},\mathcal{P}))$ into itself, completing the proof.

Let $H_1, H_2 \subset H$, where $H$ is the subset of $C_c$ of all functions $x(t;\omega)$ whose inner product in $L_2(\Omega,\mathcal{A},\mathcal{P})$ is integrable on $R_+$, as defined in Section 1.2. The norms in $H_1$ and $H_2$ are defined by
$$\|x(t;\omega)\|_{H_i} = \Big\{\int_0^\infty \|x(t;\omega)\|^2_{L_2(\Omega,\mathcal{A},\mathcal{P})}\,dt\Big\}^{1/2}, \qquad i = 1,2.$$
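The Fréchet metric $d$ used in the proof of Lemma 4.1.1 can be illustrated numerically. In the sketch below, functions are represented by their values on a grid (an assumption made purely for the demonstration); the weights $2^{-n}$ and the quotient $s/(1+s)$ keep the metric bounded by one, however large the seminorms.

```python
# Sketch of the Frechet-space metric: d(x, y) sums the sup-norm seminorms
# over [0, n], weighted so the series always converges.

def seminorm(x, grid, n):
    # ||x||_n = sup of |x(t)| over grid points t in [0, n]
    return max(abs(v) for t, v in zip(grid, x) if t <= n)

def frechet_d(x, y, grid, terms=20):
    diff = [a - b for a, b in zip(x, y)]
    total = 0.0
    for n in range(1, terms + 1):
        s = seminorm(diff, grid, n)
        total += (2.0 ** -n) * s / (1.0 + s)
    return total

grid = [0.1 * i for i in range(201)]          # grid on [0, 20]
x = [t * t for t in grid]                     # an unbounded-looking function
y = [0.0 for _ in grid]
print(frechet_d(x, y, grid) < 1.0)            # metric is bounded by 1
```

Note that $x$ here is far from $y$ in every seminorm, yet $d(x,y) < 1$; this is the point of the construction.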
We consider the following stochastic integral equations for $M = 1,2,\ldots$:
$$x(t;\omega) = h(t;\omega) + \int_0^t k(t,\tau;\omega)f(\tau,x(\tau;\omega))\,d\tau + \int_0^M k_0(t,\tau;\omega)e(\tau,x(\tau;\omega))\,d\tau \tag{4.1.1}$$
for $t \in [0,M] \subset R_+$. Define the operators $\mathcal{K}_M$ and $\mathcal{L}_M$ from the Hilbert space $H_2$ into the Hilbert space $H_1$ by
$$(\mathcal{K}_M x)(t;\omega) = \int_0^t k(t,\tau;\omega)x(\tau;\omega)\,d\tau \quad\text{and}\quad (\mathcal{L}_M x)(t;\omega) = \int_0^M k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau$$
for $t \in [0,M]$. We now give a lemma which will be used.
Lemma 4.1.2 The integral operator $\mathcal{L}_M$ defined earlier from Hilbert space $H_2$ into Hilbert space $H_1$ is a bounded operator if the kernel $k_0(t,\tau;\omega)$ is such that
$$\int_0^\infty \int_0^M |||k_0(t,\tau;\omega)|||^2\,d\tau\,dt$$
exists and is finite.

PROOF For each $x(t;\omega) \in H_2$ we have
$$\|(\mathcal{L}_M x)(t;\omega)\|^2_{H_1} = \int_0^\infty \Big\|\int_0^M k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau\Big\|^2_{L_2(\Omega,\mathcal{A},\mathcal{P})}\,dt \le \int_0^\infty \Big(\int_0^M |||k_0(t,\tau;\omega)|||^2\,d\tau\Big)\Big(\int_0^M \|x(\tau;\omega)\|^2_{L_2(\Omega,\mathcal{A},\mathcal{P})}\,d\tau\Big)\,dt$$
by Schwarz's inequality. But the second inside integral is less than or equal to the norm squared of $x(t;\omega)$ in $H_2$, and thus we have
$$\|(\mathcal{L}_M x)(t;\omega)\|^2_{H_1} \le \|x(t;\omega)\|^2_{H_2} \int_0^\infty \int_0^M |||k_0(t,\tau;\omega)|||^2\,d\tau\,dt,$$
which is finite by hypothesis. Hence $\mathcal{L}_M$ is bounded.

We shall now prove a theorem with respect to the existence of a random solution of Eq. (4.1.1) for $M \ge 1$. The fixed-point theorem of Krasnosel'skii given in Chapter I and the theory of admissibility are used in the proof.
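The Schwarz estimate in Lemma 4.1.2 can be checked numerically. The sketch below discretizes the integrals on a grid; a deterministic kernel stands in for the essential-supremum bound $|||k_0(t,\tau;\omega)|||$, and the sample values of $\|x(\tau;\omega)\|$ are drawn at random. All concrete choices are illustrative assumptions, not the book's.

```python
import math, random

# Discrete check of ||L_M x||_{H1} <= (double integral of |||k0|||^2)^(1/2) * ||x||_{H2}.
random.seed(1)
n, T, M = 80, 6.0, 3.0
ts = [T * i / n for i in range(n)]            # grid on [0, T) approximating [0, inf)
taus = [M * j / n for j in range(n)]          # grid on [0, M)
dt, dtau = T / n, M / n

k0 = lambda t, tau: math.exp(-t - 0.5 * tau)          # plays the role of |||k0(t, tau)|||
x = [random.uniform(-1.0, 1.0) for _ in range(n)]     # sample values of ||x(tau)||

Lx = [sum(k0(t, tau) * xv for tau, xv in zip(taus, x)) * dtau for t in ts]
lhs = math.sqrt(sum(v * v for v in Lx) * dt)                        # ||L_M x||_{H1}
hs = math.sqrt(sum(k0(t, tau) ** 2 for t in ts for tau in taus) * dt * dtau)
rhs = hs * math.sqrt(sum(v * v for v in x) * dtau)                  # bound * ||x||_{H2}
print(lhs <= rhs)
```

The inequality holds exactly at the discrete level, since it is the Cauchy-Schwarz inequality applied to each inner sum.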
Theorem 4.1.3 Consider the random integral equation (4.1.1) subject to the following conditions:

(i) $H_1$ and $H_2$ are Hilbert spaces stronger than $C_c$ such that the pair $(H_2, H_1)$ is admissible with respect to each of the operators
$$(\mathcal{K}_M x)(t;\omega) = \int_0^t k(t,\tau;\omega)x(\tau;\omega)\,d\tau \quad\text{and}\quad (\mathcal{L}_M x)(t;\omega) = \int_0^M k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau,$$
where $t \in [0,M]$, $M \ge 1$, $k(t,\tau;\omega)$ and $k_0(t,\tau;\omega)$ behave as described previously, and $\int_0^\infty \int_0^M |||k_0(t,\tau;\omega)|||^2\,d\tau\,dt$ exists and is finite. Further assume that $\mathcal{L}_M$ is completely continuous.

(ii) $x(t;\omega) \to f(t,x(t;\omega))$ is an operator on
$$S = \{x(t;\omega): x(t;\omega) \in H_1,\ \|x(t;\omega)\|_{H_1} \le \rho\}$$
for some $\rho \ge 0$ with values in $H_2$ satisfying the Lipschitz condition
$$\|f(t,x(t;\omega)) - f(t,y(t;\omega))\|_{H_2} \le \lambda\|x(t;\omega) - y(t;\omega)\|_{H_1}$$
for $x(t;\omega), y(t;\omega) \in S$, and $\lambda > 0$ a constant.

(iii) $x(t;\omega) \to e(t,x(t;\omega))$ is a continuous operator on $S$ with values in $H_2$ such that $\|e(t,x(t;\omega))\|_{H_2} \le \gamma$, for $\gamma > 0$ a constant.

(iv) $h(t;\omega) \in H_1$.

Then there exists at least one random solution of Eq. (4.1.1), provided that
$$\lambda K_{1M} < 1, \qquad \|h(t;\omega)\|_{H_1} + K_{1M}\|f(t,0)\|_{H_2} + \gamma K_{2M} \le \rho(1 - \lambda K_{1M}),$$
where $K_{1M}$ and $K_{2M}$ are the norms of $\mathcal{K}_M$ and $\mathcal{L}_M$, respectively.

PROOF It is obvious by definition that sets of the type $S$ are closed, bounded, convex sets in a Hilbert space. By definition, $H_1$ and $H_2$ are Banach spaces, since they are Hilbert spaces. Let $x(t;\omega), y(t;\omega) \in S$. Define the operator $U_M$ from $S$ into $H_1$ by
$$(U_M x)(t;\omega) = h(t;\omega) + \int_0^t k(t,\tau;\omega)f(\tau,x(\tau;\omega))\,d\tau$$
for $t \in [0,M]$. Taking the norm of both sides of this equation, we have
$$\|(U_M x)(t;\omega)\|_{H_1} \le \|h(t;\omega)\|_{H_1} + K_{1M}\|f(t,x(t;\omega))\|_{H_2},$$
since $K_{1M}$ is the norm of the integral operator $\mathcal{K}_M$, and
$$\|f(t,x(t;\omega))\|_{H_2} \le \lambda\|x(t;\omega)\|_{H_1} + \|f(t,0)\|_{H_2}. \tag{4.1.3}$$
Since the difference of two elements of a Hilbert space is in the Hilbert space, $(U_M x)(t;\omega) - (U_M y)(t;\omega) \in H_1$, and
$$\|(U_M x)(t;\omega) - (U_M y)(t;\omega)\|_{H_1} \le \lambda K_{1M}\|x(t;\omega) - y(t;\omega)\|_{H_1}.$$
By hypothesis, $\lambda K_{1M} < 1$, so that $U_M$ is a contraction on $S$. Define the operator $V_M$ from $S$ into $H_1$ by
$$(V_M x)(t;\omega) = \int_0^M k_0(t,\tau;\omega)e(\tau,x(\tau;\omega))\,d\tau, \qquad t \in [0,M].$$
From Lemma 4.1.2, the operator $\mathcal{L}_M$ in Condition (i) of the theorem is a completely continuous bounded operator from $H_2$ into $H_1$ since
$$\int_0^\infty \int_0^M |||k_0(t,\tau;\omega)|||^2\,d\tau\,dt < \infty.$$
The function $e(t,x(t;\omega))$ maps elements of Hilbert space $H_1$ into Hilbert space $H_2$ and is bounded in $H_2$ by Condition (iii). Hence $e$ is continuous and bounded from $H_1$ into $H_2$. The operator $V_M$ may be expressed as $V_M = \mathcal{L}_M e$. Thus $V_M$ is a completely continuous operator from $S$ into $H_1$. We have for $x(t;\omega), y(t;\omega) \in S$
$$\|(U_M x)(t;\omega) + (V_M y)(t;\omega)\|_{H_1} \le \|h(t;\omega)\|_{H_1} + K_{1M}\big[\lambda\|x(t;\omega)\|_{H_1} + \|f(t,0)\|_{H_2}\big] + K_{2M}\gamma$$
from (4.1.3) and Condition (iii) of the theorem. Then from the last condition of the theorem II(U,x)(t;
0)
+ (V,y)(t;
W)IIH,
d Ilh(t; w)llJfl
+ KlMIIf'(t,O)IlH*
+ K,MY + K l M w 4 t ;4 Q P(1 - AKIhf)
+K,dp
H
,
= p,
and we have that ( U , x ) ( t ; w ) + (V'y)(t; w ) E S . Therefore the conditions of the fixed-point theorem of Krasnosel'skii (Theorem 1.1.8) hold, and there exists at least one random solution of Eq. (4.1.1) for M 2 1, which completes the proof.
In the case that $\mathcal{L}_M$ is the null operator, that is, $k_0(t,\tau;\omega) = 0$ for all $t,\tau \in R_+$ and almost all $\omega \in \Omega$, we have for $t \in [0,M]$
$$x(t;\omega) = h(t;\omega) + \int_0^t k(t,\tau;\omega)f(\tau,x(\tau;\omega))\,d\tau,$$
the random Volterra integral equation of Chapter II. For the case that $\mathcal{K}_M$ is the null operator we obtain the stochastic integral equation of the Fredholm type
$$x(t;\omega) = h(t;\omega) + \int_0^M k_0(t,\tau;\omega)e(\tau,x(\tau;\omega))\,d\tau \tag{4.1.4}$$
for $t \in [0,M]$, $M \ge 1$. Hence we have the following corollary to Theorem 4.1.3.

Corollary 4.1.4 Consider the random integral equation (4.1.4) under the following conditions:
(i) $H_1$ and $H_2$ are Hilbert spaces stronger than $C_c$ and the pair $(H_2, H_1)$ is admissible with respect to the integral operator
$$(\mathcal{L}_M x)(t;\omega) = \int_0^M k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau,$$
where $k_0(t,\tau;\omega)$ behaves as described and $\int_0^\infty\int_0^M |||k_0(t,\tau;\omega)|||^2\,d\tau\,dt$ exists, $t \in [0,M]$, $M = 1,2,\ldots$. Further assume $\mathcal{L}_M$ is completely continuous.

(ii) $x(t;\omega) \to e(t,x(t;\omega))$ is a continuous operator on
$$S = \{x(t;\omega): x(t;\omega) \in H_1,\ \|x(t;\omega)\|_{H_1} \le \rho\}$$
for some $0 \le \rho < \infty$ with values in $H_2$ such that $\|e(t,x(t;\omega))\|_{H_2} \le \gamma$ for some $\gamma \ge 0$ a constant.

(iii) $h(t;\omega) \in H_1$.
Then there exists at least one bounded random solution of Eq. (4.1.4), provided $\|h(t;\omega)\|_{H_1} + \gamma K_{2M} \le \rho$.

PROOF This is a special case of Theorem 4.1.3. When $\mathcal{K}_M$ is the null operator, however, the existence of at least one random solution of (4.1.4) follows from Schauder's fixed-point theorem, keeping in mind the proof of Theorem 4.1.3 (Schauder's fixed-point theorem is a special case of that of Krasnosel'skii).
Now we may note that the integral operators $\mathcal{L}_M$ on Hilbert space $H_2$ into $H_1$ converge to the operator $\mathcal{L}$ on $H_2$ into $H_1$ defined previously by
$$(\mathcal{L}x)(t;\omega) = \int_0^\infty k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau, \qquad t \in R_+.$$
By a well-known theorem in functional analysis, $\mathcal{L}$ is then a completely continuous operator from $H_2$ into $H_1$ (Bachman and Narici [1, p. 260]) under the same condition as in (i) of Corollary 4.1.4. Hence we have the following theorem.

Theorem 4.1.5 Consider the random integral equation (4.0.1) subject to the following conditions:

(i) $H_1$ and $H_2$ are Hilbert spaces stronger than $C_c$ and the pair $(H_2, H_1)$ is admissible with respect to the completely continuous integral operator
$$(\mathcal{L}x)(t;\omega) = \int_0^\infty k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau, \qquad t \in R_+,$$
where $k_0(t,\tau;\omega)$ behaves as described previously and $\int_0^\infty\int_0^\infty |||k_0(t,\tau;\omega)|||^2\,d\tau\,dt$ exists and is finite.

(ii) Same as Condition (ii) of Corollary 4.1.4.

(iii) Same as Condition (iii) of Corollary 4.1.4.

Then there exists at least one bounded (by $\rho$) random solution of Eq. (4.0.1), provided $\|h(t;\omega)\|_{H_1} + \gamma L \le \rho$, where $L$ is the norm of the operator $\mathcal{L}$.

The proof of Theorem 4.1.5 follows that of Theorem 4.1.3 with the remark that $\mathcal{L}$ is a completely continuous operator.

We consider now the conditions under which the random equation (4.0.2) has a unique random solution. The fixed-point theorem of Banach from Chapter I is utilized in this respect. We could prove uniqueness by adding a Lipschitz condition on $e(t,x(t;\omega))$ in Theorem 4.1.3 and showing that there
is only one random solution, by a contradiction argument. However, by using Banach's theorem, we remove the condition $\|e(t,x(t;\omega))\|_{H_2} \le \gamma$ and require only that $e(t,0)$ be bounded in the Banach space $B$ defined in Chapter I. Also, we use Banach spaces and the supremum norms given in Section 1.2 instead of the Hilbert spaces given previously.

Theorem 4.1.6 Suppose the random integral equation (4.0.2) satisfies the following:
(i) $B$ and $D$ are Banach spaces stronger than $C_c$ such that $(B,D)$ is admissible with respect to each of the operators
$$(\mathcal{K}x)(t;\omega) = \int_0^t k(t,\tau;\omega)x(\tau;\omega)\,d\tau, \qquad (\mathcal{L}x)(t;\omega) = \int_0^\infty k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau, \qquad t \in R_+,$$
where $k(t,\tau;\omega)$ and $k_0(t,\tau;\omega)$ behave as previously.

(ii) $x(t;\omega) \to f(t,x(t;\omega))$ is an operator on
$$S = \{x(t;\omega): x(t;\omega) \in D,\ \|x(t;\omega)\|_D \le \rho\}$$
with values in $B$, satisfying the Lipschitz condition
$$\|f(t,x(t;\omega)) - f(t,y(t;\omega))\|_B \le \lambda\|x(t;\omega) - y(t;\omega)\|_D$$
for $x(t;\omega), y(t;\omega) \in S$ and $\lambda \ge 0$ a constant.

(iii) $x(t;\omega) \to e(t,x(t;\omega))$ is an operator on $S$ with values in $B$ satisfying
$$\|e(t,x(t;\omega)) - e(t,y(t;\omega))\|_B \le \xi\|x(t;\omega) - y(t;\omega)\|_D$$
for $x(t;\omega), y(t;\omega) \in S$ and $\xi \ge 0$ a constant.

(iv) $h(t;\omega) \in D$.

Then there exists a unique random solution of Eq. (4.0.2), provided
$$\lambda K_1 + \xi K_2 < 1, \qquad \|h(t;\omega)\|_D + K_1\|f(t,0)\|_B + K_2\|e(t,0)\|_B \le \rho(1 - \lambda K_1 - \xi K_2),$$
where $K_1$ and $K_2$ are the norms of $\mathcal{K}$ and $\mathcal{L}$, respectively.

PROOF The operator $\mathcal{K}$ is continuous from $B$ into $D$, from the results of Chapter II. The operator $\mathcal{L}$ is continuous from $C_c(R_+, L_2(\Omega,\mathcal{A},\mathcal{P}))$ into itself as a result of Lemma 4.1.1, given immediately following the definition of $\mathcal{L}$. Hence $\mathcal{L}$ is bounded from Condition (i), and its norm is $K_2$.
Define the operators $U$ and $V$ from $S$ into $D$ by
$$(Ux)(t;\omega) = h(t;\omega) + \int_0^t k(t,\tau;\omega)f(\tau,x(\tau;\omega))\,d\tau \quad\text{and}\quad (Vx)(t;\omega) = \int_0^\infty k_0(t,\tau;\omega)e(\tau,x(\tau;\omega))\,d\tau, \qquad t \in R_+.$$
Since $D$ is a Banach space, $(Ux)(t;\omega) + (Vx)(t;\omega) \in D$ whenever $(Ux)(t;\omega) \in D$ and $(Vx)(t;\omega) \in D$. We must show that $(Ux)(t;\omega) + (Vx)(t;\omega) \in S$ whenever $x(t;\omega) \in S$ (inclusion property) and that $U + V$ is a contracting operator on $S$. Consider another element $y(t;\omega) \in S$, and
$$(Uy)(t;\omega) + (Vy)(t;\omega) = h(t;\omega) + \int_0^t k(t,\tau;\omega)f(\tau,y(\tau;\omega))\,d\tau + \int_0^\infty k_0(t,\tau;\omega)e(\tau,y(\tau;\omega))\,d\tau.$$
Then we have
$$\|(Ux)(t;\omega) + (Vx)(t;\omega) - (Uy)(t;\omega) - (Vy)(t;\omega)\|_D \le K_1\|f(t,x(t;\omega)) - f(t,y(t;\omega))\|_B + K_2\|e(t,x(t;\omega)) - e(t,y(t;\omega))\|_B \le (\lambda K_1 + \xi K_2)\|x(t;\omega) - y(t;\omega)\|_D,$$
using the Lipschitz conditions on $f(t,x)$ and $e(t,x)$. Since $\lambda K_1 + \xi K_2 < 1$ by hypothesis, we have that $U + V$ is a contracting operator on $S$.
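The contraction just established can be illustrated numerically. The sketch below iterates a discretized map $U + V$ from two different starting guesses; the kernels are illustrative assumptions scaled so that $\lambda K_1 + \xi K_2 < 1$, and the sup-distance between the two iterate sequences shrinks at every step, as the proof predicts.

```python
import math

n, T = 120, 4.0
ts = [T * i / (n - 1) for i in range(n)]
dt = ts[1] - ts[0]
h = [0.5 * math.cos(t) for t in ts]
k = lambda t, u: 0.15 * math.exp(-(t - u))       # Volterra kernel, K1 about 0.15
k0 = lambda t, u: 0.05 * math.exp(-t - u)        # Fredholm kernel, K2 about 0.05
f, e = math.sin, math.tanh                       # Lipschitz constants lambda = xi = 1

def UV(x):
    out = []
    for i, t in enumerate(ts):
        v = sum(k(t, ts[j]) * f(x[j]) for j in range(i + 1)) * dt
        w = sum(k0(t, ts[j]) * e(x[j]) for j in range(n)) * dt
        out.append(h[i] + v + w)
    return out

dist = lambda a, b: max(abs(p - q) for p, q in zip(a, b))
x, y = [1.0] * n, [-1.0] * n                     # two different starting guesses
gaps = []
for _ in range(8):
    x, y = UV(x), UV(y)
    gaps.append(dist(x, y))
print(all(b < a for a, b in zip(gaps, gaps[1:])))
```

Both sequences are drawn toward the same fixed point, which is the unique random solution of the discretized equation for this sample path.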
Also,
$$\|(Ux)(t;\omega) + (Vx)(t;\omega)\|_D = \Big\|h(t;\omega) + \int_0^t k(t,\tau;\omega)f(\tau,x(\tau;\omega))\,d\tau + \int_0^\infty k_0(t,\tau;\omega)e(\tau,x(\tau;\omega))\,d\tau\Big\|_D \le \|h(t;\omega)\|_D + K_1\|f(t,x(t;\omega))\|_B + K_2\|e(t,x(t;\omega))\|_B.$$
Using an inequality similar to (4.1.3) for $f(t,x(t;\omega))$ and a similar inequality for $e(t,x(t;\omega))$, we obtain
$$\|(Ux)(t;\omega) + (Vx)(t;\omega)\|_D \le \|h(t;\omega)\|_D + K_1[\lambda\|x(t;\omega)\|_D + \|f(t,0)\|_B] + K_2[\xi\|x(t;\omega)\|_D + \|e(t,0)\|_B] = \|h(t;\omega)\|_D + K_1\|f(t,0)\|_B + K_2\|e(t,0)\|_B + \|x(t;\omega)\|_D(\lambda K_1 + \xi K_2) \le \rho(1 - \lambda K_1 - \xi K_2) + \rho(\lambda K_1 + \xi K_2) = \rho$$
from the last condition of the theorem. Thus $(Ux)(t;\omega) + (Vx)(t;\omega) \in S$. Therefore, applying Banach's fixed-point theorem, there exists a unique random solution $x(t;\omega) \in S$ of Eq. (4.0.2), completing the proof.

For the case that $\mathcal{K}$ is the null operator we immediately obtain the following corollary to Theorem 4.1.6, which gives conditions under which the random Fredholm integral equation (4.0.1) possesses a unique random solution.
Corollary 4.1.7 We consider the random integral equation (4.0.1) subject to the following conditions:

(i) $B$ and $D$ are Banach spaces stronger than $C_c$ such that the pair $(B,D)$ is admissible with respect to the operator
$$(\mathcal{L}x)(t;\omega) = \int_0^\infty k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau, \qquad t \in R_+,$$
where $k_0(t,\tau;\omega)$ behaves as described previously.

(ii) Same as Condition (iii) of Theorem 4.1.6.

(iii) Same as Condition (iv) of Theorem 4.1.6.

Then there exists a unique random solution of Eq. (4.0.1), provided that
$$\xi K_2 < 1, \qquad \|h(t;\omega)\|_D + K_2\|e(t,0)\|_B \le \rho(1 - \xi K_2),$$
where $K_2$ is the norm of the operator $\mathcal{L}$.
PROOF A special case of Theorem 4.1.6 with $K_1 = 0$, that is, $\mathcal{K}$ is the null operator.

4.2 Some Special Cases
We now present some useful special cases of the preceding theorems and corollaries by taking $C_g$ or $C$ as the Banach spaces $B$ and $D$.

Theorem 4.2.1 Consider the stochastic integral equation (4.0.1) subject to the following conditions:
(i) There exists a constant $Z > 0$ and a positive continuous function $g(t)$ finite on $R_+$ such that
$$\int_0^\infty |||k_0(t,\tau;\omega)|||\,g(\tau)\,d\tau \le Z, \qquad t \in R_+.$$

(ii) $e(t,x)$ is continuous in $t \in R_+$ and $x \in R$ such that
$$|e(t,0)| \le \gamma g(t) \quad\text{and}\quad |e(t,x) - e(t,y)| \le \xi g(t)|x - y|$$
for $\|x\|_C, \|y\|_C \le \rho$ and $\gamma \ge 0$ and $\xi \ge 0$ constants.

(iii) $h(t;\omega) \in C$, the set of continuous bounded functions from $R_+$ into $L_2(\Omega,\mathcal{A},\mathcal{P})$.

Then there exists a unique random solution $x(t;\omega) \in C$ of Eq. (4.0.1) such that $\|x(t;\omega)\|_C \le \rho$, provided that $\|h(t;\omega)\|_C$, $\xi$, and $\gamma$ are small enough.

PROOF We must show that under the given conditions the pair $(C_g, C)$ is admissible with respect to the integral operator
$$(\mathcal{L}x)(t;\omega) = \int_0^\infty k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau, \qquad t \in R_+.$$
Let $x(t;\omega) \in C_g$. Then we have
$$\|(\mathcal{L}x)(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \le \int_0^\infty |||k_0(t,\tau;\omega)|||\,\|x(\tau;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})}\,d\tau,$$
where $|||k_0(t,\tau;\omega)||| = \|k_0(t,\tau;\omega)\|_{L_\infty(\Omega,\mathcal{A},\mathcal{P})}$ is a function only of $(t,\tau) \in \Delta_1$. Using the definition of the norm in $C_g$, we have
$$\|(\mathcal{L}x)(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \le \|x(t;\omega)\|_{C_g}\int_0^\infty |||k_0(t,\tau;\omega)|||\,g(\tau)\,d\tau \le \|x(t;\omega)\|_{C_g}\,Z$$
by Condition (i) of the theorem. Thus $\|(\mathcal{L}x)(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})}$ is bounded, and $\mathcal{L}$ is a continuous function of $x$ from the proof of Theorem 4.1.6. Hence $(\mathcal{L}x)(t;\omega) \in C$ and $(C_g, C)$ is admissible with respect to $\mathcal{L}$. From Condition (ii),
$$|e(t,x(t;\omega)) - e(t,y(t;\omega))| \le \xi g(t)|x(t;\omega) - y(t;\omega)|$$
implies that
$$\|e(t,x(t;\omega)) - e(t,y(t;\omega))\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \le \xi g(t)\|x(t;\omega) - y(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})},$$
and hence
$$\sup_{t\ge 0}\big\{\|e(t,x(t;\omega)) - e(t,y(t;\omega))\|_{L_2(\Omega,\mathcal{A},\mathcal{P})}/g(t)\big\} \le \xi\sup_{t\ge 0}\|x(t;\omega) - y(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})},$$
or
$$\|e(t,x(t;\omega)) - e(t,y(t;\omega))\|_{C_g} \le \xi\|x(t;\omega) - y(t;\omega)\|_C$$
for $\|x\|_C, \|y\|_C \le \rho$. Likewise, $|e(t,0)| \le \gamma g(t)$ implies that $\|e(t,0)\|_{C_g} \le \gamma$. Therefore Corollary 4.1.7 applies with $B = C_g$ and $D = C$, provided that $\|h(t;\omega)\|_C$, $\xi$, and $\gamma$ are small enough in the sense that
$$\xi K_2 < 1, \qquad \|h(t;\omega)\|_C + K_2\gamma \le \rho(1 - \xi K_2).$$
Then there exists a unique random solution of Eq. (4.0.1) such that $\|x(t;\omega)\|_C \le \rho$, completing the proof.

For $g(t) = 1$ for all $t \in R_+$ in Theorem 4.2.1 we obtain the following corollary.
Corollary 4.2.2 Consider the random integral equation (4.0.1) under the following conditions:

(i) $\int_0^\infty |||k_0(t,\tau;\omega)|||\,d\tau \le Z$, $t \in R_+$, where $Z$ is some constant greater than zero.

(ii) $e(t,x)$ is a continuous function from $R_+ \times R$ into $R$ such that $|e(t,0)| \le \gamma$ and $|e(t,x) - e(t,y)| \le \xi|x - y|$ for $\|x\|_C$ and $\|y\|_C$ less than or equal to $\rho \ge 0$ and $\xi \ge 0$ a constant.

(iii) $h(t;\omega) \in C$.

Then there exists a unique bounded (by $\rho$) random solution $x(t;\omega) \in C$ provided that $\|h(t;\omega)\|_C$, $\xi$, and $\gamma$ are sufficiently small.

The following corollary is also a particular case of Theorem 4.2.1.

Corollary 4.2.3 Assume that the random integral equation (4.0.1) satisfies the following conditions:
(i) $|||k_0(t,\tau;\omega)|||$ is such that $\int_0^\infty |||k_0(t,\tau;\omega)|||\,g(\tau)\,d\tau$ exists and is bounded for $t \in R_+$.

(ii) Same as Condition (ii) of Theorem 4.2.1.

(iii) Same as Condition (iii) of Theorem 4.2.1.

Then there exists a unique random solution $x(t;\omega) \in C$ of Eq. (4.0.1), bounded by $\rho$, provided $\|h(t;\omega)\|_C$, $\xi$, and $\gamma$ are sufficiently small.

PROOF We must show that the pair $(C_g, C)$ is admissible with respect to the operator
$$(\mathcal{L}x)(t;\omega) = \int_0^\infty k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau, \qquad t \in R_+,$$
along with Condition (i) of the corollary. For $x(t;\omega) \in C_g$ we have
$$\|(\mathcal{L}x)(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \le \|x(t;\omega)\|_{C_g}\int_0^\infty |||k_0(t,\tau;\omega)|||\,g(\tau)\,d\tau.$$
From hypothesis (i) of the corollary we obtain that the right-hand side is bounded
for all $t \in R_+$. Thus $(\mathcal{L}x)(t;\omega) \in C$ for $x(t;\omega) \in C_g$, and $\mathcal{L}C_g \subset C$. That is, $(C_g, C)$ is admissible with respect to $\mathcal{L}$. Since the other hypotheses are identical to those of Theorem 4.2.1, the proof is complete.

4.3 Stochastic Asymptotic Stability of the Random Solution
In many practical situations it is of interest to know the behavior of the random solution of the random Fredholm integral equation (4.0.1) for large values of $t$, which may in some instances represent time. That is, if (4.0.1) describes the behavior of some physical system, it may be of interest to determine the behavior of the system after it has been operating for a long time. We shall now prove a theorem which states that under certain conditions the random solution of Eq. (4.0.1) is stochastically asymptotically exponentially stable, a concept which was defined in Chapter I.

Theorem 4.3.1 Suppose that the random integral equation (4.0.1) satisfies the following conditions:
(i) $|||k_0(t,\tau;\omega)||| \le N\exp(-\alpha t + \beta\tau)$ for $N > 0$ a constant, $\alpha > \beta > 0$, and $t,\tau \in R_+$.

(ii) $e(t,x)$ is defined on $R_+ \times R$ into $R$, continuous, $|e(t,0)| \le \gamma\exp(-\alpha t)$ on $R_+$, $\gamma \ge 0$ a constant, and
$$|e(t,x) - e(t,y)| \le \xi|x - y|$$
for $\|x\|_{C_g}, \|y\|_{C_g} \le \rho$ and $\xi \ge 0$ a constant.

(iii) $\|h(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \le H\exp(-\alpha t)$, $H > 0$, $t \in R_+$.

Then there exists a unique random solution $x(t;\omega)$ of Eq. (4.0.1) satisfying
$$\{E|x(t;\omega)|^2\}^{1/2} \le \rho\exp(-\alpha t), \qquad t \ge 0,$$
provided that $H$, $\xi$, and $\gamma$ are sufficiently small.

PROOF We must show that the pair $(C_g, C_g)$ is admissible with respect to the integral operator
$$(\mathcal{L}x)(t;\omega) = \int_0^\infty k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau$$
with $g(t) = \exp(-\alpha t)$, $\alpha > 0$, $t \in R_+$, and the given conditions. For $x(t;\omega) \in C_g$ we have
$$\|(\mathcal{L}x)(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \le \int_0^\infty |||k_0(t,\tau;\omega)|||\cdot\|x(\tau;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})}\,d\tau \le \|x(t;\omega)\|_{C_g}\int_0^\infty |||k_0(t,\tau;\omega)|||\exp(-\alpha\tau)\,d\tau. \tag{4.3.1}$$
But from Condition (i) of the theorem,
$$\int_0^\infty |||k_0(t,\tau;\omega)|||\exp(-\alpha\tau)\,d\tau \le N\int_0^\infty \exp(-\alpha t + \beta\tau - \alpha\tau)\,d\tau = N(\exp(-\alpha t))\int_0^\infty \exp[(\beta - \alpha)\tau]\,d\tau = [N/(\alpha - \beta)]\exp(-\alpha t). \tag{4.3.2}$$
Thus, combining inequalities (4.3.1) and (4.3.2), we obtain for every $t \in R_+$
$$\|(\mathcal{L}x)(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \le \|x(t;\omega)\|_{C_g}[N/(\alpha - \beta)]\exp(-\alpha t).$$
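The closed-form evaluation in (4.3.2) can be checked numerically. The sketch below uses illustrative values of $N$, $\alpha$, $\beta$, and $t$ (any values with $\alpha > \beta > 0$ would do) and compares a midpoint-rule approximation of the integral with $[N/(\alpha-\beta)]\exp(-\alpha t)$.

```python
import math

# Midpoint-rule check of (4.3.2) with illustrative parameters.
N, alpha, beta, t = 2.0, 1.5, 0.5, 0.7
m = 50000
upper = 40.0                               # truncation of the infinite range
dtau = upper / m
integral = sum(N * math.exp(-alpha * t + beta * (j + 0.5) * dtau)
               * math.exp(-alpha * (j + 0.5) * dtau) for j in range(m)) * dtau
closed_form = N / (alpha - beta) * math.exp(-alpha * t)
print(abs(integral - closed_form) < 1e-6)
```

The truncation at a finite upper limit is harmless here because the integrand decays like $\exp[(\beta-\alpha)\tau]$.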
Therefore, by definition of $C_g$ with $g(t) = \exp(-\alpha t)$, $(\mathcal{L}x)(t;\omega) \in C_g$, and $(C_g, C_g)$ is admissible with respect to $\mathcal{L}$. Hence Condition (i) of Corollary 4.1.7 is satisfied with $B = D = C_g$. Since we have (as in the proof of Theorem 4.2.1) that Condition (ii) implies
$$\|e(t,x(t;\omega)) - e(t,y(t;\omega))\|_{C_g} \le \xi\|x(t;\omega) - y(t;\omega)\|_{C_g}$$
and $\|e(t,0)\|_{C_g} \le \gamma$, and that Condition (iii) implies that $h(t;\omega) \in C_g = D$ for $g(t) = \exp(-\alpha t)$ by definition of $C_g$, all of the conditions of Corollary 4.1.7 are satisfied. Therefore, by Corollary 4.1.7, there exists a unique random solution $x(t;\omega) \in C_g$ of Eq. (4.0.1) such that $\|x(t;\omega)\|_{C_g} \le \rho$, provided that $H$, $\xi$, and $\gamma$ are small enough in the sense that $\xi K_2 < 1$ and
$$\|h(t;\omega)\|_{C_g} + K_2\|e(t,0)\|_{C_g} \le H + K_2\gamma \le \rho(1 - \xi K_2).$$
From (4.3.2) the norm of $\mathcal{L}$, by definition, is $K_2 = N/(\alpha - \beta)$. Hence we must have
$$\xi N < \alpha - \beta \quad\text{and}\quad H + [\gamma N/(\alpha - \beta)] \le \rho\{1 - [\xi N/(\alpha - \beta)]\}.$$
Also, $\|x(t;\omega)\|_{C_g} \le \rho$ means that
$$\|x(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \le \rho\exp(-\alpha t), \qquad t \in R_+,$$
completing the proof.

We therefore have that the unique random solution of (4.0.1) is stochastically asymptotically exponentially stable. In fact, with regard to the asymptotic behavior of the random variable $x(t;\omega)$, we have
$$\{E|x(t;\omega)|^2\}^{1/2} \le \rho\exp(-\alpha t) \to 0 \quad\text{as}\quad t \to \infty,$$
that is, $\lim_{t\to\infty} E\{|x(t;\omega)|^2\} = 0$: the second absolute moment of $x(t;\omega)$ approaches zero as $t \to \infty$. Hence from Jensen's inequality we have that the expected value of the absolute value of the solution approaches zero as $t \to \infty$, $\lim_{t\to\infty} E\{|x(t;\omega)|\} = 0$.

4.4 An Application in Stochastic Control Systems
Corduneanu [3], Desoer and Tomasian [1], and Petrovanu [1] considered the stability properties of a nonrandom linear system described by the triple $(E, F; T)$, where $E$ is the space of inputs to the system, $F$ is the space of outputs from the system, and $T$ is a linear operator from $E$ to $F$ given by
$$(Tx)(t) = \int_0^\infty k(t,s)x(s)\,ds, \qquad t \ge 0.$$
Here we have altered the function $k(t,s)$ in the paper of Corduneanu [3] to be zero whenever $s < 0$. In this section we shall study a nonlinear stochastic feedback control system for which the random output is given in terms of the random input by the nonlinear operator $T$ defined by
$$(Tx)(t;\omega) = \int_0^\infty k(t,s;\omega)e(s,x(s;\omega))\,ds, \qquad t \ge 0, \tag{4.4.1}$$
for $\omega \in \Omega$. In a feedback control system the output, or a fraction of the output, is returned as input to the system. For a stochastic system this fraction may be a random function of $t$ in general. We shall consider the fraction of the
random output to be returned as $q(t;\omega)(Tx)(t;\omega)$, where $0 \le q(t;\omega) \le 1$ for all $t \ge 0$ and $\omega \in \Omega$. See Fig. 4.4.1 for a schematic description.

Figure 4.4.1.

The following differential system with random parameters describes the stochastic feedback control system in Fig. 4.4.1:
$$\dot{x}(t;\omega) = A(t;\omega)x(t;\omega) + \varphi(t;\omega) \tag{4.4.2}$$
with
$$\varphi(t;\omega) = q(t;\omega)(Tx)(t;\omega), \tag{4.4.3}$$
where the dot denotes the derivative with respect to $t$, $T$ is the operator given by Eq. (4.4.1), $x(t;\omega)$ is an $n \times 1$ vector whose elements are random variables, $A(t;\omega)$ and $k(t,s;\omega)$ are $n \times n$ matrices whose elements are measurable functions, $q(t;\omega)$ is a scalar random variable for each $t \ge 0$, and $e(t,x)$ is an $n \times 1$ vector-valued function for each $t$ and $x$. For $n = 2$ we have complex-valued random functions. By taking as the spaces $E$ and $F$ the space $C_g$, we shall study the existence of a random solution $x(t;\omega)$ and its stochastic stability properties by applying methods similar to those employed in the previous section and Theorem 4.1.6. Here we consider $n = 1$.

The random differential system (4.4.2)-(4.4.3) may be reduced to a stochastic integral equation of the mixed Volterra-Fredholm type in the form of Eq. (4.0.2). Integrating both sides of Eq. (4.4.2) and substituting the expression for $\varphi(\tau;\omega)$ given by (4.4.3), we obtain
$$x(t;\omega) - x(0;\omega) = \int_0^t A(\tau;\omega)x(\tau;\omega)\,d\tau + \int_0^t \varphi(\tau;\omega)\,d\tau = \int_0^t A(\tau;\omega)x(\tau;\omega)\,d\tau + \int_0^t \int_0^\infty q(\tau;\omega)k(\tau,s;\omega)e(s,x(s;\omega))\,ds\,d\tau. \tag{4.4.4}$$
In the second integral on the right-hand side of Eq. (4.4.4) the integral
$$\int_0^\infty k(\tau,s;\omega)e(s,x(s;\omega))\,ds$$
exists and is finite for each $\tau$ and $\omega$; otherwise, the output of the system is infinite. Also, if for each $t \ge 0$ and $s \ge 0$, $\int_0^t q(\tau;\omega)k(\tau,s;\omega)\,d\tau$ exists and is finite, that is, if $\int_0^t k(\tau,s;\omega)\,d\tau$ exists and is finite, since $0 \le q(\tau;\omega) \le 1$, then we may interchange the order of integration by Fubini's theorem (Hewitt and Stromberg [1]) to obtain
$$\int_0^\infty \Big[\int_0^t q(\tau;\omega)k(\tau,s;\omega)\,d\tau\Big]\,e(s,x(s;\omega))\,ds.$$
Then Eq. (4.4.4) may be written as
$$x(t;\omega) = \int_0^t A(\tau;\omega)x(\tau;\omega)\,d\tau + \int_0^\infty k^*(t,\tau;\omega)e(\tau,x(\tau;\omega))\,d\tau, \tag{4.4.5}$$
where $x(0;\omega) = 0$ and
$$k^*(t,\tau;\omega) = \int_0^t q(u;\omega)k(u,\tau;\omega)\,du, \qquad t,\tau \in R_+.$$
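The interchange of integration order behind the passage from (4.4.4) to (4.4.5) can be illustrated on a grid: integrating the feedback term directly, or first forming $k^*(t,\tau;\omega)$ and then integrating, gives the same value. The particular $q$, $k$, and sample values of $e(s,x(s;\omega))$ below are illustrative assumptions.

```python
import math, random

# Discrete version of the Fubini interchange in the reduction (4.4.4) -> (4.4.5).
random.seed(3)
n = 60
t_grid = [0.05 * i for i in range(n)]            # u in [0, t], t = t_grid[-1]
s_grid = [0.1 * j for j in range(n)]             # s in [0, 6) approximating [0, inf)
du, ds = 0.05, 0.1
q = [random.uniform(0.0, 1.0) for _ in range(n)]         # feedback fraction q(u)
k = lambda u, s: math.exp(-u - s)                        # kernel k(u, s)
ex = [math.tanh(math.sin(s)) for s in s_grid]            # e(s, x(s)) sample values

direct = sum(q[i] * k(t_grid[i], s) * ex[j] * ds * du
             for i in range(n) for j, s in enumerate(s_grid))
kstar = lambda s: sum(q[i] * k(t_grid[i], s) for i in range(n)) * du
swapped = sum(kstar(s) * ex[j] * ds for j, s in enumerate(s_grid))
print(abs(direct - swapped) < 1e-9)
```

For finite sums the two groupings are algebraically identical, which is exactly what Fubini's theorem extends to the integrals above.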
The following theorem gives conditions under which a unique random solution of Eq. (4.4.5) exists and has the property of stochastic asymptotic exponential stability.

Theorem 4.4.1 Suppose that the random equation (4.4.5) satisfies the following conditions:
(i) $|||A(\tau;\omega)||| \le N_1\exp(-\alpha t + \delta\tau)$ for $N_1 > 0$ a constant, $\alpha > \delta > 0$, and $0 \le \tau \le t < \infty$.

(ii) $|||k^*(t,\tau;\omega)||| \le N_2\exp(-\alpha t + \beta\tau)$ for $N_2 > 0$ a constant, $\alpha > \beta > 0$, and $t,\tau \in R_+$.

(iii) $e(t,x(t;\omega))$ is such that $e(t,0) \in C_g$, is continuous in $t$ uniformly in $x$, and satisfies
$$|e(t,x) - e(t,y)| \le \xi|x - y|$$
for $\|x(t;\omega)\|_{C_g}, \|y(t;\omega)\|_{C_g} \le \rho$, and $\xi$ a constant.

Then there exists a unique random solution of Eq. (4.4.5) satisfying
$$\{E|x(t;\omega)|^2\}^{1/2} \le \rho\exp(-\alpha t), \qquad t \ge 0,$$
provided that $\xi$ and $|e(t,0)|$ are sufficiently small.

PROOF We must show that the pair of spaces $(C_g, C_g)$ is admissible with respect to the integral operators
$$(\mathcal{K}x)(t;\omega) = \int_0^t A(\tau;\omega)x(\tau;\omega)\,d\tau$$
and
$$(\mathcal{L}x)(t;\omega) = \int_0^\infty k^*(t,\tau;\omega)x(\tau;\omega)\,d\tau, \qquad t \ge 0,$$
with $g(t) = \exp(-\alpha t)$ and Conditions (i) and (ii). For $x(t;\omega)$ in $C_g$ we have
$$\|(\mathcal{L}x)(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \le \int_0^\infty |||k^*(t,\tau;\omega)|||\,\big[\|x(\tau;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})}/\exp(-\alpha\tau)\big]\exp(-\alpha\tau)\,d\tau \le \|x(t;\omega)\|_{C_g}[N_2/(\alpha - \beta)]\exp(-\alpha t) < \infty, \qquad t \ge 0,$$
since $\alpha > \beta$. Thus, by definition of $C_g$, where $g(t) = \exp(-\alpha t)$, $(\mathcal{L}x)(t;\omega) \in C_g$ for all $x(t;\omega) \in C_g$, and $(C_g, C_g)$ is admissible with respect to $\mathcal{L}$. Likewise,
$$\|(\mathcal{K}x)(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} \le \|x(t;\omega)\|_{C_g}\,N_1\exp(-\alpha t)\big(1 - \exp[-(\alpha - \delta)t]\big)/(\alpha - \delta) \le \|x(t;\omega)\|_{C_g}[N_1/(\alpha - \delta)]\exp(-\alpha t) < \infty, \qquad t \ge 0,$$
from Condition (i) and the definition of the norm in $C_g$. Hence $(\mathcal{K}x)(t;\omega) \in C_g$ whenever $x(t;\omega) \in C_g$, and the pair $(C_g, C_g)$ is admissible with respect to $\mathcal{K}$. Since the function $f(t,x)$ in Eq. (4.0.2) is the identity function in $x$ in Eq. (4.4.5), the constant $\lambda$ in Theorem 4.1.6 is equal to one. From Condition (iii) we have, as before, that
$$\|e(t,x(t;\omega)) - e(t,y(t;\omega))\|_{C_g} \le \xi\|x(t;\omega) - y(t;\omega)\|_{C_g}.$$
Since the stochastic free term is identically zero, all of the conditions of Theorem 4.1.6 are satisfied for $B = D = C_g$, and it follows that there exists a unique random solution of Eq. (4.4.5) in the set
$$S = \{x(t;\omega): x(t;\omega) \in C_g,\ \|x(t;\omega)\|_{C_g} \le \rho\}$$
for some $\rho > 0$, provided that $\xi$ and $|e(t,0)|$ are small enough. Hence the random solution satisfies
$$\|x(t;\omega)\|_{L_2(\Omega,\mathcal{A},\mathcal{P})} = \{E|x(t;\omega)|^2\}^{1/2} \le \rho\exp(-\alpha t), \qquad t \ge 0,$$
by the definition of the space $C_g$. The constants $\xi$ and $|e(t,0)|$ must be small enough in the sense that
$$K_1 + \xi K_2 < 1 \quad\text{and}\quad K_2\|e(t,0)\|_{C_g} \le \rho(1 - K_1 - \xi K_2),$$
where $K_1$ and $K_2$ are the norms of the operators $\mathcal{K}$ and $\mathcal{L}$, respectively. From the preceding results we see that
$$K_1 = N_1^*/(\alpha - \delta) \quad\text{and}\quad K_2 = N_2^*/(\alpha - \beta),$$
where $N_1^*$ and $N_2^*$ are the greatest lower bounds of the constants $N_1$ and $N_2$ which satisfy Conditions (i) and (ii), respectively, and the given inequalities. Therefore we must have
$$[N_1^*/(\alpha - \delta)] + \xi[N_2^*/(\alpha - \beta)] < 1 \quad\text{and}\quad [N_2^*/(\alpha - \beta)]\,\|e(t,0)\|_{C_g} \le \rho\{1 - [N_1^*/(\alpha - \delta)] - \xi[N_2^*/(\alpha - \beta)]\},$$
completing the proof.

Therefore, if the conditions of Theorem 4.4.1 hold, then the unique random solution of the system (4.4.2)-(4.4.3) satisfies $E\{|x(t;\omega)|\} \to 0$ as $t \to \infty$.

We remark that this is a very general stochastic control system because of the generality of the stochastic kernel, the nonlinear operator $T$, and the functions $q(t;\omega)$ and $A(t;\omega)$. The operator $T$ as given says that the system output is a function of both past and future input, which may seem a bit unrealistic at first glance. However, this operator contains all other operators of the Volterra and Fredholm types for compact or noncompact intervals in $R_+$, and the results obtained here have wide applicability in stochastic control systems.
4.5 A Random Perturbed Fredholm Integral Equation
We will consider in this section the existence and uniqueness of a random solution of the stochastic integral equation of the Fredholm type of the form
$$x(t;\omega) = h(t,x(t;\omega)) + \int_0^\infty k_0(t,\tau;\omega)e(\tau,x(\tau;\omega))\,d\tau \tag{4.5.1}$$
for $t \ge 0$; $k_0(t,\tau;\omega)$ and $e(t,x(t;\omega))$ behave as described in Chapter I and in Section 4.1. Equation (4.5.1) was recently studied by Milton, Padgett, and Tsokos [1]. The function $h(t,x(t;\omega))$ is a function from $R_+$ into $L_2(\Omega,\mathcal{A},\mathcal{P})$ under certain convenient conditions, and $h(t,x)$ is a scalar function of $t \ge 0$ and scalar $x$. To obtain the existence and uniqueness of a random solution of (4.5.1), we shall first consider the existence and uniqueness of a random solution of the mixed Volterra-Fredholm type equation of the form
$$x(t;\omega) = h(t,x(t;\omega)) + \int_0^t k(t,\tau;\omega)f(\tau,x(\tau;\omega))\,d\tau + \int_0^\infty k_0(t,\tau;\omega)e(\tau,x(\tau;\omega))\,d\tau, \qquad t \ge 0, \tag{4.5.2}$$
where in addition to the given conditions we have that the functions $k(t,\tau;\omega)$ and $f(t,x(t;\omega))$ behave as described in Chapters I and II. Note that Eq. (4.5.1) is a special case of Eq. (4.5.2); that is, when $k(t,\tau;\omega)$ is identically equal to zero for all $t$ and $\tau$ in $R_+$ and almost all $\omega \in \Omega$, then Eq. (4.5.2) reduces to (4.5.1). Also note that when $k_0(t,\tau;\omega)$ is equal to zero for all $t,\tau \in R_+$ and almost all $\omega \in \Omega$ we obtain the random, nonlinear, perturbed Volterra equation studied recently by Milton and Tsokos [5].

Let $H \subset C_c$ be the space of all functions such that the inner products of elements of $H$ are integrable on $R_+$, and let $H_1$ and $H_2$ be Hilbert spaces contained in $H$ with norms defined by
$$\|x(t;\omega)\|_{H_i} = \Big\{\int_0^\infty \|x(t;\omega)\|^2_{L_2(\Omega,\mathcal{A},\mathcal{P})}\,dt\Big\}^{1/2}, \qquad i = 1,2.$$
We shall consider the following stochastic integral equation for $M = 1,2,\ldots$:
$$x(t;\omega) = h(t,x(t;\omega)) + \int_0^t k(t,\tau;\omega)f(\tau,x(\tau;\omega))\,d\tau + \int_0^M k_0(t,\tau;\omega)e(\tau,x(\tau;\omega))\,d\tau \tag{4.5.3}$$
for $t \in [0,M] \subset R_+$. Define the operators $T_M$ and $W_M$ from the Hilbert space $H_2$ into Hilbert space $H_1$ by
$$(T_M x)(t;\omega) = \int_0^t k(t,\tau;\omega)x(\tau;\omega)\,d\tau \quad\text{and}\quad (W_M x)(t;\omega) = \int_0^M k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau.$$
We will now prove a theorem with respect to the existence of a random solution of the stochastic integral equation (4.5.3).

Theorem 4.5.1 Consider the random integral equation (4.5.3) subject to the following conditions:
(i) $H_1$ and $H_2$ are Hilbert spaces stronger than $C_c$ such that the pair $(H_2, H_1)$ is admissible with respect to each of the operators
$$(T_M x)(t;\omega) = \int_0^t k(t,\tau;\omega)x(\tau;\omega)\,d\tau \quad\text{and}\quad (W_M x)(t;\omega) = \int_0^M k_0(t,\tau;\omega)x(\tau;\omega)\,d\tau,$$
where $t \in [0,M]$, $M = 1,2,\ldots$; $k(t,\tau;\omega)$ and $k_0(t,\tau;\omega)$ behave as described previously; and $\int_0^\infty\int_0^M |||k_0(t,\tau;\omega)|||^2\,d\tau\,dt$ exists and is finite. Further assume that $W_M$ is completely continuous.

(ii) $x(t;\omega) \to f(t,x(t;\omega))$ is an operator on
$$S = \{x(t;\omega): x(t;\omega) \in H_1,\ \|x(t;\omega)\|_{H_1} \le \rho\}$$
for some $\rho > 0$, with values in $H_2$ satisfying
$$\|f(t,x(t;\omega)) - f(t,y(t;\omega))\|_{H_2} \le \lambda\|x(t;\omega) - y(t;\omega)\|_{H_1}$$
for $x(t;\omega), y(t;\omega) \in S$ and $\lambda \ge 0$ a constant.

(iii) $x(t;\omega) \to e(t,x(t;\omega))$ is a continuous operator on $S$ with values in $H_2$ such that $\|e(t,x(t;\omega))\|_{H_2} \le \gamma$ for $\gamma \ge 0$ a constant.

(iv) $x(t;\omega) \to h(t,x(t;\omega))$ is an operator on $S$ with values in $H_1$ such that
$$\|h(t,x(t;\omega)) - h(t,y(t;\omega))\|_{H_1} \le \lambda_1\|x(t;\omega) - y(t;\omega)\|_{H_1}$$
for some constant $\lambda_1 \ge 0$.
IV A FREDHOLM TYPE EQUATION AND SOME APPLICATIONS

Then there exists at least one random solution of Eq. (4.5.3) provided that

λ₁ + λK₁M < 1

and

‖h(t, x(t; ω))‖_{H₁} + K₁M ‖f(t, 0)‖_{H₂} + γK₂M ≤ ρ(1 − λK₁M),

where K₁M and K₂M are the norms of T_M and W_M, respectively.
PROOF The set S is closed, bounded, and convex in H₁. By definition, H₁ and H₂ are Banach spaces since they are Hilbert spaces. Let x(t; ω), y(t; ω) ∈ S. Define the operator U_M from S into H₁ by

(U_M x)(t; ω) = h(t, x(t; ω)) + ∫₀ᵗ k(t, τ; ω) f(τ, x(τ; ω)) dτ

for t ∈ [0, M]. Taking the norm of each side of this equation, we have

‖(U_M x)(t; ω)‖_{H₁} ≤ ‖h(t, x(t; ω))‖_{H₁} + K₁M ‖f(t, x(t; ω))‖_{H₂},  (4.5.4)

since K₁M is the norm of the operator T_M and ‖(T_M x̃)(t; ω)‖_{H₁} ≤ K₁M ‖x̃(t; ω)‖_{H₂} for x̃(t; ω) ∈ H₂, due to Lemma 2.1.1 and the admissibility of the pair (H₂, H₁). However, using Condition (ii), we have

‖f(t, x(t; ω))‖_{H₂} = ‖f(t, x(t; ω)) − f(t, 0) + f(t, 0)‖_{H₂} ≤ λ ‖x(t; ω)‖_{H₁} + ‖f(t, 0)‖_{H₂}.  (4.5.5)

Hence (4.5.4) becomes

‖(U_M x)(t; ω)‖_{H₁} ≤ ‖h(t, x(t; ω))‖_{H₁} + K₁M ‖f(t, 0)‖_{H₂} + λK₁M ‖x(t; ω)‖_{H₁} ≤ ρ(1 − λK₁M) + λK₁M ρ = ρ

by the last condition of the theorem and the fact that x(t; ω) ∈ S. Thus U_M(S) ⊂ S. Since the difference of elements in a Hilbert space is in the Hilbert space,
(U_M x)(t; ω) − (U_M y)(t; ω) ∈ H₁. Using the Lipschitz conditions given in (ii) and (iv), we have that

‖(U_M x)(t; ω) − (U_M y)(t; ω)‖_{H₁} ≤ (λ₁ + λK₁M) ‖x(t; ω) − y(t; ω)‖_{H₁}.

Using the condition of the theorem that λ₁ + λK₁M < 1, we can state that U_M is a contraction mapping on S. Now define the operator V_M from S into H₁ by

(V_M x)(t; ω) = ∫₀^M k₀(t, τ; ω) e(τ, x(τ; ω)) dτ.

From Condition (i) of the theorem and Lemma 4.1.2, the operator W_M is a completely continuous bounded operator from H₂ into H₁. The function e(t, x(t; ω)) is a continuous and bounded mapping from S into H₂ by Condition (iii). We may express the operator V_M as the composite W_M e, and therefore V_M is a completely continuous operator from S into H₁.
Now let x(t; ω), y(t; ω) ∈ S. Then

‖(U_M x)(t; ω) + (V_M y)(t; ω)‖_{H₁} = ‖h(t, x(t; ω)) + ∫₀ᵗ k(t, τ; ω) f(τ, x(τ; ω)) dτ + ∫₀^M k₀(t, τ; ω) e(τ, y(τ; ω)) dτ‖_{H₁}
≤ ‖h(t, x(t; ω))‖_{H₁} + K₁M ‖f(t, x(t; ω))‖_{H₂} + K₂M ‖e(t, y(t; ω))‖_{H₂}
≤ ‖h(t, x(t; ω))‖_{H₁} + λK₁M ‖x(t; ω)‖_{H₁} + K₁M ‖f(t, 0)‖_{H₂} + γK₂M

from (4.5.5) and Condition (iii). Then, from the last condition of the theorem and the fact that λK₁M ‖x(t; ω)‖_{H₁} ≤ λK₁M ρ, we have

‖(U_M x)(t; ω) + (V_M y)(t; ω)‖_{H₁} ≤ ρ(1 − λK₁M) + λK₁M ρ = ρ,

and hence

(U_M x)(t; ω) + (V_M y)(t; ω) ∈ S.
Therefore the conditions of the fixed-point theorem of Krasnosel'skii hold, and there exists at least one random solution of Eq. (4.5.3) for M = 1, 2, …, which completes the proof.

In the case that T_M is the null operator, we obtain the perturbed stochastic integral equation of the Fredholm type

x(t; ω) = h(t, x(t; ω)) + ∫₀^M k₀(t, τ; ω) e(τ, x(τ; ω)) dτ  (4.5.6)

for t ∈ [0, M], M = 1, 2, …. Hence we have the following corollary to Theorem 4.5.1.
Corollary 4.5.2 Consider the random integral equation (4.5.6) under the following conditions:

(i) H₁ and H₂ are Hilbert spaces stronger than C_c and the pair (H₂, H₁) is admissible with respect to the integral operator

(W_M x)(t; ω) = ∫₀^M k₀(t, τ; ω) x(τ; ω) dτ,  t ∈ [0, M],  M = 1, 2, …,

where k₀(t, τ; ω) behaves as described previously and ∫₀^∞ ∫₀^∞ |||k₀(t, τ; ω)|||² dτ dt exists and is finite. Also assume W_M is completely continuous.
(ii) x(t; ω) → e(t, x(t; ω)) is a continuous operator on

S = {x(t; ω) : x(t; ω) ∈ H₁, ‖x(t; ω)‖_{H₁} ≤ ρ}

for some ρ ≥ 0, with values in H₂, such that ‖e(t, x(t; ω))‖_{H₂} ≤ γ for some γ ≥ 0 a constant.

(iii) x(t; ω) → h(t, x(t; ω)) is a contraction on S.

Then there exists at least one bounded random solution of Eq. (4.5.6) provided that ‖h(t, x(t; ω))‖_{H₁} + γK₂M ≤ ρ.
PROOF All that is required is to show that under the given conditions (U_M x)(t; ω) is a contraction on S and that (U_M x)(t; ω) + (V_M y)(t; ω) ∈ S whenever x(t; ω) and y(t; ω) ∈ S. By definition,

(U_M x)(t; ω) = h(t, x(t; ω)) + ∫₀ᵗ k(t, τ; ω) f(τ, x(τ; ω)) dτ.

Under the assumption that T_M is the null operator, (U_M x)(t; ω) = h(t, x(t; ω)). Hence by Condition (iii), U_M is a contraction on S. The proof that under the last restriction of the theorem (U_M x)(t; ω) + (V_M y)(t; ω) ∈ S is analogous to that given in Theorem 4.5.1.
As M → ∞, the sequence of integral operators W_M from the Hilbert space H₂ into H₁ converges to the operator W defined by

(W x)(t; ω) = ∫₀^∞ k₀(t, τ; ω) x(τ; ω) dτ,  t ∈ R₊,

and W is a completely continuous operator from H₂ into H₁ under the same condition as in (i) of Corollary 4.5.2. Hence we have the following theorem.

Theorem 4.5.3 Consider the random integral equation (4.5.1) subject to the following conditions:

(i) H₁ and H₂ are Hilbert spaces stronger than C_c and the pair (H₂, H₁) is admissible with respect to the completely continuous integral operator

(W x)(t; ω) = ∫₀^∞ k₀(t, τ; ω) x(τ; ω) dτ,  t ∈ R₊,

where k₀(t, τ; ω) behaves as described previously and ∫₀^∞ ∫₀^∞ |||k₀(t, τ; ω)|||² dτ dt exists and is finite.

(ii) Same as Condition (ii) of Corollary 4.5.2.
(iii) Same as Condition (iii) of Corollary 4.5.2.
Then there exists at least one random solution of Eq. (4.5.1), bounded by ρ, provided

‖h(t, x(t; ω))‖_{H₁} + γK ≤ ρ,

where K is the norm of the operator W.

PROOF The proof follows the lines of that of Theorem 4.5.1.
We now consider the conditions under which the random integral equation (4.5.2) possesses a unique random solution. We shall use Banach's fixed-point theorem, which requires only the boundedness of e(t, 0) in the space B and not that ‖e(t, x(t; ω))‖_{H₂} ≤ γ. Also, we use the Banach spaces B and D rather than the Hilbert spaces H₁ and H₂.

Theorem 4.5.4 Suppose the random integral equation (4.5.2) satisfies the following:

(i) B and D are Banach spaces stronger than C_c such that (B, D) is admissible with respect to each of the operators

(T x)(t; ω) = ∫₀ᵗ k(t, τ; ω) x(τ; ω) dτ

and

(W x)(t; ω) = ∫₀^∞ k₀(t, τ; ω) x(τ; ω) dτ,  t ∈ R₊,

where k(t, τ; ω) and k₀(t, τ; ω) behave as before.

(ii) x(t; ω) → f(t, x(t; ω)) is an operator on

S = {x(t; ω) : x(t; ω) ∈ D, ‖x(t; ω)‖_D ≤ ρ}

with values in B satisfying

‖f(t, x(t; ω)) − f(t, y(t; ω))‖_B ≤ λ ‖x(t; ω) − y(t; ω)‖_D

for x(t; ω), y(t; ω) ∈ S and λ ≥ 0 a constant.

(iii) x(t; ω) → e(t, x(t; ω)) is an operator on S with values in B satisfying

‖e(t, x(t; ω)) − e(t, y(t; ω))‖_B ≤ ξ ‖x(t; ω) − y(t; ω)‖_D

for x(t; ω), y(t; ω) ∈ S and ξ ≥ 0 a constant.

(iv) x(t; ω) → h(t, x(t; ω)) is an operator on S with values in D satisfying

‖h(t, x(t; ω)) − h(t, y(t; ω))‖_D ≤ γ ‖x(t; ω) − y(t; ω)‖_D

for x(t; ω), y(t; ω) ∈ S and γ ≥ 0 a constant.

Then there exists a unique random solution of Eq. (4.5.2) provided

K₁λ + K₂ξ + γ < 1
and

‖h(t, x(t; ω))‖_D + K₁ ‖f(t, 0)‖_B + K₂ ‖e(t, 0)‖_B ≤ ρ(1 − λK₁ − ξK₂),

where K₁ and K₂ are the norms of T and W, respectively.

PROOF The operators T and W are continuous operators from B into D by the continuity assumption on k(t, τ; ω), the continuity and integrability assumptions on k₀(t, τ; ω), and Lemma 2.1.1. Therefore the operators T and W are bounded and their norms are finite. Let us define the operators U and V from S into D by
(U x)(t; ω) = h(t, x(t; ω)) + ∫₀ᵗ k(t, τ; ω) f(τ, x(τ; ω)) dτ

and

(V x)(t; ω) = ∫₀^∞ k₀(t, τ; ω) e(τ, x(τ; ω)) dτ,  t ≥ 0.

Since D is a Banach space, (U x)(t; ω) + (V x)(t; ω) ∈ D whenever (U x)(t; ω), (V x)(t; ω) ∈ D. We must show that (U x)(t; ω) + (V x)(t; ω) ∈ S whenever x(t; ω) ∈ S (inclusion property) and that U + V is a contraction operator on S. Consider another element y(t; ω) ∈ S.
Using the Lipschitz conditions (ii)–(iv) and the fact that T and W are bounded operators from B to D, we can write

‖[(U x)(t; ω) + (V x)(t; ω)] − [(U y)(t; ω) + (V y)(t; ω)]‖_D ≤ (γ + λK₁ + ξK₂) ‖x(t; ω) − y(t; ω)‖_D.

Since K₁λ + K₂ξ + γ < 1 by hypothesis, we have that U + V is a contraction operator on S. To show inclusion, let x(t; ω) ∈ S. Then, using inequalities similar to inequality (4.5.5), we obtain

‖(U x)(t; ω) + (V x)(t; ω)‖_D ≤ ‖h(t, x(t; ω))‖_D + λK₁ ‖x(t; ω)‖_D + K₁ ‖f(t, 0)‖_B + ξK₂ ‖x(t; ω)‖_D + K₂ ‖e(t, 0)‖_B.

Using the condition of the theorem that

‖h(t, x(t; ω))‖_D + K₁ ‖f(t, 0)‖_B + K₂ ‖e(t, 0)‖_B ≤ ρ(1 − λK₁ − ξK₂),

we have

‖(U x)(t; ω) + (V x)(t; ω)‖_D ≤ ρ(1 − λK₁ − ξK₂) + λK₁ρ + ξK₂ρ = ρ.

Thus (U x)(t; ω) + (V x)(t; ω) ∈ S.
Therefore, applying Banach's fixed-point theorem, there exists a unique random solution x(t; ω) ∈ S of (4.5.2), and the proof is complete.
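The Banach fixed-point construction in this proof can be illustrated numerically for a single realization ω of Eq. (4.5.2). Everything concrete below — the kernels k and k₀, the functions h, f, e, the truncation of the Fredholm integral to [0, 5], and the simple rectangle-rule quadrature — is an assumption made only for this sketch; the Lipschitz constants are chosen small enough that the map U + V is a contraction, as the theorem requires.

```python
import numpy as np

# Picard iteration x <- h(t, x) + (Volterra term) + (Fredholm term) for one
# realization of (4.5.2); all kernels/nonlinearities are illustrative choices.
T_end, N = 5.0, 200
t = np.linspace(0.0, T_end, N)
dt = t[1] - t[0]

h  = lambda s, x: 0.1 * np.sin(x)        # contraction in x (constant 0.1)
f  = lambda s, x: 0.2 * np.tanh(x)       # Lipschitz constant lambda = 0.2
e  = lambda s, x: 0.2 * np.cos(x)        # Lipschitz constant xi = 0.2
k  = lambda s, u: np.exp(-(s - u))       # Volterra kernel, used for u <= s
k0 = lambda s, u: np.exp(-s - u)         # integrable Fredholm kernel

x = np.zeros(N)                           # initial guess
for _ in range(100):
    x_new = np.empty(N)
    for i in range(N):
        volterra = dt * np.sum(k(t[i], t[:i + 1]) * f(t[:i + 1], x[:i + 1]))
        fredholm = dt * np.sum(k0(t[i], t) * e(t, x))
        x_new[i] = h(t[i], x[i]) + volterra + fredholm
    converged = np.max(np.abs(x_new - x)) < 1e-12
    x = x_new
    if converged:
        break
```

With these choices the combined contraction constant is roughly 0.5, so the iterates converge geometrically and the limit is (up to quadrature error) a fixed point of U + V, i.e. an approximate random solution of (4.5.2) for this realization.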
For the case that W is the null operator, we obtain the theorem recently studied by Milton and Tsokos [5]. For the case that T is the null operator, we obtain the following corollary to Theorem 4.5.4, which gives conditions under which the perturbed random Fredholm integral equation (4.5.1) possesses a unique random solution.

Corollary 4.5.5 Consider the random integral equation (4.5.1) subject to the following conditions:

(i) B and D are Banach spaces stronger than C_c such that the pair (B, D) is admissible with respect to the operator

(W x)(t; ω) = ∫₀^∞ k₀(t, τ; ω) x(τ; ω) dτ,  t ∈ R₊,

where k₀(t, τ; ω) behaves as described previously.

(ii) Same as Condition (iii) of Theorem 4.5.4.
(iii) Same as Condition (iv) of Theorem 4.5.4.

Then there exists a unique random solution of the random integral equation (4.5.1) provided that K₂ξ + γ < 1 and

‖h(t, x(t; ω))‖_D + K₂ ‖e(t, 0)‖_B ≤ ρ(1 − ξK₂),

where K₂ is the norm of the operator W.

We now present some useful special cases of Corollary 4.5.5 by taking as the Banach spaces B and D the spaces C_g and C.

Theorem 4.5.6 Consider the stochastic integral equation (4.5.1) subject to the following conditions:

(i) There exist a constant Z > 0 and a positive continuous function g(t), finite on R₊, such that

∫₀^∞ |||k₀(t, τ; ω)||| g(τ) dτ ≤ Z,  t ∈ R₊.
(ii) e(t, x) is continuous in t uniformly in x from R₊ × R into R such that |e(t, 0)| ≤ γ g(t) and

|e(t, x) − e(t, y)| ≤ ξ g(t) |x − y|

for γ ≥ 0 and ξ ≥ 0 constants.

(iii) h(t, x) is continuous in t uniformly in x from R₊ × R into R such that |h(t, 0)| ≤ α and

|h(t, x) − h(t, y)| ≤ λ |x − y|

for some λ ≥ 0 and α ≥ 0.

Then there exists a unique random solution x(t; ω) ∈ C of Eq. (4.5.1) such that ‖x(t; ω)‖_C ≤ ρ, provided that ξ, λ, γ, and ‖h(t, x(t; ω))‖_C are sufficiently small.

PROOF We must show that under the given conditions the pair (C_g, C) is admissible with respect to the integral operator

(W x)(t; ω) = ∫₀^∞ k₀(t, τ; ω) x(τ; ω) dτ,  t ∈ R₊.

Let x(t; ω) ∈ C_g. Then we have

‖(W x)(t; ω)‖_{L₂(Ω, 𝒜, P)} ≤ ∫₀^∞ |||k₀(t, τ; ω)||| ‖x(τ; ω)‖_{L₂(Ω, 𝒜, P)} dτ,

where |||k₀(t, τ; ω)||| is a function only of (t, τ). Using the definition of the norm in C_g, we have

‖(W x)(t; ω)‖_{L₂(Ω, 𝒜, P)} ≤ ‖x(t; ω)‖_{C_g} ∫₀^∞ |||k₀(t, τ; ω)||| g(τ) dτ ≤ ‖x(t; ω)‖_{C_g} Z

by Condition (i) of the theorem. Thus W is a bounded operator and (W x)(t; ω) ∈ C. Hence (C_g, C) is admissible with respect to W. It can be shown that Conditions (ii) are sufficient for e(t, x(t; ω)) to be in C_g for x(t; ω) ∈ S = {x(t; ω) ∈ C : ‖x(t; ω)‖_C ≤ ρ}, and that Conditions (iii) are sufficient for h(t, x(t; ω)) to be in C for x(t; ω) ∈ S. Therefore Corollary 4.5.5 applies with B = C_g and D = C, provided that ‖h(t, x(t; ω))‖_C, γ, and λ are small enough in the sense that ξK₂ + λ < 1 and

‖h(t, x(t; ω))‖_C + K₂γ ≤ ρ(1 − ξK₂),

where K₂ is the norm of the operator W.
For the special case g(t) = 1 for all t ∈ R₊ in Theorem 4.5.6, we obtain the following corollary.
Corollary 4.5.7 Consider the random integral equation (4.5.1) under the following conditions:

(i) ∫₀^∞ |||k₀(t, τ; ω)||| dτ ≤ Z, t ∈ R₊, where Z is some nonnegative constant.

(ii) e(t, x) is continuous in t uniformly in x from R₊ × R into R such that |e(t, 0)| ≤ γ and

|e(t, x) − e(t, y)| ≤ ξ |x − y|

for some constants γ ≥ 0 and ξ ≥ 0.

(iii) Same as Condition (iii) of Theorem 4.5.6.

Then there exists a unique random solution x(t; ω) ∈ C of Eq. (4.5.1) such that ‖x(t; ω)‖_C ≤ ρ, provided that ξ, λ, γ, and ‖h(t, x(t; ω))‖_C are sufficiently small.

Corollary 4.5.8 Assume that the random integral equation (4.5.1) satisfies the following conditions:

(i) |||k₀(t, τ; ω)||| ≤ A for all t, τ ∈ R₊, and ∫₀^∞ g(τ) dτ < ∞.

(ii) Same as Condition (ii) of Theorem 4.5.6.
(iii) Same as Condition (iii) of Theorem 4.5.6.

Then there exists a unique random solution x(t; ω) ∈ C of (4.5.1) such that ‖x(t; ω)‖_C ≤ ρ, provided that ξ, λ, γ, and ‖h(t, x(t; ω))‖_C are sufficiently small.

PROOF We need only to show that the pair of Banach spaces (C_g, C) is admissible with respect to the integral operator

(W x)(t; ω) = ∫₀^∞ k₀(t, τ; ω) x(τ; ω) dτ,  t ∈ R₊,

along with Condition (i) of the corollary. For x(t; ω) ∈ C_g we have

‖(W x)(t; ω)‖_{L₂(Ω, 𝒜, P)} ≤ ∫₀^∞ |||k₀(t, τ; ω)||| g(τ) [‖x(τ; ω)‖_{L₂(Ω, 𝒜, P)} / g(τ)] dτ.

From hypothesis (i) of the corollary we obtain

‖(W x)(t; ω)‖_{L₂(Ω, 𝒜, P)} ≤ A ‖x(t; ω)‖_{C_g} ∫₀^∞ g(τ) dτ < ∞  for all t ∈ R₊.

Thus (W x)(t; ω) ∈ C for x(t; ω) ∈ C_g, and (C_g, C) is admissible with respect to W. Since the other hypotheses are identical to those of Theorem 4.5.6, the proof is complete.
CHAPTER V

Random Discrete Fredholm and Volterra Systems

5.0 Introduction

In the previous chapter we investigated the random Fredholm integral equation (4.0.1), and in Chapter II we presented the theory and some applications concerning the stochastic integral equation of the Volterra type (2.0.1). We shall now study a discrete version of the random integral equation of the Fredholm type (4.0.1), which will be very useful for the application of an electronic computer in obtaining a realization of the random solution of the Fredholm equation in Chapter IV. Equation (4.0.1) may be "discretized" by replacing the integral with a sum of the functions evaluated at discrete points t₁, t₂, …, t_n, …, for example. We shall again utilize the concepts and theory of admissibility which were used in Chapters II and IV in order to show the existence and uniqueness of a random solution of the stochastic discrete Fredholm system

x_n(ω) = h_n(ω) + Σ_{j=1}^∞ c_{n,j}(ω) f_j(x_j(ω)),  n = 1, 2, ….  (5.0.1)
We shall also consider some asymptotic stochastic stability properties of the random solution of Eq. (5.0.1) and investigate the approximation of the random solution x_n(ω), n = 1, 2, …. The discrete version of the stochastic Volterra integral equation of the form (2.0.1) is a special case of the system (5.0.1); that is, when

c_{n,j}(ω) = 0,  j > n,  n = 1, 2, …,

we obtain the random discrete Volterra system

x_n(ω) = h_n(ω) + Σ_{j=1}^n c_{n,j}(ω) f_j(x_j(ω)),  n = 1, 2, ….  (5.0.2)
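The "discretization" just described — replacing the integral by a sum over the points t₁, t₂, … — can be sketched for one realization of the discrete Volterra system (5.0.2). Every concrete choice below (the kernel generating c_{n,j}(ω), the quadrature weight dt, the nonlinearity f_j, and the noise in h_n(ω)) is an assumption made only for illustration.

```python
import numpy as np

# One realization of x_n(w) = h_n(w) + sum_{j<=n} c_{n,j}(w) f_j(x_j(w)),
# with c_{n,j}(w) = k(t_n, t_j) * dt and c_{n,j}(w) = 0 for j > n, solved by
# marching forward in n; a short inner loop handles the implicit j = n term.
rng = np.random.default_rng(0)
N, dt = 50, 0.1
t = dt * np.arange(1, N + 1)
j_idx = np.arange(N)

h = 0.5 * np.cos(t) + 0.05 * rng.standard_normal(N)         # h_n(w), one draw
f = lambda j, x: 0.3 * np.tanh(x)                           # Lipschitz f_j
c = np.tril(np.exp(-np.abs(t[:, None] - t[None, :]))) * dt  # lower triangular

x = np.zeros(N)
for n in range(N):
    known = np.sum(c[n, :n] * f(j_idx[:n], x[:n]))           # terms with j < n
    xn = h[n]
    for _ in range(30):                                      # implicit j = n term
        xn = h[n] + known + c[n, n] * f(n, xn)
    x[n] = xn
```

Because c[n, n] times the Lipschitz constant of f is about 0.03 here, the inner fixed-point loop converges essentially to machine precision, so the computed x satisfies the full system x = h + C f(x) up to roundoff.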
The discrete version of the random Volterra integral equation that was presented in Section 3.1.3 is analogous to the system (5.0.2) whenever the numerical integration error term d^{(n)}(ω) is ignored. Some of the results presented here are stochastic versions of some results of Petrovanu [1].

5.1 Existence and Uniqueness of a Random Solution of System (5.0.1)
Let the spaces X, X_g, X_l, and X_{bv} be as defined in Chapter I, Section 1.2. Let B* and D* be Banach spaces contained in X, with the norm in B* denoted by ‖x‖_{B*} = ‖x_n(ω)‖_{B*} and the norm in D* denoted likewise. That is, x_n(ω) ∈ X is a function from N, the positive integers, into the space L₂(Ω, 𝒜, P), and X_g, X_l, and X_{bv} are Banach spaces contained in X. Hence the random functions in X are discrete-parameter second-order stochastic processes. Let T be a linear operator from X into itself. With respect to T and the Banach spaces B* and D*, we now state and prove a lemma analogous to Lemma 2.1.1.
Lemma 5.1.1 If T is a continuous operator from X into itself, B* and
D* are stronger than X , and the pair (B*,D*) is admissible with respect to T , then T is a continuous operator from B* to D*.
PROOF Suppose xⁱ ∈ B* such that xⁱ →^{B*} x, that is, xⁱ_n(ω) →^{B*} x_n(ω) as i → ∞. Assume that T xⁱ_n(ω) →^{D*} y_n(ω) as i → ∞. But T xⁱ_n(ω) →^X T x_n(ω), since T is continuous from X into itself, and xⁱ_n(ω) →^{B*} x_n(ω) implies that xⁱ_n(ω) →^X x_n(ω). However, T xⁱ_n(ω) →^{D*} y_n(ω) implies that T xⁱ_n(ω) →^X y_n(ω) as i → ∞. Hence T x_n(ω) = y_n(ω), because the limit in X is unique. Therefore T is closed, and by the closed graph theorem it follows that T is continuous from B* into D*, completing the proof.
If T is a continuous operator from the Banach space B* into D*, then it is bounded and, as before, there exists a constant K > 0 such that

‖T x_n(ω)‖_{D*} ≤ K ‖x_n(ω)‖_{B*}.

We make the following assumptions concerning the functions in the random system (5.0.1): The functions x_n(ω) and h_n(ω) are functions of n ∈ N with values in L₂(Ω, 𝒜, P). For each value of x_n(ω), n = 1, 2, …, f_n(x_n(ω)) is a scalar, and for each n = 1, 2, …, f_n(x_n(ω)) has values in the space L₂(Ω, 𝒜, P). For each n and j in N, c_{n,j}(ω) is assumed to be in L_∞(Ω, 𝒜, P), so that the product of c_{n,j}(ω) and f_j(x_j(ω)) will be in L₂(Ω, 𝒜, P). Also, for each n,

|||c_{n,j}(ω)||| = P-ess sup_{ω∈Ω} |c_{n,j}(ω)| = ‖c_{n,j}(ω)‖_{L_∞(Ω, 𝒜, P)},

and |||c_{n,j}(ω)||| · ‖x_j(ω)‖_{L₂(Ω, 𝒜, P)} is assumed to be summable with respect to j ∈ N. Consider the linear operator T defined by

T x_n(ω) = Σ_{j=1}^∞ c_{n,j}(ω) x_j(ω),  n = 1, 2, …,

for x_j(ω) in X. It may be shown that the operator T is continuous from the space X into itself by using an argument similar to that of Lemma 4.1.1. The following theorem gives conditions under which there exists a unique random solution of the system (5.0.1).
Theorem 5.1.2 Consider the random discrete equation (5.0.1) subject to the following conditions:

(i) B* and D* are Banach spaces stronger than X such that the pair (B*, D*) is admissible with respect to the linear operator

T x_n(ω) = Σ_{j=1}^∞ c_{n,j}(ω) x_j(ω),  n = 1, 2, …,

where c_{n,j}(ω) has the properties given previously.

(ii) x_n(ω) → f_n(x_n(ω)) is an operator on S = {x_n(ω) : x_n(ω) ∈ D*, ‖x_n(ω)‖_{D*} ≤ ρ} with values in B* satisfying

‖f_n(x_n(ω)) − f_n(y_n(ω))‖_{B*} ≤ λ ‖x_n(ω) − y_n(ω)‖_{D*}

for x_n(ω), y_n(ω) ∈ S and λ ≥ 0 a constant.

(iii) h_n(ω) ∈ D*.

Then there exists a unique random solution x_n(ω) ∈ S of the random discrete equation (5.0.1), provided that

λK < 1  and  ‖h_n(ω)‖_{D*} + K ‖f_n(0)‖_{B*} ≤ ρ(1 − λK),

where K is the norm of T.

PROOF Define the operator U from S into D* by

U x_n(ω) = h_n(ω) + Σ_{j=1}^∞ c_{n,j}(ω) f_j(x_j(ω)),  n = 1, 2, ….

As in Theorem 2.1.2, we show that U(S) ⊂ S and that U is a contraction operator on S; then Banach's fixed-point theorem applies. Let x_n(ω), y_n(ω) ∈ S. Then

‖U x_n(ω)‖_{D*} = ‖h_n(ω) + Σ_{j=1}^∞ c_{n,j}(ω) f_j(x_j(ω))‖_{D*} ≤ ‖h_n(ω)‖_{D*} + K ‖f_n(x_n(ω))‖_{B*}

from the result following Lemma 5.1.1 that T is a bounded linear operator from B* into D*. From Condition (ii) of the theorem,

‖f_n(x_n(ω))‖_{B*} ≤ ‖f_n(x_n(ω)) − f_n(0)‖_{B*} + ‖f_n(0)‖_{B*} ≤ λ ‖x_n(ω)‖_{D*} + ‖f_n(0)‖_{B*}.

Hence

‖U x_n(ω)‖_{D*} ≤ ‖h_n(ω)‖_{D*} + K ‖f_n(0)‖_{B*} + λK ‖x_n(ω)‖_{D*} ≤ ρ(1 − λK) + λKρ = ρ

from the last hypothesis of the theorem. Thus U(S) ⊂ S. Since the difference of elements of a Banach space is in the Banach space, U x_n(ω) − U y_n(ω) ∈ D* and

‖U x_n(ω) − U y_n(ω)‖_{D*} ≤ λK ‖x_n(ω) − y_n(ω)‖_{D*}

by Condition (ii). Since λK < 1, U is a contraction mapping on S. Therefore, applying the fixed-point theorem of Banach, there exists a unique random solution x_n(ω) ∈ S of Eq. (5.0.1), completing the proof.
5.2 Special Cases of Theorem 5.1.2

In this section we shall present some special cases of Theorem 5.1.2 which will be very useful in practice. We take as the spaces B* and D* the spaces X_g, X_l, or X_{bv}.

Theorem 5.2.1 Consider the random discrete equation (5.0.1) subject to the following conditions:

(i) There exist a constant Z > 0 and a positive sequence g_n, n = 1, 2, …, such that

Σ_{j=1}^∞ |||c_{n,j}(ω)||| g_j ≤ Z,  n = 1, 2, ….

(ii) f_n(x) is a function defined for n ∈ N and scalar x such that |f_n(0)| ≤ γ g_n and

|f_n(x) − f_n(y)| ≤ λ g_n |x − y|

for ‖x_n(ω)‖_{X_l}, ‖y_n(ω)‖_{X_l} ≤ ρ and λ ≥ 0 and γ ≥ 0 constants.

(iii) h_n(ω) ∈ X_l.

Then there exists a unique random solution x_n(ω) ∈ X_l of (5.0.1), provided that ‖h_n(ω)‖_{X_l}, γ, and λ are small enough.

PROOF If we show that the pair of Banach spaces (X_g, X_l) is admissible with respect to the linear operator

T x_n(ω) = Σ_{j=1}^∞ c_{n,j}(ω) x_j(ω),  n = 1, 2, …,  (5.2.1)

then the conclusion follows from Theorem 5.1.2 with B* = X_g and D* = X_l. Let x_n(ω) ∈ X_g. Taking the norm of both sides of Eq. (5.2.1), we have, by the generalized Minkowski inequality (Beckenbach and Bellman [1, p. 22]),

‖T x_n(ω)‖_{L₂(Ω, 𝒜, P)} ≤ Σ_{j=1}^∞ |||c_{n,j}(ω)||| [‖x_j(ω)‖_{L₂(Ω, 𝒜, P)} / g_j] g_j ≤ ‖x_n(ω)‖_{X_g} Σ_{j=1}^∞ |||c_{n,j}(ω)||| g_j

by the definition of the norm in X_g. Since the last sum on the right is less than or equal to Z by Condition (i), we have

‖T x_n(ω)‖_{L₂(Ω, 𝒜, P)} ≤ Z ‖x_n(ω)‖_{X_g},  n = 1, 2, ….
Hence T x_n(ω) is bounded for all n, and so by definition it is in X_l. Therefore (X_g, X_l) is admissible with respect to T. From Condition (ii) we have

{∫_Ω |f_n(x_n(ω)) − f_n(y_n(ω))|² dP(ω)}^{1/2} ≤ λ g_n {∫_Ω |x_n(ω) − y_n(ω)|² dP(ω)}^{1/2},

or

‖f_n(x_n(ω)) − f_n(y_n(ω))‖_{L₂(Ω, 𝒜, P)} ≤ λ g_n ‖x_n(ω) − y_n(ω)‖_{L₂(Ω, 𝒜, P)}.

This implies that

sup_n {‖f_n(x_n(ω)) − f_n(y_n(ω))‖_{L₂(Ω, 𝒜, P)} / g_n} ≤ λ sup_n ‖x_n(ω) − y_n(ω)‖_{L₂(Ω, 𝒜, P)},

which means, by definition, that

‖f_n(x_n(ω)) − f_n(y_n(ω))‖_{X_g} ≤ λ ‖x_n(ω) − y_n(ω)‖_{X_l}.

Likewise, ‖f_n(0)‖_{X_g} ≤ γ, and from Theorem 5.1.2 we have that there exists a unique random solution of Eq. (5.0.1), provided that ‖h_n(ω)‖_{X_l}, γ, and λ are small enough in the sense that

λZ* < 1,  ‖h_n(ω)‖_{X_l} + Z*γ ≤ ρ(1 − λZ*),
where Z* is the infimum of all constants Z > 0 that satisfy Condition (i), completing the proof.

For g_n = 1, n = 1, 2, …, we obtain the following corollary to Theorem 5.2.1.

Corollary 5.2.2 Consider the random equation (5.0.1) under the following conditions:

(i) There exists a constant Z > 0 such that

Σ_{j=1}^∞ |||c_{n,j}(ω)||| ≤ Z,  n = 1, 2, ….

(ii) f_n(x) is a function of n ∈ N and scalar x such that

|f_n(x) − f_n(y)| ≤ λ |x − y|

and |f_n(0)| ≤ γ for λ and γ constants.

(iii) h_n(ω) ∈ X_l.

Then there exists a unique random solution of Eq. (5.0.1), bounded for n ∈ N, provided that ‖h_n(ω)‖_{X_l}, γ, and λ are small enough.

We also have the following theorem as a special case of Theorem 5.1.2.

Theorem 5.2.3 Suppose that the random equation (5.0.1) satisfies the following conditions:
(i) There exists a Z > 0 such that

|||c_{n,j}(ω)||| ≤ Z,  n, j ∈ N,

and a positive sequence g_n, n = 1, 2, …, such that Σ_{n=1}^∞ g_n < ∞.

(ii) Same as Condition (ii) of Theorem 5.2.1.
(iii) Same as Condition (iii) of Theorem 5.2.1.

Then there exists a unique random solution of (5.0.1), bounded for n ∈ N, provided that ‖h_n(ω)‖_{X_l}, λ, and γ are small enough.

PROOF We need only to show that the pair of Banach spaces (X_g, X_l) is admissible with respect to the linear operator given by expression (5.2.1), along with Condition (i) of the theorem. Taking the norm of T x_n(ω) for x_n(ω) ∈ X_g, as in the proof of Theorem 5.2.1, we obtain

‖T x_n(ω)‖_{L₂(Ω, 𝒜, P)} ≤ Σ_{j=1}^∞ |||c_{n,j}(ω)||| [‖x_j(ω)‖_{L₂(Ω, 𝒜, P)} / g_j] g_j ≤ sup_n {‖x_n(ω)‖_{L₂(Ω, 𝒜, P)} / g_n} Σ_{j=1}^∞ |||c_{n,j}(ω)||| g_j.

But by the definition of the norm in X_g and Condition (i) of the theorem, we have

‖T x_n(ω)‖_{L₂(Ω, 𝒜, P)} ≤ ‖x_n(ω)‖_{X_g} Σ_{j=1}^∞ |||c_{n,j}(ω)||| g_j ≤ ‖x_n(ω)‖_{X_g} Z Σ_{j=1}^∞ g_j < ∞.

Thus T x_n(ω) is bounded from N into L₂(Ω, 𝒜, P), and by definition it is in X_l. Therefore (X_g, X_l) is admissible with respect to T. Since the other conditions are the same as those of Theorem 5.2.1, this completes the proof.

5.3
Stochastic Stability of the Random Solution

In the continuous case we examined the conditions under which the random solution x(t; ω) was stochastically asymptotically exponentially stable. We shall now consider the stochastic geometric stability of the random solution x_n(ω) of the stochastic discrete system (5.0.1), which is analogous to the stochastic asymptotic exponential stability in Chapters II and IV. Thus we state and prove the following discrete analog of Theorem 4.3.1.

Theorem 5.3.1 Suppose that the random equation (5.0.1) satisfies the following conditions:

(i) There exist constants Z > 0 and 0 < a < 1 such that, for all n, j ∈ N,

|||c_{n,j}(ω)||| ≤ Z a^{n+j}.

(ii) f_n(x) is defined for n ∈ N and scalar x such that

|f_n(0)| ≤ γ aⁿ,  n = 1, 2, …,

and

|f_n(x) − f_n(y)| ≤ λ |x − y|

for ‖x_n(ω)‖_{X_g}, ‖y_n(ω)‖_{X_g} ≤ ρ and λ and γ constants.

(iii) ‖h_n(ω)‖_{L₂(Ω, 𝒜, P)} ≤ β aⁿ, β > 0, n = 1, 2, ….

Then there exists a unique random solution x_n(ω) of Eq. (5.0.1) satisfying

{E[|x_n(ω)|²]}^{1/2} ≤ ρ aⁿ,

provided that β, λ, and γ are sufficiently small.

PROOF We must show that the pair (X_g, X_g) is admissible with respect to the linear operator

T x_n(ω) = Σ_{j=1}^∞ c_{n,j}(ω) x_j(ω),  n = 1, 2, …,

with g_n = aⁿ, n = 1, 2, …, and the given conditions. For x_n(ω) ∈ X_g, taking the norm of T x_n(ω), we obtain as before

‖T x_n(ω)‖_{L₂(Ω, 𝒜, P)} ≤ Σ_{j=1}^∞ |||c_{n,j}(ω)||| ‖x_j(ω)‖_{L₂(Ω, 𝒜, P)} ≤ ‖x_n(ω)‖_{X_g} Σ_{j=1}^∞ |||c_{n,j}(ω)||| a^j  (5.3.1)

by the definition of the norm in X_g. But from Condition (i) of the theorem we have

Σ_{j=1}^∞ |||c_{n,j}(ω)||| a^j ≤ Z Σ_{j=1}^∞ a^{n+j} a^j = Z aⁿ Σ_{j=1}^∞ (a²)^j = Z aⁿ {[1/(1 − a²)] − 1} = [Z a²/(1 − a²)] aⁿ.

Therefore from expression (5.3.1) we get

‖T x_n(ω)‖_{L₂(Ω, 𝒜, P)} ≤ ‖x_n(ω)‖_{X_g} [Z a²/(1 − a²)] aⁿ < ∞  (5.3.2)

since 0 < a < 1. Hence T x_n(ω) ∈ X_g by definition, and the pair (X_g, X_g) is admissible with respect to T. Thus Condition (i) of Theorem 5.1.2 is satisfied with B* = D* = X_g. From Condition (ii) we have that

‖f_n(x_n(ω)) − f_n(y_n(ω))‖_{L₂(Ω, 𝒜, P)} ≤ λ ‖x_n(ω) − y_n(ω)‖_{L₂(Ω, 𝒜, P)},

and hence

sup_n {‖f_n(x_n(ω)) − f_n(y_n(ω))‖_{L₂(Ω, 𝒜, P)} / aⁿ} ≤ λ sup_n {‖x_n(ω) − y_n(ω)‖_{L₂(Ω, 𝒜, P)} / aⁿ},

which means that

‖f_n(x_n(ω)) − f_n(y_n(ω))‖_{X_g} ≤ λ ‖x_n(ω) − y_n(ω)‖_{X_g}.

In a similar manner we have ‖f_n(0)‖_{X_g} ≤ γ, and from Condition (iii) we get ‖h_n(ω)‖_{X_g} ≤ β. Therefore all of the conditions of Theorem 5.1.2 are satisfied with B* = D* = X_g for g_n = aⁿ, n = 1, 2, …, and the conclusion holds, provided that β, λ, and γ are small enough in the sense that

λK < 1,  ‖h_n(ω)‖_{X_g} + K ‖f_n(0)‖_{X_g} ≤ ρ(1 − λK),  that is,  β + Kγ ≤ ρ(1 − λK),

where K is the norm of T. However, from inequality (5.3.2) we have that

‖T x_n(ω)‖_{X_g} ≤ [Z a²/(1 − a²)] ‖x_n(ω)‖_{X_g},

which implies that K = Z*a²/(1 − a²), where Z* is the infimum of all constants satisfying Condition (i) of the theorem and (5.3.2). Thus we must have β, λ, and γ small in the sense that λZ*a² < 1 − a² and

β + [Z*a²/(1 − a²)]γ ≤ ρ [1 − a²(1 + λZ*)]/(1 − a²),

completing the proof.
It follows that the random solution of the discrete system (5.0.1) is stochastically geometrically stable under the conditions of Theorem 5.3.1. That is,

{E[|x_n(ω)|²]}^{1/2} ≤ ρ aⁿ,  n = 1, 2, ….

Therefore, as n → ∞, we have

E[|x_n(ω)|²] → 0,

since 0 < a < 1. Hence from Jensen's inequality we have that the expected value of the absolute random solution approaches zero as n → ∞:

lim_{n→∞} E[|x_n(ω)|] = 0.
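The geometric decay of E[|x_n(ω)|²] can be checked numerically by Monte Carlo on a small system satisfying simplified, assumed versions of the conditions of Theorem 5.3.1 (a deterministic kernel bound c_{n,j} = Z a^{n+j}, f_n(0) = 0, and |h_n(ω)| ≤ aⁿ); none of these concrete choices come from the text.

```python
import numpy as np

# Monte-Carlo check of stochastic geometric stability: with |c_{n,j}| <= Z a^{n+j},
# f_n Lipschitz with f_n(0) = 0, and ||h_n|| <= a^n, the second moment
# E[|x_n|^2] should decay geometrically, i.e. E[|x_n|^2] / a^{2n} stays bounded.
rng = np.random.default_rng(1)
N, a, Z, lam = 30, 0.6, 0.4, 0.2
n_idx = np.arange(1, N + 1)
C = Z * a ** (n_idx[:, None] + n_idx[None, :])   # kernel bound, assumed data
f = lambda x: lam * np.tanh(x)                   # f_n(0) = 0, Lipschitz lam

reps = 1500
second_moment = np.zeros(N)
for _ in range(reps):
    h = a ** n_idx * rng.uniform(-1.0, 1.0)      # |h_n(w)| <= a^n
    x = np.zeros(N)
    for _ in range(40):                          # Picard iteration (contraction)
        x = h + C @ f(x)
    second_moment += x ** 2
second_moment /= reps

ratio = second_moment / a ** (2 * n_idx)         # should remain bounded in n
```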
5.4 An Approximation to System (5.0.1)

In this section we shall consider the random system with only a finite number m ∈ N of unknowns:

x_n^{(m)}(ω) = h_n(ω) + Σ_{j=1}^m c_{n,j}(ω) f_j(x_j^{(m)}(ω)) + Σ_{j=m+1}^∞ c_{n,j}(ω) f_j(0),  n = 1, 2, …, m,  (5.4.1)

with the convention x_n^{(m)}(ω) = 0 for n > m.

The following theorem gives conditions under which the system (5.4.1) possesses a unique random solution for each m, which converges to the random solution of (5.0.1) in the space X_{bv} as m → ∞.

Theorem 5.4.1 Consider the random system (5.4.1) subject to the following conditions:

(i) There exist a positive constant Z and a finite positive sequence g_n, n = 1, 2, …, such that

Σ_{j=1}^∞ |||c_{n,j}(ω)||| g_j ≤ Z,  n = 1, 2, ….

(ii) f_n(x) is a function of n ∈ N and scalar x such that |f_n(0)| ≤ γ g_n and

‖f_n(x_n(ω)) − f_n(y_n(ω))‖_{X_{bv}} ≤ λ g_n ‖x − y‖_{X_{bv}},  n = 1, 2, …,

for ‖x‖_{X_{bv}}, ‖y‖_{X_{bv}} ≤ ρ and λ a constant.

(iii) h_n(ω) ∈ X_{bv}.

Then there exists a unique random solution of (5.4.1) for each m, provided that ‖h‖_{X_{bv}}, λ, and γ are small enough. Also, if x_n(ω) is the random solution of (5.0.1), where c_{n,j}(ω) satisfies (i) and f_n(x) satisfies (ii), and x_n^{(m)}(ω) is the random solution of (5.4.1), then

lim_{m→∞} ‖x − x^{(m)}‖_{X_{bv}} = 0,

provided λZ* < 1, where Z* is the infimum of all constants satisfying Condition (i).

PROOF We must show that the pair of spaces (X_g, X_{bv}) is admissible with respect to the operator

T x_n(ω) = Σ_{j=1}^∞ c_{n,j}(ω) x_j(ω),  n = 1, 2, …,  (5.4.2)
II Txn(w)IIL2(R,dg,iP) Also, for i
=
m
1 Illcn,j~w)III[IIxj(w)IIL,(n,~,~)/gjIgj.
j= 1
1,2,. . .
/I Txi+1(0))-
Txi(w)llL2(R,.zt,iP)
m
C
=
II
G
1 IIIci+
j= 1
[ci+ 1 , j ( ~ ~ ~ci,j(w)Ixj(o)IIL,(n.d,iP) )
m
j= 1
l,j(w) -
~i,j~)lll[II~j(~)lI~~~~,~,~~/~jl~
5.4
A N APPROXIMATION TO SYSTEM (5.0.1)
I43
Hence
d
IIXfl(W)IIX,Z
(5.4.3)
by definition of the norm in X , and Condition (i). Therefore by definition Tx,(to) E x b v , and the pair ( X g ,Xbv)is admissible with respect to 7: From Condition (ii), as in the proof of Theorem 5.2.1, we have that llffl(O)lIxg d 1.'.
Also, from Condition (ii) we have
Hence all of the conditions of Theorem 5.1.2 are satisfied with B* = X, and D* = Xbv.Since from inequality (5.4.3) and the definition of the norm in Xb, we have
ll~xllxb\Q Z l l X f l ( ~ ) I l x g , there exists a unique random solution of the system (5.4.1), provided that llhllxbv, I , and y are small enough in the sense that AZ* < 1,
llhll,ybv + Z*y d ~ ( -1 IZ*),
where Z* is the infimum of all constants satisfying Condition (i) and (5.4.3).
For the other part of the conclusion, we consider, for fixed m, the difference of the random solutions of (5.0.1) and (5.4.1). Using Condition (ii) and an argument similar to that given previously, we obtain

‖x − x^{(m)}‖_{X_{bv}} ≤ λZ* ‖x − x^{(m)}‖_{X_{bv}} + Σ_{i=m+1}^∞ ‖x_{i+1}(ω) − x_i(ω)‖_{L₂(Ω, 𝒜, P)}  (5.4.4)

from Condition (i) of the theorem. Since λZ* < 1, we then have

‖x − x^{(m)}‖_{X_{bv}} ≤ [1/(1 − λZ*)] Σ_{i=m+1}^∞ ‖x_{i+1}(ω) − x_i(ω)‖_{L₂(Ω, 𝒜, P)} → 0

as m → ∞, since x ∈ X_{bv} implies the tail of the series Σ ‖x_{i+1} − x_i‖ tends to zero, completing the proof.
We now consider approximating the random solution x_n^{(m)}(ω) of the system (5.4.1) for each m = 1, 2, …. We may write the finite system (5.4.1) as

x_n^{(m)}(ω) = h_n(ω) + Σ_{j=1}^m c_{n,j}(ω) f_j(x_j^{(m)}(ω)) + Σ_{j=m+1}^∞ c_{n,j}(ω) f_j(0),  n = 1, 2, …, m,

with x_n^{(m)}(ω) = 0 otherwise. From this we see that if Σ_{j=m+1}^∞ c_{n,j}(ω) f_j(0) is small, then we may ignore it. Suppose that for fixed m we use the successive approximations x_n^{(m,i)}(ω) to the solution of (5.4.1), similar to those of Chapter III, as follows:

x_n^{(m,0)}(ω) = h_n(ω),

x_n^{(m,i+1)}(ω) = h_n(ω) + Σ_{j=1}^m c_{n,j}(ω) f_j(x_j^{(m,i)}(ω)) + Σ_{j=m+1}^∞ c_{n,j}(ω) f_j(0),  i = 0, 1, ….  (5.4.5)
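The successive approximations (5.4.5) are easy to realize on a computer. The sketch below uses assumed data only (a random nonnegative matrix for c_{n,j} with row sums Z* = 0.8, and f_j Lipschitz with constant λ = 0.5 and f_j(0) = 0, so the tail term vanishes); each sweep starts from x^{(0)} = h, and the sup-norm of successive updates should contract by a factor of at most λZ* = 0.4 per sweep, in line with the (λZ*)^i rate analyzed next.

```python
import numpy as np

# Successive approximations x^{(i+1)} = h + C f(x^{(i)}) for the truncated
# system; all data are illustrative assumptions, not taken from the text.
rng = np.random.default_rng(2)
m, lam = 40, 0.5
C = rng.uniform(0.0, 1.0, (m, m))
C *= 0.8 / C.sum(axis=1, keepdims=True)          # rows sum to Z* = 0.8
h = rng.standard_normal(m)
f = lambda x: lam * np.sin(x)                    # Lipschitz lam, f(0) = 0

x = h.copy()                                     # x^{(0)} = h
diffs = []
for _ in range(25):
    x_next = h + C @ f(x)
    diffs.append(np.max(np.abs(x_next - x)))     # sup-norm of the update
    x = x_next

# observed per-sweep contraction factors (computed while diffs are not tiny)
rates = [diffs[i + 1] / diffs[i] for i in range(len(diffs) - 1) if diffs[i] > 1e-6]
```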
The rate of convergence of the sequence of random variables (5.4.5) will now be investigated under the conditions of Theorem 5.4.1. By definition of the norm in X_{bv}, we have, for i = 1, 2, …,

‖x^{(m,i+1)} − x^{(m,i)}‖_{X_{bv}} = ‖x_1^{(m,i+1)}(ω) − x_1^{(m,i)}(ω)‖_{L₂(Ω, 𝒜, P)} + Σ_{n=1}^m ‖[x_{n+1}^{(m,i+1)}(ω) − x_{n+1}^{(m,i)}(ω)] − [x_n^{(m,i+1)}(ω) − x_n^{(m,i)}(ω)]‖_{L₂(Ω, 𝒜, P)},  (5.4.6)

since x_n^{(m,i)}(ω) = 0 for n > m. But from (5.4.5),
‖x_n^{(m,i+1)}(ω) − x_n^{(m,i)}(ω)‖_{L₂(Ω, 𝒜, P)} ≤ Σ_{j=1}^m |||c_{n,j}(ω)||| ‖f_j(x_j^{(m,i)}(ω)) − f_j(x_j^{(m,i−1)}(ω))‖_{L₂(Ω, 𝒜, P)} ≤ λ ‖x^{(m,i)} − x^{(m,i−1)}‖_{X_{bv}} Σ_{j=1}^m |||c_{n,j}(ω)||| g_j

from Condition (ii) of Theorem 5.4.1. Hence we have from Eq. (5.4.6) and these inequalities that

‖x^{(m,i+1)} − x^{(m,i)}‖_{X_{bv}} ≤ λ ‖x^{(m,i)} − x^{(m,i−1)}‖_{X_{bv}} [Σ_{j=1}^m |||c_{1,j}(ω)||| g_j + Σ_{k=1}^m Σ_{j=1}^m |||c_{k+1,j}(ω) − c_{k,j}(ω)||| g_j].

The last expression in brackets is less than or equal to Z* by Condition (i) of the theorem, and hence

‖x^{(m,i+1)} − x^{(m,i)}‖_{X_{bv}} ≤ (λZ*) ‖x^{(m,i)} − x^{(m,i−1)}‖_{X_{bv}}.

Repeating this argument i − 1 times, we obtain

‖x^{(m,i+1)} − x^{(m,i)}‖_{X_{bv}} ≤ (λZ*)^i ‖x^{(m,1)} − x^{(m,0)}‖_{X_{bv}}.

Since λZ* < 1 in Theorem 5.4.1, ‖x^{(m,i+1)} − x^{(m,i)}‖_{X_{bv}} → 0 as i → ∞, m ∈ N fixed. Hence the sequence of successive approximations in (5.4.5) converges at a rate proportional to (λZ*)^i. Now, to investigate the error of approximating x_n^{(m)}(ω) by x_n^{(m,i)}(ω), we consider ‖x^{(m)} − x^{(m,i+1)}‖_{X_{bv}}.
consider
By the same kind of argument as given previously, we have
148
v
RANDOM DISCRETE FREDHOLM AND VOLTERRA SYSTEMS
from Condition (ii) of Theorem 5.4.1. Hence we get m
1) X(m) - X y q Xb" < 3, (1 X(m) - X!m) 1 IIxbv 1 IIIcl,j(W)IIIgj t-
j= 1
c m
m
+ 3LllX(m) - x!"', IIXbv k = l j 1 [lick+ =l d
(3.Z*)llX'"'
- xjm)l
- ck,j(o)lllgj
11 X h r .
Repeating this argument i - 1 times, we obtain that
- Xjm)llXbv < ( i Z * ) i l ( X ( m ) - X g q X b Y + 0 for fixed rn = 1 , 2 , . . . . Hence the error of approximating xim)(o)by IIX(m)
as i + 00 the ith successive approximation is less than (RZ*)'times a positive constant. The approximation enables one to apply the electronic digital computer to obtain realizations of the random solution of Eq. (5.4.1) for each rn and therefore to obtain an approximation to a realization of the random solution of Eq. (5.0.1).Combining the results with the conclusion of Theorem 5.4.1, we have that Ilx - X:m)llXh" d IIX - X(m)llXh, IIX(m) - X$m)JIXb"2 0
+
as $i \to \infty$ and $m \to \infty$ simultaneously.

As a final remark, the results also apply to the random discrete Volterra system (5.0.2), since it is a special case of the random discrete Fredholm system (5.0.1). Also, since the discrete system in Chapter III is a special case of system (5.0.2) when the error of approximating the integral is zero (that is, the functions are zero except on the discrete set of points $t_1 < t_2 < \cdots < t_n < \cdots$), the results obtained in this chapter apply there.

5.5 Application to Stochastic Control Systems
In this section we will present applications of the random discrete Volterra equation (5.0.2) to stochastic discrete control systems. Such systems occur when the input and output are observed or obtained only at discrete times. They are also useful in approximating continuous processes so that digital computer methods can be applied to control theory.
5.5.1 A Discrete Stochastic System

Consider the nonlinear differential system with random parameters of the form

$$\dot{x}(t;\omega) = A(\omega)x(t;\omega) + b(\omega)\phi(\sigma(t;\omega)) \qquad (5.5.1)$$
with

$$\sigma(t;\omega) = \langle q(t;\omega), x(t;\omega)\rangle, \qquad (5.5.2)$$

where $A(\omega)$ is an $m \times m$ matrix whose elements are measurable functions, $x(t;\omega)$ and $q(t;\omega)$ are $m \times 1$ vectors of random variables, $b(\omega)$ is an $m \times 1$ vector whose elements are measurable functions, $\sigma(t;\omega)$ is a scalar random variable for $t \in R_+$, $\phi(\sigma)$ is a scalar for each value $\sigma$, and $\langle\cdot,\cdot\rangle$ is the scalar product in Euclidean space. The system (5.5.1)–(5.5.2) has been studied by Tsokos [5] with respect to its stochastic absolute stability using a generalized version of Popov's frequency response method.

It is the aim of this section to investigate the discrete analog of system (5.5.1)–(5.5.2). We shall study the existence and uniqueness of a random solution of the discrete system with random parameters whose state at time $n = 1, 2, \ldots$ is the random variable $x_n(\omega)$. We will also give conditions which guarantee that the unique random solution is stochastically geometrically stable using Theorem 5.1.2. The discrete system to be studied is given by

$$x_{n+1}(\omega) - x_n(\omega) = A(\omega)x_n(\omega) + b(\omega)\phi(\sigma_n(\omega)) \qquad (5.5.3)$$
with

$$\sigma_n(\omega) = \langle q_n(\omega), x_n(\omega)\rangle. \qquad (5.5.4)$$
We may write Eq. (5.5.3) as

$$x_{n+1}(\omega) = [A(\omega) + I]x_n(\omega) + b(\omega)\phi(\sigma_n(\omega)) = B(\omega)x_n(\omega) + b(\omega)\phi(\sigma_n(\omega)),$$

where $I$ is the $m \times m$ identity matrix, $A(\omega)$ and $b(\omega)$ are as defined previously, $q_n(\omega)$ and $x_n(\omega)$ are $m \times 1$ vectors of random variables, and $\sigma_n(\omega)$ is a scalar random variable for $n = 1, 2, \ldots$. We may reduce the system (5.5.3)–(5.5.4) to a stochastic discrete Volterra equation in the form of Eq. (5.0.2). For $n = 1$ we have

$$x_2(\omega) = B(\omega)x_1(\omega) + b(\omega)\phi(\sigma_1(\omega)).$$

It may be shown by induction (Hildebrand [1, pp. 73-74]) that

$$x_n(\omega) = B^{n-1}(\omega)x_1(\omega) + \sum_{k=1}^{n-1} B^{k-1}(\omega)b(\omega)\phi(\sigma_{n-k}(\omega)), \qquad n > 1.$$
Substituting this expression for $x_n(\omega)$ into Eq. (5.5.4), we obtain

$$\sigma_n(\omega) = q_n^T(\omega)B^{n-1}(\omega)x_1(\omega) + \sum_{k=1}^{n-1} q_n^T(\omega)B^{k-1}(\omega)b(\omega)\phi(\sigma_{n-k}(\omega)),$$

where $T$ denotes the transpose. Letting $n - k = j$, this expression becomes

$$\sigma_n(\omega) = q_n^T(\omega)B^{n-1}(\omega)x_1(\omega) + \sum_{j=1}^{n-1} q_n^T(\omega)B^{n-j-1}(\omega)b(\omega)\phi(\sigma_j(\omega)), \qquad (5.5.5)$$
which is in the form of Eq. (5.0.2) with

$$h_n(\omega) = q_n^T(\omega)B^{n-1}(\omega)x_1(\omega), \qquad c_{n,j}(\omega) = q_n^T(\omega)B^{n-j-1}(\omega)b(\omega),$$

$B(\omega) = A(\omega) + I$, and $f_j = \phi$, $j = 1, 2, \ldots, n$.
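This reduction can be checked numerically for one realization $\omega$: running the recursion $x_{n+1}(\omega) = B(\omega)x_n(\omega) + b(\omega)\phi(\sigma_n(\omega))$ and evaluating the Volterra form (5.5.5) must give the same $\sigma_n(\omega)$. The sketch below does this with illustrative, purely hypothetical choices of $B$, $b$, $q_n$, and $\phi$.

```python
import math

def matvec(M, v):
    # matrix-vector product for lists of lists
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def power_apply(B, v, p):
    # compute B^p v by repeated multiplication
    for _ in range(p):
        v = matvec(B, v)
    return v

def simulate(B, b, q, x1, phi, N):
    """Run x_{n+1} = B x_n + b*phi(sigma_n), sigma_n = <q_n, x_n>;
    return sigma_1, ..., sigma_N (q is a list of vectors, q[n-1] = q_n)."""
    x, sigmas = list(x1), []
    for n in range(N):
        s = dot(q[n], x)
        sigmas.append(s)
        bx = matvec(B, x)
        x = [bx[i] + b[i] * phi(s) for i in range(len(x))]
    return sigmas

def sigma_from_5_5_5(B, b, q, x1, phi, sigmas, n):
    """sigma_n via (5.5.5): q_n^T B^{n-1} x_1 + sum_{j<n} q_n^T B^{n-j-1} b phi(sigma_j)."""
    total = dot(q[n - 1], power_apply(B, list(x1), n - 1))
    for j in range(1, n):
        total += dot(q[n - 1], power_apply(B, list(b), n - j - 1)) * phi(sigmas[j - 1])
    return total
```

Agreement of the two computations confirms the induction step and the change of summation index above.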
We will now show that Eq. (5.5.5) has a unique random solution which is stochastically geometrically stable under certain conditions.
Theorem 5.5.1 Suppose that the random equation (5.5.5) satisfies the following conditions:

(i) There exist constants $Z > 0$ and $0 < u < 1$ such that

$$\sum_{j=1}^{n-1} |||q_n^T(\omega)B^{n-j-1}(\omega)b(\omega)|||\, u^j \le Z u^n, \qquad n = 1, 2, \ldots;$$

(ii) $\phi(\sigma)$ satisfies $\phi(0) = 0$, and

$$|\phi(\sigma_n(\omega)) - \phi(y_n(\omega))| \le \lambda|\sigma_n(\omega) - y_n(\omega)|$$

for $\|\sigma_n(\omega)\|_{X_u}$ and $\|y_n(\omega)\|_{X_u} \le \rho$ and $\lambda$ a constant;

(iii) $\|q_n^T(\omega)B^{n-1}(\omega)x_1(\omega)\|_{L_2(\Omega,\mathcal{A},P)} \le \rho u^n$, $\rho > 0$, $n = 1, 2, \ldots$.

Then there exists a unique random solution $\sigma_n(\omega)$ of Eq. (5.5.5) satisfying

$$\{E|\sigma_n(\omega)|^2\}^{1/2} \le \rho u^n, \qquad n = 1, 2, \ldots,$$

provided that $\rho$ and $\lambda$ are sufficiently small.
PROOF We must show that the pair $(X_u, X_u)$ is admissible with respect to the linear operator $W$ given by

$$(W\sigma_n)(\omega) = \sum_{j=1}^{n-1} q_n^T(\omega)B^{n-j-1}(\omega)b(\omega)\sigma_j(\omega) \qquad (5.5.6)$$
for $n = 1, 2, \ldots$, with $g_n = u^n$, $n = 1, 2, \ldots$, and Condition (i). For $\sigma_n(\omega) \in X_u$, taking the norm of both sides of (5.5.6), we have

$$\|(W\sigma_n)(\omega)\|_{L_2(\Omega,\mathcal{A},P)} \le \sum_{j=1}^{n-1}|||q_n^T(\omega)B^{n-j-1}(\omega)b(\omega)|||\,\|\sigma_j(\omega)\|_{L_2(\Omega,\mathcal{A},P)} \le \|\sigma_n(\omega)\|_{X_u}\sum_{j=1}^{n-1}|||q_n^T(\omega)B^{n-j-1}(\omega)b(\omega)|||\, u^j \le Z u^n\,\|\sigma_n(\omega)\|_{X_u}$$

from the definition of the norm in $X_u$ and Condition (i) of the theorem. Hence $W\sigma_n(\omega) \in X_u$ for $\sigma_n(\omega) \in X_u$; that is, $(X_u, X_u)$ is admissible with respect to $W$. From Condition (ii) we have that

$$\sup_n\{\|\phi(\sigma_n(\omega)) - \phi(y_n(\omega))\|/u^n\} \le \lambda\sup_n\{\|\sigma_n(\omega) - y_n(\omega)\|/u^n\},$$

or

$$\|\phi(\sigma_n(\omega)) - \phi(y_n(\omega))\|_{X_u} \le \lambda\|\sigma_n(\omega) - y_n(\omega)\|_{X_u}.$$

Likewise, from Condition (iii) we have that the stochastic free term is in the space $X_u$. Therefore all conditions of Theorem 5.1.2 are satisfied with $B^* = D^* = X_u$, and there exists a unique random solution of Eq. (5.5.5) which satisfies

$$\{E|\sigma_n(\omega)|^2\}^{1/2} \le \rho u^n, \qquad n = 1, 2, \ldots,$$

provided that $\rho$ and $\lambda$ are small enough, completing the proof.

Thus from Theorem 5.5.1, $E\{|\sigma_n(\omega)|^2\} \to 0$ as $n \to \infty$, and hence from Eq. (5.5.4) and Jensen's inequality

$$E^2\{|\sigma_n(\omega)|\} \le E\{|\langle q_n(\omega), x_n(\omega)\rangle|^2\} \to 0$$

as $n \to \infty$. That is, $E\{|q_n^T(\omega)x_n(\omega)|\} = E\{|\sigma_n(\omega)|\} \to 0$ as $n \to \infty$, and $E\{|\phi(\sigma_n(\omega))|\} \to 0$ as $n \to \infty$, since $\phi(0) = 0$ and Condition (ii) of the theorem holds.
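Stochastic geometric stability of the kind asserted in Theorem 5.5.1 can be observed by Monte Carlo simulation: draw realizations $\omega$, run the discrete system, and average $|\sigma_n(\omega)|^2$. The sketch below does so for a hypothetical two-dimensional system; the distributions of $A(\omega)$ and $b(\omega)$, the vectors $q_n$, and the nonlinearity $\phi$ are all illustrative assumptions, chosen so that $B(\omega) = A(\omega) + I$ is stable and $\phi$ has a small Lipschitz constant.

```python
import random, math

def sample_sigma_squared(num_paths, num_steps, seed=0):
    """Monte Carlo estimate of E|sigma_n|^2, n = 1..num_steps, for the system
    x_{n+1}(w) = B(w) x_n(w) + b(w) phi(sigma_n(w)), sigma_n(w) = <q_n, x_n(w)>.
    All distributions here are illustrative choices, not taken from the text."""
    rng = random.Random(seed)
    phi = lambda s: 0.1 * math.tanh(s)        # phi(0) = 0, Lipschitz constant 0.1
    totals = [0.0] * num_steps
    for _ in range(num_paths):
        # one realization w: a stable diagonal B(w) = A(w) + I and a bounded b(w)
        B = [[rng.uniform(0.3, 0.5), 0.0], [0.0, rng.uniform(0.3, 0.5)]]
        b = [rng.uniform(-0.2, 0.2), rng.uniform(-0.2, 0.2)]
        x = [1.0, 1.0]                         # deterministic initial state x_1
        for n in range(num_steps):
            s = x[0] + x[1]                    # q_n = (1, 1) for every n
            totals[n] += s * s
            x = [B[i][i] * x[i] + b[i] * phi(s) for i in range(2)]
    return [t / num_paths for t in totals]
```

With these choices the state contracts by roughly a factor of one half per step, so the sample second moment of $\sigma_n$ decays geometrically, in line with the bound $\{E|\sigma_n(\omega)|^2\}^{1/2} \le \rho u^n$.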
5.5.2 Another Discrete Stochastic System†

Consider the nonlinear differential system with random parameters of the form

$$\dot{x}(t;\omega) = A(\omega)x(t;\omega) + b(\omega)\phi(\sigma(t;\omega)) \qquad (5.5.7)$$

with

$$\sigma(t;\omega) = e(t;\omega) + \int_0^t \langle q(t-\tau;\omega), x(\tau;\omega)\rangle\, d\tau, \qquad (5.5.8)$$
where $A(\omega)$ is an $m \times m$ matrix whose elements are measurable functions, $x(t;\omega)$ and $q(t;\omega)$ are $m \times 1$ vectors whose elements are random variables, $b(\omega)$ is an $m \times 1$ vector whose elements are measurable functions, $\sigma(t;\omega)$ and $e(t;\omega)$ are scalar random variables for $t \in R_+$, $\phi(\sigma)$ is a scalar for each value $\sigma$, and $\langle\cdot,\cdot\rangle$ denotes the scalar product in Euclidean space. The system (5.5.7)–(5.5.8) has been studied by Tsokos [2] with respect to its stochastic absolute stability using a generalized version of Popov's frequency response method.

It is the aim of this section to study a discrete version of the system (5.5.7)–(5.5.8). We shall investigate the existence of a unique random solution of the discrete system with random parameters whose state at time $n = 1, 2, \ldots$ is the random variable $x_n(\omega)$. We will also give conditions which guarantee that the random solution is stochastically geometrically stable using Theorem 5.1.2. The discrete version of system (5.5.7)–(5.5.8) to be studied is given by

$$x_{n+1}(\omega) - x_n(\omega) = A(\omega)x_n(\omega) + b(\omega)\phi(\sigma_n(\omega)) \qquad (5.5.9)$$

with

$$\sigma_n(\omega) = e_n(\omega) + \sum_{j=1}^{n}\langle q_{n-j+1}(\omega), x_j(\omega)\rangle \qquad (5.5.10)$$

for $n = 1, 2, \ldots$. We may write Eq. (5.5.9) as

$$x_{n+1}(\omega) = [A(\omega) + I]x_n(\omega) + b(\omega)\phi(\sigma_n(\omega)) = B(\omega)x_n(\omega) + b(\omega)\phi(\sigma_n(\omega)),$$

where $I$ is the $m \times m$ identity matrix, $B(\omega) = A(\omega) + I$, $A(\omega)$ and $b(\omega)$ are as defined previously, $q_n(\omega)$ and $x_n(\omega)$ are $m \times 1$ vectors of random variables, and $\sigma_n(\omega)$ and $e_n(\omega)$ are scalar random variables for $n = 1, 2, \ldots$. We may reduce the system (5.5.9)–(5.5.10) to a stochastic discrete Volterra equation in the form of Eq. (5.0.2). For $n = 1$ we have

$$x_2(\omega) = B(\omega)x_1(\omega) + b(\omega)\phi(\sigma_1(\omega)).$$
† Section 5.5.2 adapted from Padgett and Tsokos [16] with permission of Taylor and Francis, Ltd.
It may be shown by induction (Hildebrand [1, pp. 73-74]) that

$$x_n(\omega) = B^{n-1}(\omega)x_1(\omega) + \sum_{k=1}^{n-1} B^{k-1}(\omega)b(\omega)\phi(\sigma_{n-k}(\omega))$$

for $n > 1$. Letting $n - k = j$, we obtain

$$x_n(\omega) = B^{n-1}(\omega)x_1(\omega) + \sum_{j=1}^{n-1} B^{n-j-1}(\omega)b(\omega)\phi(\sigma_j(\omega)).$$
Substituting into (5.5.10), we have that

$$\sigma_n(\omega) = e_n(\omega) + \sum_{j=1}^{n} q_{n-j+1}^T(\omega)B^{j-1}(\omega)x_1(\omega) + \sum_{j=1}^{n} q_{n-j+1}^T(\omega)\sum_{k=1}^{j-1} B^{j-k-1}(\omega)b(\omega)\phi(\sigma_k(\omega)), \qquad (5.5.11)$$
where $T$ denotes the transpose and the sums are zero whenever the upper limit is less than one. In the second sum we may interchange the order of summation and change the variable of summation to get

$$\sum_{j=1}^{n} q_{n-j+1}^T(\omega)\sum_{k=1}^{j-1} B^{j-k-1}(\omega)b(\omega)\phi(\sigma_k(\omega)) = \sum_{k=1}^{n-1}\Big[\sum_{j=k+1}^{n} q_{n-j+1}^T(\omega)B^{j-k-1}(\omega)b(\omega)\Big]\phi(\sigma_k(\omega)).$$

Let $j - k = i$ in the inside sum. Then

$$c_{n,k}(\omega) = \begin{cases}\displaystyle\sum_{i=1}^{n-k} q_{n-k-i+1}^T(\omega)B^{i-1}(\omega)b(\omega), & 1 \le k \le n-1,\\[2mm] 0, & \text{otherwise},\end{cases} \qquad (5.5.12)$$

which is of the form (5.0.2) with $f_j = \phi$, $j = 1, 2, \ldots$.
We will now present a theorem that gives conditions under which the stochastic equation (5.5.11) possesses a unique random solution that is stochastically geometrically stable.

Theorem 5.5.2 Suppose that the random equation (5.5.11) satisfies the following conditions:

(i) There exist constants $Z > 0$ and $0 < u < 1$ such that

$$\sum_{j=1}^{n-1} |||c_{n,j}(\omega)|||\, u^j \le Z u^n, \qquad n = 1, 2, \ldots;$$

(ii) $\phi(\sigma)$ satisfies $\phi(0) = 0$ and

$$|\phi(x_n(\omega)) - \phi(y_n(\omega))| \le \lambda|x_n(\omega) - y_n(\omega)|$$

for $\|x_n(\omega)\|_{X_u}$ and $\|y_n(\omega)\|_{X_u} \le \rho$ and $\lambda$ a constant;

(iii)

$$\Big\|e_n(\omega) + \sum_{j=1}^{n} q_{n-j+1}^T(\omega)B^{j-1}(\omega)x_1(\omega)\Big\|_{L_2(\Omega,\mathcal{A},P)} \le \rho u^n, \qquad \rho > 0, \quad n = 1, 2, \ldots.$$

Then there exists a unique random solution $\sigma_n(\omega)$ of Eq. (5.5.11) satisfying

$$\{E|\sigma_n(\omega)|^2\}^{1/2} \le \rho u^n, \qquad n = 1, 2, \ldots,$$

provided that $\rho$ and $\lambda$ are sufficiently small.
PROOF We must show that the pair $(X_u, X_u)$ is admissible with respect to the linear operator

$$(W\sigma_n)(\omega) = \sum_{j=1}^{n-1} c_{n,j}(\omega)\sigma_j(\omega), \qquad (5.5.13)$$

where $c_{n,j}(\omega)$ is given by expression (5.5.12), with $g_n = u^n$, $n = 1, 2, \ldots$, and Condition (i). For $\sigma_n(\omega) \in X_u$, taking the norm of both sides of (5.5.13), we have

$$\|(W\sigma_n)(\omega)\|_{L_2(\Omega,\mathcal{A},P)} \le \sum_{j=1}^{n-1}|||c_{n,j}(\omega)|||\,\|\sigma_j(\omega)\|_{L_2(\Omega,\mathcal{A},P)} \le \|\sigma_n(\omega)\|_{X_u}\sum_{j=1}^{n-1}|||c_{n,j}(\omega)|||\, u^j \le Z u^n\,\|\sigma_n(\omega)\|_{X_u}$$
by definition of the norm in $X_u$ and Condition (i) of the theorem. Hence $W\sigma_n(\omega) \in X_u$ for all $\sigma_n(\omega) \in X_u$; that is, $(X_u, X_u)$ is admissible with respect to $W$.

From Condition (ii) we have that

$$\sup_n\{\|\phi(x_n(\omega)) - \phi(y_n(\omega))\|/u^n\} \le \lambda\sup_n\{\|x_n(\omega) - y_n(\omega)\|/u^n\},$$

or $\|\phi(x_n(\omega)) - \phi(y_n(\omega))\|_{X_u} \le \lambda\|x_n(\omega) - y_n(\omega)\|_{X_u}$ for $x_n(\omega), y_n(\omega)$ in the set $S$ of Theorem 5.1.2 with $D^* = X_u$. Likewise, Condition (iii) implies that the stochastic free term is in $X_u$. Therefore all conditions of Theorem 5.1.2 are satisfied with $D^* = B^* = X_u$, and there exists a unique random solution of (5.5.11) which satisfies

$$\{E|\sigma_n(\omega)|^2\}^{1/2} \le \rho u^n, \qquad n = 1, 2, \ldots,$$

provided that $\lambda$ and $\rho$ are small enough, completing the proof.

The foregoing result implies that as $n \to \infty$,

$$E\{|\sigma_n(\omega)|^2\} \to 0,$$

and hence that $E\{|\sigma_n(\omega)|\}$ and $E\{|\phi(\sigma_n(\omega))|\}$ approach zero as $n \to \infty$, from Condition (ii).
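The interchange of summation that produces the kernel (5.5.12) is purely algebraic, so it can be verified numerically: for any data, the double sum in (5.5.11) must equal $\sum_{k=1}^{n-1} c_{n,k}(\omega)\phi(\sigma_k(\omega))$. The sketch below checks this identity with random illustrative inputs (all names and values are hypothetical).

```python
import random

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def power_apply(B, v, p):
    # compute B^p v by repeated multiplication
    for _ in range(p):
        v = matvec(B, v)
    return v

def double_sum(q, B, b, phis, n):
    """sum_{j=1}^{n} q_{n-j+1}^T sum_{k=1}^{j-1} B^{j-k-1} b phi_k (1-based;
    q[m-1] stands for q_m and phis[k-1] for phi(sigma_k))."""
    total = 0.0
    for j in range(1, n + 1):
        for k in range(1, j):
            total += dot(q[n - j], power_apply(B, list(b), j - k - 1)) * phis[k - 1]
    return total

def c_coeff(q, B, b, n, k):
    """c_{n,k} from (5.5.12): sum_{i=1}^{n-k} q_{n-k-i+1}^T B^{i-1} b."""
    return sum(dot(q[n - k - i], power_apply(B, list(b), i - 1))
               for i in range(1, n - k + 1))
```

Since both sides sum exactly the same products in a different order, they agree to floating-point precision.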
CHAPTER VI
Nonlinear Perturbed Random Integral Equations and Application to Biological Systems
6.0
Introduction
In this chapter we shall study a nonlinear perturbed random integral equation of the Volterra type of the form

$$x(t;\omega) = h(t, x(t;\omega)) + \int_0^t k(t,\tau;\omega)f(\tau, x(\tau;\omega))\, d\tau, \qquad t \ge 0, \qquad (6.0.1)$$

which has recently been studied by Milton and Tsokos [5]. From a deterministic point of view such integral equations are important in many physical problems, especially in the field of theoretical physics. Chandrasekhar [1], among others, has applied the deterministic version of (6.0.1) in determining the relative changes of radiative intensity in a radiation field due to the presence of the atmosphere. Equations of this general form
also arise in neutron transport theory. Furthermore, due to the difficulty of handling random equations such as (6.0.1) mathematically, in many instances a simple deterministic model is used instead of a more realistic stochastic model which results in a random equation in the form (6.0.1). We shall first be concerned with conditions which will ensure the existence and uniqueness of a random solution of (6.0.1). In Sections 6.2.1-6.2.3 we shall illustrate some recent applications (Milton and Tsokos [4, 6, 7]) of the theoretical results to biological problems, namely a metabolizing system, a physiological model, and a stochastic model for communicable diseases.
6.1 The Random Integral Equation

We shall be using the topological spaces defined in Chapter I, and the random functions which constitute Eq. (6.0.1) are defined as for the nonperturbed random integral equation. The random perturbed term $h(t, x(t;\omega))$ is a map from $R_+$ into $L_2(\Omega,\mathcal{A},P)$. We shall denote the norm of the operator $T$ defined in Lemma 2.1.1 by $K = \|T\|^*$. The norm guarantees that

$$\|(Tx)(t;\omega)\|_D \le K\|x(t;\omega)\|_B.$$
6.1.1
Existence and Uniqueness of a Random Solution
With respect to the aim of this chapter, the following theorems give conditions under which a unique random solution, a second-order stochastic process, exists.

Theorem 6.1.1 Consider the random integral equation (6.0.1) subject to the following conditions:

(i) $B$ and $D$ are Banach spaces stronger than $C_c(R_+, L_2(\Omega,\mathcal{A},P))$ such that $(B, D)$ is admissible with respect to the operator $T$ given by

$$(Tx)(t;\omega) = \int_0^t k(t,\tau;\omega)x(\tau;\omega)\, d\tau, \qquad t \ge 0.$$
(ii) $x(t;\omega) \to f(t, x(t;\omega))$ is an operator on

$$S = \{x(t;\omega) : x(t;\omega) \in D,\ \|x(t;\omega)\|_D \le \rho\}$$

with values in $B$ satisfying

$$\|f(t, x(t;\omega)) - f(t, y(t;\omega))\|_B \le \lambda\|x(t;\omega) - y(t;\omega)\|_D$$

for $x(t;\omega), y(t;\omega)$ elements of $S$, and $\lambda$ and $\rho$ positive constants;

(iii) $x(t;\omega) \to h(t, x(t;\omega))$ is an operator on $S$ with values in $D$ satisfying

$$\|h(t, x(t;\omega)) - h(t, y(t;\omega))\|_D \le \gamma\|x(t;\omega) - y(t;\omega)\|_D$$

for $x(t;\omega), y(t;\omega)$ elements of $S$ and $\gamma$ a positive constant.

Then there exists a unique random solution of Eq. (6.0.1) in $S$ provided

$$\gamma + \lambda K < 1 \qquad\text{and}\qquad \|h(t, x(t;\omega))\|_D + K\|f(t,0)\|_B \le \rho(1 - \lambda K),$$

where $K = \|T\|^*$.

PROOF Let $U$ be an operator from $S$ into $D$ defined as follows:

$$(Ux)(t;\omega) = h(t, x(t;\omega)) + \int_0^t k(t,\tau;\omega)f(\tau, x(\tau;\omega))\, d\tau. \qquad (6.1.1)$$
We must show that $U$ is a contracting operator on the set $S$. Consider $y(t;\omega) \in S$. We can write

$$(Uy)(t;\omega) = h(t, y(t;\omega)) + \int_0^t k(t,\tau;\omega)f(\tau, y(\tau;\omega))\, d\tau. \qquad (6.1.2)$$
Subtracting Eq. (6.1.2) from Eq. (6.1.1), we obtain

$$(Ux)(t;\omega) - (Uy)(t;\omega) = h(t, x(t;\omega)) - h(t, y(t;\omega)) + \int_0^t k(t,\tau;\omega)[f(\tau, x(\tau;\omega)) - f(\tau, y(\tau;\omega))]\, d\tau.$$
Since $B$ is a Banach space, $f(t, x(t;\omega)) - f(t, y(t;\omega))$ is an element of $B$. The admissibility of $(B, D)$ with respect to $T$ implies that $(Ux)(t;\omega) - (Uy)(t;\omega)$ is an element of $D$. From Lemma 2.1.1, $T$ is continuous from $B$ to $D$, implying that

$$\|(Ux)(t;\omega) - (Uy)(t;\omega)\|_D \le \|h(t, x(t;\omega)) - h(t, y(t;\omega))\|_D + K\|f(t, x(t;\omega)) - f(t, y(t;\omega))\|_B.$$

Applying the Lipschitz conditions given in (ii) and (iii), we have that

$$\|(Ux)(t;\omega) - (Uy)(t;\omega)\|_D \le \gamma\|x(t;\omega) - y(t;\omega)\|_D + \lambda K\|x(t;\omega) - y(t;\omega)\|_D = (\gamma + \lambda K)\|x(t;\omega) - y(t;\omega)\|_D.$$
Since $\gamma + \lambda K < 1$, the first condition of the definition of contraction mapping is satisfied. It now remains to show that $U(S) \subset S$. Let $x(t;\omega) \in S$. Then

$$(Ux)(t;\omega) = h(t, x(t;\omega)) + \int_0^t k(t,\tau;\omega)f(\tau, x(\tau;\omega))\, d\tau.$$

Taking norms, we obtain

$$\|(Ux)(t;\omega)\|_D \le \|h(t, x(t;\omega))\|_D + K\|f(t, x(t;\omega))\|_B.$$

Note that $\|f(t, x(t;\omega))\|_B$ can be written as $\|f(t, x(t;\omega)) - f(t, 0) + f(t, 0)\|_B$. Again applying the Lipschitz condition, we have

$$\|f(t, x(t;\omega))\|_B \le \lambda\|x(t;\omega)\|_D + \|f(t, 0)\|_B.$$

Thus

$$\|(Ux)(t;\omega)\|_D \le \|h(t, x(t;\omega))\|_D + K\lambda\|x(t;\omega)\|_D + K\|f(t, 0)\|_B.$$

By definition of $S$, $\|x(t;\omega)\|_D \le \rho$. Hence, using the condition that $\|h(t, x(t;\omega))\|_D + K\|f(t, 0)\|_B \le \rho(1 - \lambda K)$, we have that

$$\|(Ux)(t;\omega)\|_D \le \rho(1 - \lambda K) + K\lambda\rho = \rho.$$

Hence $(Ux)(t;\omega) \in S$ for $x(t;\omega)$ an element of $S$, or $U(S) \subset S$. Therefore the conditions of Banach's fixed-point theorem have been satisfied and we can conclude that there exists a unique element $x(t;\omega) \in S$ such that $(Ux)(t;\omega) = x(t;\omega)$.

6.1.2 Some Special Cases
We shall now consider some special cases of the random integral equation (6.0.1) which are useful in various applications.
Theorem 6.1.2 Suppose that the random integral equation (6.0.1) satisfies the following conditions:

(i) There exists a number $A > 0$ and a positive-valued continuous function $g(t)$ on $R_+$ such that

$$\int_0^t |||k(t,\tau;\omega)|||\, g(\tau)\, d\tau \le A.$$

(ii) $f(t, x)$ is continuous in $t$ uniformly in $x$ from $R_+ \times R$ into $R$; there exists a constant $A$ such that $|f(t, 0)| \le Ag(t)$, $t \in R_+$, and $|f(t, x) - f(t, y)| \le \lambda g(t)|x - y|$ for some $\lambda > 0$.

(iii) $h(t, x)$ is continuous in $t$ uniformly in $x$ from $R_+ \times R$ into $R$ such that $|h(t, 0)| \le Q$, $t \in R_+$, and $Q > 0$; $|h(t, x) - h(t, y)| \le \gamma|x - y|$ for some $\gamma > 0$.

Then there exists a unique random solution $x(t;\omega) \in C$ of the random integral equation (6.0.1) such that $\|x(t;\omega)\|_C \le \rho$, provided $\|h(t, x(t;\omega))\|_C$, $\lambda$, $\gamma$, and $\|f(t, 0)\|_{C_g}$ are sufficiently small.

REMARK By sufficiently small, we mean that $\gamma + \lambda K < 1$ and

$$\|h(t, x(t;\omega))\|_C + K\|f(t, 0)\|_{C_g} \le \rho(1 - \lambda K).$$
PROOF The proof consists basically in showing three things. First, that under the conditions assumed in (i) the pair $(C_g, C)$ is admissible with respect to the operator

$$(Tx)(t;\omega) = \int_0^t k(t,\tau;\omega)x(\tau;\omega)\, d\tau;$$

second, that Conditions (ii) are sufficient for Conditions (ii) of Theorem 6.1.1 to hold; and third, that Conditions (iii) are sufficient for Conditions (iii) of Theorem 6.1.1 to hold. Let $x(t;\omega) \in C_g$. Then

$$\|(Tx)(t;\omega)\| \le \int_0^t |||k(t,\tau;\omega)|||\,\|x(\tau;\omega)\|\, d\tau \le \|x(t;\omega)\|_{C_g}\int_0^t |||k(t,\tau;\omega)|||\, g(\tau)\, d\tau \le A\|x(t;\omega)\|_{C_g}.$$
Thus $\|(Tx)(t;\omega)\|$ is bounded, which implies that $(Tx)(t;\omega) \in C$ for $x(t;\omega) \in C_g$. Thus $(C_g, C)$ is admissible with respect to $T$. Now let $t_n \to t$ in $R_+$. We must show that $f(t_n, x(t_n;\omega)) \to f(t, x(t;\omega))$ in $L_2(\Omega,\mathcal{A},P)$. That is, we must show that given $\varepsilon > 0$, there exists an $N_0$ such that $n > N_0$ implies

$$\|f(t_n, x(t_n;\omega)) - f(t, x(t;\omega))\| < \varepsilon,$$

or equivalently that

$$\lim_{n\to\infty}\|f(t_n, x(t_n;\omega)) - f(t, x(t;\omega))\| = 0,$$

where $\|\cdot\| = \|\cdot\|_{L_2(\Omega,\mathcal{A},P)}$ throughout this section. Consider

$$\|f(t_n, x(t_n;\omega)) - f(t, x(t;\omega))\| \le \|f(t_n, x(t_n;\omega)) - f(t_n, x(t;\omega))\| + \|f(t_n, x(t;\omega)) - f(t, x(t;\omega))\|. \qquad (6.1.3)$$
We only need to show that each term on the right side of this inequality can be made arbitrarily small. Consider $\|f(t_n, x(t_n;\omega)) - f(t_n, x(t;\omega))\|$. For each $n$,

$$|f(t_n, x(t_n;\omega)) - f(t_n, x(t;\omega))| \le \lambda g(t_n)|x(t_n;\omega) - x(t;\omega)|$$

by the Lipschitz condition given in (ii). Squaring and integrating over $\Omega$, we obtain that

$$\|f(t_n, x(t_n;\omega)) - f(t_n, x(t;\omega))\| \le \lambda g(t_n)\|x(t_n;\omega) - x(t;\omega)\|.$$

Taking limits, we obtain that

$$\lim_{n\to\infty}\|f(t_n, x(t_n;\omega)) - f(t_n, x(t;\omega))\| \le \lambda g(t)\lim_{n\to\infty}\|x(t_n;\omega) - x(t;\omega)\| = 0,$$

where the limit on the right is due to the fact that $g$ is continuous at $t$ and $x(t;\omega) \in C$. Hence there exists an $N_1$ such that $n > N_1$ implies that

$$\|f(t_n, x(t_n;\omega)) - f(t_n, x(t;\omega))\| < \varepsilon/2.$$
Since $f(t, x)$ is continuous in $t$ uniformly in $x$, there exists an $N_2$ such that $n > N_2$ implies that

$$|f(t_n, x(t;\omega)) - f(t, x(t;\omega))| < \varepsilon/2.$$

Squaring and integrating over $\Omega$, we obtain that

$$\|f(t_n, x(t;\omega)) - f(t, x(t;\omega))\| < \varepsilon/2$$

for $n > N_2$. Let $N_0 = \max(N_1, N_2)$; then for $n > N_0$ inequality (6.1.3) becomes

$$\|f(t_n, x(t_n;\omega)) - f(t, x(t;\omega))\| < \varepsilon.$$

Thus under Conditions (ii), $t \to f(t, x(t;\omega))$ is continuous from $R_+$ into $L_2(\Omega,\mathcal{A},P)$. Now fix $\omega \in \Omega$. For each $t \in R_+$,

$$|f(t, x(t;\omega))| \le \lambda g(t)|x(t;\omega)| + Ag(t).$$

Again squaring and integrating over $\Omega$, we obtain that

$$\int_\Omega |f(t, x(t;\omega))|^2\, dP(\omega) \le \lambda^2 g^2(t)\int_\Omega |x(t;\omega)|^2\, dP(\omega) + 2\lambda Ag^2(t)\int_\Omega |x(t;\omega)|\, dP(\omega) + A^2 g^2(t).$$

Using the Cauchy-Bunyakovskii-Schwarz inequality, we obtain

$$\|f(t, x(t;\omega))\|^2 \le \lambda^2 g^2(t)\|x(t;\omega)\|^2 + 2\lambda Ag^2(t)\|x(t;\omega)\| + A^2 g^2(t).$$

Since $x(t;\omega) \in S$, where $S = \{x(t;\omega) \in C : \|x(t;\omega)\|_C \le \rho\}$, $\|x(t;\omega)\| \le \|x(t;\omega)\|_C \le \rho$. Thus

$$\|f(t, x(t;\omega))\|^2 \le \lambda^2 g^2(t)\rho^2 + 2\lambda Ag^2(t)\rho + A^2 g^2(t) = z^2 g^2(t),$$

where $z^2 = \lambda^2\rho^2 + 2\lambda A\rho + A^2$. Therefore

$$\|f(t, x(t;\omega))\| \le zg(t).$$

This implies that $f(t, x(t;\omega)) \in C_g$ for $x(t;\omega) \in S$. Let $x(t;\omega), y(t;\omega) \in S$. Then

$$|f(t, x(t;\omega)) - f(t, y(t;\omega))| \le \lambda g(t)|x(t;\omega) - y(t;\omega)|,$$
implying that

$$\|f(t, x(t;\omega)) - f(t, y(t;\omega))\| \le \lambda g(t)\|x(t;\omega) - y(t;\omega)\|$$

or that

$$\|f(t, x(t;\omega)) - f(t, y(t;\omega))\|/g(t) \le \lambda\|x(t;\omega) - y(t;\omega)\|.$$

Hence by definition of the norms in $C_g$ and $C$ we have that

$$\|f(t, x(t;\omega)) - f(t, y(t;\omega))\|_{C_g} \le \lambda\|x(t;\omega) - y(t;\omega)\|_C.$$

Thus Conditions (ii) are sufficient for Conditions (ii) of Theorem 6.1.1 to hold. The proof that Conditions (iii) are sufficient for Conditions (iii) of Theorem 6.1.1 to hold is analogous to that just given and will be omitted. The remainder of the proof is identical to that of Theorem 6.1.1.

When $g(t) \equiv 1$ we have the following corollary.

Corollary 6.1.3 Suppose that the random integral equation (6.0.1) satisfies the following conditions:
(i) $\int_0^t |||k(t,\tau;\omega)|||\, d\tau \le A$ for $t \in R_+$, $A > 0$.
(ii) $f(t, x)$ is continuous in $t$ uniformly in $x$ from $R_+ \times R$ into $R$; there exists a constant $A$ such that $|f(t, 0)| \le A$, $t \in R_+$; and $|f(t, x) - f(t, y)| \le \lambda|x - y|$ for some $\lambda > 0$.
(iii) Same as Theorem 6.1.2, Condition (iii).

Then there exists a unique random solution $x(t;\omega) \in C$ of the random integral equation (6.0.1) such that $\|x(t;\omega)\|_C \le \rho$, provided $\|h(t, x(t;\omega))\|_C$, $\lambda$, $\gamma$, and $\|f(t, 0)\|_C$ are sufficiently small.

Corollary 6.2.4 Assume that the random integral equation (6.0.1) satisfies the following conditions:

(i) $|||k(t,\tau;\omega)||| \le A_1$ for $(t,\tau) \in \Delta$ and $\int_0^\infty g(t)\, dt = M < \infty$.
(ii) Same as Theorem 6.1.2, Condition (ii).
(iii) Same as Theorem 6.1.2, Condition (iii).
Then there exists a unique random solution $x(t;\omega) \in C$ of the random integral equation (6.0.1) such that $\|x(t;\omega)\|_C \le \rho$ provided that $\|h(t, x(t;\omega))\|_C$, $\lambda$, $\gamma$, and $\|f(t, 0)\|_{C_g}$ are sufficiently small.

PROOF It is necessary only to show that $(C_g, C)$ is admissible with respect to the integral operator

$$(Tx)(t;\omega) = \int_0^t k(t,\tau;\omega)x(\tau;\omega)\, d\tau.$$

Corollary 6.2.5 Consider Eq. (6.0.1) under the following conditions:
(i) $|||k(t,\tau;\omega)||| \le A_2 e^{-\alpha(t-\tau)}$, $0 \le \tau \le t < \infty$, where $A_2$ and $\alpha$ are positive constants, and $\sup_{t\in R_+}\{\int_0^t e^{-\alpha(t-\tau)}g(\tau)\, d\tau\} < \infty$.
(ii) Same as Theorem 6.1.2, Condition (ii).
(iii) Same as Theorem 6.1.2, Condition (iii).

Then there exists a unique random solution $x(t;\omega) \in C$ of Eq. (6.0.1) such that $\|x(t;\omega)\|_C \le \rho$, provided $\|h(t, x(t;\omega))\|_C$, $\lambda$, $\gamma$, and $\|f(t, 0)\|_{C_g}$ are sufficiently small.

PROOF Since Conditions (ii) and (iii) are identical to those of Theorem 6.1.2, it is sufficient to show that $(C_g, C)$ is admissible with respect to the integral operator

$$(Tx)(t;\omega) = \int_0^t k(t,\tau;\omega)x(\tau;\omega)\, d\tau.$$

Let $x(t;\omega) \in C_g$. Then

$$\|(Tx)(t;\omega)\| \le \int_0^t |||k(t,\tau;\omega)|||\,\|x(\tau;\omega)\|\, d\tau.$$
Using the definition of the norm in $C_g$, we have

$$\|(Tx)(t;\omega)\| \le A_2\int_0^t e^{-\alpha(t-\tau)}g(\tau)\frac{\|x(\tau;\omega)\|}{g(\tau)}\, d\tau \le A_2\|x(t;\omega)\|_{C_g}\int_0^t e^{-\alpha(t-\tau)}g(\tau)\, d\tau.$$

However, Condition (i) implies that

$$\sup_{t\in R_+}\int_0^t e^{-\alpha(t-\tau)}g(\tau)\, d\tau = N < \infty,$$

and thus that $\|(Tx)(t;\omega)\| \le \|x(t;\omega)\|_{C_g}NA_2$, $t \in R_+$. Therefore $(Tx)(t;\omega) \in C$, which implies that $(C_g, C)$ is admissible with respect to $T$, and the proof is complete.
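The contraction argument behind Theorem 6.1.1 translates directly into a numerical scheme: discretize the integral and iterate $x \leftarrow Ux$. The sketch below does this for a single realization of the random functions, with hypothetical $h$, $k$, and $f$ chosen so that $\gamma + \lambda K < 1$; it is a minimal illustration under those assumptions, not the text's method.

```python
import math

def picard_solve(h, k, f, t_grid, num_iter):
    """Fixed-point iteration x <- Ux for one realization, where
    (Ux)(t) = h(t, x(t)) + int_0^t k(t, tau) f(tau, x(tau)) dtau,
    with the integral evaluated by the trapezoidal rule on t_grid."""
    n = len(t_grid)
    x = [0.0] * n                       # zeroth approximation
    for _ in range(num_iter):
        new = []
        for i, t in enumerate(t_grid):
            integral = 0.0
            for j in range(i):
                dt = t_grid[j + 1] - t_grid[j]
                fa = k(t, t_grid[j]) * f(t_grid[j], x[j])
                fb = k(t, t_grid[j + 1]) * f(t_grid[j + 1], x[j + 1])
                integral += 0.5 * dt * (fa + fb)
            new.append(h(t, x[i]) + integral)
        x = new
    return x
```

With, say, $h(t,x) = 1 + 0.1\sin x$ ($\gamma = 0.1$), $k(t,\tau) = e^{-(t-\tau)}$ ($K \le 1$), and $f(\tau,x) = 0.2\cos x$ ($\lambda = 0.2$), the contraction factor is about $0.3$, so the iterates settle to the discrete fixed point geometrically.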
6.2 Applications to Biological Systems
In this section we shall present some biological systems which are characterized by random integral equations of the form studied in Section 6.1. Specifically, we shall present a stochastic formulation of mathematical models for the study of blood flow in a circulatory system which was investigated in the deterministic setting by Stephenson [1]; a stochastic version of a deterministic equation arising in the mathematical description of a biochemical metabolizing system which was originally studied by Branson [1, 2], Wijsman [1], and Hearon [1]; and finally a random formulation of a model which arises in the study of the spread of a communicable disease through a finite population, which was treated deterministically by Landau and Rapoport [1]. In each case we present the manner in which the deterministic model arises and why such models should more realistically be characterized from a stochastic point of view. The studies given in this section are due to Milton and Tsokos [4, 6, 7].
6.2.1 A Random Integral Equation in a Metabolizing System

Biochemists are concerned with the study of metabolizing systems and have made repeated attempts to describe such systems mathematically. Generally speaking, a metabolizing system can be thought of as an irregularly shaped region of complex structure where a substance called the metabolite is being produced, consumed, transported, modified, or stored. The multitude and complexity of the reactions which take place simultaneously in any biological system make a deterministic mathematical description of the metabolizing process virtually impossible and at best highly speculative. Biochemists have, however, made various attempts to describe these reaction systems and have in many instances used as their mathematical models deterministic integral equations (Branson [1, 2], Wijsman [1], and Hearon [1]). The integral equation description seems to be especially suited to biological models in that they are well able to handle situations in which the state of the system depends not only on the immediately preceding state but on all previous states. In a typical experiment on metabolism the experimenter is interested in the evolution in time of the amount of some substance present in the system. The function of time which describes this evolution shall be denoted by M. Also associated with any metabolizing system will be two functions F and R which we shall call the metabolizing function and the rate function, respectively. These functions physically have the following interpretation:
M(t) = amount of metabolite present in the system at time $t$;
R(t) = rate at which the metabolite is entering the system from the outside at time $t$;
F(t - τ, M(τ)) = the fraction left at time $t$ of any amount of metabolite which entered the system at time $\tau$, $0 \le \tau \le t$.

The essential idea in the integral equation description is that the amount of metabolite present in the system at time $t$ is attributable to two sources: the amount remaining from the initial amount present and the amount remaining from that which has entered the system from outside sources at any time $\tau < t$. Under the assumption that this is a good description of the metabolizing system under study, Branson [1] proposed that the system be characterized by the following deterministic integral equation:

$$M(t) = M(0)F(t, M(0)) + \int_0^t R(\tau)F(t - \tau, M(\tau))\, d\tau, \qquad t \ge 0, \qquad (6.2.1)$$
where the unknown function is M and F and R are considered as being known. There has been considerable discussion of the general validity of this equation as a description of an arbitrary metabolizing system; for example, see Hearon [1] and Wijsman [1]. However, there seems to be general agreement that Eq. (6.2.1) is a valid model in the case of a first-order reaction. We shall discuss this case in depth. In many metabolizing systems, especially those which occur in nature as opposed to carefully controlled laboratory experiments, it is virtually impossible to know exactly the amount of metabolite present at time $t = 0$, the beginning of our observation of the system. This is due in part to the fact that this amount will be influenced to some extent by conditions existing in the system prior to our observation and also to the fact that we must estimate this amount using experimental techniques. A usual procedure is to obtain several experimental values for M(0) and solve the deterministic equation using as the "true" value of M(0) the mean of the values so obtained. However, if this procedure were repeated many times, the mean values so obtained would vary and the variation could be quite large. Thus the mean value actually used could be quite unrepresentative of the true state of the system and its use could lead to incorrect results. Thus it is indeed realistic to assume that the amount of metabolite present at time $t = 0$ is not a fixed constant but rather a random variable whose behavior is governed by some probability distribution function. We shall denote this random variable by $M(0;\omega)$. Consider the function R. By definition $R(\tau)$ is the rate at which metabolite is entering the system from outside sources at time $\tau$. In carefully controlled laboratory experiments it could perhaps be argued that this is a deterministic
function; however, in a metabolizing system occurring spontaneously in nature this is certainly not the case. Thus it could be more realistic in general to assume that at each time $\tau$, $0 \le \tau \le t < \infty$, $R(\tau)$ is not a fixed constant but in reality a random variable. That is, $R(\tau)$ is not a deterministic function but rather a random function which we shall denote by $R(\tau;\omega)$. With these remarks in mind we can formulate the following random equation analogous to Eq. (6.2.1):
$$M(t;\omega) = M(0;\omega)F(t, M(0;\omega)) + \int_0^t R(\tau;\omega)F(t - \tau, M(\tau;\omega))\, d\tau, \qquad t \ge 0. \qquad (6.2.2)$$

This equation is of the general form given by Eq. (6.0.1). Wijsman [1] showed that in the case of a first-order reaction the metabolizing function $F(t - \tau, M(\tau))$ takes the form of an exponential function, namely

$$F(t - \tau, M(\tau)) = e^{-c(t-\tau)}, \qquad c > 0.$$

Thus in this case Eq. (6.2.2) reduces to the following form:

$$M(t;\omega) = M(0;\omega)e^{-ct} + \int_0^t R(\tau;\omega)e^{-c(t-\tau)}\, d\tau, \qquad t \ge 0. \qquad (6.2.3)$$
In order to facilitate our theoretical presentation of Eq. (6.2.3), we make the following identifications:

$$h(t, x(t;\omega)) = H(t;\omega) = M(0;\omega)e^{-ct}, \qquad x(t;\omega) = M(t;\omega),$$
$$k(t,\tau;\omega) = R(\tau;\omega)e^{-c(t-\tau)}, \qquad f(t, x(t;\omega)) = 1.$$

Thus Eq. (6.2.3) can be written as (6.0.1), that is,

$$x(t;\omega) = h(t, x(t;\omega)) + \int_0^t k(t,\tau;\omega)f(\tau, x(\tau;\omega))\, d\tau, \qquad t \ge 0. \qquad (6.2.4)$$
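A realization of the first-order model (6.2.3) is easy to generate numerically: fix $\omega$ (that is, fix a draw of $M(0;\omega)$ and a sample path of $R(\cdot;\omega)$) and evaluate the integral by quadrature. In the sketch below all parameter values are hypothetical; for a constant rate $R \equiv Q$ the integral has the closed form $(Q/c)(1 - e^{-ct})$, consistent with the bound $Q/c$ that appears in the admissibility computation.

```python
import math

def metabolite_path(m0, R, c, t_grid):
    """One realization of (6.2.3):
    M(t) = M(0) e^{-c t} + int_0^t R(tau) e^{-c(t-tau)} dtau,
    with the integral evaluated by the trapezoidal rule on t_grid.

    m0 : realization of the random initial amount M(0; w)
    R  : realization of the (possibly random) rate function, tau -> R(tau; w)
    """
    out = []
    for t in t_grid:
        integral = 0.0
        taus = [s for s in t_grid if s <= t]
        for a, b in zip(taus, taus[1:]):
            fa = R(a) * math.exp(-c * (t - a))
            fb = R(b) * math.exp(-c * (t - b))
            integral += 0.5 * (b - a) * (fa + fb)
        out.append(m0 * math.exp(-c * t) + integral)
    return out
```

Repeating this over many draws of $M(0;\omega)$ and $R(\cdot;\omega)$ gives Monte Carlo estimates of moments such as $E|M(t;\omega)|^2$.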
With respect to the functions which constitute Eq. (6.2.3) we shall make the following assumptions: For each $t \ge 0$ the random variable $M(t;\omega)$ has finite variance and there exists a constant $Q$ independent of $\tau$ such that $|R(\tau;\omega)| \le Q$ for almost all $\omega$. These restrictions are necessary for our theoretical presentations, and their validity in the physical sense will be commented on later. To show that the given stochastic integral equation possesses a unique random solution, we must show that the basic assumptions concerning the functions which constitute the formulation of Eq. (6.0.1) and the conditions of Corollary 2.1.4 are satisfied. We shall assume that $\{e^{-c(t-\tau)}R(\tau;\omega) : \omega \in \Omega\}$ is an equicontinuous family of functions from $\Delta$ into $R$ and show that this
is sufficient for our purpose. To show that the functions of the model meet the given conditions, we proceed as follows: For $t \in R_+$, $M(t;\omega)$ by assumption has finite variance. This implies that for each $t$, $M(t;\omega) \in L_2(\Omega,\mathcal{A},P)$. Since $f(t, x(t;\omega)) = 1$ and $(\Omega,\mathcal{A},P)$ is a probability space and hence a finite measure space, $f(t, x(t;\omega))$ is in $L_2(\Omega,\mathcal{A},P)$ for each $t \in R_+$. For fixed $t$, $e^{-ct}$ is a constant and since $M(0;\omega)$ has finite variance, we can conclude that $M(0;\omega)e^{-ct} \in L_2(\Omega,\mathcal{A},P)$. Fix $(t,\tau) \in \Delta$. Recall that $k(t,\tau;\omega) = R(\tau;\omega)e^{-c(t-\tau)}$. By assumption $|R(\tau;\omega)| \le Q$, $P$-a.e., so $|k(t,\tau;\omega)| = |R(\tau;\omega)e^{-c(t-\tau)}| \le Qe^{-c(t-\tau)}$, $P$-a.e. This implies that $(t,\tau) \to k(t,\tau;\omega)$ is a map from $\Delta$ into $L_\infty(\Omega,\mathcal{A},P)$. Now let $(t_n,\tau_n) \to (t,\tau)$. Choose $\varepsilon > 0$. By the equicontinuity condition, there exists an $N_0$ such that $n > N_0$ implies $|e^{-c(t_n-\tau_n)}R(\tau_n;\omega) - e^{-c(t-\tau)}R(\tau;\omega)| < \varepsilon$ for each $\omega \in \Omega$. Thus by definition of the infinity norm, for $n > N_0$, $|||e^{-c(t_n-\tau_n)}R(\tau_n;\omega) - e^{-c(t-\tau)}R(\tau;\omega)||| < \varepsilon$. However, this implies that for $n > N_0$ we have
$$|||k(t_n,\tau_n;\omega) - k(t,\tau;\omega)||| < \varepsilon,$$

as was to be shown. To show that the hypotheses of Corollary 2.1.4 are satisfied, we proceed as follows: For Conditions (i) we have

$$\int_0^t |||k(t,\tau;\omega)|||\, d\tau = \int_0^t |||e^{-c(t-\tau)}R(\tau;\omega)|||\, d\tau \le \int_0^t Qe^{-c(t-\tau)}\, d\tau = (Qe^{-ct}/c)(e^{ct} - 1) = (Q/c)(1 - e^{-ct}) < Q/c.$$

It is easy to see that Condition (ii) is satisfied. To see that $M(0;\omega)e^{-ct} \in C$, let $t_n \to t$ in $R_+$ and choose $\varepsilon > 0$. If $\|M(0;\omega)\| \ne 0$, choose $N$ such that $n > N$ implies $|e^{-ct_n} - e^{-ct}| < \varepsilon/\|M(0;\omega)\|$. Then for $n > N$
$$\|M(0;\omega)e^{-ct_n} - M(0;\omega)e^{-ct}\| = |e^{-ct_n} - e^{-ct}|\,\|M(0;\omega)\| < (\varepsilon/\|M(0;\omega)\|)\|M(0;\omega)\| = \varepsilon.$$
If $\|M(0;\omega)\| = 0$, then $\|M(0;\omega)e^{-ct_n} - M(0;\omega)e^{-ct}\| = 0 < \varepsilon$. Thus $t \to M(0;\omega)e^{-ct}$ is continuous from $R_+$ into $L_2(\Omega,\mathcal{A},P)$. To see that the map is bounded, consider $\|M(0;\omega)e^{-ct}\| = e^{-ct}\|M(0;\omega)\| \le \|M(0;\omega)\|$. Thus the conditions of Corollary 2.1.4 are satisfied, and we can conclude that there exists a unique random solution of Eq. (6.2.3) provided $\|M(0;\omega)e^{-ct}\|_C$, $\lambda$, and $\|f(t, 0)\|_{C_g}$ are sufficiently small.

REMARK When we say that the quantities are sufficiently small we mean that
$$\lambda K < 1, \qquad \|M(0;\omega)e^{-ct}\|_C + K\|f(t, 0)\|_C \le \rho(1 - \lambda K),$$

where $K = \|T\|^*$, the norm of the operator $T$ defined in Section 6.1. Note also that

$$\|M(0;\omega)e^{-ct}\|_C = \sup_{0 \le t} e^{-ct}\|M(0;\omega)\| = \|M(0;\omega)\|$$

and that $\|f(t, 0)\|_C = 1$. Hence we are actually requiring that

$$\lambda K < 1, \qquad \|M(0;\omega)\| + K \le \rho(1 - \lambda K).$$
Since in this case $\lambda$ can be any positive number, the first condition can easily be satisfied. Hence we will have a unique random solution $M(t;\omega)$ such that $E|M(t;\omega)|^2 \le \rho$ for each $t$ provided $E|M(0;\omega)|^2$ is sufficiently small. In formulating the stochastic model, we have been forced to make certain assumptions on $M(t;\omega)$ and $R(t;\omega)$. Namely, we assume that $\{M(t;\omega) : t \in R_+\}$ is a second-order stochastic process and that for each $t$, $R(t;\omega)$ is $P$-essentially bounded and furthermore that the bound is uniform over $R_+$. These restrictions make our particular approach to the problem possible and may or may not be satisfied by a particular given metabolizing system under study by a biologist or biochemist. The feasibility of these assumptions must be determined in each instance by the experimenter. Although these requirements appear on the surface to be quite strong, in practice they are in many cases quite easily satisfied due to the physical or chemical characteristics of the system under study. For example, if the amount of metabolite present at any time were limited due to space considerations, we would automatically satisfy the condition that $\{M(t;\omega) : t \in R_+\}$ be a second-order stochastic process. As a simple illustration, visualize the "metabolizing" system as being a reservoir and the "metabolite" as being the amount of water present at time $t$. If, on the other hand, the amount of metabolite
VI
NONLINEAR PERTURBED RANDOM INTEGRAL EQUATIONS
present at any given time were limited due to some chemical characteristic of the system, we could come to the same conclusion. That is, visualize the metabolizing system as being perhaps a lake or stream and the metabolite of interest the amount of dissolved KCl present per gallon. This amount will be limited by chemical considerations due to the fact that there is a maximum amount of the salt which can be dissolved in a given amount of water at a given temperature. In any situation similar to this the assumption of finite variance on M(t;ω) for each t will be quite naturally satisfied. Similarly, in many systems the rate at which metabolite enters the system from outside sources at any time will be restricted due to physical limitations, especially in systems where metabolite is simply being transported, or to chemical limitations in systems where metabolite is being consumed or produced. Hence the assumption that there exists a Q such that |R(τ;ω)| ≤ Q 𝒫-a.e. is an assumption which can often be realistically met. The point to be made here is twofold. First, the proposed random model is a more realistic description of a general metabolizing process than is the deterministic formulation and should be used whenever possible. Second, in many cases the restrictions placed on the functions in the random model are not extremely difficult to meet in practice, but whether or not they are met is a question which must be considered carefully by the experimenter in each case. Note that there is a certain degree of flexibility in the random formulation in that for each t we require no knowledge of the particular form of the distribution functions for the random variables involved, but only that they are elements of certain spaces, namely either L₂(Ω, 𝒜, 𝒫) or L∞(Ω, 𝒜, 𝒫).
6.2.2 A Stochastic Physiological Model Physiologists quite often are faced with the problem of constructing mathematical models which attempt to describe the complex processes that take place within the human body. Such models are very difficult to obtain and must necessarily be oversimplified due to the inability of scientists to understand fully all of the factors which can influence even the simplest process taking place in a living organism. Thus mathematical models in use are constantly subject to refinement as more insight is gained into the true nature of the process taking place. Along this line, Milton and Tsokos [4] proposed a stochastic formulation for a model used to study blood flow in a simplified circulatory system which has been studied from the deterministic point of view by Stephenson [l]. The stochastic approach yields a more realistic characterization of the system than the nonrandom formulation and should be used whenever possible. A simplified circulatory system is visualized as consisting of the heart, a capillary bed, and connecting vessels. Schematically it is pictured in Fig.
6.2
APPLICATIONS TO BIOLOGICAL SYSTEMS
6.2.1. The points X and Y represent an inflow and an outflow point, respectively. It is assumed that a given amount M of indicator substance I is suddenly injected into the system at point X, the inflow to the capillary bed, thus producing a fixed concentration of indicator C₁(0) > 0 at time t = 0. The fraction of the amount M flowing out of the capillary bed at time t is considered to be a deterministic function of time and is denoted by p(t). This function is determined by taking instantaneous measurements on the concentration of indicator at point Y. The concentration at time t is also considered to be a deterministic function of time, which is denoted by C₂(t). The model allows for recirculation, and hence the concentration of indicator at point X will also depend on time and will be denoted C₁(t). Stephenson [1] makes the following assumptions concerning these functions based upon experimental results and working experience with such models:

(a) C₁(t) and C₂(t) are twice differentiable;
(b) there exist constants G > 0 and H > 0 such that

|C₁′(t)| ≤ G  and  |C₂′(t)| ≤ H

for all t, where C₁′(t) = dC₁(t)/dt and C₂′(t) = dC₂(t)/dt.

Under the assumption that the model provides a reasonable description of the physical situation at hand, the following deterministic integral equation of the Volterra type is obtained:

p(t) = [C₂′(t)/C₁(0)] − [1/C₁(0)] ∫₀ᵗ C₁′(t − τ) p(τ) dτ,  t ≥ 0.  (6.2.5)
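For readers who wish to experiment with the model, Eq. (6.2.5) can be approximated numerically by discretizing the convolution integral on a uniform grid. The sketch below is illustrative only: the functions C1p and C2p standing in for C₁′ and C₂′, the value C₁(0) = 2, and the grid parameters are all hypothetical choices that respect the bounds of assumption (b); they are not data from the text.

```python
import numpy as np

def solve_p(C1_0, C1p, C2p, T=5.0, n=500):
    """Left-Riemann-sum discretization of the Volterra equation (6.2.5)."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    p = np.zeros(n + 1)
    for j in range(n + 1):
        # approximate the convolution integral over [0, t_j]
        conv = dt * np.sum(C1p(t[j] - t[:j]) * p[:j])
        p[j] = C2p(t[j]) / C1_0 - conv / C1_0
    return t, p

# hypothetical concentration derivatives respecting |C1'| <= G, |C2'| <= H
C1_0 = 2.0
C2p = lambda u: np.exp(-u)        # H = 1
C1p = lambda u: 0.2 * np.exp(-u)  # G = 0.2

t, p = solve_p(C1_0, C1p, C2p)

# sanity check: with no recirculation kernel (C1' = 0), p(t) = C2'(t)/C1(0)
t0, p0 = solve_p(C1_0, lambda u: 0.0, C2p)
assert np.allclose(p0, C2p(t0) / C1_0)
```

The left Riemann sum makes the scheme explicit: p(t_j) depends only on the earlier values p(t_k), k < j, so no linear solve is required.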
Let us first consider the function C₂(t). The usual technique for determining the value of this function at a given time t₁ is to obtain experimentally several observations on C₂(t₁). These experimental values are then averaged, and the mean value of these experimental observations is used as the "true" value for C₂(t₁). Due to the possibility of some diffusion in the capillary bed, together with the inherent difficulties of accurately measuring instantaneous concentrations at a given point, as well as the natural variability of the circulatory system in general, if the same experiment were repeated, the mean value obtained would most likely differ from the first determination.

Figure 6.2.1. [Schematic of the simplified circulatory system: heart, capillary bed, and connecting vessels, with inflow point X and outflow point Y.]
If this variability is large, the actual mean value used could be quite unsatisfactory. Therefore it is more realistic to assume that the concentration of indicator at Y is for each t a random variable and that C₂(t) is in fact a random function, which we shall denote by C₂(t;ω). This tacitly implies that the derivative of C₂(t;ω) is also random, and it will be denoted by C₂′(t;ω). The function C₁(t) can be considered random because of the fact that it is obtained experimentally in a manner identical to that described for C₂(t) and also due to the fact that we are allowing for recirculation. Thus C₁(t) will be influenced after a certain point by the same factors which influenced C₂(t), namely diffusion and natural variability. The function p(t) is being expressed in terms of the random functions C₁′(t;ω) and C₂′(t;ω), and thus it should also be considered as random and will be denoted by p(t;ω). In view of these remarks we have the following random version of Eq. (6.2.5):
p(t;ω) = [C₂′(t;ω)/C₁(0)] − [1/C₁(0)] ∫₀ᵗ C₁′(t − τ;ω) p(τ;ω) dτ,  t ≥ 0.  (6.2.6)
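A pathwise (Monte Carlo) reading of Eq. (6.2.6) can be sketched in the same way: draw ω, then solve the resulting Volterra equation sample by sample. The random amplitudes and all numerical parameters below are hypothetical, chosen so that the a.e. bounds G and H appearing in the assumptions that follow hold by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each draw of ω gives sample derivatives C1'(.;ω), C2'(.;ω), and the
# corresponding p(t;ω) is obtained by a left-Riemann discretization of the
# Volterra equation (6.2.6), pathwise.
C1_0, T, n = 2.0, 5.0, 200
dt = T / n
t = np.linspace(0.0, T, n + 1)

def sample_path():
    gG = rng.uniform(0.1, 0.2)   # amplitude of C1'(.;ω), so |C1'| <= G = 0.2
    gH = rng.uniform(0.5, 1.0)   # amplitude of C2'(.;ω), so |C2'| <= H = 1
    p = np.zeros(n + 1)
    for j in range(n + 1):
        conv = dt * np.sum(gG * np.exp(-(t[j] - t[:j])) * p[:j])
        p[j] = (gH * np.exp(-t[j]) - conv) / C1_0
    return p

paths = np.array([sample_path() for _ in range(30)])
mean_p = paths.mean(axis=0)             # estimate of E[p(t;ω)]
assert np.all(np.abs(paths) <= 1.0)     # p is a fraction: |p(t;ω)| <= 1
```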
We shall make the following identifications in order to simplify our theoretical investigation:

x(t;ω) = p(t;ω);  h(t, x(t;ω)) = C₂′(t;ω)/C₁(0);  k(t, τ;ω) = C₁′(t − τ;ω);

and

f(τ, x(τ;ω)) = −x(τ;ω)/C₁(0) = −[1/C₁(0)]p(τ;ω).

Using the notational changes, Eq. (6.2.6) takes the familiar form given by Eq. (6.0.1). With respect to the functions appearing in Eq. (6.2.6), we make the assumption that there exist constants G > 0 and H > 0 such that

|C₁′(t − τ;ω)| ≤ G  and  |C₂′(t;ω)| ≤ H,  𝒫-a.e.,

for each t ≥ 0. To show that the model possesses a random solution, we must verify that the random functions which constitute Eq. (6.2.6) meet the required assumptions of Eq. (6.0.1). We begin by assuming that the families

{C₁′(t − τ;ω) : ω ∈ Ω}  and  {C₂′(t;ω) : ω ∈ Ω}

are equicontinuous families from Δ into R. Fix t ∈ R₊. Then |p(t;ω)| ≤ 1 𝒫-a.e. for each t due to the fact that p(t;ω) represents the fraction of the
original amount of indicator in the outflow at time t. Hence

∫_Ω |C₂′(t;ω)/C₁(0)|² d𝒫(ω) = [1/C₁(0)]² ∫_Ω |C₂′(t;ω)|² d𝒫(ω) ≤ [1/C₁(0)]² ∫_Ω H² d𝒫(ω) = [1/C₁(0)]²H² < ∞,

that is, for fixed t ∈ R₊, h(t, x(t;ω)) ∈ L₂(Ω, 𝒜, 𝒫). Finally, fix (t, τ) ∈ Δ and consider k(t, τ;ω) = C₁′(t − τ;ω) = C₁′(u;ω), where u = t − τ. By our previous assumption, |C₁′(u;ω)| ≤ G 𝒫-a.e. This implies that 𝒫{ω : |C₁′(u;ω)| > G} = 0 and hence that C₁′(u;ω) ∈ L∞(Ω, 𝒜, 𝒫). Thus (t, τ) → k(t, τ;ω) is a map from Δ into L∞(Ω, 𝒜, 𝒫). It remains only to show that this map is continuous. To this end, let (tₙ, τₙ) → (t, τ). By the equicontinuity condition, given ε > 0, there exists an N such that n > N implies
|C₁′(tₙ − τₙ;ω) − C₁′(t − τ;ω)| < ε

for each ω ∈ Ω. Hence for n > N,

𝒫{ω : |C₁′(tₙ − τₙ;ω) − C₁′(t − τ;ω)| > ε} = 0,

implying that for n > N,

|‖k(tₙ, τₙ;ω) − k(t, τ;ω)‖| ≤ ε.
Therefore the basic assumptions of our theoretical development are met. We shall now show that for an appropriate choice of the function g the pair (C_g, C_g) is admissible with respect to T as required in Theorem 2.1.2, Condition (i), and furthermore that Conditions (ii) and (iii) are also satisfied by the functions of the stochastic model. This allows us to apply the theorem to obtain the existence of a unique random solution. Choose g(t) = α e^{βt} for β > 0 and α ≥ 1. Note that g is positive valued and continuous on R₊, and hence the space C_g(R₊, L₂(Ω, 𝒜, 𝒫)) is well defined. Let x(t;ω) ∈ C_g(R₊, L₂(Ω, 𝒜, 𝒫)) and consider

‖(Tx)(t;ω)‖ ≤ ∫₀ᵗ ‖k(t, τ;ω)‖ ‖x(τ;ω)‖ dτ ≤ G‖x(t;ω)‖_{C_g} ∫₀ᵗ α e^{βτ} dτ
= [G‖x(t;ω)‖_{C_g}/β] α(e^{βt} − 1) ≤ [G‖x(t;ω)‖_{C_g}/β] α e^{βt} = [G‖x(t;ω)‖_{C_g}/β] g(t).
Thus for x(t;ω) ∈ C_g(R₊, L₂(Ω, 𝒜, 𝒫)), (Tx)(t;ω) ∈ C_g(R₊, L₂(Ω, 𝒜, 𝒫)), implying that the pair (C_g, C_g) is admissible with respect to T. Now, let x(t;ω) ∈ S, where S is as defined in Theorem 2.1.2, Condition (ii), with C_g = D. Then

‖f(t, x(t;ω))‖ = [1/C₁(0)]‖x(t;ω)‖ ≤ [1/C₁(0)]ρ α e^{βt} = Z g(t),

where Z is a constant, Z = [1/C₁(0)]ρ. Thus by definition of C_g(R₊, L₂(Ω, 𝒜, 𝒫)), f(t, x(t;ω)) ∈ C_g(R₊, L₂(Ω, 𝒜, 𝒫)). To show that the Lipschitz condition is satisfied, consider

‖f(t, x(t;ω)) − f(t, y(t;ω))‖ = [1/C₁(0)]‖x(t;ω) − y(t;ω)‖.
Hence we can choose λ = 1/C₁(0), and Condition (ii) is satisfied. Let x(t;ω) ∈ S. Then

‖h(t, x(t;ω))‖ = ‖[1/C₁(0)]C₂′(t;ω)‖ = [1/C₁(0)]‖C₂′(t;ω)‖ ≤ [1/C₁(0)]H ≤ [H/C₁(0)] α e^{βt} = [H/C₁(0)] g(t).
Since the equicontinuity condition implies the continuity of the map t → h(t, x(t;ω)), we have that h(t, x(t;ω)) ∈ C_g(R₊, L₂(Ω, 𝒜, 𝒫)), as was desired. Therefore we have shown that Theorem 2.1.2 is applicable with B = D = C_g, where g(t) = α e^{βt}. We can thus conclude that there exists a unique random solution x(t;ω) of Eq. (6.2.6) such that ‖x(t;ω)‖_{C_g} ≤ ρ provided λK < 1 and

‖h(t, x(t;ω))‖_{C_g} + K‖f(t, 0)‖_{C_g} ≤ ρ(1 − λK).

Note that ‖f(t, 0)‖_{C_g} = 0 and that

‖h(t, x(t;ω))‖_{C_g} = sup_{t≥0} {[1/C₁(0)]‖C₂′(t;ω)‖/α e^{βt}} ≤ [1/C₁(0)] sup_{t≥0} {‖C₂′(t;ω)‖} ≤ [1/C₁(0)]H.

Furthermore,

‖(Tx)(t;ω)‖_{C_g} = sup_{t≥0} [‖(Tx)(t;ω)‖/g(t)] ≤ sup_{t≥0} [G‖x(t;ω)‖_{C_g} g(t)/β g(t)] = G‖x(t;ω)‖_{C_g}/β.
Here

K = sup{‖(Tx)(t;ω)‖_{C_g}/‖x(t;ω)‖_{C_g} : ‖x(t;ω)‖_{C_g} ≠ 0},

and hence

K ≤ G‖x(t;ω)‖_{C_g}/β‖x(t;ω)‖_{C_g} = G/β.
Thus we can conclude that there exists a unique random solution x(t;ω) ∈ S of Eq. (6.2.6) provided ρ is such that

[1/C₁(0)](G/β) < 1

and

[1/C₁(0)]H ≤ ρ{1 − [1/C₁(0)](G/β)}.
The mathematical restrictions placed on the functions which constitute the stochastic model do not appear to be overly restrictive in that they closely parallel the assumptions which Stephenson [1] made. That is, there exist constants G and H such that |C₁′(t;ω)| ≤ G and |C₂′(t;ω)| ≤ H 𝒫-a.e. for all t ≥ 0, and the assumption that p(t;ω) ∈ L₂(Ω, 𝒜, 𝒫) for each t ≥ 0 is quite naturally satisfied due to the fact that the physical meaning of p(t;ω) implies |p(t;ω)| ≤ 1 𝒫-a.e. The choice of the function g(t) is arbitrary and could be replaced at the discretion of the investigator.
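The two closing conditions above can be checked by direct arithmetic; the constants below are hypothetical illustration values, not data from the text.

```python
# Numerical check of the sufficiency conditions for the physiological model,
# with hypothetical constants: G, H bound |C1'| and |C2'|, and beta comes
# from the weight g(t) = alpha * e^{beta t}.
C1_0, G, H, beta = 2.0, 0.2, 1.0, 1.0

lam_K = (1.0 / C1_0) * (G / beta)     # upper bound on lambda*K; must be < 1
assert lam_K < 1.0

# smallest rho satisfying [1/C1(0)]H <= rho * (1 - lambda*K)
rho_min = (H / C1_0) / (1.0 - lam_K)
assert (1.0 / C1_0) * H <= rho_min * (1.0 - lam_K) + 1e-12
```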
6.2.3 A Stochastic Model for Communicable Disease

Early efforts to describe the course of the spread of a communicable disease throughout a population of given size were concerned with deterministic models. Recently statisticians have begun to reconsider those models in an attempt to take into consideration the essentially random nature of many of the processes which are involved in the spread of diseases. Along this line the deterministic models are being replaced by newer, stochastic models which more realistically characterize the true nature of such a system. Landau and Rapoport [1] have examined a mathematical model for the spread of communicable disease through a finite population from a deterministic point of view. Their work was an extension of the study done by Kermack and McKendrick [1]. Recently Milton and Tsokos [7] have formulated a stochastic version of the communicable disease model which more realistically describes the physical situation. We shall formulate such a model and show, using the technique of admissibility theory, that a random solution, a second-order stochastic process, exists for the system and that it is also unique. Consider a population whose members are thoroughly mixed, that is, each individual has the ability to make contact with every other individual in the population. Assume that the probability of contact between any pair of individuals is the same for each pair. A "state" will signify any quality that can be transmitted from one individual to another by contact, and an individual acquires the state by coming into contact with another individual who has already acquired the quality and has not yet lost it. The word "population" is simply used to mean any collection of objects which can be described as being thoroughly mixed, and an individual is any member of the population under study. There are of course many variations of the general situation which can be considered.
For example, there may or may not be "recovery"; there may or may not be "immunity"; there may be only a definite fixed period during which it is possible to transmit the state even though the "carrier" still possesses it himself; there may be a varying number of individuals in the population; the frequency of contact may or may not be constant; and so forth. Each of these considerations influences the rate of spread of the state and should be accounted for in the formulation of the mathematical model. Landau and Rapoport [1] considered a model of this problem under the following conditions:

(i) The number of individuals who become affected up to and including time t = 0, the start of the process, is a fixed constant denoted R(0).
(ii) The total number of individuals affected up to and including time t is a deterministic function of time denoted by R(t), t ≥ 0.
(iii) There is no recovery.
(iv) Each individual can acquire the state at most once.
(v) The number of individuals in the total population is a known constant N throughout the spread of the state.
(vi) The frequency of contact γ is assumed to be the same for each pair and constant throughout the spread of the state.
(vii) The probability of transmission depends on t (the time of the whole process, called the clock time) and s (the time elapsed since the person in the state acquired the state, called the private time) and is denoted p(t, s).
A simple example of the feasibility of allowing the transmission probability to depend on both t and s is provided in epidemiology, where the infectiousness of a diseased individual may decrease with time, so that if contact is made shortly after the carrier acquires the disease, the probability of transmission is relatively high. With these ideas in mind, Landau and Rapoport arrived at the following deterministic integral equation of the Volterra type in R′(t), the instantaneous rate at which the number of affected individuals is changing at time t ≥ 0:

R′(t) = γ[N − R(t)]{R(0)p(t, t) + ∫₀ᵗ R′(τ)p(t, t − τ) dτ},  t ≥ 0.  (6.2.7)
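Equation (6.2.7) can be stepped forward in time once a transmission probability is fixed; the sketch below uses the exponentially decaying form p(t, s) = e^{-as} discussed later in this subsection, with hypothetical values for γ, N, R(0), and a.

```python
import numpy as np

# Forward discretization of the Landau-Rapoport equation (6.2.7) with the
# exponentially decaying transmission probability p(t, s) = exp(-a*s).
# The constants gamma, N, R(0), and a are hypothetical illustration values.
gamma, N, R0, a = 0.002, 1000.0, 5.0, 0.5
T, n = 20.0, 2000
dt = T / n
t = np.linspace(0.0, T, n + 1)

R = np.zeros(n + 1)
R[0] = R0
Rp = np.zeros(n + 1)                      # R'(t), the rate of new affections
for j in range(n + 1):
    conv = dt * np.sum(Rp[:j] * np.exp(-a * (t[j] - t[:j])))
    Rp[j] = gamma * (N - R[j]) * (R0 * np.exp(-a * t[j]) + conv)
    if j < n:
        R[j + 1] = R[j] + dt * Rp[j]      # accumulate affected individuals

assert np.all(Rp >= 0.0) and np.all(R <= N)
```

The assertions reflect the model's structure: the rate R′ is nonnegative, and the affected count R never exceeds the population size N.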
With respect to our aim, consider the constant function γ(t) = γ, which represents the number of contacts per unit time per pair and which is assumed to be the same for each pair of individuals in the population. This is a constant which in a sense characterizes the population. In order to apply Eq. (6.2.7) to a given situation, the value of the constant γ must be determined based on the experimenter's prior knowledge of the population or on direct observation of the population. The usual procedure is to obtain several estimates for γ based on observations of samples drawn from the population and then solve the deterministic equation using as the "true" value of γ the mean of the experimental values. If this procedure were repeated many times, the mean values obtained would usually vary considerably, and hence the value actually used could be quite unrepresentative of the true state of nature γ. Thus it would appear to be more realistic to assume that this "constant" γ is in reality a stochastic variable whose behavior is governed by some probability distribution function. We shall denote this stochastic variable by γ(ω). It is clear that the number of individuals affected up to time t, R(t), will be influenced by the frequency of contacts between pairs, which we are now considering to be random. Thus we shall assume that for each t ≥ 0 the quantity R(t) is no longer a fixed constant but is in reality a random
variable, which we shall denote by R(t;ω). That is, R(t) should not be considered as a deterministic function of time but rather as a random function of time. This tacitly implies that the derivative of R(t;ω), giving the instantaneous rate of change of the number of affected individuals in the population, is also a random function, which we shall denote by R′(t;ω). That is, for each t ≥ 0, R′(t;ω) is a random variable. Therefore we can now formulate the following stochastic model for communicable diseases:

R′(t;ω) = γ(ω)[N − R(t;ω)][R(0;ω)p(t, t) + ∫₀ᵗ R′(τ;ω)p(t, t − τ) dτ],  t ≥ 0.  (6.2.8)
As in the previous models, in order to simplify our theoretical investigation, we shall employ the following identifications:

x(t;ω) ≡ R′(t;ω),
h(t, x(t;ω)) ≡ H(t;ω) ≡ γ(ω)[N − R(t;ω)][R(0;ω)p(t, t)],
k(t, τ;ω) ≡ γ(ω)[N − R(t;ω)]p(t, t − τ),
f(τ, x(τ;ω)) ≡ x(τ;ω) = R′(τ;ω).

Using these identifications, Eq. (6.2.8) is essentially Eq. (6.0.1) without random perturbation. We shall make certain assumptions with regard to the behavior of the random functions which constitute the communicable disease model: We shall assume that there exists a constant Q₁ such that |γ(ω)| ≤ Q₁ 𝒫-a.e. and that for each t, R(t;ω) and R′(t;ω) have finite variances. The assumption that the number of individuals in the population during the spread of the state is a known constant N and the interpretation of p(t, s) are retained as stated earlier. In order to show that the random integral equation (6.2.8) possesses a unique random solution, we must show that the model satisfies the conditions of Corollary 2.1.6. To show that Condition (i) holds, consider

|γ(ω)[N − R(t;ω)]| ≤ |γ(ω)|N,  𝒫-a.e.
This implies that

|‖γ(ω)[N − R(t;ω)]‖| ≤ |‖γ(ω)‖|N

and that

|‖k(t, τ;ω)‖| ≤ p(t, t − τ)|‖γ(ω)‖|N.

Taking A = |‖γ(ω)‖|N, Condition (i) can be satisfied if p(t, t − τ) is such that

p(t, t − τ) ≤ e^{−a(t−τ)}  for some a > 0

and 0 ≤ τ ≤ t < ∞. Take the function g ≡ g(t) = 1. Then

sup_{t∈R₊} {∫₀ᵗ e^{−a(t−τ)} g(τ) dτ} ≤ 1/a < ∞.
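The integrability requirement is easy to verify in closed form; with a hypothetical decay rate a = 0.5:

```python
import numpy as np

# With p(t, t-τ) = e^{-a(t-τ)} and g(t) = 1, the integral in Condition (i)
# has the closed form ∫_0^t e^{-a(t-τ)} dτ = (1 - e^{-at})/a, which is
# bounded by 1/a uniformly in t.
a = 0.5
t = np.linspace(0.0, 50.0, 101)
val = (1.0 - np.exp(-a * t)) / a
assert np.all(val <= 1.0 / a)          # the supremum over t is at most 1/a
```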
The function f(t, x) is defined by f(t, x) = x. It is clear that this function is continuous in t uniformly in x from R₊ × R into R and |f(t, 0)| = 0. The Lipschitz condition is obviously satisfied, and thus Condition (ii) has been met. By assumption, t → H(t;ω) is continuous from R₊ into L₂(Ω, 𝒜, 𝒫). It remains only to show that this map is bounded. Observe that
‖H(t;ω)‖ = [∫_Ω |γ(ω)[N − R(t;ω)]R(0;ω)p(t, t)|² d𝒫(ω)]^{1/2}
≤ [∫_Ω N²|γ(ω)R(0;ω)|² d𝒫(ω)]^{1/2}
= N‖γ(ω)R(0;ω)‖
≤ N|‖γ(ω)‖| ‖R(0;ω)‖.

Since this argument is independent of t ≥ 0, we have that the map is bounded. Thus we can conclude that the random integral equation (6.2.8) possesses a unique random solution provided that ‖H(t;ω)‖_{C_g}, λ, and ‖f(t, 0)‖_{C_g} are sufficiently small. That is, small in the sense that
‖H(t;ω)‖_{C_g} + K‖f(t, 0)‖_{C_g} ≤ ρ(1 − λK)

and

λK < 1,

where K = ‖T‖*. Since f(t, 0) = 0, we have ‖f(t, 0)‖_{C_g} ≡ ‖f(t, 0)‖_C = 0, and the model requires only that λK < 1 and ‖H(t;ω)‖_C ≤ ρ(1 − λK), ρ > 0. Although the model has been formulated for communicable diseases, a similar model can be developed in the case of the spread of a rumor throughout a community, when the "news value" of the rumor decreases with time, so that a person would be less likely to communicate the rumor, say, two weeks after hearing it than he would only a few hours after hearing it.
CHAPTER VII
On a Nonlinear Random Integral Equation with Application to Stochastic Chemical Kinetics
7.0
Introduction
The aim of this chapter is to study a random vector integral equation of the form given by

x(t;ω) = h(t, x(t;ω)) + ∫₀ᵗ k(τ, x(τ;ω);ω) dτ,  t ≥ 0,  (7.0.1)

where ω ∈ Ω, the supporting set of a probability measure space (Ω, 𝒜, 𝒫); x(t;ω) is the unknown m-dimensional vector-valued random function defined on R₊; the stochastic kernel k(t, x(t;ω);ω) is an m-dimensional vector-valued function on R₊; and for each t ∈ R₊ and each m-dimensional vector-valued random function x(t;ω), h(t, x(t;ω)) is an m-dimensional vector-valued random variable. More specifically, we are interested in the existence and uniqueness of a solution, a random vector-valued function x(t;ω),
and some special cases which are important for studying certain physical problems. In addition, we shall give a stochastic formulation of a classical chemical kinetics problem. The formulation of such a model results in a random integral equation of the form given by (7.0.1). In Section 7.1 we shall introduce some topological spaces and definitions and state and prove certain lemmas which are essential in our study. An existence theorem and some special cases are given in Section 7.2. In Section 7.3 we shall give a complete and precise formulation of a chemical kinetics problem which is quite realistic for describing the physical phenomenon. The results given in this chapter are due to Milton and Tsokos [1, 5].
7.1 Mathematical Preliminaries
In this section we shall give some definitions and concepts which are basic to our study.

Definition 7.1.1 Two random vectors x(ω) = {x₁(ω), x₂(ω), …, x_m(ω)} and y(ω) = {y₁(ω), y₂(ω), …, y_m(ω)} are said to be equal if and only if xᵢ(ω) = yᵢ(ω) 𝒫-a.e. for each i = 1, 2, …, m.
Definition 7.1.2 We shall denote by ψ(Ω, 𝒜, 𝒫) the set of all random vectors of the form z(ω) = {z₁(ω), z₂(ω), …, z_m(ω)}, where, for each i = 1, 2, …, m, zᵢ(ω) is an element of L∞(Ω, 𝒜, 𝒫).
Lemma 7.1.1 The space ψ(Ω, 𝒜, 𝒫) is a normed linear space over the real numbers with the usual definitions of componentwise addition and scalar multiplication, where the norm in ψ(Ω, 𝒜, 𝒫) is defined by

‖z(ω)‖_{ψ(Ω,𝒜,𝒫)} = maxᵢ |‖zᵢ(ω)‖|.

PROOF The fact that ψ(Ω, 𝒜, 𝒫) is a linear space follows from the fact that L∞(Ω, 𝒜, 𝒫) is a linear space. That ‖·‖_{ψ(Ω,𝒜,𝒫)} is a norm follows from the fact that |‖·‖| is a norm.
Definition 7.1.3 C_c(R₊, ψ(Ω, 𝒜, 𝒫)) will denote the set of all continuous functions from R₊ into ψ(Ω, 𝒜, 𝒫).
We remark that Definition 7.1.3 simply states that t → x(t;ω) = {x₁(t;ω), x₂(t;ω), …, x_m(t;ω)} is continuous and that for each fixed t ∈ R₊ and each i = 1, 2, …, m, xᵢ(t;ω) ∈ L∞(Ω, 𝒜, 𝒫). Therefore for each t ∈ R₊, x(t;ω) is an element of ψ(Ω, 𝒜, 𝒫).
Also, an element of the space C_c(R₊, ψ(Ω, 𝒜, 𝒫)) is a random function. We shall be assuming that for each i the sample function xᵢ(t;ω) is continuous in t for each ω ∈ Ω. Thus, since we are working with a finite measure space, for each t and i, E|xᵢ(t;ω)| < ∞. Defining the norm of ψ(Ω, 𝒜, 𝒫) as in Lemma 7.1.1 will enable us to obtain a relatively simple norm defined in terms of the components of the random vector.
Lemma 7.1.2 The space C_c(R₊, ψ(Ω, 𝒜, 𝒫)) is a linear space over the real numbers with the usual definitions of addition and scalar multiplication for continuous functions.

Lemma 7.1.3 The collection
F = {‖x(t;ω)‖_n : ‖x(t;ω)‖_n = sup_{0≤t≤n} ‖x(t;ω)‖_{ψ(Ω,𝒜,𝒫)}},

for n = 1, 2, …, is a family of semi-norms defined on C_c(R₊, ψ(Ω, 𝒜, 𝒫)).
PROOF By definition, x(t;ω) is continuous from R₊ into ψ(Ω, 𝒜, 𝒫). Thus, given the compact set [0, n] ⊂ R₊, there exists some constant M_n such that t ∈ [0, n] implies ‖x(t;ω)‖_{ψ(Ω,𝒜,𝒫)} ≤ M_n. Hence {‖x(t;ω)‖_{ψ(Ω,𝒜,𝒫)} : t ∈ [0, n]} is bounded above by a constant, which implies that the supremum exists and thus that ‖x(t;ω)‖_n is uniquely determined and also nonnegative. Furthermore,

‖αx(t;ω)‖_n = sup_{0≤t≤n} {‖αx(t;ω)‖_{ψ(Ω,𝒜,𝒫)}} = sup_{0≤t≤n} {|α| ‖x(t;ω)‖_{ψ(Ω,𝒜,𝒫)}} = |α| ‖x(t;ω)‖_n

and

‖x(t;ω) + y(t;ω)‖_n ≤ ‖x(t;ω)‖_n + ‖y(t;ω)‖_n.

Therefore, since the argument is independent of the choice of n, ‖·‖_n is a semi-norm on C_c(R₊, ψ(Ω, 𝒜, 𝒫)) for n = 1, 2, 3, ….

Lemma 7.1.4 The family of semi-norms defined on C_c(R₊, ψ(Ω, 𝒜, 𝒫)) satisfies the axiom of separation.
PROOF Choose x(t;ω) ∈ C_c(R₊, ψ(Ω, 𝒜, 𝒫)), x(t;ω) ≠ 0. Thus there exists a t₀ ∈ R₊ such that x(t₀;ω) ≠ 0. Now choose a natural number N such that 0 ≤ t₀ ≤ N, and consider

‖x(t;ω)‖_N = sup_{0≤t≤N} ‖x(t;ω)‖_{ψ(Ω,𝒜,𝒫)} ≥ ‖x(t₀;ω)‖_{ψ(Ω,𝒜,𝒫)} > 0.

Therefore we have ‖x(t;ω)‖_N ≠ 0, which implies that the family F of semi-norms satisfies the axiom of separation.
Lemma 7.1.5 The space C_c(R₊, ψ(Ω, 𝒜, 𝒫)) can be topologized by the family of semi-norms F, and the topology obtained is locally convex and Hausdorff. The proof follows from Theorems 1.1.11 and 1.1.12.
Lemma 7.1.6 The topology τ on C_c(R₊, ψ(Ω, 𝒜, 𝒫)) induced by the family of semi-norms F is metrizable, and the metric is defined by

ρ(x(t;ω), y(t;ω)) = Σ_{n=1}^{∞} (1/2ⁿ) ‖x(t;ω) − y(t;ω)‖_n / [1 + ‖x(t;ω) − y(t;ω)‖_n].

The proof follows from the fact that (‖·‖_n) is an increasing sequence of semi-norms on C_c(R₊, ψ(Ω, 𝒜, 𝒫)) and Theorem 1.1.13.
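A finite truncation of this metric is straightforward to compute; in the sketch below the semi-norms are approximated by grid maxima, and the scalar paths f, g, h are hypothetical stand-ins for elements of the function space.

```python
import numpy as np

# Finite truncation of the metric of Lemma 7.1.6, with the semi-norm
# ||x||_n = sup_{0<=t<=n} ||x(t)|| approximated on a grid.

def seminorm(diff, n, pts=200):
    t = np.linspace(0.0, float(n), pts)
    return float(np.max(np.abs(diff(t))))

def metric(x, y, N=30):
    total = 0.0
    for n in range(1, N + 1):
        d = seminorm(lambda t: x(t) - y(t), n)
        total += 2.0 ** (-n) * d / (1.0 + d)   # each summand is < 2^{-n}
    return total

f = lambda t: np.sin(t)
g = lambda t: np.sin(t) + 0.1
h = lambda t: np.cos(t)

assert metric(f, f) == 0.0
assert metric(f, g) < 1.0              # the full series is bounded by 1
assert metric(f, h) <= metric(f, g) + metric(g, h) + 1e-12
```

The last assertion illustrates that the transform d → d/(1 + d) preserves the triangle inequality, which is the key fact behind Theorem 1.1.13.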
Lemma 7.1.7 The topology τ on C_c(R₊, ψ(Ω, 𝒜, 𝒫)) induced by F is the topology of uniform convergence.

PROOF Assume that

lim_{m→∞} ‖xᵐ(t;ω) − x(t;ω)‖_{ψ(Ω,𝒜,𝒫)} = 0

uniformly on every compact interval. We shall show that given an ε > 0, there exists a natural number N_ε such that m > N_ε implies that

ρ(xᵐ(t;ω), x(t;ω)) < ε.

That is, we need to show that for m > N_ε,

Σ_{n=1}^{∞} (1/2ⁿ) ‖xᵐ(t;ω) − x(t;ω)‖_n / [1 + ‖xᵐ(t;ω) − x(t;ω)‖_n] < ε.
Since Σ_{n=1}^{∞} 1/2ⁿ is finite, there exists a natural number M such that Σ_{n=M+1}^{∞} 1/2ⁿ < ε/2. Also for each n and m we have

‖xᵐ(t;ω) − x(t;ω)‖_n / [1 + ‖xᵐ(t;ω) − x(t;ω)‖_n] < 1.

Thus we have

Σ_{n=M+1}^{∞} (1/2ⁿ) ‖xᵐ(t;ω) − x(t;ω)‖_n / [1 + ‖xᵐ(t;ω) − x(t;ω)‖_n] < ε/2.
On an interval [0, M] there exists a natural number N_{ε,M} such that for m > N_{ε,M} we have

‖xᵐ(t;ω) − x(t;ω)‖_{ψ(Ω,𝒜,𝒫)} < q/(1 − q)

for every t ∈ [0, M], where q = (ε/2)(1/Σ_{n=1}^{M} 1/2ⁿ). Since

‖xᵐ(t;ω) − x(t;ω)‖_M = sup_{0≤t≤M} {‖xᵐ(t;ω) − x(t;ω)‖_{ψ(Ω,𝒜,𝒫)}},

for m > N_{ε,M} we have

‖xᵐ(t;ω) − x(t;ω)‖_M ≤ q/(1 − q).
Because the sequence (‖·‖_n) of semi-norms is increasing, for n = 1, 2, …, M and m > N_{ε,M}, ‖xᵐ(t;ω) − x(t;ω)‖_n ≤ q/(1 − q). Choose N_ε = max(M, N_{ε,M}). Then for m > N_ε we have

ρ(xᵐ(t;ω), x(t;ω)) ≤ Σ_{n=1}^{M} (1/2ⁿ)q + Σ_{n=M+1}^{∞} 1/2ⁿ < (Σ_{n=1}^{M} 1/2ⁿ)(ε/2)(1/Σ_{n=1}^{M} 1/2ⁿ) + ε/2 = ε.
Therefore

‖xᵐ(t;ω) − x(t;ω)‖_{ψ(Ω,𝒜,𝒫)} → 0

uniformly on closed intervals implies that xᵐ(t;ω) → x(t;ω) in the metric topology on C_c(R₊, ψ(Ω, 𝒜, 𝒫)). Now assume that xᵐ(t;ω) → x(t;ω) in the metric topology on C_c(R₊, ψ(Ω, 𝒜, 𝒫)), but ‖xᵐ(t;ω) − x(t;ω)‖_{ψ(Ω,𝒜,𝒫)} does not converge uniformly to zero on some interval [0, Q]. Without loss of generality assume that Q is a natural number. Thus by definition of uniform convergence there exists an ε > 0 such that for each natural number Z there exist a point t_z ∈ [0, Q] and a k > Z such that

‖x^k(t_z;ω) − x(t_z;ω)‖_{ψ(Ω,𝒜,𝒫)} > ε.
Since xᵐ(t;ω) → x(t;ω) in the metric topology, there exists an M such that, for m > M, we have

Σ_{n=1}^{∞} (1/2ⁿ) ‖xᵐ(t;ω) − x(t;ω)‖_n / [1 + ‖xᵐ(t;ω) − x(t;ω)‖_n] < (1/2^Q) ε/(1 + ε).

In particular, m > M implies

(1/2^Q) ‖xᵐ(t;ω) − x(t;ω)‖_Q / [1 + ‖xᵐ(t;ω) − x(t;ω)‖_Q] < (1/2^Q) ε/(1 + ε).

However, since

‖xᵐ(t;ω) − x(t;ω)‖_Q ≥ ‖xᵐ(t;ω) − x(t;ω)‖_{ψ(Ω,𝒜,𝒫)}

for t ∈ [0, Q], we have for m > M and t ∈ [0, Q] the following inequality:

(1/2^Q) ‖xᵐ(t;ω) − x(t;ω)‖_{ψ(Ω,𝒜,𝒫)} / [1 + ‖xᵐ(t;ω) − x(t;ω)‖_{ψ(Ω,𝒜,𝒫)}]
≤ (1/2^Q) ‖xᵐ(t;ω) − x(t;ω)‖_Q / [1 + ‖xᵐ(t;ω) − x(t;ω)‖_Q]
< (1/2^Q) ε/(1 + ε).  (7.1.1)

Inequality (7.1.1) holds because of the fact that the function g(t) = t/(1 + t) is increasing. But since uniform convergence does not hold on the interval [0, Q], there exist a t₀ ∈ [0, Q] and an r > M such that

‖x^r(t₀;ω) − x(t₀;ω)‖_{ψ(Ω,𝒜,𝒫)} > ε,

implying that

(1/2^Q) ‖x^r(t₀;ω) − x(t₀;ω)‖_{ψ(Ω,𝒜,𝒫)} / [1 + ‖x^r(t₀;ω) − x(t₀;ω)‖_{ψ(Ω,𝒜,𝒫)}] > (1/2^Q) ε/(1 + ε).

This is a contradiction to inequality (7.1.1), and hence the proof is complete.

The following lemma is analogous to Lemma 2.1.1.

Lemma 7.1.8 Let T be a continuous linear operator from C_c(R₊, ψ(Ω, 𝒜, 𝒫)) into C_c(R₊, ψ(Ω, 𝒜, 𝒫)).
If B and D are Banach spaces stronger than C_c(R₊, ψ(Ω, 𝒜, 𝒫)) and if (B, D) is admissible with respect to T, then T is continuous from B to D. Thus, as stated in Chapters II and IV, T: B → D is bounded and there exists a constant K such that

‖(Tx)(t;ω)‖_D ≤ K‖x(t;ω)‖_B.

Thus we use the usual method to define the norm of T by

K = ‖T‖* = sup{‖(Tx)(t;ω)‖_D/‖x(t;ω)‖_B : x(t;ω) ∈ B, ‖x(t;ω)‖_B ≠ 0}.
Definition 7.1.4 The random vector-valued function x(t;ω) on R₊ is a random solution of Eq. (7.0.1) if for each t ∈ R₊, x(t;ω) is a vector random variable and satisfies Eq. (7.0.1) 𝒫-a.e.

The following lemma is essential for obtaining the main results of the chapter.
Lemma 7.1.9 The operator T,

(Tx)(t;ω) = ∫₀ᵗ x(τ;ω) dτ,

defined on C_c(R₊, ψ(Ω, 𝒜, 𝒫)), is a continuous linear operator from C_c(R₊, ψ(Ω, 𝒜, 𝒫)) into C_c(R₊, ψ(Ω, 𝒜, 𝒫)).
PROOF For fixed t we shall show that (Tx)(t;ω) ∈ ψ(Ω, 𝒜, 𝒫). It is sufficient to show that for fixed t and each i, the function of ω

(Txᵢ)(t;ω) = ∫₀ᵗ xᵢ(τ;ω) dτ

is 𝒫-essentially bounded. Consider

|(Txᵢ)(t;ω)| = |∫₀ᵗ xᵢ(τ;ω) dτ| ≤ ∫₀ᵗ |xᵢ(τ;ω)| dτ ≤ ∫₀ᵗ |‖xᵢ(τ;ω)‖| dτ,  𝒫-a.e.

Now, x(τ;ω) is continuous from R₊ into ψ(Ω, 𝒜, 𝒫), since x(τ;ω) ∈ C_c(R₊, ψ(Ω, 𝒜, 𝒫)) by assumption. This implies that x(τ;ω) is continuous on [0, t], and hence there exists an M such that τ ∈ [0, t] implies ‖x(τ;ω)‖_{ψ(Ω,𝒜,𝒫)} ≤ M. Thus, for τ ∈ [0, t], maxᵢ |‖xᵢ(τ;ω)‖| ≤ M, implying that, for τ ∈ [0, t], |‖xᵢ(τ;ω)‖| ≤ M for each i. Therefore

|(Txᵢ)(t;ω)| ≤ ∫₀ᵗ |‖xᵢ(τ;ω)‖| dτ ≤ ∫₀ᵗ M dτ = tM,  𝒫-a.e.

Thus for fixed t, |(Txᵢ)(t;ω)| ≤ tM 𝒫-a.e., implying that (Txᵢ)(t;ω) is 𝒫-essentially bounded. Hence (Tx)(t;ω) is a function from R₊ into ψ(Ω, 𝒜, 𝒫). To show that (Tx)(t;ω) ∈ C_c(R₊, ψ(Ω, 𝒜, 𝒫)), it is also necessary to show that (Tx)(t;ω) is continuous on R₊. Therefore we must show that tₙ → t in R₊ implies that (Tx)(tₙ;ω) → (Tx)(t;ω) in ψ(Ω, 𝒜, 𝒫) as n → ∞. This means that we must show that ‖(Tx)(tₙ;ω) − (Tx)(t;ω)‖_{ψ(Ω,𝒜,𝒫)} can be made arbitrarily small for large enough n. Since

‖(Tx)(tₙ;ω) − (Tx)(t;ω)‖_{ψ(Ω,𝒜,𝒫)} = maxᵢ |‖(Txᵢ)(tₙ;ω) − (Txᵢ)(t;ω)‖|,
it is sufficient to show that for each i, |‖(Txᵢ)(tₙ;ω) − (Txᵢ)(t;ω)‖| can be made arbitrarily small. Take ε > 0 and consider the set [0, t + ε]. Since x(τ;ω) ∈ C_c(R₊, ψ(Ω, 𝒜, 𝒫)), it is continuous from R₊ into ψ(Ω, 𝒜, 𝒫). Hence it is also continuous on the compact set [0, t + ε], and there exists a constant M_ε such that τ ∈ [0, t + ε] implies ‖x(τ;ω)‖_{ψ(Ω,𝒜,𝒫)} ≤ M_ε. Without loss of generality we may assume M_ε > 1. Thus for τ ∈ [0, t + ε], maxᵢ |‖xᵢ(τ;ω)‖| ≤ M_ε. This, in turn, implies that for τ ∈ [0, t + ε] and each i, |‖xᵢ(τ;ω)‖| ≤ M_ε. Hence

|xᵢ(τ;ω)| ≤ |‖xᵢ(τ;ω)‖| ≤ M_ε,  𝒫-a.e.,  (7.1.2)

for τ ∈ [0, t + ε]. Since tₙ → t in R₊, there exists an N_ε such that for n > N_ε, |tₙ − t| < ε/M_ε < ε. Consider |(Txᵢ)(tₙ;ω) − (Txᵢ)(t;ω)| for n > N_ε, where i is
arbitrary but fixed. By definition

|(Txᵢ)(tₙ;ω) − (Txᵢ)(t;ω)| = |∫₀^{tₙ} xᵢ(τ;ω) dτ − ∫₀ᵗ xᵢ(τ;ω) dτ|.

Then for tₙ > t we have

|(Txᵢ)(tₙ;ω) − (Txᵢ)(t;ω)| = |∫ₜ^{tₙ} xᵢ(τ;ω) dτ| ≤ ∫ₜ^{tₙ} |xᵢ(τ;ω)| dτ.

Since [t, tₙ] ⊂ [0, t + ε], we have by inequality (7.1.2) that

|(Txᵢ)(tₙ;ω) − (Txᵢ)(t;ω)| ≤ ∫ₜ^{tₙ} M_ε dτ,  𝒫-a.e.,

implying that

|(Txᵢ)(tₙ;ω) − (Txᵢ)(t;ω)| ≤ M_ε tₙ − M_ε t = |tₙ − t|M_ε < (ε/M_ε)M_ε = ε.
That is, for n > N_ε, |(Tx_i)(t_n; ω) − (Tx_i)(t; ω)| is arbitrarily small for almost all ω. By definition we have |||(Tx_i)(t_n; ω) − (Tx_i)(t; ω)||| = inf S, where

S = {z > 0 : P{ω : |(Tx_i)(t_n; ω) − (Tx_i)(t; ω)| > z} = 0}.

Thus we have shown that P{ω : |(Tx_i)(t_n; ω) − (Tx_i)(t; ω)| > ε} = 0 for n > N_ε. Hence ε ∈ S and

|||(Tx_i)(t_n; ω) − (Tx_i)(t; ω)||| ≤ ε   for n > N_ε,

as was desired. Thus (Tx)(t; ω) is continuous from R+ into ψ(Ω, A, P), implying that T does map C_c(R+, ψ(Ω, A, P)) into C_c(R+, ψ(Ω, A, P)).
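The norm |||·||| used in this argument, |||Z||| = inf{z > 0 : P{ω : |Z(ω)| > z} = 0}, can be made concrete on a finite probability space, where it reduces to the largest value of |Z| carried by a point of positive probability. A minimal sketch; the sample space and random variable are invented for illustration:

```python
from fractions import Fraction

# |||Z||| = inf{ z > 0 : P{omega : |Z(omega)| > z} = 0 }, evaluated on a
# finite sample space, where it is the largest |Z| on points of positive mass.
probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4), "d": Fraction(0)}
Z = {"a": -3.0, "b": 1.5, "c": 2.0, "d": 100.0}  # "d" is a null point

def ess_sup_norm(Z, probs):
    # On a finite space this infimum is attained as a maximum over non-null points.
    return max(abs(Z[w]) for w in probs if probs[w] > 0)

# The value 100.0 sits on a set of probability zero, so it does not count:
assert ess_sup_norm(Z, probs) == 3.0
```

This is why all the estimates in the proof need only hold P-a.e.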
VII
AN APPLICATION TO STOCHASTIC CHEMICAL KINETICS
We now need to show that the mapping T itself is continuous. Let x^n(t; ω) → x(t; ω) in C_c(R+, ψ(Ω, A, P)) as n → ∞. We need to show that (Tx^n)(t; ω) → (Tx)(t; ω) as n → ∞. The topology on the space C_c(R+, ψ(Ω, A, P)) is the topology of uniform convergence on compact intervals. Select ε > 0 and pick an interval [0, Q] ⊂ R+, where Q is arbitrary but fixed. Since x^n(t; ω) → x(t; ω), there exists an N_{ε,Q} such that n > N_{ε,Q} implies that

||x^n(t; ω) − x(t; ω)||_ψ(Ω,A,P) < ε/Q

for all t ∈ [0, Q]. By definition of the norm in ψ(Ω, A, P),

max_i |||x_i^n(t; ω) − x_i(t; ω)||| < ε/Q

for t ∈ [0, Q] and n > N_{ε,Q}. Thus

|||x_i^n(t; ω) − x_i(t; ω)||| < ε/Q

for t ∈ [0, Q] and n > N_{ε,Q}. This implies that

|x_i^n(t; ω) − x_i(t; ω)| < ε/Q,   P-a.e.,

for n > N_{ε,Q} and t ∈ [0, Q]. Consider |(Tx_i^n)(t; ω) − (Tx_i)(t; ω)| for n > N_{ε,Q} and t ∈ [0, Q]. We can write
|(Tx_i^n)(t; ω) − (Tx_i)(t; ω)| = |∫_0^t [x_i^n(τ; ω) − x_i(τ; ω)] dτ|
  ≤ ∫_0^t |x_i^n(τ; ω) − x_i(τ; ω)| dτ
  ≤ ∫_0^Q |x_i^n(τ; ω) − x_i(τ; ω)| dτ
  ≤ ∫_0^Q (ε/Q) dτ = ε,   P-a.e.

That is, for n > N_{ε,Q} and t ∈ [0, Q],

|(Tx_i^n)(t; ω) − (Tx_i)(t; ω)| ≤ ε,   P-a.e.

Thus for n > N_{ε,Q} and t ∈ [0, Q],

P{ω : |(Tx_i^n)(t; ω) − (Tx_i)(t; ω)| > ε} = 0.

This implies that for n > N_{ε,Q} and t ∈ [0, Q],

|||(Tx_i^n)(t; ω) − (Tx_i)(t; ω)||| ≤ ε.
Since the argument is independent of the choice of i, it holds for each i. Therefore we can write that for n > N_{ε,Q} and t ∈ [0, Q],

||(Tx^n)(t; ω) − (Tx)(t; ω)||_ψ(Ω,A,P) = max_i |||(Tx_i^n)(t; ω) − (Tx_i)(t; ω)||| ≤ ε.

This implies that ||(Tx^n)(t; ω) − (Tx)(t; ω)||_ψ(Ω,A,P) converges to zero uniformly on [0, Q], which is equivalent to saying that (Tx^n)(t; ω) → (Tx)(t; ω) in C_c, and the proof is complete.

Definition 7.1.5 C_g = C_g(R+, ψ(Ω, A, P)) will denote the set of all continuous functions x(t; ω) from R+ into ψ(Ω, A, P) such that, for g a positive-valued continuous function on R+, we have

||x(t; ω)||_ψ(Ω,A,P) ≤ Z g(t),

where Z is some positive constant which depends on x(t; ω).

Definition 7.1.6 C′ = C′(R+, ψ(Ω, A, P)) will denote the set of all continuous and bounded functions x(t; ω) from R+ into ψ(Ω, A, P).
Lemma 7.1.10 The space C_g(R+, ψ(Ω, A, P)) is a normed linear subspace of C_c(R+, ψ(Ω, A, P)), where the norm in C_g is given by

||x(t; ω)||_C_g = sup_{t≥0} {||x(t; ω)||_ψ(Ω,A,P)/g(t)}.

PROOF Let x(t; ω), y(t; ω) ∈ C_g(R+, ψ(Ω, A, P)). Then we can write

||x(t; ω) + y(t; ω)||_ψ(Ω,A,P) ≤ ||x(t; ω)||_ψ(Ω,A,P) + ||y(t; ω)||_ψ(Ω,A,P) ≤ Z_1 g(t) + Z_2 g(t) = (Z_1 + Z_2) g(t)

and

||a x(t; ω)|| = |a| ||x(t; ω)|| ≤ |a| Z g(t) = Z′ g(t).

Thus the space C_g(R+, ψ(Ω, A, P)) is a linear subspace of C_c(R+, ψ(Ω, A, P)). Also, from the definition we know that ||·||_C_g ≥ 0. The fact that ||x(t; ω)||_C_g = 0 if and only if x(t; ω) = 0 follows from the fact that ||·||_ψ(Ω,A,P) is a norm by Lemma 7.1.1. The fact that

||a x(t; ω)||_C_g = |a| ||x(t; ω)||_C_g

also follows directly from the fact that ||·||_ψ(Ω,A,P) is a norm. To show that the triangle inequality holds, consider

||x(t; ω) + y(t; ω)||_C_g = sup_{t≥0} {||x(t; ω) + y(t; ω)||_ψ(Ω,A,P)/g(t)}
  ≤ sup_{t≥0} {[||x(t; ω)||_ψ(Ω,A,P)/g(t)] + [||y(t; ω)||_ψ(Ω,A,P)/g(t)]}
  ≤ sup_{t≥0} {||x(t; ω)||_ψ(Ω,A,P)/g(t)} + sup_{t≥0} {||y(t; ω)||_ψ(Ω,A,P)/g(t)}
  = ||x(t; ω)||_C_g + ||y(t; ω)||_C_g.
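The defining bound of C_g can be checked numerically for a particular function. The fragment below is a sketch only; the weight g, the sample function x, and the grid are assumed choices, not taken from the text:

```python
import math

def cg_norm(x, g, ts):
    # ||x||_{C_g} = sup_t ||x(t)|| / g(t), approximated on a finite grid of t values
    return max(abs(x(t)) / g(t) for t in ts)

# Example: x(t) = t * e^{-t} grows no faster than g(t) = 1 + t,
# so x lies in C_g with constant Z = ||x||_{C_g}.
x = lambda t: t * math.exp(-t)
g = lambda t: 1.0 + t
ts = [k * 0.01 for k in range(10001)]  # grid on [0, 100]

Z = cg_norm(x, g, ts)
# The defining bound ||x(t)|| <= Z * g(t) then holds at every grid point.
assert all(abs(x(t)) <= Z * g(t) + 1e-12 for t in ts)
```

The same computation with g ≡ 1 gives the C′ norm of Lemma 7.1.13.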
Lemma 7.1.11 The normed linear space ψ(Ω, A, P) is complete.

PROOF Let Z^n(ω) be a Cauchy sequence in ψ(Ω, A, P). We can write the sequence in the form

(Z^n(ω)) = ((Z_1^n(ω), Z_2^n(ω), ..., Z_m^n(ω))).

We shall show first that for each i = 1, 2, ..., m the sequence (Z_i^n(ω)) is Cauchy in L_∞(Ω, A, P). Since Z^n(ω) is Cauchy in ψ(Ω, A, P), given ε > 0, there exists an N_ε such that s, n > N_ε implies that

||Z^s(ω) − Z^n(ω)||_ψ(Ω,A,P) < ε.

Hence, by definition, for s, n > N_ε we have that

max_i |||Z_i^s(ω) − Z_i^n(ω)||| < ε.

This implies that for s, n > N_ε and each i,

|||Z_i^s(ω) − Z_i^n(ω)||| < ε.

Therefore for each i the sequence Z_i^n(ω) is Cauchy in L_∞(Ω, A, P). Since L_∞(Ω, A, P) is a Banach space, Z_i^n(ω) converges in L_∞(Ω, A, P) to an element Z_i(ω). We shall now show that the sequence Z^n(ω) converges in ψ(Ω, A, P) to the element Z(ω), where Z(ω) = (Z_1(ω), Z_2(ω), ..., Z_m(ω)). Since for each i, Z_i^n(ω) → Z_i(ω), given ε > 0 there exists an N_{i,ε} such that n > N_{i,ε} implies that

|||Z_i^n(ω) − Z_i(ω)||| < ε.

Choose N_ε = max_i {N_{i,ε}}; then for n > N_ε,

max_i |||Z_i^n(ω) − Z_i(ω)||| < ε.

Hence, by definition, for n > N_ε,

||Z^n(ω) − Z(ω)||_ψ(Ω,A,P) < ε,

implying that Z^n(ω) converges to Z(ω), and the proof is complete.

Lemma 7.1.12 The space C_g(R+, ψ(Ω, A, P)) is complete.
PROOF Let x^n(t; ω) be a Cauchy sequence in C_g(R+, ψ(Ω, A, P)). Fix t_0 ∈ R+ and consider x^n(t_0; ω). We shall show that the sequence x^n(t_0; ω) is Cauchy in ψ(Ω, A, P). Choose ε > 0. Since x^n(t; ω) is Cauchy in C_g(R+, ψ(Ω, A, P)), there exists an N_ε such that s, n > N_ε implies

||x^n(t; ω) − x^s(t; ω)||_C_g < ε/g(t_0).

Thus by definition

sup_{t≥0} {||x^n(t; ω) − x^s(t; ω)||_ψ(Ω,A,P)/g(t)} < ε/g(t_0),

implying in particular that for n, s > N_ε,

||x^n(t_0; ω) − x^s(t_0; ω)||_ψ(Ω,A,P)/g(t_0) < ε/g(t_0).

That is, for n, s > N_ε,

||x^n(t_0; ω) − x^s(t_0; ω)||_ψ(Ω,A,P) < ε,

implying that x^n(t_0; ω) is a Cauchy sequence in ψ(Ω, A, P). Since ψ(Ω, A, P) is a Banach space, there exists an x(t_0; ω) such that x^n(t_0; ω) → x(t_0; ω). Since the argument is independent of the choice of t_0, x^n(t; ω) → x(t; ω) in ψ(Ω, A, P) for each t ∈ R+. We claim that the function t → x(t; ω) is an element of C_g(R+, ψ(Ω, A, P)). Choose ε > 0. Since x^n(t; ω) is Cauchy in C_g(R+, ψ(Ω, A, P)), there exists an N_ε such that s, n > N_ε implies

||x^s(t; ω) − x^n(t; ω)||_C_g < ε/3.

Choose t_0 ∈ R+ arbitrary but fixed and s(t_0, ε) such that s(t_0, ε) > N_ε and also such that

||x^{s(t_0,ε)}(t_0; ω) − x(t_0; ω)||_ψ(Ω,A,P) < (ε/3) g(t_0).

Now we can write

||x^n(t_0; ω) − x(t_0; ω)||_ψ(Ω,A,P)/g(t_0)
  = ||x^n(t_0; ω) − x^{s(t_0,ε)}(t_0; ω) + x^{s(t_0,ε)}(t_0; ω) − x(t_0; ω)||_ψ(Ω,A,P)/g(t_0)
  ≤ [||x^n(t_0; ω) − x^{s(t_0,ε)}(t_0; ω)||_ψ(Ω,A,P)/g(t_0)] + [||x^{s(t_0,ε)}(t_0; ω) − x(t_0; ω)||_ψ(Ω,A,P)/g(t_0)]
  ≤ ε/3 + ε/3 < ε

for all n > N_ε. Note that the integer N_ε is independent of the choice of t_0, and since t_0 was arbitrary, we may conclude that for n > N_ε,

sup_{t≥0} {||x^n(t; ω) − x(t; ω)||_ψ(Ω,A,P)/g(t)} ≤ ε.

To show that there exists a constant Z such that ||x(t; ω)||_ψ(Ω,A,P) ≤ Z g(t), consider the following. By the preceding argument we can choose an n such that

sup_{t≥0} {||x^n(t; ω) − x(t; ω)||_ψ(Ω,A,P)/g(t)} < 1.

Now

||x(t; ω)||_ψ(Ω,A,P)/g(t) = ||x(t; ω) − x^n(t; ω) + x^n(t; ω)||_ψ(Ω,A,P)/g(t)
  ≤ [||x(t; ω) − x^n(t; ω)||_ψ(Ω,A,P)/g(t)] + [||x^n(t; ω)||_ψ(Ω,A,P)/g(t)]
  ≤ 1 + ||x^n(t; ω)||_C_g = Z.
Therefore

||x(t; ω)||_ψ(Ω,A,P) ≤ Z g(t).

To show that the function t → x(t; ω) is continuous, we must show that t_m → t in R+ implies that x(t_m; ω) → x(t; ω) in ψ(Ω, A, P) as m → ∞. Fix t_0 ∈ R+. Let t_m → t_0. Choose ε > 0. There exists an N_1 such that m > N_1 implies |g(t_m) − g(t_0)| < g(t_0), so that g(t_m) < 2g(t_0). By the first argument we can choose an n large enough so that

sup_{t≥0} {||x^n(t; ω) − x(t; ω)||_ψ(Ω,A,P)/g(t)} < (1/4)ε/g(t_0).

Since x^n(t; ω) ∈ C_g(R+, ψ(Ω, A, P)), there exists an N_2 such that m > N_2 implies ||x^n(t_m; ω) − x^n(t_0; ω)||_ψ(Ω,A,P) < (1/4)ε. Let N = max(N_1, N_2). Then for m > N we have

||x(t_m; ω) − x(t_0; ω)||_ψ(Ω,A,P) ≤ ||x(t_m; ω) − x^n(t_m; ω)||_ψ(Ω,A,P) + ||x^n(t_m; ω) − x^n(t_0; ω)||_ψ(Ω,A,P) + ||x^n(t_0; ω) − x(t_0; ω)||_ψ(Ω,A,P).

However,

||x(t_m; ω) − x^n(t_m; ω)||_ψ(Ω,A,P) ≤ [(1/4)ε/g(t_0)] g(t_m) < [(1/4)ε/g(t_0)] 2g(t_0) = ε/2

and

||x^n(t_0; ω) − x(t_0; ω)||_ψ(Ω,A,P) ≤ [(1/4)ε/g(t_0)] g(t_0) = ε/4.

Hence for m > N,

||x(t_m; ω) − x(t_0; ω)||_ψ(Ω,A,P) < ε/2 + ε/4 + ε/4 = ε.

This is simply the definition of convergence in ψ(Ω, A, P). Since the choice of t_0 was arbitrary, the same argument will suffice for each t, and we have that the function t → x(t; ω) is continuous. We can thus conclude that the function t → x(t; ω) is an element of C_g(R+, ψ(Ω, A, P)). By the conclusion of the first part of the proof, there exists an N_ε such that n > N_ε implies that

sup_{t≥0} {||x^n(t; ω) − x(t; ω)||_ψ(Ω,A,P)/g(t)} ≤ ε.

Then for n > N_ε,

||x^n(t; ω) − x(t; ω)||_C_g ≤ ε,

which implies that the Cauchy sequence x^n(t; ω) → x(t; ω) in C_g(R+, ψ(Ω, A, P)). Therefore the space C_g(R+, ψ(Ω, A, P)) is complete, as was to be shown.
Lemma 7.1.13 The space C′(R+, ψ(Ω, A, P)) is a normed linear subspace of C_c(R+, ψ(Ω, A, P)), where the norm in C′ is given by

||x(t; ω)||_C′ = sup_{t≥0} {||x(t; ω)||_ψ(Ω,A,P)}.

The proof is similar to that of Lemma 7.1.10 with g(t) ≡ 1.

Lemma 7.1.14 The space C′(R+, ψ(Ω, A, P)) is complete. The proof is analogous to that given for Lemma 7.1.12.
Lemma 7.1.15 The Banach spaces C_g(R+, ψ(Ω, A, P)) and C′(R+, ψ(Ω, A, P)) are stronger than C_c(R+, ψ(Ω, A, P)).

PROOF Let x^n(t; ω) → x(t; ω) in C′(R+, ψ(Ω, A, P)) as n → ∞. Select Q > 0 and consider the interval [0, Q]. We must show that ||x^n(t; ω) − x(t; ω)||_ψ(Ω,A,P) converges to zero uniformly on [0, Q]. Choose ε > 0. Since x^n(t; ω) → x(t; ω) in C′(R+, ψ(Ω, A, P)), there exists an N such that n > N implies that sup_{t≥0} {||x^n(t; ω) − x(t; ω)||_ψ(Ω,A,P)} < ε. Hence for every t ∈ [0, Q] and n > N,

||x^n(t; ω) − x(t; ω)||_ψ(Ω,A,P) < ε,

implying that C′(R+, ψ(Ω, A, P)) is stronger than C_c(R+, ψ(Ω, A, P)). Let x^n(t; ω) → x(t; ω) in C_g(R+, ψ(Ω, A, P)). Pick Q > 0 and consider the interval [0, Q]. Choose ε > 0. Since g is continuous, g assumes a maximum at some
point t_0 ∈ [0, Q]. Since x^n(t; ω) → x(t; ω) in C_g(R+, ψ(Ω, A, P)), there exists an N_ε such that n > N_ε implies

sup_{t≥0} {||x^n(t; ω) − x(t; ω)||_ψ(Ω,A,P)/g(t)} < ε/g(t_0).

Hence for t ∈ [0, Q] and n > N_ε we have

||x^n(t; ω) − x(t; ω)||_ψ(Ω,A,P)/g(t) < ε/g(t_0).

This in turn implies that for t ∈ [0, Q] and n > N_ε,

||x^n(t; ω) − x(t; ω)||_ψ(Ω,A,P) < ε g(t)/g(t_0).

Since g(t) ≤ g(t_0), we have g(t)/g(t_0) ≤ 1, and we conclude that for t ∈ [0, Q] and n > N_ε,

||x^n(t; ω) − x(t; ω)||_ψ(Ω,A,P) < ε.

Thus the space C_g(R+, ψ(Ω, A, P)) is stronger than C_c(R+, ψ(Ω, A, P)).

7.2 An Existence and Uniqueness Theorem
With respect to the partial aim of this chapter we state and prove the following theorem, which gives sufficient conditions for the existence of a unique random solution of (7.0.1). Also, a special case will be given which is useful when studying applications.
Theorem 7.2.1 Suppose that the random integral equation (7.0.1) satisfies the following conditions:

(i) B, D ⊆ C_c(R+, ψ(Ω, A, P)) are Banach spaces stronger than C_c(R+, ψ(Ω, A, P)), and the pair (B, D) is admissible with respect to

(Tx)(t; ω) = ∫_0^t x(τ; ω) dτ.

(ii) k(t, x(t; ω); ω) is a mapping from the set

W = {x(t; ω) : x(t; ω) ∈ D, ||x(t; ω)||_D ≤ ρ}

into the space B for some ρ ≥ 0, such that

||k(t, x(t; ω); ω) − k(t, y(t; ω); ω)||_B ≤ λ ||x(t; ω) − y(t; ω)||_D

for x(t; ω), y(t; ω) ∈ W and λ ≥ 0.

(iii) x(t; ω) → h(t, x(t; ω)) is a mapping from W into D such that

||h(t, x(t; ω)) − h(t, y(t; ω))||_D ≤ γ ||x(t; ω) − y(t; ω)||_D

for some γ ≥ 0.
Then there exists a unique random solution of Eq. (7.0.1) provided that γ + λM < 1 and

||h(t, x(t; ω))||_D + M ||k(t, x(t; ω); ω)||_B ≤ ρ,

where M = ||T||*.

PROOF Define the operator U from W into D by

(Ux)(t; ω) = h(t, x(t; ω)) + ∫_0^t k(τ, x(τ; ω); ω) dτ.

We need to show that U(W) ⊆ W and that for some r ∈ [0, 1),

||(Ux)(t; ω) − (Uy)(t; ω)||_D ≤ r ||x(t; ω) − y(t; ω)||_D.

Let x(t; ω), y(t; ω) ∈ W. Since (Ux)(t; ω) and (Uy)(t; ω) ∈ D and D is a Banach space, (Ux)(t; ω) − (Uy)(t; ω) ∈ D. Thus we can write

||(Ux)(t; ω) − (Uy)(t; ω)||_D ≤ ||h(t, x(t; ω)) − h(t, y(t; ω))||_D + M ||k(t, x(t; ω); ω) − k(t, y(t; ω); ω)||_B
  ≤ γ ||x(t; ω) − y(t; ω)||_D + M ||k(t, x(t; ω); ω) − k(t, y(t; ω); ω)||_B,

where the last inequality is due to the Lipschitz condition given in (iii) and the fact that T is continuous from B to D by Lemma 7.1.9, and therefore bounded. However,

||k(t, x(t; ω); ω) − k(t, y(t; ω); ω)||_B ≤ λ ||x(t; ω) − y(t; ω)||_D

by the Lipschitz condition given in (ii), so that

||(Ux)(t; ω) − (Uy)(t; ω)||_D ≤ (γ + λM) ||x(t; ω) − y(t; ω)||_D.

Since γ + λM < 1, the first condition of the definition of a contraction map is satisfied.
We must now show that the inclusion property holds. Let x(t; ω) ∈ W. We can thus write

||(Ux)(t; ω)||_D ≤ ||h(t, x(t; ω))||_D + ||∫_0^t k(τ, x(τ; ω); ω) dτ||_D
  ≤ ||h(t, x(t; ω))||_D + M ||k(t, x(t; ω); ω)||_B ≤ ρ.

Hence (Ux)(t; ω) ∈ W, implying U(W) ⊆ W. Applying Banach's fixed-point theorem, we conclude that there exists a unique point x(t; ω) ∈ W such that

(Ux)(t; ω) = h(t, x(t; ω)) + ∫_0^t k(τ, x(τ; ω); ω) dτ = x(t; ω),

and the proof is complete.
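The contraction and inclusion argument above is constructive: successive application of U converges to the fixed point. A deterministic, scalar sketch of this iteration, with invented h, k, and Lipschitz constants chosen so that γ + λM < 1:

```python
# Scalar, deterministic analogue of the operator U in the proof:
# (Ux)(t) = h(x(t)) + ∫_0^t k(x(τ)) dτ on [0, 1], discretized on a grid.
# h and k are illustrative choices with gamma = 0.1 and lambda = 0.5,
# so gamma + lambda*M < 1 with M = 1 (the length of [0, 1]).

N = 1000
dt = 1.0 / N

def U(x):
    h = [0.1 * v + 1.0 for v in x]           # h: Lipschitz constant gamma = 0.1
    out, acc = [], 0.0
    for i in range(N + 1):
        out.append(h[i] + acc)
        acc += 0.5 * x[i] * dt               # k: Lipschitz constant lambda = 0.5
    return out

x = [0.0] * (N + 1)
for _ in range(60):                          # successive approximations converge
    x = U(x)

# At the fixed point, U(x) is (numerically) x itself.
residual = max(abs(a - b) for a, b in zip(U(x), x))
assert residual < 1e-9
```

The geometric decay of the residual is exactly the contraction estimate ||U x − U y|| ≤ (γ + λM) ||x − y|| of the proof.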
The following theorem is a useful special case of Theorem 7.2.1.

Theorem 7.2.2 Assume that Eq. (7.0.1) satisfies the following conditions:

(i) k(t, x(t; ω); ω) is a mapping from the set

W = {x(t; ω) : x(t; ω) ∈ C′(R+, ψ(Ω, A, P)), ||x(t; ω)||_C′ ≤ ρ}

into the space C_g(R+, ψ(Ω, A, P)) for some ρ ≥ 0;

||k(t, x(t; ω); ω) − k(t, y(t; ω); ω)||_C_g ≤ λ ||x(t; ω) − y(t; ω)||_C′

for x(t; ω), y(t; ω) ∈ W, λ ≥ 0 a constant; and g is also integrable on R+.

(ii) x(t; ω) → h(t, x(t; ω)) is a mapping from W into C′ such that

||h(t, x(t; ω)) − h(t, y(t; ω))||_C′ ≤ γ ||x(t; ω) − y(t; ω)||_C′

for some γ ≥ 0.

Then there exists a unique random solution of Eq. (7.0.1) provided that γ + λM < 1, where M = ||T||*, and

||h(t, x(t; ω))||_C′ + M ||k(t, x(t; ω); ω)||_C_g ≤ ρ.

PROOF The proof consists in showing that under the assumption that g is integrable the pair (C_g(R+, ψ(Ω, A, P)), C′(R+, ψ(Ω, A, P))) is admissible with respect to the operator T given by

(Tx)(t; ω) = ∫_0^t x(τ; ω) dτ.
For x(t; ω) ∈ C_g(R+, ψ(Ω, A, P)) we have, for each i,

|(Tx_i)(t; ω)| ≤ ∫_0^t |x_i(τ; ω)| dτ ≤ ∫_0^∞ Z g(τ) dτ = β < ∞,   P-a.e.

By definition of the norm in L_∞(Ω, A, P), we can conclude that

|||(Tx_i)(t; ω)||| ≤ β for each i.

This in turn implies that

||(Tx)(t; ω)||_ψ(Ω,A,P) = max_i {|||(Tx_i)(t; ω)|||} ≤ β,

which is the condition needed for (Tx)(t; ω) to be an element of C′(R+, ψ(Ω, A, P)). Since the remaining conditions are identical to those of Theorem 7.2.1, the proof is complete.
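The role of the integrability of g can be checked numerically: if |x(t)| ≤ Z g(t) with g integrable, the partial integrals (Tx)(t) stay below Z ∫_0^∞ g(τ) dτ for every t, so Tx is bounded. A sketch with the illustrative choices g(t) = e^(−t) and x(t) = g(t) sin 5t:

```python
import math

# With g integrable on R+, T maps C_g into C': if |x(t)| <= Z g(t), then
# |(Tx)(t)| <= Z * integral of g over [0, inf) for every t.
# Illustrative choices: g(t) = e^{-t} (so the integral of g is 1), Z = 1.
dt, T_max = 0.001, 20.0
g = lambda t: math.exp(-t)
x = lambda t: g(t) * math.sin(5.0 * t)     # |x(t)| <= g(t), so Z = 1 works

bound = 1.0 * sum(g(k * dt) * dt for k in range(int(T_max / dt)))  # Z * ∫ g
acc, sup_Tx = 0.0, 0.0
for k in range(int(T_max / dt)):
    acc += x(k * dt) * dt                  # running value of (Tx)(t)
    sup_Tx = max(sup_Tx, abs(acc))

assert sup_Tx <= bound                     # Tx is bounded, i.e., lies in C'
```

Without integrability of g (say g ≡ 1 with x ≡ 1), the partial integrals grow like t and Tx leaves C′, which is why condition (i) requires g integrable.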
7.3 A Stochastic Chemical Kinetics Model

Gavalas [1] has formulated a deterministic model which characterizes a chemically reacting system. It is the aim of this section to formulate a stochastic version of this model and thus make it describe the physical situation more realistically (Milton and Tsokos [1]). The basic formulation of such a model involves a random or stochastic integral equation of the type discussed theoretically in the previous sections. Thus we shall illustrate the applicability of the theoretical results, mainly Theorems 7.2.1 and 7.2.2, to obtain conditions under which a chemically reacting system will possess a unique random solution. The stochastic approach to the study of chemical kinetics is relatively new and has been developing rapidly during the last ten years. One of the main reasons for studying the classical chemical kinetics problem from a statistical point of view is that the evolution in time of a chemically reacting system is indeed random rather than deterministic. Blanc-Lapierre and Fortet [1] give a general discussion of how randomness enters into many physical systems. A brief description of how randomness enters into a
chemically reacting system is given by McQuarrie [1]. An excellent bibliography of recent work in the area of stochastic chemical kinetics is also given in McQuarrie's paper. Bartholomay [1] gives a strong argument relative to the desirability of viewing chemical kinetics from a statistical point of view. In Section 7.3.1 we shall give a brief description of the classical chemical kinetics problem, give a basic definition, and introduce some notation of the subject area to set the stage for the stochastic formulation. The stochastic interpretation of the rate of reaction of a simple system is given in Section 7.3.2. In Section 7.3.3 we shall give some basic concepts of the rate functions of a general reacting system and describe the manner in which an integral equation arises in chemical kinetics. The stochastic formulation of the chemical kinetics is given in Section 7.3.4.

7.3.1
The Concept of Chemical Kinetics
Chemical kinetics is that branch of chemistry which deals with the rate and mechanism of chemical reactions and attempts to discover and explain those factors which influence the speed and manner by which a reaction proceeds. The reaction of a system under study which takes place in a single phase can be characterized at each point by the following variables: the velocity, the concentrations of all chemical species, and a thermodynamic variable such as the internal energy or temperature. A chemical system is called uniform if there are no space variations within the system. In our brief discussion of the subject area we shall assume that we are dealing with a homogeneous, uniform system at constant volume and constant temperature. We shall be concerned with the evolution of the system in time, and the variables used to study this evolution will be the concentrations of the species involved in the reactions. Such variables are called state variables. Thus chemical kinetics is concerned with the manner by which a reacting system gets from one state to another and with the time required to make the transition. In what follows we shall give certain notation and ideas which are fairly standard in chemical kinetics and stoichiometry. We shall be concerned with a mixture of N chemical species M_1, M_2, ..., M_N. For example, the chemical reaction

2H_2O → 2H_2 + O_2   (7.3.1)

consists of three species M_1 = H_2O, M_2 = H_2, and M_3 = O_2. A chemical reaction is usually written as

Σ_{i=1}^N ν_i M_i = 0,   (7.3.2)
where ν_i is called the stoichiometric coefficient of species M_i in the balanced equation for the reaction. That is, reaction (7.3.1) can be written as

2H_2O − 2H_2 − O_2 = 0.

We shall consider a species M_i to be a reactant if ν_i > 0 and a product if ν_i < 0, for i = 1, 2, ..., N. A similar convention is given by Gavalas [1]. Note that from the stoichiometric equation we have

δn_i/δn_k = ν_i/ν_k,   1 ≤ i ≤ N and 1 ≤ k ≤ N,

where δn_i represents a change in the number of moles of species M_i in the chemical system. For the reaction given by Eq. (7.3.1) we have for each mole of water that is decomposed by electrolysis the generation of one-half mole of oxygen and one mole of hydrogen. This notation, of course, can be generalized to describe a system of R reactions, that is,

Σ_{i=1}^N ν_{ij} M_i = 0,   j = 1, 2, ..., R,   (7.3.3)

where ν_{ij} is the stoichiometric coefficient of the species M_i in the jth reaction, i = 1, 2, ..., N and j = 1, ..., R.
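The mole-change relation δn_i/δn_k = ν_i/ν_k can be sketched in code for reaction (7.3.1); the sign convention (reactants positive, products negative) follows the text:

```python
from fractions import Fraction

# Reaction (7.3.1), 2 H2O -> 2 H2 + O2, written as 2 H2O - 2 H2 - O2 = 0,
# with the text's sign convention: reactants positive, products negative.
nu = {"H2O": Fraction(2), "H2": Fraction(-2), "O2": Fraction(-1)}

def mole_change_ratio(i, k):
    # delta n_i / delta n_k = nu_i / nu_k, from the stoichiometric equation
    return nu[i] / nu[k]

# Decomposing one mole of water (delta n_{H2O} = -1) generates one mole of
# hydrogen and half a mole of oxygen, as stated in the text:
d_water = Fraction(-1)
d_h2 = mole_change_ratio("H2", "H2O") * d_water    # +1 mole of H2
d_o2 = mole_change_ratio("O2", "H2O") * d_water    # +1/2 mole of O2
assert d_h2 == 1 and d_o2 == Fraction(1, 2)
```

Exact rationals avoid any rounding in the half-mole ratio.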
Definition 7.3.1 A rate function or rate of reaction or reaction rate r_i(t), i = 1, 2, ..., N, is the rate of change of the concentration of a fixed species M_i involved in the reaction.

The exact functional expression of the rate function does depend on the species used in defining the term. However, once the rate of reaction at a given time is determined for one species entering into the reaction, the rate of any other species involved may be calculated from stoichiometric considerations. We shall use r(t) to denote any one of the possible N rate expressions. But when we are interested in a particular species being used to define the rate of reaction we shall denote it accordingly. Thus if c_i(t) is the function which represents the concentration of species M_i at time t, then r_i(t) = dc_i(t)/dt. In the reaction given by (7.3.1) we have M_1 = H_2O, M_2 = H_2, M_3 = O_2, and r_1(t) = dc_1(t)/dt = −dc_2(t)/dt = −2 dc_3(t)/dt. That is, r_1(t) = −r_2(t) = −2r_3(t). The rate of reaction at a fixed temperature T is generally a function of the concentrations of the various species of the reaction alone. Thus the reaction rate at a fixed temperature T can almost always be expressed in the form

r(t) = K_T f[c_1(t), c_2(t), ..., c_N(t)].   (7.3.4)
The subscript T of the constant K indicates that the constant involved is usually dependent on the particular value of T at which the reaction takes place but independent of concentration. In some cases there are also unknown constants involved in expression (7.3.4). For example, some rate functions are of the form

r(t) = K_T [c_1(t)]^{a_1} [c_2(t)]^{a_2} ··· [c_N(t)]^{a_N},

where K_T, a_1, a_2, ..., a_N are constants that must be determined experimentally. It is a principal task for an experimental chemist to obtain the form of this rate expression and also to estimate the values of the constants involved from laboratory data. It has been shown that in a slightly acidic solution, I⁻ is oxidized by H_2O_2 yielding I_3⁻ and H_2O, stoichiometrically represented by

H_2O_2 + 3I⁻ + 2H⁺ → I_3⁻ + 2H_2O,   (7.3.5)

where the rate has the form

r(t) = dc_1(t)/dt = K_T [c_1(t)]^m [c_2(t)]^n   (7.3.6)

with c_1(t) the concentration of H_2O_2 at time t and c_2(t) the concentration of I⁻ at time t. From experimental data one can determine the values of the constants K_T, m, and n. However, it is a well-known fact that it is difficult if not impossible to outline a procedure for determining both the form and estimates for pertinent constants which will be applicable to all situations. The following is a basic procedure that could be followed:

(i) If possible, postulate a general form for the rate expression based on any previous information available.
(ii) Set up an experiment which allows either the direct observation of the concentration of some species in time or the observation of some other quantity whose relationship to concentration is known.
(iii) The experiment should be run several times at the temperature of interest with the initial concentrations of species varying.
(iv) Using the data obtained from the experiments, the hypothesized form of the rate expression can either be accepted or rejected and estimates for the constants involved can be obtained.
(v) The value of a constant finally used in the rate expression is the average of the values of this constant obtained in successive runs of the experiment.

However, recently Box [1]; Kittrell, Mezaki, and Watson [1]; and Lee [1] have given more sophisticated techniques for determining from the data the value of the constant to be used. Thus it is clear that determining the rate function for a reaction is extremely difficult and of limited accuracy. In what follows we give a stochastic version of the rate function (Milton and Tsokos [1]).
7.3.2 Stochastic Interpretation of the Rate of Reaction

In this section we shall discuss a stochastic interpretation of the rate function r(t) of a single reaction, that is, the function given by Eq. (7.3.4). As we have mentioned, one of the initial things that the experimenter attempts to decide is the exact form of the function f[c_1(t), c_2(t), ..., c_N(t)]. This often involves estimating from experimental data such things as the powers to which the concentrations of various species are to be raised and the coefficients of concentration terms. Thus, due to the complexity of the problem, it is reasonable to interpret the function f as random rather than deterministic. In addition, if some form of the function f has been established, one is still faced with the problem of obtaining an estimate of K_T. Such an estimate is obtained from experimental data, and it is usually the average value that is used as an estimate of the "true" value. If the experiment is repeated under identical conditions, the estimates of K_T usually will be different. Hence it is more reasonable to consider K_T not as a constant but rather as a stochastic variable K_T(ω). Furthermore, Parrott [1] states that it is not possible to know the exact concentration of any given species M_i at a particular time t_0, and it must be estimated from several observations of this concentration. Therefore we shall denote by c_i(t_0; ω) the stochastic analog of c_i(t_0). Since the argument is independent of the choice of t_0, one can consider c_i(t; ω) to be a random variable for each i = 1, 2, ..., N and each t ∈ R+. That is, for each i, c_i(t; ω) is a random function. In view of these remarks a more realistic form of Eq. (7.3.4) is given by

r(t; ω) = K_T(ω) f[c_1(t; ω), c_2(t; ω), ..., c_N(t; ω)].   (7.3.7)
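Treating K_T as a random variable K_T(ω) can be sketched by sampling. The lognormal model and every numerical value below are invented for illustration, as is the hypothetical rate-law form f(c_1, c_2) = c_1 c_2:

```python
import math, random

# One realization of the stochastic rate: K_T(omega) drawn around a nominal
# value (lognormal spread sigma), then multiplied by an assumed f(c1, c2).
random.seed(1)

def rate_sample(c1, c2, nominal_K=0.7, sigma=0.1):
    K = nominal_K * math.exp(random.gauss(0.0, sigma))  # K_T(omega)
    return K * c1 * c2

samples = [rate_sample(1.0, 2.0) for _ in range(5000)]
mean_rate = sum(samples) / len(samples)
# Repeated "experiments" scatter; their average is close to
# nominal_K * exp(sigma**2 / 2) * c1 * c2.
assert abs(mean_rate - 0.7 * math.exp(0.005) * 2.0) < 0.05
```

The spread of `samples` mirrors the run-to-run variation of estimated K_T described above.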
7.3.3 Rate Functions of General Reaction Systems

We shall begin by focusing our attention on the mechanism by which the observed chemical change proceeds. This calls for looking at the concept of an elementary reaction, a reaction that corresponds in a sense to a single molecular collision (Gavalas [1]). A reaction is generally considered as being made up of a number of elementary reactions, each with its own rate function. The overall rate functions are then related to the rates at which the elementary reactions proceed.

Definition 7.3.2 The reaction system given by Eq. (7.3.3) is said to be capable of describing the observed chemical change if with each reaction we
can associate a function f_j of the N concentrations c_i(t) such that

dc_i(t)/dt = Σ_{j=1}^R ν_{ij} f_j[c_1(t), c_2(t), ..., c_N(t)] = F_i[c_1(t), c_2(t), ..., c_N(t)],   i = 1, 2, ..., N.   (7.3.8)

The functions f_1, f_2, ..., f_R and F_1, F_2, ..., F_N are all referred to as rate functions, reaction rates, or kinetics for the system. One can easily illustrate these concepts by using the classical reaction in which hydrogen and bromine unite to form hydrogen bromide, H_2 + Br_2 → 2HBr. Gavalas [1] discusses the mechanism of this reaction:

Br_2 + M ⇄ 2Br + M   (rate functions f_1, f_2),
Br + H_2 ⇄ HBr + H   (rate functions f_3, f_4),   (7.3.9)
H + Br_2 → HBr + Br   (rate function f_5),

with M being either H_2 or Br_2. Experimentally, it has been determined that the rate functions f_j of the elementary reactions are of the forms

f_1[c_1(t), c_2(t), ..., c_5(t)] = K_1 c_2(t)[c_1(t) + c_2(t)],
f_2[c_1(t), c_2(t), ..., c_5(t)] = K_2 [c_5(t)]^2 [c_1(t) + c_2(t)],
f_3[c_1(t), c_2(t), ..., c_5(t)] = K_3 [c_5(t)][c_1(t)],
f_4[c_1(t), c_2(t), ..., c_5(t)] = K_4 [c_3(t)][c_4(t)],
f_5[c_1(t), c_2(t), ..., c_5(t)] = K_5 [c_4(t)][c_2(t)],

where M_1 = H_2, M_2 = Br_2, M_3 = HBr, M_4 = H, and M_5 = Br. Therefore if the reaction system (7.3.9) is capable of describing the observed chemical change, then the reaction rate for the overall reaction, expressed in terms of the rate of formation of HBr, is given as follows:

dc_3(t)/dt = Σ_{j=1}^5 ν_{3j} f_j[c_1(t), c_2(t), ..., c_5(t)].
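The passage from elementary rates f_j to overall species rates, dc_i/dt = Σ_j ν_{ij} f_j as in Eq. (7.3.8), can be sketched generically in code. The 3-species, 2-reaction system and rate constants below are invented; only the structure of the computation matters, and the sign convention of Eq. (7.3.3) is taken as given:

```python
# Overall species rates from elementary rates: dc_i/dt = F_i(c), where
# F_i(c) = sum_j nu[i][j] * f_j(c), as in Definition 7.3.2.
# Invented 3-species, 2-reaction illustration with placeholder constants.

# nu[i][j]: stoichiometric coefficient of species i in reaction j.
nu = [
    [-1.0,  0.0],   # species 1
    [ 1.0, -2.0],   # species 2
    [ 0.0,  1.0],   # species 3
]

# Elementary rate functions f_j(c) with placeholder constants K1 = 0.3, K2 = 0.2.
f = [
    lambda c: 0.3 * c[0],          # f_1 = K1 * c1
    lambda c: 0.2 * c[1] ** 2,     # f_2 = K2 * c2^2
]

def F(c):
    rates = [fj(c) for fj in f]
    return [sum(nu[i][j] * rates[j] for j in range(len(f))) for i in range(len(nu))]

c = [1.0, 2.0, 0.5]
# f_1 = 0.3, f_2 = 0.8, so F = (-0.3, 0.3 - 1.6, 0.8).
assert F(c) == [-0.3, 0.3 - 1.6, 0.8]
```

The HBr system above fits the same template with a 5 × 5 matrix ν and the five rate functions f_1, ..., f_5.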
We shall now look at the concept of extent of reaction in terms of a single reaction Σ_{i=1}^N ν_i M_i = 0 before looking at the extension to a system of reactions. It was stated that the relationship (δn_i)(t)/(δn_k)(t) = ν_i/ν_k, where (δn_i)(t) represents the change in the number of moles of species M_i from the beginning of the reaction to time t, must hold for any pair (i, k) such that 1 ≤ i ≤ N, 1 ≤ k ≤ N. This relationship can be written as

(δn_1)(t)/ν_1 = (δn_2)(t)/ν_2 = ··· = (δn_N)(t)/ν_N = x(t).

We shall assume for convenience that the reaction always starts at t = 0. Thus for each i = 1, 2, ..., N, (δn_i)(t) = n_i(t) − n_i(0) and x(t) = [n_i(t) − n_i(0)]/ν_i. The function x(t) is called the molar extent or degree of advancement of the reaction. Its usefulness lies in the fact that it is a function linked to the reaction as a whole and not to any particular species M_i, the choice of which would be arbitrary. If there are several reactions under study, as in the system Σ_{i=1}^N ν_{ij} M_i = 0, then an extent can be defined for each of them. If we let x_j(t) denote the extent of the jth reaction, then it has contributed ν_{ij} x_j(t) moles to the total change in the number of moles of species M_i. Thus the total change in the number of moles of M_i in the entire system can be expressed by

n_i(t) − n_i(0) = Σ_{j=1}^R ν_{ij} x_j(t),   i = 1, 2, ..., N.   (7.3.10)

In order to free these equations from any dependence upon the actual size of the sample, it is convenient to divide Eq. (7.3.10) by the volume V to obtain an expression which reflects the extent of reaction in terms of concentration change rather than actual change in number of moles. Hence we arrive at the equation

[n_i(t) − n_i(0)]/V = Σ_{j=1}^R ν_{ij} [x_j(t)/V],   (7.3.11)

or

c_i(t) − c_i(0) = Σ_{j=1}^R ν_{ij} ξ_j(t),   i = 1, 2, ..., N,   (7.3.12)

where ξ_j(t) = x_j(t)/V. Throughout, we will be referring to ξ_j(t) when we speak of the extent of the jth reaction. We shall now give a brief discussion of how integral equations arise in chemical kinetics. Recall that in the evolution in time of a reaction system under study we assume that the system is homogeneous, uniform, of constant temperature and volume, and that it can be represented by the following set
of equations:

dc_i(t)/dt = Σ_{j=1}^R ν_{ij} f_j[c_1(t), c_2(t), ..., c_N(t)],   i = 1, 2, ..., N.   (7.3.13)

We shall use the following vector notation:

c(t) = [c_1(t), c_2(t), ..., c_N(t)],   c(0) = [c_1(0), c_2(0), ..., c_N(0)],

and refer to c(0) as the initial conditions and c(t) as a trajectory of the system (7.3.13) passing through c(0), which is assumed known. Due to the stoichiometry of the reaction there exist functions c_im and c_iM such that c_im[c_i(0)] is the minimum concentration possible for species M_i and c_iM[c_i(0)] is the maximum concentration possible for species M_i during the course of the reaction when the initial concentration is c_i(0). Equations (7.3.12) allow one to express the state of the system at time t in terms of either the concentrations c(t) = [c_1(t), c_2(t), ..., c_N(t)] or the extents of reaction ξ(t) = [ξ_1(t), ξ_2(t), ..., ξ_R(t)], so that the kinetics may be written as functions of the extents,

f̃_j[ξ(t)],   j = 1, 2, 3, ..., R,   (7.3.14)

where f̃_j[ξ(t)] denotes f_j evaluated at the concentrations determined by ξ(t) through Eqs. (7.3.12). The question of interest is then to show the existence of a solution to the system

dξ_j(t)/dt = f̃_j[ξ(t)],   j = 1, 2, ..., R,   ξ_j(0) = 0.   (7.3.15)
This gives rise to a nonlinear integral equation of the form

ξ_j(t) = ∫_0^t f̃_j[ξ(τ)] dτ,   j = 1, 2, ..., R.   (7.3.16)

Gavalas [1] discusses the existence and uniqueness of a solution of Eq. (7.3.16) in the deterministic sense. His general method of proof is to call upon a fixed-point method which guarantees the existence of a solution, and then he further imposes a Lipschitz condition on the kernel f̃[ξ(τ)] in order to guarantee uniqueness.
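The fixed-point approach suggests successive approximations (Picard iteration) for Eq. (7.3.16). A deterministic, one-reaction sketch with an invented Lipschitz kernel f̃(ξ) = 0.5(1 − ξ), for which the exact solution is ξ(t) = 1 − e^(−t/2):

```python
import math

# Picard iteration for xi(t) = ∫_0^t ftilde(xi(τ)) dτ on [0, 1], one reaction.
# ftilde(x) = 0.5 * (1 - x) is an invented Lipschitz kernel; the exact
# solution of the corresponding initial value problem is 1 - exp(-t/2).
N = 2000
dt = 1.0 / N
ftilde = lambda x: 0.5 * (1.0 - x)

xi = [0.0] * (N + 1)                 # xi_0(t) = 0, matching xi(0) = 0
for _ in range(30):                  # successive approximations
    new, acc = [0.0], 0.0
    for i in range(N):
        acc += ftilde(xi[i]) * dt    # left Riemann sum for the integral
        new.append(acc)
    xi = new

exact = [1.0 - math.exp(-0.5 * i * dt) for i in range(N + 1)]
err = max(abs(a - b) for a, b in zip(xi, exact))
assert err < 1e-3                    # discretization error of order dt
```

Each sweep applies the integral operator once; the Lipschitz constant 0.5 on [0, 1] makes the map a contraction, so the iterates converge geometrically, exactly as in the fixed-point argument.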
7.3.4 A Stochastic Integral Equation Arising in Chemical Kinetics

In this section we shall extend the ideas of Section 7.3.2 to a system of reactions and show how Eq. (7.3.16) can be more realistically studied from a stochastic point of view. It was argued that, due to many factors such as the estimation of the constants involved, it would be more realistic to assume that the rate function involved in a single elementary reaction is stochastic. These arguments can easily be extended to a system of reactions such as Eq. (7.3.3) to imply that we can consider f_j, j = 1, 2, ..., R, to be random functions. Furthermore, since the variables ξ_j(t) are defined in terms of concentrations, which we have seen can be given a statistical interpretation, we can logically consider these variables to be random: ξ_j(t; ω). Thus, since the ξ_j, j = 1, ..., R, are defined by using random functions, they may also be considered stochastic, and the integral equation (7.3.16) is written as

ξ_j(t; ω) = ∫_0^t f_j[ξ(τ; ω); ω] dτ,   j = 1, 2, ..., R.   (7.3.17)

We shall assume the following:

(i) ω ∈ Ω, where Ω is the supporting set of the probability measure space (Ω, A, P).
(ii) ξ(t; ω) is the unknown R-dimensional, vector-valued random function defined on R+.
(iii) Under appropriate conditions the stochastic kernel f[ξ(τ; ω); ω] is an R-dimensional, vector-valued random function on R+.

With respect to the aims of this section, we state the following theorem concerning the stochastic integral equation (7.3.17), which assures the existence of a unique random solution.

Theorem 7.3.1 Suppose that Eq. (7.3.17) satisfies the following conditions:

(i) B, D ⊆ C_c(R+, ψ(Ω, A, P)) are Banach spaces stronger than C_c(R+, ψ(Ω, A, P)), and the pair (B, D) is admissible with respect to the operator T given by

(Tξ)(t; ω) = ∫_0^t ξ(τ; ω) dτ.

(ii) f(ξ(t; ω); ω) is a mapping from the set

W = {ξ(t; ω) : ξ(t; ω) ∈ D, ||ξ(t; ω)||_D ≤ ρ}

into the space B for some ρ ≥ 0 such that

||f(ξ(t; ω); ω) − f(y(t; ω); ω)||_B ≤ λ ||ξ(t; ω) − y(t; ω)||_D

for ξ(t; ω), y(t; ω) ∈ W and λ ≥ 0 a constant.

Then there exists a unique random solution ξ(t; ω) of Eq. (7.3.17) in W provided that λM < 1, where M = ||T||*, and M ||f(ξ(t; ω); ω)||_B ≤ ρ.
PROOF The proof of this theorem is identical to that of Theorem 7.2.1 with the operator U given by
Theorem 7.3.2 Suppose that Eq. (7.3.17) satisfies the following conditions:
(i) f(ξ(t; ω); ω) is a mapping from the set

W = {ξ(t; ω) : ξ(t; ω) ∈ C(R₊, L₂(Ω, A, P)), ‖ξ(t; ω)‖_C ≤ ρ}

into the space C_g(R₊, L₂(Ω, A, P)) for some ρ ≥ 0 such that

‖f(ξ(t; ω); ω) − f(y(t; ω); ω)‖_{C_g} ≤ λ‖ξ(t; ω) − y(t; ω)‖_C

for ξ(t; ω), y(t; ω) ∈ W and λ ≥ 0.
(ii) ∫₀^∞ g(τ) dτ < ∞.
Then there exists a unique random solution of Eq. (7.3.17) in W provided that λM < 1, where M = ‖T‖*, and M‖f(ξ(t; ω); ω)‖_{C_g} ≤ ρ.
The proof consists of showing that the pair (C_g(R₊, L₂(Ω, A, P)), C(R₊, L₂(Ω, A, P))) is admissible with respect to the operator T; the remaining arguments are identical to those of Theorem 7.2.1.
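The fixed-point argument behind Theorems 7.3.1 and 7.3.2 can be made concrete numerically. The sketch below is a hypothetical scalar stand-in for (7.3.17) (the choice f(x) = 0.5 cos x, with Lipschitz constant 0.5, is illustrative and not from the text): it iterates the integral operator by successive approximations and records the sup-norm distance between consecutive iterates, which shrinks as the contraction condition λM < 1 predicts (here, on [0, 1] with the sup norm, M is roughly 1 and λ = 0.5).

```python
import math

# Successive approximations for a scalar toy version of Eq. (7.3.17):
#   xi(t) = ∫_0^t f(xi(tau)) dtau,   with the illustrative f(x) = 0.5*cos(x).
def picard(f, T=1.0, n=200, iterations=8):
    h = T / n
    xi = [0.0] * (n + 1)                 # initial guess: xi identically 0
    sups = []                            # sup-norm gap between successive iterates
    for _ in range(iterations):
        acc, new = 0.0, [0.0]
        for k in range(n):
            acc += f(xi[k]) * h          # left-endpoint quadrature of the integral
            new.append(acc)
        sups.append(max(abs(u - v) for u, v in zip(new, xi)))
        xi = new
    return xi, sups

xi, sups = picard(lambda x: 0.5 * math.cos(x))
# the gaps decrease (geometrically, in fact faster), as the contraction predicts
```

The monotone decay of `sups` is exactly the quantitative content of the Banach fixed-point argument used in the proofs.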
We shall conclude our discussion by looking at the practicality of the assumptions given earlier. The main assumption is that for each j = 1, 2, ..., R and each t ≥ 0, ξ_j(t; ω) ∈ L_∞(Ω, A, P). That is,

|ξ_j(t; ω)| = |N_ij(t; ω) − N_ij(0; ω)|/(ν_ij V) ≤ (1/(ν_ij V))[|N_ij(t; ω)| + |N_ij(0; ω)|].    (7.3.18)

A kinetic experiment is a carefully controlled laboratory procedure, and hence the number of moles of reactant at time zero is known apart from measurement error. By assumption, V and ν_ij are known constants. The amount of species i present in reaction j at time t, N_ij(t; ω), is controlled by the initial amount of each reactant present at the start of the experiment. In fact, due to stoichiometric considerations we can say that for each i, j there exists a constant ρ_ij such that for every t

|N_ij(t; ω)| ≤ ρ_ij,    P-a.e.    (7.3.19)

From (7.3.19), inequality (7.3.18) becomes

|ξ_j(t; ω)| ≤ (1/(ν_ij V))[|N_ij(t; ω)| + |N_ij(0; ω)|] ≤ (1/(ν_ij V)) · 2ρ_ij,    P-a.e.

That is, for each j = 1, 2, ..., R and t ≥ 0, ξ_j(t; ω) ∈ L_∞(Ω, A, P).
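To make the final bound concrete, here is a toy computation (every numerical value is hypothetical, chosen only to illustrate the arithmetic of (7.3.18)-(7.3.19)):

```python
# Illustrative numbers: stoichiometric coefficient nu, volume V, and a path
# bound rho on the moles N_ij(t).  The pointwise bound is (7.3.18); combining
# it with (7.3.19) gives the uniform bound 2*rho/(nu*V).
nu, V, rho = 2.0, 0.5, 3.0
n_t, n_0 = 2.5, 1.0                       # any values with |N| <= rho
xi_bound_pointwise = (abs(n_t) + abs(n_0)) / (nu * V)
xi_bound_uniform = 2.0 * rho / (nu * V)   # here 2*3/(2*0.5) = 6
```

The pointwise bound is always dominated by the uniform one, which is what places ξ_j(t; ω) in L_∞(Ω, A, P).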
CHAPTER VIII

Stochastic Integral Equations of the Ito Type

8.0 Introduction
Another type of stochastic integral equation which has been of considerable importance to applied mathematicians and engineers is that involving the Ito or Ito-Doob form of stochastic integrals. We shall give some historical remarks concerning the development of this type of equation and point out the essential difference between such equations and the random integral equations discussed in the previous chapters. In 1930 N. Wiener introduced an integral of the form

∫_a^b g(t) dβ(t),

where g(t) is a deterministic real-valued function and {β(τ), τ ∈ [a, b]} is a scalar Brownian motion process. Ito [1] in 1944 generalized Wiener's integral to include those cases where the integrand is random. That is, he
obtained an integral of the form

∫₀ᵗ g(τ; ω) dβ(τ),    t ∈ [0, 1],

which is referred to as the Ito stochastic integral or simply the stochastic integral. Since that time many scientists have contributed to the general development of this type of stochastic integral. For example, see Doob [1], Dynkin [1], Jazwinski [1], Ito [2], McKean [1], Saaty [1], Gikhman and Skorokhod [1], Stratonovich [1], and Wong and Zakai [1]. In 1946 Ito [2] formulated a stochastic integral equation of the form
x(t; ω) = c + ∫₀ᵗ f(τ, x(τ; ω)) dτ + ∫₀ᵗ g(τ, x(τ; ω)) dβ(τ),    (8.0.1)
where t ∈ [0, 1], {β(t) : t ∈ [0, 1]} is a scalar Brownian motion process, and c is a constant. Restrictions are usually placed on the functions f and g so that the first integral can be interpreted as the usual Lebesgue integral of the sample functions, which can then be related to the sample integral of the stochastic process {f(t, x(t; ω)) : t ∈ [0, 1]}, and the second integral is an Ito integral. The principal feature which distinguishes the type of equation studied in the previous chapters from an equation of the Ito type is the fact that in the former case each of the integrals involved is interpreted as a Lebesgue integral for almost all ω ∈ Ω. That is, almost all sample functions are Lebesgue integrable. Since in the Ito stochastic integral the limit is taken in the mean-square or in the probability sense, the theory of such integrals has been developed as self-contained and self-consistent. One of the main purposes of subsequent work in connection with the Ito stochastic integral equation has been to construct Markov processes such that their transition probabilities satisfy given Kolmogorov equations and to investigate the continuity of the processes, among other properties of the sample functions. The method of successive approximations was used by Ito and Doob to show the existence and uniqueness of a random solution to Eq. (8.0.1). The objective of this chapter is to apply the theoretical techniques of probabilistic functional analysis developed in the previous chapters to answer the question of existence of a random solution to (8.0.1).

8.1 Preliminary Remarks
Let {β(t) : t ∈ [a, b]} be a scalar Brownian motion process. In this section we shall be concerned with the integral

∫_a^b Φ(t; ω) dβ(t)    (8.1.1)
for a fairly general class of functions Φ. This integral will be called the Ito stochastic integral, as we mentioned previously. As is well known, almost all the sample functions of the Brownian motion process are of unbounded variation, and hence the integral (8.1.1) cannot be defined as an ordinary Stieltjes integral. First we shall define the integral (8.1.1) for the class of step functions, that is, functions Φ of the form

Φ(t; ω) = 0,        t < a,
Φ(t; ω) = Φ_i(ω),   t_i ≤ t ≤ t_{i+1},
Φ(t; ω) = 0,        t > b,    (8.1.2)

where a = t₀ < t₁ < t₂ < ... < t_{n−1} < t_n = b, the Φ_i(ω) are measurable with respect to the σ-algebra A_{t_i}, and E{|Φ_i(ω)|²} < ∞. For such functions we define the Ito integral by

∫_a^b Φ(t; ω) dβ(t) = Σ_{i=0}^{n−1} Φ_i(ω){β(t_{i+1}) − β(t_i)}.    (8.1.3)
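Definition (8.1.3) is easy to transcribe directly. The sketch below uses an illustrative partition and deterministic step heights (so the measurability conditions hold trivially); a Monte Carlo average with a fixed seed then estimates the first two moments of the resulting integral — the mean is near zero, anticipating Lemma 8.1.2, and the second moment is near Σ_i Φ_i²(t_{i+1} − t_i).

```python
import math, random

# Direct transcription of (8.1.3): for a step function Phi with value Phi_i
# on [t_i, t_{i+1}), the Ito integral is  sum_i Phi_i * (beta(t_{i+1}) - beta(t_i)).
def ito_step_integral(values, times, beta):
    return sum(v * (beta[times[i + 1]] - beta[times[i]])
               for i, v in enumerate(values))

def sample_brownian(times, rng):
    beta, prev_t, b = {times[0]: 0.0}, times[0], 0.0
    for t in times[1:]:
        b += rng.gauss(0.0, math.sqrt(t - prev_t))   # independent increments
        beta[t] = b
        prev_t = t
    return beta

rng = random.Random(1)
times = [0.0, 0.25, 0.5, 1.0]
values = [2.0, -1.0, 0.5]                            # illustrative step heights
samples = [ito_step_integral(values, times, sample_brownian(times, rng))
           for _ in range(20000)]
mean = sum(samples) / len(samples)
second = sum(s * s for s in samples) / len(samples)
theory = 2.0**2 * 0.25 + 1.0**2 * 0.25 + 0.5**2 * 0.5   # sum Phi_i^2 * (t_{i+1}-t_i)
```

The empirical second moment matching `theory` is the discrete shadow of the isometry property established for this integral below.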
Now suppose that Φ(t; ω) is any function satisfying the following conditions:
(i) Φ(t; ω) is a product-measurable function from [a, b] × Ω into R, assuming the usual Lebesgue measure on [a, b].
(ii) For each t ∈ [a, b], Φ(t; ω) is measurable with respect to the σ-algebra A_t, where A_t is the smallest σ-algebra on Ω such that β(s), s ≤ t, is measurable.
(iii) ∫_a^b E|Φ(t; ω)|² dt < ∞.
In view of Eq. (8.1.2), it is evident that the class of step functions satisfies Conditions (i)-(iii).
For the functions Φ(t; ω) satisfying Conditions (i)-(iii) we shall define their norm as follows:

‖Φ(t; ω)‖ = {∫_a^b E|Φ(t; ω)|² dt}^{1/2}.    (8.1.4)

For this case Doob [1] has shown the following:
(i) Φ(t; ω) can be approximated in the mean-square sense by a sequence of step functions {Φ_n(t; ω)}. That is,

‖Φ(t; ω) − Φ_n(t; ω)‖ → 0

as n → ∞.
(ii) The sequence of integrals
possesses a mean-square limit. That is, there exists a Θ(ω) such that

E{|∫_a^b Φ_n(t; ω) dβ(t) − Θ(ω)|²} → 0    (8.1.5)

as n → ∞. Now we shall define the integral (8.1.1) for the class of functions {Φ(t; ω)} satisfying Conditions (i)-(iii) by

∫_a^b Φ(t; ω) dβ(t) = Θ(ω).    (8.1.6)
As with the ordinary integrals, we shall define

∫_{−∞}^{∞} Φ(t; ω) dβ(t) = lim_{a→−∞, b→+∞} ∫_a^b Φ(t; ω) dβ(t).    (8.1.7)
Definition 8.1.1 Let G ∈ L, where L denotes the collection of Lebesgue-measurable subsets of R₊. Define a function χ_G from R₊ × Ω into {0, 1} by

χ_G(τ; ω) = 1 if (τ, ω) ∈ G × Ω,    χ_G(τ; ω) = 0 otherwise.

Lemma 8.1.1 The function Φχ_G : R₊ × Ω → R defined by

(Φχ_G)(τ; ω) = Φ(τ; ω)χ_G(τ; ω),

where Φ satisfies Conditions (i)-(iii) and χ_G is as defined earlier, also satisfies Conditions (i)-(iii).

PROOF The proof is a straightforward result of the definition of χ_G and the fact that Φ satisfies Conditions (i)-(iii).
We are now in a position to define exactly what is meant by the expression ∫_G Φ(τ; ω) dβ(τ).

Definition 8.1.2 We define ∫_G Φ(τ; ω) dβ(τ) for G a Lebesgue-measurable subset of R₊ by

∫_G Φ(τ; ω) dβ(τ) = ∫_{−∞}^{∞} (Φχ_G)(τ; ω) dβ(τ).

Note that Lemma 8.1.1 guarantees that the expression on the right exists and is well defined.

Definition 8.1.3 We shall denote by C*([a, b], L₂(Ω, A, P)) the space of all continuous functions from [a, b] into L₂(Ω, A, P). We shall define the
norm of C*([a, b], L₂(Ω, A, P)) by

‖x(t; ω)‖_{C*} = sup_{a≤t≤b} {E|x(t; ω)|²}^{1/2}.

Lemma 8.1.2

E{∫_a^b Φ(t; ω) dβ(t)} = 0.

Lemma 8.1.3

E{|∫_a^b Φ(t; ω) dβ(t)|²} = ∫_a^b E{|Φ(t; ω)|²} dt.

Lemma 8.1.4 If we define a distance between two functions Φ₁ and Φ₂, each satisfying Conditions (i)-(iii), by ‖Φ₁ − Φ₂‖, and the distance between the corresponding integrals by

{E|∫_a^b Φ₁(t; ω) dβ(t) − ∫_a^b Φ₂(t; ω) dβ(t)|²}^{1/2},

then the two distances are equal.

For the proofs of these lemmas see Doob [1].
Lemma 8.1.5 Let

x(t; ω) = ∫_a^t Φ(τ; ω) dβ(τ),    t ∈ [a, b].

Then x(t; ω) ∈ C*([a, b], L₂(Ω, A, P)).

For the proof see Jazwinski [1]. It is easily seen that many of the properties of the stochastic integral are analogous to those of the ordinary integral.
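The mean-square behavior described in Lemma 8.1.5 can be illustrated by Monte Carlo. In the sketch below the integrand Φ(τ) = τ is an illustrative choice (not from the text); the isometry property then gives E|x(t+δ) − x(t)|² = ∫_t^{t+δ} τ² dτ, which shrinks as δ does — exactly the continuity used later in the chapter.

```python
import math, random

# Estimate E|x(t+delta) - x(t)|^2 for x(t) = ∫_0^t tau dbeta(tau):
# the increment over (t, t+delta) is a discrete Ito sum of Brownian increments.
def ms_gap(t, delta, n=100, paths=4000, seed=6):
    rng = random.Random(seed)
    h = delta / n
    total = 0.0
    for _ in range(paths):
        inc = sum((t + k * h) * rng.gauss(0.0, math.sqrt(h)) for k in range(n))
        total += inc * inc
    return total / paths

gaps = [ms_gap(0.5, d) for d in (0.4, 0.2, 0.1)]
# theory: ((t+delta)^3 - t^3)/3, decreasing to 0 with delta
```

The decreasing `gaps` sequence is the numerical face of mean-square continuity.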
8.2 On an Ito Stochastic Integral Equation

In this section we shall investigate a stochastic integral equation of the type

x(t; ω) = ∫₀ᵗ k(t, τ; ω) f(τ, x(τ; ω)) dτ + ∫₀ᵗ Φ(τ; ω) dβ(τ),    t ≥ 0,    (8.2.1)

where x(t; ω) is the unknown random process defined for t ∈ R₊ and ω ∈ Ω. We shall place the following restrictions on the random functions which constitute the stochastic integral equation (8.2.1).
(i)' k(t, τ; ω) is an element of L_∞(Ω, A, P), and k(t, τ; ω) : Δ → L_∞(Ω, A, P) is continuous, where Δ = {(t, τ) : 0 ≤ τ ≤ t < ∞}.
(ii)' x(t; ω) → f(t, x(t; ω)) is an operator on the set S with values in the Banach space B satisfying

‖f(t, x(t; ω)) − f(t, y(t; ω))‖_B ≤ λ‖x(t; ω) − y(t; ω)‖_D

for x(t; ω), y(t; ω) ∈ S.
(iii)' Conditions (i)-(iii) of Section 8.1 hold.
Thus, with the given assumptions, the first integral of (8.2.1) can be interpreted as a Lebesgue integral and the second as an Ito stochastic integral. We shall now proceed to state and prove a theorem concerning the behavior of the Ito integral. More precisely, if we show that the Ito integral is an element of the space C_c(R₊, L₂(Ω, A, P)), we can apply the theory of admissibility to Eq. (8.2.1) to show the existence of a random solution. By a random solution to Eq. (8.2.1) we mean a random function x(t; ω) from R₊ into L₂(Ω, A, P) such that for each t ∈ R₊, x(t; ω) satisfies the integral equation P-a.e. Showing that the Ito integral is an element of C_c(R₊, L₂(Ω, A, P)) will make feasible the assumption that we wish to make that the integral is an element of D, a Banach space contained in the topological space mentioned. For convenience we shall denote the Ito integral by

h(t; ω) = ∫₀ᵗ Φ(τ; ω) dβ(τ),    t ≥ 0.
Theorem 8.2.2 For each t ∈ R₊, h(t; ω) ∈ C_c(R₊, L₂(Ω, A, P)).

PROOF Fix t ∈ R₊. Then

h(t; ω) = ∫_{−∞}^{∞} (Φχ_{[0,t]})(τ; ω) dβ(τ).
Thus

E{h(t; ω)} = 0

by Lemma 8.1.2. By Condition (iii) and Lemma 8.1.4 we have

E|h(t; ω)|² < ∞.

Therefore, for fixed t, h(t; ω) ∈ L₂(Ω, A, P). Now let t_n → t in R₊. To show that h(t_n; ω) → h(t; ω) in L₂(Ω, A, P), it is sufficient, due to Lemma 8.1.3, to show that
‖Φχ_{[0,t_n]} − Φχ_{[0,t]}‖

can be made arbitrarily small. That is, we must show that

∫_{−∞}^{∞} E|(Φχ_{[0,t_n]})(τ; ω) − (Φχ_{[0,t]})(τ; ω)|² dτ

can be made arbitrarily small. Choose ε > 0. Consider the nonnegative function q(τ; ω) = E|Φ(τ; ω)|². By Condition (iii), q(τ; ω) is integrable over R₊. Hence there exists a δ > 0 such that for every set C of Lebesgue measure less than δ, ∫_C q(τ; ω) dτ < ε. Thus, taking t_n > t without loss of generality,

∫_{−∞}^{∞} E|(Φχ_{[0,t_n]})(τ; ω) − (Φχ_{[0,t]})(τ; ω)|² dτ = ∫_t^{t_n} E|Φ(τ; ω)|² dτ = ∫_t^{t_n} q(τ; ω) dτ.

Since for n > N_δ we have |t_n − t| < δ, and since the Lebesgue measure of the interval (t, t_n) is its length, we conclude that the Lebesgue measure of (t, t_n) is less than δ. Hence

∫_t^{t_n} q(τ; ω) dτ < ε,

implying that t → h(t; ω) is continuous from R₊ into L₂(Ω, A, P), and the proof is complete.
Since we have shown that h(t; ω) ∈ C_c(R₊, L₂(Ω, A, P)), we can conclude that under the same conditions stated in Theorem 2.1.2 the stochastic integral equation (8.2.1) possesses a unique random solution. Furthermore, one can state the special cases given in Section 2.1.2 concerning Eq. (8.2.1).
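A sample-path sketch of an equation of the form (8.2.1) can be computed on a grid: the Lebesgue term is a Volterra quadrature and the Ito term accumulates Brownian increments. Every ingredient below — the kernel k(t, τ) = e^{−(t−τ)}, the nonlinearity f(t, x) = 0.5 cos x, and the constant integrand Φ = 0.2 — is an illustrative assumption, not taken from the text.

```python
import math, random

# March a discretization of x(t) = ∫_0^t k(t,tau) f(tau, x(tau)) dtau + ∫_0^t Phi dbeta
def sample_path(n=400, T=1.0, seed=4):
    rng = random.Random(seed)
    h = T / n
    db = [rng.gauss(0.0, math.sqrt(h)) for _ in range(n)]   # Brownian increments
    x = [0.0] * (n + 1)
    ito = 0.0                                  # running value of ∫_0^t Phi dbeta
    for m in range(1, n + 1):
        t = m * h
        ito += 0.2 * db[m - 1]                 # illustrative Phi(tau) = 0.2
        leb = sum(math.exp(-(t - j * h)) * 0.5 * math.cos(x[j]) * h
                  for j in range(m))           # k(t,tau) = e^{-(t-tau)}, f = 0.5*cos
        x[m] = leb + ito
    return x

path = sample_path()
```

Because x(t_m) only depends on earlier grid values, the marching scheme is explicit; the Lebesgue part stays bounded by ∫₀ᵗ e^{−(t−τ)} · 0.5 dτ ≤ 0.5.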
8.3 On Ito-Doob-Type Stochastic Integral Equations

In this section we shall study the existence and uniqueness of a random solution to a stochastic integral equation of the form

x(t; ω) = ∫₀ᵗ f(τ, x(τ; ω)) dτ + ∫₀ᵗ Φ(τ, x(τ; ω)) dβ(τ),    (8.3.1)

where t ∈ [0, 1]. As before, the first integral is a Lebesgue integral, while the second is an Ito-type stochastic integral defined with respect to a scalar Brownian motion process {β(t) : t ∈ [0, 1]}. Recall that C*([0, 1], L₂(Ω, A, P)) ⊂ C_c(R₊, L₂(Ω, A, P)). We shall define the operators W₁ and W₂ from C*([0, 1], L₂(Ω, A, P)) into C*([0, 1], L₂(Ω, A, P)) by

(W₁x)(t; ω) = ∫₀ᵗ x(τ; ω) dτ    (8.3.2)

and

(W₂x)(t; ω) = ∫₀ᵗ x(τ; ω) dβ(τ).    (8.3.3)
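The operator (8.3.3) can be probed numerically. For the illustrative deterministic integrand x(τ) = cos τ (an assumption for the example, not from the text), the second moment of (W₂x)(1; ω) should match the discrete analogue of ∫₀¹ cos² τ dτ and, in particular, stay below sup_τ |cos τ|² = 1 — the inequality behind Lemma 8.3.1 below.

```python
import math, random

# Monte Carlo second moment of the discrete Ito sum (W2 x)(1) for x(tau) = cos(tau)
def w2_second_moment(t=1.0, n=200, paths=5000, seed=5):
    rng = random.Random(seed)
    h = t / n
    total = 0.0
    for _ in range(paths):
        # sum x(tau_k) * (beta(tau_{k+1}) - beta(tau_k)) with nonanticipating x
        w2 = sum(math.cos(k * h) * rng.gauss(0.0, math.sqrt(h)) for k in range(n))
        total += w2 * w2
    return total / paths

second = w2_second_moment()
theory = sum(math.cos(k * 0.005) ** 2 * 0.005 for k in range(200))  # ~ ∫_0^1 cos^2
```

The estimate sitting near `theory`, and below 1, is the discrete form of ‖W₂x‖ ≤ ‖x‖.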
Note that in view of Lemma 8.1.5, (W₂x)(t; ω) ∈ C*([0, 1], L₂(Ω, A, P)). It is clear that W₁ and W₂ are linear operators.

Lemma 8.3.1 The operators W₁ and W₂ defined by (8.3.2) and (8.3.3), respectively, are continuous operators from C*([0, 1], L₂(Ω, A, P)) into C*([0, 1], L₂(Ω, A, P)).
PROOF The fact that W₁ is a continuous operator from C*([0, 1], L₂(Ω, A, P)) into C*([0, 1], L₂(Ω, A, P)) follows from Lemma 2.1.1. From (8.3.3) we have

‖(W₂x)(t; ω)‖²_{L₂(Ω,A,P)} = ∫_Ω {∫₀ᵗ x(τ; ω) dβ(τ)}² dP(ω) = ∫₀ᵗ dτ ∫_Ω x²(τ; ω) dP(ω)

by Lemma 8.1.3. Furthermore,

‖(W₂x)(t; ω)‖²_{L₂(Ω,A,P)} ≤ ∫₀ᵗ dτ sup_{0≤τ≤1} ‖x(τ; ω)‖²_{L₂(Ω,A,P)} ≤ ‖x(t; ω)‖²_{C*}.

Therefore

‖(W₂x)(t; ω)‖ ≤ ‖x(t; ω)‖.

Thus W₁ and W₂ are continuous operators from C*([0, 1], L₂(Ω, A, P)) into C*([0, 1], L₂(Ω, A, P)).
8.3.1 An Existence Theorem

We shall assume that Lemma 2.1.1 holds with respect to the operators W₁ and W₂. Therefore there exist positive constants K₁ and K₂, each less than one, such that

‖(W₁x)(t; ω)‖_D ≤ K₁‖x(t; ω)‖_B

and

‖(W₂x)(t; ω)‖_D ≤ K₂‖x(t; ω)‖_B.

The following theorem gives sufficient conditions for the existence of a unique random solution, a second-order stochastic process, to the Ito-Doob stochastic integral equation (8.3.1).

Theorem 8.3.2 Consider the stochastic integral equation (8.3.1) under the following conditions:
(i) B and D are Banach spaces in C*([0, 1], L₂(Ω, A, P)) which are stronger than C*([0, 1], L₂(Ω, A, P)) and such that (B, D) is admissible with respect to the operators W₁ and W₂.
(ii) (a) x(t; ω) → f(t, x(t; ω)) is an operator on

S = {x(t; ω) : x(t; ω) ∈ D and ‖x(t; ω)‖_D ≤ ρ}

with values in B satisfying

‖f(t, x(t; ω)) − f(t, y(t; ω))‖_B ≤ λ₁‖x(t; ω) − y(t; ω)‖_D.

(b) x(t; ω) → Φ(t, x(t; ω)) is an operator on S into B satisfying

‖Φ(t, x(t; ω)) − Φ(t, y(t; ω))‖_B ≤ λ₂‖x(t; ω) − y(t; ω)‖_D,

where λ₁ and λ₂ are constants.
Then there exists a unique random solution to Eq. (8.3.1) provided that K₁λ₁ + K₂λ₂ < 1 and

‖f(t, 0)‖_B + ‖Φ(t, 0)‖_B ≤ ρ(1 − λ₁K₁ − λ₂K₂).

PROOF Define an operator U from the set S into D as follows:

(Ux)(t; ω) = ∫₀ᵗ f(τ, x(τ; ω)) dτ + ∫₀ᵗ Φ(τ, x(τ; ω)) dβ(τ).
We need to show that U is a contraction operator on S and that US ⊆ S. Let x(t; ω), y(t; ω) ∈ S. Then (Ux)(t; ω) − (Uy)(t; ω) ∈ D, because D is a Banach space. Further, we have

‖(Ux)(t; ω) − (Uy)(t; ω)‖_D ≤ K₁‖f(t, x(t; ω)) − f(t, y(t; ω))‖_B + K₂‖Φ(t, x(t; ω)) − Φ(t, y(t; ω))‖_B
                              ≤ (λ₁K₁ + λ₂K₂)‖x(t; ω) − y(t; ω)‖_D.

Thus U is a contraction operator. For any element in S we have

‖(Ux)(t; ω)‖_D ≤ K₁‖f(t, x(t; ω))‖_B + K₂‖Φ(t, x(t; ω))‖_B
               ≤ λ₁K₁‖x(t; ω)‖_D + λ₂K₂‖x(t; ω)‖_D + K₁‖f(t, 0)‖_B + K₂‖Φ(t, 0)‖_B.

Since x(t; ω) ∈ S, it follows that

‖(Ux)(t; ω)‖_D ≤ ρ(λ₁K₁ + λ₂K₂) + ‖f(t, 0)‖_B + ‖Φ(t, 0)‖_B ≤ ρ

from the assumptions in the theorem. Thus the existence and uniqueness of a random solution to Eq. (8.3.1) follow from the Banach fixed-point theorem. One can very easily state the special cases given in Chapter II for the Ito-Doob stochastic integral equation (8.3.1).
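The contraction behind Theorem 8.3.2 can also be watched numerically: iterate the operator U on a fixed ensemble of simulated Brownian paths and measure the sup-in-t sample-L₂ distance between successive iterates. The Lipschitz choices f(x) = 0.25 cos x and Φ(x) = 0.25 sin x + 0.1 below are illustrative assumptions, not from the text; with such small constants the mean-square contraction factor is well below one.

```python
import math, random

def picard_ito(f, phi, n_steps=100, n_paths=400, iterations=6, seed=2):
    rng = random.Random(seed)
    h = 1.0 / n_steps
    # one fixed ensemble of Brownian increments over [0, 1]
    db = [[rng.gauss(0.0, math.sqrt(h)) for _ in range(n_steps)]
          for _ in range(n_paths)]
    xs = [[0.0] * (n_steps + 1) for _ in range(n_paths)]   # initial guess x = 0
    dists = []
    for _ in range(iterations):
        new = []
        for p in range(n_paths):
            acc, row = 0.0, [0.0]
            for k in range(n_steps):
                # one step of (Ux)(t): Lebesgue term f dt plus Ito term phi dbeta
                acc += f(xs[p][k]) * h + phi(xs[p][k]) * db[p][k]
                row.append(acc)
            new.append(row)
        d = max(math.sqrt(sum((new[p][k] - xs[p][k]) ** 2
                              for p in range(n_paths)) / n_paths)
                for k in range(n_steps + 1))
        dists.append(d)
        xs = new
    return dists

dists = picard_ito(lambda x: 0.25 * math.cos(x), lambda x: 0.25 * math.sin(x) + 0.1)
```

The geometric decay of `dists` mirrors the condition K₁λ₁ + K₂λ₂ < 1.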
CHAPTER IX

Stochastic Nonlinear Differential Systems

9.0 Introduction
In this chapter we shall investigate the existence of a random solution, a second-order stochastic process, and the stability properties of the nonlinear differential systems with random parameters of the form

ẋ(t; ω) = A(ω)x(t; ω) + b(ω)φ(σ(t; ω))    ( ˙ = d/dt)    (9.0.1)

with

σ(t; ω) = (c(t; ω), x(t; ω));    (9.0.2)

ẋ(t; ω) = A(ω)x(t; ω) + b(ω)φ(σ(t; ω))    (9.0.3)

with

σ(t; ω) = f(t; ω) + ∫₀ᵗ (c(t − τ; ω), x(τ; ω)) dτ;    (9.0.4)

ẋ(t; ω) = A(ω)x(t; ω) + ∫₀ᵗ b(t − η; ω)φ(σ(η; ω)) dη    (9.0.5)
with

σ(t; ω) = f(t; ω) + ∫₀ᵗ (c(t − η; ω), x(η; ω)) dη;    (9.0.6)

and

ẋ(t; ω) = A(ω)x(t; ω) + ∫₀ᵗ b(t − τ; ω)φ(σ(τ; ω)) dτ + ∫₀ᵗ c(t − τ; ω)σ(τ; ω) dτ    (9.0.7)

with

σ(t; ω) = f(t; ω) + ∫₀ᵗ (d(t − τ; ω), x(τ; ω)) dτ,    (9.0.8)
where A(ω) is an n × n matrix whose elements are measurable functions; x(t; ω), c(t; ω), b(t; ω), and d(t; ω) are n × 1 vectors whose elements are random variables; σ(t; ω) and f(t; ω) are scalar random functions defined for t ∈ R₊ and ω ∈ Ω; and (x, y) denotes the scalar product in Euclidean space.

The problem of absolute stability was first formulated in 1944 by two Russian mathematicians, A. I. Lur'e and V. N. Postnikov. Since 1944 many scientists have worked, and are currently working, on the absolute stability of nonlinear control systems. The primary mathematical technique which was used universally to study this type of stability was Lyapunov's direct method. A good summary of the results and methods of such studies can be found in the book by LaSalle and Lefschetz [1]. However, in the late 1950's Lyapunov's method appeared to have been exhausted, and at about that time V. M. Popov introduced the frequency response method, which is currently being used in differential control systems. The concept of absolute stability is connected with both mathematical and engineering considerations. In engineering problems one is led to this concept because the characteristic function φ(σ) cannot be accurately determined, and may even change with time, while the stability of the system must be preserved. From a mathematical point of view one arrives at the concept of absolute stability from considerations of continuity. Although Popov's method has been used extensively by many scientists in ordinary control systems, it is only the work of Morozan [1, 2] and Tsokos [1-3, 5] which utilizes this method in differential control systems with random parameters. The nonlinear stochastic differential systems (9.0.1)-(9.0.2), (9.0.3)-(9.0.4), (9.0.5)-(9.0.6), and (9.0.7)-(9.0.8) will be reduced to a nonlinear stochastic integral equation of the form
σ(t; ω) = h(t; ω) + ∫₀ᵗ k(t − τ; ω)φ(σ(τ; ω)) dτ.    (9.0.9)
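The reduction to the form (9.0.9) can be checked numerically in a hypothetical scalar case (all constants and the nonlinearity below are illustrative assumptions): for constant a, b, c the system ẋ = ax + bφ(cx), σ = cx has h(t) = c e^{at} x₀ and k(t) = c e^{at} b, and the σ produced by solving the differential system should agree with the σ produced by marching the integral equation.

```python
import math

a, b, c, x0 = -1.0, 0.5, 2.0, 1.0      # hypothetical constants
phi = math.tanh                         # hypothetical nonlinearity, s*phi(s) > 0
n, T = 800, 2.0
h = T / n

# (1) Euler solution of x' = a*x + b*phi(c*x),  sigma = c*x
x = x0
sigma_ode = [c * x0]
for _ in range(n):
    x += h * (a * x + b * phi(c * x))
    sigma_ode.append(c * x)

# (2) direct marching of sigma(t) = c*e^{at}*x0 + ∫_0^t c*e^{a(t-tau)}*b*phi(sigma) dtau
sigma_int = [c * x0]
for m in range(1, n + 1):
    t = m * h
    conv = sum(c * math.exp(a * (t - j * h)) * b * phi(sigma_int[j]) * h
               for j in range(m))
    sigma_int.append(c * math.exp(a * t) * x0 + conv)

err = max(abs(u - v) for u, v in zip(sigma_ode, sigma_int))
```

The two discretizations track the same σ(t) up to first-order quadrature error, which is the computational content of the reduction.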
Utilizing a generalized version of Popov's frequency response method, in Section 9.2 we shall investigate the stochastic absolute stability of the reduced form of each of the systems described. Finally, we shall state the conditions under which the stochastic differential systems are stochastically absolutely stable. In Appendix 9.A we shall give a schematic representation of some of the given nonlinear stochastic differential systems along with their reduced form into a random integral equation.

9.1 Reduction of the Stochastic Differential Systems
9.1.1 Stochastic System (9.0.1)-(9.0.2)

The stochastic differential system (9.0.1)-(9.0.2) may be written as

ẋ(t; ω) − A(ω)x(t; ω) = b(ω)φ(σ(t; ω)).    (9.1.1)

Multiplying Eq. (9.1.1) by e^{−A(ω)t} and simplifying, it reduces to

(d/dt)[e^{−A(ω)t} x(t; ω)] = e^{−A(ω)t} b(ω)φ(σ(t; ω)).    (9.1.2)

Integrating both sides of (9.1.2) from t₀ to t and simplifying, we have

e^{−A(ω)t} x(t; ω) − e^{−A(ω)t₀} x(t₀; ω) = ∫_{t₀}^t e^{−A(ω)τ} b(ω)φ(σ(τ; ω)) dτ.    (9.1.3)

Multiplying Eq. (9.1.3) by e^{A(ω)t} and taking t₀ = 0, it reduces to

x(t; ω) = e^{A(ω)t} x₀(ω) + ∫₀ᵗ e^{A(ω)(t−τ)} b(ω)φ(σ(τ; ω)) dτ,    (9.1.4)

where x₀(ω) = x(0; ω). Therefore x(t; ω) as given by Eq. (9.1.4) is a solution of the stochastic differential system (9.0.1)-(9.0.2). Recall that the nonlinear part of the differential system is a function of

σ(t; ω) = (c(t; ω), x(t; ω)) = cᵀ(t; ω)x(t; ω),    (9.1.5)

where T denotes the transpose. Now, substituting x(t; ω) given by Eq. (9.1.4) into Eq. (9.1.5), we have

σ(t; ω) = cᵀ(t; ω) e^{A(ω)t} x₀(ω) + cᵀ(t; ω) ∫₀ᵗ e^{A(ω)(t−τ)} b(ω)φ(σ(τ; ω)) dτ.    (9.1.6)

If we let

h(t; ω) = cᵀ(t; ω) e^{A(ω)t} x₀(ω)

and

k(t − τ; ω) = cᵀ(t; ω) e^{A(ω)(t−τ)} b(ω),    0 ≤ τ ≤ t < ∞,

then Eq. (9.1.6) reduces to a stochastic integral equation of the form (9.0.9). Therefore the stochastic differential system reduces to a special form of the stochastic integral equation of the Volterra type (2.0.1), for which we have given conditions in Chapter II such that a random solution exists.

9.1.2 The Random Differential System (9.0.3)-(9.0.4)
The stochastic system (9.0.3)-(9.0.4) can be written as

x(t; ω) = e^{A(ω)t} x₀(ω) + ∫₀ᵗ e^{A(ω)(t−τ)} b(ω)φ(σ(τ; ω)) dτ,    (9.1.7)

as was shown, since (9.0.1) and (9.0.3) are identical. Substituting Eq. (9.1.7) into (9.0.4), the nonlinear part of the system, we have

σ(t; ω) = f(t; ω) + ∫₀ᵗ cᵀ(t − τ; ω) e^{A(ω)τ} x₀(ω) dτ
          + ∫₀ᵗ cᵀ(t − τ; ω) ∫₀^τ e^{A(ω)(τ−s)} b(ω)φ(σ(s; ω)) ds dτ.    (9.1.8)

Let h(t; ω) = f(t; ω) + ∫₀ᵗ cᵀ(t − τ; ω) e^{A(ω)τ} x₀(ω) dτ. Then Eq. (9.1.8) results in the following expression:

σ(t; ω) = h(t; ω) + ∫₀ᵗ cᵀ(t − τ; ω) ∫₀^τ e^{A(ω)(τ−s)} b(ω)φ(σ(s; ω)) ds dτ.    (9.1.9)

By interchanging the order of integration and changing variables, we have

∫₀ᵗ cᵀ(t − τ; ω) ∫₀^τ e^{A(ω)(τ−s)} b(ω)φ(σ(s; ω)) ds dτ
   = ∫₀ᵗ cᵀ(τ; ω) ∫₀^{t−τ} e^{A(ω)(t−τ−s)} b(ω)φ(σ(s; ω)) ds dτ
   = ∫₀ᵗ ∫₀^{t−s} cᵀ(τ; ω) e^{A(ω)(t−τ−s)} b(ω)φ(σ(s; ω)) dτ ds.

Now define

k(t; ω) = ∫₀ᵗ cᵀ(τ; ω) e^{A(ω)(t−τ)} b(ω) dτ.    (9.1.10)
Then

∫₀ᵗ k(t − s; ω)φ(σ(s; ω)) ds = ∫₀ᵗ ∫₀^{t−s} cᵀ(τ; ω) e^{A(ω)(t−s−τ)} b(ω) dτ φ(σ(s; ω)) ds,

which is the same as the double integral above, with k as defined in Eq. (9.1.10). Therefore the equation for σ(t; ω) can be written as

σ(t; ω) = h(t; ω) + ∫₀ᵗ k(t − τ; ω)φ(σ(τ; ω)) dτ.    (9.1.11)

Equation (9.1.11) is a special case of the stochastic integral equation of the Volterra type stated in Chapter II.

9.1.3 The Stochastic System (9.0.5)-(9.0.6)

Equation (9.0.5) can be written as follows:
x(t; ω) = e^{A(ω)t} x₀(ω) + ∫₀ᵗ e^{A(ω)(t−s)} ∫₀^s b(s − u; ω)φ(σ(u; ω)) du ds,    (9.1.12)

where x(0; ω) = x₀(ω). From the commutativity property of the convolution product we have

∫₀ᵗ e^{A(ω)(t−s)} ∫₀^s b(s − u; ω)φ(σ(u; ω)) du ds
   = ∫₀ᵗ e^{A(ω)s} ∫₀^{t−s} b(t − s − u; ω)φ(σ(u; ω)) du ds.    (9.1.13)

Changing the order of integration and letting

k₁(t; ω) = ∫₀ᵗ e^{A(ω)s} b(t − s; ω) ds,

the integral in Eq. (9.1.12) becomes

∫₀ᵗ k₁(t − u; ω)φ(σ(u; ω)) du.

Hence Eq. (9.1.12) can be written as

x(t; ω) = e^{A(ω)t} x₀(ω) + ∫₀ᵗ k₁(t − u; ω)φ(σ(u; ω)) du.    (9.1.14)
Substituting Eq. (9.1.14) into Eq. (9.0.6), we have

σ(t; ω) = f(t; ω) + ∫₀ᵗ cᵀ(t − s; ω) e^{A(ω)s} x₀(ω) ds
          + ∫₀ᵗ cᵀ(t − s; ω) ∫₀^s k₁(s − u; ω)φ(σ(u; ω)) du ds.    (9.1.15)

Let h(t; ω) = f(t; ω) + ∫₀ᵗ cᵀ(t − s; ω) e^{A(ω)s} x₀(ω) ds. Applying the commutativity property of the convolution product, we can write Eq. (9.1.15) as

σ(t; ω) = h(t; ω) + ∫₀ᵗ ∫₀^{t−u} cᵀ(s; ω) k₁(t − s − u; ω)φ(σ(u; ω)) ds du.    (9.1.16)

Define k(t; ω) = ∫₀ᵗ cᵀ(s; ω) k₁(t − s; ω) ds. Then

∫₀ᵗ ∫₀^{t−u} cᵀ(s; ω) k₁(t − s − u; ω)φ(σ(u; ω)) ds du = ∫₀ᵗ k(t − u; ω)φ(σ(u; ω)) du.    (9.1.17)

Therefore the differential system (9.0.5)-(9.0.6) with random parameters reduces to

σ(t; ω) = h(t; ω) + ∫₀ᵗ k(t − u; ω)φ(σ(u; ω)) du,    (9.1.18)

where

k(t; ω) = ∫₀ᵗ cᵀ(s; ω) k₁(t − s; ω) ds,    k₁(t; ω) = ∫₀ᵗ e^{A(ω)s} b(t − s; ω) ds,

and

h(t; ω) = f(t; ω) + ∫₀ᵗ cᵀ(t − s; ω) e^{A(ω)s} x₀(ω) ds.

The stochastic integral equation (9.1.18) is a special case of (2.0.1). Thus a random solution exists.
9.1.4 The Random Differential System (9.0.7)-(9.0.8)

Equation (9.0.7) can be written as

x(t; ω) = e^{A(ω)t} x₀(ω) + ∫₀ᵗ e^{A(ω)(t−s)} [∫₀^s b(s − τ; ω)φ(σ(τ; ω)) dτ + ∫₀^s c(s − τ; ω)σ(τ; ω) dτ] ds.    (9.1.19)
Substituting Eq. (9.1.19) into Eq. (9.0.8), we have

σ(t; ω) = h(t; ω) + ∫₀ᵗ dᵀ(t − τ; ω) ∫₀^τ e^{A(ω)(τ−s)} [∫₀^s b(s − v; ω)φ(σ(v; ω)) dv + ∫₀^s c(s − v; ω)σ(v; ω) dv] ds dτ,    (9.1.20)

where h(t; ω) = f(t; ω) + ∫₀ᵗ dᵀ(t − τ; ω) e^{A(ω)τ} x₀(ω) dτ. Consider the following portion of Eq. (9.1.20):

∫₀^τ e^{A(ω)(τ−s)} [∫₀^s b(s − v; ω)φ(σ(v; ω)) dv + ∫₀^s c(s − v; ω)σ(v; ω) dv] ds
   = ∫₀^τ ∫₀^{τ−s} e^{A(ω)s} b(τ − s − v; ω)φ(σ(v; ω)) dv ds
     + ∫₀^τ ∫₀^{τ−s} e^{A(ω)s} c(τ − s − v; ω)σ(v; ω) dv ds.    (9.1.21)

Changing the order of integration and letting

k₄(t; ω) = ∫₀ᵗ e^{A(ω)s} b(t − s; ω) ds    and    k₃(t; ω) = ∫₀ᵗ e^{A(ω)s} c(t − s; ω) ds

in Eq. (9.1.21), the right-hand side becomes

∫₀^τ k₄(τ − v; ω)φ(σ(v; ω)) dv + ∫₀^τ k₃(τ − v; ω)σ(v; ω) dv.

Hence Eq. (9.1.20) can be written as

σ(t; ω) = h(t; ω) + ∫₀ᵗ dᵀ(t − τ; ω) [∫₀^τ k₄(τ − v; ω)φ(σ(v; ω)) dv + ∫₀^τ k₃(τ − v; ω)σ(v; ω) dv] dτ

or

σ(t; ω) = h(t; ω) + ∫₀ᵗ ∫₀^{t−v} dᵀ(τ; ω) k₄(t − τ − v; ω)φ(σ(v; ω)) dτ dv
          + ∫₀ᵗ ∫₀^{t−v} dᵀ(τ; ω) k₃(t − τ − v; ω)σ(v; ω) dτ dv.    (9.1.22)
Changing the order of integration, letting

k₅(t; ω) = ∫₀ᵗ dᵀ(τ; ω) k₄(t − τ; ω) dτ    and    k₂(t; ω) = ∫₀ᵗ dᵀ(τ; ω) k₃(t − τ; ω) dτ,

and applying the commutativity property of the convolution product, Eq. (9.1.22) becomes

σ(t; ω) = h(t; ω) + ∫₀ᵗ k₅(t − v; ω)φ(σ(v; ω)) dv + ∫₀ᵗ k₂(t − v; ω)σ(v; ω) dv.    (9.1.23)

Utilizing a convolution theorem, we can write Eq. (9.1.23) as follows:

σ(t; ω) = ψ(t; ω) + ∫₀ᵗ u(t − s; ω) ∫₀^s k₅'(s − v; ω)φ(σ(v; ω)) dv ds,    (9.1.24)

where

ψ(t; ω) = f(0; ω)u(t; ω) + ∫₀ᵗ u(t − s; ω)h'(s; ω) ds

and u(t; ω) is the random solution of

u(t; ω) = 1 + ∫₀ᵗ k₂(t − s; ω)u(s; ω) ds.

Equation (9.1.24) can be reduced to the following form:

σ(t; ω) = ψ(t; ω) + ∫₀ᵗ ∫₀^{t−v} u(s; ω) k₅'(t − s − v; ω)φ(σ(v; ω)) ds dv.    (9.1.25)

Changing the order of integration and letting

k(t; ω) = ∫₀ᵗ u(s; ω) k₅'(t − s; ω) ds,

Eq. (9.1.25) becomes

σ(t; ω) = ψ(t; ω) + ∫₀ᵗ k(t − v; ω)φ(σ(v; ω)) dv,    (9.1.26)
which is a special case of Eq. (2.0.1), and we know that a random solution exists. We remark that in the reduction of the random system (9.0.7)-(9.0.8) we assumed that the stochastic kernel k₂(t; ω) is of an exponential form. However, this condition can be relaxed.

9.2 Stochastic Absolute Stability of the Differential Systems
Recall that a stochastic differential system is said to be stochastically absolutely stable if there exists a random solution x(t; ω) to the system such that

P{ω : lim_{t→∞} x(t; ω) = 0} = 1.

With respect to the aims of this chapter, we state and prove the following theorems.

Theorem 9.2.1 Suppose that the nonlinear stochastic integral equation (9.1.11) satisfies the following conditions:
(i) h(t; ω) and h'(t; ω) ∈ L₁(R₊, L_∞(Ω, A, P)) for t ∈ R₊ and ω ∈ Ω.
(ii) k(t; ω) and k'(t; ω) ∈ L₁(R₊, L_∞(Ω, A, P)) ∩ L₂(R₊, L_∞(Ω, A, P)) for t ∈ R₊ and ω ∈ Ω.
(iii) φ(σ) is a continuous and bounded function for all σ ∈ R, R being the real line, and σφ(σ) > 0 for σ ≠ 0.
(iv) There exists a q ≥ 0 such that Re{(1 + iλq)k̃(iλ; ω)} ≤ 0 for λ ∈ R and a.e. with respect to ω.
Then every random solution σ(t; ω), t ≥ 0, of the nonlinear stochastic integral equation (9.1.11) is stochastically absolutely stable.

REMARK k̃(iλ; ω) = ∫₀^∞ k(t; ω) e^{−iλt} dt is the Fourier transform of the stochastic kernel, with λ the frequency.
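Condition (iv), the stochastic analogue of Popov's frequency criterion, is easy to check numerically for a concrete kernel. The sketch below uses the illustrative, ω-independent kernel k(t) = −e^{−t} (an assumption for the example; it arises, e.g., from scalar data with a = −1 and cᵀb = −1), whose Fourier transform is k̃(iλ) = −1/(1 + iλ).

```python
# With q = 1 the Popov expression collapses exactly:
#   (1 + i*lam) * (-1/(1 + i*lam)) = -1  <=  0  for every real lam.
def popov_re(lam, q=1.0):
    k_hat = -1.0 / (1.0 + 1j * lam)      # Fourier transform of k(t) = -e^{-t}
    return ((1.0 + 1j * lam * q) * k_hat).real

vals = [popov_re(x / 10.0) for x in range(-100, 101)]   # lam in [-10, 10]
```

Scanning a grid of frequencies like this is a practical way to verify the hypothesis before invoking the theorem.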
PROOF Let σ(t; ω) be a solution of the stochastic integral equation (9.1.11). Following the method of V. M. Popov, let us consider the following functions on R₊:

φ_t(τ; ω) = φ(σ(τ; ω)) for 0 ≤ τ ≤ t,    φ_t(τ; ω) = 0 for τ > t,    (9.2.1)

and

γ_t(τ; ω) = ∫₀^τ [k(τ − ξ; ω) + qk'(τ − ξ; ω)]φ_t(ξ; ω) dξ + qk(0; ω)φ_t(τ; ω).    (9.2.2)
To show the validity of the choice of these functions, we proceed as follows. Differentiating Eq. (9.1.11) with respect to t, we have

σ'(t; ω) = h'(t; ω) + k(0; ω)φ(σ(t; ω)) + ∫₀ᵗ k'(t − τ; ω)φ(σ(τ; ω)) dτ.    (9.2.3)

Utilizing Eqs. (9.2.1)-(9.2.3), we obtain the following:

γ_t(τ; ω) = σ(τ; ω) + qσ'(τ; ω) − [h(τ; ω) + qh'(τ; ω)],    0 ≤ τ ≤ t.    (9.2.4)

To show that we can write Eq. (9.2.4), substitute Eq. (9.2.3) into the right-hand side of Eq. (9.2.4) and use the fact that, from (9.1.11),

σ(τ; ω) − h(τ; ω) = ∫₀^τ k(τ − ξ; ω)φ(σ(ξ; ω)) dξ;    (9.2.5)

the right-hand side of Eq. (9.2.4) then becomes

∫₀^τ k(τ − ξ; ω)φ(σ(ξ; ω)) dξ + qk(0; ω)φ(σ(τ; ω)) + q ∫₀^τ k'(τ − ξ; ω)φ(σ(ξ; ω)) dξ.    (9.2.6)

In expression (9.2.6) replace φ(σ(ξ; ω)) with φ_t(ξ; ω) as given in Eq. (9.2.1), and we have

γ_t(τ; ω) = ∫₀^τ [k(τ − ξ; ω) + qk'(τ − ξ; ω)]φ_t(ξ; ω) dξ + qk(0; ω)φ_t(τ; ω),    0 ≤ τ ≤ t.    (9.2.7)

Therefore we have shown that the right-hand side of Eq. (9.2.4) equals the left-hand side, as shown in Eq. (9.2.7). Further, for τ > t we can write

γ_t(τ; ω) = ∫₀ᵗ [k(τ − ξ; ω) + qk'(τ − ξ; ω)]φ(σ(ξ; ω)) dξ,    τ > t.    (9.2.8)

It now follows from (9.2.7), (9.2.8), the hypothesis of the theorem, and the assumption on σ, that

γ_t(τ; ω) ∈ L₁(R₊, L_∞(Ω, A, P)) ∩ L₂(R₊, L_∞(Ω, A, P))

for t ∈ R₊ and ω ∈ Ω. We shall now consider the Fourier transforms of γ_t(τ; ω) and φ_t(τ; ω) as follows:

γ̃_t(iλ; ω) = ∫₀^∞ γ_t(τ; ω) e^{−iλτ} dτ    and    φ̃_t(iλ; ω) = ∫₀^∞ φ_t(τ; ω) e^{−iλτ} dτ.
Using the fact that if k(t; ω) ∈ L₁(R₊, L_∞(Ω, A, P)) and h(t; ω) ∈ L₁(R₊, L_∞(Ω, A, P)), then their convolution product belongs to L₁(R₊, L_∞(Ω, A, P)), and applying the well-known result that the Fourier transform of the convolution product is equal to the product of the Fourier transforms to Eq. (9.2.2), we have

γ̃_t(iλ; ω) = k̃(iλ; ω)φ̃_t(iλ; ω) + q[k̃'(iλ; ω)φ̃_t(iλ; ω)] + qk(0; ω)φ̃_t(iλ; ω),    (9.2.9)

where k̃'(iλ; ω) denotes the Fourier transform of k'(t; ω). We know that the Fourier transform of (d/dt)f(t) equals iλ [Fourier transform of f(t)] − f(0). That is, Eq. (9.2.9) can be written as

γ̃_t(iλ; ω) = k̃(iλ; ω)φ̃_t(iλ; ω) + q[iλ k̃(iλ; ω) − k(0; ω)]φ̃_t(iλ; ω) + qk(0; ω)φ̃_t(iλ; ω)
            = (1 + iλq)k̃(iλ; ω)φ̃_t(iλ; ω).    (9.2.10)

Now consider

ρ(t; ω) = ∫₀^∞ γ_t(τ; ω)φ_t(τ; ω) dτ.    (9.2.11)

Applying Parseval's equality, or the completeness relation, we can write Eq. (9.2.11) as follows:

ρ(t; ω) = (1/2π) ∫_{−∞}^{∞} γ̃_t(iλ; ω) φ̃̄_t(iλ; ω) dλ,    (9.2.12)
where φ̃̄_t(iλ; ω) is the conjugate of the Fourier transform of φ_t(τ; ω). Substituting γ̃_t(iλ; ω) as given in Eq. (9.2.10) into Eq. (9.2.12), we have

ρ(t; ω) = (1/2π) ∫_{−∞}^{∞} k̃(iλ; ω)φ̃_t(iλ; ω)(1 + iλq)φ̃̄_t(iλ; ω) dλ.    (9.2.13)
Equation (9.2.13) can be written as

ρ(t; ω) = (1/2π) ∫_{−∞}^{∞} k̃(iλ; ω)(1 + iλq)|φ̃_t(iλ; ω)|² dλ.    (9.2.14)

Since we know that ρ(t; ω) is real, because we have defined it as such, we can take only the real part of (9.2.14); that is,

ρ(t; ω) = (1/2π) ∫_{−∞}^{∞} Re[k̃(iλ; ω)(1 + iλq)] |φ̃_t(iλ; ω)|² dλ.    (9.2.15)

However, by hypothesis we have

Re[(1 + iλq)k̃(iλ; ω)] ≤ 0,

which implies that Eq. (9.2.15) becomes

ρ(t; ω) ≤ 0.

Recall that

ρ(t; ω) = ∫₀^∞ γ_t(τ; ω)φ_t(τ; ω) dτ = ∫₀ᵗ {σ(τ; ω) + qσ'(τ; ω) − [h(τ; ω) + qh'(τ; ω)]}φ(σ(τ; ω)) dτ ≤ 0.    (9.2.16)
It follows from (9.2.16) that

ρ(t; ω) = ∫₀ᵗ σ(τ; ω)φ(σ(τ; ω)) dτ + q ∫₀ᵗ σ'(τ; ω)φ(σ(τ; ω)) dτ − ∫₀ᵗ [h(τ; ω) + qh'(τ; ω)]φ(σ(τ; ω)) dτ ≤ 0.    (9.2.17)

Let F(σ) = ∫₀^σ φ(u) du. Equation (9.2.17) then reduces to

∫₀ᵗ σ(τ; ω)φ(σ(τ; ω)) dτ + q[F(σ(t; ω)) − F(σ(0; ω))] − ∫₀ᵗ [h(τ; ω) + qh'(τ; ω)]φ(σ(τ; ω)) dτ ≤ 0

or

∫₀ᵗ σ(τ; ω)φ(σ(τ; ω)) dτ + qF(σ(t; ω)) − ∫₀ᵗ [h(τ; ω) + qh'(τ; ω)]φ(σ(τ; ω)) dτ ≤ qF(σ(0; ω)).    (9.2.18)
However, we know from Eq. (9.1.11) that σ(0; ω) = h(0; ω); hence Eq. (9.2.18) can be written as

∫₀ᵗ σ(τ; ω)φ(σ(τ; ω)) dτ + qF(σ(t; ω)) − ∫₀ᵗ [h(τ; ω) + qh'(τ; ω)]φ(σ(τ; ω)) dτ ≤ qF(h(0; ω)).    (9.2.19)

By Condition (iii) of the theorem, F(σ(t; ω)) > 0 for σ ≠ 0, which implies that

∫₀ᵗ σ(τ; ω)φ(σ(τ; ω)) dτ ≤ ∫₀ᵗ [h(τ; ω) + qh'(τ; ω)]φ(σ(τ; ω)) dτ + qF(h(0; ω)).    (9.2.20)

From inequality (9.2.20) it follows that

∫₀ᵗ σ(τ; ω)φ(σ(τ; ω)) dτ ≤ ∫₀ᵗ |h(τ; ω) + qh'(τ; ω)| |φ(σ(τ; ω))| dτ + q|F(h(0; ω))|.    (9.2.21)

Let |φ(σ(t; ω))| ≤ Q₀(ω), with Q₀(ω) a.e. bounded. Since by hypothesis φ(σ(t; ω)) is bounded, inequality (9.2.21) can be written as follows:

∫₀ᵗ σ(τ; ω)φ(σ(τ; ω)) dτ ≤ Q₀(ω) ∫₀^∞ |h(τ; ω) + qh'(τ; ω)| dτ + q|F(h(0; ω))| ≤ M < ∞,    P-a.e.

Let Ξ(t; ω) = ∫₀ᵗ φ(σ(τ; ω))σ(τ; ω) dτ. Then Ξ(t; ω) is bounded for t ∈ R₊ and ω ∈ Ω, and from Condition (iii) of the theorem it is also a monotone increasing function for t ≥ 0, which implies that Ξ(t; ω) has a finite limit as t → ∞. Hence, applying the lemma of Barbalat, we conclude that

P{ω : lim_{t→∞} σ(t; ω) = 0} = 1.
Theorem 9.2.2 Suppose that the differential system with a random parameter (9.0.1)-(9.0.2) satisfies the following conditions:
(i) (a) The matrix A(ω) is stochastically stable.
(b) The vector function c(t; ω) is defined for all t ≥ 0 and ω ∈ Ω such that c(t; ω), c'(t; ω) ∈ L₁(R₊, L_∞(Ω, A, P)).
(c) b(ω) is a scalar random variable.
(ii) φ(σ) is a continuous function for all σ ∈ R and σφ(σ) > 0 for σ ≠ 0.
(iii) There exists a q ≥ 0 such that

Re{(1 + iλq)c̃ᵀ(iλ; ω)(iλI − A(ω))⁻¹ b(ω)} ≤ 0,

where I is the identity matrix and c̃ᵀ(iλ; ω) = ∫₀^∞ cᵀ(t; ω) e^{−iλt} dt.
Then the system (9.0.1)-(9.0.2) is stochastically absolutely stable.

PROOF We shall prove this theorem by demonstrating that the conditions of Theorem 9.2.1 are satisfied. From system (9.0.1) we have
x(t; ω) = e^{A(ω)t} x₀(ω) + ∫₀ᵗ e^{A(ω)(t−τ)} b(ω)φ(σ(τ; ω)) dτ,    (9.2.22)

where x₀(ω) = x(0; ω). Substituting (9.2.22) into (9.0.2) results in

σ(t; ω) = cᵀ(t; ω) e^{A(ω)t} x₀(ω) + ∫₀ᵗ cᵀ(τ; ω) e^{A(ω)(t−τ)} b(ω)φ(σ(τ; ω)) dτ.    (9.2.23)

Since A(ω) is stochastically stable and c(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)), their convolution product h(t; ω) = cᵀ(t; ω) e^{A(ω)t} x₀(ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)). Similarly,

h′(t; ω) = (d/dt){cᵀ(t; ω) e^{A(ω)t} x₀(ω)} = c′ᵀ(t; ω) e^{A(ω)t} x₀(ω) + cᵀ(t; ω) A(ω) e^{A(ω)t} x₀(ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫))

for almost all ω ∈ Ω. Hence Condition (i) of Theorem 9.2.1 is satisfied. Furthermore,

k(t − τ; ω) = cᵀ(τ; ω) e^{A(ω)(t−τ)} b(ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫)),

because cᵀ(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)), e^{A(ω)t} ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫)), and so is their convolution product k(t − τ; ω), for almost all ω ∈ Ω. By a similar argument,

k′(t − τ; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫)).
Therefore Condition (ii) of Theorem 9.2.1 is satisfied. Condition (iii) of Theorem 9.2.1 is identical with Condition (ii) of this theorem. Now

k̂(iλ; ω) = ∫₀^∞ k(t; ω) e^{−iλt} dt = ∫₀^∞ ∫₀ᵗ cᵀ(τ; ω) e^{A(ω)(t−τ)} b(ω) dτ e^{−iλt} dt.    (9.2.24)

Applying the well-known result that the Fourier transform of the convolution product is equal to the product of the Fourier transforms, and the fact that the Fourier transform of e^{A(ω)t} is (iλI − A(ω))⁻¹, we can write Eq. (9.2.24) as follows:

k̂(iλ; ω) = c̄ᵀ(iλ; ω)(iλI − A(ω))⁻¹ b(ω).

From Condition (iii) of the theorem we have

Re{(1 + iλq) k̂(iλ; ω)} ≤ 0.

Hence Condition (iv) of Theorem 9.2.1 is satisfied, and the stochastic differential system (9.0.1)–(9.0.2) is stochastically absolutely stable, completing the proof.

Theorem 9.2.3  Suppose that the stochastic differential system (9.0.3)–(9.0.4) satisfies the following conditions:

(i) The matrix A(ω) is stochastically stable.
(ii) (a) The vector function c(t; ω) is defined for all t ≥ 0 and ω ∈ Ω such that c(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫));
(b) f(t; ω) is defined for t ≥ 0, ω ∈ Ω, with f(t; ω) and f′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫));
(c) b(ω) is a scalar random variable.
(iii) φ(σ) is a continuous function for all σ ∈ R and σφ(σ) > 0 for σ ≠ 0.
(iv) There exists a q ≥ 0 such that

Re{(1 + iλq) c̄ᵀ(iλ; ω)(iλI − A(ω))⁻¹ b(ω)} ≤ 0,

where c̄ᵀ(iλ; ω) = ∫₀^∞ cᵀ(t; ω) e^{−iλt} dt and I is the identity matrix.

Then the stochastic differential system (9.0.3)–(9.0.4) is stochastically absolutely stable, that is,

𝒫{ω : lim_{t→∞} σ(t; ω) = 0} = 1.
PROOF We shall prove this theorem by demonstrating that the conditions of Theorem 9.2.1 are satisfied.
We have seen from the reduction of the differential system (9.0.3)–(9.0.4) to the random integral equation

σ(t; ω) = h(t; ω) + ∫₀ᵗ k(t − τ; ω)φ(σ(τ; ω)) dτ

that

h(t; ω) = f(t; ω) + ∫₀ᵗ cᵀ(τ; ω) e^{A(ω)(t−τ)} x₀(ω) dτ.    (9.2.25)

We must show that h(t; ω) as defined in Eq. (9.2.25) belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)). It is given in Condition (iib) of the theorem that f(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)). The integral in Eq. (9.2.25) is a convolution product of cᵀ(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) and e^{A(ω)t}, which also belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)) for almost all ω ∈ Ω. We know that if two functions belong to L₁(R₊, L∞(Ω, 𝒜, 𝒫)), then their convolution product also belongs to the same space for almost all ω ∈ Ω. Hence h(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)).

Now we must show that h′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)). Differentiating Eq. (9.2.25) with respect to t, we have

h′(t; ω) = f′(t; ω) + ∫₀ᵗ cᵀ(τ; ω) A(ω) e^{A(ω)(t−τ)} x₀(ω) dτ + cᵀ(t; ω) x₀(ω).    (9.2.26)

By hypothesis (ii) we know that f′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) and cᵀ(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)). The convolution product in Eq. (9.2.26), as has previously been shown, belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)). Therefore h(t; ω) and h′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)), and Condition (i) of Theorem 9.2.1 is satisfied.

To show part (ii) of Theorem 9.2.1, we recall that

k(t; ω) = ∫₀ᵗ cᵀ(τ; ω) e^{A(ω)(t−τ)} b(ω) dτ,    (9.2.27)

which belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)) for the same reason as before. Now, differentiating Eq. (9.2.27), we have

k′(t; ω) = ∫₀ᵗ cᵀ(τ; ω) e^{A(ω)(t−τ)} A(ω) b(ω) dτ + cᵀ(t; ω) b(ω),

which obviously belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)) for almost all ω ∈ Ω. Utilizing the same type of reasoning, it is easy to see that k(t; ω) and k′(t; ω) ∈ L₂(R₊, L∞(Ω, 𝒜, 𝒫)). Hence Condition (ii) of Theorem 9.2.1 is satisfied.
Condition (iii) of Theorem 9.2.1 is identical with Condition (iii) of this theorem. It remains to be shown that Condition (iv) of Theorem 9.2.1 is satisfied. Let us consider

k̂(iλ; ω) = ∫₀^∞ k(t; ω) e^{−iλt} dt,    (9.2.28)

where

k(t; ω) = ∫₀ᵗ cᵀ(τ; ω) e^{A(ω)(t−τ)} b(ω) dτ.    (9.2.29)

Substituting Eq. (9.2.29) into Eq. (9.2.28), we have

k̂(iλ; ω) = ∫₀^∞ ∫₀ᵗ cᵀ(τ; ω) e^{A(ω)(t−τ)} b(ω) dτ e^{−iλt} dt.    (9.2.30)

Now, applying the well-known result that the Fourier transform of the convolution product is equal to the product of the Fourier transforms, and the fact that the Fourier transform of e^{A(ω)t} is (iλI − A(ω))⁻¹, we can write Eq. (9.2.30) as follows:

k̂(iλ; ω) = c̄ᵀ(iλ; ω)(iλI − A(ω))⁻¹ b(ω).    (9.2.31)

From Condition (iv) of the theorem, that is,

Re{(1 + iλq) c̄ᵀ(iλ; ω)(iλI − A(ω))⁻¹ b(ω)} ≤ 0,

we can write

Re{(1 + iλq) k̂(iλ; ω)} ≤ 0.    (9.2.32)

Therefore inequality (9.2.32) shows that Condition (iv) of Theorem 9.2.1 holds. Hence, since the conditions of Theorem 9.2.1 are satisfied, we conclude that the stochastic differential system (9.0.3)–(9.0.4) admits at least one solution, say σ(t; ω), for t ≥ 0, such that

𝒫{ω : lim_{t→∞} σ(t; ω) = 0} = 1,

which completes the proof.

Theorem 9.2.4  Suppose that the nonlinear stochastic integral equation (9.1.18) satisfies the following conditions:

(i) h(t; ω) and h′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) for t ∈ R₊ and ω ∈ Ω.
(ii) k(t; ω) and k′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫)) for t ∈ R₊ and ω ∈ Ω.
(iii) φ(σ) is a continuous and bounded function for all σ ∈ R, R being the real line, and σφ(σ) > 0 for σ ≠ 0.
(iv) There exists a q ≥ 0 such that Re{(1 + iλq) k̂(iλ; ω)} ≤ 0 for λ ∈ R and a.e. with respect to ω.

Then every random solution σ(t; ω) of the nonlinear stochastic integral equation (9.1.18) is stochastically absolutely stable. Note that

k̂(iλ; ω) = ∫₀^∞ k(t; ω) e^{−iλt} dt,

the Fourier transform of the stochastic kernel, with λ the frequency.

PROOF  The proof of this theorem is similar to that of Theorem 9.2.1 and is omitted.
By placing the same conditions on ψ(t; ω) as we have on h(t; ω), we can conclude that every random solution of the stochastic integral equation (9.1.26) is stochastically absolutely stable.
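The frequency condition (iv) is the working criterion in Theorems 9.2.2–9.2.6, and it can be checked numerically once A(ω), b(ω), and c(t; ω) are fixed at a sample point. The sketch below is an editorial illustration with assumed data: a scalar system with A(ω) = −1, b(ω) = 1, and c(t; ω) = −e^{−2t}, so that c̄(iλ) = −1/(2 + iλ) and k̂(iλ) = c̄(iλ)(iλ − A)⁻¹b. With q = 1 one has (1 + iλq)k̂(iλ) = −1/(2 + iλ), whose real part is −2/(4 + λ²) ≤ 0 for every frequency.

```python
# Illustrative check of the Popov-type condition of Theorem 9.2.4 at one
# sample omega; A, b, c(t) below are assumptions, not values from the book.

def k_hat(lam, A=-1.0, b=1.0):
    c_bar = -1.0 / (2.0 + 1j * lam)      # Fourier transform of c(t) = -e^{-2t}
    return c_bar * b / (1j * lam - A)    # c_bar (i*lam*I - A)^{-1} b

def popov_real_part(lam, q=1.0):
    """Re{(1 + i*lam*q) k_hat(i*lam)} at frequency lam."""
    return ((1.0 + 1j * lam * q) * k_hat(lam)).real

grid = [i / 10.0 for i in range(-500, 501)]
worst = max(popov_real_part(lam) for lam in grid)
print(worst)
```

A grid scan cannot replace the proof, which requires the inequality for all λ ∈ R and almost every ω, but it is a quick way to falsify a candidate q before attempting a verification by hand.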
Theorem 9.2.5  Suppose that the differential system with random parameters (9.0.5)–(9.0.6) satisfies the following conditions:

(i) The matrix A(ω) is stochastically stable.
(ii) (a) The vector-valued function c(t; ω) is defined for all t ≥ 0 and ω ∈ Ω such that c(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫));
(b) f(t; ω) is defined for t ≥ 0, ω ∈ Ω, with f(t; ω) and f′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫));
(c) b(t; ω) is defined for t ≥ 0 and ω ∈ Ω such that b(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫)).
(iii) φ(σ) is a continuous function for all σ ∈ R and σφ(σ) > 0 for σ ≠ 0.
(iv) There exists a q ≥ 0 such that

Re{(1 + iλq) c̄ᵀ(iλ; ω)(iλI − A(ω))⁻¹ b̂(iλ; ω)} ≤ 0,

where c̄ᵀ(iλ; ω) = ∫₀^∞ cᵀ(t; ω) e^{−iλt} dt, b̂(iλ; ω) = ∫₀^∞ b(t; ω) e^{−iλt} dt, and I is the identity matrix.

Then the random differential system (9.0.5)–(9.0.6) is stochastically absolutely stable.

PROOF  We shall prove this theorem by showing that the conditions of Theorem 9.2.4 are satisfied. We have seen that system (9.0.5)–(9.0.6) reduces to

σ(t; ω) = h(t; ω) + ∫₀ᵗ k(t − τ; ω)φ(σ(τ; ω)) dτ,
where

h(t; ω) = f(t; ω) + ∫₀ᵗ cᵀ(t − s; ω) e^{A(ω)s} x₀(ω) ds = f(t; ω) + ∫₀ᵗ cᵀ(s; ω) e^{A(ω)(t−s)} x₀(ω) ds.    (9.2.33)

We must show that h(t; ω) as defined in Eq. (9.2.33) belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)). From Condition (iib) of the theorem we have f(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)). The integral in Eq. (9.2.33) is a convolution product of cᵀ(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) and e^{A(ω)t}, which also belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)) for almost all ω ∈ Ω, because we know that if two functions belong to L₁(R₊, L∞(Ω, 𝒜, 𝒫)), then their convolution product also belongs to the same space for almost all ω ∈ Ω. Therefore h(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)).
Now, differentiating h(t; ω) with respect to t, we have

h′(t; ω) = f′(t; ω) + ∫₀ᵗ cᵀ(s; ω) A(ω) e^{A(ω)(t−s)} x₀(ω) ds + cᵀ(t; ω) x₀(ω).    (9.2.34)

By hypothesis (ii) we know that f′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) and cᵀ(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)). The convolution product in Eq. (9.2.34) also belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)) because of the convolution theorem and the stability of the matrix A(ω). Hence h(t; ω) and h′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)), and Condition (i) of Theorem 9.2.4 is satisfied.

To show Condition (ii) of Theorem 9.2.4, recall that

k(t; ω) = ∫₀ᵗ cᵀ(s; ω) k₁(t − s; ω) ds,    (9.2.35)

where

k₁(t; ω) = ∫₀ᵗ e^{A(ω)s} b(t − s; ω) ds = ∫₀ᵗ e^{A(ω)(t−s)} b(s; ω) ds.

Since b(t; ω) and e^{A(ω)t} both belong to L₁(R₊, L∞(Ω, 𝒜, 𝒫)), their convolution product k₁(t; ω) also belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)). Again, since both k₁(t; ω) and cᵀ(t; ω) belong to L₁(R₊, L∞(Ω, 𝒜, 𝒫)), their convolution product k(t; ω) also belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)). Now, differentiating Eq. (9.2.35) with respect to t, we have
k′(t; ω) = ∫₀ᵗ cᵀ(s; ω) k₁′(t − s; ω) ds, with

k₁′(t; ω) = ∫₀ᵗ A(ω) e^{A(ω)(t−s)} b(s; ω) ds + b(t; ω),

which obviously belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)) for almost all ω ∈ Ω. Utilizing the same type of reasoning, it is easy to see that k(t; ω) and k′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫)). Thus Condition (ii) of Theorem 9.2.4 is satisfied.

Condition (iii) of Theorem 9.2.4 is identical with Condition (iii) of this theorem. It remains to be shown that Condition (iv) of Theorem 9.2.4 is satisfied. Let us consider

k̂(iλ; ω) = ∫₀^∞ k(t; ω) e^{−iλt} dt,    (9.2.36)

where

k(t; ω) = ∫₀ᵗ cᵀ(s; ω) { ∫₀^{t−s} e^{A(ω)(t−s−τ)} b(τ; ω) dτ } ds.    (9.2.37)

Substituting Eq. (9.2.37) into Eq. (9.2.36), we have

k̂(iλ; ω) = ∫₀^∞ ∫₀ᵗ cᵀ(s; ω) { ∫₀^{t−s} e^{A(ω)(t−s−τ)} b(τ; ω) dτ } ds e^{−iλt} dt.    (9.2.38)

Now, applying the well-known result that the Fourier transform of the convolution product is equal to the product of the Fourier transforms, and the fact that the Fourier transform of e^{A(ω)t} is (iλI − A(ω))⁻¹, we can write Eq. (9.2.38) as follows:

k̂(iλ; ω) = c̄ᵀ(iλ; ω)(iλI − A(ω))⁻¹ b̂(iλ; ω).    (9.2.39)

From Condition (iv) of the theorem, that is,

Re{(1 + iλq) c̄ᵀ(iλ; ω)(iλI − A(ω))⁻¹ b̂(iλ; ω)} ≤ 0,

we can write

Re{(1 + iλq) k̂(iλ; ω)} ≤ 0.    (9.2.40)

Therefore inequality (9.2.40) shows that Condition (iv) of Theorem 9.2.4 holds. Hence, since the conditions of Theorem 9.2.4 are satisfied, we conclude that the random solution of system (9.0.5)–(9.0.6) is stochastically absolutely stable.

Theorem 9.2.6  Suppose that the random system (9.0.7)–(9.0.8) satisfies the following conditions:

(i) The matrix A(ω) is stochastically stable.
(ii) (a) The vector function b(t; ω) is defined for t ≥ 0 and ω ∈ Ω such that b(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫));
(b) c(t; ω) is defined for t ≥ 0 and ω ∈ Ω such that c(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫));
(c) d(t; ω) is defined for t ≥ 0 and ω ∈ Ω such that d(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫));
(d) f(t; ω) is defined for t ≥ 0 and ω ∈ Ω such that f(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) and f′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫)).
(iii) φ(σ) is a continuous and bounded function for σ ∈ R and σφ(σ) > 0 for σ ≠ 0.
(iv) There exists a q ≥ 0 such that

Re{(1 + iλq)[1 − d̄ᵀ(iλ; ω)(iλI − A(ω))⁻¹ c̄(iλ; ω)]⁻¹ d̄ᵀ(iλ; ω)(iλI − A(ω))⁻¹ b̂(iλ; ω)} ≤ 0,

where

d̄ᵀ(iλ; ω) = ∫₀^∞ dᵀ(t; ω) e^{−iλt} dt,    b̂(iλ; ω) = ∫₀^∞ b(t; ω) e^{−iλt} dt,

and I is the identity matrix.

Then the system is stochastically absolutely stable.

PROOF  We shall prove the theorem by demonstrating that the conditions of Theorem 9.2.4 are satisfied. We have defined

ψ(t; ω) = f(0; ω)u(t; ω) + ∫₀ᵗ u(t − s; ω)h′(s; ω) ds,

where

h′(t; ω) = f′(t; ω) + ∫₀ᵗ dᵀ(τ; ω) A(ω) e^{A(ω)(t−τ)} x₀(ω) dτ + dᵀ(t; ω) x₀(ω).
From Conditions (i), (iic), and (iid), f′(t; ω), dᵀ(t; ω), e^{A(ω)t} ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)), which implies that h′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)). Also, from the manner in which u(t) is defined, it belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)). Thus ψ(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)).

Differentiating ψ(t; ω) with respect to t, we have

ψ′(t; ω) = f(0; ω)u′(t; ω) + ∫₀ᵗ u′(t − s; ω)h′(s; ω) ds.
ψ′(t; ω) belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)) because each of its terms belongs to L₁(R₊, L∞(Ω, 𝒜, 𝒫)). Hence Condition (i) of Theorem 9.2.4 is satisfied.

The stochastic kernel is defined by

k(t; ω) = ∫₀ᵗ dᵀ(τ; ω) k₃(t − τ; ω) dτ,

where

k₃(t; ω) = ∫₀ᵗ e^{A(ω)(t−s)} c(s; ω) ds,

and k₁(t; ω), k₂(t; ω) are the kernels introduced in the reduction of the system. Using Conditions (i) and (iia), we have e^{A(ω)t} ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) and b(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫)). Again, by hypothesis, c(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) and e^{A(ω)t} ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)), and their convolution product k₃(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫)). By similar reasoning it can be seen that k₂(t; ω) and k₁(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫)). Thus k(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫)).

By differentiating k(t; ω) with respect to t and applying a similar argument as before, it can be shown that k′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, 𝒫)) ∩ L₂(R₊, L∞(Ω, 𝒜, 𝒫)), which implies that Condition (ii) of Theorem 9.2.4 is satisfied.

Condition (iii) of Theorem 9.2.6 is the same as that of Theorem 9.2.5. To show part (iv) of Theorem 9.2.4, we must find the Fourier transform of k(t; ω). By a lengthy computation it can be seen that the Fourier transform is given by

k̂(iλ; ω) = ∫₀^∞ k(t; ω) e^{−iλt} dt = [1 − d̄ᵀ(iλ; ω)(iλI − A(ω))⁻¹ c̄(iλ; ω)]⁻¹ d̄ᵀ(iλ; ω)(iλI − A(ω))⁻¹ b̂(iλ; ω).
From Condition (iv) of the theorem we can write

Re{(1 + iλq) k̂(iλ; ω)} ≤ 0,

which implies that Condition (iv) of Theorem 9.2.4 is satisfied. Hence we can conclude that the system (9.0.7)–(9.0.8) admits at least one solution, say σ(t; ω) for t ≥ 0, such that

𝒫{ω : lim_{t→∞} σ(t; ω) = 0} = 1.
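All of the proofs in this section lean on the same transform identity: the Fourier transform of e^{A(ω)t} is the resolvent (iλI − A(ω))⁻¹. As an editorial illustration (not a computation from the book), the sketch below fixes one sample ω, takes the assumed stable triangular matrix A = [[−1, 1], [0, −2]], whose exponential has a simple closed form, and compares the numerically integrated transform ∫₀^∞ e^{At} e^{−iλt} dt with the resolvent.

```python
import cmath

def exp_At(t):
    """Closed form of e^{At} for the assumed A = [[-1, 1], [0, -2]]."""
    e1, e2 = cmath.exp(-t), cmath.exp(-2.0 * t)
    return [[e1, e1 - e2], [0.0, e2]]

def transform(lam, T=40.0, dt=0.01):
    """Entrywise trapezoidal approximation of the truncated transform."""
    n = int(T / dt)
    out = [[0.0 + 0j, 0.0 + 0j], [0.0 + 0j, 0.0 + 0j]]
    for step in range(n + 1):
        t = step * dt
        w = 0.5 if step in (0, n) else 1.0   # trapezoid weights
        M = exp_At(t)
        ph = cmath.exp(-1j * lam * t)
        for r in range(2):
            for c in range(2):
                out[r][c] += w * M[r][c] * ph * dt
    return out

def resolvent(lam):
    """(i*lam*I - A)^{-1}, from the inverse of an upper-triangular matrix."""
    a, d = 1j * lam + 1.0, 1j * lam + 2.0
    return [[1.0 / a, 1.0 / (a * d)], [0.0 + 0j, 1.0 / d]]

lam = 0.7
num, exact = transform(lam), resolvent(lam)
err = max(abs(num[r][c] - exact[r][c]) for r in range(2) for c in range(2))
print(err)
```

The truncation to [0, T] is harmless precisely because A is stable, which is why stochastic stability of A(ω) is assumed before the transform is ever used.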
Appendix 9.A

9.A.1  Stochastic Differential System (9.0.1)–(9.0.2)

ẋ(t; ω) = A(ω)x(t; ω) + b(ω)φ(σ(t; ω))

with

σ(t; ω) = (c(t; ω), x(t; ω))    ( ˙ = d/dt)

Figure 9.A.1.  [Block diagram: zero input, the nonlinear transfer function φ, and the linear part with an integrator.]

9.A.2  Stochastic Differential System (9.0.3)–(9.0.4)

ẋ(t; ω) = A(ω)x(t; ω) + b(ω)φ(σ(t; ω))

with

σ(t; ω) = f(t; ω) + ∫₀ᵗ (c(t − τ; ω), x(τ; ω)) dτ

Figure 9.A.2.  [Block diagram of the system (9.0.3)–(9.0.4).]

9.A.3  The Reduced Stochastic Integral Form of Systems (9.0.1)–(9.0.2) and (9.0.3)–(9.0.4)

σ(t; ω) = h(t; ω) + ∫₀ᵗ k(t − τ; ω)φ(σ(τ; ω)) dτ

Figure 9.A.3.  [Block diagram: zero input function, the nonlinearity φ, and a linear time-invariant system.]
CHAPTER X

Stochastic Integrodifferential Systems

10.0  Introduction
The object of this chapter is to study the behavior of a nonlinear stochastic integrodifferential equation of the form

ẋ(t; ω) = h[t, x(t; ω)] + ∫₀ᵗ k(t, τ; ω) f[x(τ; ω)] dτ,    t ≥ 0,    (10.0.1)

and stochastic nonlinear integrodifferential systems with a time lag of the type given by

ẋ(t; ω) = A(ω)x(t; ω) + B(ω)x(t − τ; ω) + b(ω)φ(σ(t; ω))    (10.0.2)

with

σ(t; ω) = f(t; ω) + ∫₀ᵗ cᵀ(t − s; ω) x(s; ω) ds    (10.0.3)

and

ẋ(t; ω) = A(ω)x(t; ω) + B(ω)x(t − τ; ω) + b(ω)φ(σ(t; ω)) + ∫₀ᵗ q(t − u; ω)φ(σ(u; ω)) du    (10.0.4)
with

σ(t; ω) = f(t; ω) + ∫₀ᵗ cᵀ(u; ω) x(t − u; ω) du.    (10.0.5)
With respect to the random integrodifferential equation (10.0.1), x(t; ω) is the unknown stochastic process for t ∈ R₊, h(t, x) is a scalar function of t ∈ R₊ and scalar x, k(t, τ; ω) is the stochastic kernel defined for t and τ satisfying 0 ≤ τ ≤ t < ∞, and f(x) is a scalar function of x. For the nonlinear stochastic systems (10.0.2)–(10.0.3) and (10.0.4)–(10.0.5), x(t; ω), c(t; ω), and q(t; ω) are n-dimensional vectors whose elements are random variables; A(ω) and B(ω) are n × n matrices whose elements are measurable functions; σ(t; ω) and f(t; ω) are scalar random variables; b(ω) is an n × 1 vector whose elements are measurable functions; and cᵀ(t; ω) denotes the transpose of c(t; ω).

In the first part of our presentation we shall give conditions which guarantee the existence and uniqueness of a random solution of the stochastic integrodifferential equation (10.0.1). In addition, we shall study the asymptotic behavior of the random solution in Section 10.1.1. In Section 10.1.2 we shall illustrate the usefulness of the theory with an application to differential systems with random parameters. The second part of this chapter will be concerned with the existence and stability of a random solution of the stochastic nonlinear systems (10.0.2)–(10.0.3) and (10.0.4)–(10.0.5). In Section 10.2 we shall reduce the systems with time lag to a nonlinear stochastic integral equation of the type studied in Chapter II. Knowing that a random solution of the system exists, we shall give conditions under which it is stochastically absolutely stable in Section 10.3.

From a deterministic point of view, the concept of stability has been widely used by many scientists under various model formulations.
The basic idea, however, is: “If a system has a suitable response for a class of inputs or initial conditions and if small changes in the input or in the initial conditions occur, then the new response should be close to the original one.” It is apparent from this formulation that stability is a very basic concept in a great many practical problems. In fact, the conventional design techniques in control theory are all directly or indirectly derived from the stability criteria. Among the more useful of the concepts of stability is the concept of “absolute stability,” which is simply global asymptotic stability for a nonlinearity class. Absolute stability, as we mentioned previously, was originally formulated by Lur’e and Postnikov and is connected both with engineering and mathematical considerations. From a mathematical point of view, one arrives at this concept from considerations of continuity. In engineering problems one is led to this type of stability because system nonlinearities cannot be accurately determined and may even change in time but yet the
system stability has to be preserved. In our study we shall be concerned with a stochastic view of such physical phenomena.

10.1  The Stochastic Integrodifferential Equation
The importance of stochastic integrodifferential equations of the form (10.0.1) lies in the fact that they arise in many situations. For example, equations of this kind occur in the stochastic formulation of problems in reactor dynamics, which have been investigated from the deterministic point of view by Levin and Nohel [1]. Also, they arise in the study of the growth of biological populations by Miller [1], in the theory of automatic systems resulting in delay-differential equations (Oguztoreli [1]), and in many other problems occurring in the general areas of biology, physics, and engineering.

With respect to the aims of our study we shall assume that x(t; ω) is a function of t ∈ R₊ with values in the space L₂(Ω, 𝒜, 𝒫), that is, a second-order stochastic process defined on R₊. The function h[t, x(t; ω)], the stochastic perturbing term, under certain conditions will also be a function in L₂(Ω, 𝒜, 𝒫), and f[x(t; ω)] will be considered as a function from R₊ into L₂(Ω, 𝒜, 𝒫). With respect to the stochastic kernel, we shall assume that for each t and τ such that 0 ≤ τ ≤ t < ∞, k(t, τ; ω) is essentially bounded. As we indicated before, the norm of k(t, τ; ω) in L∞(Ω, 𝒜, 𝒫) will be denoted by

|||k(t, τ; ω)||| = 𝒫-ess sup_{ω∈Ω} |k(t, τ; ω)|.

It will also be assumed that for each fixed t and τ,

|||k(s, τ; ω)||| ≤ M(t, τ)  uniformly for τ ≤ s ≤ t, 0 ≤ τ ≤ t < ∞,

where M(t, τ) > 0 is some constant depending on t and τ.

We shall make use of the integral operators T₁ and T₂ on C_c(R₊, L₂(Ω, 𝒜, 𝒫)), defined as follows:

(T₁x)(t; ω) = ∫₀ᵗ x(τ; ω) dτ    (10.1.1)

and

(T₂x)(t; ω) = ∫₀ᵗ K(t, τ; ω) x(τ; ω) dτ,    (10.1.2)

where

K(t, τ; ω) = ∫_τ^t k(s, τ; ω) ds.    (10.1.3)
These integral operators will be needed in obtaining existence and uniqueness of a random solution of Eq. (10.0.1). It is clear from the given conditions and Lemma 4.1.1 that the integral operators T₁ and T₂ are continuous mappings from C_c(R₊, L₂(Ω, 𝒜, 𝒫)) into itself. If we integrate Eq. (10.0.1) from zero to t and interchange the order of integration in the double integral, we obtain

x(t; ω) = x₀(ω) + ∫₀ᵗ h(τ, x(τ; ω)) dτ + ∫₀ᵗ K(t, τ; ω) f(x(τ; ω)) dτ,    (10.1.4)

where x₀(ω) = x(0; ω) and K(t, τ; ω) is given by Eq. (10.1.3). We now prove the following existence theorem (Padgett and Tsokos [15]).

Theorem 10.1.1  Suppose the random equation (10.0.1) satisfies the following conditions:
(i) B and D are Banach spaces stronger than C_c(R₊, L₂(Ω, 𝒜, 𝒫)), and the pair (B, D) is admissible with respect to each of the integral operators (T₁x)(t; ω) = ∫₀ᵗ x(τ; ω) dτ and (T₂x)(t; ω) = ∫₀ᵗ K(t, τ; ω) x(τ; ω) dτ, t ≥ 0, where K(t, τ; ω) is given by (10.1.3).
(ii) x(t; ω) → h(t, x(t; ω)) is an operator on S = {x(t; ω) ∈ D : ‖x(t; ω)‖_D ≤ ρ} with values in B satisfying

‖h(t, x(t; ω)) − h(t, y(t; ω))‖_B ≤ λ₁‖x(t; ω) − y(t; ω)‖_D

for x(t; ω), y(t; ω) ∈ S and λ₁ a constant.
(iii) x(t; ω) → f(x(t; ω)) is an operator on S with values in B satisfying f(0) = 0 and

‖f(x(t; ω)) − f(y(t; ω))‖_B ≤ λ₂‖x(t; ω) − y(t; ω)‖_D

for x(t; ω), y(t; ω) ∈ S and λ₂ a constant.
(iv) x₀(ω) ∈ D.

Then there exists a unique random solution of (10.1.4), x(t; ω) ∈ S, provided

λ₁K₁ + λ₂K₂ < 1  and  ‖x₀(ω)‖_D + K₁‖h(t, 0)‖_B ≤ ρ(1 − λ₁K₁ − λ₂K₂),

where K₁ and K₂ are the norms of T₁ and T₂, respectively.
PROOF  By Condition (i), T₁ and T₂ are continuous from B into D. Hence their norms K₁ and K₂ exist. Define the operator U from S into D by

(Ux)(t; ω) = x₀(ω) + ∫₀ᵗ h(τ, x(τ; ω)) dτ + ∫₀ᵗ K(t, τ; ω) f(x(τ; ω)) dτ.    (10.1.5)

We must show that U(S) ⊂ S and that U is a contraction operator on S. Then we may apply Banach's fixed-point theorem to obtain the existence of a unique random solution.

Let x(t; ω) ∈ S. Taking norms in Eq. (10.1.5), we get

‖(Ux)(t; ω)‖_D ≤ ‖x₀(ω)‖_D + K₁‖h(t, x(t; ω))‖_B + K₂‖f(x(t; ω))‖_B.

By Condition (ii),

‖h(t, x(t; ω))‖_B ≤ ‖h(t, 0)‖_B + λ₁‖x(t; ω)‖_D ≤ ‖h(t, 0)‖_B + λ₁ρ,

and by Condition (iii),

‖f(x(t; ω))‖_B ≤ λ₂‖x(t; ω)‖_D ≤ λ₂ρ.

Hence

‖(Ux)(t; ω)‖_D ≤ ‖x₀(ω)‖_D + K₁‖h(t, 0)‖_B + (λ₁K₁ + λ₂K₂)ρ ≤ ρ

by the last condition of the theorem. Thus U(S) ⊂ S.
Let y(t; ω) be another element of S. We have, since the difference of two elements of a Banach space is in the Banach space,

‖(Ux)(t; ω) − (Uy)(t; ω)‖_D ≤ K₁‖h(t, x(t; ω)) − h(t, y(t; ω))‖_B + K₂‖f(x(t; ω)) − f(y(t; ω))‖_B ≤ (λ₁K₁ + λ₂K₂)‖x(t; ω) − y(t; ω)‖_D

by Conditions (ii) and (iii). Since by hypothesis λ₁K₁ + λ₂K₂ < 1, U is a contraction operator on S. Applying Banach's fixed-point theorem, there exists a unique element of S so that (Ux)(t; ω) = x(t; ω); that is, there exists a unique random solution of the random equation (10.0.1), completing the proof.

Now, when the stochastic perturbing term h(t, x(t; ω)) is zero, we obtain a stochastic version of the integrodifferential equation studied by Levin [1] as a corollary to Theorem 10.1.1.
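The fixed point of U can be computed in practice by successive approximations, which is also how the contraction argument is usually visualized. The sketch below is an editorial illustration with assumed data, not the authors' example: for one sample ω it iterates x_{m+1} = U(x_m) on a time grid for the integrated equation (10.1.4), with x₀ = 0.5, h(t, x) = 0.2 e^{−t} sin x, K(t, τ) = 0.1 e^{−(t−τ)}, and f(x) = 0.3 tanh x, chosen so the Lipschitz constants are small enough for U to be a contraction.

```python
import math

DT, N = 0.05, 100          # grid on [0, 5]; hypothetical discretization
X0 = 0.5                   # assumed x0(omega)

def U(x):
    """One application of the integral operator (10.1.5) on the grid,
    using a left-endpoint quadrature rule."""
    out = []
    for i in range(N + 1):
        t = i * DT
        acc = X0
        for j in range(i):
            tau = j * DT
            acc += 0.2 * math.exp(-tau) * math.sin(x[j]) * DT            # T1 term
            acc += 0.1 * math.exp(-(t - tau)) * 0.3 * math.tanh(x[j]) * DT  # T2 term
        out.append(acc)
    return out

x = [0.0] * (N + 1)
gaps = []
for _ in range(12):
    x_new = U(x)
    gaps.append(max(abs(a - b) for a, b in zip(x_new, x)))
    x = x_new
print(gaps[0], gaps[-1])
```

The successive sup-norm gaps shrink roughly geometrically with ratio λ₁K₁ + λ₂K₂ < 1, mirroring the contraction estimate in the proof; after a dozen iterations the iterate is numerically indistinguishable from the fixed point.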
Corollary 10.1.2  Consider the stochastic integrodifferential equation

ẋ(t; ω) = ∫₀ᵗ k(t, τ; ω) f(x(τ; ω)) dτ    (10.1.6)

under the following conditions:

(i) B and D are stronger than C_c(R₊, L₂(Ω, 𝒜, 𝒫)), and (B, D) is admissible with respect to the operator (Tx)(t; ω) = ∫₀ᵗ K(t, τ; ω)x(τ; ω) dτ, t ≥ 0, where K(t, τ; ω) is given by Eq. (10.1.3) and behaves as described.
(ii) x(t; ω) → f(x(t; ω)) is an operator on S = {x(t; ω) ∈ D : ‖x(t; ω)‖_D ≤ ρ}
with values in B satisfying f(0) = 0 and

‖f(x(t; ω)) − f(y(t; ω))‖_B ≤ λ‖x(t; ω) − y(t; ω)‖_D

for x(t; ω), y(t; ω) ∈ S and λ a constant.
(iii) x₀(ω) ∈ D.

Then there exists a unique random solution of Eq. (10.1.6) provided λK < 1 and ‖x₀(ω)‖_D ≤ ρ(1 − λK), where K is the norm of T.

Since Eq. (10.1.6) is the equivalent of Eq. (10.1.4) with h(t, x) equal to zero, the proof follows from that of Theorem 10.1.1 with T₁ the null operator.

10.1.1  Asymptotic Behavior of the Random Solution
Using the space C_g(R₊, L₂(Ω, 𝒜, 𝒫)), we now give some results concerning the asymptotic behavior of the random solution of Eq. (10.0.1). We shall first consider the unperturbed equation (10.1.6).

Theorem 10.1.3  Suppose Eq. (10.1.6) satisfies the following conditions:

(i) ∫_τ^t |||k(s, τ; ω)||| ds ≤ A₁ e^{−α(t−τ)} for some constants A₁ > 0 and α > 0, 0 ≤ τ ≤ t.
(ii) x(t; ω) → f(x(t; ω)) satisfies, for some A₂ > 0 and β > 0, f(0) = 0, ‖f(x(t; ω))‖ ≤ A₂ e^{−βt}, t ≥ 0, and

‖f(x(t; ω)) − f(y(t; ω))‖ ≤ λ‖x(t; ω) − y(t; ω)‖

for ‖x(t; ω)‖ and ‖y(t; ω)‖ ≤ ρ e^{−βt} at each t ≥ 0 and λ a constant.
(iii) x₀(ω) = 0, 𝒫-a.e.

Then there exists a unique random solution of (10.1.6) which is stochastically exponentially stable, that is,

‖x(t; ω)‖ = {E[|x(t; ω)|²]}^{1/2} ≤ ρ e^{−βt},    t ≥ 0,
where E[·] is the mathematical expectation, provided that λ is small enough.

PROOF  It is sufficient to show that Condition (i) implies the admissibility of the pair of spaces (C_g, C_g) with respect to the operator

(Tx)(t; ω) = ∫₀ᵗ K(t, τ; ω) x(τ; ω) dτ,    t ≥ 0,    (10.1.7)

with

K(t, τ; ω) = ∫_τ^t k(s, τ; ω) ds,    0 ≤ τ ≤ t,

and that Condition (ii) is equivalent to Condition (ii) of Corollary 10.1.2, with B = D = C_g(R₊, L₂(Ω, 𝒜, 𝒫)), g(t) = e^{−βt}, β > 0.
For x(t; ω) ∈ C_g we have

‖(Tx)(t; ω)‖ ≤ ∫₀ᵗ |||K(t, τ; ω)||| ‖x(τ; ω)‖ dτ ≤ A₁ ∫₀ᵗ e^{−α(t−τ)} ‖x(τ; ω)‖ dτ

by Condition (i). But x(t; ω) ∈ C_g(R₊, L₂(Ω, 𝒜, 𝒫)) with g(t) = e^{−βt}, t ≥ 0, β > 0, and we get by definition of the norm in C_g that

‖(Tx)(t; ω)‖ ≤ A₁‖x(t; ω)‖_{C_g} ∫₀ᵗ e^{−α(t−τ)} e^{−βτ} dτ = A₁‖x(t; ω)‖_{C_g} (α − β)⁻¹ (e^{−βt} − e^{−αt}) ≤ ‖x(t; ω)‖_{C_g} [A₁/(α − β)] e^{−βt},

since α > β > 0. Hence for x(t; ω) ∈ C_g, (Tx)(t; ω) ∈ C_g; that is, (C_g, C_g) is admissible with respect to T. From Condition (ii), f(x(t; ω)) ∈ C_g(R₊, L₂(Ω, 𝒜, 𝒫)), and

sup_{t≥0} {‖f(x(t; ω)) − f(y(t; ω))‖ / e^{−βt}} ≤ λ sup_{t≥0} {‖x(t; ω) − y(t; ω)‖ / e^{−βt}}

implies that Condition (ii) of Corollary 10.1.2 holds. Therefore by Corollary 10.1.2 the conclusion follows.

If h(t, x) is not identically equal to zero, then we can still obtain the result that there is a unique random solution of Eq. (10.0.1) which is bounded in mean square for all t ∈ R₊.

Theorem 10.1.4  Assume that Eq. (10.0.1) satisfies the following conditions:

(i) ∫_τ^t |||k(s, τ; ω)||| ds ≤ A₁ for some constant A₁ > 0, 0 ≤ τ ≤ t.
(ii) x(t; ω) → h(t, x(t; ω)) satisfies, for some A₂ > 0 and β > 0,

‖h(t, x(t; ω))‖ ≤ A₂ e^{−βt}

and

‖h(t, x(t; ω)) − h(t, y(t; ω))‖ ≤ λ₂ e^{−βt} ‖x(t; ω) − y(t; ω)‖

for ‖x(t; ω)‖ and ‖y(t; ω)‖ ≤ ρ, t ≥ 0, and λ₂ a constant.
(iii) x(t; ω) → f(x(t; ω)) satisfies f(0) = 0, ‖f(x(t; ω))‖ ≤ A₃ e^{−βt}, A₃ > 0, and

‖f(x(t; ω)) − f(y(t; ω))‖ ≤ λ₃ e^{−βt} ‖x(t; ω) − y(t; ω)‖

for ‖x(t; ω)‖ and ‖y(t; ω)‖ ≤ ρ, t ≥ 0, and λ₃ a constant.
(iv) x₀(ω) ∈ C_c.

Then there exists a unique random solution of Eq. (10.0.1) satisfying

‖x(t; ω)‖ = {E[|x(t; ω)|²]}^{1/2} ≤ ρ,    t ∈ R₊

(bounded in mean square on R₊), provided that λ₂, λ₃, ‖x₀(ω)‖_{C_c}, and ‖h(t, 0)‖_{C_g} are sufficiently small.
PROOF  It will suffice to show that the pair of spaces (C_g(R₊, L₂(Ω, 𝒜, 𝒫)), C_c(R₊, L₂(Ω, 𝒜, 𝒫))) is admissible with respect to the integral operators defined by Eqs. (10.1.1)–(10.1.3) under Condition (i). Let x(t; ω) ∈ C_g(R₊, L₂(Ω, 𝒜, 𝒫)). Then from Eq. (10.1.1) we have

‖(T₁x)(t; ω)‖ ≤ ∫₀ᵗ ‖x(τ; ω)‖ dτ ≤ ‖x(t; ω)‖_{C_g} ∫₀ᵗ e^{−βτ} dτ ≤ ‖x(t; ω)‖_{C_g}(1/β) < ∞,

by definition of the norm in C_g(R₊, L₂(Ω, 𝒜, 𝒫)). Hence (T₁x)(t; ω) ∈ C_c(R₊, L₂(Ω, 𝒜, 𝒫)), and the pair (C_g, C_c) is admissible with respect to T₁. Now from Eqs. (10.1.2) and (10.1.3), for x(t; ω) ∈ C_g(R₊, L₂(Ω, 𝒜, 𝒫)) we obtain

‖(T₂x)(t; ω)‖ ≤ A₁ ∫₀ᵗ [‖x(τ; ω)‖ / e^{−βτ}] e^{−βτ} dτ ≤ A₁‖x(t; ω)‖_{C_g} ∫₀ᵗ e^{−βτ} dτ ≤ A₁‖x(t; ω)‖_{C_g}(1/β) < ∞

from Condition (i). Thus (T₂x)(t; ω) ∈ C_c(R₊, L₂(Ω, 𝒜, 𝒫)), and the pair (C_g, C_c) is admissible with respect to T₂.
Therefore the conditions of Theorem 10.1.1 hold with B = C_g, g(t) = e^{−βt}, β > 0, and D = C_c, and there exists a unique random solution of (10.0.1), x(t; ω), bounded in mean square by ρ for all t ∈ R₊.
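The admissibility estimate for T₁ used in the proof, namely that a function decaying like g(t) = e^{−βt} integrates to something bounded by 1/β, is easy to confirm numerically. The following sketch is an editorial illustration with an assumed β and sample path, not a computation from the book.

```python
import math

BETA, DT = 0.5, 0.01            # assumed decay rate beta and step size
ts = [i * DT for i in range(4001)]          # grid on [0, 40]
x = [math.exp(-BETA * t) for t in ts]       # ||x(t)|| = e^{-beta t}, C_g norm 1

# (T1 x)(t) = \int_0^t x(tau) dtau, accumulated with a right-endpoint rule
# (an underestimate for a decreasing integrand, so the 1/beta bound is safe).
T1x = [0.0]
running = 0.0
for v in x[1:]:
    running += v * DT
    T1x.append(running)

print(max(T1x), 1.0 / BETA)
```

The running integral climbs toward, but never exceeds, ‖x‖_{C_g}/β = 2, which is exactly the bound that places (T₁x)(t; ω) in the space C_c of bounded continuous functions.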
10.1.2  Application to a Stochastic Differential System

Consider the following nonlinear differential system with random parameters:

ẋ(t; ω) = A(ω)x(t; ω) + b(ω)φ(σ(t; ω)),    (10.1.8)

σ̇(t; ω) = cᵀ(t; ω)x(t; ω),    (10.1.9)

where A(ω) is an n × n matrix of measurable functions, x(t; ω) and c(t; ω) are n × 1 vectors of random variables for each t ∈ R₊, b(ω) is an n × 1 vector of measurable functions, φ(σ) is a scalar function, σ(t; ω) is a scalar random variable for each t ∈ R₊, and T denotes the transpose of a matrix. Note that Eq. (10.1.9) can be written as

σ(t; ω) = σ(0; ω) + ∫₀ᵗ cᵀ(s; ω)x(s; ω) ds,

which is similar to a system studied in Chapter IX.

The system (10.1.8)–(10.1.9) may be reduced to a stochastic integrodifferential equation of the form (10.0.1). We may write (10.1.8) as

x(t; ω) = e^{A(ω)t} x₀(ω) + ∫₀ᵗ e^{A(ω)(t−τ)} b(ω)φ(σ(τ; ω)) dτ.

Substituting this expression for x(t; ω) in (10.1.9), we obtain

σ̇(t; ω) = cᵀ(t; ω) e^{A(ω)t} x₀(ω) + ∫₀ᵗ cᵀ(t; ω) e^{A(ω)(t−τ)} b(ω)φ(σ(τ; ω)) dτ.

Assume that |||cᵀ(t; ω)||| ≤ K₁ for all t ≥ 0, with K₁ ≥ 0 a constant. Also let x₀(ω) ∈ C_c, φ(0) = 0, and b(ω) ∈ L₂(Ω, 𝒜, 𝒫). If we assume that the matrix A(ω) is stochastically stable, that is, there exists an α > 0 such that

𝒫{ω : Re λ_k(ω) < −α, k = 1, 2, …, n} = 1,

where λ_k(ω), k = 1, 2, …, n, are the characteristic roots of the matrix, then it has been shown by Morozan [3] that

|||e^{A(ω)t}||| ≤ K₂ e^{−αt}

for some constant K₂ > 0. We also let φ(σ(t; ω)) be in the space C_g(R₊, L₂(Ω, 𝒜, 𝒫)) with g(t) = e^{−αt}, t ≥ 0, and

|φ(σ₁(t; ω)) − φ(σ₂(t; ω))| ≤ λ e^{−αt} |σ₁(t; ω) − σ₂(t; ω)|.
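The stochastic stability assumption and the resulting Morozan-type bound can be probed by simple Monte Carlo sampling. The sketch below is an editorial illustration with a hypothetical random matrix, not an example from the book: A(ω) is taken upper triangular so its characteristic roots are the diagonal entries, which are sampled strictly below −α, and the max-norm of e^{A(ω)t} is compared with K₂e^{−αt} for a crude constant K₂.

```python
import math, random

random.seed(0)
ALPHA = 1.0

def sample_A():
    """One sample omega of an assumed 2x2 upper-triangular A(omega)."""
    l1 = -1.1 - abs(random.gauss(0, 1))   # both roots strictly below -ALPHA
    l2 = -1.2 - abs(random.gauss(0, 1))
    return l1, l2, random.uniform(-1, 1)  # (diagonal, diagonal, off-diagonal)

def exp_At_norm(l1, l2, b, t):
    """Max-norm of e^{At} for triangular A = [[l1, b], [0, l2]], using the
    closed-form off-diagonal entry b (e^{l1 t} - e^{l2 t}) / (l1 - l2)."""
    e1, e2 = math.exp(l1 * t), math.exp(l2 * t)
    off = b * (e1 - e2) / (l1 - l2)
    return max(abs(e1), abs(off), abs(e2))

stable = all(max(*sample_A()[:2]) < -ALPHA for _ in range(1000))
K2 = 5.0   # crude constant; any sufficiently large K2 serves in the bound
decays = all(
    exp_At_norm(*sample_A(), t) <= K2 * math.exp(-ALPHA * t)
    for _ in range(200) for t in (0.0, 1.0, 5.0, 10.0)
)
print(stable, decays)
```

By the mean value theorem the off-diagonal entry is bounded by |b| t e^{−1.1t}, so K₂ = 5 suffices here; the sampling merely confirms what the triangular structure guarantees almost surely.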
Let
h(t, σ(t; ω)) = cᵀ(t; ω) e^{A(ω)t} x₀(ω).

Then

‖h(t, σ(t; ω))‖ ≤ |||cᵀ(t; ω)||| K₂ e^{−αt} ‖x₀(ω)‖ ≤ K₁K₂ e^{−αt} Z,

where Z > 0 is a constant, since x₀(ω) ∈ C_c. Thus h(t, σ(t; ω)) ∈ C_g(R₊, L₂(Ω, 𝒜, 𝒫)) by definition. Also,

‖h(t, σ₁(t; ω)) − h(t, σ₂(t; ω))‖ = 0,

so that it satisfies a Lipschitz condition.

Now, by the assumptions we have made on cᵀ(t; ω), b(ω), and A(ω), we have

k(s, τ; ω) = cᵀ(s; ω) e^{A(ω)(s−τ)} b(ω),

satisfying

∫_τ^t |||k(s, τ; ω)||| ds ≤ ∫_τ^t |||cᵀ(s; ω)||| K₂ e^{−α(s−τ)} |||b(ω)||| ds ≤ K₁K₂ e^{ατ} |||b(ω)||| ∫_τ^t e^{−αs} ds = K₁K₂ |||b(ω)||| (1/α)[1 − e^{−α(t−τ)}] ≤ K₁K₂ |||b(ω)||| (1/α).

Therefore all conditions of Theorem 10.1.4 are satisfied, and there exists a unique random solution of the system (10.1.8)–(10.1.9) which is bounded in mean square on R₊.

10.2  Reduction of the Stochastic Nonlinear Integrodifferential Systems with Time Lag

10.2.1  The Integrodifferential System (10.0.2)–(10.0.3)
The random system with time lag (10.0.2) can be written as a stochastic integral equation of the form

x(t; ω) = X(t; ω)x₀(ω) + ∫_{−τ}^0 X(t − τ − u; ω)B(ω)x(u; ω) du + ∫₀ᵗ X(t − u; ω)b(ω)φ(σ(u; ω)) du,    t ≥ 0,    (10.2.1)
where x₀(ω) = x(0; ω) and X(t; ω) is the stochastic fundamental matrix solution of the homogeneous system

x′(t; ω) = A(ω)x(t; ω) + B(ω)x(t − τ; ω),    t > 0,

with the initial condition X(0; ω) = I, the identity matrix, and X(t; ω) = 0 for t < 0. Let

ψ(t; ω) = X(t; ω)x₀(ω) + ∫_{−τ}^0 X(t − τ − u; ω)B(ω)x(u; ω) du.

Then the random equation (10.2.1) becomes

x(t; ω) = ψ(t; ω) + ∫₀ᵗ X(t − u; ω)b(ω)φ(σ(u; ω)) du.    (10.2.2)

Substituting Eq. (10.2.2) into (10.0.3), we get

σ(t; ω) = f(t; ω) + ∫₀ᵗ cᵀ(t − u; ω)ψ(u; ω) du + ∫₀ᵗ cᵀ(t − u; ω) ∫₀ᵘ X(u − s; ω)b(ω)φ(σ(s; ω)) ds du.    (10.2.3)

Let h(t; ω) = f(t; ω) + ∫₀ᵗ cᵀ(t − u; ω)ψ(u; ω) du. Then Eq. (10.2.3) becomes

σ(t; ω) = h(t; ω) + ∫₀ᵗ cᵀ(t − u; ω) ∫₀ᵘ X(u − s; ω)b(ω)φ(σ(s; ω)) ds du.

Using the property of the convolution integral, we have

∫₀ᵗ cᵀ(t − u; ω) ∫₀ᵘ X(u − s; ω)b(ω)φ(σ(s; ω)) ds du
    = ∫₀ᵗ cᵀ(u; ω) ∫₀^{t−u} X(t − u − s; ω)b(ω)φ(σ(s; ω)) ds du
    = ∫₀ᵗ ∫₀^{t−u} cᵀ(u; ω)X(t − u − s; ω)b(ω)φ(σ(s; ω)) ds du
    = ∫₀ᵗ ∫₀^{t−s} cᵀ(u; ω)X(t − u − s; ω)b(ω)φ(σ(s; ω)) du ds.    (10.2.4)

Define

k(t; ω) = ∫₀ᵗ cᵀ(u; ω)X(t − u; ω)b(ω) du.
10.2
REDUCTION OF SYSTEMS WITH TIME LAG
253
Then
which is the same as Eq. (10.2.4). Therefore the equation for the error signal in the presence of a random parameter o ( t ; o)can be written as ~ ( O t ;) = h(t ;O )
+
k(t - u ; L O ) ~ ( O ( U;w ) )du.
(10.2.5)
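For a fixed ω, Eq. (10.2.5), σ(t; ω) = h(t; ω) + ∫₀ᵗ k(t − u; ω)φ(σ(u; ω)) du, is a nonlinear Volterra integral equation of convolution type, so it can be approximated on a grid by standard quadrature. The following Python sketch is purely illustrative: the choices h(t) = e^{−t}, k(t) = e^{−t}, and φ(σ) = tanh σ are hypothetical sample data, not taken from the text. It steps the trapezoidal rule forward in t, handling the implicit endpoint with a few fixed-point corrections.

```python
import numpy as np

# Illustrative sample data for one fixed ω (hypothetical, not from the text):
# h(t) = e^{-t}, k(t) = e^{-t}, phi(sigma) = tanh(sigma).
def solve_volterra(h, k, phi, T=5.0, n=500):
    """Approximate sigma(t) = h(t) + int_0^t k(t-u) phi(sigma(u)) du
    on a uniform grid with the trapezoidal rule, stepping forward in t."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    sig = np.empty(n + 1)
    sig[0] = h(0.0)                       # at t = 0 the integral term vanishes
    for i in range(1, n + 1):
        # fixed part of the trapezoid sum: nodes u = t_0, ..., t_{i-1}
        interior = k(t[i] - t[1:i]) @ phi(sig[1:i])
        conv = dt * (0.5 * k(t[i]) * phi(sig[0]) + interior)
        s = h(t[i]) + conv                # predictor ignoring the implicit endpoint
        for _ in range(3):                # fixed-point corrections for the endpoint term
            s = h(t[i]) + conv + 0.5 * dt * k(0.0) * phi(s)
        sig[i] = s
    return t, sig

t, sig = solve_volterra(lambda s: np.exp(-s), lambda s: np.exp(-s), np.tanh)
```

For these particular data one can check by differentiating that σ satisfies σ′ = tanh σ − σ with σ(0) = 1, so the computed values should decrease monotonically from 1 toward zero.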
Thus we know that there exists a unique random solution to Eq. (10.2.5) under the conditions given in Chapter II.

10.2.2 The Random Integrodifferential System (10.0.4)-(10.0.5)
The random equation (10.0.4) can be written as

  x(t; ω) = X(t; ω)x₀(ω) + ∫_{−τ}^{0} X(t − τ − u; ω)B(ω)x(u; ω) du
    + ∫_0^t X(t − u; ω)b(ω)φ(σ(u; ω)) du
    + ∫_0^t X(t − u; ω) ∫_0^u η(u − s; ω)φ(σ(s; ω)) ds du,  (10.2.6)

where x₀(ω) and X(t; ω) behave as defined earlier. We shall write

  ψ(t; ω) = X(t; ω)x₀(ω) + ∫_{−τ}^{0} X(t − τ − u; ω)B(ω)x(u; ω) du.

Then the stochastic integral system (10.2.6) becomes

  x(t; ω) = ψ(t; ω) + ∫_0^t X(t − u; ω)b(ω)φ(σ(u; ω)) du
    + ∫_0^t X(t − u; ω) ∫_0^u η(u − s; ω)φ(σ(s; ω)) ds du.  (10.2.7)
Applying the well-known result that the convolution product commutes, we can reduce part of Eq. (10.2.7) as follows:

  ∫_0^t X(t − u; ω) ∫_0^u η(u − s; ω)φ(σ(s; ω)) ds du
    = ∫_0^t X(u; ω) ∫_0^{t−u} η(t − u − s; ω)φ(σ(s; ω)) ds du
    = ∫_0^t ∫_0^{t−u} X(u; ω)η(t − u − s; ω)φ(σ(s; ω)) ds du
    = ∫_0^t ∫_0^{t−s} X(u; ω)η(t − u − s; ω)φ(σ(s; ω)) du ds.  (10.2.8)

Let

  k₁(t; ω) = ∫_0^t X(t − s; ω)η(s; ω) ds.

Then Eq. (10.2.8) can be written as follows:

  ∫_0^t [∫_0^{t−s} X(t − s − u; ω)η(u; ω) du] φ(σ(s; ω)) ds = ∫_0^t k₁(t − s; ω)φ(σ(s; ω)) ds.

Therefore Eq. (10.2.7) becomes

  x(t; ω) = ψ(t; ω) + ∫_0^t X(t − u; ω)b(ω)φ(σ(u; ω)) du + ∫_0^t k₁(t − u; ω)φ(σ(u; ω)) du

or

  x(t; ω) = ψ(t; ω) + ∫_0^t k₂(t − u; ω)φ(σ(u; ω)) du,  (10.2.9)

where

  k₂(t; ω) = X(t; ω)b(ω) + k₁(t; ω).
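The interchange above rests on the commutativity of the convolution product, which is easy to sanity-check numerically for a fixed ω. In the sketch below the sample functions X(t) = e^{−2t} and η(t) = t e^{−t} are hypothetical choices, not from the text; the two orderings of the discrete convolution agree to rounding error.

```python
import numpy as np

# Discrete check that the convolution product commutes: (X * eta)(t) = (eta * X)(t).
dt = 0.01
t = np.arange(0.0, 4.0, dt)
X = np.exp(-2.0 * t)            # hypothetical sample for the fundamental solution
eta = t * np.exp(-t)            # hypothetical sample kernel

conv_xe = np.convolve(X, eta)[: t.size] * dt   # approximates int_0^t X(t-s) eta(s) ds
conv_ex = np.convolve(eta, X)[: t.size] * dt   # approximates int_0^t eta(t-s) X(s) ds
print("max difference:", np.abs(conv_xe - conv_ex).max())
```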
Substituting the stochastic equation (10.2.9) into Eq. (10.0.5), we have

  σ(t; ω) = f(t; ω) + ∫_0^t cᵀ(t − u; ω)ψ(u; ω) du
    + ∫_0^t cᵀ(t − u; ω) ∫_0^u k₂(u − s; ω)φ(σ(s; ω)) ds du.  (10.2.10)

Define

  h(t; ω) = f(t; ω) + ∫_0^t cᵀ(t − u; ω)ψ(u; ω) du.

Then random equation (10.2.10) can be written as

  σ(t; ω) = h(t; ω) + ∫_0^t ∫_0^{t−u} cᵀ(u; ω)k₂(t − u − s; ω)φ(σ(s; ω)) ds du.  (10.2.11)

Let

  k(t; ω) = ∫_0^t cᵀ(u; ω)k₂(t − u; ω) du.

Then Eq. (10.2.11) becomes

  σ(t; ω) = h(t; ω) + ∫_0^t k(t − u; ω)φ(σ(u; ω)) du,

which is the same as the nonlinear stochastic integral equation (10.2.5) that we obtained by reducing the stochastic system with time lag (10.0.2)-(10.0.3).

10.3 Stochastic Absolute Stability of the Systems
The following theorems give the conditions under which the stochastic differential systems with time lag (10.0.2)-(10.0.3) and (10.0.4)-(10.0.5) are stochastically absolutely stable.

Theorem 10.3.1 Suppose that the stochastic system with time lag (10.0.2)-(10.0.3) satisfies the following conditions:

(i) The equation det{A(ω) + e^{−λτ}B(ω) − λI} = 0 has all its roots in the semiplane Re λ ≤ −α < 0.
(ii) (a) The random vector function c(t; ω) is defined for all t ≥ 0, ω ∈ Ω, and is such that c(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P));
  (b) f(t; ω) is defined for t ≥ 0, ω ∈ Ω, and is such that f(t; ω), f′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)).
(iii) φ(σ) is continuous and bounded for σ ∈ R, R being the real line, and σφ(σ) > 0 for σ ≠ 0.
(iv) There exists a q ≥ 0 such that

  Re{(1 + iλq)c̃ᵀ(iλ; ω)[iλI − A(ω) − B(ω)e^{−iλτ}]⁻¹ b(ω)} ≤ 0,

where c̃ᵀ(iλ; ω) = ∫_0^∞ cᵀ(t; ω)e^{−iλt} dt.

Then the stochastic system (10.0.2)-(10.0.3) is stochastically absolutely stable.
PROOF We shall prove the theorem by showing that the assumptions of Theorem 9.2.1 are satisfied. By definition,

  h(t; ω) = f(t; ω) + ∫_0^t cᵀ(t − u; ω)ψ(u; ω) du = f(t; ω) + ∫_0^t cᵀ(u; ω)ψ(t − u; ω) du.

From Condition (ii) we know that the functions f(t; ω), c(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)). The definition of ψ(t; ω), together with assumption (i) of the theorem, implies that ψ(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)). Thus the convolution product of cᵀ(t; ω) and ψ(t; ω) also belongs to L₁(R₊, L∞(Ω, 𝒜, P)). Hence

  h(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)).

Differentiating h(t; ω) with respect to t, we have

  h′(t; ω) = f′(t; ω) + ∫_0^t cᵀ(u; ω)ψ′(t − u; ω) du.  (10.3.1)

Each term of Eq. (10.3.1) belongs to L₁(R₊, L∞(Ω, 𝒜, P)). Thus h′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)), and assumption (i) of Theorem 9.2.1 is satisfied.

We shall consider the stochastic kernel

  k(t; ω) = ∫_0^t cᵀ(s; ω)X(t − s; ω)b(ω) ds

and its derivative k′(t; ω). It is given that

  c(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P)),

and from assumption (i) we conclude that X(t; ω) belongs to L₁(R₊, L∞(Ω, 𝒜, P)). Using the fact that the convolution product of cᵀ(s; ω) and X(t; ω) belongs to L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P)), we have

  k(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P)).

Applying a similar argument, it can easily be seen that

  k′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P)),

and Condition (ii) of Theorem 9.2.1 is satisfied. Assumption (iii) of this theorem is the same as (iii) of Theorem 9.2.1. Knowing the fact that the Fourier transform of the convolution product is equal to the product of the Fourier transforms, we have

  k̂(iλ; ω) = c̃ᵀ(iλ; ω)[iλI − A(ω) − B(ω)e^{−iλτ}]⁻¹ b(ω).

From Condition (iv) of the theorem, we obtain

  Re{(1 + iλq)c̃ᵀ(iλ; ω)[iλI − A(ω) − B(ω)e^{−iλτ}]⁻¹ b(ω)} ≤ 0

or

  Re{(1 + iλq)k̂(iλ; ω)} ≤ 0,

which is assumption (iv) of Theorem 9.2.1. Therefore, since Theorem 9.2.1 holds, we conclude that the stochastic system (10.0.2)-(10.0.3) is stochastically absolutely stable.
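For a fixed ω, condition (iv) is a Popov-type frequency condition that can be checked numerically on a grid of λ. The sketch below uses hypothetical scalar data not taken from the text: A = −2, B = 0.5, b = −1, τ = 1, c(t) = e^{−t} (so that c̃(iλ) = 1/(1 + iλ)), and Popov parameter q = 1.

```python
import numpy as np

# Hypothetical scalar data for one fixed omega (not from the text):
A, B, b, tau, q = -2.0, 0.5, -1.0, 1.0, 1.0

def popov_lhs(lam):
    """Re{(1 + i*lam*q) c~(i*lam) [i*lam - A - B e^{-i*lam*tau}]^{-1} b} for real lam."""
    s = 1j * lam
    c_hat = 1.0 / (1.0 + s)                      # Fourier transform of c(t) = e^{-t}
    G = b / (s - A - B * np.exp(-s * tau))       # scalar resolvent of the delay system
    return np.real((1.0 + q * s) * c_hat * G)

lams = np.linspace(-50.0, 50.0, 20001)
vals = popov_lhs(lams)
print("max of the Popov expression over the grid:", vals.max())
```

With q = 1 the factor (1 + iλ) cancels c̃(iλ) exactly, leaving Re{−1/(iλ + 2 − 0.5e^{−iλ})}, which is negative for every real λ because Re(iλ + 2 − 0.5e^{−iλ}) = 2 − 0.5 cos λ ≥ 1.5 > 0; the grid maximum is therefore strictly below zero, as condition (iv) requires.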
Theorem 10.3.2 Assume that the random system with time lag (10.0.4)-(10.0.5) satisfies the following conditions:

(i) The equation det{A(ω) + e^{−λτ}B(ω) − λI} = 0 has all its roots in the semiplane Re λ ≤ −α < 0.
(ii) (a) c(t; ω) is defined for t ≥ 0, ω ∈ Ω, and is such that c(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P));
  (b) f(t; ω) is defined for t ≥ 0, ω ∈ Ω, and is such that f(t; ω), f′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P));
  (c) η(t; ω) is defined for t ≥ 0, ω ∈ Ω, and is such that η(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P)).
(iii) φ(σ) is continuous and bounded for σ ∈ R, R being the real line, and σφ(σ) > 0 for σ ≠ 0.
(iv) There exists a q ≥ 0 such that

  Re{(1 + iλq)c̃ᵀ(iλ; ω)[iλI − A(ω) − B(ω)e^{−iλτ}]⁻¹(b(ω) + η̃(iλ; ω))} ≤ 0,

where c̃ᵀ(iλ; ω) = ∫_0^∞ cᵀ(t; ω)e^{−iλt} dt and η̃(iλ; ω) = ∫_0^∞ η(t; ω)e^{−iλt} dt.

Then the stochastic system of equations (10.0.4)-(10.0.5) is stochastically absolutely stable.
PROOF This theorem will also be proved by demonstrating that the conditions of Theorem 9.2.1 are satisfied. By definition,

  h(t; ω) = f(t; ω) + ∫_0^t cᵀ(t − u; ω)ψ(u; ω) du,

where

  ψ(t; ω) = X(t; ω)x₀(ω) + ∫_{−τ}^{0} X(t − τ − u; ω)B(ω)x(u; ω) du.

From the hypothesis of the theorem, we have f(t; ω), c(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)). Also, ψ(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)), because X(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)) from Condition (i), and x(t; ω) is a continuous function for t ∈ [−τ, 0] and a.e. with respect to ω. Differentiating h(t; ω) with respect to t, we have

  h′(t; ω) = f′(t; ω) + ∫_0^t cᵀ(u; ω)ψ′(t − u; ω) du.

Using a similar argument, it is easy to see that

  h(t; ω), h′(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P)).

Thus assumption (i) of Theorem 9.2.1 holds. We have defined the stochastic kernel k(t; ω) as follows:

  k(t; ω) = ∫_0^t cᵀ(u; ω)k₂(t − u; ω) du,

with

  k₂(t; ω) = X(t; ω)b(ω) + k₁(t; ω)

and

  k₁(t; ω) = ∫_0^t X(t − s; ω)η(s; ω) ds.

It is given that X(t; ω), c(t; ω), and η(t; ω) belong to L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P)). Also, k₂(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P)), because each of its terms belongs to L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P)). Using a similar argument, it can be seen that

  k(t; ω) ∈ L₁(R₊, L∞(Ω, 𝒜, P)) ∩ L₂(R₊, L∞(Ω, 𝒜, P)).

Thus Condition (ii) of Theorem 9.2.1 is satisfied. Assumption (iii) of Theorem 9.2.1 is the same as Condition (iii) of this theorem. It can be shown that the Fourier transform of the stochastic kernel k(t; ω) is given by

  k̂(iλ; ω) = c̃ᵀ(iλ; ω)[iλI − A(ω) − B(ω)e^{−iλτ}]⁻¹[b(ω) + η̃(iλ; ω)].

From hypothesis (iv) of the theorem, we have

  Re{(1 + iλq)k̂(iλ; ω)} ≤ 0,

which satisfies Condition (iv) of Theorem 9.2.1. Therefore we conclude that the stochastic system with time lag (10.0.4)-(10.0.5) is stochastically absolutely stable.
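Condition (i) in both theorems restricts the roots of a transcendental characteristic equation, which is awkward to solve directly. For a fixed ω a quick empirical surrogate is to simulate the homogeneous delay system itself. The sketch below uses hypothetical scalar values A = −2, B = 0.5, τ = 1 (chosen so that |B| < |A|, which gives stability for every delay) and integrates x′(t) = A x(t) + B x(t − τ) by forward Euler.

```python
import numpy as np

# Hypothetical scalar data (not from the text): A = -2, B = 0.5, tau = 1.
A, B, tau = -2.0, 0.5, 1.0
dt = 0.001
lag = int(tau / dt)                   # number of grid points spanning one delay
steps = int(20.0 / dt)                # integrate up to t = 20

x = np.ones(lag + steps + 1)          # constant initial function x(t) = 1 on [-tau, 0]
for n in range(lag, lag + steps):
    # forward Euler for x'(t) = A x(t) + B x(t - tau)
    x[n + 1] = x[n] + dt * (A * x[n] + B * x[n - lag])

print("|x(10)| =", abs(x[lag + steps // 2]), "  |x(20)| =", abs(x[lag + steps]))
```

For these values the rightmost root of λ = −2 + 0.5e^{−λ} is real and approximately −0.84, so the computed solution decays roughly like e^{−0.84t}, consistent with condition (i) holding with α ≈ 0.84.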
Deterministic Anselone, P. M. [I] Editor, Nonlinear Integral Equations. Univ. of Wisconsin Press, Madison, Wisconsin 1964. Bachman, G., and Narici, L. [l] Functionnl Analysis. Academic Press, New York, 1966. Barbalat, I. [ 11 Systtmes d’tquations differentielles d’oscillations non-lintaires, Rev. Math. Pures Appl. 4 (1959), 267-270. Beckenbach, E. F., and Bellman, R. [I] Inequalities (2nd ed.). Springer-Verlag, Berlin and New York, 1965. Bellman, R. [I] Introduction to the Mathematical Theory of Control Processes, Vol. 1. Academic Press, New York, 1961. Bellman, R., Jacquez, J., and Kalaba, R. [ 11 Mathematical models of chemotherapy, Proc. Berkeley Symp. Math. Statist. Proh., 4th Vol. IV, Univ. of California Press, Berkeley, California, 1961, 57-66. [2] Some mathematical aspects of chemotherapy. I : One-organ models, Bull. Math. Biophys. 22 (1960), 181-198. [3] Some mathematical aspects of chemotherapy. 11: The distribution of a drug in the body, Bull. Math. Biophys. 22 (1960), 309-322. 260
DETERMINISTIC
261
Bellman, R., Jacquez, J., Kalaba, R., and Kotkin, B. [I] A Mathematical Model of Drug Distribution in the Body: Implications for Cancer Chemotherapy. The RAND Corp., RM-3463-NIH, 1963. Bellman, R., and Kotkin, B. [I] A Numerical Approach to the Convolution Equations of a Mathematical Model of Chemotherapy. The RAND Corp., RM-3716-N1H, 1963. [2] Chemotherapy, Convolution Equations, and Differential Approximations. The RAND Corp., P-3005, 1964. Bihari, L. [I] Notes on a nonlinear integral equation, Studia Sci. Math. Hungar. 2 (1967), 1-6. Bochner, S. [I] Lectures on Fourier integrals, Ann. Math. Stud. No. 42 (1959), 217-218. Box, G. E. P. [ I ] Fitting empirical data, Ann. N. Y. Acad. Si.86 (1960). Branson, H. [I] The kinetics of reactions in biological systems, Arch. Biochem. Biophys. 36 (1952). [2] The use of isotopes in an integral equation description of metabolizing systems, Cold Spring Harbor Symp. Quant. Biol. XI11 (l948), 135-1 42. Browder, F. E., de Figueiredo, D. G., and Gupta, C. P. [I] Maximal monotone operators and nonlinear integral equations of the Hammerstein type, Bull. Amer. Math. SOC.76 (1970), 700-705. Chandrasekhar, S. [I] Radiative Transfer. Dover, New York, 1960. Cooper, I., and Jacquez, J. [I] A Mathematical Model of Chemotherapy Assuming Mixing in the Large Blood Vessels. The RAND Corp., RM-3712-NIH, 1964. Corduneanu, A. [I] The stability of the solution of equations of Volterra type, Bul. Inst. Politehn. Iasi 15(19) (1969), fasc. 1-2, 69-73. Corduneanu, C. [I] Probltmes globaux dans le thtorie des equations integrales de Volterra, Ann. Mat. Pura Appl. 67 (1965), 349-363. [2] Some perturbation problems in the theory of integral equations, Math. Systems Theory 1 (1967), 143-155. [3] Stability of some linear time-varying systems, Math. Systems Theory 3 (1969), 151-1 55. [4] Admissibility with Respect to an Integral Operator and Applications, Math. Tech. Rep., Univ. of Rhode Island, 1968. [5] Nonlinear perturbed integral equations, Rev. 
Roumaine Math. Pures Appl. 13 (1968), 1279-1284. [6j Sur certaines equations fonctionnelles de Volterra, Funkcial. Ekuac. 9 (1966), 1 IS127. [q Sur une tquation integrale non-lineaire, An. Sti. Univ. “AI. I. Cuza” Iasi Sect. I . (N.S.) 9 (1963), 369-375. Davis, H. T. [I] Introduction to Nonlinear Differential and Integral Equations. Dover, New York, 1962. DePree, J. D. [l] Reduction of linear integral equations with difference kernels to nonlinear integral equations, J. Math. Anal. Appl. 26 (1969), 539-544. Desoer, C. A,, and Tomasian, A. J. [I] A note on zero-state stability of linear systems, Proc. Ann. Allerton Conf. Circuit and Systems Theory, 1st 1963, 5&52.
262
BIBLIOGRAPHY
Dunford. N., and Schwartz. J. [ I ] Linear Operators, Part I. Wiley (Interscience). New York, 1958. Friedman, A. [ I ] Monotonicity of solutions of Volterra integral equations in Banach space, Trans. Amer. Math. Soc. 138 (1969), 129-148. Friedman, A,, and Shinbrot, M. [I] Volterra integral equations in Banach space, Trans. Amer. Math. Soc. 126 (1967). I3 1-1 79. Gavalas, G. R. [I] Nonlinear Differential Equations of Chemically Reacting Systems. Springer-Verlag, Berlin and New York, 1968. Goldberg, S. [I] Unbounded Linear Operators: Theory and Applications. McGraw-Hill, New York, 1966. Gol’dengerSel’, E. I. [I] Discrete analog of an integral equation of Volterra type on the half-axis, Uspehi Mat. Nauk 21 (1966), no. 2 (128), 223-225. Green, C. D. [ I ] Integral Equation Methods. Nelson, London, 1969. Grossman, R. I., and Miller, R. K. [I] Perturbation theory for Volterra integro-differential systems, J . Differential Equations 8 (1970), 457474. Grossman, S. 1. [I] Existence and stability of a class of nonlinear Volterra integral equations, Trans. Amer. Math. SOC.150 (1970), 541-556. Gupta. C. P. [ I ] On existence of solutions of nonlinear integral equations of Hammerstein type in a Banach space, J . Math. Anal. Appl. 32 (1970), 617-620. Halanay, A. [I] Differential Equations-Stability, Oscillations, Time Logs. Academic Press, New Y ork, 1966. Hannsgen, K. B. [I] On a nonlinear Volterra equation, Michigan Math. J. 16 (1969), 365-376. Heard, M. L. [ I ] On asymptotic behavior and periodic solutions of a certain Volterra integral equation, J. Dtferenfial Equations 6 (1969), 172-186. [2] On a nonlinear integro-differential equation, J. Math. Anal. Appl. 26 (1969), 170-189. Hearon, J. [ 11 A note on the integral equation description of metabolizing systems. Bull. Math. Biophys. 15 (1953). Hewitt, E., and Stromberg, K. [I] Real and Abstract Analysis. Springer-Verlag. Berlin and New York, 1965. Hildebrand, F. B. [I] Finite-DifferenceEquations and Simulation. 
Prentice-Hall, Englewood Cliffs, New Jersey, 1968. Horvath, John [ I ] Topological Vector Spaces and Distributions. Addison-Wesley, Reading, Massachusetts, 1966. Ize, A. F. [I] On an asymptotic property of a Volterra integral equation, Proc. Amer. Math. SOC.28 (1971), 93-99.
DETERMINISTIC
263
John, F. [l] Integral equations, Mathematics Applied to Physics, pp. 316-347. Springer Publ., New York, 1970. Kartsatos. A. G. On the relationship between a nonlinear system and its nonlinear perturbation, J . Differential Equations 11 (1 972). 582-591. Convergence in perturbed nonlinear systems, T6hoku Math. J . 24 (1972), 539-546. Positivz solutions to linear problems for nonlinear systems, J . Differential Equations (in press). Bounded solutions to perturbed nonlinear systems and asymptotic relationships, J . Fur die Reine und Angewandtr Math. (in press). Kartsatos, A. G., and Michaelides, G. J . [I] Existence of convergent solutions of quasi-linear systems and asymptotic equivalence, J . Dijferential Equations 13 ( 1973). 48 1-489. Kermack, W. O., and McKendrick, A. G . [I] A contribution to the mathematical theory of epidemics, Proc. Roy. SOC.( A ) 115 (1927), 700-72 I . Kirpotina, N. V. [l] Systems of non-linear integral equations, Moskou. Oblast. Ped. Inst. W e n . Zap. 150 (1 964), 29-39. Kittrell, J. R., Mezaki, R., and Watson, C . C. [l] Estimation of parameters for nonlinear least squares analysis, Ind. Eng. Chem. Fundamentals 57 ( 1965). Kopal, Z. [I] Numerical Analysis. Chapman & Hall, London, 1961. Kotkin, B. [I] A Mathematical Model of Drug Distribution and the Solution of Differential-Difference Equations. The RAND Corp., RM-2907-RC, 1962. Krasnosel’skii, M. A. [ 13 Topological Methods in the Theory of Nonlinear Integral Equations. Pergamon, Oxford and Macmillan, New York, 1964. Krasnosel’skii, M. A,, Zabreiko, P. P., Pustyl’nik, E. I., and Sobolevskii, P. E. [I] Integral Operators in Spaces of Summable Functions. Izdat. “Nauka,” Moscow, 1966. Landau, H. G., and Rapoport, A. [I] Mathematical theory of contagion, Bull. Math. Eiophys. 15 (1953). LaSalle, J. P., and Lefschetz, S. [I] Stability by Lyapunou’s Method with Applications. Academic Press, New York, 1961. Lee, E. S. 
[I] Quasilinearization and estimation of parameters in differential equations, Ind. Eng. Chem. Fundamentals 7 (1968). Levin, J. J. [I] The asymptotic behavior of the solution of a Volterra equation, Proc. Amer. Math. SOC. 14 (1963), 534-541. [2] A nonlinear Volterra equation not of the convolution type, J . Differential Equations 4 (1968), 176-186. Levin, J. J., and Nohel, J. A. [l] On a system of integro-differential equations occurring in reactor dynamics, J . Math. Mech. 9 (1960), 347-368. [2] Perturbations of a nonlinear Volterra equation, Michigan Math. J., 12 (1965), 431447. [3] A system of nonlinear integro-differential equations, Michigan Math. J . 13 (1966), 257-270.
264
BIBLIOGRAPHY
Levin, J. J., and Shea, D. F. [I] On the asymptotic behavior of the bounded solutions of some integral equations, J . Math. Anal. Appl. 37 (1972), 42-82. Londen, Stig-Olof [I] On a nonlinear Volterra integro-differential equation, Commentationes Physico-Mathematicae 38 (l969), 5-1 1. [2] The qualitative behavior of the solutions of a nonlinear Volterra equation, Michigan Moth. J . (in press). Luca, N. [I] On the behavior of the solutions of an integro-differential system of equations, An. Sti. Univ. “AI. I . Cuza” Iasi Sect. I a Mat. (N.S.) 13 (1967), 299-303. Lure, A. I . , and Postnikov, V. N. [I] On the theory of stability of control systems, Prikl. Mat. i Mehk. 8 (1944), 3. Mahmudov, A. P., and Musaev, V. M. [I] On the theory of the solutions of nonlinear integral equations of Volterra-Uryson type, Akad. Nauk Azerbaidian. SSR Dokl. 25 (1969), 3-6. Mikhlin, S. G . [I] Integral Equations. Macmillan, New York, 1964. Mikhlin, S. G., and H. L. Smolickii [I] Approximate Methods of Solution of Differentialand Integral Equations. Izdat. “Nauka,” Moscow, 1965. Miller, R. K. [I] On Volterra’s population equation, J . SIAM Appl. Math. 14 (1966). 446-452. [2] An unstable nonlinear integro-differential system, Proc. US.-Japan Seminar on Differential and Functional Equations (Minneapolis, Minnesota, 1963, pp. 479489. Benjamin, New York, 1967. [3] Asymptotic stability properties of linear Volterra integro-differential equations, J . Differential Equations (in press). [4] Admissibility and nonlinear Volterra integral equations, Proc. Amer. Math. SOC.25 (1970), 65-71. [5] On the linearization of Volterra integral equations, J . Math. Anal. Appl. 23 (1968), 198-208. Miller, R.K., Nohel, J. A,, and Wong, J . S. W. [l] A stability theorem for nonlinear mixed integral equations, J . Math. Anal. Appl. 25 (1969), 446-449. Miller, R. K., and Sell, G. R. [I] Existence, uniqueness, and continuity of solutions of integral equations, Ann. Mat. Pura Appl. 80 (1 968), 135-1 52. 
[2] Volterra integral equations and topological dynamics, Mem. Amer. Math. Soc. 102 (1970). 67 p. Moore, R. H. [ I ] Approximations to nonlinear operator equations and Newton’s method, Numer. Math. 12 (1968), 23-34. Muki, R.,and Sternberg, E. [I] Note on an asymptotic property of solutions to a class of Fredholm integral equations, Quart. Appl. Math. 28 (1970). 277-281. Muldowney, J. S., and Wong, J. S. W. [I] Bounds for solutions of nonlinear integro-differential equations, J . Math. Anal. Appl. 23 (1968), 487499. Nashed, M. Z., and Wong, J . S. W. [I] Some variants of a fixed point theorem of Krasnosel’skii and applications to nonlinear integral equations, J . Math. Mech. 18 (1969). 767-777.
DETERMINISTIC
265
Natanson, I. P. [I] Tlwory of Functions of a Real Variable. Vol. I . Ungar. New York, 1960. [2] Theory of Functions of a Real Variable, Vol. 11. Ungar, New York, 1960. Nohel, J. A. [ 13 Qualitative behavior of solutions of nonlinear Volterra equations, Stability Problems of Solutions of Differential Equations. Proc. NATO Adv. Study Inst., Padua, Italy, 1965. [2] Remarks on nonlinear Volterra equations, Proc. US.-Japan Seminar Differential and Functional Equations (Minneapolis, Minnesota, 1967), pp. 249-266. Benjamin, New York, 1967. Oguztoreli, M. N. [I] Time-Lag Control Systems. Academic Press, New York, 1966. Petrovanu, D. [ 11 Equations Hammerstein integrales et discretes, Ann. Mat. Pura Appl. 70 (1966), 227-254. Petry, Walter [I] Ein iteratives Verfahren zur Bestimmung einer Losung gewisser nichtlinearer Operatorgleichungen im Hilbertraum mit Anwendung auf Hammersteinsche Integral-gleichungssysteme, Math. Ann. 187 (1970), 127-149. Poincare, H. [l] Memoire sur les courbes definier par une equation differentiable, J . Math. Pures Appl. (3)7 (1881), 375422; 8 (1882), 251-296; (4)l (1885) 167-244; 2 (1886), 151-217. Rall, L. B. [ 13 Computational Solution of Nonlinear Operator Equations. Wiley, New York, 1969. Saaty, Thomas L. [l] Modern Nonlinear Equations. McGraw-Hill, New York, 1967. Sandberg, I. W. [ 11 On the boundedness of solutions of nonlinear integral equations, Bell System Tech. J . 44 (1965), 439-453. Sauer, R. [ 13 Ingenieur-Matematik, Band I: Differential-und Integralrechnung. Springer-Verlag, Berlin and New York, 1969. Schmeidler, W. [l] Linear Operators in Hilbert Space. Academic Press, New York, 1965. Stenger, F. [I] The approximate solution of Wiener-Hopf integral equations, J. Math. Anal. Appl. (in press). Stephenson, J. [ I ] Theory of the measurement of blood flow by the dilution of an indicator, Bull. Math. Biophys. 10 (1948). Straws, A. 
[I] A discussion of the linearization of Volterra integral equations, Seminar Differentid Equations Dynamical Syst. II, Lecture Notes Math. 144 (1970), 209-217. [2] On a perturbed Volterra integral equation, J. Math. Anal. Appl. 30 (1970), 564-575. Titchmarsh, E. C. [I] Introduction to the theory ofFourier Integrals. Oxford Univ. Press, London and New York, 1959. Tricomi, F. G. [I] Integral Equations. Wiley (Interscience), New York, 1957. Vainberg, M. M. [I] Integro-differential equations. Math. Analysis, Theory of’ Probability Control (1962). pp. 5-37. ltogi Nauki, Akad. Nauk SSSR Inst. NauEn. Inform., Moscow, 1964.
266
BIBLIOGRAPHY
Vinokurov, V. R. [I] Volterra integral equations with infinite interval of integration, Drflerenciarnye Uravnenga 5 (1969), 189&1898. [2] Certain questions in the theory of the stability of systems of Volterra integral equations. Izv. Vysi. Uc'ebn. Zaved. Matematika 85 (1969), 24-34; 86 (1969), 28-38. Widom, H. [I] Lectures on Infegral Equations, Van Nostrand Math. Stud. No. 17. Van Nostrand Reinhold, Princeton, New Jersey, 1969. Wijsman, R. [I] A critical investigation of the integral description of metabolizing systems, Bull. Math. Biophys. 15 (1953). Yosida, K. [I] Functional Analysis. Springer-Verlag, Berlin and New York, 1965. Zabreiko, P. P., and Povolockii, A. I. [I] On the theory of the Hammerstein equation, Ukrain. Math. J. 22 (1970), 127-138.
Stochastic or Probabilistic Adomian, G. [I] Random operator equations in mathematical physics I, J. Math. Phys. 11 (1970), 1069-1 074. [2] Linear random operator equations in mathematical physics 11, J. Math. Phys. 12 (1971), I9444 948. [3] Linear random operator equations in mathematical physics 111, J. Math. Phys. 12 (1971), 1948-1 955. [4] Theory of random systems, Trans. Prague Con$ on Information Theory, Statistical Decision Functions, Random Processes. 41h, Prague, 1965. 205-222. Academic, Prague, 1967. Ahmed, N. U. [ I ] A class of stochastic nonlinear integral equations on L p spaces and its application to optimal control, Information and Control 14 (1969). 5 12-523. Ahmed, N. U., andTeo, K. L. [I] On the stability of a class of nonlinear stochastic systems with applications to distributed parameter systems, Proc. IFAC Symp. Control of Distributed Parameter Syst., Banff, Canada, June, I97 I . Anderson, M. W. [ I ] Stochastic Integral Equations. Ph.D. Dissertation, Univ. of Tennessee (1966). [2] A stochastic integral equation, SIAM J. Appl. Math. 18 (1970), 526-532. Bailey, N. T. J. [ I ] A perturbation approximation to the simple stochastic epidemic in a large population, Biometrika 55 (1968), 199-209. [2] Stochastic birth, death, and migration processes for spatially distributed populations, Biometrika 55 (1968), 189-198. [3] The simple stochastic epidemic: A complete solution in terms of known functions, Biometrika 50 (1963), 235-240. Bartholomay, A. [I] Stochastic models for chemical reactions I-theory of unimolecular reaction process, Bull. Math. Biophys. 20 (1958).
STOCHASTIC OR PROBABILISTIC
267
Bartlett, M. S. [ 11 On theoretical models for competitive and predatory biological systems, Biometrika 44 (1957), 27-42. [2] Stochastic Processes. Cambridge Univ. Press, New York and London, 1955. [3] Stochastic Population Models in Ecology and Epidemiology. Methuen, London, 1960. Bharucha-Reid, A. T. [I] Sur les equations integrales aleatoires de Fredholm a noyaux skparables, C.R. Acad. Sci. Paris 250 (1960), 454456,657458. [2] Approximate solutions of random operator equations, Notices Amer. Math. Soc. 7 (1960), 361. [3] On random solutions of Fredholm integral equations, Bull. Amer. Math. SOC.66 (1960), 104109. [4] On random solutions of integral equations in Banach spaces, Trans. Prague Conf. Information Theory, Statistical Decision Functions, and Random Processes, 2nd 21-48. Academic Press, 1960. [5] On the theory of random equations, Proc. Symp. Appl. Math. 16 (1964), 40-69. Amer. Math. SOC.,Providence, Rhode Island. [6] Ed., Probabilistic Methods in Applied Mathematics, Vol. I . Academic Press, New York, 1968; Vol. 11, 1970. [q Random Integral Equations. Academic Press, New York, 1972. [8] Elements of the Theory of Markov Processes and Their Applications. McGraw-Hill, New York, 1960. Bharucha-Reid, A. T., and Arnold, L. [I] On Fredholm integral equations with random degenerate kernels, Zastos. Mat. 10 (l969), 85-90. Bharucha-Reid, A. T., Mukherjea, A,, and Tserpes, N. A. [I] On the existence of random solutions of some random integral equations (to appear). Blanc-Lapierre, A,, and Fortet, R. [I] Theory of Random Functions. Gordon and Breach, New York, 1965. Blum, J. R. [I] Approximation methods which converge with probability one, Ann. Math. Starisr. 25 (1954), 382-386. [2] Multidimensional stochastic approximation procedures, Ann. Math. Statist. 25 (l954), 737-744. Buell, J., and Kalaba, R. [I] Quasilinearization and the fitting of nonlinear models of drug metabolism to experimental kinetic data, Math. Biosci. 5 (1969), 121-132. Burkholder, D. L. 
[I] On a class of stochastic approximation processes, Ann. Math. Statist. 27 (1956), 10441059. Chiang, C. L. [ I ] Introduction to Stochastic Processes in Biostatistics. Wiley, New York. 1968. Chung, K. L. [I] On a stochastic approximation method, Ann. Math. Statist. 25 (1954), 463483. Dawson, D. A. [ I ] Generalized stochastic integrals and equations, Trans. Amer. Math. Soc. 147 (1970), 473-506. Day, W. B., and Boyce, W. E. [I] On the relationship between the solution of a stochastic boundary value problem and parameters in the boundary conditions, S I A M J . Appl. Math. 19, No. I , (1970).
268
BIBLIOGRAPHY
Dence, D. [I] Wave propagation in anisotropic media, Probabilistic Methods in Applied Mathematics (A. T. Bharucha-Reid, Ed.), Vol. 3. Academic Press, New York, 1972. Dence. D.. and Spence. J . E. [ I ] An effective medium description for a random uniaxial anisotropic medium, IEEE Trans. AP-20 (1972). [2] The mean Green’s Dyadic for a random uniaxial anisotropic medium with small scale fluctuations, Proc. Fall Internat. Symp. Antennas and Propagation (1971). [3] Dyson’s Equation for a medium possessing uniaxial anisotropic fluctuations, Electron Lett. 7 (1971). [4] The coherent wave in a random uniaxial anisotropic medium, IEEE Trans. AP-19 (1971). Derman, C. [I] An application of Chung’s lemma to the Kiefer-Wolfowitz stochastic approximation procedure, Ann. Math. Statist 27 (1956), 532-536. Distefano, N. [ I ] A Volterra integral equation in the stability of some linear hereditary phenomena, J . Math. Anal. Appl. 23 (1968), 365-383. Doob, J. L. [I] Stochastic Processes. Wiley, New York, 1953. Dvoretzsky, A. [I] On stochastic approximation, Proc. Berkeley Symp. Math. Statist. Probability, 3rd Univ. of California Press, Berkeley, California, 1956. Dynkin, Y. B. [I] Markou Processes. Academic Press, New York, 1964. Feller, W. [I] On the integral equation of renewal theory, Ann. Math. Statist. 12 (1941), 243-267. [2] An Introduction to Probability Theory and Its Applications, 3rd ed., Vol. 1. Wiley, New York, 1957. Fortet, R. [I] Random distributions with application to telephone engineering, Proc. Berkeley Symp. Math. Statist. and Probability, 3rd Vol. 11, pp. 81-88. University of California Press, Berkeley, California, 1956. Fry, T. C. [l] Probability and Its Engineering Uses, 2nd ed. Van Nostrand Reinhold, Princeton, New Jersey, 1965. Gikhmann, I . I., and Skorokhod, A. V. [I] Introduction to the Theory of Random Processes. Saunders, Philadelphia, Pennsylvania, 1969. Goodman, L. 
[I] Stochastic models for the population growth of the sexes, Biometrika 55 (1 968), 469-488. Hans, 0. [I] Random operator equations, Proc. Berkeley Symp. Math. Statist. and Probability, 4th Vol. 11, pp. 185-202. Univ. of California Press, Berkeley, California, 1961. Hardiman. S. T., and Tsokos. C. P. [I] Biological models pertaining to stochastic integral equations, J . Stat. Physics (to appear). [2] On a Uryson Type of’Stochstic Integral Equations (to appear). [3] Existence theory for nonlinear random integral equations using the Banach-Steinhaus Theorem, Math. Nachr. (to appear). Haskey, H. W. [ 13 A general expression for the mean in a simple stochastic epidemic, Biometrika 41 (1954). 272-275.
STOCHASTIC OR PROBABILISTIC
269
Hunt, B. R. [ I ] Statistical properties of numerical solutions to convolution-type integral equations, preprint, Los Alamos Sci. Lab. of the Univ. of California, Los Alamos, New Mexico. Ito, K. [I] Stochastic integral, Proc. Imp. Acad. Tokyo 20 (1944), 519-524. [2] On a stochastic integral equation. Proc. Japan Acad. 22 (1946), 32-35. Jazwinski, A. H. [I] Stochastic processes and filtering theory, Mathematics in Science and Engineering (R. Bellman, ed.), Vol. 64. Academic Press, New York, 1970. Kallianpur, G. [I] A note on the Robbins-Monro stochastic approximation method, Ann. Math. Statist. 25 (1954), 386-388. Kallianpur, G., and Striebel, C. [I] Stochastic differential equations occurring in the estimation of continuous parameter stochastic processes, United States Air Force Office of Aerospace Research, Grant No. AF-AFOSR-885-65; the United States Navy, Grant No. NOOOl4-67x0307, and the National Science Foundation, Grant No. GP7490. Kannan, D. [I] An operator-valued stochastic integral, 11, Ann. Inst. Henri PoincarP VIII, no. 1 (1972), 9-32. [2] An operator-valued stochastic integral, 111, Ann. Inst. Henri PoincarP 8 (1972), 217-228. [3] On enzyme amplifier systems triggered by white noise, Math. Biosci. 15 (1972). [4] Martingales in Banach spaces with Schauder Bases, J . Math. Phys. Sci.7 (l973), 93-100. Kannan, D., and Bharucha-Reid, A. T. [I] Probability measures on H , spaces, Ann. Inst. Henri PoincarP VII, no. 3 (1971), 205-217. [2] Note on covariance operators of probability measures on a Hilbert space, Proc. Japan Acad. 46, No. 2 (1970), 124-129. [3] An operator-valued stochastic integral, Proc. Japan Acad. 47, No. 5 (1971), 472476. [4] Random integral equation formulation of a generalized langesian equation, J. Stati.yt. Phys. 5, No. 3 (1972). Kendall, D. G. [I] Stochastic processes and population growth, J. Roy. Statist. SOC.(1949), 230-267. Kerr, J. D. [I] The probability of disease transmission, Biometrics 27 (l971), 219-222. Kesten, H. 
[1] Some nonlinear stochastic growth models, Bull. Amer. Math. Soc. 77 (1971), 492-511.
Kiefer, J., and Wolfowitz, J.
[1] Stochastic estimation of the maximum of a regression function, Ann. Math. Statist. 23 (1953), 462-466.
Kim, Ho Gol
[1] On the parametric dependence of solutions of stochastic integral equations in a Hilbert space, Cho-sŏn In-min Kong-hwa-kuk Kwa-hak-wŏn T'ong-p'o (1970), 3-8.
Leslie, P. H.
[1] A stochastic model for studying the properties of certain biological systems by numerical methods, Biometrika 45 (1958), 16-31.
Levit, M. V.
[1] Frequency conditions for the absolute stochastic stability of automatic control systems with random external action, Dokl. Akad. Nauk SSSR 196 (1970), 769-772.
Loève, M.
[1] Probability Theory, 3rd ed. Van Nostrand Reinhold, Princeton, New Jersey, 1963.
Lotka, A.
[1] On an integral equation in population analysis, Ann. Math. Statist. 10 (1939), 144-161.
Lumley, J. L.
[1] An approach to the Eulerian-Lagrangian problem, J. Math. Phys. 3 (1962), 309-312.
McKean, H. P., Jr.
[1] Stochastic Integrals. Academic Press, New York, 1969.
McQuarrie, Donald
[1] A stochastic approach to chemical kinetics, J. Appl. Probability 4 (1967), 413-478.
Milton, J. S., and Tsokos, C. P.
[1] A stochastic model for chemical kinetics (to appear).
[2] Admissibility theory and Ito's stochastic integral equation (to appear).
[3] On a class of nonlinear stochastic integral equations, Math. Nachr. (to appear).
[4] Stochastic integral equations in a physiological model (to appear).
[5] On a non-linear perturbed stochastic integral equation, J. Math. Phys. Sci. V, No. 4 (1971), 361-374.
[6] A random integral equation in a metabolizing system, J. Statist. Physics 8 (1973).
[7] A stochastic model for communicable disease, Int. J. Systems Sci. (to appear).
[8] On a random solution of a nonlinear perturbed stochastic integral equation of the Volterra type, Bull. Austral. Math. Soc. 9 (1973), 227-237.
Milton, J. S., Padgett, W. J., and Tsokos, C. P.
[1] Existence theory of a stochastic integral equation of the Fredholm type with random perturbations, SIAM J. Appl. Math. 22 (1972), 194-208.
Moran, P. A. P.
[1] The Statistical Processes of Evolutionary Theory. Oxford Univ. Press, London and New York, 1962.
Morozan, T.
[1] The method of V. M. Popov for control systems with random parameters, J. Math. Anal. Appl. 16 (1966), 201-215.
[2] Stability of some linear stochastic systems, J. Differential Equations 3 (1967), 153-169.
[3] Stability of linear systems with random parameters, J. Differential Equations 3 (1967), 170-178.
[4] Stabilitatea sistemelor cu parametri aleatori. Editura Academiei Republicii Socialiste România, Bucharest, 1969.
[5] Stability of stochastic discrete systems, J. Math. Anal. Appl. 23 (1968), 1-9.
Mukherjea, A.
[1] Transformations aléatoires séparables: Théorème du point fixe aléatoire, C. R. Acad. Sci. Paris Sér. A-B 263 (1966), A393-A395.
[2] Idempotent probabilities on semigroups, Z. Wahrscheinlichkeitstheorie verw. Geb. 11 (1969), 142-146.
[3] Random operators on Lusin probability spaces, Z. Wahrscheinlichkeitstheorie verw. Geb. 9 (1968), 232-234.
[4] A Stone-Weierstrass theorem for random functions, Bull. Austral. Math. Soc. 2 (1970), 233-236.
[5] On the convolution equation P = P * Q of Choquet and Deny for probability measures on semigroups, Proc. Amer. Math. Soc. 32 (1972), 457-463.
Mukherjea, A., and Bharucha-Reid, A. T.
[1] Separable random operators, I, Rev. Roumaine Math. Pures Appl. 14 (1969), 1553-1561.
Mukherjea, A., and Tserpes, N. A.
[1] A problem on r*-invariant measures on locally compact semigroups, Indiana Math. J. 21 (1972), 973-978.
[2] Mesures de probabilité r*-invariantes sur un semigroupe métrique, C. R. Acad. Sci. Paris Sér. A 268 (1969), 318-319.
[3] A note on countably compact semigroups, J. A. M. S. 13 (1972), 180-184.
[4] On the convolution equation P = P * Q of Choquet and Deny for probability measures on semigroups, Proc. Amer. Math. Soc. 32 (1972), 457-463.
[5] On certain conjectures on invariant measures on semigroups, Semigroup Forum 1 (1970), 260-266.
[6] Some problems on idempotent measures on semigroups, Bull. Austral. Math. Soc. 2 (1970), 299-315.
[7] Invariant measures on semigroups with closed translations, Z. Wahrscheinlichkeitstheorie Verw. Geb. 17 (1971), 33-38.
[8] Idempotent measures on locally compact semigroups, reprinted from the Proc. Amer. Math. Soc. 29, No. 1 (1971), 143-150.
Nichols, W. G., and Tsokos, C. P.
[1] Formal solutions for a class of stochastic linear pursuit-evasion games with perfect information, Internat. J. Systems Sci. 1 (1970).
[2] An empirical Bayes approach to point estimation in adaptive control, J. Information and Control (1972).
Padgett, W. J., and Tsokos, C. P.
[1] On a semi-stochastic model arising in a biological system, Math. Biosci. 9 (1970), 105-117.
[2] A stochastic model for chemotherapy: Computer simulation, Math. Biosci. 9 (1970), 119-133.
[3] Existence of a solution of a stochastic integral equation in turbulence theory, J. Math. Phys. 12 (1971), 210-212.
[4] On a stochastic integral equation of the Volterra type in telephone traffic theory, J. Appl. Probability 8 (1971), 269-271.
[5] Random solution of a stochastic integral equation: Almost sure and mean square convergence of successive approximations, Int. J. Systems Sci. 4 (1973), 605-612.
[6] On the solution of a random integral equation by a method of stochastic approximation (to appear).
[7] A stochastic discrete Volterra equation with application to stochastic systems, Proc. Ann. Princeton Conf. Information Sci. and Systems, 5th, March 25-26, 1971, pp. 505-509. Princeton Univ.
[8] On a stochastic integral equation of the Fredholm type, Z. Wahrscheinlichkeitstheorie Verw. Geb. 23 (1972), 22-31.
[9] Existence of a solution of a random discrete Fredholm system (to appear).
[10] A stochastic model for chemotherapy: Two-organ systems, Int. J. for Biomedical Computing 3 (1972), 29-41.
[11] A random Fredholm integral equation, Proc. Amer. Math. Soc. 33 (1972), 534-542.
[12] The origins and applications of stochastic integral equations, Int. J. Systems Sci. 2 (1971), 135-148.
[13] A new stochastic formulation of a population growth problem, Math. Biosci. 17 (1973), 105-120.
[14] On the existence of a unique solution of a stochastic integral equation in hereditary mechanics, J. Math. Phys. Sci. (to appear).
[15] On a stochastic integro-differential equation of Volterra type, SIAM J. Applied Math. 23 (1972), 499-512.
[16] A random discrete equation and non-linear stochastic control system, Int. J. Systems Sci. 4 (1973), 77-85.
[17] Stochastic asymptotic stability and approximation of the random solution of a stochastic discrete Fredholm system, Int. J. Cybernetics (to appear).
[18] Stochastic integral equations in life sciences and engineering, Int. Stat. Review (1973).
Parrott, L. G.
[1] Probability and Experimental Errors in Science. Wiley, New York, 1961.
Pincus, M.
[1] Gaussian processes and Hammerstein integral equations, Trans. Amer. Math. Soc. 134 (1968), 193-214.
Prabhu, N. U.
[1] Stochastic Processes. Macmillan, New York, 1965.
Rabotnikov, Ju. L.
[1] On the theory of linear stochastic differential equations with non-Markov type solutions, Differencial'nye Uravnenija 4 (1968), 244-251.
[2] On the theory of nonlinear stochastic systems, Mat. Zametki 5 (1969), 607-614.
Rao, A. N. V., and Tsokos, C. P.
[1] On an Ito-Doob type of stochastic integral (to appear).
[2] Sufficient conditions for the existence of a random solution to a mixed Volterra-Fredholm-Ito type of stochastic integral equations (to appear).
Rao, B. L. S. Prakasa
[1] Stochastic integral equations of mixed type (Abstract), Bull. Inst. Math. Statist. 1 (1972), 48.
Robbins, H., and Monro, S.
[1] A stochastic approximation method, Ann. Math. Statist. 22 (1951), 400-407.
Roxin, E., and Tsokos, C. P.
[1] On the definition of a stochastic differential game, Math. Systems Theory 4 (1970), 60-64.
Sacks, J.
[1] Asymptotic distribution of stochastic approximation procedures, Ann. Math. Statist. 29 (1958), 373-405.
Sibul, L. H.
[1] Stochastic Green's functions and their relation to the resolvent kernels of integral equations, Proc. Ann. Allerton Conf. on Circuit and System Theory, 5th, Monticello, Illinois, 1967, pp. 356-363. Univ. of Illinois, Urbana, Illinois, 1967.
Stratonovich, R. L.
[1] A new representation for stochastic integrals and equations, J. SIAM Control 4 (1966), 362-371.
Tserpes, N. A., and Mukherjea, A.
[1] Invariant measures on semigroups with closed translations, Z. Wahrscheinlichkeitstheorie Verw. Geb.
17 (1971), 33-38.
[2] Mesures de probabilité r*-invariantes sur un semigroupe métrique, C. R. Acad. Sci. Paris 268 (1969), 318-319.
Tsokos, C. P.
[1] On a nonlinear differential system with a random parameter, Int. Conf. on System Sciences, IEEE Proc., Honolulu, Hawaii (1969).
[2] On some nonlinear differential systems with random parameters, IEEE Proc., Ann. Princeton Conf. on Information Sci. and Systems, 3rd (1969), 228-234.
[3] On the classical stability theorem of Poincaré-Lyapunov, Proc. Japan Acad. 45 (1969), 780-785.
[4] On a stochastic integral equation of the Volterra type, Math. Systems Theory 3 (1969), 222-231.
[5] The method of V. M. Popov for differential systems with random parameters, J. Appl. Probability 8 (1971), 298-310.
[6] Stochastic absolute stability of a nonlinear differential system, IEEE Proc. on Circuits and Systems, Naval Postgraduate School (1968), 586-590.
[7] Stochastic approximation of a random integral equation, Math. Nachr. 51 (1971), 101-110.
[8] Stochastic integral equations in engineering and biological science, An. Ştiinţ., Tomul XVII (1971), 407-419.
Tsokos, C. P., and Hamdan, M. A.
[1] Stochastic nonlinear integro-differential systems with time lag, J. Natur. Sci. and Math. 10 (1970), 293-303.
Tsokos, C. P., and Nichols, W. G.
[1] On some stochastic differential games, Probabilistic Methods in Applied Mathematics (A. T. Bharucha-Reid, ed.), Vol. 3. Academic Press, New York, 1972.
Tsokos, C. P., and Padgett, W. J.
[1] Random Integral Equations with Applications to Stochastic Systems, Lecture Notes in Mathematics, Vol. 233. Springer-Verlag, Berlin and New York, 1971.
Tsokos, C. P., and Telionis, D. P.
[1] Stochastic particle trajectories in turbulent flow, J. Math. Phys. 14 (1), (1973).
Umegaki, H., and Bharucha-Reid, A. T.
[1] Banach space-valued random variables and tensor products of Banach spaces, reprinted from J. Math. Anal. Appl. 31, No. 1, July (1970).
Wasan, M. T.
[1] Stochastic Approximation. Cambridge Univ. Press, London and New York, 1969.
Whittle, P.
[1] A view of stochastic control theory, J. Roy. Statist. Soc. Ser. A 132 (1969), 320-334.
Whittle, P., and Gait, P. A.
[1] Reduction of a class of stochastic control problems, J. Inst. Math. Appl. 6 (1970), 131-140.
Williams, T.
[1] The simple stochastic epidemic curve for large populations of susceptibles, Biometrika 52 (1965), 571-579.
Wolfowitz, J.
[1] On the stochastic approximation method of Robbins and Monro, Ann. Math. Statist. 23 (1952), 457-461.
Wong, E., and Zakai, M.
[1] On the relation between ordinary and stochastic differential equations, Int. J. Engng Sci. 3 (1965), 213-229.
Yang, Grace Lo
[1] Contagion in stochastic models for epidemics, Ann. Math. Statist.
39, No. 6 (1968), 1863-1889.
Index

Numbers in italics refer to the pages on which the complete references are listed.
A

Admissibility theory, 30, 49, 54, 102
Adomian, G., 3, 266
Ahmed, N. U., 3, 266
Almost sure convergence, 68
Anderson, M. W., 3, 4, 79, 266
Approximate solutions, 65-78, 87-96, 141-148
Arithmetic fixed-point problem, 74
Asymptotic stability, 20-21, 38-39
B
Banach, S., 9
Banach space(s), 8, 14, 193
  admissible, 14, 31
  of sequences, 17-18, 133
Barbalat, I., 10, 27, 260
Barbalat lemma, 10, 27-28
Bartholomay, A., 198, 266
Bartlett, M. S., 78, 80, 267
Beckenbach, E. F., 137, 260
Bellman, R., 2, 50, 57, 59, 137, 260, 261
Bharucha-Reid, A. T., 2, 3, 4, 14, 44, 49, 78, 267
Biological system, 57-64, 165-179
  one-organ, 57-62
  two-organ, 62-64
Bochner, S., 10, 261
Bounded variation, 18
Branson, H., 165, 166, 261
Brownian motion, 2
Brownian motion process, 207-209, 214
Burkholder, D. L., 66, 87, 88, 267
C

Cauchy sequence, 7
Chemical kinetics, 180, 198-200
  rate functions, 201-204
  rate of reaction, 201
  stochastic integral equation in, 204-206
Chemotherapy models, 57-64
Circulatory system, see Physiological models and Chemotherapy models
Closed-graph theorem, 9, 31, 134
Communicable disease model, 176-179
  random integral equation in, 178
Contraction operator, 9, 22, 32
Contraction mapping, 66
Control systems
  discrete, 148-155
  feedback, 115
  stochastic, 115-119, 148-155
Convolution, 9, 227, 256
Corduneanu, C., 5, 98, 115, 261
D

Desoer, C. A., 115, 261
Discretized equation, 132
Distefano, N., 46, 268
Doob, J. L., 208, 209, 211, 268
Dunford, N., 11, 262
Dynkin, Y. B., 208, 268
E

Error signal, 253
Essentially bounded function, 14, 18
Eulerian velocity field, 56
Existence and uniqueness theorems, 30-39, 49-55, 84-87, 98-113, 120-131, 157-164, 194-197, 215, 244
  for random discrete equations, 133-141
F

Feller, W., 79, 80, 268
Fixed-point theorem(s), 9, 22-27
  Banach's, 9, 22-23, 54, 109
  Brouwer's, 24, 26
  Krasnosel'skii's, 10, 102
  Schauder's, 10, 25-26, 106
Fortet integral equation, 44
Fortet, R., 2, 43, 197, 267, 268
Fourier transforms, 10, 225
Fréchet space, 8, 99, 101
Fredholm random integral equation, 18, 97-131
  discrete version, 132-155
Free random variable, 18
G

Gavalas, G. R., 197, 201, 204, 262
Gikhmann, I. I., 2, 208, 268
H

Halanay, A., 10, 262
Hamdan, M. A., 4, 273
Hans, O., 3, 4, 268
Hardiman, S., 79, 268
Hearon, J., 165, 166, 262
Hereditary mechanics, 46-48
Hilbert space, 11, 16, 101, 102
Hildebrand, F. B., 149, 153, 262
I

Inner product, 11, 16
Ito, K., 2, 208, 269
Ito stochastic integral, 208-211
J

Jacquez, J. A., 2, 57, 59, 260, 261
Jazwinski, A. H., 208, 211, 269
K

Kalaba, R., 2, 57, 59, 260, 262
Kendall, D. G., 79, 80, 269
Kernel, 18, 33, 180, 243
Kotkin, B., 59, 261
L

L2, 7
Landau, H. G., 165, 176, 177, 263
Levin, J. J., 243, 263
Linear hereditary phenomenon, 46
Linear space, 8
  normed, 8
  topological, 8
Lipschitz condition, 32, 52, 103
Loève, M., 68, 269
Lumley, J. L., 2, 56, 270
Lur'e, A. I., 242, 264
Lyapunov, 4, 40
M

Markov inequality, 68
McKean, H. P., 208, 270
McQuarrie, D., 198, 270
Mean-square convergence, 71
Mean-square integral, 3, 18
Metabolizing systems, 165-170
  random integral equation in, 167
Method of successive approximations, 60, 66-78, 208
Metric space, 7
Mikhlin, S. G., 59, 264
Miller, R. K., 98, 243, 264
Milton, J. S., 2, 3, 79, 120, 129, 156, 157, 165, 170, 176, 181, 197, 200, 270
Minkowski's inequality, 9, 13, 50, 137
Morozan, T., 2, 4, 88, 218, 270
Mukherjea, A., 14, 49, 270, 271
N

Nohel, J. A., 98, 243, 263, 265
Norm, 7, 181
  of operator, 157, 185

O

Operator, 9
  bounded, 31, 102, 134
  closed, 14, 31
  completely continuous, 10, 102, 105, 106, 121, 125
  continuous, 14, 30-31, 99, 133, 185, 186, 214
  contraction, 9, 22, 32
  linear, 31, 99, 115, 134, 185, 186
  nonlinear, 115
P

Padgett, W. J., 2, 3, 4, 57, 79, 120, 244, 271, 272
Parseval equality, 10, 227
Petrovanu, D., 18, 98, 115, 265
Physiological models, 170-175
  random integral equation in, 172
Poincaré-Lyapunov stability theorem, 40-42
Popov, V. M., 4, 218
Popov frequency response method, 4, 149, 152, 225
Population growth problem, 78-87
  numerical solution, 94-96
Postnikov, V. N., 242, 264
Probability measure space, 12
  complete, 12
R

Rall, L. B., 72, 74, 265
Random Arzelà-Ascoli theorem, 15, 51
Random differential system, 40, 116
Random equations, 3
  algebraic, 3
  difference, 3
  differential, 3
  discrete, 20-21, 132-155
Random integral equation(s)
  approximate solutions, 65-78, 87-96
  Fredholm, 18, 97-131
  Ito, 207-216
  mixed Volterra-Fredholm, 19, 98, 101, 116-117, 120
  perturbed, 19, 120-131, 156-164
  vector, 180-197
  Volterra, 18, 29-64, 156-164
Random integrodifferential equation, 241, 243-251
Random Lipschitz condition, 52
Random solution, 20, 186
  asymptotically stable in mean-square, 20, 249
  existence and uniqueness theorems, 30-39, 49-55, 84-87, 98-113, 120-131, 133-141, 157-164, 194-197, 244-247
  stochastically asymptotically exponentially stable, 20, 38-39, 113-115, 117, 247
  stochastically geometrically stable, 21, 139, 141, 149, 152
Random vector, 181
Rao, A. N. V., 79, 272
Rao, B. L. S. Prakasa, 3, 272
Rapoport, A., 165, 176, 177, 263
Renewal equation, 79
Retraction, 24
S
Saaty, T. L., 208, 265
Schwartz inequality, 162
Schwartz, J., 11, 262
Semi-norm, 11-13, 182-184
Semirandom solution, 57, 59
Semistochastic integral equation, 59
Skorokhod, A. V., 2, 208, 268
Spacek, A., 3
Square-summable function, 7
Stability, 20-21, 113-115, 139-141, 149, 225, 247, 255-259
Stephenson, J., 170, 175, 265
Stochastic absolute stability, 4, 225-240, 255-259
Stochastic approximation, 87-94
Stochastic chemical kinetics, 180
  model, 197-206
Stochastic control, see Control systems
Stochastic differential systems, 21-22, 116, 250
  absolute stability of, 225-239
  nonlinear, 148, 152, 217-240
  reduction of, 219-225
  schematic representations of, 239-240
  stochastically absolutely stable, 22
Stochastic discrete equations, 20-21
Stochastic free term, 18
Stochastic fundamental matrix solution, 252
Stochastic integral, see Ito, K.
Stochastic integral equations, see also Random integral equations
  Ito, 212-213
  Ito-Doob, 214-216
Stochastic integrodifferential systems, 21, 241-259
  with time lag, 251-259
Stochastic kernel, 18, 33, 180, 243
Stochastic process, 14
  continuous in mean-square, 14
  second-order, 14, 133, 157, 215, 243
Stochastically stable matrix, 22, 41, 229, 231, 234, 236
Stoichiometry, 198
Stratonovich, R. L., 208, 272
Successive approximations, 22-23, 60, 145
  almost sure convergence of, 68-71
  error of approximations, 71-74, 147
  rate of convergence of, 71-74, 145
  sequence of, 22, 66, 75, 77, 94
T

Telephone exchange, 44
Telephone traffic, 42-46
Telionis, D. P., 273
Tomasian, A. I., 115, 261
Topological space, 8
  linear, 8, 181
  locally convex, 9, 12, 183
Tserpes, N., 14, 49, 270, 272
Tsokos, C. P., 2, 3, 4, 40, 57, 79, 120, 129, 149, 152, 156, 157, 165, 170, 176, 181, 197, 200, 218, 244, 272, 273
Turbulence theory, 55-56
V

Vector space, 8
Volterra random integral equation, 18, 29-64
  discrete version, 133
W

Wasan, M. T., 88, 273
Wijsman, R., 165, 166, 167, 266
Wong, E., 208, 273
Wong, J. S. W., 98, 264
Y

Yosida, K., 17, 266
Z

Zakai, M., 208, 273