Stochastic Optimal Control The Discrete Time Case
This is Volume 139 in MATHEMATICS IN SCIENCE AND ENGINEERING A Series of Monographs and Textbooks Edited by RICHARD BELLMAN, University of Southern California The complete listing of books in this series is available from the Publisher upon request.
Stochastic Optimal Control The Discrete Time Case
DIMITRI P. BERTSEKAS DEPARTMENT OF ELECTRICAL ENGINEERING AND COORDINATED SCIENCE LABORATORY UNIVERSITY OF ILLINOIS URBANA, ILLINOIS
STEVEN E. SHREVE DEPARTMENT OF MATHEMATICAL SCIENCES UNIVERSITY OF DELAWARE NEWARK, DELAWARE
ACADEMIC PRESS
New York
San Francisco
A Subsidiary of Harcourt Brace Jovanovich, Publishers
London
1978
COPYRIGHT © 1978, BY ACADEMIC PRESS, INC. ALL RIGHTS RESERVED. NO PART OF THIS PUBLICATION MAY BE REPRODUCED OR TRANSMITTED IN ANY FORM OR BY ANY MEANS, ELECTRONIC OR MECHANICAL, INCLUDING PHOTOCOPY, RECORDING, OR ANY INFORMATION STORAGE AND RETRIEVAL SYSTEM, WITHOUT PERMISSION IN WRITING FROM THE PUBLISHER.
ACADEMIC PRESS, INC.
111 Fifth Avenue, New York, New York 10003
United Kingdom Edition published by ACADEMIC PRESS, INC. (LONDON) LTD. 24/28 Oval Road, London NW1 7DX
Library of Congress Cataloging in Publication Data

Bertsekas, Dimitri P.
Stochastic optimal control.
(Mathematics in science and engineering; v. 139)
Includes bibliographical references and index.
1. Dynamic programming. 2. Stochastic processes. 3. Measure theory. I. Shreve, Steven E., joint author. II. Title. III. Series.
T57.83.B49 519.7'03 77-25727
ISBN 0-12-093260-1

AMS (MOS) 1970 Subject Classifications: 28A05, 49C15, 60B05, 62C99, 90A05
PRINTED IN THE UNITED STATES OF AMERICA
To Joanna and Steve's Mom and Dad
Contents

Preface xi
Acknowledgments xiii

Chapter 1 Introduction
1.1 Structure of Sequential Decision Models 1
1.2 Discrete-Time Stochastic Optimal Control Problems-Measurability Questions 5
1.3 The Present Work Related to the Literature 13

Part I ANALYSIS OF DYNAMIC PROGRAMMING MODELS

Chapter 2 Monotone Mappings Underlying Dynamic Programming Models
2.1 Notation and Assumptions 25
2.2 Problem Formulation 28
2.3 Application to Specific Models 29
2.3.1 Deterministic Optimal Control 30
2.3.2 Stochastic Optimal Control-Countable Disturbance Space 31
2.3.3 Stochastic Optimal Control-Outer Integral Formulation 35
2.3.4 Stochastic Optimal Control-Multiplicative Cost Functional 37
2.3.5 Minimax Control 38

Chapter 3 Finite Horizon Models
3.1 General Remarks and Assumptions 39
3.2 Main Results 40
3.3 Application to Specific Models 47

Chapter 4 Infinite Horizon Models under a Contraction Assumption
4.1 General Remarks and Assumptions 52
4.2 Convergence and Existence Results 53
4.3 Computational Methods 58
4.3.1 Successive Approximation 59
4.3.2 Policy Iteration 63
4.3.3 Mathematical Programming 67
4.4 Application to Specific Models 68

Chapter 5 Infinite Horizon Models under Monotonicity Assumptions
5.1 General Remarks and Assumptions 70
5.2 The Optimality Equation 71
5.3 Characterization of Optimal Policies 78
5.4 Convergence of the Dynamic Programming Algorithm-Existence of Stationary Optimal Policies 80
5.5 Application to Specific Models 88

Chapter 6 A Generalized Abstract Dynamic Programming Model
6.1 General Remarks and Assumptions 92
6.2 Analysis of Finite Horizon Models 94
6.3 Analysis of Infinite Horizon Models under a Contraction Assumption 96

Part II STOCHASTIC OPTIMAL CONTROL THEORY

Chapter 7 Borel Spaces and Their Probability Measures
7.1 Notation 102
7.2 Metrizable Spaces 104
7.3 Borel Spaces 117
7.4 Probability Measures on Borel Spaces 122
7.4.1 Characterization of Probability Measures 122
7.4.2 The Weak Topology 124
7.4.3 Stochastic Kernels 134
7.4.4 Integration 139
7.5 Semicontinuous Functions and Borel-Measurable Selection 145
7.6 Analytic Sets 156
7.6.1 Equivalent Definitions of Analytic Sets 156
7.6.2 Measurability Properties of Analytic Sets 166
7.6.3 An Analytic Set of Probability Measures 169
7.7 Lower Semianalytic Functions and Universally Measurable Selection 171

Chapter 8 The Finite Horizon Borel Model
8.1 The Model 188
8.2 The Dynamic Programming Algorithm-Existence of Optimal and ε-Optimal Policies 194
8.3 The Semicontinuous Models 208

Chapter 9 The Infinite Horizon Borel Models
9.1 The Stochastic Model 213
9.2 The Deterministic Model 216
9.3 Relations between the Models 218
9.4 The Optimality Equation-Characterization of Optimal Policies 225
9.5 Convergence of the Dynamic Programming Algorithm-Existence of Stationary Optimal Policies 229
9.6 Existence of ε-Optimal Policies 237

Chapter 10 The Imperfect State Information Model
10.1 Reduction of the Nonstationary Model-State Augmentation 242
10.2 Reduction of the Imperfect State Information Model-Sufficient Statistics 246
10.3 Existence of Statistics Sufficient for Control 259
10.3.1 Filtering and the Conditional Distributions of the States 260
10.3.2 The Identity Mappings 264

Chapter 11 Miscellaneous
11.1 Limit-Measurable Policies 266
11.2 Analytically Measurable Policies 269
11.3 Models with Multiplicative Cost 271

Appendix A The Outer Integral 273

Appendix B Additional Measurability Properties of Borel Spaces
B.1 Proof of Proposition 7.35(e) 282
B.2 Proof of Proposition 7.16 285
B.3 An Analytic Set Which Is Not Borel-Measurable 290
B.4 The Limit σ-Algebra 292
B.5 Set Theoretic Aspects of Borel Spaces 301

Appendix C The Hausdorff Metric and the Exponential Topology 312

References 317
Table of Propositions, Lemmas, Definitions, and Assumptions 321
Index
Preface
This monograph is the outgrowth of research carried out at the University of Illinois over a three-year period beginning in the latter half of 1974. The objective of the monograph is to provide a unifying and mathematically rigorous theory for a broad class of dynamic programming and discrete-time stochastic optimal control problems. It is divided into two parts, which can be read independently.

Part I provides an analysis of dynamic programming models in a unified framework applicable to deterministic optimal control, stochastic optimal control, minimax control, sequential games, and other areas. It resolves the structural questions associated with such problems, i.e., it provides results that draw their validity exclusively from the sequential nature of the problem. Such results hold for models where measurability of various objects is of no essential concern, for example, in deterministic problems and stochastic problems defined over a countable probability space. The starting point for the analysis is the mapping defining the dynamic programming algorithm. A single abstract problem is formulated in terms of this mapping and counterparts of nearly all results known for deterministic optimal control problems are derived. A new stochastic optimal control model based on outer integration is also introduced in this
part. It is a broadly applicable model and requires no topological assumptions. We show that all the results of Part I hold for this model.

Part II resolves the measurability questions associated with stochastic optimal control problems with perfect and imperfect state information. These questions have been studied over the past fifteen years by several researchers in statistics and control theory. As we explain in Chapter 1, the approaches that have been used are either limited by restrictive assumptions such as compactness and continuity or else they are not sufficiently powerful to yield results that are as strong as their structural counterparts. These deficiencies can be traced to the fact that the class of policies considered is not sufficiently rich to ensure the existence of everywhere optimal or ε-optimal policies except under restrictive assumptions. In our work we have appropriately enlarged the space of admissible policies to include universally measurable policies. This guarantees the existence of ε-optimal policies and allows, for the first time, the development of a general and comprehensive theory which is as powerful as its deterministic counterpart. We mention, however, that the class of universally measurable policies is not the smallest class of policies for which these results are valid. The smallest such class is the class of limit measurable policies discussed in Section 11.1. The σ-algebra of limit measurable sets (or C-sets) is defined in a constructive manner involving transfinite induction that, from a set theoretic point of view, is more satisfying than the definition of the universal σ-algebra. We believe, however, that the majority of readers will find the universal σ-algebra and the methods of proof associated with it more understandable, and so we devote the main body of Part II to models with universally measurable policies.

Parts I and II are related and complement each other. Part II makes extensive use of the results of Part I. However, the special forms in which these results are needed are also available in other sources (e.g., the textbook by Bertsekas [B4]). Each time we make use of such a result, we refer to both Part I and the Bertsekas textbook, so that Part II can be read independently of Part I. The developments in Part II show also that stochastic optimal control problems with measurability restrictions on the admissible policies can be embedded within the framework of Part I, thus demonstrating the broad scope of the formulation given there.

The monograph is intended for applied mathematicians, statisticians, and mathematically oriented analysts in engineering, operations research, and related fields. We have assumed throughout that the reader is familiar with the basic notions of measure theory and topology. In other respects, the monograph is self-contained. In particular, we have provided all necessary background related to Borel spaces and analytic sets.
Acknowledgments
This research was begun while we were with the Coordinated Science Laboratory of the University of Illinois and concluded while Shreve was with the Departments of Mathematics and Statistics of the University of California at Berkeley. We are grateful to these institutions for providing support and an atmosphere conducive to our work, and we are also grateful to the National Science Foundation for funding the research.

We wish to acknowledge the aid of Joseph Doob, who guided us into the literature on analytic sets, and of John Addison, who pointed out the existing work on the limit σ-algebra. We are particularly indebted to David Blackwell, who inspired us by his pioneering work on dynamic programming in Borel spaces, who encouraged us as our own investigation was proceeding, and who showed us Example 9.2.

Chapter 9 is an expanded version of our paper "Universally Measurable Policies in Dynamic Programming" published in Mathematics of Operations Research. The permission of The Institute of Management Sciences to include this material is gratefully acknowledged.

Finally we wish to thank Rose Harris and Dee Wrather for their excellent typing of the manuscript.
Chapter 1
Introduction
1.1 Structure of Sequential Decision Models

Sequential decision models are mathematical abstractions of situations in which decisions must be made in several stages while incurring a certain cost at each stage. Each decision may influence the circumstances under which future decisions will be made, so that if total cost is to be minimized, one must balance his desire to minimize the cost of the present decision against his desire to avoid future situations where high cost is inevitable. A classical example of this situation, in which we treat profit as negative cost, is portfolio management. An investor must balance his desire to achieve immediate return, possibly in the form of dividends, against a desire to avoid investments in areas where low long-run yield is probable. Other examples can be drawn from inventory management, reservoir control, sequential analysis, hypothesis testing, and, by discretizing a continuous problem, from control of a large variety of physical systems subject to random disturbances. For an extensive set of sequential decision models, see Bellman [B1], Bertsekas [B4], Dynkin and Juskevic [D8], Howard [H7], Wald [W2], and the references contained therein.

Dynamic programming (DP for short) has served for many years as the principal method for analysis of a large and diverse group of sequential
decision problems. Examples are deterministic and stochastic optimal control problems, Markov and semi-Markov decision problems, minimax control problems, and sequential games. While the nature of these problems may vary widely, their underlying structures turn out to be very similar. In all cases, the cost corresponding to a policy and the basic iteration of the DP algorithm may be described by means of a certain mapping which differs from one problem to another in details which to a large extent are inessential. Typically, this mapping summarizes all the data of the problem and determines all quantities of interest to the analyst. Thus, in problems with a finite number of stages, this mapping may be used to obtain the optimal cost function for the problem as well as to compute an optimal or ε-optimal policy through a finite number of steps of the DP algorithm. In problems with an infinite number of stages, one hopes that the sequence of functions generated by successive application of the DP iteration converges in some sense to the optimal cost function for the problem. Furthermore, all basic results of an analytical and computational nature can be expressed in terms of the underlying mapping defining the DP algorithm. Thus by taking this mapping as a starting point one can provide powerful analytical results which are applicable to a large collection of sequential decision problems.

To illustrate our viewpoint, let us consider formally a deterministic optimal control problem. We have a discrete-time system described by the system equation

x_{k+1} = f(x_k, u_k),    k = 0, ..., N - 1,    (1)
where x_k and x_{k+1} represent a state and its succeeding state and will be assumed to belong to some state space S; u_k represents a control variable chosen by the decisionmaker in some constraint set U(x_k), which is in turn a subset of some control space C. The cost incurred at the kth stage is given by a function g(x_k, u_k). We seek a finite sequence of control functions π = (μ_0, μ_1, ..., μ_{N-1}) (also referred to as a policy) which minimizes the total cost over N stages. The functions μ_k map S into C and must satisfy μ_k(x) ∈ U(x) for all x ∈ S. Each function μ_k specifies the control u_k = μ_k(x_k) that will be chosen when at the kth stage the state is x_k. Thus the total cost corresponding to a policy π = (μ_0, μ_1, ..., μ_{N-1}) and initial state x_0 is given by

J_{N,π}(x_0) = Σ_{k=0}^{N-1} g[x_k, μ_k(x_k)],    (2)

where the states x_1, x_2, ..., x_{N-1} are generated from x_0 and π via the system equation

x_{k+1} = f[x_k, μ_k(x_k)],    k = 0, ..., N - 2.    (3)
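As a concrete illustration of (2) and (3), the cost of a given policy can be computed by simulating the system forward. The sketch below is not part of the original text; the state space, system function, cost function, horizon, and policy in it are invented purely for illustration.

```python
# Minimal sketch: evaluating the cost (2) of a policy by simulating the
# system equation (3).  All model data here are invented for illustration.

S = [0, 1, 2]                      # state space S
N = 4                              # horizon

def f(x, u):                       # system equation x_{k+1} = f(x_k, u_k)
    return (x + u) % 3

def g(x, u):                       # cost per stage g(x_k, u_k)
    return (x - 1) ** 2 + u

# A policy pi = (mu_0, ..., mu_{N-1}); here every mu_k maps each state to control 1.
pi = [{x: 1 for x in S} for _ in range(N)]

def J_N_pi(x0):                    # J_{N,pi}(x_0) of (2)
    total, x = 0.0, x0
    for mu in pi:                  # states generated via (3)
        u = mu[x]
        total += g(x, u)
        x = f(x, u)
    return total

print([J_N_pi(x0) for x0 in S])
```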
Corresponding to each initial state x_0 and policy π, there is a sequence of control variables u_0, u_1, ..., u_{N-1}, where u_k = μ_k(x_k) and x_k is generated by
(3). Thus an alternative formulation of the problem would be to select a sequence of control variables minimizing Σ_{k=0}^{N-1} g(x_k, u_k) rather than a policy π minimizing J_{N,π}(x_0). The formulation we have given here, however, is more consistent with the DP framework we wish to adopt.

As is well known, the DP algorithm for the preceding problem is given by

J_0(x) = 0,    (4)

J_{k+1}(x) = inf_{u ∈ U(x)} {g(x, u) + J_k[f(x, u)]},    k = 0, ..., N - 1,    (5)

and the optimal cost J*(x_0) for the problem is obtained at the Nth step, i.e.,

J*(x_0) = inf_π J_{N,π}(x_0) = J_N(x_0).

One may also obtain the value J_{N,π}(x_0) corresponding to any π = (μ_0, μ_1, ..., μ_{N-1}) at the Nth step of the algorithm

J_{0,π}(x) = 0,    (6)

J_{k+1,π}(x) = g[x, μ_{N-k-1}(x)] + J_{k,π}[f(x, μ_{N-k-1}(x))],    k = 0, ..., N - 1.    (7)
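The algorithm (4)-(5) and the policy obtained from its minimizations can be computed directly when the state and control spaces are finite. The following minimal sketch (again with invented model data, so that it is self-contained) carries out the recursion and checks that forward evaluation of the constructed policy, in the spirit of (6)-(7), reproduces J_N.

```python
# Illustrative sketch of the DP algorithm (4)-(5) for a finite deterministic
# problem, together with the policy obtained from the minimizations.
# Model data are invented for illustration.

S = [0, 1, 2]
U = {x: [0, 1] for x in S}         # constraint sets U(x)
N = 4

def f(x, u):
    return (x + u) % 3

def g(x, u):
    return (x - 1) ** 2 + u

# (4)-(5): J_0 = 0,  J_{k+1}(x) = min_{u in U(x)} { g(x,u) + J_k(f(x,u)) }.
J = {x: 0.0 for x in S}
policy = []                        # will become (mu_0, ..., mu_{N-1})
for k in range(N):
    mu = {x: min(U[x], key=lambda u: g(x, u) + J[f(x, u)]) for x in S}
    J = {x: g(x, mu[x]) + J[f(x, mu[x])] for x in S}
    # mu attains the minimum with k+1 stages to go, i.e. at forward stage N-k-1
    policy = [mu] + policy

# J[x] now equals J_N(x) = J*(x).  Evaluating the constructed policy forward
# (equations (2)-(3)) gives the same value, which is what the recursion (6)-(7)
# computes for this policy.
def evaluate(pi, x0):
    total, x = 0.0, x0
    for mu in pi:
        total += g(x, mu[x])
        x = f(x, mu[x])
    return total

assert all(abs(evaluate(policy, x0) - J[x0]) < 1e-9 for x0 in S)
```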
Now it is possible to formulate the previous problem as well as to describe the DP algorithm (4)-(5) by means of the mapping H given by

H(x, u, J) = g(x, u) + J[f(x, u)].    (8)

Let us define the mapping T by

T(J)(x) = inf_{u ∈ U(x)} H(x, u, J)    (9)

and, for any function μ: S → C, define the mapping T_μ by

T_μ(J)(x) = H[x, μ(x), J].    (10)

Both T and T_μ map the set of real-valued (or perhaps extended real-valued) functions on S into itself. Then in view of (6)-(7), we may write the cost functional J_{N,π}(x_0) of (2) as

J_{N,π}(x_0) = (T_{μ_0} T_{μ_1} ⋯ T_{μ_{N-1}})(J_0)(x_0),    (11)

where J_0 is the zero function on S [J_0(x) = 0 ∀x ∈ S] and (T_{μ_0} T_{μ_1} ⋯ T_{μ_{N-1}}) denotes the composition of the mappings T_{μ_0}, T_{μ_1}, ..., T_{μ_{N-1}}. Similarly the DP algorithm (4)-(5) may be described by

J_{k+1} = T(J_k),    k = 0, ..., N - 1,    (12)
and we have

J* = T^N(J_0),

where T^N is the composition of T with itself N times. Thus both the problem and the major algorithmic procedure relating to it can be expressed in terms of the mappings T and T_μ.

One may also consider an infinite horizon version of the problem whereby we seek a sequence π = (μ_0, μ_1, ...) that minimizes

J_π(x_0) = lim_{N→∞} Σ_{k=0}^{N-1} g[x_k, μ_k(x_k)] = lim_{N→∞} (T_{μ_0} T_{μ_1} ⋯ T_{μ_{N-1}})(J_0)(x_0)    (13)

subject to the system equation constraint (3). In this case one needs, of course, to make assumptions which ensure that the limit in (13) is well defined for each π and x_0. Under appropriate assumptions, the optimal cost function defined by

J*(x) = inf_π J_π(x)

can be shown to satisfy Bellman's functional equation given by

J*(x) = inf_{u ∈ U(x)} {g(x, u) + J*[f(x, u)]}.

Equivalently,

J*(x) = T(J*)(x),

i.e., J* is a fixed point of the mapping T. Most of the infinite horizon results of analytical interest center around this equation. Other questions relate to the existence and characterization of optimal policies or nearly optimal policies and to the validity of the equation

J*(x) = lim_{N→∞} T^N(J_0)(x)    ∀x ∈ S,    (14)
which says that the DP algorithm yields in the limit the optimal cost function for the problem. Again the problem and the basic analytical and computational results relating to it can be expressed in terms of the mappings T and T_μ.

The deterministic optimal control problem just described is representative of a plethora of sequential optimization problems of practical interest which may be formulated in terms of mappings similar to the mapping H of (8). As shall be described in Chapter 2, one can formulate in the same manner stochastic optimal control problems, minimax control problems, and others. The objective of Part I is to provide a common analytical
framework for all these problems and derive in a broadly applicable form all the results which draw their validity exclusively from the basic sequential structure of the decision-making process. This is accomplished by taking as a starting
point a mapping H such as the one of (8) and deriving all major analytical and computational results within a generalized setting. The results are subsequently specialized to five particular models described in Section 2.3: deterministic optimal control problems, three types of stochastic optimal control problems (countable disturbance space, outer integral formulation, and multiplicative cost functional), and minimax control problems.
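Before turning to the measurability questions of the next section, the mapping formulation lends itself to a compact computational illustration. The sketch below is not from the original text: it assumes a discount factor α < 1 so that the infinite horizon limit in (13) is well defined, uses invented model data, and mirrors H, T, and T_μ of (8)-(10), the composition (11)-(12), and the successive approximation (14).

```python
# Illustrative sketch of the mappings H, T, T_mu of (8)-(10), the composition
# (11)-(12), and the fixed point property J* = T(J*) approximated via (14).
# A discount factor alpha < 1 is an added assumption; model data are invented.

S = [0, 1, 2]
U = {x: [0, 1] for x in S}
alpha = 0.9

f = lambda x, u: (x + u) % 3
g = lambda x, u: (x - 1) ** 2 + u

def H(x, u, J):                    # H(x,u,J) = g(x,u) + alpha * J(f(x,u)), cf. (8)
    return g(x, u) + alpha * J[f(x, u)]

def T(J):                          # (T J)(x) = min_{u in U(x)} H(x,u,J),  (9)
    return {x: min(H(x, u, J) for u in U[x]) for x in S}

def T_mu(mu, J):                   # (T_mu J)(x) = H(x, mu(x), J),  (10)
    return {x: H(x, mu[x], J) for x in S}

J0 = {x: 0.0 for x in S}           # the zero function on S

# (11): the N-stage cost of a policy is T_mu0 ... T_mu(N-1) applied to J0,
# composed from the right; (12): the DP algorithm is J_{k+1} = T(J_k).
N = 4
pi = [{x: 0 for x in S} for _ in range(N)]
J_pi, J_N = J0, J0
for mu in reversed(pi):
    J_pi = T_mu(mu, J_pi)
for _ in range(N):
    J_N = T(J_N)
assert all(J_N[x] <= J_pi[x] + 1e-9 for x in S)    # optimal cost <= any policy's cost

# (14): iterating J_{k+1} = T(J_k) approximates the fixed point J* = T(J*).
J = J0
for _ in range(500):
    J = T(J)
assert max(abs(T(J)[x] - J[x]) for x in S) < 1e-8  # approximate Bellman equation
```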
1.2 Discrete-Time Stochastic Optimal Control Problems-Measurability Questions
The theory of Part I is not adequate by itself to provide a complete analysis of stochastic optimal control problems, the treatment of which is the major objective of this book. The reason is that when such problems are formulated over uncountable probability spaces, nontrivial measurability restrictions must be placed on the admissible policies unless we resort to an outer integration framework.

A discrete-time stochastic optimal control problem is obtained from the deterministic problem of the previous section when the system includes a stochastic disturbance w_k in its description. Thus (1) is replaced by

x_{k+1} = f(x_k, u_k, w_k),    k = 0, ..., N - 1,    (15)

and the cost per stage becomes g(x_k, u_k, w_k). The disturbance w_k is a member of some probability space (W, ℱ) and has distribution p(dw_k | x_k, u_k). Thus the control variable u_k exercises influence over the transition from x_k to x_{k+1} in two places, once in the system equation (15) and again as a parameter in the distribution of the disturbance w_k. Likewise, the control u_k influences the cost at two points. This is a redundancy in the system equation model given above which will be eliminated in Chapter 8 when we introduce the transition kernel and reduced one-stage cost function and thereby convert to a model frequently adopted in the statistics literature (see, e.g., Blackwell [B9]; Strauch [S14]). The system equation model is more common in the engineering literature and generally more convenient in applications, so we are taking it as our starting point. The transition kernel and reduced one-stage cost function are technical devices which eliminate the disturbance space (W, ℱ) from consideration and make the model more suitable for analysis. We take pains initially to point out how properties of the original system carry over into properties of the transition kernel and reduced one-stage cost function (see the remarks following Definitions 8.1 and 8.7).
Stochastic optimal control is distinguished from its deterministic counterpart by the concern with when information becomes available. In deterministic control, to each initial state and policy there corresponds a sequence of control variables (u_0, ..., u_{N-1}) which can be specified beforehand, and the resulting states of the system are determined by (1). In contrast, if the control variables are specified beforehand for a stochastic system, the decisionmaker may realize in the course of the system evolution that unexpected states have appeared and the specified control variables are no longer appropriate. Thus it is essential to consider policies π = (μ_0, ..., μ_{N-1}), where μ_k is a function from history to control. If x_0 is the initial state, u_0 = μ_0(x_0) is taken to be the first control. If the states and controls (x_0, u_0, ..., u_{k-1}, x_k) have occurred, the control

u_k = μ_k(x_0, u_0, ..., u_{k-1}, x_k)    (16)

is chosen. We require that the control constraint

μ_k(x_0, u_0, ..., u_{k-1}, x_k) ∈ U(x_k)

be satisfied for every (x_0, u_0, ..., u_{k-1}, x_k) and k. In this way the decisionmaker utilizes the full information available to him at each stage. Rather than choosing a sequence of control variables, the decisionmaker attempts to choose a policy which minimizes the total expected cost of the system operation. Actually, we will show that for most cases it is sufficient to consider only Markov policies, those for which the corresponding controls u_k depend only on the current state x_k rather than the entire history (x_0, u_0, ..., u_{k-1}, x_k). This is the type of policy encountered in Section 1.1.

The analysis of the stochastic decision model outlined here can be fairly well divided into two categories-structural considerations and measurability considerations. Structural analysis consists of all those results which can be obtained if measurability of all functions and sets arising in the problem is of no real concern; for example, if the model is deterministic or, more generally, if the disturbance space W is countable. In Part I structural results are derived using mappings H, T_μ, and T of the kind considered in the previous section. Measurability analysis consists of showing that the structural results remain valid even when one places nontrivial measurability restrictions on the set of admissible policies. The work in Part II consists primarily of measurability analysis relying heavily on structural results developed in Part I as well as in other sources (e.g., Bertsekas [B4]). One can best illustrate this dichotomy of analysis by the finite horizon DP algorithm considered by Bellman [B1]:

J_0(x) = 0,    (17)

J_{k+1}(x) = inf_{u ∈ U(x)} E{g(x, u, w) + J_k[f(x, u, w)]},    k = 0, ..., N - 1,    (18)

where the expectation is with respect to p(dw | x, u).
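When the disturbance space W is countable, and in particular finite, the expectation in (18) is a finite sum and the algorithm can be carried out exactly as in the deterministic case. The following minimal sketch illustrates this; the system, cost, disturbance distribution, and horizon are invented for the example.

```python
# Illustrative sketch of the stochastic DP algorithm (17)-(18) for a finite
# disturbance space, so the expectation is a finite weighted sum.
# All model data (f, g, p, horizon) are invented for illustration.

S = [0, 1, 2]
U = {x: [0, 1] for x in S}
W = [0, 1]                          # finite disturbance space
N = 3

def p(w, x, u):                     # p(w | x, u), the disturbance distribution
    return 0.75 if w == u else 0.25

def f(x, u, w):                     # system equation x_{k+1} = f(x_k, u_k, w_k)
    return (x + u + w) % 3

def g(x, u, w):                     # cost per stage
    return (x - 1) ** 2 + u + 0.5 * w

def expectation(x, u, J):           # E{ g(x,u,w) + J(f(x,u,w)) } w.r.t. p(dw | x,u)
    return sum(p(w, x, u) * (g(x, u, w) + J[f(x, u, w)]) for w in W)

J = {x: 0.0 for x in S}             # (17):  J_0 = 0
policy = []
for k in range(N):                  # (18):  J_{k+1}(x) = min_{u in U(x)} E{ ... }
    mu = {x: min(U[x], key=lambda u: expectation(x, u, J)) for x in S}
    J = {x: expectation(x, mu[x], J) for x in S}
    policy = [mu] + policy

# J[x] is the optimal expected N-stage cost from x; policy collects the
# minimizing (Markov) selectors, as discussed below (18) in the text.
```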
This is the stochastic counterpart of the deterministic DP algorithm (4)-(5). It is reasonable to expect that J_k(x) is the optimal cost of operating the system over k stages when the initial state is x, and that if μ_k(x) achieves the infimum in (18) for every x and k = 0, ..., N - 1, then π = (μ_0, ..., μ_{N-1}) is an optimal policy for every initial state x. If there are no measurability considerations, this is indeed the case under very mild assumptions, as shall be shown in Chapter 3. Yet it is a major task to properly formulate the stochastic control problem and demonstrate that the DP algorithm (17)-(18) makes sense in a measure-theoretic framework. One of the difficulties lies in showing that the expression in curly braces in (18) is measurable in some sense. Thus we must establish measurability properties for the functions J_k. Related to this is the need to balance the measurability of policies (necessary so the expected cost corresponding to a policy can be defined) against a desire to be able to select at or near the infimum in (18). We illustrate these difficulties by means of a simple two-stage example.

TWO-STAGE PROBLEM
Consider the following sequence of events:
(a) An initial state x_0 ∈ R is generated (R is the real line).
(b) Knowing x_0, the decisionmaker selects a control u_0 ∈ R.
(c) A state x_1 ∈ R is generated according to a known probability measure p(dx_1 | x_0, u_0) on ℬ_R, the Borel subsets of R, depending on x_0, u_0. [In terms of our earlier model, this corresponds to a system equation of the form x_1 = w_0 and p(dw_0 | x_0, u_0) = p(dx_1 | x_0, u_0).]
(d) Knowing x_1, the decisionmaker selects a control u_1 ∈ R.

Given p(dx_1 | x_0, u_0) for every (x_0, u_0) ∈ R² and a function g: R² → R, the problem is to find a policy π = (μ_0, μ_1) consisting of two functions μ_0: R → R and μ_1: R → R that minimizes

J_π(x_0) = ∫ g[x_1, μ_1(x_1)] p(dx_1 | x_0, μ_0(x_0)).    (19)
We temporarily postpone a discussion of restrictions (if any) that must be placed on g, μ_0, and μ_1 in order for the integral in (19) to be well defined. In terms of our earlier model, the function g gives the cost for the second stage while we assume no cost for the first stage. The DP algorithm associated with the problem is

J_1(x_1) = inf_{u_1} g(x_1, u_1),    (20)

J_2(x_0) = inf_{u_0} ∫ J_1(x_1) p(dx_1 | x_0, u_0).    (21)
The results one expects to be true are:

R.1 There holds

J_2(x_0) = inf_π J_π(x_0)    for every x_0 ∈ R.

R.2 Given ε > 0, there is an (everywhere) ε-optimal policy, i.e., a policy π_ε such that

J_{π_ε}(x_0) ≤ inf_π J_π(x_0) + ε    for every x_0 ∈ R.

R.3 If the infimum in (20) and (21) is attained for all x_1 ∈ R and x_0 ∈ R, then there exists a policy that is optimal for every x_0 ∈ R.

R.4 If μ_1*(x_1) and μ_0*(x_0), respectively, attain the infimum in (20) and (21) for all x_1 ∈ R and x_0 ∈ R, then π* = (μ_0*, μ_1*) is optimal for every x_0 ∈ R, i.e.,

J_{π*}(x_0) = inf_π J_π(x_0)    for every x_0 ∈ R.
A formal derivation of R.1 consists of the following steps:

inf_π J_π(x_0) = inf_{μ_0} inf_{μ_1} ∫ g[x_1, μ_1(x_1)] p(dx_1 | x_0, μ_0(x_0))    (22a)
             = inf_{μ_0} ∫ {inf_{u_1} g(x_1, u_1)} p(dx_1 | x_0, μ_0(x_0))    (22b)
             = inf_{μ_0} ∫ J_1(x_1) p(dx_1 | x_0, μ_0(x_0))
             = inf_{u_0} ∫ J_1(x_1) p(dx_1 | x_0, u_0) = J_2(x_0).
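The interchange of infimization and integration in (22b) is easy to check numerically when p(dx_1 | x_0, u_0) has finite support and the controls are restricted to a finite grid, so that the infima become minima. The sketch below is illustrative only; all data in it are invented, and it verifies that brute-force minimization over policies agrees with J_2 computed from (20)-(21).

```python
# Numerical illustration of R.1 (inf over policies equals J_2 of (20)-(21))
# in the countable-support case.  All data below are invented and the control
# sets are discretized so that the infima become minima over finite sets.

from itertools import product

X0 = [0.0, 1.0]                     # initial states considered
X1 = [-1.0, 0.0, 1.0]               # support of p(dx1 | x0, u0), kept fixed here
U0 = U1 = [-1.0, 0.0, 1.0]          # discretized control sets

def p(x1, x0, u0):                  # a probability weight on the finite support X1
    w = {x: 1.0 + (x - u0) ** 2 for x in X1}
    return (1.0 / w[x1]) / sum(1.0 / w[x] for x in X1)

def g(x1, u1):                      # second-stage cost
    return (x1 - u1) ** 2 + 0.1 * abs(u1)

# DP algorithm (20)-(21).
J1 = {x1: min(g(x1, u1) for u1 in U1) for x1 in X1}
J2 = {x0: min(sum(p(x1, x0, u0) * J1[x1] for x1 in X1) for u0 in U0) for x0 in X0}

# Brute force over all policies pi = (mu0, mu1): mu0 picks u0, mu1 maps X1 -> U1.
def cost(x0, u0, mu1):
    return sum(p(x1, x0, u0) * g(x1, mu1[x1]) for x1 in X1)

for x0 in X0:
    best = min(cost(x0, u0, dict(zip(X1, choice)))
               for u0 in U0
               for choice in product(U1, repeat=len(X1)))
    assert abs(best - J2[x0]) < 1e-12   # inf over policies equals J_2(x0)
```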
Similar formal derivations can be given for R.2, R.3, and R.4. The following points need to be justified in order to make the preceding derivation meaningful and mathematically rigorous.

(a) In (22a), g and μ_1 must be such that g[x_1, μ_1(x_1)] can be integrated in a well-defined manner.
(b) In (22b), the interchange of infimization and integration must be legitimate. Furthermore, g must be such that J_1(x_1) [= inf_{u_1} g(x_1, u_1)] can be integrated in a well-defined manner.

We first observe that if, for each (x_0, u_0), p(dx_1 | x_0, u_0) has countable support, i.e., is concentrated on a countable number of points, then integration in (22a) and (22b) reduces to infinite summation. Thus there is no need to impose measurability restrictions on g, μ_0, and μ_1, and the interchange of infimization and integration in (22b) is justified in view of the assumption
inf_{u_1} g(x_1, u_1) > -∞    for all x_1 ∈ R.

(For ε > 0, take μ_ε: R → R such that

g[x_1, μ_ε(x_1)] ≤ inf_{u_1} g(x_1, u_1) + ε.    (23)

Then

inf_{μ_1} ∫ g[x_1, μ_1(x_1)] p(dx_1 | x_0, μ_0(x_0)) ≤ ∫ g[x_1, μ_ε(x_1)] p(dx_1 | x_0, μ_0(x_0))
                                                  ≤ ∫ inf_{u_1} g(x_1, u_1) p(dx_1 | x_0, μ_0(x_0)) + ε.    (24)

Since ε > 0 is arbitrary, it follows that

inf_{μ_1} ∫ g[x_1, μ_1(x_1)] p(dx_1 | x_0, μ_0(x_0)) ≤ ∫ {inf_{u_1} g(x_1, u_1)} p(dx_1 | x_0, μ_0(x_0)).
The reverse inequality is clear, and the result follows.) A similar argument proves R.2, while R.3 and R.4 are trivial in view of the fact that there are no measurability restrictions on μ_0 and μ_1.

If p(dx_1 | x_0, u_0) does not have countable support, there are two main approaches. The first is to expand the notion of integration, and the second is to restrict g, μ_0, and μ_1 to be appropriately measurable. Expanding the notion of integration can be achieved by interpreting the integrals in (22a) and (22b) as outer integrals (see Appendix A). Since the outer integral can be defined for any function, measurable or not, there is no need to require that g, μ_0, and μ_1 are measurable in any sense. As a result, (22a) and (22b) make sense and an argument such as the one beginning with (23) goes through. This approach is discussed in detail in Part I, where we show that all the basic results for finite and infinite horizon problems of perfect state information carry through within an outer integration framework. However, there are inherent limitations in this approach centering around the pathologies of outer integration. Difficulties also occur in the treatment of imperfect information problems using sufficient statistics.

The major alternative approach was initiated in more general form by Blackwell [B9] in 1965. Here we assume at the outset that g is Borel-measurable, and furthermore, for each B ∈ ℬ_R (ℬ_R is the Borel σ-algebra on R), the function p(B | x_0, u_0) is Borel-measurable in (x_0, u_0). In the initial treatment of the problem, the functions μ_0 and μ_1 were restricted to be Borel-measurable. With these assumptions, g[x_1, μ_1(x_1)] is Borel-measurable in x_1 when μ_1 is Borel-measurable, and the integral in (22a) is well defined. A major difficulty occurs in (22b) since it is not necessarily true that J_1(x_1) = inf_{u_1} g(x_1, u_1) is Borel-measurable, even if g is. The reason can be traced to the fact that the orthogonal projection of a Borel set in R² on one
of the axes need not be Borel-measurable (see Section 7.6). Since we have, for c ∈ R,

{x_1 | J_1(x_1) < c} = proj_{x_1} {(x_1, u_1) | g(x_1, u_1) < c},

where proj_{x_1} denotes projection on the x_1-axis, it can be seen that {x_1 | J_1(x_1) < c} need not be Borel, even though {(x_1, u_1) | g(x_1, u_1) < c} is. The difficulty can be overcome in part by showing that J_1 is a lower semianalytic and hence also universally measurable function (see Section 7.7). Thus J_1 can be integrated with respect to any probability measure on ℬ_R. Another difficulty stems from the fact that one cannot in general find a Borel-measurable ε-optimal selector μ_ε satisfying (23), although a weaker result is available whereby, given a probability measure p on ℬ_R, the existence of a Borel-measurable selector μ_ε satisfying

g[x_1, μ_ε(x_1)] ≤ inf_{u_1} g(x_1, u_1) + ε
for p almost every x_1 ∈ R can be ascertained. This result is sufficient to justify (24) and thus prove result R.1 (J_2 = inf_π J_π). However, results R.2 and R.3 cannot be proved when μ_0 and μ_1 are restricted to be Borel-measurable, except in a weaker form involving the notion of p-optimality (see [S14]; [H4]).

The objective of Part II is to resolve the measurability questions in stochastic optimal control in such a way that almost every result can be proved in a form as strong as its structural counterpart. This is accomplished by enlarging the set of admissible policies to include all universally measurable policies. In particular, we show the existence of policies within this class that are optimal or nearly optimal for every initial state.

A great many authors have dealt with measurability in stochastic optimal control theory. We describe three approaches taken and how their aims and results relate to our own. A fourth approach, due to Blackwell et al. [B12] and based on analytically measurable policies, is discussed in the next section and in Section 11.2.

I The General Model
If the state, control, and disturbance spaces are arbitrary measure spaces, very little can be done. One attempt in this direction is the work of Striebel [S16] involving p-essential infima. Geared toward giving meaning to the dynamic programming algorithm, this work replaces (18) by

J_{k+1}(x) = p_k-essential inf_μ E{g[x, μ(x), w] + J_k[f(x, μ(x), w)]},    k = 0, ..., N - 1,    (25)

where the p_k-essential infimum is over all measurable μ from state space S to control space C satisfying any constraints which may have been imposed. The functions J_k are measurable, and if the probability measures p_0, ..., p_{N-1} are properly chosen and the so-called countable ε-lattice property holds, this modified dynamic programming algorithm generates the optimal cost function and can be used to obtain policies which are optimal or nearly optimal for p_{N-1} almost all initial states. The selection of the proper probability measures p_0, ..., p_{N-1}, however, is at least as difficult as executing the dynamic programming algorithm, and the verification of the countable ε-lattice property is equivalent to proving the existence of an ε-optimal policy.

II The Semicontinuous Models
Considerable attention has been directed toward models in which the state and control spaces are Borel spaces or even Rⁿ and the reduced cost function

h(x, u) = ∫ g(x, u, w) p(dw | x, u)

has semicontinuity and/or convexity properties. A companion assumption is that the mapping x → U(x) is a measurable closed-valued multifunction [R2]. In the latter case there exists a Borel-measurable selector μ: S → C such that μ(x) ∈ U(x) for every state x (Kuratowski and Ryll-Nardzewski [K5]). This is of course necessary if any Borel-measurable policy is to exist at all. The main fact regarding models of this type is that under various combinations of semicontinuity and compactness assumptions, the functions J_k defined by (17) and (18) are semicontinuous. In addition, it is often possible to show that the infimum in (18) is achieved for every x and k, and there are Borel-measurable selectors μ_0, ..., μ_{N-1} such that μ_k(x) achieves this infimum (see Freedman [F1], Furukawa [F3], Himmelberg et al. [H3], Maitra [M2], Schäl [S3], and the references contained therein). Such a policy (μ_0, ..., μ_{N-1}) is optimal, and the existence of this optimal policy is an additional benefit of imposing topological conditions to ensure that the problem is well defined. In Section 9.5 we show that lower semicontinuity and compactness conditions guarantee convergence of the dynamic programming algorithm over an infinite horizon to the optimal cost function, and that this algorithm can be used to generate an optimal stationary policy.

Continuity and compactness assumptions are integral to much of the work that has been done in stochastic programming. This work differs from
our own in both its aims and its framework. First, in the usual stochastic programming model, the controls cannot influence the distribution of future states (see Olsen [O1-O3], Rockafellar and Wets [R3-R4], and the references contained therein). As a result, the model does not include as special cases many important problems such as, for example, the classical linear quadratic stochastic control problem [B4, Section 3.1]. Second, assumptions of convexity, lower semicontinuity, or both are made on the cost function, the model is designed for the Kuratowski-Ryll-Nardzewski selection theorem, and the analysis is carried out in a finite-dimensional Euclidean state space. All of this is for the purpose of overcoming measurability problems. Results are not readily generalizable beyond Euclidean spaces (Rockafellar [R2]). The thrust of the work is toward convex programming type results, i.e., duality and Kuhn-Tucker conditions for optimality, and so a narrow class of problems is considered and powerful results are obtained.

III The Borel Models
The Borel space framework was introduced by Blackwell [B9] and further refined by Strauch, Dynkin, Juskevic, Hinderer, and others. The state and control spaces S and C were assumed to be Borel spaces, and the functions defining the model were assumed to be Borel-measurable. Initial efforts were directed toward proving the existence of "nice" optimal or nearly optimal policies in this framework. Policies were required to be Borel-measurable. For this model it is possible to prove the universal measurability of the optimal cost function and the existence for every ε > 0 and probability measure p on S of a p-ε-optimal policy (Strauch [S14, Theorems 7.1 and 8.1]). A p-ε-optimal policy is one which leads to a cost differing from the optimal cost by less than ε for p almost every initial state. As discussed earlier, even over a finite horizon the optimal cost function need not be Borel-measurable and there need not exist an everywhere ε-optimal policy (Blackwell [B9, Example 2]). The difficulty arises from the inability to choose a Borel-measurable function μ_k: S → C which nearly achieves the infimum in (18) uniformly in x. The nonexistence of such a function interferes with the construction of optimal policies via the dynamic programming algorithm (17) and (18), since one must first determine at each stage the measure p with respect to which it is satisfactory to nearly achieve the infimum in (18) for p almost every x. This is essentially the same problem encountered with (25). The difficulties in constructing nearly optimal policies over an infinite horizon are more acute. Furthermore, from an applications point of view, a p-ε-optimal policy, even if it can be constructed, is a much less appealing object than an everywhere ε-optimal policy, since in many situations the distribution p is unknown or may change when the system is
operated repetitively, in which case a new p-ε-optimal policy must be computed. In our formulation, the class of admissible policies in the Borel model is enlarged to include all universally measurable policies. We show in Part II that this class is sufficiently rich to ensure that there exist everywhere ε-optimal policies and, if the infimum in the DP algorithm (18) is attained for every x and k, then an everywhere optimal policy exists. Thus the notion of p-optimality can be dispensed with. The basic reason why optimal and nearly optimal policies can be found within the class of universally measurable policies may be traced to the selection theorem of Section 7.7. Another advantage of working with the class of universally measurable functions is that this class is closed under certain basic operations such as integration with respect to a universally measurable stochastic kernel and composition.

Our method of proof of infinite horizon results is based on an equivalence of stochastic and deterministic decision models which is worked out in Sections 9.1-9.3. The conversion is carried through only for the infinite horizon model, as it is not necessary for the development in Chapter 8. It is also done only under assumptions (P), (N), or (D) of Definition 9.1, although the models make sense under conditions similar to the (F+) and (F-) assumptions of Section 8.1. The relationship between the stochastic and the deterministic models is utilized extensively in Sections 9.4-9.6, where structural results proved in Part I are applied to the deterministic model and then transferred to the stochastic model. The analysis shows how results for stochastic models with measurability restrictions on the set of admissible policies can be obtained from the general results on abstract dynamic programming models given in Part I and provides the connecting link between the two parts of this work.

1.3 The Present Work Related to the Literature

This section summarizes briefly the contents of each chapter and points out relations with existing literature. During the course of our research, many of our results were reported in various forms (Bertsekas [B3-B5]; Shreve [S7-S8]; Shreve and Bertsekas [S9-S12]). Since the present monograph is the culmination of our joint work, we report particular results as being new even though they may be contained in one or more of the preceding references.

Part I
The objective of Part I is to provide a unifying framework for finite and infinite horizon dynamic programming models. We restrict our attention to
three types of infinite horizon models, which are patterned after the discounted and positive models of Blackwell [B8-B9] and the negative model of Strauch [S14]. It is an open question whether the framework of Part I can be effectively extended to cover other types of infinite horizon models such as the average cost model of Howard [H7] or convergent dynamic programming models of the type considered by Dynkin and Juskevic [D8] and Hordijk [H6].

The problem formulation of Part I is new. The work that is most closely related to our framework is the one by Denardo [D2], who considered an abstract dynamic programming model under contraction assumptions. Most of Denardo's results have been incorporated in slightly modified form in Chapter 4. Denardo's problem formulation is predicated on his contraction assumptions and is thus unsuitable for finite horizon models such as the one in Chapter 3 and infinite horizon models such as the ones in Chapter 5. This fact provided the impetus for our different formulation.

Most of the results of Part I constitute generalizations of results known for specific classes of problems such as, for example, deterministic and stochastic optimal control problems. We make an effort to identify the original sources, even though in some cases this is quite difficult. Some of the results of Part I have not been reported earlier even for a specific class of problems, and they will be indicated as new.

Chapter 2 Here we formulate the basic abstract sequential optimization problem which is the subject of Part I. Several classes of problems of practical interest are described in Section 2.3 and are shown to be special cases of the abstract problem. All these problems have received a great deal of attention in the literature with the exception of the stochastic optimal control model based on outer integration (Section 2.3.3). This model, as well as the results in subsequent chapters relating to it, is new. A stochastic model based on outer integration has also been considered by Denardo [D2], who used a different definition of outer integration. His definition works well under contraction assumptions such as the one in Chapter 4. However, many of the results of Chapters 3 and 5 do not hold if Denardo's definition of outer integral is adopted. By contrast, all the basic results of Part I are valid when specialized to the model of Section 2.3.3.

Chapter 3 This chapter deals with the finite horizon version of our abstract problem. The central results here relate to the validity of the dynamic programming algorithm, i.e., the equation J* = T^N(J_0). The validity of this equation is often accepted without scrutiny in the engineering literature, while in mathematical works it is usually proved under assumptions that are stronger than necessary. While we have been unable to locate an appropriate source, we feel certain that the results of Proposition 3.1 are known
for stochastic optimal control problems. The notion of a sequence of policies exhibiting {ε_n}-dominated convergence to optimality and the corresponding existence result (Proposition 3.2) are new.

Chapter 4 Here we treat the infinite horizon version of our abstract problem under a contraction assumption. The developments in this chapter overlap considerably with Denardo's work [D2]. Our contraction assumption C is only slightly different from the one of Denardo. Propositions 4.1, 4.2, 4.3(a), and 4.3(c) are due to Denardo [D2], while Proposition 4.3(b) has been shown by Blackwell [B9] for stochastic optimal control problems. Proposition 4.4 is new. Related compactness conditions for existence of a stationary optimal policy in stochastic optimal control problems were given by Maitra [M2], Kushner [K6], and Schäl [S5]. Propositions 4.6 and 4.7 improve on corresponding results by Denardo [D2] and McQueen [M3]. The modified policy iteration algorithm and the corresponding convergence result (Proposition 4.9) are new in the form given here. Denardo [D2] gives a somewhat less general form of policy iteration. The idea of policy iteration for deterministic and stochastic optimal control problems dates, of course, to the early days of dynamic programming (Bellman [B1]; Howard [H7]). The mathematical programming formulation of Section 4.3.3 is due to Denardo [D2].

Chapter 5 Here we consider infinite horizon versions of our abstract model patterned after the positive and negative models of Blackwell [B8, B9] and Strauch [S14]. When specialized to stochastic optimal control problems, most of the results of this chapter have either been shown by these authors or can be trivially deduced from their work. The part of Proposition 5.1 dealing with existence of an ε-optimal stationary policy is new, as is the last part of Proposition 5.2. Forms of Propositions 5.3 and 5.5 specialized to certain gambling problems have been shown by Dubins and Savage [D6], whose monograph provided the impetus for much of the subsequent work on dynamic programming. Propositions 5.9-5.11 are new. Results similar to those of Proposition 5.10 have been given by Schäl [S5] for stochastic optimal control problems under semicontinuity and compactness assumptions.

Chapter 6 The analysis in this chapter is new. It is motivated by the fact that the framework and the results of Chapters 2-5 are primarily applicable to problems where measurability issues are of no essential concern. While it is possible to apply the results to problems where policies are subject to measurability restrictions, this can be done only after a fairly elaborate reformulation (see Chapter 9). Here we generalize our framework so that problems in which measurability issues introduce genuine complications can be dealt with directly. However, only a portion of our earlier results carry
through within the generalized framework-primarily those associated with finite horizon models and infinite horizon models under contraction assumptions.

Part II
The objective of Part II is to develop in some detail the discrete-time stochastic optimal control problem (additive cost) in Borel spaces. The measurability questions are addressed explicitly. This model was selected from among the specialized models of Part I because it is often encountered and also because it can serve as a guide in the resolution of measurability difficulties in a great many other decision models. In Chapter 7 we present the relevant topological properties of Borel spaces and their probability measures. In particular, the properties of analytic sets are developed. Chapter 8 treats the finite horizon stochastic optimal control problem, and Chapter 9 is devoted to the infinite horizon version. Chapter 10 deals with the stochastic optimal control problem when only a "noisy" measurement of the state of the system is possible. Various extensions of the theory of Chapters 8 and 9 are given in Chapter 11.
1.3
THE PRESENT WORK RELATED TO THE LITERATURE
17
subset of the real line could be obtained as the nucleus of a Suslin scheme for the closed intervals, and non-Borel sets could be obtained this way as well. He also noted that the analytic subsets of R were just the projections on an axis of the Borel subsets of R 2 • The universal measurability of analytic sets (Corollary 7.42.1)was proved by Lusin and Sierpinski [L3] in 1918. (See also Lusin [L2].) Our proof of this fact is taken from Saks [SI]. We have also taken material on analytic sets from Kuratowski [K2], Dellacherie [D 1], Meyer [M4], Bourbaki [B 13], Parthasarathy [P I], and Bressler and Sion [BI4]. Proposition 7.43 is due to Meyer and Traki [M5], but our proof is original. The proofs given here of Propositions 7.47 and 7.49 are very similar to those found in Blackwell et al. [BI2]. The basic result of Proposition 7.49 is due to Jankov [11], but was also worked out about the same time and published later by von Neumann [NI, Lemma 5, p. 448]. The Jankov-von Neumann result was strengthened by Mackey [MI, Theorem 6.3]' The history of this theorem is related by Wagner [WI, pp. 900-901]. Proposition 7.50(a) is due to Blackwell et al. [BI2]. Proposition 7.50(b) together with its strengthened version Proposition 11.4 generalize a result by Brown and Purves [BI5], who proved existence of a universally measurable ({J for the case where f is Borel measurable. Chapter 8 The finite horizon stochastic optimal control model of Chapter 8 is essentially a finite horizon version of the models considered by Blackwell [B8, B9], Strauch [SI4], Hinderer [H4], Dynkin and Juskevic [D8], Blackwell et al. [BI2], and others. With the exception of [BI2], all these works consider Borel-measurable policies and obtain existence results ofa p-s-optimal nature (see the discussion of the previous section). We allow universally measurable policies and thereby obtain everywhere a-optimal existence results. While in Chapters 8 and 9 we concentrate on proving results that hold everywhere, the previously available results which allow only Borel-measurable policies and hold p almost everywhere can be readily obtained as corollaries. This follows from the following fact, whose proof we sketch shortly: (F)
If X and Yare Borel spaces, Po, P1, . . . is a sequence of probability measures on X, and fl is a universally measurable map from X to Y, then there is a Borel measurable map fl' from X to Y such that fl(X) = fl'(X) for Pk almost every x, k = 0, I, ....
As an example of how this observation can be used to obtain p almost everywhere existence results from ours, consider Proposition 9.19. It states in part that if a > 0 and the discount factor rx is less than one, then an aoptimal nonrandomized stationary policy exists, i.e., a policy tt = (fl, u;. . .j,
1.
18
INTRODUCTION
where /1 is a universally measurable mapping from S to C. Given Po on S, this policy generates a sequence of measures Po, PI" .. on S, where Pk is the distribution of the kth state when the initial state has distribution Po and the policy tt is used. Let u': S ~ C be Borel-measurable and equal to II for Pk almost every x, k = 0,1, .... Let tt' = (/1',/1',... j. Then it can be shown that for Po almost every initial state, the cost corresponding to tt' equals the cost corresponding to tt, so tt' is a po-e-optimal nonrandomized stationary Borel-measurable policy. The existence of such a n' is a new result. This type of argument can be applied to all the existence results of Chapters 8 and 9. We now sketch a proof of (F). Assume first that Y is a Borel subset of [0, 1]. Then for r E [0, 1], r rational, the set U(r) = {xl/1(x) ~ r)
is universally measurable. For every k; let pte U(r)] be the outer measure of U(r) with respect to Pk and let B k l , B k 2 , . . . be a decreasing sequence of Borel sets containing U(r) such that pteU(r)] =
Let B(r) =
nf= nfo I
1
p{l~\
BkjJ
B kj · Then
pte U(r)]
= Pk[B(r)],
k
=
0,1, ... ,
and the argument of Lemma 7.27 applies. If Y is an arbitrary Borel space, it is Borel isomorphic to a Borel subset of [0,1] (Corollary 7.16.1), and (F) follows. Proposition 8.1 is due to Strauch [SI4], and Proposition 8.2 is contained in Theorem 14.4 of Hinderer [H4]. Example 8.1 is taken from Blackwell [B9]. Proposition 8.3 is new, the strongest previous result along these lines being the existence of an analytically measurable s-optimal policy when the one-stage cost function is nonpositive [BI2]. Propositions 8.4 and 8.5 are new, as are the corollaries to Proposition 8.5. Lower semicontinuous models have received much attention in the literature (Maitra [M2]; Furukawa [F3]; Schal [S3-S5]; Freedman [FI]; Himmelberg et al. [H3]). Our lower semicontinuous model differs somewhat from those in the literature, primarily in the form of the control constraint. Proposition 8.6 is closely related to the analysis in several of the previously mentioned references. Proposition 8.7 is due to Freedman [Fl]. Chapter 9 Example 9.1 is a modification of Example 6.1 of Strauch [SI4], and Proposition 9.1 is taken from Strauch [SI4]. The conversion of the stochastic optimal control problem to the deterministic one was suggested
1.3
THE PRESENT WORK RELATED TO THE LITERATURE
19
by Witsenhausen [W3J in a different context and carried out systematically for the first time here. This results in a simple proof of the lower semianalyticity of the infinite horizon optimal cost function (cf. Corollary 9.4.1 and Strauch [S14, Theorem 7.1J). Propositions 9.8 and 9.9 are due to Strauch [S14J, as are the (D) and (N) parts of Proposition 9.10. The (P) part of Proposition 9.10 is new. Proposition 9.12 appears as Theorem 5.2.2 of Schal [S5J, but Corollary 9.12.1 is new. Proposition 9.14 is a special case of Theorem 14.5 of Hinderer [H4]. Propositions 9.15-9.17 and the corollaries to Proposition 9.17 are new, although Corollary 9.17.2 is very close to Theorem 13.3 of ScMI [S5]. Propositions 9.18-9.20 are new. Proposition 9.21 is an infinite horizon version of a finite horizon result due to Freedman [F1 J, except that the nonrandomized s-optimal policy Freedman constructs may not be semi-Markov. Chapter 10 The use of the conditional distribution of the state given the available information as a basis for controlling systems with imperfect state information has been explored by several authors under various assumptions (see, for example, Astrom [A2J, Striebel [S15J, and Sawaragi and Yoshikawa [S2J). The treatment of imperfect state information models with uncountable Borel state and action spaces, however, requires the existence of a regular conditional distribution with a measurable dependence on a parameter (Proposition 7.27), and this result is quite recent (Rhenius [Rl J; Juskevic [13J; Striebel [S16J). Chapter 10 is related to Chapter 3 of Striebel [S16J in that the general concept of a statistic sufficient for control is defined. We use such a statistic to construct a perfect state information model which is equivalent in the sense of Propositions 10.2 and 10.3 to the original imperfect state information model. From this equivalence the validity of the dynamic programming algorithm and the existence of s-optimal policies under the mild conditions of Chapters 8 and 9 follow. Striebel justifies use of a statistic sufficient for control by showing that under a very strong hypothesis [S16, Theorem 5.5.1J the dynamic programming algorithm is valid and an s-optimal policy can be based on the sufficient statistic. The strong hypothesis arises from the need to specify the null sets in the range spaces of the statistic in such a way that this specification is independent of the policy employed. This need results from the inability to deal with the pointwise partial infima of multivariate functions without the machinery of universally measurable policies and lower semianalytic functions. Like Striebel, we show that the conditional distributions of the states based on the available information constitute a statistic sufficient for control (Proposition 10.5), as do the vectors of available information themselves (Proposition 10.6). The treatments of Rhenius [RIJ and Juskevic [13J are like our own in that perfect state information models which are equivalent to the original
one are defined. In his perfect state information model, Rhenius bases control on the observations and conditional distributions of the states, i.e., these objects are the states of his perfect state information model. It is necessary in Rhenius' framework for the controller to know the most recent observation, since this tells him which controls are admissible. We show in Proposition 10.5 that if there are no control constraints, then there is nothing to be gained by remembering the observations. In the model of Juškevič [J3], there are no control constraints and control is based on the past controls and conditional distributions. In this case, ε-optimal control is possible without reference to the past controls (Propositions 10.5, 8.3, 9.19, and 9.20), so our formulation is somewhat simpler and just as effective. Chapter 10 differs from all the previously mentioned works in that simple conditions which guarantee the existence of a statistic sufficient for control are given, and once this existence is established, all the results of Chapters 8 and 9 can be brought to bear on the imperfect state information model.

Chapter 11  The use in Section 11.1 of limit measurability in dynamic programming is new. In particular, Proposition 11.3 is new, and as discussed earlier in regard to Proposition 7.50(b), a result by Brown and Purves [B15] is generalized in Proposition 11.4. Analytically measurable policies were introduced by Blackwell et al. [B12], whose work is referenced in Section 11.2. Borel space models with multiplicative cost fall within the framework of Furukawa and Iwamoto [F4-F5], and in [F5] the dynamic programming algorithm and a characterization of uniformly N-stage optimal policies are given. The remainder of Proposition 11.7 is new.

Appendix A  Outer integration has been used by several authors, but we have been unable to find a systematic development.

Appendix B  Proposition B.6 was first reported by Suslin [S17], but the proof given here is taken from Kuratowski [K2, Section 38VI]. According to Kuratowski and Mostowski [K4, p. 455], the limit σ-algebra was introduced by Lusin, who called its members the "C-sets." A detailed discussion of the σ-algebra was given by Selivanovskij [S6] in 1928. Propositions B.9 and B.10 are fairly well known among set theorists, but we have been unable to find an accessible treatment. Proposition B.11 is new. Cenzer and Mauldin [C1] have also shown independently that the limit σ-algebra is closed under composition of functions, which is part of the result of Proposition B.11. Proposition B.12 is new. It seems plausible that there are an infinity of distinct σ-algebras between the limit σ-algebra and the universal σ-algebra that are suitable for dynamic programming. One promising method of constructing such σ-algebras involves the R-operator of descriptive set theory (see Kantorovitch and
Livenson [K1]). In a recent paper [B11], Blackwell has employed a different method to define the "Borel-programmable" σ-algebra and has shown it to have many of the same properties we establish in Appendix B for the limit σ-algebra. It is not known, however, whether the Borel-programmable σ-algebra satisfies a condition like Proposition B.12 and is thereby suitable for dynamic programming. It is easily seen that the limit σ-algebra is contained in Blackwell's Borel-programmable σ-algebra, but whether the two coincide is also unknown.

Appendix C  A detailed discussion of the exponential topology on the set of closed subsets of a topological space can be found in Kuratowski [K2-K3]. Properties of semicontinuous (K) functions are also proved there, primarily in Section 43 of [K3]. The Hausdorff metric is discussed in Section 38 of [H2].
Part I
Analysis of Dynamic Programming Models
Chapter 2
Monotone Mappings Underlying Dynamic Programming Models†
This chapter formulates the basic abstract sequential optimization problem which is the subject of Part I. It also provides examples of special cases which include wide classes of problems of practical interest.

2.1 Notation and Assumptions
Our usage of mathematical notation is fairly standard. For the reader's convenience we mention here that we use R to denote the real line and R* to denote the extended real line, i.e., R* = R ∪ {−∞, ∞}. The sets (−∞, ∞] = R ∪ {∞} and [−∞, ∞) = R ∪ {−∞} will be written out explicitly. We will assume throughout that R is equipped with the usual topology generated by the open intervals (α, β), α, β ∈ R, and with the (Borel) σ-algebra generated by this topology. Similarly, R* is equipped with the topology generated by the open intervals (α, β), α, β ∈ R, together with the sets (γ, ∞], [−∞, γ), γ ∈ R, and with the σ-algebra generated by this topology. The Cartesian product of sets X₁, X₂, ..., Xₙ is denoted X₁X₂⋯Xₙ.

† Parts I and II can be read independently. The reader may proceed directly to Part II if he so wishes.
The following definitions and conventions will apply throughout Part I.
(1) S and C are two given sets referred to as the state space and control space, respectively.
(2) For each x ∈ S, there is given a nonempty subset U(x) of C referred to as the control constraint set at x.
(3) We denote by M the set of all functions μ: S → C such that μ(x) ∈ U(x) for all x ∈ S. We denote by Π the set of all sequences π = (μ₀, μ₁, ...) such that μ_k ∈ M for all k. Elements of Π are referred to as policies. Elements of Π of the form π = (μ, μ, ...), where μ ∈ M, are referred to as stationary policies.
(4) We denote by:
F  the set of all extended real-valued functions J: S → R*;
B  the Banach space of all bounded real-valued functions J: S → R with the supremum norm ‖·‖ defined by
‖J‖ = sup_{x∈S} |J(x)|  ∀J ∈ B.
(5) For all J, J′ ∈ F we write
J = J′  if J(x) = J′(x)  ∀x ∈ S,
J ≤ J′  if J(x) ≤ J′(x)  ∀x ∈ S.
For all J ∈ F and ε ∈ R, we denote by J + ε the function taking the value J(x) + ε at each x ∈ S, i.e.,
(J + ε)(x) = J(x) + ε  ∀x ∈ S.
(6) Throughout Part I the analysis is carried out within the set of extended real numbers R*. We adopt the usual conventions regarding ordering, addition, and multiplication in R*, except that we take
∞ − ∞ = −∞ + ∞ = ∞,
and we take the product of zero and infinity to be zero. In this way the sum and the product of any two extended real numbers is well defined. Division by zero or ∞ does not appear in our analysis. In particular, we adopt the following rules in calculations involving ∞ and −∞:
α + ∞ = ∞ + α = ∞  for −∞ < α ≤ ∞,
α − ∞ = −∞ + α = −∞  for −∞ ≤ α < ∞;
α∞ = ∞α = ∞,  α(−∞) = (−∞)α = −∞  for 0 < α ≤ ∞;
α∞ = ∞α = −∞,  α(−∞) = (−∞)α = ∞  for −∞ ≤ α < 0;
0∞ = ∞0 = 0 = 0(−∞) = (−∞)0,  −(−∞) = ∞;
sup ∅ = −∞,  inf ∅ = +∞,  where ∅ is the empty set.
Under these rules the following laws of arithmetic are still valid:
α₁ + α₂ = α₂ + α₁,  (α₁ + α₂) + α₃ = α₁ + (α₂ + α₃),
α₁α₂ = α₂α₁,  (α₁α₂)α₃ = α₁(α₂α₃).
We also have
α(α₁ + α₂) = αα₁ + αα₂
if either α ≥ 0 or else (α₁ + α₂) is not of the form +∞ − ∞.
(7) For any sequence {J_k} with J_k ∈ F for all k, we denote by lim_{k→∞} J_k the pointwise limit of {J_k} (assuming it is well defined as an extended real-valued function) and by lim sup_{k→∞} J_k (lim inf_{k→∞} J_k) the pointwise limit superior (inferior) of {J_k}. For any collection {J_α | α ∈ A} ⊂ F parameterized by the elements of a set A, we denote by inf_{α∈A} J_α the function taking the value inf_{α∈A} J_α(x) at each x ∈ S.
The Basic Mapping

We are given a function H which maps SCF (the Cartesian product of S, C, and F) into R*, and we define for each μ ∈ M the mapping T_μ: F → F by
T_μ(J)(x) = H[x, μ(x), J]  ∀x ∈ S.  (1)
We define also the mapping T: F → F by
T(J)(x) = inf_{u∈U(x)} H(x, u, J)  ∀x ∈ S.  (2)
We denote by T^k, k = 1, 2, ..., the composition of T with itself k times. For convenience we also define T⁰(J) = J for all J ∈ F. For any π = (μ₀, μ₁, ...) ∈ Π we denote by (T_{μ₀}T_{μ₁}⋯T_{μ_k}) the composition of the mappings T_{μ₀}, ..., T_{μ_k}, k = 0, 1, .... The following assumption will be in effect throughout Part I.

Monotonicity Assumption
For every x ∈ S, u ∈ U(x), J, J′ ∈ F, we have
H(x, u, J) ≤ H(x, u, J′)  if J ≤ J′.  (3)
The monotonicity assumption implies the following relations:
J ≤ J′  ⟹  T(J) ≤ T(J′)  ∀J, J′ ∈ F,
J ≤ J′  ⟹  T_μ(J) ≤ T_μ(J′)  ∀J, J′ ∈ F, μ ∈ M.
These relations in turn imply the following facts for all J ∈ F:
J ≤ T(J)  ⟹  T^k(J) ≤ T^{k+1}(J),  k = 0, 1, ...,
J ≥ T(J)  ⟹  T^k(J) ≥ T^{k+1}(J),  k = 0, 1, ...,
J ≤ T_μ(J) ∀μ ∈ M  ⟹  (T_{μ₀}⋯T_{μ_k})(J) ≤ (T_{μ₀}⋯T_{μ_{k+1}})(J),  k = 0, 1, ...,  π = (μ₀, μ₁, ...) ∈ Π,
J ≥ T_μ(J) ∀μ ∈ M  ⟹  (T_{μ₀}⋯T_{μ_k})(J) ≥ (T_{μ₀}⋯T_{μ_{k+1}})(J),  k = 0, 1, ...,  π = (μ₀, μ₁, ...) ∈ Π.
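As a concrete, purely illustrative companion to these definitions (not part of the original text), the following Python sketch sets up a tiny finite model and implements the mappings T_μ and T of (1) and (2) directly; the particular state space, constraint sets, functions f and g, and discount factor are hypothetical choices of ours, used only to exercise the monotonicity relation T(J) ≤ T(J′) for J ≤ J′.

# Illustrative sketch (not from the book): the mappings T_mu and T of (1)-(2)
# for a tiny finite model with H(x, u, J) = g(x, u) + alpha * J[f(x, u)].
S = [0, 1, 2]                      # state space (hypothetical)
U = {0: [0, 1], 1: [0, 1], 2: [0]} # control constraint sets U(x) (hypothetical)
alpha = 0.9

def f(x, u):                       # system function (hypothetical example)
    return (x + u) % 3

def g(x, u):                       # one-stage cost (hypothetical example)
    return abs(x - 1) + u

def H(x, u, J):                    # the basic mapping H on S x C x F
    return g(x, u) + alpha * J[f(x, u)]

def T_mu(mu, J):                   # (T_mu J)(x) = H(x, mu(x), J)
    return {x: H(x, mu[x], J) for x in S}

def T(J):                          # (T J)(x) = inf over u in U(x) of H(x, u, J)
    return {x: min(H(x, u, J) for u in U[x]) for x in S}

# Monotonicity in action: J <= J' implies T(J) <= T(J').
J0 = {x: 0.0 for x in S}
J1 = {x: 1.0 for x in S}
TJ0, TJ1 = T(J0), T(J1)
assert all(TJ0[x] <= TJ1[x] for x in S)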
Another fact that we shall be using frequently is that for each J ∈ F and ε > 0, there exists a μ_ε ∈ M such that
T_{μ_ε}(J)(x) ≤ T(J)(x) + ε  if T(J)(x) > −∞,
T_{μ_ε}(J)(x) ≤ −1/ε  if T(J)(x) = −∞.
In particular, if J is such that T(J)(x) > −∞ for all x ∈ S, then for each ε > 0 there exists a μ_ε ∈ M such that
T_{μ_ε}(J) ≤ T(J) + ε.

2.2 Problem Formulation

We are given a function J₀ ∈ F satisfying
∀x ∈ S,  (4)
and we consider for every policy π = (μ₀, μ₁, ...) ∈ Π and positive integer N the functions J_{N,π} ∈ F and J_π ∈ F defined by
J_{N,π}(x) = (T_{μ₀}T_{μ₁}⋯T_{μ_{N−1}})(J₀)(x)  ∀x ∈ S,  (5)
J_π(x) = lim_{N→∞} (T_{μ₀}T_{μ₁}⋯T_{μ_{N−1}})(J₀)(x)  ∀x ∈ S.  (6)
For every result to be shown, appropriate assumptions will be in effect which guarantee that the function J_π is well defined (i.e., the limit in (6) exists for all x ∈ S). We refer to J_{N,π} as the N-stage cost function for π and to J_π as the cost function for π. Note that J_{N,π} depends only on the first N functions in π, while the remaining functions are superfluous. Thus we could have considered policies consisting of finite sequences of functions in connection with the N-stage problem, and this is in fact done in Chapter 8. However, there are notational advantages in using a common type of policy in finite and infinite horizon problems, and for this reason we have adopted such a notation for Part I.
Throughout Part I we will be concerned with the N-stage optimization problem
minimize  J_{N,π}(x)
subject to  π ∈ Π,  (F)
and its infinite horizon version
minimize  J_π(x)
subject to  π ∈ Π.  (I)
We refer to problem (F) as the N-stage finite horizon problem and to problem (I) as the infinite horizon problem.
For a fixed x ∈ S, we denote by J*_N(x) and J*(x) the optimal costs for these problems, i.e.,
J*_N(x) = inf_{π∈Π} J_{N,π}(x)  ∀x ∈ S,  (7)
J*(x) = inf_{π∈Π} J_π(x)  ∀x ∈ S.  (8)
We refer to the function J*_N as the N-stage optimal cost function and to the function J* as the optimal cost function. We say that a policy π* ∈ Π is N-stage optimal at x ∈ S if J_{N,π*}(x) = J*_N(x), and optimal at x ∈ S if J_{π*}(x) = J*(x). We say that π* ∈ Π is N-stage optimal (respectively, optimal) if J_{N,π*} = J*_N (respectively, J_{π*} = J*). A policy π* = (μ*_0, μ*_1, ...) will be called uniformly N-stage optimal if the policy (μ*_i, μ*_{i+1}, ...) is (N − i)-stage optimal for all i = 0, 1, ..., N − 1. Thus if a policy is uniformly N-stage optimal, it is also N-stage optimal, but not conversely. For a stationary policy π = (μ, μ, ...) ∈ Π, we write J_π = J_μ. Thus a stationary policy π* = (μ*, μ*, ...) is optimal if J* = J_{μ*}.
Given ε > 0, we say that a policy π_ε ∈ Π is N-stage ε-optimal if
J_{N,π_ε}(x) ≤ J*_N(x) + ε  if J*_N(x) > −∞,
J_{N,π_ε}(x) ≤ −1/ε  if J*_N(x) = −∞.
We say that π_ε ∈ Π is ε-optimal if
J_{π_ε}(x) ≤ J*(x) + ε  if J*(x) > −∞,
J_{π_ε}(x) ≤ −1/ε  if J*(x) = −∞.
If {ε_n} is a sequence of positive numbers with ε_n ↓ 0, we say that a sequence of policies {π_n} exhibits {ε_n}-dominated convergence to optimality if
lim_{n→∞} J_{N,π_n} = J*_N,
and, for n = 2, 3, ...,
J_{N,π_n}(x) ≤ J*_N(x) + ε_n  if J*_N(x) > −∞,
J_{N,π_n}(x) ≤ J_{N,π_{n−1}}(x) + ε_n  if J*_N(x) = −∞.

2.3 Application to Specific Models
A large number of sequential optimization problems of practical interest may be viewed as special cases of the abstract problems (F) and (I). In this section we shall describe several such problems that will be of continuing interest to us throughout Part I. Detailed treatments of some of these problems can be found in DPSC.†

† We denote by DPSC the textbook by Bertsekas, "Dynamic Programming and Stochastic Control," Academic Press, New York, 1976.
2.3.1 Deterministic Optimal Control
Consider the mapping H: SCF → R* defined by
H(x, u, J) = g(x, u) + αJ[f(x, u)]  ∀x ∈ S, u ∈ C, J ∈ F.  (9)
Our standing assumptions throughout Part I relating to this mapping are:
(1) The functions g and f map SC into [−∞, ∞] and S, respectively.
(2) The scalar α is positive.
The mapping H clearly satisfies the monotonicity assumption. Let J₀ be identically zero, i.e.,
J₀(x) = 0  ∀x ∈ S.
Then the corresponding N-stage optimization problem (F) can be written as
minimize  J_{N,π}(x₀) = Σ_{k=0}^{N−1} α^k g[x_k, μ_k(x_k)]  (10)
subject to  x_{k+1} = f[x_k, μ_k(x_k)],  μ_k ∈ M,  k = 0, ..., N − 1.
This is a finite horizon deterministic optimal control problem. The scalar α is known as the discount factor. The infinite horizon problem (I) can be written as
minimize  J_π(x₀) = lim_{N→∞} Σ_{k=0}^{N−1} α^k g[x_k, μ_k(x_k)]  (11)
subject to  x_{k+1} = f[x_k, μ_k(x_k)],  μ_k ∈ M,  k = 0, 1, ....
This limit exists if any one of the following three conditions is satisfied:
0 ≤ g(x, u)  ∀x ∈ S, u ∈ U(x),  (12)
g(x, u) ≤ 0  ∀x ∈ S, u ∈ U(x),  (13)
α < 1,  0 ≤ g(x, u) ≤ b  for some b ∈ (0, ∞) and all x ∈ S, u ∈ U(x).  (14)
Every result to be shown for problem (11) will explicitly assume one of these three conditions. Note that the requirement 0 ≤ g(x, u) ≤ b in (14) is no more strict than the usual requirement |g(x, u)| ≤ b/2. This is true because adding the constant b/2 to g increases the cost corresponding to every policy by b/2(1 − α), and the problem remains essentially unaffected.
Deterministic optimal control problems such as (10) and (11) and their stochastic counterparts under the countability assumption of the next subsection have been studied extensively in DPSC (Chapters 2, 6, and 7). They are given here in their stationary form in the sense that the state and control spaces S and C, the control constraint U(·), the system function f, and the
cost per stage g do not change from one stage to the next. When this is not the case, we are faced with a nonstationary problem. Such a problem, however, may be converted to a stationary problem by using a procedure described in Section 10.1 and in DPSC (Section 6.7). For this reason, we will not consider nonstationary problems further in Part I. Notice that within our formulation it is possible to handle state constraints of the form x_k ∈ X, k = 0, 1, ..., by defining g(x, u) = ∞ whenever x ∉ X. This is our reason for allowing g to take the value ∞. Generalized versions of problems (10) and (11) are obtained if the scalar α is replaced by a function α: SC → R* with 0 ≤ α(x, u) for all x ∈ S, u ∈ U(x), so that the discount factor depends on the current state and control. It will become evident to the reader that our general results for problems (F) and (I) are applicable to these more general deterministic problems.
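As an illustration only (not part of the original text), here is a small Python sketch of the N-stage DP computation J*_N = T^N(J₀) for a deterministic model of the form (9)-(10), with a state constraint handled by setting g(x, u) = ∞ for excluded states, as just described; the sets S, X, U(x) and the functions f, g below are hypothetical.

# Illustrative sketch (not from the book): computing J_N* = T^N(J_0) for the
# deterministic model (9)-(10), with a state constraint x in X enforced by g = +inf.
import math

S = list(range(5))                 # states 0,...,4 (hypothetical)
X = {0, 1, 2, 3}                   # admissible states; state 4 is excluded
U = {x: [-1, 0, 1] for x in S}     # control constraint sets (hypothetical)
alpha = 0.8                        # discount factor

def f(x, u):
    return min(max(x + u, 0), 4)   # system function (hypothetical)

def g(x, u):
    if x not in X:
        return math.inf            # state-constraint device from the text
    return x + abs(u)              # one-stage cost (hypothetical)

def T(J):
    return {x: min(g(x, u) + alpha * J[f(x, u)] for u in U[x]) for x in S}

J = {x: 0.0 for x in S}            # J_0 identically zero
N = 3
for _ in range(N):                 # J becomes T^N(J_0) = J_N*
    J = T(J)
print(J)                           # N-stage optimal cost at each state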
2.3.2 Stochastic Optimal Control-Countable Disturbance Space
Consider the mapping H: SCF → R* defined by
H(x, u, J) = E{g(x, u, w) + αJ[f(x, u, w)] | x, u},  (15)
where the following are assumed:
(1) The parameter w takes values in a countable set W with given probability distribution p(dw|x, u) depending on x and u, and E{·|x, u} denotes expected value with respect to this distribution. (See a detailed definition below.)
(2) The functions g and f map SCW into [−∞, ∞] and S, respectively.
(3) The scalar α is positive.
Our usage of expected value in (15) is consistent with the definition of the usual integral (Section 7.4.4) and the outer integral (Appendix A), where the σ-algebra on W is taken to be the set of all subsets of W. Thus if wⁱ, i = 1, 2, ..., are the elements of W, (p¹, p², ...) any probability distribution on W, and z: W → R* a function, we define
E{z(w)} = Σ_{i=1}^∞ pⁱ z⁺(wⁱ) − Σ_{i=1}^∞ pⁱ z⁻(wⁱ),
where
z⁺(wⁱ) = max{0, z(wⁱ)},  z⁻(wⁱ) = max{0, −z(wⁱ)},  i = 1, 2, ....
In view of our convention ∞ − ∞ = ∞, the expected value E{z(w)} is well defined for every function z: W → R* and every probability distribution (p¹, p², ...) on W. In particular, if we denote by (p¹(x, u), p²(x, u), ...) the
probability distribution p(dw|x, u) on W written as {w¹, w², ...}, then (15) can be written as
H(x, u, J) = Σ_{i=1}^∞ pⁱ(x, u) max{0, g(x, u, wⁱ) + αJ[f(x, u, wⁱ)]}
       − Σ_{i=1}^∞ pⁱ(x, u) max{0, −[g(x, u, wⁱ) + αJ[f(x, u, wⁱ)]]}.
A point where caution is necessary in the use of expected value defined this way is that for two functions z₁: W → R* and z₂: W → R*, the equality
E{z₁(w) + z₂(w)} = E{z₁(w)} + E{z₂(w)}  (16)
need not always hold. It is guaranteed to hold if (a) E{z₁⁺(w)} < ∞ and E{z₂⁺(w)} < ∞, or (b) E{z₁⁻(w)} < ∞ and E{z₂⁻(w)} < ∞, or (c) E{z₁⁺(w)} < ∞ and E{z₁⁻(w)} < ∞ (see Lemma 7.11). We always have, however,
E{z₁(w) + z₂(w)} ≤ E{z₁(w)} + E{z₂(w)}.
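A short Python sketch (ours, not the book's) may make the definition concrete: the positive and negative parts are summed separately and the convention ∞ − ∞ = ∞ is applied at the end, so E{z(w)} is defined for every z; terms with pⁱ = 0 are skipped, matching the convention 0·∞ = 0. The distribution and the function z used in the example are hypothetical.

# Illustrative sketch (not from the book):
# E{z(w)} = sum_i p^i z^+(w^i) - sum_i p^i z^-(w^i), with inf - inf := +inf.
import math

def expected_value(p, z):
    """p: probabilities p^1, p^2, ...; z: values z(w^1), z(w^2), ... (may be +-inf)."""
    # skip terms with p^i = 0, matching the convention 0 * inf = 0
    pos = sum(pi * max(0.0, zi) for pi, zi in zip(p, z) if pi > 0)
    neg = sum(pi * max(0.0, -zi) for pi, zi in zip(p, z) if pi > 0)
    if math.isinf(pos):            # covers both pos = inf, neg finite  and  inf - inf
        return math.inf            # convention: inf - inf = +inf
    if math.isinf(neg):
        return -math.inf
    return pos - neg

# Hypothetical examples:
print(expected_value([0.5, 0.5], [-math.inf, 3.0]))       # -inf
print(expected_value([0.5, 0.5], [math.inf, -math.inf]))  # +inf, by the convention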
It is clear that the mapping H of (15) satisfies the monotonicity assumption. Let J₀ be identically zero, i.e.,
J₀(x) = 0  ∀x ∈ S.
Then if g(x, u, w) > −∞ for all x, u, w, the N-stage cost function can be written as
J_{N,π}(x₀) = E_{w₀}{ E_{w₁}{ ⋯ E_{w_{N−1}}{ Σ_{k=0}^{N−1} α^k g[x_k, μ_k(x_k), w_k] | x_{N−1}, μ_{N−1}(x_{N−1}) } ⋯ } | x₀, μ₀(x₀) },  (17)
where the states x₁, x₂, ..., x_{N−1} satisfy
x_{k+1} = f[x_k, μ_k(x_k), w_k],  k = 0, ..., N − 2.  (18)
The interchange of expectation and summation in (17) is valid, since g(x, u, w) > −∞ for all x, u, w, and we have for any measure space (Ω, ℱ, ν),
measurable h: Ω → R*, and λ ∈ (−∞, +∞],
λ + ∫ h dν = ∫ (λ + h) dν.
When Eq. (18) is used successively to express the states x₁, x₂, ..., x_{N−1} exclusively in terms of w₀, w₁, ..., w_{N−1} and x₀, one can see from (17) that J_{N,π}(x₀) is given in terms of successive iterated integration over w_{N−1}, ..., w₀. For each x₀ ∈ S and π ∈ Π the probability distributions pⁱ(x₀, μ₀(x₀)), ..., pⁱ(x_{N−1}, μ_{N−1}(x_{N−1})), i = 1, 2, ..., over W specify, by the product measure theorem [A1, Theorem 2.6.2], a unique product measure on the cross product W^N of N copies of W. If Fubini's theorem [A1, Theorem 2.6.4] is applicable, then from (17) the N-stage cost function J_{N,π}(x₀) can be alternatively expressed as
J_{N,π}(x₀) = E{ Σ_{k=0}^{N−1} α^k g[x_k, μ_k(x_k), w_k] },  (19)
where this expectation is taken with respect to the product measure on W^N and the states x₁, x₂, ..., x_{N−1} are expressed in terms of w₀, w₁, ..., w_{N−1} and x₀ via (18). Fubini's theorem can be applied if the expected value in (19) is not of the form ∞ − ∞, i.e., if either
E{ max{0, Σ_{k=0}^{N−1} α^k g[x_k, μ_k(x_k), w_k]} } < ∞
or
E{ max{0, −Σ_{k=0}^{N−1} α^k g[x_k, μ_k(x_k), w_k]} } < ∞.
In particular, this is true if either
E{ max{0, g[x_k, μ_k(x_k), w_k]} } < ∞,  k = 0, ..., N − 1,
or
E{ max{0, −g[x_k, μ_k(x_k), w_k]} } < ∞,  k = 0, ..., N − 1,
or if g is uniformly bounded above or below by a real number. If J_{N,π}(x₀) can be expressed as in (19) for each x₀ ∈ S and π ∈ Π, then the N-stage problem can be written as
minimize  J_{N,π}(x₀) = E{ Σ_{k=0}^{N−1} α^k g[x_k, μ_k(x_k), w_k] }
subject to  x_{k+1} = f[x_k, μ_k(x_k), w_k],  μ_k ∈ M,  k = 0, ..., N − 1,
which is the traditional form of an N-stage stochastic optimal control problem and is also the starting point for the N-stage model of Part II (Definition 8.3).
The corresponding infinite horizon problem is (cf. Definition 9.3)
minimize  J_π(x₀) = lim_{N→∞} E{ Σ_{k=0}^{N−1} α^k g[x_k, μ_k(x_k), w_k] }  (20)
subject to  x_{k+1} = f[x_k, μ_k(x_k), w_k],  μ_k ∈ M,  k = 0, 1, ....
This limit exists under any one of the conditions:
g(x, u, w) ≥ 0  ∀x ∈ S, u ∈ U(x), w ∈ W,  (21)
g(x, u, w) ≤ 0  ∀x ∈ S, u ∈ U(x), w ∈ W,  (22)
α < 1,  0 ≤ g(x, u, w) ≤ b  for some b ∈ (0, ∞) and all x ∈ S, u ∈ U(x), w ∈ W.  (23)
Every result to be shown for problem (20) will explicitly assume one of these three conditions. Similarly as for the deterministic problem, a generalized version of the stochastic problem is obtained if the scalar α is replaced by a function α: SCW → R* satisfying 0 ≤ α(x, u, w) for all (x, u, w). The mapping H takes the form
H(x, u, J) = E{g(x, u, w) + α(x, u, w)J[f(x, u, w)] | x, u}.
This case covers certain semi-Markov decision problems (see [J2]). We will not be further concerned with this mapping and will leave it to the interested reader to obtain specific results relating to the corresponding problems (F) and (I) by specializing abstract results obtained subsequently in Part I. Also, nonstationary versions of the problem may be treated by reduction to the stationary case (see Section 10.1 or DPSC, Section 6.7).
The countability assumption on W is satisfied for many problems of interest. For example, it is satisfied in stochastic control problems involving Markov chains with a finite or countable number of states (see, e.g., [D3], [K6]). When the set W is not countable, then matters are complicated by the need to define the expected value
E{g[x, μ(x), w] + αJ[f(x, μ(x), w)] | x, u}
for every μ ∈ M. There are two approaches that one can employ to overcome this difficulty. One possibility is to define the expected value as an outer integral, as we do in the next subsection. The other approach is the subject of Part II, where we impose an appropriate measurable space structure on S, C, and W and require that the functions μ ∈ M be measurable. Under these circumstances a reformulation of the stochastic optimal control problem into the form of the abstract problems (F) or (I) is not straightforward. Nonetheless, such a reformulation is possible as well as useful, as we will demonstrate in Chapter 9.
2.3.3 Stochastic Optimal Control-Outer Integral Formulation
Consider the mapping H: SCF → R* defined by
H(x, u, J) = E*{g(x, u, w) + αJ[f(x, u, w)] | x, u},  (24)
where the following are assumed:
(1) The parameter w takes values in a measurable space (W, ℱ). For each fixed (x, u) ∈ SC, a probability measure p(dw|x, u) on (W, ℱ) is given, and E*{·|x, u} in (24) denotes the outer integral (see Appendix A) with respect to that measure. Thus we may write, in the notation of Appendix A,
H(x, u, J) = ∫* {g(x, u, w) + αJ[f(x, u, w)]} p(dw|x, u).
(2) The functions g and f map SCW into [−∞, ∞] and S, respectively.
(3) The scalar α is positive.
We note that mappings (9) and (15) of the previous two subsections are special cases of the mapping H of (24). The mapping (9) (deterministic problem) is obtained from (24) when the set W consists of a single element. The mapping (15) (stochastic problem with countable disturbance space) is the special case of (24) where W is a countable set and ℱ is the σ-algebra consisting of all subsets of W. For this reason, in our subsequent analysis we will not further consider the mappings (9) and (15), but will focus attention on the mapping (24). Clearly H as defined by (24) satisfies the monotonicity assumption. Just as for the models of the previous two sections, we take
J₀(x) = 0  ∀x ∈ S
and consider the corresponding N-stage and infinite horizon problems (F) and (I). If appropriate measurability assumptions are placed on S, C, f, g, and p, then the N-stage cost
J_{N,π}(x) = (T_{μ₀}⋯T_{μ_{N−1}})(J₀)(x)
can be rewritten in terms of ordinary integration for every policy π = (μ₀, μ₁, ...) for which μ_k, k = 0, 1, ..., is appropriately measurable. To see this, suppose that S has a σ-algebra 𝒮, C has a σ-algebra 𝒞, and ℬ is the Borel σ-algebra on R*. Suppose f is (𝒮𝒞ℱ, 𝒮)-measurable and g is (𝒮𝒞ℱ, ℬ)-measurable, where 𝒮𝒞ℱ denotes the product σ-algebra on SCW. Assume that for each fixed B ∈ ℱ, p(B|x, u) is 𝒮𝒞-measurable in (x, u), and consider a policy π = (μ₀, μ₁, ...), where μ_k is (𝒮, 𝒞)-measurable for all k. These conditions guarantee that T_{μ_k}(J), given by
T_{μ_k}(J)(x) = ∫ {g[x, μ_k(x), w] + αJ[f(x, μ_k(x), w)]} p(dw|x, μ_k(x)),
is 𝒮-measurable for all k and all J ∈ F that are 𝒮-measurable. Just as in the previous section, for a fixed x₀ ∈ S and π = (μ₀, μ₁, ...) ∈ Π, the probability measures p(·|x₀, μ₀(x₀)), ..., p(·|x_{N−1}, μ_{N−1}(x_{N−1})) together with the system equation
x_{k+1} = f[x_k, μ_k(x_k), w_k],  k = 0, ..., N − 2,  (25)
define a unique product measure p(d(w₀, ..., w_{N−1})|x₀, π) on the cross product W^N of N copies of W. [Note that x_k, k = 0, 1, ..., N − 1, can be expressed as a measurable function of (w₀, ..., w_{N−1}) via (25).] Using the calculation of the previous section, we have that if g(x, u, w) > −∞ for all x, u, w, and Fubini's theorem is applicable, then
J_{N,π}(x₀) = ∫ { Σ_{k=0}^{N−1} α^k g[x_k, μ_k(x_k), w_k] } p(d(w₀, ..., w_{N−1}) | x₀, π),
where x₁, x₂, ..., x_{N−1} are expressed in terms of w₀, w₁, ..., w_{N−1} and x₀ via (25). Also, as in the previous section, Fubini's theorem applies if either
E{ max{0, Σ_{k=0}^{N−1} α^k g[x_k, μ_k(x_k), w_k]} } < ∞
or
E{ max{0, −Σ_{k=0}^{N−1} α^k g[x_k, μ_k(x_k), w_k]} } < ∞.
Thus if appropriate measurability conditions are placed on S, C, W, f, g, and p(dw|x, u) and Fubini's theorem applies, then the N-stage cost J_{N,π} corresponding to measurable π reduces to the traditional form
J_{N,π}(x₀) = E{ Σ_{k=0}^{N−1} α^k g[x_k, μ_k(x_k), w_k] }.
This observation is significant in view of the fact that
inf_{π∈Π} J_{N,π}(x) ≤ inf_{π∈Π̄} J_{N,π}(x)  ∀x ∈ S,
where
Π̄ = {π ∈ Π | π = (μ₀, μ₁, ...), μ_k ∈ M is (𝒮, 𝒞)-measurable, k = 0, 1, ...}.
Thus, if an optimal (ε-optimal) policy π* can be found for problem (F) and
π* ∈ Π̄ (i.e., is measurable), then π* is optimal (ε-optimal) for the problem
minimize  J_{N,π}(x)
subject to  π ∈ Π̄,
which is a traditional stochastic optimal control problem. These remarks illustrate how one can utilize the outer integration framework in an initial formulation of a particular problem and subsequently show via further (and hopefully simple) analysis that attention can be restricted to the class of measurable policies π for which the cost function admits a traditional interpretation. The main advantage that the outer integral formulation offers is simplicity. One does not need to introduce an elaborate topological and measure-theoretic structure such as the one of Part II in an initial formulation of the problem. In addition, the policy iteration algorithm of Chapter 4 is applicable to the problem of this section but cannot be justified for the corresponding model of Part II. The outer integral formulation has, however, important limitations, which become apparent in the treatment of problems with imperfect state information by means of sufficient statistics (Chapter 10).
2.3.4 Stochastic Optimal Control-Multiplicative Cost Functional
Consider the mapping H: SCF → R* defined by
H(x, u, J) = E{g(x, u, w)J[f(x, u, w)] | x, u}.  (26)
We make the same assumptions on w, g, and f as in Section 2.3.2, i.e., w takes values in a countable set W with a given probability distribution depending on x and u. We assume further that
g(x, u, w) ≥ 0  ∀x ∈ S, u ∈ U(x), w ∈ W.  (27)
In view of (27), the mapping H of (26) satisfies the monotonicity assumption. We take
J₀(x) = 1  ∀x ∈ S
and consider the problems (F) and (I). Problem (F) corresponds to the stochastic optimal control problem
minimize  J_{N,π}(x₀) = E{ g[x₀, μ₀(x₀), w₀] ⋯ g[x_{N−1}, μ_{N−1}(x_{N−1}), w_{N−1}] }  (28)
subject to  x_{k+1} = f[x_k, μ_k(x_k), w_k],  μ_k ∈ M,  k = 0, 1, ..., N − 1,
and problem (I) corresponds to the infinite horizon version of (28). The limit as N → ∞ in (28) exists if g(x, u, w) ≥ 1 for every x, u, w, or 0 ≤ g(x, u, w) ≤ 1
for every x, u, w. A special case of (28) is the exponential cost functional problem
minimize  E{ exp( Σ_{k=0}^{N−1} g′[x_k, μ_k(x_k), w_k] ) }
subject to  x_{k+1} = f[x_k, μ_k(x_k), w_k],  μ_k ∈ M,  k = 0, 1, ...,
where g′ is some function mapping SCW into (−∞, ∞].

2.3.5 Minimax Control
=
sup WEW(X.U)
R* defined by
-+
[g(x, u, w) + ()(J[f(x, u, w)J)
(29)
where the following are assumed: (1) The parameter W takes values in a set Wand W(x, u) is a nonempty subset of W for each XES, U E U(x). (2) The functions g and f map SCW into [ - 00, 00 J and S respectively. (3) The scalar a is positive.
Clearly the monotonicity assumption is satisfied. We take VXES.
If g(x, U, w) > - 00 for all x, u, w, the corresponding N-stage problem (F) can also be written as
minimize
IN,,,(xo) =
sup Wk
subject to
E
W[Xk.llk(xkll
{Nf,l ()(kg[Xbllk(xd, WkJ} k~ 0
x k+ I = f[Xk' Ilk(Xk), WkJ,
Ilk E M,
k = 0,1" , , ,
(30)
and this is an N-stage minimax control problem. The infinite horizon version is rmrurmze
J,,(xo) = lim N~oo
subject to
sup. WkEW[Xk,llklxk)]
Xk+ I = f[ Xb Ilk(Xk), WkJ,
{Nf,l ()(kg[Xk,llk(xd, wkJ} k~O
Ilk EM,
k = 0,1" . , ,
(31)
The limit in (31) exists under any one of the conditions (21), (22), or (23). This problem contains as a special case the problem of infinite time reachability examined in Bertsekas [B2]. Problems (30) and (31) also arise in the analysis of sequential zero-sum games.
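For comparison with the expected-cost models, the following purely illustrative Python fragment (not from the book) implements the minimax mapping (29) on a small finite example: the expectation is replaced by a maximum over the disturbance set W(x, u). The data S, U(x), W(x, u), f, and g below are hypothetical.

# Illustrative sketch (not from the book): the minimax mapping (29),
# H(x, u, J) = sup over w in W(x, u) of [ g(x, u, w) + alpha * J[f(x, u, w)] ].
S = [0, 1]
U = {0: [0, 1], 1: [0]}
W = {(x, u): [0, 1] for x in S for u in U[x]}   # disturbance sets W(x, u)
alpha = 0.9

def f(x, u, w):
    return (x + u + w) % 2        # hypothetical system function

def g(x, u, w):
    return x + u + 0.5 * w        # hypothetical cost per stage

def H(x, u, J):
    return max(g(x, u, w) + alpha * J[f(x, u, w)] for w in W[(x, u)])

def T(J):
    return {x: min(H(x, u, J) for u in U[x]) for x in S}

J = {x: 0.0 for x in S}
for _ in range(4):                # four steps of the DP algorithm T^k(J_0)
    J = T(J)
print(J)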
Chapter 3
Finite Horizon Models
3.1 General Remarks and Assumptions
Consider the N-stage optimization problem
minimize  J_{N,π}(x) = (T_{μ₀}⋯T_{μ_{N−1}})(J₀)(x)
subject to  π = (μ₀, μ₁, ...) ∈ Π,
where for every μ ∈ M, J ∈ F, and x ∈ S we have
T_μ(J)(x) = H[x, μ(x), J],   T(J)(x) = inf_{u∈U(x)} H(x, u, J).
Experience with a large variety of sequential optimization problems suggests that the N-stage optimal cost function J*_N satisfies
J*_N = inf_{π∈Π} J_{N,π} = T^N(J₀),
and hence is obtained after N steps of the DP algorithm. In our more general setting, however, we shall need to place additional conditions on H in order to guarantee this equality. Consider the following two assumptions.

Assumption F.1  If {J_k} ⊂ F is a sequence satisfying J_{k+1} ≤ J_k for all k and H(x, u, J₁) < ∞ for all x ∈ S, u ∈ U(x), then
lim_{k→∞} H(x, u, J_k) = H(x, u, lim_{k→∞} J_k)  ∀x ∈ S, u ∈ U(x).
Assumption F.2  There exists a scalar α ∈ (0, ∞) such that for all scalars r ∈ (0, ∞) and functions J ∈ F, we have
H(x, u, J) ≤ H(x, u, J + r) ≤ H(x, u, J) + αr  ∀x ∈ S, u ∈ U(x).
We will also consider the following assumption, which is admittedly somewhat complicated. It will enable us to obtain a stronger result on the existence of nearly optimal policies (Proposition 3.2) than can be obtained under F.2. The assumption is satisfied for the stochastic optimal control problem of Section 2.3.3, as we show in the last section of this chapter.

Assumption F.3  There is a scalar β ∈ (0, ∞) such that if J ∈ F, {J_n} ⊂ F, and {ε_n} ⊂ R satisfy
ε_n > 0,  Σ_{n=1}^∞ ε_n < ∞,
J_n(x) ≤ J(x) + ε_n,   n = 1, 2, ... and x ∈ S with J(x) > −∞,
J_n(x) ≤ J_{n−1}(x) + ε_n,   n = 2, 3, ... and x ∈ S with J(x) = −∞,
H(x, u, J₁) < ∞  ∀x ∈ S, u ∈ U(x),
then there exists a sequence {μ_n} ⊂ M such that
lim_{n→∞} T_{μ_n}(J_n) = T(J),
T_{μ_n}(J_n)(x) ≤ T(J)(x) + βε_n,   n = 1, 2, ..., x ∈ S with T(J)(x) > −∞,
T_{μ_n}(J_n)(x) ≤ T_{μ_{n−1}}(J_{n−1})(x) + βε_n,   n = 2, 3, ..., x ∈ S with T(J)(x) = −∞.
The central question regarding the finite horizon problem is whether J't = TN(J O )' in which case the N-stage optimal cost function J't can be obtained via the DP algorithm that successively computes T(J 0), T 2(J 0), .... A related question is whether optimal or nearly optimal policies exist. The results of this section provide conditions under which the answer to these questions is affirmative. Proposition 3.1 XES, z e Il, and k
(a) Let F.1 hold and assume that Jk,,,(x) < 00 for all
= 1,2,.,., N. Then
3.2
41
MAIN RESULTS
(b) Let F.2 hold and assume that Jt(x) > 1,2, ... ,N. Then
=
J~
00
for all
XES
and k
=
TN(J O)'
and for every a> 0, there exists an N-stage a-optimal policy, i.e., a no E 11 such that I
J~:::;;
N , ,, , :::;;
J~
+ a.
Proof (a) For each k = 0, 1, ... , N - 1, consider a sequence {flU c M such that
lim TI'IJT N- k- 1(J O)] = TN-\J O),
k=O, ... ,N -1,
i-e cc»
TI'~[TN-k-1(JO)]
k=O, ... ,N -1, i=O, 1,....
~ TI'~+1[TN-k-1(JO)]'
By using F.l and the assumption that Jk,,,(x) < J* < inf'" N
=i~f" .
10
we have
inf(T io···T i N- l)(J )
.i~~;(;:o'" .
IN - 2
Ito
= inf'" inf(T1'0 . .
iO'"
10
00,
IN- 2
;~:~2)[;nf
Il N- 2 .
Tl'i N 'Cl(J O)]
IN ~ 1
N-l
T iN-2)[T(J o)] I'N-2
where the last equality is obtained by repeating the process used to obtain the previous equalities. On the other hand, it is clear from the definitions of Chapter 2 that TN(J 0) :::;; and hence J~ = TN(J 0)' (b) We use induction. The result clearly holds for N = 1. Assume that it holds for N = k, i.e., Jt = Tk(J 0) and for a given a > 0, there is a n e Ell with J k,,,, :::;; Jt + a. Using F.2 we have for all fl EM,
n;
Jt+1 :::;; Tih,,,J:::;; TI'(Jt)
+ «s,
Hence Jt+ 1 :::;; T(Jt), and by using the induction hypothesis we obtain Jt+1:::;; Tk+1(J O)' On the other hand, we have clearly Tk+1(J O):::;; Jt+1, and hence i-: l(J 0) = Jt+ i - For any e> 0, let n = (710,711,' ..) be such that Jk,ff:::;; Jt + (e/2a), and let 71 EM be such that TJi(Jt):::;; T(Jt) + (e/2). Consider the policy ne = (71,710,711,' ..). Then Jk+l.rr E= T Ji(Jk,1iJ :::;; TJi(Jt)
The induction is complete.
+ (e/2):::;; T(Jt)+e=Jt+1
+e.
Q.E.D.
Proposition 3.1(a) may be strengthened by using the following assumption in place of F.1.
42
3. Assumption F.1'
FINITE HORIZON MODELS
The function J 0 satisfies 't:/XES,
UE
U(x),
and if {J d c F is a sequence satisfying Jk+ 1 :s; J k :S; J 0 for all k, then lim H(x, u, Jd = H(X' u, lim J k)
k-+oo
k-r co
't:/XES,
UEU(X).
The following corollary is obtained by verbatim repetition of the proof of Proposition 3.1(a). Corollary 3.1.1
Let F.1' hold. Then Jt
=
TN(J O)'
Proposition 3.1 and Corollary 3.1.1 may fail to hold if their assumptions are slightly relaxed. COUNTEREXAMPLE I Take S= {O}, C= U(O)=(-I,O], Jo(O)=O, H(O,u, J) = U if -1 < J(O), H(O,u, J) = J(O) + U if J(O) :S; -1. Then (T ll o ' •. TIlN_1)(JO)(0) = /l0(0) and J~(O) = -1, while TN(JO)(O) = -N for every N. Here the assumptions J k, ,,(0) < 00 and Jt(O) > - 00 are satisfied, but Fvl, F.1', and F.2 are violated. COUNTEREXAMPLE 2 Take S={O, I}, C= U(O) = U(1)=( - 00,0], Jo(O)= J o(1)=O, H(O,u,J)=u if J(I)= -00, H(O,u,J) = 0 if J(1» -00, and H(I, u,l) = u. Then (T 110' •• T IlN- J(J 0)(0) = 0, (T ll o ' •• T IlN-l)(J 0)(1) = /l0(1) for all N ~ 1. Hence, J~(O) = 0, J~(1) = - 00. On the other hand, we have TN(JO)(O) = T N(Jo)(l) = -00 for all N ~ 2. Here F.2 is satisfied, but F}, F.l', and the assumptions Jk,,,(x) < 00 and Jt(x) > - 00 for 't:/x E S are all violated. The following counterexample is a stochastic optimal control problem with countable disturbance space as discussed in Section 2.3.2. We use the notation introduced there. COUNTEREXAMPLE 3 Let N = 2, S = {O, I}, C = U(O) = U(I) = R, W = {2,3, ... }, p(w=klx,u)=k-zO:::'=zn-Z)-1 for k=2,3, ... , XES, uEC, f(O, U, w) = f(l, u, w) = 1 for 't:/u E C, WE W, g(O, U, w) = w, g(1, u, w) = u for 't:/UE C, WE W. Then a straightforward calculation shows that J!(O) = 00, Ji(1) = - 00, while TZ(Jo)(O) = - 00, TZ(Jo)(l) = - 00. Here F.1 and F.2 are satisfied, but F.1' and the assumptions Jk,,,(x) < 00 for all x,n,k, and Jt(x) > - 00 for all X and k are all violated. The next counterexample is a deterministic optimal control problem as discussed in Section 2,3.1. We use the notation introduced there.
COUNTEREXAMPLE 4 Let N = 2, S = to, 1, ...}, C = U(x) = (0, (0) for [ix, u) = 0 for VXE S, UE C, g(O, u) = -u for VUE U(O), g(x, u) = x for Vu E U(x) if x #- O. Then for 11: E n and x #- 0, we have J 2. 7[(x) = X - /11(0), so that Ji(x) = - 00 for all XES. On the other hand clearly there is no two-stage s-optimal policy for any 8 > O. Here F.1, F.2, and the assumption J k. 7[(x) < 00 for all x, 11:, k are satisfied, and indeed we have Ji(x) = T 2(J o)(x) = - 00 for Vx E S. However, the assumption Jt(x) > - 00 for all x and k is violated. As Counterexample 4 shows there may not exist an N-stage s-optimal policy if we have Jt(x) = - 00 for some k and XES. The following proposition establishes, under appropriate assumptions, the existence of a sequence of nearly optimal policies whose cost functions converge to the optimal cost function. VXE S,
Proposition 3.2 Let F.3 hold and assume J k.7[(X) < and k = 1,2, ... ,N. Then Jt
00
for all
XES, 11: En,
= TN(Jo)·
Furthermore, if {8 n } is a sequence of positive numbers with 8n 10, then there exists a sequence of policies {11: n } exhibiting {8 n }-dominated convergence to optimality. In particular, if in addition Jt(x) > - 00 for all XES, then for every 8 > 0 there exists an s-optimal policy. Proof We will prove by induction that for K ::; N we have Jk. = TK(J 0), and furthermore, given K and [8 n } with 8n 10, s, > 0 for Vn, there exists a sequence {11: n } c n such that for all n; lim J K. 7[n = J k.,
n--f>
(I)
CfJ
J K. 7[Jx) ::; {Jk.(X) + 8n J K.7[n_,(X)+8 n
Vx E S VXES
with with
Jk.(x) > - 00, Jk.(x)=-oo.
(2) (3)
We show that this holds for K = 1. We have Jf(x)
= inf Jl,Ax) = inf H[x,/1(x),J o] = T(Jo)(x) 7[Ell /lEM
VXES.
It is also clear that, given {8 n } , there exists a sequence {11: n } c n satisfying (1)-(3) for K = 1. Assume that the result is true for K = N - 1. Let f3 be the scalar specified in F.3. Consider a sequence {8 n } c R with 8n > 0 for vn and lim.., a: e; = 0, and let {itn } c Tl, itn = (/11, /1~, ...), be such that
lim I N-
n-+
00
1.ft n
= Jt-1'
(4)
44
3.
The assumption J k •1r (x ) < that we have
00
for all XES, nEIl, k
H(X,U,J N - 1. it, ) <
=
1,2, ... ,N, guarantees
UEU(X).
VXES,
00
FINITE HORIZON MODELS
(7)
Without loss of generality we assume that L~ 1 en < 00. Then Assumption F.3 together with (4) implies that there exists a sequence {,u'O} c M such that, for all n, lim T llo(JN-1.itJ = T(J%_d,
(8)
n-e co
T(J %- d(x) + en Tllo(JN-l.it,.)(x) S { T 110 _I (J N-1,1ril-l )(X )
if T(J%_ d(x) > - 00, 'f T(J* 1 l' N-1 )() X = -00.
+ en
We have by the induction hypothesis J%-l TN(J O) s J%. Hence,
(9) (10)
T N- 1(J O), and it is clear that
=
(11) We also have (12)
J% slim Tllo(J N-UJ n--+ 00
Combining (8), (11), and (12), we obtain J%
=
=
T(J%-d
TN(J O)'
(13)
Let tt; = (,uo,,u7,,uz, ...). Then from (8)-(10) and (13), we obtain, for all n, lim I N ,1rn =
n-r cc
J%, with with
VXES VXES
and the induction argument is complete.
J%(x»-oo, J%(x)=-oo,
Q.E.D.
Despite the need for various assumptions in order to guarantee J% = TN (J 0), the following result, which establishes the validity of the DP algorithm as a means for constructing optimal policies, requires no assumption other than monotonicity of H. Proposition 3.3 if and only if
A policy n* =
=
(TIl~TN-k-1)(JO)
Proof
. . .) is uniformly N-stage optimal
(,u~,,u!,
k = 0,
TN-k(J O),
Let (14) hold. Then we have, for k (TIl~'"
TIl,;;_)(Jo)
=
=
0,1,
TN-k(J O)'
, N - 1.
,N - 1,
(14)
3.2
45
MAIN RESULTS
On the other hand, we have J't-k S (TJl~' .. TJl'N_ )(10), while TN-k(J O) S J't-k' Hence,J't-k. = (T..• . . . T... )(J 0) and n* is uniformly N-stage optimal. f"'k f"'N- I Conversely, let tt" be uniformly N-stage optimal. Then T{1o)
= Ji = TJl'N_ ,(10)
by definition. We also have for every pEM, (TJlT)(J o) = (TJlTJl;._\l{1o), which implies that T
2(J
O)
= inf(T"T)(J o) = inf(TJlTJl'N_ )(J o) JlEM
JlEM
n = (T/';'_2TJl;'_1)(JO) 2
2
Therefore T
2(J
O)
T
2(J
O)'
= J! = (TJl'N_JJl'N_ )(10) = (T Jl'N_J){1o).
Proceeding similarly, we show all the equations in (14).
Q.E.D.
As a corollary of Proposition 3.3, we have the following. Corollary 3.3.1 (a) There exists a uniformly N-stage optimal policy if and only if the infimum in the relation Tk+l(J O)(X) =
inf H[x,u, Tk(J O)]
(15)
UEU(X)
is attained for each XES and k = 0, 1,... , N - 1. (b) If there exists a uniformly N-stage optimal policy, then J't
= TN(J 0)'
We now turn to establishing conditions for existence of a uniformly N-stage optimal policy. For this we need compactness assumptions. If C is a Hausdorff topological space, we say that a subset U of C is compact if every collection of open sets that covers U has a finite subcollection that covers U.
The empty set in particular is considered to be compact. Any sequence {un} belonging to a compact set U c C has at least one accumulation point UE U, i.e., a point UE U every (open) neighborhood of which contains an infinite number of elements of {un}. Furthermore, all accumulation points of {un} belong to U. If {Un} is a sequence of nonempty compact subsets of C and U; ::J U n + 1 for all n, then the intersection U; is nonempty and compact. This yields the following lemma, which will be useful in what follows.
n:=l
Lemma 3.1 Let C be a Hausdorff space, f: C U a subset of C. Assume that the set Up.) defined by U(Je) =
is compact for each
},ER.
-->
R* a function, and
rUE Ulf(u) s Je}
Then f attains a minimum over U.
Proof If f(u) = 00 for all UE U, then every UE U attains the minimum. Iff* = inf{f(u)juE U} < 00, let {An} be a scalar sequence such that z, > An+ 1 for all n and An ---> f*. Then the sets U(A n) are nonempty, compact, and satisfy U(A n):;:) U(A n+ d for all n. Hence, the intersection 1 U(A n) is nonempty and compact. Let u* be any point in the intersection. Then u* E U and f(u*) :os; }'n for all n, and it follows that f(u*) :os; f*. Hence, f attains its minimum over U at u*. Q.E.D.
n:=
Direct application of Corollary 3.3.1 and Lemma 3.1 yields the following proposition. Proposition 3.4 Let the control space C be a· Hausdorff space and assume that for each XES, AER, and k = 0,1, ... ,N - 1, the set
(16) is compact. Then
and there exists a uniformly N-stage optimal policy. The compactness of the sets U k(X, },) of (16) may be verified in a number of important special cases. As an illustration, we state two sets of assumptions which guarantee compactness of U k(X, A) in the case of the mapping H(x, u, J)
= g(x, u) + «(x, u)J[f(x, u)J
corresponding to a deterministic optimal control problem (Section 2.3.1). Assume that 0 :os; «(x, u), b :os; g(x, u) < 00 for some bE R and all XES, U E U(x), and take J 0 == O. Then compactness of U;(x, A) is guaranteed if: (a) S = R" (n-dimensional Euclidean space), C = R'", U(x) == C, [, g, and a are continuous in (x, u), and 9 satisfies limk~oo g(Xb ud = 00 for every bounded sequence {xd and every sequence {ud for which IUkl ---> 00 (I·' is a norm on Rm); (b) S = R", C = R'", f, g, and a are continuous, U(x) is compact and nonempty for each x ERn, and U(·) is a continuous point-to-set mapping from R" to the space of all nonempty compact subsets of R'". The metric on this space is given by (3) of Appendix C. The proof consists of verifying that the functions Tk(J 0), k = 0,1, . . . , N - 1, are continuous, which in turn implies compactness of the sets Uk(x, A) of (16). Additional results along the lines of Proposition 3.4 will be given in Part II (cf. Corollary 8.5.2 and Proposition 8.6).
3.3 Application to Specific Models
We will now apply the results of the previous section to the models described in Section 2.3. Stochastic Optimal Control-s-Outer Integral Formulation
Proposition 3.5 The mapping Hix, u,J) = E*{g(x, u, w) + o:J[j(x, u, w)]lx, u}
(17)
of Section 2.3.3 satisfies Assumptions F.2 and F.3. Proof We have H(x, u, J)
=
f*[g(x, u; w) + o:J[j(x, u, w)]}p(dwlx, u),
where S* denotes the outer integral as in Appendix A. From Lemma A.3(b) we obtain for all XES, UE C, J EF, r > 0, H(x, u, J)
s
Hix, u, J
+ r) s
Hence, F.2 is satisfied. We now show F.3. Let JEF, {In} en > 0, and for all n,
Hix, u; J)
c F,
[en}
+ 20:r.
c R
satisfy
I:=l F. n <
00,
(18)
J n() X H(x, u, J d
Let {,un}
C
J (X) + en <{ - I n - 1 (x ) + en
if J(x) > if J(x) = -
<
VXES,
00,
00,
(19)
00,
(20)
UE U(x).
(21)
T(J)(x)
> -
00,
T(J)(x)
= -
00,
(22) (23)
M be such that for all n, T- (J)( ) lIn
X
T /iJJ)
< {T(J)(X) -l/en
-
s
+ en
if if
T /in- P)·
(24)
Consider the set A(J)
= [x s S'[thereexists u e
U(x) with p*({wIJ[j(x,u, w)]
=
-oo}lx,u) > 0],
where p* denotes p-outer measure (see Appendix A). Let ,u EM be such that p*({wIJ[j(x,,u(x), w)]
=
-oo}lx,,u(x))
>
°
VXEA(J).
(25)
48
3.
FINITE HORIZON MODELS
Define for all n Iln(x)
if XE A(J), if x ¢= A(J).
Ji(X)
= { u; _ (x )
(26)
We will show that {Iln} thus defined satisfies the requirement of F.3 with f3 = 1 + 2a. For x E A(J), we have, from Corollary A.l.l and (18)-(21), lim sup TI'JJn)(x) = lim sup T;i(Jn)(x) n- OCi
=
lim sup S* {g[x,Ji(x), w] + aJn[f(x,Ji(x), w)]} n-oo
x p(dwlx, Ji(x)
= S* {g[ x, Ji(x), w] + aJ[f(x, Ji(x), w)] }p(dwlx, Ji(x)). It follows from Lemma A.3(g) and the fact that T;:;(J)(x) < (21)] that limsupTI'JJn)(x) n-oo
=
-00':::;
00
T(J)(x).
[cr. (18) and (27)
For x ¢= A(J), we have, for all n, p*( {wIJ[f(x, Iln(x), w)]
= - oo}lx, Iln(x)) = O.
Take B; E f7 to contain [w!J[j(x, Iln(x), w)]
= -
lx, Iln(x)) = 0
p(B n
oo} and satisfy
Vn.
Using Lemma A.3(e) and (b) and (19), we have Tl'n(Jn)(x)
= S* XW-BJW){g[X,lln(x), w] + aJ n[f(x,lln(x), w)]}p( dwIX,lln(x)) .:::; S* XW-BJW){g[X,lln(x), w] + aJ[j(x,lln(x), w)]} x p(dwIX,lln(x))
+ 2aG n
= T I'JJ)(x) + 2aG n.
(28)
Hence, for x ¢= A(J) we have from (28), (22), and (23) that lim sup TI')JnHx),:::; lim sup TI'JJ)(x) = T(J)(x). n-e co
n-e co
Combining (27) and this relation we obtain lim sup Tl'n(Jn)(x)':::; T(J)(x) n-e co
Vx E S,
3.3
49
APPLICATION TO SPECIFIC MODELS
and since T Iln(Jn) ~ T(J) for all n, it follows that lim Tlln(Jn) = T(J).
(29)
If x is such that T(J)(x) > - 00, it follows from (27) and (29) that we must have x¢ A(J). Hence, from (28), (22), and Lemma A.3(b), Tlln(Jn)(x):S:; TIlJJ)(x)
+ 2etBn :s:;
T(J)(x)
+ (1 + 2et)Bn
if T(J)(x) > -
00.
(30) If x is such that T(J)(x) = -
00,
there are two possibilities:
(a) x ¢A(J) and (b)
xEA(J).
If x ¢ A(J), it follows from (28), (24), and (18) that TIlJJn)(x):s:; TIlJJ)(x)
+ 2etB n:S:; :s:;
+ 2etBn Tlln_,(Jn-d(x) + 2etB n. Tlln_JJ)(x)
(31)
If XE A(J), then by (18)-(20) and Lemma A.3(b), TIlJJn)(x)
f* {g[x,JI(x), w] + etJn[f(x,j1(x), w)]}p(dwlx,j1(x)) :s:; f* {g[x, j1(x), w] + etJn-1 [f(x, j1(x),w)] }p(dwlx, j1(x)) + 20:B" =
= T lln_Pn-1)(X) + 20:Bn.
(32)
It follows now from (29)-(32) that {Il,,} satisfies the requirement of F.3 with f3 = 1 + 20:. Q.E.D.
As mentioned earlier, mapping (17)contains as special cases the mappings of Sections 2.3.1 and 2.3.2. In fact, for those mappings F.1 is satisfied as well, as the reader may easily verify by using the monotone covergence theorem for ordinary integration. Direct application of the results of the previous section and Proposition 3.5 yields the following. Corollary 3.5.1
Let H be mapping (17) and let Jo(x)=O for
VXES.
(a) If Jk.,,(x) < 00 for all XES, nEn, and k = 1,2, ... ,N, then J% = TN(J0) and for each sequence {B n} with Bn s, > 0 for Vn, there exists a
to,
sequence of policies {nn} exhibiting {B,,}-dominated convergence to optimality. In particular, if in addition J%(x) > - 00 for all XES, then for every B > 0 there exists an s-optimal policy. (b) If Jt(x) > - 00 for all XES, k = 1,2, ... ,N, then J% = TN(J 0) and for each B > 0 there exists an N-stage s-optimal policy. (c) Propositions 3.3 and 3.4 and Corollary 3.3.1 apply.
3. FINITE HORIZON MODELS
50
As Counterexample 3 in the previous section shows, it is possible to have J'!. #- TN(J 0) in the stochastic optimal control problem if the assumptions
of parts (a) and (b) of Corollary 3.5.1 are not satisfied. Naturally for special classes of problems it may be possible to guarantee the equality J'!. = TN(J 0) in other ways. For example, if the problem is such that existence of a uniformly N-stage optimal policy is assured, then we obtain J'!. = TN(J O ) via Corollary 3.11(b). An important special case where we have J'!. = TN(J O ) without any further assumptions is the deterministic optimal control problem of Section 2.3.1. This fact can be easily verified by the reader by using essentially the same argument as the one used to prove Proposition 3.1(a). However, if J'!.(x) = - 00 for some XES, even in the deterministic problem there may not exist an N-stage a-optimal policy for a given a (see Counterexample 4). Stochastic Optimal Control-Multiplicative Cost Functional Proposition 3.6
The mapping
H(x, u,J)
=
E{g(x, u, w)J[j(x, u, w)]!x,u}
of Section 2.3.4 satisfies F.1. If there exists e b e R such that for all XES, UE U(x), WE W, then H satisfies F.2.
°
~ g(x,
(33) u, w)
~
b
Proof Assumption F.1 is satisfied by virtue of the monotone convergence theorem for ordinary integration (recall that W is countable). Also, if ~ g(x, U, w) ~ b, we have for every J E F and r > 0,
°
H(x, u,J
+ r) = E{g(x, u, w)(J[j(x, u, w)J + r)lx, u} = E {g(x, u, w)J[j(x, u, w)J [x, u} + rE {g(x, u, w)lx, u}.
Thus F.2 is satisfied with
IX
= b.
Q.E.D.
By combining Propositions 3.6 and 3.1, we obtain the following. Corollary 3.6.1
Let H be the mapping (33) and J o(x) = 1 for Vx E S.
°
(a) If Jk.,,(x) < 00 for all XES, nEIl, k = 1,2, ... ,N, then J'!. = TN(Jo)' (b) If there exists e b e R such that ~ g(x, u, w) ~ b for all XES, U E U(x), WE W, then J'!. = TN(J O ) and there exists an N-stage a-optimal policy. (c) Propositions 3.3 and 3.4 and Corollary 3.3.1 apply. We now provide two counterexamples showing that the conclusions of parts (a) and (b) of Corollary 3.6.1 may fail to hold if the corresponding assumptions are relaxed. COUNTEREXAMPLE 5 Let everything be as in Counterexample 3 except that C = (0, 00) instead of C = R (and, of course, J 0(0) = J 0(1) = 1 instead
3.3
51
APPLICATION TO SPECIFIC MODELS
of J 0(0) = J 0(1) = 0). Then a straightforward calculation shows that J!(O) = 2 2 00, n(l) = 0, while T (J O)(0) = T (J O )(l ) = 0. Here the assumption that J k, ,,(.x) < 00 for all x, tt, k is violated, and 9 is un bounded above. COUNTEREXAMPLE 6 Let everything be as in Counterexample 4 except for the definition of g. Take g(O, u) = u for Vu E U(O) and g(x, u) = x for Vu E U(x) if x =1= 0. Then for every tt:E n we have J 2, ,,(x) = XII dO) for every x =1= 0, and J!(x) = for vx E S. On the other hand, there is no two-stage a-optimal policy for any a> 0. Here the assumption Jk,,,(.x) < 00 for all x, n, k is satisfied, and indeed we have n(x) = T 2 (J 0 )(x) = for Vx E S. However, 9 is unbounded above.
°
°
Minimax Control
Proposition 3.7 The mapping
Hix, u, J) =
sup
of Section 2.3.5 satisfies F.2. Proof
We have for r > H(x,u,J
+ r) = =
Corollary 3.7.1
°
[g(x, u, w)
WEW(X, u)
°
+ aJ[f(x, u, w)]]
(34)
and J E F,
sup WEW(X,U)
[g(x,u, w) + aJ[f(x,u, w)]
Hix; u, J)
+ or.
Q.E.D.
Let H be mapping (34) and Jo(x)
=
°
+ ar}
for
VXES.
(a) If Jt(x) > -00 for all XES, k = 1,2, ... ,N, then J% = TN(J o), and for each a > there exists an N-stage s-optimal policy. (b) Propositions 3.3 and 3.4 and Corollary 3.3.1 apply.
If we have Jt(x) = - 00 for some XES, then it is clearly possible that there exists no N -stage a-optimal policy for a givens > 0, since this is true even for deterministic optimal control problems (Counterexample 4). It is also possible to construct examples very similar to Counterexample 3 which show that it is possible to have J% =1= TN(J 0) if Jt(x) = - 00 for some X and k.
Chapter 4
Infinite Horizon Models under a Contraction Assumption
4.1
General Remarks and Assumptions
Consider the infinite horizon problem minimize J,,(x) = lim(T/lJ/ll'" T/lN_,)(Jo)(x) N->oo
subject to n = (/-10,/-11", .)E Il. The following assumption is motivated by the contraction property of the mapping associated with discounted stochastic optimal control problems with bounded cost per stage (cf. DPSC, Chapter 6). Assumption C (Contraction Assumption) There is a closed subset 13 of the space B (Banach space of all bounded real-valued functions on S with the supremum norm) such that JoE 13, and for all J E 13, /-1 E M, the functions T(J) and T /l(J) belong to 13. Furthermore, for every tt = (/-10, /-11" ..) E IT, the limit (1)
exists and is a real number for each x ∈ S. In addition, there exists a positive integer m and scalars ρ, α, with 0 < ρ < 1, 0 < α, such that
‖T_μ(J) − T_μ(J′)‖ ≤ α‖J − J′‖  ∀μ ∈ M, J, J′ ∈ B,  (2)
‖(T_{μ₀}T_{μ₁}⋯T_{μ_{m−1}})(J) − (T_{μ₀}T_{μ₁}⋯T_{μ_{m−1}})(J′)‖ ≤ ρ‖J − J′‖  ∀μ₀, ..., μ_{m−1} ∈ M, J, J′ ∈ B̄.  (3)
Condition (3) implies that the mapping (T/lo T /ll ... TIL'" _) is a contraction mapping in B for all IlkE M, k = 0,1, ... .m - 1. When m = 1, the mapping T" is a contraction mapping for each IlEM. Note that (2) is required to hold on a possibly larger set of functions than (3). It is often convenient to take B = B. This is the case for the problems of Sections 2.3.1, 2.3.2, and 2.3.5 assuming that ex < 1 and g is uniformly bounded above and below. We will demonstrate this fact in Section 4.4. In other problems such as, for example, the one of Section 2.3.3, the contraction property (3) can be verified only on a strict subset B of B. 4.2 Convergence and Existence Results We first provide some preliminary results in the following proposition. Proposition 4.1 Let Assumption C hold. Then: (a) For every J E Band
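To illustrate what the contraction assumption buys computationally (an illustrative sketch of ours, not from the book), the following Python fragment runs successive approximation T^N(J) for a small finite discounted model with m = 1 and modulus ρ = α < 1; by the contraction property, the iterates converge geometrically to the unique fixed point J* = T(J*) from any bounded starting function. The model data are hypothetical.

# Illustrative sketch (not from the book): under the contraction assumption
# (m = 1, modulus rho = alpha < 1), T^N(J) converges geometrically to the
# unique fixed point J* = T(J*), for any bounded starting function J.
S = [0, 1, 2]
U = {x: [0, 1] for x in S}
alpha = 0.5                                 # discount factor, alpha < 1

def H(x, u, J):                             # hypothetical bounded-cost model
    g = (x - u) ** 2
    return g + alpha * J[(x + u) % 3]

def T(J):
    return {x: min(H(x, u, J) for u in U[x]) for x in S}

def sup_norm_diff(J1, J2):
    return max(abs(J1[x] - J2[x]) for x in S)

J = {x: 100.0 for x in S}                   # arbitrary bounded starting function
for k in range(60):
    J_next = T(J)
    if sup_norm_diff(J_next, J) < 1e-10:    # small ||T(J) - J|| means J is near J*
        break
    J = J_next
print(J_next)                               # numerical approximation of J*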
t: E
Il, we have
J,,= lim(T/lo···T/lN_,)(Jo)= lim(T/lo···T/LN_)(J). N--+oo
N--+w
(b) For each positive integer N and each J E B, we have inf(T/lo '
"
T/lN_,)(J)
TN(J)
=
"Ell
and, in particular, I~
= inf(T/lo '
"
T/lN_,)(Jo)
"Ell
(c) The mappings T with modulus p, i.e.,
I1I
= TN(J o).
and T:', IlEM, are contraction mappings in B
IITm(J) - Tm(J')11 :s; pllJ IIT;(J) - T;(J')II :s; pllJ -
J'II I'll
VI,J'EB, VJ,1'EB,
IlEM.
Proof (a) For any integer k ;;::: 0, write k = 11m + q, where q, 11 are nonnegative integers and :s; q < m. Then for any J. J' E B. using (2) and (3). we obtain
°
54
4.
INFINITE HORIZON MODELS UNDER A CONTRACTION ASSUMPTION
from which, by taking the limit as k (and hence also n) tends to infinity, we have VJER.
(b) Since Tk(J) E R for all k by assumption, we have Tk(J)(x) > - 00 for all XES and k. For any t; > 0, let IlkE M, k = 0,1, ... ,N - 1, be such that TjiN_P) ~ T(J) (TjiN_,T)(J)
s
+ e,
TZ(J)
+ s,
Using (2) we obtain T N(J):2: (TjiJN-l)(J) - e :2: Tjio[(Tjil TN-Z)(J) -
eJ - s
:2: (TjiJjilTN-Z)(J) - tu: - e
. (N-l ) cxke :2:(T jioTjil"' TjiN_J(J)-
I
k=O
Since s >
°
:2: inf(T
llo' " TIlN_J(J) - (Nil cxke).
"eIlk=O
is arbitrary, it follows that T N(J):2: inf(T llo" ·TIlN_,)(J)· "EIl
The reverse inequality clearly holds and the result follows. (c) The fact that T; is a contraction mapping is immediate from (3). We also have from (3) for all IlkEM, k = 0, ... ,m -'--- 1, and J,J'ER, (T llo' .. T llm_ J(J) ~ (T llo' .. T llm_ J(J')
+ pllJ -
Taking the infimum of both sides over Ilk E M, k using part (b) we obtain
=
J'II·
0, 1, ... , m - 1, and
A symmetric argument yields Trn(J') ~ Trn(J) + pllJ -
J'II·
Combining the two inequalities, we obtain II Trn(J) - Trn(J')11 ~ pill - J'II· Q.E.D.
4.2
55
CONVERGENCE AND EXISTENCE RESULTS
In what follows we shall make use of the following fixed point theorem. (See [05, p. 383]-the proof found there can be generalized to Banach spaces.) Fixed Point Theorem If B is a closed subset of a Banach space with norm denoted by 11'11 and L: B -+ B is a mapping such that for some positive integer m and scalar p E (0,1), IlLm(z) - L m(z')11 :s; pllz - z'll for all z, z' E B, then L has a unique fixed point in B, i.e., there exists a unique vector z* E B such that L(z*) = z*. Furthermore, for every ZE B, we have
lim IILN(z) - z*11 =
N-+oo
o.
The following proposition characterizes the optimal cost function J* and the cost function J Il corresponding to any stationary policy (fl, u, ...) En. It also shows that these functions can be obtained in the limit via successive application of T and Til on any J E B. Let Assumption C hold. Then:
Proposition 4.2
(a) The optimal cost function J* belongs to B and is the unique fixed pointofTwithinB,i.e.,J* = T(J*),andifJ'EBandJ' = T(J'), thenJ' = J*. Furthermore, if J' E B is such that T(J') :s; J', then J* :s; J', while if J' :s; T(J'), then J' :s; J*. (b) For every fl E M, the function J Il belongs to B and is the unique fixed point of Til within B. (c) There holds lim II TN(J) - J*[[ = 0
VJEB,
N-+oo
lim IIT~(J)
N-+
- JIlII = 0
00
Proof From part (c) of Proposition 4.1 and the fixed point theorem, we have that T and Til have unique fixed points in B. The fixed point of Til is clearly J Il , and hence part (b) is proved. Let J* be the fixed point of T. We have J* = T(J*). For any e > 0, take Jl EM such that
+ e. T i1(J*) + at :s; J* + (1 + lJ()e. Continuing
T i1(J*) :s; J*
From (2) it follows that T~(J*) :s; in the same manner, we obtain T~(J*):s;
J*
+ (1 + IJ( + ... + IJ(m-l)e.
Using (3) we have
+ p(1 + IJ( + :s; J* + (1 + p)(1 + IJ( +
nm(J*):s; T~(J*)
+ IJ(m-l)e + IJ(m-l)e.
56
4.
INFINITE HORIZON MODELS UNDER A CONTRACTION ASSUMPTION
Proceeding similarly, we obtain, for all k 2 1, T~m(J*)
J* + (1 + P + ... + pk-l)(1 + a + ... + am-i)£".
~
Taking the limit as k ~ co and using the fact that J Ii = limk_oo T~m(J*), have J{i ~ J*
1 + -1-(1 + a + ... + am-i)£".
we (4)
-p Taking£"= (1 - p)(l + a + ... + am-l)-le, we obtain J{i~J*+e.
Since J* ~ J{i and e >
°is arbitrary, we see that
J* ~ J*. We also have
J* = inf lim (T 110 ... T !J-N-l )(J*) -> lim TN(J*) = 1tEnN-tX)
N-oo
J*
•
Hence J* = J* and J* is the unique fixed point of T. Part (c) follows immediately from the fixed point theorem. The remaining part of (a) follows easily from part (c) and the monotonicity of the mapping T. Q.E.D. The next proposition relates to the existence and characterization of stationary optimal policies. Proposition 4.3
Let Assumption C hold. Then:
(a) A stationary policy n* = (f.1*,f.1*, .. .)En is optimal if and only if
=
T 1l,(J*)
T(J*).
Equivalently, n* is optimal if and only if
=
TIl.(JIl')
T(J Il,)·
(b) If for each XES there exists a policy which is optimal at x, then there exists a stationary optimal policy. (c) For any s > 0, there exists a stationary s-optimal policy, i.e., a n e = (f.1" f.1" . . .) En such that
IIJ* -
Jilell ~ e.
Proof (a) If n* is optimal, then J Il, = J* and the result follows from parts (a) and (b) of Proposition 4.2. If TIl,(J*) = T(J*), then TIl,(J*) = J*, and hence J Il' = J* by part (b) of Proposition 4.2. If T 1l.(JIl') = T(J Il,), then J Il, = T(J Il,) and J Il, = J* by part (a) of Proposition 4.2. (b) Let n: = (f.1~,x,f.1r.x, ... ) be a policy which is optimal at XES. Then using part (a) of Proposition 4.1 and part (a) of Proposition 4.2, we have J*(x) = J,,'(x) = lim (TIL' x
k-CtJ
a,X
... TIL' )(Jo)(x) k,x
= lim (T, ... T, )(J*)(x) Ilo,x !1k,x k-o a»
2 lim (TIL'a,x Tk)(J*)(x) = TIL'o,x (J*)(x) 2 T(J*)(x) = J*(x). k
-e
co
4.2
57
CONVERGENCE AND EXISTENCE RESULTS
Hence T 1'~.JJ*)(x) = T(J*)(x) for each x. Define f.1* EM by means of f.1*(x) = f.1(L(.x). Then TI'*(J*) = T(J*) and the stationary policy (f.1*, u", . . .) is optimal by part (a). (c) This part was proved earlier in the proof of part (a) of Proposition 4.2 [cf. (4)]. Q.E.D. Part (a) of Proposition 4.3 shows that there exists a stationary optimal policy if and only if the infimum is attained for every XES in the optimality equation
=
J*(x)
T(J*)(x)
= inf
Hix, u, J*).
UEU(X)
Thus if the set U(x) is a finite set for each XES, then there exists a stationary optimal policy. The following proposition strengthens this result and also shows that stationary optimal policies may be obtained in the limit from finite horizon optimal policies via the DP algorithm, which for any given J E B successively computes T(J), T 2(J), . . . . Proposition 4.4 Let Assumption C hold and assume that the control space C is a Hausdorff space. Assume further that for some J E B and some positive integer K, the sets
(5) are compact for all (a) all
XES
XES,
l
E
Rand k ~ K. Then:
There exists a policy n* = (f.1~, f.1!, . . .) E IT attaining the infimum for and k ~ K in the DP algorithm with initial function J, i.e.,
Vk
~
K.
(6)
(b) There exists a stationary optimal policy. (c) For every policy n* satisfying (6), the sequence {f.1t(x)} has at least one accumulation point for each XES. (d) If f.1*:S ~ Cis such that f.1*(x) is an accumulation point of [f.1t(x)} for each XES, then the stationary policy (f.1*, u"; . . .) is optimal. Proof (a)
We have Tk+l(J)(X)
= inf
H[x,u, Tk(J)],
UEU(X)
and the result follows from compactness of sets (5) and Lemma 3.1. (b) This part will follow immediately once we prove (c) and (d). (c) Let n* = (f.1~,f.1!, ... ) satisfy (6) and define .
k = 0,1, ...
58
4.
INFINITE HORIZON MODELS UNDER A CONTRACTION ASSUMPTION
We have from (2), (6), and the fact that T(J*) = J*, II(TI'~
Tn)(J) -
II(TI'~Tn)(J)-
(TI'~Tk)(J)11
J*II = IIT n+ 1(J) - T(J*)II ~ 0:11 Tn(J) - J*[I 'in ~ K, ~ 0:11 Tn(J) - Tk(J)11 ~ 0:11 Tn(J) - J*II + o:IIT\J) - J*II 'in ~ K,
k = 0,1, ....
From these two relations we obtain
H[ x, .u:(x), Tk(J)] ~ H[ x, .u:(x), Tn(J)] + 20:Ck ~ J*(x) + 30:Ck 'in ~ k, k ~ K.It follows that .u:(x) E Uk[ x, J*(x) + 30:Ck] for all n ~ k and k ~ K, and {.u:(x)} has an accumulation point by the compactness of U k[ x, J*(x) + 30:Ck]. (d) If .u*(x) is an accumulation point of {.u:(x)}, then .u*(X)E Uk[ x, J*(x) + 30:Ck] for all k ~ K, or equivalently, (TI'.Tk)(J)(x) ~ J*(x)
+ 30:Ck
'ixES,
k ~ K.
By using (2), we have, for all k,
Combining the preceding two inequalities, we obtain
°
'ixES,
k~K.
Since Ck ~ [cf. Proposition 4.2(c)], we obtain TI'.(J*) ~ J*. Using the fact that J* = T(J*) ~ T I'.(J*), we obtain T I'.(J*) = J*, which implies by Proposition 4.3 that the stationary policy (.u*, u", ...) is optimal. Q.E.D. Examples where compactness of sets (5) can be verified were given at the end of Section 3.2. Another example is the lower semicontinuous stochastic optimal control model of Section 8.3. 4.3
Computational Methods
There are a number of computational methods which can be used to obtain the optimal cost function J* and optimal or nearly optimal stationary policies. Naturally, these methods will be useful in practice only if they require a finite number of arithmetic operations. Thus, while "theoretical" algorithms which require an infinite number of arithmetic operations are of
4.3
59
COMPUTATIONAL METHODS
some interest, in practice we must modify these algorithms so that they become computationally implementable. In the algorithms we provide, we assume that for any J E Band s > there is available a computational method which determines in a finite number of arithmetic operations functions J. E B and fl. E M such that
°
J.
~
T(J)
+ s,
TI'P) ~ T(J)
+ c.
For many problems of interest, S is a compact subset of a Euclidean space, and such procedures may be based on discretization ofthe state space or the control space (or both) and piecewise constant approximations of various functions (see e.g., DPSC, Section 5.2). Based on this assumption (the limitations of which we fully realize), we shall provide computationally implementable versions of all "theoretical" algorithms we consider. 4.3.1
Successive Approximation
The successive approximation method consists of choosing a starting function J E B and computing successively T(J), T 2(J), . . . , Tk(J),. . . . By part (c) of Proposition 4.2, we have limk~oo IITk(J) - J*II = 0, and hence we obtain in the limit the optimal cost function J*. Subsequently, stationary optimal policies (if any exist) may be obtained by minimization for each XES in the optimality equation J*(x)
= inf H(x, u, J*). XEU(X)
If this minimization cannot be carried out exactly or if only an approximation to J* is available, then nearly optimal stationary policies can still be obtained, as the following proposition shows.
Let Assumption C hold and assume that ]* E Band
Proposition 4.5 fl E M are such that
where Cl
;;:::
0, C2
;;:::
°
are scalars. Then
J* ~ JI' ~ J*
+ [(2aCI + c2)(1 + a + ... + am - 1)/(1
- p)].
Proof Using (2) we obtain T iJ*) - aCI ~ T I'(J*)
and it follows that
s
T(J*)
+ C2 ~ T(J*) + (aCI + C2),
4.
60
INFINITE HORIZON MODELS UNDER A CONTRACTION ASSUMPTION
Using this inequality and an argument identical to the one used to prove (4) in Proposition 4.2, we obtain our result. Q.E.D. An interesting corollary of this proposition is the following. Corollary 4.5.1 Let Assumption C hold and assume that S is a finite set and U(x) is a finite set for each XES. Then the successive approximation method yields an optimal stationary policy after a finite number of iterations in the sense that, for a given J E 13, if n* = (/16,/1'f,... ) En is such that (T /1~ Tk)(J)
=
r-: I(J),
k = 0, 1,... ,
then there exists an integer k such that the stationary policy (/1t, /1t, . . .) is optimal for every k ~ k.
Proof Under our finiteness assumptions, the set M is a finite set. Hence there exists a scalar e* > 0 such that J/1 S J* + e* implies that (/1,/1,...) is optimal. Take k sufficiently large so that IITk(J) - J*II S B for all k ~ k, where s satisfies Zest l + ()( + ... + ()(rn-l)(1_ p)-1 S e*, and use Proposition 4.5. Q.E.D. The successive approximation scheme can be sharpened considerably by making use of the monotonic error bounds of the following proposition. Proposition 4.6 Let Assumption C hold and assume that for all scalars r =F 0, J E B, and XES, we have
(7) where ()(1, ()(z are two scalars satisfying 0 S XES, and k = 1,2, ... , we have
()(1
Tkrn(J)(x) + bk S r»: llrn(J)(x) + bk+ 1 S J*(x) S T(k+l 1rn(J)(x)
S ()(z < 1. Then for all J E
+ Dk+ 1 S
Tkrn(J)(x)
+ Db
13,
(8)
where
bk = min 1_()(_1- db ~dkJ, 1 - ()( z Ll-()(1 dk = inf[Tkrn(J)(x) - T(k- 11rn(J)(X)], XES
ak = sup[Tkrn(J)(x) -
T(k-l 1rn(J)(X)].
XES
Note If B = 13 we can always take ()(z = p, ()(1 = 0, but sharper bounds are obtained if scalars ()(1 and ()(z with 0 < 0:1 and/or ()(z < p are available.
Proof It is sufficient to prove (8) for k = 1, since the result for k > 1 then follows by replacing J by T(k -1 lrn(J). In order to simplify the notation, we assume m = 1. In order to prove the result for the general case simply
4.3
61
COMPUTATIONAL METHODS
replace T by T" in the following arguments. We also use the notation dz
Relation (7) may also be written (for m
= d',
= 1) as
T(J) + min[D:lr,D:zr] ~ T(J + r) ~ T(J) + max[D:lr,D:zrJ.
We have for all
(9)
XES,
J(x)
+d~
T(J)(x).
(10)
Applying T on both sides of (10) and using (9) and (10), we obtain J(x) + mined + D:ld,d + Ctzd] ~ T(J)(x) + min[D:ld,D:zd] ~ T(J
+ d)(x) ~
TZ(J)(x).
(11)
By adding min [lXid, Ct~d] to each side of these inequalities, using (9) (with J replaced by T(J) and r = min[Ctld, D:zd]), and then again (11), we obtain J(x)+min[d + Ctld + Ctid, d + Ctzd + Ct~d]
~ T(J)(x) +min[IXld + Ctid, Ctzd + D:~d] ~ TZ(J)(x)+ minelXid, Ct~d] ~ T[T(J) + mineIXI d,Ctzd]](x)
~ T 3(J)(x).
Proceeding similarly, we have for every k = 1,2, ... ,
~
J(x) + min [to Ctild, ito Cthd]
T(J)(x) + min[JI D:ild, i t CthdJ l
~ ... ~ Tk(J)(x) ~
Taking the limit as k -4
00,
+ min[Ct~d,D:~d]
Tk+ I(J)(X).
we have
l 1 d ] ~ T(J)(x) + mm . [Ct Ctz ] J(x) + min [ - -1d , - -1-- d, -1--d 1 - Ct l 1 - Ctz - Ct l - Ct z
~
TZ(J)(x) +
~
J*(x).
min[~d,l-IXI ~dJ 1- Ctz
Also, we have from (11) that mineCtl d, IXzd] ~ TZ(J)(x) - T(J)(x), and by taking the infinum over
XES,
we see that
min[D:ld,Ctzd] s d'.
(12)
4.
62
INFINITE HORIZON MODELS UNDER A CONTRACTION ASSUMPTION
It is easy to see that this relation implies
min[~d,1-(J(ll-(J(2 ~dJ ~
-d', ~dIJ.
min[_(J(_1 1-(J(1
(13)
1-(X2
Combining (12) and (13) and using the definition of b l and b2 , we obtain T(J)(x)
+ hI
~ T
2(J)(x)
+ b2 .
Also from (12) we have T(J)(x) + b, ~ J*(x), and an identical argument shows that T 2(J)(x) + b 2 ~ J*(x). Hence the left part of (8) is proved for k = 1, rn = 1. The right part follows by an entirely similar argument. Q.E.D. Notice that the scalars b, and Ok in (8) are readily available as a byproduct of the computation. Computational examples and further discussion of the error bounds of Proposition 4.6 may be found in DPSC, Section 6.2. By using the error bounds of Proposition 4.6, we can obtain J* to an arbitrary prespecified degree of accuracy in a finite number of iterations of the successive approximation method. However, we still do not have an implementable algorithm, since Proposition 4.6 requires the exact values of the functions Tk(J). Approximations to Tk(J) may, however, be obtained in a computationally implementable manner as shown in the following proposition, which also yields error bounds similar to those of Proposition 4.6. Proposition 4.7 Let Assumption C hold. For a given J consider a sequence {Jd c B satisfying T(J)
~
JI
~
T(J k) s i., I
T(J)
s
E
Band s > 0.
+ s,
T(Jd
+ s,
k = 1,2, ....
Then
k
=
0,1 •... ,
(14)
where B = £(1
+ (X + ... + (Xm-I)/(1
- p)
Furthermore, if the assumptions of Proposition 4.6 hold, then for all and k = 1,2, ...
XES
where
bk = inf[Jkm(x) - J(k-l)m(X)] - 2B. XES
~k = SUp[Jkm(x) - J(k-l)m(X)] XES
+ n.
4.3
63
COMPUTATIONAL METHODS
Proof
We have J m S T(Jm-d + s S T[T(J m- z) + e] + I; S TZ(Jm_z) + (1 + a)e S T Z[T(J m_3)+e] +(1 +a)e S T 3 (J m _ 3 ) + (1 + a + aZ)1;
s
Tm-I(Jd + (1 + a + ... + am-z)e
S Tm(J)
+ (1 + a + ... + am-I)e.
An identical argument yields J Zm S T"'(Jm) + (1 + a + ... + a",-I)e.
and we also have Using the preceding three inequalities we obtain IIJzm - TZm(J)11 S IIJzm - Tm(Jm)11 + II Tm(Jm) - Tzm(J)11 S (1 + p )e(1 + a + + am - I).
Proceeding similarly we obtain, for k = 1,2,
,
IIJ k", - Tkm(J)11 S (1 + P + ... + l-l)e(1 + a + ... + am-I).
and (14) follows. The remaining part ofthe proposition follows by using (14) and the error bounds of Proposition 4.6. Q.E.D. Proposition 4.7 provides the basis for a computationally feasible algorithm to determine J* to an arbitrary degree of accuracy, and nearly optimal stationary policies can be obtained using the result of Proposition 4.5. 4.3.2
Policy Iteration
The policy iteration algorithm in its theoretical form proceeds as follows. An initial function Ilo E M is chosen, the corresponding cost function J Mn is computed, and a new function III EM satisfying T/l 1(J/lO ) = T(J , I O ) is obtained. More generally, given Ilk EM, one computes J /l1< and a function Ilk+.) E M satisfying Tl'k+ P/lJ = T(J/lJ. and the process is repeated. When S is a finite set and U(x) is a finite set for each XES, one can often compute J Ilk in a finite number of arithmetic operations, and the algorithm can be carried out in a computationally implementable manner. Under these circumstances, one obtains an optimal stationary policy in a finite number of iterations. as the following proposition shows.
64
4.
INFINITE HORIZON MODELS UNDER A CONTRACTION ASSUMPTION
Proposition 4.8 Let Assumption C hold and assume that S is a finite set and V(x) is a finite set for each XES. Then for any starting function flo E M, the policy iteration algorithm yields a stationary optimal policy after a finite number of iterations, i.e., if {fld is the generated sequence, there exists an integer k such that (flb flb' .. ) is optimal for all k :2: k. Proof
We have, for all k, T llk+ 1(JIlJ
= T(JIlJ s Tllk(JIlJ = i.;
Applying T Ilk + 1 repeatedly on both sides, we obtain
P IlJ s s
T~k+
s ... S
T~k-+~(JIlJ J Ilk'
N
=
T llk+ 1(JIlJ
= T(JIlJ
1, 2, . . .
(15)
By Proposition 4.2, (16)
If (flbflk"") is an optimal policy, then J llk+ 1 = J ilk = J* and \flk+l, flk+ 1,· .. ) is also optimal. Otherwise, we must have JIlk + 1 (x) < J/1k(X) for some
XES, for if J llk+ = J llk, then from (15) and (16) we have T(JIlJ = J llk, which implies the optimality of (flb flb ... ). Hence, either (flb flb ... ) is optimal or else (flk+ b flk + 1, . . . ) is a strictly better policy. Since the set M is finite under our assumptions, the result follows. Q.E.D. 1
When Sand Vex) are not finite sets, the policy iteration algorithm must be modified for a number of reasons. First, given flb there may not exist a flk+ 1such that T Ilk + 1 (J IlJ = T(JIlJ· Second, even if such a flk+ 1exists, it may not be possible to obtain T llk+ JJIlJ and J Ilk + 1 in a computationally implementable manner. For these reasons we consider the following modified policy iteration algorithm. Step 1
Choose a function flo EM and positive scalars y, b, and
Step2
Given flkEM, find J llkE J3 such that
Step 3
Find flk+l EM such that IIT llk+ 1 ( JIlJ - T(JIlJII
IIJ
IITllk+l(Jllk) - Jllkll s
ll k -
E.
Jllkll S ypk.
s bl.
If
E,
stop. Otherwise, replace flk by flk+ 1 and return to Step 2. Notice that Steps 2 and 3 of the algorithm are computationally implementable. The next proposition establishes the validity of the algorithm. Proposition 4.9 Let Assumption C hold. Then the modified policy iteration algorithm terminates in a finite, say k, number of iterations, and
4.3
65
COMPUTATIONAL METHODS
the final function /11< satisfies
I I
J _Ilk
J*II -< ypI< + (e + bpl<)(1 +1 a+ ... + am-I) . -p
(17)
Proof We first show that ifthe algorithm terminates at the Kth iteration, then (17) holds. Indeed we have
I Til;;
IIJ,,;; +
Jllkll S ypl<,
(18)
II Jllkll s
(19)
,(JIl;;) - T(JIl;;) S bl,
IITIl;;+,(Jllk) -
(20)
e.
For any positive integer n, we have
IIJllk- J*II S IIJllk+
I I
II + ... J*II·
Tm(Jllk) I + Tm(Jllk) - T 2m(Jllk) IIT(n-l)m(J Il;;) - T"m(Jllk)1 + IITnm(Jllk) -
I
From this relation we obtain, for all n ;::: 1,
IIJllk- J*II s
(1 + P + ... + pn-I)IIJllk - Tm(Jllk) II + I Tnm(Jllk) - J*II.
(21)
We also have
IIJllk-
Tm(Jllk)11 S
IIJIl;; -
T(Jllk)11
+ IITm-I(Jllk) -
+ IIT(Jllk) -
T
I + ...
2(J llk)I
Tm(JIl;;)II,
from which we obtain, by using (2),
IIJllk-
II
Tm(Jllk) S (1
+ a+ ... + am-I )IIJllk- T(Jllk)ll.
(22)
Combining (21) and (22), we obtain, for all n;::: 1,
IIJllk- J*II s
(1 + P +
... + pn-I)(1 + a+ ... + am-I)IIJllk- T(JIl;;) II
+ I Tnm(Jllk) -
Taking the limit as n ~
IIJllk- J*II s
00,
(1 +
J*II.
we obtain
a+ ... + am-I)IIJllk-
T(J Il;;)II/(1 - p).
(23)
Using (18), we also have IIJllk -
J*II S
IIJllk -
JIl;;11 + IIJllk- J*II s
yl
+ IIJllk-
J*II.
(24)
From (19) and (20) we obtain
IIJllk-
II
T(Jllk) S e + bpI<.
(25)
By combining (23)-(25), we obtain (17). To show that the algorithm will terminate in a finite number of iterations, assume the contrary, i.e., assume we have I TIlk+JJIlJ - Jllkll > e for all k,
66
4.
INFINITE HORIZON MODELS UNDER A CONTRACTION ASSUMPTION
and the algorithm generates an infinite sequence k,
{.ud
eM. We have, for all
IITllk+PIlJ - T(JIlJII:s; IITllk+I(JIlJ - Tllk+I(JIlJII + IIT llk+ Pllk) - T(JIlJII + IIT(JIlJ - T(J,1kJII :s; (r5 + 2ay)pk.
This relation yields, for all k, Tllk+PIlJ:S; T(JIlJ
+ (r5 + 2aY)l:s; Tllk(JIlJ + (r5 + 2aY)l = J ilk + (r5 + 2ay)pk.
(26)
Applying T Ilk + 1 to both sides of (26) and using (26) again, we obtain T;k+I(JIlJ:S; Tllk+I(JIlJ + a(r5 + 2aY)l:s; T(JIlJ :s; J ilk + (l + a)(r5 + 2ay)pk.
+ (1 + a)(r5 + 2aY)l
Proceeding similarly, we obtain, for all k, T;k+PIlJ:s; T(JIlJ + (l + a + ... + a m - I)(r5 + 2ayJl :s; J ilk + (1 + a + ... + a m - I )(r5 + 2ayJ pk.
Applying T;k+ repeatedly to both sides, we obtain, for all nand k; I
T:~+I(JIlJ:S;
T(JIlJ
+ (l +
P
+ ... + pn-I)(1 + a + ... + am - I )(r5 + 2ay)l. (27)
Denote
A = (1
+ a + ... + a m - I )(r5 + 2ay)/(1 - p).
Then by taking the limit in (27) as n -> J llk+ 1 :s; T(JIlJ
00,
+ Apk,
we obtain
k = 0,1, ....
By repeatedly applying T to both sides, we obtain J Ilmn < Tm(JIl(n-l)m ) -
Let A =
A(am- I
+
).(a m -
I
+ am-Zp + ... + pm-I)p(n-llm.
(28)
+ am-Zp + ... + pm-I). Then (28) can be written as
J/lnm:S; Tm(J/l(n_,)J + Ap(n-I)m,
n
Using (29) repeatedly, we have, for all n, J/lnm:S; Tm(J/l(n_I)J + Ap(n-I)m :s; T m[Tm(Jf1(n-Z)m ) + Ap(n - Z)mJ + Ap(n - I)m :s; T zm(J/l(n_2,J + A[p(n-llm + pp(n-Z)mJ
= 1,2, ....
(29)
4.3
67
COMPUTATIONAL METHODS
Since lp(n-k-1)m::::; pn-1 for all k
=
J*::::; Jl'nm::::; ynm(Jl'o) Since limn~x(npn-1) tends to J* as n
= ---> 00,
° and
0, 1,.. . .n - 1, this inequality yields
+ npn-1J,
n = 1,2, ....
Iimn~x IITnm(Jl'o) and it follows that
J*II = 0,
the right side
lim IIJl'nm - J*II = 0.
n-r cc
Since by construction IITl'nm+,(Jl'nm) -
Jl'nJ ::::;
IITl'nm+,(Jl'nJ - T(Jl'nJII + IIT(Jl'nm) - T(Jl'nm) II + IIT(Jl'nm) - T(J*)II
+ 111*'- Jl'nmll + IIJl'nm - Jl'nJ ::::; (15 + lY.y + y)pnm + (1 + 1Y.)IIJl'nm - J*II, we conclude that This contradicts our assumption that IITl'k+JJI'J for every k. 4.3.3
Jl'kll
>
I;
Q.E.D.
Mathematical Programming
Let the state space S be a finite set denoted by and assume 13 = B. From part (a) of Proposition 4.2, we have that whenever J E Band J::::; T(J), then J::::; J*. Hence the values J*(xd, . . . , J*(x n) solve the mathematical programming problem n
maximize subject to
L Ai
;=1
Ai::::; ni»; u, J ;J,
i
= 1, ... .n,
UE
U(x;),
where J A is the function taking values J A(X;) = A;, i = 1,... ,n. If U(x;) is a finite set for each i; then this problem is a finite-dimensional (possibly nonlinear) programming problem having a finite number of inequality constraints. In fact, for the stochastic optimal control problem of Section 2.3.2. this problem is a linear programming problem, as the reader can easily verify (see also DPSC, Section 6.2). This linear program can be solved in a finite number of arithmetic operations.
4.
68 4.4
INFINITE HORIZON MODELS UNDER A CONTRACTION ASSUMPTION
Application to Specific Models
The results of the preceding sections apply in their entirety to the problems of Sections 2.3.3 and 2.3.5 if Ct. < 1 and g is a nonnegative bounded function. Under these circumstances Assumption C is satisfied, as we now show. Stochastic Optimal Control-Outer Integral Formulation Proposition 4.10
Consider the mapping
H(x, u, J) = E* {g(x, u, w) + Ct.J[f(x, u, w)] [x, u}
of Section 2.3.3 and let Jo(x) = 0 for bE R there holds
o~
g(x, u, w) ~ b
VXES.
VXES,
(30)
Assume that Ct. < 1 and for some UE
U(x),
WE
W.
Then Assumption C is satisfied with 11 equal to the set of all nonnegative functions J E B, the scalars in (2) and (3) equal to 2Ct. and o, respectively, and m=1. Note If the special cases of the mappings of Sections 2.3.1 and 2.3.2 are considered, then B can be taken equal to B, and the scalars in (2) and (3) can both be taken equal to o: Proof Clearly JoEBand T(J), T IL(J)E11forallJEBand,uEM. We also have, for any tt = (,uo, ,ul,' .. ) E II, J o ~ T ILo(1o)
s ... ~
(T/LO' .. TILk)(J O)
~
(T lLo' .. TILk+ )(10)
and hence limN-+oo(T lL o' .. T ILN_ J(Jo)(x) exists for all verify inductively using Lemma A.2 that
XES.
L Ct.kb~b/(l-Ct.)
k=O
VXES,
... ,
It is also easy to
N-l
(TILO"'TILN_J(JO)(x)~
~
N
= 1,2, ....
Hence limN-+x,(TILo'" TILN_1)(JO)(x) is a real number for every x. We have for all XES, J, 1'EB, ,uEM, and WE W, g[ X, ,u(x), w]
+ Ct.J[f(x, ,u(x),w)] s g[ X, ,u(x), w] + Ct.J'[.f(x,,u(x), W)] + Ct.1IJ - 1'11.
(31)
Hence, using Lemma A.3(b), E*{g[ X, Il(X), w] + Ct.J[f(x, ,u(x),w)] [x, u} ~ E*{g[x,,u(x),w]
+ Ct.J,[f(x,,u(x),w)]lx,u} + 2Ct.IIJ - Jill.
(32)
4.4
69
APPLICAnON TO SPECIFIC MODELS
Hence T,/J)(x) - TIl(J')(x) ~
2(X1IJ - 1'11·
J'II. Therefore,
A symmetric argument yields T Il(1')(x) - TIl(J)(x) ~ 2(XIIJ jT,/J)(x) - TIl(J')(x)1 ~
2(XliJ- I'll
VXES,
fJ-EM.
Taking the supremum of the left side over XES, we have (33) which shows that (2) holds. If J, J' E E, then from (31), Lemma A.2, and Lemma A.3(a), we obtain in place of (32)
E*{g[x, fJ-(x), wJ + (XJ[j(x, fJ-(x), wlJlx, u} ~ E*{g[x, fJ-(x), wJ + (XJ'[j(x, fJ-(x), w)J[x, u} + (XIIJ
-
J'II,
and proceeding as before, we obtain in place of (33)
II (XliJ- J'II
VfJ-EM,
IITIl(J) - T Il(1') ~
This shows that (3) holds with p = a.
J,J'EB.
Q.E.D.
Minimax Control
Proposition 4.11
Consider the mapping
Hix, u, J)
=
sup
(g(x, u, w) + (XJ[j(x, u, w)J}
(34)
WEW(X,U)
of Section 2.3.5 and let J o(x) bE R, there holds
o~
=
g(x, u, w) ~ b
0 for VxE S. Assume that VXES,
UEU(X),
(X <
1 and for some
wEW.
Then Assumption C is satisfied with E equal to E, m = 1, and the scalars in (2) and (3) both equal to o:
Proof The proof is entirely similar to the one of Proposition 4.10. Q.E.D. For additional problems where the theory of this chapter is applicable, we refer the reader to DPSC. An example of an interesting problem where Assumption C is satisfied with m> 1 is the first passage problem described in Section 7.4 of DPSC.
Chapter 5
Infinite Horizon Models under Monotonicity Assumptions
5.1
General Remarks and Assumptions
Consider the infinite horizon problem minimize
J,,(x) = lim (TJ1.JJ1.1· .. TJ1.N_,)(JO)(X)
subject to
n = (JlO,Jll'" .)Ell.
N--+oo
(1)
In this chapter we impose monotonicity assumptions on the function J 0 which guarantee that J" is well defined for all n E Il. For every result to be shown in this chapter, one of the following two assumptions will be in effect. Assumption I
(Uniform Increase Assumption) There holds J o(x)
Assumption D
s
H(x, U, J 0)
VXES,
UE
U(x).
(2)
(Uniform Decrease Assumption) There holds J o(x) ~ H(x, U, J 0)
VXES,
UE
U(x).
(3)
It is easy to see that under each ofthese assumptions the limit in (1)is well defined as a real number or ± 00. Indeed, in the case of Assumption I we have
70
5.2
71
THE OPTIMALITY EQUATION
from (2) that J o::; TIlO(Jo)::; (TIlOTIll)(J o)::;"'::; (TIlOTIl1'" TIlN_,)(JO)::;"',
while in the case of Assumption D we have from (3) that J o ~ Tllo(Jo) ~ (TlloTIl.)(Jo)
~
...
~
(TIlOTIlI' .. TIlN_.)(Jo) ~ ....
In both cases, the limit in (1) clearly exists in the extended real numbers for each XES. In our analysis under Assumptions I or D we will occasionally need to assume one or more ofthe following continuity properties for the mapping H. Assumptions I.1 and 1.2will be used in conjunction with Assumption I, while Assumptions D.1 and D.2 will be used in conjunction with Assumption D. Assumption 1.1
all k, then
If {J d c F is a sequence satisfying J 0
lim H(x, U, Jd = H(X'U' lim J k )
\!XES,
k-oo
k~oo
::;
Jk
::;
J k+ 1 for
(4)
UEU(X).
Assumption 1.2 There exists a scalar !Y. > 0 such that for all scalars r > 0 and functions J E F with J 0 ::; J, we have H(x, u, J) ::; H(x, U, J
Assumption D.l
all k, then
+ r) ::; H(x, u, J) + ar
If {J d
c
F is a sequence satisfying J k+ 1
lim H(x, u, J k ) = H(X' u, lim J k )
k-» co
\!XES,
\!XES,
k-+oo
UEU(X). ::;
Jk
::;
UE U(x).
(5)
J 0 for
(6)
Assumption D.2 There exists a scalar !Y. > 0 such that for all scalars r > 0 and functions J E F with J ::; J 0, we have H(x, u,J) - ar ::; H(x, u,J - r) ::; H(x, u, J)
5.2
\!XES,
UEU(X).
(7)
The Optimality Equation
We first consider the question whether the optimality equation J* = T(J*) holds. As a preliminary step we prove the following result, which is of independent interest. Proposition 5.1 Let Assumptions I, I.1, and 1.2 hold. Then given any e > 0, there exists an s-optimal policy, i.e., an, E IT, such that J* ::; J".::; J*
+ e.
(8)
72
5.
INFINITE HORIZON MODELS UNDER MONOTONICITY ASSUMPTIONS
Furthermore, if the scalar o: in 1.2 satisfies to be stationary.
lY.
< 1, the policy
Let {Bd be a sequence such that Bk >
Proof
7[,
can be taken
°
for all k and (9)
For each XES, consider a sequence of policies {7[k[X]} 7[k[X]
=
C
Il of the form
.. .),
(,ut[x],,u~[x],
such that for k = 0,1, ... J"dXl(x):s J*(x)
+ Bk
't:/XES.
(10)
Such a sequence exists, since we have J*(x) > - 00 under our assumptions. The (admittedly confusing) notation used here and later in the proof should be interpreted as follows. The policy 7[k[X] = (,ut[x],,uHx], ...) is associated with x. Thus ,u7[ x] denotes, for each XES and k, a function in M, while ,u7[x ](z) denotes the value of ,u7[x] at an element z E S. In particular, ,u7[X](x) denotes the value of ,u7[x] at x. Consider the functions Jik E M defined by Jik(X)
= ,ut[x](x)
(11)
't:/XES
and the functions J k defined by Jk(x)
=
H[ x, Jik(X), ~~~
(T
Il~[xl'
.. T 1l7[xl)(J
o)J
't:/XES,
k = 0,1, ....
(12)
0,1, ....
(13)
By using (10), (11), I, and 1.1, we obtain Jk(x)
= lim (Tllk[xJ" .. Tllk[xl)(JO)(x) i-« 00
0
I
't:/XES,
k
=
We have from (12), (13), and 1.2 for all k = 1,2, ... and XES Tllk_Pk)(X)
=
H[X,Jik-l(X),lk]
:S H[X,Jik-l(X),(J*
+ Bd]
:S H[x, Jik-l(X),J*]
+ lY.Bk
:S H[X,Jik-l(X), ~im(TIl~-I[XJ" I~OO
.. T Il7-1[Xl)(J O)] + lY.Bk
= J k - 1(X) + lY.Bb and finally,
k = 1,2, ....
(14)
5.2
73
THE OPTIMALITY EQUATION
Using this inequality and 1.2, we obtain TPk_,[Tpk_l(Jd]::;; T/lk_2(Jk-l + W-;k) ::;; T l k ,(Jk- d + C(2 Ck::;; J k- Z +
(C(Ck-1
+ C(zed.
Continuing in the same manner, we obtain for k = 1,2, ... (T po' .. T lik _ ,l(Jd::;; J o +
(W-;I
+ ... + C(ked::;; J* +
(_I
/=0
Since J o ::;; J k , it follows that (Til,,'" Tpk_,)(Jo)::;; J*
+
(to
rxiB i).
rxie;}
Denote TC e = (710,711" .. ). Then by taking the limit in the preceding inequality and using (9), we obtain
If.C( < 1, we take Bk = G(l (10). The stationary policy XES, satisfies I n ,, ::;; J* + E.
7["
TCk[X] = (llo[X].PI[X], ... ) in = (71,71, ... ), where 71(x) = Po[x] (x) for all
Ct.) for all k and
Q.E.D.
It is easy to see that the assumption o: < 1 is essential in order to be able to take TC" stationary in the preceding proposition. As an example, take S = [0], U(O) = (0, CD), J 0(0) = 0, H(O, U, J) = U + J(O). Then J*(O) = 0, but for any Il E M, we have Jll(O) = CD. By using Proposition 5.1 we can prove the optimality equation under I, I.1, and 1.2.
Proposition 5.2
Let I, 1.1, and 1.2 hold. Then J*
=
T(J*).
Furthermore, if J' E F is such that J' ;:::: J 0 and J' ;:::: T(J'), then J' ;:::: J*. Proof In(X)
For every =
TC
= (Po, PI' ...)E nand
XE
S, we have, from 1.1,
lim (TI/JIlI'" Tllk)(JO)(X) k-.)o((J
=
B~'
Tllo[lim. (Till' .. T/lJ(JO)](X);:::: Tllo(J*)(x);:::: T(J*)(x). k- -c
taking the infimum of the left-hand side over J* ;:::: T(J*).
TC E
Il, we obtain
74
5.
INFINITE HORIZON MODELS UNDER MONOTONICITY ASSUMPTIONS
To prove the reverse inequality, let let 1f = Cilo, ill" ..) be such that
=
and
+ El'
Tilo(J*) :s; T(J*)
where 1fj
El
E2
be any positive scalars and
J ff, :s; J*
+ E2'
(ilj,il2," .). Such a policy exists by Proposition 5.1. We have J ff = lim (TilJil" .. Til.)(Jo) k-r co
= Tilo(Jff):S; Since J* :s; J ff and
Ej
and
Tilo(J*)
E2
+ iXE2 :s; T(J*) + (Ej + iXB2)'
can be taken arbitrarily small, it follows that J* :s; T(J*).
Hence J* = T(J*). Assume that J' E F satisfies J' 2 J 0 and J' 2 T(J'). Let {Bd be any sequence with ei > 0 and consider a policy 1f = (ilo,il" .. .)EIT such that
+ Eb
T ilJJ') :s; T(J')
k = 0,1, ....
We have, from 1.2, J*
= inf lim (T Jlo· .. TJl.)(Jo) a s Il k-e co
:s; inf lim inf(T llo ••• T Jl.)(J') k-r cc
1tEIl
:s; lim inf(Tilo ... T ilk)(J') k~oo
:s; liminf(T ilo ·
· ·
Tilk_)[T(J')
+ Ek]
k~oo
:s; lim inf( T ilo ... Tilk_2 T flk _,)(J' k~
00
:s; lim inf[(T flo· tc-r ca
Since we may choose L~o
iXiE;
..
T flk- )(1')
+ Ek)
+ iXkEk]
as small as desired, it follows that J* :s; J'. Q.E.D.
The following counterexamples show that I.1 and 1.2 are essential in order for Proposition 5.2 to hold.
5.2
75
THE OPTIMALITY EQUATION
COUNTEREXAMPLE 1 Take S={O,l}, C=U(O)=U(I)=(-l,O], 1 0 (0) = 1 0 (1) = -1, H(O, u,J) = u if 1(1) S -1, H(O, u, 1) = if 1(1) > -1, and H(l,u,l) = u. Then (T llo ' " TIlN_J(Jo)(O) = 0 and (T llo ' " T IlN_,)(l o)(l) = !-io(l) for N 2 1. Thus 1*(0) = 0, 1*(1) = -1, while T(l*)(O) = -1, T(l*)(I) = -1, and hence 1* -=I T(l*). Notice also that 1 0 is a fixed point of T, while 1 0 S 1* and 1 0 -=I 1*, so the second part of Proposition 5.2 fails when 1 0 = J'. Here I and 1.1 are satisfied, but 1.2 is violated.
°
°
COUNTEREXAMPLE 2 Take S = {a, I}, C = U(O) = U(1) = {a}, 1 0(0) = 1 0 (1) = 0, H(O,O,l) = if 1(1) < 00, H(O,O,l) = 00 if l(l) = 00, H(I,O,l) = 1(1) + 1. Then (T ll o ' " T1'N_ J(Jo)(O) = and (T ll o ' " T IlN_ J(Jo)(l) = N. Thus 1*(0)=0, 1*(1)= 00. On the other hand, we have T(l*)(O) = T(l*)(l) = 00 and 1* -=I T(l*). Here I and 1.2 are satisfied, but 1.1 is violated.
°
As a corollary to Proposition 5.1 we obtain the following. Corollary 5.2.1
Let I, 1.1, and 1.2 hold. Then for every stationary policy
n = (/1, /1, ... ), we have
1 1l = T/l(l/l)'
Furthermore, if l' E F is such that l' 2 1 0 and J' 2 T /l(1'), then l' 2 1 u : Proof Consider the variation of our problem where the control constraint set is U,.,(X) = {/1(x)} rather than U(x) for 't/XES. Application of Proposition 5.2 yields the result. Q.E.D.
We now provide the counterpart of Proposition 5.2 under Assumption D. Proposition 5.3
Let D and D.1 hold. Then
1* = T(l*). Furthermore, if l'EF is such that J' Proof
s
1 0 and l'
s
T(l'), then l'
s
1*.
We first show the following lemma.
Lemma 5.1
Let D hold. Then (15)
1* = lim 11., N-+
00
where 11. is the optimal cost function for the N-stage problem. Proof Clearly we have 1* s 11. for all N, and hence 1* Also, for all t: = (/10' /11" .. ) E 11, we have (T ll o '
••
s
limN-+oo 11..
T IlN_ 1)(1 0 ) 2 11.,
and by taking the limit on both sides we obtain 1" 2 limN-+oo 11., and hence 1* 2 limN-+oo 11,. Q.E.D.
76
5.
INFINITE HORIZON MODELS UNDER MONO TONICITY ASSUMPTIONS
Proof (continued) We return now to the proof of Proposition 5.3. An argument entirely similar to the one of the proof of Lemma 5.1 shows that under D we have for all XES
lim inf H(x, u, Jt) = inf lim H(x, u, Jt).
N-OOUEU(X)
UEU(x)N-rx
(16)
Using D.1, this equation yields lim T(Jt) = T(lim Jt).
N-oo
(17)
N--+r/J
Since D and D.I are equivalent to Assumption F.1' of Chapter 3, by Corollary 3.1.1 we have Jt = TN(Jo),from which we conclude that T(Jt) = T N+ l(J O ) = Jt+l' Combining this relation with (15) and (17), we obtain J* = T(J*). To complete the proof, let J' E F be such that J' ::::; J 0 and J' ::::; T(J'). Then we have J*
inf lim (T llo '
"
TIlN_1)(J O)
:2: lim inf(T ll o '
"
TIlN_1)(J O )
:2: lim inf(T ll o '
"
T IlN_ 1)(J'):2: lim T N(J'):2: J'.
=
ITEllN-C()
N-oo nEll
N-oo rr e Fl
Hence J* :2: J'.
N-oo
Q.E.D.
In Counterexamples 1 and 2 of Section 3.2, D is satisfied but D.I is violated. In both cases we have J* # T(J*), as the reader can easily verify. A cursory examination of the proof of Proposition 5.3 reveals that the only point where we used D.1 was in establishing the relations limN~~
T(Jt)
=
T(1imN~C(Jt)
and Jt = TN(J 0)' If these relations can be established independently, then the result of Proposition 5.3 follows. In this manner we obtain the following corollary. Corollary 5.3.1 Let D hold and assume that D.2 holds, S is a finite set, and J*(x) > - UJ for all XES. Then J* = T(J*). Furthermore, if J' E F is such that J' s J o and J' s T(J'), then J' s J*. Proof A nearly verbatim repetition of the proof of Proposition 3.1(b) shows that, under D, D.2, and the assumption that J*(x) > - UJ for all XES, we have Jt = TN(J O) for all N = 1,2, .... We will show that
lim H(X,U,Jt)=H(X,U, limJt)
N-+oo
N-oo
VXES,
UEU(X).
Then using (16) we obtain (17), and the result follows as in the proof of Proposition 5.3. Assume the contrary, i.e., that for some .X E S, it E U(.\'), and
5.2
77
THE OPTIMALITY EQUATION
e > 0, there holds
no: ii, It)
- e> H(X, ii, lim I »-:«.
N) '
k = 1,2, ....
From the finiteness of S and the fact that l*(x) = all x, we know that for some positive integer K lt - (e/ct) slim
N-""x)
lN
Vk
111(x) > - o: for
limN~'l'
~
K.
By using 0.2 we obtain for all k ~ K H(x,ii,lt) - e S H(x,ii,Jt - (eM) s H(X, ii, lim I N) ' N~oo
which contradicts the earlier inequality.
Q.E.D.
Similarly, as under Assumption I, we have the following corollary. Corollary 5.3.2
n
Let 0 and 0.1 hold. Then for every stationary policy
= (fl, u; . . .), we have
1", = T",(l",).
Furthermore, if J' E F is such that r S
10 and l
f
S T",(l
f
then l S s ; f
),
It is worth noting that Propositions 5.2 and 5.3 can form the basis for computation of 1* when the state space S is a finite set with n elements denoted by Xl, X 2, ,Xn • It follows from Proposition 5.2 that, under I, 1.1, and 1.2, l*(x d, ,1*(xn ) solve the problem n
minimize
I
1
i~
subject to
Ai inf H (Xi' U, 1 ;J,
}'i ~
i
= 1, ... ,n,
i
=
UEU(xil
1, ...
,n,
where l A is the function taking values l A(xJ = Ai, i = 1,... ,no Under 0 and 0.1, or 0, 0.2, and the assumption that l*(x) > - o: for Vx E S, the corresponding problem is n
maximize
I
Ai
i~1
subject to
Ai s nc«; U, 1 A), Ai s 1 0 (x ;),
i= 1,
,n,
= 1,
.n.
i
UEU(Xi)
When U(Xi) is also a finite set for all i, then the preceding problem becomes a finite-dimensional (possibly nonlinear) programming problem.
78 5.3
5.
INFINITE HORIZON MODELS UNDER MONOTONICITY ASSUMPTIONS
Characterization of Optimal Policies
We have the following necessary and sufficient conditions for optimality of a stationary policy. Proposition 5.4 Let I, 1.1, and 1.2 hold. Then a stationary policy n* = (p*, u", ... ) is optimal if and only if TIl*(J*)
=
(18)
T(J*).
Furthermore, if for each XES, there exists a policy which is optimal at x, then there exists a stationary optimal policy. Proof If n* is optimal, then J Il* = J* and the result follows from Proposition 5.2 and Corollary 5.2.1. Conversely, if T 1l,(J*) = T(J*), then since J* = T(J*) (Proposition 5.2),it follows that TIl*(J*) = J*. Hence by Corollary 5.2.1, J Il* ~ J* and it follows that tt" is optimal. Un; = (,uL,,uL, ...) is optimal at XES, we have, from 1.1, J*(x)
=
J,,*(x) = lim (T Il* ... T Il* )(Jo)(x) x k----t>'XJ a.x k,x =
TIl'("x[lim(TIl~.x·
k---+G(J
.. Tllk,)(Jo)](X)
:2: T Il'(,)J*)(x):2: T(J*)(x) = J*(x).
Hence T Il'(,. )J*)(x) = T(J*)(x) for all XES. Define ,u* E M by ,u*(x) = ,u6.Ax). Then TIl*(J*) = T(J*) and, by the result just proved, the stationary policy (,u*,,u* •... ) is optimal. Q.E.D. Proposition 5.5
Let 0 and 0.1 hold. Then a stationary policy
ti"
=
()1*, )1*•...) is optimal if and only if (19)
Proof If n* is optimal, then J Il* = J*, and the result follows from Proposition 5.3 and Corollary 5.3.2. Conversely, if TIl*(J Il*) = T(J/I*), then we obtain, from Corollary 5.3.2, that J Il* = T(JIl*)' and Proposition 5.3 yields J Il* ::;; J*. Hence n* is optimal. Q.E.D.
Examples where n* satisfies (18) or (19) but is not optimal under 0 or I, respectively, are given in DPSC, Section 6.4. Proposition 5.4 shows that there exists a stationary optimal policy if and only if the infimum in the optimality equation J*(x)
= inf UEU(X)
Hix, u, J*)
5.3
79
CHARACTERIZATION OF OPTIMAL POLICIES
is attained for every XES. When the infimum is not attained for some XES, the optimality equation can still be used to yield an a-optimal policy, which can be taken to be stationary whenever the scalar o: in 1.2 is strictly less than one. This is shown in the following proposition. Proposition 5.6
Let I, I.I, and 1.2 hold. Then:
(a) If c > 0, {c;} satisfies Lk=Orxkck = c, Ci > 0, i = 0, 1,... , and n* = (flt flt , ...) E II is such that
k = 0, 1,... ,
then J* ::;; J". ::;; J*
+ c.
(b) If c > 0, the scalar rx in 1.2 is strictly less than one, and fl* E M is such that T Jl.(J*) ::;; T(J*)
+ c(1 -
«),
then
TJl~_
Proof (a) Since T(J*) = J*, we have T Jl~(J*) I to both sides we obtain (TJl~_.TJl~)(J*)::;;
TJl~_P*)
+ rxck::;;
::;; J*
J*
+
+ Cb and
(Ck-l
applying
+ rxck)·
Applying T Jl~ _2 throughout and repeating the process, we obtain, for every k = 1,2, ... ,
(TJl~· Since J 0
::;;
..
TJl~)(J*): ;
J*
+
(.t
1=0
rxiCi).
J *, it follows that
(TJl~·
..
TJl~)(Jo): ;
J*
+ (to rxic}
k = 1,2, ....
By taking the limit as k ~ 00, we obtain J". ::;; J* + c. (b) This part is proved by taking Ck = c(1 - rx) and flt = fl* for all k in the preceding proof. Q.E.D. A weak counterpart of part (a) of Proposition 5.6 under D is given in Corollary 5.7.1. We have been unable to obtain a counterpart of part (b) or conditions for existence of a stationary optimal policy under D.
80 5.4
5.
INFINITE HORIZON MODELS UNDER MONO TONICITY ASSUMPTIONS
Convergence of the Dynamic Programming AlgorithmExistence of Stationary Optimal Policies The DP algorithm consists of successive generation of the functions
r-:
T(J0), T 2(J 0)" ... Under Assumption I we have Tk(J 0) :s;; I(J0) for all k, while under Assumption D we have t-: I(J 0) :s;; Tk(J 0) for all k. Thus we can define a function J 00 E F by Joo(x) = lim TN(JO)(x) N-oo
(20)
VXES.
We would like to investigate the question whether J 00 = J*. When Assumption D holds, the following proposition shows that we have J w = J* under mild assumptions. Proposition 5.7
Let D hold and assume that either D.1 holds or else
Jt = TN(J O) for all N, where Jt is the optimal cost function of the N-stage
problem. Then
Jw
= J*.
Proof By Lemma 5.1 we have that D implies J* = lim N_ w Jt. Corollary 3.1.1 shows also that under our assumptions Jt = TN(J 0)' Hence J* = Iim, _ 00 TN(J 0) = J w • Q.E.D.
The following corollary is a weak counterpart of Proposition 5.1 and part (a) of Proposition 5.6 under D. Corollary 5.7.1 Let D hold and assume that D.2 holds, S is a finite set, and J*(x) > - 00 for all XES. Then for any e > 0, there exists an s-optimal policy, i.e., a ne E Il such that J* :s;; J "e
+e e/2(1 + a + ... + aN-I)
s
J*
Proof For each N, denote eN = and let nN = {f.1~,f.1~, ... ,f.1Z-1,f.1,f.1,···} be such that f.1EM, and for k ~ O,l, ... ,N - 1, f.1~ EM and (TI'r:TN-k-l)(JO):S;; TN-k(J O) + eN'
We have T I'~ (TI'~_,TI'~_
_
po) :s;; T(J 0) + eN' and applying T I'~
+ aeN :s;;
)(Jo) :s;; (TI'~_,T)(Jo)
_
T
2
to both sides, we obtain
2(Jo)
+ (1 + a)eN'
Continuing in the same manner, we have (TI'{i'" TI'~_I)(JO):S;;
TN(J o) + (1
+ a + ... + aN-1)eN,
from which we obtain, for N = 0,1, ... , J "N :s;; TN(J 0) + (e/2).
5.4
CONVERGENCE OF THE DYNAMIC PROGRAMMING ALGORITHM
81
As in the proof of Corollary 5.3.1 our assumptions imply that J~ = TN(J 0) for all N, so by Proposition 5.7, IimN~lf TN(J O ) = J*. Let N be such that TN(J0) S J* + (e/2). Such an N exists by the finiteness of S and the fact that J*(x) > - 00 for all XES. Then we obtain J"N S J* + e, and nN is the desired policy. Q.E.D. Under Assumptions I, 1.1, and 1.2, the equality J = J* may fail to hold even in simple deterministic optimal control problems, as shown in the following counterexample. if.)
Let S = [0,00), C = U(x)
COUNTEREXAMPLE 3 for \ixES, and H(x, u,J)
=
X
+ J(2x +
= (0, cojfor vxeS, Jo(x) =
\ixE S,
u)
UE
°
U(x).
Then it is easy to verify that J*(x)
= inf J,,(x) = 00 "Ell
while
Hence J 00(0) =
°
TN (J 0)(0) = 0,
N
= 1,2, ....
and J 00(0) < J*(O).
In this example, we have J*(x) = 00 for all XES. Other examples exist where J* # J 00 and J*(x) < 00 for all XES (see [SI4, p. 880J). The following preliminary result shows that in order to have J 00 = J*, it is necessary and sufficient to have J 00 = T(J (0)'
Proposition 5.8
Let I, 1.1, and 1.2 hold. Then
s
s
J*.
(21)
J 00 = T(J (0) = T(J*) = J*
(22)
J 00
T(J (0)
T(J*)
=
Furthermore, the equalities
hold if and only if (23) Proof Clearly we have J 00 s J" for all tt E Il, and it follows that J 00 s J *. Furthermore, by Proposition 5.2 we have T(J*) = J*. Also, we have, for all
k > 0,
T(J (0)
= inf H(x, u,J (0) UE
U(x)
~
inf H[x, U, Tk(J 0)] UE
U(x)
=
r-: I(J 0)'
Taking the limit of the right side, we obtain T(J (0) ~ J 00' which, combined with J 00 s J* and T(J*) = J*, proves (21). If (22) holds, then (23) also holds.
82
5.
INFINITE HORIZON MODELS UNDER MONOTONICITY ASSUMPTIONS
Conversely, let (23) hold. Then since we have J 00 ;::: J 0, we see from Proposition 5.2 that J 00 ;::: J*, which combined with (21) proves (22). Q.E.D. In what follows we provide a necessary and sufficient condition for J 00 = T(1 00) [and hence also (22)] to hold under Assumptions I, 1.1, and 1.2. We subsequently obtain a useful sufficient condition for J 00 = T(J a:J to hold, which at the same time guarantees the existence of a stationary optimal policy. For any J E F, we denote by E(J) the epigraph of J, i.e., the subset of SR given by E(J)
Under I we have Tk(J 0) ::;; so it follows easily that
= {(x,).)IJ(x) ::;; Ie}.
t-: l(J 0) for all
(24)
k and also J o: = lirm.,
n E[Tk(J k=l
00
Tk(J 0),
00
E(1oo) =
(25)
O)].
Consider for each k ;::: 1 the subset C k of SCR given by C k = {(x, u, A)IH[ x, u, Tk-1(J 0)] ::;; ).,
XES, U E
U(x)}.
(26)
Denote by P(C k ) the projection of C k on SR, i.e., (27)
Consider also the set P(Cd
= {(x,A)J3{An} S.t. )'n ...... A, (X,An)EP(C k ) , n = 0,1, ... }.
(28)
The set P(C k ) is obtained from P(Cd by adding for each x the point [x, J.:(x)] where A(X) is the perhaps missing end point of the half-line P-1(X,Ie)EP(Cd}. We have the following lemma. Lemma 5.2
Let I hold. Then for all k ;::: 1 (29)
Furthermore, we have (30)
if and only if the infimum is attained for each Tk(J o)(x) =
XES
in the equation
inf H[ x, u, Tk-1(J 0)]. UEU(X)
t
The symbol 3 means "there exists" and the initials "s.t." stand for "such that."
(31)
5.4
83
CONVERGENCE OF THE DYNAMIC PROGRAMMING ALGORITHM
Proof
If(X,A)EE[Tk(J O)]' we have Tk(J o)(x)
inf H[ x, u, Tk-1(J 0)] :S: A.
=
UEU(X)
Let [6 n} be a sequence such that 6n > 0, e; ---+ 0, and let {un} C U(x) be a sequence such that H[ x, Un' Tk-1(J 0)] :S: Tk(J o)(x)
+ e; :S: A + 6n·
Then (X,Un,A + en)EC k and (X,A+6 n)EP(Cd for all n. Since ).+6n ---+ by (28) we obtain (x, A) E P( C k ) . Hence
},
(32)
E[T\J o )] c P(Cd.
Conversely, let (x, )~) E P( Cd. Then by (26)-(28) there exists a sequence (}n} with An ---+ ). and a corresponding sequence {un} C U(x) such that Tk(JO)(x):S: H[x, Un, Tk-1(J O)] :S: An'
Taking the limit as n Hence
---+ 00,
we obtain Tk(JO)(x):s:). and (X,A)EE[Tk(J O)]' P(C k) c E[Tk(J O)]'
and using (32) we obtain (29). To prove that (30) is equivalent to the attainment of the infimum in (31), assume first that the infimum is attained by Ilt _ 1 (x) for each XES. Then for each (X,A)EE[Tk(J O)], H[X,llt-l(X), Tk-1(JO)J:s: A,
which implies by (27) that (X,A)EP(C k). Hence E[Tk(JO)J c P(C k) and, in view of (29),we obtain (30). Conversely, if (30) holds, we have [x, Tk(J o)(x)J E P(Ck) for every x for which Tk(J o)(x) < 00. Then by (26) and (27), there exists a Ilt-l(X)E U(x) such that H[ x, Ilt _ 1 (x), i-: l(J o)J :S: Tk(J o)(x)
= inf
H[ x, u, Tk-1(J 0)]'
UEU(X)
Hence the infimum in (31) is attained for all x for which Tk(JO)(x) < 00. It is also trivially attained by all UE U(x) whenever Tk(J o)(x) = 00, and the proof is complete. Q.E.D. Consider now the set n;:~ sets
p(l\ p(l\
1
C k and define similarly as in (27) and (28) the
C k) = {(X, A)!3uE U(x)s.t.(x, U,A)E C k) =
lJ
{(X, )·)13{An} S.t. An ---+ A, (x, )~n)E
1
(33)
C k},
PCOl
C k)}-
(34)
5.
84
INFINITE HORIZON MODELS UNDER MONOTONICITY ASSUMPTIONS
Using (25) and Lemma 5.2, it is easy to see that
PCOI Ck) PCOI Ck)
C
c
kOl P(Cd
i:
P(Cd
l]l
C
=
P(Cd
=
kOl E[Tk(J O )]
kOl E[Tk(J O)]
=
=
E(J cxJ,
E(J cxJ.
(35)
(36)
We have the following proposition. Proposition 5.9
Let I, 1.1, and I.2 hold. Then:
(a) We have J 00 = T(J cxJ (equivalently J 00 = J*) if and only if
p(k=lnC
(b) We have J 00
= T(J (0)
n
=
k)
k=l
(37)
P(Cd·
(equivalently J 00
= J*),
and the infimum in
J oo(x) = inf H(x, u, J (0)
(38)
UEU(X)
is attained for each policy) if and only if
(equivalently there exists a stationary optimal
XES
PCOI Ck) Proof
(a) Assume J 00
=
= T(J (0)
(39)
kOl P(C k)·
and let (x, },) be in E(J (0)' i.e.,
inf H(x, u, J (0) = J oo(x)
~
A.
UEU(X)
Let {c n } be any sequence with {un} such that
Cn
> 0,
Cn --+
O. Then there exists a sequence n
= 1,2, ... ,
and so
H[x,u n , Tk-1(J O )] ~ A + Cn'
k,n = 1,2, ....
nk'=
It follows that (x, Un' A + cn)E C k for all k, n and (x, Un, A + cn)E 1 C k for all n. Hence (x,). + cn) E p(nk'= 1 C k ) for all n, and since }, + s, --+ Ie, we obtain (x,},)E p(n k'=1 Cd. Therefore --c,-----.,...
E(J (0)
C
PCOI Ck}
and using (36) we obtain (37). Conversely, let (37) hold. Then by (36) we have p(nk'= 1 C k ) = E(J (0)' Let XES be such that Joo(.x;) < 00. Then [x, Jw(x)] E p(n k''= 1 C k ) , and there
5.4
CONVERGENCE OF THE DYNAMIC PROGRAMMING ALGORITHM
exists a sequence {An} with An ---+ J oo(x) and a sequence {un}
C
85
Vex) such that
k,n= 1,2, .... Taking the limit with respect to k and using I.1, we obtain H(x, un,J (0)
and since T(J oo)(x)
~
An'
~
H(x, un,J (0)' it follows that T(Joo)(x) ~ An'
Taking the limit as n ---+ 00, we obtain T(J oo)(x)
J oo(x)
~
for all XES such that J oo(x) < 00. Since the preceding inequality holds also for all XES with J oo(x) = 00, we have T(J (0)
~
J 00
•
On the other hand, by Proposition 5.8, we have J 00
~
T(J (0)'
Combining the two inequalities, we obtain J 00 = T(J (0)' (b) Assume J 00 = T(J (0) and that the infimum in (38) is attained for each XES. Then there exists a function J.1* E M such that for each (x, A)E E(J (0)
H[x, J.1*(x),J ooJ
s
A.
Hence H[x, J.1*(x), Tk-1(J 0)] ~ A for k = 1,2, ... , and we have [x, J.1*(x), AJ E 1 Ci, As a result, (x, A)E p(nk'= 1 C k ) . Hence
nk'=
and (39) follows from (35). Conversely, let (39) hold. We have for all XES with J oo(x) < 00 that [x,Joo(x)JEE(Joo) =
P(Ol
Ck }
Thus there exists a J.1*(X)E Vex) such that
nC 00
[x,J.1*(x),Joo(X)]E
k=l
b
from which we conclude that H[ x, J.1*(x), Tk-1(Jo)J ~ J 00 (x), k = 0,1, ....
86
5.
INFINITE HORIZON MODELS UNDER MONOTONICITY ASSUMPTIONS
Taking the limit and using 1.1, we see that T(Jco)(x) ~ H[x,,u*(x),J,,,J ~ Jco(x).
It follows that T(J co) ~ J co, and since J co ~ T(J co) by Proposition 5.8, we finally obtain J co = T(J co), Furthermore, the last inequality shows that ,u*(x) attains the infimum in (38) when J co(x) < 00. When J ",(x) = 00, every UE Vex) attains the infimum, and the proof is complete. Q.E.D.
In view of Proposition 5.8, the equality J co = T(J ",) is equivalent to the validity of interchanging infimum and limit as follows J co
= lim inf( T
1'0' ••
k-» o: a s Il
T I'k)(J 0)
= inf
lim(T 1'0'
n e Fl k-«
••
cc.
=
Tl'k)(J 0)
J*.
Thus Proposition 5.9 states that interchanging infimum and limit is in fact equivalent to the validity of interchanging projection and intersection in the manner of (37) or (39). The following proposition provides a compactness assumption which guarantees that (39) holds. Proposition 5.10 Let I, I.1, and 1.2 hold and let the control space C be a Hausdorff space. Assume that there exists a nonnegative integer k such that for each XES, ). E R, and k 2: k, the set (40)
is compact. Then
p(l\
C k) =
{]I
(41)
P(Ck)
and (by Propositions 5.8 and 5.9) J co
=
T(J co)
=
T(J*)
=
J*.
Furthermore, there exists a stationary optimal policy. Proof
By (35) it will be sufficient to show that
n co
Let (x, Je) be in that
nf=
P(Ck)
k= 1 1
=
n co
P( Cd. Then there exists a sequence {un}
H[x, Un' Tk(J O )] ~ H[x,u n, Tn(Jo)] ~
Je
C
"In 2: k,
or equivalently "In 2: k,
(42)
P(Ck)'
k= 1
k = 0, 1, ....
Vex) such
5.4
87
CONVERGENCE OF THE DYNAMIC PROGRAMMING ALGORITHM
Since V k(X, A) is compact for k accumulation point Ii and
~ 7(,
it follows that the sequence {un} has an
Vk ~ 7(.
liE Vk(X,A)
But Vo(x,il):::::> VI(X,A):::::> ... , so UE VdX,A) for k = 0, 1,.. _. Hence and (x, Ii, A) E
nr=
k = 0,1, . . . ,
H[x, Ii, Tk(Jo)] ::;; A, I
Ck • It follows that (x, },) E
p(nr= I C
k)
and
PCOI Ck):::::> kOI P(Ck)' Also, from the compactness of V k(X, A) and the result of Lemma 3.1, it follows that the infimum in (31) is attained for every XES and k > K. By Lemma 5.2, P(Ck) = P(Ck) for k > 7(, and since P(C I):::::> P(C z ) :::::> ••• and P(Cd:::::> P(C z) :::::> ••• , we have
n P(Ck) = n P(Cd·
Thus (42) is proved.
00
00
k=1
k=1
Q.E.D.
The following proposition shows also that a stationary optimal policy may be obtained in the limit by means of the DP algorithm. Proposition 5.11
Let the assumptions of Proposition 5.10 hold. Then:
(a) There exists a policy n* = (fl6, fli, ...) E Il attaining the infimum in the DP algorithm for all k ~ 7(, i.e.,
Vk
(43)
~ 7(.
(b) F or every policy n* satisfying (43), the sequence {flt(X)} has at least one accumulation point for each XES with J*(x) < 00. (c) If fl*:S --+ C is such that fl*(X) is an accumulation point of {flt(X)} for all XES with J*(x) < 00 and fl*(X)E Vex) for all XES with J*(x) = 00, then the stationary policy (fl*,fl*, ...) is optimal.
00,
Proof (a) This follows from Lemma 3.1. (b) For any n* = (fl6, fli, ...) satisfying (43) and XES such that J*(x) < we have
Vk Hence,
Vk
~ 7(,
n ~ k.
~ 7(,
n
~
k.
88
5.
INFINITE HORIZON MODELS UNDER MONOTONICITY ASSUMPTIONS
Since Uk[x,J*(x)] is compact, (.u:(x)} has at least one accumulation point. Furthermore, each accumulation point .u*(x) of (.u:(x)} belongs to U(x) and satisfies vk 2: K.
(44)
By taking the limit in (44) and using 1.1, we obtain H[x,.u*(x),Joo]
= H[x,.u*(x),J*]::::; J*(x)
for all XES with J*(x) < 00. This relation holds trivially for all XES with J*(x) = 00. Hence TJl.(J*)::::; J* = T(J*), which implies that TJl.(J*) = T(J*). It follows from Proposition 5.4 that (.u*,.u*, . . .) is optimal. Q.E.D. The compactness of the sets Uk(x, 2) of(40) may be verified in a number of special cases, some examples of which are given at the end of Section 3.2. Another example is the lower semicontinuous model of Section 8.3, whose infinite horizon version is treated in Corollary 9.17.2. 5.5
Applicationto Specific Models
We now show that all the results of this chapter apply to the stochastic optimal control problems of Section 2.3.3 and 2.3.4. However, only a portion of the results apply to the minimax control problem of Section 2.3.5, since D.l is not satisfied in the absence of additional assumptions. Stochastic Optimal Control-Outer Integral Formulation
Proposition 5.12
Consider the mapping
H(x, u, J)
=
E* (g(x, u, w) + O(J[f(x, u, w)]lx, u}
(45)
of Section 2.3.3 and let J o(x) = 0 for Vx E S. If 0::::; g(x,u,w)
VXES,
UEU(X),
WEW,
(46)
then Assumptions I, 1.1, and 1.2 are satisfied with the scalar in 1.2 equal to If
0(.
g(x, U, w)::::; 0
VXES,
UEU(X),
WEW,
(47)
then Assumptions D, D.l, and D.2 are satisfied with the scalar in D.2 equal to 0(, Proof Assumptions I and D are trivially satisfied in view of (46) or (47), respectively, and the fact that J o(x) = 0 for Vx E S. Assumptions I.1 and D.1 are satisfied in view of the monotone convergence theorem for outer integration (Proposition A.l). From Lemma A.2 we have under (46) for all
5.5
89
APPLICATION TO SPECIFIC MODELS
r > 0 and J
E
F with J 0 ::;; J
H(x, u, J
+ r) = E*{g(x, u, w) + IXJ[f(X, u, w)] + IXrlx, u} = E*(g(x, u, w) + IXJ[f(X, u, w)]lx, u} + IXr = H(x,
u, J) + IXr.
Hence 1.2 is satisfied as stated in the proposition. Under (47), we have from Lemmas A.2 and A.3(c) that for all r > 0 and J E F with J ::;; J 0 H(x, u, J - r) = H(x, u, J) - IXr,
and D.2 is satisfied.
Q.E.D.
Thus all the results of the previous sections apply to stochastic optimal control problems with additive cost functionals. In fact, under additional countability assumptions it is possible to exploit the additive structure of these problems and obtain results relating to the existence of optimal or nearly optimal stationary policies under Assumption D. These results are stated in the following proposition. A proof of part (a) may be found in Blackwell [BI0]. Proofs of parts (b) and (c) may be found in Ornstein [04] and Frid [F2]. Proposition 5.13
Consider the mapping
H(x, u, J) = E{g(x, u, w) + J[f(x, u, w)] lx, u}
of Section 2.3.2 (W is countable), and let Jo(x) = 0 for all XES. Assume that S is countable, J*(x) > - 00 for all XES, and g satisfies
b s g(x, u, w) ::;; 0
VXES,
UEU(X),
WEW,
where bE ( - 00,0) is some scalar. Then: (a) If for each XES there exists a policy which is optimal at x, then there exists a stationary optimal policy. (b) F or every s > 0 there exists a fl. E M such that J I'e(x) ::;; (1 - s)J*(x)
Vx E S.
(c) If there exists a scalar fJE(-oo,O) such that fJ::;; J*(x) for all XES, then for every e > 0, there exists a stationary s-optimal policy, i.e., a fl. E M such that J*::;;Jl'e::;;J*+s.
We note that the conclusion of part (a) may fail to hold if we have J*(x) = for some XES, even if S is finite, as shown by a counterexample found in Blackwell [BIO]. The conclusions of parts (b) and (c) may fail to hold if S is uncountable, as shown by a counterexample due to Ornstein [04]. The -
rxJ
5.
90
INFINITE HORIZON MODELS UNDER MONOTONICITY ASSUMPTIONS
conclusion of part (c) may fail to hold if J* is unbounded below, as shown by a counterexample due to Blackwell [B8]. We also note that the results of Proposition 5.13 can be strengthened considerably in the special case of a deterministic optimal control problem (cf. the mapping of Section 2.3.1). These results are given in Bertsekas and Shreve [B6].
Stochastic Optimal Control-Multiplicative Cost Functional Proposition 5.14
Consider the mapping
H(x, u, J)
E{g(x, u, w)J[f(x, u, w)] [x, u}
=
of Section 2.3.4 and let J o(x) = 1 for Vx E S. (a) If there exists e b e R such that
l:s;;g(x,u,w):s;;b
UEU(X), WEW,
VXES,
then Assumptions 1,1.1, and 1.2 are satisfied with the scalar in 1.2 equal to b. (b) If
o :s;; g(x, u, w) :s;; 1
UE U(x),
VXES,
WE W,
then Assumptions D, D.1, and D.2 are satisfied with the scalar in D.2 equal to unity.
Proof This follows easily from the assumptions and the monotone convergence theorem for ordinary integration. Q.E.D. Minimax Control Proposition 5.15
Consider the mapping
H(x, u, J)
{g(x, U, w)
sup
=
WEW(X,U)
of Section 2.3.5 and let J o(x) (a)
=
+ iXJ[f(X, u, w)]}
0 for Vx E S.
If
O:s;; g(x, U, w)
VXES,
UEU(X), WEW,
then Assumptions I, 1.1, and 1.2 are satisfied with the scalar in 1.2 equal to o: (b) If
g(x, U, w):s;; 0
VXES,
UEU(X), WEW,
then Assumptions D and D.2 are satisfied with the scalar in D.2 equal to «.
Proof The proof is entirely similar to the one of Proposition 5.12. Q.E.D.
Chapter 6
A Generalized Abstract Dynamic Programming Model
As we discussed in Section 2.3.2, there are certain difficulties associated with the treatment of stochastic control problems in which the space Wof the stochastic parameter is uncountable. For this reason we resorted to outer integration in the model of Section 2.3.3. The alternative explored in this chapter is to modify the entireframework so that policies tt = (f.1o, f.11' ... ) consist offunctions f.1k from a strict subset of M-for example, those functions which are appropriately measurable. This approach is related to the one we employ in Part II. Unfortunately, however, many of our earlier results and particularly those of Chapter 5 cannot be proved within the generalized framework to be introduced. The results we provide are sufficient, however, for a satisfactory treatment of finite horizon models and infinite horizon models under contraction assumptions. Some of the results of Chapter 5 proved under Assumption D also have counterparts within the generalized framework. The reader, aided by our subsequent discussion, should be able to easily recognize these results. Certain aspects of the framework of this chapter may seem somewhat artificial to the reader at this point. The motivation for our line of analysis stems primarily from ideas that are developed in Part II, and the reader may wish to return to this chapter after gaining some familiarity with Part II. 91
The results provided in the following sections are applied to a stochastic optimal control problem with multiplicative cost functional in Section 11.3. The analysis given there illustrates clearly the ideas underlying our development in this chapter.

6.1
General Remarks and Assumptions
Consider the sets S, C, U(x), M, Π, and F introduced in Section 2.1. We consider in addition two subsets F* and F̂ of the set F of extended real-valued functions on S satisfying

F* ⊂ F̂ ⊂ F,

and a subset M̂ of the set M of functions μ: S → C satisfying μ(x) ∈ U(x) for all x ∈ S. The subset of policies in Π corresponding to M̂ is denoted by Π̂, i.e.,

Π̂ = {(μ_0, μ_1, ...) ∈ Π | μ_k ∈ M̂, k = 0, 1, ...}.

In place of the mapping H of Section 2.1, we consider a mapping H: SCF̂ → R* satisfying, for all x ∈ S, u ∈ U(x), J, J' ∈ F̂, the monotonicity assumption

H(x, u, J) ≤ H(x, u, J')   if J ≤ J'.

Thus the mapping H in this chapter is of the same nature as the one of Chapters 2-5, the only difference being that H is defined on SCF̂ rather than on SCF. Thus if F̂ consists of appropriately measurable functions and H corresponds to a stochastic optimal control problem such as the one of Section 2.3.3 (with g measurable), then H can be defined in terms of ordinary integration rather than outer integration. For μ ∈ M̂ we consider the mapping T_μ: F̂ → F defined by

T_μ(J)(x) = H[x, μ(x), J]   ∀ x ∈ S.

Consider also the mapping T: F̂ → F defined by

T(J)(x) = inf_{u ∈ U(x)} H(x, u, J)   ∀ x ∈ S.

We are given a function J_0: S → R* satisfying J_0 ∈ F*, and we are interested in the N-stage problem

minimize   J_{N,π}(x) = (T_{μ_0} T_{μ_1} ··· T_{μ_{N-1}})(J_0)(x)
subject to   π ∈ Π̂,   (1)
and its infinite horizon counterpart

minimize   J_π(x) = lim_{N→∞} (T_{μ_0} T_{μ_1} ··· T_{μ_{N-1}})(J_0)(x)
subject to   π ∈ Π̂.   (2)

We use the notation

J*_N = inf_{π ∈ Π̂} J_{N,π},    J* = inf_{π ∈ Π̂} J_π,
and employ the terminology of Chapter 2 regarding optimal, ε-optimal, and stationary policies, as well as sequences of policies exhibiting {ε_n}-dominated convergence to optimality. The following conditions regarding the sets F*, F̂, and M̂ will be assumed in every result of this chapter.

A.1 For each x ∈ S and u ∈ U(x), there exists a μ ∈ M̂ such that μ(x) = u. (This implies, in particular, that for every J ∈ F̂ and x ∈ S,

T(J)(x) = inf_{u ∈ U(x)} H(x, u, J) = inf_{μ ∈ M̂} H[x, μ(x), J].)

A.2 For all J ∈ F* and r ∈ R, we have T(J) ∈ F*, (J + r) ∈ F*.

A.3 For all J ∈ F̂, μ ∈ M̂, and r ∈ R, we have T_μ(J) ∈ F̂, (J + r) ∈ F̂.

A.4 For each J ∈ F* and ε > 0, there exists a μ_ε ∈ M̂ such that for all x ∈ S,

T_{μ_ε}(J)(x) ≤ T(J)(x) + ε   if T(J)(x) > -∞,
T_{μ_ε}(J)(x) ≤ -1/ε          if T(J)(x) = -∞.

In Section 6.3 the following assumption will also be used.

A.5 For every sequence {J_k} ⊂ F̂ that converges pointwise, we have lim_{k→∞} J_k ∈ F̂. If, in addition, {J_k} ⊂ F*, then lim_{k→∞} J_k ∈ F*.
Note that in the special case where F* = F̂ = F and M̂ = M, we obtain the framework of Chapters 2-5, and all the preceding assumptions are satisfied. Thus this chapter deals with an extension of the framework of Chapters 2-5. We now provide some examples of sets F*, F̂, and M̂ which are useful in connection with the mapping

H(x, u, J) = ∫* { g(x, u, w) + α J[f(x, u, w)] } p(dw | x, u)
associated with the stochastic optimal control problem of Section 2.3.3. We take J_0(x) = 0 for all x ∈ S. The terminology employed is explained in Chapter 7.

EXAMPLE 1 Let S, C, and W be Borel spaces, ℱ the Borel σ-algebra on W, f a Borel-measurable function mapping SCW into S, g a lower semianalytic function mapping SCW into R*, p(dw|x, u) a Borel-measurable stochastic kernel on W given SC, and α a positive scalar. Let the set

Γ = {(x, u) ∈ SC | x ∈ S, u ∈ U(x)}

be analytic. Take F* to be the set of extended real-valued, lower semianalytic functions on S, F̂ the set of extended real-valued, universally measurable functions on S, and M̂ the set of universally measurable mappings from S to C with graph in Γ (i.e., μ ∈ M̂ if μ is universally measurable and (x, μ(x)) ∈ Γ for all x ∈ S). This example is the subject of Chapters 8 and 9.

EXAMPLE 2 Same as Example 1 except that M̂ is the set of all analytically measurable mappings from S to C with graph in Γ. This example is treated in Section 11.2.

EXAMPLE 3 Same as Example 1 except for the following: p(dw|x, u) and f are continuous, g real-valued, upper semicontinuous, and bounded above, Γ an open subset of SC, F̂ the set of extended real-valued, Borel-measurable functions on S which are bounded above, F* the set of extended real-valued, upper semicontinuous functions on S which are bounded above, and M̂ the set of Borel-measurable mappings from S to C with graph in Γ. This is the upper semicontinuous model of Definition 8.8.

EXAMPLE 4 Same as Example 3 except for the following: C is in addition compact, g real-valued, lower semicontinuous, and bounded below, Γ a closed subset of SC, F̂ the set of extended real-valued, Borel-measurable functions on S which are bounded below, and F* the set of extended real-valued, lower semicontinuous functions on S which are bounded below. This is a special case of the lower semicontinuous model of Definition 8.7.

All these examples satisfy Assumptions A.1-A.4 stated earlier (see also Sections 7.5 and 7.7). The first two satisfy Assumption A.5 as well.
6.2
Analysis of Finite Horizon Models
Simple modifications of some of the assumptions and proofs in Chapter 3 provide a satisfactory analysis of the finite horizon problem (1). We first modify appropriately some of the assumptions of Section 3.1.

Assumption F.2 Same in statement as Assumption F.2 of Section 3.1 except that F is replaced by F̂.
Assumption F.3 Same in statement as Assumption F.3 of Section 3.1 except that we require J ∈ F*, {J_n} ⊂ F̂, and {μ_n} ⊂ M̂, instead of J ∈ F, {J_n} ⊂ F, and {μ_n} ⊂ M.
It can be easily seen that F.2 is satisfied in Examples 1-4 of the previous section. It is also possible to show (see the proof of Proposition 8.4) that F.3 is satisfied in Example 1, where universally measurable policies are employed. By nearly verbatim repetition of the proofs of Proposition 3.1(b) and Proposition 3.2 we obtain the following.

Proposition 6.1 (a) Let Assumptions A.1-A.4 and F.2 hold and assume that J*_k(x) > -∞ for all x ∈ S and k = 1, 2, ..., N. Then

J*_N = T^N(J_0),

and for every ε > 0 there exists an N-stage ε-optimal policy, i.e., a π_ε ∈ Π̂ such that

J*_N ≤ J_{N,π_ε} ≤ J*_N + ε.

(b) Let Assumptions A.1-A.4 and F.3 hold and assume that J_{k,π}(x) < ∞ for all x ∈ S, π ∈ Π̂, and k = 1, 2, ..., N. Then

J*_N = T^N(J_0).

Furthermore, given any sequence {ε_n} with ε_n ↓ 0, ε_n > 0 for all n, there exists a sequence of policies exhibiting {ε_n}-dominated convergence to optimality. In particular, if in addition J*_N(x) > -∞ for all x ∈ S, then for every ε > 0 there exists an ε-optimal policy.
Similarly, by modifying the proofs of Proposition 3.3 and Corollary 3.3.1(b), we obtain the following.

Proposition 6.2 Let Assumptions A.1-A.4 hold.
(a) A policy π* = (μ*_0, μ*_1, ...) ∈ Π̂ is uniformly N-stage optimal if and only if (T_{μ*_k} T^{N-k-1})(J_0) = T^{N-k}(J_0), k = 0, 1, ..., N - 1.
(b) If there exists a uniformly N-stage optimal policy, then J*_N = T^N(J_0).

Analogs of Corollary 3.3.1(a) and Proposition 3.4 can be proved if M̂ is rich enough so that the following assumption holds.

Exact Selection Assumption For every J ∈ F*, if the infimum in

T(J)(x) = inf_{u ∈ U(x)} H(x, u, J)

is attained for every x ∈ S, then there exists a μ* ∈ M̂ such that T_{μ*}(J) = T(J).
In Examples 1 and 4 of the previous section the exact selection assumption is satisfied (see Propositions 7.50 and 7.33). The following proposition is proved similarly to Corollary 3.3.1(a) and Proposition 3.4.

Proposition 6.3 Let Assumptions A.1-A.4 and the exact selection assumption hold.
(a) There exists a uniformly N-stage optimal policy if and only if the infimum in the relation

T^{k+1}(J_0)(x) = inf_{u ∈ U(x)} H[x, u, T^k(J_0)]

is attained for each x ∈ S and k = 0, 1, ..., N - 1.
(b) Let the control space C be a Hausdorff space and assume that for each x ∈ S, λ ∈ R, and k = 0, 1, ..., N - 1, the set

U_k(x, λ) = {u ∈ U(x) | H[x, u, T^k(J_0)] ≤ λ}

is compact. Then

J*_N = T^N(J_0),

and there exists a uniformly N-stage optimal policy.
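As an illustration of Propositions 6.2 and 6.3, the sketch below (not from the book) runs the backward induction J*_N = T^N(J_0) on a small finite additive-cost model and extracts a uniformly N-stage optimal policy by exact selection, i.e., by taking a minimizing u in each relation T^{k+1}(J_0)(x) = inf_u H[x, u, T^k(J_0)]. The data cost, P, and alpha are hypothetical placeholders.

```python
# Hypothetical finite-model illustration of backward induction with exact selection.
import numpy as np

n_states, n_controls, N, alpha = 4, 3, 5, 0.9
rng = np.random.default_rng(1)
cost = rng.random((n_states, n_controls))            # g(x, u)
P = rng.random((n_states, n_controls, n_states))     # p(x' | x, u)
P /= P.sum(axis=2, keepdims=True)

def H(x, u, J):
    # H(x,u,J) = g(x,u) + alpha * E{ J(x') | x,u }  (an additive-cost instance)
    return cost[x, u] + alpha * P[x, u] @ J

J = np.zeros(n_states)                               # J_0 = 0
policy = []                                          # selectors mu_{N-1}, ..., mu_0 (in that order)
for _ in range(N):
    values = np.array([[H(x, u, J) for u in range(n_controls)] for x in range(n_states)])
    policy.append(values.argmin(axis=1))             # exact selection: a minimizing u per state
    J = values.min(axis=1)                           # J <- T(J)
print("J*_N =", J)
print("mu_0 =", policy[-1])                          # first-stage component of the optimal policy
```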
6.3
Analysis of Infinite Horizon Models under a Contraction Assumption
We consider the following modified version of Assumption C of Section 4.1.

Assumption C There is a closed subset B̄ of the space B such that:
(a) J_0 ∈ B̄ ∩ F*,
(b) For all J ∈ B̄ ∩ F*, the function T(J) belongs to B̄ ∩ F*,
(c) For all J ∈ B̄ ∩ F̂ and μ ∈ M̂, the function T_μ(J) belongs to B̄ ∩ F̂.

Furthermore, for every π = (μ_0, μ_1, ...) ∈ Π̂, the limit

lim_{N→∞} (T_{μ_0} T_{μ_1} ··· T_{μ_{N-1}})(J_0)(x)

exists and is a real number for each x ∈ S. In addition, there exists a positive integer m and scalars ρ and α with 0 < ρ < 1, 0 < α, such that

‖T_μ(J) - T_μ(J')‖ ≤ α ‖J - J'‖   ∀ μ ∈ M̂, J, J' ∈ B̄ ∩ F̂,

‖(T_{μ_0} T_{μ_1} ··· T_{μ_{m-1}})(J) - (T_{μ_0} T_{μ_1} ··· T_{μ_{m-1}})(J')‖ ≤ ρ ‖J - J'‖   ∀ μ_0, ..., μ_{m-1} ∈ M̂, J, J' ∈ B̄ ∩ F̂.
If Assumptions A.1-A.5 and C are made, then almost all the results of Chapter 4 have counterparts within our extended framework. The key fact is that, since F̂ and F* are closed under pointwise limits (Assumption A.5), it follows that B ∩ F̂, B̄ ∩ F̂, B ∩ F*, and B̄ ∩ F* are closed subsets of B. This is true in view of the fact that convergence of a sequence in B (i.e., in sup norm) implies pointwise convergence. As a result the contraction mapping fixed point theorem can be used in exactly the same manner as in Chapter 4 to establish that, for each μ ∈ M̂, J_μ is the unique fixed point of T_μ in B̄ ∩ F̂ and J* is the unique fixed point of T in B̄ ∩ F*. Only the modified policy iteration algorithm and the associated Proposition 4.9 have no counterparts in this extended framework. The reason is that our assumptions do not guarantee that Step 3 of the policy iteration algorithm can be carried out. Rather than provide a complete list of the analogs of all propositions in Chapter 4, we state selectively and without proof some of the main results that can be obtained within the extended framework.
Proposition 6.4 Let Assumptions A.1-A.5 and C hold.
(a) The function J* belongs to B̄ ∩ F* and is the unique fixed point of T within B̄ ∩ F*. Furthermore, if J' ∈ B̄ ∩ F* is such that T(J') ≤ J', then J* ≤ J', while if J' ≤ T(J'), then J' ≤ J*.
(b) For every μ ∈ M̂, the function J_μ belongs to B̄ ∩ F̂ and is the unique fixed point of T_μ within B̄ ∩ F̂.
(c) There holds

lim_{N→∞} ‖T^N(J) - J*‖ = 0   ∀ J ∈ B̄ ∩ F*,
lim_{N→∞} ‖T_μ^N(J) - J_μ‖ = 0   ∀ J ∈ B̄ ∩ F̂, μ ∈ M̂.

(d) A stationary policy π* = (μ*, μ*, ...) ∈ Π̂ is optimal if and only if T_{μ*}(J*) = T(J*). Equivalently, π* is optimal if and only if J_{μ*} ∈ B̄ ∩ F* and T_{μ*}(J_{μ*}) = T(J_{μ*}).
(e) For any ε > 0, there exists a stationary ε-optimal policy, i.e., a π_ε = (μ_ε, μ_ε, ...) ∈ Π̂ such that

‖J* - J_{μ_ε}‖ ≤ ε.

Proposition 6.5 Let Assumptions A.1-A.5 and C hold. Assume further that the exact selection assumption of the previous section holds.
(a) If for each x ∈ S there exists a policy which is optimal at x, then there exists an optimal stationary policy.
(b) Let C be a Hausdorff space. If for some J ∈ B̄ ∩ F* and for some positive integer K the set

U_k(x, λ) = {u ∈ U(x) | H[x, u, T^k(J)] ≤ λ}

is compact for all x ∈ S, λ ∈ R, and k ≥ K, then there exists an optimal stationary policy.
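The simplest instance of Assumption C is a finite discounted model, where T and each T_μ are contractions with modulus ρ = α and m = 1. The sketch below (not from the book) iterates T from J_0 = 0 and checks numerically that the iterates converge in sup norm to the unique fixed point J*, in the spirit of Proposition 6.4(a) and (c); the model data are hypothetical placeholders.

```python
# Hypothetical finite discounted model: successive approximation converges to the fixed point of T.
import numpy as np

n_states, n_controls, alpha = 5, 3, 0.8
rng = np.random.default_rng(2)
cost = rng.random((n_states, n_controls))
P = rng.random((n_states, n_controls, n_states))
P /= P.sum(axis=2, keepdims=True)

def T(J):
    # (TJ)(x) = min_u [ g(x,u) + alpha * E{ J(x') | x,u } ]
    return np.min(cost + alpha * P @ J, axis=1)

J = np.zeros(n_states)
for k in range(200):
    J_next = T(J)
    if np.max(np.abs(J_next - J)) < 1e-10:   # sup-norm error contracts by at least alpha per step
        break
    J = J_next
J_star = J_next
print(k, np.max(np.abs(T(J_star) - J_star)))  # residual ||T(J*) - J*|| is numerically zero
```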
Part II
Stochastic Optimal Control Theory
Chapter 7
Borel Spaces and Their Probability Measures
This chapter provides the mathematical background required for analysis of the dynamic programming models of the subsequent chapters. The key concept, which is developed in Section 7.3 with the aid of the topological concepts discussed in Section 7.2, is that of a Borel space. In Section 7.4 the set of probability measures on a Borel space is shown to be itself a Borel space, and the relationships between these two spaces are explored. Our general framework for dynamic programming hinges on the properties of analytic sets collected in Section 7.6 and used in Section 7.7 to define and characterize lower semianalytic functions. These functions result from executing the dynamic programming algorithm, so we will want to measurably select at or near their infima to construct optimal or nearly optimal policies. The possibilities for this are also discussed in Section 7.7. A similar analysis in a more specialized case is contained in Section 7.5, which is presented first for pedagogical reasons. Our presentation is aimed at the reader who is acquainted with the basic notions of topology and measure theory, but is unfamiliar with some of the specialized results relating to separable metric spaces and probability measures on their Borel σ-algebras.
7.1
Notation
We collect here for easy reference many of the symbols used in Part II.

Operations on Sets
Let A and B be subsets of a space X. The complement of A in X is denoted by A^c. The set-theoretic difference A - B is A ∩ B^c. We will sometimes write X - A in place of A^c. The symmetric difference A Δ B is (A - B) ∪ (B - A). If X is a topological space, Ā will denote the closure of A. If A_1, A_2, ... is a sequence of sets such that A_1 ⊂ A_2 ⊂ ··· and A = ∪_{n=1}^∞ A_n, we write A_n ↑ A. If A_1 ⊃ A_2 ⊃ ··· and A = ∩_{n=1}^∞ A_n, we write A_n ↓ A. If X_1, X_2, ... is a sequence of spaces, the Cartesian products of X_1, X_2, ..., X_n and of X_1, X_2, ... are denoted by X_1 X_2 ··· X_n and X_1 X_2 ···, respectively. If the given spaces have topologies, the product space will have the product topology. Under this topology, convergence in the product space is componentwise convergence in the factor spaces. If the given spaces have σ-algebras ℱ_{X_1}, ℱ_{X_2}, ..., the product σ-algebras are denoted by ℱ_{X_1} ℱ_{X_2} ··· ℱ_{X_n} and ℱ_{X_1} ℱ_{X_2} ···, respectively. If X and Y are arbitrary spaces and E ⊂ XY, then for each x ∈ X, the x-section of E is

E_x = {y ∈ Y | (x, y) ∈ E}.   (1)
If 𝒫 is a class of subsets of a space X, we denote by σ(𝒫) the smallest σ-algebra containing 𝒫. We denote by 𝒫_σ or 𝒫_δ the class of all subsets which can be obtained by union or intersection, respectively, of countably many sets in 𝒫. If ℱ is the collection of closed subsets of a topological space X, then ℱ_δ = ℱ, and the members of ℱ_σ are called the F_σ-subsets of X. If 𝒢 is the collection of open subsets of X, the members of 𝒢_δ are called the G_δ-subsets of X. If (X, 𝒫) is a paved space, i.e., 𝒫 is a nonempty collection of subsets of X, and S is a Suslin scheme for 𝒫 (Definition 7.15), then N(S) is the nucleus of S. The collection of all nuclei of Suslin schemes for 𝒫 is denoted 𝒮(𝒫).

Special Sets
The symbol R represents the real numbers with the usual topology. We use R* to denote the extended real numbers [-∞, +∞] with the topology discussed following Definition 7.7 in Section 7.3. Similarly, Q is the set of rational numbers and Q* is the set of extended rational numbers Q ∪ {±∞}. If X and Y are sets and f: X → Y, the graph of f is

Gr(f) = {(x, f(x)) | x ∈ X}.   (2)
If A ⊂ X and 𝒞 is a collection of subsets of X, we define

f(A) = {f(x) | x ∈ A}   and   f(𝒞) = {f(C) | C ∈ 𝒞}.   (3)

If B ⊂ Y and 𝒞 is a collection of subsets of Y, we define

f^{-1}(B) = {x ∈ X | f(x) ∈ B}   and   f^{-1}(𝒞) = {f^{-1}(C) | C ∈ 𝒞}.   (4)
If X is a topological space, ℱ_X is the collection of closed subsets of X and ℬ_X the Borel σ-algebra on X (Definition 7.6). The space of probability measures on (X, ℬ_X) is denoted by P(X); C(X) is the Banach space of bounded, real-valued, continuous functions on X with the supremum norm

‖f‖ = sup_{x ∈ X} |f(x)|;

for any metric d on X which is consistent with its topology, U_d(X) is the space of bounded real-valued functions on X which are uniformly continuous with respect to d. If X is a Borel space (Definition 7.7), 𝒜_X is its analytic σ-algebra (Definition 7.19) and 𝒰_X its universal σ-algebra (Definition 7.18). We let N denote the set of positive integers with the discrete topology. The Baire null space 𝒩 is the product of countably many copies of N. The Hilbert cube ℋ is the product of countably many copies of [0, 1]. We will denote by Σ the collection of finite sequences of positive integers. We impose no topology on Σ. If s ∈ Σ and z = (ζ_1, ζ_2, ...) ∈ 𝒩, we write s < z to mean s = (ζ_1, ζ_2, ..., ζ_k) for some k.

Mappings

If X and Y are spaces, proj_X is the projection mapping from XY onto X. If E is a subset of X, the indicator function of E is given by the value 1 if x ∈ E and 0 if x ∉ E.   (5)

If f: X → [-∞, +∞], the positive and negative parts of f are the functions

f⁺(x) = max{0, f(x)},   (6)
f⁻(x) = max{0, -f(x)}.   (7)
If f_n: X → Y is a sequence of functions, Y is a topological space, and lim_{n→∞} f_n(x) = f(x) for all x ∈ X, then we write f_n → f. If, in addition, Y = [-∞, +∞] and f_1(x) ≤ f_2(x) ≤ ··· for all x ∈ X, we write f_n ↑ f, while if f_1(x) ≥ f_2(x) ≥ ··· for all x ∈ X, we write f_n ↓ f. In general, when the arguments of extended real-valued functions are omitted, the statements are to be interpreted pointwise. For example, (sup_n f_n)(x) = sup_n f_n(x) for all x ∈ X, f_1 ≤ f_2 if and only if f_1(x) ≤ f_2(x) for all x ∈ X, and f + ε is the function (f + ε)(x) = f(x) + ε for all x ∈ X.
d(x, Y) = inf d(x, y).
(8)
YEY
We define the diameter of Yby diam(Y) = sup d(x,y).
(9)
X.YEY
If (X, g;) is a measurable space and g; contains all singleton sets, then for X we denote by Px the probability measure on (X, g;) which assigns mass one to the set {x}.
XE
7.2
MetrizabJe Spaces
Definition 7.1 Let (X,5") be a topological space. A metric d on X is consistent with z" if every set of the form {YEXld(x,y) < c}, XEX, C > 0, is in !/, and every nonempty set in 5" is the union of sets of this form. The space (X, !/) is metrizable if such a metric exists.
The distinction between metric and metrizable spaces is a fine one: In a metric space we have settled on a metric, while in a metrizable space the choice is still open. If one metric consistent with the given topology exists, then a multitude of them can be found. For example, if d is a metric on X consistent with !/, the metric p defined by
p(x,y) = d(x,y)/[l
+ d(x,y)]
VX,yEX
is also consistent with !/. In what follows, we abbreviate the notation for metrizable spaces, writing X in place of (X, !/). If (X,!/) is a topological space and Y c X, unless otherwise specified, we will understand Y to be a topological space with open sets G (\ Y, where G ranges over !/. This is called the relative topology. If (Z, ,9") is another topological space, tp:Z ~ X is one-to-one and continuous, and
7.2
105
METRIZABLE SPACES
as just described, we may define a metric d, on Z by d1(zt>Z2) = d(q>(zd,q>(Z2»'
(10)
It can be easily verified that the metric d 1 is consistent with the topology Y. This implies that every topological space homeomorphic to a metrizable space (or subset of a metrizable space) is itself metrizable. Our attention will be focused on metrizable spaces and their Borel a-algebras. The presence of a metric in such spaces permits simple proofs of facts whose proofs are quite complicated or even impossible in more general topological spaces. We give two of these as lemmas for later reference.
Lemma 7.1 (Urysohn's lemma) Let X be a metrizable space and A and B disjoint, nonempty, closed subsets of X. Then there exists a continuous function f: X ---* [0, lJ such that f( a) = 0 for every a E A, f( b) = 1 for every bE B, and 0 < f(x) < 1 for every x ¢ A u B. If d is a metric consistent with the topology on X and infa EA , bEB d(a, b) > 0, then f can be chosen to be uniformly continuous with respect to the metric d. Proof Let d be a metric on X consistent with its topology and define f(x)
=
d(x, A)/[ d(x, A) + d(x, B)],
where the distance from a point to a nonempty closed set is defined by (8). This distance is zero if and only if the point is in the set, and the mapping of (8) is Lipschitz-continuous by (6) of Appendix C. This f has the required properties. If infaEA,bEBd(a,b) > 0, then d(x,A) + d(x,B) is bounded away from zero, and the uniform continuity of f follows. Q.E.D. Lemma 7.2 Let X be a metrizable space. Every closed subset of X is a Gs and every open subset is an F". Proof We prove the first statement; the second follows by complementation. Let F be closed. We may assume without loss of generality that F is nonempty. Let d be a metric on X consistent with its topology. The continuity of the function x ---* d(x, F) implies that
is open. But F
=
n:=
G; = {xEXld(x,F) < lin} 1
Gn •
Q.E.D.
Definition 7.2 Let X be a metrizable topological space. The space X is separable if it contains a countable dense set. It is easily verified that any subspace of a separable metrizable space is separable and metrizable. A collection of subsets of a topological space (X, g-) is a base for the topology if every open set can be written as a union of sets from the collection. It is a subbase if a base can be obtained by taking
7.
106
BOREL SPACES AND THEIR PROBABILITY MEASURES
finite intersections of sets from the collection. If :T has a countable base, (X,:T) is said to be second countable. A topological space is Lindelof if every collection of open sets which covers the space contains a countable subcollection which also covers the space. It is a standard result that in metrizable spaces, separability, second countability, and the Lindelof property are equivalent. The following proposition is a direct consequence of this fact. Proposition 7.1 Let (X,:T) be a separable, metrizable, topological space and f1J a base for the topology:T. Then f1J contains a countable subcollection f1J o which is also a base for :T. Proof Let Cfl be a countable base for the topology :T. Every set C E Cfl has the form C = U, E'J(C) B" where I (C) is an index set and B, E f1J for every IXEI(C). Since Cis Lindelof, we may assume I(C) is countable. Let :!lJ o = UCd{B,[IXEI(C)}. Q.E.D.
The Hilbert cube Yf' is the product of countably many copies of the unit interval (with the product topology). The unit interval is separable and metrizable, and, as we will show later (Proposition 7.4), these properties carryover to the Hilbert cube. In a sense, Yf' is the canonical separable metrizable space, as the following proposition shows. Proposition 7.2 (Urysohn's theorem) Every separable metrizable space is homeomorphic to a subset of the Hilbert cube Yf'.
Let (X, d) be a separable metric space with a countable dense set
Proof
[xd. Define functions
and
~
= min {I, d(x, Xk)}'
k = 1,2, ... ,
Yf' by
Each
and Xk such that d(y, x k) < e. Since d(Yn,x k) ~ d(y, Xk) as n ~ 00, for n sufficiently large d(Yn, x k) < e. Then d(y, Yn) < 2e. Q.E.D.
°
If X is a separable metrizable space and xp : X ~ Yf' is the homeomorphism whose existence is guaranteed by Proposition 7.2, then by identifying x E X with
7.2
107
METRIZABLE SPACES
Definition 7.3 Let X be a topological space. The space X is topologically complete if there is a metric d on X consistent with its topology such that the metric space (X, d) is complete, i.e., if {x k } C X is a d-Cauchy sequence [d(x n , x m ) ~ as n, m ~ 00], then {xd converges to an element of X.
°
Proposition 7.3 (Alexandroff's theorem) Let X be a topologically complete space, Z a metrizable space, and rp : X ~ Z a homeomorphism. Then
Proof For the proof of the first part of the proposition, we treat X as a subset of Z. There are two metrics to consider, a metric d on Z consistent with its topology and a metric d 1 on X which makes it complete. Define U; = {ZE Z!d(z, X) < lin and 3 an open neighborhood V(z) of z such that
sup x.Y
E
V(z) n X
d 1 (x , y ) < lin}.
For n = 1,2, ... , given z E Un and V(z) as just defined, we have V(z) n {YEZld(y,X) < lin}
so U; is open. We show X For z E X, define W(z)
= n~~l
=
C
Un'
Un'
{YEXld 1(y,z) < l/3n}.
Then W(z) is relatively open in X, thus of the form W(z) = V(z) n X, where V(z) is an open neighborhood in Z of z. Also, sup
X,YEV(Z) rv X
d 1 (x , y ) < lin,
so ZEU n. Therefore XCn:=lUn' Now suppose ZEn:=lUn' Then = 0, and since X is closed, we have z E X. There is a sequence {xd C X such that Xk ~ z. Let v,,(z) be an open neighborhood in Z of z for which
d(z,X)
sup
X,YEVn(Z) n X
d 1 (x , y ) < lin.
(11)
For each n, there is an index k; such that XkE v,,(z) for k Z k.: From (11) we see that d1(XioXj) < lin for i,j Z k n , so {xd is Cauchy in the complete space (X, dd and hence has a limit in X. But the limit is z by assumption, so X =
n:=
1
ti;
108
7.
BOREL SPACES AND THEIR PROBABILITY MEASURES
For the converse part of the theorem, suppose (Z, d) is a complete metric space and Y = n:~ 1 Un, where each U; is open in Z. Define a metric d 1 on Yby
d1(y,z) = d(y,z)
+
00
L
min{1/2n,I[1/d(y,Z - Un)] - [l/d(z,Z - Un)]I}·
n~l
If {Yd is Cauchy in (Y,d 1 ) , then it is also Cauchy in (Z,d), and thus has a limit yEZ. For each n,
as i,j
-4
00,
so [l/d(Yb Z - Un)] remains bounded as k n, hence yE Y. Q.E.D.
-4
00.
It follows that
yE U; for every
As we remarked earlier without proof, the Hilbert cube inherits metrizability and separability from the unit interval. It also inherits topological completeness. This is a special case of the fact, which we now prove, that completeness and separability of metrizable spaces are preserved when taking countable products. Let Xl' X 2,
be a sequence of metrizable spaces and Then Y and each y" is metrizable. If each X k is separable or topologically complete, then Y and each y" is separable or topologically complete, respectively. Proposition 7.4
Y" = X 1 X 2 . . . X n' Y = X 1 X 2
Proof If dk is a metric on X k consistent with its topology, then
d(y,y) =
00
L
min {1/2k, dk(t/k,rik)},
k~l
where y = (111,1]2", .), Y = (riJ,Q2'" .), is a metric on Y consistent with the product topology. If each (X b dk ) is complete, clearly (Y, d) is complete. If '§ k is a countable base for the topology on X b the collection of sets of the form G1 G2 ' .. GnX n+ lXn + 2 , .. , where G; ranges over '§k and n ranges over the positive integers, is a countable base for the product topology on Y. The arguments for the product spaces y" are similar. Q.E.D. Combining Propositions 7.2-7.4, we see that every separable, topologically complete space is homeomorphic to a Go-subset ofthe Hilbert cube, and conversely, every Go-subset of the Hilbert cube is separable and topologically complete. We state a second consequence of these propositions as a corollary. Corollary 7.4.1 Every separable, topologically complete space can be homeomorphically embedded as a dense Go-set in a compact metric space.
7.2
109
METRIZABLE SPACES
Proof Let X be separable and topologically complete and let
If X and 2 are topological spaces,
if x = y, if x # y, and dz(x, y)
= [x - yl·
Then (X, d 1 ) is complete, but (X, d z ) is not. A more surprising example is that the set JV 0 of irrational numbers between 0 and I with the usual topology is topologically complete. To see this, write JV 0 = nrEQ( [0, I] - {r}), where Q is the set of rational numbers. It follows that JV 0 is a G,,-subset of [0, I] and is thus topologically complete by Proposition 7.3. Another proof is obtained as follows. Let N be the set of positive integers with the discrete topology and JV the product of countably many copies of N. The space JV is called the Baire null space and is topologically complete (Proposition 7.4). The topological completeness of JV 0 follows from the fact that JV and JV 0 are homeomorphic. We give the rather lengthy proof of this because it is not readily available elsewhere. This homeomorphism will be used only to construct a counterexample (Example I in Chapter 8), so it may be skipped by the reader without loss of continuity. Proposition 7.5
The topological spaces JV 0 and JV are homeomorphic.
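Before turning to the proof, the correspondence can be illustrated numerically: a finite sequence of positive integers is sent to the value of the corresponding finite continued fraction (cf. Claim 3 in the proof), and the sequence can be recovered from that value; applied to infinite sequences, the same mechanism produces the irrational numbers of the proposition. The sketch below (not from the book) does this with exact rational arithmetic; the helper names phi and expand are ours, not the book's.

```python
# Hypothetical illustration of the continued-fraction correspondence behind Proposition 7.5.
from fractions import Fraction

def phi(z):
    # evaluate the finite continued fraction 1/(z1 + 1/(z2 + ... + 1/zk)) exactly
    value = Fraction(0)
    for zk in reversed(z):
        value = 1 / (zk + value)
    return value

def expand(y, n):
    # first n terms of the continued fraction expansion of y in (0, 1)
    terms = []
    for _ in range(n):
        y = 1 / y
        k = int(y)          # integer part
        terms.append(k)
        y -= k
        if y == 0:
            break
    return terms

z = [1, 2, 3, 4, 5]
y = phi(z)
print(float(y), expand(y, 5))   # recovers [1, 2, 3, 4, 5]
```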
Proof Let ~ be the set offinite sequences of positive integers. If ZE~ u JV, we will represent its components by (k' Similarly, t, will represent the components of an element z of ~ u JV. The length of Z E ~ u JV is defined to be the number of its components. If Z has length greater than or equal to k, we define Zk = ((k,(k+l,"') or Zk = ((b'" , (m), depending on whether Z has infinite length or length m < co.
110
7.
BOREL SPACES AND THEIR PROBABILITY MEASURES
For ZEL U .K, define a sequence whose initial terms are
= (II, X2(Z) = ((I + (21)-1,
x 1(z)
X3(Z) Ifz has length k < xk(z), j = 1,2, ....
00,
=
((I
+ ((2 + (31)-1)-1.
we define X1(Z),X2(Z)" .. ,xk(z) as shown, and Xk+)Z)
=
Claim 1 The sequence {xk(z)} converges to an element of (0,1 J for 't:/ZELU%. If Z has finite length, the claim is trivial. If Z has infinite length, then
0< x 2n(z) < X2n+2(Z) < X2n+ 1(Z) < X2n- 1(Z)::; 1,
n=I,2, ... ,
(12)
so for every n X2n(Z)::; lim inf xk(z) ::; lim sup Xk(z) ::; X2n- I(Z). k -r o:
(13)
k~·:xJ
Now
0< X2n-1(Z) - X2n(Z)
=
+ X2n_2(Z2)]-1
[(I
- [(I
+ X2n_1(Z2)]-1
+ X2n_2(Z2)]-1[(1 + X2n-1(Z2)]-1[X2n-l(Z2) -
= [(I
X2n-2(Z2)J
1)-lJ- 2[X2n_1(Z2) - X2n-2(Z2)J
+ ((2 + + ((2 + 1)-lJ-2[(2 + ((3 + l)-lJ- 2[X2n_3(Z3) -
X2n-2(Z3)J
+ ((2k + 1)-lr 2U2k + ((2k+1 + 1)-lJ-2 ::;~,
k = 1, ... ,n - 1,
::; [(1 ::; [(I
Since [(2k-1
we have
and Claim 1 follows. Define
-+
(0,1] by
= lim xk(z). k-+
CJJ
Note that if Z E %, then 0 <
Claim 2
If Z E % and
z.
(14)
7.2
111
METRIZABLE SPACES
Suppose cp(z) = cp(z) and z =/= of generality that (1 =/= (lor else case, (12) implies
z. We can use (14) to assume without loss z has length one and (1 = (1' In the latter
c:p(z) = 1/(1 = 1/(1 = x 1(z) > X3(Z) 2 cp(z),
and a contradiction is reached. In the former case, if z has length one, then from (14) so (1
= (1 + CP(Z2),
which is impossible, since 0 < cp(z 2) < 1. If zhas length greater than one, then 1/[(1 + CP(Z2)] = cp(z) = cp(z) = 1/[(1
+ CP(Z2)],
and
(1 + C:P(Z2)
= (1
+ C:P(Z2)'
This is also impossible, since 0 < CP(Z2) ::;; 1 and 0 < CP(Z2) < 1. Claim 3 Every rational number in (0,1] has the form c:p(z), where ZE~.
Let r d q be a rational number in (0, 1] reduced to lowest terms, r 1 and q positive integers. Then
r.t« =
(q/rd- 1 = [q1
+ (r2/ r1)]-\
where q1 and r2 are positive integers and r 2 < r 1. Likewise, rslr, = (rdr2)-1 = [q2
+ (r3/r2)]-1,
where q2 and rs are positive integers and r3 < t z- Continuing, we eventually obtain rn = 1 and have rdq = CP[(ql,q2"" ,qn-1,rn-dJ.
Claims 2 and 3 imply that if zEAl, then cp(z) is irrational. Put another way, cp maps AI into AI o- But given y E AI 0, it is possible to choose positive integers (1, (2" .. , such that ((1 ((1
+ 1)-1
((1 + (21)-1 < Y < + ((2 + ((3 + 1)-1)-1)-1 < y <
etc., so that defining
Z
((1 ((1
+ ((2 + 1)-1)-\ + ((2 + (31)-1)-\
= (( 1, (2' ...), we have
X2k(Z) < Y < x 2k- 1(z),
k = 1,2, ....
112
7.
BOREL SPACES AND THEIR PROBABILITY MEASURES
It follows that cp(z) = y, so cp maps JV onto JV 0 and, by Claim 2, is one-to-one onJV.
We show that cp restricted to JV is open and continuous. Let V open. We may assume without loss of generality that
c
,AI be
V = {ZEJV!((b'" ,(n) = ((b'" ,(n)).
Then cp(V) = {((I
+ ((Z + ... + ((n + cp(Z))-l .. Tl)-llzEJV},
and since {cp(Z)IZEJV} = JVo,cp(V) is open. Since convergence in JV is componentwise and xn(z) depends only on the first n components of Z E JV, continuity of cp on JV follows from (13). Q.E.D. We now examine properties of metrizable spaces related to the notion of total boundedness. Definition 7.4 A metric space (X, d) is totally bounded if, given there exists a finite subset F, of X for which
X
=
8
> 0,
U {YEXld(x,y) < 8}.
xeFc.
U::,=
A totally bounded metric space is necessarily separable, since 1 F 11" is a countable dense subset. Total boundedness depends on the metric, however, and a space which is totally bounded (and separable) with one metric may not be totally bounded with another. Like separability, total boundedness is preserved under passage to subspaces, i.e., if (X, d) is totally bounded and Y c X, then (Y, d) is totally bounded. To see this, take 8 > and let F'IZ be a finite subset of X such that
°
X=
U {YEX!d(x,y) < 812}.
XEF&/2
Choose a point, if possible, in each of the sets Y (\ {YEX\d(x,y) < 812},
xEFtIZ ,
and call the collection of these points G,. Then
Y=
U {ZE Yld(y,z) < 8}.
yeG e
We use this fact to prove the following classical result relating completeness, compactness, and total boundedness. Proposition 7.6 A metric space is compact if and only if it is complete and totally bounded.
Proof If (X, d) is a compact metric space, then every Cauchy sequence has an accumulation point. The Cauchy property implies that the sequence
7.2
113
METRIZABLE SPACES
converges to this point, and completeness follows. Also, for s > 0, the collection of sets (YEXld(x,y) < s],
XEX,
contains a finite cover of X. Hence, (X, d) is totally bounded. If (X, d) is complete and totally bounded and S = {Sj} is a sequence in (X, d), then an infinite subsequence SIc S must lie in some set B 1 = {YEXld(Xl'Y) < I}. Since B 1 is totally bounded, an infinite subsequence S2 c Sl must lie in some set B 2 = {YEB1Id(X2'Y) < !}. Continuing in this manner, we have for each n an infinite sequence Sn+ 1 C S; lying in B n+ 1 = {YEBnld(xn+by) < l/(n + I)}. Let i,
As mentioned previously, total boundedness implies separability. By combining this fact with Proposition 7.6, we obtain the following corollary. Corollary 7.6.2
A compact metric space is complete and separable.
If X is a metrizable space, the set of all bounded, continuous, realvalued functions on X is denoted C(X). As is well known, C(X) is a Banach space under the norm
IIIII = sup!f(x)!, XEX
and we will always take C(X) to have the metric and topology corresponding to this norm. If d is a metric on X consistent with its topology, we denote by VAX) the collection offunctions in C(X) which are uniformly continuous with respect to d. We take VAX) to have the relative topology of C(X). We conclude this section with a discussion of the properties C(X) and VAX) inherit from X. Proposition 7.7 separable.
If X is a compact metrizable space, then C(X) is
Proof The space X is separable (Corollary 7.6.2). Let {xd be a countable dense subset of X and let F 1, F 2, . . . be an enumeration of the collection of sets of the form {YE X\d(X b y) ::; lin}, where k and n range over the positive
7.
114
BOREL SPACES AND THEIR PROBABILITY MEASURES
°
integers. For any disjoint pair F, and F j , let fij be a continuous function taking values in [0, 1] such that .h/x) = for x E F, and .hj(x) = 1 for x E F j . If F i and F, are not disjoint, let fij be identically one. Let C(} consist of the functions .hj as i and j range over the positive integers. The collection 0 clearly separates points in X, i.e., given x =I y, there exists f E C(} for which f(x) =I I( y). Let .0/> be the collection of finite-degree polynomials over (e. i.e., a typical element in fJjJ has the form (i
J, • • • •
L
in)' (j}. . . . ,jrl)
a(i b
· · ·
,i,,;jl"" ,j,,)fl:" 'f/~,
where a(il" .. ,i,,;jl" .. ,j,,)ER, .hI" .. ,.hnEC(}, and the summation is finite. Then fJjJ is a vector space under addition and the product of two elements in fJjJ is again in fJjJ. With these operations fJjJ is an algebra, and by the StoneWeierstrass theorem, fJjJ is dense in C(X). Let fJjJo be the collection of finitedegree polynomials over C(} with rational coefficients. An easy approximation argument shows that &0 is dense in fJjJ, and thus dense in C(X) as well. Since f!lJ 0 is countable, C(X) is separable. Q.E.D. Definition 7.5 Let (X,dd and (Y,d z ) be metric spaces. A mapping Y is an isometry if
qJ : X ~
In this case we say that (X, dd and (qJ(X),d z) are isometric spaces. If (X,dd and (Y,d z ) are as in Definition 7.5, we may regard the former as a subspace of the later, and the distances between points in X are unaffected by this embedding. Thus an isometry is a metric-preserving homeomorphism. Proposition 7.8 Let (X, d) be a metric space. There exists a complete metric space (X d' d l ), called the completion of (X, d), and an isometry ip : X ~ X d such that qJ(X) is dense in X d •
Proof The construction of the completion of a metric space is standard, so we content ourselves with a sketch of it. Given the metric space (X. d), define an equivalence relation ~ on the set of Cauchy sequences in (X,d) by
[x,,}
~ {x~}
<=>
lim d(x", x~)
n----+CJ:
=
0.
Let X d be the set of equivalence classes of Cauchy sequences in (X,d) under this relation and let d, be defined on XdX d by dl(x,y)
=
lim d(x",y,,),
(15)
7.2
115
METRIZABLE SPACES
where {x n} and {Yn} are chosen to represent the equivalence classes x and y. It is straightforward to verify that the limit in (15) exists for every pair of Cauchy sequences {x n} and {Yn}, and it is independent of the particular sequences chosen to represent the equivalence classes x and y. Furthermore. (Xd,d l) can be shown to be a complete metric space. and the mapping cp which takes x E X into the equivalence class in X d containing the Cauchy sequence (x, x, ...) is an isometry. The image of X under tp is dense in X d • Q.E.D. We can regard X d as consisting of X together with limits of all Cauchy sequences in X. We are really interested in the case in which (X, d) is totally bounded, for which we have the following result. Corollary 7.8.1 Let (X, d) be a totally bounded metric space. There exists a compact metric space (Xd,d l) and an isometry cp:X -* X; such that cp(X) is dense in X d.
Proof In light of Propositions 7.6 and 7.8, it suffices to prove that the completion (X do dd of (X, d) is totally bounded. Choose s > 0. Regarding (X,d) as a subspace of(Xd,dd, choose a finite set Fe of X for which X
U {YEXld(x,y) < eI2}.
=
XEFt;
Since X is dense in X d , we have
x, = U {YEXdldl(x,y) < e}.
Q.E.D.
XEF e
If X is a separable metrizable space, it is not necessarily true that C(X) is separable (unless X is compact, in which case we have Proposition 7.7). F or example, let f: R -* [0, 1] be defined as
f(x)=
o 1+2x {1 - 2x
and given an infinite sequence b ft,(x)
=
=
if if if
Ixl;:::: 1, -1~x~0,
°
~ x ~
1,
(!3l,!3Z, ...) of zeroes and ones, define
L
InlPn=
f(x - n). l}
We have constructed an uncountable collection of functions fb in C(R) such that if b, i= b z , then };,211 = 1. Therefore, C(R) cannot be separable. It is true, however, that given a separable metrizable space X, there is a metric d on X consistent with its topology such that UAX) is separable. This is a consequence of the next proposition and the fact that separability implies the existence of a totally bounded metrization (Corollary 7.6.1). We prove this proposition with the aid of the following lemma.
II};" -
116
7.
BOREL SPACES AND THEIR PROBABILITY MEASURES
Lemma 7.3 Let Y be a metrizable space, d a metric on Y consistent with its topology, and X c Y. If g E Ud(X), then g has a continuous extension to Y, i.e., there exists gE C( Y) such that g(x) = g(x) for every x E X, and the extension g can be chosen to satisfy Ilgll = Ilgll. If X is dense in Y, g is unique.
Proof
Since g is uniformly continuous on X, given e > 0 there exists
..5(e) > 0 such that if XI' XzEX and d(xl,x z) S ..5(e), then Ig(xd - g(xz)! s e. Suppose yEX. Then there exists a sequence [x.} C X for which X Il ~ y. Given e > 0, there exists N(e) such that d(xn,xm ) S ..5(e) for all n; m ~ N(e), so {g(x n)} is Cauchy in R. Define g(y) = limn~co g(x n). Note that n ~ N(e) implies Ig(xn) - g(y)1 s e. Suppose now that X E X and d(x, y) S ..5(e)/2. Choose n ~ N(e) so that d(x n, y) S ..5(e)/2. Then dix, x n) s ..5(e) and Ig(x) - g(y)1 s Ig(x) - g(xll)1
+ Ig(xn) -
g(y)1
s
2e.
(16)
This shows that for any sequence {x~} c X with x~ ~ y, we have g(y) = limn_co g(x~), so the definition of g(y) is independent of the particular sequence {x n} chosen. If yEX, we can take x, = y, n = 1,2, ... , and obtain g(y) = g(y), so {j is an extension of g. If {Ym} is a sequence in X which converges to yEX, then there exist sequences {x mn} in X with Ym = lim ll_co x mll. Choose n 1 < n z < ... so that lim m_co xmll~ = Y and d(xmll~' Ym) S 6(l/m)/2. Then g(y)
= lim g(xmnJ,
(17)
m-oo
and, by (16), (18) Letting m ~ 00 in (18) and using (17), we conclude that g(y) = lim m_ oo gCvm) and {J is continuous on X. It is clear that sUPlg(x)1 XEX
=
sup!{J(y)I·· YEX
If X = Y, {J is clearly unique and we are done. If X is a proper subset of Y, use the Tietze extension theorem (see, e.g., Ash [AI] or Dugundji [D7]) to extend g to all of Y so that Ilgll = suplg(y)l·
Q.E.D.
YEf
Proposition 7.9 is separable.
If (X, d) is a totally bounded metric space, then UAX)
Proof Corollary 7.8.1 tells us that (X, d) can be isometrically embedded as a dense subset of a compact metric space (Xd,dd. We regard X as a
7.3
117
BOREL SPACES
subspace of X d. Given any gE Ud(X), by Lemma 7.3, g has a unique extension such that Ilg[1 = Ilgll. The mapping g --+ 9 is linear and normpreserving, thus an isometry from UiX) to C(X d)' The latter space is separable by Proposition 7.7, and the separability of UiX) follows. Q.E.D. gE C(Xd)
7.3
Borel Spaces
The constructions necessary for the subsequent theory of dynamic programming are impossible when the state space and control space are arbitrary sets or even when they are arbitrary measurable spaces. For this reason, we introduce the concept of a Borel space, and in this and subsequent sections we develop the properties of Borel spaces which permit these constructions. Definition 7.6 If X is a topological space, the smallest a-algebra of subsets of X which contains all open subsets of X is called the Borel a-algebra and is denoted by !!lJ x . The members of !!lJ x are called the Borel subsets of X. If X is separable and metrizable and ff is a a-algebra on X containing a subbase Y for its topology, then ff contains !!lJ x . This is because, from Proposition 7.1, any open set in X can be written as a countable union of finite intersections of sets in Y. Thus we have !!lJ x = a(Y) for any subbase
Y.
We will often refer to the smallest a-algebra containing a class of subsets as the a-algebra generated by the class. Thus, !!lJ x is the a-algebra generated by the class of open subsets of X. Note that !!lJ R is the class of Borel subsets of the real numbers in the usual sense, i.e., the a-algebra generated by the intervals. Given a class of real-valued functions on a topological space X, it is common to speak of the weakest topology with respect to which all functions in the class are continuous. In a similar vein, one can speak of the smallest a-algebra with respect to which all functions in the class are measurable. If X is a metrizable space, it is easy to show that its topology is the weakest with respect to which all functions in C(X) are continuous. The following proposition is the analogous result for !!lJ x . In the proof and in subsequent proofs, we will use the fact that for any two sets 0, 0', any collection ~ of subsets of 0', and any function j:O --+ 0', we have a[f-l(~)J
= j-l[a(~)].
Proposition 7.10 Let X be a metrizable space. Then !!lJ x is the smallest a-algebra with respect to which every function in C(X) is measurable, i.e., 1 fJU x = a[UfeC(Xd- (!!lJR)].
118
7.
BOREL SPACES AND THEIR PROBABILITY MEASURES
Proof Denote ~ = a[U!EC(xJ- 1(.@R)] and let Y R be the topology of R. We have
~
=
a[
U f-l[a('~R)]
!EC(X)
= a[
U
!EC(X)
a[j-l(5R)]] c a[
U
!EC(X}
f!8X ]
= .@x·
To prove the reverse containment f!8x c ~ we need only establish that .'F contains every nonempty open set. By Lemma 7.2, it suffices to show that .'F contains every nonempty closed set. Let A be such a set. We may assume without loss of generality that A #- X, so there exists x E X-A. Let B = [x], and let f be given by Lemma 7.1. Then A = f-l( [O})E~. Q.E.D. We use Lemma 7.2 to prove another useful characterization of the Borel a-algebra in a metrizable space. Proposition7.11 Let X be a metrizable space. Then .@x is the smallest class of sets which is closed under countable unions and intersections and contains every closed (open) set.
Proof Let f0 be the smallest class of sets which contains every closed set and is closed under countable unions and intersections, i.e., f0 is the intersection of all such classes. Then f0 c f!8x and it suffices to prove that f0 is closed under complementation. Let f0' be the class of complements of sets in f0. Then f0' is also closed under countable unions and intersections. Lemma 7.2 implies that f0 contains every open set, so f0' contains every closed set, and consequently f0 c f0'. Given DEf0, we have DEf0', so D Q.E.D. CEf0.
Definition 7.7 Let X be a topological space. If there exists a complete separable metric space Y and a Borel subset BE f!8y such that X is homeomorphic to B, then X is said to be a Borel space. The empty set will also be regarded as a Borel space.
Note that every Borel space is metrizable and separable. Also, every complete separable metrizable space is a Borel space. Examples of Borel spaces are R, W, and R* with the weakest topology containing the intervals [ - 00, 0:), (/3, 00], (0:, /3), 0:, /3 E R. (This is also the topology that makes the function tp defined by cp(x) =
{~gn(X)(l -1
- e-
1xl )
if x =00, if xER, if x = - 00,
7.3
119
BOREL SPACES
a homeomorphism from R* onto [ -1, 1J). Any countable set X with the discrete topology (i.e., the topology consisting of all subsets of X) is also a Borel space. We will show that every Borel subset of a Borel space is itself a Borel space. For this we shall need the following two lemmas. The proof of the first is elementary and is left to the reader. Lemma 7.4 If Y is a topological space and E c Y, then the rr-algebrasf £ generated by the relative topology coincides with the relative IT-algebra, i.e., the collection {E n qCE86'y}. In particular, if EE86'y, then 86'£ consists of the Borel subsets of Y contained in E. Lemma 7.5 If X and Yare topological spaces and cp is a homeomorphism of X into Y, then cp(86'x) = 86'
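The displayed map is easy to check numerically. The sketch below (not from the book) evaluates it at a few points of R*, with Python's inf standing in for ±∞, and exhibits strictly increasing values in [-1, 1].

```python
# Hypothetical numerical check of the map carrying R* onto [-1, 1].
from math import exp, inf, copysign

def phi(x):
    if x == inf:
        return 1.0
    if x == -inf:
        return -1.0
    return copysign(1.0, x) * (1.0 - exp(-abs(x)))   # sgn(x)(1 - e^{-|x|})

for x in (-inf, -2.0, 0.0, 2.0, inf):
    print(x, phi(x))   # strictly increasing values in [-1, 1]
```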
Proof If!Yx is the topology of X, then cp(!Yx) is the topology of cp(X). Since cp is one-to-one, we have that cp is the inverse of a mapping, and Q.E.D. Proposition 7.12
If X is a Borel space and BE 86'x, then B is a Borel space.
Proof Let cp be a homeomorphism of X into some complete separable metric space Y such that cp(X)E86'y. From Lemma 7.5 and the fact that BE86'x, we obtain cp(B)E86'
Proof As in Proposition 7.4, we focus our attention on the more difficult infinite product. Consider the last statement of the proposition. Each X k has a countable base rgk for its topology, and the collection of sets of the form G IG Z ' " GnXn+lXn+Z"', where G, ranges over rgk and n ranges over the positive integers, is a base for the product topology on Y. The IT-algebra generated by this topology is 86'r- Recall that the product IT-algebra86'x)86' x2 ••• is the smallest IT-algebra containing all finite-dimensional measurable rectangles, i.e., all sets of the form E IE z ... BnX n+ b X n+ Z "', where B kE86' Xk ' k = 1, ... , n. It is clear that each basic set of the product topology on Y is a finite-dimensional measurable rectangle, and since each open subset of Y is a countable union of these basic open sets, every open subset of Y is 86'X)86' X2 ... measurable. We conclude that 86'y c 86'X)86' X2 • • • • (Note that
7.
120
BOREL SPACES AND THEIR PROBABILITY MEASURES
this argument relies only on the separability of the spaces Xl' X 2, . . . . Without this separability assumption, the argument fails and the conclusion is false.) The reverse set containment follows from the observation that for each k and BkE38xk, X IX 2' .• Xk-1BkXk+ 1" ·E38 y . To prove that Y is a Borel space, note that X k can be mapped by a homeomorphism CPk onto a Borel subset of a separable topologically complete space Xk : The product Y = X1 X2 ... is separable and topologically complete, and cp: Y ~ Y defined by CP(Xl' X2" ..) = (cpl(xd, CP2(X2),· ..)
is a homeomorphism from Y onto CPl(X dCP2(X 2) .... This last set is in 38 X /JB X2 ' " = f!lJ y , and the conclusion follows. Q.E.D. Definition 7.8 Let X and Y be topological spaces. A function f:X is Borel-measurable iff- 1(B)Ef!lJ x for every BEf!lJy.
~
Y
In many respects, Borel-measurable functions relate to Borel a-algebras as continuous functions relate to topologies. We have already used the fact, for example, that if h.: X ~ lk is continuous from a topological space X to a topological space Yb k = 1,2, ... ,then F:X ~ Y1 Y2 ... defined by F(x) = (fl(X),f2(X), ...) is also continuous. This follows from the componentwise nature of convergence in product spaces. There is an analogous fact for Borel-measurable functions and Borel spaces. Proposition 7.14 Let X be a Borel space, Yb Y2 , . . . a sequence of Borel spaces, and h.: X ~ lk a sequence of functions. If each h. is Borel-measurable, k = 1,2, ... , then the function F:X ~ Y1 Y2 ' •• defined by
F(x) = (fl (X),f2 (x), ...)
and the functions F n: X
-4
Y1 Y2
...
Y,. defined by
Fn(x) = (fl(X),f2(X), ... ,f,.(x))
are Borel-measurable. Conversely, if F is Borel-measurable, then each h. is Borel-measurable, k = 1,2, ... , and if some F; is Borel-measurable, then L, f2' ... .I, are Borel-measurable. Proof Again we consider only the infinite product. The Borel a-algebra in Y1Y2 .•• is generated by sets of the form B 1B2 " ' , where BkEf!lJYk' k=1,2, ... . Now F- 1(B 1B 2 " .) = f l 1(B1) n f Zl(B 2 ) n .. . . (19)
The left side of (19) is in f!lJ x for each Bk E f!lJYk' k = 1,2, ... , if and only if the sets .fk I(Bk ) are in 38x for each B k E 38Yk' k = 1,2, ... , and the result follows. Q.E.D.
7.3
121
BOREL SPACES
Corollary 7.14.1 Let X and Y be Borel spaces, D a Borel subset of X, and f: D ---+ Y Borel-measurable. Then Gr(f)
= {(x,f(x)) E X Ylx ED}
is Borel-measurable. Proof The mappings (x, y) ---+ f(x) and (x, y) ---+ yare Borel-measurable from D Y to Y, so the mapping Fix, y) = (f(x), y) is Borel-measurable from D Yto YY. Then Gr(f) = F- 1({(y,y)!YE Y}).
Since {(y, y)ly E Y} is closed in YY, Gr(f) is Borel-measurable.
Q.E.D.
The concept of homeomorphism is instrumental in classifying topological spaces, since it allows us to identify those which are "topologically equivalent." We can also classify measurable spaces by identifying those which, when regarded only as sets with a-algebras, are indistinguishable. We specialize this concept to Borel spaces. Definition 7.9 Let X and Y be Borel spaces and cp: X ---+ Y a Borelmeasurable, one-to-one function such that cp - 1 is Borel-measurable on cp(X). Then cp is called a Borel isomorphism, and we say that X and cp(X) are Borel-isomorphic (or simply isomorphic).
If X and Yare Borel spaces and cp: X ---+ Y is a Borel isomorphism, it is tempting to think of X and cp(X) as identical measurable spaces. The difficulty with this is that X is a Borel space, but cp(X) is not required to be. This discrepancy is eliminated by the following intuitively plausible proposition, the rather lengthy proof of which can be found in Chapter I, Section 3 of Parthasarathy [PI]. We will not have occasion to use this result. Proposition 7.15 (Kuratowski's theorem) Let X be a Borel space, Ya separable metrizable space, and cp:X ---+ Y one-to-one and Borel-measurable. Then cp(X) is a Borel subset of Yand cp -1 is Borel-measurable. In particular, if Y is a Borel space, then X and cp(X) are isomorphic Borel spaces.
The advantage of classifying spaces by means of Borel isomorphisms is illustrated by the following result. We need this proposition for the subsequent development, but the proof is rather lengthy and is relegated to Appendix B, Section 2. Proposition 7.16 Let X and Y be Borel spaces. Then X and Yare isomorphic if and only if they have the same cardinality.
Proposition 7.16 leads to a consideration of the possible cardinalities of Borel spaces. Of course, Borel spaces which are countably infinite are
122
7.
BOREL SPACES AND THEIR PROBABILITY MEASURES
possible, as are Borel spaces which consist of a given finite number of elements. In both these cases, the Borel a-algebra is the power set and the conclusion of Proposition 7.16 is trivial. Because every Borel space can be homeomorphically embedded in the Hilbert cube, every Borel space has cardinality less than or equal to c. Even if one were to admit the possibility of an uncountable cardinality strictly less than c, the proof of Proposition 7.16 as given in Appendix B shows that every uncountable Borel space has cardinality c. By combining this fact with Proposition 7.16, we obtain the following corollary. Corollary 7.16.1 Every uncountable Borel space is Borel-isomorphic to every other uncountable Borel space. In particular, every uncountable Borel space is isomorphic to the unit interval [0, 1] and the Baire null space ,;j/'. 7.4
Probability Measures on Borel Spaces
If X is a metrizable space, we shall refer to a probability measure p on the measurable space (X, .%'x) as simply a probability measure on X. The set of all probability measures on X will be denoted by P(X). A probability measure p E P(X) determines a linear functional lp: C(X) ---+ R defined by
(20) On the other hand, a function OJ: P(X) ---+ R defined by
f
E
C(X) determines a real-valued function
(21) These relationships and the metrizability of the underlying space X allow us to show several properties of P(X). In particular, we will prove that there is a natural topology on P(X), the weakest topology with respect to which every mapping of the form of (21) is continuous, under which P(X) is a Borel space whenever X is a Borel space. 7.4.1
Characterization of Probability Measures
Definition 7.10 Let X be a metrizable space. A probability measure pEP(X) is said to be regular if for every BEf1J x , p(B)
=
sup{p(F)1F c B, F closed}
= inf{p(G)IB c G, G open}.
(22)
Proposition 7.17 Let X be a metrizable space. Every probability measure in P(X) is regular.
7.4
123
PROBABILITY MEASURES ON BOREL SPACES
Proof Let p E P(X) be given and let rff be the collection of BE [!,9x for which (22) holds. If HeX is open, then H = U:;"l Fn, where [Fn} is an increasing sequence of closed sets (Lemma 7.2), so inf{p(G)[H c G, G open}
= =
p(H)
lim p(Fn)
n-r c;
~ sup{p(F)IF c H, F closed} ~ p(H).
Therefore rff contains every open subset of X. We show that rff is au-algebra and conclude that rff = [!,9xIf B E rff, then p(BC )
= 1 - p(B) = 1 - sup{p(F)[F = inf{p(G)IB c G, G open},
c B, F
closed}
C
and similarly, p(BC )
=
sup{p(F)[F c B C , F closed},
so rff is closed under complementation. Now suppose {Bn } is a sequence of sets in rff. Choose 8 > 0 and F; c B; c G; such that F; is closed, G; is open, and p(Gn - F n) ~ 8/2n. Then nV! s,
c
nVl
a, =
(VI ULv! (VI ULVI F n)
c
(Gn - F n)]
B n)
(Gn - F n)}
so
and since
8
is arbitrary,
P(V1 B
n)
= inf{p(G)lnVl u, c G, G open}.
It is also apparent from (23) that
P(V1 ~ P(VI Bn )
so for N sufficiently large,
F n)
+ E,
(23)
124
7.
The finite union U~=
1
BOREL SPACES AND THEIR PROBABILITY MEASURES
F; is a closed subset of
U:'=
1
B; and
B
is arbitrary, so
This shows that Iff is closed under countable unions and completes the proof. Q.E.D. From Proposition 7.17 we conclude that a probability measure on a metrizable space is completely determined by its values on the open or closed sets. The following proposition is a similar result. It states that a probability measure P on a metric space (X, d) is completely determined by the values Sgdp, where g ranges over VAX). Proposition 7.18 Let X be a metrizable space and d a metric on X consistent with its topology. If Pl' P: E P(X) and
f gdPl = f gdpz
VgE
VAX),
then p, =pz. Proof Let F be any closed proper subset of X and let G; = {x EX Id(x, F) < lin}. For sufficiently large n, F and G~ are disjoint nonempty closed sets for which infXEF.YEG~d(x,y) > 0, so by Lemma 7.1, there exist functions f" E V AX) such that fn(x) = for x E G~, f,,(x) = 1 for x E F, and ~ .f,,(x) ~ 1 for every x E X. Then
°
Pl(F)
~
f f"dPl = f f"dpz ~ pz(G
°
n),
and so
Reversing the roles of Pl and Pz, we obtain Pl(F) = pz(F). Proposition 7.17 implies Pl(B) = pz(R) for every BE!JIJ x . Q.E.D. 7.4.2
The Weak Topology
We turn now to a discussion of topologies on P(X), where X is a metrizable space. Given B > 0, P E P(X), and f E qX), define the subset of P(X):
(24)
7.4
125
PROBABILITY MEASURES ON BOREL SPACES
If D c C(X), consider the collection of subsets of P(X): "f/(D) =
{v,,(P;f)18 > 0, PEP(X),IED}.
Let !7 (D) be the weakest topology on P(X) which contains the collection 'Y(D), i.e., the topology for which 'Y(D) is a subbase.
Lemma 7.6 Let X be a metrizable space and D ⊂ C(X). Let {p_α} be a net in P(X) and p ∈ P(X). Then p_α → p relative to the topology 𝒯(D) if and only if ∫f dp_α → ∫f dp for every f ∈ D.

Proof Suppose p_α → p and f ∈ D. Then, given ε > 0, there exists β such that α ≥ β implies p_α ∈ V_ε(p; f). Hence ∫f dp_α → ∫f dp. Conversely, suppose ∫f dp_α → ∫f dp for every f ∈ D, and let G ∈ 𝒯(D) contain p. Then p is also contained in some basic open set ∩_{k=1}^n V_{ε_k}(p; f_k) ⊂ G, where ε_k > 0 and f_k ∈ D, k = 1, …, n. Choose β such that for all α ≥ β we have |∫f_k dp_α − ∫f_k dp| < ε_k, k = 1, …, n. Then p_α ∈ G for α ≥ β, so p_α → p. Q.E.D.
We are really interested in 𝒯[C(X)], the so-called weak topology on P(X). The space C(X) is too large to be manipulated easily, so we will need a countable set D ⊂ C(X) such that 𝒯(D) = 𝒯[C(X)]. Such a set D is produced by the next three lemmas.

Lemma 7.7 Let X be a metrizable space and d a metric on X consistent with its topology. If f ∈ C(X), then there exist sequences {g_n} and {h_n} in U_d(X) such that g_n ↑ f and h_n ↓ f.
Proof We need only produce the sequence {g_n}, since the other case follows by considering −f. In Lemma 7.14 we will have occasion, under weaker assumptions, to utilize the construction about to be described, so we are careful to point out which assumptions are being used. If f ∈ C(X), then f is bounded below by some b ∈ R, and for at least one x₀ ∈ X, f(x₀) < ∞. Define

g_n(x) = inf_{y∈X} [f(y) + n d(x, y)].    (25)

Note that for every x ∈ X,

b ≤ g_n(x) ≤ f(x) + n d(x, x) = f(x),

and g_n(x) ≤ g_{n+1}(x). Thus

b ≤ g₁ ≤ g₂ ≤ ⋯ ≤ f,    (26)
and each g_n is finite-valued. For every x, y, z ∈ X,

f(y) + n d(x, y) ≤ f(y) + n d(z, y) + n d(x, z),

and infimizing first the left side and then the right side over y ∈ X, we obtain

g_n(x) ≤ g_n(z) + n d(x, z).

Reverse the roles of x and z to show that

|g_n(x) − g_n(z)| ≤ n d(x, z),

so g_n ∈ U_d(X) for each n. From (26) we have

lim_{n→∞} g_n ≤ f.    (27)

We have so far used only the facts that f is bounded below and not identically +∞. To prove that equality holds in (27), we use the continuity of f. For x ∈ X and ε > 0, let {y_n} ⊂ X be such that

f(y_n) + n d(x, y_n) ≤ g_n(x) + ε.    (28)

As n → ∞, either g_n(x) ↑ ∞, in which case equality must hold in (27), or else y_n → x. In the latter case we have

f(x) = lim_{n→∞} f(y_n) ≤ lim_{n→∞} g_n(x) + ε,

and since x and ε are arbitrary, equality holds in (27). Q.E.D.
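The construction (25) is easy to visualize numerically. The sketch below is a minimal illustration, not part of the original text: it evaluates the Lipschitz regularizations g_n(x) = inf_y [f(y) + n d(x, y)] of a continuous function on a grid in [0, 1] and checks that they increase toward f. The grid, the sample function, and the name `lipschitz_approx` are choices made for the illustration.

```python
import numpy as np

def lipschitz_approx(f_vals, grid, n):
    """g_n(x) = inf_y [f(y) + n*d(x, y)] on a finite grid, with d = |x - y|."""
    return np.array([np.min(f_vals + n * np.abs(grid - x)) for x in grid])

grid = np.linspace(0.0, 1.0, 201)
f_vals = np.sin(6 * grid) + grid ** 2        # a sample f in C(X), X = [0, 1]

prev = np.full_like(f_vals, -np.inf)
for n in [1, 5, 25, 125]:
    g_n = lipschitz_approx(f_vals, grid, n)
    # Each g_n is n-Lipschitz, dominated by f, and the sequence is nondecreasing.
    assert np.all(g_n <= f_vals + 1e-12) and np.all(g_n >= prev - 1e-12)
    print(n, float(np.max(f_vals - g_n)))    # the sup-gap shrinks as n grows
    prev = g_n
```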
Lemma 7.8 Let X be a metrizable space and d a metric on X consistent with its topology. Then 𝒯[C(X)] = 𝒯[U_d(X)].

Proof Since U_d(X) ⊂ C(X), we have 𝒱[U_d(X)] ⊂ 𝒱[C(X)] and 𝒯[U_d(X)] ⊂ 𝒯[C(X)]. To prove the reverse containment, we show that every set in 𝒱[C(X)] is open in the 𝒯[U_d(X)] topology. Thus, given any set V_ε(p; f) ∈ 𝒱[C(X)] and any point p₀ in this set, we construct a set in 𝒯[U_d(X)] containing p₀ and contained in V_ε(p; f). Given V_ε(p; f) and p₀ ∈ V_ε(p; f), there exists ε₀ > 0 for which V_{ε₀}(p₀; f) ⊂ V_ε(p; f). By Lemma 7.7, there exist functions g and h in U_d(X) such that g ≤ f ≤ h and

∫f dp₀ < ∫g dp₀ + (ε₀/2),    ∫h dp₀ < ∫f dp₀ + (ε₀/2).

If q ∈ V_{ε₀/2}(p₀; g) ∩ V_{ε₀/2}(p₀; h), then

∫f dp₀ < ∫g dp₀ + (ε₀/2) < ∫g dq + ε₀ ≤ ∫f dq + ε₀

and

∫f dq ≤ ∫h dq < ∫h dp₀ + (ε₀/2) < ∫f dp₀ + ε₀,

so

|∫f dq − ∫f dp₀| < ε₀,

i.e., q ∈ V_{ε₀}(p₀; f), and therefore V_{ε₀/2}(p₀; g) ∩ V_{ε₀/2}(p₀; h) ⊂ V_ε(p; f). Q.E.D.
Lemma 7.9 Let X be a metrizable space and d a metric on X consistent with its topology. If D is dense in U_d(X), then 𝒯[U_d(X)] = 𝒯(D).

Proof It is clear that 𝒯(D) ⊂ 𝒯[U_d(X)]. To prove the reverse set containment, we choose a set V_ε(p; g) ∈ 𝒱[U_d(X)], select a point p₀ in this set, and construct a set in 𝒯(D) containing p₀ and contained in V_ε(p; g). Let

ε₀ = ε − |∫g dp₀ − ∫g dp| > 0.

Let h ∈ D be such that ‖g − h‖ < ε₀/3. Then for any q ∈ V_{ε₀/3}(p₀; h), we have

|∫g dq − ∫g dp| ≤ |∫g dq − ∫h dq| + |∫h dq − ∫h dp₀| + |∫h dp₀ − ∫g dp₀| + |∫g dp₀ − ∫g dp|
< (ε₀/3) + (ε₀/3) + (ε₀/3) + |∫g dp₀ − ∫g dp| = ε,

so V_{ε₀/3}(p₀; h) ⊂ V_ε(p; g). Q.E.D.
Proposition 7.19 Let X be a separable metrizable space. There is a metric d on X consistent with its topology and a countable dense subset D of U_d(X) such that 𝒯(D) is the weak topology 𝒯[C(X)] on P(X).

Proof Corollary 7.6.1 states that the separable metrizable space X has a totally bounded metrization d. By Proposition 7.9, there exists a countable dense set D in U_d(X). The conclusion follows from Lemmas 7.8 and 7.9. Q.E.D.

From this point on, whenever X is a metrizable space, we will understand P(X) to be a topological space with the weak topology 𝒯[C(X)]. We will show that when X is separable and metrizable, P(X) is separable and metrizable; when X is compact and metrizable, P(X) is compact and metrizable; when X is separable and topologically complete, P(X) is separable and topologically complete; and when X is a Borel space, P(X) is a Borel space.

Proposition 7.20 If X is a separable metrizable space, then P(X) is separable and metrizable.
Proof Let d be a metric on X consistent with its topology and D a countable dense subset of U_d(X) such that 𝒯(D) is the weak topology on P(X) (Proposition 7.19). Let R^∞ be the product of countably many copies of the real line and define φ: P(X) → R^∞ by

φ(p) = (∫g₁ dp, ∫g₂ dp, …),

where {g₁, g₂, …} is an enumeration of D. We show first that φ is one-to-one. Suppose φ(p₁) = φ(p₂). Given g ∈ U_d(X), choose a subsequence {g_{k_j}} of {g_k} such that ‖g_{k_j} − g‖ → 0. Then

|∫g dp₁ − ∫g dp₂| ≤ lim sup_{j→∞} |∫(g − g_{k_j}) dp₁| + lim sup_{j→∞} |∫g_{k_j} dp₁ − ∫g_{k_j} dp₂| + lim sup_{j→∞} |∫(g_{k_j} − g) dp₂| ≤ 2 lim sup_{j→∞} ‖g_{k_j} − g‖ = 0,

so ∫g dp₁ = ∫g dp₂. Proposition 7.18 implies that p₁ = p₂, so φ is one-to-one. For each g_k ∈ D, the mapping p → ∫g_k dp is continuous by Lemma 7.6, so φ is continuous; Lemma 7.6 likewise shows that φ⁻¹ is continuous on φ[P(X)]. Thus φ is a homeomorphism of P(X) onto a subset of the separable metrizable space R^∞, and P(X) is separable and metrizable. Q.E.D.

Proposition 7.21 Let X be a metrizable space with metric d, let {p_n} be a sequence in P(X), and let p ∈ P(X). The following statements are equivalent:

(a) p_n → p;
(b) ∫f dp_n → ∫f dp for every f ∈ C(X);
(c) ∫g dp_n → ∫g dp for every g ∈ U_d(X);
(d) lim sup_{n→∞} p_n(F) ≤ p(F) for every closed set F ⊂ X;
(e) lim inf_{n→∞} p_n(G) ≥ p(G) for every open set G ⊂ X.
Proof The equivalence of (a), (b), and (c) follows from Lemmas 7.6 and 7.8. The equivalence of (d) and (e) follows by complementation. To show that (b) implies (d), let F be a closed proper nonempty subset of X and let G_k = {x ∈ X | d(x, F) < 1/k}. For k sufficiently large, F and G_kᶜ are disjoint nonempty sets, and there exist functions f_k ∈ C(X) such that f_k(x) = 1 for x ∈ F, f_k(x) = 0 for x ∈ G_kᶜ, and 0 ≤ f_k(x) ≤ 1 for every x ∈ X. Using (b) we have

lim sup_{n→∞} p_n(F) ≤ lim_{n→∞} ∫f_k dp_n = ∫f_k dp ≤ p(G_k),

and letting k → ∞, we obtain (d). To show that (d) implies (b), choose f ∈ C(X) and assume without loss of generality that 0 ≤ f ≤ 1. Choose a positive integer K and define the closed sets

F_k = {x ∈ X | f(x) ≥ k/K},    k = 0, …, K.
Define φ: X → [0, 1] by

φ(x) = Σ_{k=0}^K (k/K) χ_{F_k − F_{k+1}}(x),

where F_{K+1} = ∅. Then φ ≤ f ≤ φ + (1/K), and, for any q ∈ P(X),

∫φ dq = Σ_{k=0}^K (k/K) q(F_k − F_{k+1}) = (1/K) Σ_{k=1}^K q(F_k).

Using (d) we have

lim sup_{n→∞} ∫f dp_n − (1/K) ≤ lim sup_{n→∞} ∫φ dp_n = (1/K) lim sup_{n→∞} Σ_{k=1}^K p_n(F_k)
≤ (1/K) Σ_{k=1}^K p(F_k) = ∫φ dp ≤ ∫f dp,

and since K is arbitrary, we obtain

lim sup_{n→∞} ∫f dp_n ≤ ∫f dp    (29)

for every f ∈ C(X). In particular, (29) holds for −f, so

lim inf_{n→∞} ∫f dp_n = −lim sup_{n→∞} ∫(−f) dp_n ≥ −∫(−f) dp = ∫f dp.    (30)

Combine (29) and (30) to conclude (b). Q.E.D.
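Conditions (b), (d), and (e) are easy to test numerically. The following sketch is purely illustrative and not from the text; the discretized measures, the test functions, and the sets involved are choices made for the example. It takes p_n to be the uniform distribution on the grid {1/n, …, 1} and p Lebesgue measure on [0, 1], checks that ∫f dp_n approaches ∫f dp, and then exhibits the familiar strict inequality in (e) with point masses.

```python
import numpy as np

# p_n = uniform distribution on the grid {1/n, 2/n, ..., 1};  p = Lebesgue measure on [0, 1].
def integrate_pn(f, n):
    return float(np.mean(f(np.arange(1, n + 1) / n)))

def integrate_p(f, m=100_001):
    return float(np.mean(f(np.linspace(0.0, 1.0, m))))   # fine Riemann approximation

for f in (lambda x: x ** 2, lambda x: np.cos(3 * x)):
    # Condition (b): integrals against p_n approach the integral against p as n grows.
    print(integrate_pn(f, 10), integrate_pn(f, 1000), integrate_p(f))

# Strict inequality in (e): with p_n the point mass at 1/n and p the point mass at 0,
# the open set G = (0, 1) has p_n(G) = 1 for every n >= 2, while p(G) = 0.
G = lambda x: 1.0 if 0.0 < x < 1.0 else 0.0
print([G(1.0 / n) for n in (2, 5, 100)], G(0.0))
```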
When X is a metrizable space, we denote by p_x the probability measure in P(X) which assigns unit point mass to x, i.e., p_x(B) = 1 if and only if x ∈ B.

Corollary 7.21.1 Let X be a metrizable space. The mapping δ: X → P(X) defined by δ(x) = p_x is a homeomorphism.

Proof It is clear that δ is one-to-one. Suppose {x_n} is a sequence in X and x ∈ X. If x_n → x and G is an open subset of X, then there are two possibilities. Either x ∈ G, in which case x_n ∈ G for sufficiently large n, so lim inf_{n→∞} p_{x_n}(G) = 1 = p_x(G), or else x ∉ G, in which case lim inf_{n→∞} p_{x_n}(G) ≥ 0 = p_x(G). Proposition 7.21 implies p_{x_n} → p_x, so δ is continuous. On the other hand, if p_{x_n} → p_x and G is an open neighborhood of x, then since lim inf_{n→∞} p_{x_n}(G) ≥ p_x(G) = 1, we must have x_n ∈ G for sufficiently large n, i.e., x_n → x. This shows that δ is a homeomorphism. Q.E.D.
From Corollary 7.21.1 we see that p_n can converge to p in such a way that strict inequality holds in (d) and (e) of Proposition 7.21. For example, let G ⊂ X be open, let x be on the boundary of G, and let x_n converge to x through G. Then p_{x_n}(G) = 1 for every n, but p_x(G) = 0. We now show that compactness of X is inherited by P(X).

Proposition 7.22 If X is a compact metrizable space, then P(X) is a compact metrizable space.
Proof If X is a compact metrizable space, it is separable (Corollary 7.6.2) and C(X) is separable (Proposition 7.7). Let {f_k} be a countable set in C(X) such that f₁ ≡ 1, ‖f_k‖ ≤ 1 for every k, and {f_k} is dense in the unit sphere {f ∈ C(X) | ‖f‖ ≤ 1}. Let [−1, 1]^∞ be the product of countably many copies of [−1, 1] and define φ: P(X) → [−1, 1]^∞ by

φ(p) = (∫f₁ dp, ∫f₂ dp, …).

A trivial modification of the proof of Proposition 7.20 shows φ is a homeomorphism. We will show that φ[P(X)] is closed in the compact space [−1, 1]^∞, and the compactness of P(X) will follow. Suppose {p_n} is a sequence in P(X) and φ(p_n) → (α₁, α₂, …) ∈ [−1, 1]^∞. Given ε > 0 and f ∈ C(X) with ‖f‖ ≤ 1, there is a function f_k with ‖f − f_k‖ < ε/3. There is a positive integer N such that n, m ≥ N implies |∫f_k dp_n − ∫f_k dp_m| < ε/3. Then

|∫f dp_n − ∫f dp_m| ≤ |∫f dp_n − ∫f_k dp_n| + |∫f_k dp_n − ∫f_k dp_m| + |∫f_k dp_m − ∫f dp_m| < ε,

so {∫f dp_n} is Cauchy in [−1, 1]. Denote its limit by E(f). If ‖f‖ > 1, define

E(f) = ‖f‖ E(f/‖f‖).

It is easily verified that E is a linear functional on C(X), that E(f) ≥ 0 whenever f ≥ 0, that |E(f)| ≤ ‖f‖ for every f ∈ C(X), and that E(f₁) = 1. Suppose {h_n} is a sequence in C(X) and h_n(x) ↓ 0 for every x ∈ X. Then for each ε > 0, the set K_n(ε) = {x | h_n(x) ≥ ε} is compact, and ∩_{n=1}^∞ K_n(ε) = ∅. Therefore, for n sufficiently large, K_n(ε) = ∅, which implies ‖h_n‖ ↓ 0. Consequently, E(h_n) ↓ 0. This shows that the functional E is a Daniell integral, and by a classical theorem (see, e.g., Royden [R5, p. 299, Proposition 21]) there exists a unique probability measure p on σ[∪_{f∈C(X)} f⁻¹(ℬ_R)] which satisfies E(f) = ∫f dp for every f ∈ C(X). Proposition 7.10 implies p ∈ P(X). We have

α_k = lim_{n→∞} ∫f_k dp_n = E(f_k) = ∫f_k dp,    k = 1, 2, …,

so φ(p_n) → φ(p). This proves φ[P(X)] is closed. Q.E.D.
In order to show that topological completeness and separability of X imply the same properties for P(X), we need the following lemma.

Lemma 7.10 Let X and Y be separable metrizable spaces and φ: X → Y a homeomorphism. The mapping ψ: P(X) → P(Y) defined by

ψ(p)(B) = p[φ⁻¹(B)]    ∀B ∈ ℬ_Y,

is a homeomorphism.

Proof Suppose p₁, p₂ ∈ P(X) and p₁ ≠ p₂. Since p₁ and p₂ are regular, there is an open set G ⊂ X for which p₁(G) ≠ p₂(G). The image φ(G) is relatively open in φ(X), so φ(G) = φ(X) ∩ B, where B is open in Y. It is clear that

ψ(p₁)(B) = p₁(G) ≠ p₂(G) = ψ(p₂)(B),

so ψ is one-to-one. Let {p_n} be a sequence in P(X) and p ∈ P(X). If p_n → p, then since φ⁻¹(H) is open in X for every open set H ⊂ Y, Proposition 7.21 implies

lim inf_{n→∞} ψ(p_n)(H) = lim inf_{n→∞} p_n[φ⁻¹(H)] ≥ p[φ⁻¹(H)] = ψ(p)(H),

so ψ(p_n) → ψ(p) and ψ is continuous. If we are given {p_n} and p such that ψ(p_n) → ψ(p), a reversal of this argument shows that p_n → p and ψ⁻¹ is continuous. Q.E.D.

Proposition 7.23 If X is a topologically complete separable space, then P(X) is topologically complete and separable.
Proof By Urysohn's theorem (Proposition 7.2) there is a homeomorphism φ from X onto a subset of the Hilbert cube ℋ, and the mapping ψ obtained by replacing Y by ℋ in Lemma 7.10 is a homeomorphism from P(X) to ψ[P(X)] ⊂ P(ℋ). Alexandroff's theorem (Proposition 7.3) implies φ(X) is a G_δ-subset of ℋ, and we see that

ψ[P(X)] = {p ∈ P(ℋ) | p[ℋ − φ(X)] = 0}.    (31)

We will show ψ[P(X)] is a G_δ-subset of the compact space P(ℋ) (Proposition 7.22) and use Alexandroff's theorem again to conclude that P(X) is topologically complete. Since φ(X) is a G_δ-subset of ℋ, we can find open sets G₁ ⊃ G₂ ⊃ ⋯ such that φ(X) = ∩_{n=1}^∞ G_n. It is clear from (31) that

ψ[P(X)] = ∩_{n=1}^∞ {p ∈ P(ℋ) | p(ℋ − G_n) = 0} = ∩_{n=1}^∞ ∩_{k=1}^∞ {p ∈ P(ℋ) | p(ℋ − G_n) < 1/k}.

But for any closed set F and real number c, the set {p ∈ P(ℋ) | p(F) ≥ c} is closed by Proposition 7.21(d), and {p ∈ P(ℋ) | p(ℋ − G_n) < 1/k} is the complement of such a set. Thus ψ[P(X)] is a countable intersection of open sets, and the result follows. Q.E.D.

We turn now to characterizing the σ-algebra ℬ_{P(X)} when X is metrizable and separable. From Lemma 7.6, we have that the mapping θ_f: P(X) → R given by

θ_f(p) = ∫f dp

is continuous for every f ∈ C(X). One can easily verify from Proposition 7.21 that the mapping θ_B: P(X) → [0, 1] defined by†

θ_B(p) = p(B)

is Borel-measurable when B is a closed subset of X. (Indeed, in the final stage of the proof of Proposition 7.23, we used the fact that when B is closed the upper level sets {p ∈ P(X) | θ_B(p) ≥ c} are closed.) Likewise, when B is open, θ_B is Borel-measurable. It is natural to ask whether θ_B is also Borel-measurable when B is an arbitrary Borel set. The answer is yes, and in fact ℬ_{P(X)} is the smallest σ-algebra with respect to which θ_B is measurable for every B ∈ ℬ_X. A useful aid in proving this and several subsequent results is the concept of a Dynkin system.

† The use of the symbol θ_B here is a slight abuse of notation. In keeping with the definition of θ_f, the technically correct symbol would be θ_{χ_B}.
Definition 7.11 Let X be a set and 𝒟 a class of subsets of X. We say 𝒟 is a Dynkin system if the following conditions hold:

(a) X ∈ 𝒟.
(b) If A, B ∈ 𝒟 and B ⊂ A, then A − B ∈ 𝒟.
(c) If A₁, A₂, … ∈ 𝒟 and A₁ ⊂ A₂ ⊂ ⋯, then ∪_{n=1}^∞ A_n ∈ 𝒟.

Proposition 7.24 (Dynkin system theorem) Let ℱ be a class of subsets of a set X, and assume ℱ is closed under finite intersections. If 𝒟 is a Dynkin system containing ℱ, then 𝒟 also contains σ(ℱ).

Proof This is a standard result in measure theory. See, for example, Ash [A1, p. 169]. Q.E.D.
Proposition 7.25 Let X be a separable metrizable space and ℰ a collection of subsets of X which generates ℬ_X and is closed under finite intersections. Then ℬ_{P(X)} is the smallest σ-algebra with respect to which all functions of the form

θ_E(p) = p(E),    E ∈ ℰ,

are measurable from P(X) to [0, 1].

Proof Let ℱ be the smallest σ-algebra with respect to which θ_E is measurable for every E ∈ ℰ. To show ℱ ⊂ ℬ_{P(X)}, we show that θ_B is ℬ_{P(X)}-measurable for every B ∈ ℬ_X. Let 𝒟 = {B ∈ ℬ_X | θ_B is ℬ_{P(X)}-measurable}. It is easily verified that 𝒟 is a Dynkin system. We have already seen that 𝒟 contains every closed set, so the Dynkin system theorem (Proposition 7.24) implies 𝒟 = ℬ_X. It remains to show that ℬ_{P(X)} ⊂ ℱ. Let 𝒟′ = {B ∈ ℬ_X | θ_B is ℱ-measurable}. As before, 𝒟′ is a Dynkin system, and since ℰ ⊂ 𝒟′, we have 𝒟′ = ℬ_X. Thus the function θ_f(p) = ∫f dp is ℱ-measurable when f is the indicator of a Borel set, and therefore also when f is a Borel-measurable simple function. If f ∈ C(X), then there is a sequence {f_n} of simple functions, uniformly bounded below, such that f_n ↑ f. The monotone convergence theorem implies θ_{f_n} ↑ θ_f, so θ_f is ℱ-measurable. It follows that for ε > 0, p ∈ P(X), and f ∈ C(X), the subbasic open set

V_ε(p; f) = {q ∈ P(X) | |∫f dq − ∫f dp| < ε}

is in ℱ. It follows that ℬ_{P(X)} ⊂ ℱ (see the remark following Definition 7.6), so ℬ_{P(X)} = ℱ. Q.E.D.
Corollary 7.25.1 If X is a Borel space, then P(X) is a Borel space.
Proof Let φ be a homeomorphism mapping X onto a Borel subset of a topologically complete separable space Y. Then, by Lemma 7.10, P(X) is homeomorphic to the Borel set {p ∈ P(Y) | p[φ(X)] = 1}. Since P(Y) is topologically complete and separable (Proposition 7.23), the result follows. Q.E.D.

7.4.3 Stochastic Kernels
We now consider probability measures on a separable metrizable space parameterized by the elements of another separable metrizable space.

Definition 7.12 Let X and Y be separable metrizable spaces. A stochastic kernel q(dy|x) on Y given X is a collection of probability measures in P(Y) parameterized by x ∈ X. If ℱ is a σ-algebra on X and γ⁻¹[ℬ_{P(Y)}] ⊂ ℱ, where γ: X → P(Y) is defined by

γ(x) = q(dy|x),    (32)

then q(dy|x) is said to be ℱ-measurable. If γ is continuous, q(dy|x) is said to be continuous.
AE(X) = q(Elx)
is Borel-measurable for every E E $. Proof Let y:X ----> P(Y) be defined by y(x) = q(dYlx). Then for EE{f, we have )'E = eE y. If q(dylxl is Borel-measurable (i.e.. y is Borel-measurable), then Proposition 7.25 implies AE is Borel-measurable for every E E g. Conversely, if AE is Borel-measurable for every EE{f, then
y -1 [!Jlp(y)] = y-
=
{
EEt
so q(dYlx) is Borel-measurable.
R
EEt
1(!Jl
Rl]
c
!Jl x,
Q.E.D.
Corollary 7.26.1 Let X and Y be Borel spaces and q(dy|x) a Borel-measurable stochastic kernel on Y given X. If B ∈ ℬ_{XY}, then the mapping λ_B: X → [0, 1] defined by

λ_B(x) = q(B_x|x),    (33)

where B_x = {y ∈ Y | (x, y) ∈ B}, is Borel-measurable.

Proof If B ∈ ℬ_{XY} and x ∈ X, then B_x ⊂ Y is homeomorphic to B ∩ ({x}Y) ∈ ℬ_{XY}. It follows that B_x ∈ ℬ_Y, so q(B_x|x) is defined. It is easy to show that the collection 𝒟 = {B ∈ ℬ_{XY} | λ_B is Borel-measurable} is a Dynkin system. Proposition 7.26 implies that 𝒟 contains the measurable rectangles, so 𝒟 = ℬ_{XY}. Q.E.D.
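To make Definition 7.12 and Corollary 7.26.1 concrete, here is a small numerical sketch; it is illustrative only, and the particular kernel, the set B, and the helper names are choices made for the example rather than anything taken from the text. It represents a kernel q(dy|x) on a finite set Y given X = R as a function returning a probability vector, and evaluates λ_B(x) = q(B_x|x) for a set B ⊂ XY whose sections B_x vary with x.

```python
import numpy as np

Y = np.array([0, 1, 2])                      # a finite Borel space Y

def q(x):
    """A stochastic kernel q(dy|x) on Y given X = R: a probability vector for each x."""
    w = np.exp(-0.5 * (Y - x) ** 2)          # weights peaked near y = x
    return w / w.sum()

def section(B, x):
    """B_x = {y in Y : (x, y) in B}, for B given by its indicator function."""
    return np.array([B(x, y) for y in Y], dtype=bool)

B = lambda x, y: y <= x                      # B = {(x, y) : y <= x}, a Borel subset of XY
lam_B = lambda x: q(x)[section(B, x)].sum()  # lambda_B(x) = q(B_x | x), cf. (33)

for x in [-1.0, 0.5, 1.5, 3.0]:
    print(x, round(float(lam_B(x)), 4))      # lambda_B is measurable in x
```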
We now show that one can decompose a probability measure on a product of Borel spaces into a marginal and a Borel-measurable stochastic kernel. This decomposition is possible even when a measurable dependence on a parameter is admitted, and, as we shall see in Chapter 10, this result is essential to the filtering algorithm for imperfect state information dynamic programming models. As a notational convenience, we use an underlined letter such as X̲ to denote a typical Borel subset of a Borel space X.

Proposition 7.27 Let (X, ℱ) be a measurable space, let Y and Z be Borel spaces, and let q(d(y, z)|x) be a stochastic kernel on YZ given X. Assume that q(B|x) is ℱ-measurable in x for every B ∈ ℬ_{YZ}. Then there exist a stochastic kernel r(dz|x, y) on Z given XY and a stochastic kernel s(dy|x) on Y given X such that r(Z̲|x, y) is ℱℬ_Y-measurable in (x, y) for every Z̲ ∈ ℬ_Z, s(Y̲|x) is ℱ-measurable in x for every Y̲ ∈ ℬ_Y, and

q(Y̲Z̲|x) = ∫_{Y̲} r(Z̲|x, y) s(dy|x)    ∀Y̲ ∈ ℬ_Y, Z̲ ∈ ℬ_Z.    (34)

Proof We prove this proposition under the assumption that Y and Z are uncountable. If either Y or Z or both are countable, slight modifications (actually simplifications) of this proof are necessary. From Corollary 7.16.1, we may assume without loss of generality that Y = Z = (0, 1]. Let s(dy|x) be the marginal of q(d(y, z)|x) on Y, i.e., s(Y̲|x) = q(Y̲Z|x) for every Y̲ ∈ ℬ_Y. For each positive integer n, define subsets of Y

M(j, n) = ((j − 1)/2ⁿ, j/2ⁿ],    j = 1, …, 2ⁿ.

Then each M(j, n + 1) is a subset of some M(k, n), and the collection {M(j, n) | n = 1, 2, …; j = 1, …, 2ⁿ} generates ℬ_Y. For z ∈ Q ∩ Z, define q(dy(0, z]|x) to be the measure on Y whose value at Y̲ ∈ ℬ_Y is q(Y̲(0, z]|x). Then q(dy(0, z]|x) is absolutely continuous with respect to s(dy|x) for every
z ∈ Q ∩ Z and x ∈ X. Define for z ∈ Q ∩ Z

G_n(z|x, y) = q[M(j, n)(0, z]|x] / s[M(j, n)|x]    if y ∈ M(j, n) and s[M(j, n)|x] > 0,
G_n(z|x, y) = 0                                    if y ∈ M(j, n) and s[M(j, n)|x] = 0.

The functions G_n(z|x, y) can be regarded as generalized difference quotients of q(dy(0, z]|x) relative to s(dy|x). For each z, the set

B(z) = {(x, y) ∈ XY | lim_{n→∞} G_n(z|x, y) exists in R}
     = {(x, y) ∈ XY | {G_n(z|x, y)} is Cauchy}
     = ∩_{k=1}^∞ ∪_{N=1}^∞ ∩_{n,m≥N} {(x, y) ∈ XY | |G_n(z|x, y) − G_m(z|x, y)| < 1/k}

is ℱℬ_Y-measurable. Theorem 2.5, page 612 of Doob [D4] states that

s[B(z)_x|x] = 1    ∀x ∈ X, z ∈ Q ∩ Z,

and if we define

G(z|x, y) = lim_{n→∞} G_n(z|x, y)    if (x, y) ∈ B(z),
G(z|x, y) = z                        otherwise,

then

q[Y̲(0, z]|x] = ∫_{Y̲} G(z|x, y) s(dy|x)    ∀x ∈ X, z ∈ Q ∩ Z, Y̲ ∈ ℬ_Y.    (35)

It is clear that for any z, G(z|x, y) is ℱℬ_Y-measurable in (x, y).† A comparison of (34) and (35) suggests that we should try to extend G(z|x, y) in such a way that for fixed (x, y), G(z|x, y) is a distribution function.

† For the reader familiar with martingales, we give the proof of the theorem just referenced. Fix x and z and observe that for m ≥ n,

q[M(j, n)(0, z]|x] = ∫_{M(j, n)} G_m(z|x, y) s(dy|x).    (*)

Since G_n(z|x, ·) is measurable with respect to the σ-algebra generated by the partition {M(j, n) | j = 1, …, 2ⁿ}, we conclude from (*) that G_n(z|x, y), n = 1, 2, …, is a martingale on Y under the measure s(dy|x). Each G_n(z|x, y) is bounded above by 1, so by the martingale convergence theorem (see, e.g., Ash [A1, p. 292]), G_n(z|x, y) converges for s(dy|x) almost every y. Thus s[B(z)_x|x] = 1 and the definition of G(z|x, y) given above is possible. Let m → ∞ in (*) to see that (35) holds whenever Y̲ = M(j, n) for some j and n. The collection of sets Y̲ for which (35) holds is a Dynkin system, and it follows from Proposition 7.24 that (35) holds for every Y̲ ∈ ℬ_Y.
Toward this end, for each z₀ ∈ Q ∩ Z, we define

C(z₀) = {(x, y) ∈ XY | ∃z ∈ Q ∩ Z with z ≤ z₀ and G(z|x, y) > G(z₀|x, y)}
      = ∪_{z∈Q∩Z, z≤z₀} {(x, y) ∈ XY | G(z|x, y) > G(z₀|x, y)},

C = ∪_{z₀∈Q∩Z} C(z₀),

D(z₀) = {(x, y) ∈ XY | G(·|x, y) is not right-continuous at z₀}
      = ∪_{n=1}^∞ ∩_{k=1}^∞ ∪_{z∈Q∩Z, z₀≤z<z₀+1/k} {(x, y) ∈ XY | |G(z|x, y) − G(z₀|x, y)| ≥ 1/n},

D = ∪_{z₀∈Q∩Z} D(z₀),

E = {(x, y) ∈ XY | G(z|x, y) does not converge to zero as z ↓ 0}
  = ∪_{n=1}^∞ ∩_{k=1}^∞ ∪_{z∈Q∩Z, z<1/k} {(x, y) ∈ XY | |G(z|x, y)| ≥ 1/n},

and

F = {(x, y) ∈ XY | G(1|x, y) ≠ 1}.

For fixed x ∈ X and z₀ ∈ Q ∩ Z, (35) implies that whenever z ∈ Q ∩ Z and z ≤ z₀, then

∫_{Y̲} G(z|x, y) s(dy|x) ≤ ∫_{Y̲} G(z₀|x, y) s(dy|x)    ∀Y̲ ∈ ℬ_Y.

Therefore G(z|x, y) ≤ G(z₀|x, y) for s(dy|x) almost all y, so s[C(z₀)_x|x] = 0 and

s(C_x|x) = 0.    (36)

Equation (36) implies that G(z|x, y) is nondecreasing in z for s(dy|x) almost all y. This fact and (35) imply that if z ↓ z₀ (z ∈ Q ∩ Z), then

∫_Y G(z|x, y) s(dy|x) ↓ ∫_Y G(z₀|x, y) s(dy|x),

and G(z|x, y) ↓ G(z₀|x, y) for s(dy|x) almost all y. Therefore s[D(z₀)_x|x] = 0 and

s(D_x|x) = 0.    (37)
Equation (35) also implies that as z ↓ 0 (z ∈ Q ∩ Z)

∫_{Y̲} G(z|x, y) s(dy|x) ↓ 0    ∀Y̲ ∈ ℬ_Y.

Since G(z|x, y) is nondecreasing in z for s(dy|x) almost all y, we must have G(z|x, y) ↓ 0 for s(dy|x) almost all y, i.e.,

s(E_x|x) = 0.    (38)

Substituting z = 1 in (35), we see that

∫_{Y̲} G(1|x, y) s(dy|x) = s(Y̲|x)    ∀Y̲ ∈ ℬ_Y,

so G(1|x, y) = 1 for s(dy|x) almost all y, i.e.,

s(F_x|x) = 0.    (39)

For z ∈ Z, let {z_n} be a sequence in Q ∩ Z such that z_n ↓ z and define, for every x ∈ X, y ∈ Y,

F(z|x, y) = lim_{n→∞} G(z_n|x, y)    if (x, y) ∈ XY − (C ∪ D ∪ E ∪ F),
F(z|x, y) = z                        otherwise.    (40)

For (x, y) ∈ XY − (C ∪ D ∪ E ∪ F), G(z|x, y) is a nondecreasing right-continuous function of z ∈ Q ∩ Z, so F(z|x, y) is well defined, nondecreasing, and right-continuous. It also satisfies, for every (x, y) ∈ XY,

0 ≤ F(z|x, y) ≤ F(1|x, y) = 1    ∀z ∈ Z,

and

lim_{z↓0} F(z|x, y) = 0.

It is a standard result of probability theory (Ash [A1, p. 24]) that for each (x, y) there is a probability measure r(dz|x, y) on Z such that

r((0, z]|x, y) = F(z|x, y)    ∀z ∈ (0, 1].

The collection of subsets Z̲ ∈ ℬ_Z for which r(Z̲|x, y) is ℱℬ_Y-measurable in (x, y) forms a Dynkin system which contains {(0, z] | z ∈ Z}, so r(Z̲|x, y) is ℱℬ_Y-measurable for every Z̲ ∈ ℬ_Z. Relations (35)–(40) and the monotone convergence theorem imply

q[Y̲(0, z]|x] = ∫_{Y̲} F(z|x, y) s(dy|x) = ∫_{Y̲} r((0, z]|x, y) s(dy|x)    ∀x ∈ X, z ∈ Z, Y̲ ∈ ℬ_Y.    (41)
The collection of subsets Z̲ ∈ ℬ_Z for which (34) holds forms a Dynkin system which contains {(0, z] | z ∈ Z}, so (34) holds for every Z̲ ∈ ℬ_Z. Q.E.D.

If ℱ = ℬ_X, an application of Proposition 7.26 reduces Proposition 7.27 to the following form.

Corollary 7.27.1 Let X, Y, and Z be Borel spaces and let q(d(y, z)|x) be a Borel-measurable stochastic kernel on YZ given X. Then there exist Borel-measurable stochastic kernels r(dz|x, y) and s(dy|x) on Z given XY and on Y given X, respectively, such that (34) holds.

If there is no dependence on the parameter x in Corollary 7.27.1, we have the following well-known result for Borel spaces.
Corollary 7.27.2 Let Y and Z be Borel spaces and q ∈ P(YZ). Then there exists a Borel-measurable stochastic kernel r(dz|y) on Z given Y such that

q(Y̲Z̲) = ∫_{Y̲} r(Z̲|y) s(dy)    ∀Y̲ ∈ ℬ_Y, Z̲ ∈ ℬ_Z,

where s is the marginal of q on Y.
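For finite Y and Z, the decomposition of Corollary 7.27.2 is just the familiar factorization of a joint distribution into a marginal and a conditional distribution. The sketch below is an illustration under that finiteness assumption (the particular joint matrix is arbitrary): it computes s and r(dz|y) from a joint q and verifies (34) on all rectangles in this discrete setting.

```python
import numpy as np

# Joint distribution q on YZ with |Y| = 3, |Z| = 2:  q[i, j] = q({(y_i, z_j)}).
q = np.array([[0.10, 0.20],
              [0.05, 0.25],
              [0.30, 0.10]])

s = q.sum(axis=1)                       # marginal of q on Y: s(y_i)
r = np.where(s[:, None] > 0, q / s[:, None], 1.0 / q.shape[1])
# r[i, j] = r({z_j} | y_i); on an s-null set the kernel may be chosen arbitrarily.

# Check q(Y_ x Z_) = sum over y in Y_ of r(Z_ | y) s(y) for all rectangles.
for Y_sub in [{0}, {1, 2}, {0, 1, 2}]:
    for Z_sub in [{0}, {1}, {0, 1}]:
        lhs = sum(q[i, j] for i in Y_sub for j in Z_sub)
        rhs = sum(s[i] * sum(r[i, j] for j in Z_sub) for i in Y_sub)
        assert abs(lhs - rhs) < 1e-12
print("decomposition verified:", r.round(3))
```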
7.4.4 Integration
As in Section 2.1, we adopt the convention

−∞ + ∞ = +∞ − ∞ = ∞.    (42)

With this convention, for a, b, c ∈ R* the associative law

(a + b) + c = a + (b + c)

still holds, since if either a, b, or c is ∞, then both sides are ∞, while if neither a, b, nor c is ∞, the usual arithmetic involving finite numbers and −∞ applies. Also, if a, b, c ∈ R* and a + b = c, then a = c − b, provided b ≠ ±∞. It is always true, however, that if a + b ≤ c, then a ≤ c − b. We use convention (42) to extend the definition of the integral. If X is a metrizable space, p ∈ P(X), and f: X → R* is Borel-measurable, we define

∫f dp = ∫f⁺ dp − ∫f⁻ dp.    (43)

Note that if ∫f⁺ dp < ∞ or ∫f⁻ dp < ∞, (43) reduces to the classical definition of ∫f dp. We collect some of the properties of integration in this extended sense in the following lemma.

Lemma 7.11 Let X be a metrizable space and let p ∈ P(X) be given. Let f, g, and f_n, n = 1, 2, …, be Borel-measurable, extended real-valued functions on X.
(a) Using (42) to define f + g, we have

∫(f + g) dp ≤ ∫f dp + ∫g dp.    (44)

(b) If either
(b1) ∫f⁺ dp < ∞ and ∫g⁺ dp < ∞, or
(b2) ∫f⁻ dp < ∞ and ∫g⁻ dp < ∞, or
(b3) ∫g⁺ dp < ∞ and ∫g⁻ dp < ∞,
then

∫(f + g) dp = ∫f dp + ∫g dp.    (45)

(c) If 0 < α < ∞, then ∫(αf) dp = α∫f dp.
(d) If f ≤ g, then ∫f dp ≤ ∫g dp.
(e) If f_n ↑ f and ∫f₁ dp > −∞, then ∫f_n dp ↑ ∫f dp.
(f) If f_n ↓ f and ∫f₁ dp < ∞, then ∫f_n dp ↓ ∫f dp.

Proof We prove (b) first and then return to (a). Under assumption (b1), we have f(x) < ∞ and g(x) < ∞ for p almost every x, so the sum f(x) + g(x) can be defined without resort to the convention (42) for p almost every x. Furthermore, ∫f dp < ∞ and ∫g dp < ∞, so (45) follows from the additivity theorem of classical integration theory (Ash [A1, p. 45]). The proof of (45) under assumption (b2) is similar. Under assumption (b3), either ∫f⁺ dp = ∞, in which case both sides of (45) are ∞, or else ∫f⁺ dp < ∞, in which case assumption (b1) holds. Returning to (a), we note that if assumption (b1) holds, then (45) implies (44). If assumption (b1) fails to hold, then

∫f dp + ∫g dp = ∞,

so (44) is still valid. Statements (c) and (d) are simple consequences of (42) and (43). Statement (e) follows from the extended monotone convergence theorem (Ash [A1, p. 47]) if ∫f₁⁻ dp < ∞. If ∫f₁⁻ dp = ∞, then ∫f₁ dp > −∞ implies ∫f₁⁺ dp = ∫f₁ dp = ∞, and the conclusion follows from (d). Statement (f) follows from the extended monotone convergence theorem. Q.E.D.
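The convention (42) and definition (43) are easy to mishandle in computation, where floating-point arithmetic gives inf − inf = nan rather than +∞. The helper below is a small illustrative sketch (the names and the two-point example measure are invented for the illustration) of the extended sum and of (43) for a function taking the values ±∞.

```python
import math

def ext_add(a, b):
    """Extended-real addition with the convention (42): -inf + inf = inf - inf = +inf."""
    if (a == math.inf and b == -math.inf) or (a == -math.inf and b == math.inf):
        return math.inf
    return a + b

def ext_integral(f, points, probs):
    """Integral of f w.r.t. a finitely supported p, via (43) and the convention (42)."""
    pos = sum(w * max(f(x), 0.0) for x, w in zip(points, probs))   # integral of f+
    neg = sum(w * max(-f(x), 0.0) for x, w in zip(points, probs))  # integral of f-
    return ext_add(pos, -neg)

f = lambda x: math.inf if x > 0 else -math.inf
print(ext_integral(f, [-1.0, 1.0], [0.5, 0.5]))   # +inf under (42) and (43), not nan
```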
We saw in Corollary 7.27.2 that a probability measure on a product of Borel spaces can be decomposed into a stochastic kernel and a marginal. This process can be reversed; that is, given a probability measure and one or more Borel-measurable stochastic kernels on Borel spaces, a unique probability measure on the product space can be constructed.

Proposition 7.28 Let X₁, X₂, … be a sequence of Borel spaces, Y_n = X₁X₂⋯X_n and Y = X₁X₂⋯. Let p ∈ P(X₁) be given, and, for n = 1, 2, …, let q_n(dx_{n+1}|y_n) be a Borel-measurable stochastic kernel on X_{n+1} given Y_n.
Then for n = 2, 3, …, there exist unique probability measures r_n ∈ P(Y_n) such that

r_n(X̲₁X̲₂⋯X̲_n) = ∫_{X̲₁} ∫_{X̲₂} ⋯ ∫_{X̲_{n−1}} q_{n−1}(X̲_n|x₁, x₂, …, x_{n−1}) q_{n−2}(dx_{n−1}|x₁, x₂, …, x_{n−2}) ⋯ q₁(dx₂|x₁) p(dx₁)    ∀X̲₁ ∈ ℬ_{X₁}, …, X̲_n ∈ ℬ_{X_n}.    (46)

If f: Y_n → R* is Borel-measurable and either ∫f⁺ dr_n < ∞ or ∫f⁻ dr_n < ∞, then

∫_{Y_n} f dr_n = ∫_{X₁} ∫_{X₂} ⋯ ∫_{X_n} f(x₁, x₂, …, x_n) q_{n−1}(dx_n|x₁, x₂, …, x_{n−1}) ⋯ q₁(dx₂|x₁) p(dx₁).    (47)

Furthermore, there exists a unique probability measure r on Y = X₁X₂⋯ such that for each n the marginal of r on Y_n is r_n.

Proof The spaces Y_n, n = 2, 3, …, and Y are Borel by Proposition 7.13. If there exists r_n ∈ P(Y_n) satisfying (46), it must be unique. To see this, suppose r_n′ ∈ P(Y_n) also satisfies (46). The collection 𝒟 = {B ∈ ℬ_{Y_n} | r_n(B) = r_n′(B)} is a Dynkin system containing the measurable rectangles, so 𝒟 = ℬ_{Y_n} and r_n = r_n′. We establish the existence of r_n by induction, considering first the case n = 2. For B ∈ ℬ_{Y₂}, use Corollary 7.26.1 to define

r₂(B) = ∫_{X₁} q₁(B_{x₁}|x₁) p(dx₁).    (48)

It is easily verified that r₂ ∈ P(Y₂) and r₂ satisfies (46). If f is the indicator of B ∈ ℬ_{Y₂}, then ∫_{X₂} f(x₁, x₂) q₁(dx₂|x₁) is Borel-measurable in x₁ and, by (48),

∫_{Y₂} f dr₂ = ∫_{X₁} ∫_{X₂} f(x₁, x₂) q₁(dx₂|x₁) p(dx₁).    (49)

Linearity of the integral implies that (49) holds for Borel-measurable simple functions as well. If f: Y₂ → [0, ∞] is Borel-measurable, then there exists an increasing sequence of simple functions {f_n} such that f_n ↑ f. By the monotone convergence theorem,

lim_{n→∞} ∫_{X₂} f_n(x₁, x₂) q₁(dx₂|x₁) = ∫_{X₂} f(x₁, x₂) q₁(dx₂|x₁)    ∀x₁ ∈ X₁,

so ∫_{X₂} f(x₁, x₂) q₁(dx₂|x₁) is Borel-measurable and

lim_{n→∞} ∫_{Y₂} f_n dr₂ = lim_{n→∞} ∫_{X₁} ∫_{X₂} f_n(x₁, x₂) q₁(dx₂|x₁) p(dx₁) = ∫_{X₁} ∫_{X₂} f(x₁, x₂) q₁(dx₂|x₁) p(dx₁).
But ∫_{Y₂} f_n dr₂ ↑ ∫_{Y₂} f dr₂, so (49) holds for any Borel-measurable nonnegative f. For a Borel-measurable f: Y₂ → R* satisfying ∫f⁺ dr₂ < ∞ or ∫f⁻ dr₂ < ∞, we have

∫_{Y₂} f⁺ dr₂ = ∫_{X₁} ∫_{X₂} f⁺(x₁, x₂) q₁(dx₂|x₁) p(dx₁)

and

∫_{Y₂} f⁻ dr₂ = ∫_{X₁} ∫_{X₂} f⁻(x₁, x₂) q₁(dx₂|x₁) p(dx₁).

Assume for specificity that ∫_{Y₂} f⁻ dr₂ < ∞. Then the functions

∫_{X₂} f⁺(x₁, x₂) q₁(dx₂|x₁)    and    −∫_{X₂} f⁻(x₁, x₂) q₁(dx₂|x₁)

satisfy condition (b2) of Lemma 7.11, so

∫_{Y₂} f dr₂ = ∫_{Y₂} f⁺ dr₂ − ∫_{Y₂} f⁻ dr₂
= ∫_{X₁} [∫_{X₂} f⁺(x₁, x₂) q₁(dx₂|x₁) − ∫_{X₂} f⁻(x₁, x₂) q₁(dx₂|x₁)] p(dx₁)
= ∫_{X₁} ∫_{X₂} f(x₁, x₂) q₁(dx₂|x₁) p(dx₁),

where the last step is a direct result of the definition of ∫_{X₂} f(x₁, x₂) q₁(dx₂|x₁). Assume now that r_k ∈ P(Y_k) exists for which (46) and (47) hold when n = k. For B ∈ ℬ_{Y_{k+1}}, let

r_{k+1}(B) = ∫_{Y_k} q_k(B_{y_k}|y_k) r_k(dy_k).

Then r_{k+1} ∈ P(Y_{k+1}). If B = X̲₁X̲₂⋯X̲_kX̲_{k+1}, where X̲_j ∈ ℬ_{X_j}, then

r_{k+1}(B) = ∫_{Y_k} χ_{X̲₁X̲₂⋯X̲_k}(y_k) q_k(X̲_{k+1}|y_k) r_k(dy_k)
= ∫_{X̲₁} ∫_{X̲₂} ⋯ ∫_{X̲_k} q_k(X̲_{k+1}|x₁, x₂, …, x_k) q_{k−1}(dx_k|x₁, …, x_{k−1}) ⋯ q₁(dx₂|x₁) p(dx₁)    (50)

by (47) when n = k. This proves (46) for n = k + 1. Now use (50) to prove (47) when n = k + 1 and f is an indicator function. As before, extend this to the case of f: Y_{k+1} → [0, ∞]. If f: Y_{k+1} → R* is Borel-measurable and either ∫f⁺ dr_{k+1} < ∞ or ∫f⁻ dr_{k+1} < ∞, then the validity of (47) for
nonnegative functions and the induction hypothesis imply

∫_{Y_{k+1}} f⁺ dr_{k+1} = ∫_{X₁} ∫_{X₂} ⋯ ∫_{X_{k+1}} f⁺(x₁, …, x_{k+1}) q_k(dx_{k+1}|x₁, x₂, …, x_k) ⋯ q₁(dx₂|x₁) p(dx₁)
= ∫_{X₁⋯X_k} ∫_{X_{k+1}} f⁺(x₁, …, x_{k+1}) q_k(dx_{k+1}|x₁, x₂, …, x_k) dr_k,

and likewise

∫_{Y_{k+1}} f⁻ dr_{k+1} = ∫_{X₁⋯X_k} ∫_{X_{k+1}} f⁻(x₁, …, x_{k+1}) q_k(dx_{k+1}|x₁, x₂, …, x_k) dr_k.

Assume for specificity that ∫_{Y_{k+1}} f⁻ dr_{k+1} < ∞. Then the functions

∫_{X_{k+1}} f⁺(x₁, …, x_{k+1}) q_k(dx_{k+1}|x₁, x₂, …, x_k)    and    −∫_{X_{k+1}} f⁻(x₁, …, x_{k+1}) q_k(dx_{k+1}|x₁, x₂, …, x_k)

satisfy condition (b2) of Lemma 7.11, so as before

∫_{Y_{k+1}} f dr_{k+1} = ∫_{X₁⋯X_k} ∫_{X_{k+1}} f(x₁, …, x_{k+1}) q_k(dx_{k+1}|x₁, x₂, …, x_k) dr_k.    (51)

Since

[∫_{X_{k+1}} f(x₁, …, x_{k+1}) q_k(dx_{k+1}|x₁, x₂, …, x_k)]⁻ ≤ ∫_{X_{k+1}} f⁻(x₁, …, x_{k+1}) q_k(dx_{k+1}|x₁, x₂, …, x_k),

we can apply the induction hypothesis to the right-hand side of (51) to conclude that (47) holds in the generality stated in the proposition. To establish the existence of a unique probability measure r ∈ P(Y) whose marginal on Y_n is r_n, n = 2, 3, …, we note that the measures r_n are consistent, i.e., if m ≥ n, then the marginal of r_m on Y_n is r_n. If each X_k is complete, the Kolmogorov extension theorem (see, e.g., Ash [A1, p. 191]) guarantees the existence of a unique r ∈ P(Y) whose marginal on each Y_n is r_n. If X_k is not complete, it can be homeomorphically embedded as a Borel subset in a complete separable metric space X̂_k. As in Proposition 7.13, each Y_n is homeomorphic to a Borel subset of the complete separable metric space Ŷ_n = X̂₁X̂₂⋯X̂_n, and Y is homeomorphic to a Borel subset of the complete separable metric space Ŷ = X̂₁X̂₂⋯. Each r_n ∈ P(Y_n) can be identified with some
r̂_n ∈ P(Ŷ_n) in the manner of Lemma 7.10, and, invoking the Kolmogorov extension theorem, we establish the existence of a unique r̂ ∈ P(Ŷ) whose marginal on each Ŷ_n is r̂_n. It is straightforward to show that r̂ assigns probability one to the image of Y in Ŷ, so r̂ corresponds to some r ∈ P(Y) whose marginal on each Y_n is r_n. The uniqueness of r̂ implies the uniqueness of r. Q.E.D.
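In computation, the measure r of Proposition 7.28 is usually accessed by sampling: draw x₁ from p, then x₂ from q₁(dx₂|x₁), and so on, which is exactly the iterated integral in (47). The sketch below is an illustration only; the Gaussian kernels, the horizon, and the function f are arbitrary choices made here. It estimates ∫f dr₃ by Monte Carlo and compares it with the exact value available for this particular linear-Gaussian example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_path(n_steps):
    """Ancestral sampling from r: x1 ~ p, then x_{k+1} ~ q_k(.|x_1,...,x_k)."""
    x = [rng.normal(0.0, 1.0)]                 # p = N(0, 1)
    for _ in range(n_steps - 1):
        x.append(rng.normal(x[-1], 1.0))       # q_k(dx_{k+1}|y_k) = N(x_k, 1), an assumption
    return x

f = lambda x1, x2, x3: x3 ** 2                 # a Borel-measurable f on Y_3

estimate = np.mean([f(*sample_path(3)) for _ in range(200_000)])
print(estimate)    # Monte Carlo value of the iterated integral (47)
print(3.0)         # exact value: x3 = x1 + w1 + w2 has variance 3 in this example
```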
In the course of proving Proposition 7.28, we have also proved the following.

Proposition 7.29 Let X and Y be Borel spaces and q(dy|x) a Borel-measurable stochastic kernel on Y given X. If f: XY → R* is Borel-measurable, then the function λ: X → R* defined by

λ(x) = ∫f(x, y) q(dy|x)    (52)

is Borel-measurable.

Corollary 7.29.1 Let X be a Borel space and let f: X → R* be Borel-measurable. Then the function θ_f: P(X) → R* defined by

θ_f(p) = ∫f dp

is Borel-measurable.

Proof Define a Borel-measurable stochastic kernel on X given P(X) by q(dx|p) = p(dx). Define f̄: P(X)X → R* by f̄(p, x) = f(x). Then

θ_f(p) = ∫f(x) p(dx) = ∫f̄(p, x) q(dx|p)

is Borel-measurable by Proposition 7.29. Q.E.D.
If f ∈ C(XY) and q(dy|x) is continuous, then the mapping λ of (52) is also continuous. We prove this with the aid of the following lemma.

Lemma 7.12 Let X and Y be separable metrizable spaces. Then the mapping σ: P(X)P(Y) → P(XY) defined by

σ(p, q) = pq,

where pq is the product of the measures p and q, is continuous.

Proof We use Urysohn's theorem (Proposition 7.2) to homeomorphically embed X and Y into the Hilbert cube ℋ, and, for simplicity of notation, we treat X and Y as subsets of ℋ. Let d be a metric on ℋℋ consistent with its topology. If g ∈ U_d(XY), then Lemma 7.3 implies that g can be extended to a function ḡ ∈ C(ℋℋ). The set of finite linear combinations of the form Σ_{j=1}^k f̄_j(x)h̄_j(y), where f̄_j and h̄_j range over C(ℋ) and k ranges over the positive integers, is an algebra which separates points in ℋℋ, so given ε > 0, the Stone-Weierstrass theorem implies that such a linear combination can be found satisfying ‖Σ_{j=1}^k f̄_jh̄_j − ḡ‖ < ε. If {p_n} is a sequence in P(X) converging to p ∈ P(X), {q_n} a sequence in P(Y) converging to q ∈ P(Y), and f_j and h_j the restrictions of f̄_j and h̄_j to X and Y, respectively, then

lim sup_{n→∞} |∫_{XY} g d(p_nq_n) − ∫_{XY} g d(pq)|
≤ lim sup_{n→∞} |∫_{XY} (g − Σ_{j=1}^k f_jh_j) d(p_nq_n)|
+ lim sup_{n→∞} Σ_{j=1}^k |∫_X f_j dp_n ∫_Y h_j dq_n − ∫_X f_j dp ∫_Y h_j dq|
+ |∫_{XY} (Σ_{j=1}^k f_jh_j − g) d(pq)|
≤ 2ε.

The continuity of σ follows from the equivalence of (a) and (c) of Proposition 7.21. Q.E.D.

Proposition 7.30 Let X and Y be separable metrizable spaces and let q(dy|x) be a continuous stochastic kernel on Y given X. If f ∈ C(XY), then the function λ: X → R defined by

λ(x) = ∫f(x, y) q(dy|x)

is continuous.

Proof The mapping ν: X → P(XY) defined by ν(x) = p_x q(dy|x) is continuous by Corollary 7.21.1 and Lemma 7.12. We have λ(x) = (θ_f ∘ ν)(x), where θ_f: P(XY) → R is defined by θ_f(r) = ∫f dr. By Proposition 7.21, θ_f is continuous. Hence λ is continuous. Q.E.D.
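A quick numerical sanity check of Proposition 7.30 (purely illustrative; the Gaussian kernel, the integrand, and the sampling-based quadrature are choices made here, not the book's): with the continuous kernel q(dy|x) = N(x, 1) and f(x, y) = cos(x + y), the map λ(x) = ∫f(x, y) q(dy|x) varies continuously in x, and in this special case it even has the closed form e^{-1/2} cos(2x).

```python
import numpy as np

rng = np.random.default_rng(1)

def lam(x, n_samples=400_000):
    """lambda(x) = integral of f(x, y) q(dy|x) with q(dy|x) = N(x, 1), f(x, y) = cos(x + y)."""
    y = rng.normal(loc=x, scale=1.0, size=n_samples)
    return float(np.mean(np.cos(x + y)))

for x in [0.0, 0.1, 0.5, 1.0]:
    print(x, round(lam(x), 3), round(float(np.exp(-0.5) * np.cos(2 * x)), 3))
# The two columns agree up to Monte Carlo error, and lambda varies continuously in x.
```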
7.5 Semicontinuous Functions and Borel-Measurable Selection

In the dynamic programming algorithm given by (17) and (18) of Chapter 1, three operations are performed repetitively. First, there is the evaluation of a conditional expectation. Second, an extended real-valued function of two variables (state and control) is infimized over one of these variables (control). Finally, if an optimal or nearly optimal policy is to be constructed, a "selector" which maps each state to a control which achieves or nearly achieves the infimum in the second step must be chosen. In this section, we give results which will enable us to show that, under certain conditions, the extended real-valued functions involved are semicontinuous and the selectors can be chosen to be Borel-measurable. The results are applied to dynamic programming in Propositions 8.6-8.7 and Corollaries 9.17.2-9.17.3.

Definition 7.13 Let X be a metrizable space and f an extended real-valued function on X. If {x ∈ X | f(x) ≤ c} is closed for every c ∈ R, f is said to be lower semicontinuous. If {x ∈ X | f(x) ≥ c} is closed for every c ∈ R, f is said to be upper semicontinuous.

Note that f is lower semicontinuous if and only if −f is upper semicontinuous. We will use this duality in the proofs of the following propositions to assert facts about upper semicontinuous functions given analogous facts about lower semicontinuous functions. Note also that if f is lower semicontinuous, the sets {x ∈ X | f(x) = −∞} and {x ∈ X | f(x) ≤ ∞} are closed, since the former is equal to ∩_{n=1}^∞ {x ∈ X | f(x) ≤ −n} and the latter is X. There is a similar result for upper semicontinuous functions. The following lemma provides an alternative characterization of lower and upper semicontinuous functions.

Lemma 7.13 Let X be a metrizable space and f: X → R*.
(a) The function f is lower semicontinuous if and only if for each sequence {x_n} ⊂ X converging to x ∈ X,

lim inf_{n→∞} f(x_n) ≥ f(x).    (53)

(b) The function f is upper semicontinuous if and only if for each sequence {x_n} ⊂ X converging to x ∈ X,

lim sup_{n→∞} f(x_n) ≤ f(x).    (54)

Proof Suppose f is lower semicontinuous and x_n → x. We can extract a subsequence {x_{n_k}} such that x_{n_k} → x as k → ∞ and

lim_{k→∞} f(x_{n_k}) = lim inf_{n→∞} f(x_n).

Given ε > 0, define

θ(ε) = lim inf_{n→∞} f(x_n) + ε    if lim inf_{n→∞} f(x_n) > −∞,
θ(ε) = −1/ε                        otherwise.

There exists a positive integer k(ε) such that f(x_{n_k}) ≤ θ(ε) for all k ≥ k(ε). The set {y ∈ X | f(y) ≤ θ(ε)} is closed, and hence it contains x. Inequality (53) follows. Conversely, if (53) holds and for some c ∈ R, {x_n} is a sequence in {y ∈ X | f(y) ≤ c} converging to x, then f(x) ≤ c, so f is lower semicontinuous. Part (b) of the lemma follows from part (a) by the duality mentioned earlier. Q.E.D.

If f and g are lower semicontinuous and bounded below on X and if x_n → x, then

lim inf_{n→∞} [f(x_n) + g(x_n)] ≥ lim inf_{n→∞} f(x_n) + lim inf_{n→∞} g(x_n) ≥ f(x) + g(x),

so f + g is lower semicontinuous. If f is lower semicontinuous and α > 0, then αf is lower semicontinuous as well. Upper semicontinuous functions have similar properties. It is clear from (53) and (54) that f: X → R* is continuous if and only if it is both lower and upper semicontinuous. We can often infer properties of semicontinuous functions from properties of continuous functions by means of the next lemma.

Lemma 7.14 Let X be a metrizable space and f: X → R*.
(a) The function f is lower semicontinuous and bounded below if and only if there exists a sequence {f_n} ⊂ C(X) such that f_n ↑ f.
(b) The function f is upper semicontinuous and bounded above if and only if there exists a sequence {f_n} ⊂ C(X) such that f_n ↓ f.

Proof We prove only part (a) of the lemma and appeal to duality for part (b). Assume f is lower semicontinuous and bounded below by b ∈ R, and let d be a metric on X consistent with its topology. We may assume without loss of generality that for some x₀ ∈ X, f(x₀) < ∞, since the result is trivial otherwise. (Take f_n(x) = n for every x ∈ X.) As in Lemma 7.7, define

g_n(x) = inf_{y∈X} [f(y) + n d(x, y)].

Exactly as in the proof of Lemma 7.7, we show that {g_n} is an increasing sequence of continuous functions bounded below by b and above by f. The characterization (53) of lower semicontinuous functions can be used in place of continuity to prove g_n ↑ f. In particular, (28) becomes

f(x) ≤ lim inf_{n→∞} f(y_n) ≤ lim_{n→∞} g_n(x) + ε.

Now define f_n = min{n, g_n}. Then each f_n is continuous and bounded and f_n ↑ f. This concludes the proof of the direct part of the lemma. For the converse part, suppose {f_n} ⊂ C(X) and f_n ↑ f. For c ∈ R,

{x ∈ X | f(x) ≤ c} = ∩_{n=1}^∞ {x ∈ X | f_n(x) ≤ c}

is closed. Q.E.D.
The following proposition shows that the semicontinuity of a function of two variables is preserved when one of the variables is integrated out via a continuous stochastic kernel.

Proposition 7.31 Let X and Y be separable metrizable spaces, let q(dy|x) be a continuous stochastic kernel on Y given X, and let f: XY → R* be Borel-measurable. Define

λ(x) = ∫f(x, y) q(dy|x).

(a) If f is lower semicontinuous and bounded below, then λ is lower semicontinuous and bounded below.
(b) If f is upper semicontinuous and bounded above, then λ is upper semicontinuous and bounded above.

Proof We prove part (a) of the proposition and appeal to duality for part (b). If f: XY → R* is lower semicontinuous and bounded below, then by Lemma 7.14 there exists a sequence {f_n} ⊂ C(XY) such that f_n ↑ f. Define λ_n(x) = ∫f_n(x, y) q(dy|x). By Proposition 7.30, each λ_n is continuous, and by the monotone convergence theorem λ_n ↑ λ. By Lemma 7.14, λ is lower semicontinuous. Q.E.D.

An important operation in the execution of the dynamic programming algorithm is the infimization over one of the variables of a bivariate function. In the context of semicontinuity, we have the following result related to this operation.

Proposition 7.32 Let X and Y be metrizable spaces and let f: XY → R* be given. Define

f*(x) = inf_{y∈Y} f(x, y).    (55)

(a) If f is lower semicontinuous and Y is compact, then f* is lower semicontinuous and for every x ∈ X the infimum in (55) is attained by some y ∈ Y.
(b) If f is upper semicontinuous, then f* is upper semicontinuous.

Proof (a) Fix x and let {y_n} ⊂ Y be such that f(x, y_n) ↓ f*(x). Then {y_n} accumulates at some y₀ ∈ Y, and part (a) of Lemma 7.13 implies that f(x, y₀) = f*(x). To show that f* is lower semicontinuous, let {x_n} ⊂ X be
such that Xn ---+ xo. Choose a sequence {Yn} c Y such that n
= 1,2, ....
There is a subsequence of {(xn,Yn)}, call it {(Xnk'YII.)}' such that lim infll _ 00 f(x n, Yn) = linu., 00 f(x nk, YnJ. The sequence {YnJ accumulates at some YoE Y, and, by Lemma 7.13(a), liminff*(xn) = liminff(xn,Yn) = lim f(xnk,Ynk) 2 f(xo, Yo) 2 f*(xo)' n-+oo
n-» o:
k-r co
so f* is lower semicontinuous. (b) Let d j be a metric on X and d 2 a metric on Y consistent with their topologies. If G c X Y is open and Xo Eprojy] G), then there is some Yo E Y for which (xo, Yo)E G, and there is some B > 0 such that N.(xo,Yo) = {(x,Y)EXYldj(x,x o) < B,d2(y,yo) < B} c G.
Then xoEproh[N.(xo,Yo)] = {xEXldj(x,x o) < B} c projx(G),
so projx(G) is open in X. For cER, {xEXlf*(x) < c} = projx({(x,Y)EXYlf(x,y) < c}).
The upper semicontinuity of f implies that {(x, Y)!f(x, y) < c} is open, so {xEXlf*(x) < c} is open and f* is upper semicontinuous. Q.E.D. Another important operation in the dynamic programming algorithm is the choice of a measurable "selector" which assigns to each x E X aYE Y which attains or nearly attains the infimum in (55). We first discuss Borelmeasurable selection in case (a) of Proposition 7.32. For this we will need the Hausdorff metric and the corresponding topology on the set 2Y of closed subsets of a compact metric space Y (Appendix C). The space 2 Y under this topology is compact (Proposition C.2) and, therefore, complete and separable. Several preliminary lemmas are required. Lemma 7.15 Let Y be a compact metrizable space and let g: Y be lower semicontinuous. Define g*: 2Y ---+ R* by
:A
m in g(y) g*(A)
=
{
if A;6
---+
R*
0,
if A = 0.
(56)
Then g* is lower semicontinuous. Proof Since the empty set is an isolated point in 2Y , we need only prove that g* is lower semicontinuous on 2Y - {0}. We have already shown [Proposition 7.32(a)] that, given a nonempty set A E 2 Y , there exists yEA
7.
150
BOREL SPACES AND THEIR PROBABILITY MEASURES
such that g*(A) = g( y). Let {An} C 2 Y be a sequence of nonempty sets with limit A E 2Y , and let Yn E An be such that g*(A n) = g(Yn), n = 1,2, .... Choose a subsequence {Ynk} such that lim g(Ynk) = lim inf g(Yn) = lim inf g*(A n).
k-r so
n-e cc
n-s co
The subsequence {YnJ accumulates at some Yo E Y, and, by Lemma 7.13(a), g(yo)::; lim g(YnJ = liminfg*(A n ) · k-oo
n-r co
From (14) of Appendix C and from Proposition C.3, we have (in the notation of Appendix C) YOE lim An = A,
so g*(A) ::; g(yo) ::; lim inf g*(A n). n-+ 00
The result follows from Lemma 7.13(a).
Q.E.D.
Lemma 7.16 Let Y be a compact metrizable space and let g: Y be lower semicontinuous. Define G:2 Y R* --> 2Y by
G(A,c) = An {YE Ylg(y)::; c}.
-->
R*
(57)
Then G is Borel-measurable. Proof We show that G is upper semicontinuous (K) (Definition C.2) and apply Proposition CA. Let {(An,c n)} C 2YR* be a sequence with limit (A,c).lflim n-+ oo G(An,c n) = 0, then
(58) Otherwise, choose Y E lim n -+ 00 G(A n, cn)' There is a sequence nl < n z < ... of positive integers and a sequence Ynk E G(A nk, CII)' k = 1,2, ... , such that Ynk --> y. By definition, Ynk E A nk for every k, so Y E lim n -+ a: An = A. We also have g(Ynk)::; cnk' k = 1,2, ... , and using the lower semicontinuity of g. we obtain g(y) ::; lim inf g(YnJ ::; lim cnk = k-v co k -r cc»
C.
Therefore Y E G(A, c), (58) holds, and G is upper semicontinuous (K). Q.E.D.
Lemma 7.17 Let Y be a compact metrizable space and let g: Y -> R* be lower semicontinuous. Let g*:2 Y -> R* be defined by (56) and define G*:2 Y -> 2Y by G*(A)
= An {YE Ylg(y)
~ g*(A)}.
(59)
Then G* is Borel-measurable. Proof Let G be the Borel-measurable function given by (57). Lemma 7.15 implies g* is Borel-measurable. A comparison of (57) and (59) shows that G*(A) = G[A, g*(A)].
It follows that G* is also Borel-measurable.
Q.E.D.
Lemma 7.18 Let Y be a compact metrizable space. There is a Borelmeasurable function a: 2Y - {0} -> Y such that a(A) E A for every AE2 Y
-
{0}.
Proof Let {gnln = 1,2, ...} be a subset of C(Y) which separates points in Y (for example, the one constructed in the proof of Proposition 7.7). As in Lemma 7.15, define g:: 2 Y -> R* by g:(A)
=
min gn(Y) {
if A"# 0,
yEA 00
if A = 0,
and, as in Lemma 7.17, define G:: 2 Y -> 2 Y by G:(A)
Let H n:2 Y
->
= A n ryE Y!gn(Y):S; g:(A)} = {YEAlgn(Y) = g:(A)}.
2 Y be defined recursively by Ho(A) = A, Hn(A)
= G:[Hn-1(A)],
n = 1,2, ....
Then for A "# 0, each Hn(A) is nonempty and compact, and A
= Ho(A):::::J H1(A):::::J H 2(A):::::J'" .
Therefore, n:.n=oH.(A)"# 0. If y,y'En:.n=oHn(A), then for n = 1,2, ... , we have gn(Y)
= g:[Hn-1(A)] = gn(y')·
Since {gnln = 1,2, ...} separates points in Y, we have Y = y', and n:.n=o Hn(A) must consist of a single point, which we denote by a(A).
We show that for A =f 0 lim Hn(A) = n-e cc
00
n Hn(A)
=
{cr(A)}.
(60)
n=O
Since the sequence {H.(A)} is nonincreasing, we have from (14) and (15) of Appendix C that 00 (61) HIl(A) c lim Hn(A) c lim Hn(A).
n
n-s co
n=O
n-e cc
If Y E limn.... 00 Hn(A), then there exist positive integers 111 < 112 < ... and a sequence YllkEHnk(A), k= 1,2, ... , such that Yllk-+Y' For fixed k, YnjEH nk for all j ~ k, and since Hnk(A) is closed, we have yEHIlJA). Therefore, yE n~=o HIl(A) and 00 (62) lim Hn(A) c Hn(A). n-(()
n
n:=O
From relations (61) and (62), we obtain (60). Since G~ and H; are Borel-measurable for every 11, the mapping v:2 Y {0} -+ 2Y defined by v(A) = {cr(A)} is Borel-measurable. It is easily seen that the mapping r: Y -+ 2Y defined by r(y) = {y} is a homeomorphism. Since Y is compact, r( Y) is compact, thus closed in 2Y , and T-1: r( Y) -+ Y is Borel-measurable. Since a = r -1 -v, it follows that (J is Borel-measurable. Q.E.D. Lemma 7.19 Let X be a metrizable space, Ya compact metrizable space, and let f:XY -+ R* be lower semicontinuous. Define F:XR* -+ 2Y by
F(x,e) = rYE Ylf(x,y) ~ c}.
(63)
Then F is Borel-measurable.
Proof The proof is very similar to that of Lemma 7.16. We show that F is upper semicontinuous (K) and apply Proposition CA. Let (x,; ell) -+ (x, c) in X R* and let Y be an element of limn.... 00 F(x n , ell), provided this set is nonempty. There exist positive integers 111 < 112 < ... and Yllk E F(xnk, ellJ such that Ynk -+ y. Since f(x nk, YnJ ~ enk and f is lower semicontinuous, we conclude that f(x, y) ~ e, so that limn.... 00 F(x ll, en) c F(x, c). The result follows. Q.E.D. Lemma 7.20 Let X be a metrizable space, Ya compact metrizable space, and letf:XY -+ R* be lower semicontinuous. Letf*:X -+ R* be given by f*(x) = minYEf f(x, y), and define F*: X -+ 2Y by
F*(x) = {YEYlf(x,Y)~f*(x)}. Then F* is Borel-measurable.
(64)
Proof Let F be the Borel-measurable function defined by (63). Proposition 7.32(a) implies thatf* is Borel-measurable. From (63)and (64)we have F*(x) = F[x,f*(x)]. It follows that F* is also Borel-measurable.
Q.E.D.
We are now ready to prove the selection theorem for lower semicontinuous functions. Proposition 7.33 Let X be a metrizable space, Y a compact metrizable space, D a closed subset of X Y, and let f: D ---+ R* be lower semicontinuous. Letf*:projx(D) ---+ R*begivenby (65)
f*(x) = min f(x, y). YED x
Then projx(D) is closed in X, f* is lower semicontinuous, and there exists a Borel-measurable function cp:proh(D) ---+ Y such that Gr(cp) c D and
f[x, cp(x)]
=
f*(x)
(66)
VXEproh(D).
Proof We first prove the result for the case where D = X Y. As in Lemma 7.18, let a:2 Y - {0} ---+ Y be a Borel-measurable function satisfying a(A)EA for every AE2 Y - {0}. As in Lemma 7.20, let F*:X ---+ 2 Y be the Borelmeasurable function defined by F*(x) = {YE Ylf(x,y) = f*(x)}. Proposition 7.32(a) implies that f* is lower semicontinuous and F*(x) i= 0 for every x EX. The composition cp = a F * satisfies (66). Suppose now that D is not necessarily XY. To see that projx(D) is closed, note that the function g = - XD is lower semicontinuous and 0
proh(D)
=
{XEX!g*(X)
s -l},
where g*(x) = min YE Y g(x, y). By the special case of the proposition already proved, g* is lower semicontinuous, proh(D) is closed, and there is a Borelmeasurable function CPl:X ---+ Y such that g[X,CP1(X)] = g*(x) for every XEX or, equivalently, (X,CP1(X))ED for every xEproh(D). Define now the lower semicontinuous function ]:XY ---+ R* by
](x,Y) =
{;:,y)
if (x,Y)ED, otherwise.
For all CE R,
{xEproh(D)lf*(x) S c} = {xEX1min](X,Y) YEY
s
c}.
Since min.; Y !(x, y) is lower semicontinuous, it follows that f* is also lower semicontinuous. Let cpz: X ---> Y be a Borel-measurable function satisfying ![X,CP2(X)] = min j'(x. p) YEY
Clearly (x,CPz(x»ED for all x in the Borel set l (X, y) < { x EX lmin yE Y
oo}.
Define cp:proh(D) ---> Y by if rnin j'(x.y) =
00,
YEY
if rnin j'(x.y) <
00.
YEY
The function cp is Borel-measurable and satisfies (66).
Q.E.D.
We turn our attention to selection in the case of an upper semicontinuous function. The analysis is considerably simpler, but in contrast to the "exact selector" of (66) we will obtain only an approximate selector for this case. Lemma 7.21 Let X be a metrizable space, Y a separable metrizable space, and G an open subset of X Y. Then proh(G) is open and there exists a Borel-measurable function cp:projx(G) ---> Y such that Gr(cp) c G. Proof Let {Ynln = 1,2, ...}. be a countable dense subset of Y. For fixed yE Y, the mapping x ---> (x,y) is continuous, so {xEXI(x,y)EG} is open. Let Gn = {XEX!(X,Yn)EG}, and note that projx(G) = U.~)=I G; is open. Define cp:proh(G)---> Yby
YI cp(x)
=
n-I
{ Yn
if x E Gn
-
U Gb
n = 2,3, ....
k= 1
Then cp is Borel-measurable and Gr(cp) c G.
Q.E.D.
Proposition 7.34 Let X be a metrizable space, Ya separable metrizable space, D an open subset of X Y, and let f: D ---> R* be upper semicontinuous. Let f*: proh(D) ---> R* be given by f*(x)
= inf
f(x, y).
(67)
YED x
Then projx(D) is open in X, f* is upper semicontinuous, and for every s > 0, there exists a Borel-measurable function cp,:proh(D) ---> Y such that
Gr(cpc) c D and for all x E proh(D) f *(X) + 8 f[ x, cpJx)]:<:::;; { -1/8
if f*(x) > if f*(x) = _
00, 00.
(68)
Proof The set proh(D) is open in X by Lemma 7.21. To show that f* is upper semicontinuous, define an upper semicontinuous function]: X Y ---+ R* by ](x,y)
=
{~X'Y)
if (x,Y)ED, otherwise.
For cER, we have
{xEprojx(D)lf*(x) < c} = {XEXlinf J(x,y) < ye Y
c},
and this set is open by Proposition 7.32(b). Let 8 > 0 be given. For k = 0, ± 1, ± 2, ... , define (see Fig. 7.1)
A(k) = {(x,Y)EDlf(x,y) < k8}, B( -
B(k) = {xEprojx(D)I(k - 1)8:<:::;; f*(x):< kG}, 00) = {xEproh(D)lf*(x) = - co},
B(oo) = {xEproh(D)lf*(x) = oo]. The sets A(k), k = 0, ± 1, ± 2, ... , are open, while the sets B(k), k = 0, ± 1, ± 2, ... , B( - 00), and B( 00) are Borel-measurable, By Lemma 7.21,
FIGURE 7.1 (illustration of the sets A(k) and B(k) used in constructing the approximate selector)
there exists for each k = 0, ± 1, ± 2, ... a Borel-measurable
if xEB(k), k if x E B( (0), if xEB( - (0).
= ip(x)
{
= 0, ±1, ±2, ... ,
Since B(k) c projy]' A(k)] and B( - (0) c projy]' A(k)] for all k, this definition is possible. It is clear that
If xEB(oo), then f(x,y) xEB( - (0), we have
=
for all YED x and f[x,
00
f[ x,
00
= f*(x). If
= f[ x,
Hence
7.6
+ e.
Q.E.D.
7.6 Analytic Sets
The dynamic programming algorithm is centered around infimization of functions, and this is intimately connected with projections of sets. More specifically, iff:XY ~ R* is given andf*:X ~ R* is defined by f*(x)
= inf [tx, y), YEY
then for each c E R {XEXlf*(x) < c}
= projx({(x,y)EXYlf(x,y) < c}).
If f is a Borel-measurable function, then {(x, y)lf(x, y) < c} is a Borelmeasurable set. Unfortunately, the projection of a Borel-measurable set need not be Borel-measurable. In Borel spaces, however, the projection of a Borel set is an analytic set. This section is devoted to development of properties of analytic sets. 7.6.1
Equivalent Definitions of Analytic Sets
There are a number of ways to define the class of analytic sets in a Borel space X. One possibility is to define them as the projections on X of the Borel subsets of X Y, where Y is some uncountable Borel space. Another
possibility is to define them as the images of the Baire null space JV under continuous functions from JV into X. Still another possibility is to define them as all sets of the form
u n S(0'1'0'2"" 00
(a"a2 •.. .)E.#· n= 1
,an),
where JV is the set of all sequences of positive integers (the Baire null space) and the sets S(O'I' 0'2' ... ,an) are closed in X. All these definitions are equivalent, as we show in Proposition 7.41. We will take the third definition as our starting point, since this is the most convenient analytically. We first formalize the set operation just given in terms ofthe notion ofa Suslin scheme in a paved space. Definition 7.14 Let X be a set. A paving [JJ> of X is a nonempty collection of subsets of X. The pair (X, [JJ» is called a paved space. If (X, [JJ» is a paved space, we denote by O'([JJ» the a-algebra generated by we denote by [JJ>o the collection of all intersections of countably many members of [JJ>, and we denote by [JJ>a the collection of all unions of countably many members of [JJ>. Recall that N is the set of positive integers, JV is the set of all infinite sequences of positive integers, and L is the set of all finite sequences of positive integers. [JJ>,
Definition 7.15 Let (X, [JJ» be a paved space. A Sus lin scheme for mapping from L into [JJ>. The nucleus of a Suslin scheme S: L --+ [JJ> is N(S)
=
a
00
U
(<1I,a2,"
n
[J}l is
.)E.A'
n= 1
S(0'1'0'2"",O'n)'
The set of all nuclei of Suslin schemes for a paving
[JJ>
(69)
will be denoted by 9'([JJ».
In order to simplify notation, we write, for s = (0'1,0'2" .. ,O'n) ELand
z
=
((1,(2," .)EJV,
With this notation, (69) can also be written as N(S)
=
U n S(s).
ZE./V
s
We will use both expressions interchangeably. Note that the union in (69) is uncountable, so if [JJ> is a a-algebra and S is a Suslin scheme for [JJ>, N(S) may be outside [JJ>. Several properties of 9'(.gt» are given below.
7.
158 Proposition 7.35 f!lJ c!!t. Then (a)
(b) (c) (d)
(e)
BOREL SPACES AND THEIR PROBABILITY MEASURES
Let X be a space with pavings f!lJ and !!t such that
9'(f!lJ) c 9'(.2), 9'(f!lJ)J = 9'( f!lJ), 9'( f!lJ)" = 9'( f!lJ), f!lJ c 9'(f!lJ), 9'(f!lJ) = 9'[ 9'(f!lJ)].
Proof (a) Obvious. (b) It is clear that 9'(f!lJ),j:::J 9'(f!lJ). Now choose n:~ 1 N(Sk)E9'(gp),j. where Sk' is a Suslin scheme for f!lJ, k = 1,2, .... It suffices to construct a Suslin scheme S for f!lJ such that
n 00
N(S)
=
(70)
N(Sk)'
k=l
For k = 1,2, ... , let ilk = {(2j - 1)2k - 1 U= 1,2, ...}. Then ill' il 2, ... is a partition of N into infinitely many infinite sets. For each positive integer k, let qJk: JV ~ JV be defined by
i.e., qJk picks out the components of(( 1, (2' ...) with indices in ilk' We want to construct a Suslin scheme S for which
n S(s) = n n 00
s
Sk(S)
VZEJV.
(71)
k=ls<rpdz)
We may rewrite (71) as
n 00
S((1,(2,··· ,(n)
n=1
nn 00
=
00
k= 1 j= 1
Sk((2k-l,(3'2k-l""'((2j-1)2k-,)
Given ((1,(2"",(n)EL, we have n=(2j-1)2k positive integers j and k. Define S((1,(2,'" ,(n)
=
1
for exactly one pair of
Sk((2k-I,(3-2k-t, ... ,(2j-1)2k-J).
(73)
This defines a Suslin scheme S for which (72),and hence (71),is easily verified. We now use (71) to prove (70). Choose XE
N(S)
=
U nS(s).
ze%s
7.6
159
ANALYTIC SETS
For some ZoEJV, we have XE
Thus, for every k,
s
n
XE
00
n
=
S(s)
k
Sk(S)
E
>
1
S<
n
U
C
Sk(S),
Sk(S) = N(Sk)'
zeJVs
S<tpk(ZO)
1N(Sk) and
It follows that x nk~
n n
n N(Sk)' 00
N(S) c
(74)
k~l
If we are given XEnk~lN(Sk)' then XEUZE.VnS
nS
XE
n S(s)
U
c
S
n S(s) = N(S)
ze.N'·s
and
n N(Sk)' 00
N(S)::J
(75)
k~l
Relation (70) follows from (74) and (75). (c) It is clear that 9"(,o/J)" ::J 9"(gp). Choose Uk~ 1 N(Sk) E 9"(gp)", where S; is a Suslin scheme for ,o/J, k = 1,2, .... It suffices to construct a Suslin scheme S for gp for which 00
N(S)
=
U N(Sk)'
(76)
k~l
Given ((l,(Z,'" ,(n)EL, we have (1 = (2j - 1)2kpositive integers j and k. Define S((l,(Z,'" ,(n) = S((2j- 1)2k-
1,(Z""
1
for exactly one pair of
,(n) = Sk(j,(Z"" , (n)'
This defines a Suslin scheme S for which
n S((2j 00
k1)2
n Sk(j,(Z"" 00
1,(Z""
,(n) =
n~l
,(n)
n~l
VkEN,V(j,(z, ...)EJV. (77)
ns
= UZE.,v S(s). For some S((1>(z"" , (n), and choosingj and k so
Returning to (76), we choose xEN(S) ((b(Z," .)EJV, we have XEn~l
160
7.
that (1 = (2j - 1)2k -
1
,
n
BOREL SPACES AND THEIR PROBABILITY MEASURES
we have, from (77),
00
XE
00
SkU, (2'
...
U N(Sk)'
,(n) c N(Sk) c
n=l
k=l
so 00
N(S)
c
U N(Sd.
(78)
k= 1
If, on the other hand, we choose 00
XE
00
U N(Sk) = U U
k=l
then for some ke Nand Equation (77) implies
n 00
XE
n
Sk(S),
k=lzEKs
U, (2' .. . )EJII, we have
XE
n~
1 SkU, (2' ... '(n)'
S((2j - 1)2k- 1' ( 2" " ,(n) c N(S),
n= 1
so 00
N(S) ~
U N(Sk)'
(79)
k=l
Relation (76) follows from (78) and (79). (d) For PEi~, define S(s) = P for every SEL. Then N(S) = P. (e) The proof of this takes us somewhat far afield, so is given as Proposition B.2 of Appendix B. Q.E.D. It is not in general true that Y(eP) is closed under complementation, so Y(eP) is generally not a a-algebra. In order for Y(eP) to contain aU?P), we need one additional assumption. Corollary7.35.1 Let (X,eP) be a paved space and assume that the complement of each set in eP is in Y(eP). Then a(eP) c Y(,'?lJ).
Proof The smallest algebra containing eP consists of the finite intersections of finite unions of sets in eP and complements of sets in ,qJ. By Proposition 7.35, these sets are contained in Y[Y[Y(eP)JJ = Y(eP). Since Y(eP) is a monotone class, it contains the a-algebra generated by eP as well (Ash [AI, p. 19J). Q.E.D. Definition 7.16 Let X be a Borel space. Denote by !F x the collection of closed subsets of X. The analytic subsets of X are the members of Y(!F x). Corollary 7.35.2 Let X be a Borel space. The countable intersections and unions of analytic subsets of X are analytic.
7.6
161
ANALYTIC SETS
Proof
This follows from Proposition 7.35(b) and (c).
Q.E.D.
Proposition 7.36 Let X be a Borel space. Then every Borel subset of X is analytic. Indeed, the class of analytic sets Y(ff x) is equal to Y(&l x ). Proof Every open subset of X is an F" (Lemma 7.2), so every open set is analytic. Corollary 7.35.1 implies &Ix c Y(ff x). Proposition 7.35(a) and (e) implies
Q.E.D. If the Borel space X is countable, then every subset of X is both analytic and Borel-measurable. If X is uncountable, however, the class of analytic subsets of X is strictly larger than &Ix. This is shown in Appendix B, where we prove the existence of an analytic set whose complement is not analytic. Note that an immediate consequence of Proposition 7.36 is that if Y is a Borel subset of the Borel space X, then the analytic subsets of Yare the analytic subsets of X contained in Y. A generalization of this fact is the following.
Corollary 7.36.1 Let X and Y be Borel spaces and qJ:X -+ Y a Borel isomorphism. Then A c X is analytic if and only if qJ(A) c Y is analytic. Proof If qJ: X -+ Y is a Borel isomorphism and A c X is analytic, then A = N(S), where S is a Suslin scheme for ff x- It is easily seen that qJ(A) = N(qJ S), where qJ a S is the Suslin scheme for &ly defined by (qJ a S)(s) = qJ[S(s)], so qJ(A) is analytic by Proposition 7.36. If qJ(A) c Y is analytic, 0
A c X is analytic by a similar argument.
Q.E.D.
We proceed to the development of several equivalent characterizations of analytic sets. The general definition of a Suslin scheme is unrestrictive with respect to the form of the mapping S: L -+ &. In the event that X is a separable metric space and & = ff x, one can assume without loss of generality that S has more structure. Definition 7.17 Let (X, &) be a paved space and Sa Suslin scheme for f!jJ. The Suslin scheme S is regular if for each nEN and (0'1,0'2"" ,O'n+1)EL, we have
Lemma 7.22 Let (X,d) be a separable metric space and S a Suslin scheme for ff x- Then there exists a regular Suslin scheme R for ff x such that N(R) = N(S) and, for every z = ('1"2'" .)Efl,
7.
162
BOREL SPACES AND THEIR PROBABILITY MEASURES
Proof By the Lindelof property, for each positive integer k, X can be covered by a countable collection of open spheres of the form Bkj = {xEXld(x,x k ) < l/k},j = 1,2, .... For (~1'(1'~2'(2' ... )Ev¥', define
R(~I) R(ZI, (d R(Zh(I,Z2)
R(Zh(I'~2'(2)
= lJI~l' = R(Zd n S«(I), =
R(~I,(dnlJ2~z'
= R(~I'(I,Z2)n
S(C'(2)'
etc. Thus (81) where Z = (( I' (2" ..). It is clear that R is a regular Suslin scheme for .'Fx and (80) holds. If x E N(R), then there exists (Z I' (I, ~ 2, '2,' ..) E.ff such that XE nS«~I"Io~Z'{2'' ') R(s), so by (81) XE nS«{lob ... )S(s) c N(S), and therefore N(R) c N(S). If xEN(S), then there exists ('1'(2'" .)Efl such that x E ns«", S(s). Since for each positive integer k, the collection {Bkjlj = 1,2, ... } covers X, there exists for each k a positive integer ~k for which x E Bk~k' Then x E nk'= 1 Bk~k and, by (81), x E ns«~ lo'Io~Z.'2"") R(s) c N(R), so N(R) => N(S), It follows that N(R) = N(S). Q.E.D.
'z, ...)
Note that if a regular Suslin scheme R satisfies (80), then for all fl l = {ZEflIDz R(s) -#
the set
Z
in
0}'
ns
= f(fl l ) ,
and this relation provides the basis for an alternative way of Characterizing analytic sets. We have the following lemma. Lemma 7.23 Let (X, d) be a complete separable metric space. If A c X is a nonempty analytic set, then there exist a closed subset fl l of fl and a continuous function f: fl l -> X such that A = f(fl l ). Conversely, if fl l c fl is closed and{: fl l -> X is continuous, then f(JV'd is analytic. Proof Let A = N(R) be nonempty, where R is a regular Suslin scheme for :1i' x satisfying (80). Define
7.6
163
ANALYTIC SETS
Let Z = (( l' (z, . . .) be in u1!. If for each n we have R(( l' (z,· .. , (n) =/= 0, then it is possible to chose X n E R(( I' (z, . . . , (n)' The sequence {x n} is Cauchy by (80), and since (X,d) is complete, {x n } has a limit xEX. But for each n the regularity of R implies {xmlm 2': n} c R(( I, (z, . . . , (n), so X E R(( I' (z,· .. , (II)' Therefore xEns 0, (80) implies that there exists Sn < Zo for which diam R(sn) < e. For k sufficiently large, Zk E {ZE flls n < z}, so f(Zk) E R(sll)' Therefore d(f(Zk)J(ZO») ~ diam R(sll) < e, which shows that f is continuous. For the converse, suppose fl l c fl is closed and I: ut'l --> X is continuous. Define a regular Suslin scheme R for :IF x by R(s)
= f((zEA/'lls < z}),
where R(s) = 0 if {ZE.AlI!S< z} = 0. If ZE.Al I , then f(Z)En.,
so given s > 0, there exists a z; E fl l with (( I, (z,· .. , (n) < z; and d(xJ(zn)) < e. But as n --> 00, Zn must converge to Zo. The closedness of fl l implies Zo E fl l , and the continuity of f implies d(xJ(zo)) ~ e. Since s > is arbiQ.E.D. trary, we have f(zo) = x, x E f(fl d, and N(R) c f(fl d.
°
We have thus characterized analytic sets as the continuous images of closed subsets of fl. We will obtain an even sharper characterization, for which we need the following lemma. Lemma 7.24 If fl l is a nonempty closed subset of fl, then there exists a continuous function g: fl' --> fl such that fl l = g(fl).
Proof Use the Lindelof property to cover fl l with a countable collection of nonempty closed sets {S(( dl( lEN} which satisfy fll::J
S((d,
diamS((I)
~
1, (I
=
1,2, ... ,
where d is a metric on fl consistent with its topology and diam S(( d is given by (9). Cover each S(( I) with a countable collection of nonempty closed sets {S((I,(z)l(zEN} which satisfy S((d::J S((I,(Z),
diamS((I,(z) ~
to
(z
= 1,2, ....
164
7.
BOREL SPACES AND THEIR PROBABILITY MEASURES
Continue in this manner so that, for any «(I'(Z"" '(n-d, (n
= 1,2, ... ,
(82)
00
5«(I,(Z"",(n-d=
U 5«(I,(Z"",(n),
(83)
Sn= 1
5«(I,(Z,··' ,(n-d;:) 5«(I'(Z,'" ,(n-I,(n), diam5«(I,(z, ... ,(n):::; lin, (n = 1,2, ....
(84)
(n=1,2, ... ,
The completeness of JiI and (82)-(85) imply that for each Z E JiI, consists of a single point. Define g(z) to be this point. Then
ns
(85)
5(5)
N(5) = g(JiI) = JiI I '
The continuity of g follows by an argument similar to the one used in the proof of Lemma 7.23. Q.E.D. Proposition 7.37 Let X be a Borel space. A nonempty set A c X is analytic if and only if A = f(JiI) for some continuous function f: JiI -> X. Proof If X is complete, the proposition follows from Lemmas 7.23 and 7.24. If X is not complete, it is still homeomorphic to a Borel subset of a complete separable space, and the result follows from Corollary 7.36.1.
Q.E.D. Proposition 7.37 gives a very useful characterization of nonempty analytic sets in terms of continuous functions and the Baire null space JiI. The Baire null space has a simple description and its topology allows considerable flexibility. We have already shown, for example, that it is homeomorphic to JiI 0' the space of irrationals in [0, 1]. Another important homeomorphism is the following. Lemma 7.25 The space JiI is homeomorphic to any finite or countably infinite product of copies of itself. Proof We prove the lemma for the case of a countably infinite product. Let Il b Il z , ... be a partition of N, the set of positive integers, into infinitely many infinite sets. Define qJ: JiI -> JiI JiIJiI ... by (86)
where Zk consists of the components of Z with indices in Ilk' Then qJ is oneto-one and onto and, because convergence in a product space is componentwise, qJ is a homeomorphism. Q.E.D. Combination of Lemma 7.25 with Proposition 7.37 gives the following.
7.6
165
ANALYTIC SETS
be a sequence of Borel spaces and A k Proposition 7.38 Let X b X 2, an analytic subset of X k , k = 1,2, Then the sets A 1A 2 ••• and A 1A 2 . . . An' n = 1,2, ... , are analytic subsets of X 1 X 2 . . . and X 1 X 2 . . . X n' respectively. Proof Let fk: JV ~ X kbe continuous such that A k = !k(JV), k Let qJ be given by (86) and F: JVJV ... ~ X 1 X 2' .. be given by F(Zl'Z2"")
=
1,2, ....
= (fd Zl ),fi z 2), " .).
Then Fa qJ is continuous and maps JV onto A 1A2 • .•. The finite products are handled similarly. Q.E.D. Another consequence of Proposition 7.37 is that the continuous image of an analytic set, in particular, the projection of an analytic set, is analytic. As discussed at the beginning of this section, this property motivated our inquiry into analytic sets. We formalize this and a related fact to obtain another characterization of analytic sets. Proposition 7.39 Let X and Y be Borel spaces and A an analytic subset of XY. Then proh(A) is analytic. Conversely, given any analytic set C eX and any uncountable Borel space Y, there is a Borel set B c X Y such that C = projx(B). If Y = JV, B can be chosen to be closed.
Proof If A = f(JV) c X Y is analytic, where f is continuous, then projx(A) = (proj; a f)(JV) is analytic by Proposition 7.37. If C = f(JV) c X is nonempty and analytic, then C = projx[Gr(f)],
where Gr(f) = {(f(z), z) E X JVlz E JV} is closed because f is continuous. If Y is any uncountable Borel space, then there exists a Borel isomorphism qJ from JV onto Y (Corollary 7.16.1). The mapping <1> defined by <1>(x, z)
= (x, qJ(z))
is a Borel isomorphism from XJV onto X Y, and C = proh(<1>[Gr(f)]).
Q.E.D.
So far we have treated only the continuous images of analytic sets. With the aid of Proposition 7.39, we can consider their images under Borelmeasurable functions as well. Proposition 7.40 Let X and Y be Borel spaces and f:X ~ Ya Borelmeasurable function. If A c X is analytic, then f(A) is analytic. If BeY is analytic, then f - l(B) is analytic.
Proof Suppose A c X is analytic. By Proposition 7.39, there exists a Borel set B c XJV such that A = proh(B). Define l{!: B ~ Y by l{!(x, z) = f(x).
7.
166
BOREL SPACES AND THEIR PROBABILITY MEASURES
Then ljJ is Borel-measurable, and Corollary 7.14.1 implies that Gr(ljJ)E.?J'UY' Finally, f(A) = projy[Gr(ljJ)] is analytic by Proposition 7.39. If BeY is analytic, then B = N(S), where S is some Suslin scheme for % r Then f-1(B) = N(f-1 S), where i : S is the Suslin scheme for :?J x defined by 0
0
The analyticity of f- 1 (B ) follows from Proposition 7.36.
Q.E.D.
We summarize the equivalent definitions of analytic sets in Borel spaces. Proposition 7.41 Let X be a Borel space. The following definitions of the collection of analytic subsets of X are equivalent: (a)
Y(% x);
Y(:?Jx); (c) the empty set and the images of ,Ai' under continuous functions from JV into X; (d) the projections into X of the closed subsets of X JIf'; (e) the projections into X of the Borel subsets of XY, where Y is an uncountable Borel space; (f) the images of Borel subsets of Y under Borel-measurable functions from Y into X, where Y is an uncountable Borel space. (b)
Proof The only new characterization here is (f). If Y is an uncountable Borel space and f: Y -> X is Borel-measurable, then for every BE.?J'y, f(B) is analytic in X by Proposition 7.40. To show that every nonempty analytic set A c X can be obtained this way, let cp be a Borel isomorphism from Y onto X JV and let F c X JV be closed and satisfy projx(F) = A. Define B = cp -l(F) E:?J y . Then (proj; 0 cp)(B) = A. If A = 0, then f(0) = A for any Borel-measurable f: Y -> X. Q,E.D. 7.6.2
Measurability Properties of Analytic Sets
At the beginning of this section we indicated that extended real-valued functions on a Borel space X whose lower level sets are analytic arise naturally via partial infimization. Because the collection of analytic subsets of an uncountable Borel space is strictly larger than the Borel IT-algebra (Appendix B), such functions need not be Borel-measurable. Nonetheless, they can be integrated with respect to any probability measure on (X, .?J'x). To show this, we must discuss the measurability properties of analytic sets. If X is a Borel space and p E P(X), we define p-outer measure, denoted by p*, on the set of all subsets of X by p*(E) = inf{p(B)IE c B,BE.?J'x}.
(87)
7.6
167
ANALYTIC SETS
Outer measure on an increasing sequence of sets has a convergence property, namely, p*(E n) i p*(U:~ 1 En)if E 1 C E z C . . . . This is easy to verify from (87) and also follows from Eq. (5) and Proposition A.1 of Appendix A (see also Ash [AI, Lemma 1.3.3(d)]). The collection of sets iJ6 x (p) defined by iJ6 x (p) = {E c Xlp*(E)
+ p*(g) = I}
is a c-algebra (Ash [AI, Theorem 1.3.5]), called the completion of .U4 x with respect to p. It can be described as the class of sets of the form BuN as B ranges over iJ6 x and N ranges over all subsets of sets of p-measure zero in iJ6 x (Ash [AI, p. 18]), and we have p*(B u N) = p(B).
Furthermore, p* restricted to iJ6x (p) is a probability measure, and is the only extension of p to iJ6 x (p) that is a probability measure. In what follows, we denote this measure also by p and write p(E) in place of p*(E) for all E E f4 x (p). Definition 7.18 Let X be a Borel space. The universal a-alqebra !lit x is defined by OIl x = nPEP(X) iJ6 x (p). If E E!lItx, we say E is universally measurable. The usefulness of analytic sets in measure theory is in large degree derived from the following proposition. Proposition 7.42 (Lusin's theorem) Let X be a Borel space and S a Suslin scheme for !lit x- Then N(S) is universally measurable. In other words, y(!lIt x )
= !lItx ·
Proof Denote A = N(S), where S is a Suslin scheme for !lItx . For ,00dE!:, define
(0"1""
N«Jl,.·.,(Jk)= {«(I,(Z, ... )EJII!(1 =O"I""'(k=O"k}
(88)
and M(O"I"" ,O"k) = {«l,(Z," .)Eu¥'I(1 :s;; t I ::::;
U
(J
0"1,'"
,(k::;; (Jd
N(rl>' .. , rd·
(89)
10 •••• tk ::::; (Tk
Define also
n S(s).
U
R«Jl,'" ,O"d =
(90)
ZEM(Ub··.,O"k)S
Then (91)
where K«Jl,'" ,O"d =
U
n k
T:t::::;Ul ••.•• tk:5;akj=
1
S(rl>'" ,r j ) .
(92)
168
7.
BOREL SPACES AND THEIR PROBABILITY MEASURES
As ali 00, M(a 1)j fl, so R(a 1)j A. Likewise, as ak i 00, M(a b' .. , ak- 1, ad i M(al"" ,ak-d,soR(al"" ,ak-l,ak)i R(ab'" ,ak_d·GivenpEP(X)and <:
> 0, choose Z1, ?:2, . . . such that p*(A) ~
,rk-dJ s
P*[R(?:I""
p*[R(?:dJ + (e/2), p*[R(rl"" ,rk-l,rk)J
+ (e/2k),
k = 2,3, ....
Then p*(A) ~ p*[R(?:I" .. , rdJ
+ e,
k = 1,2, ....
(93)
The set K(rl,' .. , rk) is universally measurable, so (91) and (93) imply
+ p[X - K(rl,'" ,rk)] ,rk)] + p[X - K(rl,'" ,rdJ
1 = p[K(rb'" ,rk)] :2: p*[R(rl"" :2: p*(A) -
E
+
(94)
p[X - K(rb'" ,rk)].
We show that
n K«(b'" 00
k=1
(95)
,(k) c A
If
XEn k 00
n :5 u
k
00
K «( I , · ..
,(d=
= 1
k= h 1
(10 ... ,
nS(rb···,rj),
(96)
(k j = 1
an argument by contradiction will be used to show that for some r 1 ~ (1' we have xES(rd n
[lt2:5(2"~'
S(rl,r 2,···, r j)}
If no such r 1 existed, then for every r 1 integer k(rd such that x¢ S(rd n [(\<2:5(2,
~ (b
~,
there would exist a positive
S(r 1, r 2,
... ,
rj ) }
If I = max<,:5(, k(rl)' then x ¢<JY( ::"J
J
[k02<2 :5(2, ~,
n S(rl,r2,'" :5(,.... ,
U <1
S(r d n
S(r 1, r 2, ... , r)]}
Ii
j= 1
(97)
,rj)
= K«(I,'"
,(d
7.6
169
ANALYTIC SETS
and a contradiction is reached. Replace (96) by (97) and apply the same argument to establish the existence of'f 2 ::; (2 such that XES(T 1 ) n S(T 1,T2 ) n
[0
k-3 !";~3
U
•... . !kS;~kJ-3
.0
Continuing this process, construct a sequence 1'1
n
S(T 1,T2 ,'r 3 ,
,t)J.
::; (I, 1'2 ::; (2' . . .
00
XE
· · ·
such that
= A.
S(TI>' .. ,Tk ) c N(S)
k= 1
This proves (95), i.e., as k -+ 00, K«( I" .. , (k) decreases to a set contained in A, and X - K«( I,' .. ,(d increases to a set containing X-A. Letting k -+ 00 in (94), we obtain 1 2:: p*(A) - s
+ p*(X -
A):
Since s > 0 is arbitrary, this implies that
+ p*(X that p*(E) + p*(X 1 2:: p*(A)
It is true for any E c X
p*(A)
+
p*(X - A)
and A is measurable with respect to' p.
A). E) 2:: 1, so
= 1,
Q.E.D.
Corollary 7.42.1 Let X be a Borel space. Every analytic subset of X is universally measurable. Proof The closed subsets of X are universally measurable, so !I'(ff'x) c IJIt x by Proposition 7.42. Q.E.D.
As remarked earlier, the class of analytic subsets of an uncountable Borel space is not a a-algebra, so there are universally measurable sets which are not analytic. In fact, we show in Appendix B that in any uncountable Borel space, the universal a-algebra is strictly larger than the a-algebra generated by the analytic subsets. 7.6.3
An Analytic Set of Probability Measures
In Proposition 7.25 we saw that when X is a Borel space, the function -+ [0,1] defined by (JA(P) = peA) is Borel-measurable for every Borel-measurable A c X. We now investigate the properties of this function when A is analytic. The main result is that the set {p E P(X)lp(A) 2:: c} is analytic for each real c.
(JA:P(X)
Proposition 7.43 Let X be a Borel space and A an analytic subset of X. For each cER, the set {pEP(X)lp(A) 2:: c} is analytic.
170
7.
BOREL SPACES AND THEIR PROBABILITY MEASURES
Proof Let S be a Suslin scheme for :Ji'x, the class of closed subsets of X, such that A = N(S). For S E L, let N(s), M(s), R(s), and K(s) be defined by (88)-(90) and (92). Then (91) and (95) hold and each K(s) is closed. We show that for cER [pEP(X)lp(A) ~ c}
nun en
=
{pEP(X)lp[K(s)] ~
C -
(lIn)}. (98)
n= 1 zeJV· s
If p(A) ~ c, then for any n ~1, there exists (~1' ~z, ...) E JV such that (93) is satisfied with p = p and I: = lin. Then by (91), for k = 1,2, ...
so
nun 00
PE
n=1
.
{pEP(X)lp[K(s)] ~
c'- (lin)}.
ZEJVS
On the other hand, given
nun 00
PE
{pEP(X)lp[K(s)] ~ c - (lIn)},
n=lzEffs
we have that for each n there exists (( 1, (z,· ..) E.A1 for which
We have then from (95) that p(A)
~
c - (lin),
n
= 1,2, ... ,
so p(A) ~ c, and (98) is proved. Proposition 7.25 guarantees that for each n
~
1 and SEL, the set
Tn(s) = {pEP(X)lp[K(s)] ~ c - (lin)}
is Borel-measurable in P(X). We have from (98) that {pEP(X)lp(A) ~ c}
=
n 00
n= 1
N(T n),
and the proposition follows from Proposition 7.36 and Corollary 7.35.2. Q.E.D. Corollary 7.43.1 Let X be a Borel space and A an analytic subset of X. For each cER, the set {pEP(X)lp(A) > c} is analytic.
7.7
UNIVERSALLY MEASURABLE SELECTION
Proof
171
For each cER, 00
{pEP(X)lp(A) > c}
=
U {pEP(X)lp(A);::: c + (lin)}.
n=l
The result follows from Corollary 7.35.2 and Proposition 7.43. 7.7
Q.E.D.
Lower Semianalytic Functions and Universally Measurable Selection
In a Borel space X, there are at least three a-algebras which arise naturally": the Borel a-algebra f!jJ x of Definition 7.6, the universal a-algebra 07/ x of Definition 7.18, and the analytic a-algebra J4 x, which we define now. Definition 7.19 Let X be a Borel space. The analytic a-algebra J4 x is the smallest a-algebra containing the analytic subsets of X. In symbols, J4 x = a [9'(ff x )]. If EEJ4 x, we say that E is analytically measurable. From Proposition 7.36 and Lusin's theorem (Proposition 7.42), we have that for any Borel space X
If X is countable, each of these collections of sets is equal to the power set of X (the collection of all subsets of X). We show in Appendix B that if X is uncountable, each set containment above is strict. This fact will not be used in the constructive part of the theory, but only to give examples showing that results cannot be strengthened. Corresponding to the three a-algebras just discussed, we will treat three types of measurability of functions. Borel-measurable functions were defined in Definition 7.8. The other two types are defined next. Definition 7.20 Let X and Y be Borel spaces and f a function mapping Dc X into Y. If DEJ4 x and f-1(B)EJ4 x for every BEf!jJy, f is said to be analytically measurable. If D E Ol/x and f-1(B)E Ol/x for every BE f!/jy, f is said to be universally measurable. From the preceding discussion, we see that every Borel-measurable function is analytically measurable, and every analytically measurable function is universally measurable. The converses of these statements are false. We begin by stating for future reference the following characterization of the universal a-algebra. t A fourth o-algebra. the limit o-alqebra !f' x' which lies between d x and'llx. is defined in Appendix B, and treated there and in Section 11.1.
7.
172
BOREL SPACES AND THEIR PROBABILITY MEASURES
Lemma 7.26 Let X be a Borel space and E c X. Then E E Olt x if and only if, given any pEP(X), there exists BEfJB x such that p(E~B) = o.
We turn now to the question of composition of measurable functions. If Borel-measurable functions are composed, the result is again Borel-measurable. Unfortunately, the composition of analytically measurable functions need not be analytically measurable (Appendix B). We have the following result for universally measurable functions. Proposition 7.44 Let X, Y, and Z be Borel spaces, DEOlt x, and EEOlt y. Suppose f: D -+ Y and g: E -+ Z are universally measurable and f(D) c E. Then the composition go f is universally measurable.
Proof We must show that given BEfJB z , the set f-l[g-l(B)] is universally measurable. Since g-l(B) E Olt r- it suffices to prove that l(U)E Oltx for every UEOlt y. For pEP(X), define p'EP(Y) by
r
p'(C)
=
pU- 1 (C)]
VCE84y .
Let V E fJB y be such that pU- 1(V) ~.r
l(U)] = p'(V c; U) =
o.
The setf-l(V) is in ou x- so there exists W EfJB x for which peW ~f-l(V)] Then peW f- 1(U)] = O. The result follows from Lemma 7.26.
c:
=
o.
Q.E.D.
The proof of Proposition 7.44 also establishes the following fact. Corollary 7.44.1 Let X and Y be Borel spaces, DEOlt x, and f:D a universally measurable function. If UEOlt y, thenf- 1(U)EOlt x.
-+
Y
Since.e/ XC Olt x» we can specialize these results to analytically measurable sets and functions. Corollary 7.44.2 Let X, Y, and Z be Borel spaces, DE.s#x, and EE.s#y. Suppose f:D -+ Y and g:E -+ Z are analytically measurable and f(D) c E. Then the composition go f is universally measurable. If A E.s# y, then f-l(A)EOlt x·
We remind the reader that if X and Yare Borel spaces, a stochastic kernel q(dylx) on Y given X is said to be universally measurable if the mapping y(x) = q(dYlx) is universally measurable from X to P(Y) (Definition 7.12).
Corollary 7.44.3 Let X and Y be Borel spaces, let f:X -+ Y be a function, and let q(dyjx) be a stochastic kernel on Y given X such that, for each x, q(dy[x) assigns probability one to the point f(X)E Y. Then q(dYlx) is universally measurable if and only if f is universally measurable.
7.7
173
UNIVERSALLY MEASURABLE SELECTION
Proof Let (5: Y -> P( Y) be the homeomorphism defined by (j( y) = Py (Corollary 7.21.1). Let y:X -> P(Y) be the mapping y(x) = q(dYlx). Then y = (50 fandf = (5-1 0 y. The result follows from Proposition 7.44. Q.E.D. If X is a Borel space and f: X -> R* is universally measurable, then given any p E P(X), f is measurable with respect to the completed Borel e-algebra &6 x (p), and Ifdp is defined by
If dp = If+ dp - If- dp,
where the convention 00 - 00 = 00 is used and the integrations are performed on the measure space (X, &lx(p),pl. If DE 0/1 x' the integral IDf dp is defined similarly. Having thus defined If dp without resort to p-outer measure, we have all the classical integration theorems at our disposal, provided that we take care with the addition of infinities. We proceed now to show that universally measurable stochastic kernels can be used to define probability measures on product spaces in the manner of Proposition 7.28. For this we need some preparatory lemmas. Lemma 7.27 Let X be a Borel space and f:X -> R*. The function f is universally measurable if and only if, for every p E P(X), there is a Borelmeasurable function fp: X -> R* such that f(x) = fp(x) for p almost every x.
Proof Suppose f is universally measurable and let p E P(X) be given. For rEQ*, let U(r) = {xjf(x)::<:;r}. Then f(x) = inf{rEQ*lxEU(r)}. Let B(r)E&lx be such that p[B(r) £:,. U(r)] = O. Define fp(x) = inf{rEQ*lxEB(r)} = inf l/Jr(x), rEQ*
where l/Jr(x)=r if x e Bir) and l/Jr(x) = Borel-measurable, and {xlf(x) # fp(x)}
c
00
otherwise. Then
j~:X->R*
IS
U [B(r) £:,. U(r)] rEQ*
has p-measure zero. Conversely, if, given p e P(X), there is a Borel-measurable f(x) = fp(x) for p almost every x, then
j~
such that
f follows.
Q.E.D.
p({xlf(x) ::<:; c} £:,. {x!fp(x) ::<:; c}) = 0
for every
C E R*,
and the universal measurability of
Lemma 7.27 can be used to give an alternative definition of If dp when
f is a universally measurable, extended real-valued function on a Borel space X and p e P(X). Letting I, be as in the proof of that lemma, we can define S! dp = S!p dp. It is easy to show that this definition is equivalent to the one which precedes Lemma 7.27.
174
7.
BOREL SPACES AND THEIR PROBABILITY MEASURES
Lemma 7.28 Let X and Y be Borel spaces and let q(dYlx) be a stochastic kernel on Y given X. The following statements are equivalent:
(a) The stochastic kernel q(dYlx) is universally measurable. (b) For any BE[lJy, the mapping AB:X -+ R defined by }'B(X) = q(Blx) is universally measurable. (c) For any p E PIX), there exists a Borel-measurable stochastic kernel qp(dYlx) on Y given X such that q(dYlx) = qp(dYlx) for p almost every x. Proof We show (a) => (b) => (c) => (a). Assume (a) holds. Then the function y:X -+ P( Y) given by y(x) = q(dy[x) is universally measurable. If B E [lJx, AB is defined as in (b), and lIB:P(Y) -+ R is given by lIB(p) = p(B), then AB = liB o y, which is universally measurable by Propositions 7.25 and 7.44. Therefore (a) => (b). Assume (b) holds and choose p E PIX). Since Y is separable and metrizable, there exists a countable base [lJ for the topology in Y. Let $' be the collection of sets in [lJ and their finite intersections. For FE$', let fF be a Borel-measurable function for which fF(X)
= q(Flx) I;fXEBF,
where BFE[lJx and p(B F) = 1. Such an fF and B F exist by assumption (b) and Lemma 7.27. For XE nFE.<JC B F, let qp(dYlx) = q(dYlx). For x ¢ nFEf' B F, let qp(dYlx) be some fixed probability measure in PlY). Then q(dYlx) = qp(dYlx) for p almost every x. The class of sets X in (]By for which qp(Xlx) is Borelmeasurable in x is a Dynkin system containing $'. The class $' is closed under finite intersections and generates [lJy, so statement (c) follows from the Dynkin system theorem (Proposition 7.24). Therefore (b) => (c). Assume (c) holds and choose pEP(X). Let qp(dYlx) be as in assumption (c) and define y, Yp:X -+ PlY) by y(x) = q(dYlx), yp(x) = qp(dYlx). If BE(]Bp(X), then p[y-I(B) ~ y; lIB)] = O. Lemma 7.26 implies that y-I(B) is universally measurable. Therefore (c) => (a). Q.E.D. Lemma 7.29 Let X, Y, and Z be Borel spaces and let f:XY -+ Z be a universally measurable function. For fixed XEX, define gx: Y -+ Z by gx(Y) = f(x,y).
Then gx is universally measurable for every x EX. Proof For fixed XoEX, let cp: Y -+ XY be the continuous function defined by cp(y) = (xo,Y). For ZE[lJz,
{YE Y!gxo(Y)EZ} = cp-I({(X,Y)EXY!f(x,Y)EZ}), and this set is universally measurable by Corollary 7.44.1.
Q.E.D.
7.7
175
UNIVERSALLY MEASURABLE SELECTiON
It is worth noting that if(OI' ~ b p) and (Oz, ~ z. q) are probability spaces, then there are two natural a-algebras on 0IOZ, namely, ~I~Z and the completion ~I~Z of ~I~Z with respect to pq. If f:OIOz ---+ R is ~I~Z measurable, then for every WI EOI, the function gw,(wz) = f(wI'wz) is ~ z-measurable. However, if f is only ~ I ~ z-measurable, then gw,(wz) can be guaranteed to be ~ z-measurable only for p almost all WI' The case treated by Lemma 7.29 is intermediate to these two, since 0Ux!J/t y c !J/tXY, and if p E P(X), q E P( Y), and !J/tx!J/t y denotes the completion of!J/tx!J/t y with respect to pq, then !J/t XY C !J/tx!J/t y. Note that the stronger result that gAy) is !J/ty-measurable for every x EX holds, although the assumption that f is !J/t Xy-measurable may be weaker than the assumption that f is !J/t x!J/t y-measurable. We now use the properties of universally measurable functions and stochastic kernels to extend Proposition 7.28. Proposition 7.45 Let X I, X z, . . . be a sequence of Borel spaces, y" = X I X Z •.. X nand Y = X I X Z •••• Let p E P(X d be given and, for n = 1, 2, ... , let qn(dxn+ tlYn) be a universally measurable stochastic kernel on X n+ 1 given
Y". Then for n = 2,3, ... , there exist unique probability measures such that
X
t«
E P( y,,),
qn-z(dXn-I!XbXZ,'" ,xn-z)··· "IX I E!?J x,,· .. ,XnEf!J xn. (99)
x ql(dxzlxdp(dxl)
If f: y" 00,
---+
then
R* is universally measurable and either JJ+ dr; <
00
or Sf- dr; <
r n fdrn=Sx , SX "'Sx, f(XI,XZ"",Xn)qn-l(dxn\XbXZ"",xn-I)'" JY 2
X
ql(dXzIXI)p(dxd.
noo,
Furthermore, there exists a unique probability measure r E P( Y) such that for each n the marginal of r on y" is r n • Proof There is a Borel-measurable stochastic kernel 711 (dxzlxd which agrees with q(dxzlxd for p almost every XI' Define rz E P(Yz) by specifying it on measurable rectangles to be (Proposition 7.28)
Assume f: Yz ---+ [0,00] is universally measurable and let]: Yz ---+ [0,00] be Borel-measurable and agree with f on Yz - N, where N E f!J y z and rz(N) = O.
7.
176
BOREL SPACES AND THEIR PROBABILITY MEASURES
By Proposition 7.28, 0= r2(N) =
r r 2 XN(XbX2)Z!t(dx2!X1)P(dx1) Jx,JX
= SX, 711(N x,!xdp( dxd,
so 711(Nx,lxd=0 for p almost every Xl' Now f(X1'X 2)=](X1,X 1 ) for x 2 ¢: N x,so ISx2 [f(Xb X2) - ](X1,Xl)]711(dXllxd!
~ for p almost every
X l'
SNJf(X 1,X2) - ](X 1,x2)1711(dx 1!xd = 0 It follows that
r
r
Jx,](Xb X2)711(dx2Ix1) = JX 2 f(Xl>X2)711( dx2Ix1) =
r 2 f(X 1,X2)q1(dxllxd JX
for p almost every Xl' The left-hand side is Borel-measurable by Proposition 7.29, so the right-hand side is universally measurable by Lemma 7.27. Furthermore,
This proves (100)for n = 2 and f ;;::: O. Iff: Y2 ~ R* is universally measurable and satisfies Sf + drl < 00 or Sf - drl < 00, then (100) holds for f + and f -, so it holds for f as well. Take f = XK,K2 to obtain (99). Now assume the proposition holds for n = k. Let 71k(dxk+1IYk) be a stochastic kernel which agrees with qk(dxk+11Yd for rk almost every xi, Define rk+ 1 by specifying it on measurable rectangles to be
r
rk+1(X 1X 2"'Xk+1)= JK,K2"'Kk 71k(Xk+1!X1,X2"",xk)drk
Proceed as in the case of n = 2 to prove the proposition for n = k + 1. (See also the proof of Proposition 7.28.) The existence of r E P( Y) such that the marginal of r on X n is rn' n = 2, 3,... , is proved exactly as in Proposition 7.28. Q.E.D.
7.7
UNIVERSALLY MEASURABLE SELECTION
177
In the course of proving Proposition 7.45, we have also established the following fact. Proposition 7.46 Let X and Y be Borel spaces and let f:XY -> R* be universally measurable. Let q(dYlx) be a universally measurable stochastic kernel on Y given X. Then the mapping .Ie: X -> R* defined by
Jc(x)
=
I f(x, y)q(dylx)
is universally measurable. Corollary 7.46.1 Let X be a Borel space and let f: X -> R * be universally measurable. Then the function 8J : P(X) -> R * defined by 8J (p) = If dp
is universally measurable. Proof
Define a universally measurable stochastic kernel on X given = p(dx). Apply Proposition 7.46. Q.E.D.
P(X) by q(dxlp)
As mentioned previously, the functions obtained by infimizing bivariate, extended real-valued, Borel-measurable functions over one of their variables have analytic lower level sets. We give these functions a name. Definition 7.21 Let X be a Borel space, Dc X, and f:D -> R*. If D is analytic and the set {xED!f(x) < c} is analytic for every cER, then f is said to be lower semianalytic.
It is apparent from the definition that a lower semianalytic function is analytically measurable. We state some characterizations and basic properties of lower semianalytic functions as a lemma. Lemma 7.30 (1) Let X be a Borel space, D an analytic subset of X, and f:D -> X. The following statements are equivalent.
(a) The function f is lower semianalytic, i.e., the set {xEDlf(x) < c}
(101 )
is analytic for every c E R. (b) The set (101) is analytic for every cER*. (c) The set {xED!f(x) ~ c}
is analytic for every c E R. (d) The set (102) is analytic for every cER*.
(102)
7.
178
BOREL SPACES AND THEIR PROBABILITY MEASURES
(2) Let X be a Borel space, D an analytic subset of X, and f,,:D -> R*, n = 1,2, ... , a sequence oflower semianalytic functions. Then the functions inf, t; sup, .in, liminfn-+ oo In, and limsuPn-+oo f" are lower semianalytic. In particular, if I, -> I, then .f is lower semianalytic. (3) Let X and Y be Borel spaces, g:X -> Y, and /:g(X) -> R*. If g is Borel-measurable and I is lower semianalytic, then I 0 g is lower semianalytic. (4) Let X be a Borel space, D an analytic subset of X, and f, g: D -> R *. If f and g are lower semianalytic, then I + g is lower semianalytic. If, in addition, g is Borel-measurable and g 2 0 or if / 2 0 and g 2 0, then Ia is lower semianalytic, where we define 0 . 00 = 00 . 0 = O( - 00) = (- 00)0 = o. Proof (1) We show (b) => (a) => (d) => (c) => (b). It is clear that (b) => (a). If (a) holds, then {xEDlf(x)::; oo}
=D
is analytic by definition, while the sets {xEDII(x)::; -oo}
=
n 00
n= 1
{xEDlf(x) < -n},
n {xEDI/(X) < C+ (lin)}, 00
[xEDI/(x)::; C} =
cER,
n=1
are analytic by Corollary 7.35.2. Therefore (a)=> (d). It is clear that (d) =>(c). If (c) holds, then the sets {xEDII(x) < - oo}
=
0, 00
{xEDI.f(x) < oo}
=
U {xEDII(x)::; n},
n= 1 00
{xEDI/(x) < C}
=
U {xEDII(x)::; c -
(lIn)},
c E R,
n=1
are analytic by Corollary 7.35.2. Therefore (c) =>(b). (2) For cER, {x s Dlinf In(x) < c} n
=
00
U{xEDIf,,(x) < c},
n= 1
n 00
c} =
{xEDlsup.f~(x)::; n
n=1
{xEDI.in(X)::; C},
so inf, .in and SUPn f" are lower semianalytic by Corollary 7.35.2 and part (I). Then lim infIn = sup inf Ik n-r a:
n~l
k?:n
7.7
179
UNIVERSALLY MEASURABLE SELECTION
and lim sup in = inf sup h n:::1
n-e co
«>:»
are lower semianalytic as well. (3) The domain g(X) of f is analytic by Proposition 7.40. For cER, {xEXI(fog)(x) < c} = g-l({YEg(X)lf(Y) < c})
is analytic by the same proposition. (4) For cER, {xEDlf(x)
+ g(x) < c} = U [{xEDlf(x) <
r} n {xEDlg(x) < c - r}],
reQ
and this is true even if f(x) + g(x) = 00 - 00 = 00 for some xED. From Corollary 7.35.2 it follows that f + g is lower semianalytic whenever f and g are. Now suppose g is Borel-measurable and g 2 0. For c > 0, we have {xEDlf(x)g(x) < c} = {xEDlf(x) ~ O} u {xEDlg(x) ~ O}
u[
while if c
~
U
{xED!f(x) < r,g(x) < c/r}],
reQ,r>O
0, we have
{xED!f(x)g(x) < c}
=
U
{xEDIJ(x) < r,g(x) > c/r}.
reQ,r
In both cases, the set {xEDlf(x)g(x) < c} is analytic by Corollary 7.35.2. Suppose f and g are both lower semianalytic and nonnegative. For c> 0, the set {xEDlf(x)g(x) < c} is analytic as before, and for c ~ 0, this set is empty. It follows that fg is lower semianalytic under either set of assumptions on f and g. Q.E.D. Note in connection with Lemma 7.30(3) that the composition of a Borel-measurable function with a lower semianalytic function can be guaranteed to be lower semianalytic only when the composition is in the order specified. To see this, let X be a Borel space and A c X be an analytic set whose complement is not analytic (see Appendix B). Define f(x) = - XA(X), which is lower semianalytic, because {xEXlf(x) < c} is either 0, A, or X, depending on the value of c. Let g:R* ~ R* be given by g(c) = -c. Then XA = go f, and this function is not lower semianalytic, since {x E XIXA(X) < t} = A C • This also provides us with an example of an analytically measurable function which is not lower semianalytic. Proposition 7.47 Let X and Y be Borel spaces, let D be an analytic subset of XY, and let f:D ~ R* be lower semianalytic. Then the function
7.
180
f*: projx(D)
-->
BOREL SPACES AND THEIR PROBABILITY MEASURES
R* defined by f*(x) = inf f(x, y)
(103)
yeD x
is lower semianalytic. Conversely, if f*: X --> R* is a given lower semianalytic function and Y is an uncountable Borel space, then there exists a Borelmeasurable function f: X Y --> R * which satisfies (103) with D = X Y.
Proof For the first part of the theorem, observe that if f: D --> R* is lower semianalytic and e E R, the set inf f(x,y) < e} { XE Proh (D)[yeD x
=
proh({(x, y)ED!f(x, y) < en
is analytic by Proposition 7.39. For the converse part of the theorem, let analytic and let Y be an uncountable Borel {xEXlf*(x) < r}. Then A(r) is analytic and, exists B(r)EBH xy such that A(r) = projx[B(r)]. and f:XY --> R* by
f*:X --> R* be lower semispace. For rE Q, let A(r) = by Proposition 7.39, there Define G(r) = USEQ.ssrB(s)
f(x,y) = inf{rEQI(x,y)EG(r)} = inf t/!r(x,y), rEQ where t/!r(x, y) = r if (x, Y)E G(r) and t/!r(x, y) = 00 otherwise. Then f is Borel-measurable. Let g be defined by g(x) = infyEYf(x,y). We show that f*(x) = g(x) for every x E X. Iff*(x) < c for some eER, then there exists rEQ for whichf*(x) < r < c, and so x E A(r). There exists y E Y such that (x, y) E G(r), and, consequently, f(x, y) :s; rand g(x) :s; r < c. Therefore g(x) cannot be greater than f*(x). If g(x) < c for some c E R, then there exists r EQ and y E Y for which g(x) < r < c and (X,y)EG(r). Thus for some SEQ, s:S; r, we have (x,y)EB(s) and x E A(s). This implies f*(x) < s :s; r < e, which shows that f*(x) cannot be greater than g(x). Q.E.D. Proposition 7.48 Let X and Y be Borel spaces, f:XY --> R* lower semianalytic, and q(dYlx) a Borel-measurable stochastic kernel on Y given X. Then the function A:X --> R* defined by A(X)
is lower semianalytic.
f
= f(x, y)q(dylx)
Proof Suppose f 2 O. Let !n(x,y) lower semianalytic and In i f. The set
=
min{n,f(x,y)}. Then each .f,. is
En = {(X,y,b)EXYRI.f,.(x,y):s; b s: n}
n rEQU{(X,y,b)EXYRIIn(x,y) < r.r s; b + (llk):s; n + (11k)) OC!
=
k=l
7.7
181
UNIVERSALLY MEASURABLE SELECTION
is analytic in X YR by Corollary 7.35.2 and Proposition 7.38. Let u be Lebesgue measure on R, P E P(X Y), and PIl the product measure on X YR. By Fubini's theorem, (PIl)(En) = Ixy IR XE n du dp = IxJn - In(x, y)] dp =
n - Ixy In(x, y) dp.
For cER we have, by the monotone convergence theorem,
n{pEP(XY)I(PIl)(En) Z n 00
=
c}.
n=l
Hence, by Proposition 7.43 and the fact that the mapping P -+ PIl is continuous (Lemma 7.12), the function eJ:p(XY) -+ R* defined by eJ(p) = Sf(x, y) dp is lower semianalytic. We have Jc(x)
=
eJ[ q(dylx)pxJ.
Since the mapping x -+ q(dYlx) is Borel-measurable from X to P(Y) and the mappings x -+ Px and [q(dYlx),Px] -+ q(dylx)px are continuous from X to P(X) and P(X)P(Y) to P(XY), respectively (Corollary 7.21.1 and Lemma 7.12), it follows from Lemma 7.30(3) that Jc is lower semianalytic. Suppose f:-:::; O. Let In(x,y) = max{ -n,/(x,y)}. Then each In is lower semianalytic and fn~f. The sets En = {(X,y,b)EXYRIIn(x,y):-:::; b s; O} are analytic and
For cER,
00
=
U {pEP(XY)I(PIl)(En) >
-c}.
n= 1
Proceed as before. In the general case, I f(x, y)q(dylx)
=
If + (x, y)q(dylx) - If - (x, y)q(dYlx).
The functions f + and - f - are lower semianalytic, so by the preceding arguments each of the summands on the right is lower semianalytic. The result follows from Lemma 7.30(4). Q.E.D.
182
7.
BOREL SPACES AND THEIR PROBABILITY MEASURES
Corollary 7.48.1 Let X be a Borel space and let f: X semianalytic. Then the function 8J: P(X) -+ R* defined by
8J (p) =
ff
-+
R * be lower
dp
is lower semianalytic. Proof Define a Borel-measurable stochastic kernel on X given P(X) by q(dxlp) = p(dx). Apply Proposition 7.48. Q.E.D.
As an aid in proving the selection theorem for lower semianalytic functions, we give a result concerning selection in an analytic subset of a product of Borel spaces. The reader will notice a strong resemblance between this result and Lemma 7.21, which was instrumental in proving the selection theorem for upper semicontinuous functions. Proposition 7.49 (Jankov-von Neumann theorem) Let X and Y be Borel spaces and A an analytic subset of X Y. There exists an analytically measurable function sp : projx(A) -+ Y such that Gr(cp) c A. Proof (See Fig. 7.2.) Let f: JV -+ XY be continuous such that A = f(JV). Let 9 = proh f. Then g: JV -+ X is continuous from JV onto {x}) is a closed nonempty subset of .ff. Let proh(A). For x E proh(A), (l(X) be the smallest integer which is the first component of an element Z1 E g-l( {x}). Let (2(X) be the smallest integer which is the second component of an element Z2Eg-1({X}) whose first component is (l(X). In general, let (k(X) be the smallest integer which is the kth component of an element ZkEg-1({X}) whose first (k - l)st components are (l(X), ... ,(k-l(X). Let ljJ(x) = ((1(X),(2(X), . . .). Since Zk -+ ljJ(x), we have ljJ(X)Eg- 1({X}). (104) 0
Define
tp : projx(A) -+ Y
«:«
by cp = proj,
0
f
0
ljJ, so that Gr( cp) c A.
y
FIGURE 7.2
7.7
183
UNIVERSALLY MEASURABLE SELECTION
We show that cp is analytically measurable. As in the proof of Proposition 7.42, for (a1,' .. , adEL let N(al"" M(ab'"
= {(Cl,C2," .)EJVIC1 = al,'" ,ak) = {(CbC2,·· .)E·JVICI S ab'"
,Ck = ak),
,ak)
,Ck S ak}'
We first show that t/J is analytically measurable, i.e., t/J - 1(.CJB.'f) C d x- Since {N(slls E L} is a base for the topology on JV, by the remark following Definition 7.6, we have a({N(s)lsEL}) = .CJB.Ho Then
t/J-l(.CJBHl = t/J-1[a({N(s)lsEL}lJ = a[t/J-l({N(sl!SELj)], and it suffices to prove (105)
t/J-l[N(s)JEd x
We claim that for s = (al,a2,'" ,ak)EL t/J-l[N(s)J
=
k
g[M(s)J -
U g[M(al""
,aj-baj - I)J,
(106)
j= 1
where M(a1"" ,aj_l,aj - 1) = 0 if aj - 1 = O. We show this by proving that t/J-1[N(s)] is a subset of the set on the right-hand side of (106) and vice versa. Suppose XE t/J-l[ N(s)]. Let t/J(x) = (C leX), C2(X), . . .). Then (107)
t/J(x)EN(s) c M(s),
so (104) implies
x
g[t/J(x)J E g[M(s)].
=
(108)
Relation (107) also implies Cl(X) = ab'" ,Ck(X) = ak' By the construction of t/J, we have that al is the smallest integer which is the first component of an element of g - 1({ x} l, and for j = 2, ... .k, aj is the smallest integer which is the jth component of an element of g-l({X}) whose first (j - 1) components are al,' .. , aj_ L' In other words, g-l({X}) n M(al""
,aj-l,aj - 1) =
0,
j
=
1,... ,k.
It follows that k
x¢
U g[M(al""
j= 1
,aj-baj - 1)].
(109)
Relations (108) and (109) imply k
t/J-1[N(s)] c g[M(s)] -
U g[M(al""
j= 1
.oi.,», - 1)].
(110)
7.
184
BOREL SPACES AND THEIR PROBABILITY MEASURES
To prove the reverse set containment, suppose k
xEg[M(s)] -
U g[M(O"I""
,00j-l,O"j -1)].
(111)
j= 1
Since x E g[M(s)], there must exist y
= (111,112" ..) E «: ({ x}) such that (112)
Clearly, x E projx(A) = g(JV), so Ij;(x) is defined. Let Ij;(x) By (104), we have g[lj;(x)] = x, so (111) implies j
= (( 1(X),(2(X), . . .).
= 1,2, ... ,k.
Since Ij;(X)¢M(O"I - 1), we know that (I(X) ~ 0"1' But (I(X) is the smallest integer which is the first component of an element of g-I({X}), so (112) implies (I(X) ~ 111 ~ 0"1' Therefore (I(X) = 0"1' Similarly, since Ij;(X)¢M((I(X), 0"2 - 1), we have (2(X) ~ 0"2' Again from (112) we see that '2(X) ~ 112 ~ 0"2, so (2(X) = 0"2' Continuing in this manner, we show that Ij;(X)E N(s), i.e., xEIj;-I[N(s)] and k
1j;-I[N(s)]
=.J
g[M(s)] -
U g[M(O"I""
j=1
.vi-,.«, - 1)].
(113)
Relations (110) and (113) imply (106). We note now that M(t) is open in JV for every tE~, so g[ M(t)] is analytic by Proposition 7.40. Relation (105) now follows from (106), so Ij; is analytically measurable. By the definition of qJ and the Borel-measurability off and proj., we have
We have just proved Ij; - 1(!Jl x ) c d x» and the analytic measurability of qJ follows. Q.E.D. This brings us to the selection theorem for lower semianalytic functions.
Proposition 7.50 Let X and Y be Borel spaces, [) «: XY an analytic set, andf:D ---+ R* a lower semianalytic function. Define f* :proh(D) ---+ R* by f*(x) = inf f(x, y).
(114)
yeD x
(a) For every s > 0, there exists an analytically measurable function qJ: proh(D) ---+ Y such that Gr( qJ) c D and for all x E proh(D),
< {f*(X) f[ x, qJ(x )] - 1/e
+s
f*(x) > if f*(' 1 x) = -
if
00, 00.
7.7
185
UNIVERSALLY MEASURABLE SELECTION
(b) The set 1= {xeprojx(DWor some YxeDx,f(x,Yx) = f*(x)}
°
is universally measurable, and for every e > there exists a universally measurable function q>:proh(D) ~ Y such that Gr(q» c D and for all xeproh(D) f[ x, q>(x)] = f*(x)
if x e I,
f *(X). + e f[ x, q> (x )] S { _ 1/1;
(115)
if x¢=I, f*(x»-oo, if x ¢= I, f*(x) = - 00.
(116)
Proof (a) (Cf proof of Proposition 7.34 and Fig. 7.1.) The function f* is lower semianalytic by Proposition 7.47. For k = 0, ± 1, ± 2, ... , define A(k) = {(x,y)eDlf(x,y) < ke}, B(k) = {xeproh(D)I(k - l)e S f*(x) < ke}, B(-oo) = {xeproh(D)lf*(x) = -oo}. B( (0)
= {x e proh(D)lf*(x) = oo].
The sets A(k), k = 0, ± 1, ± 2, ... , and B( - (0) are analytic, while the sets B(k), k = 0, ± 1, ± 2, ... , and B( (0) are analytically measurable. By the Jankov-von Neumann theorem (Proposition 7.49) there exists, for each k = 0, ±1, ±2, ... , an analytically measurable q>k:projx[A(k)] ~ Y with (x, q>k(X)) e A(k) for all x e projy]' A(k)] and an analytically measurable ip:projx(D) ~ Y such that (x,ip(x))eD for all xeprojx(D). Let k* be an integer such that k* S -1/e 2 . Define q>:proh(D) ~ Y by if xeB(k), k=0,±I,±2,... , if xeB(oo), if xeB( - (0). Since B(k) c proh [A(k)] and B( - (0) c projx[ A(k)] for all k, this definition is possible. It is clear that q> is analytically measurable and Gr(q» c D. If x e B(k), then (x, q>k(X)) e A(k) and we have f[ x, q>(x)] = f[x, q>k(X)] < ke S f*(x) If xeB(oo), then f(x,y) = xeB(-oo), we have
00
f[ x, q>(x)]
=
+ c.
for all yeD x and f[x,q>(x)] f[x, q>k*(X)] < k*e S -l/e.
Hence q> has the required properties.
= 00
=f*(x). If
7.
186
BOREL SPACES AND THEIR PROBABILITY MEASURES
(b) Consider the set E E
=
c XYR*
defined by
((x,y, b)[(x, y)ED,/(x, y) ~ b}.
Since
n U ((x,y, b)!(x, Y)E D,f(x,y) ~ 00
E
=
r, r
k=lrEQ*
s b + (l/kj),
it follows from Corollary 7.35.2 and Proposition 7.38 that E is analytic in XYR*, and hence the set A
=
projXR*(E)
is analytic in XR*. The mapping T:projx(D) T(x)
--->
XR* defined by
= (x,/*(x»
is analytically measurable, and 1= {x[(x,/*(x))EA}
= T- 1(A).
Hence I is universally measurable by Corollary 7.44.2. Since E is analytic, there is, by the Jankov-von Neumann Theorem, an analytically measurable p:A ---> Y such that (x,p(x,b),b)EE for every (x, b) E A. Define !/J: 1---> Y by !/J(x)
= p(x,/*(x» = (p a T)(x)
VXEI.
Then !/J is universally measurable by Corollary 7.44.2, and by construction f[x, !/J(x)] s f*(x) for x E I. Hence f[ x, !/J(x)]
= f*(x)
(117)
VXEI.
By part (a) there exists an analytically measurable !/J,: proh(D) that f * (X) + e f[ x,!/J.(x)] ~ { -l/e
if f*(x) > if f*(x) = -
00, 00.
--->
Y such
(118)
Define cp: projx(D) ...... Y by cp(x)
=
{!/J(X) !/J,(x)
if XE I, .if x E proh(D) - I.
Then cp is universally measurable and, by (117) and (118), it has the required properties. Q.E.D. Since the composition of analytically measurable functions can fail to be analytically measurable (Appendix B), the selector obtained in the proof
7.7
UNIVERSALLY MEASURABLE SELECTION
187
of Proposition 7.50(b) can fail to be analytically measurable. The composition of universally measurable functions is universally measurable, and so we obtained a selector which is universally measurable. However, there is a er-algebra, which we call the limit a-alqebra, lying between ",Ix and O/i x such that the composition oflimit measurable functions is again limit-measurable. We discuss this o-algebra in Appendix B and state a strengthened version of Proposition 7.50 in Section 11.1.
Chapter 8
The Finite Horizon Borel Model
In Chapters 8-lOwe will treat a model very similar to that of Section 2.3.2. An applications-oriented treatment of that model can be found in "Dynamic Programming and Stochastic Control" by Bertsekas [B4], hereafter referred to as DPSC. The model of Section 2.3.2 and DPSC has a countable disturbance space and arbitrary state and control spaces, whereas the model treated here will have Borel state, control, and disturbance spaces. 8.1
The Model
Definition 8.1 A finite horizon stochastic optimal control model is a ninetuple (S, C, U, W, p, j, ()(, g, N) as described here. The letters x and U are used to denote elements of Sand C, respectively.
S State space. A nonempty Borel space. C Control space. A nonempty Borel space. U Control constraint. A function from S to the set of nonempty subsets of C. The set
r =
{(X,U)!XES, UE U(x))
is assumed to be analytic in S'C. 188
(1)
8.1
189
THE MODEL
W Disturbance space. A nonempty Borel space. p(dwlx, u) Disturbance kernel. A Borel-measurable stochastic kernel on W given S'C. f System function. A Borel-measurable function from sew to S. c< Discount factor. A positive real number. 9 One-stage cost function. A lower semianalytic function from I' to R*. N Horizon. A positive integer. We envision a system moving from state Xk to state xk+ 1 via the system equation
k = 0, I, ... ,N - 2, and incurring cost at each stage of g(Xb ud. The disturbances W k are random objects with probability distributions p(dwklxk, ud. The goal is to choose Uk dependent on the history (xo,u o, ... ,Xk-l,Uk-l,Xk) so as to minimize
(2) The meaning of this statement will be made precise shortly. We have the constraint that when x, is the kth state, the kth control Uk must be chosen to lie in U(x k ) . In the models in Section 2.3.2 and DPSC, the one-stage cost 9 is also a function of the disturbance, i.e., has the form g(x, u, w). If this is the case, then g(x, u, w) can be replaced by
g(x, u) =
Sg(x, u, w)p(dwlx,
u).
If g(x, u, w) is lower semianalytic, so is g(x, u) (Proposition 7.48). If p(dwlx, u) is continuous and g(x, u, w) is lower semicontinuous .and bounded below or
upper semicontinuous and bounded above, then g(x, u) is lower semicontinuous and bounded below or upper semicontinuous and bounded above, respectively (Proposition 7.31). Since these are the three cases we deal with, there is no loss of generality in considering a one-stage cost function which is independent of the disturbance. The model posed in Definition 8.1 is stationary, i.e., the data does not vary from stage to stage. A reduction of the nonstationary model to this form is discussed in Section 10.1. A notational device which simplifies the presentation is the state transition stochastic kernel on S given se defined by
t(Blx, u) = p({wlf(x, U, W)E B} lx, u) = p(j-l(B)(x, ullx, u).
(3)
190
8.
THE FINITE HORIZON BOREL MODEL
Thus t(Blx, u) is the probability that the (k + l)st state is in B given that the kth state is x and the kth control is u. Proposition 7.26 and Corollary 7.26.1 imply that t(dx'lx, u) is Borel-measurable. Definition 8.2 A policy for the model of Definition 8.1 is a sequence n = (110,l1b'" ,I1N-d such that, for each k, I1k(du klxo,uo, .. ' ,Uk-I,xd is a universally measurable stochastic kernel on C given SC' .. CS satisfying
I1k(U(Xk)!x o,uo,'" ,Uk-I,xd
=
1
for every (xo, U o , ' .. , Uk _ I ' x k). If, for each k, Ilk is parameterized only by (xo,xd, t: is a semi-Markov policy. If Ilk is parameterized only by Xk, tt is a Markov policy. If, for each kand(xo, Uo,' .. , uk-I,Xk),l1k(duk!xo, Uo,' .. , Uk-I' Xk) assigns mass one to some point in C, tt is nonrandomized. In this case, by a slight abuse of notation, n can be considered to be a sequence of universally measurable (Corollary 7.44.3) mappings Ilk: Sc- .. CS ~ C such that
I1k(Xo, U o , · .. ,u k-
b
x k) E U(xd
for every (x.,, U o , ... , Uk _ I ' xd. If :F is a type of a-algebra on Borel spaces and all the stochastic kernel components of a policy are :F -measurable, we say the policy is :F-measurable. (For example, .~ could represent the Borel a-algebras or the analytic e-algebras.)
We denote by IT the set of all policies for the model of Definition 8.1 and by II the set of all Markov policies. We will show that in many cases it is not necessary to go outside II to find the "best" available policy. In most cases, this "best" policy can be taken to be nonrandomized. Since r is analytic, the Jankov -von Neumann theorem (Proposition 7.49) guarantees that there exists at least one nonrandomized Markov policy, so II and II' are nonempty. If tt = (110, Ill,' .. , I1N - d is a nonrandomized Markov. policy, then tt is a finite horizon version of a policy in the sense of Section 2.1. The notion of policy as set forth in Definition 8.2 is wider than the concept of Section 2.1 in that randomized non-Markov policies are permitted. It is narrower in that universal measurability is required. We are now in a position to make precise expression (2). In this and subsequent discussions, we often index the state and control spaces for clarity. However, except in Chapter 10 when the nonstationary model is treated, we will always understand Sk to be a copy of Sand Ck to be a copy of C. Suppose p E P(S) and tt = (110, Ill" .. , I1N _ d is a policy for the model of Definition 8.1. By Proposition 7.45, there is a unique probability measure rN(n,p) on SoCO"'SN-ICN-I such that for any universally measurable function h: SoC o' .. SN-l C N- I ~ R* which satisfies either Sh+ drN(n,p) < 00
8.1 or
191
THE MODEL
v:
drN(n,p)
J
h dr N(n , p )
=
<
00,
we have
r r r ... JSN-IJC r r N- 1 h(xo,u o, ... ,XN-l,UN-d
JSOJCOJSI
x IlN-l(duN-dxo,uo, ... ,uN- 2,xN-ll X t(dXN -llxN - 2, UN - 2)' .. t(dx t1xo, Uo)flo(duolxo)p(dxo),
(4)
where t(dx'lx, u) is the Borel-measurable stochastic kernel defined by (3). Furthermore we have from (4) that ShdrN(n, Px) is a universally measurable function of x (Proposition 7.46), and if hand tt are Borel-measurable, then ShdrN(n, pJ is a Borel-measurable function of x (Proposition 7.29).
Definition 8.3 Suppose tt = (flo, fll" .. ,flN _ d is a policy for the model of Definition 8.1. For K .::;; N, the K-stage cost corresponding to tt at XES is JK.,,(x) =
Jet;
(5)
ctkg(Xb Uk)] drN(n,pJ,
where, for each n E TI' and p E peS), rN(n, p) is the unique probability measure satisfying (4). The K-stage optimal cost at x is J~(x)
= inf J K,,,(X).
(6)
n s Fl '
If e > 0, the policy tt is K-stage e-optimal at x provided if if
J~(x)
> -
00,
J~(x)
=
00.
-
If JK,,,(x) = J~(x), then tt is K-stage optimal at x, If n is K-stage s-optimal or K -stage optimal at every XES, it is said to be K -staqe e-optimal or K -staqe optimal, respectively. If {En} is a sequence of positive numbers with En t 0, a sequence of policies {nn} exhibits {8 n} -dom inated convergence to K-stage optimality provided
and for n
= 2,3, ... J K, ,,)x)
s
{Jk(X) + 8 n J s,«;» Jx) + e;
if if
J~(x) J~(x)
> -
= -
00, 00.
If K = N, we suppress the qualifier "K-stage" in the preceding terms. Note that J~ is independent of the horizon N as long as K .::;; N. Note also that J K,,,(X) is universally measurable in x. If tt is a Borel-measurable policy and 9 is Borel-measurable, then J K,,,(X) is Borel-measurable in x,
192
8.
THE FINITE HORIZON BOREL MODEL
For π = (μ_0, μ_1, …, μ_{N−1}) ∈ Π′ and p ∈ P(S), let q_k(π, p) be the marginal of r_N(π, p) on S_k C_k. If we take h = χ_{S_0 ⋯ C_{k−1} S̲_k C̲_k S_{k+1} ⋯ C_{N−1}} in (4), we obtain

q_k(π, p)(S̲_k C̲_k) = ∫_{S_0} ∫_{C_0} ∫_{S_1} ⋯ ∫_{C_{k−1}} ∫_{S̲_k} μ_k(C̲_k | x_0, u_0, …, u_{k−1}, x_k)
    × t(dx_k | x_{k−1}, u_{k−1}) μ_{k−1}(du_{k−1} | x_0, u_0, …, u_{k−2}, x_{k−1}) ⋯ t(dx_1 | x_0, u_0) μ_0(du_0 | x_0) p(dx_0)
  = ∫_{S_0 C_0 ⋯ S_{k−1} C_{k−1}} ∫_{S̲_k} μ_k(C̲_k | x_0, u_0, …, u_{k−1}, x_k) t(dx_k | x_{k−1}, u_{k−1}) dq_{k−1}(π, p).   (7)

From (1) and (7), we see that q_k(π, p)(Γ) = 1. If π is Markov, (7) becomes

q_k(π, p)(S̲_k C̲_k) = ∫_{S_{k−1} C_{k−1}} ∫_{S̲_k} μ_k(C̲_k | x_k) t(dx_k | x_{k−1}, u_{k−1}) dq_{k−1}(π, p)   ∀ S̲_k ∈ ℬ_S, C̲_k ∈ ℬ_C.   (8)
If either

∫_{S_k C_k} g⁻ dq_k(π, p_x) < ∞   ∀ π ∈ Π′, x ∈ S, k = 0, …, N−1,   (F⁺)

or

∫_{S_k C_k} g⁺ dq_k(π, p_x) < ∞   ∀ π ∈ Π′, x ∈ S, k = 0, …, N−1,   (F⁻)

then Lemma 7.11(b) implies that for every π ∈ Π′ and x ∈ S

J_{K,π}(x) = ∑_{k=0}^{K−1} α^k ∫_{S_k C_k} g dq_k(π, p_x),   K = 1, …, N.   (9)
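Equation (9) reduces the K-stage cost to a sum of discounted one-stage expectations under the marginals q_k(π, p_x). As a purely illustrative aside (not part of the original text), the following sketch checks this decomposition numerically on a small finite model with invented data: the trajectory-based cost (5) and the stage-by-stage sum (9) agree.

# Small numerical illustration of (5) and (9): for a finite-state, finite-control
# model under a Markov policy, the K-stage cost computed by simulating whole
# trajectories agrees with the stage-by-stage sum of discounted expectations of g
# under the marginals q_k(pi, p_x).  All data below are invented for illustration.
import random

S = [0, 1, 2]                      # states
C = [0, 1]                         # controls; U(x) = C for every x
alpha = 0.9
K = 4
g = {(x, u): (x - u) ** 2 for x in S for u in C}           # one-stage cost
t = {(x, u): [0.6 if y == (x + u) % 3 else 0.2 for y in S]  # t(y|x,u)
     for x in S for u in C}
mu = [{0: 1, 1: 0, 2: 1}] * K      # a nonrandomized Markov policy (mu_0,...,mu_{K-1})

def stagewise_cost(x0):
    """Right-hand side of (9): sum_k alpha^k * E[g(x_k,u_k)] under q_k(pi, p_x0)."""
    p = {x: 1.0 if x == x0 else 0.0 for x in S}             # marginal p_k on S_k
    total = 0.0
    for k in range(K):
        q = {(x, mu[k][x]): p[x] for x in S}                # marginal q_k on S_k C_k
        total += alpha ** k * sum(q[xu] * g[xu] for xu in q)
        p = {y: sum(q[(x, u)] * t[(x, u)][y] for (x, u) in q) for y in S}
    return total

def simulated_cost(x0, n_paths=100000):
    """Left-hand side of (5): Monte-Carlo estimate of the expected discounted cost."""
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for k in range(K):
            u = mu[k][x]
            total += alpha ** k * g[(x, u)]
            x = random.choices(S, weights=t[(x, u)])[0]
    return total / n_paths

print(stagewise_cost(1), simulated_cost(1))   # the two values should agree closely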
If (F⁺) [respectively (F⁻)] appears preceding the statement of a proposition, then (F⁺) [respectively (F⁻)] is understood to be a part of the hypotheses of the proposition. If both (F⁺) and (F⁻) appear, then the proposition is valid when either (F⁺) or (F⁻) is included among the hypotheses.

If π′ ∈ Π′ is a given policy, there may not exist a Markov policy which does at least as well as π′ for every x ∈ S, i.e., a policy π ∈ Π for which

J_{K,π}(x) ≤ J_{K,π′}(x),   K = 1, …, N,   (10)

for every x ∈ S. However, if x is held fixed, then a Markov policy π can be found for which (10) holds.

Proposition 8.1 (F⁺)(F⁻) If x ∈ S and π′ ∈ Π′, then there is a Markov policy π such that

J_{K,π}(x) = J_{K,π′}(x),   K = 1, …, N.   (11)
Proof Let π′ = (μ′_0, μ′_1, …, μ′_{N−1}) be a policy and let x ∈ S be given. For k = 0, 1, …, N−1, let μ_k(du_k|x_k) be the Borel-measurable stochastic kernel obtained by decomposing q_k(π′, p_x) (Corollary 7.27.2), i.e.,

q_k(π′, p_x)(S̲_k C̲_k) = ∫_{S̲_k} μ_k(C̲_k | x_k) p_k(π′, p_x)(dx_k)   ∀ S̲_k ∈ ℬ_S, C̲_k ∈ ℬ_C,   (12)

where p_k(π′, p_x) is the marginal of q_k(π′, p_x) on S_k. From (12) we see that

1 = q_k(π′, p_x)(Γ) = ∫_{S_k} μ_k(U(x_k) | x_k) p_k(π′, p_x)(dx_k),

so we must have μ_k(U(x_k)|x_k) = 1 for p_k(π′, p_x) almost every x_k. By altering μ_k(·|x_k) on a set of p_k(π′, p_x) measure zero if necessary, we may assume that (12) holds and π = (μ_0, μ_1, …, μ_{N−1}) is a policy as set forth in Definition 8.2. In light of (9), (11) will follow if we show that q_k(π′, p_x) = q_k(π, p_x) for k = 0, 1, …, N−1. For this, it suffices to show that, for k = 0, 1, …, N−1,

q_k(π′, p_x)(S̲_k C̲_k) = q_k(π, p_x)(S̲_k C̲_k)   ∀ S̲_k ∈ ℬ_S, C̲_k ∈ ℬ_C.   (13)

We prove (13) by induction. For k = 0, S̲_0 ∈ ℬ_S and C̲_0 ∈ ℬ_C, we have, from (12),

q_0(π′, p_x)(S̲_0 C̲_0) = ∫_{S̲_0} μ_0(C̲_0 | x_0) p_x(dx_0) = q_0(π, p_x)(S̲_0 C̲_0).

If q_k(π′, p_x) = q_k(π, p_x), then for S̲_{k+1} ∈ ℬ_S, C̲_{k+1} ∈ ℬ_C, we have, from (12),

q_{k+1}(π′, p_x)(S̲_{k+1} C̲_{k+1}) = ∫_{S̲_{k+1}} μ_{k+1}(C̲_{k+1} | x_{k+1}) p_{k+1}(π′, p_x)(dx_{k+1}).   (14)

From (7) we see that

p_{k+1}(π′, p_x)(S̲_{k+1}) = ∫_{S_k C_k} t(S̲_{k+1} | x_k, u_k) dq_k(π′, p_x),

so if h: S_{k+1} → [0, ∞] is a Borel-measurable indicator function, then

∫_{S_{k+1}} h(x_{k+1}) p_{k+1}(π′, p_x)(dx_{k+1}) = ∫_{S_k C_k} ∫_{S_{k+1}} h(x_{k+1}) t(dx_{k+1} | x_k, u_k) dq_k(π′, p_x).   (15)

Then (15) holds for Borel-measurable simple functions, and finally, for all Borel-measurable functions h: S_{k+1} → [0, ∞]. Letting h(x_{k+1}) in (15) be μ_{k+1}(C̲_{k+1} | x_{k+1}), we obtain from (14), the induction hypothesis, and (8)

q_{k+1}(π′, p_x)(S̲_{k+1} C̲_{k+1}) = ∫_{S_k C_k} ∫_{S̲_{k+1}} μ_{k+1}(C̲_{k+1} | x_{k+1}) t(dx_{k+1} | x_k, u_k) dq_k(π′, p_x)
  = ∫_{S_k C_k} ∫_{S̲_{k+1}} μ_{k+1}(C̲_{k+1} | x_{k+1}) t(dx_{k+1} | x_k, u_k) dq_k(π, p_x)
  = q_{k+1}(π, p_x)(S̲_{k+1} C̲_{k+1}),

which proves (13) for k + 1.   Q.E.D.
Corollary 8.1.1 (F⁺)(F⁻) For K = 1, 2, …, N, we have

J*_K(x) = inf_{π∈Π} J_{K,π}(x)   ∀ x ∈ S,

where Π is the set of all Markov policies.
Corollary 8.1.1 shows that the admission of non-Markov policies to our discussion has not resulted in a reduction of the optimal cost function. The advantage of allowing non-Markov policies is that an ε-optimal nonrandomized policy can then be guaranteed to exist (Proposition 8.3), whereas one may not exist within the class of Markov policies (Example 2).

8.2 The Dynamic Programming Algorithm - Existence of Optimal and ε-Optimal Policies
Let U(C|S) denote the set of universally measurable stochastic kernels μ on C given S which satisfy μ(U(x)|x) = 1 for every x ∈ S. Thus the set of Markov policies is Π = U(C|S) U(C|S) ⋯ U(C|S), where there are N factors.

Definition 8.4 Let J: S → R* be universally measurable and μ ∈ U(C|S). The operator T_μ mapping J into T_μ(J): S → R* is defined by

T_μ(J)(x) = ∫_C [ g(x, u) + α ∫_S J(x′) t(dx′|x, u) ] μ(du|x)

for every x ∈ S. The operator T_μ can also be written in terms of the system function f and the disturbance kernel p(dw|x, u) as [cf. (3)]

T_μ(J)(x) = ∫_C [ g(x, u) + α ∫_W J[f(x, u, w)] p(dw|x, u) ] μ(du|x).
By Proposition 7.46, T_μ(J) is universally measurable. We show that under (F⁺) or (F⁻), the cost corresponding to a policy π = (μ_0, …, μ_{N−1}) can be defined in terms of the composition of the operators T_{μ_0}, T_{μ_1}, …, T_{μ_{N−1}}.

Lemma 8.1 (F⁺)(F⁻) Let π = (μ_0, μ_1, …, μ_{N−1}) be a Markov policy and let J_0: S → R* be identically zero. Then for K = 1, 2, …, N we have

J_{K,π} = (T_{μ_0} ⋯ T_{μ_{K−1}})(J_0),   (16)

where T_{μ_0} ⋯ T_{μ_{K−1}} denotes the composition of T_{μ_0}, …, T_{μ_{K−1}}.

Proof We proceed by induction. For x ∈ S,

J_{1,π}(x) = ∫ g dq_0(π, p_x) = ∫_{C_0} g(x, u_0) μ_0(du_0|x) = T_{μ_0}(J_0)(x).
Suppose the lemma holds for K − 1. Let π̄ = (μ_1, μ_2, …, μ_{N−1}, μ), where μ is some element of U(C|S). Then for any x ∈ S, the (F⁺) or (F⁻) assumption along with Lemma 7.11(b) implies that (5) can be rewritten as

J_{K,π}(x) = ∫_{C_0} g(x, u_0) μ_0(du_0|x)
  + α ∫_{C_0} ∫_{S_1} ∫_{C_1} ⋯ ∫_{C_{K−1}} [ ∑_{k=1}^{K−1} α^{k−1} g(x_k, u_k) ]
      μ_{K−1}(du_{K−1}|x_{K−1}) t(dx_{K−1}|x_{K−2}, u_{K−2}) ⋯ μ_1(du_1|x_1) t(dx_1|x, u_0) μ_0(du_0|x)
  = ∫_{C_0} g(x, u_0) μ_0(du_0|x) + α ∫_{C_0} ∫_{S_1} J_{K−1,π̄}(x_1) t(dx_1|x, u_0) μ_0(du_0|x).   (17)

Under (F⁻),

∫_{C_0} g⁺(x, u_0) μ_0(du_0|x) < ∞   and   ∫_{C_0} [ ∫_{S_1} J_{K−1,π̄}(x_1) t(dx_1|x, u_0) ]⁺ μ_0(du_0|x) < ∞,

while under (F⁺) a similar condition holds, so Lemma 7.11(b) and the induction hypothesis can be applied to the right-hand side of (17) to conclude

J_{K,π}(x) = ∫_{C_0} [ g(x, u_0) + α ∫_{S_1} (T_{μ_1} ⋯ T_{μ_{K−1}})(J_0)(x_1) t(dx_1|x, u_0) ] μ_0(du_0|x)
  = (T_{μ_0} T_{μ_1} ⋯ T_{μ_{K−1}})(J_0)(x).   Q.E.D.
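Lemma 8.1 reduces the computation of J_{K,π} for a Markov policy to K applications of the operators T_{μ_k}. The following sketch (for a finite state and control space, with invented data; an illustration only, not an implementation of the measure-theoretic construction) computes the composition (T_{μ_0} T_{μ_1})(J_0) for a two-stage randomized Markov policy.

# Minimal finite-model sketch (invented data) of the operator T_mu of Definition 8.4
# and of Lemma 8.1: for pi = (mu_0, ..., mu_{K-1}), the K-stage cost equals the
# composition (T_{mu_0} ... T_{mu_{K-1}}) applied to J_0 = 0.
alpha = 0.9
S = [0, 1, 2]
C = [0, 1]
g = {(x, u): abs(x - 2 * u) for x in S for u in C}
t = {(x, u): [0.5 if y == x else 0.25 for y in S] for x in S for u in C}   # t(y|x,u)

def T_mu(mu, J):
    """(T_mu J)(x) = sum_u mu(u|x) [ g(x,u) + alpha * sum_y t(y|x,u) J(y) ]."""
    return {x: sum(mu[x][u] * (g[(x, u)] + alpha * sum(t[(x, u)][y] * J[y] for y in S))
                   for u in C)
            for x in S}

# a randomized Markov policy: mu_k[x][u] = probability of choosing u at state x
mu0 = {x: {0: 0.5, 1: 0.5} for x in S}
mu1 = {x: {0: 1.0, 1: 0.0} for x in S}
J0 = {x: 0.0 for x in S}

# Lemma 8.1 with K = 2: J_{2,pi} = T_{mu_0}(T_{mu_1}(J_0)); the last stage is applied first.
J_2_pi = T_mu(mu0, T_mu(mu1, J0))
print(J_2_pi)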
Definition 8.5 Let J: S → R* be universally measurable. The operator T mapping J into T(J): S → R* is defined by

T(J)(x) = inf_{u∈U(x)} { g(x, u) + α ∫_S J(x′) t(dx′|x, u) }

for every x ∈ S. Similarly as for T_μ, the operator T may be written in terms of f and p(dw|x, u) as

T(J)(x) = inf_{u∈U(x)} { g(x, u) + α ∫_W J[f(x, u, w)] p(dw|x, u) }.

If μ is nonrandomized, the operators T_μ and T of Definitions 8.4 and 8.5 are, except for measurability restrictions on J and μ, special cases of those defined in Section 2.1. In the present case, the mapping H of Section 2.1 is

H(x, u, J) = g(x, u) + α ∫_S J(x′) t(dx′|x, u) = g(x, u) + α ∫_W J[f(x, u, w)] p(dw|x, u).

We will state and prove versions of Assumptions F.1 and F.3 of Section 3.1 for this function H. Assumption F.2 is clearly true. Furthermore, if μ ∈ U(C|S),
J_1, J_2: S → R* are universally measurable, and J_1 ≤ J_2, then T_μ(J_1) ≤ T_μ(J_2) and T(J_1) ≤ T(J_2). If r ∈ (0, ∞), then T_μ(J_1 + r) = T_μ(J_1) + αr and T(J_1 + r) = T(J_1) + αr. We will make frequent use of these properties. The reader should not be led to believe, however, that the model of this chapter is a special case of the model of Chapters 2 and 3. The earlier model does not admit measurability restrictions on policies.

By Lemma 7.30(4) and Propositions 7.47 and 7.48, T(J) is lower semianalytic whenever J is. The composition of T with itself k times is denoted by T^k, i.e., T^k(J) = T[T^{k−1}(J)], where T^0(J) = J. We show in Proposition 8.2 that under (F⁺) or (F⁻) the optimal cost can be defined in terms of T^N. Three preparatory lemmas are required.

Lemma 8.2 Let J: S → R* be lower semianalytic. Then for each ε > 0, there exists μ ∈ U(C|S) such that

T_μ(J)(x) ≤ T(J)(x) + ε   ∀ x ∈ S,

where T(J)(x) + ε may be −∞.
Proof By Proposition 7.50, there are universally measurable selectors Jim: S --+ C such that for m = 1,2, ... and XES, we have tim(.x) E U(x) and T
IlJ J)(x)::;;
{T(J)(X) _2 m
+G
if if
T(J)(x) > - 00, T(J)(x) = -00.
Let Ji(dulx) assign mass one to til(X) if T(J)(x) > 112m to tim(.x), m = 1,2, ... , if T(J)(x) = - 00. For each C EfJ6 c ,
Xd Ji 1(X)] ti(Clx) -
=
CfJ
{m~l
(l/2m)X_~Jtim(x)]
and assign mass
00
if
T(J)(x)
> -
00,
if
T(J)(x)
= -
00,
is a universally measurable function of x, and therefore ti is a universally measurable stochastic kernel [Lemma 7.28(a),(b)]. This Ji has the desired properties. Q.E.D. Lemma 8,3 (F+) If Jo:S --+ R* is identically zero, then TK(JO)(x) > for every XES, K = 1, ... ,N, where T K denotes the composition of T with itself K times. -
00
Proof
Suppose for some K ::;; N and XES that Ti(J o)(x)
for every
XES,
and
> -
00,
j
= 0, ... ,K
- 1,
8.2
197
THE DYNAMIC PROGRAMMING ALGORITHM
By Proposition 7.50, there are universally measurable selectors u j: S = 1,... , K - 1, such that Ilj(X)E U(x) and
-->
C,
j
(TIlK_jTj-l)(JO)(X)::; Tj(Jo)(x)
+ 1,
j
= 1, ... ,K - 1,
for every XES. Then (T 1,, '
••
T IlK_ ,)(J o)::; (Till' .. T IlK_ 2)[T(J o) +
lJ
::; (Til,' .. TIlK_3)[T2(JO) + 1 +
IXJ
+ 1 + IX + ... + IXK- 2,
::; TK-I(J O)
where the last inequality is obtained by repeating the process used to obtain the first two inequalities. By Lemma 8.2,there is a stochastic kernel Ilo E U( CiS) such that Then (TIlOTIlI' .. T IlK_ ,)(Jo)(x)::; TIlO[TK-l(JO)
Choose any IlE U(qS) and let By Lemma 8.1, K-l
L
k=O
k IX S g dqk(n,
Px)
tt =
+ 1 + IX + ... + IXK- 2J(X) =
- 00.
(Ilo, ... ,ilK-I,ll,··· ,Il), so that nED.
= hjx) = (T110' .. T 11K _,)(Jo)(x) = -
so for some k s: K - 1, Sg- dqk(n,px) = assumption. Q.E.D.
00.
00,
This contradicts the (F+)
Lemma 8.4 Let {Jk} be a sequence of extended real-valued, universally measurable functions on S and let Il be an element of U(qS).
(a) If T Il(J I)(X) < 00 for every XES and J k t J, then T Il(Jd t TiJ). (b) If TiJl)(x) < 00 for every xES,g 2: O,andJ k l J, then TIl(Jk)i TIl(J)· (c) If {Jd is uniformly bounded, g is bounded, and J k --> J, then TIl(Jd
-->
Proof
TIl(J)·
Assume first that T iJ d < S[g(x, u) +
00
and Jk t 1. Fix x. Since
IX SJ l(x')t(dx'!x,
u)]ll(dulx) <
00,
we have g(x, u) +
IX S J I (x')t(dx'lx,
u) <
00
for Il(dulx) almost all u. By the monotone convergence theorem [Lemma 7. 11(OJ, g(x, u) +
IX S Jk(x')t(dx'lx,
u) t g(x, u) +
IX
SJ(x')t(dx'lx, u)
8.
198
THE FINITE HORIZON BOREL MODEL
for μ(du|x) almost all u. Apply the monotone convergence theorem again to conclude T_μ(J_k)(x) ↑ T_μ(J)(x). If T_μ(J_1) < ∞, g ≥ 0, and J_k ↓ J, the same type of argument applies. If {J_k} is uniformly bounded, g is bounded, and J_k → J, a similar argument using the bounded convergence theorem applies. Q.E.D.

The dynamic programming algorithm over a finite horizon is executed by beginning with the identically zero function on S and applying the operator T successively N times. The next theorem says that this procedure generates the optimal cost function. In Proposition 8.3, we show how ε-optimal policies can also be obtained from this algorithm.

Proposition 8.2 (F⁺)(F⁻) Let J_0 be the identically zero function on S. Then

J*_K = T^K(J_0),   K = 1, …, N.   (18)
Proof It suffices to prove (18) for K = N, since the horizon N can be chosen to be any positive integer. For any 1t = (J.lo, . . . , J.lN _ 1) E ITand K :-:::; N, we have
JK
, 7[
= (Tllo'" T IlK- )(Jo) ~ (Tllo'" TIlK_,T)(J O)
~ TK(J O)'
(19)
where the last inequality is obtained by repeating the process used to obtain the first inequality. Infimizing over tt E IT when K = N and using Corollary 8.1.1, we obtain J~
~ TN(J 0)'
(20)
If (F+) holds, then, by Lemma 8.3, Tk(J O) > -00, k = 1, > 0, there are universally measurable selectors ilk: S ~ C, k = 0, with ilk(X)E U(x) and 8
-00
< TIiN_JTk-l(JO)](x) :-:::; Tk(JO)(x)
for every
,N. For ,N - 1,
XES
+ 8/(1 + IJ( + 1J(2 + ... + IJ(N-l),
k = 1, ... ,N,
(Proposition 7.50). Then
(TIiJlil' .. T IiN_ )(Jo) :-:::; (TlioTii 1 • • • TIiN_,)[T(Jo) + 8/(1 + IJ( + 1J(2 + ... + IJ(N :-:::; (TIiJlil' .. TIiN_3)[T2(JO) + 8(1 + 1J()/(1 + IJ( + 1J(2 + ... :-:::; T N(J O) + 8,
1)]
+ IJ(N-l)] (21)
where the last inequality is obtained by repeating the process used to obtain the first two inequalities. It follows that J~
s
TN(J 0)'
(22)
Combining (20) and (22), we see that the proposition holds under the (F +) assumption. 1f(F-) holds, then J K,1t(x ) < 00 for every XES, nEil, K = 1, , N. Use Proposition 7.50 to choose nonrandomized policies n i = (.u~, , fl~ _ dE il such that (TIlLTN-k-l)(JO)
as i ->
00,
t TN-k(J O),
k
= 0, ... ,N - 1,
By (19) and Lemma 8.4(a), Jt::::;;
inf
(io, ... , iN
_
(Tio"'TiN-l)(J O)
1)
11- 0
J1 N
= inf··· inf(Tlli o ' io
iN-l
T iN-1)(J O) N 1
"
0
- 1
J1.
-
=inf··· inf(Tllio"'T iN-2)[inf TlliN-1(JO)] io
iN-2
= inf'-> inf(Tlli o ' io
=
iN
-
2
Il N- 2
0
"
0
iN- I
N- 1
T iN- 2 T ) ( J o) Il N - 2
TN(J O)'
(23)
where the last equality is obtained by repeating the process used to obtain the previous equality. Combining (20) and (23), we see that the proposition Q.E.D. holds under the (F-) assumption. When the state, control, and disturbance spaces are countable, the model of Definition 8.1 falls within the framework of Part I. Consider such a model, and, as in Part I, let M be the set of mappings fl:S -> C for which fl(X)E U(x) for every XES. In Section 3.2, it was often assumed that for every XES and fljEM,j = 0, ... ,K - 1, we have (TIlO' .. T IlK- )(Jo)(x)
<
K
00,
=
1,... ,N,
(24)
or else for every XES inf
IljEM.OSjSK-!
(Tilo' .. T IlK _)(Jo)(x) > -
K = 1, ... ,N.
00,
(25)
Under the (F+) assumption, Lemma 8.3 implies that -00
< TK(J o)::::;;
inf
IljEM,OSjsK-!
(T ilo" 'TIlK_)(JO),
so (25) is satisfied. Under (F-), we have from Lemma 8.1 that (Tilo' .. T IlK_ )(J 0)
=
J K, 1t <
00,
where n = (flo, ... ,flK - d, so (24) holds. The primary reason for introducing the stronger (F+) and (F-) assumptions is to enable us to prove Lemma 8.1. If one chooses instead to take (16) as the definition of J K,1t (as is done in
Section 3.1), then (24) or (25) suffices to prove Proposition 8.2 along the lines of the proof of Proposition 3.1 of Part I.

Proposition 8.2 implies the following property of the optimal cost function.

Corollary 8.2.1 (F⁺)(F⁻) For K = 1, 2, …, N, the function J*_K is lower semianalytic.

Proof As observed following Definition 8.5, T(J) is lower semianalytic whenever J is. Since J*_K = T^K(J_0) and J_0 ≡ 0 is lower semianalytic, the result follows. Q.E.D.

We give an example to show that even when Γ = SC and the one-stage cost g: SC → R* is Borel-measurable, J*_1 can fail to be Borel-measurable.

Example 1 Let A be an analytic subset of [0,1] which is not Borel-measurable (Appendix B). By Proposition 7.39, there is a closed set F ⊂ [0,1]𝒩 such that A = proj_{[0,1]}(F). Let S = [0,1], C = 𝒩, Γ = SC, and g = χ_{F^c}. Then

J*_1(x) = inf_{u∈C} g(x, u) = χ_{A^c}(x)   ∀ x ∈ S,

which is a lower semianalytic but not Borel-measurable function. We could also choose C = [0,1], Γ = SC, B a G_δ-subset of the unit square SC, and g = χ_{B^c}. This is because 𝒩 and 𝒩_0, the space of irrational numbers in [0,1], are homeomorphic (Proposition 7.5). But

𝒩_0 = ⋂_{r∈Q} ([0,1] − {r})

is a G_δ-subset of [0,1], so there is a homeomorphism φ: 𝒩 → [0,1] such that φ(𝒩) is a G_δ-subset of [0,1]. Let Φ: [0,1]𝒩 → [0,1][0,1] be the homeomorphism defined by

Φ(x, z) = (x, φ(z)).

Then Φ([0,1]𝒩) is a G_δ-subset of [0,1][0,1], B = Φ(F) is a G_δ-subset of SC which satisfies proj_S(B) = A, and if g = χ_{B^c}, then again J*_1 = χ_{A^c}.

We now use Proposition 8.2 to establish existence of ε-optimal policies.

Proposition 8.3 (F⁺) For each ε > 0, there exists a nonrandomized Markov ε-optimal policy.
(F⁻) For each ε > 0, there exists a nonrandomized semi-Markov ε-optimal policy and a (randomized) Markov ε-optimal policy.

Proof If (F⁺) holds, then the policy (μ̄_0, …, μ̄_{N−1}) constructed in the proof of Proposition 8.2 is ε-optimal, nonrandomized, and Markov.
Assume (F-) holds. We show first the existence of an a-optimal, nonbe as in the proof randomized, semi-Markov policy. Let n i = Cub, ... ,,u~-d of Proposition 8.2. Then = TN(J O) =
J~
inf
(Tl'i O ' 0
inf
J N, "('D"
(iD.·· .• iN - I)
where n UD' .... iN -
=
1l
... ,,uW.:
(,u~,
8(x)
••
(io ..... iN-I)
= {J~(X)
-l/a
TI"N- d(Jo) N-l
.
iN-
I)'
n Choose a > 0 and define
+a
if if
J~(x)
> -
00,
J~(x)
= -
00.
Order linearly the countable set {n UD , iN - 1)1 i o , . . . , iN_ 1 are positive iN - ,) such that integers J and define n(x) to be the first nun IN,''('D'
.iN_ II(X) ~ 8(x).
Let the components of n(x) be .. ,,uN-l(XN- d)·
(,uo(xo),,u~(xd,·
The set (x!n(x) = nUn, .... iN- I)} is universally measurable for each (io, ... , i N- 1 l, so (,uO(XO),,ul (x o, x d,· .. ,,uN- dxo, x N- 1)), where ,uo(xo) = ,uoD(xol and ,uk(xo,xd = ,ukD(xd, k = 1, ... ,N - 1, is an aoptimal nonrandomized semi-Markov policy. We now show the existence of an a-optimal (randomized) Markov policy. By Lemma 8.2, there exist ,uN-kE U(CJS) such that for k = 1, ... ,N (TI'N _Jk-1)(J O) ~ Tk(J O)
Proceed as in (21).
+ a/O + r:t. + rt 2 + ... + rt N- 1).
Q.E.D.
If the (F⁻) assumption holds and ε > 0, it may not be possible to find a nonrandomized Markov ε-optimal policy, as the following example demonstrates.

Example 2 Let S = {0, 1, 2, …}, C = {1, 2, …}, W = {w_1, w_2}, Γ = SC, N = 2, and define

g(x, u) = { −u   if x = 1,
            0    if x ≠ 1,

f(x, u, w) = { 0   if x = 0 or x = 1 or w = w_1,
               1   if x ≠ 0, x ≠ 1, and w = w_2,

p({w_1}|x, u) = 1 − 1/x,   p({w_2}|x, u) = 1/x   if x ≠ 0, x ≠ 1.
The (F⁻) assumption is satisfied. Let π = (μ_0, μ_1) be a nonrandomized Markov policy. If the initial state x_0 is neither zero nor one, then regardless of the policy employed, x_1 = 0 with probability 1 − (1/x_0), and x_1 = 1 with probability 1/x_0. Once the system reaches zero, it remains there at no further cost. If the system reaches one, it moves to x_2 = 0 at a cost of −μ_1(1). Thus J_{N,π}(x_0) = −μ_1(1)/x_0 if x_0 ≠ 0, x_0 ≠ 1, and J*_N(x_0) = −∞ if x_0 ≠ 0, x_0 ≠ 1. For any ε > 0, π cannot be ε-optimal.

In Example 2, it is possible to find a sequence of nonrandomized Markov policies {π_n} such that J_{N,π_n} ↓ J*_N. This example motivates the idea of policies exhibiting {ε_n}-dominated convergence to optimality (Definition 8.3) and Proposition 8.4, which we prove with the aid of the next lemma.

Lemma 8.5 Let {J_k} be a sequence of universally measurable functions from S to R* and μ a universally measurable function from S to C whose graph lies in Γ. Suppose for some sequence {ε_k} of positive numbers with ∑_{k=1}^∞ ε_k < ∞, we have, for every x ∈ S,
f Ji(x')t(dx'!X, f.1(x)) <
and for k
=
lim Jk(x) = J(x)
00,
k-r o:
2,3, ... J(x) ::s;; Jk(x) ::s;; J(x) Jk(x) ::s;; Jk-I(X)
+ Gk
if J(x) > - 00,
+ Gk
if J(x) = - 00.
Then lim TJ1(Jd 00
k~
Proof
=
TJ1(J).
(26)
Since J ::s;; J k for every k, it is clear that TJ1(J)::S;; liminfTI'(Jk)'
(27)
k-r co
For
XES,
lim sup TI'(Jk)(x)::s;; g[x,f.1(x)] k~oo
+ cdimsup r" . J k(x')t(dx'jx,f.1(x)) k-r o» J{xIJ(x»-oo)
+ 0: limk-rsup r sa Jlx'IJ(x')= -
00)
J k(x')t(dx'!x, f.1(x)).
Now lim supS{x'IJ(x'» k~oo
- 00)
::s;; limsup[f k~oo
=
r
J{x'IJ(x'»
J k(x')t(dx'!x, f.1(x))
{x'IJ(x'»
-""j
-oo}
J(x')t(dx'!x,f.1(x))
J(x')t(dx'ix, I/(x)) ,...
+ GklJ
If J(x') = -
00,
then 00
Jk(X')
+ I
n=k+l
+ I:=ze n ] t(dx'IX, J1{x )) <
and since S[Ji(x')
r
lim sup k-r eo
Jlx'IJ(x')= - co}
~
lim
r, ,_
r
J{x'IJ(x')= - co]
00,
we have
J k(x')t(dx'IX,J1(x))
k--+ooJ{xIJ(x)--OO}
=
en t J(x'),
f
[Jk(X')+
n=k+l
en]t(dx'lx'J1(x))
J(x')t(dx'lx, J1(x)).
If follows that
lim sup TIt(J k ) k--+oo
(28)
~ TiJ).
Combine (27) and (28) to conclude (26).
Q.E.D.
Proposition 8.4 (F-) Let {en} be a sequence of positive numbers with en t O. There exists a sequence of nonrandomized Markov policies {nn} exhibiting {en}-dominated convergence to optimality. In particular, if J~(x) > - 00 for all XES, then for every e > 0 there exists an s-optimal nonrandomized Markov policy. Proof For N = I, by Proposition 7.50 there exists a sequence of nonrandomized Markov policies nn = (J1'O) such that for all n T n(J )(x) < {T(J o)(x) Ito
0
-
-lien
if T(J o)(x) > if T(Jo)(x) = -
+ en
00, 00.
We may assume without loss of generality that TIt~(Jo)
~ TIt~-I(JO)'
Therefore {nn} exhibits {en}-dominated convergence to one-stage optimality. Suppose the result holds for N - I. Let ti; = (J11, ... "t'}, _ 1) be a sequence of (N - Il-stage nonrandomized Markov policies exhibiting {e n I2a }dominated convergence to (N - I)-stage optimality, i.e.,
() X I N- 1 ,T
~
J ~
-1 (x)
+ (enI2a)
{ J N-l,T
I:=
if J~ _ 1(x) > - 00, if I N * - 1(X) -- -00.
(29) (30)
We assume without loss of generality that 1 en < 00. By Proposition 7.50, there exists a sequence {J1n} of universally measurable functions from S to
C whose graphs lie in I" such that T
(J *
11"
N-[
)() < {J~(X) X
-2len
~
if if
+ (enI 2)
J~(x)
> -
00,
J~(x)
= -
00.
(31)
We may assume without loss of generality that
s
TIl"(J~-l)
n = 2,3, ....
TIl"-t{J~-d,
(32)
By Proposition 7.48, the set A(J~-
d=
= - co}[x,u) > O}
{(x, U)E rjt( {xtIJ~_l(x')
= {(X,U)Er!S -X{x'IJ;'_I(x')=-OO}(x')t(dx'!x,u)
<
o}
(33)
is analytic in SC, and the Jankov-von Neumann theorem (Proposition 7.49) implies the existence of a universally measurable fl:projs[A(J~-1)] --+ C whose graph lies in A(J~ -1)' Define if xEprojs[A(n-dJ, otherwise. Then fin = ([in, 1I:n) is an N-stage nonrandomized Markov policy which will be shown to exhibit {en}-dominated convergence to optimality. For xEprojs[A(J~-1)J, we have, from Lemma 8.5 and the choice of u; lim sup I N • it.(x) = lim sup T Il(J N - 1 ."J (x ) "-00
n-s co
= T Il(J~-1)(X)
For x¢projs[A(J~-dJ, U(x), so by (29)
= - 00.
-oo}IX,U) = 0 for every
we have t({X'IJ~_1(Xt)=
UE
IN.it,,(x)
=
+ en!2,
T Il"(J N- 1.",.)(x) S TIl"(J~-1)(X)
(34)
and lim sup I N. it.,(x) S lim sup TIl,,(J~-d(x)
S J~(x)
n-v cc
n-s co
by (31). It follows that lim J«.«;
n-co
Suppose for fixed XES we have n(x) > and we have, from (31) and (34), IN.it.(x) S TIl"(J~-1)(X)
(35)
= J~.
-00.
+ en l2 s
Then x¢projs[A(J~-1)J,
J~(x)
+ en-
(36)
Suppose now that imply, for n 2 2,
= -
J~(x)
00.
If x ¢ projs[ A(J~
_ dJ, then (32) and (34)
+ c./2 + c./2 T /in - ,(J N- 1, "n_ J( X) + c./2 IN,ftn_ ,(X) + c./2,
IN,ftn(x) ~ T/in(J~-d(x)
~ T/in-,(J~-l)(X) ~ ~
_ dJ, we have, from (29) and (30),
while if x E projs[ A(J~
IN,ft.(x) = Ti JN-1,,,J(X) ~ T/i(JN- 1'''n_,)(x)
= J N, ft ,(x) + c./2,
+ c./2
n -
In either case, (37)
From (35)-(37) we see that {ft.} exhibits {c.}-dominated convergence to optimality. Q.E.D, We conclude our discussion of the ramifications of Proposition 8.2 with a technical result needed for the development in Chapter 10. Lemma 8.6
(F+)(F-)
For every pEP(S),
f n (x )p(dx ) = inf fJN.,,(X)p(dx), "En
Proof For p E P(S) and f
t: E
Il,
J~(x)p(dx)
~ f J N.,,(x)p(dx),
which implies f
J~(x)p(dx)
~ inf f JN, ,,(x)p(dx).
Choose c > 0 and let ft E n be s-optimal. If p( {xl J"t(x) = f J N, ,,(x)p(dx)
(38)
"En
~ f J~(x)p(dx)
00 })
= 0, then
+ s,
and it follows that inf f IN,,,(x)p(dx) ~ f J~(x)p(dx).
"En
(39)
8.
206 If p({xIJ~(x)
THE FINITE HORIZON BOREL MODEL
= - co}) > 0, then
f
IN,ir(x)p(dx) s; -P({X\JN(X)
+ J{xIJN(x» r *
-oo})/E
=
-:q
+ E.
J~(x)p(dx)
(40)
If S{XIJ';v(Y) > -oo}JN(x)p(dx) = 00, then S n(x)p(dx) = 00 and (39) follows. Otherwise, the right-hand side of (40) can be made arbitrarily small by letting E approach zero, so inf1tE u IN.n(x)p(dx) = - 00 and (39) is again valid. The lemma follows from (38) and (39). Q.E.D.
S
We now consider the question of constructing an optimal policy, if this is at all possible. When the dynamic programming algorithm can be used to construct an optimal policy, this policy usually satisfies a condition stronger than mere optimality. This condition is given in the next definition. Definition 8.6 (Ilk' ... , IlN- 1)' k
Let tt = (Ilo, ... ,IlN-l) be a Markov policy and n N- k = = 0, ... , N - 1. The policy tt is uniformly N -staqe optimal if
k = 0, ' .. , N - 1. Lemma 8.7 (F + )(F -) The policy tt N -stage optimal if and only if (TIlJN-k-l)(JO) Proof
If n
=
=
(Ilo, ... ,IlN -
TN-k(J O)'
k
= (Ilo, ... ,IlN - d is uniformly
dE II is uniformly
= 0, ... , N -
1.
N -stage optimal, then
TN-k(J O) = IN-k = IN-k,nN-k = Tllk(JN-k-l,nN-k-.)
= Tllk(Jt-k-d = where JO,no = J~ then for all k
== 0.
Jt-k
=
(TllkTN-k-l)(JO),
k
= 0, ... ,N - 1,
If (TllkTN-k-l)(JO) = TN-k(J O), k = 0, ... ,N - 1, TN-k(J O)
=
(TIlJN-k-l)(JO)
= (TllkTllk+,TN-k-2)(JO) =
(T llk'"
TIlN_J(JO)
where the next to last equality is obtained by continuing the process used to obtain the previous equalities. Q.E.D. Lemma 8.7 is the analog for the Borel model of Proposition 3.3 for the model of Part I. Because (F+) or (F-) is a required assumption in Lemma 8.1, one of them is also required in Lemma 8.7, as the following example shows. If we take (16) as the definition of Jk,n, then Lemma 8.7, Proposition 8.5, and
Corollaries 8.5.1 and 8.5.2 hold without the (F+) and (F-) assumptions. The proofs are similar to those of Section 3.2. EXAMPLE 3 Let S = {s,t} u {(k,j)lk = 1,2, ... ; j = 1,2}, C = {a,b}, U(s) = {a,b}, U(t) = U(k,j) = {b}, k = 1,2, ... , j = 1,2, W = S, and rt. = 1. Let the disturbance kernel be given by p(sls, a) = 1,
p[(k, l)ls,b] = p[(k,2)1(l, 1),b] = k-
z (
OCJ
n~l
1 nZ
)-1
k,l
,
= 1,2, ... ,
p[tl(k, 2), b] = 1, k = 1,2, ... , and p(tlt, b) = 1. Let the system function be [ix, u, w) = w. Thus if the system begins at state s, we can hold it at s or allow it to move to some state (l, 1), from which it subsequently moves to some (k, 2) and then to t. Having reached t, the system remains there. The relevant costs are g(s,a) = g(s,b) = g(t, b) = 0, g[(k, 1),b] = k, g[(k, 2), b] = - k, k = 1,2, .... Let tt = (jlo,f.1l>f.1z) be a policy with f.1o(s) = b, f.11(S) = f.1z(s) = a. Then
if if if if
~H
T(J o)(xz) = T 1J.2(Jo)(xz)
TZ{Jo)(xd = (TIJ. JTIJ.2)(Jo)(xd
~ {-~
-k
r 0
T'( J o)(xo) ~ (T" T" T,,)( J 0)(xo) ~
roo =
Xz = s, Xz = (k, 1), Xz = (k,2), Xz = t,
if if if if
Xl = s,
if if ·if if
Xo = s,
Xl =
(k, 1),
Xl =
(k,2),
Xl =
t,
(k, 1), = (k,2), = t.
Xo = Xo Xo
However, J",3(S) = 00 >J",3(S)=0, where n=CJio,Jil,JiZ) and Jio(s)= = Jiz(s) = a, so tt is not optimal and T 3(JO) # J~. It is easily verified that ii is a uniformly three-stage optimal policy, so Corollary 3.3.1(b) also fails to hold for the Borel model of this chapter. Here both assumptions (F+) and (F-) are violated.
JiI(S)
Proposition 8.5 (F⁺)(F⁻) If the infimum in

inf_{u∈U(x)} { g(x, u) + α ∫ J*_k(x′) t(dx′|x, u) },   k = 0, …, N−1,   (41)

is achieved for each x ∈ S, where J*_0 is identically zero, then a uniformly N-stage optimal (and hence optimal) nonrandomized Markov policy exists. This policy is generated by the dynamic programming algorithm, i.e., by measurably selecting for each x a control u which achieves the infimum.

Proof Let π = (μ_0, …, μ_{N−1}), where μ_{N−k−1}: S → C achieves the infimum in (41) and satisfies μ_{N−k−1}(x) ∈ U(x) for every x ∈ S, k = 0, …, N−1 (Proposition 7.50). Apply Lemma 8.7. Q.E.D.

Corollary 8.5.1 (F⁺)(F⁻) If U(x) is a finite set for each x ∈ S, then a uniformly N-stage optimal nonrandomized Markov policy exists.

Corollary 8.5.2 (F⁺)(F⁻) If for each x ∈ S, λ ∈ R, and k = 0, …, N−1, the set

U_k(x, λ) = { u ∈ U(x) | g(x, u) + α ∫ J*_k(x′) t(dx′|x, u) ≤ λ }

is compact, then there exists a uniformly N-stage optimal nonrandomized Markov policy.

Proof Apply Lemma 3.1 to Proposition 8.5. Q.E.D.
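For a finite model the infimum in (41) is trivially attained, so Propositions 8.2 and 8.5 can be seen at work numerically. The sketch below (invented data, illustration only) computes J*_k = T^k(J_0) by the dynamic programming algorithm and reads off a uniformly N-stage optimal nonrandomized Markov policy by selecting a minimizing control at each stage.

# Finite-model sketch (invented data) of the dynamic programming algorithm of
# Proposition 8.2 and the policy construction of Proposition 8.5: J*_k = T^k(J_0),
# and a minimizing selector at each stage yields a uniformly N-stage optimal
# nonrandomized Markov policy (mu_0, ..., mu_{N-1}).
alpha = 0.95
N = 3
S = [0, 1, 2]
C = [0, 1]
U = {x: C for x in S}                                    # control constraint U(x)
g = {(x, u): (x - 1) ** 2 + 0.5 * u for x in S for u in C}
t = {(x, u): [1.0 / 3.0] * 3 for x in S for u in C}      # t(y|x,u), uniform here

def T(J):
    """(T J)(x) = min_{u in U(x)} [ g(x,u) + alpha * sum_y t(y|x,u) J(y) ],
    returned together with a minimizing control for each x."""
    Jnew, sel = {}, {}
    for x in S:
        vals = {u: g[(x, u)] + alpha * sum(t[(x, u)][y] * J[y] for y in S) for u in U[x]}
        sel[x] = min(vals, key=vals.get)
        Jnew[x] = vals[sel[x]]
    return Jnew, sel

J = {x: 0.0 for x in S}                                  # J_0 = J*_0 = 0
policy = []
for k in range(N):                                       # after k passes, J = J*_k = T^k(J_0)
    J, mu = T(J)
    policy.append(mu)                                    # mu achieves the infimum in (41) with J*_k
policy.reverse()                                         # (mu_0, ..., mu_{N-1}); mu_0 uses J*_{N-1}
print(J, policy)                                         # J = J*_N; policy is uniformly N-stage optimal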
8.3 The Semicontinuous Models
Along the lines of our development of lower and upper semicontinuous functions in Section 7.5, we can consider lower and upper semicontinuous decision models. Our models will be designed to take advantage of the possibility for Borel-measurable selection (Propositions 7.33 and 7.34), and in the case of lower semicontinuity, the attainment of the infimum in (55) of Chapter 7. We discuss the lower semicontinuous model first. Definition 8.7 The lower semicontinuous, finite horizon, stochastic, optimal control model is a nine-tuple (S, C, U, W, p, f, Lt., g, N) as given in Definition 8.1 which has the following additional properties:
r
2
(a) The control space C is compact. (b) The set r defined by (1) has the form c ... , each r j is a closed subset of SC, and lim j~
00
(x, u)
inf E
rv -
rj
g(x, u)
r =
U~
1
where
r1c
= co."
- 1
(c) The disturbance kernel p(dwlx, u) is continuous on t
r-,
By convention, the infimum over the empty set is
I" j are all identical for.i larger than some index k.
00,
r.
so this condition is satisfied if the
(d) The system function f is continuous on rw. (e) The one-stage cost function g is lower semicontinuous and bounded below on r. Conditions (c) and (d) of Definition 8.7 and Proposition 7.30 imply that t(dx'lx, u) defined by (3) is continuous on r, since for any h e C(S) we have
f
h(x')t(dx'lx, u) =
f
h[f(x, u, w)Jp(dwlx, u).
Condition (e) implies that the (F+) assumption holds. Proposition 8.6 Consider the lower semicontinuous finite horizon model of Definition 8.7. For k = 1,2, ... ,N, the k-stage optimal cost function Jt is lower semicontinuous and bounded below, and it = Tk(J 0)' Furthermore, a Borel-measurable, uniformly N-stage optimal, nonrandomized Markov policy exists.
Proof Suppose J: S --+ R* is lower semicontinuous and bounded below, and define K: r --+ R* by K(x, u) = g(x, u)
+ r>:
f
J(x')t(dx'lx, u).
(42)
Extend K to all of SC by defining R(x, u) =
{K~
u)
if (X,U)Er, if (x,u)¢r.
By Proposition 7.31(a) and the remarks following Lemma 7.13, the function K is lower semicontinuous on r. For cER, thiset {(x,u)EsCjR(x,u)
must be contained in some
r- by Definition 7.8(b), so the set
s
c}
{(x, U)ESCIR(x, u) s c} = {(x, U)ErkIK(x, u) s c}
is closed in r- and thus closed in SC as well. It follows that R(x, u) is lower semicontinous and bounded below on SC and, by Proposition 7.32, the function T(J)(x)
= inf R(x, u)
(43)
UEC
is as well. In fact, Proposition 7.33 states that the infimum in (43) is achieved for every XES, and there exists a Borel-measurable ip : S --+ C such that T(J)(x)
= R [x, cp(x)J
'VXES.
For) = 1,2, ... , let CPj: projs(r j ) --+ C be a Borel-measurable function with graph in r j. (Set D = r i in Proposition 7.33 to establish the existence of such a function.) Define u: S --+ C so that Jl(x) = cp(x) if T(J)(x) < co, Jl(x) = cp 1(x) if T(J)(x) = co and x E projs(r 1) ; and for) = 2,3, ... , define Jl(x) = cpix) if
T(J)(x) = 00 and x s projsff") - projs(r i - 1 ) . Then /l is Borel-measurable, /lex) E U(x) for every XES, and T Il(J) = T(J). Since J 0 =: is lower semicontinuous and bounded below, the above argument shows that J: = Tk(J 0) has these properties also, and furthermore, for each k = 0, ... ,N - 1, there exists a Borel-measurable /lk: S -+ C such that flk(X)E U(x) for every XES and (T IlJ N- k- 1)(J O) = TN-k(J O)' The proposition follows from Lemma 8.7. Q.E.D.
°
We note that although condition (a) of Definition 8.7 requires the compactness of C, the conclusion of Proposition 8.6 still holds if C is not compact but can be homeomorphically embedded in a compact space C in such a way that the image of r-, j = 1,2, ... , is closed in SC. That is to say, the conclusion holds if there is a compact space C and a homeomorphism tp : C -+ C such that for j = 1,2, ... , (ri) is closed in SC, where (x, u) = (x, qJ(u».
The continuity of f and p(dwlx, u) and the lower semicontinuity of g are unaffected by this embedding. In particular, if r i is compact for each i. we can take C = :Yf and use U rysohn's theorem (Proposition 7.2) and the fact that the continuous image of a compact set is compact to accomplish this transformation. We state this last result as a corollary. Corollary 8.6.1 The conclusions of Proposition 8.6 hold if instead of assuming that C is compact and each r- is closed in Definition 8.7, we assume that each r i is compact. Definition 8.8 The upper semicontinuous, finite horizon, stochastic, optimal control model is a nine-tuple (S, C, U, W, p, f, rx, g, N) as given in Definition 8.1 which has the following additional properties:
(a) The set r defined by (1) is open in S'C. (b) The disturbance kernel p(dwlx, u) is continuous on T, (c) The system function f is continuous on rw . (d) The one-stage cost g is upper semicontinuous and bounded above onr. As in the lower semicontinuous model, the stochastic kernel t(dx'lx, u) is continuous in the upper semi continuous model. In the upper semicontinuous model, the (F-) assumption holds. If J:S -+ R* is upper semicontinuous and bounded above, then K:r -+ R* defined by (42) is upper semicontinuous and bounded above. By Proposition 7.34, the function T(J)(x) =
inf K(x, u) ueU(x)
is upper semicontinuous, and for every 8 >
°there exists a Borel-measurable
ui S
~
C such that f.l(X)E U(x) for every XES, and T /l(J)(x)
+s
~ {lI_(Jl/)~X) G
if T(J)(x) > if T(J)(x) = -
00 00.
Since J 0 == 0 is upper semicontinuous and bounded above, so is Jt = Tk(J 0)' k = 1,2, ... , N. The following proposition is obtained by using these facts to parallel the proof of the (F-) part of Proposition 8.3. Proposition 8.7 Consider the upper semicontinuous finite horizon model of Definition 8.8. For k = 1,2, ... , N, the k-stage optimal cost function Jt is upper semicontinuous and bounded above, and Jt = Tk(J 0)' For each s > 0, there exists a Borel-measurable, nonrandomized, semi-Markov, e-optimal policy and a Borel-measurable, (randomized) Markov, s-optimal policy.
Actually, it is not necessary that Sand C be Borel spaces for Proposition 8.7 to hold. Assuming only that Sand C are separable metrizable spaces, one
can use the results on upper semicontinuity of Section 7.5 and the other assumptions of the upper semicontinuous model to prove the conclusion of Proposition 8.7. It is not possible to parallel the proof of Proposition 8.4 to show for the upper semicontinuous model that given a sequence of positive numbers {en} with en,j, 0, a sequence of Borel-measurable, nonrandomized, Markov policies exhibiting {sn}-dominated convergence to optimality exists. The set A(J% - 1) defined by (33) may not be open, so the proof breaks down when one is restricted to Borel-measurable policies. We conclude this section by pointing out one important case when the disturbance kernel p(dwlx, u) is continuous. If W is n-dimensional Euclidean space and the distribution of w is given by a density d(wlx, u) which is jointly continuous in (x, u) for fixed w, then p(dwlx, u) is continuous. To see this, let G be an open set in Wand let (x n, un) ~ (x, u) in S'C. Then lim inf p(Glx k, Uk) = lim inf k~CfJ
k-oo
2':
r d(WIXk, Uk) dw
J(G
fG d(wlx, u) dw = p(Glx, u)
by Fatou's lemma. The continuity ofp(dwlx, u) follows from Proposition 7.21.t t
Note that by the same argument, lim infp(GcIX., uk! ~ p(Gclx, u), k--oc.,
so p(Glx k , Uk) ---+ p(Glx, u). Under this condition, the assumption that the system function is continuous in the state (Definitions 8.7(d) and 8.8(c)) can be weakened. See [H3] and [S5].
212
8.
THE FINITE HORIZON BOREL MODEL
In fact, it is not necessary that d be continuous in (x, u) for each w, but only that (x,; un) ~ (x, u) imply d(wlx n, un) ~ d(wlx, u) for Lebesgue almost all w. For example, if W = R, the exponential density d(wlx,u)
= {ex p[ -(w - m(x,u))]
o
~f
w
~ m(x,u),
If w < m(x,u),
where m:SC ~ R is continuous, has this property, but need not be continuous in (x, u) for any WE R.
Chapter 9
The Infinite Horizon Borel Models
A first approach to the analysis of the infinite horizon decision model is to treat it as the limit of the finite horizon model as the horizon tends to infinity. In the case (N) of a nonpositive cost per stage and the case (D) of bounded cost per stage and discount factor less than one, this procedure has merit. However, in the case (P) of nonnegative cost per stage, the finite horizon optimal cost functions can fail to converge to the infinite horizon optimal cost function (Example 1 in this chapter), and this failure to converge can occur in such a way that each finite horizon optimal cost function is Borel-measurable, while the infinite horizon optimal cost function is not (Example 2). We thus must develop an independent line of analysis for the infinite horizon model. Our strategy is to define two models, a stochastic one and its deterministic equivalent. There are no measurability restrictions on policies in the deterministic model, and the theory of Part lor of Bertsekas [B4], hereafter abbreviated DPSC, can be applied to it directly. We then transfer this theory to the stochastic model. Sections 9.1-9.3 set up the two models and establish their relationship. Sections 9.4-9.6 analyze the stochastic model via its deterministic counterpart.
9.1
The Stochastic Model
Definition 9.1 An infinite horizon stochastic optimal control model, denoted by (SM), is an eight-tuple (S, C, U, W, p,f, rx, g) as described in 213
Definition 8.1. We consider three cases, where I' is defined by (1) of Chapter 8: (P) O:s:: g(x, u) for every (x, u) E r. (N) g(x, u) :s:: 0 for every (x, u) E r. (D) 0 < o: < 1, and for some bER, -b :s:: g(x, u):s:: b for every (x, U)E r. Thus we are really treating three models: (P), (N), and (D). If a result is applicable to one of these models, the corresponding symbol will appear. The assumptions (P), (N), and (D) replace the (F+) and (F-) conditions of Chapter 8. Definition 9.2 A policy for (SM) is a sequence tt = (J..lo, J..ll" ..) such that for each k, J..lk(duklxo, Uo, . . . ,Uk- b x k) is a universally measurable stochastic kernel on C given SC' .. CS satisfying
J..lk(U(xdlxo,uo, ... ,Uk-1'X k) = 1 for every (xo,uo, ... ,Uk-1,Xk)' The concepts of semi-Markov, Markov, nonrandomized, and :Y'" -measurable policies are the same as in Definition 8.2. We denote by TI' the set ofall policies for (SM) and by TI the set ofall Markov policies. If n is a Markov policy of the form n = (J..l, u, ...), it is said to be stationary. As in Chapter 8, we often index Sand C for clarity, understanding Sk to be a copy of Sand C, to be a copy of C. Suppose p e P(S) and tt = (J..lo, J..lb' ..) is a policy for (SM). By Proposition 7.45, there is a sequence of unique probability measures rN(n,p) on SoC o" 'SN-1C N- b N = 1,2, ... , such that for any N and any universally measurable function h:SoC o' .. SN-l C N- 1 -> R* which satisfies either Jh+ drN(n,p) < 00 or Jh- drN(n,p) < 00, (4) of Chapter 8 is satisfied. Furthermore, there exists a unique probability measure r(n,p) on SOCOS1C 1'" such that for each N the marginal of r(n,p) on SoC o" ,SN-1CN- 1 is rN(n,p). With rN(n,p) and r(n,p) determined in this manner, we are ready to define the cost corresponding to a policy. Definition 9.3 Suppose t: is a policy for (SM). The (infinite horizon) cost corresponding to tt at XES is
J,,(x)
= fL~o =
rJ.kg(Xk,Uk)]dr(n,PJ
f rJ.k f g(Xb Uk) drk(n, Px)'
t
(1)
k=O
t The interchange of integration and summation is justified by appeal to the monotone convergence theorem under (P) and (N), and the bounded convergence theorem under (D).
= (p,J1, ...) is stationary, we sometimes write J in place of J". The " (infinite horizon) optimal cost at x is
If t:
J*(x)
= inf
a e H'
J ,,(x).
(2)
If s > 0, the policy n is e-optimal at x provided J ( ) < {J*(X)
"x -
-lie
if J*(x) > if J*(x) = -
+s
00, 00.
If J ,,(x) = J*(x), then tt is optimal at x. If n is s-optimal or optimal at every XES, it is said to be e-optimal or optimal, respectively. It is easy to see, using Propositions 7.45 and 7.46, that, for any policy ti, J ,,(x) is universally measurable in x. In fact, if n = (Po, J11,' ..) and nk =
(J1o, . . . ,J1k- d, then Jk."k(X) defined by (5) of Chapter 8 is universally measur-
able in x and lim Jk,,,k(X)
k-r cc
= J,,(x)
(3)
VXES.
If n is Markov, then (3) can be rewritten in terms of the operators T Ilk of
Definition 8.4 as lim (T lIo '
k-r co
••
T ll k _ )(Jo)(x)
=
J,,(x)
VXES,
(4)
which is the infinite horizon analog of Lemma 8.1. If π is a Borel-measurable policy and g is Borel-measurable, then J_π(x) is Borel-measurable in x (Proposition 7.29). It may occur under (P), however, that lim_{k→∞} J*_k(x) ≠ J*(x), where J*_k(x) is the optimal k-stage cost defined by (6) of Chapter 8. We offer an example of this.

Example 1 Let S = {0, 1, 2, …}, C = {1, 2, …}, U(x) = C for every x ∈ S, α = 1,

f(x, u) = { u       if x = 0,
            x − 1   if x ≠ 0,

g(x, u) = { 1   if x = 1,
            0   if x ≠ 1.

The problem is deterministic, so the choice of W and p(dw|x, u) is irrelevant. Beginning at x_0 = 0, the system moves to some positive integer u_0 at no cost. It then successively moves to u_0 − 1, u_0 − 2, …, until it returns to zero and the process begins again. The only transition which incurs a nonzero cost is the transition from one to zero. If the horizon k is finite and u_0 is chosen larger than k, then no cost is incurred before termination, so J*_k(0) = 0. Over the infinite horizon, the transition from one to zero will be made infinitely often, regardless of the policy employed, so J*(0) = ∞.
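The phenomenon in Example 1 is easy to reproduce numerically. The following sketch (illustration only, using the data of the example) shows that the k-stage cost at x = 0 can be made zero by choosing u_0 larger than the horizon, while any fixed choice of u_0 accumulates unbounded cost as the horizon grows, in line with J*_k(0) = 0 for all k but J*(0) = ∞.

# Numerical illustration of Example 1 (deterministic; data as above): for any finite
# horizon k the optimal k-stage cost at x = 0 is 0 (choose u_0 > k), while over the
# infinite horizon every policy pays 1 each time the transition 1 -> 0 occurs.
def f(x, u):
    return u if x == 0 else x - 1

def g(x, u):
    return 1 if x == 1 else 0

def run(choose_u, horizon):
    """Total (undiscounted, alpha = 1) cost of the rule `choose_u` over `horizon` stages."""
    x, cost = 0, 0
    for _ in range(horizon):
        u = choose_u(x)
        cost += g(x, u)
        x = f(x, u)
    return cost

k = 10
print(run(lambda x: k + 1, k))          # 0: hence J*_k(0) = 0 for this finite horizon
for m in (10, 100, 1000, 10000):        # truncations of the infinite horizon cost
    print(m, run(lambda x: 5, m))       # grows roughly like m/6, hence J_pi(0) = +infinity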
Four = (flo, /11,' .. )E IT' and p EpeS), let qk(n, p) be the marginal ofr(n, p) on SkCk' k = 0,1, .... Then (7) of Chapter 8 holds, and if n is Markov, (8) holds as well. Furthermore, from (1) we have J,,(X) =
f
k=O
ak r
JSkCk
gdqk(n,px)
VXES,
(5)
which is the infinite horizon analog of (9) of Chapter 8. Using these facts to parallel the proof of Proposition 8.1, we obtain the following infinite horizon version.

Proposition 9.1 (P)(N)(D) If x ∈ S and π′ ∈ Π′, then there is a Markov policy π such that

J_π(x) = J_{π′}(x).

Corollary 9.1.1 (P)(N)(D) We have

J*(x) = inf_{π∈Π} J_π(x)   ∀ x ∈ S,

where Π is the set of all Markov policies for (SM).

9.2
The Deterministic Model
Definition 9.4 Let (S, C, U, W, p, f, α, g) be an infinite horizon stochastic optimal control model as given by Definition 9.1. The corresponding infinite horizon deterministic optimal control model, denoted by (DM), consists of the following:

P(S)    State space.
P(SC)   Control space.
Ū       Control constraint. A function from P(S) to the set of nonempty subsets of P(SC) defined for each p ∈ P(S) by
            Ū(p) = { q ∈ P(SC) | q(Γ) = 1 and the marginal of q on S is p },   (6)
        where Γ is given by (1) of Chapter 8.
f̄       System function. The function from P(SC) to P(S) defined by
            f̄(q)(S̲) = ∫_{SC} t(S̲|x, u) q(d(x, u))   ∀ S̲ ∈ ℬ_S,   (7)
        where t(dx′|x, u) is given by (3) of Chapter 8.
α       Discount factor.
ḡ       One-stage cost function. The function from P(SC) to R* given by
            ḡ(q) = ∫_{SC} g(x, u) q(d(x, u)).   (8)
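When S and C are finite, the objects of Definition 9.4 are elementary: a state of (DM) is a probability vector over S, a control is a probability vector over SC supported on Γ with prescribed S-marginal, and f̄ and ḡ are the induced push-forward and expectation. The following sketch (invented data, illustration only; the names U_bar, f_bar, g_bar simply transcribe the barred symbols) evaluates membership in Ū(p), f̄(q), and ḡ(q) for one admissible q.

# Finite-model sketch (invented data) of the deterministic model (DM) of Definition 9.4.
# A state of (DM) is a distribution p over S; a control q in U_bar(p) is a distribution
# over S x C carried by Gamma whose S-marginal is p; f_bar pushes q forward through
# t(.|x,u), and g_bar integrates the one-stage cost g against q.
S = [0, 1]
C = [0, 1]
Gamma = {(0, 0), (0, 1), (1, 1)}                  # admissible state-control pairs
alpha = 0.9
g = {(0, 0): 1.0, (0, 1): 2.0, (1, 1): 0.5}
t = {(0, 0): [0.7, 0.3], (0, 1): [0.2, 0.8], (1, 1): [0.5, 0.5]}   # t(y|x,u)

def in_U_bar(q, p, tol=1e-9):
    """q is admissible at p: q(Gamma) = 1 and the marginal of q on S equals p."""
    marg = {x: sum(w for (xx, u), w in q.items() if xx == x) for x in S}
    return set(q) <= Gamma and all(abs(marg[x] - p[x]) < tol for x in S)

def f_bar(q):
    """f_bar(q)(y) = sum_{(x,u)} t(y|x,u) q(x,u)   -- equation (7)."""
    return [sum(w * t[xu][y] for xu, w in q.items()) for y in S]

def g_bar(q):
    """g_bar(q) = sum_{(x,u)} g(x,u) q(x,u)        -- equation (8)."""
    return sum(w * g[xu] for xu, w in q.items())

p = {0: 0.6, 1: 0.4}
q = {(0, 0): 0.3, (0, 1): 0.3, (1, 1): 0.4}       # one element of U_bar(p)
print(in_U_bar(q, p), f_bar(q), g_bar(q))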
The model (DM) inherits considerable regularity from (8M). Its state and control spaces peS) and P(SC) are Borel spaces (Corollary 7.25.1). The system function J is Borel-measurable (Proposition 7.26 and Corollary 7.29.1), and the one-stage cost function g is lower semianalytic (Corollary 7.48.1). Furthermore, under assumption (P) in (8M), we have g ?': 0, while under (N), u s: 0, and under (D), 0< oc < 1 and
n. We place no measurability requirements on these mappings. A policy If of the form If = (11,11, ...) is said to be stationary. p E peS). The set of all policies in (DM) will be denoted by
Definition 9.6 Given Po E peS) and a policy If = (110,111, ... ) for (DM), the cost corresponding to If at Po is 00
J,,(Po) =
I
k=O
(9)
OCkg(qk),
where the control sequence {qd is generated recursively by means of the equation k
= 0,1, ... ,
(10)
and the system equation PH I = J(qd.
k = 0,1, ....
If If = (11,11, ... ) is stationary, we write J Ii in place of at Po is
(11)
J". The
optimal cost
The concepts of a-optimal and optimal policies for (DM) are the same as those given in Definition 9.3 for (8M). Definition9.7 A sequence (PO,qO,ql," .)EP(S)P(SC)P(SC)··· is admissible in (DM) if qo E O(po) and qk+ IE O[J(qd], k = 0,1, .... The set of all
admissible sequences will be denoted by
~.
The admissible sequences are just the sequences of controls qO'ql' ... together with the initial state Po which can be generated by some policy for (DM) via (10) and (11). Except for Po' the measures Pk are not included in the sequence, but can be recovered as the marginals of the measures qk on S [cf. (6)]. Definition 9.8 Let J:P(S) -+ R* be given and let J1:P(S) -+ P(SC) be such that l1(p)E O(p) for every p e peS). The operator 1'ji mapping J into
9.
218
'f1l(J):P(S)
->
THE INFINITE HORIZON BOREL MODELS
R* is defined by
'f1l(J)(p) = g[il(p)J
+ al[](il(p))]
The operator T mapping 1 into 'f(1):P(S)
rov»
->
R* is defined by
inf {g(q) + al[J(q)J}
=
VpE P(S).
VpEP(S).
qEU(p)
Because (DM) is deterministic, it can be studied using results from Part I, Chapters 4 and 5 or from DPSC. This is because there is no need to place measurability restrictions on policies in a deterministic model. The operators 'fit and 'f of Definition 9.8 are special cases of those defined in Section 2.1. In the present case, we take H(p, q, 1) to be H(p, q, 1) = g(q) + al[J(q)].
The monotonicity assumption of Section 2.1 is satisfied by this choice of H. The cost corresponding to a policy n = (ilo, ill>' ..) as given by (9) is easily seen to be of the form (cf. Section 2.2)
lit = lim ('fila' N-->oo
..
'fIlN_ .)(10),
where 10 (p) = 0 for every p e P(S). It is a straightforward matter to verify that under (D) the contraction assumption of Section 4.1 is satisfied when B is taken to be the set of bounded real-valued functions on P(S), m is taken to be one, and p = a. Under (P), Assumptions I, 1.1, and 1.2 of Section 5.1 are satisfied, while under (N), Assumptions D, D.1, and D.2 of the same section are in force. 9.3
Relations between the Models Definition 9.9
ii
= (ilo, ill' ...) E
f~llk(Clx)Pk(dx)
Let
na
= (Ilo, Ill' ... ) E I1 be a Markov policy for (SM) and policy for (DM). Let Po E P(S) be given. If for all k
tt
=
ilk(pd(SC)
V~E~s,
(EPJ c ,
(12)
where Pk is generated from Po by n via (10) and (11), then nand ii are said to correspond at Po. If nand ii correspond at every p E P(S), then nand n are said to correspond. If nand n correspond at Po, then the sequence of measures [qo(n, Po), q1 (zr, Po), ... J generated from Po by tt via (8) of Chapter 8 is the same as the sequence (qo, q1,' ..) generated from Po by n via (10) and (11). If nand n correspond, then they generate the same sequence (qo, q1" ..) for any initial Po.
Proposition 9.2 (P)(N)(D) Given a Markov policy tt E Il, there is a corresponding nEn. If n En and Po E P(S) are given, then there is a Markov policy n E Il corresponding to n at Po.
Proof If t: = (llo,llb' .. )E n is given, then for each k and any PkE P(S), there is a unique probability measure on SC, which we denote by Pk(Pk)' satisfying (12) (Proposition 7.45). Furthermore, (13)
so n = (Po, PI,...) is in n and corresponds to tt. If n = (Po, PI" .. )E nand Po E P( S) are given, let (Po, Ph pz , ...) be generated from Po by n via (10) and (11). For each k, choose a Borel-measurable stochastic kernelllk(dulx) which satisfies (12) for this particular Pk (Corollary 7.27.2). Then (13) holds, so
=1
Ilk(U(x)lx)
(14)
for Pk almost every x. Altering Ilk(dulx) on a set of Pk-measure zero if necessary, we may assume that (14) holds for every XES and (12) is satisfied. Then n = (110' Ill" ..) En corresponds to n at Po. Q.E.D. Let P E P(S), n E Il, and n En be given. If
Proposition 9.3 (P)(N)(D) nand n correspond at P, then
I
J,,(p) =
J,,(x)p(dx).
Proof We have from (7) of Chapter 8, (5), (8), (9), and the monotone or bounded convergence theorems
I
J ,,(x )p(dx ) =
r Js
[f
=
k~O
a
=
k~O
a
=
L
k=O k k
a
Is
k
r
JSkCk
ISkCk
ISkCk
gdqk(n,PJ]p(dX)
g dqk(n,Pxlp(dx)
g dqk(n,p)
00
k=O
akg[qk(n,p)]
Q.E.D. Corollary 9.3.1
(P)(N)(D)
n correspond at Px' then
Let XES, tt EIl, and n En be given. If nand
9.
220 Corollary 9.3.2
(P)(N)(D)
THE INFINITE HORIZON BOREL MODELS
For every J*(px)
=
XES,
J*(x).
Proof Corollaries 9.1.1, 9.3.1, and Proposition 9.2 imply that, for every XES,
J*(px)
= inf
nEll
Ji!(Px)
= inf
=
J,,(x)
J*(x).
Q.E.D.
1t'En
Corollary 9.3.2 shows that J* and J* are related, but in a rather weak way that involves J* only on S= {PxEP(S)lxES}. In Proposition 9.5 we strengthen this relationship, but in order to state that proposition we must show a measurability property of J*. This is the subject of Proposition 9.4, which we prove with the aid of the following lemma. Lemma 9.1 The set ~ of admissible sequences in (DM) is an analytic subset of P(S)P(SC)P(SC)· ...
[nk'=o B k ] ,
Proof The set ~ is equal to A o n A o = {(PO,qo,qb"
where
.)!qoEO(po)},
B k = {(Po, qo, q1,' . ·)Iqk+ 1 E
o[J(qk)] }.
By Corollary 7.35.2, it suffices to show that A o and B k , k = 0,1, ... , are analytic. Using the result of Proposition 7.38, this will follow if we show that A = {(PbqdEP(S)P(Sq!q1
B
=
E
O(pd},
{(qo, q1)EP(SqP(SC)!ql E O[l(qo)]}
are analytic. Let P(r) = {qEP(sqlq(r) = I}, where r is given by (1) of Chapter 8. Then P(r) is analytic (Proposition 7.43). Equation (6) implies that A is the intersection of the analytic set P(S)P(r) (Proposition 7.38) with the graph of the function a:P(SC) ~ P(S) which maps q into its marginal on S. It is easily verified that a is continuous (Proposition 7.21(a) and (b)), so Gr(a) is Borel (Corollary 7.14.1). Therefore, A is analytic. The set B is the inverse image of A under the Borel-measurable mapping (qo, qd ~ [l(qo),ql], so is also analytic (Proposition 7.40). Q.E.D. Proposition 9.4 analytic. Proof Define
(P)(N)(D) G:~
~
The function J*: P(S)
~ R*
is lower semi-
R* by
G(pO,qO,ql"")
=
00
I
k=O
akg(qk),
(15)
where L\ is the set of admissible sequences (Definition 9.7). Then G is lower semianalytic by Lemma 7.30(2), (4) and Lemma 9.1. By the definition of J* and L\, we have J*(po)
=
VpOEP(S),
G(Po,qo,q!, . . .)
inf
(16)
(qO.q" ... )Ed p o
so J* is lower semi analytic by Proposition 7.47. Corollary 9.4.1 analytic. Proof
(P)(N)(D)
Q.E.D.
The function J*: S
R* is lower semi-
~
By Corollary 9.3.2, J*(x)
= J*[<5(x)]
VXES,
where <5(x) = Px is the homeomorphism defined in Corollary 7.21.1. Apply Lemma 7.30(3) and Proposition 9.4 to conclude that J* is lower semianalytic. Q.E.D. Lemma 9.2 such that
Given p E P(S) and I> > 0, there exists a policy
(P)(D)
J,,(p) ~
(N)
J,,(p) ~
f J*(x)p(dx) + 1>,
_ {f
J*(x)p(dx)
+ I>
-1/1>
f if f J*(x)p(dx)
if J*(x)p(dx) > =
-
n for (DM)
00, 00.
Proof As a consequence of Corollary 9.4.1, S J*(x)p(dx) is well defined. Let p E P(S) and I> > 0 be given. Let G: L\ ~ R* be defined by (15). Proposition 7.50 guarantees that under (P) and (D) there exists a universally measurable selector cp:P(S) ~ P(SC)P(SC)'" such that (p,cp(p))EL\ for every pEP(S) and G[p,cp(p)] ~ J*(p) + I> VpEP(S).
Let (J: S ~ P(SC)P(SC)' .. be defined by (J(x) = cp(pJ. Then (J is universally measurable (Proposition 7.44) and G[px, (J(x)] ~ J*(x)
+ I>
Under (N), there exists a universally measurable (J:S that for every XES, (Px,(J(X))EL\ and G[px, (J(x)]
~
J * (X) + I> { -(1 + 1>2)/I>Poo(J*)
(17)
VXES. --->
P(SC)P(SC)· .. such
if J*(x) > if J*(x) = _
00, 00,
(18)
wherepoo(J*) = p({xIJ*(x) = -oo})ifp({xIJ*(x) = -oo}) > Oandpoo(J*) = 1 otherwise.
Denote o (x) = [qo(d(xo,uo)lx),q1(d(X1,u 1)lx), .. .]. Each qk(d(xk,udlx) is a universally measurable stochastic kernel on SkCk given S. Furthermore, qo(d(xo, Uo)IX)E D(px)
and, for k
=
VXES,
0, 1, ... ,
qk+ 1 (d(Xk+ 1, Uk+
dl x)
E
DU[qk(d(xk, Uk) Ix)])
VXES.
For k = 0,1, ... , define 7.1kE P(SC) by
Then 7.1k(r) = 1, k = 0,1, .... We show that (p, 7.10,7.11" ..) E A. Since the marginal of qo(d(x o, uo)lx) on So is Px, we have 7.1o(SoC o) =
.fs qoCSoColx)p(dx) = .fs X~o(x)p(dx)
so 7.10 E D(p). For k
= p(So)
0,1, ... , we have
=
7.1k+ l(Sk+ 1 c., d
=
Isqk+ 1(Sk+1 Ck+1!x)p(dx)
=
r JSkCk f «s.. l!Xk, Uk)qk(d(Xb Uk)jx)p(dx) Js
= JSkCk r t(Sk+1!Xk,Uk)7.1k(d(Xk,Uk))
VSk+1EfJDS'
Therefore 7.1k+ 1 E D[J(7.1k)] and (p, 7.10,7.11" ..) E A. Let n be any policy for (DM) which generates the admissible sequence (P,7.10,Q1," .). Then under (P) and (D), we have from (17) and the monotone or bounded convergence theorem J1f(p) = G(P,7.10,7.11"")
= k~O =
=
rJ.k fskCkg(XbUk)7.1k(d(Xk,Uk))
f rJ.k Jsr JSkCk f g(x k, uk)qM(x k, udlx)p(dx) Js[fk=O rJ.k JSkCk r g(x b Uk)qk(d(x k, Uk)!X)]P(dX} k= 0
= Is G[px, o-(x)]p(dx)
: :; .fs J*(x)p(dx) +
B.
Under (N), we have from the monotone convergence theorem Jft(p) = fsG[px,a(x)]p(dx).
If p({xIJ*(x)
=
(19)
-co}) = 0, (18) and (19) imply Jft(p) ::; f J*(x)p(dx)
+ s,
where both sides may be - 00. If p({xIJ*(x) = - co]) > 0, then SJ*(x} p(dx) = - 00 and we have, from (18) and (19), Jft(p) ::; ~XIJ*(X» ::; e - (1 Proposition 9.5
-'I
[J*(x) + e]p(dx) - (1 + e
+ e2)/e =
(P)(N)(D)
2)/e
Q.E.D.
-l/e.
For every p E P(S),
J*(p) = f J*(x)p(dx). Proof
Lemma 9.2 shows that J*(p) ::; f J*(x)p(dx)
v» E P(S).
For the reverse inequality, let p be in P(S) and let n be a policy for (DM). There exists a policy tt E IT corresponding to nat p (Proposition 9.2), and, by Proposition 9.3, Jft(p) = f J,,(x)p(dx) ~ f J*(x)p(dx). By taking the infimum of the left-hand side over ii Eft, we obtain the desired result. Q.E.D. Propositions 9.3 and 9.5 are the key relationships between (SM) and (DM). As an example of their implications, consider the following corollary. Corollary 9.5.1 (P)(N)(D) Suppose tt E IT and n Eft are corresponding policies for (SM) and (DM). Then tt is optimal if and only if n is optimal. Proof If ti is optimal, then
Jft(p) =
f J,,(x)p(dx) = f J*(x)p(dx) = J*(p)
YpEP(S).
If n is optimal, then YXES.
Q.E.D.
The next corollary is a technical result needed for Chapter 10.
9.
224 Corollary 9.5.2 (P)(N)(D) 5
Proof
THE INFINITE HORIZON BOREL MODELS
For every p E P(S),
J * (x )p (dx ) = inf
"En By Propositions 9.2 and 9.3,
5
J ,,(x)p(dx).
l*(p) = inf 5J,,(x)p(dx)
'rIPEP(S).
"En
Apply Proposition 9.5.
Q.E.D.
We now explore the connections between the operators T ts: and T Jl and the operators T and T, The first proposition is a direct consequence of the definitions. We leave the verification to the reader. Proposition 9.6 (P)(N)(D) Let J: S ---+ R* be universally measurable and satisfy J ~ 0, J .:::;; 0, or - c .:::;; J .:::;; c, c < 00, according as (P), (N), or (D) is in force. Let 1: P(S) ---+ R* be defined by l(p)
and suppose P:P(S)
---+
U(qst
5
J(x)p(dx)
Vp E P(S),
P(SC) is of the form
5~J1(C:lx)p(dx)
P(p)(SC) =
for some J1E
=
c-».
V~E~s,
Then P(p)E a(p) for every pEP(S), and
fJl(l)(p)
=
5
Vp E P(S).
TIi(J)(x)p(dx)
Proposition 9.7 (P)(N)(D) Let J:S ---+ R* be lower semianalytic and satisfy J ~ 0, J .:::;; 0, or - c .:::;; J .:::;; c, c < 00, according as (P), (N), or (D) is in force. Let J: P(S) ---+ R* be defined by J(p)
=
5
f(l)(p)
=
5
Then
Proof g(q)
J(x)p(dx)
'rIpEP(S).
T(J)(x)p(dx)
(20)
'rip E P(S).
For pEP(S) and qE O(p) we have
+ IXJ[J(q)] = 5sJg(x, u) + IX 5s J(x')t(dx'lx, u)]q(d(x, u)) ~
5s T(J)(x)p(dx),
t The set U(qS), defined in Section 8.2, is the collection of universally measurable stochastic kernels J.l on C given S which satisfy J.l(U(x)!x) = 1 for every XES.
which implies 1'(J)(p)
~
I T(J)(x)p(dx).
Given e > 0, Lemma 8.2 implies that there exists /1 E U( CiS) such that IcCg(x, u)
+ a Is J(x')t(dx'lx, U)]/1(dulx) :::;;
T(J)(x)
+ e.
Let (1 E O(p) be such that q(SC)
=
l/1(Clx)p(dx)
Then 1'(J)(p):::;; 1c[g(x, u)
+a
1
J(X')t(dx'lx, U)}(d(X, u))
= Is Ic[g(x, u) + a Is J(x')t(dx'lx, U)] /1(dulx)p(dx)
s
I T(J)(x)p(dx)
where ST(J)(x)p(dx)
+ e may be
-
+ s, 00.
Therefore,
1'(J)(p) :::;; I T(J)(x)p(dx).
9.4
Q.E.D.
The Optimality Equation-Characterization of Optimal Policies
As noted following Definition 9.8, the model (DM) is a special case of that considered in Part I and DPSCt. This allows us to easily obtain many results for both (SM) and (DM). A prime example of this is the next proposition.
Proposition 9.8 (P)(N)(D)
We have
J* = 1'(1*), J*
=
T(J*).
(21) (22)
Proof The optimality equation (21) for (DM) follows from Propositions 4.2(a), 5.2, and 5.3 or from DPSC, Chapter 6, Proposition 2 and Chapter 7, t Whereas we allow 9 to be extended real-valued, in Chapter 7 of DPSC the one-stage cost function is assumed to be real-valued. This more restrictive assumption is not essential to any of the results we quote from DPSC.
Proposition 1. We have then, for any XES, J*(X) = l*(px) = r(J*)(pJ = T(J*)(x)
by Propositions 9.5 and 9.7, so (22) holds as well.
Q.E.D.
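Under (D), the operator T is a contraction with modulus α on bounded functions, so the optimality equation (22) of Proposition 9.8 can be verified numerically in a finite model by iterating T to its fixed point. A minimal sketch with invented data follows; it is an illustration only.

# Finite-model sketch (invented data) for case (D): iterate the operator T of
# Definition 8.5 to its fixed point and check the optimality equation J* = T(J*)
# of Proposition 9.8.  Under (D), T is a contraction with modulus alpha.
alpha = 0.9
S = [0, 1, 2]
C = [0, 1]
g = {(x, u): 1.0 if x == 2 else 0.1 * u for x in S for u in C}        # bounded cost
t = {(x, u): [0.8 if y == min(x + u, 2) else 0.1 for y in S] for x in S for u in C}

def T(J):
    return [min(g[(x, u)] + alpha * sum(t[(x, u)][y] * J[y] for y in S) for u in C)
            for x in S]

J = [0.0, 0.0, 0.0]
for _ in range(500):                     # value iteration; the error shrinks like alpha^k
    J = T(J)
residual = max(abs(a - b) for a, b in zip(J, T(J)))
print(J, residual)                       # residual is (numerically) zero: J = T(J) = J*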
Proposition 9.9 (P)(N)(D) If n = (71,71, ...) is a stationary policy for (DM), then 1Ii = 1i(1Ii)' If n = (/1,/1, ...) is a stationary policy for (SM), then J Jl = TJl(JJl)'
r
Proof For (DM) this result follows from Proposition 4.2(b), Corollary 5.2.1, and Corollary 5.3.2 or from DPSC, Chapter 6, Corollary 2.1 and Chapter 7, Corollary 1.1. Let n = (/1,/1, ...) be a stationary policy for (SM) and let ii = (71,71, ...) be a policy for (DM) corresponding to n. Then for each XES,
by Propositions 9.3and 9.6.
Q.E.D.
Note that Proposition 9.9 for (SM) cannot be deduced from Proposition 9.8 by considering a modified (SM) with control constraint of the form VXES,
(23)
as was done in the proof of Corollary 5.2.1. Even if /1 is nonrandomized so that (23) makes sense, the set
r ll
=
{(X,U)!XES,UE UJl(x)}
may not be analytic, so U_μ is not an acceptable control constraint.
The optimality equations are necessary conditions for the optimal cost functions, but except in case (D) they are by no means sufficient. We have the following partial sufficiency results.

Proposition 9.10
(P) If J̄: P(S) → [0, ∞] and J̄ ≥ T̄(J̄), then J̄ ≥ J̄*. If J: S → [0, ∞] is lower semianalytic and J ≥ T(J), then J ≥ J*.
(N) If J̄: P(S) → [−∞, 0] and J̄ ≤ T̄(J̄), then J̄ ≤ J̄*. If J: S → [−∞, 0] is lower semianalytic and J ≤ T(J), then J ≤ J*.
(D) If J̄: P(S) → [−c, c], c < ∞, and J̄ = T̄(J̄), then J̄ = J̄*. If J: S → [−c, c], c < ∞, is lower semianalytic and J = T(J), then J = J*.
Proof We consider first the statements for (DM). The result under (P) follows from Proposition 5.2, the result under (N) from Proposition 5.3, and the result under (D) from Proposition 4.2(a). These results for (DM)
follow from Proposition 2 and trivial modifications of the proof of Proposition 9 of DPSC, Chapter 6. We now establish the (SM) part of the proposition under (P). Cases (N) and (D) are handled in the same manner. Given a lower semianalytic function J: S → [0, ∞] satisfying J ≥ T(J), define J̄: P(S) → [0, ∞] by (20). Then

J̄(p) = ∫ J(x) p(dx) ≥ ∫ T(J)(x) p(dx) = T̄(J̄)(p)   ∀ p ∈ P(S)

by Proposition 9.7. By the result for (DM), J̄ ≥ J̄*. In particular,

J(x) = J̄(p_x) ≥ J̄*(p_x) = J*(x)   ∀ x ∈ S.   Q.E.D.

Proposition 9.11 Let π̄ = (μ̄, μ̄, ...) and π = (μ, μ, ...) be stationary policies in (DM) and (SM), respectively.
(P) If J̄: P(S) → [0, ∞] and J̄ ≥ T̄_μ̄(J̄), then J̄ ≥ J̄_μ̄. If J: S → [0, ∞] is universally measurable and J ≥ T_μ(J), then J ≥ J_μ.
(N) If J̄: P(S) → [−∞, 0] and J̄ ≤ T̄_μ̄(J̄), then J̄ ≤ J̄_μ̄. If J: S → [−∞, 0] is universally measurable and J ≤ T_μ(J), then J ≤ J_μ.
(D) If J̄: P(S) → [−c, c], c < ∞, and J̄ = T̄_μ̄(J̄), then J̄ = J̄_μ̄. If J: S → [−c, c], c < ∞, is universally measurable and J = T_μ(J), then J = J_μ.

Proof The (DM) results follow from Proposition 4.2(b) and Corollaries 5.2.1 and 5.3.2 or from DPSC, Corollary 2.1 and trivial modifications of Corollary 9.1 of Chapter 6. The (SM) results follow from the (DM) results and Proposition 9.6 in a manner similar to the proof of Proposition 9.10. Q.E.D.
Proposition 9.11 implies that under (P), J_μ is the smallest nonnegative universally measurable solution to the functional equation

J = T_μ(J).

Under (D), J_μ is the only bounded universally measurable solution to this equation. This provides us with a simple necessary and sufficient condition for a stationary policy to be optimal under (P) and (D).

Proposition 9.12 (P)(D) Let π̄ = (μ̄, μ̄, ...) and π = (μ, μ, ...) be stationary policies in (DM) and (SM), respectively. The policy π̄ is optimal if and only if J̄* = T̄_μ̄(J̄*). The policy π is optimal if and only if J* = T_μ(J*).

Proof If π̄ is optimal, then J̄_μ̄ = J̄*. By Proposition 9.9, J̄* = T̄_μ̄(J̄*). Conversely, if J̄* = T̄_μ̄(J̄*), then, by Proposition 9.11, J̄* ≥ J̄_μ̄, and π̄ is
optimal. The proof for (SM) follows from the (SM) parts of the same propositions. Q.E.D.

Corollary 9.12.1 (P)(D) There is an optimal nonrandomized stationary policy for (SM) if and only if for each x ∈ S the infimum in

inf_{u ∈ U(x)} { g(x,u) + α ∫ J*(x') t(dx'|x,u) }    (24)

is achieved.

Proof If the infimum in (24) is achieved for every x ∈ S, then by Proposition 7.50 there is a universally measurable selector μ: S → C whose graph lies in Γ and for which

g[x, μ(x)] + α ∫ J*(x') t(dx'|x, μ(x)) = inf_{u ∈ U(x)} { g(x,u) + α ∫ J*(x') t(dx'|x,u) }   ∀ x ∈ S.

Then by Proposition 9.8

T_μ(J*) = T(J*) = J*,

so π = (μ, μ, ...) is optimal by Proposition 9.12. If π = (μ, μ, ...) is an optimal nonrandomized stationary policy for (SM), then by Propositions 9.8 and 9.9

T_μ(J*) = T_μ(J_μ) = J_μ = J* = T(J*),

so μ(x) achieves the infimum in (24) for every x ∈ S.
Q.E.D.
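As an illustration outside the text, the next sketch carries out the construction of Corollary 9.12.1 for a hypothetical finite model under (D): it computes J* by value iteration, selects for each state a control achieving the infimum in (24), and checks the characterization J* = T_μ(J*) of Proposition 9.12. For finitely many controls the infimum is always achieved, so the selector exists trivially.

```python
import numpy as np

# Hypothetical finite model under (D); all data are illustrative.
alpha = 0.9
g = np.array([[1.0, 2.0], [0.5, 3.0], [2.0, 0.1]])      # g(x,u)
t = np.array([[[0.8, 0.2, 0.0], [0.1, 0.9, 0.0]],
              [[0.0, 0.5, 0.5], [1.0, 0.0, 0.0]],
              [[0.3, 0.3, 0.4], [0.0, 0.0, 1.0]]])       # t(x'|x,u)

J = np.zeros(3)
for _ in range(2000):                                     # value iteration for J*
    J = np.min(g + alpha * np.einsum('xuy,y->xu', t, J), axis=1)

Q = g + alpha * np.einsum('xuy,y->xu', t, J)              # bracket in (24)
mu = np.argmin(Q, axis=1)                                 # minimizing selector mu(x)
T_mu_Jstar = Q[np.arange(3), mu]
assert np.allclose(T_mu_Jstar, J, atol=1e-6)              # J* = T_mu(J*), so mu is optimal
```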
In Proposition 9.19, we show that under (P) or (D), the existence of any optimal policy at all implies the existence of an optimal policy that is nonrandomized and stationary. This means that Corollary 9.12.1 actually gives a necessary and sufficient condition for the existence of an optimal policy. Under (N) we can use Proposition 9.10 to obtain a necessary and sufficient condition for a stationary policy to be optimal. This condition is not as useful as that of Proposition 9.12, however, since it cannot be used to construct a stationary optimal policy in the manner of Corollary 9.12.1.

Proposition 9.13 (N)(D) Let π̄ = (μ̄, μ̄, ...) and π = (μ, μ, ...) be stationary policies in (DM) and (SM), respectively. The policy π̄ is optimal if and only if J̄_μ̄ = T̄(J̄_μ̄). The policy π is optimal if and only if J_μ = T(J_μ).

Proof If π̄ is optimal, then J̄_μ̄ = J̄*. By Proposition 9.8

J̄_μ̄ = J̄* = T̄(J̄*) = T̄(J̄_μ̄).
Conversely, if J̄_μ̄ = T̄(J̄_μ̄), then Proposition 9.10 implies that J̄_μ̄ ≤ J̄*, and π̄ is optimal. If π is optimal, J_μ = T(J_μ) by the (SM) part of Proposition 9.8. The converse is more difficult, since the (SM) part of Proposition 9.10 cannot be invoked without knowing that J_μ is lower semianalytic. Let π̄ = (μ̄, μ̄, ...) be a policy for (DM) corresponding to π = (μ, μ, ...), so that J̄_μ̄(p) = ∫ J_μ(x) p(dx) for every p ∈ P(S). Then for fixed p ∈ P(S) and q ∈ Ū(p),

ḡ(q) + α J̄_μ̄[f̄(q)] = ∫_SC [ g(x,u) + α ∫_S J_μ(x') t(dx'|x,u) ] q(d(x,u))
                    ≥ ∫_S inf_{u ∈ U(x)} { g(x,u) + α ∫_S J_μ(x') t(dx'|x,u) } p(dx),

provided the integrand

T(J_μ)(x) = inf_{u ∈ U(x)} { g(x,u) + α ∫_S J_μ(x') t(dx'|x,u) }

is universally measurable in x. But T(J_μ) = J_μ by assumption, and J_μ is universally measurable, so

ḡ(q) + α J̄_μ̄[f̄(q)] ≥ ∫_S J_μ(x) p(dx) = J̄_μ̄(p).

By taking the infimum of the left-hand side over q ∈ Ū(p) and using Proposition 9.9, we see that T̄(J̄_μ̄) ≥ J̄_μ̄ = T̄_μ̄(J̄_μ̄). The reverse inequality always holds, and by the result already proved for (DM), π̄ is optimal. The optimality of π follows from Corollary 9.5.1. Q.E.D.
9.5 Convergence of the Dynamic Programming Algorithm-Existence of Stationary Optimal Policies
Definition 9.10 The dynamic programming algorithm is defined recursively for (DM) and (SM) by

J̄_0(p) = 0   ∀ p ∈ P(S),
J̄_{k+1}(p) = T̄(J̄_k)(p)   ∀ p ∈ P(S),   k = 0, 1, ...,
J_0(x) = 0   ∀ x ∈ S,
J_{k+1}(x) = T(J_k)(x)   ∀ x ∈ S,   k = 0, 1, ....
We know from Proposition 8.2 that this algorithm generates the k-stage optimal cost functions J*_k. For simplicity of notation, we suppress the * here. At present we are concerned with the infinite horizon case and the possibility that J_k may converge to J* as k → ∞.
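A minimal numerical sketch of the algorithm of Definition 9.10, not taken from the text: for a hypothetical finite model under (D), the iterates J_{k+1} = T(J_k) starting from J_0 = 0 converge to J* geometrically in the sup norm (compare Proposition 9.14).

```python
import numpy as np

# Sketch of the dynamic programming algorithm of Definition 9.10 on a
# hypothetical finite model under (D): J_0 = 0, J_{k+1} = T(J_k).
alpha, n_states, n_controls = 0.8, 4, 3
rng = np.random.default_rng(1)
g = rng.uniform(-1.0, 1.0, size=(n_states, n_controls))            # |g| <= b = 1
t = rng.dirichlet(np.ones(n_states), size=(n_states, n_controls))  # t(.|x,u)

def T(J):
    # (T J)(x) = min_u [ g(x,u) + alpha * sum_x' t(x'|x,u) J(x') ]
    return np.min(g + alpha * np.einsum('xuy,y->xu', t, J), axis=1)

J = np.zeros(n_states)                    # J_0 = 0
for k in range(60):
    J_next = T(J)
    print(k, np.max(np.abs(J_next - J)))  # sup-norm change shrinks like alpha^k
    J = J_next
```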
Under (P), J̄_0 ≤ J̄_1, and so J̄_1 = T̄(J̄_0) ≤ T̄(J̄_1) = J̄_2. Continuing, we see that {J̄_k} is an increasing sequence of functions, and so J̄_∞ = lim_{k→∞} J̄_k exists and takes values in [0, +∞]. Under (N), {J̄_k} is a decreasing sequence of functions and J̄_∞ exists, taking values in [−∞, 0]. Under (D), we have

J̄_0 ≤ b + T̄(J̄_0),
0 ≤ b + T̄(J̄_0) ≤ b + T̄[b + T̄(J̄_0)] = (1 + α) b + T̄²(J̄_0),
0 ≤ b + T̄[(1 + α) b + T̄²(J̄_0)] = (1 + α + α²) b + T̄³(J̄_0),

and, in general,

0 ≤ b Σ_{i=0}^{k−2} α^i + T̄^{k−1}(J̄_0) ≤ b Σ_{i=0}^{k−1} α^i + T̄^k(J̄_0).

As k → ∞, we see that b Σ_{i=0}^{k−1} α^i + T̄^k(J̄_0) increases to a limit. But b Σ_{i=0}^{∞} α^i = b/(1 − α), so J̄_∞ = lim_{k→∞} T̄^k(J̄_0) exists and satisfies −b/(1 − α) ≤ J̄_∞. Similarly, we have

J̄_∞ ≤ b/(1 − α).

Now if J̄: P(S) → [−c, c], c < ∞, then

J̄_0 ≤ J̄ + c,
T̄(J̄_0) ≤ T̄(J̄ + c) = αc + T̄(J̄),

and, in general,

T̄^k(J̄_0) ≤ α^k c + T̄^k(J̄).

It follows that

J̄_∞ ≤ lim inf_{k→∞} T̄^k(J̄),

and by a similar argument beginning with J̄ − c ≤ J̄_0, we can show that lim sup_{k→∞} T̄^k(J̄) ≤ J̄_∞. This shows that under (D), if J̄ is any bounded real-valued function on P(S), then J̄_∞ = lim_{k→∞} T̄^k(J̄). The same arguments can be used to establish the existence of J_∞ = lim_{k→∞} J_k. Under (P), J_∞: S → [0, +∞]; under (N), J_∞: S → [−∞, 0]; and under (D), J_∞ = lim_{k→∞} T^k(J) takes values in [−b/(1 − α), b/(1 − α)], where J: S → [−c, c], c < ∞, is lower semianalytic. Note that in every case, J_∞ is lower semianalytic by Lemma 7.30(2).

Lemma 9.3 (P)(N)(D) For every p ∈ P(S),

J̄_k(p) = ∫ J_k(x) p(dx),   k = 0, 1, ..., k = ∞.
Proof For k = 0, 1, ..., the lemma follows from Proposition 9.7 by induction. When k = ∞, the lemma follows from the monotone convergence theorem under (P) and (N) and the bounded convergence theorem under (D). Q.E.D.

Proposition 9.14 (N)(D) We have

J̄_∞ = J̄*,    (25)
J_∞ = J*.    (26)

Indeed, under (D) the dynamic programming algorithm can be initiated from any J̄: P(S) → [−c, c], c < ∞, or lower semianalytic J: S → [−c, c], c < ∞, and converges uniformly, i.e.,

lim_{k→∞} sup_{p ∈ P(S)} | T̄^k(J̄)(p) − J̄*(p) | = 0,    (27)
lim_{k→∞} sup_{x ∈ S} | T^k(J)(x) − J*(x) | = 0.    (28)
Proof The result for (DM) follows from Propositions 4.2(c) and 5.7 or from DPSC, Chapter 6, Proposition 3 and Chapter 7, Proposition 4. By Lemma 9.3,

J̄_k(p_x) = J_k(x)   ∀ x ∈ S,   k = 0, 1, ..., k = ∞,
so (25) implies (26). Under (D), if a lower semianalytic function J: S → [−c, c], c < ∞, is given, then define J̄: P(S) → [−c, c] by (20). Equation (28) now follows from (27) and Propositions 9.5 and 9.7. Q.E.D.

Case (D) is the best suited for computational procedures. The machinery developed thus far can be applied to Proposition 4.6 or to DPSC, Chapter 6, Proposition 4, to show the validity for (SM) of the error bounds given there. We provide the theorem for (SM). The analogous result is of course true for (DM).

Proposition 9.15 (D) Let J: S → [−c, c], c < ∞, be lower semianalytic. Then for all x ∈ S and k = 0, 1, ...,

T^k(J)(x) + b_k ≤ T^{k+1}(J)(x) + b_{k+1} ≤ J*(x) ≤ T^{k+1}(J)(x) + b̄_{k+1} ≤ T^k(J)(x) + b̄_k,    (29)

where

b_k = [α/(1 − α)] inf_{x ∈ S} [T^k(J)(x) − T^{k−1}(J)(x)],    (30)
b̄_k = [α/(1 − α)] sup_{x ∈ S} [T^k(J)(x) − T^{k−1}(J)(x)].    (31)
232
THE INFINITE HORIZON BOREL MODELS
Given a lower semianalytic function J: S ---+
Proof
[ -
c, c], c <
00,
define
]:P(S)---+ [-c,c] by (20). By Proposition 9.7,
'fk(J)(p) =
f Tk(J)(x)p(dx)
'ipE peS),
k = 0,1, ....
Therefore bk = [a/(l - a)] inf [fk(J)(p) - jk-1(J)(p)]' pEP(8)
b k = [a/(l - a)] sup [fk(J)(p) - jk-l(J)(p)], pEP(8)
where bk and Dk are defined by (30) and (31). Taking a 1 = a2 = o: in Proposition 4.6 or using the proof of Proposition 4, Chapter 6 of DPSC, we obtain jk(J)(p)
+ b, :<; t-: 1 (J)(p) + bk+1 :<; ]*(p):<; jk+1(J)(p)
+ bk+1
:<; fk(])(p)
Substituting p = Px in this equation, we obtain (29).
+ s;
Q.E.D.
It is not possible to develop a policy iteration algorithm for (SM) along the lines of Proposition 4.8 or 4.9. One difficulty is this. If at the kth iteration we have constructed a policy (llk,llk'" .), where IlkE U(qS), then J ILk is universally measurable but not necessarily lower semianalytic. We would like to find Ilk+l E U(qS) such that TILk+JJ/IJ:<; T(JILJ + e, where f. > is some prescribed small number, but Proposition 7.50 does not apply to this case. We turn now to the question of convergence of the dynamic programming algorithm under (P). Without additional assumptions, we have only the following result.
°
Proposition 9.16 (P)
We have ]
00
:<;
s-,
J 00 :<; J*.
(32) (33)
Furthermore, the following statements are equivalent: (a) (b) (c) (d)
]00 ]00
J 00
= T(] 00)'
= i-, = T(J 00)'
J 00 = J*.
Proof It is clear that (32) holds and, by Proposition 9.10, implies the equivalence of (a) and (b). Lemma 9.3, Proposition 9.5, and (32) imply (33). Conditions (a) and (c) are equivalent by Lemma 9.3 and Proposition 9.7. Conditions (b) and (d) are equivalent by Lemma 9.3 and Proposition 9.5. Q.E.D.
9.5
CONVERGENCE OF THE DYNAMIC PROGRAMMING ALGORITHM
233
In Example 1, we have J co(0) = 0 and J*(O) = 00, so strict inequality in (32) and (33) is possible. We present now an example in which not only is J co different from J*, but J co is Borel-measurable while J* is not. EXAMPLE 2 (Blackwell) Let L be the set of finite sequences of positive integers and H the set of functions h from L into {O, I}. Then H can be regarded as the countable Cartesian product of copies of {O, I} indexed by L. Let {O, I} have the discrete topology and H the product topology, so H is a complete separable metrizable space (Proposition 7.4). A typical basic open set in H is {hEHlh(s) = 1 \>'SEL1, h(s) = 0 \>'SEL2}' where L1 and L2 are finite subsets of L. Consider a Suslin scheme R: L -> f!J H defined by R(s) = {hEHlh(s) = I}
\>'SEL.
Then
lj;(y)(S) =
{~
IfL1 and L2 are finite subsets of E, then
lj;-l({hEHlh(s)
=
=
1 \>'SELt> h(s) = 0 \>'SEL2})
[n
SEL,
Q(S)] n [
n
(Y - Q(S))]
SEL,
is in f!Jy • The collection t! of subsets E of H for which lj; - 1(E) E f!J y is a o-algebra containing a base for the topology on H, so, by the remark following Definition 7.6, t! contains f!J Hand lj; is Borel-measurable. For each S E L, we have Q(s) = lj;-1[R(s)], so N(Q)=
U
nQ(s)=
ZE,AfS
U
nlj;-l[R(s)]
ze,Al's
= lj;-1[
un
R(S)] = lj;-1[N(R)].
ze..Afs
Since N(Q) is not Borel-measurable, N(R) is also not Borel-measurable. Define the decision model by taking S = HL*, where L* = L U {O}, C = {1,2, ...}, U(x) = C for every XES, and f([h, OJ, u) = (h, u), f([h,«t>(2"" ,(n)],u) = [h,«1,(2,'" ,(n,u)],
234
9.
THE INFINITE HORIZON BOREL MODELS
The system transition is deterministic, so the choice of Wand p(dwlx, u) is irrelevant. Choose \I. = 1 and
g([h,((1,(2,'"
= 1,
=
{~
if if
,(n)],u) =
{~
if h((r.(2"",(n,u)=1, if h((1,(2"",(n,u)=0.
g([h, 0], u)
h(u)
h(u) = 0,
If the system begins at X o = [h,O] and the horizon is infinite, a positive cost can be avoided if and only if there exists (( 1, (2,' ..) such that h(( 1, (2,' .. , (n) = 1 for every n, i.e., J*( [h, 0]) = 0 if and only if hE N(R). Therefore, J* is not Borel-measurable. Over the finite horizon, we have Jk+l(X)
=
T(Jd(x)
=
inf{g(x,u)
+ Jk[f(x,u)]},
UEC
and since C is countable and f, g, and J 0 are Borel-measurable, J k is Borel-measurable for k = 0, 1,2, .... It follows that J", is Borel-measurable. The equivalent conditions of Proposition 9.16 are not easily verified in practice. We give here some more readily verifiable conditions which imply that J co = J*. (P)(D)
Proposition 9.17
K such that for each
XES,
Assume that there exists a nonnegative integer AE R, and k ~ K, the set
Uk(X, A) = {u E U(X)lg(X, u)
+ \I.
f
Jk(x')t(dx'lx, u)
~
I"}
(34)
is compact in C. Then J", = J*, J", = J*, and there exists an optimal nonrandomized stationary policy for (SM). Proof Under (P), we have, for each k, J k ~ J""soJ k+ 1 and letting k --> o: we obtain
=
T(J k) ~ T(Joo), (35)
Let XES be such that J ",(x) < roo By Lemma 3.1 for k ~ u; E U(x) such that
i., l(X) Since J k ~ Jk+ 1 g(x, Ui)
= g(x, ud
~ . . . ~ J""
+ \I.
f
+ \I.
f
k there exists
Jk(x')t(dx'lx, ud·
it follows that for k ~ K
J k(x')t(dx'!x, u;) ::s; g(x, Ui)
= i., l(X)
+ \I. ~
f
J;(x')t(dx'lx, u;)
J ",(x)
Vi ~ k.
Therefore, {u;ji ~ k} c Uk[x,Joo(x)] for every k ~ K. Since Uk[x,Joo(x)] is
9.5
235
CONVERGENCE OF THE DYNAMIC PROGRAMMING ALGORITHM
compact, all limit points of the sequence {udi 2 k} belong to V k[x, J x(x)], and at least one such limit point exists. It follows that if Ii is a limit point of the sequence {udi 2 I}, then
n Vk[x,Joo(x)]. k=k 00
IiE
Therefore, for all k 2 I, J oo(x) 2 g(x, Ii) + a
Letting k
--+ 00
f
Jk(x')t(dx'lx, Ii) 2
i., [(.x).
and using the monotone convergence theorem, we obtain
J ry:(x)
=
g(x, Ii) +
rJ.
f
(36)
J oo(x')t(dx'!x, Ii) 2 T(J ",)(x)
for all XE S such that J x(x) < 00. We also have that (36) holds if J u(x) = x, and thus it holds for all XES. From (35) and (36) we see that J ry: = T(J y ) and conditions (a)-(d) of Proposition 9.16 must hold. In particular, we have from (35) and (36) that for every XES, there exists Ii E Vex) such that J*(x)
f
= g(x, Ii) + rJ. J*(x')t(dx'!x, Ii) = T(J*)(x).
The existence of an optimal nonrandomized stationary policy for (SM) follows from Corollary 9.12.1. Under (D), conditions (aHd) of Proposition 9.16 hold by Proposition 9.14. If we replace g by g + b, we obtain a model satisfying (P). This new model also satisfies the hypotheses of the proposition, so there exists an optimal nonrandomized stationary policy for it. This policy is optimal for the original (D) model as well. Q.E.D. Corollary 9.17.1 (P)(D) Assume that the set Vex) is finite for each Then J 00 = J*, J 00 = J*, and there exists an optimal nonrandomized stationary policy for (SM). In fact, if C is finite and g and rare Borelmeasurable, then J* is Borel-measurable and there exists a Borel-measurable optimal nonrandomized stationary policy for (SM).
XES.
Corollary 9.17.2 (P)(D) Suppose conditions (a)-(e) of Definition 8.7 (the lower semicontinuous model) are satisfied. Then J = J*, J = J*, J* is lower semicontinuous, and there exists a Borel-measurable optimal nonrandomized stationary policy for (SM). 00
CD
Proof From the proof of Proposition 8.6, we see that J k is lower semicontinuous for k = 1,2, ... , as are the functions
~
Kk(x, u) =
{g(X, u) + 00
IY.
f
Jk(x')t(dx'lx, u)
if (X,U)Er, if (x, u)~
r.
(37)
236
9.
THE INFINITE HORIZON BOREL MODELS
F or AE Rand k fixed, the lower level set {(x,u)ESC[Kk(x,u):::;; A} c
r
is closed, so for each fixed XES Vk(X,A) = {uEC[Kk(x,u):::;; A}
is compact. Proposition 9.17 can now be invoked, and it remains only to prove that the optimal nonrandomized stationary policy whose existence is guaranteed by that proposition can be chosen to be Borel-measurable, This will follow from Proposition 9.12 and the proof of Proposition 8.6 once we show that J 00 = J* is lower semicontinuous. Under (P), J k i J*, so {xESIJ*(x):::;; A}
=
n{XES[Jk(X):::;; A} 00
k=O
is closed, and J* is lower semicontinuous. Under (D), 00
i, - b
L cl t J*,
j=k
so a similar argument can be used to show that J* is lower semicontinuous. Q.E.D. By using the argument used to prove Corollary 8.6.1, we also have the following. Corollary 9.17.3 The conclusions of Corollary 9.17.2 hold if instead of assuming that C is compact and each r- is closed in Definition 8.7, we assume that each r- is compact.
Proposition 9.17 and its corollaries provide conditions under which the dynamic programming algorithm can be used in the (P) and (D) models to generate J*. It is also possible to use the dynamic programming algorithm to generate an optimal stationary policy, as is indicated by the next proposition. Suppose that either Vex) is finite for each k~ 0 there exists a universally measurable I1k:S -> C such that I1k(X)E Vex) for every XES and (38) Proposition 9.18
(PHD)
XES or else conditions (a)-(e) of Definition 8.7 hold. Then for each
If {l1d is a sequence of such functions, then for each XES the sequence {l1k(X)} has at least one accumulation point. If 11: S -> C is universally measurable, I1(X) is an accumulation point of {l1k(X)} for each XES such that J*(x) < 00, and l1(x) E Vex) for each XES such that J*(x) = 00, then n = (11,11, . . .) is an optimal stationary policy for (SM).
9.6
237
EXISTENCE OF E-OPTIMAL POLICIES
Proof If U(x) is finite for each XES, then the sets Uk(X, ),) of (34) are compact for all k > 0, XES, and AER. The proof of Corollary 9.17.2 shows that these sets are also compact under conditions (a)-(e) of Definition 8.7. The existence of functions fJ.k: S -+ C satisfying (38) such that fJ.k(X) E U(x) for every XES is a consequence of Lemma 3.1 and Proposition 7.50. Under (P) we see from the proof of Proposition 9.17 that {fJ.k(X)} has at least one accumulation point for each XES such that J*(x) < 00 and every accumulation point of {fJ.k(X)} is in U(x). If u: S -+ C is universally measurable and fJ.(x) is an accumulation point of {fJ.k(X)} for each XES such that J*(x) < 00, then from (35),(36), and Proposition 9.17 we have J*(x)
f
= g[x, fJ.(x)] + IX
for all XES such that J*(x) < 00, then J*(x)
=
T(J*)(x) :s;
00.
J*(x')t(dx'lx, fJ.(x))
If fJ.(x) E U(x) for all
f
XES
(39)
such that J*(x) =
g[x, fJ.(x)] + IX J*(x')t(dx'lx, fJ.(x)) :s;
00
=
J*(x)
(40)
for all XES such that J*(x) = 00. From (39) and (40) we have J* = TI'(J*), and the policy tc = (fJ., u, . . .) is optimal by Proposition 9.12. Under (D) we can replace g by g + b to obtain a model satisfying (P) and the hypotheses of the proposition. The conclusions of the proposition are valid for this new model, so they are valid for the original (D) model as well. Q.E.D. A slightly stronger version of Proposition 9.18 can be found in [SI2].
rj
Corollary 9.18.1 If conditions (bj-te) of Definition 8.7 hold and if each of condition (b) is compact, then the conclusions of Proposition 9.18 hold.
9.6 Existence of ε-Optimal Policies
We have characterized stationary optimal policies and given conditions under which optimal policies exist. We turn now to the existence of ε-optimal policies. For fixed x ∈ S, by definition there is a policy which is ε-optimal at x. We would like to know how this collection of policies, each of which is ε-optimal at a single point, can be pieced together to form a single policy which is ε-optimal at every point. There is a related question concerning optimal policies. If at each point there is a policy which is optimal at that point, is it possible to find an optimal policy? Answers to these questions are provided by the next two propositions.

Proposition 9.19 (P)(D) For each ε > 0, there exists an ε-optimal nonrandomized Markov policy for (SM), and if α < 1, it can be taken to be
stationary. If for each x ∈ S there exists a policy for (SM) which is optimal at x, then there exists an optimal nonrandomized stationary policy.

Proof Choose ε > 0 and ε_k > 0 such that Σ_{k=0}^{∞} α^k ε_k = ε. If α < 1, let ε_k = (1 − α)ε for every k. By Proposition 7.50, there are universally measurable functions μ_k: S → C, k = 0, 1, ..., such that μ_k(x) ∈ U(x) for every x ∈ S and

T_{μ_k}(J*) ≤ J* + ε_k.

If α < 1, we choose all the μ_k to be identical. Then

(T_{μ_{k−1}} T_{μ_k})(J*) ≤ T_{μ_{k−1}}(J*) + α ε_k ≤ J* + ε_{k−1} + α ε_k.

Continuing this process, we have

(T_{μ_0} T_{μ_1} ... T_{μ_k})(J*) ≤ J* + Σ_{j=0}^{k} α^j ε_j ≤ J* + ε,

and, letting k → ∞, we obtain

lim_{k→∞} (T_{μ_0} T_{μ_1} ... T_{μ_k})(J*) ≤ J* + ε.

Under (P) we have

J_π = lim_{k→∞} (T_{μ_0} T_{μ_1} ... T_{μ_k})(J_0) ≤ lim_{k→∞} (T_{μ_0} T_{μ_1} ... T_{μ_k})(J*),    (41)

so π = (μ_0, μ_1, ...) is ε-optimal. Under (D),

J_0 ≤ J* + [b/(1 − α)],
(T_{μ_0} T_{μ_1} ... T_{μ_k})(J_0) ≤ (T_{μ_0} T_{μ_1} ... T_{μ_k})(J* + [b/(1 − α)])
                                  = [α^{k+1} b/(1 − α)] + (T_{μ_0} T_{μ_1} ... T_{μ_k})(J*),

so (41) is valid and π = (μ_0, μ_1, ...) is ε-optimal. This proves the first part of the proposition.

Suppose that for each x ∈ S there is a policy for (SM) which is optimal at x. Fix x and let π = (μ_0, μ_1, ...) be a policy which is optimal at x. By Proposition 9.1, we may assume without loss of generality that π is Markov. By Lemma 8.4(b) and (c), we have

J*(x) = J_π(x) = lim_{k→∞} (T_{μ_0} T_{μ_1} ... T_{μ_k})(J_0)(x)
             = T_{μ_0}[ lim_{k→∞} (T_{μ_1} ... T_{μ_k})(J_0) ](x)
             ≥ T_{μ_0}(J*)(x) ≥ T(J*)(x) = J*(x).
Consequently,

T_{μ_0}(J*)(x) = T(J*)(x).

This implies that the infimum in the expression

inf_{u ∈ U(x)} { g(x,u) + α ∫ J*(x') t(dx'|x,u) }
is achieved. Since x is arbitrary, Corollary 9.12.1 implies the existence of an optimal nonrandomized stationary policy. Q.E.D.

Proposition 9.20 (N) For each ε > 0, there exists an ε-optimal nonrandomized semi-Markov policy for (SM). If for each x ∈ S there exists a policy for (SM) which is optimal at x, then there exists a semi-Markov (randomized) optimal policy.
Proof Under (N) we have J k ! J* (Proposition 9.14), so, given e > 0, the analytically measurable sets
A k = {x E SIJ*(x) > U
{xESIJ*(x)
00,
=
+ e/2} 2)/2e} -(2 + e
J k(X) :::; J*(x)
-00,
Jk(x):::;
converge up to S as k -+ 00. By Proposition 8.3, for each k there exists a k-stage nonrandomized semi-Markov policy n k such that for every XES J
Then for x E A k we have either J*(x) > Jk,rrk(X):::; Jk(x)
or else J*(x) = -
if h(x) > if Jk(x) = -
() < {Jk(X) + (e/2) k,rr k X -l/e
00.
If J*(x) = -
00
00.
and
+ (e/2):::;
00,
00,
J*(x)
+ s,
then either Jk(x) = -
00
and
Jk,rrk(X) :::; -l/e,
or else Jk(x) > -
00
and
Jk,rrk(X):::; Jk(x)
+ (e/2):::; -
[(2
+ e2)/2e] + (e/2) = -l/e.
Choose any fl E U( CIS) and define fck = (flt, . . . , fl~ (flt· .. , fl~- d· For every x E A b we have
1,
k fl, u, . . .), where n =
if J*(x) > if J*(x) = -
00, 00,
so fck is a nonrandomized semi-Markov policy which is s-optimal for every x E A k. The policy tt defined to be fck when the initial state is in Ab but not in A j for any j < k, is semi-Markov, nonrandomized, and s-optimal at every XE Ut=l A k = S.
9.
240
THE INFINITE HORIZON BOREL MODELS
Suppose now that for each XES there exists a policy n X for (SM) which is optimal at x. Let nX be a policy for (OM) which corresponds to tt", and let (Px,q'Q,q1, ... ) be the sequence generated from Px by n via (10) and (11). If G: ~ ---+ [ - 00,0] is defined by (15), then we have from Proposition 9.3 that X
J*(x)
= J",,(x) = JnApJ = G(Px,q'Q,q1,.. .).
(42)
We have from Proposition 9.5 and (16) that J*(x)
= J*(px) =
G(Px,qO,ql"")'
inf (qQ, q 1, . . . ) E
(43)
L'.px
Therefore the infimum in (43)is attained for every PxE S, where S = {pyl yE S}, so by Proposition 7.50, there exists a universally measurable selector 1jJ: S ---+ P(SC)P(SC)' .. such that ljJ(px) E ~Px and J*(x)
= J*(px) = G[Px,ljJ(px)]
VPxES.
Let fJ: S ---+ S be the homeomorphism fJ(x) = Px and let cp(x) cp is universally measurable, cp(X)E~px' and J*(x)
= G[px' cp(x)]
VXES.
= IjJ [fJ(x)]. Then (44)
Denote cp(x) = [qo(d(xo,uo)\x), ql(d(Xl,U1)jX), .. .].
For each k :2: 0, qk(d(xk, udlx) is a universally measurable stochastic kernel on SkCk given S, and by Proposition 7.27 and Lemma 7.28(a), (b), qk(d(xk, udlx) can be decomposed into its marginal Pk(dxk!x), which is a universally measurable stochastic kernel on Sk given S, and a universally measurable stochastic kernel,uk(duklx,xd on C k given SSk- Since Po(dxolx) = Px(dxo), the stochastic kernel ,uo(duolx, x o) is arbitrary except when x = X o . Set ,ITo(duolx)
= ,uo(duolx, x)
Vx E S.
The sequence tt = (,ITO,,ul,,u2,' .. ) is a randomized semi-Markov policy for (SM). From (7) of Chapter 8, we see that for each XES qk(n,pJ
=
qk(d(Xk,ud[x)
VXES,
k=O,l, ....
From (5),(15), and (44), we have J,.(x)
so tt is optimal.
= G[px,qo(n,pJ,ql(n,pJ, ...] = J*(x) Q.E.D.
Although randomized policies may be considered inferior and are avoided in practice, under (N) as posed here they cannot be disregarded even in deterministic problems, as the following example demonstrates.
EXAMPLE 3 (St. Petersburg paradox) Let S = {0, 1, 2, ...}, C = {0, 1}, U(x) = C for every x ∈ S, α = 1,

f(x, u) = x + 1 if u = 1, x ≠ 0,
        = 0 otherwise,

g(x, u) = −2^x if x ≠ 0, u = 0,
        = 0 otherwise.
Beginning in state one, any nonrandomized policy either increases the state by one indefinitely and incurs no nonzero cost or else, after k increases, jumps the system to zero at a cost of −2^{k+1}, where it remains at no further cost. Thus J*(1) = −∞, but this cost is not achieved by any nonrandomized policy. On the other hand, the randomized stationary policy which jumps the system to zero with probability 1/2 when the state x is nonzero yields an expected cost of −∞ and is optimal at every x ∈ S.
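A small simulation, not part of the text, illustrates the mechanics of Example 3: under the randomized stationary policy that jumps to zero with probability 1/2, the cost incurred by stage K has expectation −K, so the expected infinite horizon cost is −∞, while every nonrandomized policy incurs a cost of either 0 or −2^{k+1} for a single fixed k.

```python
import numpy as np

# Example 3 sketch: randomized stationary policy, jump to zero w.p. 1/2.
rng = np.random.default_rng(3)

def simulate(max_steps):
    x, cost = 1, 0.0
    for _ in range(max_steps):
        if x == 0:
            break
        if rng.random() < 0.5:      # u = 0: jump to zero, pay g(x,0) = -2^x
            cost += -(2.0 ** x)
            x = 0
        else:                       # u = 1: move from x to x + 1, no cost
            x += 1
    return cost

# Exact truncated expectation is -K (each stage contributes -1); the sample
# mean is a noisy, heavy-tailed estimate that typically understates |E|.
for K in (10, 20, 40):
    print(K, -float(K), np.mean([simulate(K) for _ in range(2000)]))
```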
The one-stage cost g in Example 3 is unbounded, but by a slight modification an example can be constructed in which g is bounded and the only optimal policies are randomized. If one stipulates that J* must be finite, it may be possible to restrict attention to nonrandomized policies in Proposition 9.20. This is an unsolved problem.
If (SM) is lower semicontinuous, then Proposition 9.19 can be strengthened, as Corollary 9.17.2 shows. Similarly, if (SM) is upper semicontinuous, a stronger version of Proposition 9.20 can be proved.

Proposition 9.21 Assume (SM) satisfies conditions (a)-(d) of Definition 8.8 (the upper semicontinuous model).
(D) For each ε > 0, there exists a Borel-measurable, ε-optimal, nonrandomized, stationary policy.
(N) For each ε > 0, there exists a Borel-measurable, ε-optimal, nonrandomized, semi-Markov policy.
Under both (D) and (N), J* is upper semicontinuous.

Proof Under (D) and (N) we have lim_{k→∞} J_k = J* (Proposition 9.14), and each J_k is upper semicontinuous (Proposition 8.7). By an argument similar to that used in the proof of Corollary 9.17.2, J* is upper semicontinuous. By using Proposition 7.34 in place of Proposition 7.50, the proof of Proposition 9.19 can be modified to show the existence of a Borel-measurable, ε-optimal, nonrandomized, stationary policy under (D). By using Proposition 8.7 in place of Proposition 8.3, the proof of Proposition 9.20 can be modified to show the existence of a Borel-measurable, ε-optimal, nonrandomized, semi-Markov policy under (N). Q.E.D.
Chapter 10
The Imperfect State Information Model
In the models of Chapters 8 and 9 the current state of the system is known to the controller at each stage. In many problems of practical interest, however, the controller has access only to imperfect measurements of the system state. This chapter is devoted to the study of models relating to such situations. In our analysis we will encounter nonstationary versions of the models of Chapters 8 and 9. We will show in the next section that nonstationary models can be reduced to stationary ones by appropriate reformulation. We will thus be able to obtain nonstationary counterparts to the results of Chapters 8 and 9.

10.1 Reduction of the Nonstationary Model-State Augmentation

The finite horizon stochastic optimal control model of Definition 8.1 and the infinite horizon stochastic optimal control model of Definition 9.1 are said to be stationary, i.e., the data defining the model do not vary from stage to stage. In this section we define a nonstationary model and show how it can be reduced to a stationary one by augmenting the state with the time index.
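Before the formal definitions, here is a minimal programming sketch of the state augmentation just described; it is not part of the text, and the interfaces (NonstationaryModel, the per-stage step functions, the terminal state T) are hypothetical. The stationary model's state is the pair (k, x_k), and the last stage maps into an absorbing terminal state.

```python
# Sketch (hypothetical interfaces): a nonstationary model with stage data
# (g_k, t_k) becomes a stationary one whose states are pairs (k, x_k),
# plus an absorbing terminal state T with zero cost.
from dataclasses import dataclass
from typing import Callable, List

TERMINAL = ("T", None)

@dataclass
class NonstationaryModel:
    horizon: int
    g: List[Callable]     # g[k](x, u): one-stage cost at stage k
    step: List[Callable]  # step[k](x, u, rng): sample x_{k+1} from t_k(.|x, u)

def make_stationary(m: NonstationaryModel):
    def g(state, u):
        k, x = state
        return 0.0 if k == "T" else m.g[k](x, u)
    def step(state, u, rng):
        k, x = state
        if k == "T" or k == m.horizon - 1:
            return TERMINAL                    # last stage and T both map to T
        return (k + 1, m.step[k](x, u, rng))   # time index rides along with the state
    return g, step
```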
We combine the treatments of the finite and infinite horizon models. Thus when N = 00 and notation of the form SO,SI,'" ,SN-l or k = 0, ... , N - 1 appears, we take this to mean So, S 1" •• and k = 0,1, . . . , respectively. Definition 10.1 A nonstationary stochastic optimal control model, denoted by (NSM), consists of the following objects:
Horizon. A positive integer or 00. Sk, k = 0, ... , N - 1 State spaces. For each k, Sk is a nonempty Borel space. C ko k = 0, ... , N - 1 Control spaces. For each k, C, is a nonempty Borel space. Uko k = 0, ... , N - 1 Control constraints. For each k, Uk is a function from Sk to the set of nonempty subsets of C k, and the set N
(1)
is analytic in SkCk' Hi/" k = 0, ... , N - 1 Disturbance spaces. For each k, Hi/, is a nonempty Borel space. Pk(dwklx ko Uk), k = 0, ... , N - 1 Disturbance kernels. For each k, Pk(dwklx ko ud is a Borel-measurable stochastic kernel on Hi/, given SkCk' fk, k = 0, ... , N - 2 System functions. For each k, fk is a Borel-measurable function from SkCkWk to Sk+ i a Discount factor. A positive real number. gko k = 0, ... , N - lOne-stage cost functions. For each k, gk is a lower semianalytic function from r k to R*. We envision a system which begins at some XkE Sk and moves successively through state spaces Sk+ b Sk+ 2" .. and, if N < 00, finally terminates in SN _ i- A policy governing such a system evolution is a sequence n k = (flko flk+ b ' . . , flN - d, where each flj is a universally measurable stochastic kernel on C, given SkCk' .. C j- 1 Sj satisfying flj(UiXj)lxkoUk, ... ,Uj-1,Xj) = 1
for every (Xko Uk" .. , Uj_ b xJ Such a policy is called a k-originating policy and the collection of all k-originating policies will be denoted by Ilk. The concepts of semi-Markov, Markov, nonrandomized and:#' -measurable policies are analogous to those of Definitions 8.2 and 9.2. The set Il o is also written as Il', and the subset of Il: consisting of all Markov policies is denoted by Il, Define the Borel-measurable state transition stochastic kernels by tk(~k+
llXk, Uk)
= Pk({ Wk E Hi/,lfk(Xk, Uko Wk)E ~k+ I}Ixko Uk)
244
10.
THE IMPERFECT STATE INFORMATION MODEL
Given a probability measure P« E P(Sd and a policy n k = ({lb' .. define for j = k, k + 1,... ,N - 1 qin\ Pk)(S.jC) =
r ... JCf.
f
JSk JCk
I
J -
,{IN _ dE
n-,
f {lj(CjIX k, Ub . . . ,Uj_ 1, x;) Js,
x tj_ 1(dXjIXj-l' Uj_ d x {lj- 1 (duj_
ll
Xb
Ub' .. ,Uj_ 2, Xj_
d' .. {lk(dukIXk)Pk(dxk)
VS.jEf!JSj'
CjEfJDcj"
(2)
There is a unique probability measure qj(n\ Pk) E P(SjC) satisfying (2). If the horizon N is finite, we treat (NSM) only under one of the following assumptions: f.9j-(xj,uj)dqin\px.)<00 JSJcJ
VnkEn\
XkESb
ksjsN-l,
k
=
(F+)
0, ... ,N - 1.
k = 0, ... ,N - 1. If N =
00,
we treat (NSM) only under one of the assumptions:
(P)
Os gk(xbud
(N)
gk(XbUk) s
(D) (Xk, Uk) E
°<
o:
for every (XbUk)Erk, k = 0,
,N - 1.
(Xk,Uk)Er k, k = 0,
,N - 1.
° for every
< 1, and for some bE R, -b S gk(X k , ud
r k, k =
0, ... ,N - 1.
s
b for every
As in Chapters 8 and 9, the symbols (F+), (F-), (P), (N), and (D) will be used to indicate when a result is valid under the appropriate assumption. We define the k-oriqinatinq cost corresponding to n k at XkE Sk to be J"k(xbk)
=
N-l
L
j=k
r:t.
j
f .gixj,u)dqj(n\px.), JSJcJ
and the k-oriqinatinq optimal cost at Xk E Sk to be J*(Xbk)
A policy
t:
= inf J"k(Xk,k). 1t k
(3)
en k
E n° is e-optimal at Xo E So if
J ( 0) < {J*(Xo,O) "Xo, -lie
+s
if J*(xo,O) > if J*(xo,O) =
-00,
-
00.
The policy n is optimal at Xo if J ,,(xo, 0) = J*(x o, 0). We say tt E n° is e-optimai (optimal) if it is s-optimal (optimal) at every xoESo. Let {en} be a sequence of positive numbers with en J 0. A sequence of policies {nn} c n° is said to
10.1
245
REDUCTION OF THE NONSTATIONARY MODEL
exhibit {8 n }-dominated convergence to optimality if n-+ ro
and for n = 2,3, ...
J ( 0) {J*(XO,0)+8 n "n Xo, :s; J ( 0) "n_'xO' +8n
if J*(xo,O) > if J*(xo, 0) = -
00, 00.
Definition 10.2 Let a nonstationary stochastic optimal control model as defined by Definition 10.1 be given. The corresponding stationary stochastic optimal control model, denoted by (SSM), consists of the following objects. (T is both a terminal state and the only control available at that state. If N = 00, the introduction of T is unnecessary.): u {T} State space. S = Uf~d{(Xk,k)lxkESk} C = Ut=-Ol{(Ubk)lukECk} u {T} Control space. U Control constraint. A function from S to the set of nonempty subsets of C defined by U(x b k) = {(Uk' k)IU k E Uk(X k)} , U(T) = {T}. W = {(Wb k)IW k E Vl-i} Disturbance space. p(dwlx, u) Disturbance kernel. If !:K E f?J W k , we define
Ut=-Ol
P[{(Wb k)lwk E !:K} I(x b k),(ub k)] f
=
Pk(!:K!Xb ud·
(4)
System function. We define for k = 0, ... ,N - 2 f[(xbk),(uk,k),(wk,k)] = Lf,,(Xk,Uk,Wk),k
+ 1],
(5)
and for the remaining two stages f[(xN-l,N -l),(UN-l,N -1),(w N- 1,N -1)] = T, f(T, T, w) = T. IY. Discount factor. g One-stage cost function. We define
N
g[(Xb k),(ub k)] = gk(X k, Uk), g(T, T) = 0.
Horizon.
(6)
(7)
(8) (9)
Consider the mapping ({)k: Sk ~ S given by ({)k(Xk) = (x k, k). We endow S with the topology that makes each ({)k a homeomorphism, and we endow C and W with similar topologies. The spaces S, C, and Ware Borel. The set
r=
{(X,U)IXES,UEU(X)} N-l = {[(xk,k),(Uk,k)]I(XklUk)Erdu{(T,T)} k=O
U
246
10.
THE IMPERFECT STATE INFORMATION MODEL
is analytic, and g defined on I" by (8) and (9) is lower semianalytic. The disturbance kernel p(dwlx, u) is not defined on all of se by (4), but it is defined on a Borel subset of se containing I" - {(T, T)}, which is all that is necessary. Likewise, the system function f is not defined on all of sew by (5)-(7), but the set of points where it is not defined has probability zero under any policy governing the system evolution. Both p(dwlx, u) and f are Borel-measurable on their domains. Thus (SSM) is a special case of the stochastic optimal control model of Definition 8.1 (N < 00) or Definition 9.1 (N = 00). If N < 00, the (F+) and (F-) assumptions on (SSM) are given in Section 8.1. These are equivalent to the respective (F+) and (F-) assumptions on (NSM) given earlier in this section. If N = 00, the (P), (N), and (D) assumptions on (SSM) of Definition 9.1 are equivalent to the respective (P), (N), and (D) assumptions on (NSM) given earlier in this section. The reader can verify that there is a correspondence of policies between (NSM) and (SSM), and the optimal cost at (Xb k) E S for (SSM) is J*(x b k) given by (3). Because of these facts, results already proved for (SSM) with either a finite or infinite horizon have immediate counterparts for (NSM). An illustration of this is the nonstationary optimality equation. Proposition 10.1 (P)(N)(D) Let J*(xk, k) be defined by (3). For fixed k, J*(x k, k) is lower semianalytic on Sb and J*(xk,k) =
inf
Uk E Uk(Xk)
{gk(XbU d
+ rx
l
Sk
+
1
J*(Xk+l,k
+ l)tk(dXk+llxk,Ud}.
We do not list all the results for (NSM) that can be obtained from (SSM). The reader may verify, for example, that the existence results of Propositions 8.3 and 8.4 are valid for (NSM) in exactly the form stated. From Propositions 9.19 and 9.20 we conclude that, under (P) and (D), an ε-optimal nonrandomized Markov policy exists for (NSM), while under (N), an ε-optimal nonrandomized semi-Markov policy exists. In what follows, we make use of these results and reference only their stationary versions.

10.2 Reduction of the Imperfect State Information Model-Sufficient Statistics
Before defining the imperfect state information model, we give without proof some of the standard properties of conditional expectations and probabilities we will be using. For a detailed treatment, see Ash [AI]. Throughout this discussion, (n,:Ji', P) is a probability space and X is an extended realvalued random variable on n for which either E[X+] or E[X-] is finite. If ~ c :Ji' is a IT-algebra on n, then the expectation of X conditioned on ~ is any ~-measurable, extended real-valued, random variable E[ XI~](-)
10.2
on
REDUCTION OF THE IMPERFECT STATE INFORMATION MODEL
247
° which satisfies
It can be shown that at least one such random variable exists. Any such random variable will be called a version of E[XI.@]. If X(w) ~ b for some bE R and every WE 0, then it can be shown that for any version E[ XI.@](·)the
random variable E[ XI.@](·) defined by E[XI.@](w)
= max{E[XI.@](w),b},
is also a version of E[XI.@]. If r! c .@ is a collection of sets which is closed under finite intersections and generates the a-algebra .@ and if Y is an extended real-valued, .@-measurable, random variable satisfying fDX(W)P(dw)
= fD Y(w)P(dw)
VDEr!,
(10)
then Y satisfies (10) for every DE.@, and Y is a version of E[ XI.@]. If r! c .@ is a a-algebra, then E{E[XI.@]Ir!}(w)
= E[XIr!](w)
(11)
for P almost every w. Suppose now that (° 1, ff 1) and (° 2 , ff 2) are measurable spaces and Y1 : ~ 0 1 and Y2 : ~ O2 are measurable. Let g: 0 1 O 2 ~ R* be measurable and satisfy either E[g + (Y1, Y2 ) ] < 00 or E[g - (Yb Y2 ) ] < 00 . We define
°
°
E[XIY1](w)
where
= E[Xlff(Yd](w),
We define for Y1 E0 1 E[X!Y1 = Y1] = E[X\Y1](w(Y1»,
where w(Yd is any element of y~l({yd). Since E[XIY1] is ff(Y1)measurable, it is constant on Y ~ 1({y 1}), and this definition makes sense. Note that E[XI Y1 = Y1] is a function of Y1' not of w. We have for any Y1 E Y1 E[g(Yb Y2 )IY1
= Y1] = E[g(Y1' Y2 )]
(12)
for P almost every Y1' We use the phrase "for P almost every Y1" to indicate that, in this case, P({wEOI(12) fails when Y1
= Y1(w)}) = O.
For FE ff 2, define P[Y2 E FI Y1](w) P[Y2EFIY1 = Y1]
= E[XF(Y2)1 Y1](w), = E[XF(Y2)IY1 = Y1].
10.
248
THE IMPERFECT STATE INFORMATION MODEL
Suppose t(dYzIYd is a stochastic kernel on (Oz, Jj'" z) given 0 every FE Jj'" z P[Yz E FI Yl
= Y1] =
1
such that for
t(FIYl)
for P almost every Yl' Then (12) can be extended to E[g(Yl, YZ)IYl = Yl] = E[g(Yb Yz)] =
f
g(YbYz)t(dYzIYd
(13)
for P almost every Yl' We will find (11) and (13) particularly useful in our treatment of the imperfect state information model. They will be used without reference to this discussion. Definition 10.3 The imperfect state information stochastic optimal control model (lSI) is the ten-tuple (S, C, (U 0, . . . , UN _ d, Z, a, g, t, so, s, N) described as follows:
S, C, a, g, t State space, control space, discount factor, one-stage cost function, and state transition kernel as given in Definition 8.1 and (3) of Chapter 8. We assume that 9 is defined on all of S'C. Z Observation space. A nonempty Borel space. Ub k = 0, ... ,N - 1 Control constraints. Define for k = 0, ... , N - 1,
I k = ZoC o' .. Ck-lZ k.
(14)
An element of Ik is called a kth information vector. For each k, Uk is a mapping from I k to the set of nonempty subsets of C such that
rk =
{(ib u)likEI b UE Uk(ik)}
(15)
is analytic. So Initial observation kernel. A Borel-measurable stochastic kernel on Z given S. s Observation kernel. A Borel-measurable stochastic kernel on Z given CS. N Horizon. A positive integer or 00. For the sake of simplicity, we have eliminated the system function, disturbance space, and disturbance kernel from the model definition. In what follows, our notation will generally indicate a finite N. If N = 00, the appropriate interpretation is required. The system moves stochastically from state Xk to state Xk+ 1 via the state transition kernel t(dxk+ llx k , ud and generates cost at each stage of g(Xb Uk)' The observation Zk+ 1 is stochastically generated via the observation kernel s(dz k+ lluk' Xk+ l) and added to the past observations and controls (zo, Uo, ... , Zk'ud to form the (k + 1)st information vector ik+ 1 = (zo, Uo, ... , Zb Uk' Zk+ d· The first information vector io = (zo) is generated by the initial observation
10.2
249
REDUCTION OF THE IMPERFECT STATE INFORMATION MODEL
kernel so(dzolxo), and the initial state Xo has some given initial distribution p. The goal is to choose Uk dependent on the kth information vector i k so as to minimize
E[t; ~kg(Xk'
Ukl
Definition 10.4 A policy for (lSI) is a sequence tt = (flo, ... , flN- 1) such that, for each k, flk(duk[p; id is a universally measurable stochastic kernel on C given P(S)I k satisfying flk(Uk(ik)\p; id
=1
V(p; ik)E P(S)h.
If for each p, k, and ib flk(duklp; i k) assigns mass one to some point in C, tt
is nonrandomized.
The concepts of Markov and semi-Markov policies are of no use in (lSI), since the initial distribution, past observations, and past controls are of genuine value in estimating the current state. Thus we expect policies to depend on the initial distribution p and the total information vector. In the remainder of this chapter, 11 will denote the set of all policies in (lSI). Just as we denote the set of all sequences of the form (zo, Uo, ... , Uk - 1, Zk) E ZC· .. CZ by I k and call these sequences the kth information vectors, we find it notationally convenient to denote the set of all sequences of the form (xo,zo,uo, ... ,XbZk,udESZC,' 'SZC by H, and call these sequences the kth history vectors. Except for Ub the kth information vector is that portion of the kth history vector known to the controller at the kth stage. Given p E P(S) and n = (flo,' .. , flN -1) E 11, by Proposition 7.45 there is a sequence of consistent probability measures Pk(n,p) on H k, k = 0, ... , N - 1, defined on measurable rectangles by Pk(n, p)(S.oZo(;.o· .. S.kZkCd =
r Jzo r Jeo r ... J~kr JZk r flk(Cklp;zo,uo, ... ,Uk-l,Zk)S(dzkluk-l,Xk)
J~o
x t(dXkluk-l,Xk-l)'" flo(duolp;zo)so(dzolxo)p(dxo). Definition 10.5
positive integer K
~
(16)
Given p E P(S), a policy n = (flo, . . . , flN _ dE 11, and a N, the K-stage cost corresponding to n at pis (17)
If N <
00,
the cost corresponding to n is J N, 1t' and we assume either
10.
250
THE IMPERFECT STATE INFORMATION MODEL
or
fHN-l[t~
akg+(XbUd]dPN_1(n,p) <
CfJ
If N = 00, the cost corresponding to tt is Jn = lim, ~ J K.n, and to ensure that this limit is a well-defined extended real number, we impose one of the following conditions: cy
(P)
0:-:; g(x, u) for every (x, u) E sc.
(N) (D)
g(x, u) :-:; for every (x, u) E S'C. 0< a < 1, and for some bE R, - b :-:; g(x, u) :-:; b for every (x, u) E S'C.
°
The optimal cost at p is J~(p)
= inf IN,n(P). nEIl
The concepts of optimality at p, optimality, a-optimality at p, and a-optimality of policies are analogous to those given in Definition 8.3. If N <
and (F+) or (F-) holds, then by Lemma 7.11(b) N-1 IN,n(P) = ak fHkg(Xk, uk)dPk(n,p) VnEIT, pEP(S). CfJ
Jo
If N
=
CfJ
(18)
and (P), (N), or (D) holds, then
In(p) =
Jo
k a fHkg(xk,uddPk(n,p)
Vn E IT, p E P(S).
(19)
To aid in the analysis of (lSI), we introduce the idea ofa statistic sufficient for control. This statistic is defined in such a way that knowledge of its values is sufficient to control the model. Definition 10.6 A statisticfor the model (lSI) is a sequence (IJo,· .. ,IJN- d of Borel-measurable functions IJk: P(S)h --* lk, where lk is a nonempty Borel space, k = 0, ... ,N - 1. The statistic (IJo, . . . , IJN -1) is sufficient for control provided: (a) F or each k, there exists an analytic set f' k c lkC such that projyk(f'd = lk and for every p E P(S) f\= {(ik,u)I[IJk(p; ik),u] Ef'k},
(20)
where T, is defined by (15). We define (21)
(b) There exist Borel-measurable stochastic kernels fk(dYk+ 11Yk, ud on given lkC such that for every p E P(S), nE IT, lk+ 1 Ef!Jy k + l ' k = 0, ... ,
lk+ 1
10.2
REDUCTION OF THE IMPERFECT STATE INFORMATION MODEL
251
N - 2, we have PH 1(n, p)['1k+
1(p; ik+ d E r,. + t1'1k(P; ik) = Jib Uk = Uk] = fk(lk+ t1Jik' Uk) (22)
for Pk(n, p) almost every (Jik' Uk).t (c) There exist lower semianalytic functions gk:r k --+ fying for every p E P(S), tt E IT, k = 0, ... , N - 1, E[g(Xb uk)I'1k(P; id = Jib u, = Uk] =
[ -
sss: Uk)
00, 00] satis(23)
for Pk(n, p) almost every (Jib Uk), where the expectation is with respect to Pk(n,p). Condition (a) of Definition 10.6 guarantees that the control constraint set Uk(ik) can be recovered from '1k(P; ik). Indeed, from (15), (20), and (21), we have for any pEP(S), ikEh, k = 0, ... ,N - 1, (24)
If Uk(ik) = C for every ikE I b k = 0, ... ,N - 1, then condition (a) is satisfied with r, = 1;,c. This is the case of no control constraint. Condition (b) guarantees that the distribution of YH 1 depends only on the values of Yk and Uk' This is necessary in order for the variables Yk to form the states of a stochastic optimal control model of the type considered in Section 10.1. Condition (c) guarantees that the cost corresponding to a policy can be computed from the distributions induced on the (Yk' Uk) pairs. We temporarily postpone discussion on the existence and the nature of particular statistics sufficient for control, and consider first a perfect state information model corresponding to model (lSI) and a given sufficient statistic. Definition 10.7 Let the model (lSI) and a statistic sufficient for control ('10' ... ,'1N - 1) be given. The perfect state information stochastic optimal control model, denoted by (PSI), consists of the following (we use the notation of Definitions 10.3 and 10.6):
1;" k = 0, ... ,N - 1 State spaces. Control space. k = 0, ... ,N - 1 Control constraints. rJ. Discount factor. gb k = 0, ,N - lOne-stage cost functions. tk, k = 0, ,N - 2 State transition kernels. N Horizon. C
o
b
t In this context "for Pk(n, p) almost every Ch, Uk)" means that the set {(xo. Zo, U o , ' .. , x i ; zkoudEHkl (22) holds whcn p, = '7k(p;ik),Uk = ud has Pk(n,p)-measure one.
252
10.
THE IMPERFECT STATE INFORMATION MODEL
Thus defined, (PSI) is a nonstationary stochastic optimal control model in the sense of Definition 10.P The definitions of policies and cost functions for (PSI) are given in Section 10.1. We will use (~) to denote these objects in (PSI). For example, ft.1 is the set of all (O-originating) policies and ft. is the set of all Markov (O-originating) policies for (PSI). If ii = (flo, ... , flN - d is a policy for (PSI), then by (24) and Proposition 7.44 the sequence (flo[duollJo(p;io)],··. ,flN-l[duN-lllJo(p;io), Uo,···, UN- 2 ' IJN-l(P;i N- d]),
where
k = O, ... ,N - 1,
(25)
is a policy for (lSI). We call this policy ii also, and can regard fit as a subset of II in this sense. If ii is a nonrandomized policy for (PSI), then it is also nonrandomized when considered as a policy for (lSI). We will see in Proposition 10.2 that ii results in the same cost for both (PSI) and (lSI). Define sp : peS) ~ P( Yo) by (26) Thus defined, cp(p) is the distribution of the initial state Yo in (PSI) when the initial state X o in (lSI) has distribution p. By Corollary 7.26.1, for every YoE.?J y o the mapping ljixo(xo,p) = so({zollJo(p;Zo)E Yo}lxo)
is Borel-measurable. Define a Borel-measurable stochastic kernel on S given peS) by q(dxolp) = p(dxo). Then (26) can be written as Cp(P)(l'o) = NXo(xo,p)q(dxolp)· It follows from Propositions 7.26 and 7.29 that cp is Borel-measurable. For pEP(S), define the mapping ~.k:Hk ~ YoCo'" ~Ck by Vp.k(XO,zo,uo,··· ,Xk>Zk,Uk ) = [lJo(p;io),uo,··· ,lJk(p;ik),Uk],
(27)
where (25) holds. For q E P( Yo) and fi = (flo,' .. , flN _ dE fit, there is a sequence of consistent probability measures i\(fi, q) generated on YoCo' .. ~Ck' k = 0, ... , N - 1, defined on measurable rectangles by i\(ii, q)(l'o~o'
.. XJ,~k)
=
fro 10' .-IXk flk(cl Yo, Uo,' .. ,Uk-t,Yk)
x tk-1(dYkIYk-t,Uk-l)···flo(duoIYo)q(dyo). t
to be
(28)
The disturbance spaces. disturbance kernels. and system functions in (PSI) can be taken W. = y. + I. Pk(dwk!Yk. Uk) = tk(dYH llh. Uk), and .f.( Yb u i ; Wk) = Wk. respectively.
10.2
253
REDUCTION OF THE IMPERFECT STATE INFORMATION MODEL
For a Markov policy ii E rt, these objects are related to the probability measures Pk(ii,p) defined by (16) in the following manner. Lemma 10.1 Suppose p E P(S) and ii E ft. Then for k = 0, ... ,N - 1 and for every Borel set B c YoC o' . 'lkC ko we have (29) It suffices to prove that if YoE~yo'
Proof CkE~Ck'
CoE~co""
,X,E~Yk'
then P k(ii,p)({1]o(p;io)E Yo, UoECo,··· ,1]k(p;ik)E Xc, UkECd) = Pk[ii, cp(p)](YoCo' . ·lkCd. t
(30)
For k = 0, (30) follows from (16), (26), and (28). If (29) holds for some k < N, then using (16), (22), (28), and (29), we obtain P k+ l(ii, pH {1]o(p; io)E Yo, Uo E Co,· .. , 1]k+ l(P; i H l)E XH l' Uk+ 1E ~H I})
= l~o(p; X
io)e 1'0.
uoe~o
•...•
~k(p;
iderk.
Uke~kJ
Irk + 1 fik+ 1 (Ck +llYk+d
tk(dYH t11]k(P; id, Uk) dPk(ii, p)
= Iro(o ... rk(k Irk + I fiH 1(f:H t1YH dtk(dYk+ .]». ud dPk[ii, cp(p)] = Pk+1[ii,cp(p)](YoCo'" X,+lCk+d.
Q.E.D.
As noted earlier, (PSI) is a model of the type considered in Section 10.1. The (F+) and (F-) conditions of Section 10.1, when specialized to the (PSI) model, will be denoted by (F+) and (F-), respectively. These conditions are not to be confused with the (F+) and (F-) conditions for the lSI model given in this section. In a particular problem it is often possible to see the relationship between these finiteness conditions on the two models. In the general case, the relationship is unclear. We point out, however, that if g is bounded below or above, then (F +) or (F -) is satisfied for (lSI), respectively, and given any statistic sufficient for control, the corresponding gk can be chosen so as to be bounded below or above, respectively. If a particular result holds when we assume (F +) on the (lSI) model and (F+) on the (PSI) model, the notation (F +, F +) will appear. The notation (F -, F-) has a similar meaning. t
In this context, we define
Uo E Co,· ..• '7k(P; i.) E lk, Uk EC.] :(xo,=o,u o,··· ,XkoZkoU.)I'7o(p;io)EYo,uoE~o,
i'7o(p; io)E Yo, =
...
,'7k(p;i.)Elk,Uk ECk).
where i j = (=o,uo, ... ,Uj_I.Zj)' We will often use this notation to indicate a set which depends on functions of some or all of the components of a Cartesian product.
10.
254
THE IMPERFECT STATE INFORMATION MODEL
If N = 00, we consider conditions (P), (N), and (D) for (lSI) and the corresponding conditions (P), eN), and (1») for (PSI). In this case, however, if (P) holds for (lSI) and lower semianalytic functions 9k: k ~ [ - 00, 00 ] satisfying (23) exist, there is no loss of generality in assuming that 9k ~ 0 for every k, i.e., (P) holds for (PSI). Likewise, if (N) or (D) holds for (lSI), we may assume without loss of generality that (N) or (1»), respectively, holds for (PSI). As in the finite horizon case, we adopt the notation (P, P), (N, N), and (D, D) to indicate which assumptions are sufficient for a result to hold. From Section 10.1, we have that when (p~), (p-), (P), eN), or (D) holds, then the (O-originating) cost corresponding to a policy it for (PSI) at Y E Yo is
r
N-l
~ IN,ii(Y) =
~
L,
k=O
rx k~
YoCo" 'YkCk
~ 9k(Yb uddPk(ft,py),
(31)
where N may be infinite. The (O-originating) optimal cost for (PSI) at yE Yo is J~(y)
=
inf
TtEfI'
I N ii(Y).
(32)
•
The remainder of this section is devoted to establishing relations between costs, optimal costs, and optimal and nearly optimal policies for the (lSI) and (PSI) models. Proposition 10.2 (F+, P+)(F-, F-)(P, P)(N, N)(D,D) For every pe P(S) and it Eft, we have IN,ii(P) = fy/N,ii(YO)cp(p)(dyo). Proof
(33)
From (31), (28), (23), (18), (19), and Lemma 10.1, we have N-l
fyo IN,ii(y)CP(p)(dy) =
= =
k~O
rJ.k fYofYoco'''YkCk9k(YbUddi\(ft,py)cp(p)(dy)
N-l
rx k r
L
k=O
JYOCO'''YkCk
N-l
L
k=O
rJ.k
r
JH
k
9k(Yk,u k)di\[it,cp(p)]
g(xbuddPk(ft,P) = IN,ii(P),
where the (F+) or (F-) assumption is used to interchange integration and summation when N < 00, and the monotone or bounded convergence theorem is used when N = 00. Q.E.D. Corollary 10.2.1 we have
(F+ ,F+)(F-, F-)(P, P)(N, N)(D,D)
J~(p)
~ Jy r o J~H yo)cp(p)(dyo).
For every pEP(S), (34)
10.2
255
REDUCTION OF THE IMPERFECT STATE INFORMATION MODEL
Proof The function J~(yo) is lower semianalytic, so the integral in (34) is defined. From Proposition 10.2, we have J%(p) = inf IN,,,(p):S; inf 1tEn
r
1tEn Jyo
IN,ft(Yo)qJ(p)(dyo),
so it suffices to show that
This follows from Lemma 8.6 and Corollary 9.5.2.
Q.E.D.
We wish now to establish a relationship similar to (33) between the optimal cost functions for (lSI) and (PSI). In light of Corollary 10.2.1, it suffices to show that given any policy for (lSI), a policy for (PSI) can be found which does at least as well. This is formalized in the next lemma, and the analog of (33) is given as part of Proposition 10.3. Lemma 10,2 (F+, F+ )(F-, F-)(P, P)(N, N)(D, D) nEIl, there exists ii E IT such that
IN.,,(p)
Given p E
= fyo IN,it(Yo)qJ(p)(dyo).
ns,
and (36)
Proof Let pE peS) and tt = (flo, . . . , flN-l) E 11 be given. For k = 0, ... , N - 1, let Qk(n, p) be the probability measure on ~Ck defined on measurable
rectangles to be Qk(n,p)(X,Cd = P k(n,p)({'1k(P; idEl/"Uk E Cd)·
(37)
There exists a Borel-measurable stochastic kernel [ik(duk!yd on C k given ~ such that for every Borel set B c ~Ck we have (38) In particular, 1 = Pk(n,p)({(ik,udE1d)
= Pk(n,p)( {['1k(P; id, Uk] Efd) =
Qk(n,p)(f k) = fYkCk[ik(Ok(Yk)\yddQk(n,p),
so, altering [ik(dukIYk) on a set of measure zero if necessary, we may assume that (38)holds and [ike 0 k(yd!Yk) = 1 for every Yk E Yk· Let ii = ([io,· .. , [iN - 1)' Then ii is a Markov policy for (PSI). We show by induction that for JkE.?~\k' CkE~c, k = O, ... ,N - 1, (39)
256
10.
THE IMPERFECT STATE INFORMATION MODEL
We see from (26) and (37) that the marginal of Qo(n, p) on Yo is
c.d
Qk+ I(n, p)( 1';.+ 1 =
I
y Jlk+l
C
k+l
PH1(Ck+r1YHrl dQHI(n,p)
= J{tlk+ II I (.' ) P,lk+
y
1 E_k+ I
)Pk+l(CH11'1HI(P;iHd)dPH 1(n,p)
fHk fh+ PH I(CH r1YH dtk(dYk+ II'1k(P; ik),Uk) dPk(n, p) = fYkck fxu PH l(CH IIYH dtk(dYH IIYk' Uk) dQk(n, p) = fyoco'" YkCk fXk+ PH (Ck+ I!YH l)tk(dYk+ I!Yb ud dl\[ft,
I
I
1
1
= PH I[ft,
Taken together, (37)and (39)imply that for Xk E we have P k(n,p)({'1k(P;ik)Elk,UkECk})
=
d)·
!lYk' (;k E !lc, k = 0, ... , N -
l\[ft,
1,
(40)
If (40) is used in place of Lemma 10.1, the proof of Proposition 10.2 can now be used to prove (36). Q.E.D. Definition 10.8 Given q E P( }~) weakly q-e-optimal if
~
I Jy
IN.ft(Yo)q(dyo) S a
and /; > 0, a policy ft E rr is said to be
{fyo J'fii(Yo)q(dyo) + /; -l/c
if
fyo J~(Yo)q(dyo)
The policy if. is said to be q-optimal if q({YoE YoIJN.nevo)
= -
= J~(yo)})
00.
= 1.
Equation (35) shows that given any p E P(S) and s > 0, a weakly
J~(p)
(F+ ,P+)(F-, F-)(P, P)(N, N)(D,:o) =
fyo J~(Yo)
VpEP(S).
We have (41)
Furthermore, if if. is optimal,
10.2
REDUCTION OF THE IMPERFECT STATE INFORMATION MODEL
257
Proof Equation (41) follows from Corollary 10.2.1 and Lemma 10.2. Let if. be s-optimal for (PSI). It is clear that under (P, P) and (D, :0), we have JMyo) > -
00
VYo E Yo,
(42)
so IN.ic(YO)sJR;(Yo)+s
VYoEYo.
(43)
Under (F+, p+), (42) follows from Lemma 8.3 and Proposition 8.2, so again (43) holds. We have from (41) and Proposition 10.2 that J N, ic(P) =
Iy) N, ic(Yo)
s Jyo r Jt( Yo)
VpE peS),
so if. is s-optimal for (lSI). The remainder of the proposition follows from (41) and Proposition 10.2. Q.E.D. We shall show shortly that a statistic sufficient for control always exists, and indeed, in many cases it can be chosen so that (PSI) is stationary. The existence of such a statistic for (lSI) and the consequent existence of the corresponding model (PSI) enable us to utilize the results of Chapters 8 and 9. For example, we have the following proposition. Proposition 10.4 (F+, P+ )(F -, p-)(P, P)(N, N)(D,:o) If (tlo, ... , tiN- 1) is a statistic sufficient for control for (lSI), then for every s > 0, there exists an s-optimal nonrandomized policy for (lSI) which depends on ik = (zo, U o, · .. .ui: 1, Zk) only through tlk(P; id, i.e., has the form 11:
π = (μ_0[p; η_0(p; i_0)], ..., μ_{N-1}[p; η_{N-1}(p; i_{N-1})]).   (44)

Under (F⁺, F̄⁺), (P, P̄), or (D, D̄), we may choose this ε-optimal policy to have the simpler form

π = (μ_0[η_0(p; i_0)], ..., μ_{N-1}[η_{N-1}(p; i_{N-1})]).   (45)
Proof  Under (F⁺, F̄⁺), (P, P̄), or (D, D̄), there exists an ε-optimal, nonrandomized, Markov policy π̄ = (μ̄_0, ..., μ̄_{N-1}) for (PSI) (Propositions 8.3 and 9.19). This policy π̄ is ε-optimal for (ISI) by Proposition 10.3, and the second part of the proposition is proved. Assume (F⁻, F̄⁻) holds and let {ε_n} be a sequence of positive numbers with Σ_{n=1}^∞ ε_n < ∞ and ε_n ↓ 0. Let π̄_n = (μ̄_0^n, ..., μ̄_{N-1}^n) be a sequence of nonrandomized Markov policies for (PSI) exhibiting {ε_n}-dominated convergence to optimality (Proposition 8.4). By Proposition 10.2 and the (F⁻, F̄⁻)
assumption, we have

J_{N,π̄_n}(p) = ∫_{Y_0} J̄_{N,π̄_n}(y_0) φ(p)(dy_0) < ∞   ∀ p ∈ P(S).

Since J̄_{N,π̄_n}(y_0) ≤ J̄_N*(y_0) + Σ_{k=n+1}^∞ ε_k for every y_0 ∈ Y_0, we have

lim_{n→∞} ∫_{Y_0} J̄_{N,π̄_n}(y_0) φ(p)(dy_0) = ∫_{Y_0} J̄_N*(y_0) φ(p)(dy_0)   ∀ p ∈ P(S).

Let ε > 0 be given and let n(p) be the smallest positive integer n for which

∫_{Y_0} J̄_{N,π̄_n}(y_0) φ(p)(dy_0) ≤ ∫_{Y_0} J̄_N*(y_0) φ(p)(dy_0) + ε   if   ∫_{Y_0} J̄_N*(y_0) φ(p)(dy_0) > -∞,
∫_{Y_0} J̄_{N,π̄_n}(y_0) φ(p)(dy_0) ≤ -1/ε                              if   ∫_{Y_0} J̄_N*(y_0) φ(p)(dy_0) = -∞.

Define μ_k(p; y_k) = μ̄_k^{n(p)}(y_k), k = 0, ..., N - 1. Then by Propositions 10.2 and 10.3, π given by (44) is an ε-optimal nonrandomized policy for (ISI).

Assume (N, N̄) holds. Consider the nonstationary stochastic optimal control model (NPSI) for which the initial state space is P(Y_0), the initial control space is a singleton set {ū_0}, the initial cost function is ḡ_0(q, ū_0) = 0 for every q ∈ P(Y_0), and the initial transition kernel is given by t̄_0(dy_0 | q, ū_0) = q(dy_0) for every q ∈ P(Y_0). For k > 0, the (k+1)st state and control spaces, control constraint, cost function, and transition kernel are Y, C, Ū, ḡ, and t̄(dy_{k+1} | y_k, u_k) of (PSI), respectively. The discount factor is α and the horizon is infinite. By definition, the optimal cost for (NPSI) at q ∈ P(Y_0) is

inf_{π̄ ∈ Π̄} ∫_{Y_0} J̄_{π̄}(y_0) q(dy_0),

which, by Corollaries 9.1.1 and 9.5.2, is the same as

∫_{Y_0} J̄*(y_0) q(dy_0).

Now (NPSI) has a nonpositive one-stage cost function, so, by Proposition 9.20, for each ε > 0 there exists an ε-optimal, nonrandomized, semi-Markov policy π̄ = (μ̄(q), μ̄_0(q; y_0), μ̄_1(q; y_1), ...). For fixed q ∈ P(Y_0), let π̄(q) be the policy for (PSI) given by

π̄(q) = (μ̄_0(q; y_0), μ̄_1(q; y_1), ...).
Then

∫_{Y_0} J̄_{π̄(q)}(y_0) q(dy_0) ≤ ∫_{Y_0} J̄*(y_0) q(dy_0) + ε   if   ∫_{Y_0} J̄*(y_0) q(dy_0) > -∞,
∫_{Y_0} J̄_{π̄(q)}(y_0) q(dy_0) ≤ -1/ε                          if   ∫_{Y_0} J̄*(y_0) q(dy_0) = -∞,
i.e., π̄(q) is weakly q-ε-optimal for (PSI). By Proposition 10.3, the policy π defined by (44), where μ_k(p; y_k) = μ̄_k(φ(p); y_k), is ε-optimal for (ISI). Q.E.D.

The other specific results which can be derived for (ISI) from Chapters 8 and 9 are obvious and shall not be exhaustively listed. We content ourselves with describing the dynamic programming algorithm over a finite horizon. By Proposition 8.2, the dynamic programming algorithm has the following form under (F⁺, F̄⁺) or (F⁻, F̄⁻), where we assume for notational simplicity that (PSI) is stationary:

J̄_0*(y) = 0   ∀ y ∈ Y,   (46)

J̄_{k+1}*(y) = inf_{u ∈ Ū(y)} { ḡ(y, u) + α ∫ J̄_k*(y') t̄(dy' | y, u) },   k = 0, ..., N - 1.   (47)
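As a concrete illustration of the recursion (46)-(47) — not part of the original text — the following Python sketch runs the backward recursion for a finite approximation of (PSI), in which the statistic y is restricted to a finite set of beliefs and the kernel t̄, the cost ḡ, the discount factor α, and all numbers are made-up, illustrative assumptions.

import numpy as np

def finite_horizon_dp(g_bar, t_bar, alpha, N):
    """Run the recursion (46)-(47) on a finite approximation of (PSI).

    g_bar : array (nY, nU), one-stage cost g_bar(y, u)
    t_bar : array (nY, nU, nY), transition probabilities t_bar(y' | y, u)
    alpha : discount factor
    N     : horizon

    Returns the value functions J[0..N] and minimizing controls mu[k][y].
    """
    nY, nU = g_bar.shape
    J = np.zeros((N + 1, nY))          # J[0] corresponds to J*_0 = 0 in (46)
    mu = np.zeros((N, nY), dtype=int)  # mu[k][y] attains the infimum in (47)
    for k in range(N):
        # Q[y, u] = g_bar(y, u) + alpha * sum_{y'} J[k](y') t_bar(y' | y, u)
        Q = g_bar + alpha * t_bar @ J[k]
        J[k + 1] = Q.min(axis=1)
        mu[k] = Q.argmin(axis=1)
    return J, mu

# Illustrative two-belief, two-control example (all numbers are made up).
g_bar = np.array([[1.0, 2.0],
                  [0.5, 1.5]])
t_bar = np.array([[[0.9, 0.1], [0.4, 0.6]],
                  [[0.3, 0.7], [0.8, 0.2]]])
J, mu = finite_horizon_dp(g_bar, t_bar, alpha=0.9, N=5)
print(J[5], mu[0])

In this finite setting the infimum in (47) is always attained, so the recorded minimizers play the role of the universally measurable selectors discussed below.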
If the infimum in (47) is achieved for every y and k = 0, ..., N - 1, then there exist universally measurable functions μ̄_k: Y → C such that for every y and k = 0, ..., N - 1, μ̄_k(y) ∈ Ū(y) and μ̄_k(y) achieves the infimum in (47). Then π̄ = (μ̄_0, ..., μ̄_{N-1}) is optimal in (PSI) (Proposition 8.5), so π̄ is optimal in (ISI) as well (Proposition 10.3). If (F⁺, F̄⁺) holds and the infimum in (47) is not achieved for every y and k = 0, ..., N - 1, the dynamic programming algorithm (46) and (47) can still be used in the manner of Proposition 8.3 to construct an ε-optimal, nonrandomized, Markov policy π̄ for (PSI). We see from Proposition 10.3 that π̄ is an ε-optimal policy for (ISI) as well. In many cases, η_{k+1}(p; i_{k+1}) is a function of η_k(p; i_k), u_k, and z_{k+1}. The computational procedure in such a case is to first construct (μ̄_0, ..., μ̄_{N-1}) via (46) and (47), then compute y_0 = η_0(p; i_0) from the initial distribution and the initial observation, and apply control u_0 = μ̄_0(y_0). Given y_k, u_k, and z_{k+1}, compute y_{k+1} and apply control u_{k+1} = μ̄_{k+1}(y_{k+1}), k = 0, ..., N - 2. In this way the information contained in (p; i_k) has been condensed into y_k. This condensation of information is the historical motivation for statistics sufficient for control.

10.3 Existence of Statistics Sufficient for Control

Turning to the question of the existence of a statistic sufficient for control, it is not surprising to discover that the sequence of identity mappings on P(S)I_k, k = 0, ..., N - 1, is such an object (Proposition 10.6). Although this
represents no condensation of information, it is sufficient to justify our analysis thus far. We will show that if the constraint sets Γ_k are equal to I_k C, k = 0, ..., N - 1, then the functions mapping P(S)I_k into the distribution of x_k conditioned on (p; i_k), k = 0, ..., N - 1, constitute a statistic sufficient for control (Proposition 10.5). This statistic has the property that its value at the (k+1)st stage is a function of its value at the kth stage, u_k, and z_{k+1} [see (52)], so it represents a genuine condensation of information. It also results in a stationary perfect state information model and, if the conditional distributions can be characterized by a finite set of parameters, it may result in significant computational simplification. This latter condition is the case, for example, if it is possible to show beforehand that all these distributions are Gaussian.
10.3.1 Filtering and the Conditional Distributions of the States

We discuss filtering with the aid of the following basic lemma.

Lemma 10.3  Consider the (ISI) model. There exist Borel-measurable stochastic kernels r_0(dx_0 | p; z_0) on S given P(S)Z and r(dx | p; u, z) on S given P(S)CZ which satisfy

∫_{S̲_0} s_0(Z̲_0 | x_0) p(dx_0) = ∫_S ∫_{Z̲_0} r_0(S̲_0 | p; z_0) s_0(dz_0 | x_0) p(dx_0)   ∀ S̲_0 ∈ ℬ_S, Z̲_0 ∈ ℬ_Z, p ∈ P(S),   (48)

∫_{S̲} s(Z̲ | u, x) p(dx) = ∫_S ∫_{Z̲} r(S̲ | p; u, z) s(dz | u, x) p(dx)   ∀ S̲ ∈ ℬ_S, Z̲ ∈ ℬ_Z, p ∈ P(S), u ∈ C.   (49)

Proof  For fixed (p; u) ∈ P(S)C, define a probability measure q on SZ by specifying its values on measurable rectangles to be (Proposition 7.28)

q(S̲ Z̲ | p; u) = ∫_{S̲} s(Z̲ | u, x) p(dx).
By Propositions 7.26 and 7.29, q(d(x, z) | p; u) is a Borel-measurable stochastic kernel on SZ given P(S)C. By Corollary 7.27.1, this stochastic kernel can be decomposed into its marginal on Z given P(S)C and a Borel-measurable stochastic kernel r(dx | p; u, z) on S given P(S)CZ such that (49) holds. The existence of r_0(dx_0 | p; z_0) is proved in a similar manner. Q.E.D.

It is customary to call p, the given distribution of x_0, the a priori distribution of the initial state. After z_0 is observed, the distribution is updated, i.e., the distribution of x_0 conditioned on z_0 is computed. The updated distribution is called the a posteriori distribution and, as we will show in Lemma 10.4, is just r_0(dx_0 | p; z_0). At the kth stage, k ≥ 1, we will have some a priori distribution p_k of x_k based on i_{k-1} = (z_0, u_0, ..., u_{k-2}, z_{k-1}). Control u_{k-1}
is applied, some z_k is observed, and an a posteriori distribution of x_k conditioned on (i_{k-1}, u_{k-1}, z_k) is computed. We will show that this distribution is just r(dx | p_k; u_{k-1}, z_k). The process of passing from an a priori to an a posteriori distribution in this manner is called filtering, and it is formalized next. Consider the function t̄: P(S)C → P(S) defined by

t̄(p, u)(S̲) = ∫_S t(S̲ | x, u) p(dx)   ∀ S̲ ∈ ℬ_S.   (50)
Equation (50) is called the one-stage prediction equation. If x_k has an a posteriori distribution p_k and the control u_k is chosen, then the a priori distribution of x_{k+1} is t̄(p_k, u_k). The mapping t̄ is Borel-measurable (Propositions 7.26 and 7.29). Given a sequence i_k ∈ I_k such that i_{k+1} = (i_k, u_k, z_{k+1}), k = 0, ..., N - 2, and given p ∈ P(S), define recursively

P_0(p; i_0) = r_0(dx_0 | p; z_0),   (51)

P_{k+1}(p; i_{k+1}) = r(dx | t̄[P_k(p; i_k), u_k]; u_k, z_{k+1}),   k = 0, ..., N - 2.   (52)
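To make the filtering equations (50)-(52) concrete, here is a hypothetical finite-state sketch (not from the text): S, C, and Z are finite, t(· | x, u) and s(· | u, x) are stochastic arrays, the prediction step implements (50), and the update step plays the role of the kernel r determined by (49); all names and numbers are illustrative assumptions, and s is used as a stand-in for the initial kernel s_0.

import numpy as np

def predict(p, u, t):
    """One-stage prediction (50): p_prior(x') = sum_x t(x' | x, u) p(x)."""
    return p @ t[:, u, :]                      # t has shape (nS, nU, nS)

def update(p_prior, u, z, s):
    """Bayes update, the role of r(dx | p; u, z) in (49):
    posterior(x) proportional to s(z | u, x) * p_prior(x)."""
    unnorm = s[:, u, z] * p_prior              # s has shape (nS, nU, nZ)
    return unnorm / unnorm.sum()

def filter_step(p_post, u, z, t, s):
    """One pass of (52): posterior at stage k, control u_k, observation z_{k+1}."""
    return update(predict(p_post, u, t), u, z, s)

# Illustrative two-state model (all numbers are made up).
t = np.array([[[0.8, 0.2], [0.5, 0.5]],
              [[0.1, 0.9], [0.6, 0.4]]])       # t[x, u, x']
s = np.array([[[0.9, 0.1], [0.7, 0.3]],
              [[0.2, 0.8], [0.4, 0.6]]])       # s[x, u, z]
p0 = np.array([0.5, 0.5])                      # a priori distribution of x_0
y = update(p0, 0, 1, s)                        # role of r_0(dx_0 | p; z_0) in (51)
y = filter_step(y, u=1, z=0, t=t, s=s)         # P_{k+1} from P_k, u_k, z_{k+1}
print(y)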
Note that for each k, Pk:P(S)Ik -+ P(S) is Borel-measurable. Equations (48)-(52) are called the filtering equations corresponding to the (lSI) model. For a given initial distribution and policy, they generate the conditional distribution of the state given the current information, as the following lemma shows, Lemma 10.4
Let the model (lSI) be given. For any P E P(S), n = we have
(flo,··" flN-dEll and ~kE[?JS'
Pk(n,p)[xkE~klik]
=
(53)
Pk(p;id(~k)
for Pk(n,p) almost every ik, k = O, ... ,N - 1.
Proof" We proceed by induction. For any ~oE86s from (51), (16), and (48), that
r
Jizo E~oJ
po(p;zo)(So)dPo(n,p) -
=
r .
Jizo E~O}
and Z, E (JIJz, we have
ro(Solp;zo)dPo(n,p)
= Is I~o ro(~olp;
-
Zo)So(dZoIXo)p(dxo)
= Iso So(?-.oIXo)p(dxo) =
Equation (53) for k probability.
=
°follows from
Po(n,p)({xoE§o,ZoE?-.o}).
(54)
(54) and the definition of conditional
t In this and subsequent proofs, the reader may find the discussion concerning conditional expectations and probabilities at the beginning of Section 10.2 helpful.
Assume now that (53) holds for k. For any !kEfJB1k' ~kE~c, ~k+1EfJBz and ~k+ 1 E fJB s, we have from (16),the induction hypothesis, Fubini's theorem, (50), (52), and (49) that likElk, UkE~k,
Zk+
1 E;!;k+
= lik Elk} f~k X
Jl Pk+ 1 (p; ib U b
=
=
f~k
=
Zk+ 1)(~k+
1)S( dZk+ 1luk' Xk+ d
f;!;k+ 1 Pk+ 1(P; i b
1
Ub
Zk+ 1)(~k+
1)S( dzk+ dUb Xk+ d
fSk+
1
f~k+
1
r(~k+
dJ[Pk(P; id, Uk]; U b Zk+ 1)
s(dZk+ 1luk'Xk+ dJ[Pk(P; i k), Uk] (dXk+ 1),uk(duklp; i k) dPk(n, p)
S
~k+
1
s(Zk+1IUk,Xk+dJ[Pk(P;ik),Uk](dxk+1) -
,uk(duklp; iddPk(n,p)
r
r
J{lk E!k} J~k X
Ub
t(dxk+ 11xb Ud[Pk(P; ik)(dxk)],uk(duklp; i k) dPk(n, p)
= J{lkE.!:k} r Jf.\ r X
S;!;k+ 1 Pk+ 1(P; i b
1
t(dxk+ 11xb Ud,uk(duklp; ik)[Pk(P; ik)(dxk)] dPk(n, p)
LkE.!:i
ds(dZk+ 11Ub Xk + 1)
t(dXk+1lxk' Ud,uk(duklp; ik)dPk(n,p)
= LkE!k} 1k fSkfSk+ X
d dP k+ 1(n,p)
fSk+ 1 S;!;k + /k+ 1(p; i k, Zk+ d(~k+
= LkE!k} Isk 1k fSk+ X
Zk+ 1)(~k+
SS Sk §k
+ 1
S(Zk+dUk,Xk+1)t(dxk+1IXk,Uk)[Pk(P;ik)(dxk)] -
,uk(duklp; ik)dPk(n,p)
r.
J{lkE!kl
SSk J~kr S§k
+
1
S(~k+dUk,Xk+dt(dxk+1IXbUd,uk(duklp;id
x [Pk(p;ik)(dXk)]dPk(n,p)
= =
r.
J{lkE!kd~k
r
S§k+ S(~k+1IUbXk+dt(dXk+1IXbUk),uk(duklp;ik)dPk(n,p) 1
Pk+1(n,p)( {ikElb Uk E ~bXk+
1 E~k+
1,
Zk+ 1 E~k+
d)·
(55)
It follows from (55) and the definition of conditional probability that Pk+1(n,p)[x k+1 E~k+1Iik+1]
= Pk+1(P;ik+1)(~k+d
for Pk + 1(n,p) almost every i k , and the induction step is completed.
Q.E.D.
Proposition 10.5  Consider the (ISI) model and assume that U_k(x) = C for every x ∈ S and k = 0, ..., N - 1. Then the sequence [P_0(p; i_0), ..., P_{N-1}(p; i_{N-1})] defined by (51) and (52) is a statistic sufficient for control, and the resulting perfect state information model is stationary.
Proof Let Yk in Definition 10.6 be P(S), k = 0, ... ,N - 1. We have already seen that the mappings Pk: P(S)h -> P(S) are Borel-measurable, so (Po, . . . ,PN- d is a statistic. Condition (a) of Definition 10.6 is satisfied with = P(S)C, k = 0, ... ,N - 1. For yE P(S), UE C and I E~p(S)' define
r,
= {zEZlr[dxIJ(y,u);U,Z]E X},
~(y,u,X)
t(IIY,u) =
IsIss[~(y,U,
.Dlu,x']t(dx'lx,u)y(dx).
Note that ~(y, u, D is the (y, u)-section of the inverse image of Borel-measurable function. The stochastic kernel
A(~lx,
u) = Is
s(~lu,
(56)
I
under a
x')t(dx'lx, u)
is Borel-measurable by Propositions 7.26 and 7.29, so the stochastic kernel
A(~I
y, u) = Is Is s(~lu,
x')t(dx'lx, u)y(dx)
= Is A(~lx,
u)y(dx)
is Borel-measurable by the same propositions. It follows from Proposition 7.26 and Corollary 7.26.1 that t(dY'IY,u) is a Borel-measurable stochastic kernel on P(S) given P(S)C. For nEIl, P E P(S), XE ~p(S) and k = 0, ... ,N - 2, we have from Lemma 10.4 PH
d E XIPk(P;ik) = Yb u; = Uk] = Pk+ l(n, p)[Zk+ 1 E ~(Yk' u., Dlpk(P; ik) = Yb u, = Uk] = E{PHdn,p)[ZH1 E~(YbUk' D!ibUk]lpk(P;ik) = YbUk = ud
l(n, P)[Pk+ l(P; i H
= E{Is Is S[~(Yk'
Uk' .!:JIUb
x [Pk(P; ik)(dxd]jpk(P;
XH
l]t(dxH dXk' Uk)
id = Yk' Uk = Uk}
= t(IIYb Uk) for Pk(n,p) almost every (Yb ud, where the expectations are with respect to PH l(n,p). Thus (22) is satisfied. For nEIl, pEP(S), and k = 0, ... ,N - 1, we have from Lemma 10.4 E[g(Xb udlpk(P;
id =
Yb Uk = Uk]
= E{E[g(XbUk)iik,uk]lpk(P;id = YbUk = Uk}
= E{Isg(XbUk)Pk(P;U(dXdlpk(P;id = YbUk = Uk} = Is g(Xb Uk)Yk(dxk)
(57)
for Pk(n, p) almost every CYb ud, where the expectations are with respect to Pk(n,p). The function g:P(S)C ~ R* defined by g(y, u)
=
Is g(x, u)y(dx)
(58)
is lower semianalytic (Proposition 7.48), and, by (57), gsatisfies (23).
Q.E.D.
If the horizon is finite, then the transition kernel t̄ and the one-stage cost function ḡ defined by (56) and (58) can be substituted in the dynamic programming algorithm (46)-(47) to compute the optimal cost function J̄_N* for (PSI). The optimal cost function J_N* for (ISI) can then be determined from (41). If the horizon is infinite, in the limit the dynamic programming algorithm (46)-(47) yields J̄* under (N̄) and (D̄), and under (P̄) in some cases (Propositions 9.14 and 9.17). The determination of J* from J̄* is again accomplished by using (41).

10.3.2
The Identity Mappings
Proposition 10.6  Let the model (ISI) be given. The sequence of identity mappings on P(S)I_k, k = 0, ..., N - 1, is a statistic sufficient for control.

Proof  Let Y_k in Definition 10.6 be P(S)I_k, k = 0, ..., N - 1, and let η_k be the identity mapping on P(S)I_k. Then (η_0, ..., η_{N-1}) is a statistic. Condition (a) of Definition 10.6 is satisfied with Γ̄_k = P(S)Γ_k, k = 0, ..., N - 1. If Y̲_{k+1} ∈ ℬ_{P(S)I_{k+1}}, y_k ∈ P(S)I_k, and u_k ∈ C_k, we adopt the notation
r,
= {Zk+
(~+l)(V/"U")
1EZICp;zo,U o, ... ,Uk-loZbUk,Zk+l)E .lk+l]'
where Yk = (p;zo,u o ....• Uk-1,Zk)' Using this notation, we define for k = 0, ... ,N - 2 the stochastic kernel tk(dYk+ llYb Uk) on P(S)h+ 1 given P(S)IkC by tkO'/'+ llYb ud
= J~Is
k
+
1
s[(Xk+ d(Y'ou'JIUk' xk+ l]t(dxk+ llxb UdPk(Yd(dxd (59)
where Pk(Yk) is given by (51) and (52). By an argument similar to that used in Proposition 10.5, it can be shown that tk is Borel-measurable. For P E P(S), n E n,l'i<+ 1 E 28 P (S )I k+ [, and k = 0, ... ,N - 2, we have from Lemma 10.4 Pk+ l(n, p)[1Jk+ l(P; ik+ d E ~+ l!lJk(P; i k) = Yb Uk =
= Uk]
Pk+l(n,p)[(Yk,UbZk+dE~+l]
= Isk+ I s[(~+
d(Yk.Udlub Xk+ l]t(dxk+ llxb UdPk(Yd(dxd
= tk(~+lIYk,Uk)' for Pk(n,p) almost every (Yb ud, so (22) is satisfied.
For k = 0, ... ,N - 1, define {Jk: P(S)IkC ---+ R* by (Jk(J!k, Uk)
= JSk r g(Xk' UdPk(Jik)(dxd.
By Proposition 7.48, (Jk is lower semianalytic for each k. For p e P(S), and k = 0, ... ,N - 1, we have from Lemma 10.4 E[g(Xb uk)llJ(p; id
(60) nE
Il,
= Jik' Uk = Uk] = JSk r g(Xb Uk)Pk(Jik)(dxd = (Jk(Jib ud
for P_k(π, p) almost every (y_k, u_k), where the expectation is with respect to P_k(π, p), so (23) is satisfied. Q.E.D.

The transition kernels t̄_k and one-stage cost functions ḡ_k defined by (59) and (60) can be used in the nonstationary version of the dynamic programming algorithm (46)-(47). See the discussion following Proposition 10.5.
Chapter 11
Miscellaneous
11.1
Limit-Measurable Policies
In this section we strengthen the results of Section 7.7 concerning universally measurable functions. In particular, we show that these results are still valid if limit-measurable functions (Definitions B.2 and B.3) are used in place of universally measurable functions. This allows us to replace all the results on the existence of universally measurable policies in Chapters 8 and 9 by stronger results on the existence of limit-measurable policies. We now rework the main results of Section 7.7 with the aid of the concepts and results of Appendix B.

Proposition 11.1  Let X, Y, and Z be Borel spaces, D ∈ ℒ_X, and E ∈ ℒ_Y. Suppose f: D → Y and g: E → Z are limit-measurable and f(D) ⊂ E. Then the composition g∘f is limit-measurable.

Proof
This follows from Corollary B.1l.l.
Q.E.D.
Corollary 11.1.1  Let X and Y be Borel spaces, let f: X → Y be a function, and let q(dy | x) be a stochastic kernel on Y given X such that, for each x, q(dy | x) assigns probability one to the point f(x) ∈ Y. Then q(dy | x) is limit-measurable if and only if f is limit-measurable.

Proof
See the proof of Corollary 7.44.3.
Q.E.D.
Proposition 11.2 Let X and Y be Borel spaces and let q(dYlx) be a stochastic kernel on Y given X. The following statements are equivalent:
(a) The stochastic kernel q(dYlx) is limit-measurable. (b) For every BE8Uy , the mapping )'B:X --* R defined by (1)
is limit-measurable. (c) For every QE!l'y, the mapping AQ of (1) is limit-measurable. Proof
Now AQ =
We prove (a)=(c)=(b)=(a). Suppose (a) holds and QE!l'y. y, where y:X --* P(Y) is given by
(}Q 0
y(x)
and (}Q:P(Y)
--*
= q(dYlx)
(2)
R is given by (3)
We have assumed that y is limit-measurable, and {}Q is limit-measurable by Proposition B.12. Therefore (c) holds. It is clear that (c) = (b). Suppose now that (b) holds. Then
Letting y and
(}B be
y-l[8U p (y) ] = =
defined by (2) and (3), we have from Proposition 7.25
y-{aC~.t
(}i 1(8U R ))
a[ U y-l({}i
1(8U
R) )]
BE@y
so q(dYlx) is limit-measurable.
]
=
a[ U Ai (88 1
BE@y
R )]
c
!l'x,
Q.E.D.
Proposition 11.3 Let X and Y be Borel spaces and let f: X Y --* R * be limit-measurable. Let q(dy[x) be a limit-measurable stochastic kernel on Y given X. Then the mapping A:X --* R* defined by A(X)
f
= f(x,y)q(dYlx)
is limit-measurable. Proof The mapping (j(x) = Px is continuous (Corollary 7.21.1), as is the mapping a: P(X)P(Y) --* P(X Y) defined by a(p, q) = pq, where pq is the
product measure (Lemma 7.12). Suppose QEst X Y and
f =
XQ'
For every
XEX,
(4)
A(X) = [Pxq(dYlx)](Q) = eQ((J[O(x), rex)]),
where y and eQ are given by (2) and (3). Since all the functions on the righthand side of (4) are limit-measurable, Ie is limit-measurable. It follows that Ie is limit-measurable when f is a limit-measurable simple function. The extension to the general limit-measurable, extended real-valued function f is straightforward. Q.E.D. Corollary 11.3.1 Let X be a Borel space and let f:X measurable. Then the function ef: P(X) ~ R* defined by ef(p)
~
R* be limit-
= If dp
is limit-measurable. We have the following sharpened version of the selection theorem for lower semianalytic functions. Proposition 11.4 Let X and Y be Borel spaces, D c X Y an analytic set, and f: D ~ R* a lower semianalytic function. Define f*: proh(D) ~ R* by
f*(x) = inf f(x, y). YED x
The set 1= {xEprojx(D)[ for some yxEDx,f(X,yJ = f*(x)} is limit-measurable, and for every 8 > 0 there exists a limit-measurable function tp : proh(D) ~ Y such that Gr(cp) c D and for all x E proh(D) f[x, cp(x)] = .f*(x)
if x E I,
f *(X) + 8 f[x, cp(x)]::;; { -1/8
if x¢ I, f*(x) > if x ¢ I, .f*(x) = -
00, 00.
Proof The proof is the same as in Proposition 7.50(b), except that at the points where Corollary 7.44.2 is invoked to say that the composition of analytically measurable functions is universally measurable, we use Proposition 11.1 to say that the composition is limit-measurable. Q.E.D.
By the remark following Corollary B.11.1, we see that I and the selector obtained in Proposition 11.4 are in .fact stJc-measurable. This remark further suggests that the constructions in Chapters 8 and 9 of optimal and s-optimal
policies can be done more carefully by keeping track of the minimal σ-algebras with respect to which policies and costs are measurable. We do this to some extent in the next section, but do not pursue this matter to any great length. Propositions 11.1-11.4 are sufficient to allow us to replace every reference to a "(universally measurable) policy" in Chapters 8 and 9 by the words "limit-measurable policy." It does not matter which class of policies is considered when defining J_N* and J*; the proof of Proposition 8.1 together with Proposition 11.5 given below can be used to show that these functions are determined by the analytically measurable Markov policies alone. Corollary 11.1.1 tells us that the nonrandomized limit-measurable policies are just the set of sequences of limit-measurable functions from state to control which satisfy the control constraint (cf. Definition 8.2). This fact and Proposition 11.2 are needed for the proof of the limit-measurable counterpart of Lemma 8.2. From Proposition 11.3 we can deduce that the cost corresponding to a limit-measurable policy is limit-measurable (cf. Definitions 8.3 and 9.3). This fact was used, for example, in proving that under (F⁻) a nonrandomized, semi-Markov, ε-optimal policy exists (Proposition 8.3). Proposition 11.4 allows limit-measurable ε-optimal and optimal selection. The ε-optimal selection property for universally measurable functions is used in practically every proof in Chapters 8 and 9. The exact selection property is used in showing the existence under certain conditions of optimal policies (Propositions 8.5, 9.19, and 9.20).

11.2
Analytically Measurable Policies
Some of the existence results of Chapters 8 and 9 can be sharpened to state the existence of ε-optimal analytically measurable policies. This is due to Proposition 7.50(a) and the following propositions. Proposition 11.5 is the analog of Corollary 7.44.3 for universally measurable policies and of Corollary 11.1.1 for limit-measurable policies.

Proposition 11.5  Let X and Y be Borel spaces, let f: X → Y be a function, and let q(dy | x) be a stochastic kernel on Y given X such that, for each x, q(dy | x) assigns probability one to the point f(x) ∈ Y. Then q(dy | x) is analytically measurable if and only if f is analytically measurable.
We sharpen the proof of Corollary 7.44.3. Let y(x) = q(dYlx) and = Py , so that y = (50 f and j = (5 -loy. Now (5 is a homeomorphism from Y to Y = {pylYE Y}, so (5 and (5-1: Y ~ Yare both Borel-measurable. If f is analytically measurable and C E fJB P(y)' then Proof
(5(y)
y-l(C) = j-l[(5-1(C)] Ed x
because ()-l(C)E.?6'y. Ify is analytically measurable and BE.?6'y, then f-l(B)
because ?i(B) E.?6'P(Y).
=
y-l[?i(B)JESof x
Q.E.D.
Proposition 11.6 Let X and Y be Borel spaces and let q(dylx) be a stochastic kernel on Y given X. The following statements are equivalent:
(a) The stochastic kernel q(dYlx) is analytically measurable. (b) For every BE.?6'y, the mapping AB:X -> R defined by (5)
is analytically measurable. Proof Assume (a) holds and define y(x) = q(dYlx). Then for BE goy, C E go R' and 8B:P(Y) -> R defined by (3), we have
because θ_B^{-1}(C) ∈ ℬ_{P(Y)} (Proposition 7.25). Therefore (b) holds. If (b) holds, we can show that (a) holds by the same argument used in the proof of Proposition 11.2. Q.E.D.

We know from Corollary B.11.1 that the composition of analytically measurable functions need not be analytically measurable, so the cost corresponding to an analytically measurable policy for a stochastic optimal control model may not be analytically measurable. To see this, just write out explicitly the cost corresponding to a two-stage, nonrandomized, Markov, analytically measurable policy (cf. Definition 8.3). A review of Chapters 8 and 9 shows the following. Proposition 8.3 is still valid if the word "policy" is replaced by "analytically measurable policy," except that under (F⁻) an analytically measurable, nonrandomized, semi-Markov, ε-optimal policy is not guaranteed to exist. However, an analytically measurable nonrandomized ε-optimal policy can be shown to exist if g ≤ 0 [B12]. The proof of the existence of a sequence of nonrandomized Markov policies exhibiting {ε_n}-dominated convergence to optimality (Proposition 8.4) breaks down at the point where we assume that a sequence of one-stage policies {μ_0^n} exists for which

T_{μ_0^n}(J_0) ≤ T_{μ_0^{n-1}}(J_0).

This occurs because T_{μ_0^{n-1}}(J_0) may not be analytically measurable. In the first sentence of Proposition 9.19, the word "policy" can be replaced by "analytically measurable policy." The ε-optimal part of Proposition 9.20 depends on the (F⁻) part of Proposition 8.3, so it cannot be strengthened in
this way. Under assumption (N), an analytically measurable, nonrandomized, ε-optimal policy can be shown to exist [B12], but it is unknown whether this policy can be taken to be semi-Markov. The results of Chapters 8 and 9 relating to existence of universally measurable optimal policies depend on the exact selection property of Proposition 7.50(b). Since this property is not available for analytically measurable functions, we cannot use the same arguments to infer existence of optimal analytically measurable policies.

11.3 Models with Multiplicative Cost
In this section we revisit the stochastic optimal control model with a multiplicative cost functional first encountered in Section 2.3.4. We pose the finite horizon model in Borel spaces and state the results which are obtainable by casting this Borel space model in the generalized framework of Chapter 6. This does not permit a thorough treatment of the type already given to the model with additive cost in Chapters 8 and 9, but it does yield some useful results and illustrates how the generalized abstract model of Chapter 6 can be applied. The reader can, of course, use the mathematical theory of Chapter 7 to analyze the model with multiplicative cost directly under conditions more general than those given here.

We set up the Borel model with multiplicative cost. Let the state space S, the control space C, and the disturbance space W be Borel spaces. Let the control constraint U mapping S into the set of nonempty subsets of C be such that

Γ = {(x, u) | x ∈ S, u ∈ U(x)}

is analytic. Let the disturbance kernel p(dw | x, u) and the system function f: SCW → S be Borel-measurable. Let the one-stage cost function g be Borel-measurable, and assume that there exists b ∈ R such that 0 ≤ g(x, u, w) ≤ b for all x ∈ S, u ∈ U(x), w ∈ W. Let the horizon N be a positive integer. In the framework of Section 6.1, we define F to be the set of extended real-valued, universally measurable functions on S and F* to be the set of functions in F which are lower semianalytic. We let M be the set of universally measurable functions from S to C with graph in Γ. Define H: SCF → [0, ∞] by

H(x, u, J) = ∫_W g(x, u, w) J[f(x, u, w)] p(dw | x, u),

where we define 0·∞ = ∞·0 = 0·(-∞) = (-∞)·0 = 0. We take J_0: S → R* to be identically one. Then Assumptions A.1-A.4, F.2, and the Exact Selection Assumption of Section 6.1 hold. (Assumption A.2 follows from Lemma 7.30(4) and Propositions 7.47 and 7.48. Assumption A.4 follows from Proposition
7.50.) From Propositions 6.1(a), 6.2(a), and 6.3 we have the following results, where the notation of Section 6.1 is used.

Proposition 11.7  In the finite horizon Borel model with multiplicative cost, we have

J_N* = T^N(J_0),

and for every ε > 0 there exists an N-stage ε-optimal (Markov) policy. A policy π* = (μ_0*, ..., μ_{N-1}*) is uniformly N-stage optimal if and only if (T_{μ_k*} T^{N-k-1})(J_0) = T^{N-k}(J_0), k = 0, ..., N - 1, and such a policy exists if and only if the infimum in the relation

T^{k+1}(J_0)(x) = inf_{u ∈ U(x)} H[x, u, T^k(J_0)]

is attained for each x ∈ S and k = 0, ..., N - 1. A sufficient condition for this infimum to be attained is for the set

U_k(x, λ) = {u ∈ U(x) | H[x, u, T^k(J_0)] ≤ λ}

to be compact for each x ∈ S, λ ∈ R,
and k = 0, ... , N - 1.
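As a hypothetical finite-model illustration of Proposition 11.7 — not part of the original text — the following Python sketch computes T^N(J_0) for the multiplicative cost functional, with J_0 identically one, U(x) = C for simplicity, and H(x, u, J) given by the sum over a finite disturbance set; the state, control, and disturbance sets and all numbers are made-up assumptions.

import numpy as np

def T(J, g, f, p):
    """Apply (T J)(x) = min_u H(x, u, J), where
    H(x, u, J) = sum_w g(x, u, w) * J[f(x, u, w)] * p(w | x, u)."""
    nS, nU, nW = g.shape
    H = np.zeros((nS, nU))
    for x in range(nS):
        for u in range(nU):
            H[x, u] = sum(g[x, u, w] * J[f[x, u, w]] * p[x, u, w]
                          for w in range(nW))
    return H.min(axis=1)

# Made-up model: 2 states, 2 controls, 2 disturbance values.
g = np.array([[[1.0, 2.0], [0.5, 1.0]],
              [[1.5, 0.5], [2.0, 1.0]]])      # g[x, u, w], bounded by b
f = np.array([[[0, 1], [1, 1]],
              [[0, 0], [1, 0]]])              # next state f(x, u, w)
p = np.array([[[0.7, 0.3], [0.5, 0.5]],
              [[0.4, 0.6], [0.9, 0.1]]])      # p(w | x, u)
J = np.ones(2)                                # J_0 identically one
for _ in range(4):                            # after N = 4 steps, J = T^N(J_0)
    J = T(J, g, f, p)
print(J)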
Appendix A
The Outer Integral
Throughout this appendix, (X, ℬ, p) is a probability space. Unless otherwise specified, f, g, and h are functions from X to [-∞, ∞].

Definition A.1  If f ≥ 0, the outer integral of f with respect to p is defined by

∫* f dp = inf { ∫ g dp : f ≤ g, g is ℬ-measurable }.   (1)

If f is arbitrary, define

∫* f dp = ∫* f⁺ dp - ∫* f⁻ dp,   (2)

where

f⁺(x) = max{0, f(x)},   f⁻(x) = max{0, -f(x)},

and we set ∞ - ∞ = ∞.

Lemma A.1  If f ≥ 0, then there exists a ℬ-measurable g with g ≥ f such that

∫* f dp = ∫ g dp.   (3)
Proof
Choose gn
gn
~f,
so that
~-measurable,
We assume without loss of generality that gl ~ g2 ~ .... Let g = Then g ~ I, g is ~-measurable, and (3) holds. Q.E.D. Lemma A.2
If /
~
0 and h
f*(f
~
limn~:M
gO'
0, then
+ h)dp:S;
f*/dp
+ f* hdp.
(4)
If either / or h is 86'-measurable, then equality holds in (4). Proof Suppose e. ~ f, g2 ~ t, e, and g2 are ~-measurable, and S*f dp = Sgl dp, S*h dp = Sg2dp. Then e. + g2 ~ / + hand (4) follows from (1). Suppose h is 86'-measurable and Sh dp < 00. [If Sh dp = 00, equality is easily seen to hold in (4).J Suppose / + h :s; q, where g is ~-measurable and
J*(f + h)dp = f gdp. Then / :s; g - hand g - h is 86'-measurable, so f* / dp :s; f g dp -
f h dp,
which implies f*
f
dp
+ fhdp:s;
Therefore equality holds in (4).
f*(f
+ h)dp.
Q.E.D.
We provide an example to show that strict inequality can occur in (4), even if f + h is ℬ-measurable. For this and subsequent examples we will need the following observation: for any E ⊂ X,

∫* χ_E dp = p*(E),   (5)

where p*(E) is the p-outer measure defined by

p*(E) = inf { p(B) | E ⊂ B, B ∈ ℬ },

and χ_E is the indicator function of E defined by χ_E(x) = 1 if x ∈ E, and χ_E(x) = 0 if x ∉ E.
To verify (5), note that if XE ::;; 9 and 9 is 3l-measurable, then {xjg(x) ~ 1} is a 3l-measurable set containing E and consequently f 9 dp 2': p*(E).
Definition A.1 implies
r
XE dp
(6)
~ p*(E).
On the other hand, if {Bn } is a sequence of 3l-measurable sets with E c B; and p(BnH p*(E), then 1 B n) = p*(E). By construction, Xn7: 01 s; ~ XE' But Xn7:=l Bn is ~-measurable, and
p(n:=
The reverse of inequality (6)follows. Note that the preceding argument shows that for any set E, there exists a set B E ~ such that E c Band p(B) = p*(E). EXAMPLE 1 Let X = [0,1]. let ~ be the Borel o-algebra, and let p be Lebesgue measure restricted to ~. Let E c X be a set for which p*(E) = p*(X - E) = 1 (see [HI, Section 16, Theorem E]). Then
f(XE
+ XX-E)dp =
f* XEdp + f* Xx-Edp
=
f 1 dp = 1,
2,
and strict inequality holds in (4). Lemma A.2 cannot be extended to (possibly negative) bounded functions, even if h is 3l-measurable, as the following example demonstrates. EXAMPLE
2
Let (X,31,p) and E be as before. Let f =XE - XX-E, h = 1.
Then f*cr
+ h)dp =
f* 2XEdp = 2,
f*fdp+ fhdp= f*XEdp- f*xx-Edp+ 1 = 1. Lemma A.3
(a) Iff::;; g, then S*f dp ::;; S*gdp. (b) If s > 0 and f ::;; 9 ::;; f + e, then f* .f dp ::;; f* 9 dp s f*.f dp
+ 2£.
(7)
If S*f+ dp <
(c)
00
or S*f- dp <
00,
then
(8)
S*( -f)dp = - S* f dp. If A, BE iJU are disjoint, then for any f
(d)
S* XAuBf dp = S* XAf dp (e) If E c X satisfies p*(E)
=
+ S* XBf dp.
(9)
0, then for any f
S* f dp = S* XX-Ef dp. (f) If p*({xlf(x) = oo}) > 0, then for every g, S*(g + f) dp = 00. (g) If p*({xlf(x) = - 00 }) > 0, then for every 9 either S*(g + f) dp
or S*(g + ndp
Proof
(a)
= -
=
00
00.
r: s g+ and f- a>. By (1), S*f+ dp s S* e: dp, S* r: dp ~ S* e dp. Iff s g, then
~
The result follows from (2). (b) In light of (a), it remains only to show that
S*U For e, ~ f+,
+ e)dp s
S* f dp
+ 2e.
(10)
e. iJU-measurable, and S*f+dp= Sg1 dp,
we have so
S*tr For gz
~
U
+ et dp s
S a, dp + s = S* i
+ e)-, gz iJU-measurable, S*tr
'
dp
+ e.
(11)
and
+ e)- dp =
S gz dp,
we have 92
+ e ~ U + e)- + e =
max U - - e, OJ
+s ~
j":
so
e+ S*U+e)-dp=e+ Sgzdp= Combine (11) and (12) to conclude (10).
Yg2+e)dp~S*rdp.
(12)
(c) We have f*(-f)dp= f*(-f)+dp- f*(-f)-dp = f* f - dp - f* f
= -
r
+
dp
=
-
[f* f
+
dp -
r
f - dPJ
fdp,
, where the assumption that S*f + dp < 00 or S*f - dp < 00 is necessary for the next to last equality. (d) Suppose r > O. Let gbea.'J6'-measurable function with g::::: XA u Bfand f* XAvBf dp =
Sg dp.
Then XAg::::: XAf, XBg ::::: XBf, so f* XAuBf dp = f XAg dp
:: r
+f
XAf dp
XBg dp
+ f* XBf dp.
(13)
Now suppose gl ::::: XAf, gz ::::: XBf are .?4-measurable and f g 1 dp = f* XAf dp,
Then
e. + gz ::::: XAuBf, so f* XAf dp
+
r
f gz dp
f* XBf dp.
=
XBf dp = f(gl
+ gz) dp
::::: f* XAvBf dp.
Combine (13) and (14) to conclude (9) for f
f is straightforward.
(14)
::::: O. The extension to arbitrary
(e) Suppose f::::: O. Choose BE.?4 with p(B) = p*(E) = 0, B:::J E. By (d),
r
f dp = f* XX-Bf dp ::; f* XX-Ef dp ::; f* f dp.
Hence J*f dp = S*Xx-Ef dp. The extension to arbitrary f is straightforward. (f) We have (g + f)+(x) = 00 if f(x) = 00, so that p*({xl(g
+ f)+(x) = co]) >
O.
Hence S*(g + f)+ dp = 00, and it follows that S*(g + f) dp = 00. (g) Consider the sets E = {xlf(x) = - oo} and E g = {xlf(x) = g(x) < oo}. If p*(Eg ) = 0, then p*(E - E g }
=
p*(E - E g }
+ p*(E
g } :::::
p*(E} > O.
00,
Since we have f(x) + g(x) = 00 for x E E - E g , it follows from (f) that S*(g + .f)dp = 00. If p*(Eg ) > 0, then p*({xl(g + .f)-(x) = ooj) 2': p*(Eg ) > 0 and hence, by (f), S*(g + .f)- dp = 00. Hence, if S*(g + .f)+ dp = 00, then S*(g + .f) dp = 00, while if S*(g + .f)+ dp < 00, then S*(g + .f) dp = - 00. Q.E.D. The bound given in (7) is the sharpest possible. To see this, let f be as defined in Example 2, 9 = f + 1, and s = 1. Despite these pathologies of outer integration, there is a monotone convergence theorem, which we now prove.
f" i
Proposition A.I f, then
If {f,,} is a sequence of nonnegative functions and
f* t: dp i f* f dp. If {f,,} is a sequence of nonpositive functions and
(15)
f" t f, then
f* t: dp t f* f dp. Proof We prove the first statement of the theorem. The second follows from the first and Lemma A.3(c). Assume I; 2': 0 and I; t f. Let {gn} be a sequence of .94-measurable functions such that gn 2': f" and (16)
If, for some n, Sgn dp = S*f" dp = every n,
00,
then (15) is assured. If not, then for
f gndp <
(17)
00.
Suppose (17) holds for every n and for some n, p({xlgn(x) > gn+l(X)}) > O.
Then since 9 n + 1 2': f" + 1 2': f", we have that 9 defined by if gn(x):::; gn+ 1 (x), if gn(x) > gn+ l(X), satisfies gn 2': 9 2': f" everywhere and 9 < gn on a set of positive measure. This contradicts (16). We may therefore assume without loss of generality that (17) holds and 9 1 :::; 9 2 . . . . Let 9 = lim.., gn' Then 9 2': f and lim n~oo
r
00
I, dp = lim fgn dp = fg dp 2': n-co
r
f dp.
But fn :::; f for every n, so the reverse inequality holds as well.
Q.E.D.
One might hope that if {In} is a sequence offunctions which are bounded below and In i f, then (15) remains valid. This is not the case, as the following example shows. EXAMPLE 3 Let X = [0,1), flJ be the Borel a-algebra, and P be Lebesgue measure restricted to flJ. Define an equivalence relation - on X by
x - y-=-x - y is rational. Let F ° be constructed by choosing one representative from each equivalence class. Let Q = {qo, qb' ..} be an enumeration of the rationals in [0,1) with qo = and define
°
Fk = {x + qk[modl]lxEFd = F o + qk[modl]
k = 0,1, ....
Then F 0, F I, . . . is a sequence of disjoint sets with 00
U r, =
k=O
[0, 1).
(18)
If for some n < 00, we have P*(Ur'=nFk) < 1, then E = UZ=6 F, contains set with measure b > 0. For k = 1,... , n - 1, let qk = rdsk' a ~-measurable where rk and Sk are integers and rk/sk is reduced to lowest terms. Let {PI, Pz, ... } be a sequence of prime numbers such that
max
1 '5',kS,/1-1
Sk < PI < pz < ...
Then the sets E, E + Pl 1 [mod 1], E + Pi. I [mod 1], ... are disjoint, and by the translation invariance of p, each contains a flJ-measurable set with measure b > 0. It follows that [0,1) must contain a flJ-measurable set of infinite measure. This contradiction implies (19)
for every n. Define n = 0,1, ....
Then In i 0, but (5) and (19) imply that for every n
f* fndp = -1. Bya change of sign in Example 3, we see that the second part of Theorem A.l cannot be extended to functions which are bounded above unless additional conditions are imposed. We impose such conditions in order to prove a corollary.
Corollary A.I.1 Let {IOn} be a sequence of positive numbers with I IOn < 00. Let {f,,} be a sequence with
I:'=
lim I, = f,
f
~
f,,(x)
~
+ IOn fn-I(X) + IOn f(x)
f* I, dp <
00.
f* fn dp =
r
Then lim n-oo
n = 1,2, ... ,
fn'
~
f,,(x)
(20) (21)
f(x) > -
00,
if f(x) = -
00,
if
(22)
n = 2,3, ... ,
(23) (24)
f dp.
(25)
Proof From(20)wehavelim n_oof: 00. By Proposition A.1,
rr
dp
=
lim
n-e co
r
inf
ke n
r, dp ~
r r: r dp
lim II-CO
~
i
dp,
so lim n-oo
f* r. dp f* r =
(26)
dp.
Let A = {xlf(x) = - co]. Ifp*(A) = 0, then (21),(22), (24), and Lemmas A.3(b) and (e) imply
so lim n-oo
f* t: dp =
r
f+ dp <
00.
(27)
Combine (26) and (27) to conclude (25). If p*(A) > 0, then S*f- dp = and (26) will imply (25) provided that
00
f* i: dp <
00
(28)
f* t: dp <
00.
(29)
and lim sup n-e oo
Conditions (21) and (24) imply (28). Conditions (21)-(23) imply for every XEX
n
= 2,3, ... ,
so
and
S* r: dp ~ 2 f
k=2
The finiteness of
Lk'=
2 ck
Ck
+
and (24) imply (29).
S* f~
dp.
Q.E.D.
Appendix B
Additional Measurability Properties of Borel Spaces
This appendix supplements Section 7.6. The notation and terminology used here is the same as in that section and, in most cases, is defined in Section 7.1. B.l
Proof of Proposition 7.35(e)
Our first task is to give a proof of Proposition 7.35(e). To do this, we introduce the space N* = {I, 2, ...} u { oo] with the topology induced by the metric d(x,Y)
=
I~
-11,
where we define 1/ w = O. Let %* = N* N* ... with the product topology. The space % of sequences of positive integers is a topological subspace of %*. The space %* is compact by Tychonoff's theorem, while % is not. If(X,&) and (Y,2) are paved spaces, we denote by f!J2 the paving of XY: f!J2
=
{PQ!p
E
&, Q E 2}.
(1)
Proposition B.l Let (X, (!J» be a paved space and X the collection of compact subsets of %*. Then the projection on X of a set in 9'U?lix) is in 9'(9). Conversely, every set in 9'(f!J) is the projection on X of some set in [(f!J X),rJo' 282
283
ADDITIONAL MEASURABILITY PROPERTIES OF BOREL SPACES
Proof Let S be a Suslin scheme for ,?Ji,7{'. Then for every the form S(s) = Sl(S)S2(S), where Sl(S)E gPand S2(S)E%. Now N(S)
=
SE
L, S(s) has
U n S(s)
ZE%S
=
u n[Sl(S)S2(S)]
ze% s
so proh[N(S)] =
U n Sl(S),
ZEAs
where
Since each S2(S) is compact, we have A = {((b (2" ..)E
Define a Suslin scheme R for
JV\Ol S2((1, (2"
f!jJ
.. , (d =F
0
\in}.
by
n S2((1,'" n
if
k=l
,(d =F
0,
otherwise. Then proh[N(S)] =
U n Sl(S)
ZEAs
=
u n R(s) = N(R),
ze%s
so projy]' N(S)] E Y(&). For the second part of the proposition, suppose S is a Suslin scheme for &. Define a Suslin scheme R for % by R((Jb'"
,(In) = {((b(2," .)EA'*I(l =
(Jl,'"
,(n = (In}'
For fixed zoEA', we have ns
sOo [S(s)R(s)] = LOo S(S)][sOo R(S)] =·{(X,ZO)IXE n S(S)}. 5
(2)
284
APPENDIX B
Therefore,
=
N(S)
U n S(S)
ze% s
= z~.v projx{Dz [S(s)R(s)]} =
projxt~.vDz[S(S)R(S)]},
and it remains only to show that
u n[S(S)R(S)]E[(~;n()"]o'
(3)
ze%s
If we can show that
u n[S(s)R(s)] n u [S(s)R(s)], 00
(4)
=
ZE,AI
s
k= 1
5Erk
where Lk is the set of elements in L having k components, then (3) will follow. Let XEX and Zo = ((?, (g, .. .)EJV* be given. Suppose (x, zo) E
n n[S(s)R(s)].
ze% s
We see from (2) that Zo E JV and (x, zo)E ns
and
U
n
nU
nk"'=
1
USE~k[S(s)R(s)],
00
[S(s)R(s)] c
ze% s
[S(s)R(s)].
(5)
k= 1 serk
nk"'=
On the other hand, if (x, Zo)E 1 USE~k[S(s)R(s)], then for each k:2: 1, This can happen only if ZoEJVo and (X,Zo)E S((?, ... , (2)R( (?, ... , (2). Therefore,
(x,ZO)EUSE~JS(s)R(s)].
n[S((?, ... ,(2)R((?, ... ,(2)] = n [S(s)R(s)] U n[S(s)R(s)], 00
(X,Zo)E
k=l
s < Zo
c
ze%s
which proves the reverse of set containment (5). Equality (4) follows.
Q.E.D.
If (X, ,qp) is a paved space, Y is another space, and Q paving of X Y by
c
Y, we define a
f?JQ = {PQ!PEf?J}. Lemma B.l (a) (b)
Let (X,&) and (Y,2) be paved spaces. Then:
9'(&)Q = 9'(.qpQ) for every 9'(&)2c9'(iJl!2).
Proof
Q c Y;
Part (a) is trivial and part (b) follows from (a).
Q.E.D.
We are now in a position to prove part (e) of Proposition 7.35. Proposition B.2 Proof
Let (X,&) be a paved space. Then 9'(&) = 9'[9'(iJl!)].
In light of Proposition 7.35(d), we need only prove
(6) Let JV* and X' be as in Proposition B.1. If A E9'[9'(&)], then by the second part of Proposition B.1, A = proh(B) for some set BE([SC(iJl!)X]")d' By Lemma B.1(b) and Proposition 7.35(b) and (c), we have
The first part of Proposition B.1 implies that A = proh(B)E9'(&) and (6) follows. Q.E.D. B.2
Proof of Proposition 7.16
In Proposition 7.16 we stated that Borel spaces X and Yare Borelisomorphic if and only if they have the same cardinality. A related result is that every uncountable Borel space is Borel-isomorphic to every other uncountable Borel space. We used the latter fact in Proposition 7.27 to assume without loss of generality that the Borel spaces under consideration were actually copies of (0, 1], we used it in Proposition 7.39 to transfer a statement about JV to a statement about any uncountable Borel space, and we will use it again in Proposition B.7 to allow our treatment of the limit o-algebra to center on the space JV. The proofs of Proposition 7.16 and Corollary 7.16.1 depend on the following lemma, which is an immediate consequence of Propositions 7.36 and 7.37. The reader may wish to verify that these propositions depend only on Propositions 7.35, B.l, and B.2, so no circularity is present in the arguments. Lemma B.2 Let X be a nonempty Borel space. There is a continuous function f from JV onto X.
Define .H to be the set of infinite sequences of zeroes and ones. We can regard A as the countable product of copies of {O, 1} and endow it with the product topology, where {O, 1} has the discrete topology. By Tychonoff's theorem, A is compact with this topology. It is also metrizable as a complete separable space. Our proof of Proposition 7.16 consists of three parts. We show first that every uncountable Borel space contains a Borel subset homeomorphic to ,/It, we show second that every uncountable Borel space is isomorphic to a Borel subset of A, and we show finally that these first two facts imply that every uncountable Borel space is isomorphic to A. Lemma B.3 Let X be an uncountable Borel space. There exists a compact set K c X such that A and K are homeomorphic.
Proof Let f:.¥ -> X be the continuous, onto function of Lemma B.2. For each x E X, choose an element Zx E JV such that x = f(zJ. Let 5 = {zxlXE X}, so that f is a one-to-one function from 5 onto X. For ZE5, if possible choose an open neighborhood T(z) of z such that 5 n T(z) is countable. Let R be the set of all Z E5 for which such a T(z) can be found. Since separable metrizable spaces have the Lindelof property, there exists a countable subset R' of R such that UZER T(z) = UZER' T(z), so R c 5n
LYR T(Z)] = z~,
[5 n T(z)J,
and R is countable. Since 5 is uncountable, 5 - R must be infinite. Furthermore, if z E5 - R, then every open neighborhood of z contains infinitely many points of 5 - R. Let d be a metric on JV consistent with its topology for which CAl,d) is complete. For Z E JV, the closed sphere of radius r centered at z is the set [zEJVld(z,z).:::; r}. The interior of this sphere, denotedlnt{zEJV[d(z,z) .:::; r}, is the set {zE JV Id(z, z) < r}. Let z(O) and z(1) be distinct points in 5 - R. Then f[z(O)] "# f[ z(l)J, so there exist disjoint open neighborhoods U and V of f[z(O)J and f[z(1)J respectively. Let 5(0) and 5(1) be disjoint closed spheres of radius no greater than one centered at z(O) and z(l) and contained in f-1(U) and f- 1(V) respectively. We have that f[5(0)J and f[5(1)J are disjoint. Note also that for every z E(5 - R) n Int 5(0), every open neighborhood of z contains infinitely many points of (5 - R) n Int 5(0), and the same is true of 5(1). By the same procedure we can choose distinct points z(O, 0) and z(O, 1) in (5 - R) n Int 5(0) and distinct points z(1,0) and z(1, 1) in (5 - R) n Int 5(1), and we can also choose disjoint closed spheres 5(0,0), 5(0,1),5(1,0) and 5(1, 1) of radius no greater than centered at z(O, 0), z(O, 1), z(1,0) and z(1,1), respectively, so that f[5(0, 0)], f[5(0, 1)J, f[5(1, O)J and f[5(1, 1)] are all disjoint. We can choose these spheres so that 5(0,0) and
t
S(O, 1) are contained in S(O), while S(l,O) and S(l, 1) are contained in S(1). At the kth step of this process, we choose a collection of disjoint closed spheres S(J..ll" .. ,J..lk) of radius no greater than 11k centered at distinct points Z(J..lb ... , J..lk) in S - R, where each J..lj is either zero or one. Furthermore, we can choose the spheres so that for each (J..lI, ... , J..lk - I)
(i) ![S(J..lb"" J..lk-l, 0)] n![S(J..lI'···' J..lk-b 1)] = 0, (ii) S(J..lI, ... , J..lk-bJ..lk) C S(J..lI'···, J..lk-l), J..lk = 0,1.
For fixed m = (J..lbJ..lZ," .)EA, the sets {S(J..lb·" ,J..ld} form a decreasing sequence of closed sets with radius converging to zero, so {Z(J..lI, ... , J..lk)} is I S(J..lI, ... , J..ld· Cauchy and thus has a limit cp(m) E We show that cp:"1t ---+ % is a homeomorphism. If (J..lI' J..lz, ... ) and (VI' Vz , ...) are distinct elements of A, then for some integer k, we have J..lk =I Vk· Since CP(J..lbllz,·· .)ESUIt> ,lid, cp(Vj, Vz," .)ES(V j , • . • , vd, and S(J..l[, ... ,J..lk) is disjoint from S(v[, , vk), we see that CP(J..lb J..lz,· .. ) =I cp(v j, Vz , ' ..), so tp is one-to-one. To show cp is continuous, let {m n } be a sequence converging to mEA. Choose s > and let k be a positive integer such that 2/k < e. There exists an "ii such that whenever n ;;::: 'ii, the elements mn and m = (J..lj, Ilz, ...) agree in the first k components, so both cp(mn ) and cp(m) are in S(lll' ... ,Ilk)' This implies d(cp(m n ), cp(m)) :s; 21k < e, so cp is continuous. To show that cp - 1 is continuous, it suffices to show that cp( F) is closed in cp(A) whenever F is closed in A. This follows from the fact that A is compact and cp is continuous. Define % 1 C % to be the compact homeomorphic image of A under cp. We now show that !:%1 ---+X is a homeomorphism. To see that f is one-to-one, choose distinct points Z and z in % i - Then there exist distinct points m = (J..ll' Ilz, ...) and m= ({iI, {iz,· ..) in A such that Z = cp(m) and 2 = cp(l'n). For some k, we have J..lk =I {ib so by (i), ![S(llo, . . . ,lId] n j'[S({io, ... ,{ik)] = 0· Since Z E S(J..lo, ... ,Ilk) and 2 E S({io, ... ,{ik), we see that f(z) =I f(z), so f is one-to-one. Just as in the case of q), the continuity of f - 1 follows from the fact that! is continuous and has a compact domain. The set K = !Cff1)is a compact subset of X homeomorphic to A. Q.E.D.
nk'=
°
Lemma B.4 Let X be an uncountable Borel space. There exists a Borel subset L of A such that X and L are Borel-isomorphic.
Proof By definition, X is homeomorphic to a Borel subset B of a complete separable metric space Y. By Urysohn's and Alexandroff's theorems (Propositions 7.2 and 7.3), Y is homeomorphic to a Go-subset of the Hilbert cube Yf, so B and hence X are homeomorphic to a Borel subset of Yf. It suffices then to show that Yf is Borel-isomorphic to a Borel subset of A. The idea of the proof is this. Each element in Yf is a sequence of real numbers in [0, 1]. Each of these numbers has a binary expansion, and by
mixing all these expansions, we obtain an element in A. Let us first define lj;: [0, 1] -+ A which maps a real number into a sequence of zeroes and ones which is its binary expansion. It is easier to define lj; - 1, which we define on A 1 U {(O, 0, 0, ... where
n,
A
1
=
{(Ill' Ilz,"
.)EA!llk
= 1 for infinitely many k}.
It is given by
and it is easily verified that lj; -1 is one-to-one, continuous, and maps onto [0, 1]. Since A - A 1 is countable, the domain of lj; - 1 is a Borel subset of A, and Proposition 7.15 tells us that lj; is a Borel isomorphism. Since we have not proved Proposition 7.15, we show directly that lj; is Borel-measurable. Consider the collection of sets R(k) R(k)
= =
{(lll,IlZ'"
.)EA!llk
{(Ilbllz,"
.)EA!llk
= O}, = I},
k = 1,2,
,
k
.
=
1,2,
These sets form a subbase for the topology of A, so by the remark following Definition 7.6, we need only prove that lj;-l[R(k)] and lj;-l[R(k)] are Borelmeasurable to conclude that lj; is. Since one of these sets is the complement of the other, we may restrict attention to lj; - 1 [R(k)]. Remembering that the domain oflj;-l is A 1 U {O,O,O, ... we have lj;-l[R(k)]
=
{Jl ~fl(ll'IlZ'"
n,
.)EA
1,
Ilk
= O}
U{O},
and
which is a finite union of Borel sets. The proof that A A ... and A are homeomorphic is essentially the same one given in Lemma 7.25, and we do not repeat it here. Let e mapping .~.~ ... onto A be a homeomorphism and define ip : Yf -+ .~ by cp(x1 , X Z"")
Then
tp
=
e[lj;(x 1),lj;(xz),··.].
is the required Borel-isomorphism.
Q.E.D.
Lemma B.5 If K 1 and L are Borel subsets of A, K 1 Borel-isomorphic to A, then L is Borel-isomorphic to A.
C
L, and K 1 is
Proof For Borel subsets A and B of A, we write A ~ B to indicate that A and B are Borel-isomorphic. Note that A ~ Band B ~ C implies
A ::::: C. Also, if A l' A 2 , ••• is a sequence of disjoint Borel sets, if B 1> B 2 , . . . is another such sequence, and if Ai::::: B, for every i, then U~ 1 Ai ::::: U~ 1 B i • We note finally that if A = A 1 U A 2 and A ::::: B, then B = B 1 U B 2 , where A 1 ::::: B 1 and A 2 ::::: B 2 . If A 1 and A 2 are disjoint, then B 1 and B 2 can be taken to be disjoint. Under the hypotheses of the lemma, let D 1 = At - K 1 • Since At 1 ::::: K 1 and At = K 1 U D 1 , there exist disjoint Borel sets K 2 and D 2 such that K 1 = K 2 U D2 , K 1 ::::: K 2 and D 1 ::::: D2 . Since K 1 ::::: K 2 and K 1 = K 2 U D2 , there exist disjoint Borel sets K 3 and D3 such that K 2 = K 3 U D3 , K 2 ::::: K 3 , and D 2 ::::: D 3 • Continuing in this manner, at the nth step we construct disjoint Borel sets K; and D; such that K n- 1 = K; U Dn, K n- 1 ::::: K n, and D~-I::::: Dn· Let K oo = n:'=IKn' Then At = x ; U [U:'=IDn], and all the sets on the right side of this equation are disjoint. Let A 1 = At - Land B 1 = L - K 1 • Then A 1 and B 1 are disjoint and D 1 = A 1 U B 1 • For each n, D 1 ::::: Dn, so D; = An U Bn, where An and B; are disjoint Borel sets and A 1 ::::: An' B 1 ::::: Bn· In particular, An::::: An+ 1 for n = 1, 2, ... , and we have
«;
U
::: x ;
u
At =
= {K oo
[91 = x ; [9 [9 [91 [91 DnJ
2
U
AnJ u
U
1
AnJ u
[91
BnJ
BnJ
DnJ} - A 1 = At - A 1 = L.
Q.E.D.
We can now prove Proposition 7.16, and the proof clearly shows that Corollary 7.16.1 is also true. Proposition B.3 Let X and Y be Borel spaces. Then X and Yare isomorphic if and only if they have the same cardinality.
Proof If X and Yare isomorphic, then clearly they must have the same cardinality. If X and Y both have the same finite or countably infinite cardinality, then their Borel a-algebras are their power sets and any oneto-one onto mapping from one to the other is a Borel-isomorphism. If X is uncountable, then by Lemma B.4 there exists a Borel isomorphism sp: X ~ At such that L =
B.3 An Analytic Set Which Is Not Borel-Measurable
Suslin schemes can be used to generate a strictly increasing sequence of o-algebras on any given uncountable Borel space X. The first o-algebra in this sequence is the Borel o-algebra !!J x and the second is the analytic 0"algebra d x, and, as a result of the following discussion, we will see that d x is strictly larger than !!J x- The proof of this depends on a contradiction involving universal functions, which we now introduce. Let JI{ 1 be the set of sequences of zeroes and ones for which one occurs infinitely many times. If the nonzero components of m E JI{ 1 are in positions m 1, mz, ... , then we can think of m as a mapping from JV to JV defined by
Definition B.l Let fl} be a paving of JV. A universal function L for a mapping from AI onto f!J. If fL is another paving of JV and
fl}
is
(7)
{Z E JVlz E L[ m(z)]} E fL we say L is consistent with fl.
Proposition B.4 Let 'fI be the collection of open subsets of Ai: There exists a universal function for 'fI consistent with 'fl. Proof The space JV is separable, so its topology has a countable base {G(l), G(2),.. .}, where the empty set is included among these basic open sets. Define L: JV ----> 'fI by 00
L(C, (z, ... )
=
U G((n)'
n=1
It is clear that L is a universal function for 'fl. Now choose m E JI{ 1 and suppose the nonzero components of m are in positions ml, mz, . . . . Choose Zo = ((y,(t ...) in the set
{ZL¥!zEL[m(z)]} = {((\,(z, .. .)EJV!((l,(Z," .)E Dl
Then for some k, we have
Zo E
G((~'jJ
G((mJ}.
Let
U,,(zo) = {((l,(Z," .)EJV!(mk = (~kl
Then G((~'j) c L[m(z)] for every ZE U,,(zo), so zEL[m(z)] for every ZE Udzo) n G((~k)' Therefore U,,(zo) n G((~j) is an open neighborhood of Zo contained in {zEJVlzEL[m(z)]}, so this set is open. Q.E.D.
Given a paved space and a universal function for the paving which satisfies a condition like (7), it is possible to construct similar universal functions for larger pavings. We show first how this is done when the given paving is extended by the use of Suslin schemes. Proposition B.5 Let flJi be a paving for Kand suppose that there exists a universal function for flJi consistent with Y'(flJi). Then there exists a universal function for Y'(flJi) consistent with Y'(flJi).
Proof Fix a partition {psis E l:} of the positive integers into countably many countable sets, and define for each S E l: a corresponding ms = (fll(S),fl2(S)", .)E '~I by
if kE p." if krj: r.,
(8)
Let L be a universal function for flJi consistent with Y'(flJi). Define K: JV Y'(flJi) by
K(zo) =
U
n L[mJzo)]·
-+
(9)
ZE.#"S
To show that K is onto, we must show that given any Suslin scheme S for flJi, there exists Zo E JV such that S(S)
= L[ms(zo)]
(10)
If S:l: -+ flJi is given and SEl:, then S(s)Ei?J. Since L is a universal function for flJi, there exists z, E JV for which S(s) = L(zs)' If Zo is chosen so that ms(zo) = z, for every S E l:, then (10) is satisfied, and such a choice of Zo is
possible because ms(zo) depends only on the components of Zo with indices in Ps . Therefore K is a universal function for Y'(.'?l'). If m, n E.4t I' then there is an element in At 1, which we denote by mn, such that (mn)(z) = m[ n(z)] for every z E JV. In fact, if the nonzero elements of m are (ml' m2" ..) and the nonzero elements of n are (nl' n2, . . .), then the nonzero elements of mn are (nml' nm2, ...). Now suppose mEAt I' We have
[zoEJVlzoEK[m(zo)]} = {zoEJVlzOE =
U
n{zo
ZE ..A''' s
E
Z~%DzL[(msm)(zo)]}
JVlzo E L[(msm)(zo)]},
which, since L is consistent with Y'(2Jl), is the nucleus of a Suslin scheme for 9"(2Jl). It follows from Proposition B.2 that K is consistent with Y'(.'?l'). Q.E.D. Corollary B.5.1
with 9"(''¥j).
There is a universal function for Y'(ff+) consistent
Proof Let '§ be the collection of open subsets of A'. By Propositions B.4 and B.5, there is a universal function for 9"('§) consistent with 9"('§), and it remains only to show that 9"('§) = 9"($' ff)' Since '§ c ~ ff' it follows from Proposition 7.36 that 9"('§) c 9"($'ff)' Since every closed subset of A' is a Go-set and, by Proposition 7.35, '§ 0 c 9"('§)o = 9"('§), we see that .'F,¥, C 9"(~). Proposition B.2 implies that 9"($'%) C .9" [9"(~)J = 9"('§). Q.E.D.
Corollary B.S.2 Let L be a universal function for 9"($',,,,) consistent with 9"($'ff)' The set Ao
= {zE.KlzEL(z)}
(11)
is analytic but not Borel-measurable, and A' - A o is not analytic. Proof
have
The set A o is analytic because L is consistent with 9"($' ff)' We A' - A o = {zEA'lz¢L(z)},
(12)
and if this set is analytic, then there exists Zo E A' such that
A' - A o = L(zo). If zoEA o, then zo¢L(zo), and (11) is contradicted. If zoEA' - A o , then Zo E L(zo) and (12) is contradicted. Therefore A' - A o is not analytic, thus not Borel-measurable, so A o is also not Borel-measurable. Q.E.D. Proposition B.6 Let X be an uncountable Borel space. There exists an analytic subset A of X such that A is not Borel-measurable and X - A is not analytic. Proof Let cp:A' ~ X be a Borel isomorphism from JV onto X (Corollary 7.16.1), and let A o c .K be as in Corollary B.5.2. Then A = cp(Ao) is analytic, but since A' - A o = cp - l(X - A) is not analytic, neither is X-A. It follows that A is not Borel-measurable. Q.E.D.
B.4
The Limit a-algebra
We construct a collection of σ-algebras indexed by the countable ordinals, and at the end of this process we arrive at the limit σ-algebra, denoted by ℒ_X. The proofs of many of the properties of ℒ_X, and indeed the definition of ℒ_X, proceed by transfinite induction. We also make frequent use of the fact that if {α_n} is a sequence of countable ordinals, then there exists a countable ordinal α such that α_n < α for every n. In keeping with standard convention, we denote by Ω the first uncountable ordinal.
Definition B.2 Let X be a Borel space and '§X the collection of open subsets of X. For each countable ordinal IX, we define
(13) (14)
The limit a-algebra is (15)
We prove later (Proposition B.10) that 2 x is in fact a a-algebra. Note that 2~ = !!J x and 21 = sf x- When X is countable, !!J x = 2~ for every a < Q. If X is uncountable, there is no loss of generality in assuming X = ""v when dealing with the a-algebras 2~ and 2 x- This is the subject of the next proposition. Proposition B.7 Let X be an uncountable Borel space and let sp :% ~ X be a Borel isomorphism from % onto X. (Such an isomorphism exists by Corollary 7.16.1.) Then for every IX < Q,
cp(2
x)= 2~,
2
x = cp-1(2~),
(16)
and (17) Proof We prove (16) by transfinite induction. For IX = 0, (16) clearly holds. If (16) holds for all f3 < a, where IX < Q, then we have
cp(U 2~) fJ
U 2L
=
fJ
Let S be a Suslin scheme for
UfJ < a: 2~.
U 2~
fJ
=
cp-1(U 2i). fJ
Then
cp[N(S)] = N(cpoS),
where (cp S)(s) = cp[S(s)] 0
Since cp S is a Suslin scheme for UfJ
(18)
On the other hand, if R is a Suslin scheme for
UfJ 2i, then
N(R) = cp [N(cp -loR)],
294
APPENDIX B
where
(cp-1 oR)(s) = cp-1[R(s)]
VSEL.
This shows that N(R)E cp[y(Upq2' Pvl] , which proves the reverse of set con tainment (18). Therefore, (19)
Since cp is one-to-one, we also have (20) Now by (19), cp(2'~)
is a o-algebra containing y(Up
:::J
(21)
2'x.
By (20), cp - 1(2'x) is a o-algebra containing Y(U p
c cp - l(2'X)'
(22)
cp - l(2'X)
(23)
Since cp is one-to-one, (21) implies 2'~v
:::J
and (22) implies (24)
Relations (21)-(24) imply (16). Relation (17) follows from (15) and (16). Q.E.D. We have already seen that in an uncountable Borel space X, 2'~ is properly contained in 2'} (Proposition B.6). We would like to show more generally that if f3 < IY. < n, then 2'i is properly contained in 2'x. Our method for doing this is to generalize Corollary B.5.1 and then generalize Corollary B.5.2. The following lemmas are a step in this direction. If ,~ is a paving for a space X, we denote by qp the paving
qp =
fYJ u
{X - pIPE:Jlj.
(25)
Lemma B.6 Let ,q}J be a paving for JV which contains the open subsets of Ai', and suppose there exists a universal function for fYJ consistent with fYJ. Then there exists a universal function for ,OJ consistent with rr(fYJ). Proof
K:JV
~
Let L be a universal function for
qp by
r v K( <, 1, ~Z"
•.
,q}J
)={L((Z'(3,(4,"') JV - L((Z,(3,(4,"')
consistent with fYJ. Define if (1 is odd, if (1 is even.
295
ADDITIONAL MEASURABILITY PROPERTIES OF BOREL SPACES
It is clear that K is a universal function for .9. As in the proof of Proposition 8.4, choose mE J!{ 1 and suppose that the nonzero components of m are in positions m 1 , m z, . . . . Then
{ZE ./Vlz E K[ m(z)]} = {«(1,(Z,' . ')\(mJ is odd and ((1' (z,· .. )EL((m2,(m3'· ..)} U {((I,(Z'" ·)I(m, is even and ((I,(Z'" .)1=L((m2,(m3,···j)
= ([Vl[((1,(Z," ')\(ml = 2k n {(( I> (2 , ...) 1((1, (2 ,. U
([Dl[((1,
n
H( 1,(2,' ..
. .) E
(2" . ·)\(ml
*(
I'
-l}]
L((m2 ' (m3 ' ...)})
= 2k}]
(2,' ..) 1= L((m2' i.;
..)}).
(26)
Since L is consistent with f!1' and f!1' contains every open set, we have that every set in (26) is in (J(f!1'). It follows that K is consistent with (J(f!1'). Q.E.D. Lemma B.7 Let rx be a countable ordinal. For each f3 < «, let 2J!p be a paving for JV which contains the collection '§ of open sets, and assume that there exists a universal function L p for 2J!p consistent with 2J!p' Then there exists a universal function for Up
Proof The set of ordinals {f3If3 < rx} is countable whenever a < Q, so there exists a partition {P(f3)If3 < rx} of the positive integers such that P(f3) is nonempty for each f3 < o: Define a universal function for <;« P(f3) by
Up
if Let
mEvtl l
have nonzero components
m l, m2' ....
(1 E
P(f3).
Then
{ZE J1I'lz E L[ m(z)]}
=
U {((1,(2, .. ·)I(ml EP(f3) and ((l>(z, .. ·)ELP((m2,(m3'·")}
p
= U
p
[{((I,(2, ..
·)I(m, EP(f3)} n {((I>(z,· .. )I((I'(Z,· .. )EL p((m2,(m3'''·))]'
and this set is in .9"(Up
consistent with
Y"(2'~1)'
For each rx <
Q,
there is a universal function for Y"(2'~t)
296
APPENDIX B
Proof For simplicity of notation, we suppress the subscript JV. The proof is by transfinite induction. When IX = 0, the result follows from Corollary B.S.!. Assume now that the result holds for every f3 < IX, where IX < n. We prove it for IX. By Lemma B.7 and the induction assumption, there is a universal funcconsistent with 9"[Up<~9"(.!l'p)]. Now tion for Up<~9"(.!l'p)
p~~
and applying
.!l'p c
p~~
9"C~~
9"(.!l'P) c
.!l'P).
(27)
9" to both sides of (27) and using Proposition B.2, we obtain 9"( u .!l'p) = 9"[ u 9"(.!l'P)]. P<~
(28)
P<~
From Proposition B.5 and (28) we have the existence of a universal function for <~ .!l'P) consistent with q .!l'P), and Lemma B.6 implies .!l'p) consistent with .!l'~. From existence of a universal function for Y)(Up<~ Corollary 7.35.1 we have
9"(Up
9"(Up
(29) so we have a universal function for
9"(U P
.!l'P) consistent with
But from (29),
and applying
9" to both sides, we see that (30)
From Proposition B.5 and (30) we have the existence of a universal function for 9"(.!l'~) consistent with 9"(.!l'~). Q.E.D. Proposition B.9 Let X be an uncountable Borel space. If f3 < then .!l'{ is properly contained in .!l'l.
IX
<
n,
Proof We assume without loss of generality that X = JV (Proposition B.7) and suppress the subscript JV. It is clear that for f3 < IX we have .!l'P c
ADDITIONAL MEASURABILITY PROPERTIES OF BOREL SPACES
297
!l'a. Let L be a universal function for Sf'(!l'P) consistent with Sf'(!l'P) and define A
= {zEJVlzEL(z)}.
Then A E Sf'(!l'P). If JV - A E Sf'(!l'P), then for some
Zo E
JV we have
If Zo E A, then Zo rf. L(zo) and a contradiction is reached. If Zo E JV - A, then Zo E L(zo) and again a contradiction is reached. It follows that JV - A rf. Sf'(!l'P). But JV - A E !l'a, so !l'P is properly contained in !l'a. Q.E.D. Proposition B.1O Let X be a Borel space. The limit a-algebra !l'x is
contained in U/I x and (31) Indeed, !l'x is the smallest a-algebra containing the open subsets of X which satisfies (31). Proof The result is trivial if X is countable, so assume that X is uncountable. It is clear that 0 E !l' x and !l' x is closed under complementation, so we need only verify that !l'x is closed under countable unions in order to show that it is a a-algebra. If Ql' Qz, ... is a sequence of sets in !l'x, then for some o: < n, we have Qk E !l''X for every k. Then k"= 1 QkE!l''X c !l'xWe prove by transfinite induction that !l''X c U/I x for every rf. < n. This is clearly the case if rf. = O. If !l'~ c U/I x for every f3 < «, where rf. < n, then by Lusin's theorem (Proposition 7.42), Sf'(UP Sf'(!l'x). Let 8 be a Suslin scheme for !l'x- Since ~ is countable, there exists rf. < n such that 8(8) E!l''X for every 8 E~. Then N(8) E!l''X + 1 c !l'x , and (31) is proved. Suppose f!lJ is a a-algebra containing the open subsets of X which satisfies f!lJ = Sf'(f!lJ). Clearly, f!l x = !l'~ c f!lJ. If !l'~ c f!lJ for every f3 < rf., where rf. < n, then (14) implies that !l''X c f!lJ. Therefore f!lJ contains !l'x, which must be the smallest a-algebra containing the open subsets of X and satisfying (31). Q.E.D.
U
A major shortcoming of the analytic a-algebra is that the composition of analytically measurable functions is not necessarily analytically measurable (cf. remarks following Proposition 7.50). However, the composition of limit-measurable functions is limit-measurable. We first give a formal definition of these terms and then prove the preceding statements.
298
APPENDIX B
Definition B.3 Let X and Y be Borel spaces, D c X, and & a a-algebra on X. A function f:D ~ Y is said to be &-measurable if f-l(B) E& for every BEggy. If & = Y x , we say that f is limit-measurable. The a-algebra & is said to be closed under composition of functions if, whenever f:X ~ X is &-measurable and P E&, then f -l(p) E&.
In Definition B.3 there is no mention of a &-measurable function g mapping X into a Borel space Y with which to compose f. If there were such a g, then to check that go f: X ~ Y is .9-measurable, we would check that f-l[g-l(B)] is &-measurable for every BEggy. Since g-l(B)E.9, it suffices to check that f-l(p)E& for every PE&, which is the condition stated in Definition B.3. The stipulation in Definition B.3 that .f have the same domain and range space is inconsequential as long as & = Y~ for some a < Q or .9 = Y x (see Proposition B.7). These are the only cases we consider. The closure of a a-algebra under composition of mappings and the satisfaction of an equation like (31) are intimately related, as the following lemma shows. Lemma B.8 Let X be a Borel space and let & be a a-algebra on X. If & contains the analytic subsets of X and is closed under composition of functions, then
&
=
9'(&).
Proof If X is countable, the result is trivial, so we assume that X is uncountable. In light of Proposition 7.35(d), we need only prove that under the assumptions of the lemma we have & ~ 9'(&). To do this, for an arbitrary Suslin scheme S for & we construct a &-measurable function f: X ~ X and a set P E& such that (32) Let
]~(z)
=
{~
if
and define J: JV ~ JV by T(z)
= [II (Z),J2(Z), . . .].
Finally, let f:X ~ X be given by f =
0
0
299
ADDITIONAL MEASURABILITY PROPERTIES OF BOREL SPACES
base the collection of open sets {R(k), R(k)[k = 1,2, ... j, where R(k) = {((1,(2," ')!(n:S 2 "In and i. = l}, R(k) = {((1,(2," .}[(n:S 2 "In and (k = 2}.
(33)
By the remark following Definition 7.6, the ,gil-measurability of the sets cpU- I [R(k)]) = S[ ljI(k)],
k
1,2,
,
cpU-I [R(k)]) = X - S[ljI(k)],
k = 1,2,
,
implies the .gIl-measurability of J o cp Define P c X by
I.
=
It follows that f is &'-measurable.
ze%s
where R(k) is given by (33). Then P is an analytic subset of X, so P E:?JJ. We have f-i(P) = cpLYffDJ-I(R[ljI-l(Sm]
U n S(s) = N(S),
ZEA's
so (32) holds.
Q.E.D.
Proposition B.ll Let X be a Borel space. The limit a-algebra fE x is the smallest a-algebra containing the analytic subsets of X which is closed under composition of functions. Proof We show first that fE x is closed under composition of functions. It suffices to show that if I. X -+ X is fE x-measurable, ()( < n, and Q E fE~L then .r - '( Q) E fE x- If o: = 0, this is true by definition. Suppose that for some ()( < n and for every f3 < ()( and CEfEi we have.r-1(C)EfE x. We show that f-1(Q)EfE x for every QEY'(UP
UP
f-I(Q) = N(f-I
-s;
(34)
where i:' -S is the Suslin scheme defined by
u:
0
S)(s) = f-I[S(s)]
By the induction bypothesis. j" -loS is a Suslin scheme for fE x, and we have from Proposition B.I0 and (34) that .r-1(Q)EfE x ' The fact that fE x is the smallest a-algebra containing the analytic subsets of X which is closed under composition offunctions follows from Proposition B.I0 and Lemma B.8. Q.E.D.
300
APPENDIX B
Let X, Y, and Z be uncountable Borel spaces. If Z are limit-measurable, then g oJ:X -> Z is limitmeasurable. In particular, if J and g are analytically measurable, then go J is limit-measurable. It is possible to choose J and g to be analytically measurable so that go J is not analytically measurable. Corollary B.11.1 J:X -> Y and g: Y
->
ProoJ Proposition B.9 implies that d x» d y, and d z are properly contained in f:f x, f:f y, and f:f z» respectively. Apply Proposition B.7 to the results of Proposition B.11. Q.E.D.
Using an argument similar to the first part of the proof of Proposition B.11, the reader may verify that if J:X -> Y and g: Y -> Z are analytically measurable, then g oj is in fact f:fi-measurable. Indeed, one can show by induction that if J is f:fX'-measurable and g is f:f~-measurable, where m and n are integers, then go J is f:fX'+n-measurable. Let X be a Borel space, and for Q E O/tx define (JQ: P(X) -> [0,1] by (35) Then (JQ is universally measurable (Corollary 7.46.1). If Q is Borel-measurable, then (JQ is Borel-measurable (Proposition 7.25), and if Q is analytically measurable, then (JQ is analytically measurable (Proposition 7.43). We consider the case when Q is f:f:X-measurable. Proposition B.12 Let X be a Borel space. If Q E f:f x- then (J Q defined by (35) is f:f p(X)-measurable. In fact if a < Q and Q E f:f then (JQ is f:fp(Xl-measurable.
x,
ProoJ The last statement is true when a = O. If it is true for every f3 < a, where a < Q, and S is a Suslin scheme for Up
Eo9"(PU
c f:fP(X)'
Thus, if Q EY'(U P
ADDITIONAL MEASURABILITY PROPERTIES OF BOREL SPACES
B.5
301
Set Theoretic Aspects of Borel Spaces
The measurability properties of Borel spaces are closely linked to several issues in set theory which we have for the most part skirted. These issues are presented briefly here. There is some controversy concerning the propriety of the axiom of choice and Cantor's continuum hypothesis in applied mathematics. The former is generally accepted and the latter is regarded with suspicion. The general axiom of choice says that given any index set A and a collection of nonempty sets {SalaEA}, there is a function f:A -> UaEASa such that f(a) E Sa for every a E A. We have used this axiom in Appendix A to construct examples. In particular, the set E of Example 1 of that appendix for which both E and g have p-outer measure one is constructed by means of the axiom of choice. We have also used this axiom to construct the set S in the proof of Lemma B.3, and this lemma was instrumental in proving that every uncountable Borel space is Borel-isomorphic to every other uncountable Borel space (Proposition B.3 and Corollary 7.16.1). However an alternative proof of Lemma B.3 which does not require the axiom of choice is possible, but is quite lengthy and will not be given. The countable axiom of choice is the same as the general axiom except that the index set A is required to be countable. A paraphrase of this axiom is that given any countable collection of nonempty sets, one element can be chosen from each set. We have made extensive use of this axiom, such as in the choice, for each k, of a selector ({Jk in the proof of Proposition 7.50(a). Indeed, much of real analysis and topology rests on the countable axiom of choice. Solovay [S13J has shown that if the general axiom of choice is replaced by the weaker "principle of dependent choice," which is still stronger than the countable axiom of choice, then every subset of the real line may be assumed to be Lebesgue-measurable. A slight extension of this result shows that under these conditions every subset of any Borel space may be assumed to be universally measurable. Therefore, by choice of the proper axiom system, the measurability difficulties which are the subject of Part II can be made to disappear. It is possible to show without the use of the axiom of choice that every uncountable Borel space X contains universally measurable sets which are not limit measurable. An unpublished proof of this is due to Richard Lockhart. If both the axiom of choice and the continuum hypothesis are adopted then it follows that iJlf x has a larger cardinality than .Y x- Since for each a < n, f!J x c .Y~ and f!J x has cardinality at least c, so does .Y~. On the other hand, .Y~ is contained in g'(.Y~) and there is a universal function so the cardinality of .Y~ is c. Now .Y x = Ua
302
APPENDIX B
cardinality of the set of countable ordinals is less than or equal to c, so ft?x has cardinality c. In contrast, under the assumption of the axiom of choice and Cantor's continuum hypothesis, Olt x contains a set F of cardinality c which has measure zero with respect to every nonatomic probability measure [H5, Chapter III, Section 14]' Thus every subset of F is also in Olt x' and the cardinality of 071 x is at least 2e • I t follows that ft?x must be properly contained in Olt x . Another relevant set theoretic work is that of Godel [G 1], who showed that it is consistent with the usual axioms of set theory to assume the existence of the complement of an analytic set in the unit square whose projection on an axis is not Lebesgue-measurable. This means that it is consistent with the usual axioms to assume the existence of an analytically measurable function f: [0,1] [0,1] ~ R such that f*(x) = infyf(x, y) is not Lebesgue measurable. This places a severe constraint on the types of strengthened versions of Proposition 7.47 which might be possible.
Appendix C
The Hausdorff Metric and the Exponential Topology
This appendix develops a metric topology on the collection of closed subsets (including the empty set 0) of a compact metric space (X,d). We denote this collection of sets by 2x . For AE2x and XEX, define d(x, A)
= min d(x, a) aEA
if A # 0,
(1)
(2)
d(x,0) = diam(X) = max d(y, z). y,ZEX
Definition C.l Let (X,d) be a compact metric space. The Hausdorff metric p on 2 x is defined by p(A, B) = max {max d(a, B), max d(b, A)} aEA
p(A,0) = p(0, A) = diam(X) p(0, 0)
=
o.
if A,B #- 0,
(3)
hER
if A#-
0,
(4) (5)
We have written max in place of sup in (3), since every set in 2x is compact and d(x, A) is a continuous function of x for every A E 2x . To see this latter property, consider a set A E 2 x . If A = 0, then the function d(x, A) is 303
304
APPENDIX C
constant and hence continuous. If A i= 0, then for x, y E X and a E A we have dix, a) ~ d(x, y)
+ d(y, a).
By taking the infimum of both sides over a E A, we obtain d(x, A) - d(y, A)
d(x, y).
~
By reversing the roles of x and y, we have Id(x,A) - d(y,A)1 ~ d(x,y)
VX,YE X,
(6)
which shows that d(x, A) is a Lipschitz continuous function of x. It is a tedious but straightforward task to verify that (2x , p) is a metric space, and this is left to the reader. We will prove that (2X , p) is a compact metric space. We first show some preliminary facts. If A is a (not necessarily closed) subset of X, define 2A
= {K E2 x IK
c A}.
We define two classes
{2GIG is an open subset of X},
(7)
.:K = {2x - 2 IK is a closed subset of X}.
(8)
<:g =
K
To aid the reader, we will continue to denote points of X by lowercase Latin letters and subsets of X by uppercase Latin letters. Uppercase script letters will be used for subsets of 2\ except for subsets of the form 2A as defined above. In keeping with this practice, we denote open spheres in the two spaces as follows:
= {YEXld(x,y) < s], Y',(A) = {BE 2Xlp(A, B) < s]. Six)
Finally, classes of subsets of 2x will be denoted by boldface script letters, as in the case of f!} and % defined above. The topology obtained by taking f!} u % as a subbase in 2x is called the exponential topology and an extensive theory exists for it [K2, K3]. It can be developed for a nonmetrizable topological space X, but we are interested in it only when X is compact metric. In this case, the exponential topology is the topology generated by the Hausdorff metric, as we now show. Proposition C.l Let(X,d) be a compact metric space and p the Hausdorff metric on 2x . The class f!} u % as defined by (7) and (8) is a subbase for the topology on (2X , p). Proof We first prove that when G is open and K is closed in X, then 2G and 2x - 2K are open in (2 x , p). If G or K is empty, then 2G or 2x - 2\
305
THE HAUSDORFF METRIC AND THE EXPONENTIAL TOPOLOGY
respectively, is easily seen to be open, so we assume G and K are nonempty. Suppose A is a nonempty closed subset of X and A E 2G • (The proof for A = 0 is trivial.) Since A is compact, is a subset of G, and X - G is closed, there exists e with 0 < e < diam(X) such that min d(a, X - G)
~
(9)
e.
UEA
For BEY'iA), we have B #- 0 and max d(b, A) < e.
(10)
bER
From inequalities (9) and (10) we have that BeG. Hence Y'iA) c 2G , and 2G must be open. Turning to the case of 2x - 2K for K closed, we let A E 2x 2K be nonempty. By definition, A 1'- 2 K , so A - K contains at least one point ao. Since X - K is open, we can find e > 0 for which SE(a O) c X - K. For BEY'E(A), we have d(ao, B) ::;; max d(a, B)
< s,
UEA
which implies B n Siao) #- 0 and BE 2 x - 2K • Therefore Y'E(A) c 2x - 2K , and 2 x - 2K is open. Having thus shown that the sets 2 G and 2 x - 2K are open in (2X , p) when G is open and K is closed, we must now show that given any open subset tfJ of(2 X , p) and any nonempty AE tfJ, we can find open sets G l , G 2 , ••• , Gm and closed sets K 1,K 2 , . . . .K; in X for which AE2 G 1 n··· n 2Gm n (2 x - 2K 1) n ... n (2X
-
2K n )
C
tfJ.
Since tfJ is open in (2 X , p), there exists s > 0 such that Y'E(A) c tfJ. Since A is closed in the compact set X, there exist points {x., ... ,Xn } in A such that A c Uk=l SEdxd. Let G 1 = {xEXld(x,A) <
s}
and k = 1, ... ,no
K k = X - SEdxk), G
By construction, A E 2 A E 2x - 2Kk • Therefore
1
and, since for each k, A n SE/2(Xk) #- 0, we have
A E2 G 1 n (2X
-
2K 1) n ... n (2X
Suppose B is another set in 2G 1 n (2X that BE 2G 1 implies
-
2K n ) .
2K 1) n ... n (2X
max d(b, A) < c. bER
-
-
2K n) . The fact (11)
306
APPENDIX C
If for some ao E A we had d(ao, B) ~ s, then we would also have S,(ao) c X - B. But for some Xk E A, ao E S,/z(xd and this would imply in succession S,/z(x k ) C X - B, B C K b and B rt: 2x - 2K k • This contradiction shows that max d(a, B) < e.
(12)
aEA
Inequalities (11) and (12) establish that p(A, B) < s, and as a consequence
2G 1 n (2X
-
2K 1) n ... n (2X
-
2K n ) c S,(A)
C
'§.
Q.E.D.
If a cover of a space contains no finite subcover, we say the cover is essentially infinite. To show that (2 X , p) is compact when X is compact, we
must show that no essentially infinite open cover of2 x exists. As a consequence of the following lemma, this will be accomplished if we can show that the subbase tfj u off contains no essentially infinite cover. We remind the reader that a topological space in which every open cover has a countable subcover is called Lindelof, and in metrizable spaces this property is equivalent to separability. Lemma C.I Let n be a Lindelof space and let Y be a subbase for the topology on n. If there exists an essentially infinite open cover of n, then there exists one which is a subset of Y. Proof Let fJ6 be the base for the topology on n constructed by taking finite intersections of sets in Y and let C(J be an essentially infinite open cover where B~E!J9 for of n. Each CEC(J has a representation C = UHA(C)B~, every IX E A( C). The collection UCE'C {B~IIX E A( is an essentially infinite open cover of n, and, by the Lindelof property, it contains a countable, essentially infinite, open subcover £0 = {B 1, B z , . . .}. Each B; has a representation B k = nj~)l Skj, where SkjEY,j = 1,... , n(k).lffor eachj the cover £0 j = {Slj,B z,B 3 , . . . } is not essentially infinite, then there exists a finite subcollection ~j which also covers n. But then
Cn
{Bd u
[~: (~j
- {Slj})]
C
£0
is a finite subcover of n. This contradiction implies that for some index jo, the cover £0 jo is essentially infinite. Denote R I = S Ijo' In general, given RbR z , ... .R; in Y such that B k C R k , k = 1,... ,n, and {RbR z , ... ,R n, B; + 1, B; + Z, .•• } is an essentially infinite open cover of n, we can use the preceding argument to construct R; + 1 E Y for which B; + 1 C R; + I and [R1,R z , ... ,Rn,Rn+l,Bn+z,Bn+3""} is an essentially infinite open cover of n. The collection {R I , R z , ... } is an essentially infinite open cover contained in Y. Q.E.D. Proposition C.2 Let (X,d) be a compact metric space and p the Hausdorff metric on 2x . The metric space (2x , p) is compact.
THE HAUSDORFF METRIC AND THE EXPONENTIAL TOPOLOGY
307
Proof We first show that (2x, p) is separable. Since (X, d) is compact, it is separable. Let D be a countable dense subset of X and let
ce = {Sl/n(x)lxED, n = 1,2, ...}. consist of finite unions of sets in ce. Then f!) is countable and, as we
Let f!) now show, is dense in (2X , p). Given AE2 x and G > 0, choose a positive integer n satisfying 2/n < G. The collection of sets {Sl/n(X)!XED} covers the compact set A, so there is a finite subcollection {Sl/n(x)lxEF} which also covers A and which satisfies Sl/n(X) n A =I- 0 for every XE F. The set B = UXEFSl/n(X) is in f!) and satisfies p(A,B) < G. As a result of Proposition C.1, Lemma C.1, and the separability of (2x , p), to show that (2\ p) is compact we need only show that every open cover of 2 x which is a subset of rg u % contains a finite subcover of 2 x. Thus let {GaIIXEA} be a collection of open sets and {K pl,8EB} a collection of closed sets in X, and suppose
Define the closed set K o = nPEB K p • By definition, K o ¢' UPEB(2 x - 2 K p ), so KoE UaEA2G•. Thus for some IXoEA, we have K o E2 G•o , i.e., K o c; Gao. This means that X - Gao C X - K o = U (X - K p), PEB
and since X - Gao is compact, there exists a finite set {,8 1,,82" .. ,,8n} C B for which n
X - Gao
C
U(X -
KpJ
U(2x -
2K Pk
(13)
k=l
To complete the proof, we show 2x
= 2G• o
U [
k=l
)J.
If C E 2 x , then either C c Gao, in which case C E2G• o, or else C n (X - Gao) =I0. In the latter case, (13) implies that for some k, C n (X - KpJ =I- 0, i.e., C E 2x - 2K pk . Q.E.D. We now develop some convergence notions in (2X , p). Let {An} be a sequence of sets in 2x . Define
o}, = o}.
lim An = {x E X[lim inf d(x, An) =
(14)
lim An = {XEX!limSUPd(X,A n)
(15)
n-ct::;·
n-+aJ
u-e co
n--+oo
308
APPENDIX C
For example, if X = [-l,lJandA n = {(-I)"}, we have lim n _ oo An = {-1,l} and lim n _ oo An = 0. If X = [ -1,1 J and An = [ -lin, 11nJ, we have lim An = lim An = {O}. n-r
n-(XJ
oi
Clearly we have lim n _ oo An C lim n -+ oo An. It is also true that lim n _ oo An and lim n -+ oo An are closed. To see this for lim n _ oo An' let {x m } be a sequence in lim n -+ oo An converging to x. Then from (6) we have for each m lim inf dix, An) ~ d(x, X m ) n-+ 00
+ lim inf d(x m , An) = d(x, x m ), n-+ 00
and since d(x, x m ) can be made arbitrarily small by choosing m sufficiently large, we conclude that x E lim n -+ 00 An. Replace lim infn -+ 00 by lim SUPn-+ 00 in the preceding argument to show that lim n -+ 00 An is closed. If lim n _ a: An = lim n _ x An' we denote their common value by lim n -+ x An· This notation is justified by the following proposition. Proposition C.3 Let (X, d) be a compact metric space and p the Hausdorff metric on z-. Let {An} be a sequence in 2x . Then
lim An = lim An = A
(16)
lim p(A n , A) = O. n-+ 00
(17)
n-oo
n-r cc
if and only if
Proof Assume for the moment that A # 0 and suppose (16) holds. Then for each x in the compact set A, d(x, An) -+ 0 as n -+ 00. Given e > 0, let {Xl' ... ,xd be points of A such that the open spheres S.dx),j = 1, ... , k cover A. Choose N large enough so that for all n ~ N j = 1, ... .k.
Now use the Lipschitz continuity [cf. (6)J of the function conclude that
X -+
d(x, An) to
'v'xEA. This implies that lim max d(x, An) = O.
n-e cc
XEA
This equation and (3) imply that (17) will follow if we can show lim maxd(y,A) = O.
n- 00 YEAn
(18)
THE HAUSDORFF METRIC AND THE EXPONENTIAL TOPOLOGY
If (18) fails to hold, then for some that nl < n z < ... and
B
309
> 0 there exists a sequence Yk E A nk such Vk.
(19)
The compactness of X implies that {Yk} accumulates at some YoEX which, by (19) and the continuity of x --+ d(x,A), must satisfy d(Yo,A) ~ B. But YoElimn~oo An by (14), and this contradicts (16). Hence (18) holds. Still assuming A # 0, we turn to the reverse implication of the proposition. If (17) holds, then VXEA,
(20)
and lim maxd(y,A) = O.
(21)
n---+ooYEA n
Equation (20) implies that A c lim An C lim An. n-r cc
(22)
n-+oo
If x Elim.,; 00 An, then by definition there exists a sequence Yk E A nk such that n 1 < nz < ... and lim d(x, Yk) =
k -«
00
o.
(23)
We have from (6) that
and, letting k --+ 00 and using (21) and (23), we conclude d(x, A) = O. Since A is closed, this proves x E A and (24) n-s co
Combine (22) and (24) to obtain (16). Assume finally that A = 0. If (16) holds, then all but finitely many of the sets Anmust be empty, for otherwise one could find Yk E A nk, nl < n z < ... , and {yd would accumulate at some Yo E limn~ 00 An. If all but finitely many of the sets An are empty, then (5) implies that (17) holds. Conversely, if (17) holds and A = 0, then (4) implies that all but finitely many of the sets An Q.E.D. are empty. Equation (16) follows from (2), (14), and (15). For the proof of Proposition 7.33 in Section 7.5 we need the concept of a function which is upper semicontinuous in the sense of Kuratowski, or in abbreviation, upper semicontinuous (K).
310
APPENDIX C
Definition C.2 Let Y be a metric space and X a compact metric space. A function F: Y -> 2x is upper semicontinuous (K) if for every convergent
sequence {Yn} in Y with limit Y, we have limn_co F(Yn)
c
F(y).
The similarity of Definition C.2 to the idea of an upper semicontinuous real or extended real-valued function is apparent [Lemma 7.13(b)]. Although we will not discuss functions which are lower semicontinuous (K), it is interesting to note that such a concept exists and has the obvious definition, namely, that the function F: Y -> 2x is lower semicontinuous (K) if for every convergent sequence {Yn} in Y with limit Y, we have limn_co F(Yn):::J F(y). It can be seen from Proposition C.3 that a function F: Y -> 2x is continuous in the usual sense (where 2x has the exponential topology) if and only if it is both upper and lower semi continuous (K). We carry the analogy with real-valued functions even farther by showing that an upper semicontinuous (K) function is Borel-measurable, and the remainder of the appendix is devoted to this. Lemma C.2 Let Y be a metric space and X a compact metric space. If x -> 2 is upper semicontinuous (K), then for each open set G c X, the
F: Y
set (25)
is open. The openness of F- 1(2G ) for every open G is in fact equivalent to upper semicontinuity (K), but we need only the weaker result stated. To prove it, we show that for G open, the set F- 1(2X - 2G ) is closed. If {Yn} is a sequence in this set with limit Y E Y, then Proof
F(Yn)n(X - G) =I 0,
n
= 1,2, ... ,
and so there exists a sequence {x n } in the compact set X -. G such that x, E F( Yn), n = 1,2, .... This sequence has an accumulation point x EX - G, and, by (14), xElim n_ oo F(Yn)' The upper semicontinuity (K) of F implies xEF(y), and so F(y) n (X - G) =I 0, i.e., YEF- 1(2 x - 2G ). Q.E.D. Proposition CA Let Y be a metric space, (X, d) a compact metric space, and let 2x have the exponential topology. Let F: Y -+ 2x be upper semicontinuous (K). Then F is Borel-measurable. Proof If F: Y -> 2x is upper semicontinuous (K) and G is an open subset of X, then F- 1(2G ) is Borel-measurable in Y by Lemma C.2. If K is a closed 1 Gn , subset of X, define open sets G; = {xld(x, K) < lin}. We have K = and so a closed set A is a subset of K if and only if A c Gn , n = 1,2, ....
n:,=
THE HAUSDORFF METRIC AND THE EXPONENTIAL TOPOLOGY
n F-
311
00
F- 1(2K ) =
1(2G n
)
n= 1
is a G~-set, thus Borel-measurable in Y. It follows that for any set qj in the subbasew v % for the exponential topology on 2x , F- 1 (qj) is Borel-measurable in Y. By Proposition 7.1, any open set in 2x can be represented as a countable union of finite intersections of sets in ~ v % and so its inverse image under F is Borel-measurable. Q.E.D.
References
[AI] R. Ash, "Real Analysis and Probability." Academic Press, New York, 1972. [A2] K. J. Astrom, Optimal control of Markov processes with incomplete state information, J. Math. Anal. Appl. 10 (1965),174-205. [Bl] R. Bellman, "Dynamic Programming." Princeton Univ. Press, Princeton, New Jersey, 1957. [B2] D. P. Bertsekas, Infinite-time reachability of state-space regions by using feedback control, IEEE Trans. Automatic Control AC-17 (1972), 604-613. [B3] D. P. Bertsekas, On error bounds for successive approximation methods, IEEE Trans. Automatic Control AC-21 (1976), 394-396. [B4] D. P. Bertsekas, "Dynamic Programming and Stochastic Control." Academic Press, New York, 1976. [B5] D. P. Bertsekas, Monotone mappings with application in dynamic programming, SIAM J. Control Optimization 15 (1977), 438-464. [B6] D. P. Bertsekas and S. Shreve, Existence of optimal stationary policies in deterministic optimal control, J. Math. Anal. Appl. (to appear). [B7] P. Billingsley, Invariance principle for dependent random variables, Trans. Amer. Math. Soc. 83 (1956), 250-282. [B8] D. Blackwell, Positive dynamic programming, Proc. Fifth Berkeley Sympos. Math. Statist. and Probability, 1965,415-418. [B9] D. Blackwell, Discounted dynamic programming, Ann. Math. Statist. 36 (1965), 226-235 [BIO] D. Blackwell, On stationary policies, J. Roy. Statist. Soc. 133A (1970), 33 - 37. [Bll] D. Blackwell, Borel-programmable functions, Ann. Prob. 6 (1978), 321-324. [BI2] D. Blackwell, D. Freedman, and M. Orkin, The optimal reward operator in dynamic programming, Ann. Probability 2 (1974), 926-941. [B13] N. Bourbaki, "General Topology." Addison-Wesley, Reading, Massachusetts, 1966. [BI4] D. W. Bressler and M. Sion, The current theory of analytic sets, Canad. J. Math. 16 (1964),207-230.
312
REFERENCES [BI5] [CI] [DI] [D2] [D3] [D4] [D5] [D6] [D7] [D8] [FI] [F2] [F3] [F4] [F5] [G I] [HI] [H2] [H3] [H4] [H5] [H6] [H7] [11] [J2] [13]
[KI]
313
L. D. Brown and R. Purves, Measurable selections of extrema, Ann. Statist. 1 (1973), 902-912. D. Cenzer and R. D. Mauldin, Measurable parameterizations and selections, Trans. Amer. Math. Soc. (to appear). C. Dellacherie, "Ensembles Analytiques, Capacites, Mesures de Hausdorff." SpringerVerlag, Berlin and New York, 1972. E. V. Denardo, Contraction mappings in the theory underlying dynamic programming, SIAM Rev. 9 (1967),165-177. C. Derman, "Finite State Markovian Decision Processes." Academic Press, New York, 1970. J. L. Doob, "Stochastic Processes." Wiley, New York, 1953. L. Dubins and D. Freedman, Measurable sets of measures, Pacific J. Math. 14 (1964), 1211-1222. L. Dubins and L. Savage, "Inequalities for Stochastic Processes (How to Gamble if you Must)." McGraw-Hill, New York, 1965. (Republished by Dover, New York, 1976.) J. Dugundji, "Topology." Allyn & Bacon, Rockleigh, New Jersey, 1966. E. B. Dynkin and A. A. Juskevic, "Controlled Markov Processes and their Applications." Moscow, 1975. (English translation to be published by Springler-Verlag.) D. Freedman, The optimal reward operator in special classes of dynamic programming problems, Ann. Probability. 2 (1974), 942-949. E. B. Frid, On a problem of D. Blackwell from the theory of dynamic programming, Theor. Probability Appl. 15 (1970),719-722. N. Furukawa, Markovian decision processes with compact action spaces, Ann. Math. Statist. 43 (1972) 1612-1622. N. Furukawa and S. Iwamoto, Markovian decision processes and recursive reward functions, Bull. Math. Statist. 15 (1973), 79-91. N. Furukawa and S. Iwamoto, Dynamic programming on recursive reward systems, Bull. Math. Statist. 17 (1976), 103-126. K. Godel, The consistency of the axiom of choice and of the generalized continuumhypothesis, Proc. Nat. Acad. Sci. U.S.A. 24 (1938), 556-557. P. R. Halmos, "Measure Theory." Van Nostrand-Reinhold, Princeton, New Jersey, 1950. F. Hausdorff, "Set Theory." Chelsea, Bronx, New York, 1957. C. J. Himmelberg, T. Parthasarathy, and F. S. Van Vleck, Optimal plans for dynamic programming problems, Math. Operations Res. 1 (1976), 390-394. K. Hinderer, "Foundations of Nonstationary Dynamic Programming with Discrete Time Parameter." Springler-Verlag, Berlin and New York, 1970. J. Hoffman-Jorgensen, "The Theory of Analytic Spaces." Aarhus Universitet, Aarhus, Denmark, 1970. A. Hordijk, "Dynamic Programming and Markov Potential Theory." Mathematical Centre Tracts, Amsterdam, 1974. R. Howard, "Dynamic Programming and Markov Processes." MIT Press, Cambridge, Massachusetts, 1960. B. Jankov, On the uniformisation of A-sets, Dokl. Akad. Nauk SSSR 30 (1941),591-592 (in Russian). W. Jewell, Markov renewal programming I and II, Operations Res. 11 (1963), 938-971. A. A. Juskevic (Yushkevich), Reduction ora controlled Markov model with incomplete data to a problem with complete information in the case of Borel state and control spaces, Theor. Probability Appl. 21 (1976), 153-158. L. Kantorovich and B. Livenson, Memoir on analytical operations and projective sets, Fund. Math. 18 (1932), 214-279.
314 [K2] [K3] [K4] [K5] [K6] [Ll] [L2] [L3] [MI] [M2] [M3] [M4] [M5]
[NI] [01] [02] [03] [04] [05] [PI] [P2] [RI] [R2]
[R3] [R4] [R5] [SI] [S2]
REFERENCES
K. Kuratowski, "Topology I." Academic Press, New York, 1966. K. Kuratowski, "Topology II." Academic Press, New York, 1968. K. Kuratowski and A. Mostowski, "Set Theory." North-Holland, Amsterdam, 1976. K. Kuratowski and C. Ryll-Nardzewski, A general theorem on selectors, Bull. Polish A cad. Sci. 13 (1965), 397-411. H. Kushner, "Introduction to Stochastic Control." Holt, New York, 197\. M. Loeve, "Probability Theory." Van Nostrand-Reinhold, Princeton, New Jersey, 1963. N. Lusin, Sur les ensembles analytiques, Fund. Math. 10 (1927), 1-95. N. Lusin and W. Sierpinski, Sur quelques proprietes des ensembles (A), Bull. Acad. Sci. Cracovie (1918),35-48. G. Mackey, Borel structure in groups and their duals, Trans. Amer. Math. Soc. 85 (1957),134-165. A. Maitra, Discounted dynamic programming on compact metric spaces, Sankhya 30A (1968),211-216. J. McQueen, A modified dynamic programming method for Markovian decision problems, J. Math. Anal. Appl. 14 (1966),38-43. P. A. Meyer, "Probability and Potentials." Ginn (Blaisdell), Boston, Massachusetts, 1966. P. A. Meyer and M. Traki, Reduites et jeux de hasard (Seminaire de Probabilites VII, Universite de Strasbourg, in "Lecture Notes in Mathematics," Vol. 321), pp. 155-17\. Springer, Berlin, 1973. J. von Neumann, On rings of operators. Reduction theory, Ann. of Math. 50 (1949), 401-485. P. Olsen, Multistage stochastic programming with recourse: The equivalent deterministic problem, SIAM J. Control Optimization 14 (1976), 495-517. P. Olsen, When is a multistage stochastic programming problem well-defined.", SIAM J. Control Optimization 14 (1976), 518-527. P. Olsen, Multistage stochastic programming with recourse as mathematical programming in an L p space, SIAM J. Control Optimization 14 (1976), 528-537. D. Ornstein, On the existence of stationary optimal strategies, Proc. Amer. Math. Soc. 20 (1969),563-569. J. M. Ortega and W. C. Rheinboldt, "Iterative Solutions of Nonlinear Equations in Several Variables." Academic Press, New York, 1970. K. Parthasarathy, "Probability Measures on Metric Spaces." Academic Press, New York, 1967. Yu. V. Prohorov, Convergence of random processes and limit theorems in probability theory, Theor. Probability Appl. 1 (1956),157-214. D. Rhenius, Incomplete information in Markovian decision models, Ann. Statist. 2 (1974),1327-1334. R. T. Rockafellar, Integral functionals, normal integrands and measurable selections, in "Nonlinear Operators and the Calculus of Variations." Springer-Verlag, Berlin and New York, 1976. R. T. Rockafellar and R. Wets, Stochastic convex programming: relatively complete recourse and induced feasibility, SIAM J. Control Optimization 14 (1976), 574-589. R. T. Rockafellar and R. Wets, Stochastic convex programming: basic duality, Pacific J. Math. 62 (1976),173-195. H. L. Royden, "Real Analysis." Macmillan, New York, 1968. S. Saks, "Theory of the Integral." Stechert, New York, 1937. Y. Sawaragi and T. Yoshikawa, Discrete-time Markovian decision processes with incomplete state information, Ann. Math. Statist. 41 (1970),78-86.
REFERENCES
315
[S3] M. Schill, On continuous dynamic programming with discrete time parameter, Z. Wahrscheinlichkeitstheorie und VerII'. Gebiete 21 (1972), 279~288. [S4] M. Schill, On dynamic programming: Compactness of the space of policies, Stochastic Processes Appl. 3 (1975), 345-364. [S5] M. Schal, Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal, Z. Wahrscheinlichkeitstheorie und VerII'. Gebiete 32 (1975), 179~196. [S6] E. Selivanovskij, Ob odnom klasse effektivnyh mnozestv (mnozestva C), Mal. Sb. 35 (1928),379-413. [S7] S. Shreve, A General Framework for Dynamic Programming with Specializations, M. S. thesis (1977), Dept. of Elec. Eng., Univ. of Illinois, Urbana. [S8] S. Shreve, Dynamic Programming in Complete Separable Spaces, Ph.D. thesis (1977), Dept. of Math., Univ. of Illinois, Urbana. [S9] S. Shreve and D. P. Bertsekas, A new theoretical framework for finite horizon stochastic control, Proc. Fourteenth Annual Allerton Conf, Circuit and System Theory, Allerton Park, Illinois, October, 1976,336-343. [SIO] S. Shreve and D. P. Bertsekas, Equivalent stochastic and deterministic optimal control problems, Proc. 1976 IEEE Conf. Decision and Control, Clearwater Beach, Florida, 705- 709. [SII] S. Shreve and D. P. Bertsekas, Alternative theoretical frameworks for finite horizon discrete-time stochastic optimal control, SIAM J. Control Optimization \6 (1978). [S12] S. Shreve and D. P. Bertsekas, Universally measurable policies in dynamic programming Mathematics 01' Operations Research (to appear). [SI3] R. Solovay, A model of set-theory in which every set of reals is Lebesgue measurable. Ann. Math. 92 (\970\, 1~56. [SI4] R. E. Strauch, Negative dynamic programming, Ann Math. Statist. 37 (1966), 871- 890 [S15] C. Striebel, Sufficient statistics in the optimal control of stochastic systems, 1. Math. Anal. Appl. 12 (1965), 576~592. [SI6] C. Striebel, "Optimal Control of Discrete Time Stochastic Systems." Springer-Verlag, Berlin and New York, 1975. [SI7] M. Suslin (Souslin), Sur une definition des ensembles measurables B sans nombres transfinis, C. R. A cad. Sci. Paris 164 (1917), 88~9I. [VI] V. S. Varadarajan, Weak convergence of measures on separable metric spaces, Sankhya 19 (1958), 15-22. [WI] D. H. Wagner, Survey of measurable selection theorems, SIAM J. Control Optimization 15 (1977), 859~903. [W2] A. Wald, "Statistical Decision Functions." Wiley, New York, 1950. [W3] H. S. Witsenhausen, A standard form for sequential stochastic control, Math. Systems Theory 7 (1973), 5-1 I.
This page intentionally left blank
Table of Propositions, Lemmas, Definitions, and Assumptions
Chapter 2
Monotonicity Assumption
27
Chapter 3
Proposition Proposition Proposition Proposition Proposition Proposition Proposition
3.1 3.2 3.3 3.4 3.5 3.6 3.7
40 43 44 46 47 50 5\
Lemma 3.\
45
Assumption F.\ Assumption F.2 Assumption F.3
39 40 40 Chapter 4
Proposition Proposition Proposition Proposition Proposition
4.\ 4.2 4.3 4.4 4.5
53 55 56 57 59
Proposition Proposition Proposition Proposition Proposition Proposition
4.6 4.7 4.8 4.9 4.\0 4.11
60 62 64 64 68 69
Assumption C (Contraction Assumption) Fixed Point Theorem
52 55
Chapter 5
Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition
5.1 5.2 5.3 5.4 5.5 5.6 5.7 5.8 5.9 5.\0 5.11 5.12 5.13
71 73 75 78 78 79 80 81 84 86 87 88 89
317
318
TABLE OF PROPOSITIONS
Proposition 5.14 Proposition 5.15
90 90
Lemma 5.1 Lemma 5.2
75 82
Assumption 1 (Uniform Increase Assumption) Assumption 0 (Uniform Decrease Assumption) Assumption I. I Assumption 1.2 Assumption 0.1 Assumption 0.2
70 70 71 71 71 71
Chapter 6
Proposition Proposition Proposition Proposition Proposition
6.1 6.2 6.3 6.4 6.5
95 95 96 97 97
Assumption A.I Assumption A.2 Assumption A.3 Assumption A.4 Assumption A.5 Assumption F.2 Assumption F.3 Exact Selection Assumption Assumption C
93 93 93 93 93 94 95 95 96
Chapter 7
Proposition 7.1 Proposition 7.2 (Urysohns Theorem) Proposition 7.3 (Alexandroff's Theorem) Proposition 7.4 Proposition 7.5 Proposition 7.6 Proposition 7.7 Proposition 7.8 Proposition 7.9 Proposition 7.10 Proposition 7.11 Proposition 7.12 Proposition 7.13 Proposition 7.14 Proposition 7.15 (Kuratowskis Theorem)
106 106 107 108 109 112 113 114 116 117 118 119 119 120 121
Proposition 7.16 Proposition 7.17 Proposition 7.18 Proposition 7.19 Proposition 7.20 Proposition 7.21 Proposition 7.22 Proposition 7.23 Proposition 7.24 (Dynkin System Theorem) Proposition 7.25 Proposition 7.26 Proposition 7.27 Proposition 7.28 Proposition 7.29 Proposition 7.30 Proposition 7.31 Proposition 7.32 Proposition 7.33 Proposition 7.34 Proposition 7.35 Proposition 7.36 Proposition 7.37 Proposition 7.38 Proposition 7.39 Proposition 7.40 Proposition 7.41 Proposition 7.42 (Lusin's Theorem) Proposition 7.43 Proposition 7.44 Proposition 7.45 Proposition 7.46 Proposition 7.47 Proposition 7.48 Proposition 7.49 (Jankov-von Neumann Theorem) Proposition 7.50 Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma
7.1 (Urysohn's Lemma) 7.2 7.3 7.4 7.5 7.6 7.7 7.8 7.9 7.10 7.11 7.12 7.13
121 122 124 127 127 128 130 131 133 133 134 135 140 144 145 148 148 153 154 158 161 164 165 165 165 166 167 169 172 175 177 179 180 182 184 105 105 116 119 119 125 125 125 127 131 139 144 146
319
TABLE OF PROPOSITIONS Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma
7.14 7.15 7.16 7.17 7.18 7.19 7.20 7.21 7.22 7.23 7.24 7.25 7.26 7.27 7.28 7.29 7.30
Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition
147 149 150 151 151 152 152 154 161 162 163 164 172 173 174 174 177 104 105 107 112 114 117 118 120 121 122 133 134 146 157 157 160 161 167 171 171 177
7.1 7.2 7.3 7.4 7.5 7.6 7.7 7.8 7.9 7.10 7.11 7.12 7.13 7.14 7.15 7.16 7.17 7.18 7.19 7.20 7.21
Chapter 8
Proposition Proposition Proposition Proposition Proposition Proposition Proposition Lemma 8.1 Lemma 8.2
8.1 8.2 8.3 8.4 8.5 8.6 8.7
192 198 200 203 207 209 211 194 196
Lemma Lemma Lemma Lemma Lemma
8.3 8.4 8.5 8.6 8.7
Definition Definition Definition Definition Definition Definition Definition Definition
196 197 202 205 206
8.1 8.2 8.3 8.4 8.5 8.6 8.7 8.8
188 190 19\ 194 195 206 208 210 Chapter 9
Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition
9.1 9.2 9.3 9.4 9.5 9.6 9.7 9.8 9.9 9.10 9.11 9.12 9.13 9.14 9.15 9.16 9.17 9.18 9.19 9.20 9.21
216 219 219 220 223 224 224 225 226 226 227 227 228 231 231 232 234 236 237 239 241
Lemma 9.1 Lemma 9.2 Lemma 9.3
220 221 230
Definition Definition Definition Definition Definition Definition Definition Definition Definition Definition
213 214 214 216 217 217 217 217 218 229
9.1 9.2 9.3 9.4 9.5 9.6 9.7 9.8 9.9 9.10
320
TABLE OF PROPOSITIONS
Chapter 10
Proposition Proposition Proposition Proposition Proposition Proposition Lemma Lemma Lemma Lemma
10.1 10.2 10.3 lOA 10.5 10.6
246 254 256 257 262 264
10.1 10.2 10.3 lOA
Definition Definition Definition Definition Definition Definition Definition Definition
Appendix B
253 255 260 261 243 245 248 249 249 250 251 256
10.1 10.2 10.3 lOA 10.5 10.6 10.7 10.8
Chapter 11
Proposition Proposition Proposition Proposition Proposition Proposition Proposition
11.1 11.2 11.3 1104 11.5 11.6 11.7
266 267 267 268 269 270 272
Appendix A
Proposition A.I
278
Lemma A.I Lemma A.2 Lemma A.3
273 274 275
Definition A. I
273
Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Proposition Lemma Lemma Lemma Lemma Lemma Lemma Lemma Lemma
B.I B.2 B.3 BA B.5 B.6 B.7 B.8 B.9 B.IO B .11 B.12
282 285 289 290 291 292 293 295 296 297 299 300 285 285 286 287 288 294 295 298
B.l B.2 B.3 BA B.5 B.6 B.7 B.8
290 293 298
Definition B.I Definition B.2 Definition B.3
Appendix C
Proposition Proposition Proposition Proposition
C.I C.2 C.3 CA
304 306 308 310
Lemma C.I Lemma C.2
306 310
Definition C.I Definition C.2
303 310
Index
A
Alexandroffs theorem, 107 Analytic measurability ofa function, 171 Analytic set, 160 Analytic c-algebra, 171 A posteriori distribution, 260ff A priori distribution, 260ff Axiom of choice, 301
8
Baire null space, 103, 109 Borel isomorphism, 121 Borel measurability of a function, 120 Borel programmable, 21 Borel o-algebra, 117 Borel space, 118
c Cantor's continuum hypothesis, 301ff Completion of a metric space, 114 ofa o-algebra, 167 Composition of measurable functions, 298 Contraction assumption, 52
Control constraint, 2, 26,188,216,243,245,248, 251. 271 space, 2, 26, 188,216,243,245,248,251, 271 Cost corresponding to a policy, 2, 28, 191,217, 244,249,254 one-stage, 2,189,216,243,245,248,251, 271 optimal, 2, 29,191,217,244,246,250,254 C-sets,20 D
Disturbance kernel, 189,243,245,271 Disturbance space, 189,243,245,271 Dynamic programming (DP) algorithm, 3, 6, 39,57,80,198,229,259 Dynkin system, 133 Dynkin system theorem, 133 E
Epigraph,82 Exact selection assumption, 95 Exponential topology, 304
321
322
INDEX F
Filtering, 261 Fixed point theorem (Banach), 55 F,,-set, 102
G
G,-set, 102 H
Hausdorff metric, 303 Hilbert cube, 103 Homeomorphism, 104 Horizon finite, 28,189,243,245,248,251, 271 infinite, 70, 213, 216, 243, 245, 248, 251
Imperfect state information model, 248 Indicator function, 103 Information vector, 248 Isometry, 144
J Jankov-von Neumann theorem, 182 K
Kuratowski's theorem, 121
L
Limit measurability, 298 Limit IT-algebra, 293 Lindelof space, 106 Lower semianalytic function, 177 Lower semicontinuous function, 146 Lower semicontinuous model, 208 Lusin's theorem, 167 M
Metrizable space, 104 Monotonicity assumption, 27
N
Nonstationary model, 243
o Observation kernel, 248 Observation space, 248 Optimality equation, 4, 57. 71. 73. 78f£, 225 nonstationary, 246 Outer integral, 273 monotone convergence theorem for, 278 p
Paved space, 157 Policy, 2, 6, 26, 91,190,214,217,243,249 analytically measurable. 190, 269ff Borel-measurable, 190 e-optimal,29,191,215,244 {en}-dominated convergence to optimality,29,191,245 k-originating, 243 limit-measurable, 190, 266ff Markov, 6, 190 nonrandomized, 190,249 optimal,29, 191. 215, 244 p-e-optimaI, 12 q-optimal,256 semi-Markov, 190 stationary, 214 uniformly N-stage optimal, 29, 206 universally measurable, 190 weakly q-e-optimal, 256 p-outer measure, 166,274 Projection mapping, 103 R
Regular probability measure, 122 Relative topology, 104 R-operator, 2 1
s Second countable space, 106 Semi-Markov decision problems, 34 Separable space, 105 State space, 2, 26, 188,216,243,245,248, 251,271
323
INDEX State transition kernel, 189,243,248, 251 Statistic sufficient for control, 250 existence of, 259ff Stochastic kernel, 134 Stochastic programming, 11 IT Suslin scheme, 157 nucleus of, 157 regular, 161 System function, 189,216,243,245,271
T Topologically complete space, 107 Totally bounded space, 112
u Uniform decrease assumption, 70 Uniform increase assumption, 70 Universal function, 290 Universal measurability of a function, 171 Universal o-algebra, 167 Upper semicontinuous function, 146 Upper semicontinuous model, 210 Upper semicontinuous (K) function, 310 Urysohn's lemma, 105 Urysohns theorem, 106
w Weak topology on space of probability measures, 125
This page intentionally left blank