GENERAL DYNAMICAL PROCESSES: A Mathematical Introduction

THOMAS G. WINDEKNECHT
Department of Electrical Engineering, Michigan Technological University, Houghton, Michigan

ACADEMIC PRESS
NEW YORK AND LONDON
COPYRIGHT © 1971, BY ACADEMIC PRESS, INC. ALL RIGHTS RESERVED. NO PART OF THIS BOOK MAY BE REPRODUCED IN ANY FORM, BY PHOTOSTAT, MICROFILM, RETRIEVAL SYSTEM, OR ANY OTHER MEANS, WITHOUT WRITTEN PERMISSION FROM THE PUBLISHERS.
ACADEMIC PRESS, INC.
111 Fifth Avenue, New York, New York 10003
United Kingdom Edition published by ACADEMIC PRESS, INC. (LONDON) LTD., Berkeley Square House, London W1X 6BA
LIBRARY OF CONGRESS CATALOG CARD NUMBER: 79-137604
AMS (MOS) 1970 Subject Classification: 93A10
PRINTED IN THE UNITED STATES OF AMERICA
To Margaret and Laura, John, and Beth
Contents

PREFACE
ACKNOWLEDGMENTS

1. General Processes
1.1 Introduction
1.2 Sets and Classes
1.3 Relations and Functions
1.4 Time Sets
1.5 Processes
1.6 Processors
1.7 Free, Functional, and Uncoupled Processors
1.8 Process Morphisms
1.9 Examples of Processes
Exercises

2. Basic Interconnections
2.1 Introduction
2.2 Processors in Series
2.3 Processors in Parallel
2.4 Processor Projections
2.5 Closed-Loop Processors
Exercises

3. Time-Evolution
3.1 Introduction
3.2 Actions of a Monoid
3.3 Time-Evolution of Processes
3.4 The Translation Closure Operation
3.5 Time-Evolution of Interconnections
3.6 Connecting Processes
3.7 Expanding and Stationary Processes
3.8 Specialization to Discrete Time
Exercises

4. Strong Types of Causality
4.1 Introduction
4.2 Static Processors
4.3 Static Interconnections
4.4 Nonanticipatory Processors
4.5 Nonanticipatory Interconnections
4.6 Anticipatory Processors
4.7 Transitional Processors
4.8 Transitional Interconnections
4.9 Discrete-Time Transitional Processors
4.10 Nondiverging Processes
4.11 Generalized Motions
Exercises

5. State Decompositions
5.1 Introduction
5.2 State Equations
5.3 Existence of State Spaces
5.4 States, Interconnections, and Time-Evolution
5.5 Special State Decompositions
Exercises

References
Preface
This book presents a mathematically rigorous approach to general systems theory. Such an undertaking is relatively new and very ambitious; there are few guides to go by, and even the purpose and scope of such a theory are controversial. The author believes general systems theory should be a rigorous, tightly axiomatic development in set theory that reveals what are the basic mathematical structures in systems theory (formulated as generally as possible), what sorts of properties of systems are important, and how these structures and properties are related in general cases. Based on a number of years of research, such an axiomatic development is herein presented. The book introduces the reader to many of the set-theoretic details involved as well as considers some of the main questions. The material was originally a set of course notes for a graduate seminar on general systems and remains, in many ways, a research monograph with mathematical exercises. However, the exercises are extensive and rather complete proofs are given throughout. Therefore, the book is suitable for a one-quarter (or, if supplemented with additional reading and a term paper, a one-semester) graduate course in engineering or applied mathematics. Many first-year students can both comprehend and appreciate the material. The main prerequisites for the student are maturity both with respect to systems engineering and to abstract mathematics. The minimal formal requirements are a rigorous one-term course in axiomatic set theory or modern algebra and an undergraduate background with suitable emphasis on engineering controls and computers. The value of the material is to develop in the theoretically inclined graduate student a suitable overview of the systems theory field.
Acknowledgments
For the author this book represents a first extended report of research on general systems theory undertaken over a period of years at the Systems Research Center of Case Western Reserve University. During the course of this research, many debts have been incurred and it is a pleasure to acknowledge the aid of the following persons (colleagues, visitors, and graduate students): R. B. Banerji, L. G. Birta, W. H. Clifford, Jr., R. J. Dompe, R. H. Foulkes, Jr., M. E. Goldfeder, N. Jones, L. H. Kerschberg, D. Macko, L. R. Marino, M. D. Mesarovic, S. K. Mitter, J. J. Talavage, L. Veelenturf, and T. G. Zogakis. At various times, the research leading to these notes was supported by the Office of Naval Research under contracts Nonr 1141(12) and Nonr 1141(13) and by the National Science Foundation under grants GK-1394 and GK-13300. The author would particularly like to express his thanks to Professor Richard Bellman for permitting this work to be included in his series, Mathematics in Science and Engineering.
1. General Processes

1.1 INTRODUCTION
During the past thirty years, the concept of a system in technology has undergone continual generalization, both in practice and as a theoretical notion. During this same period, the use of mathematical representations (so-called models) for real technological systems in their analysis and design has achieved widespread acceptance and legitimacy. In fact, it now appears to be generally accepted that systems theory can best be defined as "the theory of mathematical models of real technological systems." In recent years, mathematical theorists in the systems field have shown increasing interest in a "general" systems theory. This is a simple result of the fact that the concept of a system is better understood now than ever before, and the problems of systems theory are better formulated. This interest in a general theory is evidenced by a growing number of papers on the subject [1-42], and there have recently appeared at least five books [15, 22, 37, 40, 41] dealing in depth with the foundations of such a theory. Despite the given volume of work, it would appear there is yet considerable room for basic innovation on the subject of mathematical approaches to general systems theory. In fact, some of the most fundamental questions are unsettled. For example, as Kalman et al. [15] state
in their recent excellent work, "there is no universal agreement at present on what the primary definition of a system should be." In this book we propose to examine certain fundamental questions associated with the foundations of a "general" systems theory. The formulation is almost entirely set-theoretic, although we do from time to time employ some simple concepts from abstract algebra. The basic questions we hope to shed light on include:

1. What is a general system?
2. What is time invariance for general systems?
3. What is causality for general systems?
4. What is a state space?
5. How general is the state space approach in systems theory?

Implicitly, we hope to provide a useful introduction to three of the more attractive parts of systems theory: namely, the theory of sequential machines (automata theory), the theory of dynamical systems or motions (topological dynamics), and the theory of general linear systems. A number of basic references to these various areas are given in the references [1-42; 50-67]. In our approach to general systems theory, we have been greatly influenced by Professor M. D. Mesarovic. From him we have borrowed methodology, some important definitions, and an overall approach to the work. It may then be useful to review the basic positions of Mesarovic [20-27]. We begin with the realization that what is the most obvious notion of a system, namely, an interconnection of subsystems, cannot serve as a starting point for a theory of systems. That is, a subsystem is itself a system, and hence this all too attractive definition is circular. This is not to say that the concept of a system as "an interconnection of subsystems" is unimportant (that would be completely wrong) but rather that it is simply not a suitable starting point for a mathematical definition. What Mesarovic proposed instead is the notion that a system is an abstract relation. The idea here is that we can experience a system only by observing or measuring certain attribute variables associated with it, and thus, in making a mathematical model of the system, the system becomes identified with the relation which exists connecting the attribute variables. In [23], Mesarovic gives a relevant mathematical definition based on this point of view
which he repeats in [24] together with an intriguing plan for a theory of general systems. He wrote in part:

Let a family of objects be given V = {Vᵢ | i ∈ I}, where I is the index set for the family V. A system S is simply a relation defined on V, i.e.,

S ⊂ ×{Vᵢ | i ∈ I}

where × indicates Cartesian product. On the basis of this definition, general systems theory becomes simply a general theory of relations. Following a formalization approach, one starts from such a general notion of a system and then proceeds to assume more structure for the objects V₁, ..., Vₙ and investigates the properties induced by the relation S.†
It is clear that the first additional structure one would add to the objects of a general system would serve to formalize the notion that said objects are parametrized by (evolve in) time. Mesarovic has himself discussed this in [26]. With such additional structure, it is clear many authors recognize and indeed have studied such concepts of a general system, including Arbib [1, 2], Kalman [11, 14, 15], Wymore [37], and Zadeh [38, 39]. Thus, we shall show no unusual originality in taking this concept as the starting point for our considerations. However, there are yet many formalizations possible for the notion of an abstract relation parametrized by (evolving in) time, and the reader will find that ours is somewhat novel. Perhaps even more important than the particular formalization of a general system we choose are the implications as to methodology that the above quotation from Mesarovic's paper provides. First, there is the suggestion that many of the basic issues of general systems theory are set-theoretic. Second, there is the implication that we would do well to formalize concepts as generally as we can without adding additional structural assumptions that are irrelevant. Third, and most important, general systems theory is conceived of as a strictly axiomatic mathematical undertaking rather than a constructive one or one with a fixed setup. The idea then is to formalize our concepts with as little structure as possible and then study them in a deliberate axiomatic fashion, adding
† M. D. Mesarovic, New directions in general theory of systems, in "Systems and Computer Science" (J. F. Hart and S. Takasu, eds.), p. 223. University of Toronto Press, Toronto, 1967.
structure to the concept of a system little by little. This is a fairly radical departure from other developments in systems theory, such as automata theory, dynamical systems, and linear system theory, where the setup is more or less rigid and specific. What we are after is a very general and flexible formalism. Some remarks about terminology are in order. Naming concepts in a theory of systems is a dangerous game quite likely to please no one. Still it must be done. The overwhelming majority of concepts we introduce in the following pages are not new. On the other hand, many of them have not been formalized so generally before or with such precision as is demanded here. In short, we shall continually be naming some quite precise formal concepts after some relatively vague but very well-known concepts long talked about in systems theory. To the extent that we attach these names badly, we apologize beforehand and trust the reader to be forgiving. A case in point is the concept of a system as a "relation parametrized by (evolving in) time." One possibility is to call this a "general time system." However, we prefer the names "process" and "processor," implying as they do the underlying existence of time. We actually use both these names, but each with a slightly different meaning, e.g., a "processor" being a special case of a "process." Thus, although we see ourselves as addressing general systems theory, what we actually develop is a theory of processes and processors.

1.2 SETS AND CLASSES
As we indicated above, the principal mathematical tool to be employed in our considerations is set theory. The theory of sets is one of the foundations of mathematics and, over the years, it has been quite carefully analyzed and axiomatized. In this section and the next, we shall briefly review set theory in order to set forth what is presumed known to the reader and to establish some notation. Previous exposure to both mathematical logic and set theory is presumed. In particular, we shall (for the most part) take for granted the notation of mathematical logic, rules for manipulating logical formulas, and the concepts of formal proposition and proof. We shall adopt (essentially outright) the formulation of set theory given by Kelley [46]. The main features of this
formulation are: (i) the embedding of sets within the more general notion of "classes" in order to avoid the classical paradoxes and (ii) the use of one axiom schema together with seven axioms as a basis for the theory. The reader not familiar with Kelley's appendix would do well to peruse it during the course of reading this work, since our treatment will of necessity be quite sketchy and incomplete. Throughout the notes, we use "iff" in place of "if and only if," and we omit universal quantifiers standing before logical formulas (for the most part). Existential quantifiers are always given explicitly, so there is no ambiguity. We denote the primitive logical constants as follows: ∨ ("or"); & ("and"); ⇒ ("implies"); ⇔ ("iff"); ¬ ("not"); (∃x): ("there exists an x such that"); (∀x): ("for all x"); = ("is identical with" or "equal to"); ∈ ("is a member of the class" or "belongs to"); and { · | · } ("the class of all · such that ·"). We employ standard upper- and lower-case English letters for logical variables. Such variables are interpreted throughout to be "classes." More precisely, the reader should recall that logical formulas are formed according to the following rules:
(i) The result of replacing α and β with variables in the expressions α ∈ β and α = β is a formula.
(ii) The result of replacing α and β with variables and 𝒜 and ℬ with formulas in the expressions 𝒜 ⇒ ℬ; 𝒜 ⇔ ℬ; ¬𝒜; 𝒜 & ℬ; 𝒜 ∨ ℬ; (∀α): 𝒜; (∃α): 𝒜; β ∈ {α | 𝒜}; {α | 𝒜} ∈ β; and {α | 𝒜} ∈ {β | ℬ} is a formula.
(iii) Nothing is a formula unless it follows that it is from (i) and (ii).

Axioms are, of course, formulas which are asserted to be true. Theorems are formulas which can be deduced to be true from given axioms and the rules of logic. Again, the latter we presume from the beginning. Kelley's formulation of set theory (which we paraphrase, omitting almost all details in what follows) begins with the axiom of extent:

A = B ⇔ (∀a): (a ∈ A ⇔ a ∈ B)

Essentially, this asserts two classes are identical iff they contain
the same elements or members. From this axiom, it follows that

(i) A = A.
(ii) A = B ⇒ B = A.
(iii) A = B & B = C ⇒ A = C.
Perhaps the most important concept in set theory is the notion of defining a class as the extension of a logical formula. In Kelley, this is set out as an axiom schema: An axiom results if, in the following, α and β are replaced by variables, 𝒜 by a formula, and ℬ by the formula obtained from 𝒜 by replacing each occurrence of the variable which replaced α by the variable which replaced β:

β ∈ {α | 𝒜} ⇔ (∃C): β ∈ C & ℬ

For example, let A replace α and B replace β. If 𝒜 is replaced by the formula A ∈ D ∨ A ∈ E, then ℬ is replaced by the formula B ∈ D ∨ B ∈ E. Thus,

B ∈ {A | A ∈ D ∨ A ∈ E} ⇔ (∃C): B ∈ C & (B ∈ D ∨ B ∈ E)

is an axiom of set theory. Many basic set-theoretic operations can be very conveniently defined as extensions of formulas. For example, the union and intersection of classes A and B are defined as
A ∪ B = {a | a ∈ A ∨ a ∈ B}    A ∩ B = {a | a ∈ A & a ∈ B}
respectively. A number of familiar identities readily follow:
A ∪ A = A    A ∩ A = A
A ∪ B = B ∪ A    A ∩ B = B ∩ A
(A ∪ B) ∪ C = A ∪ (B ∪ C)    (A ∩ B) ∩ C = A ∩ (B ∩ C)
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)    A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
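These identities can be spot-checked mechanically. The following sketch (ours, not the author's) verifies them on small finite sets, with Python's built-in sets standing in for classes; proper classes, of course, admit no such finite representation.

```python
# Spot-check of the union/intersection identities above on finite sets.
# Python's set operators | and & play the roles of the union and intersection.
A, B, C = {1, 2}, {2, 3}, {3, 4}

assert A | A == A and A & A == A                    # idempotence
assert A | B == B | A and A & B == B & A            # commutativity
assert (A | B) | C == A | (B | C)                   # associativity
assert (A & B) & C == A & (B & C)
assert A & (B | C) == (A & B) | (A & C)             # distributivity
assert A | (B & C) == (A | B) & (A | C)
```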
The complement of a class B with respect to a class A is defined as

A − B = {a | a ∈ A & ¬(a ∈ B)}
In general,

A − B = A − (A ∩ B)
A ∩ (B − C) = (A ∩ B) − C
A − (B ∪ C) = (A − B) ∩ (A − C)
A − (B ∩ C) = (A − B) ∪ (A − C)
The universal class and the empty class, respectively, are

𝒰 = {A | A = A}    ∅ = {A | A ≠ A}

where A ≠ A ⇔ ¬(A = A). As is easily proved, ¬(A ∈ ∅). Moreover,

∅ ∪ A = A,    ∅ ∩ A = ∅,    𝒰 ∪ A = 𝒰,    𝒰 ∩ A = A
Two of the most useful elementary concepts for our purposes are the notions of the intersection of a class, i.e.,

∩𝒜 = {b | (∀a): a ∈ 𝒜 ⇒ b ∈ a}

and the union of a class,

∪𝒜 = {b | (∃a): a ∈ 𝒜 & b ∈ a}

It can be proved that

∩∅ = 𝒰,    ∪∅ = ∅,    ∩𝒰 = ∅,    ∪𝒰 = 𝒰
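For finite families of sets, the intersection and union of a class are just folds of the binary operations. A small illustrative sketch (function names are ours); note that the intersection of the empty class, being the universal class, has no finite representation and is rejected.

```python
from functools import reduce

def union_of_class(family):
    # The union of a class: {b | some a in the family has b in a}.
    # The frozenset() initializer makes the union of the empty class empty.
    return reduce(lambda x, y: x | y, family, frozenset())

def intersection_of_class(family):
    # The intersection of a class: {b | every a in the family has b in a}.
    family = list(family)
    if not family:
        raise ValueError("the intersection of the empty class is the universal class")
    return reduce(lambda x, y: x & y, family)

family = [frozenset({1, 2, 3}), frozenset({2, 3, 4}), frozenset({2, 3, 5})]
assert union_of_class(family) == frozenset({1, 2, 3, 4, 5})
assert intersection_of_class(family) == frozenset({2, 3})
assert union_of_class([]) == frozenset()
```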
Next, we define

A ⊂ B ⇔ (∀a): (a ∈ A ⇒ a ∈ B)
and we say A is a subclass of B, A is contained in B, and B contains A iff A ⊂ B. The following elementary properties hold in general:

∅ ⊂ A,    A ⊂ 𝒰,    A ⊂ A
A = B ⇔ A ⊂ B & B ⊂ A
A ⊂ B & B ⊂ C ⇒ A ⊂ C
A ⊂ B ⇔ A ∪ B = B
A ⊂ B ⇔ A ∩ B = A
A ⊂ B ⇒ ∪A ⊂ ∪B & ∩B ⊂ ∩A
A ∈ B ⇒ A ⊂ ∪B & ∩B ⊂ A
The power class of a class A is the class of all subclasses of A, i.e.,

2^A = {B | B ⊂ A}

The singleton of a class A is the class

{A} = {B | B = A}

It follows from the definition that

{A} = {B} ⇔ A = B

The doublet of two classes A and B is the class

{A, B} = {A} ∪ {B}

and it follows that {A, B} = {B, A}. In Kelley, the concept of a set is defined in the following manner:

A is a set ⇔ (∃B): A ∈ B

That is, a set is a class which is an element of some class. From this definition, the above axiom schema about extensions of formulas reduces to

β ∈ {α | 𝒜} ⇔ β is a set & ℬ
Thus, {α | 𝒜} is precisely the class of all sets α for which 𝒜 is true.† Three of the axioms of set theory have to do with postulating that certain of the elementary operations on classes introduced above, when applied to sets, yield sets. For example, the following is among Kelley's seven axioms: If A is a set, then there exists a set B such that (∀C): C ⊂ A ⇒ C ∈ B. Using this axiom, it is proved that: (i) if B is a set and A ⊂ B, then A is a set; and (ii) for any set A, 2^A is a set. The other two such axioms are: If A is a set and B is a set, then A ∪ B is a set. If A is a set, then ∪A is a set. From these axioms, Kelley carefully develops the fact that if A and B are sets, then A ∩ B, A − B, ∩A (A ≠ ∅), {A}, and {A, B} are sets. Finally, it is proved that (i) A ∈ 𝒰 ⇔ A is a set; (ii) ∅ ∈ 𝒰; (iii) 𝒰 is not a set. Here, A ∉ B ⇔ ¬(A ∈ B). The universal class is thus the class of all sets. It is further then a prime example of a class which is not a set.

1.3 RELATIONS AND FUNCTIONS
Many of the above basic set-theoretic concepts play only a relatively minor role in our considerations. Of much greater significance are the concepts of relations and functions and their elementary properties. Again, we shall only sketch the basic prerequisites using Kelley [46] as a basis. If A and B are classes, then the Cartesian product of A and B is the class

A × B = {(a, b) | a ∈ A & b ∈ B}

Here, (a, b) is an ordered pair, which is defined as

(a, b) = {{a}, {a, b}}
† The condition "β is a set" in the axiom schema is important in avoiding certain classical paradoxes in set theory. However, in our general systems developments it is generally a mere technicality whose verification, being straightforward, is omitted altogether.
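The Kuratowski coding of the ordered pair can be exercised directly with hashable sets. This sketch (ours) shows the characteristic property that distinguishes the ordered pair from the doublet.

```python
def pair(a, b):
    # The Kuratowski ordered pair (a, b) = {{a}, {a, b}}.
    return frozenset({frozenset({a}), frozenset({a, b})})

assert pair(1, 2) == pair(1, 2)
assert pair(1, 2) != pair(2, 1)                   # the pair is ordered
assert frozenset({1, 2}) == frozenset({2, 1})     # the doublet is not
assert pair(1, 1) == frozenset({frozenset({1})})  # degenerate case: (a, a) = {{a}}
```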
Kelley proves that if a and b are sets, then (a, b) is a set, and

(a, b) = (c, d) ⇔ a = c & b = d
It is in this latter sense that an ordered pair is "ordered," and we see that a doublet is an "unordered" pair. A relation is a class of ordered pairs, i.e.,

S is a relation ⇔ (∀s): (s ∈ S ⇒ (∃a)(∃b): s = (a, b))

We write aSb ⇔ (a, b) ∈ S. The classes

𝒟S = {a | (∃b): aSb}    ℛS = {b | (∃a): aSb}

then are the domain and range of S, respectively. A relation S is a function or a map iff
aSb & aSc ⇒ b = c

(in which case one writes S: 𝒟S → ℛS (onto), and S: 𝒟S → Y if ℛS ⊂ Y) and is improper iff

S = 𝒟S × ℛS

In general, S ⊂ 𝒟S × ℛS, so S is improper iff, indeed, for some classes A and B,

S = A × B

for in this case, 𝒟S = A and ℛS = B. In general, we define

aS = ∩{b | aSb}

If S is a function, then, as is readily proved,† b = aS ⇔ aSb. Thus, in this case,

S = {(a, b) | b = aS}

† We thus choose to denote the image of a function S at a point a as aS instead of as the more usual S(a).
Moreover, if S and R are functions, then

S = R ⇔ (∀a): aS = aR

If R ⊂ S, then 𝒟R ⊂ 𝒟S and ℛR ⊂ ℛS. The composition of R and S is defined as

R ∘ S = {(a, b) | (∃c): aRc & cSb}

In general, R ∘ S is a relation, and

𝒟(R ∘ S) ⊂ 𝒟R    and    ℛ(R ∘ S) ⊂ ℛS

If ℛR ⊂ 𝒟S, then 𝒟(R ∘ S) = 𝒟R. If 𝒟S ⊂ ℛR, then ℛ(R ∘ S) = ℛS. Quite importantly,

(R ∘ S) ∘ U = R ∘ (S ∘ U)
The composition of two functions is itself a function. In fact, if S and R are functions, then for all a ∈ 𝒟(R ∘ S), a(R ∘ S) = (aR)S.
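With relations represented as sets of pairs, the left-to-right composition used above is a one-line comprehension. A sketch under our own naming:

```python
def compose(R, S):
    # R composed with S: {(a, b) | there is c with aRc and cSb};
    # for functions this gives a(R compose S) = (aR)S, as in the text.
    return {(a, b) for (a, c) in R for (c2, b) in S if c == c2}

def dom(S): return {a for (a, _) in S}
def rng(S): return {b for (_, b) in S}

R = {(1, 'x'), (2, 'y')}
S = {('x', 10), ('y', 20), ('z', 30)}
RS = compose(R, S)
assert RS == {(1, 10), (2, 20)}
assert dom(RS) <= dom(R) and rng(RS) <= rng(S)  # domain/range containments
assert dom(RS) == dom(R)                        # here the range of R lies in the domain of S
```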
The converse of a relation S is the relation

S⁻¹ = {(b, a) | aSb}

In general, 𝒟S⁻¹ = ℛS and ℛS⁻¹ = 𝒟S. Also, (S⁻¹)⁻¹ = S and, for any relations R and S,

(R ∘ S)⁻¹ = S⁻¹ ∘ R⁻¹

For any A, the class

1_A = {(a, a) | a ∈ A}

is the identity relation on A. We see 𝒟1_A = ℛ1_A = A. Moreover, (1_A)⁻¹ = 1_A. A relation S is a 1:1 function iff S is a function and

aS = bS ⇒ a = b

or in other words, for all aSc, bSd: c = d ⇒ a = b.
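The converse and the identity relation admit the same set-of-pairs treatment, and together they give a mechanical injectivity test (a function S is 1:1 exactly when S composed with its converse lies inside an identity relation). Again a sketch with our own names:

```python
def converse(S):
    return {(b, a) for (a, b) in S}

def compose(R, S):
    return {(a, b) for (a, c) in R for (cc, b) in S if c == cc}

def identity(A):
    return {(a, a) for a in A}

def is_one_to_one(S, universe):
    # S composed with converse(S) relates a to b iff aS and bS share a value;
    # for a function this lies inside the identity exactly when S is injective.
    return compose(S, converse(S)) <= identity(universe)

f = {(1, 'a'), (2, 'b')}   # a 1:1 function
g = {(1, 'a'), (2, 'a')}   # a function that is not 1:1
U = {1, 2, 'a', 'b'}
assert is_one_to_one(f, U)
assert not is_one_to_one(g, U)
assert converse(converse(f)) == f   # the converse of the converse is S
```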
It readily follows that S is a 1:1 function iff both S and S⁻¹ are functions. Also, S is a 1:1 function iff S is a function and

S ∘ S⁻¹ ⊂ 1_𝒰

If S is a 1:1 function, then S⁻¹ is a 1:1 function. For any A, 1_A is a 1:1 function. Finally, if S and R are 1:1 functions, then S ∘ R is a 1:1 function. If S is a relation, then for any A,

S|A = {(a, b) | aSb & a ∈ A}

is the restriction of S on A. A relation R is an extension of S iff for some A, S = R|A. In general, S|A is itself a relation, and 𝒟(S|A) = 𝒟S ∩ A and ℛ(S|A) ⊂ ℛS. Indeed, S|A ⊂ S. We see S|𝒟S = S and (S|A)|B = S|(A ∩ B). Any subclass of a relation [a function; a 1:1 function] is a relation [a function; a 1:1 function]. If S is a function, then R is a subclass of S (i.e., R ⊂ S) iff R = S|𝒟R. It follows that if S is a function, R ⊂ S & 𝒟R = 𝒟S ⇒ R = S.

Now, very importantly for our work, we note

B^A = {f | f: A → B}

That is, B^A denotes the class of all functions mapping A into B. The following conditions hold in the general case:

(i) B ⊂ C ⇒ B^A ⊂ C^A.
(ii) B^A ⊂ (B ∪ C)^A.

We now can consider the remaining axioms of set theory. Kelley gives: If S is a function and 𝒟S is a set, then ℛS is a set. He subsequently proves: (i) {A} × B is a set, (ii) A × B is a set, and (iii) B^A is a set whenever both A and B are sets. He further proves: If S is a function and 𝒟S is a set, then S is a set. The remaining axioms of set theory are: If A ≠ ∅, then there exists a class B ∈ A such that A ∩ B = ∅. This axiom leads to A ∉ A and ¬(A ∈ B & B ∈ A). Another axiom is that there exists
a set B such that ∅ ∈ B and A ∪ {A} ∈ B whenever A ∈ B. This axiom leads to a derivation of the Peano postulates for the natural numbers (nonnegative integers) and hence to the existence of this set. The existence of other familiar sets then follows. Finally, there is the axiom of choice. This axiom, which admits a truly magnificent set of alternative formulations and which has been the subject of some controversy in the past, is particularly well treated by Suppes [49]. There, it is shown that one formulation of the axiom of choice (and one which is particularly useful for our work) is: For every relation R there exists a function f ⊂ R such that 𝒟f = 𝒟R.

1.4 TIME SETS
As we mentioned above, we will address ourselves in this book to dynamical processes (or systems) which are parametrized by (evolve in) time. Our first order of business is then to develop a suitable mathematical representation of time, which we shall do in this section. Little motivation for the particular representation of time we choose can be given here. However, at the beginning of Chapter 3, some pertinent discussion will be presented. It suffices to say that the main mathematical structure we shall have in our considerations will be that imposed on a time set, i.e., that which is developed in the next few pages. A semigroup [43, 44, 47] is a set T together with a function +: T × T → T [whose images are written as t + t′ instead of as (t, t′)+] which satisfies

t + (t′ + t″) = (t + t′) + t″
for all t, t′, t″ ∈ T. A semigroup T is a monoid if there exists an element 0 ∈ T such that

t + 0 = t = 0 + t

for all t ∈ T. 0 is an identity. A semigroup T has at most one identity, i.e., if 0 and 0′ are identities in T, then

0 = 0 + 0′ = 0′
Thus, the identity of a monoid T is unique. (Proper) left division over a monoid T is the relation < on T such that

t < t′ ⇔ (∃t″): t″ ≠ 0 & t′ = t + t″    (t″ ∈ T)

1.4.1 Definition. Let T be a monoid. T is a time set iff

(I) (∃t₁)(∃t₂): (t₁ = 0 ∨ t₂ = 0) & t₁ + t = t′ + t₂    (t₁, t₂ ∈ T).
(II) t₁ + t = t + t₂ ⇔ t₁ = t₂    (t₁, t₂ ∈ T).
(III) t₁ + t = 0 ⇒ t₁ = 0.

We think of a time set as a mathematical model of real time. As such, it is clear that a time set should have a number of properties, most especially a simple ordering. That it does is shown in the following theorem.
1.4.2 Theorem. Let T be a monoid. If T is a time set, then

(i) t + t′ = t′ + t (commutativity).
(ii) t + t′ = t + t″ ⇒ t′ = t″ (left cancellation).
(iii) t < t′ ∨ t = t′ ∨ t′ < t (connectedness).
(iv) ¬(t < t) (irreflexivity).
(v) t < t′ ⇒ ¬(t′ < t) (asymmetry).
(vi) t < t′ & t′ < t″ ⇒ t < t″ (transitivity).
(vii) t < t′ ⇔ t″ + t < t″ + t′ (left invariance).
(viii) t < t′ ⇒ t < t′ + t″ (right extension).
(ix) t ≠ 0 ⇔ 0 < t (least element).
(x) t′ ≠ 0 ⇔ t < t + t′.

PROOF. We have

(i) t′ = t′ ⇒ t′ + t = t + t′ (by axiom II).
(ii) t + t′ = t + t″ ⇒ t′ + t = t + t″ (by i) ⇒ t′ = t″ (by II).
(iii) (∃t₁)(∃t₂): (t₁ = 0 ∨ t₂ = 0) & t₁ + t = t′ + t₂ (i.e., I) ⇒ (t₁ ≠ 0 & t₂ = 0 & t₁ + t = t′ + t₂) ∨ (t₁ = 0 & t₂ = 0 & t₁ + t = t′ + t₂) ∨ (t₁ = 0 & t₂ ≠ 0 & t₁ + t = t′ + t₂) ⇒ (t₁ ≠ 0 & t′ = t + t₁) ∨ (t = t′) ∨ (t₂ ≠ 0 & t = t′ + t₂) (using i) ⇒ t < t′ ∨ t = t′ ∨ t′ < t.
(iv) t′ ≠ 0 & t = t + t′ ⇒ t′ ≠ 0 & 0 + t = t + t′ ⇒ t′ ≠ 0 & t′ = 0, which is impossible.
(v) t₁ ≠ 0 & t′ = t + t₁ & t₂ ≠ 0 & t = t′ + t₂ ⇒ t₁ ≠ 0 & t₂ ≠ 0 & t = (t + t₁) + t₂ ⇒ t₁ + t₂ ≠ 0 & t = t + (t₁ + t₂) (using III), which contradicts (iv).
(vi) t₁ ≠ 0 & t′ = t + t₁ & t₂ ≠ 0 & t″ = t′ + t₂ ⇒ t₁ ≠ 0 & t₂ ≠ 0 & t″ = (t + t₁) + t₂ ⇒ (t₁ + t₂) ≠ 0 & t″ = t + (t₁ + t₂).
(vii) t₁ ≠ 0 & t′ = t + t₁ ⇔ t₁ ≠ 0 & t″ + t′ = t″ + (t + t₁) ⇔ t₁ ≠ 0 & t″ + t′ = (t″ + t) + t₁ (using ii).
(viii) t₁ ≠ 0 & t′ = t + t₁ ⇒ t₁ ≠ 0 & t′ + t″ = (t + t₁) + t″ ⇒ (t₁ + t″) ≠ 0 & t′ + t″ = t + (t₁ + t″) (using III).
(ix) t ≠ 0 ⇒ t ≠ 0 & t = 0 + t ⇒ 0 < t; t = 0 ⇒ ¬(0 < t).
(x) t′ ≠ 0 ⇔ 0 < t′ ⇔ t + 0 < t + t′ ⇔ t < t + t′ (using vii and ix). ∎
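The axioms and the derived order properties are easy to spot-check on an initial segment of the nonnegative integers. A sample-based sketch (all names are ours), with t < t′ read as proper left division:

```python
N = range(8)

def lt(t, tp):
    # proper left division on the nonnegative integers:
    # t < t' iff some t'' != 0 has t' = t + t''
    return any(tpp != 0 and tp == t + tpp for tpp in range(16))

for t in N:
    for tp in N:
        assert lt(t, tp) or t == tp or lt(tp, t)      # (iii) connectedness
        assert not lt(t, t)                            # (iv) irreflexivity
        assert (tp != 0) == lt(t, t + tp)              # (x)
        for tpp in N:
            assert lt(t, tp) == lt(tpp + t, tpp + tp)  # (vii) left invariance
```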
REMARK. There are two time sets of particular importance. The set ρ of nonnegative real numbers is a time set with identity 0 under ordinary addition of reals. The set ω of nonnegative integers (the natural numbers) is a time set under addition of integers. Again, 0 is the identity. In both of these cases, left division turns out to be the usual inequality relation < on numbers.
Throughout the following, we shall denote the above sets of numbers by ρ and ω, respectively. We shall presume the reader to be familiar with the elementary properties of these sets of numbers. In fact, we shall presume a familiarity with certain properties of these number sets beyond those indicated here for time sets in general.

1.4.3 Lemma. Let T be a monoid. If T is a time set, then the relation

(−) = {((t, t + t′), t′) | t, t′ ∈ T}

is a function.

PROOF. Immediate from the fact that T admits left cancellation. ∎

The function (−) above is complementation over T.

REMARK. We write t′ − t instead of (t, t′)− for the images of (−). Given t, t′ ∈ T, t′ − t is defined iff either t = t′ or t < t′. In the case of the time sets ρ and ω, (−) is proper subtraction.

1.4.4 Lemma. Let T be a time set. A subset T′ ⊂ T is itself a time set iff: (i) 0 ∈ T′; (ii) t, t′ ∈ T′ ⇒ (t + t′) ∈ T′; and (iii) t, t′ ∈ T′ & t < t′ ⇒ (t′ − t) ∈ T′.

PROOF. An exercise. ∎
PRC)OF.
\\'e conclude this section by establishing one further fact about tinie sets (in fact, about monoids in general). Let T be a semigroup Lvith identity 0 under and let T' be a semigroup with identity 0 under -1- .I function h : T + TI is a (monoid) homomorphism iff
+
I .
Oh
=
0'
( t i- t ' ) h
=
tk
+' t'h
( t , t'
7')
E
T and 7'' are isomorpliic iff there exists a homomorphism h: T-+ T' ( 1 : 1 onto). T and T' are order isomorphic iff there exists a homomorphism h : T + ?" ( 1 : 1 onto) such that t
\\-here
< :
and
'< ,
th
t'h
are left division on T and T',respectively.
1.4.5 Theorem. (and conversely).
Isomorphic monoids are order isomorphic
I,et 7' and '1" be as above. \lie have
IW)OJ..
t -..f '
<< t'
G>
(3"'): t"
f
-=> (3"): 1" #
0 & t' 0 & t'h
~
<*
(3"): t" 4 0 &t'h
<>
(3"): t"h k 0' & t'h
-t=- th
_ 't'h
t"
2
=
( t + t")h
( / I 1s
1 : 1)
th 1 ' t"h
( h is
'1
-,
(Oh
-
th
' t"h
homomorphism)
= 0'
& h is 1 : 1 )
( h is onto)
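Theorem 1.4.5 can be illustrated in miniature. Under the assumption (ours) that h(t) = 2t is taken as a map from ω onto the time set of even naturals, h is a monoid homomorphism, 1:1, and order preserving, exactly as the theorem predicts:

```python
def h(t):
    # a 1:1 homomorphism from the nonnegative integers onto the even naturals
    return 2 * t

sample = range(50)
assert h(0) == 0                                                        # identity goes to identity
assert all(h(t + tp) == h(t) + h(tp) for t in sample for tp in sample)  # homomorphism
assert all((t < tp) == (h(t) < h(tp)) for t in sample for tp in sample) # order preserving
```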
1.5 PROCESSES

Having developed the concept of a time set, we are now in a position to set forth what is to be the main object of our study: namely, the concept of a (dynamical) process. Henceforth, let T be a time set with identity 0 and function +. Also, let < denote left division over T and let − be complementation on T.
1.5.1 Definition. A set p is a T-time function iff p is a function and 𝒟p = T. A T-process is a set of T-time functions.

REMARK. The concept of a time function is very well known in systems theory, particularly in the control field and in other areas dealing with systems which are modeled with ordinary differential equations. The time function has numerous synonyms, e.g., time variable, time history, dynamical variable, dynamic measurement, dynamical observation, or simply a signal. By the definition, a process is simply a set of time functions. Thus, this concept is well known too.
1.5.2 Lemma. A set P is a T-process iff for some set A, P ⊂ A^T.

PROOF. Recall A^T = {p | p: T → A}. Thus, if P ⊂ A^T, then P is a set of T-time functions, i.e., a T-process. Conversely, if P is a set of T-time functions, consider A = ∪{ℛp | p ∈ P}. Clearly, A is a set, and if p ∈ P then p: T → A. In other words, P ⊂ A^T. ∎
Following the established precedent in systems theory, we distinguish two fundamental types of T-processes:
1.5.3 Definition. Let P be a T-process. P is discrete-time iff T is order isomorphic with the additive monoid of the nonnegative integers ω. P is continuous-time iff T is order isomorphic with the additive monoid of the nonnegative reals ρ. In view of Theorem 1.4.5, the following is immediate.
1.5.4 Theorem. A T-process P is discrete-time iff T is isomorphic with the additive monoid ω, and P is continuous-time iff T is isomorphic with the additive monoid ρ.
1.5.5 Theorem. No T-process is both discrete-time and continuous-time.

PROOF. A standard proof such as Cantor's diagonalization construction shows there is no function f: ρ → ω (1 : 1 onto). Now, if P is both discrete-time and continuous-time, there exist homomorphisms g: ρ → T (1 : 1 onto) and h: T → ω (1 : 1 onto). The composition of two 1 : 1 onto functions is a 1 : 1 onto function. It follows that (g ∘ h): ρ → ω (1 : 1 onto). This is a contradiction. ∎
We remark in passing that the distinction between a discrete-time process and a continuous-time process is historically a very important and fundamental one. There is a virtual dichotomy of the literature of systems theory based upon this distinction. We shall find that a good many basic concepts and results in our considerations will not depend on this distinction.
1.5.6 Lemma. A set P is an ω-process iff P is a set of sequences.

PROOF. A sequence is a function whose domain is ω. ∎
1.5.7 Definition. Let P be a T-process. With each t ∈ T, we associate the set

tP = {tp | p ∈ P}

Also, we define the set

𝒜P = {tp | p ∈ P & t ∈ T}

𝒜P is the attainable space of P and tP is the attainable space of P at t ∈ T.
1.5.8 Lemma. Let P be a T-process. Then,

𝒜P = ∪{tP | t ∈ T} = ∪{ℛp | p ∈ P}

Moreover, P ⊆ (𝒜P)^T and, for all t ∈ T, tP ⊆ 𝒜P.

PROOF. For the reader. ∎
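Continuing the finite-time sketch from above (illustrative, not from the text), the attainable spaces of Definition 1.5.7 are one-line computations, and both identities of Lemma 1.5.8 can be checked directly.

```python
T = range(4)
P = {(0, 1, 1, 0), (1, 1, 0, 0)}     # a T-process as a set of tuples

# 𝒜P = {tp | p ∈ P & t ∈ T}: every value some member attains at some time.
aP = {f[t] for f in P for t in T}
# tP: the attainable space at a fixed time, here t = 2.
tP = {f[2] for f in P}

# Lemma 1.5.8: 𝒜P = ∪{tP | t ∈ T} = ∪{ℛp | p ∈ P}, and tP ⊆ 𝒜P.
assert aP == set().union(*({f[t] for f in P} for t in T))
assert aP == set().union(*(set(f) for f in P))
assert tP <= aP
```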
1.5.9 Lemma. If P is a T-process, then P ⊆ A^T ⇔ 𝒜P ⊆ A.

PROOF. P ⊆ (𝒜P)^T. Now, if 𝒜P ⊆ A, then (𝒜P)^T ⊆ A^T. Thus, P ⊆ A^T. Conversely, if a ∈ 𝒜P, then for some p ∈ P and some t ∈ T, a = tp. If P ⊆ A^T, then p: T → A. Thus, tp ∈ A. In other words, a ∈ 𝒜P ⇒ a ∈ A, i.e., 𝒜P ⊆ A. ∎

REMARK. Thus, 𝒜P is the least set A such that P ⊆ A^T.
1.5.10 Lemma. Let P and Q be T-processes. If P ⊆ Q, then 𝒜P ⊆ 𝒜Q and, for all t ∈ T, tP ⊆ tQ.

PROOF. Obvious. ∎
In this chapter and the next, we shall consider a number of ways that two T-processes may be combined. Our first example of this is the following:

1.5.11 Lemma. ∅ is a T-process. If P and Q are T-processes, then P ∪ Q, P − Q, and P ∩ Q are T-processes.

PROOF. ∅ is vacuously a set of T-time functions. Thus, ∅ is a T-process. Now, if P and Q are T-processes then, for some sets A and B, P ⊆ A^T and Q ⊆ B^T. We have

P ∪ Q ⊆ A^T ∪ B^T ⊆ (A ∪ B)^T

so P ∪ Q is itself a T-process. The remaining conditions are left as exercises. ∎

Next, we are going to develop a very fundamental and important way of combining T-processes.
REMARK. If S and R are relations, then the relation

SR = {(a, (b, c)) | aSb & aRc}

is the product of S and R. We see

𝒟(SR) = 𝒟S ∩ 𝒟R,  ℛ(SR) = S⁻¹ ∘ R

If both S and R are functions [1 : 1 functions], then SR is a function [a 1 : 1 function]. In fact, for all a ∈ 𝒟(SR),

a(SR) = (aS, aR)
1.5.12 Definition. If P and Q are T-processes, then the composite of P and Q is the set

PQ = {pq | p ∈ P & q ∈ Q}

where pq is the product of p and q.
1.5.13 Lemma. If u and y are T-time functions, then uy is a T-time function.

PROOF. uy is a function since both u and y are. Moreover,

𝒟(uy) = 𝒟u ∩ 𝒟y = T ∩ T = T

Thus, uy is a T-time function. ∎
1.5.14 Corollary. If P and Q are T-processes, then PQ is a T-process.
1.5.15 Lemma. If u, y, v, and z are T-time functions, then

uy = vz ⇔ u = v & y = z

PROOF. From the set theory, we see:

uy = vz ⇔ (∀t): t(uy) = t(vz) ⇔ (∀t): (tu, ty) = (tv, tz) ⇔ (∀t): tu = tv & ty = tz ⇔ u = v & y = z ∎
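The product pq and the composite PQ are easy to realize in the finite-time sketch (illustrative names, not from the text): the product pairs two tuples pointwise, and Lemma 1.5.15 says the pairing loses no information.

```python
# pq pairs the two functions pointwise: t(pq) = (tp, tq).
def product(p, q):
    return tuple(zip(p, q))

u, y = (0, 1, 2), (5, 6, 7)
v, z = (0, 1, 2), (5, 6, 7)
assert product(u, y) == ((0, 5), (1, 6), (2, 7))
# Lemma 1.5.15: uy = vz iff u = v and y = z.
assert (product(u, y) == product(v, z)) == (u == v and y == z)

# Definition 1.5.12: the composite PQ = {pq | p ∈ P & q ∈ Q}.
def composite(P, Q):
    return {product(p, q) for p in P for q in Q}
```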
1.5.16 Lemma. If P, Q, R, and S are T-processes (P and Q nonempty), then

PQ ⊆ RS ⇔ P ⊆ R & Q ⊆ S

PROOF. From right to left is obvious. Assume PQ ⊆ RS. P and Q are nonempty; choose p ∈ P and q ∈ Q. Now, pq ∈ PQ and, since PQ ⊆ RS, pq ∈ RS. Thus, for some r ∈ R and some s ∈ S, pq = rs. By Lemma 1.5.15, p = r and q = s. Hence, p ∈ R and q ∈ S. Thus, P ⊆ R and Q ⊆ S. ∎
1.5.17 Corollary. If P, Q, R, and S are nonempty T-processes,

PQ = RS ⇔ P = R & Q = S
REMARK. The operation of product on T-time functions is essentially "making a vector function from given components." Thus, if P and Q are T-processes, PQ is the set of all "vector" functions pq such that p ∈ P and q ∈ Q. Corollary 1.5.17 shows that, given PQ with P and Q nonempty, the component processes P and Q are unique. The reader should prove: For any T-processes P and Q,

PQ = ∅ ⇔ P = ∅ ∨ Q = ∅

In other words, if PQ is nonempty, then the component processes P and Q are nonempty and unique. From this condition on PQ = ∅, it is easy to see why we require P and Q to be nonempty in Lemma 1.5.16.

1.6 PROCESSORS
Throughout this book, we are going to continually address ourselves to a certain special type of a process called a "processor." It is the "processor" which we regard as our main formalization of the notion of a "general system." A processor is simply defined given the notion of a process. It is a process whose attainable space is a relation.
1.6.1 Definition. A T-process P is a T-processor iff there exists an ordered pair (A, B) of sets (called an input space and an output space for P, respectively) such that 𝒜P ⊆ A × B.

REMARK. If R is a relation, then the sets

1R = {((a, b), a) | aRb},  2R = {((a, b), b) | aRb}

are the projections of R. 1R and 2R are functions; in fact, 1R: R → 𝒟R (onto) and 2R: R → ℛR (onto).
1.6.2 Lemma. For any sets A and B,

A^T B^T = (A × B)^T

PROOF. If u: T → A and y: T → B, then uy: T → A × B, i.e., t(uy) = (tu, ty). In other words, A^T B^T ⊆ (A × B)^T. Conversely, choose v: T → A × B. Consider the sets

u = v ∘ 1(A × B),  y = v ∘ 2(A × B)

where 1(A × B) and 2(A × B) are as in the above remark. Clearly, u and y are functions, since v and 1(A × B) and 2(A × B) are functions. Now, u and y are T-time functions. That is, ℛv ⊆ A × B = 𝒟(1(A × B)); hence, 𝒟(v ∘ 1(A × B)) = 𝒟v = T (and similarly for y). Now, for all t ∈ T,

t(uy) = (tu, ty) = (t(v ∘ 1(A × B)), t(v ∘ 2(A × B))) = ((tv)1(A × B), (tv)2(A × B)) = tv

that is, uy = v. Now, u: T → A and y: T → B. Hence, for any v: T → A × B, there exist u: T → A and y: T → B such that v = uy. In other words, (A × B)^T ⊆ A^T B^T. ∎
We have the following theorem characterizing T-processors:

1.6.3 Theorem. If P is a set, then the following statements are equivalent:

(i) P is a T-processor.
(ii) There exists an ordered pair of sets (A, B) such that P ⊆ (A × B)^T.
(iii) There exists an ordered pair (Q, R) of T-processes such that P ⊆ QR.
(iv) There exists a T-process S such that P ⊆ SS.
(v) There exists a set C such that P ⊆ (C × C)^T.
(vi) There exists a set C such that 𝒜P ⊆ C × C, where P is a T-process.

PROOF. (i) ⇒ (ii). If P is a T-processor, then P is a T-process, and there exists an ordered pair (A, B) of sets such that 𝒜P ⊆ A × B. Then,

P ⊆ (𝒜P)^T ⊆ (A × B)^T

(ii) ⇒ (iii). Choose Q = A^T and R = B^T. Then using Lemma 1.6.2,

P ⊆ (A × B)^T = A^T B^T = QR

(iii) ⇒ (iv). Choose S = Q ∪ R. By Lemma 1.5.11, S is a T-process. Now, we see

P ⊆ QR ⊆ (Q ∪ R)(Q ∪ R) = SS

(iv) ⇒ (v). Consider C = 𝒜S. Since S ⊆ (𝒜S)^T, we have

P ⊆ SS ⊆ (𝒜S)^T (𝒜S)^T = (𝒜S × 𝒜S)^T = (C × C)^T

(v) ⇒ (vi) is immediate by Lemma 1.5.9. (vi) ⇒ (i) is trivial. ∎
Theorem 1.6.3 gives us a number of alternative ways of viewing a T-processor, some of which we shall refer to in an essential way in what follows. The next definition is fundamental as far as terminology and interpretation are concerned.

1.6.4 Definition. If P is a T-processor, then the sets

P¹ = {u | (∃y): uy ∈ P},  P² = {y | (∃u): uy ∈ P}

are the input set and the output set of P, respectively.

REMARK. Thus we think of a processor as a process with "inputs" and "outputs."
1.6.5 Lemma. If P is a T-processor, then P¹ and P² are T-processes. Moreover,

𝒜P¹ = 𝒟𝒜P,  𝒜P² = ℛ𝒜P

and, for all t ∈ T,

tP¹ = 𝒟tP,  tP² = ℛtP

Also, 𝒜P ⊆ 𝒜P¹ × 𝒜P² and, for all t ∈ T, tP ⊆ tP¹ × tP².

PROOF. P¹ and P² are sets of T-time functions; hence, T-processes. Now, consider the given conditions. We have

𝒜P¹ = {tu | u ∈ P¹ & t ∈ T} = {tu | (∃y): uy ∈ P & t ∈ T} = 𝒟{(tu, ty) | uy ∈ P & t ∈ T} = 𝒟{t(uy) | uy ∈ P & t ∈ T} = 𝒟𝒜P

The next three conditions are proved similarly. The last two conditions are obvious. ∎

REMARK. Recall that 𝒜P¹ and 𝒜P² are the attainable spaces of P¹ and P², respectively. The condition 𝒜P ⊆ 𝒜P¹ × 𝒜P² of Lemma 1.6.5 shows that they are also input and output spaces, respectively, for P. By Lemma 1.5.9, 𝒜P¹ is the least input space for P and 𝒜P² is the least output space.
1.6.6 Lemma. If P is a T-processor, then P ⊆ P¹P². Moreover, if P ⊆ QR, then P¹ ⊆ Q and P² ⊆ R.

PROOF. We have

uy ∈ P ⇒ u ∈ P¹ & y ∈ P² ⇒ uy ∈ P¹P²

so P ⊆ P¹P². Now, let P ⊆ QR. Either P = ∅ or P ≠ ∅. If P = ∅, then P¹ = P² = ∅. Thus, P¹ ⊆ Q and P² ⊆ R. If P ≠ ∅, consider uy ∈ P. Since P ⊆ QR, uy = qr for some q ∈ Q and r ∈ R. By Lemma 1.5.15, u = q and y = r. Thus, again P¹ ⊆ Q and P² ⊆ R. ∎

REMARK. Thus P¹ and P² are the least T-processes Q and R such that P ⊆ QR.

1.6.7 Lemma. Let P and Q be T-processors. If P ⊆ Q, then P¹ ⊆ Q¹ and P² ⊆ Q².

PROOF. Obvious. ∎
1.6.8 Lemma. Let P and Q be nonempty T-processes. Then

(PQ)¹ = P and (PQ)² = Q

PROOF. For the reader. ∎
We come now to another important concept:
1.6.9 Definition. If P is a T-processor, then the relation of P is the set

P∗ = {(u, y) | uy ∈ P}

REMARK. P∗ is what in other writing we have called a T-system. We now deem dealing with P more efficient mathematically than taking P∗ as the basic concept of our theory. As we shall show, there is only a minor distinction between P and P∗ as set-theoretic concepts. P∗ is the principal example of Mesarovic's "general system" in the present case where time is introduced explicitly.
1.6.10 Lemma. If P is a T-processor, then

P¹ = 𝒟P∗,  P² = ℛP∗

PROOF. Obvious. ∎
1.6.11 Lemma. If P and Q are T-processors, then

P ⊆ Q ⇔ P∗ ⊆ Q∗

PROOF. We easily see that uy ∈ P ⇔ uP∗y. Thus if P ⊆ Q, then

uP∗y ⇒ uy ∈ P ⇒ uy ∈ Q ⇒ uQ∗y

that is, P∗ ⊆ Q∗. Conversely, if P∗ ⊆ Q∗,

uy ∈ P ⇒ uP∗y ⇒ uQ∗y ⇒ uy ∈ Q

that is, P ⊆ Q. ∎
1.6.12 Corollary. If P and Q are T-processors, then

P = Q ⇔ P∗ = Q∗

Also, uy ∈ P ⇔ uP∗y and therefore

P = {uy | uP∗y}
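The 1 : 1 passage between a processor P and its relation P∗ can be checked in the finite-time sketch (illustrative names, not from the text): the two conversions below are mutually inverse, which is the content of Corollary 1.6.12.

```python
def split(w):
    u, y = zip(*w)
    return u, y

def relation(P):                    # P∗ = {(u, y) | uy ∈ P}
    return {split(w) for w in P}

def from_relation(S):               # the reverse passage: P = {uy | uP∗y}
    return {tuple(zip(u, y)) for (u, y) in S}

P = {((0, 5), (1, 6)), ((0, 5), (2, 7))}
# Corollary 1.6.12: the correspondence P ↔ P∗ is 1 : 1.
assert from_relation(relation(P)) == P
```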
Corollary 1.6.12 shows that the correspondence between P and P∗ is 1 : 1. The following theorem delineates the class of relations P∗.

1.6.13 Theorem. A relation S is the relation of some T-processor P iff there exists an ordered pair of sets (A, B) such that S ⊆ A^T × B^T.

PROOF. As an exercise. ∎

1.7 FREE, FUNCTIONAL, AND UNCOUPLED PROCESSORS
In this section, we develop five very simple classifications of T-processors. This development is a preview of things to come, and it will serve to introduce the reader to some important methodology. In later chapters, we will be principally occupied with the development of some very much more sophisticated classifications of T-processors.
1.7.1 Definition. Let P be a T-processor. P is uncoupled iff

u ∈ P¹ & y ∈ P² ⇒ uy ∈ P

We have a simple characterization theorem for uncoupled T-processors.
1.7.2 Theorem. If P is a T-processor, then the following statements are equivalent:

(i) P is uncoupled.
(ii) P¹P² ⊆ P.
(iii) There exist T-processes Q and R such that P = QR.
(iv) P = P¹P².
(v) P∗ is improper.

PROOF. (i) ⇒ (ii). If P is uncoupled, then using Lemma 1.5.15,

uy ∈ P¹P² ⇒ u ∈ P¹ & y ∈ P² ⇒ uy ∈ P

(ii) ⇒ (iii). By Lemma 1.6.6, P ⊆ P¹P². Hence if P¹P² ⊆ P, then P = P¹P².

(iii) ⇒ (iv). If P = QR, then P = ∅ ⇔ Q = ∅ ∨ R = ∅. If P ≠ ∅, Lemma 1.6.8 applies and

P¹ = (QR)¹ = Q,  P² = (QR)² = R

Thus, P = P¹P². If P = ∅, then P¹ = P² = ∅ and again P = P¹P².

(iv) ⇒ (v). Using Lemma 1.6.10 and Corollary 1.6.12,

u(𝒟P∗ × ℛP∗)y ⇒ u ∈ 𝒟P∗ & y ∈ ℛP∗ ⇒ u ∈ P¹ & y ∈ P² ⇒ uy ∈ P¹P² ⇒ uy ∈ P ⇒ uP∗y

that is, 𝒟P∗ × ℛP∗ ⊆ P∗. Thus, P∗ is improper.

(v) ⇒ (i). If P∗ is improper,

u ∈ P¹ & y ∈ P² ⇒ u ∈ 𝒟P∗ & y ∈ ℛP∗ ⇒ uP∗y ⇒ uy ∈ P

so P is uncoupled. ∎
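Condition (ii) of the theorem above gives a finite test for uncoupledness, which the sketch below (illustrative, not from the text) applies to two toy processors.

```python
def split(w):
    u, y = zip(*w)
    return u, y

def is_uncoupled(P):
    # Theorem 1.7.2 (ii): P is uncoupled iff P¹P² ⊆ P.
    P1 = {split(w)[0] for w in P}
    P2 = {split(w)[1] for w in P}
    return all(tuple(zip(u, y)) in P for u in P1 for y in P2)

coupled = {((0, 5),), ((1, 6),)}          # input 0 is tied to output 5
assert not is_uncoupled(coupled)
assert is_uncoupled({((0, 5),), ((0, 6),), ((1, 5),), ((1, 6),)})
```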
1.7.3 Corollary. If P and Q are T-processes, then PQ is an uncoupled T-processor.

We remark that an uncoupled T-processor is in an important sense trivial.

1.7.4 Definition. If P is a T-processor, then the set

P⁻¹ = {yu | uy ∈ P}

is the inverse of P. If Q is a T-process, then the set

I_Q = {qq | q ∈ Q}

is the identity on Q.

REMARK. P⁻¹ is itself a T-processor. In general, (P⁻¹)∗ = (P∗)⁻¹. By Corollary 1.6.12, it follows that (P⁻¹)⁻¹ = P. Also, (P⁻¹)¹ = P² and (P⁻¹)² = P¹. I_Q is a T-processor. We see that (I_Q)∗ = 1_Q. Moreover, (I_Q)¹ = (I_Q)² = Q.
1.7.5 Definition. Let P be a T-processor. P is functional [bifunctional] iff for all uy, vz ∈ P,

u = v ⇒ y = z  [u = v ⇔ y = z]
1.7.6 Theorem. If P is a T-processor, then the following statements are equivalent:

(i) P is functional [bifunctional].
(ii) P∗ is a function [a 1 : 1 function].
(iii) P∗: P¹ → P² (onto) [P∗: P¹ → P² (1 : 1 onto)] and

uy ∈ P ⇔ y = uP∗

PROOF. Obvious. ∎

1.7.7 Lemma. Let P be a T-processor. P is bifunctional iff both P and P⁻¹ are functional. For any T-process Q, I_Q is bifunctional.

PROOF. Obvious. ∎
REMARK. For functional T-processors, the output y is unique given the input u, i.e., y = uP∗. It is no great distortion then to regard y as being "determined by" or "caused by" u. Indeed, this is exactly how the situation is usually regarded in systems theory, where the concept of "causality" is exceedingly important. The functional T-processors are distinguished by a very simple type of causality, and some much more sophisticated types exist. In Chapter 4, we shall consider some of these very sophisticated types in great detail.
In the following definition, we introduce a deceivingly simple dichotomy of the T-processors. Actually, this dichotomy is very old and very important conceptually. Perhaps even now though, it is not fully understood.
1.7.8 Definition. Let P be a T-processor. P is free iff 𝒜P¹ has one and only one element. If P is not free, it is said to be forced.

REMARK. A relation R is constant iff

aRb & cRd ⇒ b = d

It follows that a relation R is constant iff for some b, ℛR = {b}. Every constant relation is a function.

1.7.9 Definition. For any a,

ā = {(t, a) | t ∈ T}

REMARK. ā is a constant T-time function. It is easy to prove that a T-time function v is constant iff for some a, v = ā.
1.7.10 Theorem. If P is a T-processor, then the following statements are equivalent:

(i) P is free.
(ii) For some a, 𝒜P¹ = {a}.
(iii) For some a, P¹ = {a}^T.
(iv) For some a, P¹ = {ā}.
(v) For some a, P = {ā}P².
(vi) For some a and some T-process Q, P = {ā}Q.
(vii) For some constant T-time function u and some T-process Q, P = {u}Q.
(viii) For some constant T-time function u, P¹ = {u}.

PROOF. (iv) ⇒ (v). If uy ∈ P, then u = ā and y ∈ P². Thus, uy ∈ {ā}P². Conversely,

uy ∈ {ā}P² ⇒ u = ā & y ∈ P² ⇒ u = ā & (∃v): vy ∈ P ⇒ u = ā & (∃v): (v ∈ P¹ & vy ∈ P) ⇒ u = ā & (∃v): (v = ā & vy ∈ P) ⇒ uy ∈ P

Thus, P = {ā}P². (vii) ⇒ (viii) is by Lemma 1.6.8. All of the remaining implications are obvious. ∎

REMARK. The concepts of free and forced processors (or systems) are quite well known in the various branches of systems theory. For example, the theory of dynamical systems or motions [63-67] is essentially a theory of certain kinds of free, continuous-time processors. We give an introduction to dynamical motions in Chapter 4.
1.7.11 Theorem. Let P be a T-processor. If P is free, then (i) P is uncoupled, (ii) P∗ is a constant function, and (iii) P⁻¹ is functional.

PROOF. As an exercise. ∎
1.7.12 Definition. Let P be a T-processor. P is multivariable iff either P¹ or P² or both are T-processors.

REMARK. The concept of a multivariable processor (or system) is well known in the control field. For example, see Mesarovic [20]. We include Definition 1.7.12 mainly to emphasize the fact that processors can be "nested" to arbitrary (finite) depth. For example, it can happen that (say) (P¹)¹, etc., are T-processors as well as P.
1.8 PROCESS MORPHISMS

In this section, we introduce the notion of a "homomorphism" of T-processes. As we shall show, we have already encountered several instances of such homomorphisms, one of considerable conceptual importance.

1.8.1 Definition. Let P and Q be T-processes. Q is an image of P iff there exists a function h: 𝒜P → 𝒜Q such that

Q = {p ∘ h | p ∈ P}

h is a homomorphism from P to Q. An isomorphism is a homomorphism which is 1 : 1 onto. P and Q are isomorphic iff Q is an image of P with respect to some isomorphism.

Isomorphic T-processes are structurally "identical." An image of a T-process is structurally "similar" to the given T-process but possibly simpler. A homomorphism of nonalgebraic type has been introduced previously in systems theory, namely, in automata theory (for example, see [M]). Our definition (though perhaps not obviously so) is a generalization of the concept in automata theory. We believe the concept of a homomorphism is important, particularly in the light of work like that of [8].
REMARK. By the definition, if h is a homomorphism from P to Q, then

q ∈ Q ⇔ (∃p): p ∈ P & q = p ∘ h
1.8.2 Lemma. Let P, Q, and R be T-processes. If R is an image of Q and Q is an image of P, then R is an image of P.

PROOF. Let h: 𝒜Q → 𝒜R and g: 𝒜P → 𝒜Q be the given homomorphisms. Clearly g ∘ h: 𝒜P → 𝒜R. Now R = {q ∘ h | q ∈ Q} and Q = {p ∘ g | p ∈ P}. Thus,

R = {q ∘ h | q ∈ Q} = {(p ∘ g) ∘ h | p ∈ P} = {p ∘ (g ∘ h) | p ∈ P}

and R is an image of P. ∎
REMARK. It is easy to see that if P and Q are isomorphic and Q and R are isomorphic, then P and R are isomorphic. Also, if P and Q are isomorphic, then Q is an image of P and P is an image of Q. In fact, if h is an isomorphism from P to Q, then h⁻¹ is an isomorphism from Q to P, i.e., h⁻¹: 𝒜Q → 𝒜P (1 : 1 onto) and

P = {p | p ∈ P} = {p ∘ 1_𝒜P | p ∈ P} = {p ∘ (h ∘ h⁻¹) | p ∈ P} = {(p ∘ h) ∘ h⁻¹ | p ∈ P} = {q ∘ h⁻¹ | q ∈ Q}

the last step since Q = {p ∘ h | p ∈ P}.
1.8.3 Lemma. Let P and Q be T-processes. If h is a homomorphism from P to Q, then h: 𝒜P → 𝒜Q (onto).

PROOF. As an exercise. ∎
1.8.4 Theorem. Let P be a T-processor. Both P¹ and P² are images of P.

PROOF. Consider the function 1𝒜P. By Lemma 1.6.5, we see 𝒜P¹ = 𝒟𝒜P. Hence, 1𝒜P: 𝒜P → 𝒜P¹. Moreover, for all uy ∈ P and all t ∈ T,

t(uy ∘ 1𝒜P) = (t(uy))1𝒜P = (tu, ty)1𝒜P = tu

that is, uy ∘ 1𝒜P = u. Thus,

P¹ = {u | (∃y): uy ∈ P} = {uy ∘ 1𝒜P | uy ∈ P} = {p ∘ 1𝒜P | p ∈ P}

Hence, 1𝒜P is a homomorphism from P to P¹. Similarly, 2𝒜P is a homomorphism from P to P². ∎
1.8.5 Corollary. If P and Q are nonempty T-processes, then P and Q are images of PQ.

The following theorem is our first instance of a process isomorphism and very much clarifies the concept of a free T-processor:

1.8.6 Theorem. Let P be a T-processor. If P is free, then P and P² are isomorphic.

PROOF. Consider the map 2𝒜P. Clearly 2𝒜P: 𝒜P → 𝒜P² and

P² = {p ∘ 2𝒜P | p ∈ P}

Thus, 2𝒜P is a homomorphism from P to P². Now, since P is free, for some a, 𝒜P¹ = {a}. By Lemma 1.6.5, 𝒟𝒜P = 𝒜P¹, and it follows that

𝒜P = {(a, b) | b ∈ 𝒜P²}

Moreover, then,

2𝒜P = {((a, b), b) | b ∈ 𝒜P²}

which is clearly 1 : 1 on 𝒜P onto 𝒜P². Thus, 2𝒜P is an isomorphism. ∎

Theorem 1.8.6 makes plausible a practice generally adhered to in systems theory, namely, to identify a free processor with its output set and thus to treat such a processor as a process.
1.8.7 Theorem. For every nonempty T-process, there exists an isomorphic T-processor.

PROOF. Let P be the T-process. Choose a class a and consider

Q = {ā}P

By Theorem 1.7.10, Q is free and Q² = P. Thus, by Theorem 1.8.6, Q and P are isomorphic, where Q is a T-processor. ∎
1.8.8 Theorem. If P is a T-processor, then P and P⁻¹ are isomorphic. If Q is a T-process, then Q and I_Q are isomorphic.

PROOF. For the reader. ∎

We shall encounter a number of examples of homomorphic and isomorphic T-processes in later chapters. We shall find the concept of a homomorphism of processes particularly useful in Chapters 3 and 4 for making proofs.

1.9 EXAMPLES OF PROCESSES

We conclude this introductory chapter with a brief catalog of examples of T-processes and T-processors. The examples are purposely diverse in nature and are intended to demonstrate the wide variety of processes possible within our theory. The section is not essential for the comprehension of what follows but should prove very useful for the reader not well versed in the systems theory field. We abandon the theorem-proof style in this section and simply list the examples. Detailed consideration of these examples is left for the exercises.
1. Let A be an alphabet, i.e., a finite nonempty set, and let r ⊆ A × A. The set P such that

p ∈ P ⇔ p: ω → A & (∀t): (tp) r ((t + 1)p)   (t ∈ ω)

is a symbol manipulator. Clearly, since P ⊆ A^ω, P is an ω-process.
2. Let both A and B be alphabets and let f: A × B → B. The set P such that

uy ∈ P ⇔ u: ω → A & 0y ∈ B & (∀t): (t + 1)y = (tu, ty)f   (t ∈ ω)

is a finite automaton. In general, P ⊆ (A × B)^ω; hence, P is an ω-processor.
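A concrete instance of this scheme can be run directly. The sketch below (not from the text at this point; it borrows the XOR transition table of Exercise 1-18) computes the output time function y from an input u and initial output 0y by iterating (t + 1)y = (tu, ty)f.

```python
# A finite automaton with A = B = {0, 1} and f = XOR, run over a
# finite prefix of ω.
f = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def run(u, y0):
    # (t + 1)y = (tu, ty)f with 0y = y0
    y = [y0]
    for t in range(len(u) - 1):
        y.append(f[(u[t], y[t])])
    return tuple(y)

assert run((1, 1, 1, 1, 1, 1), 0) == (0, 1, 0, 1, 0, 1)
```

With the constant input 1̄ the output toggles every step, which is why the machine is sometimes viewed as a mod-2 accumulator.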
3. Let A, B, and C be alphabets, let c ∈ C, and let functions f: A × C → C and g: C → B be given. The set P with

uy ∈ P ⇔ u: ω → A & (∃x)(∀t): (0x = c & (t + 1)x = (tu, tx)f & ty = (tx)g)   (t ∈ ω)

is a Moore sequential machine (with distinguished initial state c). P is an ω-processor, i.e., P ⊆ (A × B)^ω.

REMARK. If A is a nonempty set and n is a nonnegative integer, then A⁰ = {∅}, A¹ = A, and A^(n+1) = Aⁿ × A. Aⁿ is the set of all left n-tuples on A. Similarly, if A₀ = {∅}, A₁ = A, and A_(n+1) = A × A_n, then A_n is the set of right n-tuples on A. The set of all left-tuples on A is the set

ΣA = ∪{Aⁿ | n ∈ ω}

and the set of all right-tuples on A is

AΣ = ∪{A_n | n ∈ ω}

In general, x ∈ ΣA iff either x = ∅ or x ∈ A or there exists a (unique) y ∈ ΣA and a (unique) a ∈ A such that x = (y, a), i.e., x is of the form x = (...((a₁, a₂), a₃),..., a_n). Similarly, x ∈ AΣ iff either x = ∅ or x ∈ A or there exists a (unique) y ∈ AΣ and a (unique) a ∈ A such that x = (a, y), i.e., x is of the form (a₁, (a₂,..., (a_(n−1), a_n)...)).
4. Let B be an alphabet with ∅ ∉ B and let A = B ∪ {∅}. Let f: A × BΣ → BΣ satisfy

(a, x)f = ∅   if a = ∅ & x = ∅
(a, x)f = a   if a ≠ ∅ & x = ∅
(a, x)f = ∅   if a = ∅ & x ∈ B
(a, x)f = (a, x)   if a ≠ ∅ & x ∈ B
(a, x)f = y   if a = ∅ & (∃b): (b ∈ B & x = (b, y))
(a, x)f = (a, x)   if a ≠ ∅ & (∃b): (b ∈ B & x = (b, y))

The set P such that

uy ∈ P ⇔ u: ω → A & 0y = ∅ & (∀t): (t + 1)y = (tu, ty)f   (t ∈ ω)

is a pushdown list. P is an ω-processor because P ⊆ (A × BΣ)^ω.
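The transition f above admits a compact sketch (illustrative, not from the text; the printed case table is damaged in this copy, so the reading assumed here is that the empty symbol pops the list and any other symbol pushes). `None` stands in for ∅, and nested pairs (b, rest) stand for right-tuples in BΣ.

```python
# One plausible reading of the pushdown-list transition f.
def f(a, x):
    if a is None:                     # a = ∅ pops the list
        if x is None or not isinstance(x, tuple):
            return None               # popping ∅ or a single symbol gives ∅
        return x[1]                   # drop the front of (b, rest)
    if x is None:                     # push onto the empty list
        return a
    return (a, x)                     # push a on the front

x = None
for a in ('p', 'q', None, 'r'):       # push p, push q, pop, push r
    x = f(a, x)
assert x == ('r', 'p')
```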
5. Let B and C be alphabets with − ∈ B and 1, −1, ∅ ∉ B, and let A = B ∪ {1, −1}. Let functions f: B × C → C and g: B × C → A be given. Then define the function

h: (ΣB × BΣ) × C → (ΣB × BΣ) × C

The set P such that

p ∈ P ⇔ p: ω → (ΣB × BΣ) × C & (∀t): (t + 1)p = (tp)h   (t ∈ ω)

is a Turing machine. P is an ω-process, i.e., P ⊆ ((ΣB × BΣ) × C)^ω.
6. Let R be the set of real numbers, and let i, j, k be positive integers. Let Ω ⊆ (R^i)^ω and X ⊆ R^j be given together with functions f: ω × (R^i × R^j) → R^j and g: ω × (R^i × R^j) → R^k. The set P such that

uy ∈ P ⇔ (∃x)(∀t): (u ∈ Ω & 0x ∈ X & (t + 1)x = (t, (tu, tx))f & ty = (t, (tu, tx))g)   (t ∈ ω)

is a finite-difference equation system. In general, P ⊆ (R^i × R^k)^ω; hence, P is an ω-processor.
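The recursion pattern of this example is generic and can be sketched as a small simulator (illustrative; the particular f and g below are toy choices, not from the text): given an input prefix, an initial state, and the two maps, it produces the corresponding output prefix.

```python
# Generic simulator for (t + 1)x = (t, (tu, tx))f, ty = (t, (tu, tx))g.
def simulate(u, x0, f, g):
    x, y = x0, []
    for t, ut in enumerate(u):
        y.append(g(t, ut, x))         # ty from the current state
        x = f(t, ut, x)               # advance the state
    return tuple(y)

# Toy instance: a scalar accumulator state read out directly.
y = simulate((1, 2, 3), 0,
             f=lambda t, ut, x: x + ut,
             g=lambda t, ut, x: x)
assert y == (0, 1, 3)
```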
7. Let R be the set of real numbers and let j be a positive integer. Let D be the set of all differentiable functions in (R^j)^ρ and let a function f: R^j → R^j be given. The set P such that

p ∈ P ⇔ p ∈ D & 0p ∈ R^j & (∀t): t(dp/dt) = (tp)f   (t ∈ ρ)

is a differential motion. P is a ρ-process, i.e., P ⊆ (R^j)^ρ.
8. Let R be the set of real numbers, and let i, j, k be positive integers. Let D be the set of all differentiable functions in (R^j)^ρ. Let sets Ω ⊆ (R^i)^ρ and X ⊆ R^j be given together with functions f: R^i × R^j → R^j and g: R^j → R^k. The set P such that

uy ∈ P ⇔ (∃x)(∀t): (u ∈ Ω & x ∈ D & 0x ∈ X & t(dx/dt) = (tu, tx)f & ty = (tx)g)   (t ∈ ρ)

is a time-invariant ordinary differential equation system. P is a ρ-processor, i.e., P ⊆ (R^i × R^k)^ρ.
Exercises

1-1. Prove that a time set with more than one element is infinite. [Hint: consider tₙ, where tₙ₊₁ = tₙ + t and t₁ = t, for all t ∈ T (t ≠ 0) and all n ∈ ω (n ≠ 0).]
1-2. A monoid M is a group iff for every m ∈ M there exists some m′ ∈ M such that m + m′ = 0 = m′ + m. Prove that a time set is a group iff it has precisely one element.
1-3. Prove on a time set T:

(i) If t < t′, then t′ = t + (t′ − t).
(ii) If t < t′, then t < (t′ + t″) and (t′ + t″) − t = (t′ − t) + t″.
(iii) If t < (t + t′), then (t + t′) − t = t′.
(iv) If t < t′ and t′ < t″, then t < t″ and (t″ − t) = (t′ − t) + (t″ − t′).

1-4. Prove Lemma 1.4.4.
1-5. Prove Lemma 1.5.8.

1-6. Prove for any relation R, 1_R = (1R)(2R).
1-7. If f: A → B is any function, prove:

(i) The set P = {u(u ∘ f) | u ∈ A^T & u is constant} is a T-processor.
(ii) P is functional.
(iii) y ∈ P² ⇒ y is constant.
(iv) 𝒜P² = ℛf.

1-8. The preceding exercise shows how any function may be "represented" as a T-processor. Devise an analogous "representation" for relations.

1-9. If P is a ρ-process, then the set

Pω = {p ↾ ω | p ∈ P}

is the discrete-time analog of P, where p ↾ ω is the restriction of p to ω. Prove that the discrete-time analog of a ρ-processor is an ω-processor.

1-10. Give necessary and sufficient conditions for the discrete-time analog of a functional ρ-processor to be functional.

1-11. Prove that the discrete-time analog of a free ρ-processor is free.

1-12. Prove that a T-processor P is uncoupled if P⁻¹ is free.

1-13. Prove Lemma 1.8.3.

1-14. Prove Theorem 1.8.8.
1-15. Consider the symbol manipulator P with

A = {a, b, c, d, e}
r = {(a, d), (a, e), (b, d), (c, b), (c, d), (d, a), (e, e)}

Specify three T-time functions p, q, x ∈ P such that 0p = 0q = 0x = c.

1-16. Prove that for any symbol manipulator P, 𝒜P ⊆ 𝒟r ∪ ℛr.

1-17. Discuss the proposition: Every symbol manipulator is a finite set.
1-18. Consider the finite automaton P with

A = B = {0, 1}
f = {((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)}

Specify all T-time functions y such that uy ∈ P with u = 1̄.
1-19. For the finite automaton P of Exercise 1-18, specify a T-time function y such that 0y = 0 and uy ∈ P, where

tu = 1 for t = 1, 3, 5, 7, 11, 15
tu = 0 otherwise

1-20. For a general finite automaton P, give expressions for P¹, tP¹, 𝒜P¹, and 𝒜P.

1-21. Prove that every Moore sequential machine (with distinguished initial state) is functional.
1-22. Consider the RIoore sequential machine P such that
1-23. Prove that every pushdown list is functional. Give the input set (in the general case).

1-24. Consider the pushdown list P with B = {α, β}. Give uP∗ for the case of

tu = α for t = 2, 5, 6, 9
tu = β for t = 4, 7
tu = ∅ otherwise

1-25. For the pushdown list P of Exercise 1-24, give two T-time functions p, q ∈ P such that 0p = 0q = ∅ and, for some t,

tp = tq = (α, (α, (α, (β, (α, (α, (α, α)))))))
1-26. Consider the finite-difference equation system P such that

i = k = 1,  j = 3
Ω = (R¹)^ω,  X = R³
(t, (u, (y₁, (y₂, y₃))))f = (y₂, (y₃, u + y₃ − y₁))
(t, (u, (y₁, (y₂, y₃))))g = y₁

Specify a T-time function y such that uy ∈ P, where u = 0̄, 0y = 1, and 1y = 0.
1-27. Prove that a finite-difference equation system P is functional if X contains one element and is free if Ω = {0̄}, where X is nonempty.
1-28. Prove that the ρ-process

P = {F_r | r ∈ R},  F_r = {(t, re⁻ᵗ) | t ∈ ρ}

is a differential motion, where e is the base of natural logarithms.
1-29. Prove that every differential motion is isomorphic to some time-invariant differential equation system.

1-30. Prove that the set of positive rational numbers together with zero forms a time set.
2 Basic Interconnections

2.1 INTRODUCTION

We suggested in the introduction to Chapter 1 that the idea of a system as "an interconnection of subsystems" is very fundamental. Going along with this, there is perhaps no problem in systems theory more important than the investigation of interconnections and decompositions of systems. It is of great importance to understand how the properties of an interconnection of systems are related to the properties of the systems interconnected. Also, it is fundamental that systems theory explain how a given system may be decomposed into subsystems, i.e., how it may be realized as an interconnection of other systems. As a background for such considerations here, we need to develop the concept of interconnections within our formalism. In Chapter 1, we introduced the concept of a T-processor as a special case of the T-process and suggested that the former serve as our main concept of a "general system." Consistent with this point of view, it follows that "interconnections" in our theory will be a set of operations which are defined on and yield T-processors. We actually consider five operations on T-processors for this purpose. Two of these are binary operations (one T-processor is produced from two given ones) and the remaining three are unary (one T-processor is produced by modifying another).
Although, of course, the concept of interconnections is very well known throughout systems theory, there has been relatively little work done on formalizing and studying them in such a general case as we now have the capability for. We shall draw upon the results obtained in this chapter in our subsequent considerations extensively.

2.2 PROCESSORS IN SERIES

In this section and the next, we introduce two of the very basic interconnections of T-processors. In keeping with many precedents† in systems theory, we call these particular interconnections "series" and "parallel."
2.2.1 Definition. If P and Q are T-processors, then the series interconnection of P and Q is the set

P ∘ Q = {uz | (∃y): uy ∈ P & yz ∈ Q}

P ∘ Q is said to be proper‡ iff P² ⊆ Q¹.

REMARK. Clearly, P ∘ Q is generally a T-process. Moreover, it is a T-processor since, as is obvious, (P ∘ Q) ⊆ P¹Q².
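In the finite-time sketch used in Chapter 1 (illustrative names, not from the text), the series interconnection is a direct transcription of the definition: pair p ∈ P with q ∈ Q whenever the output of p equals the input of q.

```python
def split(w):
    u, y = zip(*w)
    return u, y

def series(P, Q):
    # P ∘ Q = {uz | (∃y): uy ∈ P & yz ∈ Q}
    return {tuple(zip(split(p)[0], split(q)[1]))
            for p in P for q in Q
            if split(p)[1] == split(q)[0]}

P = {((0, 5), (1, 6))}
Q = {((5, 'a'), (6, 'b'))}
assert series(P, Q) == {((0, 'a'), (1, 'b'))}
```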
2.2.2 Lemma. If P and Q are T-processors, then

(P ∘ Q)∗ = P∗ ∘ Q∗

where the symbol ∘ on the right is composition.

PROOF. We have

u(P ∘ Q)∗z ⇔ uz ∈ P ∘ Q ⇔ (∃y): uy ∈ P & yz ∈ Q ⇔ (∃y): uP∗y & yQ∗z ⇔ u(P∗ ∘ Q∗)z ∎

† Although almost every branch of systems theory discusses "series" and "parallel" interconnections, there is some ambiguity in what is generally meant by these terms. What is meant by "parallel" is different in automata theory than what is meant in linear system theory, and there is more than one notion of a "series" interconnection in both areas.

‡ This requirement is highly intuitive. It says that every output of P must be an input of Q.
REMARK. Lemma 2.2.2 makes clear the fact that the use of the symbol ∘ for both composition of relations and for series interconnections of T-processors is justified, i.e., the dual of series interconnection is composition applied to the relations P∗ and Q∗.
The following conditions are immediate by Corollary 1.6.12:

2.2.3 Corollary. If P, Q, and R are T-processors, then

P ∘ (Q ∘ R) = (P ∘ Q) ∘ R

If P and Q are functional [bifunctional], then P ∘ Q is functional [bifunctional]. Also,

(P ∘ Q)⁻¹ = Q⁻¹ ∘ P⁻¹
2.2.4 Lemma. If P and Q are T-processors, then (P ∘ Q)¹ ⊆ P¹ and (P ∘ Q)² ⊆ Q². If P ∘ Q is proper, then (P ∘ Q)¹ = P¹.

PROOF. The first two conditions are obvious. If P² ⊆ Q¹, then

u ∈ P¹ ⇒ (∃y): uy ∈ P ⇒ (∃y): uy ∈ P & y ∈ P² ⇒ (∃y): uy ∈ P & y ∈ Q¹ ⇒ (∃y)(∃z): uy ∈ P & yz ∈ Q ⇒ (∃z): uz ∈ P ∘ Q ⇒ u ∈ (P ∘ Q)¹

that is, P¹ ⊆ (P ∘ Q)¹. Thus, (P ∘ Q)¹ = P¹. ∎
2.2.5 Corollary. Let P and Q be T-processors. If P ∘ Q is proper, then (P ∘ Q)¹ is an uncoupled T-processor iff P¹ is. Also, P ∘ Q is free iff P is free.
2.2.6 Lemma. Let P, Q, R, and S be T-processors. If P ⊆ R and Q ⊆ S, then (P ∘ Q) ⊆ (R ∘ S).

PROOF. We have

uz ∈ (P ∘ Q) ⇒ (∃y): uy ∈ P & yz ∈ Q ⇒ (∃y): uy ∈ R & yz ∈ S ⇒ uz ∈ (R ∘ S) ∎
2.2.7 Lemma. If P is a T-processor, then both I(P¹) and I(P²) are T-processors and

I(P¹) ∘ P = P = P ∘ I(P²)

where both interconnections are proper.

PROOF. Obvious. ∎
2.2.8 Lemma. If P and Q are T-processes, then

P ⊆ Q ⇒ I_P ⊆ I_Q

PROOF. Obvious. ∎
2.2.9 Lemma. Let P be a T-processor and let Q be a T-process. Then I(Q) ∘ P ⊆ P and

(I(Q) ∘ P)* = P*/Q

PROOF. Recall P*/Q = {(u, y) | uP*y & u ∈ Q}. We have

u(I(Q) ∘ P)*y ⟺ uy ∈ I(Q) ∘ P ⟺ (∃z): uz ∈ I(Q) & zy ∈ P ⟺ uu ∈ I(Q) & uy ∈ P ⟺ u ∈ Q & uP*y ⟺ u(P*/Q)y

that is, (I(Q) ∘ P)* = P*/Q. Now, P*/Q ⊆ P*. Thus, by Lemma 1.6.11, we see I(Q) ∘ P ⊆ P. ∎
REMARK. Thus, the series interconnection I(Q) ∘ P serves to denote the "restriction" of the T-processor P to the input set Q, where Q is a given T-process and Q ⊆ P¹.
2.2.10 Lemma. Let P and Q be T-processors. If Q is functional, then I(P²) ∘ Q is functional.

PROOF. Since Q is functional, we have

yz ∈ I(P²) ∘ Q & yz' ∈ I(P²) ∘ Q ⟹ yy ∈ I(P²) & yz ∈ Q & yz' ∈ Q ⟹ y ∈ P² & yz ∈ Q & yz' ∈ Q ⟹ z = z'

This proves I(P²) ∘ Q is functional. ∎
2.3 PROCESSORS IN PARALLEL
We continue with a similar elementary investigation of the "parallel" interconnection of T-processors.
2.3.1 Definition. If P and Q are T-processors, then the parallel interconnection of P and Q is the set

P//Q = {(uv)(yz) | uy ∈ P & vz ∈ Q}

REMARK. We see P//Q ⊆ (P¹Q¹)(P²Q²), and it follows that P//Q is a T-processor. Moreover, (P//Q)¹ ⊆ P¹Q¹ and (P//Q)² ⊆ P²Q²; hence, both (P//Q)¹ and (P//Q)² are T-processors. Thus, P//Q is always multivariable.
2.3.2 Lemma. If P and Q are T-processors, then (P//Q)¹ = P¹Q¹ and (P//Q)² = P²Q².

PROOF. We see

(P//Q)¹ = {uv | (∃y): uy ∈ P & (∃z): vz ∈ Q} = {uv | u ∈ P¹ & v ∈ Q¹} = P¹Q¹

The condition (P//Q)² = P²Q² is similar. ∎
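Lemma 2.3.2 can likewise be checked concretely. The following sketch continues the finite-relation model used above (our own illustration, not the book's): the pair function uv is encoded by zipping two value tuples pointwise.

```python
# Parallel interconnection P//Q (Definition 2.3.1): inputs and outputs
# are paired pointwise, so (uv)(yz) is in P//Q iff uy in P and vz in Q.

def pair(u, v):
    """The T-time function uv with t(uv) = (tu, tv)."""
    return tuple(zip(u, v))

def parallel(P, Q):
    return {(pair(u, v), pair(y, z)) for (u, y) in P for (v, z) in Q}

P = {((0, 1), (1, 0)), ((1, 1), (0, 0))}
Q = {((2, 2), (3, 2))}

PQ = parallel(P, Q)

# Lemma 2.3.2: (P//Q)^1 = P^1 Q^1 -- every paired input is exactly a
# pairing of an input of P with an input of Q.
assert {uv for (uv, _) in PQ} == {pair(u, v) for u in [(0, 1), (1, 1)] for v in [(2, 2)]}
```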
By Theorem 1.7.2, we have:

2.3.3 Corollary. If P and Q are T-processors, then (P//Q)¹ and (P//Q)² are T-processors. Moreover, (P//Q)¹ and (P//Q)² are uncoupled.

2.3.4 Lemma. If P and Q are T-processors, then (P//Q) is free iff both P and Q are free.

PROOF. If u and v are T-time functions, then uv is constant iff both u and v are; in fact, the pairing of the constant functions with values a and b is the constant function with value (a, b). Now, since (P//Q)¹ = P¹Q¹, it follows that (P//Q)¹ contains precisely one constant T-time function iff both P¹ and Q¹ do. ∎
2.3.5 Lemma. If P and Q are T-processors, then

(P//Q)⁻¹ = P⁻¹//Q⁻¹

PROOF. For the reader. ∎
2.3.6 Lemma. If P and Q are nonempty T-processors, then (P//Q) is functional [bifunctional] iff both P and Q are functional [bifunctional].

PROOF. Let P and Q be functional. Choose (uv)(yz) ∈ (P//Q) and (u'v')(y'z') ∈ (P//Q). Clearly, uy, u'y' ∈ P and vz, v'z' ∈ Q. Using Lemma 1.5.15,

uv = u'v' ⟹ u = u' & v = v' ⟹ y = y' & z = z' ⟹ yz = y'z'

that is, (P//Q) is functional. Conversely, let (P//Q) be functional. Choose uy, u'y' ∈ P. Since Q is nonempty, there exists some vz ∈ Q. Since (P//Q) is functional,

u = u' ⟹ uv = u'v ⟹ yz = y'z ⟹ y = y'

that is, P is functional. Similarly, Q is functional. The condition for the bifunctional case follows from Lemmas 1.7.7 and 2.3.5. ∎
2.3.7 Lemma. Let P, Q, R, and S be T-processors. If P ⊆ R and Q ⊆ S, then (P//Q) ⊆ (R//S).

PROOF. Obvious. ∎

Lemma 2.2.7 makes clear the fact that any T-processor admits (trivial) series decompositions which are proper. There is an issue in the parallel case which we can easily settle:
2.3.8 Theorem. If P is a nonempty T-processor, then the following statements are equivalent:

(i) There exist T-processors Q and R such that P = Q//R.
(ii) P¹ and P² are T-processors and P = U//V, where

U = {uy | (∃v)(∃z): (uv)(yz) ∈ P}
V = {vz | (∃u)(∃y): (uv)(yz) ∈ P}
(iii) P¹ and P² are T-processors, and

(uv')(yz') ∈ P & (u'v)(y'z) ∈ P ⟹ (uv)(yz) ∈ P

PROOF. (ii) ⟹ (i) is trivial.
(i) ⟹ (ii). If P = Q//R, then obviously both P¹ and P² are T-processors. Now, since P is nonempty, both Q and R are nonempty. In this case, clearly, Q = U and R = V.
(ii) ⟹ (iii). If P = U//V, then

(uv')(yz') ∈ P & (u'v)(y'z) ∈ P ⟹ uy ∈ U & vz ∈ V ⟹ (uv)(yz) ∈ (U//V) = P

(iii) ⟹ (ii). Given (iii), U//V ⊆ P. In fact,

(uv)(yz) ∈ (U//V) ⟹ uy ∈ U & vz ∈ V ⟹ (∃v')(∃z'): (uv')(yz') ∈ P & (∃u')(∃y'): (u'v)(y'z) ∈ P ⟹ (uv)(yz) ∈ P

But, also, P ⊆ (U//V). That is,

(uv)(yz) ∈ P ⟹ uy ∈ U & vz ∈ V ⟹ (uv)(yz) ∈ (U//V)

Hence, P = U//V. ∎
2.3.9 Theorem. If P, Q, R, and S are T-processors, then

(P//Q) ∘ (R//S) = (P ∘ R)//(Q ∘ S)
We conclude this section by proving:
2.3.10 Theorem. If P and Q are nonempty T-processors, then both P and Q are images of P//Q.
PROOF. Consider the function h defined on the values of P//Q by

h = {(((a, b), (c, d)), (a, c)) | (a, b) ∈ 𝒜P & (c, d) ∈ 𝒜Q}

Clearly, for all (uv)(yz) ∈ P//Q and all t ∈ T,

(t((uv)(yz)))h = ((tu, tv), (ty, tz))h = (tu, ty) = t(uy)

that is, ((uv)(yz)) ∘ h = uy. Thus, since Q is nonempty, every uy ∈ P is obtained in this way, and P is an image of P//Q. Similarly, Q is an image of P//Q. ∎
2.4 PROCESSOR PROJECTIONS

T-processes, though highly generalized and abstract, are still concrete models for real dynamical processes. It is, of course, in the realm of real dynamical processes that one must look to decide which operations on T-processors correctly interpret to be interconnections and which do not. If it is obvious that the above series and parallel interconnections of T-processors are correct formalizations of corresponding notions of interconnections in real physical processes, it is equally unclear what other elementary operations must be added in order to have a "complete" set of interconnection operations for T-processors, i.e., in the sense that any "real" interconnection can be represented algebraically. It is our guess at this point in time that we get approximately a "complete" set when we add to the above series and parallel operations (which are of course binary operations) three unary operations. One of these unary operations, which we call the "closed loop" operation (and which is introduced in the next section), is employed to account for "feedback." Feedback, which is definitely distinct from series and parallel interconnection, is one of the most important concepts in systems theory from the point of view of practical engineering. Feedback accomplishes many very
important tasks; most especially, it permits some quite complex systems to be constructed from simple subsystems. In other words, one suspects at the outset that the "closed loop" operation applied to a processor may yield a processor with radically different properties than the processor to which the operation is applied.† The other two unary operations we believe are fundamental here are quite unsophisticated projection operations:
2.4.1 Definition. If P is a T-processor, then the sets

P: = {u(uy) | uy ∈ P}
:P = {(uy)y | uy ∈ P}

are the feedforward and the feedback of P, respectively.

REMARK. P: ⊆ P¹P, so P: is a T-processor. Moreover, (P:)² ⊆ P, so P: is multivariable. :P ⊆ PP², which implies :P is a T-processor. Also, (:P)¹ ⊆ P, so :P is likewise multivariable. We note

u(xy) ∈ P: ⟺ uy ∈ P & u = x
(uy)z ∈ :P ⟺ uy ∈ P & y = z
2.4.2 Lemma. If P is a T-processor, then (P:)¹ = P¹, (P:)² = P, (:P)¹ = P, and (:P)² = P².

PROOF. Obvious. ∎
2.4.3 Lemma. If P and Q are T-processors, then

(P:) ⊆ (Q:) ⟺ P ⊆ Q ⟺ (:P) ⊆ (:Q)

PROOF. Clearly, if P ⊆ Q, then (P:) ⊆ (Q:) and (:P) ⊆ (:Q). Conversely, if (:P) ⊆ (:Q), we have

uy ∈ P ⟹ (uy)y ∈ :P ⟹ (uy)y ∈ :Q ⟹ uy ∈ Q

and if (P:) ⊆ (Q:),

uy ∈ P ⟹ u(uy) ∈ P: ⟹ u(uy) ∈ Q: ⟹ uy ∈ Q

that is, in either case, P ⊆ Q. ∎

† Unfortunately, by the same token, we should not expect to prove a great deal about the closed loop operation in general; i.e., what we are mostly doing here is showing preservation of various properties under operations.
2.4.4 Corollary. If P and Q are T-processors, then

(P:) = (Q:) ⟺ P = Q ⟺ (:P) = (:Q)
2.4.5 Theorem. If P is a T-processor, then the following statements are equivalent:

(i) There exists a T-processor Q such that P = Q: [P = :Q];
(ii) P² [P¹] is a T-processor, and u(zy) ∈ P ⟹ u = z [(uy)z ∈ P ⟹ y = z];
(iii) P² [P¹] is a T-processor and P = (P²): [P = :(P¹)].

PROOF. We shall treat the latter case and leave the former as an exercise.
(i) ⟹ (ii) is obvious.
(ii) ⟹ (iii). If (ii) holds, then

uy ∈ P¹ ⟺ (∃z): (uy)z ∈ P ⟺ (uy)y ∈ P

Thus, P = :(P¹).
(iii) ⟹ (i) is trivial. ∎
2.4.6 Lemma. If P is a T-processor, then P: is free iff P is free. Also, :P is functional.

PROOF. Obvious. ∎

2.4.7 Lemma. If P is a T-processor, then

P = P: ∘ :P

PROOF. We have

uy ∈ P: ∘ :P ⟺ (∃x)(∃z): u(xz) ∈ P: & (xz)y ∈ :P ⟺ (∃x)(∃z): u(xz) ∈ P: & u = x & (xz)y ∈ :P & z = y ⟺ u(uy) ∈ P: & (uy)y ∈ :P ⟺ uy ∈ P & uy ∈ P ⟺ uy ∈ P

that is, P = P: ∘ :P. ∎
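Lemma 2.4.7 (P = P: ∘ :P) can be verified directly in the same finite-relation model; the sketch below, with our own encodings, again treats the pair function as a pointwise zip.

```python
# Feedforward P: = {u(uy)} and feedback :P = {(uy)y} (Definition 2.4.1).

def pair(u, v):
    return tuple(zip(u, v))

def feedforward(P):   # P:
    return {(u, pair(u, y)) for (u, y) in P}

def feedback(P):      # :P
    return {(pair(u, y), y) for (u, y) in P}

def compose(P, Q):
    return {(u, z) for (u, y) in P for (y2, z) in Q if y == y2}

P = {((0, 1), (1, 1)), ((1, 0), (0, 0))}

# Lemma 2.4.7: P = P: o :P, so the two projections are "inverses" of
# each other in the weak sense of the text.
assert compose(feedforward(P), feedback(P)) == P
```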
REMARK. Lemma 2.4.7 shows that, in the given weak sense, for any T-processor P, P: and :P are "inverses" of each other. This interesting property leads to some other useful identities.
2.4.8 Lemma. If P, Q, R, and S are T-processors, then

(P ∘ Q) = (P ∘ Q:) ∘ :Q
(P ∘ Q) ∘ (R ∘ S) = (P ∘ (Q ∘ R):) ∘ (:(Q ∘ R) ∘ S)

PROOF. Application of Lemma 2.4.7 gives

(P ∘ Q) = P ∘ (Q: ∘ :Q) = (P ∘ Q:) ∘ :Q

(P ∘ Q) ∘ (R ∘ S) = P ∘ (Q ∘ (R ∘ S)) = P ∘ ((Q ∘ R) ∘ S)
= P ∘ (((Q ∘ R): ∘ :(Q ∘ R)) ∘ S) = P ∘ ((Q ∘ R): ∘ (:(Q ∘ R) ∘ S))
= (P ∘ (Q ∘ R):) ∘ (:(Q ∘ R) ∘ S)

where we have repeatedly used the associative law established in Corollary 2.2.3. ∎

2.4.9 Theorem. If P is a T-processor, then P, P:, and :P are pairwise isomorphic.

PROOF. We note 𝒜(:P) = ²𝒜P = {((a, b), b) | a(𝒜P)b}. Therefore, consider the relation

h = {((a, b), ((a, b), b)) | a(𝒜P)b}

Clearly, h: 𝒜P → 𝒜(:P) (1:1 onto) and for all uy ∈ P,

uy ∘ h = (uy)y

Hence, h is an isomorphism from P to :P. Similarly, the relation

g = {((a, (a, b)), (a, b)) | a(𝒜P)b}

is an isomorphism from P: to P. Finally, g ∘ h is an isomorphism from P: to :P. ∎
2.5 CLOSED LOOP PROCESSORS
Again, "feedback" is one of the most intriguing concepts in the entire systems area. In this section, we introduce the notion of the "closed loop" of a T-processor. This can be regarded as a direct attempt to formalize the concept of feedback in as general a way as possible while still retaining the flavor of the concept as it is employed in engineering. The closed loop of a T-processor is what that T-processor "looks like" with feedback introduced in the form of a direct connection from output to input. As we shall show, the closed loop operation and the feedback operation introduced in Section 2.4 are intimately related.
=
{UY,(U,V)Y EP:
KFVAHK.In general, # P C PI. Thus, # P is itself a T-processor. C (P’)’ and (#P)2 C n P2.Also, # P C Pl o I(P2),
\Ye see (#P)’ 1.e.,
u?/r#P~~(u3’)yEP~uy€P’&y€P2 f’
u-y E
P‘ & y y
E
I ( P ) 2 uy € P’ Z(p2) 0
2.5.2 Lemma. Let P and Q be T-processors with P¹ and Q¹ likewise T-processors. Then,

P ⊆ Q ⟹ #P ⊆ #Q

PROOF. Obvious. ∎

REMARK. The converse of Lemma 2.5.2 fails, and this is important.
Next, we see how the closed loop and feedback operations are related:

2.5.3 Theorem. If Q is a T-process, then #(:Q) = Q. If P is a T-processor with P¹ a T-processor, then :(#P) ⊆ P.

PROOF. We note (:Q)¹ is a T-processor, so #(:Q) is well defined. We have

#(:Q) = {uy | (uy)y ∈ :Q} = {uy | uy ∈ Q} = Q

Next, we see

(uy)y ∈ :(#P) ⟹ uy ∈ #P ⟹ (uy)y ∈ P

that is, :(#P) ⊆ P. ∎
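The identity #(:Q) = Q of Theorem 2.5.3 can also be checked computationally; the sketch below (our own encoding, continuing the finite-relation model) closes the loop by keeping exactly those pairs whose fed-back input component matches the output.

```python
# Closed loop #P = {uy | (uy)y in P} (Definition 2.5.1), for processors
# whose inputs are pointwise-paired time functions.

def pair(u, v):
    return tuple(zip(u, v))

def unpair(uv):
    return tuple(a for a, _ in uv), tuple(b for _, b in uv)

def feedback(P):      # :P = {(uy)y | uy in P}
    return {(pair(u, y), y) for (u, y) in P}

def closed_loop(P):
    out = set()
    for (uy, z) in P:
        u, y = unpair(uy)
        if y == z:          # the fed-back component agrees with the output
            out.add((u, y))
    return out

Q = {((0, 1), (1, 1)), ((1, 0), (0, 0))}

# Theorem 2.5.3: closing the loop around the feedback of Q recovers Q.
assert closed_loop(feedback(Q)) == Q
```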
REMARK. Theorem 2.5.3 interprets to say :(#P) is the least T-processor whose closed loop is the same as that of P, i.e., #(:(#P)) = #P. Thus, for a T-processor R, :R is the least “open loop” of R. :(#P) also interprets to be that part of P that can be “observed” or “measured” under closed loop conditions.
An interesting special case, namely where :(#P) = P, can be characterized as follows:
2.5.4 Theorem. Let P be a T-processor. If P¹ is a T-processor, then the following statements are equivalent:

(i) :(#P) = P.
(ii) P = :(P¹).
(iii) 𝒜P = ²𝒜P¹.
(iv) (uy)z ∈ P ⟹ y = z.
(v) P = :(P¹) & #P = P¹.
PROOF. (i) ⟺ (ii) is by Theorem 2.4.5.
(ii) ⟹ (iii). In the proof of Theorem 2.4.9, we showed 𝒜(:Q) = ²𝒜Q for any T-processor Q. Thus, here

𝒜P = 𝒜(:(P¹)) = ²𝒜P¹

(iii) ⟹ (iv). Given (iii),

(uy)z ∈ P ⟹ (∀t): t((uy)z) ∈ 𝒜P ⟹ (∀t): t((uy)z) ∈ ²𝒜P¹ ⟹ (∀t): ((tu, ty), tz) ∈ ²𝒜P¹ ⟹ (∀t): ty = tz ⟹ y = z
(iv) ⟹ (v). Given (iv), P = :(P¹) by Theorem 2.4.5. But then by Theorem 2.5.3,

#P = #(:(P¹)) = P¹

so (v) holds.
(v) ⟹ (i). We have

:(#P) = :(P¹) = P ∎
Another interesting special case is to characterize when #P = P¹. For this, we have:
2.5.5 Theorem. Let P be a T-processor. If P¹ is a T-processor, then the following statements are equivalent:

(i) #P = P¹.
(ii) uy ∈ P¹ ⟺ (uy)y ∈ P.
(iii) uy ∈ P¹ ⟹ (uy)y ∈ P.
(iv) :(P¹) ⊆ P.
(v) P¹ ⊆ #P.

PROOF. (i) ⟹ (ii). Given (i),

uy ∈ P¹ ⟺ uy ∈ #P ⟺ (uy)y ∈ P

(ii) ⟹ (iii) is trivial.
(iii) ⟹ (iv):

(uy)z ∈ :(P¹) ⟹ uy ∈ P¹ & y = z ⟹ (uy)y ∈ P & y = z ⟹ (uy)z ∈ P

(iv) ⟹ (v). P¹ = #(:(P¹)) by Theorem 2.5.3. Using Lemma 2.5.2 then, if :(P¹) ⊆ P, we see P¹ ⊆ #P.
(v) ⟹ (i). In general, #P ⊆ P¹. ∎

We have a number of identities involving the closed loop operation:
2.5.6 Lemma. If P, Q, and R are T-processors and R¹ is a T-processor, then

#((P//Q) ∘ R) = P ∘ #((I(P²)//Q) ∘ R)

PROOF. We see

uy ∈ #((P//Q) ∘ R) ⟺ (uy)y ∈ (P//Q) ∘ R
⟺ (∃x)(∃z): (uy)(xz) ∈ P//Q & (xz)y ∈ R
⟺ (∃x)(∃z): ux ∈ P & yz ∈ Q & (xz)y ∈ R
⟺ (∃x)(∃z): ux ∈ P & x ∈ P² & yz ∈ Q & (xz)y ∈ R
⟺ (∃x)(∃z): ux ∈ P & xx ∈ I(P²) & yz ∈ Q & (xz)y ∈ R
⟺ (∃x)(∃z): ux ∈ P & (xy)(xz) ∈ I(P²)//Q & (xz)y ∈ R
⟺ (∃x): ux ∈ P & (xy)y ∈ (I(P²)//Q) ∘ R
⟺ (∃x): ux ∈ P & xy ∈ #((I(P²)//Q) ∘ R)
⟺ uy ∈ P ∘ #((I(P²)//Q) ∘ R) ∎
2.5.7 Lemma. If P and Q are T-processors with Q¹ a T-processor, then

#(P//Q) = P ∘ (#Q):

PROOF.

u(yz) ∈ #(P//Q) ⟺ (u(yz))(yz) ∈ P//Q
⟺ uy ∈ P & (yz)z ∈ Q
⟺ uy ∈ P & yz ∈ #Q
⟺ uy ∈ P & y(yz) ∈ (#Q):
⟺ (∃x): ux ∈ P & x(yz) ∈ (#Q):
⟺ u(yz) ∈ P ∘ (#Q): ∎
2.5.8 Lemma. If P and Q are T-processors with Q¹ a T-processor, then

P ∘ #Q = #(P//Q) ∘ :(#Q)

PROOF. Combining Lemma 2.5.7 and the first condition of Lemma 2.4.8,

(P ∘ #Q) = (P ∘ (#Q):) ∘ :(#Q) = #(P//Q) ∘ :(#Q) ∎
Exercises

2-1. If P is a T-processor and Q is a T-process, prove

(P ∘ I(Q))* = ((P*)⁻¹/Q)⁻¹

2-2. Prove for any T-processors P and Q that P//Q and PQ are isomorphic.
2-3. Prove or give a counterexample: For any T-processors P and Q, P//Q is uncoupled iff both P and Q are uncoupled.
2-4. Prove for any T-processors P and Q,

P//Q = (P//I(Q¹)) ∘ (I(P²)//Q)
2-5. Prove for any T-processor P,

#((P:)⁻¹ ∘ P) = P
2-6. Let P be a T-processor with both P¹ and P² T-processors, and let

U = {uy | (∃v)(∃z): (uv)(yz) ∈ P}
V = {vz | (∃u)(∃y): (uv)(yz) ∈ P}

Prove that if U is functional and V is uncoupled, then P² is uncoupled.
2-7. Prove for any T-processors P and Q,

P ∘ Q: = {u(yz) | uy ∈ P & yz ∈ Q}

2-8. Give necessary and sufficient conditions on a T-processor P that there exist T-processors Q and R such that P = Q ∘ R:, where Q ∘ R: is proper.
2-9. Let P be a T-processor with P¹ a T-processor. Prove

(P¹): ∘ P = {uy | (∃z): (uz)y ∈ P}
2-10. Let P be a T-processor with P¹ a T-processor. Prove that if (P¹): ∘ P is functional, then #P is functional.
2-11. Prove if P is a T-processor with P¹ a T-processor that

#P = ((P¹): ∘ P:) ∘ (:(:(P¹)))

and hence that #P may always be represented as an (often improper) series interconnection.
2-12. (Feedback compensation problem.) If P and Q are T-processors, prove that the following statements are equivalent:
(i) There exists a T-processor R with R¹ a T-processor such that P = #(R ∘ Q), where R² ⊆ Q¹ and R¹ = P.
(ii) P² ⊆ Q².
(iii) P = #((:P ∘ Q⁻¹) ∘ Q).

2-13. Develop and prove a theorem analogous to that of Exercise 2-12 for series compensation of T-processors, i.e., for the case of P = R ∘ Q.
2-14. In automata theory, the following operation on T-processors is defined and called "parallel interconnection":

P$Q = {u(yz) | uy ∈ P & uz ∈ Q}

Has this operation been subsumed in our theory of interconnections?

2-15. Work out an elementary theory of the interconnection given in Exercise 2-14.
Time-Evolution

3.1 INTRODUCTION
In this chapter, we consider the notion of the "evolution" of a T-process in time. This concept is not one that is commonly dealt with in systems theory except perhaps informally. What we mean by the evolution of a process in time is "how that process appears (as a process) at various instants of time." But does not a process always appear the same at various instants of time? This is a very subjective question, and whether it does or not depends very much on what we take time to be. This is the basic point of Wiener [31] who, in his discussion of time, eloquently argues for the existence of processes with nontrivial time-evolution. Given a T-process P, the element 0 ∈ T interprets to be the "starting time" of P. (We have previously proved that 0 is the least element of T in the sense of the ordering <.) Now how does P appear at the starting time t? Has P changed? To find out, it is apparently necessary to translate the T-time functions in P to the left, so that the images of these functions at each time t' (t' = t ∨ t < t') are reassigned to t' − t instead of t'. In other words, we have to left-translate P itself so that the time base of instants greater than or equal to t coincides with all of T. In this way, how P appears (as a process) at various instants of time can be compared directly. Moreover, in this way, the time-evolution of P can be formalized.
Again, how this all works out depends on what we take time to be. In Section 1.4, we postulated a time set to be an ordered commutative monoid. This structure plays essentially no role in the formalization of interconnections of processes in Chapter 2; nor has it played a significant role in the considerations of Chapter 1. The properties we have postulated on a time set are precisely those we need to be able to deal with the concept of time-evolution in an elegant manner. The emphasis here should be on the word "elegant," for it is possible to proceed with less structure on the time set (see, for example, the article by Windeknecht [35]). In this, it is the algebraic structure of the time set and not the ordering structure which does most of the work. Actually, the ordering properties of the time set play a more important role in Chapter 4 than here. The monoid structure permits us to translate time functions and, as we have indicated, this is crucial. Because of the monoid structure of the time set, time-evolution of processes is naturally characterized in terms of a certain algebraic structure, viz., the "action" of a monoid on a set. This is an abstract version of the concept of a "transformation monoid." As such, one of the principal concepts of actions of a monoid on a set is the concept of "invariant" subsets of the given set. When a time set acts on a set, the invariant subsets of the given set are naturally "time invariant." This, of course, is the case in time-evolution. Thus, we are led to formalize in general what is meant by a "time-invariance" property of processes. Such properties are important in all of systems theory and become a major theme in our later considerations. Our formulation of time-evolution leads us quite naturally to consider some types of processes that are particular, in the sense that their time-evolution is special.
Three such notions are investigated in detail in the chapter, namely, contracting, expanding, and stationary processes. Each of these process types is defined by postulating a particularly simple or (in the case of stationarity) a trivial time-evolution. It is proved that contraction, expansion, and stationarity are "time-invariance" properties of processes in the sense above referred to, i.e., in the sense of actions over a time set. Perhaps one more remark about the use of ordered commutative monoids as time sets is in order. In some parts of systems theory,
it is more common to take the time set to be an ordered group (see Jacobson [45]) rather than an ordered monoid. For example, in control theory and the theory of dynamical systems, the set of real numbers is ordinarily used. Ordered groups and ordered monoids are mutually exclusive (see Exercise 1-2). Hence, one has to make a choice between the two. In automata theory, there is no question but that the time set is an ordered monoid and not an ordered group, i.e., ω is (implicitly) employed. Now, if we take the time set to be an ordered monoid, we can address "positive" control systems and "positive" dynamical systems easily, i.e., the restriction of the usual concepts to the nonnegative reals. (In dynamical systems, the "positive" aspect of the dynamical system is developed routinely anyway.) On the other hand, if we choose the ordered group, then we have to deal with sequential machines (somehow!) in terms of an extension to the whole set of integers. This is an unacceptable alternative. Finally, it would seem that the monoid is more attractive in another respect. By parametrizing a process with an ordered group, apparently both the past and future of that process have been represented. In taking an ordered monoid, only the future of the process has been represented. As a purely pragmatic position, it would seem the province of a system theorist to analyze, discuss, or indeed design the future of the process at hand, everything else being equal. What we shall attempt to demonstrate here is that everything else is indeed equal, i.e., there is no theoretical loss incurred in dropping the past of a process from our conception of it.
3.2 ACTIONS OF A MONOID

In this section, we develop some mathematical background for our considerations. What we require is the notion of an action of a monoid and the concept of invariant sets for such actions. The discussion represents a mild generalization to the case of monoids of some standard algebraic ideas for the case of groups (see MacLane and Birkhoff [48] for the latter). Throughout this section, let M be an arbitrary monoid with function + and identity 0.
3.2.1 Definition. An action of M is any function

A: M × ℛA → ℛA

such that†

(0, b)A = b
(m + m', b)A = (m', (m, b)A)A

for all b ∈ ℛA and all m, m' ∈ M.
For example, the map A : Icf x M + M with (m,m‘)A
=
m’
is an action of M on itself. ‘rhat is, 9 (0, m’).4 (m
=
- m‘,m”)d
~
=
m’ 1 0 m“
~
A = M and
m‘
(m { m‘)
1
+m
=
(m” -1- m)
+ m‘ = (m’, m“ + m)A
(m’, ( m , m”)-4)A
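The translation action in the remark can be spot-checked mechanically. The sketch below is our own, taking M to be the additive monoid of nonnegative integers and checking the two axioms on a small range of arguments.

```python
# The action of M on itself from the remark: (m, m')A = m' + m, with M
# the additive monoid of nonnegative integers.

def A(m, b):
    return b + m

# Axioms of Definition 3.2.1, checked on a small range of arguments.
for b in range(4):
    assert A(0, b) == b                            # (0, b)A = b
    for m in range(4):
        for m2 in range(4):
            # (m + m', b)A = (m', (m, b)A)A
            assert A(m + m2, b) == A(m2, A(m, b))
```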
3.2.2 Definition. Let A be an action of M. If m ∈ M, then the m-transformation of A is the relation

A_m = {(b, (m, b)A) | b ∈ ℛA}

If b ∈ ℛA, then the b-transition of A is the relation

A_b = {(m, (m, b)A) | m ∈ M}

Also, we define

𝒯A = {A_m | m ∈ M}
ℱA = {A_b | b ∈ ℛA}

† Actually, ours is neither the "left" nor "right" action of MacLane and Birkhoff but rather a cross between the two. A left action is a map A: M × ℛA → ℛA with

(0, b)A = b
(m + m', b)A = (m, (m', b)A)A

and a right action is a map A: ℛA × M → ℛA with

(b, 0)A = b
(b, m + m')A = ((b, m)A, m')A
REMARK. In general, 𝒯A ⊆ (ℛA)^(ℛA) and ℱA ⊆ (ℛA)^M, i.e., for any m, A_m: ℛA → ℛA and for any b, A_b: M → ℛA. In particular,

bA_m = (m, b)A = mA_b
3.2.3 Theorem. If A is an action of M, then 𝒯A is a monoid under composition with identity 1_ℛA. Moreover, the function

h = {(m, A_m) | m ∈ M}

is a homomorphism on M onto 𝒯A.

PROOF. We note

bA_{m+m'} = (m + m', b)A = (m', (m, b)A)A = (m', bA_m)A = (bA_m)A_{m'} = b(A_m ∘ A_{m'})

that is, A_{m+m'} = A_m ∘ A_{m'}. This proves A_m ∘ A_{m'} ∈ 𝒯A. Thus, 𝒯A is a semigroup. Now,

bA_0 = (0, b)A = b

so A_0 = 1_ℛA. Thus, 1_ℛA ∈ 𝒯A and it follows that 𝒯A is a monoid. Finally, h is a homomorphism, i.e.,

0h = A_0 = 1_ℛA
(m + m')h = A_{m+m'} = A_m ∘ A_{m'} = mh ∘ m'h ∎

REMARK. 𝒯A is called the transformation monoid of A.
3.2.4 Theorem. Let A be an action of M. If M is a time set, then ℱA is an M-process.

PROOF. Again, ℱA ⊆ (ℛA)^M. ∎
3.2.5 Definition. If A is an action of M, then the transition relation of A is the set

A = {(b, b') | (∃m): b' = (m, b)A}

REMARK. In general, A = ∪𝒯A. Also,

bAb' ⟺ (∃m): b' = (m, b)A
3.2.6 Lemma. Let A be an action of M. Then the following statements are equivalent:

(i) bAb'.
(ii) b' ∈ ℛA_b.
(iii) ℛA_{b'} ⊆ ℛA_b.

PROOF. (i) ⟺ (ii). We have

bAb' ⟺ (∃m): b' = (m, b)A ⟺ (∃m): b' = mA_b ⟺ b' ∈ ℛA_b

(ii) ⟺ (iii). Assume ℛA_{b'} ⊆ ℛA_b. Now,

b' = (0, b')A = 0A_{b'} ∈ ℛA_{b'}

Thus, b' ∈ ℛA_b. Conversely, if b' ∈ ℛA_b, then for some m, b' = (m, b)A. Now, if b'' ∈ ℛA_{b'}, then for some m', b'' = (m', b')A. But,

b'' = (m', b')A = (m', (m, b)A)A = (m + m', b)A

so b'' ∈ ℛA_b. In other words, ℛA_{b'} ⊆ ℛA_b. ∎
In view of condition (iii) of Lemma 3.2.6, the following is apparent:

3.2.7 Corollary. If A is an action of M, then A is a preorder of ℛA, i.e., A ⊆ (ℛA)(ℛA) and

(i) bAb (reflexivity).
(ii) bAb' & b'Ab'' ⟹ bAb'' (transitivity).

In the remaining part of the section, we develop the elementary theory of invariant and minimal invariant sets for actions of a monoid. Several of the important definitions and proofs are due to L. R. Marino (private communication).
3.2.8 Definition. Let A be an action of M:

(i) A is strongly connected iff for every b, b' ∈ ℛA there exists some m ∈ M such that b' = (m, b)A.
(ii) An element b ∈ ℛA is initial iff for every b' ∈ ℛA there exists some m ∈ M such that (m, b)A = b'.
(iii) An element b ∈ ℛA is terminal iff for every b' ∈ ℛA there exists some m ∈ M such that (m, b')A = b.
(iv) An element b ∈ ℛA is recurrent iff for every m ∈ M there exists some m' ∈ M such that (m + m', b)A = b.
3.2.9 Lemma. Let A be an action of M:

(i) b ∈ ℛA is initial iff ℛA_b = ℛA.
(ii) b ∈ ℛA is recurrent iff b' ∈ ℛA_b ⟹ ℛA_{b'} = ℛA_b.
(iii) A is strongly connected iff for every b ∈ ℛA, ℛA_b = ℛA.

PROOF. (i) and (iii) are obvious. To see (ii), first we note that b is recurrent iff (b' ∈ ℛA_b ⟹ b ∈ ℛA_{b'}), i.e.,

(∀m)(∃m'): (m + m', b)A = b
⟺ (∀m)(∃m'): (m', (m, b)A)A = b
⟺ (∀m)(∃m'): ((m, b)A = b' ⟹ (m', b')A = b)
⟺ (b' ∈ ℛA_b ⟹ (∃m'): (m', b')A = b)
⟺ (b' ∈ ℛA_b ⟹ (∃m'): m'A_{b'} = b)
⟺ (b' ∈ ℛA_b ⟹ b ∈ ℛA_{b'})

Now, using Lemma 3.2.6,

(b' ∈ ℛA_b ⟹ b ∈ ℛA_{b'}) ⟺ (b' ∈ ℛA_b ⟹ ℛA_{b'} ⊆ ℛA_b & b ∈ ℛA_{b'})
⟺ (b' ∈ ℛA_b ⟹ ℛA_{b'} ⊆ ℛA_b & ℛA_b ⊆ ℛA_{b'})
⟺ (b' ∈ ℛA_b ⟹ ℛA_{b'} = ℛA_b)

Thus, (ii) holds. ∎
3.2.10 Theorem. If A is an action, then the following statements are equivalent:

(i) A is strongly connected.
(ii) Every b ∈ ℛA is initial.
(iii) Every b ∈ ℛA is initial and recurrent.
(iv) Every b ∈ ℛA is terminal.

PROOF. (i) ⟺ (ii) is immediate from Lemma 3.2.9.
(ii) ⟹ (iii). By Lemma 3.2.9, ℛA_{b''} = ℛA for all b'' ∈ ℛA. Thus,

b' ∈ ℛA_b ⟹ ℛA_{b'} = ℛA = ℛA_b

so b is recurrent by Lemma 3.2.9.
(iii) ⟹ (iv). Choose b' ∈ ℛA. Since b' is initial, ℛA_{b'} = ℛA and there exists some m ∈ M such that

(m, b')A = mA_{b'} = b

Thus, b is terminal.
(iv) ⟹ (i). Choose b, b' ∈ ℛA. Since b' is terminal, there exists some m such that (m, b)A = b'. Thus, A is strongly connected. ∎

3.2.11 Theorem. Let A be an action of M. If b ∈ ℛA is initial, then the following statements are equivalent:

(i) b is terminal.
(ii) A is strongly connected.
(iii) Every element of ℛA is recurrent.
(iv) b is recurrent.

PROOF. (i) ⟹ (ii). Let b be initial and choose b', b'' ∈ ℛA. If b is terminal, for some m, (m, b')A = b. Since b is initial, for some m', (m', b)A = b''. Thus,

(m + m', b')A = (m', (m, b')A)A = (m', b)A = b''

that is, A is strongly connected.
(ii) ⟹ (iii) is by Theorem 3.2.10.
(iii) ⟹ (iv) is trivial.
(iv) ⟹ (i). Choose b' ∈ ℛA. Since b is initial, ℛA_b = ℛA and b' ∈ ℛA_b. Since b is recurrent, b ∈ ℛA_{b'}. Hence, there exists some m such that (m, b')A = b. Thus, b is terminal. ∎
3.2.12 Definition. Let A be an action of M. A subset C ⊆ ℛA is invariant (with respect to A) iff

m ∈ M & b ∈ C ⟹ (m, b)A ∈ C

An invariant set C is minimal iff

D ⊆ C & D is invariant ⟹ D = ∅ ∨ D = C

where ∅ is the empty set.

REMARK. ℛA itself is invariant and ∅ is invariant. ∅ is a minimal invariant set.
We have the following theorem characterizing invariant sets.
3.2.13 Theorem. Let A be an action of M. If C ⊆ ℛA, then the following statements are equivalent:

(i) C is invariant.
(ii) A/(M × C) is an action of M.
(iii) b ∈ C ⟹ ℛA_b ⊆ C.

PROOF. For the reader. ∎
3.2.14 Lemma. Let A be an action of M. If C, D ⊆ ℛA are invariant, then both C ∪ D and C ∩ D are invariant. For any b ∈ ℛA, ℛA_b is invariant.

PROOF. Let C and D be invariant. We have

m ∈ M & b ∈ (C ∩ D) ⟹ m ∈ M & b ∈ C & b ∈ D
⟹ (m ∈ M & b ∈ C) & (m ∈ M & b ∈ D)
⟹ (m, b)A ∈ C & (m, b)A ∈ D ⟹ (m, b)A ∈ (C ∩ D)

Thus, (C ∩ D) is invariant. The proof for (C ∪ D) is similar. Finally, by Lemma 3.2.6, for any b ∈ ℛA,

b' ∈ ℛA_b ⟹ ℛA_{b'} ⊆ ℛA_b

Thus, by condition (iii) of Theorem 3.2.13, ℛA_b is invariant. ∎
REMARK. Contrary to the case of actions of a group, ℛA − C need not be invariant when C is invariant.
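Reachable sets and the invariance criterion of Theorem 3.2.13 can be computed outright for a finite action. The sketch below uses a toy action of our own devising, the monoid (N, +) acting on {0, ..., 5} by steps of two modulo six, chosen so the reachable sets are easy to inspect.

```python
# A finite action of the monoid (N, +) on {0, ..., 5}: (m, b)A = (b + 2m) % 6.

def A(m, b):
    return (b + 2 * m) % 6

def reach(b):
    """The reachable set R A_b, i.e. the range of the b-transition;
    for this action m = 0..5 already exhausts it."""
    return {A(m, b) for m in range(6)}

def invariant(C):
    """Theorem 3.2.13 (iii): C is invariant iff b in C implies the
    reachable set of b is contained in C."""
    return all(reach(b) <= C for b in C)

assert reach(1) == {1, 3, 5}
assert invariant({0, 2, 4}) and invariant({1, 3, 5})
assert not invariant({0, 1})
```

Here {0, 2, 4} and {1, 3, 5} are the minimal invariant sets, each the reachable set of one of its (recurrent) elements, as Theorem 3.2.16 predicts.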
We have a characterization theorem for minimal invariant sets analogous to that for invariant sets:
3.2.15 Theorem. Let A be an action of M. If C ⊆ ℛA, then the following statements are equivalent:

(i) C is invariant and minimal.
(ii) A/(M × C) is a strongly-connected action of M.
(iii) b ∈ C ⟹ ℛA_b = C.

PROOF. (i) ⟺ (iii). If C is invariant, then b ∈ C ⟹ ℛA_b ⊆ C by Theorem 3.2.13. But ℛA_b is nonempty (i.e., b ∈ ℛA_b) and, by Lemma 3.2.14, ℛA_b is invariant. Thus, if C is minimal and b ∈ C, then ℛA_b = C. Conversely, let D ⊆ C be nonempty and invariant. Since D is invariant, b ∈ D ⟹ ℛA_b ⊆ D. Since D is nonempty, there exists some b ∈ D. But b ∈ C then, and so ℛA_b = C. Thus,

D ⊆ C ⟹ C = ℛA_b ⊆ D

so D = C. Thus, C is invariant (by Theorem 3.2.13) and minimal.
(ii) ⟺ (iii) is obvious from Lemma 3.2.9. ∎

REMARK. Thus, invariant sets correspond 1:1 with subactions of a given action, and minimal invariant sets correspond 1:1 with strongly connected subactions.
We conclude by relating the notions of minimal invariant sets and recurrent elements.
3.2.16 Theorem. Let A be an action of M. A nonempty subset C ⊆ ℛA is a minimal invariant set iff for some recurrent element b ∈ ℛA, C = ℛA_b.

PROOF. Let C be invariant and minimal. Since C is nonempty, there exists some b ∈ C. ℛA_b = C by Theorem 3.2.15. Hence,

b' ∈ ℛA_b ⟹ b' ∈ C ⟹ (ℛA_{b'} = C = ℛA_b)

and, by Lemma 3.2.9, b is recurrent. Conversely, if b is recurrent and C = ℛA_b, then

b' ∈ C ⟹ b' ∈ ℛA_b ⟹ (ℛA_{b'} = ℛA_b = C)

Thus, by Theorem 3.2.15, C is minimal and invariant. ∎
3.3 TIME-EVOLUTION OF PROCESSES
In this section, we introduce the concept of "left translations" of T-time functions and T-processes. This notion leads readily to the concept of time-evolution.
3.3.1 Definition. If v is a T-time function and t ∈ T, then the t-left translation of v is the set

v_t = {(t', (t + t')v) | t' ∈ T}

3.3.2 Lemma. If v is a T-time function, then for all t, t' ∈ T:

(i) v_t is a T-time function.
(ii) t'v_t = (t + t')v.
(iii) ℛv_t ⊆ ℛv.
(iv) v_0 = v.
(v) v_{t+t'} = (v_t)_{t'}.
PROOF. (i) v_t is clearly a function and 𝒟v_t = T, i.e., v_t is a T-time function.
(ii) follows immediately from (i).
(iii) We note

ℛv_t = {(t + t')v | t' ∈ T} ⊆ {t''v | t'' ∈ T} = ℛv

(iv) For all t ∈ T,

tv_0 = (0 + t)v = tv

so v_0 = v.
(v) For all t, t', t'' ∈ T,

t''v_{t+t'} = ((t + t') + t'')v = (t + (t' + t''))v = (t' + t'')v_t = t''(v_t)_{t'}

that is, v_{t+t'} = (v_t)_{t'}. ∎
v_t interprets to be "what v looks like as a T-time function" starting at time t. In order to make v_t in fact a T-time function, the time base must be "shifted" so as to make tv (for example) correspond to the time instant 0. This is of course literally a left translation of v.
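For T the nonnegative integers, the left translation of Definition 3.3.1 is a one-liner. This sketch (our own encoding, modeling a T-time function as a Python function on ints) checks conditions (ii) and (v) of Lemma 3.3.2.

```python
# The t-left translation v_t: t' maps to (t + t')v (Definition 3.3.1),
# with T the additive monoid of nonnegative integers.

def translate(v, t):
    return lambda t2: v(t + t2)

v = lambda n: n * n       # an arbitrary example time function

v2 = translate(v, 2)
assert v2(3) == v(2 + 3)                          # (ii): t'v_t = (t + t')v

v2_3 = translate(translate(v, 2), 3)
v5 = translate(v, 5)
assert all(v2_3(n) == v5(n) for n in range(10))   # (v): v_{t+t'} = (v_t)_{t'}
```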
3.3.3 Definition. If P is a T-process and t ∈ T, then the t-left translation of P is the set

    P_t = {p_t | p ∈ P}

P_t interprets to be "what P looks like as a T-process" at the starting time t. The following lemma provides formal verification:

3.3.4 Lemma. If P is a T-process, then for all t ∈ T, P_t is a T-process.

PROOF. By Lemma 3.3.2, P_t is a set of T-time functions. ∎
3.3.5 Lemma. If P is a T-process, then for all t ∈ T, 𝒜P_t ⊆ 𝒜P.

PROOF. By Lemma 3.3.2, ℛp_t ⊆ ℛp. Thus,

    𝒜P_t = ∪{ℛp_t | p ∈ P} ⊆ ∪{ℛp | p ∈ P} = 𝒜P

that is, 𝒜P_t ⊆ 𝒜P. ∎
Since every subset of a relation is a relation, the following is immediate:

3.3.6 Corollary. If P is a T-processor, then for all t ∈ T, P_t is a T-processor.

The next lemma is crucial for all our subsequent results.
3.3.7 Lemma. If u and y are T-time functions, then for all t ∈ T, (uy)_t = u_t y_t.

PROOF. We have

    t'(uy)_t = (t + t')(uy) = ((t + t')u, (t + t')y) = (t'u_t, t'y_t) = t'(u_t y_t)

that is, (uy)_t = u_t y_t. ∎
3.3.8 Lemma. If P and Q are T-processes, then for all t ∈ T, (PQ)_t = P_t Q_t.

PROOF. Immediate from Lemma 3.3.7, since (pq)_t = p_t q_t for all p ∈ P and q ∈ Q. ∎

3.3.9 Lemma. If P is a T-processor, then for all t ∈ T,

    (P_t)^1 = (P^1)_t  and  (P_t)^2 = (P^2)_t

PROOF. We have

    (P_t)^1 = {v | (∃z): vz ∈ P_t} = {u_t | (∃y): uy ∈ P} = {u_t | u ∈ P^1} = (P^1)_t

The condition on (P_t)^2 is similar. ∎
Combining this with Lemma 3.3.5, we have:

3.3.10 Corollary. If P is a T-processor, then for all t ∈ T,

    𝒜(P_t)^1 ⊆ 𝒜P^1  and  𝒜(P_t)^2 ⊆ 𝒜P^2

3.3.11 Lemma. If P is a T-processor, then for all t ∈ T,

    (P_t)^{-1} = (P^{-1})_t

If Q is a T-process, then for all t ∈ T,

    (IQ)_t = I(Q_t)

PROOF. Using Lemma 3.3.7. ∎

3.3.12 Lemma. Let P and Q be T-processes. If P ⊆ Q, then for all t ∈ T, P_t ⊆ Q_t.

PROOF. Obvious. ∎
3.3.13 Lemma. If P is a T-process, then for all t, t' ∈ T,

    t'P_t = (t + t')P

PROOF.

    t'P_t = {t'p_t | p ∈ P} = {(t + t')p | p ∈ P} = (t + t')P ∎
3.3.14 Corollary. If P is a T-processor, then for all t, t' ∈ T,

    t'(P_t)^1 = (t + t')P^1  and  t'(P_t)^2 = (t + t')P^2

3.3.15 Lemma. If P is a T-process, then P_0 = P, and for all t, t' ∈ T,

    P_{t+t'} = (P_t)_{t'}

PROOF. By Lemma 3.3.2. ∎
We now formalize the concept of time-evolution of T-processes:
3.3.16 Definition. Let C be any set. The T-evolution in C is the relation

    ℰC = {((t, P), P_t) | t ∈ T & P ⊆ C^T}

REMARK. Recall that if A is a set, then 2^A = {B | B ⊆ A} is the power set of A.
3.3.17 Theorem. For any set C, ℰC is an action of T on 2^(C^T).

PROOF. Let K = 2^(C^T) and consider ℰC. Clearly, P ∈ K iff P is a T-process and 𝒜P ⊆ C. Moreover, 𝒟ℰC = T × K. Now by Lemma 3.3.5, 𝒜P_t ⊆ 𝒜P. Thus, P ∈ K ⇒ P_t ∈ K. In other words, ℛℰC ⊆ K. It follows that ℰC: T × K → K. Now using Lemma 3.3.15,

    (0, P)ℰC = P_0 = P

    (t + t', P)ℰC = P_{t+t'} = (P_t)_{t'} = (t', P_t)ℰC = (t', (t, P)ℰC)ℰC

Thus, ℰC is an action of T. ∎
In the light of Theorem 3.3.17, it is possible to formalize what is meant by a "time-invariant" class of T-processes and a "time-invariance" property of T-processes.
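The action axioms of Theorem 3.3.17 can be checked on a small finite model. In the sketch below (my own construction, not the book's), purely periodic ω-time functions of a fixed period n are coded as length-n tuples whose value at time t is the entry at t mod n; the t-left translation is then a left rotation, and the evolution acts on finite sets of such tuples.

```python
# Sketch: the T-evolution of Definition 3.3.16 acting on sets of purely
# periodic sequences coded as fixed-length tuples.

def shift(p, t):
    """t-left translation of the periodic sequence coded by tuple p."""
    t %= len(p)
    return p[t:] + p[:t]

def evolve(t, P):
    """(t, P) -> P_t, the evolution applied to the process P."""
    return frozenset(shift(p, t) for p in P)

P = frozenset({(0, 1, 1, 0), (1, 1, 0, 0), (0, 0, 0, 0)})

# Action axioms (Theorem 3.3.17): evolving by 0 is the identity, and
# evolving by t + t' equals evolving by t and then by t'.
assert evolve(0, P) == P
for t in range(6):
    for t2 in range(6):
        assert evolve(t + t2, P) == evolve(t2, evolve(t, P))
```

The finite coding is sound here because equal same-length tuples determine equal periodic functions, so set equality of codes is equality of processes.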
3.3.18 Definition. Let 𝒫 be the class of all T-processes. A subclass ℳ ⊆ 𝒫 is time invariant iff for any set C, the set

    K = ℳ ∩ 2^(C^T)

is an invariant set of ℰC. If ℱ is a formula depending on the (variable) T-process P, then ℱ is a time-invariance property of T-processes iff the class

    ℳ = {P | ℱ}

is time invariant.
3.3.19 Theorem. If ℳ is a class of T-processes, then the following statements are equivalent:

(i) ℳ is time invariant.
(ii) P ∈ ℳ ⇒ (∀t): P_t ∈ ℳ (t ∈ T).
(iii) P ∈ ℳ ⇔ (∀t): P_t ∈ ℳ (t ∈ T).

PROOF. (i) ⇒ (ii). Choose P ∈ ℳ and let C = 𝒜P and K = 2^(C^T). Then, P ∈ ℳ ∩ K and ℳ ∩ K is an invariant set of ℰC. We have

    t ∈ T & P ∈ (ℳ ∩ K) ⇒ (t, P)ℰC ∈ (ℳ ∩ K) ⇒ P_t ∈ (ℳ ∩ K) ⇒ P_t ∈ ℳ

that is, (ii). (ii) ⇒ (iii). P_0 = P and, hence, if for all t ∈ T, P_t ∈ ℳ, then P ∈ ℳ. (iii) ⇒ (i). Choose a set C and let K = 2^(C^T). If P ∈ K, then for all t ∈ T, P_t ∈ K. Thus, if P ∈ ℳ ∩ K, then for all t ∈ T, P_t ∈ ℳ ∩ K. Moreover then,

    t ∈ T & P ∈ (ℳ ∩ K) ⇒ P_t ∈ (ℳ ∩ K) ⇒ (t, P)ℰC ∈ (ℳ ∩ K)

that is, ℳ ∩ K is an invariant set of ℰC. Finally, since C was arbitrary, ℳ is a time-invariant class of T-processes. ∎
As an example of a time-invariant class of T-processes, we see by Corollary 3.3.6:

3.3.20 Corollary. The class of T-processors is time invariant.
In our later considerations, we shall encounter many other classes of T-processes (in particular, subclasses of the T-processors) which are time invariant.
3.4 THE TRANSLATION CLOSURE OPERATION

Given the concept of left translations of T-time functions and T-processes, it is natural to consider the set of all left translations of a given T-process. This leads to a closure operation on the class of all T-processes which is of considerable importance.
3.4.1 Definition. If P is a T-process, then the translation closure of P is the set

    P̄ = {p_t | p ∈ P & t ∈ T}

REMARK. P̄ is itself a T-process and P̄ = ∪{P_t | t ∈ T}.
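For eventually periodic discrete-time signals the translation closure is a finite set, so the closure properties can be verified directly. In this sketch (my own coding, not from the text), purely periodic sequences are fixed-length tuples and the translation closure of a set is the set of all rotations of its members.

```python
# Sketch: translation closure of a process of purely periodic sequences.

def shift(p):
    """1-left translation (left rotation) of a periodic sequence."""
    return p[1:] + p[:1]

def closure(P):
    """All left translations of all members of P (the translation closure)."""
    out = set()
    for p in P:
        q = p
        for _ in range(len(p)):   # every translation is one of the rotations
            out.add(q)
            q = shift(q)
    return frozenset(out)

P = frozenset({(0, 1, 2), (3, 3, 3)})
Q = frozenset({(5, 6, 7)})
Pbar = closure(P)

assert P <= Pbar                                    # P is contained in its closure
assert closure(Pbar) == Pbar                        # the closure is idempotent
assert closure(P | Q) == closure(P) | closure(Q)    # closure distributes over union
```

These three checks are exactly the Kuratowski-style closure properties stated for the operation in this section.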
3.4.2 Theorem.† If P and Q are T-processes, then

(i) P ⊆ P̄;
(ii) (P̄)¯ = P̄;
(iii) (P ∪ Q)¯ = P̄ ∪ Q̄.

PROOF. (i) We have

    P = P_0 ⊆ ∪{P_t | t ∈ T} = P̄

(ii)

    (P̄)¯ = {q_{t'} | q ∈ P̄ & t' ∈ T}
         = {(p_t)_{t'} | p ∈ P & t, t' ∈ T}
         = {p_{t+t'} | p ∈ P & t, t' ∈ T}
         = {p_{t''} | p ∈ P & t'' ∈ T} = P̄

† Given a set of T-processes 𝒜, if we define 𝒜̄ = {P̄ | P ∈ 𝒜}, properties analogous to those of Theorem 3.4.2 hold. Thus, ¯ is a Kuratowski closure operator on sets of T-processes. Moreover then, the operation is a fairly natural way to induce a topology on the class of all T-processes. This topology is of some interest but has few additional properties.
(iii)

    (P ∪ Q)¯ = {p_t | p ∈ (P ∪ Q) & t ∈ T}
             = {p_t | (p ∈ P ∨ p ∈ Q) & t ∈ T}
             = {p_t | (p ∈ P & t ∈ T) ∨ (p ∈ Q & t ∈ T)}
             = {p_t | p ∈ P & t ∈ T} ∪ {p_t | p ∈ Q & t ∈ T} = P̄ ∪ Q̄ ∎
3.4.3 Lemma. If P is a T-process, then

    𝒜P̄ = 𝒜P  and  0P̄ = 𝒜P

PROOF.

    𝒜P̄ = {t'q | q ∈ P̄ & t' ∈ T}
        = {t'p_t | p ∈ P & t, t' ∈ T}
        = {(t + t')p | p ∈ P & t, t' ∈ T}
        = {t''p | p ∈ P & t'' ∈ T} = 𝒜P

so 𝒜P̄ = 𝒜P. Also,

    0P̄ = {0q | q ∈ P̄}
        = {0p_t | p ∈ P & t ∈ T}
        = {(t + 0)p | p ∈ P & t ∈ T}
        = {tp | p ∈ P & t ∈ T} = 𝒜P ∎

3.4.4 Corollary. If P is a T-processor, then P̄ is a T-processor.
3.4.5 Lemma. If P is a T-processor, then P^1 ⊆ (P̄)^1 and P^2 ⊆ (P̄)^2. Moreover,

    (P^1)¯ = (P̄)^1  and  (P^2)¯ = (P̄)^2

PROOF. The first two conditions follow from Lemma 1.6.7, since P ⊆ P̄. Next,

    (P^1)¯ = {u_t | u ∈ P^1 & t ∈ T} = {u_t | (∃y): uy ∈ P & t ∈ T} = {v | (∃z): vz ∈ P̄} = (P̄)^1

and similarly, (P^2)¯ = (P̄)^2. ∎
3.4.6 Lemma. If P is a T-processor, then

    0(P̄)^1 = 𝒜P^1 = 𝒜(P̄)^1  and  0(P̄)^2 = 𝒜P^2 = 𝒜(P̄)^2

PROOF. Use Lemmas 3.4.3 and 3.4.5. ∎
3.4.7 Lemma. If P and Q are T-processes, then P ⊆ Q implies P̄ ⊆ Q̄.

PROOF. Obvious. ∎

3.4.8 Lemma. If P is a T-processor, then

    (P̄)^{-1} = (P^{-1})¯

If Q is a T-process, then

    (IQ)¯ = I(Q̄)

PROOF. Using Lemma 3.3.7,

    (P̄)^{-1} = {y_t u_t | uy ∈ P & t ∈ T} = {(yu)_t | yu ∈ P^{-1} & t ∈ T} = (P^{-1})¯

and similarly for IQ. ∎
3.4.9 Lemma. If P is a T-process, then for all t ∈ T,

    (P_t)¯ = (P̄)_t

PROOF. We use the fact that a time set is commutative here:

    (P_t)¯ = {(p_t)_{t'} | p ∈ P & t' ∈ T}
           = {p_{t+t'} | p ∈ P & t' ∈ T}
           = {p_{t'+t} | p ∈ P & t' ∈ T}
           = {(p_{t'})_t | p ∈ P & t' ∈ T}
           = {q_t | q ∈ P̄} = (P̄)_t ∎

Finally, we have:
3.4.10 Lemma. If P is a T-process, then for all t ∈ T,

    tP̄ = 𝒜P_t

PROOF.

    tP̄ = {tq | q ∈ P̄}
        = {tp_{t'} | p ∈ P & t' ∈ T}
        = {(t' + t)p | p ∈ P & t' ∈ T}
        = {(t + t')p | p ∈ P & t' ∈ T}
        = {t'p_t | p ∈ P & t' ∈ T}
        = {t'q | q ∈ P_t & t' ∈ T} = 𝒜P_t ∎
We are now in a position to prove the first of a number of similar theorems about the time-evolution of classifications of T-processes. One consequence of such theorems is to identify a time-invariant class of T-processes.

3.4.11 Lemma. A T-time function v is constant iff, for all t ∈ T, v_t = v.

PROOF. v is constant iff for some a, tv = a for all t ∈ T. Now, if tv = a for all t,

    t'v_t = (t + t')v = a = t'v

that is, v_t = v. Thus, if v is constant, then for all t ∈ T, v_t = v. Conversely, if v_t = v for all t, then

    tv = (t + 0)v = 0v_t = 0v

so, if 0v = a, then tv = a for all t, i.e., v is constant. ∎
3.4.12 Theorem. If P is a T-processor, then the following statements are equivalent:

(i) P is free.
(ii) P̄ is free.
(iii) For all t ∈ T, P_t is free.

PROOF. (i) ⇒ (ii). P is free iff 𝒜P^1 has one and only one element. Now, by Lemma 3.4.6, 𝒜P^1 = 𝒜(P̄)^1. Thus, if P is free, P̄ is free. (ii) ⇒ (iii). If P̄ is free, for some a, 𝒜(P̄)^1 = {a}. Thus, for any t ∈ T,

    t(P̄)^1 = {a}

Now using Lemmas 3.3.9, 3.4.10, and 3.4.5,

    𝒜(P_t)^1 = 𝒜(P^1)_t = t(P^1)¯ = t(P̄)^1 = {a}

and, by definition, P_t is free. (iii) ⇒ (i). If P_t is free for all t ∈ T, then P_0 is free. But P_0 = P. ∎

3.4.13 Corollary. The class of free T-processors is time invariant.
3.4.14 Theorem. The class of uncoupled T-processors is time invariant.

PROOF. By Theorem 1.7.2, P is uncoupled iff P = QR for some T-processes Q and R. But, by Lemma 3.3.8, then

    P_t = (QR)_t = Q_t R_t

so P_t is uncoupled. ∎

REMARK. In general, it is not true that (QR)¯ = Q̄R̄ for arbitrary T-processes. Therefore, P̄ need not be uncoupled when P is an uncoupled T-processor.

The reader can readily show:
3.4.15 Theorem. The class of multivariable T-processors is time invariant.

We have one more very important result to obtain here, namely, to consider the time-evolution of image T-processes.
3.4.16 Theorem. Let P and Q be T-processes. If Q is an image of P, then Q̄ is an image of P̄ and, for all t ∈ T, Q_t is an image of P_t.

PROOF. Let h: 𝒜P → 𝒜Q be a homomorphism. Then,

    Q = {p ∘ h | p ∈ P}

By Lemma 3.4.3, h: 𝒜P̄ → 𝒜Q̄. Now, for all p ∈ P and all t ∈ T,

    t'(p_t ∘ h) = (t'p_t)h = ((t + t')p)h = (t + t')(p ∘ h) = t'(p ∘ h)_t

that is, p_t ∘ h = (p ∘ h)_t. Therefore,

    Q̄ = {q_t | q ∈ Q & t ∈ T} = {(p ∘ h)_t | p ∈ P & t ∈ T} = {p_t ∘ h | p ∈ P & t ∈ T} = {v ∘ h | v ∈ P̄}

which proves Q̄ is an image of P̄. Next, choose t ∈ T. Clearly, 𝒜P_t ⊆ 𝒜P. Hence, consider g = h/𝒜P_t. Since t'(p_t ∘ h) = t'(p ∘ h)_t, where p ∘ h ∈ Q, we see g: 𝒜P_t → 𝒜Q_t. Moreover,

    Q_t = {q_t | q ∈ Q} = {(p ∘ h)_t | p ∈ P} = {p_t ∘ h | p ∈ P} = {v ∘ g | v ∈ P_t}

Hence Q_t is an image of P_t. ∎
3.4.17 Corollary. Let P and Q be T-processes. If h is a homomorphism from P to Q, then h is a homomorphism from P̄ to Q̄ and, for all t ∈ T, h/𝒜P_t is a homomorphism from P_t to Q_t.

3.5 TIME-EVOLUTION OF INTERCONNECTIONS
In this section, we investigate how the various interconnections of T-processors which were introduced in Chapter 2 evolve in time. Somewhat surprisingly, we find that, in some cases, the left translations of interconnections are proper subsets of, rather than equal to, the interconnections of the corresponding left translations. Thus, we develop necessary and sufficient conditions for equality.
3.5.1 Theorem. If P and Q are T-processors, then (P ∘ Q)¯ ⊆ P̄ ∘ Q̄ and, for all t ∈ T, (P ∘ Q)_t ⊆ P_t ∘ Q_t. Also, (P ∘ Q)_t = P_t ∘ Q_t iff

    uv ∈ P & xz ∈ Q & v_t = x_t ⇒ (∃p)(∃y)(∃q): py ∈ P & yq ∈ Q & p_t = u_t & q_t = z_t

PROOF. Using Lemma 3.3.7,

    (P ∘ Q)_t = {u_t z_t | (∃y): uy ∈ P & yz ∈ Q}

    P_t ∘ Q_t = {u_t z_t | (∃v)(∃x): uv ∈ P & xz ∈ Q & v_t = x_t}

so (P ∘ Q)_t ⊆ P_t ∘ Q_t, and (P ∘ Q)_t = P_t ∘ Q_t iff the given condition holds. Also, we have

    (P ∘ Q)¯ = {u_t z_t | (∃y): uy ∈ P & yz ∈ Q & t ∈ T}

    P̄ ∘ Q̄ = {u_t z_{t'} | (∃v)(∃x): uv ∈ P & xz ∈ Q & t, t' ∈ T & v_t = x_{t'}}

Comparing the two sets, (P ∘ Q)¯ ⊆ P̄ ∘ Q̄. ∎
3.5.3 Theorem. If P is a T-processor with P^1 a T-processor, then (#P)¯ ⊆ #(P̄) and, for all t ∈ T, (#P)_t ⊆ #(P_t). Also, (#P)_t = #(P_t) iff

    (uy)x ∈ P & y_t = x_t ⇒ (∃v)(∃z): (vz)z ∈ P & v_t = u_t & z_t = y_t

PROOF. We have

    (#P)_t = {u_t y_t | (uy)y ∈ P}

    #(P_t) = {u_t y_t | (∃x): (uy)x ∈ P & y_t = x_t}

Thus, (#P)_t ⊆ #(P_t), and (#P)_t = #(P_t) iff the given condition holds. Now,

    (#P)¯ = {u_t y_t | (uy)y ∈ P & t ∈ T}

    #(P̄) = {u_t y_t | (∃x): (uy)x ∈ P & t ∈ T & y_t = x_t}

Thus, clearly, (#P)¯ ⊆ #(P̄). ∎
3.5.4 Theorem. If P is a T-processor, then

    (P:)¯ = (P̄):  and  (:P)¯ = :(P̄)

Moreover, for all t ∈ T, (P:)_t = (P_t): and (:P)_t = :(P_t).

PROOF.

    (P:)_t = {(u(uy))_t | uy ∈ P} = {u_t(uy)_t | uy ∈ P} = {u_t(u_t y_t) | uy ∈ P} = {v(vz) | vz ∈ P_t} = (P_t):

    (P:)¯ = {(u(uy))_t | uy ∈ P & t ∈ T} = {u_t(u_t y_t) | uy ∈ P & t ∈ T} = {v(vz) | vz ∈ P̄} = (P̄):

Similarly, (:P)_t = :(P_t) and (:P)¯ = :(P̄). ∎
Thus, the various interconnections of T-processors appear to evolve in time.

3.6 CONTRACTING PROCESSES

In the next two sections, we examine three special kinds of T-processes. These types, contracting, expanding, and stationary T-processes, are distinguished by a special property of time-evolution. As we shall see, each of these defining properties is a time-invariance property of T-processes.
3.6.1 Definition. Let P be a T-process. P is contracting iff for all t ∈ T, P_t ⊆ P.

REMARK. For example, if P contains only constant T-time functions, then

    P_t = {p_t | p ∈ P} = {p | p ∈ P} = P

and, in particular, P_t ⊆ P, so P is contracting.

The property of contraction on a T-process has a simple intuitive interpretation. It says the process "has no new elements at any later time." We have the following theorem which characterizes contracting T-processes.

3.6.2 Theorem. If P is a T-process, then the following statements are equivalent:
(i) P is contracting.
(ii) P̄ = P.
(iii) p ∈ P & t ∈ T ⇒ p_t ∈ P.
(iv) For all t, t' ∈ T, P_{t+t'} ⊆ P_t.
(v) For all t ∈ T, P_t is contracting.
(vi) For all t, t' ∈ T, t < t' ⇒ P_{t'} ⊆ P_t.

PROOF. (i) ⇒ (ii). Clearly, P ⊆ P̄. We are given P_t ⊆ P for all t; hence,

    P̄ = ∪{P_t | t ∈ T} ⊆ P

that is, P̄ = P. (ii) ⇒ (iii). P̄ = {p_t | p ∈ P & t ∈ T}, so if P̄ = P, then

    p ∈ P & t ∈ T ⇒ p_t ∈ P̄ ⇒ p_t ∈ P

(iii) ⇒ (iv). We use the commutative law on T here. Fix t, t' ∈ T and choose q ∈ P_{t+t'}. For some p ∈ P,

    q = p_{t+t'} = p_{t'+t} = (p_{t'})_t

But p ∈ P & t' ∈ T ⇒ p_{t'} ∈ P. Thus, q ∈ P_t. (iv) ⇒ (v). Choose t ∈ T. For all t' ∈ T,

    (P_t)_{t'} = P_{t+t'} ⊆ P_t

that is, P_t is contracting. (v) ⇒ (vi). Let t < t'. By hypothesis, there exists some t'' ∈ T such that t'' ≠ 0 and t' = t + t''. But then

    P_{t'} = P_{t+t''} = (P_t)_{t''} ⊆ P_t    (since P_t is contracting)

that is, (vi). (vi) ⇒ (i). For all t ∈ T, either t = 0 or 0 < t, and

    0 < t ⇒ P_t ⊆ P_0

But P_0 = P, and so, for all t ∈ T, P_t ⊆ P. ∎
In Theorem 3.6.2, (ii) shows that the contracting T-processes are precisely those which are equal to their translation closure; (iii) shows they are those which are closed under left translation of their elements; and (iv)-(vi) give conditions on their time-evolution. Condition (v) yields:

3.6.3 Corollary. The class of contracting T-processes is time invariant.
We consider next the special case of contracting T-processors. We note the intersection of two time-invariant classes of T-processes is, in fact, time invariant. Hence:

3.6.4 Theorem. The class of contracting T-processors is time invariant.

3.6.5 Lemma. Let P be a T-processor. If P is contracting, then both P^1 and P^2 are contracting.

PROOF. If P is contracting, P̄ = P and by Lemma 3.4.5,

    (P^1)¯ = (P̄)^1 = P^1

that is, P^1 is contracting. Similarly, P^2 is contracting. ∎
3.6.6 Lemma. Let P be a T-process. If P is contracting, then

(i) 0P = 𝒜P.
(ii) For all t ∈ T, tP ⊆ 0P.
(iii) For all t, t' ∈ T, t < t' ⇒ t'P ⊆ tP.
(iv) For all t, t' ∈ T, (t + t')P ⊆ tP.

PROOF. If P is contracting, P̄ = P. Hence, by Lemma 3.4.3,

    0P = 0P̄ = 𝒜P̄ = 𝒜P

Now, using Lemmas 1.5.10 and 3.3.13, if P is contracting,

    t < t' ⇒ P_{t'} ⊆ P_t ⇒ 0P_{t'} ⊆ 0P_t ⇒ (t' + 0)P ⊆ (t + 0)P ⇒ t'P ⊆ tP

that is, (iii). But (ii) follows from (iii) since either t = 0 or 0 < t. Similarly, (iv) follows from (iii) since t < t' iff there exists some t'' ∈ T (t'' ≠ 0) such that t' = t + t''. ∎
REMARK. Thus, for a contracting T-process, the attainable space also is "contracting" in time. In particular, every element of the attainable space is attainable at time 0.
Combining Lemmas 3.6.5 and 3.6.6, the following is immediate:

3.6.7 Theorem. Let P be a T-processor. If P is contracting, then

(i) 0P^1 = 𝒜P^1 and 0P^2 = 𝒜P^2.
(ii) For all t ∈ T, tP^1 ⊆ 0P^1 and tP^2 ⊆ 0P^2.
(iii) For all t, t' ∈ T, t < t' ⇒ t'P^1 ⊆ tP^1 & t'P^2 ⊆ tP^2.
(iv) For all t, t' ∈ T, (t + t')P^1 ⊆ tP^1 and (t + t')P^2 ⊆ tP^2.

REMARK. In words, for a contracting T-processor, the attainable spaces of both input set and output set also are "contracting" in time.
3.6.8 Theorem. Let P and Q be T-processes. If Q is an image of P and P is contracting, then Q is contracting.

PROOF. We are given P̄ = P. By Corollary 3.4.17, any homomorphism from P to Q is also a homomorphism from P̄ to Q̄. Therefore, if h is a homomorphism,

    Q̄ = {p ∘ h | p ∈ P̄} = {p ∘ h | p ∈ P} = Q

and Q is contracting. ∎

Theorem 3.6.8 leads immediately to several other results:
3.6.9 Theorem. If P is a T-processor, then P is contracting iff P^{-1} is contracting.

PROOF. By Theorem 1.8.8, P and P^{-1} are isomorphic, hence, images of each other. The theorem is then immediate from Theorem 3.6.8. ∎

3.6.10 Theorem. If P is a T-process, then P is contracting iff IP is contracting.

PROOF. Similar to Theorem 3.6.9. ∎
3.6.11 Theorem. Let P be a T-processor. If P is free, then P is contracting iff P^2 is contracting.

PROOF. By Theorem 1.8.6, P and P^2 are isomorphic. The theorem is immediate. ∎

Finally, we consider the interconnections of contracting T-processors:

3.6.12 Theorem. If P is a T-processor, then the following statements are equivalent:

(i) P is contracting.
(ii) P: is contracting.
(iii) :P is contracting.

PROOF. By Theorem 2.4.9, P, P:, and :P are pairwise isomorphic. Thus, each is an image of the other. The theorem follows from Theorem 3.6.8. ∎
3.6.13 Theorem. Let P and Q be T-processors. If both P and Q are contracting, then P ∘ Q and P//Q are both contracting. If P^1 is a T-processor and P is contracting, then #P is contracting.

PROOF. By Theorem 3.5.1, (P ∘ Q)_t ⊆ P_t ∘ Q_t for all t ∈ T. Therefore, if P and Q are contracting, by Lemma 2.2.6,

    P_t ⊆ P & Q_t ⊆ Q ⇒ P_t ∘ Q_t ⊆ P ∘ Q

and we see P ∘ Q is contracting. Similarly, (P//Q)_t ⊆ P_t//Q_t by Theorem 3.5.2, and by using Lemma 2.3.7,

    P_t ⊆ P & Q_t ⊆ Q ⇒ P_t//Q_t ⊆ P//Q

so P//Q is contracting. Finally, using Theorem 3.5.3, if P̄ = P,

    #P ⊆ (#P)¯ ⊆ #(P̄) = #P

that is, (#P)¯ = #P. Thus, #P is contracting. ∎
3.7 EXPANDING AND STATIONARY PROCESSES

The concepts of "expanding" and "stationary" T-processes are very similar to the above concept of contracting T-processes. For the most part, we get completely analogous results.
3.7.1 Definition. Let P be a T-process. P is expanding [stationary] iff for all t ∈ T, P ⊆ P_t [for all t ∈ T, P = P_t].

REMARK. Evidently, P is stationary iff P is both contracting and expanding. Again, if every element of P is constant, then P is stationary and expanding as well as contracting. Thus, for example, the input set of any free T-processor is stationary.

We have characterization theorems for expanding and stationary T-processes as follows:
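The remark above is easy to see in a finite discrete-time model. In this sketch (my own encoding, not the book's), periodic sequences are tuples and the left translation is a rotation; a set of constant sequences — and, more generally, any rotation-closed set — is fixed by every translation and hence stationary.

```python
# Sketch: stationary processes in the periodic-tuple encoding.

def shift(p, t):
    """t-left translation (rotation) of the periodic sequence coded by p."""
    t %= len(p)
    return p[t:] + p[:t]

def evolve(t, P):
    return frozenset(shift(p, t) for p in P)

# A process of constant sequences: every left translation leaves it fixed.
const = frozenset({(a,) * 4 for a in range(3)})
assert all(evolve(t, const) == const for t in range(8))   # stationary

# A non-constant periodic sequence also yields a stationary process once we
# take all of its rotations (the set is closed under translation both ways).
p = (0, 1, 0, 2)
orbit = frozenset(shift(p, t) for t in range(4))
assert all(evolve(t, orbit) == orbit for t in range(8))
```

The second example shows that constancy of the elements is sufficient but not necessary for stationarity of the process.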
3.7.2 Theorem. If P is a T-process, then the following statements are equivalent:

(i) P is expanding.
(ii) p ∈ P & t ∈ T ⇒ (∃q): q ∈ P & p = q_t.
(iii) For all t, t' ∈ T, P_t ⊆ P_{t+t'}.
(iv) For all t ∈ T, P_t is expanding.
(v) For all t, t' ∈ T, t < t' ⇒ P_t ⊆ P_{t'}.

PROOF. Similar to the proof of Theorem 3.6.2. We leave the details to the reader. ∎

3.7.3 Corollary. The class of expanding T-processes is time invariant. The class of expanding T-processors is time invariant.
3.7.4 Theorem. If P is a T-process, then the following statements are equivalent:

(i) P is stationary.
(ii) For all t ∈ T, P_t is stationary.
(iii) For all t, t' ∈ T, P_t = P_{t'}.

PROOF. (i) ⇒ (ii). We know P_{t'} = P for all t' ∈ T. Fix t. Then,

    (P_t)_{t'} = P_{t+t'} = P = P_t

that is, P_t is stationary. (ii) ⇒ (iii). For all t, t' ∈ T, by hypothesis on T, there exist t_1 and t_2 such that

    t + t_1 = t' + t_2

with either t_1 = 0 or t_2 = 0. If t_1 = 0, then

    P_t = P_{t'+t_2} = (P_{t'})_{t_2} = P_{t'}

since P_{t'} is stationary. If t_2 = 0, then

    P_{t'} = P_{t+t_1} = (P_t)_{t_1} = P_t

since P_t is stationary. Thus, (iii) holds. (iii) ⇒ (i). For any t ∈ T,

    P_t = P_0 = P

so P is stationary. ∎
Condition (iii) of Theorem 3.7.4 shows that a stationary T-process has a constant and hence a trivial time-evolution. Condition (ii) gives the usual corollary:
3.7.5 Corollary. The class of stationary T-processes is time invariant. The class of stationary T-processors is time invariant.

3.7.6 Lemma. Let P be a T-process. If P is expanding [stationary], then for all t, t' ∈ T, t < t' ⇒ tP ⊆ t'P [for all t, t' ∈ T, tP = t'P].

PROOF. Like Lemma 3.6.6. ∎

The next theorem leads to almost all of our other preliminary results:

3.7.7 Lemma. Let P be a T-process. If P is expanding, then for all t ∈ T, 𝒜P_t = 𝒜P.

PROOF. By Lemma 3.3.5, 𝒜P_t ⊆ 𝒜P. By Lemma 1.5.10, if P ⊆ P_t, then 𝒜P ⊆ 𝒜P_t. Thus, if P is expanding, 𝒜P_t = 𝒜P for all t. ∎
3.7.8 Theorem. Let P and Q be T-processes. If Q is an image of P and P is expanding [stationary], then Q is expanding [stationary].

PROOF. Let h: 𝒜P → 𝒜Q be a homomorphism. By Corollary 3.4.17, for all t ∈ T, h/𝒜P_t is a homomorphism from P_t to Q_t. If P is expanding, then for all t ∈ T, P ⊆ P_t. Moreover, by Lemma 3.7.7, 𝒜P_t = 𝒜P and it follows that h/𝒜P_t = h. We have

    Q = {p ∘ h | p ∈ P} ⊆ {p ∘ h | p ∈ P_t} = Q_t

that is, Q is expanding. Similarly, if P is stationary, P_t = P for all t. Every stationary T-process is expanding, hence, again, h = h/𝒜P_t. Thus, for any t ∈ T,

    Q = {p ∘ h | p ∈ P} = {p ∘ h | p ∈ P_t} = Q_t

which proves Q is stationary. ∎
3.7.9 Lemma. Let P be a T-processor. If P is expanding [stationary], then both P^1 and P^2 are expanding [stationary].

PROOF. In Theorem 1.8.4, we showed P^1 and P^2 are images of P. ∎

Combining Lemmas 3.7.6 and 3.7.9, we get:
3.7.10 Corollary. Let P be a T-processor. If P is expanding [stationary], then for all t, t' ∈ T:

(i) t < t' ⇒ tP^1 ⊆ t'P^1 [tP^1 = t'P^1].
(ii) t < t' ⇒ tP^2 ⊆ t'P^2 [tP^2 = t'P^2].

Thus, the attainable spaces of both input set and output set of an expanding T-processor are "expanding," and for a stationary T-processor they are "constant."
3.7.11 Theorem. Let P be a T-processor. P is expanding [stationary] iff P^{-1} is expanding [stationary].

PROOF. P and P^{-1} are isomorphic. ∎

3.7.12 Theorem. Let P be a T-process. P is expanding [stationary] iff IP is expanding [stationary].

PROOF. P and IP are isomorphic. ∎
3.7.13 Theorem. Let P be a T-processor. If P is free, then P is expanding [stationary] iff P^2 is expanding [stationary].

PROOF. If P is free, then P and P^2 are isomorphic. ∎

3.7.14 Theorem. If P is a T-processor, then the following statements are equivalent:

(i) P is expanding [stationary].
(ii) P: is expanding [stationary].
(iii) :P is expanding [stationary].

PROOF. P, P:, and :P are pairwise isomorphic. ∎

REMARK. It happens that if two T-processors P and Q are expanding [stationary], the series interconnection P ∘ Q need not be expanding [stationary]. Also, if P^1 is a T-processor, then the closed loop #P may not be expanding [stationary] when P is. This somewhat surprising result is traceable to the situation discussed at the beginning of Section 3.5. As we showed there, the conditions (P_t ∘ Q_t) ⊆ (P ∘ Q)_t and #(P_t) ⊆ (#P)_t hold only in special cases, and these conditions are essentially what we need in the present situation.

This result leads us to conclude that of the three concepts, contracting, expanding, and stationary T-processes, the contracting T-process is the most interesting and important, since it admits the significant additional property of being closed under various interconnections in the processor case. We shall encounter additional results unique to the contracting case in Chapter 4 that tend to support this conclusion. For the expanding and stationary T-processors, we get only the following theorem:
3.7.15 Theorem. Let P and Q be T-processors. If both P and Q are expanding [stationary], then P//Q is expanding [stationary].

PROOF. Using Lemma 2.3.7 and Theorem 3.5.2, we have

    P ⊆ P_t & Q ⊆ Q_t ⇒ (P//Q) ⊆ (P_t//Q_t) = (P//Q)_t

Thus, P//Q is expanding when both P and Q are. Similarly,

    P = P_t & Q = Q_t ⇒ (P//Q) = P_t//Q_t = (P//Q)_t

so P//Q is stationary when both P and Q are. ∎
3.8 SPECIALIZATION TO DISCRETE TIME

In some instances, we get a significant simplification of things when we restrict attention to discrete-time processes. Our first example of this is the following, in which we consider the concepts of contracting, expanding, and stationary ω-processes.
3.8.1 Theorem. If P is an ω-process, then the following statements are equivalent:

(i) P is contracting.
(ii) p ∈ P ⇒ p_1 ∈ P.
(iii) P_1 ⊆ P.
(iv) For all t ∈ ω, P_{t+1} ⊆ P_t.

PROOF. (i) ⇒ (ii). By Theorem 3.6.2, if P is contracting, then p ∈ P & t ∈ ω ⇒ p_t ∈ P, and 1 ∈ ω. Thus,

    p ∈ P ⇒ p ∈ P & 1 ∈ ω ⇒ p_1 ∈ P

that is, (ii). (ii) ⇒ (iii). If q ∈ P_1, then q = p_1 for some p ∈ P. But, by (ii), p_1 ∈ P; hence, q ∈ P. (iii) ⇒ (iv). We see

    P_{t+1} = P_{1+t} = (P_1)_t ⊆ P_t

where we have used Lemma 3.3.12. (iv) ⇒ (i) is by induction on t. We will show P_t ⊆ P for all t ∈ ω. Choosing t = 0 in (iv), we have

    P_1 ⊆ P_0 = P

which provides the basis for the induction. Assume P_t ⊆ P. Using (iv),

    P_{t+1} ⊆ P_t ⊆ P

This completes the induction and the theorem is proved. ∎
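Theorem 3.8.1 reduces checking contraction to a single one-step test, which is easy to mechanize. In this sketch (my own encoding, not the book's), an eventually constant ω-time function is coded by a finite tuple whose last entry repeats forever; normalizing away redundant trailing repeats makes tuple equality coincide with function equality.

```python
# Sketch: the one-step contraction test of Theorem 3.8.1 on a finite
# process of eventually constant omega-time functions.

def norm(p):
    """Strip redundant trailing repeats so the code is canonical."""
    while len(p) >= 2 and p[-1] == p[-2]:
        p = p[:-1]
    return p

def shift(p):
    """The 1-left translation p_1 of the coded sequence."""
    return norm(p[1:]) if len(p) > 1 else p

# A process closed under one shift: 1,0,2,0,0,... and all its translations.
P = frozenset({(1, 0, 2, 0), (0, 2, 0), (2, 0), (0,)})

# Theorem 3.8.1 (ii)/(iii): one shift suffices to certify contraction.
assert all(shift(p) in P for p in P)
assert frozenset(map(shift, P)) <= P

# P is contracting but not stationary: P_1 is a proper subset of P.
assert frozenset(map(shift, P)) != P
```

The last assertion illustrates why the discrete-time case is special: a single inclusion P_1 ⊆ P, rather than a condition for every t, settles the matter.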
We shall simply state the result in the remaining two cases and leave the proofs as exercises.

3.8.2 Theorem. If P is an ω-process, then the following statements are equivalent:

(i) P is expanding.
(ii) p ∈ P ⇒ (∃q): q ∈ P & p = q_1.
(iii) P ⊆ P_1.
(iv) For all t ∈ ω, P_t ⊆ P_{t+1}.

3.8.3 Theorem. If P is an ω-process, then the following statements are equivalent:

(i) P is stationary.
(ii) P = P_1.
(iii) For all t ∈ ω, P_t = P_{t+1}.
Exercises

3-1. Prove that for any function f: B → B the set

    A = {((t, b), bf^t) | t ∈ ω & b ∈ B}

is an action of ω, where f^0 = 1_B; f^1 = f; and f^{t+1} = f ∘ f^t. Characterize 𝒫A in the given case.

3-2. Let A be an action of the monoid M. Prove if b ∈ ℬA, then ℛAb is a minimal invariant set iff b is recurrent.

3-3. (Marino's theorem.) Let A be an action of a monoid M. Prove the set I of nonempty minimal invariant sets of A is a partition of the set of recurrent elements of A. (Note: A partition of a set K is a set I of nonempty subsets of K such that: (i) ∪I = K; and (ii) for all i, j ∈ I, i ≠ j ⇒ i ∩ j = ∅.)

3-4. Prove that for a T-time function p, the following statements are equivalent:

(i) p is constant.
(ii) For all t ∈ T, p_t = p.
(iii) For all t, t' ∈ T, p_t = p_{t'}.
(iv) For all t, t' ∈ T, tp = t'p.
(v) For all t ∈ T, tp = 0p.

3-5. Prove that for any T-process P, the set {P_t | t ∈ T} is a time-invariant class.
3-6. Prove that for any T-time function v, the set

    T_v = {t | t ∈ T & v_t = v}

is a time set.

3-7. A T-time function v is weakly periodic iff for some t ∈ T (t ≠ 0), v_t = v. v is periodic iff for some t ∈ T (t ≠ 0),

    T_v = {t^n | n ∈ ω}

where T_v is as in Exercise 3-6, and t^0 = 0; t^1 = t; t^{n+1} = t^n + t. t is the period of v. Show that an ω-time function is periodic iff weakly periodic.
3-8. A time set T has Archimedean order iff for every t, t' ∈ T (t ≠ 0), t' < t^{n+1} for some n ∈ ω. Prove if T has Archimedean order that a T-time function v is periodic iff there exists some t ∈ T (t ≠ 0) such that v_t = v and for all t' ∈ T (0 < t' < t), v_{t'} ≠ v. (Hint: First show that if 0 < t ≤ t', there exists some n ∈ ω and some t_1 ∈ T such that t' = t^n + t_1, where t_1 < t.)

3-9. Prove Theorem 3.2.13.
3-10. Give a counterexample to the following proposition: If the time set T has Archimedean order, then every weakly periodic T-time function is either periodic or constant. (Hint: Consider Exercise 1-30.)

3-11. Prove that if A is an action of T (where T is a time set), then 𝒫A is a contracting T-process.

3-12. Prove for any set A, the set A^T is a stationary T-process.

3-13. Write a short paragraph discussing your interpretation of the properties of contraction, expansion, and stationarity on T-processes.

3-14. A T-process P is weakly periodic iff for some t ∈ T (t ≠ 0), P_t = P. Prove the class of weakly periodic T-processes is time invariant.

3-15. Prove a T-process P is stationary iff, for all t ∈ T, P_t = P̄.
3-16. If P is an ω-process, then the unit delay on P is the ω-processor

    dP = {(p_1)p | p ∈ P}

Give necessary and sufficient conditions on dP that P be contracting. Prove that if P is contracting, then dP is contracting.

3-17. Prove that every finite automaton is contracting.

3-18. Prove that every differential motion is contracting.

3-19. Let T be a time set with Archimedean order. Prove that for a T-time function p, if there exists some t ∈ T (t ≠ 0) such that p_t = p and for all t' ∈ T, t' < t ⇒ p_{t'} = p, then p is constant.

3-20. Characterize a T-process P such that the set {P_t | t ∈ T} is a minimal invariant set of ℰC, where C = 𝒜P. Prove that such a T-process is weakly periodic (Exercise 3-14).
Strong Types of Causality
4.1 INTRODUCTION
The properties of contraction, expansion, and stationarity are what might properly be termed "strong" types of time invariance for T-processes. Though the properties are strong, their exposition may be justified in two ways:

(i) Some mathematical models for real technological processes do in fact possess these strong properties.
(ii) These properties are suggestive of other (weaker) properties that a process might more realistically possess.

Another type of property that a process or processor might very well possess is some type of "causality." Now what do we mean by this term as used here? Certainly the best word to substitute for "causality" is "functionality." However, we also do not see any fundamental reason for distinguishing the words "causality" and "determinism" as properties of processes. There is a reason; namely, the term "nondeterminism" has, over the years, become equated with "random" or "stochastic" in some parts of systems theory. This is unfortunate because there would appear to be a broad area between "nondeterministic" and "stochastic," wherein "nondeterministic" processes would be studied by methods other than the theory of random processes. Of course, such an area does not, in fact, exist. Indeed, on the contrary, it would appear that even the theory of random processes takes us back to some type or other of process functionality, hence, back in the end to causality, although the functionality may be on some process or processor other than the one we started with. Let us disregard this for now. Here, causality means functionality, and functionality means:

(i) For a T-process, for each p ∈ P and t ∈ T, tp is given as the image of some function f associated with P, whose arguments do not include tp itself.
(ii) For a T-processor, for each uy ∈ P and t ∈ T, ty (i.e., the output) is given as the image of some function f associated with P, whose arguments do not include ty itself.
In each of the above cases, f is called an auxiliary function for P. We have already seen one example of an auxiliary function for T-processes. Namely, for a functional T-processor P, the function f: P^1 × T → 𝒜P^2 such that

    (u, t)f = t(uP)

is an auxiliary function for P. That is, for all uy ∈ P and all t ∈ T, ty = (u, t)f.

The concept of auxiliary function is due to Mesarovic, as is the concept of a constructive specification, and it was Mesarovic who first pointed out the important relationship between the two concepts in [26]. Roughly, a constructive specification for a T-process P is a formula ℱ depending on one or more auxiliary functions, such that the following formula is true:

    p ∈ P ⇔ ℱ

Above, the auxiliary function f for functional T-processors gives rise to a constructive specification for such T-processors. Recall that in Theorem 1.7.6 we showed uy ∈ P ⇔ uP = y in the functional case. It follows that for any functional T-processor,

    uy ∈ P ⇔ (∀t): ty = (u, t)f
4.1.
INTRODUCTION
97
Hence, the formula ( V t ) : ty = ( a , t ) f is a constructive specification for P. T h e reader will recall, of course, that in Section 1.9 a number of examples of T-processes (in particular, T-processors) was presented. Looking back, essentially all of these example Tprocesses were defined by some sort of constructive specification or another. I n other words, the most common way of defining processes in systems theory is by the use of auxiliary functions and constructive specifications. This is an important observation and gives some insight. Actually, the ready availability of constructive specifications for processes has tended in the past to obscure what, in essence, a process really is and to make generalization (such as we are trying for here) more difficult. I t is quite well understood in systems theory that certain constructive specifications for processes are “canonical” in the sense that they exist for a host of important processes. In fact, this is one of the common ways of classifying processes. Onc speaks of “ordinary differential equation” processes (or systems); “integral equation” processes; “finite-difference equation” processes; etc. This understanding (that “canonical” constructive specifications exist) is, however, informal. I n the setup we have established here, it is possible to attack this issue formally. What we can do is show for a certain classification of 7’-process that a constructive specification of a certain form exists. If we do not find any surprises in this, we shall at least take comfort in the new precision we bring to bear on this issue. Fundamentally, what we propose to do in this chapter is to consider a number of “strong” types of causality for T-processcs. 
Such properties, in the case of mathematical models for real technological processes, arise: (i) When the process is a processor, and there is a very strong property of "cause" and "effect" from input to output underlying the given phenomenon; (ii) When said property of "cause" and "effect" is well understood (as in the case of many processes which obey physical "laws") and is duly accounted for in choosing the mathematical model for the processor.
The main concepts we shall be espousing are, of course, auxiliary functions and constructive specifications. Building on what we do in this chapter, in Chapter 5 we shall consider the concept of state in our theory. With the concepts of auxiliary functions, constructive specifications, and state spaces for T-processors, we will then be able to explain the "state space" approach in systems theory, i.e., the approach to processes on which almost all of systems theory is today constructed. This, then, we consider to be perhaps the most basic chapter of this book.

4.2 STATIC PROCESSORS
One of the simplest yet most important concepts of causality in the systems field is the notion of a "static" processor. A static processor is one which is "instantaneously" a function from input to output. Static processors are often referred to as "instantaneous" or "memoryless" as well as "static." As we shall see, the property of being "static" is a very strong property of processors. The importance of the concept stems from the fact that many real processors can be regarded as "instantaneous" for the purposes of analysis relative to the processors with which they are interconnected. Thus, in the given interconnection, they are relatively simple constituents. Recall that if P is a T-processor, then ΩP is a relation and, for all t ∈ T, tP is a relation. We distinguish four types of static T-processors:
4.2.1 Definition. Let P be a T-processor. P is weakly static iff 0P is a function. P is static iff for all t ∈ T, tP is a function. P is uniformly static [bistatic] iff ΩP is a function [a 1:1 function].
REMARK. Recall that ΩP is the attainable space of P and that tP is the attainable space at t ∈ T. We are going to discover the rather remarkable fact that these attainable spaces can also be auxiliary functions for T-processors.
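Definition 4.2.1 can be made concrete in a small discrete-time sketch. Below, a finite ω-processor over a two-instant horizon is modeled as a set of (input, output) pairs of equal-length tuples, so that u[t] plays the role of tu, and each of the four static properties is tested by checking whether the corresponding attainable relation is a function. All names here (is_function, attainable_at, and the example processor P) are ours, introduced only for illustration; they are not notation from the text.

```python
def is_function(pairs):
    """A relation (set of ordered pairs) is a function iff no first
    coordinate is paired with two different second coordinates."""
    seen = {}
    for a, b in pairs:
        if a in seen and seen[a] != b:
            return False
        seen[a] = b
    return True

def attainable_at(P, t):
    """The relation tP = {(tu, ty) | uy in P}."""
    return {(u[t], y[t]) for (u, y) in P}

def attainable(P):
    """The relation ΩP = {(tu, ty) | uy in P and t in T}."""
    T = range(len(next(iter(P))[0]))
    return set().union(*(attainable_at(P, t) for t in T))

def weakly_static(P):
    return is_function(attainable_at(P, 0))

def static(P):
    return all(is_function(attainable_at(P, t))
               for t in range(len(next(iter(P))[0])))

def uniformly_static(P):
    return is_function(attainable(P))

def bistatic(P):
    om = attainable(P)
    return is_function(om) and is_function({(b, a) for (a, b) in om})

# Hypothetical example: at time t the output is (t + 1) times the input,
# so each tP is a function but ΩP pairs 1 with both 1 and 2.
P = {((1, 1), (1, 2)), ((2, 0), (2, 0))}
```

Running the checks on this P shows it is static (hence weakly static) but not uniformly static, illustrating that the hierarchy of Lemma 4.2.2 is strict.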
4.2.2 Lemma. Let P be a T-processor. If P is static, it is weakly static. If P is uniformly static, it is static. If P is bistatic, it is uniformly static.
PROOF. The first and third statements are obvious. To see the second, recall that for all t ∈ T, tP ⊆ ΩP. If ΩP is a function, then for all t ∈ T, tP is a function, since every subset of a function is a function. Thus, if P is uniformly static, it is static. ∎
We have characterization theorems for each of the types of static T-processors. However, only the theorems for static and uniformly static T-processors are at all sophisticated. For weakly static T-processors, we have:
4.2.3 Theorem. A T-processor P is weakly static iff 0P: 0P1 → 0P2 (onto).
PROOF. For any function f, f: 𝒟f → ℛf (onto). But, by Lemma 1.6.5,
𝒟(0P) = 0P1 and ℛ(0P) = 0P2
Hence, 0P is a function iff 0P: 0P1 → 0P2 (onto). ∎
4.2.4 Theorem. If P is a T-processor, then the following statements are equivalent:
(i) P is static.
(ii) For all t ∈ T, P_t is weakly static.
(iii) For all t ∈ T, P_t is static.
(iv) For all t ∈ T, tP: tP1 → tP2 (onto).
(v) P*: P1 → P2 (onto); for all t ∈ T, tP: tP1 → tP2 (onto); and t(uP*) = (tu)(tP).
(vi) P*: P1 → P2 (onto), and tu = tv ⇒ t(uP*) = t(vP*).
(vii) For all uy, vz ∈ P and all t ∈ T, tu = tv ⇒ ty = tz.
PROOF. (i) ⇒ (ii). By Lemma 3.3.13, 0P_t = (t + 0)P = tP. Thus, if for all t, tP is a function, then for all t, 0P_t is a function.
(ii) ⇒ (iii). Let 0P_t'' be a function for all t''. Choose t ∈ T and consider P_t. By Lemma 3.3.13,
t'P_t = (t + t')P = ((t + t') + 0)P = 0P_(t+t')
Hence, for all t' ∈ T, t'P_t is a function.
(iii) ⇒ (iv). If P_t is static, then it is weakly static via Lemma 4.2.2. Thus, by Theorem 4.2.3, 0P_t: 0(P_t)1 → 0(P_t)2 (onto). But 0P_t = tP and, using Lemmas 3.3.9 and 3.3.13,
0(P_t)1 = 0(P1)_t = tP1
and similarly for P2. Thus, tP: tP1 → tP2 (onto).
(iv) ⇒ (v). If for all t ∈ T, tP: tP1 → tP2 (onto), then P is functional, i.e.,
uy ∈ P & uz ∈ P ⇒ (∀t): (tu)(tP) = ty & (tu)(tP) = tz ⇒ (∀t): ty = tz ⇒ y = z
so P*: P1 → P2 (onto) by Theorem 1.7.6. Then, if uy ∈ P and t ∈ T, we have uP* = y and
t(uP*) = (tu)(tP)
(v) ⇒ (vi). Given (v), P*: P1 → P2 (onto) and
tu = tv ⇒ (tu)(tP) = (tv)(tP) ⇒ t(uP*) = t(vP*)
(vi) ⇒ (vii) and (vii) ⇒ (i) are trivial. ∎
Condition (v) of Theorem 4.2.4 shows that all static processors are functional and indicates how the output y may be calculated from the input u in a point-by-point manner using the functions tP.
4.2.5 Corollary. Every static T-processor is functional.
Condition (iii) of Theorem 4.2.4 gives us the familiar result:
4.2.6 Corollary. The class of static T-processors is time invariant.
The next theorem shows there exists a uniform scheme for defining all static T-processors. Such a scheme is what we have discussed in the introduction above, namely, a constructive specification for static processors.
4.2.7 Theorem. If P is a static T-processor, then
uy ∈ P ⇔ u ∈ P1 & (∀t): ty = (tu)(tP)
PROOF. Clearly, if uy ∈ P, then the given conditions are satisfied. Conversely, assume the given conditions. If u ∈ P1 then, for some z, uz ∈ P. Then, since P is static, for all t ∈ T,
tz = (tu)(tP) = ty
that is, z = y. Thus, uy ∈ P. ∎
REMARK. Thus, the formula u ∈ P1 & (∀t): ty = (tu)(tP) is a constructive specification for any static T-processor P. The interpretation of this is important. It says that in order to define a static T-processor P, it suffices to give the input set P1 together with the family of functions tP for t ∈ T. We note that Theorem 4.2.7 makes clear the fact that each of the functions tP is an auxiliary function for P. Thus, we have an example of a constructive specification involving an infinite number of auxiliary functions.
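The constructive specification of Theorem 4.2.7 can be sketched directly in the finite discrete-time model: given an input set and the family of auxiliary functions tP (represented here as one dict per time instant), the processor is rebuilt point by point via ty = (tu)(tP). The names static_processor, input_set, and tP_family are ours, chosen for illustration.

```python
def static_processor(input_set, tP_family):
    """Build P = {uy | u in P1 and, for all t, ty = (tu)(tP)}.
    tP_family[t] is the function tP, one dict per time instant."""
    P = set()
    for u in input_set:
        y = tuple(tP_family[t][u[t]] for t in range(len(u)))
        P.add((u, y))
    return P

# Hypothetical time-varying gain: at time t the output is (t + 1) * input.
tP_family = [{0: 0, 1: 1, 2: 2}, {0: 0, 1: 2, 2: 4}]
P = static_processor({(1, 2), (0, 1)}, tP_family)
```

Because every output value is read off a function of the current input value alone, any processor built this way is static by construction.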
We note in passing that the weakly static T-processors do not constitute a time-invariant class. This downgrades the importance of the concept somewhat. We justify our interest in these T-processors by the role they play in this section in making proofs.
4.2.8 Theorem. If P is a T-processor, then the following statements are equivalent:
(i) P is uniformly static.
(ii) P̄ is weakly static.
(iii) For all t ∈ T, P_t is uniformly static.
(iv) ΩP: ΩP1 → ΩP2 (onto).
(v) P*: P1 → P2 (onto); ΩP: ΩP1 → ΩP2 (onto); and uP* = u ∘ ΩP.
(vi) P*: P1 → P2 (onto), and tu = t'v ⇒ t(uP*) = t'(vP*).
(vii) For all uy, vz ∈ P and all t, t' ∈ T, tu = t'v ⇒ ty = t'z.
PROOF. Similar to the proof for Theorem 4.2.4 and using Lemma 3.4.3 (namely, ΩP = 0P̄) extensively. ∎
Condition (v) of Theorem 4.2.8 shows how the function ΩP may be used to calculate the output y given the input u in a point-by-point manner, i.e.,
t(uP*) = t(u ∘ ΩP) = (tu)(ΩP)
Also, it gives:
4.2.9 Corollary. Every uniformly static T-processor is functional.
Condition (vii) shows that the functional relationship between tu and ty does not depend on t, i.e., it does not vary with time. Thus, uniformly static processors are what are sometimes referred to as "time-invariant" ones, and static processors are what are sometimes referred to as "time-varying" ones. Condition (iii) of Theorem 4.2.8 gives:
4.2.10 Corollary. The class of uniformly static T-processors is time invariant.
For uniformly static T-processors, there is only the one auxiliary function ΩP. Such processors admit the following simple sort of constructive specification:
4.2.11 Theorem. If P is a uniformly static T-processor, then
uy ∈ P ⇔ u ∈ P1 & (∀t): ty = (tu)(ΩP)
PROOF. Similar to the proof for Theorem 4.2.7. ∎
REMARK. Thus we see that to define a uniformly static T-processor, it suffices to give the input set P1 and the auxiliary function ΩP of P.
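Per the remark and Theorem 4.2.11, a uniformly static processor is fixed by its input set and the single auxiliary function ΩP, applied at every time instant. A sketch in the same finite discrete-time model, with names of our own choosing:

```python
def uniformly_static_processor(input_set, omega):
    """Build P from P1 and ΩP: ty = (tu)(ΩP) for every t.
    omega is one dict applied at all time instants alike."""
    return {(u, tuple(omega[a] for a in u)) for u in input_set}

# Hypothetical "time-invariant" pointwise map: squaring.
square = {0: 0, 1: 1, 2: 4, 3: 9}
P = uniformly_static_processor({(1, 2, 3), (0, 2, 0)}, square)
```

Contrast this with the static case of Theorem 4.2.7, which needed one function tP per time instant; here a single function serves for all t.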
Finally, there is a very simple relationship between the bistatic T-processors and the uniformly static ones:
4.2.12 Theorem. Let P be a T-processor. P is bistatic iff both P and P⁻¹ are uniformly static.
PROOF. It suffices to show that Ω(P⁻¹) = (ΩP)⁻¹. We see
Ω(P⁻¹) = {t(yu) | uy ∈ P & t ∈ T} = {(ty, tu) | uy ∈ P & t ∈ T} = {(b, a) | a(ΩP)b} = (ΩP)⁻¹
Now, since ΩP is a 1:1 function iff both ΩP and (ΩP)⁻¹ are functions, the theorem is immediate. ∎
The following corollaries follow readily:
4.2.13 Corollary. Every bistatic T-processor is bifunctional.
4.2.14 Corollary. The class of bistatic T-processors is time invariant.
We have on hand a very important and general example of a bistatic T-processor:
4.2.15 Lemma. For any T-process P, I_P is a bistatic T-processor.
PROOF. As an exercise.
4.2.16 Lemma. Let P and Q be T-processors such that P ⊆ Q. If Q is weakly static [static; uniformly static; bistatic], then P is weakly static [static; uniformly static; bistatic].
PROOF. If P ⊆ Q, then for all t ∈ T, tP ⊆ tQ and ΩP ⊆ ΩQ by Lemma 1.5.10. The lemma follows from the fact that any subset of a function [a 1:1 function] is a function [a 1:1 function]. ∎
We come now to a very important set of results which give to the uniformly static T-processors and the bistatic T-processors a very prominent role in our theory.
4.2.17 Theorem. A T-processor P is uniformly static [bistatic] iff ΩP is a homomorphism [isomorphism] from P1 to P2.
PROOF. If ΩP is a homomorphism [isomorphism], then ΩP is a function [a 1:1 function]. Thus, P is uniformly static [bistatic]. Conversely, if P is uniformly static [bistatic], then ΩP: ΩP1 → ΩP2 (onto) [ΩP: ΩP1 → ΩP2 (1:1 onto)] by Theorem 4.2.8. Moreover, by condition (v) of 4.2.8,
P2 = {uP* | u ∈ P1} = {u ∘ ΩP | u ∈ P1}
Thus, if P is uniformly static, then ΩP is a homomorphism, and if P is bistatic, then ΩP is an isomorphism from P1 to P2. ∎
4.2.18 Corollary. Let P be a T-processor. If P is uniformly static [bistatic], then P2 is an image of P1 [P1 and P2 are isomorphic].
4.2.19 Theorem. Let P and Q be T-processes. If h is a homomorphism [isomorphism] from P to Q, then the set R = {p(p ∘ h) | p ∈ P} is a uniformly static [bistatic] T-processor. Moreover, ΩR = h.
PROOF. Clearly, R is a T-processor. Now,
ΩR = {t(p(p ∘ h)) | p ∈ P & t ∈ T} = {(tp, t(p ∘ h)) | p ∈ P & t ∈ T} = {(tp, (tp)h) | p ∈ P & t ∈ T} = {(a, ah) | a ∈ ΩP} = h
Hence, if h is a homomorphism [isomorphism], ΩR is a function [a 1:1 function] and R is uniformly static [bistatic]. ∎
4.2.20 Theorem. If P is a uniformly static T-processor, then P and P1 are isomorphic.
PROOF. Clearly, the function
h = ¹ΩP = {((a, b), a) | a(ΩP)b}
is a homomorphism from P to P1 (see proof of Theorem 1.8.4). Now if P is uniformly static, ¹ΩP is 1:1. In fact, choose a(ΩP)b and c(ΩP)d. There exist elements uy, vx ∈ P and t, t' ∈ T such that:
(i) tu = a.
(ii) ty = b.
(iii) t'v = c.
(iv) t'x = d.
Now, using condition (vii) of Theorem 4.2.8, we have
a = c ⇒ a = c & tu = t'v ⇒ a = c & ty = t'x ⇒ a = c & b = d ⇒ (a, b) = (c, d)
Thus, h is an isomorphism. ∎
4.2.21 Corollary. If P is a uniformly static T-processor, then ¹ΩP is an isomorphism from P to P1.
Theorem 4.2.22 gives a result which shows when uniformly static T-processors are going to be contracting, expanding, and stationary:
4.2.22 Theorem. Let P be a T-processor. If P is uniformly static, then P is contracting [expanding; stationary] iff P1 is contracting [expanding; stationary].
PROOF. Since P and P1 are isomorphic, they are images of each other. Thus, apply Theorems 3.6.8 and 3.7.8. ∎
Finally, we have a theorem which shows to some extent that the assumption of contraction on a T-processor is indeed a strong assumption. We merely note that being weakly static is, in fact, a “weak” assumption on a T-processor relative to being uniformly static. Then:
4.2.23 Theorem. Let P be a T-processor. If P is contracting, then the following statements are equivalent:
(i) P is uniformly static.
(ii) P is static.
(iii) P is weakly static.
PROOF. (i) ⇒ (ii) and (ii) ⇒ (iii) hold by Lemma 4.2.2.
(iii) ⇒ (i). If P is contracting, then P̄ = P. Thus, if P is weakly static, then P̄ is weakly static. However, by Theorem 4.2.8, if P̄ is weakly static, then P is uniformly static. ∎

4.3 STATIC INTERCONNECTIONS
In this section, we consider interconnections of the various types of static T-processors.
4.3.1 Theorem. For any T-processor P, :P is uniformly static.
PROOF. We see
Ω(:P) = {t((uy)y) | uy ∈ P & t ∈ T} = {(t(uy), ty) | uy ∈ P & t ∈ T} = {((tu, ty), ty) | uy ∈ P & t ∈ T} = {((a, b), b) | a(ΩP)b} = ²ΩP
which is a function. ∎
4.3.2 Theorem. Let P be a T-processor. P is uniformly static iff P: is bistatic.
PROOF. Consider Ω(P:). We have
Ω(P:) = {t(u(uy)) | uy ∈ P & t ∈ T} = {(tu, t(uy)) | uy ∈ P & t ∈ T} = {(tu, (tu, ty)) | uy ∈ P & t ∈ T} = {(a, (a, b)) | a(ΩP)b} = (¹ΩP)⁻¹
Now, according to Corollary 4.2.21, if P is uniformly static, then ¹ΩP is an isomorphism from P to P1, hence, a 1:1 function. Clearly, then, (¹ΩP)⁻¹ is also a 1:1 function, i.e., Ω(P:) is 1:1 and P: is bistatic. Conversely, if P: is bistatic, then (¹ΩP)⁻¹ is a 1:1 function. Choose a(ΩP)b and c(ΩP)d. Clearly, a(¹ΩP)⁻¹ = (a, b) and c(¹ΩP)⁻¹ = (c, d). Thus,
a = c ⇒ (a, b) = (c, d) ⇒ b = d
that is, ΩP is a function. In other words, P is uniformly static. ∎
4.3.3 Theorem. Let P be a T-processor. P is weakly static [static] iff P: is weakly static [static].
PROOF. The reader can show that for all t ∈ T,
a(tP:)(a, b) ⇔ a(tP)b
and it follows that tP: is a function iff tP is a function. ∎
4.3.4 Theorem. Let P and Q be T-processors. If P and Q are both weakly static [static; uniformly static; bistatic], then P ∘ Q is weakly static [static; uniformly static; bistatic].
PROOF. For any t ∈ T,
t(P ∘ Q) = {(tu, tz) | (∃y): uy ∈ P & yz ∈ Q}
tP ∘ tQ = {(tu, tz) | (∃y)(∃x): uy ∈ P & xz ∈ Q & ty = tx}
and we see that t(P ∘ Q) ⊆ tP ∘ tQ. If both 0P and 0Q are functions, then 0P ∘ 0Q is a function and 0(P ∘ Q) is a function, since 0(P ∘ Q) ⊆ 0P ∘ 0Q. Thus, if both P and Q are weakly static, then P ∘ Q is weakly static. Similarly, if for all t ∈ T, tP and tQ are functions, then tP ∘ tQ is a function, whence t(P ∘ Q) is a function. In other words, if P and Q are both static, then P ∘ Q is static. Next, we see
Ω(P ∘ Q) = {(tu, tz) | (∃y): uy ∈ P & yz ∈ Q & t ∈ T}
ΩP ∘ ΩQ = {(tu, t'z) | (∃y)(∃x): uy ∈ P & xz ∈ Q & t, t' ∈ T & ty = t'x}
and, again, Ω(P ∘ Q) ⊆ ΩP ∘ ΩQ. If both ΩP and ΩQ are functions [1:1 functions], then ΩP ∘ ΩQ is a function [a 1:1 function], and we see that Ω(P ∘ Q) is a function [a 1:1 function]. Thus, if both P and Q are uniformly static [bistatic], then P ∘ Q is uniformly static [bistatic]. ∎
REMARK. The cross product of two relations R and S is the relation
R × S = {((a, c), (b, d)) | aRb & cSd}
The cross product of two functions [two 1:1 functions] is a function [a 1:1 function]. In fact, if R and S are functions,
(a, c)(R × S) = (aR, cS)
4.3.5 Theorem. Let P and Q be T-processors. If P and Q are both weakly static [static; uniformly static; bistatic], then P // Q is weakly static [static; uniformly static; bistatic].
PROOF. For any t ∈ T,
t(P // Q) = {t((uv)(yz)) | uy ∈ P & vz ∈ Q} = {((tu, tv), (ty, tz)) | uy ∈ P & vz ∈ Q} = {((a, c), (b, d)) | a(tP)b & c(tQ)d} = tP × tQ
where (×) is the above cross product. It follows that if both P and Q are weakly static [static], then P // Q is weakly static [static]. Next, we see that
Ω(P // Q) = {((tu, tv), (ty, tz)) | uy ∈ P & vz ∈ Q & t ∈ T}
ΩP × ΩQ = {((tu, t'v), (ty, t'z)) | uy ∈ P & vz ∈ Q & t, t' ∈ T}
and that Ω(P // Q) ⊆ ΩP × ΩQ. If both ΩP and ΩQ are functions [1:1 functions], then ΩP × ΩQ is a function [a 1:1 function], and hence Ω(P // Q) is a function [a 1:1 function]. In other words, if both P and Q are uniformly static [bistatic], then P // Q is uniformly static [bistatic]. ∎
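Theorems 4.3.4 and 4.3.5 say that the series composite P ∘ Q and the parallel composite P // Q inherit staticness from their constituents. The following finite sketch builds both interconnections in the set-of-pairs model used above; the function names and the example processors are ours, and the parallel model assumes equal-length time functions.

```python
def series(P, Q):
    """P ∘ Q = {uz | there exists y with uy in P and yz in Q}."""
    return {(u, z) for (u, y) in P for (x, z) in Q if y == x}

def parallel(P, Q):
    """P // Q feeds the pair (u, v) in and reads the pair (y, z) out,
    componentwise in time."""
    return {(tuple(zip(u, v)), tuple(zip(y, z)))
            for (u, y) in P for (v, z) in Q}

# Hypothetical uniformly static constituents.
doubler = {((1, 2), (2, 4)), ((0, 1), (0, 2))}
negate  = {((2, 4), (-2, -4)), ((0, 2), (0, -2))}

composite = series(doubler, negate)
both = parallel(doubler, negate)
```

Here every output of doubler is an admissible input of negate, so the series composite is total; in general P ∘ Q keeps only those input-output pairs that chain through a common intermediate time function.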
4.4 NONANTICIPATORY PROCESSORS
Another type of causality perhaps even more important (and certainly more general) than static behavior is "nonanticipation." Intuitively, a T-processor is nonanticipatory if the output at any time depends only on the "past" of the input.† While everyone would agree that this is indeed what nonanticipation means, this intuitive notion is rather imprecise. There are several ways to formalize this concept, depending on exactly what we mean by "past" of the input, that differ from one another slightly; hence, we are going to consider three types of nonanticipatory T-processors. There is also some ambiguity in the concept introduced by the words "depends only on." Surely this means the output is a function of the past of the input. But does it mean the output is a function only of the past of the input, or can the output be a function of the past of the input and some other parameter? Although some would disagree with our decision, we are going to formalize "nonanticipation" to mean that the output is a function only of the past of the input. In Section 4.7, we consider an instance of the other case, namely, where the output at any given time is a function of the past of the input and another parameter. We prefer to call such T-processors "transitional" rather than "nonanticipatory." As we show, the two concepts are closely related. Before proceeding with a formal definition of nonanticipation, we need some notation for denoting the "past" of a T-time function.
4.4.1 Definition. Let v be a T-time function. If t ∈ T, then the t-segment of v is the set
v^t = {(t', t'v) | t' ∈ T & t' < t}
If P is a T-process, then the set
P̃ = {p^t | p ∈ P & t ∈ T}
is the set of initial segments of P.
† Or, as some students prefer, "P does not jump before it is kicked."
REMARK. In general, v^t ⊆ v, and hence v^t is a function. We see that†
𝒟v^t = {t' | t' ∈ T & t' < t}
Also, v^0 = ∅ (since ¬(t < 0) for all t ∈ T), and hence, for any nonempty T-process P, ∅ ∈ P̃.
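For discrete time (T = ω), the t-segment of Definition 4.4.1 is simply the restriction of a time function to the instants strictly before t, i.e., its first t values. A minimal sketch, with a function name of our own choosing:

```python
def segment(u, t):
    """The t-segment u^t = {(t', t'u) | t' < t}, represented as a dict
    from time instants to values."""
    return {tp: u[tp] for tp in range(t)}

u = (5, 7, 9)
```

As the remark observes, the 0-segment is the empty function for every time function, which is why ∅ always belongs to the set of initial segments of a nonempty process.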
The following three lemmas are all very important for the subsequent theoretical development. It is at this point that we begin to make real use of the ordering structure on the time set T that was postulated.
4.4.2 Lemma. Let u and y be T-time functions. For all t, t' ∈ T:
(i) u^t = y^t' ⇒ t = t'.
(ii) u^t' = y^t' & t < t' ⇒ u^t = y^t & tu = ty.
(iii) u^(t+t') = y^(t+t') ⇔ u^t = y^t & (u_t)^t' = (y_t)^t'.
PROOF. (i) and (ii) are straightforward and are left as exercises.
(iii) The argument here is not simple. First, we require two new formulas for time sets; namely,
(1) t'' < t + t' ⇔ (t'' < t) ∨ ((t'' = t) ∨ (t < t'' & t'' < t + t'))
(2) (t'' = t) ∨ (t < t'' & t'' < t + t') ⇔ (∃t₁): t₁ < t' & t'' = t + t₁
To show (1), we make use of properties (iii), (viii), and (x) of time sets which were established in Theorem 1.4.2. We have
t'' < t + t'
⇔ (t'' < t ∨ t'' = t ∨ t < t'') & t'' < t + t'
⇔ (t'' < t & t'' < t + t') ∨ (t'' = t & t'' < t + t') ∨ (t < t'' & t'' < t + t')
⇔ (t'' < t) ∨ (t'' = t & t'' < t + t' & t' ≠ 0) ∨ (t < t'' & t'' < t + t')
⇔ (t'' < t) ∨ ((t'' = t) ∨ (t < t'' & t'' < t + t'))
Now, first using (ix) and then (vii) of Theorem 1.4.2, we have
(∃t₁): t₁ < t' & t'' = t + t₁
⇔ (∃t₁): (t₁ = 0 ∨ t₁ ≠ 0) & t'' = t + t₁ & t₁ < t'
⇔ (∃t₁): (t₁ = 0 & t'' = t + t₁ & t₁ < t') ∨ (t₁ ≠ 0 & t'' = t + t₁ & t₁ < t')
⇔ (t'' = t) ∨ (t < t'' & t'' < t + t')
Finally, using (1) and (2),
u^(t+t') = y^(t+t')
⇔ {(t'', t''u) | t'' < t + t'} = {(t'', t''y) | t'' < t + t'}
⇔ {(t'', t''u) | t'' < t} = {(t'', t''y) | t'' < t} & {(t'', t''u) | (t'' = t) ∨ (t < t'' & t'' < t + t')} = {(t'', t''y) | (t'' = t) ∨ (t < t'' & t'' < t + t')}
⇔ u^t = y^t & {(t'', t''u) | (∃t₁): t₁ < t' & t'' = t + t₁} = {(t'', t''y) | (∃t₁): t₁ < t' & t'' = t + t₁}
⇔ u^t = y^t & {(t + t₁, (t + t₁)u) | t₁ < t'} = {(t + t₁, (t + t₁)y) | t₁ < t'}
⇔ u^t = y^t & {(t₁, (t + t₁)u) | t₁ < t'} = {(t₁, (t + t₁)y) | t₁ < t'}   (by left cancellation)
⇔ u^t = y^t & {(t₁, t₁u_t) | t₁ < t'} = {(t₁, t₁y_t) | t₁ < t'}
⇔ u^t = y^t & (u_t)^t' = (y_t)^t'
Thus, (iii) is proved. ∎
† 𝒟v^t is the t-interval of T which, if T = ω (for example), is often denoted [0, t). This notation, although familiar, is somewhat clumsy for our purposes and hence is not adopted. For those used to it, we note v^t = v|[0, t).
4.4.3 Corollary. If T is a time set, then
t'' < (t + t') ⇔ (t'' < t) ∨ ((t'' = t) ∨ (t < t'' & t'' < t + t'))
(t'' = t) ∨ (t < t'' & t'' < t + t') ⇔ (∃t₁): t₁ < t' & t'' = t + t₁
4.4.4 Lemma. If u and y are T-time functions, then the following statements are equivalent:
(i) u = y.
(ii) For all t ∈ T, u^t = y^t and u_t = y_t.
(iii) For some t ∈ T, u^t = y^t and u_t = y_t.
(iv) For all t ∈ T, u^t = y^t.
PROOF. (i) ⇒ (ii) and (ii) ⇒ (iii) are obvious.
(iii) ⇒ (iv). Assume u^t = y^t and u_t = y_t, and choose t' ∈ T. Either t' < t or t' = t or t < t'. If t' < t, then u^t' = y^t' by condition (ii) of Lemma 4.4.2. If t' = t, then trivially u^t' = y^t'. Assume t < t'. For some t'' ≠ 0, t' = t + t''. We have
u_t = y_t ⇒ u^t = y^t & (u_t)^t'' = (y_t)^t'' ⇒ u^(t+t'') = y^(t+t'')   (by (iii) of Lemma 4.4.2)
⇒ u^t' = y^t'
Thus, in every case, u^t' = y^t' and (iv) holds.
(iv) ⇒ (i).† If t ∈ T (t ≠ 0), then t < (t + t) by (x) of Theorem 1.4.2. Thus, for every t ∈ T there exists some t' ∈ T such that t < t'. We are given that for all t ∈ T, u^t = y^t. Therefore, fix t ∈ T and let t' ∈ T satisfy t < t'. Applying (ii) of Lemma 4.4.2, we see that
u^t' = y^t' & t < t' ⇒ tu = ty
Therefore, for all t ∈ T, tu = ty, i.e., u = y. ∎
4.4.5 Lemma. Let u and y be T-time functions. For all t ∈ T,
(uy)^t = u^t y^t
PROOF. We note that 𝒟u^t = 𝒟y^t = 𝒟(uy)^t, and it follows that
𝒟(u^t y^t) = 𝒟u^t ∩ 𝒟y^t = 𝒟(uy)^t
Now, for all t' ∈ 𝒟(uy)^t,
t'(uy)^t = t'(uy) = (t'u, t'y) = (t'u^t, t'y^t) = t'(u^t y^t)
Thus, (uy)^t = u^t y^t. ∎
† We tacitly assume that T contains more than one element here.
The following definition sets out three types of nonanticipatory T-processors.
4.4.6 Definition. If P is a T-processor, then we associate with P the following relations:
sn P = {(u^t, ty) | uy ∈ P & t ∈ T}
na P = {((u^t, tu), ty) | uy ∈ P & t ∈ T}
an P = {(u^t, y^t) | uy ∈ P & t ∈ T}
P is strictly nonanticipatory iff sn P is a function. P is nonanticipatory [almost nonanticipatory] iff na P [an P] is a function.
REMARK. Clearly, sn P and na P (when functions) are auxiliary functions for P. The function an P (like P*), strictly speaking, fails to meet the requirements.
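Definition 4.4.6 can be exercised directly in the finite discrete-time model: build the relations sn P, na P, and an P and test whether each is a function. Segments are represented as tuple prefixes so they can serve as dictionary keys. The helper names and both example processors are ours, for illustration only.

```python
def is_function(pairs):
    """A relation is a function iff no first coordinate maps two ways."""
    seen = {}
    for a, b in pairs:
        if a in seen and seen[a] != b:
            return False
        seen[a] = b
    return True

def seg(u, t):
    """u^t for T = ω, as a hashable tuple prefix."""
    return u[:t]

def sn(P):
    return {(seg(u, t), y[t]) for (u, y) in P for t in range(len(u))}

def na(P):
    return {((seg(u, t), u[t]), y[t]) for (u, y) in P for t in range(len(u))}

def an(P):
    return {(seg(u, t), seg(y, t)) for (u, y) in P for t in range(len(u))}

# Hypothetical unit delay: ty = (t-1)u with 0y = 0. The strict past of the
# input determines the output, so sn should be a function.
delay = {((1, 2, 3), (0, 1, 2)), ((1, 5, 3), (0, 1, 5))}

# Hypothetical identity processor: ty = tu. The current input value is
# needed, so sn fails to be a function, though na is one.
ident = {((1, 2), (1, 2)), ((3, 4), (3, 4))}
```

The two examples separate the first two levels of the hierarchy of Theorem 4.4.8: the delay is strictly nonanticipatory, while the identity is nonanticipatory but not strictly so.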
4.4.7 Lemma. Let P be a T-processor. P is strictly nonanticipatory [almost nonanticipatory] iff sn P: P̃1 → ΩP2 (onto) [an P: P̃1 → P̃2 (onto)].
PROOF. We merely note 𝒟(sn P) = P̃1; ℛ(sn P) = ΩP2; 𝒟(an P) = P̃1; and ℛ(an P) = P̃2. ∎
4.4.8 Theorem. Let P be a T-processor:
(i) If P is strictly nonanticipatory, it is nonanticipatory.
(ii) If P is nonanticipatory, it is almost nonanticipatory.
(iii) If P is almost nonanticipatory, it is functional.
PROOF. (i) If P is strictly nonanticipatory, then for all uy, vz ∈ P and all t ∈ T,
(u^t, tu) = (v^t, tv) ⇒ u^t = v^t ⇒ ty = tz
that is, P is nonanticipatory.
(ii) Suppose P is not almost nonanticipatory. Then there exist elements uy, vz ∈ P and t ∈ T such that u^t = v^t and y^t ≠ z^t. If y^t ≠ z^t, then for some t' ∈ T (t' < t), t'y ≠ t'z. Now, with t' < t, u^t' = v^t' and t'u = t'v by Lemma 4.4.2. Also,
u^t' = v^t' & t'u = t'v & t'y ≠ t'z ⇒ (u^t', t'u) = (v^t', t'v) & t'y ≠ t'z
which proves that P is not nonanticipatory.
(iii) If P is almost nonanticipatory, then, using condition (iv) of Lemma 4.4.4,
uy ∈ P & uz ∈ P ⇒ (∀t): u^t = u^t ⇒ (∀t): y^t = z^t ⇒ y = z
so P is functional. ∎
Thus, we see that our definition of nonanticipatory T-processors establishes a hierarchy of causality types. The first two types of nonanticipatory T-processors give rise to simple sorts of constructive specification.
4.4.9 Theorem. Let P be a T-processor. If P is strictly nonanticipatory [nonanticipatory], then
uy ∈ P ⇔ u ∈ P1 & (∀t): ty = (u^t)(sn P)
[uy ∈ P ⇔ u ∈ P1 & (∀t): ty = (u^t, tu)(na P)]
PROOF. Like Theorem 4.2.7. ∎
REMARK. Thus, to define these two kinds of T-processors, it suffices to specify the input set and the corresponding auxiliary function.
4.4.10 Theorem. Let P be a T-processor. If P is almost nonanticipatory, then
uy ∈ P ⇔ u ∈ P1 & (∀t): y^t = (u^t)(an P)
PROOF. Similar to the proof for Theorem 4.2.7, using (iv) of Lemma 4.4.4. ∎
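Theorem 4.4.9's constructive specification can be sketched for the strictly nonanticipatory case: a processor is recovered from its input set and the auxiliary function sn P, via ty = (u^t)(sn P). In the finite discrete-time model, sn P becomes a dict from input-segment tuples to output values. The names from_sn and the running-sum example are ours.

```python
def from_sn(input_set, sn):
    """Rebuild P = {uy | u in P1 and, for all t, ty = (u^t)(sn P)}.
    sn maps each input segment (a tuple prefix) to an output value."""
    P = set()
    for u in input_set:
        y = tuple(sn[u[:t]] for t in range(len(u)))
        P.add((u, y))
    return P

# Hypothetical auxiliary function: ty is the sum of the strict past of u,
# so the output never looks at the current input value.
sn = {(): 0, (1,): 1, (1, 2): 3, (4,): 4, (4, 2): 6}
P = from_sn({(1, 2, 0), (4, 2, 1)}, sn)
```

Note that sn needs a value at the empty segment (), which fixes the output at time 0 for every input; this mirrors Corollary 4.4.16's observation that a strictly nonanticipatory processor has a single attainable output value at time 0.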
Of the three types of nonanticipatory T-processors, it is the nonanticipatory and strictly nonanticipatory types which play the most important roles in the sequel. We have a number of results relating the various types of nonanticipatory T-processors with other classifications previously introduced.
4.4.11 Lemma. Let P be a T-processor. If P is static, then P is nonanticipatory.
PROOF. Obvious. ∎
4.4.12 Theorem. Let P be a T-processor. If P⁻¹ is free, then P* is a constant function. If P* is a constant function, then P is strictly nonanticipatory.
PROOF. (P⁻¹)* = (P*)⁻¹. Applying Theorem 1.7.11, if P⁻¹ is free, then P* is a constant function. If P* is a constant function, then for some y,
P = {uy | u ∈ P1}
that is, ℛP* = {y}. Then for all u, v ∈ P1 and all t ∈ T,
u^t = v^t ⇒ ty = ty
so P is (trivially) strictly nonanticipatory. ∎
4.4.13 Theorem. Let P be a T-processor. Both P and P⁻¹ are almost nonanticipatory iff an P is a 1:1 function.
PROOF. an(P⁻¹) = (an P)⁻¹. In fact,
an(P⁻¹) = {(y^t, u^t) | yu ∈ P⁻¹ & t ∈ T} = {(y^t, u^t) | uy ∈ P & t ∈ T} = {(s, r) | r(an P)s} = (an P)⁻¹
Now if both P and P⁻¹ are almost nonanticipatory, an P and (an P)⁻¹ are functions, i.e., an P is a 1:1 function. Conversely, if an P is a 1:1 function, then both an P and (an P)⁻¹ are functions, i.e., P and P⁻¹ are both almost nonanticipatory. ∎
On the surface, it would appear that a T-processor could very well be both strictly nonanticipatory and contracting. The following theorem shows that this class of T-processors is very small indeed.
4.4.14 Theorem. If P is a nonempty T-processor, then the following statements are equivalent:
(i) P is contracting and strictly nonanticipatory.
(ii) P1 is contracting and ΩP2 has one and only one element.
(iii) P⁻¹ is free and contracting.
PROOF. If P is contracting, then both P1 and P2 are contracting by Lemma 3.6.5. If P is strictly nonanticipatory, then 0P2 has one and only one element. In fact, for all uy, vz ∈ P,
u^0 = v^0 (= ∅) ⇒ 0y = 0z
Since P2 is contracting, by Lemma 3.6.6 ΩP2 = 0P2. Thus, ΩP2 has one and only one element. (i) then implies (ii).
(ii) ⇒ (iii). We see that (P⁻¹)1 = P2. Thus, Ω(P⁻¹)1 = ΩP2 and Ω(P⁻¹)1 has one and only one element. Therefore, P⁻¹ is free. By Theorem 3.6.11, (P⁻¹)2 is contracting iff P⁻¹ is contracting. But
(P⁻¹)2 = P1
where P1 is given to be contracting. Thus P⁻¹ is contracting.
(iii) ⇒ (i). Let P⁻¹ be free and contracting. Then P is strictly nonanticipatory by Theorem 4.4.12, and P is contracting by Theorem 3.6.9. ∎
REMARK. Thus a T-processor which is both strictly nonanticipatory and contracting has (precisely) one constant output T-time function.
We get an analogous result for the case of strictly nonanticipatory T-processors which are stationary:
4.4.15 Theorem. If P is a nonempty T-processor, then the following statements are equivalent:
(i) P is stationary and strictly nonanticipatory.
(ii) P1 is stationary and ΩP2 has one and only one element.
(iii) P⁻¹ is free and stationary.
PROOF. As an exercise.
REMARK. Classically, processors which are both free and stationary are called "autonomous."
For later use, we record what we showed in the proof of Theorem 4.4.14 about strictly nonanticipatory T-processors:
4.4.16 Corollary. Let P be a nonempty T-processor. If P is strictly nonanticipatory, then 0P2 has one and only one element.
Next we consider the class of contracting nonanticipatory T-processors. Here we get a very interesting result:
4.4.17 Lemma. Let P be a T-processor. If P is nonanticipatory, then P is weakly static.
PROOF. Our observation is the same as that leading to Corollary 4.4.16. For all uy, vz ∈ P,
0u = 0v ⇒ u^0 = v^0 (= ∅) & 0u = 0v ⇒ (u^0, 0u) = (v^0, 0v) ⇒ 0y = 0z
That is, P is weakly static, since 0P is a function. ∎
4.4.18 Theorem. If P is a T-processor, then the following statements are equivalent:
(i) P is contracting and nonanticipatory.
(ii) P is contracting and uniformly static.
(iii) P1 is contracting, and P is uniformly static.
PROOF. (i) ⇒ (ii). If P is nonanticipatory, it is weakly static by Lemma 4.4.17. If P is contracting and weakly static, it is uniformly static by Theorem 4.2.23.
(ii) ⇒ (iii). If P is contracting, then P1 is contracting.
(iii) ⇒ (i). If P is uniformly static and P1 is contracting, then P is contracting by Theorem 4.2.22. If P is uniformly static, it is static and hence nonanticipatory by Lemma 4.4.11. ∎
4.4.19 Corollary. Let P be a T-processor. If P is contracting, then P is nonanticipatory iff P is uniformly static.
Again we get an analogous result in the stationary case:
4.4.20 Theorem. If P is a T-processor, then the following statements are equivalent:
(i) P is stationary and nonanticipatory.
(ii) P is stationary and uniformly static.
(iii) P̄ is stationary, and P is uniformly static.
PROOF.
For the reader.
REMARK. Again contraction and stationarity are shown to be strong properties of T-processors.
As was the case with contracting, expanding, and stationary T-processors, we get a simplification of nonanticipation in the discrete-time case:
4.4.21 Theorem. If P is an ω-processor, then P is nonanticipatory iff P is almost nonanticipatory.
PROOF. If u and v are ω-time functions, then for all t ∈ ω,
(u_t)^1 = {(0, tu)} and (v_t)^1 = {(0, tv)}
Clearly then,
(u_t)^1 = (v_t)^1 ⟺ tu = tv
Hence, applying condition (iii) of Lemma 4.4.2,
u^{t+1} = v^{t+1} ⟺ u^t = v^t & tu = tv
Now choose uy, vz ∈ P and t ∈ ω. If P is almost nonanticipatory,
u^t = v^t & tu = tv ⟹ u^{t+1} = v^{t+1} ⟹ y^{t+1} = z^{t+1} ⟹ ty = tz
that is, P is nonanticipatory. But if P is nonanticipatory, then P is almost nonanticipatory by Theorem 4.4.8. ∎
4.4.22 Corollary. If u and v are ω-time functions, then for all t ∈ ω,
u^{t+1} = v^{t+1} ⟺ u^t = v^t & tu = tv
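The two conditions of Theorem 4.4.21 can be checked mechanically for small discrete-time processors. The following sketch is ours, not the book's: ω-time functions are modeled as finite tuples of one fixed length, a processor as a finite set of input-output pairs, and u[:t] plays the role of the segment u^t.

```python
from itertools import product

def is_almost_nonanticipatory(P):
    # an P is a function: u^t = v^t must force y^t = z^t
    return all(u[:t] != v[:t] or y[:t] == z[:t]
               for (u, y), (v, z) in product(P, P)
               for t in range(len(u) + 1))

def is_nonanticipatory(P):
    # na P is a function: by Corollary 4.4.22, (u^t, tu) = (v^t, tv)
    # amounts to u^(t+1) = v^(t+1); it must force ty = tz
    return all(u[:t + 1] != v[:t + 1] or y[t] == z[t]
               for (u, y), (v, z) in product(P, P)
               for t in range(len(u)))

# a unit delay: output echoes the previous input (fixed initial output 9)
delay = {((a, b), (9, a)) for a in (0, 1) for b in (0, 1)}
# a predictor: output anticipates the next input
pred = {((a, b), (b, 9)) for a in (0, 1) for b in (0, 1)}

print(is_nonanticipatory(delay), is_almost_nonanticipatory(delay))
print(is_nonanticipatory(pred), is_almost_nonanticipatory(pred))
```

As Theorem 4.4.21 predicts, the two tests agree on both examples: the delay passes both, the predictor fails both.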
Finally we have:
4.4.23 Lemma. Let P and Q be T-processors. If P ⊆ Q and Q is strictly nonanticipatory [nonanticipatory; almost nonanticipatory], then P is strictly nonanticipatory [nonanticipatory; almost nonanticipatory].
PROOF. If P ⊆ Q, then
sn P = {(u^t, ty) | uy ∈ P & t ∈ T} ⊆ {(u^t, ty) | uy ∈ Q & t ∈ T} = sn Q
that is, sn P ⊆ sn Q. Similarly, na P ⊆ na Q and an P ⊆ an Q. Now since every subset of a function is a function, if Q is strictly nonanticipatory [nonanticipatory; almost nonanticipatory], then P is. ∎

4.5 NONANTICIPATORY INTERCONNECTIONS
In this section, we examine various interconnections of nonanticipatory T-processors. The results obtained play a significant role in a later section of this chapter.
4.5.1 Theorem. Let P and Q be T-processors:
(i) If P is strictly nonanticipatory and Q is nonanticipatory, then P ∘ Q is strictly nonanticipatory.
(ii) If P is almost nonanticipatory and Q is strictly nonanticipatory, then P ∘ Q is strictly nonanticipatory.
(iii) If both P and Q are nonanticipatory, then P ∘ Q is nonanticipatory.
(iv) If both P and Q are almost nonanticipatory, then P ∘ Q is almost nonanticipatory.
PROOF. Choose uz, vw ∈ P ∘ Q and t ∈ T. There exist elements y and x such that uy, vx ∈ P & yz, xw ∈ Q. We have
(i) u^t = v^t ⟹ y^t = x^t (since P is almost nonanticipatory) & ty = tx ⟹ (y^t, ty) = (x^t, tx) ⟹ tz = tw, so P ∘ Q is strictly nonanticipatory.
(ii) u^t = v^t ⟹ y^t = x^t ⟹ tz = tw, so P ∘ Q is strictly nonanticipatory.
(iii) (u^t, tu) = (v^t, tv) ⟹ y^t = x^t (since P is almost nonanticipatory) & ty = tx ⟹ (y^t, ty) = (x^t, tx) ⟹ tz = tw, so P ∘ Q is nonanticipatory.
(iv) u^t = v^t ⟹ y^t = x^t ⟹ z^t = w^t, so P ∘ Q is almost nonanticipatory. ∎
REMARK. Since each of two T-processors may independently be strictly nonanticipatory, nonanticipatory, or almost nonanticipatory, there are actually nine cases of P ∘ Q in all. The four cases considered in Theorem 4.5.1 subsume the other five. For example, if both P and Q are strictly nonanticipatory, then by (i), P ∘ Q is strictly nonanticipatory (since Q is also then nonanticipatory).
4.5.2 Lemma. If u, y, v, and z are T-time functions, then for all t ∈ T,
(i) (uy)^t = (vz)^t ⟺ u^t = v^t & y^t = z^t
(ii) t(uy) = t(vz) ⟺ tu = tv & ty = tz
PROOF. (ii) follows from (i) by Lemma 4.4.5. For all t' ∈ T (t' < t),
t'((uy)^t) = (t'u^t, t'y^t) and t'((vz)^t) = (t'v^t, t'z^t)
Hence,
t'((uy)^t) = t'((vz)^t) ⟺ t'u^t = t'v^t & t'y^t = t'z^t
and (i) holds. ∎
4.5.3 Theorem. Let P and Q be T-processors. If both P and Q are strictly nonanticipatory [nonanticipatory; almost nonanticipatory], then P // Q is strictly nonanticipatory [nonanticipatory; almost nonanticipatory].
PROOF. We use Lemma 4.5.2 extensively. Choose (uv)(yz), (px)(qw) ∈ P // Q and t ∈ T. Then, uy, pq ∈ P & vz, xw ∈ Q. In order, we have
(uv)^t = (px)^t ⟹ u^t = p^t & v^t = x^t ⟹ ty = tq & tz = tw ⟹ (ty, tz) = (tq, tw) ⟹ t(yz) = t(qw)
that is, the strictly nonanticipatory case. Next,
(uv)^t = (px)^t & t(uv) = t(px) ⟹ u^t = p^t & v^t = x^t & tu = tp & tv = tx ⟹ (u^t, tu) = (p^t, tp) & (v^t, tv) = (x^t, tx) ⟹ ty = tq & tz = tw ⟹ (ty, tz) = (tq, tw) ⟹ t(yz) = t(qw)
that is, the nonanticipatory situation. Finally,
(uv)^t = (px)^t ⟹ u^t = p^t & v^t = x^t ⟹ y^t = q^t & z^t = w^t ⟹ (yz)^t = (qw)^t
which proves the theorem for almost nonanticipation. ∎
REMARK. Again the given conditions subsume all nine possible cases.
Both Theorems 4.5.1 and 4.5.3 admit interesting corollaries connecting the static and nonanticipatory situations of interconnections. We recall that every static T-processor is nonanticipatory. Then,
4.5.4 Corollary. Let P and Q be T-processors:
(i) If P is strictly nonanticipatory and Q is static, then P ∘ Q is strictly nonanticipatory.
(ii) If P is static and Q is strictly nonanticipatory, then P ∘ Q is strictly nonanticipatory.
(iii) If P is static and Q is nonanticipatory, then both P ∘ Q and P // Q are nonanticipatory.
(iv) If P is nonanticipatory and Q is static, then both P ∘ Q and P // Q are nonanticipatory.
(v) If P is static and Q is almost nonanticipatory, then both P ∘ Q and P // Q are almost nonanticipatory.
(vi) If P is almost nonanticipatory and Q is static, then both P ∘ Q and P // Q are almost nonanticipatory.
4.5.5 Theorem. For any T-processor P, :P is nonanticipatory.
PROOF. By Theorem 4.3.1, :P is uniformly static, hence, static. Thus :P is nonanticipatory. ∎
4.5.6 Theorem. Let P be a T-processor. P is strictly nonanticipatory iff P is almost nonanticipatory and :P is strictly nonanticipatory.
PROOF. Let P be strictly nonanticipatory. P is clearly also almost nonanticipatory. Choose (uy)y ∈ :P, (vz)z ∈ :P, and t ∈ T:
(uy)^t = (vz)^t ⟹ u^t = v^t ⟹ ty = tz
that is, :P is strictly nonanticipatory. Conversely, if P is almost nonanticipatory and :P is strictly nonanticipatory, we have
u^t = v^t ⟹ u^t = v^t & y^t = z^t ⟹ (uy)^t = (vz)^t ⟹ ty = tz
which proves P is strictly nonanticipatory. ∎
4.5.7 Theorem. For any T-processor P, P is nonanticipatory [almost nonanticipatory] iff P: is nonanticipatory [almost nonanticipatory].
PROOF. As an exercise.
4.5.8 Theorem. Let P be a T-processor with P⁻¹ a T-processor. If P⁻¹ is weakly static [static; uniformly static; strictly nonanticipatory; nonanticipatory; almost nonanticipatory], then #P is weakly static [static; uniformly static; strictly nonanticipatory; nonanticipatory; almost nonanticipatory].
4.6 TIME-EVOLUTION OF NONANTICIPATORY PROCESSORS

In our consideration of nonanticipatory T-processors, there has been no mention of time-evolution; nor, indeed, have we considered this subject for functional T-processors in the general case. There is a good reason for the omissions. Neither (i) the strictly nonanticipatory T-processors, (ii) the nonanticipatory T-processors, (iii) the almost nonanticipatory T-processors, nor (iv) the functional T-processors constitute time-invariant classes.
The following two theorems graphically demonstrate how severe the situation is. In effect, they show that the largest subclasses of the nonanticipatory T-processors and the strictly nonanticipatory T-processors which are time invariant are, in fact, exceedingly small.
4.6.1 Theorem. Let P be a T-processor. For all t ∈ T, P_t is nonanticipatory iff P is static.
PROOF. By Theorem 4.2.4, if P is static, then for all t ∈ T, P_t is static, hence, nonanticipatory. Conversely, if for all t ∈ T, P_t is nonanticipatory, then (by Lemma 4.4.17) for all t ∈ T, P_t is weakly static. But then by Theorem 4.2.4, P is static. ∎
4.6.2 Theorem. If P is a nonempty T-processor, then the following statements are equivalent:
(i) For all t ∈ T, P_t is strictly nonanticipatory.
(ii) P^2 has one and only one element.
(iii) P* is a constant function.
(iv) For all t ∈ T, (P_t)* is a constant function.
PROOF. (i) ⟹ (ii). Choose uy, vz ∈ P & t ∈ T. If P_t is strictly nonanticipatory,
(u_t)^0 = (v_t)^0 ⟹ 0y_t = 0z_t ⟹ (t + 0)y = (t + 0)z ⟹ ty = tz
That is, y = z. Thus, P^2 has precisely one element.
(ii) ⟺ (iii). We recall that ℛP* = P^2, and it follows that P* is constant iff P^2 has one and only one element.
(ii) ⟹ (iv). If P^2 = {y}, then using Lemma 3.3.9, for all t ∈ T,
ℛ(P_t)* = (P_t)^2 = (P^2)_t = {z_t | z ∈ P^2} = {y_t}
Thus for all t ∈ T, (P_t)* is a constant function.
(iv) ⟹ (i). By Theorem 4.4.12, if (P_t)* is a constant function, then P_t is strictly nonanticipatory. ∎
Thus, any time-invariant class of nonanticipatory T-processors contains only static elements, and any time-invariant class of strictly nonanticipatory T-processors contains only elements
admitting a single output. On the surface, this result might appear to seriously downgrade the importance of such classifications of T-processors. This is not the case. In the next section, we shall examine another strong causality type which turns out to be a generalization of the concept of a strictly nonanticipatory T-processor and which is at the same time a time-invariance property of T-processors. We can at this point justify our concern with nonanticipatory T-processors by the role they play in the development of this new property.†

4.7 TRANSITIONAL PROCESSORS
In this section, we introduce yet another important classification of T-processors, namely, the transitional T-processors. This classification (another type of causality) is perhaps the most important type in our theory. As we shall see, it is the transitional T-processor which plays the essential role in our development of the concept of state, hence, in our exposition of the "state space approach" in systems theory. By way of introduction, we prove in this section that the transitional T-processors are intimately related to the previously introduced concept of a strictly nonanticipatory T-processor. We distinguish (in direct analogy with the static case) three types of transitional T-processors.
4.7.1 Definition. Let P be a T-processor. With each t ∈ T, we associate the following relation:
tr_t P = {(((u_t)^{t'}, ty), (t + t')y) | uy ∈ P & t' ∈ T}
Also we define
tr P = {(((u_t)^{t'}, ty), (t + t')y) | uy ∈ P & t, t' ∈ T}
P is weakly transitional iff tr_0 P is a function. P is transitional iff for all t ∈ T, tr_t P is a function. P is uniformly transitional iff tr P is a function.
† A great deal of light can be shed by introducing a new concept of evolution for nonanticipatory T-processors. This leads to another formulation of the state concept more sophisticated than that which we consider in Chapter 5. We simply consider this work beyond the scope of this book.
REMARK. In general, tr P = ∪ {tr_t P | t ∈ T}, and for all t ∈ T, tr_t P ⊆ tr P. Intuitively, a T-processor P is transitional if the output at any time t' is a function of the output at any previous time t, the time t, and the "intervening" input segment (u_t)^{t'-t}. A uniformly transitional T-processor is the same, except that t'y is a function only of ty and (u_t)^{t'-t} and not of t.
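In discrete time the relation tr_0 P is directly computable. A small sketch (our encoding, not the book's: finite tuples for ω-time functions, u[:t] for the segment u^t) builds tr_0 P and tests whether it is single-valued, i.e., whether P is weakly transitional:

```python
def tr0(P):
    # tr_0 P relates (u^t, 0y) to ty, for uy in P and t in the horizon
    return {((u[:t], y[0]), y[t]) for (u, y) in P for t in range(len(u))}

def is_weakly_transitional(P):
    keys = [k for k, _ in tr0(P)]
    return len(keys) == len(set(keys))

# running sum started at the (free) initial output: ty = 0y + u0 + ... + u(t-1)
acc = {((a, b), (c, c + a)) for a in (0, 1) for b in (0, 1) for c in (0, 5)}
# identity: ty = tu, so ty is not recoverable from (u^t, 0y) alone
ident = {((a, b), (a, b)) for a in (0, 1) for b in (0, 1)}

print(is_weakly_transitional(acc), is_weakly_transitional(ident))
```

The running sum is weakly transitional; the identity processor is not, since its output at t depends on the current input, which lies outside the segment u^t.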
4.7.2 Lemma. Let P be a T-processor. If P is uniformly transitional, P is transitional. If P is transitional, it is weakly transitional. PROOF.
Obvious.
First we shall prove that transitional and uniformly transitional T-processors (in contrast to nonanticipatory ones) form time-invariant classes. The weakly transitional T-processors do not form such a class, but the concept (like that of weakly static T-processors) is still a useful one to have for proofs. Subsequently we will show that there exists a uniform method for the constructive specification of weakly transitional [and, hence, of transitional and uniformly transitional] T-processors.
4.7.3 Lemma. Let P be a T-processor. For all t, t' ∈ T,
tr_{t+t'} P = tr_{t'} P_t
PROOF. We have
tr_{t'} P_t = {((((u_t)_{t'})^{t''}, t'(y_t)), (t' + t'')(y_t)) | uy ∈ P & t'' ∈ T}
and since (u_t)_{t'} = u_{t+t'}, t'(y_t) = (t + t')y, and (t' + t'')(y_t) = ((t + t') + t'')y, this is precisely tr_{t+t'} P. ∎
4.7.4 Theorem. If P is a T-processor, then the following statements are equivalent:
(i) P is transitional.
(ii) For all t ∈ T, P_t is weakly transitional.
(iii) For all t ∈ T, P_t is transitional.
PROOF. (i) ⟹ (ii). If P is transitional, then for all t ∈ T, tr_t P is a function. Now by the above lemma, tr_0 P_t = tr_{t+0} P = tr_t P. Thus, for any t, P_t is weakly transitional, i.e., tr_0 P_t is a function.
(ii) ⟹ (iii). Choose t, t' ∈ T and consider tr_{t'} P_t. By the lemma,
tr_{t'} P_t = tr_{(t+t')+0} P = tr_0 P_{t+t'}
Now tr_{t'} P_t is a function since P_{t+t'} is weakly transitional. Moreover, since t' was arbitrary, P_t is transitional. Finally, since t was arbitrary, for all t ∈ T, P_t is transitional.
(iii) ⟹ (i) is trivial since P_0 = P. ∎
4.7.5 Corollary. The class of transitional T-processors is time invariant.
Next, we develop a similar result for uniformly transitional T-processors.
4.7.6 Lemma. For any T-processor P,
tr_0 P̄ = tr P
PROOF. Since P̄ = ∪ {P_t | t ∈ T}, we have, using Lemma 4.7.3,
tr_0 P̄ = ∪ {tr_0 P_t | t ∈ T} = ∪ {tr_t P | t ∈ T} = tr P ∎
4.7.7 Theorem. If P is a T-processor, then the following statements are equivalent:
(i) P is uniformly transitional.
(ii) P̄ is weakly transitional.
(iii) For all t ∈ T, P_t is uniformly transitional.
PROOF. (iii) ⟹ (i) is trivial. (i) ⟹ (ii) is immediate using Lemma 4.7.6.
(ii) ⟹ (iii). If P̄ is weakly transitional, it is uniformly transitional, i.e.,
tr P̄ = tr_0 P̄ (by Lemma 4.7.6 and Theorem 3.4.2)
is a function. Now, for any T-processors Q and R, if Q ⊆ R, then tr Q ⊆ tr R. Here, for any t ∈ T, P_t ⊆ P̄. Since tr P̄ is a function, tr P_t is a function. Thus for any t ∈ T, P_t is uniformly transitional. ∎
4.7.8 Corollary. The class of uniformly transitional T-processors is time invariant.
As we showed in the proof of Theorem 4.7.7, the following is apparent:
4.7.9 Corollary. Let P and Q be T-processors. If P ⊆ Q, then for all t ∈ T, tr_t P ⊆ tr_t Q and tr P ⊆ tr Q.
We consider now the concept of a constructive specification for weakly transitional T-processors.
4.7.10 Theorem. If P is a weakly transitional T-processor, then
uy ∈ P ⟺ (u, 0y) ∈ dom P & (∀t): ty = (u^t, 0y)(tr_0 P)
where
dom P = {(u, 0y) | uy ∈ P}
PROOF. If uy ∈ P, then clearly the given conditions hold. Conversely, suppose (u, 0y) ∈ dom P and ty = (u^t, 0y)(tr_0 P) for all t ∈ T. Since (u, 0y) ∈ dom P, there exists some uz ∈ P such that 0z = 0y. Since P is weakly transitional, for all t ∈ T,
ty = (u^t, 0y)(tr_0 P) = (u^t, 0z)(tr_0 P) = tz
that is, y = z. Thus uy ∈ P. ∎
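Theorem 4.7.10 can be read as a membership test: dom P fixes the admissible (input, initial output) pairs, and tr_0 P generates the rest of the output. A sketch in a finite discrete-time encoding of ours (tuples for time functions, u[:t] for u^t; the example processor is illustrative, not from the text):

```python
from itertools import product

def specification(P):
    dom = {(u, y[0]) for (u, y) in P}
    # P weakly transitional, so this relation has no conflicting keys
    tr0 = dict({((u[:t], y[0]), y[t]) for (u, y) in P for t in range(len(u))})
    return dom, tr0

def member(dom, tr0, u, y):
    # uy in P  iff  (u, 0y) in dom P  and  ty = (u^t, 0y)(tr_0 P) for all t
    return (u, y[0]) in dom and all(
        tr0.get((u[:t], y[0])) == y[t] for t in range(len(y)))

# a weakly transitional processor: running sum started at the initial output
P = {((a, b), (c, c + a)) for a in (0, 1) for b in (0, 1) for c in (0, 5)}
dom, rel = specification(P)
agree = all(member(dom, rel, u, y) == ((u, y) in P)
            for u in product((0, 1), repeat=2)
            for y in product((0, 1, 5, 6), repeat=2))
print(agree)
```

Over the whole candidate universe, the reconstructed membership test agrees exactly with P, as the theorem asserts.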
REMARK. Thus to constructively specify a weakly transitional T-processor, it suffices to give tr_0 P together with the relation dom P introduced in Theorem 4.7.10. We immediately have the following constructive specifications for transitional and uniformly transitional T-processors:
4.7.11 Theorem. If P is a transitional [uniformly transitional] T-processor, then
uy ∈ P ⟺ (u, 0y) ∈ dom P & (∀t): ty = (u^t, 0y)(tr_0 P)
[uy ∈ P ⟺ (u, 0y) ∈ dom P & (∀t): ty = (u^t, 0y)(tr P)]
PROOF. Since transitional T-processors are weakly transitional, the first condition is immediate from Theorem 4.7.10. To see the second, we note that tr_0 P ⊆ tr P. It follows (as we noted in the mathematical preliminaries) that
tr_0 P = (tr P) | 𝒟(tr_0 P)
and the given condition follows. ∎
Our next task is to show that the various types of transitional T-processors have an intimate relationship with the strictly nonanticipatory T-processors. We require a new notation:
4.7.12 Definition. Let P be a T-processor. With each b ∈ 0P^2, we associate the set
Pb = {uy | uy ∈ P & 0y = b}
REMARK. Pb is "what P looks like given that the initial value of the output is b." Pb ⊆ P; hence, Pb is itself a T-processor. Also,
P = ∪ {Pb | b ∈ 0P^2}
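In a finite discrete-time encoding (tuples for time functions; our convention, not the book's), Definition 4.7.12 and the decomposition in the remark read:

```python
def P_b(P, b):
    # Pb: the elements of P whose output has initial value b
    return {(u, y) for (u, y) in P if y[0] == b}

P = {((0,), (3,)), ((1,), (3,)), ((0,), (4,))}
initial = {y[0] for (_, y) in P}                      # the set 0P^2
recombined = set().union(*(P_b(P, b) for b in initial))
print(recombined == P)
```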
4.7.13 Theorem. Let P be a nonempty T-processor. P is strictly nonanticipatory iff P is weakly transitional and 0P^2 has one and only one element.
PROOF. Let P be strictly nonanticipatory. Choose uy, vz ∈ P and t ∈ T. Then
u^t = v^t & 0y = 0z ⟹ u^t = v^t ⟹ ty = tz
so tr_0 P is a function. Thus, P is weakly transitional. Now 0P^2 has one and only one element by Corollary 4.4.16. Conversely, if tr_0 P is a function and 0P^2 has one element, for all uy, vz ∈ P and all t ∈ T,
u^t = v^t ⟹ u^t = v^t & 0y = 0z ⟹ ty = tz
that is, P is strictly nonanticipatory. ∎
4.7.14 Theorem. Let P be a T-processor. P is weakly transitional iff, for all b ∈ 0P^2, Pb is strictly nonanticipatory.
PROOF. Suppose that for all b ∈ 0P^2, Pb is strictly nonanticipatory. If uy, vz ∈ P and t ∈ T,
0y = 0z ⟹ uy ∈ P(0y) & vz ∈ P(0y)
Moreover then,
u^t = v^t & 0y = 0z ⟹ ty = tz
since P(0y) is strictly nonanticipatory. Thus, tr_0 P is a function and P is weakly transitional. Conversely, let tr_0 P be a function and choose b ∈ 0P^2. If uy, vz ∈ Pb, then 0y = b = 0z, so for all t ∈ T,
u^t = v^t ⟹ u^t = v^t & 0y = 0z ⟹ ty = tz
that is, Pb is strictly nonanticipatory. ∎
4.7.15 Corollary. Let P be a T-processor. If P is weakly transitional,
(s, b)(tr_0 P) = s(sn(Pb))
whenever either side is defined.
4.7.16 Theorem. Let P be a T-processor. P is transitional iff for all t ∈ T and all b ∈ tP^2, (P_t)b is strictly nonanticipatory.
PROOF. By Lemmas 3.3.9 and 3.3.14,
tP^2 = (t + 0)P^2 = 0(P^2)_t = 0(P_t)^2
Using Theorems 4.7.4 and 4.7.14, then,
P is transitional ⟺ (∀t): P_t is weakly transitional
⟺ (∀t)(∀b): b ∈ 0(P_t)^2 ⟹ (P_t)b is strictly nonanticipatory
⟺ (∀t)(∀b): b ∈ tP^2 ⟹ (P_t)b is strictly nonanticipatory ∎
4.7.17 Theorem. Let P be a T-processor. P is uniformly transitional iff for all b ∈ 0P̄^2, (P̄)b is strictly nonanticipatory.
PROOF. Similar to the proof for Theorem 4.7.16. ∎
REMARK. We note in passing that it would be possible to set forth six more types of transitional T-processors in terms of the concepts of nonanticipatory and almost nonanticipatory T-processors. For example, we could say that P is quasitransitional iff for all t ∈ T and all b ∈ tP^2, (P_t)b is nonanticipatory. Our experience is that these additional types do not justify the added complexity they introduce.
4.7.18 Theorem. Let P be a nonempty T-processor. If P is transitional [uniformly transitional], then P is strictly nonanticipatory iff 0P^2 has one and only one element.
PROOF. As an exercise.
4.7.19 Theorem. Let P be a T-processor. If P is contracting, then the following statements are equivalent:
(i) P is uniformly transitional.
(ii) P is transitional.
(iii) P is weakly transitional.
PROOF. (i) ⟹ (ii) and (ii) ⟹ (iii) are obvious. By Lemma 4.7.6, tr P = tr_0 P̄. If P is contracting, then P̄ = P; hence, tr P = tr_0 P. Thus, P is uniformly transitional if it is weakly transitional. ∎
We note that Theorem 4.7.19 is the same strong result obtained in Theorem 4.2.23 for static T-processors that are contracting. Finally, we have a result (quite independent of the notion of transitional T-processors) based on the notation Pb introduced above. This result is particularly interesting in the transitional case.
4.7.20 Lemma. Let P be a T-processor. For all b ∈ 0P^2, (Pb)⁻¹ is nonanticipatory iff P⁻¹ is nonanticipatory.
PROOF. We note that (Pb)⁻¹ ⊆ P⁻¹ for any b. Thus if P⁻¹ is nonanticipatory, then for all b ∈ 0P^2, (Pb)⁻¹ is nonanticipatory. Conversely, suppose that (Pb)⁻¹ is nonanticipatory for all b. Choose yu, zv ∈ P⁻¹ and t ∈ T. Clearly uy, vz ∈ P. For any t ∈ T,
(y^t, ty) = (z^t, tz) ⟹ y^t = z^t & ty = tz ⟹ 0y = 0z
Thus, if 0y = b (say), then uy, vz ∈ Pb. Now, since (Pb)⁻¹ is nonanticipatory,
(y^t, ty) = (z^t, tz) ⟹ tu = tv
which proves that P⁻¹ is nonanticipatory. ∎
4.7.21 Theorem. Let P be a T-processor. For all t ∈ T and all b ∈ tP^2, ((P_t)b)⁻¹ is nonanticipatory iff P⁻¹ is static.
PROOF. Use Lemma 4.7.20 and Theorem 4.6.1. ∎
4.7.22 Theorem. Let P be a T-processor. For all b ∈ 0P̄^2, ((P̄)b)⁻¹ is nonanticipatory iff P⁻¹ is uniformly static.
PROOF. Since 0P̄^2 = 0(P̄)^2, the given condition holds iff (P̄)⁻¹ is nonanticipatory, by Lemma 4.7.20. By Lemma 3.4.8, (P̄)⁻¹ = (P⁻¹)‾, so the latter is nonanticipatory, hence, by Lemma 4.4.17, weakly static. But (P⁻¹)‾ is contracting, so if (P⁻¹)‾ is weakly static, it is uniformly static by Theorem 4.2.23, and hence P⁻¹ is uniformly static. Conversely, if P⁻¹ is uniformly static, (P⁻¹)‾ is uniformly static, hence, nonanticipatory. Finally again, (P̄)⁻¹ = (P⁻¹)‾. ∎
4.7.23 Corollary. Let P be a T-processor. P̄ is nonanticipatory iff P is uniformly static.

4.8 TRANSITIONAL INTERCONNECTIONS
In this section, various interconnections involving transitional T-processors are investigated. We begin by noting that the series interconnection of two weakly transitional [transitional; uniformly transitional] T-processors is not in general weakly transitional [transitional; uniformly transitional]. As we show, it would appear that the "natural" way to serially interconnect two transitional T-processors P and Q is via the operation P ∘ Q: rather than P ∘ Q. That is, under P ∘ Q:, there is closure.†
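The difference between the two serial interconnections is easy to see concretely. In a finite discrete-time encoding of ours (tuples for time functions), series composition P ∘ Q discards the intermediate output, while the cascade P ∘ Q: keeps it as one coordinate of a paired output:

```python
def series(P, Q):
    # P o Q: feed the output of P into Q as input
    return {(u, z) for (u, y) in P for (y2, z) in Q if y == y2}

def cascade(P, Q):
    # P o Q:: same wiring, but the output is the paired time function yz
    return {(u, tuple(zip(y, z))) for (u, y) in P for (y2, z) in Q if y == y2}

P = {((0, 1), (1, 2))}
Q = {((1, 2), (10, 20))}
print(series(P, Q))
print(cascade(P, Q))
```

Retaining the intermediate output is exactly what makes the cascade closed under the transitional properties, as the results below show.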
4.8.1 Lemma. Let P and Q be T-processors such that P ⊆ Q. If Q is weakly transitional [transitional; uniformly transitional], then P is weakly transitional [transitional; uniformly transitional].
PROOF. Immediate from Corollary 4.7.9. ∎
4.8.2 Lemma. Let P and Q be T-processors. If both P and Q are weakly transitional, then P ∘ Q: is weakly transitional.
PROOF. We note that P ∘ Q: = {u(yz) | uy ∈ P & yz ∈ Q}. Choose u(yz), v(xw) ∈ P ∘ Q:. Then uy, vx ∈ P and yz, xw ∈ Q. If both P and Q are weakly transitional, then for all t ∈ T,
u^t = v^t & 0(yz) = 0(xw)
⟹ u^t = v^t & (0y, 0z) = (0x, 0w)
⟹ (u^t = v^t & 0y = 0x) & 0z = 0w
⟹ ty = tx & (∀t'): (t' < t ⟹ t'y = t'x) & 0z = 0w (P weakly transitional)
⟹ ty = tx & (y^t = x^t & 0z = 0w) (by 4.4.2)
⟹ ty = tx & tz = tw (Q weakly transitional)
⟹ (ty, tz) = (tx, tw) ⟹ t(yz) = t(xw)
Thus, P ∘ Q: is weakly transitional. ∎
4.8.3 Theorem. Let P and Q be T-processors. If both P and Q are transitional [uniformly transitional], then P ∘ Q: is transitional [uniformly transitional].

† The operation P ∘ Q: on T-processors is sometimes called "cascade" interconnection in systems theory.
PROOF. Using Theorems 3.5.1 and 3.5.4,
(P ∘ Q:)‾ ⊆ P̄ ∘ (Q:)‾ = P̄ ∘ (Q̄):
and, for all t ∈ T,
(P ∘ Q:)_t ⊆ P_t ∘ (Q:)_t = P_t ∘ (Q_t):
If both P and Q are transitional, then for all t ∈ T, both P_t and Q_t are weakly transitional, and by Lemma 4.8.2, P_t ∘ (Q_t): is weakly transitional. But then (P ∘ Q:)_t is weakly transitional by Lemma 4.8.1. Thus for all t ∈ T, (P ∘ Q:)_t is weakly transitional, i.e., P ∘ Q: is transitional. Similarly, if both P and Q are uniformly transitional, then P̄ and Q̄ are weakly transitional. Then P̄ ∘ (Q̄): is weakly transitional, and we see that (P ∘ Q:)‾ is weakly transitional by Lemma 4.8.1. Finally, P ∘ Q: is uniformly transitional. ∎
4.8.4 Lemma. Let P and Q be T-processors. If both P and Q are weakly transitional, then P // Q is weakly transitional.
PROOF. Choose (uv)(yz), (px)(qw) ∈ P // Q and t ∈ T. We see uy, pq ∈ P and vz, xw ∈ Q. Thus,
(uv)^t = (px)^t & 0(yz) = 0(qw)
⟹ u^t = p^t & v^t = x^t & (0y, 0z) = (0q, 0w)
⟹ (u^t = p^t & 0y = 0q) & (v^t = x^t & 0z = 0w)
⟹ ty = tq & tz = tw
⟹ (ty, tz) = (tq, tw) ⟹ t(yz) = t(qw)
and P // Q is weakly transitional. ∎
4.8.5 Theorem. Let P and Q be T-processors. If both P and Q are transitional [uniformly transitional], then P // Q is transitional [uniformly transitional].
PROOF. Similar to the proof for Theorem 4.8.3, using Lemmas 3.5.2 and 4.8.4 and Theorems 4.7.4 and 4.7.7. We leave the details to the reader. ∎
The following theorem characterizes some important special cases of the series interconnection.
4.8.6 Theorem. Let P and Q be T-processors. If P is almost nonanticipatory [static; uniformly static] and Q is weakly transitional [transitional; uniformly transitional], then P ∘ Q is weakly transitional [transitional; uniformly transitional].
PROOF. Let P be almost nonanticipatory and Q be weakly transitional. If uz, vw ∈ P ∘ Q, then for some y and x, uy, vx ∈ P & yz, xw ∈ Q. For any t ∈ T,
u^t = v^t & 0z = 0w ⟹ y^t = x^t & 0z = 0w (P is almost nonanticipatory)
⟹ tz = tw (Q is weakly transitional)
Thus, P ∘ Q is weakly transitional. Suppose P is static and Q is transitional. For any t ∈ T, P_t is static, hence, nonanticipatory; thus, it is almost nonanticipatory. Q_t is weakly transitional. Hence P_t ∘ Q_t is weakly transitional, and since (P ∘ Q)_t ⊆ P_t ∘ Q_t, (P ∘ Q)_t is weakly transitional for all t. In other words, P ∘ Q is transitional. Similarly, suppose P is uniformly static and Q is uniformly transitional. By Lemma 3.4.3, P̄ is uniformly static, hence, almost nonanticipatory. Now, Q̄ is weakly transitional, so P̄ ∘ Q̄ is weakly transitional. Moreover then, (P ∘ Q)‾ is weakly transitional, so P ∘ Q is uniformly transitional. ∎
4.8.7 Theorem. Let P and Q be T-processors. If P is weakly transitional [transitional; uniformly transitional] and Q is bistatic, then P ∘ Q is weakly transitional [transitional; uniformly transitional].
PROOF. For the reader.

4.9 DISCRETE-TIME TRANSITIONAL PROCESSORS

In this section, we consider the concept of transitional T-processors in the special case of discrete time. Previously, we indicated that in the discrete-time case, we often obtain great simplification in various concepts. The present instance is no exception. In fact, transitional T-processors in the discrete-time case are much simpler than in the general case.
4.9.1 Definition. Let P be an ω-processor. With each t ∈ ω, we associate the relation
it_t P = {((tu, ty), (t + 1)y) | uy ∈ P}
Also, we define
it P = {((tu, ty), (t + 1)y) | uy ∈ P & t ∈ ω}
REMARK. In general, it P = ∪ {it_t P | t ∈ ω} and, for all t ∈ ω, it_t P ⊆ it P.
4.9.2 Lemma. Let P be an ω-processor. For any t ∈ ω, if tr_t P is a function, then it_t P is a function.
PROOF. Recall that
tr_t P = {(((u_t)^{t'}, ty), (t + t')y) | uy ∈ P & t' ∈ ω}
With P discrete time, we see that (u_t)^1 = {(0, tu)}. Thus, if tr_t P is a function, for all uy, vz ∈ P,
tu = tv & ty = tz ⟹ (0, tu) = (0, tv) & ty = tz ⟹ {(0, tu)} = {(0, tv)} & ty = tz ⟹ ((u_t)^1, ty) = ((v_t)^1, tz) ⟹ (t + 1)y = (t + 1)z
that is, it_t P is a function. ∎
4.9.3 Lemma. Let P be an ω-processor. If, for all t ∈ ω, it_t P is a function, then for all t ∈ ω, tr_t P is a function.
PROOF. Suppose that tr_t P is not a function. Then there exist elements uy, vz ∈ P, some t ∈ ω, and some t' ∈ ω (t' ≠ 0) such that
(i) (u_t)^{t'} = (v_t)^{t'};
(ii) ty = tz;
(iii) (t + t')y ≠ (t + t')z.
Clearly, there exists some least integer k, with 0 ≤ k < t', such that
((t + k) + 1)y ≠ ((t + k) + 1)z
and, by the minimality of k, (t + k)y = (t + k)z. Moreover, since k < t',
k(u_t) = k(v_t) ⟹ (t + k)u = (t + k)v
Thus, we have
(i) (t + k)u = (t + k)v;
(ii) (t + k)y = (t + k)z;
(iii) ((t + k) + 1)y ≠ ((t + k) + 1)z.
This proves that it_{t+k} P is not a function. ∎
We come now to the first of two results showing how the concept of a transitional processor simplifies in the discrete-time case.
4.9.4 Theorem. If P is an ω-processor, then P is transitional iff for all t ∈ ω, it_t P is a function.
PROOF. By definition, P is transitional iff for all t ∈ ω, tr_t P is a function. By Lemmas 4.9.2 and 4.9.3, for all t ∈ ω, tr_t P is a function iff, for all t ∈ ω, it_t P is a function. ∎
4.9.5 Lemma. If P is an ω-processor, then
it P = it_0 P̄
PROOF. As an exercise.
Lemma 4.9.5 gives our second result:
4.9.6 Theorem. If P is an ω-processor, then P is uniformly transitional iff it P is a function.
PROOF. If P is uniformly transitional, then P̄ is weakly transitional by Theorem 4.7.7, i.e., tr_0 P̄ is a function. By Lemma 4.9.2, it_0 P̄ is a function. By Lemma 4.9.5, then, it P is a function. Conversely, assume that tr P is not a function. Then, proceeding similarly to the proof of Lemma 4.9.3, it can be shown that it P is not a function. ∎
Thus we see that an ω-processor P is transitional iff it_t P is a function for all t ∈ ω, and it is uniformly transitional iff it P is a function. We note that it P and it_t P are auxiliary functions for ω-processors. We conclude this section by developing techniques of constructive specification for transitional ω-processors using these new auxiliary functions.
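The auxiliary relations it_t P and it P are directly computable in a finite encoding (ours, not the book's: tuples of one fixed length for ω-time functions). The sketch below tests the single-valuedness conditions of Theorems 4.9.4 and 4.9.6 on two illustrative processors:

```python
def it_t(P, t):
    # it_t P relates (tu, ty) to (t+1)y
    return {((u[t], y[t]), y[t + 1]) for (u, y) in P}

def it(P):
    # it P: union of the it_t P over the horizon
    n = len(next(iter(P))[0])
    return set().union(*(it_t(P, t) for t in range(n - 1)))

def single_valued(rel):
    keys = [k for k, _ in rel]
    return len(keys) == len(set(keys))

# running sum: (t+1)y = ty + tu, the same rule at every time
acc = {((a, b, c), (0, a, a + b)) for a in (0, 1) for b in (0, 1) for c in (0, 1)}
# time-dependent rule: (t+1)y = tu + t
shift = {((a, b, c), (0, a, b + 1)) for a in (0, 1) for b in (0, 1) for c in (0, 1)}

print(single_valued(it(acc)))
print(single_valued(it_t(shift, 0)), single_valued(it_t(shift, 1)), single_valued(it(shift)))
```

The running sum is uniformly transitional; the time-dependent processor is transitional (each it_t P is a function) but not uniformly so (it P is not).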
4.9.7 Theorem. Let P be an ω-processor. If P is transitional, then
uy ∈ P ⟺ (u, 0y) ∈ dom P & (∀t): (t + 1)y = (tu, ty)(it_t P)
PROOF. If uy ∈ P, then the given conditions clearly hold. Conversely, assume (u, 0y) ∈ dom P and (t + 1)y = (tu, ty)(it_t P) for all t ∈ ω. Since (u, 0y) ∈ dom P, there exists some uz ∈ P such that 0y = 0z. Since P is transitional, for all t ∈ ω,
(t + 1)z = (tu, tz)(it_t P)
Finally, by a simple induction (which we omit), we see that for all t ∈ ω, ty = tz, i.e., y = z. Thus, uy ∈ P. ∎
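Theorem 4.9.7 is precisely the state-machine reading of a transitional ω-processor: from (u, 0y), the whole output is generated by forward iteration through the functions it_t P. A sketch in our tuple encoding, with an illustrative running-sum transition rule (not anything from the text):

```python
def run(step, u, y0):
    # generate the output: (t+1)y = (tu, ty)(it_t P), starting from 0y
    y = [y0]
    for t in range(len(u) - 1):
        y.append(step(t, u[t], y[t]))
    return tuple(y)

# running-sum rule: it_t P sends (a, s) to s + a, for every t
print(run(lambda t, a, s: s + a, (1, 2, 3, 4), 0))
```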
4.9.8 Theorem. Let P be an ω-processor. If P is uniformly transitional, then
uy ∈ P ⟺ (u, 0y) ∈ dom P & (∀t): (t + 1)y = (tu, ty)(it P)
PROOF. Similar to the proof for Theorem 4.9.7.

4.10 NONDIVERGING PROCESSES
Our discussion of causality is almost finished; however, there is one final item well worth pursuing here. A moment's reflection by the reader will reveal that every causality type we have introduced here was exclusively for T-processors. There is at least one important causality concept which is basically for T-processes that are not T-processors. This is the concept of a dynamical "system" or "motion." One of the important generalizations of the classical concept of a dynamical motion has already been introduced in this book previously. It is the concept of an action of a time set
(Section 3.2). In the present instance, we are interested in a fact made clear back in Section 3.2 (Theorem 3.2.4), namely, that T-processes may be constructively defined by the use of a T-action. In particular, if A is such an action, then 𝒯A (the set of transitions) is the associated T-process. Here we shall undertake a very general study of the relation between A and 𝒯A, i.e., between dynamical motion and dynamical process. We begin by introducing a new property of T-processes called "nondivergence." Our investigation of this concept (which is a causality type for general processes) leads us to a remarkably general formulation of the relationship between a dynamical motion and a dynamical process.
4.10.1 Definition. If P is a T-process, then we associate with P the following relations:
div P = {(tp, p_t) | p ∈ P & t ∈ T}
mo P = {((t', tp), (t + t')p) | p ∈ P & t, t' ∈ T}
P is nondiverging iff div P is a function.
REMARK. div P is a function iff tp = t'q ⟹ p_t = q_{t'}. We interpret this condition to say "there is one and only one T-time function emerging to the right from each point b ∈ ℛP."
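The divergence relation is easy to inspect for finite trajectory sets. The sketch below uses our usual tuple encoding; note that with truncated trajectories, equal values at different times get tails of different lengths, so the check is only an illustration of the definition, not a faithful test on infinite time sets:

```python
def div(P):
    # div P relates the value tp to the tail p_t
    return {(p[t], p[t:]) for p in P for t in range(len(p))}

def is_nondiverging(P):
    keys = [k for k, _ in div(P)]
    return len(keys) == len(set(keys))

print(is_nondiverging({(0, 1, 2), (5, 6, 7)}))   # one future per point
print(is_nondiverging({(0, 1, 2), (0, 3, 4)}))   # two futures emerge from 0
```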
There are a number of ways that a nondiverging T-process may be characterized. A very simple one is the following:
4.10.2 Theorem. If P is a T-process, then the following statements are equivalent:
(i) P is nondiverging.
(ii) div P: ℛP → P (1:1 onto).
(iii) div P is a 1:1 function.
PROOF. As an exercise.
Thus for a nondiverging T-process, the sets ℛP and P are in 1:1 correspondence. Next we have two very interesting characterization theorems. First (and most important) it turns out that the relation
mo P of Definition 4.10.1 is a function precisely when div P is. Moreover, whenever mo P is a function, it is an action of T.
4.10.3 Theorem. If P is a T-process, then the following statements are equivalent:

(i) P is nondiverging.
(ii) mo P is an action of (the monoid) T.
(iii) mo P: T × 𝒜P → 𝒜P.
(iv) mo P is a function.

PROOF. (i) ⇒ (ii). First, since div P is a function, mo P is a function, i.e.,

(t'', tp) = (t*, t'q) ⇒ tp = t'q & t'' = t* ⇒ p_t = q_t' & t'' = t* ⇒ t''p_t = t*q_t' ⇒ (t + t'')p = (t' + t*)q

Now, clearly, 𝒟(mo P) = T × 𝒜P and ℛ(mo P) = 𝒜P, and it follows that mo P: T × 𝒜P → 𝒜P (onto). Next, we see

(0, tp)(mo P) = (t + 0)p = tp

(t' + t'', tp)(mo P) = (t + (t' + t''))p = ((t + t') + t'')p = (t'', (t + t')p)(mo P) = (t'', (t', tp)(mo P))(mo P)

Hence, mo P is an action of T. (ii) ⇒ (iii) and (iii) ⇒ (iv) are trivial.

(iv) ⇒ (i): tp = t'q ⇒ (∀t''): (t'', tp) = (t'', t'q) ⇒ (∀t''): (t + t'')p = (t' + t'')q ⇒ (∀t''): t''p_t = t''q_t' ⇒ p_t = q_t'

that is, div P is a function. ∎

The next characterization consists of showing that there exists an intimate relationship between the notions of a nondiverging T-process and a T-processor which is both free and uniformly transitional. There are several relevant theorems:
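Theorem 4.10.3(ii) can likewise be checked mechanically on a finite periodic model (an illustration of ours, with all times reduced mod the period N; the helper names are ours): for a nondiverging process, the table mo P satisfies the two action laws (0, b)(mo P) = b and (t'', (t', b)(mo P))(mo P) = (t' + t'', b)(mo P).

```python
def mo_fun(P, N):
    """mo P as a dict (t', tp) -> (t + t')p, for period-N tuples in P.
    Meaningful only when P is nondiverging (otherwise entries collide)."""
    mo = {}
    for p in P:
        for t in range(N):
            for t2 in range(N):
                mo[(t2, p[t])] = p[(t + t2) % N]
    return mo

def is_action(mo, N):
    """Identity and composition laws of an action of T = omega, mod N."""
    identity = all(mo[(0, b)] == b for (_, b) in mo)
    composition = all(mo[((t1 + t2) % N, b)] == mo[(t2, mo[(t1, b)])]
                      for (t1, b) in mo for t2 in range(N))
    return identity and composition

mo = mo_fun({(0, 1, 2)}, 3)    # a nondiverging process: the mod-3 counter
assert is_action(mo, 3)
```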
4.10.4 Theorem. Let P be a T-process. P is nondiverging iff there exists some T-processor Q which is free and uniformly transitional, and such that P = Q².
PROOF. If P is nondiverging, then for any element b, the T-processor

Q = {bp | p ∈ P}

is uniformly transitional, i.e.,

((b_t)^t', ty) = ((b_t'')^t*, t''x) ⇒ (b_t)^t' = (b_t'')^t* & ty = t''x ⇒ t' = t* & ty = t''x ⇒ (t + t')y = (t'' + t*)x

the last step since mo P is a function. But Q = {b} × P, so by Theorem 1.7.10, Q is free. Finally, P = Q².

Conversely, let P = Q², where Q is free and uniformly transitional. Clearly, by Theorem 1.7.10, Q = {b} × P for some element b. Choose p, q ∈ P and t, t' ∈ T. Then bp, bq ∈ Q and since Q is uniformly transitional,

tp = t'q ⇒ (∀t''): (b_t)^t'' = (b_t')^t'' & tp = t'q
⇒ (∀t''): (t + t'')p = (t' + t'')q
⇒ (∀t''): t''p_t = t''q_t'   (using Lemma 3.4.11)
⇒ p_t = q_t'

that is, div P is a function. Thus, P is nondiverging. ∎
Conceptually, Theorem 4.10.4 is very important. It shows that nondiverging T-processes are simply the output sets of free, uniformly transitional T-processors. Another way to say the same thing is the following Theorem 4.10.6. We require a lemma:

4.10.5 Lemma. Let P and Q be isomorphic T-processes. If P is nondiverging, then Q is nondiverging.

PROOF. We are given a function h: 𝒜P → 𝒜Q (1:1 onto) such that Q = {p ∘ h | p ∈ P}. If P is nondiverging, div P is a function, so for all q, r ∈ Q and all t, t' ∈ T,

tq = t'r ⇒ (tq)h⁻¹ = (t'r)h⁻¹
⇒ t(q ∘ h⁻¹) = t'(r ∘ h⁻¹)
⇒ (q ∘ h⁻¹)_t = (r ∘ h⁻¹)_t'
⇒ (∀t''): t''(q ∘ h⁻¹)_t = t''(r ∘ h⁻¹)_t'
⇒ (∀t''): ((t + t'')q)h⁻¹ = ((t' + t'')r)h⁻¹
⇒ (∀t''): (t + t'')q = (t' + t'')r
⇒ (∀t''): t''q_t = t''r_t'
⇒ q_t = r_t'

that is, div Q is a function. ∎
4.10.6 Theorem. Let P be a T-process. P is nondiverging iff P is isomorphic to some free, uniformly transitional T-processor.
PROOF. If P is nondiverging there exists some free, uniformly transitional T-processor Q such that P = Q². By Theorem 1.8.6, since Q is free, Q and P are isomorphic. Conversely, let P be isomorphic to Q, where Q is free and uniformly transitional. By Theorem 4.10.4, Q² is nondiverging. By Theorem 1.8.6, Q and Q² are isomorphic, and it follows that P and Q² are isomorphic. But then by Lemma 4.10.5, P is nondiverging. ∎

REMARK. Again, the study of nondiverging T-processes is simply the study of uniformly transitional T-processors which are free.
4.10.7 Lemma. Let P and Q be T-processes. If P is nondiverging and Q ⊆ P, then Q is nondiverging.

PROOF.
Obvious.
4.10.8 Theorem. If P is a T-process, then the following statements are equivalent:

(i) P is nondiverging.
(ii) P̄ is nondiverging.
(iii) For all t ∈ T, P_t is nondiverging.

PROOF.
As an exercise.
4.10.9 Corollary. The class of nondiverging T-processes is time invariant.
Finally, we see that there is a simple scheme for the constructive specification of nondiverging T-processes:
4.10.10 Theorem. If P is a nondiverging T-process, then

p ∈ P ⇔ 0p ∈ 0P & (∀t): tp = (t, 0p)(mo P)

PROOF. Obvious. ∎
Thus, to constructively specify a nondiverging T-process P, it suffices to give 0P together with the function mo P. Again, mo P is an action of T.
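Theorem 4.10.10's constructive scheme is concrete enough to run. In this small sketch (our own periodic finite model with times reduced mod N; `reconstruct` is our name, not the book's), a nondiverging process is rebuilt from its set 0P of initial values together with the function mo P, via p ∈ P ⇔ 0p ∈ 0P & (∀t): tp = (t, 0p)(mo P).

```python
def reconstruct(initials, mo, N):
    """Rebuild each period-N time function from its initial value via mo P."""
    return {tuple(mo[(t, b)] for t in range(N)) for b in initials}

# mo P for the mod-3 counter: (t, b) |-> (b + t) % 3, an action of T.
mo = {(t, b): (b + t) % 3 for t in range(3) for b in range(3)}
P = reconstruct({0, 1}, mo, 3)
assert P == {(0, 1, 2), (1, 2, 0)}
```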
4.11 GENERALIZED MOTIONS
In Section 4.10, we have laid the foundations for a very general theory of dynamical motions. In this section, we set forth the concept of a T-motion as an action of T on a set. We then define a T-flow to be a T-process associated with a T-motion via its set of transitions. We then obtain a characterization theorem for T-flows. This will serve to interrelate our theory with the classical theory of dynamical motions [63-67].

4.11.1 Definition. A set A is a T-motion iff A is an action of T. A set P is a T-flow iff for some T-motion A, P = FA.

REMARK. If A is a T-motion, FA = {A^b | b ∈ ℛA}. By Theorem 3.2.4, every T-flow is a T-process.
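As a concrete instance of Definition 4.11.1 (ours, not the book's; the names `action` and `flow` are assumptions for illustration): the mod-4 counter (t, b) ↦ (b + t) % 4 is an action of T = ω on {0, 1, 2, 3}, hence a T-motion, and its flow FA = {A^b | b ∈ ℛA} consists of the motions through the points of the carrier, shown here over one period.

```python
def action(t, b):
    """An action of T = omega on {0, 1, 2, 3}: the mod-4 counter."""
    return (b + t) % 4

def flow(act, carrier, N):
    """FA = {A^b | b in the carrier}, with t(A^b) = (t, b)A, over one period."""
    return {tuple(act(t, b) for t in range(N)) for b in carrier}

FA = flow(action, range(4), 4)
assert FA == {(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)}
# As Theorem 4.11.3 below predicts, FA is contracting: every rotation
# (translate) of a member is again a member.
assert all(p[1:] + p[:1] in FA for p in FA)
```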
4.11.2 Lemma. Let P be a T-process. If P is nondiverging, then mo P is a T-motion and F(mo P) = P̄.

PROOF. Let A = mo P. If P is nondiverging, A is a T-motion by Theorem 4.10.3. If p ∈ P, t ∈ T, and b = tp, then for all t' ∈ T,

t'A^b = (t', tp)A = (t + t')p = t'p_t

that is, A^b = p_t, the given condition. ∎
4.11.3 Theorem. A T-flow is a T-process which is contracting and nondiverging.

PROOF. If P is a contracting T-process, P̄ = P. If P is nondiverging, mo P is a T-motion, and by Lemma 4.11.2, F(mo P) = P̄ = P. Hence, P is a T-flow.

If P is a T-flow, then for some T-motion A, P = FA. P is clearly a T-process. Now P is nondiverging, i.e., for all b, c ∈ ℛA and all t, t' ∈ T,

tA^b = t'A^c ⇒ (t, b)A = (t', c)A
⇒ (∀t''): (t'', (t, b)A)A = (t'', (t', c)A)A
⇒ (∀t''): (t + t'', b)A = (t' + t'', c)A
⇒ (∀t''): (t + t'')A^b = (t' + t'')A^c
⇒ (∀t''): t''(A^b)_t = t''(A^c)_t'
⇒ (A^b)_t = (A^c)_t'

Next, choose b ∈ ℛA and t ∈ T. If c = (t, b)A, then

t'A^c = (t', c)A = (t', (t, b)A)A = (t + t', b)A = (t + t')A^b = t'(A^b)_t

that is, A^c = (A^b)_t. Therefore, if A^b ∈ FA and t ∈ T, then (A^b)_t ∈ FA. In other words, P is contracting. ∎
Next, we can show that the relationship between a T-motion and the corresponding T-flow is 1 : 1.
4.11.4 Theorem. If A is a T-motion and P is a T-flow, then

P = FA ⇔ A = mo P

PROOF. By Theorem 4.11.3, P̄ = P. If A = mo P, then using Lemma 4.11.2,

P = P̄ = F(mo P) = FA

Conversely, let P = FA, where A is a T-motion. Clearly, we have

mo P = mo(FA) = {((t', tA^b), (t + t')A^b) | b ∈ ℛA & t, t' ∈ T}
= {((t', (t, b)A), (t + t', b)A) | b ∈ ℛA & t, t' ∈ T}
= {((t', (t, b)A), (t', (t, b)A)A) | b ∈ ℛA & t, t' ∈ T}
= {((t', c), (t', c)A) | c ∈ ℛA & t' ∈ T}
= A

Thus, A = mo P. ∎
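The round trip of Theorem 4.11.4 can be checked on a finite example (our sketch, with times reduced mod N; the variable names are ours): build the flow FA of the mod-3 counter action A, then read off mo P from the flow; over one period the result is A again.

```python
N = 3
A = {(t, b): (b + t) % N for t in range(N) for b in range(N)}   # a T-motion
FA = {tuple(A[(t, b)] for t in range(N)) for b in range(N)}     # its T-flow

mo = {}                       # mo(FA): (t', tp) -> (t + t')p, mod N
for p in FA:
    for t in range(N):
        for t2 in range(N):
            mo[(t2, p[t])] = p[(t + t2) % N]

assert mo == A                # A = mo(FA), as in Theorem 4.11.4
```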
T h e following corollary is immediate.
4.11.5 Corollary. If ℳ is the class of T-motions, ℱ is the class of T-flows, and F = {(A, FA) | A ∈ ℳ}, then F: ℳ → ℱ (1:1 onto). Moreover, F⁻¹ = {(P, mo P) | P ∈ ℱ}.
Corollary 4.11.5 makes it possible for us to move back and forth conceptually from T-motions to T-flows, i.e., we see that the notions are duals. Moreover, by Theorem 4.11.3, we see that T-flows are simply nondiverging T-processes which are contracting. The reader will recall that in Chapter 3 we showed the attainable space of a contracting T-process, in fact, "contracts" as a function of time, i.e., we have condition (iii) of Lemma 3.6.6. The following theorem shows us that for nondiverging T-processes, the converse is valid, i.e., if the attainable space is "contracting" then the process is.
4.11.6 Theorem. Let P be a T-process. If P is nondiverging, then the following statements are equivalent:

(i) P is contracting.
(ii) For all t, t' ∈ T, t < t' ⇒ t'P ⊆ tP.
(iii) For all t ∈ T, tP ⊆ 0P.
(iv) 𝒜P = 0P.

PROOF. (i) ⇒ (ii) is by Lemma 3.6.6. (ii) ⇒ (iii) and (iii) ⇒ (iv) are obvious. (iv) ⇒ (i): Choose p ∈ P and t ∈ T. If 𝒜P = 0P, there exists some q ∈ P such that 0q = tp. Since P is nondiverging,

0q = tp ⇒ q_0 = p_t ⇒ q = p_t

Therefore, p_t ∈ P. But, then, by Theorem 3.6.2, P is contracting. ∎
4.11.7 Corollary. A T-flow is a nondiverging T-process P such that 𝒜P = 0P.
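Corollary 4.11.7 gives a finite test for being a flow, which can be run in a periodic sketch (ours, not the book's; `is_flow` is our name): check that div P is single-valued and that the attainable space 𝒜P equals the set 0P of initial values.

```python
def is_flow(P):
    """Nondiverging and AP = 0P, per Corollary 4.11.7 (period-N tuples)."""
    div = {(p[t], p[t:] + p[:t]) for p in P for t in range(len(p))}
    nondiv = len({v for v, _ in div}) == len(div)    # div P is a function
    AP = {p[t] for p in P for t in range(len(p))}    # attainable space
    OP = {p[0] for p in P}                           # initial values
    return nondiv and AP == OP

assert is_flow({(0, 1, 2), (1, 2, 0), (2, 0, 1)})   # the mod-3 counter flow
assert not is_flow({(0, 1, 2)})        # nondiverging but not contracting
assert not is_flow({(0, 1, 0, 2)})     # diverging
```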
Exercises

4-1. Let P be a free T-processor. Prove that P is weakly static [static; uniformly static] iff 0P² [P²; 𝒜P²] has one and only one element.

4-2. Prove that a T-processor P is uniformly static iff P̄ is.
4-3.
Prove Theorem 4.2.8.
4-4.
Let P be a T-processor. Prove, if P⁻¹ is free, that P is uniformly static.
4-5.
Prove Theorem 4.2.11.
4-6. Let Q be a T-process, let f be a function, and consider the T-processor P such that

uy ∈ P ⇔ u ∈ Q & (∀t): ty = (tu)f   (t ∈ T)

a. Give necessary and sufficient conditions that P¹ = Q.
b. Give the precise relationship between 𝒜P and f.
c. Prove that P is uniformly static.
d. Give necessary and sufficient conditions that 𝒜P = f.
4-7.
Prove conditions (i) and (ii) of Lemma 4.4.2.
4-8. Prove that a T-processor P is nonanticipatory iff the relation

seg P = {((u^t, tu), (y^t, ty)) | uy ∈ P & t ∈ T}

is a function.

4-9. Let P be a nonempty T-processor. Prove that (P̄)⁻¹ is strictly nonanticipatory iff P is free.
4-10. A T-processor P is initially unique iff, for all b ∈ 0P², P^b is almost nonanticipatory. Prove that P is weakly transitional iff P is initially unique and :P is weakly transitional. (Hint: Consider Theorem 4.5.6.)

4-11. A T-processor P is weakly inductive iff the relation

ind P = {((uy)^t, ty) | uy ∈ P & t ∈ T & t ≠ 0}

is a function. Prove that every weakly transitional T-processor is weakly inductive, hence, that weakly inductive T-processors are a generalization of weakly transitional ones.

4-12. Using the definition of Exercise 4-11 as a guide, set forth definitions for inductive and uniformly inductive T-processors. Prove that a T-processor is uniformly inductive iff P̄ is weakly inductive.
4-13. Let P be a nonempty T-processor. Prove that :P is strictly nonanticipatory iff P is weakly inductive and 0P¹ has one and only one element.

4-14. Prove that the class of inductive T-processors is time invariant (using your definition of Exercise 4-12).

4-15. A T-processor P has short memory iff the relation

mem P = {(((uy)_t)^t', (t + t')y) | uy ∈ P & t, t' ∈ T & t' ≠ 0}

is a function. Using your definition of Exercise 4-12, prove that a T-processor P is uniformly inductive iff :P has short memory.

4-16. Prove for any uniformly transitional T-processor P that :P has short memory, hence, that every uniformly transitional T-processor is the closed loop of some T-processor that is uniformly static and has short memory.

4-17. Prove Theorem 4.8.5.

4-18. Prove Theorem 4.8.7.

4-19. Discuss the constructive specification of weakly inductive T-processors (defined in Exercise 4-11).

4-20. Prove or give a counterexample: If P is a uniformly transitional T-processor and Q is a uniformly static T-processor, then P ∘ Q is uniformly transitional.

4-21. Prove or give a counterexample: If a T-processor P is weakly transitional, then P: is weakly transitional.

4-22. Prove that every Moore sequential machine is strictly nonanticipatory.

4-23. Prove that every pushdown list is strictly nonanticipatory.

4-24. Prove that every finite automaton is uniformly transitional.

4-25. Prove that a nonempty ω-processor P is a finite automaton iff: (i) 𝒜P¹ and 𝒜P² are finite; (ii) P¹ = (𝒜P¹)^ω; (iii) P is contracting; and (iv) P is uniformly transitional.
4-26. Using the result of Exercise 4-25, prove that a nonempty ω-processor P is a finite automaton iff: (i) 𝒜P is finite; (ii) P¹ = (𝒜P¹)^ω; (iii) P is contracting; and (iv) P is weakly transitional.

4-27. Prove that any nondiverging T-process P for which 𝒜P is a finite set is itself a finite set.

4-28. Prove that every T-flow is isomorphic to some T-processor which is free, contracting, and weakly transitional.

4-29. Prove that an ω-process P is nondiverging iff the relation

nex P = {(tp, (t + 1)p) | p ∈ P & t ∈ ω}

is a function.

4-30. Prove that an ω-process P which is nondiverging is an ω-flow iff 1P ⊆ 0P.

4-31. A T-time function p is ultimately periodic iff tp = t'p ⇒ p_t = p_t'. Prove that a T-time function is ultimately periodic iff it is an element of some T-flow.

4-32. If p is an ultimately periodic T-time function, prove that either p is 1:1 or, for some t ∈ T, p_t is weakly periodic (as defined in Exercise 3-7).

4-33. Prove that if A is a T-motion, then the following statements are equivalent: (i) A has an initial element; (ii) For some p ∈ FA, FA = {p}; (iii) For some p ∈ FA, ℛp = ℛA.

4-34. Let P be a T-flow. Prove that if p ∈ P satisfies ℛp = 0P, then P = {p}.

4-35. Prove that an ω-processor P has short memory iff the relation

lag P = {(tu, (t + 1)y) | uy ∈ P & t ∈ ω}

is a function. (Short memory is defined in Exercise 4-15.)

4-36. For any ω-processor P, prove that P is uniformly transitional iff :P has short memory.
4-37. Without mathematics, discuss the proposition: Every uniformly transitional T-processor has "internal feedback."

4-38. Without mathematics, discuss the proposition: Every uniformly transitional ω-processor may be constructed using unit delays, uniformly static ω-processors, and feedback.

4-39. Develop Exercise 4-38 mathematically.
State Decompositions
5.1 INTRODUCTION
Although we have now dealt at length with a number of questions, the reader familiar with modern systems theory will no doubt feel the relationship between that theory and the present study a bit strained. One reason for this is our persistence in keeping our considerations very general. Another reason exists however. Quite simply, it is our failure to mention thus far the concept of "state" for T-processes. Modern systems theory has been developed almost exclusively on what is commonly called "the state space approach." The problem we encounter here is the fact that the state space approach is not easily explained or defended in a few words; nor is it easy to explain the relationship between the state space point of view and the notion of "an input-output relation parametrized by (evolving in) time." Indeed, it was Zadeh [38, 40] who pointed out the size of the problem by observing the state space of a processor to be an even more complex parametrization of it than time itself. It suffices to say there is no simple phrase to describe the state space point of view of a processor unless we accept that a processor is an input-output relation parametrized by (evolving in) state. In other words, one has to take the "state" of a processor as a primitive concept.
One way to approach the "state" in our theory of T-processes is to consider certain types of decompositions for T-processors. The advantage of such an approach is its extreme generality. It is also a rather down to earth (even "brute force") approach. We think such an approach gives some insight and is therefore justified here. The principal concepts required are the notions of static and transitional T-processors. Given these, what remains to be done can be disposed of in rather short order. Some interesting results can be obtained. We establish the universality of the state space approach by proving an existence theorem for state decompositions for T-processors. By addressing ourselves to constructive specifications based on state space paraphernalia, we show how state equations arise in our theory. We show how state decompositions can be combined in interconnections, and we obtain some impressive results about the time-evolution of state decompositions. Finally, the existence of some special types of state decompositions for contracting, nonanticipatory, etc., T-processors is proved.

5.2 STATE EQUATIONS
If P is a T-processor, then an ordered pair (R, Q) of T-processors is an exact series decomposition of P iff P = R ∘ Q, where R² = Q¹. If (R, Q) is an exact series decomposition of P, then P¹ = R¹ and P² = Q². The concept of state for T-processors is naturally arrived at through the concept of certain exact series decompositions of a special type:

5.2.1 Definition. Let P, R, and Q be T-processors. The ordered pair (R, Q) is a direct state decomposition [an indirect state decomposition] of P iff

(i) R is transitional.
(ii) Q is static.
(iii) R² = Q¹ [(R:)² = Q¹].
(iv) P = R ∘ Q [P = R: ∘ Q].

A state decomposition for P is either a direct state decomposition for P or an indirect state decomposition for P. A state decomposition (R, Q) is uniform iff R is uniformly transitional and Q is uniformly static. It is small iff Q is bifunctional.

REMARK. We shall focus attention particularly on the concept of uniform direct state decompositions in the following. Such state decompositions would appear to be the most universal type studied in the several branches of systems theory. However, as the following theorem shows, the indirect state decomposition is slightly more general than the direct one.
5.2.2 Theorem. Let P be a T-processor. If (R, Q) is a direct state decomposition [a uniform direct state decomposition] for P, then (R, :R ∘ Q) is an indirect state decomposition [a uniform indirect state decomposition] for P.

PROOF. By Theorem 4.3.1, :R is uniformly static. Therefore, :R ∘ Q is static if Q is static and uniformly static if Q is uniformly static. Also, since R² = Q¹,

(R:)² = R = (:R)¹ = (:R ∘ Q)¹

where we have used Lemmas 2.4.2 and 2.2.4. Thus, (R:)² = (:R ∘ Q)¹. Now, using Lemma 2.4.7,

P = R ∘ Q = (R: ∘ :R) ∘ Q = R: ∘ (:R ∘ Q)

Thus if (R, Q) is a direct state decomposition for P, then (R, :R ∘ Q) is an indirect state decomposition for P, and if (R, Q) is uniform, so is (R, :R ∘ Q). ∎

Associated with any state decomposition of a T-processor is a number of extremely important set-theoretic objects all well known in systems theory. In identifying these objects, we account for all of the usual "trappings" of the state space approach:
5.2.3 Definition. Let P, R, and Q be T-processors. If (R, Q) is a state decomposition for P, then

(i) 0R² is a set of initial states for P.
(ii) 𝒜R² is an (attainable) set of states for P.

If (R, Q) is uniform, then

(iii) R² is a set of state trajectories for P.
(iv) 𝒜R² is an (attainable) state space for P.
(v) tr R is a state transition function for P.
(vi) 𝒜Q is an output function for P.

If P, R, and Q are discrete-time and (R, Q) is uniform, then

(vii) it R is a next-state function for P.

T-processors admit canonical types of constructive specification with respect to given state decompositions. Such constructive specifications are called "state equations" in systems theory. The standard practice is to constructively define various processors to be studied by the use of various types of state equations.†

5.2.4 Theorem. Let P be a T-processor. If (R, Q) is a direct state decomposition of P, then

uy ∈ P ⇔ (∃x)(∀t): (u, 0x) ∈ dom R & tx = (u^t, 0x)(tr_t R) & ty = (tx)(tQ)

PROOF. We are given P = R ∘ Q where R is transitional, Q is static, and R² = Q¹. By Theorem 4.2.7, xy ∈ Q ⇔ x ∈ Q¹ & (∀t): ty = (tx)(tQ), and by Theorem 4.7.10, ux ∈ R ⇔ (u, 0x) ∈ dom R & (∀t): tx = (u^t, 0x)(tr_t R). Now since R² = Q¹, ux ∈ R ⇒ x ∈ R² ⇒ x ∈ Q¹. Thus,

uy ∈ P ⇔ (∃x): ux ∈ R & xy ∈ Q
⇔ (∃x): ux ∈ R & x ∈ Q¹ & (∀t): ty = (tx)(tQ)
⇔ (∃x): ux ∈ R & (∀t): ty = (tx)(tQ)
⇔ (∃x): (u, 0x) ∈ dom R & (∀t): tx = (u^t, 0x)(tr_t R) & (∀t): ty = (tx)(tQ)
⇔ (∃x)(∀t): (u, 0x) ∈ dom R & tx = (u^t, 0x)(tr_t R) & ty = (tx)(tQ)

that is, the given condition. ∎
† However, the T-processor itself is essentially never formalized, and the state equations themselves are loosely referred to as "the system."
5.2.5 Theorem. Let P be a T-processor. If (R, Q) is an indirect state decomposition for P, then

uy ∈ P ⇔ (∃x)(∀t): (u, 0x) ∈ dom R & tx = (u^t, 0x)(tr_t R) & ty = (tu, tx)(tQ)

PROOF. Very similar to the proof for Theorem 5.2.4. ∎

5.2.6 Theorem. Let P be a T-processor. If (R, Q) is a uniform direct state decomposition for P, then

uy ∈ P ⇔ (∃x)(∀t): (u, 0x) ∈ dom R & tx = (u^t, 0x)(tr R) & ty = (tx)(𝒜Q)

PROOF. Similar to the proof for Theorem 5.2.4, but using Theorems 4.2.11 and 4.7.11. ∎

5.2.7 Theorem. Let P be a T-processor. If (R, Q) is a uniform indirect state decomposition for P, then

uy ∈ P ⇔ (∃x)(∀t): (u, 0x) ∈ dom R & tx = (u^t, 0x)(tr R) & ty = (tu, tx)(𝒜Q)

PROOF. Similar to the proof for Theorem 5.2.6. ∎
Theorems 5.2.5-5.2.7 show that the difference between the direct and indirect state decompositions is in the character of the output functions. T-processors with uniform direct state decompositions lend themselves to another, and particularly simple, form of constructive specification:

5.2.8 Theorem. Let P be a T-processor. If (R, Q) is a uniform direct state decomposition of P, then

uy ∈ P ⇔ (∃b)(∀t): (u, b) ∈ dom R & ty = (u^t, b)(tr R ∘ 𝒜Q)

Thus, to specify constructively a T-processor P from a given uniform direct state decomposition (R, Q), it suffices to give dom R and the composition function (tr R ∘ 𝒜Q). Finally, we get the usual simplification of concepts in the discrete-time case:
5.2.9 Theorem. Let P be an ω-processor. If (R, Q) is a uniform direct state decomposition for P, then

uy ∈ P ⇔ (∃x)(∀t): (u, 0x) ∈ dom R & (t + 1)x = (tu, tx)(it R) & ty = (tx)(𝒜Q)

PROOF. Use Theorem 4.9.8. ∎
5.2.10 Theorem. Let P be an ω-processor. If (R, Q) is a uniform indirect state decomposition for P, then

uy ∈ P ⇔ (∃x)(∀t): (u, 0x) ∈ dom R & (t + 1)x = (tu, tx)(it R) & ty = (tu, tx)(𝒜Q)

REMARK. The reader acquainted with the theory of sequential machines will recognize the distinction between the constructive specifications of Theorems 5.2.9 and 5.2.10 as that which distinguishes Moore and Mealy machines.
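The two discrete-time state-equation forms can be exercised directly. In the sketch below (our own; `step` stands in for the next-state function it R and `out` for the output function 𝒜Q — both names are assumptions), the response to an input sequence is generated by (t + 1)x = (tu, tx)(it R), with ty = (tx)(𝒜Q) in the Moore form of Theorem 5.2.9 and ty = (tu, tx)(𝒜Q) in the Mealy form of Theorem 5.2.10.

```python
def respond(u, x0, step, out, mealy=False):
    """Run the state equations from initial state x0 over the input tuple u."""
    x, y = x0, []
    for ut in u:
        y.append(out(ut, x) if mealy else out(x))   # ty from state (and input)
        x = step(ut, x)                             # (t + 1)x = (tu, tx)(it R)
    return tuple(y)

# A two-state parity machine: the state is the running parity of the input.
step = lambda ut, x: (x + ut) % 2
moore = respond((1, 1, 0, 1), 0, step, out=lambda x: x)
mealy = respond((1, 1, 0, 1), 0, step,
                out=lambda ut, x: (x + ut) % 2, mealy=True)
assert moore == (0, 1, 0, 0)   # parity before each input is absorbed
assert mealy == (1, 0, 0, 1)   # parity after each input is absorbed
```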
5.3 EXISTENCE OF STATE SPACES

In this section, we prove the universal existence of uniform direct state decompositions of T-processors and hence the existence of state spaces for them. This is important as it provides some justification for the use of the state space approach. Before we take this up, however, we need to devote some attention to clarification of the concept of state spaces for a T-processor. One way to do this is to introduce the concept of a "strong" homomorphism between T-processors.
5.3.1 Definition. Let P and Q be T-processors. Q is a strong image of P iff there exists a homomorphism h from P to Q² such that

Q = {u(uy ∘ h) | uy ∈ P}

5.3.2 Lemma. Let P and Q be T-processors. If Q is a strong image of P, then both Q and Q² are images of P, and Q¹ = P¹.
PROOF. Let h: 𝒜P → 𝒜Q² be the given (strong) homomorphism. Clearly, Q² is an image of P. Moreover,

Q¹ = {u | (∃x): ux ∈ Q} = {u | (∃y): uy ∈ P} = P¹

Now consider the set g = {((a, b), (a, (a, b)h)) | (a, b) ∈ 𝒜P}. As the reader can show, g: 𝒜P → 𝒜Q. For all uy ∈ P & t ∈ T,

t(uy ∘ g) = (t(uy))g = (tu, ty)g = (tu, (tu, ty)h) = (tu, (t(uy))h) = (tu, t(uy ∘ h)) = t(u(uy ∘ h))

that is, uy ∘ g = u(uy ∘ h). Thus,

Q = {u(uy ∘ h) | uy ∈ P} = {uy ∘ g | uy ∈ P} = {p ∘ g | p ∈ P}

and Q is an image of P. ∎
5.3.3 Lemma. Let P be a T-processor. A set X is a state space for P iff there exists a uniform indirect state decomposition (R, Q) of P such that X = 𝒜R².

PROOF. By definition, X is a state space for P iff there exists a uniform state decomposition (R, Q) for P such that X = 𝒜R². A state decomposition is either direct or indirect. If (R, Q) is a uniform direct state decomposition for P such that X = 𝒜R², then by Theorem 5.2.2, (R, :R ∘ Q) is a uniform indirect state decomposition of P such that X = 𝒜R². ∎
5.3.4 Lemma. Let P, R, and Q be T-processors with Q uniformly static. If P = R ∘ Q and R² = Q¹, then 𝒜Q is a homomorphism from R² to P².

PROOF. By Theorem 4.2.17, 𝒜Q is a homomorphism from Q¹ to Q². If R² = Q¹, then 𝒜Q is a homomorphism from R² to P², since

R² = Q¹ ⇒ P² = (R ∘ Q)² = Q²  ∎
Using the foregoing lemmas, we may now characterize state spaces for T-processors in terms of the concept of a strong homomorphism of T-processors:
5.3.5 Theorem. Let P be a T-processor. A set X is a state space for P iff P is a strong image of some uniformly transitional T-processor R such that X = 𝒜R².

PROOF. Let X be a state space for P. By Lemma 5.3.3, there exists a uniform indirect state decomposition (R, Q) for P such that X = 𝒜R². R is uniformly transitional, Q is uniformly static, and (R:)² = Q¹, with P = R: ∘ Q. Moreover, R = (R:)². Thus, by Lemma 5.3.4, 𝒜Q is a homomorphism from R to P². Since Q¹ = R,

P = {uy | uy ∈ R: ∘ Q} = {uy | (∃x): u(ux) ∈ R: & (ux)y ∈ Q}
= {uy | (∃x): ux ∈ R & ux ∈ Q¹ & (ux) ∘ 𝒜Q = y}
= {uy | (∃x): ux ∈ R & (ux) ∘ 𝒜Q = y}
= {u(ux ∘ 𝒜Q) | ux ∈ R}

Therefore, P is a strong image of R.

Conversely, let P be a strong image of a uniformly transitional T-processor R such that X = 𝒜R². Let h: 𝒜R → 𝒜P² be the given homomorphism with P = {u(ux ∘ h) | ux ∈ R}. By Theorem 4.2.19, the set

Q = {(ux)(ux ∘ h) | ux ∈ R}

is a uniformly static T-processor and 𝒜Q = h. Moreover,

(R:)² = R = Q¹

Using Theorem 4.2.11,

P = {u(ux ∘ h) | ux ∈ R} = {uy | (∃x): ux ∈ R & ux ∘ h = y}
= {uy | (∃x): ux ∈ R & ux ∈ Q¹ & (ux) ∘ 𝒜Q = y}
= {uy | (∃x): ux ∈ R & (ux)y ∈ Q}
= {uy | (∃x): u(ux) ∈ R: & (ux)y ∈ Q}
= R: ∘ Q

Thus (R, Q) is a uniform indirect state decomposition for P. Now since X = 𝒜R², X is a state space for P. ∎
It is common in systems theory to refer to the states of a T-processor as "internal states." By our definition, a set X is a set of states [a state space] for P iff X = 𝒜R² for some state decomposition [uniform state decomposition] (R, Q) of P. Thus the states of P do appear "internal" to P. In the same sense, the states of a transitional T-processor are "external."
5.3.6 Theorem. Let P be a T-processor. P is uniformly transitional iff there exists a uniform direct state decomposition (R, Q) of P such that Q is bistatic.

PROOF. If P is uniformly transitional, then the ordered pair (P, I(P²)) is a uniform state decomposition for P; in fact, P is uniformly transitional, I(P²) is uniformly static, P² = (I(P²))¹, and P = P ∘ I(P²). Now, by Lemma 4.2.15, I(P²) is bistatic.

Conversely, let (R, Q) be a uniform direct state decomposition for P such that Q is bistatic. Clearly, 𝒜Q: 𝒜Q¹ → 𝒜Q² (1:1 onto). Choose uy, vz ∈ P & t, t', t'' ∈ T. Since P = R ∘ Q, there exist x and w such that ux, vw ∈ R & xy, wz ∈ Q.

(u_t)^t'' = (v_t')^t'' & ty = t'z
⇒ (u_t)^t'' = (v_t')^t'' & tx = t'w   (since (𝒜Q)⁻¹ is a function)
⇒ (t + t'')x = (t' + t'')w   (since R is uniformly transitional)
⇒ (t + t'')y = (t' + t'')z   (since 𝒜Q is a function)

that is, P is uniformly transitional. ∎
5.3.7 Corollary. Let P be a T-processor. If P is uniformly transitional, then (P, I(P²)) is a uniform direct state decomposition for P, and 𝒜P² is a state space for P.

REMARK. We note in passing that if (R, Q) is a uniform state decomposition for P and Q is bistatic, then Q is also bifunctional. It follows by definition that (R, Q) is a small state decomposition for P.
We turn now to the question of the existence of state spaces for T-processors. 5.3.8 Theorem. Every T-processor has a direct state decomposition which is both small and uniform.
PROOF. Let P be a T-processor. With each y ∈ P², associate the set

Fut y = {(t, y_t) | t ∈ T}

Fut y is clearly a T-time function. Since it is, the following are T-processors:

R = {u(Fut y) | uy ∈ P}
Q = {(Fut y)y | y ∈ P²}

Now R is uniformly transitional, i.e.,

tr R = {(((u_t)^t', t(Fut y)), (t + t')(Fut y)) | uy ∈ P & t, t' ∈ T}
     = {(((u_t)^t', y_t), y_{t+t'}) | uy ∈ P & t, t' ∈ T}

and tr R is a function, since

(u_t)^t'' = (v_t')^t'' & y_t = z_t' ⇒ (y_t)_{t''} = (z_t')_{t''} ⇒ y_{t+t''} = z_{t'+t''}

Q is uniformly static, i.e.,

𝒜Q = {(t(Fut y), ty) | y ∈ P² & t ∈ T} = {(y_t, ty) | y ∈ P² & t ∈ T}

and 𝒜Q is a function, since

y_t = z_t' ⇒ 0y_t = 0z_t' ⇒ (t + 0)y = (t' + 0)z ⇒ ty = t'z

Next,

R² = {Fut y | (∃u): uy ∈ P} = {Fut y | y ∈ P²} = Q¹

Moreover, since

Fut y = Fut z ⇒ 0(Fut y) = 0(Fut z) ⇒ y_0 = z_0 ⇒ y = z

we have

R ∘ Q = {uy | (∃z): uz ∈ R & zy ∈ Q}
= {uy | (∃x): ux ∈ P & y ∈ P² & Fut x = Fut y}
= {uy | (∃x): ux ∈ P & y ∈ P² & x = y}
= {uy | uy ∈ P & y ∈ P²} = {uy | uy ∈ P} = P

Thus (R, Q) is a direct state decomposition for P which is uniform. Finally, Q is bifunctional, i.e., for all y, z ∈ P²,

y = z ⇔ (∀t): y_t = z_t ⇔ Fut y = Fut z

so (R, Q) is also small. ∎
REMARK. We note that in the above construction,

R² = {Fut y | y ∈ P²}

𝒜R² = {t(Fut y) | y ∈ P² & t ∈ T} = {y_t | y ∈ P² & t ∈ T} = (P²)‾

0R² = {0(Fut y) | y ∈ P²} = {y_0 | y ∈ P²} = {y | y ∈ P²} = P²

where (P²)‾ is the translation closure of P².
5.3.10 Lemma. Let P be a T-processor. If both ( R , Q ) and ( S , U ) are uniform direct state decompositions of P which are small, then there exists a function f : R2 S2(1 : 1 onto). --f
PROOF.
Q* ( U , ) - l is (as the reader can show) such a function. 0
REMARK. T h u s the two sets of state trajectories R2 and S2are in 1 : 1 correspondence.
5.3.11 Theorem. Let P be a T-processor. If (R, Q) is a uniform direct state decomposition of P, then there exists a function f: R² → J (onto), where J = {Fut y | y ∈ P²}.

PROOF. By Theorem 4.2.8, Q*: Q¹ → Q² (onto). Here Q¹ = R² and Q² = P². Thus, Q*: R² → P² (onto). Now consider the relation

K = {(y, Fut y) | y ∈ P²}

Clearly, K: P² → J (onto). Thus, if f = Q* ∘ K, we see that f: R² → J (onto). ∎
5.4 STATES, INTERCONNECTIONS, AND TIME-EVOLUTION

In this section, we consider the time-evolution of state decompositions of a T-processor. Also, we consider briefly how state decompositions get combined under various interconnections of T-processors.
5.4.1 Lemma. Let P and Q be T-processors. If P ∘ Q is proper and Q is static, then for all t ∈ T, (P ∘ Q)_t = P_t ∘ Q_t.

PROOF. Recall that in general (P ∘ Q)_t = P_t ∘ Q_t iff

uv ∈ P & xz ∈ Q & v_t = x_t ⇒ (∃p)(∃y)(∃q): py ∈ P & yq ∈ Q & p_t = u_t & q_t = z_t

(i.e., Theorem 3.5.1). Let uv ∈ P and xz ∈ Q such that v_t = x_t. If P ∘ Q is proper, then uv ∈ P ⇒ v ∈ P² ⇒ v ∈ Q¹ ⇒ (∃q): vq ∈ Q. If Q is static, vq ∈ Q & xz ∈ Q & v_t = x_t ⇒ q_t = z_t, i.e.,

t'q_t = (t + t')q = ((t + t')v)((t + t')Q) = (t'v_t)((t + t')Q) = (t'x_t)((t + t')Q) = ((t + t')x)((t + t')Q) = (t + t')z = t'z_t

Therefore,

uv ∈ P & xz ∈ Q & v_t = x_t ⇒ (∃q): uv ∈ P & vq ∈ Q & q_t = z_t

and the above condition is satisfied (take py = uv and yq = vq). Thus (P ∘ Q)_t = P_t ∘ Q_t. ∎
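Lemma 5.4.1 can be spot-checked on a small periodic example (our sketch; `translate` and `compose` are our names): when Q is static — its output value at each time depends only on its input value at that time — translation commutes with a proper series connection.

```python
def rot(p, t):
    return p[t:] + p[:t]

def translate(X, t):
    """X_t: translate both coordinates of every pair in the processor X."""
    return {(rot(u, t), rot(v, t)) for u, v in X}

def compose(X, Y):
    """The series connection X o Y."""
    return {(u, z) for u, v in X for w, z in Y if v == w}

P = {((0, 1), (1, 2)), ((1, 1), (2, 2))}
Q = {(v, tuple(10 * a for a in v)) for _, v in P}   # static: pointwise *10

for t in range(2):
    assert translate(compose(P, Q), t) == compose(translate(P, t),
                                                  translate(Q, t))
```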
5.4.2 Lemma. Let P and Q be T-processors. If P ∘ Q is proper and Q is uniformly static, then (P ∘ Q)‾ = P̄ ∘ Q̄.

PROOF. Similar to the proof for Lemma 5.4.1. ∎
These lemmas allow us to analyze the time-evolution of state decompositions:
5.4.3 Theorem. Let P be a T-processor. If (R, Q) is a state decomposition for P, then for all t ∈ T, (R_t, Q_t) is a state decomposition for P_t.

PROOF. By Theorem 4.7.4, since R is transitional, R_t is transitional for all t. By Theorem 4.2.4, since Q is static, Q_t is static for all t. (R, Q) is either direct or indirect. Let (R, Q) be direct. Since R² = Q¹,

(R_t)² = (R²)_t = (Q¹)_t = (Q_t)¹

by Lemma 3.3.9. By Lemma 5.4.1, for all t ∈ T,

P_t = (R ∘ Q)_t = R_t ∘ Q_t

since Q is static and R ∘ Q is proper. Thus, (R_t, Q_t) is a direct state decomposition for P_t. Now let (R, Q) be indirect. Since (R:)² = Q¹,

((R_t):)² = ((R:)²)_t = (Q¹)_t = (Q_t)¹

where we have used Lemmas 2.4.2 and 3.3.9. Since Q is static and R: ∘ Q is proper,

P_t = (R: ∘ Q)_t = (R:)_t ∘ Q_t = (R_t): ∘ Q_t

using Theorem 3.5.4. Thus, (R_t, Q_t) is an indirect state decomposition for P_t. ∎
5.4.4 Corollary. Let P be a T-processor. If (R, Q) is a direct [indirect] state decomposition for P, then for all t ∈ T, (R_t, Q_t) is a direct [indirect] state decomposition for P_t.
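Theorem 5.4.3 admits the same kind of computational check. In the hypothetical finite model below (an editorial assumption, not the book's formalism), P is a running sum read out mod 2, decomposed into a transitional part R and a static part Q, and each translate P_t is verified to be decomposed by (R_t, Q_t).

```python
from itertools import product

N = 4

def cumsum(u):
    # Running sum of an input tuple.
    out, s = [], 0
    for v in u:
        s += v
        out.append(s)
    return tuple(out)

# Hypothetical direct state decomposition: R is the running sum
# (transitional), Q is the pointwise mod-2 readout (static), defined on
# all state tuples so that every translate Q_t still covers R_t's outputs.
R = {(u, cumsum(u)) for u in product((0, 1), repeat=N)}
Q = {(s, tuple(v % 2 for v in s)) for s in product(range(N + 1), repeat=N)}

def translate(w, t):
    # w_t = {(u_t, y_t) | uy in w}
    return {(u[t:], y[t:]) for (u, y) in w}

def compose(a, b):
    return {(u, z) for (u, s) in a for (s2, z) in b if s == s2}

P = compose(R, Q)   # P = R o Q: running sum read out mod 2

# Theorem 5.4.3: (R_t, Q_t) decomposes P_t for every t.
for t in range(N):
    assert translate(P, t) == compose(translate(R, t), translate(Q, t))
```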
5.4.5 Theorem. Let P be a T-processor. If (R, Q) is a uniform state decomposition for P, then (R̄, Q̄) is a uniform state decomposition for P̄ and for all t ∈ T, (R_t, Q_t) is a uniform state decomposition for P_t.
PROOF. Let (R, Q) be a uniform state decomposition for P. By Theorem 4.7.7, R_t is uniformly transitional for all t, and by Theorem 4.2.8, Q_t is uniformly static. Clearly then, if (R, Q) is a uniform direct [indirect] state decomposition for P, then for all t ∈ T, (R_t, Q_t) is a uniform direct [indirect] state decomposition for P_t. If R is uniformly transitional, then R̄ is, and if Q is uniformly static, then Q̄ is. If (R, Q) is direct, then since R2 = Q1,

(R̄)2 = (R2)̄ = (Q1)̄ = (Q̄)1

and by Lemma 5.4.2,

P̄ = (R o Q)̄ = R̄ o Q̄

If (R, Q) is indirect, then since (R:)2 = Q1,

((R̄):)2 = ((R:)̄)2 = ((R:)2)̄ = (Q1)̄ = (Q̄)1

and by Lemma 5.4.2,

P̄ = (R: o Q)̄ = (R:)̄ o Q̄ = (R̄): o Q̄

Thus if (R, Q) is a uniform direct [indirect] state decomposition for P, then (R̄, Q̄) is a uniform direct [indirect] state decomposition for P̄. ∎
5.4.6 Corollary. Let P be a T-processor. If (R, Q) is a uniform direct [indirect] state decomposition for P, then (R̄, Q̄) is a uniform direct [indirect] state decomposition for P̄ and for all t ∈ T, (R_t, Q_t) is a uniform direct [indirect] state decomposition for P_t.
Next we consider how state decompositions combine under various interconnections. For simplicity, we consider only uniform direct state decompositions.
5.4.7 Theorem. Let P and S be T-processors, with P2 = S1. If (R, Q) is a uniform direct state decomposition for P and (U, V) is a uniform direct state decomposition for S, then (R o (Q o U):, :(Q o U) o V) is a uniform direct state decomposition for P o S.
PROOF. If Q is uniformly static and U is uniformly transitional, then by Theorem 4.8.6, Q o U is uniformly transitional. If R is uniformly transitional, then R o (Q o U): is uniformly transitional by Theorem 4.8.3. Now, :(Q o U) is uniformly static by Theorem 4.3.1 and thus :(Q o U) o V is uniformly static by Theorem 4.3.4. Since R2 = Q1 and U2 = V1,

S1 = U1 = P2 = Q2

R2 = Q1 = (Q o U)1 = ((Q o U):)1

and

(:(Q o U))2 = (Q o U)2 = U2 = V1

Thus

(R o (Q o U):)2 = ((Q o U):)2 = Q o U = (:(Q o U))1 = (:(Q o U) o V)1

Finally, by Lemma 2.4.8,

P o S = (R o Q) o (U o V) = (R o (Q o U):) o (:(Q o U) o V)

Thus (R o (Q o U):, :(Q o U) o V) is a uniform direct state decomposition for P o S. ∎
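The series formula of Theorem 5.4.7 can also be exercised on a finite example. In the sketch below (an editorial illustration; the sample processors and the tuple-of-pairs model of the paired time function uy are assumptions), the projections W: and :W are modeled extensionally and the decomposition identity for P o S is asserted.

```python
from itertools import product

N = 3

def cumsum(u):
    # Running sum of an input tuple.
    out, s = [], 0
    for v in u:
        s += v
        out.append(s)
    return tuple(out)

def runmax(u):
    # Running maximum of an input tuple.
    out, m = [], 0
    for v in u:
        m = max(m, v)
        out.append(m)
    return tuple(out)

# Hypothetical decompositions: P = R o Q and S = U o V.
R = {(u, cumsum(u)) for u in product((0, 1), repeat=N)}                      # transitional
Q = {(s, tuple(v % 2 for v in s)) for s in product(range(N + 1), repeat=N)}  # static
U = {(y, runmax(y)) for y in product((0, 1), repeat=N)}                      # transitional
V = {(m, tuple(1 - v for v in m)) for m in product((0, 1), repeat=N)}        # static

def compose(a, b):
    return {(u, z) for (u, s) in a for (s2, z) in b if s == s2}

def pair(u, y):
    # The paired time function uy, modeled as a tuple of value pairs.
    return tuple(zip(u, y))

def right_proj(w):      # W: = {u(uy) | uy in W}
    return {(u, pair(u, y)) for (u, y) in w}

def left_proj(w):       # :W = {(uy)y | uy in W}
    return {(pair(u, y), y) for (u, y) in w}

P, S, QU = compose(R, Q), compose(U, V), compose(Q, U)

lhs = compose(P, S)
rhs = compose(compose(R, right_proj(QU)), compose(left_proj(QU), V))
assert lhs == rhs   # P o S = (R o (Q o U):) o (:(Q o U) o V)
```

Because pairing is injective, matching the intermediate paired time function forces the same state and output trajectories on both sides, which is the combinatorial content of Lemma 2.4.8 in this model.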
5.4.8 Theorem. Let P and S be T-processors. If (R, Q) is a uniform direct state decomposition for P and (U, V) is a uniform direct state decomposition for S, then (R//U, Q//V) is a uniform direct state decomposition for P//S.
PROOF. By Theorem 4.8.5, R//U is uniformly transitional, and by Theorem 4.3.5, Q//V is uniformly static. Since R2 = Q1 and U2 = V1,

(R//U)2 = R2 × U2 = Q1 × V1 = (Q//V)1

Finally, by Theorem 2.3.9,

P//S = (R o Q)//(U o V) = (R//U) o (Q//V)

Thus, (R//U, Q//V) is a uniform direct state decomposition for P//S. ∎
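Theorem 5.4.8 can be checked the same way. The sketch below is editorial, with hypothetical sample processors: the parallel composition A//B is modeled by pairing the two time functions pointwise, and the parallel decompositions are asserted to compose.

```python
from itertools import product

N = 3

def cumsum(u):
    # Running sum of an input tuple.
    out, s = [], 0
    for v in u:
        s += v
        out.append(s)
    return tuple(out)

# Decomposition (R, Q) of P: running sum (transitional), mod-2 readout (static).
R = {(u, cumsum(u)) for u in product((0, 1), repeat=N)}
Q = {(s, tuple(v % 2 for v in s)) for s in product(range(N + 1), repeat=N)}

# Decomposition (U, V) of S: unit delay (transitional), identity readout (static).
U = {(x, (0,) + x[:-1]) for x in product((0, 1), repeat=N)}
V = {(m, m) for m in product((0, 1), repeat=N)}

def compose(a, b):
    return {(u, z) for (u, s) in a for (s2, z) in b if s == s2}

def par(a, b):
    # a//b runs the two processors side by side; the paired time function
    # is modeled as a tuple of value pairs.
    return {(tuple(zip(u, x)), tuple(zip(y, z))) for (u, y) in a for (x, z) in b}

P, S = compose(R, Q), compose(U, V)

# Theorem 5.4.8: P//S = (R//U) o (Q//V).
assert par(P, S) == compose(par(R, U), par(Q, V))
```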
5.4.9 Theorem. Let P be a T-processor. If (R, Q) is a uniform direct state decomposition for P, then (R, :R o (1(R1)//Q)) is a uniform indirect state decomposition for P:.
PROOF. As an exercise.
5.5 SPECIAL STATE DECOMPOSITIONS
Recall for contracting [expanding; stationary] T-processors that the attainable input and output spaces themselves exhibit a property of contraction [expansion; constancy]. It is of interest to know if for such T-processors there exist state spaces with equally regular properties. Similarly, recall for a (nonempty) T-processor P which is strictly nonanticipatory, 0P2 has precisely one element. In this case, it is reasonable to ask if there exists a state decomposition (R, Q) such that 0R2 has precisely one element. In other words, is it always possible to regard such T-processors as having just one initial state? We consider these questions in concluding this chapter.
5.5.1 Definition. Let P be a T-processor. A state decomposition (R, Q) for P is contracting [expanding; stationary] iff both R and Q are contracting [expanding; stationary]. (R, Q) is simple iff 0R2 has one and only one element.
REMARK. A state decomposition is stationary iff it is both contracting and expanding. A simple state decomposition is one with precisely one initial state.
We shall direct our attention first to state decompositions which are contracting, expanding, and stationary. Subsequently, we shall consider simple state decompositions.
5.5.2 Theorem. Let P be a T-processor. A uniform state decomposition (R, Q) for P is contracting [expanding; stationary] iff R is contracting [expanding; stationary].
PROOF. The necessity is obvious. Let (R, Q) be indirect. If R is contracting [expanding; stationary], then Q1 is, since Q1 = (R:)2 = R. Let (R, Q) be direct. If R is contracting [expanding; stationary], then R2 is, and since R2 = Q1, Q1 is. In either case, Q is uniformly static. Therefore, by Theorem 4.2.22, Q is contracting [expanding; stationary]. ∎
The case of contraction of state decompositions is of particular interest. The following three propositions show why.
5.5.3 Lemma. Let P be a T-processor. Every contracting state decomposition of P is uniform.
PROOF. For the reader. ∎
5.5.4 Theorem. Let P be a T-processor with state decomposition (R, Q). If R is contracting, then (R, Q) is contracting iff (R, Q) is uniform.
PROOF. By Theorem 5.5.2 and Lemma 5.5.3. ∎
5.5.5 Lemma. Let P be a T-processor. If (R, Q) is a contracting state decomposition for P, then 𝒜R2 is both a state space and a set of initial states for P.
PROOF. 𝒜R2 is a state space, since by Theorem 5.5.4, (R, Q) is uniform. If R is contracting, 0R2 = 𝒜R2. Thus 𝒜R2 is a set of initial states for P. ∎
REMARK. Lemma 5.5.5 shows that in the case of contracting state decompositions, every state is an initial state. Lemma 5.5.3 shows once again the power of the assumption of contraction on T-processes.
We come now to one of our main results in this section.
5.5.6 Theorem. Let P be a T-processor. P is contracting [expanding; stationary] iff there exists a contracting [expanding; stationary] state decomposition for P.
PROOF. Let (R, Q) be a state decomposition for P. (R, Q) is either direct or indirect. Let (R, Q) be direct. If (R, Q) is contracting, then by Theorem 3.6.13, P is contracting, since P = R o Q, where both R and Q are contracting. If (R, Q) is expanding, then for all t ∈ T, R ⊆ R_t and Q ⊆ Q_t. Using Lemmas 2.2.6 and (critically) 5.4.1, for all t ∈ T,

P = R o Q ⊆ R_t o Q_t = (R o Q)_t = P_t

Thus P is expanding. Similarly, if (R, Q) is stationary, then P is stationary. Let (R, Q) be indirect. By Theorems 3.6.12 and 3.7.14, if R is contracting [expanding; stationary], then R: is. Thus if (R, Q) is contracting, then P is contracting, since P = R: o Q. If (R, Q) is expanding, for all t ∈ T, Q ⊆ Q_t and R: ⊆ (R:)_t. Thus,

P = R: o Q ⊆ (R:)_t o Q_t = (R: o Q)_t = P_t

that is, P is expanding. Similarly, if (R, Q) is indirect and stationary, P is stationary. This completes the proof from right to left. Conversely, consider the small uniform state decomposition (R, Q) of the proof of Theorem 5.3.8, i.e.,
R = {u(Fut y) | uy ∈ P}
Q = {(Fut y)y | y ∈ P2}

where Fut y = {(t, y_t) | t ∈ T}. For all y ∈ P2 and all t, t' ∈ T,

t'((Fut y)_t) = (t + t')(Fut y) = y_(t+t') = (y_t)_t' = t'(Fut y_t)

that is, (Fut y)_t = Fut y_t. Thus,

R_t = {(u(Fut y))_t | uy ∈ P} = {u_t((Fut y)_t) | uy ∈ P} = {u_t(Fut y_t) | uy ∈ P} = {v(Fut z) | vz ∈ P_t}

Q_t = {((Fut y)y)_t | y ∈ P2} = {((Fut y)_t)y_t | y ∈ P2} = {(Fut y_t)y_t | y ∈ P2} = {(Fut z)z | z ∈ (P2)_t}

Now if P is contracting, then for all t ∈ T,

R_t = {v(Fut z) | vz ∈ P_t} ⊆ {v(Fut z) | vz ∈ P} = R

and, since P2 is contracting,

Q_t = {(Fut z)z | z ∈ (P2)_t} ⊆ {(Fut z)z | z ∈ P2} = Q

which proves that both R and Q are contracting. Similarly, if P is expanding [stationary], both R and Q are expanding [stationary]. Thus if P is contracting [expanding; stationary], there exists a contracting [expanding; stationary] state decomposition for P. ∎
REMARK. Again, the state space 𝒜R2 associated with a contracting [expanding; stationary] state decomposition (R, Q) exhibits the property of contraction [expansion; constancy] of Theorem 3.6.7 and Corollary 3.7.10. Next, we address the concept of simple state decompositions.
5.5.7 Lemma. Let P be a nonempty T-processor. A state decomposition (R, Q) for P is simple iff R is strictly nonanticipatory.
PROOF. If (R, Q) is a state decomposition for P, R is transitional. Moreover, if P is nonempty, R is nonempty. Thus, by Theorem 4.7.18, R is strictly nonanticipatory iff 0R2 has one and only one element. ∎
The following pair of theorems constitute the main results of this section.
5.5.8 Theorem. Let P be a nonempty T-processor. P is strictly nonanticipatory iff there exists a simple direct state decomposition for P.
PROOF. If (R, Q) is a direct state decomposition for P which is simple, then Q is static, and by Lemma 5.5.7, R is strictly nonanticipatory. Now, P = R o Q. Hence, by Corollary 4.5.4, P is strictly nonanticipatory. Conversely, let P be strictly nonanticipatory and consider the ordered pair (R, Q) such that

R = {u(Past u) | u ∈ P1}
Q = {(Past u)y | uy ∈ P}

where Past u = {(t, u^t) | t ∈ T}. For any u ∈ P1, Past u is a T-time function; hence, both R and Q are T-processors. R is uniformly transitional, i.e.,

tr R = {(((u_t)^t', t(Past u)), (t + t')(Past u)) | u ∈ P1 & t, t' ∈ T}
     = {(((u_t)^t', u^t), u^(t+t')) | u ∈ P1 & t, t' ∈ T}
which is a function by Lemma 4.4.2, i.e.,

((u_t)^t', u^t) = ((v_t'')^t*, v^t'') ⊃ u^t = v^t'' & (u_t)^t' = (v_t'')^t* & t = t'' & t' = t* ⊃ u^(t+t') = v^(t''+t*)

Similarly, Q is uniformly static, i.e.,

{(t(Past u), ty) | uy ∈ P & t ∈ T} = {(u^t, ty) | uy ∈ P & t ∈ T} = sn P

where, by assumption, sn P is a function. Next,

R2 = {Past u | u ∈ P1} = {Past u | (∃y): uy ∈ P} = Q1

Using Lemma 4.4.4,

Past u = Past v ⊃ (∀t): t(Past u) = t(Past v) ⊃ (∀t): u^t = v^t ⊃ u = v

Thus,

R o Q = {uy | (∃z): uz ∈ R & zy ∈ Q}
      = {uy | (∃v): u ∈ P1 & vy ∈ P & Past u = Past v}
      = {uy | (∃v): u ∈ P1 & vy ∈ P & u = v}
      = {uy | uy ∈ P} = P

Thus (R, Q) is a uniform direct state decomposition for P. Finally,

0R2 = {0(Past u) | u ∈ P1} = {u^0 | u ∈ P1} = {∅}

which proves that (R, Q) is simple. ∎
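The Past construction of Theorem 5.5.8 is easy to animate in a finite discrete-time model. In the sketch below (an editorial illustration; the unit-delay P is a hypothetical strictly nonanticipatory processor, and past segments u^t are modeled as prefixes u[:t]), R o Q = P is checked and 0R2 is seen to contain exactly one state, the empty segment. A processor that is not strictly nonanticipatory (the identity) shows why strictness is needed.

```python
from itertools import product

N = 3
# A strictly nonanticipatory sample process: a unit delay, whose output at
# time t depends only on the segment u[:t].
P = {(u, (0,) + u[:-1]) for u in product((0, 1), repeat=N)}

def past(u):
    # Past u: the time function of past segments u^t = u[:t].
    return tuple(u[:t] for t in range(len(u)))

def compose(a, b):
    return {(u, z) for (u, s) in a for (s2, z) in b if s == s2}

R = {(u, past(u)) for (u, _) in P}     # R = {u(Past u) | u in P1}
Q = {(past(u), y) for (u, y) in P}     # Q = {(Past u)y | uy in P}

assert compose(R, Q) == P              # (R, Q) decomposes P
assert {s[0] for (_, s) in R} == {()}  # 0R2: a single (empty) initial state

# For a processor that is not strictly nonanticipatory, e.g. the identity,
# the same construction overshoots: R o Q strictly contains P.
I = {(u, u) for u in product((0, 1), repeat=N)}
Ri = {(u, past(u)) for (u, _) in I}
Qi = {(past(u), y) for (u, y) in I}
assert compose(Ri, Qi) > I
```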
5.5.9 Corollary. Let P be a nonempty T-processor. If P is strictly nonanticipatory, then there exists a direct state decomposition for P which is both simple and uniform. Moreover, the set

{u^t | u ∈ P1 & t ∈ T}

is a state space for P.
We get an analogous result for nonanticipatory T-processors in terms of indirect state decompositions.
5.5.10 Theorem. Let P be a nonempty T-processor. P is nonanticipatory iff there exists a simple indirect state decomposition for P.
PROOF. Similar to the proof for Theorem 5.5.8 but using the ordered pair

R = {u(Past u) | u ∈ P1}
Q = {(u(Past u))y | uy ∈ P}

for the indicated construction. ∎
5.5.11 Corollary. Let P be a nonempty T-processor. If P is nonanticipatory, then there exists an indirect state decomposition for P which is both simple and uniform. Moreover, the set (P1) is a state space for P.
Exercises
5-1. Prove Theorem 5.2.8.

5-2. Prove Theorem 5.2.10 directly using only lemmas from Chapters 1-4.
5-3. Let P and Q be T-processors. Prove that Q is a strong image of P iff there exists a homomorphism h from P to Q such that

(a, b)h = (c, d) ⊃ a = c

5-4. Let P, R, and Q be T-processors. Prove that if R is transitional, Q is static, R2 ⊆ Q1, and P = R o Q, then (R, 1(R2) o Q) is a direct state decomposition of P.
5-5. Prove that a T-processor P is transitional [uniformly transitional] iff (P, :P) is an indirect state decomposition [a uniform indirect state decomposition] of P.
5-6. Prove that the ordered pair (R, Q) with

R = {u(Fut y) | uy ∈ P}
Q = {(u(Fut y))y | uy ∈ P}

is a uniform indirect state decomposition for any T-processor P.

5-7. Let P be a T-processor. Prove that any T-process S with a map f: S → J (1:1 onto), where J = {Fut y | y ∈ P2}, is a set of state trajectories for P.
5-8. Prove Lemma 5.3.10.

5-9. Prove Lemma 5.4.2.

5-10. Prove Theorem 5.4.9.
5-11. Let P be a functional T-processor. Prove that if (R, Q) is a uniform direct state decomposition for P, then ((P:)^-1 o R, Q) is a uniform direct state decomposition for :P.
5-12. Prove that a T-processor which admits a state space with one and only one element is uniformly static.

5-13. Let P be a T-processor. A state decomposition (R, Q) for P is essentially contracting if R is contracting. Prove that a uniform state decomposition which is essentially contracting is contracting.

5-14. Prove for a state decomposition which is essentially contracting that every state is an initial state.

5-15. Prove for a stationary T-processor P that there exists a state space X which is a set of initial states for P_t, for all t ∈ T.

5-16. Prove that a T-processor P is free iff there exists a state decomposition (R, Q) for P such that R is free.

5-17. (Fundamental lemma of dynamical motions.) Let P be a T-process. Prove that P is nondiverging iff P is a set of state trajectories of some free T-processor.
5-18. On the basis of Exercise 5-17, formulate and prove a characterization theorem for T-flows involving the concept of state.
5-19. For the universal state decomposition given in the proof of Theorem 5.3.8, calculate dom R. Repeat for the state decomposition of Theorem 5.5.8.

5-20. Prove that any static T-processor admits a simple state decomposition.

5-21. Prove that a T-processor P is nonanticipatory iff there exists a set of initial states for P containing exactly one element.
5-22. Let P be a T-processor. Prove that every state space for P is both a set of initial states and a state space for P.
5-23. Prove Theorem 5.5.10.
References
For convenience the references are given in four categories based on the subject matter: (i) general systems, (ii) mathematics, (iii) automata theory, and (iv) dynamical systems.
General Systems
1. Arbib, M. A., A common framework for automata theory and control theory, SIAM J. Control 3, 206-222 (1965).
2. Arbib, M. A., Automata theory and control theory - a rapprochement, Automatica 3, 161-189 (1966).
3. Banerji, R. B., The state space in systems theory, Info. Sci. (to appear) (1970).
4. Birta, L. G., "A Formal Approach to Concepts of Interaction," Ph. D. dissertation, Case Institute of Technology, Cleveland, Ohio, 1965.
5. Birta, L. G., The concept of generativity in general systems theory, IEEE Syst. Sci. Cy. Conf., Boston, Massachusetts, 1967.
6. Bushaw, D., A stability criterion for general systems, J. Math. Syst. Th. 1, 79-88 (1967).
7. Dompe, R., Memory-delay systems, Internal Memorandum, Case Western Reserve University, Cleveland, Ohio, 1968.
8. Goguen, J. A., Categorical foundations for general systems theory, AMS Midwest Category Seminar, New Orleans, Louisiana, 1969.
9. Goldfeder, M. E., "On Cancellation of General Systems," M.S. dissertation, Case Institute of Technology, Cleveland, Ohio, 1967.
10. Goldfeder, M. E., "On General Systems and Regular Sets," Ph. D. dissertation, Case Western Reserve University, Cleveland, Ohio, 1969.
11. Kalman, R. E., Mathematical description of linear dynamical systems, SIAM J. Control 1, 152-192 (1963).
12. Kalman, R. E., On canonical realizations, Proc. 2nd Allerton Conf. Ckt. Syst. Th., 32-41 (1964).
13. Mesarovic, M. D., General systems theory and its mathematical foundations, IEEE Syst. Sci. Cy. Conf., Boston, Massachusetts, 1967.
Mesarovic, M. D., Auxiliary functions and the constructive specification of general systems, J. Math. Syst. Th. 2, 203-222 (1968).
Mesarovic, M. D., and Eckman, D. P., On some basic concepts of the general systems theory, Proc. 3rd Internat. Conf. Cy., Namur, Belgium, 1961.
"Adaptive Pattern Recognition in General Systems Theory," Ph. D. dissertation, Case Western Reserve University, Cleveland, Ohio, 1968.
Toward a formalization of physical characteristics of information processing, Internal Memorandum, Case Institute of Technology, Cleveland, Ohio, 1966.
30. von Bertalanffy, L., General systems theory, General Systems 1 (1956).
31. Wiener, N., "Cybernetics," 2nd edition. MIT and Wiley, New York, 1966.
32. Windeknecht, T. G., Concerning an algebraic theory of systems, in "Systems and Computer Science" (Hart, J. F., and Takasu, S., eds.). Toronto Univ. Press, Toronto, 1967.
33. Windeknecht, T. G., Mathematical systems theory: determinism, Conf. Mach. Lang. Semigrps., Asilomar, California, 1966.
34. Windeknecht, T. G., An axiomatic theory of general systems, IEEE Syst. Sci. Cy. Conf., Boston, Massachusetts, 1967.
35. Windeknecht, T. G., Mathematical systems theory: causality, J. Math. Syst. Th. 1, 279-288 (1967).
36. Windeknecht, T. G., and Mesarovic, M. D., On general dynamical systems and finite stability, in "Differential Equations and Dynamical Systems" (Hale, J. K., and LaSalle, J. P., eds.). Academic Press, New York, 1967.
37. Wymore, A. W., "A Mathematical Theory of Systems Engineering: The Elements." Wiley, New York, 1967.
38. Zadeh, L. A., The concept of state in system theory, in "Views on General Systems Theory" (Mesarovic, M. D., ed.). Wiley, New York, 1964.
39. Zadeh, L. A., The concepts of system, aggregate, and state in system theory, in "System Theory" (Zadeh, L. A., and Polak, E., eds.). McGraw-Hill, New York, 1969.
40. Zadeh, L. A., and Desoer, C. A., "Linear System Theory." McGraw-Hill, New York, 1963.
41. Zadeh, L. A., and Polak, E., "System Theory." McGraw-Hill, New York, 1969.
42. Zogakis, T. G., "General Systems Theory Investigation of System Resolution," Ph. D. dissertation, Case Western Reserve University, Cleveland, Ohio, 1968.

Mathematics

43. Clifford, A. H., and Preston, G. B., "The Algebraic Theory of Semigroups," Vol. I. American Mathematical Society, Providence, Rhode Island, 1963.
44. Herstein, I. N., "Topics in Algebra." Blaisdell, New York, 1964.
45. Jacobson, N., "Lectures in Abstract Algebra," Vol. III ("Theory of Fields and Galois Theory"). Van Nostrand, Princeton, New Jersey, 1964.
46. Kelley, J. L., "General Topology." Van Nostrand, Princeton, New Jersey, 1955 (appendix).
47. Ljapin, E. S., "Semi-Groups." American Mathematical Society (translation), Providence, Rhode Island, 1963.
48. MacLane, S., and Birkhoff, G., "Algebra." MacMillan, New York, 1967.
49. Suppes, P., "Axiomatic Set Theory." Van Nostrand, Princeton, New Jersey, 1960.

Automata Theory

50. Davis, M., "Computability and Unsolvability." McGraw-Hill, New York, 1958.
51. Gill, A., "Introduction to the Theory of Finite-State Machines." McGraw-Hill, New York, 1962.
52. Gill, A., "Linear Sequential Circuits." McGraw-Hill, New York, 1966.
53. Ginsburg, S., "An Introduction to Mathematical Machine Theory." Addison-Wesley, Reading, Massachusetts, 1962.
54. Harrison, M. A., "Introduction to Switching and Automata Theory." McGraw-Hill, New York, 1965.
55. Hart, J. F., and Takasu, S. (eds.), "Systems and Computer Science." Toronto Univ. Press, Toronto, 1967.
56. Hartmanis, J., and Stearns, R. E., "Algebraic Structure Theory of Sequential Machines." Prentice-Hall, Englewood Cliffs, New Jersey, 1966.
57. Krohn, K., and Rhodes, J., Algebraic theory of machines I, Trans. Amer. Math. Soc. 116, 450-464 (1966).
58. Markov, A. A., Theory of algorithms, Acad. Sci. USSR 42 (translation, U.S. Dept. Commerce) (1954).
59. Moore, E. F. (ed.), "Sequential Machines: Selected Papers." Addison-Wesley, Reading, Massachusetts, 1964.
60. Nelson, R. J., "Introduction to Automata." Wiley, New York, 1968.
61. Nerode, A., Linear automaton transformations, Proc. Amer. Math. Soc. 9, 541-544 (1958).
62. Raney, G. N., Sequential functions, J. Assoc. Comp. Mach. 5, 177-180 (1958).

Dynamical Systems
63. Bhatia, N. P., and Szegö, G. P., Dynamical systems: stability theory and applications, "Lecture Notes in Mathematics," Vol. 35. Springer-Verlag, New York, 1967.
64. Gottschalk, W. H., and Hedlund, G. A., "Topological Dynamics." American Mathematical Society, Providence, Rhode Island, 1955.
65. Hale, J. K., and LaSalle, J. P. (eds.), "Differential Equations and Dynamical Systems." Academic Press, New York, 1967.
66. Nemytskii, V. V., and Stepanov, V. V., "Qualitative Theory of Differential Equations."
Index
A

Actions of monoids, 61-68
  strongly connected, 64
  transformations of, 62
  transition relation of, 63
  transitions of, 62
Arbib, M. A., 3, 173, 174
Archimedean order, 92

B

Birkhoff, G., 61, 175

C

Causality, 95-98
Classes, 4
Closed-loop operation, 52
Constructive specifications, 96

D

Decompositions, see also State decompositions
  exact series, 150
  parallel, 46
Discrete-time processes, 17, 90, 134

F

Feedback operation, 49
Feedforward operation, 49
Flows, 142
Functions, see also Time functions
  auxiliary, 96
  definition of, 10
  next-state, 152
  one:one, 11
  output, 152
  state-transition, 152

G

General systems, 1-4
Goguen, J., 30, 173

I

Interconnections
  cascade, 56, 132
  nonanticipatory, 119-122
  parallel, 45
  proper series, 42
  series, 42
  static, 106-108
  transitional, 131-134

J

Jacobson, N., 61, 175

K

Kalman, R. E., 1, 3, 173, 174
Kelley, J. L., 4, 9, 175
L

Ljapin, E. S., 64, 91, 175

M

MacLane, S., 61, 175
Mesarovic, M. D., 2, 25, 30, 96, 174, 175
Monoids
  definition of, 13
  homomorphism of, 16, 63
  isomorphic, 16
  order-isomorphic, 16, 17
  transformation, 63
Motions, 142

N

Nonanticipation, 109, 113

P

Processors, see also Interconnections
  bifunctional, 28
  definition of, 21
  free, 28
  functional, 28
  identity, 27
  inductive, 145, 146
  multivariable, 30
  nonanticipatory, 113
  relation of, 25
  static, 98-106
  strong image, 154
  transitional, 124
  uncoupled, 26

R

Relations
  constant, 28
  cross product of, 108
  definition of, 10
  product of, 19
  projections of, 21
  standard operations on, 10-12

S

Semigroups, see Monoids
Sequences, 18
Sets
  definition of, 4
  of initial states, 151
  input, 23
  invariant, 67
  minimal invariant, 67
  output, 23
  standard operations on, 6-9
  of states, 151
  time, 14, 59
Short memory, 146, 147
Spaces
  attainable, 18
  input, 21
  output, 21
  state, 152
State decompositions
  contracting, 164
  direct, 150
  expanding, 164
  indirect, 150
  simple, 164
  small, 151
  stationary, 164
  uniform, 151
State spaces, existence of, 157
State trajectories, 152
Suppes, P., 13, 175

T

Time evolution
  of interconnections, 79-81
  of nonanticipatory processors, 122
  of processes, 72
Time functions
  constant, 29, 77
  definition of, 17
  left translation of, 69
  segments of, 109
  weakly periodic, 92
Translation closure operation, 74

W

Wiener, N., 59, 174
Windeknecht, T. G., 60, 174, 175
Wymore, A. W., 3, 175

Z

Zadeh, L. A., 3, 149, 175