Lecture Notes in Control and Information Sciences 20
Edited by A. V. Balakrishnan and M. Thoma
Bo Egardt
Stability of Adaptive Controllers
Springer-Verlag Berlin Heidelberg New York 1979
Series Editors: A. V. Balakrishnan · M. Thoma
Advisory Board: L. D. Davisson · A. G. J. MacFarlane · H. Kwakernaak · Ya. Z. Tsypkin · A. J. Viterbi
Author: Dr. Bo Egardt, Department of Automatic Control, Lund Institute of Technology, S-220 07 Lund, Sweden
ISBN 3-540-09646-9 Springer-Verlag Berlin Heidelberg New York
ISBN 0-387-09646-9 Springer-Verlag New York Heidelberg Berlin
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machine or similar means, and storage in data banks. Under § 54 of the German Copyright Law, where copies are made for other than private use, a fee is payable to the publisher, the amount of the fee to be determined by agreement with the publisher.
© Springer-Verlag Berlin Heidelberg 1979
Printed in Germany. Printing and binding: Beltz Offsetdruck, Hemsbach/Bergstr.
PREFACE
The present work is concerned with the stability analysis of adaptive control systems in both discrete and continuous time. The attention is focussed on two well-known approaches, namely the model reference adaptive systems and the self-tuning regulators. The two approaches are treated in a general framework, which leads to the formulation of a fairly general algorithm. The stability properties of this algorithm are analysed and sufficient conditions for boundedness of closed-loop signals are given. The analysis differs from most other studies in this field in that disturbances are introduced in the problem. Most of the material was originally presented as a Ph.D. thesis at the Department of Automatic Control, Lund Institute of Technology, Lund, Sweden, in December 1978. It is a pleasure for me to thank my supervisor, Professor Karl Johan Åström, who proposed the problem and provided valuable guidance throughout the work.
B. Egardt
Table of contents
1. INTRODUCTION
2. UNIFIED DESCRIPTION OF DISCRETE TIME CONTROLLERS
2.1 Design method for known plants
2.2 Class of adaptive controllers
2.3 Examples of the general control scheme
2.4 The positive real condition
3. UNIFIED DESCRIPTION OF CONTINUOUS TIME CONTROLLERS
3.1 Design method for known plants
3.2 Class of adaptive controllers
3.3 Examples of the general control scheme
3.4 The positive real condition
4. STABILITY OF DISCRETE TIME CONTROLLERS
4.1 Preliminaries
4.2 L∞-stability
4.3 Convergence in the disturbance-free case
4.4 Results on other configurations
4.5 Discussion
5. STABILITY OF CONTINUOUS TIME CONTROLLERS
5.1 Preliminaries
5.2 L∞-stability
5.3 Convergence in the disturbance-free case
REFERENCES
APPENDIX A - PROOF OF THEOREM 4.1
APPENDIX B - PROOF OF THEOREM 5.1
1. INTRODUCTION
Generalities
Most of the current techniques to design control systems are based on knowledge of the plant and its environment. In many cases this information is, however, not available. The reason might be that the plant is too complex, that basic relationships are not fully understood, or that the process and the disturbances may change with operating conditions. Different possibilities to overcome this difficulty exist. One way to attack the problem is to apply some system identification technique to obtain a model of the process and its environment from practical experiments. The controller design is then based on the resulting model. Another possibility is to adjust the parameters of the controller during plant operation. This can be done manually, as is normally done for ordinary PID-controllers, provided that only a few parameters have to be adjusted. Manual adjustment is, however, not feasible if more than three parameters have to be adjusted. Some kind of automatic adjustment of the controller parameters is then needed. Adaptive control is one possibility to tune the controller. In particular, self-tuning regulators and model reference adaptive systems are two widely discussed approaches to solve the problem for plants with unknown parameters. These techniques will be the main concern of the present work. Although these two approaches in practice can handle slowly time-varying plants, the design is basically made for constant but unknown plants. The basic ideas behind the two techniques are discussed below.
Self-tuning regulators
The self-tuning regulators (STR) are based on a fairly natural combination of identification and control. A design method for known plants is the starting-point. Since the plant is unknown, the parameters of the controller can, however, not be determined. They are instead obtained from a recursive parameter estimator. A separation between identification and control is thus assumed. Note that the only information from the estimator that is used by the control law is the parameter estimates. Schemes which utilize e.g. parameter uncertainties are not considered here. Probably the first to formulate this simple idea as an algorithm was Kalman (1958). An on-line least-squares algorithm produced estimates of plant parameters. The estimates were then used at every sampling instant to compute a deadbeat control law. The self-tuning idea was brought up by Peterka (1970) and Åström/Wittenmark (1973) in a stochastic framework. Åström and Wittenmark's algorithm, based on minimum variance control, is described below in a simple case.
EXAMPLE 1.1
Consider the plant given by
    y(t) + a y(t-1) = b u(t-1) + e(t),
where u is the input, y the output and {e(t)} is a sequence of independent, zero-mean random variables. It is easy to see that the control law
    u(t) = (a/b) y(t)
gives the minimum output variance. If the parameters a and b are unknown, the algorithm by Åström and Wittenmark can be applied. It consists of two steps, each repeated at every sampling instant:
1. Estimate the parameter α in the model
    y(t) = α y(t-1) + β₀ u(t-1) + ε(t),
e.g. by minimizing Σ ε²(s). Denote the resulting estimate by α̂(t).
2. Use the control law
    u(t) = - (α̂(t)/β₀) y(t).
It should be noted that the estimation of α can be made recursively if a least-squares criterion is used. This makes the scheme practically feasible. □
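A minimal simulation sketch of this two-step scheme is given below. The parameter values, noise level and recursive least-squares initialization are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Plant: y(t) + a*y(t-1) = b*u(t-1) + e(t).  The regulator estimates alpha in
# y(t) = alpha*y(t-1) + beta0*u(t-1) + eps(t) by recursive least squares
# (beta0 fixed a priori) and applies u(t) = -(alpha_hat/beta0)*y(t).
rng = np.random.default_rng(0)
a, b = -0.9, 0.5            # true but "unknown" plant parameters (assumed values)
beta0 = 1.0                 # fixed a priori guess of b
alpha_hat, P = 0.0, 100.0   # RLS estimate and scalar "covariance"
y_prev, u_prev = 0.0, 0.0

for t in range(500):
    e = 0.1 * rng.standard_normal()
    y = -a * y_prev + b * u_prev + e                # plant output

    # Step 1: recursive least-squares update of alpha
    phi = y_prev
    eps = y - alpha_hat * phi - beta0 * u_prev      # prediction error
    gain = P * phi / (1.0 + phi * P * phi)
    alpha_hat += gain * eps
    P -= gain * phi * P

    # Step 2: certainty-equivalence minimum-variance control law
    u = -(alpha_hat / beta0) * y

    y_prev, u_prev = y, u
```

For these assumed values the estimate settles so that the closed-loop output becomes (nearly) white, i.e. the loop approaches the minimum variance regulator u(t) = (a/b) y(t), which is the self-tuning property discussed below.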
The above algorithm can easily be generalized to higher order plants with time delays. The paper by Åström and Wittenmark (1973) presented some analysis of the algorithm. The main conclusion was that if the algorithm converges at all, then it converges to the desired minimum variance controller, even if the noise {e(t)} is correlated. The latter result was somewhat surprising at that point. It has later been shown by Ljung (1977a) that the algorithm converges under a stability condition if the noise characteristics satisfy a certain positive realness condition. Similar results without the stability assumption were given by Goodwin et al. (1978b).
The self-tuning regulators are not confined to minimum variance control. For example, Åström/Wittenmark (1974) and Clarke/Gawthrop (1975) proposed generalizations of the basic algorithm. Algorithms based on pole placement design were discussed by Edmunds (1976), Wellstead et al. (1978) and Åström et al. (1978). Multivariable formulations are given by e.g. Borisson (1978). The general configuration of a self-tuning regulator is shown in Fig. 1.1. The regulator can be thought of as composed of three parts: a parameter estimator, a controller, and a third part, which relates the controller parameters to the parameter estimates. This partitioning of the regulator is convenient when describing how it works and to derive algorithms. The regulator could, however, equally well be described as a single nonlinear regulator. There are of course many design methods and identification techniques that can be combined into a self-tuning regulator with this general structure. A survey of the field is given in Åström et al. (1977).
Figure 1.1. Block diagram of a self-tuning regulator (plant, parameter estimation, regulator parameter calculation, and regulator).
Model reference adaptive systems
The area of model reference adaptive systems (MRAS) is more difficult to characterize in a general way. The main reason is that the many different schemes proposed have been motivated by different considerations. An early attempt to cope with e.g. gain variations in servo problems was Whitaker's "MIT-rule". Parks (1966) initiated a rapid development of the MRAS by using Lyapunov functions and stability theory in the design. He also observed the relevance of a certain positive realness condition. A simple example will illustrate the ideas.
EXAMPLE 1.2
A first order plant is assumed to have known time constant but unknown gain. The desired relationship between the input u and the output y is defined by a reference model with output yM, see Fig. 1.2. The objective is thus to adjust the gain K such that e(t) = y(t) - yM(t) tends to zero.

Figure 1.2. Configuration of Example 1.2: the command u drives the reference model KM/(1+sT) with output yM and, through the adjustable gain K, the plant Kp/(1+sT) with output y.

The solution uses a Lyapunov function
    V = (1/2)(e² + c R²),
where c > 0 and R = KM - K Kp. The derivative of V is
    V̇ = -(1/T) e² - (1/T) R u e + c R Ṙ.
If the gain error is adjusted according to
    Ṙ = (1/(cT)) u e,                                                          (1.1)
the derivative of V is negative semidefinite and it can be shown that the error e tends to zero. This implies that the objective is fulfilled. Note that (1.1) can equivalently be written as
    K̇ = - (1/(c T Kp)) u e
if Kp is assumed to be constant. Since c is arbitrary, this updating formula is possible to implement, although the adaptation rate will vary due to the unknown plant gain. □
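A minimal simulation sketch of this gain adaptation is given below, using forward-Euler integration. The parameter values, the square-wave command and the lumped adaptation gain gamma (standing in for 1/(c T Kp)) are illustrative assumptions.

```python
import numpy as np

T, Kp, KM = 1.0, 2.0, 1.0   # time constant, unknown plant gain, model gain (assumed)
gamma = 0.5                  # lumped adaptation rate, absorbs the unknown 1/(c*T*Kp)
dt, y, yM, K = 0.01, 0.0, 0.0, 0.0

for step in range(30000):
    u = np.sign(np.sin(0.1 * np.pi * step * dt))   # square-wave command input
    e = y - yM
    # forward-Euler integration of plant, reference model and adaptation law
    y  += dt * (-y + Kp * K * u) / T
    yM += dt * (-yM + KM * u) / T
    K  += dt * (-gamma * u * e)     # K approaches KM/Kp as e tends to zero
```

The product u·e acts as a gradient signal for the gain error, which is the essence of the Lyapunov redesign sketched above.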
The above example can be generalized considerably. The problem of following a given reference signal was solved for higher order plants with unknown dynamics, see e.g. Gilbart et al. (1970) and Monopoli (1973). However, a crucial assumption in these references is that the plant's transfer function has a pole excess (i.e. difference between number of poles and number of zeros) equal to one. Monopoli (1974) proposed a modification of the earlier schemes to treat the general case. His ideas have inspired many authors in the field, and in particular the stability problem associated with his scheme has been frequently discussed. The basic idea behind the schemes can be described as in Fig. 1.3. The unknown plant is controlled by an adjustable controller. The desired behaviour of the plant is defined by a reference model. Some kind of adaptation mechanism modifies the parameters of the adjustable controller to minimize the difference between the plant output and the desired output. The methods to design the adaptation loop in MRAS have mostly been based on stability theory since Parks' important paper appeared. Although the MRAS ideas were first developed for continuous time control, the same framework has been carried over to discrete time control. Surveys of the numerous variants of the technique are given by e.g. Landau (1974) and Narendra/Valavani (1976).
Figure 1.3. Block diagram of a model reference adaptive regulator (reference model, adjustable controller, plant, and adaptation mechanism).
Similarities between STR and MRAS
The STR and the MRAS were developed to solve different problems. The STR were originally designed to solve the stochastic regulator problem. The MRAS were developed to solve the deterministic servo problem. In spite of these differences, the two techniques exhibit some important similarities. This has been observed in e.g. Ljung (1977a) and Gawthrop (1977). The question has thus arisen whether the two approaches are more closely related than earlier thought. Some answers are given in Ljung/Landau (1978), Narendra/Valavani (1978) and Egardt (1978). The purpose of the first part of this work is to describe several MRAS and STR in a unified manner. The discussion is limited to systems with one input and one output. It is assumed that only the plant output is available for feedback. It will be shown that it is possible to derive MRAS from the STR point of view. This observation leads to the possibility to describe several MRAS and STR as special cases of a fairly general algorithm. The unified treatment also facilitates a comparison of the positive real conditions, which play an important role in the design and analysis of both MRAS and STR. It is shown that the condition can be removed in the deterministic case. The discrete time case is covered by Chapter 2 and Chapter 3 gives the treatment for continuous time control. Since adaptive regulators are predominantly implemented using digital computers, the discrete time case is emphasized. The analysis is also a little simpler in that case.
Stability
There are a number of important properties of adaptive regulators which are poorly understood, e.g.
- overall stability
- convergence of the regulator
- properties of the possible limiting regulators
- effects of disturbances.
Overall stability of the closed loop system is perhaps the most fundamental property. This is of course important both practically and theoretically. The stability problem has also been encountered indirectly in most convergence studies. For MRAS without disturbances, boundedness of closed-loop signals was assumed to prove convergence of the output error to zero. See e.g. Feuer/Morse (1977), Narendra/Valavani (1978) and - for discrete time - Landau/Béthoux (1975). The paper by Feuer and Morse (1977) in fact contained a proof of global stability, but the algorithm considered was very complicated. For simpler schemes, the only rigorous convergence proofs without the stability requirement are the ones by Goodwin et al. (1978a) for discrete time, Egardt (1978) for both discrete and continuous time, and Morse (1979) for continuous time. Goodwin et al. and Morse treat the disturbance-free case, whereas Egardt (1978) contains results with disturbances, too. Stability conditions are important also in the stochastic convergence analysis of STR. The convergence results presented in Ljung (1977a) for the minimum variance self-tuning regulator required a stability assumption. As mentioned above, similar results were given by Goodwin et al. (1978b) without the stability condition. Stability analysis of adaptive schemes in the presence of disturbances is the topic of the second part. The stability properties of the algorithms described in Chapters 2 and 3 are investigated using the L∞-stability concept. The main effort is devoted to algorithms with a stochastic approximation type of estimation scheme. The main results (Theorems 4.1 and 5.1) state that the closed-loop signals remain bounded under some reasonable assumptions. The most important one - boundedness of parameter estimates - can be omitted if the algorithms are slightly modified. When no disturbances affect the plant, the stability results can be used to prove convergence of the output error to zero. This result thus holds without a priori requiring the closed loop to be stable and is analogous to the above mentioned results by Goodwin et al. (1978a) and Morse (1979). Chapter 4 treats the discrete time case and the continuous time schemes are analysed in Chapter 5.
2. UNIFIED DESCRIPTION OF DISCRETE TIME CONTROLLERS
The MRAS philosophy has been applied to the discrete time case in e.g. Landau/Béthoux (1975), Bénéjean (1977), and Ionescu/Monopoli (1977). Stability theory is the major design tool. The STR approach has been used almost exclusively for discrete time systems, see e.g. Åström/Wittenmark (1973), Clarke/Gawthrop (1975), and Åström et al. (1978). The basic idea is to use a certainty equivalence structure, i.e. to use a control law for the known parameter case and just replace the unknown parameters by their estimates. Since the control algorithms obtained by the MRAS and the STR approaches are very similar, it is of interest to investigate the connections between the two approaches. Results in this direction are given in Gawthrop (1977) and Ljung/Landau (1978). A unified treatment of MRAS and STR for problems with output feedback will be presented in this chapter. It will be shown that MRAS can be derived from the STR point of view. Some problems in the design and analysis of the discrete time schemes are also discussed. In particular, the nature of the positive real condition, associated with both MRAS and STR, will be examined in detail. It is shown that this condition can be avoided in the deterministic case.
2.1. Design method for known plants
A design method, which will be the basis for the general adaptive algorithm in the next section, is described below. It consists of a pole placement combined with zero cancellation and adding of new zeros. Related schemes are given in e.g. Bénéjean (1977), Ionescu/Monopoli (1977), Gawthrop (1977), and Åström et al. (1978). The plant is assumed to satisfy the difference equation
    A(q⁻¹) y(t) = q^{-(k+1)} b₀ B(q⁻¹) u(t) + w(t),                              (2.1)
where q⁻¹ is the backward shift operator, k is a nonnegative integer, and A(q⁻¹) and B(q⁻¹) are polynomials defined by
    A(q⁻¹) = 1 + a₁q⁻¹ + ... + a_n q^{-n}
    B(q⁻¹) = 1 + b₁q⁻¹ + ... + b_m q^{-m}.
Furthermore, w(t) is a nonmeasurable disturbance.
REMARK
The parameter b₀ is not included in the B-polynomial, because it will be treated in a special way in the estimation part of the adaptive controller in the next section.
The objective of the controller design is to make the difference between the plant output y(t) and the reference model output yM(t) as small as possible. The reference output yM is related to the command input uM by the reference model, given by
    yM(t) = q^{-(k+1)} (BM(q⁻¹)/AM(q⁻¹)) uM(t) = q^{-(k+1)} (b₀^M + b₁^M q⁻¹ + ... + b_m^M q^{-m}) / (1 + a₁^M q⁻¹ + ... + a_n^M q^{-n}) uM(t).      (2.2)
It is no restriction to assume that the polynomial degrees n and m are the same in the model and the plant, because coefficients may be zero in (2.2) and it is easy to add zeros or poles by modifying uM. It is seen that the time delay of the reference model is greater than or equal to the time delay of the plant. This is a natural assumption to avoid noncausal control laws. The problem will be approached by assuming the controller configuration shown in Fig. 2.1. Here R, S, and T are polynomials in the backward shift operator. Motivation for this structure can be found in e.g. Åström et al. (1978). It can be shown that the controller is closely related to the solution in a state space setup with Kalman filter and feedback from the state estimates. Notice that the process zeros are cancelled. This implies that only minimum phase systems can be considered. Other versions which allow nonminimum phase systems are discussed in Åström et al. (1978). The T-polynomial can be interpreted as the characteristic polynomial of an observer.
Figure 2.1. Controller configuration: the command uM enters through T(q⁻¹)BM(q⁻¹), the plant output is fed back through -S(q⁻¹), and the sum is passed through 1/(b₀ B(q⁻¹) R(q⁻¹)) to form the plant input u.
The design procedure will be given for two different problems. In the first one, the disturbances are neglected and the problem is treated as a pure servo problem. This means that the design concentrates on tracking a given reference signal. The procedure will be referred to as a deterministic design. On the other hand, if the disturbance is considered as part of the problem, the controller should have a regulating property too. An interesting special case is when the disturbance w(t) is a moving average, given by
    w(t) = C(q⁻¹) v(t) = (1 + c₁q⁻¹ + ... + c_n q^{-n}) v(t),                    (2.3)
where {v(t)} are independent, zero-mean random variables. A design procedure which has the objective to reject noise of the form (2.3) will be called stochastic. The deterministic design is considered first.
Deterministic design
Assuming w(t) = 0, it is possible to have the plant output equal to the reference model output yM(t). This is obtained by making the closed-loop transfer function equal to the reference model transfer function, i.e.
    q^{-(k+1)} BM(q⁻¹)/AM(q⁻¹) = q^{-(k+1)} b₀ B(q⁻¹) T(q⁻¹) BM(q⁻¹) / [A(q⁻¹) b₀ B(q⁻¹) R(q⁻¹) + q^{-(k+1)} b₀ B(q⁻¹) S(q⁻¹)]
or, equivalently,
    T(q⁻¹) AM(q⁻¹) = A(q⁻¹) R(q⁻¹) + q^{-(k+1)} S(q⁻¹).                          (2.4)
The observer polynomial T is cancelled in the closed-loop transfer function. Neglecting the effects of initial values, it can therefore be chosen arbitrarily. When T has been determined, the equation (2.4) has many solutions R and S. It will, however, be required that the degree of R is less than or equal to the time delay k. Then there is a unique solution to (2.4). The degree of S will depend on n, k, and the degree of T. Furthermore, it is required that R(0) ≠ 0 in order to get a causal control law. As seen from (2.4), this is equivalent to T(0) ≠ 0. Finally, the R- and T-polynomials are scaled so that T(0) = R(0) = 1. The deterministic design procedure can thus be summarized in the following steps (a small numerical sketch follows the steps):
1) Choose the polynomial T(q⁻¹), defined by
    T(q⁻¹) = 1 + t₁q⁻¹ + ... + t_{n_T} q^{-n_T}.
2) Solve the polynomial equation
    T(q⁻¹) AM(q⁻¹) = A(q⁻¹) R(q⁻¹) + q^{-(k+1)} S(q⁻¹)
for the unique solutions R(q⁻¹) and S(q⁻¹), defined by
    R(q⁻¹) = 1 + r₁q⁻¹ + ... + r_k q^{-k}
    S(q⁻¹) = s₀ + s₁q⁻¹ + ... + s_{n_s} q^{-n_s},    n_s = max(n + n_T - k - 1, n - 1).
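The identity in step 2 can be solved by truncated polynomial division. The short sketch below illustrates this for coefficient vectors in increasing powers of q⁻¹; the numerical first-order example at the end is an illustrative assumption, not taken from the text.

```python
import numpy as np
from numpy.polynomial import polynomial as Poly

def design(A, T, AM, k):
    """Solve T(q^-1)*AM(q^-1) = A(q^-1)*R(q^-1) + q^-(k+1)*S(q^-1)
    for R of degree k with R(0) = 1, and the corresponding S."""
    c = Poly.polymul(T, AM)                     # left-hand side coefficients
    R = np.zeros(k + 1)
    R[0] = 1.0
    # match coefficients of q^0 ... q^-k (S does not contribute there)
    for i in range(1, k + 1):
        acc = c[i] if i < len(c) else 0.0
        for j in range(1, min(i, len(A) - 1) + 1):
            acc -= A[j] * R[i - j]
        R[i] = acc
    # the remainder, shifted by q^-(k+1), gives S
    AR = Poly.polymul(A, R)
    n = max(len(c), len(AR))
    rem = np.zeros(n)
    rem[:len(c)] += c
    rem[:len(AR)] -= AR
    return R, rem[k + 1:]

# illustrative example: A = 1 - 0.5 q^-1, T = 1, AM = 1 - 0.2 q^-1, k = 0
R, S = design(np.array([1.0, -0.5]), np.array([1.0]), np.array([1.0, -0.2]), 0)
# R = [1.0], S = [0.3]:  A*R + q^-1*S = 1 - 0.2 q^-1 = T*AM
```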
Stochastic design
The deterministic design procedure can of course be used also when disturbances are acting on the plant. The choice of observer polynomial will, however, be of importance not only during an initial transient period. If it is assumed that w(t) is given by (2.3), then it is well-known that the optimal choice of observer polynomial is T(q⁻¹) = C(q⁻¹), in the sense of minimum variance. This is explicitly demonstrated in Gawthrop (1977) as a generalization of the result on minimum variance regulators in Åström (1970).
2.2. Class of adaptive controllers
A general adaptive control scheme is defined in this section. The scheme is a self-tuning version of the controller described in the preceding section. It will be shown to include earlier proposed MRAS and STR as special cases. The plant is still assumed to satisfy (2.1). The following assumptions are also introduced.
A1) The numbers of plant poles, n, and zeros, m, are known.
A2) The time delay k is known and the sign of b₀ is known. Without loss of generality b₀ is assumed positive.
A3) The plant is minimum phase, i.e. the numerator polynomial B(q⁻¹) in (2.1) has its zeros outside the unit circle.
REMARK
Notice that some coefficients in A(q⁻¹) or B(q⁻¹) may be zero. It therefore suffices to know an upper bound on the polynomial degrees to put the equation into the form of (2.1) with known n and m. The condition on k in A2) is the counterpart of the continuous time condition that the pole excess (i.e. the difference between the number of poles and number of zeros) is known. Compare Chapter 3. The minimum phase assumption was commented upon in Section 2.1.
The objective of the controller is the same as in Section 2.1, i.e. to minimize the error defined by e(t) = y(t) - yM(t). The controller to be described uses an implicit identification, Åström et al. (1978). This means that the controller parameters are estimated instead of the parameters of the model (2.1). The first step in the development of the algorithm is therefore to obtain a model of the plant, expressed in the unknown controller parameters. Thus, use the identity (2.4) and the equations (2.1) and (2.2) to write for the error:
    T AM e(t) = T AM y(t) - T AM yM(t) = (A R + q^{-(k+1)} S) y(t) - T AM yM(t) =
              = q^{-(k+1)} [b₀ B R u(t) + S y(t) - T BM uM(t)] + R w(t).         (2.5)
To obtain some flexibility of the model structure, a filtered version of the error will be considered. Let Q and P be asymptotically stable polynomials, defined by
    Q(q⁻¹) = 1 + q₁q⁻¹ + ... + q_{n_Q} q^{-n_Q}
    P(q⁻¹) = P₁(q⁻¹) P₂(q⁻¹) = 1 + p₁q⁻¹ + ... + p_{n_P} q^{-n_P},
where P₁ and P₂ are factors of P of degree n_{P₁} and n_{P₂} respectively. It is assumed that P₁(0) = P₂(0) = 1. Define the filtered error by
    e_f(t) = (Q(q⁻¹)/P(q⁻¹)) e(t).
Note that e_f(t) is a known quantity, because y(t) is measured and yM(t), Q, and P are known. Using (2.5), e_f(t) can be written as
    e_f(t) = (Q/(T AM)) q^{-(k+1)} [b₀ B R u(t)/P + S y(t)/P - T BM uM(t)/P] + (Q R/(T AM P)) w(t) =
           = (Q/(T AM)) q^{-(k+1)} [b₀ u(t)/P₁ + b₀ (B R - P₂) u(t)/P + S y(t)/P - T BM uM(t)/P] + (Q R/(T AM P)) w(t).      (2.6)
REMARK
The polynomials Q and P give the necessary flexibility to cover both MRAS and STR. The exact choices of the polynomials and their degrees will be commented on in the examples in Section 2.3. It should also be noted that instead of polynomials Q and P, one could consider rational functions. We will, however, not elaborate this case. □
The general adaptive controller will first be given for the deterministic design case.
Deterministic design
The observer polynomial is now determined a priori. Let θ be a vector containing the unknown parameters of the polynomials BR - P₂ and S/b₀ and the constant 1/b₀ as the last element. Note that θ contains the parameters of the controller described in Section 2.1. Furthermore, define the vector ω(t) from
    ωᵀ(t) = [u(t-1)/P, u(t-2)/P, ..., y(t)/P, y(t-1)/P, ..., -T BM uM(t)/P],     (2.7)
where the numbers of u- and y-terms are compatible with the definition of θ. Note that the elements of ω are known signals. Using the definitions of θ and ω, it is possible to write (2.6) as
    e_f(t) = (Q/(T AM)) q^{-(k+1)} [b₀ u(t)/P₁ + b₀ θᵀ ω(t)] + (Q R/(T AM P)) w(t).      (2.8)
This model, which contains the unknown controller parameters b₀ and θ, can be taken as a basis for a class of adaptive controllers. The intention is to estimate the unknown parameters b₀ and θ, and to use these estimates in the control law. The estimation procedure can be designed e.g. to force a prediction error of e_f(t) to zero. Note that e_f(t) is itself a known quantity. Taking the different possibilities of choosing e.g. estimation algorithm and control law into consideration, a class of controllers can be characterized in the following way.
BASIC CONTROL SCHEME
o Estimate the unknown parameters b₀ and θ (or some combination of these) in the model (2.8).
o Use these estimates to determine the control signal.
A natural requirement on the controller is that it performs as the controller in Section 2.1 if the parameter estimates are equal to the true parameters.
Stochastic design
The algorithm described above can of course be used also when w ≠ 0. However, if w(t) is given by (2.3) with an unknown C-polynomial, it was seen in Section 2.1 that the choice T = C is optimal. Since C is unknown, it might be desirable to estimate it. Some minor changes are then needed. Concatenate the θ-vector with a vector whose elements are the unknown parameters of C/b₀. Also, redefine the ω-vector as
    ωᵀ(t) = [u(t-1)/P, u(t-2)/P, ..., y(t)/P, y(t-1)/P, ..., -BM uM(t-1)/P, ..., -BM uM(t)/P].      (2.9)
The filtered error can then be written as
    e_f(t) = (Q/(C AM)) q^{-(k+1)} [b₀ u(t)/P₁ + b₀ θᵀ ω(t)] + (Q R/(AM P)) v(t),        (2.10)
which constitutes the model for a class of algorithms in the same way as in the deterministic design case. The class of algorithms described above contains many different schemes. Apart from the selection of the polynomials Q and P and the choice between a fixed or an estimated observer polynomial, the choices of control law and estimation algorithm generate different schemes. The choice of estimation algorithm will be commented on in connection with some examples in Section 2.3 and further discussed in Section 2.4. To proceed, it is, however, suitable to specify one particular method.
A characteristic feature of the model reference methods is that the estimation is based on a model like (2.8), where the parameters b₀ and θ enter bilinearly. The estimation scheme will be described in the deterministic design case. Let b̂₀(t-1) and θ̂(t-1) denote estimates at time t-1 of b₀ and θ. Using the model (2.8), a one step ahead prediction of e_f(t) is defined as
    ê_f(t|t-1) = (Q/(T AM)) [b̂₀(t-1) u(t-k-1)/P₁ + b̂₀(t-1) θ̂ᵀ(t-1) ω(t-k-1)].          (2.11)
The prediction error ε(t) is defined as
    ε(t) = e_f(t) - ê_f(t|t-1),                                                        (2.12)
where e_f(t) is given by (2.8), and is usually used in the parameter updating. The following expression is obtained for ε(t) if it is assumed that the disturbance w(t) is equal to zero:
    ε(t) = (Q/(T AM)) [ [b₀ - b̂₀(t-1)] (u(t-k-1)/P₁ + θ̂ᵀ(t-1) ω(t-k-1)) + b₀ [θ - θ̂(t-1)]ᵀ ω(t-k-1) ].          (2.13)
The following parameter updating is used in the constant gain case:
    [b̂₀(t)]   [b̂₀(t-1)]       [u(t-k-1)/P₁ + θ̂ᵀ(t-1) ω(t-k-1)]
    [θ̂(t) ] = [θ̂(t-1) ] + Γ · [ω(t-k-1)                        ] ε(t),                 (2.14)
where Γ is a constant, positive definite matrix.
It is straightforward to define stochastic approximation (SA) or least squares (LS) versions of the algorithm (2.14). For LS F is replaced by P(t) = [zt ¢(s) ~(s)T] -l and a SA variant uses e.g. l / t r P-l(t) instead of F. Here
~(t) ~ [ u(t-k-l) ] P+ ~T(t-l) I ~(t-k-l) ~(t-k-l)
n
18 The intention with the algorithm (2.14) is to e x p l o i t the properties of a s t r i c t l y
positive real t r a n s f e r function in order to establish
convergence of ~ ( t ) to zero. The motivation is the successful use of Lyapunov theory and the Kalman-Yakubovich lemma in continuous time, see Chapter 3. The problems that arise w i l l
be discussed next. Let us
j u s t b r i e f l y comment on the stochastic case. The algorithm given by (2.11),
(2.12), and (2.14) cannot be d i r e c t l y applied to the model
(2.10), because the C-polynomial is unknown. This implies that the prediction cannot be calculated according to (2.11). An easy modification is to replace C in f r o n t of the paranthesis with an a p r i o r i estimate of C or even with unity.
Choice of control law The control law, given in Section 2.1, can be w r i t t e n as u ( t ) = - Pl(q - I )
[@T~(t)],
where 8 is the vector of true parameters. Compare (2.6),
(2.8). Any
reasonable control law should equal t h i s one when the parameter estimates are correct.
Notice that a parameter estimator l i k e (2.14)
has the objective to force the prediction error c ( t ) to zero. I t would thus be desirable to choose a control such that ~ f ( t l t - l ) equal to zero, because convergence of e f ( t ) from the convergence of ~ ( t ) to zero, cf.
is
to zero would then follow (2.12). This is accomplished
by the control law u ( t ) = - Pl(q - I )
[oT(t+k) ~ ( t ) ]
as seen from (2.11). This control
law is however noncausal. I t is
therefore natural to modify i t in the following way: u ( t ) = - Pl(q - I ) This control considered,
[§T(t) ~ ( t ) ] .
law i~ used in a l l control schemes of the type
(2.15)
19
Difficulties w~h conv~ence analysis There are two key problems in the analysis of the schemes of MRAS type described above. The f i r s t
problem is t h a t the control law (2.15) has
to be used i f a causal control law is required. This implies t h a t ~f(tlt-l)
is not equal to zero in the case k ¢ O. This in turn means
t h a t i t is not easy to conclude t h a t e f ( t )
tends to zero even i f
E(t) tends to zero. The second problem is to show t h a t ~ ( t ) tends to zero. Consider f o r s i m p l i c i t y the case k = O, which is analogous to the case f o r c o n t i n uous time systems, where the pole excess is equal to one, cf. Chapter 3. Then ~ ( t ) is equal to e f ( t )
i f the control law (2.15) is used.
Contrary to the continuous time case, convergence of e f ( t ) cannot be proved s t r a i g h t f o r w a r d l y .
to zero
The reason is the f o l l o w i n g one.
I f the control law (2.15) is used and i t
is assumed t h a t b0 = I , the
equation (2.13) can be w r i t t e n ~(t) = ef(t)
= H(q-l) " q - l [ - O T ( t )
~(t)].
(2.16)
Here H(q_l) :
Q(q-l) T(q - I ) AM(q- I )
and @(t) : @ ( t ) - 8 . In continuous time the estimation e r r o r ~ ( t ) is given by ~ ( t ) = G(p) [-@T(t) m ( t ) ] . Compare Chapter 3. P o s i t i v e realness of G(p) can be used to prove the convergence of ~ ( t ) to zero. I t i s however not possible to use the same arguments in d i s c r e t e time, because the t r a n s f e r f u n c t i o n H ( q - l ) . q - I can never be made p o s i t i v e r e a l . The d i f f e r e n c e appears because a d i s c r e t e time t r a n s f e r f u n c t i o n must contain a feedthrough term to be s t r i c t l y f u n c t i o n may be
p o s i t i v e r e a l , whereas a continuous time t r a n s f e r
s~o_~y proper. This d i f f i c u l t y
Landau/B~thoux (1975).
is also emphasized in
20 The problem mentioned above and also, in the case k # O,
the
previously mentioned problem to r e l a t e convergence of ~ ( t ) and e f ( t ) are c l o s e l y related to the boundedness of the signals of the closed loop system. This is pointed out in e.g. lonescu/Monopoli (1977).
2.3. Examples of the general control scheme Some special cases of the basic control scheme, proposed in the preceeding section, w i l l now be given. Both model reference adaptive systems and s e l f - t u n i n g regulators w i l l be shown to f i t
into the
general description.
EXAMPLE 2.1. Ione~cu's and Monopo2W_,s schP.me The scheme in lonescu/Monopoli (1977) is a straightforward t r a n s l a t i o n i n t o discrete time of the continuous time MRAS by Monopoli (1974). I t is possible to t r e a t the scheme as a special case of the general algorithm in the following way. Choose the polynomials as P1 = T
of degree k
P2
of degree n-I
Q = P = PiP2
of degree n+k-l.
The equation (2.6) then transforms into
P2 q-(k+l) [ b0 Tu(t) ef(t) : e(t) =~-~ 1 + b0(BR-P2) u(t) p + s y t) _ BM
-
P2 uM(t)]'l
(2.17)
where the disturbance w has been assumed to be zero as in the o r i g i n a l presentation. This is the model used by Ionescu and Monopoli and the estimation scheme is s i m i l a r to the one in (2.14). The polynomial P2 is chosen to make the t r a n s f e r function P2/AM s t r i c t l y
p o s i t i v e real.
Some modifications of the estimation scheme are done to handle the
21
problems solution Monopoli that the
discussed in the preceeding section, although no complete is presented. The concept of augmented eJu~or, introduced in (1974), is translated into discrete time. I t can be shown augmented error n(t) in the case k = 0 is given by
P2 n(t) = ~(t) - ~-~ [Kn - n ( t ) , l ~ ( t - l ) I 2 ] , where K is a constant. I t is shown that n(t) tends to zero, but a boundedness assumption is needed to establish convergence of ~(t) or ef(t). Finally i t should be noted that the polynomials Pl and P2 are called Zf and Zw in Ionescu/Monopoli (1977).
o
EXAMPLE 2.2. B~n~jean's scheme A discrete time MRASis presented in B~n~jean (1977). I t can be shown that the algorithm is very similar to Ionescu's and Monopoli's scheme. The model used by B~n~jean is obtained by reparametrizing (2.17) as follows: ef(t) = e(t) =~-~P2q-(k+l) [bo u(t)-uM(t)pl +bo(BR-P2) u(t)-uM(t)P + + S y(t)p + (boBR_BMPI) U_~pt]. The estimation algorithm used is similar to the one used by Ionescu and Monopoli. Note that more parameters have to be estimated because of the reparametrization. [] In the two MRASexamples above the natural choice Q : P has been used. This implies that the filtered error ef(t) equals the error e(t). Another possibility is to choose the polynomials so that the transfer function Q/TAM becomes very simple. This is done below.
EXAMPLE 2.3. SeZf-~Lng pole pZaeementaZgo~Cthm A pole placement algorithm with fixed observer polynomial is described in Astr~m et al. (1978). I t can be generated from the general structure in the following way. Choose the polynomials as
22
Q : TAM P = P1 = P2 = I , which means t h a t e f ( t )
= TAM e ( t ) .
This implies t h a t (2.8) has the
simple form ef(t)
= q - ( k + l ) [ b 0 u ( t ) + b0 e T m ( t ) ] ,
(2.18)
where the elements of ~ are simply lagged i n p u t and output s i g n a l s . The disturbance has been assumed to be zero. The model (2.18) i s used f o r the s e l f - t u n i n g r e g u l a t o r with a minor m o d i f i c a t i o n . The parameters which are estimated by a l e a s t squares algorithm are b 0 and bOB. Since the l a s t element in @ is I / b O, the e f f e c t is t h a t one parameter is known to be equal to one. I f 8 and m are redefined not to include the l a s t known element, the equation (2.18) can be w r i t t e n as ef(t)
= TAM [ y ( t ) - y M ( t ) ]
= q-(k+l)[b Ou(t)+b OBT~(t)]-TAMyM(t),
which is the model used. In the three examples above the choice of observer polynomial T was made in advance. However, i f there is noise of the form given by (2.3), the optimal choice of observer polynomial is T = C, which is unknown. I t can then be estimated as described in Section 2,2. Below some schemes of this type will be described.
EXAMPLE 2.4. Ast~m'6 and Wittenmark's self-tuning regul~utor The basic self-tuning regulator is described in Astr~m/Wittenmark (1973). I t is based on a minimum variance strategy, which minimizes the output variance. This is a special case of the problem considered in Section 2.1 with AM = l and uM = yM = O. Inserting this into (2.6) and using the polynomials Q = P = l , the following is obtained:
ef(t)
: y(t)
l q-(k+l =~ )[bou(t)+bo(BR-l)u(t)+Sy(t)]+Rv(t).
This model can be w r i t t e n analogously with (2.10) as ef(t)
y ( t ) : ~1 q- (k+l )[b 0 u ( t ) 8+ T ~(t)] + R v(t)
(2.19)
23
and is the basis for the self-tuning regulator. Since C is unknown, the prediction is chosen as in (2.11) with T = C replaced by unity. Compare the discussion in Section 2.2. Hence, ~
AT
e f ( t l t - l ) = y ( t l t - l ) = bo(t-l)u(t-k-I )+e ( t - l ) ~ ( t - k - l ) .
(2.20)
The fact that C is included in (2.19) but not in (2.20) makes i t somewhat unexpected that the algorithm really converges to the optimal minimum variance regulator. I t is shown in Ljung (1977a) that the scheme (with a stochastic approximation estimation algorithm) converges i f I/C is s t r i c t l y positive real. I f instead a least squares estimation algorithm is used, convergence holds i f I / C - I / 2 is SPRo The condition on I/C and i t s relation to the positive real condition for MRASwill be further examined in the following section,
o
EXAMPLE 2.5. Clarke's and Gawthrop's self-tuning controller Clarke and Gawthrop (]975) consider a 'generalized output' @(t) = P(q-l)y(t) + Q(q-l)u(t-k-l) - R(q-l)uM(t-k-l) and applies the basic self-tuning regulator to the system generating this output. I t is possible to treat the algorithm within the general structure in the special case Q = 0 in their notation. Thus change the notation into: @(t) : AM(q-l)y(t) - q-(k+l)BM(q-l)uM(t). Then i t follows that ¢(t) equals ef(t) = AMe(t) i f P = l and Q = AM. I f the noise is given by (2.3) and T is chosen to be equal to C, the equation (2.6) can be written as ef(t)
= l
~ q-
(k+l
)[bou(t ) + bo(BR-])u(t ) +Sy(t) - CBMuM(t)]+ Rv(t).
This is the model used in the self-tuning controller. The fact that the f i r s t parameter in C is known to be unity is exploited. The prediction is calculated as in Example 2.4, i.e. C in front of the parenthesis is replaced by unity. The estimation scheme is a least squares algorithm,
o
24
2.4. The positive real condition A special model structure and a specific estimation scheme were described in Section 2.2. The structure was obtained from an analogy with the model reference adaptive systems in continuous time. The intention was to use the properties of positive real transfer functions to establish convergence. I t was noted in Example 2.4 that a positive real condition also appears in the analysis of a self-tuning regulator in the presense of noise. The relations between the conditions in the two cases w i l l be treated below. F i r s t consider the d e t e r m i n i s t i c design case and f o r s i m p l i c i t y assume that k = 0 and b0 = I.
I f the control law (2.15) is used, we
have from (2.16) ~(t) = -H(q-l)[sT(t-l)~(t-l)].
(2.21)
We want to show in a simple way that a p o s i t i v e real condition r e a l l y appears in the analysis in a natural way. To do so, assume that a modified version of the parameter updating (2.14) is used:
e(t) = ~ ( t - l ) +
m(t-l)
I (t-l)i z
c(t).
(2.22)
This algorithm is s i m i l a r to stochastic approximation schemes and is used in e.g. lonescu/Monopoli
(1977).
Subtract the true parameter vector 0 from both sides, m u l t i p l y from the | e f t by the transpose and use (2.21) to get J~(t)I 2 = J S ( t - l ) I 2 + 2 8 T ( t - l ) m ( t - l ) l~(t-l)l 2
= l
(t-l)l 2
= l~(t-l)I 2
2
E(t) +
~2(t) l~o(t-l)I 2
~H(q-')/ + ~2(t) = I~o(t-l)l 2 Im(t-l)l 2
2 ~(t)[(I/H-I/2)c(t)]
Im(t-l)I 2
(2.23)
I t can be seen that the p o s i t i v e real condition enters in a natural
25 way. I f I / H - I / 2 is positive real, the parameter error will eventually decrease. Moreover, c ( t ) / I m ( t - l ) l tends to zero i f I / H - I / 2 is SPR. I t should be noted that the boundedness condition mentioned in Section 2.2 appears because (2.23) only proves convergence of ~(t) / Im(t-l)l. I t is straightforward to show that the positive real condition can be avoided. Thus, let ~ denote the signal obtained by f i l t e r i n g x by Q/TAM and rewrite (2.8) as ef(t) =
q-l[ u(t) + 0T [pl (q_l) ~(t) ],
(2.24)
where the same assumptions as above are used. Now consider this as being the model instead of (2.8). The prediction (2.11) is then replaced by ^ ~(t-l) ^ e f ( t l t - l ) = P1 + BT(t-l) ~ ( t - l ) , which is different from (2.11) because §(t) is timevarying. Instead of (2.21) we then have
E(t) = - § T ( t - l ) ~ ( t - l ) . I f the parameter updating (2.22) is replaced by
§(t) = @(t-l) + ~ ( t - l ) ~(t), l~(t-1)l 2
(2.25)
the following is obtained:
Io(t)l 2 : Io(t-l)l 2 + 2 oT(t-]) ~ ( t - l ) E(t) + l~(t-l)l 2 : l~(t-l)l 2 -
c2(t) I~(t-l)l 2
c2(t) l~(t_l)i2 "
I t thus follows that c ( t ) / I ~ ( t - l ) l tends to zero without any positive real condition. Of course the boundedness of the closed loop signals mentioned in Section 2.2 is s t i l l a problem. The conclusion is that i t is possible to eliminate the positive real condition in the determini s t i c design case i f a modified estimation scheme is used.
26
Now consider the stochastic design, where the observer polynomial C is estimated. The transfer function H(q-l), which was previously known, now contains the unknown C-polynomial. This implies that the f i l t e r i n g in (2.24) and (2.25) cannot be done with the true C-polynomial. The positive real condition then enters in the same way as in Example 2.4. The positive real condition on H(q- I ) = I/C(q - l ) and a boundedness condition are in fact sufficient to assure convergence for the s e l f -tuning regulator in Example 2.4, see Ljung (1977a), Anatural modification in order to weaken the condition on C is to f i l t e r with I / ~ ( t ) , where C(t) is the timevarying estimate of C. This is further discussed in Ljung (1977a). The conclusion of the discussion above is that the positive real condition, which appears in the analysis of both deterministic MRAS and stochastic STR, are of a similar technical nature. There is, however, an important difference. The condition can be eliminated for the deterministic case by choosing another estimation algorithm, which includes f i l t e r i n g by the transfer function H(q-l). In the stochastic case, the positive real condition is not possible to be dispensed with in the same way, because the f i l t e r is unknown.
3. UNIFIED DESCRIPTION OF CONTINUOUS TIME CONTROLLERS
The MRASschemes were originally developed in continuous time. The solution for the problem with output feedback was given in Gilbart et al. (1970) for the easy case with pole excess of the plant equal to one or two. The pole excess is defined as the difference between the number of poles and the number of zeros. The solution was reformulated in a nice way by Monopoli (1973). Monopoli (1974) introduced the concept of #J~gmen.ted eyu~or to treat the general case. Similar schemes are proposed by B6n~jean (1977), Feuer/Morse (1977), and Narendra/Valavani (1977). Self-tuning regulators have not been formulated in continuous time before. Yet, i t is of interest to relate the MRAS philosophy and the separation idea behind the STR in continuous time too. In this chapter some MRAS schemes w i l l be derived in a unified manner from the STR point of view. The development gives a new interpretation of the augmented error, introduced by Monopoli. Some problems in the analysis are also pointed out and the positive real condition f o r MRAS is examined. I t is shown that the condition can be dispensed with. I t should be noted that the treatment of the continuous time schemes is not as complete as for discrete time. Only the determini s t i c design is considered. I t should, however, be possible to carry through a development, analogous with discrete time, in the stochastic design case too.
3.1. Design method for known plants Before a unified description of several algorithms is given, the known parameter case has to be considered. A design scheme, which includes interesting special cases, w i l l
be described in this section.
I t is analogous to the discrete time procedure in Section 2.1. The scheme is given in Astr~m (1976) and special cases are treated in e.g. Narendra/Valavani (1977), and B~n~jean (1977).
28
The plant is assumed to satisfy the d i f f e r e n t i a l equation boB(P)
y(t)
-
A(p)
bo(pm + blpm-l + . . . + bm)
u(t) =
pn +alpn-I + ... +an
u(t),
(3.1)
where p denotes the d i f f e r e n t i a l operator. REMARK l
I t is assumed that there is no disturbance. I t is convenient to make this assumption in this chapter, because the design is deterministic. Disturbances w i l l , however, be introduced in the s t a b i l i t y analysis in Chapter 5.
o
REMARK 2
The parameter b0 is not included in the B-polynomial, because i t w i l l be treated in a special way in the estimation part of the adaptive controller in the next section. Compare Chapter 2.
o
The objective of the controller is to make the closed-loop transfer operator equal to a reference model transfer operator, given by b~ pm+ . . . + bM
yM(t) - BM(p) uM(t) . . . . . . . . . AM(p)
pn+a ~ pn-l +.
~
+a~ uM(t)"
(3.2
)
Here yM(t) is the desired output of the closed loop system and uM(t) is the command input. I t is seen that the pole excess of the reference model is greater than or equal to the pole excess of the plant. This assumption is made to avoid differentiators in the control law. Analogous to the discrete time case, a controller structure as shown in Fig. 3.1 w i l l be considered. The controller polynomials R, S, and T are polynomials in the d i f f e r e n t i a l operator p. The configuration is motivated in e.g. Astr~m {1976). As in the discrete time case i t is related to a solution with Kalman f i l t e r and state estimate feedback. The T-polynomial can be interpreted as an observer polynomial. Also note that the B-polynomial is cancelled, r e s t r i c t i n g the design method to minimum phase systems.
29
I uM I
I
"JT(p)BM(P )
1 Plant I u J b0 B(PlI I 7 A(p} "J'
BIp)IR~)
I I
I Controller
._J
!
F~g~e 3.1. C o n t r o l l e r configuration. The desired closed-loop t r a n s f e r function i s obtained i f the polynomials R, S, and T are chosen to s a t i s f y the equation BM(~) = AM(p)
b0 B(p) T(p) BM(p) A(p) b0 B(p) R(p) + b0 B(p) S(p)
or, e q u i v a l e n t l y , T(p) AM(p) : A(p) R(p) + S(p).
(3.3)
The observer polynomial T(p) is cancelled in the closed-loop t r a n s f e r function. I f the effects of i n i t i a l therefore be chosen a r b i t r a r i l y .
values are neglected, i t can
When T(p) has been determined, the
equation (3.3) has many solutions S(p) and R(p). I t w i l l ,
however, be
assumed that the degree of S(p) is less than or equal to n - l , which assures that the equation has a unique s o l u t i o n , Astr~m (1976). Since the polynomials A(p) and AM(p) both have degree n, T(p) and R(p) w i l l have the same degree nT. In order to assure that the control law does not contain any d e r i v a t i v e s of the output, nT is chosen greater than or equal to n-m-l. Furthermore, R and T are scaled so that they are monic. In summary then, the design scheme consists of the following steps: I) Choose the monic polynomial T(p) nT T(p) = p + t I
pnT-1+ . . .
+tnT ,
nT ~ n - m - l .
30
2) Solve the equation T(p) AM(p) = A(p) R(p) + S(p) f o r the unique solutions R(p) and S(p), defined by S(p) = s O pn-I + . . . +Sn_l R(p) = pn T + r I
The f i r s t
pnT-1+ . . .
+rnT.
step, the choice of T(p) ( i n c l u d i n g i t s degree) does not
a f f e c t the closed-loop t r a n s f e r function. However, i t is of importance for the t r a n s i e n t properties and the e f f e c t of disturbances as was seen in Chapter 2. The importance of the noise colour f o r the choice of observer w i l l ,
however, not influence the discussion in t h i s
chapter, since only the d e t e r m i n i s t i c design case is considered.
3.2. Class of adaptive c o n t r o l l e r s The idea behind the s e l f - t u n i n g regulators w i l l be used in t h i s section to define a general class of adaptive regulators. These regulators w i l ] be adaptive versions of the c o n t r o l l e r described in Section 3.1. The class of algorithms w i l l
l a t e r be shown to include
several MRAS schemes as special cases. The plant is s t i l l
assumed to s a t i s f y (3.1). The following assump-
tions are also introduced. AI) The degrees n and m are known and m ~ n - l . A2) The parameter b0 is nonzero and i t s sign is known. Without loss of g e n e r a l i t y b0 is assumed to be p o s i t i v e . A3) The plant is minimum phase.
REMARK
Notice that i t is s u f f i c i e n t to know the pole excess and an upper
31
bound on the number of poles to write the differential equation in the form of (3.1) with known n and m. Knowledge of the pole excess is the counterpart of the discrete time condition, that the time delay is known, cf. Chapter 2. The minimum phase assumption was discussed in Section 3.]. The desired closed-loop transfer function is given by (3.2). The f i r s t step in the development is to use the results in Section 3.1 to obtain a model, expressed in the unknown controller parameters. Compare with Section 2.2. The polynomial identity (3.3) and the equations (3.1) and (3.2) are used to get the following expression for the error e(t) = y ( t ) - y M ( t ) : TAMe(t) = TAMy(t) - TAMyM(t) = (AR +S) y ( t ) - TAMyM(t) = = boBRu(t) +Sy(t)- TBMuM(t).
(3.4)
Let Pl(p), P2(p), and Q(p) be stable, monic polynomials of degree n - m - I , m+nT, and n+n T - l respectively, and let P(p) be given by P(p) = Pl(P)P2(p). Define the filtered error ef(t) = Q(P) e(t), P(p) which thus is a known quantity. Using (3.4), el(t) can be written as
/boBR
ef(t) = ~ AM L ~ u ( t )
-
S
TBM
+~y(t) -T
1
uM(t)J =
TA MQ Ibo u(t)pl + bo(BR-P2) u(t)p + S Ylt)p - TBMpuM(t)].
(3.5)
REMARK
The motive to introduce the polynomials Q and P and the filtered error is the f l e x i b i l i t y obtained. Different choices of polynomials will be seen to generate different MRASschemes in the examples in the next section. Also compare with Chapter 2. It should also be noted that
32
Q and P could be chosen as rational functions, but this generalization will not be considered here.
u
Let 0 be a vector containing the unknown parameters of the polynomials BR-P2 (degree m+n T - l )
and S/b0 (degree n - l )
and the constant I/b 0
as the last element. Note that the vector B contains the parameters of the controller, described in Section 3.1. Furthermore, define the vector m+nT- l
1
pn-I
..... -pU(t)'TY(t)'
....
TB"u"(t)]"
-T
(3.6) I t is then possible to rewrite the expression (3.5) for the filtered error e l ( t ) as ef(t)
:
Q
TAM
[b° u__~+ b0 0T ~(t)]. Pl
(3.7)
This model provides the starting-point for a class of adaptive controllers as in discrete time. Note that e l ( t ) is s t i l l a known quantity. As before, there is a lot of freedom when specifying the estimation algorithm and the control law. The development done so far thus proposes a class of adaptive controllers, defined in two steps.
BASIC CONTROLSCHEME o Estimate the unknown parameters b0 and B (or some combination of these) in the mode] (3.7). o Use these estimates to determine the control signal.
To make the discussion easier, i t is convenient to specify a particular estimation algorithm as was done in discrete time.
A special parame~ter ~t~mator A specific structure of the estimation part will be discussed below. I t is of special interest because many MRAS schemes use this structure.
33
It is analogous to the special configuration for discrete time controllers discussed in Chapter 2. Using the model (3.7), an estimate of ef(t) is defined as ef(t) = T ~M [go(t)Up_~+ ~o(t ) ~T(t ) ~o(t)],
<3.8)
where go(t) and @(t) are estimates of b0 and O. Define the difference between the filtered error and its estimate as (3.9)
c(t) = ef(t) - @f(t). The following equation is then obtained for E(t): TAM
\ Pl
(3.10) The following parameter updating is used in the constant gain case:
:(t)
=r
(311)
where F is a constant, positive definite matrix. REMARK
I t is possible to define variants of the algorithm (3.11) with F replaced by some timevarying gain. These schemes can be defined analogously with the discrete time case and the details are not given. Q
If the parameter updating (3.11) is used and the transfer function Q/TAM is s t r i c t l y positive real, i t is possible to apply Lyapunov theory and the Kalman-Yakubovich lemma to assure that ~(t) tends to zero, provided that the closed-loop signals are bounded. See e.g. Monopoli (1973). Note that the estimation scheme (3.11) was the motivation for the scheme (2.14) considered in Chapter 2. Also note that Q/TAM can always be made s t r i c t l y positive real by choosing Q appropriately, because T and AM are known and the degree of Q is one less than the degree of TAM.
34
The estimation part of the c o n t r o l l e r with MRAS structure is defined by equations (3.8),
(3.9), and (3.11). Note that e f ( t )
is known.
So f a r the second part of the c o n t r o l l e r - the control law - has not been discussed. This w i l l
be done next.
Choice of control law The choice of control law contains one d i f f i c u l t y .
I t is natural to
determine the control signal so that the estimate of the e r r o r , i . e . ef(t),
is equal to zero. According to equation (3.8) t h i s means that u ( t ) = - Pl(p) [~T(t) ~ ( t ) ] .
This control law is identical
(3.12)
to the one described in Section 3.1 i f
@(t) is equal to the true parameter vector e in (3.7). This can be seen from (3.5). The control law, however, uses d e r i v a t i v e s of the parameter estimates, except f o r the t r i v i a l
case when m = n - l ,
i . e . the pole excess is
equal to one. In t h i s case P1 is a constant. Since ~ ( t ) is in general obtained by i n t e g r a t i o n of known signals as in (3.11), i t is in fact possible to use (3.12) without d i f f e r e n t i a t o r s
also in the case
m : n -2. However, in the general case the control law must be modified in order not to include d i f f e r e n t i a t o r s .
Compare the discussion of the control
law in the case k # 0 in Section 2.2. There are d i f f e r e n t solutions proposed in the l i t e r a t u r e .
For example, Monopoli (1974) chooses a
control signal which corresponds to the choice u ( t ) = - ~T(t) [Pl(p) ~ ( t ) ] .
(3.13)
I t is clear that the control law (3.13) is asymptotically equivalent to the control law (3.12). Note that i t follows from the d e f i n i t i o n of ~ ( t ) that P l ( p ) ~ ( t ) contains f i l t e r e d signals without any d e r i v a t i v e s .
input, output and reference
The choice (3.13) does not guarantee
A
that e f ( t )
: O, and i t remains to conclude that e f ( t )
tends to zero
8B from the fact that ~ ( t ) = e f ( t ) - e f ( t )
tends to zero. This problem
is closely related to the problem of boundedness of the closed-loop signals. Compare with the discussion in Section 2.2.
3.3. Examples of the general control scheme Some special cases of the procedure proposed in Section 3.2 w i l l now be given. Several MRAS schemes proposed in the l i t e r a t u r e w i l l be shown to f i t
into the general algorithm. As a side r e s u l t , the
~gme~ted error introduced by Monopoli is given a new i n t e r p r e t a t i o n . A new algorithm is also given to i l l u s t r a t e the large number of schemes that are possible to derive from the general algorithm.
EXAMPLE 3.1. Monopo2~L's scheme The scheme by Monopoli (1974) has been frequently discussed, because i t was an attempt to solve the adaptive control problem when the pole excess of the plant is greater than two. Monopoli introduced the concept of ~u~gmented Pyutor, motivated by the r e s u l t s on adaptive observers. The scheme w i l l be described in some detail in order to show the i n t e r p r e t a t i o n of the augmented error. To be consistent with the preceeding section, the notation is d i f f e r e n t from Monopoli's. A cross reference table between the notations is given in Table 3.1. In summary, Monopoli's scheme is as follows. Monic polynomials Dw (degree n - l )
and Df (degree n - m - l )
polynomials D (degree n - 2 ) ,
are chosen. Furthermore, the
F (degree n - m - 2 ) ,
and G (degree n - l )
are solved from the i d e n t i t i e s Dw = BDf + D/b 0 ( A - A M) Df = AF + G. With these polynomials, e ( t ) = y ( t ) - y M ( t )
can be w r i t t e n as
36
TabZe 3.1. Present notation compared to Monopoli's
Dw [
present
Monopoli's
y(t)
x(t)
yM(t)
Xm(t)
uM(t) BM(p)uM(t) e(t) el(t ) n(t) A(p) boB(P) AM(p) BM(p) -D(p) G(p) boB(p)F(p )
rl(t) r(t) -e(t) y(t) -n(t) Dp(p) Du(P) Om(P) Dr(P) A(p) B(p) C(p)
u~1_ bo(Wbo+Br) u(t) _ G y(t)
e(t) = ~-~ b0 Df
^ Dw
= ~A [b 0 u ( t ) + bo Df
DfDw
" DfDw
@T
~p(t)l '
B"
D-w uM(t)
] ^ :
(3.14)
where @ and m are defined as in the derivation of the controller in Section 3.2. The augmented error n(t) is defined as n(t) : e(t) - el(t )
(3.15)
Dw(P) [So(t ) w l ( t ) ] e l ( t ) = AM(p----~
(3.16)
where
and the auxiliary signal wl(t ) is determined so that u(t) + ~T(t ) ~(t) : wl(t ). Df(p)
(3.17)
37
Here bo(t) and O(t) denote estimates at time t of b0 and e. I f the closed-loop signals are bounded, the positive realness of Dw/AM can be shown to assure the convergence of the augmented error to zero, see Monopoli (1974). However, even i f n(t) tends to zero, i t does not follow that e(t) tends to zero, which is the primary goal. Monopoli makes this conclusion under the crucial boundedness assumption. Compare the discussion in Section 3.2. I t is easy to give an interpretation of the augmented error from the equations given above. Thus, i f Equation (3.17) is inserted into Equation (3.16), the following is obtained: e1(t ) - AM(p) Compare t h i s with the i d e n t i t y (3.14). The conclusion is that e1(t ) = e ( t ) , where ~(t) is an estimate of e ( t ) using the model (3.14) with the latest available parameter estimates. From equation (3.15) i t then follows that the augmented error q is simply the estimation error, i . e . the difference between e(t) and i t s estimate ~ ( t ) . This quantity is denoted c ( t ) in the preceeding section. I t is straightforward to show that the scheme by Monopoli is a special case of the general algorithm in Section 3.2. Thus, i t is possible to v e r i f y that the expression (3.14) coincides with (3.5) i f the degree of T, nT, is chosen to be n - m - l . as follows: Q=P Df = P1 = T Dw = P2 D : bo(P2-BPI) F = P1 - R G=-S.
The polynomials are related
38
The filtered error is thus equal to the error i t s e l f . Furthermore, Monopoli chooses the control signal according to (3.13). I t can be seen from (3.17) that the control law (3.12) corresponds to the choice wl(t ) = O.
I f n-m ~ 2 i t is thus possible to set the extra
signal wI to zero and this means that the augmented error n(t) is simply equal to the error e(t).
o
EXAMPLE 3.2. B~n~jean's scheme The scheme is presented in B~n~jean (1977) and can be shown to be a variant of Monopoli's. The model used is obtained from Monopoli's after a reparametrization:
ef(t) =-~--Q Ibo u ( t ) - u M ( t ) + bo(BR_P2) u l t ) p uM(t) + S y(~) TAM P1 P _ (TBM_bOBR) U ~ p t ]. The choices of polynomials are identical to Monopoli's and so are the estimation algorithm and the choice of control law. Note that more parameters have to be estimated because of the reparametrization. []
EXAMPLE 3.3. Feu#~r's and Morse's scheme The scheme is given in Feuer/Morse (1977). A minor change of the design method described in Section 3.1 will be needed in order to treat the algorithm within the general framework. To this end, write the reference model transfer function as
BM(p) AM(p)
_
1
h(p),
To(P) YI(P)
where yo(p ) and yl(p ) are monic polynomials of degree I and n - m - l respectively. The degree of XOYl is thus equal to the pole excess of the plant and h(p) is a proper transfer operator. Now, consider h(p)uM(t) as being the input to the reference model with transfer function I/XoYl. The effect is that the development proceeds as if B M = l and A M = YOYl . The modification necessary is seen from the identity corresponding to Equation (3.3), i.e.
39
T(p) yO(p) yl(p ) = A(p) R(p) + S(p). I f nT is chosen to be equal to n, i t follows that R(p) has the same degree as YoYl, i.e. n-m. This is different from the other schemes. The control law for the known parameter case now is boB(P ) R(p) u(t) = T(p) h(p) uM(t) - S(p) y ( t ) which is identical to the one in Figure 3.1 with BM = l and uM replaced by h(p) uM.
_
ef(t)
With the above modifications, the equation (3.5) becomes
Q Ibo u(t) + bo(BR_ u_~_pt + S y(t) _ T h uM(t)] TYoY1 P1 P2) p (3.18)
where Q and P now should be of degree nT + n - m - l ,
because AM has been
replaced by YoYl. Now choose the polynomials according to Pl = YI P2=T Q = p = PIP2. Then (3.18) transforms to
i [
e(t) =~00 tbo
u(t) + YI
bo(
BR T) -
Yl T
S y(t)
h uM(t)l
Yl T - Y--~
]
which is the model used by Feuer and Morse. The estimation algorithm is the ordinary MRAS scheme. The control law is, however, a special and complicated one, derived to obtain s t a b i l i t y of the closed loop. Compare the discussion in Section 3.2.
n
EXAMPLE 3.4. Nou~.end~ta's and Va.~avani's scheme The scheme is described in Narendra/Valavani (1977). By choosing the observer order nT to be equal to n - m - l according to Pl = L P2 =
TBM/b~
Q = p = plP2 ,
and the polynomials
40 the equation (3.5) transforms into e(t)-
BML
TBM/bo M)
u(t)
+
+ S LTBM/boM--LThis is the model used and the parameter estimation is analogous to the other MRASdescribed. The polynomial L is chosen to make the M - s t r i c t l y positive real. o transfer function BML/b~A
The above four examples describe algorithms that are proposed in the l i t e r a t u r e . However, the general algorithm presented in Section 3.2 has a l o t of freedom in the choices of polynomials, estimation algorithm etc. Thus, many algorithms can easily be generated. A specific algorithm w i l l be given below, which does not require the positive real condition. Further comments on the positive real condition are given in the following section.
EXAMPLE 3.5. A ncw a~Zgo,~J~thm
When using the model (3.5) and the ordinary MRASestimation scheme, the positive real condition on Q/TAM enters. A modification of the polynomial degrees, so that degree (Pl) = n -m and degree (Q) = degree (P) = n+nT, makes i t very natural to choose Q = p = TAM. The transfer function Q/TAM = l is automatically s t r i c t l y positive real. However, since the degree of Pl is n-m, i t is possible to set e l ( t ) to ze~ro by the control law only in the case n -m = I. Compare the discussion in Section 3.2. The ordinary MRASestimation scheme of course can be used in this case too. o
41 3.4. The positive real condition The positive real condition has been seen to be an essential condition in order to guarantee convergence of the estimation error. The condition has been an attribute of the MRASalgorithms ever since Parks (1966) introduced the idea. However, i t has been demonstrated in Example 3.5 in the preceeding section that the condition can be removed i f the polynomials Q and P are chosen in a special way. I t is, in fact, possible to use a slightly different estimation algorithm in all the MRASdescribed, and eliminate the positive real condition in all cases discussed. Thus, w r i t e (3.7) as
ef(t) = boUp~ + b0 BT ~(t), where ' - ' denotes f i l t e r i n g by Q/TAM. Let the estimate ef(t) be given by ef(t) = bo(t) u ( t ) + ~o(t ) ~T(t ) $(t) P] instead of (3.8). Introduce b0(t) = b0(t) - b 0
O(t) = 8(t) - 8. The estimation error c(t) in (3.9) satisfies the equation E(t) = - bI o ,(t ,
\(~(t)pl + @T(t) ~ ( t ) ) - b0 oT(t) ~(t).
Now choose m2(t) as a c r i t e r i o n .
Regarding i t as a f u n c t i o n of b0 and
e, we have @e2(t)
@g0
: - 2c(t) [u(t) + @T(t) ~ ( t ) )
\ P1
B~2(t~ = - 2c(t) bO~(t). It is natural to make the parameter adjustment in a modified steepest
42
descent direction, i.e. I
0 (t) = r~ ~ p ~ +
~T(t) ~ ( t ) ) c ( t ) ,
~,t,(~ = F-I ~(t) E(t),
r 0 positive constant
F positive definite.
I t is possible to verify that this estimation scheme has the desired s t a b i l i t y property. Choose the Lyapunov function V(t) = rob~(t ) + b0 oT(t) rS(t). Its derivative becomes V(t) = 2robo(t ) bo(t) + 2boBT(t) r ~ ( t ) = = 2bo(t )
+8T(t) ~(t)
~(t) +2bosT(t) ~(t) e(t) = - 2e2(t)
and i t follows under mild conditions that m(t) tends to zero. The details will be considered in Chapter 5. The conclusion is that by modifying the estimation part, i t is possible to eliminate the positive real condition in all the described MRAS. I t should, however, be noted that the same situation occurs as in Example 3.5. I t is possible to have ~f(t) = 0 without differentiators only in the case n-m = I.
4,
STABILITY OF DISCRETE TIME CONTROLLERS
S t a b i l i t y of the closed loop system is fundamental in applications of adaptive control. S t a b i l i t y is also essential in most theoretical studies of mode] reference adaptive systems and self-tuning regulators. The problems which appear for discrete time MRASwere discussed in Section 2.2. I t was shown that i t is d i f f i c u l t to relate convergence of the f i l t e r e d error ef(t) and i t s prediction error c(t) in the general case k ~ 0. Moreover, even i f k = 0 the convergence problem could not readily be solved because boundedness of the closed loop signals was not easy to prove. These problems have been emphasized in e.g. Landau/Bathoux (1975) and Ionescu/Monopoli (1977). Recently, a solution in the disturbance-free case was given by Goodwin et al. (1978a). The s t a b i l i t y problem also appears for the self-tuning regulators, both in practice and in theory. The convergence results mentioned in Chapter 2 require that the estimates and the input and output signals belong to a bounded area i n f i n i t e l y often, see Ljung (1977a). Similar results by Goodwin et al. (1978b) do not, however, require the stab i l i t y assumption. S t a b i l i t y in sample mean square sense has been considered by Ljung/Wittenmark (1976) and Gawthrop (1978). The s t a b i l i t y problem can be approached in several different ways. Local s t a b i l i t y results are given by e.g. Feuer/Morse (1978). This technique is of limited interest because i t t e l l s l i t t l e about the global properties.
The global s t a b i l i t y properties are much more
d i f f i c u l t to investigate. One p o s s i b i l i t y is to apply Lyapunov theory, but the technique suffers from the d i f f i c u l t y to find a suitable Lyapunov function. This approach has been used in Feuer/Morse (1977) to design a globally stable MRAS in continuous time. The adaptive regulator obtained in this way is unfortunately very complicated. An alternative to the Lyapunov function approach is to analyse the systems directly. In fact, the partitioning of the schemes into
44 estimation and control parts suggests the following intuitive argument. Consider a situation when the parameter estimates would give an unstable closed-loop system. The plant output increases and after some time the signals are so large that the disturbances become insignificant. The estimates then tend to be accurate and consequently give a stable closed-loop. The plant output thus decreases again. There are, however, some shortcomings in the heuristic argument given above. F i r s t l y , i t is not obvious that all parameter estimates become accurate when the signal amplitudes are growing. I t might happen that only some parameter combination is accurately estimated but that the estimates which cause the i n s t a b i l i t y are s t i l l poor. Secondly, i t takes some time for the estimates to become good, even i f the signals are very large. The reasoning above will thus not be valid i f the output increases very fast OF i f the parameter adjustment is very slow. In this chapter some s t a b i l i t y results will be given for the general adaptive algorithm described in Chapter 2. The heuristic discussion given above will be converted into formal proofs. In particular, the assumptions needed to overcome the d i f f i c u l t i e s mentioned above will be discussed. Uniform boundedness of the closed-loop signals, i.e. L~%stability, will be considered. I t is then natural to assume that the inputs to the overall system, i.e. the command input uM and the disturbance w, are bounded. I f i t is not desirable to introduce this assumption, some other s t a b i l i t y concept could be considered, e.g. s t a b i l i t y in mean square as in Ljung/Wittenmark (1976). The algorithms treated are based on either deterministic or stochastic design, see Chapter 2. The estimation schemes used are stochastic approximation and least squares algorithms. The main effort has been devoted to the stochastic approximation case. For the least squares version only partial results are given. The analysis is carried out mainly for models where the parameters enter bilin~arly as in (2.8). This structure simplifies the s t a b i l i t y analysis, but the significance of the product structure is s t i l l an open problem. The algorithms under consideration are briefly described in Section
45 4.1. Some preliminary results are also given in that section. Section 4.2 gives the main theorems on L°%stability. The disturbance-free case is then considered in Section 4.3. The boundedness results are used to prove that part of the state vector, namely the output error y_yM, converges to zero. This is a f a i r l y satisfactory result for the deterministic case. P a r t i c u l a r l y , the convergence problems of the MRAS systems discussed in Chapter 2 are solved. Section 4.4 indicates some extensions of the s t a b i l i t y analysis. Section 4.5 f i n a l l y contains a discussion of the results.
4.1. Preliminaries The algorithms described in Chapter 2 were divided into two broad cathegories, called de~te~min~La or stochastic depending on the underlying design method. Expressed in another way, the two approaches deal with fixed or ~%E.m~ted observer polynomials. Results will be given for both types of algorithms but the details are worked out for the deterministic design case only. For easy reference, some of the equations describing the algorithms will be given below.
Plant model
A(q - I ) y ( t ) = b0 q-(k+l) B(q-l) u(t) + w(t).
(4.1)
Here k is a nonnegative integer, A(q - I ) = I + aI q-I + . . . +a n q-n B(q - I ) = 1 + bI q-I + . . . +bm q-m and w(t) is a disturbance which cannot be measured.
Reference mod#_~
yM(t ) =q-(k+l) BM(q- I ) uM(t) = q-(k+l)
bM+... +bMmq-m
uM(t).
l+aiM q - l + . . . a M q -n (4.2)
48 Here AM(q-I ) is asymptotically stable and uM(t) is the command input. F i l t ~ e d error [ y ( t ) - yM(t)], ef(t) = ~ e(t) = Q(q-l) p(q-l) pl(q-l) p2(q-l)
(4.3)
where
Q(q-1) = 1 + ql q-1 +... +qnQ q-nQ Pl(q -I) = 1 + Pll q-I + . . . +Plnp I q-nPl
P2(q -1) = 1 + P21 q-1 +... +P2np 2 q-nP2 are all asymptotically stable polynomials.
Deterministic d ~ i ~ n The observer polynomial
T(q -1) . 1. + . t 1. q-l+
+tn T q-nT
is chosen a priori. I t is assumed that T(q- I ) is asymptotically stable. The estimation model is given by
ef(t) = q-(k+l)[b 0 ~(t)_~l + bO 8T ~ ( t ) l + T~OR w(t). Here ~ denotes the signal obtained by f i l t e r i n g ~T(t) =
lu~ pt
.... ,
u(t-nu) y(~) p ' ~ .....
(4.4)
x with Q/TAM and y(t-ny+l) p
_ TBM uM(t)], •
p
(4.5) where nU
= max (m+k, nP2 )
ny = max (n+nT-k, n). The description of the algorithms in Section 2.2 was quite general. There are many details where several choices can be made. For the
47 analysis i t is necessary to be more specific. Two prototype algorithms will be stated. Theywill be called the P$A-~Zgo~/3t/~n (Discrete time, Stochastic Approximation) and the DLS-~..go,~&thm(Discrete time, Least Squares) respectively. The deterministic versions are defined as follows. DSA-ALGORITHM -
estimation: I ~o(t) = ~o(t_l ) + (-u(t-k-l) + ~T(t_l ) ~(t-k-l)) c(t) r(t) \ PI ~(t) = ~(t-1)+ @ 0 ~(t-k-l)~I~
r(t) = ~ r ( t - l ) +
I
(~(t-k-l)+ Pl
(4.6a) (4.6b)
~T(t_l ) ~(t_k_l))2 +
+ B~l~(t-k-l)l2 +~ o~ ~< I~ ~ o ^
(4.6c)
~(t) = el(t) - ef(tlt-I )
(4.6d)
ef(tlt-I ) = ~ o ( t - l ) ~ ( t p i - l ) + §T(t-l)~(t-k-l)]
(4.6e)
- control:
~(t) : _ ~T(t ) ~(t).
PI
(4.6f) D
REMARK I
The constant @0is an a priori estimate of bO. The importance of the choice of B0 for the stability properties will be discussed in Section
4.2.
m
REMARK 2
The stochastic approximation algorithm is a simple estimation scheme which has been used by several authors, e.g. Ionescu/Monopoli (1977)
48 and Ljung (1977a). Note that ~ is equal to one in a proper stochastic approximation scheme. This means that the "gain" in the algorithm is decreasing as ] / t when t ~ .
This case is not covered in the analysis.
Compare with the discussion in connection with Theorem 4.1 in the next section. The constant ~ may be included for numerical reasons. I f : 0 , i t will be assumed that r ( t } is made nonzero e.g. by using ~ > 0 and a s t r i c t l y positive i n i t i a l value of r ( t ) . Finally note that i f k=O the estimation of bO, (4.6a), is eliminated and also ~(t) = e f ( t ) . This follows from (4.6f).
a
REMARK 3
The control law (4.6f) is identical to the one discussed in Chapter 2, see (2.15).
n
REMARK 4
Note that the f i l t e r i n g by the transfer function Q/TAM is done as described in Section 2.4. This implies that no positive real condition is needed. More important is perhaps the fact that the f i l t e r i n g seems to improve the transient properties of the algorithms, This is i l l u strated in the following example.
[]
EXAMPLE 4.1 Consider a plant given by ( I - 0 . 9 q-l) y ( t ) = u ( t - l ) . Let the reference mode] be (I-0.7
q - l ) yM(t ) = 0.3 uM(t-l).
The plant is controlled by the MRAS by lonescu and Monopoli.
I t is
seen in Example 2.1 that the polynomials should be chosen as Q=P=T=I and the transfer function P2/AM = 7 / ( I - 0 . 7 q-l) is then s t r i c t l y positive real. The estimation scheme in the DSA-algorithm is used with B0 = I, ~ = O, and ~ = 0.2. Two parameters have to be estimated, namely so = 0.2 and I/b 0 = I. The behaviour of the algorithm is
49
shown in Figures 4.1 - 4.3 when uM is a square wave. The o r i g i n a l algorithm has an o s c i l l a t o r y t r a n s i e n t behaviour as seen in Fig. 4.1, These o s c i l l a t i o n s are e l i m i n a t e d i f
the signals are f i l t e r e d
by
Q/TAM as in the DSA-algorithm. This can be seen in Fig, 4,2. I t should be noted t h a t the e f f e c t s can be more or less pronounced with d i f f e r e n t choices of ~. S i m i l a r observations have been made in many cases. In Fig. 4.3 i t
is shown t h a t the o s c i l l a t i o n s can be reduced
with a d i f f e r e n t ~ but the parameter convergence is s t i l l
slower than i f the f i l t e r i n g is used.
'
much
a
I
c-
E'~ (00.
I
I
I
I
T
r
] C
l/I
"6
E ,o 00
l
500
Figure 4.1. Command, output, and control signals and parameter estimates for Example 4.1 without f i l t e r i n g by Q/TAM.
50
o "(3 c-
o_m OEm
~|-
C ._on U~
F
to (D
0
I
i
i
i
I
I
I
I
!
E Ul
(]J
E P
O-
o
0..
f 0
500
Figure 4.2. Command, output, and control s i g n a l s and parameter estimates f o r Example 4.1 with f i l t e r i n g
by Q/TAM.
The DLS-algorithm is defined analogously to the DSA-algorithm. Since the s t a b i l i t y
proof requires t h a t b 0 is known, the algorithm is
stated f o r t h i s case only. Some changes are then needed. F i r s t l y , the estimation of b 0 as in (4.6a) is eliminated. Secondly, the vector 8 was p r e v i o u s l y defined to have I / b 0 as i t s l a s t element. Since b0 is assumed known, i t is now not included in 8. With obvious n o t a t i o n the estimation model (4.4) is replaced by yf(t)
: q-(k+l)
bo T 1
+ boOT~(t)
+ TAMp w ( t ) ,
where m(t) does not contain the l a s t component in (4.5). The DLS-algorithm is defined as f o l l o w s .
(4.7)
51
0
I
r-
0 E m E - -I 0 ::J
I
l
I
i
I L3 I
I
I
I
I
I
I
I
r U~
tO
U~ 0
I
.E L .
E o
0-
W_
6~ 0
500
Figure 4.3. Command, output, and control signals and parameter estimates for Example 4.1 without f i l t e r i n g by Q/TAM and with ~ = 0.8.
DLS-ALGORITHM -
(b0 known)
estimation:
e(t) : @(t-l) + ~
P(t) ~ ( t - k - l )
P-l(t) = ~ P - l ( t - l ) + ~ ( t - k - l ) m(t) : y f ( t ) Yf(tlt-l)
- yf(tlt-I
e(t)
~T(t-k-l);
(4.8a) 0.<~
)
b [~(t-k-l) + ~T(t-l) ~ ( t - k - l ) l Pl
= O[
- control: u(t) = _ ~T(t ) ~ ( t ) + 1 QBM uM(t) PI bo PAM
(4.8b)
(4.8c) (4.8d)
(4.8e) []
52 REMARK The updating of the matrix P - l ( t ) , (4.7b), is in practise replaced by an equivalent updating of i t s inverse P(t):
1 [P(t-l) - P(t-l) ~(t-k-l) ~T(t-k-l).P(t-l) P(t) = ~
].
(4.8f)
~ +~T(t-k-l) P(t-l) ~ ( t - k - l ) Q
Stocha.~t~c d~ign
I t was seen in Section 2.2 that a model similar to (4.4) can be obtained in the stochastic design case where w(t) = C(q- l ) v ( t ) . The parameter vector B is then augmented with the unknown C-parameters divided by b0 and the vector ~(t) is redefined as T ( t ) : Fu t~_ L P -
.... '
BM
u(t-nu) ~ P . . . . . .
-#- uM(t). . . . .
y(t-ny+l) P '
_ BM
-~-uM(t-n)] .
(4.9)
The model corresponding to (4,4) is then
ef(t) : q-(k+1)l_c[bo-CPI + bo°T
+
w(t),
(4.101
where ' - ' now denotes f i l t e r i n g by Q/AM. Note that i t is not possible to f i l t e r by I/C since C is unknown. The prediction e f ( t J t - l ) is calculated as described in Section 2.2: ef(tlt-l) =
^bo(t-I ) FL~(t-k-l) + oT(t-l) ~(t-k-l)]. Pl
(4.11)
The definitions of the DSA- and the DLS-algorithms are s t i l l valid with these new interpretations of 8, m, and ' - ' . The equations are identical and are therefore not repeated. I t is convenient for the s t a b i l i t y proofs to have a representation like (4.10) ~ t without the unknown C-polynomial. Note that (4.10) was obtained using the identity CAM =
AR + q - ( k + l )
S,
53 where the degree of R was k and the degree of S was max(n+nT-k-l, n-l). Use instead the identity AM = AR0 + q-(k+l) SO. I f degree (RO) = k, then the degree of SO will be at most n-l, i.e. less than or equal to the degree of S. Using this identity, another representation like (4.10) is obtained with the same q>-vector but a different B-vector, here called 00: ef(t) =q-(k+l)[boU--(-~+ b0 0 ~ ( t ) ] Pl
QROw(t). +A-ffP
(4.12)
Here ' - ' s t i l l denotes f i l t e r i n g by Q/AM. This representation is convenient because the prediction i f ( t l t - l ) C-polynomial, see (4.11).
is calculated without the
General assumptions The following general assumptions are made: Al) The number of plant poles n and zeros m are known. A2) The time delay k is known and the sign of the nonzero constant b0 is known. Without loss of generality b0 will be assumed positive. A3) The plant is minimum phase. These assumptions were all introduced and discussed in Chapter 2.
Basic lemmas Some basic lemmas, which will be used several times in the proofs, are given below. The f i r s t lemma relates the evolution of m(t) to the command input uM, the disturbance w and to the error e = y-yM.
54
LEMMA 4.1
Consider the plant (4.1). Assume that i t is minimum phase. Then ~o(t), defined by (4.5) or (4.9), satisfies l .A e(t+k+l)
_ 1
0
~(t+l) =F ~(t) +
I w(t+k+l) 0
0
+ uM(t+l)
1 e(t+l ) (4.13)
0
0 where the constant matrix F has a l l
i t s eigenvalues inside the u n i t
disc and UM is a vector whose components are outputs of asymptotically stable f i l t e r s
with uM as the input,
o
Proof
The proof is given for the deterministic design only. The proof for the stochastic design is, however, almost identical. From (4.5) and (4.1) i t follows that
u(t) P -b1...-b m 0...0 u(t-nu+l) .P m(t+l) =
y(t+l) P
1 1
0 +
0 1
y(t-ny+2) P -
-~uM(t+l
1
0
Io
55
1 [Ay(t+k+l)- w(t+k+I)] 0
0 y(t+l) P
F m(t) + g ( t ) ,
0
0 TBM uM(t+l)
--7
where F is asymptotical]y stable because the plant is minimum phase. Note that the fact that m ~ nu has been used. From (4.3), y ( t ) = e(t) + yM(t) = e(t) + q-(k+l) M B
AM uM(t)
which implies that 1 [Ay(t+k+l)- w(t+k+l)] 0 0 g(t) =
Y(t+l) p 0
0 - TB----~NuM(t+l)
P
=
56
l
_ l_l_l_l" w(t+k÷l ).. bo P
A e(t+k+])
ABM uM(t)
boPAM
0
0
0
0 BM
e(t+l )
+
P
u"(t-k)
0
0
0
- ~TB - M uM(t+l) This is equivalent to (4.13). Define bo(t ) = bo(t ) - b0 (4.14)
O(t) :
{ @^(t)-O @(t)
00
in the deterministic design case, in the stochastic design case.
LEMMA 4.2
b0 Let b0(t) and e(t) be defined by (4.14). Assume that 0 < T < ~0" Then the following holds for the DSA-a|gorithm for deterministic design, (4.6): ~
b0
+
,o ~(t)
bo
b~(t) +-~0 l@(t) 12
v t >~k+l,
where c is a positive constant,
2
~2(t)
(4.15) n
57 Proo~
Write (4.6 a,b) in terms of bo(t) and e(t) and multiply them by their transposes : r(t) k
Pl
c z (tl (~(t-k-1 ) + @T(t-l) ~(t-k-l) + r2(t) \ P1
)2
(@(t) 12 = le(t-l)12 + 2(30~(t) r ( t ~ eT(t-l) ~(t-k-l) +
+ ~o 2~ l~(t-k l)l 2 rZ(t) bo Add the second equation, multiplied bY~o0, to the first equation, which gives
bo
i2
bo
= ~(~> r(t) [~o~ ~> ~¢t \ Pl~ ~> + ~T¢~ ~>~¢~ ~ ,~)+~ o
~(~ ~I~¢~ ~ ~]+
+ r2(t) 2~(t) (~W(t)-c(t)) + "< r(t)
+max (l, ~ ) c2(t) IF ~(t-k-l) +~T(t-l)~(t-k-l))2+BO 2 l~(t-k-l),2], r - ~ L \ Pl where (4.4), (4.6e) and (4.14) are used in the last step. Let c
= l
- ~max
I,
.
Then c is positive from the assumptions. Insert this into the inequality above and use (4.6c) to obtain
bo
12
bo
58
2~(t) (~w(t)-e(t)) "< r-~ _
1 I-(/6 r(t)
~(t)
+ 2(I-c) r(t)c2(t) 1
R~(t))
vq~ P
c2(t) + 1 .< - c r - ~ cr(t)
w(t)
2
_ c ~2(t) +
2
,
which concludes the proof.
Coro2_~y The same result holds for the DSA-algorithm with stochastic design i f R is replaced by R0 and m(t) is defined by (4.9).
Proof The same proof s t i l l
holds.
A corresponding result concerning the DLS-algorithm with known b0 is given in the following lemma. LEMMA 4.3
Let 8(t) be defined by (4.14). Assume that b0 is known. Then the following holds for the DLS-algorithm for the deterministic design, (4.8): @T(t) P-l(t) 6(t) = X sT(t-l) P - l ( t - l ) X X +~T(t-k-l) P ( t - l ) ~ ( t - k - l )
~(t-l) -
~2(t) + 1 /R(q - I ) ~ ( t ) ) 2. b~ b~ \p(q-l)
(4.16) []
Proof Write (4.,Ba) in terms of @(t) and multiply from the l e f t by P-I/2(t). This gives 1 pl/2 (t) ~ ( t - k - l ) P-I/2(t) 8(t) = P-l/2(t) ~ ( t - l ) +b-o0 and after multiplication with the transpose
E(t)
59
sT(t) P-l(t) @(t) = @T(t-l) P-l(t) 8(t-l) + 2 ~T(t_l)~(t_k_l )e(t) + ~l ~T(t-k-l) P(t)~(t-k-l) c2(t) : :
~ @T(t-]) P-l(t-l) e(t-l) + [sT(t-l) ~(t-k-])] 2 + 2 ~T(t_l ) ~(t-k-l) ~(t) + l ~ T ( t - k - l ) P(t) $(t-k-l) ~2(t).
b~
+~o
If (4.7), (4.8d) and (4.14) are used, the following is obtained:
eT(t) P-l(t) 8(t) - A eT(t-l) P-l(t-l) e(t-l) = 2 +-]-] ~T(t-k-l) P(t) ~ ( t - k - l ) ~2(t).
(4.17)
The updating formula for P(t) is given by (4.8f). Multiply this equation from the l e f t by~T(t-k-l) and from the right b y ~ ( t - k - l ) , This gives ~T(t-k-l) P(t) ~ ( t - k - l ) = .~T(t-k-l) P(.t-l) ~ ( t - k - l ) +~T(t-k-l) P(t-l) ~ ( t - k - l ) Insert this into the equation (4.]7) to get: @T(t) P-](t) @(t) - A @T(t-l) P - l ( t - ] ) e ( t - l ) =
~2(t) = b2
b2
b2 0 ~+~T(t-k-l)P(t-l)~(t-k-l)
l+~T(t-k-l) P(t-l) ~(t-k-l) which is identical to (4.16).
D
Corollary The same result holds for the DLS-algorithm with stochastic design i f R is replaced by R0 and ~(t) is defined by (4.9), []
60
Proof The proof remains the same.
The results of Lemma 4.2 and Lemma 4.3 can be interpreted in the following way. The estimation errors bo and g decrease i f the prediction error e(t) is large. On the other hand, the errors increase i f the noise magnitude is large. This is natural i n t u i t i v e l y .
4.2. L°%stability The main r e s u l t s on L~%stability w i l l be given in t h i s section. For convenience, make the following
DEFINITION The closed loop system is L~-stable i f uniformly bounded disturbance (w) and command (uM) signals give uniformly bounded input (u) and output (y) signals. I t w i l l thus be assumed in the sequel that w(t) and uM(t) are uniformly bounded. The main part of this section is devoted to the DSA-algorithm. The idea behind the s t a b i l i t y analysis is the heuristic argument given in the beginning of this chapter. I t was pointed out that there are some shortcomings of the argument. F i r s t l y , i t is necessary to show that not only a few of the parameter estimates become accurate when the signals are growing large. This is no problem for the DSA-algorithm. The second problem mentioned seems to be more d i f f i c u l t . I t takes some time for the estimates to become accurate even i f the signals are very large. The discussion thus requires that the output does not increase a r b i t r a r i l y fast and that the parameter adjustment is not too slow. The l a t t e r condition is the reason why we do not consider estimation algorithms with decreasing gains. Compare the definitions of the DSA- and DLS-algorithms. The p o s s i b i l i t y that the output may increase arbit r a r i l y fast is closely related to the magnitude of the parameter
61
estimates. I t w i l l be eliminated by guaranteeing that the estimates are bounded. The following example i l l u s t r a t e s
that unbounded para-
meter estimates can lead to i n s t a b i l i t y .
EXAMPLE 4.2 Consider a plant given by y(t) + a y(t-l)
= b0 u ( t - 1 ) + w ( t ) ,
where b0 is known to be u n i t y . Assume that the reference model is yM(t) = u M ( t - l ) . Choose Q = P = T = I . Equation (4.4) can then be w r i t t e n as y ( t ) - yM(t) = u ( t - l )
+ s y(t-l)
- uM(t-l) + w(t),
(4.18)
where s = -a. Since b0 is known, the prediction e r r o r can be w r i t t e n as
c(t) = - s(t-l) y(t-l)
+ w(t),
(4.19)
where
s ( t ) : ~(t) - s. With ~ = 0 and ~ = 1 in the DSA-algorithm the updating of the parameter estimate is given by ~(t) = ~ ( t - l )
+ y(t-l)
c(t) I +y2(t-l)
This equation can be expressed in s ( t ) as s(t) = s(t-])
+y(t-])
w(t)-s(t-l) y(t-l) 1 + y2(t-l)
(4.20)
The control law corresponding to (4.6f) is u(t) = - ~(t) y ( t ) + uM(t), which can be inserted into (4.18) to give y(t)
: - s(t-l)
y(t-l)
+ uM(t-l) + w ( t ) .
Eqs. (4.20) and (4.21) describe the closed loop system.
(4.21)
62
The basic idea with the example is to show that the closed loop system is unstable by finding a disturbance w and a command input uM such that the parameter error s l t ) can increase without l i m i t . assume t h a t the r e c u r s i o n ( 4 . 2 0 ) ,
i4.21)
starts at t=l
Thus,
with i(I)
= O,
y ( 1 ) = I . Define f(t)
A: ( v ~ - ( t - l ) ) ( I
+
t = 2, 3 . . . . .
T - 5,
for some large T. Choose the following disturbance w(t) = 1 -
1
V~
+ fit),
t = 2, 3 . . . . .
T-5,
and the f o l l o w i n g command s i g n a l uM(t-l)
--
1 - f(t),
t = 2, 3, . . . .
T-5.
The signals w and uM are bounded. I t is then easy to show that ~(t) : / C -
1
y(t) = l
VF
for t = l . . . . .
T-5.
Further, l e t
wit ) : O, t : T - 4 . . . . . uM~t_l ~j
T
= S O, t = T - 4 l, t T 3. . . . .
T.
I t is then easy to check that s ( t ) and y ( t ) for large T are approximately given by: t
~(t)
y(t)
T-4
~
-I
T-3
VT
VT
2 T-2 T-I T
1
2~
--
v~T 2 16 ~-TT3
T
2
4 l
63
Now choose w(T+l) and uM(T) such that s(T+I) : 0 and y(T+l) = I . The state vector of (4.20),
(4.21) is then equal to the i n i t i a l
state. By
repeating the procedure f o r increasing values of T, a subsequence of y(t) will
increase as - ~ and therefore is unbounded. The r e s u l t of a 2 simulation with T = 50, I00,150 . . . . is shown in Fig. 4.4. [] The example shows that bounded disturbance and command signals can be found such that the output is unbounded. The assumption of bounded
i
w
0
O_ C
C)_
-300-
o
C
J LIILLL L_L_
nO
o
0
E E 0 cO
-I
2 cO
.m o
0 0
Figure 4.4. Simulation results f o r Example 4.2.
5000
64
disturbance and command signals is thus not s u f f i c i e n t to guarantee L~%stability. Some additional assumption is needed. Boundedness of parameter estimates is chosen here and other p o s s i b i l i t i e s are discussed in Section 4.5, I t should f i n a l l y be noted that the same technique can be used to derive examples of i n s t a b i l i t y with any ~ < I .
L~-stability for the DSA-a~9orit~ The main r e s u l t on L~%stability f o r the DSA-algorithm is given in the following theorem.
THEOREM 4,1
(DSA-o~Zgo~CthmwX~thnoiyse)
Consider the plant ( 4 . l ) controlled by the DSA-algorithm with determin i s t i c or stochastic design. Assume that assumptions A I - A 3 are s a t i s f i e d . Moreover assume that the parameter estimates are uniformly bounded and that b0 < 2BO. Then the closed-loop system is L~%stable. Q
Proof The f u l l
proof for the d e t e r m i n i s t i c design is given in Appendix A. I t
can be concluded immediately that the proof holds also f o r the stocha s t i c design, using the representation (4.12) instead of (4.10). Some minor changes are needed, such as replacing R by R0 and Q/TAM by Q/AM. The q>-vector w i l l also contain more uM-components in the stochastic design case, see (4.9), The proof of the theorem is unfortunately f a i r l y
technical. An o u t l i n e
of the proof w i l l therefore be given. The idea of the proof is to examine the behaviour of the algorithm when l ~ ( t ) I is growing from an a r b i t r a r i l y large value to a larger one. The time i n t e r v a l under consideration can be shown to increase with the difference between the values i f the rate of growth is l i m i t e d . This is done in Step 1 of the proof. I t follows from Lemma 4.1 that e ( t ) must be large when I ~ ( t ) l
increases.
65
I t must in fact be of the order of l ~ ( t ) I many times i f the interval where I ~ ( t ) l increases is long. This is shown in the f i r s t part of Step 3 of the proof. Since r ( t ) is of the order of I ~ ( t ) l 2 and ~(t) is of the order of e ( t ) , i t then follows from Lemma 4.2 that the parameter errors decrease s i g n i f i c a n t l y at many time instants. Neglecting the noise term, i t thus follows from the boundedness of the estimates that there is a contradiction, which implies that a r b i t r a r i l y large values of l ~ ( t ) l do not exist. This is shown in the second part of Step 3. However, i f the disturbance w(t) is nonzero the parameter errors could increase in the intervals between any two time instants where they decrease. See Lemma 4.2. Hence, i t is important to get an upper bound on the length of these intervals. This is done in Step 2 of the proof, which u t i l i z e s the same kind of arguments as Step 3.
[]
The conditions of the theorem have a l l been discussed e a r l i e r , except for the condition b0 < 280. This condition enters via Lemma 4.2. I t w i l l be shown below that the condition is in fact necessary for global s t a b i l i t y . Consider, however, f i r s t the local s t a b i l i t y properties. For simplicity assume that k=O and that w=O. Linearize the equations for the closed-loop system around the true parameter values and a constant uM. I t is then straightforward to verify that the eigenvalues corresponding to Eq. (4.6b) are a l l but one equal to one and one eigenvalue is equal to l ~bo(l-1)/80. local s t a b i l i t y is thus that
A necessary condition for
80 > bo(l-1)/2. I t is interesting to
note that the condition requires only that 80 is positive in the l i m i t case I = l . This is exactly the condition which is met in the convergence analysis in presense of noise in Ljung/Wittenmark (1974). I t w i l l be shown in the following example that the condition 80 > bo(l-1)/2 must be strengthened to 80 > bo/2 in order to assure global s t a b i l i t y . The condition is also discussed in Astr~m/Wittenmark (1973) and Ljung/Wittenmark (1974).
EXAMPLE 4.3 Consider the plant and the controller described in Example 4.1. I f uM
66
is set to zero, only one parameter is estimated, namely sO. I t is easy to check that the estimation error so(t) is given by
So(t) = S o ( t - l ) ( l - B 0 ~ r ( t -yl 2) +( tB- l~)
y2(t_l))v
Let r(O) = 0 and y(O) = I . Assume t h a t B0 = ~l - 6
f o r some a r b i t r a r i l y
small 6 > O. Straightforward c a l c u l a t i o n s then show t h a t l y ( t ) I tends to i n f i n i t y
if
So(O) > max ( l
' (l-~) B)"
The closed-loop system is thus not g l o b a l l y stable with t h i s choice of BO,
m
Several r e s u l t s on boundedness of the closed-loop s i g n a l s can be derived from Theorem 4,1. Consider f i r s t
the case where the disturbance
w(t) is zero. This is the s i t u a t i o n most often analysed in connection with model reference adaptive r e g u l a t o r s . The f o l l o w i n g theorem gives a s o l u t i o n to the boundedness problem discussed before.
THEOREM4.2
(PSA-aZgo~CthmmiJthou~t noise)
Consider the p l a n t (4.1) with no noise, i . e . w(t) = O, c o n t r o l l e d by the DSA-algorithm with d e t e r m i n i s t i c design, (4.6), Assume t h a t A I - A 3 are f u l f i l l e d L°%stable.
and that b 0 < 2BO. Then the closed-loop system is []
Proof I t follows from Lemma 4.2 t h a t the parameter estimates are bounded i f w(t) = O. Theorem 4,1 then gives the r e s u l t .
[]
The corresponding r e s u l t is also true f o r the s t o c h a s t i c design case. The r e s u l t is however not given, because i t seems u n r e a l i s t i c to assume t h a t there is no noise when the decision has been made to estimate the optimal observer from noise c h a r a c t e r i s t i c s .
67 I t appears that the conditions for the s t a b i l i t y result above are f a i r l y mild. The condition on 80 has been shown to be necessary for global s t a b i l i t y . Also, the choice X < l is the common one in real applications. However, the assumption that the disturbance is equal to zero is not very satisfactory. I t would thus be desirable to improve the result in Theorem 4.1 without the a priori assumption of boundedness of parameter estimates. Below are presented two s t a b i l i t y results, which treat modified versions of the DSA-algorithm.
THEOREM 4.3
(DSA-a~gorithm with conditional updating]
Consider the plant (4.1), controlled by the DSA-algorithm with deterministic or stochastic design, modified in the following way: ~o(t) = ^bo(t-l) 1 @(t) = @(t-l) ]
if
2 Kw
l~(t)l <
,
(4.22)
2-max (bo/Bo, I)
where
sup R~(t) I .< Kw" t I P
(4.23)
Assume that AI-A3 are f u l f i l l e d and that b0 < 2BO. Then the closed-loop system is L°%stable.
Proof As before, the proof is given for the deterministic design only. Lemma
4.2 gives together with (4.23) ~
bo
c r(t)
2
+ cr(t)
bo
=
'
r(t----~
R ~(t) c
,<0
if Kw
lc(t)l ~-a- ~
2 2 - max (bo/B O, l )
Combined with (4.22) this implies that the parameter estimates are
68
bounded. However, t h i s does not imply t h a t Theorem 4.1 can be applied straightforwardly,
because the algorithm has been modified. Some
changes are needed because Lemma 4.2 is no longer true at a l l times. The f o l l o w i n g minor changes of the proof in Appendix A have to be made:
(i)
Lemma 4.2 is e x p l o i t e d in Step 2 of the proof, when (A.28) is derived. Note t h a t the lemma is not n e c e s s a r i l y v a l i d f o r a l l times in the i n t e r v a l [T~j_I +k, Tj+li + k ] . However, i t is only needed t h a t i t is true when the supremum in (A.25) is a t t a i n e d . But t h i s follows immediately f o r large N from (A. I I ) and (A.21), because when I E ( t ) l
(ii)
is large, the m o d i f i c a t i o n (4.22) is not used.
The above comments also apply in Step 3 of the proof. Here i t is only needed that Lemma 4.2 is true when the supremum in (A.32) is a t t a i n e d .
F i n a l l y , the m o d i f i c a t i o n might cause some terms in the estimate of ef(t[t-l).., in Lemma A.3 to be zero. This f a c t j u s t s i m p l i f i e s the proof. The proof of Theorem 4.1 is thus s t i l l
v a l i d with these small m o d i f i -
cations and the theorem is proven,
a
Admittedly, the m o d i f i c a t i o n of the algorithm requires an upper bound on the disturbance which is not known a p r i o r i .
I t is of course possible
to use a large Kw to assure t h a t (4.23) is f u l f i l l e d .
On the other hand,
t h i s implies t h a t the p r e d i c t i o n e r r o r ~ ( t ) can be large w i t h o u t causing any adjustment of the estimates. I f B0 is reasonably close to b0 and the noise amplitude is small compared to the l a r g e s t acceptable magnitude of the p r e d i c t i o n e r r o r , then the m o d i f i c a t i o n could be of p r a c t i cal s i g n i f i c a n c e . M o d i f i c a t i o n s of t h i s s o r t are also common in p r a c t i c a l algorithms. As an example, the case BO=b 0 and k=O f o r the d e t e r m i n i s t i c design w i l l be considered. Then R(q-l) = I. I f i t is also assumed t h a t Q = TAM and P = 1 as in Example 2.3, the t e s t (4.22) can be made very simple: IE(t)I
= lef(t)I
< 2 sup l w ( t ) I . t
69 This means that an error ~(t) twice the maximal noise amplitude is accepted. This is not very restrictive i f the signal to noise ratio is high. I t should also be noted that an error of the same magnitude as the disturbance would be expected even for known parameters, at least in the case with white noise. Apart from the modification described in Theorem 4.3, there is one natural modification of the algorithm which will guarantee that the estimates are bounded. This is achieved by projecting the estimates into a bounded area. Such a modification is always made in practice. Similar techniques have also appeared for a long time in the stochastic approximation literature. See e.g. Albert/Gardner (1967) and Ljung (1977b). Different possibilities to make the projection exist. Below i t is shown formally that one such modification will make the closed-loop system stable.
THEOREM 4.4
(DSA-algo~J_thmwith projection)
Consider the plant (4.1), controlled by the DSA-algorithm with deterministic or stochastic design, modified in the following way:
^ ^, b0(t ) = bo(t-I ) + {-u(t-k-l) + s T ( t - l ) ~ ( t - k - l ) k P1
) r~(t) (t) (4.24)
~(t) e'(t) : o(t-l) + B0 ~(t-k-l) r(t----) A
l[b0(t)
o(t)
@'T(t)] I 1 0 ' ( t ) ]
1 I't'10,t, otherwise
e'(t)]
> C (4.25a)
A
(4.25b)
where C is a positive constant, satisfying
C > 2 XV~n ( I : b0/B0 )
(4.26)
Here b0 and 8 are the true plant parameters. Assume that AI -A3 are
70 satisfied and that b0 < 2BO, Then the closed-loop system is L ' - s t a b l e . D
P~oo~ The proof is given for the deterministic design. I t is obvious that the modification implies that the parameter estimates are bounded. Theorem 4.1 can, however, not be applied straightforwardly, because the algorithm has been modified. The equations for updating the parameters are used in the proofs of Lemmas 4.2 and A.3. These lemmas will be considered separately. Define CT = [ b o v ~ e
T]
~T(t) : [ b o ( t ) ~ ~ T ( t ) ] ~T(t) : [ b o ( t ) / - ~
~T(t)]
and analogously ~ ' ( t ) and ~ ' ( t ) . We have min (l, bo/Bo)(b~ + [812) ~ J~l 2 ~ max (I, bo/Bo)(b~ + 1812)
(4.27)
and similarly for the other @-vectors defined above. Consider now those times, when the projection (4.25a) is used. I t follows from (4.25a), (4.26) and (4.27) that, for some #, 0 < ~ < I / 2 ,
l$'(t)l 2 ~ min (l, boIBo)(b~2(t) + l@'(t)l 2) > > min (l,bo/Bo)
1 1 I@I 2 , C2 ~--~max (I,bo/Bo) (b~ + 1812 ) ~--~
which implies that Iml ~ ~ I $ ' ( t ) l
~ ~ (l$'(t)l
+ Iml).
Hence, (4.28) Let
71
y(t)
^, ~,T( t ) l l l[bo(t)
and use (4.25a), (4.28) to obtain A
l~(t)l :
~
y(t) I~'(t)l
.<
I~'(t)l-
{@(t)-el
[
A
y(t) b ~ ( t ) - b 0
]
v-6~7T~o{y(t) ~'(t) - el
+ [l-y(t)] I~1
l-2p rl-y(t)]-i~-_. I~'(t)l .<1 ~, (t)l ,
where the last step follows because y(t) is less than one and ~ is less than I/2. This inequality implies that Lemma 4.2 is s t i l l applicable. The proof of Lemma A.3 remains to be discussed. If the projection (4.25a) is used, the proof now gives estimates of terms of the type l @ ' ( t ) - @ ( t - l ) l instead of l ~ ( t ) - 0 ( t - l ) I. But i t follows from (4.25a) that 1 8 ( t ) - @ ( t - l ) l = Iy(t) @ ' ( t ) - ~ ( t - l ) l
y(t) 18'(t)- ~(t-l)l + [I -y(t)] l~(t-l)l
l~'(t)-~(t-1)l
+
l~'(t)-e(t-l)l + l@'(t)-@(t-l)l
[b0(t) B'T(t)] I - C l[b~(t) @'T(t)]l [B~(t) ~'T(t)] I - C
I~(t-1)l
le(t-1)l
+ [bG(t) @'T(t)]) - C,
where the last step follows from the fact that the estimates are bounded by C. Use this inequality together with (4.24) and (4.6f) to obtain
72
IIt-11 ]~(t)-~(t-l)l .< l~'(t)-@(t-l)l + A
-I-
O(t-l)
+ I [@(t-l)-~(t-k-l)]T~(t-k-l)] B0 ~(t-k-I) lS'(t)-8(t-l)l
+ (zC+Bo)
I~(t)l rCt)
C
.<
i~(t_k_l)] ] c ( t ) [ r(t)
I t is easy to see that the extra term in the right hand side of the inequality above does not affect the proof of Lemma A.3 I t is thus shown that Theorem 4.1 can be applied with these small changes and the theorem is proven.
REMARK The modification
(4.25a) is a scale reduction, which simply assures
that the norm of the vector of parameter estimates is bounded by a constant C. The condition (4.26) ensures that the projection is applied sufficiently far away from the true parameters.
A discussion of the results obtained so far is given in Section 4.4. Now consider the DLS-algorithm.
C - s t a b ' ~ ~i for the DLS-~orithm I t seems d i f f i c u l t to directly extend the s t a b i l i t y results for the DSA-algorithm to the DLS-algorithm. Only a preliminary result is therefore given. The following theorem, which corresponds to Theorem 4.1 fpr the DSA-algorithm, illustrates the d i f f i c u l t i e s .
THEOREM4.5
(DLS-~ZgoriJchmwXJChnoise)
Consider the plant (4.1) controlled by the DLS-algorithm with deterministic or stochastic design. Assume that AI-A3 are satisfied, that
73
b0 is known and that sup ~T(t-k) P(t) $(t-k) < =. t
(4.29)
Then the closed loop system is L:%stable.
Proof As for the DSA-algorithm the proof is given for the deterministic design case. The stochastic design case is treated analogously. Apply Lemma4.3 to obtain t @T(t) P-l(t) @(t) + ~ - - ~ xt-s
e2(s) _
X +~T(s-k-l) P(s-l)~(s-k-l)
s=k+l
t = Xt-k @T(k) P-l(k) 8(k) + . ~ , ~ t - s s=k+l
1 b~ (~ ~(s ))2"
The assumption (4.29) and the boundedness of the noise w(t) imply t
2(t )
.<~-~,
~+sup ~T(t-k) P(t)~(t-k)
xt-s 2 ( s ) . <
t
s=k+l
~
t
~-~
xt-s .
s=k+l
.~2(S) ,< },+~T(s-k-l) P(s-I) ~(s-k-l)
t (4.30) s=k+l
b2
for some constants K1 and K2. It follows from Equations (4.8 a,d,e) that ^ t Jt-l)l = Ibo ~ ( t - k - l ) lYf( \ P1
+ ~T(t_l ) ~(t-k-I )) ,<
-< Ibo [ ( s T ( t - 1 ) - S T ( t - k - l ) )
~(t-k-l)]l
+ I QBM uM (t-k-l)
<
74 ~T(t-k-2) P(t-l) ~ ( t - k - l )
+ ... ÷
~(t-l)[
+ l T(t-2k-l) Pit-k)
I
+ IIQB M uM (t_k_i) i
T#
.
(4.31)
Consider one term in the sum, i.e. for 1 ~ i ~ k, l~T(t-k-l-i)
P(t-i)~(t-k-l)
E(t-i)l
.< [ ~ T ( t - k - l - i ) P ( t - i ) ~ ( t - k - l - ~ ) ] I / 2 [ ~ T ( t - k - l ) P ( t - i ) ~ ( t - k - l ) ] I / 2 1 ~ ( t - i ) l . < .< [ ~ T ( t - k - l )
P(t-]) ~ ( t - k - l ) ] I/2 l ~ ( t - i ) l ,
(4.32)
where the last step follows from the fact that ~T(t-k-l)
P(t) ~ ( t - k - l )
.< 1.
(4.33)
This inequality follows readily from (4.8f). Furthermore, (4.8b) implies ~T(t-k-l)
P(t-i) ~ ( t - k - l )
= ~T(t-k-l)
P(t-i+l)
= ~T(t-k-l)P(t-i+l) = X ~T(t-k-l) + [~T(t-k-l) .< ~ ~ T ( t - k - l )
=
P-](t-i+l)
P(t-i) ~ ( t - k - l )
[ XI + ~ ( t - k - i ) ~ T ( t - k - i ) P ( t - i ) ]
P(t-i+l) ~ ( t - k - l )
$(t-k-l)
=
+
P(t-i+l) ~ ( t - k - i ) ] . [ ~ T ( t - k - i ) P(t-i+l) ~ ( t - k - l )
=
P(t-i) ~ ( t - k - l ) ]
.<
+
+ [~T(t-k-I )P(t-i+l )~(t-k-I )]I/2. [$T(t_k_ i )P(t-i+l ) ~ ( t - k - i ) ] 1/2. • [~T(t-k-i)
P(t-i)~(t-k-i)]I/2.[~T(t-k-l)
But, since P(t-i+l) .<~1 P ( t - i ) , ~T(t-k-l)
P(t-i+l) ~ ( t - k - l )
P(t-i)~(t-k-l)]
I/2 . (4.34)
.<
.< [~T(t-k-I ) P ( t - i + l ) ~ ( t - k - l ) ] I / 2
[ 7I ~T(t_k_l )P(t-i )~(t-k-I )]I/2
Using this inequality and (4.33), (4.34) gives
75 ~T(t-k-1) P(t-i) ~ ( t - k - l ) [~T(t-k-1 )P(t-i+1 )~(t-k-I )]I/2. [~T(t_k_l )P(t-i )~(t-k-1 ) ] I / 2 , • [/-~+ [ ~ T ( t - k - i )
P(t-i) ~ ( t - k - i ) ] I / 2 ] ,
which implies that ~T(t-k-1) P(t-i) ~ ( t - k - l ) [~T(t-k-l)P(t-i+l)~(t-k-1)].[~-~+[~T(t-k-i)P(t-i)~(t-k-i)]I/212% 2[~T(t-k-I ) P(t-i+l )m(t-k-I ) ] • [~ +~T(t-k-i ) P( t - i ) ~ ( t - k - i 211 + s#p ~ ( t - k ) P ( t ) ~ ( t - k ) ] .
[~T(t-k-I )P(t-i+l )~(t-k-I ) ].
Use this inequality recursively for i = 1 . . . . . for i = I. The result is -T(t-k-l)P(t-i)~(t-k-l)
)]
k and exploit (4.33)
~ 2i[I +s~p ~(t-k)P(t)m(t-k)] i
2k[l +s#p ~T(t-k) P(t) ~ ( t - k ) ] k,
i = 1.....
k.
But from (4.29) this is bounded, by a constant K3 say, so that the inequalities (4.31) and (4.32) give lYf(tlt-l)1%
v~3[Im(t'l)l
+ . . . +Im(t-k)l ] + QBM uM(t_k_l) I
where (4.30) have been used in the last step. The conclusion is thus that both ~(t) and y f ( t l t - l ) bounded. But from (4.3) and (4.8c) i t follows that e(t) = y(t) _ yM(t) = y f ( t ) p(q-l) p p Q _ 1 [~(t) + y f ( t l t - l ) ] Q
are uniformly
yM(t ) = p
yM(t) P
and since Q(q-l) and p(q-l) are asymptotically stable, e(t)/P(q -I) is
76 a l s o uniformly bounded. Hence, from Lemma 4.1, m(t) is generated by an asymptotically stable f i l t e r with bounded inputs. Therefore m(t) is bounded and consequently also y ( t ) and u(t) are bounded,
u
I f the result above is compared with Theorem 4.1, i t can be seen that the boundedness condition on the parameter estimates has been replaced by the condition (4.29). This condition is unpleasant since i t cannot be verified a p r i o r i . In the case with only one unknown parameter, i.e. with ~o(t) being a scalar, the condition can be written ~2(t-k) sup t ;~t-k t-k-1 P(k) + ~----~, ~ t - k - l - s ~2(s )
< 0o .
s=O This means that ~ shall not increase a r b i t r a r i l y fast compared to the weighted sum of old m. This is similar to, but somewhat weaker than, the condition which was used for the DSA-algorithm and which was proven in Lemma A.2. The assumption on boundedness of the parameter estimates was used in the proof. The condition (4.29) thus holds i f the estimates are bounded and only one parameter is estimated. However, when m is a vector, the change of m's direction makes the condition more d i f f i c u l t to interpret. One p o s s i b i l i t y is the following one. Let v~, i =l . . . . . n be orthonormal eigenvectors of P - l ( t ) . Here n is the dimension of ~. Thus, P - l ( t ) v~• = ~ " v ti and i P(t) vt
l
=
i
--= vt,
where {;~}V=l are the eigenvalues. Since n
~(t-k)
:
i=l i t follows that
TT
~ T ( t - k ) P(t) ~ ( t - k )
: / ~n, ~ 1 [~T(t_k ) v it ] 2 " i = l ~t
But t-k-I i =
Xt
v~T p-1 (t) vti = xt-k v~Tp-1 (k) v ti +
,xt-k-l-s [ T(s ) vti ]2 s=O
so that ~T(t- k) P ( t ) ~ ( t - k )
=f
[~T(t-k) v~]2 t-k-1
i=l' ~t-k vtiTp-l(k) vti + ~,~t-k-l-s[~T(s)v~]2 s=O
The conclusion is t h a t a s u f f i c i e n t c o n d i t i o n f o r (4.29) to hold is t h a t there is a K, independent of t , such t h a t f o r every t , t-I
[vT~(t)] 2~ K (~t-k + ~ , } t - l - s
[vT~(s)]2)
v v,
Ivl = l .
s=O This c o n d i t i o n is analogous to the i n t e r p r e t a t i o n in the scalar case. I t says t h a t the growth of ~ in an a r b i t r a r y d i r e c t i o n must be bounded by the weighted sum of old components of ~ in t h a t same d i r e c t i o n . The conclusion of the discussion above is t h a t i t is necessary to examine the c o n d i t i o n (4.29) more thoroughly in order to derive r e s u l t s corresponding to Theorems 4.2, 4.3 and 4.4 f o r the DSA-algorithm or to look f o r other methods of proving s t a b i l i t y .
4.3. Convergence in the d i s t u r b a n c e - f r e e case An important motivation f o r the L ~ - s t a b i l i t y
i n v e s t i g a t i o n is t h a t a
boundedness c o n d i t i o n is needed in the convergence a n a l y s i s . This was pointed out in Chapter 2 and in the beginning of t h i s chapter. The r e s u l t s of the preceeding section w i l l
now be used to solve the con-
vergence problem in the d e t e r m i n i s t i c case. I t w i l l
thus be shown t h a t
part of the state of the closed-loop system, namely the output e r r o r e = y-yM,
tends to zero i f the disturbance w is equal to zero.
78 Consider f i r s t the DSA-algorithm. The following result follows from Theorem 4.2.
THEOREM 4.6
(DSA-a~gor~thm without noise)
Consider the plant (4.1) with no noise, i.e. w(t) = O. Let the plant be controlled by the DSA-algorithm for deterministic design, (4.6). Assume that AI-A3 hold, that b0 < 2B0 and m > 0 and that the command input uM(t) is uniformly bounded. Then the output error converges to zero, i.e. y(t)
- yM(t)
~ O,
t ~.
[]
Proof I t follows from Lemma 4.2 that ~2(t)
~ O,
t ~
r(t) and so, because r ( t ) is bounded from Theorem 4.2, c(t) ~ O,
t ~.
Furthermore, as in the proof of Lemma A.3 in Appendix A, lef(tlt-l)l Ibo(t-l)l BO ~T(t-k-2) r ( t - l )
"'" +~T(t-2k-l) r(t-k)
Here Go(t-l ) and ~(t) are bounded from Lemma 4.2 and Theorem 4.2. Also note that r ( t ) ~ m > 0 from the assumptions. I t thus follows that if(tlt-l)
~ O, t ~ .
Consequently, since Q(q-l) is asymptotically stable, y ( t ) - y M ( t ) = p(q-l) = p(q-l) [ ~ ( t ) + e f ( t l t - l ) ] ~ O, t ~ Q(q-l) ef(t) Q(q-l) and the theorem is proven.
[]
79 The result above can be applied to modified versions of the schemes by Ionescu/Monopoli and Astr~m/Wittenmark, described in Chapter 2. I t is also possible to infer the convergence of the output error for the scheme by B~n~jean. Only minor changes are needed in the analysis. This means that the convergence problem is solved for several schemes without noise. In contrast to earlier convergence results for discrete time MRAS, the result does not require any a priori assumption of boundedness of the closed-loop signals. The situation for the least squares version of the general algorithm is, however, not that pleasing, as can be seen in the following theorem.
THEOREM 4.?
(PLS-algoritAm withou~t noise}
Consider the plant (4.1) without noise, i.e. w(t) = O. Let the plant be controlled by the DLS-algorithm for deterministic design, (4.8). Assume that Al -A3 hold and that b0 is known. Further assume that k = 0 and that sup~T(t) P(t) ~(t) < ~. t Then the output error converges to zero, i.e. y ( t ) - yM(t) ~ O,
t ~.
Proof I t readily follows from Lemma 4.3 that
E(t) ~o,
t~.
But k = 0 implies that ef(t) = ~(t) as can be seen from (4.8 d,e). The conclusion then follows as in Theorem 4.6.
o
I t is seen that the unpleasant condition (4.29) is s t i l l required. Furthermore, i t is assumed that k = O. This condition can be avoided by modifying the estimation scheme and control law. I f delayed parameter estimates are used in (4.8 d,e), i t is possible to have
80 ef(t)
= ~ ( t ) and the proof holds f o r the case k ¢ 0 too. The conclu-
sion is that the analysis f o r the DLS-algorithm is not as complete as f o r the DSA-algorithm and that f u r t h e r work on the DLS-algorithm is needed. Finally,
i t should be mentioned t h a t there is one problem t h a t has
not been t r e a t e d here. This is to consider convergence not only of the output e r r o r , but of another part of the state v e c t o r , namely the parameter estimates. To make t h i s analysis meaningful, i t
is
necessary to assume t h a t the number o f parameters is c o r r e c t l y chosen. Note that t h i s has not been required so f a r . We w i l l ,
however, not
elaborate on t h i s problem. Let i t s u f f i c e to note that some kind of p e r s i s t e n t l y e x c i t i n g c o n d i t i o n on the command input is s u f f i c i e n t f o r the estimation errors to tend to zero. See e.g. Kudva/Narendra (1974).
4.4. Results on other c o n f i g u r a t i o n s •The s t a b i l i t y
analysis given in the previous sections has been con-
cerned with a s p e c i f i c class of algorithms.
In p a r t i c u l a r ,
some minor
changes of the algorithms were needed in order to apply the r e s u l t s . Therefore a few possible extensions of the analysis are indicated in t h i s section.
O t h ~ model s t r u c t u r ~ The s t r u c t u r e of the model f o r the DSA- and DLS-algorithms was chosen in a p a r t i c u l a r way. The model reference adaptive systems provided the motivation.
The c h a r a c t e r i s t i c
property of the MRAS s t r u c t u r e is that
the model contains the unknown parameters b0 and e as a ~oduo~t, see (4.4). This is in c o n t r a s t to most model s t r u c t u r e s used f o r i d e n t i f i cation and s e l f - t u n i n g r e g u l a t o r s .
For example, the s e l f - t u n i n g
c o n t r o l l e r considered in Example 2.3 uses a model which is ZinecuL in the unknown parameters.
In the general case, a model which is l i n e a r
in the unknown parameters can be obtained in the f o l l o w i n g way. Let g denote b0 times the previous O-vector and w r i t e (4.4) as
81 ef(t) = q-(k+l)Ib 0 ~(p-~it + 8T ~ ( t ) ) + T-~QRw(t).
(4.33)
This model suggests an alternative to the DSA-algorithm. The new algorithm can be written analogously with equation (4.6) as: -
estimation: 8(t) = O(t-l) + ~ ( t - k - l ) r(t) = ~ r(t-l)
E(t) r(t)
(4 34a)
+ l~(t-k-l)l 2 + m
(4.34b)
~(t) = ef(t) - 8 f ( t l t - I ) ef(tlt-l)
k-l) = BO ~ ( t -P1
(4.34c) + ~T(t.l ) ~ ( t - k - l )
(4.34d)
- control: u(t) = _ l_]_~T(t ) ~(t) PI BO
(4.34e)
REMARK
Note that an a priori estimate 80 of b0 is s t i l l used. Also notice that the last element of @is known to be unity, but i t is necessary to estimate i t because B0 might be chosen different from bO.
a
The key result, which is needed in the s t a b i l i t y proofs, is Lemma 4.2. I t will be shown that the lemma s t i l l holds for the algorithm above i f k = O. Define BO
B(t) : )(t) - ~
8.
Write (4.34a) in terms of e and multiply from the left by its transpose. This gives for k = 0
)@(t)I 2 = l e ( t - l ) ( 2 +2 sT(t-l) ~ ( t - l ) ~--~+ ( ~ ( t - l ) l 2 e2(t) r(t) r2(t)
18(t-I)12+ I-]-- [2 ~T(t-l) ~ ( t - l ) r(t)
e(t)+e2(t)].
It follows from (4.33) and (4.34 c,d,e) that
82 c(t) = e f ( t ) - e^f ( t l t - I
- \fB0 ~ - (-t -+l ) Pl
) = b0 ~ ( t - l ) + 0T~(t_I) + R ~(t) Pl
oT(t-l) ~(t-l)) I +
- Bo =
-
~T(t-l) ~ ( t - l ) ) ~(t-l)
F0
bo o T ( t - l ) ~ ( t - l ) SO
m
=
+
R
-
+ R ~(t) =
w(t).
Inserting this expression for m(t) in the inequality above gives
BO
I°(t-l)12+
r(t)l [2 ~BO P R ~ ( t ) e ( t ) - ( 2
booBO_ I ) E2 ( t ) ] .
Define SO c = bo
l
2'
which is positive under the conditions of Lemma 4.2. I t then follows that 10(t)I 2 % 10(t-l)I 2 +
+
m
~
1 SO R w(t) m(t) - v~" b0 P
< l~(t-l)l 2 - c ~
2(t)
- c ~2(t) +
1 (SoIZfR
+ cr(t----) \Yo/ \ ~ ( t )
~
-P
w(t)
)']
)2
The conclusion is that the lemma is s t i l l true for the new algorithm, provided k = O. In this case i t is thus possible to derive s t a b i l i t y results corresponding to Theorems 4 . 1 - 4 . 4 . However, i t is not straightforward to show that Lemma 4.2 also holds for the case k • O. I t is thus an open question whether the MRAS structure used in the
<
83
DSA-algorithm is advantageous for the case k m O. The d i f f i c u l t i e s in the s t a b i l i t y analysis for the algorithm (4.34) indicates this, but on the other hand no significant differences have appeared in simulations. Similarly i t may perhaps be useful to apply the MRAS-type of model structure in other cases.
,Non-minim~ phase sgstems A characteristic feature of the schemes considered is that the process zeros are cancelled. This implies that only minimum phase systems can be treated. The underlying design method with cancellation of zeros was chosen because i t gives a simple adaptive scheme. A convenient impZicCt scheme (i.e. with estimation of c o n ~ o ~ parameters) for
nonminimum phase systems seems, however, more d i f f i c u l t to give. One possibility to solve the problem is to consider expZieYJt schemes instead. This means that the plant parameters are estimated d i r e c t l y and the controller parameters are calculated from these estimates. See AstrBm et al. (1978). A possibility to control nonminimum phase plants with an implicit scheme without cancellation of zeros is discussed in Clarke/Gawthrop (1975). The self-tuning controller described in Example 2.5 is used with the polynomial Q (in their notation) in the definition of the generalized error different from zero. This case is not covered by the analysis so far. See Example 2.5. The case Q • 0 can however be treated in the following way. Change the notation in Example 2.5 into @(t) = AM(q-l ) y(t)-BM(q - I ) uM(t-k-l)+cM(q - I ) u ( t - k - l ) , where CM is equal to Q in the original notation. When CM ¢ O, the model in Example 2.5 changes into
u ( t ) + S y ( t ) - C B M uM(t) ) + Rv(t). @(t) = ~1q - (k+l)( ( b o B R ccM) + Denote by co the constant term in CM and rewrite the expression for ~(t) as
84
@(t) : ~1 q- (k+l) (b 0 + Co)[ u ( t ) + S +O bo + c
y(t)
b0 BR+CCM- (b O+cO) u(t) +
b0 + c O
C BM uM(t)] + R v ( t ) bo + c o
- ~1 q-(k+l) (bO+cO) [u(t)+ 8Tm(t)] + R v(t), where e and m(t) are defined as f o r the DSA-algorithm with estimated observer. This model is analogous with the model (4.10) for the DSA-algorithm. I t is therefore possible to conclude that the s t a b i l i t y results hold for Clarke's and Gawthrop's algorithm i f the "MRAS- s t r u c t u r e " in the DSA-algorithm is used. There i s , however, one condition that has to be f u l f i l l e d in order to apply the r e s u l t s . This is that the q>-vector is generated from @(t) and bounded signals through stable f i l t e r s .
Compare with Lemma 4.1. I t is easy to check
that t h i s is the case i f the polynomial (ACM+bOBAM) is stable. This polynomial is in fact part of the closed-loop c h a r a c t e r i s t i c polynomial when the underlying design scheme is used for known parameters. The success of the scheme w i l l thus heavily depend on the polynomials AM and CM. This fact is also discussed in Clarke/Gawthrop (1975).
4.5. Discussion D i f f e r e n t aspects of the s t a b i l i t y r e s u l t s obtained in the previous sections w i l l be discussed in t h i s section. The discussion is l i m i t e d to the DSA-algorithm, because the r e s u l t s on the DLS-algorithm are only fragmentary. Consider f i r s t
the disturbance-free case. Theorems 4.2 and 4.6 give
a f a i r l y complete picture of how the algorithms behave. Convergence of the output error to zero is proved without any a p r i o r i assumption of boundedness of closed-loop signals. Such a condition is u s u a l l y required in convergence analysis of MRAS schemes also in the absence of noise. A notable exception is the g l o b a l l y stable continuous time scheme in Feuer/Morse (1977).
85 In the case with disturbances i t has been demonstrated that some additional assumptions are needed to guarantee global s t a b i l i t y . The approach taken here is to assume that the parameter estimates are bounded and two different means to ensure this were considered in Theorems 4.3 and 4.4. Another p o s s i b i l i t y is to put more conditions on the noise and/or command signal. I t does not seem unreasonable that some kind of persistently exciting condition (see e.g. Astr~m/ Bohlin (1965), Kudva/Narendra (1974)) might be s u f f i c i e n t to ensure the boundednessof the parameter estimates. This is however s t i l l an open problem. I t should f i n a l l y be pointed out that the case with decreasing gains ( ~ = l ) in the estimation algorithm has not been treated at a l l . This means that i t is not possible to combine the s t a b i l i t y results presented here with results concerning the asymptotic behaviour of the algorithms as given e.g. by Ljung (1977a). Some comments should also be made on the structure of the estimation scheme. A model structure which is b/~.~LneLu~in the unknown parameters is used. I t was noted in the previous section that i t is not straightforward to extend the s t a b i l i t y results to models which are linear in the unknown parameters. This is an interesting observation which perhaps deserves further investigation. Both the DSA-algorithm and the algorithm in the preceeding section based on a model which is linear in the parameters suffer from a condition on the a priori estimate BO of bO. I t would of course be desirable to eliminate this condition. A straightforward solution would be to estimate b0 in the model (4.33). The control law would then look like (4.34e) with BO replaced by the estimate bo(t ) of bO. D i f f i c u l t i e s w i l l appear i f Go(t ) is very small or has the wrong sign. A
In particular, the control law is undefined i f bo(t ) is equal to zero. Although the scheme can behave well in practise, the s t a b i l i t y analysis w i l l be d i f f i c u l t . This approach is therefore l e f t with these remarks. Finally, the general assumptions introduced in Section 4.1 w i l l be discussed. The assumption that the time delay is known seems d i f f i c u l t to overcome, at least in the theoretical analysis. The minimum phase
86
assumption is a consequence of the choice of design method. I t is naturally of interest to investigate the properties of algorithms which are capable of controlling nonminimum phase systems, see e.g. Astr~m et al. (1978) and Astr~m (1979). I t seems that such analysis has not been carried out so far. I t is also desirable to relax the condition that the orders of the plant model are not underestimated. I t would be valuable to have results concerning the control of higher-order or nonlinear plants. This seems to be an unexplored area.
5,
STABILITY OF CONTINUOUS TIME CONTROLLERS
I t was b r i e f l y indicated in Chapter 3 that a boundedness condition is essential in the analysis of continuous time MRAS. I t is easy to prove that the output error converges to zero when the pole excess is one or two, see Gilbart et al. (1970) and Monopoli (1973). However, boundedness of the closed-loop signals has been assumed to prove convergence in the general case. See Monopoli (1974), Narendra and Valavani (1977), Feuer et al. (1978). Recently Morse (1979) presented a rigorous convergence proof in the noisefree case without requiring closed loop s t a b i l i t y . In this chapter some s t a b i l i t y results w i l l be given for the general adaptive algorithm described in Chapter 3. The continuous time problem is similar to the discrete time problem. The discussion of different approaches in the beginning of Chapter 4 thus also applies to continuous time systems. The results given in this chapter are analogous to those for discrete time systems in Chapter 4. The algorithm considered is defined in Section 5.1, which also contains some preliminary results. The main results on L~:-stability are given in Section 5.2. The implications of the s t a b i l i t y results for the convergence in the disturbance-free case is examined in Section 5.3. In particular, the convergence problem for MRAS is solved.
5.1. Preliminaries The algorithms considered were described in Section 3.2. For easy reference, some of the equations are given below. Pta~t rnodP..t
The plant is described by
88
b0(pm + blpm-I + . . . + bm) b0 B(p) u(t)+v(t), y(t) - - u(t) + v ( t ) = A(p) pn +alPn-I + . . . +an
(5.1)
where v(t) is a disturbance, which cannot be measured. Reference model
The desired response is characterized by M yM(t) = BM(p) uM(t ) = b~ pm+... +b m uM(t) • AM(p) pn+a ~ pn-I + . . . +a~
(5.2)
where AM is asymptotically stable and uM is the command input. Filtered error
The f i l t e r e d error is defined by ef(t)
: Q(P) e(t) = Q(P) [y(t)-yM(t)], P(P) PI(P) P2(P)
(5.3)
where n+nT-I n+nT-2 Q(P) = P + ql p + " " + qn+nT-I pn-m-2 pl(p ) = pn-m-I + Pll + " " + Pl(n-m-l) m+nT m+nT-I P2 (p) = P + P21 p + " " + P2(m+nT) are all asymptotically stable polynomials. Observer polynomX~ The observer polynomial T(p) = pnT + t I p
nT-I
+...+
tnT,
nT > n-m-I
is assumed asymptotically stable. ~,t~a~tion model
The estimation model is given by ~(t)
ef(t):boT1
+b o
BT
(5.4)
89 where ' - '
denotes f i l t e r i n g -pm+nT-I
mT(t) = ~
~
by Q/TAM and n-I
u(t), • . , u(t) p , P
y(t), . . . ,
,8" P
(5.5)
The algorithm analysed is called the
CSA-~ZgorZ~tk~n (Continuous time,
Stochastic Approximation). The estimation scheme is inspired by the stochastic approximation method used in discrete time. The structure of the estimator is analogous to the MRAS algorithms described in Chapter 3. The CSA-algorithm is defined as follows.
CSA-ALGORITHM -
estimation: ^ •
b0(t ) = (?u(--~J-+ ~T(t) ~ ( t )
\Pl
~..
)
~(t)
(5.6a)
r(t)
c(t) r(t)
~(t) = - kr(t) + l~(t)l 2 + ~(t);
(5.6b) k > 0;
(5.6c)
krmi n ~ ~ ( t ) ~ ~, r ( t ) ~ rmi n , r ( O ) ~ rmi n > 0 0 ~ ~ ( t ) ~ E, r ( t ) > rmi n c(t) = ef(t) - ef(t)
(5.6d)
ef(t) : ~)o(t)~p--~ + eT(t) ~(t))
(5.6e)
- control:
~(t) = - ~Pl(O----~)~T(t)~-- ~(t) PI(P) \PI(P) /
(5.6f)
REMARK I
The estimation scheme is analogous to the stochastic approximation algorithm for discrete time. Comparewith the DSA-algorithm in Chapter
90
4. The denominator r { t ) makes the scheme d i f f e r e n t from those u s u a l l y used in MRAS, cf. Chapter 3. Note t h a t the case with decreasing gain in the estimation algorithm ( ~ = 0 ) is not considered. Compare with the discussion of the DSA-algorithm. The purpose of m(t) is to prevent r(t)
from approaching zero. Also note t h a t when the pole excess is
equal to one, Pl(p) is a constant and (5.6 a , f )
imply t h a t ~ o ( t ) = O.
In t h i s case i t is thus not necessary to estimate bO. I t also f o l l o w s from (5.6 d,e) t h a t ~ ( t ) = e l ( t ) .
[]
REMARK 2
The m o d i f i c a t i o n proposed in Section 3.4 is used. The s i g n a l s are thus f i l t e r e d
by the t r a n s f e r f u n c t i o n Q/TAM. I t is therefore not
necessary to introduce any p o s i t i v e real c o n d i t i o n . I t can also be expected t h a t the m o d i f i c a t i o n has a b e n e f i c i a l e f f e c t on the t r a n s i e n t properties of the algorithm as was found i n the d i s c r e t e time case. No s i m u l a t i o n studies have, however, been made.
[]
REMARK 3
Notice t h a t the control law ( 5 . 6 f ) does not contain any d i f f e r e n t i ators. The control law is not the same as the commonly used control law (3.13). The control laws ( 5 . 6 f ) and (3.13) are, however, asymptotically
equivalent. The important property of the control law ( 5 . 6 f ) is
t h a t e f ( t ) and e f ( t )
are l i n e a r in ~ ( t ) .
This is e x p l o i t e d in the
proofs. I t is not c l e a r whether t h i s m o d i f i c a t i o n is s i g n i f i c a n t just a technicality. are i l l u s t r a t e d
The e f f e c t s of d i f f e r e n t choices of control law
in a simple numerical example below,
EXAMPLE 5.1
Consider a f i r s t y(t)
-
or
order p l a n t , given by
bo u(t). p+a
The reference model is yM(t) = BM(p) uM(t) AM(p)
bM
p +a uM(t)"
The algorithm in Example 3.5 is used with the polynomials
m
91
T = p +t 1 P1 = AM P2 : T Q = p = PIP2. The expression corresponding to (3.5) is then u(t) u(t) + SO y ( t ) _ BM e f ( t ) = bo--Z~-A.. + b0 (r I - t l ) TAM TAM A--~uM(t) = = bo u(t) + bo eT ~ ( t ) . AM Here R(p) : p + r 1 S(p)
= so
The parameters b 0 and @ are estimated as in the CSA-algorithm. The numerical values of the d i f f e r e n t
parameters are given in Table 5.1.
Table 5.1. Parameter values used in the simulations. Parameter
a
b0
aM
bM
tI
~
m(t)
Value
l.O
l.O
2.0
1.0
2.0
5.0
O.l
Since the pole excess of the plant is equal to one, i t is possible to use the control law ~3.12), which sets i f ( t )
to zero. The control law
is given by u(t) = - AM(p)[sT(t) ~ ( t ) ] .
(5.7)
Although i t is possible to use this control law in this f i r s t order example, i t is interesting to compare with the results when using other control laws, which are designed to avoid differentiators when the pole excess is larger than one. The commonly used control law (3.13) is given by u(t)
: - §T(t)/AM(p) ~(t)].
(5.8)
92 The control law (5.6f) used in the CSA-algorithm is
u(t) = - A M ( p ) I ( a A - ~ p )
0T(t))~(t)].
(5.9)
The closed-loop system was simulated using the d i f f e r e n t control laws given above. The i n i t i a l values of the parameters were zero except b0(0 ) which was equal to two. The i n i t i a l value of r ( t ) was one. The simulation results are shown in Figures 5.1, 5.2, and 5.3 for uM(t) being a square wave. I t is seen that the control law (5.7) gives the A
fastest convergence of the regulator at the price of larger control inputs. I t can also be noticed that there are hardly any differences between the control laws (5.8) and (5.9). m
.L, o
u~
I yM
"I0 #-
-y
~m (~
=I--
k I
I
I
I
I
I
3
k
-6 (U~
0-
o #0
o
-3
t
0
25
Figcu'te 5.1. Simulation results for Example 5.1 with control law (5.7).
93
0 no r-
8'-'
I
I
I
I
I
3_ c-
._~ m
0-
(o 0
-3-
I
I
0
Figure 5.2. Simulation r e s u l t s f o r example 5.1 w i t h control law ( 5 . 8 ) .
i
=o "o C
o~
1
Y
O-
r-~
~ • ~ -1aJ n,-o.
I
I
I
I
I
I
I
I
3 i
-6 cul
O-
o
-3o (D
0
F/.gu~e 5.3. Simulation r e s u l t s f o r Example 5.1 with control law ( 5 . 9 ) .
94 The following general assumptions are made: A])
The number of poles n and zeros m are known and m ~ n - l .
A2)
The parameter b0 is nonzero and i t s sign is known. Without loss of generality b0 is assumed positive.
A3)
The plant is minimum phase, i . e . the zeros of the polynomial B(p) l i e in the open l e f t half plane.
A4)
There exists a solution to the d i f f e r e n t i a l
equations describing
the closed-loop system such that ~ ( t ) is continuous. The assumptions A1 - A3 were introduced in Section 3.2 and discussed there. Juste note that i f only an upper bound on n (or m) is known, the plant equation can be put into the form of (5.1) with known n and m j u s t by multiplying A and B by factors (p+m). The condition A3 w i l l not be violated by this operation i f m > O. The condition A4 is of technical nature. I t does not seem to be very r e s t r i c t i v e . For example, i t can easily be shown that the closed-loop system can be written as a d i f f e r e n t i a l equation x(t) = f /x(t), t] where f E C1 i f the noise v ( t ) is continuous. The existence and c o n t i n u i t y of the solution then follows from well-known theorems for ordinary d i f f e r e n t i a l equations and A4 is s a t i s f i e d . We w i l l not, however, go into these details. Finally a lemma of independent interest w i l l be given.
LEMMA 5.1
Let bo(t) and @(t) be defined as in Chapter 3. Then the following holds for the CSA-algorithm: ddt ( b ~ ( t ) + b 0 s T ( t ) e ( t ) ) . <
-
B2(t) ~(t)) 2 r ( t ) + r-~t) ( ~
(5.10) D
95 Proof The.equations (5.6a) and (5.6b) can equivalently be written in terms of bo and O- Then :
bo(t) bo(t) =
bo(t) c(t) {u(t) + sT(t ) ~(t))
r(t)
\PI
sT(t) ~(t) : sT(t) ~(t) e(t) r(t) which from (5.4) and (5.6 d,e) implies d (~2(t)+ bo ~T(t) 8(t)> = dt
: 2 ~r(t) [~)o(t)(-6P~ + ~T(t) ~(t)> + b0 QT(t)~(t)]
:
c(t) = 2 r - ~ <~ V ( t ) - ~ ( t ) ) = -r(It) [ - ~ 2 ( t ) + ( ~ V ( t ) > 2 - ( E ( t ) - ~ V ( t ) > 2] .< -
E2(t) 1 <_~ r(t)
+
r(t)
V(t)
)2.
m
5.2. L~-stability Someresults on L°%stability for the CSA-algorithm will be given in this section. They correspond to the results in Section 4.2 for discrete time. The presentation will be kept short. Refer to Section 4.2 for a discussion of the problems and ideas behind the proofs. Since LO=-stability is considered, i t is natural to assume that the commandand disturbance signals are bounded. It is convenient to make the following
98
DEFINITION The closed-loop system is said to be L~-st~zble i f uniformly bounded command (u M) and disturbance (v) signals give uniformly bounded input (u) and output (y) signals,
m
The main result is given by the following theorem. I t corresponds to Theorem 4.1 in discrete time and the technique used in the proof is very similar.
THEOREM5.1
(CSA-~9o~bthm with noise)
Consider the plant (5.1) controlled by the CSA-algorithm, Assume that assumptions A1 -A4 are satisfied and that the parameter estimates are uniformly bounded. Then the closed-loop system is L~%stable.
Proof The proof is given in Appendix B.
Corollary Instead of (5.6f), l e t the control law be given by
(5.11) PI(P) where G(p) is an asymptotically stable transfer function with pole excess greater than or equal to n - m - I and steady state gain equal to one. Then the theorem is s t i l l true. m
Proof The only properties of the transfer function PI(O)/PI(p) that are used in the proof are that the steady state gain is one, that Pl(p) is asymptotically stable and that degree (PI) ~ degree ( P l ( O ) ) + n - m - l . The proof therefore s t i l l holds i f (5,11) is used.
g7 REMARK I f r ( t ) is replaced by~ + I ~ ( t ) I 2, ~ positive constant, in the estimation part, the conclusion of the theorem is s t i l l true.
[]
Theorem 5.1 can be specialized in d i f f e r e n t ways. The origin of the s t a b i l i t y investigations were the problems with MRAS. In that case noise is generally not included in the problem formulation and Theorem 5.1 can be specialized to give the solution.
THEOREM5.2
(CSA-o~_ga~ZJt~mwithout node)
Consider the plant (5.1) with no noise, i.e. v(t) = O, controlled by the CSA-algorithm. Assume that assumptions Al -A4 are satisfied. Then the closed-loop system is L~-stable. a
Proof The boundedness of the parameter estimates follows immediately from Lemma 5.1, and Theorem 5.1 can be applied, a
Recall from Section 3.3 that the general structure includes several MRAS, e.g. the ones by Monopoli, Feuer/Morse and Narendra/Valavani. I f the estimation algorithm in these schemes are chosen as the one in (5.6 a-e) - or as in the remark following Theorem 5.1 - and i f the control law (5.6f) is used, the boundedness follows from Theorem 5.2. The complicated modifications introduced by Feuer and Morse (1977) should thus not be necessary to obtain s t a b i l i t y . could of course s t i l l
Such modifications
have effect on e.g. convergence rates.
Theorem 5.2 gives a f a i r l y satisfactory s t a b i l i t y result for the deterministic case. A natural question is whether i t is possible to extend the result in Theorem 5.2 to the case of disturbances which are not zero. The assumption of bounded estimates in Theorem 5.1 i s , however, d i f f i c u l t
to v e r i f y a p r i o r i .
In analogy with the discrete
time case, two p o s s i b i l i t i e s to modify the algorithm to ensure bounded estimates are presented below.
9B
THEOREM 5.3
( CSA-a~garithm with conditional updouti~g )
Consider the plant (5.1) controlled by the CSA-algorithm, modified in the following way:
bo(t ) : 0 ] e(t)
= o
I
if
I c ( t ) ] < Kv,
(5.12)
where Kv is a positive constant, satisfying
sup t
v(t)]
Kv.
(5.13)
Assume that AI-A4 are satisfied. Then the closed-loop system is L~°-stabIe. Proof Two problems have to be considered in order to apply Theorem 5.1. I t must be shown that the estimates are bounded and the consequences on Theorem 5.1 of the modification (5.12)must be examined. First, Lemma 5.1 gives ~t (b~(t)+b 0 oT(t)~(t)),
~2(t) + 1 - r(t) r(t)
(t)) 2 S 0 (5.14)
if
IE(t)f ~ Kv~ If
?(t)l"
On the other hand, i t follows from (5.12) that
if
l~(t) I < Kv. The parameter estimates are thus bounded. For the second problem, the following changes are needed: (i)
In Step 2 of the proof, Lemma 5.1 is used in the derivation of
99 (B.30). However, i f the function IQ(t) is defined as
l, l~(t) =
t E R = { t l l ~ ( t ) l ~ Kv}
O, elsewhere
then ( 5 . 1 0 ) o f Lemma 5.1 holds for all t i f c ( t ) is replaced by 1~(t) c ( t ) . Furthermore, i t follows from (B.21), (B.27), (B.26), and (B.18) that Ti j+l
Ti j+l
l~(s)l ds-(T]+ 1 - T~_I)Kv>~ Ti j-I
Ti j-l
Tj+l
T+I l~(s)l ds- 2KA In N . K v >
[e(s)l ds - 2AT-Kv >~ Ti j-l
Ti j-I
Ti j+l
K1(cM+ pKT) +4
l~(s)l
ds - ~
•
N
4K ( I + ~ ) K A In N.N KIpKT
Ti j-1
Tj+I >~ -~1
I
I~(s)l ds
for N s u f f i c i e n t l y
large.
i Tj-I
It is then easy to see that (Bo30) sti]1 holds i f ~(t) is replaced by E(t).l~(t) in the derivation. (ii)
Lemma 5.1 is used in the same way as above in Step 3 of the proof.
Since the technique is s i m i l a r , the above modifications can be used here too. (iii)
The modification might cause ~(t) to be zero sometimes. The A estimate used f o r (~(t) in the proof of Lemma B.4 i s , however, s t i l l
I00 valid• The conclusion is that Theorem 5.1 can be applied and the theorem is
proven.
D
The comments on the modification of the estimation algorithm in discrete time apply here too. Another p o s s i b i l i t y to obtain bounded estimates is to project them into a bounded area. This idea is exploited in the following theorem.
THEOREM 5.4
l CSA-algorithm with p r o j e ~ o n )
Consider the plant (5.1) controlled by the CSA-algorithm, modified in the following way:
{ ~0(t): [Up_~+ ~T(t ) ko(t)] ~Itt)
-
r ( t ) bo(t)
i,[ I>IF> c J O(t)
(5.15) where y is a positive constant and the constant C satisfies
C > 2
i
max (l ' bO)
~min
{[":I
~ bo)
•
,
(5.16)
where b0 and 8 are the true plant parameters. Assume that A1 - A4 are satisfied. Then the closed-loop system is L~%stable.
Proof
Define ,T
: [b o ,~0 ,T]
~T(t) : (b0(t)
~eT(t))
~T(t) = [ b o ( t )
V~-O0oT(t)) •
v
4-
II
~
(~
P~
II
-'~/ ~ ¢1~ ~
v
+
¢-I-
0"/
v
0
~-. !~
•
r..-
...~.
'~v
N
~A
PO
"~-~
,_~0
v
el-
tA
"E--
m
c.-
-.
:::5-
e., _1.
~
~
~1~
~,
.J.
~'Z~-
~:~
r..o
toe
#1
i'D
,.~
_J.
~'~ tn
.~.
0
~
O
c-'e
e+ ~-
m
v
-
+
-e->
-
:::r
ct"
~ -i°
X
~.V
~°
3
~,V
I"0
O
'-h 0 "S
--%
~ O
O
~
~J')
~"
"5
0"
PA
tA
v
OP,~
~
u
0
~-
~' ~ ~
~l
0'~
u
--I 1~.
O'1
~-
i'D
1
0
0
102
+ b0 eT(t) ~(t) c(t) _ Y b0 ~T(t) @(t)] : r(t) r(t) = 2 ~r ( t ) I - ~ v ( t ) - ~ ( t ) ) < -
r(t)
+
r(t)
- 2
2 r--~t) (~O(t) E)o(t) +b0 @T(t) @(t)) (t) @(t) .<
r(t)
E2(t ) + K~ _ 2 ~
"< -
r(t)
r(t)
1-21J
r(t)
l -
lJ
l~(t)12.
(5.18)
Two things follow from this result. Firstly, i t is obvious that Lemma 5.1 holds even i f the modification is used. Secondly, i t follows from (5.17) that i f
l~(t)l .> I~I
+ cv'max (l,bo)"
then A
•
@(t)
Jmax ( l , bo~
]$(t)l ;
/'max (I, bo~
(l;(t)l-
I01) ; C
and so the modification is used. This fact together with (5.15) implies that [@(t)l ~ max 101 + CVmax (l,bo)', Kv
2y(l-Zp)
which means that the parameter estimates are bounded. To apply Theorem 5.1, i t now only remains to modify the proof of Lemma B.4. I t is easy to see that a constant is added to I~(s)~.Im(s)I in the integrands in (B.47) when the projection is used. This does not influence the estimate (B.48). Equation (B.49) is s t i l l true i f a constant is added to t
I Ic(s)l ds.
t-T This does not influence the arguments in the proof of Theorem 5.1. This concludes the proof.
[]
103 5.3. Convergence in the disturbance-free case S t a b i l i t y conditions are crucial in the convergence analysis of adaptive schemes. For example, convergence of the output error in the absence of noise could not readily be solved for continuous time systems, except for the case with pole excess equal to one or two. Compare with the discussion in the beginning of this chapter and in Chapter 3. The results of the preceeding section prove the boundednessof the closed-loop signals. I t thus follows that the output error converges to zero.
THEOREM 5.5
(CSk-o~Zgo~Y~thmwithout noi.se)
Consider the plant ( 5 . l ) with no noise, i.e. v ( t ) = O, controlled by the CSA-algorithm. Assume that assumptions Al -A4 are satisfied and that the command input uM(t) is uniformly bounded. The output error then converges to zero, i.e. y ( t ) - yM(t) ~ O,
t ~.
m
Proof Lemma 5.1 gives ( v ( t ) : 0 )
r(t) 0 But I~(t)I is bounded from Theorem 5.2 and r ( t ) is therefore also bounded. Hence, f ~2(t) dt < ~. 0
(5.19)
This does not, however, imply that E(t) tends to zero. A bound on the derivative of ¢2(t) is necessary for ¢(t) to converge to zero. I t follows from (5.4) and (5.6 d,e,f) that
104
~(t) = - bo(t)~8(t) /
Pl(O)
~ ( t ) jI T ~ ( t ) - b0 ~T(t) ~(t).
PI(P)
Thus, e(t) is bounded, because the parameter estimates and I~(t)l are bounded. Define H(p) : l
Pl(0) PI(P)
and differentiate the expression for ~(t) to get
+ bo(t)[H(p)(~(t)c(t)~]T~(t)+bo(t)(H(p) ~(t)) T * ( t > + r(t)/]
+ bo~T(t) E(t) ~(t) + b0 ?T(t) ~ ( t ) ] . r(t) The parameter estimates and l~(t)I are bounded. Also, e(t) is bounded as was seen above. Furthermore, H(p) is asymptotically stable and r ( t ) is bounded from below by rmi n as shown in the proof of Lemma B.3, Appendix B. Finally l~(t)l is bounded from the proof of Lemma B.2. I t is thus possible to conclude that E(t) is bounded. Hence, dt
[~2(t)] = 2~(t) E(t)
is bounded. I t then follows from (5.19) that e(t) ~ O,
t ~.
In the same way as in Lemma B.4 (Appendix B), we have
el(t) = bo(t)(G(p) ~T(t) ~(t)) r(t) where G(p) is a s t r i c t l y proper, operator. Since Ibo(t)l and
l (t)I
~(t),
asymptotically stable transfer are bounded and r ( t ) ~ rmi n v t,
105 i t thus follows that e f ( t ) ~ O,
t
This implies that ^
ef(t) = ~(t) + ef(t) ~ O,
t ~.
Hence y ( t ) - yM£t)'' = e(t) : P(P) e l ( t ) ~ O, Q(P)
t ~
because Q(p) is asymptotically stable and P(p)/Q(p) is proper.
The output error thus converges to zero for the general algorithm defined in Chapter 3, provided that the estimation scheme and control law are chosen as for the CSA-algorithm. The output error will also converge to zero i f the estimator is chosen as in the corollary or remark of Theorem 5.1. In particular, convergence of the output error is assured for earlier propsed MRASby Monopoli (1974) and Narendra/Valavani (1977) i f some minor modifications of the algorithms are made. F i r s t l y , the signals should be filtered by the transfer function Q/TAM. In fact, this modification seems to improve the properties of the algorithms, cf. Example
4.1. Secondly, the parameter adjustment should have r ( t ) (or I ~ ( t ) l 2) in the denominator. Finally, the control law should be chosen as in (5.6f). I t should also be noted that the same conclusions can be made for the algorithms by B~n~jean (1977) and Feuer/Morse (1977) and the new algorithm proposed in Section 3.3. As in the discrete time case, i t is possible to go one step further and investigate conditions for the convergence of the parameter estimates. I t is then necessary to assume that the number of parameters is chosen correctly. Note that for the above results to hold, this was not required. The convergence of parameter estimates has been examined by others, e.g. Caroll/Lindorff (1973), LUders/Narendra (1973),
106
Kudva/Narendra (1973), Morgan/Narendra (1977). The well-known conditions on the frequency contents of the input signal are introduced to assure convergence of parameter estimates. The problem w i l l be l e f t with these remarks.
107 REFERENCES Albert A E, Gardner L A (1967): Stochastic Approximation and Nonlineo~ Regression, The MIT Press, Cambridge, Mass.
AstrBm K J (1970): I~oduction to Stochastic Control Theory, Academic Press, New York.
Astr~m K J (1976): Regle#uteoluL (in Swedish), Almqvist & Wiksell (2nd edition), Stockholm. Astr~m K J (1979): New implicit adaptive pole-placement algorithms for nonminimum phase systems. Report to appear, Dept of Automatic Control, Lund Institute of Technology, Lund, Sweden. Astr~m K J, Bohlin T (1965): Numerical identification of linear dynamic systems from normal operating records. IFAC Symp on Theory of Self-adaptive Control Systems, Teddington, England. AstrBm K J, Wittenmark B (1973): On self-tuning regulators. A~tomatica 2, 185-199.
Astr~m K J, Wittenmark B (1974): Analysis of a self-tuning regulator for nonminimum phase systems. Preprints of the IFAC Symp on Stochastic Control, Budapest, Hungary, 165-]73. Astr~m K J, Borisson U, Ljung L, Wittenmark B (1977): Theory and applications of self-tuning regulators. Au~tomat~Lca I_~3, 457-476. Astr~m K J, Westerberg B, Wittenmark B (1978): Self-tuning controllers based on pole-placement design. CODEN: LUTFD2/(TFRT-3148)/l-52/ (1978), Dept of Automatic Control, Lund Institute of Technology, Lurid, Sweden. B6n6jean R (1977): La commande adaptive ~ mod61e de r6f6rence 6volutif. Universit6 Scientifique et M6dicale de Grenoble, France. Borisson U (1979): Self-tuning regulators for a class of multivariable systems. Automata/ca 15, 209-215.
108
Caroll R L, Lindorff P D (1973): An adaptive observer for single-input, single-output linear systems. IEEE T~ns AuJtom Control AC-18, 496-499. Clarke D W, Gawthrop P J (1975): Self-tuning controller. Proc IEE 122, 929-934. Edmunds J M (1976): Digital adaptive pole shifting regulators. PhD dissertation, Control Systems Centre, University of Manchester, England. Egardt B (1978): Stability of model reference adaptive and self-tuning regulators. CODEN: LUTFD2/(TFRT-lO17)/l-163/(1978), Dept of Automatic Control, Lund Institute of Technology, Lund, Sweden. Feuer A, Morse A S (1977): Adaptive control of single-input, single-output linear systems. Proc of the 1977 IEEE Conf on Decision and Control, New Orleans, USA, I030-I035. Feuer A, Morse A S (1978): Local s t a b i l i t y of parameter-adaptive control systems. John Hopkin Conf on Information Science and Systems. Feuer A, Barmish B R, Morse A S (197B): An unstable dynamical system associated with model reference adaptive control. IEEE T~ns Au~tom Control AC-23, 499-500.
Gawthrop P J (1977): Some interpretations of the self-tuning cont r o l l e r . Proc IEE 124, 889-894. Gawthrop P J (1978): On the s t a b i l i t y and convergence of self-tuning algorithms. Report 1259/78, Dept of Engineering Science, Univers i t y of Oxford. Gilbart J W, Monopoli R V, Price C F (1970): Improved convergence and increased f l e x i b i l i t y in the design of model reference adaptive control systems. Proc of the IEEE Symp on Adaptive Processes, Univ of Texas, Austin, USA. Goodwin G C, Ramadge P J, Caines P E (1978a): Discrete time multivariable adaptive control. Div of Applied Sciences, Harvard University.
109
Goodwin G C, Ramadge P J, Caines P E (1978b): Discrete time stochastic adaptive control. Div of Applied Sciences, Harvard University. Ionescu T, Monopoli R V (1977): Discrete model reference adaptive control with an augmented error signal. Au~tomcut/Laa 13, 507-517. Kalman R E (1958): Design of a self-optimizing control system. T~c~ A~ME 80, 468-478. Kudva P, Narendra K S (1973): Synthesis of an adaptive observer using Lyapunovs direct method. I ~ J
Control 18, 1201-1210.
Kudva P, Narendra K S (1974): An identification procedure for discrete
multivariable systems. IEEE Tram Au~tom Control AC-19, 549-552. Landau I D (1974): A survey of model reference adaptive techniques theory and applications. Au~tomo~?xicaIO, 353-379. Landau I D, B~thoux G (1975): Algorithms for discrete time model reference adaptive systems. Proc of the 6th IFAC World Congress, Boston, USA, paper 58.4. Ljung L (1977a): On positive real transfer functions and the convergence of some recursive schemes. IEEE T~uzns Au~tom Control AC-22, 539-550. Ljung L (1977b): Analysis of recursive stochastic algorithms. IEEE Trans Au~om Control AC-22, 551-575.
Ljung L, Landau I D (1978): Model reference adaptive systems and self-tuning regulators - some connections. Preprints of the 7th IFAC World Congress, Helsinki, Finland, 1973-1980. Ljung L, Wittenmark B (1974): Asymptotic properties of self-tuning regulators. Report TFRT-3071, Dept of Automatic Control, Lund Institute of Technology, Lund, Sweden. Ljung L, Wittenmark B (1976): On a stabilizing property of adaptive regulators. IFAC Symp on Identification and System Parameter Estimation, T b i l i s i , USSR.
110
LUders G, Narendra K S (1973): An adaptive observer and identifier for a linear system. IEEE Tran~ Au~om Con~ol AC-I8, 496-499. Monopoli R V (]973): The Kalman-Yakubovich ]emma in adaptive control system design. IEEE Trans Autom Control AC-18, 527-529. Monopoli R V (1974): Model reference adaptive control with an augmented error signal. IEEE Trans Autom Control AC-19, 474-484. Morgan A P, Narendra K S (1977): On the uniform asymptotic s t a b i l i t y of certain linear nonautonomous differential equations. SIAM J Control 15, 5-24. Morse A S (1979): Global s t a b i l i t y of parameter-adaptive control systems. Dept of Engineering and Applied Science, Yale University. Narendra K S, Valavani L S (1976): Stable adaptive observers and controllers. Proc of the IEEE 64, I198-1208. Narendra K S, Valavani L S (1977): Stable adaptive controller design, part I: Direct control. Proc of the IEEE Conf on Decision and Control, New Orleans, USA, 881-886. Narendra K S, Va]avani L S (197B): Direct and indirect adaptive control. Preprints of the 7th IFAC World Congress, Helsinki, Finland, 1981-1987. Osburn P V, Whitaker H P, Kezer A (1966): New developments in the design of adaptive control systems. Inst of Aeronautical Sciences, paper 6]-39. Parks P C (1966): Lyapunov redesign of model reference adaptive control systems. IEEE Trans Autom Control AC-ll, 362-367. Peterka V (1970): Adaptive digital regulation of noisy systems. 2rid IFAC Symp on Identification and System Parameter Estimation, Prague, Czechoslovakia. Wellstead P E, Prager D, Zanker P, Edmunds J M (1978): Self tuning pole/zero assignment regulators. Report 404, Control Systems Centre, University of Manchester, England.
111
APPENDIX A - PROOFOF THEOREM4.1
Theorem 4.1 Consider a plant• described by A(q-l)y(t) = q-(k+l)boB(q'l)u(t ) + w(t)
(A.l)
or• alternatively, ef(t) = boq-(k+l) Here '-'
) + 8T~(t) +
(A.2)
denotes filtering by Q/TAM and
= Fu(t-l) u(t-nu) y(t) y(t-ny+l) ~T(t) L F . . . . . - T ' - T ..... P
!BM uM(t)], (A.3) P
where
nu = max (m+k, nP2) (A.4)
ny = max (n+nT-k, n). The plant is controlled by the DSA-algorithm with fixed observer polynomial, defined by - estimation scheme: ^bo(t) =bo(t-l)+IT(t-k-l)+sT(t-l)~(t-k-l) ^ ] ¢(t) r(t)
(A.Sa)
B(t) =O(t-l) + 6O~(t-k-l) ~(t_) r(t)
(A.5b)
L Pl
r(t} =~r(t-]) +~u(t-k,.l_) + ~T(t_l)~(t_k_])] l Pl
2
+
+ B~ l~(t-k-l)l 2+~; O(~
(A.5c)
c(t) = ef(t) - ~f(tlt-I )
(A.5d)
~f(tlt-l) = Bo(t-l)F~(tzk~l) + BT(t-l)~(t-k-l)] L P1
(A.5e)
112
- control law: ~(t) P1
= _ ~T(t ) ~ ( t )
Assume t h a t A . I - A . 3
(A.5f)
are f u l f i l l e d .
Further assume t h a t the parameter ~nb estimates are uniformly bounded and t h a t 0 < r-~-< B0" Then the closed loop system is L~-stable, i . e . i f w(t) an-d uM(t) are u n i f o r m l y bounded, then u ( t ) and y ( t )
are also uniformly bounded.
P~oo~ A single realization will be considered throughout the proof. This means that constants appearing in the estimates will depend on i n i t i a l conditions and realizations of w(t) and uM(t). The boundedness of l~(t)l will be proved by contradiction. Thus, assume that sup ] ~ ( t ) l t)O
> NM
f o r N and M a r b i t r a r i l y
large. This assumption w i l l
be contradicted
f o r some N and M. Under the assumption, tNM and t M are w e l l - d e f i n e d i f N > l and M > I ~ ( o ) I : tNM = min { t i l ~ ( t ) l t M = max { t l t
) NM}
~ tNM;
i~(t)l
~ M;
l ~ ( s ) ] < M v s E [max (0, t - [ c M In N]), t - l ] } . Here [ t ] denotes the i n t e g e r part of t and c M is defined below. A
schematic picture of i~(t)l in the interval [tM,tNM] is seen in Fig. A.I. The c o n t r a d i c t i o n w i l l
be achieved by analysing the algorithm in
d e t a i l in the i n t e r v a l [tM~ tNM]. An o u t l i n e of the proof is as N f o l l o w s . In Step 1 an increasing subsequence of { l ~ ( t ) l } , {l~(Ti)l} T i=l' is defined and a lower bound on NT is derived. In Step 2 an upper bound on TN - T 1 is given. F i n a l l y , Step 3 derives an upper bound on N . Combining the r e s u l t s of Stepsl and 3 gives the c o n t r a d i c t i o n and the boundedness of l~(t)1 is thereby proved. I t is then easy to conclude the boundedness of u ( t ) and y ( t ) .
113
m itp(t NM ¸
X X
X X :<
,~CMln N _1
X
X X
M
x
x
--IX
X
X
~CMln
T
I
INM Figure A.I. The behaviour of l~(t)l in the interval [ t M, tNM].
Some preliminary results are f i r s t given as separate lemmas. Proofs of these lemmas can be found at the end of the appendix.
Lemma A. I
Under the conditions of the theorem, there exist a positive constant K, a constant ~, 0<~ < ], and an asymptotically stable matrix F such that, for every t ) s + l , T+k+l
l~(t)l , II Ft-s II" l~(s)l +(t-s)K(l + sup ~ s~z
pT+k+l-~lef(~)l). (A.6)
Lemma A.2
Under the conditions of the theorem, there exist positive constants Kl and K2 such that l~(t+l)I ~ Kii~(t)1 + K2
V t.
(A.7)
114 Lemmc A. 3 Under the conditions of the theorem, there exist positive constants K3 and K4 such that, for k ~ l ,
J^e f ( t l t - l ) J
~ ( K3 +
min K4l~(s) l
t-2k-I ~ s ~ t-k-2
Before we return to the proof, we w i l l complete by d e f i n i n g 1 cM : max I n ( I / p ) '
sup Jc(S)J.
)
(A.8)
t-k ~ s ¢ t-I
make the d e f i n i t i o n
of t M
2 ) In(I/~)
(A.9)
where p is from Lemma A.I. I t is meant that cM = I / I n ( I / p ) i f L=O. Let [si,s 2] be any interval between t M and tNM such that
l (s)l < M v sI ~ s ~ s2 I
Assuming
(s2+I)I
M.
min I~(s)l s I ~ s ~ s2
is attained f o r s =s o ,
Lemma A.2 gives
\ s2-so+l / s2-s 0 s2-s O- 1 + . . . + l ) K2 M ~ J~(s2+1)l ~ K1 (~(So)( + [K 1 +K 1
s2-So+I/
KI
K2
[l (so)l + Kl
),
- I
where i t has been assumed t h a t K1 > I , definition
which is no r e s t r i c t i o n .
The
of t M then implies
l~(So)J ~
M K2 s2-so+l - Kl---~i-1 ~ K1
and so, since the i n t e r v a l
min I~(s)I tM~s~tNM
~
M
K2 KI -1
~ K1
[Sl,S 2] is a r b i t r a r y ,
M cMInK 1 N
~K2 KI - I
~2
1N 3
-
M K2 In K I ' - K1 1 NcM -
we have f o r large N '
(A.lO)
where M has been chosen as M : Np = ^ NcM
In KI+3
(A.ll)
115
Step I . Characterization of the sequence { ~ ( ~ i ) } Assume from now on t h a t t M < tNM. This~ is true f o r large N as seen from (A.7). Define the sequence
{ ~ i } ~ l _ r e c u r s i v e l y from
{~1 = tM Ti+ l : min { t l T i + n ~ ¢ t < tNM, l~(t)] ~
max tM~s~t
where n w i l l be defined below. I t is c l e a r t h a t the sequence is nondecreasing. An i l l u s t r a t i o n of the d e f i n i t i o n is given in Fig. A.2, which shows the same r e a l i z a t i o n as in Fig. A . I . Define NF(X) to be the smallest i n t e g e r t h a t s a t i s f i e s I] Fi II ~ ~1
v
i ) NF(X)
(x > 0) ,
(A.12)
where F is defined in Lemma A . l . This is possible, because F is a s y m p t o t i c a l l y stable. Also notice t h a t from ( A . 2 ) , ( A . 5 f ) and the boundedness of the estimates and the noise i t f o l l o w s t h a t
IL0(t) I NN
@ X
X
@
@ x x"
®
x
0 x
Iv
x
×
x
® X X
X
x
x
X
X
x
I
I
tNM
tM Figure A.2. The d e f i n i t i o n of {~i }. The members of the sequence are marked with a r i n g .
I t is assumed t h a t n
T
: 2.
116
(A.13)
lef(t+k+l) I ~ Kel~(t) I + Kv for some constants Ke and Kv. Now choose nT as an i n t e g e r which satisfies the following three conditions: (i)
nT ~ NF(4 )
(ii)
nT ) k
(iii)
nT-k I nz1~ ,< 8K(Ke+I) .
(A.14)
Choose N so that M ~ max
, K2
with Kl and K2 from Lemma A.2; i t is assumed that Kl > l , which is no restriction. This can of course be achieved with an N large enough. Conditions like this one on N and M will appear several times in the sequel. I t is then important to check that the constants appearing in these conditions do not depend on the interval under consideration and consequently on N and M themselves. This will however not be pointed out every time i t appears. I f N is chosen as above, Lemma A.2 and the definition of {zi } give J~(Ti+l) j c KII~(Ti+I-I)I + K2 ~ K1 n +l KIT
l~(~i)I +
nT+l /
n n -1 (KIT + KiT + . . . + I) K2 K2
Kl
n +1
~ 2KIT
n +I
+ KI_I) 2Kl+
I t follows in the same way that
I~(tNM)I
sup I~(t)J + K2 Ti~t~Ti+nT
I~(~N )I T
and so, combining these inequalities,
117
NM ~
I~(tNM) I
n +I ~ 2KIT I~(T N
)I
N NT(nT+I) ~ 2 T K1 l
...
(Tl) I
T
N N (n+l) 2 ~ K1 (KII~(TI-I)I+K2)
N N (nT+l) ~ 2 T K1 (KIM+K2) ~
N NT(nT+l)+l 2 • 2 T K1 M, where the c o n d i t i o n on M above has been used in the l a s t step, The c o n c l u s i o n is thus t h a t In (N/2K 1 ) N ~ T In 2 + In KI . (n~+l)
(a.15)
This is a lower bound on NT, which w i l l be used l a t e r on. The f i r s t step of the proof is concluded.
Step 2. Derivation of an upper bound on ZN - T l T
Define intervals I i = [%i_l,Ti+l],
i = 2, 4 . . . . .
2NI
where the number of intervals NI s a t i s f i e s 1 (N-I},
N odd. (A.16)
NI = ] (NT-2)'
NT even.
Ni i T and can be assumed positive from (A.15). Define the sequence { T j } j = 0 inside the interval I i (where i is a r b i t r a r y ) through i
TO = z i - I (A.17) T!j = min { t l t ~ Tji- I +nT'
I (t)I
~ M}, j : 1 . . . . . NTi
where NTi s a t i s f i e s Ti+ 1 - n T < T N~ i
~i+l
(A.18)
The l e f t inequality follows because I~(Ti+l) I ~ M. The integer nT is
118 defined as nT = KT In N,
(A.19)
where KT = max
In[I/r(F)]'
Here r(F) is the spectral
In(I/~)
"
(A.20)
radius of the matrix F in Lemma A . I .
Denote by AT the maximal distance between any two successive members of the sequence { T ] } . of t M ( c f .
Then i t follows from (A.17) and the d e f i n i t i o n
Fig. A . I ) t h a t
AT < nT + c M In N = (KT+CM) In N A__KA In N,
(A.21)
where KA i s independent o f N. Define intervals Jj :
j-l'
j+l
'
J = I, 3 .....
where the number of i n t e r v a l s
2N - I ,
Nji s a t i s f i e s
1 (N~-I),
NTi odd
l
i NT even.
Nji =
(A.22) i NT,
The algorithm w i l l
be examined in d e t a i l
in an i n t e r v a l
J J! .
Distinguish
between two cases.
The case N~T ~ 2 I t is seen from (A.22) that there is at l e a s t one i n t e r v a l the i n t e r v a l
Jji inside
I i . Suppose that
sup i~(s+k+l)i Ti i j - I ~ s < Tj+ 1
<
(I-IJ) M 4K(K3 + K4 + 1 )AT
(A.23)
11g
with K3 and K4 from LemmaA.3. This assumption will be shown to lead to a contradiction. We will f i r s t conclude that nT ~ NF(4N) as defined in (A.12). The matrix F in (A.12) is asymptotically stable, which implies that there is a constant KF such that II Fi II ~< KF ' r ( F ) i
V i ) iF
for some integer i F. Here r(F), the spectral radius of F, is less than one. If NF(4N) -*~, N-*~, the definition of NF(X). (A.12), gives
1 i FNF(4N)-I (NF(4N)-I) 4N < II ~ KF- r(F) , which implies that NF(4N ) ~
In 4KF +In N In [I/r(F)]
+ 1 ~
2 In [I/r(F)]
In N ~ nT
for N sufficiently large. On the other hand, i f NF(4N) is uniformly bounded in N, then the above result is s t i l l true. It is therefore possible to use LemmaA.l and the definitions of NF(4N) and AT to obtain:
-
i
I~P(Tj+I)I~
I~(T~)I
4---T--+
T+k+l
KAT ( l +
sup
~I~
T+k+l-°
lef(a)J).
Ti~'T<Ti+I 3 J Suppose the sup is attained for T=t. Then
]~(T~+I)] "<
4-T
NM -<-~T_ + KAT + KAT
+ KAT
t-T~_ l-k+l
t+k+l + ~ - ~ , l j t+k+l-~ ,ef(a), 1 .< a=k+l T~.]+k-I
~
~N
t + KJ~T ~ - - ~
t-a
Ti-l+k-l-a
ief(a+k+l) I
[ef(a+k+l)i +
M + KAT + R1 + R2. 4
a=T~_l+k The terms R1 and R2 will be considered separately.
(A.24)
120
First, i t follows from (A.13)and the fact that nT ~ k that
nT-k+l T]-l+k-l~. T] _l+k-I -a R1 ~ KAT ~
~
~
1 (KeNM+Kv) ~ KAT ~nT-k+l ~j~
(Ke+l) 1 - k KA In N.N
(Kel~(~)I + Kv) nT_k+l (Ke+I)AT ~ NM
-KT In (I/~)
M NM ~ ~ ,
where (A.21) and the definition of nT, (A.Ig),(A.20), have also been used in the two last steps.
The term R2 can be estimated using (A.5d) and LemmaA.3: t R2 ~ KAT ~__~ t - ~ (IE(a+k+1)l + lef(a+k+llo+k)i) ~=T]_l+k t ~ KAT ~ - - ~ IJt-~ [i~(~+k+l)i +(K3+
~=T]_l+k t ,
< KAT
ut-~(K3+K4+I)
, /
K4
!
sup I~(s+k+l)l](
min I~(s) /a-k~s(o-I o-k~s~o-I
sup i~( s+k+l)I -< (~-k~s.<~
~=Tj_l +k K .<-~_~ (K3+K4+I) AT
M sup i~(s+k+l)[ < ~ , Tji - l .<s
according to the assumption (A.23). Here i t has been used that min
I~(t) l ~ l,
tM~t~tNM which follows from (A.IO). Using the estimates of R1 and R2 in (A.24), the following is obtained: ...l~(Tj+lll , i < - 3 M + KAT ~ M 4
if
121
FAT ~
M NP 4 4
This condition is t r i v i a l l y
s a t i s f i e d f o r large N as seen from (A.21). -
i
We have thus arrived at a contradiction since J~(Tj+I) I ~ M by construction. I t is possible to conclude that sup i
J~(s+k+l)J ~ i
Tj-I~S
(l-p) M 4K(K3 + K4 +1)AT
(A.25)
i
This inequality is valid for all intervals dj.
Define
bo
v(t) = 5~(t) +~ool~(t)12
(A.26)
and suppose the supremum in (A.25) is obtained fo r s = t, Lemma 4.2 then gives
T]+ 1+k V(T~+I+k)-V(T ~ l + k ) ~ - c
F l
-
T!+I +k ~2(s)
r(s)
s:T _l+k+l i 2 Tj+I-1 Kv ~2(t+k+l) 1 - c + ~ " r(t+k+l) c / . ir(s+k+l) s=T! . 3-I -
-
1
~-~
+ c
,
1 (R~(s))2 r(s)
s:T]_l+k+l
$
where Kv is the bound on the noise in (A.13). I t follows from ( A . 5 c , f ) i 1 ~ t < Tji+ l ' and the boundedness of the estimates t h a t , for Tj_ r(t+k+l)
.< r ( k + l ) + T ~
.< r ( k + l ) + ~
+ Kr
t ~t-s 2 z J~(s)J .< s=0
Kr 2 Kr +~Z~ (NM)2 -< ~ (NM)2,
f or some Kr i f N is chosen s u f f i c i e n t l y large. Furthermore, from (A.5c) and (A. IO) a lower bound on r ( t ) can be obtained: r(t+k+l)
Bu
l~(t)J
2
>. 2
1 N6 •
I f these two res u l t s are used above, the following in e q u a lity is
(A.27)
122
obtained: V(Tj+ 1 +k) - V(Tj. 1 +k) .<
c (1-~.)
2Kr(NM)2
2 2AT Kv . 4 e2(t+k+l) + - ~ 6 ~ N6
and so, using (A.25), we have for N s u f f i c i e n t l y V(T]+ 1 +k) - V(T~_ l + k )
_ c(l-k)
large:
.<
. ~ (l-w) M
~2 + v8~T K2 2Kr(NM)2 \4K(K3+K4+I)AT) c 6~ N6
2 + -8AT - Kv A_ "<- N2AT2 2Kr[4K(K3+K4+I112 C 6~ N6 c ( l - ~ ) (I-~12
^ -
cI
c2 AT
1 + N2 &T2 - ~ ,
(A.28)
where c I and c 2 are independent of N. Use the estimate of AT in (A.21) to obtain cI
N2K2(1 n N) 2
+
c2KA In N
.<
N6
co ,<
- ~-~
-
for N s u f f i c i e n t l y
N2
"<
N4
large. Here c O is independent of N. This in e q u a lity Thus
i i.e. Nji times. holds for every interval Jj, ,
But V is positive and bounded. I f the bound is denoted by Kv' we have i Kv . N4. Nj ~-00
(A.29)
123
The case N~ < 2
I t is seen from (A.22) that in this case N~ : 0. Consequently, (A.29) is t r i v i a l l y satisfied. The conclusion is thus that (A.29) is valid in every interval I i , provided N is chosen large enough. The inequality (A.18) and (A.22) imply Ti+l
_
Ti Ti 2N~ = ( T i + I - T i ' N ~ ) + ( N ~
_ T i
2NJ ) "< n T + A T `< 2AT,
and so, by using (A.29), _
T i
.
T i
Kv
_
4Kv
.<,AT+ 2N~AT .< 2AT (l +~-~0N4),< T O AT. Summing for i -- 2,4 . . . . . 2NI T2NI+ l - T l
,4.
gives
4Kv N4 ~ -~0 AT-NI,
(A.30)
which ends Step 2 of the proof.
Step 3. Derivation of an upper bound on N and the contradiction T
'
"
Consider an interval I i defined in Step 2. Suppose that (l-p)l~(Ti+l) I sup l~(s+k+l)l < Ti+l-2nT~ s < Ti+l 4KnT(K3+K4 +l )
(A.31)
This w i l l lead to a contradiction in the same way as in Step 2. Thus, analogously with (A.24) we have l~(Ti+l)l ~ where
l~(Ti+l- n) l 4
+ KnT + R1 + R2'
124
Rl = KnT ~
t-Ti+l+2n~-k+l
Ti+l-2nT+k-I o=~ ~i+l-2nT+k-l-~ ~ lef(a+k+l)l
t R2 = Knz ~=Zi+l~-nT+k t - o lef(o+k+l) I and t is in the interval [Ti+l-n T, mi+l-l]. Notice that the property (A.14i) of n has been used. Let [t] denote the integer part of t and estimate Rl by using (A.13) and the inequality Kv ~ M= Np, which is valid for N large enough: R1 ~ Kn ~nT-k+l[T i+l-2nT+k-l-(tM-[cMlnN]-l)
tM-[C M In N]-I , •
tM-[C M In N]-l-o ~
/
lef(~+k+l)l
+
O=0 Zi+l-2nT+k-I
+
Ti+l-2nT+k-l-o ~-~, p lef(o+k+l)l] ¢ ~=tM-[CMIn N]
n-k+l
KnT ~ T
[ [CMlnNl+k NM (Ke+l) I-,
+ l~(Ti+l)I i-~
]
'
where (A.14 i i ) has been used in the last step. Now use (A.9) and (A.14 i i i ) to obtain n -k [ M I~(Ti+l )I ] R1 .< KnT ~ T (Ke+l) l + 1-~ "< n -k 2 l~(Ti+l )I KnT ~ T (Ke+l) ~-~ l~(Ti+l)l ~ 4 The term R2 is estimated as in Step 2 using the assumption (A.31):
R2 ~ Kn
•
K3+K4+I I-~
sup l~(s+k+l)l < ~i+1-2nT~S< ~i+l
Insert the estimates of Rl and R2 above, which gives
I~(Ti+l)l 4
125
l~(~i+ l - nT)l J~(Ti+l) j <
4
l~(Ti+l)l + KnT +
2
~ l~(~i+l) I,
if
Kn% ~ M = N~, which of course is satisfied for large N. This gives a contradiction and consequently the assumption (A.31) is false. Hence,
(I-~) l~(Ti+ l )I
sup JC(s+k+l)J ~ Ti+l-2n ~ s < Ti+l 4KnT(K3+K4+I)
(A.32)
in every interval I i. In the same way as in Step 2, (A.5 c,f) and the boundedness of the estimates imply that, for Ti_ 1 ~ S ~ Ti+ I, S
r(s+k+l) ~ r(k+l)+l_-~+Kr _ _ ~
+ Kr xs-(tM-[CM In N]-I)
s'a J~(a)J 2 = r(k+l)+T~_~ +
tM-[C M In N]-I o=~
tM-[C M In N]-l-a
I~(a)l 2
S
+ Kr
~ ,
~s-o j~(a)J2
a=tM-[CN In N] [CM In N]+I (NM)2 J~(Ti+l) J2 r(k+l)+--~-~ +K r ~, + K r I-~ I-~ I-~ 2Kr 2 3Kr r(k+l)+-F~+ZZ~ J~(Ti+l) j ~-fZ~ J~(Ti+1)J 2' for large N and (A.9) has been used in the second last step. Now use Lemma4.2 in the same way as in Step 2, which gives
(A.33)
126
2 Ti+ 1- I Kv ~2(s+k+l) V(Zi+l+k) - V(Ti_l+k) ~ - C > " + / r(s+k+l ) , r(s+k+l) c s=Ti_ 1 s=~i-I Ti+l - l
7-"
K2 ~i+1 - I E2(t+k+l) + v ~ 1 - c r(t+k+l) ~- " / / , r(s+k+l) s=Ti_ 1
'
where the supremum in (A.32) is attained for s = t. and (A.27) we have for N large enough:
V(Ti+l+k)
- V(Ti_1+k) ~ - C -
^
Using (A.32), (A.33),
4K~ Ti+l - T i - l ^ ( 1-p ~2 f I-,~,~ 2 + " [4KnT(K3+K4+I)] 3Kr ~ N6
Ti+l - T i _ I
= _ c 3 + c4
•
N6
'
where c 3 and c4 are independent of N. The inequality is valid for i = 2, 4 . . . . .
2NI and so
V(T2NI+I+k) - V(T]+k) ~ - c3NI + c4
T2NI+I -T1 N6
c4 4Kv N4 ( 4C4Kv A T e , - c3N I + N--6 • ,c0 AT, -N I = - NI c 3 - c T N2/ where (A.30) is used in the second l a s t step. But as in Step 2 V(t) is positive and bounded from above by Kv, so that -
NI "<
Kv
-
c3
_ 4c4Y,v • A__~T co
~<[
Kv c 3 -c3/2
-
2Kv c3
for N s u f f i c i e n t l y large. This follows from (A.2I). From (A.16) i t then fol lows that
N'r "< 2NI+2 "<-~3 + 2, i . e . N is bounded by a constant. This is in disagreement with the T
r e s u l t of Step l ,
(A.15). We have thus lead the existence of a r b i t r a r i l y
large l ~ ( t ) I into contradiction and i t is thus proved that l~(t) l is uniformly bounded.
127
It remains only to conclude that l~(t)l bounded implies u(t) and y(t) bounded. But since Q(q-l) is asymptotically stable, l~(t)l bounded implies that l~(t)l is also bounded. From (A.3) i t immediately follows that u(t) and y(t) are bounded. The proof is thus complete.
Proof of Lemma A.I I t follows from Lemma 4.1 that
N1 ' 5A e(t+k+l
)
0
6
~(t+l) =F~(t) +
+gl(t) ~ F ~(t)+g2(t)+gl(t),
1 g(t+l)
-V
0
where F is asymptotically stable and g l ( t ) is a vector consisting of f i l t e r e d uM and w. The vector g l ( t ) is therefore bounded, by a constant Kl say. Iterate the above equation to obtain t ~ ( t ) : Ft-s ~(s) + ~ - - "
t Ft - i g 2 ( i - l ) + ~
i =s+l Since F is asymptotically stable, sup II Ft II ~= KF < ~. t~O Also note that
~(t) =-q-Qe(t) = TAPM e f ( t ) TAM which implies
i =s+l
Ft - i g l ( i - l ) .
128
I
A ef(t+k+1)
'TA"
0 g2(t) :
1
-~
ef(t+l ) O 0
But both T and AM are asymptotically stable polynomials, so that Ig2(t)l(
t+k+l K2 ( I + ~ - ' ~ , t + k + l - s s=k+l
lef(s)i)
for some K2 > 0 and 0 < lJ < I. Use these inequalities above to get l~(t)l
~ II Ft-s }l " l~(s)l +
+ KF(t-s)[ sup K211 LS(T< t
~+k+l ~ - - ~ / l j T+k+l-~ l e f ( a ) I ) + a=k#l
which gives (A.6) with the choice K = KF(KI+K2 ).
KI] a
129
Proof of Lemma A.2 The definition of ~ ( t ) ,
(A.3), gives
0
P 0
1
&
1 0 ~(t+l) =
~(t) +
g(t+l) P 0
6
1
_ TBM u-M(t+l ) P (A.34) I t follows from (A.5 f) and the d e f i n i t i o n of P2 that -6(t) = ( l - P 2 ) ~ ( t ) + ~ ( t ) P P Pl
-
= - ,FP21...P2nP2 0 . . . 0 l, ~ ( t ) - oT(t) ~ ( t ) ,
(A.35)
where the fact nu ~ nP2 from (A.4) has been used. The equation (A.I) gives y(t+l) _ P
al y ( t ) P - ...+
y(t-n+l) + an p
L ,/-u(t-k) + u(t-k-l) D0\ p bl p
+...+
b ~(t-m-k}~ + ~ ( t + l ) m p / p
which, i f k > O, can be written y ( t=+ lp)
[0...0
b0 b0bl...bob m 0 . . . 0 - a l . . . - a
+ w(t+l ) P On the other hand, i f k : 0 (A.35) can be used to get
n O...OI ~ ( t ) + (A.36)
130 ~(t+l) : p
Ibobl...bobm D...O -al...-a n 0...0 1 ~ ( t ) ^T
-
bo[P21...P2nP2 0...0] ~(t) - bOO ( t ) ~ ( t )
+
+ ~(t+l)
(A.37)
P
If (A.35) and (A.36) or (A.37) are inserted into (A.34), the following is obtained: ~(t+l) : A(t) ~(t) + b(t), where A(t) is a matrix with bounded elements because the parameter estimates are assumed to be bounded. Also, the vector b(t) is bounded because its components are outputs of asymptotically stable filters with inputs w(t) and uM(t), which are bounded. The lemma is thus proven with
K1 = sup JJA(t)JJ and K2 = sup Jb(t)J. t t
a
Proof of LemmaA.3 It follows from (A.5 e,f,b) that Jef(tlt-l)J ~ I b o ( t - l ) I I [ o T ( t - l ) - o T ( t - k - l ) ] ~(t-k-l)}
r(t-l)
^
"<
l(
l
÷...+
Ibo(t-1)J'~OJ~(t-k-2lJ
r(t-k)
1
)
l~(t-2k-llJl~(tmk-ll-t_k.<s~t_i l~(s)], sup
where (A.5 c) is used in the last step. From LemmaA.2 i t follows that there are constants K~ and K~, such that
J~(t-k-l)J ~ K~ J~(t-k-l-i)J + K~,
i : I . . . . . k. A
Insert this into the inequality above and use the boundedness of bo(t ) to o b t a i n
131
[ef(tlt-l)l ~
Ib0(t-l)l .T0
K½ + min
[~(s)l
sup t-k~s~t-I
I~(s)l
t-2k-l.<s.
(
.< K3 +
"4 min
IT(s)["
)
sup
t-k.<s.
l~(s)l
t-2k-l~s~t-k-2
and the lemma is proven.
[]
APPENDIX B - PROOF OF THEOREM 5.1

Theorem 5.1. Consider a plant, described by
\[
y(t) \;=\; \frac{b_0\,B(p)}{A(p)}\,u(t) \;+\; v(t) \tag{B.1}
\]
or, alternatively,
\[
e_f(t) \;=\; b_0\Bigl[\frac{p^{\,m+n_T}}{P}\,\bar u(t) \;+\; \theta^T\varphi(t)\Bigr] \;+\; \bar v(t). \tag{B.2}
\]
Here the bar denotes filtering by $Q/TA_M$, and
\[
\varphi^T(t) \;=\; \Bigl[\frac{p^{\,m+n_T-1}}{P}\,u(t)\ \ \ldots\ \ \frac{u(t)}{P},\ \ \frac{p^{\,n-1}}{P}\,y(t)\ \ \ldots\ \ \frac{y(t)}{P},\ \ -\frac{TB_M}{P}\,u_M(t)\Bigr]. \tag{B.3}
\]
The plant is controlled by the CSA-algorithm, defined by

- estimation algorithm:
\[
\dot{\hat b}_0(t) \;=\; \Bigl[\frac{p^{\,m+n_T}}{P}\,\bar u(t) + \hat\theta^T(t)\,\varphi(t)\Bigr]\frac{\varepsilon(t)}{r(t)}, \tag{B.4 a}
\]
\[
\dot{\hat\theta}(t) \;=\; \varphi(t)\,\frac{\varepsilon(t)}{r(t)}, \tag{B.4 b}
\]
\[
\dot r(t) \;=\; -\lambda\,r(t) + |\varphi(t)|^2 + \alpha(t); \qquad \lambda > 0, \quad r(0) \ge r_{\min} > 0,
\]
\[
\alpha(t) \;=\; \lambda\,r_{\min}\ \ \text{if}\ r(t) \le r_{\min}; \qquad \alpha(t) \;=\; 0\ \ \text{if}\ r(t) > r_{\min}, \tag{B.4 c}
\]
\[
\varepsilon(t) \;=\; e_f(t) - \hat e_f(t), \tag{B.4 d}
\]
\[
\hat e_f(t) \;=\; \hat b_0(t)\Bigl[\frac{p^{\,m+n_T}}{P}\,\bar u(t) + \hat\theta^T(t)\,\varphi(t)\Bigr]; \tag{B.4 e}
\]

- control law:
\[
\frac{P_2}{P}\,u(t) \;=\; -\Bigl[\frac{P_1(0)}{P_1}\;\hat\theta^T(t)\Bigr]\varphi(t). \tag{B.4 f}
\]

Assume that assumptions A.1-A.4 are fulfilled. Further assume that the parameter estimates are uniformly bounded. Then the closed loop is $L_\infty$-stable, i.e. if $v(t)$ and $u_M(t)$ are uniformly bounded, then $u(t)$ and $y(t)$ are also uniformly bounded.
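Before the proof, the signal flow of the adaptation law (B.4 a-d) can be made concrete by a simple forward-Euler sketch. This is only an illustration of the structure, not the author's implementation: the variable names, the assumption that the bracketed signal of (B.4 a)/(B.4 e) is available as a separate quantity, and the step size are all assumptions introduced here.

```python
import numpy as np

def csa_step(theta, b0_hat, r, phi, xi, eps, dt, lam=1.0, r_min=1e-3):
    """One forward-Euler step of the adaptation law, read off from (B.4 a-d).

    theta  : parameter estimate vector (hat-theta)
    b0_hat : estimate of the high-frequency gain b0
    r      : normalization signal
    phi    : regressor vector at this instant (filtered plant signals)
    xi     : the bracketed signal of (B.4 a)/(B.4 e), assumed precomputed
    eps    : prediction error e_f - e_f_hat
    """
    # switching term that keeps r(t) from falling below r_min, cf. (B.4 c)
    alpha = lam * r_min if r <= r_min else 0.0

    b0_next = b0_hat + dt * xi * eps / r                      # (B.4 a)
    theta_next = theta + dt * phi * eps / r                   # (B.4 b)
    r_next = r + dt * (-lam * r + float(phi @ phi) + alpha)   # (B.4 c)

    return theta_next, b0_next, max(r_next, r_min)
```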
Proof. A single realization will be considered throughout the proof. The boundedness of $|\varphi(t)|$ will be proved by contradiction. Thus, assume that
\[
\sup_{t\ge 0}\,|\varphi(t)| \;>\; NM
\]
for $N$ and $M$ arbitrarily large. This assumption will be contradicted for some $N$ and $M$. Assuming the unboundedness, $t_{NM}$ and $t_M$ are well defined if $N > 1$ and $M > |\varphi(0)|$:
\[
t_{NM} \;=\; \min\,\{\,t \mid |\varphi(t)| = NM\,\},
\]
\[
t_M \;=\; \max\,\{\,t \mid t < t_{NM};\ \ |\varphi(t)| = M;\ \ |\varphi(s)| < M \ \ \forall\, s\in\bigl(\max(0,\,t - c_M\ln N),\ t\bigr)\,\}.
\]
Here the continuity of $|\varphi(t)|$ is used. The constant $c_M$ will be defined below. A typical realization of $|\varphi(t)|$ in the time interval $[t_M,\,t_{NM}]$ is shown in Fig. B.1.

The contradiction will follow from a thorough analysis of the algorithm in the interval $[t_M,\,t_{NM}]$. An outline of the proof is as follows. In Step 1 an increasing sequence $\{|\varphi(\tau_i)|\}_{i=1}^{N_T}$ in the interval $[t_M,\,t_{NM}]$ is defined and a lower bound on $N_T$ is given. Step 2 derives an upper bound on $\tau_{N_T} - \tau_1$. Finally, Step 3 derives an upper bound on $N_T$ which is in disagreement with the result of Step 1, and the boundedness of $|\varphi(t)|$ is thereby proved. The boundedness of $u(t)$ and $y(t)$ is then easily concluded.

Figure B.1. The behaviour of $|\varphi(t)|$ in the interval $[t_M,\,t_{NM}]$.
Before proceeding to the main part of the proof, some results are given in the form of separate lemmas. Proofs of these lemmas are found at the end of the appendix.

LEMMA B.1. Under the conditions of the theorem, there exist positive constants $a$ and $K$ and an asymptotically stable matrix $F$ such that, for every $t \ge s+1$,
\[
|\varphi(t)| \;\le\; K\Bigl[\|e^{F(t-s)}\|\,|\varphi(s)| \;+\; (t-s)\Bigl(1 + \sup_{s\le\tau\le t}\int_0^{\tau} e^{-a(\tau-\sigma)}\,|e_f(\sigma)|\,d\sigma\Bigr)\Bigr]. \tag{B.5}
\]

LEMMA B.2. Under the conditions of the theorem, there exist positive constants $K_1$ and $K_2$ such that
\[
|\varphi(t)| \;\le\; e^{K_1(t-s)}\bigl(|\varphi(s)| + K_2\bigr) \qquad \forall\, t \ge s. \tag{B.6}
\]

LEMMA B.3. Under the conditions of the theorem, there exists a positive constant $K_3$ such that
\[
\frac{|\varphi(t)|^2}{r(t)} \;\le\; K_3 \qquad \forall\, t. \tag{B.7}
\]

LEMMA B.4. Under the conditions of the theorem, there exist positive constants $K_4$, $K_5$, and $c$ such that, for arbitrary $T$,
\[
|\hat e_f(t)| \;\le\; K_4\,e^{-cT}\,|\varphi(t)| \;+\; K_5\,e^{K_1T}\int_{t-T}^{t}|\varepsilon(s)|\,ds \qquad \forall\, t \ge T, \tag{B.8}
\]
where $K_1$ is as in Lemma B.2.
Before returning to the proof of the theorem, an important observation will be made. First choose
\[
c_M \;=\; \max\Bigl(\frac{2}{a},\ \frac{2}{\lambda}\Bigr) \tag{B.9}
\]
with $a$ from Lemma B.1. Let $[s_1, s_2]$ be any interval between $t_M$ and $t_{NM}$ such that
\[
|\varphi(s_2)| = M, \qquad |\varphi(s)| < M \quad \forall\, s_1 < s < s_2.
\]
If $\min_{s_1\le s\le s_2}|\varphi(s)|$ is attained for $s = s_0$, Lemma B.2 and the definition of $t_M$ give
\[
M \;=\; |\varphi(s_2)| \;\le\; e^{K_1(s_2-s_0)}\bigl(|\varphi(s_0)| + K_2\bigr) \;\le\; e^{K_1c_M\ln N}\bigl(|\varphi(s_0)| + K_2\bigr),
\]
which implies
\[
|\varphi(s_0)| \;\ge\; \frac{M}{N^{K_1c_M}} - K_2.
\]
Since the interval $[s_1, s_2]$ is arbitrary, we thus have
\[
\min_{t_M\le s\le t_{NM}}|\varphi(s)| \;\ge\; \frac{M}{N^{K_1c_M}} - K_2. \tag{B.10}
\]
Step 1. Characterization of the sequence $\{\varphi(\tau_i)\}$

The sequence $\{\tau_i\}_{i=1}^{N_T}$ is defined recursively from
\[
\tau_1 \;=\; t_M,
\]
\[
\tau_{i+1} \;=\; \inf\{\,t \mid \tau_i + n_\tau \le t < t_{NM},\ \ |\varphi(t)| \ge \sup_{t_M\le s\le t}|\varphi(s)|\,\},
\]
where $n_\tau$ will be defined below. The sequence $\{\tau_i\}$ for the realization in Fig. B.1 is shown in Fig. B.2. Let $N_F(x)$ be the smallest number that satisfies
\[
\|K\,e^{Ft}\| \;\le\; x \qquad \forall\, t \ge N_F(x) \quad (x > 0), \tag{B.11}
\]
where $K$ and $F$ are from Lemma B.1. The definition is meaningful because $F$ has its eigenvalues in the open left half plane.

Figure B.2. The definition of $\{\tau_i\}$.

Also note that from (B.2), (B.4 f), and the boundedness of the estimates and the noise it follows that, for some $K_e$ and $K_v$,
\[
|e_f(t)| \;\le\; K_e\,|\varphi(t)| + K_v \qquad \forall\, t. \tag{B.12}
\]
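As a concrete illustration of the quantity $N_F(x)$ defined in (B.11), the following sketch computes it numerically for a given stable matrix $F$ and constant $K$ by sampling $\|K e^{Ft}\|$ on a grid. The example matrix, the grid, and the function name are assumptions made here for illustration only.

```python
import numpy as np
from scipy.linalg import expm

def N_F(x, K, F, t_max=50.0, dt=0.05):
    """Numerical stand-in for N_F(x) of (B.11): the smallest grid time t such
    that ||K exp(F s)|| <= x (spectral norm) for every grid point s >= t.
    Assumes t_max is large enough for the decay to have set in."""
    ts = np.arange(0.0, t_max, dt)
    norms = np.array([np.linalg.norm(K * expm(F * t), 2) for t in ts])
    violated = np.where(norms > x)[0]        # grid points where the bound fails
    return 0.0 if violated.size == 0 else ts[violated[-1]] + dt

F = np.array([[-1.0, 2.0], [0.0, -0.5]])     # a stable example matrix
print(N_F(0.1, K=3.0, F=F))
```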
Now choose $n_\tau$ in the definition of $\{\tau_i\}$ to be a number which fulfils the following conditions:
\[
\text{(i)}\quad n_\tau \ge 1, \qquad
\text{(ii)}\quad n_\tau \ge N_F(5), \qquad
\text{(iii)}\quad n_\tau\,e^{-a n_\tau/2} \;\le\; \frac{a}{10\,K(K_e+1)}, \qquad
\text{(iv)}\quad n_\tau\,e^{-c n_\tau/2} \;\le\; \frac{a}{5\,K K_4}. \tag{B.13}
\]
Here $a$ and $K$ are from Lemma B.1, $K_e$ as in (B.12), and $c$ and $K_4$ from Lemma B.4. Let $M$ satisfy the condition $M \ge K_2$, with $K_2$ as in Lemma B.2. It should be noted that $N$ and $M$ can be chosen arbitrarily. A number of conditions of this type will appear in the proof. They are, however, easy to fulfil by choosing $N$ and $M$ appropriately. It is important that the constants appearing in the conditions do not depend on the choice of interval $[t_M,\,t_{NM}]$, i.e. on $N$ and $M$ themselves. This fact will not be commented upon in the sequel.

If $M$ is chosen to fulfil the condition above, Lemma B.2 gives rise to an inequality in the following way. Separate between two cases:

(i) $\tau_{i+1} = \tau_i + n_\tau$; then
\[
|\varphi(\tau_{i+1})| \;\le\; 2\,e^{K_1 n_\tau}\,|\varphi(\tau_i)|.
\]
(ii) $\tau_{i+1} > \tau_i + n_\tau$; the definition of $\{\tau_i\}$ then implies
\[
|\varphi(\tau_i + n_\tau)| \;<\; \sup_{\tau_i\le s\le \tau_i+n_\tau}|\varphi(s)|,
\]
and the continuity gives
\[
|\varphi(\tau_{i+1})| \;=\; \sup_{\tau_i\le s\le \tau_i+n_\tau}|\varphi(s)| \;\le\; 2\,e^{K_1 n_\tau}\,|\varphi(\tau_i)|.
\]
The same inequality thus holds in both cases. Using this together with the fact that
\[
t_{NM} - \tau_{N_T} \;<\; n_\tau,
\]
which follows from the definition of $\{\tau_i\}$ and the continuity, the following is obtained:
\[
NM \;=\; |\varphi(t_{NM})| \;\le\; 2\,e^{K_1n_\tau}\,|\varphi(\tau_{N_T})| \;\le\; \ldots \;\le\; \bigl(2\,e^{K_1n_\tau}\bigr)^{N_T}\,|\varphi(\tau_1)| \;=\; \bigl(2\,e^{K_1n_\tau}\bigr)^{N_T}\,M,
\]
which implies
\[
N_T \;\ge\; \frac{\ln N}{\ln 2 + K_1 n_\tau}. \tag{B.14}
\]
This is the lower bound on $N_T$ sought for in Step 1.
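To get a feeling for the bound (B.14), take the made-up values $K_1 n_\tau = 1$ and $N = e^{10}$ (these numbers are illustrative only and do not come from the text); then
\[
N_T \;\ge\; \frac{\ln N}{\ln 2 + K_1 n_\tau} \;=\; \frac{10}{\ln 2 + 1} \;\approx\; 5.9,
\]
so the sequence $\{\tau_i\}$ must contain at least six points in $[t_M,\,t_{NM}]$, and the lower bound grows without limit as $N \to \infty$.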
Step 2. Derivation of an upper bound on $\tau_{N_T} - \tau_1$

Define intervals
\[
I_i \;=\; [\tau_{i-1},\,\tau_{i+1}], \qquad i = 2, 4, \ldots, 2N_I,
\]
where the number of intervals $N_I$ satisfies
\[
N_I \;=\; \begin{cases} \tfrac12\,(N_T - 1), & N_T \text{ odd},\\[2pt] \tfrac12\,(N_T - 2), & N_T \text{ even}. \end{cases} \tag{B.15}
\]
Consider an interval $I_i$ and define the sequence $\{T^i_j\}_{j=0}^{N^i_T}$ inside the interval through
\[
T^i_0 \;=\; \tau_{i-1},
\]
\[
T^i_j \;=\; \min\{\,t \mid t \ge T^i_{j-1} + \bar n_T,\ \ |\varphi(t)| \ge M\,\}, \qquad j = 1, \ldots, N^i_T, \tag{B.16}
\]
where $N^i_T$ satisfies
\[
\tau_{i+1} - \bar n_T \;\le\; T^i_{N^i_T} \;\le\; \tau_{i+1}. \tag{B.17}
\]
The left inequality follows because $|\varphi(\tau_{i+1})| \ge M$. The constant $\bar n_T$ is defined as
\[
\bar n_T \;=\; K_T\,\ln N, \tag{B.18}
\]
where
\[
K_T \;=\; \max\Bigl(\frac{2}{-r(F)},\ \frac{2}{cp}\Bigr). \tag{B.19}
\]
Here $r(F)$ is the largest real part of any eigenvalue of $F$ and $p$ is defined as
\[
p \;=\; \frac{a}{a+c}. \tag{B.20}
\]
Let $\Delta T$ be the maximal distance between any $T^i_j$ and $T^i_{j+1}$. It follows from the definition of $t_M$ (cf. Fig. B.1) and (B.16) that
\[
\Delta T \;\le\; \bar n_T + c_M\ln N \;=\; (K_T + c_M)\ln N \;\triangleq\; K_\Delta\,\ln N, \tag{B.21}
\]
where $K_\Delta$ is independent of $N$. Define intervals
\[
J^i_j \;=\; [T^i_{j-1},\,T^i_{j+1}], \qquad j = 1, 3, \ldots, 2N^i_J - 1,
\]
where the number of intervals $N^i_J$ satisfies
\[
N^i_J \;=\; \begin{cases} \tfrac12\,(N^i_T - 1), & N^i_T \text{ odd},\\[2pt] \tfrac12\,N^i_T, & N^i_T \text{ even}. \end{cases} \tag{B.22}
\]
The behaviour of the algorithm in an interval $J^i_j$ will now be examined. Distinguish between two cases.
The case $N^i_T \ge 2$.

From (B.22) it is seen that there is at least one interval $J^i_j$ in the interval $I_i$. Suppose that
\[
\int_{T^i_{j-1}}^{T^i_{j+1}}|\varepsilon(s)|\,ds \;<\; \frac{M}{4K\bigl(1+\frac{K_5}{a}\bigr)\,\Delta T\;e^{K_1 p \bar n_T}}. \tag{B.23}
\]
This will lead to a contradiction. To see this, we will first show that $\bar n_T \ge N_F(4N)$. The matrix $F$ used when defining $N_F(x)$, (B.11), has its eigenvalues in the open left half plane. This implies the existence of a constant $K_F$ such that, for some $t_F$,
\[
\|e^{Ft}\| \;\le\; K_F\,e^{t\,r(F)} \qquad \forall\, t \ge t_F,
\]
where $r(F)$ is defined in connection with (B.19). Clearly $N_F(4N) \to \infty$ as $N \to \infty$. It then follows by continuity of $\|e^{Ft}\|$ from (B.11) that
\[
\frac{1}{4N} \;=\; K\,\bigl\|e^{F\,N_F(4N)}\bigr\| \;\le\; K\,K_F\,e^{N_F(4N)\,r(F)}
\]
for $N$ sufficiently large, and so
\[
N_F(4N) \;\le\; \frac{\ln N + \ln 4KK_F}{-r(F)} \;\le\; \frac{2}{-r(F)}\,\ln N \;\le\; \bar n_T.
\]
It is thus possible to use Lemma B.1 and the definitions of $N_F(4N)$ and $\Delta T$ to obtain
\[
|\varphi(T^i_{j+1})| \;\le\; \frac{|\varphi(T^i_j)|}{4N} \;+\; K\,\Delta T\Bigl(1 + \sup_{T^i_j\le\tau\le T^i_{j+1}}\int_0^{\tau} e^{-a(\tau-\sigma)}|e_f(\sigma)|\,d\sigma\Bigr).
\]
Suppose the sup is attained for $\tau = \bar t$. Then
\[
|\varphi(T^i_{j+1})| \;\le\; \frac{|\varphi(T^i_j)|}{4N} + K\Delta T\Bigl(1 + \int_0^{\bar t} e^{-a(\bar t-\sigma)}|e_f(\sigma)|\,d\sigma\Bigr)
\]
\[
\le\; \frac{NM}{4N} + K\Delta T
+ K\Delta T\,e^{-a(\bar t - T^i_{j-1} - T)}\int_0^{T^i_{j-1}+T} e^{-a(T^i_{j-1}+T-\sigma)}|e_f(\sigma)|\,d\sigma
+ K\Delta T\int_{T^i_{j-1}+T}^{\bar t} e^{-a(\bar t-\sigma)}|e_f(\sigma)|\,d\sigma
\;\triangleq\; \frac{M}{4} + K\Delta T + R_1 + R_2, \tag{B.24}
\]
where $0 < T < \bar n_T$ will be chosen later. The two terms $R_1$ and $R_2$ will be estimated separately. From (B.12) and the definition of $\{T^i_j\}$ it follows that
\[
R_1 \;\le\; K\Delta T\,e^{-a(\bar n_T - T)}\,\frac{1}{a}\sup_{0\le\sigma\le\bar t}|e_f(\sigma)|
\;\le\; K\Delta T\,e^{-a(\bar n_T - T)}\,\frac{1}{a}\sup_{0\le\sigma\le\bar t}\bigl(K_e|\varphi(\sigma)| + K_v\bigr)
\;\le\; \frac{K}{a}\,(K_e+1)\,\Delta T\,e^{-a(\bar n_T - T)}\,NM,
\]
if $N$ and $M$ are chosen such that $K_v \le NM$. The term $R_2$ is estimated using (B.4 d) and Lemma B.4:
\[
R_2 \;\le\; K\Delta T\int_{T^i_{j-1}+T}^{\bar t} e^{-a(\bar t-\sigma)}\Bigl[K_4\,e^{-cT}|\varphi(\sigma)| + K_5\,e^{K_1T}\int_{\sigma-T}^{\sigma}|\varepsilon(s)|\,ds\Bigr]d\sigma
\;\le\; K\Delta T\Bigl(1+\frac{K_5}{a}\,e^{K_1T}\Bigr)\int_{T^i_{j-1}}^{T^i_{j+1}}|\varepsilon(s)|\,ds
+ \frac{KK_4}{a}\,\Delta T\,e^{-cT}\,NM.
\]
Now choose $T = p\,\bar n_T$ and use (B.20) to obtain, for large $N$,
\[
R_1 + R_2 \;\le\; \frac{K}{a}\,\Delta T\bigl[(K_e+1)\,e^{-a(1-p)\bar n_T} + K_4\,e^{-cp\bar n_T}\bigr]NM
+ K\Delta T\Bigl(1+\frac{K_5}{a}\,e^{K_1p\bar n_T}\Bigr)\int_{T^i_{j-1}}^{T^i_{j+1}}|\varepsilon(s)|\,ds
\]
\[
\le\; \frac{K}{a}\,(K_e+K_4+1)\,K_\Delta\,\ln N\;e^{-cpK_T\ln N}\,NM \;+\; \frac{M}{4} \;\le\; \frac{M}{2}, \tag{B.25}
\]
where (B.18), (B.21), and the assumption (B.23) have been used in the second last step, and the definition of $K_T$, (B.19), was used in the last step. If (B.25) is inserted into (B.24), the result is
\[
|\varphi(T^i_{j+1})| \;<\; M \qquad\text{if}\qquad K\Delta T \;\le\; \frac{M}{4}.
\]
This is trivially satisfied for large $N$ by the choice
\[
M \;=\; N^{\rho}, \qquad \rho \;\triangleq\; K_1(c_M + pK_T) + 4. \tag{B.26}
\]
With this choice of $M$, we have arrived at a contradiction, and the conclusion is that
\[
\int_{T^i_{j-1}}^{T^i_{j+1}}|\varepsilon(s)|\,ds \;\ge\; \frac{M}{4K\bigl(1+\frac{K_5}{a}\bigr)\,\Delta T\;e^{K_1 p \bar n_T}}. \tag{B.27}
\]
The inequality holds for every interval $J^i_j$. Define
\[
V(t) \;=\; \tilde b_0^{\,2}(t) + b_0\,\tilde\theta^T(t)\,\tilde\theta(t). \tag{B.28}
\]
Lemma 5.1 gives
\[
V(T^i_{j+1}) - V(T^i_{j-1}) \;\le\; -\int_{T^i_{j-1}}^{T^i_{j+1}}\frac{\varepsilon^2(s)}{r(s)}\,ds \;+\; \int_{T^i_{j-1}}^{T^i_{j+1}}\frac{2K_v^2}{r(s)}\,ds,
\]
where $K_v$ is the bound on the noise introduced in (B.12). Note that, for $T^i_{j-1}\le s\le T^i_{j+1}$,
\[
r(s) \;=\; e^{-\lambda s}r(0) + \int_0^s e^{-\lambda(s-\sigma)}\bigl(|\varphi(\sigma)|^2 + \alpha(\sigma)\bigr)\,d\sigma
\;\le\; r(0) + r_{\min} + \frac{(NM)^2}{\lambda} \;\le\; \frac{2(NM)^2}{\lambda},
\]
if $N$ and $M$ are chosen sufficiently large. Furthermore, it follows from Lemma B.3, (B.10), and (B.26) that, for large $N$,
\[
\frac{1}{r(s)} \;\le\; \frac{K_3}{|\varphi(s)|^2} \;\le\; \frac{K_3}{\bigl(N^{\rho - K_1c_M} - K_2\bigr)^2} \;\le\; \frac{2K_3}{N^{2(\rho - K_1c_M)}}. \tag{B.29}
\]
Apply these two inequalities together with Schwarz' inequality and (B.27) to obtain, for $N$ sufficiently large,
\[
V(T^i_{j+1}) - V(T^i_{j-1}) \;\le\; -\frac{\lambda}{4\,\Delta T\,(NM)^2}\Bigl(\int_{T^i_{j-1}}^{T^i_{j+1}}|\varepsilon(s)|\,ds\Bigr)^2 + \frac{8\,\Delta T\,K_v^2K_3}{N^{2(\rho-K_1c_M)}}
\;\le\; -\frac{c_1}{N^2\,\Delta T^3\,e^{2K_1p\bar n_T}} + \frac{c_2\,\Delta T}{N^{2(\rho-K_1c_M)}}, \tag{B.30}
\]
where $c_1$ and $c_2$ are independent of $N$. It follows from (B.18) and (B.21) that
\[
\Delta T^3\,e^{2K_1p\bar n_T} \;\le\; K_\Delta^3\,\ln^3 N\;N^{2K_1pK_T} \;\le\; K_\Delta^3\,N^{2K_1pK_T+3}.
\]
Inserting this inequality into (B.30) and also using (B.26) gives
\[
V(T^i_{j+1}) - V(T^i_{j-1}) \;\le\; -\frac{c_1}{K_\Delta^3\,N^{2K_1pK_T+5}} + \frac{c_2\,K_\Delta\,\ln N}{N^{2(K_1pK_T+4)}}
\;\le\; -\frac{c_0}{N^{p_0}}, \qquad p_0 \;\triangleq\; 2K_1pK_T + 5,
\]
for $N$ sufficiently large, where $c_0$ is a constant independent of $N$. This inequality holds for every interval $J^i_j$, i.e. $N^i_J$ times, whence
\[
V(\tau_{i+1}) - V(\tau_{i-1}) \;\le\; -N^i_J\,\frac{c_0}{N^{p_0}}.
\]
But $V$ is positive and also, from the assumptions, bounded, by $\bar V$ say, so that
\[
N^i_J \;\le\; \frac{\bar V}{c_0}\,N^{p_0}. \tag{B.31}
\]

The case $N^i_T < 2$.

The inequality (B.31) is trivially satisfied also in this case, because $N^i_T < 2$ implies $N^i_J = 0$.

The conclusion is thus that (B.31) holds in every interval $I_i$, provided $N$ is chosen large enough. From (B.17) and (B.22) it follows that
\[
\tau_{i+1} - \tau_{i-1} \;=\; \bigl(\tau_{i+1} - T^i_{2N^i_J}\bigr) + \bigl(T^i_{2N^i_J} - T^i_0\bigr) \;\le\; 2\,\Delta T + 2N^i_J\,\Delta T,
\]
which together with (B.31) gives
\[
\tau_{i+1} - \tau_{i-1} \;\le\; \frac{4\bar V}{c_0}\,\Delta T\,N^{p_0}.
\]
Summing for $i = 2, 4, \ldots, 2N_I$ gives
\[
\tau_{2N_I+1} - \tau_1 \;\le\; \frac{4\bar V}{c_0}\,\Delta T\,N^{p_0}\,N_I, \tag{B.32}
\]
which concludes Step 2 of the proof.
Step 3. Derivation of an upper bound on $N_T$ and the contradiction

Consider an interval $I_i$ defined in Step 2. Suppose that
\[
\int_{\tau_{i+1}-2n_\tau}^{\tau_{i+1}}|\varepsilon(s)|\,ds \;<\; \frac{|\varphi(\tau_{i+1})|}{5Kn_\tau\bigl(1+\frac{K_5}{a}\,e^{K_1n_\tau/2}\bigr)}. \tag{B.33}
\]
This assumption will lead to a contradiction in much the same way as in Step 2. Analogously with (B.24) we have
\[
|\varphi(\tau_{i+1})| \;\le\; \frac{|\varphi(\tau_{i+1}-n_\tau)|}{5} + Kn_\tau + R_1 + R_2,
\]
where
\[
R_1 \;=\; Kn_\tau\,e^{-a(\bar t-\tau_{i+1}+\frac{3}{2}n_\tau)}\int_0^{\tau_{i+1}-\frac{3}{2}n_\tau} e^{-a(\tau_{i+1}-\frac{3}{2}n_\tau-\sigma)}|e_f(\sigma)|\,d\sigma,
\qquad
R_2 \;=\; Kn_\tau\int_{\tau_{i+1}-\frac{3}{2}n_\tau}^{\bar t} e^{-a(\bar t-\sigma)}|e_f(\sigma)|\,d\sigma,
\]
and $\bar t$ is in the interval $[\tau_{i+1}-n_\tau,\ \tau_{i+1}]$. The properties (B.13 i, ii) of $n_\tau$ have been used. The term $R_1$ can be bounded from above using (B.9) and (B.12), if $K_v \le M = N^\rho$, which is true for large enough $N$:
\[
R_1 \;\le\; Kn_\tau\Bigl[e^{-a\bigl(\tau_{i+1}-\frac{3}{2}n_\tau-(t_M-c_M\ln N)\bigr)}\int_0^{t_M-c_M\ln N} e^{-a(t_M-c_M\ln N-\sigma)}|e_f(\sigma)|\,d\sigma
+ \int_{t_M-c_M\ln N}^{\tau_{i+1}-\frac{3}{2}n_\tau} e^{-a(\tau_{i+1}-\frac{3}{2}n_\tau-\sigma)}|e_f(\sigma)|\,d\sigma\Bigr]
\]
\[
\le\; Kn_\tau\,e^{-\frac{an_\tau}{2}}\,(K_e+1)\Bigl[\frac{e^{-ac_M\ln N}\,NM}{a} + \frac{|\varphi(\tau_{i+1})|}{a}\Bigr]
\;\le\; \frac{2}{a}\,Kn_\tau\,e^{-\frac{an_\tau}{2}}\,(K_e+1)\,|\varphi(\tau_{i+1})|
\;\le\; \frac{|\varphi(\tau_{i+1})|}{5},
\]
where (B.13 iii) has been used in the last step. The term $R_2$ is estimated exactly as in Step 2:
\[
R_2 \;\le\; Kn_\tau\Bigl(1+\frac{K_5}{a}\,e^{K_1n_\tau/2}\Bigr)\int_{\tau_{i+1}-2n_\tau}^{\tau_{i+1}}|\varepsilon(s)|\,ds
+ \frac{KK_4}{a}\,n_\tau\,e^{-\frac{cn_\tau}{2}}\,|\varphi(\tau_{i+1})|
\;\le\; \frac{|\varphi(\tau_{i+1})|}{5} + \frac{|\varphi(\tau_{i+1})|}{5},
\]
where (B.13 iv) and the assumption (B.33) have been used in the last step. Using the estimates of $R_1$ and $R_2$, we thus have
\[
|\varphi(\tau_{i+1})| \;<\; \frac{|\varphi(\tau_{i+1}-n_\tau)|}{5} + Kn_\tau + \frac{3}{5}\,|\varphi(\tau_{i+1})| \;\le\; |\varphi(\tau_{i+1})|
\]
if
\[
Kn_\tau \;\le\; \frac{M}{5} \;=\; \frac{N^\rho}{5}, \qquad |\varphi(\tau_{i+1}-n_\tau)| \;\le\; |\varphi(\tau_{i+1})|,
\]
which of course is satisfied for large $N$. We have thus arrived at a contradiction, and the conclusion is that
\[
\int_{\tau_{i+1}-2n_\tau}^{\tau_{i+1}}|\varepsilon(s)|\,ds \;\ge\; \frac{|\varphi(\tau_{i+1})|}{5Kn_\tau\bigl(1+\frac{K_5}{a}\,e^{K_1n_\tau/2}\bigr)}. \tag{B.34}
\]
The inequality holds for every interval $I_i$. From (B.9) it follows that, for $\tau_{i-1}\le s\le\tau_{i+1}$,
\[
r(s) \;=\; e^{-\lambda s}r(0) + \int_0^s e^{-\lambda(s-\sigma)}\bigl[\,|\varphi(\sigma)|^2 + \alpha(\sigma)\,\bigr]\,d\sigma
\;\le\; r(0) + r_{\min} + e^{-\lambda(s-t_M+c_M\ln N)}\int_0^{t_M-c_M\ln N} e^{-\lambda(t_M-c_M\ln N-\sigma)}|\varphi(\sigma)|^2\,d\sigma
+ \int_{t_M-c_M\ln N}^{s} e^{-\lambda(s-\sigma)}|\varphi(\sigma)|^2\,d\sigma
\]
\[
\le\; r(0) + r_{\min} + e^{-\lambda c_M\ln N}\,\frac{(NM)^2}{\lambda} + \frac{1}{\lambda}\bigl(M^2 + |\varphi(\tau_{i+1})|^2\bigr)
\;\le\; \frac{3}{\lambda}\,|\varphi(\tau_{i+1})|^2
\]
for $N$ sufficiently large. Applying Lemma 5.1 in the same way as in Step 2 now gives a result analogous with (B.30) for large $N$:
\[
V(\tau_{i+1}) - V(\tau_{i-1}) \;\le\; -\int_{\tau_{i-1}}^{\tau_{i+1}}\frac{\varepsilon^2(s)}{r(s)}\,ds + \int_{\tau_{i-1}}^{\tau_{i+1}}\frac{2K_v^2}{r(s)}\,ds
\;\le\; -\frac{\lambda}{3|\varphi(\tau_{i+1})|^2}\,\frac{1}{2n_\tau}\Bigl(\int_{\tau_{i+1}-2n_\tau}^{\tau_{i+1}}|\varepsilon(s)|\,ds\Bigr)^2
+ \frac{2K_v^2K_3\,(\tau_{i+1}-\tau_{i-1})}{N^{2(\rho-K_1c_M)}}
\]
\[
\le\; -\frac{\lambda}{6n_\tau\bigl[5Kn_\tau\bigl(1+\frac{K_5}{a}\,e^{K_1n_\tau/2}\bigr)\bigr]^2}
+ \frac{2K_v^2K_3\,(\tau_{i+1}-\tau_{i-1})}{N^{2(\rho-K_1c_M)}}
\;\triangleq\; -c_3 + c_4\,\frac{\tau_{i+1}-\tau_{i-1}}{N^{2(\rho-K_1c_M)}},
\]
where $c_3$ and $c_4$ are independent of $N$ and (B.34) has been used in the second last step. Summing the inequality for $i = 2, 4, \ldots, 2N_I$ gives
\[
V(\tau_{2N_I+1}) - V(\tau_1) \;\le\; -c_3\,N_I + c_4\,\frac{\tau_{2N_I+1}-\tau_1}{N^{2(\rho-K_1c_M)}}
\;\le\; -c_3\,N_I + c_4\,\frac{4\bar V}{c_0}\,\frac{\Delta T\,N^{p_0}\,N_I}{N^{2(\rho-K_1c_M)}}
\;=\; -N_I\Bigl(c_3 - c_4\,\frac{4\bar V\,\Delta T}{c_0\,N^3}\Bigr),
\]
where (B.32) and the definitions of $p_0$ and $\rho$ have been used. But $V$ is positive and bounded by $\bar V$ as in Step 2, so that
\[
-\bar V \;\le\; -N_I\Bigl(c_3 - c_4\,\frac{4\bar V\,\Delta T}{c_0\,N^3}\Bigr),
\]
which by (B.15) and (B.21) implies
\[
N_T \;\le\; 2N_I + 2 \;\le\; \frac{2\bar V}{c_3 - c_4\,\dfrac{4\bar V\,\Delta T}{c_0N^3}} + 2 \;\le\; \frac{2\bar V}{c_3/2} + 2 \;=\; \frac{4\bar V}{c_3} + 2
\]
for $N$ sufficiently large. This result obviously violates the inequality (B.14) obtained in Step 1 for $N$ large enough. The existence of the sequence $\{|\varphi(\tau_i)|\}$ for $N$ arbitrarily large is thus contradicted, and the boundedness of $|\varphi(t)|$ is proved.
149
is bounded. But yM(t) is bounded and Q and P are asymptotically stable polynomials of the same degree, which implies that y ( t ) is bounded. The boundedness of u(t) is possible to establish from (B.4 f ) , which can be written P2 u(t) = _ (PI (0) 8 T ( t ) ) ~ ( t ) P \ PI or, using the definition of P2' m+nT ~(t) = I m+nT-lu(t) + + P2 P p - P21 p p "'" (m+nT)
- {PI(O) BT(t))~(t).
(B.35)
\ Pl Here all terms in the f i r s t bracket are components of ~(t) and i t follows that
pm+nT~(t) p is bounded. Differentiating m+nT+l (B.35) ~(t)
n-m-l times gives recursively boundedness of Pn+nT-I u(t) P
Notice that
8(t) P1
p
p
l , 2. . . . .
, ...,
is possible to differentiate because
Pl is of degree n-m-l and also that the derivatives of ~(t) are bounded because of earlier steps in the recursion and boundedness of n+nT ~(t) y ( t ) and uM(t), cf. (B.3). Finally boundedness of p follows P by an additional differentiation of (B.35) but then the boundedness of ~(t), which follows from (B.4 b,c,d), is also used. As a result, the f i r s t n+nT derivatives of u(t) _ Q u(t) have been shown to be P TAMp bounded. But the pole excess of Q/TAMp is exactly n+nT and Q is asymptotically stable. Hence boundedness of u(t) follows readily. The theorem is thus proven,
o
Proof of Lemma B.1

Assume for the moment that $m \ge 1$ and define
\[
\psi^T(t) \;=\; \Bigl[\frac{p^{\,m+n_T-1}}{P}\,u(t)\ \ \ldots\ \ \frac{p^{\,n_T}}{P}\,u(t)\Bigr],
\]
which thus is formed from the $m$ first components of $\varphi(t)$. Using (B.1) and the definitions of $e(t)$ and $e_f(t)$ (see (5.3)), the following is obtained:
\[
\dot\psi(t) \;=\;
\begin{pmatrix}
-b_1 & -b_2 & \cdots & -b_m\\
1 & 0 & \cdots & 0\\
\vdots & \ddots & \ddots & \vdots\\
0 & \cdots & 1 & 0
\end{pmatrix}\psi(t)
\;+\;
\begin{pmatrix}\dfrac{1}{b_0}\,\dfrac{p^{\,n_T}A}{TA_M}\,e_f(t)\\[4pt] 0\\ \vdots\\ 0\end{pmatrix}
\;+\; b(t)
\;\triangleq\; F\,\psi(t) + g(t) + b(t),
\]
where $F$ is asymptotically stable since the plant is minimum phase. Integrating from $s$ to $t$ gives
\[
\psi(t) \;=\; e^{F(t-s)}\,\psi(s) \;+\; \int_s^t e^{F(t-\sigma)}\bigl[\,g(\sigma) + b(\sigma)\,\bigr]\,d\sigma. \tag{B.36}
\]
The vector $b(t)$ is obtained by filtering $u_M(t)$ and $v(t)$ (which are bounded) through proper, asymptotically stable filters and is therefore bounded, by a constant $K_b$ say. Also note that, since $F$ has its eigenvalues in the open left half plane,
\[
\sup_{t\ge 0}\|e^{Ft}\| \;\triangleq\; K_F \;<\; \infty.
\]
Finally we have
\[
\Bigl|\frac{1}{b_0}\,\frac{p^{\,n_T}A}{TA_M}\,e_f(t)\Bigr| \;\le\; K_1\Bigl(1 + \int_0^t e^{-a(t-s)}|e_f(s)|\,ds\Bigr) \qquad \forall\, t, \tag{B.37}
\]
where $a > 0$, because $p^{\,n_T}A/TA_M$ is a proper, asymptotically stable transfer operator. Using these facts in (B.36) gives
\[
|\psi(t)| \;\le\; \|e^{F(t-s)}\|\,|\psi(s)| \;+\; K_2\,(t-s)\Bigl[1 + \sup_{s\le\sigma\le t}\int_0^{\sigma} e^{-a(\sigma-\tau)}|e_f(\tau)|\,d\tau\Bigr], \tag{B.38}
\]
where $K_2 = K_F(K_1 + K_b)$.

The definitions of $\varphi(t)$ and $\psi(t)$ give
\[
\varphi(t) \;=\; \Bigl[\psi^T(t),\ \ \frac{p^{\,n_T-1}}{P}\,u(t)\ \ldots\ \frac{u(t)}{P},\ \ \frac{p^{\,n-1}}{P}\,y(t)\ \ldots\ \frac{y(t)}{P},\ \ -\frac{TB_M}{P}\,u_M(t)\Bigr]^T. \tag{B.39}
\]
From (B.1) it follows that, for $0 \le i \le n_T - 1$,
\[
\frac{p^{\,i}}{P}\,u(t) \;=\; \frac{1}{b_m}\Bigl[\frac{1}{b_0}\Bigl(\frac{p^{\,n+i}}{P}\,y(t) + a_1\frac{p^{\,n+i-1}}{P}\,y(t) + \ldots + a_n\frac{p^{\,i}}{P}\,y(t) - \frac{p^{\,i}A}{P}\,v(t)\Bigr)
- \frac{p^{\,m+i}}{P}\,u(t) - \ldots - b_{m-1}\frac{p^{\,i+1}}{P}\,u(t)\Bigr],
\]
where $b_m \ne 0$ from assumption (A.4), and so, because $v(t)$ is bounded,
\[
\Bigl|\frac{p^{\,i}}{P}\,u(t)\Bigr| \;\le\; K\Bigl(1 + \Bigl|\frac{p^{\,i+1}}{P}\,u(t)\Bigr| + \ldots + \Bigl|\frac{p^{\,m+i}}{P}\,u(t)\Bigr| + \Bigl|\frac{p^{\,n+i}}{P}\,y(t)\Bigr| + \ldots + \Bigl|\frac{p^{\,i}}{P}\,y(t)\Bigr|\Bigr).
\]
If this inequality is used recursively for $i = n_T-1, \ldots, 0$, and the definition of $\psi(t)$ is used, the result can be simplified into
\[
\Bigl|\frac{p^{\,k}}{P}\,u(t)\Bigr| \;\le\; K\Bigl(1 + |\psi(t)| + \sum_{i=0}^{n+n_T-1}\Bigl|\frac{p^{\,i}}{P}\,y(t)\Bigr|\Bigr), \qquad k = 0, \ldots, n_T-1.
\]
If this is used together with (B.39), the following estimate of $|\varphi(t)|$ is obtained:
\[
|\varphi(t)| \;\le\; K\Bigl(1 + |\psi(t)| + \sum_{i=0}^{n+n_T-1}\Bigl|\frac{p^{\,i}}{P}\,y(t)\Bigr| + \Bigl|\frac{TB_M}{P}\,u_M(t)\Bigr|\Bigr). \tag{B.40}
\]
Here $(TB_M/P)\,u_M(t)$ is bounded because $u_M(t)$ is bounded. Also, for $i = 0, \ldots, n+n_T-1$,
\[
\frac{p^{\,i}}{P}\,y(t) \;=\; \frac{p^{\,i}}{P}\bigl[e(t) + y_M(t)\bigr] \;=\; \frac{p^{\,i}}{TA_M}\,e_f(t) + \frac{p^{\,i}}{P}\,y_M(t),
\]
where the first term can be estimated as in (B.37) and the second term is bounded. The inequality (B.40) can therefore be simplified into
\[
|\varphi(t)| \;\le\; K_7\Bigl(1 + \int_0^t e^{-a(t-\sigma)}|e_f(\sigma)|\,d\sigma + |\psi(t)|\Bigr).
\]
Invoking (B.38) gives, for $t - s \ge 1$,
\[
|\varphi(t)| \;\le\; K_7\Bigl[1 + \int_0^t e^{-a(t-\sigma)}|e_f(\sigma)|\,d\sigma + \|e^{F(t-s)}\|\,|\psi(s)| + K_2(t-s)\Bigl(1 + \sup_{s\le\sigma\le t}\int_0^{\sigma} e^{-a(\sigma-\tau)}|e_f(\tau)|\,d\tau\Bigr)\Bigr]
\]
\[
\le\; K\Bigl[\|e^{F(t-s)}\|\,|\varphi(s)| + (t-s)\Bigl(1 + \sup_{s\le\sigma\le t}\int_0^{\sigma} e^{-a(\sigma-\tau)}|e_f(\tau)|\,d\tau\Bigr)\Bigr],
\]
which is (B.5). It remains to comment on the case $m = 0$, when $\psi(t)$ is undefined. Then it is easy to see that (B.40) is valid without the $|\psi(t)|$-term, and the assertion of the lemma is still true. □
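The proof of Lemma B.2 below invokes the Gronwall-Bellman lemma. As a reminder (a standard result, stated here for reference and not reproduced in the source), the form needed is:
\[
f(t) \;\le\; c + K\int_s^t f(\sigma)\,d\sigma \ \ \text{for all } t \ge s,\ c, K \ge 0
\qquad\Longrightarrow\qquad
f(t) \;\le\; c\,e^{K(t-s)}.
\]
Applied with $f(t) = \|\Phi(t,s)\|$, $c = 1$ and $K = K_1$, it gives the exponential bound on the transition matrix used in that proof.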
Proof of Lemma B.2

It follows from the definition of $\varphi(t)$, (B.3), that
\[
\dot\varphi(t) \;=\; S\,\varphi(t) \;+\;
\begin{pmatrix}\dfrac{p^{\,m+n_T}}{P}\,u(t)\\[4pt] 0\\ \vdots\\ \dfrac{p^{\,n}}{P}\,y(t)\\[4pt] 0\\ \vdots\\ -\dfrac{TB_M}{P}\,\dot u_M(t)\end{pmatrix}, \tag{B.41}
\]
where $S$ is a constant matrix consisting of shift blocks. From (B.4 f) it follows that
\[
\frac{p^{\,m+n_T}}{P}\,u(t) \;=\; \frac{p^{\,m+n_T} - P_2}{P}\,u(t) \;-\; \Bigl[\frac{P_1(0)}{P_1}\;\hat\theta^T(t)\Bigr]\varphi(t)
\;=\; -\,[\,P_{21}\ \ldots\ P_{2(m+n_T)}\ \ 0\ \ldots\ 0\,]\,\varphi(t) \;-\; \Bigl[\frac{P_1(0)}{P_1}\;\hat\theta^T(t)\Bigr]\varphi(t). \tag{B.42}
\]
It can be seen from (B.1) that
\[
\frac{p^{\,n}}{P}\,y(t) \;=\; -\,a_1\frac{p^{\,n-1}}{P}\,y(t) - \ldots - a_n\frac{y(t)}{P}
\;+\; b_0\Bigl(\frac{p^{\,m}}{P}\,u(t) + \ldots + b_m\frac{u(t)}{P}\Bigr) \;+\; \frac{A}{P}\,v(t),
\]
which, if $n_T \ge 1$, can be rewritten as
\[
\frac{p^{\,n}}{P}\,y(t) \;=\; [\,0\ \ldots\ 0\ \ b_0\ \ b_0b_1\ \ldots\ b_0b_m\ \ -a_1\ \ldots\ -a_n\ \ 0\,]\,\varphi(t) \;+\; \frac{A}{P}\,v(t), \tag{B.43}
\]
whereas if $n_T = 0$, (B.42) gives
\[
\frac{p^{\,n}}{P}\,y(t) \;=\; [\,b_0b_1\ \ldots\ b_0b_m\ \ -a_1\ \ldots\ -a_n\ \ 0\,]\,\varphi(t)
\;-\; b_0\,[\,P_{21}\ \ldots\ P_{2m}\ \ 0\ \ldots\ 0\,]\,\varphi(t)
\;-\; b_0\Bigl[\frac{P_1(0)}{P_1}\;\hat\theta^T(t)\Bigr]\varphi(t) \;+\; \frac{A}{P}\,v(t). \tag{B.44}
\]
If (B.42) and (B.43) or (B.44) are inserted into (B.41), the following is obtained:
\[
\dot\varphi(t) \;=\; A(t)\,\varphi(t) \;+\;
\begin{pmatrix}\dfrac{QA}{TA_MP}\,v(t)\\[4pt] 0\\ \vdots\\ 0\\ \dfrac{pQB_M}{PA_M}\,u_M(t)\end{pmatrix}.
\]
Here $A(t)$ is a matrix with bounded elements according to the assumptions. Furthermore, $QA/TA_MP$ and $pQB_M/PA_M$ are proper, asymptotically stable transfer operators, and $v(t)$ and $u_M(t)$ are assumed bounded. Hence
\[
\dot\varphi(t) \;=\; A(t)\,\varphi(t) + b(t)
\]
with a bounded vector $b(t)$. This differential equation has the solution
\[
\varphi(t) \;=\; \Phi(t,s)\,\varphi(s) \;+\; \int_s^t \Phi(t,\sigma)\,b(\sigma)\,d\sigma, \qquad t \ge s, \tag{B.45}
\]
where the transition matrix $\Phi(t,s)$ satisfies
\[
\Phi(t,s) \;=\; I + \int_s^t A(\sigma)\,\Phi(\sigma,s)\,d\sigma.
\]
Using the boundedness of $A(t)$, we have
\[
\|\Phi(t,s)\| \;\le\; 1 + \int_s^t \|A(\sigma)\|\,\|\Phi(\sigma,s)\|\,d\sigma \;\le\; 1 + K_1\int_s^t \|\Phi(\sigma,s)\|\,d\sigma.
\]
If the Gronwall-Bellman lemma is applied to this inequality, the result is
\[
\|\Phi(t,s)\| \;\le\; e^{K_1(t-s)},
\]
and so, using (B.45),
\[
|\varphi(t)| \;\le\; \|\Phi(t,s)\|\,|\varphi(s)| + \int_s^t \|\Phi(t,\sigma)\|\,|b(\sigma)|\,d\sigma
\;\le\; e^{K_1(t-s)}\,|\varphi(s)| + \int_s^t e^{K_1(t-\sigma)}\,K_2\,d\sigma
\;\le\; e^{K_1(t-s)}\Bigl(|\varphi(s)| + \frac{K_2}{K_1}\Bigr).
\]
Replacing $K_2/K_1$ by $K_2$, the lemma is proven. □

Proof of Lemma B.3

The result is immediate for $0 \le t \le 1$, so it suffices to consider the case $t \ge 1$. From (B.4 c),
\[
r(t) \;=\; e^{-\lambda t}\,r(0) + \int_0^t e^{-\lambda(t-s)}\bigl[\,|\varphi(s)|^2 + \alpha(s)\,\bigr]\,ds
\;\ge\; \int_{t-1}^{t} e^{-\lambda(t-s)}\,|\varphi(s)|^2\,ds
\;\ge\; e^{-\lambda}\inf_{t-1\le s\le t}|\varphi(s)|^2 \;\triangleq\; e^{-\lambda}\,|\varphi(s_0)|^2 \tag{B.46}
\]
for some $t-1 \le s_0 \le t$. But from Lemma B.2,
\[
|\varphi(t)| \;\le\; e^{K_1}\bigl[\,|\varphi(s_0)| + K_2\,\bigr],
\]
which implies that
\[
|\varphi(t)|^2 \;\le\; 2\,e^{2K_1}\bigl[\,|\varphi(s_0)|^2 + K_2^2\,\bigr],
\]
and so, using (B.46),
\[
\frac{|\varphi(t)|^2}{r(t)} \;\le\; 2\,e^{2K_1}\Bigl(e^{\lambda} + \frac{K_2^2}{r(t)}\Bigr).
\]
But from (B.4 c) it follows that
\[
\dot r(t) \;\ge\; -\lambda\,r(t) + \alpha(t) \;\ge\; -\lambda\,r_{\min} + \lambda\,r_{\min} \;=\; 0 \qquad \text{if } r(t) \le r_{\min},
\]
so that, because $r(0) \ge r_{\min}$,
\[
r(t) \;\ge\; r_{\min} \qquad \forall\, t.
\]
If this is inserted into the inequality above, the lemma is proven. □

Proof of Lemma B.4

It is seen from (B.4 e,f,b) that
\[
\hat e_f(t) \;=\; \hat b_0(t)\Bigl[\frac{p^{\,m+n_T}}{P}\,\bar u(t) + \hat\theta^T(t)\,\varphi(t)\Bigr]
\;=\; \hat b_0(t)\,\bigl[G(p)\,\hat\theta^T(t)\bigr]\,\varphi(t),
\]
where
\[
G(p) \;=\; \frac{P_1(p) - P_1(0)}{P_1(p)}
\]
is a strictly proper, asymptotically stable transfer operator and $\hat b_0(t)$ is bounded. Hence,
\[
|\hat e_f(t)| \;\le\; K\,|\varphi(t)|\Bigl(e^{-ct} + \int_0^t e^{-c(t-s)}\,\frac{|\varphi(s)|\,|\varepsilon(s)|}{r(s)}\,ds\Bigr)
\]
for some positive $K$ and $c$. Split the integral into two parts:
\[
|\hat e_f(t)| \;\le\; K\,|\varphi(t)|\Bigl(e^{-ct} + \int_0^{t-T} e^{-c(t-s)}\,\frac{|\varphi(s)||\varepsilon(s)|}{r(s)}\,ds + \int_{t-T}^{t} e^{-c(t-s)}\,\frac{|\varphi(s)||\varepsilon(s)|}{r(s)}\,ds\Bigr). \tag{B.47}
\]
The two terms in the right-hand side will be estimated separately. First use the boundedness of the estimates and the noise to conclude from (B.2) and (B.4 d,e,f) that, for some $K_\varepsilon$ and $K_v$,
\[
|\varepsilon(t)| \;\le\; K_\varepsilon\,|\varphi(t)| + K_v.
\]
Hence,
\[
K|\varphi(t)|\Bigl(e^{-ct} + \int_0^{t-T} e^{-c(t-s)}\,\frac{|\varphi(s)||\varepsilon(s)|}{r(s)}\,ds\Bigr)
\;\le\; K|\varphi(t)|\,e^{-cT}\Bigl(1 + \int_0^{t-T} e^{-c(t-T-s)}\,\frac{|\varphi(s)||\varepsilon(s)|}{r(s)}\,ds\Bigr)
\]
\[
\le\; K|\varphi(t)|\,e^{-cT}\Bigl(1 + \frac{1}{c}\sup_{s\le t-T}\frac{|\varphi(s)|\bigl(K_\varepsilon|\varphi(s)| + K_v\bigr)}{r(s)}\Bigr)
\;\le\; K|\varphi(t)|\,e^{-cT}\Bigl(1 + \frac{1}{c}\Bigl[K_3\bigl(K_\varepsilon + K_v\bigr) + \frac{K_v}{r_{\min}}\Bigr]\Bigr)
\;\triangleq\; K_4\,e^{-cT}\,|\varphi(t)|, \tag{B.48}
\]
where Lemma B.3 and the lower bound on $r(t)$ obtained in the proof of that lemma have been used in the second last step. The second term in (B.47) is estimated by the use of Lemma B.2:
\[
K|\varphi(t)|\int_{t-T}^{t} e^{-c(t-s)}\,\frac{|\varphi(s)||\varepsilon(s)|}{r(s)}\,ds
\;\le\; K\,e^{K_1T}\int_{t-T}^{t} e^{-c(t-s)}\,\frac{\bigl[|\varphi(s)|+K_2\bigr]\,|\varphi(s)|}{r(s)}\,|\varepsilon(s)|\,ds
\]
\[
\le\; K\,e^{K_1T}\int_{t-T}^{t} e^{-c(t-s)}\,\frac{(K_2+1)\,|\varphi(s)|^2 + K_2}{r(s)}\,|\varepsilon(s)|\,ds
\;\le\; K\,e^{K_1T}\Bigl[(K_2+1)K_3 + \frac{K_2}{r_{\min}}\Bigr]\int_{t-T}^{t}|\varepsilon(s)|\,ds
\;\triangleq\; K_5\,e^{K_1T}\int_{t-T}^{t}|\varepsilon(s)|\,ds, \tag{B.49}
\]
where the second last step follows as above. Inserting (B.48) and (B.49) into (B.47) proves the assertion of the lemma. □