or (3π/2)+β < θ(2T) < 2π (equivalently, β − π/2 < θ(2T) < 0), which still guarantees (4.4.32). Now suppose the claim is false, that is, θ(2T) ∉ [0, θ*] ∪ [(3π/2)+β, 2π]. For any t̄ ∈ [T, 2T], if θ(t̄) = θ(T) = θ*, (4.4.24) gives E(t̄) = E(T), which implies w(t̄) = w(T) = 0. Since ẇ(t) = sin θ(t) − (−C_1) cos θ(t) < 0 whenever θ ∈ [π, θ*], the pendulum cannot reach θ(2T) upward from θ(T). Therefore, the pendulum must arrive at θ(2T) through θ = 0. In particular, there exists t_1 ∈ (T, 2T) such that θ(t_1) = (3π/2)+β. Applying (4.4.24) once again we get E(t_1) − E(T) = −u(sin θ(T) − sin θ(t_1)) = 0. However, E(t_1) = (1/2)w²(t_1) + cos((3π/2)+β) > cos((3π/2)+β) > 0. This is a contradiction.

Lemma 4.4.6 At the end of each iteration of Step 3 or Step 4, |x| ≤ a.
Since φ is 2π-periodic, one can write

x(2T) = x(0) + ĉ_1 T²,   (4.4.33)

where ĉ_1 equals c_1 or −c_1 according to the case. Noticing −a < x(0) < 0 and (4.4.5), we know |x(2T)| < a.

Lemma 4.4.7 If E(0)
Now we give the main theorem.

Theorem 4.4.8 System (4.4.2) is globally asymptotically stabilized by the hybrid controller given in Section 4.4.2.

Proof: Global attractiveness follows from Lemmas 4.4.2-4.4.7. More precisely, the trajectory of system (4.4.2) starting from any initial point enters, in finitely many switches and finite time, the neighborhood U, and then is steered to the origin by the local controller u_l. Local asymptotic stabilizability is guaranteed by the local controller u_l.
4.4.4 A modified algorithm

In the algorithm given in Section 4.4.2, the control is set to zero between adjacent bang-bang controllers, and the energy remains constant during these periods. This is a kind of "waste" of time, and as a result the algorithm may converge slowly. To overcome this drawback we modify the algorithm as follows.

Step 0. Apply u_s to steer x and v to zero.

Step 1. Record the initial states x_0 = x(t_0), v_0 = v(t_0), θ_0 = θ(t_0) and w_0 = w(t_0), and choose ε > 0, T_4 > 0, T_5 > 0 and T_6 > 0 (see Remark 4.4.9 below).

Step 2. Calculate the energy E_0 = (w_0²/2) + cos θ_0. If cos c < E_0 < (d²/2)+1, set u = 0 until (x, v, θ, w)^T enters U, and then switch to u_l. Otherwise, if E_0 ≤ cos c, go to Step 3; if E_0 ≥ (d²/2)+1, go to Step 4.

Step 3. Case 1. One of the following conditions holds:

w_0 > 0 and π/2 < θ_0 < 3π/2,   (4.4.34)

w_0 = 0 and π/2 < θ_0 ≤ π,   (4.4.35)

w_0 < 0 and 3π/2 ≤ θ_0 < 2π − c,   (4.4.36)

w_0 < 0 and c < θ_0 ≤ π/2.   (4.4.37)

Calculate

C_4 = min{(a − x_0)/T_4², (1 + d²/2 − E_0)/(2π − θ_0)} if (4.4.34) or (4.4.35) holds, and C_4 = min{(a − x_0)/T_6², (1 − E_0)/4} if (4.4.36) or (4.4.37) holds.   (4.4.38)

Apply the controller

u = C_4 if C_4 ≥ ε, and u = 0 if C_4 < ε,   (4.4.39)

until θ = min{3π/2, θ|_{w=0} > π} if (4.4.34) or (4.4.35) holds; θ = θ* = 3π/2 if (4.4.36) holds; θ = max{c, θ|_{w=0}} (< π/2) if (4.4.37) holds.

If C_4 < ε, set the current states as the initial states and restart Step 3. If C_4 ≥ ε, switch the control to u = −C_4; when v = 0, set the current states as the initial states and then return to Step 2.

Case 2. One of the following conditions holds:

w_0 < 0 and π/2 < θ_0 < 3π/2,   (4.4.40)

w_0 = 0 and π < θ_0 < 3π/2,   (4.4.41)
w_0 > 0 and 3π/2 ≤ θ_0 < 2π − c,   (4.4.42)

w_0 > 0 and θ_0 < π/2.   (4.4.43)

Calculate

C_5 = min{(a + x_0)/T_4², (1 + d²/2 − E_0)/θ_0} if (4.4.40) or (4.4.41) holds, and C_5 = min{(a + x_0)/T_6², (1 − E_0)/4} if (4.4.42) or (4.4.43) holds.   (4.4.44)

Apply the controller

u = −C_5 if C_5 ≥ ε, and u = 0 if C_5 < ε,   (4.4.45)

until θ = max{π/2, θ|_{w=0} < π} if (4.4.40) or (4.4.41) holds; θ = θ* = min{2π − c, θ|_{w=0} > 3π/2} if (4.4.42) holds; θ = π/2 if (4.4.43) holds.

If C_5 < ε, set the current states as the initial states and restart Step 3. If C_5 ≥ ε, switch the control to u = C_5; when v = 0, set the current states as the initial states and then return to Step 2.

Step 4. Case 1.
w_0 < 0 and π/2 < θ_0 < 3π/2.   (4.4.46)

Calculate

C_6 = min{(a − x_0)/T_5², (E_0 − cos c)/4}.   (4.4.47)

Apply the controller

u = C_6 if C_6 ≥ ε, and u = 0 if C_6 < ε,   (4.4.48)

until θ = π/2. If C_6 < ε, set the current states as the initial states and restart Step 4. If C_6 ≥ ε, switch the control to u = −C_6; when v = 0, set the current states as the initial states and then return to Step 2.

Case 2.

w_0 > 0 and π/2 < θ_0 < 3π/2.   (4.4.49)

Calculate

C_7 = min{(a + x_0)/T_5², (E_0 − cos c)/4}.   (4.4.50)

Apply the controller

u = −C_7 if C_7 ≥ ε, and u = 0 if C_7 < ε,   (4.4.51)

until θ = 3π/2. If C_7 < ε, set the current states as the initial states and restart Step 4. If C_7 ≥ ε, switch the control to u = C_7; when v = 0, set the current states as the initial states and then return to Step 2.

Case 3. Either

w_0 > 0 and (3π/2 < θ_0 < 2π or 0 ≤ θ_0 < π/2),   (4.4.52)

or
w_0 < 0 and (3π/2 < θ_0 < 2π or 0 < θ_0 < π/2)   (4.4.53)

holds. If θ_0 = 0, calculate

C_8 = min{(a − x_0)/T_3², d/2, d²/2, (E_0 − cos c)/4}   (4.4.54)

and

C_8' = min{(a + x_0)/T_3², d/2, d²/2, (E_0 − cos c)/4},   (4.4.55)

where T_3 is the constant given in Section 4.4.2. Apply the controller

u = C_8 if w_0 > 0 and C_8 ≥ ε; u = −C_8' if w_0 < 0 and C_8' ≥ ε; u = 0 otherwise,   (4.4.56)

until

θ = π/2 if w_0 > 0, and θ = 3π/2 (or, equivalently, −π/2) if w_0 < 0.   (4.4.57)
Then, if u = 0 was applied, set the current states as the initial states and restart Step 4. Otherwise, switch the control to u = −C_8 (if w_0 > 0) or u = C_8' (if w_0 < 0); when v = 0, set the current states as the initial states and then return to Step 2. If θ_0 ≠ 0, set u = 0 until θ = 0 (or, equivalently, 2π), θ = π/2 or θ = 3π/2, whichever occurs first, and then set the current states as the initial states and restart Step 4.

Remark 4.4.9 In the modified algorithm, T_4 is an upper bound of the time in which the pendulum moves from θ_0 to θ = min{3π/2, θ|_{w=0} > π} with w(t) > 0, for all π/2 < θ_0 < 3π/2 and under all possible nonnegative constant controls; T_5 is an upper bound of the time in which the pendulum moves from θ_0 to θ = 3π/2 with initial energy E_0 ≥ (d²/2)+1 and w(t) ≥ 0, for all π/2 < θ_0 < 3π/2 and under all possible nonpositive constant controls u ≥ (cos c − E_0)/4; T_6 is an upper bound of the time in which the pendulum moves from θ_0 to θ = max{c, θ|_{w=0}} with initial energy E_0 ≤ cos c, w(t) ≤ 0 and c < θ_0 < π/2, under all possible nonnegative constant controls satisfying 0 ≤ u ≤ (1 − E_0)/4. The threshold ε is chosen as a sufficiently small positive constant.
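As a rough illustration of how the energy test in Step 2 drives the switching logic, the following sketch evaluates E_0 = w_0²/2 + cos θ_0 and selects the next phase, together with the ε-thresholding of the bang constant used in (4.4.39). This is not the authors' code; the values of c, d and ε are placeholders chosen only for the demo.

```python
import numpy as np

# Minimal sketch of the Step 2 energy branch of the modified algorithm.
# The constants c, d and epsilon below are assumed values, not from the text.
C, D, EPS = 0.3, 0.5, 1e-3

def step2_branch(theta0, w0):
    """Return 'local', 'step3' or 'step4' according to the energy E0."""
    E0 = 0.5 * w0**2 + np.cos(theta0)
    if np.cos(C) < E0 < D**2 / 2 + 1:
        return "local"              # set u = 0 until the state enters U, then switch to u_l
    return "step3" if E0 <= np.cos(C) else "step4"

def thresholded_control(C4):
    """Bang constant of (4.4.39): apply C4 only when it is at least epsilon."""
    return C4 if C4 >= EPS else 0.0

if __name__ == "__main__":
    for theta0, w0 in [(np.pi, 0.0), (0.1, 0.0), (np.pi, 3.0)]:
        print(theta0, w0, step2_branch(theta0, w0))
```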
Simulation results are shown in Fig.4.2 and Fig.4.3.
4.5 Concluding remarks

Quadratic stability of switched nonlinear systems has been studied in this chapter. The key point is to avoid employing strict completeness, which is usually very hard to verify. Instead, we first transform the quadratic stability problem into an equivalent restricted nonlinear programming problem and then give a necessary and sufficient condition for quadratic stability by means of the Fritz John condition. This condition is easy to verify because only algebraic equations and inequalities are involved. As an application example of the hybrid control technique, a globally stabilizing controller for the cart-pendulum system has been designed in this chapter. Compared with existing results, this design has two features. Firstly, we give a global asymptotic stabilization result; to the best of our knowledge, no global asymptotic stabilization results for the cart-pendulum system have previously been reported. Secondly, before switching to a local stabilizing controller, we switch between bang-bang controllers, so the control is easy to realize in practice. Although the method is based on energy, we do not limit ourselves to keeping the energy increasing (in the case of E_0 ≤ cos c) or decreasing (in the case of E_0 ≥ (d²/2)+1).
Fig.4.2 Swinging-up control of the cart-pendulum system (t (sec), sampling step = 0.1 sec)

Fig.4.3 Control switching (t (sec), sampling step = 0.1 sec)
References [1] A.A.Agrachev and D.Liberzon, Lie-algebraic conditions for exponential stability of switched systems, in Proc. IEEE Conf.Decision and control, pp.2679-2684, 1999.
'2] A.Back, J.Guckenheimer and M.Myers, A dynamical simulation facility for hybrid systems, in Hybrid Systems, R.Grossman et al. Eds., 255-267, 1993. ^3] M.S.Branicky, Multiple Lyapunov functions and other analysis tools for switched and hybrid systems, IEEE Trans.Automat.Contr., vol.43, pp.475-482, 1998. [4] M.S.Branicky, V.S.Borkar and S.K.Mitter, A unified framework for hybrid control: model and optimal control theory, IEEE Trans.Automat.Contr., vol.43, pp.31-45, 1998. [5] M.S.Branicky, Stability of switched and hybrid systems, in Proc. 33rd IEEE Conf.on Decision and Control, 3498-3503, 1994. [6] R.W.Brockett, Hybrid models for motion control systems, in Essays on Control Prospectives in the Theory and its Applications, H.L.Trentelman and J.C.Willems, Eds. Boston, MA: Birkhauser, 1993, pp.29-53. [7] C.C.Cheng and J.Hauser, Nonlinear control of a switching pendulum. Automatica, Vol.31, No.6, pp.851-862, 1995. [8] W.P.Dayawansa and C.F.Martin, A converse Lyapunov theorem for a class of dynamical systems with undergo switching, IEEE Trans.Automat.Contr., vol.44, pp.751-760, 1999. [9] J.A.De Dona, S.O.Reza Moheimani, G.C.Goodwin and A.Feuer, Allowing for over-saturation in robust switching control of a class of uncertain systems, in Proc. IEEE Conf.Decision and control, pp.3053-3058, 1999. [10] Z.Feng, Z.Yin and H.Chen, Microprocessor-Based Controller for double inverted pendulum, In Proc. IFAC 10th World Congress, pp.237-240, 1987. [11] J.F.Frommer, S.R.Kulkarni and P.J.Ramadge, Controller switching based on output prediction errors, IEEE Trans.Automat.Contr., vol.43, pp.596-607, 1998. [12] K.Furuta, M.Yamakita and S.Kobayashi, Swing-up control of inverted pendulum using pseudo-state feedback. Proc. Instn. Mech. Engrs., Vol.206, pp.263-269, 1992. [13] K.Furuta, T.Okutani and H.Stone, Computer control of a double inverted pendulum. Comput. Elect. Engng., Vol.5, pp.67-84, 1978. [14] S.Geva and J.Sitte, A Cart-Pole experiment benchmark for trainable controllers. IEEE Control Systems, Vol.13, No.5, pp.40-51, 1993. [15] A.Gollu and P.P.Varaiya, Hybrid Dynamical Systems, in Proc. 28th IEEE Conf. Decision and Control, pp.2708-2712, 1989. [16] J.P.Haspanha and A.S.Morse, Stability of switched systems with average dwelltime, in Proc. 38th Conf.on Decision and Control, 2655-2660, 1999. [17] A.Hassibi and S.Boyd, Quadratic stabilization and control of piecewise-linear systems, in Proc. American Contr. Conf, pp.3659-3664, 1998. [18] A.Iftar and U.Ozguner, Overlapping decompositions, Expansions, Contractions, and Stability of Hybrid Systems. IEEE Trans.Automat. Control, Vol.43, No.8, pp.1040-1055, 1998.
[19] P.A.Iglesias, On the stability of sampled-data linear time-varying feedback systems, in Proc.33rd Conf.Decision and Control, pp.219-224, 1994. [20] A.Isidori, Nonlinear Control Systems, 3rd Edi., Springer-Verlag, Berlin, 1995. [21] I.Kolmanovsky and N.H.McClamroch, Hybrid feedback laws for a class of cascade nonlinear control systems, IEEE Trans. Automat. Contr., vol.41, pp. 1271-1282, 1996. [22] M.Krstic, I.Kanelakopoulos and Petar Kokotovic, Nonlinear and Adaptive Control Design, John Wiley & Sons, New York, 1995. [23] D.Liberzon and A.S.Morse, Basic problems in stability and design of switched systems, IEEE Contr. Syst. Magazine, vol.19, 1999. [24] R Marino and P.Tomei, Nonlinear Control Design, Prentice Hall, 1995. [25] A.S.Morse, supervisory control of families of linear set-point controller-part 1: exact matching, IEEE Trans.Automat.Contr., vol.41, pp.1413-1431, 1996. [26] T.Ooba and Y.Funahashi, On a common quadratic Lyapunov function for widely distant systems, IEEE Trans. Automat. Contr., Vol. 42, 1697-1699, 1997. [27] K.M.Passino, A.N.Michel and P.J.Antsaklis, Lyapunov stability of a class of discrete event systems, IEEE Trans.Automat.Contr., Vol.39, pp.269-279, 1994. [28] P.Peleties and R. DeCarlo, Asymptotic stability of m-switched systems using Lyapunov-like functions, in Proc. American Contr. Conf., pp.1679-1684, 1991. [29] R.Sepulchre, M.Jankovic and P.V.Kokotovic, Constructive Nonlinear Control, Springer-Verlag, Berlin, 1997. [30] E.Skafidas, RJ.Evans, A.V.Savkin and I.R.Peterson, Stability results for switched controller systems, Automatica, vol.35, pp.553-564, 1999. [31] M.W.Spong, The swing up control problem for the acrobot, IEEE Contr. Syst. Magazine, vol.15, pp.49-55, 1995. [32] M.W.Spong, Underactuated Mechanical Systems. In Control Problems in Robotics and Automation, pp.135-150, Springer, 1998. [33] M.W.Spong and L.Praly, Control of underactuated mechanical systems using switching and saturation, In Control Using Logic-Based Switching, pp. 162-172, 1995 [34] P.P.Varaiya, Smart cars on smart roads: problems of control, IEEE Trans.Automat, Vol.3 8.pp. 195-207, 1993. [35] Q.Wei, W.P.Dayawansa and W.S.Levine, Nonlinear controller for an inverted pendulum having restricted travel. Automatica, Vol.31, No.6, pp.841-850, 1995. [36] H.Ye, N.Michel and L.Hou, Stability theory for hybrid dynamical systems, IEEE Trans.Automat.Contr., vol.43, pp.461-474, 1998. [37] J.Zhao and M.W.Spong, Hybrid Control for Global Stabilization of the CartPendulum System, Automatica (to appear).
Chapter 5

Robust and Adaptive Control of Nonholonomic Mechanical Systems with Applications to Mobile Robots

Y. M. Hu^ and W. Huo*

^Department of Automatic Control Engineering, South China University of Technology, Guangzhou 510641, P. R. China. E-mail: auymhu@scut.edu.cn

*The Seventh Research Division, Beijing University of Aeronautics and Astronautics, Beijing 100083, P. R. China
5.1 Introduction

Nonholonomic mechanical systems have a profound engineering background, with examples such as mobile robots, vehicles and spacecraft. Nonholonomic mechanical systems, although controllable, cannot be stabilized by smooth static or dynamic state feedback control laws. This fact makes the control of general nonholonomic systems very difficult, and stimulates researchers to construct time-varying and discontinuous feedback controllers for some special nonholonomic systems. Many sophisticated control approaches, such as smooth time-varying strategies, time-optimal control laws, discontinuous control laws and iterative learning control, have been proposed for various nonholonomic control plants. However, most current control approaches for nonholonomic systems depend on complete knowledge and special structures of the controlled plants, and may in practice produce poor dynamic responses in the presence of system uncertainties or parameter variations. The design of robust and adaptive controllers for general nonholonomic mechanical systems with uncertainties is thus emphasized in recent surveys (Hu, Zhou and Pei; Kolmanovsky and McClamroch).
Sliding mode control (SMC) is a special discontinuous control technique applicable to a broad variety of practical systems (referred to the book of Utkin, and the intioductory papers of Young and Ozguner). This approach is mainly based on the design of switching functions which are used to create a sliding manifold. When this manifold is attained, the switching functions keep the trajectory on the manifold, thus yielding desired system dynamics. Since nonholonomic mechanical systems can not be stabilized by smooth time invariant static or dynamic feedback laws, the SMC technique becomes appealing in the control of nonholonomic systems. A few of researchers have investigated the design of sliding mode controllers for a class of nonholonomic systems, which heavily depend on the existence of suitable Lyapunov functions and special structures of the systems (see, for instance, Bloch and Drakunov). Even if the necessary Lyapunov function is constructed, however, the resulted sliding mode motion is only locally ensured. Although considerable progress has been made for the control of nonholonomic systems during the last decade, the literature on uncertain nonholonomic systems is sparse. The aim of this chapter is to study the robust and adaptive control problems of uncertain nonholonomic mechanical systems. We first present the reduced model of uncertain mechanical control systems with classical nonholonomic constraints, and review some properties of uncertain nonholonomic control systems. Based on the backstepping design approach, high-order sliding mode and differential flat systems theory, we then develop some adaptive and robust control strategies for a class of nonholonomic mechanical control systems with mismatched uncertainties. Finally, we apply the proposed approaches to a two-wheel driven nonholonomic mobile robot. Numerical simulations and on-line implementations are performed to show their efficiency. The proposed approaches are not only adaptive and robust to uncertain parameter variations or uncertainties, but also have the potentiality to weaken the chattering phenomenon of traditional sliding mode control systems.
5.2 Dynamic model of nonholonomic mechanical systems

Many nonholonomic mechanical systems can be described by the following dynamic equations

M(q)\ddot{q} + V(q,\dot{q})\dot{q} + G(q) = B(q)\tau + A^T(q)\lambda,   (5.2.1)

and the nonholonomic kinematic constraints

A(q)\dot{q} = 0,   (5.2.2)

where q \in R^n and \tau \in R^r are respectively the generalized configuration and control input vectors; \lambda \in R^m is the constraint force vector; M(q) \in R^{n \times n} is a
symmetric positive definite matrix; V(q,\dot{q})\dot{q} \in R^n is the term which may include centripetal and Coriolis forces; G(q) is the gravitational force; B(q) is an n \times r full rank input matrix; A(q) is an m \times n full rank matrix associated with the constraints.

Let N_i = N_i(q) (i = 1,\dots,n-m) be a set of smooth and linearly independent vector fields such that

A(q)N_i = 0, \quad i = 1,\dots,n-m,   (5.2.3)

and let \Delta be the distribution spanned by the vectors N_i (i = 1,\dots,n-m). Then from (5.2.2) it follows that \dot{q} \in \Delta, that is, there exists an (n-m)-dimensional pseudo-velocity vector v = (v_1,\dots,v_{n-m})^T such that

\dot{q} = Nv,   (5.2.4)

where N = N(q) = (N_1(q),\dots,N_{n-m}(q)). Differentiating (5.2.4), we obtain

\ddot{q} = N\dot{v} + \dot{N}v.   (5.2.5)

Substituting (5.2.5) into (5.2.1) and premultiplying by N^T, we have

N^T M N \dot{v} + (N^T M \dot{N} + N^T V N)v + N^T G = N^T B \tau.   (5.2.6)

That is, the nonholonomic mechanical systems (5.2.1) and (5.2.2) can be reduced to the following form

\dot{q} = Nv, \qquad \bar{M}\dot{v} + \bar{V}v + \bar{G} = \bar{B}\tau,   (5.2.7)

where

\bar{M} = N^T M N, \quad \bar{V} = N^T M \dot{N} + N^T V N, \quad \bar{G} = N^T G, \quad \bar{B} = N^T B.   (5.2.8)

Since the above base matrix N is independent of the uncertainties of the mechanical system, it is easy to prove the following properties:

P1) \bar{M} is an (n-m) \times (n-m) symmetric positive definite matrix;

P2) \dot{\bar{M}} - 2\bar{V} is a skew symmetric matrix;

P3) Let \xi \in R^\mu be the parameter vector of the mechanical system (5.2.1); then

\bar{M}\dot{v} + \bar{V}v + \bar{G} = \Theta(q,\dot{q},v,\dot{v})\xi,   (5.2.9)
where \Theta(q,\dot{q},v,\dot{v}) \in R^{(n-m) \times \mu} is independent of the parameters of the system.

As a typical example of nonholonomic mechanical systems, we consider the two-wheeled planar mobile robot shown in Figure 5.1. Two independent but identical dc motors are used to steer the wheels; the corresponding dynamic equations and kinematic constraint are given as follows:

m\ddot{x} = \frac{1}{R}(\tau_1+\tau_2)\cos\theta - \lambda\sin\theta, \quad m\ddot{y} = \frac{1}{R}(\tau_1+\tau_2)\sin\theta + \lambda\cos\theta, \quad I_0\ddot{\theta} = \frac{D}{R}(\tau_1-\tau_2),   (5.2.10)

\dot{x}\sin\theta - \dot{y}\cos\theta = 0.   (5.2.11)

Fig.5.1 Two-wheel steering mobile robot
where (x, y) is the position of the center P of the rear axis, and \theta is the orientation with respect to the X-axis. \tau_1 and \tau_2 represent, respectively, the torques produced by the two driving motors. 2R and 2D are, respectively, the diameter of the rear wheels and the distance between them. Other nonlinear dynamic models can also be developed using the Lagrangian formulation (see, for instance, Canudas de Wit, Siciliano and Bastin, 1998). It is easy to rewrite the above model in the reduced form (5.2.7)-(5.2.8) with q = (x, y, \theta)^T, \tau = (\tau_1, \tau_2)^T, and
Robust and Adaptive
'm
0
0N
0
m
0 , B=
,o o / 0 , Select N(q)
Control of Nonholonomic
'cos# 1
sin#
7? v
Mechanical Systems
...
165
cos#N sin# , F = 0, G = 0, ^ = (-sin6> cos0 o)
D
as 'cos#
°) 0 Io u
N(q) = sin#
then, one can develop controller based on the reduced model q = Nv NTMNV
+ NTMNV
=
NTBT
(5.2.12)
with the above coefficients. However, motor can not behave ideally to produce torque or velocity as the control law needed, actuator dynamics should be considered in the complete robot dynamics to improve control performance especially in the case of high-velocity movement and fast varying loads. Suppose that the steering motors used are the same types, then the dynamic equations for brushed DC motors can be written in the form •Tl+kJci=Ul
" T: + JJfv rp
fJn,
(5.2.13)
rp
whereh 0 ,r,k e denote, respectively, the armature inductance, the armature resistance and the back-EMF coefficients, co = ((o],eo2)T represents the angular velocity vector defined as
1
-D
(5.2.14) V"27
:„ P is the ratio of the decelerating gear, kT is the torque constant, and u - {ux, u2)\T is the vector of applied voltages to the motors. Differentiating the second equation of (5.2.12), and substituting (5.2.13) and (5.2.14) into the equation, we obtain the following model
[q = Nv {H(q,v,v)v + V(q,v)v + K(q)v = u where the control input u is selected as the voltage vector, and
(5.2.15)
166
H=-
Y. M. Hu & W. Huo
l^
RK
IDP-k,
KmD
V
Rr 2Dp-kT
=•
0/
mD mD
h^ ,K = kj
-1 0 /
R
1
D
1
-D
(5.2.16)
The output variables are generally selected as Y=
= h(q):
\*u
y + hsind
(5.2.17)
that is, the position of the central point Q.
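To make the reduced model concrete before moving to the control design, here is a minimal numerical sketch of the kinematic part \dot{q} = N(q)v and of the reduced inertia N^T M N for the unicycle-type choice of N(q) above. The parameter values and the Euler integration are assumptions made for the demo only; this is not the chapter's code.

```python
import numpy as np

# Sketch of the reduced mobile-robot model (5.2.7)/(5.2.12):
# kinematics q_dot = N(q) v and reduced inertia N^T M N for M = diag(m, m, I0).
m, I0 = 50.0, 0.6                   # assumed robot parameters

def N_of_q(q):
    _, _, th = q
    return np.array([[np.cos(th), 0.0],
                     [np.sin(th), 0.0],
                     [0.0,        1.0]])

def reduced_inertia(q):
    M = np.diag([m, m, I0])
    N = N_of_q(q)
    return N.T @ M @ N              # equals diag(m, I0) for this choice of N(q)

def simulate_kinematics(q0, v_func, t_end=5.0, dt=1e-3):
    """Euler-integrate q_dot = N(q) v for a given pseudo-velocity profile v(t)."""
    q = np.array(q0, float)
    for k in range(int(t_end / dt)):
        q = q + dt * N_of_q(q) @ v_func(k * dt)
    return q

if __name__ == "__main__":
    # constant forward speed plus slow turning: the robot traces an arc
    q_final = simulate_kinematics([0.0, 0.0, 0.0], lambda t: np.array([0.5, 0.2]))
    print(q_final, reduced_inertia(q_final))
```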
5.3
Robust control design based on SMC
5.3.1 SMC based on high-order sliding mode As discussed above, nonholonomic mechanical systems can be reduced to the following form x = f(x) + B(x)u
(5.3.1)
where x e R" and usRm are respectively the state and input vectors; / and B(x) = (bx ,...,bm) are sufficiently smooth functions with corresponding dimensions. To design a sliding mode controller with sliding order r = {rx,...,rm )T such that the above system (5.3.1) has proper dynamic behavior, we need to determine a highorder sliding mode control law and a sufficiently smooth w-dimensional function S = (S,,.., Sm)T : R" -+Rm such that (I). Each components, exists at leastr, -th order continuous derivatives; (II). The sliding motion constrained on the set Q defined by the following equations S, = S,
:0H)
: 0 , (/=!,.., m)
(5.3.2)
is asymptotically stable (III). The following regular sliding condition (',-i)i rank{VS,, VS,, • • •,VS« '' |/ = \,...,m) = r
(5.3.3)
is satisfied, where r = rx H— + rm is called the total sliding order. (VI). The desired high-order sliding motion on Q can be achieved in finite time.
Obviously, the above high-order SMC is a natural generalization of the conventional SMC(i.e., rx = ••• = rm = 1, r =m). However, tools in the conventional SMC designs can not be easily applied to the high-order SMC design, since, in general, the derivatives of the switching function components 5 ; , • • •, SJrr' (i = 1,..., rri) may be dependent on the derivatives of inputs and the fact that the relative degree components are not one may cause instability of the system. To derive the mathematical description of high-order sliding mode motions, we suppose that the designed switching function S satisfies the following conditions (HI). Lb.LkfSt=0 forallxandO<£
Lb(LrflS,)
-
Lb
(L'f'sy (5.3.4)
mxm
z
ls
L
^V /~ »> - V /~
,s
m ).
is nonsingular for all x. The above conditions (HI) and (H2) imply that the system (5.3.1) has strongly relative degree {rx,...,rm} from the control input to the sliding manifold. Such kind of conditions are natural in the input/output linearization of nonlinear control systems (see, e.g., Sastry, 1999). Under the above conditions (HI) and (H2), we have S\k) =LkfSi,Q
(5.3.5) (5.3.6)
which imply that the r, -th derivatives of each switching function components st are dependent on the control. Obviously, once the sliding motion is achieved, we have SJk^ (x) = 0 (z' = l,..., m\ k = 0,1,..., rt -1), that is Si-k\x) = LkfSi=0,
i = l,...,m;k = 0,l,...,ri-l.
(5.3.7)
which implies thatS}r''(x) = 0. The equivalent control ueq can thus be uniquely determined by (5.3.6) as follows ue=-Es-lKs
(5.3.8)
where Ks=(L}Su...,LySmf
(5.3.9)
It should be noted that the equivalent control ueg in (5.3.8) is actually determined by the rt -th order derivatives of the switching functions, that is, the equations sfr'\x) = 0(i = l,...,m). It thus generalizes the conventional equivalent control method (see, e.g., Utkin, 1992), since in the case of at least one /-, > 1, the conventional equivalent control can not be directly solved from the conventional equivalent control equation S(x) = 0. Let the control be the following form u = Es-\v + Kzzs) + ueq whereK z = diag(Kzl,...,Kzm),
Kzi=(Kzi,...,Kzfl)(i
(5.3.10) = l,...,m)arerrth
dimensional
s;ain vectors to stabilize the new state componentsz si ; whilev = (vj,...,v m ) r is the nonlinear sliding mode control such that the desired high-order sliding motion can be achieved in finite time. On the other hand, as well-known in nonlinear control theory (see, e.g., Sastry, 1999, and Isidori, 1995), we know that the following row vector functions VS, (x), VLfSx (*),..., V l V ' S , (x) Sm(x),VLfSm(x),...,VLj-1Sm(x) are linearly independent, and the regular sliding condition (5.3.3) is thus satisfied. Consider the nonlinear state transformation z = ( z j ,£T)T = T(x) as follows zs =(zsl,...,zsmf
=Tz(x) = (Sl,LfSl,...,L}'lSl;...;Sm,LfSm,...,Lrf1Sm)T
£ = T( (x) = (Ci ,-,C„.7)T
(5.3.11)
= (r, (x),. ..,Tn_7 (x))T
where £ = T^ (x) is chosen to be the smooth functions such that z = T(x) is a diffeomorphism, then from (5.3.5), (5.3.6) and (5.3.10) we transform the system (5.3.1) into the following canonical form
Robust and Adaptive
Control of Nonholonomic
2 si = (Ai+biKzi)zsi+b,vi,i £ = p(zs,0 + q(zs,C)v
Mechanical Systems
...
169
= l,...,m
where (Aj,bj)(i = l,...,m)sre controllable canonical pairs; and | p = [{LfTk) + {Lbj Tk (x))(Es~lKzzs {q = {LhTk{x))E?
+ ueq)] o T~x (z) e Rn~T
o r ' W e * ^
The switching function S(x) is rewritten in the new state variable z in the form S(zs,0 = S(x)°T-\z)
(5.3.13)
The first subsystem in the above canonical form (5.3.12) describes the dynamics of switching variable 5 , while the second subsystem, once the sliding motion is attained, determines the sliding modes. Since the inputs and switching variables are decoupled, one can easily determine the gain constants Kzi and sliding control law v, such that the system can realize the sliding motion with desired dynamics due to the controllability of the pair (A^ty). Let Kzi = (Kzi ,...,Kr<~x)be chosen such that X' = KrflArrl
+••• + Kl2iX + K°zi
(5.3.14)
are Huwitzian polynomials, then the asymptotic stability of the first subsystem in (5.3.12) is obviously guaranteed. Next, we consider the design of sliding mode control law so that the desired high-order sliding motion can be achieved in finite time. Let 5, =S<"-1) -Mrs?-2)
—-M?St
-n)St
(5.3.15)
where /// (J = l,...,rt -1) are constants such that the following (r, -1)-th order systems Sj*-* -M?-^
--M?S,
-M]S,=0
have desired eigenvalues. Then, from (5.3.12) and (5.3.15) we obtain -/=(->"/.•••,-//,•'
= (rn\ ,...,-$
,l)z«
,\)[{At + bjKzi )zsi + bjVj ]
(5.3.16)
r -1 = I (KJzi -MJi)S{iJ)+K%Si 7=1
+Vi
(5.3.17)
Thus, under the following control law
v, = - E W -M/)Sy) -K°z,Si -y,E, -S,Sign(E,)
(5.3.18)
7=1
we have Ei=-riEi-SlSign($l)
(5.3.19)
where yt > 0 and£, > 0 (/' = l,...,m), which implies that the desired sliding dynamics (5.3.16) can be achieved. Finally, we need to determine switching function S(x) such that the sliding mode system has proper dynamic behavior. Once the sliding motion is attained, the dynamic equation of sliding motion is obviously given as follows C=P?(ZS,O
(5320) (,i ,)
S,. = S ' j = - = l S, - =0, i = \,...,m where p = [{LfTk) + {LbTk{x))E-\Kzzs
-Ks)]oT~\z)
(5.3.21)
Select the switching functionS(z s ,£) to satisfy the regular sliding condition, e.g., rank{d{Si,S1,---,Syi'X))ldzs\i
= \,...,m) = r
(5.3.22)
then one can solve r state variables zs = G(£) from the 7 constraints 5, = St (r -1)
= Sf' =0 in (5.3.20). The sliding motion is thus described by the following equation
C-p0(zs,O zs=G{Q where the function zs = G(£) can be regarded as the input of the sliding mode system. If the following zero dynamics
Robust and Adaptive
Control of Nonholonomic
Mechanical Systems . ..
171
(5.3.24)
^ = A,(0,O
is asymptotically stable (i.e., the system is of minimum phase), then the sliding mode system is locally asymptotically stable. To present an explicit solution of the above high-order SMC design problems, we consider the following nonlinear SISO canonical control systems
(0
f0~\
h-^
x=
0
0
( 0 ^ (5.3.25)
x + 0 u+
0
1
a(x).
where x = (x,,...,x„) r e # " andMe/?are respectively the state and control vectors; a{x) is a scalar function. Let the switching function be given in the form s{x) = cxxl+ — + ckxk + xk+x =Cx, (0
*0)
(5.3.26)
then, we obtain AJ) _ clxJ+l+{n
k)
\s
+ ckxJ+k+xj+k+lJ
= cxxn_k+x+-
=
0,l,...,(n-k-l)
+ ckx„+u + a(x)
(5.3.27)
which implies that the conditions (HI) and (H2) are satisfied forr, = n-k (i.e., the sliding order is n-k). The nonlinear transformation (5.3.11) and the equivalent control (5.3.8) in this case are thus respectively given as follows z s =Ts(x) =
(z0,...,zn_k_1)7
Z
} ~ C l Xj+\
H
+C
k
eq=~C\Xn_k+x
X
j+k
+
X
j+k+\
(j =
0,...,n-k-\)
(5.3.28)
=(x„_k+l,...,x„) CkX„-a
= -C1Cl
Ck£k-(Z
(5.3.29)
With the above state transformation and the following control u = ueq + K°zz0 +••• + K"-k-'zn_k_x
+v
we obtain the canonical forms as follows 7 'j^ J = 0,1,...,«- * - 2 z
n-k-\
• K°zz0+-
+
Krk-lzn-k-\ + V
(5.3.30)
(5.3.31)
Gj -Cj+iJ-h-,k 1 Ck = K°zz0 +... + K"z-k-izn_k_l -c^x
(5 3 32) ckCk +v
Let K{ (j = 0,...,n-k-l) be the coefficients of the desired Huwitzian polynomial (5.3.14) and the SMC law be given by (5.3.18), then the system (5.3.25) can realize the desired sliding motion, as indicated above. What we need to do is the selections of proper constants c'j(j = \,...,kt;i = \,...,m) such that the sliding mode has prescribed dynamic behavior. During the sliding motion, from (5.3.26) and (5.3.27) we have zj =c]xJ+l+-
+ ckxJ+k+xJ+k+l
=0, j = 0,1,...,(«-k-1)
(5.3.33)
From (5.3.32) and (5.3.33) we can obtain the sliding mode equation as follows
J f •/' = ^ +1 'J = 1, "•'k ~l [Ck =-C\£i
c
(5.3.34)
kCk
which can be stabilized with desired dynamic response so long as the corresponding k-th order characteristic polynomial \lambda^k + c_k\lambda^{k-1} + \dots + c_2\lambda + c_1 = 0 is assigned to have desired eigenvalues. Consequently, as with conventional SMC, high-order SMC can also be explicitly designed for the class of nonlinear SISO or MIMO canonical systems (for the MIMO case, see Hu and Chao, 2000).
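For a concrete feel of this explicit design, the following is a hedged numerical sketch for a triple integrator (n = 3, k = 1, a(x) = 0, so the sliding order is n − k = 2). The gains c_1, K_z, \mu_1, \gamma and \delta below are illustrative choices, not values from the chapter.

```python
import numpy as np

# Sketch of the SISO design (5.3.25)-(5.3.34) on a triple integrator.
# Switching function: s = c1*x1 + x2, so s_dot = c1*x2 + x3 and s_ddot = c1*x3 + u.
c1 = 1.0                    # on the sliding set: x1_dot = -c1*x1
K0, K1 = -2.0, -3.0         # gains placing lambda^2 - K1*lambda - K0 Hurwitz
mu1 = -1.0                  # reduced dynamics S_dot = mu1*S after Sigma = 0
gamma, delta = 2.0, 0.5     # reaching-law gains of (5.3.18)-(5.3.19)

def control(x):
    x1, x2, x3 = x
    s  = c1 * x1 + x2
    ds = c1 * x2 + x3
    u_eq  = -c1 * x3                          # equivalent control (5.3.8): makes s_ddot = 0
    sigma = ds - mu1 * s                      # Sigma of (5.3.15)
    v = -(K1 - mu1) * ds - K0 * s - gamma * sigma - delta * np.sign(sigma)
    return u_eq + K0 * s + K1 * ds + v        # control of the form (5.3.10)

def simulate(x0, t_end=10.0, dt=1e-3):
    x = np.array(x0, float)
    for _ in range(int(t_end / dt)):
        u = control(x)
        x = x + dt * np.array([x[1], x[2], u])   # Euler step of the integrator chain
    return x

if __name__ == "__main__":
    print(simulate([1.0, -0.5, 0.2]))            # the state approaches the origin
```

With this control, Sigma obeys the reaching law \dot{\Sigma} = -\gamma\Sigma - \delta\,\mathrm{sign}(\Sigma), after which S decays along \dot{S} = \mu_1 S and the sliding dynamics reduce to \dot{x}_1 = -c_1 x_1.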
5.3.2 Dynamic SMC based on high-order sliding mode Since nonholonomic mechanical control systems can not be exactly linearized via smooth state and input transformations, two approaches are generally used to simplify the control design problem. The first one is to construct proper nonlinear state and input transformations to transform the reduced model (5.2.7) into the canonical chained or power forms. Another approach is to apply dynamic extension algorithms to ensure that the controlled plant is differentially flat to a linear controllable system. However, the first approach is difficult to implement in the presence of parameter or system uncertainties since the constructed transformations are heavily dependent on the exact knowledge of the controlled plant. Without loss of generality, we consider the following Fliess SISO canonical system
a, =<J-,
(5.3.35)
=CT
°>1
P {r)
Q(crl,---,ap,&p,u,u,---,u ,x)
wheredQ/d&p
* 0 . Let u-vx,
=0
VJ =v/ + i(/' = l,...^-l)andv
= w b e the dynamic
compensate variables, then (5.3.35) can be rewritten in the new state variables
p-\
a
=
p
Op = 2 o ( ° V v, =v 2 Vi
= v
Vy
=W
• • ^
P
^ \ , -
•,vy,w,x) (5.3.36)
r
where Q0 is differentiate with respect to all its variables and dQ0 ldw*Q. For the sake of simplicity, we consider the design of SMC with sliding order p. Let the switching function s be given in the form .5 = 0^, then, from (5.3.36), we have \s(i)=ai+1(i (p)
[s
= l,...,p-l)
=Q0(.(?i,---,o-p,vl,---,vr,w,x)
(5.3.37)
Let Z = s(p-»-MP-is(r-V-...-Ss
-Mls
(5.3.38)
and the reaching law be given as
^-rZ-ssigniZ) where /iJ\j system
= l,...,p-l)
(r>o,s>o)
(5.3.39)
are constants such that the following ( p - l ) - t h order
Ap-l-Mp-]Ap-2
(t2X - / / ' = < )
(5.3.40)
has desired eigenvalue distribution, then, from (5.3.37), (5.3.38) and (5.3.39) we obtain Qo(ai,--,ap,v\,--,vr,w,x)-np~
ap
M^'M
°"2
= -rf-SSign(?)
(5.3.41)
A dynamic SMC law can be obtained from the above equation (5.3.41) since \partial Q_0 / \partial w \ne 0. The asymptotic stabilization of the output y = \sigma_1 is thus achieved via the proposed dynamic SMC approach.
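The key point of the dynamic SMC construction is that the discontinuity enters through the compensator state, so the control actually applied to the plant is continuous. The sketch below illustrates this on the simplest possible plant \dot{y} = u with a one-integrator extension \dot{u} = w; the gains are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Sketch of the dynamic SMC idea of (5.3.35)-(5.3.41): the discontinuous
# signal w drives the compensator u_dot = w, so u itself stays continuous.
mu1, gamma, delta = -2.0, 3.0, 1.0

def simulate(y0=1.0, u0=0.0, t_end=6.0, dt=1e-4):
    y, u = y0, u0
    for _ in range(int(t_end / dt)):
        s, ds = y, u                          # s = y, s_dot = y_dot = u
        sigma = ds - mu1 * s                  # Sigma of (5.3.38) for p = 2
        # choose w = s_ddot so that Sigma_dot = -gamma*Sigma - delta*sign(Sigma)
        w = mu1 * ds - gamma * sigma - delta * np.sign(sigma)
        u += dt * w                           # dynamic compensator: u_dot = w
        y += dt * u                           # plant: y_dot = u
    return y, u

if __name__ == "__main__":
    print(simulate())                          # both y and u end up near zero
```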
5.3.3 Robustness design in the presence of uncertainties 5.3.3.1 Model reduction of nonlinear uncertain systems As discussed above, high-order SMC can be designed for a wide class of nonlinear systems. However, due to the fact that some essential parameters such as inertial, payload, and the motor constants are generally unknown or uncertain, therefore it is necessary to consider the uncertainties in the design of SMC. This subsection will concentrate on the development of robust SMC design based on the backstepping approach, which allows the mismatched uncertainties. In the presence of uncertainties, the nonlinear system (5.3.1) can be rewritten in the fom JC = f(x) + Af(x,p) + [B(x) + AB(x,p)]u
(5.3.42)
where A/ and AB(x) = {Abx,...,Abm) represent, respectively, the uncertain parts off and B(x) = {bx,....,bm);p&Rl denote the uncertain parameter vector. Apart from the assumptions (HI) and (H2), we suppose that the uncertainties A/ and AB(x) satisfy the following conditions (H3). There are integer numbers at (i = \,...,m) such that L^L'y~lSl * 0 for some x and p; (H4). There are integer numbers^ (i = \,...,m) such that L^Ly'^S,*0 somey'e{l,...,w},
for
xandp.
If there is at least an integer/e{l,...,/w}such t h a t c r ^ ^ o r yi
then the
Robust and Adaptive Control of Nonholonomic Mechanical Systems . .. 175
above assumptions imply that the uncertain part Af or AB does not satisfy the socalled generalized matching conditions. Without loss of generality, we suppose 1
= l,...,«)
(5.3.43)
m
£i =Kg +AK# + Y(e$j
+Ae
i!j)uJ,
l = 7 + \,...,n
(5.3.44)
where r
T
Ksj=L'j-Si,
—1
AKSI.=LyLj-
Sj,i = l,...,m;
Kg =LfT-p+j, AKg =LyT7+J, A£j
=
j =
\,...,n-r;
(&esij )mxm = (^Abj Lf Si ) m x m >
E( =(egj)(n-7)xm=(LbjTl)(n-7)xm
>
AE^ = (Ae^j )(„-fym = (LdJ)j 7} ) ( „_? )xm ; At]p+J-l
= L¥Ly+J-lS„
0 < ; < r , - C T ; - l ; i = l,...,m.
Other terms are defined as before. Under the control of (5.3.10), the above subsystems (5.3.43) and (5.3.44) are respectively reduced to the forms if =zf + 1 ,l<*<<7,-l; (SF),: Z'.+J
= Z°,+J* +AT}^-\
*? =KZIZSI+ATJ?-
0<j
-a, - 1 ; i = \,...,m
(5.3.45)
+V,+A£,V
C = X{zs ,D + *Z + (Ec + AEC )E;\ where Es and Es + AES are assumed to be invertible, and Atf~l =AKsi +(Aen,....,Aeim)EJi(K2zs A£, =(Aen,....,Aeim)E;\
-Ks), i = \,...,m; i = \,...,m;
(5.3.46)
Z(z„0
=
AZ =
Ki+EcE;\Kzzs-Ks); AKc+AEiE;\Kzzs-Ks)
? (UJ » (x) / Once the sliding motion is achieved, then5', = 0(z = l,...,w;7 = 0,l,...), that
is z) = 0 , i / =0, \<j
i = \,...,m
(5.3.47)
From (5.3.45) and (5.3.47), we obtain the equivalent control veq=-(Im
i r l T +AEsE; r\Atf-\...,A?7 m»- )
(5.3.48)
Substituting (5.3.48) into (5.3.46), we obtain the sliding mode equation £ = Z(Zs,0 + *Z
(5-3-49)
where Ax =
Az-(E(+AE()E;l(i+AEsE;ly\An?-\...,ATiZ-l)T
The robust stability of (5.3.49), as discussed in the section 5.3.1, can be guaranteed by selecting proper switching function via conventional design approaches in nonlinear continuous control systems. What we need is to design SMC law v so that the desired sliding motion can be achieved in the presence of uncertainties. 5.3.3.2 High-order SMC design for nonlinear uncertain systems Consider the functions Ei(i = l,...,m) defined in (5.3.15), then from (5.3.45) it follows that
= Kzizsl +A?li +v, +AElv-rxrfzf+1 where
A^A^-TV^^1 m
Let the norm of any vector a be defined as |a| = £ | a , | > anc* suppose i=l
(5.3.50)
(5.3.51)
|A77,|
(5.3.52) 7=1
we have m
m f
i
il
IS,S,.<-X[ r,.S?+(^-A/7,M)|S,|J 1=1 (=i i-i <=i
;=i
+ //| S /| + /
7=1
^-(^„-'"/ma,^M)2:^,2 r,-l (=1
)|S,| (5.3.53)
;=i 7=1
where Xm;„ = winy,-, ? W = waxx,, Smin = min8i, 5max = maxSi, AEM I
I
I
I
AEM = maxAElM , ArjM = maxAr]iM i
i
Therefore, ifAEu is relatively small, then we can select proper parameters St and X, in (5.3.52) such that rmin-mymaxhEM
>0
8mm -m8maxt^M
~ A1M
r,-\ J^J+I '^MZ Ztf 1=1
•Kzlzsl
>0
(5.3.54)
From (5.3.53) it follows that the desired sliding dynamics (5.3.16) can be achieved in the presence of bounded uncertainties. Hence, ifKJzi =///0=l,...,r, - l ; / = l,...,/w) , then from (5.3.17) and (5.3.54) we know that the SMC law (5.3.52) has a low switching gain.
5.3.3.3 High-order SMC design via backstepping approach Backstepping approach has been extensively applied to the design of robust and adaptive controllers in recent years. However, almost all the controls developed by this approach are based on the triangular assumption of the system model. Unfortunately, the uncertainties in (5.3.45) do not satisfy, in general, such triangular conditions. In the following, we will develop a robust controller based on the backstepping approach. Only the Lipschitzian-like bounds of uncertainties are used in the design, and the triangular structure assumptions are thus eliminated. Let
(l
= l,...,m)
then<j> = T(z) is a diffeomorphism so long ascof (z),...,zf~])(l
(5.3.55) = l,...,m) are
(5.3.56)
for the subsystems {SF)t, then V} =z)zf =-p,{z))2
+ z\{z2 +Piz}) ( A >0)
(5.3.57)
If z2 + ptz) =0, then (5.3.57) implies that the components z) are asymptotically stable. However, in general, this is not true. To ensure that z) andz 2 proper asymptotic behavior, we introduce the auxiliary variables >] =z),
2
(OTO)] =0,o)
=-Plz\ )
+ptz)have (5.3.58)
and Lyapunov functions V?=V>+{tffl2
(5.3.59)
then, we have
=-*/ -Pitf
+
(5.3.60) (5.3.61)
Robust and Adaptive
Control of Nonholonomic
tf = z? -a)?,a>? =->) -Pltf
Mechanical Systems ...
+ ^zf
179
(5.3.62)
dz,
From (5.3.61) we know that <j>] and $f are also asymptotically stable if $ = 0. In general, at the k-th (1 < A: < cr, -1) step, we obtain a series of /fc-th order systems lk
_
Pi ~Zi
*=?a»*
M
la
j
(l<*
j+x
(5.3.63)
Z
i
and Lyapunov functions
V" = iV/ Hti? /2 (I = U,I» )
(5.3.64)
y=i
such that K;* = -2PiV* + tftf+l,(\
m)
(5.3.65)
where a!+\zl..,zf)
= -tf-1 -Pltf
+ Z^rz{+\
(l
i = l,...,«) (5.3.66)
At the (cr, + 7) - th (0 < _/' < r,- - cr, -1) step, due to the presence of uncertainties, we select the corresponding auxiliary functions co°l+J and the Lyapunov functions V°'+j as follows
<'+>+1 =-t?'+H -P,trJ + T 1 ^ V ^ , ' + 1 /=1
^+J,'=ffZ
V/Htf'+J)2'2
(5-3.67)
*Z;
(i = l,...,m)
(5.3.68)
/=i
which satisfy i?+J=z?+J+l+Atf.+J-*= - ^ + y - ' - ptfrt
£ - - ^ y - Z,'+1 /=1 <^, +
(5.3.69)
V?+] = -2PiV°>+1 + 4>?+jtf>+J+1 + £tf+l An?+'-1 /=o
(5.3.70)
At thefinalstep, the auxiliary functions $ and the Lyapunov functions
V? =r£v/+(tf)2/2 = zTz/2
(5.3.71)
7=1
satisfy, respectively, the following relations fi =Kzlzsi + A^'- 1 +v,- + AE.-V-
(5.3.72)
Y^T^
r-2A
r
7=1
^/
/~l
(5.3.73)
'i
_/+!
^ Z i z « + v , +AE,-v- X—7-z,7=0
;=i &,.
j
(5.3.74)
Finally, let V = ^lVjr' be the Lyapunov function for the general combined system i=i
(5.3.45), then we have I.\-2plVlr'-1+r':£'tf>+JAT,?+J-1
V= ;=i
7=0
1-ltfto /V z / z s , + v,.+4£,v-I—i' M
I Suppose that the uncertain terms AT)?'+J conditions ^L^z;
l
(5.3.75) ,/+l
&
satisfy the following bounded
A £ j S 4 £ w S i £ •;M
(5.3.76)
7=0
that is, the uncertain terms only need to satisfy Lipschitian-like constraints. Consider the following SMC laws
Robust and Adaptive
v, =-Pifp
Control of Nonholonomic
Mechanical Systems
r <~ldcor' + E — ^ z , ' + 1 -SiSign(tf),i
-Kzlzsi
...
181
(5.3.77)
= l,...,m
then, from (5.3.75), (5.3.76) and (5.3.77) we obtain m { V<-Z\-2PiV /=ll
+
m
r.
ZAEm z'=l
r,
T
r
m
'?
> + Ltzl
ji
r
z-Si
X
da>j
r
I V 7=1 I
m (
< I {- 2{Pi - L, )Vp - (Si - AEiM mSmax W i=\
2 m
'
J=I
'
'
'
I
f
< S { - 2 ( A -I,)K;< - ( * ,
- ^ m * ^ ^
(=1 2
+ TP m ax(^M + 2 ^ M W >=i
^ - 1 2P; - 2 1 , - ^ t f - / ^
^
+
1
T ^
M
W
+
N >
+ L^V (5.3.78)
-U^-AEiMmSmax)\^ where >«y
_/+i
^ # M ; Pmax = wax A ; Smax = wax J, (5.3.79)
Consequently, if the switching/>,,£, gains are selected such that
(
2Pl-2L,-AElMK-p„ 8i-AEum8max
m
AElM + ZAEM |>0 I j=\ J J ,i = \,...,m
(5.3.80)
>0
then, from (5.3.78) we know that the sliding motions can be achieved. The SMC laws (5.3.52) and (5.3.77) have their own advantages. The SMC law (5.3.52) can be obtained directly from the bounds of the uncertainties, but may need high-gain switching due to the constraint (5.3.54). The SMC law (5.3.77), designed by the backstepping approach, may only need small-gain switching and thus reduces the chattering of SMC systems. Besides, unlike the conventional backstepping design, we only need the uncertainties to satisfy Lipschitz-like conditions. With a similar backstepping design approach, we can also develop adaptive controllers for nonholonomic mechanical systems, owing to the linear property (5.2.9).
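To illustrate the backstepping recursion underlying (5.3.55)-(5.3.66), here is a hedged two-state example: \phi^1 = z^1, virtual control \omega^2 = -\rho z^1, \phi^2 = z^2 - \omega^2, and a final control that renders \dot{V} = -\rho(\phi^{1\,2} + \phi^{2\,2}) for V = (\phi^{1\,2} + \phi^{2\,2})/2. The gain \rho and the simple chain of integrators are assumptions for the demo, not the chapter's full uncertain system.

```python
import numpy as np

# Two-state backstepping sketch for z1_dot = z2, z2_dot = u.
rho = 1.5

def backstepping_control(z1, z2):
    phi1 = z1
    omega2 = -rho * z1            # virtual control stabilizing the z1-subsystem
    phi2 = z2 - omega2
    dz1 = z2                      # z1_dot
    # u = -phi1 - rho*phi2 + d(omega2)/dz1 * z1_dot, so phi2_dot = -phi1 - rho*phi2
    return -phi1 - rho * phi2 + (-rho) * dz1

def simulate(z0=(1.0, -0.8), t_end=8.0, dt=1e-3):
    z1, z2 = z0
    for _ in range(int(t_end / dt)):
        u = backstepping_control(z1, z2)
        z1, z2 = z1 + dt * z2, z2 + dt * u
    return z1, z2

if __name__ == "__main__":
    print(simulate())             # both states decay toward zero
```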
5.4
Applications to the nonholonomic mobile robots
To show the efficiency of the above control strategies, we will consider the robust and adaptive control problems of nonholonomic mobile robot systems (5.2.12) and (5.2.15). 5.4.1 Design of dynamic sliding mode controller for the
output
tracking of nonholonomic mobile robot Consider the dynamic model (5.2.15) and the output (5.2.17), we introduce the integral compensator v = w, then we have Y = DX (0)v Y = D2(0)v2v + Dx(0)w
(5.4.1)
Y = (D2 (0)w2 - Dx (0)v22 )v + 1D2 (0)v2w + DX (0)H~l (u-Kvwhere
DJ9) = Let
cos 6 —hsmO^ smO
h cos 6 ,
D2(0) =
f-sin# cos 0
-hcos0^\ -hsin0
Vw)
Robust and Adaptive Control of Nonholonomic Mechanical Systems ...
P = (D2 (d)w2 - D1 (6>)v2 2 )v + 2D2 (9)v2 w
183
(5.4.2)
z4=0
(5.4.3)
u = HDX (6>)_1 (v - P) + Kv + Vw
(5.4.4)
Y = P + DX {6)H-X (u-Kv-
(5.4.5)
ZX=Y,
Z2=Y,
Z3=Y,
then, we have Vw)
It is to show that the transformation Z = T(q,v,w) defined by (5.4.3) is a diffeomorphism, and the original system is transformed into the following form 7=7 7 =7
(5.4.6)
Z 3 =v 39 i 4 = — S(q)v dq
The zero dynamics of (5.4.6) is 80
z4=^%)D,-'WZ2=0 dq
(5.4.7)
and is thus stable. ( V
Letr,=
^
" " be the desired output, and a = Zx - Yd be the tracking error \yid)
vector, then
a = Z,-Yd,
& = Z2-Yd,a
= Z,-Yd
(5.4.8)
We consider the SMC design for the following three cases Case 1: SMC with Sliding order (1,1) In this case, we select the switching functions = a + m2& + mxa , and the SMC law as follows •
v = Yd
-m2a-/MjCT-ksign{Z)-SZ,(k,5>0)
(5.4.9)
where E = S. • Case 2: SMC with Sliding order (2,2) In this case, we select the switching function S = & + mxa, and the SMC law as follows
v = Yd-mlcr-{i1(a + ml&)-ksign(Z)-8Z,(k,8>0)
(5.4.10)
where Z = S + /xxS. •
Case 3: SMC with Sliding order (3,3) In this case, we select the switching function S = a, and the SMC law as follows (5.4.11)
v = Yd - n2a - HXG - ksign(Z) - SZ, (k,8 > 0 ) where 27 = S + n2S+pxS . All the above SMC Laws satisfy the following reaching law
(5.4.12)
Z = -ksign(Z) - SZ
In the presence of parameter uncertainties, we can also select k and \delta sufficiently large to guarantee the existence of the desired motions; such parameter estimations are somewhat involved and omitted here. Numerical simulations are performed for each of the above controllers, with the desired output Y_d(t) = (y_{1d}, y_{2d})^T = (3\sin t, -5\cos t)^T and the following nominal parameters:
J3 = 7l,
h0 =2.03mH,r = 5Alfi
21-error; z2-errof
z1 error
if.
z2 error
i \
"' " desired /
\ \
actual
/ /
v
^ ^_
S
-6 -4
Fig.5.2
Tracking error curves of SMC with sliding order (3,3)
Robust and Adaptive
Control of Nonholonomic
Mechanical Systems ...
angular and linear velocity
actua control u 80
Av1 / \\ •'
/ . /A v / / /
\ ' V
A •' \\ /
\ / V
A2 A
/ A
I-
A / \i /
A,
V,
Fig.5.3
185
\ /
\ / V
A ;A \
60
/ /,
40
-d V
A\
s^s
/"^
A\
.A
A\
/
\\ //
%J\ \
1
\j
\\ 1! \
A'/ \ !
\j
u2
A
20
vA ' A
/
Control curves with sliding order (3,3) dynamical sliding surface and control
dynamical sliding surface and control 70 10 0
\
-
S1 w1
• •^--•^
-10 -20
•
-30
Fig.5.4
Switching functions and the integrator curves
Due to limited space, we only give the simulation results for the sliding order (3,3) SMC, which show that the proposed SMC has an evident effect on the chattering reduction and ideal dynamic response.
5.4.2 Design of adaptive sliding mode controller for the output tracking of nonholonomic mobile robot Consider the two-wheel driven mobile robot in Fig.5.1. The reduced dynamic model is given by (5.2.15). Unlike the kinematics, the dynamics of the mobile robot have some uncertain parameters such as the mass of the mobile robot, the moment of inertia and diameter of the driven wheels. To guarantee the position tracking, a robust controller should be designed to make the practical velocity output perfectly satisfy the required velocity input vr. We now develop a robust adaptive controller to overcome the influences of the above kinds of uncertainties. Assume that the parameters of driving motor are well known precisely and m,I0,R are unknown but with bounded variations, let£=($!,£2,£3)Tbe the
186
Y. M. Hu & W. Huo
parameter vector and 0(q, q, v, v) = {0tj )GR2X3
be the linear parameterized matrix
defined, respectively, as follows Rm j3-kT
_^0 ^2=
P-k,
~,Cs=
K-P
(5.4.13)
R
h, <£..= — v , + — v,, 0„= ——v? + 11
2
1
2
1'
12
2 D
2
v,
2 D
2
(5.4.14)
0n=vx+Dv2,02i=vx-Dv2 21
2
i
22
2
2D
2D
then, we can rewrite the left side of (5.2.15) in the form H(q,v,v)v + V(q,v)v + K(q)v =
0(v,v,v)£
(5.4.15)
Let the output be given by (5.2.17) and the desired out be Yd, select the switching function as follows (5.4.16)
S = v + Xv = v —v. w h e r e v = v - v d , vr=vd-X-v v^=-
, and
hcos0
hsind'
-sin#
cos#,
(Yd-ke),(k>0,e
= Y-Yd)
(5.4.17)
is the control to achieve the asymptotic tracking of the given output. Define the Lyapunov function
v = -(sTHTHS+cTr-lc)
(5.4.18)
where r is positive definite, g = g - £ is the estimation error, then from (5.2.15), (5.4.15) and (5.4.16) we obtain V=STHTHS+£Tr-l£ =STHT (HV -
Hvr )+irr~1c
=STHT (u-Vv-Kv-
Hvr)+iTr~lC
Robust and Adaptive
Control of Nonholonomic
Mechanical Systems ...
=STHT (u -Y(v,v,vr ]C)+iTr~1C
187
(5.4.19)
Let the control and parameter adaptive laws be respectively given by u = Y{v,v,vr )C-KDS-
Kssign(HS)
£ = ~rT0THS
(5.4.20)
then V=
(5.4.21)
-S'H'KpS-S'H'KsSigniHS)
Hence, if we select K D and K s properly (e.g., KD= H,KS
= SI(S > 0) , then
from (5.4.21) and the Lasalle's invariance principle we know that S - » 0 and <^—>£"as t—>x>. v=v-v d —»0is then true by (5.4.16). Fig.5.5 gives the control and compensator time curves. Both of them have no serious chattering as in the conventional SMC systems. The desired trajectory is Y
d(0 = (yw,y2d)T
=(3sin?,5-5cosO r
and the parameters are the same as those in last subsection. The gains in (5.4.17) and (5.4.20) are given as follows A: = 2 0 , r = diag[0.l,03,0.2],KD=lOH,
'2
K_S = 2I_2.

Fig.5.5 Velocity tracking error and voltage control input curves
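For completeness, the structure of the adaptive law (5.4.16)-(5.4.20) — regressor times parameter estimate plus sliding feedback, with the estimate driven by the sliding variable — is illustrated below on a scalar linearly parameterized plant. The plant, regressor and gain values are assumptions chosen for the demo; this is not the chapter's robot implementation.

```python
import numpy as np

# Scalar illustration of the adaptive sliding-mode structure:
# u = Y @ xi_hat - KD*S - KS*sign(S),  xi_hat_dot = -Gamma @ Y^T * S,
# for a plant h*v_dot + c*v = u with unknown (h, c).
h_true, c_true = 2.0, 0.7          # unknown plant parameters
KD, KS = 4.0, 0.5                  # feedback and switching gains
Gamma = np.diag([0.5, 0.5])        # adaptation gains

def simulate(t_end=20.0, dt=1e-4):
    v, xi_hat = 0.0, np.zeros(2)
    for i in range(int(t_end / dt)):
        t = i * dt
        vd, vd_dot = np.sin(t), np.cos(t)        # desired velocity profile
        S = v - vd                               # sliding variable
        Y = np.array([vd_dot, v])                # regressor: h*vd_dot + c*v = Y @ [h, c]
        u = Y @ xi_hat - KD * S - KS * np.sign(S)
        xi_hat = xi_hat - dt * (Gamma @ Y) * S   # adaptation law
        v = v + dt * (u - c_true * v) / h_true   # plant
    return S, xi_hat

if __name__ == "__main__":
    print(simulate())                            # tracking error near zero
```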
5.4.3 On-line implementations In order to investigate the availability of dynamic SMC approach to the control of mobile robots, a two-wheel driven mobile robot platform is built to perform
related on-line implementations. Each wheel is driven by a dc motor, and a P-II 233 computer is used as the high-level processor. The related parameters are identified and given in the above two simulation examples. The desired output trajectory is an ellipse Y_d = (y_{1d}, y_{2d})^T = (a\sin\omega_1 t, b\cos\omega_2 t)^T. Fig.5.6-Fig.5.9 give the tracking error and control curves of the on-line experiment for the high-order sliding mode controller with the parameters \omega_1 = 0.1 rad/s, \omega_2 = 0.2 rad/s, a = 250 mm, and b = 300 mm. Both the numerical simulation and the on-line implementation results show that the high-order SMC has much better performance and can greatly weaken the chattering of SMC systems. Besides, we also find that, with the same electrical constants, the greater the sliding order is, the better the dynamic response will be.
Fig.5.6 Tracking error e_1

Fig.5.7 Tracking error e_2

Fig.5.8 Control to motor 1

Fig.5.9 Control to motor 2

5.5 Conclusions
To overcome the drawbacks of SMC, some researchers have recently concentrated on the development of the so-called high-order SMC approach, which not only
generalize the conventional SMC theory, but also has considerable application potentiality especially for nonlinear control plants with dynamic actuators and sensors. By combining the high-order SMC technique with other robust algorithms such as backstepping algorithm, it can deal with mismatched uncertainties, alleviate chattering phenomenon and improve the control precision. This chapter has developed the robust and adaptive control strategies for nonholonomic mechanical systems. The canonical form of the high-order SMC systems is developed by using the input/output linearization technique. The design problem is thus decomposed to the design problems of two low dimensional subsystems: the switching functions and their related derivatives determined linear subsystem, and the high-order sliding modes described nonlinear subsystem. Two kinds of high-order SMC laws are also presented to achieve desired high-order sliding mode motion. Numerical simulations and on-line experiments have been performed to show that the proposed approach has considerable potentiality to weaken the chattering phenomenon of SMC systems. It should be noted that, apart from the physical background of high-order SMC applications, the dimensionality of sliding mode dynamics is lower than that of the traditional SMC. This property is particularly useful for the analysis and design of sliding modes. Further work will be concentrated on the robust and adaptive control design in the presence of both mobile robot and driven motor uncertainties. The high-order SMC approach is also applicable for a wide class of uncertain nonlinear control systems. It is also necessary to develop more general design method of switching functions and find proper conditions under which a high-order SMC with proper sliding order can be applied. Acknowledgement The work was supported by the National 863 Plan (#980519), the NSF (#69974015), the Guangdong NSF (#990258) and the Education Department of Guangdong Province.
References [1] Astolfi, A., Discontinuous control of nonholonomic systems, Systems & Control Letters, vol.27, no.l, pp37-45, 1997. [2] Bartolini, G., Ferrara, A., and Giacomini, L., A robust control design for a class of uncertain nonlinear systems featuring a second-order sliding mode, Int.J.Control, vol.72, no.4, pp321-331, 1999.
3] Bartolini, G., Ferrara, A., and Usai, E., Chattering avoidance by second-order sliding mode control, IEEE Trans, on Automatic Control, vol.43, no.2, pp241246, 1998. 4] Bloch, A.M., Stabilizability of nonholonomic control systems, Automatica, vol.28, no.2, pp431-435, 1992. 5] Bloch, A.M., and Drakunov, S., Stabilization and tracking in the nonholonomic integrator via sliding modes, Systems & Control Letters, vol.29, no.2, pp91-99, 1996. 6] Canudas de Wit, C , Siciliano, B., and Bastin, G., Theory of Robot Control, London: Springer-verlag, 1998. 7] Chiacchiarini, H.G., Desages, A.C., Romagnoli, J.A., and Palazoglu, A., Variable structure control with a second order sliding condition: application to a steam generator, Automatica, vol.31, no.8, ppl 157-1168, 1995. 8] Colbaugh, R., Barany, E., and Glass, K., Adaptive control of nonholonomic robotic systems, J.Robot. Syst. vol.15, no.7, pp365-393, 1998. 9] D'Andrea-Novel, B., Campion, G., and Bastin, G., Control of nonholonomic wheeled mobile robots by state feedback linearization, Int. J. of Robotics Research, vol.14, pp543-559, 1995. 10] Dong,W.J.,and Huo, W., Adaptive stabilization of uncertain dynamic nonholonomic systems, Int.J.Control, vol.72, no.18, ppl689-1700, 1999. [11] Dong,W.J.,and Huo, W., Time-varying adaptive control of uncertain dynamic nonholonomic systems, Control Theory and Applications (in Chinese), vol.20, pp325-328, 1999. [12] Dong,W.J.,and Huo, W., Time-varying stabilization of uncertain dynamic nonholonomic systems, Acta Automatica Sinica (in Chinese), vol.25, pp402-405, 1999. [13] Dong,W.J., Xu, W.L., and Huo, W., Trajectory tracking control of dynamic nonholonomic systems with unknown dynamics, Int.J.Robust Nonlinear Control, vol.9, pp905-922, 1999. [14] Dong,W.J., Xu, Y.S., and Huo, W., On stabilization of uncertain dynamic nonholonomic systems, Int.J.Control, vol.73, no.4, pp349-359, 2000. [15] Eideeb, Y., and Eimaraghy, W.H., Robust adaptive control of robotic manipulator including motor dynamics, J.Robot. Syst., vol.15, no.ll, pp661-669, 1998. [16] Filippov, A.F., Differential Equations with Discontinuous Right-hand Side, Kluwer, Dordrecht, the Netherlands, 1988. [17] Fliess, M., Levine, J., and Martin, P., Flatness and defect of nonlinear systems: introductory theory and applications, Int. J. of Control, vol.61, ppl327-1361, 1995. [18] Fridman, L., and Levant, A., High-order sliding modes as a natural phenomenon in control theory, in Robust control via variable structure and Lyapunov techniques, Franco, G., and Luigi, G., (eds), London:Springer-verlag, 107-133, 1996.
[19] Guldner, J., and Utkin, V.I., Stabilization of nonholonomic mobile robots using Lyapunov functions for navigation and sliding mode control, Proc. of the 33rd IEEE Conf on Decision and Control, pp2967-2972, 1994. [20] Hu, Y.M., Lee, C.K., and Xu, J.M., Robust output tracking of nonlinear constrained systems, Control Theory and Applications, vol.13, Suppl.l, pp27-31, 1996. [21] Hu, Y.M., Zhou, Q.J., and Pei., H.L., Theory and applications of nonholonomic control systems, Control Theory and Applications, vol.13, no.l, ppl-10, 1996. [22] Hu, Y.M., and Chao, H.M., High-order sliding mode control of nonlinear control systems with application to mobile robots, Proc. of the VSS'2000, Australia, ppl25-134, Dec.2000. [23] Hu, Z., Research on Robust Control of Nonholonomic Mobile Robots, Ph.D Thesis, South China University of Technology, P.R.China, 2000. [24] Isidori, A., Nonlinear Control Systems, 3rd edition, London:Springer-Verlag, 1995. [25] Jiuig, Y.A., Hesketh, T., and Clements, D.J., High order sliding mode control of uncertain linear systems, Proc. of the 14th IF AC World Congress, Beijing, PRC, vol.G, pp437-442, 1999. [26] Jiang, Z., and Nijmeijer, H., Tracking control of mobile robots: a case study in backstepping, Automatica, vol.33, ppl393-1399, 1997. [27] Kolmanovsky, I., and McClamroch, N.H., Developments in nonholonomic control problems, IEEE Control Systems, vol.15, pp20-36, 1995. [28] Krishnan, H., and McClamroch, N.H., Tracking in nonlinear differentialalgebraic control systems with applications to constrained robot systems, Automatica, vol.30, no.12, ppl885-1897, 1994. [29] Laumond, J.P., (Ed.), Robot Motion Planning and Control, London: Springerverlag, 1998. [30] Levant, A., Sliding order and sliding accuracy in sliding mode control, Int. J. Control, vol.58, no.6, ppl247-1263, 1993. [31] Li, Q.X., Hu, Y.M., Pei, H.L., and Zhou, Q.J., Robust output tracking of mobile robots, Control Theory and Applications, vol.15, no.4, pp515-524,1998. [32] Li, Z., and Canny, J.F., (Ed.), Nonholonomic Motion Planning, Nowell, MA:Kluwer, 1993. [33] Pomet, J.B., Explicit design of time-varying stabilizing control laws for a class of controllable systems without drift, Systems & Control Letters, vol.18, ppl47158, 1992. [34] Retchkiman, Z., and Khorrami, F., The time optimal control problem for a class of nonholonomic systems: a case study, Preprints of 13th Triennial World Congress, San Francisco, USA, pp449-453, 1996. [35] R'os-Bolivar, M., Zinober, A.S.I., and Sira-ramirez, H., Dynamical sliding mode control via adaptive input-output linearization: a backstepping approach, in Robust control via variable structure and Lyapunov techniques, Franco, G., and Luigi, G., (eds), London:Springer-verlag, ppl5-35, 1996.
192
Y. M. Hu & W. Huo
[37] Sastry, S., Nonlinear Systems Analysis, Stability and Control. New York: Springer-verlag, 1999. [38] Sira-Ramirez, H., On the dynamical sliding mode control of nonlinear systems. Int. J. of Control, vol.57, no.5, ppl039-1061, 1993. [39] Steven, C.C., and Lin, C.L., A general class of sliding surface for sliding mode control, IEEE Trans, on Automatic Control, vol. 43, no.l, ppl 15-119, 1998. [40] Su, C.Y., and Stepanenko, Y., Robust motion/force control of mechanical systems with classical nonholonomic constraints, IEEE Trans, on Automatic Control, vol.39, pp609-614, 1994. [41] Su, C.Y., and Stepanenko, Y., On the robust control of robot manipulators including actuator dynamics, J. Robot. Syst., vol.13, no.l, ppl-10, 1996. [42] Tarn, T.J., Bejczy, A.K., Yun, X.P., and Li, Z.F., Effect of motor dynamics on nonlinear feedback robot arm control, IEEE Trans, on Robotics and Automation, vol.7, no.l, ppl 14-121, 1991. [43] Utkin, V.I., Sliding Modes in Control and Optimizations, New York:Springerveilag, 1992. [44] Wang, C.L., The Robust Stabilization of Nonholonomic Dynamical Systems, Ph.D Thesis, Beijing University of Aeronautics and Astronautics, P.R.China, 1999. [45] Wang, W.H, Soh, C.B, and Chai, T.Y., Robust stabilization of MIMO nonlinear time-varying mismatched uncertain systems, Automatica, Vol.33, nol2, pp2197-2202, 1997, [46] Xu, S.J, Rachid, A., Sliding mode controller design for linear systems with mismatched uncertainty, Proc.ofthe 14lh IFAC World Congress, Beijing, PRC, Vol.G,pp431-436, 1999. [47] Yang, J.M., and Kim, J.H., Sliding mode motion control of nonholonomic mobile robots, IEEE Control Systems Magazine, vol.19, ppl5-23, 1999. [48] Yang, J.M., and Kim, J.K., Sliding mode control for trajectory tracking of nonholonomic wheeled mobile robots, IEEE Trans, on Robotics and Automation, vol.15, no.3, pp578-587, 1999. [49] Young, K.D., Utkin, V.I., and Ozguner, U., A control engineer's guide to sliding mode, IEEE Trans, on Control Systems Technology, vol.7, no.3, pp328342, 1999. [50] Young, K.D., and Ozguner, U., Sliding mode: control engineering in practice, Proc.ofthe 1999 ACC, San Diego, CA, USA, ppl50-162, 1999.
Chapter 6
Introduction to Chaos Control and Anti-Control Guanrong Chen'
Jin-Qing Fang®,
Yiguang Hong®
Hua-Shu Qin®
^Department of Electrical amd computer
Engineering
University of Houston, Houston, TX 77204, USA ^Department of Electrical
Engineering
City University of Hong Kong, Hong Kong SAR, P.R. China ®China Institute Of Atomic P.O.BOX275-27,
Beijing 102413,
Energy P.R.China
^Institute of Systems Science, Academia Sinica, Beijing 100080,
P.R.China
6.1 Overview of chaos, chaos control and anticontrol Chaos control and anticontrol technologies promise to have a major impact on many novel, time- and energy-critical applications, such as high-performance circuits and devices (e.g., delta-sigma modulators and power converters), liquid mixing, chemical reactions, biological systems (e.g., in the human brain, heart, and perceptual processes), crisis management (e.g., in power electronics), secure information processing, and critical decision-making in political, economical, as well as military events. This new and challenging research and development area has become a scientific interdiscipline, involving systems and control engineers, theoretical and experimental physicists, applied mathematicians, physiologists, and above all, circuits and devices specialists. Both control and anticontrol of chaos can be analyzed using chaos and bifurcation theories, and can be implemented by suitable design of controllers, device, and circuitry.
193
194
G. Chen et al.
6.1.1 A brief description of chaos, chaos control and anticontrol Chaos refers to one type of complex dynamical behaviors that possess some very special features such as being extremely sensitive to tiny variations of initial conditions, having bounded trajectories in the phase space but with a positive leading Lyapunov exponent, possessing a finite Kolmogorov-Sinai entropy, a continuous power spectrum, and/or a fractional topological dimension, etc. Very often, chaos coexists with some other complex dynamical phenomena like bifurcations, fractals, and strange attractors. A typical chaotic attractor generated by a power system model is shown in Figure 6.1 Due to its intrinsic dynamical complexity, chaos was once believed to be neither controllable nor predictable, therefore useless. However, recent research advances have demonstrated that chaos not only is (long-term) controllable and (short-term) predictable, but also can be beneficial to many real-world applications. In fact, control and anticontrol of chaos have become a rallying point for an important segment overlapping engineering, physics, mathematics, and biomedical science. Chaos control refers to the situation where chaotic dynamics is weakened or eliminated by appropriate controls; while anticontrol of chaos means that chaos is created, maintained, or enhanced when it is healthy and useful. Anticontrol of chaos is also called chaotification, meaning to chaotify an originally non-chaotic system (i.e., to make it chaotic). Both control and anticontrol of chaos can be accomplished via some conventional and nonconventional methods such as microscopic parameter perturbation, bifurcation monitoring, .entropy reduction, state pinning, phase delay, and various feedback and adaptive controls [1].
6.1.2 Some basic rationale for utilizing chaos and complex nonlinear dynamics There are many practical reasons for controlling or ordering chaos. First of all, chaotic (messy, irregular, or disordered) system response with little meaningful information content is unlikely to be useful as chaos can lead systems to harmful or even catastrophic situations. In these troublesome cases, chaos should be reduced as much as possible, or totally suppressed. For instance, stabilizing chaos can avoid fatal voltage collapse in power networks and deadly heart arrhythmias, can guide disordered circuit arrays (e.g., multi-coupled oscillators and cellular neural networks) to reach a certain level of desirable pattern formation, can regulate dynamical responses of mechanical and electronic devices (e.g., diodes, laser machines, and machine tools), can help well-organize an otherwise mismanaged multi-agency corporation to reach a stable equilibrium state whereby achieving optimal agent performance, etc. Ironically, recent research has shown that chaos can actually be useful under certain circumstances, and there is growing interest in utilizing the very nature of
Introduction
to Chaos Control and Anti-Control
195
chaos, particularly in some novel time- and/or energy-critical applications. The most motivative reason is the observation that chaos permits a system to explore its every dynamical possibility: when chaos is under control, it provides the designer with an exciting variety of properties, richness of flexibility, and a cornucopia of opportunities. Figure 6.2 visualizes how by varying a constant feedback control gain within a simple quadratic map, period-doubling bifurcation and chaos can be created and then be stabilized to a variety of equilibria of different periods. Traditional engineering design always tries to reduce irregular dynamical behaviors of a system and, therefore, completely eliminates chaos. However, such overdesign is usually accomplished at the price of losing great flexibilities in achieving high performance near the stability boundaries, or at the expense of radically modifying the original system dynamics. On many occasions, this proves to be unnecessary. It has been shown that the sensitivity of chaotic systems to small perturbations can be used to direct system trajectories to a desired target quickly with very low and ideally minimum control energy. This can be crucial for navigation in the multiplanetary space system. A suitable modification of chaotic dynamics such as stability conversion or bifurcation delay not only can significantly extend the operational range of machine tools and jet engines, but also may enhance the artificial intelligence of neural networks, as well as increase coding/decoding efficiency in signal and image communications. Other application examples of chaos control and anticontrol technologies include designing high-performance circuits and devices (e.g., delta-sigma modulators, automatic gain control loops, and power converters), achieving chaos synchronization for information processing, pattern recognition, and secure communications, forming various wave patterns and self-organized behaviors in oscillator arrays and neural networks, delaying bifurcations in electric power systems and energy convection loops, and performing crisis management and critical decision-making in political, economical, as well as military events. Fluid mixing is another good example in which chaos is not only useful but actually very desirable, where two fluids are to be thoroughly mixed while the required energy is minimized. For this purpose, it turns out to be much easier if the dynamics of the particle motion of the two fluids are strongly chaotic, since it is otherwise difficult to obtain rigorous mixing properties due to the possibility of invariant two-tori in the flow. This has been one of the main subjects in fluid mixing, known as chaotic advection. Chaotic mixing is also momentous in applications involving heating, such as in plasma heating for a nuclear fusion reactor. In this process, heat waves are injected into the reactor, for which the best result is obtained when the heat convection inside the reactor is chaotic. Within the context of biological systems, the controlled biological chaos appears to be important with the way a human brain executes its tasks. There has been some suggestions that human brain can process massive information instantly, in which the ability of human beings in controlling the brain chaos could be a fundamental reason. The idea of anticontrol of chaos has been proposed for solving the problem of driving responses of a human brain model away
196
G. Chen et al.
from the saddle-type of equilibrium, so that undesirable periodic behaviors of neuronal population bursting can be prevented. Also, some recent laboratory studies reveal that the complex variability of healthy dynamics in a variety of physiologic systems has features reminiscent of chaos. For example, in the human heart, the amount of intracellular Ca2+ is closely regulated by a coupled process in a way similar to a system of coupled oscillators. Medical evidence reveals that controlling the chaotic arrhythmia in an appropriate way can be a new, safe, and auspicious approach to the design of a smart pacemaker for regulating heartbeats. Motivated by many of such potential real-world applications, current research on control and anticontrol of chaos has become intensive. In the theoretical aspect, chaos control and anticontrol are posing a new challenge to both system analysts and control engineers. This is due to the extreme complexity and sensitivity of chaotic dynamics, which can cause many unusual difficulties in long-term predictability and short-term controllability of chaos. A controlled chaotic system is inherently nonautonomous, which cannot be converted to an autonomous system in most cases since the controller as a time function is yet to be designed. Possible time-delay, noise, and coupling effects often make a controlled chaotic system Lyapunovirregular and topologically extremely complex. As a result, many existing theories and methodologies for autonomous systems are no longer applicable. On the other hand, at me technical level, chaos control and anticontrol have also posed new challenges "o circuit designers and instrument specialists. A successful circuit implementation in a chaotic environment is generally difficult, due to the extreme sensitivity of chaos to parameter variations and noise perturbations, and the nonrobustness of chaos to the structural stability, within the physical devices. Notwithstanding many technical obstacles, both theoretical and technical developments in this area have gained remarkable progress in the last few years. For instance, some unified control methods have been developed under the nonautonomous Lyapunov stabilization theory; some rigorous anticontrol techniques, even for spatiotemporal systems, have been initiated; some novel chaos based encryption approaches have been advanced; and some chips of chaotic circuits have been made toward commercialization [1,2].
6.1.3 A brief summary In conclusion, the emerging field of chaos control and anticontrol is very stimulating and full of promise; it is expected to have far-reaching impacts with enormous opportunities in industrial and commercial applications. New theories for dynamics analysis, new methodologies for control, and new circuitry design for implementation altogether are calling for new efforts and endeavors from the communities of nonlinear dynamics, controls, and circuits and systems [2].
Introduction
to Chaos Control and Anti-Control
197
6.2 Challenges in chaos control
6.2.1 Chaos control: what does it mean? What does "chaos control" mean? Similar to conventional systems control, the concept has first come to mean stabilizing unstable periodic orbits (including unstable equilibria) of a dynamical system. However, it is now commonly agreed that it can also be understood as a process or mechanism that enhances existing chaos or creates new chaos in a dynamical system when it is beneficial, and suppresses it when it is harmful. Engineering and scientific methods are used to manipulate the transition between chaos and order and, sometimes, between chaos and chaos. When the control of chaos involves eliminating or weakening chaos, it is usually a process of stabilizing unstable periodic orbits or reducing the leading positive Lyapunov exponent of the dynamics of the chaotic system. However, the scope of chaos control also encompasses such control tasks as self-similarity, pattern and symmetry forming or breaking, birth or delay of bifurcations, birth or shapes of limit cycles, etc. Moreover, it includes the notion of generating chaos, in a sense creating instability into a perhaps originally stable system, depending on the need of the application at hand. Apparently, the notion of chaos control is neither exclusive of, nor conflictive with the conventional control systems theory and practice. Instead, it aims at better managing the dynamics of a nonlinear system on a wider scale, with the hope that more benefits may thus be derived. In addition, chaos control seeks to develop new theories and methods that are particularly useful for chaotic systems in various disciplines.
6.2.2 Challenges in chaos control To appreciate the challenge of chaos control, consider two potentially chaotic systems—the logistic map and a model of an electric power system. The Logistic Map A simple, yet typical, example of discrete-time chaotic systems is the onedimensional logistic map x
k+l=f(xk>Py-=Pxk(l-x0
(6-2-1)
where p>0 is a real variable parameter. Since each state of the system is a function of/?, i.e., xfc =x]i(p), a change ofp will significantly alter the dynamical behavior of the system. By solving the algebraic equation x = f(x,p), one can find the two equilibria of the system, 0 and x* ={p-\)lp. Further examination of the system
198
G. Chen et al.
Jacobian, J = dfldx = p-2px, reveals that the stabilities of these equilibria indeed depend on p. It is well known that the logistic map has period-doubling bifurcation leading to chaos [16]. At this point, it is illuminating to pose the following questions: Is it possible (and, if so, how) to find a simple (say, linear) control sequence, {u^} to be added to the logistic system: xk+i =f(xk,p)
= pxk(l-xk)
+ uk
(6.2.2)
such that, to mention just a few, (i) the limiting chaotic behavior of the period-doubling bifurcation process is suppressed? (ii) the first bifurcation is delayed to take place, or some bifurcations are changed either in form or in stability? (iii) the asymptotic behavior of the system becomes chaotic (the anticontrol problem), when the parameter/? is currently not in the bifurcating range? An Electric Power Model As a real-world engineering example, consider the simple power system shown in Figure 6.3. This model is described by the continuous-time dynamical model 9 = a> d) = \6.6661sin{6L -<9 + 0.0873)F£ -0.1667^ + 1.8807 0 = 496.8718F^ -166.667cos(0 L - 0 - 0 . 0 8 7 3 ) ^ -666.6667cos(0 L - 0 . 2 0 9 4 ) ^ -93.3333F^ + 33.3333/7 + 43.333 VL =-78.7638F^ + 26.2172cos(<9/,
-9-0M24)VL
+ 104.8689cos(6>/, -0.1346)F£ + 14.5229F/, -5.2288/J-7.0327 where 9 is the rotational angle of the power generator, with angular velocity a . In this power system, the load is represented by an induction motor, M, in parallel with a constant PQ (active-reactive) load. The variable reactive power demand, p, at the load bus is used as the primary system parameter. Also in the power system, the load voltage is VLZ.6L with magnitude ^L and angle ZG^, the slack bus has terminal voltage EZQ (a phasor), and the generator has terminal voltage denotedE m Z9. Typical chaotic behavior of this model is shown in Figure 6.1 Similar to the logistic map discussed above, a few interesting control problems are: (i) can the limiting chaotic behavior of the period-doubling bifurcation process
Introduction
to Chaos Control and Anti- Control
199
be suppressed? (ii) can the first bifurcation be delayed in occurrence, or some bifurcations be changed either in form or in stability? (iii) can the voltage collapse be avoided or delayed through bifurcation or chaos control? Many nonconventional control problems such as those prompted above have posed a real challenge to both nonlinear dynamics analysts and control engineers.
6.2.3 Some distinct features of chaos control To further understand the very nature of chaos control and anticontrol, it should be helpful to highlight some distinctive features of chaos control theory and methodology, in contrast to other conventional approaches with regard to such issues as objectives, perspectives, problem formulations, and performance measures. 1. The target trajectories in chaos control can be unstable equilibria or unstable periodic orbits (limit cycles), perhaps of high periods. The controller is designed to stabilize these unstable trajectories or to drive them to switch from one orbit to another. This inter-orbit switching can be either chaotic -» regular, chaotic -> chaotic, regular —> chaotic, or regular —> regular, depending on the application of interest. Conventional control, on the other hand, does not normally investigate such interorbit switching problems of a dynamical system, especially, not those problems that involve guiding the system trajectory to an unstable or chaotic state by any means. 2. A chaotic system typically has a dense set of unstable orbits embedded within it, and is extremely sensitive to tiny perturbations in its initial conditions. Such a special feature, useful for chaos control, is not available in linear or non-chaotic nonlinear systems. 3. In most conventional control schemes, one usually works within a state space framework. In chaos control, one deals with parameter space and phase space as well, where Poincare maps, delay-coordinates embedding, parametric variation, entropy reduction, bifurcation monitoring, etc., are some typical, but nonconventional tools for study. 4. In classical control, a target for tracking is usually a constant vector in the state space, which is generally not a state of the given system, and the terminal time for the control is usually finite (e.g., the elementary concept of "controllability" is typically defined using a finite and fixed terminal time, at least for linear systems and affine-nonlinear systems). However, in chaos control, a target for tracking is not limited to constant vectors in the state space: it often is an unstable periodic orbit of the given system. This generally requires only tiny control to achieve, but technically can be quite difficult due to the instability of the target. Moreover, in chaos control the terminal time is infinite to be meaningful and practical, because most nonlinear dynamical behaviors such as equilibrium states, limit cycles, attractors, and chaos are asymptotic properties.
200
G. Chen et al.
5. Depending on different situations or purposes, performance measures in chaos control tasks can be different from those of the conventional control. Chaos control uses criteria such as Lyapunov exponents, Kolmogorov-Sinai entropy, power spectra, ergodicity, and bifurcation changes, in addition to what the conventional control usually emphasizes: robustness of the system stability or control performance, optimality of control energy or time, ability in disturbance rejection, etc. 6. Chaos control includes a unique task - anticontrol, required in some unusual applications such as those in fluid mixing, secure communication, and biomedical engineering activities described in Section 6.1. Anticontrol tries to retain or even enhance chaos to improve performance. Bifurcation control is another example of chaos control, where a bifurcation point is expected to be delayed in case it cannot be avoided or stabilized. This delay may significantly extend the operating time or system parameter range for a time-critical control process such as chemical reaction, voltage collapse of electric power systems, and compressors stall of gas turbine jet engines, etc. These examples are in direct contrast to traditional control tasks such as the textbook problem of stabilizing an equilibrium position of a nonlinear system. Due to the inherent association of nonlinearity and chaos to various related issues, the scope of chaos control and the variety of problems that chaos control deals with is much more diverse than usually imagined. These include creation and management of self-similarity, symmetry, pattern formation, sizes of attractor basins, and birth, amplitudes, and change of bifurcations and limit cycles, etc., in addition to the classical target tracking and system regulation. It is also important to point out an additional distinctive characteristic of a controlled chaotic system that differs from an uncontrolled chaotic system. This characteristic emerges from the situation where a control input is involved in a dynamical system. A controlled system is usually nonautonomous and cannot be reformulated as an autonomous system by defining the control input as a new state variable, since the control is physically not a system state variable and, moreover, it has to be determined via design for the control purpose. Therefore, a controlled chaotic system is intrinsically much more difficult to design than it appears. This observation raises the question of extending many existing theories and techniques from autonomous system dynamics to general controlled dynamical systems, including such complex phenomena as degenerate bifurcations and hyperchaos in the system dynamics when a control is involved. Unless suppressing complex dynamics in a process is the only purpose for an intended control problem, understanding and utilizing the rich dynamics such as bifurcations and chaos in nonlinear control systems are very stimulating in both theory and applications. Chaos control is a new and challenging subject for study for both nonlinear dynamics analysts and systems control engineers.
Introduction
to Chaos Control and Anti-Control
201
6.3 Representative chaos control methods While talking about chaos control, it is worth mentioning at the beginning of introducing some typical methodologies that, strictly speaking, controlling chaos is not any harder than controlling a general nonlinear system—if chaos suppression is the only concern and if a "brute force" strategy is allowed. Many effective control techniques are indeed available today [1]. Several not-so-typical methods from the conventional control point of view are briefly introduced here, showing some special flavor of chaos in control. Recall that the dynamical behaviors of a nonlinear system, particularly bifurcations and chaos, depend on the values of the system parameter(s). The route to chaos via period-doubling bifurcations for the logistic map discussed in Section 6.2 is a typical case where, as the system parameter continuously varies, the dynamical behavior of the system trajectory shows dramatic changes. Many examples of such parameter-dependent dynamical phenomena can be found from the vast literature. Hence, it is intuitively clear that the system dynamical behavior can be altered by changing some of these parameter values, provided that the parameters are accessible for adjustment. Th 2 idea of varying certain system parameter(s) to control chaos may be traced back to John von Neumann, who pointed out in the early 1950's that small, carefully chosen, preplanned atmospheric disturbances could lead to desired, large-scale changes in the weather after some time. This suggestion was quite reasonable, since the weather dynamics is seemingly chaotic and chaos is mainly characterized by extreme sensitivity to small perturbations. Ever since then, this characteristic of chaos, known as the "butterfly effect," has been viewed or used as a possible means for changing some basic properties of chaos. In recent years, various types of parameter-dependent control methods have been applied to chaotic systems. In the effort of controlling chaos by varying some accessible system parameters), one usually looks for a strategy that requires a small control effort to achieve great effect on the ultimate behavior of the controlled chaos. The objective of a parameter-dependent control can be to suppress chaos completely or to utilize the sensitivity of a chaotic system to rapidly direct its trajectory to a desired state. This target state is typically an unusual periodic state such as an unstable equilibrium or limit cycle. In other cases, parameter-dependent controls are used to switch the trajectory between different chaotic behaviors, as circumstances change during the control process, to achieve some specified performance for the system at different times.
6.3.1 The OGYparametric-variation control method This section further develops the parameter-dependent approach in a unified fashion under a somewhat general framework. The approach taken for controlling a nonlinear dyr amical system is to stabilize one of its unstable periodic orbits embedded in
202
G. Chen et al.
an existing chaotic attractor via small time-dependent variations of a variable system parameter [16]. The idea comes from the observation that a chaotic attractor typically has embedded within it a dense set of unstable periodic orbits. Heuristic reasoning easily leads to the suggestion of controlling any of these orbits to a (usually unstable) periodic one, by continuously applying small forces that are designed for the purpose. 6.3.1.1 Using special features of chaos for control Before giving a formal presentation, a simple description of the basic idea and outline of the control procedure using a graphical language is desirable. In this approach, while trying to bring the system orbit to a target, one first locally linearizes the system and then uses the controller to slightly push the system orbit toward the target. At this step, the control method is classical (which may be interpreted as a pole placement approach), but the pushing has to be so small that it does not destroy the structural stability of chaos thereby preserving the chaotic property of the system (pole placement generally is a "brute force" approach which would be too strong for this purpose, and so would consume too much a control energy). Then, the controller stops doing anything but wait, letting the orbit free to travel until it moves back to nearby the target (by the ergodicity of chaos, it will). This step is not typical in classical control, and cannot be applied to non-chaotic systems. Note that the system orbit will move closer to the target using the system energy but not the control energy in this phase of the process. Also at this step, the system chaotic parameters have been changed. To this end, another local linearization of the system is carried out but at the new parameter values, and then a new and small control push applies, so the next cycle of the process begins. Each time, the system orbit moves closer to the target using part of the classical feedback control energy and part of the given system energy, along with the ergodicity of chaos, while preserving the chaotic dynamics until the orbit eventually arrives at the target (and chaos is thus eventually suppressed). This control method is based on a classical approach, but it differs from the classical approach in that it utilizes some very nature of chaos (such as the ergodicity and the structural stability of chaos). 6.3.1.2 The OGY control method For explanation, consider a general continuous-time autonomous system, x = f(x(t),p)
(6.3.1)
where p is a scalar system parameter accessible for external adjustment. Suppose that when p=p* the system is chaotic. Suppose that it is desired to control the system trajectory, x(t), to reach an unstable saddle type of equilibrium [8,11,18,20]. Assume that within a small neighborhood of p , say
Introduction
to Chaos Control and Anti-Control
P-Vpmax
203
(6.3.2)
in which ^pmax > 0 is the maximum allowable perturbation such that as long as p is maintained within this range, neither the chaotic attractor nor the target orbit disappears. In other words, it is assumed that within this small neighborhood ofp , there is no bifurcation point of the target periodic orbit. Let r be a periodic orbit of system 6.3.1 and 2" be a surface of cross section of the underlying Poincare map denoted by P [8,18]. For simplicity, assume that the continuous-time system (6.3.1) is three-dimensional, with coordinates (x,y,z), so that the corresponding surface of cross section is two-dimensional and is orthogonal to the third axis, that is, Z = {\cc p yY eR^ : Y = z0 (a constant)}. Now, let £ be the coordinates of the surface of cross section, which is a 3 x 1 vector satisfying
Zk+\=ptfk>Pk)
(6-3-3)
where Pk=P
+Apk
\ApA<Apmax
Since/? is adjustable at every iteration of the map, p = p ^is chosen to be a constant at each iteration. Then it is possible to determine, from the map P, many distinct unstable periodic orbits within the chaotic attractor, and select the target periodic orbit that maximizes certain desired system performance with respect to the dynamical behavior of the system. For example, sometimes it makes sense to pick a higherorder orbit as the desired one since it visits more regions of the attractor. This can be advantageous because different regions correspond to different physical states of the system. At some other times, one may want to pick as many of such regions as possible. For illustration, suppose that ?f=P{?f,p*)
(6.3.4)
has been selected as the desired, unstable, period-one saddle type of equilibrium of the map P, corresponding to the desired target trajectory of system (6.3.1). To understand the local properties of this chosen equilibrium (a degenerate periodic orbit), one needs to fit the iterations of the map to a local linear approximation of the map about the desired orbit. A linear approximation o f f near %*r and p* is given by
204
G. Chen et al.
$k+\*sy+Jk^k-?f)+wkiPk-p*)
(6.3.5)
or (6.3.6)
A
%k+i*JkAh+wkAPk
where
A£ = 4k-fy jk=dP(?f,P*)/dtk
>
A
Pk=Pk-P*
, wk=dP(4*f,p*)/dPk
.
The stable and unstable eigenvalues, As £ a n d / ^ ^ satisfying U s J <1
gikeu,k=ikes,k=° then the Jacobian J^ can be expressed as J
k -\keu,kSu,k+As,kes,kslk
(6-3.7)
The dominant theme of the OGY method is first to monitor the system until its trajectory comes near the desired orbit, i.e, until ^ falls close enough to £*f • Then, one changes the nominal value of the parameter p by a small amount, Ap . This will change the location of the orbit and its stable manifold, such that the next iteration (represented by ££+1 in the surface of cross section) is forced toward the local, stable manifold of the target 4*f • Since the nonlinearity is not contained in the linearized system, the control so designed usually is not able to bring the moving orbit
Introduction
to Chaos Control and Anti-Control
205
to the target. Thus, the controlled orbit will leave the small neighborhood of the target again, and continue to travel chaotically as if there was no control effect on it at all. Nevertheless, due to the ergodicity of chaos, or, because the chaotic attractor has a dense set of unstable periodic orbits embedded within it, sooner or later the orbit ^ will return to be close to £/• At this time, the next cycle of parametric variation and iteration can be applied. This procedure is visualized by Figure 6.4 Now, suppose that ^ has moved to being sufficiently close to £,*s , so that (6.3.6) holds. For the next iterate, < ^ + ] , to fall onto the local stable manifold of £,** , the parameter p
=p*+Ap
has to be chosen such that the next iteration is per-
pendicular to the direction of the current local unstable manifold:
To achieve this, taking the inner product of equation (6.3.6) with gu ^ and applying equation (6.3.7), one has g
ukA£k
Ap
(6 3 8)
r-^Tir
--
<,kwk
where it is assumed that g LW^ * 0 . This is the control formula for determining the variation of the adjustable system parameter/? at each step k = l,2,---. Using the control formula (6.3.8) to consequently nudge the parameter p, one can expect that the target E,\ will eventually be reached. When this happens, this unstable target orbit is said to be stabilized [10,14,17]. Note that this calculated Ap is used to perturb the parameter p only if Ap <Apmax . When the calculated Ap
is greater than Ap a s on a
should be set to zero. Also, when 4k+\ ^ "
'
oca
'
starj
, the variation
te manifold of E,*r , the
variation Ap is set to zero since the stable manifold might lead the orbit directly to the target by its stable nature. The orbit for the subsequent time instants (/.e.,^+2,^+3,-")is
tnen
supposed to approach £,*r at a geometrical rate. How-
ever, due to the errors or inaccuracies incurred in the sequence of calculations and linearizations, the subsequent iterations may tend to fall off the local stable mani-
206
G. Chen et al.
folds. In this case, to make sure that the subsequent { ^ } is approaching S,*r, a new4f, has to be calculated following each iteration. It should be emphasized that this control method is based on a classical approach, but it differs from the classical approach in that it utilizes some very nature of chaos (such as the ergodicity and the structural stability of chaos), as discussed at the beginning of this section. 6.3.1.3 Some remarks on the OGY control method It should also be noted that the above derivation is based on the assumption that the Poincare map, P always possesses a stable and an unstable direction (saddle-type of equilibria). This may not be true in general, particularly for higher-periodic orbits. Also, it is necessary that the number of accessible parameters for control be at least equal to the number of unstable eigenvalues of the periodic orbit to be stabilized. In addition, modified or alternative techniques become necessary when some key system parameters are not accessible. Moreover, the technique is successful only if the control is applied after the system trajectory moves into a small neighborhood of the target over which the control formula remains valid. In this case, one may have to wait for a long time until the trajectory returns to this region. In the case of multiple attractors within the phase space, the orbit may be attracted to another region so may never return. Nevertheless, some methods such as the so-called targeting technique [1] can often persuade the system dynamics to quickly approach the region of control. Finally, this parametric variation control method is derived based on the assumption of absence of noise. However, even small noise may lead to some rare chaotic bursts where the orbit wanders far from the stabilized orbit. In a noisy environment, the amplitude of the parameter variation must exceed a noisedependent threshold in order to obtain effective control. This issue is especially important in physical implementation of the control algorithm.
6.3.2 Feedback control of nonlinear systems Feedback control is classic in engineering applications, which has many advantages such as guaranteed and robust stability, precise target tracking, strong ability of noise rejection, systematic design, and, most important of all, no requirement for assessing oftentimes unassessable system parameters. A general approach to controlling a nonlinear dynamical system via feedback can be formulated as follows:
\m=xx,u,t), [y(t) = h(x,u,t),
(639)
Introduction
to Chaos Control and Anti-Control
207
where x(t) is the system state vector, y(t) the output vector, and u(t) the control input vector. Here, in a general discussion, once again it is assumed that all the neccesary conditions on the vector-valued functions / and h are satisfied such that the system has a unique solution in a certain bounded region of the state space for each given initial value XQ =XQQ), where the initial time tQ >0 . Given a reference signal, r(t), which can be either a constant (set-point) or a function (target trajectory), the problem is to design a controller in the statefeedback form u(t) = g(x,t)
(6.3.10)
or, sometimes, in the output-feedback form u(t) = g(y,t) where g is a nonlinear (including linear) vector-valued function, such that the controlled system \x(t) =
f(x,g(x,t),tl
y(t) = h(x,g(x,t)\ can be driven by the feedback control g(x,t) to achive the goal of traget tracking:
!™f I*0-K0|| = 0
(6.3.12)
where the terminal time, / ^ < oo, is predesired according to the application, and ||| is the Euclidean norm of a vector. Since the second equation in system 6.3.9 is merely a mapping, which can be easily handled in general, it is ignored in this discussion by simply letting h{-) = I, the identity mapping, so that;y=x. 6.3.2.1 Engineering perspective about controller design It is very important to point out that in a feedback controller's design, particularly in finding a nonlinear controller for a given nonlinear system, one must bear in mind that the controller should be (much) simpler than the given system [1]. For instance, if one would like to determine a nonlinear controller, say u^ in the discrete-time setting, for guiding the state vector x^ of a given nonlinear control system of the form
208
G. Chen et al.
to a target trajectory satisfying a predesired constraint matically it is very easy to use u =0
k k(xk)-fk(xk)
JC^ + J
= &]i(xjl), then mathe-
>
which will bring the original system state x^ to the target trajectory in just one step! As another example, to design a nonlinear controller u(t) in the continuous-time setting 10 guide the state vector x(t) of the nonlinear system x = f(x(t)) + u(t) to a target trajectory x*(t), it is mathematically correct to use u(t) = x* (t) - f(x(t),t) + K(x(t) - x* (0) where K has all its eigenvalues with negative real parts. This controller leads to e(t) = Ke(t), with e(t) = x(t) - x* (?) yielding e(t)—>0 or x(t)—>x*(t), as ?-»oo . One more example is the following: for a given nonlinear controlled system in the canonical form x\(t) = x2(t) x2(t) = x3(t) Xn{t) = f{xX(t\--;Xn(t))
+ u{t)
Suppose that one wants to find a nonlinear controller u(t) to guide the state vector x(t) = (xi(t)---xn(t)y
to a target state, x , i.e., x(t)->x
as ?-><»
It is mathematically straightforward to use the controller u(t) = - / ( * ! (0, -,x„ (0) + kc 0 ( 0 - x) with an appropriate constants row vector. Note that the resulting ^-dimensional controlled system is a completely controllable linear system, x = Ax + bu , with 0 1 0---0 0 0 1-0 A = ; : i'-.i
o o o-o
0
and 6 =
Introduction
to Chaos Control and Anti-Control
209
Therefore, a suitable constant control gain exists for the state-feedback controller, such that x(t) - > J a s t -» oo. All such examples seem to reveal a "universal controller design methodology" that works for any given system. However, a fatal problem with such "design" is that the controller is even more complicated than the given system, and hence has no practical value: it virtually remove the given system and then replace it by a desired one. It is hardly imagine that one can accept a controller for a machine described by /(such as a car) that is bigger than the machine itself (e.g., an airplane), described by something more complicated than the same J). Moreover, if this / is a simplified mathematical model for the real machine, and the controller uses the same mathematical model, then the design merely shows that the controller works for the mathematical model but there is no reason to believe that it works for the real machine a^ well. Therefore, in a feedback controller's design, it is expected to come out with a simplest possible and satisfactory controller: if a linear controller can be found to do the job, use a linear controller; otherwise, try a simple nonlinear controller with such as a piecewise linear or a quadartic nonlinearity and so on. Also, oftentimes, full state feedback information is not available in practice, so one should try to design a controller using only output feedback (i.e, partial state feedback), which means the second equation of (6.3.9) is essential. Whether or not can one find a simple, easily implementable, low-cost, and effective controller for a given nonlinear system for tracking control requires both theoretical background and design experience. 6.3.2.2 A general approach to controller design Return to the central theme of feedback control for a general nonlinear dynamical system (6.3.9)-(6.3.12). A basic idea is first outlined for a tracking control task. Let the target trajectory (or set-point) be x(t), and assume that it is differentiate, so that by denoting x=z(t)
(6.3.13)
one can subtract this equation (6.3.13) from the first equation of (6.3.11), so as to obtain e = F(e,t) where e = x-x
(6.3.14)
and F(e,t):=f(x,g(x,t),t)-z(t)
If the target trajectory 3c is a periodic orbit of the given system (6.3.9), that is, if it satisfies
210
G. Chen et al.
x=f(x,0,t)
(6.3.15)
then similarly a subtraction of (6.3.15) from the first equation of (6.3.9) gives e = F(e,t) where e = x-x
(6.3.16)
and F(e,t) := f{x, g(x,t),t) - /(5c ,0,0
In either case, e.g., in the second case which is more difficult in general, the goal of design is to determine the controller u(t) = g(x,t) such that Urn ||e(0| = 0 ?-»oo
(6.3.17)
which implies that the goal of tracking control is achieved: Urn \x(f)-x(i)\
=G
(6.3.18)
It is now clear from Eqs. (6.3.17) and (6.3.18) that if zero is a fixed point of the nonlinear system (6.3.16), then the original controllability problem has been converted to an asymptotic stability problem of this fixed point. Thus, the Lyapunov first and second methods [7,10,12,14,17] may be applied or modified to obtain rigorous mathematical techniques for the controller's design. This is further discussed in the following. 6.3.2.3 Feedback controllers design methods This subsection discusses how a linear or nonlinear controller may be designed for the control of some nonlinear dynamical systems based on the rigorous Lyapunov function arguments. Linear feedback controllers for nonlinear systems In light of the Lyapunov first method for nonlinear autonomous systems and the linear stability theory for nonlinear nonautonomous systems with weak nonlinearities (called linear stability theory), it is clear that a linear feedback controller may be able to control a nonlinear dynamical system in a very rigorous way. Take the nonlinear, actually chaotic, Chua's circuit as an example. This circuit is a simple, yet very interesting, electronics system that displays rich and typical bifurcation and chaotic phenomena. The circuit is shown in Figure 6.5, which consists of one inductor L, two capacitors Cj and Cj, one linear resistor R, and one nonlinear resistor / which is a nonlinear function of the voltage across its two termi-
Introduction
to Chaos Control and Anti-Control
211
nals: g = g(Vc (/)). Let iL(t) be the current through the inductor L, and Vc (?) and Vc (t) be the voltages across Cj and C 2 , respectively, in this circuit diagram. It follows from Kirchhoff s laws that C,— V. (0=— ^,(0-^(0 l dt c\ R 1
c c 2*- ^ 2( 0 = 7?L - ^.(0-^(0 i c
dt
L
+ s(rc (0), c
+iL(t)
C
2
In the consideration of controlling the circuit behavior, it turns out to be easier to first apply the nonlinear transformation xx(t) = Vc^t),. x2(t) = VC2(t), x3(t) = RiL(t),
7 = tlC2R
to reformulate the circuit equations in the following dimensionless form: \x = p[-x + y-f(x)] }y = x-y + z
(6.3.19)
[i = -qy where /?>0,<7>0 , and f(x) = Rg(x) is a nonlinear function represented by f(x) = mQx + -(ml -w 0 )(|x + l|-|x-l|) in whichOTQ<0 and m^ < 0 . It is known that with /? = 10.0,g = 14.87,/WQ =-0.68, and /Mj =-1.27, the circuit displays a chaotic attractor (see Figure 6.6) and a limit cycle of large magnitude. This limit cycle is a large unstable (saddle-type) periodic orbit encompasses the nonperiodic attractor, and is generated due to the eventual passivity of the transistors. Now, let (x,y,x) be the unstable periodic orbit of the Chua circuit (6.3.19). Then, the circuit trajectory (x,y,z) of the circuit can be driven from any current state to reach this periodic orbit by a simple linear feedback controller of the form u
l x—x \ ll 0 =2 = -K y-y *22 ° M-5 z — 'z *33 u
with
x-x y-y z — 'z
(6.3.20)
212
G. Chen et al.
A|i>-/>7Wi, #22 ^ 0 , and A33>0 where the control can be applied to the trajectory at any time. A mathematical justification is given as follows. First, one observes that the controlled circuit is, x = p(-x + y - /(*)) - kx x (x - x), y = x-y + z-k22(y-y\ z= -qy-k^{z-z). Since the unstable periodic orbit (x,y,z) one has
(6.3.21)
is itself a (periodic) solution of the circuit,
x = p(-x + y y = x -y + 'z,
f(x)), (6.3.22)
z=-qy. so that a subtraction of (6.3.22) from (6.3.21),with the new notation X = x-x,
Y = y-y,
and Z = z-7
yields x = p(-X + Y-f(x,x))-knX, y = X-Y + Z-k22Y, z= -qY-k^Z,
(6.3.23)
where
f(x,x)--
rriQ(x-x)
x>l,3c>l
mQX-m^x + m^-mQ)
x>i,-l<x
TW 0 (X-X) + 2(/MJ -mQ)
X>\,X<-\
myx-mQX-m^+mQ
—1<JC<1,3C>1
m\(x-x)
-1<X<1,-1<X<1
m^x-rriQX + m^ -itiQ
-l<x<\,x
/WQ(x-?)-2(7ni-OTQ)
niQX - m-^x - m^ + ntQ /MQ(X-X)
<-l
X<-\,X>\
x < - l , - l < x <1 x<-\,x<-l
Define a Lyapunov function for system (6.3.23) by
Introduction
V(X,Y,Z)=-X2 2
to Chaos Control and Anti-Control
213
+-F2-Y2 + ^-Z2 2 2
It is clear that F(0,0,0)=0 and V(X,Y,Z)>0
for all X,Y,Z
not simultaneously
>
zero. On the other hand, since p,q>0 and ^22'%3 ^ ' > ^ f°ll° w s
= -p[q(X - Y)2 + qk22Y2 +k33Z2]-
q(pXf(x,x) +
tnat
kuX2)
<0 for all X,Y ,and z , if ^ ( ^ , x ) + A:11A'2>0
(6.3.24)
for all x and x . To find the conditions under which (6.3.24) is true, by a careful examination of the nine possible cases for the function f(x,x) shown above, the following common condition can be obtained: &jl >max{-pmQ,-pm^} = -pm^
(6.3.25)
in which wj < WQ < 0, as indicated above. This condition guarantees the inequality (6.3.24). Hence, if the conditions stated are satisfied, then the equilibrium point (0,0,0) of the controlled circuit (6.3.23) is globally asymptotically stable, leading to |x|-»0, |y|-»0, |z|-»0, as t -» oo simultaneously. That is, starting the feedback control at any time on the chaotic trajectory, one achieves Urn |x(/)-3c(0| = 0, lim |x(0-x(?)| = 0, Urn \z(t)-z(t)\ = 0. t->°o t—>oo t—>oo Nonlinear feedback controllers for nonlinear systems It is not always possible to use a linear controller to control a nonlinear dynamical system. So nonlinear feedback controllers are often necessary. Consider the Duffing oscillator \ \y = -p2x-x where p[;p2,q,
-5 - pyy + qcos(cot),
(6.3.26)
and co are systems parameters. It is known that with the parame-
214
G. Chen et al.
ters set p\ =0.4,/? 2 =-l.l, = 2.1 (or q=\.%), and « =1.8, the Duffing system has a chaotic response. It is also known that this system has some inherent unstable limit cycles, which however does not have an analytic expression nor can be displayed graphically due to its instability. For this system, suppose that one is interested in controlling its chaotic trajectory to one of its inherent unstable periodic orbits (limit cycles) by designing a conventional feedback controller. Notationally, let. (x,y) = (x(t),y(t)) be the target trajectory (one of its unstable periodic orbits). The goal is to control the system trajectory, such that lim |x(0-3c(0| = 0 and lim \y(t)-y(t) = 0\ t->T t^T
(6.3.27)
for a terminal time T < <x> For this purpose, consider a nonlinear feedback controller of the form u(t) = h(t;x,x). By adding the controller to the second equation of the original system, one has the following controlled Duffing system: x = y, •x _ (6.3.28) y - ~P2X — x— p^y + qcos{cot) + h(t;x,x). Since the periodic orbit (x,y) is itself a solution of the original system, subtracting (6.3.26), with (x,y) being replaced by (x,y) therein, from system (6.3.28), gives x=Y y = -P2% ~(x
,
~x
,
_ ) - P \ Y + h(t'<x->x\
(6.3.29)
where X = x = x and Y = y-y Next, observe that the controlled Duffing system (6.3.29) is a nonlinear, nonautonomous system. Therefore, the Lyapunov first method may not apply. For this particular case, however, the Lyapunov second method can be applied fairly easily. Indeed, a nonlinear controller h(x) can be designed as follows. Let h(x) = kX + 3x2+3xX2 which is only quadratic and so is simpler as compared to the given cubic oscillator. Then, under the control of this controller, system (6.3.9) reduces to
Introduction
to Chaos Control and Anti-Control
215
X =y p2)X-PlY-X3
Y = -(k +
(6.3.30)
and one can easily verify that under the condition k + p2 > 0, which gives a criterion for determining the linear control gain k, the Lyapunov function V{XJ)=^P2X2+}_X4+}_Y2
2
4
2
satisfies V < 0, where equality holds if and only if both X = 0 and Y = 0 . More precisely, one has K = - / ? j F 2 . If 7 * 0 then K<0 . If Y = 0 then the orbit is located on the X-axis, so one has the following two cases: (i) X=0, thus V =0 only at (0,0); (ii) X * 0 , thus the second equation of (6.3.30) shows that 7 * 0 , which means the orbit is moving and will not stay on the X-axis (so sooner or later the case returns to (i)). Therefore, V <0 , where equality holds if and only if both X = 0 and 7 = 0 . This means that the zero fixed point of the controlled Duffing system (6.3.30) is asymptotically stable, so that X -» 0 and Y -» 0 as t -> oo, or the goal |x-5c| —>0 and ir —JC —>0 (t —»oo) is achieved. Some general criteria for controllers design Recall the general tracking control problem described earlier by equations (6.3.13)-(6.3.18). For a general nonlinear and nonautonomous systems of the form x = f(x,t)
(6.3.31)
which is assumed to possess a periodic orbit x of period t >0: x(t + t ) = x(t) for all 0 < / < oo , the goal is to design a feedback controller of the form u(t) = K(x-x)
+ g(x-x,
t),
(6.3.32)
where AT is a constant matrix and g is a (simple) nonlinear vector-valued function, which is to be added to the original system, to obtain x = f(x,t) + u = f(x,t) + K(x -x) + g(x -x,t).
(6.3.33)
The controller is required to be able to drive the trajectory of the controlled sys-
216
G. Chen et al.
tern (6.3.33) to approach the target periodic orbit x , in the sense that lim \\x(t)-x(t)\\ = 0
f6 3 34)
where, again, |-| is the standard Euclidean norm. Since the target periodic orbit x is itself a solution of the original system, it satisfies x=f{x,t),
(6.3.35)
and since the feedback controlled system with the controller (6.3.32) is given by x = f(x,t) + K(x-x) + g(x-x,t),
(6.3.36)
a subtraction of (6.3.35) from (6.3.36) gives x = F(X,t) + KX + g(X,t)
(6.3.37)
where X = x-x
and F(X,t) =
f(x,t)-f(x,t).
It is clear that F(0,t) = 0 for all ?e[0,oo). Now, Taylor-expand the right-hand side of the controlled system (6.3.37) at X=0 (i.e., at x = x ), and suppose that the nonlinear controller to be designed will satisfy g(0,r) = 0. Then x = A(x,t)X + h(X,K,t), where A(x,t) = df(X,t)/dx\j^=Q
and h(X,K,t)
(6.3.38)
is the Taylor expansion (truncated,
if necessary, in a design), which is a function oft, K, and 0(X). To this end, the design is to determine both the constant control gain matrix K and the nonlinear controller g(X, t) based on the linearized model (6.3.38), such that X->0 (i.e., ;C-»JC ) as ?->oo.lf this can be done, then when the controller is applied to the original system, as shown in (6.3.36), the goal (6.3.34) can be achieved [9,20]. Theorem 6.3.1 Suppose that in system (6.3.38) h(0,K, t)=0 and A(x,t) = A is a constant matrix whose eigenvalues all have negative real parts. If lim
\KX,K,t)\
IMh° \\x\\ uniformly with respect to /e[0,co), where J is the Euclidean norm, then the control
Introduction
to Chaos Control and Anti-Control
217
u(t) defined in (6.3.32) will drive the trajectory x of the controlled system (6.3.36) to the target orbit x as f —» oo . Theorem 6.3.2 h(X,K,t)
In system (6.3.38), suppose that h(0,K,t) = 0 and that
and dh(X,K,t)/dX
are both continuous in a bounded region \x\<<x>.
Also assume that lim
x
\\ \h°
\\h(X,K,t)\\_
\\x\\ =0
uniformly with respect to ?e[0,°o). If all the multipliers of the system (6.3.38) satisfy 1^1 <1, i = \,...,n V?e[0,oo) then the nonlinear controller (6.3.32) so designed will drive the orbit x of the original controlled system (6.3.38) to the target orbit x as t—><x>. It is clear from te above derivation and discussion that the state feedback control approach described here has the essence of conventional control but has incorporated some special features of a chaotic system (e.g., the unstable periodic orbit of a chaotic system is a solution orbit of the same system, which is utilized for the controller design). Basically, this kind of state feedback control methods are "brute force" strategies as compared with the parametric variation approach introduced earlier. Nevertheless, feedback control technology has proved very successful for chaos control as well as anticontrol (which will be further discussed in a later section).
6.3.3 On time-delayedfeedback control Time-dslay feedback control is another effective approach of the conventional type. A time-delayed feedback control (TDFC) system is by nature a rather special version of the familiar autoregressive moving-average (ARMA) control, or the canonical state-space control systems. Despite some of its inherent limitations, TDFC can be quite successful in many chaos control applications. In order to understand to what extent the TDFC can be successful in chaos control applications, some typical (sufficient) conditions for the TDFC approach are derived in the following, based on Lyapunov stabilization arguments. Consider a general continuous-time nonlinear dynamical system: x = f(x,t),
x(t0) = x0<ERn
(6.3.39)
Suppose that this system has an (unstable) periodic solution, x(t), and is currently
218
G. Chen et al.
in a chaotic state. The task is to find a TDFC (with a proper delay-time, r > 0 ), u(t) = K(x(t)-x(t-r)), to be added to f(x,t),
(6.3.40)
such that the controlled system orbit can track the target: lim \\x(t)-x(t)\\ = 0
(6 3 4H
The design problem is then to determine the control gain matrix K to achieve the goal (6.3.41). Since the controlled system is x = f(x,t) + K(x(t)-x(t-r)),
(6.3.42)
and the periodic orbit is a solution of the original system, namely, t = f(x,t)
(6.3.43)
subtracting (6.3.43) from Eq.(6.3.42) yields the error dynamical system e = F(e,t) + K(x(t)-x(t-z)),
(6.3.44)
where e:=x-x
and F{e,t):= f{x,t)-
f{x,t) = f(e + x,t)-
f{x,t).
6.3.3.1 Stabilization problem First, the stabilization problem is discussed for the case where the target is an unstable fixed point, i.e., x =constant and is unstable. Without loss of generality, let x = 0, and the error dynamical system (6.3.44) becomes x = F(x,t) + K(x(t)-x(t-T)),
(6.3.45)
where F(x,t):=f(x,t)-f(0,t). This system has a zero fixed point, F(0,t) = 0, for all t > ?o, and the control objective is to force x ( 0 - » 0 as /->oo . For this simple case, the following local asymptotic stabilization result can be obtained, where J(i) := F {G,t): Theorem 6.3.3 For the error dynamical system (6.3.45), if there are two positive definite and symmetric constant matrices, P and Q, and a constant gain matrix, K, such that the Riccati polynomial matrix
Introduction
to Chaos Control and Anti-Control
J^T(t) P + P J(t) + P K Q^{−1} K^T P + P K + K^T P + Q   (6.3.46)
is either zero or (semi-)negative definite (= 0, ≤ 0, or < 0), then whenever ‖x(t)‖ in (6.3.45) is small enough, it will always approach zero: ‖x(t)‖ → 0 as t → ∞.

To verify this condition, construct a Lyapunov function of the form

V(x, t) = x^T P x + ∫_{t−τ}^{t} x^T(s) Q x(s) ds.

Then, since 0 is a fixed point of F(x, t), its Taylor expansion is F(x, t) = J(t) x(t) + [H.O.T.], where [H.O.T.] denotes higher-order terms in x(t). Thus,

V̇(x, t) = ẋ^T(t) P x(t) + x^T(t) P ẋ(t) + x^T(t) Q x(t) − x^T(t−τ) Q x(t−τ)
 = −[Q^{1/2} x(t−τ) + Q^{−1/2} K^T P x(t)]^T [Q^{1/2} x(t−τ) + Q^{−1/2} K^T P x(t)]
  + x^T(t) [J^T(t) P + P J(t) + P K Q^{−1} K^T P + P K + K^T P + Q] x(t)
  + [H.O.T.]^T P x + x^T P [H.O.T.] ≤ 0,  for all sufficiently small ‖x‖.
To this end, a standard verification using class-K functions [10] for this nonautonomous system completes the verification of the theorem.

Corollary 6.3.4 Theorem 6.3.3 holds if condition (6.3.46) is replaced by the Riccati equation

Ṗ(t) = J^T(t) P(t) + P(t) J(t) + P(t) K Q^{−1} K^T P(t) + P(t) K + K^T P(t) + Q,   (6.3.47)

which has a positive definite and symmetric solution P(t) > 0, t ∈ [t_0, ∞). It suffices to replace the constant matrix P by P(t) in the above Lyapunov function V(x, t), and then observe that there is an additional term, x^T(t) Ṗ(t) x(t), in V̇(x, t) of the above proof. Note also that in the very special case where the system Jacobian J(t) = J is a constant matrix, a sufficient condition for (6.3.46) and (6.3.47) to hold is that (J, K) is stabilizable. This is well known in linear control systems theory. Clearly, whether or not the above stabilization control is successful depends on whether the trajectory error, e(t), can first be forced into a small neighborhood of the origin; if so, the theorem guarantees that the linear delayed feedback controller will
continuously drive the error signal to zero. Thus, the above result provides a theoretical guideline for the controller design. The next theorem demands weaker conditions, but provides weaker results in the sense that it does not guarantee to which target the controlled trajectory moves.

Theorem 6.3.5 Suppose that the given system (6.3.39) is autonomous. Let J(x) = f'(x) = ∂f(x)/∂x be the system Jacobian and assume that J(x) is uniformly bounded by a constant matrix J_0, namely, J(x) ≤ J_0 for all x(t) and all t ≥ t_0, and that the gain matrix K is chosen such that all eigenvalues of [J_0 + K] have negative real parts. Then the TDFC-controlled orbit of (6.3.42) is bounded and approaches a limit set in the phase space.

To see this, note that the controlled orbit can be written in the form

x(t) = e^{(t−t_0)[J_0+K]} x_0 + ∫_{t_0}^{t} e^{(t−s)[J_0+K]} K (x(s) − x(s−τ)) ds,

which satisfies

‖x(t)‖ ≤ ‖e^{(t−t_0)[J_0+K]}‖ ‖x_0‖ + ‖ ∫_{t_0}^{t} e^{(t−s)[J_0+K]} K (x(s) − x(s−τ)) ds ‖,

where both x(t) and x(t−τ) are chaotic and hence are uniformly bounded in the phase space, so that the integral term above converges (so is bounded). Since the real parts of all eigenvalues of [J_0 + K] can be made negative by a suitable K, the first term on the right-hand side of the above inequality tends to zero as t → ∞. This implies that the controlled orbit is always bounded, and is always pointing inward in the phase space. It then follows from the (extended) Poincaré–Bendixson theorem that the controlled orbit approaches a limit set in the phase space; in particular, for planar systems, it converges to either a fixed point or a periodic orbit by the classical Poincaré–Bendixson Theorem.

6.3.3.2 Tracking problem

Now, consider the case where the target is an inherent unstable periodic orbit, x̄(t) with period t_p > 0, of the given system (6.3.39). In this case, the error dynamical system (6.3.44) becomes
ė = F(x, t) + K(x(t) − x(t−τ)),   (6.3.48)

where F(x, t) := f(x, t) − f(x̄, t). The control objective is to force x(t) → x̄(t) as t → ∞.
It should be pointed out that this tracking problem using TDFC has a particular property.

Proposition 6.3.5 Suppose that the matrix K is nonsingular. Then, the controlled orbit x(t) tracks the inherent unstable orbit x̄(t), with x(t) → x̄(t) as t → ∞, if and only if the delay-time τ is (an integer multiple of) the period t_p of x̄(t).

To verify this, it suffices to notice that if x(t) tracks x̄(t) as t → ∞, then F(x, t) → 0, e(t) → 0, and ė(t) → 0, as t → ∞. Eq. (6.3.48) then gives K(x(t) − x(t−τ)) → 0, implying that x(t) − x(t−τ) → 0. Since x(t) → x̄(t) and x̄(t) is periodic, the constant τ must be (an integer multiple of) the period of the unstable orbit. The converse can be easily verified by noticing that

‖ė‖ ≤ ‖F(x(t), t)‖ + ‖K‖ [ ‖x(t) − x̄(t)‖ + ‖x̄(t) − x̄(t−τ)‖ + ‖x̄(t−τ) − x(t−τ)‖ ]
  ≤ ‖F(x(t), t)‖ + ‖K‖ [ ‖e(t)‖ + 0 + ‖e(t−τ)‖ ] → 0.

This property deserves special attention when dealing with a tracking problem with TDFC: it suggests that the period t_p = τ has to be known beforehand in order to achieve a successful tracking. Note also that if K is singular, then some component(s) of the state vector x(t) may exhibit periodic behaviors. This may also result in other components exhibiting periodic behaviors as well. Now, the trackability of a TDFC is characterized as follows.

Theorem 6.3.6 For the error dynamical system (6.3.48), if there are two positive definite and symmetric constant matrices, P and Q, and a constant gain matrix, K, such that the Riccati polynomial matrix

J^T(t) P + P J(t) + P K Q^{−1} K^T P + P K + K^T P + Q   (6.3.49)

is either zero or (semi-)negative definite (= 0, ≤ 0, or < 0), then when ‖e(t)‖ in
(6.3.48) is small enough, it will always approach zero: ‖e(t)‖ → 0 as t → ∞. This result follows directly from the proof of Theorem 6.3.3 with x(t) being replaced by e(t) therein. Similarly, the following result follows from Corollary 6.3.4.

Corollary 6.3.7 Theorem 6.3.6 holds if condition (6.3.49) is replaced by the requirement that the Riccati equation

Ṗ(t) = J^T(t) P(t) + P(t) J(t) + P(t) K Q^{−1} K^T P(t) + P(t) K + K^T P(t) + Q   (6.3.50)

has a positive definite and symmetric solution P(t) > 0, t ∈ [t_0, ∞).

Next, a method is suggested for estimating an unknown period constant, t_p > 0. Chaotic systems are generally very complex, so it is very difficult to obtain, either analytically or experimentally, the periods of the inherent unstable orbits of a chaotic system. An effective approximation method is the gradient descent approach, which can search for the period. To apply this technique, the following criterion is used for the search: minimize the performance index

J(τ) = (1/n) Σ_{i=1}^{n} ‖x(t_0 + ih) − x(t_0 + ih − τ)‖²,   (6.3.51)
where h is the time step length and n is the total number of time-series data used for the search. The gradient can be easily derived as

∂J/∂τ = (2/n) Σ_{i=1}^{n} [x(t_0 + ih) − x(t_0 + ih − τ)]^T ẋ(t_0 + ih − τ).   (6.3.52)
Then, the algorithm for searching the constant τ is formulated as follows:
1. Set a tolerance ε > 0 and proper n and h such that a large enough amount of time-series data is covered for the estimation. Simulation starts at t_0 = 0. Choose an arbitrary initial guess, τ = τ_0, and let the chaotic system run freely for a time span of nh.
2. Adaptively adjust τ_i by

τ_{i+1} = τ_i − β (∂J/∂τ)|_{τ=τ_i},   (6.3.53)
where β is a properly chosen positive parameter that may improve the convergence rate. Set i = i + 1.
3. If J > ε, go to Step 2; otherwise, stop.

It is clear from the above derivation and discussion that the TDFC method is also reminiscent of many basic features of classical feedback control. It utilizes some chaotic properties, such as matching the delay time in the controller to the period of the unstable target periodic orbit. It also utilizes some intrinsic nonlinear dynamical properties, such as the Poincaré–Bendixson Theorem, in the controller design and analysis. For many other classical and non-conventional chaos control methods, the reader is referred to [1,2,7], among others, and the references therein.
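A minimal sketch of the period-search loop (6.3.51)–(6.3.53): the test signal, step size, and gain β below are assumptions chosen only for illustration; in practice x(·) would be a measured time series from the chaotic system and the true period would be unknown.

```python
import numpy as np

# Gradient-descent search for the delay tau minimizing the index (6.3.51).
T_true = 5.9                      # period we hope the search recovers (assumed)
h, n, beta, eps = 0.01, 2000, 0.05, 1e-6
t0 = 20.0                         # discard an initial transient

def x(t):                         # stand-in for the measured time series
    return np.sin(2 * np.pi * t / T_true) + 0.5 * np.sin(4 * np.pi * t / T_true)

def xdot(t, dt=1e-4):             # numerical derivative used in (6.3.52)
    return (x(t + dt) - x(t - dt)) / (2 * dt)

ti = t0 + np.arange(1, n + 1) * h

def J(tau):                       # performance index (6.3.51)
    d = x(ti) - x(ti - tau)
    return np.mean(d ** 2)

def dJ(tau):                      # gradient (6.3.52)
    d = x(ti) - x(ti - tau)
    return np.mean(2.0 * d * xdot(ti - tau))

tau = 4.0                         # arbitrary initial guess tau_0
for it in range(200):
    if J(tau) < eps:
        break
    tau = tau - beta * dJ(tau)    # update rule (6.3.53)
print(f"estimated period: {tau:.4f}   (true value {T_true})")
```

For a chaotic time series the same loop applies, with x(·) replaced by interpolation of the recorded data.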
6.4 Anticontrol of chaos: chaotification
6.4.1 Introduction

Recently, there has been increasing interest in designing a controller that can generate chaos in an originally non-chaotic system. Making a non-chaotic dynamical system chaotic, or retaining the chaos of a chaotic system (anticontrol of chaos, or chaotification), even for discrete-time dynamical systems (maps), can be very important in many real-world applications, as reviewed at the beginning of this chapter.
6.4.2 Problem description and the anticontrol algorithm

This section discusses a general discrete-time dynamical system, which originally need not be chaotic, of the form

x_{k+1} = f_k(x_k),  x_0 ∈ R^n given,   (6.4.1)

where f_k is assumed to be continuously differentiable, at least locally in a region of interest. The objective is to design a control input sequence {u_k}_{k=0}^{∞} such that the output (state vectors) of the controlled system

x_{k+1} = f_k(x_k) + u_k,  x_0 given,   (6.4.2)

behaves chaotically, in the sense of some commonly used criteria for chaos.
6.4.2.1 Problem description

Consider linear state-feedback controls of the form

u_k = B_k x_k,   (6.4.3)

where {B_k} are n×n constant matrices to be determined, without tuning any of the system parameters. Using this u_k, the controlled system (6.4.2) becomes

x_{k+1} = f_k(x_k) + B_k x_k,  x_0 given.   (6.4.4)

To describe the problem more precisely, some new notation is needed. Let

J_j(z) = f'_j(z) + B_j   (6.4.5)

be the Jacobian of the controlled system evaluated at z, j = 0, 1, 2, ..., and let

T_j(x_0, ..., x_j) := J_j(x_j) J_{j−1}(x_{j−1}) ··· J_1(x_1) J_0(x_0).   (6.4.6)

Moreover, let μ_i^j = μ_i(T_j^T T_j) be the ith eigenvalue of the jth product matrix [T_j^T T_j], where i = 1, ..., n and j = 0, 1, 2, ....

Recall that the ith Lyapunov exponent of the orbit {x_k}_{k=0}^{∞} of the controlled system (6.4.2), starting from the given x_0, is defined, for i = 1, ..., n, by

λ_i(x_0) = lim_{k→∞} (1/(2k)) ln μ_i(T_k^T T_k) = lim_{k→∞} (1/(2k)) ln μ_i(J_0^T(x_0) ··· J_k^T(x_k) J_k(x_k) ··· J_0(x_0)).   (6.4.7)
The approach suggested here is divided into two steps. First, in system (6.4.4), one designs the constant matrices {B_k}_{k=0}^{∞} such that all the Lyapunov exponents of the system orbit {x_k}_{k=0}^{∞} are finite and strictly positive:

0 < c ≤ λ_i(x_0) < ∞,  i = 1, ..., n,   (6.4.8)

where c is a predesired constant. Then, one wants this controller to bound the diverging
system orbits back into a bounded region, so as to drive the system chaotic. To be practical in achieving this goal, namely, for implementation purposes, it is required that the sequence {B_k}_{k=0}^{∞} of control-gain matrices be uniformly bounded:

sup_{0≤k<∞} ‖B_k‖ ≤ M < ∞,   (6.4.9)

for some constant M, where ‖·‖ denotes the spectral norm of a finite-dimensional matrix, that is, the largest singular value of the matrix. It is shown below that this is possible under the natural condition that all the Jacobians {f'_k(x_k)} are uniformly bounded, i.e., there is a constant N such that

sup_{0≤k<∞} ‖f'_k(x_k)‖ ≤ N < ∞.   (6.4.10)

The anticontrol algorithm proceeds as follows.

Step 0. Suppose that x_0 is initially given, and let J_0 = J_0(x_0). Design a positive feedback control gain B_0 = σ_0 I_n by choosing a positive number σ_0 > 0 such that the matrix [T_0^T T_0 − e^{2c} I_n] is finite and diagonally dominant.
For k = 0, 1, 2, ..., start with the controlled system x_{k+1} = f_k(x_k) + B_k x_k, where x_k was obtained from the previous step.

Step 1. Compute the Jacobian J_k(x_k) = f'_k(x_k) + σ_k I_n and then let T_k = J_k T_{k−1}.

Step 2. Design a positive feedback controller by choosing the positive number σ_k such that the matrix [T_k^T T_k − e^{2(k+1)c} I_n] is finite and diagonally dominant, where the constant c > 0 is the one given in (6.4.8).

Step 3. Finally, apply the mod-operation to the controlled system:

x_{k+1} = f_k(x_k) + B_k x_k  (mod 1).   (6.4.11)
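A minimal numerical sketch of Steps 1–3: the example map f, its diagonal Jacobian, the step size used when growing σ_k, and the threshold e^{2(k+1)c} are all illustrative assumptions rather than prescriptions from the text.

```python
import numpy as np

# Sketch of the anticontrol algorithm: accumulate the Jacobian product T_k,
# grow sigma_k until [T_k^T T_k - e^{2(k+1)c} I] is diagonally dominant with a
# positive diagonal, then iterate the controlled map with the mod-operation.
c = 0.3                                   # desired lower bound on the exponents
n_steps = 25

def f(x):                                 # originally non-chaotic (contracting) map
    return 0.5 * np.sin(x)

def jac(x):                               # its (diagonal) Jacobian f'(x)
    return np.diag(0.5 * np.cos(x))

def diag_dominant(S):
    d = np.abs(np.diag(S))
    off = np.sum(np.abs(S), axis=1) - d
    return np.all(np.diag(S) > 0) and np.all(d > off)

x = np.array([0.3, 0.7])
T = np.eye(2)
for k in range(n_steps):
    Jf = jac(x)
    sigma = 0.0
    for _ in range(200):                  # Step 2: grow sigma_k until the test
        sigma += 0.5                      # matrix is diagonally dominant
        Tk = (Jf + sigma * np.eye(2)) @ T
        S = Tk.T @ Tk - np.exp(2 * (k + 1) * c) * np.eye(2)
        if diag_dominant(S):
            break
    T = Tk                                # Step 1: accumulated product T_k
    x = (f(x) + sigma * x) % 1.0          # Step 3: controlled map with mod 1
    print(f"k={k:2d}  sigma_k={sigma:4.1f}  x={x}")
```

For a general, non-diagonal Jacobian the same search applies, although a larger σ_k (or a different gain structure) may be needed before the diagonal-dominance test is met.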
The control-gain sequence {B_k} = {σ_k I_n}, along with the mod-operation, will achieve the desired anticontrol. One remark is that the first two steps of the algorithm ensure that all the
Lyapunov exponents of the controlled system become strictly positive, so that the system trajectory is expanding in all directions. The third step, however, bounds the system trajectory globally. A combination of these two effects is expected to create complex chaotic dynamics within the bounded region of system orbits. Note also that, instead of Steps 1 and 2, a convenient choice is to simply use

u_k = B_k x_k = σ_k x_k,  with  σ_k = N + e^c,   (6.4.12)

for all k = 0, 1, 2, ..., where N is specified in (6.4.10). Then Step 3, namely, the mod-operation (6.4.11), can apply.

6.4.3 Verification of the anticontrol algorithm
6.4.3.1 For linear time-invariant systems

Consider a given n-dimensional linear time-invariant controlled system with the mod-operation:

x_{k+1} = A x_k + u_k  (mod 1),   (6.4.13)

where the mod-operation is defined componentwise, and the controller takes the simple form (6.4.12) with the choice of a constant bound N ≥ ‖A‖, namely,

u_k = (N + e^c) x_k,   (6.4.14)

in which c is specified in (6.4.8). It is shown below that the controller u_k, designed by following the above anticontrol algorithm, will make the controlled system (6.4.13)–(6.4.14) chaotic, in the sense of the following three criteria. A map φ: S → S, where S is a set, is chaotic if (textbook definition [5,6,8]):
(i) φ has sensitive dependence on initial conditions, in that for any x ∈ S and any neighborhood N of x in S, there exists a δ > 0 such that
‖φ^m(x) − φ^m(y)‖ > δ for some y ∈ N and some m > 0, where φ^m is the mth-order iterate of φ, i.e., φ^m := φ ∘ ··· ∘ φ (m times);
(ii) φ is topologically transitive, in that for any pair of open subsets U, V ⊂ S, there exists an integer m > 0 such that φ^m(U) ∩ V ≠ ∅;
(iii) the set of periodic points of φ is dense in S.
The simplest case

First, consider the simplest case of the anticontrol algorithm for the one-dimensional linear time-invariant system

x_{k+1} = a x_k + u_k  (mod 1),   (6.4.15)

where the controller takes the simple form, with the natural choice of an N ≥ |a|, in which the constant c is specified in (6.4.8):

u_k = (N + e^c) x_k.   (6.4.16)

First, observe that the controlled system (6.4.15)–(6.4.16) is

x_{k+1} = a x_k + (N + e^c) x_k = (a + N + e^c) x_k  (mod 1),   (6.4.17)

where a + N ≥ 0 and, since c > 0, (a + N + e^c) > 1. Next, define a map φ: S¹ → S¹, where S¹ is the unit circle in the two-dimensional plane, by

φ(x) = (a + N + e^c) ∠x,  x ∈ S¹,   (6.4.18)

in which ∠x is the angle of the point x ∈ S¹ (thus, 0 ≤ ∠x < 2π and φ is 2π-periodic). Note that this map is a one-variable map, since the radius of the circle is fixed and only the angle is variable. Then, it can be verified that the two systems (6.4.17) and (6.4.18) are equivalent. It is easy to see, by multiplying both sides by 2π, that (6.4.17) is equivalent to

x_{k+1} = (a + N + e^c) x_k  (mod 2π),   (6.4.19)

or ∠x_{k+1} = (a + N + e^c) ∠x_k, x_k ∈ S¹, which is equivalent to (6.4.18), where the initial state x_0 can be arbitrary. It can be verified that the map (6.4.19) is equivalent to the well-known chaotic logistic map x_{k+1} = 4 x_k (1 − x_k), where χ := (a + N + e^c) > 1, by defining x_k = (1/2)[1 − cos(2π θ_k)] in the logistic map. With this change of variables, the logistic map becomes
(1/2)[1 − cos(2π θ_{k+1})] = 4 · (1/2)[1 − cos(2π θ_k)] · {1 − (1/2)[1 − cos(2π θ_k)]} = [1 − cos(2π θ_k)] · [1 + cos(2π θ_k)],

which has the solution precisely given by formula (6.4.19). To provide one more proof, we refer to the following definition of discrete chaos [19]: A map φ: S → S, where S is a nonempty set, is chaotic if and only if for any two nonempty open subsets of S there is a periodic solution of φ that has points in both of them. For the map φ above, consider the periodicity equation χ^k ∠x = ∠x + 2ℓπ, where ℓ is an integer satisfying 0 ≤ ℓ ≤ [χ^k] − 1. Then, one can see that the roots {x_{k,ℓ}} of this equation (i.e., periodic solutions of the map φ) become densely distributed on S¹ as k increases, which means that for two arbitrarily given nonempty open subsets in S, there are two large enough integers k_0 and ℓ_0 such that the periodic solution x_{k_0, ℓ_0} of the map has at least one point in each of these two subsets.
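A minimal sketch of the controlled scalar map (6.4.17), with illustrative values of a and c (assumptions only), confirming numerically that its Lyapunov exponent equals ln χ > 0:

```python
import numpy as np

# Anticontrolled scalar map x_{k+1} = (a + N + e^c) x_k (mod 1), Eq. (6.4.17).
a, c = 0.6, 0.1
N = abs(a)                         # natural choice N >= |a|
chi = a + N + np.exp(c)            # controlled-map multiplier, > 1

x = 0.1234
lyap_sum = 0.0
n_iter = 10000
for _ in range(n_iter):
    x = (chi * x) % 1.0            # iterate Eq. (6.4.17)
    lyap_sum += np.log(abs(chi))   # |d x_{k+1} / d x_k| = chi away from the wrap
print("multiplier chi    :", chi)
print("Lyapunov exponent :", lyap_sum / n_iter, "  (ln chi =", np.log(chi), ")")
```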
Af>|/4| , only the first component
X£+j(l)
in the state vector
x
nee k+\ =Vck+\Q}'"xk+\(nn ^ s to be discussed, without loss of generality. Observe that this component of the controlled system has the expression x
k+\ 0) = [fll \xk (^ + • •' + a\nxk (")]+ (N + e°^>xk & --(a^\+N + ec)xk(1) + [fl|2*k(2) + • • • + a\nxk(ri)\
(mod 1)
where, since N ≥ ‖A‖ ≥ |a_{11}| and c > 0, we have a_{11} + N ≥ 0 and a_{11} + N + e^c > 1. Next, in the phase space spanned by the coordinates x(1), ..., x(n), consider an n-tuple map (projection)

φ(x(1)) = (a_{11} + N + e^c) ∠x(1),  x(1) ∈ S¹, satisfying 0 ≤ ∠x(1) < 2π,   (6.4.20)

(subject to the mod-2π operation). Clearly, φ is 2π-periodic with respect to x(1). Then, it can be similarly verified that the map (6.4.20) satisfies the three criteria of a chaotic map, and so does the controlled system (6.4.13)–(6.4.14) when it is projected onto x(1) (hence, onto each principal direction) of the phase space spanned by the coordinates x(1), ..., x(n). As a result, the overall controlled system is chaotic within the entire phase space by superposition; otherwise there would be at least one projection that is not chaotic.

6.4.3.2 For time-varying and nonlinear systems

For given time-varying and/or nonlinear systems, the above proof can be carried out by a similar procedure. Intuitively, if the given system is time-varying or even nonlinear, one has a higher chance of obtaining a chaotic controlled system. However, due to the time-varying and/or nonlinear nature of the map, a proof is difficult (but has already been established), and is beyond the scope of this chapter. Finally, it is noted that the mod-operation used above can be, and has been, replaced by some other operations, such as piecewise-linear functions or even a sine function (if a time delay is also used) in the controller [21-23]. Also, some anticontrol methods have been initiated for continuous-time dynamical systems [22,23], which, however, are still far from being as complete as those for the discrete-time case discussed above.
6.4.4 External noise control and anticontrol of chaos

External noise, both white and colored, can be used for control and anticontrol of chaos [24-27]. In fact, external noise not only can control chaos but also can reorganize hyperchaos [26,28] and spatiotemporal chaos [29]. The noise-driving method has become one effective method for both chaos control and synchronization. In this
section, two typical nonlinear dynamical systems are used as examples for illustration.

6.4.4.1 White noise control and anticontrol of chaos

Consider a radio frequency (rf) plasma coupled with a PLC circuit [24], described by

ẋ_1 = x_2,
ẋ_2 = A cos(2π x_3) + B x_1 + C x_1² − (D x_1² − E) x_2 + F_n,   (6.4.21)
ẋ_3 = ω / (2π),
where x_1 has the physical meaning of the normalized charge density in the plasma of an rf gas discharge, x_2 is its derivative, and x_3 is a related time function. Also, F_n is an external noise-driving controller, ω and A are the driving frequency and amplitude of the external rf power source, respectively, and B, C, D, and E are the system control parameters. Gaussian white noise is used, F_n = F_w(t), with

⟨F_w(t)⟩ = 0,  ⟨F_w(t) F_w(s)⟩ = 2 D_F δ(t − s),   (6.4.22)

where ⟨·⟩ denotes the mathematical expectation (average), D_F is the diffusion coefficient, and δ is the Dirac delta function. Eq. (6.4.22) completely determines the statistical features of the controller.

Control of chaos

In the right regime of the parameter space, white noise can realize chaos control, as shown in Figure 6.7, where A = B = C = 1, D = 0.075, E = 0.270975, ω = 1.745. As seen, the chaotic state (in the absence of noise) is stabilized to a period-1 limit cycle by strong white noise with D_F = 0.5. The main reason is that strong white noise compensates the dissipation process and plays the role of "signal-to-noise enhancement" (or "stochastic resonance"), where the control via noise actually improves the signal processing.

Anticontrol of chaos

Anticontrol of chaos, and alternation of periodic-chaotic sequences, can also be achieved by changing the intensity of the white noise, as shown in Figure 6.8, where A = B = C = 1, D = 0.075, E = 0.270975, ω = 1.745, with D_F = 23.98, 24.03, 24.031, and 24.05, respectively. Only two transition points are shown in the sequence from period-5 to chaos.
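The following sketch shows how a white-noise controller with the statistics (6.4.22) is injected numerically: each Euler–Maruyama step adds sqrt(2 D_F dt)·N(0,1) to the forced state. Since the exact coefficients of (6.4.21) are not reproduced here, the sketch drives a standard damped, periodically forced Duffing oscillator as a stand-in; the oscillator and all parameter values are assumptions.

```python
import numpy as np

# Euler-Maruyama integration with a Gaussian white-noise drive F_w satisfying
# <F_w> = 0 and <F_w(t) F_w(s)> = 2 D_F delta(t - s), as in Eq. (6.4.22).
delta, alpha, beta_, A, omega = 0.3, -1.0, 1.0, 0.5, 1.2   # Duffing stand-in
D_F = 0.5                       # diffusion coefficient of the white noise
dt, n_steps = 1e-3, 200_000
rng = np.random.default_rng(0)

x, v, t = 0.1, 0.0, 0.0
for k in range(n_steps):
    acc = -delta * v - alpha * x - beta_ * x**3 + A * np.cos(omega * t)
    x += dt * v
    v += dt * acc + np.sqrt(2.0 * D_F * dt) * rng.standard_normal()
    t += dt
    if k % 40_000 == 0:
        print(f"t={t:7.1f}  x={x:8.4f}  v={v:8.4f}")
```

Sweeping D_F and inspecting the resulting attractor (e.g., via a Poincaré section at the forcing period) is the numerical counterpart of the noise-intensity scans behind Figures 6.7 and 6.8.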
6.4.4.2 Colored noise control and anticontrol of chaos

A set of chemical reaction equations is considered [27]:

ẋ_1 = −k_1 x_1 x_3 + a (x_{1f} − x_1),
ẋ_2 = k_1 x_1 x_3 − 2 (k_2 x_2² − k_{−2} x_3) − k_3 x_2 + a (x_{2f} − x_2),   (6.4.23)
ẋ_3 = k_2 x_2² − k_{−2} x_3,

where x_1, x_2, x_3 are the molar fractions of three reactants, k_{±i} are the rate constants of the ith reaction, a is the inverse residence time (feed rate), and x_{1f} and x_{2f} are the feed concentrations of x_1 and x_2, respectively. The system is impermeable to x_3.

The basic features of system (6.4.23) are: (a) it consists of binary collisions only; (b) it is a chemical oscillator in which there is no separation of time scales; (c) it involves only three species. It was found that the parameters which produce sustained time-periodic oscillations (a limit cycle) in the system without any control are k_1 = 1, k_2 = k_{−2} = 5, k_3 = 0.03333, x_{2f} = 0.2667, and a = 0.03333. Figure 6.9 shows a periodic oscillation of x_1 for this set of parameters.

It is especially interesting to see the realization of chaos control and anticontrol for system (6.4.23) by using a colored noise input. Here, the so-called Ornstein–Uhlenbeck process is discussed. The unified controlled equations for system (6.4.23) are described by

ẋ_i = f_i(x_i) + g_i ε_i,  ε̇_i = −λ ε_i + λ F_w,   (6.4.24)

where f_i(x_i) represents the nonlinear function on the right-hand side of the system (6.4.23), and λ and g_i are constants. Consider the particular case where g_i = 1 and ε_1 = ε_2 = ε_3. Let F_w be Gaussian white noise satisfying the statistics of Eq. (6.4.22). Then, the driving noise ε_1 is an exponentially correlated colored noise, with properties

⟨ε_1(t)⟩ = 0,  ⟨ε_1(t) ε_1(s)⟩ = D λ exp(−λ |t − s|),   (6.4.25)

in which ⟨···⟩ denotes the mathematical expectation (average), and {···} denotes averaging
over the distribution of the random initial value ε_0, which satisfies a Gaussian distribution. The correlation time of the colored noise is τ = 1/λ.

Anticontrol of chaos can be realized under the right colored-noise parameters, as in the cases shown above. For example, anticontrol of chemical chaos is achieved from the period-1 orbit shown in Figure 6.9 to the chemical chaos shown in Figure 6.10, if D = 10^{−…} and the correlation time is τ = 0.05, where the other parameters are the same as before. It is emphasized here that the colored-noise-induced anticontrol of chemical chaos is closely related to the correlation time, which leads to non-equilibrium transitions, such as from a single limit cycle to multiple limit cycles (two, three, or more), and then to chaos, and vice versa.

In summary, it is found that external noise can actually be quite beneficial in such tasks as control and anticontrol of chaos, phase-locking enhancement, chaos synchronization, hyperchaos and spatiotemporal chaos suppression, as well as spatiotemporal stochastic resonance.

Anticontrol of Hamiltonian quantum systems

Control and anticontrol of quantum chaos in several fields of contemporary physics and chemistry appears to be challenging. These fields include, for example, molecular dynamics in laser and quantum optics. A few examples of complete control of the quantum state have been proposed for some particular quantum systems: atoms interacting with a quantized electromagnetic field in a single-mode resonator. A switching control method for classical systems has been extended to the control of quantum systems [33]. Complete control is achieved not only over a finite number of quantum states but also over the unitary evolution of a generic Hamiltonian system. In this approach, the control effect is obtained by switching on and off two distinct time-independent perturbations in an alternating sequence. For an N-level system, the sequence is periodic, and each period consists of N² time intervals, t_1, t_2, ..., t_{N²}, which are found by solving a so-called inverse Floquet problem. One finds this solution by linearizing the system in the vicinity of the identity-map solution, which gives the intervals for the identity transformation. This nonlinear problem is solved numerically by minimizing the sum of absolute values of the coefficients of the characteristic polynomial of the evolution operator, as a function of the time intervals. An experimental setting is given in [33] for the control method, together with two examples of control of an atom.

In this section, the design of a time-delayed driving method for anticontrol of temporal chaos in a quantum confined system is briefly introduced [34]. Such methods for anticontrol of quantum chaos of a quantum network are seen to be highly significant. For example, under non-equilibrium boundary conditions and under optical pumping, photons interacting with two-level impurity atoms in a photonic band-gap material
can assume a new collective steady state, intermediate between that of incoherent and coherent light. Corresponding to this new optical state, the impurity atoms acquire a steady-state polarization whose phase varies randomly from atom to atom, and the resulting collective steady state is the optical analog of a quantum spin glass, or quantum network. This can be used to construct a neural network or a quantum computing system. One of the key questions is how to design a "predictable" quantum chaotic state by applying a driving field to the quantum network. To do so, the design problem of a time-delayed driving field is discussed. The idea of this design is to yield quantum temporal chaos for a quantum system consisting of N two-level atoms confined in a volume V interacting with a single photon mode. The approach taken is based on a spectral statistical analysis using the spectral decomposition of the Hamiltonian (obtained by using a projection approach). This provides a method for generating or constructing temporal chaos in the quantum network through the spectral decomposition of the Hamiltonian, irrespective of whether the system is in equilibrium or not. The Hamiltonian of a collection of N two-level atoms confined within a cavity (such as a photonic band gap), with a resonant transition frequency ω_0 and a periodic boundary condition, driven by a time-dependent photonic mode, can be written as [33,34]

H = Σ_j (ħω_0/2) σ_j^z + Σ_j (σ_j^+ a + a^+ σ_j^−) f(t, τ),   (6.4.26)
where ħ is the Planck constant and ω_0 is the resonant transition frequency. Also, for generality, the external field f(t, τ) is given as a function of the time t and the delay time τ. In this equation, |a_j⟩ and |b_j⟩ represent the states in which the jth atom is in the upper and lower levels, respectively; σ_j^+ describes atomic excitation of the jth atom; σ_j^z describes the jth atomic inversion; a and a^+ are the annihilation and creation operators for photons in the resonant dielectric mode, respectively; the term a^+ σ_j^− describes the process in which the atom is taken from the upper state into the lower state and a photon of the mode is created, while the term σ_j^+ a describes the opposite process. The exact form of Eq. (6.4.26) is determined by the spectral analysis described below. For simplicity, and to reveal the origins of the temporal dynamics owing to the driving field, it is reasonable to neglect the self-interaction terms ħν a^+ a and
Σ_{jk} J_{jk} σ_j^+ σ_k^−, and to focus on the interaction terms for atoms, photons, and external fields. Here, J_{jk} is the process coefficient for the jth atom excitation and the kth atomic inversion, and ν is the jump frequency of the photon (together, ħν is the energy of one photon jump). Denote
σ_j^z = |a_j⟩⟨a_j| − |b_j⟩⟨b_j|,  σ_j^+ = |a_j⟩⟨b_j|,  σ_j^− = |b_j⟩⟨a_j|,   (6.4.27)
which satisfy the spin-1/2 algebra of the Pauli matrices. Substituting the definitions of Eq. (6.4.27) into the Hamiltonian of Eq. (6.4.26), the Hamiltonian operator is written as H(t, τ) = H_0 + V(t, τ), where H_0 is the Hamiltonian of the quantum confined system in the cavity, given by

H_0 = Σ_{j=1}^{N} Σ_n (ħω_0/2) (|a_j, n⟩⟨a_j, n| − |b_j, n+1⟩⟨b_j, n+1|),

and V is the interaction with the time-dependent driving field f(t, τ) used to generate temporal chaos, expressed by

V(t, τ) = Σ_{j=1}^{N} Σ_n (|a_j, n⟩⟨b_j, n+1| + |b_j, n+1⟩⟨a_j, n|) f(t, τ).

The spectral decomposition of the Hamiltonian without the driving field f(t, τ) can be calculated as

H_0 = Σ_{j=0}^{N−1} Σ_{k=0}^{N−1} [ħω_0/2 + 2g cos θ_j^(1) cos θ_k^(2)] |χ_j^(1) ⊗ χ_k^(2)⟩⟨χ_j^(1) ⊗ χ_k^(2)|,

where the eigenvalues are determined by the phases

θ_j^(1) = (2π/N) j,  j = 0, 1, ..., N−1,   θ_k^(2) = (2π/N) k,  k = 0, 1, ..., N−1,

and the eigenvectors are

χ_j^(1) = [1, e^{iθ_j^(1)}, e^{2iθ_j^(1)}, ..., e^{i(N−1)θ_j^(1)}]^T,   χ_k^(2) = [1, e^{iθ_k^(2)}, e^{2iθ_k^(2)}, ..., e^{i(N−1)θ_k^(2)}]^T.

Hence,

H(t, τ) |χ_j^(1) ⊗ χ_k^(2) ⊗ f_m⟩ = [ħω_0/2 + 2 f(τ) cos θ_j^(1) cos θ_k^(2)] |χ_j^(1) ⊗ χ_k^(2) ⊗ f_m⟩.

Using the projection operator method, it is proved that the eigenvalue problem for the driving field can be solved as f(t, τ)|f_m⟩ = m|f_m⟩. The eigenvectors are in a Rigged Hilbert Space having the structure |f_m⟩ ∈ Φ^×, ⟨f_m| ∈ Φ. Requiring the spacing of the corresponding eigenvalues to follow a Wigner-type distribution,

P_W(τ) ∝ (f(τ) − f(τ_0)) exp[−(f(τ) − f(τ_0))²],

one can, hence, determine the explicit form of the driving field,

f(τ) ≈ ± [ln(1/P_W(τ))]^{1/2} + f(τ_0).
This is a necessary condition, which enables an arbitrary spacing between the corresponding eigenvalues to evolve with a Wigner-type distribution and provides indices for quantum temporal chaos in this time-delayed quantum system. The above provides a method to design a time-delayed external field for driving the quantum system into quantum temporal chaos. The delay-time parameter τ acts as a key control parameter for inducing quantum temporal chaos in the system. The eigenvectors of the driving field are obtained using the projection methodology, which reveals that they evolve in a dual pair of spaces beyond the Hilbert space. One benefit of this approach is that it provides a definite form of the eigenvalue, which is only a function of the driving field, f(τ). This enables a simple design of a driving field suitable for inducing temporal chaos, based on Wigner-like distributions of the spectral spacing (with respect to τ). The anticontrol method discussed in this section suggests a general approach to anticontrolling temporal chaos of quantum systems, which may be easily tested by experiment, e.g., by using an external optical field with a flexible or random delay time.
6.5 Some concluding remarks

Chaos control and anticontrol, as a challenging subject born out of complex dynamics and control systems, has triggered many aesthetic as well as intellectual responses from the communities of engineering, science, and mathematics. It has led to many important and interesting scientific findings, which demonstrate that crossing the traditional boundaries of scientific disciplines is not only important but has become necessary. Notwithstanding the existing accomplishments, chaos control and anticontrol as a profound research subject is far from complete. In fact, it emerged only about a decade ago and is merely a small territory of the intriguing field of modern nonlinear science. In addition to pursuing deeper understanding and further amelioration of the topics that have been briefly addressed in this chapter, real-world applications are especially significant for consolidating the theory. The impact of thorough studies and successful applications of many nontraditional and nontrivial topics in this exciting field will be enormous and far-reaching. Chaos control calls for new efforts and endeavors in the new millennium. In this new era, today's contemporary concepts of nonlinear dynamics and controls may undergo another cycle of rethinking and reorganizing; chaos control and anticontrol theories, methodologies, and perspectives may unleash other new ideas and valuable engineering applications; real breakthroughs may begin to take place, bringing enhancement, improvement, and sustainability to the complex living nature. An unexpected yet exciting new world is yet to come.
Acknowledgement

Some of the material used here is taken and modified from works with Drs. Dejian Lai, Xiaofan Wang, and Xinghuo Yu; in particular, some overview material and descriptions are taken and modified from Reference [1], coauthored by Dr. Xiaoning Dong. These colleagues and the World Scientific Publishing Company are acknowledged for their cooperation and courtesy.
References

[1] G. Chen and X. Dong, From Chaos to Order: Perspectives, Methodologies, and Applications, World Scientific Pub. Co., Singapore, 1998.
[2] G. Chen, "Chaos, bifurcation, and their control", in the Wiley Encyclopedia of Electrical and Electronics Engineering, 1998.
[3] G. Chen and D. Lai, "Feedback control of Lyapunov exponents for discrete-time dynamical systems", Int. J. of Bifur. Chaos, vol. 6, pp. 1341-1349, 1996.
[4] G. Chen and D. Lai, "Feedback anticontrol of discrete chaos", Int. J. of Bifur. Chaos, vol. 8, pp. 1585-1590, 1998.
[5] R. L. Devaney, An Introduction to Chaotic Dynamical Systems, Addison-Wesley, New York, 1987.
[6] R. W. Easton, Geometric Methods for Discrete Dynamical Systems, Oxford University Press, New York, 1998.
[7] A. L. Fradkov and A. Yu. Pogromsky, Introduction to Control of Oscillations and Chaos, World Scientific, Singapore, 1999.
[8] P. Glendinning, Stability, Instability and Chaos, Cambridge University Press, New York, 1994.
[9] F. C. Hoppensteadt, Analysis and Simulation of Chaotic Systems, Springer-Verlag, New York, 1993.
[10] H. K. Khalil, Nonlinear Systems, 2nd Ed., Prentice-Hall, Upper Saddle River, NJ, 1996.
[11] D. W. Jordan and P. Smith, Nonlinear Ordinary Differential Equations, Oxford University Press, Oxford, 1987.
[12] V. Lakshmikantham, S. Leela, and M. N. Oguztoreli, "Quasi-solutions, vector Lyapunov functions and monotone methods", IEEE Trans. on Auto. Contr., vol. 26, pp. 1149-1153, 1981.
[13] T. Y. Li and J. A. Yorke, "Period three implies chaos", American Mathematical Monthly, vol. 82, pp. 481-485, 1975.
[14] I. G. Malkin, Theorie der Stabilität einer Bewegung, Oldenbourg, Munich, 1959.
[15] F. R. Marotto, "Snap-back repellers imply chaos in R^n", J. of Mathematical Analysis and Applications, vol. 63, pp. 199-223, 1978.
[16] E. Ott, C. Grebogi, and J. A. Yorke, "Controlling chaos", Phys. Rev. Lett., vol. 64, pp. 1196-1199, 1990.
[17] V. M. Popov, Hyperstability of Automatic Control Systems, Springer-Verlag, Berlin, 1973.
[18] C. Robinson, Dynamical Systems: Stability, Symbolic Dynamics, and Chaos, CRC Press, Boca Raton, FL, USA, 1995.
[19] P. Touhey, "Yet another definition of chaos", American Mathematical Monthly, May 1997, pp. 411-414.
[20] F. Verhulst, Nonlinear Differential Equations and Dynamical Systems, Springer-Verlag, New York, 1990.
[21] X. F. Wang and G. Chen, "On feedback anticontrol of discrete chaos", Int. J. of Bifur. Chaos, vol. 9, pp. 1435-1442, 1999.
[22] X. F. Wang and G. Chen, "Chaotifying a stable LTI system by tiny feedback control", IEEE Trans. Circ. Syst., vol. 47, pp. 410-415, 2000.
[23] X. F. Wang and G. Chen, "Chaotification via arbitrarily small feedback controls: Theory, method and applications", Int. J. of Bifur. Chaos, March 2000, in press.
[24] J. Q. Fang, "The effects of white noise on complexity in a two-dimensional driven damped dynamical system", Phys. Lett. A, vol. 142, pp. 344-348, 1989.
[25] J. Q. Fang, "The effects of external noise on complexity in a two-dimensional driven damped dynamical system", in Measures of Complexity and Chaos, ed. by N. B. Abraham, A. M. Albano, A. Passamante and P. E. Rapp, Plenum Press, New York, 1989, pp. 229-234.
[26] J. Q. Fang, "Generalized Farey organization and generalized winding number in a 2-D DDDS", Phys. Lett. A, vol. 146, pp. 35-44, 1990.
[27] J. Q. Fang, "Colored noise induced nonequilibrium transition in a new chemical oscillator", Commun. Theor. Phys., vol. 17, pp. 39-48, 1992.
[28] J. Q. Fang and S. Y. Xu, "Synchronizing hyperchaos by white noise", Chinese J. of Nucl. Phys., vol. 18, pp. 244-246, 1996.
[29] J. F. Lindner, B. S. Prusha, and K. E. Clay, "Optimal disorder for taming spatiotemporal chaos", Phys. Lett. A, vol. 231, pp. 164-172, 1997.
[30] A. Maritan and J. R. Banavar, "Chaos, noise, and synchronization", Phys. Rev. Lett., vol. 72, pp. 1451-1454, 1994.
[31] J. Q. Fang, "A physical model for describing the characteristics of BPD and RFPD (I)", Chinese J. of Nucl. Phys., vol. 12, pp. 163-172, 1990.
[32] J. Q. Fang, "A physical model for describing the characteristics of BPD and RFPD (II)", Chinese J. of Nucl. Phys., vol. 13, pp. 71-80, 1991.
[33] G. Harel and V. M. Akulin, "Complete control of Hamiltonian quantum systems: Engineering of Floquet evolution", Phys. Rev. Lett., vol. 82, pp. 1-4, 1999.
[34] B. Qiao, H. E. Ruda and J. Q. Fang, "Time delay anticontrol of temporal chaos in a quantum confined system", submitted, 2000.
[35] L. E. Reichl, The Transition to Chaos: In Conservative Classical Systems: Quantum Manifestations, Springer-Verlag, New York, 1992.
[36] R. Blümel and W. P. Reinhardt, Chaos in Atomic Physics, Cambridge University Press, New York, 1997.
Fig.6.1 The chaotic attractor of a power system
Fig.6.2 A broad variety of dynamics can be created and modified when chaos is under control
Fig.6.3 A simple electric power system
Fig.6.4 Schematic diagram for the parametric variation control method
Fig.6.5 Chua's circuit
Fig.6.6 The double scroll chaotic attractor of Chua's circuit
Fig.6.7 Stabilized and induced period-1 stochastic resonance
Fig.6.8 An example of anticontrol of chaos: (a) D_F = 23.98, period-5; (b) D_F = 24.03, chaos
Fig.6.9 Periodic oscillation of x_1
Fig.6.10 Anticontrol of chemical chaos by using colored noise input