V. Barbu, University of Iaşi
Optimal control of variational inequalities
Pitman Advanced Publishing Program BOSTON·LONDON·MELBOURNE
PITMAN PUBLISHING LIMITED, 128 Long Acre, London WC2E 9AN
PITMAN PUBLISHING INC, 1020 Plain Street, Marshfield, Massachusetts 02050
Associated Companies: Pitman Publishing Pty Ltd, Melbourne; Pitman Publishing New Zealand Ltd, Wellington; Copp Clark Pitman, Toronto

© V. Barbu 1984. First published 1984.
AMS Subject Classifications: (main) 49A29, 49B21, 49B22; (subsidiary) 35J65, 35J85, 35K60

Library of Congress Cataloging in Publication Data
Barbu, Viorel. Optimal control of variational inequalities. Bibliography: p. 1. Variational inequalities (Mathematics). 2. Differential equations, Elliptic. 3. Differential equations, Parabolic. I. Title. QA316.B28 1984 515'.64 83-25007 ISBN 0-273-08629-4

British Library Cataloguing in Publication Data
Barbu, V. Optimal control of variational inequalities. (Research notes in mathematics; 100) 1. Calculus of variations. 2. Inequalities. I. Title. II. Series. QA316 515'.64 ISBN 0-273-08629-4

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording and/or otherwise, without the prior written permission of the publishers. This book may not be lent, resold, hired out or otherwise disposed of by way of trade in any form of binding or cover other than that in which it is published, without the prior consent of the publishers.

Reproduced and printed by photolithography in Great Britain by Biddles Ltd, Guildford
Preface
This book is concerned with the theory of first-order necessary conditions of optimality for control problems governed by variational inequalities and semilinear equations of elliptic and parabolic type. It is a pleasure to acknowledge the considerable influence of the works of Professor J.L. Lions, and the author takes this opportunity to thank him for his interest in the development of the present work. Thanks are also due to Dr D. Tiba and Dr O. Carja for reading the manuscript and suggesting a number of changes. Part of this book was written, and part of the preliminary research involved in its preparation was carried out, while the author was at INRIA, Rocquencourt, in 1980 and at Purdue University in February–March 1983. The author wishes to thank both institutions for their kind hospitality and support. Thanks go, finally, to the National Institute for Scientific and Technical Creation (INCREST) in Bucharest, whose financial support extended over several years was decisive in the preparation of the present work.

Iaşi, July 1983
Viorel Barbu
Contents

Preface
Introduction
Conventions and symbols

CHAPTER 1  ELEMENTS OF NONLINEAR ANALYSIS
1.1 Maximal monotone operators in Hilbert spaces
1.2 Surjectivity and perturbation of maximal monotone operators
1.3 Convex functions and subdifferential mappings
1.4 Approximation of convex functions
1.5 Some examples of subdifferential mappings
1.6 Generalized gradients of locally Lipschitz functions
1.7 Nonlinear evolution equations in Hilbert spaces
1.8 Evolution equations associated with subdifferential mappings

CHAPTER 2  ELLIPTIC VARIATIONAL INEQUALITIES
2.1 Abstract existence results
2.2 A regularity result
2.3 The obstacle problem
2.4 Water flow through a rectangular dam
2.5 Elliptic problems with unilateral conditions at the boundary

CHAPTER 3  OPTIMAL CONTROL OF ELLIPTIC VARIATIONAL INEQUALITIES
3.1 Controlled elliptic variational inequalities
3.2 Generalized first-order necessary conditions
3.3 Distributed control problems governed by semilinear equations
3.4 Optimal control of the obstacle problem
3.5 Control of free surfaces
3.6 Distributed control systems with nonlinear boundary value conditions
3.7 Control and observation on the boundary
3.8 Control on the boundary: the Dirichlet problem
3.9 Extensions and further remarks

CHAPTER 4  PARABOLIC VARIATIONAL INEQUALITIES
4.1 The main existence results
4.2 Examples
4.3 The one-phase Stefan problem
4.4 A quasisteady variational inequality

CHAPTER 5  OPTIMAL CONTROL OF PARABOLIC VARIATIONAL INEQUALITIES: DISTRIBUTED CONTROLS
5.1 Formulation of the problem
5.2 The approximating control process
5.3 First-order necessary conditions: semilinear parabolic equations
5.4 First-order necessary conditions: the obstacle problem
5.5 First-order necessary conditions for problem (P1)
5.6 Optimal control of finite-dimensional variational inequalities
5.7 Optimal feedback controls
5.8 Optimal control problems with infinite time horizon
5.9 Control via initial conditions
5.10 Control of periodic systems
5.11 Various optimality results for nonlinear distributed control systems

CHAPTER 6  BOUNDARY CONTROL OF PARABOLIC VARIATIONAL INEQUALITIES
6.1 Control systems with nonlinear boundary value conditions
6.2 Boundary control of free boundary problems: mixed boundary conditions
6.3 Boundary control of free boundary problems: the Dirichlet problem
6.4 Boundary control of moving surfaces
6.5 The control of machining processes

CHAPTER 7  THE TIME-OPTIMAL CONTROL PROBLEM
7.1 The time-optimal control problem for nonlinear evolution equations
7.2 The maximum principle
7.3 The approximating control process
7.4 The proof of the maximum principle
7.5 Various extensions

REFERENCES
Introduction
Variational inequalities represent an important class of nonlinear problems and occur in the mathematical description of a large variety of physical problems. The most recent method in the study of free boundary value problems arising in filtration, heat conduction and diffusion theory uses a reformulation of these problems as variational inequalities. Quite often these problems arise as controlled systems with specified objectives. Roughly speaking, optimal control is concerned with finding the optimal input controls, within prescribed restrictions, in order to achieve the desired objectives. This book presents several optimal control problems governed by variational inequalities of elliptic and parabolic type, with the main emphasis on first-order necessary conditions of optimality (the maximum principle), which is perhaps the most important and sensitive part of the whole theory. In point of fact the treatment refers to a broader class of nonlinear control systems,

    Ay + Fy ∋ Bu

and

    y′(t) + Ay(t) + Fy(t) ∋ Bu(t),  0 < t < T,

where A is a linear self-adjoint positive definite operator in the state space H (in particular, a linear elliptic operator), F is a subgradient operator (i.e. the subdifferential of a lower semicontinuous convex function on H) and B is a linear continuous operator from the space of controls to the state space. A unified and constructive approach to the theory of necessary conditions is developed, which has its origins in the author's works [8], [9], [16]. The examples chosen to illustrate the general method contain much of the substance of this work and can be extended in several directions. In order not to overburden the book, the discussion is restricted to relatively simple problems; these do provide, however, theoretical models for the treatment of more sophisticated problems arising in the control of industrial processes.
For the same reason, it has been necessary to omit a number of applications of the general results and techniques given herein to ill-posed problems associated with variational inequalities. Other important results, such as optimality theorems for control problems governed by nonlinear parabolic equations and hyperbolic equations, two-phase Stefan problems, and control problems with boundary observation, are mentioned only in passing. Since the whole subject is still under active development, there is no attempt to be comprehensive in any sense. In order to make the book self-contained, some standard results pertaining to convex functions, generalized gradients, nonlinear equations of monotone type and the existence theory of variational inequalities have been included in Chapters 1, 2 and 3. The list of references at the end of the book includes only books and papers which the author consulted in the preparation of this work, and does not constitute an exhaustive bibliography.
Conventions and symbols
Conventions

(1) R^N denotes the Euclidean space of ordered N-tuples of real numbers. The scalar product in R^N is denoted by ⟨·,·⟩_N and the norm by ‖·‖_N.

(2) Given a bounded and open subset Ω of R^N with boundary Γ, L^p(Ω), 1 ≤ p ≤ ∞, denotes the space of p-summable real-valued functions on Ω, and C^k(Ω̄) the space of all continuously differentiable functions on Ω̄ up to order k. C_0^∞(Ω) denotes the space of all infinitely differentiable functions with compact support in Ω and D′(Ω) its dual, i.e. the space of all scalar distributions on Ω. The space of all continuous real-valued functions on Ω̄ is denoted by C(Ω̄).

(3) For k a positive integer and 1 ≤ p ≤ ∞, W_p^k(Ω) is the Sobolev space of all functions y ∈ L^p(Ω) having the property that all distributional derivatives y^(α) up to order k belong to L^p(Ω). W̊_p^k(Ω) denotes the closure of C_0^∞(Ω) in W_p^k(Ω), and W_{p′}^{−k}(Ω), 1/p + 1/p′ = 1, is the dual of W̊_p^k(Ω). For p = 2 we shall use the notation H^k(Ω) = W_2^k(Ω), H_0^k(Ω) = W̊_2^k(Ω), H^{−k}(Ω) = W_2^{−k}(Ω). By W_q^s(Γ) we shall denote the corresponding Sobolev space on Γ (see [48], [66] for the definition of this space for noninteger s). We set H^s(Γ) = W_2^s(Γ).

(4) For a given positive integer ℓ and 1 < p < ∞ we shall denote by W_p^{2ℓ,ℓ}(Q) the Sobolev space on Q = Ω × ]0,T[,

    W_p^{2ℓ,ℓ}(Q) = {y ∈ L^p(Q); (∂^r/∂t^r)(∂^s/∂x^s) y ∈ L^p(Q) for 2r + s ≤ 2ℓ}.

For noninteger ℓ and Σ = Γ × ]0,T[ we shall denote by W_p^{ℓ,ℓ/2}(Σ) the corresponding Sobolev space on Σ (see [49]).

(5) If [a,b] is a closed real interval and X a real Banach space with the norm ‖·‖, then L^p(a,b;X), 1 ≤ p ≤ ∞, denotes the space of all (classes of)
strongly measurable functions x: [a,b] → X such that ∫_a^b ‖x(t)‖^p dt < ∞, with the usual modification if p = ∞. Denote by C([a,b];X) the space of all continuous functions from [a,b] to X and by BV([a,b];X) the space of all X-valued functions with bounded variation on [a,b]. We shall use the notation AC([a,b];X) for the space of absolutely continuous functions from [a,b] to X. By BV([a,b[;X) and AC([a,b[;X) we shall denote the spaces of functions y: [a,b[ → X having the property that y ∈ BV([a,b_1];X) and y ∈ AC([a,b_1];X), respectively, for every compact interval [a,b_1] ⊂ [a,b[. If b = +∞ we denote these spaces by BV_loc([a,+∞[;X) and AC_loc([a,+∞[;X), respectively. The notations BV(]a,b];X) and AC(]a,b];X) have a similar meaning. If X is reflexive then every y ∈ AC([a,b];X) is almost everywhere differentiable on ]a,b[ and (see [6], [20])

    y(t) = y(a) + ∫_a^t (dy/ds)(s) ds  for t ∈ [a,b],

where dy/dt is the strong derivative of y: [a,b] → X. Sometimes we shall use the symbol y′ or y_t instead of dy/dt.

(6) Denote by W^{1,p}([a,b];X) the space {y ∈ AC([a,b];X); y′ ∈ L^p(a,b;X)}. If X is reflexive and y ∈ W^{1,p}([a,b];X) then the derivative Dy of y in the sense of distributions, i.e.

    Dy(φ) = −∫_a^b y(t)φ′(t) dt,  ∀φ ∈ C_0^∞(a,b),

coincides with the ordinary derivative y′. Conversely, if y ∈ L^p(a,b;X) and Dy ∈ L^p(a,b;X), then there exists ỹ ∈ W^{1,p}([a,b];X) such that y = ỹ a.e. in ]a,b[. Other notations such as W^{1,p}(]a,b];X) or W_loc^{1,p}([a,+∞[;X) are obvious.

(7) Given a bounded and Lebesgue measurable subset D of R^N we shall denote by (L^∞(D))* the dual space of L^∞(D). Every functional μ ∈ (L^∞(D))* can be uniquely written as μ = μ_a + μ_s, where μ_a ∈ L^1(D) and μ_s is singular with respect to the Lebesgue measure m. This means that for every ε > 0 there exists a measurable subset D_ε ⊂ D such that m(D∖D_ε) < ε and μ_s(z) = 0 for all z ∈ L^∞(D) having their supports in D_ε.

Symbols
L(X,Y)   the space of linear continuous operators from X to Y
int C    the interior of the set C
Fr C     the boundary of C
C̄        the closure of C
conv C   the convex hull of C
∇f       the gradient of the function (map) f
∂f       the subdifferential (generalized gradient) of the function f
2^X      the set of all subsets of X
B*       the adjoint of B ∈ L(X,Y)
‖·‖_X    the norm of the space X
sgn      the signum function: sgn x = x/‖x‖_X for x ≠ 0, sgn 0 = {x ∈ X; ‖x‖_X ≤ 1}
R        the real line ]−∞, +∞[
R̄        ]−∞, +∞]
R^+      [0, +∞[
R^−      ]−∞, 0]
r^+      max {r, 0}
r^−      −inf {r, 0}
[r]      the integer part of r
|·|_2    the norm of the space L²(D)
‖·‖_n    the norm of the space R^n
1 Elements of nonlinear analysis
In this introductory chapter we present the basic properties of nonlinear monotone operators and convex functions in Hilbert spaces, and the existence theory of nonlinear evolution equations associated with monotone operators; these are indispensable for an understanding of what follows. Much basic material has had to be omitted, and some important results have been given without proof. Much of the material is standard and can be found in many books or surveys, notably [6], [16], [19], [20], [24], [62], [72].

§1.1 Maximal monotone operators in Hilbert spaces
Throughout the following, H will be a real Hilbert space with the norm denoted by |·| and the scalar product by (·,·). An element of the product space H × H will be written as [x,y], and a multivalued operator A: H → 2^H from H into itself will be viewed as a subset of H × H, i.e. it will not be distinguished from its graph. If A ⊂ H × H we set

    D(A) = {x ∈ H; Ax ≠ ∅},  Ax = {y ∈ H; [x,y] ∈ A},
    R(A) = ∪ {Ax; x ∈ D(A)},  A^{−1} = {[y,x]; [x,y] ∈ A}.

The subset D(A) is called the domain of A. If A, B ⊂ H × H and λ is real, we set

    λA = {[x, λy]; [x,y] ∈ A},
    A + B = {[x, y+z]; [x,y] ∈ A, [x,z] ∈ B}.

A subset A ⊂ H × H is called monotone if

    (y_1 − y_2, x_1 − x_2) ≥ 0                                    (1.1)

for all [x_i, y_i] ∈ A, i = 1,2. A monotone subset of H × H is said to be maximal monotone if it is not properly contained in any other monotone subset of H × H. If A is single valued then condition (1.1) simply means that

    (Ax_1 − Ax_2, x_1 − x_2) ≥ 0  for all x_1, x_2 ∈ D(A).
The subset A of H × H is said to be locally bounded at x_0 if there exists a neighbourhood U(x_0) of x_0 such that

    sup {|y|; y ∈ Ax, x ∈ U(x_0)} < +∞.

A single-valued operator A: H → H is said to be demicontinuous if it is strongly-weakly continuous, i.e.

    lim_{n→∞} Ax_n = Ax_0  weakly in H

for any strongly convergent sequence x_n → x_0. There is a close connection between the property of maximal monotonicity and the surjectivity property of the operator I + A (I is the unit operator in H). The fundamental result in this direction is the celebrated theorem of Minty.
THEOREM 1.1  Let A be a monotone subset of H × H. Then A is maximal monotone if and only if for each λ > 0 (equivalently, for some λ > 0), R(λI + A) = H.
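Before the proof, the theorem can be tested on the simplest nontrivial example (an illustration added here; it is not part of the original text). Take H = R and let A be the multivalued sign graph; for every λ > 0 the inclusion λx + Ax ∋ y can be solved explicitly:

```latex
\[
Ax=\begin{cases}\{1\}, & x>0,\\ [-1,1], & x=0,\\ \{-1\}, & x<0,\end{cases}
\qquad
(\lambda I+A)^{-1}y=\begin{cases}(y-1)/\lambda, & y>1,\\ 0, & |y|\le 1,\\ (y+1)/\lambda, & y<-1.\end{cases}
\]
```

Thus R(λI + A) = R for every λ > 0, so A is maximal monotone. By contrast, the single-valued restriction with A0 = {0} is still monotone but not maximal: the equation x + Ax = 1/2 then has no solution (x > 0 forces x + 1 = 1/2, x < 0 forces x − 1 = 1/2), so maximality requires the vertical segment [−1,1] at the origin.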
Proof of "if" part  Assume that R(λI + A) = H for some λ > 0. We have to show that A is maximal monotone in H × H. Suppose that this is not the case; then there exists [x_0, y_0] ∉ A such that

    (x − x_0, y − y_0) ≥ 0  for all [x,y] ∈ A.                     (1.2)

By assumption there exists [x_1, y_1] ∈ A such that λx_1 + y_1 = λx_0 + y_0. Substituting this in (1.2) yields

    −λ|x_1 − x_0|² = (x_1 − x_0, y_1 − y_0) ≥ 0.

Thus x_1 = x_0 and y_1 = y_0, i.e. [x_0, y_0] ∈ A. The contradiction arrived at concludes the proof.

Proof of "only if" part  First we shall prove the following auxiliary result:

LEMMA 1.1  Let A ⊂ H × H be a monotone subset and let C ⊇ D(A) be a nonempty closed convex subset of H. Then for each y_0 ∈ H there exists x_0 ∈ C such that

    (x_0 + y − y_0, x − x_0) ≥ 0  for all [x,y] ∈ A.               (1.3)
Proof  Replacing, if necessary, the operator A by Ãx = Ax − y_0, we may assume that y_0 = 0. For every [x,y] ∈ A define

    C_{x,y} = {u ∈ C; (u + y, u − x) ≤ 0}.

To prove (1.3) it suffices to show that ∩_{[x,y]∈A} C_{x,y} ≠ ∅. Replacing C by C ∩ C_{x_0,y_0} we may assume that all the C_{x,y} belong to a weakly compact subset of H. Since every C_{x,y} is weakly closed (because it is convex and closed), to prove the lemma it suffices to show that for every finite sequence [x_j, y_j] ∈ A, j = 1,...,n,

    ∩_{j=1}^n C_{x_j, y_j} ≠ ∅.

Equivalently, there exists x̃ such that

    (x̃ + y_j, x̃ − x_j) ≤ 0  for j = 1,2,...,n.                    (1.4)

Let P_n ⊂ R^n be the set of all λ = (λ_1,...,λ_n) such that 0 ≤ λ_i ≤ 1 for i = 1,2,...,n and Σ_{i=1}^n λ_i = 1. Let

    φ(λ,μ) = Σ_{j=1}^n μ_j (x_λ + y_j, x_λ − x_j),

where μ = (μ_1,...,μ_n) and x_λ = Σ_{i=1}^n λ_i x_i, λ = (λ_1,...,λ_n). The function φ is convex and continuous in λ and linear in μ on P_n × P_n. Then according to the well-known J. von Neumann minimax theorem there exists (λ^0, μ^0) ∈ P_n × P_n such that

    φ(λ^0, μ) ≤ φ(λ^0, μ^0) ≤ φ(λ, μ^0)  for all (λ,μ) ∈ P_n × P_n.

Hence, for all μ ∈ P_n,

    φ(λ^0, μ) ≤ φ(μ^0, μ^0) = −Σ_{j=1}^n Σ_{i=1}^n μ_j^0 μ_i^0 (x_j − x_i, y_j) ≤ 0,

because Σ_j μ_j^0 (x_{μ^0}, x_{μ^0} − x_j) = 0 and, by the monotonicity of A, the double sum equals ½ Σ_{j,i} μ_j^0 μ_i^0 (x_j − x_i, y_j − y_i) ≥ 0. Taking successively μ = (δ_i^j)_{i=1}^n for j = 1,2,...,n (δ_i^j is the Kronecker symbol), it follows that x̃ = Σ_{i=1}^n λ_i^0 x_i satisfies system (1.4), as claimed.
Proof of Theorem 1.1 (continued)  Let A be maximal monotone and let C be the closure of conv D(A). Let y_0 be arbitrary but fixed in H. By Lemma 1.1 there exists x_0 ∈ C such that

    (x_0 + y − y_0, x − x_0) ≥ 0  for all [x,y] ∈ A.

Since A is maximal monotone, this yields [x_0, y_0 − x_0] ∈ A. We have therefore proved that y_0 ∈ R(I + A). Since y_0 is arbitrary we may conclude that R(I + A) is all of H. Replacing A by λ^{−1}A we may infer that R(λI + A) = H for every λ > 0. This completes the proof of Theorem 1.1.

If A ⊂ H × H is maximal monotone then we have

    Ax = {y ∈ H; (y − v, x − u) ≥ 0, ∀[u,v] ∈ A},  ∀x ∈ D(A).

This yields

PROPOSITION 1.1  Let A be a maximal monotone subset of H × H. Then for every x ∈ D(A), Ax is a closed and convex subset of H.

In particular, it follows by Proposition 1.1 that the projection of the origin into Ax exists and is unique. Let us denote it by A^0 x, i.e.

    A^0 x ∈ Ax,  |A^0 x| = inf {|y|; y ∈ Ax}.

For every λ > 0 we set

    J_λ x = (I + λA)^{−1} x,  A_λ x = λ^{−1}(x − J_λ x),  x ∈ H.   (1.5)
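As a concrete illustration of (1.5) (an added example, not in the original text): take H = R and the maximal monotone graph Ax = {x/|x|} for x ≠ 0, A0 = [−1,1]. Solving u + λAu ∋ x gives the resolvent and Yosida approximation explicitly:

```latex
\[
J_\lambda x=\begin{cases}x-\lambda, & x>\lambda,\\ 0, & |x|\le\lambda,\\ x+\lambda, & x<-\lambda,\end{cases}
\qquad
A_\lambda x=\frac{x-J_\lambda x}{\lambda}=\begin{cases}1, & x>\lambda,\\ x/\lambda, & |x|\le\lambda,\\ -1, & x<-\lambda.\end{cases}
\]
```

Here one can check by hand that A_λ is single valued and Lipschitz with constant λ^{−1}, that |A_λ x| ≤ |A^0 x| (e.g. A^0 0 = 0 and A_λ 0 = 0), and that A_λ x → A^0 x as λ → 0 — precisely the properties collected in Theorem 1.2 below.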
THEOREM 1.2  Let A be a maximal monotone subset of H × H. Then

(i)   |J_λ x − J_λ y| ≤ |x − y| for all λ > 0 and x, y ∈ H.
(ii)  A_λ is a Lipschitzian operator in H with Lipschitz constant λ^{−1}.
(iii) A_λ x ∈ A J_λ x for all x ∈ H and λ > 0.
(iv)  |A_λ x| ≤ |A^0 x| and lim_{λ↓0} A_λ x = A^0 x for all x ∈ D(A).
(v)   lim_{λ↓0} J_λ x = x for all x ∈ D̄(A) (the closure of D(A)).
(vi)  A is demiclosed, i.e., if x_n → x strongly in H and y_n ∈ Ax_n, y_n → y weakly in H, then y ∈ Ax.
(vii) If for some sequence λ_n → 0, x_n → x strongly in H and A_{λ_n} x_n → y weakly in H, then [x,y] ∈ A.
Proof  According to Theorem 1.1 the operator J_λ is well defined on all of H. Moreover, we have

    J_λ x − J_λ y + λ(A_λ x − A_λ y) = x − y  for all x, y ∈ H.     (1.6)

By the definition of A_λ we have A_λ x = λ^{−1}((I + λA)J_λ x − J_λ x) ∈ A J_λ x for all x ∈ H, which is (iii). We take the scalar product of (1.6) with J_λ x − J_λ y and use the monotonicity of A to get (i). Taking the scalar product of (1.6) with A_λ x − A_λ y we get

    λ|A_λ x − A_λ y|² ≤ |x − y| |A_λ x − A_λ y|,

because A is monotone. This implies (ii). Noticing that, for every x ∈ D(A) and y ∈ Ax, A_λ x = λ^{−1}(J_λ(x + λy) − J_λ x), and using (i), we find that |A_λ x| ≤ |A^0 x|. Together with (1.5) the latter implies (v), first for all x ∈ D(A) (since |x − J_λ x| = λ|A_λ x| ≤ λ|A^0 x|) and consequently, by (i), for all x ∈ D̄(A).

Now let λ_n → 0 be such that A_{λ_n} x → ξ weakly in H. Since A is monotone, we have

    (A_{λ_n} x − y, J_{λ_n} x − u) ≥ 0  ∀[u,y] ∈ A,

and letting n → ∞,

    (ξ − y, x − u) ≥ 0  ∀[u,y] ∈ A.

Hence ξ ∈ Ax. Since |ξ| ≤ |A^0 x| we may conclude that ξ = A^0 x and that A_{λ_n} x → A^0 x strongly in H. This completes the proof of (iv).

In proof of (vi), letting n tend to +∞ in the inequality

    (y_n − v, x_n − u) ≥ 0  ∀[u,v] ∈ A,

we get (y − v, x − u) ≥ 0, ∀[u,v] ∈ A, and since A is maximal monotone we conclude that [x,y] ∈ A.

To prove (vii) we start with the obvious inequality

    (A_{λ_n} x_n − v, J_{λ_n} x_n − u) ≥ 0  ∀[u,v] ∈ A.

Since J_{λ_n} x_n → x strongly in H (because {A_{λ_n} x_n} is bounded and |x_n − J_{λ_n} x_n| = λ_n |A_{λ_n} x_n| → 0), this yields

    (y − v, x − u) ≥ 0  ∀[u,v] ∈ A,

and therefore [x,y] ∈ A.
Now we state without proof an important result due to Rockafellar (for the proof see [71] or [6], p. 44).

THEOREM 1.3  Let A be a monotone subset of H × H. Then A is locally bounded at every point which is interior to D(A).
For applications to partial differential equations the following criterion is useful.

PROPOSITION 1.2  Let A be a monotone and demicontinuous operator from H to H. Then A is maximal monotone.

Proof  Assume that A is not maximal monotone and argue from this to a contradiction. Then we may find x_0, y_0 ∈ H such that y_0 ≠ Ax_0 and

    (Ax − y_0, x − x_0) ≥ 0  for all x ∈ D(A) = H.

We set x = x_t = (1−t)x_0 + tu, where 0 < t < 1 and u is arbitrary in H. We have

    t(Ax_t − y_0, u − x_0) ≥ 0,

hence (Ax_t − y_0, u − x_0) ≥ 0, and letting t tend to zero we get, by the demicontinuity of A,

    (Ax_0 − y_0, u − x_0) ≥ 0  for all u ∈ H.

Hence Ax_0 = y_0, which contradicts the assumption.
§1.2 Surjectivity and perturbation of maximal monotone operators

A subset A of H × H is said to be coercive if there exists x_0 ∈ H such that

    lim_{n→∞} (y_n, x_n − x_0) |x_n|^{−1} = +∞                      (1.7)

for all [x_n, y_n] ∈ A such that lim_{n→∞} |x_n| = +∞.

THEOREM 1.4  Let A ⊂ H × H be maximal monotone and coercive. Then R(A) = H.

Proof  Let y be arbitrary but fixed in H. By Theorem 1.1, for every λ > 0 the equation

    λx_λ + Ax_λ ∋ y                                               (1.8)

has a unique solution x_λ ∈ D(A). We take the scalar product of (1.8) with x_λ − x_0 and use condition (1.7) to conclude that {|x_λ|} is bounded for λ → 0. Hence there exists a sequence λ_n → 0 such that x_{λ_n} → x weakly in H and y_n = y − λ_n x_{λ_n} ∈ Ax_{λ_n} → y strongly in H. Thus by Theorem 1.2, part (vi), we infer that [x,y] ∈ A, as claimed.

Theorem 1.5 is a special case of a general result due to Rockafellar. For the proof we refer the reader to [6], p. 46.
THEOREM 1.5  Let A and B be maximal monotone subsets of H × H such that (int D(A)) ∩ D(B) ≠ ∅. Then A + B is a maximal monotone subset of H × H.

In particular, it follows by Proposition 1.2 and Theorem 1.5 that if A is maximal monotone in H × H and B: H → H is monotone, single valued and demicontinuous, then A + B is maximal monotone.

Let X be a real Banach space with the norm denoted by ‖·‖ and let X* be the dual space of X with the norm (the dual norm) denoted ‖·‖_*. Let (·,·) be the pairing between X and X*. A multivalued operator A: X → 2^{X*} (equivalently, a subset of X × X*) is said to be monotone if

    (y_1 − y_2, x_1 − x_2) ≥ 0

for all [x_i, y_i] ∈ A, i = 1,2. The operator A is called maximal monotone if it is monotone and is not properly contained in any other monotone subset of X × X*.
The single-valued operator A: X → X* is said to be demicontinuous if it is continuous from X to X* endowed with the weak star topology. It must be emphasized that Theorem 1.1 admits a natural extension to this general framework, and Theorems 1.3, 1.4 and 1.5 remain true if X is a reflexive Banach space (see [6], [24]). This is obvious if X is a Hilbert space (not identified with its own dual). Indeed, if Λ: X → X* is the canonical isomorphism of X (which exists by the Riesz theorem) then we have

    (Λx, x) = ‖x‖² = ‖Λx‖²_*  for all x ∈ X                       (1.9)

and

    (Λx, y) = ((x, y))  for all x, y ∈ X,

where ((·,·)) is the scalar product of X. Thus, as is readily seen, A is monotone (maximal monotone) in X × X* if and only if the operator Λ^{−1}A is monotone (maximal monotone) in X × X. Thus in this case Theorem 1.1 has the following form:

THEOREM 1.6  The monotone subset A of X × X* is maximal monotone if and only if the range of Λ + A is all of X*.

As for Propositions 1.1 and 1.2 and Theorems 1.3, 1.4 and 1.5, it is clear now that they remain unchanged in this new context. For instance, by Proposition 1.2 and Theorems 1.4 and 1.5 we have

THEOREM 1.7  Let B be a maximal monotone subset of X × X* and let A: X → X* be a monotone, demicontinuous operator. Then A + B is maximal monotone in X × X*. If it is coercive then the range of A + B is all of X*.
§1.3 Convex functions and subdifferential mappings

Let X be a real Banach space with the norm ‖·‖ and dual X*. As usual we shall denote by (·,·) the pairing between X and X*. If X = X* = H is a Hilbert space identified with its own dual then this is the scalar product of H. The function φ: X → R̄ = ]−∞, +∞] is called convex if for all λ ∈ [0,1] and x, y in X the following inequality holds:

    φ(λx + (1−λ)y) ≤ λφ(x) + (1−λ)φ(y).                           (1.10)

The function φ is called lower semicontinuous (l.s.c.) on X if

    lim inf_{x→x_0} φ(x) ≥ φ(x_0)  for all x_0 in X,

or equivalently, if every level set {x ∈ X; φ(x) ≤ λ} is closed. Since every closed convex set is simultaneously closed in the weak and strong topology of X, we may infer that a convex function is lower semicontinuous if and only if it is weakly lower semicontinuous (i.e. l.s.c. in the weak topology of X).

PROPOSITION 1.3  Let φ: X → R̄ be a lower semicontinuous convex function which is not identically +∞ and such that

    lim_{‖x‖→+∞} φ(x) = +∞.                                       (1.11)

Assume that the space X is reflexive. Then φ attains its infimum on X. In other words, there exists x_0 ∈ X such that

    φ(x_0) = inf {φ(x); x ∈ X}.

Proof  Let λ ∈ R be arbitrary but fixed, chosen so that E = {x ∈ X; φ(x) ≤ λ} is nonempty. We have

    inf {φ(x); x ∈ X} = inf {φ(x); x ∈ E}.

Since E is weakly closed and bounded (by condition (1.11)) we infer that it is compact in the weak topology of X (because the space X is reflexive). Inasmuch as φ is weakly lower semicontinuous, Proposition 1.3 follows from a classical result: every lower semicontinuous function on a compact subset of a topological space attains its infimum.

The set D(φ) = {x ∈ X; φ(x) < +∞} is called the effective domain of φ, and E(φ) = {(x,λ) ∈ X × R; φ(x) ≤ λ} is the epigraph of φ.

PROPOSITION 1.4  Let φ: X → R̄ be an l.s.c. convex function. Then φ is continuous on the interior of D(φ).
Proof  Let x_0 ∈ int D(φ). To prove that φ is continuous at x_0 it suffices to show that for every λ > 0 the subset {y ∈ X; φ(x_0 + y) ≤ φ(x_0) + λ} is a neighbourhood of the origin. The set

    C = {y ∈ X; φ(x_0 + y) ≤ φ(x_0) + λ} ∩ {y ∈ X; φ(x_0 − y) ≤ φ(x_0) + λ}

is closed, convex, symmetric and absorbs every point of X (because the function t → φ(x_0 + ty) is convex and therefore continuous in a neighbourhood of the origin). Then by Baire's category theorem, C is a neighbourhood of the origin, as claimed.

PROPOSITION 1.5  Let φ: X → R̄ be an l.s.c. convex function, φ ≢ +∞. Then φ is bounded from below by an affine function on X.

Proof  If x_0 ∈ D(φ) then for every ε > 0, (x_0, φ(x_0) − ε) ∉ E(φ). Then by virtue of the Hahn–Banach theorem there exists a linear continuous functional on X × R which separates this point from the closed convex set E(φ). Identifying the dual space of X × R with X* × R, we infer that there exist x*_0 ∈ X* and α > 0 such that

    (x, x*_0) + αt ≥ (x_0, x*_0) + α(φ(x_0) − ε)  for all (x,t) ∈ E(φ).

Then for t = φ(x) and x in D(φ) this yields

    φ(x) ≥ φ(x_0) − ε − α^{−1}(x − x_0, x*_0),

i.e. φ is bounded from below by an affine function.
Given a lower semicontinuous convex function φ: X → R̄, by definition the subdifferential of φ is the multivalued operator ∂φ: X → X* defined by

    ∂φ(x) = {x* ∈ X*; φ(x) − φ(u) ≤ (x − u, x*) for all u ∈ X}.   (1.13)

The elements x* ∈ ∂φ(x) are called subgradients of φ at x. We see that ∂φ(x) is always a closed convex subset of X*, but it may well be empty. The set of those x ∈ X for which ∂φ(x) ≠ ∅ is called the domain of ∂φ and will be denoted by D(∂φ). By the definition of ∂φ it follows that x is a minimum point of φ on X if and only if x is a solution to the equation

    0 ∈ ∂φ(x).                                                    (1.14)
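A concrete example (added for illustration): on X = H = R the function φ(x) = |x| is convex and continuous, and (1.13) gives

```latex
\[
\partial\varphi(x)=\begin{cases}\{1\}, & x>0,\\ [-1,1], & x=0,\\ \{-1\}, & x<0,
\end{cases}
\qquad \varphi(x)=|x|,
\]
```

so ∂φ coincides with the multivalued signum function of the symbols list, and 0 ∈ ∂φ(0) expresses, via (1.14), the fact that x = 0 is the minimum point of φ. More generally, on a Hilbert space H the function φ(x) = |x| has ∂φ(0) = {y ∈ H; |y| ≤ 1}.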
The directional derivative of φ at x in the direction h is by definition

    φ′(x,h) = lim_{λ↓0} (φ(x + λh) − φ(x)) λ^{−1}.                (1.15)

Since for every h ∈ X the difference quotient λ → λ^{−1}(φ(x + λh) − φ(x)) is monotonically increasing on R^+, φ′(x,h) exists for all h ∈ X and at every point x ∈ D(φ). The function φ is said to be Gâteaux differentiable at x if the function h → φ′(x,h) is linear and continuous on X. In particular this implies that

    φ′(x,h) = lim_{λ→0} (φ(x + λh) − φ(x)) λ^{−1}  ∀h ∈ X.

If φ is Gâteaux differentiable at x, then we shall denote by ∇φ(x) (the gradient of φ at x) the element of X* defined by

    (h, ∇φ(x)) = φ′(x,h)  for all h ∈ X.

PROPOSITION 1.6  Let φ: X → R̄ be an l.s.c. convex function. Then

    ∂φ(x_0) = {x*_0 ∈ X*; φ′(x_0,h) ≥ (h, x*_0) ∀h ∈ X}.

If φ is continuous at x_0 then

    φ′(x_0,h) = max {(h, x*_0); x*_0 ∈ ∂φ(x_0)}  ∀h ∈ X.          (1.16)

For the proof see for instance [62] or [16], p. 94. If ∂φ(x_0) happens to consist of a single element x*_0, then by (1.16)

    φ′(x_0,h) = (h, x*_0)  ∀h ∈ X.

Therefore φ is Gâteaux differentiable at x_0 and ∇φ(x_0) = ∂φ(x_0). Conversely, if φ is convex and Gâteaux differentiable at x_0, then ∂φ(x_0) = ∇φ(x_0).

PROPOSITION 1.7  Let φ: X → R̄ be an l.s.c. convex function. Then int D(φ) ⊂ D(∂φ).

Proof  Let x_0 ∈ int D(φ). Then, by Proposition 1.4, φ is continuous at x_0 and therefore (x_0, φ(x_0) + ε) ∈ int E(φ) for every ε > 0. Since (x_0, φ(x_0)) is a boundary point of E(φ), there exists a closed supporting hyperplane of E(φ) which passes through (x_0, φ(x_0)). In other words, there exist x*_0 ∈ X* and α_0 ≥ 0, not both zero, such that

    α_0(t − φ(x_0)) + (x − x_0, x*_0) ≥ 0  for all (x,t) ∈ E(φ).  (1.17)
Since α_0 must in fact be positive (if α_0 = 0, then (x − x_0, x*_0) ≥ 0 for all x in a neighbourhood of x_0, forcing x*_0 = 0), taking t = φ(x) in (1.17) shows that −α_0^{−1} x*_0 ∈ ∂φ(x_0); hence x_0 ∈ D(∂φ), as claimed.

For a given l.s.c. convex function φ: X → R̄, the function φ*: X* → R̄ defined by

    φ*(x*) = sup {(x, x*) − φ(x); x ∈ X}                          (1.18)

is called the conjugate of φ. It turns out that φ* is itself an l.s.c. convex function on X*, and by Proposition 1.5 we see that D(φ*) ≠ ∅. Furthermore, the following three conditions are equivalent (see for instance [72] or [16], p. 91):

    (i) x* ∈ ∂φ(x);  (ii) φ(x) + φ*(x*) = (x, x*);  (iii) x ∈ ∂φ*(x*).   (1.19)

THEOREM 1.8  Let X be a real Banach space and let φ: X → R̄ be an l.s.c. convex function. Then the operator ∂φ: X → X* is maximal monotone.

Proof  By the definition of ∂φ, it is readily seen that ∂φ is a monotone subset of X × X*. We shall prove the maximality in the special case where X = X* = H is a Hilbert space, and refer the reader to [73] for a proof in the general case. According to Theorem 1.1 we should prove that R(I + ∂φ) = H. Let y be arbitrary but fixed in H. The equation

    x + ∂φ(x) ∋ y                                                 (1.20)

is equivalent to ∂φ_1(x) ∋ 0, where φ_1(x) = ½|x|² + φ(x) − (x,y). Hence it suffices to prove that φ_1 attains its infimum on H. This follows by Proposition 1.3 because, by virtue of Proposition 1.5, φ_1(x) → +∞ as |x| → +∞.

PROPOSITION 1.8  Let φ: H → R̄ be a lower semicontinuous convex function. Then D(∂φ) is a dense subset of D(φ).

Proof
For any x in D(φ) the equation

    x_ε + ε ∂φ(x_ε) ∋ x

has a unique solution x_ε ∈ D(∂φ), because ∂φ is maximal monotone. We take the inner product of the above equation with x_ε − x and use the definition of ∂φ to get

    |x_ε − x|² + ε(φ(x_ε) − φ(x)) ≤ 0  for all ε > 0,

and by Proposition 1.5 we conclude that lim_{ε→0} |x_ε − x| = 0. Hence x belongs to the closure of D(∂φ), as claimed.

§1.4 Approximation of convex functions
(1.21)
By Proposition 1.3 for every x E Hand E > 0, the infimum defining ¢ (x) is E attained. As the infimum of a family of l.s.c. conv.ex functions, ¢ is itself s l.s.c. and convex on H. We set A = d¢. (Moreau [61J, Brezis [19J) The function ¢ is Fr~chet differE entiable on H and A = d¢ for every E > O. Moreover, we have THEOREH 1.9
E
E
2 ¢ (x) = EIA xl /2 c
E
+
¢(J x) for all x E H E
lim ¢ (x) = ¢(x) for all x E H
dO
(1.22)
( 1. 23)
E
¢(J x) < ¢ (x) < ¢(x) for all x E H. E
E
( 1. 24)
For the proof we refer the reader to [20J or [16J p. 107. Let K be a closed convex subset of Hand let IK be the indicator function of K, i.e. IK(x)
f 0 L +00
if x E K
(1.25 ) if x E K.
Obviously the function IK is convex and lower semicontinuous. Note that 18
aIK(x)
= {y
E
H~
(y,x-u) > 0 for all u E K}
is just the normal cone to K at x and
where PK is the projection operator of H into K. We conclude this section with a few words on the perturbation of subdifferential mappings. If A = a¢ and B is a maximal monotone subset of H x H such that (int O(a¢)) n O(B) t 0 then by Theorem 1.5 we infer that A+B is maximal monotone. In particular, if B = ag where g~H ~ ~ is an l.s.c. convex function then under the above condition we have a(f~g) = af +. ago Theorem 1.10
([19J, [20J)
A be a maximal monotone
Let ¢:H ~ ~ be an l.s.c. convex function and let
ope~ato~ f~om
H to itself.
Assume that
the~e
exists
a real constant C such that (~,a¢E:(X))
> -C(1 +.la¢E:(x)I(1 +. Ixl)) fo~ all
E
>
0 and (1. 26)
[x,~J
E A and O(A) n O(a¢) f 0.
Then the ope~ato~ A +. a¢ is maximal monotone, O(A) n O(¢) c O(A
n D(8¢), and
fo~ some positive constant
+.
a¢)
c
UTA)
Co' (1.27)
Proof Let y be arbitrary but fixed in H. To prove that A + ∂φ is maximal monotone it suffices to show that the equation

x + Ax + ∂φ(x) ∋ y (1.28)

has a solution x ∈ D(A) ∩ D(∂φ). Consider the equation

x_ε + Ax_ε + ∂φ_ε(x_ε) ∋ y, ε > 0, (1.29)

which by virtue of Theorem 1.7 has a solution x_ε. Let x_0 be an element of D(A) ∩ D(∂φ). Multiplying (1.29) by x_ε − x_0 and using the monotonicity of A and ∂φ_ε, we get

|x_ε − x_0|^2 ≤ (y − x_0 − ξ_0 − ∂φ_ε(x_0), x_ε − x_0),

where ξ_0 ∈ Ax_0. Recalling that |∂φ_ε(x_0)| ≤ |(∂φ)^0(x_0)|, this yields

|x_ε| ≤ C for all ε > 0. (1.30)

Now we take the scalar product of (1.29) by ∂φ_ε(x_ε) and use condition (1.26) to obtain, after some manipulation,

|∂φ_ε(x_ε)| ≤ C for all ε > 0. (1.31)

Now we subtract the defining equations for x_ε and x_λ and then multiply the result (scalarly in H) by x_ε − x_λ to get

|x_ε − x_λ|^2 + (∂φ_ε(x_ε) − ∂φ_λ(x_λ), x_ε − x_λ) ≤ 0.

Since ∂φ_ε(x) ∈ ∂φ((I + ε ∂φ)^{-1} x) and ∂φ is monotone, we have

(∂φ_ε(x_ε) − ∂φ_λ(x_λ), x_ε − x_λ) ≥ (∂φ_ε(x_ε) − ∂φ_λ(x_λ), ε ∂φ_ε(x_ε) − λ ∂φ_λ(x_λ)).

Then by (1.31) we see that {x_ε} is a Cauchy sequence and therefore x = lim_{ε→0} x_ε exists. On the other hand, by (1.31) it follows that on a sequence ε_n → 0,

∂φ_{ε_n}(x_{ε_n}) → y_0 weakly in H. (1.32)

By Theorem 1.2 part (vii) we infer that [x, y_0] ∈ ∂φ. On the other hand, since A is demiclosed, it follows by (1.29) and (1.32) that y − y_0 − x ∈ Ax.

Now let y be arbitrary but fixed in D(A) ∩ D(φ) and let x_λ^ε be the solution to

x_λ^ε + λ(Ax_λ^ε + ∂φ_ε(x_λ^ε)) ∋ y, ε, λ > 0.

Multiplying this by ∂φ_ε(x_λ^ε) and then by Ax_λ^ε, after some manipulation it follows by condition (1.26) that x_λ^ε → y as λ → 0, uniformly with respect to ε. Since, as seen above, lim_{ε→0} x_λ^ε = x_λ = (I + λ(A + ∂φ))^{-1} y, we have x_λ → y as λ → 0. Hence y ∈ D̄(A + ∂φ) as claimed.

Now by condition (1.26) we have

|∂φ_ε(x)| ≤ C_0(1 + |x| + |ξ|) for all [x,ξ] ∈ A and [x,η] ∈ ∂φ, (1.33)

because ∂φ_ε(x) ∈ ∂φ(J_ε x), J_ε = (I + ε ∂φ)^{-1}, and hence (η − ∂φ_ε(x), x − J_ε x) ≥ 0. Letting ε tend to zero in (1.33) and using Theorem 1.2 part (iv) we get (1.27), thereby completing the proof.

THEOREM 1.11
The conclusions of Theorem 1.10 remain true if condition (1.26) is replaced by

φ((I + εA)^{-1}(x + εh)) ≤ φ(x) + Cε for all x ∈ H and ε > 0 (1.34)

for some h ∈ H and C ∈ R.

Proof Since the proof is entirely similar to that of Theorem 1.10, it will be given in outline only. First we note that for ε > 0 the equation

x_ε + A_ε x_ε + ∂φ(x_ε) ∋ y (1.35)

has a solution x_ε. On the other hand, it follows by (1.34) that

(η, A_ε(x + εh)) ≥ (η, h) + (φ(x) − φ(J_ε(x + εh))) ε^{-1} ≥ −C − |h||η| for all ε > 0, [x,η] ∈ ∂φ.

Then writing (1.35) as y − x_ε − A_ε x_ε ∈ ∂φ(x_ε) and using the given inequality we see that {|A_ε x_ε|} is bounded. Then, arguing as in the proof of Theorem 1.10, we conclude that {x_ε} is convergent to the solution to (1.28).

REMARK 1.1 In the special case where φ = I_K, condition (1.34) becomes: there exists h ∈ H such that

(I + εA)^{-1}(x + εh) ∈ K for all x ∈ K and ε > 0.

§1.5 Some examples of subdifferential mappings
EXAMPLE 1.1 Maximal monotone graphs in R × R. Let β be a maximal monotone graph in R × R. Then there exists a lower semicontinuous convex function j: R → R̄ such that ∂j = β. Here is the argument. Clearly there exist −∞ ≤ a ≤ b ≤ +∞ such that ]a,b[ ⊂ D(β) ⊂ [a,b]. Let β^0 be the minimal section of β. The function β^0 is single-valued and monotonically increasing on D(β). Moreover, for each x ∈ ]a,b[, β(x) = [β^0(x−), β^0(x+)], and β(a) = ]−∞, β^0(a+)] if a ∈ D(β), β(b) = [β^0(b−), +∞[ if b ∈ D(β) (this follows from the maximality of β). Let x_0 ∈ D(β) and let j: R → R̄ be the function defined by

j(y) = ∫_{x_0}^{y} β^0(s) ds for y ∈ [a,b]; j(y) = +∞ for y ∉ [a,b]. (1.36)

We have

j(y) − j(z) ≤ ∫_z^y β^0(s) ds ≤ ξ(y − z) for all y ∈ D(β), z ∈ R and ξ ∈ β(y).

Hence β(y) ⊂ ∂j(y) for all y ∈ D(β). It must be noted that j is uniquely defined by β up to an additive constant.

EXAMPLE 1.2 Self-adjoint operators. Let A: D(A) ⊂ H → H be a linear self-adjoint and positive operator in H. Define the function f: H → R̄ by

f(y) = |A^{1/2} y|^2/2 for y ∈ D(A^{1/2}); f(y) = +∞ otherwise.

It is readily seen that f is convex, l.s.c. and

(1/2)(Ay,y) − (1/2)(Au,u) ≤ (Ay, y − u) for all u ∈ D(A).

This inequality extends to all u ∈ D(f) = D(A^{1/2}), showing that A ⊂ ∂f. On the other hand, by a standard method it follows that R(I + A) = H (this follows by proving that R(I + A) is simultaneously closed and dense in H). More generally, if A is a linear continuous and symmetric operator from a real Hilbert space V to its dual V' (not identified with V), then A = ∂φ, where φ: V → R is given by

φ(y) = (1/2)(Ay,y), y ∈ V.

((·,·) is the pairing between V and V'.)

EXAMPLE 1.3 Convex integrands. Let Ω be a bounded and measurable subset of the Euclidean space R^n and let g: Ω × R → R̄ satisfy the following conditions:
(i) g(x,·): R → R̄ is convex, l.s.c. and ≢ +∞, a.e. x ∈ Ω.
(ii) g is measurable with respect to the σ-field of subsets of Ω × R generated by products of Lebesgue sets in Ω and Borel sets in R.
(iii) g majorizes at least one function h: Ω × R → R of the form h(x,y) = α(x)y + β(x), where α ∈ L^2(Ω) and β ∈ L^1(Ω).
(iv) g(·,y_0) ∈ L^1(Ω) for at least one function y_0 ∈ L^2(Ω).

A function g satisfying conditions (i), (ii) is called a normal convex integrand on Ω × R ([74]), and conditions (ii), (iii), (iv) automatically hold if g is independent of x. Define the function I_g: L^2(Ω) → R̄,

I_g(y) = ∫_Ω g(x,y(x)) dx if g(x,y) ∈ L^1(Ω); I_g(y) = +∞ otherwise.

By assumptions (i), (ii) the function x → g(x,y(x)) is Lebesgue measurable on Ω for every Lebesgue measurable function y, and I_g(y) is well defined for every y ∈ L^2(Ω).
PROPOSITION 1.9 Under conditions (i) to (iv) in Example 1.3 the function I_g is convex, l.s.c. and ≢ +∞ on H = L^2(Ω). For every y ∈ H the subdifferential ∂I_g(y) is given by

∂I_g(y) = {w ∈ L^2(Ω); w(x) ∈ ∂g(x,y(x)), a.e. x ∈ Ω},

where ∂g is the subdifferential of g as a function of y.

For the proof see [16] p. 102. In particular it follows by Proposition 1.9 that

(∂I_g)_ε(y)(x) = ∂g_ε(x,y(x)), a.e. x ∈ Ω,

(I_g)_ε(y) = ∫_Ω g_ε(x,y(x)) dx, ∀y ∈ L^2(Ω).
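The pointwise formula for (∂I_g)_ε can be seen in a small numerical sketch. The discretization below is an assumption made for illustration (Ω split into n cells of unit measure, so L^2(Ω) ≃ R^n, and g(x,r) = |r| independent of x): the resolvent of ∂I_g then acts componentwise by soft-thresholding, and the Yosida approximation of ∂I_g agrees with the pointwise Yosida approximation ∂g_ε(r) = clamp(r/ε, −1, 1), as Proposition 1.9 predicts.

```python
import math

def prox_abs(eps, r):                 # resolvent of d|.| in R
    return math.copysign(max(abs(r) - eps, 0.0), r)

def yosida_scalar(eps, r):            # dg_eps(r) = clamp(r/eps, -1, 1)
    return max(-1.0, min(1.0, r / eps))

def yosida_Ig(eps, y):                # (dI_g)_eps on R^n: (y - prox(y))/eps
    return [(yi - prox_abs(eps, yi)) / eps for yi in y]

eps, y = 0.25, [-1.0, -0.1, 0.0, 0.2, 3.0]
assert all(abs(a - yosida_scalar(eps, yi)) < 1e-12
           for a, yi in zip(yosida_Ig(eps, y), y))
```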
REMARK 1.2 Consider the function I_φ: L^2(0,T;H) → R̄ defined by

I_φ(y) = ∫_0^T φ(y(t)) dt, ∀y ∈ L^2(0,T;H),

where φ: H → R̄ is an l.s.c. convex function on H. Then I_φ is convex, l.s.c. and

∂I_φ(y) = {w ∈ L^2(0,T;H); w(t) ∈ ∂φ(y(t)), a.e. t ∈ ]0,T[}.

EXAMPLE 1.4 Let Ω be a bounded and open subset of R^n with a sufficiently smooth boundary Γ. Let j: R → R̄ be a lower semicontinuous convex function and let β = ∂j. Define the function φ: L^2(Ω) → R̄ by

φ(y) = (1/2) ∫_Ω |∇y|^2 dx + ∫_Γ j(y) dσ if y ∈ H^1(Ω) and j(y) ∈ L^1(Γ); φ(y) = +∞ otherwise.

It is easily seen that φ is convex and lower semicontinuous. The subdifferential ∂φ: L^2(Ω) → L^2(Ω) is given by (Brezis [20], [21])

∂φ(y) = −Δy, ∀y ∈ D(∂φ), (1.37)

D(∂φ) = {y ∈ H^2(Ω); −∂y/∂ν ∈ β(y), a.e. in Γ},

where ∂/∂ν is the outward normal derivative. Furthermore, the following estimate holds:

‖y‖_{H^2(Ω)} ≤ C(‖y − Δy‖_{L^2(Ω)} + 1) for all y ∈ D(∂φ). (1.38)
EXAMPLE 1.5 Let g: R → R̄ be a lower semicontinuous convex function and let φ: L^2(Ω) → R̄ be the function

φ(y) = (1/2) ∫_Ω |∇y|^2 dx + ∫_Ω g(y) dx if y ∈ H^1_0(Ω) and g(y) ∈ L^1(Ω); φ(y) = +∞ otherwise.

Obviously φ is l.s.c. and convex.

PROPOSITION 1.10 Assume that 0 ∈ D(∂g). Then we have

D(∂φ) = {y ∈ H^1_0(Ω) ∩ H^2(Ω); ∃ w ∈ L^2(Ω) such that w(x) ∈ ∂g(y(x)) a.e. x ∈ Ω} (1.39)

∂φ(y) = {−Δy + w; w(x) ∈ ∂g(y(x)) a.e. x ∈ Ω}. (1.40)

Proof Let A: D(A) = H^1_0(Ω) ∩ H^2(Ω) → L^2(Ω) be the self-adjoint positive definite operator

Ay = −Δy for y ∈ D(A),

and let ∂I_g: L^2(Ω) → L^2(Ω) be the subdifferential of the function I_g: L^2(Ω) → R̄ defined in Example 1.3. In terms of A and ∂I_g, (1.39) and (1.40) can be written as (see Proposition 1.9)

∂φ = A + ∂I_g. (1.41)

It is readily seen that (A + ∂I_g)(y) ⊂ ∂φ(y) for all y ∈ D(A) ∩ D(∂I_g). Thus to prove that ∂φ = A + ∂I_g it suffices to show that A + ∂I_g is a maximal monotone operator in L^2(Ω). To this end we shall apply Theorem 1.10 with φ = I_g. Replacing the operator ∂I_g by y → ∂I_g(y) − ξ_0, where ξ_0 ∈ ∂g(0), we may assume that 0 ∈ ∂g(0). Since, as remarked earlier, (∂I_g)_ε(y) = ∂g_ε(y) and ∂g_ε(0) = 0, (∂g_ε)'(r) ≥ 0 for all r ∈ R, we have by the Green formula

(Ay, (∂I_g)_ε(y)) = −∫_Ω Δy(x) ∂g_ε(y(x)) dx = ∫_Ω |∇y(x)|^2 (∂g_ε)'(y(x)) dx ≥ 0, ∀y ∈ D(A).

Thus condition (1.26) is satisfied, and Theorem 1.10 applies to conclude that A + ∂I_g is maximal monotone.

COROLLARY 1.1
Let β be a maximal monotone graph in R × R such that 0 ∈ D(β). Then for every f ∈ L^2(Ω) the boundary value problem

−Δy + β(y) ∋ f a.e. in Ω; y = 0 in Γ (1.42)

has a unique solution y ∈ H^1_0(Ω) ∩ H^2(Ω).

Here Ω is a bounded and open subset of the Euclidean space R^N having a sufficiently smooth boundary Γ. Corollary 1.1 follows by Proposition 1.10 and Theorem 1.4, noting that, by virtue of the result established in Example 1.1, β = ∂g where g is a lower semicontinuous convex function on R.

EXAMPLE 1.6 Let g: R → R̄ and j: R → R̄ be two lower semicontinuous convex functions and let γ = ∂g, β = ∂j. Consider the function φ: L^2(Ω) → R̄,
φ(y) = (1/2) ∫_Ω |∇y|^2 dx + ∫_Ω g(y) dx + ∫_Γ j(y) dσ.

The function φ is lower semicontinuous and convex. We shall assume that 0 ∈ D(γ) and 0 ∈ D(β). Then the subdifferential ∂φ of φ is given by

∂φ(y) = {−Δy + w; w ∈ L^2(Ω), w(x) ∈ γ(y(x)), a.e. x ∈ Ω}, y ∈ D(∂φ),

D(∂φ) = {y ∈ H^2(Ω); ∂y/∂ν + β(y) ∋ 0 a.e. in Γ}. (1.43)

The latter follows by Theorem 1.10, where A is the operator defined by (1.37) and φ is the function I_g studied in Example 1.3. Condition (1.26) follows as in the proof of Proposition 1.10.

REMARK 1.3 In Examples 1.4, 1.5 and 1.6 we may replace the Laplace operator Δ by a second-order elliptic symmetric differential operator on Ω.
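Corollary 1.1 lends itself to a simple numerical illustration in one space dimension. The sketch below is an assumption-laden toy (uniform grid on (0,1), unit cell weights, β = sign = ∂|·|, a maximal monotone graph with 0 in its domain): it minimizes the discrete energy (1/2) y·Ay − f·y + Σ|y_i| by proximal-gradient iteration, one standard way of handling the multivalued term; at the limit, w = f + y'' is a selection of sign(y).

```python
import math

n, h = 19, 1.0 / 20                            # grid on (0,1), Dirichlet ends

def Amul(y):                                   # discrete -y''
    return [(2 * y[i] - (y[i-1] if i else 0.0)
             - (y[i+1] if i < n - 1 else 0.0)) / h ** 2 for i in range(n)]

f = [12.0 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
tau = h * h / 4                                # step < 1/lambda_max(A)
y = [0.0] * n
for _ in range(5000):                          # proximal-gradient iteration
    g = [yi - tau * (ai - fi) for yi, ai, fi in zip(y, Amul(y), f)]
    y = [math.copysign(max(abs(v) - tau, 0.0), v) for v in g]

w = [fi - ai for fi, ai in zip(f, Amul(y))]    # selection w = f + y'' in sign(y)
assert all(yi > 0 for yi in y)                 # here f > 1, so y > 0 throughout
assert max(abs(wi - 1.0) for wi in w) < 1e-6   # hence the selection is w = 1
```

Since f > 1 everywhere on this grid, the solution stays positive and the inclusion −y'' + sign(y) ∋ f reduces to −y'' + 1 = f, which the asserts confirm.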
§1.6 Generalized gradients of locally Lipschitz functions

We shall present here an extension of the gradient concept essentially due to F.H. Clarke (see [26], [27], [75]). Let X be a real Banach space with norm denoted by ‖·‖ and dual X*. As usual we shall denote by (·,·) the pairing between X and X*. Let f: X → R be a locally Lipschitz function on X; i.e. for every r > 0 there exists L_r > 0 such that

|f(x) − f(y)| ≤ L_r ‖x − y‖ for x, y ∈ {z ∈ X; ‖z‖ ≤ r}.

The generalized directional derivative of f at x ∈ X in the direction h ∈ X is by definition

f^0(x,h) = lim sup_{y→x, λ↓0} (f(y + λh) − f(y))/λ. (1.44)

It is readily seen that f^0(x,h) is a finite number for all h in X, and the function h → f^0(x,h) is subadditive, positively homogeneous and f^0(x,h) ≤ C‖h‖ for all h ∈ X. Then by virtue of the Hahn-Banach theorem there exists at least one element η ∈ X* satisfying

(η,h) ≤ f^0(x,h) for all h ∈ X. (1.45)

By definition, the generalized gradient of f at x, denoted ∂f(x), is the nonempty set of all η ∈ X* satisfying inequality (1.45).

PROPOSITION 1.11 Let f: X → R be a locally Lipschitz function. Then

(i) For every x ∈ X, ∂f(x) is a convex and weak star compact subset of X*.
(ii) ∂f is weak star upper semicontinuous; i.e. if x_n → x strongly in X and x_n* → x* weak star in X*, where x_n* ∈ ∂f(x_n), then x* ∈ ∂f(x).
(iii) The function f^0: X × X → R is upper semicontinuous.
Proof In proof of (i), by (1.44) and (1.45) it follows that ∂f(x) is convex, closed and bounded. Thus ∂f(x) is weak star compact. Clearly (ii) follows by (iii). To prove the latter we consider two sequences {x_n} and {h_n} strongly convergent to x and h respectively. For every n there exist y_n in X and λ_n ∈ ]0,1[ such that ‖y_n‖ + λ_n ≤ n^{-1} and

f^0(x_n,h_n) ≤ (f(x_n + y_n + λ_n h_n) − f(x_n + y_n)) λ_n^{-1} + n^{-1}.

Letting n tend to ∞ we find that

lim sup_{n→∞} f^0(x_n,h_n) ≤ f^0(x,h) (1.46)

as claimed.

Now we shall consider some particular cases.

(a) X = R^n. In this case ∂f(x) is the set (see [26])

∂f(x) = conv {y = lim_{n→∞} ∇f(x_n); x_n → x}. (1.47)

In other words, f being differentiable a.e. by Rademacher's theorem, one considers all sequences {x_n} converging to x such that f is differentiable at x_n and the limit of ∇f(x_n) exists. The convex hull of these limits is just ∂f(x).

(b) If f is convex then ∂f(x) coincides with the subdifferential of f. Indeed, in this case f^0 = f' and the assertion follows by Proposition 1.6.

(c) If f admits a continuous Gateaux derivative ∇f then ∂f(x) = {∇f(x)}.
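Formula (1.47) and the definition (1.44) can be probed numerically for f(x) = |x| on X = R: away from the origin f'(x) = sign(x), so the limits of ∇f(x_n) along sequences x_n → 0 are exactly ±1 and conv{−1, +1} = [−1, 1] = ∂f(0). The sampled supremum below only approximates the lim sup in (1.44) (an assumption of this illustration), but it recovers f^0(0,h) = |h|:

```python
f = abs

def f0(x, h, delta=1e-6):
    # crude approximation of the lim sup in (1.44): sample y near x, lam small
    best = -float("inf")
    for k in range(-20, 21):
        y = x + k * delta / 20
        for lam in (delta, delta / 4, delta / 16):
            best = max(best, (f(y + lam * h) - f(y)) / lam)
    return best

assert abs(f0(0.0, 1.0) - 1.0) < 1e-6    # f0(0, h) = |h|, so df(0) = [-1, 1]
assert abs(f0(0.0, -1.0) - 1.0) < 1e-6
assert abs(f0(2.0, -1.0) + 1.0) < 1e-6   # at smooth points f0(x, h) = f'(x) h
```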
We shall conclude this section with an approximation result. Let H be a real separable Hilbert space and let {e_i}_{i=1}^∞ be an orthonormal basis in H. Denote by X_n the finite dimensional space generated by {e_i}_{i=1}^n and by P_n: H → X_n the projection of H onto X_n, i.e.

P_n x = Σ_{i=1}^n x_i e_i, where x = Σ_{i=1}^∞ x_i e_i. (1.48)

Let Λ_n: R^n → X_n be the operator

Λ_n(τ) = Σ_{i=1}^n τ_i e_i, τ = (τ_1,...,τ_n). (1.49)

Let f: H → R be a locally Lipschitz function and let f_ε: H → R be defined by

f_ε(x) = ∫_{R^n} f(P_n x − ε Λ_n τ) ρ_n(τ) dτ, n = [ε^{-1}], (1.50)

where ρ_n ∈ C_0^∞(R^n) is such that ∫_{R^n} ρ_n(θ) dθ = 1 and ρ_n(θ) = ρ_n(−θ) for all θ ∈ R^n.

PROPOSITION 1.12 The function f_ε is continuously Fréchet differentiable and

lim_{ε→0} f_ε(x) = f(x) for all x ∈ H. (1.51)

Let {x_ε} be strongly convergent to x and such that, for ε → 0,

∇f_ε(x_ε) → ξ weakly in H. (1.52)

Then ξ ∈ ∂f(x).
Proof By an obvious substitution we may write f_ε as

f_ε(x) = ε^{-n} ∫_{R^n} f(Λ_n θ) ρ_n((Λ_n^{-1} P_n x − θ) ε^{-1}) dθ,

from which we see that f_ε is Fréchet differentiable and ∇f_ε is continuous on H. Now by (1.50) we have

f_ε(x) − f(x) = ∫_{R^n} (f(P_n x − ε Λ_n τ) − f(x)) ρ_n(τ) dτ,

which yields

|f_ε(x) − f(x)| ≤ C(|P_n x − x| + ε),

and (1.51) follows. Now assuming (1.52) holds, we have by the mean value theorem

λ^{-1}(f_ε(x_ε + λz) − f_ε(x_ε)) = Σ_{i=0}^n λ^{-1}(f(P_n(x_ε + λz) − ε Λ_n τ_{n,λ}^i) − f(P_n x_ε − ε Λ_n τ_{n,λ}^i)) a_{n,λ}^i,

where a_{n,λ}^i ≥ 0, Σ_{i=0}^n a_{n,λ}^i = 1 and ‖τ_{n,λ}^i‖_n ≤ 1. Thus, selecting a subsequence from {λ}, we may assume that τ_{n,λ}^i → τ^i and a_{n,λ}^i → a^i for λ → 0, and the last relation yields

(∇f_ε(x_ε), P_n z) ≤ Σ_{i=0}^n f^0(P_n x_ε − ε Λ_n τ^i, P_n z) a^i, z ∈ H.

Then by Proposition 1.11 part (iii) we infer that (ξ,z) ≤ f^0(x,z) for all z in H. Hence ξ ∈ ∂f(x) as claimed.
COROLLARY 1.2 If x and y are distinct points of H, then there is a point z in the line segment between x and y and an element η ∈ ∂f(z) such that

f(y) − f(x) = (η, y − x). (1.53)

Proof Let f_ε be defined by (1.50). According to the classical mean value theorem, for every ε > 0 there exists z_ε such that

f_ε(y) − f_ε(x) = (∇f_ε(z_ε), y − x), z_ε = λ_ε x + (1 − λ_ε) y, where λ_ε ∈ [0,1].

In as much as {∇f_ε(z_ε)} is bounded we have, for some sequence ε_n → 0,

∇f_{ε_n}(z_{ε_n}) → η weakly in H, z_{ε_n} → z strongly in H.
By Proposition 1.12 we infer that z and η satisfy (1.53).

PROPOSITION 1.13 Let H = L^2(Ω) and let f: H → R be given by

f(y) = ∫_Ω g(x,y(x)) dx, y ∈ L^2(Ω),

where g: Ω × R → R is measurable in x ∈ Ω and Lipschitzian in y, i.e.

|g(x,y) − g(x,z)| ≤ α(x)|y − z| a.e. x ∈ Ω for all y,z ∈ R,

where α ∈ L^2(Ω) and g(x,0) ∈ L^1(Ω). Then f is locally Lipschitzian and

∂f(y) ⊂ {w ∈ L^2(Ω); w(x) ∈ ∂g(x,y(x)) a.e. x ∈ Ω}, (1.54)

where ∂g denotes the generalized gradient of y → g(x,y).

Proof By the definition of f^0 we have

f^0(y,h) = lim sup_{z→y, λ↓0} ∫_Ω (g(x,z(x) + λh(x)) − g(x,z(x))) λ^{-1} dx.

Then using the Fatou lemma we may take the lim sup under the integral sign and get

f^0(y,h) ≤ ∫_Ω g^0(y,h) dx, ∀h ∈ L^2(Ω),

which implies (1.54) by a standard method. Note that if g is a convex integrand (Example 1.3), or if g is continuously differentiable in y, then equality holds in (1.54).

§1.7 Nonlinear evolution equations in Hilbert spaces

Throughout this section H will be a real Hilbert space with scalar product (·,·) and norm |·|. Consider the evolution equation
(dx/dt)(t) + Ax(t) ∋ f(t) a.e. t ∈ ]0,T[ (1.55)

with the initial value condition

x(0) = x_0, (1.56)

where A ⊂ H × H, x_0 ∈ H and f ∈ L^1(0,T;H). By a solution to the Cauchy problem (1.55), (1.56) we mean an absolutely continuous function x: [0,T] → H which satisfies (1.55) a.e. on ]0,T[, and condition (1.56). Theorem 1.12 is essentially due to Y. Komura, and it has been extended in several directions by T. Kato, M.G. Crandall, A. Pazy and H. Brezis (see [6], [20], [28], [71]).

THEOREM 1.12 Let A ⊂ H × H be such that for some real ω the operator A + ωI is maximal monotone. Then for every x_0 ∈ D(A) and f ∈ W^{1,1}([0,T];H) problem (1.55), (1.56) has a unique solution x ∈ W^{1,∞}([0,T];H). Moreover, one has

(dx/dt)(t) = (f(t) − Ax(t))^0 a.e. t ∈ ]0,T[. (1.57)

Here (f(t) − Ax(t))^0 is the element of minimum norm in f(t) − Ax(t).

Proof Denote by A^ω ⊂ H × H the operator A + ωI and by J_λ^ω, A_λ^ω the corresponding operators defined by (1.5). For every λ > 0 consider the Cauchy problem
dx_λ/dt + A_λ^ω x_λ − ω x_λ = f, t ∈ [0,T],
x_λ(0) = x_0. (1.58)

Since A_λ^ω is Lipschitz from H to itself, problem (1.58) has a unique differentiable solution x_λ: [0,T] → H. Without loss of generality we may assume that A0 ∋ 0, or equivalently, A_λ^ω 0 = 0. Multiplying (1.58) by x_λ and integrating on [0,t] we get

(1/2)|x_λ(t)|^2 ≤ (1/2)|x_0|^2 + ∫_0^t (f(s) + ω x_λ(s), x_λ(s)) ds, 0 ≤ t ≤ T,

because (A_λ^ω x, x) ≥ 0. Then by the Gronwall lemma it follows that

|x_λ(t)| ≤ C for λ > 0, t ∈ [0,T]. (1.59)

Now we multiply (scalarly in H) (1.58) by d/dt(A_λ^ω x_λ − ω x_λ) and integrate on [0,t]; since A_λ^ω is monotone, integrating by parts and using Theorem 1.2 part (iv) yields

|x_λ'(t)| + |A_λ^ω x_λ(t)| ≤ C for all λ > 0 and t ∈ [0,T]. (1.60)

Next, subtracting the defining equations for x_λ and x_μ we get
(1/2)(d/dt)|x_λ(t) − x_μ(t)|^2 + (A_λ^ω x_λ(t) − A_μ^ω x_μ(t), x_λ(t) − x_μ(t)) ≤ ω|x_λ(t) − x_μ(t)|^2 for all t ∈ [0,T]. (1.61)

On the other hand, we have

(A_λ^ω x_λ − A_μ^ω x_μ, x_λ − x_μ) = (A_λ^ω x_λ − A_μ^ω x_μ, J_λ^ω x_λ − J_μ^ω x_μ) + (A_λ^ω x_λ − A_μ^ω x_μ, λ A_λ^ω x_λ − μ A_μ^ω x_μ) ≥ (A_λ^ω x_λ − A_μ^ω x_μ, λ A_λ^ω x_λ − μ A_μ^ω x_μ).

Together with (1.60) and (1.61) this yields

|x_λ(t) − x_μ(t)|^2 ≤ C(λ + μ) for t ∈ [0,T].

Hence x(t) = lim_{λ→0} x_λ(t) exists uniformly in t on [0,T]. Now by (1.60) we see that x ∈ W^{1,∞}([0,T];H). Let t_0 ∈ [0,T] be such that (dx/dt)(t_0) exists (we recall that dx/dt exists a.e. on ]0,T[). By (1.58) we see that for all y ∈ H,

(1/2)(d/dt)|x_λ(t) − y|^2 + (A_λ^ω x_λ(t) − ω x_λ(t) − f(t), x_λ(t) − y) ≤ 0.

Integrating this on [t_0, t_0 + ε] we get

(1/2)(|x_λ(t_0 + ε) − y|^2 − |x_λ(t_0) − y|^2) + ∫_{t_0}^{t_0+ε} (A_λ^ω x_λ(t) − ω x_λ(t) − f(t), x_λ(t) − y) dt ≤ 0. (1.62)

Let [x^0, y^0] be an arbitrary element of A^ω and let x_λ^0 = x^0 + λ y^0. In (1.62) we take y = x_λ^0 and let λ tend to zero. Since y^0 = A_λ^ω x_λ^0 and A_λ^ω is monotone, this gives

(1/2)(|x(t_0 + ε) − x^0|^2 − |x(t_0) − x^0|^2) + ∫_{t_0}^{t_0+ε} (y^0 − ω x(t) − f(t), x(t) − x^0) dt ≤ 0.

Letting ε tend to zero yields

((dx/dt)(t_0) + y^0 − ω x(t_0) − f(t_0), x(t_0) − x^0) ≤ 0.

Since [x^0, y^0] is arbitrary in A^ω we conclude that
f(t_0) + ω x(t_0) − (dx/dt)(t_0) ∈ A^ω x(t_0), i.e. (dx/dt)(t_0) + Ax(t_0) ∋ f(t_0).

If y is a solution to (1.55) with initial condition y(0) = y_0, we have

(d/dt)(x(t) − y(t)) + Ax(t) − Ay(t) ∋ 0 a.e. t ∈ ]0,T[.

Taking the scalar product with x(t) − y(t) and integrating on [0,t], it follows by Gronwall's lemma that

|x(t) − y(t)| ≤ e^{ωt} |x_0 − y_0|, t ∈ [0,T]. (1.63)

Now again by (1.55) we have

(1/2)(d/dh)|x(t+h) − x(t)|^2 + (Ax(t+h) − f(t+h), x(t+h) − x(t)) ∋ 0 a.e. t, h ∈ ]0,T[. (1.64)

Since A + ωI is monotone, this gives

|x(t+h) − x(t)| ≤ ∫_0^h |f(s+t) − ξ(t)| ds + ω ∫_0^h |x(s+t) − x(t)| ds

for all ξ(t) ∈ Ax(t), t, t+h ∈ [0,T]. Hence

|x'(t)| ≤ |f(t) − ξ(t)| for all ξ(t) ∈ Ax(t), a.e. t ∈ ]0,T[,

and (1.57) follows.

§1.8
Evolution equations associated with subdifferential mappings
We shall study here problem (1.55), (1.56) in the special case where A = ∂φ − ωI, i.e.

x'(t) + ∂φ(x(t)) − ω x(t) ∋ f(t) a.e. t ∈ ]0,T[,
x(0) = x_0, (1.65)

where ω is some real constant and ∂φ is the subdifferential of a lower semicontinuous convex function φ: H → R̄. The main result, Theorem 1.13, is due to H. Brezis ([19], [20]).
THEOREM 1.13 Let f be given in L^2(0,T;H) and x_0 ∈ D̄(φ). Then there exists a unique function x ∈ C([0,T];H) ∩ W^{1,2}(]0,T];H) which satisfies (1.65) almost everywhere on ]0,T[. If x_0 ∈ D(φ) then x ∈ W^{1,2}([0,T];H) and φ(x) ∈ AC([0,T]). Finally, if x_0 ∈ D(∂φ) and f ∈ W^{1,1}([0,T];H) then x ∈ W^{1,∞}([0,T];H). Moreover, in all these cases x satisfies (1.65) in the following precise sense:

x'(t) = (f(t) − ∂φ(x(t)) + ω x(t))^0 a.e. t ∈ ]0,T[. (1.66)

We preface the proof with the following lemma [19]:

LEMMA 1.2 Let x ∈ W^{1,2}([0,T];H) and u ∈ L^2(0,T;H) be such that u(t) ∈ ∂φ(x(t)) a.e. t ∈ ]0,T[. Then t → φ(x(t)) is absolutely continuous on [0,T] and

(d/dt) φ(x(t)) = (x'(t), u(t)) a.e. t ∈ ]0,T[. (1.67)
Proof of Theorem 1.13 First assume that x_0 ∈ D(∂φ) and f ∈ W^{1,1}([0,T];H). Let x = x(t,x_0,f) ∈ W^{1,∞}([0,T];H) be the corresponding solution to (1.65). Now we take the scalar product of (1.65) by t x' and use Lemma 1.2. We get

t|x'(t)|^2 + t (d/dt) φ(x(t)) = (ω t/2)(d/dt)|x(t)|^2 + t(x'(t), f(t)) a.e. t ∈ ]0,T[. (1.68)

Next we take the scalar product of (1.65) by x and integrate on [0,t]. After some manipulation one finds the estimate

|x(t)|^2 + ∫_0^t φ(x(s)) ds ≤ C(|x_0|^2 + ∫_0^T |f(s)|^2 ds), 0 ≤ t ≤ T.

Now integrating (1.68) on [0,t] we get

∫_0^T t|x'(t)|^2 dt + t φ(x(t)) + |x(t)|^2 ≤ C(|x_0|^2 + ∫_0^T |f(t)|^2 dt + 1). (1.69)

Now we multiply (1.65) by x' and integrate on [0,t] to find, via Lemma 1.2, the estimate

∫_0^T |x'(t)|^2 dt + φ(x(t)) + |x(t)|^2 ≤ C(|x_0|^2 + φ(x_0) + ∫_0^T |f(t)|^2 dt). (1.70)
Let x_0 ∈ D̄(φ) and f ∈ L^2(0,T;H) be fixed. Then there exist sequences {x_0^n} ⊂ D(A) and {f^n} ⊂ W^{1,1}([0,T];H) such that x_0^n → x_0 strongly in H and f^n → f strongly in L^2(0,T;H). Let x^n = x(t,x_0^n,f^n) be the corresponding solution to (1.65). It is readily seen that x = lim_{n→∞} x^n exists in C([0,T];H). Then by (1.69) we see that x ∈ W^{1,2}(]0,T];H). More precisely, t^{1/2} x' ∈ L^2(0,T;H) and t φ(x) ∈ L^∞(0,T).

Now, by the definition of ∂φ, we have

(x_n'(t), x_n(t) − y) + φ(x_n(t)) ≤ φ(y) + (ω x_n(t) + f_n(t), x_n(t) − y) a.e. t ∈ ]0,T[, y ∈ H. (1.71)

This yields

(1/2)(|x_n(t+ε) − y|^2 − |x_n(t) − y|^2) + ∫_t^{t+ε} φ(x_n(s)) ds ≤ ε φ(y) + ∫_t^{t+ε} (ω x_n(s) + f_n(s), x_n(s) − y) ds,

and letting n → ∞,

(1/2)(|x(t+ε) − y|^2 − |x(t) − y|^2) + ∫_t^{t+ε} φ(x(s)) ds ≤ ε φ(y) + ∫_t^{t+ε} (ω x(s) + f(s), x(s) − y) ds, ∀y ∈ H.

Since x is a.e. differentiable on ]0,T[, we infer that

(x'(t), x(t) − y) + φ(x(t)) ≤ φ(y) + (ω x(t) + f(t), x(t) − y) a.e. t ∈ ]0,T[, ∀y ∈ H.

Hence x satisfies (1.65) a.e. on ]0,T[. If x_0 ∈ D(φ) we see by (1.70) that

∫_0^T |x_n'|^2 dt + φ(x_n(t)) ≤ C,

and therefore x ∈ W^{1,2}([0,T];H). The case where x_0 ∈ D(∂φ) and f ∈ W^{1,1}([0,T];H) has been treated in Theorem 1.12.
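A minimal numerical sketch of the flow (1.65), assuming H = R, φ(x) = |x|, ω = 0 and f = 0: implicit Euler integrates x' + ∂φ(x) ∋ 0, each step being the resolvent (soft-thresholding) from Section 1.4, and the trajectory reaches 0 in finitely many steps, reproducing the finite-extinction property of this particular flow.

```python
import math

def resolvent(h, x):              # (I + h*d|.|)^{-1}: soft-threshold
    return math.copysign(max(abs(x) - h, 0.0), x)

h, x = 0.125, 1.0
traj = [x]
for _ in range(12):               # implicit Euler: x_{k+1} = resolvent(h, x_k)
    x = resolvent(h, x)
    traj.append(x)

assert traj[8] == 0.0 and traj[7] > 0.0   # extinction after |x0|/h = 8 steps
assert all(a >= b >= 0.0 for a, b in zip(traj, traj[1:]))
```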
Let H be a real Hilbert space identified with its own dual, and let V be a real reflexive Banach space with dual V' such that V ⊂ H ⊂ V' and the inclusion mapping of V into H is continuous and densely defined. The norms of V and H will be denoted by ‖·‖ and |·|, respectively. We shall denote by ‖·‖_* the (dual) norm of V' and by (v*,v) the value of v* ∈ V' at v ∈ V. If v, v* ∈ H this is the scalar product of H. We are given a family A(t) of nonlinear single-valued operators from V to V', 0 ≤ t ≤ T, satisfying the following conditions:

(i) For every v ∈ L^p(0,T;V), 2 ≤ p < ∞, the function t → A(t)v(t) is strongly V'-measurable on [0,T].
(ii) For every t ∈ [0,T] the operator A(t): V → V' is monotone and demicontinuous. There exists C > 0 such that

‖A(t)v‖_* ≤ C(1 + ‖v‖^{p−1}), ∀v ∈ V, a.e. t ∈ ]0,T[. (1.72)

(iii) There exist ω > 0 and α ∈ R such that

(A(t)v, v) + α|v|^2 ≥ ω‖v‖^p, ∀v ∈ V, a.e. t ∈ ]0,T[.

THEOREM 1.14 Let x_0 ∈ H and f ∈ L^q(0,T;V'), p^{-1} + q^{-1} = 1, be given. Then under assumptions (i) to (iii) there exists a unique function x ∈ W^{1,q}([0,T];V') ∩ C([0,T];H) ∩ L^p(0,T;V) such that

(dx/dt)(t) + A(t)x(t) = f(t) a.e. t ∈ ]0,T[,
x(0) = x_0. (1.73)

For the proof we refer the reader to [16] p. 64 or [50] Chapter 2.
2 Elliptic variational inequalities
In this chapter we present an introductory treatment of the theory of variational inequalities of stationary type. Since its inception in the work of Lions and Stampacchia [54], this has been one of the principal fields of application of the methods and results of nonlinear analysis. The main motivation for and interest in this theory stem from its relevance to the study of free boundary problems. These are boundary value problems involving partial differential equations on a given domain, parts of whose boundary (the free boundary) are unknown and must be found as a component of the solution. There are many standard works on elliptic variational inequalities that can serve as references for this chapter, including [17], [21], [22], [30] and [50].

§2.1
Abstract existence results
Throughout this section V and H are real Hilbert spaces such that V is dense in H and the injection of V into H is continuous. The norms of V and H will be denoted by ‖·‖ and |·|, respectively. H is identified with its own dual, and is then identified with a subspace of the dual V' of V; hence V ⊂ H ⊂ V', algebraically and topologically. For v ∈ V and v' ∈ V' denote by (v,v') the value of v' at v. We shall denote by ‖·‖_* the norm of V'. Let A ∈ L(V,V') be such that for some ω > 0,

(Av,v) ≥ ω‖v‖^2 for all v ∈ V. (2.1)

The operator A is often defined by the equation

(u,Av) = a(u,v) for all u,v ∈ V, (2.2)

where a: V × V → R is a bilinear continuous functional and

a(v,v) ≥ ω‖v‖^2 for all v ∈ V. (2.1)'

We are also given a lower semicontinuous convex function φ: V → R̄. If f is a given element of V', consider the following problem:

Find y ∈ V such that

a(y,y−z) + φ(y) − φ(z) ≤ (y−z,f) for all z ∈ V, (2.3)
where a is the bilinear form defined by (2.2). This is an abstract elliptic variational inequality associated with the operator A and the function φ. It is readily seen that (2.3) can be rewritten in the form

Ay + ∂φ(y) ∋ f, (2.3)'

where ∂φ: V → V' is the subdifferential of φ (see Section 1.4). In the special case where φ is the indicator function I_K of a closed convex subset K of V, i.e.

I_K(y) = 0 if y ∈ K; I_K(y) = +∞ if y ∉ K, (2.4)

then problem (2.3) becomes:

Find y ∈ K such that

a(y,y−z) ≤ (y−z,f) for all z ∈ K. (2.5)

It is instructive to observe that if the operator A is symmetric, i.e. a(y,z) = a(z,y) for all y,z ∈ V, then the variational inequality (2.3) is equivalent to the following minimization problem, the Dirichlet principle:

Minimize (1/2) a(z,z) + φ(z) − (z,f); z ∈ V. (2.6)

Indeed, it is readily seen that every solution y to (2.3) solves problem (2.6). Conversely, if y is a minimum point for the functional

ψ(z) = (1/2) a(z,z) + φ(z) − (z,f),

then 0 ∈ ∂ψ(y). Since, by Theorem 1.5, A + ∂φ is maximal monotone and ∂ψ = A + ∂φ − f, we may conclude that y is a solution to (2.3) (or (2.3)') as claimed.

In applications to partial differential equations V is usually a Sobolev
space on an open subset Ω of R^N and A is an elliptic differential operator on Ω. The space V and the function φ or the subset K ⊂ V incorporate various conditions on the boundary Γ or in Ω.

THEOREM 2.1 Let A be a linear continuous operator from V to V' satisfying condition (2.1) and let φ: V → R̄ be a lower semicontinuous convex function. Then for every f ∈ V' the variational inequality (2.3) has a unique solution y ∈ V. Moreover, the mapping f → y is Lipschitzian from V' to V.

Proof According to Theorem 1.7 the operator A + ∂φ is maximal monotone in V × V'. On the other hand, by definition of ∂φ and by condition (2.1) we have

(Au + v, u − u_0) ≥ ω‖u‖^2 − C(1 + ‖u‖) for all (u,v) ∈ ∂φ, (2.7)

where u_0 is a fixed element of D(∂φ). Hence A + ∂φ is coercive, and by Theorem 1.7 it is surjective. By condition (2.1) it is readily seen that the solution y to (2.3) is unique and

‖y‖ ≤ ω^{-1} ‖f‖_*. (2.8)

In particular, for φ = I_K defined by (2.4), we have:

COROLLARY 2.1 Let A: V → V' be a linear continuous operator satisfying assumption (2.1). Then for every f ∈ V', the variational inequality (2.5) has a unique solution y ∈ K.

REMARK 2.1 For existence, the coercivity condition (2.1) is too restrictive. According to Theorem 1.7, for existence in the variational inequality (2.5) it suffices to assume that for some v_0 ∈ K,

(Av, v − v_0) ‖v‖^{-1} → +∞ as ‖v‖ → ∞, v ∈ K.

Now let {φ_ε} be a family of Fréchet differentiable convex functions on V satisfying the following conditions:
φ_ε(y) ≥ −C(‖y‖ + 1) for all ε > 0 and y ∈ V, (2.9)

where C is independent of ε and y;

lim_{ε→0} φ_ε(y) = φ(y) for all y ∈ V; (2.10)

lim inf_{ε→0} φ_ε(y_ε) ≥ φ(y) (2.11)

for all y ∈ V and every sequence {y_ε} ⊂ V weakly convergent in V to y. Let {f_ε} ⊂ V' be such that for ε → 0

f_ε → f strongly in V'. (2.12)

Consider the equation

Ay_ε + ∇φ_ε(y_ε) = f_ε, (2.13)

where ∇φ_ε: V → V' is the gradient of φ_ε. By Theorem 2.1, for every ε > 0, (2.13) has a unique solution y_ε ∈ V.

THEOREM 2.2 Let A ∈ L(V,V') be a symmetric operator satisfying condition (2.1). Then under assumptions (2.9) to (2.12), for ε → 0,

y_ε → y* weakly in V, (2.14)

where y* is the solution to (2.3). Further, assume that

(∇φ_ε(y) − ∇φ_λ(z), y − z) ≥ −C(ε + λ) for all ε, λ > 0 and y, z ∈ V. (2.15)

Then

y_ε → y* strongly in V. (2.16)

Proof Let z be arbitrary but fixed in D(φ). By (2.13) and the definition of the gradient we have

(y_ε − z, Ay_ε) + φ_ε(y_ε) − φ_ε(z) ≤ (f_ε, y_ε − z), ∀z ∈ V. (2.17)

Then by (2.1) and (2.9), (2.10) we see that {‖y_ε‖} is bounded for ε → 0.
Hence there exists y* ∈ V and a sequence ε_n → 0 for n → ∞ such that

y_{ε_n} → y* weakly in V,
Ay_{ε_n} → Ay* weakly in V'. (2.18)

Since the function y → (y,Ay) is convex and continuous on V, it is weakly lower semicontinuous. Hence lim inf_{n→∞} (y_{ε_n}, Ay_{ε_n}) ≥ (y*, Ay*). Together with (2.11), (2.17) and (2.18) this yields

(Ay*, y*−z) + φ(y*) ≤ φ(z) + (f, y*−z) for all z ∈ V.

Hence y* is the solution to (2.3). Since the limit is unique, we conclude that (2.14) holds.

Now assume that condition (2.15) is satisfied. Since, as seen above, {‖y_ε‖} is bounded, it follows by (2.1), (2.13) and (2.15) that

ω‖y_ε − y_λ‖^2 ≤ C(ε + λ) + ‖f_ε − f_λ‖_* ‖y_ε − y_λ‖ for all ε, λ > 0.

This yields (2.16) as claimed.
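Theorem 2.2 is the abstract version of the classical penalty method, and a 1-D toy computation shows it at work (the discretization conventions — unit grid weights, a constant obstacle — are assumptions of this illustration). For K = {y ≥ ψ} take φ_ε(y) = |(ψ − y)^+|^2/2ε, whose gradient is the penalty term −ε^{-1}(ψ − y)^+; the penalized equation (2.13), with A the discrete −y'' under Dirichlet conditions, is solved by gradient descent on the smooth strongly convex energy.

```python
n, h, eps = 19, 1.0 / 20, 1e-3
psi, fval = -0.2, -10.0

def Amul(y):                                  # discrete -y'' (Dirichlet)
    return [(2 * y[i] - (y[i-1] if i else 0.0)
             - (y[i+1] if i < n - 1 else 0.0)) / h ** 2 for i in range(n)]

tau = 1.0 / (4 / h ** 2 + 1 / eps)            # step < 1/Lipschitz(gradient)
y = [0.0] * n
for _ in range(20000):                        # gradient descent on E_eps
    grad = [ai - fval - max(psi - yi, 0.0) / eps
            for ai, yi in zip(Amul(y), y)]
    y = [yi - tau * gi for yi, gi in zip(y, grad)]

assert min(y) > psi - 0.05     # the penalized solution violates K only by O(eps)
assert abs(y[9] - psi) < 0.05  # contact with the obstacle at the center
```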
In concrete situations, ∇φ_ε is a penalty operator associated with the variational inequality (2.3). A possible choice for φ_ε is (see (1.21))

φ_ε(y) = inf{|y − z|^2/2ε + φ(z); z ∈ H}. (2.19)

§2.2 A regularity result

We shall denote by A_H: H → H the operator

A_H y = Ay for all y ∈ D(A_H) = {v ∈ V; Av ∈ H}.

The operator A_H is positive definite in H and R(I + A_H) = H, because by Theorem 1.7 the operator I + A: V → V' is surjective (I is the unit operator in H). Hence A_H is maximal monotone in H × H.

THEOREM 2.3 Under assumption (2.1), suppose in addition that there exist h ∈ H and C ∈ R such that

φ((I + λA_H)^{-1}(y + λh)) ≤ φ(y) + Cλ for all λ > 0 and y ∈ V. (2.20)
Then for every f ∈ H the solution y* to (2.3) belongs to D(A_H) and

|Ay*| ≤ C(1 + |f|) for all f ∈ H. (2.21)

Proof Let A_λ ∈ L(H,H) be the operator defined by (1.5), i.e.

A_λ = λ^{-1}(I − J_λ) = A_H J_λ, λ > 0,

where J_λ = (I + λA_H)^{-1}. Let y* ∈ V be the solution to (2.3). Taking z = J_λ(y* + λh) in (2.3) we have

(Ay*, A_λ(y* + λh) − h) + (φ(y*) − φ(J_λ(y* + λh))) λ^{-1} ≤ (f, A_λ(y* + λh) − h).

In as much as (Ay, A_λ y) ≥ |A_λ y|^2 for all y ∈ V, and by condition (2.20), we infer that {|A_λ y*|} is bounded; by Theorem 1.2 part (vii) we infer that y* ∈ D(A_H) and that estimate (2.21) holds.

COROLLARY 2.2 Let A be a linear continuous operator from V to V' satisfying condition (2.1) and let K be a closed convex subset of V having the property that, for some h ∈ H,

(I + λA_H)^{-1}(y + λh) ∈ K for all λ > 0 and all y ∈ K. (2.22)

Then for every f ∈ H, the variational inequality (2.5) has a unique solution y* ∈ K ∩ D(A_H) which satisfies estimate (2.21).

Now we shall prove an approximating result similar to Theorem 2.2 in the case where {φ_ε} is a family of convex Fréchet differentiable functions on H satisfying the following conditions:
φ_ε(y) ≥ −C(|y| + 1) for all y ∈ H and ε > 0. (2.23)

lim_{ε→0} φ_ε(y) = φ(y) for all y ∈ D(φ). (2.24)

lim inf_{ε→0} φ_ε(y_ε) ≥ φ(y) if y_ε → y strongly in V. (2.25)

(Ay, ∇φ_ε(y)) ≥ −C(1 + |∇φ_ε(y)| + |Ay|) for all y ∈ D(A_H) and ε > 0. (2.26)

(∇φ_ε(y) − ∇φ_λ(z), y − z) ≥ −C(ε + λ)(1 + |∇φ_ε(y)| + |∇φ_λ(z)|) for all ε, λ > 0 and y, z ∈ H. (2.27)

Here ∇φ_ε: H → H is the gradient of φ_ε. Let y^ε ∈ D(A_H) be the solution to the equation (cf. (2.13))

A_H y^ε + ∇φ_ε(y^ε) = f_ε, (2.28)

where {f_ε} ⊂ H is such that

f_ε → f strongly in H. (2.29)

THEOREM 2.4 Under assumptions (2.23) to (2.27), for ε → 0,

y^ε → y* strongly in V, (2.30)
A_H y^ε → A_H y* weakly in H, (2.31)
∇φ_ε(y^ε) → f − A_H y* weakly in H, (2.32)

where y* is the solution to (2.3).

Proof Taking the scalar product of (2.28) by y^ε and using (2.1), (2.23) and (2.26), we see that

|∇φ_ε(y^ε)|^2 + ‖y^ε‖^2 + |A_H y^ε|^2 ≤ C for all ε > 0.

Then using conditions (2.27) and (2.1) it follows by (2.29) that

‖y^ε − y^λ‖^2 ≤ C(ε + λ) for all ε, λ > 0.

This implies, by a standard method, (2.30) and (2.31). Hence, for ε → 0,
∇φ_ε(y^ε) = f_ε − A_H y^ε → f − A_H y* =: ξ weakly in H.

Then letting ε tend to zero in the inequality

φ_ε(y^ε) ≤ φ_ε(z) + (∇φ_ε(y^ε), y^ε − z), ∀z ∈ H,

it follows by (2.24) and (2.25) that

φ(y*) ≤ φ(z) + (ξ, y* − z), ∀z ∈ H,

as claimed.

REMARK 2.2 There exists an extensive literature on the approximation of elliptic variational inequalities; we refer only to the book of Glowinski et al. [41] and to the survey of Oden and Kikuchi [68]. Theorems 2.2 and 2.4 are related to some general results due to Mosco [65].

§2.3
The obstacle problem

Throughout this section Ω is an open and bounded subset of the Euclidean space R^N with a sufficiently smooth boundary Γ. Let α_i, i = 1,2, be two nonnegative constants such that α_1 + α_2 > 0. If α_2 > 0, let V = H^1(Ω) and let A: V → V' be defined by

(y,Az) = a(y,z) = Σ_{i,j=1}^N ∫_Ω a_{ij}(x) y_{x_i} z_{x_j} dx + ∫_Ω a_0(x) y(x) z(x) dx + (α_1/α_2) ∫_Γ y(σ) z(σ) dσ for all y,z ∈ V. (2.33)

If α_1 > 0 and α_2 = 0, we take V = H^1_0(Ω) and A: H^1_0(Ω) → H^{-1}(Ω) defined by

(y,Az) = a(y,z) = Σ_{i,j=1}^N ∫_Ω a_{ij}(x) y_{x_i} z_{x_j} dx + ∫_Ω a_0(x) y(x) z(x) dx for all y,z ∈ H^1_0(Ω). (2.34)

Here a_0, a_{ij} ∈ L^∞(Ω) for all i,j = 1,...,N and
Σ_{i,j=1}^N a_{ij}(x) ξ_i ξ_j ≥ ω‖ξ‖_N^2 for all ξ ∈ R^N and a.e. x ∈ Ω, (2.35)

where ω > 0 and ‖·‖_N is the Euclidean norm in R^N. Throughout the following we shall assume that a_0(x) ≥ ρ > 0 a.e. x ∈ Ω if α_1 = 0 (ρ is a positive constant). Then, as is easily seen by (2.35), the operator A satisfies the coercivity condition (2.1).

Let ψ be a given function in H^2(Ω) and let K be the closed convex subset of V,

K = {y ∈ V; y(x) ≥ ψ(x) a.e. x ∈ Ω}. (2.36)

We notice that K is nonempty because, in particular, ψ ∈ K. If V = H^1_0(Ω) then we must assume that ψ ≤ 0 a.e. in Γ. Let f be a fixed element of V'. Then, if K ≠ ∅, by virtue of Corollary 2.1 the variational inequality

a(y,y−z) ≤ (y−z,f) for all z ∈ K (2.37)

has a unique solution y ∈ K. Formally, y is the solution to the following boundary value problem, known in the literature as the 'obstacle problem':
A₀y = f in Ω⁺ = {x ∈ Ω; y(x) > ψ(x)}, y = ψ in Ω∖Ω⁺   (2.38)
A₀y ≥ f, y ≥ ψ in Ω   (2.39)
∂y/∂ν = ∂ψ/∂ν in ∂Ω⁺   (2.40)
α₁y + α₂ ∂y/∂ν = 0 in Γ,   (2.41)

where A₀ is the differential operator

A₀y = − Σ_{i,j=1}^N (a_ij(x) y_{x_i})_{x_j} + a₀y,   (2.42)

∂y/∂ν = Σ_{i,j} a_ij y_{x_i} cos(ν, x_j) is the normal derivative and ∂Ω⁺ is the boundary of Ω⁺.
Indeed, let us assume that ψ ∈ C(Ω̄) and that y is a sufficiently smooth solution to (2.37) (for instance y ∈ C(Ω̄)). Then Ω⁺ is an open subset of Ω, and for every α ∈ C₀^∞(Ω⁺) there exists ρ > 0 such that y ± ρα ≥ ψ on Ω. Thus, taking z = y ± ρα ∈ K in (2.37), we see that

Σ_{i,j} ∫_Ω (a_ij y_{x_i} α_{x_j} + a₀yα) dx = (f, α)  for all α ∈ C₀^∞(Ω⁺).

Hence y satisfies (2.38) in the sense of distributions. Now in (2.37) we take z = y + α, where α ∈ C₀^∞(Ω) is positive on Ω, to establish that y satisfies, in the sense of distributions, the first inequality in (2.39). (The second inequality is obvious.) The boundary value conditions (2.41) are implicitly incorporated into the definition of the operator A. As for the equality ∂y/∂ν = ∂ψ/∂ν in ∂Ω⁺, it can be viewed as a transmission property; it makes sense if y is smooth enough.

In problem (2.38) to (2.41) the surface S = ∂Ω⁺, which separates the region Ω⁺ from Ω∖Ω⁺, is not known a priori and is in fact a free boundary. In classical terms, problem (2.38) to (2.41) can be formulated as the problem of finding the free boundary S and the function y which satisfy

A₀y = f in Ω⁺, y = ψ in Ω∖Ω⁺
α₁y + α₂ ∂y/∂ν = 0 in Γ
y = ψ, ∂y/∂ν = ∂ψ/∂ν in S = ∂Ω⁺.

Under the variational formulation (2.37) the free boundary S does not appear, but the unknown function y satisfies a multivalued partial differential equation on Ω (see (2.52) below). Once y is known, the free boundary of problem (2.38) to (2.41) can be found as the boundary of the incidence set {x ∈ Ω; y(x) = ψ(x)}. There exists an extensive literature on regularity properties of the solution to the obstacle problem, as well as on the nature of the free boundary. We mention in this context the works of Brezis and Stampacchia [22] and Brezis [21] and the recent book by Kinderlehrer and Stampacchia [46], which contains complete references on the subject. We confine ourselves to the presentation of a partial result.
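As a concrete illustration (not part of the original text), the discrete analogue of (2.37) is a linear complementarity problem, and its free boundary can be observed numerically. The following minimal sketch solves a one-dimensional obstacle problem with A₀ = −d²/dx² and homogeneous Dirichlet conditions by projected Gauss–Seidel; the load f and the obstacle ψ are illustrative assumptions.

```python
import numpy as np

def obstacle_psor(f, psi, h, tol=1e-10, max_iter=20000):
    """Projected Gauss-Seidel for the discrete obstacle problem
       (A0 y - f)(y - psi) = 0,  A0 y >= f,  y >= psi,
    with A0 = -d^2/dx^2 (centered differences) and y = 0 at both ends."""
    n = len(f)
    y = np.maximum(psi, 0.0)                  # feasible starting guess
    for _ in range(max_iter):
        y_old = y.copy()
        for i in range(n):
            left = y[i - 1] if i > 0 else 0.0
            right = y[i + 1] if i < n - 1 else 0.0
            # Gauss-Seidel update for (2y_i - y_{i-1} - y_{i+1})/h^2 = f_i,
            # projected back onto the constraint y_i >= psi_i
            y[i] = max(0.5 * (left + right + h * h * f[i]), psi[i])
        if np.max(np.abs(y - y_old)) < tol:
            break
    return y

n = 99
h = 1.0 / (n + 1)
f = -8.0 * np.ones(n)        # uniform downward load (illustrative)
psi = -0.1 * np.ones(n)      # flat rigid obstacle below (illustrative)
y = obstacle_psor(f, psi, h)
contact = y - psi < 1e-9     # discrete incidence set {y = psi}
```

At convergence every node either satisfies the equation or touches the obstacle; the boundary of the contact set approximates the free boundary.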
THEOREM 2.5 Assume that a_ij ∈ C¹(Ω̄), a₀ ∈ L^∞(Ω), a_ij = a_ji for all i,j = 1,…,N and that condition (2.35) holds. Further assume that ψ ∈ H²(Ω) and

α₁ψ + α₂ ∂ψ/∂ν ≤ 0 a.e. in Γ.   (2.43)

Then for every f ∈ L²(Ω) the solution y to the variational inequality (2.37) belongs to H²(Ω) and satisfies the following complementarity system:

((A₀y)(x) − f(x))(y(x) − ψ(x)) = 0 a.e. x ∈ Ω   (2.44)
y(x) ≥ ψ(x), (A₀y)(x) ≥ f(x) a.e. x ∈ Ω   (2.45)
α₁y + α₂ ∂y/∂ν = 0 a.e. in Γ.   (2.46)

Moreover, there exists C independent of f such that

||y||_{H²(Ω)} ≤ C(||f||_{L²(Ω)} + 1)  for all f ∈ L²(Ω).   (2.47)
Proof We shall use Corollary 2.2, where H = L²(Ω), V = H¹(Ω) (V = H₀¹(Ω) if α₂ = 0) and A: V → V' is defined by (2.33) and (2.34), respectively. Then the operator A_H: L²(Ω) → L²(Ω) is defined by A_H y = A₀y on

D(A_H) = {y ∈ H²(Ω); α₁y + α₂ ∂y/∂ν = 0 a.e. in Γ}.

Let us verify condition (2.22), where h = A₀ψ. To this end, consider the equation

z + A_H z = y + A₀ψ, y ∈ K,   (2.48)

which, as noted earlier, has a unique solution z ∈ H²(Ω). Multiplying (2.48) by (z−ψ)⁻ = −inf{z−ψ, 0} ∈ H¹(Ω) and integrating on Ω, we find via Green's formula

∫_Ω |(z−ψ)⁻|² dx + a((z−ψ)⁻, (z−ψ)⁻) − α₂^{−1} ∫_Γ (α₁ψ + α₂ ∂ψ/∂ν)(z−ψ)⁻ dσ
    = − ∫_Ω (y−ψ)(z−ψ)⁻ dx ≤ 0.

Thus by condition (2.43) we see that (z−ψ)⁻ = 0 and therefore z ∈ K. Hence condition (2.22) holds, and we may infer that the solution y to (2.37) belongs to D(A_H) ⊂ H²(Ω) and

||A_H y||_{L²(Ω)} ≤ C(1 + ||f||_{L²(Ω)}).

According to a well-known regularity result in the theory of linear elliptic equations, this relation implies (2.47), as claimed. If y ∈ D(A_H), then by Green's formula we see that
∫_Ω A₀y(x) z(x) dx = a(y,z)  for all z ∈ H¹(Ω).

Then by (2.37) we see that y satisfies the inequality

∫_Ω (A₀y(x) − f(x))(y(x) − z(x)) dx ≤ 0  for all z ∈ K.   (2.49)

The latter inequality can obviously be extended by density to all z in K₀, where

K₀ = {z ∈ L²(Ω); z(x) ≥ ψ(x) a.e. x ∈ Ω}.   (2.50)

If in (2.49) we take z = ψ + α, where α is any positive L²(Ω)-function, we get

(A₀y)(x) − f(x) ≥ 0 a.e. x ∈ Ω.

Now we take z = ψ to conclude that

(y(x) − ψ(x))((A₀y)(x) − f(x)) = 0 a.e. x ∈ Ω

(the integrand is non-negative and has a non-positive integral), thereby completing the proof.

REMARK 2.3 Theorem 2.5 gives a precise meaning to the free boundary value problem (2.38) to (2.41). Indeed, under the assumptions of Theorem 2.5 we know that the solution y to (2.37) satisfies

(A₀y)(x) = f(x) a.e. in {x ∈ Ω; y(x) > ψ(x)}
(A₀y)(x) ≥ f(x), y(x) ≥ ψ(x) a.e. x ∈ Ω   (2.51)
α₁y(σ) + α₂ (∂y/∂ν)(σ) = 0 a.e. σ ∈ Γ.
We note that under the conditions of Theorem 2.5 the obstacle problem (2.37) can be equivalently written as

A_H y + ∂I_{K₀}(y) ∋ f,   (2.52)

where

∂I_{K₀}(y) = {w ∈ L²(Ω); ∫_Ω w(x)(y(x) − z(x)) dx ≥ 0 ∀z ∈ K₀}.

Equivalently,

∂I_{K₀}(y) = {w ∈ L²(Ω); w(x) ∈ S(y(x) − ψ(x)) a.e. x ∈ Ω},   (2.53)

where S: R → 2^R is the maximal monotone graph

S(r) = 0 if r > 0, S(0) = R⁻ = ]−∞, 0], S(r) = ∅ if r < 0.   (2.54)

This form of the obstacle problem suggests the following approximating process:

A₀y_ε + S_ε(y_ε − ψ) = f a.e. in Ω
α₁y_ε + α₂ ∂y_ε/∂ν = 0 a.e. in Γ,   (2.55)

where S_ε(r) = −ε^{−1} r⁻. Equation (2.55) can be rewritten as

A_H y_ε + ∇φ^ε(y_ε) = f,   (2.56)

where

φ^ε = (I_{K₀})_ε,   (2.57)

i.e.

φ^ε(y) = (2ε)^{−1} ∫_Ω |(y(x) − ψ(x))⁻|² dx.   (2.58)

As seen earlier, (2.55) has a unique solution y_ε ∈ D(A_H).
PROPOSITION 2.1 For ε → 0, y_ε → y* strongly in H¹(Ω) and weakly in H²(Ω), where y* is the solution to (2.32).

Proof It suffices to show that the assumptions of Theorem 2.4 are satisfied with V = H¹(Ω), H = L²(Ω), A defined as above and φ_ε = φ^ε defined by (2.58). Since conditions (2.23), (2.24), (2.25) and (2.27) are obvious, we confine ourselves to verifying (2.26). By the definition of A_H and by Green's formula, we have

(A_H y, ∇φ^ε(y)) = ∫_Ω A₀(y−ψ) S_ε(y−ψ) dx + ∫_Ω A₀ψ S_ε(y−ψ) dx
    ≥ − ∫_Γ ((∂/∂ν)(y−ψ)) S_ε(y−ψ) dσ + ∫_Ω A₀ψ S_ε(y−ψ) dx,

because S_ε' ≥ 0 and, by condition (2.43), S_ε(y−ψ)(∂/∂ν)(y−ψ) ≤ 0 in Γ for all y ∈ D(A_H).
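The convergence in Proposition 2.1 can be observed numerically. The sketch below (illustrative, not from the text) solves a one-dimensional version of the penalized problem (2.55), with S_ε(r) = −ε^{-1}r⁻ and Dirichlet conditions in place of (2.41), for several values of ε; the active-set iteration used for the piecewise-linear penalty is an implementation choice, not the book's method.

```python
import numpy as np

def penalized_obstacle(f, psi, h, eps, max_iter=100):
    """Solve the 1D penalized problem  -y'' - (1/eps)(y - psi)^- = f,
    y(0) = y(1) = 0, i.e. a Dirichlet analogue of (2.55), by iterating
    on the set where the penalty is active."""
    n = len(f)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    y = np.zeros(n)
    for _ in range(max_iter):
        active = y < psi                     # nodes where (y - psi)^- > 0
        M = A + np.diag(active / eps)        # penalty contributes 1/eps there
        y_new = np.linalg.solve(M, f + active * psi / eps)
        done = np.array_equal(y_new < psi, active)
        y = y_new
        if done:
            break
    return y

n = 99; h = 1.0 / (n + 1)
f = -8.0 * np.ones(n); psi = -0.1 * np.ones(n)   # illustrative data
sols = {eps: penalized_obstacle(f, psi, h, eps) for eps in (1e-2, 1e-4, 1e-6)}
# constraint violation max(psi - y_eps, 0): shrinks as eps -> 0
viol = {eps: float(np.max(np.clip(psi - s, 0.0, None))) for eps, s in sols.items()}
```

The violation of the constraint y ≥ ψ is of order ε, consistent with the strong H¹ convergence asserted above.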
Now we shall present a simple conceptual model of a contact problem in linear elasticity; it can be described mathematically as an obstacle problem of the form (2.38) to (2.41). Consider an elastic membrane occupying a plane domain Ω, clamped at the boundary Γ, limited from below by a rigid obstacle ψ and under pressure from a vertical force field of density f (see Figure 2.1). It is well known that when there is no obstacle the vertical displacement y of the membrane is governed by the Dirichlet principle. In the presence of the obstacle, the nondimensional governing equations are of the type (2.38) to (2.41) with α₂ = 0. More precisely, if y = y(x) is the deflection of the membrane at x = (x₁,x₂), then we have

Fig. 2.1 (an elastic membrane limited from below by a rigid obstacle ψ)

− μΔy = f in {x; y(x) > ψ(x)}
− μΔy ≥ f, y ≥ ψ in Ω; y = 0 in Γ,

where μ is some positive constant. The contact region {x ∈ Ω; y(x) = ψ(x)} is one of the unknowns of the problem, and its boundary is a free boundary. The case of the boundary value conditions (2.41) with α₁, α₂ > 0 describes the situation where the membrane is elastically fixed along Γ. Now consider the case of two parallel membranes loaded by pressures f_i, i = 1,2, as shown in Figure 2.2. The variational inequality characterizing the equilibrium solution of this problem is ([69]):
Fig. 2.2 (two parallel membranes loaded by pressures f₁, f₂)

μ₁ ∫_Ω ∇y₁·∇(y₁−z₁) dx + μ₂ ∫_Ω ∇y₂·∇(y₂−z₂) dx
    ≤ ∫_Ω f₁(y₁−z₁) dx + ∫_Ω f₂(y₂−z₂) dx  for all (z₁,z₂) ∈ K,   (2.59)

where

K = {(z₁,z₂) ∈ H₀¹(Ω) × H₀¹(Ω); z₁(x) − z₂(x) ≥ −l(x) a.e. x ∈ Ω},   (2.60)

μ₁ and μ₂ are positive constants, l = l(x₁,x₂) is the initial gap between the unloaded membranes, and y₁ = y₁(x₁,x₂), y₂ = y₂(x₁,x₂) are the deflections of membranes 1 and 2 at x = (x₁,x₂).

Problem (2.59) is of the form (2.5), where H = L²(Ω) × L²(Ω), V = H₀¹(Ω) × H₀¹(Ω), K is defined by (2.60), f = (f₁,f₂) and a: V × V → R is given by

a(y,z) = μ₁ ∫_Ω ∇y₁·∇z₁ dx + μ₂ ∫_Ω ∇y₂·∇z₂ dx  for y = (y₁,y₂), z = (z₁,z₂).

Thus, by virtue of Corollary 2.1, the variational inequality (2.59) has a unique solution (y₁,y₂) ∈ K. Arguing as for problem (2.37), we see that y = (y₁,y₂) can be viewed as the solution to the following free boundary value problem:

− μ₁Δy₁ = f₁, − μ₂Δy₂ = f₂ in {y₁ − y₂ > −l}   (2.61)
− μ₁Δy₁ ≥ f₁, − μ₂Δy₂ ≤ f₂, y₁ − y₂ ≥ −l in Ω
y₁ = 0, y₂ = 0 in Γ.

In this case the free boundary is the boundary of the contact domain {x = (x₁,x₂) ∈ Ω; y₁(x) − y₂(x) = −l(x)}.
§2.4 Water flow through a rectangular dam

Here we shall describe a free boundary problem which models the water flow through an isotropic homogeneous rectangular dam (Baiocchi [3]). Let us denote by D = ]0,a[ × ]0,b[ the dam and by D₀ the wetted region, bounded by AF, S, GC, CB and BA (see Figure 2.3).

Fig. 2.3 (the rectangular dam: corners A(0,0), B(a,0), L(a,b), H(0,b); F(0,h₁) on the left wall; the free boundary S separates the wetted region D₀ from the dry region; GC is the seepage face)

The boundary S, which separates the wetted region D₀ from the dry region D₁ = D∖D₀, is unknown (a free boundary). Let z be the piezometric head and let p(x₁,x₂) be the pressure at (x₁,x₂) ∈ D. We have

z = p + x₂ in D.   (2.62)

Then, by Darcy's law (normalizing the parameters),

Δz = 0 in D₀.   (2.63)

On the other hand, z satisfies the obvious boundary value conditions

z = h₁ in AF, z = x₂ in S ∪ GC, z = h₂ in CB
∂z/∂x₂ = 0 in AB, ∂z/∂ν = 0 in S.   (2.64)

We extend p by zero outside the wetted region and introduce the function

y(x₁,x₂) = ∫_{x₂}^{b} p(x₁,t) dt, (x₁,x₂) ∈ D.
This is known in the literature as the Baiocchi transformation.

LEMMA 2.1 The function y satisfies, in the sense of distributions, the equation

Δy = χ in D,   (2.65)

where χ is the characteristic function of D₀, and the conditions

y > 0 in D₀, y = 0 in D∖D₀   (2.66)
y = g in ∂D (the boundary of D),   (2.67)

where

g = 0 in FH ∪ HL ∪ LC, g(x₁,0) = ½h₁² − (x₁/2a)(h₁² − h₂²) in AB,
g = ½(x₂ − h₂)² in CB, g = ½(x₂ − h₁)² in AF.

Proof We shall assume that the solution z to (2.63), (2.64) and the free boundary S are sufficiently smooth. Let x₂ = ℓ(x₁), x₁ ∈ [0,a], be the equation of S. From the definition of y it is clear that y > 0 in D₀ and y = 0 in D∖D₀. Moreover, some calculation involving (2.64) shows that the boundary conditions (2.67) are indeed satisfied. To prove (2.65), note that for every φ ∈ C₀^∞(D) we have

Δy(φ) = ∫_{D₀} φ(x₁,x₂)(ℓ'(x₁) z_{x₁}(x₁,ℓ(x₁)) − z_{x₂}(x₁,ℓ(x₁)) + 1) dx₁ dx₂,

because Δz = 0 in D₀ = {(x₁,x₂); x₂ < ℓ(x₁)}. On the other hand, as seen earlier, the conditions z = x₂ and ∂z/∂ν = 0 in S imply that

ℓ'(x₁) z_{x₁}(x₁,ℓ(x₁)) − z_{x₂}(x₁,ℓ(x₁)) = 0, x₁ ∈ [0,a].

This yields

Δy(φ) = ∫_{D₀} φ dx₁ dx₂,

as claimed. Thus we may view y as a solution to the obstacle problem

− Δy ≥ −1, y ≥ 0 in D, Δy = 1 in {y > 0}   (2.68)
y = g in ∂D,

or, in variational form,

a(y, y−v) + ∫_D (y−v) dx ≤ 0  ∀v ∈ K,   (2.69)

where

K = {v ∈ H¹(D); v = g in ∂D, v ≥ 0 in D}   (2.70)

and

a(y,v) = ∫_D ∇y·∇v dx  for all y,v ∈ H¹(D).   (2.71)

By Corollary 2.1, problem (2.69) has a unique solution y ∈ K. The free boundary x₂ = ℓ(x₁) can then be found by solving the equation y(x₁,x₂) = 0.

§2.5 Elliptic problems with unilateral conditions at the boundary
Consider the boundary value problem

μy − Δy = f₀ in Ω   (2.72)
∂y/∂ν + β(y) ∋ g₀ in Γ₁   (2.73)
y = 0 in Γ₂,   (2.74)

where Γ₁ and Γ₂ are two smooth, open and disjoint parts of the boundary Γ, Γ = Γ̄₁ ∪ Γ̄₂, f₀ ∈ L²(Ω) and g₀ ∈ L²(Γ₁) are given functions, μ is a positive constant and β ⊂ R × R is a maximal monotone graph (in general multivalued). Let j: R → ]−∞, +∞] be a convex lower semicontinuous function such that β = ∂j (see Example 1.1). We set

V = {y ∈ H¹(Ω); y = 0 in Γ₂},   (2.75)

define the operator A: V → V',

(Ay, z) = a(y,z) = ∫_Ω ∇y·∇z dx + μ ∫_Ω yz dx  for all y,z ∈ V,   (2.76)

and the function φ: V → ]−∞, +∞],

φ(z) = ∫_{Γ₁} j(z(σ)) dσ  for all z ∈ V.   (2.77)

Let f ∈ V' be defined by

(f, z) = ∫_Ω f₀ z dx + ∫_{Γ₁} g₀ z dσ  for all z ∈ V.

Then by Theorem 2.1, the variational inequality

a(y, y−z) + φ(y) − φ(z) ≤ (f, y−z)  for all z ∈ V,   (2.78)

or, equivalently, the minimization problem

min {½ a(z,z) + φ(z) − (f,z); z ∈ V},   (2.79)

has a unique solution y ∈ V. We shall see that y is a solution (in a generalized sense) to (2.72) to (2.74). Indeed, if in (2.78) we take z = y − α, where α ∈ C₀^∞(Ω), we get

∫_Ω ∇y(x)·∇α(x) dx + μ ∫_Ω y(x)α(x) dx = ∫_Ω f₀(x)α(x) dx.

Hence y satisfies (2.72) in the sense of distributions on Ω. Now by (2.72) we see that (if y is sufficiently smooth)

μ(y, y−z) − (Δy, y−z) = (f₀, y−z)  ∀z ∈ V,

and by Green's formula

a(y, y−z) − ⟨∂y/∂ν, y−z⟩ = (f₀, y−z)  ∀z ∈ V,

where ⟨·,·⟩ denotes the pairing between H^{−1/2}(Γ₁) and H^{1/2}(Γ₁). (We note that, since Γ₁ is a smooth manifold, by the trace theorem ∂y/∂ν ∈ H^{−1/2}(Γ₁) and y ∈ H^{1/2}(Γ₁); see [55].) Together with (2.78), this yields

∫_{Γ₁} (j(y(σ)) − j(z(σ))) dσ ≤ ⟨g₀ − ∂y/∂ν, y−z⟩  ∀z ∈ V.

Thus, if g₀ − ∂y/∂ν happens to be in L²(Γ₁), we have

∂y/∂ν + β(y) ∋ g₀ a.e. in Γ₁.

Otherwise it simply means that

g₀ − ∂y/∂ν ∈ ∂φ̃(y),

where φ̃: H^{1/2}(Γ₁) → ]−∞, +∞] is given by

φ̃(v) = ∫_{Γ₁} j(v(σ)) dσ  ∀v ∈ H^{1/2}(Γ₁).

In this sense we may visualize y as a solution to (2.73). As for condition (2.74), it is implicitly incorporated into the condition y ∈ V. In the special case g₀ = 0 and Γ₂ = ∅, then as noted earlier (Example 1.4) the solution y to (2.72) to (2.74) belongs to H²(Ω) and

||y||_{H²(Ω)} ≤ C(1 + ||f₀||_{L²(Ω)}).   (2.80)

In point of fact, in this case we may consider the approximating equation
μy_ε − Δy_ε = f₀ a.e. in Ω   (2.81)
∂y_ε/∂ν + β_ε(y_ε) = 0 a.e. in Γ,

where β_ε(r) = ε^{−1}(r − (1+εβ)^{−1}r) for all r ∈ R. By standard existence results we know that (2.81) has a unique solution y_ε ∈ H²(Ω). We have

PROPOSITION 2.2 For ε → 0,

y_ε → y strongly in H¹(Ω) and weakly in H²(Ω),   (2.82)

where y is the unique solution to the boundary value problem

μy − Δy = f₀ a.e. in Ω   (2.83)
∂y/∂ν + β(y) ∋ 0 a.e. in Γ.

The proof can be found in the works of Brezis cited earlier. The crucial step is the estimate

||y_ε||_{H²(Ω)} ≤ C(1 + ||f₀||_{L²(Ω)})  for all ε > 0.
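The regularization β_ε(r) = ε^{-1}(r − (1+εβ)^{-1}r) is the Yosida approximation of β. For a concrete multivalued graph it can be written in closed form; the following sketch (an illustration, not from the text) does this for β = sign, whose resolvent is the soft-thresholding map, and checks the standard properties: β_ε is monotone, ε^{-1}-Lipschitz, and converges pointwise to the minimal section of β.

```python
import numpy as np

def resolvent_sign(r, eps):
    """(1 + eps*beta)^{-1} r for the maximal monotone graph beta = sign
    (beta(r) = r/|r| for r != 0, beta(0) = [-1, 1]): soft-thresholding."""
    return np.sign(r) * np.maximum(np.abs(r) - eps, 0.0)

def yosida_sign(r, eps):
    """beta_eps(r) = eps^{-1}(r - (1 + eps*beta)^{-1} r); for beta = sign
    this equals the clamp of r/eps to [-1, 1]."""
    return (r - resolvent_sign(r, eps)) / eps

r = np.linspace(-2.0, 2.0, 401)
for eps in (1.0, 0.1, 0.01):
    b = yosida_sign(r, eps)
    assert np.all(np.diff(b) >= -1e-12)              # monotone
    assert np.max(np.abs(b)) <= 1.0 + 1e-12          # bounded by the sections of beta
    assert np.max(np.abs(np.diff(b))) <= (r[1] - r[0]) / eps + 1e-9  # eps^{-1}-Lipschitz
```

Away from r = 0, β_ε(r) already equals the value of β for small ε, which is the single-valued replacement used in (2.81).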
Now we shall present some particular cases. First we consider the case where g₀ = 0 and β(r) = a sgn r, i.e.

β(r) = ar/|r| for r ≠ 0, β(0) = [−a, a],   (2.84)

where a is a positive constant. Then problem (2.72) to (2.74) becomes

μy − Δy = f₀ in Ω   (2.85)
∂y/∂ν + a y/|y| = 0 where y ≠ 0, |∂y/∂ν| ≤ a in Γ₁, y = 0 in Γ₂.

Equations (2.85) model the equilibrium configuration of an elastic body Ω ⊂ R³ which is in unilateral contact with friction on Γ₁ (see [30], Chapter IV).
The Signorini problem Consider problem (2.72) to (2.74) in the special case where g₀ = 0, Γ₂ = ∅ and β ⊂ R × R is defined by (2.54), i.e.

μy − Δy = f₀ in Ω   (2.86)
y ≥ 0, ∂y/∂ν ≥ 0, y(∂y/∂ν) = 0 in Γ.

This is the celebrated problem of Signorini. It represents the conceptual model of an elastic body Ω with boundary Γ which is in contact with a rigid support body and is subject to volume forces f₀. These forces produce a deformation of Ω and a displacement on Γ whose normal component is negative or zero. In a simplified model the displacement field y satisfies (2.86). As mentioned earlier, if f₀ ∈ L²(Ω) then the solution y to (2.86) belongs to H²(Ω). The penalized approximating equation (2.81) has in this case the following form:

μy_ε − Δy_ε = f₀ a.e. in Ω   (2.87)
∂y_ε/∂ν − ε^{−1} y_ε⁻ = 0 a.e. in Γ.

REMARK 2.4 The above results remain valid if in problem (2.72) to (2.74) the Laplacian Δ is replaced by the elliptic symmetric operator A₀ defined by (2.42).
3 Optimal control of elliptic variational inequalities

In this chapter we shall discuss several optimal control problems governed by elliptic variational inequalities. The particular aim of the chapter is the optimal control of some free boundary problems. The main emphasis is put on the derivation of first-order necessary conditions of optimality. Since control problems governed by nonlinear equations are nonsmooth and nonconvex optimization problems, the standard methods for deriving necessary conditions of optimality are inapplicable here. In brief, the idea is to approximate the given problem by a family of smooth optimization problems (see problem (P_ε) in the following) and then to pass to the limit in the corresponding optimality equations. This is a more refined variant of the penalty method previously used for the numerical computation of optimal controls ([52], [85]). An attractive feature of this approach, which is termed the 'adapted penalty' in [53], is that it allows the treatment of all optimal controls; in addition, it works for general cost criteria. A different method, relying on the differentiability theory of Lipschitz mappings in Hilbert spaces, has been given by Mignot [59] and leads to comparable results.

§3.1
Controlled elliptic variational inequalities

Let V and H be a pair of real Hilbert spaces such that V is a dense subset of H and V ⊂ H ⊂ V' algebraically and topologically. Here V' is the dual of V and the notation is that introduced in Section 2.1. Thus (·,·) is the pairing between V and V' (and the scalar product of H), and ||·||, |·| are the norms of V and H, respectively. Consider the equation

Ay + ∂φ(y) ∋ Bu + f,   (3.1)

where A: V → V' is a linear continuous and symmetric operator satisfying the coercivity condition

(Av, v) ≥ ω||v||²  for all v ∈ V,   (3.2)
φ: V → ]−∞, +∞] is a lower semicontinuous convex function, B ∈ L(U,V') and f is a given element of V'. Here U is a real Hilbert space with scalar product ⟨·,·⟩ and with norm denoted |·|_U. If a: V × V → R is the Dirichlet form associated with A, i.e. a(y,v) = (Ay,v) for all y,v ∈ V, then, as seen earlier, (3.1) can be rewritten as the variational inequality

a(y, y−v) + φ(y) − φ(v) ≤ (Bu + f, y−v)  for all v in V.   (3.1)'

In particular, if φ = I_K is the indicator function of a closed convex subset K of V, then (3.1) becomes

y ∈ K, a(y, y−v) ≤ (Bu + f, y−v)  for all v ∈ K.   (3.3)
The parameter u ∈ U is called the control, and the corresponding solution y (which exists by Theorem 2.1) is called the state of the system (3.1). Equation (3.1) itself will be referred to as the state system or control system. The optimal control problems to be studied in this chapter can be set in the following general form:

(P) Minimize the function

g(y) + h(u)   (3.4)

over all y ∈ V and u ∈ U subject to the state system (3.1).

Here g: H → R and h: U → ]−∞, +∞] are given functions satisfying the following assumptions:

(i) g is locally Lipschitz and non-negative on H.
(ii) h is convex, lower semicontinuous, and for some constants C₁ > 0, C₂ ∈ R,

h(u) ≥ C₁|u|_U + C₂  for all u ∈ U.   (3.5)

With regard to the spaces V, H and the operator B ∈ L(U,V'), we shall assume in addition that

(iii) B is completely continuous from U to V'.

Hypothesis (iii) is satisfied in particular if the injection of V into H is completely continuous and B ∈ L(U,H).
Roughly speaking, the object of control theory for (3.1) is the adjustment of the control parameter u, under prescribed restrictions, so that the solutions y have some specified properties or achieve a certain goal. Very often the goal or the desired behaviour of the solution is expressed as a minimization problem of the form (P). For instance, we might pick one known state y⁰ and seek to find u in a certain closed convex subset U₀ of U so that y = y⁰. The least-squares approach to this problem leads us to a problem of type (P) in which the functions g and h are given by

g(y) = ½|y − y⁰|², h(u) = 0 if u ∈ U₀, +∞ otherwise.   (3.6)

If C is a closed subset of H, and it is required that the solution to (3.1) belong to C or be as close as possible to C, a natural choice for the function g is

g(y) = ½ d²(y,C)  ∀y ∈ H,   (3.7)

where d(y,C) denotes the distance from y to C. This will be discussed further in a later section.

A pair (y*,u*) ∈ V × U for which the infimum in problem (P) is attained is called an optimal pair, and the corresponding control u* is called an optimal control.
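When C is moreover convex, the choice (3.7) gives a C¹ function whose gradient is y − P_C y, where P_C is the projection onto C (a standard fact about the squared distance to a convex set; for general closed C only local Lipschitz continuity survives). The following sketch (illustrative, not from the text; the box C ⊂ R⁵ and the test point are assumptions) checks the gradient formula against a finite difference:

```python
import numpy as np

def proj_box(y, lo=0.0, hi=1.0):
    """Projection P_C onto the closed convex box C = [lo, hi]^n."""
    return np.clip(y, lo, hi)

def g(y):
    """g(y) = (1/2) d(y, C)^2, the cost (3.7) for the box C."""
    return 0.5 * np.sum((y - proj_box(y))**2)

def grad_g(y):
    """For convex C the squared half-distance is C^1 with gradient y - P_C y."""
    return y - proj_box(y)

y = np.array([-0.5, 0.3, 1.7, 2.0, 0.9])   # test point (assumption)
v = np.array([1.0, -2.0, 0.5, 1.0, 2.0])   # test direction (assumption)
t = 1e-6
fd = (g(y + t * v) - g(y - t * v)) / (2 * t)  # central finite difference
```

The finite-difference value fd agrees with the inner product of grad_g(y) with v.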
PROPOSITION 3.1 Under assumptions (i) to (iii), problem (P) has at least one optimal pair.

For every u ∈ U we shall denote by y^u ∈ V the solution to (3.1).

LEMMA 3.1 The map u → y^u is weakly-strongly continuous from U to V.

Proof Let {u_n} ⊂ U be weakly convergent in U to u. By assumption (iii) it follows that

Bu_n → Bu strongly in V',   (3.8)

while by (3.1) and assumption (3.2) we see that

ω||y^{u_n} − y^{u_m}||² ≤ (Bu_n − Bu_m, y^{u_n} − y^{u_m}),   (3.9)

because ∂φ is monotone in V × V'. Hence lim_{n→∞} y^{u_n} = y exists in the strong topology of V. Now, letting n tend to +∞ in the inequality

a(y^{u_n}, y^{u_n} − v) + φ(y^{u_n}) ≤ φ(v) + (Bu_n + f, y^{u_n} − v),

we infer that y = y^u, as claimed.

Proof of Proposition 3.1 Let d = inf {g(y^u) + h(u); u ∈ U}. By assumptions (i) and (ii) we see that −∞ < d < +∞. Now let {u_n} ⊂ U be such that

g(y_n) + h(u_n) ≤ d + n^{−1},   (3.10)

where y_n = y^{u_n}. Then by the growth assumption (3.5) it follows that the sequence {u_n} is weakly compact in U, and so by Lemma 3.1 we may infer that, on a subsequence (again denoted u_n),

u_n → u* weakly in U, y_n → y^{u*} strongly in V.

Since h is weakly lower semicontinuous on U and g is continuous on V, this yields

g(y^{u*}) + h(u*) = d.   (3.11)

In other words, u* is an optimal control of problem (P).

REMARK 3.1 Proposition 3.1 remains valid if assumption (i) is weakened to

(i)' g is locally Lipschitz and non-negative from V to R.

For nonconvex functions h, more refined existence results for problem (P) follow by application of the Edelstein theorem and its extensions (Baranger [5]).
We now briefly turn aside from the main discussion and present some typical control problems. Throughout the following, Ω is a bounded and open subset of R^N having a sufficiently smooth boundary Γ. Let A₀ be the elliptic differential operator defined by (2.42), i.e.

A₀y = − Σ_{i,j=1}^N (a_ij(x) y_{x_i})_{x_j} + a₀(x)y,   (3.12)

where a_ij ∈ C¹(Ω̄), a₀ ∈ L^∞(Ω), a_ij = a_ji for all i,j = 1,…,N, and conditions (2.35) are satisfied.

EXAMPLE 3.1 Consider the following distributed control problem:

Minimize g(y) + h(u)   (3.13)

over all y ∈ H₀¹(Ω) ∩ H²(Ω), u ∈ U, subject to

A₀y + β(y) ∋ f + Bu a.e. in Ω, y = 0 in Γ.   (3.14)

Here β is a maximal monotone graph in R × R such that 0 ∈ D(β), f ∈ L²(Ω), and B is a linear continuous operator from a control space U to L²(Ω). The functions g: L²(Ω) → R⁺ and h: U → ]−∞, +∞] satisfy assumptions (i) and (ii). As seen in Section 2.3, the boundary value problem (3.14) can be written in the form (3.1), where V = H₀¹(Ω), H = L²(Ω), A: H₀¹(Ω) → H⁻¹(Ω) is given by (2.34), i.e.

(Ay, v) = a(y,v) = Σ_{i,j=1}^N ∫_Ω a_ij(x) y_{x_i} v_{x_j} dx + ∫_Ω a₀(x) y v dx
    for all y,v ∈ H₀¹(Ω),   (3.15)

and

φ(y) = ∫_Ω j(y(x)) dx  ∀y ∈ H,   (3.16)

where ∂j = β. Assumption (iii) is obviously satisfied in the present case. In the special case where β is defined by (2.54), system (3.14) reduces to the obstacle problem

(−A₀y + Bu + f)y = 0, −A₀y + Bu + f ≤ 0, y ≥ 0 a.e. in Ω
y = 0 in Γ.

More general control systems of the form

(−A₀y + Bu + f)(y − ψ) = 0 a.e. in Ω   (3.17)
y ≥ ψ, −A₀y + Bu + f ≤ 0 a.e. in Ω, y = 0 in Γ,

where ψ ∈ H²(Ω) is a given function satisfying the condition ψ ≤ 0 a.e. in Γ, can be written in the form (3.3), where

K = {y ∈ H₀¹(Ω); y(x) ≥ ψ(x) a.e. x ∈ Ω}.
EXAMPLE 3.2 Minimize the function (3.13) over all (y,u) ∈ H²(Ω) × U subject to the state system

μy + A₀y = f + Bu a.e. in Ω   (3.18)
∂y/∂ν + β(y) ∋ 0 in Γ.

Here μ > 0, f ∈ L²(Ω), B ∈ L(U,L²(Ω)) and β is a maximal monotone graph in R × R such that 0 ∈ D(β). In this case V = H¹(Ω), H = L²(Ω),

(Ay, v) = a(y,v)  for all y,v ∈ H¹(Ω),

and φ: V → ]−∞, +∞] is given by

φ(v) = ∫_Γ j(v(σ)) dσ  ∀v ∈ V,   (3.19)

where ∂j = β. As noted earlier (Section 2.5), the solution y belongs to H²(Ω).
66
x
U subject
(3.20 )
y
= 0 in r 2,
Here /..l'" 0, f E l2(D), AO' B are as in Example 3.2, and BO E l(U,l2(f 1)), f2 is a smooth and open nonempty subset of rand r 1 i.s the interior of r,r 2' To write the state system (3.20) in the form (3.1) we set V = {y E H1(D); 2 Y = 0 in f } and H = L (D), define A~V. ~ VI, 2 (Ay,v.) and have
= a(y,v.) for all y,v.
~~~ ~ ~
(Bu,v)
by (3.19).
=
fr
E V.
The operator B E L(U,~I) is given by
(BOu)(a)v(a)da for all
U
E U, v. E V.
1
If L is the closed ball {u E U~ lulU < p} then the set BO(L ) is bounded in p Then we see that B(L p ) is a compact subset of VI and therefore assumption (iii) is satisfied. 2 p -~ L (f ) and compact in H (f ). t 1
§3.2 Generalized first-order necessary conditions

Here and throughout the following we shall assume that the spaces V and H are separable. If (y*,u*) is any optimal pair in problem (P), consider the optimization problem

(P_ε) Minimize g^ε(y) + h(u) + ½|u − u*|²_U over all (y,u) ∈ V × U subject to

Ay + ∇φ^ε(y) = Bu + f.   (3.21)

Here φ^ε: V → R is a family of convex functions which are of class C², i.e. ∇φ^ε: V → V' and ∇²φ^ε: V → L(V,V') are continuous, and satisfy the following conditions:

φ^ε(v) ≥ −C(||v|| + 1)  for all v ∈ V and ε > 0,   (3.22)

lim_{ε→0} φ^ε(v) = φ(v)  for all v ∈ V,   (3.23)

while for any weakly convergent sequence v_ε → v in V,

lim inf_{ε→0} φ^ε(v_ε) ≥ φ(v).   (3.24)

The function g^ε: H → R is given by (1.50), i.e.

g^ε(y) = ∫_{R^n} g(P_n y − εΛ_n τ) ρ_n(τ) dτ,   (3.25)

where ρ_n is a mollifier in R^n, n = [ε^{−1}], and P_n: H → X_n, Λ_n: R^n → X_n are given by (1.48) and (1.49). If the function g happens to be Fréchet differentiable then we shall take g^ε = g. According to Proposition 3.1, for every ε > 0 the control problem (P_ε) has at least one optimal pair (y_ε, u_ε) ∈ V × U.
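In finite dimensions the smoothing (3.25) is an ordinary mollification: g is averaged against a scaled bump. The following one-dimensional sketch (illustrative, not the book's construction; it drops the projections P_n, Λ_n) mollifies g(y) = |y| and checks the estimate |g^ε(y) − g(y)| ≤ ε Lip(g):

```python
import numpy as np

def smooth_g(g, y, eps, m=2001):
    """1D analogue of (3.25): g_eps(y) = integral of g(y - eps*t) rho(t) dt,
    with rho a C_0^infty bump supported in (-1, 1), normalized to mass 1."""
    t = np.linspace(-1.0, 1.0, m)
    dt = t[1] - t[0]
    rho = np.zeros_like(t)
    inside = np.abs(t) < 1.0
    rho[inside] = np.exp(-1.0 / (1.0 - t[inside]**2))
    rho /= rho.sum() * dt                      # normalize: sum(rho) * dt = 1
    return float(np.sum(g(y - eps * t) * rho) * dt)

g = np.abs
for eps in (0.5, 0.1, 0.02):
    for y0 in (-1.0, 0.0, 0.3, 2.0):
        # mollification moves the value by at most eps * Lip(g) = eps
        assert abs(smooth_g(g, y0, eps) - g(y0)) <= eps + 1e-4
```

The mollified function is smooth even at the kink y = 0, which is exactly what the substitution of g^ε for g achieves in (P_ε).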
LEMMA 3.2 Assume that the injection of V into H is compact. Then for ε → 0 one has

u_ε → u* strongly in U, y_ε → y* weakly in V and strongly in H.

In addition, if f ∈ H, B ∈ L(U,H) and the functions φ^ε: H → R satisfy conditions (2.23) to (2.27), then

y_ε → y* strongly in V, Ay_ε → Ay* weakly in H,
∇φ^ε(y_ε) → Bu* + f − Ay* ∈ ∂φ(y*) weakly in H.

Proof For every ε > 0 we have

g^ε(y_ε) + h(u_ε) + ½|u_ε − u*|²_U ≤ g^ε(y*_ε) + h(u*),

where y*_ε is the solution to (3.21) with u = u*. Now by Theorem 2.2 it follows that y*_ε → y* weakly in V and therefore strongly in H. Then by Proposition 1.12,

lim_{ε→0} g^ε(y*_ε) = g(y*),

because by (3.25) we see that |g^ε(y_ε) − g^ε(y*)| ≤ L|y_ε − y*|, where L > 0. Hence

lim sup_{ε→0} (g^ε(y_ε) + h(u_ε) + ½|u_ε − u*|²_U) ≤ g(y*) + h(u*).   (3.26)

In particular, we infer that {u_ε} is weakly compact in U. Thus on a sequence, again denoted ε, we have

u_ε → u weakly in U, Bu_ε → Bu strongly in V'   (3.27)
and lim inf_{ε→0} h(u_ε) ≥ h(u).

Then, again by Theorem 2.2, we have

y_ε → y^u weakly in V and strongly in H,

and therefore

lim_{ε→0} g^ε(y_ε) = g(y^u).

Since g(y^u) + h(u) ≥ g(y*) + h(u*) by the optimality of (y*,u*), upon inspection of (3.26) and (3.27) we see that

lim_{ε→0} |u_ε − u*|²_U = 0.

Hence u = u* and y^u = y*, as claimed.

Now assume that f ∈ H, B ∈ L(U,H) and {φ^ε} is a family of Fréchet differentiable convex functions on H satisfying conditions (2.23) to (2.27) of Theorem 2.4. Since, according to the first part of the lemma, Bu_ε → Bu* strongly in H, we conclude the proof by invoking Theorem 2.4.

LEMMA 3.3 There exist p_ε ∈ V satisfying, together with y_ε and u_ε, the system

Ay_ε + ∇φ^ε(y_ε) = Bu_ε + f,   (3.28)
− A*p_ε − ∇²φ^ε(y_ε)p_ε = ∇g^ε(y_ε),   (3.29)
B*p_ε ∈ ∂h(u_ε) + u_ε − u*.   (3.30)

Proof Let y_ε(u) be the solution to (3.21). Obviously, the map u ∈ U → y_ε(u) ∈ V is Fréchet differentiable, and for all u,v ∈ U the function z_ε = ∇_u y_ε(u)v is the solution to the equation

Az_ε + ∇²φ^ε(y_ε)z_ε = Bv,   (3.31)

where ∇²φ^ε(y_ε) ∈ L(V,V') is the second-order differential of φ^ε. (We note that, since ∇²φ^ε(y_ε) is a positive operator, (3.31) has a unique solution z_ε.) Now for every v ∈ U and λ > 0, we have

g^ε(y_ε(u_ε + λv)) + h(u_ε + λv) + ½|u_ε + λv − u*|²_U ≥ g^ε(y_ε) + h(u_ε) + ½|u_ε − u*|²_U.

This yields

(∇g^ε(y_ε), z_ε) + h'(u_ε, v) + (u_ε − u*, v) ≥ 0  ∀v ∈ U.   (3.32)

Here h' is the directional derivative of h (Section 1.3). Now let p_ε ∈ V be the solution to (3.29). (As already remarked, this equation has a unique solution.) Substituting (3.29) and (3.31) into (3.32) and noting that ∇²φ^ε(y_ε) is symmetric (as the derivative of a gradient operator), we get

(B*p_ε − u_ε + u*, v) ≤ h'(u_ε, v)  ∀v ∈ U.

By Proposition 1.6, (3.30) follows, thereby completing the proof.

Now take the scalar product of (3.29) by p_ε and use the coercivity condition (3.2) and the positivity of the operator ∇²φ^ε(y_ε) to get

ω||p_ε|| ≤ C|∇g^ε(y_ε)|  for all ε > 0.

Now, since g is locally Lipschitz, the map y → ∇g^ε(y) is uniformly bounded on bounded subsets. Hence

||p_ε|| ≤ C  for all ε > 0.

We may conclude, therefore, that there exist a sequence ε_n → 0 and p* ∈ V, η ∈ H such that

p_{ε_n} → p* weakly in V,   (3.33)
∇g^{ε_n}(y_{ε_n}) → η weakly in H.   (3.34)

By Lemma 3.2 and (3.34) it follows, via Proposition 1.12, that η ∈ ∂g(y*), where ∂g is the generalized gradient of g. Now, letting ε tend to zero in (3.30), it follows by Lemma 3.2 and Theorem 1.2 part (vii) that B*p* ∈ ∂h(u*). Let us denote by ∇²φ(y*)p* the element of V' defined by

∇²φ(y*)p* = weak-lim_{n→∞} ∇²φ^{ε_n}(y_{ε_n})p_{ε_n}.   (3.35)
Summarizing, we have

THEOREM 3.1 Let (y*,u*) ∈ V × U be any optimal pair in problem (P). Then there exists p* ∈ V which satisfies, along with y* and u*, the following system:

Ay* + ∂φ(y*) ∋ Bu* + f,   (3.36)
− A*p* − ∇²φ(y*)p* ∈ ∂g(y*),   (3.37)
B*p* ∈ ∂h(u*).   (3.38)

We may view p* as a dual extremal element of problem (P) and (3.36) to (3.38) as generalized first-order necessary conditions of optimality. Of course (3.37) is formal, because ∇²φ(y*) is a notation only. However, by (3.35) we suspect that ∇²φ(y*) is the second-order derivative of φ in some generalized sense. We shall see that this is indeed the case in some notable situations where (3.37) has a precise meaning. However, even in this form the optimality principle expressed by Theorem 3.1 may be useful in investigating the properties of optimal controls. On the other hand, the procedure developed above suggests the following approximating process for problem (P):
(P^ε) Minimize g^ε(y) + h_ε(u) over all (y,u) ∈ V × U subject to (3.21), where

h_ε(u) = inf {(2ε)^{−1}|u − v|²_U + h(v); v ∈ U}.

We note that h_ε is a convex and Fréchet differentiable approximation of h (see Theorem 1.9), and by Proposition 3.1 it follows that problem (P^ε) has at least one solution (y^ε, u^ε).
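The function h_ε is the Moreau–Yosida regularization of h. For h(u) = |u| everything is explicit — the proximal point (I + ε∂h)^{-1}u is soft-thresholding and h_ε is the Huber function — so the properties used here (h_ε ≤ h, differentiability with ∇h_ε(u) = ε^{-1}(u − (I + ε∂h)^{-1}u), and h_ε → h as ε → 0) can be verified directly. A sketch (illustrative, not from the text):

```python
import numpy as np

def prox_abs(u, eps):
    """(I + eps*dh)^{-1} u for h = |.| : soft-thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - eps, 0.0)

def moreau_env(u, eps):
    """h_eps(u) = min_v { |u - v|^2/(2 eps) + |v| }; the minimizer is
    v = prox_abs(u, eps), and h_eps is the Huber function."""
    v = prox_abs(u, eps)
    return (u - v)**2 / (2.0 * eps) + np.abs(v)

u = np.linspace(-3.0, 3.0, 601)
for eps in (1.0, 0.1):
    he = moreau_env(u, eps)
    assert np.all(he <= np.abs(u) + 1e-12)          # h_eps <= h
    grad = (u - prox_abs(u, eps)) / eps             # gradient of h_eps
    assert np.max(np.abs(grad - np.clip(u / eps, -1.0, 1.0))) < 1e-12
```

As ε → 0 the envelope increases to h, which is the monotone convergence exploited in the proof of Proposition 3.2 below.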
E n
E y n
+
ur weakly in U,
+
yr weakly in V.
Proof Since the proof is similar to that of Lemma 3.2 it will be outlined only. For all E > 0, we have gE(yE)
+
hE( u E) < g E( YoE)
+
h( U ) < C o
(3.39)
where Uo is any optimal control of problem (P) and y~ is the solution to (3.21) with u = uo. Recalling that (Theorem 1.9) E l 1 E E12 E ( hE ( UE ) = h(v ) + 2E v -u U' v = I
+
Eoh)- 1uE
we infer by (3.39) that there exists a sequence En E
E
1im u n
lim v n
n-xx:>
n-xx:>
= ur
+
0 such that
weakly in U.
Hence y
En
+
u* Y1
yr strongly in V.
Then by Proposition 1.12 and the weak lower semicontinuity of h we have U
g(yt) 72
+
h(ut) < g(y
o)
+
h(u O)'
Hence (yr, ui) is an optimal pai.r i.n problem (P), as claimed. REMARK 3.2 Assume tha t g: V -+ R sa tisfi es the weaker cond it ion (i) Remark 3.1). In this case we take instead of gE the function ~E:V defined by rc,
(see Remark 3.1). In this case we take, instead of g^ε, the function g̃^ε: V → R defined by

g̃^ε(y) = ∫_{R^n} g(P̃_n y − εΛ̃_n τ) ρ_n(τ) dτ, y ∈ V,   (3.25)'

where P̃_n: V → X̃_n is the projection operator onto X̃_n and Λ̃_n: R^n → X̃_n is defined by (1.49). Here X̃_n is the linear space generated by {ẽ_i}_{i=1}^n, where {ẽ_i} is an orthonormal basis in V. We note that Proposition 1.12 remains valid in this case. Let (ỹ^ε, ũ^ε) be any solution to the problem

(P̃_ε) Minimize g̃^ε(y) + h(u) + ½|u − u*|²_U over all (y,u) ∈ V × U satisfying (3.21), where the φ^ε: V → R satisfy conditions (3.22) to (3.24) and (2.15).

Arguing as in the proof of Lemma 3.2 and using Theorem 2.2, it follows that

LEMMA 3.2' For ε → 0 we have

ũ^ε → u* strongly in U, ỹ^ε → y* strongly in V.

As for Lemma 3.3 and Theorem 3.1, they remain unchanged in this new context, except for the fact that ∂g: V → V'. This remark is particularly useful in the case of boundary control problems with boundary observation (see Section 3.4).

REMARK 3.3 The pair (y*,u*), where y* = y^{u*}, is termed locally optimal for problem (P) if there exists r > 0 such that

g(y*) + h(u*) ≤ g(y^u) + h(u)

for all u ∈ U such that |u − u*|_U ≤ r.
Theorem 3.1 and the discussion preceding it remain valid if (y*,u*) is merely locally optimal. Indeed, we take in problem (P_ε) a cost functional of the form

g^ε(y) + h(u) + α|u − u*|²_U,

where α is sufficiently large that

lim sup_{ε→0} (g^ε(y*_ε) + h(u*)) ≤ αr².

Then, by the inequality (see the proof of Lemma 3.2)

g^ε(y_ε) + h(u_ε) + α|u_ε − u*|²_U ≤ g^ε(y*_ε) + h(u*),

we see that |u_ε − u*|_U ≤ r for all ε > 0 (without loss of generality we may assume that h ≥ 0). Hence, if u is as in the proof of Lemma 3.2, we have |u − u*|_U ≤ r, and therefore

g(y^u) + h(u) ≥ g(y*) + h(u*).

Together with (3.26) (where ½ has been replaced by α) and (3.27), this implies u = u* and u_ε → u* strongly in U. Hence Lemma 3.1 remains valid; Lemma 3.2 and the subsequent estimates are not affected.

§3.3 Distributed control problems governed by semilinear equations
This section concerns first-order necessary conditions of optimality for problem (3.13), (3.14), where g: L²(Ω) → R⁺ and h: U → R̄ satisfy assumptions (i), (ii) and B ∈ L(U, L²(Ω)).

THEOREM 3.2  Let (y*, u*) ∈ (H¹₀(Ω) ∩ H²(Ω)) × U be any optimal pair in problem (3.13), (3.14), where β: R → R is a locally Lipschitz, monotonically increasing function. Then there exist functions p* ∈ H¹₀(Ω) and ξ ∈ L²(Ω) such that A₀p* ∈ (L^∞(Ω))* and

−(A₀p*)_a ∈ p*∂β(y*) + ∂g(y*)  a.e. in Ω,  (3.40)
B*p* ∈ ∂h(u*).  (3.41)

If either 1 ≤ N ≤ 3 or β satisfies the condition

0 ≤ β'(r) ≤ C(|β(r)| + |r| + 1)  a.e. r ∈ R,  (3.42)

then A₀p* ∈ L¹(Ω) and (3.40) becomes

−A₀p* − ∂β(y*)p* ∋ ξ,  ξ ∈ ∂g(y*)  a.e. in Ω.  (3.43)

Here ∂β is the generalized gradient of β (see Section 1.6).

According to the theorem, the distribution A₀p* = Ap* ∈ H⁻¹(Ω) admits an extension to (L^∞(Ω))* (in particular it is a measure on Ω̄), again denoted A₀p*. By (A₀p*)_a we have denoted the absolutely continuous part of this measure. Thus (3.40) should be understood in the following precise sense: there exists a singular measure ν_s ∈ (L^∞(Ω))* such that for some η ∈ L¹(Ω),

η(x) ∈ −(∂g(y*(x)) + p*(x)∂β(y*(x)))  a.e. x ∈ Ω,  (3.44)

we have A₀p* = η + ν_s. In particular this means that

A₀p* = η + ν_s  in D'(Ω).
Proof of Theorem 3.2  Let β_ε = ε⁻¹(1 − (1 + εβ)⁻¹) and

β^ε(r) = ∫_{−∞}^{∞} β_ε(r − ε²θ)ρ(θ)dθ,  (3.45)

where ρ is a C^∞ mollifier on R, i.e. ρ ∈ C^∞(R), ρ(θ) = 0 for |θ| ≥ 1, ρ(θ) = ρ(−θ), ρ ≥ 0 and ∫_{−∞}^{∞} ρ(θ)dθ = 1.

It is readily seen that the function β^ε is infinitely differentiable, Lipschitzian with Lipschitz constant ε⁻¹, and

(β^ε)'(r) ≥ 0  for all r ∈ R,  (3.46)
|β^ε(r) − β_ε(r)| ≤ Cε  for all r ∈ R.  (3.47)
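The two-stage smoothing just defined — Yosida regularization followed by mollification — can be checked numerically. The sketch below is only illustrative: the bisection-based resolvent, the concrete bump-function mollifier and the demo nonlinearity are my own assumptions, not taken from the text.

```python
import numpy as np

def yosida(beta, r, eps):
    """Yosida approximation beta_eps(r) = (r - (1+eps*beta)^{-1} r) / eps.

    The resolvent z = (1+eps*beta)^{-1} r solves z + eps*beta(z) = r; since
    z -> z + eps*beta(z) is increasing we locate z by bisection.
    (The bracket below assumes an increasing beta with beta >= 0, as in the demo.)
    """
    lo, hi = min(r, r - eps * beta(r)) - 1.0, max(r, r - eps * beta(r)) + 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mid + eps * beta(mid) < r:
            lo = mid
        else:
            hi = mid
    z = 0.5 * (lo + hi)
    return (r - z) / eps

def smoothed(beta, r, eps, nodes=201):
    """beta^eps(r) = integral of beta_eps(r - eps^2*theta) rho(theta) dtheta, cf. (3.45)."""
    theta = np.linspace(-1.0, 1.0, nodes)
    rho = np.exp(-1.0 / np.clip(1.0 - theta**2, 1e-12, None))  # C^inf bump on (-1, 1)
    w = np.diff(theta)
    trap = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * w))  # trapezoid rule
    rho /= trap(rho)                                            # normalise: integral = 1
    vals = np.array([yosida(beta, r - eps**2 * t, eps) for t in theta])
    return trap(vals * rho)

# Demo with a locally Lipschitz increasing nonlinearity (an assumption for the demo).
beta = lambda s: max(s, 0.0) ** 3
print(smoothed(beta, 1.0, 0.1))
```

Since the mollification only shifts the argument by at most ε², the computed β^ε stays within O(ε) of β_ε, which is the content of (3.47).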
One of the main ingredients of the proof of Theorem 3.2 is Lemma 3.4.

LEMMA 3.4  Let E be a locally compact space and let ν be a positive measure on E such that ν(E) < +∞. Let {y_ε} ⊂ L¹(E; ν) be a sequence of real functions on E such that

y_ε → y  strongly in L¹(E; ν),  (3.48)
(β^ε)'(y_ε) → ζ  weakly in L¹(E; ν).  (3.49)

Then

ζ(x) ∈ ∂β(y(x)),  ν-a.e. x ∈ E.  (3.50)
Proof  We have denoted by L¹(E; ν) the usual space of all ν-summable functions on E. Selecting a subsequence of {y_ε} we have

y_ε(x) → y(x),  ν-a.e. x ∈ E.  (3.51)

By (3.51) and the Mazur theorem we may infer that ζ is the strong limit in L¹(E; ν) of a certain sequence {w_n} consisting of convex combinations of the (β^ε)'(y_ε), i.e.

w_n(x) = Σ_{i∈I_n} aⁱ_n (β^{ε_i})'(y_{ε_i}(x)).

Here I_n is a finite set of positive integers on the interval [n, +∞[ and aⁱ_n ≥ 0, Σ_{i∈I_n} aⁱ_n = 1. Thus without loss of generality we may assume that

w_n(x) → ζ(x),  ν-a.e. x ∈ E.

We fix x ∈ E with the property that the last relation holds, and consider a sequence {z_n} ⊂ R such that β'(z_n) exists for all n and lim_{n→∞} z_n = y(x). By (3.45) we see that

(β^{ε_i})'(y_i) = ε_i⁻² ∫_{−∞}^{∞} β_{ε_i}(y_i − ε²_iθ)ρ'(θ)dθ,  where y_i = y_{ε_i}(x).  (3.52)

On the other hand, we have

β_{ε_i}(y_i − ε²_iθ) = β((1 + ε_iβ)⁻¹(y_i − ε²_iθ))
= β(z_i) − β'(z_i)(z_i − (1 + ε_iβ)⁻¹(y_i − ε²_iθ)) − ω_i(θ)(z_i − (1 + ε_iβ)⁻¹(y_i − ε²_iθ)),

where ω_i(θ) → 0 if θ_i = |z_i − (1 + ε_iβ)⁻¹(y_i − ε²_iθ)| → 0. Substituting this in (3.52) and using ∫ρ'(θ)dθ = 0, after some manipulation we get

(β^{ε_i})'(y_i) = β'(z_i) − ε_i⁻¹β'(z_i) ∫_{−∞}^{∞} β_{ε_i}(y_i − ε²_iθ)ρ'(θ)dθ
− ε_i⁻² ∫_{−∞}^{∞} ρ'(θ)ω_i(θ)(z_i − (1 + ε_iβ)⁻¹(y_i − ε²_iθ))dθ.  (3.53)

In as much as β is locally Lipschitz, it follows by (3.51) that

β_{ε_i}(y_i − ε²_iθ) → β(y(x))  uniformly in θ ∈ [−1, 1].

On the other hand, z_i can be chosen sufficiently close to y_i that

lim_{i→∞} |y_i − z_i|/ε²_i = 0.

Hence lim_{i→∞} θ_i = 0, and so (3.53) yields

lim_{i→∞} ((β^{ε_i})'(y_i) − β'(z_i)) = 0,

and therefore ζ(x) = lim_{n→∞} w_n(x). Since by definition the convex hull of all limit points of {β'(z_i)} belongs
to ∂β(y(x)), the last relation implies (3.50) as claimed.

Proof of Theorem 3.2 (continued)  Let V = H¹₀(Ω), A be defined by (3.15) and φ^ε: L²(Ω) → R be defined by

φ^ε(y) = ∫_Ω j^ε(y(x))dx  ∀y ∈ L²(Ω),  (3.54)

where

j^ε(r) = ∫₀^r β^ε(s)ds  for all r ∈ R.

The function φ^ε is obviously Fréchet differentiable on L²(Ω). Thus problem (P_ε) can be written in this case as:

Minimize  g^ε(y) + h(u) + ½|u − u*|²_U  (3.55)

on all y ∈ H¹₀(Ω) ∩ H²(Ω) and u ∈ U subject to

A₀y + β^ε(y) = Bu + f  a.e. in Ω,  (3.56)
y = 0  in Γ.

We notice that the φ^ε satisfy conditions (2.23) to (2.27). Indeed, if φ: L²(Ω) → R̄ is defined by (3.16) then, as seen in Example 1.4,

φ_ε(y) = ∫_Ω j_ε(y(x))dx,  y ∈ L²(Ω),

and ∇φ_ε(y)(x) = β_ε(y)(x) a.e. x ∈ Ω for all y ∈ L²(Ω). (Here φ_ε is defined by (1.21).) Then by (3.47) we see that

|φ^ε(y) − φ_ε(y)| ≤ Cε(|y|₂ + 1)  ∀y ∈ L²(Ω)

and

|∇φ^ε(y) − ∇φ_ε(y)|₂ ≤ Cε  ∀y ∈ L²(Ω).  (3.58)

Then (2.23) to (2.25) and (2.27) are immediate consequences of Theorem 1.9. As for condition (2.26), by Green's formula we have

(A₀y, β^ε(y)) ≥ ∫_Ω A₀y β^ε(0)dx ≥ −C₁|A^{1/2}y|₂  ∀y ∈ H¹₀(Ω) ∩ H²(Ω),

because (β^ε)' ≥ 0 in R. (Throughout this chapter |·|₂ denotes the norm of L²(Ω).)
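Once β^ε is fixed, the penalized state equation (3.56) is a routine nonlinear solve. The sketch below (my own illustrative setup, not from the text) discretizes −y″ + β^ε(y) = f on (0, 1) with homogeneous Dirichlet data; for simplicity β^ε is replaced by the piecewise-linear penalty ε⁻¹(y − 0.5)⁺ pushing the state below the level 0.5, so the iteration is of semismooth Newton type rather than the text's smooth β^ε.

```python
import numpy as np

# 1-D finite-difference sketch of the penalized equation (3.56):
#   -y'' + beta_eps(y) = f on (0,1), y(0) = y(1) = 0.
# Grid, data and the concrete penalty are illustrative assumptions.
n, h = 99, 1.0 / 100
x = np.linspace(h, 1.0 - h, n)
f = 10.0 * np.sin(np.pi * x)

eps = 1e-2
beta_eps = lambda y: np.maximum(y - 0.5, 0.0) / eps      # penalizes y > 0.5
dbeta_eps = lambda y: (y > 0.5).astype(float) / eps      # a.e. derivative

A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2               # -y'' stencil

y = np.zeros(n)
for _ in range(50):                                      # (semismooth) Newton loop
    F = A @ y + beta_eps(y) - f
    step = np.linalg.solve(A + np.diag(dbeta_eps(y)), F)
    y -= step
    if np.max(np.abs(step)) < 1e-12:
        break

print(float(y.max()))
```

For small ε the computed state hugs the penalized level from above, the discrete analogue of the convergence y_ε → y* used throughout this section.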
Now let (y_ε, u_ε) ∈ (H¹₀(Ω) ∩ H²(Ω)) × U be an optimal pair in problem (3.55) and let p_ε ∈ H¹₀(Ω) be the dual extremal arc provided by Lemma 3.3, i.e.

A₀y_ε + β^ε(y_ε) = Bu_ε + f  a.e. in Ω,  (3.59)
−A₀p_ε − (β^ε)'(y_ε)p_ε = ∇g^ε(y_ε)  a.e. in Ω,  (3.60)
y_ε = p_ε = 0  in Γ,  (3.61)
B*p_ε ∈ ∂h(u_ε) + u_ε − u*.  (3.62)

By Lemma 3.2,

u_ε → u*  strongly in U,
y_ε → y*  strongly in H¹₀(Ω) and weakly in H²(Ω),  (3.63)
β^ε(y_ε) → Bu* + f − A₀y*  weakly in L²(Ω),

while by (3.33) and Theorem 3.1, on a subsequence again denoted ε, we have

p_ε → p*  weakly in H¹₀(Ω) and strongly in L²(Ω),  (3.64)
∇g^ε(y_ε) → ξ ∈ ∂g(y*)  weakly in L²(Ω),
B*p* ∈ ∂h(u*).  (3.65)

It remains to be shown that p* satisfies (3.40) and (3.43), respectively. To this end we need the following technical lemma:
LEMMA 3.5  There exists C > 0 independent of ε such that

∫_Ω |(β^ε)'(y_ε)p_ε| dx ≤ C  for all ε > 0.

Proof  Let ζ: R → R be a smooth, bounded and monotone approximation of the signum function such that ζ(0) = 0. For instance we may choose ζ = ζ^λ, where

ζ^λ(r) = ∫_{−∞}^{∞} γ_λ(r − λ²θ)ρ(θ)dθ,  (3.66)

ρ is a C^∞ mollifier on R and γ_λ(r) = r/|r| for |r| ≥ λ, γ_λ(r) = λ⁻¹r for |r| < λ.

Now we multiply (3.60) by ζ^λ(p_ε) and integrate on Ω. Using the Green formula we get

∫_Ω (β^ε)'(y_ε)p_ε ζ^λ(p_ε)dx ≤ C,

and letting λ tend to zero we get

∫_Ω (β^ε)'(y_ε)|p_ε| dx ≤ C  for all ε > 0,

because {∇g^ε(y_ε)} is bounded in L²(Ω).

According to (3.63) and (3.64) there exists a subsequence ε_n → 0 such that

y_{ε_n}(x) → y*(x)  a.e. x ∈ Ω,
p_{ε_n}(x) → p*(x)  a.e. x ∈ Ω.

On the other hand, by Lemma 3.5 the set {(β^{ε_n})'(y_{ε_n})p_{ε_n}} is weak star compact in (L^∞(Ω))*. Thus we may select a generalized subsequence of {ε_n}, denoted by {λ}, such that
(β^λ)'(y_λ)p_λ → ν  in the weak star topology of (L^∞(Ω))*,  (3.67)

where ν is some element of (L^∞(Ω))*. Going to the limit in (3.60) we see that the distribution Ap ∈ H⁻¹(Ω) defined by (3.15) admits an extension Ap ∈ (L^∞(Ω))* ∩ H⁻¹(Ω) such that Ap + ν ∈ L²(Ω) and

−(Ap + ν)(x) ∈ ∂g(y*(x))  a.e. x ∈ Ω.  (3.68)

Now by the Egorov theorem, for each η > 0 there exists a measurable subset Ω_η of Ω such that the Lebesgue measure of Ω∖Ω_η is ≤ η and

y* ∈ L^∞(Ω_η),  y_{ε_n}(x) → y*(x)  uniformly on Ω_η,
p* ∈ L^∞(Ω_η),  p_{ε_n}(x) → p*(x)  uniformly on Ω_η.

Since β is locally Lipschitz, the β^ε are equi-Lipschitz on every bounded subset, and therefore |(β^λ)'(y_λ(x))| ≤ M for all x ∈ Ω_η. Then by (3.67) we infer that, with χ_η the characteristic function of Ω_η,

νχ_η = lim (β^λ)'(y_λ)p_λ χ_η  weak star in L^∞(Ω_η).  (3.69)

Selecting a further subsequence we may assume that

(β^λ)'(y_λ) → f_η  weak star in L^∞(Ω_η).

Then by Lemma 3.4, f_η(x) ∈ ∂β(y*(x)) a.e. x ∈ Ω_η, while by (3.69) we have f_η p*χ_η = νχ_η. Hence

ν_a(x) = f_η(x)p*(x) ∈ ∂β(y*(x))p*(x)  a.e. x ∈ Ω_η,  (3.70)

where ν_a is the absolutely continuous part of ν. Since η is arbitrary, by (3.68) we conclude that

−(Ap)_a ∈ ∂g(y*) + ν_a  a.e. in Ω.

If 1 ≤ N ≤ 3, then by the Sobolev imbedding theorem H²(Ω) ⊂ C(Ω̄), and by
(3.63) it follows that the sequence {y_ε} is bounded in C(Ω̄). Since the β^ε are equi-Lipschitzian on bounded subsets we have

|(β^ε)'(y_ε(x))| ≤ C  for all x ∈ Ω and ε > 0.

We may conclude therefore that the set {(β^ε)'(y_ε)p_ε} is weakly compact in L²(Ω). Hence ν ∈ L²(Ω), and by (3.68) we see that A₀p* = (A₀p*)_a ∈ L²(Ω) and p* satisfies (3.43).

Now assume that β satisfies condition (3.42). Then

β_ε'(r) ≤ C(|β_ε(r)| + |r| + 1)  a.e. r ∈ R,

and by (3.45)

(β^ε)'(r) ≤ C(|β^ε(r)| + |r| + 1)  a.e. r ∈ R.

We shall prove via the Dunford–Pettis criterion that the set {(β^ε)'(y_ε)p_ε} is weakly compact in L¹(Ω). To this end consider an arbitrary measurable subset E of Ω and use the above inequality to get

∫_E |(β^ε)'(y_ε)p_ε| dx ≤ C(∫_E |β^ε(y_ε)p_ε| dx + ∫_E |y_ε||p_ε| dx + ∫_E |p_ε| dx)  for all ε > 0.  (3.71)

Since {p_ε} is strongly convergent and {β^ε(y_ε)} is weakly convergent in L²(Ω), inequality (3.71) implies that the family {∫_E (β^ε)'(y_ε)p_ε dx; E ⊂ Ω} is equi-absolutely continuous in L¹(Ω). Hence ν_a = ν ∈ L¹(Ω), and (3.43) follows from (3.40).
REMARK 3.4  Theorem 3.2 admits a straightforward extension to the case of control systems of the form

A₀y + β(y) = f + Bu  a.e. in Ω,  (3.13)'
∂y/∂ν + αy = 0  a.e. in Γ.

On the other hand, this theorem is applicable to a large variety of control problems governing steady-state diffusion processes. For instance, the diffusion kinetics equation governing the steady-state concentration y(x) of some substrate in an enzyme-catalyzed reaction has the form (see [45], [64])

−Δy + λ(y + k)⁻¹y = f  in Ω.
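As an illustration, the steady-state Michaelis–Menten equation just quoted can be solved numerically in one dimension; the data below (λ, k, f, the unit interval) are invented for the demo, and Newton's method is applied to the discretized monotone problem.

```python
import numpy as np

# Illustrative 1-D solve of -y'' + lam*y/(y+k) = f on (0,1), y(0) = y(1) = 0,
# a model for the steady-state substrate concentration discussed above.
# All data are made-up demo values; the text poses the PDE on a domain Omega.
n, h = 99, 1.0 / 100
f = 4.0 * np.ones(n)
lam, k = 2.0, 0.5

A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2          # -y'' stencil

y = np.zeros(n)
for _ in range(30):                                 # Newton iteration
    g = lam * y / (y + k)                           # monotone reaction term (y >= 0 here)
    dg = lam * k / (y + k) ** 2                     # its derivative
    y -= np.linalg.solve(A + np.diag(dg), A @ y + g - f)

print(float(y.max()))
```

Because the reaction term is monotone, the Jacobian A + diag(dg) stays positive definite and the iteration converges from the zero initial guess.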
§3.4  Optimal control of the obstacle problem

We shall study here the following optimization problem:

Minimize  g(y) + h(u)  (3.72)

on all y ∈ H¹₀(Ω) ∩ H²(Ω), u ∈ U subject to the obstacle problem

a(y, y − z) ≤ (y − z, f + Bu)  for all z ∈ K,  (3.73)

i.e.

A₀y − f − Bu ≥ 0,  y ≥ ψ  a.e. in Ω,  (3.73)'
(A₀y − f − Bu)(y − ψ) = 0  a.e. in Ω,  y = 0  in Γ.

Here B ∈ L(U, L²(Ω)), g: L²(Ω) → R and h: U → R̄ satisfy conditions (i) and (ii), f ∈ L²(Ω) and ψ ∈ H²(Ω) is such that ψ ≤ 0 a.e. in Γ. The subset K is given by (2.36) and (·,·) is the pairing between H¹₀(Ω) and H⁻¹(Ω).

THEOREM 3.3  Let (y*, u*) be an optimal pair in problem (3.72), (3.73). Then there exist p* ∈ H¹₀(Ω) with A₀p* ∈ (L^∞(Ω))* and ξ ∈ L²(Ω) such that ξ ∈ ∂g(y*) and

(A₀p*)_a + ξ = 0  a.e. in [y* > ψ],  (3.74)
p*(A₀y* − f − Bu*) = 0  a.e. in Ω,  (3.75)
a(p*, (y* − ψ)χ) + (ξ, (y* − ψ)χ) = 0  ∀χ ∈ C¹(Ω̄),  (3.76)
B*p* ∈ ∂h(u*),  (3.77)
a(p*, p*) + (ξ, p*) ≤ 0.  (3.78)

If 1 ≤ N ≤ 3 then (3.76) reduces to

(y* − ψ)(A₀p* + ξ) = 0.  (3.74)'

Since y* − ψ ∈ C(Ω̄) in the situation where 1 ≤ N ≤ 3, the product (y* − ψ)(A₀p* + ξ) makes sense as an element of (L^∞(Ω))*. In terms of the operator A defined by (3.15), (3.76) can be rewritten as

(y* − ψ)(Ap* + ξ) = 0,  (3.76)'

where (y* − ψ)Ap* is the element of D'(Ω) (more precisely of (C¹(Ω̄))*) defined by

((y* − ψ)Ap*)(χ) = a(p*, (y* − ψ)χ)  ∀χ ∈ C¹(Ω̄).

Equations (3.74) to (3.77) taken together represent a quasivariational inequality of elliptic type [52]. Note also that by (3.74) and (3.76)' it follows (formally) via Green's formula that

∫_Γ (∂p*/∂ν)(y* − ψ)χ = 0  ∀χ ∈ C¹(Ω̄).

Hence

(y* − ψ) ∂p*/∂ν = 0  in Γ.
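The complementarity form (3.73)' also suggests a simple numerical scheme: the classical projected Gauss–Seidel method, sketched here for a 1-D membrane pulled down onto a flat obstacle. Obstacle, load and grid are illustrative assumptions; the text itself does not prescribe this solver.

```python
import numpy as np

# Projected Gauss-Seidel for the 1-D obstacle problem (cf. (3.73)'):
#   -y'' - rhs >= 0,  y >= psi,  (-y'' - rhs)(y - psi) = 0,  y(0) = y(1) = 0.
n, h = 99, 1.0 / 100
rhs = -8.0 * np.ones(n)          # f + Bu: a downward load (assumed data)
psi = -0.2 * np.ones(n)          # flat rigid obstacle below the membrane

y = np.zeros(n)
diag, off = 2.0 / h**2, -1.0 / h**2
for _ in range(4000):            # sweeps: relax pointwise, then project onto y >= psi
    for i in range(n):
        s = rhs[i] - off * (y[i - 1] if i > 0 else 0.0) \
                   - off * (y[i + 1] if i < n - 1 else 0.0)
        y[i] = max(s / diag, psi[i])

contact = int(np.sum(y < psi + 1e-6))   # size of the coincidence set [y = psi]
print(contact, float(y.min()))
```

For this data the membrane touches the obstacle on an interior interval, exactly the situation Theorem 3.3 addresses: the multiplier information lives on [y* > ψ], while the adjoint is annihilated against A₀y* − f − Bu* on the contact set.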
Proof of Theorem 3.3  Let V = H¹₀(Ω), H = L²(Ω) and A: V → V' be defined by (3.15), and let

φ^ε(y) = ∫_Ω j^ε(y − ψ)dx  for all y ∈ L²(Ω),

where j^ε(r) = ∫₀^r β^ε(s)ds and β^ε is given by (3.45) with β_ε(r) = −ε⁻¹r⁻. In other words,

β^ε(r) = −ε⁻¹ ∫_{−∞}^{∞} (r − ε²θ)⁻ρ(θ)dθ,  (3.79)

so that β^ε(r) = ε⁻¹r for r ≤ −ε² and β^ε(r) = 0 for r ≥ ε².

As remarked in the proof of Theorem 3.2, the functions φ^ε: L²(Ω) → R satisfy conditions (2.23) to (2.27). Hence problem (P_ε) has in this case the following form:

Minimize  g^ε(y) + h(u) + ½|u − u*|²_U

on all (y, u) ∈ (H¹₀(Ω) ∩ H²(Ω)) × U subject to

A₀y + β^ε(y − ψ) = f + Bu  a.e. in Ω,  y = 0  in Γ.
Then the approximating optimality system (3.28) to (3.30) becomes

A₀y_ε + β^ε(y_ε − ψ) = f + Bu_ε  a.e. in Ω,
−A₀p_ε − (β^ε)'(y_ε − ψ)p_ε = ∇g^ε(y_ε)  a.e. in Ω,  (3.80)
B*p_ε ∈ ∂h(u_ε) + u_ε − u*.

By Lemma 3.2 we know that

y_ε → y*  strongly in H¹₀(Ω) and weakly in H²(Ω),  (3.81)
β^ε(y_ε − ψ) → f + Bu* − A₀y*  weakly in L²(Ω),  (3.82)

while by (3.33), (3.34) and Theorem 3.1,

p_ε → p*  weakly in H¹₀(Ω) and strongly in L²(Ω),  (3.83)
∇g^ε(y_ε) → ξ ∈ ∂g(y*)  weakly in L²(Ω),  (3.84)
B*p* ∈ ∂h(u*).

Now, multiplying the second equation in (3.80) by ζ^λ(p_ε), where ζ^λ is a smooth approximation of the signum function (see the proof of Theorem 3.2), integrating on Ω and using Green's formula (for ζ^λ → sign), we get

∫_Ω |(β^ε)'(y_ε − ψ)p_ε| dx ≤ C  for all ε > 0.  (3.85)

Let ζ_ε: Ω → R and λ_ε: Ω → R be the measurable functions defined by
ζ_ε(x) = 1 if |y_ε(x) − ψ(x)| ≤ ε²,  ζ_ε(x) = 0 if |y_ε(x) − ψ(x)| > ε²,
λ_ε(x) = 1 if y_ε(x) − ψ(x) ≤ −ε²,  λ_ε(x) = 0 if y_ε(x) − ψ(x) > −ε².

Noting that

(β^ε)'(r) = ε⁻¹ ∫_{ε⁻²r}^{∞} ρ(θ)dθ  for all r ∈ R,  (3.86)

it follows by (3.79) that

|p_ε(x)(β^ε − β_ε)(y_ε(x) − ψ(x))| ≤ ε|p_ε(x)| ∫₀¹ θρ(θ)dθ ≤ ε|p_ε(x)|  a.e. x ∈ Ω.  (3.87)

On the other hand, after some manipulation we obtain

|(β^ε)'(y_ε − ψ)(y_ε − ψ)p_ε| ≤ (|β^ε(y_ε − ψ)| + ε⁻¹|y_ε − ψ|λ_ε)|p_ε| + 2ε|p_ε|  a.e. in Ω.  (3.88)

By (3.79) and (3.82) we see that β^ε(y_ε − ψ)λ_ε = ε⁻¹(y_ε − ψ)λ_ε + O(ε) remains in a bounded subset of L²(Ω), while by the definition of ζ_ε we have ε⁻¹|y_ε − ψ|ζ_ε ≤ ε a.e. in Ω. Then, since {p_ε β^ε(y_ε − ψ)} is bounded in L¹(Ω), it follows by (3.83), (3.86) and (3.88) that, for some ε_n → 0,

(β^{ε_n})'(y_{ε_n} − ψ)(y_{ε_n} − ψ)p_{ε_n} → 0  strongly in L¹(Ω),  (3.89)

while by (3.81) and (3.83),
Together with (3.89) and the Egorov theorem, this yields

p*(f + Bu* − A₀y*) = 0  a.e. in Ω,

and therefore

p_{ε_n} β^{ε_n}(y_{ε_n} − ψ) → p*(f + Bu* − A₀y*)  strongly in L¹(Ω).  (3.90)

Then by (3.87) we have

p_{ε_n} β_{ε_n}(y_{ε_n} − ψ) → p*(f + Bu* − A₀y*)  strongly in L¹(Ω).  (3.91)

Since |β^ε(y_ε − ψ) + ε⁻¹(y_ε − ψ)⁻| ≤ Cε, it follows by (3.86), (3.90) and (3.91) that

ε_n⁻¹(y_{ε_n} − ψ)⁻ p_{ε_n} → −p*(f + Bu* − A₀y*)  strongly in L¹(Ω).

Now, since (y_ε − ψ)⁺ ∈ H¹₀(Ω), applying Green's formula in (3.80) yields

a(p_{ε_n}, (y_{ε_n} − ψ)⁺χ) + (∇g^{ε_n}(y_{ε_n}), (y_{ε_n} − ψ)⁺χ) + ((β^{ε_n})'(y_{ε_n} − ψ)p_{ε_n}, (y_{ε_n} − ψ)⁺χ) → 0  ∀χ ∈ C¹(Ω̄).

Together with (3.81) and (3.84), this yields (3.76). Equation (3.78) follows immediately by virtue of (3.83) and (3.84), because by (3.80)

y_{ε_n}(x) → y*(x)  a.e. x ∈ Ω.

It follows by (3.85) that there exist γ ∈ (L^∞(Ω))* and a generalized subsequence {λ} of {ε_n} such that

(β^λ)'(y_λ − ψ)p_λ → γ  weak star in (L^∞(Ω))*.  (3.92)

This implies that A₀p* can be extended as an element of (L^∞(Ω))* and

−(A₀p* + γ) ∈ ∂g(y*) ⊂ L²(Ω).
Now, according to the Egorov theorem, for every η > 0 there exists a measurable subset Ω_η of Ω such that m(Ω∖Ω_η) ≤ η and y_{ε_n} − ψ → y* − ψ in L^∞(Ω_η). Then by (3.91) and (3.92) we infer that γ(y* − ψ) = 0 in Ω_η, i.e.

∫_{Ω_η} (y* − ψ)γ_a φ dx + γ_s((y* − ψ)φ) = 0  ∀φ ∈ L^∞(Ω), supp φ ⊂ Ω_η.

There exists an increasing sequence {Ω_k} such that m(Ω∖Ω_k) ≤ k⁻¹ and γ_s vanishes on L^∞(Ω_k). Hence

∫_{Ω_η∩Ω_k} (y* − ψ)γ_a φ dx = 0  ∀φ ∈ L^∞(Ω), supp φ ⊂ Ω_η ∩ Ω_k.

Thus (y* − ψ)γ_a = 0 a.e. in Ω_η, and letting η tend to zero we conclude that (y* − ψ)γ_a = 0 a.e. in Ω. Hence

−(A₀p*)_a ∈ ∂g(y*)  a.e. in [y* > ψ],

as claimed. Now assume that 1 ≤ N ≤ 3. Then H²(Ω) ⊂ C(Ω̄), and as seen earlier this implies that y_ε, y* ∈ C(Ω̄) and y_ε → y* uniformly on Ω̄. Since ψ ∈ C(Ω̄) it follows by (3.91) and (3.92) that (y* − ψ)γ = 0 in Ω, and therefore

(y* − ψ)(A₀p* + ξ) = 0.

This completes the proof of Theorem 3.3.

Distributed control problems of the type encountered in this section arise in a variety of situations, and we now digress briefly to describe one such example. Consider the model, presented in Section 2.3, of an elastic membrane clamped along the boundary, inflated from above by a vertical force field with density u and limited from below by a rigid obstacle (Figure 2.1). The desired shape of the membrane is given by the distribution y⁰(x) of the deflection, and we look for a control parameter u subject to the constraints

|u(x)| ≤ ρ  a.e. x ∈ Ω,

such that the system response y^u deviates minimally from y⁰ in some definite sense.
For example, we may consider the problem of minimizing

½ ∫_Ω |y(x) − y⁰(x)|² dx

on all (y, u) ∈ H¹₀(Ω) × L²(Ω) subject to the control constraint on u just given and to state equation (3.73), where B = I and f = 0. This is a problem of the form (3.72), (3.73) where

g(y) = ½ ∫_Ω |y − y⁰|² dx

and

h(u) = 0 if |u(x)| ≤ ρ a.e. x ∈ Ω,  h(u) = +∞ otherwise.

According to Theorem 3.3, if (y*, u*) is an optimal pair of this problem then there exists p* ∈ H¹₀(Ω) which satisfies, along with y* and u*, the system

Δy* + u* = 0  a.e. in [y* > ψ] = Ω⁺,
y* ≥ ψ,  Δy* + u* ≤ 0  a.e. in Ω,
Δp* = y* − y⁰  a.e. in [y* > ψ],
p*(u* + Δy*) = 0  a.e. in Ω,
u* = ρ sgn p*  a.e. in Ω.

This yields

ρ|p*| + p*Δψ = 0  a.e. in {x; y*(x) = ψ(x)}.

Thus if |Δψ(x)| < ρ a.e. x ∈ Ω, we may regard p* as the solution to the homogeneous Dirichlet problem on Ω⁺:

Δp* = y* − y⁰  in Ω⁺,  p* = 0  in ∂Ω⁺.

To solve this problem numerically we may use a Gauss–Seidel algorithm of the following type. Starting with u₀ arbitrary, we solve inductively the following sequence of obstacle problems:
Δy_i + u_i ≤ 0,  y_i ≥ ψ,  (Δy_i + u_i)(y_i − ψ) = 0  a.e. in Ω,  y_i = 0  in Γ;
Δp_i = y_i − y⁰  in Ω_i⁺ = {x ∈ Ω; y_i(x) > ψ(x)},  p_i = 0  in ∂Ω_i⁺;
u_{i+1} = ρ sgn p_i  a.e. in Ω,  i = 0, 1, 2, ...
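A minimal implementation of this iteration is sketched below, under the same assumptions as the example above (A₀ = −Δ in one dimension, B = I, f = 0) with made-up target y⁰, bound ρ and obstacle ψ. The inner obstacle solves use projected Gauss–Seidel and the adjoint Dirichlet problem is solved on the current non-coincidence set; no convergence claim is made, since the text presents the scheme only formally.

```python
import numpy as np

# Gauss-Seidel-type iteration from the text: given u_i, solve the obstacle
# problem for y_i, solve  Delta p_i = y_i - y0  on the non-coincidence set
# (p_i = 0 elsewhere), then update u_{i+1} = rho * sgn(p_i).
n, h = 99, 1.0 / 100
x = np.linspace(h, 1.0 - h, n)
psi = -0.15 * np.ones(n)            # obstacle (assumed data)
y0 = -0.1 * np.sin(np.pi * x)       # desired deflection (assumed data)
rho = 3.0                           # control bound |u| <= rho

diag, off = 2.0 / h**2, -1.0 / h**2
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

def obstacle_solve(u):
    """Projected Gauss-Seidel for -y'' = u, y >= psi, y(0) = y(1) = 0."""
    y = np.zeros(n)
    for _ in range(3000):
        for i in range(n):
            s = u[i] - off * (y[i - 1] if i > 0 else 0.0) \
                     - off * (y[i + 1] if i < n - 1 else 0.0)
            y[i] = max(s / diag, psi[i])
    return y

u = np.zeros(n)
for _ in range(5):                  # a few sweeps of the outer iteration
    y = obstacle_solve(u)
    free = y > psi + 1e-8           # current non-coincidence set [y_i > psi]
    p = np.zeros(n)
    idx = np.where(free)[0]
    if idx.size:
        # adjoint: -p'' = -(y - y0) on the free set, homogeneous Dirichlet data
        p[idx] = np.linalg.solve(A[np.ix_(idx, idx)], -(y[idx] - y0[idx]))
    u = rho * np.sign(p)

print(float(np.abs(u).max()))
```

Restricting the full stiffness matrix to the free indices automatically imposes p = 0 at the contact nodes, which is the discrete counterpart of solving the adjoint problem on Ω_i⁺ only.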
Another way to solve the given control problem numerically is to use the approximating control process (P^ε) (Yvon [85], [86]).

REMARK 3.5  Theorems 3.2 and 3.3 remain valid if (y*, u*) is merely a local optimal pair of problems (3.13), (3.14) and (3.72), (3.73), respectively.

REMARK 3.6  For a special case of the control problem considered here, somewhat more refined necessary conditions of optimality can be derived via the differentiability theory developed by F. Mignot [59] in Hilbert space. Let us briefly describe such a result for the optimal control problem (3.72), (3.73) where for convenience ψ = 0, U = L²(Ω), B = I and

g(y) = ½ ∫_Ω |y(x) − y⁰(x)|²dx,  h(u) = ½ ∫_Ω |u(x)|²dx.

For any u ∈ L²(Ω), define

Z_u = {x ∈ Ω; y^u(x) = 0},
S_u = {φ ∈ H¹₀(Ω); φ ≥ 0 in Z_u, (φ, A₀y^u − u − f) = 0},

where y^u ∈ H¹₀(Ω) ∩ H²(Ω) is the solution to (3.73). For this problem the following optimality theorem can be proved (Mignot and Puel [60]): Let (y*, u*) be an arbitrary optimal pair. Then there exists p ∈ −S_{u*} such that u* = p and

a(p, φ) + ∫_Ω (y* − y⁰)φ dx ≤ 0  for all φ ∈ S_{u*}.

This result is comparable with Theorem 3.3, but neither contains it nor is contained in it.
The method of the proof relies on the properties of the conical derivative (see [59]) of the map u → y^u, and seems to be applicable for quadratic cost criteria only.

§3.5  Control of free surfaces

Very often the study of physical systems modelled by elliptic variational inequalities of the form (3.73) leads to the problem of controlling the incidence set {x ∈ Ω; y(x) = ψ(x)} = E_y or its boundary ∂E_y. For instance, in the problem of seepage through a porous dam presented in Section 2.4, an important objective is to keep the wetted region Ω₀ under the level y₂ (Figure 2.3). For the contact problem presented in Section 2.3 (see Figure 2.1), a problem of interest would be to find the distribution of the force field (viewed as a control parameter) in such a way that the set of all points x where the membrane is in contact with the obstacle contains a given subset Ω₀ of Ω. This problem can be formulated in the following general form:

Minimize the function  g₀(y) + h(u)

on all (y, u) ∈ H¹₀(Ω) × U subject to (3.73) and to the constraint

y(x) = ψ(x)  a.e. x ∈ Ω₀,

where Ω₀ is a given measurable subset of Ω.

The least-squares approach to this problem leads to the optimal control problem (3.72), (3.73) where

g(y) = λ ∫_Ω χ_{Ω₀}(x)|y(x) − ψ(x)|²dx + g₀(y).

Here λ > 0 and χ_{Ω₀} is the characteristic function of Ω₀. Theorem 3.3 is applicable in the present situation and can be used to find necessary conditions of optimality as well as for the approximation of optimal controls.

In other situations the objective of the control is to keep the incidence set E_y as close as possible to a given measurable subset E of Ω. In this case a natural choice for the performance index (3.72) would be
½ ∫_Ω |χ_{E_y}(x) − χ_E(x)|²dx + h(u),  (3.93)

where χ_{E_y} and χ_E are the characteristic functions of E_y and E, respectively. However, since the function y → ∫_Ω |χ_{E_y}(x) − χ_E(x)|²dx is not locally Lipschitzian on L²(Ω), following an approach due to Saguez [79], [80] we shall approximate the cost functional (3.93) by

½ ∫_Ω |λ/(y − ψ + λ) − χ_E|²dx + h(u),  (3.93)'

which falls under Theorem 3.3 if h satisfies assumption (ii) and λ > 0. By Proposition 3.1, for every λ > 0 the control problem with cost functional (3.93)' and state equation (3.73) has an optimal pair (y_λ, u_λ) ∈ (H¹₀(Ω) ∩ H²(Ω)) × U.

PROPOSITION 3.3  There exists a sequence λ_n → 0 such that

u_{λ_n} → u*  weakly in U,
y_{λ_n} → y*  weakly in H²(Ω) and strongly in H¹₀(Ω),

where (y*, u*) is an optimal pair of the optimal control problem with cost functional (3.93) and state equation (3.73).

Proof  We have

½ ∫_Ω |λ/(y_λ − ψ + λ) − χ_E|²dx + h(u_λ) ≤ ½ ∫_Ω |λ/(y − ψ + λ) − χ_E|²dx + h(u)  (3.94)

for all y and u ∈ U satisfying the variational inequality (3.73). Then by condition (3.5) we see that {u_λ} is weakly compact in U. Since the map u → y^u is bounded from U to H²(Ω) ∩ H¹₀(Ω), we may infer that there exist a sequence {λ_n} → 0 and (y*, u*) ∈ (H²(Ω) ∩ H¹₀(Ω)) × U such that

u_{λ_n} → u*  weakly in U,
y_{λ_n} → y*  weakly in H²(Ω) and strongly in H¹₀(Ω),

where y*, u* satisfy (3.73).
Moreover, we have

y_λ − ψ + λ ≥ 0  and  λ/(y_λ − ψ + λ) ≥ χ_{E_{y_λ}}(x)  a.e. x ∈ Ω,

while

lim_{λ→0} λ/(y_λ − ψ + λ) = 1 = χ_{E_{y*}}  a.e. where y* = ψ,
limsup_{λ→0} λ/(y_λ − ψ + λ) ≤ χ_{E_{y*}}  a.e. where y* > ψ.

On the other hand, χ_{E_{y_λ}} → χ_{E_{y*}} strongly in L²(Ω) for λ → 0. All these relations taken together show, via Lebesgue's dominated convergence theorem, that

lim_{λ→0} ∫_Ω |λ/(y_λ − ψ + λ) − χ_E|²dx = ∫_Ω |χ_{E_{y*}} − χ_E|²dx.

Then by (3.94) we see that (y*, u*) is an optimal pair in problem (3.93), (3.73), as claimed.

To be more specific, let us assume that 1 ≤ N ≤ 3, f = 0, A₀ = −Δ, U = L²(Ω), B = I and

h(u) = 0 if |u(x)| ≤ ρ a.e. x ∈ Ω,  h(u) = +∞ otherwise.

Then by Theorem 3.3 there exists p_λ ∈ H¹₀(Ω) which satisfies, together with y_λ and u_λ, the system

y_λ ≥ ψ,  Δy_λ + u_λ ≤ 0  in Ω,
Δp_λ = λ(y_λ − ψ + λ)⁻²(χ_E − λ/(y_λ − ψ + λ))  a.e. in [y_λ > ψ],
ρ|p_λ| + p_λΔψ = 0  a.e. in [y_λ = ψ],
u_λ = ρ sgn p_λ.

This system can be solved inductively as indicated in Section 3.4.
§3.6  Distributed control systems with nonlinear boundary value conditions

We shall study here the problem presented in Example 3.2, i.e.

Minimize  g(y) + h(u)  (3.95)

on all (y, u) ∈ H²(Ω) × U subject to

μy + A₀y = f + Bu  a.e. in Ω,  (3.96)
∂y/∂ν + β(y) ∋ 0  a.e. in Γ,

where A₀ is the second-order elliptic operator defined by (3.12), μ is a positive constant, β is a maximal monotone graph in R × R such that 0 ∈ D(β), f ∈ L²(Ω) and B ∈ L(U, L²(Ω)). The functions g: L²(Ω) → R⁺ and h: U → R̄ satisfy assumptions (i) and (ii).

Let φ: H¹(Ω) → R̄ be defined by (3.19) and

φ^ε(y) = ∫_Γ j^ε(y)dσ  ∀y ∈ H¹(Ω),

where j^ε(r) = ∫₀^r β^ε(s)ds and β^ε is given by (3.45). We note that conditions (3.22) to (3.24) are satisfied in this situation, and problem (P_ε) can be written as:

Minimize  g^ε(y) + h(u) + ½|u − u*|²_U

on all (y, u) ∈ H²(Ω) × U subject to

μy + A₀y = f + Bu  a.e. in Ω,
∂y/∂ν + β^ε(y) = 0  a.e. in Γ.

Since Lemmas 3.2, 3.3 and Theorem 3.1 are applicable, we infer that there exist y_ε ∈ H²(Ω), u_ε ∈ U and p_ε ∈ H¹(Ω) satisfying the system

μy_ε + A₀y_ε = f + Bu_ε  a.e. in Ω,  (3.97)
∂y_ε/∂ν + β^ε(y_ε) = 0  a.e. in Γ,

−μp_ε − A₀p_ε = ∇g^ε(y_ε)  a.e. in Ω,  (3.98)
∂p_ε/∂ν + (β^ε)'(y_ε)p_ε = 0  a.e. in Γ,

B*p_ε ∈ ∂h(u_ε) + u_ε − u*,  (3.99)
and for ε → 0

u_ε → u*  strongly in U,
y_ε → y*  weakly in H¹(Ω) and strongly in L²(Ω),
p_ε → p*  weakly in H¹(Ω),
∇g^ε(y_ε) → η ∈ ∂g(y*)  weakly in L²(Ω).

Now by estimate (2.80) we know that

‖y_ε‖_{H²(Ω)} ≤ C(‖f + Bu_ε‖₂ + 1),

where C is independent of ε. Thus without loss of generality we may assume that

y_ε → y*  weakly in H²(Ω) and strongly in H¹(Ω),  (3.100)

and, by the trace theorem,

p_ε → p*  weakly in H^{1/2}(Γ) and strongly in L²(Γ).

Then by (3.47) we infer that

β^ε(y_ε) → s ∈ β(y*)  strongly in L²(Γ).  (3.101)

Next we multiply (3.98) by sgn p_ε (more precisely by ζ(p_ε), where ζ is defined
by (3.66)) and use Green's formula to get

∫_Γ |(β^ε)'(y_ε)p_ε| dσ ≤ C  for all ε > 0.

Thus there exists γ ∈ (L^∞(Γ))* such that, on a generalized subsequence,

(β^ε)'(y_ε)p_ε → γ  weak star in (L^∞(Γ))*.  (3.102)

Thus, letting ε tend to zero in (3.97) to (3.99), we see that y*, u* and p* satisfy the system

μy* + A₀y* = f + Bu*  a.e. in Ω,  (3.103)
∂y*/∂ν + β(y*) ∋ 0  a.e. in Γ,

∂p*/∂ν + γ = 0  in Γ,  (3.104)

B*p* ∈ ∂h(u*).  (3.105)

Equation (3.104) should be interpreted in the following sense:

a(p*, α) + μ ∫_Ω p*α dx + γ(α) + ∫_Ω ηα dx = 0  ∀α ∈ C^∞(Ω̄),
where a is the bilinear functional (2.34) and η ∈ ∂g(y*) a.e. in Ω. Equations (3.103) to (3.105) represent first-order necessary conditions of optimality for the control problem (3.95), (3.96), and can be made explicit in several specific situations.

THEOREM 3.4  Let (y*, u*) be any optimal pair in problem (3.95), (3.96), where β is monotonically increasing and locally Lipschitzian. Then there exists a function p* ∈ H¹(Ω) with ∂p*/∂ν ∈ (L^∞(Γ))* and μp* + A₀p* ∈ L²(Ω) which satisfies (3.103), (3.105) and

μp* + A₀p* + ∂g(y*) ∋ 0  in Ω,  (3.106)
(∂p*/∂ν)_a + p*∂β(y*) ∋ 0  a.e. in Γ.

If either 1 ≤ N ≤ 3 or β satisfies condition (3.42), then ∂p*/∂ν ∈ L¹(Γ) and so (3.106) becomes

μp* + A₀p* + ∂g(y*) ∋ 0  in Ω,  (3.106)'
∂p*/∂ν + p*∂β(y*) ∋ 0  a.e. in Γ.
Here (∂p*/∂ν)_a is the absolutely continuous part of ∂p*/∂ν and ∂β is the generalized gradient of β.

Proof  Since the proof is similar to that of Theorem 3.2, it will be given in outline only. According to Egorov's theorem, for every λ > 0 there exists a measurable subset Γ_λ of Γ such that m(Γ∖Γ_λ) ≤ λ and

y_ε → y*  in L^∞(Γ_λ)

(m is the Lebesgue measure on Γ). In as much as β is locally Lipschitz, there exists a subsequence, again denoted ε, such that

(β^ε)'(y_ε) → f_λ  weak star in L^∞(Γ_λ),

whilst by Lemma 3.4

f_λ(σ) ∈ ∂β(y*(σ))  a.e. σ ∈ Γ_λ.

Now by the trace theorem, p_ε → p* weakly in H^{1/2}(Γ) and strongly in L²(Γ). Then by (3.102) we infer that γ_a = f_λ p* in Γ_λ, and so for λ → 0 we have

γ_a ∈ p*∂β(y*)  a.e. in Γ.

If 1 ≤ N ≤ 3, then by the Sobolev imbedding theorem H²(Ω) ⊂ C(Ω̄), and therefore {y_ε} is a bounded subset of C(Ω̄). This implies that

|(β^ε)'(y_ε(σ))| ≤ C  ∀σ ∈ Γ,

and therefore {(β^ε)'(y_ε)p_ε} is a weakly compact subset of L²(Γ). Hence γ ∈ L²(Γ), as claimed.
If β satisfies condition (3.42), then by inequality (3.71), where E is an arbitrary measurable subset of Γ, we conclude via the Dunford–Pettis criterion that {(β^ε)'(y_ε)p_ε} is a weakly compact subset of L¹(Γ). This completes the proof of Theorem 3.4.

Now we consider the special case where the graph β is defined by (2.54). As seen in Section 2.5, in this case (3.96) reduces to the Signorini problem

μy + A₀y = f + Bu  a.e. in Ω,  (3.107)
y ≥ 0,  ∂y/∂ν ≥ 0,  y ∂y/∂ν = 0  a.e. in Γ,

which models the equilibrium of an elastic body in contact with a rigid supporting body. The control of the displacements y is achieved through a distributed field of forces with density Bu.

THEOREM 3.5  Let (y*, u*) ∈ H²(Ω) × U be any optimal pair of control problem (3.95) with state equation (3.107). Then there exist functions p* ∈ H¹(Ω) and q ∈ L²(Ω) such that ∂p*/∂ν ∈ (L^∞(Γ))* and

μp* + A₀p* = −q,  q ∈ ∂g(y*)  a.e. in Ω,  (3.108)
y*(∂p*/∂ν)_a = 0  a.e. in Γ,  (3.109)
p* ∂y*/∂ν = 0  a.e. in Γ,  (3.110)
B*p* ∈ ∂h(u*).  (3.111)

If 1 ≤ N ≤ 3 then y* ∈ C(Ω̄), ∂p*/∂ν ∈ L²(Γ) and

y* ∂p*/∂ν = 0  a.e. in Γ.  (3.112)
Proof  The proof is identical with that of Theorem 3.3; however, we sketch it for the reader's convenience. In this case β^ε is given by (3.79). Let ζ̃_ε: Γ → R and λ̃_ε: Γ → R be the measurable functions

ζ̃_ε(σ) = 1 if |y_ε(σ)| ≤ ε²,  ζ̃_ε(σ) = 0 if |y_ε(σ)| > ε²,
λ̃_ε(σ) = 1 if y_ε(σ) ≤ −ε²,  λ̃_ε(σ) = 0 if y_ε(σ) > −ε².

We have (see (3.87), (3.88))

|y_ε(σ)(β^ε)'(y_ε(σ))p_ε(σ) − p_ε(σ)β^ε(y_ε(σ))| ≤ ε|p_ε(σ)|  a.e. σ ∈ Γ  (3.113)

and

|p_ε β^ε(y_ε(σ))| ≤ |εp_ε(σ)(β^ε)'(y_ε(σ))| + (ε⁻¹|y_ε(σ)|ζ̃_ε(σ) + ε⁻¹|y_ε(σ)|λ̃_ε(σ))|p_ε(σ)| + 2ε|p_ε(σ)|  a.e. σ ∈ Γ.

Since {β^ε(y_ε)} is bounded in L²(Γ) and

β^ε(y_ε)λ̃_ε = ε⁻¹y_ελ̃_ε + ελ̃_ε ∫₀¹ θρ(θ)dθ,

we infer that {ε⁻¹y_ελ̃_ε} is bounded in L²(Γ). On the other hand, ε⁻¹|y_ε|ζ̃_ε ≤ ε in Γ. Since {p_ε(β^ε)'(y_ε)} is bounded in L¹(Γ), we infer that there exists a sequence ε_n → 0 such that

p_{ε_n}(σ)β^{ε_n}(y_{ε_n}(σ)) → 0  a.e. σ ∈ Γ,

and by (3.100), (3.101) we may conclude therefore that

p_{ε_n}β^{ε_n}(y_{ε_n}) → −p* ∂y*/∂ν = 0  strongly in L¹(Γ).

Finally, by (3.113) we see that

y_{ε_n}(β^{ε_n})'(y_{ε_n})p_{ε_n} → 0  strongly in L¹(Γ).  (3.114)

Now by the Egorov theorem, for every η > 0 there exists Γ_η ⊂ Γ such that m(Γ∖Γ_η) ≤ η, y* ∈ L^∞(Γ_η) and y_ε → y* uniformly on Γ_η. This yields

y*γ = 0  on Γ_η,
and, arguing as in the proof of Theorem 3.3, we see that y*γ_a = 0 a.e. in Γ. Together with (3.104), this yields (3.109).

If 1 ≤ N ≤ 3, then by (3.100) it follows that the sequence {y_ε} is bounded in H²(Ω) ⊂ C(Ω̄) and y* ∈ C(Ω̄). Hence

|(β^ε)'(y_ε(σ))| ≤ C  a.e. σ ∈ Γ for all ε > 0,

and by (3.102) it follows that γ = −∂p*/∂ν ∈ L²(Γ) and

(β^ε)'(y_ε)p_ε → −∂p*/∂ν  weakly in L²(Γ),
y_ε → y*  in C(Ω̄).

Together with (3.100) and (3.114), this yields (3.112), thereby completing the proof.

REMARK 3.7  Since p_ε β^ε(y_ε) → 0 strongly in L¹(Γ), we see by (3.98) and the Green formula that

a(p_ε, p_ε) + (μp_ε + ∇g^ε(y_ε), p_ε) ≤ 0  for all ε > 0.

Thus we may add to (3.108) to (3.111) the following:

a(p*, y*χ) + (μp* + q, y*χ) = 0  ∀χ ∈ C¹(Ω̄),  (3.109)'
a(p*, p*) + (μp* + q, p*) ≤ 0.

REMARK 3.8  Theorems 3.4 and 3.5 remain true if the function g: H¹(Ω) → R⁺ merely satisfies condition (i)' (see Remarks 3.1, 3.2). To be more specific, we take g to be of the following form (boundary observation):
g(y) = ½ ∫_Γ |y − y⁰|²dσ,  y ∈ H¹(Ω),  (3.115)

where y⁰ ∈ L²(Γ) is a given function. In this case the map ∇g: H¹(Ω) → (H¹(Ω))' is given by

(∇g(y), z) = ∫_Γ (y − y⁰)z dσ  ∀z ∈ H¹(Ω),

and the approximating optimality system (3.97), (3.98) has the following form:

μy_ε + A₀y_ε = f + Bu_ε  a.e. in Ω,
∂y_ε/∂ν + β^ε(y_ε) = 0  a.e. in Γ,
−μp_ε − A₀p_ε = 0  a.e. in Ω,
∂p_ε/∂ν + (β^ε)'(y_ε)p_ε = y⁰ − y_ε  a.e. in Γ.

Then letting ε tend to zero we conclude, as above (see also Lemma 3.2'), that there exists p ∈ H¹(Ω) which satisfies, together with the optimal pair (y*, u*), the system (3.103), (3.105) and

μp + A₀p = 0  a.e. in Ω,
∂p/∂ν + δ = y⁰ − y*  in Γ,

where δ ∈ (L^∞(Γ))*. If β satisfies the conditions of Theorem 3.4, then it follows that

δ_a ∈ p∂β(y*)  a.e. in Γ

and, when β satisfies (3.42), δ ∈ p∂β(y*) a.e. in Γ. For the Signorini problem (3.107), it follows as in the proof of Theorem 3.5 that the dual extremal function p satisfies, in particular,

p ∂y*/∂ν = 0  a.e. in Γ.

We can also consider cost functionals g of other integral forms on H¹(Ω).
We leave to the reader the derivation of the optimality systems in these situations.

REMARK 3.9  Assume that β is given by (2.84). Then the control system (3.96) becomes (see (2.85))

μy + A₀y = f + Bu  in Ω,
|∂y/∂ν| ≤ α,  y ∂y/∂ν + α|y| = 0  in Γ.

In this case we have

β_ε(r) = α sgn r  if |r| ≥ αε,  β_ε(r) = ε⁻¹r  if |r| ≤ αε,

and consequently

β^ε(r) = ε⁻¹ ∫_{ε⁻²(r−αε)}^{ε⁻²(r+αε)} (r − ε²θ)ρ(θ)dθ + α ∫_{−∞}^{ε⁻²(r−αε)} ρ(θ)dθ − α ∫_{ε⁻²(r+αε)}^{∞} ρ(θ)dθ  ∀r ∈ R,

and the optimality equations may be calculated as in Theorem 3.5.

§3.7  Control and observation on the boundary
We shall here consider the problem presented in Example 3.3. more specific we take ~ = 1, AO = -6 and
In order to be
9 = 91 + 92 where g1:L2(~) ~ R+ is locally Lipschitzian and 92:V f } ~ R+ is given by 2 92(y) =
102
Jr1 90(a,y(a))da vy
E
V.
{y E H1(~); y = 0 in
Here gO:r 1 x R + R+ is a function measurable in a, differentiable in y and satisfies the conditions 90(0,0) = 0,
IlJy90(0,y) 1-<
e(l ~ Iyl) a.e. 0 E r 1, for all y E R.
This condition implies in particular that g2 is Frechet differentiable on V and (3.116 ) In other words, we shall study the following problem: Minimize 91(y) +
Jr, 90(0,y(0»do
+
( 3.117)
h(u)
on all (y,u) E H1(D) x U subject to
y - 6y
= f in
D
(3.118)
~
B(y) 3 BOu in r t , y = 0 in r 2 2 where f E L (D), BO E L(U,L 2(r 1)), B is a maximal monotone 9raph in R x R such that 0 E O(B) and h:U + Rsatisfies condition (ii). +
Let $(y^*, u^*)$ be an arbitrary optimal pair in problem (3.117). For every $\varepsilon > 0$ consider the approximating control problem: Minimize

$g^\varepsilon(y) + \int_{\Gamma_1} g_0(\sigma, y(\sigma))\,d\sigma + h(u) + \tfrac12 |u - u^*|_U^2$ (3.119)

on all $y \in H^1(\Omega)$, $u \in U$ subject to

$y - \Delta y = f$ in $\Omega$ (3.120)
$\dfrac{\partial y}{\partial\nu} + \beta^\varepsilon(y) = B_0 u$ in $\Gamma_1$, $\quad y = 0$ in $\Gamma_2$,

where $g^\varepsilon$ is given by (3.25) and $\beta^\varepsilon$ by (3.45). We note that (3.120) can be written as (3.21), where $A : V \to V'$ and $B : U \to V'$ are defined as in Example 3.3 and

$\varphi^\varepsilon(y) = \int_{\Gamma_1} j^\varepsilon(y)\,d\sigma \quad \forall y \in V, \qquad j^\varepsilon(r) = \int_0^r \beta^\varepsilon(s)\,ds.$

It is easily seen that $\varphi^\varepsilon$ satisfies conditions (3.22) to (3.24) and (3.15), so Lemma 3.2 is applicable. Thus if $(y_\varepsilon, u_\varepsilon)$ is an optimal pair in problem (3.119), we have for $\varepsilon \to 0$

$u_\varepsilon \to u^*$ strongly in $U$ (3.121)
$y_\varepsilon \to y^*$ strongly in $V$.
In particular, it follows that $y_\varepsilon \to y^*$ strongly in $L^2(\Gamma_1)$, because $\Gamma_1$ is a smooth part of $\Gamma$. Now, multiplying the equation $y_\varepsilon - \Delta y_\varepsilon = f$ in $\Omega$ by $\beta^\varepsilon(y_\varepsilon) - \beta^\varepsilon(0)$ and integrating on $\Omega$, we get

$\int_{\Gamma_1} |\beta^\varepsilon(y_\varepsilon)|^2\,d\sigma \le C$ for all $\varepsilon > 0$.

This yields (3.122), because $|\beta^\varepsilon(y) - \beta_\varepsilon(y)| \le C\varepsilon$ and $y_\varepsilon(\sigma) \to y^*(\sigma)$ a.e. $\sigma \in \Gamma_1$. Arguing as in Lemma 3.3, it follows that there exists $p_\varepsilon \in V$ such that
$-p_\varepsilon + \Delta p_\varepsilon = \nabla g_1^\varepsilon(y_\varepsilon)$ in $\Omega$ (3.123)
$\dfrac{\partial p_\varepsilon}{\partial\nu} + \dot\beta^\varepsilon(y_\varepsilon)p_\varepsilon = -\nabla_y g_0(\sigma, y_\varepsilon)$ in $\Gamma_1$
$p_\varepsilon = 0$ in $\Gamma_2$

$B_0^* p_\varepsilon \in \partial h(u_\varepsilon) + u_\varepsilon - u^*.$ (3.124)
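The structure of this approximating optimality system, a state equation, an adjoint equation driven by the cost gradient, and a gradient inclusion for the control, can be mirrored in a finite-dimensional analogue. The sketch below is a hedged illustration with invented data (a quadratic cost and an invertible matrix standing in for the elliptic operator); it checks the adjoint-based gradient against finite differences:

```python
import numpy as np

# Hedged finite-dimensional analogue (sizes and data are illustrative):
# state equation A y = f + B u, cost J(u) = 0.5*|y - yd|^2 + 0.5*alpha*|u|^2.
# Adjoint: A^T p = -(y - yd); gradient: grad J(u) = -B^T p + alpha*u.
rng = np.random.default_rng(0)
n, m, alpha = 12, 4, 0.5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # invertible state operator
B = rng.standard_normal((n, m))
f = rng.standard_normal(n)
yd = rng.standard_normal(n)

def J(u):
    y = np.linalg.solve(A, f + B @ u)
    return 0.5 * np.sum((y - yd) ** 2) + 0.5 * alpha * np.sum(u ** 2)

def grad_J(u):
    y = np.linalg.solve(A, f + B @ u)
    p = np.linalg.solve(A.T, -(y - yd))   # adjoint equation
    return -B.T @ p + alpha * u           # gradient via the adjoint state

# verify the adjoint gradient against central finite differences
u0 = rng.standard_normal(m)
g = grad_J(u0)
for i in range(m):
    e = np.zeros(m); e[i] = 1e-6
    fd = (J(u0 + e) - J(u0 - e)) / 2e-6
    assert abs(fd - g[i]) < 1e-5
print("adjoint gradient matches finite differences")
```

The variational inequality setting replaces the smooth state map by the penalized one, which is why the text first regularizes $\beta$ before writing the adjoint system.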
Now we multiply (3.123) by $p_\varepsilon$ and then by $\zeta_\lambda(p_\varepsilon)$, where $\zeta_\lambda$ is given by (3.66). After some calculation involving Green's formula we get

$\|p_\varepsilon\|^2_{H^1(\Omega)} + \int_{\Gamma_1} \dot\beta^\varepsilon(y_\varepsilon)\,p_\varepsilon\,\zeta_\lambda(p_\varepsilon)\,d\sigma \le C\Big(\int_\Omega |\nabla g_1^\varepsilon(y_\varepsilon)|^2\,dx + \int_{\Gamma_1} |\nabla_y g_0(\sigma, y_\varepsilon)|^2\,d\sigma + 1\Big).$

Then, letting $\lambda$ tend to zero, we see that $\{\dot\beta^\varepsilon(y_\varepsilon)p_\varepsilon\}$ is bounded in $L^1(\Gamma_1)$. Thus, on a generalized subsequence, again denoted $\varepsilon$, we have

$p_\varepsilon \to p^*$ weakly in $H^1(\Omega)$ (3.125)
$\dot\beta^\varepsilon(y_\varepsilon)p_\varepsilon \to \nu$ weak star in $(L^\infty(\Gamma_1))^*$ (3.126)
$\nabla g_1^\varepsilon(y_\varepsilon) \to q \in \partial g_1(y^*)$ weakly in $L^2(\Omega)$
$\nabla_y g_0(\sigma, y_\varepsilon) \to \nabla_y g_0(\sigma, y^*)$ strongly in $L^2(\Gamma_1)$,

and letting $\varepsilon$ tend to zero in (3.123), we see that $p^*$ satisfies the equations (3.127). By (3.125) and the trace theorem we conclude that $\{p_\varepsilon\}$ is compact in $L^2(\Gamma_1)$ (we denote by the same symbol $p^*$ the trace of $p^*$ on $\Gamma_1$). Hence without loss of generality we may assume that

$p_\varepsilon \to p^*$ strongly in $L^2(\Gamma_1)$,

and letting $\varepsilon$ tend to zero in (3.124) we get

$B_0^* p^* \in \partial h(u^*).$
Equation (3.127) must be understood in the sense of distributions, i.e.

$\int_\Omega p^*\alpha\,dx + \int_\Omega \nabla p^*\cdot\nabla\alpha\,dx + \int_\Omega q\alpha\,dx + \nu(\alpha) - \int_{\Gamma_1} \nabla_y g_0(\sigma, y^*)\,\alpha\,d\sigma = 0$ (3.127)

for all $\alpha \in C^1(\bar\Omega)$ such that $\alpha = 0$ in $\Gamma_2$. We shall give explicit forms of (3.127) in two special cases: (1) $\beta$ is locally Lipschitz; (2) $\beta$ is the graph defined by (2.54).
THEOREM 3.6 Let $(y^*, u^*)$ be an optimal pair in problem (3.117), (3.118), where $\beta$ is a locally Lipschitz monotonically increasing function satisfying condition (3.42). Then there exist $p^* \in H^1(\Omega)$ and $q \in L^2(\Omega)$ such that $(\partial p^*/\partial\nu) \in L^1(\Gamma_1)$ and

$-p^* + \Delta p^* = q$ in $\Omega$, $\quad q \in \partial g_1(y^*)$ (3.128)
$p^* = 0$ in $\Gamma_2$. (3.129)
$B_0^* p^* \in \partial h(u^*).$

Proof Using assumption (3.42) it follows by (3.71), (3.121), (3.125) that the family $\{\int_E \dot\beta^\varepsilon(y_\varepsilon)p_\varepsilon\,d\sigma;\ E \subset \Gamma_1\}$ is equiabsolutely continuous (see inequality (3.71)). Hence $\nu \in L^1(\Gamma_1)$, and by Lemma 3.4 and (3.127) we conclude that $p^*$ satisfies (3.128) as claimed. Now we consider the case where the state equation (3.118) reduces to the unilateral problem

$y - \Delta y = f$ in $\Omega$
$y \ge 0$, $\ \dfrac{\partial y}{\partial\nu} - B_0 u \ge 0$, $\ \Big(\dfrac{\partial y}{\partial\nu} - B_0 u\Big) y = 0$ in $\Gamma_1$ (3.130)
$y = 0$ in $\Gamma_2$.
THEOREM 3.7 Let $(y^*, u^*) \in H^1(\Omega) \times U$ be an optimal pair of problem (3.117) with state equation (3.130). Then there exists $p^* \in H^1(\Omega)$ such that $(\partial p^*/\partial\nu) \in (L^\infty(\Gamma_1))^*$, $\Delta p^* \in L^2(\Omega)$ and

$-p^* + \Delta p^* \in \partial g_1(y^*)$ a.e. in $\Omega$, $\quad p^*(\sigma) = 0$ a.e. $\sigma \in \Gamma_2$ (3.131), (3.132)
$p^*(\sigma)\Big((B_0 u^*)(\sigma) - \dfrac{\partial y^*}{\partial\nu}(\sigma)\Big) = 0$ a.e. in $\Gamma_1$ (3.133)
$B_0^* p^* \in \partial h(u^*).$ (3.134)
Proof The proof is identical with that of Theorem 3.5. From (3.113), (3.121), (3.122), (3.125) it follows that these convergences hold on a subsequence $\{\varepsilon_n\} \to 0$. Then, by the same reasoning as in the proof of Theorem 3.5, (3.123) and (3.126) yield (3.131) and (3.133); and, by (3.127), (3.132) follows.

REMARK 3.10 In Theorems 3.3, 3.5 and 3.7 the graph $\beta$ can be taken of the form $\beta = \beta_1 + \beta_2$, where $\beta_1$ is locally Lipschitz and $\beta_2$ is of the form (2.54), or a more general graph of this type. We leave to the reader the calculation of optimality equations in this case.

§3.8 Control on the boundary: the Dirichlet problem
Consider the following optimal control problem: Minimize the functional

$g(y) + h(u)$ (3.135)

on all $y \in H^1(\Omega)$ and $u \in U$ subject to

$a(y, y-z) \le \int_\Omega f(y-z)\,dx \quad \forall z \in K.$ (3.136)

Here $f \in L^2(\Omega)$, $K = \{y \in H^1(\Omega);\ y \ge \psi$ a.e. in $\Omega,\ y = Fu$ in $\Gamma\}$, $a : H^1(\Omega) \times H^1(\Omega) \to \mathbb{R}$ is defined by (2.34), and $\psi \in H^2(\Omega)$. The functions $g : L^2(\Omega) \to \mathbb{R}^+$, $h : U \to \bar{\mathbb{R}}$ satisfy assumptions (i) and (ii), and $F$ is a weakly continuous, Fréchet differentiable operator from $U$ to $H^s(\Gamma)$, where $s > (N-1)/2$, satisfying the following conditions:

$Fu \ge \psi$ a.e. in $\Gamma$ for every $u \in D(h)$ (3.137)
$\nabla F(u)$ is continuous from $U$ to $L(U, H^s(\Gamma))$. (3.138)

By the trace theorem, for every $u \in U$ there exists $\chi^u \in H^1(\Omega)$ such that $\chi^u = Fu$ in $\Gamma$. Since, by condition (3.137), $\max\{\psi, \chi^u\} \in K$, the set $K$ is nonempty. On the other hand, it is well known that
$\|z\|_1 = \Big(\int_\Omega |\nabla z|^2\,dx + \int_\Gamma |z|^2\,d\sigma\Big)^{1/2}$

is an equivalent norm in $H^1(\Omega)$. Then by (2.35) we have

$a(z,z) \ge C_1\|z\|^2_{H^1(\Omega)} - C_2\|Fu\|^2_{L^2(\Gamma)} \quad \forall z \in K.$

Hence, by virtue of Theorem 2.1 (see Remark 2.1), the variational inequality (3.136) has a unique solution $y^u \in K$ which satisfies the estimate

$\|y^u\|_{H^1(\Omega)} \le C(1 + \|Fu\|_{L^2(\Gamma)}) \quad \forall u \in U.$ (3.139)

As remarked earlier, formally $y$ is the solution to the following free boundary problem:

$A_0 y = f$ in $\Omega^+ = \{x \in \Omega;\ y(x) > \psi(x)\}$ (3.136)′
$\dfrac{\partial y}{\partial\nu} = \dfrac{\partial\psi}{\partial\nu}$ in $\partial\Omega^+$, $\quad y = Fu$ in $\Gamma$.

By (3.139) it follows by a standard argument that the map $u \to y^u$ is weakly
continuous from $U$ to $H^1(\Omega)$ and weakly-strongly continuous from $U$ to $L^2(\Omega)$. Then, by the same reasoning as that used in the proof of Proposition 3.1, we infer that problem (3.135), (3.136) admits at least one optimal pair. As regards the maximum principle for this problem, it has the following form:

THEOREM 3.8 Let $(y^*, u^*)$ be an optimal pair in (3.135), (3.136). Then there exists $p^* \in H_0^1(\Omega)$ such that

$A_0 p^* \in (L^\infty(\Omega))^*$ (3.140)
$-(\nabla F(u^*))^*\,\dfrac{\partial p^*}{\partial\nu} \in \partial h(u^*)$ (3.141)
$p^*(f - A_0 y^*) = 0$ a.e. in $\Omega$. (3.142)

Here $\nabla F(u^*) \in L(U, H^s(\Gamma))$ is the Fréchet derivative of $F$ and $(\nabla F(u^*))^* \in L(H^{-s}(\Gamma), U)$ is its adjoint.

Proof Following the general procedure, we start with the approximating control problem: Minimize

$g^\varepsilon(y) + h(u) + \tfrac12|u - u^*|^2_U$ (3.143)

on all $(y,u) \in H^1(\Omega) \times U$ subject to

$A_0 y + \beta^\varepsilon(y - \psi) = f$ in $\Omega$ (3.144)
$y = Fu$ in $\Gamma$.

The functions $g^\varepsilon$ and $\beta^\varepsilon$ are defined by (3.25) and (3.79), respectively. We continue with a technical lemma.

LEMMA 3.6 For every $v \in H^{1/2}(\Gamma)$ satisfying the condition $v \ge \psi$ a.e. in $\Gamma$, the boundary value problem

$A_0 y + \beta^\varepsilon(y - \psi) = f$ in $\Omega$ (3.145)
$y = v$ in $\Gamma$

has a unique solution $y = y_\varepsilon(v) \in H^1(\Omega)$ satisfying the estimate

$\|y_\varepsilon(v)\|_{H^1(\Omega)} \le C(1 + \|v\|_{H^{1/2}(\Gamma)})$ (3.146)

where $C$ is independent of $\varepsilon$. Moreover, if $v_\varepsilon \to v$ weakly in $H^{1/2}(\Gamma)$, then for $\varepsilon \to 0$

$y_\varepsilon(v_\varepsilon) \to y^v$ weakly in $H^1(\Omega)$ (3.147)
$\beta^\varepsilon(y_\varepsilon(v_\varepsilon) - \psi) \to \eta$ weakly in $L^2(\Omega)$ (3.148)

where $y^v$ is the solution to (3.136) and $\eta = f - A_0 y^v$.
Proof Let $\eta^v \in H^1(\Omega)$ be the solution to the inhomogeneous Dirichlet problem

$A_0 \eta^v = 0$ in $\Omega$, $\quad \eta^v = v$ in $\Gamma$,

and let $z_\varepsilon \in H_0^1(\Omega) \cap H^2(\Omega)$ be the solution to the boundary value problem

$A_0 z_\varepsilon + \beta^\varepsilon(z_\varepsilon + \eta^v - \psi) = f$ a.e. in $\Omega$ (3.149)
$z_\varepsilon = 0$ in $\Gamma$.

(The existence for problem (3.149) is standard and can be derived, for instance, from Theorem 1.4 by the same device as that used for Corollary 1.1.) Obviously the function $y_\varepsilon(v) = z_\varepsilon + \eta^v$ is the unique solution to (3.145). We know that

$\|\eta^v\|_{H^1(\Omega)} \le C\|v\|_{H^{1/2}(\Gamma)}$ for all $v \in H^{1/2}(\Gamma)$. (3.150)

On the other hand, $\beta^\varepsilon(y_\varepsilon - \psi) = -\varepsilon^{-1}(z_\varepsilon + \eta^v - \psi)^- \in H_0^1(\Omega)$, because $(v - \psi)^- = 0$ on $\Gamma$. Then, multiplying (3.149) by $\beta^\varepsilon(y_\varepsilon - \psi)$ and integrating on $\Omega$, Green's formula and the inequality $|\beta^\varepsilon(r) - \beta_\varepsilon(r)| \le 2\varepsilon$, $r \in \mathbb{R}$, yield

$|\beta^\varepsilon(y_\varepsilon - \psi)|_2 \le C$ for all $\varepsilon > 0$. (3.151)

Then by (3.149) it follows that

$\|z_\varepsilon\|_{H^2(\Omega)} \le C(1 + |f|_2)$ for all $\varepsilon > 0$.

Together with (3.150), this yields (3.146). Now, if $v_\varepsilon \to v$ weakly in $H^{1/2}(\Gamma)$, then by (3.146) and (3.151) it follows that

$y_\varepsilon(v_\varepsilon) \to y$ weakly in $H^1(\Omega)$ and strongly in $L^2(\Omega)$
$\beta^\varepsilon(y_\varepsilon(v_\varepsilon) - \psi) \to \eta$ weakly in $L^2(\Omega)$.

This yields $y(x) \ge \psi(x)$ a.e. $x \in \Omega$ and

$\eta(x)(y(x) - \psi(x) - z) \ge 0$ for all $z \ge 0$, a.e. $x \in \Omega$.

Hence

$a(y, y-z) \le \int_\Omega f(y-z)\,dx \quad \forall z \in K.$

Thus $y = y^v$ and the proof of Lemma 3.6 is complete.

We return to the proof of Theorem 3.8. Let $(y_\varepsilon, u_\varepsilon) \in K \times U$ be an optimal pair of problem (3.143), (3.144). We have

$g^\varepsilon(y_\varepsilon) + h(u_\varepsilon) + \tfrac12|u_\varepsilon - u^*|^2_U \le g^\varepsilon(y^*_\varepsilon) + h(u^*)$
where $y^*_\varepsilon$ is the solution to (3.144) with $u = u^*$. Using Lemma 3.6 it follows, by the same method as in the proof of Lemma 3.2, that for $\varepsilon \to 0$

$u_\varepsilon \to u^*$ strongly in $U$ (3.152)
$y_\varepsilon \to y^*$ weakly in $H^1(\Omega)$. (3.153)

(As a matter of fact the convergence in $H^1(\Omega)$ is strong.) On the other hand, since $F$ and $\beta^\varepsilon$ are differentiable, the mapping $u \to y^\varepsilon(u)$ is Fréchet differentiable from $U$ to $H^1(\Omega)$, and for every $v \in U$, $z_\varepsilon = \nabla y^\varepsilon(u_\varepsilon)v \in H^1(\Omega)$ is the solution to the boundary value problem

$A_0 z_\varepsilon + \dot\beta^\varepsilon(y_\varepsilon - \psi)z_\varepsilon = 0$ in $\Omega$, $\quad z_\varepsilon = \nabla F(u_\varepsilon)v$ in $\Gamma$. (3.154)

Let $p_\varepsilon \in H_0^1(\Omega) \cap H^2(\Omega)$ be the solution to

$-A_0 p_\varepsilon - \dot\beta^\varepsilon(y_\varepsilon - \psi)p_\varepsilon = \nabla g^\varepsilon(y_\varepsilon)$ in $\Omega$ (3.155)
$p_\varepsilon = 0$ in $\Gamma$.

Since $(y_\varepsilon, u_\varepsilon)$ is optimal, we have

$\int_\Omega \nabla g^\varepsilon(y_\varepsilon)\,z_\varepsilon\,dx + h'(u_\varepsilon, v) + (u_\varepsilon - u^*, v) \ge 0.$

Then, by an easy calculation involving (3.154), (3.155) and Green's formula, it follows that

$-(\nabla F(u_\varepsilon))^*\,\dfrac{\partial p_\varepsilon}{\partial\nu} \in \partial h(u_\varepsilon) + u_\varepsilon - u^*.$ (3.156)

Since $\{\nabla g^\varepsilon(y_\varepsilon)\}$ is bounded in $L^2(\Omega)$, it follows as in the proof of Theorem 3.1 that, on a subsequence, $\nabla g^\varepsilon(y_\varepsilon) \to q$ weakly in $L^2(\Omega)$, and by (3.155)

$p_\varepsilon \to p^*$ weakly in $H_0^1(\Omega)$. (3.157)

Finally, multiplying (3.155) by $\operatorname{sgn} p_\varepsilon$ and integrating on $\Omega$ (see the proof of Theorem 3.2), we get

$\int_\Omega |\dot\beta^\varepsilon(y_\varepsilon - \psi)p_\varepsilon|\,dx \le C$ for all $\varepsilon > 0$. (3.158)

Then by (3.155) we see that $\{A_0 p_\varepsilon\}$ is bounded in $L^1(\Omega)$ and by Green's formula

$\int_\Omega A_0 p_\varepsilon\,\varphi\,dx = a(p_\varepsilon, \varphi) - \int_\Gamma \dfrac{\partial p_\varepsilon}{\partial\nu}\,\varphi\,d\sigma \quad \forall \varphi \in H^{s+\frac12}(\Omega).$

Since $H^{s+\frac12}(\Omega) \subset C(\bar\Omega)$, this implies that $\{\partial p_\varepsilon/\partial\nu\}$ is bounded in $H^{-s}(\Gamma)$. Thus, letting $\varepsilon$ tend to zero in (3.156), it follows by (3.138) and (3.152) that $u^*$ satisfies (3.141). Now letting $\varepsilon$ tend to zero in (3.155) we get

$A_0 p^* + \nu = -q$ in $\Omega$ (3.159)
$p^* = 0$ in $\Gamma$

where $\nu \in (L^\infty(\Omega))^*$ is the weak star limit in $(L^\infty(\Omega))^*$ of $\{\dot\beta^\varepsilon(y_\varepsilon - \psi)p_\varepsilon\}$ on a
certain generalized subsequence. From now on the proof of Theorem 3.8 is identical with that of Theorem 3.3.

To illustrate Theorem 3.8 we shall consider the variational inequality (2.69) modelling the water flow through an isotropic rectangular dam. We interpret $h_1 = u$ as a control parameter and propose studying the following problem: find $u$ in the interval $[0,b]$ in order that the seepage face GC be under the level of the right reservoir. In other words,

$\Omega_0 = FHLC \subset \{x \in D;\ y(x) = 0\}.$

Here $y$ is the solution to the variational inequality (3.136), where $f = -1$, $\psi = 0$, $\Omega = D$, $U = \mathbb{R}$,

$a(y,z) = \int_D \nabla y\cdot\nabla z\,dx, \quad y,z \in H^1(D),$

and $F : \mathbb{R} \to H^s(\partial D)$, $s > 1/2$, is defined by $Fu = g$, where

$g = \tfrac12(x_2 - h_2)^2$ in CB, $\quad g = \tfrac12(x_2 - u)^2$ in AF,

and $g$ is prescribed correspondingly on FH, HL, LG and AB. As noted in Section 3.5, the least-squares approach to this problem leads to problem (3.135), where

$g(y) = \tfrac12 \int_D \chi_{\Omega_0}(x)\,|y(x)|^2\,dx$

and $h : \mathbb{R} \to \bar{\mathbb{R}}$ is defined by

$h(u) = 0$ if $0 \le u \le b$, $\quad h(u) = +\infty$ otherwise.

Note that $\nabla F(u) \in L(\mathbb{R}, H^s(\partial D))$ is given by

$\nabla F(u) = 0$ in HL, LC, CB, BA; $\quad (\nabla F(u))(x_2) = \begin{cases} u - x_2 & \text{for } 0 \le x_2 \le u\\ 0 & \text{for } u < x_2 \le b \end{cases}$ in AF.
Hence

$(\nabla F(u))^*(K) = \int_0^u K(\theta)(u - \theta)\,d\theta \quad \forall K \in H^{-s}(0,b).$

Thus if $(y^*, u^*)$ is an optimal pair, by Theorem 3.8 there exists $p^* \in H_0^1(D)$ such that $\Delta p^* \in (L^\infty(D))^*$ and

$\Delta y^* = 1$ in $[y^* > 0]$; $\quad y^* \ge 0$, $\Delta y^* \le 1$ in $D$ (3.160)
$y^* = Fu^*$ in $\partial D$
$-\Delta p^* = \chi_{\Omega_0}\,y^*$ a.e. in $[y^* > 0]$ (3.161)
$p^* = 0$ a.e. in $[y^* = 0]$ (3.162)
$\int_0^{u^*} \dfrac{\partial p^*}{\partial\nu}(0, x_2)(u^* - x_2)\,dx_2 \begin{cases} = 0 & \text{if } 0 < u^* < b\\ \ge 0 & \text{if } u^* = 0\\ \le 0 & \text{if } u^* = b. \end{cases}$ (3.163)

Since $0 < u^* < b$, we may replace (3.163) by

$\int_0^{u^*}(u^* - x_2)\,\dfrac{\partial p^*}{\partial\nu}(0, x_2)\,dx_2 = 0.$ (3.164)
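The adjoint formula $(\nabla F(u))^*(K) = \int_0^u K(\theta)(u-\theta)\,d\theta$ can be checked by quadrature. The sketch below is a hedged illustration; the grid, the test function and the tolerances are assumptions, not from the text:

```python
import numpy as np

# Hedged quadrature check: on the segment AF, grad F(u) maps a scalar v to
# the function x2 -> v*(u - x2)^+, so its adjoint applied to K is the
# integral of K(t)(u - t) over [0, u].  We verify the duality
# <grad F(u) v, K> = v * (grad F(u))*(K) with a trapezoidal rule.
b, u, v = 1.0, 0.6, 1.7
x2 = np.linspace(0.0, b, 2001)
w = np.full_like(x2, x2[1] - x2[0])
w[0] *= 0.5; w[-1] *= 0.5                      # trapezoid weights

gradF_v = v * np.maximum(u - x2, 0.0)          # (grad F(u) v)(x2)
K = np.cos(3.0 * x2) + 0.5                     # arbitrary test function

lhs = np.sum(gradF_v * K * w)                  # <grad F(u) v, K> on (0, b)
adj = np.sum(np.maximum(u - x2, 0.0) * K * w)  # integral of K(t)(u - t), t in [0, u]
assert abs(lhs - v * adj) < 1e-12

# sanity check against the closed form with K = 1: the integral equals u^2/2
adj1 = np.sum(np.maximum(u - x2, 0.0) * w)
assert abs(adj1 - u * u / 2.0) < 1e-9
print("adjoint identity verified")
```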
§3.9 Extensions and further remarks

In applications, the cost functional $g : L^2(\Omega) \to \mathbb{R}^+$ which occurs in problem (3.4) is usually an integral functional of the form

$g(y) = \int_\Omega g_0(x, y(x))\,dx$ (3.165)

where $g_0 : \Omega \times \mathbb{R} \to \mathbb{R}^+$ is a real-valued function, measurable in $x \in \Omega$ and continuous in $y \in \mathbb{R}$. Then assumption (i) requires a global Lipschitz condition on $y \to g_0(\cdot, y)$. In this context it turns out that most of the optimality results established in this chapter remain valid if, instead of (i), one merely assumes that:

(j) The function $g_0$ is non-negative on $\Omega \times \mathbb{R}$ and $g_0(\cdot, 0) \in L^1(\Omega)$. For each $r > 0$ there exists $h_r \in L^1(\Omega)$ such that

$|g_0(x,y) - g_0(x,z)| \le h_r(x)|y - z|$ (3.166)

for all $y, z \in \mathbb{R}$ such that $|y| + |z| \le r$.

(jj) There exist positive constants $C_1$, $C_2$ and $\alpha_0 \in L^1(\Omega)$ such that

$|w| \le C_1\,g_0(x,y)|y|^{-1} + C_2|y|^2 + \alpha_0(x) \quad \forall w \in \partial g_0(x,y)$, a.e. $x \in \Omega$, $y \in \mathbb{R}$.
It must be emphasized that under these assumptions the function $g$ is no longer defined on all of $L^2(\Omega)$. In particular, condition (j) is satisfied if $g_0$ is convex and everywhere finite on $\mathbb{R}$ as a function of $y$ and is independent of $x$. For each $\varepsilon > 0$ define the function

$(g_0)_\varepsilon(x,y) = \inf\{|y - z|^2/2\varepsilon + g_0(x,z);\ z \in \mathbb{R}\}.$ (3.167)

It is readily seen that $(g_0)_\varepsilon$ satisfies condition (j) and

$(g_0)_\varepsilon(x,y) \le (2\varepsilon)^{-1}|y|^2 + g_0(x,0) \quad \forall y \in \mathbb{R}.$ (3.168)

Denote by $Z_\varepsilon(y)$ the set of all points $z_\varepsilon$ where the infimum in (3.167) is attained, i.e.

$Z_\varepsilon(y) = \{z_\varepsilon \in \mathbb{R};\ (g_0)_\varepsilon(x,y) = |y - z_\varepsilon|^2/2\varepsilon + g_0(x, z_\varepsilon)\}.$ (3.169)
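For a concrete $g_0$ the regularization (3.167) can be computed directly. A hedged sketch with $g_0(z) = |z|$, for which the envelope is the Huber function and $Z_\varepsilon(y)$ reduces to the soft-thresholding point (the grid and tolerances below are assumptions):

```python
import numpy as np

# Hedged illustration of (3.167)-(3.169) for g0(z) = |z|: the infimum is
# computed on a grid and compared with the known closed form (Huber), and
# the bound (3.168) is checked with g0(0) = 0.
eps = 0.2
z = np.linspace(-5.0, 5.0, 200001)

def g0(z):
    return np.abs(z)

def envelope(y):
    vals = (y - z) ** 2 / (2.0 * eps) + g0(z)
    i = np.argmin(vals)
    return vals[i], z[i]          # value (g0)_eps(y) and a minimizer z_eps

def huber(y):                     # closed form of the envelope for g0 = |.|
    return y * y / (2.0 * eps) if abs(y) <= eps else abs(y) - eps / 2.0

for y in (-1.3, -0.1, 0.05, 0.7, 2.0):
    val, z_eps = envelope(y)
    assert abs(val - huber(y)) < 1e-4                  # matches the closed form
    assert val <= y * y / (2.0 * eps) + 1e-12          # inequality (3.168)
    expected = np.sign(y) * max(abs(y) - eps, 0.0)     # soft-thresholding point
    assert abs(z_eps - expected) < 1e-4
print("Moreau envelope checks passed")
```

The minimizer moving toward the kink of $g_0$ is exactly the mechanism exploited in Lemma 3.7 below, where gradients of the envelope are traced back to generalized gradients of $g_0$ at points of $Z_\varepsilon(y)$.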
We have (compare with Theorem 1.9):

LEMMA 3.7 For each $x \in \Omega$ and $y \in \mathbb{R}$, $\partial(g_0)_\varepsilon(x,y) \subset \partial g_0(x, z_\varepsilon)$ for some $z_\varepsilon \in Z_\varepsilon(y)$.

Here $\partial(g_0)_\varepsilon$ denotes the generalized gradient of $(g_0)_\varepsilon$ with respect to $y$.

Proof Let $(x,y) \in \Omega \times \mathbb{R}$ be such that $(g_0)_\varepsilon(x,y)$ is differentiable at $y$. By (3.169) we have, for all $h \in \mathbb{R}$ and $z_\varepsilon \in Z_\varepsilon(y)$,

$\nabla(g_0)_\varepsilon(x,y)h = \lim_{t\downarrow 0}\big((g_0)_\varepsilon(x, y+th) - (g_0)_\varepsilon(x,y)\big)/t \le \limsup_{t\downarrow 0}\big(g_0(x, z_\varepsilon + th) - g_0(x, z_\varepsilon)\big)/t.$

Hence

$\nabla(g_0)_\varepsilon(x,y)h \le g_0^0(x, z_\varepsilon, h)$ for all $h \in \mathbb{R}$,

where $g_0^0$ is the generalized directional derivative of $g_0$ (see Section 1.6). By the definition of the generalized gradient $\partial g_0$ we may therefore conclude that $\nabla(g_0)_\varepsilon(x,y) \in \partial g_0(x, z_\varepsilon)$. Now let $(x,y)$ be arbitrary in $\Omega \times \mathbb{R}$ and let $\{y_n\} \to y$ be such that $\nabla(g_0)_\varepsilon(x, y_n) \to w \in \partial(g_0)_\varepsilon(x,y)$. Let $z_n \in Z_\varepsilon(y_n)$. By (3.168) and (3.169) we see that $\{z_n\}$ is bounded. Thus, selecting a subsequence, we may assume that $z_n \to z_\varepsilon \in Z_\varepsilon(y)$. Since the map $z \to \partial g_0(x,z)$ is closed and $\nabla(g_0)_\varepsilon(x, y_n) \in \partial g_0(x, z_n)$, we infer that $w \in \partial g_0(x, z_\varepsilon)$, thereby completing the proof.
Now define the function $g_0^\varepsilon : \Omega \times \mathbb{R} \to \mathbb{R}$,

$g_0^\varepsilon(x,y) = \int_{-\infty}^{\infty} (g_0)_\varepsilon(x, y - \varepsilon\theta)\,\rho(\theta)\,d\theta$ (3.170)

where $\rho$ is a $C_0^\infty$ mollifier on $\mathbb{R}$. Let $g^\varepsilon : L^2(\Omega) \to \mathbb{R}^+$ be the function

$g^\varepsilon(y) = \int_\Omega g_0^\varepsilon(x, y(x))\,dx, \quad \forall y \in L^2(\Omega).$ (3.171)

By inequality (3.168) we see that the function $g^\varepsilon$ is Fréchet differentiable and Lipschitzian on $L^2(\Omega)$. If in problem $(P_\varepsilon)$ (see Section 3.2) we take $g^\varepsilon$ defined by (3.171) and follow the general procedure, we can extend Theorems 3.1 to 3.8 to the general situation presented here. We shall illustrate this in the case of problem (3.95), (3.96), and leave to the reader the study of other cases. Minimize
$\int_\Omega g_0(x, y(x))\,dx + h(u)$ (3.172)

on all $y \in H^2(\Omega)$, $u \in U$ subject to

$y - \Delta y = f + Bu$ a.e. in $\Omega$ (3.173)
$\dfrac{\partial y}{\partial\nu} + \beta(y) \ni 0$ a.e. in $\Gamma$.

Here $B \in L(U, L^2(\Omega))$, $f \in L^2(\Omega)$, $\beta$ is a locally Lipschitz monotonically increasing function, $h : U \to \bar{\mathbb{R}}$ satisfies assumption (ii) and $g_0$ assumptions (j), (jj).
We have (compare with Theorem 3.4):

THEOREM 3.9 Let $(y^*, u^*)$ be an optimal pair in problem (3.172), (3.173). Then there exist functions $p \in W^{1,q}(\Omega)$, $1 \le q < N/(N-1)$, and $q_0 \in L^1(\Omega)$ such that $(\partial p/\partial\nu) \in (L^\infty(\Gamma))^*$ and

$-p + \Delta p = q_0$, $\quad q_0(x) \in \partial g_0(x, y^*(x))$ a.e. $x \in \Omega$ (3.174)
$\dfrac{\partial p}{\partial\nu} + p\,\partial\beta(y^*) \ni 0$ a.e. in $\Gamma$

$B^* p \in \partial h(u^*).$ (3.175)

If either $1 \le N \le 3$ or $\beta$ satisfies condition (3.42), then $(\partial p/\partial\nu) \in L^1(\Gamma)$.
Following the general pro-
M1~nimize
gE (y)
on all (y,u) E
y -
h(u)
+
H2(~)
~y =
'dy
f +
E
a~ + B (y)
+
2 21 lu-u*i U
x U
subject to
Bu a.e. in
=0
(3.176)
a • e • in
~
r
(3.177)
where SE is defined by (3.45). Arguing as in the proof of Proposition 3.1, it follows that for every this problem admits at least one optimal pair (YE'U E), LEMMA 3.8 For
E >
0
Wp
>
0
have
U E
+
u* strongly in U
y
+
y* weakly in H2(~) ani strongly in H'(]).
E
E
117
Proof For every $\varepsilon > 0$ we have

$g^\varepsilon(y_\varepsilon) + h(u_\varepsilon) + \tfrac12|u_\varepsilon - u^*|^2_U \le g^\varepsilon(y^*_\varepsilon) + h(u^*)$ (3.178)

where $y^*_\varepsilon$ is the solution to (3.177) with $u = u^*$. A little manipulation involving Green's formula reveals that

$\|y^*_\varepsilon - y^*\|_{H^1(\Omega)} \le C\varepsilon^{1/2}$ for all $\varepsilon > 0$. (3.179)

Now by (3.167) and (3.170),

$g_0^\varepsilon(x, y^*_\varepsilon(x)) \le g_0(x, y^*(x)) + (2\varepsilon)^{-1}\int_{-1}^{1}|y^*_\varepsilon(x) - \varepsilon\theta - y^*(x)|^2\rho(\theta)\,d\theta$

and therefore

$g_0^\varepsilon(x, y^*_\varepsilon(x)) \le g_0(x, y^*(x)) + (2\varepsilon)^{-1}|y^*_\varepsilon(x) - y^*(x)|^2 + C\varepsilon$ a.e. $x \in \Omega$. (3.180)

Next, by Sobolev's imbedding theorem, $H^1(\Omega) \subset L^p(\Omega)$ for some $p > 2$. Then, in view of (3.179),

$\|y^*_\varepsilon - y^*\|_{L^p(\Omega)} \le C\varepsilon^{1/2}.$

This, combined with (3.180), shows that the integrals $\{\int_E g_0^\varepsilon(x, y^*_\varepsilon)\,dx;\ E \subset \Omega\}$ are equicontinuous, and therefore $\{g_0^\varepsilon(x, y^*_\varepsilon)\}$ is weakly compact in $L^1(\Omega)$. On the other hand, it follows by (3.167) and (3.169) that

$(g_0)_\varepsilon(x, y^*_\varepsilon(x) - \varepsilon\theta) \le \varepsilon\theta^2/2 + g_0(x, y^*_\varepsilon(x)).$ (3.181)

Since $g_0$ is continuous in $y$ and $y^*_\varepsilon(x) \to y^*(x)$ a.e. $x \in \Omega$, we have

$\lim_{\varepsilon\to 0} z_\varepsilon(\theta) = y^*(x)$ a.e. $x \in \Omega$, $\theta \in [-1,1]$,

and therefore

$\lim_{\varepsilon\to 0} g_0^\varepsilon(x, y^*_\varepsilon(x)) = g_0(x, y^*(x))$ a.e. $x \in \Omega$.
Since $\{g_0^\varepsilon(x, y^*_\varepsilon)\}$ is weakly compact in $L^1(\Omega)$, we conclude that on a subsequence, again denoted $\varepsilon$, we have

$g_0^\varepsilon(x, y^*_\varepsilon) \to g_0(x, y^*)$ weakly in $L^1(\Omega)$. (3.182)

Since $\{u_\varepsilon\}$ is bounded in $U$, we can extract a sequence, again denoted $\{u_\varepsilon\}$, such that for $\varepsilon \to 0$

$u_\varepsilon \to u_0$ weakly in $U$,

and consequently $y_\varepsilon \to y_0$ weakly in $H^2(\Omega)$. It is readily seen that $y_0$ satisfies (3.173) with $u = u_0$. On the other hand, we have

$\liminf_{\varepsilon\to 0} h(u_\varepsilon) \ge h(u_0)$

and, by the Fatou lemma,

$\liminf_{\varepsilon\to 0}\int_\Omega g_0^\varepsilon(x, y_\varepsilon(x))\,dx \ge \int_\Omega g_0(x, y_0(x))\,dx,$

because, again by (3.181), we have $\lim_{\varepsilon\to 0} g_0^\varepsilon(x, y_\varepsilon(x)) = g_0(x, y_0(x))$ a.e. $x \in \Omega$. Then by (3.178) we see that $u^* = u_0$, $y^* = y_0$ and

$\lim_{\varepsilon\to 0}|u_\varepsilon - u^*|_U = 0.$
LEMMA 3.9 The set $\{\nabla g_0^\varepsilon(x, y_\varepsilon)\}$ is weakly compact in $L^1(\Omega)$.

Proof By (3.170) and Lemma 3.7 one has

$|\nabla g_0^\varepsilon(x, y_\varepsilon(x))| \le \sup\Big\{\int_{-1}^{1}|w(x,\theta)|\rho(\theta)\,d\theta;\ w(x,\theta) \in \partial(g_0)_\varepsilon(x, y_\varepsilon(x) - \varepsilon\theta),\ \theta \in [-1,1]\Big\}$
$\le \sup\Big\{\int_{-1}^{1}|w(x,\theta)|\rho(\theta)\,d\theta;\ w(x,\theta) \in \partial g_0(x, z_\varepsilon(x,\theta)),\ \theta \in [-1,1]\Big\}$ a.e. $x \in \Omega$. (3.183)

Here $z_\varepsilon(x,\theta) \in Z_\varepsilon(y_\varepsilon(x) - \varepsilon\theta)$. For any $k > 0$ denote $E_k = \{(x,\theta) \in \Omega \times [-1,1];\ |z_\varepsilon(x,\theta)| \le k\}$. By condition (j) it follows that

$|\partial g_0(x, z_\varepsilon)| \le h_k(x)$ for $(x,\theta) \in E_k$, where $h_k \in L^1(\Omega)$. (3.184)

For $(x,\theta) \notin E_k$ we have, by assumption (jj),

$|\partial g_0(x, z_\varepsilon)| \le C_1 g_0(x, z_\varepsilon)/k + C_2|z_\varepsilon|^2 + \alpha_0(x) \le C_1(g_0)_\varepsilon(x, y_\varepsilon(x) - \varepsilon\theta)/k + C_3|y_\varepsilon(x)|^2 + \alpha_1(x),$ (3.185)

where $\alpha_1$ is some function in $L^1(\Omega)$. It follows from (3.183), (3.184), (3.185) that for all $k > 0$,

$|\nabla g_0^\varepsilon(x, y_\varepsilon(x))| \le \max\{h_k(x),\ C_1 k^{-1} g_0^\varepsilon(x, y_\varepsilon(x)) + C_3|y_\varepsilon(x)|^2 + \alpha_1(x)\}$ a.e. $x \in \Omega$. (3.186)

Since $\{y_\varepsilon\}$ is bounded in $H^2(\Omega)$, it follows by (3.181) and Lemma 3.7 that the family of functions $\{g_0^\varepsilon(x, y_\varepsilon)\}$ is bounded in $L^1(\Omega)$. This fact, combined with (3.186), implies by a standard argument involving the Dunford-Pettis criterion that $\{\nabla g_0^\varepsilon(x, y_\varepsilon)\}$ is weakly compact in $L^1(\Omega)$, as claimed.
Since $\nabla g_0^\varepsilon(x, y_\varepsilon) \in L^1(\Omega)$ and $\dot\beta^\varepsilon(y_\varepsilon) \in L^\infty(\Gamma)$, the boundary value problem

$-p_\varepsilon + \Delta p_\varepsilon = \nabla g_0^\varepsilon(x, y_\varepsilon)$ in $\Omega$ (3.187)
$\dfrac{\partial p_\varepsilon}{\partial\nu} + \dot\beta^\varepsilon(y_\varepsilon)p_\varepsilon = 0$ in $\Gamma$

has a unique solution $p_\varepsilon \in W^{1,q}(\Omega)$ satisfying

$\|p_\varepsilon\|_{W^{1,q}(\Omega)} \le C$ (3.188)

for some $1 \le q < N/(N-1)$ (see Theorem 20 in [23]). Of course, problem (3.187) must be considered in the weak sense, i.e.

$a(p_\varepsilon, \chi) + \int_\Gamma \dot\beta^\varepsilon(y_\varepsilon)p_\varepsilon\,\chi\,d\sigma + \int_\Omega \nabla g_0^\varepsilon(x, y_\varepsilon)\,\chi\,dx = 0$ (3.189)

for all $\chi \in C^1(\bar\Omega)$. Let $\zeta = \zeta(r)$ be a smooth bounded and monotone approximation to $\operatorname{sgn} r$ and let $q_\varepsilon$ be a $C^\infty$ approximation to $p_\varepsilon$. If in (3.189) we take $\chi = \zeta(q_\varepsilon)$ and let $q_\varepsilon \to p_\varepsilon$, we obtain

$\int_\Gamma \dot\beta^\varepsilon(y_\varepsilon)p_\varepsilon\,\zeta(p_\varepsilon)\,d\sigma \le \int_\Omega |\nabla g_0^\varepsilon(x, y_\varepsilon)|\,|\zeta(p_\varepsilon)|\,dx.$

Letting $\zeta \to \operatorname{sgn}$, we find that $\partial p_\varepsilon/\partial\nu \in L^1(\Gamma)$ and

$\int_\Gamma |\partial p_\varepsilon/\partial\nu|\,d\sigma \le C$ for all $\varepsilon > 0$. (3.190)

For $u \in U$ denote by $\theta^\varepsilon u = y$ the solution to (3.177). We note that the operator $\theta^\varepsilon : U \to L^2(\Omega)$ is Gâteaux differentiable, and its differential at $u_\varepsilon$ is given by $\nabla\theta^\varepsilon(u_\varepsilon)(v) = z_\varepsilon$, where $z_\varepsilon \in H^2(\Omega)$ is the solution to the following boundary value problem:

$z_\varepsilon - \Delta z_\varepsilon = Bv$ a.e. in $\Omega$, $\quad \dfrac{\partial z_\varepsilon}{\partial\nu} + \dot\beta^\varepsilon(y_\varepsilon)z_\varepsilon = 0$ a.e. in $\Gamma$. (3.191)
Since $u_\varepsilon$ is a minimum point, it follows by a standard procedure involving (3.191) that

$B^* p_\varepsilon \in \partial h(u_\varepsilon) + u_\varepsilon - u^*.$ (3.192)

In the light of Lemma 3.9 there is a sequence $\varepsilon_n \to 0$ for $n \to \infty$ and a function $q_0 \in L^1(\Omega)$ such that

$\nabla g_0^{\varepsilon_n}(x, y_{\varepsilon_n}) \to q_0$ weakly in $L^1(\Omega)$. (3.193)

Since by (3.188) $\{p_\varepsilon\}$ remains in a bounded subset of $W^{1,q}(\Omega)$, $1 \le q < N/(N-1)$, we have, on a further subsequence,

$p_\varepsilon \to p$ strongly in $L^q(\Omega)$ and weakly in $W^{1,q}(\Omega)$ (3.194)
$p_\varepsilon \to p$ weakly in $W^{1-1/q,q}(\Gamma)$ and strongly in $L^q(\Gamma)$.

(We recall that the trace operator maps $W^{1,q}(\Omega)$ into $W^{1-1/q,q}(\Gamma)$.) Using Lemma 3.4, it follows by (3.193) that $q_0(x) \in \partial g_0(x, y^*(x))$ a.e. $x \in \Omega$.
Now, letting $\varepsilon$ tend to zero in (3.187), (3.192), we see by the same method as in the proof of Theorem 3.4 that $p$ satisfies (3.174), (3.175).

REMARK 3.11 If $\beta$ is defined by (2.54), then under assumptions (j), (jj) the necessary conditions for optimality are the following (see [8]):

$-p + \Delta p = q_0$, $\quad q_0(x) \in \partial g_0(x, y^*(x))$ a.e. $x \in \Omega$
$p\,\dfrac{\partial y^*}{\partial\nu} = 0$ a.e. in $\Gamma$
$y^*\,\dfrac{\partial p}{\partial\nu} = 0$ a.e. in $\Gamma$
$B^* p \in \partial h(u^*).$

4 Parabolic variational inequalities
In this chapter we present for later use some basic results in the existence theory of variational inequalities of parabolic type. We emphasize those aspects of the abstract theory which are of importance in practical applications. We pay special attention to some moving boundary problems of a physical nature which can be represented as parabolic variational inequalities. The main references for the general theory of parabolic variational inequalities are [6], [19], [21], [50]. For the applications the basic references are the books [30], [33].

§4.1 The main existence results

Here and throughout the following the framework is that of Section 2.1. Thus $V$ and $H$ will be real Hilbert spaces, $V$ is a dense subspace of $H$ and

$V \subset H \subset V'$

algebraically and topologically. We shall denote by $|\cdot|$ and $\|\cdot\|$ the norms in $H$ and $V$, respectively, and by $(\cdot,\cdot)$ the scalar product in $H$ and the pairing between $V$ and its dual $V'$. The norm of $V'$ will be denoted $\|\cdot\|_*$. We are given a linear continuous and symmetric operator $A$ from $V$ to $V'$ satisfying, for some $\omega > 0$ and real $\alpha$, the coercivity condition

$(Ay, y) + \alpha|y|^2 \ge \omega\|y\|^2$ for all $y \in V$, (4.1)

and a lower semicontinuous convex function $\varphi : V \to \bar{\mathbb{R}}$. For $y_0 \in V$ and $f \in L^2(0,T;V')$, consider the problem: Find $y$ such that

$(y'(t) + Ay(t),\ y(t) - z) + \varphi(y(t)) - \varphi(z) \le (f(t),\ y(t) - z)$ a.e. $t \in ]0,T[$, for all $z \in V$ (4.2)
$y(0) = y_0.$

Here $y' = dy/dt$ is the strong derivative of $y : [0,T] \to V'$. In terms of the subgradient mapping $\partial\varphi : V \to V'$, problem (4.2) can be written as

$y'(t) + Ay(t) + \partial\varphi(y(t)) \ni f(t)$ a.e. $t \in ]0,T[$ (4.2)′
$y(0) = y_0.$

This is an abstract variational inequality of parabolic type. In applications to partial differential equations, $V$ is a Sobolev subspace of $H = L^2(\Omega)$, $A$ is an elliptic differential operator on $\Omega$, and the unknown $y$ is a function of two variables $(x,t) \in \Omega \times [0,T]$ which can be viewed as a function of $t$ from $[0,T]$ to $L^2(\Omega)$. Then the derivative $y'(t)$ can be viewed as the partial derivative $y_t$ of $y$ (see the examples which follow). In the special case where $\varphi = I_K$ is the indicator function of some closed convex subset $K$ of $V$, i.e.

$\varphi(y) = 0$ if $y \in K$, $\quad \varphi(y) = +\infty$ if $y \notin K$, (4.3)

the variational inequality (4.2) reduces to

$y(t) \in K$ for all $t \in [0,T]$
$(y'(t) + Ay(t),\ y(t) - z) \le (f(t),\ y(t) - z)$ a.e. $t \in ]0,T[$, for all $z \in K$ (4.4)
yeO) = YO' THEOREM 4.1
1
Let YO E V and f E W ,2([O,T];V') be such that
(4.5) 1 Then the vaY'iationaL 1:nequaLity (4.2) has a un1:que ,soLution y E W ,2([O,T];V) n W1 ,00([O,T];H). MOY'eov12,(" the maD (YO.,f) -+ y is [Jpschitzian f'('om
2
2
H x L (O,T;V') to C([O,T];H) n L ([O,T];V). 2 If f E L (O,T;H) and ¢(yO) < + 00 then (4.2) has a uniqUe! soLution y E W1 ,2([O,T];H) n L2(O,T;V) and the map (YO,f) -+ y is Lipschitzian fOY'm 124
H x L2(O,T;H) to
C([O,TJ~H)
n l 2(O,T;V).
Mo~eoue~,
one has
y'(t) = (f(t) - Ay(t) - d¢(y(t)))O a.e. t E JO,T[. Proof
Define in H the operator l:D(l) ly = {Ay D(l) =
{y
+
c
(4.2) "
H+ H
d¢(Y)} n H Vy E D(l)
E V ; {Ay
+
(4.6)
d¢(Y)} n H j S}.
(4.7)
We note first that the operator $\alpha I + L$ is maximal monotone in $H \times H$ ($I$ is the unit operator in $H$). Indeed, by condition (4.1) the operator $\alpha I + A$ is continuous and positive definite from $V$ to $V'$. Since $\partial\varphi : V \to V'$ is maximal monotone (Theorem 1.8), we conclude by Theorem 1.7 that $\alpha I + L$ is maximal monotone. Then by Theorem 1.12 it follows that for any $y_0 \in D(L)$ and $g \in W^{1,1}([0,T];H)$ the Cauchy problem

$\dfrac{d}{dt}y(t) + Ly(t) \ni g(t)$ a.e. $t \in ]0,T[$ (4.8)
$y(0) = y_0$

has a unique solution $y \in W^{1,\infty}([0,T];H)$. Let us observe that $D(L)$ is a dense subset of $D(\varphi) = \{y \in V;\ \varphi(y) < +\infty\}$. Indeed, the function $\psi : H \to \bar{\mathbb{R}}$ given by

$\psi(y) = \tfrac12(Ay + \alpha y,\ y) + \varphi(y)$ (4.9)

is convex, and by (4.1)

$\lim_{\|y\|\to+\infty} \psi(y)/\|y\| = +\infty.$

This implies that the function $\psi$ is lower semicontinuous on $H$ (because every level subset $\{y \in H;\ \psi(y) \le \lambda\}$ is closed in $H$). Since, as readily seen, $\alpha I + L \subset \partial\psi$ and $\alpha I + L$ is maximal monotone in $H \times H$, we infer that

$\partial\psi = \alpha I + L.$ (4.10)
Then by Proposition 1.7, $D(L)$ is a dense subset of $D(\psi) = D(\varphi)$ (in the topology of $H$). Now let $y_0 \in V$ and $f \in W^{1,2}([0,T];V')$ be such that condition (4.5) holds. Let $\{y_0^n\} \subset D(L)$ and $\{f_n\} \subset W^{1,2}([0,T];H)$ be such that

$y_0^n \to y_0$ strongly in $H$ and weakly in $V$ (4.11)
$f_n \to f$ strongly in $L^2(0,T;V')$ (4.12)
$f_n' \to f'$ strongly in $L^2(0,T;V')$.

For every $n$ denote by $y_n \in W^{1,\infty}([0,T];H)$ the solution to the Cauchy problem

$\dfrac{dy_n}{dt} + Ly_n \ni f_n$ a.e. $t \in ]0,T[$ (4.13)
$y_n(0) = y_0^n.$

Multiplying (4.13) by $y_n - y_0$ and using condition (4.1), we get

$\tfrac12\dfrac{d}{dt}|y_n(t) - y_0|^2 + \omega\|y_n(t) - y_0\|^2 \le \alpha|y_n(t) - y_0|^2 + (f_n(t) - \xi,\ y_n(t) - y_0)$ a.e. $t \in ]0,T[$ (4.14)

where $\xi \in Ay_0 + \partial\varphi(y_0)$. Integrating on $[0,t]$ and applying the Gronwall lemma, we see that

$|y_n(t) - y_0|^2 + \int_0^T \|y_n(t) - y_0\|^2\,dt \le C$ for all $n$.

Next we use the monotonicity of $\partial\varphi$ and condition (4.1) in (4.13) to get

$\tfrac12\dfrac{d}{dt}|y_n(t) - y_m(t)|^2 + \omega\|y_n(t) - y_m(t)\|^2 \le \alpha|y_n(t) - y_m(t)|^2 + \|f_n(t) - f_m(t)\|_*\,\|y_n(t) - y_m(t)\|.$
Integrating on $[0,t]$ and again using the Gronwall lemma, we see after some manipulation that

$|y_n(t) - y_m(t)|^2 + \int_0^T \|y_n(t) - y_m(t)\|^2\,dt \le C\Big(|y_0^n - y_0^m|^2 + \int_0^T \|f_n(t) - f_m(t)\|_*^2\,dt\Big).$

Hence there is $y \in L^\infty(0,T;H) \cap L^2(0,T;V)$ such that

$y_n \to y$ strongly in $C([0,T];H) \cap L^2(0,T;V)$.
Now, again using (4.13) and (4.1), we get

$\tfrac12\dfrac{d}{dt}|y_n(t+h) - y_n(t)|^2 + \omega\|y_n(t+h) - y_n(t)\|^2 \le \alpha|y_n(t+h) - y_n(t)|^2 + \|f_n(t+h) - f_n(t)\|_*\,\|y_n(t+h) - y_n(t)\|.$

This yields

$|y_n(t+h) - y_n(t)|^2 + \int_0^{T-h}\|y_n(t+h) - y_n(t)\|^2\,dt \le C\Big(|y_n(h) - y_0^n|^2 + \int_0^{T-h}\|f_n(t+h) - f_n(t)\|_*^2\,dt\Big)$

and, letting $n$ tend to $+\infty$,

$|y(t+h) - y(t)|^2 + \int_0^{T-h}\|y(t+h) - y(t)\|^2\,dt \le C\Big(|y(h) - y_0|^2 + \int_0^{T-h}\|f(t+h) - f(t)\|_*^2\,dt\Big), \quad 0 \le t \le T-h.$ (4.15)
In (4.14) we take $\xi \in Ay_0 + \partial\varphi(y_0)$ in such a way that $f(0) - \xi \in H$. Then, integrating on $[0,t]$, letting $n \to \infty$ and using the Gronwall lemma, it follows that $|y(t) - y_0| \le Ct$ for all $t \in [0,T]$. Together with (4.15), this implies that $y$ is $H$-valued absolutely continuous on $[0,T]$ and

$|y'(t)|^2 + \int_0^T \|y'(t)\|^2\,dt \le C\Big(1 + \int_0^T \|f'(t)\|_*^2\,dt\Big)$ a.e. $t \in ]0,T[$.

Hence $y \in W^{1,\infty}([0,T];H) \cap W^{1,2}([0,T];V)$. Now we shall prove that $y$ satisfies (4.2)′.
Using once again (4.13), we get

$(y_n'(s),\ y_n(s) - z) \le (f_n(s) + \alpha y_n(s) - \eta,\ y_n(s) - z)$

where $z \in D(L)$ and $\eta \in Lz + \alpha z$. Integrating the latter from $t$ to $t+\varepsilon$ yields

$\tfrac12\big(|y_n(t+\varepsilon) - z|^2 - |y_n(t) - z|^2\big) \le \int_t^{t+\varepsilon}(f_n(s) + \alpha y_n(s) - \eta,\ y_n(s) - z)\,ds.$

Thus, letting $n \to \infty$,

$\tfrac12\big(|y(t+\varepsilon) - z|^2 - |y(t) - z|^2\big) \le \int_t^{t+\varepsilon}(f(s) + \alpha y(s) - \eta,\ y(s) - z)\,ds.$

Hence

$(y(t+\varepsilon) - y(t),\ y(t) - z) \le \int_t^{t+\varepsilon}(f(s) + \alpha y(s) - \eta,\ y(s) - z)\,ds.$

As seen above, $y$ is $H$-valued absolutely continuous and consequently almost everywhere differentiable on $[0,T]$. Thus the last inequality yields

$(y'(t) + \eta - \alpha y(t) - f(t),\ y(t) - z) \le 0$ a.e. $t \in ]0,T[$, $z \in H$,

and, since $\alpha I + L$ is maximal monotone in $H \times H$, we have

$f(t) \in y'(t) + Ly(t)$ a.e. $t \in ]0,T[$
as claimed. Now let $(y_0^i, f_i)$, $i = 1,2$, satisfy conditions (4.5) and let $y_i$, $i = 1,2$, be the corresponding solutions to (4.2), i.e.

$y_i'(t) + Ly_i(t) \ni f_i(t)$ a.e. $t \in ]0,T[$
$y_i(0) = y_0^i$, $i = 1,2$.

By condition (4.1) it follows that

$\tfrac12\dfrac{d}{dt}|y_1(t) - y_2(t)|^2 + \omega\|y_1(t) - y_2(t)\|^2 \le \alpha|y_1(t) - y_2(t)|^2 + \|f_1(t) - f_2(t)\|_*\,\|y_1(t) - y_2(t)\|$ a.e. $t \in ]0,T[$.

Then, integrating on $[0,t]$ and using the Gronwall lemma,

$|y_1(t) - y_2(t)|^2 + \int_0^T \|y_1(t) - y_2(t)\|^2\,dt \le C\Big(|y_0^1 - y_0^2|^2 + \int_0^T \|f_1(t) - f_2(t)\|_*^2\,dt\Big).$ (4.16)

Now assume that $f \in L^2(0,T;H)$ and $y_0 \in D(\psi)$. As seen above, (4.2)′ can be written

$y' + \partial\psi(y) - \alpha y \ni f$ a.e. $t \in ]0,T[$ (4.17)
$y(0) = y_0$

where $\psi : H \to \bar{\mathbb{R}}$ is given by (4.9). Then by Theorem 1.13 we conclude that (4.17) has a unique solution $y \in W^{1,2}([0,T];H)$ with $\psi(y) \in L^1(0,T)$, i.e. $y \in L^2(0,T;V)$. Now, arguing as in the proof of (4.16), we see that

$|y_1(t) - y_2(t)|^2 + \int_0^T \|y_1(t) - y_2(t)\|^2\,dt \le C\Big(|y_0^1 - y_0^2|^2 + \int_0^T |f_1(t) - f_2(t)|^2\,dt\Big),$
thereby completing the proof.

Since the set of all $(y_0, f) \in V \times W^{1,2}([0,T];V')$ satisfying condition (4.5) is a dense subset of $D(\varphi) \times L^2(0,T;V')$, and the map $(y_0, f) \to y$ is Lipschitzian from $H \times L^2(0,T;V')$ to $C([0,T];H) \cap L^2(0,T;V)$, we may extend it by continuity to all of $\overline{D(\varphi)} \times L^2(0,T;V')$. We shall call such a function $y$ a weak solution of (4.2).

COROLLARY 4.1 For every $y_0 \in \overline{D(\varphi)}$ and $f \in L^2(0,T;V')$, (4.2) has a unique weak solution $y \in C([0,T];H) \cap L^2(0,T;V)$.

In particular, for $\varphi = I_K$, Theorem 4.1 gives:
129
THEOREM 4.2 Let $y_0 \in K$ and $f \in W^{1,2}([0,T];V')$ be given such that, for some $\xi_0 \in H$,

$(f(0) - Ay_0 - \xi_0,\ y_0 - v) \ge 0 \quad \forall v \in K.$ (4.18)

Then problem (4.4) has a unique solution $y \in W^{1,2}([0,T];V) \cap W^{1,\infty}([0,T];H)$. If $f \in L^2(0,T;H)$ and $y_0 \in K$, then problem (4.4) has a unique solution $y \in W^{1,2}([0,T];H) \cap L^2(0,T;V)$. Assume in addition that, for some $\omega > 0$,

$(Av, v) \ge \omega\|v\|^2 \quad \forall v \in V$ (4.19)

and there exists $h \in H$ such that

$(I + \varepsilon A_H)^{-1}(v + \varepsilon h) \in K$ for all $\varepsilon > 0$ and all $v \in K$. (4.20)

Then $Ay \in L^2(0,T;H)$ and $y \in L^\infty(0,T;V)$. Here $A_H y = Ay \cap H$, and recall (see Section 2.2) that, by virtue of condition (4.19), the operator $A_H$ is maximal monotone in $H \times H$.

Proof The first part is an immediate consequence of Theorem 4.1. Now assume that $f \in L^2(0,T;H)$, $y_0 \in K$ and conditions (4.19), (4.20) hold. Let $y \in W^{1,2}([0,T];H) \cap L^2(0,T;V)$ be the solution to (4.4). In (4.4) we take $z = (I + \varepsilon A_H)^{-1}(y + \varepsilon h)$ to get
$(y'(t) + Ay(t),\ A_\varepsilon y(t) - (I + \varepsilon A_H)^{-1}h) \le (f(t),\ A_\varepsilon y(t) - (I + \varepsilon A_H)^{-1}h)$ a.e. $t \in ]0,T[$, (4.21)

where $A_\varepsilon = A(I + \varepsilon A_H)^{-1} = \varepsilon^{-1}(I - (I + \varepsilon A_H)^{-1})$. Noting that, by Lemma 1.2,

$\tfrac12\dfrac{d}{dt}(A_\varepsilon y(t),\ y(t)) = (y'(t),\ A_\varepsilon y(t))$ a.e. $t \in ]0,T[$,

we see by (4.21) that

$(A_\varepsilon y(t),\ y(t)) + \int_0^t |A_\varepsilon y(s)|^2\,ds \le C,$

whence

$\int_0^T |A_\varepsilon y(t)|^2\,dt + (A_\varepsilon y(t),\ y(t)) \le C \quad \forall t \in [0,T],$
and by Theorem 1.2 part (vii) we infer that $Ay \in L^2(0,T;H)$ and $y \in L^\infty(0,T;V)$, thereby completing the proof.

The last existence result is concerned with the situation where $\varphi$ is lower semicontinuous on $H$. As seen earlier, this happens for instance if the convex l.s.c. function $\varphi : V \to \bar{\mathbb{R}}$ satisfies the growth condition

$\lim_{\|u\|\to\infty} \varphi(u)/\|u\| = +\infty.$
THEOREM 4.3
VI be a linear cO'1tir..u.ous and symmetriC' op2Y'ator satisfying (.!owiition (4.1) and let
at~,qume
Let A:V
+
that there pxists C indr?pendent of s such that
(Ay,V
(4.22)
2
Yo E D(
yl(t) = (f(t)-Ay(t) - d
a.e. t E JO,T[.
( 4.23)
1 2 Yo E O(cp) n V thr?n yEW' ([O,TJ;H) n C([O,TJ;V) and Ay E L2 (O,T;H), O(y) E Loo(O,T). Finally, if yO E O(A H) n O(dQ) and f E W1,1([0,TJ;H) then If
1
00
yEW' ([O,T];H). Here
As seen above, the operator Aa:H
+
H defined by
Aay = ay+Ay for y E O(A a ) = D(A H) = {v E V; Av E H} 131
is maximal monotone in H x H.
Then by Theorem 1.10 the operator r :H a.
+
H
is maximal monotone in H x Hand IA YI-< C(lro yl a.
a.
+
Iyl
+ 1)
Vy E D(r ). a.
(4.24)
More precisely, r = a¢ , where a.
¢a. ( y) =
a.
±(Ay, y)
+
¢(y)
+
~ Iy I 2 v Y E V.
Writing (4.2) as y'
+
r y - a.Y a.
= f a.e. t
E
JO,T[
y(O) = Yo and applying Theorem 1.13 it follows that for Yo E~) there exists a unique solution y E W1,2(JO,TJ;H) satisfying (4.23). If Yo E D(¢) n V then y E W1,2([0,TJ;H) and ¢ (y) E W1,1([0,TJ). Then by (4.24) we conclude that Ay E L2(0,T;H) and by cgndition (4.1), y E Loo(O,T;V). Since yEC([O,TJ;H), this implies that y is weakly continuous from [O,TJ to v. Now since ¢ (y) E C([O,TJ) and ¢:H + R is lower semicontinuous, we have a.
lim (Ay(t), y(t )) -< (Ay(t),y(t)) t + t n n
vt E [O,TJ.
n
Then using condition (4.1) it follows that y E C([O,TJ;V) as claimed. THEOREM 4.4 The conclusions of Theorem 4.3 remain valid if condition (4.22) is replaced by the following: there is h E H such that (4.25) Proof It is necessary only to combine Theorems 1.10 and 1.13 as in the preced i ng proof. In particular for ¢ = IK we have: COROLLARY 4.2 Let A:V 132
+
V' be a linear continuous and symmetric operator
Let K be a closed convex subset of H.
satisfying condition (4.1).
Assume
h E H such that
~xists
that there
(4.26) Then for' ever'Y YO E K and f E L2( 0, T;.H) ther'e exists a uniq,w solution
y
E
W,,2([0,TJ;.H) n C([O,TJ;.'1) n L2(0,T;.D(A H» to vaY~ational inequality (y'(t)+Ay(t)-f(t), y(t)-z) '" 0, a.e. t
E
JO,T[,
VZ E
yet) E K for t E [O,TJ, yeO) ::: YO' MOr'e
precisely~
K (4.27)
one has
y'(t) ::: (f(t)-Ay(t)-arK(y(t»)O a.e. t
E
]O,T[
where
dIK(y) ::: {w
H;. (w,y-z.) ;;:..
E
° for
all
Z E
KL
We conclude this section with an approximation result for the solutions to (4.2). Consider the equation ([ > 0) y' + Ay
+ v~[(y)
::: f
a.e. t
E
JO,T[ (4.28)
yeO) ::: YO where YO
E V.
and f
2
E L
Ay + v¢[(y) :::
{0,T;.H).
Noting that
d~[(Y), ~[(y)
,
::: 2 (Ay,y) + ¢s(y)
we conclude by Theorem '.13 that (4.28) has a unique solution
THEOREM 4.5 Under' thp hypotheses of Theor'em 4.3 for s str'ongly in C([O,T];.H)
y~ Ays
+
°
2 L (0,T;V)
2
y'
+
n
+
weakly in L {0,T;H)
Ay w~akly in L2{0,T;H)
vQs(Ys)
+
2 f-y'-Ay E d¢(y) wea~ly in L (0,T;H), 133
where y is the soZution to (4.2).
Proof Multiply (scalarly in H) the equation

  y'_ε + Ay_ε + ∇φ_ε(y_ε) = f a.e. t ∈ ]0,T[   (4.29)

by y_ε, y'_ε and ∇φ_ε(y_ε), respectively. Using conditions (4.1), (4.22) and Lemma 1.1 we obtain after some manipulation

  φ_ε(y_ε(t)) + |y_ε(t)|² + ‖y_ε(t)‖² + ∫₀ᵀ (|y'_ε(t)|² + |∇φ_ε(y_ε(t))|²)dt ≤ C for t ∈ [0,T], ε > 0   (4.30)

where C is independent of ε. On the other hand, we have by condition (4.1)

  ½ (d/dt)|y_ε(t) − y_λ(t)|² + ω‖y_ε(t) − y_λ(t)‖² + (∇φ_ε(y_ε(t)) − ∇φ_λ(y_λ(t)), y_ε(t) − y_λ(t)) ≤ α|y_ε(t) − y_λ(t)|² a.e. t ∈ ]0,T[.

Recalling (see Theorem 1.9) that

  ∇φ_ε(y_ε) = ε⁻¹(y_ε − (I + ε∂φ)⁻¹y_ε) ∈ ∂φ((I + ε∂φ)⁻¹y_ε)

and that, by (4.30), {∇φ_ε(y_ε)} is bounded in L²(0,T;H), we have

  ½|y_ε(t) − y_λ(t)|² + ω ∫₀ᵗ ‖y_ε(s) − y_λ(s)‖² ds ≤ ∫₀ᵗ (∇φ_ε(y_ε(s)) − ∇φ_λ(y_λ(s)), ε∇φ_ε(y_ε(s)) − λ∇φ_λ(y_λ(s)))ds + α ∫₀ᵗ |y_ε(s) − y_λ(s)|² ds,

and by Gronwall's lemma

  |y_ε(t) − y_λ(t)|² + ∫₀ᵀ ‖y_ε(t) − y_λ(t)‖² dt ≤ C(ε + λ) for all ε, λ > 0.   (4.30)'

Thus there exists y ∈ W^{1,2}([0,T];H) ∩ L²(0,T;D(A_H)) such that, for ε → 0,

  y_ε → y strongly in C([0,T];H) ∩ L²(0,T;V).

Then by (4.30) we may infer that

  Ay_ε → Ay strongly in L²(0,T;V') and weakly in L²(0,T;H)
  y'_ε → y' weakly in L²(0,T;H)
  ∇φ_ε(y_ε) → ξ = f − y' − Ay weakly in L²(0,T;H).
Let Φ:L²(0,T;H) → R̄ be the l.s.c. convex function

  Φ(y) = ∫₀ᵀ φ(y(t))dt, ∀y ∈ L²(0,T;H).   (4.31)

As noted in Section 1.5, the subdifferential ∂Φ:L²(0,T;H) → L²(0,T;H) is given by

  ∂Φ(y) = {ξ ∈ L²(0,T;H); ξ(t) ∈ ∂φ(y(t)) a.e. t ∈ ]0,T[}

and (∂Φ)_ε(y)(t) = (∂φ)_ε(y(t)) = ∇φ_ε(y(t)) a.e. t ∈ ]0,T[. Then by Theorem 1.2 part (vii) we conclude by (4.31) that ξ ∈ ∂Φ(y), thereby completing the proof.
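The convergence asserted by Theorem 4.5 can be watched in the simplest scalar case. The sketch below is an assumption-laden toy (H = R, A = 0, φ the indicator of [0,∞), f ≡ −1; all data illustrative): the limit solution is y(t) = max(1 − t, 0) and the deviation of y_ε from it shrinks with ε, consistent with the O(ε + λ) estimate (4.30)'.

```python
# Scalar model of Theorem 4.5: y' + grad phi_eps(y) = f, where phi is the
# indicator function of [0, inf), so grad phi_eps(y) = min(y, 0)/eps.
# With f = -1, y(0) = 1, the limit problem y' + dI_[0,inf)(y) ∋ f has the
# solution y(t) = max(1 - t, 0).  All data here are illustrative.

def penalized_error(eps, f=-1.0, y0=1.0, T=2.0, n=40000):
    """Max deviation of the penalized trajectory from the limit solution."""
    h = T / n
    y, err = y0, 0.0
    for k in range(n):
        t = (k + 1) * h
        rhs = y + h * f
        # the stiff penalty term is treated implicitly (closed form update)
        y = rhs if rhs >= 0 else rhs / (1.0 + h / eps)
        err = max(err, abs(y - max(1.0 - t, 0.0)))
    return err

e1, e2 = penalized_error(0.1), penalized_error(0.01)
```

The penalized equilibrium below the constraint sits at −ε here, so the error is essentially proportional to ε.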
REMARK 4.1 Theorem 4.5 remains valid if, instead of {φ_ε}, we consider a family φ^ε:H → R of Fréchet differentiable functions satisfying the conditions

  φ^ε(y) ≥ −C(|y| + 1) for all ε > 0 and y ∈ H
  |∇φ^ε(y)| ≤ C(1 + |y|) for all ε > 0, y ∈ H.
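The regularization β_ε = ε⁻¹(1 − (1 + εβ)⁻¹) used throughout has a closed form when β = ∂I_{[0,∞)}: the resolvent is the projection onto [0,∞). A minimal sketch (illustrative only) checks the two facts the theory relies on, monotonicity and the 1/ε Lipschitz bound.

```python
# Yosida approximation beta_eps = (1/eps)*(I - (I + eps*beta)^{-1}) for the
# maximal monotone graph beta = dI_{[0,inf)} (beta(r) = {0} for r > 0,
# beta(0) = (-inf, 0], beta(r) empty for r < 0).  The resolvent is the
# projection onto [0, inf), so beta_eps(r) = min(r, 0)/eps.

def resolvent(r, eps):
    return max(r, 0.0)          # (I + eps*beta)^{-1} r = projection

def beta_eps(r, eps):
    return (r - resolvent(r, eps)) / eps
```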
In this section we give some examples of parabolic variational inequalities covered by the abstract formulation of Section 4.1. Throughout this section Ω will be a bounded and open subset of R^N having a sufficiently smooth boundary Γ, and A₀ is the second-order elliptic differential operator (2.42), i.e.

  A₀y = −Σ_{i,j=1}^N (a_{ij}(x)y_{x_i})_{x_j} + a₀(x)y   (4.32)

where a_{ij} ∈ C¹(Ω̄), a₀ ∈ L^∞(Ω), a_{ij} = a_{ji} for all i,j, a₀(x) ≥ 0 a.e. x ∈ Ω and, for some ω > 0,

  Σ_{i,j=1}^N a_{ij}(x)ξ_iξ_j ≥ ω‖ξ‖² a.e. x ∈ Ω, ∀ξ ∈ R^N.

EXAMPLE 4.1
Consider the mixed boundary value problem

  y_t + A₀y + β(y − ψ) ∋ f in Q = Ω × ]0,T[
  y(x,0) = y₀(x), x ∈ Ω   (4.33)
  α₁y + α₂ ∂y/∂ν = 0 in Σ = Γ × ]0,T[

where α_i ≥ 0, i = 1,2 and α₁ + α₂ > 0. Here ψ ∈ H²(Ω), f ∈ L²(Q), y₀ ∈ L²(Ω) are given functions and β is a maximal monotone graph in R × R such that 0 ∈ D(β). In particular, β might be a continuous, monotonically increasing function on R. By shifting the range of β we may assume, without loss of generality, that 0 ∈ β(0). If α₂ ≠ 0 then we take V = H¹(Ω), H = L²(Ω) and define A ∈ L(V,V') by (2.33). If α₂ = 0 then V = H₀¹(Ω) and A ∈ L(H₀¹(Ω), H⁻¹(Ω)) is defined by (2.34). Recall that

  (Ay)(x) = (A₀y)(x) a.e. x ∈ Ω for all y ∈ D(A_H)   (4.34)

where

  D(A_H) = {y ∈ H²(Ω); α₁y + α₂ ∂y/∂ν = 0 a.e. in Γ}.   (4.35)

In particular, D(A_H) = H₀¹(Ω) ∩ H²(Ω) if α₂ = 0. Moreover, we have estimate (4.36) (see (1.38)). (As usual, |·|₂ denotes the norm of L²(Ω).) Let j:R → R̄ be such that β = ∂j and let φ:L²(Ω) → R̄ be defined by

  φ(y) = ∫_Ω j(y(x) − ψ(x))dx ∀y ∈ L²(Ω).   (4.37)
As seen in Proposition 1.9,

  (∇φ_ε)(y)(x) = (∂φ)_ε(y)(x) = β_ε(y(x) − ψ(x)) a.e. x ∈ Ω   (4.38)

where β_ε = ε⁻¹(1 − (1 + εβ)⁻¹), ε > 0. Thus if α₂ ≠ 0 then, since β'_ε ≥ 0 and β_ε(0) = 0, Green's formula yields

  (A_H y, ∇φ_ε(y)) ≥ ∫_Ω A₀ψ β_ε(y − ψ)dx + α₂⁻¹ ∫_Γ (α₁ψ + α₂ ∂ψ/∂ν)β_ε(y − ψ)dσ
   ≥ −|A₀ψ|₂ |∇φ_ε(y)|₂ + α₂⁻¹ ∫_Γ (α₁ψ + α₂ ∂ψ/∂ν)β_ε(y − ψ)dσ ∀y ∈ D(A_H).

Thus condition (4.22) is satisfied in this case if one assumes that either

  α₁ψ + α₂ ∂ψ/∂ν ≤ 0 a.e. in Γ and β_ε ≤ 0 in R   (4.39)

or α₁ψ + α₂ ∂ψ/∂ν = 0 a.e. in Γ. If α₂ = 0, condition (4.22) is satisfied if one assumes that

  β_ε(−ψ) = 0 a.e. in Γ for all ε > 0.   (4.40)

Indeed, in this case β_ε(y − ψ) ∈ H₀¹(Ω) for all y ∈ H₀¹(Ω), and by (4.34) we have for all y ∈ D(A_H)

  (A_H y, ∇φ_ε(y)) = a(y − ψ, β_ε(y − ψ)) + a(ψ, β_ε(y − ψ)) ≥ ∫_Ω A₀ψ β_ε(y − ψ)dx ≥ −C|∇φ_ε(y)|₂.
Then, taking into account (4.36), Theorem 4.3 yields

COROLLARY 4.3 Let f ∈ L²(0,T;H) = L²(Q) and y₀ ∈ H¹(Ω) (y₀ ∈ H₀¹(Ω) if α₂ = 0) be such that j(y₀ − ψ) ∈ L¹(Ω). Further assume that hypotheses (4.39) ((4.40) if α₂ = 0) hold. Then there exists a unique solution

  y ∈ W^{1,2}([0,T];L²(Ω)) ∩ L²(0,T;H²(Ω)) ∩ C([0,T];H¹(Ω))

to (4.33). If α₂ = 0 then y ∈ L²(0,T;H₀¹(Ω) ∩ H²(Ω)).
Taking into account (4.37), (4.38) and the fact that y(t) ∈ D(A_H) a.e. t ∈ ]0,T[, we may indeed regard y:Ω × [0,T] → R as a solution to the boundary value problem (4.33). As noted earlier, y_t is the strong derivative of the function t → y(·,t) from [0,T] to L²(Ω); this means that (∂y/∂t)(x,t) exists a.e. (x,t) ∈ Q and equals y_t(x,t).

Now we shall consider the special case where β is defined by (2.54), i.e.

  β(r) = 0 if r > 0, β(0) = ]−∞,0], β(r) = ∅ for r < 0.   (4.41)

Then φ = I_K where

  K = {y ∈ L²(Ω); y ≥ ψ a.e. in Ω}   (4.42)

and (4.33) reduces to a variational inequality of the form (4.27). More precisely, we have (see Corollary 4.2)

  y'(t) = (f(t) − Ay(t) − ∂I_K(y(t)))⁰ a.e. t ∈ ]0,T[   (4.43)

where

  ∂I_K(y) = {w ∈ L²(Ω); w(x) ≤ 0, w(x)(y(x) − ψ(x)) = 0 a.e. x ∈ Ω}.

Hence

  y_t(x,t) = f(x,t) − A₀y(x,t) a.e. in [x ∈ Ω; y(x,t) > ψ(x)]
  y_t(x,t) = max{f(x,t) − A₀ψ(x), 0} a.e. in [x ∈ Ω; y(x,t) = ψ(x)]   (4.44)

because A₀y = A₀ψ a.e. in [x ∈ Ω; y(x,t) = ψ(x)] (see [82]). Note that in this case β_ε(r) = −ε⁻¹r⁻, and so conditions (4.39), (4.40) are satisfied if

  α₁ψ + α₂ ∂ψ/∂ν ≤ 0 a.e. in Γ   (4.39)'

respectively

  ψ ≤ 0 a.e. in Γ.   (4.40)'
Then again by Theorem 4.3 we have

COROLLARY 4.4 Let f ∈ L²(Q) and y₀ ∈ H¹(Ω) (y₀ ∈ H₀¹(Ω) if α₂ = 0) be such that y₀ ≥ ψ a.e. in Ω, where ψ ∈ H²(Ω) satisfies condition (4.39)' ((4.40)' if α₂ = 0). Then there exists a unique solution y ∈ W^{1,2}([0,T];L²(Ω)) ∩ L²(0,T;H²(Ω)) ∩ C([0,T];H¹(Ω)) to variational inequality (4.27), where K is given by (4.42).
More precisely, y satisfies the following equations:

  y_t(x,t) + A₀y(x,t) = f(x,t) a.e. in [(x,t) ∈ Q; y(x,t) > ψ(x)]
  y_t(x,t) = max{f(x,t) − A₀ψ(x), 0} a.e. in [(x,t) ∈ Q; y(x,t) = ψ(x)]
  y(x,t) ≥ ψ(x) ∀t ∈ [0,T], a.e. x ∈ Ω   (4.45)
  α₁y + α₂ ∂y/∂ν = 0 in Σ
  y(x,0) = y₀(x), x ∈ Ω.

As in the case of the obstacle problem (see Section 2.3) we may write (4.45) as the linear complementarity system

  (y_t(x,t) + A₀y(x,t) − f(x,t))(y(x,t) − ψ(x)) = 0 a.e. (x,t) ∈ Q
  y_t(x,t) + A₀y(x,t) − f(x,t) ≥ 0, y(x,t) ≥ ψ(x) a.e. (x,t) ∈ Q   (4.45)'
  y(x,0) = y₀(x), x ∈ Ω
  α₁y + α₂ ∂y/∂ν = 0 a.e. in Σ.
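One backward Euler step of the complementarity system (4.45)' leads, after a standard finite-difference discretization in one space dimension, to a linear complementarity problem, which projected Gauss–Seidel solves. The sketch below uses made-up grid data (ψ = 0, Dirichlet ends); it is an illustration of the structure, not the text's method.

```python
# One backward Euler step of (4.45)' on a 1-D grid.  The step is the linear
# complementarity problem  M y - b >= 0,  y >= psi,  (M y - b)(y - psi) = 0,
# with M = I + dt*A_h (A_h the second-difference matrix, Dirichlet ends)
# and b = y_old + dt*f.  Projected Gauss-Seidel: the usual Gauss-Seidel
# update, clipped at the obstacle.  All data are made up for illustration.

def pgs_step(y_old, psi, f, dt, dx, sweeps=500):
    n = len(y_old)
    c = dt / dx**2
    y = [max(v, p) for v, p in zip(y_old, psi)]   # end values stay fixed
    for _ in range(sweeps):
        for i in range(1, n - 1):
            rhs = y_old[i] + dt * f[i] + c * (y[i - 1] + y[i + 1])
            y[i] = max(rhs / (1.0 + 2.0 * c), psi[i])
    return y

n, dx, dt = 21, 0.05, 0.01
psi, y0 = [0.0] * n, [0.0] * n
f = [1.0 if 8 <= i <= 12 else -1.0 for i in range(n)]
y1 = pgs_step(y0, psi, f, dt, dx)
```

At convergence each interior node satisfies either y = ψ with non-negative residual or y > ψ with zero residual — the discrete analogue of (4.45)'.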
On the other hand, by Theorem 4.5, the solution y to (4.45) is the limit for ε → 0 of the solutions y_ε to the approximating penalized equation

  y_t + A₀y − ε⁻¹(y − ψ)⁻ = f in Q
  y(x,0) = y₀(x), x ∈ Ω   (4.46)
  α₁y + α₂ ∂y/∂ν = 0 in Σ.
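A possible discretization of the penalized equation (4.46) in one space dimension treats the diffusion explicitly and the stiff penalty term implicitly (the pointwise implicit solve has a closed form). The data below are illustrative; the run shows the penetration of y_ε below the obstacle shrinking with ε, in line with Theorem 4.5.

```python
# Time-stepping for the penalized equation (4.46) in one space dimension
# with A0 = -d^2/dx^2: diffusion explicit, penalty -(1/eps)(y - psi)^-
# implicit.  Obstacle, forcing and initial state are illustrative only.

def penalized_obstacle(eps, n=41, T=0.1, steps=400):
    dx = 1.0 / (n - 1)
    dt = T / steps                 # dt/dx^2 = 0.4 <= 1/2: stable
    psi = [0.2] * n                # flat obstacle
    y = [0.5] * n                  # initial state above the obstacle
    f = [-10.0] * n                # forcing that drives y downwards
    for _ in range(steps):
        lap = [0.0] * n
        for i in range(1, n - 1):
            lap[i] = (y[i - 1] - 2 * y[i] + y[i + 1]) / dx**2
        for i in range(1, n - 1):
            rhs = y[i] + dt * (lap[i] + f[i])
            # solve w = rhs + (dt/eps)*max(psi - w, 0) pointwise
            y[i] = rhs if rhs >= psi[i] else (rhs + (dt / eps) * psi[i]) / (1 + dt / eps)
        y[0], y[-1] = y[1], y[-2]  # homogeneous Neumann ends
    return min(y), max(y)
```

With these data the penalized steady state sits at ψ − 10ε, so halving ε halves the constraint violation.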
By analogy with the stationary case treated in Section 2.3, we call problem (4.45) the 'parabolic obstacle problem'. If Q⁺ = [(x,t) ∈ Q; y(x,t) > ψ(x)], we may view y as the solution to the free boundary problem

  y_t + A₀y = f in Q⁺
  y(x,0) = y₀(x), x ∈ Ω   (4.47)
  α₁y + α₂ ∂y/∂ν = 0 in Σ, y = ψ, ∂y/∂ν = ∂ψ/∂ν in ∂Ω⁺(t)

where ∂Ω⁺(t) is the boundary of the set Ω⁺(t) = [x ∈ Ω; y(x,t) > ψ(x)]. We shall call ∂Ω⁺(t) the moving boundary. The boundary ∂Q⁺ of Q⁺ ⊂ Q will be called the free boundary. In (4.47) the domain Q⁺ is unknown and must be determined along with the solution. As also seen in the case of elliptic problems, this is the main difference between classical and free boundary value problems. Under the abstract formulation (4.27) the free boundary does not appear. A consequence, however, is that under the variational formulation the problem is multivalued and nonlinear. Perhaps the best-known example of a parabolic free boundary value problem is the Stefan problem, which will be discussed in Section 4.3. However, we shall briefly present here a physical problem which can be described mathematically as an obstacle problem of the form (4.45).

Oxygen diffusion in an absorbing tissue ([33], [57])

Let us assume that oxygen with concentration y(x,t), x ∈ Ω, t ∈ [0,T], is diffusing in the bounded domain Ω ⊂ R³, that it is absorbed at a constant rate 1 wherever it is present, and that there is no diffusion on the boundary Γ of Ω. We set Ω⁺(t) = [x ∈ Ω; y(x,t) > 0], Ω⁰(t) = [x ∈ Ω; y(x,t) = 0]. Then Ω = Ω⁺(t) ∪ Ω⁰(t), and Γ(t) = ∂Ω⁺(t) ∩ ∂Ω⁰(t) is the moving boundary separating Ω⁺(t) and Ω⁰(t) (this is depicted schematically in Figure 4.1). Then y(t) satisfies the nondimensional diffusion equation in Ω⁺(t), i.e.
Figure 4.1

  y_t(x,t) − Δy(x,t) = −1 for x ∈ Ω⁺(t)   (4.48)

together with the initial value condition

  y(x,0) = y₀(x) for x ∈ Ω   (4.49)

and the boundary value conditions

  ∂y/∂ν = 0 in Σ = Γ × ]0,T[   (4.50)

  (∂y/∂ν)(x,t) = 0 for x ∈ Γ(t), t ∈ [0,T].   (4.51)

Equation (4.51) represents the mass conservation condition at the moving boundary Γ(t), and y₀ ≥ 0 is the initial distribution of oxygen concentration in the tissue. Instead of condition (4.50) we may prescribe the concentration on Γ, i.e.

  y = u in Σ   (4.50)'

or a diffusion flux on Γ,

  ∂y/∂ν = u in Σ.   (4.50)"

Now, returning to the free boundary problem (4.48) to (4.51), we note that
if (y, Γ(t)) is a sufficiently smooth solution then for every test function α ∈ C₀^∞(Q) we have

  ∫_Q (y_t α + ∇_x y·∇_x α + α)dxdt = ∫₀ᵀ dt ∫_{Ω⁺(t)} (y_t α + ∇_x y·∇_x α)dx + ∫_Q α dxdt.

Then (4.48) to (4.51), together with the Green formula, yield

  (y_t − Δy + 1)(α) = ∫₀ᵀ dt ∫_{Ω⁰(t)} α(x,t)dx + ∫₀ᵀ dt ∫_{Γ(t)} α (∂y/∂ν) dσ
   = ∫₀ᵀ dt ∫_{Ω⁰(t)} α(x,t)dx ∀α ∈ C₀^∞(Q).

Here (y_t − Δy + 1)(α) denotes the value of the distribution y_t − Δy + 1 ∈ D'(Q) at α. Hence y satisfies (in the sense of distributions) the following system (obstacle problem):

  y_t − Δy + 1 = 0 in [(x,t) ∈ Q; y(x,t) > 0]
  y(x,0) = y₀(x), x ∈ Ω   (4.52)
  ∂y/∂ν = 0 in Σ.
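In the limit ε → 0 the penalized scheme becomes a projection scheme, which gives a quick illustration of the oxygen problem (4.52) in one space dimension (illustrative initial profile and grid, not from the text): the positive region shrinks until the oxygen is fully depleted.

```python
# 1-D projection scheme for the oxygen problem (4.52): y_t = y_xx - 1 where
# y > 0, no-flux ends, implemented as y <- max(y + dt*(y_xx - 1), 0)
# (the eps -> 0 limit of the penalized scheme).  Profile/grid illustrative.

def oxygen(n=51, T=0.2, steps=2000):
    dx = 1.0 / (n - 1)
    dt = T / steps                          # dt/dx^2 = 0.25 <= 1/2: stable
    y = [0.5 * (i * dx) * (1 - i * dx) for i in range(n)]
    for _ in range(steps):
        new = y[:]
        for i in range(1, n - 1):
            lap = (y[i - 1] - 2 * y[i] + y[i + 1]) / dx**2
            new[i] = max(y[i] + dt * (lap - 1.0), 0.0)
        new[0], new[-1] = new[1], new[-2]   # dy/dnu = 0 at both ends
        y = new
    return y
```

With these data the concentration still has a positive core at t = 0.05 but is entirely absorbed by t = 0.2.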
By Corollary 4.4 we know that this problem has a unique solution y ∈ W^{1,2}([0,T];L²(Ω)) ∩ L²(0,T;H²(Ω)).

EXAMPLE 4.2 Consider the following boundary value problem

  y_t(x,t) + A₀y(x,t) + γ(y(x,t)) ∋ f(x,t), (x,t) ∈ Q
  y(x,0) = y₀(x), x ∈ Ω   (4.53)
  ∂y/∂ν + β(y) ∋ 0 in Σ

where γ and β are maximal monotone graphs in R × R such that 0 ∈ D(γ), 0 ∈ D(β), and A₀ is the elliptic operator (4.32).
We may apply Theorem 4.3 where H = L²(Ω), A = 0 and φ:L²(Ω) → R̄ is defined by

  φ(y) = ½ a(y,y) + ∫_Ω g(y)dx + ∫_Γ j(y)dσ if y ∈ H¹(Ω), φ(y) = +∞ otherwise.

Here γ = ∂g, β = ∂j and a:H¹(Ω) × H¹(Ω) → R is the bilinear form associated with A₀. As seen in Example 1.6,

  ∂φ(y) = {A₀y + w; w(x) ∈ γ(y(x)) a.e. x ∈ Ω}
  D(∂φ) = {y ∈ H²(Ω); ∂y/∂ν + β(y) ∋ 0 a.e. in Γ}.

On the other hand, by Theorem 1.10 (1.27) and estimate (1.38), the hypotheses of Theorem 4.4 are verified. Then by Theorem 4.4 (or directly by Theorem 1.13) we get

COROLLARY 4.5 Let f ∈ L²(Q) and y₀ ∈ H¹(Ω) be such that g(y₀) ∈ L¹(Ω) and j(y₀) ∈ L¹(Γ). Then (4.53) has a unique solution y ∈ L²(0,T;H²(Ω)) ∩ W^{1,2}([0,T];L²(Ω)).
Now we pause briefly to present some classical problems in heat conduction and diffusion theory modelled by equations of the form (4.53). For other examples of this type we refer the reader to [30], [33], [44].

(1) Newton's law of heat conduction

This is described mathematically by (4.53) where γ = 0 and β is a continuous, monotonically increasing function.

(2) The Stefan–Boltzmann law

The black body radiation heat emission on Γ is described by (4.53) with the boundary value condition

  ∂y/∂ν + a(y⁴ − y₁⁴) = 0 in Σ

where y₁ is the surrounding temperature (the surroundings are black) and
a > 0. Since the temperature is measured in absolute units we may write this equation as

  ∂y/∂ν + β(y) = 0 in Σ

where β(r) = a(r⁴ − y₁⁴) for r ≥ 0 and β(r) = −a y₁⁴ for r < 0.

(3) Natural convection

In this case the function β has the form

  β(r) = a r^{5/4} for r > 0, β(r) = 0 for r ≤ 0

where a is a positive constant.

(4) The Michaelis–Menten dynamic model of enzyme diffusion
This is described by the nondimensional equation (4.53), where y denotes the concentration and

  γ(r) = λr/(r + k) for r > 0, γ(0) = ]−∞,0], γ(r) = ∅ for r < 0
  β(r) = 0 ∀r ∈ R.

Here λ and k are positive constants.
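As a quick sanity check, the single-valued branch of the Michaelis–Menten nonlinearity is indeed monotonically increasing and saturates at λ, as a maximal monotone graph used in (4.53) must be; the values of λ and k below are illustrative.

```python
# Michaelis-Menten nonlinearity gamma(r) = lam*r/(k + r), single-valued
# branch for r > 0 (at r = 0 the graph also contains (-inf, 0]).
# lam and k are illustrative values.

def gamma(r, lam=2.0, k=0.5):
    return lam * r / (k + r) if r > 0 else 0.0

vals = [gamma(0.1 * i) for i in range(101)]
```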
(5) The thermostat control process

This is modelled by the linear heat equation (4.53), where

  β(r) = a₁(r − θ₁) if −∞ < r < θ₁
  β(r) = 0 if θ₁ ≤ r ≤ θ₂
  β(r) = a₂(r − θ₂) if θ₂ < r < ∞

and a_i > 0, θ_i ∈ R, i = 1,2. This is the mathematical description of a temperature control process for a heat conductor Ω whose temperature y is required to remain in a given interval [θ₁,θ₂] (see [30]). As a limit case we may take β to be of the form (4.41), i.e.

  β(r) = 0 if r > 0, β(0) = ]−∞,0], β(r) = ∅ if r < 0.
In this case problem (4.53) becomes

  y_t + A₀y = f a.e. in Q
  y(x,0) = y₀(x), x ∈ Ω
  y ≥ 0, ∂y/∂ν ≥ 0, y ∂y/∂ν = 0 a.e. in Σ,

which is the unsteady form of the Signorini problem.

EXAMPLE 4.3 We shall study here the inhomogeneous boundary value problem
  y_t(x,t) + A₀y(x,t) = f₀(x,t), (x,t) ∈ Q
  y(x,0) = y₀(x), x ∈ Ω   (4.54)
  ∂y/∂ν + β_i(y) ∋ g_i in Σ_i = Γ_i × ]0,T[, i = 1,2,

where Γ₁ and Γ₂ are smooth parts of Γ, Γ₁ = int(Γ∖Γ₂), β₁ and β₂ are two maximal monotone graphs in R × R such that 0 ∈ β_i(0), i = 1,2, and g_i ∈ L²(Σ_i), i = 1,2; f₀ ∈ L²(Q), y₀ ∈ L²(Ω) are given functions. For convenience we shall take A₀ = −Δ. To solve this problem we place it in the framework of Theorem 4.1. We set H = L²(Ω) and V = H¹(Ω), and we define A:V → V' by

  (Ay,z) = a(y,z) = ∫_Ω ∇y(x)·∇z(x)dx ∀y,z ∈ H¹(Ω)

and

  φ(y) = Σ_{i=1}^2 ∫_{Γ_i} j_i(y)dσ, where β_i = ∂j_i, i = 1,2.

Finally, define f ∈ L²(0,T;V') by

  (f(t),z) = ∫_Ω f₀(x,t)z(x)dx + Σ_{i=1}^2 ∫_{Γ_i} g_i(σ,t)z(σ)dσ ∀z ∈ H¹(Ω).

A formal calculation involving the Green formula reveals that a 'sufficiently smooth' solution to the evolution equation
  y'(t) + Ay(t) + ∂φ(y(t)) ∋ f(t), t ∈ [0,T]   (4.55)
  y(0) = y₀

or, equivalently,

  (d/dt)(y(t),z) + a(y(t),z) + Σ_{i=1}^2 ∫_{Γ_i} (w_i − g_i)z dσ = ∫_Ω f₀ z dx a.e. t ∈ ]0,T[, w_i ∈ β_i(y(t)), z ∈ H¹(Ω)   (4.55)'
  y(0) = y₀,

satisfies the boundary value problem (4.54). Now by Theorem 4.1 it follows that if f₀ ∈ W^{1,2}([0,T];L²(Ω)), g_i ∈ W^{1,2}([0,T];L²(Γ_i)), and y₀ ∈ H¹(Ω) satisfies the condition

  −Δy₀ = f₁ in Ω
  ∂y₀/∂ν + β_i(y₀) ∋ g_i(0) in Γ_i, i = 1,2,

where f₁ ∈ L²(Ω), then (4.54) has a unique solution y ∈ W^{1,2}([0,T];H¹(Ω)) ∩ W^{1,∞}([0,T];L²(Ω)). If f₀ ∈ L²(Q), g_i ∈ L²(Σ_i) and y₀ ∈ H¹(Ω) satisfies the condition

  j_i(y₀) ∈ L¹(Γ_i), i = 1,2,   (4.56)

then by Corollary 4.1 it follows that (4.54) has a unique weak solution y ∈ C([0,T];L²(Ω)) ∩ L²(0,T;H¹(Ω)). In Proposition 4.1 we give a more refined result obtained through a most useful approach. We shall use the notation

  W(0,T;H¹(Ω)) = {y ∈ L²(0,T;H¹(Ω)); y' ∈ L²(0,T;(H¹(Ω))')}

and recall (see for instance [55]) that W(0,T;H¹(Ω)) ⊂ C([0,T];L²(Ω)).
PROPOSITION 4.1 Let f₀ ∈ L²(Q), g_i ∈ L²(Σ_i), i = 1,2 and y₀ ∈ H¹(Ω) be such that j_i(y₀) ∈ L¹(Γ_i). Then (4.55) has a unique solution y ∈ W(0,T;H¹(Ω)), and there exists a positive constant C independent of g_i such that

  ‖y‖_{L²(0,T;H¹(Ω))} + ‖y'‖_{L²(0,T;(H¹(Ω))')} ≤ C(1 + Σ_{i=1}^2 ‖g_i‖_{L²(Σ_i)}).   (4.57)

Proof Without loss of generality we may assume that f₀ = 0. Indeed, the inhomogeneous equation (4.55) can be brought to this case if we replace g_i by g_i − ∂z/∂ν, where z ∈ L²(0,T;H₀¹(Ω) ∩ H²(Ω)) ∩ W^{1,2}([0,T];L²(Ω)) is the solution to the boundary value problem

  z_t − Δz = f₀ in Q
  z = 0 in Σ
  z(0) = 0 in Ω.
Consider the approximating equations

  y_t − Δy = 0 in Q
  ∂y/∂ν + β_i^ε(y) = g_i in Σ_i, i = 1,2   (4.58)
  y(x,0) = y₀(x), x ∈ Ω

where β_i^ε, i = 1,2 are defined by (3.45), i.e. by (4.59). As seen above, (4.58) can be equivalently written as

  y' + Ay + ∂φ^ε(y) = f a.e. t ∈ ]0,T[   (4.58)'
  y(0) = y₀

where φ^ε:H¹(Ω) → R is defined by

  φ^ε(y) = Σ_{i=1}^2 ∫_{Γ_i} j_i^ε(y)dσ, ∂j_i^ε = β_i^ε, i = 1,2.

Note that the operator ∂φ^ε:V → V' is given by

  (∂φ^ε(y),z) = Σ_{i=1}^2 ∫_{Γ_i} β_i^ε(y)z dσ ∀y,z ∈ H¹(Ω)   (4.60)

and therefore it is monotone and Lipschitzian. Then by Theorem 1.14 we infer that (4.58)' has a unique solution

  y_ε ∈ W(0,T;H¹(Ω)) ⊂ C([0,T];L²(Ω)).
To get a priori estimates we multiply (4.58) (or (4.58)') (where y = y_ε) by y_ε and integrate on [0,T] and Ω, respectively. After some calculation we get

  |y_ε(t)|² + ∫₀ᵀ ‖y_ε(t)‖²_{H¹(Ω)} dt ≤ C(1 + Σ_{i=1}^2 ‖g_i‖²_{L²(Σ_i)}), t ∈ [0,T],

because β_i^ε(r)r ≥ 0 for all r ∈ R. Next, we take the scalar product of (4.58) (where y = y_ε) with β_k^ε(y_ε), k = 1,2 and integrate on [0,t]. Since by (4.60)

  (∂φ^ε(y_ε), β_k^ε(y_ε)) = Σ_{i=1}^2 ∫_{Γ_i} β_i^ε(y_ε)β_k^ε(y_ε)dσ

and

  a(y_ε, β_k^ε(y_ε)) ≥ 0, (y'_ε, β_k^ε(y_ε)) = (d/dt) ∫_Ω j_k^ε(y_ε)dx a.e. t ∈ ]0,T[,

we obtain the corresponding estimate. In as much as β_i^ε(r)β_k^ε(r) ≥ 0 for all r ∈ R and i,k = 1,2, this yields

  Σ_{i=1}^2 ‖β_i^ε(y_ε)‖²_{L²(Σ_i)} ≤ C(1 + Σ_{i=1}^2 ‖g_i‖²_{L²(Σ_i)}).   (4.61)

Hence {∂φ^ε(y_ε)} is bounded in L²(0,T;(H¹(Ω))')   (4.62)
and

  ‖y'_ε‖²_{L²(0,T;(H¹(Ω))')} ≤ C(1 + Σ_{i=1}^2 ‖g_i‖²_{L²(Σ_i)}).   (4.63)

Now, once again using (4.58) (or (4.58)'), we estimate the difference of two solutions; the latter yields

  |y_ε(t) − y_λ(t)|₂² + ‖y_ε − y_λ‖²_{L²(0,T;H¹(Ω))} ≤ C(ε + λ).

Hence there exists y ∈ W(0,T;H¹(Ω)) such that for ε → 0

  y_ε → y strongly in L²(0,T;H¹(Ω)) ∩ C([0,T];L²(Ω))
  y'_ε → y' weakly in L²(0,T;(H¹(Ω))')
  β_i^ε(y_ε) → ξ_i weakly in L²(Σ_i), i = 1,2.

Since {y_ε} is compact in L²(Σ), on a subsequence we have

  y_ε → y a.e. in Σ for ε → 0.

This implies that ξ_i ∈ β_i(y) a.e. in Σ_i, i = 1,2 (see Theorem 1.2 part (vii)), and by (4.60) the weak limit of ∂φ^ε(y_ε) belongs to ∂φ(y).
Thus, letting ε tend to zero in (4.58)' where y = y_ε, we see that y is a solution to (4.54). The uniqueness is immediate, and estimate (4.57) follows by (4.63).

EXAMPLE 4.4 Consider the boundary value problem

  (β(y(x,t)))_t − Δy(x,t) ∋ f(x,t), (x,t) ∈ Q
  y(x,0) = y₀(x), x ∈ Ω   (4.64)
  y = 0 in Σ
where β is a maximal monotone graph in R × R such that 0 ∈ β(0), and f ∈ L²(Q), y₀ ∈ L²(Ω) are given functions. Equation (4.64) models a large class of free boundary problems and in particular the classical two-phase Stefan problem (see for instance [33], [40])

  θ_t(x,t) − a₁Δθ(x,t) = f(x,t) for x ∈ Ω⁺(t)
  θ_t(x,t) − a₂Δθ(x,t) = f(x,t) for x ∈ Ω⁻(t)   (4.65)
  θ = 0 in Γ(t)
  a₁ ∂θ⁺/∂n − a₂ ∂θ⁻/∂n = ρV·n in Γ(t)

where Ω⁺(t) = {x ∈ Ω; θ(x,t) > 0} is the liquid phase, Ω⁻(t) = {x ∈ Ω; θ(x,t) < 0} is the solid phase, and Γ(t) is the free surface separating these two phases; n is the normal vector to Γ(t) and V is the speed of Γ(t) in R³. Problem (4.65) can be written in the form (4.64) (the enthalpy formulation) where (see for instance [50], Chapter 2, Section 3.3)

  β(r) = a₁⁻¹r if r > 0, β(0) = [−ρ,0], β(r) = a₂⁻¹r − ρ if r < 0   (4.66)

and

  y = a₁θ if θ > 0, y = a₂θ if θ < 0.
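An enthalpy scheme for (4.64) works with the graph (4.66) and its single-valued inverse, which recovers the temperature-like variable from the enthalpy. A minimal sketch with illustrative constants a₁, a₂, ρ (not from the text):

```python
# The enthalpy graph (4.66) and the single-valued inverse a numerical
# enthalpy scheme actually evaluates.  A1, A2, RHO are illustrative.

A1, A2, RHO = 2.0, 1.0, 1.5

def beta(y):
    """Single-valued branch of (4.66): enthalpy from the variable y."""
    if y > 0:
        return y / A1
    if y < 0:
        return y / A2 - RHO
    return 0.0

def beta_inv(z):
    """Recover y from the enthalpy z; flat on the phase-change interval."""
    if z > 0:
        return A1 * z
    if z >= -RHO:
        return 0.0           # z in beta(0) = [-rho, 0]: mushy zone
    return A2 * (z + RHO)
```

In a scheme for (4.64) one advances the enthalpy by z ← z + dt·(Δ_h β⁻¹(z) + f); the variable y = β⁻¹(z) stays at 0 while z crosses the latent-heat interval [−ρ, 0].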
To write (4.64) as a nonlinear evolution equation we consider the operator A:H⁻¹(Ω) → H⁻¹(Ω),

  Az = −Δw, w ∈ β⁻¹(z) a.e. in Ω, for z ∈ D(A)
  D(A) = {z ∈ L¹(Ω) ∩ H⁻¹(Ω); ∃w ∈ H₀¹(Ω) such that w(x) ∈ β⁻¹(z(x)) a.e. x ∈ Ω}.

By substituting z = β(y) we may represent (4.64) as

  z'(t) + Az(t) ∋ f(t) a.e. t ∈ ]0,T[   (4.67)
  z(0) = z₀

where z:[0,T] → H⁻¹(Ω) and f(t) = f(·,t).

LEMMA 4.1 (Brezis [19]) Assume that β is bounded on bounded subsets. Then the operator A is maximal monotone in H⁻¹(Ω) × H⁻¹(Ω). More precisely, A = ∂φ, where φ:H⁻¹(Ω) → R̄ is defined by

  φ(y) = ∫_Ω j(y(x))dx if y ∈ L¹(Ω) ∩ H⁻¹(Ω), φ(y) = +∞ otherwise,

where j:R → R̄ is a l.s.c. convex function such that j(0) = 0 and ∂j = β⁻¹.

The proof, which is quite technical, can be found in [19]. Then by virtue of Theorem 1.13 we derive from this lemma that, for every f ∈ L²(0,T;H⁻¹(Ω)) and all z₀ ∈ L¹(Ω) ∩ H⁻¹(Ω) such that j(z₀) ∈ L¹(Ω), (4.67) has a unique solution z ∈ W^{1,2}([0,T];H⁻¹(Ω)). Taking into account the definition of A, we get the following existence result:

PROPOSITION 4.2 Assume that β is a maximal monotone graph in R × R which is bounded on bounded subsets. Let f ∈ L²(0,T;H⁻¹(Ω)) and y₀ ∈ L²(Ω) be such that z₀ ∈ β(y₀) a.e. in Ω for some z₀ ∈ L²(Ω). Then there exists a unique
pair (y,z) ∈ L²(0,T;H₀¹(Ω)) × W^{1,2}([0,T];H⁻¹(Ω)) which satisfies the equations

  dz/dt − Δy = f a.e. t ∈ ]0,T[
  z(x,t) ∈ β(y(x,t)) a.e. (x,t) ∈ Q   (4.64)'
  y(x,0) = y₀(x), x ∈ Ω.

In other words, y is a solution to (4.64). Under supplementary assumptions we have a more refined result.

PROPOSITION 4.3 In Proposition 4.2 assume in addition that

  0 ∈ β(0), (β(r) − β(s))(r − s) ≥ ω(r − s)² ∀r,s ∈ R

for some ω > 0, and f ∈ L²(0,T;L²(Ω)), y₀ ∈ H₀¹(Ω). Then y ∈ W^{1,2}([0,T];L²(Ω)) ∩ L^∞(0,T;H₀¹(Ω)).

Proof We shall use a direct approach which consists in approximating (4.64) by

  (d/dt)β(y_λ(t)) + A_λy_λ(t) ∋ f(t) a.e. t ∈ ]0,T[
  y_λ(0) = y₀ = β⁻¹(z₀)

where A_λ = λ⁻¹(I − (I + λA_H)⁻¹) and A_Hy = −Δy for y ∈ D(A_H) = H₀¹(Ω) ∩ H²(Ω), λ > 0. In as much as β⁻¹ and A_λ are Lipschitz, it is readily seen that the above problem has, for every λ > 0, a unique solution y_λ ∈ W^{1,2}([0,T];L²(Ω)). Now multiply this equation (scalarly in L²(Ω)) by (dy_λ/dt)(t) and β(y_λ(t)), respectively. After some manipulation we obtain estimates, uniform in λ > 0, for y_λ, z_λ = β(y_λ) and w_λ = (I + λA_H)⁻¹y_λ. Recalling that (I + λA_H)⁻¹ is nonexpansive in L²(Ω), it follows by the Arzelà–Ascoli theorem that on a certain subsequence λ → 0

  w_λ → y strongly in C([0,T];L²(Ω)) and weak star in L^∞(0,T;H₀¹(Ω))
  y_λ → y weakly in W^{1,2}([0,T];L²(Ω)) and strongly in C([0,T];L²(Ω))
  z_λ → z strongly in C([0,T];H⁻¹(Ω)) and weak star in L^∞(0,T;L²(Ω))
  dz_λ/dt → dz/dt weakly in L²(0,T;H⁻¹(Ω)).

Then, going to the limit in the approximating equations, we see that y,z satisfy (4.64)' as claimed.

REMARK 4.2 For other significant results concerning (4.64) we refer the reader to [34] and the references given there.

§4.3 The one-phase Stefan problem

We shall study here a free boundary problem modelling the melting of a body of ice Ω ⊂ R³ maintained at 0°C in contact with a region of water. Assume that the boundary Γ of Ω is smooth and is composed of three disjoint parts Γ₁, Γ₂, Γ₃ such that Γ₁ and Γ₂ have no common boundary and the measure of Γ₁ is positive. Three possible configurations are depicted in Figure 4.2; these occur according to whether some of the fixed boundaries Γ₁ and Γ₂ are present or not.

Figure 4.2
We shall assume that Γ₁ is in contact with a heating medium with temperature θ₁, that the temperature is zero on Γ₂, and that the boundary Γ₃ is insulated. If t = ℓ(x) is the equation of the moving boundary Γ_t separating the liquid phase (water) and the solid phase (ice), then the temperature distribution θ satisfies the equations (we have normalized the constants)

  θ_t(x,t) − Δθ(x,t) = 0 in [(x,t) ∈ Q; ℓ(x) < t < T]
  θ(x,t) = 0 in [(x,t) ∈ Q; t < ℓ(x)]   (4.68)

  ∂θ/∂ν = −α(θ − θ₁) in Σ₁ = Γ₁ × ]0,T[
  θ = 0 in Σ₂ = Γ₂ × ]0,T[, ∂θ/∂ν = 0 in Σ₃ = Γ₃ × ]0,T[   (4.69)

  θ(x,0) = 0, x ∈ Ω   (4.70)

where θ₁ ∈ L²(Σ₁) and α, λ⁰ are positive constants (λ⁰ being the latent heat entering the Stefan condition on Γ_t). If Γ₁ is maintained at the temperature θ₁ > 0, then instead of (4.69) we should consider the following boundary value condition:

  θ = θ₁ in Σ₁.   (4.71)

We shall assume that, at t = 0, Γ_t = Γ₁, i.e. Ω = {x; ℓ(x) > 0}. We shall present a device due to G. Duvaut [29] to reduce problem (4.68) to (4.70) (or, for θ₁ > 0, problem (4.68), (4.70), (4.71)) to a parabolic variational inequality of the type encountered above. To this end, we define the function

  z(x,t) = θ(x,t)χ(x,t), (x,t) ∈ Q

where χ is the characteristic function of the aqueous region, i.e.

  χ(x,t) = 1 if ℓ(x) < t, χ(x,t) = 0 if ℓ(x) > t.

LEMMA 4.2 If z ∈ H¹(Q) and ℓ ∈ H¹(Ω) then

  z_t − Δz = −λ⁰χ_t in D'(Q),

i.e., in the sense of distributions.
Proof Let φ ∈ C₀^∞(Q) be arbitrary but fixed. We have

  (z_t − Δz)(φ) = −z(φ_t) + ∫_Q ∇_x z·∇_x φ dxdt
   = −∫_Ω dx ∫_{ℓ(x)}^T θ(x,t)φ_t(x,t)dt + ∫_Ω dx ∫_{ℓ(x)}^T ∇_x θ(x,t)·∇_x φ(x,t)dt
   = ∫_Ω dx (∫_{ℓ(x)}^T θ_t(x,t)φ(x,t)dt + ∇ℓ(x)·∇_x θ(x,ℓ(x))φ(x,ℓ(x)))
    − ∫_Ω dx ∫_{ℓ(x)}^T Δθ(x,t)φ(x,t)dt.

Then by (4.68) and the Stokes formula we get

  (z_t − Δz)(φ) = −λ⁰ ∫_Ω φ(x,ℓ(x))dx = λ⁰ ∫_Ω dx ∫_{ℓ(x)}^T φ_t(x,t)dt = −λ⁰χ_t(φ)

as claimed.

We set

  y(x,t) = ∫₀ᵗ z(x,s)ds for (x,t) ∈ Q.
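Duvaut's transformation can be checked on manufactured data: take ℓ(x) = x and θ(x,t) = max(t − x, 0) (illustrative profiles, not a solution of (4.68)); then y(x,t) = ∫₀ᵗ θ(x,s)χ(x,s)ds = max(t − x, 0)²/2, which vanishes for t ≤ ℓ(x) and is non-negative, the structure asserted in Lemma 4.3 below.

```python
# Manufactured check of Duvaut's transformation: moving boundary l(x) = x
# and theta(x,t) = max(t - x, 0) in the water region (illustrative data,
# not a solution of (4.68)).  Then y(x,t) = int_0^t theta(x,s) ds
# equals max(t - x, 0)^2 / 2: zero in the ice, non-negative everywhere.

def theta(x, t):
    return max(t - x, 0.0)

def freezing_index(x, t, n=1000):
    """Midpoint-rule approximation of y(x,t) = int_0^t theta(x,s) ds."""
    h = t / n
    return h * sum(theta(x, (k + 0.5) * h) for k in range(n))
```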
LEMMA 4.3 The function y satisfies the obstacle problem

  y ≥ 0 a.e. in Q, y = 0 a.e. in [(x,t) ∈ Q; ℓ(x) ≥ t]
  y_t − Δy ≥ −λ⁰ a.e. in Q   (4.72)
  (y_t − Δy + λ⁰)y = 0 a.e. in Q   (4.73)
  ∂y/∂ν = −α(y − θ̃₁) in Σ₁, ∂y/∂ν = 0 in Σ₃, y = 0 in Σ₂   (4.74)
  y(x,0) = 0 for x ∈ Ω

where θ̃₁(σ,t) = ∫₀ᵗ θ₁(σ,s)ds.
Proof Equations (4.72), (4.74) are straightforward. By Lemma 4.2 and the definition of y, we have

  y_t − Δy = −λ⁰χ in D'(Q).

Hence y_t − Δy + λ⁰ = λ⁰(1 − χ) ≥ 0; since y = 0 in [ℓ(x) > t] and 1 − χ = 0 in [ℓ(x) < t], we have

  (y_t − Δy + λ⁰)y = 0 in D'(Q)
and (4.73) follows, thereby completing the proof.

We set H = L²(Ω), V = {y ∈ H¹(Ω); y = 0 in Γ₂}, and define A:V → V', f ∈ L²(0,T;V') and K ⊂ V by

  (Ay,z) = ∫_Ω ∇y(x)·∇z(x)dx + α ∫_{Γ₁} y(σ)z(σ)dσ, y,z ∈ V   (4.75)

  f(z) = −λ⁰ ∫_Ω z(x)dx + α ∫_{Γ₁} θ̃₁(σ,t)z(σ)dσ ∀z ∈ V

  K = {y ∈ V; y ≥ 0 a.e. in Ω}.
The variational formulation of problem (4.72) to (4.74) is:

  (y'(t) + Ay(t), y(t) − z) ≤ (f(t), y(t) − z) ∀z ∈ K, t ∈ [0,T]
  y(t) ∈ K for all t ∈ [0,T]; y(0) = 0.   (4.76)

Indeed, if y ∈ W^{1,2}([0,T];V) is a solution to (4.76), then for z = y + φ, where φ ∈ C₀^∞(Ω), φ ≥ 0, we see by (4.76) and the definition of A, f that

  (y_t − Δy + λ⁰, φ) ≥ 0.

(Here (·,·) is the pairing between V and V'.) Hence y_t − Δy + λ⁰ ≥ 0 in the sense of distributions. Now for z = y(t) ± ρφ, where φ ∈ C₀^∞(Ω⁺(t)), Ω⁺(t) = [x ∈ Ω; y(x,t) > 0] and ρ > 0 is sufficiently small, (4.76) yields (formally), via Green's formula,

  (y_t(t) − Δy(t) + λ⁰, φ) + ∫_{Γ₁} (∂y/∂ν + α(y − θ̃₁))φ dσ + ∫_{Γ₃} (∂y/∂ν)φ dσ = 0.

Hence y satisfies in the sense of distributions the equations (4.72) to (4.74).

As regards existence, observe that the hypotheses of Theorem 4.2 are satisfied, and therefore we have

PROPOSITION 4.4 The variational inequality (4.76) has a unique solution y ∈ W^{1,∞}([0,T];H) ∩ W^{1,2}([0,T];V).
(For more refined regularity results refer to [39].) For the applications we have in mind we shall now present a direct and constructive approach to (4.76). Namely, consider the problem

  y_t − Δy + β^ε(y) = −λ⁰ in Q
  y(x,0) = 0, x ∈ Ω   (4.77)
  ∂y/∂ν + αy = v in Σ₁, y = 0 in Σ₂, ∂y/∂ν = 0 in Σ₃

where v ∈ W^{1,2}([0,T];L²(Γ₁)), v(0) = 0, and β^ε is defined by (3.79). In terms of A and f, (4.77) can be rewritten as

  y' + Ay + β^ε(y) = f a.e. t ∈ ]0,T[   (4.77)'
  y(0) = 0
and by Theorem 4.1 this has a unique solution y_ε ∈ W^{1,∞}([0,T];H) ∩ W^{1,2}([0,T];V).

PROPOSITION 4.5 For ε → 0, we have

  y_ε → y strongly in C([0,T];H) and weakly in W^{1,∞}([0,T];H) ∩ W^{1,2}([0,T];V)
  β^ε(y_ε) → f − Ay − y' weakly in L²(0,T;V')

where y is the solution to (4.76). Moreover, we have the estimate

  ‖y‖_{W^{1,∞}([0,T];H) ∩ W^{1,2}([0,T];V)} ≤ C(1 + ‖v'‖_{L²(Σ₁)})   (4.78)

and the map v → y is weakly-strongly continuous from W^{1,2}([0,T];L²(Γ₁)) to L²(Q).
Proof First we multiply (4.77)' (where y = y_ε) by y'_ε and integrate on Ω × ]0,t[. We get

  ½|∇y_ε(t)|² + ∫₀ᵗ |y'_ε(s)|² ds + ∫₀ᵗ ∫_{Γ₁} (αy_ε − v)y'_ε dσ ds + ∫_Ω j^ε(y_ε(t))dx ≤ C

where j^ε is a primitive of β^ε. Integrating by parts we obtain the estimate

  |y_ε(t)|² + ‖y_ε‖²_{L²(0,T;V)} + ‖y_ε‖²_{L^∞(0,T;L²(Γ₁))} ≤ C(1 + ‖v‖²_{L²(Σ₁)}).

Next we differentiate (4.77) with respect to t and multiply the result by y'_ε. After some manipulation we get

  |y'_ε(t)|² + ‖y'_ε‖²_{L²(0,T;V)} ≤ C(1 + ‖v'‖²_{L²(Σ₁)}).

Thus {y_ε} is bounded in W^{1,2}([0,T];V) ∩ W^{1,∞}([0,T];H). Selecting a subsequence, if necessary, we may assume (by the Arzelà–Ascoli theorem) that

  y_ε → y strongly in C([0,T];H) and weakly in L²(0,T;V)
  y'_ε → y' weakly in L²(0,T;V) and weak star in L^∞(0,T;H)
  Ay_ε → Ay weakly in L²(0,T;V')
  β^ε(y_ε) → η weakly in L²(0,T;V').

Letting ε tend to zero in the inequality

  (β^ε(y_ε), y_ε − z) ≥ ∫_Q j^ε(y_ε)dxdt − ∫_Q j^ε(z)dxdt ∀z ∈ L²(0,T;V), z(t) ∈ K a.e. t ∈ ]0,T[,

we see that

  ∫₀ᵀ (η(t), y(t) − z(t))dt ≥ 0 ∀z ∈ L²(0,T;V), z(t) ∈ K a.e. t ∈ ]0,T[
and therefore (η(t), y(t) − z) ≥ 0 ∀z ∈ K. We may conclude therefore that y is the solution to variational inequality (4.76). Estimate (4.78) and the weak-strong continuity of the map v → y are immediate.

Now we shall study problem (4.72) to (4.74) in a more general context:

  y_t − Δy ≥ f₀, y ≥ 0 in Q; y_t − Δy = f₀ in [y > 0]
  y(x,0) = y₀(x), x ∈ Ω   (4.79)
  ∂y/∂ν + αy = v in Σ₁, y = 0 in Σ₂, ∂y/∂ν = 0 in Σ₃.

Equivalently,

  (y'(t) + Ay(t) − f(t), y(t) − z) ≤ 0 a.e. t ∈ ]0,T[, ∀z ∈ K   (4.80)
  y(0) = y₀, y(t) ∈ K ∀t ∈ [0,T]

where α is a non-negative constant and

  f₀ ∈ W^{1,2}([0,T];L²(Ω)), y₀ ∈ H¹(Ω), y₀ ≥ 0 a.e. in Ω   (4.81)
  v ∈ L²(Σ₁), v ≥ 0 a.e. in Σ₁.   (4.82)

The operator A ∈ L(V,V') is defined by (4.75), and f ∈ L²(0,T;V') by

  (f(t),z) = ∫_Ω f₀(x,t)z(x)dx + ∫_{Γ₁} z(σ)v(σ,t)dσ ∀z ∈ V.   (4.83)

The space V and the subset K are defined as above. According to Theorem 4.2, if v ∈ W^{1,2}([0,T];L²(Γ₁)) and αy₀ − v(0) = 0 in Γ₁, then (4.80) has a unique solution y ∈ W^{1,2}([0,T];V) ∩ W^{1,∞}([0,T];H). Here we shall prove a different existence result valid under assumptions (4.81), (4.82). To this end, consider system (4.77) with the initial value y₀, namely

  y' + Ay + β^ε(y) = f a.e. t ∈ ]0,T[   (4.84)
  y(0) = y₀.
By Theorem 1.14, (4.84) has a unique solution y_ε ∈ L²(0,T;V) with y'_ε ∈ L²(0,T;V').
THEOREM 4.6 Under assumptions (4.81), (4.82) there exists a unique solution y ∈ L²(0,T;V) ∩ W^{1,2}([0,T];V') ⊂ C([0,T];L²(Ω)) to (4.79). Moreover, one has

  y_ε → y strongly in C([0,T];L²(Ω)) ∩ L²(0,T;V)   (4.85)
  β^ε(y_ε) → η = f₀ − y' + Δy weakly in L²(Q)   (4.86)

  ‖y‖_{L²(0,T;V) ∩ W^{1,2}([0,T];V')} ≤ C(1 + ‖v‖_{L²(Σ₁)})   (4.87)

and if v_n → v weakly in L²(Σ₁) then

  y_n → y strongly in L²(Q) and weakly in L²(0,T;V)   (4.88)

where y_n is the solution to (4.79) with v = v_n.
Proof Multiplying (4.84) by y_ε and β^ε(y_ε) in turn we get, after some manipulation,

  |y_ε(t)|² + ‖y_ε‖²_{L²(0,T;V)} ≤ C(‖f₀‖²_{L²(Q)} + |y₀|² + ‖v‖²_{L²(Σ₁)})   (4.89)

  ∫_Q |β^ε(y_ε)|² dxdt ≤ ∫_Ω j^ε(y₀)dx + ∫_{Σ₁} β^ε(y_ε)(αy_ε − v)dσdt + ∫_Q |f₀||β^ε(y_ε)|dxdt.   (4.90)

Noting that j^ε(y₀) ≤ C and (β^ε(y_ε)v)⁺ → 0 strongly in L¹(Σ₁) (because (β^ε(y_ε))⁺ ≤ 2ε and v ≥ 0), it follows that

  ∫_Q |β^ε(y_ε)|² dxdt ≤ C for all ε > 0.   (4.91)

Finally, again by (4.84) we have

  ½ (d/dt)|y_ε(t) − y_λ(t)|₂² + ∫_Ω |∇(y_ε(t) − y_λ(t))|² dx + α ∫_{Γ₁} |y_ε − y_λ|² dσ + ∫_Ω (β^ε(y_ε) − β^λ(y_λ))(y_ε − y_λ)dx = 0.   (4.92)

Together with estimate (4.91), this implies that {y_ε} is a Cauchy sequence in C([0,T];L²(Ω)) ∩ L²(0,T;V), and (4.85) follows. By estimate (4.91) we conclude that on a subsequence

  β^ε(y_ε) → η weakly in L²(Q).   (4.93)

As observed earlier, this implies that

  η(x,t)y(x,t) = 0, η(x,t) ≤ 0 a.e. (x,t) ∈ Q.   (4.94)

Thus, letting ε tend to zero in (4.84), we conclude that y is a solution to the variational inequality (4.80). If {v_n} is weakly convergent in L²(Σ₁), then by (4.87) we see that {y_n} is weakly compact in L²(0,T;V) ∩ W^{1,2}([0,T];V') and compact in L²(0,T;L²(Ω)) (see [50], Chapter 1, Theorem 5.1). Thus there exists a subsequence (again denoted y_n) such that

  y_n → y weakly in L²(0,T;V) and strongly in L²(Q)
  y'_n → y' weakly in L²(0,T;V').

Then, going to the limit in the inequality

  ∫₀ᵀ (y'_n(t) + Ay_n(t) − f_n(t), y_n(t) − z(t))dt ≤ 0 ∀z ∈ L²(0,T;V), z(t) ∈ K a.e. t ∈ [0,T],

we see that y is a solution to (4.80).

The next result concerns problem (4.79) in the case where Γ₃ = ∅ and
  y_t − Δy ≥ f₀, y ≥ 0 in Q; y_t − Δy = f₀ in [y > 0]
  y(x,0) = y₀(x), x ∈ Ω   (4.95)
  y = v in Σ₁, y = 0 in Σ₂

or, in variational form,

  (y'(t), y(t) − z) + ∫_Ω ∇_x y(x,t)·∇_x(y(x,t) − z(x))dx ≤ ∫_Ω f₀(x,t)(y(x,t) − z(x))dx a.e. t ∈ ]0,T[, ∀z ∈ K^v   (4.96)
  y(t) ∈ K^v for t ∈ [0,T], y(0) = y₀.

Here Γ = Γ₁ ∪ Γ₂ (see Figure 4.2) and

  K^v = {z ∈ H¹(Ω); z = v in Γ₁, z = 0 in Γ₂, z ≥ 0 in Ω}   (4.97)

  f₀ ∈ L^q(Q) where q > 2   (4.98)

  y₀ ∈ W_q^{2−(2/q)}(Ω), y₀ ≥ 0 a.e. in Ω, y₀ = 0 in Γ₂   (4.99)

  v ∈ W_q^{2−(1/q),1−(1/2q)}(Σ₁), v ≥ 0, v(σ,0) = y₀(σ) for σ ∈ Γ₁.   (4.100)

Here W_q^{2−(2/q)}(Ω) and W_q^{2−(1/q),1−(1/2q)}(Σ₁) are the usual Sobolev spaces on Ω
and Σ₁, respectively. Consider the approximating equation

  y_t − Δy + β^ε(y) = f₀ in Q
  y(x,0) = y₀(x), x ∈ Ω   (4.101)
  y = v in Σ₁, y = 0 in Σ₂

where β^ε is given by (3.79).

THEOREM 4.7 For every ε > 0, (4.101) has a unique solution y_ε ∈ W_q^{2,1}(Q), and for ε → 0

  y_ε → y weakly in W_q^{2,1}(Q)   (4.102)

where y ∈ W_q^{2,1}(Q) is the unique solution to variational inequality (4.96). Moreover, the following estimates hold:

  ‖y_ε‖_{W_q^{2,1}(Q)} + ‖β^ε(y_ε)‖_{L^q(Q)} ≤ C(1 + ‖v‖_{W_q^{2−(1/q),1−(1/2q)}(Σ₁)})   (4.103)

and if v_ε → v weakly in W_q^{2−(1/q),1−(1/2q)}(Σ₁) then y^ε → y weakly in W_q^{2,1}(Q). Here y^ε is the solution to (4.101) where v = v_ε.
Proof  According to a classical result (see for instance [49], p. 388) there exists a unique solution φ ∈ W_q^{2,1}(Q) to the boundary value problem

  φ_t − Δφ = 0 in Q
  φ(x,0) = y₀(x) for x ∈ Ω                                      (4.104)
  φ = v in Σ₁,  φ = 0 in Σ₂,

and

  ‖φ‖_{W_q^{2,1}(Q)} ≤ C(1 + ‖v‖_{W_q^{2−1/q,1−1/(2q)}(Σ₁)}).    (4.105)

Now let z^ε ∈ W_q^{2,1}(Q) be the solution to the boundary value problem

  (z^ε)_t − Δz^ε + β^ε(z^ε + φ) = f₀ a.e. in Q
  z^ε(x,0) = 0, x ∈ Ω                                           (4.106)
  z^ε = 0 in Σ.

Clearly, y^ε = z^ε + φ ∈ W_q^{2,1}(Q) is the solution to (4.101). Recall that

  |β^ε(y) − β_ε(y)| ≤ 2ε for all y ∈ R, ε > 0,                   (4.107)

where β_ε(y) = −ε⁻¹y⁻. Hence

  β_ε(y^ε) = 0 in Σ                                             (4.108)

because v ≥ 0 in Σ.
Then, multiplying (4.106) by β_ε(y^ε)|β_ε(y^ε)|^{q−2} and integrating on Q, it follows by Green's formula and (4.107), (4.108) that

  q⁻¹ ∫_Ω |β_ε(y^ε(x,T))|^q dx + ∫_Q |β_ε(y^ε)|^q dx dt ≤ C.

This yields (see (4.105))

  ‖β^ε(y^ε)‖_{L^q(Q)} ≤ C,

whereas by (4.106) we infer that

  ‖y^ε‖_{W_q^{2,1}(Q)} ≤ C(1 + ‖v‖_{W_q^{2−1/q,1−1/(2q)}(Σ₁)}) for all ε > 0.
Thus there exist a sequence (again denoted y^ε) and functions y ∈ W_q^{2,1}(Q), η₀ ∈ L^q(Q) such that

  y^ε → y weakly in W_q^{2,1}(Q) and strongly in L²(0,T;H¹(Ω))
  β^ε(y^ε) → η₀ weakly in L^q(Q).                               (4.109)

This implies by a standard device that

  η₀(x,t)y(x,t) = 0,  y(x,t) ≥ 0 a.e. (x,t) ∈ Q.

Finally, by (4.107), (4.108) and (4.109) we see that y⁻ = 0 a.e. in Q and therefore y(t) ∈ K_v. Then, letting ε tend to zero in (4.101) with y = y^ε, we conclude that y satisfies the equations

  (y_t − Δy − f₀)y = 0,  y_t − Δy − f₀ ≥ 0 a.e. in Q
  y(x,0) = y₀(x), x ∈ Ω                                         (4.110)
  y = v a.e. in Σ₁,  y = 0 a.e. in Σ₂.

In particular, we see that y satisfies variational inequality (4.96). The uniqueness is immediate. Now if v_ε → v₁ weakly in W_q^{2−1/q,1−1/(2q)}(Σ₁), then by estimate (4.103) we may assume that y^ε → y₁ weakly in W_q^{2,1}(Q) and β^ε(y^ε) → η₁ weakly in L^q(Q), and by the same method as above we conclude that y₁ is the solution to (4.96) where v = v₁.
REMARK 4.3  By estimate (4.103) it follows that

  ‖y‖_{W_q^{2,1}(Q)} ≤ C(1 + ‖v‖_{W_q^{2−1/q,1−1/(2q)}(Σ₁)}).    (4.111)

Then, arguing as in the proof of Theorem 4.6, it follows that the map v → y is compact from W_q^{2−1/q,1−1/(2q)}(Σ₁) to L²(0,T;H¹(Ω)) and weakly compact into W_q^{2,1}(Q).
REMARK 4.4  From Theorems 4.6 and 4.7 we may derive existence results for the oxygen consumption problem (4.48), (4.49), (4.50) with boundary value condition (4.50)′ or (4.50)″. On the other hand, Proposition 4.3 and Theorem 4.6 remain valid for boundary value conditions of the form

  ∂y/∂ν + αy = u in Σ₁,  y = 0 in Σ₂,  ∂y/∂ν = 0 in Σ₃.          (4.112)

§4.4  A quasisteady variational inequality
Here we shall describe a free boundary problem which occurs in an electrochemical machining process. The metal piece to be shaped is placed as the anode in an electrolytic cell. When a potential is applied across the cell the metal will be dissolved from the surface of the anode, which is a moving boundary (see Figure 4.3).
Figure 4.3 (the electrolytic cell: the electrolyte region Ω⁺(t) between the anode and the cathode surface Γ)

We denote by Ω the region inside the cathode surface Γ and by Γ_t = [x ∈ Ω; t = ℓ(x)] the anode surface at time t. The region occupied by the electrolyte at time t is denoted Ω⁺(t). If u(t) (> 0) denotes the potential difference between the electrodes at time t, then the potential field φ = φ(x,t) satisfies the following equations ([32], [58]):

  Δφ(x,t) = 0 for x ∈ Ω⁺(t) = [x ∈ Ω; ℓ(x) < t]
  φ(x,t) = 0 for x ∈ Γ, t ∈ [0,T]                               (4.113)
  φ(x,t) = u(t) for x ∈ Γ_t
  ∇ₓφ(x,t)·∇ℓ(x) = λ₀ for x ∈ Γ_t

where λ₀ is a positive constant. Problem (4.113) is similar to a one-phase Stefan problem describing the freezing of some substance, and can be reduced to a variational inequality by the same method as that used in Section 4.3. To this end we extend φ to all of Ω by φ(x,t) = u(t) in Ω∖Ω⁺(t) and consider the Baiocchi transformation

  y(x,t) = ∫_{ℓ(x)}^t (u(s) − φ(x,s)) ds if x ∈ Ω∖Ω⁺(0), t ∈ [0,T]
  y(x,t) = ∫₀^t (u(s) − φ(x,s)) ds if x ∈ Ω⁺(0), t ∈ [0,T].

If we extend ℓ to all of Ω by ℓ(x) = 0 for x ∈ Ω⁺(0), we may write this as

  y(x,t) = ∫_{ℓ(x)}^t (u(s) − φ(x,s)) ds for x ∈ Ω, t ∈ [0,T].   (4.114)
If φ and ℓ are sufficiently smooth then for every test function ζ ∈ C₀^∞(Ω) we have (see the proof of Lemma 4.2)

  ∫_{Ω⁺(t)} (div(ζ(x) ∫_{ℓ(x)}^t ∇ₓφ(x,s) ds) − ∫_{ℓ(x)}^t Δφ(x,s) ds ζ(x)) dx
    − ∫_{Ω⁺(t)∖Ω⁺(0)} ∇ℓ(x)·∇ₓφ(x,ℓ(x))ζ(x) dx = ∫_Ω f(x)ζ(x) dx,

where

  f(x) = −λ₀ if x ∈ Ω∖Ω⁺(0),  f(x) = 0 if x ∈ Ω⁺(0).

Thus y satisfies in the sense of distributions the equation (4.115)
where χ(t) is the characteristic function of Ω⁺(t), and the boundary conditions

  y(x,t) = v(t) = ∫₀^t u(s) ds for x ∈ Γ.                        (4.116)

On the other hand, since 0 ≤ φ(x,t) ≤ u(t) in Ω, we have

  y(x,t) ≥ 0 for x ∈ Ω.

In other words, for every t ∈ [0,T] the function y(t,·) satisfies in Ω the elliptic obstacle problem

  −Δy ≥ f,  y ≥ 0 in Ω                                          (4.117)
  −Δy = f in [x ∈ Ω; y(x,t) > 0],

along with the inhomogeneous boundary value conditions (4.116). As seen in Section 2.3, problem (4.116), (4.117) can be equivalently written as

  y(t) ∈ K(t),  a(y(t), y(t) − z) ≤ (f, y(t) − z)  ∀z ∈ K(t),    (4.118)

where a(y,z) = ∫_Ω ∇y·∇z dx for all y, z ∈ H¹(Ω) and

  K(t) = {y ∈ H¹(Ω); y ≥ 0 in Ω, y = v(t) in Γ}.

Here we shall study existence for more general functions f and v satisfying the conditions

  f ∈ C([0,T];L²(Ω)),  v ∈ C([0,T];H^{1/2}(Γ)) ∩ L²(0,T;H^{3/2}(Γ)),  v ≥ 0 a.e. in Σ = Γ×]0,T[.   (4.119)
To this end, proceeding as in the proof of Lemma 3.6, we consider the approximating problem

  −Δy + β^ε(y) = f(t) in Ω                                      (4.120)
  y = v(t) in Γ,

where β^ε has been defined by (3.79). For every ε > 0 and t ∈ [0,T] the boundary value problem

  −Δη = 0 in Ω,  η = v(t) in Γ                                  (4.121)

has a unique solution η ∈ L²(0,T;H²(Ω)) satisfying the estimate

  ‖η(t)‖_{H²(Ω)} ≤ C‖v(t)‖_{H^{3/2}(Γ)}.                         (4.122)

Then by condition (4.119) and the maximum principle we conclude that η ≥ 0 in Q = Ω×]0,T[, η ∈ C([0,T];H¹(Ω)).
Let z_ε ∈ L²(0,T;H²(Ω)) be the solution to

  −Δz_ε + β^ε(z_ε + η(t)) = f(t) in Ω                           (4.123)
  z_ε = 0 in Γ.

Arguing as in the proof of Lemma 3.6 we see that

  ‖z_ε(t)‖_{H²(Ω)} ≤ C(|f(t)|₂ + ‖η(t)‖_{H²(Ω)})                 (4.124)

and

  ‖z_ε(t+h) − z_ε(t)‖_{H¹(Ω)} ≤ C(|f(t+h) − f(t)|₂ + ‖η(t+h) − η(t)‖_{H¹(Ω)}) for all t, t+h ∈ [0,T].

Hence z_ε ∈ C([0,T];H¹(Ω)), and by (4.123), (4.124) we see that

  ‖z_ε(t) − z_λ(t)‖²_{H₀¹(Ω)} + ∫_Ω (β^ε(z_ε + η) − β^λ(z_λ + η))(z_ε − z_λ) dx ≤ 0,

whence ‖z_ε(t) − z_λ(t)‖²_{H₀¹(Ω)} ≤ C(ε + λ). We see that y_ε = z_ε + η is a solution to (4.120), and by (4.122), (4.124) it follows that y_ε ∈ L²(0,T;H²(Ω)) ∩ C([0,T];H¹(Ω)) and

  ‖y_ε(t)‖_{H²(Ω)} ≤ C(‖v(t)‖_{H^{3/2}(Γ)} + |f(t)|₂) a.e. t ∈ ]0,T[.   (4.125)
Thus there exists y ∈ L²(0,T;H²(Ω)) ∩ C([0,T];H¹(Ω)) such that

  y_ε → y weakly in L²(0,T;H²(Ω)) and strongly in C([0,T];H¹(Ω)).   (4.126)

Then by the same reasoning as in the proof of Lemma 3.6 we infer that y is a solution to (4.118). The uniqueness of y is immediate. To summarize, we have proved the following result:

THEOREM 4.8  Under assumptions (4.119) the quasisteady variational inequality (4.118) has a unique solution y ∈ L²(0,T;H²(Ω)) ∩ C([0,T];H¹(Ω)) given by (4.126). Moreover, the following estimate holds:

  ‖y(t)‖_{H²(Ω)} ≤ C(‖v(t)‖_{H^{3/2}(Γ)} + |f(t)|₂) a.e. t ∈ ]0,T[,   (4.127)

and

  y_ε → y weakly in L²(0,T;H²(Ω)) and strongly in C([0,T];H¹(Ω)),    (4.128)

where y is the solution to (4.118). The map v → y is continuous from C([0,T];H^{1/2}(Γ)) to C([0,T];H¹(Ω)).
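For fixed t, (4.120) is a penalized elliptic equation whose piecewise-linear penalty min(y,0)/ε can be resolved exactly by a primal active-set (semismooth Newton) iteration. The sketch below (an added illustration; the data f ≡ −8 and v = 0.5 at both endpoints of Ω = ]0,1[ are assumptions) computes one such quasisteady state:

```python
import numpy as np

n, v, eps = 99, 0.5, 1e-6
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Finite-difference matrix of -d^2/dx^2 with Dirichlet boundary conditions
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
rhs = -8.0 * np.ones(n)             # constant load f = -8
rhs[0] += v / h**2                  # fold y(0) = v and y(1) = v
rhs[-1] += v / h**2                 #   into the right-hand side

y = np.linalg.solve(A, rhs)         # unconstrained start: dips below 0
for _ in range(50):
    active = y < 0.0                # current penalty-active set
    M = A + np.diag(active / eps)   # penalty is linear on that set
    y_new = np.linalg.solve(M, rhs)
    done = np.array_equal(y_new < 0.0, active)
    y = y_new
    if done:
        break
```

Here the limit solution is y(x) = 4(x − 1/(2√2))² outside the contact set and 0 on it; the iteration typically settles the active set in a handful of linear solves.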
5  Optimal control of parabolic variational inequalities: distributed controls

This chapter is concerned with distributed optimal control problems governed by some classes of parabolic variational inequalities. The main emphasis is on the derivation of first-order necessary conditions (the Pontryagin maximum principle). The approach is modelled on the steady-state case developed in Chapter 3. Several other results, related to the dynamic programming method, to infinite time horizon control problems and to controllability, have been included.

§5.1  Formulation of the problem

First we will consider diffusion control processes governed by nonlinear parabolic equations of the form

  y′ + Ay + β(y − ψ) ∋ Bu + f a.e. in Q = Ω×]0,T[               (5.1)
  y(x,0) = y₀(x), x ∈ Ω,

and with the pay-off function

  G(y,u) = ∫₀ᵀ (g(t,y(t)) + h(u(t))) dt + φ₀(y(T)).              (5.2)

Here y′ is the strong derivative with respect to t of the function y: Q → R, regarded as a function of t from [0,T] to L²(Ω), and f ∈ L²(Q) is a given function. As for the operators A: D(A) ⊂ L²(Ω) → L²(Ω), B ∈ L(U, L²(Ω)) and the functions β: R → R, g: [0,T]×L²(Ω) → R, φ₀: L²(Ω) → R, h: U → R̄, their properties will be specified in hypotheses (i) to (vi) below. Throughout this chapter we shall denote by H the space L²(Ω) endowed with the usual scalar product (·,·) and norm |·|₂. We are given a real Hilbert space V such that V is dense in H and

  V ⊂ H ⊂ V′ algebraically and topologically.

Further we shall assume that:
(i) The injection of V into H is compact. We shall denote by ‖·‖ the norm of V and by (·,·) the pairing between V and V′. The norm in the dual space V′ will be denoted by ‖·‖_*.

(ii) A is a linear, continuous and symmetric operator from V to V′ satisfying the coercivity condition

  (Ay, y) ≥ ω‖y‖² − α|y|₂²  ∀y ∈ V,                             (5.3)

where ω > 0 and α ∈ R.

(iii) β is a maximal monotone graph in R×R such that 0 ∈ D(β), and ψ ∈ H²(Ω) is a given function. Moreover, there exists C ∈ R independent of ε such that

  (A_H y, ξ(β_ε(y − ψ))) ≥ −C                                   (5.4)

for all y ∈ D(A_H), ε > 0 and every monotonically increasing function ξ ∈ C¹(R) with ξ(0) = 0. Here β_ε(r) := ε⁻¹(r − (1 + εβ)⁻¹r) for all ε > 0, r ∈ R, and D(A_H) := {y ∈ V; Ay ∈ H}.

(iv) B is a linear continuous operator from a real Hilbert space U to H. The norm and the scalar product of U will be denoted by |·|_U and ⟨·,·⟩, respectively.

(v) The function h: U → R̄ is convex and lower semicontinuous (l.s.c.). There exist C₁ > 0 and C₂ ∈ R such that

  h(u) ≥ C₁|u|²_U − C₂  ∀u ∈ U.                                 (5.6)

(vi) g: [0,T]×H → R⁺ is measurable in t, and for every r > 0 there exists L_r > 0 independent of t such that g(t,0) ∈ L^∞(0,T) and

  |g(t,y) − g(t,z)| ≤ L_r|y − z|₂ for all t ∈ [0,T], |y|₂, |z|₂ ≤ r.

In the following, s will be a fixed real number satisfying the condition s > N/2. We shall denote by Y the space H^s(Ω) ∩ V and by Y* := (H^s(Ω))′ + V′ its dual.
By C_w([0,T];H) we shall denote the space of all weakly continuous functions from [0,T] to H. Let φ: H → R̄ be the l.s.c. convex function defined by

  φ(y) = ∫_Ω j(y(x) − ψ(x)) dx,                                 (5.7)

where j: R → R̄ is such that β = ∂j. Then (see (4.37))

  ∂φ(y) = {w ∈ H; w(x) ∈ β(y(x) − ψ(x)) a.e. x ∈ Ω},

and therefore (5.1) can be written as

  y′(t) + Ay(t) + ∂φ(y(t)) ∋ Bu(t) + f a.e. t ∈ ]0,T[            (5.8)
  y(0) = y₀.

Then by Theorem 4.3 and hypothesis (5.4) we infer, for every u ∈ L²(0,T;U) and all y₀ satisfying

  y₀ ∈ D(φ) ∩ V,                                                (5.9)

that (5.8) has a unique solution y = y(t,y₀,u) ∈ W^{1,2}([0,T];H) ∩ L²(0,T;D(A_H)) ∩ C([0,T];V). If y₀ ∈ D(φ)‾ ∩ V‾ = D(φ)‾ then y ∈ C([0,T];H), √t y′ ∈ L²(0,T;H), √t y ∈ L²(0,T;D(A_H)).
The first optimal control problem we will study here is the following:

(P)  Minimize

  ∫₀ᵀ (g(t, y(t,y₀,u)) + h(u(t))) dt + φ₀(y(T,y₀,u)),  u ∈ L²(0,T;U).

PROPOSITION 5.1  For y₀ ∈ D(φ) ∩ V the minimization problem (P) has at least one solution.

Proof  Denote by d the infimum in (P). Then there exists a sequence {uₙ} ⊂ L²(0,T;U) such that

  d ≤ G(yₙ, uₙ) ≤ d + n⁻¹,                                      (5.10)

where yₙ(t) = y(t,y₀,uₙ) and G is defined by (5.2).
By hypothesis (v) ((5.6)) we see that the uₙ remain in a bounded subset of L²(0,T;U). Hence there is u* ∈ L²(0,T;U) such that, on a subsequence (again denoted uₙ), uₙ → u* weakly in L²(0,T;U). Now we take the scalar product of (5.8) (where y = yₙ, u = uₙ) by yₙ and t yₙ′ and integrate on [0,t]. This yields (see the proof of Theorem 1.13)

  |yₙ(t)|₂² + ∫₀ᵀ ‖yₙ(t)‖² dt ≤ C(|y₀|₂² + ∫₀ᵀ |Buₙ(s)|₂² ds)
  t ℓ(yₙ(t)) + ∫₀ᵗ s|yₙ′(s)|₂² ds ≤ C(|y₀|₂² + ∫₀ᵀ |Buₙ(s)|₂² ds),

where

  ℓ(y) = ½(Ay, y) + φ(y).                                       (5.11)

Then by the Arzelà–Ascoli theorem we infer that on a subsequence

  yₙ → y* weakly in L²(0,T;V) and strongly in every L²(δ,T;H)
  Ayₙ + ∂φ(yₙ) ∋ ξₙ → ξ weakly in every L²(δ,T;H).              (5.12)

Hence yₙ → y* strongly in L²(0,T;H), and since the operator A + ∂φ + αI is maximal monotone in H×H, we conclude by (5.12) and Theorem 1.2, part (vii), that ξ ∈ Ay* + ∂φ(y*). Hence y*(t) = y(t,y₀,u*). Now it follows that

  φ₀(yₙ(T)) → φ₀(y*(T)),  ∫₀ᵀ g(t,yₙ) dt → ∫₀ᵀ g(t,y*) dt

and

  lim inf_{n→∞} ∫₀ᵀ h(uₙ) dt ≥ ∫₀ᵀ h(u*) dt,

because the function u → ∫₀ᵀ h(u) dt is convex and l.s.c. on L²(0,T;U) (see Remark 1.2) and therefore weakly lower semicontinuous. Then by (5.10) it follows that G(y*,u*) = d, thereby completing the proof.

A control u ∈ L²(0,T;U) for which the minimum in problem (P) is attained is called an optimal control. The pair (y* = y(t,y₀,u*), u*) is called an optimal pair.
Examples of (5.1) satisfying hypotheses (i), (ii) and (iii) have been presented in Section 4.2. For instance, if V = H¹(Ω), ψ ≡ 0 and A: H¹(Ω) → (H¹(Ω))′ is defined by (2.33), then the control system (5.1), (5.8) reduces to

  y_t(x,t) + A₀y(x,t) + β(y(x,t)) ∋ (Bu)(x,t) + f(x,t) a.e. (x,t) ∈ Q
  y(x,0) = y₀(x), x ∈ Ω                                         (5.13)
  α₁y + α₂ ∂y/∂ν = 0 in Σ = Γ×]0,T[.

If β is the multivalued graph (4.41) and ψ ∈ H²(Ω) satisfies the condition (4.39) or (4.40), then the state system (5.1) reduces to the obstacle problem (see (4.45))

  y_t(x,t) + A₀y(x,t) = (Bu)(x,t) + f(x,t) a.e. in [(x,t) ∈ Q; y(x,t) > ψ(x)]
  y_t(x,t) = max{f(x,t) + (Bu)(x,t) − A₀ψ(x), 0} a.e. in [(x,t) ∈ Q; y(x,t) = ψ(x)]
  y(x,t) ≥ ψ(x) a.e. (x,t) ∈ Q                                  (5.14)
  y(x,0) = y₀(x) a.e. x ∈ Ω
  α₁y + α₂ ∂y/∂ν = 0 in Σ.

Here A₀ is the second-order elliptic differential operator defined by (4.32) and ∂/∂ν is the corresponding outward normal derivative. Another class of distributed control problems we will study here is the following:
(P₁)  Minimize the functional

  G(y,u) = ∫₀ᵀ (g(t,y(t)) + h(u(t))) dt + φ₀(y(T))

on all y ∈ W^{1,2}([0,T];H) ∩ L²(0,T;H²(Ω)) and u ∈ L²(0,T;U) subject to

  y_t(x,t) + A₀y(x,t) = (Bu)(x,t) + f(x,t) a.e. (x,t) ∈ Q
  y(x,0) = y₀(x) a.e. x ∈ Ω                                     (5.15)
  ∂y/∂ν + β(y) ∋ 0 a.e. in Σ,

where f ∈ L²(Q), β is a maximal monotone graph in R×R such that β = ∂j, B ∈ L(U,H), and g: [0,T]×H → R, h: U → R̄ satisfy hypotheses (v) and (vi). As seen in Corollary 4.5, if

  y₀ ∈ H¹(Ω),  j(y₀) ∈ L¹(Γ),

then the control system has for every u ∈ L²(0,T;U) a unique solution y ∈ W^{1,2}([0,T];H) ∩ L²(0,T;H²(Ω)). Then, arguing as in the proof of Proposition 5.1, it follows that problem (P₁) admits at least one optimal pair. More will be said about this problem in Section 5.5.
§5.2  The approximating control process

Let (y*,u*) ∈ (W^{1,2}([0,T];H) ∩ L²(0,T;D(A_H))) × L²(0,T;U) be an arbitrary optimal pair of problem (P). For every ε > 0 consider the control problem:

(P^ε)  Minimize

  G^ε(y,u) = ∫₀ᵀ (g^ε(t,y(t)) + h(u(t)) + ½|u(t) − u*(t)|²_U) dt + φ₀^ε(y(T))

on all y ∈ W^{1,2}([0,T];H) ∩ L²(0,T;D(A_H)) and u ∈ L²(0,T;U) subject to

  y′ + Ay + β^ε(y − ψ) = Bu + f a.e. t ∈ ]0,T[                   (5.16)
  y(0) = y₀,

where g^ε: [0,T]×H → R and φ₀^ε: H → R are defined by (1.50), i.e.

  g^ε(t,y) = ∫_{Rⁿ} g(t, Pₙy − εΛₙτ) ρₙ(τ) dτ                    (5.17)
  φ₀^ε(y) = ∫_{Rⁿ} φ₀(Pₙy − εΛₙτ) ρₙ(τ) dτ,                      (5.18)

where n = [ε⁻¹], ρₙ is a mollifier in Rⁿ, Pₙ: H → Xₙ, Λₙ: Rⁿ → Xₙ are given by (1.48), (1.49), and β^ε is given by (3.45). As noted earlier, we may equivalently write (5.16) as

  y′(t) + Ay(t) + ∇φ^ε(y(t)) = Bu(t) + f(t) a.e. t ∈ ]0,T[       (5.19)
  y(0) = y₀,

where φ^ε: H → R is defined by

  φ^ε(y) = ∫_Ω j^ε(y(x) − ψ(x)) dx,  j^ε(r) = ∫₀^r β^ε(s) ds,

and therefore ∇φ^ε(y)(x) = β^ε(y(x) − ψ(x)) a.e. x ∈ Ω. We have already seen in Theorem 4.5 (see also Remark 4.1) that the solutions to (5.19) approximate for ε → 0 the solutions to (5.8). Throughout the following we shall assume that y₀ satisfies condition (5.9). For every ε > 0 and u ∈ L²(0,T;U), (5.19) has a unique solution y^ε(u) ∈ W^{1,2}([0,T];H) ∩ L²(0,T;D(A_H)). We need the following technical lemma:

LEMMA 5.1
If for ε → 0, u_ε → u weakly in L²(0,T;U), then

  y^ε(u_ε) → y(u) weakly in L²(0,T;D(A_H)) ∩ W^{1,2}([0,T];H) and strongly in L²(0,T;V) ∩ C([0,T];H),

where y(u) is the solution to (5.8). Moreover, one has the estimate

  ‖y^ε(u) − y(u)‖_{C([0,T];H)} ≤ Cε^{1/2} for all ε > 0.          (5.20)

Proof  We write (5.19) as

  y′ + Ay + ∇φ_ε(y) = Bu + f + ∇φ_ε(y) − ∇φ^ε(y)
  y(0) = y₀

and recall (3.57), (3.58). Then by estimate (4.30) it follows that

  ‖y^ε(u)‖_{W^{1,2}([0,T];H)} + ‖y^ε(u)‖_{L²(0,T;D(A_H))} ≤ C(1 + ‖u‖_{L²(0,T;U)}).   (5.21)

By estimate (5.21) we deduce that there exists a subsequence (again denoted y^ε) such that (we use the Arzelà–Ascoli theorem)

  y^ε(u_ε) → y strongly in C([0,T];H) ∩ L²(0,T;V)
  Ay^ε(u_ε) → Ay weakly in L²(0,T;H)
  (y^ε(u_ε))′ → y′ weakly in L²(0,T;H),

and arguing as in the proof of Theorem 4.5 it follows that

  β^ε(y^ε(u_ε) − ψ) → ξ weakly in L²(0,T;H),

where ξ(t) ∈ ∂φ(y(t)) a.e. t ∈ ]0,T[. Hence y = y(u), as claimed. The estimate (5.20) follows by inequality (4.30)′, letting λ tend to zero. Now by Proposition 5.1, for every ε > 0 problem (P^ε) admits an optimal pair (y_ε, u_ε) ∈ (W^{1,2}([0,T];H) ∩ L²(0,T;D(A_H))) × L²(0,T;U).
LEMMA 5.2  For ε → 0,

  u_ε → u* strongly in L²(0,T;U)                                 (5.22)
  y_ε → y* strongly in L²(0,T;V) ∩ C([0,T];H) and weakly in W^{1,2}([0,T];H) ∩ L²(0,T;D(A_H))   (5.23)
  β^ε(y_ε − ψ) → ξ weakly in L²(0,T;H),                          (5.24)

where ξ = f + Bu* − y*′ − Ay* ∈ ∂φ(y*) a.e. in ]0,T[.

Proof  For every ε > 0, we have

  G^ε(y_ε, u_ε) ≤ φ₀^ε(y^ε(T)) + ∫₀ᵀ (g^ε(t, y^ε(t)) + h(u*(t))) dt.   (5.25)

Here y^ε is the solution to (5.16) where u = u*. By Lemma 5.1, y^ε → y* in C([0,T];H) and therefore, by Proposition 1.12,

  g^ε(t, y^ε(t)) → g(t, y*(t)) for all t ∈ [0,T].

Then by the Lebesgue dominated convergence theorem

  lim_{ε→0} ∫₀ᵀ g^ε(t, y^ε(t)) dt = ∫₀ᵀ g(t, y*(t)) dt.

Similarly, φ₀^ε(y^ε(T)) → φ₀(y*(T)), whence

  lim sup_{ε→0} G^ε(y_ε, u_ε) ≤ G(y*, u*).                       (5.26)

On the other hand, since {u_ε} is bounded in L²(0,T;U), there exists u₁* ∈ L²(0,T;U) such that for some sequence ε → 0,

  u_ε → u₁* weakly in L²(0,T;U),

and by Lemma 5.1

  y_ε → y₁* = y(y₀, u₁*) strongly in C([0,T];H).

Since the function u → ∫₀ᵀ h(u) dt is weakly lower semicontinuous on L²(0,T;U), we conclude that

  G(y₁*, u₁*) ≤ lim inf_{ε→0} G^ε(y_ε, u_ε),

and by (5.26) it follows that

  lim_{ε→0} ∫₀ᵀ |u_ε − u*|²_U dt = 0.                            (5.27)
Hence u₁* = u*, y₁* = y*, and (5.22), (5.23) follow by (5.27) and Lemma 5.1.

By Theorem 1.14 we infer that the boundary value problem

  p_ε′ − Ap_ε − β̇^ε(y_ε − ψ)p_ε = ∇g^ε(t, y_ε) in Q              (5.28)
  p_ε(·,T) + ∇φ₀^ε(y_ε(T)) = 0 in Ω

has a unique solution p_ε ∈ L²(0,T;V) ∩ C([0,T];H) with p_ε′ ∈ L²(0,T;V′) (if ∇φ₀^ε(y_ε(T)) ∈ V then p_ε ∈ W^{1,2}([0,T];H) ∩ L²(0,T;D(A_H))). Here ∇g^ε is the Fréchet derivative of y → g^ε(t,y) and β̇^ε is the derivative of β^ε. On the other hand, since (y_ε, u_ε) is the optimal pair in problem (P^ε),

  G^ε(y_ε^λ, u_ε + λv) ≥ G^ε(y_ε, u_ε) for all λ > 0 and v ∈ L²(0,T;U).

Here y_ε^λ is the solution to (5.16) where u = u_ε + λv. This yields

  ∫₀ᵀ (h′(u_ε, v) + (∇g^ε(t, y_ε), z_ε) + ⟨u_ε − u*, v⟩) dt + (∇φ₀^ε(y_ε(T)), z_ε(T)) ≥ 0  ∀v ∈ L²(0,T;U),

where z_ε ∈ W^{1,2}([0,T];H) ∩ L²(0,T;D(A_H)) is the solution to the evolution equation

  z′ + Az + β̇^ε(y_ε − ψ)z = Bv a.e. t ∈ ]0,T[                   (5.29)
  z(0) = 0,

and h′: U×U → R is the directional derivative of h (Section 1.3). Substituting (5.29) in the last inequality and using (5.28), it follows after some integration by parts that

  ∫₀ᵀ (h′(u_ε, v) + ⟨u_ε − u* − B*p_ε, v⟩) dt ≥ 0  ∀v ∈ L²(0,T;U),

and by Proposition 1.6,

  B*p_ε(t) ∈ ∂h(u_ε(t)) + u_ε(t) − u*(t) a.e. t ∈ ]0,T[.          (5.30)

Equations (5.28), (5.30) represent the Euler–Lagrange optimality equations for the approximating control problem (P^ε).

LEMMA 5.3  There is C independent of ε such that

  |p_ε(t)|₂² + ∫₀ᵀ ‖p_ε(t)‖² dt ≤ C  ∀t ∈ [0,T]                  (5.31)
  ∫_Q |β̇^ε(y_ε − ψ)p_ε| dx dt ≤ C.                              (5.32)
Proof  We take the scalar product of (5.28) by p_ε(t) and integrate on [t,T]. Since β̇^ε ≥ 0, we get

  ½|p_ε(t)|₂² + ω∫_t^T ‖p_ε(s)‖² ds ≤ ½|p_ε(T)|₂² + α∫_t^T |p_ε(s)|₂² ds + ∫_t^T |∇g^ε(s, y_ε)|₂|p_ε|₂ ds,  0 ≤ t ≤ T.   (5.33)

On the other hand, by hypothesis (vi) and Lemma 5.2 it follows that {∇g^ε(t, y_ε)} is bounded in L^∞(0,T;H). Then using Gronwall's lemma in (5.33) we obtain estimate (5.31). To get (5.32) we multiply (5.28) by ξ(p_ε) and integrate on [0,t], where ξ is given by (3.66). By assumption (5.5) it follows that (Ap_ε(t), ξ(p_ε(t))) ≥ −C, and therefore

  ∫_Q β̇^ε(y_ε − ψ) p_ε ξ(p_ε) dx dt ≤ C.

Then letting ξ tend to sgn we get the estimate (5.32).

Inasmuch as {Ap_ε} is bounded in L²(0,T;V′) and {β̇^ε(y_ε − ψ)p_ε} in L¹(0,T;L¹(Ω)), we may infer that {p_ε′} is bounded in L¹(0,T;L¹(Ω) + V′). Since by the Sobolev imbedding theorem for s > N/2, H^s(Ω) ⊂ C(Ω̄) and therefore L¹(Ω) ⊂ (H^s(Ω))′, we may conclude that {p_ε′} is bounded in the space L¹(0,T;Y*), where Y* = (H^s(Ω))′ + V′ is the dual of Y = H^s(Ω) ∩ V. Since the injection of H into Y* is compact and the set {p_ε(t)} is for every t ∈ [0,T] bounded in H, by the vectorial Helly theorem we conclude that there exists a function p ∈ BV([0,T];Y*) such that, on a subsequence ε_n → 0,

  p_{ε_n}(t) → p(t) strongly in Y* for every t ∈ [0,T].           (5.34)

On the other hand, by estimate (5.31) we may assume p_{ε_n} → p weak star in L^∞(0,T;H) and weakly in L²(0,T;V). Now since the injection of V into H is compact, for every λ > 0 there exists C(λ) > 0 such that (see [50], Chapter 1, Lemma 5.1)

  |p_{ε_n}(t)|₂ ≤ λ‖p_{ε_n}(t)‖ + C(λ)‖p_{ε_n}(t)‖_{Y*}
for all n and t ∈ [0,T]. Together with the above two relations this yields

  p_{ε_n} → p strongly in L²(0,T;H)                              (5.35)

and

  p_{ε_n}(t) → p(t) weakly in H for every t ∈ [0,T].              (5.36)

Finally, by (5.32) we infer that there exists μ ∈ (L^∞(Q))* such that, on a generalized subsequence {λ} of {ε_n},

  β̇^λ(y_λ − ψ)p_λ → μ weak star in (L^∞(Q))*.                   (5.37)

Since {∇φ₀^{ε_n}(y_{ε_n}(T))} is bounded in H, by Proposition 1.12 and Lemma 5.2 we may assume that

  ∇φ₀^{ε_n}(y_{ε_n}(T)) → −p(T) ∈ ∂φ₀(y*(T)) weakly in H,         (5.38)

and similarly

  ∇g^{ε_n}(t, y_{ε_n}) → ζ weak star in L^∞(0,T;H).               (5.39)
LEMMA 5.4  We have

  ζ(t) ∈ ∂g(t, y*(t)) a.e. t ∈ ]0,T[,                            (5.40)

where ∂g(t,y) is the generalized gradient of y → g(t,y).

Proof  From the proof of Proposition 1.12 we see that

  (∇g^{ε_n}(t, y_{ε_n}(t)), Pₙz(t)) ≤ Σ_{i=0}^n g⁰(t, Pₙy_{ε_n}(t) − Λₙτₙⁱ(t), Pₙz(t)) αₙⁱ(t)  ∀z ∈ L¹(0,T;H),

where αₙⁱ, τₙⁱ ∈ L^∞(0,T) and ‖τₙⁱ(t)‖ ≤ 1 a.e. t ∈ ]0,T[, Σ_{i=0}^n αₙⁱ(t) = 1. Integrating this inequality on [0,t] and letting n = [ε_n⁻¹] tend to +∞, it follows by (5.39) and Fatou's lemma that

  ∫₀ᵗ (ζ(t), z(t)) dt ≤ ∫₀ᵗ g⁰(t, y*(t), z(t)) dt  ∀z ∈ L¹(0,T;H),   (5.41)

because g⁰(t): H×H → R is upper semicontinuous and y_{ε_n} → y* strongly in C([0,T];H). By (5.41) it follows that

  (ζ(t), z) ≤ g⁰(t, y*(t), z)  ∀z ∈ H, a.e. t ∈ ]0,T[,

and the latter implies (5.40). Now letting ε = ε_n → 0 in (5.28), it follows by (5.34), (5.35), (5.37), (5.38), (5.39) and (5.40) that p ∈ BV([0,T];Y*) ∩ L²(0,T;V) ∩ L^∞(0,T;H) satisfies the equations
  p′ − Ap − μ ∈ ∂g(t, y*) a.e. in ]0,T[                           (5.42)
  p(T) + ∂φ₀(y*(T)) ∋ 0

and

  p′ − Ap − μ ∈ L^∞(0,T;H)                                       (5.43)

(p′ is the derivative in the sense of V′-valued distributions). On the other hand, since the map ∂h: U → U is closed (because it is maximal monotone), it follows by (5.30) and Lemma 5.2 that

  B*p(t) ∈ ∂h(u*(t)) a.e. t ∈ ]0,T[.                              (5.44)

Summarizing, we have proved the following generalized form of the maximum principle for problem (P):

PROPOSITION 5.2  Let (y*,u*) be an arbitrary optimal pair of problem (P). Then there exist a function p ∈ L^∞(0,T;H) ∩ L²(0,T;V) ∩ BV([0,T];Y*) and a measure μ ∈ (L^∞(Q))* which satisfy (5.42), (5.43), (5.44). Moreover, p and μ are limits in the sense of (5.34), (5.35) and (5.37) of subsequences of {p_ε} and {β̇^ε(y_ε − ψ)p_ε}, respectively.
The function p is called the dual extremal arc of problem (P).

REMARK 5.1  If (y^ε, u^ε) is the optimal pair of the optimal control problem with state equation (5.16) and cost functional

  ∫₀ᵀ (g^ε(t,y) + h_ε(u)) dt,  ε > 0,

then, arguing as in the proof of Lemma 5.2, it follows that on a subsequence we have

  u^ε → u₁* weakly in L²(0,T;U)
  y^ε → y₁* weakly in W^{1,2}([0,T];H) ∩ L²(0,T;D(A_H)),

where (y₁*, u₁*) is an optimal pair in problem (P) (h_ε is defined by (1.21)). The first-order necessary conditions for this approximating problem,

  (p^ε)′ − Ap^ε − β̇^ε(y^ε − ψ)p^ε = ∇g^ε(t, y^ε) in Q
  y^ε(0) = y₀,  p^ε(T) + ∇φ₀^ε(y^ε(T)) = 0 in Ω
  B*p^ε = ∇h_ε(u^ε) a.e. in ]0,T[,

can be used to implement numerical algorithms for the computation of the optimal control. Such an algorithm is given in [2] by approximating the above control problem by a Rayleigh–Ritz–Galerkin scheme and using the finite element method. In this context we also mention the works [79], [80].

§5.3
First-order necessary conditions: semilinear parabolic equations

Here we shall study problem (P) in the special case where β is locally Lipschitz and satisfies the growth condition

  |∂β(r)| ≤ C(|β(r)| + |r| + 1) a.e. r ∈ R.                       (5.45)

THEOREM 5.1  Let (y*,u*) ∈ (W^{1,2}([0,T];H) ∩ L²(0,T;D(A_H))) × L²(0,T;U) be an arbitrary optimal pair of problem (P), where y₀ satisfies condition (5.9) and β is locally Lipschitz. Then there exist functions p ∈ BV([0,T];Y*) ∩ L²(0,T;V) ∩ L^∞(0,T;H) and μ ∈ (L^∞(Q))* such that p′ − Ap − μ ∈ L^∞(0,T;H) and

  p′ − Ap − μ ∈ ∂g(t, y*) a.e. in Q                               (5.46)
  μ_a(x,t) ∈ p(x,t) ∂β(y*(x,t) − ψ(x)) a.e. (x,t) ∈ Q              (5.47)
  p(T) + ∂φ₀(y*(T)) ∋ 0 in Ω                                      (5.48)
  B*p(t) ∈ ∂h(u*(t)) a.e. t ∈ ]0,T[.                              (5.49)

If β satisfies condition (5.45) then p ∈ AC([0,T];Y*) ∩ C_w([0,T];H) and μ_a = μ ∈ L¹(Q).

In (5.47), μ_a ∈ L¹(Q) is the absolutely continuous part of μ (see Section 3.3), and ∂β: R → 2^R, ∂g(t): H → H, ∂φ₀: H → H are the generalized gradients of β, g(t): H → R and φ₀: H → R; ∂h: U → U is the subdifferential of h: U → R̄.
In (5.47), l1a E L1(Q) is the absolutely continuous part of 11 (see Section R 3.3) and as:R ~ 2 , ag(t)~H ~ H, a¢O~H ~ H are the generalized gradients of S, g(t):H ~ Rand ¢O:H ~ R~ ah~U + U is the subdifferential of h:U ~ R. Some insight into the problem can be gained from the following simple example: Minimize the functional
r ly(x,T)-YO(x) 12dX
(5.50)
J~
on all ( y,u ) E
. W1 ' 2 ([O,T]~H ) n L2 (O,T~R m) subJect
to the state equation
m
Yt(x,t) - [,y(x,t)
+
S(y(x,t)) = a.e. (x, t)
y(x,O) = yO(x)
~~
+
ay
=
x
° a.e.
~
i =1 E
u.(t) X.(x) 1
1
0
(5.51)
E ~
in ~
lui(t)1 '" p a.e. t E ]O,T[, i. = 1, ... ,m. Here a>
°and yO, Xi E L2(~), i =
1, .•. ,m, are given functions on
(5.52)
~.
This problem can be viewed as a least-squares approach to the nonlinear diffusion equation with prescribed final value condition (the backward equation). Such a control system arises in diffusion kinetics enzyme problems where (see Section 4.2) 185
and in the temperature control of a heat conductor [30]. S has the following form: { S( r)
=
if
bl(r-ell
0
_00
if
b2(r-8 ) if 2
In the latter case
<
r -< 81 81 < r < 8 2 82 -< r < +
00
where b < 0 < b2 . 1 By Proposition 5.1, problem (5.50), (5.51) admits at least one optimal pair (y*,u*), and by virtue of Theorem 5.1, if S satisfies condition (5.45) (as happens in the above cases) then there exists p which satisfies along with y*, u* the system Pt
+ 6P -
ap av
+
pas(y*)
3
0 in Q
ap = 0 in L
p(x,T) = yO(x) - y*(x,T), x
E Q
u.(t) = p sgn J p(x,t)X·(x)dx a.e. t 1
Q
1
We note that in this case h:R m +
_f - L
h(u 1 ,···,U m)
o
E
1, ... ,me
]O,T[,
R is given by
if lu·l
1, ... ,m
1
+00
otherwise.
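Evaluating the bang-bang law u_i(t) = ρ sgn ∫_Ω p(x,t)X_i(x) dx is then a quadrature followed by a sign. The fragment below (made-up data: a hypothetical dual arc p and two indicator actuators on Ω = ]0,1[, all assumptions for illustration) applies it:

```python
import numpy as np

rho = 1.0
x = np.linspace(0.0, 1.0, 101)
t = np.linspace(0.0, 1.0, 51)
dx = x[1] - x[0]

# Hypothetical dual extremal arc p(x,t) and actuator profiles X_i
p = np.cos(3.0 * t)[:, None] * np.sin(np.pi * x)[None, :]
X = np.stack([(x < 0.5).astype(float), (x >= 0.5).astype(float)])

# u_i(t) = rho * sgn( integral over Omega of p(.,t) X_i ), Riemann sum
u = rho * np.sign((p @ X.T) * dx)
```

Since ∫_Ω sin(πx)X_i(x) dx > 0 for both actuators, each control here switches exactly when cos 3t changes sign.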
Proof of Theorem 5.1  Let p ∈ L^∞(0,T;H) ∩ L²(0,T;V) ∩ BV([0,T];Y*) be the function given by Proposition 5.2 and let μ ∈ (L^∞(Q))* be the measure defined by (5.37). Equations (5.46), (5.48) and (5.49) have already been established in Proposition 5.2; equation (5.47) remains to be proved. To this end we note that, by (5.23), (5.35) and the Egorov theorem, for each η > 0 there exists a measurable subset Q_η ⊂ Q such that y*, p ∈ L^∞(Q_η), m(Q∖Q_η) ≤ η and

  y_{ε_n} → y*,  p_{ε_n} → p strongly in L^∞(Q_η).

Since β is locally Lipschitz, this implies that {β̇^{ε_n}(y_{ε_n} − ψ)} is bounded in L^∞(Q_η). Then on a subsequence of the generalized subsequence {λ} arising in (5.37) we have

  β̇^λ(y_λ − ψ) → g_η weak star in L^∞(Q_η),

and by virtue of Lemma 3.4

  g_η(x,t) ∈ ∂β(y*(x,t) − ψ(x)) for (x,t) ∈ Q_η.

On the other hand, it follows by (5.37) that

  p_λ β̇^λ(y_λ − ψ) → χ_η μ weak star in (L^∞(Q))*,

where χ_η is the characteristic function of Q_η. Hence (χ_η μ)_a = χ_η μ_a = p g_η χ_η in Q_η, and since η is arbitrarily small we infer that

  μ_a(x,t) ∈ p(x,t) ∂β(y*(x,t) − ψ(x)) a.e. (x,t) ∈ Q,

as claimed. Now let us assume that β satisfies condition (5.45). Then, as observed in Section 3.3, we have

  β̇^ε(r) ≤ C(|β^ε(r)| + |r| + 1) a.e. r ∈ R.

Let E be an arbitrary measurable subset of Q. We have

  ∫_E |β̇^ε(y_ε − ψ)p_ε| dx dt ≤ C(∫_E |p_ε β^ε(y_ε − ψ)| dx dt + ∫_E |p_ε|(1 + |y_ε|) dx dt).   (5.53)

By Lemma 5.2 ((5.23) and (5.24)) and by (5.35) it follows that the sequences {p_ε β^ε(y_ε − ψ)} and {p_ε y_ε} are weakly convergent in L¹(Q). Then by (5.53) it follows that, for every δ > 0, there exists ν(δ) > 0 such that if m(E) ≤ ν(δ) then

  ∫_E |β̇^ε(y_ε − ψ)p_ε| dx dt ≤ δ.

Hence the family {∫_E β̇^ε(y_ε − ψ)p_ε dx dt; E ⊂ Q} is equi-absolutely continuous, and by the Dunford–Pettis criterion the sequence {β̇^ε(y_ε − ψ)p_ε} is weakly compact in L¹(Q) ⊂ L¹(0,T;(H^s(Ω))′) ⊂ L¹(0,T;Y*). Hence μ ∈ L¹(Q), and by (5.43) we see that p′ ∈ L¹(0,T;(H^s(Ω))′) + L²(0,T;V′) ⊂ L¹(0,T;Y*) (here p′ is the derivative of p in the sense of distributions from [0,T] to Y*). We may conclude therefore that p ∈ AC([0,T];Y*), and (5.37) can be strengthened to

  β̇^{ε_n}(y_{ε_n} − ψ)p_{ε_n} → μ weakly in L¹(Q).

Inasmuch as p ∈ L^∞(0,T;H) ∩ C([0,T];Y*) and the injection of H into Y* is compact, it follows that, for every t₀ ∈ [0,T] and each sequence t_n → t₀, the weak limit of {p(t_n)} in H exists and equals p(t₀). Hence p is weakly continuous from [0,T] to H, thereby completing the proof.

Now we shall indicate a refinement of Theorem 5.1.

PROPOSITION 5.3  In Theorem 5.1 assume that either β is globally Lipschitz or D(A_H) ⊂ H²(Ω) and N = 1. Then μ ∈ L²(Q), p ∈ C([0,T];H) and

  p ∈ W^{1,2}([δ,T];H) ∩ L²(δ,T;D(A_H))  ∀δ ∈ ]0,T[.              (5.54)

Further, if φ₀ ≡ 0 then p ∈ L²(0,T;D(A_H)) ∩ W^{1,2}([0,T];H).
Proof  If β is globally Lipschitz then {β̇^ε} is uniformly bounded on R and therefore {p_ε β̇^ε(y_ε − ψ)} is weakly compact in L²(Q). Hence μ ∈ L²(Q), and (5.54) follows from (5.46) via Theorem 1.13. If φ₀ ≡ 0 then by (5.48) we see that p(T) = 0 and therefore p ∈ L²(0,T;D(A_H)) ∩ W^{1,2}([0,T];H). If D(A_H) ⊂ H²(Ω) and N = 1 then W^{1,2}([0,T];H) ∩ L²(0,T;D(A_H)) = H^{2,1}(Q) ⊂ C(Q̄) (see [55], T.2, p.15). Hence {y_ε} is bounded in C(Q̄) and the {p_ε β^ε(y_ε − ψ)} are uniformly bounded on Q. Then the conclusion follows as above.

§5.4  First-order necessary conditions: the obstacle problem
Here we shall consider the case where β is defined by (4.41), i.e.

  β(r) = 0 if r > 0,  β(0) = ]−∞,0],  β(r) = ∅ if r < 0.           (5.55)

Then, as noted in Section 5.1, if A is defined by (4.34), (4.35) and ψ satisfies condition (4.39)′ or (4.40)′, the control system (5.1) becomes equivalent to the obstacle problem (5.14).
THEOREM 5.2  Let (y*,u*) be an arbitrary optimal pair in problem (P) where β is defined by (5.55). Then there exists a function p ∈ L²(0,T;V) ∩ L^∞(0,T;H) ∩ BV([0,T];Y*) with p′ − Ap ∈ (L^∞(Q))* which satisfies the equations

  (p′ − Ap)_a ∈ ∂g(t, y*) a.e. in [y* > ψ]                        (5.56)
  p(f + Bu* − Ay*) = 0 a.e. in [y* = ψ]                            (5.57)
  p(T) + ∂φ₀(y*(T)) ∋ 0 in Ω                                      (5.58)
  B*p(t) ∈ ∂h(u*(t)) a.e. t ∈ ]0,T[.                               (5.59)

Here p′ is the distributional derivative of p: [0,T] → L²(Ω), and (p′ − Ap)_a ∈ L¹(Q) is the absolutely continuous part of the measure p′ − Ap. Thus (5.56) must be understood in the following sense: there exists an increasing family {Q_k}_{k=1}^∞ of subsets of Q such that m(Q∖Q_k) ≤ k⁻¹ and

  ∫₀ᵀ (p(t), y′(t)) dt + ∫₀ᵀ (Ap(t), y(t)) dt − (p(T), y(T)) + ∫_Q y(x,t) ∂g(t, y*(x,t)) dx dt = 0   (5.60)

for all y ∈ L²(0,T;V) ∩ C(Q̄) ∩ C([0,T];Y) with y′ ∈ L²(0,T;V′) and such that y(x,0) = 0, supp y ⊂ [(x,t) ∈ Q; y*(x,t) > ψ(x)] ∩ Q_k. To be more specific, consider the case where the control system (5.1) reduces to problem (5.14). Then p ∈ L²(0,T;H¹(Ω)) (p ∈ L²(0,T;H₀¹(Ω)) if α₂ = 0) and (5.56) reduces to

  p_t − A₀p = μ_s + μ_a in [(x,t); y*(x,t) > ψ(x)]                 (5.61)
  α₁p + α₂ ∂p/∂ν = 0 in Σ,

where μ_a ∈ L¹(Q) is such that

  μ_a(x,t) ∈ ∂g(t, y*(x,t)) a.e. (x,t) ∈ Q                         (5.62)

and μ_s is a singular measure with respect to the Lebesgue measure on Q. Equation (5.57) becomes in this case

  p = 0 a.e. in [y* = ψ] ∩ [f + Bu* − A₀ψ ≠ 0].                    (5.63)
Together with (5.58) and (5.59), equations (5.61), (5.62) and (5.63) represent a quasivariational inequality of parabolic type which can be solved formally using a gradient algorithm. We start with u⁰ arbitrary and solve inductively the following sequence of parabolic variational inequalities:

  y_tⁱ + A₀yⁱ = Buⁱ + f in [yⁱ > ψ]
  y_tⁱ = max{f + Buⁱ − A₀ψ, 0} in [yⁱ = ψ]
  yⁱ ≥ ψ,  yⁱ(0) = y₀ in Ω,  α₁yⁱ + α₂ ∂yⁱ/∂ν = 0 in Σ;

  p_tⁱ − A₀pⁱ = ∂g(t, yⁱ) in [yⁱ > ψ]
  pⁱ = 0 in [yⁱ = ψ] ∩ [f + Buⁱ − A₀ψ ≠ 0]
  α₁pⁱ + α₂ ∂pⁱ/∂ν = 0 in Σ,  pⁱ(T) = −∂φ₀(yⁱ(T)) in Ω;

  uⁱ⁺¹ = uⁱ − ρ_i(∂h(uⁱ) − B*pⁱ) in U,  ρ_i > 0,  i = 0,1,....

For numerical calculation of the optimal control we may use either discretized forms of this system or the penalized approximating system described in Remark 5.1.
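The same loop can be run on the penalized approximating system of Remark 5.1. The toy sketch below (an added illustration; the quadratic choices g(t,y) = ½|y − y_d|², h(u) = (ν/2)|u|², the concrete data and all grid parameters are assumptions, not the text's) takes A₀ = −∂²/∂x², B = I on Ω = ]0,1[ and descends along the gradient νu + p obtained from one forward (state) and one backward (adjoint) sweep per iteration:

```python
import numpy as np

nx, nt, Tf = 21, 100, 0.1
x = np.linspace(0.0, 1.0, nx)
dx, dt = x[1] - x[0], Tf / nt        # dt = 1e-3 <= dx^2/2
eps, nu, lr = 1e-2, 1e-3, 100.0
yd = 0.2 * np.sin(np.pi * x)         # target profile (an assumption)

def lap(w):
    return (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2

def forward(u):      # y_t = y_xx + u - min(y,0)/eps, y = 0 on the boundary
    y = np.zeros((nt + 1, nx))
    for k in range(nt):
        y[k + 1, 1:-1] = y[k, 1:-1] + dt * (
            lap(y[k]) + u[k, 1:-1] - np.minimum(y[k, 1:-1], 0.0) / eps)
    return y

def backward(y):     # -p_t = p_xx - (1/eps)*chi_{y<0}*p + (y - yd), p(T) = 0
    p = np.zeros((nt + 1, nx))
    for k in range(nt, 0, -1):
        p[k - 1, 1:-1] = p[k, 1:-1] + dt * (
            lap(p[k]) - (y[k, 1:-1] < 0.0) / eps * p[k, 1:-1]
            + (y[k, 1:-1] - yd[1:-1]))
    return p

def cost(y, u):
    return 0.5 * np.sum((y[:-1] - yd) ** 2) * dx * dt \
         + 0.5 * nu * np.sum(u ** 2) * dx * dt

u, J = np.zeros((nt, nx)), []
for _ in range(15):
    y = forward(u)
    J.append(cost(y, u))
    u = u - lr * (nu * u + backward(y)[:-1])   # gradient step on nu*u + B*p
```

With these step sizes the recorded costs J should decrease from iteration to iteration, mirroring the formal descent uⁱ⁺¹ = uⁱ − ρ_i(∂h(uⁱ) − B*pⁱ).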
~
~€(x,t)
~ l
0 if IYE(X,t)-tjJ(x) I if IYs(x,t)-tjJ(x)
>
E2
1-< s
2
_E 2 1. 1 if YE(X,t)-~(x) ~ _E 2
f
0 if YE(X,t)-tjJ(x)
>
If {y } and {p } are the sequences which occur in Section 5.2 then, as seen E sn in thenproof of Theorem 3.3, by (3.79) we have (see (3.87) and (3.88»
-< Enlps (x,t) I a.e. (x,t) n
190
E
Q.
(5.64)
(5.65)
then selecting a subsequence we may assume that .En B (YE
PE n
(x,t)-~(x)) ~
0 a.e. (x,t) E Q.
n
On the other hand, by (5.24) and (5.35) we have En B (YE
-~) ~
p(f
+.
1 Bu*-Ay*) weakly in L (Q).
n
Hence p(f
Bu*-Ay*) = 0 a.e. in Q
+
(5.66)
By (5.66), (5.57) follows. Since (5.58) and (5.59) have been established in Proposition 5.2, it remains to prove (5.56). First we note that by (5.64) and (5.66)

$$\beta^{\varepsilon_n}(y_{\varepsilon_n}-\psi)(y_{\varepsilon_n}-\psi) \to 0 \quad \text{strongly in } L^1(Q). \tag{5.67}$$

On the other hand, by Lemma 5.2 ((5.23)) and by the Egorov theorem, for every $\eta>0$ there exists a measurable subset $Q_\eta\subset Q$ such that $m(Q\setminus Q_\eta)\le\eta$, $y^*,\psi\in L^\infty(Q_\eta)$ and

$$y_{\varepsilon_n} \to y^* \quad \text{uniformly on } Q_\eta.$$

Recalling that $\{\lambda\}$ is a generalized subsequence of $\{\varepsilon_n\}$, we infer by (5.37) that $\mu(y^*-\psi)=0$ on $Q_\eta$. Hence

$$\int_{Q_\eta}\mu_a(x,t)(y^*(x,t)-\psi(x))\phi(x,t)\,dx\,dt + \mu_s((y^*-\psi)\phi) = 0 \tag{5.68}$$

for all $\phi\in L^\infty(Q)$ which have their supports in $Q_\eta$. As usual, $\mu_a\in L^1(Q)$ and $\mu_s\in(L^\infty(Q))^*$ denote the absolutely continuous part and the singular part of $\mu$, respectively. This means that there exists an increasing family of measurable subsets $Q_k\subset Q$ such that $m(Q\setminus Q_k)\le k^{-1}$ and $\mu_s=0$ on $Q_k$, $k=1,\ldots$. Then by (5.68) it follows that

$$\int_{Q_\eta\cap Q_k}\mu_a(x,t)(y^*(x,t)-\psi(x))\phi(x,t)\,dx\,dt = 0$$

for all $\phi\in L^\infty(Q)$ having their supports in $Q_\eta\cap Q_k$. This yields
$\mu_a(y^*-\psi)=0$ a.e. in $Q_\eta\cap Q_k$, and letting $\eta\to0$, $k\to\infty$ we conclude that

$$(y^*(x,t)-\psi(x))\,\mu_a(x,t) = 0 \quad \text{a.e. } (x,t)\in Q.$$

Since $p'-Ap-\mu\in L^\infty(0,T;L^2(\Omega))$, we see by (5.42), (5.46) that

$$(p'-Ap)_a \in \partial g(t,y^*) \quad \text{a.e. in } [(x,t)\in Q;\ y^*(x,t)>\psi(x)],$$

thereby completing the proof.

REMARK 5.2 Assume that $N=1$ and $D(A_H)\subset H^2(\Omega)$. Then by Lemma 5.2 it follows that $y_\varepsilon\to y^*$ in $C(\bar Q)$, and (5.67) yields $\mu(y^*-\psi)=0$ in $Q$. Hence in this case (5.56) becomes

$$p'-Ap \in \partial g(t,y^*) \quad \text{a.e. in } [(x,t)\in Q;\ y^*(x,t)>\psi(x)]. \tag{5.56'}$$

The same conclusion is reached if $\beta$ is globally Lipschitzian.
REMARK 5.3 In Theorem 5.2 assume further that $\phi(y-\psi)^-\in V$ for every $y\in V$ and $\phi\in C^1(\bar\Omega)$ (this happens in all relevant situations). Then after some calculation involving (5.28) and (5.22), (5.34), (5.35), (5.38), (5.39) and (5.67) we see that p also satisfies the equation

$$-\int_Q p(x,t)\big((y^*(x,t)-\psi(x))\phi(x,t)\big)_t\,dx\,dt - \int_0^T (Ap(t),(y^*(t)-\psi)\phi)\,dt$$
$$= \int_Q \mu(x,t)(y^*(x,t)-\psi(x))\phi(x,t)\,dx\,dt - \int_\Omega p(x,T)(y^*(x,T)-\psi(x))\phi(x,T)\,dx \tag{5.69}$$

for all $\phi\in C^1(\bar Q)$ such that $\phi(x,0)=0$.
REMARK 5.4 Theorems 5.1 and 5.2 have been established in [9] (see also [12]). In a special case, Theorem 5.2 has been obtained by Saguez [78], who used a different argument. Parenthetically, we notice that these theorems remain true if $g$ is of the form

$$g(t,y) = \int_\Omega g_0(t,x,y(x))\,dx,$$

where $g_0:[0,T]\times\Omega\times R\to R$ satisfies (as a function of $x,y$) assumptions (j), (jj) in Section 3.9. In this case (5.46) and (5.56) have the form (see [1])

$$p'-Ap-\mu \in \partial_y g_0(t,x,y^*) \quad \text{a.e. in } Q$$
$$(p'-Ap)_a \in \partial_y g_0(t,x,y^*) \quad \text{a.e. in } [y^*>\psi].$$

REMARK 5.5 The filtering of nonlinear systems is another source of optimal control problems of the form (5.2). Consider for instance the noisy system

$$y_t - \Delta y + \beta(y) \ni f + u \quad \text{in } Q$$
$$y(0) = y_0 \ \text{in } \Omega,\qquad y = 0 \ \text{in } \Sigma,$$

with observation $z = y + \eta$, where the error terms $u$ and $\eta$ are unknown. The basic problem under consideration is to recover the state $y$ from the observation $z$. This can be achieved by minimizing the least-squares criterion

$$\int_0^T \big(|y(t)-z(t)|_2^2 + \alpha(t)|u(t)|_2^2\big)\,dt$$

with respect to $y$ and $u$, subject to the above equation (here $\alpha$ is a positive weight function).

§5.5 First-order necessary conditions for problem (P₁)
To derive a maximum principle result for problem (P₁) we shall use the same method as for problem (P). For each $\varepsilon>0$, consider the approximating control problem:

(P₁ᵉ) Minimize $G^\varepsilon(y,u)$ on all $(y,u)$ subject to

$$y_t + A_0y = Bu + f \quad \text{a.e. in } Q,\qquad y(x,0) = y_0(x),\ x\in\Omega, \tag{5.70}$$

where the functional $G^\varepsilon$ is defined as in Section 5.2, $(y^*,u^*)$ is an arbitrary optimal pair of problem (P₁), and $\beta^\varepsilon$ is given by (3.45). We note that for $\varepsilon\to0$ the solutions $y^\varepsilon(u)$ of (5.70) converge to the solution $y$ of (5.15). More precisely, we have

LEMMA 5.5 If $u^\varepsilon\to u$ weakly in $L^2(0,T;U)$, then

$$y^\varepsilon \to y \ \text{strongly in } L^2(0,T;H^1(\Omega))\cap C([0,T];L^2(\Omega)) \ \text{and weakly in } W^{1,2}([0,T];H)\cap L^2(0,T;H^2(\Omega)) \tag{5.71}$$

$$\beta^\varepsilon(y^\varepsilon) \to -\frac{\partial y}{\partial\nu} \in \beta(y) \ \text{a.e. in } \Sigma, \ \text{weakly in } L^2(\Sigma). \tag{5.72}$$
Proof The proof is standard and similar to that of Lemma 5.1. Multiplying (5.70) by $y^\varepsilon$ and $A_0y^\varepsilon$, we obtain respectively (after some manipulation)

$$|y^\varepsilon(t)|_2^2 + \int_0^T \|y^\varepsilon(t)\|^2_{H^1(\Omega)}\,dt \le C\Big(1+\int_0^T |u^\varepsilon(t)|_U^2\,dt\Big)$$

$$\int_0^T |A_0y^\varepsilon(t)|_2^2\,dt \le C\Big(1+\|y_0\|^2_{H^1(\Omega)}+\int_0^T|u^\varepsilon(t)|_U^2\,dt\Big),$$

where C is independent of ε (see also (1.38)). Hence $\{y^\varepsilon\}$ is bounded in $C([0,T];L^2(\Omega))\cap L^\infty(0,T;H^1(\Omega))\cap L^2(0,T;H^2(\Omega))$ and $\{y^\varepsilon_t\}$ is bounded in $L^2(0,T;L^2(\Omega))$. Thus $\{y^\varepsilon\}$ is compact in $C([0,T];L^2(\Omega))\cap L^2(0,T;H^1(\Omega))$ and weakly compact in $L^2(0,T;H^2(\Omega))\cap W^{1,2}([0,T];L^2(\Omega))$, and (5.71), (5.72) follow by standard methods.

Since by the above estimates the map $u\to y$ is compact from $L^2(0,T;U)$ to $C([0,T];H)$, arguing as in the proof of Proposition 5.1 we conclude that, for every $\varepsilon>0$, problem (P₁ᵉ) has at least one optimal pair $(y_\varepsilon,u_\varepsilon)$. Using Lemma 5.5, it follows as in Lemma 5.2 that

LEMMA 5.6 For $\varepsilon\to0$, we have

$$u_\varepsilon \to u^* \ \text{strongly in } L^2(0,T;U)$$
$$y_\varepsilon \to y^* \ \text{strongly in } C([0,T];L^2(\Omega)) \ \text{and weakly in } L^2(0,T;H^2(\Omega))$$
$$\beta^\varepsilon(y_\varepsilon) \to -\frac{\partial y^*}{\partial\nu} \ \text{weakly in } L^2(\Sigma).$$

Now let $p_\varepsilon\in L^2(0,T;H^1(\Omega))\cap W^{1,2}([0,T];(H^1(\Omega))')\subset C([0,T];L^2(\Omega))$ be the
solution to the boundary value problem

$$(p_\varepsilon)_t - A_0p_\varepsilon = \nabla g^\varepsilon(t,y_\varepsilon) \quad \text{in } Q$$
$$p_\varepsilon(x,T) + \nabla\phi_0^\varepsilon(y_\varepsilon(T))(x) = 0,\ x\in\Omega \tag{5.73}$$
$$\frac{\partial p_\varepsilon}{\partial\nu} + \beta^{\prime\varepsilon}(y_\varepsilon)p_\varepsilon = 0 \quad \text{in } \Sigma.$$

(Here $(p_\varepsilon)_t = p'_\varepsilon$ is the distributional derivative of $p_\varepsilon:[0,T]\to L^2(\Omega)$.) Then by the same reasoning as in Section 5.2 it follows that

$$B^*p_\varepsilon \in \partial h(u_\varepsilon) + u_\varepsilon - u^* \quad \text{a.e. in } ]0,T[ \tag{5.74}$$

$$|p_\varepsilon(t)|_2^2 + \int_0^T\|p_\varepsilon(t)\|^2_{H^1(\Omega)}\,dt + \int_\Sigma |\beta^{\prime\varepsilon}(y_\varepsilon)|\,p_\varepsilon^2\,d\sigma\,dt \le C. \tag{5.75}$$

Hence $\{p'_\varepsilon\}$ is bounded in $L^1(0,T;L^1(\Omega)) + L^2(0,T;(H^1(\Omega))') \subset L^1(0,T;(H^s(\Omega))')$, where $s>N/2$. Then by the Helly theorem, there exists $p\in BV([0,T];(H^s(\Omega))')$ such that for some sequence $\varepsilon\to0$

$$p_\varepsilon(t) \to p(t) \ \text{strongly in } (H^s(\Omega))' \ \text{for every } t\in[0,T] \tag{5.76}$$

and by (5.75)

$$p_\varepsilon \to p \ \text{weakly in } L^2(0,T;H^1(\Omega)) \ \text{and weak star in } L^\infty(0,T;L^2(\Omega)). \tag{5.77}$$

Then, as in the proof of Proposition 5.2, we infer that

$$B^*p \in \partial h(u^*) \quad \text{a.e. in } ]0,T[, \tag{5.78}$$

and by (5.75) there exists $\nu\in(L^\infty(\Sigma))^*$ such that, on a generalized subsequence again denoted ε,

$$\beta^{\prime\varepsilon}(y_\varepsilon)p_\varepsilon \to \nu \ \text{weak star in } (L^\infty(\Sigma))^*.$$

Thus letting ε tend to zero in (5.73) we see that p satisfies the following system:
$$p_t - A_0p \in \partial g(t,y^*) \quad \text{in } Q$$
$$p(x,T) + \partial\phi_0(y^*(T))(x) \ni 0,\ x\in\Omega \tag{5.79}$$
$$\frac{\partial p}{\partial\nu} + \nu = 0 \quad \text{in } \Sigma$$
$$B^*p \in \partial h(u^*) \quad \text{a.e. in } ]0,T[.$$

THEOREM 5.3 Let β satisfy condition (5.45) and let $(y^*,u^*)$ be an arbitrary optimal pair of problem (P₁). Then there exists a function $p\in AC([0,T];(H^s(\Omega))')\cap C_w([0,T];L^2(\Omega))\cap L^2(0,T;H^1(\Omega))$, where $s>N/2$, such that $(\partial p/\partial\nu)\in L^1(\Sigma)$ and

$$p_t - A_0p \in \partial g(t,y^*) \quad \text{a.e. in } Q$$
$$p(x,T) + \partial\phi_0(y^*(T))(x) \ni 0,\ x\in\Omega \tag{5.80}$$
$$\frac{\partial p}{\partial\nu} + p\,\partial\beta(y^*) \ni 0 \quad \text{a.e. in } \Sigma$$
$$B^*p \in \partial h(u^*) \quad \text{a.e. in } ]0,T[.$$
The proof, which is identical with that of Theorem 5.1, relies on the fact that under assumption (5.45) the set $\{\beta^{\prime\varepsilon}(y_\varepsilon)p_\varepsilon\}$ is weakly compact in $L^1(\Sigma)$.

Now we shall consider the case where β is defined by (5.55). Then (5.15) reduces to a parabolic initial value problem with unilateral boundary conditions of Signorini type, i.e.

$$y_t + A_0y = Bu + f \quad \text{a.e. in } Q$$
$$y(x,0) = y_0(x),\ x\in\Omega \tag{5.81}$$
$$y \ge 0,\quad \frac{\partial y}{\partial\nu} \ge 0,\quad y\,\frac{\partial y}{\partial\nu} = 0 \quad \text{a.e. in } \Sigma.$$

THEOREM 5.4 Let $(y^*,u^*)$ be any optimal pair for problem (P₁) governed by the variational inequality (5.81). Then there exists $p\in BV([0,T];(H^s(\Omega))')\cap L^2(0,T;H^1(\Omega))\cap L^\infty(0,T;L^2(\Omega))$ with $(\partial p/\partial\nu)\in(L^\infty(\Sigma))^*$, satisfying the equations

$$p_t - A_0p \in \partial g(t,y^*) \quad \text{a.e. in } Q$$
$$p\,\frac{\partial y^*}{\partial\nu} = 0 \quad \text{a.e. in } \Sigma$$
$$y^*\Big(\frac{\partial p}{\partial\nu}\Big)_a = 0 \quad \text{a.e. in } \Sigma \tag{5.82}$$
$$p(T) + \partial\phi_0(y^*(T)) \ni 0 \quad \text{a.e. in } \Omega$$
$$B^*p \in \partial h(u^*) \quad \text{a.e. in } ]0,T[.$$

The proof is identical with that of Theorem 3.4 and so is omitted. We note only that if either $N=1$ or β is globally Lipschitz, then by (5.64) and (5.79) the singular part of $\partial p/\partial\nu$ vanishes, and therefore (5.82) yields

$$y^*\,\frac{\partial p}{\partial\nu} = 0 \quad \text{in } \Sigma. \tag{5.83}$$
§5.6 Optimal control of finite-dimensional evolution variational inequalities

The method developed above applies in particular to control problems governed by ordinary differential systems and evolution variational inequalities in $R^n$. We shall illustrate this with the following model problem:

Minimize

$$\int_0^T \big(g(t,y(t)) + h(u(t))\big)\,dt + \phi_0(y(T)) \tag{5.84}$$

over all $y\in W^{1,2}([0,T];R^n)$ and $u\in L^2(0,T;R^m)$ subject to

$$y_i'(t) + (Ay(t))_i = (Bu(t))_i + f_i(t) \quad \text{a.e. in } [t;\ y_i(t)>0]$$
$$y_i(t) \ge 0,\quad y_i'(t) + (Ay(t))_i \ge (Bu(t))_i + f_i(t) \quad \text{a.e. } t\in]0,T[,\ i=1,2,\ldots,n \tag{5.85}$$
$$y_i(0) = y_{i,0},\quad i=1,\ldots,n.$$

Here A and B are matrices of dimension $n\times n$ and $n\times m$, respectively, $f=(f_1,\ldots,f_n)\in L^2(0,T;R^n)$, and $y_0=(y_{1,0},\ldots,y_{n,0})$. We have denoted by $y_i$, $(Ay)_i$, $(Bu)_i$ the components of the vectors $y$, $Ay$, $Bu$. The functions $g:[0,T]\times R^n\to R$, $h:R^m\to R$ and $\phi_0:R^n\to R$ are assumed to satisfy hypotheses (v), (vi) in Section 5.1, where $V=H=R^n$ and $U=R^m$.
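A system of the form (5.85) can be integrated numerically by a standard, if crude, scheme: one backward-Euler step for the linear part followed by projection onto the cone $K=\{y;\ y_i\ge0\}$. The matrices and forcing below are invented sample data, chosen so that two components sit on the constraint at steady state:

```python
import numpy as np

def step(y, A, rhs, dt):
    # semi-implicit step: (I + dt*A) y_new = y + dt*(Bu + f), then project onto K
    n = y.size
    y_new = np.linalg.solve(np.eye(n) + dt * A, y + dt * rhs)
    return np.maximum(y_new, 0.0)

A = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.1],
              [0.1, 0.0, 1.0]])
B = np.zeros((3, 2))                   # control inactive in this sample run
f = np.array([-1.0, 0.5, -0.2])        # negative entries activate the constraint
u = np.zeros(2)
y = np.ones(3)
dt, nt = 0.01, 500
for _ in range(nt):
    y = step(y, A, B @ u + f, dt)
```

At convergence the clamped components satisfy $y_i=0$ with $y_i'+(Ay)_i-(Bu)_i-f_i\ge0$, while the free component solves the ODE with equality, mirroring the complementarity structure of (5.85).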
It must be noted that (5.85) can be equivalently written as a variational inequality of the form (4.4), where $V=R^n$, $f$ is replaced by $f+Bu$, and

$$K = \{y=(y_1,\ldots,y_n);\ y_i\ge0 \ \text{for } i=1,\ldots,n\}.$$

According to the general existence theory (for instance Theorem 4.1), if $y_0\in K$ and $u\in L^2(0,T;R^m)$ then this variational inequality has a unique solution $y\in AC([0,T];R^n)$ with $y'\in L^2(0,T;R^n)$. Moreover, by the Arzelà theorem the map $u\to y$ is compact from $L^2(0,T;R^m)$ to $C([0,T];R^n)$, and so, arguing as in Proposition 5.2, we conclude that problem (5.84), (5.85) has at least one optimal pair. As for the maximum principle, it has in this case the following form:

THEOREM 5.5 Let $(y^*,u^*)\in W^{1,2}([0,T];R^n)\times L^2(0,T;R^m)$ be an arbitrary optimal pair of problem (5.84). Then there exist $p\in BV([0,T];R^n)$ and $\mu=(\mu_1,\ldots,\mu_n)\in L^\infty(0,T;R^n)$ such that

$$(p_i'-(A^*p)_i-\mu_i)\,y_i^* = 0 \ \text{in } [0,T],\quad \mu(t)\in\partial g(t,y^*(t)) \ \text{a.e. } t\in]0,T[,\ i=1,\ldots,n \tag{5.86}$$
$$p_i(t) = 0 \ \text{a.e. in } [t\in[0,T];\ y_i^*(t)=0;\ f_i(t)+(Bu^*(t))_i\neq0],\ i=1,\ldots,n \tag{5.87}$$
$$p(T) + \partial\phi_0(y^*(T)) \ni 0 \tag{5.88}$$
$$B^*p(t) \in \partial h(u^*(t)) \quad \text{a.e. } t\in]0,T[. \tag{5.89}$$

Here $A^*$, $B^*$ are the adjoints of A, B, and $p_i'$ are the derivatives of $p_i$ in the sense of distributions, $i=1,\ldots,n$.

Proof The proof is essentially the same as that of Theorem 5.2, but with some simplifications. Consider the problem: Minimize
$$\int_0^T\Big(g^\varepsilon(t,y(t))+h(u(t))+\tfrac12|u(t)-u^*(t)|_m^2\Big)\,dt + \phi_0^\varepsilon(y(T)) \tag{5.90}$$

on all (y,u) subject to

$$y' + Ay + \gamma^\varepsilon(y) = Bu + f \quad \text{a.e. } t\in]0,T[,\qquad y(0)=y_0, \tag{5.91}$$

where $|\cdot|_m$ is the norm of $R^m$ and $\gamma^\varepsilon:R^n\to R^n$ is defined componentwise by

$$\gamma^\varepsilon(y) = (\beta^\varepsilon(y_1),\ldots,\beta^\varepsilon(y_n)),$$

$g^\varepsilon$, $\phi_0^\varepsilon$, $\beta^\varepsilon$ being defined by (5.17), (5.18) and (3.79). Let $(y^\varepsilon,u^\varepsilon)$ be a solution to problem (5.90) and let $p^\varepsilon\in W^{1,2}([0,T];R^n)$ be a corresponding dual extremal arc, i.e.

$$(p^\varepsilon)' - A^*p^\varepsilon - \beta^{\prime\varepsilon}(y^\varepsilon)p^\varepsilon = \nabla g^\varepsilon(t,y^\varepsilon) \quad \text{a.e. } t\in]0,T[ \tag{5.92}$$
$$p^\varepsilon(T) + \nabla\phi_0^\varepsilon(y^\varepsilon(T)) = 0 \tag{5.93}$$
$$B^*p^\varepsilon(t) \in \partial h(u^\varepsilon(t)) + u^\varepsilon(t) - u^*(t) \quad \text{a.e. } t\in]0,T[. \tag{5.94}$$
Reasoning as in Lemma 5.2 we find that

$$u^\varepsilon \to u^* \ \text{strongly in } L^2(0,T;R^m) \tag{5.95}$$
$$y^\varepsilon \to y^* \ \text{strongly in } C([0,T];R^n), \ \text{weakly in } W^{1,2}([0,T];R^n) \tag{5.96}$$
$$\gamma^\varepsilon(y^\varepsilon) \to Bu^* + f - y^{*\prime} - Ay^* \ \text{weakly in } L^2(0,T;R^n), \tag{5.97}$$

while by Lemma 5.4 it follows that, selecting further subsequences,

$$\nabla g^\varepsilon(t,y^\varepsilon) \to \mu \ \text{weakly in } L^2(0,T;R^n), \quad \text{where } \mu(t)\in\partial g(t,y^*(t)) \ \text{a.e. } t\in]0,T[.$$

Taking the scalar product of (5.92) with $p^\varepsilon(t)$ and integrating on $[t,T]$ we get, after some calculation involving Gronwall's lemma, $|p^\varepsilon(t)|\le C$ for $t\in[0,T]$. Now we take the scalar product of (5.92) with $(\operatorname{sgn}p_1^\varepsilon(t),\ldots,\operatorname{sgn}p_n^\varepsilon(t))$ and integrate on $[0,T]$ to get the estimate

$$\sum_{i=1}^n\int_0^T|\beta^{\prime\varepsilon}(y_i^\varepsilon(t))\,p_i^\varepsilon(t)|\,dt \le C. \tag{5.98}$$

Since by (5.92) $\{(p^\varepsilon)'\}$ is bounded in $L^1(0,T;R^n)$, we infer by the Helly theorem that there exist $p\in BV([0,T];R^n)$ and a subsequence $\{\varepsilon_n\}\to0$ such that $p^{\varepsilon_n}(t)\to p(t)$ for every $t\in[0,T]$. Hence $A^*p^{\varepsilon_n}(t)\to A^*p(t)$ for all $t\in[0,T]$, and by (5.98) there exists a measure $\omega=(\omega_1,\ldots,\omega_n)\in((L^\infty(0,T))^*)^n$ such that, on a generalized subsequence $\{\varepsilon\}$ of $\{\varepsilon_n\}$,

$$\beta^{\prime\varepsilon}(y_i^\varepsilon)\,p_i^\varepsilon \to \omega_i \ \text{weak star in } (L^\infty(0,T))^* \ \text{for } i=1,\ldots,n. \tag{5.99}$$

Then, letting ε tend to zero in (5.92), (5.93), (5.94), it follows that p satisfies (5.88), (5.89) and

$$p' - A^*p - \omega = \mu \in L^\infty(0,T;R^n),\qquad \mu(t)\in\partial g(t,y^*(t)) \ \text{a.e. in } ]0,T[. \tag{5.100}$$

To prove (5.86), (5.87) we shall proceed as in the proof of Theorem 5.2. From the definition of $\beta^\varepsilon$ we have (see (5.64), (5.65))

$$|p_i^\varepsilon(t)\,\beta^\varepsilon(y_i^\varepsilon(t))| \le \varepsilon|p_i^\varepsilon(t)| \quad \text{a.e. } t\in]0,T[,\ i=1,\ldots,n \tag{5.101}$$

and

$$|p_i^\varepsilon(t)\,\beta^{\prime\varepsilon}(y_i^\varepsilon(t))\,y_i^\varepsilon(t)| \le |p_i^\varepsilon(t)\,\beta^{\prime\varepsilon}(y_i^\varepsilon(t))|\,(\varepsilon + \varepsilon^{-1}\lambda_i^\varepsilon(t)|y_i^\varepsilon(t)|) + 2\varepsilon|p_i^\varepsilon(t)| \quad \text{a.e. } t\in]0,T[,$$

where

$$\lambda_i^\varepsilon(t) = \begin{cases} 0 & \text{if } y_i^\varepsilon(t) > -\varepsilon^2\\ 1 & \text{if } y_i^\varepsilon(t) \le -\varepsilon^2.\end{cases}$$
Since, by (5.97), $\{\beta^\varepsilon(y_i^\varepsilon) = -\varepsilon^{-1}(y_i^\varepsilon)^-\}$ are bounded in $L^2(0,T)$, we conclude that

$$p_i^{\varepsilon_n}\,\beta^{\varepsilon_n}(y_i^{\varepsilon_n}) \to 0 \ \text{strongly in } L^1(0,T) \ \text{for } i=1,\ldots,n,$$

and therefore, again by (5.97),

$$p_i(t)\big(f_i(t)+(Bu^*(t))_i - y_i^{*\prime}(t)-(Ay^*(t))_i\big) = 0 \quad \text{a.e. } t\in]0,T[,\ i=1,\ldots,n,$$

and (5.87) follows. Next by (5.101) we see that

$$p_i^{\varepsilon_n}\,\beta^{\prime\varepsilon_n}(y_i^{\varepsilon_n})\,y_i^{\varepsilon_n} \to 0 \ \text{strongly in } L^1(0,T) \ \text{for } i=1,2,\ldots,n.$$

Together with (5.96) and (5.99), this implies that $\omega_iy_i^*=0$ for all $i$. Then by (5.100) we conclude that

$$(p_i'-(A^*p)_i-\mu_i)\,y_i^* = 0 \ \text{in } [0,T] \ \text{for } i=1,2,\ldots,n,\qquad \mu(t)\in\partial g(t,y^*(t)) \ \text{a.e. } t\in]0,T[. \tag{5.102}$$

This completes the proof.

REMARK 5.6 In (5.102), $p_i'y_i^*$ is the measure defined by the Stieltjes integral

$$p_i'y_i^*(\phi) = \int_0^T y_i^*(t)\phi(t)\,dp_i(t) \quad \forall\phi\in C([0,T]).$$

This remark can help to make (5.86) more explicit. However, we note that (5.86) implies in particular that $p_i(t)$ is absolutely continuous on every compact interval of $[t\in[0,T];\ y_i^*(t)>0]$ and satisfies there $p_i'-(A^*p)_i-\mu_i = 0$.
Now we shall illustrate this theorem with an optimal control problem arising in management [67]. Consider a factory composed of n workshops, each producing one and only one product. Denote by $u_i$ the working intensity of workshop i, $0\le u_i(t)\le1$, and by $y_i(t)$ the stock level of the corresponding product. We set

$$\pi(t,y,u) = \frac{dy}{dt} - Bu(t) + d(t),\quad t\in[0,T],$$

where B is a technological matrix and $d_i(t)$ is the momentary demand for product i. In the following we shall assume that B is a diagonal matrix, i.e.

$$B = \operatorname{diag}(b_1,b_2,\ldots,b_n),\qquad b_i\ge0 \ \text{for } i=1,\ldots,n.$$

The governing equations of the process,

$$y_i(t)\ge0,\quad \pi_i(t,y,u)\ge0,\quad \pi_i(t,y,u)\,y_i(t)=0 \ \text{a.e. } t\in]0,T[$$
$$y_i(0) = y_{i,0},\quad i=1,2,\ldots,n, \tag{5.103}$$

are of the form (5.85) where $A=0$ and $f=-d$. Consider the following problem:

Minimize

$$\sum_{i=1}^n\int_0^T\big(|y_i(t)-y_i^0(t)|^2 + a_iu_i^2(t) + c_i\pi_i(t,y,u)\big)\,dt \tag{5.104}$$

over all (y,u) subject to (5.103) and to the constraints

$$0 \le u_i(t) \le 1 \quad \text{a.e. } t\in]0,T[,\ i=1,\ldots,n. \tag{5.105}$$

Here $a_i$ and $c_i$, $i=1,2,\ldots,n$, are positive constants. Roughly speaking, this means that one wants to arrive at a desired level of stock $y^0$ with a minimum production cost and minimum stock breaking $\pi_i(t,y,u)$. Noting that

$$\int_0^T\pi_i(t,y,u)\,dt = y_i(T) - y_{i,0} - b_i\int_0^Tu_i(t)\,dt + \int_0^Td_i(t)\,dt,$$

we may represent problem (5.104) in the form (5.84), where
g(t,y)
l:
i=l
2 Iy·(t) -l(t)1 , y ::: (Y1'···'y ) 1
,n
1
n l:
; =1
ciYi' Y = (Yl'Y2'···'Y n) 1 , •.• ,n
h(u) otherwise. Thus by Theorem 5.5 we infer that for every optimal pair (y*,u*) there exists a function p e BV([O,T]~Rn) satisfying the system p~(t) 1
= 2(y~(t)-y?) a.e. in 1 1
o
p "( t) 1
-
a.e. in i
C. 1
whilst the optimal control u*
"(p.+c.)b./2a. 1
REMARK 5.7
=
[y~(t) 1
= O;.d.(t)-b.u~(t) 1 0] 1 1 1
= 1,2, •.. ,n,
= (uf, ... ,u;) ;s given by 111
1
-< 0 if (p i +c i) b i > 2a i .
0
1_
1
if 0 < (p.+c.)b. < 2a.
1111
u~
[y~(t) > OJ, i ::: 1, ..• ,n
if p. + c" 1.
1
1
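The saturated feedback law above is a simple clamped affine map of the dual variable. A direct transcription (with sample values for $a_i$, $b_i$, $c_i$ and $p_i$, which are of course not data from the text):

```python
def u_star(p, a, b, c):
    # optimal working intensity from the dual variable p:
    # the unconstrained stationary point (p + c) * b / (2 a), clipped to [0, 1]
    s = (p + c) * b / (2.0 * a)
    return min(max(s, 0.0), 1.0)

# sample dual values sweeping through the three regimes
controls = [u_star(p, 1.0, 2.0, 0.5) for p in (-3.0, 0.1, 5.0)]
```

For $p=-3$ the workshop is idle, for $p=0.1$ it runs at the interior intensity $0.6$, and for $p=5$ it saturates at full capacity.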
If the state system (5.85) is replaced by
yl(t) + Ay(t) + y(y(t» = Bu(t) + f(t) a.e. t e ]O,T[ (5.106) yeO) ::: yo where A e L(Rn,R n), B e L(Rm,R n) and y:R n + Rn is a locally Lipschitz monotonically increasing mapping, then the maximum principle has the following form: 1 n Let (y*,u*) be an optimal pair. Then there exists p e W ,2([0,T];R ) such that
p'(t)-A*p(t)-(dy(y*(t»)*p(t)-dg(t,y*(t» p(T) + d¢O(y*(T» 204
3
0
3
0 a.e. t e ]O,T[
B*p(t) E ah(u*(t)) a.e. t E ]O,T[ where aY~Rn
+
L(Rn,R n) is the generalized gradient of y.
The proof is essentially the same as that of Theorem 5.1 but the details are left to the reader. For the purpose of computation the optimal control problem (P) is often approximated, via a discretization process, by a finitedimensional problem of the form (5.84) governed by the system (5.106). §5.7 Optimal feedback controls Here we shall study the existence of optimal feedback controls for problem (P). More precisely, we will show that under assumptions (i) to (v.i), where B is locally Lipschitz (and monotonically increasing) and ~ = 0, every optimal control is a feedback optimal control (see Theorem 5.6). We shall aSSume in the following that g is independent of t and coercivity condition (5.3) holds with a = O. Let £:H + R be the function (5.11). As noted earlier (Theorem 1.10), a~ = A + a¢ where ¢:H + R is given by(5.?). LEMMA 5.?
D(d£) is a dense subset of
H.
Proof By Theorem 1.10 we know that O(a~) = UTAH) n~) = ~). On the other hand, O(¢) is a dense subset of H (in particular it contains the space Coo(~)) and so IT[aI) = orr) = H as claimed. Then as noted in Section 5.1, for every Yo E H the Cauchy problem y'
+
y(t)
Ay
+
a¢(y)
3
Bu a.e.
5 E
Jt,T[ (5.107)
= Yo
has a unique solution y = y(s,t,yo'u) E Wl,2([a,T]~H) n C([t,T];H) n L2(a,T;O(A )) va > t. H For every yO E Hand t E [O,T] define the function
o(t,yO) =
inf (
J:
(g(y(s,t,yo'u»
+ ¢O(y(T,yO'u))~
+
h(u(s»)ds
2 u E L (t,T~U)}.
(5.108)
205
The function ¢:.[O,TJ x H -+ R is the optimaL 'VaLue function of problem (P). it is readily seen that ¢ is well defined and e~erywhere finite on [O,TJ x H. La~MA
5.8 For ellery (t,yO) E [O,T] x H the infimum defining ¢(t,yO) is For every t E [O,T] the function YO -+ ¢(t,yO) is locally Lipschitz and for every YO E D(at) = D(ft'H) n D(acp) the function t -+ ¢(t,yO) is Lipschitz on [O,T]. attained.
Proof The first part of the lemma follows by Proposition 5.1 and Lemma 5.7. Now since the operator A + acp = a~ is monotone in H we have (see Section 1.7) 'V
'V
ly(s,t,yo,u)-y(s,t'YO,u)!2 < !YO-YO!2' t < s < T. Now multiplying (5.107) by y(s)-yO where yO yields ly(s,t'YO,u)!2 < IYol2 + C(
f:
where C is independent of yo and u. Let yo E H be such that IYOl2 < r. o(t,yO) " inf
(f:
E
(5.109)
OCt) and integrating on [t,T]
IU(T)I U dT+l) for t < s < T
(5.110)
We have
(g(y(s,t,yo'u)) + h(u(s»)ds
2
+ CPO(y(T,t,yO'u)); U E L (t,T;U)} (5.111) By virtue of condition (5.6) we may restrict (5.111) to the class of u E L2{t,T;U) which satisfy the condition J~ lU(T)IG dT < C1. Let us denote by M this subset of L2(t,T;U). In point of fact we may as~ume that r Mr = {u E L (t,T;U); IU(T)I U < Cr } where Cr is independent of t and IY o I2
ut (s) E ah -1 (B *p t (s)) a. e. s E ] t, T[ t
00
2
where pEL (t,T;H) n L (t,T;V) is a dual extremal arc for problem (5.111). 206
By estimate (5.33) it follows that t 2 Ip (s)1 2
~
t 2 C(ly (t)1 2
where C is independent of t.
+-
(T
t
2
J Iy (s)1 2ds), t < s < T t
Hence
/p t (s)1 2 < Cr3 for s E [t,T] and, since by assumption (5.6) the map ah- 1 is bounded on bounded subsets, we conclude that lut(s)l u < C2 for s E [t,T] and a suitable positive constant 2 r C3 · Now by estimates (5.109) and (5.110) we see that for every u E M the fu~ction Yo ~ J~ g(y(s,t'YO,u))ds is locally Lipschitz with Lipschi~z constant independent of u. Hence the function y ~ ¢(t,y) is locally Lipschitz on H. On the other hand, for every yO E D(az) we hav.e ~
~
Iy(s,t,yo'u) - y(s,t,yo,u)1 2 < ly(t,t,yo,u)-Y o i 2 ~
(t
< J lBU(T)!2 dT t
~
Id~(Yo)12 It-tl < (C r
+
~
+
laz(YO)1 2)lt-tl
~
for t < t < s < T. 2
Let ut E L (t,T;U) and Yt
¢(t,yO) =
(5.112)
= y(s,t,yo'u t ) be such that
f: (g(yt(s)) • h(Ut(s)))ds
+
.O(Yt(T))
~
and let v(s) = Uo for t < s < t, v(s) = Utes) for t < s < T where Uo is such that h(u O) < + 00. We have
~
Q(t,yo) - ¢(t,yo) < +
Itt
~
(g(y(s,t,yo'V)
+
h(uO))ds
~ fTt (g(y(s,t,yo,v))-g(y(s,t,yo,v)))ds ~
+ ~O(y(T,t,yo,v))
-
~O(y(T,t,yo,v)).
Together with (5.112) the latter yields
207
LEMMA 5.9 For all $t\in[0,T]$ and $y_0\in H$ we have

$$\varphi(0,y_0) = \inf\Big\{\int_0^t(g(y(s,0,y_0,u))+h(u(s)))\,ds + \varphi(t,y(t,0,y_0,u));\ u\in L^2(0,t;U)\Big\}. \tag{5.113}$$

Proof Let $(\bar y,\bar u)$ be such that $y(s,0,y_0,\bar u)=\bar y$ and

$$\varphi(0,y_0) = \int_0^t(g(\bar y(s))+h(\bar u(s)))\,ds + \int_t^T(g(\bar y(s))+h(\bar u(s)))\,ds + \phi_0(\bar y(T)).$$

This yields

$$\varphi(0,y_0) \ge \varphi(t,\bar y(t)) + \int_0^t(g(\bar y(s))+h(\bar u(s)))\,ds. \tag{5.114}$$

On the other hand, for all $u\in L^2(0,T;U)$ and $y=y(s,0,y_0,u)$ we have

$$\varphi(0,y_0) \le \int_0^t(g(y(s))+h(u(s)))\,ds + \int_t^T(g(y(s))+h(u(s)))\,ds + \phi_0(y(T)).$$

We may choose the pair (y,u) in such a way that

$$\varphi(t,y(t)) = \int_t^T(g(y(s))+h(u(s)))\,ds + \phi_0(y(T)),$$

and therefore

$$\varphi(0,y_0) \le \int_0^t(g(y(s))+h(u(s)))\,ds + \varphi(t,y(t)).$$

Together with (5.114) this inequality implies (5.113), as claimed.

THEOREM 5.6 Let assumptions (i) to (vi) and (5.45) be satisfied, and let $(y^*,u^*)$ be an optimal pair in problem (P) where $y_0\in D(\ell)$. Then

$$u^*(t) \in \partial h^*(-B^*\partial\varphi(t,y^*(t))) \quad \text{a.e. } t\in]0,T[. \tag{5.115}$$
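The dynamic programming identity (5.113) behind Theorem 5.6 is easy to check numerically on a finite problem. The toy below (one-dimensional state, two stages, a coarse control grid, invented functions $g$, $h$, $\phi_0$ and dynamics) verifies that minimizing over the whole control sequence agrees with the staged minimization through the value function:

```python
import numpy as np

def g(y): return y * y                 # running state cost
def h(u): return 0.1 * u * u           # running control cost
def phi0(y): return y * y              # terminal cost
def step(y, u): return 0.9 * y + u     # stand-in discrete dynamics

U = np.linspace(-1.0, 1.0, 201)        # control grid
y0 = 1.0

def V1(y):
    # value of the remaining one-stage problem started at y
    return min(g(y) + h(u) + phi0(step(y, u)) for u in U)

# brute-force value over both stages at once
V0_direct = min(g(y0) + h(u0) + g(step(y0, u0)) + h(u1)
                + phi0(step(step(y0, u0), u1))
                for u0 in U for u1 in U)

# staged value via the dynamic programming principle (5.113)
V0_dp = min(g(y0) + h(u0) + V1(step(y0, u0)) for u0 in U)
```

The two quantities coincide (up to floating-point noise), which is exactly the decomposition the lemma asserts in continuous time.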
Proof By Lemma 5.9, for every $t\in[0,T]$ the pair $(y^*,u^*)$ restricted to $[0,t]$ is optimal for the control problem

$$\inf\Big\{\int_0^t(g(y(s,0,y_0,u))+h(u(s)))\,ds + \varphi(t,y(t,0,y_0,u));\ u\in L^2(0,t;U)\Big\}. \tag{5.116}$$

Then by virtue of Theorem 5.1, for every $t\in[0,T]$ there exists $p^t\in AC([0,t];V^*)\cap C_w([0,t];H)$ satisfying the equations

$$B^*p^t(s) \in \partial h(u^*(s)) \quad \text{a.e. } s\in]0,t[ \tag{5.117}$$
$$p^t(t) \in -\partial\varphi(t,y^*(t)). \tag{5.118}$$

It is well known that every measurable function is a.e. approximately continuous on $[0,T]$. Let E be the set of all points $t\in[0,T]$ where $u^*$ is approximately continuous. This means that for every $t\in E$ there exists a measurable subset $E_t\subset[0,T]$ having the property that t is a density point of $E_t$ and $u^*$ restricted to $E_t$ is continuous at t. Let $\tilde E_t$ be the set of all $s\in[0,t]$ which satisfy (5.117), where t is a fixed point in E. Obviously there exists at least one sequence $\{t_n\}\subset\tilde E_t\cap E$ convergent to t for $n\to\infty$. Hence $u^*(t)=\lim_{t_n\to t}u^*(t_n)$, where $B^*p^t(t_n)\in\partial h(u^*(t_n))$. Since $p^t(t_n)$ is weakly convergent to $p^t(t)$ and $\partial h$ is strongly-weakly closed in $U\times U$, we conclude that $B^*p^t(t)\in\partial h(u^*(t))$ for all $t\in E$. Together with (5.118), the latter implies (5.115), and the proof is complete.

COROLLARY 5.1 In Theorem 5.6 assume in addition that h is Gateaux differentiable on U and the range R(B) of B is a dense subset of H. Let p be any dual extremal arc associated with $(y^*,u^*)$ by Theorem 5.1. Then

$$p(t) \in -\partial\varphi(t,y^*(t)) \ \text{for all } t\in[0,T]. \tag{5.119}$$

Proof Let p be any dual extremal arc associated with $(y^*,u^*)$. Then by (5.49) and (5.117) we conclude that $p^t(s)=p(s)$ for all $s\in[0,t]$, because $\partial h$ is single valued, the kernel $N(B^*)=\{0\}$, and the functions $p^t$, p are weakly continuous on $[0,T]$. Together with (5.118), this yields (5.119), as claimed.

REMARK 5.8 Under supplementary assumptions on B and h it follows from Theorem 5.6 (see [13] and [15], Chapter 3) that the function $\varphi$ is a solution to the Hamilton-Jacobi equation

$$\varphi_t(t,y) + h(\nabla h^*(-B^*\partial\varphi(t,y))) + (\partial\varphi(t,y),\ B\nabla h^*(-B^*\partial\varphi(t,y)) - \partial\ell(y)) + g(y) = 0 \tag{5.120}$$

in a certain generalized sense. On the other hand, every sufficiently smooth solution to (5.120) can be put into the form (5.108). For a direct treatment of (5.120), as well as for its relationship to control theory, we refer the reader to [15] (see [18], [38], [56] for related finite-dimensional results).

§5.8 Optimal control problems with infinite time horizon

We shall study here the control problem

$$\inf\Big\{\int_0^\infty(g(y(s,0,y_0,u))+h(u(s)))\,ds;\ u\in L^2_{loc}(R^+;U)\Big\} = \varphi_\infty(y_0). \tag{5.121}$$
Here $y(s,0,y_0,u)$ is the solution to (5.1) where $\psi=0$, $f=0$. As well as hypotheses (i) to (vi), the following assumptions hold throughout this section:

(j) g is independent of t and $g(0)=0$.

(jj) The function $h:U\to R$ is Gateaux differentiable on U and $h(0)=0$, $\nabla h(0)=0$, $\overline{R(B)}=H$.

(jjj) Condition (5.3) holds with $\alpha=0$.

The basic reference for this section is [13].

LEMMA 5.10 The function $\varphi_\infty:H\to R$ is locally Lipschitzian, and for every $y_0\in H$ the infimum defining $\varphi_\infty(y_0)$ is attained.

Proof Let $y_0\in H$ be arbitrary but fixed. By assumption (j) there exists $u\in L^2(R^+;U)$ such that $h(u)\in L^1(R^+)$ and $g(y(t,0,y_0,u))\in L^1(R^+)$. Indeed, it suffices to take $u=-B^*y$, where y is the solution to

$$y' + \partial\ell(y) + BB^*y \ni 0 \ \text{a.e. } t>0,\qquad y(0)=y_0.$$

Then by condition (jjj) we see that $|y(t)|_2\le\exp(-\omega t)|y_0|_2$. Hence $\varphi_\infty(y_0)<+\infty$. Arguing as in the proof of Lemma 5.8, it follows that the infimum in (5.121) is attained. Now for $|y_0|_2\le r$ it follows by (5.121) that

$$\varphi_\infty(y_0) \le \int_0^\infty g(y(t,0,y_0,0))\,dt \le C_r.$$

Hence we may confine ourselves in (5.121) to those $u\in L^2_{loc}(R^+;U)$ which satisfy the inequality

$$\int_0^\infty h(u(t))\,dt \le C_r.$$

Denote by $U_r$ this subset of $L^2_{loc}(R^+;U)$. Then for every $u\in U_r$ we see by (5.3) and (5.6) that

$$|y(t,0,y_0,u)|_2 \le e^{-\omega t}|y_0|_2 + \|B\|\int_0^te^{-\omega(t-s)}|u(s)|_U\,ds \le e^{-\omega t}r + \|B\|(C_r+C).$$

Let $y_0, z_0$ be arbitrary in $\Sigma_r=\{y\in H;\ |y|_2\le r\}$. We have

$$\varphi_\infty(y_0)-\varphi_\infty(z_0) \le \int_0^\infty(g(y(t,0,y_0,v^*))-g(y(t,0,z_0,v^*)))\,dt,$$

where $v^*$ is such that

$$\varphi_\infty(z_0) = \int_0^\infty(g(y(t,0,z_0,v^*))+h(v^*(t)))\,dt.$$

Since g is locally Lipschitz and, by (5.3), (jjj),

$$|y(t,0,y_0,v^*)-y(t,0,z_0,v^*)|_2 \le e^{-\omega t}|y_0-z_0|_2,\quad t\ge0,$$

we find that $|\varphi_\infty(y_0)-\varphi_\infty(z_0)|\le C_r|y_0-z_0|_2$, thereby completing the proof.

In particular, it follows by Lemma 5.10 that for every $y_0\in H$ the optimization problem (5.121) has at least one optimal pair $(y^*,u^*)\in C(R^+;H)\times L^2_{loc}(R^+;U)$.
Throughout the following we shall assume that $y_0\in D(\ell)$. Then $y^*=y(t,0,y_0,u^*)\in W^{1,2}_{loc}(R^+;H)\cap L^2_{loc}(R^+;D(A_H))$.

THEOREM 5.7 If $(y^*,u^*)\in W^{1,2}_{loc}(R^+;H)\times L^2_{loc}(R^+;U)$ is an optimal pair in problem (5.121), then there exists $p\in L^\infty(R^+;H)\cap L^2_{loc}(R^+;V)\cap AC_{loc}(R^+;V^*)\cap C_w(R^+;H)$ which satisfies the system

$$p' - Ap - p\,\partial\beta(y^*) - \partial g(y^*) \ni 0 \quad \text{a.e. } t>0 \tag{5.122}$$
$$B^*p(t) = \nabla h(u^*(t)) \quad \text{a.e. } t>0 \tag{5.123}$$
$$p(t) \in -\partial\varphi_\infty(y^*(t)) \quad \text{for all } t\ge0. \tag{5.124}$$
Proof Arguing as in the proof of Lemma 5.8, it follows that for every $t>0$

$$\varphi_\infty(y_0) = \inf\Big\{\int_0^t(g(y(s,0,y_0,u))+h(u(s)))\,ds + \varphi_\infty(y(t));\ u\in L^2(0,t;U)\Big\}. \tag{5.125}$$

From Theorem 5.1 we deduce that there exists

$$p^t \in L^\infty(0,t;H)\cap L^2(0,t;V)\cap AC([0,t];V^*)\cap C_w([0,t];H)$$

such that

$$p^t_s - Ap^t - p^t\,\partial\beta(y^*) - \partial g(y^*) \ni 0 \quad \text{in } Q_t = \Omega\times]0,t[ \tag{5.126}$$
$$p^t(t) + \partial\varphi_\infty(y^*(t)) \ni 0 \tag{5.127}$$
$$B^*p^t(s) = \nabla h(u^*(s)) \quad \text{a.e. } s\in]0,t[. \tag{5.128}$$

(Here the subscript s denotes, as usual, the derivative of $p^t$ as a function of s from $[0,t]$ to $V^*$.) Inasmuch as, by assumptions (j) and (jj), $N(B^*)=\{0\}$ and $\partial h$ is single valued, it follows by (5.128) that we may take $p^t(s)=p^{\tilde t}(s)$ for $0\le s\le t\le\tilde t$. Hence there is a single function p on $R^+$ with $p=p^t$ on $[0,t]$ for every $t>0$, and p satisfies (5.122), (5.123) and (5.124). By (5.3) we see that the function $t\to|y^*(t)|_2$ is bounded on $R^+$. Inasmuch as $\partial\varphi_\infty$ is locally bounded, it follows by (5.124) that $p\in L^\infty(R^+;H)$. The proof of Theorem 5.7 is complete.

From Theorem 5.7 ((5.123), (5.124)) we may also conclude that every optimal control $u^*$ of problem (5.121) is an optimal feedback control of the form

$$u^*(t) = \nabla h^*(-B^*\partial\varphi_\infty(y^*(t))) \quad \text{a.e. } t>0. \tag{5.129}$$
Since by (5.125)

$$\varphi_\infty(y^*(t)) = \int_t^\infty(g(y^*(s))+h(u^*(s)))\,ds \quad \forall t\ge0,$$

a little calculation involving (5.129) and the conjugacy formula (1.19) reveals that $\varphi_\infty$ is a solution to the Bellman equation

$$(\partial\varphi_\infty(y),\partial\ell(y)) + h^*(-B^*\partial\varphi_\infty(y)) = g(y) \quad \forall y\in D(\partial\ell).$$

For details we refer the reader to [13], [15].

§5.9 Control via initial conditions

Consider the following model problem:

Minimize
$$X(u) + \phi_0(y(T)) \tag{5.130}$$

over all $u\in D(X)$ and $y\in W^{1,2}(]0,T];H)\cap C([0,T];H)$ subject to

$$y_t + A_0y + \beta(y) \ni f \quad \text{a.e. in } Q = \Omega\times]0,T[$$
$$y = 0 \quad \text{in } \Sigma = \Gamma\times]0,T[ \tag{5.131}$$
$$y(x,0) = u(x) \quad \text{a.e. } x\in\Omega.$$

Here $f\in L^2(Q)$, β is a maximal monotone graph in $R\times R$ such that $0\in\beta(0)$, and

$$\frac{X(u)}{|u|_2} \to +\infty \quad \text{for } |u|_2\to+\infty \tag{5.132}$$

$$D(X) \subset \overline{D(\phi)}. \tag{5.133}$$
Here $D(X)$ and $D(\phi)$ are the effective domains of X and φ, respectively (φ is defined by (5.7)). As noted in Section 5.1 (see also the proof of Theorem 1.13), we have the estimate

$$\int_0^T t\,|y_t(t)|_2^2\,dt + \int_0^T\|y(t)\|^2_{H^1_0(\Omega)}\,dt + |y(t)|_2^2 \le C\Big(|u|_2^2 + \int_Q|f|^2\,dx\,dt\Big)$$

for all $u\in\overline{D(\phi)}$. Then by the Ascoli-Arzelà theorem we infer that the map $u\to y$ is compact from H to $C([0,T];H)$ and weakly compact from H to $W^{1,2}(]0,T];H)$. Then, using conditions (5.132) and (5.133), we conclude as in the proof of Proposition 5.1 that problem (5.130) admits at least one optimal pair.

To obtain first-order necessary conditions of optimality we shall proceed as in the previous situations. Namely, if $(y^*,u^*)$ is an arbitrary optimal pair in problem (5.130), consider the penalized problem:

Minimize

$$X(u) + \phi_0^\varepsilon(y(T)) + \tfrac12|u-u^*|_2^2 \tag{5.134}$$

subject to

$$y_t + A_0y + \beta^\varepsilon(y) = f \ \text{in } Q,\qquad y(x,0) = u(x),\ x\in\Omega,\qquad y = 0 \ \text{in } \Sigma,$$
where $\beta^\varepsilon$, $\phi_0^\varepsilon$ are defined as above. If $(y_\varepsilon,u_\varepsilon)$ is an optimal pair in problem (5.134), then by the same reasoning we find that

$$u_\varepsilon \to u^* \ \text{strongly in } H = L^2(\Omega)$$
$$y_\varepsilon \to y^* \ \text{strongly in } C([0,T];H)\cap W^{1,2}(]0,T];H).$$

Let $p_\varepsilon\in W^{1,2}([0,T];H^{-1}(\Omega))\cap L^2(0,T;H^1_0(\Omega))$ be the solution to the system

$$(p_\varepsilon)_t - A_0p_\varepsilon - p_\varepsilon\,\beta^{\prime\varepsilon}(y_\varepsilon) = f \quad \text{in } Q$$
$$p_\varepsilon(\cdot,T) + \nabla\phi_0^\varepsilon(y_\varepsilon(T)) = 0 \quad \text{in } \Omega.$$

Then it follows by a standard method that

$$p_\varepsilon(\cdot,0) + u^* - u_\varepsilon \in \partial X(u_\varepsilon) \quad \text{a.e. in } \Omega.$$
Having arrived at this point, we can obtain, by proofs identical with those of Theorems 5.1 and 5.2 (Proposition 5.3), the following optimality theorems:

THEOREM 5.8 Let $(y^*,u^*)\in W^{1,2}(]0,T];H)\times H$ be an optimal pair of problem (5.130), where β satisfies condition (5.45). Then there exist $p\in L^2(0,T;H^1_0(\Omega))\cap AC([0,T];H^{-s}(\Omega))\cap C_w([0,T];H)$, $s>N/2$, and $w\in L^1(Q)$ such that

$$p_t - A_0p - w = f \quad \text{a.e. in } Q$$
$$w(x,t) \in p(x,t)\,\partial\beta(y^*(x,t)) \quad \text{a.e. } (x,t)\in Q$$
$$p(x,T) + \partial\phi_0(y^*(T))(x) \ni 0 \quad \text{a.e. } x\in\Omega$$
$$p(x,0) \in \partial X(u^*)(x) \quad \text{a.e. } x\in\Omega.$$

Moreover, if β is Lipschitzian then $w\in L^2(Q)$ and $p\in C([0,T];H)\cap W^{1,2}(]0,T];H)\cap H^{1,2}([0,T];H^{-1}(\Omega))$.

It should be noted that, under the conditions of Theorem 5.8, $\overline{D(\phi)}=H$ (see Lemma 5.7) and hypothesis (5.133) is redundant. In the special case where β is given by (5.55), $D(\phi)=\{y\in L^2(\Omega);\ y(x)\ge0 \ \text{a.e. } x\in\Omega\}$.

THEOREM 5.9 Let $(y^*,u^*)\in W^{1,2}(]0,T];H)\times H$ be any optimal pair in problem (5.130), where β is given by (5.55). Then there exists $p\in L^2(0,T;H^1_0(\Omega))\cap L^\infty(0,T;H)\cap BV([0,T];H^{-s}(\Omega))$ such that $p_t-A_0p\in(L^\infty(Q))^*$ and

$$(p_t - A_0p)_a = f \quad \text{a.e. in } [y^*>0]$$
$$p = 0 \quad \text{a.e. in } [y^*=0]\cap[f\neq0]$$
$$p(x,T) + \partial\phi_0(y^*(T))(x) \ni 0 \quad \text{a.e. } x\in\Omega$$
$$p(x,0) \in \partial X(u^*)(x) \quad \text{a.e. } x\in\Omega.$$
Similar results can be obtained for systems with nonlinear boundary conditions of the form (5.15), or for the general parabolic obstacle problem (5.14). Instead of (5.130) we might choose a cost functional of the form

$$\int_0^T g(t,y(t))\,dt + \Psi(u,y(T)),$$

where $\Psi:H\times H\to R$, but a discussion of these generalizations would be a major digression.

Problem (5.130) is relevant to the study of the backward Cauchy problem

$$y_t - \Delta y + \beta(y) \ni f \quad \text{in } Q$$
$$y(x,T) = y_T(x),\ x\in\Omega \tag{5.135}$$
$$y = 0 \quad \text{in } \Sigma,$$

which is ill posed. The least-squares approach to this problem leads to a control problem of the form (5.130), where

$$X(u) = \begin{cases} \frac{\lambda}{2}|u|_2^2 & \text{if } u\ge0 \ \text{a.e. in } \Omega\\ +\infty & \text{otherwise,}\end{cases}\qquad \phi_0(y) = \frac12|y-y_T|_2^2.$$

For instance, in the case where β is defined by (5.55), i.e. problem (5.135) reduces to the complementarity system

$$y \ge 0,\quad y_t - \Delta y - f \ge 0,\quad y(y_t-\Delta y-f) = 0 \ \text{in } Q$$
$$y(\cdot,T) = y_T \ \text{in } \Omega,\qquad y = 0 \ \text{in } \Sigma,$$

then the optimality system associated with this problem has the following form (Theorem 5.9):

$$(p_t - \Delta p)_a = f \quad \text{a.e. in } [y^*>0]$$
$$p = 0 \quad \text{in } [y^*=0]\cap[f\neq0]$$
$$p(x,T) + y^*(x,T) = y_T(x) \quad \text{a.e. } x\in\Omega$$
u(x) = A- 1p+(X,0) a.e. x E ~. Now we shall consider the situation where S is Lipschitzian and ¢O' X are defined as above. Let (YA'U A) be an optimal pair for problem (5.130). Then -s 1 2 by Theorem 5.8 there exists PA E AC([O,T]~H (J)) n C([O,TJ~H) n W ' (JO,TJ;H) such that (PA)t + 6PA-PAdS(YA) 3 PA(T)
+.
yA(T) - YT =
PA(O) = AU A in
f
a.e. in Q
° in n
(5.136)
n.
On the other hand, we have
Hence for A ~ 0,
for all A > 0 and all pairs (y,u) satisfying (5.131). AU A ~
°strongly in H.
{YA-YT} is bounded in H. Now we multiply (5.136) by PA and integrate on
IPA(t) 122
+
IT II P (t) II 2 1 A
°
HO(~)
nx
]O,T[ to get
dt -< C, A > 0, t E [0, T]
II(PA)tll 2 -1 -< C VA L (O,T;H (~)) because {as(y )} is bounded in Loo(Q). A such that
>
0,
Thus there exists a subsequence An ~ 0
weak star in Loo(Q) weakly in H .
1 2
weakly ln W '([O,TJ~H strongly in L2(Q).
-1
(0))
2
1
n L (O,T;HO(n)) and
217
p (0) An
0 strongly in H.
+
Then letting A = An tend to zero in (5.136) we get Pt
6P -
+
p~
= f in Q
p(T)
- n
in r2
p(O)
0
in r2, P = 0 in L
Since the latter system has the backward uniqueness property we conclude that n = 0 and therefore YA
+
YT weakly in H.
n
We have therefore shown that if 6 is Lipschitzian thAn system 2
(5.131) is
weakly controllable in L (r2).
This result, together with other related controllability results, has previously been obtained by Henry [42J. For general locally Lipschitz functions 6 it follows from some results due to Bardos and Tartar that systems (5.131) are not weakly controllable. §5.10 Control of periodic systems Consider the following control problem: Minimize
f
To
(g(t,y(t))
+
h(u(t)))dt
1 2
(5.137)
1 2 n L2(O,T;H O (r2) n H (r2))
on all yEW' ([O,TJ;H)
2
and u E L (O,T;U) subject
to
Yt
+
AOY
y(x,O)
=
+
6(y)
y(x,T)
3
Bu a.e. in Q x E r2; Y = 0 in
L
( 5.138)
where 6 is a maximal monotone graph in R x R such that 0 E 6(0), AO is the elliptic operator (4.32) and B E L(U,H), g, h satisfy assumptions (iv), (v), (vi) .
With the notation introduced in Section 5.1, problem (5.138) can be rewritten as

$$y'(t) + Ay(t) + \partial\varphi(y(t)) \ni Bu(t), \quad t \in [0,T] \qquad (5.138)'$$
$$y(0) = y(T).$$

Noting that for some $\omega > 0$,

$$(Ay,y) \ge \omega\|y\|^2_{H^1_0(\Omega)},$$
it follows by a classical method using Theorem 1.13 that problem (5.138)' has a unique solution $y \in W^{1,2}([0,T];H) \cap L^2(0,T;D(A))$ (see [19], Corollary 3.4). Moreover, we have the estimate

$$|y(t)|_2^2 \le (1-e^{-\omega T})^{-1}\int_0^T e^{-\omega(T-s)}|Bu(s)|_2^2\,ds + \int_0^t e^{-\omega(t-s)}|Bu(s)|_2^2\,ds, \quad t \in [0,T]. \qquad (5.139)$$
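Estimate (5.139) follows from the coercivity of $A$ by the usual energy method; the following sketch (not reproduced in the text; multiplicative constants depending only on $\omega$ are absorbed into the right-hand side) indicates the computation:

```latex
% Multiply (5.138)' by y(t); since 0 \in \beta(0), the monotone term is nonnegative:
\frac{1}{2}\frac{d}{dt}|y(t)|_2^2 + \omega\|y(t)\|^2_{H^1_0(\Omega)} \le (Bu(t),y(t)),
% hence, by the Cauchy and Poincare inequalities,
\frac{d}{dt}|y(t)|_2^2 + \omega|y(t)|_2^2 \le C_\omega\,|Bu(t)|_2^2 .
% Gronwall's lemma gives
|y(t)|_2^2 \le e^{-\omega t}|y(0)|_2^2
  + C_\omega\int_0^t e^{-\omega(t-s)}|Bu(s)|_2^2\,ds ,
% and the periodicity condition y(0) = y(T), applied at t = T, yields
|y(0)|_2^2 \le (1-e^{-\omega T})^{-1}C_\omega
  \int_0^T e^{-\omega(T-s)}|Bu(s)|_2^2\,ds ,
% which combine to the form of (5.139).
```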
Then by estimates (1.69), (1.70) we infer that the mapping $u \to y$ is bounded from $L^2(0,T;U)$ to $W^{1,2}([0,T];H) \cap L^2(0,T;H^1_0(\Omega) \cap H^2(\Omega))$ and therefore compact from $L^2(0,T;U)$ to $C([0,T];H)$. Then by Proposition 5.1 we infer that problem (5.137) has at least one optimal pair. Let $(y^*,u^*)$ be such an optimal pair. Proceeding as in Section 5.2, consider the problem: Minimize
$$\int_0^T \Big(g^\varepsilon(t,y(t)) + h(u(t)) + \tfrac{1}{2}|u(t)-u^*(t)|_U^2\Big)dt + \frac{1}{2\varepsilon}|y(0)-y(T)|_2^2 \qquad (5.140)$$

on all $(y,u) \in W^{1,2}([0,T];H) \times L^2(0,T;U)$ subject to

$$y'(t) + Ay(t) + \beta^\varepsilon(y(t)) = Bu(t) \quad \text{a.e. } t \in ]0,T[. \qquad (5.141)$$
Using estimate (5.139) we infer as above that for every $\varepsilon > 0$ problem (5.140) admits at least one solution $(y_\varepsilon,u_\varepsilon)$. Then arguing as in the proof of Lemma 5.2 it follows that
$$u_\varepsilon \to u^* \quad \text{strongly in } L^2(0,T;U)$$
$$y_\varepsilon \to y^* \quad \text{strongly in } C([0,T];H) \text{ and weakly in } W^{1,2}([0,T];H).$$

Now let $p_\varepsilon \in W^{1,2}([0,T];H^{-1}(\Omega)) \cap L^2(0,T;H^1_0(\Omega))$ be the solution to the linear problem

$$p'_\varepsilon(t) - Ap_\varepsilon(t) - p_\varepsilon(t)\dot\beta^\varepsilon(y_\varepsilon(t)) = \nabla g^\varepsilon(t,y_\varepsilon(t)) \quad \text{a.e. } t \in ]0,T[$$
$$p_\varepsilon(T) = (2\varepsilon)^{-1}(y_\varepsilon(T)-y_\varepsilon(0)). \qquad (5.142)$$

Then, using the fact that $(y_\varepsilon,u_\varepsilon)$ is optimal in problem (5.140), we find as in the proof of (5.30) that

$$p_\varepsilon(0) = p_\varepsilon(T)$$
$$B^*p_\varepsilon(t) \in \partial h(u_\varepsilon(t)) + u_\varepsilon(t) - u^*(t) \quad \text{a.e. } t \in ]0,T[.$$
Multiplying (5.142) by $p_\varepsilon(t)$ and using the coercivity property of $A$ we find

$$|p_\varepsilon(T)|_2 \le (1-e^{-\omega T})^{-1}\int_0^T e^{-\omega(T-t)}|\nabla g^\varepsilon(t,y_\varepsilon(t))|_2\,dt \le C \quad \forall\varepsilon > 0.$$
Thus the estimates (5.31), (5.32) remain valid in this situation and we may pass to the limit in (5.142) to obtain, as in the case of problem (P), the following optimality theorems.

THEOREM 5.10 Let $(y^*,u^*) \in W^{1,2}([0,T];H) \times L^2(0,T;U)$ be any optimal pair in problem (5.137), where $\beta$ is a locally Lipschitz function. Then there exist $p \in BV([0,T];H^{-s}(\Omega)) \cap L^2(0,T;H^1_0(\Omega)) \cap L^\infty(0,T;H)$ and $\mu \in (L^\infty(Q))^*$ which satisfy (5.46), (5.47), (5.49) together with the periodicity condition

$$p(0) = p(T).$$

If $\beta$ satisfies condition (5.45), then $p \in AC([0,T];H^{-s}(\Omega)) \cap C_w([0,T];H)$.

In the case where $\beta$ is the multivalued graph (5.55), the corresponding optimality theorem is identical with Theorem 5.2, except that the transversality condition (5.58) is replaced by the periodicity condition.
§5.11 Various optimality results for nonlinear distributed control systems

The theory presented above has been developed (see [83]) in a more general context which includes optimal control problems governed by nonlinear parabolic and nonlinear hyperbolic equations and by ordinary delay systems. We begin this section by looking at some results in these areas without going into details or setting them in a generalized framework. The rest of the section is concerned with some results due to Saguez [80] and Tiba and Zhou Meike [84] on the control of the Stefan problem.

(1) A parabolic optimal control problem
Consider the following optimization problem: Minimize

$$\int_0^T \big(g(y(t)) + h(u(t))\big)\,dt \qquad (5.143)$$

on all $y \in L^p(0,T;W^{1,p}_0(\Omega)) \cap W^{1,q}([0,T];W^{-1,q}(\Omega))$, $p^{-1}+q^{-1} = 1$, and $u \in L^2(0,T;U)$ subject to

$$y_t - \sum_{i=1}^N (a_i(y_{x_i}))_{x_i} = Bu \quad \text{in } Q = \Omega \times ]0,T[$$
$$y(x,0) = y_0(x) \quad \text{a.e. } x \in \Omega; \qquad y(x,t) = 0 \quad \text{in } \Sigma = \Gamma \times ]0,T[. \qquad (5.144)$$
Here $y_0 \in L^2(\Omega)$ and the $a_i$ are locally Lipschitz real-valued functions which satisfy the conditions

$$a_i(r)r \ge \omega|r|^p + C \quad \forall r \in R$$
$$(a_i(r)-a_i(s))(r-s) \ge \eta|r-s|^2 \quad \forall r,s \in R \qquad (5.145)$$
$$|a_i'(r)| \le C_1|r|^{p-2} + C_2 \quad \text{a.e. } r \in R$$

where $\omega, \eta > 0$ and $p \ge 2$. $B$ is a linear continuous operator from the Hilbert space $U$ to $H = L^2(\Omega)$, $h:U \to \bar R$ satisfies assumption (v) and $g:H \to R$ is a continuous convex function on $H$.
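A simple family of functions satisfying conditions (5.145) — given here only for orientation, and not taken from the text — is the following:

```latex
% For p \ge 2 and \eta > 0, take
a_i(r) = |r|^{p-2}r + \eta r .
% Then a_i(r)r = |r|^p + \eta r^2 \ge |r|^p, so the first condition holds
% with \omega = 1, C = 0; monotonicity of r \mapsto |r|^{p-2}r gives
(a_i(r)-a_i(s))(r-s) \ge \eta|r-s|^2 ;
% and a_i'(r) = (p-1)|r|^{p-2} + \eta, so the growth condition holds with
% C_1 = p-1, C_2 = \eta .
```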
It is standard that under assumptions (5.145) the boundary value problem (5.144) has a unique solution $y \in L^p(0,T;W^{1,p}_0(\Omega)) \cap W^{1,q}([0,T];W^{-1,q}(\Omega))$ (see [50]). Moreover, our assumptions on $h$ and $g$ imply that problem (5.143) has at least one optimal pair $(y^*,u^*)$. As for the maximum principle, it has the following form.

THEOREM 5.11 Let $(y^*,u^*)$ be any optimal pair in problem (5.143). Then there exist functions $p \in L^\infty(0,T;H) \cap L^2(0,T;H^1_0(\Omega))$ and $q^* \in L^2(Q)$ satisfying the system

$$p_t + \sum_{i=1}^N (\partial a_i(y^*_{x_i})p_{x_i})_{x_i} \ni q^* \quad \text{a.e. in } Q$$
$$q^*(x,t) \in (\partial g(y^*(t)))(x) \quad \text{a.e. } (x,t) \in Q$$
$$B^*p(t) \in \partial h(u^*(t)) \quad \text{a.e. } t \in ]0,T[$$

where $\partial a_i$ is the generalized gradient of $a_i$, $i = 1,\dots,N$.
For the proof one starts with the approximating optimization problem: Minimize

$$\int_0^T \Big(g^\varepsilon(y(t)) + h(u(t)) + \tfrac{1}{2}|u(t)-u^*(t)|_U^2\Big)dt \qquad (5.146)$$

subject to

$$y_t - \sum_{i=1}^N (a_i^\varepsilon(y_{x_i}))_{x_i} = Bu \quad \text{in } Q$$
$$y(x,0) = y_0(x), \quad x \in \Omega; \qquad y = 0 \quad \text{in } \Sigma$$

where the $a_i^\varepsilon$ are smooth approximations of the $a_i$, and $g^\varepsilon$ is obtained from $g$ by (1.21). To get the optimality system we pass to the limit $\varepsilon \to 0$ in the optimality system associated with problem (5.146). The details of the proof can be found in [83].

(2) A nonlinear hyperbolic control problem
Consider the problem: Minimize

$$\int_0^T \big(g(y(t)) + h(u(t))\big)\,dt \qquad (5.147)$$

on all $u \in L^2(0,T;H^1_0(\Omega))$ and $y \in W^{1,\infty}([0,T];H^1_0(\Omega)) \cap W^{2,2}([0,T];L^2(\Omega))$ subject to

$$y_{tt} - \Delta y + \beta(y_t) = Bu \quad \text{a.e. in } Q \qquad (5.148)$$
$$y(x,0) = y_0(x), \quad y_t(x,0) = y_1(x), \quad x \in \Omega; \qquad y = 0 \quad \text{in } \Sigma.$$

Here $y_0 \in H^1_0(\Omega) \cap H^2(\Omega)$, $y_1 \in L^2(\Omega)$, $\beta(y_1) \in L^2(\Omega)$ and $\beta$ is a locally Lipschitz, monotonically increasing function on $R$ satisfying the conditions

$$|\beta'(r)r| \le C(|\beta(r)| + |r| + 1) \quad \text{a.e. } r \in R \qquad (5.149)$$
$$\beta = \beta_1 - \beta_2 \qquad (5.150)$$

where $\beta_i$, $i = 1,2$, are convex continuous functions on $R$. The functions $g:L^2(\Omega) \to R$ and $h:H^1_0(\Omega) \to R$ are as in the preceding problem. By $W^{2,2}([0,T];L^2(\Omega))$ we have denoted the usual Sobolev space $\{y \in L^2(0,T;L^2(\Omega));\ y_t, y_{tt} \in L^2(0,T;L^2(\Omega))\}$.

THEOREM 5.12 Let $(y^*,u^*) \in W^{2,2}([0,T];L^2(\Omega)) \times L^2(0,T;H^1_0(\Omega))$ be any optimal pair for problem (5.147). Then there exist functions $m \in L^\infty(0,T;H^1_0(\Omega)) \cap W^{1,\infty}([0,T];L^2(\Omega))$ and $q^* \in L^2(Q)$ which satisfy the system

$$m_{tt} - \Delta m - m_t\,\partial\beta(y^*_t) \ni -\int_t^T q^*(s)\,ds \quad \text{a.e. in } Q$$
$$m(x,T) = m_t(x,T) = 0 \quad \text{a.e. } x \in \Omega$$
$$q^*(t) \in \partial g(y^*(t)) \quad \text{a.e. in } Q$$
$$B^*m_t + \partial h(u^*(t)) \ni 0 \quad \text{a.e. in } ]0,T[.$$
For the proof we consider the approximating control problem: Minimize

$$\int_0^T \big(g^\varepsilon(y(t)) + h(u(t))\big)\,dt + \frac{1}{2}\int_0^T |u^*(t)-u(t)|^2_{H^1_0(\Omega)}\,dt \qquad (5.151)$$

on all $y$ and $u$ subject to

$$y_{tt} - \Delta y + \beta^\varepsilon(y_t) = Bu \quad \text{in } Q$$
$$y(x,0) = y_0(x), \quad y_t(x,0) = y_1(x), \quad x \in \Omega; \qquad y = 0 \quad \text{in } \Sigma$$

and pass to the limit in the corresponding optimality conditions:

$$m^\varepsilon_{tt} - \Delta m^\varepsilon - \dot\beta^\varepsilon(y^\varepsilon_t)m^\varepsilon_t = -\int_t^T q^\varepsilon(s)\,ds \quad \text{in } Q$$
$$m^\varepsilon(x,T) = 0, \quad m^\varepsilon_t(x,T) = 0, \quad x \in \Omega$$
$$q^\varepsilon(t) \in \partial g^\varepsilon(y^\varepsilon(t)), \qquad B^*m^\varepsilon_t + \partial h(u^\varepsilon(t)) \ni u^\varepsilon(t) - u^*(t) \quad \text{a.e. } t \in ]0,T[$$

where $(y^\varepsilon,u^\varepsilon)$ is a solution to problem (5.151). The proof is detailed in [83] and we will not proceed further with it.

(3) Optimal control of the two-phase Stefan problem
Consider the following problem: Minimize

$$\frac{1}{2}\int_Q |y(x,t)-y^0(x)|^2\,dx\,dt + \int_0^T h(u(t))\,dt \qquad (5.152)$$

on all $y \in W^{1,2}([0,T];L^2(\Omega)) \cap L^\infty(0,T;H^1_0(\Omega))$ and $u \in L^2(0,T;U)$ subject to

$$(\beta(y(x,t)))_t - \Delta y(x,t) \ni (Bu(t))(x) + f(x,t), \quad (x,t) \in Q$$
$$y(x,0) = y_0(x), \quad x \in \Omega \qquad (5.153)$$
$$y = 0 \quad \text{in } \Sigma$$

where $\beta$ is the maximal monotone graph defined by (4.66), $B \in L(U,H)$, $f \in L^2(Q)$, $y_0 \in H^1_0(\Omega)$ and there exists $z_0 \in L^2(\Omega)$ such that

$$z_0 \in \beta(y_0) \quad \text{a.e. in } \Omega.$$

The function $h:U \to \bar R$ is assumed to satisfy hypothesis (v).
As seen in Proposition 4.3, for every $u \in L^2(0,T;U)$, (5.153) has a unique solution $y \in W^{1,2}([0,T];L^2(\Omega)) \cap L^\infty(0,T;H^1_0(\Omega))$. Moreover, since the map $u \to y$ is compact from $L^2(0,T;U)$ to $L^2(Q)$, it follows by a standard method that problem (5.152) has at least one optimal pair $(y^*,u^*)$.

THEOREM 5.13 Let $(y^*,u^*)$ be any optimal pair in problem (5.152) such that

$$\operatorname{meas}\{(x,t) \in Q;\ y^*(x,t) = 0\} = 0.$$

Then there exists $p \in W^{1,2}([0,T];L^2(\Omega)) \cap L^\infty(0,T;H^1_0(\Omega))$ which satisfies the equations

$$\dot\beta(y^*(x,t))p_t(x,t) + \Delta p(x,t) = y^*(x,t) - y^0(x) \quad \text{a.e. } (x,t) \in Q$$
$$p(x,T) = 0 \quad \text{a.e. } x \in \Omega$$
$$B^*p(t) \in \partial h(u^*(t)) \quad \text{a.e. } t \in ]0,T[.$$

Here $\dot\beta$ is the ordinary derivative of $\beta$.
For the proof we consider the approximating control process with cost criterion

$$\frac{1}{2}\int_Q |y-y^0|^2\,dx\,dt + \int_0^T \Big(h(u(t)) + \tfrac{1}{2}|u(t)-u^*(t)|_U^2\Big)dt \qquad (5.154)$$

and state system

$$(\beta^\varepsilon(y))_t - \Delta y = Bu + f \quad \text{in } Q$$
$$y(x,0) = y_0(x), \quad x \in \Omega; \qquad y = 0 \quad \text{in } \Sigma \qquad (5.155)$$

and pass to the limit in the corresponding optimality system

$$p^\varepsilon_t\,\dot\beta^\varepsilon(y^\varepsilon) + \Delta p^\varepsilon = y^\varepsilon - y^0 \quad \text{in } Q$$
$$p^\varepsilon(x,T) = 0, \quad x \in \Omega. \qquad (5.156)$$

The detailed proof, together with other results along these lines, can be found in Meike and Tiba [84].

In [80] Saguez has studied a different type of optimal control problem for the two-phase Stefan problem, namely: Minimize
$$\|(z-\rho)^-\|^2_{L^2(Q)} \qquad (5.157)$$

on all $u$ subject to

$$0 < \alpha \le u(t) \le \gamma \quad \text{a.e. } t \in ]0,T[ \qquad (5.158)$$

and to the system

$$z_t - \Delta\theta = 0 \quad \text{a.e. in } Q = \Omega \times ]0,T[$$
$$z(x,t) \in \beta(\theta(x,t)) \quad \text{a.e. } (x,t) \in Q$$
$$\frac{\partial\theta}{\partial\nu} = g \quad \text{in } \Sigma_1 = \Gamma_1 \times ]0,T[ \qquad (5.159)$$
$$\frac{\partial\theta}{\partial\nu} + u(t)(\theta-\theta_e) = 0 \quad \text{in } \Sigma_2 = \Gamma_2 \times ]0,T[$$
$$\theta(x,0) = \theta_0(x), \quad z(x,0) = z_0(x), \quad x \in \Omega$$

where the boundary $\Gamma$ of $\Omega$ consists of two disjoint and smooth parts $\Gamma_1$, $\Gamma_2$ (see Figure 4.2). Here $\theta_0 \in H^1(\Omega)$, $g \in L^2(\Sigma_1)$, $\theta_e \in L^2(\Sigma_2)$ are given functions. As seen in Example 4.4, (5.159) models the melting (solidification) process of a body $\Omega \subset R^3$ which has a prescribed heat flux on the interior boundary $\Gamma_1$ and whose exterior boundary $\Gamma_2$ is in contact with a heating medium of temperature $\theta_e$. The control parameter is the heat transfer coefficient $u(t)$. Let $A_0(t):H^1(\Omega) \to (H^1(\Omega))'$ be the operator defined by
$$(A_0(t)\theta,\phi) = \int_\Omega \nabla\theta\cdot\nabla\phi\,dx - \int_{\Gamma_1} g\phi\,d\sigma + \int_{\Gamma_2} u(\theta-\theta_e)\phi\,d\sigma \quad \forall\phi \in H^1(\Omega).$$

In terms of $A_0(t)$, the boundary value problem (5.159) can be rewritten as

$$z'(t) + A_0(t)(\beta^{-1}(z(t))) = 0, \quad t \in [0,T] \qquad (5.160)$$
$$z(0) = z_0.$$

Arguing as in the proof of Proposition 4.3, we infer that (5.159) has a unique solution $z \in W^{1,2}([0,T];(H^1(\Omega))') \cap L^2(0,T;L^2(\Omega))$ with $\beta^{-1}z \in L^2(0,T;H^1(\Omega))$. Observing that the map $u \to z$ is compact from $L^\infty(0,T;R)$ to $L^2(Q)$, we conclude by a standard method that problem (5.157) has at least one solution $u^*$. It turns out that such an optimal control can be obtained as the limit for $\varepsilon \to 0$ of a sequence $\{u_\varepsilon\}$ of optimal controls for problem (5.157) to (5.159) in which the maximal monotone graph $\beta$ has been replaced by $\beta^\varepsilon$. This procedure is useful in the numerical computation of optimal controls ([80]).
6 Boundary control of parabolic variational inequalities

This chapter treats some classes of optimal control problems governed by semilinear parabolic equations and parabolic variational inequalities on a domain $\Omega \subset R^N$ controlled through the boundary $\Gamma$. Such problems occur, for example, in the optimal temperature control of a body in a heating medium whose temperature is a control parameter. Nonlinear filtering, and the theory of ill-posed problems associated with nonlinear parabolic equations and free boundary problems of parabolic type, represent important sources of such problems ([81]).

§6.1
Control systems with nonlinear boundary value conditions
We shall study here the following problem: Minimize

$$G_1(y,u_1,u_2) = \int_0^T \big(g(t,y(t)) + h_1(u_1(t)) + h_2(u_2(t))\big)\,dt \qquad (6.1)$$

on all $y \in W(0,T;H^1(\Omega)) = L^2(0,T;H^1(\Omega)) \cap W^{1,2}([0,T];(H^1(\Omega))')$ and $(u_1,u_2) \in L^2(0,T;U_1) \times L^2(0,T;U_2)$ subject to

$$y_t + A_0y = f_0 \quad \text{in } Q = \Omega \times ]0,T[$$
$$y(x,0) = y_0(x), \quad x \in \Omega \qquad (6.2)$$
$$\frac{\partial y}{\partial\nu} + \beta_i(y) \ni B_iu_i + f_i \quad \text{in } \Sigma_i = \Gamma_i \times ]0,T[, \quad i = 1,2.$$

Here $A_0$ is the symmetric elliptic operator (4.32), $f_0 \in L^2(Q)$, $f_i \in L^2(\Sigma_i)$ and $\Gamma_i$, $i = 1,2$, are disjoint and smooth parts of $\Gamma$ such that $\Gamma_1 \cup \Gamma_2 = \Gamma$. The control spaces $U_i$ are Hilbert with the norms denoted $\|\cdot\|_i$ and scalar products $\langle\cdot,\cdot\rangle_i$, $i = 1,2$; the $B_i$ are linear continuous operators from $U_i$ to $L^2(\Gamma_i)$ and the $\beta_i$ are maximal monotone graphs in $R \times R$ such that $0 \in \beta_i(0)$.
This can be visualized as an optimal control heat transfer problem with a nonlinear transfer law prescribed on $\Gamma$. For instance, if heat transfer occurs by radiation then the boundary conditions in system (6.2) become (see Example 4.2)

$$\frac{\partial y}{\partial\nu} + \alpha(y^4 - \vartheta_i^4) = 0 \quad \text{in } \Sigma_i, \quad i = 1,2 \qquad (6.2)'$$

where $\alpha > 0$ and the $\vartheta_i \in L^2(\Sigma_i)$ are the temperatures of the controlled media in contact with $\Gamma_i$. If we choose $u_i = \vartheta_i^4$ as control variables, subject to the obvious constraints $u_i \ge 0$ a.e. in $\Sigma_i$ for $i = 1,2$, and define $\beta_1 = \beta_2 = \beta$,

$$\beta(r) = \begin{cases} \alpha r^4 & \text{if } r \ge 0 \\ 0 & \text{if } r < 0, \end{cases}$$

we may represent this control system in the form (6.2). We shall assume that the function $g:[0,T] \times L^2(\Omega) \to R$ satisfies hypothesis (vi) and $h_i:U_i \to \bar R$, $i = 1,2$, hypothesis (v) of Section 5.1. By Proposition 4.1 it follows that if
(6.3) holds, then the state system (6.2) has a unique solution $y \in W(0,T;H^1(\Omega))$ which satisfies the estimate

$$\|y\|^2_{L^2(0,T;H^1(\Omega))} + \|y\|^2_{C([0,T];L^2(\Omega))} + \|y\|^2_{W^{1,2}([0,T];(H^1(\Omega))')} \le C\Big(1 + \sum_{i=1}^2 \|u_i\|^2_{L^2(\Sigma_i)}\Big). \qquad (6.4)$$

In particular, it follows that the mapping $(u_1,u_2) \to y$ is compact from $L^2(0,T;U_1) \times L^2(0,T;U_2)$ to $L^2(Q)$. Then, arguing as in the proof of Proposition 5.1, it follows that problem (6.1) has at least one optimal pair $(y^*,u^*)$. As for the first-order necessary conditions of optimality in problem (6.1), the main results are contained in Theorems 6.1 and 6.2.
THEOREM 6.1 Let $(y^*,u_1^*,u_2^*) \in W(0,T;H^1(\Omega)) \times (L^2(0,T;U_1) \times L^2(0,T;U_2))$ be any optimal pair of problem (6.1), where the $\beta_i$ are locally Lipschitz functions satisfying condition (5.45). Then there exists $p \in L^2(0,T;H^1(\Omega)) \cap C_w([0,T];L^2(\Omega)) \cap AC([0,T];H^{-1}(\Omega))$ such that $\partial p/\partial\nu \in L^1(\Sigma)$, $p_t \in L^2(0,T;H^{-1}(\Omega))$, $p_t - A_0p \in L^\infty(0,T;L^2(\Omega))$ and

$$p_t - A_0p \in \partial g(t,y^*) \quad \text{a.e. in } Q \qquad (6.5)$$
$$\frac{\partial p}{\partial\nu} + p\,\partial\beta_i(y^*) \ni 0 \quad \text{in } \Sigma_i, \quad i = 1,2 \qquad (6.6)$$
$$p(T) + \partial\varphi_0(y^*(T)) \ni 0 \quad \text{in } \Omega \qquad (6.7)$$
$$B_i^*p \in \partial h_i(u_i^*) \quad \text{a.e. in } ]0,T[, \quad i = 1,2. \qquad (6.8)$$

Here $p_t = p'$ is the distributional derivative of $p:[0,T] \to L^2(\Omega)$.
Equations (6.5), (6.6) must be interpreted in the following sense: there exist $w_i \in L^1(\Sigma_i)$ and $\zeta \in L^\infty(0,T;L^2(\Omega)) \subset L^2(Q)$ such that

$$\frac{d}{dt}(p(t),z) - a(p(t),z) - \sum_{i=1}^2 \int_{\Gamma_i} w_iz\,d\sigma = \int_\Omega \zeta z\,dx \quad \text{a.e. } t \in ]0,T[,\ \forall z \in H^s(\Omega),\ s > N/2 \qquad (6.9)$$
$$w_i(\sigma,t) \in \partial\beta_i(y^*(\sigma,t))p(\sigma,t) \quad \text{a.e. } (\sigma,t) \in \Sigma_i, \quad i = 1,2 \qquad (6.10)$$
$$\zeta(x,t) \in \partial g(t,y^*)(x) \quad \text{a.e. } (x,t) \in Q. \qquad (6.11)$$

Equation (6.9) can be rewritten as

$$\int_0^T \big((p(t),w_t(t)) + a(p(t),w(t))\big)dt + \sum_{i=1}^2 \int_{\Sigma_i} w_iw\,d\sigma\,dt = \int_\Omega p(x,T)w(x,T)\,dx - \int_Q \zeta w\,dx\,dt \qquad (6.9)'$$

for all $w \in L^\infty(0,T;H^s(\Omega)) \subset L^\infty(0,T;C(\bar\Omega))$ with $w_t = w' \in L^2(0,T;H^{-1}(\Omega))$ and such that $w(x,0) = 0$. Here $a:H^1(\Omega) \times H^1(\Omega) \to R$ is the bilinear functional defined by formula (3.15).
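The passage from (6.9) to (6.9)' is the standard integration by parts in $t$; a formal sketch, under the stated regularity of $w$:

```latex
% Take z = w(t) in (6.9).  Then
\frac{d}{dt}(p(t),w(t)) = (p(t),w_t(t)) + a(p(t),w(t))
  + \sum_{i=1}^2 \int_{\Gamma_i} w_i\,w\,d\sigma + \int_\Omega \zeta\,w\,dx .
% Integrating over ]0,T[ and using w(x,0) = 0:
\int_\Omega p(x,T)w(x,T)\,dx
  = \int_0^T \bigl((p,w_t) + a(p,w)\bigr)dt
  + \sum_{i=1}^2 \int_{\Sigma_i} w_i\,w\,d\sigma\,dt
  + \int_Q \zeta\,w\,dx\,dt ,
% which is (6.9)'.
```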
The next theorem is concerned with the situation where $\Gamma_1 = \Gamma$, $\Gamma_2 = \emptyset$, $h_1 = h$ and $\beta_1 = \beta$ is given by (4.41). Then, as seen earlier, in this case (6.2) reduces to the following unilateral boundary value problem:

$$y_t + A_0y = f_0 \quad \text{in } Q$$
$$y(x,0) = y_0(x), \quad x \in \Omega \qquad (6.12)$$
$$y\Big(\frac{\partial y}{\partial\nu} - Bu - f\Big) = 0, \quad y \ge 0, \quad \frac{\partial y}{\partial\nu} - Bu - f \ge 0 \quad \text{a.e. in } \Sigma.$$

Here $B \in L(U,L^2(\Gamma))$, $f_0 \in L^2(Q)$ and $f \in L^2(\Sigma)$. By virtue of condition (6.3) we should assume that

$$y_0 \in H^1(\Omega), \quad y_0 \ge 0 \quad \text{a.e. in } \Omega. \qquad (6.3)'$$
THEOREM 6.2 Let $(y^*,u^*) \in W(0,T;H^1(\Omega)) \times L^2(0,T;U)$ be an optimal pair in the control problem (6.1) governed by (6.12). Then there exist $p \in C_w([0,T];L^2(\Omega)) \cap L^2(0,T;H^1(\Omega)) \cap AC([0,T];H^{-1}(\Omega))$ with $p_t \in L^2(0,T;H^{-1}(\Omega))$, and $\mu \in (L^\infty(\Sigma))^*$, such that

$$p_t - A_0p \in \partial g(t,y^*) \quad \text{in } Q \qquad (6.13)$$
$$\Big(\frac{\partial p}{\partial\nu}\Big)_a = 0 \quad \text{a.e. in } [(\sigma,t) \in \Sigma;\ y^*(\sigma,t) > 0] \qquad (6.14)$$
$$p = 0 \quad \text{a.e. in } \Big[(\sigma,t) \in \Sigma;\ y^*(\sigma,t) = 0,\ \frac{\partial y^*}{\partial\nu} - Bu^* - f > 0\Big] \qquad (6.15)$$
$$p(T) + \partial\varphi_0(y^*(T)) \ni 0 \qquad (6.16)$$
$$B^*p \in \partial h(u^*) \quad \text{a.e. in } ]0,T[. \qquad (6.17)$$

Here $(\partial p/\partial\nu)_a$ is the absolutely continuous part of the measure $\partial p/\partial\nu$. Equations (6.13), (6.14) must be understood in the sense that there exists an increasing family of measurable subsets $\Sigma_k \subset \Sigma$ such that $m(\Sigma\setminus\Sigma_k) \le 1/k$ and
$$\int_Q pz_t\,dx\,dt + \int_0^T a(p,z)\,dt + \int_Q \zeta z\,dx\,dt = \int_\Omega p(x,T)z(x,T)\,dx$$

for all $z \in C^1(\bar Q) \cap L^2(0,T;H^1(\Omega))$ having the property that $z(x,0) = 0$ and $z = 0$ in $\{(\sigma,t) \in \Sigma;\ y^*(\sigma,t) = 0\} \cup (\Sigma\setminus\Sigma_k)$. Here $\zeta \in L^\infty(0,T;L^2(\Omega))$ is such that $\zeta(x,t) \in \partial g(t,y^*)(x)$ a.e. $(x,t) \in Q$. In particular, it follows that

$$p_t - A_0p = \zeta \quad \text{in } \mathcal{D}'(Q).$$

Now let us return to Theorem 6.1. In order to have a specific example before us, we consider the following problem: Minimize

$$\frac{1}{2}\int_\Omega |y(x,T)-y^0(x)|^2\,dx + \int_0^T \big(h_1(u_1(t)) + h_2(u_2(t))\big)\,dt \qquad (6.18)$$
on all $y \in W(0,T;H^1(\Omega))$ and $(u_1,u_2) \in L^2(\Sigma_1) \times L^2(\Sigma_2)$ subject to

$$y_t - \Delta y = 0 \quad \text{in } Q, \qquad y(x,0) = y_0(x) \quad \text{in } \Omega$$
$$\frac{\partial y}{\partial\nu} + \beta_i(y) \ni u_i \quad \text{in } \Sigma_i, \quad i = 1,2.$$
This problem can be rewritten in the form (6.1), (6.2) where $U_i = L^2(\Gamma_i)$, $B_i = I$, $i = 1,2$, and

$$h_1(u_1) = \begin{cases} 0 & \text{if } |u_1| \le \rho \text{ a.e. in } \Gamma_1 \\ +\infty & \text{otherwise} \end{cases}$$
$$h_2(u_2) = \int_{\Gamma_2} m(u_2(\sigma))\,d\sigma, \quad u_2 \in L^2(\Gamma_2)$$
$$\varphi_0(y) = \frac{1}{2}\int_\Omega |y(x)-y^0(x)|^2\,dx, \quad y \in L^2(\Omega).$$

Here $m:R \to \bar R$ is a lower semicontinuous convex function and $y^0 \in L^2(\Omega)$.
Let $(y^*,u_1^*,u_2^*)$ be optimal in problem (6.18). Then by Theorem 6.1 there exists $p \in L^2(0,T;H^1(\Omega)) \cap C_w([0,T];L^2(\Omega))$ with $p_t \in L^2(0,T;H^{-1}(\Omega))$ which satisfies the equations

$$p_t + \Delta p = 0 \quad \text{in } Q$$
$$\frac{\partial p}{\partial\nu} + p\,\partial\beta_i(y^*) \ni 0 \quad \text{in } \Sigma_i, \quad i = 1,2 \qquad (6.19)$$
$$p(x,T) = y^0(x) - y^*(x,T), \quad x \in \Omega \qquad (6.20)$$

whilst the optimal controls are given by

$$u_1^* \in \rho\,\mathrm{sgn}\,p \quad \text{a.e. in } \Sigma_1 \qquad (6.21)$$
$$u_2^* \in \partial m^*(p) \quad \text{a.e. in } \Sigma_2. \qquad (6.22)$$
Here $m^*$ is the conjugate function of $m$. By (6.21) we see that if the Lebesgue measure of $\Gamma_1$ is positive and $y^0 \neq y^*(\cdot,T)$, then $u_1^*$ is a bang-bang control, i.e.

$$|u_1^*(\sigma,t)| = \rho \quad \text{a.e. } (\sigma,t) \in \Sigma_1.$$

Indeed, if $\Sigma_0 = \{(\sigma,t) \in \Sigma_1;\ p(\sigma,t) = 0\}$ then it follows from (6.19) that $\partial p/\partial\nu = 0$ a.e. in $\Sigma_0$. Then, if $\Gamma$ is smooth enough, this implies (see [36]) that either $m(\Sigma_0) = 0$ or $p \equiv 0$. Since, by (6.20), $p \not\equiv 0$, we conclude that $m(\Sigma_0) = 0$ as claimed ($m(\Sigma_0)$ is the Lebesgue measure of $\Sigma_0$). (A related result has been obtained in [76].)
Proof of Theorem 6.1 Since the proof is very similar to that of Theorems 5.1 and 5.3, it will only be sketched. For every $\varepsilon > 0$, consider the control problem: Minimize

$$G_1^\varepsilon(y,u_1,u_2) = \int_0^T \Big(g^\varepsilon(t,y(t)) + h_1(u_1(t)) + h_2(u_2(t)) + \tfrac{1}{2}\|u_1-u_1^*\|_1^2 + \tfrac{1}{2}\|u_2-u_2^*\|_2^2\Big)dt + \varphi_0^\varepsilon(y(T)) \qquad (6.23)$$

on all $y \in W(0,T;H^1(\Omega))$ and $u_i \in L^2(0,T;U_i)$, $i = 1,2$, subject to

$$y_t + A_0y = f_0 \quad \text{in } Q$$
$$\frac{\partial y}{\partial\nu} + \beta_i^\varepsilon(y) = B_iu_i + f_i \quad \text{in } \Sigma_i, \quad i = 1,2 \qquad (6.24)$$
$$y(x,0) = y_0(x), \quad x \in \Omega$$

where $g^\varepsilon$, $\varphi_0^\varepsilon$ and $\beta_i^\varepsilon$ are defined by (5.17), (5.18) and (4.59), respectively. As noted earlier, problem (6.23) admits at least one solution $y^\varepsilon$, $u^\varepsilon = (u_1^\varepsilon,u_2^\varepsilon)$. Moreover, by a standard procedure it follows that there exists $p^\varepsilon \in W(0,T;H^1(\Omega))$ (the dual extremal arc) which satisfies the equations

$$p_t^\varepsilon - A_0p^\varepsilon = \nabla g^\varepsilon(t,y^\varepsilon) \quad \text{in } Q$$
$$\frac{\partial p^\varepsilon}{\partial\nu} + p^\varepsilon\dot\beta_i^\varepsilon(y^\varepsilon) = 0 \quad \text{in } \Sigma_i, \quad i = 1,2 \qquad (6.25)$$
$$p^\varepsilon(T) + \nabla\varphi_0^\varepsilon(y^\varepsilon(T)) = 0$$
$$B_i^*p^\varepsilon \in \partial h_i(u_i^\varepsilon) + u_i^\varepsilon - u_i^* \quad \text{a.e. in } ]0,T[, \quad i = 1,2. \qquad (6.26)$$
Equations (6.25) must be understood in the sense of (4.58)', i.e.

$$p^{\varepsilon\prime} - A^\varepsilon(t)p^\varepsilon = \nabla g^\varepsilon(t,y^\varepsilon) \quad \text{a.e. } t \in ]0,T[$$
$$p^\varepsilon(T) = -\nabla\varphi_0^\varepsilon(y^\varepsilon(T))$$

where $A^\varepsilon(t):H^1(\Omega) \to (H^1(\Omega))'$ is defined by

$$(A^\varepsilon(t)p,z) = a(p,z) + \sum_{i=1}^2 \int_{\Gamma_i} p\,\dot\beta_i^\varepsilon(y^\varepsilon(t))z\,d\sigma \quad \forall z \in H^1(\Omega).$$

LEMMA 6.1 For $\varepsilon \to 0$,

$$u_i^\varepsilon \to u_i^* \quad \text{strongly in } L^2(0,T;U_i), \quad i = 1,2$$
$$y^\varepsilon \to y^* \quad \text{weakly in } W(0,T;H^1(\Omega)) \text{ and strongly in } L^2(Q).$$
Proof We have

$$G_1^\varepsilon(y^\varepsilon,u_1^\varepsilon,u_2^\varepsilon) \le G_1^\varepsilon(z^\varepsilon,u_1^*,u_2^*) \quad \text{for all } \varepsilon > 0.$$

Here $z^\varepsilon$ is the solution to (6.24) where $u_i = u_i^*$, $i = 1,2$. We know from the proof of Proposition 4.1 that $z^\varepsilon \to y^*$ for $\varepsilon \to 0$. This yields

$$\limsup_{\varepsilon\to 0} G_1^\varepsilon(y^\varepsilon,u_1^\varepsilon,u_2^\varepsilon) \le G_1(y^*,u_1^*,u_2^*). \qquad (6.27)$$

On the other hand, by estimate (6.4) (see also (4.63)) we may infer as in the proof of Lemma 5.2 that, on a certain subsequence,

$$u_i^\varepsilon \to \tilde u_i \quad \text{weakly in } L^2(0,T;U_i), \quad i = 1,2$$
$$y^\varepsilon \to \tilde y \quad \text{weakly in } W(0,T;H^1(\Omega)) \text{ and strongly in } L^2(Q)$$

where $\tilde y$ is the solution to (6.2) where $u_i = \tilde u_i$, $i = 1,2$. Since $g^\varepsilon(y^\varepsilon) \to g(\tilde y)$ and

$$\liminf_{\varepsilon\to 0} \int_0^T \big(h_1(u_1^\varepsilon) + h_2(u_2^\varepsilon)\big)\,dt \ge \int_0^T \big(h_1(\tilde u_1) + h_2(\tilde u_2)\big)\,dt,$$

by (6.27) we infer that $u_i^\varepsilon \to u_i^*$ strongly in $L^2(0,T;U_i)$, $i = 1,2$. The remaining part of Lemma 6.1 follows as in Proposition 4.1.
LEMMA 6.2 There exists $C$ independent of $\varepsilon$ such that

$$|p^\varepsilon(t)|_2^2 + \int_0^T \|p^\varepsilon(t)\|^2_{H^1(\Omega)}\,dt \le C \qquad (6.28)$$
$$\|p_t^\varepsilon\|_{L^2(0,T;H^{-1}(\Omega))} \le C \qquad (6.29)$$
$$\int_{\Sigma_i} |\dot\beta_i^\varepsilon(y^\varepsilon)p^\varepsilon|\,d\sigma\,dt \le C, \quad i = 1,2. \qquad (6.30)$$
Proof We take the scalar product of (6.25) (or (6.25)') with $p^\varepsilon(t)$ and integrate on $[0,t]$. Taking into account the fact that $\dot\beta_i^\varepsilon \ge 0$ and

$$a(z,z) + \alpha|z|_2^2 \ge \omega\|z\|^2_{H^1(\Omega)},$$

we obtain the estimate (6.28). Now, using once again (6.25)', we get

$$\Big(\frac{d}{dt}p^\varepsilon(t),z\Big) = a(p^\varepsilon(t),z) + \int_\Omega \nabla g^\varepsilon(t,y^\varepsilon)z\,dx \quad \text{a.e. } t \in ]0,T[,\ z \in H_0^1(\Omega)$$

and together with (6.28) this yields

$$\int_0^T \Big\|\frac{dp^\varepsilon}{dt}(t)\Big\|^2_{H^{-1}(\Omega)}\,dt \le C\Big(\int_0^T \|p^\varepsilon(t)\|^2_{H^1(\Omega)}\,dt + \int_Q |\nabla g^\varepsilon(t,y^\varepsilon)|^2\,dx\,dt\Big) \le C_1.$$

Now we multiply (6.25) by $\gamma(p^\varepsilon)$, where $\gamma$ is given by (3.66), and integrate on $Q$. This yields

$$\sum_{i=1}^2 \int_{\Sigma_i} \dot\beta_i^\varepsilon(y^\varepsilon)p^\varepsilon\gamma(p^\varepsilon)\,d\sigma\,dt \le C$$

because $a(p^\varepsilon,\gamma(p^\varepsilon)) \ge 0$. Then, letting $\gamma$ tend to sgn, we get estimate (6.30).
Proof of Theorem 6.1 (continued) By estimates (6.28), (6.29), (6.30) we see that $\{p^\varepsilon\}$ is weakly compact in $L^2(0,T;H^1(\Omega))$ and weak star compact in $L^\infty(0,T;L^2(\Omega))$. Since $\{dp^\varepsilon/dt\}$ is bounded in $L^2(0,T;H^{-1}(\Omega))$, it follows by Theorem 5.1 in [50], Chapter 1, that $\{p^\varepsilon\}$ is strongly compact in $L^2(0,T;H^\alpha(\Omega))$ where $\tfrac{1}{2} < \alpha < 1$. Hence there exists $p \in L^\infty(0,T;L^2(\Omega)) \cap L^2(0,T;H^1(\Omega))$ with $dp/dt = p_t \in L^2(0,T;H^{-1}(\Omega))$ (i.e. $p \in W^{1,2}([0,T];H^{-1}(\Omega))$) such that, on a subsequence again denoted $\varepsilon$,

$$p^\varepsilon \to p \quad \text{weakly in } L^2(0,T;H^1(\Omega)), \text{ weak star in } L^\infty(0,T;L^2(\Omega)) \text{ and strongly in } L^2(0,T;H^\alpha(\Omega)) \qquad (6.31)$$
$$p_t^\varepsilon \to p_t \quad \text{weakly in } L^2(0,T;H^{-1}(\Omega))$$
$$\nabla g^\varepsilon(y^\varepsilon) \to \zeta \quad \text{weakly in } L^2(Q) \qquad (6.32)$$

where $\zeta \in \partial g(t,y^*)$ a.e. in $Q$ (we use Lemma 5.4). In particular, it follows by (6.31), (6.32) that on a subsequence

$$p^\varepsilon(t) \to p(t) \quad \text{uniformly on } [0,T] \text{ in } H^{-1}(\Omega).$$

Hence $p^\varepsilon(t) \to p(t)$ weakly in $L^2(\Omega)$ for $t \in [0,T]$. Now letting $\varepsilon$ tend to zero in (6.25) and (6.26) yields

$$p(T) + \partial\varphi_0(y^*(T)) \ni 0 \quad \text{in } \Omega.$$

Now, if the $\beta_i$ satisfy condition (5.45), then we have the inequality (see (5.53))

$$\int_{E_i} |\dot\beta_i^\varepsilon(y^\varepsilon)p^\varepsilon|\,d\sigma\,dt \le C\Big(\int_{E_i} |p^\varepsilon\beta_i^\varepsilon(y^\varepsilon)|\,d\sigma\,dt + \int_{E_i} |p^\varepsilon|(1+|y^\varepsilon|)\,d\sigma\,dt\Big)$$

where $E_i$ is any measurable part of $\Sigma_i$, $i = 1,2$. Then by Lemma 6.1 and (6.31) we infer that the subsets $\{\dot\beta_i^\varepsilon(y^\varepsilon)p^\varepsilon\} \subset L^1(\Sigma_i)$ satisfy the conditions of the Dunford–Pettis theorem. Thus $\{\dot\beta_i^\varepsilon(y^\varepsilon)p^\varepsilon\}$ are weakly compact in $L^1(\Sigma_i)$. Hence, selecting a further subsequence, we have

$$\dot\beta_i^\varepsilon(y^\varepsilon)p^\varepsilon \to \mu_i \quad \text{weakly in } L^1(\Sigma_i), \quad i = 1,2. \qquad (6.33)$$

Now, letting $\varepsilon$ tend to zero in (6.25), we see that (6.9) is satisfied. Finally, by Lemma 3.4 we conclude that

$$\mu_i \in \partial\beta_i(y^*)p \quad \text{a.e. in } \Sigma_i, \quad i = 1,2,$$

thereby completing the proof.

Proof of Theorem 6.2
In this case $\beta^\varepsilon$ is given by (3.79) and we have (see (5.64), (5.65))

$$|y^\varepsilon\dot\beta^\varepsilon(y^\varepsilon)p^\varepsilon - p^\varepsilon\beta^\varepsilon(y^\varepsilon)| \le \varepsilon|p^\varepsilon\dot\beta^\varepsilon(y^\varepsilon)| + 2\varepsilon|p^\varepsilon| \quad \text{a.e. in } \Sigma. \qquad (6.34)$$

Since $\{p^\varepsilon\dot\beta^\varepsilon(y^\varepsilon)\}$ is bounded in $L^1(\Sigma)$ and, by Lemma 6.1, $\{\varepsilon^{-1}(y^\varepsilon)^-\}$ is bounded in $L^2(\Sigma)$, on a subsequence we have, by (6.31),

$$p^\varepsilon\beta^\varepsilon(y^\varepsilon) \to p\Big(f + Bu^* - \frac{\partial y^*}{\partial\nu}\Big) \quad \text{weakly in } L^1(\Sigma).$$

Hence

$$p^\varepsilon\beta^\varepsilon(y^\varepsilon) \to p\Big(f + Bu^* - \frac{\partial y^*}{\partial\nu}\Big) = 0 \quad \text{strongly in } L^1(\Sigma) \qquad (6.35)$$

and therefore $y^\varepsilon\dot\beta^\varepsilon(y^\varepsilon)p^\varepsilon \to 0$ strongly in $L^1(\Sigma)$. On the other hand, it follows by (6.30) that there exists a generalized subsequence $\{\lambda\}$ of $\{\varepsilon\}$ such that

$$\dot\beta^\lambda(y^\lambda)p^\lambda \to \mu \quad \text{weak star in } (L^\infty(\Sigma))^*.$$
Then, repeating the argument leading to (5.69), we infer that $y^*\mu_a = 0$ a.e. in $\Sigma$, and so $p$ satisfies (6.13), (6.14) and (6.15). As for (6.16) and (6.17), they have been established in the proof of Theorem 6.1.

REMARK 6.1 Multiplying (6.25), where $\beta_i^\varepsilon = \beta^\varepsilon$, $i = 1,2$, by $\phi y^\varepsilon$ and using Green's formula, we get

$$\lim_{\varepsilon\to 0}\Big(\int_Q p^\varepsilon(\phi y^\varepsilon)_t\,dx\,dt + \int_0^T a(p^\varepsilon,\phi y^\varepsilon)\,dt\Big) = -\int_Q \zeta y^*\phi\,dx\,dt + \int_\Omega p(x,T)y^*(x,T)\phi(x,T)\,dx$$

for all $\phi \in C^1(\bar Q)$ such that $\phi(x,0) = 0$. Then by Lemma 6.1 and (6.32) we infer that

$$\int_Q p(\phi y^*)_t\,dx\,dt + \int_0^T a(p,\phi y^*)\,dt + \int_Q \zeta y^*\phi\,dx\,dt = \int_\Omega p(x,T)y^*(x,T)\phi(x,T)\,dx \quad \forall\phi \in C^1(\bar Q),\ \phi(x,0) = 0,$$

which formally means that

$$y^*\frac{\partial p}{\partial\nu} = 0 \quad \text{in } \Sigma.$$
§6.2 Boundary control of free boundary problems: mixed boundary conditions

This section is concerned with the following optimization problem: Minimize

$$G(y,u) = \int_0^T \big(g(t,y(t)) + h(u(t))\big)\,dt + \varphi_0(y(T)) \qquad (6.36)$$

on all $y \in L^2(0,T;V) \cap W^{1,2}([0,T];V')$ and $u \in L^2(\Sigma_1)$ subject to

$$(y'(t),y(t)-z) + a(y(t),y(t)-z) + \int_{\Gamma_1}\big(\alpha y(\sigma,t)-u(\sigma,t)\big)\big(y(\sigma,t)-z(\sigma)\big)\,d\sigma$$
$$\le \int_\Omega f_0(x,t)\big(y(x,t)-z(x)\big)\,dx \quad \forall z \in K, \quad \text{a.e. } t \in ]0,T[ \qquad (6.37)$$
$$y(0) = y_0, \quad y(t) \in K \quad \forall t \in [0,T].$$
Here $H = L^2(\Omega)$, $V = \{y \in H^1(\Omega);\ y = 0 \text{ in } \Gamma_2\}$, $K = \{y \in V;\ y \ge 0 \text{ a.e. in } \Omega\}$ and

$$a(y,z) = \int_\Omega \nabla y(x)\cdot\nabla z(x)\,dx \quad \forall y,z \in H^1(\Omega).$$

The functions $f_0$ and $y_0 \in H^1(\Omega)$ are assumed to satisfy the conditions

$$f_0 \in W^{1,2}([0,T];H), \qquad y_0 \ge 0 \quad \text{a.e. in } \Omega;$$

$\alpha$ is a positive constant and $\Omega$ is a bounded and open subset of $R^N$ whose boundary $\Gamma$ consists of three disjoint and smooth parts $\Gamma_1$, $\Gamma_2$, $\Gamma_3$ such that $\Gamma_1$ and $\Gamma_2$ have no common boundary and $m(\Gamma_1) > 0$ (see Figure 4.2). As seen in Section 4.3, problem (6.37) can be rewritten as

$$y_t - \Delta y \ge f_0, \quad y \ge 0 \quad \text{a.e. in } Q = \Omega \times ]0,T[$$
$$y_t - \Delta y = f_0 \quad \text{a.e. in } [y > 0]$$
$$y(x,0) = y_0(x), \quad x \in \Omega \qquad (6.37)'$$
$$\frac{\partial y}{\partial\nu} + \alpha y = u \quad \text{in } \Sigma_1, \qquad \frac{\partial y}{\partial\nu} = 0 \quad \text{in } \Sigma_3, \qquad y = 0 \quad \text{in } \Sigma_2.$$
The functions $g:[0,T] \times H \to \bar R^+$, $\varphi_0:H \to \bar R^+$ and $h:L^2(\Gamma_1) \to \bar R$ satisfy hypotheses (v), (vi) in Section 5.1. Further, we shall assume that

$$D(h) \subset \{u \in L^2(\Gamma_1);\ u \ge 0 \text{ a.e. in } \Gamma_1\} \qquad (6.38)$$

where $D(h) = \{u \in L^2(\Gamma_1);\ h(u) < +\infty\}$.
By Theorem 4.6, for every $u \in D(h)$, (6.37) has a unique solution $y \in W^{1,2}([0,T];V') \cap L^2(0,T;V) \subset C([0,T];H)$ and the map $u \to y$ is compact from $L^2(\Sigma_1)$ to $L^2(0,T;H)$. Then, reasoning as in the proof of Proposition 5.1, we conclude that problem (6.36), (6.37) admits at least one optimal pair.

THEOREM 6.3 Let $(y^*,u^*)$ be any optimal pair of problem (6.36), (6.37). Then there exists $p \in L^2(0,T;V) \cap L^\infty(0,T;L^2(\Omega)) \cap BV([0,T];(V \cap H^s(\Omega))')$, $s > N/2$, such that $p_t + \Delta p \in (L^\infty(Q))^*$ and

$$(p_t + \Delta p)_a \in \partial g(t,y^*) \quad \text{a.e. in } [(x,t) \in Q;\ y^*(x,t) > 0] \qquad (6.39)$$
$$p(T) + \partial\varphi_0(y^*(T)) \ni 0 \quad \text{in } \Omega \qquad (6.40)$$
$$\frac{\partial p}{\partial\nu} + \alpha p = 0 \quad \text{in } \Sigma_1, \qquad p = 0 \quad \text{in } \Sigma_2, \qquad \frac{\partial p}{\partial\nu} = 0 \quad \text{in } \Sigma_3 \qquad (6.41)$$
$$p = 0 \quad \text{a.e. in } [y^* = 0;\ f_0 \neq 0] \qquad (6.42)$$
$$p \in \partial h(u^*) \quad \text{a.e. in } \Sigma_1. \qquad (6.43)$$
We have denoted as usual by $p_t$ the distributional derivative of $p \in L^2(0,T;L^2(\Omega))$ and by $(p_t + \Delta p)_a$ the absolutely continuous part of $p_t + \Delta p$.

Proof In the following we shall denote by $U$ the space $L^2(\Gamma_1)$ and by $|\cdot|_U$ its norm. The approach is still modelled on that developed in the preceding chapter. We start with the optimization problem: Minimize
$$G^\varepsilon(y,u) = \int_0^T \Big(g^\varepsilon(t,y(t)) + h(u(t)) + \tfrac{1}{2}|u(t)-u^*(t)|_U^2\Big)dt + \varphi_0^\varepsilon(y(T)) \qquad (6.44)$$

on all $y \in W^{1,2}([0,T];V') \cap L^2(0,T;V)$, $u \in L^2(0,T;U)$ subject to

$$y_t - \Delta y + \beta^\varepsilon(y) = f_0 \quad \text{in } Q$$
$$y(x,0) = y_0(x), \quad x \in \Omega \qquad (6.45)$$
$$\frac{\partial y}{\partial\nu} + \alpha y = u \quad \text{in } \Sigma_1, \qquad y = 0 \quad \text{in } \Sigma_2, \qquad \frac{\partial y}{\partial\nu} = 0 \quad \text{in } \Sigma_3.$$
Here $\beta^\varepsilon$ is defined by (3.79), and $g^\varepsilon$, $\varphi_0^\varepsilon$ are given by (5.17) and (5.18) respectively. As seen in Theorem 4.6, (6.45) has, for every $u \in L^2(0,T;U)$ and $\varepsilon > 0$, a unique solution $y^\varepsilon$, and the map $u \to y^\varepsilon$ is compact from $L^2(0,T;U)$ to $L^2(Q)$. As noted earlier, this implies that for every $\varepsilon > 0$ problem (6.44) has at least one solution $(y_\varepsilon,u_\varepsilon)$. We have

$$G^\varepsilon(y_\varepsilon,u_\varepsilon) \le G^\varepsilon(y^\varepsilon,u^*) \quad \text{for all } \varepsilon > 0$$

($y^\varepsilon$ is the solution to (6.45) where $u = u^*$). Inasmuch as, by Theorem 4.6, $y^\varepsilon \to y^*$ strongly in $C([0,T];H)$, we conclude by Proposition 1.10 that

$$\limsup_{\varepsilon\to 0} G^\varepsilon(y_\varepsilon,u_\varepsilon) \le G(y^*,u^*). \qquad (6.46)$$

On the other hand, since $\{u_\varepsilon\}$ is bounded in $L^2(0,T;U)$, we may assume that

$$u_\varepsilon \to u_1 \quad \text{weakly in } L^2(0,T;U).$$

Then, arguing as in the proof of Theorem 4.6 (see estimates (4.94), (4.95)), we conclude that $\{y_\varepsilon\}$ is bounded in $L^2(0,T;V) \cap W^{1,2}([0,T];V') \subset C([0,T];H)$ and $\{\beta^\varepsilon(y_\varepsilon)\}$ is bounded in $L^2(Q)$. Thus on a subsequence we have

$$y_\varepsilon \to y_1 \quad \text{weakly in } L^2(0,T;V) \cap W^{1,2}([0,T];V'), \text{ strongly in } L^2(0,T;H)$$
$$y'_\varepsilon \to y'_1 \quad \text{weakly in } L^2(0,T;V')$$
$$A(t)y_\varepsilon \to A(t)y_1 \quad \text{weakly in } L^2(0,T;V')$$
$$\beta^\varepsilon(y_\varepsilon) \to \eta \quad \text{weakly in } L^2(0,T;H).$$

Repeating the argument used at the end of the proof of Theorem 4.6, we conclude that $y_1$ is the solution to (6.37) where $u = u_1$. Then by (6.46) we see that

$$G(y_1,u_1) \le \limsup_{\varepsilon\to 0} G^\varepsilon(y_\varepsilon,u_\varepsilon) \le G(y^*,u^*)$$

and this yields (see Lemma 5.2)

$$u_\varepsilon \to u^* \quad \text{strongly in } L^2(0,T;U) \qquad (6.47)$$

whence, by Theorem 4.6,

$$y_\varepsilon \to y^* \quad \text{strongly in } C([0,T];H) \text{ and weakly in } L^2(0,T;V) \qquad (6.48)$$
$$\beta^\varepsilon(y_\varepsilon) \to f_0 - y^*_t + \Delta y^* \quad \text{weakly in } L^2(0,T;H). \qquad (6.49)$$
Now let $p_\varepsilon \in L^2(0,T;V)$ be the solution to the boundary value problem

$$(p_\varepsilon)_t + \Delta p_\varepsilon - p_\varepsilon\dot\beta^\varepsilon(y_\varepsilon) = \nabla g^\varepsilon(t,y_\varepsilon) \quad \text{in } Q$$
$$p_\varepsilon(x,T) + \nabla\varphi_0^\varepsilon(y_\varepsilon(x,T)) = 0, \quad x \in \Omega. \qquad (6.50)$$

Taking into account the fact that $g^\varepsilon$ and $\varphi_0^\varepsilon$ are Fréchet differentiable, it follows by the inequality

$$G^\varepsilon(y_\varepsilon^\lambda,u_\varepsilon+\lambda v) - G^\varepsilon(y_\varepsilon,u_\varepsilon) \ge 0 \quad \forall v \in L^2(0,T;U),\ \lambda > 0$$

($y_\varepsilon^\lambda$ is the solution to (6.45) where $u = u_\varepsilon+\lambda v$) that (see Section 5.2)

$$\int_0^T \Big(h'(u_\varepsilon,v) + \langle u_\varepsilon-u^*,v\rangle - \int_{\Gamma_1} p_\varepsilon v\,d\sigma\Big)dt \ge 0 \quad \forall v \in L^2(0,T;U). \qquad (6.51)$$

Here $\langle\cdot,\cdot\rangle$ is the scalar product of $U = L^2(\Gamma_1)$.
A priori estimates for the approximating dual arc $p_\varepsilon$ follow as in the preceding cases, i.e. one multiplies (6.50) by $p_\varepsilon$ and by sgn $p_\varepsilon$ (more precisely, by $\gamma_\lambda(p_\varepsilon)$ where $\gamma_\lambda$ is given by (3.66)) and integrates on $[0,t]$ and $Q$, respectively. We get

$$|p_\varepsilon(t)|_2^2 + \int_0^T \|p_\varepsilon(t)\|^2_{H^1(\Omega)}\,dt + \int_Q |\dot\beta^\varepsilon(y_\varepsilon)p_\varepsilon|\,dx\,dt \le C.$$

In particular, this implies that $\{p'_\varepsilon\}$ is bounded in $L^1(0,T;L^1(\Omega)) + L^2(0,T;V') \subset L^1(0,T;L^1(\Omega)+V') \subset L^1(0,T;(H^s(\Omega) \cap V)')$ (by the Sobolev imbedding theorem), where $s > N/2$. Thus, selecting a subsequence, we may assume that

$$p_\varepsilon \to p \quad \text{weakly in } L^2(0,T;V), \text{ weak star in } L^\infty(0,T;H) \qquad (6.52)$$

and by the Helly theorem

$$p_\varepsilon(t) \to p(t) \quad \text{strongly in } (H^s(\Omega) \cap V)' \text{ for all } t \in [0,T]. \qquad (6.53)$$
Since the injection of $V$ into $H = L^2(\Omega)$ is compact, we have by Lemma 5.1, Chapter 1 in [50],

$$|p_\varepsilon(t)-p(t)|_2 \le \lambda\|p_\varepsilon(t)-p(t)\|_V + C(\lambda)\|p_\varepsilon(t)-p(t)\|_{(H^s(\Omega)\cap V)'} \quad \text{for all } \lambda > 0.$$

Together with (6.52) and (6.53), this implies that

$$p_\varepsilon \to p \quad \text{strongly in } L^2(Q) \qquad (6.54)$$

and by (6.52), (6.53) we conclude that

$$p_\varepsilon(t) \to p(t) \quad \text{weakly in } L^2(\Omega) \text{ for all } t \in [0,T].$$
Now, by Lemma 5.4 and (6.49), (6.50),

$$\nabla g^{\varepsilon_n}(t,y_{\varepsilon_n}) \to \zeta \in \partial g(t,y^*) \quad \text{weak star in } L^\infty(0,T;H)$$
$$p_{\varepsilon_n}(T) \to p(T) \in -\partial\varphi_0(y^*(T)) \quad \text{weakly in } H.$$

Since $\{\dot\beta^{\varepsilon_n}(y_{\varepsilon_n})p_{\varepsilon_n}\}$ is bounded in $L^1(Q)$ and therefore weak star compact in $(L^\infty(Q))^*$, there exist $\omega \in (L^\infty(Q))^*$ and a generalized subsequence $\{\lambda\}$ of $\{\varepsilon_n\}$ such that

$$\dot\beta^\lambda(y_\lambda)p_\lambda \to \omega \quad \text{weak star in } (L^\infty(Q))^*. \qquad (6.55)$$

Thus, taking $\varepsilon = \lambda$ to the limit in (6.50), we see that $p$ satisfies the equations

$$p_t + \Delta p - \omega \in \partial g(t,y^*) \quad \text{a.e. in } Q$$
$$p(T) + \partial\varphi_0(y^*(T)) \ni 0 \quad \text{in } \Omega \qquad (6.56)$$
$$\frac{\partial p}{\partial\nu} + \alpha p = 0 \quad \text{in } \Sigma_1, \qquad p = 0 \quad \text{in } \Sigma_2, \qquad \frac{\partial p}{\partial\nu} = 0 \quad \text{in } \Sigma_3. \qquad (6.57)$$

Similarly, letting $\varepsilon = \varepsilon_n$ tend to zero in (6.51), we get

$$p \in \partial h(u^*) \quad \text{a.e. in } ]0,T[.$$

Equations (6.56), (6.57) must of course be understood in the sense of distributions. To conclude the proof it remains to verify (6.39), (6.42). To this end we use inequalities (5.64), (5.65) in the proof of Theorem 5.2, together with relations (6.48), (6.49), (6.54), to get

$$p_{\varepsilon_n}\beta^{\varepsilon_n}(y_{\varepsilon_n}) \to p(f_0 - y^*_t + \Delta y^*) = 0 \quad \text{strongly in } L^1(Q) \qquad (6.58)$$
$$y_{\varepsilon_n}\dot\beta^{\varepsilon_n}(y_{\varepsilon_n})p_{\varepsilon_n} \to 0 \quad \text{strongly in } L^1(Q). \qquad (6.59)$$

By Egorov's theorem, for each $\eta > 0$ there exists $Q_\eta \subset Q$ such that $m(Q\setminus Q_\eta) \le \eta$ and, arguing as in the proof of Theorem 5.2, we obtain

$$(p_t + \Delta p - \partial g(t,y^*))y^* = 0. \qquad (6.39)'$$
REMARK 6.3 Theorem 6.3 remains valid if $g:[0,T] \times H^1(\Omega) \to R$ is a function of the form (boundary observation)

$$g(t,y) = \int_{\Gamma_1} g_0(t,y(\sigma))\,d\sigma$$

where $g_0:[0,T] \times R \to \bar R^+$ is measurable in $t$, locally Lipschitz in $y$ and

$$|\nabla_yg_0(t,y)| \le C(1+|y|) \quad \text{a.e. } t \in ]0,T[,\ y \in R.$$

The optimality system (6.39) to (6.42) has in this case the following form:

$$p_t + \Delta p = 0 \quad \text{in } [y^* > 0], \qquad p = 0 \quad \text{in } [y^* = 0;\ f_0 \neq 0]$$
$$p(T) + \partial\varphi_0(y^*(T)) \ni 0 \quad \text{in } \Omega.$$

This follows by an easy adaptation of the proof, taking in (6.44)

$$g^\varepsilon(t,y) = \int_{\Gamma_1} g_0^\varepsilon(t,y(\sigma))\,d\sigma$$

and noting that if $y_\varepsilon \to y_1$ weakly in $L^2(0,T;V) \cap W^{1,2}([0,T];V')$, then the sequence of traces $\{y_\varepsilon|_{\Sigma_1}\}$ is compact in $L^2(\Sigma_1)$.

REMARK 6.4 For the approximation of an optimal control we may use the following control process: Minimize
(6.60) on all (y,u) subject to (6.45).
Here $h$ is the smooth convex function defined as in Section 1.5. Arguing as above, we see that for some subsequence $\varepsilon_n \to 0$, $\{u_{\varepsilon_n}\}$ is weakly convergent in $L^2(0,T;U)$ to an optimal control $\tilde u$ of problem (6.36). For numerical algorithms we refer the reader to [79].

Example Consider an absorbing tissue $\Omega$ (Example 4.1) and denote by $y(x,t)$ the distribution of oxygen concentration at moment $t$. If $y_0(x) \ge 0$ is the initial concentration, it is required to arrive at a concentration $y^0(x)$ at $t = T$ by choosing an adequate oxygen flux $u$ on the boundary $\Gamma$, subject to the constraints $0 \le u(\sigma,t) \le \rho$ a.e. $(\sigma,t) \in \Sigma$.
To this end, consider the following problem: Minimize

$$\int_\Omega |y^0(x)-y(x,T)|\,dx + \int_0^T h(u(t))\,dt \qquad (6.61)$$

on all $(y,u)$ subject to (see (4.52))

$$y_t - \Delta y \ge -1, \quad y \ge 0 \quad \text{a.e. in } Q$$
$$y_t - \Delta y = -1 \quad \text{a.e. in } [y > 0]$$
$$y(x,0) = y_0(x), \quad x \in \Omega; \qquad \frac{\partial y}{\partial\nu} = u \quad \text{in } \Sigma = \Gamma \times ]0,T[.$$
In the cost functional (6.61), the first term penalizes the difference between the actual and the desired concentration. Here $h:L^2(\Gamma) \to \bar R$ is the function defined by

$$h(u) = 0 \ \text{ if } 0 \le u \le \rho \text{ a.e. in } \Gamma, \qquad h(u) = +\infty \ \text{ otherwise.}$$

Then (see Proposition 1.9)

$$\partial h(u) = \{w \in L^2(\Gamma);\ w = 0 \text{ a.e. in } [0 < u < \rho],\ w \ge 0 \text{ a.e. in } [u = \rho],\ w \le 0 \text{ a.e. in } [u = 0]\}.$$
By Theorem 6.3, if $(y^*,u^*)$ is an optimal pair of problem (6.61), then there exists $p \in BV([0,T];(H^s(\Omega))') \cap L^2(0,T;H^1(\Omega)) \cap L^\infty(0,T;L^2(\Omega))$ which satisfies the system

$$(p_t + \Delta p)_a = 0 \quad \text{a.e. in } [y^* > 0]$$
$$\frac{\partial p}{\partial\nu} = 0 \quad \text{in } \Sigma, \qquad p = 0 \quad \text{in } [y^* = 0]$$
$$p(x,T) \in \mathrm{sgn}\,(y^0(x)-y^*(x,T)), \quad x \in \Omega$$

while the optimal control is given by

$$u^*(\sigma,t) = \begin{cases} 0 & \text{if } p(\sigma,t) < 0 \\ \rho & \text{if } p(\sigma,t) > 0. \end{cases}$$

Thus, if $y^0 \neq y^*(T)$, it follows that $p \neq 0$ a.e. in $\Sigma$, and so the optimal control $u^*$ is bang-bang.

Now we shall present a variant of problem (6.36) in the case where the control functions $u$ are in the space
$U = \{u \in W^{1,2}([0,T];L^2(\Gamma_1));\ u(0) = 0\}$ and $y_0 = 0$, $f_0 = -\lambda^0$. As seen in Section 4.3, the variational inequality (6.37) has in this case a unique solution in $W^{1,\infty}([0,T];H) \cap W^{1,2}([0,T];V)$ and it is equivalent to the one-phase Stefan problem (4.68) to (4.70). We associate with this equation the control problem: Minimize
$$\int_0^T g(t,y(t))\,dt + \chi(u) + \varphi_0(y(T)) \tag{6.62}$$
on all $u \in U$ and $y \in W^{1,\infty}([0,T];H) \cap W^{1,2}([0,T];V)$ subject to (6.37).
Here $g:[0,T] \times H \to \bar R^+$, $\varphi_0:H \to \bar R^+$ satisfy assumptions (vi) in Section 5.1 and $\chi:U \to \bar R$ is a lower semicontinuous convex function. Let $(y^*,u^*)$ be an optimal pair for problem (6.62). To obtain the optimality system we shall proceed as in the proof of Theorem 6.3. Namely, one considers the control problem with pay-off
$$\int_0^T g^\varepsilon(t,y(t))\,dt + \chi(u) + \frac{1}{2}\|u - u^*\|_U^2 + \varphi_0^\varepsilon(y(T))$$
and state system (6.45). Here $\|\cdot\|_U$ is the natural norm of $U$, i.e.
$$\|u\|_U^2 = \int_{\Sigma_1} |u'(t,\sigma)|^2\,dt\,d\sigma,$$
where $u'$ is the derivative of $u:[0,T] \to L^2(\Gamma_1)$. If $(y_\varepsilon,u_\varepsilon)$ is an optimal pair for this problem then by the above argument it follows via Proposition 4.6 that
$$u_\varepsilon \to u^* \ \text{strongly in } U,$$
$$y_\varepsilon \to y^* \ \text{strongly in } W^{1,\infty}([0,T];H) \cap W^{1,2}([0,T];V).$$
If $p_\varepsilon \in W(0,T;V)$ is the solution to (6.50) then we have
$$\chi'(u_\varepsilon,v) + \int_{\Sigma_1} (u_\varepsilon' - u^{*\prime})v'\,d\sigma\,dt - \int_{\Sigma_1} p_\varepsilon v\,d\sigma\,dt \ge 0 \quad \forall v \in U. \tag{6.63}$$
Relations (6.52) to (6.55) remain valid in this case. Hence, arguing as in the above proof, we infer that there exists $p \in L^2(0,T;V) \cap L^\infty(0,T;H) \cap BV([0,T];(H^s(\Omega) \cap V)')$ which satisfies (6.39) to (6.42). Now letting $\varepsilon$ tend to zero in (6.63) we get
$$\chi'(u^*,v) - \int_{\Sigma_1} p v\,d\sigma\,dt \ge 0 \quad \forall v \in U. \tag{6.64}$$
We identify the dual space $U'$ of $U$ with $L^2(\Sigma_1)$ under the pairing
$$\langle v,q \rangle = \int_{\Sigma_1} q(\sigma,t)v'(\sigma,t)\,d\sigma\,dt \quad \forall v \in U,\ q \in U'.$$
Then (6.64) yields
$$\int_t^T p(\sigma,s)\,ds - \partial\chi(u^*)(\sigma,t) \ni 0 \quad \text{a.e. } (\sigma,t) \in \Sigma_1, \tag{6.65}$$
where $\partial\chi:U \to U'$ is the subdifferential of $\chi$.
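The passage from (6.64) to (6.65) rests on an integration by parts in time; a sketch of the computation (ours, using only $v(0) = 0$ for $v \in U$ and Fubini's theorem):

```latex
% Since v(\sigma,t) = \int_0^t v'(\sigma,s)\,ds, Fubini gives
\int_{\Sigma_1} p\,v \,d\sigma\,dt
  = \int_{\Sigma_1} p(\sigma,t)\Big(\int_0^t v'(\sigma,s)\,ds\Big) d\sigma\,dt
  = \int_{\Sigma_1} \Big(\int_t^T p(\sigma,s)\,ds\Big) v'(\sigma,t)\,d\sigma\,dt .
% Hence (6.64) reads \chi'(u^*,v) \ge \langle v, \int_t^T p\,ds\rangle for all
% v \in U, i.e. \int_t^T p(\sigma,s)\,ds \in \partial\chi(u^*) in U'.
```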
Summarizing, we have

THEOREM 6.4 Let $(y^*,u^*)$ be an optimal pair in problem (6.62). Then there exists $p \in L^2(0,T;V) \cap L^\infty(0,T;H) \cap BV([0,T];(H^s(\Omega) \cap V)')$ which satisfies (6.39) to (6.42) and (6.65).
§6.3 Boundary control of free boundary problems; the Dirichlet problem

We now treat the boundary control of the variational inequality
$$(y'(t),y(t)-z) + a(y(t),y(t)-z) \le \int_\Omega f_0(x,t)(y(x,t)-z(x))\,dx \quad \text{a.e. } t \in \left]0,T\right[, \ \forall z \in K_u \tag{6.66}$$
$$y(0) = y_0, \quad y(t) \in K_u \ \text{for } t \in [0,T],$$
where $a:H^1(\Omega) \times H^1(\Omega) \to R$ is defined as in problem (6.37) and
$$K_u = \{z \in H^1(\Omega);\ z = u \ \text{in } \Gamma_1,\ z = 0 \ \text{in } \Gamma_2,\ z \ge 0 \ \text{a.e. in } \Omega\}.$$
Here $\Omega = \Omega_1 \setminus \bar\Omega_2$, where $\Omega_1$, $\Omega_2$ are two open and bounded subsets of $R^N$ having sufficiently smooth boundaries $\Gamma_1$ and $\Gamma_2$, respectively (Figure 6.1).
Figure 6.1

As seen in Section 4.3, (6.66) represents the variational formulation of the obstacle problem
$$y_t - \Delta y = f_0 \quad \text{a.e. in } [y > 0]$$
$$y \ge 0, \quad y_t - \Delta y \ge f_0 \quad \text{a.e. in } Q = \Omega \times \left]0,T\right[$$
$$y(x,0) = y_0(x) \quad \text{a.e. } x \in \Omega$$
$$y = u \ \text{in } \Sigma_1 = \Gamma_1 \times \left]0,T\right[, \qquad y = 0 \ \text{in } \Sigma_2 = \Gamma_2 \times \left]0,T\right[.$$
In the following we shall assume that
$$f_0 \in L^q(Q); \quad y_0 \in W_q^{2-(2/q)}(\Omega),\ y_0 \ge 0 \ \text{a.e. in } \Omega,\ y_0 = 0 \ \text{in } \Gamma_2 \tag{6.67}$$
$$u \in W_q^{2-(1/q),1-(1/2q)}(\Sigma_1); \quad u \ge 0 \ \text{a.e. in } \Sigma_1,\ u(\sigma,0) = y_0(\sigma) \ \text{for } \sigma \in \Gamma_1 \tag{6.68}$$
where $q > 2$ is a fixed real number. As seen in Theorem 4.7, under these assumptions the variational inequality (6.66) has a unique solution $y \in W_q^{2,1}(Q)$. Now we are ready to formulate the optimal control problem associated with state equation (6.66).
Minimize
$$\int_0^T g(t,y(t))\,dt + \varphi(u) + \varphi_0(y(T)) \tag{6.69}$$
on all $y \in W_q^{2,1}(Q)$ and $u \in W_q^{2-(1/q),1-(1/2q)}(\Sigma_1)$ subject to (6.66) and to constraints (6.68). Here $g:[0,T] \times L^2(\Omega) \to \bar R^+$ and $\varphi_0:L^2(\Omega) \to \bar R^+$ are assumed to satisfy hypothesis (vi) in Section 5.1 and $\varphi:W_q^{2-(1/q),1-(1/2q)}(\Sigma_1) \to \bar R$ is a lower semicontinuous, convex function satisfying the condition
$$u \ge 0 \ \text{a.e. in } \Sigma_1, \quad u(\sigma,0) = y_0(\sigma) \ \text{a.e. } \sigma \in \Gamma_1, \quad \text{for all } u \in D(\varphi). \tag{6.70}$$
This assumption allows us to incorporate the control constraints (6.68) into the definition of $\varphi$. We note also that if $\varphi$ satisfies a growth condition of the form
$$\varphi(u) \ge C_1\|u\|_q^2 + C_2 \quad \forall u \in D(\varphi),$$
where $C_1 > 0$ and $\|\cdot\|_q$ is the norm of $X_q = W_q^{2-(1/q),1-(1/2q)}(\Sigma_1)$, then problem (6.69) has at least one solution (this follows as in Proposition 5.1, taking into account that by virtue of estimate (4.111) the map $u \to y$ is compact from $X_q$ to $L^q(Q)$).
The maximum principle for this problem has the following form:

THEOREM 6.5 Under assumptions (6.67), (6.68), (6.70), where $q > (N+2)/2$, let $(y^*,u^*) \in W_q^{2,1}(Q) \times X_q$ be an arbitrary optimal pair of problem (6.69). Then there exist $\mu \in (L^\infty(Q))^*$ and $p \in L^\infty(0,T;L^2(\Omega)) \cap L^2(0,T;H_0^1(\Omega)) \cap BV([0,T];H^{-s}(\Omega))$, where $s > N/2$, such that
$$p_t + \Delta p \in (L^\infty(Q))^*, \quad \frac{\partial p}{\partial \nu} \in X_q^* \tag{6.71}$$
$$(p_t + \Delta p - \mu)y^* = 0, \quad p_t + \Delta p - \mu \in \partial g(t,y^*) \ \text{a.e. in } Q \tag{6.72}$$
$$p(T) + \partial\varphi_0(y^*(T)) \ni 0 \ \text{in } \Omega \tag{6.73}$$
$$p(y_t^* - \Delta y^* - f_0) = 0 \ \text{a.e. in } Q \tag{6.74}$$
$$-\frac{\partial p}{\partial \nu} \in \partial\varphi(u^*) \ \text{in } \Sigma_1. \tag{6.75}$$
In (6.71) we have denoted by $X_q^*$ the dual space of $X_q$, and in (6.72) the product $(p_t + \Delta p - \mu)y^*$ makes sense because $y^* \in W_q^{2,1}(Q) \subset C(\bar Q)$ for $q > (N+2)/2$.

Proof We follow the method of Section 6.2. We start with the approximating optimal control problem:
Minimize
$$H^\varepsilon(y,u) = \int_0^T g^\varepsilon(t,y(t))\,dt + \varphi(u) + \frac{1}{2}\|u-u^*\|_q^2 + \varphi_0^\varepsilon(y(T)) \tag{6.76}$$
over all $(y,u) \in W_q^{2,1}(Q) \times X_q$ subject to
$$y_t - \Delta y + \beta^\varepsilon(y) = f_0 \ \text{in } Q, \quad y(x,0) = y_0(x) \ \text{in } \Omega, \quad y = u \ \text{in } \Sigma_1,\ y = 0 \ \text{in } \Sigma_2, \tag{6.77}$$
where $g^\varepsilon$, $\varphi_0^\varepsilon$ are defined by (5.17), (5.18) and $\beta^\varepsilon$ by (3.79). Under the present assumptions, (6.77) has been considered in Section 4.3 (Theorem 4.7). Let $(y_\varepsilon,u_\varepsilon) \in W_q^{2,1}(Q) \times X_q$ be an optimal pair for problem (6.76), (6.77). We have
LEMMA 6.3 For $\varepsilon \to 0$,
$$u_\varepsilon \to u^* \ \text{strongly in } X_q \tag{6.78}$$
$$y_\varepsilon \to y^* \ \text{weakly in } W_q^{2,1}(Q) \ \text{and strongly in } C(\bar Q) \tag{6.79}$$
$$\beta^\varepsilon(y_\varepsilon) \to f_0 + \Delta y^* - y_t^* \ \text{weakly in } L^q(Q). \tag{6.80}$$
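The role of the penalty term $\beta^\varepsilon$ in (6.77) can be illustrated numerically. Below is a minimal one-dimensional sketch of the idea; for simplicity we take a steady state and assume the common choice $\beta^\varepsilon(y) = \min(y,0)/\varepsilon$, which may differ from the definition (3.79) used in the text.

```python
import numpy as np

def penalized_obstacle(f, n=100, eps=1e-6, sweeps=5000):
    """Solve -y'' + min(y,0)/eps = f on (0,1), y(0)=y(1)=0, by Gauss-Seidel.

    The penalty min(y,0)/eps approximates the graph beta enforcing y >= 0:
    it is inactive where y > 0 and pushes y back up wherever y < 0.
    """
    h = 1.0 / n
    y = np.zeros(n + 1)
    for _ in range(sweeps):
        for i in range(1, n):
            # where y_i < 0 the penalty contributes the diagonal term 1/eps
            penal = 1.0 / eps if y[i] < 0.0 else 0.0
            y[i] = (f[i] * h * h + y[i - 1] + y[i + 1]) / (2.0 + penal * h * h)
    return y

n = 100
f = -8.0 * np.ones(n + 1)        # forcing that pushes the state downwards
y = penalized_obstacle(f, n=n)
print(float(y.min()))            # close to 0: the violation is only O(eps)
```

Without the penalty the solution would be $-4x(1-x) < 0$; with it, the constraint $y \ge 0$ is violated only by a quantity of order $\varepsilon$, mirroring the convergence stated in (6.79), (6.80).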
Proof The proof is essentially the same as that of Lemma 5.2 but we outline it for the reader's convenience. For all $\varepsilon > 0$ we have
$$H^\varepsilon(y_\varepsilon,u_\varepsilon) \le H^\varepsilon(\tilde y_\varepsilon,u^*),$$
where $\tilde y_\varepsilon$ is the solution to (6.77) with $u = u^*$. By Theorem 4.7, $\tilde y_\varepsilon \to y^*$ weakly in $W_q^{2,1}(Q)$ and therefore strongly in $C(\bar Q)$. This yields
$$\lim_{\varepsilon \to 0} H^\varepsilon(\tilde y_\varepsilon,u^*) = \int_0^T g(t,y^*(t))\,dt + \varphi(u^*)$$
and therefore
$$\limsup_{\varepsilon \to 0} H^\varepsilon(y_\varepsilon,u_\varepsilon) \le \int_0^T g(t,y^*(t))\,dt + \varphi(u^*). \tag{6.81}$$
On the other hand, on a subsequence we have $u_\varepsilon \to u_1$ weakly in $X_q$, and by Theorem 4.7, $y_\varepsilon \to y_1$ weakly in $W_q^{2,1}(Q)$ and strongly in $C(\bar Q)$, where $y_1$ is the solution to (6.66) with $u = u_1$. This yields
$$\liminf_{\varepsilon \to 0} H^\varepsilon(y_\varepsilon,u_\varepsilon) \ge \int_0^T g(t,y_1(t))\,dt + \varphi(u_1) + \frac{1}{2}\|u_1 - u^*\|_q^2,$$
which together with (6.81) and the optimality of $(y^*,u^*)$ implies $u_1 = u^*$ and (6.78). Relations (6.79) and (6.80) follow as in
Theorem 4.7.

Now, returning to the proof of Theorem 6.5, let $p_\varepsilon \in H^{2,1}(Q) \cap L^2(0,T;H_0^1(\Omega))$ be the solution to the boundary value problem
$$(p_\varepsilon)_t + \Delta p_\varepsilon - \dot\beta^\varepsilon(y_\varepsilon)p_\varepsilon = \nabla g^\varepsilon(t,y_\varepsilon) \ \text{in } Q$$
$$p_\varepsilon(x,T) + \nabla\varphi_0^\varepsilon(y_\varepsilon(x,T)) = 0 \ \text{in } \Omega, \qquad p_\varepsilon = 0 \ \text{in } \Sigma. \tag{6.82}$$
Now since $u_\varepsilon$ is an optimal control of problem (6.76) we have
$$\int_0^T (\nabla g^\varepsilon(t,y_\varepsilon(t)),w_\varepsilon(t))\,dt + \varphi'(u_\varepsilon,v) + \langle F(u_\varepsilon - u^*),v \rangle + (\nabla\varphi_0^\varepsilon(y_\varepsilon(T)),w_\varepsilon(T)) \ge 0 \quad \forall v \in X_q,$$
where $w_\varepsilon \in W_q^{2,1}(Q)$ is the solution to the boundary value problem
$$(w_\varepsilon)_t - \Delta w_\varepsilon + \dot\beta^\varepsilon(y_\varepsilon)w_\varepsilon = 0 \ \text{in } Q$$
$$w_\varepsilon(x,0) = 0,\ x \in \Omega; \qquad w_\varepsilon = v \ \text{in } \Sigma_1,\ w_\varepsilon = 0 \ \text{in } \Sigma_2,$$
$\langle\cdot,\cdot\rangle$ is the pairing between $X_q$ and $X_q^*$, and $F:X_q \to X_q^*$ is the duality mapping of $X_q$ ($\varphi'$ is, as usual, the directional derivative of $\varphi$). Taking the scalar product of (6.82) by $w_\varepsilon$ and integrating by parts we find, after some manipulation involving Green's formula, that
$$-\int_{\Sigma_1} v\,\frac{\partial p_\varepsilon}{\partial \nu}\,d\sigma\,dt \le \varphi'(u_\varepsilon,v) + \langle F(u_\varepsilon - u^*),v \rangle \quad \forall v \in X_q.$$
Hence
$$-\frac{\partial p_\varepsilon}{\partial \nu} - F(u_\varepsilon - u^*) \in \partial\varphi(u_\varepsilon) \ \text{in } \Sigma_1. \tag{6.83}$$
Now multiplying (6.82) by $p_\varepsilon$ and then by $\operatorname{sgn} p_\varepsilon$ we get the estimate
$$|p_\varepsilon(t)|_2^2 + \int_0^T \|p_\varepsilon(t)\|_{H_0^1(\Omega)}^2\,dt + \int_Q |\dot\beta^\varepsilon(y_\varepsilon)p_\varepsilon|\,dx\,dt \le C. \tag{6.84}$$
Then, arguing as in the proof of Theorem 6.3, we infer that there exists a function $p \in BV([0,T];H^{-s}(\Omega))$, $s > N/2$, such that on a subsequence $\varepsilon \to 0$,
$$p_\varepsilon \to p \ \text{strongly in } L^2(Q), \ \text{weakly in } L^2(0,T;H_0^1(\Omega)) \ \text{and weak star in } L^\infty(0,T;L^2(\Omega)) \tag{6.85}$$
$$p_\varepsilon(t) \to p(t) \ \text{pointwise in } H^{-s}(\Omega) \ \text{on } [0,T].$$
Moreover, there exists $\mu \in (L^\infty(Q))^*$ such that on a generalized subsequence, again denoted $\varepsilon$,
$$\dot\beta^\varepsilon(y_\varepsilon)p_\varepsilon \to \mu \ \text{weak star in } (L^\infty(Q))^*. \tag{6.86}$$
Thus, going to the limit in (6.82), we see that
$$p_t + \Delta p - \mu \in \partial g(t,y^*) \ \text{in } \mathcal D'(Q) \tag{6.87}$$
and
$$p(T) + \partial\varphi_0(y^*(T)) \ni 0 \ \text{in } \Omega. \tag{6.88}$$
In other words,
$$\int_Q p\phi_t\,dx\,dt + \mu(\phi) + \int_Q \nabla p \cdot \nabla\phi\,dx\,dt + \int_Q \zeta\phi\,dx\,dt - \int_\Omega p(x,T)\phi(x,T)\,dx = 0, \tag{6.87'}$$
$$\zeta \in \partial g(t,y^*) \ \text{a.e. in } Q, \quad \zeta \in L^2(Q),$$
for all $\phi \in L^2(0,T;H_0^1(\Omega)) \cap L^\infty(Q)$ such that $\phi_t \in L^2(0,T;H^{-1}(\Omega))$ and $\phi(\cdot,0) = 0$ on $\Omega$. Now let $\chi \in W_q^{2,1}(Q)$ be the solution to the boundary value problem
$$\chi_t - \Delta\chi = 0 \ \text{in } Q, \quad \chi(x,0) = \chi^0(x) \ \text{in } \Omega$$
$$\chi = \alpha \ \text{in } \Sigma_1, \quad \chi = 0 \ \text{in } \Sigma_2, \tag{6.89}$$
where $\alpha$ is arbitrary in $X_q$ and $\chi^0 \in W_q^{2-(2/q)}(\Omega)$ is such that $\chi^0 = \alpha(\cdot,0)$ in $\Gamma_1$ and $\chi^0 = 0$ in $\Gamma_2$. According to the trace theorem (see for instance [49], p. 87) we may choose $\chi^0$ in such a way that
$$\|\chi^0\|_{W_q^{2-(2/q)}(\Omega)} \le C\|\alpha(\cdot,0)\|_{W_q^{2-(3/q)}(\Gamma_1)} \le C_1\|\alpha\|_q.$$
Hence
$$\|\chi\|_{W_q^{2,1}(Q)} \le C\|\alpha\|_q \quad \forall\alpha \in X_q. \tag{6.90}$$
Now we multiply (6.82) by $\chi$ and integrate on $Q$. Using Green's formula and substituting (6.89), we get, by (6.90),
$$\Big|\int_{\Sigma_1} \alpha\,\frac{\partial p_\varepsilon}{\partial \nu}\,d\sigma\,dt\Big| \le C\|\alpha\|_q \quad \forall\alpha \in X_q,$$
where $C$ is independent of $\varepsilon$. This implies that the set $\{\partial p_\varepsilon/\partial \nu\}$ is bounded in $X_q^*$. Then, letting $\varepsilon$ tend to zero in (6.83), we conclude that
$$-\frac{\partial p}{\partial \nu} \in \partial\varphi(u^*) \ \text{in } \Sigma_1,$$
where $\partial p/\partial \nu \in X_q^*$ is defined as
$$\int_{\Sigma_1} \phi\,\frac{\partial p}{\partial \nu} = \int_Q p\phi_t\,dx\,dt + \int_Q \nabla p \cdot \nabla\phi\,dx\,dt + \mu(\phi) + \int_Q \phi\,\partial g(t,y^*)\,dx\,dt - \int_\Omega p(x,T)\phi(x,T)\,dx$$
for all $\phi \in W_q^{2,1}(Q)$ such that $\phi(x,0) = 0$ for $x \in \Omega$ and $\phi = 0$ in $\Sigma_2$. Now by the same reasoning as in the proofs of Theorems 5.2 and 5.3 it follows by (6.79), (6.80) and (6.85) that
PESE(YE)
~ p(fO-Yt
+
6Y*) =
° strongly in L1(Q)
(6.91)
°
1 (6.92) p SE(y)y ~ strongly in L (Q). E E E 2 "(Q) is compactly embedded in C(O) it follows by Lemma 6.3 ((6.79)) Since W q that, selecting a subsequence if necessary, we have y
E
~
y* strongly in C(O)
and by (6.86), (6.92) ~y*
=
° in (Loo(Q))* or (C(O))*.
(6.92)
I
Together with (6.87) and (6.91) this implies (6.72) and (6.74), thereby completing the proof of Theorem 6.5. To give a more specific example, we return to the control problem of oxygen consumption in an absorbing tissue considered in Section 6.3. This time the control function u is the value of the oxygen concentration on the boundary r. Thus we are led to the following optimal control problem: Minimize
(6.93) on all (y,u) subject to the state system
Yt - 6y> -1, y> Yt - 6y = -1
°a.e.
a.e. in [y
y(x,O) = YO(x), x E
~;
in Q >
(6.94)
0]
Y = u in E
and to the control constraints u E Un' where
Uo = {u E Xq ; u(o,O) = YO(o) for Here
ex
/I •
is a positive constant and
° E r,
=
f ~llul12q 1.
+
00
ifuEU 0
° a.e.
in E}.
IIq ;s the norm of X •
This ;s a problem of the form (6.69) where g ¢(u)
u>
q
= 0, ~o(Y) = i ly_yOI~
and (6.95)
otherw; se. 255
Then the subdifferential d~ ( u)
a~
can be written as
:: aFu +- N( u ) VuE U0 :: D( ~) •
where F:X q ~ X*q is the duality mapping of Xq and N(u) normals to Uo at u, i.e. N(u) :: {n E
X~;.
c
X*q is the cone of
>- 0 vv E UO}.
(6.96)
We note that X c C(r) for q > (N+-2)/2. Then EU :: {(o,t) ~ r~ u(o,t) > O} is open, and taki~g in (6.96) v :: U ± ¢p where ¢ E C~(Eu) and pER is sufficU iently small we conclude that if ~ E N(u) then ~ = 0 in E (in the sense of measures). Taking v :: U +- ¢ where ~ E C~(r) is positive we see that ~ < 0 in r. Thus by v.irtue of Theorem 6.5 the necessary conditions of optimality in problem (6.93), (6.94) are Pt
6p :: 0 in [y*
+
>
OJ, p :: 0 in [y*
OJ
p(x,T) :: yO(x) - y*(x,T) in n, whilst the optimal control u* is given by
(~
+
a Fu*)u* :: 0, u* >- 0, ~~ +- a Fu* >- 0 in
r.
Equiv.alently u* ::-a-1F-lc~~) in [~~
u* :: 0 i.n [~~
< 0],
>
OJ.
Theorem 6.4 has been established in [7J and [11J in a slightly different form.
REMARK 6.5 equation
By (6.87)' and (6.92)' we see that the dual arc p satisfies the
JQ P¢t
dxdt +-
2
1
r
JQ
'Vp 'V¢ dxdt co
+
r ~¢
JQ
dxdt ::
r p(x,T)~(x,T)dx
)n
2-1
for all ~ E L (O,T~HO(n)) n L (Q) such that ¢t E L (O,T~H (n)), ¢(·,O) :: 0 and ¢ :: in [y* > OJ. Under this form the optimality equations (6.71) to (6.74) have been obtained by Moreno and Saguez [63J (see also [70J, [79J. [80J).
°
256
§6.4 Boundary control of mov.ing surfaces Given the free boundary problems (6.37) and (6.66), denote by Ey the incidence set {(x,t) E Q, y(x,t) = o} and by E (t) the set {x E ~, y(x,t) = OJ. Here y we shall study two problems related to the control of Ey and Ey (t) (see Section ~.5 for the steady case). Problem 1 Given a measurable subset E c Q, find Problem 2 Given a measurable subset ~o = ~o. Here Uo is a bounded, control space U. that Ey(T)
U E
Uo such that Ey = E.
E Uo in a such a way and closed subset of the
c ~, find U con~ex
The least-squares approach to problem 1. leads to an optimal control problem with state equation (6.37) or (6.66) and cost functional
i fQ Ix Ey (x,t)
- XE(x,t) 12 dxdt
+-
(6.97)
l/J(u).
Here XE ' XE are the characteristic functions of Ey ' E and l/J:U y indicator function of UO' i.e. l/J( u) = 1.f
R is the
if u E U
0
+
+
o
co
otherwise.
As seen in Section 6.2, in the case of problem (6.37) the space U might be {u E W,,2([O,T]~L2(r,)), u(O) = O} or U = L2(I,) and Uo = (u E L2(I,), u > 0 a.e. in ~,}. In the case of problem (6.66), U = Xq and Uo is a subset of (6.70). Proceeding as in Section 3.5, we shall approximate the cost functional (6.97) by the following: 1 fQ 1 y+A A - xEI 2 dxdt Z
+.
(6.98)
l/J(u)
which is of type (6.69) where
J
g(t,y) = 21 ~ 1y{x) A+ A - xE(x,t) 12 dx, y
E
L2(~).
A A As seen earlier, this control problem has at least one solution (y ,u ) E
257
L2(Q) xU. PROPOSITION 6.1
([77J) There is a sequence An
An u
+
u* weakly in U
An y
+
2 y* strongly in L (Q)
+
0 such that
where (u*,y*) is an optimal pair in problem (6.97).
Proof The proof is essentially the same as that of Proposition 3.3. we outline it for the reader's convenience. We have
I~ -
I
XE/
2
dxdt <
Q y +-A
I I____
A - XE/
2
(6.99)
dxdt
Q y +A
for all A > 0 and (y,u) E L2(Q) x Uo satisfying (6.37) or (6.66). there exists u* E Uo and An + 0 such that
Hence
A
u n
+
u* weakly in U.
Then by Proposition 4.5 or Theorems 4.6, 4.7, as the case might be, y
An
+
2 y* strongly in L (Q).
Thus selecti.ng a subsequence, if necessary, we hav..e A (y
An +.
n
A) n
-1
+
XE
y*
a.p.. in Q
and by the Lebesgue dominated conv.ergence theorem . L2(II) s t rang 1y ln ~. y* Together with (6.99), this yields An( yAn + An)-1
lim inf A n
and we see
+
0
th~t
+
XE
JQ /A n (yAn
+
A )-1 - XE/ 2 dxdt < n
JQIXE y*
(y*,u*) is an optimal pair.
For problem 2 we consider the optimal control problem:
258
However,
- xEI2
dxdt
Minimize
frt IXEy (T)
Xrt (x) 0
(x) -
2 dx
I
(6.100)
2 over all (y,u) E L (Q) x U subject to (6.31) or (6.66).
o
We approximate the cost functional (6.100) by
J~ !y ( x , ~) +\ 6
_.
XrtO (x) 12 dx
+ 1jJ ( U )
and denote by (YA'u A) a corresponding optimal pair. PROPOSITION 6.2 On a subsequence An u\
+
u weakly in U
+
Y strongly in L2(Q)
+
(
6 • 101)
We have:
0,
n
YA n
where (u,y) is an optimal pair of problem. (6.100).
The proof is identical with that of Proposition 6.1. Now we give an application of the previous theory to the control of the melting front i.n the one-phase Stefan problem (4.68) to (4.70), controlled by the temperature e 1 of the heating medi.um, subject to the constraints (6.102)
le 1 (o,t)! '" 1 a,e. (o,t) E 2: 1 , Consider the following controllability problem:
Find the temperature e subject to constraints (6.102) such that rt-(T) = rtO 1 where ~o is a given subset of rt and rt-(t) = {x E rt, e(x,t) = O} is the solid phase at moment t E [O,TJ.
As seen in Section (4.3), through the Baiocchi transformation y(x,t) = rt
e(x,s)ds,
J j(,( x) O
the one-phase Stefan problem reduces to (6.37)' where fO = - A , Yo = 0 and u(o,t) = f~ e1(o,s)ds for (o,t) E 2:" Thus the controllability problem reduces to problem 1, and according to Proposition 6.2 we may confine ourselves 259
to the optimal control problem with cost functional (6.101) and state equao 1 2 2 tion (6.37) where YO 0, fO ::: -A , U = {u E W ' ([O,T];L (f 1»,U(o,0) = 0 for a E f 1} and
o = {u
U
E U,
lu(o,t)1 . -;; 1 a.e. (o,t) E L 1} •
If (YA'U ) E (W"oo([0,T]~L2(~» n Wl,2([0,T]~V» x U is an optimal pair of A this problem, then by virtue of Theorem 6.4 there exists p E Loo(0,T;L2(~) n 2 L (0,T;H 1(r.» n BV.([O,T]~(V. n HS(~)') which satisfies the system Pt +- lip P
=
~+
d'V
=
0 in [YA
0]
>
0]
in [Y A
0 a.p
dP _ 0 in L 1 ' P ::: 0 in L2' d'V -
p(x,T) =
2(
A
(A+-YA(X,T»
A
A+YA(X,T)
while by (6.65) the optimal control uA is uA(o,t)
o in
(6.103) L3
-x~(X»,XE~, 0 gi~en
by
sgn fT p(o,s)ds a.e. (o,t) E L . t
1
REMARK 6.6 The control problems considered in this section are related to the following problem: For a given surface S = {(x,t) E Q, t ::: ~o(x)} find a boundary function u (subject to some magnitude constraints) such that the free boundary
d{(X,t) E Q, y(x,t) ::: O} of problem (6.66) coincides with S. This is a ill-posed problem, and in the special case of the one-dimensional inverse Stefan problem several numerical algorithms have been proposed in [43]. §6.5
The control of machining processes
We shall study here the following optimal control problem: Minimize T
fo 260
g(t,y(t»dt
+
~(v)
(6.104)
2
2
1
12
on aU y E L (O,T;.H (rt») n C([O,T]; H (rt))
and VE W ' ([O,TLR) subject
to
a(y(t),y(t)-z)
~
frtf(X,t)(y(x,t)-Z(X»dX VZ E K(t), t E [0,TJ(6.105)
where K(t)
= {y E H1(rt);. y> 0 in rt, y = ~(t) in
a(y,z) =
r},
f Vy.Vz dx Vy,z E H1(rt), rl
~:w~,2([0,TJ;'R)
~(v) = {
o +.
+
i.f 0 co
~ is gi~en by ~ Vi ~ P
a.e. in JO,T[
otherwi.se.
CO f E W1,2 C[0,TJ;.L Crt)) and g~[O,TJ x L2(rt) + R+ satisfiES hypothesis (vi). Here W~,2([0,T];'R) is the space {v E W1 ,2([0,T];.R) v(O) = OJ and Q c R3.
As seen in Section 4.4, the variational inequali.ty (6.104) models the electrochemical machining process controlled through the potential difference across the electrodes u(t) = v'et). According to Theorem 4.8, equation (6.105), which can be rewritten as -6y(x,t) > f(x,t), y(x,t) > (6y(x,t) y(x,t)
+.
f(x,t)y(x,t)
a a.e.
in Q = rt
x
JO,T[
= 0 a.e. (x,t) E Q
(6.105)
= v.(t) for x E r, t E [O,TJ,
has a unique solution y E L2(0,T;H 2(rt» n C([O,T];fI'(Q», and the ~ap v + y is compact from W1 ,2([0,TJ;'R) to C([O,TJ;H'(rt»). This implies by a standard method that problem (6.104) admits at least one optimal pair. THEOREM 6.6 Let (y*,v*) be any optimaZ paip in ppobZem (6.104). co 1 2 pEL (O,T;.HO(rt») and E., E L (Q) such that
Then thepe
exist
co 3 6p E (Lco(Q»*, ~~ E L (0,T;.H- / 2(r») (6p - E.,)y* = 0 in Q
(6. '06) 261
I
(6P)a ~(x,t)
==
~
a.e. in [(x,t) E Q, y*(x,t)
>
OJ
(6.107)
E 8g(t,y*)(x) a.e. (x,t) E Q
(6.108) (6.109)
~*I(t)
= 0 if
net)
< o~
= p if
V*I(t)
and 0 < ~*I(t) < p if net)
==
net)
>
0
(6.110)
0,
where n(t) " - ( ds
Ir ~~ vt do
(6.111)
E [O,T].
3 In (6.111) we have denoted by Ir (ap/av)do the value of (ap/av) E H- / 2(r) at 1 E H3/2 (r). In (6.106) we have denoted by (6p-~)y* E (Cl(D»*
«6p-~)y*)(¢)= -J~~
y*¢dx -
J~
Vp.V(y*¢)dx +
Ir y*¢ ~~ do
for all ¢ E C1(~). Now we pause briefly to illustrate Theorem 6,6 on the following model problem: Given a measurable subset E c Q find the potential difference across the electrodes u(t), 0 < t < T subject to the constraints
o<
u(t) <
p , t
E [O,TJ
to ensure on the time interval [O,TJ a minimum mean deviation of the shape
of anode O-(t)
E(t)
==
{x E {x E
~;.
~~
y(x,t) = O} from
(x,t) E EL
As seen in Section 6.4, the least-squares approach to this problem leads us to a optimal control problem of the form (6.104), (6.105) where
The function f is defined by (see Section 4.4)
262
f(x) = {
o The optimality system (6.107) to (6.111) becomes (,~W)a =
p(x,t)
A 2 (XE - _A_) in O+(t) = {x E D; y*(x,t) (A+Y*) A+Y*
= 0 in O-(t) = {x
o u*(t) = {
p
o<
u* <
p
E
~;
y*(x,t)
>
O}
= O}
if net) < 0 if net) > 0 if net) = 0,
(6.112)
where n is defined by (6.111). Proof of Theorem 6.6
As usualy, we start with the approximating problem:
Minimize
fTo g
S(
t , Y( t ) ) dt + 1j! ( v) + 1. "v *- V 112 2 W~,2([O.T];R)
(6.113 )
on all (y,v) subject to
y = vet) in r
where
sS
(6.114 )
is defined by(3.79).
Let (ys'v ) E (L2(0,T;H2(~)) n C([O,T];H1(~))) x U,,2([Q,T];R) be an s optimal pair. Using Theorem 4.8 and its proof it follows as in previous cases that Vt:
+
v* strongly in W1,2([0,T];R)
(6.115 )
Ys
+
y* strongly in C([O,T];H2(~))
(6.116 )
s On the other hand, mUltiplying (6.114), where y = Ys' by IS (ys)IQ-2 Ss(ys) and taking into account the fact that Ss(ys) = 0 in r (see the proof of Theorem 4.7) we get 263
I/Ss(y )11 q ..;; C s L (Q) where C is independent of sand q> 2. Hence {Ss(y )} is bounded in Loo(Q). s For every t E [O,T] and s > 0 there is a unique ps(t) E H~(n) n H2(n) such that ·s s 6p s (t)-p s (t)S (y s (t)) = Vg (t,y s (t)) in n.
00 1 2 Obviously Ps E L (O,T~HO(n) n H (n)). 6.4 we see that
I: Ir v(t)
a:~
(o,t)da dt
+
( 6.117)
Now arguing as in the proof of Theorem
W'(VE'V)
+
I: (v~
- v*')v'dt> a
Vv. E W6' 2( [ 0 , T] ;.R)
( 6. 11 8 )
where ~' is the directional derivative of ~. Now multiplying (6.117) by ps and sgn ps , respectively, we find by a standard method that lip 1100 1 + J Ips(t) II n s L (O,T~HO(n))
6s (y
s
(t)) I dx ..;; C, t E [O,T].
Hence selecting a subsequence (generalized, if necessary) we have (6.119) psSs(ys)
+
w weak star in (Loo(Q))*
-w E L2(Q) and
where 6p
6p - W E dg(t,y*) a.e. in Q.
(6.120)
Now by (6.117) it follows via Green's formula that
Jr
¢ dPs do dV
= J ¢Vgs(t,y )dX+J V¢·Vp dx n
s
D
This yields
Jr 264
¢d:Sdo..;;CII¢111 v
H (n)
2 V¢EH (n)
s
+
J p 6s (y )¢dx n s
s
and therefore by the trace theorem {ap /dV} is bounded in Lcc(0,T~H-3/2(r)). s Thus letting s tend to z.ero in (6.118) we get
-I:
«t)dt
Ir ~~
do"
~'("*,v)
Vv E W6,2(],T];R).
On the other hand, by (6.106) it is readily seen that X E (W~,2([0,T]~R)* of the form X(w) =
I: ~(t)w'(t)dt
d~(V*)
(6.121)
consists of
Vw E W6,2([O,T];R)
where n E L2(0,T;.R) and n = 0 in [0 < V*I(t) < pJ, n>- 0 in [V*I(t) pJ, n '" 0 in [V*I(t) = OJ. Together with (6.121) the latter yields (6.110) and (6.111). Now by inequality (5.65) we conclude as in the ~receding cases that psSs(ys)
+
0 a.e. in Q.
(6.122)
Since {p } is weakly compact in L2(Q) and {Ss(y )} is bounded in Lcc(Q) we infer th~t {p s SS(y s )} is weakly compact in L2(Q). Thus selecting a further subsequence we may assume that
and again by (6.122) we have (6.123) Next by Green's
for~ula,
fQcjl pS(f + 6y S)dxdt = JQcjlp Sf
dxdt - J Vy ·V(cjlp )dxdt Q E S
for all cjlEC 1(Q) and letting S
+
0, by (6.115), (6.119), (6.123) we conclude that
JQcjl p f dxdt -
fQ
1 Vy*·V(cjlp)dxdt = 0 vcjl E C (Q).
Hence
JQcjl p(f + 6y*)dxdt = 0
vcjl E C1 (Q)
265
and (6.109) follows. Now by (5.64) we jnfer that
which implies (6.106), (6.107) by the same reasoning as in the proof of Theorem 5.2. This completes the proof.
REMARK 6.7 Consider the optimal control problem with state system (6.105) and pay-off (6.104) where ~:w~,2([0,TJ~R) ~ ~ is defined as above and ¢0:L 2(n) ~ R+ is locally Lipschitz. If (y*,v*) is an optimal pair then there exist q E H~(n) and 6 E L2(n) such that
(6q)a q
=0
=6
in [x E
in [x E
n;
n;
y*[x,T)
>
OJ
y*(x,T) = OJ
6 E d¢O(y*(T)) P if
v* I (t)
Ir ~ do
oif Ir ~ do JO,P[ if
<
0
>.0
Ir ~J do
= O.
The proof is identical with that of Theorem 6.6.
266
I
7 The time-optimal control problem
This chapter is concerned with the ti~e-optimal control problem for certain control systems governed by parabolic variational i.nequalities. Loosely speaking, this is the problem of steering the initial state of the system to the origin in the minimum time and with control subject to a magnitude constraint. As in the previous chapters, the emphasis is put on the derivation of the maximum principle. The treatment is adapted from the author's work [14J. §7.1
The time-optimal control problem for nonlinear evolution equations
Consider the control process governed by the nonlinear Cauchy problem Y t) + a<jl (y ( t)) I (
:3
u ( t)
a. e. t
>
0, (7.1)
y(O) = Yo in a real Hilbert space H with the norm 1'1 and scalar product (.,.). Here a<jl:H ~ H i.s the subdifferential of an l.s.c. convex function <jl;H + ~ such that 0 E a<jl(O) and yO E ~), u E L~oc(R+;H). Then, as seen in Theorem 1.13, under these conditions (7.1) has a unique solution y = y(t,yo'u) E C([O,oo[ ;R) n w~~~ (JO,+oo[ ;H). Denote by U the class of control functions u, U = {u E Loo(R+;H);lu(t) 1 < r a.e. t
>
O}
where r is a positive constant. A control u E U is called admissibLe if it steers Yo to the origin at some time T (if any), i.e. y(T,yO'u) = O. Before we can proceed any further, we must study the existence of admissible controls. LEMMA 7.1
Assume that yo E TI\¢).
Then there exists at Least one admissible
control.
267
Proof The feedback control law u(t)
= -r sgn y(t), t> a
steers YO to O. Indeed, by virtue of Theorem 1.13 the closed loop system y'(t) + y(O)
a~(y(t))
+ r sgn y(t) 3
a a.e.
t
>
0
= Yo
(7.2)
has a unique solution y E W~;~(JO,+OO[ ;..H) n C(R+;.H) because the operator y -+ a~(y) + r sgn y is maximal monotone. I.n poi.nt of fact it is just the subdifferential a~ of the function ~(y) = ~(y) +-
rlyl, y
E
H.
Now multiply (7.2) by y and use the monotonicity of Iy(t) I' Iy(t) I + rly(t) I -< a a.e. t Let T
lYol/r.
>
a~
to get
O.
If y(t) f 0 for t E [O,T], then this relation yields
Iy(t) 1-< IYol- rt. Hence y(t) = 0 for t > T, thereby completing the proof. The smallest time t for which y(t,yo'u) = 0 i.s called the transition time of the control u. The infimum T(yO) of the transition times of all admissible controls u E U is called the optimal time. In other words,
A control u E U for which y(T(yO)' yO'u) = 0 (i.f any) is called the timeIn this case the pair (y(·,yo,u),u) is called the time-optimal pair. optimal control for system (7.1) with the control constraints u E U.
PROPOSITION 7.1 Undpr thp hypotheses of Lemma 7.1, assume that for every A E R the level set {x E H; ~(x) -< \} is compact. Then there exists at least one time-optimal control for system (7.1).
268
Proof By Lemma 7.1, T(yO) < + 00. This means that there exists a sequence 1 2 {Tn} ~ T(yO) and {un} C U such that y(Tn,yO'u n) = O. Denote by Yn E Wl~c (]O,+oo[ ;H) the solutions to the Cauchy problem
---
(7.3) Yn(O) = YO' We take un = 0 and yn = 0 on [T n , + ooJ and observe that the pair (y n ,u n) satisfies (7.3) on R+. Multiplying (7.3) by Yn and then t y~, and integrating on [O,t], yields via Gronwall IS lemma (see the proof of Theorem 1.13)
Let T > 0 be such that Tn > T for all n > NO' Then the above estimates imply that {Yn} is bounded in C([O,T];H) n W1 ,2(]O,T];H) and for every t E [O,T], {Yn(t)} remain in a compact subset of H. Then by the Arzela-Ascoli theorem we infer that {Yn} is compact in C(]O,T];H) n L2(0,T;H). Thus without loss of generality we may assume that y (t)
y(t) uniformly on every C([o,T];H) and strongly
+
n
in L2(0,T;H) y~ +
2
yl
a~(y n )
weakly in every L ([o,T];H) +
~ = u-yl weakly in ev.ery L2(o,T;H).
As seen earlier, this implies that yl(t) +
a~(y(t)) 3
u(t)
a.e. t
E
]O,T[,
y(O) = yo'
269
where u E U is the weak star limit in Loo(O,T~H) of some subsequence of {u }. n Since Yn(T n ) = a we infer that y(T(yO)) = 0 and therefore u is a time-optimal control. In the case where A = d¢ is a linear operator (or more generally if A is the infinitesimal generator of a Co-semigroup in a Banach space), every timeoptimal control is a bang-bang control (Fattorini [35J) and satisfies a variant of the maximum prin~iple ([36]). More precisely, it has been proved in [36J that if A is the generator of an analytic semigroup, Y1 E D(A) and u* is a time-optimal control for the linear problem YI = Ay + U;
Iu( t) I -<
r, t ;;.. 0,
then for every t E [O,T*[ (T* is the optimal time) there exists pt, a solution to the adjoint equation pi + A*p = 0 on [O,t], such that u*(s) = r sgn pt(s) for s E [O,tJ. Other variants of the maximum principle, together with related controllability problems for linear evolution equations, can be found in the works [4J, [25], [31], [37J, [42], [47], [51J. However, little is known about the validity of maximum principle for nonlinear evolution equations of the form (7.1). We shall see below that the bang-bang principZe and a variant of the maximum principle remain va1id in the general case considered here. §7.2
The maximum principle
Consider the control process governed by the nonlinear parabolic equation Yt(x,t)
+
AOy(x,t)
+
S(y(x,t))
y(x,O) = yO(x) for x
E ~
y(x,t) = 0 for (x,t)
E
r
x
3
1 -
wh ere a.. E C lJ
270
N L:
i,j=1
(~),
a.e. (x,t)
E
~ x R+
R*.
(7.4)
Here S is a maximal monotone graph in R linear differential operator AOY = -
u(x,t)
x R
such that 0 E S(O) and AO is the
( a .. ( x) y ) + a 0 ( x) y , lJ xi Xj 00
a0 E L
(~),
a.. lJ
a .. for all i, j Jl
1 , ... ,N and
N
a 0 >- 0,
L
i ,j =1
a i J" ( x) ~ i ~J"
We shall denote by A:L 2(n) Ay ::: AOY' y E D(A)
;.. w
II ~ II
N a. e. x E Q) ~ ERN.
L2(n) the operator
+
1 2 = HO(n) n H (n)
(7.5)
or equivalently
~
(Ay,z) = a(y,z)
i,j=1
fa""y )[2
lJ
z dx xi. Xj
+
fn aoyzdx
vy,z E H6(n).
Throughout the following we shall denote by H the space L2(n) endowed with the usual scalar product (.,.) and norm' . 1 , 2 Let F:H + H be the operator defined by Fy = Ay
+
B(y)
vy E D(F),
where 1 D(F) = {y E HO(n) n H2 (n);
3 WE
H, w(x) E B(y(x)) a.e. x En}.
As seen earlier (Theorem 4.3), as an immediate consequence of Theorem 1.10 the operator F is maximal monotone in H x H. More precisely, F = 8~ where ~:H + R is given by
~(y) = and 8j = B. yl(t)
i a(y,y)
+
In j(y(x))dx vy E H
In terms of F, (7.4) can be rewritten in the form (7.1), i.e. +
Fy(t)
:3
u(t) a.e. t
> 0
(7.6)
y(O) = YO'
By Proposition 7.1, for Yo E DTF) the time-optimal control problem (P)
inf {T;y(T,yO'u) = 0, lu(t) 12 -< r a.e. t E ]O,T[}
admits at least one time-optimal control. We shall see below that there exist time-optimal controls which are bangbang controls, and in two typical situations these controls satisfy the maximum principle. The first case is where B is locally Lipschitz and satisfies the condition 271
0-< S'(y) -< c(ls(y)1 THEOREM 7.1
+
Iyl
+-
1) a.e. y
E R~
13(0) = 0
Assume that YO E D(~) and 13 satisfies condition (7.7).
1
there exists a time-optimal pair (y*,u*) E (W ,2([0,T*J;H)
H2(~))
x
(7.7)
Then
n L2(0,T*;H6(~) n
Loo(O,T*~H) and p E Loo(O,T*~H) n L2(O,T*,H6(~)) n Cw([O,T*J;H) n
AC([O,T*J;,H-s(~)) satisfying the conditions
(7.8)
1p( t) 12 f: 0 a. e. t E ] 0, T*[;, p' - Ap E L\ Q) and the system
p' - Ap-p3S(y*)
3
0 a.e. in Q = ~
u*(t) = r sgn p(t)
a.e. t
E
JO,T*[.
x
]O,T*[.
rjp(t)!2 - (Fy*(t),p(t)) :: 1 a.e. t Here T* :: T(yO) is the optimal time, s
>
sgn p = P/! pI 2 for p f 0; sgn 0
(7.10) E
JO,T*[.
(7.11)
2N and
=
{w
E
H; Iwl2 -< 1L
In (7.9) pi E L1(0,T*;H-s(~)) is the strong derivative of p:[O,T*] and Ap(t) is the element of H-1(~) defined by (Ap(t),z) = a(p(t),z) for all
(7.9)
+
H-s(~)
1
Z E HO(~).
Thus pl-Ap E L1(0,T*,H-s(~)) +- L2(O,T*~H-t(~)), and so (7.9) simply means that the function t + (p(t),z) is absolutely continuous on [O,T*] for every z E H~(~) and
ft (p ( t) , z)
- a ( p( t) , z ) -, (p ( t) d13 (y* ( t) ) , z ) :: 0 a.e. t E JO,T*[ vz E H~(~).
Hence p satisfies in the sense of distributions the boundary value problem
~ - AOp - p3S(y*) p :: 0 in r
272
x
3
JO,T*[.
0 in Q (7.9)
I
By (7.10), (7.11) we see that u* i.s a bang-bang contY'Ol, i.e. lu*(t) 12 = r a.e. t E JO,T*[. Now we shall consider the case where s(y) = 0 for y
>
0, 6(0) = J-oo,OJ, S(y)
= 0 for y
<
O.
(7.7)'
Then as seen earlier (Section 4.2, Example 4.1), (7.4) is equivalent to the complementarity system AOy(x,t)
= u(x,t) a.e. in [(x,t);y(x,t)
Yt(x,t)
+
Yt(x,t)
= max {u(x,t),O}
(7.12)
a.e. x E 51 (x,t) E r
y(x,t) = 0 We recall that in this case
= {y
OJ
a.e. i.n [(x,t);.y(x,t) = OJ
y(x,O) = yO(x)
D(~)
>
x
R+.
1 E HO(51)~ y(x) > 0 a.e. x E 51}.
THEOREM 7.2 Let yo E D(~). Thqn thepe exists a time-optimal paip (y*,u*) 2 1 (W ,2([0,T*J;.H) () L2(0,T*~H6(51) n H (51))) x Loo(O,T*;H) fop vapiational inr::quaUty
(? .12)
E
and a function
P E Loo(O,T*~H) () L2(0,T*~H6(51)) n BV([O,T*]~H-s(51)), s
> ;
such that p' - Ap E (Loo(Q))*, !P(t)!2 f 0 a.e. t E JO,T*[ and
(p'-Ap)a = 0 a.e. in [(x,t) y*(p'-Ap)
=
E Q~
y*(x,t)
>
0 in V'(Q).
p = 0 a.e. in [(x,t)
E
0].
(7.13) (7.14)
Q; y*(x,t) = 0].
r!p(t)1 2-(Fy*(t),p(t)) =
1~u*(t)
(7.15)
= r sgn p(t) a.e. tE]O,T*[ (7.16)
Equation (7.13) must be interpreted in the following sense: there exists a singular measure v E (Loo(Q))* such that T* r patdxdt + a(p(t),a(t))dt = v(a) + p(x,T*)a(x,T*)dx (7.13)1 )Q 0 Q
f
J
273
1 for ev.ery a E L2(0,T*~H6(~)) n C (Q) such that a(x,O) = 0 and a = 0 in [(x,t) E Q, y*(x,t) = OJ. Equation (7.14) has a similar meaning, namely
J p(Y*~)tdxdt O
J:*
+
= f~
a(p(t).~y*(t))dt p(x,T*)~(x,T*)y*(x,T*)dx
for every ~ E C (Q) such that ~(x,O) 1
§7.3
(7.14)1
= O.
The approximating control process

The approach here is again modelled on the previous developments. The idea is to approximate problem (P) by the following infinite-horizon optimal control problem:

Minimize

∫₀^∞ (g^ε(y(t)) + h^ε(u(t))) dt     (P_ε)

over all u ∈ L²_loc(R⁺;H) and y ∈ W^{1,2}_loc(R⁺;H) ∩ L²_loc(R⁺;H₀¹(Ω) ∩ H²(Ω)) subject to

y' + Ay + β^ε(y) = u a.e. in Ω × R⁺,   y(x,0) = y₀(x), x ∈ Ω.     (7.17)

Here h^ε: H → R and g^ε: H → R are defined by

h^ε(u) = (2ε)^{−1}((|u|₂ − r)⁺)²     (7.18)

and

g^ε(y) = η(|y|₂² ε^{−1/4}).     (7.19)

Here η ∈ C^∞(R⁺) is such that η'(y) ≥ 0, 0 ≤ η(y) ≤ 1 for all y ∈ R⁺ and

η(y) = 1 for y ≥ 2,   η(y) = 0 for 0 ≤ y ≤ 1.
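By (7.19) and the stated properties of η, the running cost g^ε acts as a smoothed indicator of the state being away from the target. The following summary is only a reading of (7.18), (7.19), not an additional hypothesis:

```latex
g^\varepsilon(y(t)) = \eta\!\left(|y(t)|_2^{2}\,\varepsilon^{-1/4}\right)
= \begin{cases}
1 & \text{if } |y(t)|_2^{2} \ge 2\varepsilon^{1/4},\\
0 & \text{if } |y(t)|_2^{2} \le \varepsilon^{1/4},
\end{cases}
\qquad
h^\varepsilon(u) = \frac{1}{2\varepsilon}\bigl((|u|_2 - r)^{+}\bigr)^{2}.
```

Thus ∫₀^∞ g^ε(y) dt essentially measures the time the trajectory spends outside a small neighbourhood of the target 0, while h^ε penalizes, at rate 1/ε, any violation of the control constraint |u(t)|₂ ≤ r; minimizing their sum is therefore a relaxation of the time-optimal problem, and Lemma 7.3 below makes this precise.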
LEMMA 7.2 For all ε sufficiently small, problem (P_ε) admits at least one solution (y_ε,u_ε).

Proof It is readily seen that there exists at least one admissible pair (y,u) in problem (P_ε). For instance, g^ε(ỹ_ε) + h^ε(u₁*) ∈ L¹(R⁺), where ỹ_ε, u₁* are defined as in the proof of Lemma 7.3. Hence the value d of the infimum in problem (P_ε) is finite. Then there exist sequences {u_n}, {y_n} such that

d ≤ ∫₀^∞ (g^ε(y_n) + h^ε(u_n)) dt ≤ d + n^{−1}.     (7.20)
By (7.18) and (7.20) we see that the u_n remain in a bounded subset of L²_loc(R⁺;H). Hence on a subsequence we have

u_n → u weakly in L²(0,T;H) for every T > 0,

and by Lemma 5.1 (see estimate (5.21)) the y_n remain in a bounded subset of W^{1,2}_loc(R⁺;H) ∩ L²_loc(R⁺;H₀¹(Ω) ∩ H²(Ω)). Hence, selecting a subsequence, we may infer via the Arzelà–Ascoli theorem that, for every T > 0,

y_n → y strongly in C([0,T];H) ∩ L²(0,T;H₀¹(Ω)) and weakly in L²(0,T;H²(Ω)),
y'_n → y' weakly in L²(0,T;H),
β^ε(y_n) → β^ε(y) weakly in L²(0,T;H).

This implies that y is the solution to (7.17), where u is the weak limit in L²_loc(R⁺;H) of some subsequence of {u_n}. Then, by the Fatou lemma,

lim inf_{n→∞} ∫₀^∞ g^ε(y_n) dt ≥ ∫₀^∞ g^ε(y) dt.

On the other hand, since the function u → ∫₀^∞ h^ε(u) dt is weakly lower semicontinuous on L²_loc(R⁺;H) (because it is convex and continuous), we have

lim inf_{n→∞} ∫₀^∞ h^ε(u_n) dt ≥ ∫₀^∞ h^ε(u) dt,

and by (7.20) we see that

∫₀^∞ (g^ε(y) + h^ε(u)) dt = d.
LEMMA 7.3 On a subsequence ε → 0,

u_ε → u* weak star in L^∞(0,T*;H)     (7.21)

y_ε → y* weakly in W^{1,2}([0,T*];H) ∩ L²(0,T*;H²(Ω)) and strongly in C([0,T*];H) ∩ L²(0,T*;H₀¹(Ω))     (7.22)

β^ε(y_ε) → ξ weakly in L²(0,T*;H),  ξ(x,t) ∈ β(y*(x,t)) a.e. (x,t) ∈ Ω × ]0,T*[,     (7.23)

where T* is the optimal time and (y*,u*) is an optimal pair in problem (P).
Proof Let (y₁*,u₁*) ∈ W^{1,2}([0,T*];H) × L²(0,T*;H) be any optimal pair in problem (P) and let T* be the optimal time (we have already noted that such a pair exists). We extend u₁* by 0 and y₁*(t) = 0 for t > T*, and note that y₁* is still a solution to (7.6) on R⁺ for u = u₁*. Let ỹ_ε be the solution to (7.17) where u = u₁*. Since h^ε(u₁*) = 0 a.e. in R⁺, we have

∫₀^∞ (g^ε(y_ε) + h^ε(u_ε)) dt ≤ ∫₀^∞ g^ε(ỹ_ε) dt,     (7.24)

whereas, by Lemma 5.1,

|ỹ_ε(t) − y₁*(t)|₂ ≤ Cε^{1/2} ∀t ≥ 0.     (7.25)

Now, multiplying (7.17), where y = ỹ_ε and u = u₁*, by ỹ_ε and taking into account the coercivity property

(Ay,y) ≥ ω‖y‖²_{H₀¹(Ω)} ∀y ∈ H₀¹(Ω) ∩ H²(Ω),

one obtains the estimate

|ỹ_ε(t)|₂ ≤ |ỹ_ε(T*)|₂ for t ≥ T*,

because u₁* = 0 on [T*,+∞[. Together with (7.25), this yields

|ỹ_ε(t)|₂ ≤ Cε^{1/2} ∀t ≥ T*.

Then by (7.25) and the definition of g^ε it follows that, for all sufficiently small ε,

∫₀^∞ g^ε(ỹ_ε) dt ≤ T*.

Together with (7.24), this yields

lim sup_{ε→0} ∫₀^∞ (g^ε(y_ε) + h^ε(u_ε)) dt ≤ T*.     (7.26)
Since {u_ε} is bounded in L²_loc(R⁺;H), it follows by Lemma 5.1 that there exists u* ∈ L²_loc(R⁺;H) such that, for every T > 0 and on a subsequence ε → 0,

u_ε → u* weakly in L²(0,T;H),     (7.27)

y_ε → y* weakly in W^{1,2}([0,T];H) ∩ L²(0,T;H²(Ω)) and strongly in C([0,T];H) ∩ L²(0,T;H₀¹(Ω)),     (7.28)

where y* is the solution to (7.6) with u = u*. We shall prove that u* is a time-optimal control. To this end we note that, by (7.19) and (7.26), the Lebesgue measure of the set {t ∈ R⁺; |y_ε(t)|₂² > 2ε^{1/4}} is smaller than 2T* for ε small. Hence there exist ε_n → 0 and t_n ∈ [0,2T*] such that

|y_{ε_n}(t_n)|₂² ≤ 2ε_n^{1/4} for all n.     (7.29)

Selecting a further subsequence, we may assume that t_n → T₀. On the other hand, since {y'_{ε_n}} is bounded in every L²(0,T;H), we have

|y_{ε_n}(t) − y_{ε_n}(t_n)|₂ ≤ C|t − t_n|^{1/2} ∀t ∈ [0,T₀].
Then by (7.28) and (7.29) we infer that y*(T₀) = 0. Let T̃ = inf{T; y*(T) = 0}. We will show that T̃ = T*. To this aim, for every ε > 0 define the set

E_ε = {t ∈ [0,T̃]; |y_ε(t)|₂² > 2ε^{1/4}}.

By (7.26) we see that

lim sup_{ε→0} m(E_ε) ≤ T*,

where m denotes the Lebesgue measure. On the other hand, lim sup_{ε→0} m(E_ε) = T̃, for otherwise there would exist δ > 0 and ε_n → 0 such that m(E_{ε_n}) ≤ T̃ − δ. In other words, there would exist a sequence of measurable subsets A_n ⊂ [0,T̃] such that m(A_n) ≥ δ and |y_{ε_n}(t)|₂² ≤ 2ε_n^{1/4} for t ∈ A_n. By (7.28) this would imply that

|y*(t)|₂ ≤ (2ε_n^{1/4})^{1/2} + η_n for t ∈ A_n, where η_n → 0.

On the other hand, since y*(t) ≠ 0 for t ∈ [0,T̃[, we have

lim_{n→∞} m{t ∈ [0,T̃]; |y*(t)|₂ ≤ (2ε_n^{1/4})^{1/2} + η_n} = 0.

The contradiction arrived at shows that indeed lim sup_{ε→0} m(E_ε) = T̃, and therefore T̃ = T*. Hence u* is a time-optimal control. Relations (7.21) to (7.23) follow by (7.27) and by Lemma 5.1. The proof is complete.

Note that by (7.18) and (7.26), u_ε ∈ L²(R⁺;H) + L^∞(R⁺;H). Then by some manipulation involving (7.17) it follows that y_ε ∈ L^∞(R⁺;H) and therefore
G^ε(y_ε) = ∇g^ε(y_ε) = 2ε^{−1/4} η'(|y_ε|₂² ε^{−1/4}) y_ε ∈ L^∞(R⁺;H).     (7.30)

Recalling that the derivative β̇^ε of β^ε is positive, we infer that there exists a unique function p_ε ∈ L^∞(R⁺;H) ∩ L²_loc(R⁺;H₀¹(Ω) ∩ H²(Ω)) ∩ W^{1,2}_loc(R⁺;H) which satisfies the equation

p'_ε − Ap_ε − β̇^ε(y_ε)p_ε = G^ε(y_ε) a.e. t > 0.     (7.31)

Since (u_ε,y_ε) is optimal in problem (P_ε) and h^ε, g^ε are Fréchet differentiable functions, it follows by a standard method (see for instance Theorem 5.7) that

p_ε(t) = ∇h^ε(u_ε(t)) a.e. t > 0.     (7.32)

Equivalently,

u_ε(t) = r sgn p_ε(t) + εp_ε(t) a.e. t > 0.     (7.33)

Equations (7.31), (7.32) can be viewed as first-order conditions for optimality in problem (P_ε).
Consider the optimality system (7.17), (7.31), (7.33), i.e.

y'_ε + Ay_ε + β^ε(y_ε) = r sgn p_ε + εp_ε a.e. t > 0,
p'_ε − Ap_ε − β̇^ε(y_ε)p_ε = G^ε(y_ε) a.e. t > 0,     (7.34)

and notice that (see Lemma 1.2)

(h^ε)*(p) = sup{(u,p) − h^ε(u); u ∈ H} = r|p|₂ + (ε/2)|p|₂², p ∈ H.

Then by (7.34) it follows that

r|p_ε(t)|₂ + (ε/2)|p_ε(t)|₂² − (p_ε(t), Ay_ε(t) + β^ε(y_ε(t))) − g^ε(y_ε(t)) = constant ∀t ≥ 0.     (7.35)

Multiplying (scalarly in H) the first equation in (7.34) by Ay_ε + β^ε(y_ε) and integrating on [0,t], we get

∫₀^t |Ay_ε(s) + β^ε(y_ε(s))|₂² ds ≤ C(t+1) for all t > 0.     (7.36)

On the other hand, we see by (7.18) and (7.32) that p_ε ∈ L²(R⁺;H). Together with (7.36), the latter yields

m{t ∈ [0,n]; |(Ay_ε(t) + β^ε(y_ε(t)), p_ε(t))| > n^{−1/4}} ≤ Cn^{3/4}

for all natural numbers n. Thus for every n the interval [n^{1/6},n] contains a subset E_n such that m(E_n) ≥ 1 and

|(Ay_ε(t) + β^ε(y_ε(t)), p_ε(t))| ≤ n^{−1/4} for t ∈ E_n.     (7.37)

Inasmuch as g^ε(y_ε) ∈ L¹(R⁺) and p_ε ∈ L²(R⁺;H), this implies that the left-hand side of (7.35) converges to zero for a sequence t_n → +∞. Hence

r|p_ε(t)|₂ + (ε/2)|p_ε(t)|₂² = (p_ε(t), Ay_ε(t) + β^ε(y_ε(t))) + g^ε(y_ε(t)) ∀t ≥ 0.     (7.38)
On the other hand, again using system (7.34), we get

(d/dt)(p_ε(t), Ay_ε(t) + β^ε(y_ε(t)))
= (p'_ε(t), Ay_ε(t) + β^ε(y_ε(t))) + (y'_ε(t), Ap_ε(t) + β̇^ε(y_ε(t))p_ε(t))
= (Ay_ε(t) + β^ε(y_ε(t)), G^ε(y_ε(t))) + (r sgn p_ε(t) + εp_ε(t), Ap_ε(t) + β̇^ε(y_ε(t))p_ε(t)) ≥ 0,

because β̇^ε ≥ 0 and β^ε(y_ε)y_ε ≥ 0. Together with (7.37), this implies that

(p_ε(t), Ay_ε(t) + β^ε(y_ε(t))) ≤ 0 ∀t ≥ 0,

so that (7.38) yields

r|p_ε(t)|₂ + (ε/2)|p_ε(t)|₂² ≤ g^ε(y_ε(t)) ≤ 1 ∀t ≥ 0.     (7.39)

Let λ ∈ ]0,T*[ be arbitrary but fixed.
By (7.30) we see that

G^ε(y_ε(t)) = 0 whenever |y_ε(t)|₂² ≥ 2ε^{1/4}.

Since y_ε → y* in C([0,T*];H) and y*(t) ≠ 0 for t ∈ [0,T*[, we see that G^ε(y_ε(t)) = 0 for t ∈ [0,T*−λ] and 0 < ε < ε₀(λ). Hence

p'_ε(t) − Ap_ε(t) − β̇^ε(y_ε(t))p_ε(t) = 0 a.e. t ∈ ]0,T*−λ[.     (7.40)

Taking the scalar product of (7.40) with p_ε(t) and integrating the result over [t,T*−λ], by (7.39) we get

|p_ε(t)|₂² + 2ω ∫_t^{T*−λ} ‖p_ε(s)‖²_{H₀¹(Ω)} ds ≤ |p_ε(T*−λ)|₂² ≤ r^{−2}     (7.41)

for 0 < ε < ε₀(λ) and 0 ≤ t ≤ T*−λ. Now we multiply (7.40) by ζ(p_ε), where ζ: R → R is the smooth approximation (3.66) of the signum function. Integrating the result on Q_λ = Ω × ]0,T*−λ[ and using Green's formula, we get

∫_{Q_λ} p_ε(x,t)β̇^ε(y_ε(x,t))ζ(p_ε(x,t)) dx dt ≤ ∫_Ω j₀(p_ε(x,0)) dx,

where j₀(r) = ∫₀^r ζ(s) ds. Letting ζ tend to sgn, we get, for some constant C independent of ε and λ,

∫_{Q_λ} |p_ε(x,t)β̇^ε(y_ε(x,t))| dx dt ≤ (m(Ω))^{1/2} |p_ε(0)|₂ ≤ C.     (7.42)
By (7.41) it follows, via a selection principle, that there exists p ∈ L^∞(0,T*;H) ∩ L²(0,T*;H₀¹(Ω)) such that, on some subsequence ε_n → 0,

p_{ε_n} → p weak star in L^∞(0,T*;H)     (7.43)

p_{ε_n} → p weakly in L²(0,T*;H₀¹(Ω))     (7.44)

Ap_{ε_n} → Ap weakly in L²(0,T*;H^{−1}(Ω)).     (7.45)

Recalling that H₀^s(Ω) ⊂ C(Ω̄) for s > N/2, we have L¹(Ω) ⊂ H^{−s}(Ω), and by (7.40), (7.41), (7.42) we see that {p'_ε} is uniformly bounded in every L¹(0,T*−λ;H^{−s}(Ω)). Since the injection of H into H^{−s}(Ω) is compact, it follows by the vectorial Helly theorem that, on a subsequence, {p_{ε_n}} converges pointwise on [0,T*[ to the function p ∈ BV([0,T*[;H^{−s}(Ω)), i.e.

p_{ε_n}(t) → p(t) strongly in H^{−s}(Ω) and weakly in H     (7.46)

for every t ∈ [0,T*[. This implies, as in Section 5.2 (see (5.35)), that

p_{ε_n} → p strongly in L²(0,T*;H).     (7.47)
Let V([0,T*−λ];p) be the variation of p on [0,T*−λ]. Since the function λ → V([0,T*−λ];p) is bounded (because {p'_ε} is uniformly bounded in L¹(0,T*−λ;H^{−s}(Ω))) and monotone, it follows from the inequality

‖p(t') − p(t'')‖_{H^{−s}(Ω)} ≤ V([0,t'];p) − V([0,t''];p),  0 ≤ t'' < t' < T*,

that lim_{t↑T*} p(t) = p^−(T*) exists in the strong topology of H^{−s}(Ω). We extend (or redefine, as the case might be) p on [0,T*] by p(T*) = p^−(T*). The function p so defined is of bounded variation on [0,T*], and (7.46) extends to all t ∈ [0,T*]. Now by (7.38) we see that for all 0 < ε < ε₀(λ) we have

r|p_ε(t)|₂ + (ε/2)|p_ε(t)|₂² = (p_ε(t), Ay_ε(t) + β^ε(y_ε(t))) + 1 a.e. t ∈ ]0,T*−λ[.

Letting ε tend to zero and bearing in mind (7.23), (7.46), we get (7.11), i.e.

r|p(t)|₂ − (p(t), Fy*(t)) = 1 a.e. t ∈ ]0,T*[.     (7.48)

In particular, it follows that p(t) ≠ 0 a.e. t ∈ ]0,T*[. Then, letting ε tend to zero in (7.33), we see by (7.21) that

u*(t) = r sgn p(t) a.e. t ∈ ]0,T*[.     (7.49)
Hence |u*(t)|₂ = r a.e. t ∈ ]0,T*[. On the other hand, since by (7.33)

lim_{ε→0} |u_ε(t)|₂ = r a.e. t ∈ ]0,T*[,

it follows by (7.21) that on a subsequence, again denoted ε_n, we have

u_{ε_n} → u* strongly in L²(0,T*;H) and weak star in L^∞(0,T*;H),     (7.50)
u_{ε_n}(t) → u*(t) a.e. t ∈ ]0,T*[, strongly in H.

Let λ_n → 0 for n → ∞ and let {μ_n} ⊂ L¹(Q) be defined by

μ_n = β̇^{ε_n}(y_{ε_n})p_{ε_n} in Q_{λ_n},   μ_n = 0 in Q∖Q_{λ_n}.

According to estimate (7.42), {μ_n} is contained in a weak star compact subset of (L^∞(Q))*. Hence there exist μ ∈ (L^∞(Q))* and a generalized subsequence {μ_ν} of {μ_n} such that

μ_ν → μ weak star in (L^∞(Q))*.     (7.51)

Then, going to the limit in (7.40), we infer that

p' − Ap − μ = 0.     (7.52)
This means that p' ∈ D'(Q) admits an extension in (L^∞(Q))* + L²(0,T;H^{−1}(Ω)) which satisfies (7.52). More precisely, we have

∫_Q pα_t dx dt + ∫₀^{T*} a(p(t),α(t)) dt = μ(α) + ∫_Ω p(x,T*)α(x,T*) dx     (7.52)'

for all α ∈ C¹(Q̄) such that α(x,0) = 0.
Summarizing, we have

PROPOSITION 7.2 There exists a time-optimal pair (y*,u*) given by (7.21), (7.22) which satisfies, together with μ ∈ (L^∞(Q))* and p ∈ L^∞(0,T*;H) ∩ L²(0,T*;H₀¹(Ω)) ∩ BV([0,T*];H^{−s}(Ω)) defined by (7.43) to (7.46) and (7.51), equations (7.49) and (7.52).

In brief, Proposition 7.2 is tantamount to saying that if β is a general maximal monotone graph having the property that 0 ∈ β(0), then problem (P) has at least one optimal control u* which satisfies a weak form of the maximum principle, i.e. (7.49), (7.52). In particular, it follows that u* is a bang-bang control.
We know from Theorem 5.7 that the optimal control u_ε is a feedback optimal control of the form

u_ε(t) ∈ −ε∂φ^ε(y_ε(t)) − r sgn ∂φ^ε(y_ε(t)),

where, for every y₀ ∈ H, φ^ε(y₀) is the optimal value of problem (P_ε). Since, by virtue of Lemma 7.3, φ^ε(y₀) → T* for ε → 0, we may view

u = −ε∂φ^ε(y) − r sgn ∂φ^ε(y)

as an approximating feedback optimal control for problem (P). As seen in Section 5.8, φ^ε is the solution to the Bellman equation

(∂φ^ε(y), Ay + β^ε(y)) + r|∂φ^ε(y)|₂ + (ε/2)|∂φ^ε(y)|₂² = g^ε(y).
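The feedback form of u_ε can be checked directly against the Bellman equation: minimizing the control-dependent part of the Hamiltonian amounts to computing the conjugate of h^ε. A sketch of the computation (with sgn q = q/|q|₂):

```latex
\min_{u\in H}\bigl\{h^\varepsilon(u) + (\partial\varphi^\varepsilon(y),u)\bigr\}
  = -\,r\,|\partial\varphi^\varepsilon(y)|_2
    - \tfrac{\varepsilon}{2}\,|\partial\varphi^\varepsilon(y)|_2^{2},
\qquad\text{attained at}\qquad
u = -\,\varepsilon\,\partial\varphi^\varepsilon(y)
    - r\,\operatorname{sgn}\partial\varphi^\varepsilon(y).
```

This is precisely the approximating feedback law above, and substituting the minimum back gives the stated form of the Bellman equation.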
§7.4 The proof of the maximum principle

Since the proofs of Theorems 7.1, 7.2 are essentially the same as those of Theorems 5.1 and 5.2, they will only be outlined.

Proof of Theorem 7.1 As noted earlier, if β satisfies condition (7.7) then

β̇^ε(r) ≤ C(|β^ε(r)| + |r| + 1) ∀r ∈ R.     (7.53)

This yields

∫_E |p_ε(x,t)β̇^ε(y_ε(x,t))| dx dt ≤ C(∫_E |p_ε(x,t)||β^ε(y_ε(x,t))| dx dt + ∫_E |p_ε(x,t)y_ε(x,t)| dx dt + ∫_E |p_ε(x,t)| dx dt),     (7.54)
where E is any measurable subset of Q and {y_{ε_n}}, {p_{ε_n}} are the sequences found in the preceding section. Since {p_{ε_n}} and {y_{ε_n}} are strongly convergent and {β^{ε_n}(y_{ε_n})} is weakly convergent in L²(0,T*;H) = L²(Q), we see by (7.54) that for every η > 0 there exists δ(η), independent of n, such that

∫_E |p_{ε_n}(x,t)β̇^{ε_n}(y_{ε_n}(x,t))| dx dt ≤ η if m(E) ≤ δ(η).

Then by the Dunford–Pettis criterion we conclude that the sequence {β̇^{ε_n}(y_{ε_n})p_{ε_n}} is weakly compact in L¹(Q) ⊂ L¹(0,T*;H^{−s}(Ω)). Hence μ ∈ L¹(Q), p ∈ AC([0,T*];H^{−s}(Ω)), and (7.51) is strengthened to

β̇^{ε_n}(y_{ε_n})p_{ε_n} → μ weakly in L¹(Q).     (7.55)
On the other hand, by Proposition 7.2, p ∈ L²(0,T*;H₀¹(Ω)) ∩ L^∞(0,T*;H). Since p is bounded from [0,T*] to H and continuous from [0,T*] to H^{−s}(Ω), we infer that p ∈ C_w([0,T*];H). Since (7.8), (7.10), (7.11) have been established in Proposition 7.2, it remains to prove that

μ(x,t) ∈ p(x,t)∂β(y*(x,t)) a.e. (x,t) ∈ Q.     (7.56)

By the Egorov theorem, for each η > 0 there is a measurable subset E_η ⊂ Q such that {y_{ε_n}} are uniformly bounded on E_η and y_{ε_n}(x,t) → y*(x,t) uniformly in (x,t) ∈ E_η for ε_n → 0. Hence the β̇^{ε_n}(y_{ε_n}) are uniformly bounded on E_η and, without loss of generality, we may assume that

β̇^{ε_n}(y_{ε_n}) → g weak star in L^∞(E_η).     (7.57)

Then by Lemma 3.4 we may conclude that g(x,t) ∈ ∂β(y*(x,t)) a.e. (x,t) ∈ E_η. Since {p_ε} is strongly convergent to p in L²(Q), we have, by (7.55) and (7.57),

μ(x,t) = p(x,t)g(x,t) ∈ p(x,t)∂β(y*(x,t)) a.e. (x,t) ∈ E_η,

and (7.56) follows. Thus the proof of Theorem 7.1 is complete.
Proof of Theorem 7.2 In this case we have (see (3.79))

β^ε(r) = ε^{−1} ∫₀^1 (r − ε²θ)⁻ ρ(θ) dθ, r ∈ R;

in particular, β^ε(r) = ε^{−1}r − ε ∫₀^1 θρ(θ) dθ for r ≤ 0 and β^ε(r) = 0 for r ≥ ε². This yields (see (5.64), (5.65))

p_ε(β^ε(y_ε) − y_ε β̇^ε(y_ε)) → 0 strongly in L²(0,T*;H),     (7.58)

since

|p_ε(x,t)(β^ε(y_ε(x,t)) − y_ε(x,t)β̇^ε(y_ε(x,t)))| ≤ 2ε|p_ε(x,t)| a.e. (x,t) ∈ Q.     (7.59)

Let {ε_n} → 0 be the sequence found in Section 7.3, i.e. the sequence for which (7.43) to (7.47) hold. It follows by Lemma 7.3 that {β^{ε_n}(y_{ε_n})} is bounded in L²(Q). Then by (7.42), (7.59) we may select a further subsequence ε_n such that

p_{ε_n}(x,t)β^{ε_n}(y_{ε_n}(x,t)) → 0 a.e. (x,t) ∈ Q.
Now by (7.23) and (7.47) we have

p_{ε_n}β^{ε_n}(y_{ε_n}) → pξ weakly in L¹(Q),

where

ξ = u* − y*_t − Ay* ∈ β(y*) a.e. in Q.

As seen earlier in the proof of Theorem 5.2, this implies that pξ = 0 a.e. in Q, and therefore

p_{ε_n}β^{ε_n}(y_{ε_n}) → 0 strongly in L¹(Q),     (7.60)

p(x,t)(u*(x,t) − y*_t(x,t) − Ay*(x,t)) = 0 a.e. (x,t) ∈ Q.     (7.61)

Inasmuch as

y*_t(x,t) − A₀y*(x,t) = 0 a.e. in {(x,t); y*(x,t) = 0}

and, by (7.49),

u*(x,t) = r p(x,t)/|p(t)|₂ a.e. (x,t) ∈ Q,
we conclude that (7.15) holds. Next, by (7.58) and (7.60) we see that

p_{ε_n}y_{ε_n}β̇^{ε_n}(y_{ε_n}) → 0 strongly in L¹(Q).     (7.62)

According to the Egorov theorem, for every δ > 0 there exists a measurable subset H_δ ⊂ Q such that m(Q∖H_δ) ≤ δ, {y_{ε_n}} are uniformly bounded on H_δ and, for ε_n → 0,

y_{ε_n}(x,t) → y*(x,t) uniformly for (x,t) ∈ H_δ.

Then, selecting a generalized subsequence from {ε_n}, we may infer, as in the proof of (7.51), (7.52), that

y*μ = 0 on every H_δ,

where μ ∈ (L^∞(Q))* satisfies (7.52). In other words,

∫_{H_δ} μ_a(x,t)y*(x,t)φ(x,t) dx dt + μ_s(y*φ) = 0     (7.63)

for all φ ∈ L^∞(Q) which vanish outside H_δ. The 'singularity' of μ_s means that there exists an increasing sequence of measurable subsets E_k ⊂ Q such that m(Q∖E_k) ≤ k^{−1} and μ_s = 0 on L^∞(E_k). By (7.63) it follows that

∫_{H_δ∩E_k} μ_a y*φ dx dt = 0 for all φ ∈ L^∞(Q) with support in H_δ ∩ E_k.

This yields

μ_a(x,t)y*(x,t) = 0 a.e. (x,t) ∈ H_δ ∩ E_k,

and, letting δ → 0, k → ∞, we get

μ_a(x,t)y*(x,t) = 0 a.e. (x,t) ∈ Q,
and (7.13) follows. To prove (7.14) we multiply (7.40) by y_{ε_n}φ and integrate by parts on Q_λ. Using (7.62), we conclude that for ε_n → 0

∫_{Q_λ} p_{ε_n}(y_{ε_n}φ)_t dx dt + ∫₀^{T*−λ} a(p_{ε_n}, y_{ε_n}φ) dt − ∫_Ω p_{ε_n}(x,T*−λ)y_{ε_n}(x,T*−λ)φ(x,T*−λ) dx → 0

for every λ > 0. Together with (7.22), (7.44) and (7.46), this yields

∫_{Q_λ} p(y*φ)_t dx dt + ∫₀^{T*−λ} a(p, y*φ) dt = ∫_Ω p(x,T*−λ)y*(x,T*−λ)φ(x,T*−λ) dx,

and letting λ tend to zero we obtain (7.14), thereby completing the proof.

REMARK 7.1 If N = 1 then by (7.22) we see that y_ε → y* uniformly in Q̄ (i.e. in C(Q̄)), and by (7.51), (7.62) we infer that μy* = 0 in Q. Thus in this case (7.13) becomes

y*(p' − A₀p) = 0 in Q.
REMARK 7.2 Theorems 7.1, 7.2 and Proposition 7.2 remain valid if the homogeneous Dirichlet condition in (7.4) is replaced by a general linear boundary value condition of the form

α₁ ∂y/∂ν + α₂y = 0 a.e. in Γ × R⁺,

where α_i ≥ 0 for i = 1,2 and α₁ + α₂ > 0.

REMARK 7.3 Theorem 7.2 allows, by a minor modification of the proof, a natural extension to more general nonlinearities of the form

β = β₁ + β₂,

where β₁ satisfies condition (7.7) and β₂ is the graph defined by (7.7)'. The details are left to the reader.

§7.5 Various extensions
(1) The argument above could clearly be applied to the time-optimal problem associated with the equation

y_t(x,t) + A₀y(x,t) = u(x,t) a.e. (x,t) ∈ Ω × R⁺
∂y/∂ν + β(y) ∋ 0 a.e. in Γ × R⁺     (7.64)
y(x,0) = y₀(x), x ∈ Ω,

where β is a maximal monotone graph in R × R. For instance, the thermostat control problem leads to a problem of this type, where β is given by (7.7)' and which corresponds to nonlinear boundary value conditions of Signorini type, i.e.

y ≥ 0,  ∂y/∂ν ≥ 0,  y ∂y/∂ν = 0 a.e. in Γ × R⁺.     (7.65)

In this case, repeating word for word the proof of Theorem 7.2, we find that there exists an optimal pair (y*,u*) ∈ W^{1,2}([0,T*];L²(Ω)) × L^∞(0,T*;H) for the time-optimal control problem associated with (7.65) and a function p ∈ L^∞(0,T*;H) ∩ L²(0,T*;H¹(Ω)) ∩ BV([0,T*];(H^s(Ω))') such that |p(t)|₂ ≠ 0 a.e. t ∈ ]0,T*[, ∂p/∂ν ∈ (L^∞(Σ))* and

p' − A₀p = 0 in Q = Ω × ]0,T*[     (7.66)

p ∂y*/∂ν = 0 a.e. in Σ = Γ × ]0,T*[     (7.67)

y*(∂p/∂ν)_a = 0 a.e. in Σ     (7.68)

u*(t) = r sgn p(t) a.e. t ∈ ]0,T*[.     (7.69)

As for Theorem 7.1, it allows in this context the following formulation:
there exists a time-optimal control pair (y*,u*) ∈ W^{1,2}([0,T*];L²(Ω)) × L^∞(0,T*;H) and a function p ∈ L^∞(0,T*;H) ∩ L²(0,T*;H¹(Ω)) ∩ C_w([0,T*];H) ∩ AC([0,T*];(H^s(Ω))') satisfying the conditions

|p(t)|₂ ≠ 0 a.e. t ∈ ]0,T*[,  ∂p/∂ν ∈ L¹(Σ)     (7.70)

and the equations

p' − A₀p = 0 a.e. in Q     (7.71)

∂p/∂ν − p ∂β(y*) ∋ 0 a.e. in Σ     (7.72)

u*(t) = r sgn p(t) a.e. t ∈ ]0,T*[.     (7.73)
(2) Time-optimal control of finite-dimensional variational inequalities. Consider the time-optimal control problem (P) in the special case of system (5.85), i.e.

inf {T; y(T,y₀,u) = 0, ‖u(t)‖_N ≤ r a.e. t ∈ ]0,T[},     (7.74)

where y(t,y₀,u) is the solution to the complementarity system

y'_i(t) + (Ay(t))_i = u_i(t) a.e. in {t; y_i(t) > 0}
y_i(t) ≥ 0,  y'_i(t) + (Ay(t))_i ≥ u_i(t) a.e. t ∈ ]0,T[     (7.75)
y_i(0) = y_{i,0},  i = 1,…,N.

Here A is a positive definite N × N matrix and y_{i,0} ≥ 0 for all i.

THEOREM 7.3 There exists a time-optimal pair (y*,u*) ∈ W^{1,2}([0,T*];R^N) × L^∞(0,T*;R^N) for problem (P) having the property that there exists p ∈ BV([0,T*];R^N), p(t) ≠ 0 a.e. t ∈ ]0,T*[, which satisfies the system

p'_i(t) − (A*p(t))_i = 0 in {t; y*_i(t) > 0},  i = 1,…,N     (7.76)

p_i(t) = 0 a.e. in {t; y*_i(t) = 0},  i = 1,…,N     (7.77)

u*_i(t) = r p_i(t)/‖p(t)‖_N a.e. t ∈ ]0,T*[,  i = 1,…,N     (7.78)

r‖p(t)‖_N − (Fy*(t), p(t))_N = 1 a.e. t ∈ ]0,T*[.     (7.79)

Here (·,·)_N is the usual Euclidean scalar product in R^N and T* is the optimal time. Equations (7.76) should be understood in the sense of distributions in the open subsets {t ∈ ]0,T*[; y*_i(t) > 0}.

The proof is essentially the same as that of Theorem 7.2. However, we sketch it for the reader's convenience. We start with the approximating problem:
Minimize

∫₀^∞ (g^ε(y(t)) + h^ε(u(t))) dt     (7.80)

over all y ∈ W^{1,2}_loc(R⁺;R^N) and u ∈ L²_loc(R⁺;R^N) subject to

y' + Ay + γ^ε(y) = u a.e. t > 0,     (7.81)

where γ^ε: R^N → R^N has been defined in the proof of Theorem 5.5 and g^ε: R^N → R, h^ε: R^N → R are given by (see (7.18), (7.19))

h^ε(u) = (2ε)^{−1}((‖u‖_N − r)⁺)², u ∈ R^N,
g^ε(y) = η(‖y‖²_N ε^{−1/4}), y ∈ R^N.
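The penalized state equation (7.81) is easy to experiment with numerically. The sketch below is not from the book: it uses the crude penalization γ^ε(y) = min(y,0)/ε in place of the smooth γ^ε of Theorem 5.5, and the matrix A, the control u and the step sizes are invented illustrative data.

```python
import numpy as np

# Sketch: explicit-Euler integration of the penalized system
#   y' + A y + gamma_eps(y) = u      (cf. (7.81)),
# with the illustrative penalization gamma_eps(y) = min(y, 0)/eps,
# which approximately enforces the constraint set {y >= 0} of the
# complementarity system (7.75).

def integrate_penalized(A, u, y0, T, n_steps, eps):
    dt = T / n_steps
    y = np.array(y0, dtype=float)
    for k in range(n_steps):
        gamma = np.minimum(y, 0.0) / eps   # restoring term where y_i < 0
        y = y + dt * (u(k * dt) - A @ y - gamma)
    return y

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])                # positive definite, as (7.75) requires
u = lambda t: np.array([-1.0, 0.5])        # fixed control; u_1 < 0 presses y_1 onto the obstacle
y = integrate_penalized(A, u, y0=[1.0, 1.0], T=5.0, n_steps=10000, eps=1e-3)
print(y)   # y[0] settles at distance O(eps) from the constraint y_1 >= 0
```

As ε → 0 the computed trajectory approaches the solution of the complementarity system; this mirrors, in the simplest possible discretization, the role the approximating problem (7.80), (7.81) plays in the proof below.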
Arguing as in the proofs of Lemmas 7.2, 7.3, it follows that for every ε > 0 problem (7.80) has an optimal pair (y_ε,u_ε), and for some ε → 0,

u_ε → u* weak star in L^∞(0,T*;R^N),
y_ε → y* weakly in W^{1,2}([0,T*];R^N) and strongly in C([0,T*];R^N).

Let p_ε be a dual extremal arc in problem (7.80), i.e.

p'_ε − A*p_ε − γ̇^ε(y_ε)p_ε = G^ε(y_ε) a.e. t > 0,   p_ε ∈ L^∞(R⁺;R^N).     (7.82)

Then we have

u_ε(t) = r p_ε(t)/‖p_ε(t)‖_N + εp_ε(t) a.e. t > 0,     (7.83)

and, proceeding as in the proof of Proposition 7.2, we get (see (7.35), (7.38))

r‖p_ε(t)‖_N + (ε/2)‖p_ε(t)‖²_N = (p_ε(t), Ay_ε(t) + γ^ε(y_ε(t)))_N + g^ε(y_ε(t)) ∀t ≥ 0.

On the other hand, for every λ ∈ ]0,T*[ we have

p'_ε − A*p_ε − γ̇^ε(y_ε)p_ε = 0 a.e. t ∈ ]0,T*−λ[

for all ε sufficiently small. Multiplying the last equation by p_ε and sgn p_ε in turn, we get the estimate (see (5.98))

‖p_ε(t)‖_N + ∫₀^{T*−λ} ‖γ̇^ε(y_ε(t))p_ε(t)‖_N dt ≤ C,

where C is independent of λ and ε. Now, arguing as in the proof of Theorem 7.2 (see also the proof of Theorem 5.5), we may pass to the limit in (7.82), (7.83) to conclude that there exists a function p satisfying (7.76) to (7.79).