Birkhäuser Advanced Texts

Edited by
Herbert Amann, Zürich University
Steven G. Krantz, Washington University, St. Louis
Shrawan Kumar, University of North Carolina at Chapel Hill
Jan Nekovář, Université Pierre et Marie Curie, Paris
Michel Chipot
Elliptic Equations: An Introductory Course
Birkhäuser Basel · Boston · Berlin
Author:
Michel Chipot
Institute of Mathematics
University of Zürich
Winterthurerstrasse
Zürich, Switzerland
e-mail: mmchipot@math.uzh.ch

Bibliographic information published by Die Deutsche Bibliothek:
Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available on the Internet at http://dnb.ddb.de.

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in other ways, and storage in data banks. For any kind of use, permission of the copyright owner must be obtained.

© Birkhäuser Verlag AG
Basel · Boston · Berlin
Part of Springer Science+Business Media
Printed on acid-free paper produced from chlorine-free pulp. TCF
Printed in Germany

www.birkhauser.ch
Contents
Preface

Part I  Basic Techniques

1 Hilbert Space Techniques
   1.1 The projection on a closed convex set
   1.2 The Riesz representation theorem
   1.3 The Lax–Milgram theorem
   1.4 Convergence techniques
   Exercises

2 A Survey of Essential Analysis
   2.1 L^p-techniques
   2.2 Introduction to distributions
   2.3 Sobolev spaces
   Exercises

3 Weak Formulation of Elliptic Problems
   3.1 Motivation
   3.2 The weak formulation
   Exercises

4 Elliptic Problems in Divergence Form
   4.1 Weak formulation
   4.2 The weak maximum principle
   4.3 Inhomogeneous problems
   Exercises

5 Singular Perturbation Problems
   5.1 A prototype of a singular perturbation problem
   5.2 Anisotropic singular perturbation problems
   Exercises

6 Problems in Large Cylinders
   6.1 A model problem
   6.2 Another type of convergence
   6.3 The general case
   6.4 An application
   Exercises

7 Periodic Problems
   7.1 A general theory
   7.2 Some additional remarks
   Exercises

8 Homogenization
   8.1 More on periodic functions
   8.2 Homogenization of elliptic equations
       8.2.1 The one-dimensional case
       8.2.2 The n-dimensional case
   Exercises

9 Eigenvalues
   9.1 The one-dimensional case
   9.2 The higher-dimensional case
   9.3 An application
   Exercises

10 Numerical Computations
   10.1 The finite difference method
   10.2 The finite element method
   Exercises

Part II  More Advanced Theory

11 Nonlinear Problems
   11.1 Monotone methods
   11.2 Quasilinear equations
   11.3 Nonlocal problems
   11.4 Variational inequalities
   Exercises

12 L^∞-estimates
   12.1 Some simple cases
   12.2 A more involved estimate
   12.3 The Sobolev–Gagliardo–Nirenberg inequality
   12.4 The maximum principle on small domains
   Exercises

13 Linear Elliptic Systems
   13.1 The general framework
   13.2 Some examples
   Exercises

14 The Stationary Navier–Stokes System
   14.1 Introduction
   14.2 Existence and uniqueness result
   Exercises

15 Some More Spaces
   15.1 Motivation
   15.2 Essential features of the Sobolev spaces W^{k,p}
   15.3 An application
   Exercises

16 Regularity Theory
   16.1 Introduction
   16.2 The translation method
   16.3 Regularity of functions in Sobolev spaces
   16.4 The bootstrap technique
   Exercises

17 The p-Laplace Equation
   17.1 A minimization technique
   17.2 A weak maximum principle and its consequences
   17.3 A generalization of the Lax–Milgram theorem
   Exercises

18 The Strong Maximum Principle
   18.1 A first version of the maximum principle
   18.2 The Hopf maximum principle
   18.3 Application: the moving plane technique
   Exercises

19 Problems in the Whole Space
   19.1 The harmonic functions, Liouville theorem
   19.2 The Schrödinger equation
   Exercises

Appendix: Fixed Point Theorems
   A.1 The Brouwer fixed point theorem
   A.2 The Schauder fixed point theorem
   Exercises

Bibliography
Index
Preface

The goal of this book is to introduce the reader to different topics of the theory of elliptic partial differential equations while avoiding technicalities and refinements. The material of the first part is written in such a way that it could be taught as an introductory course. Most of the chapters – except the first four – are independent, and some material can be dropped in a short course. The first four chapters are fundamental; the following ones are devoted to presenting a larger spectrum of the techniques of this topic, showing some qualitative properties of the solutions to these problems. Throughout, only a minimum on Sobolev spaces has been introduced, and work or integration on the boundary has been carefully avoided, in order not to crowd the mind of the reader with technicalities but to attract his attention to the beauty and variety of these issues. Also, very often the ideas in mathematics are very simple, and discovering them is a powerful engine for learning quickly and getting further involved with a theory. We have kept this in mind all along Part I. Part II contains more advanced material such as nonlinear problems, systems, regularity... Again, each chapter is relatively independent of the others and can be read or taught separately. We would also like to point out that numerous results presented here are original and have not been published elsewhere.

This book grew out of lectures given at the summer school of Druskininkai (Lithuania), in Tokyo (Waseda University) and in Rome (La Sapienza). It is my pleasure to acknowledge the rôle of these different places and to thank K. Pileckas, Y. Yamada and D. Giachetti for inviting me to deliver these courses. I would also like to thank Senoussi Guesmia, Sorin Mardare and Karen Yeressian for their careful reading of the manuscript, and Mrs. Gerda Schacher for her constant help in preparing and typing this book. I am also very grateful to H. Amann and T. Hempfling for their support.
Finally, I would like to thank the Swiss National Science Foundation, which supported this project under the contracts #20-113287/1 and 20-117614/1.

Zürich, December 2008
Part I
Basic Techniques
Chapter 1
Hilbert Space Techniques

The goal of this chapter is to collect the main features of Hilbert spaces and to introduce the Lax–Milgram theorem, which is a key tool for solving elliptic partial differential equations.
1.1 The projection on a closed convex set

Definition 1.1. A Hilbert space H over R is a vector space equipped with a scalar product (·, ·) which is complete for the associated norm

   |u| = (u, u)^{1/2},   (1.1)

i.e., such that every Cauchy sequence admits a limit in H.

Examples. 1. R^n equipped with the Euclidean scalar product

   (x, y) = x · y = Σ_{i=1}^{n} x_i y_i   ∀ x = (x_1, ..., x_n), y = (y_1, ..., y_n) ∈ R^n.
(We will prefer the notation with a dot for this scalar product.)

2. L²(A) = { v : A → R, v measurable | ∫_A v²(x) dx < +∞ }, with A a measurable subset of R^n. Recall that L²(A) is in fact a set of "classes" of functions. L²(A) is a Hilbert space when equipped with the scalar product

   (u, v) = ∫_A u(x)v(x) dx.   (1.2)
Remark 1.1. Let us recall the important “Cauchy–Schwarz inequality” which asserts that |(u, v)| ≤ |u| |v| ∀ u, v ∈ H. (1.3)
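As an illustrative aside (a numerical sketch added here, not part of the original text; the dimension and random vectors are arbitrary), the Cauchy–Schwarz inequality (1.3) can be checked on the Hilbert space R^5:

```python
import random

def dot(x, y):
    # Euclidean scalar product on R^n
    return sum(a * b for a, b in zip(x, y))

random.seed(0)
for _ in range(1000):
    u = [random.uniform(-1, 1) for _ in range(5)]
    v = [random.uniform(-1, 1) for _ in range(5)]
    # |(u, v)| <= |u| |v|
    assert abs(dot(u, v)) <= dot(u, u) ** 0.5 * dot(v, v) ** 0.5 + 1e-12
```

The hint of Exercise 1 (nonnegativity of (u + λv, u + λv) in λ) is the standard route to a proof; the check above merely illustrates the inequality.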
One of the important results is the following theorem.

Theorem 1.1 (Projection on a convex set). Let K ≠ ∅ be a closed convex subset of a Hilbert space H. For every h ∈ H there exists a unique u such that

   u ∈ K,   |h − u| ≤ |h − v|   ∀ v ∈ K   (1.4)

(i.e., u realizes the minimum of the distance between h and K). Moreover, u is the unique point satisfying (see Figure 1.1)

   u ∈ K,   (u − h, v − u) ≥ 0   ∀ v ∈ K.   (1.5)

u is called the orthogonal projection of h on K and will be denoted by P_K(h).
Figure 1.1: Projection on a convex set

Proof. Consider a sequence u_n ∈ K such that, when n → +∞,

   |h − u_n| → Inf_{v∈K} |h − v| := d.

The infimum above clearly exists and thus such a minimizing sequence also. Note now the identities

   |u_n − u_m|² = |u_n − h + h − u_m|² = |h − u_n|² + |h − u_m|² + 2(u_n − h, h − u_m),
   |2h − u_n − u_m|² = |h − u_n + h − u_m|² = |h − u_n|² + |h − u_m|² + 2(h − u_n, h − u_m).

Adding up we get the so-called parallelogram identity

   |u_n − u_m|² + |2h − u_n − u_m|² = 2|h − u_n|² + 2|h − u_m|².   (1.6)

Recall now that a convex set is a subset K of H such that

   αu + (1 − α)v ∈ K   ∀ u, v ∈ K, ∀ α ∈ [0, 1].   (1.7)
From (1.6) we derive

   |u_n − u_m|² = 2|h − u_n|² + 2|h − u_m|² − 4|h − (u_n + u_m)/2|²
               ≤ 2|h − u_n|² + 2|h − u_m|² − 4d²   (1.8)

since (u_n + u_m)/2 ∈ K (take α = 1/2 in (1.7)). Since the right-hand side of (1.8) goes to 0 when n, m → +∞, u_n is a Cauchy sequence. It converges toward a point u ∈ K – since K is closed – such that

   |h − u| = Inf_{v∈K} |h − v|.   (1.9)

This shows the existence of u satisfying (1.4). To prove the uniqueness of such a u one goes back to (1.8), which is valid for any u_n, u_m in K. Taking u, u′ two solutions of (1.4), (1.8) becomes

   |u − u′|² ≤ 2|h − u|² + 2|h − u′|² − 4d² = 0,

i.e., u = u′. This completes the proof of the existence and uniqueness of a solution to (1.4).

We show now the equivalence of (1.4) and (1.5). Suppose first that u is a solution to (1.5). Then we have

   |h − v|² = |h − u + u − v|² = |h − u|² + |u − v|² + 2(h − u, u − v) ≥ |h − u|²   ∀ v ∈ K.

Conversely, suppose that (1.4) holds. Then for any α ∈ (0, 1) – see (1.7) – we have for v ∈ K

   |h − u|² ≤ |h − [αv + (1 − α)u]|² = |h − u − α(v − u)|²
           = |h − u|² + 2α(u − h, v − u) + α²|v − u|².

This implies clearly

   2α(u − h, v − u) + α²|v − u|² ≥ 0.   (1.10)

Dividing by α and letting α → 0, we derive that (1.5) holds. This completes the proof of the theorem.

Remark 1.2. If h ∈ K, P_K(h) = h. (1.5) is an example of a variational inequality.

In the case where K is a closed subspace of H (this is a special convex set), Theorem 1.1 takes a special form.

Corollary 1.2. Let V be a closed subspace of H. Then for every h ∈ H there exists a unique u such that

   u ∈ V,   |h − u| ≤ |h − v|   ∀ v ∈ V.   (1.11)

Moreover, u is the unique solution to

   u ∈ V,   (h − u, v) = 0   ∀ v ∈ V.   (1.12)
Proof. It is enough to show that (1.5) is equivalent to (1.12). Note that if (1.12) holds then (1.5) holds. Conversely, if (1.5) holds, then for any w ∈ V one has

   v = u ± w ∈ V

since V is a vector space. One deduces ±(u − h, w) ≥ 0 ∀ w ∈ V, which is precisely (1.12).

u = P_V(h) is described in Figure 1.2 below. It is the unique vector of V such that h − u is orthogonal to V.

Figure 1.2: Projection on a vector space
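As a numerical aside (an added sketch; the matrix B and vector h are arbitrary illustrative choices), the projection onto a finite-dimensional subspace V = span of the columns of B can be computed by least squares, and the orthogonality condition (1.12) and the minimal-distance property (1.11) then hold up to roundoff:

```python
import numpy as np

# V = column span of B in R^4; P_V(h) = B c where c solves the least-squares
# problem min_c |h - B c|, i.e. the normal equations B^T B c = B^T h.
B = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.0, 0.0]])
h = np.array([1.0, 2.0, 3.0, 4.0])
c, *_ = np.linalg.lstsq(B, h, rcond=None)
u = B @ c

# (1.12): h - u is orthogonal to every basis vector of V
assert np.allclose(B.T @ (h - u), 0.0)

# (1.11): u is at least as close to h as any other sampled element of V
rng = np.random.default_rng(0)
for _ in range(100):
    v = B @ rng.normal(size=2)
    assert np.linalg.norm(h - u) <= np.linalg.norm(h - v) + 1e-9
```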
1.2 The Riesz representation theorem

If H is a real Hilbert space we denote by H* its dual – i.e., H* is the set of continuous linear forms on H. If h ∈ H, then the mapping

   v ↦ (h, v)   (1.13)

is an element of H*. Indeed, this is a linear form, that is to say a linear mapping from H into R, and the continuity follows from the Cauchy–Schwarz inequality

   |(h, v)| ≤ |h| |v|.   (1.14)
The Riesz representation theorem states that all the elements of H* are of the type (1.13), i.e., can be represented by a scalar product. This fact is easy to see on R^n. We will see that it extends to infinite-dimensional Hilbert spaces. First let us analyze the structure of the kernel of the elements of H*.

Proposition 1.3. Let h* ∈ H*. If h* ≠ 0, the set

   V = { v ∈ H | ⟨h*, v⟩ = 0 }   (1.15)

is a closed subspace of H of codimension 1, i.e., a hyperplane of H. (We denote with brackets the duality, writing ⟨h*, v⟩ = h*(v).)
Proof. Since h* is continuous, V is a closed subspace of H. Let h ∉ V. Such an h exists since h* ≠ 0. Then set

   v_0 = h − P_V(h) ≠ 0.   (1.16)

Any element v ∈ H can be decomposed in a unique way as

   v = λv_0 + w   (1.17)

where w ∈ V. Indeed, if w ∈ V one has necessarily ⟨h*, v⟩ = λ⟨h*, v_0⟩, i.e., λ = ⟨h*, v⟩/⟨h*, v_0⟩, and then

   v = (⟨h*, v⟩/⟨h*, v_0⟩) v_0 + ( v − (⟨h*, v⟩/⟨h*, v_0⟩) v_0 ),

the second term belonging to V. This completes the proof of the proposition.

We can now show
Theorem 1.4 (Riesz representation theorem). For any h* ∈ H* there exists a unique h ∈ H such that

   (h, v) = ⟨h*, v⟩   ∀ v ∈ H.   (1.18)

Moreover,

   |h| = |h*|_* = Sup_{v∈H, v≠0} ⟨h*, v⟩/|v|.   (1.19)
(This last quantity is called the strong dual norm of h*.)

Proof. If h* = 0, h = 0 is the only solution of (1.18). We can assume then that h* ≠ 0. Let v_0 ≠ 0 be a vector orthogonal to the hyperplane V = { v ∈ H | ⟨h*, v⟩ = 0 } (see (1.16), (1.17)). We set

   h = (⟨h*, v_0⟩/|v_0|²) v_0.   (1.20)

Due to the decomposition (1.17) we have

   (h, v) = (h, λv_0 + w) = λ(h, v_0) = λ⟨h*, v_0⟩ = ⟨h*, λv_0 + w⟩ = ⟨h*, v⟩

for every v ∈ H. Thus h satisfies (1.18). The uniqueness of h is clear since

   (h − h′, v) = 0 ∀ v ∈ H  ⟹  h = h′   (take v = h − h′).
Now from (1.20) we have

   |h| = |⟨h*, v_0⟩|/|v_0| ≤ |h*|_*   (1.21)

and from (1.18)

   |h*|_* = Sup_{v≠0} (h, v)/|v| ≤ Sup_{v≠0} |h| |v|/|v| = |h|.

This completes the proof of the theorem.
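As an aside (an added finite-dimensional sketch; the functional and dimension are arbitrary), the construction (1.20) can be traced in R^n: for a linear form phi(v) = a · v, the vector a itself is orthogonal to ker(phi), and the proof's formula recovers the representer:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=4)
phi = lambda v: a @ v          # a continuous linear form on H = R^4

# v0: any nonzero vector orthogonal to ker(phi); here a itself qualifies.
v0 = a
h = phi(v0) / (v0 @ v0) * v0   # formula (1.20); here h = a

for _ in range(50):
    v = rng.normal(size=4)
    assert np.isclose(h @ v, phi(v))   # (h, v) = <h*, v> for all v
```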
1.3 The Lax–Milgram theorem

Instead of a scalar product one can consider, more generally, a continuous bilinear form. That is to say, if a(u, v) is a continuous bilinear form on H, then for every u ∈ H

   v ↦ a(u, v)   (1.22)

is an element of H*. As for the Riesz representation theorem, one can ask if every element of H* is of this type. This can be achieved with some assumptions on a which reproduce the properties of the scalar product, namely:

Theorem 1.5 (Lax–Milgram). Let a(u, v) be a bilinear form on H such that

• a is continuous, i.e., ∃ Λ > 0 such that

   |a(u, v)| ≤ Λ|u| |v|   ∀ u, v ∈ H,   (1.23)

• a is coercive, i.e., ∃ λ > 0 such that

   a(u, u) ≥ λ|u|²   ∀ u ∈ H.   (1.24)

Then for every f ∈ H* there exists a unique u ∈ H such that

   a(u, v) = ⟨f, v⟩   ∀ v ∈ H.   (1.25)

In the case where a is symmetric, that is to say

   a(u, v) = a(v, u)   ∀ u, v ∈ H,   (1.26)

then u is the unique minimizer on H of the functional

   J(v) = (1/2) a(v, v) − ⟨f, v⟩.   (1.27)
Proof. For every u ∈ H, by (1.23), v ↦ a(u, v) is an element of H*. By the Riesz representation theorem there exists a unique element of H, denoted by Au, such that

   a(u, v) = (Au, v)   ∀ v ∈ H.

We will be done if we can show that A is a bijective mapping from H onto H. (Indeed, if h denotes the Riesz representative of f, one will then have ⟨f, v⟩ = (h, v) = (Au, v) = a(u, v) ∀ v ∈ H for a unique u in H.)
• A is linear. By definition of A, for any u_1, u_2 ∈ H and α_1, α_2 ∈ R one has

   a(α_1 u_1 + α_2 u_2, v) = (A(α_1 u_1 + α_2 u_2), v)   ∀ v ∈ H.

By the bilinearity of a and the definition of A one has also

   a(α_1 u_1 + α_2 u_2, v) = α_1 a(u_1, v) + α_2 a(u_2, v) = α_1 (Au_1, v) + α_2 (Au_2, v) = (α_1 Au_1 + α_2 Au_2, v)   ∀ v ∈ H.

Then we have (A(α_1 u_1 + α_2 u_2), v) = (α_1 Au_1 + α_2 Au_2, v) ∀ v ∈ H, and thus

   A(α_1 u_1 + α_2 u_2) = α_1 Au_1 + α_2 Au_2   ∀ u_1, u_2 ∈ H, ∀ α_1, α_2 ∈ R,

hence the linearity of A.

• A is injective. Due to the linearity of A it is enough to show that Au = 0 ⟹ u = 0. If Au = 0, by (1.24) and the definition of A one has

   0 = (Au, u) = a(u, u) ≥ λ|u|²

and our claim is proved.

• AH, the image of H by A, is a closed subspace of H. Indeed, consider a sequence Au_n such that

   Au_n → y   in H.

We want to show that y ∈ AH. For that, note that by (1.24) one has

   λ|u_n − u_m|² ≤ a(u_n − u_m, u_n − u_m) = (Au_n − Au_m, u_n − u_m) ≤ |Au_n − Au_m| |u_n − u_m|,

i.e.,

   |u_n − u_m| ≤ (1/λ) |Au_n − Au_m|.

Since Au_n converges, it is a Cauchy sequence, and so is u_n. Then there exists u ∈ H such that u_n → u in H when n → +∞. By definition of A we have

   a(u_n, v) = (Au_n, v)   ∀ v ∈ H.

Passing to the limit in n we get

   a(u, v) = (y, v)   ∀ v ∈ H,

which means y = Au ∈ AH. This completes the proof of the claim.

• A is surjective. If not, V = AH is a closed proper subspace of H. Then – see Corollary 1.2 – there exists a vector v_0 ≠ 0 orthogonal to V. Then

   λ|v_0|² ≤ a(v_0, v_0) = (Av_0, v_0) = 0,

which contradicts v_0 ≠ 0. This completes the first part of the theorem.

If we assume now that a is symmetric and u is the solution to (1.25), one has

   J(v) = J(u + (v − u))
        = (1/2) a(u + (v − u), u + (v − u)) − ⟨f, u + (v − u)⟩
        = (1/2) a(u, u) − ⟨f, u⟩ + a(u, v − u) − ⟨f, v − u⟩ + (1/2) a(v − u, v − u)

(by bilinearity and since a is symmetric). Using (1.25) we get

   J(v) = J(u) + (1/2) a(v − u, v − u) ≥ J(u) + (λ/2)|v − u|² ≥ J(u)   ∀ v ∈ H,

the last inequality being strict for v ≠ u. Hence u is the unique minimizer of J on H. This completes the proof of the theorem.
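The theorem can be sketched numerically in finite dimension (an added illustration; the matrix below is an arbitrary nonsymmetric coercive example). On H = R^n with a(u, v) = v · Mu, coercivity depends only on the symmetric part of M, since the skew part contributes nothing to a(u, u), and solving (1.25) amounts to solving a linear system:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
S = rng.normal(size=(n, n))
skew = S - S.T                    # skew part: u . (skew u) = 0 for all u
M = 2.0 * np.eye(n) + skew        # nonsymmetric, coercive with lambda = 2

# coercivity check: a(u, u) = u . M u >= 2 |u|^2
for _ in range(100):
    u = rng.normal(size=n)
    assert u @ M @ u >= 2.0 * (u @ u) - 1e-9

# a(u, v) = (f, v) for all v  <=>  M u = f, uniquely solvable
f = rng.normal(size=n)
u = np.linalg.solve(M, f)
assert np.allclose(M @ u, f)
```

Note that a here is not symmetric, so u is not obtained as a minimizer of (1.27); existence and uniqueness rest on coercivity alone, as in the proof.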
1.4 Convergence techniques

We recall that if h_n is a sequence in H, then

   h_n ⇀ h_∞   in H   as n → +∞

(i.e., h_n converges towards h_∞ weakly) iff

   lim_{n→+∞} (h_n, h) = (h_∞, h)   ∀ h ∈ H.

Due to the Cauchy–Schwarz inequality it is easy to show that

   h_n → h  in H   ⟹   h_n ⇀ h  in H.

The converse is not true in general.
We also have

Theorem 1.6 (Weak compactness of balls). If h_n is a bounded sequence in H, there exists a subsequence h_{n_k} of h_n and h_∞ ∈ H such that h_{n_k} ⇀ h_∞.

Proof. See [96].

Finally, we will find useful the following theorem.

Theorem 1.7. Let x_n, y_n be two sequences in H such that

   x_n → x_∞,   y_n ⇀ y_∞.

Then we have (x_n, y_n) → (x_∞, y_∞) in R.

Proof. One has

   |(x_n, y_n) − (x_∞, y_∞)| = |(x_n − x_∞, y_n) − (x_∞, y_∞ − y_n)| ≤ |x_n − x_∞| |y_n| + |(x_∞, y_∞ − y_n)|.

Since y_n ⇀ y_∞, y_n is bounded (theorem of Banach–Steinhaus); the result then follows easily.
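The "moving bump" e_n of Exercise 3(b) is the standard example of weak but not strong convergence. A numerical sketch (added here; the truncation dimension is an artifact of working on a computer) shows the pairings (e_n, h) shrinking while |e_n| stays equal to 1:

```python
import numpy as np

N = 10_000                                        # truncated l^2, for illustration
h = np.array([1.0 / (i + 1) for i in range(N)])   # one fixed square-summable vector

pairings = []
for n in [10, 100, 1000, 9000]:
    e_n = np.zeros(N)
    e_n[n] = 1.0
    assert np.linalg.norm(e_n) == 1.0             # no strong convergence to 0
    pairings.append(e_n @ h)                      # (e_n, h) = 1/(n+1)

# the scalar products against the fixed h tend to 0
assert all(abs(a) > abs(b) for a, b in zip(pairings, pairings[1:]))
assert abs(pairings[-1]) < 1e-3
```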
Exercises

1. Show (1.3). (Hint: use the fact that (u + λv, u + λv) ≥ 0 ∀ λ ∈ R.)

2. In R^n consider

   K = { x | x_i ≥ 0 ∀ i = 1, ..., n }.

Show that K is a closed convex set of R^n. Show that

   P_K(y) = (y_1 ∨ 0, y_2 ∨ 0, ..., y_n ∨ 0)   ∀ y ∈ R^n,

where ∨ denotes the maximum of two numbers.

3. Set

   ℓ² = { x = (x_i) | Σ_{i=1}^{+∞} x_i² < +∞ },

i.e., ℓ² is the set of square-summable sequences.

(a) Show that ℓ² is a Hilbert space for the scalar product

   (x, y) = Σ_{i=1}^{+∞} x_i y_i   ∀ x, y ∈ ℓ².
(b) One sets e_n = (0, 0, ..., 1, 0, ...) where the 1 is located at the nth slot. Show that e_n ⇀ 0 in ℓ² but e_n ↛ 0.

4. Let g ∈ L²(A) where A is a measurable subset of R^n. Set

   K = { v ∈ L²(A) | v(x) ≤ g(x) a.e. x ∈ A }.

Show that K is a nonempty closed convex set of L²(A). Show that

   P_K(v) = v ∧ g   ∀ v ∈ L²(A)

(∧ denotes the minimum of two numbers – i.e., (v ∧ g)(x) = v(x) ∧ g(x) a.e. x).

5. If P_K denotes the projection on a nonempty closed convex set K of a Hilbert space H, show that

   |P_K(h) − P_K(h′)| ≤ |h − h′|   ∀ h, h′ ∈ H.

6. Let a be a continuous, coercive bilinear form on a real Hilbert space H. Let f, ℓ ∈ H* with ℓ ≠ 0. Set V = { h ∈ H | ⟨ℓ, h⟩ = 0 }.

(a) Show that there exists a unique u solution to

   u ∈ V,   a(u, v) = ⟨f, v⟩ ∀ v ∈ V.

(b) Show that there exists a unique k ∈ R such that u satisfies

   a(u, v) = ⟨f + kℓ, v⟩ ∀ v ∈ H.

(Hint: if h ∈ H with ⟨ℓ, h⟩ = 1, then v − ⟨ℓ, v⟩h ∈ V ∀ v ∈ H.)

(c) Show that there exists a unique u′ ∈ H solution to

   u′ ∈ H,   a(v, u′) = ⟨ℓ, v⟩ ∀ v ∈ H,

and that k = −⟨f, u′⟩/⟨ℓ, u′⟩.

(d) If ℓ_1, ℓ_2, ..., ℓ_p ∈ H* and

   V = { h ∈ H | ⟨ℓ_i, h⟩ = 0 ∀ i = 1, ..., p },

what can be said about u, the solution to

   u ∈ V,   a(u, v) = ⟨f, v⟩ ∀ v ∈ V ?
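The projection formula of Exercise 2 and the nonexpansiveness of Exercise 5 can be observed numerically (an added check, not a proof; points of K are sampled at random):

```python
import numpy as np

rng = np.random.default_rng(2)
P_K = lambda y: np.maximum(y, 0.0)   # claimed projection onto the orthant K

for _ in range(200):
    h, hp = rng.normal(size=5), rng.normal(size=5)
    u = P_K(h)
    # variational characterization (1.5): (u - h, v - u) >= 0 for v in K
    v = np.abs(rng.normal(size=5))   # an arbitrary point of K
    assert (u - h) @ (v - u) >= -1e-12
    # Exercise 5: |P_K(h) - P_K(h')| <= |h - h'|
    assert np.linalg.norm(P_K(h) - P_K(hp)) <= np.linalg.norm(h - hp) + 1e-12
```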
Chapter 2
A Survey of Essential Analysis

2.1 L^p-techniques

We recall here some basic techniques regarding L^p-spaces, in particular those involving approximation by mollifiers.

Definition 2.1. Let p ≥ 1 be a real number and Ω an open subset of R^n, n ≥ 1. We set

   L^p(Ω) = { "class of functions" v : Ω → R s.t. ∫_Ω |v(x)|^p dx < +∞ },
   L^∞(Ω) = { "class of functions" v : Ω → R s.t. ∃ C s.t. |v(x)| ≤ C a.e. x ∈ Ω }.

Equipped with the norms

   |v|_{p,Ω} = ( ∫_Ω |v(x)|^p dx )^{1/p},   |v|_{∞,Ω} = Inf{ C s.t. |v(x)| ≤ C a.e. x ∈ Ω },

L^p(Ω) and L^∞(Ω) are Banach spaces (see [85]). Moreover, for any 1 ≤ p < +∞ the dual of L^p(Ω) can be identified with L^{p′}(Ω), where p′ is the conjugate exponent of p defined by

   1/p + 1/p′ = 1,   (2.1)

i.e., p′ = p/(p − 1), with the convention that p′ = +∞ when p = 1. (We refer the reader to [85] for these notions.)
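The conjugate-exponent relation and the Hölder inequality ∫|uv| ≤ |u|_p |v|_{p′} that rests on it can be checked numerically (an added aside; discrete sums stand in for integrals, and p = 3 is an arbitrary choice):

```python
import random

random.seed(1)
p = 3.0
p_conj = p / (p - 1)                       # conjugate exponent: 1/p + 1/p' = 1
assert abs(1 / p + 1 / p_conj - 1) < 1e-12

for _ in range(500):
    u = [random.uniform(-1, 1) for _ in range(10)]
    v = [random.uniform(-1, 1) for _ in range(10)]
    lhs = sum(abs(a * b) for a, b in zip(u, v))
    rhs = sum(abs(a) ** p for a in u) ** (1 / p) * \
          sum(abs(b) ** p_conj for b in v) ** (1 / p_conj)
    assert lhs <= rhs + 1e-12              # discrete Hölder inequality
```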
Definition 2.2. We denote by L^p_loc(Ω) the set of functions v defined on Ω such that for any bounded Ω′ with Ω′ ⊂ Ω one has v ∈ L^p(Ω′). We will write Ω′ ⊂⊂ Ω when the closure of Ω′ in R^n is included in Ω and compact.
We denote by D(Ω) the space of functions infinitely differentiable in Ω with compact support in Ω. Recall that the support of a function ρ is defined as

   Supp ρ = the closure of the set { x ∈ Ω | ρ(x) ≠ 0 }.   (2.2)
The function defined, for c a constant, by ⎧ 1 ⎨c exp − if |x| < 1, 1 − |x|2 ρ(x) = ⎩ 0 if |x| ≥ 1,
(2.4)
is a function of D(Rn ) with support B1 = { x ∈ Rn | |x| ≤ 1 }. 2. If x0 ∈ Ω and ε positive is such that Bε (x0 ) = { x ∈ Rn | |x − x0 | ≤ ε } ⊂ Ω then the function rε (x) = ρ
x − x 0
ε is a function of D(Ω) with support Bε (x0 ).
(2.5) (2.6)
Considering linear combinations of functions of the type (2.6), it is easy to construct many other functions of D(Ω) and to see that D(Ω) is an infinite-dimensional vector space over R.

Suppose that in (2.4) we choose c such that

   ∫_{R^n} ρ(x) dx = 1   (2.7)

and set

   ρ_ε(x) = (1/ε^n) ρ(x/ε).   (2.8)

We then have

   ρ_ε ∈ D(R^n),   Supp ρ_ε = B_ε(0),   ∫_{R^n} ρ_ε(x) dx = 1.   (2.9)

For u ∈ L¹_loc(Ω) we define the "mollifier of u" as

   u_ε(x) = ∫_Ω u(y) ρ_ε(x − y) dy = (ρ_ε ∗ u)(x).   (2.10)
It is clear that if x ∈ Ω, u_ε(x) is defined as soon as

   ε < dist(x, ∂Ω),   (2.11)

where dist(x, ∂Ω) denotes the distance from x to ∂Ω. Recall that dist(x, ∂Ω) = Inf_{y∈∂Ω} |x − y|, where | · | is the usual Euclidean norm in R^n. Then we have
Theorem 2.1. Let Ω′ ⊂⊂ Ω.

1. For ε < dist(Ω′, ∂Ω), u_ε ∈ C^∞(Ω′).

2. If u ∈ C(Ω) – i.e., u is a continuous function in Ω – then, when ε → 0,

   u_ε → u   uniformly in Ω′.   (2.12)

3. If u ∈ L^p_loc(Ω), 1 ≤ p < +∞, then, when ε → 0,

   u_ε → u   in L^p(Ω′).   (2.13)
Proof. 1. Let x ∈ Ω′. Since for ε < dist(Ω′, ∂Ω) one has B_ε(x) ⊂ Ω, it is clear that (2.10) is well defined. Now, from the theorem of differentiation under the integral sign one has, if ∂_{x_i} denotes the partial derivative in the direction x_i,

   ∂_{x_i} u_ε(x) = ∫_Ω u(y) ∂_{x_i} ρ_ε(x − y) dy.

Repeating this process in the other directions and for derivatives of any order, it is clear that u_ε ∈ C^∞(Ω′).

2. Since u ∈ C(Ω), u is uniformly continuous on any compact subset of Ω, and for any δ > 0 there exists an ε < dist(Ω′, ∂Ω) = Inf_{x∈Ω′, y∈∂Ω} |x − y| such that

   x ∈ Ω′,  |x − y| ≤ ε   ⟹   |u(x) − u(y)| ≤ δ.   (2.14)

Let x ∈ Ω′. One has, see (2.9),

   u(x) − u_ε(x) = ∫_Ω {u(x) − u(y)} ρ_ε(x − y) dy

(by (2.10)). Thus

   |u(x) − u_ε(x)| = | ∫_Ω {u(x) − u(y)} ρ_ε(x − y) dy | ≤ ∫_Ω |u(x) − u(y)| ρ_ε(x − y) dy.

In the integral above one integrates only on B_ε(x), and by (2.14) it comes

   |u(x) − u_ε(x)| ≤ δ ∫_{B_ε(x)} ρ_ε(x − y) dy = δ ∫_{B_ε} ρ_ε(z) dz = δ,

which completes the proof of assertion 2.
3. Let Ω′ ⊂⊂ Ω″ ⊂⊂ Ω. Since u ∈ L^p(Ω″), there exists û ∈ C(Ω″) such that

   |u − û|_{p,Ω″} ≤ δ/3,   (2.15)

since the continuous functions are dense in L^p(Ω″) – see [85]. Then we have, if û_ε = ρ_ε ∗ û,

   |u − u_ε|_{p,Ω′} = |u − û + û − û_ε + û_ε − u_ε|_{p,Ω′} ≤ |u − û|_{p,Ω′} + |û − û_ε|_{p,Ω′} + |û_ε − u_ε|_{p,Ω′}.   (2.16)

Now let us notice that

   |û_ε − u_ε|^p_{p,Ω′} = ∫_{Ω′} |û_ε(x) − u_ε(x)|^p dx
      = ∫_{Ω′} | ∫_{B_ε(x)} {û(y) − u(y)} ρ_ε(x − y) dy |^p dx
      ≤ ∫_{Ω′} ( ∫_{B_ε(x)} |û(y) − u(y)| ρ_ε^{1/p}(x − y) ρ_ε^{1/p′}(x − y) dy )^p dx
      ≤ ∫_{Ω′} ∫_{B_ε(x)} |û(y) − u(y)|^p ρ_ε(x − y) dy dx   (by Hölder's inequality)
      = ∫_{Ω′} ∫_{B_ε} |û(x − z) − u(x − z)|^p ρ_ε(z) dz dx

(we made the change of variable z = x − y, B_ε = B_ε(0)). Thus, by the Fubini theorem, we obtain

   |û_ε − u_ε|^p_{p,Ω′} ≤ ∫_{B_ε} ρ_ε(z) ∫_{Ω′} |û(x − z) − u(x − z)|^p dx dz
      ≤ ∫_{B_ε} ρ_ε(z) dz ∫_{Ω″} |û(ξ) − u(ξ)|^p dξ
      = ∫_{Ω″} |û(ξ) − u(ξ)|^p dξ,

for ε < dist(Ω′, ∂Ω″). Thus from (2.15), (2.16) we derive

   |u − u_ε|_{p,Ω′} ≤ 2δ/3 + |û − û_ε|_{p,Ω′} ≤ δ

for ε small enough, by point 2, since uniform convergence implies L^p-convergence on the bounded set Ω′. This completes the proof.

Remark. If u ∈ L^p(Ω), Ω bounded, and if we denote by ū the extension of u by 0 outside of Ω, then as a corollary of Theorem 2.1 we have

   ū_ε = ρ_ε ∗ ū → u   in L^p(Ω).
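The smoothing effect of (2.10) can be watched numerically in one dimension (an added sketch; the grid, the step function, and the sequence of values of ε are arbitrary illustrative choices). Convolving a discontinuous u with the scaled bump ρ_ε of (2.8) produces smooth approximations whose L¹ error on a fixed subinterval shrinks as ε → 0:

```python
import numpy as np

def rho(x):
    # the bump (2.4) on R, unnormalized
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

dx = 1e-3
x = np.arange(-2, 2, dx)
u = (x > 0).astype(float)            # a discontinuous u in L^1_loc

errors = []
for eps in [0.4, 0.2, 0.1, 0.05]:
    rho_eps = rho(x / eps) / eps
    rho_eps /= rho_eps.sum() * dx    # enforce integral 1 numerically, cf. (2.9)
    u_eps = np.convolve(u, rho_eps, mode="same") * dx
    window = np.abs(x) < 1           # Omega' = (-1, 1), away from grid edges
    errors.append(np.sum(np.abs(u_eps - u)[window]) * dx)

assert errors == sorted(errors, reverse=True)   # L^1 error decreases with eps
```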
We show now the following compactness result, which is an L^p-generalization of the Arzelà–Ascoli theorem.

Theorem 2.2 (Fréchet–Kolmogorov). Let Ω′ ⊂⊂ Ω and let L be a subset of L^p(Ω), 1 ≤ p < +∞, such that

• L is bounded in L^p(Ω);

• L is equicontinuous in L^p(Ω′), that is to say

   ∀ η > 0 ∃ δ > 0, δ < dist(Ω′, ∂Ω), such that ∀ h ∈ R^n:  |h| < δ  ⟹  |σ_h f − f|_{p,Ω′} < η  ∀ f ∈ L.

Then L_{Ω′} = { f|_{Ω′} | f ∈ L } is relatively compact in L^p(Ω′). (σ_h f is the translate of f, defined as σ_h f(x) = f(x + h) ∀ h ∈ R^n.)

Proof. We can without loss of generality assume that Ω is bounded. If f ∈ L, we denote by f̄ its extension by 0 outside Ω and set

   L_0 = { f̄ | f ∈ L }.

It is clear then that L_0 is a bounded set of L^p(R^n) and of L¹(R^n). First we claim that for ε < δ we have

   |ρ_ε ∗ f̄ − f̄|_{p,Ω′} < η   ∀ f̄ ∈ L_0.

Indeed, if ε < δ < dist(Ω′, ∂Ω),

   |ρ_ε ∗ f̄ − f̄|^p_{p,Ω′} = ∫_{Ω′} |ρ_ε ∗ f̄(x) − f̄(x)|^p dx
      = ∫_{Ω′} | ∫_{B_ε(x)} {f̄(y) − f̄(x)} ρ_ε(x − y) dy |^p dx
      = ∫_{Ω′} | ∫_{B_ε} {f̄(x − z) − f̄(x)} ρ_ε(z) dz |^p dx
      ≤ ∫_{Ω′} ( ∫_{B_ε} |f̄(x − z) − f̄(x)| ρ_ε^{1/p}(z) ρ_ε^{1/p′}(z) dz )^p dx
      ≤ ∫_{Ω′} ∫_{B_ε} |f̄(x − z) − f̄(x)|^p ρ_ε(z) dz dx   (by Hölder's inequality)
      ≤ Sup_{|z|<ε} |σ_{−z} f̄ − f̄|^p_{p,Ω′} < η^p.   (2.17)

(Recall that ∫_{B_ε} ρ_ε dz = 1.) Consider then

   L_ε = { ρ_ε ∗ f̄ | f̄ ∈ L_0 }.
We claim that L_ε satisfies the assumptions of the Arzelà–Ascoli theorem, which are:

• L_ε is bounded in C(Ω̄′). Indeed, this follows from

   |ρ_ε ∗ f̄|_∞ = | ∫_{B_ε(x)} f̄(y) ρ_ε(x − y) dy |_∞ ≤ |ρ_ε|_∞ |f̄|_{1,R^n}

(| · |_∞ is the L^∞-norm in C(Ω̄′)).

• L_ε is equicontinuous. Indeed, if x_1, x_2 ∈ R^n we have

   |ρ_ε ∗ f̄(x_1) − ρ_ε ∗ f̄(x_2)| ≤ ∫_{R^n} |f̄(y)| |ρ_ε(x_1 − y) − ρ_ε(x_2 − y)| dy ≤ C_ε |x_1 − x_2| |f̄|_{1,R^n}   ∀ f̄ ∈ L_0,

where C_ε is the Lipschitz constant of ρ_ε. The claim follows then from the fact that L_0 is bounded in L¹(R^n).

From the above it follows that L_ε is relatively compact in C(Ω̄′) and thus also in L^p(Ω′). Given η, we fix then ε such that

   |ρ_ε ∗ f̄ − f̄|_{p,Ω′} < η   ∀ f̄ ∈ L_0.

Then we cover L_ε by a finite number of balls of radius η. It is then clear that the same balls, with radius 2η, cover L_0 – which completes the proof.

It is very useful to consider derivatives of objects having no derivative in the usual sense. This is done through duality, and it is one of the great achievements of the distributions theory which emerged in the beginning of the fifties (see [86]). This is what we would like to consider next.
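The translation condition in Theorem 2.2 can be observed numerically for a single function (an added illustration; a discrete grid stands in for Ω, f is an arbitrary smooth choice, and a genuine family L would have to satisfy the bound uniformly):

```python
import numpy as np

dx = 1e-3
x = np.arange(-2.0, 2.0, dx)

def translate_error(h):
    # L^2 norm of sigma_h f - f on Omega' = (-1, 1), with f(x) = exp(-x^2)
    f = np.exp(-x ** 2)
    fh = np.exp(-(x + h) ** 2)       # sigma_h f(x) = f(x + h)
    window = np.abs(x) < 1.0
    return (np.sum((fh - f)[window] ** 2) * dx) ** 0.5

errs = [translate_error(h) for h in (0.4, 0.2, 0.1, 0.05)]
assert errs == sorted(errs, reverse=True)   # |sigma_h f - f| shrinks as |h| -> 0
```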
2.2 Introduction to distributions

For any multiindex α = (α_1, ..., α_n) ∈ N^n we denote by D^α the partial derivative given by

   D^α = ∂^{α_1+···+α_n} / (∂x_1^{α_1} ... ∂x_n^{α_n}).   (2.18)

We define first a notion of convergence in the space of functions D(Ω).

Definition 2.3. Let ϕ_i, i ∈ N, be a sequence of functions in D(Ω). We say that

   ϕ_i → ϕ   in D(Ω) when i → +∞   (2.19)

if the ϕ_i's and ϕ have all their supports contained in a compact subset K of Ω and if D^α ϕ_i → D^α ϕ uniformly in K, ∀ α ∈ N^n, i.e.,

   lim_{i→+∞} Sup_{x∈K} |D^α ϕ_i(x) − D^α ϕ(x)| = 0   ∀ α ∈ N^n.   (2.20)
Remark 2.1. One can show that this notion of convergence can be defined by a topology on D(Ω). We refer the reader to [86], [94]. We can now introduce the notion of distribution. Definition 2.4. A distribution T on Ω is a continuous linear form on D(Ω) – i.e., a linear form on D(Ω) such that lim T (ϕi ) = T (ϕ)
(2.21)
i→+∞
for any sequence ϕi such that ϕi → ϕ in D(Ω). T (ϕ) will be denoted by T, ϕ and the space of distributions on Ω by D (Ω). Examples. 1. Let T ∈ L1loc (Ω). Then
$$\langle T,\varphi\rangle=\int_\Omega T(x)\varphi(x)\,dx\quad\forall\,\varphi\in D(\Omega)\qquad(2.22)$$
defines a distribution. Indeed this follows from
$$|\langle T,\varphi_i-\varphi\rangle|=\Big|\int_\Omega T(x)(\varphi_i(x)-\varphi(x))\,dx\Big|\le\sup_{x\in K}|\varphi_i(x)-\varphi(x)|\int_K|T|\,dx.$$
2. If $T_1,T_2$ are distributions and $\alpha_1,\alpha_2\in\mathbb{R}$, then $\alpha_1T_1+\alpha_2T_2$ defined by
$$\langle\alpha_1T_1+\alpha_2T_2,\varphi\rangle=\alpha_1\langle T_1,\varphi\rangle+\alpha_2\langle T_2,\varphi\rangle\quad\forall\,\varphi\in D(\Omega)$$
is a distribution. This shows that $D'(\Omega)$ is a vector space over $\mathbb{R}$.
3. The Dirac mass. Let $x_0\in\Omega$. Then
$$\langle\delta_{x_0},\varphi\rangle=\varphi(x_0)\quad\forall\,\varphi\in D(\Omega)\qquad(2.23)$$
defines a distribution called the Dirac mass at $x_0$. This is an example of a distribution which is not a function, i.e., one cannot find $T\in L^1_{loc}(\Omega)$ such that
$$\int_\Omega T(x)\varphi(x)\,dx=\varphi(x_0)\quad\forall\,\varphi\in D(\Omega).\qquad(2.24)$$
$\delta_{x_0}$ is a “measure”.
Chapter 2. A Survey of Essential Analysis

To see the impossibility of having (2.24) one needs the following theorem. It establishes the consistency between equality in the distributional sense and equality in the $L^1$-sense for functions. It is clear that if $T_1,T_2\in L^1_{loc}(\Omega)$ and
$$T_1=T_2\quad\text{a.e. in }\Omega\qquad(2.25)$$
then $T_1,T_2$ define the same distribution through the formula (2.22). Conversely we have

Theorem 2.3. Suppose that $T_1,T_2\in L^1_{loc}(\Omega)$ and
$$\langle T_1,\varphi\rangle=\langle T_2,\varphi\rangle\quad\forall\,\varphi\in D(\Omega).\qquad(2.26)$$
Then $T_1=T_2$ a.e. in $\Omega$.

Proof. Consider $\Omega'\subset\subset\Omega$ – i.e., $\Omega'$ is an open set included in a compact subset of $\Omega$. If $\partial\Omega$ denotes the boundary of $\Omega$ then
$$d=\operatorname{dist}(\Omega',\partial\Omega)=\inf_{x\in\Omega',\,y\in\partial\Omega}|x-y|>0.\qquad(2.27)$$
If $\rho_\varepsilon$ is defined by (2.8), for $\varepsilon<d$ one has for every $x\in\Omega'$
$$(T_1*\rho_\varepsilon)(x)=\int_\Omega T_1(y)\rho_\varepsilon(x-y)\,dy=\langle T_1,\rho_\varepsilon(x-\cdot)\rangle=\langle T_2,\rho_\varepsilon(x-\cdot)\rangle=(T_2*\rho_\varepsilon)(x).\qquad(2.28)$$
Now when $\varepsilon\to 0$ we have
$$T_i*\rho_\varepsilon\longrightarrow T_i\quad\text{in }L^1(\Omega'),\quad i=1,2,\qquad(2.29)$$
(see Theorem 2.1). The result follows then from (2.28). $\square$
Remark 2.2. To show that (2.24) cannot hold for $T\in L^1_{loc}(\Omega)$ it is enough to notice that (2.24) implies
$$\langle T,\varphi\rangle=0\quad\forall\,\varphi\in D(\Omega\setminus\{x_0\}).$$
Then, by Theorem 2.3, $T=0$ a.e. in $\Omega\setminus\{x_0\}$, i.e., $T=0$ a.e. in $\Omega$, so $T$ is the zero distribution – which is not the case for the Dirac mass.

As announced at the beginning of this section, we can now differentiate any distribution. Indeed we have:

Definition 2.5. For $\alpha=(\alpha_1,\dots,\alpha_n)\in\mathbb{N}^n$ we set $|\alpha|=\alpha_1+\cdots+\alpha_n$. Then for $T\in D'(\Omega)$,
$$\varphi\mapsto(-1)^{|\alpha|}\langle T,D^\alpha\varphi\rangle$$
defines a distribution on $\Omega$ which we denote by $D^\alpha T$. Thus we have
$$\langle D^\alpha T,\varphi\rangle=(-1)^{|\alpha|}\langle T,D^\alpha\varphi\rangle\quad\forall\,\varphi\in D(\Omega).\qquad(2.30)$$

Example. Suppose that $T$ is a function on $\Omega$, $k$ times differentiable. For $|\alpha|\le k$ denote provisionally by $\partial^\alpha$ the derivative of $T$ in the usual sense – i.e.,
$$\partial^\alpha T(x)=\frac{\partial^{|\alpha|}T(x)}{\partial x_1^{\alpha_1}\cdots\partial x_n^{\alpha_n}}\quad\forall\,x\in\Omega.\qquad(2.31)$$
Then we have
$$D^\alpha T=\partial^\alpha T,\qquad(2.32)$$
i.e., the derivative in the distributional sense of $T$ coincides with the distribution defined by the function $\partial^\alpha T$. To see this consider $\alpha=(0,\dots,1,\dots,0)$, the 1 being in the $i$th slot. One has
$$\langle D^\alpha T,\varphi\rangle=-\langle T,\partial_{x_i}\varphi\rangle=-\int_\Omega T(x)\partial_{x_i}\varphi(x)\,dx=-\int_\Omega\big(\partial_{x_i}(T\varphi)-\partial_{x_i}T\,\varphi\big)\,dx.\qquad(2.33)$$
For simplicity we have set $\partial_{x_i}=\frac{\partial}{\partial x_i}$ and will do so also in what follows. Now clearly, by the Fubini theorem,
$$\int_\Omega\partial_{x_i}(T\varphi)\,dx=\int_{x'}\!\int_{x_i}\partial_{x_i}(T\varphi)\,dx_i\,dx'=0.$$
(In the integral above we integrate first in $x_i$, then in the other variables, which we denote by $x'$. Note that $T\varphi$ vanishes on the boundary of $\Omega$.) Then (2.33) becomes
$$\langle D^\alpha T,\varphi\rangle=\int_\Omega\partial_{x_i}T\,\varphi\,dx=\langle\partial^\alpha T,\varphi\rangle.$$
(2.32) follows, since for any $\alpha\in\mathbb{N}^n$, $D^\alpha$ can be obtained by iterating operators of the type $\partial_{x_i}$. From now on we will use the same notation for the derivative in the usual sense and in the distributional sense of $C^k$-functions.

As we defined a notion of convergence in $D(\Omega)$, we now define convergence in $D'(\Omega)$.

Definition 2.6. Let $T_i$ be a sequence of distributions on $\Omega$. We say that
$$\lim_{i\to+\infty}T_i=T\quad\text{in }D'(\Omega)\qquad(2.34)$$
iff
$$\lim_{i\to+\infty}\langle T_i,\varphi\rangle=\langle T,\varphi\rangle\quad\forall\,\varphi\in D(\Omega).\qquad(2.35)$$
One can show that the convergence above defines a topology – see [86], [94].
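The duality formula (2.30) can be checked numerically in one dimension. The sketch below is my own illustration (uniform grid, simple Riemann sums): for $T(x)=|x|$, whose distributional derivative is the sign function, it verifies $-\langle T,\varphi'\rangle=\langle\operatorname{sign},\varphi\rangle$ against a smooth compactly supported test function.

```python
import numpy as np

x, h = np.linspace(-2.0, 2.0, 40001, retstep=True)

# phi in D(R): a C^infinity bump centred at 0.3, supported in (-0.2, 0.8)
y = (x - 0.3) / 0.5
phi = np.zeros_like(x)
m = np.abs(y) < 1.0
phi[m] = np.exp(-1.0 / (1.0 - y[m] ** 2))
dphi = np.gradient(phi, h)               # numerical phi'

# T(x) = |x|; its distributional derivative is sign(x)
lhs = -np.sum(np.abs(x) * dphi) * h      # -<T, phi'>
rhs = np.sum(np.sign(x) * phi) * h       # <sign, phi>
```

The two quantities agree up to discretization error, while the pointwise derivative of $|x|$ does not even exist at $0$.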
Example. Suppose that $x_i$ is a sequence of points in $\Omega$ such that $x_i$ converges toward $x_\infty\in\Omega$. Then one has
$$\delta_{x_i}\longrightarrow\delta_{x_\infty}\quad\text{in }D'(\Omega).\qquad(2.36)$$
Since
$$L^p(\Omega)\subset L^1_{loc}(\Omega)\quad\forall\,1\le p\le+\infty,$$
the functions of $L^p(\Omega)$ are distributions on $\Omega$. Moreover we have

Proposition 2.4. Let $T_i,T\in L^p(\Omega)$, $1\le p<\infty$. Suppose that when $i\to+\infty$
$$T_i\to T\quad\text{in }L^p(\Omega)\qquad(2.37)$$
(resp. $T_i\rightharpoonup T$ in $L^p(\Omega)$); then one has
$$T_i\longrightarrow T\quad\text{in }D'(\Omega).\qquad(2.38)$$

Proof. We have denoted by $\rightharpoonup$ the weak convergence in $L^p(\Omega)$. Note that the strong convergence in $L^p(\Omega)$ implies the weak one, and the latter can be expressed as
$$\int_\Omega T_i\varphi\,dx\longrightarrow\int_\Omega T\varphi\,dx\quad\forall\,\varphi\in L^{p'}(\Omega).$$
The result follows then from the fact that $D(\Omega)\subset L^{p'}(\Omega)$. $\square$
We also have the following

Proposition 2.5. The operator $D^\alpha$, $\alpha\in\mathbb{N}^n$, is continuous on $D'(\Omega)$, i.e.,
$$T_i\longrightarrow T\ \text{in }D'(\Omega)\implies D^\alpha T_i\longrightarrow D^\alpha T\ \text{in }D'(\Omega)$$
as $i\to+\infty$.

Proof. This follows immediately from
$$\langle D^\alpha T_i,\varphi\rangle=(-1)^{|\alpha|}\langle T_i,D^\alpha\varphi\rangle\longrightarrow(-1)^{|\alpha|}\langle T,D^\alpha\varphi\rangle=\langle D^\alpha T,\varphi\rangle,$$
since $D^\alpha\varphi\in D(\Omega)$ for all $\alpha\in\mathbb{N}^n$, $\varphi\in D(\Omega)$. $\square$
2.3 Sobolev Spaces

The Sobolev spaces are useful tools for solving partial differential equations. In this chapter we concentrate on the simplest ones.

Definition 2.7. Let $\Omega$ be an open subset of $\mathbb{R}^n$. We denote by $H^1(\Omega)$ the subset of $L^2(\Omega)$ defined by
$$H^1(\Omega)=\{\,v\in L^2(\Omega)\mid\partial_{x_i}v\in L^2(\Omega)\ \forall\,i=1,\dots,n\,\}.\qquad(2.39)$$
In this definition $\partial_{x_i}v$ denotes the derivative of $v$ in the distributional sense.

Remark 2.3. In other words, $v\in H^1(\Omega)$ iff $v\in L^2(\Omega)$ and for every $i$ there exists a function $v_i\in L^2(\Omega)$ – which we denote $\partial_{x_i}v$ – such that
$$-\int_\Omega v\,\partial_{x_i}\varphi\,dx=\int_\Omega v_i\varphi\,dx\quad\forall\,\varphi\in D(\Omega).$$
Example. If $\Omega$ is a bounded open set, the restriction to $\Omega$ of every continuously differentiable function on $\mathbb{R}^n$ belongs to $H^1(\Omega)$.

For $u,v\in H^1(\Omega)$ one can define the scalar product
$$(u,v)_{1,2}=\int_\Omega uv+\nabla u\cdot\nabla v\,dx.\qquad(2.40)$$
In the formula above $\nabla u$ denotes the gradient of $u$ – i.e., the vector
$$\nabla u=(\partial_{x_1}u,\dots,\partial_{x_n}u)\qquad(2.41)$$
and $\nabla u\cdot\nabla v$ the Euclidean scalar product of $\nabla u$ and $\nabla v$, that is to say
$$\nabla u\cdot\nabla v=\sum_{i=1}^n\partial_{x_i}u\,\partial_{x_i}v.$$
One has:

Theorem 2.6. Equipped with the scalar product (2.40), $H^1(\Omega)$ is a Hilbert space. The associated norm will be denoted
$$|v|_{1,2}=\Big(\int_\Omega v^2+|\nabla v|^2\,dx\Big)^{\frac12},\qquad(2.42)$$
where $|\cdot|$ stands for the Euclidean norm in $\mathbb{R}^n$.

Proof. It is easy to see that (2.40) defines a scalar product. So, we will be done provided we show that $H^1(\Omega)$ equipped with (2.42) is complete. Let $v_n$ be a Cauchy sequence in $H^1(\Omega)$, i.e., a sequence such that for any $\varepsilon$
$$|v_n-v_m|_{1,2}=\Big(\int_\Omega(v_n-v_m)^2+\sum_{i=1}^n(\partial_{x_i}v_n-\partial_{x_i}v_m)^2\,dx\Big)^{\frac12}\le\varepsilon\qquad(2.43)$$
for $n,m$ large enough. It follows that
$$v_n,\quad\partial_{x_i}v_n\quad(i=1,\dots,n)$$
are Cauchy sequences in $L^2(\Omega)$. Since $L^2(\Omega)$ is a Hilbert space there exist functions $u\in L^2(\Omega)$, $u_i\in L^2(\Omega)$, $i=1,\dots,n$, such that
$$v_n\longrightarrow u,\quad\partial_{x_i}v_n\longrightarrow u_i\quad\text{in }L^2(\Omega).$$
Due to Propositions 2.4 and 2.5 one also has
$$\partial_{x_i}v_n\longrightarrow\partial_{x_i}u,\quad\partial_{x_i}v_n\longrightarrow u_i\quad\text{in }D'(\Omega).$$
It follows that
$$\partial_{x_i}u=u_i\in L^2(\Omega)\quad\forall\,i=1,\dots,n,$$
i.e., $u\in H^1(\Omega)$. Moreover, letting $m\to+\infty$ in (2.43), we get $|v_n-u|_{1,2}\le\varepsilon$ for $n$ large enough, that is to say $v_n\to u$ in $H^1(\Omega)$. This completes the proof. $\square$
In what follows we will need functions vanishing on the boundary $\partial\Omega$ of $\Omega$. However, for a class of functions in $L^2(\Omega)$ the meaning of their values on $\partial\Omega$ is not clear. We overcome this problem by introducing $H^1_0(\Omega)$, the subspace of $H^1(\Omega)$ defined as
$$H^1_0(\Omega)=\text{the closure of }D(\Omega)\text{ in }H^1(\Omega),\qquad(2.44)$$
the closure being understood for the norm (2.42). $H^1_0(\Omega)$ will play the rôle of the functions of $H^1(\Omega)$ which vanish on $\partial\Omega$. We have

Theorem 2.7. Equipped with the scalar product (2.40) and the norm (2.42), $H^1_0(\Omega)$ is a Hilbert space.

Proof. This follows immediately from the fact that $H^1_0(\Omega)$ is a closed subspace of $H^1(\Omega)$. $\square$

Since the functions of $H^1_0(\Omega)$ vanish on $\partial\Omega$ – in a certain sense – one does not need the $L^2(\Omega)$-norm to control their convergence in $H^1(\Omega)$: the $L^2(\Omega)$-norm of the derivatives will be enough. This will be a consequence of the following theorem. Before stating it, let us briefly introduce the notion of directional derivative. Let $\nu$ be a unit vector in $\mathbb{R}^n$. If $v$ is a differentiable function then the limit
$$\lim_{h\to 0}\frac{v(x+h\nu)-v(x)}{h}\qquad(2.45)$$
exists and is called the derivative of $v$ in the direction $\nu$. For instance $\partial_{x_1}v$ is the derivative in the direction $e_1=(1,\dots,0)$, the first vector of the canonical basis of $\mathbb{R}^n$. To see that the limit in (2.45) exists for $v$ smooth it is enough to note that, by the mean value theorem and the chain rule, one has
$$v(x+h\nu)-v(x)=\frac{d}{dt}v(x+th\nu)\Big|_{t=\theta}=\nabla v(x+\theta h\nu)\cdot h\nu\quad(\theta\in(0,1))\qquad(2.46)$$
(recall that $\cdot$ denotes here the Euclidean scalar product).
Dividing by h and letting h → 0 it follows that v(x + hν) − v(x) = ∇v(x) · ν = ∂ν v(x). h→0 h lim
(2.47)
This derivative in the ν-direction will be denoted in the following as above as ∂ν v ∂v or also ∂ν . Note that if v ∈ H 1 (Ω) then for a fixed vector ν, the last equality defines a function ∂ν v which is in L2 (Ω). Then we can now state Theorem 2.8 (Poincar´e Inequality). Let ν be a unit vector in Rn , a > 0. Suppose that Ω is bounded in one direction more precisely Ω ⊂ { x ∈ Rn | |x · ν| ≤ a }. Then we have |v|2,Ω ≤ (Recall that
∂v ∂ν
√ ∂v 2a ∂ν 2,Ω
∀ v ∈ H01 (Ω).
(2.48)
(2.49)
is defined by (2.47). | · |2,Ω was defined in Section 2.1.)
∂v it is enough to show (2.49) for any v ∈ D(Ω). Proof. By definition of H01 (Ω) and ∂ν Consider then v ∈ D(Ω) and suppose it extended to all Rn by 0 outside Ω. Without loss of generality we can assume the coordinate system chosen such that ν = e1 . Then for x ∈ Ω we have x1 ∂x1 v(s, x2 , . . . , xn ) ds. v(x) = v(x) − v(−a, x2 , . . . , xn ) = −a
Squaring this inequality and using the inequality of Cauchy–Schwarz we get
aν ν
0
Ω −aν
Figure 2.1: Bounded open set in one direction
26
Chapter 2. A Survey of Essential Analysis 2
2
x1
∂x1 v(s, x2 , . . . , xn ) ds ≤ x1 ≤ |x1 + a| ∂x1 v(s, x2 , . . . , xn )2 ds −a a ∂x1 v(s, x2 , . . . , xn )2 ds. ≤ |x1 + a|
v (x) =
−a
x1
−a
2 |∂x1 v(s, x2 , . . . , xn )| ds
−a
Integrating this inequality in x1 , we derive a a a (x1 + a)2 2 v (x1 , x2 , . . . , xn ) dx1 ≤ ∂x1 v(x1 , . . . , xn )2 dx1 2 −a −a −a a = 2a2 ∂x1 v(x1 , . . . , xn )2 dx1 .
(2.50)
−a
Integrating in the other directions leads to (2.49). Then we can show
Theorem 2.9. Suppose that Ω is bounded in one direction, i.e., satisfies (2.48) then on H01 (Ω) the norms |∇v| (2.51) |v|1,2 , 2,Ω are equivalent. Proof. |∇v|
2,Ω
is the L2 (Ω)-norm of the gradient defined as |∇v|
Thus one has
|∇v|
2,Ω
2,Ω
= Ω
≤ |v|1,2
12 |∇v(x)|2 dx . ∀ v ∈ H01 (Ω).
Next due to Theorem 2.8 one has for v ∈ D(Ω) 2 ∂v v 2 dx ≤ 2a2 dx ≤ 2a2 |∇v|2 dx ∂ν Ω Ω Ω
(2.52)
(see (2.47) and recall that ν is a unit vector). It follows that 2 2 2 v + |∇v| dx ≤ (1 + 2a ) |∇v|2 dx Ω
or also
Ω
1 |v|1,2 ≤ (1 + 2a2 ) 2 |∇v|2,Ω .
This completes the proof of the theorem since D(Ω) is dense in H01 (Ω).
We introduce now the dual space of H01 (Ω). It is denoted by H −1 (Ω), i.e., H −1 (Ω) = (H01 (Ω))∗ .
(2.53)
2.3. Sobolev Spaces
27
The notation is justified by the following result: Theorem 2.10. T ∈ H −1 (Ω) iff there exist T0 , T1 , . . . , Tn ∈ L2 (Ω) such that T = T0 +
n
∂xi Ti .
(2.54)
i=1
(The derivative in (2.54) is understood in the distributional sense.) Proof. First consider a distribution of the type (2.54). For ϕ ∈ D(Ω) one has |T, ϕ| = |T0 , ϕ −
n
Ti , ∂xi ϕ|
i=1 n
= T0 ϕ − Ω
i=1
Ti ∂xi ϕ dx
n |Ti | |∂xi ϕ| dx |T0 | |ϕ| + ≤ Ω
≤
n Ω
Ti2
12
i=1 1
(|ϕ|2 + |∇ϕ|2 ) 2 dx.
i=0
(We used here the Cauchy–Schwarz inequality in Rn+1 .) By the Cauchy–Schwarz inequality again we get 12 n Ti2 dx |ϕ|1,2 ∀ ϕ ∈ D(Ω). (2.55) |T, ϕ| ≤ Ω i=0
By density (2.55) holds for any ϕ ∈ H01 (Ω) and T ∈ (H01 (Ω))∗ . Conversely let T be in H −1 (Ω). By the Riesz representation theorem there exists u ∈ H01 (Ω) such that uv + ∇u · ∇v dx = T, v ∀ v ∈ H01 (Ω). Ω
Setting T0 = u, Ti = −∂xi u it is clear from the equality above that T = T0 +
n
∂xi Ti
i=1
in D (Ω). This completes the proof of the theorem.
Remark 2.4. Note that T0 can be chosenequal to 0 if we use the Riesz representation theorem with the scalar product ∇u · ∇v dx. The strong dual norm on H −1 (Ω) can be defined by |T |∗ =
Sup v∈H01 (Ω)\{0}
|T, v| . |v|1,2
(2.56)
28
Chapter 2. A Survey of Essential Analysis
From Theorem 2.10 one deduces easily that L2 (Ω) → H −1 (Ω)
(2.57)
→ meaning that L2 (Ω) is continuously imbedded in H −1 (Ω) namely the identity is a continuous mapping. Indeed note that if T = T0 ∈ L2 (Ω) then |T0 , v| ≤ |T0 |2,Ω |v|2,Ω ≤ |T0 |2,Ω |v|1,2
∀ v ∈ H01 (Ω)
(by the Cauchy–Schwarz inequality and (2.42)). It follows that |T |∗ ≤ |T |2,Ω which proves our assertion. One should remark that Theorem 2.10 is valid for an arbitrary domain Ω of Rn . Regarding the derivatives of mollifiers we have the following: Proposition 2.11. Suppose that u ∈ L1loc (Ω) is such that ∂xi u ∈ L1loc (Ω) (we mean here the derivative in the distributional sense). Then if d(x, ∂Ω) > ε, the usual derivative in the direction xi of uε = ρε ∗ u(x) exists at x and is given by ∂xi uε (x) = ρε ∗ ∂xi u(x). Proof. We recall the formula (2.10). Then differentiating under the integral sign we have u(y)∂xi ρε (x − y) dy ∂xi uε = Ω =− u(y)∂yi (ρε (x − y)) dy Ω
= ∂yi u, ρε (x − ·) = ∂yi u(y)ρε (x − y) dy Ω
which completes the proof. Then we can show Proposition 2.12. Suppose that f is a function of R into R such that f ∈ C 1 (R),
f ∈ L∞ (R).
Then if u ∈ L1loc (Ω), ∂xi u ∈ L1loc (Ω) we have f (u) ∈ L1loc (Ω) and the derivative in the distributional sense is given by ∂xi f (u) = f (u)∂xi u ∈ L1loc (Ω). In particular if f (0) = 0, u ∈ H 1 (Ω) (resp. H01 (Ω)) implies f (u) ∈ H 1 (Ω) (resp. H01 (Ω)).
2.3. Sobolev Spaces
29
Proof. Consider uε = ρε ∗ u as in the previous proposition. By the chain rule for ε < dist(x, ∂Ω), f (uε ) has a derivative in the usual sense given by ∂xi (f (uε )) = f (uε )∂xi uε . Let ϕ ∈ D(Ω), ε < dist(Supp ϕ, ∂Ω). We have −f (uε ), ∂xi ϕ = f (uε )∂xi uε , ϕ.
(2.58)
When ε → 0 we have also for any compact subset of Ω |f (uε ) − f (u)| dx ≤ Sup |f | |uε − u| dx → 0, K K |f (uε )∂xi uε − f (u)∂xi u| dx K = |f (uε )(∂xi uε − ∂xi u) + ∂xi u(f (uε ) − f (u))| dx K ≤ Sup |f | |∂xi uε − ∂xi u| dx + |∂xi u| |f (uε ) − f (u)| dx → 0 K
K
by the Lebesgue theorem and Theorem 2.1. This shows the first part of the theorem by passing to the limit in (2.58). If u ∈ H 1 (Ω), by the formula giving the weak derivative we have of course f (u) ∈ H 1 (Ω). If u ∈ H01 (Ω) and if ϕn ∈ D(Ω) is a sequence such that ϕn → u in H01 (Ω), ϕn → u a.e., we have f (ϕn ) → f (u) in H01 (Ω) (just replace uε by ϕn in the inequalities above). Now f (ϕn ) ∈ H01 (Ω) as a C 1 -function with compact support which can be approximated by mollifyers. This completes the proof. Remark 2.5. If u ∈ H 1 Ω) one can drop the assumption f (0) = 0 when Ω is bounded. Indeed, in this case f (u) ∈ H 1 (Ω) since f (u) ∈ L2 (Ω) due to the inequality |f (u)| ≤ |f (u) − f (0)| + |f (0)| ≤ Sup |f ||u| + |f (0)|. We denote by u+ , u− the positive and negative part of a function. Recall that u+ = Max(u, 0) = u ∨ 0, u = u+ − u− ,
u− = Max(−u, 0) = −u ∨ 0, |u| = u+ + u− .
Then we have Theorem 2.13. Suppose that u ∈ L1loc (Ω), ∂xi u ∈ L1loc (Ω) (we mean here the derivative in the distributional sense) then in the distributional sense we have ∂xi u+ = χ{u>0} ∂xi u where χ{u>0} denotes the characteristic function of the set {u > 0}.
30
Chapter 2. A Survey of Essential Analysis
Proof. Let hε be a continuous function such that hε (x) = 0
x ≤ 0,
hε (x) = 1
Define Hε (x) =
x ≥ ε,
0 ≤ hε ≤ 1.
(2.59)
x
−∞
hε (s) ds.
Clearly Hε ∈ C 1 (R) and Hε = hε ∈ L∞ (Ω). Thus in the distributional sense we have (see Proposition 2.12) ∂xi Hε (u) = hε (u)∂xi u. This can be written as − Hε (u)∂xi ϕ dx = hε (u)∂xi uϕ dx
∀ ϕ ∈ D(Ω).
Passing to the limit we obtain − u+ ∂xi ϕ dx = χ{u>0} ∂xi uϕ dx
∀ ϕ ∈ D(Ω)
Ω
Ω
Ω
Ω
and the result follows. Remark 2.6. One has of course ∂xi u− = −χ{u<0} ∂xi u with an obvious notation for χ{u<0} . As a consequence we also have (1 − χ{u=0} )∂xi u = 0 namely ∂xi u = 0 a.e. on {u = 0} = { x ∈ Ω | u(x) = 0 }. As a corollary we have
Corollary 2.14. Suppose that u ∈ H 1 (Ω) (resp. H01 (Ω)) then u+ ∈ H 1 (Ω) (resp. H01 (Ω)) and we have in the distributional sense ∂xi u+ = χ{u>0} ∂xi u.
(2.60)
Proof. The formula (2.60) is an immediate consequence of the previous proposition. If now ϕn ∈ D(Ω) is such that ϕn → u
in H 1 (Ω),
choosing in (2.59) hε ∈ C ∞ (R) – which is always possible – we see that Hε (ϕn ) ∈ D(Ω) −→ Hε (u) ∈ H01 (Ω). When ε → 0, Hε (u) → u+ in H 1 (Ω) and thus u+ ∈ H01 (Ω). This completes the proof of the theorem.
2.3. Sobolev Spaces
31
We will need the following corollary. Corollary 2.15. Let f be a continuous piecewise C 1 -function such that f ∈ L∞ (R), f (0) = 0. If u ∈ H 1 (Ω) then f (u) ∈ H 1 (Ω) and ∂xi f (u) = f (u)∂xi u. If u ∈ H01 (Ω), f (u) ∈ H01 (Ω). (See also Remark 2.5.) Proof. Without loss of generality we can assume that f has only one jump at 0. Then one can write f (u) = f1 (u+ ) + f2 (u− ) where fi ∈ C 1 (R). The result follows then from the previous results.
Definition 2.8 (Compact mapping). A mapping T : A → B, A, B Banach spaces is said to be compact iff the image of a ball of A is relatively compact in B. Then we have: Theorem 2.16. Suppose that Ω is a bounded open set of Rn , then the canonical embedding (i.e., the identity map) is compact from H01 (Ω) into L2 (Ω). Proof. Consider
L = { v ∈ H01 (Ω) | |v|1,2 ≤ R }
(L is the ball of radius R of H01 (Ω)). Denote by Ω a bounded open set such that Ω ⊂⊂ Ω . (Careful, this is Ω which is compactly embedded in Ω !) Suppose the functions of L extended by 0 outside Ω. We have • L is bounded in L2 (Ω ). This follows from |v|2,Ω = |v|2,Ω ≤ |v|1,2 ≤ R. • The extension v of a function of L belongs to H01 (Ω ) – see the definition of H01 . Moreover for any h such that h ∈ Rn ,
|h| ≤ dist(Ω, ∂Ω )
we have v(x + h) − v(x) =
1
0
≤
0
d v(x + th) dt = dt 1
1
0
|∇v(x + th) · h|2 dt
∇v(x + th) · h dt 12 .
32
Chapter 2. A Survey of Essential Analysis
Squaring and integrating the resulting inequality on Ω we get 1 |v(x + h) − v(x)|2 dx ≤ |∇v(x + th)|2 |h|2 dt dx Ω
0
Ω
= |h|2 ≤ |h|2
0
0
1
1
≤ |h|2 R2 .
Ω
Ω
|∇v(x + th)|2 dx dt |∇v(y)|2 dy dt
The result follows then from Theorem 2.2. Note that the inequality above can be first established for functions in D(Ω) and then holds by density. The theorem is used in the following and in partial differential equations in general under the following form: Theorem 2.17. Let un be a bounded sequence of H01 (Ω)that is to say such that |un |1,2 ≤ R
∀n ∈ N
for some R > 0. Then there exists a subsequence of n, nk and u ∈ H01 (Ω) such that unk u in H01 (Ω). unk −→ u in L2 (Ω), Proof. There exists a subsequence nk and u ∈ H01 (Ω) such that unk u
in H01 (Ω).
From Theorem 2.16 there exists a subsequence of nk , say nk , such that unk −→ u
in L2 (Ω).
Of course u, u have to be the same and this completes the proof of the theorem. Remark 2.7. Theorem 2.16 can be extended to H 1 (Ω) and to more general spaces – provided we have a suitable extension of the functions of H 1 (Ω) in a neighbourhood of Ω. This is the case for instance if we assume Ω to be piecewise C 1 – see [21], [44].
Exercises 1. (Generalized H¨ older’s Inequality) Let fi , i = 1, . . . , k be functions in Lpi (Ω) with 1 1 1 + + ···+ = 1. p1 p2 pk
Exercises
Show that
33
k
i=1
fi ∈ L1 (Ω) and that k k fi ≤ |fi |pi ,Ω . i=1
1,Ω
2. We set η(t) =
i=1 1
e− t 0
t > 0, t ≤ 0.
Show that η is infinitely differentiable in R. Deduce that the function ρ defined by (2.4) belongs to D(Rn ). 3. Show that D(Ω) is an infinite-dimensional vector space on R. 4. Let H ∈ L1loc (R) be the function defined by 1 if x > 0 H(x) = 0 if x < 0. Show that the derivative H of H in D (R) is given by H = δ0 5. 6. 7. 8.
where δ0 denotes the Dirac mass at 0. Show that (2.36) holds. Let T ∈ D (R) such that T = 0. Show that T is constant. Show – for instance for Ω bounded – that the function constant equal to 1 belongs to H 1 (Ω) but not in H01 (Ω) (Hint: one can use (2.49).) Show that the two norms |u|2,Ω + |∂x1 u|2,Ω ,
|u|1,2
are not equivalent on H 1 (Ω). 9. Let Ω = (0, 1). Show that u → u(1) is a continuous linear form on H 1 (Ω) – i.e., belongs to H 1 (Ω)∗ – but is not a distribution (otherwise it would be the 0 one). 10. Let u be a Lipschitz continuous function in Ω. Show, when Ω is bounded, that u ∈ H 1 (Ω) and ∂xi u ∈ L∞ (Ω) ∀ i = 1, . . . , n. 11. Let u be a Lipschitz continuous function in Ω vanishing on ∂Ω. Show that u ∈ H01 (Ω). Show that H 1 (Ω) ∩ C0 (Ω) ⊂ H01 (Ω). (C0 (Ω) denotes the space of continuous functions vanishing on ∂Ω.)
34
Chapter 2. A Survey of Essential Analysis
12. Let u ∈ H 1 (Ω) where Ω is a domain in Rn , i.e., a connected open subset. Show that ∇u = 0 in Ω =⇒ u = Cst . 13. Let Ω be a bounded open set in R. Let p ∈ L2 (Ω) and T = p ∈ H −1 (Ω). Show that (see (2.56)) |T |∗ = |p|2,Ω . 14. Show that on H 1 (R) the two norms |u|1,2 are not equivalent.
,
|u |2,R
Chapter 3
Weak Formulation of Elliptic Problems Solving a partial differential equation is not an easy task. We explain here the weak formulation method which allows to obtain existence and uniqueness of a solution “in a certain sense” which coincides necessarily with the solution in the usual sense if it exists. First let us explain briefly in which context second-order elliptic partial differential equations do arise.
3.1 Motivation Let Ω be a domain of R3 . Suppose that u denotes a density of some quantity (population, heat, fluid. . . ) which diffuses in Ω at a constant rate, i.e., the velocity of the diffusion is independent of the time. It is well known in such a situation that the velocity of the diffusion of such quantity is proportional to the gradient of u that is to say one has if v is the diffusion velocity v = −a∇u
(3.1)
where a is some constant depending on the problem at hand. To understand the idea behind the formula, suppose that u is a temperature. Consider two neighbouring points x, x + hν where ν is a unit vector. The flow of temperature between these two points has a velocity proportional to the difference of temperature and inversely proportional to the distance between these points. In other words the component of v in the direction ν is given by u(x + hν) − u(x) , (3.2) h where a is some positive constant. The sign “−” takes into account the fact that if u(x + hν) > u(x) then the flow of heat will go in the opposite direction to ν. (The v · ν = −a
36
Chapter 3. Weak Formulation of Elliptic Problems
heat, the population. . . diffuses from high concentrations to low ones.) Passing to the limit in h we get v · ν = −a∇u(x) · ν. (3.3) This equality holding for every ν, one has clearly (3.1). (We refer the reader to [29] for a more substantial analysis.) Assuming (3.1) and – to simplify a = 1 – consider then a cube Q contained in Ω. If ∂Q denotes the different faces of Q the quantity diffusing outside of Q is given by ∂Q
v · n dσ(x)
where n denotes the outward normal to ∂Q – dσ the local element of surface on ∂Q. Since we suppose that the situation is steady, that is to say does not change with time this flow through ∂Q has to be compensated by an input f dx Q
where f (x) is the local input – of population, heat – brought into the system and we have v · n dσ(x) = f dx. (3.4) ∂Q
Q
Taking into account (3.1) with a = 1 we have ∇u · n dσ(x) = f dx. − ∂Q
Q
F2 x3 F1
0
x1 Figure 3.1.
x2
(3.5)
3.1. Motivation
37
Let us for instance evaluate the first integral above on the faces F1 , F2 given by Fi = { x = (x1 , x2 , x3 ) ∈ ∂Q | x1 = αi } One has
i = 1, 2.
−
∇u · n dσ(x) − ∇u · n dσ(x) F1 F2 ∂u ∂u (α1 , x2 , x3 ) − (α2 , x2 , x3 ) dx2 dx3 =− ∂x1 Π(F1 ) ∂x1
where Π(F1 ) denotes the orthogonal projection of F1 on the plane x2 , x3 . But the quantity above can also be written α2 ∂2u (x , x , x ) dx dx dx = − ∂x21 u(x) dx. − 1 2 3 1 2 3 2 (∂x ) 1 Π(F1 ) α1 Q Arguing for the other faces similarly we obtain from (3.5) 2 2 2 − (∂x1 u + ∂x2 u + ∂x3 u) dx = f dx, Q
i.e.,
Q
Q
−Δu dx =
Q
f dx
(3.6)
where Δ is the usual second-order operator defined by 3 ∂2u Δu = . ∂xi 2 i=1
Now the equality (3.6) is true for any Q and thus if for instance Δu and f are continuous this will force −Δu(x) = f (x)
∀ x ∈ Ω.
(3.7)
This equation is called the Laplace equation. To this equation is generally added a boundary condition. For instance u=0
on ∂Ω,
(3.8)
(in the case of diffusion of temperature, the temperature is prescribed on the boundary of the body – this is for instance the room temperature that we can normalize to 0) or ∂u = v · n = 0 on ∂Ω (3.9) ∂n (n is the outward unit normal to ∂Ω – the body is isolated and there is no flux of temperature through its boundary).
38
Chapter 3. Weak Formulation of Elliptic Problems
Thus we are lead to the problem of finding u such that −Δu = f in Ω, u=0 on ∂Ω.
(3.10)
This problem is called the “Dirichlet problem”. Or we could be lead to look for u solution to ⎧ ⎨−Δu = f in Ω, (3.11) ∂u ⎩ =0 on ∂Ω. ∂n This problem is a “Neumann problem”.
3.2 The weak formulation Suppose that we want to solve the Dirichlet problem. That is to say we give us an open subset of Rn with boundary ∂Ω and we would like to find a function u which satisfies −Δu = f in Ω, (3.12) u=0 on ∂Ω. Δ is the Laplace operator in Rn given by Δ=
n
∂x2i ,
(3.13)
i=1
¯ f is a function that, for the time being, we can assume continuous on Ω. 2 0 ¯ If u ∈ C (Ω) ∩ C (Ω) satisfies (3.12) pointwise one says that u is a “strong solution” of the problem (3.12) (C k (Ω) denotes the space of functions k times ¯ the space of continuous functions on Ω). ¯ Such a solution differentiable in Ω, C 0 (Ω) is not easy to find. One way to overcome the problem is to weaken our assumption on u. For instance one could only require the first equation of (3.12) to hold in the distributional sense. To proceed in this direction let us assume that u is a strong solution. Let ϕ ∈ D(Ω). Multiplying both sides of (3.12) by ϕ and integrating on Ω we get −Δuϕ dx = f ϕ dx. Ω
Ω
This can also be written n i=1
−∂x2i u, ϕ
= Ω
f ϕ dx.
(3.14)
(We have seen that for a C 2 -function the usual derivatives and the derivatives in the distributional sense coincide – see (2.32).) Due to the definition of the
3.2. The weak formulation
39
derivative in the sense of distributions this can also be written n ∂xi u, ∂xi ϕ = f ϕ dx Ω
i=1
or since ∂xi u is a function ∇u · ∇ϕ dx = f ϕ dx ∀ ϕ ∈ D(Ω). Ω
(3.15)
Ω
Thus a strong solution satisfies (3.15). Now one could replace the pointwise condition u = 0 on ∂Ω by u ∈ H01 (Ω) then one would be lead to find u such that ⎧ ⎨ u ∈ H01 (Ω), ⎩ ∇u · ∇v dx = f v dx ∀ v ∈ H01 (Ω), Ω
(3.16)
Ω
(if u ∈ H01 (Ω) then by density of D(Ω) in H01 (Ω), (3.15) holds for every function ϕ in H01 (Ω)). (3.16) is called the weak formulation of (3.12). If u is a strong solution and if u = 0 on ∂Ω pointwise implies that u ∈ H01 (Ω) then u will be solution to (3.16). If on the other hand the solution to (3.16) is unique it will be the strong solution we are looking for. Thus in case of uniqueness, (3.16) will deliver the strong solution to (3.12). Now it will also deliver a “generalized solution” when a strong solution does not exist. To see that we have reached this we can state: Theorem 3.1. Let f ∈ H −1 (Ω). If Ω is bounded in one direction there exists a unique u solution to ⎧ ⎨ u ∈ H01 (Ω), (3.17) ⎩ ∇u · ∇v dx = f, v ∀ v ∈ H01 (Ω). Ω
Proof. By Theorem 2.9, the scalar product on H01 (Ω) can be chosen to be ∇u · ∇v dx. Ω
Then (3.17) follows immediately from the Riesz representation theorem.
Remark 3.1. If for instance f ∈ L2 (Ω) – see (2.54) – then (3.17) is uniquely solvable. Now the pointwise equality (3.12) can fail – imagine a discontinuous f . Nevertheless we have found a good ersatz for the solution: we have obtained a solution in a generalized sense.
40
Chapter 3. Weak Formulation of Elliptic Problems
Remark 3.2. One should note that u minimizes also the functional 1 |∇v|2 dx − f, v J(v) = 2 Ω (see Theorem 1.5). The integral Ω
|∇v|2 dx
is sometimes called Dirichlet integral. In the same spirit as above we have Theorem 3.2. Let Ω be an arbitrary domain of Rn and f ∈ L2 (Ω). There exists a unique u solution to ⎧ 1 ⎨u ∈ H (Ω), (3.18) ⎩ ∇u · ∇v + uv dx = f v dx ∀ v ∈ H 1 (Ω). Ω
Ω
Proof. One has f v dx ≤ |f | |v| dx ≤ |f |2,Ω |v|2,Ω ≤ |f |2,Ω |v|1,2 , Ω
Ω
i.e.,
v →
Ω
f v dx
is a linear continuous form on H 1 (Ω). The result follows then from the Riesz representation theorem. Remark 3.3. We do not have of course u ∈ H01 (Ω). Indeed supposing Ω bounded f = 1 the solution is given by u = 1 ∈ / H01 (Ω) – see exercises. Now since D(Ω) ⊂ H 1 (Ω) we have from (3.18) n
⇐⇒
∂xi u, ∂xi ϕ + u, ϕ = f, ϕ
i=1 n
−∂x2i u, ϕ + u, ϕ = f, ϕ
∀ ϕ ∈ D(Ω) ∀ ϕ ∈ D(Ω),
i=1
i.e., u is solution of
−Δu + u = f
in D (Ω).
(3.19)
The boundary condition is here hidden. Suppose that ∂Ω, u are “smooth” then one has for v smooth ∇u · ∇v dx = div(v∇u) − vΔu dx Ω
Ω
Exercises
41
n (recall that div w = i=1 ∂xi wi for every vector w = (w1 , . . . , wn )). By the divergence formula – see [29], [44] – one has then ∂u v dσ(x) + ∇u · ∇v dx = −vΔu dx. Ω ∂Ω ∂n Ω Going back to (3.18) we have obtained ∂u v dσ(x) + (−Δu + u)v dx = f v dx, ∂Ω ∂n Ω Ω i.e., by (3.19), if (3.19) holds in the usual sense ∂u v dσ(x) = 0 ∀ v smooth. ∂n ∂Ω
(3.20)
In a weak sense we have
∂u = 0 on ∂Ω ∂n and (3.18) is the weak formulation of the “Neumann problem” ⎧ ⎨−Δu + u = f in Ω, ∂u ⎩ =0 on ∂Ω. ∂n
(3.21)
Remark 3.4. In (3.18) one can replace H 1 (Ω) by any closed subspace V of H 1 (Ω) to get existence and uniqueness of a u such that ⎧ ⎨u ∈ V, (3.22) ⎩ ∇u · ∇v + uv dx = f v dx ∀ v ∈ V. Ω
Ω
For instance if V = H01 (Ω), (3.22) is the weak formulation of the problem −Δu + u = f in Ω, u=0 on ∂Ω.
Exercises 1. Let h, f be two continuous functions in Ω. If h dx = f dx ∀ Q cube included in Ω Q
Q
show that h = f (this shows (3.7)).
42
Chapter 3. Weak Formulation of Elliptic Problems
2. Let Ω be a bounded open set of Rn and 1 V = v ∈ H (Ω) v dx = 0 . Ω
Show that V is a closed subspace of H 1 (Ω). Show that the unique solution u to (3.22) satisfies in the distributional sense −Δu + u = f − − f dx Ω
where −Ω f dx is the average of f on Ω defined as 1 − f dx = f (x) dx |Ω| Ω Ω (|Ω| denotes the measure of Ω). What is the boundary condition satisfied by u? 3. Let f ∈ L2 (Ω). Consider u the solution to −Δu = f in Ω, 1 u ∈ H0 (Ω). Let k ∈ R. Show that in the distributional sense −Δ(u ∨ k) ≤ f χ{u>k} (Consider as test function of two numbers.)
in Ω
u∨k−k + ε
∨ maximum of two numbers.
∧v, ε > 0, v ∈ H01 (Ω), v ≥ 0, ∧ minimum
Chapter 4
Elliptic Problems in Divergence Form The goal of this chapter is to introduce general elliptic problems extending those already addressed in the preceding chapter and putting them all in the same framework.
4.1 Weak formulation We suppose here that Ω is an open subset of Rn , n ≥ 1. Denote by A = A(x) a n × n matrix with entries aij , i.e., A(x) = (aij (x))i,j=1,...,n .
(4.1)
Suppose that aij ∈ L∞ (Ω) ∀ i, j = 1, . . . , n and A uniformly positive definite. In other words suppose that for some positive constants λ and Λ we have |A(x)ξ| ≤ Λ|ξ| 2
λ|ξ| ≤ A(x)ξ · ξ
∀ ξ ∈ Rn , n
∀ξ ∈ R ,
a.e. x ∈ Ω,
(4.2)
a.e. x ∈ Ω.
(4.3)
n
(In these formulas “·” denotes the canonical scalar product in R , |·| the associated norm and A(x)ξ denotes the vector of Rn obtained by multiplying A(x) by the column vector ξ.) Remark 4.1. The assumption (4.2) is equivalent to assume that the aij are uniformly bounded in Ω. Indeed A = Sup ξ=0
|Aξ| |ξ|
(4.4)
is a norm on the space of matrices which is equivalent to any other norm since the space of matrices is finite dimensional. In particular it is equivalent to A∞ = Max |aij | i,j
which establishes our claim.
(4.5)
44
Chapter 4. Elliptic Problems in Divergence Form
In addition we denote by a a function satisfying a ∈ L∞ (Ω),
a(x) ≥ 0
a.e. x ∈ Ω,
(4.6)
(this last assumption could be relaxed slightly – see Remark 4.8). Then we have Theorem 4.1. Let f ∈ H −1 (Ω). If (i) (ii)
or
Ω is bounded in one direction, a(x) ≥ a > 0 a.e. x ∈ Ω,
for a positive constant a, then there exists a unique weak solution to ⎧ 1 ⎨u ∈ H0 (Ω), ⎩ A(x)∇u · ∇v + a(x)uv dx = f, v ∀ v ∈ H01 (Ω).
(4.7) (4.8)
(4.9)
Ω
Proof. The proof is an easy consequence of the Lax–Milgram theorem. Indeed, consider H01(Ω) equipped with the scalar product defined by (2.40),

(u, v)1,2 = ∫Ω ∇u · ∇v + uv dx.

We have seen that H01(Ω) is a Hilbert space. Consider then

a(u, v) = ∫Ω A(x)∇u · ∇v + a(x)uv dx.   (4.10)
• a(u, v) is a continuous bilinear form on H01(Ω). The bilinearity is clear. For the continuity one remarks that

|a(u, v)| ≤ ∫Ω |A(x)∇u · ∇v| + a(x)|u| |v| dx
 ≤ ∫Ω |A(x)∇u| |∇v| + a(x)|u| |v| dx   (Cauchy–Schwarz)
 ≤ ∫Ω Λ|∇u| |∇v| + |a|∞ |u| |v| dx   (4.11)

where |a|∞ denotes the L∞(Ω)-norm of a. It follows that

|a(u, v)| ≤ Max{Λ, |a|∞} ∫Ω |∇u| |∇v| + |u| |v| dx
 ≤ Max{Λ, |a|∞} { |∇u|2,Ω |∇v|2,Ω + |u|2,Ω |v|2,Ω }   by the Cauchy–Schwarz inequality,
 ≤ Max{Λ, |a|∞} |u|1,2 |v|1,2,   by the Cauchy–Schwarz inequality in R2.
This completes the proof of our claim. Note that (4.11) shows at the same time that the function under the integral sign of (4.10) is integrable, and thus a(u, v) is well defined.

• a(u, v) is coercive on H01(Ω). Indeed one has

a(u, u) = ∫Ω A∇u · ∇u + au² dx ≥ ∫Ω λ|∇u|² + au² dx ≥
  λ|∇u|²_{2,Ω}   in case (i),
  Min{λ, a}|u|²_{1,2}   in case (ii).
The coerciveness of a(u, v) follows, since in case (i) one has for some constant c

|∇u|2,Ω ≥ c|u|1,2

(see Theorem 2.9). Since f ∈ H−1(Ω), the result follows then by the Lax–Milgram theorem (Theorem 1.5).

In the case of (ii), i.e., if

a(x) ≥ a > 0,   (4.12)

then one can replace in (4.9) H01(Ω) by any closed subspace of H1(Ω). More precisely we have

Theorem 4.2. Assume that (4.2), (4.3), (4.6) and (4.12) hold. Let V be a closed subspace of H1(Ω) and f ∈ V∗, where V∗ denotes the dual space of V. Then there exists a unique u solution to

u ∈ V,
∫Ω A∇u · ∇v + auv dx = ⟨f, v⟩  ∀ v ∈ V.   (4.13)
Proof. This is a consequence of the Lax–Milgram theorem since, as we have seen in the proof of Theorem 4.1, the form a(u, v) is bilinear, continuous and coercive on V when V is equipped with the |·|1,2-norm.

Remark 4.2. If f ∈ L2(Ω) then

⟨f, v⟩ = ∫Ω f v dx

defines a continuous linear form on any V as in Theorem 4.2, since by the Cauchy–Schwarz inequality we have

|⟨f, v⟩| ≤ |f|2 |v|2 ≤ |f|2 |v|1,2  ∀ v ∈ V.
Remark 4.3. One could replace (ii) or (4.12) by

a(x) ≥ 0,  a ≢ 0

(see [29]).

Remark 4.4. In the case where A is symmetric, i.e., A(x) = A(x)ᵀ, then for any ξ, η ∈ Rn one has

Aξ · η = ξ · Aᵀη = ξ · Aη

and a(u, v) is symmetric. It follows (see Theorem 1.5) that in Theorems 4.1, 4.2, u is the unique minimizer of the functional

J(v) = (1/2) ∫Ω A∇v · ∇v + av² dx − ⟨f, v⟩

on H01(Ω) and V respectively. Note also that the two theorems above extend results that we had already established in the previous chapter in the case A = Id. The terminology “elliptic” comes of course from (4.3), since for a matrix A satisfying (4.3) the set

E = { ξ ∈ Rn | Aξ · ξ = 1 }

is an ellipsoid – an ellipse in the case n = 2.

Remark 4.5. As we could allow a to degenerate, we can do the same for A(x) – i.e., we can relax (4.3). Indeed, in the case of Theorem 4.1, (i) for instance, all we need is the existence of a constant α > 0 such that

∫Ω A(x)∇u · ∇u + a(x)u² dx ≥ α|u|²_{1,2}  ∀ u ∈ H01(Ω).   (4.14)
As in the case of the Laplace operator – i.e., when A = Id – the problems above are weak formulations of elliptic partial differential equations in the usual or strong sense. This is what we would like to see now. Let us consider first the case of Theorem 4.1. Taking v = ϕ ∈ D(Ω) we obtain

Σ_{i=1}^n ∫Ω (A(x)∇u)_i ∂xi ϕ dx + ⟨a(x)u, ϕ⟩ = ⟨f, ϕ⟩,   (4.15)

where (A(x)∇u)_i denotes the ith component of the vector A(x)∇u. This can be written as

− Σ_{i=1}^n ⟨∂xi (A(x)∇u)_i, ϕ⟩ + ⟨a(x)u, ϕ⟩ = ⟨f, ϕ⟩,   (4.16)
i.e., u is a solution in the distributional sense of

− Σ_{i=1}^n ∂xi (A(x)∇u)_i + a(x)u = f   (4.17)

or, if we use the divergence notation,

− div(A(x)∇u) + a(x)u = f  in Ω.

(Recall that for a vector field w = (w1, . . . , wn), div w = Σ_{i=1}^n ∂xi wi.) Thus, since H01(Ω) is the space of functions vanishing on ∂Ω, the solution u of (4.9) is the weak solution to

− div(A(x)∇u) + a(x)u = f  in Ω,
u = 0  on ∂Ω.   (4.18)

To have something which looks more like an equation – i.e., which does not involve vectors – one can note that

(A(x)∇u)_i = Σ_{j=1}^n aij(x) ∂xj u,

and thus the first equation of (4.18) can be written

− Σ_{i=1}^n ∂xi ( Σ_{j=1}^n aij(x) ∂xj u ) + a(x)u = f
⇐⇒ − Σ_{i=1}^n Σ_{j=1}^n ∂xi (aij(x) ∂xj u) + a(x)u = f.

With the Einstein convention of repeated indices – i.e., one drops the sign Σ when an index is repeated – the system (4.18) takes the form

−∂xi (aij(x) ∂xj u) + a(x)u = f  in Ω,
u = 0  on ∂Ω.   (4.19)

We now turn to the interpretation of Theorem 4.2. For that we will need the divergence formula, which we recall here.

Theorem 4.3 (Divergence formula). Suppose that Ω is a “smooth” open subset of Rn with outward unit normal n. For any “smooth” vector field w in Ω we have

∫Ω div w dx = ∫∂Ω w · n dσ(x)   (4.20)

(dσ(x) is the area measure on ∂Ω).
Note that in this theorem we do not make precise what “smooth” means. Basically (4.20) holds when one can make sense of the different quantities occurring – n, dσ(x), the integrals, div w – see [11], [29], [44]. Note also that in the case of a simple domain such as a cube, the formula (4.20) can be easily established by arguing as we did between (3.5) and (3.6) with w = ∇u.

To interpret Theorem 4.2 in a strong form we consider only the case where V = H1(Ω), f ∈ L2(Ω). Taking then v = ϕ ∈ D(Ω) we derive as above that

− div(A(x)∇u) + au = f  in D′(Ω).   (4.21)

Now, of course, more test functions v are available in (4.13) and, assuming everything “smooth”, the second equation of (4.13) can be written as

∫Ω div(vA(x)∇u) − div(A(x)∇u)v + auv dx = ∫Ω f v dx.

Taking into account (4.21) we derive

∫Ω div(vA(x)∇u) dx = 0

and by the divergence formula

∫∂Ω A(x)∇u · n v dσ(x) = 0  ∀ v “smooth”.

Thus, if every computation can be performed, we end up with

A(x)∇u · n = 0  on ∂Ω.

Thus (4.13) is a weak formulation of the so-called Neumann problem

− div(A(x)∇u) + a(x)u = f  in Ω,
A(x)∇u · n = 0  on ∂Ω.   (4.22)

Recall that the first equation of (4.22) can also be written as in (4.19).

Remark 4.6. In the case where A(x) = Id, the second equation of (4.22) is

∂u/∂n = 0  on ∂Ω,

namely the normal derivative of u vanishes on ∂Ω. We already encountered this boundary condition in the preceding chapter. In the general case, and with the summation convention, the second equation above can be written

aij(x) ∂xj u n_i = 0  on ∂Ω

if n = (n1, . . . , nn). This expression is called the “conormal derivative of u”.
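To make the divergence-form problem (4.18)/(4.19) concrete, here is a minimal one-dimensional finite-difference sketch (an illustration, not from the book): it solves −(k(x)u′)′ + c(x)u = f on (0, 1) with u(0) = u(1) = 0, where the scalar coefficient k plays the role of the matrix A and c that of the coefficient a, using interface values of k and a tridiagonal (Thomas) solve:

```python
def solve_divergence_form_1d(k, c, f, m):
    """Finite-difference solution of -(k(x)u')' + c(x)u = f on (0,1),
    u(0) = u(1) = 0, on a uniform grid with m interior nodes."""
    h = 1.0 / (m + 1)
    x = [(i + 1) * h for i in range(m)]
    kp = [k(xi + h / 2.0) for xi in x]       # k at the right cell interface
    km = [k(xi - h / 2.0) for xi in x]       # k at the left cell interface
    lo = [-km[i] / h ** 2 for i in range(m)]
    di = [(km[i] + kp[i]) / h ** 2 + c(x[i]) for i in range(m)]
    up = [-kp[i] / h ** 2 for i in range(m)]
    rhs = [f(xi) for xi in x]
    for i in range(1, m):                    # Thomas algorithm: elimination
        w = lo[i] / di[i - 1]
        di[i] -= w * up[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * m
    u[-1] = rhs[-1] / di[-1]
    for i in range(m - 2, -1, -1):           # back substitution
        u[i] = (rhs[i] - up[i] * u[i + 1]) / di[i]
    return x, u
```

With k ≡ 1 and c ≡ 0 this reduces to the standard scheme for −u″ = f; for f ≡ 1 the discrete solution coincides at the nodes with the exact one, u(x) = x(1 − x)/2.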
Remark 4.7. In (4.13) if V is a finite-dimensional subspace of H 1 (Ω), f a linear form on V then there exists a unique solution u. This follows from the fact that a finite-dimensional subspace is automatically closed and a linear form on it automatically continuous. Similarly in (4.9) one can replace H01 (Ω) by any closed subspace V and get existence and uniqueness for any f ∈ V ∗ . The case of finite-dimensional subspaces is especially important in the so-called finite elements method to compute approximations of the solutions to problems (4.9) and (4.13). We refer the reader to Chapter 10.
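The finite-dimensional setting of Remark 4.7 is the basis of the finite element method. A minimal P1 (hat-function) Galerkin sketch in one dimension, for −u″ + u = f on (0, 1) with A = Id and a ≡ 1 (an illustration under these simplifying assumptions, not the book's Chapter 10 treatment; the load vector is lumped):

```python
import math

def galerkin_p1(f, m):
    """P1 Galerkin approximation of -u'' + u = f on (0,1), u(0) = u(1) = 0,
    on a uniform mesh with m interior hat functions."""
    h = 1.0 / (m + 1)
    x = [(i + 1) * h for i in range(m)]
    # Tridiagonal matrix of the form a(u,v) = \int u'v' + uv dx on hat functions:
    # stiffness entries -1/h, 2/h and mass entries h/6, 2h/3.
    lo = [-1.0 / h + h / 6.0] * m
    di = [2.0 / h + 2.0 * h / 3.0] * m
    up = [-1.0 / h + h / 6.0] * m
    rhs = [h * f(xi) for xi in x]            # lumped approximation of \int f phi_i
    for i in range(1, m):                    # Thomas algorithm
        w = lo[i] / di[i - 1]
        di[i] -= w * up[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * m
    u[-1] = rhs[-1] / di[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (rhs[i] - up[i] * u[i + 1]) / di[i]
    return x, u
```

For f ≡ 1 the exact solution is u(x) = 1 − cosh(x − 1/2)/cosh(1/2), and the Galerkin approximation converges to it as the mesh is refined.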
4.2 The weak maximum principle

The maximum principle has several aspects. One of them is that if u = u(f) is the solution to (4.9), for instance, then if f increases, so does u. First we have to make precise what we mean by f increasing. For that we have

Definition 4.1. Let T1, T2 ∈ D′(Ω). We say that

T1 ≤ T2   (4.23)

iff

⟨T1, ϕ⟩ ≤ ⟨T2, ϕ⟩  ∀ ϕ ∈ D(Ω), ϕ ≥ 0.   (4.24)

As we gave a meaning for a function of H1(Ω) to vanish on ∂Ω, we can give a meaning for such a function to be negative on ∂Ω. We have

Definition 4.2. Let u1, u2 ∈ H1(Ω). We say that

u1 ≤ u2  on ∂Ω

iff

(u1 − u2)+ ∈ H01(Ω).   (4.25)
We can now state

Theorem 4.4. Let A = A(x) be a matrix satisfying (4.2), (4.3). Let a ∈ L∞(Ω) be a nonnegative function. For u ∈ H1(Ω) we define −L as

−Lu = − div(A(x)∇u) + a(x)u.   (4.26)

Then we have: if u1, u2 ∈ H1(Ω) are such that

−Lu1 ≤ −Lu2  in D′(Ω),   (4.27)
u1 ≤ u2  on ∂Ω,   (4.28)

then

u1 ≤ u2  in Ω,

that is to say u1 is a.e. smaller than u2.
Proof. By (4.27), (4.24) we have

∫Ω A(x)∇u1 · ∇ϕ + au1ϕ dx ≤ ∫Ω A(x)∇u2 · ∇ϕ + au2ϕ dx

for any ϕ ∈ D(Ω), ϕ ≥ 0, which is also

∫Ω A(x)∇(u1 − u2) · ∇ϕ + a(u1 − u2)ϕ dx ≤ 0  ∀ ϕ ∈ D(Ω), ϕ ≥ 0.   (4.29)

Then we have the following claim.

Claim. Let u ∈ H01(Ω), u ≥ 0. Then there exists ϕn ∈ D(Ω), ϕn ≥ 0, such that ϕn → u in H1(Ω).

Proof of the claim: Since u ∈ H01(Ω), there exists ϕn ∈ D(Ω) such that

ϕn → u  in H01(Ω).

We also have

∫Ω |∇(ϕn+ − u)|² dx = ∫_{ϕn>0} |∇(ϕn − u)|² dx + ∫_{ϕn<0} |∇u|² dx
 ≤ ∫Ω |∇(ϕn − u)|² dx + ∫_{u>0} χ_{ϕn<0} |∇u|² dx,   (4.30)

see Remark 2.6. Up to a subsequence we have ϕn → u ≥ 0 a.e., i.e., χ_{ϕn<0} → 0 a.e. on {u > 0}. It follows from (4.30) that

ϕn+ → u.

(The whole sequence converges since the limit is unique.) Then it is enough to show that ϕn+ can be approximated by a nonnegative function of D(Ω); ρε ∗ ϕn+ for ε small enough is such a function. This completes the proof of the claim.

Returning to (4.29) we then have

∫Ω A∇(u1 − u2) · ∇v + a(u1 − u2)v dx ≤ 0  ∀ v ∈ H01(Ω), v ≥ 0.

Taking v = (u1 − u2)+ – see (4.25), (4.28) – we derive

∫Ω A∇(u1 − u2) · ∇(u1 − u2)+ + a((u1 − u2)+)² dx ≤ 0
⇐⇒ ∫Ω A∇(u1 − u2)+ · ∇(u1 − u2)+ + a((u1 − u2)+)² dx ≤ 0   (4.31)

(see (2.60)). It follows that (u1 − u2)+ = 0, which completes the proof (see Exercise 12, Chapter 2).
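The comparison property of Theorem 4.4 has an exact discrete counterpart, which can be observed numerically. The sketch below (an illustration, not from the book) applies Gauss–Seidel iteration to the one-dimensional discretization of −u″ + u = f with zero boundary values, for two right-hand sides with f1 ≤ f2, and checks u1 ≤ u2 componentwise:

```python
import math

def solve_gs(f, m, sweeps=5000):
    """Gauss-Seidel iteration for the discretized -u'' + u = f on (0,1),
    u(0) = u(1) = 0, with m interior nodes."""
    h = 1.0 / (m + 1)
    u = [0.0] * (m + 2)                      # includes the boundary zeros
    for _ in range(sweeps):
        for i in range(1, m + 1):
            x = i * h
            u[i] = (f(x) + (u[i - 1] + u[i + 1]) / h ** 2) / (2.0 / h ** 2 + 1.0)
    return u[1:-1]

m = 30
u1 = solve_gs(lambda x: math.sin(math.pi * x), m)          # f1
u2 = solve_gs(lambda x: math.sin(math.pi * x) + 0.5, m)    # f2 >= f1
```

Since the discrete operator is an M-matrix and every Gauss–Seidel update has nonnegative coefficients, the ordering f1 ≤ f2 is preserved at every sweep, which is the discrete analogue of the weak maximum principle.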
Remark 4.8. One can weaken slightly the assumption on a. Indeed, if λ1 is the first eigenvalue of the Dirichlet problem associated with A, namely if

λ1 = Inf_{u∈H01(Ω), u≠0} ∫Ω A∇u · ∇u dx / ∫Ω u² dx   (4.32)

(see also Chapter 9), then one can assume only a(x) > −λ1 a.e. x (cf. (4.31)).

For a fixed a, the remark above shows that the weak maximum principle holds provided Ω is located in a strip of width small enough or (cf. Paragraph 12.4) has a Lebesgue measure small enough – see (2.49). We can state that as the following corollary.

Corollary 4.5. Let A(x) be a matrix satisfying (4.2), (4.3) and a ∈ L∞(Ω). Suppose that Ω is located in a strip of width less than or equal to

(1/2) √(2λ/|a−|∞,Ω).

Then, with the notation of Theorem 4.4,

−Lu1 ≤ −Lu2  in D′(Ω),  u1 ≤ u2  on ∂Ω

implies

u1 ≤ u2  in Ω.

Proof. Going back to (4.31) we have

∫Ω A∇(u1 − u2)+ · ∇(u1 − u2)+ + (a+ − a−)((u1 − u2)+)² dx ≤ 0.

Thus it follows that

λ ∫Ω |∇(u1 − u2)+|² dx − ∫Ω a−((u1 − u2)+)² dx ≤ 0,

and since a− ≤ |a−|∞,Ω,

λ ∫Ω |∇(u1 − u2)+|² dx − |a−|∞,Ω ∫Ω ((u1 − u2)+)² dx ≤ 0.

Using (2.49) we deduce (a is here the half-width of the strip containing Ω!)

( λ/(2a²) − |a−|∞,Ω ) ∫Ω ((u1 − u2)+)² dx ≤ 0,

which implies (u1 − u2)+ = 0 when

(2a)² < 2λ/|a−|∞,Ω,

(2a) being the width of the strip containing Ω. This completes the proof of the corollary.
Another corollary is

Corollary 4.6. Under the assumptions of Theorem 4.4, let u ∈ H1(Ω) satisfy

−Lu ≤ 0  in D′(Ω),  u ≤ 0  on ∂Ω;   (4.33)

then we have

u ≤ 0  in Ω.   (4.34)

Proof. It is enough to apply Theorem 4.4 with u1 = u, u2 = 0.
Another consequence of Theorem 4.4 is that if u is a weak solution of a homogeneous elliptic equation in divergence form – i.e., if

− div(A(x)∇u) + a(x)u = 0   (4.35)

– then a positive maximum of u cannot exceed its value on the boundary. More precisely:

Corollary 4.7. Under the assumptions of Theorem 4.4, let u ∈ H1(Ω) be such that

−Lu ≤ 0  in D′(Ω),  u ≤ M  on ∂Ω,   (4.36)

where M is a nonnegative constant (an arbitrary constant if a ≡ 0). Then

u ≤ M.   (4.37)

Proof. Just remark that −L(u − M) = −Lu − aM ≤ 0 and apply Corollary 4.6 to u − M.
Remark 4.9. One should note that when the extension of Theorem 4.4 mentioned in Remark 4.8 holds, i.e., when one only assumes a(x) > −λ1 a.e. x, then the two corollaries above extend as well.

Finally, one should notice that the weak maximum principle also holds for the Neumann problem, and we have

Theorem 4.8. Let A = A(x) satisfy (4.2), (4.3) and a = a(x) ∈ L∞(Ω) satisfy (4.12). For f1, f2 ∈ L2(Ω), let ui, i = 1, 2, be the solutions to

ui ∈ H1(Ω),
∫Ω A(x)∇ui · ∇v + a(x)ui v dx = ∫Ω fi v dx  ∀ v ∈ H1(Ω)   (4.38)

(cf. Theorem 4.2 and Remark 4.2). Then if

f1 ≤ f2  in Ω   (4.39)

we have

u1 ≤ u2  in Ω.   (4.40)
Proof. Taking v = (u1 − u2)+ in (4.38) for i = 1, 2, we get by subtraction

∫Ω A(x)∇(u1 − u2) · ∇(u1 − u2)+ + a(x)((u1 − u2)+)² dx = ∫Ω (f1 − f2)(u1 − u2)+ dx ≤ 0  (by (4.39)).

This is also

∫Ω A(x)∇(u1 − u2)+ · ∇(u1 − u2)+ + a(x)((u1 − u2)+)² dx ≤ 0,

which leads to (4.40). This completes the proof of the theorem.
Remark 4.10. The result of Theorem 4.8 can be extended to problems of the type (4.13), the only assumptions needed being

(u1 − u2)+ ∈ V,  ⟨f1 − f2, (u1 − u2)+⟩ ≤ 0.

4.3 Inhomogeneous problems

By inhomogeneous we mean that the boundary condition is nonhomogeneous, that is to say, not identically equal to 0. Suppose for instance that one wants to solve the Dirichlet problem

− div(A(x)∇u) + a(x)u = f  in Ω,
u = g  on ∂Ω   (4.41)

(f ∈ H−1(Ω) and g is some function). As we found a way to give a sense to u = 0 on ∂Ω, we need to give a sense to the second equation of (4.41). The best way is to assume

u − g ∈ H01(Ω).

Thus let us also suppose

g ∈ H1(Ω).

Setting

U = u − g,

we see that if u is a solution to (4.41) – in a weak sense – then U is a solution to

− div(A(x)∇(U + g)) + a(x)(U + g) = f

in a weak sense. This can also be written as

− div(A(x)∇U) + a(x)U = f − a(x)g + div(A(x)∇g).   (4.42)
Clearly, due to our assumptions, the right-hand side of (4.42) belongs to H−1(Ω), and thus there exists a unique weak solution to

− div(A(x)∇U) + a(x)U = f − ag + div(A(x)∇g)  in Ω,
U ∈ H01(Ω).

Then the weak solution to (4.41) is given by u = U + g. More precisely, u = U + g is the unique function satisfying

u − g ∈ H01(Ω),
∫Ω A(x)∇u · ∇v + a(x)uv dx = ⟨f, v⟩  ∀ v ∈ H01(Ω).   (4.43)
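The lifting U = u − g has an exact discrete analogue, sketched below for the one-dimensional model −u″ = f on (0, 1) with u(0) = α, u(1) = β (an illustration, not from the book): g is taken as the linear interpolant of the boundary data, so g″ = 0 and U solves the homogeneous problem with the same right-hand side.

```python
def solve_inhomogeneous(f, alpha, beta, m):
    """Solve -u'' = f on (0,1), u(0)=alpha, u(1)=beta, via the lifting u = U + g."""
    h = 1.0 / (m + 1)
    x = [(i + 1) * h for i in range(m)]
    g = [alpha + (beta - alpha) * xi for xi in x]     # linear lifting, g'' = 0
    lo = [-1.0 / h ** 2] * m                          # homogeneous problem for U
    di = [2.0 / h ** 2] * m
    up = [-1.0 / h ** 2] * m
    rhs = [f(xi) for xi in x]
    for i in range(1, m):                             # Thomas algorithm
        w = lo[i] / di[i - 1]
        di[i] -= w * up[i - 1]
        rhs[i] -= w * rhs[i - 1]
    U = [0.0] * m
    U[-1] = rhs[-1] / di[-1]
    for i in range(m - 2, -1, -1):
        U[i] = (rhs[i] - up[i] * U[i + 1]) / di[i]
    return x, [U[i] + g[i] for i in range(m)]         # u = U + g
```

For f ≡ 0 the solution is the linear interpolant of the boundary values, and for f ≡ 2 it is x(1 − x) plus that interpolant; the scheme reproduces both exactly at the nodes.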
Exercises

1. Establish the divergence formula for a triangle.

2. Consider, for Γ0 a closed subset of Γ = ∂Ω,

W = { v ∈ C¹(Ω̄) | v = 0 on Γ0 }

(C¹(Ω̄) denotes the space of restrictions to Ω̄ of C¹(Rn)-functions), and V = W̄, where the closure is taken in the H1(Ω)-sense. Show that the solution to (4.13) for f ∈ L2(Ω) is a weak solution to

− div(A(x)∇u) + a(x)u = f  in Ω,
u = 0  on Γ0,
A(x)∇u · n = 0  on Γ \ Γ0.

Such a problem is called a mixed (Dirichlet–Neumann) problem.

3. Let T1, T2 ∈ D′(Ω). Show that

T1 ≤ T2,  T2 ≤ T1  =⇒  T1 = T2.

4. Let T ∈ D′(Ω), T ≥ 0. Show that for every compact K ⊂ Ω there exists a constant CK such that

|⟨T, ϕ⟩| ≤ CK Sup_K |ϕ|  ∀ ϕ ∈ D(Ω), Supp(ϕ) ⊂ K.
5. Define Li by

−Li u = − div(A(x)∇u) + ai u,  i = 1, 2.

Show that if u1, u2 ∈ H1(Ω) satisfy

−L1 u1 ≤ −L2 u2  in D′(Ω),
0 ≤ a2 ≤ a1  in Ω,
0 ∨ u1 ≤ u2  on ∂Ω,

then 0 ∨ u1 ≤ u2 in Ω.

6. Show that for a constant symmetric matrix A,

λ1 = Inf_{ξ≠0} (Aξ · ξ)/|ξ|²

is the smallest eigenvalue of A.

7. Let A = ( 1  1/2 ; 1/3  1 ). Show that

(7/12)|ξ|² ≤ Aξ · ξ  ∀ ξ ∈ R².
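A numerical sanity check for Exercise 7 (an illustration, assuming the garbled matrix reads A = (1, 1/2; 1/3, 1); with this reading Aξ · ξ = ξ1² + (5/6)ξ1ξ2 + ξ2², whose minimum on the unit circle is exactly 7/12, so the constant is sharp):

```python
import math

# Assumed reconstruction of the matrix in Exercise 7.
A = [[1.0, 0.5], [1.0 / 3.0, 1.0]]

def quad(t):
    """A xi . xi for xi = (cos t, sin t) on the unit circle."""
    x, y = math.cos(t), math.sin(t)
    ax = A[0][0] * x + A[0][1] * y
    ay = A[1][0] * x + A[1][1] * y
    return ax * x + ay * y

# Minimize the quadratic form over a fine sampling of the unit circle.
m = min(quad(2.0 * math.pi * k / 100000) for k in range(100000))
```

The minimum equals 1 − 5/12 = 7/12, attained where sin(2t) = −1, so the sampled minimum should sit just above 7/12.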
8. Let Ω1 ⊂ Ω2 be two bounded subsets of Rn. Let u1, u2 be the solutions to

ui ∈ H01(Ωi),
∫Ωi A∇ui · ∇v dx = ∫Ωi fi v dx,

for i = 1, 2, where fi ∈ L2(Ωi). Show that

f2 ≥ f1 ≥ 0  =⇒  u2 ≥ u1 ≥ 0,

i.e., for nonnegative f the solution of the Dirichlet problem increases with the size of the domain.
We suppose that An → A a.e. in Ω (i.e., the convergence holds entry by entry). Show that un → u in H01(Ω), where u is the solution to

u ∈ H01(Ω),
∫Ω A(x)∇u · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω).

11. Show that a finite-dimensional subspace of H1(Ω) is closed in H1(Ω) and that any linear form on it is automatically continuous.
Chapter 5
Singular Perturbation Problems

This chapter is devoted to the study of problems with small diffusion velocity. One is mainly concerned with the asymptotic behaviour of the solution of such problems when the diffusion velocity approaches 0. Let us explain the situation on a classical example (see [68], [69]).

5.1 A prototype of a singular perturbation problem

Let Ω be a bounded open subset of Rn. For f ∈ H−1(Ω), ε > 0, consider uε the weak solution to

−εΔuε + uε = f  in Ω,
uε ∈ H01(Ω).   (5.1)

It follows from Theorem 4.1 with A = ε Id that a solution to this problem exists and is unique. The question is then to determine what happens when ε → 0. Passing to the limit formally one expects that

uε → u0 = f.   (5.2)

Of course this cannot hold in an arbitrary norm. For instance, if f ∉ H01(Ω) one can never have uε → f in H01(Ω). Let us first prove:
Theorem 5.1. Let uε be the solution to (5.1). Then we have

uε → f  in H−1(Ω).   (5.3)

Proof. We suppose H01(Ω) equipped with the norm

|∇v|2,Ω.   (5.4)

If g ∈ H−1(Ω) then, by the Riesz representation theorem, there exists a unique u ∈ H01(Ω) such that

∫Ω ∇u · ∇v dx = ⟨g, v⟩  ∀ v ∈ H01(Ω)
and

|g|H−1(Ω) = |∇u|2,Ω

(see (1.19)) – we denote by |g|H−1(Ω) the strong dual norm of g in H−1(Ω), defined as

|g|H−1(Ω) = Sup_{v∈H01(Ω), v≠0} |⟨g, v⟩| / |∇v|2,Ω.

Since (5.1) is meant in a weak sense, we have

⟨f − uε, v⟩ = ∫Ω ∇(εuε) · ∇v dx  ∀ v ∈ H01(Ω)

and thus

|f − uε|H−1(Ω) = |∇(εuε)|2,Ω.   (5.5)

As we just saw:

∫Ω ε∇uε · ∇w + uε w dx = ⟨f, w⟩  ∀ w ∈ H01(Ω).   (5.6)

Choosing w = εuε, we get

|∇(εuε)|²2,Ω + ε|uε|²2,Ω = ⟨f, εuε⟩ ≤ |f|H−1(Ω) |∇(εuε)|2,Ω.   (5.7)

From this we derive that

|∇(εuε)|²2,Ω ≤ |f|H−1(Ω) |∇(εuε)|2,Ω

and thus

|∇(εuε)|2,Ω ≤ |f|H−1(Ω).

This shows that εuε is bounded in H01(Ω) independently of ε. So, up to a subsequence, there exists v0 ∈ H01(Ω) such that

εuε ⇀ v0  in H01(Ω).   (5.8)

From (5.7) we derive then that

ε|uε|²2,Ω ≤ |f|²H−1(Ω).

Thus

|εuε|²2,Ω = ε · ε|uε|²2,Ω → 0.

This means that

εuε → 0  in L2(Ω)   (5.9)

and also in D′(Ω). Since the convergence (5.8) implies convergence in D′(Ω), we have v0 = 0, and by uniqueness of the limit it follows that εuε ⇀ 0 in H01(Ω). From (5.7) we derive then

|∇(εuε)|²2,Ω ≤ ⟨f, εuε⟩ → 0.

By (5.5) this completes the proof of the theorem.
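The boundary-layer behaviour behind this convergence can be made completely explicit in one dimension (a worked illustration, not from the book; see also Exercise 3 at the end of the chapter): for Ω = (0, 1) and f ≡ 1, the solution of (5.1) is uε(x) = 1 − cosh((x − 1/2)/√ε)/cosh(1/(2√ε)), which equals f away from the boundary but drops to 0 in layers of width ~√ε near the endpoints.

```python
import math

def u_eps(x, eps):
    """Exact solution of -eps*u'' + u = 1 on (0,1), u(0) = u(1) = 0."""
    s = math.sqrt(eps)
    return 1.0 - math.cosh((x - 0.5) / s) / math.cosh(0.5 / s)

def l2_error_to_limit(eps, n=20000):
    """Trapezoid-rule approximation of |u_eps - 1|_{L^2(0,1)}."""
    h = 1.0 / n
    vals = [(u_eps(i * h, eps) - 1.0) ** 2 for i in range(n + 1)]
    return math.sqrt(h * (sum(vals) - 0.5 * vals[0] - 0.5 * vals[-1]))
```

The L² distance to the limit f ≡ 1 behaves like ε^{1/4} (the layers have width √ε and height 1), so it tends to 0 even though uε never converges to 1 uniformly or in H01.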
The convergence we just obtained is a very weak convergence; however, it will always hold. If we wish to have convergence of uε in L2(Ω), one needs to assume f ∈ L2(Ω); for convergence in H01(Ω) one needs f ∈ H01(Ω), and so on. Let us examine the convergence in L2(Ω).

Theorem 5.2. Suppose that f ∈ L2(Ω). Then if uε is the solution to (5.1) we have

uε → f  in L2(Ω).   (5.10)

Proof. The weak formulation of (5.1) is

∫Ω ε∇uε · ∇v + uε v dx = ∫Ω f v dx  ∀ v ∈ H01(Ω).   (5.11)

Taking v = uε, it follows that

ε|∇uε|²2,Ω + |uε|²2,Ω = ∫Ω f uε dx ≤ |f|2,Ω |uε|2,Ω

by the Cauchy–Schwarz inequality. Thus

|uε|2,Ω ≤ |f|2,Ω,  ε|∇uε|²2,Ω ≤ |f|²2,Ω.   (5.12)

Since uε is bounded in L2(Ω) independently of ε, there exists u0 ∈ L2(Ω) such that, up to a subsequence,

uε ⇀ u0  in L2(Ω).   (5.13)

(In fact at this stage, by Theorem 5.1, we already know that uε ⇀ f in L2(Ω), but we give here a proof of Theorem 5.2 independent of Theorem 5.1.) To identify u0 we go back to (5.11), which we write

√ε ∫Ω ∇(√ε uε) · ∇v dx + ∫Ω uε v dx = ∫Ω f v dx  ∀ v ∈ H01(Ω).

Passing to the limit – using (5.12) – we derive

∫Ω u0 v dx = ∫Ω f v dx  ∀ v ∈ H01(Ω).

By density of H01(Ω) (or D(Ω)) in L2(Ω) we get

∫Ω u0 v dx = ∫Ω f v dx  ∀ v ∈ L2(Ω),

that is to say u0 = f, and by uniqueness of the possible limit the whole sequence uε satisfies (5.13) which, we recall, could be derived from Theorem 5.1. Next we remark that

|uε − f|²2,Ω = |uε|²2,Ω − 2(uε, f)2,Ω + |f|²2,Ω ≤ 2|f|²2,Ω − 2(uε, f)2,Ω   (5.14)

by (5.12).
(·, ·)2,Ω denotes the scalar product in L2(Ω). Letting ε → 0 we see that the right-hand side of (5.14) goes to 0 and we get uε → f in L2(Ω). This completes the proof of the theorem.

To end this section we consider the case where f ∈ Lp(Ω), which leads to interesting techniques since we are no longer in a Hilbert space framework. So let us show:

Theorem 5.3. Suppose that f ∈ Lp(Ω), 2 ≤ p < +∞. Then if uε is the solution to (5.1),

uε ∈ Lp(Ω),  uε → f  in Lp(Ω).   (5.15)

Proof. As before we have

ε ∫Ω ∇uε · ∇v dx + ∫Ω uε v dx = ∫Ω f v dx  ∀ v ∈ H01(Ω).   (5.16)

The idea is to choose

v = |uε|^{p−2} uε

to get, for instance, the Lp(Ω)-norm of uε in the second integral. However, one cannot do that directly, since this function is not a priori in H01(Ω), and one has to work a little bit. One introduces the approximation of the function |x|^{p−2}x given by

γn(x) = n^{p−1} if x ≥ n,  |x|^{p−2}x if |x| ≤ n,  −n^{p−1} if x ≤ −n.   (5.17)

Then – see Corollary 2.15 – one has for every n > 0, γn(uε) ∈ H01(Ω). Inserting this function in (5.16) we get

ε ∫Ω ∇uε · ∇uε γn′(uε) dx + ∫Ω uε γn(uε) dx = ∫Ω f γn(uε) dx.
γn
Ω
Ω
≥ 0 we obtain uε γn (uε ) dx ≤ f γn (uε ) dx ≤ |f |p,Ω |γn (uε )|p ,Ω Ω
Ω
(by H¨ older’s inequality). Let us set δn (x) = |x| ∧ n where ∧ denotes the minimum of two numbers. The inequality above implies that p−1 |δn (uε )|pp,Ω = δn (uε )p dx ≤ uε γn (uε )dx ≤ |f |p,Ω |γn (uε )|p ,Ω ≤ |f |p,Ω |δn (uε )|p,Ω . Ω
Ω
5.2. Anisotropic singular perturbation problems
It follows that
δn (uε )p dx ≤
Ω
61
Ω
|f |p dx.
Letting n → +∞, by the monotone convergence theorem we get

∫Ω |uε|^p dx ≤ ∫Ω |f|^p dx   (5.18)

and thus uε ∈ Lp(Ω) with |uε|p,Ω ≤ |f|p,Ω. We already know that uε → f in H−1(Ω), uε ⇀ f in L2(Ω); thus we have

uε ⇀ f  in Lp(Ω)-weak.   (5.19)

Now from (5.18) we deduce

lim sup_{ε→0} ∫Ω |uε|^p dx ≤ ∫Ω |f|^p dx   (5.20)

and from the weak lower semi-continuity of the norm in Lp(Ω)

lim inf_{ε→0} ∫Ω |uε|^p dx ≥ ∫Ω |f|^p dx.   (5.21)

(A norm is a continuous, convex function and is thus also weakly lower semi-continuous – see [21].) (5.20) and (5.21) imply that

lim_{ε→0} ∫Ω |uε|^p dx = ∫Ω |f|^p dx.
Together with (5.19) this leads to (5.15) (see exercises) and completes the proof of the theorem.
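The truncations γn and δn used in this proof are elementary to implement, and the pointwise relations behind the key estimate – |γn(x)| = δn(x)^{p−1} and x γn(x) ≥ δn(x)^p – can be checked numerically (a small illustration, not part of the proof):

```python
def gamma(x, n, p):
    """Truncated approximation of |x|**(p-2) * x, cf. (5.17)."""
    if x >= n:
        return n ** (p - 1)
    if x <= -n:
        return -(n ** (p - 1))
    return abs(x) ** (p - 2) * x

def delta(x, n):
    """delta_n(x) = |x| ∧ n (minimum of |x| and n)."""
    return min(abs(x), n)
```

For |x| ≤ n both relations hold with equality; for |x| > n the product x γn(x) = |x| n^{p−1} strictly dominates δn(x)^p = n^p.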
5.2 Anisotropic singular perturbation problems

Let Ω be a bounded domain in R2 and f a function such that, for instance,

f = f(x1, x2) ∈ L2(Ω).   (5.22)

Then for every ε > 0 there exists a unique uε solution to

−ε²∂²x1 uε − ∂²x2 uε = f  in Ω,
uε = 0  on ∂Ω.   (5.23)
This is what we call an anisotropic singular perturbation problem, since the diffusion velocity (see Chapter 3) is very small in the x1 direction. Of course (5.23) is understood in a weak sense, namely uε satisfies

uε ∈ H01(Ω),
∫Ω ε²∂x1 uε ∂x1 v + ∂x2 uε ∂x2 v dx = ∫Ω f v dx  ∀ v ∈ H01(Ω).   (5.24)

We would like to study the behaviour of uε when ε goes to 0. Formally, at the limit, if u0 denotes the limit of uε when ε → 0, we have

∫Ω ∂x2 u0 ∂x2 v dx = ∫Ω f v dx  ∀ v ∈ H01(Ω).   (5.25)
Even though we can show that u0 ∈ H01(Ω), the bilinear form on the left-hand side of (5.25) is not coercive on H01(Ω) and the problem (5.25) is not well posed. Let us denote by Πx1(Ω) the projection of Ω on the x1-axis, defined by

Πx1(Ω) = { (x1, 0) | ∃ x2 s.t. (x1, x2) ∈ Ω }.   (5.26)

Then for every x1 ∈ Πx1(Ω) = ΠΩ – we drop the x1 index for simplicity – we denote by Ωx1 the section of Ω above x1, i.e.,

Ωx1 = { x2 | (x1, x2) ∈ Ω }.   (5.27)

The closest problem to (5.25) that is well defined and suitable for the limit is the following.

Figure 5.1 (a domain Ω in the (x1, x2)-plane, with a point x1 of the projection ΠΩ and the section Ωx1 above it).
For almost all x1 – or for every x1 ∈ ΠΩ if f is a smooth function – we have

f(x1, ·) ∈ L2(Ωx1)   (5.28)

(this is a well-known result of integration). Then there exists a unique u0 = u0(x1, ·) solution to

u0 ∈ H01(Ωx1),
∫Ωx1 ∂x2 u0 ∂x2 v dx2 = ∫Ωx1 f(x1, ·)v dx2  ∀ v ∈ H01(Ωx1).   (5.29)

Note that the existence and uniqueness of a solution to (5.29) is an easy consequence of the Lax–Milgram theorem, since

( ∫Ωx1 (∂x2 v)² dx2 )^{1/2}

is a norm on H01(Ωx1). What we would like to show now is the convergence of uε toward the solution u0 of (5.29), for instance in L2(Ω). Instead of doing that on this simple problem, we show how it can be embedded relatively simply in a more general class of problems. However, on a first reading, the reader can just apply the technique below to this simple example; it will help him to catch the main ideas.

Let Ω be a bounded open subset of Rn. We denote by x = (x1, . . . , xn) the points in Rn and use the decomposition

x = (X1, X2),  X1 = (x1, . . . , xp),  X2 = (xp+1, . . . , xn).   (5.30)

With this notation we set, with T denoting transposition,

∇u = (∂x1 u, . . . , ∂xn u)ᵀ = ( ∇X1 u ; ∇X2 u )   (5.31)

with

∇X1 u = (∂x1 u, . . . , ∂xp u)ᵀ,  ∇X2 u = (∂xp+1 u, . . . , ∂xn u)ᵀ.   (5.32)
We denote by A = (aij(x)) an n × n matrix such that

aij ∈ L∞(Ω)  ∀ i, j = 1, . . . , n,   (5.33)

and such that for some λ > 0 we have

λ|ξ|² ≤ Aξ · ξ  ∀ ξ ∈ Rn, a.e. x ∈ Ω.   (5.34)

We decompose A into four blocks by writing

A = ( A11  A12 ; A21  A22 )   (5.35)
where A11, A22 are respectively p × p and (n − p) × (n − p) matrices. We then set for ε > 0

Aε = Aε(x) = ( ε²A11  εA12 ; εA21  A22 ).   (5.36)

For ξ ∈ Rn, if we set ξ = (ξ̄1, ξ̄2) where ξ̄1 = (ξ1, . . . , ξp)ᵀ, ξ̄2 = (ξp+1, . . . , ξn)ᵀ, we have for a.e. x ∈ Ω and every ξ ∈ Rn

Aεξ · ξ = Aξε · ξε ≥ λ|ξε|² = λ{ε²|ξ̄1|² + |ξ̄2|²},   (5.37)

where we have set ξε = (εξ̄1, ξ̄2). Thus Aε satisfies

Aεξ · ξ ≥ λ(ε² ∧ 1)|ξ|²  ∀ ξ ∈ Rn, a.e. x ∈ Ω.   (5.38)

(∧ denotes the minimum of two numbers.) It follows that Aε is positive definite and for f ∈ H−1(Ω) there exists a unique uε solution to

uε ∈ H01(Ω),
∫Ω Aε∇uε · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω).   (5.39)

Clearly uε is the solution of a problem generalizing (5.24). We would then like to study the behavior of uε when ε → 0. Let us denote by ΠX1 the orthogonal projection from Rn onto the space X2 = 0. For any X1 ∈ ΠX1(Ω) := ΠΩ we denote by ΩX1 the section of Ω above X1, defined as

ΩX1 = { X2 | (X1, X2) ∈ Ω }.

To avoid unnecessary complications we will suppose that

f ∈ L2(Ω).   (5.40)

Then for a.e. X1 ∈ ΠΩ we have f(X1, ·) ∈ L2(ΩX1) and there exists a unique u0 = u0(X1, ·) solution to

u0(X1, ·) ∈ H01(ΩX1),
∫ΩX1 A22∇X2 u0(X1, X2) · ∇X2 v(X2) dX2 = ∫ΩX1 f(X1, X2)v(X2) dX2  ∀ v ∈ H01(ΩX1).   (5.41)

Clearly u0 is the natural candidate for the limit of uε. Indeed we have

Theorem 5.4. Under the assumptions above, if uε is the solution to (5.39), we have

uε → u0,  ∇X2 uε → ∇X2 u0,  ε∇X1 uε → 0  in L2(Ω),   (5.42)

where u0 is the solution to (5.41).
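Before the proof, the anisotropic ellipticity bound (5.38), which drives all the estimates below, can be checked numerically on a sample matrix (an illustration; the 3 × 3 matrix and the split p = 1 are arbitrary choices):

```python
import math, random

# A sample symmetric positive definite matrix A (arbitrary), split with p = 1.
A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 3.0]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def lambda_min(M, iters=500):
    """Smallest eigenvalue of symmetric M via power iteration on c*I - M."""
    n, c = len(M), 10.0
    S = [[(c if i == j else 0.0) - M[i][j] for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = matvec(S, v)
        nw = math.sqrt(dot(w, w))
        v = [t / nw for t in w]
    return c - dot(matvec(S, v), v)

def A_eps(eps):
    """The scaled matrix (5.36): eps^2*A11, eps*A12, eps*A21, A22 (here p = 1)."""
    s = [[eps * eps, eps, eps], [eps, 1.0, 1.0], [eps, 1.0, 1.0]]
    return [[s[i][j] * A[i][j] for j in range(3)] for i in range(3)]

lam = lambda_min(A)
```

Sampling random vectors ξ then confirms Aεξ · ξ ≥ λ(ε² ∧ 1)|ξ|², with λ the smallest eigenvalue of A, exactly as (5.37)–(5.38) predict.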
(In these convergences the vectorial convergence in L2(Ω) means convergence component by component.)

Proof. Let us take v = uε in (5.39). By (5.37) we derive

λ ∫Ω ε²|∇X1 uε|² + |∇X2 uε|² dx ≤ ⟨f, uε⟩ ≤ |f|2,Ω |uε|2,Ω.   (5.43)

Since Ω is bounded, by the Poincaré inequality we have, for some constant C independent of ε,

|v|2,Ω ≤ C|∇X2 v|2,Ω  ∀ v ∈ H01(Ω).   (5.44)

From (5.43) we then obtain

λ ∫Ω ε²|∇X1 uε|² + |∇X2 uε|² dx ≤ C|f|2,Ω |∇X2 uε|2,Ω.   (5.45)

Dropping the first term in this inequality we get

λ|∇X2 uε|²2,Ω ≤ C|f|2,Ω |∇X2 uε|2,Ω

and thus

|∇X2 uε|2,Ω ≤ C|f|2,Ω/λ.

Using this in (5.45) we end up with

∫Ω ε²|∇X1 uε|² + |∇X2 uε|² dx ≤ C²|f|²2,Ω/λ².   (5.46)

Thus – due to (5.44) – we deduce that

uε, |ε∇X1 uε|, |∇X2 uε| are bounded in L2(Ω),   (5.47)

and this of course independently of ε. It follows that there exist u0 ∈ L2(Ω), u1 ∈ (L2(Ω))p, u2 ∈ (L2(Ω))n−p such that – up to a subsequence –

uε ⇀ u0,  ε∇X1 uε ⇀ u1,  ∇X2 uε ⇀ u2  in “L2(Ω)”.

(u1, u2 are vectors with components in L2(Ω); the convergence is meant component by component.) Of course the weak convergence in L2(Ω) implies convergence in D′(Ω), and by the continuity of derivation in D′(Ω) we deduce that

uε ⇀ u0,  ε∇X1 uε ⇀ 0,  ∇X2 uε ⇀ ∇X2 u0  in L2(Ω).   (5.48)
We then go back to the equation satisfied by uε, which we expand using the different blocks of A. This gives

∫Ω ε²A11∇X1 uε · ∇X1 v dx + ∫Ω εA12∇X2 uε · ∇X1 v dx + ∫Ω εA21∇X1 uε · ∇X2 v dx + ∫Ω A22∇X2 uε · ∇X2 v dx = ∫Ω f v dx  ∀ v ∈ H01(Ω).

Passing to the limit in each term using (5.48) we get

∫Ω A22∇X2 u0 · ∇X2 v dx = ∫Ω f v dx  ∀ v ∈ H01(Ω).   (5.49)

At this point we do not know yet if, for a.e. X1 ∈ ΠΩ, we have

u0(X1, ·) ∈ H01(ΩX1).   (5.50)

To see this – and more – one remarks first that taking v = uε in (5.49) and passing to the limit we get

∫Ω A22∇X2 u0 · ∇X2 u0 dx = ∫Ω f u0 dx.   (5.51)

Next we compute

Iε = ∫Ω Aε ( ∇X1 uε ; ∇X2(uε − u0) ) · ( ∇X1 uε ; ∇X2(uε − u0) ) dx.   (5.52)

We get

Iε = ∫Ω ε²A11∇X1 uε · ∇X1 uε dx + ∫Ω εA12∇X2(uε − u0) · ∇X1 uε dx
  + ∫Ω εA21∇X1 uε · ∇X2(uε − u0) dx + ∫Ω A22∇X2(uε − u0) · ∇X2(uε − u0) dx.

Using (5.39) with v = uε, we obtain

Iε = ∫Ω f uε dx − ∫Ω εA12∇X2 u0 · ∇X1 uε dx − ∫Ω εA21∇X1 uε · ∇X2 u0 dx
  − ∫Ω A22∇X2 u0 · ∇X2 uε dx − ∫Ω A22∇X2 uε · ∇X2 u0 dx + ∫Ω A22∇X2 u0 · ∇X2 u0 dx.
Passing to the limit in ε we get

lim_{ε→0} Iε = ∫Ω f u0 dx − ∫Ω A22∇X2 u0 · ∇X2 u0 dx = 0.

Using the coerciveness assumption we have (see (5.52))

λ ∫Ω ε²|∇X1 uε|² + |∇X2(uε − u0)|² dx ≤ Iε.

It follows that

ε∇X1 uε → 0,  ∇X2 uε → ∇X2 u0  in L2(Ω).

Now we also have

∫ΠΩ ∫ΩX1 |∇X2(uε − u0)|² dX2 dX1 → 0.   (5.53)

It follows that for almost every X1

∫ΩX1 |∇X2(uε − u0)|² dX2 → 0.

Since

( ∫ΩX1 |∇X2 v|² dX2 )^{1/2}

is a norm on H01(ΩX1) and uε(X1, ·) ∈ H01(ΩX1), we have

u0(X1, ·) ∈ H01(ΩX1),

and this for almost every X1. Using then the Poincaré inequality implies

∫ΩX1 |uε − u0|² dX2 ≤ C ∫ΩX1 |∇X2(uε − u0)|² dX2
 =⇒ ∫Ω |uε − u0|² dx ≤ C ∫Ω |∇X2(uε − u0)|² dx → 0

(by (5.53)), and thus

uε → u0  in L2(Ω).   (5.54)
All this is up to a subsequence. If we can identify u0 uniquely, then all the convergences above will hold for the whole sequence. For this purpose recall first that

u0(X1, ·) ∈ H01(ΩX1).   (5.55)

One can cover Ω by a countable family of open sets of the form

Ui × Vi ⊂ Ω,  i ∈ N,

where Ui, Vi are open subsets of Rp, Rn−p respectively. One can even choose Ui, Vi to be hypercubes. Then choosing ϕ ∈ H01(Vi) we derive from (5.49)

∫Ui η(X1) { ∫Vi A22(X1, X2)∇X2 u0(X1, X2) · ∇X2 ϕ(X2) dX2 } dX1
 = ∫Ui η(X1) { ∫Vi f(X1, X2)ϕ(X2) dX2 } dX1  ∀ η ∈ D(Ui),

since ηϕ ∈ H01(Ω). Thus there exists a set of measure zero, N(ϕ), such that

∫Vi A22(X1, X2)∇X2 u0(X1, X2) · ∇X2 ϕ(X2) dX2 = ∫Vi f(X1, X2)ϕ(X2) dX2   (5.56)

for all X1 ∈ Ui \ N(ϕ). Denote by ϕn a Hilbert basis of H01(Vi). Then (5.56) holds (replacing ϕ by ϕn) for all X1 such that X1 ∈ Ui \ Ni(ϕn), where Ni(ϕn) is a set of measure 0. Thus for X1 ∈ Ui \ ∪n Ni(ϕn) we have (5.56) for any ϕ ∈ H01(Vi); this follows easily from the density in H01(Vi) of the linear combinations of the ϕn.

Let us then choose X1 ∈ ΠΩ \ ∪i ∪n Ni(ϕn) (note that ∪i ∪n Ni(ϕn) is a set of measure 0). Let ϕ ∈ D(ΩX1). If K denotes the support of ϕ, we have clearly K ⊂ ∪i Vi, and thus K can be covered by a finite number of the Vi, which for simplicity we will denote by V1, . . . , Vk. Using a partition of unity (see [86], [96] and the exercises) there exists ψi ∈ D(Vi) such that

Σ_{i=1}^k ψi = 1 on K.
By (5.56) we derive

∫K A22∇X2 u0 · ∇X2 ϕ dX2 = Σi ∫K A22∇X2 u0 · ∇X2(ψiϕ) dX2
 = Σi ∫Vi A22∇X2 u0 · ∇X2(ψiϕ) dX2
 = Σi ∫Vi f ψiϕ dX2
 = ∫K f ϕ dX2.

This is also

∫ΩX1 A22∇X2 u0 · ∇X2 ϕ dX2 = ∫ΩX1 f ϕ dX2  ∀ ϕ ∈ D(ΩX1),

and thus u0 is the unique solution to (5.41) for a.e. X1 ∈ ΠΩ. This completes the proof of the theorem.
Exercises

1. Show that if f ∈ H01(Ω) and if uε is the solution to (5.1), one has

uε → f  in H01(Ω).

Show that if f ∈ D(Ω), then for every k ≥ 1 one has

Δᵏuε → Δᵏf  in H01(Ω)
(Δk denotes the operator Δ iterated k times). 2. One denotes by A(x) a uniformly positive definite matrix. Study the singular perturbation problem ⎧ ⎨uε ∈ H01 (Ω), ⎩ε A(x)∇uε · ∇v dx + a(x)uε v dx = f, v ∀ v ∈ H01 (Ω) Ω
Ω
when a(x) is a function satisfying a(x) ≥ a > 0 a.e. x ∈ Ω, and f ∈ H −1 (Ω).
3. Compute the solution to

−εu″ε + uε = 1 in Ω = (0, 1),  uε ∈ H01(Ω).

Examine its behaviour when ε → 0.

4. For f ∈ Lp(Ω), p ≥ 2, let u be the solution to
− div(A(x)∇u) + u = f u ∈ H01 (Ω)
in Ω,
where A is a n × n matrix satisfying (4.2), (4.3). Show that u ∈ Lp (Ω) and that |u|p,Ω ≤ |f |p,Ω . 5. Suppose that p ≥ 2. (i) Show that there exists a constant c > 0 independent of z such that |1 + z|p ≥ 1 − pz + c|z|p
∀ z ∈ R.
(*)
(ii) Let uε ∈ Lp (Ω) be a sequence such that when ε → 0 uε f
in Lp (Ω),
|uε |p → |f |p . Show that in Lp (Ω) strong.
uε → f (Hint: take z =
uε −f f
in (∗).)
Show the same result for 1 < p < 2. 6. Let Ω be a bounded domain in Rn . For σ > 0 one considers uσ the solution to ⎧ ⎨uσ ∈ H 1 (Ω), ⎩σ ∇uσ · ∇v dx + uσ v dx = f v dx. Ω
Ω
Ω
2
(i) Show that for f ∈ L (Ω) there exists a unique solution to the problem above. (ii) Show that when σ → +∞ uσ →
1 |Ω|
Ω
f (x) dx
in H 1 (Ω).
7. (Partition of unity.) Let $K$ be a compact subset of $\mathbb{R}^p$ such that $K\subset\cup_{i=1}^k V_i$ where the $V_i$ are open subsets.
(i) Show that one can find open subsets $V_i'$, $V_i''$ such that
$$
V_i' \subset \overline{V_i'} \subset V_i'' \subset \overline{V_i''} \subset V_i \quad \forall\,i=1,\dots,k,\qquad K\subset \cup_{i=1}^k V_i'.
$$
(ii) Show that there exists a continuous function $\alpha_i$ such that
$$
\alpha_i = 1 \ \text{on } V_i',\qquad \alpha_i = 0 \ \text{outside } V_i''.
$$
(iii) Show that for $\varepsilon_i$ small enough $\gamma_i = \rho_{\varepsilon_i}\ast\alpha_i \in \mathcal{D}(V_i)$.
(iv) One sets
$$
\psi_i = \frac{\gamma_i}{\sum_{j=1}^k \gamma_j}.
$$
Show that $\psi_i\in\mathcal{D}(V_i)$ and $\sum_{i=1}^k \psi_i = 1$ on $K$.
8. Let $\Omega = \omega_1\times\omega_2$ where $\omega_1$, $\omega_2$ are bounded domains in $\mathbb{R}^p$, $\mathbb{R}^{n-p}$ respectively ($0 < p < n$). Under the assumptions of Theorem 5.4 show that, in general, for $f\neq 0$ one cannot have
$$
u_\varepsilon \longrightarrow u_0 \quad \text{in } H^1(\Omega).
$$
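The inequality of Exercise 5(i) can be probed numerically. The exponent $p = 3$ and the grid below are arbitrary illustrative choices; the script estimates the best constant $c$ over the grid and checks it is positive.

```python
# Probe of |1+z|^p >= 1 + p*z + c*|z|^p for p >= 2 (here p = 3):
# the ratio (|1+z|^p - 1 - p*z)/|z|^p should be bounded below by a positive c.
p = 3.0
zs = [k / 100.0 for k in range(-1000, 1001) if k != 0]   # grid on [-10,10]\{0}
ratios = [(abs(1 + z) ** p - 1 - p * z) / abs(z) ** p for z in zs]
c = min(ratios)
assert c > 0
print(round(c, 4))   # the grid infimum (near z about -3.4 for p = 3)
```

For $|z|\to\infty$ the ratio tends to 1 and for $z\to 0$ it blows up, so the infimum is attained at a finite $z$ and is strictly positive, which is the content of $(*)$.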
Chapter 6

Asymptotic Analysis for Problems in Large Cylinders

6.1 A model problem

Let us denote by $\Omega_\ell$ the rectangle in $\mathbb{R}^2$ defined by
$$
\Omega_\ell = (-\ell,\ell)\times(-1,1). \qquad (6.1)
$$
For convenience we will also set $\omega = (-1,1)$.

Figure 6.1: the rectangle $\Omega_\ell = (-\ell,\ell)\times(-1,1)$.

If $f = f(x_2)$ is a regular function – as we will see it can be more general – there exists a unique solution $u_\ell$ to
$$
-\Delta u_\ell = f(x_2) \ \text{in } \Omega_\ell,\qquad u_\ell = 0 \ \text{on } \partial\Omega_\ell. \qquad (6.2)
$$
Of course, as usual, this solution is understood in the weak sense explained in Chapters 3 and 4. Since $u_\ell$ vanishes at both ends of $\Omega_\ell$, if $f\not\equiv 0$ it is clear that $u_\ell$ has to depend on $x_1$. However, due to our choice of $f$ – independent of $x_1$ – $u_\ell$ has to enjoy some special properties, and if $\ell$ is large one expects $u_\ell$ to become nearly independent of $x_1$ far away from the end sides of the rectangle. In other words one expects $u_\ell$ to converge locally toward a function independent of $x_1$ when $\ell\to+\infty$. A natural candidate for the limit is of course $u_\infty$, the weak solution to
$$
-\partial_{x_2}^2 u_\infty = f \ \text{in } \omega,\qquad u_\infty = 0 \ \text{on } \partial\omega. \qquad (6.3)
$$
We recall that $\omega = (-1,1)$, $\partial\omega = \{-1,1\}$. Indeed we are going to show that if $\ell_0 > 0$ is fixed then
$$
u_\ell \longrightarrow u_\infty \quad \text{in } \Omega_{\ell_0} \qquad (6.4)
$$
with an exponential rate of convergence, that is to say the norm of $u_\ell - u_\infty$ in $\Omega_{\ell_0}$ is a $O(e^{-\alpha\ell})$ for some positive constant $\alpha$. At this point we do not specify in which norm this convergence takes place since several choices are possible. Somehow $f$ is just playing an auxiliary rôle in our analysis and we will take it the most general possible, assuming
$$
f\in H^{-1}(\omega). \qquad (6.5)
$$
Then for $v\in H_0^1(\Omega_\ell)$, for almost every $x_1$,
$$
v(x_1,\cdot)\in H_0^1(\omega) \qquad (6.6)
$$
(see [30] or the exercises) and the linear form
$$
\langle f, v\rangle = \int_{-\ell}^{\ell} \langle f, v(x_1,\cdot)\rangle\,dx_1 \qquad (6.7)
$$
clearly defines an element of $H^{-1}(\Omega_\ell)$ that we will still denote by $f$. Note that for a smooth function $f$ one has as usual $\langle f, v\rangle = \int_{\Omega_\ell} f v\,dx$.

Then, with these definitions, there exist weak solutions to (6.2), (6.3) respectively, that is to say $u_\ell$, $u_\infty$ such that
$$
u_\ell\in H_0^1(\Omega_\ell),\qquad \int_{\Omega_\ell} \nabla u_\ell\cdot\nabla v\,dx = \langle f, v\rangle \quad \forall\,v\in H_0^1(\Omega_\ell), \qquad (6.8)
$$
$$
u_\infty\in H_0^1(\omega),\qquad \int_\omega \partial_{x_2}u_\infty\,\partial_{x_2}v\,dx_2 = \langle f, v\rangle \quad \forall\,v\in H_0^1(\omega). \qquad (6.9)
$$
We would like to show now that in this general framework – in terms of $f$ – one has $u_\ell\to u_\infty$ when $\ell\to+\infty$.
As we have seen in Chapter 2, by the Poincaré inequality there exists a constant $c > 0$ such that
$$
c\int_\omega u^2\,dx \le \int_\omega (\partial_{x_2}u)^2\,dx \qquad \forall\,u\in H_0^1(\omega). \qquad (6.10)
$$
We will set
$$
0 < \lambda_1 = \inf_{\substack{u\in H_0^1(\omega)\\ u\neq 0}} \frac{\int_\omega (\partial_{x_2}u)^2\,dx}{\int_\omega u^2\,dx}. \qquad (6.11)
$$
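On $\omega = (-1,1)$ the constant $\lambda_1$ equals $(\pi/2)^2$, the first Dirichlet eigenvalue of $-\partial_{x_2}^2$ (see Chapter 9). A quick numerical sanity check of the infimum (6.11) via Rayleigh quotients; the second trial function is an arbitrary choice.

```python
import math

def rayleigh(v, dv, n=20000):
    # Rayleigh quotient int (v')^2 / int v^2 on (-1,1), composite midpoint rule
    h = 2.0 / n
    num = sum(dv(-1 + (i + 0.5) * h) ** 2 for i in range(n)) * h
    den = sum(v(-1 + (i + 0.5) * h) ** 2 for i in range(n)) * h
    return num / den

lam1 = (math.pi / 2) ** 2
# the minimizer v1(x) = cos(pi*x/2) attains lambda_1 ...
q1 = rayleigh(lambda x: math.cos(math.pi * x / 2),
              lambda x: -math.pi / 2 * math.sin(math.pi * x / 2))
# ... while another admissible H_0^1 function gives a larger quotient
q2 = rayleigh(lambda x: 1 - x * x, lambda x: -2 * x)
assert abs(q1 - lam1) < 1e-6
assert q2 > q1
print(q1, q2)
```

Here $q_2 = 5/2 > \pi^2/4 \approx 2.467$, consistent with (6.11) being an infimum attained only by the first eigenfunction.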
(As we will see later, $\lambda_1$ is the first eigenvalue of the operator $-\partial_{x_2}^2$ for the Dirichlet problem. We will not use this here, but the introduction of $\lambda_1$ is useful in order to get sharp estimates.)

Let us start with the following fundamental estimate.

Theorem 6.1. Suppose that $\ell_2 < \ell_1 \le \ell$. Then we have
$$
|\nabla(u_\ell - u_\infty)|_{2,\Omega_{\ell_2}} \le e^{-\sqrt{\lambda_1}\,(\ell_1-\ell_2)}\,|\nabla(u_\ell - u_\infty)|_{2,\Omega_{\ell_1}}. \qquad (6.12)
$$

Remark 6.1. This estimate shows in particular that the function
$$
\ell' \longmapsto e^{-\sqrt{\lambda_1}\,\ell'}\,|\nabla(u_\ell - u_\infty)|_{2,\Omega_{\ell'}}
$$
is nondecreasing.

Proof of Theorem 6.1. Let $v\in H_0^1(\Omega_\ell)$. By (6.6) and (6.9) we have for almost every $x_1$
$$
\int_\omega \partial_{x_2}u_\infty\,\partial_{x_2}v(x_1,\cdot)\,dx_2 = \langle f, v(x_1,\cdot)\rangle.
$$
Integrating this equality on $(-\ell,\ell)$ we deduce, since $u_\infty$ is independent of $x_1$,
$$
\int_{\Omega_\ell} \nabla u_\infty\cdot\nabla v\,dx = \langle f, v\rangle.
$$
Combining this with (6.8) it follows that
$$
\int_{\Omega_\ell} \nabla(u_\ell - u_\infty)\cdot\nabla v\,dx = 0 \qquad \forall\,v\in H_0^1(\Omega_\ell). \qquad (6.13)
$$
(Note the "ghost" rôle of $f$ in these estimates.) We then introduce the function $\rho = \rho(x_1)$ whose graph is depicted in Figure 6.2. In particular one has
$$
0\le\rho\le 1,\qquad \rho = 1 \ \text{on } (-\ell_2,\ell_2),\qquad \rho = 0 \ \text{on } \mathbb{R}\setminus(-\ell_1,\ell_1),\qquad |\rho'|\le\frac{1}{\ell_1-\ell_2}. \qquad (6.14)
$$

Figure 6.2: graph of the cut-off function $\rho$ (equal to 1 on $(-\ell_2,\ell_2)$, vanishing outside $(-\ell_1,\ell_1)$).

Then
$$
v = (u_\ell - u_\infty)\rho(x_1) \in H_0^1(\Omega_{\ell_1})
$$
and from (6.13) we derive
$$
\int_{\Omega_{\ell_1}} \nabla(u_\ell - u_\infty)\cdot\nabla\{(u_\ell - u_\infty)\rho\}\,dx = 0. \qquad (6.15)
$$
It follows that we have
$$
\int_{\Omega_{\ell_1}} |\nabla(u_\ell - u_\infty)|^2\rho\,dx
= -\int_{\Omega_{\ell_1}\setminus\Omega_{\ell_2}} (\nabla(u_\ell - u_\infty)\cdot\nabla\rho)(u_\ell - u_\infty)\,dx
= -\int_{\Omega_{\ell_1}\setminus\Omega_{\ell_2}} \partial_{x_1}(u_\ell - u_\infty)\,\partial_{x_1}\rho\,(u_\ell - u_\infty)\,dx
$$
since $\rho$ is independent of $x_2$ and $\partial_{x_1}\rho$ vanishes everywhere except on $\Omega_{\ell_1}\setminus\Omega_{\ell_2}$. Then – see (6.14):
$$
\int_{\Omega_{\ell_1}} |\nabla(u_\ell - u_\infty)|^2\rho\,dx \le \frac{1}{\ell_1-\ell_2}\int_{\Omega_{\ell_1}\setminus\Omega_{\ell_2}} |\partial_{x_1}(u_\ell - u_\infty)|\,|u_\ell - u_\infty|\,dx.
$$
We then use the following Young inequality
$$
ab \le \frac{1}{2}\Big(\frac{1}{\sqrt{\lambda_1}}\,a^2 + \sqrt{\lambda_1}\,b^2\Big)
$$
to get
$$
\int_{\Omega_{\ell_1}} |\nabla(u_\ell - u_\infty)|^2\rho\,dx
\le \frac{1}{2(\ell_1-\ell_2)}\Big(\frac{1}{\sqrt{\lambda_1}}\int_{\Omega_{\ell_1}\setminus\Omega_{\ell_2}} (\partial_{x_1}(u_\ell - u_\infty))^2\,dx + \sqrt{\lambda_1}\int_{\Omega_{\ell_1}\setminus\Omega_{\ell_2}} (u_\ell - u_\infty)^2\,dx\Big). \qquad (6.16)
$$
Now by the definition of $\lambda_1$ – see also (6.6) – we have for a.e. $x_1$
$$
\lambda_1\int_\omega (u_\ell - u_\infty)^2(x_1,\cdot)\,dx_2 \le \int_\omega (\partial_{x_2}(u_\ell - u_\infty))^2(x_1,\cdot)\,dx_2.
$$
Integrating for $|x_1|\in(\ell_2,\ell_1)$ we derive
$$
\lambda_1\int_{\Omega_{\ell_1}\setminus\Omega_{\ell_2}} (u_\ell - u_\infty)^2\,dx \le \int_{\Omega_{\ell_1}\setminus\Omega_{\ell_2}} (\partial_{x_2}(u_\ell - u_\infty))^2\,dx.
$$
Going back to (6.16) we get
$$
\int_{\Omega_{\ell_1}} |\nabla(u_\ell - u_\infty)|^2\rho\,dx \le \frac{1}{2\sqrt{\lambda_1}\,(\ell_1-\ell_2)}\int_{\Omega_{\ell_1}\setminus\Omega_{\ell_2}} |\nabla(u_\ell - u_\infty)|^2\,dx.
$$
Since $\rho = 1$ on $(-\ell_2,\ell_2)$ – see (6.14) – this implies that
$$
\int_{\Omega_{\ell_2}} |\nabla(u_\ell - u_\infty)|^2\,dx \le \frac{1}{2\sqrt{\lambda_1}\,(\ell_1-\ell_2)}\int_{\Omega_{\ell_1}\setminus\Omega_{\ell_2}} |\nabla(u_\ell - u_\infty)|^2\,dx. \qquad (6.17)
$$
Let us set
$$
F(\ell') = \int_{\Omega_{\ell'}} |\nabla(u_\ell - u_\infty)|^2\,dx. \qquad (6.18)
$$
We can then write (6.17) as
$$
F(\ell_2) \le \frac{1}{2\sqrt{\lambda_1}}\,\frac{F(\ell_1)-F(\ell_2)}{\ell_1-\ell_2}. \qquad (6.19)
$$
Letting $\ell_1\to\ell_2$ we obtain
$$
F(\ell_2) \le \frac{1}{2\sqrt{\lambda_1}}\,F'(\ell_2) \quad \text{for a.e. } \ell_2. \qquad (6.20)
$$
(Note that $F(\ell')$, being nondecreasing, is almost everywhere differentiable.) The formula above can also be written
$$
\big(e^{-2\sqrt{\lambda_1}\,\ell_2}F(\ell_2)\big)' \ge 0,
$$
which implies
$$
e^{-2\sqrt{\lambda_1}\,\ell_2}F(\ell_2) \le e^{-2\sqrt{\lambda_1}\,\ell_1}F(\ell_1)
$$
for any $\ell_2 \le \ell_1$. By (6.18) this implies (6.12) and completes the proof of the theorem. $\square$

To complete our convergence result we will need the following proposition.

Proposition 6.2. We have
$$
|\nabla(u_\ell - u_\infty)|_{2,\Omega_\ell} \le \sqrt{2}\,|u_\infty|_{1,2}, \qquad (6.21)
$$
where
$$
|v|_{1,2}^2 = \int_\omega v^2 + (\partial_{x_2}v)^2\,dx.
$$
Figure 6.3: graph of $\eta$ (a function of $x_1$, equal to 1 near $x_1 = \pm\ell$, vanishing on $(-\ell+1,\ell-1)$).

Proof. Note that the left-hand side of (6.21) cannot a priori go to 0, but the estimate already shows that $u_\ell$ is close to $u_\infty$. We denote by $\eta$ the function whose graph is depicted in Figure 6.3. Clearly
$$
u_\ell - u_\infty + \eta u_\infty \in H_0^1(\Omega_\ell).
$$
From (6.13) we derive
$$
\int_{\Omega_\ell} \nabla(u_\ell - u_\infty)\cdot\nabla(u_\ell - u_\infty + \eta u_\infty)\,dx = 0,
$$
hence
$$
\int_{\Omega_\ell} |\nabla(u_\ell - u_\infty)|^2\,dx = -\int_{\Omega_\ell} \nabla(u_\ell - u_\infty)\cdot\nabla\{\eta u_\infty\}\,dx
\le \Big(\int_{\Omega_\ell} |\nabla(u_\ell - u_\infty)|^2\,dx\Big)^{\frac12}\Big(\int_{\Omega_\ell} |\nabla\{\eta u_\infty\}|^2\,dx\Big)^{\frac12}
$$
by the Cauchy–Schwarz inequality. Since $\eta$ vanishes on $(-\ell+1,\ell-1)$ we then derive
$$
\int_{\Omega_\ell} |\nabla(u_\ell - u_\infty)|^2\,dx \le \int_{\Omega_\ell\setminus\Omega_{\ell-1}} |\nabla(\eta u_\infty)|^2\,dx.
$$
This last integral can be evaluated to give
$$
\int_{\Omega_\ell\setminus\Omega_{\ell-1}} |\nabla(\eta u_\infty)|^2\,dx = \int_{\Omega_\ell\setminus\Omega_{\ell-1}} ((\partial_{x_1}\eta)\,u_\infty)^2 + (\eta\,\partial_{x_2}u_\infty)^2\,dx
\le \int_{\Omega_\ell\setminus\Omega_{\ell-1}} u_\infty^2 + (\partial_{x_2}u_\infty)^2\,dx = 2|u_\infty|_{1,2}^2.
$$
The result follows. $\square$
i.e., u → u∞ in Ω0 with an exponential rate of convergence.
(6.22)
6.2. Another type of convergence
79
Proof. We just apply (6.12) with 2 = 0 , 1 = . The result follows then from (6.21). Remark 6.2. Taking 2 = 2 , 1 = we also have from (6.12) |∇(u − u∞ )|
2,Ω/2
≤
√ √ 2|u∞ |1,2 e− λ1 2 .
(6.23)
However we do not have |∇(u − u∞ )| −→ 0 2,Ω
(6.24)
(see Exercise 1).
6.2 Another type of convergence Somehow the convergence in H 1 (Ω) is not appealing. Pointwise convergence is more satisfactory. This can be achieved very simply by a careful use of the maximum principle. We will even extend this property to very general domains. Let us consider the function π v1 (x2 ) = cos x2 (6.25) 2 the first eigenfunction of the one-dimensional Dirichlet problem on (−1, 1) (we refer to Chapter 9). Note that we will just be using the expression of v1 . Then we have: Theorem 6.4. Let u , u∞ be the solutions to (6.2), (6.3). Suppose that for some constant C we have (6.26) |u∞ (x2 )| ≤ Cv1 (x2 ) ∀x2 . Then
|u − u∞ | ≤ C
cosh π2 x1 π v1 (x2 ). cosh 2
(6.27)
Proof. Let us remark that if u denotes the function in the right-hand side of (6.27) one has −Δu = 0 in the usual sense and also in the weak sense. It follows that −Δ{u − u∞ − u} = 0 in a weak sense in Ω . Moreover on ∂Ω u − u∞ − u = −u∞ − u = −u∞ − Cv1 ≤ 0.
80
Chapter 6. Problems in Large Cylinders
By the maximum principle it follows that u − u∞ ≤ u. Arguing similarly with u − u∞ + u one derives −u ≤ u − u∞ ≤ u
which completes the proof of (6.27).
Remark 6.3. The assumption (6.26) holds in many cases, for instance if $\partial_{x_2}u_\infty\in L^\infty(\omega)$, which is the case for $f\in L^2(\omega)$. As a consequence of (6.27) we have
$$
|u_\ell - u_\infty|_{\infty,\Omega_{\ell_0}} \le C\cosh\Big(\frac{\pi}{2}\ell_0\Big)\,e^{-\frac{\pi}{2}\ell}, \qquad (6.28)
$$
which gives an exponential rate of convergence for the uniform norm locally.

We can now address the case of more general domains. Suppose that $\Omega$ is a domain of the type of Figure 6.4, our unique assumption being that
$$
\Omega\subset\mathbb{R}\times\omega,\qquad \Omega\cap\big((-\ell,\ell)\times\mathbb{R}\big) = \Omega_\ell. \qquad (6.29)
$$

Figure 6.4: a general domain $\Omega$ inside the strip $\mathbb{R}\times(-1,1)$, coinciding with the rectangle $\Omega_\ell$ for $|x_1| < \ell$.

Let $f$ be a function such that $f\in L^2(\omega)$. Then there exists a unique $u$ solution to
$$
u\in H_0^1(\Omega),\qquad \int_\Omega \nabla u\cdot\nabla v\,dx = \int_\Omega f v\,dx \quad \forall\,v\in H_0^1(\Omega). \qquad (6.30)
$$
6.2. Another type of convergence
81
Proof. We suppose first that f ≥ 0. Let u be the solution to (6.2). By the maximum principle we have 0 ≤ u
in Ω ,
0 ≤ u∞
on ω.
Thus it follows that u − u ≤ 0 on ∂Ω ,
u
−Δ(u − u ) = 0
∂Ω ,
− u∞ ≤ 0 on
−Δ(u
− u∞ ) = 0
in Ω , in Ω .
By the maximum principle we deduce that 0 ≤ u ≤ u ≤ u∞
in Ω .
By the estimate (6.28) it follows for 0 < π
|u − u∞ |∞,Ω0 ≤ |u − u∞ |∞,Ω0 ≤ Ce− 2 (see also Remark 6.3). Thus the result follows in this case. In the general case one writes f = f+ − f− where f + and f − are the positive and negative parts of f . Then introducing u± , u± ∞ the solutions to 1 u± ∈ H0 (Ω ),
1 u± ∞ ∈ H0 (ω),
± −Δu± = f
± −Δu± ∞ = f
in Ω , in ω,
we have by the results above π
± −2 |u± . − u∞ |∞,Ω0 ≤ Ce
Due to the linearity of the problems and uniqueness of the solution we have − u = u+ − u ,
The result follows then easily.
− u∞ = u+ ∞ − u∞ .
Remark 6.4. Assuming $f$ extended by 0 outside $\omega$, one can drop the first assumption of (6.29), namely $\Omega\subset\mathbb{R}\times\omega$.
6.3 The general case

We denote a point $x\in\mathbb{R}^n$ also as $x = (X_1, X_2)$ where
$$
X_1 = (x_1,\dots,x_p),\qquad X_2 = (x_{p+1},\dots,x_n), \qquad (6.32)
$$
in other words we split the components of a point in $\mathbb{R}^n$ into the first $p$ components and the last $n-p$ ones. Let $\omega_1$ be an open subset of $\mathbb{R}^p$ that we suppose to satisfy
$$
\omega_1 \ \text{is a bounded convex domain containing } 0. \qquad (6.33)
$$
Let $\omega_2$ be a bounded open subset of $\mathbb{R}^{n-p}$; then we set
$$
\Omega_\ell = \ell\omega_1\times\omega_2. \qquad (6.34)
$$
Note that by (6.33) we have $\Omega_\ell\subset\Omega_{\ell'}$ for $\ell < \ell'$. We denote by
$$
A(x) = \begin{pmatrix} A_{11}(X_1,X_2) & A_{12}(X_2) \\ A_{21}(X_1,X_2) & A_{22}(X_2) \end{pmatrix} = (a_{ij}(x)) \qquad (6.35)
$$
an $n\times n$ matrix divided into four blocks such that $A_{11}$ is a $p\times p$ matrix, $A_{22}$ an $(n-p)\times(n-p)$ matrix. We assume that
$$
A_{12} = A_{12}(X_2),\qquad A_{22} = A_{22}(X_2), \qquad (6.36)
$$
$$
a_{ij}\in L^\infty(\mathbb{R}^p\times\omega_2), \qquad (6.37)
$$
and that for some constants $\lambda$, $\Lambda$ we have
$$
\lambda|\xi|^2 \le A(x)\xi\cdot\xi \qquad \forall\,\xi\in\mathbb{R}^n,\ \text{a.e. } x\in\mathbb{R}^p\times\omega_2, \qquad (6.38)
$$
$$
|A(x)\xi| \le \Lambda|\xi| \qquad \forall\,\xi\in\mathbb{R}^n,\ \text{a.e. } x\in\mathbb{R}^p\times\omega_2. \qquad (6.39)
$$
Then by the Lax–Milgram theorem, for
$$
f\in L^2(\omega_2) \qquad (6.40)
$$
there exists a unique $u_\infty$ solution to
$$
u_\infty\in H_0^1(\omega_2),\qquad \int_{\omega_2} A_{22}\nabla_{X_2}u_\infty\cdot\nabla_{X_2}v\,dX_2 = \int_{\omega_2} f v\,dX_2 \quad \forall\,v\in H_0^1(\omega_2). \qquad (6.41)
$$
(In this system $\nabla_{X_2}$ stands for the gradient in $X_2$, that is $(\partial_{x_{p+1}},\dots,\partial_{x_n})$, and $dX_2 = dx_{p+1}\cdots dx_n$.) By the Lax–Milgram theorem there exists also a unique $u_\ell$ solution to
$$
u_\ell\in H_0^1(\Omega_\ell),\qquad \int_{\Omega_\ell} A\nabla u_\ell\cdot\nabla v\,dx = \int_{\Omega_\ell} f v\,dx \quad \forall\,v\in H_0^1(\Omega_\ell). \qquad (6.42)
$$
Moreover we have

Theorem 6.6. There exist two constants $c, \alpha > 0$ independent of $\ell$ such that
$$
\int_{\Omega_{\ell/2}} |\nabla(u_\ell - u_\infty)|^2\,dx \le c\,e^{-\alpha\ell}\,|f|^2_{2,\omega_2}. \qquad (6.43)
$$

Proof. The proof is divided into three steps.

• Step 1. The equation satisfied by $u_\ell - u_\infty$. If $v\in H_0^1(\Omega_\ell)$ then for almost every $X_1$ in $\ell\omega_1$ we have $v(X_1,\cdot)\in H_0^1(\omega_2)$ and thus by (6.41)
$$
\int_{\omega_2} A_{22}\nabla_{X_2}u_\infty\cdot\nabla_{X_2}v(X_1,\cdot)\,dX_2 = \int_{\omega_2} f\,v(X_1,\cdot)\,dX_2.
$$
Integrating in $X_1$ we get
$$
\int_{\Omega_\ell} A_{22}\nabla_{X_2}u_\infty\cdot\nabla_{X_2}v\,dx = \int_{\Omega_\ell} f v\,dx \qquad \forall\,v\in H_0^1(\Omega_\ell). \qquad (6.44)
$$
Now for $v\in H_0^1(\Omega_\ell)$ we have
$$
\int_{\Omega_\ell} A\nabla u_\infty\cdot\nabla v\,dx = \int_{\Omega_\ell} A_{12}\nabla_{X_2}u_\infty\cdot\nabla_{X_1}v\,dx + \int_{\Omega_\ell} A_{22}\nabla_{X_2}u_\infty\cdot\nabla_{X_2}v\,dx
= \int_{\Omega_\ell} A_{22}\nabla_{X_2}u_\infty\cdot\nabla_{X_2}v\,dx = \int_{\Omega_\ell} f v\,dx \qquad (6.45)
$$
(since $A_{12}$, $u_\infty$ depend on $X_2$ only, the first integral vanishes after integrating in $X_1$). Combining (6.42), (6.45) we get
$$
\int_{\Omega_\ell} A\nabla(u_\ell - u_\infty)\cdot\nabla v\,dx = 0 \qquad \forall\,v\in H_0^1(\Omega_\ell). \qquad (6.46)
$$
• Step 2. An iteration technique. Set $0 < \ell_0 \le \ell - 1$. There exists $\rho$, a function of $X_1$ only, such that
$$
0\le\rho\le 1,\qquad \rho = 1 \ \text{on } \ell_0\omega_1,\qquad \rho = 0 \ \text{on } \mathbb{R}^p\setminus(\ell_0+1)\omega_1,\qquad |\nabla_{X_1}\rho|\le c_0, \qquad (6.47)
$$
where $c_0$ is a universal constant (cf. Exercise 4). Then we have
$$
(u_\ell - u_\infty)\rho^2 \in H_0^1(\Omega_\ell)
$$
and from (6.46) we derive
$$
\int_{\Omega_\ell} A\nabla(u_\ell - u_\infty)\cdot\nabla(u_\ell - u_\infty)\,\rho^2\,dx
= -2\int_{\Omega_\ell} \big(A\nabla(u_\ell - u_\infty)\cdot\nabla_{X_1}\rho\big)(u_\ell - u_\infty)\rho\,dx
\le 2\int_{\Omega_{\ell_0+1}\setminus\Omega_{\ell_0}} |A\nabla(u_\ell - u_\infty)|\,|\nabla_{X_1}\rho|\,|u_\ell - u_\infty|\,\rho\,dx.
$$
Using (6.38), (6.39), (6.47) and the Cauchy–Schwarz inequality we derive
$$
\lambda\int_{\Omega_\ell} |\nabla(u_\ell - u_\infty)|^2\rho^2\,dx
\le 2c_0\Lambda\int_{\Omega_{\ell_0+1}\setminus\Omega_{\ell_0}} |\nabla(u_\ell - u_\infty)|\,\rho\,|u_\ell - u_\infty|\,dx
\le 2c_0\Lambda\Big(\int_{\Omega_\ell} |\nabla(u_\ell - u_\infty)|^2\rho^2\,dx\Big)^{\frac12}\Big(\int_{\Omega_{\ell_0+1}\setminus\Omega_{\ell_0}} (u_\ell - u_\infty)^2\,dx\Big)^{\frac12}.
$$
It follows that (recall that $\rho = 1$ on $\Omega_{\ell_0}$)
$$
\int_{\Omega_{\ell_0}} |\nabla(u_\ell - u_\infty)|^2\,dx \le \Big(2c_0\frac{\Lambda}{\lambda}\Big)^2\int_{\Omega_{\ell_0+1}\setminus\Omega_{\ell_0}} (u_\ell - u_\infty)^2\,dx. \qquad (6.48)
$$
Since $u_\ell - u_\infty$ vanishes on the lateral boundary of $\Omega_\ell$, there exists a constant $c_p$ independent of $\ell$ such that (see Theorem 2.8)
$$
\int_{\Omega_{\ell_0+1}\setminus\Omega_{\ell_0}} (u_\ell - u_\infty)^2\,dx \le c_p^2\int_{\Omega_{\ell_0+1}\setminus\Omega_{\ell_0}} |\nabla_{X_2}(u_\ell - u_\infty)|^2\,dx. \qquad (6.49)
$$
Combining this Poincaré inequality with (6.48) we get
$$
\int_{\Omega_{\ell_0}} |\nabla(u_\ell - u_\infty)|^2\,dx \le C\int_{\Omega_{\ell_0+1}\setminus\Omega_{\ell_0}} |\nabla(u_\ell - u_\infty)|^2\,dx,
$$
which is also
$$
\int_{\Omega_{\ell_0}} |\nabla(u_\ell - u_\infty)|^2\,dx \le \frac{C}{1+C}\int_{\Omega_{\ell_0+1}} |\nabla(u_\ell - u_\infty)|^2\,dx,
$$
where $C = \big(2c_0c_p\frac{\Lambda}{\lambda}\big)^2$. Iterating this formula starting from $\ell/2$ we obtain
$$
\int_{\Omega_{\ell/2}} |\nabla(u_\ell - u_\infty)|^2\,dx \le \Big(\frac{C}{1+C}\Big)^{[\ell/2]}\int_{\Omega_{\ell/2+[\ell/2]}} |\nabla(u_\ell - u_\infty)|^2\,dx \qquad (6.50)
$$
where $[\ell/2]$ denotes the integer part of $\ell/2$. Since $\ell/2 - 1 < [\ell/2] \le \ell/2$, it follows that
$$
\int_{\Omega_{\ell/2}} |\nabla(u_\ell - u_\infty)|^2\,dx \le e^{(-\frac{\ell}{2}+1)\ln(\frac{1+C}{C})}\int_{\Omega_\ell} |\nabla(u_\ell - u_\infty)|^2\,dx
= c_0'\,e^{-\alpha_0\ell}\int_{\Omega_\ell} |\nabla(u_\ell - u_\infty)|^2\,dx \qquad (6.51)
$$
where $c_0' = \frac{1+C}{C}$ and $\alpha_0 = \frac12\ln\big(\frac{1+C}{C}\big)$.
Ω
f u dx ≤ |f |2,Ω |u |2,Ω .
Of course by the Poincar´e inequality we have |u |2,Ω ≤ cp |∇u |2,Ω
where cp is independent of . Using the ellipticity condition we derive λ |∇u |2 dx ≤ cp |f |2,Ω |∇u |2,Ω
Ω
and thus
Ω
|∇u |2 dx ≤
cp |f |2,Ω λ
2 .
By a simple computation |f |22,Ω =
ω1
ω2
f 2 (X2 ) dX2 dX1
= |ω1 | |f |22,ω1 = p |ω1 | |f |22,ω2 where | · | denotes the measure of sets. It follows that c2p |ω1 | p 2 |∇u |2 dx ≤ |f |2,ω2 . λ2 Ω
(6.52)
Similarly choosing v = u∞ in (6.41) we get 2 λ |∇u∞ | dX2 ≤ A22 ∇X2 u∞ ∇X2 u∞ dX2 = ω2
ω2
ω2
≤ |f |2,ω2 |u∞ |2,Ω2 ≤ |f |2,ω2 cp |∇u∞ |
2,Ω2
.
f u∞ dX2
86
Chapter 6. Problems in Large Cylinders
From this it follows that we have c2p 2 |∇u∞ | dX2 ≤ 2 f 2 dX2 . λ ω2 ω2 Integrating this in X1 we derive c2p |ω1 | p 2 |∇u∞ |2 dx ≤ |f |2,ω2 λ2 Ω
(6.53)
which is the same estimate as in (6.52). Going back to (6.51) we obtain 2 −α0 |∇(u − u∞ )| dx ≤ 2c0 e |∇u |2 + |∇u∞ |2 dx Ω/2
Ω
|ω1 | ≤ 4c0 c2p 2 p e−α0 |f |22,ω2 . λ The estimate (6.43) follows by taking α any constant smaller than α0 .
Remark 6.5. In (6.43) one can replace Ω by Ωa for any a ∈ (0, 1) at the expense 2 of lowering α. The proof is exactly the same. However we recall that one cannot choose a = 1 (see the exercises). Remark 6.6. One cannot allow A12 to depend on X1 . One can allow A11 , A21 to depend on provided (6.38), (6.39) are still valid. Remark 6.7. The technique above follows [37]. It can be applied to a wide class of problems (see for instance [32] for the Stokes problem).
6.4 An application

We would like to apply the result of the previous section to the anisotropic singular perturbation problem. Indeed, the two problems can be connected to each other via a scaling. Let us explain this. With the notation of the previous section consider
$$
\Omega_1 = \omega_1\times\omega_2 \qquad (6.54)
$$
where $\omega_1$ satisfies (6.33). Let us denote by $A = A(x)$ a matrix like in (6.35), i.e.,
$$
A(x) = \begin{pmatrix} A_{11}(x) & A_{12}(X_2) \\ A_{21}(x) & A_{22}(X_2) \end{pmatrix}.
$$
Assume that
$$
\lambda|\xi|^2 \le A(x)\xi\cdot\xi \qquad \forall\,\xi\in\mathbb{R}^n,\ \text{a.e. } x\in\Omega_1, \qquad (6.55)
$$
$$
|A(x)\xi| \le \Lambda|\xi| \qquad \forall\,\xi\in\mathbb{R}^n,\ \text{a.e. } x\in\Omega_1, \qquad (6.56)
$$
for some positive constants $\lambda$, $\Lambda$. Then as in (5.36) one can define for $\varepsilon > 0$
$$
A^\varepsilon = A^\varepsilon(x) = \begin{pmatrix} \varepsilon^2 A_{11} & \varepsilon A_{12} \\ \varepsilon A_{21} & A_{22} \end{pmatrix}. \qquad (6.57)
$$
Due to (5.37) and (5.38), for
$$
f = f(X_2)\in L^2(\omega_2) \qquad (6.58)
$$
there exists a unique $u_\varepsilon$ solution to
$$
u_\varepsilon\in H_0^1(\Omega_1),\qquad \int_{\Omega_1} A^\varepsilon\nabla u_\varepsilon\cdot\nabla v\,dx = \int_{\Omega_1} f v\,dx \quad \forall\,v\in H_0^1(\Omega_1). \qquad (6.59)
$$
Due to Theorem 5.4 we know that when $\varepsilon\to 0$, $u_\varepsilon\to u_0$ in $L^2(\Omega_1)$, where $u_0$ is the solution to
$$
u_0\in H_0^1(\omega_2),\qquad \int_{\omega_2} A_{22}\nabla_{X_2}u_0\cdot\nabla_{X_2}v\,dX_2 = \int_{\omega_2} f(X_2)v\,dX_2 \quad \forall\,v\in H_0^1(\omega_2). \qquad (6.60)
$$
Note that here, since $A_{22}$, $f$ are independent of $X_1$, so is $u_0$. Due to this special situation we can have more information on the convergence of $u_\varepsilon$. We have

Theorem 6.7. If $\Omega_a = a\omega_1\times\omega_2$, $a\in(0,1)$, there exist two positive constants $c$, $\alpha$ such that
$$
\int_{\Omega_a} |\nabla(u_\varepsilon - u_0)|^2\,dx \le c\,e^{-\frac{\alpha}{\varepsilon}}\,|f|^2_{2,\omega_2}. \qquad (6.61)
$$
Proof. To prove this we rely on a scaling argument. Indeed we set
$$
\varepsilon = \frac{1}{\ell},\qquad u_\ell(X_1,X_2) = u_\varepsilon\Big(\frac{X_1}{\ell}, X_2\Big). \qquad (6.62)
$$
With this definition it is clear that
$$
u_\ell\in H_0^1(\Omega_\ell). \qquad (6.63)
$$
Moreover
$$
\nabla_{X_1}u_\ell(X_1,X_2) = \frac{1}{\ell}\,\nabla_{X_1}u_\varepsilon\Big(\frac{X_1}{\ell}, X_2\Big), \qquad (6.64)
$$
$$
\nabla_{X_2}u_\ell(X_1,X_2) = \nabla_{X_2}u_\varepsilon\Big(\frac{X_1}{\ell}, X_2\Big). \qquad (6.65)
$$
Making the change of variable $X_1\to\frac{X_1}{\ell}$ in the equation of (6.59) we obtain (a factor $\ell^{-p}$ cancels on both sides)
$$
\int_{\Omega_\ell} A^\varepsilon\Big(\frac{X_1}{\ell}, X_2\Big)\nabla u_\varepsilon\Big(\frac{X_1}{\ell}, X_2\Big)\cdot\nabla v\Big(\frac{X_1}{\ell}, X_2\Big)\,dx = \int_{\Omega_\ell} f(X_2)\,v\Big(\frac{X_1}{\ell}, X_2\Big)\,dx \qquad \forall\,v\in H_0^1(\Omega_1).
$$
For $w\in H_0^1(\Omega_\ell)$ we have $v(X_1,X_2) = w(\ell X_1, X_2)\in H_0^1(\Omega_1)$ and
$$
(\nabla v)\Big(\frac{X_1}{\ell}, X_2\Big) = \big(\ell\,\nabla_{X_1}w(X_1,X_2),\ \nabla_{X_2}w(X_1,X_2)\big)^T.
$$
Combining this with (6.64), (6.65) – see also (6.62) – we obtain
$$
A^\varepsilon\Big(\frac{X_1}{\ell}, X_2\Big)\nabla u_\varepsilon\Big(\frac{X_1}{\ell}, X_2\Big)\cdot\nabla v\Big(\frac{X_1}{\ell}, X_2\Big)
= \begin{pmatrix} \frac{1}{\ell^2}A_{11}\big(\frac{X_1}{\ell}, X_2\big) & \frac{1}{\ell}A_{12}(X_2) \\[2pt] \frac{1}{\ell}A_{21}\big(\frac{X_1}{\ell}, X_2\big) & A_{22}(X_2) \end{pmatrix}\begin{pmatrix} \ell\,\nabla_{X_1}u_\ell \\ \nabla_{X_2}u_\ell \end{pmatrix}\cdot\begin{pmatrix} \ell\,\nabla_{X_1}w \\ \nabla_{X_2}w \end{pmatrix}
$$
$$
= \begin{pmatrix} A_{11}\big(\frac{X_1}{\ell}, X_2\big)\nabla_{X_1}u_\ell + A_{12}\nabla_{X_2}u_\ell \\ A_{21}\big(\frac{X_1}{\ell}, X_2\big)\nabla_{X_1}u_\ell + A_{22}\nabla_{X_2}u_\ell \end{pmatrix}\cdot\begin{pmatrix} \nabla_{X_1}w \\ \nabla_{X_2}w \end{pmatrix}
= \tilde A\,\nabla u_\ell\cdot\nabla w.
$$
Thus, setting
$$
\tilde A(X_1,X_2) = \begin{pmatrix} A_{11}\big(\frac{X_1}{\ell}, X_2\big) & A_{12}(X_2) \\ A_{21}\big(\frac{X_1}{\ell}, X_2\big) & A_{22}(X_2) \end{pmatrix},
$$
we see that $u_\ell$ is a solution to
$$
u_\ell\in H_0^1(\Omega_\ell),\qquad \int_{\Omega_\ell} \tilde A\,\nabla u_\ell\cdot\nabla w\,dx = \int_{\Omega_\ell} f w\,dx \quad \forall\,w\in H_0^1(\Omega_\ell).
$$
Applying Theorem 6.6, taking into account Remarks 6.5, 6.6, we obtain the existence of constants $c_0$, $\alpha_0$ such that
$$
\int_{\Omega_{a\ell}} |\nabla(u_\ell - u_0)|^2\,dx \le c_0\,e^{-\alpha_0\ell}\,|f|^2_{2,\omega_2}.
$$
(Note that $u_0 = u_\infty$.) Changing $X_1\to\ell X_1$ in the integral above we obtain
$$
\ell^p\int_{\Omega_a} |\nabla(u_\ell - u_0)|^2(\ell X_1, X_2)\,dx \le c_0\,e^{-\alpha_0\ell}\,|f|^2_{2,\omega_2}
$$
and by (6.64), for $\ell > 1$, i.e., $\varepsilon < 1$,
$$
\frac{\ell^p}{\ell^2}\int_{\Omega_a} |\nabla(u_\varepsilon - u_0)|^2\,dx \le c_0\,e^{-\alpha_0\ell}\,|f|^2_{2,\omega_2}
\iff \int_{\Omega_a} |\nabla(u_\varepsilon - u_0)|^2\,dx \le c_0\,\ell^{2-p}\,e^{-\alpha_0\ell}\,|f|^2_{2,\omega_2} \le c\,e^{-\frac{\alpha}{\varepsilon}}\,|f|^2_{2,\omega_2}
$$
for some constant $c$ and any $\alpha$ smaller than $\alpha_0$. This completes the proof of the theorem. $\square$
Exercises

1. Let $\lambda_1$ be the first eigenvalue of the problem
$$
-\partial_{x_2}^2 u_\infty = \lambda_1 u_\infty \ \text{in } \omega = (-1,1),\qquad u_\infty = 0 \ \text{on } \{-1,1\}
$$
(i.e., $\lambda_1 = \big(\frac{\pi}{2}\big)^2$, $u_\infty = v_1$ given by (6.25)). Show that $u_\ell$, the solution to
$$
-\Delta u_\ell = \lambda_1 u_\infty \ \text{in } \Omega_\ell = (-\ell,\ell)\times\omega,\qquad u_\ell = 0 \ \text{on } \partial\Omega_\ell,
$$
is given by
$$
u_\ell = \Big(1 - \frac{\cosh\sqrt{\lambda_1}\,x_1}{\cosh\sqrt{\lambda_1}\,\ell}\Big)\,u_\infty(x_2).
$$
Show then that
$$
\int_{\Omega_\ell} |\nabla(u_\ell - u_\infty)|^2\,dx \;\not\longrightarrow\; 0
$$
when $\ell\to+\infty$.

2. Let $u_\infty$, $u_\ell$ be the solutions to (6.41), (6.42). Show that when $p\ge 3$ we have
$$
\int_{\Omega_\ell} |\nabla(u_\ell - u_\infty)|^2\,dx \to +\infty
$$
when $\ell\to+\infty$, $\Omega_\ell = (-\ell,\ell)^p\times\omega_2$.

3. Show that (6.26) holds when $f\in L^2(\omega)$ (see Remark 6.3).

4. Let $\omega_1$ be a bounded convex domain containing 0. Prove that there exists a function $\rho$ satisfying (6.47).
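With the explicit solution of Exercise 1 one can observe both phenomena at once: the exponential decay of the gradient norm on a fixed subdomain (Theorem 6.3) and the absence of decay on the whole $\Omega_\ell$ (Remark 6.2). The closed-form energy used below is a direct computation from the formula of Exercise 1 (using $\int_\omega v_1^2 = 1$, $\int_\omega (v_1')^2 = \lambda_1$ and $\sinh^2 + \cosh^2 = \cosh 2(\cdot)$), not a formula stated in the text.

```python
import math

s = math.pi / 2      # sqrt(lambda_1)

def grad_energy(r, l):
    # int over Omega_r of |grad(u_l - u_inf)|^2 for the explicit solution:
    # sqrt(lambda1) * sinh(2*sqrt(lambda1)*r) / cosh(sqrt(lambda1)*l)^2
    return s * math.sinh(2 * s * r) / math.cosh(s * l) ** 2

# on a fixed subdomain Omega_{l0}: decay ratio per unit length ~ exp(-2*sqrt(lambda1))
ratio = grad_energy(1.0, 11.0) / grad_energy(1.0, 10.0)
assert abs(ratio - math.exp(-2 * s)) < 1e-6
# on the full Omega_l the energy does NOT vanish: it tends to 2*sqrt(lambda1) = pi
assert abs(grad_energy(50.0, 50.0) - 2 * s) < 1e-6
print(ratio, grad_energy(50.0, 50.0))
```

So the energy on $\Omega_{\ell_0}$ decays like $e^{-2\sqrt{\lambda_1}\,\ell}$ while the energy on $\Omega_\ell$ converges to the positive constant $2\sqrt{\lambda_1}$, exactly as (6.22) and (6.24) assert.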
5. Let $\Omega = \omega_1\times\omega_2$ where $\omega_1$ is an open subset of $\mathbb{R}^p$ and $\omega_2$ an open subset of $\mathbb{R}^{n-p}$. The notation is that of Section 6.3. Let $u\in H_0^1(\Omega)$ and let $\varphi_n\in\mathcal{D}(\Omega)$ be such that
$$
\varphi_n \to u \quad \text{in } H_0^1(\Omega).
$$
(i) Show that, up to a subsequence,
$$
\int_{\omega_2} |\nabla_{X_2}(u-\varphi_n)(X_1,X_2)|^2 + (u-\varphi_n)^2(X_1,X_2)\,dX_2 \to 0
$$
for a.e. $X_1\in\omega_1$.
(ii) For a.e. $X_1\in\omega_1$, $\nabla_{X_2}u(X_1,\cdot)$ is a function in $(L^2(\omega_2))^{n-p}$. Show that this function is the gradient of $u(X_1,\cdot)$ in $\omega_2$ in the distributional sense.
(iii) Conclude that $u(X_1,\cdot)\in H_0^1(\omega_2)$ for a.e. $X_1\in\omega_1$.

6. We would like to show that in Theorem 6.6 one cannot relax the assumption that $A_{12}$ is independent of $X_1$. Suppose indeed that we are under the assumptions of Theorem 6.6 with $A_{12} = A_{12}(X_1,X_2)$. Suppose that $u_\ell$, the solution of (6.42), converges toward $u_\infty$, the solution to (6.41), in $H^1(\Omega_{\ell_0})$ weak.
(i) Show that
$$
\int_{\Omega_{\ell_0}} A\nabla u_\infty\cdot\nabla v\,dx = \int_{\Omega_{\ell_0}} A_{22}\nabla_{X_2}u_\infty\cdot\nabla_{X_2}v\,dx \qquad \forall\,v\in H_0^1(\Omega_{\ell_0}).
$$
(ii) Show that
$$
\int_{\Omega_{\ell_0}} A_{12}\nabla_{X_2}u_\infty\cdot\nabla_{X_1}v\,dx = 0 \qquad \forall\,v\in H_0^1(\Omega_{\ell_0}).
$$
(iii) Show that the equality above is impossible in general.

7. Let $\Omega_\ell = (-\ell,\ell)\times(-1,1)$, $\omega = (-1,1)$. We denote by $C_0^1(\overline{\Omega}_\ell)$ the space of functions in $C^1(\overline{\Omega}_\ell)$ vanishing on $\{-\ell,\ell\}\times\omega$. We set
$$
V_\ell = \text{the closure in } H^1(\Omega_\ell) \text{ of } C_0^1(\overline{\Omega}_\ell).
$$
1. Show that on $V_\ell$ the two norms
$$
|\nabla v|_{2,\Omega_\ell},\qquad \big(|v|^2_{2,\Omega_\ell} + |\nabla v|^2_{2,\Omega_\ell}\big)^{\frac12}
$$
are equivalent.
2. Let $f\in L^2(\omega)$. Show that there exists a unique $u_\ell$ solution to
$$
u_\ell\in V_\ell,\qquad \int_{\Omega_\ell} \nabla u_\ell\cdot\nabla v\,dx = \int_{\Omega_\ell} f v\,dx \quad \forall\,v\in V_\ell. \qquad (1)
$$
3. Show that the solution to (1) is a weak solution to the problem
$$
-\Delta u_\ell = f \ \text{in } \Omega_\ell,\qquad u_\ell = 0 \ \text{on } \{-\ell,\ell\}\times\omega,\qquad \frac{\partial u_\ell}{\partial\nu} = 0 \ \text{on } (-\ell,\ell)\times\partial\omega.
$$
4. Let us assume from now on that
$$
\int_\omega f(x_2)\,dx_2 = 0.
$$
Set
$$
W = \Big\{ v\in H^1(\omega) \ \Big|\ \int_\omega v\,dx = 0 \Big\}.
$$
Show that there exists a unique $u_\infty$ solution to
$$
u_\infty\in W,\qquad \int_\omega \partial_{x_2}u_\infty\,\partial_{x_2}v\,dx = \int_\omega f v\,dx \quad \forall\,v\in W. \qquad (2)
$$
5. Let $v\in V_\ell$. Show that $v(x_1,\cdot)\in H^1(\omega)$ for almost every $x_1$ and that we have
$$
\int_{\Omega_\ell} \nabla u_\infty\cdot\nabla v\,dx = \int_{\Omega_\ell} f v\,dx.
$$
6. Show that $u_\ell$ converges towards $u_\infty$ in $H^1(\Omega_{\ell/2})$ with an exponential rate of convergence.

8. Show that when $\int_\omega f(x_2)\,dx_2 \neq 0$, $u_\ell$ might be unbounded (cf. [20]).
Chapter 7

Periodic Problems

In the previous chapter we addressed the convergence of problems set in cylinders becoming large in some directions, for data independent of these directions. To be independent of one direction can also be interpreted as being periodic in this direction for any period. The question then arises whether periodic data can force, at the limit, the solution of a periodic problem to be periodic. This is the kind of issue that we would like to address now.

7.1 A general theory

For the sake of simplicity we consider here only the case of periodic functions with period
$$
T = \prod_{i=1}^n (0, T_i) \qquad (7.1)
$$
where the $T_i$, $i = 1,\dots,n$, are positive numbers. If $f\in L^1_{loc}(\mathbb{R}^n)$ we will say that $f$ is periodic of period $T$ iff
$$
f(x + T_i e_i) = f(x) \qquad \text{a.e. } x\in\mathbb{R}^n,\ \forall\,i = 1,\dots,n. \qquad (7.2)
$$
Here $e_i$ denotes the canonical basis of $\mathbb{R}^n$. Let us then consider a matrix $A$ and a function $a$ such that
$$
A(x),\ a(x) \ \text{are } T\text{-periodic} \qquad (7.3)
$$
(this means for the matrix $A = (a_{ij})$ that all the entries are $T$-periodic). We also assume that $a_{ij}, a\in L^\infty(T)$ – which implies by (7.3) that $a_{ij}$, $a$ are uniformly bounded on $\mathbb{R}^n$ – and that for some positive constants $\lambda$, $\Lambda$ we have
$$
\lambda|\xi|^2 \le A(x)\xi\cdot\xi \qquad \forall\,\xi\in\mathbb{R}^n,\ \text{a.e. } x\in T, \qquad (7.4)
$$
$$
|A(x)\xi| \le \Lambda|\xi| \qquad \forall\,\xi\in\mathbb{R}^n,\ \text{a.e. } x\in T, \qquad (7.5)
$$
$$
\lambda \le a(x) \le \Lambda \qquad \text{a.e. } x\in T. \qquad (7.6)
$$
Note that in (7.4) and (7.6) we can assume the constant $\lambda$ to be the same at the expense of taking the minimum of the two. The same remark applies to $\Lambda$ in (7.5), (7.6). We define for $\ell > 0$
$$
\Omega_\ell = \prod_{i=1}^n (-\ell T_i, \ell T_i), \qquad (7.7)
$$
and consider a bounded open set $\Omega'_\ell$ such that
$$
\Omega_\ell \subset \Omega'_\ell. \qquad (7.8)
$$
Let us denote by $V_\ell$ a subspace of $H^1(\Omega'_\ell)$ such that
$$
V_\ell \ \text{is closed in } H^1(\Omega'_\ell),\qquad H_0^1(\Omega_\ell) \subset V_\ell \subset H^1(\Omega'_\ell). \qquad (7.9)
$$
(We suppose that the elements of $H_0^1(\Omega_\ell)$ are extended by 0 outside of $\Omega_\ell$.)

Figure 7.1: the period cell $T$ (with sides $T_1$, $T_2$), the union of cells $\Omega_\ell$, and the larger domain $\Omega'_\ell$.

Let us also denote by $f$ a $T$-periodic function such that
$$
f\in L^2(T). \qquad (7.10)
$$
Then by the Lax–Milgram theorem there exists a unique $u_\ell$ such that
$$
u_\ell\in V_\ell,\qquad \int_{\Omega'_\ell} A(x)\nabla u_\ell\cdot\nabla v + a(x)u_\ell v\,dx = \int_{\Omega'_\ell} f v\,dx \quad \forall\,v\in V_\ell. \qquad (7.11)
$$
We denote by
$$
H^1_{per}(T) = \text{the closure of } \{\, v|_T \ \big|\ v\in C^\infty(T),\ v \ T\text{-periodic} \,\} \qquad (7.12)
$$
where $C^\infty(T)$ denotes the space of restrictions of $C^\infty(\mathbb{R}^n)$-functions to $T$ ($v|_T$ denotes the restriction of $v$ to the cell $T$; the closure is understood in the $H^1(T)$ sense). As before, $H_0^1(\Omega)$ stands for the set of $H^1(\Omega)$ functions vanishing on the boundary of some domain $\Omega$; $H^1_{per}(T)$ stands here for the set of functions which are periodic on $T$ – i.e., such that
$$
v(x_1,\dots,x_{i-1},0,x_{i+1},\dots,x_n) = v(x_1,\dots,x_{i-1},T_i,x_{i+1},\dots,x_n)
$$
for almost every $(x_1,\dots,x_{i-1},0,x_{i+1},\dots,x_n)\in(0,T_1)\times\cdots\times(0,T_{i-1})\times\{0\}\times(0,T_{i+1})\times\cdots\times(0,T_n)$. Note that by definition $H^1_{per}(T)$ is a closed subspace of $H^1(T)$. Thus there exists a unique $u_\infty$ solution to
$$
u_\infty\in H^1_{per}(T),\qquad \int_T A(x)\nabla u_\infty\cdot\nabla v + a(x)u_\infty v\,dx = \int_T f v\,dx \quad \forall\,v\in H^1_{per}(T). \qquad (7.13)
$$
Let us first prove the following lemma. Lemma 7.2. Let us assume that u∞ is extended by periodicity to the whole Rn . Then we have for every Ω ⊂ Rn , Ω bounded A(x)∇u∞ · ∇v + a(x)u∞ v dx = f v dx ∀ v ∈ H01 (Ω). (7.15) Ω
Ω
Proof. In other words this theorem says that if a function is T -periodic and satisfies an elliptic equation in the period cell it satisfies this equation everywhere – of course for periodic data. Let v ∈ D(Ω) and suppose that v is extended by 0 outside Ω. One has if we denote by τ the vector of the period – i.e.,
τ = (T1 , T2 , . . . , Tn ) A(x)∇u∞ · ∇v + a(x)u∞ v dx = A(x)∇u∞ · ∇v + a(x)u∞ v dx (7.16) Ω Rn = A(x)∇u∞ · ∇v + a(x)u∞ v dx z∈Zn
Tz
where Tz = T + zτ , zτ being a shortcut for (z1 T1 , . . . , zn Tn ). Note that in this sum only a finite number of terms are not equal to 0 since v is assumed to have a compact support.
96
Chapter 7. Periodic Problems
Making the change of variable x + zτ → x in Tz and using the T -periodicity of A, a, u∞ , that is to say each of them satisfies like the function a a.e. x ∈ T
a(x + zτ ) = a(x)
(7.17)
(see (7.2)) – we obtain A(x)∇u∞ · ∇v + a(x)u∞ v dx Ω = A(x)∇u∞ · ∇v(x + zτ ) + a(x)u∞ v(x + zτ ) dx z∈Zn
=
T
T
(7.18)
A(x)∇u∞ · ∇˜ v + a(x)u∞ v˜ dx
where we have set v˜ =
v(x + zτ ).
z∈Zn
Now clearly this function is indefinitely differentiable and T periodic. Thus from (7.13), (7.18) we derive A(x)∇u∞ · ∇v + a(x)u∞ v dx = f v˜ dx Ω T = f (x)v(x + zτ ) dx z∈Zn
=
z∈Zn
=
Ω
T
Tz
(7.19) f (x − zτ )v(x) dx
f v dx
by the periodicity of f . This completes the proof since D(Ω) is dense in H01 (Ω).
Remark 7.1. One should note that the periodic extension of $u_\infty$ to $\Omega$ is such that
$$
u_\infty\in H^1(\Omega). \qquad (7.20)
$$
This is true for smooth functions of $H^1_{per}(T)$ and, by density, for any function of $H^1_{per}(T)$ – see (7.12).
End of the proof of Theorem 7.1. We proceed basically as in the proof of Theorem 6.6, starting from Step 2. Let $\ell_0$ be such that $0 < \ell_0 \le \ell - 1$. There exists a Lipschitz continuous function $\rho$ such that
$$
0\le\rho\le 1,\qquad \rho = 1 \ \text{on } \Omega_{\ell_0},\qquad \rho = 0 \ \text{on } \mathbb{R}^n\setminus\Omega_{\ell_0+1},\qquad |\nabla\rho|\le C_T. \qquad (7.21)
$$
(Note that this function belongs to $H_0^1(\Omega_{\ell_0+1})$.) Then we have
$$
(u_\ell - u_\infty)\rho \in H_0^1(\Omega_{\ell_0+1}). \qquad (7.22)
$$
Since $V_\ell$ is supposed to contain $H_0^1(\Omega_\ell)$ – see (7.9) – combining (7.11), (7.15) we have
$$
\int_{\Omega_\ell} A\nabla(u_\ell - u_\infty)\cdot\nabla v + a(u_\ell - u_\infty)v\,dx = 0 \qquad \forall\,v\in H_0^1(\Omega_\ell). \qquad (7.23)
$$
From (7.22) we derive
$$
\int_{\Omega_\ell} A\nabla(u_\ell - u_\infty)\cdot\nabla\{(u_\ell - u_\infty)\rho\} + a(u_\ell - u_\infty)^2\rho\,dx = 0.
$$
This implies
$$
\int_{\Omega_\ell} \big\{A\nabla(u_\ell - u_\infty)\cdot\nabla(u_\ell - u_\infty)\big\}\rho + a(u_\ell - u_\infty)^2\rho\,dx
= -\int_{\Omega_\ell} \big(A\nabla(u_\ell - u_\infty)\cdot\nabla\rho\big)(u_\ell - u_\infty)\,dx.
$$
In the first integral above it is enough to integrate on $\Omega_{\ell_0+1}$, and in the second one on $\Omega_{\ell_0+1}\setminus\Omega_{\ell_0}$. Moreover, using the ellipticity conditions (7.4), (7.5) and (7.6) we easily derive
$$
\lambda\int_{\Omega_{\ell_0+1}} \big\{|\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\big\}\rho\,dx
\le \Lambda C_T\int_{\Omega_{\ell_0+1}\setminus\Omega_{\ell_0}} |\nabla(u_\ell - u_\infty)|\,|u_\ell - u_\infty|\,dx
$$
(we used (7.21)). Since $\rho = 1$ on $\Omega_{\ell_0}$, by the usual Young inequality
$$
ab \le \frac12(a^2 + b^2)
$$
we obtain
$$
\int_{\Omega_{\ell_0}} |\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\,dx
\le \frac{\Lambda C_T}{2\lambda}\int_{\Omega_{\ell_0+1}\setminus\Omega_{\ell_0}} |\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\,dx.
$$
Setting $c = \frac{\Lambda C_T}{2\lambda}$ and noticing that
$$
\int_{\Omega_{\ell_0+1}\setminus\Omega_{\ell_0}} |\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\,dx
= \int_{\Omega_{\ell_0+1}} |\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\,dx - \int_{\Omega_{\ell_0}} |\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\,dx,
$$
we derive, as in the proof of Theorem 6.6, that
$$
\int_{\Omega_{\ell_0}} |\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\,dx
\le \frac{c}{1+c}\int_{\Omega_{\ell_0+1}} |\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\,dx.
$$
Iterating this formula from $\ell/2$, $[\ell/2]$ times ($[\ell/2]$ denotes the integer part of $\ell/2$), we obtain
$$
\int_{\Omega_{\ell/2}} |\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\,dx
\le \Big(\frac{c}{1+c}\Big)^{[\ell/2]}\int_{\Omega_{\ell/2+[\ell/2]}} |\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\,dx.
$$
Noticing as in Theorem 6.6 that $\frac{\ell}{2} - 1 \le \big[\frac{\ell}{2}\big] \le \frac{\ell}{2}$, we get
$$
\int_{\Omega_{\ell/2}} |\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\,dx
\le c_0\,e^{-\sigma\ell}\int_{\Omega_\ell} |\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\,dx \qquad (7.24)
$$
where $c_0 = \frac{1+c}{c}$, $\sigma = \frac12\ln\frac{1+c}{c}$.

Next we need to estimate the integral on the right-hand side of (7.24). For this, choosing $v = u_\ell$ in (7.11) we get
$$
\int_{\Omega'_\ell} A\nabla u_\ell\cdot\nabla u_\ell + a u_\ell^2\,dx = \int_{\Omega'_\ell} f u_\ell\,dx.
$$
Using the ellipticity conditions and the Cauchy–Schwarz inequality we have
$$
\lambda\int_{\Omega'_\ell} |\nabla u_\ell|^2 + u_\ell^2\,dx \le |f|_{2,\Omega'_\ell}\Big(\int_{\Omega'_\ell} u_\ell^2\,dx\Big)^{\frac12}
\le |f|_{2,\Omega'_\ell}\Big(\int_{\Omega'_\ell} |\nabla u_\ell|^2 + u_\ell^2\,dx\Big)^{\frac12}.
$$
From this it follows that
$$
\int_{\Omega'_\ell} |\nabla u_\ell|^2 + u_\ell^2\,dx \le \frac{|f|^2_{2,\Omega'_\ell}}{\lambda^2}. \qquad (7.25)
$$
Going back to (7.24) we have
$$
|u_\ell - u_\infty|^2_{H^1(\Omega_{\ell/2})} \le 2c_0\,e^{-\sigma\ell}\int_{\Omega_\ell} |\nabla u_\ell|^2 + |\nabla u_\infty|^2 + u_\ell^2 + u_\infty^2\,dx
\le 2c_0\,e^{-\sigma\ell}\Big\{\frac{|f|^2_{2,\Omega'_\ell}}{\lambda^2} + |u_\infty|^2_{H^1(\Omega'_\ell)}\Big\}.
$$
The result follows by choosing $c = 2c_0\big(\frac{1}{\lambda^2}\vee 1\big)$, where $\vee$ denotes the maximum of two numbers. This completes the proof of the theorem. $\square$

Remark 7.2. One should notice that the constants $c$, $\sigma$ depend only on $\lambda$, $\Lambda$ and $T$. They are independent of the choice of $V_\ell$ – that is to say of the boundary conditions that we choose on $\Omega'_\ell$ – and of $\Omega'_\ell$ itself. For the estimate (7.24) we only used (7.23), which holds in fact if we only suppose $f$ periodic on $\Omega_\ell$ or $\Omega_{[\ell]+1}$ (see the proof of (7.15)). In the case where $V_\ell = H_0^1(\Omega'_\ell)$ one can use the same technique as in Chapter 6 to show convergence of $u_\ell$ toward $u_\infty$ (see the exercises).

As a corollary of formula (7.14) we have

Theorem 7.3. Suppose that $\Omega'_\ell = \Omega_\ell$. Let $u_\ell$, $u_\infty$ be the solutions to (7.11), (7.13). Then there exist two constants $c_1$, $\sigma_1$, depending on $\lambda$, $\Lambda$, $n$, $T$ only, such that
$$
|u_\ell - u_\infty|^2_{H^1(\Omega_{\ell/2})} \le c_1\,e^{-\sigma_1\ell}\,|f|^2_{2,T}. \qquad (7.26)
$$
Proof. Of course we suppose that we are under the assumptions of Theorem 7.1. Then by (7.24) we have
$$
|u_\ell - u_\infty|^2_{H^1(\Omega_{\ell/2})} \le c_0\,e^{-\sigma\ell}\int_{\Omega_\ell} |\nabla(u_\ell - u_\infty)|^2 + (u_\ell - u_\infty)^2\,dx \qquad (7.27)
$$
with $c_0 = \frac{c+1}{c}$, $\sigma = \frac12\ln\frac{c+1}{c}$, $c = \frac{\Lambda C_T}{2\lambda}$. By (7.25) we have
$$
\int_{\Omega_\ell} |\nabla u_\ell|^2 + u_\ell^2\,dx \le \frac{|f|^2_{2,\Omega_\ell}}{\lambda^2}
\le \frac{|f|^2_{2,\Omega_{[\ell]+1}}}{\lambda^2} \le \frac{2^n([\ell]+1)^n|f|^2_{2,T}}{\lambda^2} \le \frac{2^n(\ell+1)^n|f|^2_{2,T}}{\lambda^2}. \qquad (7.28)
$$
We used the periodicity of $f$: that is to say, if $N\in\mathbb{N}$,
$$
\int_{\Omega_N} f^2\,dx = \sum_z\int_{T_z} f^2\,dx = \sum_z\int_T f^2\,dx = (2N)^n|f|^2_{2,T}
$$
(the $T_z$ being the cells covering $\Omega_N$). Taking now $v = u_\infty$ in (7.13) we have
$$
\int_T A(x)\nabla u_\infty\cdot\nabla u_\infty + a u_\infty^2\,dx = \int_T f u_\infty\,dx. \qquad (7.29)
$$
Using the ellipticity conditions and the Cauchy–Schwarz inequality we obtain
$$
\lambda\int_T |\nabla u_\infty|^2 + u_\infty^2\,dx \le |f|_{2,T}\,|u_\infty|_{2,T},
$$
which implies
$$
\int_T |\nabla u_\infty|^2 + u_\infty^2\,dx \le \frac{|f|^2_{2,T}}{\lambda^2}.
$$
Thus, since $u_\infty$ is periodic, it follows that
$$
\int_{\Omega_\ell} |\nabla u_\infty|^2 + u_\infty^2\,dx \le \int_{\Omega_{[\ell]+1}} |\nabla u_\infty|^2 + u_\infty^2\,dx
= \{2([\ell]+1)\}^n\int_T |\nabla u_\infty|^2 + u_\infty^2\,dx \le \frac{2^n(\ell+1)^n}{\lambda^2}\,|f|^2_{2,T}. \qquad (7.30)
$$
Combining then (7.27), (7.28), (7.30) we get
$$
|u_\ell - u_\infty|^2_{H^1(\Omega_{\ell/2})} \le 2c_0\,e^{-\sigma\ell}\int_{\Omega_\ell} |\nabla u_\ell|^2 + u_\ell^2 + |\nabla u_\infty|^2 + u_\infty^2\,dx
\le \frac{4c_0\,2^n(\ell+1)^n}{\lambda^2}\,e^{-\sigma\ell}\,|f|^2_{2,T}.
$$
The result follows with $\sigma_1$ taken as any constant less than $\sigma$. $\square$
For $\Omega'_\ell$ arbitrary, larger than $\Omega_\ell$, we have

Theorem 7.4. Under the assumptions of Theorem 7.1 suppose that
$$
|f|^2_{2,\Omega'_\ell} \le C\,e^{(\sigma-\delta)\ell} \qquad (7.31)
$$
for some $0 < \delta < \sigma$ and some positive constant $C$. Then we have for some constant $c'$
$$
|u_\ell - u_\infty|^2_{H^1(\Omega_{\ell/2})} \le c'\,e^{-\delta\ell}, \qquad (7.32)
$$
i.e., $u_\ell$ converges toward $u_\infty$ at an exponential rate on $\Omega_{\ell/2}$.

Proof. Combining (7.14), (7.30) and (7.31) we get
$$
|u_\ell - u_\infty|^2_{H^1(\Omega_{\ell/2})} \le c\,e^{-\sigma\ell}\Big\{C\,e^{(\sigma-\delta)\ell} + \frac{2^n(\ell+1)^n}{\lambda^2}\,|f|^2_{2,T}\Big\} \le c'\,e^{-\delta\ell}
$$
for some constant $c'$. $\square$

Remark 7.3. In the case where $f$ is periodic, (7.32) holds for instance when $\Omega'_\ell\subset\Omega_{\ell'}$ with $\ell' = O\big(e^{\frac{(\sigma-\delta)\ell}{n}}\big)$. Note that (7.32) holds when (7.31) holds, and thus $f$ can be chosen arbitrarily outside of $\Omega_\ell$ (see also Remark 7.2), and $V_\ell$ can also be arbitrary provided it satisfies the assumptions of Theorem 7.1. Note that Theorems 7.3, 7.4 show the convergence of $u_\ell$ toward $u_\infty$ in $H^1(\Omega)$ for any bounded open subset $\Omega$ of $\mathbb{R}^n$, and this at an exponential rate of convergence. This is clear since for $\ell$ large enough we have $\Omega\subset\Omega_{\ell/2}$.
7.2 Some additional remarks

We have addressed a non-degenerate case, that is to say a case where $a(x)\ge\lambda > 0$. The results developed in the preceding section can be extended easily to the case where
$$
0\le a(x),\qquad a(x)\not\equiv 0, \qquad (7.33)
$$
we refer to [36] for details. In the case where
(7.34)
the situation is quite different. Indeed for instance a solution to (7.13) cannot be ensured by the Lax–Milgram theorem. Let us examine what can happen in dimension 1. For that let us consider ∈N
Ω = (−T1 , T1 ),
and u the weak solution to −(A(x)u ) = f in Ω , u (−T1 ) = u (T1 ) = 0.
(7.35)
(7.36)
In (7.36) we suppose for instance that f ∈ L2 (T )
(7.37)
where T = (0, T1 ) and A, f are T -periodic measurable functions such that for A we have 0 < λ ≤ A(x) ≤ Λ a.e. x ∈ R. (7.38) Then we have Theorem 7.5. If
T
f (x) dx = 0,
(7.39)
the solution to (7.36) is unbounded when → +∞. Proof. Before going into the details of the proof one has to note that the theorem above claims that in order to obtain convergence of u some condition on f is necessary. Let us set x F (x) = We have F (x + T1 ) =
0
x+T1
0
(cf. Theorem 8.1).
f (s) ds,
f (s) ds =
0
c0 =
x
f (s) ds +
T
f (s) ds.
(7.40)
f (s) ds = F (x) + c0
(7.41)
x+T1
x
Integrating the first equation of (7.36) we have
\[
-A(x)u_\ell' = F(x) + c_1
\]
for some constant $c_1$, and thus
\[
-u_\ell' = \frac{F(x)}{A(x)} + \frac{c_1}{A(x)}. \tag{7.42}
\]
The value of $c_1$ is easily determined since
\[
0 = \int_{-\ell T_1}^{\ell T_1} -u_\ell'(x)\,dx = \int_{-\ell T_1}^{\ell T_1} \frac{F(x)}{A(x)}\,dx + c_1 \int_{-\ell T_1}^{\ell T_1} \frac{dx}{A(x)}. \tag{7.43}
\]
By periodicity of $A$ we easily get
\[
\int_{-\ell T_1}^{\ell T_1} \frac{dx}{A(x)} = 2\ell \int_T \frac{dx}{A(x)}. \tag{7.44}
\]
Moreover, by (7.41), for any integer $k$ we have
\[
\int_{kT_1}^{(k+1)T_1}\frac{F(x)}{A(x)}\,dx = \int_{kT_1}^{(k+1)T_1}\frac{F(x+T_1)-c_0}{A(x)}\,dx
= \int_{(k+1)T_1}^{(k+2)T_1}\frac{F(x)}{A(x)}\,dx - c_0\int_T\frac{dx}{A(x)}.
\]
Setting
\[
m_1 = \int_T\frac{dx}{A(x)},\qquad m_2 = \int_T\frac{F(x)}{A(x)}\,dx,
\]
we derive – starting from $k=0$ and iterating – that
\[
\int_{kT_1}^{(k+1)T_1}\frac{F(x)}{A(x)}\,dx = \int_T\frac{F(x)}{A(x)}\,dx + c_0 k m_1 = m_2 + c_0 k m_1.
\]
It follows that
\[
\int_{-\ell T_1}^{\ell T_1}\frac{F(x)}{A(x)}\,dx = \sum_{k=-\ell}^{\ell-1}\int_{kT_1}^{(k+1)T_1}\frac{F(x)}{A(x)}\,dx = 2\ell m_2 - c_0\ell m_1.
\]
Going back to (7.42), (7.43) this implies that
\[
2\ell m_1 c_1 + 2\ell m_2 - c_0\ell m_1 = 0
\iff c_1 = \frac{c_0}{2} - \frac{m_2}{m_1}. \tag{7.45}
\]
Then from (7.42), integrating between $0$ and $\ell T_1$, we get
\[
u_\ell(0) = \int_0^{\ell T_1}\frac{F(x)}{A(x)}\,dx + c_1\int_0^{\ell T_1}\frac{dx}{A(x)}
= \sum_{k=0}^{\ell-1}(m_2 + c_0 k m_1) + c_1\,\ell m_1
= \ell m_2 + \frac{c_0 m_1\,\ell(\ell-1)}{2} + \Big(\frac{c_0}{2} - \frac{m_2}{m_1}\Big)\ell m_1
= \frac{c_0 m_1\,\ell^2}{2} \longrightarrow \pm\infty \quad\text{when } c_0 \neq 0.
\]
This completes the proof of the theorem. $\square$
The interested reader will find in [36] complements on these issues, i.e., when (7.34) holds.
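The $\ell^2$ growth is easy to observe numerically. The sketch below is my own illustration, not part of the book: the coefficient $A(x) = 2 + \sin 2\pi x$ (so $T_1 = 1$, $m_1 = 1/\sqrt{3}$) and the right-hand side $f \equiv 1$ (so $c_0 = 1$) are arbitrary choices. The 1D problem is solved exactly by quadrature, and the computed boundary-value $u_\ell(0)$ is compared with the value $c_0 m_1 \ell^2/2$ obtained in the proof.

```python
import math

A = lambda x: 2.0 + math.sin(2.0 * math.pi * x)   # 1-periodic, 1 <= A <= 3

def midpoint(g, a, b, n=20000):
    # composite midpoint quadrature
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

def u_at_0(l):
    # solve -(A u')' = 1 on (-l, l), u(-l) = u(l) = 0 exactly:
    # -A(x) u'(x) = x + c1, the constant c1 fixed by u(l) - u(-l) = 0
    c1 = -midpoint(lambda x: x / A(x), -l, l) / midpoint(lambda x: 1.0 / A(x), -l, l)
    return -midpoint(lambda x: (x + c1) / A(x), -l, 0.0)   # u(0) = int_{-l}^0 u'

m1 = midpoint(lambda x: 1.0 / A(x), 0.0, 1.0)     # here m1 = 1/sqrt(3)
for l in (2, 4, 8):
    predicted = 0.5 * m1 * l * l                  # u_l(0) = c0 m1 l^2 / 2, c0 = 1
    assert abs(u_at_0(l) - predicted) < 1e-3 * predicted
```

Doubling $\ell$ multiplies $u_\ell(0)$ by four, in agreement with the exact identity $u_\ell(0) = c_0 m_1\ell^2/2$ derived above.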
Exercises

1. One considers the problem
\[
u \in H^1_{per}(T),\qquad \int_T A\nabla u\cdot\nabla v\,dx = \int_T fv\,dx \quad\forall\, v \in H^1_{per}(T), \tag{P}
\]
i.e., with respect to (7.13) we choose $a = 0$.
(a) Show that the existence of a solution to (P) imposes
\[
\int_T f\,dx = 0. \tag{*}
\]
(b) Show that if $u$ is a solution then $u + c$ is also a solution for any constant $c$.
(c) Under the assumption (*) show that the problem
\[
u \in H^1_{per}(T),\quad \int_T u\,dx = 0,\qquad \int_T A\nabla u\cdot\nabla v\,dx = \int_T fv\,dx \quad\forall\, v \in H^1_{per}(T)
\]
admits a unique solution.

2. Under the conditions of Theorem 7.5 suppose that $\int_T f(x)\,dx = 0$. Show that $u_\ell$, the solution to (7.36), is $T$-periodic.
3. Suppose that (7.1)–(7.8) hold. For any $f \in L^2(T)$ – extended by periodicity – let $u_\infty$ be the solution to (7.13) and $u_\ell$, $u_{\ell'}$ the solutions to
\[
u_\ell \in H_0^1(\Omega_\ell),\qquad \int_{\Omega_\ell} A(x)\nabla u_\ell\cdot\nabla v + a u_\ell v\,dx = \int_{\Omega_\ell} fv\,dx \quad\forall\, v\in H_0^1(\Omega_\ell),
\]
\[
u_{\ell'} \in H_0^1(\Omega_{\ell'}),\qquad \int_{\Omega_{\ell'}} A(x)\nabla u_{\ell'}\cdot\nabla v + a u_{\ell'} v\,dx = \int_{\Omega_{\ell'}} fv\,dx \quad\forall\, v\in H_0^1(\Omega_{\ell'}).
\]
(i) Suppose $f \ge 0$. Show that $0 \le u_\ell \le u_\infty$ and $0 \le u_{\ell'} \le u_\infty$. Hint: for the last inequality notice that $(u_\ell - u_\infty)^+ \in H_0^1(\Omega_\ell)$ and use (7.15). Show that for $\ell > \ell'$ one has $0 \le u_{\ell'} \le u_\ell$. Show that there exist positive constants $c$, $\sigma$ such that
\[
|u_\ell - u_\infty|_{2,\Omega_{\ell/2}} \le c\,e^{-\sigma\ell}.
\]
(ii) Show the same convergence when $f$ is arbitrary.
Chapter 8

Homogenization

The theory of homogenization was developed during the last forty years. Its success lies in the fact that practically every partial differential equation can be homogenized. Moreover, during the last decades composite materials have invaded our world, and they are very well adapted to explaining the principle of the theory. Suppose that we build a "composite", i.e., a material made of different materials, by juxtaposing small identical cells each containing the different types of material – for instance a three-material composite, see Figure 8.1 below. Making the cells smaller and smaller, at the limit we get

Figure 8.1. A composite made of periodically repeated cells occupying the domain Ω.

a new material – a composite – which inherits properties that can be very different from those of the materials composing it. For instance, mixing three materials with different heat conductivities in the way above leads at the limit to a new material whose conductivity we can study by cutting a piece of it – say Ω. This is the kind of issue addressed by homogenization techniques (see [14], [40], [41], [72], [46], [47], [61], [87], [92]).
8.1 More on periodic functions

As in the previous chapter we denote by
\[
T = \prod_{i=1}^n\,(0, T_i) \tag{8.1}
\]
the "period" of a periodic function. Let $f$ be a $T$-periodic function that we suppose to be defined on the whole $\mathbb{R}^n$. Then for any $\varepsilon > 0$ the function defined by
\[
f_\varepsilon(x) = f\Big(\frac{x}{\varepsilon}\Big)
\]
is a periodic function with period $\varepsilon T$ – cf. Figure 8.2.

Figure 8.2. The function $f$ of period $T$ and the function $f_\varepsilon$ for $\varepsilon = \frac{1}{2}$.

Indeed we have, for every $i = 1, \ldots, n$,
\[
f_\varepsilon(x + \varepsilon T_i e_i) = f\Big(\frac{x + \varepsilon T_i e_i}{\varepsilon}\Big) = f\Big(\frac{x}{\varepsilon} + T_i e_i\Big) = f\Big(\frac{x}{\varepsilon}\Big) = f_\varepsilon(x). \tag{8.2}
\]
Then we would like to study the behaviour of $f_\varepsilon$ when $\varepsilon \to 0$. Of course it is clear that pointwise convergence is out of reach, as one can see in Figure 8.2 above. Let us first remark:

Theorem 8.1 (Invariance of the integral by translation). Let $f \in L^1(T)$ be extended periodically to $\mathbb{R}^n$. For every $t_0 \in \mathbb{R}^n$ we have
\[
\int_{t_0+T} f(x)\,dx = \int_T f(x)\,dx, \tag{8.3}
\]
namely the integral of $f$ on any domain obtained by translation of $T$ is the same. ($t_0 + T = \{\, t_0 + t \mid t \in T \,\}$.)

Proof. Suppose first that we are in dimension 1 and denote $T_1$ also by $T$. Then $t_0 + T = (t_0, t_0 + T)$ and thus
\[
\int_{t_0}^{t_0+T} f(x)\,dx = \int_{t_0}^{0} f(x)\,dx + \int_0^T f(x)\,dx + \int_T^{t_0+T} f(x)\,dx. \tag{8.4}
\]
Making the change of variable $x \to x + T$ in the last integral, it cancels the first one and leads to
\[
\int_{t_0}^{t_0+T} f(x)\,dx = \int_0^T f(x)\,dx = \int_T f(x)\,dx. \tag{8.5}
\]
In higher dimensions one remarks that
\[
\int_{t_0+T} f(x)\,dx = \int_{t_0^1}^{t_0^1+T_1}\int_{t_0^2}^{t_0^2+T_2}\cdots\int_{t_0^n}^{t_0^n+T_n} f(x)\,dx
\]
where $t_0 = (t_0^1, t_0^2, \ldots, t_0^n)$. The result is then a consequence of (8.5) and the Fubini theorem. $\square$

Regarding the behaviour of $f_\varepsilon$ we have:

Theorem 8.2 (Weak convergence of $f_\varepsilon$). Let $f \in L^p(T)$, $1 < p < +\infty$, be $T$-periodic. For every bounded domain $\Omega$ we have, when $\varepsilon \to 0$,
\[
f_\varepsilon \rightharpoonup \frac{1}{|T|}\int_T f(x)\,dx := \fint_T f(x)\,dx \quad\text{in } L^p(\Omega). \tag{8.6}
\]
($|T|$ denotes the measure of $T$; thus $f_\varepsilon$ converges weakly toward its average.)

Figure 8.3. The domain Ω covered by translated cells εT.

Proof. Since $\Omega$ is bounded it can be included – see Figure 8.3 – in some big "rectangle"
\[
\Omega_N = \prod_{i=1}^n\,(-N T_i, N T_i). \tag{8.7}
\]
For $\varepsilon < 1$ let us denote by $\tau^\varepsilon$ the set of the shifted copies $T_\varepsilon$ of $\varepsilon T$ covering $\Omega$; here by a shifted copy of $\varepsilon T$ we mean a set $\varepsilon z_\tau + \varepsilon T$ (see (7.16) for the definition of $\tau$; recall that $z_\tau$ is the vector $(z_1 T_1, \ldots, z_n T_n)$ with $z = (z_1, \ldots, z_n) \in \mathbb{Z}^n$). Then we have
\[
\int_\Omega |f_\varepsilon|^p\,dx \le \sum_{T_\varepsilon\in\tau^\varepsilon}\int_{T_\varepsilon}|f_\varepsilon|^p\,dx = (\#\tau^\varepsilon)\int_{\varepsilon T}|f_\varepsilon|^p\,dx \tag{8.8}
\]
by Theorem 8.1 – $\#\tau^\varepsilon$ denoting the number of elements of $\tau^\varepsilon$. Thus we have
\[
\int_\Omega |f_\varepsilon|^p\,dx \le (\#\tau^\varepsilon)\int_{\varepsilon T}\Big|f\Big(\frac{x}{\varepsilon}\Big)\Big|^p\,dx = (\#\tau^\varepsilon)\,\varepsilon^n\int_T |f(y)|^p\,dy
\]
by setting $y = \frac{x}{\varepsilon}$. This implies
\[
\int_\Omega |f_\varepsilon|^p\,dx \le (\#\tau^\varepsilon)\,\varepsilon^n|T|\cdot\frac{1}{|T|}\int_T |f(y)|^p\,dy.
\]
Now $(\#\tau^\varepsilon)\varepsilon^n|T|$ is the volume of the $T_\varepsilon$ covering $\Omega$. These sets $T_\varepsilon$ are certainly all contained in $\Omega_{N+1}$ and thus we have
\[
\int_\Omega |f_\varepsilon|^p\,dx \le |\Omega_{N+1}|\,\frac{1}{|T|}\int_T |f(y)|^p\,dy, \tag{8.9}
\]
which means that $f_\varepsilon$ is bounded in $L^p(\Omega)$ independently of $\varepsilon$. Thus for any $p > 1$, up to a subsequence, we have for some $g$
\[
f_\varepsilon \rightharpoonup g \quad\text{in } L^p(\Omega). \tag{8.10}
\]
We need to identify $g$. For that let $\varphi \in \mathcal{D}(\Omega)$ and suppose that $\varphi$ is extended by $0$ outside of $\Omega$. Then we have
\[
\int_\Omega f_\varepsilon(x)\varphi(x)\,dx = \int_{\mathbb{R}^n} f_\varepsilon(x)\varphi(x)\,dx = \sum_{z\in\mathbb{Z}^n}\int_{T_z(\varepsilon)} f_\varepsilon(x)\varphi(x)\,dx
\]
where $T_z(\varepsilon)$ is the cell $\varepsilon T + \varepsilon z_\tau$. Setting $x = \varepsilon y + \varepsilon z_\tau$ we get, by periodicity of $f$,
\[
\int_\Omega f_\varepsilon\varphi\,dx = \sum_{z\in\mathbb{Z}^n}\int_T f(y)\,\varphi(\varepsilon y + z_\tau\varepsilon)\,\varepsilon^n\,dy
= \frac{1}{|T|}\int_T f(y)\sum_{z\in\mathbb{Z}^n}\varphi(\varepsilon y + z_\tau\varepsilon)\,\varepsilon^n|T|\,dy.
\]
Now for any $y$
\[
\sum_{z\in\mathbb{Z}^n}\varphi(\varepsilon y + \varepsilon z_\tau)\,\varepsilon^n|T| \longrightarrow \int_\Omega \varphi(x)\,dx \tag{8.11}
\]
(this is a Riemann sum for $\varphi$ – note that due to the compact support of $\varphi$ only a finite number of terms are not zero). Since this Riemann sum is also bounded, by the Lebesgue theorem we get
\[
\lim_{\varepsilon\to 0}\int_\Omega f_\varepsilon\varphi\,dx = \frac{1}{|T|}\int_T f(y)\,dy \cdot \int_\Omega\varphi(x)\,dx. \tag{8.12}
\]
This identifies $g$ with $\frac{1}{|T|}\int_T f(y)\,dy$ and, by the uniqueness of the limit, completes the proof of the theorem. Note that since the weak limit is uniquely determined it is indeed the whole "sequence" $f_\varepsilon$ which satisfies (8.10). $\square$
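The averaging mechanism of Theorem 8.2 is easy to observe numerically. The following sketch is my own illustration, not part of the book; the choices $f(y) = \sin^2 2\pi y$ (period 1, mean $\frac12$), the test function $\varphi(x) = x(1-x)$ and the quadrature parameters are ad hoc. It checks that $\int_0^1 f(x/\varepsilon)\varphi(x)\,dx$ approaches $(\fint f)\int\varphi = \frac{1}{12}$ as $\varepsilon \to 0$:

```python
import math

f   = lambda y: math.sin(2.0 * math.pi * y) ** 2   # 1-periodic with mean 1/2
phi = lambda x: x * (1.0 - x)                      # smooth test function on (0, 1)

def midpoint(g, a, b, n=100000):
    # composite midpoint quadrature, fine enough to resolve the oscillations
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

target = 0.5 * midpoint(phi, 0.0, 1.0)             # mean(f) * int(phi) = 1/12
errs = [abs(midpoint(lambda x, e=eps: f(x / e) * phi(x), 0.0, 1.0) - target)
        for eps in (1 / 4, 1 / 16, 1 / 64)]
assert errs[2] < 1e-3 and errs[2] < errs[0]        # converges to the mean
```

Pointwise, $f(x/\varepsilon)$ does not converge at all; it is only the integrals against fixed test functions that settle down to the average, exactly as (8.6) asserts.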
8.2 Homogenization of elliptic equations

Let us denote by $A$ a $T$-periodic matrix, namely a matrix such that each entry is $T$-periodic. Furthermore assume that
\[
\lambda|\xi|^2 \le A(x)\xi\cdot\xi \quad\text{a.e. } x\in\mathbb{R}^n,\ \forall\,\xi\in\mathbb{R}^n, \tag{8.13}
\]
\[
|A(x)\xi| \le \Lambda|\xi| \quad\text{a.e. } x\in\mathbb{R}^n,\ \forall\,\xi\in\mathbb{R}^n. \tag{8.14}
\]
Let $f \in H^{-1}(\Omega)$. Then for every $\varepsilon > 0$, by the Lax–Milgram theorem, there exists a unique $u_\varepsilon$ solution to
\[
u_\varepsilon\in H_0^1(\Omega),\qquad \int_\Omega A\Big(\frac{x}{\varepsilon}\Big)\nabla u_\varepsilon\cdot\nabla v\,dx = \langle f, v\rangle \quad\forall\, v\in H_0^1(\Omega). \tag{8.15}
\]
We are interested in finding the limit of $u_\varepsilon$ when $\varepsilon \to 0$. Note that $A(\frac{x}{\varepsilon})$ is now $\varepsilon T$-periodic – i.e., it oscillates very fast – and so is suited to represent a material whose diffusion matrix is periodic with period approaching $0$, that is to say a material of the type introduced above.

8.2.1 The one-dimensional case

In this case one can obtain the convergence of the solution relatively simply. Thus let us denote by $\Omega$ an interval, $\Omega = (\alpha, \beta)$, and by $u_\varepsilon$ the solution to
\[
u_\varepsilon\in H_0^1(\Omega),\qquad \int_\Omega A\Big(\frac{x}{\varepsilon}\Big)u_\varepsilon' v'\,dx = \langle f, v\rangle \quad\forall\, v\in H_0^1(\Omega). \tag{8.16}
\]
In this system we assume that for some positive constants $\lambda$, $\Lambda$ we have
\[
A\in L^\infty(\Omega),\qquad \lambda \le A(x) \le \Lambda \quad\text{a.e. } x\in\Omega, \tag{8.17}
\]
\[
A \ \text{is } T_1\text{-periodic}, \tag{8.18}
\]
and $f$ is a distribution such that
\[
f\in H^{-1}(\Omega). \tag{8.19}
\]
Then we have

Theorem 8.3. Under the assumptions above we have
\[
u_\varepsilon \longrightarrow u_0 \ \text{in } L^2(\Omega),\qquad u_\varepsilon \rightharpoonup u_0 \ \text{in } H_0^1(\Omega), \tag{8.20}
\]
where $u_0$ is the weak solution to
\[
u_0\in H_0^1(\Omega),\qquad \int_\Omega A_0\, u_0' v'\,dx = \langle f, v\rangle \quad\forall\, v\in H_0^1(\Omega),
\]
with $A_0$ given by
\[
A_0 = \Big(\fint_T \frac{1}{A}\,dx\Big)^{-1}. \tag{8.21}
\]
(See (8.6) for the definition of $\fint$.)

Proof. Taking $v = u_\varepsilon$ in (8.16) we derive easily
\[
\lambda|u_\varepsilon'|_{2,\Omega}^2 \le \langle f, u_\varepsilon\rangle \le |f|_*\,|u_\varepsilon'|_{2,\Omega}, \tag{8.22}
\]
where $|f|_*$ denotes the strong dual norm on $H^{-1}(\Omega)$ when $H_0^1(\Omega)$ is equipped with the norm $|u'|_{2,\Omega}$. From (8.22) it follows that
\[
|u_\varepsilon'|_{2,\Omega} \le \frac{|f|_*}{\lambda},
\]
that is to say $u_\varepsilon$ is bounded in $H_0^1(\Omega)$ independently of $\varepsilon$. Thus, up to a subsequence, for some $u_0 \in H_0^1(\Omega)$ we have
\[
u_\varepsilon \rightharpoonup u_0 \ \text{in } H_0^1(\Omega),\qquad u_\varepsilon \longrightarrow u_0 \ \text{in } L^2(\Omega). \tag{8.23}
\]
We have used here the compactness of the embedding of $H_0^1(\Omega)$ into $L^2(\Omega)$. This also implies that
\[
\Big|A\Big(\frac{x}{\varepsilon}\Big)u_\varepsilon'\Big|_{2,\Omega} \le \Lambda\,|u_\varepsilon'|_{2,\Omega} \le \frac{\Lambda|f|_*}{\lambda}, \tag{8.24}
\]
and this quantity is also bounded independently of $\varepsilon$. We would like now to identify $u_0$. First remark that by the Riesz representation theorem there exists a function $u$ such that
\[
u\in H_0^1(\Omega),\qquad \int_\Omega u'v'\,dx = \langle f, v\rangle \quad\forall\, v\in H_0^1(\Omega).
\]
Then if we set $g = u'$ we clearly have $f = -g'$,
where $g \in L^2(\Omega)$. It follows that
\[
-\Big(A\Big(\frac{x}{\varepsilon}\Big)u_\varepsilon'\Big)' = -g' \tag{8.25}
\]
and thus for some constant $c_\varepsilon$ we have
\[
A\Big(\frac{x}{\varepsilon}\Big)u_\varepsilon' = g + c_\varepsilon. \tag{8.26}
\]
Since $A(\frac{x}{\varepsilon})u_\varepsilon'$ is bounded in $L^2(\Omega)$, the constant $c_\varepsilon$ is bounded, and up to a subsequence one has
\[
c_\varepsilon \longrightarrow c_0. \tag{8.27}
\]
Writing (8.26) as
\[
u_\varepsilon' = (g + c_\varepsilon)\cdot\frac{1}{A(\frac{x}{\varepsilon})},
\]
since – up to the subsequence above – $g + c_\varepsilon \to g + c_0$ in $L^2(\Omega)$ and, by Theorem 8.2,
\[
\frac{1}{A(\frac{x}{\varepsilon})} \rightharpoonup \fint_T \frac{1}{A}\,dx \quad\text{in } L^2(\Omega),
\]
we have
\[
u_\varepsilon' \rightharpoonup (g + c_0)\fint_T \frac{1}{A}\,dx \quad\text{in } L^2(\Omega). \tag{8.28}
\]
From (8.23) we deduce that
\[
u_0' = (g + c_0)\fint_T \frac{1}{A}\,dx,
\]
and thus $u_0$ is solution to
\[
-\bigg(\Big(\fint_T \frac{1}{A}\,dx\Big)^{-1} u_0'\bigg)' = -g' = f \ \text{in } \Omega,\qquad u_0\in H_0^1(\Omega). \tag{8.29}
\]
Since the possible limit is uniquely determined, the whole sequence $u_\varepsilon$ converges toward $u_0$, solution of (8.29), in $L^2(\Omega)$, and $u_\varepsilon'$ toward $u_0'$ weakly in $L^2(\Omega)$. This completes the proof of the theorem. $\square$

Remark 8.1. Without using the compactness of the embedding of $H_0^1(\Omega)$ into $L^2(\Omega)$ the proof above shows that $u_\varepsilon \rightharpoonup u_0$ in $H_0^1(\Omega)$.

Note that the result is a little bit surprising. Indeed, passing formally to the limit in (8.16) one would expect $u_0$ to satisfy
\[
\int_\Omega\Big(\fint_T A\,dx\Big)u_0'v'\,dx = \langle f, v\rangle \quad\forall\, v\in H_0^1(\Omega)!
\]
The correct effective coefficient $A_0$ is the harmonic mean of $A$, which in general is different from (and smaller than) the arithmetic mean $\fint_T A\,dx$.
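This distinction between the harmonic and the arithmetic mean can be checked numerically. The sketch below is my own illustration, not from the book: $A(y) = 2 + \sin 2\pi y$ and $f \equiv 1$ are arbitrary choices, for which $A_0 = \sqrt{3} \approx 1.732$ while the arithmetic mean is $2$. The oscillating solution $u_\varepsilon$ is computed exactly by quadrature and compared at the midpoint with the homogenized solution $u_0(x) = x(1-x)/(2A_0)$:

```python
import math

A = lambda y: 2.0 + math.sin(2.0 * math.pi * y)   # 1-periodic coefficient
eps = 1.0 / 128.0

def midpoint(g, a, b, n=200000):
    # composite midpoint quadrature
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

# exact solution of -(A(x/eps) u')' = 1 on (0,1), u(0) = u(1) = 0:
# A(x/eps) u'(x) = c - x, with c fixed by u(1) = 0
c = (midpoint(lambda x: x / A(x / eps), 0.0, 1.0)
     / midpoint(lambda x: 1.0 / A(x / eps), 0.0, 1.0))
u_eps_half = midpoint(lambda x: (c - x) / A(x / eps), 0.0, 0.5)

# effective coefficient: the harmonic mean (= sqrt(3) here), not the mean (= 2)
A0 = 1.0 / midpoint(lambda y: 1.0 / A(y), 0.0, 1.0)
u0_half = 0.5 * (1.0 - 0.5) / (2.0 * A0)

assert abs(A0 - math.sqrt(3.0)) < 1e-6
assert abs(u_eps_half - u0_half) < 0.05 * u0_half   # u_eps is close to u_0
```

Using the arithmetic mean instead would predict $u_0(1/2) = 1/16 = 0.0625$, while the computed $u_\varepsilon(1/2)$ sits near $1/(8\sqrt{3}) \approx 0.0722$, the harmonic-mean value.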
8.2.2 The n-dimensional case

We denote by $\dot H^1_{per}(T)$ the space
\[
\dot H^1_{per}(T) = \Big\{\, v \in H^1_{per}(T) \ \Big|\ \fint_T v\,dx = 0 \,\Big\}. \tag{8.30}
\]
(Recall the definition (7.12) of $H^1_{per}(T)$.) Then we have

Theorem 8.4. On $\dot H^1_{per}(T)$ the norms
\[
|\nabla v|_{2,T} \quad\text{and}\quad \big\{|\nabla v|_{2,T}^2 + |v|_{2,T}^2\big\}^{\frac12} \tag{8.31}
\]
are equivalent. Note that the second norm in (8.31) is the usual $H^1$-norm.

Proof. Clearly the norm on the left-hand side of (8.31) is smaller than the one on the right-hand side. We only need to prove that
\[
\big\{|v|_{2,T}^2 + |\nabla v|_{2,T}^2\big\}^{\frac12} \le C\,|\nabla v|_{2,T}.
\]
By density it is enough to show this for a smooth function $v$. For $x, y \in T$ we have
\[
v(x) - v(y) = -\int_0^{|x-y|}\frac{d}{dr}\,v(x+r\omega)\,dr,\qquad \omega = \frac{y-x}{|y-x|}.
\]
Integrating with respect to $y$ on $T$ we get
\[
|T|\,v(x) - \int_T v(y)\,dy = -\int_T\int_0^{|x-y|}\frac{d}{dr}\,v(x+r\omega)\,dr\,dy.
\]
Since $\int_T v(y)\,dy = 0$ we deduce
\[
|v(x)| = \frac{1}{|T|}\,\Big|\int_T\int_0^{|x-y|}\frac{d}{dr}\,v(x+r\omega)\,dr\,dy\Big|
\le \frac{1}{|T|}\int_T\int_0^{|x-y|}\Big|\frac{d}{dr}\,v(x+r\omega)\Big|\,dr\,dy
= \frac{1}{|T|}\int_T\int_0^{|x-y|}|\nabla v(x+r\omega)\cdot\omega|\,dr\,dy
\le \frac{1}{|T|}\int_T\int_0^{|x-y|}|\nabla v(x+r\omega)|\,dr\,dy.
\]
(In the computation above we used the chain rule and the fact that $|\omega| = 1$.) Let us denote by $d_T$ the diameter of $T$ and suppose that the function $|\nabla v(z)|$ is
extended by $0$ for $z \notin T$. Then we derive from the above, for $x \in T$,
\[
|v(x)| \le \frac{1}{|T|}\int_T\int_0^{+\infty}|\nabla v(x+r\omega)|\,dr\,dy,
\]
and, writing the $y$-integral in polar coordinates centred at $x$ (note that $|x-y| \le d_T$ for $x, y \in T$), one arrives at
\[
|v(x)| \le \frac{d_T^{\,n}}{n|T|}\int_T \frac{|\nabla v(y)|}{|x-y|^{n-1}}\,dy. \tag{8.32}
\]
Next, let $B_R(y)$ denote the ball of centre $y$ and radius $R$, where $R$ is chosen such that $|B_R| = |T|$. We claim that for every $y$
\[
\int_T |x-y|^{1-n}\,dx \le \int_{B_R(y)} |x-y|^{1-n}\,dx.
\]
Indeed this follows from
\[
\int_T |x-y|^{1-n}\,dx = \int_{T\cap B_R(y)}|x-y|^{1-n}\,dx + \int_{T\setminus B_R(y)}|x-y|^{1-n}\,dx
\le \int_{T\cap B_R(y)}|x-y|^{1-n}\,dx + R^{1-n}\,|T\setminus B_R(y)|
= \int_{T\cap B_R(y)}|x-y|^{1-n}\,dx + R^{1-n}\,|B_R(y)\setminus T|
\le \int_{T\cap B_R(y)}|x-y|^{1-n}\,dx + \int_{B_R(y)\setminus T}|x-y|^{1-n}\,dx
= \int_{B_R(y)}|x-y|^{1-n}\,dx
\]
(we used $|T| = |B_R(y)|$, hence $|T\setminus B_R(y)| = |B_R(y)\setminus T|$, together with $|x-y|^{1-n} \le R^{1-n}$ outside $B_R(y)$ and $|x-y|^{1-n} \ge R^{1-n}$ inside). It follows that for every $y \in T$
\[
\int_T |x-y|^{1-n}\,dx \le \int_0^R\int_{|\omega|=1}d\omega\,d\rho = R\,n\,\omega_n,
\]
where $\omega_n$ is the volume of the unit ball. Since $|T| = \omega_n R^n$ we have for any $y \in T$
\[
\int_T |x-y|^{1-n}\,dx \le n\,\omega_n^{1-\frac1n}\,|T|^{\frac1n}.
\]
Combining this with (8.32) we get
\[
|v|_{2,T} \le d_T^{\,n}\,\Big(\frac{\omega_n}{|T|}\Big)^{1-\frac1n}|\nabla v|_{2,T}.
\]
It follows that
\[
|v|_{2,T}^2 + |\nabla v|_{2,T}^2 \le \bigg\{1 + d_T^{\,2n}\Big(\frac{\omega_n}{|T|}\Big)^{2-\frac2n}\bigg\}\,|\nabla v|_{2,T}^2.
\]
This completes the proof of the theorem. $\square$
By the Lax–Milgram theorem and the result above it follows that for every $\ell = 1, \ldots, n$ there exists a unique $w_\ell$ solution to
\[
w_\ell \in \dot H^1_{per}(T),\qquad \int_T A^T\nabla w_\ell\cdot\nabla v\,dx = -\int_T a_{i\ell}\,\partial_{x_i}v\,dx \quad\forall\, v\in \dot H^1_{per}(T), \tag{8.33}
\]
where $A^T$ denotes the transposed matrix of $A$. Moreover, if we extend $w_\ell$ by periodicity to the whole $\mathbb{R}^n$, we have for every bounded open set $\Omega$ in $\mathbb{R}^n$
\[
\int_\Omega A^T\nabla w_\ell\cdot\nabla v\,dx = -\int_\Omega a_{i\ell}\,\partial_{x_i}v\,dx \quad\forall\, v\in H_0^1(\Omega). \tag{8.34}
\]
(The proof is almost identical to the one of Lemma 7.2.)
Then we can state our main result.

Theorem 8.5. Under the assumptions (8.13), (8.14) let $u_\varepsilon$ be the solution to (8.15). Set, for $i, \ell = 1, \ldots, n$,
\[
\tilde a_{i\ell} = \fint_T a_{ji}(y)\big\{\delta_{j\ell} + \partial_{y_j}w_\ell(y)\big\}\,dy. \tag{8.35}
\]
Set $\tilde A = (\tilde a_{i\ell})$. Then when $\varepsilon \to 0$ we have
\[
u_\varepsilon \rightharpoonup u_0 \quad\text{in } H_0^1(\Omega), \tag{8.36}
\]
where $u_0$ is the solution to
\[
u_0\in H_0^1(\Omega),\qquad \int_\Omega \tilde A\nabla u_0\cdot\nabla v\,dx = \langle f, v\rangle \quad\forall\, v\in H_0^1(\Omega). \tag{8.37}
\]

Proof. Taking $v = u_\varepsilon$ in (8.15) we derive
\[
\lambda\int_\Omega|\nabla u_\varepsilon|^2\,dx \le \int_\Omega A\nabla u_\varepsilon\cdot\nabla u_\varepsilon\,dx = \langle f, u_\varepsilon\rangle \le |f|_*\,|\nabla u_\varepsilon|_{2,\Omega}, \tag{8.38}
\]
where $|f|_*$ denotes the strong dual norm of $f$ in $H^{-1}(\Omega)$ when $H_0^1(\Omega)$ is equipped with the norm $|\nabla v|_{2,\Omega}$. It follows that we have
\[
|\nabla u_\varepsilon|_{2,\Omega} \le \frac{|f|_*}{\lambda}. \tag{8.39}
\]
We also have
\[
|A\nabla u_\varepsilon|_{2,\Omega}^2 = \int_\Omega|A\nabla u_\varepsilon|^2\,dx \le \Lambda^2\int_\Omega|\nabla u_\varepsilon|^2\,dx,
\]
and by (8.39)
\[
|A\nabla u_\varepsilon|_{2,\Omega} \le \frac{\Lambda|f|_*}{\lambda}. \tag{8.40}
\]
Since $A\nabla u_\varepsilon$, $\nabla u_\varepsilon$ are bounded in $L^2(\Omega)$ independently of $\varepsilon$ – up to a subsequence – there exist $A_0 \in (L^2(\Omega))^n$, $u_0 \in H_0^1(\Omega)$ such that
\[
A\nabla u_\varepsilon \rightharpoonup A_0,\qquad u_\varepsilon \to u_0 \ \text{in } L^2(\Omega),\qquad u_\varepsilon \rightharpoonup u_0 \ \text{in } H_0^1(\Omega) \tag{8.41}
\]
(by the last convergence we have $\nabla u_\varepsilon \rightharpoonup \nabla u_0$ in $L^2(\Omega)$). From the equation satisfied by $u_\varepsilon$, that is to say
\[
\int_\Omega A\nabla u_\varepsilon\cdot\nabla v\,dx = \langle f, v\rangle \quad\forall\, v\in H_0^1(\Omega),
\]
we derive, by taking the limit in $\varepsilon$,
\[
\int_\Omega A_0\cdot\nabla v\,dx = \langle f, v\rangle \quad\forall\, v\in H_0^1(\Omega) \tag{8.42}
\]
(recall that $A_0$ is a vector with components in $L^2(\Omega)$). The whole point is now to identify $A_0$. For that we consider
\[
v_\varepsilon(x) = x_\ell + \varepsilon\, w_\ell\Big(\frac{x}{\varepsilon}\Big). \tag{8.43}
\]
(We drop the index $\ell$ in $v_\varepsilon$ for the sake of simplicity.) For $\zeta \in \mathcal{D}(\Omega)$ take
\[
v = v_\varepsilon\zeta \tag{8.44}
\]
as test function in (8.15); we get, writing $A$ for $A(\frac{x}{\varepsilon})$,
\[
\int_\Omega A\nabla u_\varepsilon\cdot\nabla\{v_\varepsilon\zeta\}\,dx = \langle f, v_\varepsilon\zeta\rangle.
\]
This also reads
\[
\int_\Omega (A\nabla u_\varepsilon\cdot\nabla v_\varepsilon)\zeta\,dx + \int_\Omega (A\nabla u_\varepsilon\cdot\nabla\zeta)v_\varepsilon\,dx = \langle f, v_\varepsilon\zeta\rangle.
\]
Using the transpose of $A$ this can be written
\[
\int_\Omega (A^T\nabla v_\varepsilon\cdot\nabla u_\varepsilon)\zeta\,dx + \int_\Omega (A\nabla u_\varepsilon\cdot\nabla\zeta)v_\varepsilon\,dx = \langle f, v_\varepsilon\zeta\rangle
\]
\[
\iff \int_\Omega A^T\nabla v_\varepsilon\cdot\nabla\{u_\varepsilon\zeta\}\,dx - \int_\Omega (A^T\nabla v_\varepsilon\cdot\nabla\zeta)u_\varepsilon\,dx + \int_\Omega (A\nabla u_\varepsilon\cdot\nabla\zeta)v_\varepsilon\,dx = \langle f, v_\varepsilon\zeta\rangle. \tag{8.45}
\]
Let us examine the first integral above. From the definition of $v_\varepsilon$ we have
\[
\int_\Omega A^T\nabla v_\varepsilon\cdot\nabla\{u_\varepsilon\zeta\}\,dx = \int_\Omega A^T\Big(\frac{x}{\varepsilon}\Big)\Big\{e_\ell + \nabla w_\ell\Big(\frac{x}{\varepsilon}\Big)\Big\}\cdot\nabla\{u_\varepsilon\zeta\}\,dx.
\]
By the change of variable $y = \frac{x}{\varepsilon}$ in this integral we get, if $\tilde u_\varepsilon(y) = \{u_\varepsilon\zeta\}(\varepsilon y)$,
\[
\int_\Omega A^T\Big(\frac{x}{\varepsilon}\Big)\Big\{e_\ell + \nabla w_\ell\Big(\frac{x}{\varepsilon}\Big)\Big\}\cdot\nabla\{u_\varepsilon\zeta\}\,dx
= \varepsilon^{n-1}\int_{\frac{1}{\varepsilon}\Omega} A^T(y)\{e_\ell + \nabla w_\ell(y)\}\cdot\nabla\tilde u_\varepsilon(y)\,dy = 0.
\]
This is indeed a consequence of (8.34), since $a_{i\ell}\,\partial_{x_i}v = A^T e_\ell\cdot\nabla v$. Thus we arrive now at
\[
-\int_\Omega (A^T\nabla v_\varepsilon\cdot\nabla\zeta)u_\varepsilon\,dx + \int_\Omega (A\nabla u_\varepsilon\cdot\nabla\zeta)v_\varepsilon\,dx = \langle f, v_\varepsilon\zeta\rangle. \tag{8.46}
\]
Recall that
\[
v_\varepsilon = x_\ell + \varepsilon\, w_\ell\Big(\frac{x}{\varepsilon}\Big).
\]
Hence
\[
v_\varepsilon \longrightarrow x_\ell \quad\text{in } L^2(\Omega) \tag{8.47}
\]
(since $w_\ell(\frac{x}{\varepsilon})$ is bounded in $L^2(\Omega)$ – see Theorem 8.2). Moreover
\[
\nabla v_\varepsilon = e_\ell + \nabla w_\ell\Big(\frac{x}{\varepsilon}\Big) \rightharpoonup e_\ell + \fint_T \nabla w_\ell\,dx = e_\ell \quad\text{in } L^2(\Omega) \tag{8.48}
\]
(the mean of the gradient of a periodic function vanishes), and thus $v_\varepsilon\zeta \rightharpoonup x_\ell\zeta$ in $H_0^1(\Omega)$. In addition we can write now (8.46) as
\[
-\int_\Omega a_{ji}\Big(\frac{x}{\varepsilon}\Big)\Big\{\delta_{j\ell} + \partial_{y_j}w_\ell\Big(\frac{x}{\varepsilon}\Big)\Big\}(\partial_{x_i}\zeta)\,u_\varepsilon\,dx + \int_\Omega (A\nabla u_\varepsilon\cdot\nabla\zeta)v_\varepsilon\,dx = \langle f, v_\varepsilon\zeta\rangle.
\]
Passing to the limit in $\varepsilon$ we get
\[
-\int_\Omega (\tilde a_{i\ell}\,\partial_{x_i}\zeta)\,u_0\,dx + \int_\Omega (A_0\cdot\nabla\zeta)\,x_\ell\,dx = \langle f, x_\ell\zeta\rangle.
\]
This is also, since $\tilde a_{i\ell}$ is constant,
\[
\int_\Omega (\tilde a_{i\ell}\,\partial_{x_i}u_0)\,\zeta\,dx + \int_\Omega (A_0\cdot\nabla\zeta)\,x_\ell\,dx = \langle f, x_\ell\zeta\rangle. \tag{8.49}
\]
Taking now $v = x_\ell\zeta$ in (8.42) we obtain
\[
\int_\Omega (A_0\cdot e_\ell)\,\zeta\,dx + \int_\Omega (A_0\cdot\nabla\zeta)\,x_\ell\,dx = \langle f, x_\ell\zeta\rangle. \tag{8.50}
\]
From (8.49), (8.50) we derive
\[
\int_\Omega (\tilde a_{i\ell}\,\partial_{x_i}u_0)\,\zeta\,dx = \int_\Omega (A_0\cdot e_\ell)\,\zeta\,dx \quad\forall\,\zeta\in\mathcal{D}(\Omega).
\]
It follows that for every $\ell$ one has
\[
A_0\cdot e_\ell = \tilde a_{i\ell}\,\partial_{x_i}u_0.
\]
From (8.42) we see then that $u_0$ is solution to
\[
u_0\in H_0^1(\Omega),\qquad \int_\Omega \tilde a_{i\ell}\,\partial_{x_i}u_0\,\partial_{x_\ell}v\,dx = \langle f, v\rangle \quad\forall\, v\in H_0^1(\Omega). \tag{8.51}
\]
If the matrix $\tilde A$ is positive definite this problem has a unique solution $u_0$, and the whole sequence converges toward $u_0$ in $H_0^1(\Omega)$-weak. This completes the proof of the theorem since we have:
Lemma 8.6. The matrix $\tilde A = (\tilde a_{i\ell})$ defined by (8.35) is positive definite.

Proof. By the definition of $w_\ell$ we have
\[
\int_T a_{ji}\,\partial_{y_j}(y_\ell + w_\ell)\,\partial_{y_i}v\,dy = 0 \quad\forall\, v\in \dot H^1_{per}(T).
\]
It follows that for every $k$ we have (taking $v = w_k$ in the equation above)
\[
\int_T a_{ji}\,\partial_{y_j}(y_\ell + w_\ell)\,\partial_{y_i}(y_k + w_k)\,dy = \int_T a_{jk}\,\partial_{y_j}(y_\ell + w_\ell)\,dy = |T|\,\tilde a_{k\ell}.
\]
It follows – with the summation convention –
\[
\tilde a_{k\ell}\,\xi_k\xi_\ell = \frac{1}{|T|}\int_T a_{ji}\,\partial_{y_j}\big(\xi_\ell(y_\ell + w_\ell)\big)\,\partial_{y_i}\big(\xi_k(y_k + w_k)\big)\,dy
\ge \frac{\lambda}{|T|}\int_T \Big|\nabla\Big(\sum_\ell \xi_\ell(y_\ell + w_\ell)\Big)\Big|^2\,dy.
\]
This last integral is nonnegative and vanishes iff
\[
\nabla\Big(\sum_\ell \xi_\ell(y_\ell + w_\ell)\Big) \equiv 0 \ \text{on } T.
\]
But
\[
\partial_{y_i}\Big(\sum_\ell \xi_\ell(y_\ell + w_\ell)\Big) = \xi_i + \xi_\ell\,\partial_{y_i}w_\ell.
\]
If this vanishes we have
\[
\int_T \xi_i + \xi_\ell\,\partial_{y_i}w_\ell\,dy = \xi_i|T| = 0,
\]
i.e., $\xi = 0$. This shows that $\tilde A$ is positive definite and completes the proof of the lemma. $\square$

Remark 8.2. (8.35) can be written in a synthetic way as
\[
\tilde A\, e_\ell = \fint_T A^T\,\nabla(x_\ell + w_\ell)\,dx.
\]
Note that one also has
\[
u_\varepsilon \longrightarrow u_0 \quad\text{in } L^2(\Omega). \tag{8.52}
\]
Exercises

1. Show that the convergence (8.6) also holds in $L^1(\Omega)$-weak and $L^\infty(\Omega)$-weak*.

2. For $f \in L^2(\Omega)$, $\Omega = (\alpha, \beta)$, one considers $u_\varepsilon$ the solution to
\[
u_\varepsilon\in H_0^1(\Omega),\qquad \int_\Omega A\Big(\frac{x}{\varepsilon}\Big)u_\varepsilon'v'\,dx = \int_\Omega fv\,dx \quad\forall\, v\in H_0^1(\Omega).
\]
(a) Show that $\xi_\varepsilon = A(\frac{x}{\varepsilon})u_\varepsilon' \in H^1(\Omega)$ and is bounded in $H^1(\Omega)$ independently of $\varepsilon$.
(b) Show that up to a subsequence there exist $\xi_0 \in L^2(\Omega)$, $u_0 \in H_0^1(\Omega)$ such that $\xi_\varepsilon \to \xi_0$, $u_\varepsilon \rightharpoonup u_0$ in $L^2(\Omega)$.
(c) Rediscover that $u_0$ is the solution to (8.29).

3. Show that if $A$ is positive definite so is $A^T$ (cf. (8.33)).

4. Recover Theorem 8.3 using (8.35).

5. Let $a$ be a $T$-periodic function satisfying, for $\lambda, \Lambda > 0$,
\[
0 < \lambda \le a(x) \le \Lambda \quad\text{a.e. } x\in T.
\]
If $\Omega$ is a bounded open set of $\mathbb{R}^n$, $f \in H^{-1}(\Omega)$, let $u_\varepsilon$ be the weak solution to
\[
u_\varepsilon\in H_0^1(\Omega),\qquad -\Delta u_\varepsilon + a\Big(\frac{x}{\varepsilon}\Big)u_\varepsilon = f \ \text{in } \Omega.
\]
Show that $u_\varepsilon \to u_0$ in $H_0^1(\Omega)$ where $u_0$ is the solution to
\[
u_0\in H_0^1(\Omega),\qquad -\Delta u_0 + \bar a\, u_0 = f \ \text{in } \Omega,
\]
with $\bar a = \fint_T a(x)\,dx$.

6. Assuming that $H^1_{per}(T)$ is compactly embedded in $L^2(T)$, give an alternative proof of Theorem 8.4.

7. (i) Show that
\[
u(x) = \frac{1}{n\omega_n}\int_\Omega \frac{\nabla u(y)\cdot(x-y)}{|x-y|^n}\,dy \quad\forall\, u\in\mathcal{D}(\Omega).
\]
(Hint: see the proof of Theorem 8.4.)
(ii) Deduce from (i) that for $u \in \mathcal{D}(\Omega)$
\[
|u(x)| \le \frac{1}{n\omega_n}\Big(\int_\Omega |x-y|^{1-n}\,dy\Big)^{\frac12}\Big(\int_\Omega |x-y|^{1-n}\,|\nabla u(y)|^2\,dy\Big)^{\frac12}.
\]
(iii) If $\Omega$ is of bounded measure deduce that
\[
|u|_{2,\Omega} \le \Big(\frac{|\Omega|}{\omega_n}\Big)^{\frac1n}|\nabla u|_{2,\Omega} \quad\forall\, u\in H_0^1(\Omega).
\]
(Hint: see the proof of Theorem 8.4.)

8. (i) For $f \in L^1(\Omega)$, $\alpha \in [0, 1)$ we set
\[
V_\alpha(f)(x) = \int_\Omega |x-y|^{-n\alpha} f(y)\,dy.
\]
We suppose that $\Omega$ is an open subset with finite measure. Show that
\[
V_\alpha(1) \le \frac{1}{1-\alpha}\,\omega_n^\alpha\,|\Omega|^{1-\alpha}.
\]
(ii) Show that for any $1 \le p < +\infty$
\[
f\in L^p(\Omega) \implies V_\alpha(f)\in L^p(\Omega),
\]
and one has
\[
|V_\alpha(f)|_{p,\Omega} \le \frac{1}{1-\alpha}\,\omega_n^\alpha\,|\Omega|^{1-\alpha}\,|f|_{p,\Omega}.
\]
(iii) Deduce that
\[
|u|_{p,\Omega} \le \Big(\frac{|\Omega|}{\omega_n}\Big)^{\frac1n}|\nabla u|_{p,\Omega} \quad\forall\, u\in\mathcal{D}(\Omega).
\]

9. Show that
\[
\Big\{\, v\in C^\infty(T) \ \Big|\ v \ T\text{-periodic},\ \fint_T v(x)\,dx = 0 \,\Big\}
\]
is dense in $\dot H^1_{per}(T)$.
Chapter 9

Eigenvalues

9.1 The one-dimensional case

Suppose, to simplify, that $\Omega$ is the interval $(-a, a)$, $a > 0$.

Definition 9.1. $\lambda$ is called an eigenvalue for the Dirichlet problem if there exists a function $u \neq 0$, weak solution to the problem
\[
-u'' = \lambda u \ \text{in } \Omega,\qquad u(-a) = u(a) = 0. \tag{9.1}
\]
$u$ is then called an eigenfunction corresponding to the eigenvalue $\lambda$.

One should remark that if $u$ is an eigenfunction of problem (9.1) so is $\mu u$ for every real $\mu \neq 0$, i.e., the solution of (9.1) is not unique. Note that $\lambda$ is necessarily positive, since a weak solution to (9.1) is a function satisfying
\[
u\in H_0^1(\Omega),\qquad \int_\Omega u'v'\,dx = \lambda\int_\Omega uv\,dx \quad\forall\, v\in H_0^1(\Omega),\ u\neq 0. \tag{9.2}
\]
Taking $v = u$ it follows that
\[
\int_\Omega u'^2\,dx = \lambda\int_\Omega u^2\,dx, \tag{9.3}
\]
hence $\lambda > 0$, since $\lambda = 0$ would give $u' = 0$ and thus $u = 0$.

The first equation of (9.1) is a second-order linear differential equation whose solutions form a vector space of dimension 2. It is then easy to check that the general solution of this equation is given by
\[
u = A\sin\big(\sqrt{\lambda}\,(x+a)\big) + B\cos\big(\sqrt{\lambda}\,(x+a)\big),\qquad A, B\in\mathbb{R}. \tag{9.4}
\]
(Since $\sin(\sqrt{\lambda}(x+a))$, $\cos(\sqrt{\lambda}(x+a))$ are independent solutions.)
In order to match the boundary conditions of (9.1) we must have
\[
B = 0,\qquad \sin\big(\sqrt{\lambda}\,(2a)\big) = 0.
\]
Thus the possible eigenvalues of the problem are given by $\lambda_k$ such that
\[
2a\sqrt{\lambda_k} = k\pi \iff \lambda_k = \Big(\frac{k\pi}{2a}\Big)^2,\qquad k = 1, 2, \ldots. \tag{9.5}
\]
They form a discrete countable sequence. The corresponding eigenfunctions are then given by
\[
u_k = A_k\sin\Big(\frac{k\pi}{2a}(x+a)\Big). \tag{9.6}
\]
They are simple in the sense that the eigenspaces – the spaces of eigenfunctions – are of dimension 1. If one wants to choose as basis of eigenfunctions those with $L^2$-norm equal to 1 – one says that these are normalized eigenfunctions – one has to choose $A_k$ such that
\[
A_k^2\int_{-a}^{a}\sin^2\Big(\frac{k\pi}{2a}(x+a)\Big)\,dx = 1,
\]
which leads to
\[
u_k = \frac{1}{\sqrt{a}}\sin\Big(\frac{k\pi}{2a}(x+a)\Big). \tag{9.7}
\]
It is clear from (9.5) that the smallest eigenvalue is given by
\[
\lambda_1 = \Big(\frac{\pi}{2a}\Big)^2, \tag{9.8}
\]
and the so-called first eigenfunction by
\[
u_1 = \frac{1}{\sqrt{a}}\cos\Big(\frac{\pi x}{2a}\Big). \tag{9.9}
\]
On this simple example we already see some properties emerging that one could try to generalize in higher dimensions for an elliptic operator. These are:
• $\lambda_1$ is decreasing with the size of the domain,
• $\lambda_1$ is simple – namely the dimension of the associated eigenspace is 1,
• $u_1$ does not change sign,
• the $u_k$'s build a Hilbert basis of $L^2(\Omega)$ – see below for this concept.
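The explicit value $\lambda_1 = (\pi/2a)^2$ can be recovered numerically. The sketch below is my own illustration, not from the book: it discretizes $-u'' = \lambda u$ on $(-a, a)$ with $a = 1$ by the standard three-point finite-difference matrix and runs inverse power iteration (using the Thomas algorithm for the tridiagonal solves) to find the smallest eigenvalue; all parameter values are ad hoc.

```python
import math

def solve_tridiag(diag, off, rhs):
    # Thomas algorithm for a constant symmetric tridiagonal system
    n = len(rhs)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = off / diag, rhs[0] / diag
    for i in range(1, n):
        denom = diag - off * c[i - 1]
        c[i] = off / denom
        d[i] = (rhs[i] - off * d[i - 1]) / denom
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

a = 1.0                              # Omega = (-a, a)
m = 199                              # number of interior grid points
h = 2.0 * a / (m + 1)
diag, off = 2.0 / h ** 2, -1.0 / h ** 2

v = [1.0] * m
for _ in range(50):                  # inverse power iteration
    v = solve_tridiag(diag, off, v)
    nrm = math.sqrt(sum(x * x for x in v))
    v = [x / nrm for x in v]

# Rayleigh quotient of the converged, normalized vector
Tv = [diag * v[i] + off * ((v[i - 1] if i > 0 else 0.0) +
                           (v[i + 1] if i < m - 1 else 0.0)) for i in range(m)]
lam1 = sum(v[i] * Tv[i] for i in range(m))

assert abs(lam1 - (math.pi / (2.0 * a)) ** 2) < 1e-3      # lam1 = (pi/2a)^2
assert all(x > 0 for x in v) or all(x < 0 for x in v)     # u1 keeps one sign
```

The computed eigenvector also has constant sign and one hump, matching the first eigenfunction $u_1$ of (9.9).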
9.2 The higher-dimensional case

Let $\Omega$ be a bounded open subset of $\mathbb{R}^n$. For the sake of simplicity we restrict ourselves to the Laplace operator with Dirichlet boundary conditions, that is to say we are interested in finding the $\lambda_k$'s such that for some $u \neq 0$ we have, in a weak sense,
\[
-\Delta u = \lambda_k u \ \text{in } \Omega,\qquad u\in H_0^1(\Omega). \tag{9.10}
\]
Then we have

Theorem 9.1. The first eigenvalue $\lambda_1$ of the Dirichlet problem (9.10) is given by
\[
\lambda_1 = \operatorname*{Inf}_{\substack{v\in H_0^1(\Omega)\\ v\neq 0}} \frac{\int_\Omega |\nabla v|^2\,dx}{\int_\Omega v^2\,dx}. \tag{9.11}
\]
(By the first eigenvalue we mean the smallest one. The quotient above is called a Rayleigh quotient.)

Proof. First we remark that if $\lambda_k$ is an eigenvalue for (9.10) we have
\[
\int_\Omega \nabla u\cdot\nabla v\,dx = \lambda_k\int_\Omega uv\,dx \quad\forall\, v\in H_0^1(\Omega),
\]
for some $u \neq 0$, $u \in H_0^1(\Omega)$. Taking $v = u$ we get
\[
\lambda_k = \frac{\int_\Omega |\nabla u|^2\,dx}{\int_\Omega u^2\,dx}, \tag{9.12}
\]
and $\lambda_k$ is certainly not smaller than the infimum in (9.11). Next we remark that by the Poincaré inequality (see Theorem 2.8 or (2.52)) the infimum in (9.11) is positive. Let us show that this infimum is indeed achieved. Let $u_n$ be a minimizing sequence, that is to say a sequence such that
\[
u_n\in H_0^1(\Omega),\qquad \frac{\int_\Omega |\nabla u_n|^2}{\int_\Omega u_n^2} \longrightarrow \operatorname{Inf} := \operatorname*{Inf}_{\substack{v\in H_0^1(\Omega)\\ v\neq 0}} \frac{\int_\Omega |\nabla v|^2\,dx}{\int_\Omega v^2\,dx}. \tag{9.13}
\]
By replacing, if necessary, $u_n$ by $u_n/|u_n|_{2,\Omega}$ one can assume that $|u_n|_{2,\Omega} = 1$. Then, clearly, from (9.13) we have for $n$ large
\[
\int_\Omega |\nabla u_n|^2\,dx \le 1 + \operatorname{Inf}. \tag{9.14}
\]
This, together with $|u_n|_{2,\Omega} = 1$, implies that $u_n$ is bounded in $H^1(\Omega)$. Up to a subsequence there is a $u \in H_0^1(\Omega)$ such that
\[
u_n \longrightarrow u \ \text{in } L^2(\Omega),\qquad u_n \rightharpoonup u \ \text{in } H_0^1(\Omega). \tag{9.15}
\]
By (9.14), (9.15) it is clear that $|u|_{2,\Omega} = 1$. Moreover, by the weak lower semicontinuity of the norm,
\[
\operatorname{Inf} = \lim\int_\Omega|\nabla u_n|^2\,dx \ge \int_\Omega|\nabla u|^2\,dx = \frac{\int_\Omega|\nabla u|^2\,dx}{\int_\Omega u^2\,dx} \ge \operatorname{Inf}.
\]
It follows that $u$ is a function $\neq 0$ for which the infimum (9.11) is achieved. Note that the infimum is then achieved for any multiple of $u$. We claim that $u$ is an eigenfunction. Indeed, let $u \neq 0$ be a point where the infimum (9.11) is achieved. Let $v \in H_0^1(\Omega)$. Consider
\[
R(\lambda) = \frac{\int_\Omega|\nabla(u+\lambda v)|^2\,dx}{\int_\Omega (u+\lambda v)^2\,dx}. \tag{9.16}
\]
For $\lambda$ small enough this quotient is well defined and $0$ is an absolute minimum of $R$. It follows that
\[
\frac{d}{d\lambda}R(0) = \frac{d}{d\lambda}\bigg(\frac{\int_\Omega|\nabla u|^2\,dx + 2\lambda\int_\Omega\nabla u\cdot\nabla v\,dx + \lambda^2\int_\Omega|\nabla v|^2\,dx}{\int_\Omega u^2\,dx + 2\lambda\int_\Omega uv\,dx + \lambda^2\int_\Omega v^2\,dx}\bigg)(0) = 0.
\]
We have to differentiate this quotient of second-order polynomials in $\lambda$, and a rapid computation shows that the equation above is equivalent to
\[
\int_\Omega\nabla u\cdot\nabla v\,dx\cdot\int_\Omega u^2\,dx = \int_\Omega|\nabla u|^2\,dx\cdot\int_\Omega uv\,dx,
\]
and we get
\[
\int_\Omega\nabla u\cdot\nabla v\,dx = \frac{\int_\Omega|\nabla u|^2\,dx}{\int_\Omega u^2\,dx}\int_\Omega uv\,dx \quad\forall\, v\in H_0^1(\Omega).
\]
This shows that $u$ is an eigenfunction for the value $\lambda_1$ given by (9.11). This completes the proof of the theorem. $\square$

Remark 9.1. $\lambda_1$ is thus the best constant in the Poincaré inequality – i.e., the largest constant $c$ such that
\[
c\int_\Omega v^2\,dx \le \int_\Omega|\nabla v|^2\,dx \quad\forall\, v\in H_0^1(\Omega).
\]
On the way to establishing our assertions from the end of Section 9.1 we have:

Theorem 9.2. Let $\lambda_1 = \lambda_1(\Omega)$ be the first eigenvalue of the Dirichlet problem (9.10) on $\Omega$. If $\Omega \subset \Omega'$ we have $\lambda_1(\Omega) \ge \lambda_1(\Omega')$.

Proof. A function in $H_0^1(\Omega)$ extended by $0$ outside $\Omega$ is in $H_0^1(\Omega')$. The result then follows directly from the characterization (9.11). $\square$

We also have

Theorem 9.3. Let $\lambda_1 = \lambda_1(\Omega)$ be the first eigenvalue of the Dirichlet problem (9.10) and $u$ a corresponding eigenfunction. Then, if $\Omega$ is supposed to be connected,
• $\lambda_1$ is simple,
• $u$ does not change sign in $\Omega$.

Proof. We know that $u$ realizes the infimum (9.11). It is clear (see Corollary 2.14) that $|u|$ also realizes the infimum (9.11). Thus both $|u|$ and $u$ are eigenfunctions of (9.10), and so are
\[
u^+ = \frac{|u|+u}{2} \quad\text{and}\quad u^- = \frac{|u|-u}{2}.
\]
That is to say we have
\[
-\Delta u^+ = \lambda_1 u^+ \ge 0. \tag{9.17}
\]
If $f \ge 0$, $f \not\equiv 0$, one can show that the weak solution to
\[
-\Delta u = f,\qquad u\in H_0^1(\Omega)
\]
is positive – i.e., $u > 0$ (see [17], [31]). (This property, which arises for elliptic differential equations, is very clear in dimension 1. Indeed, if
\[
-u'' \ge 0 \ \text{on } \Omega = (\alpha, \beta),\qquad u(\alpha) = u(\beta) = 0,
\]
then $u$ is a concave function vanishing at the end points. If it vanishes in between at some point, concavity imposes that it vanishes identically. But then $-u'' = f$ has to vanish identically, which is impossible for $f \not\equiv 0$.)

Going back to (9.17), we see that if $u^+$ is not identically equal to $0$ then $u^+$ is positive in the whole $\Omega$ – i.e., $u^- = 0$ – and thus $u$ cannot change sign, since if $u^+ \equiv 0$ then $u = -u^-$.

To see now that $\lambda_1$ is simple, let $u_1$, $u_2$ be two corresponding eigenfunctions. If $u_1$, $u_2$ are linearly independent then, replacing $u_2$ by $u_1 + \mu u_2$ for a suitable $\mu$, one can assume that $u_1$, $u_2$ are orthogonal in $L^2(\Omega)$. But this is impossible: since $u_1$, $u_2$ have constant sign one cannot have
\[
\int_\Omega u_1 u_2\,dx = 0.
\]
This completes the proof of the theorem.
Definition 9.2. Let $H$ be a real separable Hilbert space. A Hilbert basis, or an orthonormal Hilbert basis, of $H$ is a family of linearly independent vectors $(e_n)_{n\ge 1}$ such that
• $|e_n| = 1$, $(e_n, e_m) = 0$ $\forall\, n, m \ge 1$, $n \neq m$,
• the linear combinations of the $e_i$'s are dense in $H$.

Definition 9.3. Let $H$ be a real Hilbert space and $L$ a continuous linear operator from $H$ into itself. One says that $L$ is
• positive iff $(Lx, x) \ge 0$ $\forall\, x \in H$,
• self-adjoint iff $(Lx, y) = (x, Ly)$ $\forall\, x, y \in H$.

Then we have

Theorem 9.4. Let $H$ be a separable real Hilbert space of infinite dimension. Let $L$ be a self-adjoint, positive, compact operator. Then there exists a sequence of positive eigenvalues $(\mu_n)$ converging toward $0$ and a sequence of corresponding eigenvectors $(e_n)$ such that $(e_n)$ is an orthonormal Hilbert basis of $H$.
Proof. See [21]. As an application we have
Theorem 9.5. Let $\Omega$ be a bounded open set of $\mathbb{R}^n$. There exists an orthonormal Hilbert basis $(e_n)_{n\ge 1}$ of $L^2(\Omega)$ and a sequence of positive numbers $(\lambda_n)_{n\ge 1}$ such that $\lambda_n \to +\infty$ when $n \to +\infty$ and
\[
e_n\in H_0^1(\Omega),\qquad -\Delta e_n = \lambda_n e_n \ \text{in } \Omega. \tag{9.18}
\]
The $\lambda_n$'s are called the eigenvalues of the Dirichlet problem in $\Omega$; the $e_n$'s are the corresponding eigenvectors or eigenfunctions.

Proof. Let us define in the Hilbert space $L^2(\Omega)$ the operator $L$ by $L(f) = u$ where $u$ is the solution to
\[
u\in H_0^1(\Omega),\qquad \int_\Omega \nabla u\cdot\nabla v\,dx = \int_\Omega fv\,dx \quad\forall\, v\in H_0^1(\Omega). \tag{9.19}
\]
• $L$ is compact. This is due to the compactness of the canonical embedding from $H_0^1(\Omega)$ into $L^2(\Omega)$.
• $L$ is self-adjoint. Indeed, if $u = L(f)$, $v = L(g)$ we have
\[
\int_\Omega \nabla u\cdot\nabla v\,dx = \int_\Omega fL(g)\,dx,\quad \int_\Omega \nabla v\cdot\nabla u\,dx = \int_\Omega gL(f)\,dx
\implies \int_\Omega fL(g)\,dx = \int_\Omega L(f)g\,dx \tag{9.20}
\]
for every $f, g \in L^2(\Omega)$.
• $L$ is positive. Indeed, taking $v = u$ in (9.19) we have
\[
\int_\Omega fL(f)\,dx = \int_\Omega \nabla u\cdot\nabla u\,dx \ge 0 \quad\forall\, f\in L^2(\Omega). \tag{9.21}
\]
According to Theorem 9.4 there exists a sequence of eigenvalues $(\mu_n)_{n\ge 1}$ converging toward $0$ and a basis of orthonormal eigenvectors $(e_n)_{n\ge 1}$ such that
\[
Le_n = \mu_n e_n, \tag{9.22}
\]
i.e.,
\[
e_n\in H_0^1(\Omega),\qquad \mu_n\int_\Omega \nabla e_n\cdot\nabla v\,dx = \int_\Omega e_n v\,dx \quad\forall\, v\in H_0^1(\Omega). \tag{9.23}
\]
Setting $\lambda_n = \frac{1}{\mu_n}$, this completes the proof of the theorem. $\square$
Remark 9.2. One should note that two eigenfunctions orthogonal in L2 (Ω) are also orthogonal in H01 (Ω). One can also show that en ∈ C ∞ (Ω) ∀ n ≥ 1.
9.3 An application

In $\Omega_\ell = (-\ell, \ell)\times\omega$, $\omega = (-1, 1)$, we consider again the problem
\[
-\Delta u_\ell = f \ \text{in } \Omega_\ell,\qquad u_\ell = 0 \ \text{on } \partial\Omega_\ell, \tag{9.24}
\]
where $f = f(x_2) \in L^2(\omega)$, and the associated limit $u_\infty$ defined as the solution of the problem
\[
-\partial_{x_2}^2 u_\infty = f \ \text{in } \omega,\qquad u_\infty = 0 \ \text{on } \partial\omega. \tag{9.25}
\]
We have seen in Chapter 6 that, when $\ell$ goes to $+\infty$, $u_\ell$ converges to $u_\infty$ with an exponential rate of convergence. We would like to show that by another argument. Let us denote by $(\varphi_n)_{n\ge 1}$ the orthonormal basis of eigenfunctions associated to the Dirichlet problem (9.25) – i.e.,
\[
-\partial_{x_2}^2\varphi_n = \lambda_n\varphi_n \ \text{in } \omega,\qquad \varphi_n\in H_0^1(\omega).
\]
Since $(\varphi_n)_{n\ge 1}$ is an orthonormal basis of $L^2(\omega)$ we have
\[
u_\infty = \sum_{n=1}^{+\infty} a_n\varphi_n,\qquad a_n = (u_\infty, \varphi_n),
\]
where $(\cdot,\cdot)$ denotes the scalar product in $L^2(\omega)$. It is clear that for any $n$
\[
h_n = \frac{\cosh(\sqrt{\lambda_n}\,x_1)}{\cosh(\sqrt{\lambda_n}\,\ell)}\,\varphi_n(x_2)
\]
is harmonic – i.e., $\Delta h_n = 0$. Thus by the uniqueness of the solution to (9.24) we have
\[
u_\ell - u_\infty = -\sum_{n=1}^{+\infty} a_n\,\frac{\cosh(\sqrt{\lambda_n}\,x_1)}{\cosh(\sqrt{\lambda_n}\,\ell)}\,\varphi_n(x_2), \tag{9.26}
\]
i.e., we have an explicit formula for $u_\ell$. Then with a simple computation (see [35]) one can show that the estimate (6.22) is sharp – see the exercises.
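Formula (9.26) makes the exponential rate visible: if $f$ is a multiple of $\varphi_1$, only the first term survives, and on the half-domain $|x_1| \le \ell/2$ the discrepancy is governed by $\cosh(\sqrt{\lambda_1}\,x_1)/\cosh(\sqrt{\lambda_1}\,\ell)$ with $\lambda_1 = (\pi/2)^2$ on $\omega = (-1, 1)$. A quick numerical check (my own illustration, not from the book):

```python
import math

lam1 = (math.pi / 2.0) ** 2          # first Dirichlet eigenvalue on omega = (-1, 1)
s = math.sqrt(lam1)                   # = pi/2

# size of u_l - u_inf on |x1| <= l/2 for a single-mode right-hand side f = phi_1
decay = lambda l: math.cosh(s * l / 2.0) / math.cosh(s * l)

for l in (4.0, 8.0, 16.0):
    # cosh(s l / 2) / cosh(s l) behaves like e^{-s l / 2}
    assert 0.5 * math.exp(-s * l / 2.0) < decay(l) < 2.0 * math.exp(-s * l / 2.0)
assert decay(16.0) < decay(8.0) < decay(4.0)
```

Doubling $\ell$ squares the smallness of the factor, which is exactly the exponential rate $e^{-\sqrt{\lambda_1}\,\ell/2}$ claimed in Chapter 6.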
Exercises

1. Let $\Omega_\ell = (-\ell, \ell)\times(-1, 1)$. Find all the eigenvalues and eigenfunctions of the problem
\[
-\Delta u = \lambda u \ \text{in } \Omega_\ell,\qquad u\in H_0^1(\Omega_\ell).
\]

2. Let $\Omega$ be a bounded open set of $\mathbb{R}^n$ and $(e_n)_{n\ge 1}$ be the orthonormal Hilbert basis of $L^2(\Omega)$ given by Theorem 9.5. Prove that $(\frac{1}{\sqrt{\lambda_n}}e_n)$ is an orthonormal Hilbert basis of $H_0^1(\Omega)$.

3. Show Theorems 9.3, 9.5 for a general symmetric elliptic problem, i.e.,
\[
-\nabla\cdot(A\nabla u) = \lambda u \ \text{in } \Omega,\qquad u = 0 \ \text{on } \partial\Omega,
\]
where $A = A^T$.

4. Using (9.26) show that the estimate (6.22) is sharp.
Chapter 10

Numerical Computations

Suppose that $\Omega$ is a bounded open subset of $\mathbb{R}^2$. We would like to compute numerically the solution of the Dirichlet problem
\[
-\Delta u = f \ \text{in } \Omega,\qquad u = 0 \ \text{on } \partial\Omega. \tag{10.1}
\]
For simplicity we restrict ourselves to the two-dimensional case, but the methods extend easily to higher dimensions (see [38]).

10.1 The finite difference method

This is perhaps the oldest method in numerical analysis and also the simplest. It consists in replacing the derivatives by difference quotients – i.e., $\partial_{x_i}u(x)$ by
\[
\frac{u(x+he_i) - u(x)}{h} \tag{10.2}
\]
($h$ is some positive number taken smaller and smaller). Let us make things more precise. First we cover the domain $\Omega$ by a grid of size $h$ – see Figure 10.1. If $(ih, jh)$ is a point of the grid we call neighbours of $(ih, jh)$ the four points
\[
((i-1)h, jh),\ ((i+1)h, jh),\ (ih, (j-1)h),\ (ih, (j+1)h). \tag{10.3}
\]
We denote by $\Omega_h$, $\Gamma_h$ the sets of grid points defined by
\[
\Omega_h = \{(ih, jh)\in\Omega \ \text{having their four neighbours in } \Omega\}, \tag{10.4}
\]
\[
\Gamma_h = \{(ih, jh)\in\Omega\}\setminus\Omega_h. \tag{10.5}
\]
(The set $\Gamma_h$ is depicted in Figure 10.1 below.)
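The classification (10.4)–(10.5) is easy to implement. The sketch below is my own illustration, not part of the book: for $\Omega$ the open unit disk it builds $\Omega_h$ and $\Gamma_h$, applies Gauss–Seidel sweeps to the five-point scheme derived in the sequel (unknowns at $\Omega_h$, value $0$ on $\Gamma_h$), and compares with the exact solution $u = 1 - x^2 - y^2$ of $-\Delta u = 4$; the grid sizes and sweep counts are ad hoc choices.

```python
def solve_disk(h, sweeps):
    # grid points (i*h, j*h) lying in the open unit disk
    m = int(1.0 / h) + 1
    idx = range(-m, m + 1)
    inside = {(i, j) for i in idx for j in idx if (i * h) ** 2 + (j * h) ** 2 < 1.0}
    nbrs = lambda i, j: [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    # Omega_h: inside points with all four neighbours inside; the rest is Gamma_h
    omega_h = {p for p in inside if all(q in inside for q in nbrs(*p))}
    u = {p: 0.0 for p in inside}                 # u = 0 on Gamma_h
    for _ in range(sweeps):                      # Gauss-Seidel on the 5-point scheme
        for (i, j) in omega_h:
            u[(i, j)] = (sum(u[q] for q in nbrs(i, j)) + h * h * 4.0) / 4.0
    # worst deviation from the exact solution u = 1 - x^2 - y^2 of -Delta u = 4
    return max(abs(u[(i, j)] - (1.0 - (i * h) ** 2 - (j * h) ** 2))
               for (i, j) in inside)

err8, err16 = solve_disk(1.0 / 8.0, 300), solve_disk(1.0 / 16.0, 600)
assert err16 < err8 < 0.5      # the error shrinks as the grid is refined
```

The remaining error is concentrated on $\Gamma_h$, where the scheme imposes $0$ although the exact solution is only of size $O(h)$ there; refining $h$ roughly halves it.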
h
(ih, jh)
(ih, jh) the four neighbours of (ih, jh) h
h
Figure 10.1.
Iterating (10.2), i.e., writing ∂xi u(x) − ∂xi u(x − hei ) h {u(x + hei ) − u(x)} − {u(x) − u(x − hei )} h2 2u(x) − u(x + hei ) − u(x − hei ) − h2
∂x2i u(x)
we deduce that 1 Δu(ih, jh) − 2 4u(ih, jh) − h
u(z)
(10.6)
z∈N (ih,jh)
where N (ih, jh) denotes the set of neighbours of (ih, jh), i.e., the points given by (10.3). Thus the expression on the right-hand side of (10.6) is defined for any (ih, jh) ∈ Ωh . Let us denote by uy the approximation of u(y) for y ∈ Ωh . According to (10.1), (10.6) it is reasonable to assume that (uy )y∈Ωh satisfies ⎧ 1 ⎪ ⎨−Δh uy := − uz − 4uy = f (y) ∀ y ∈ Ωh , h2 z∈N (y) ⎪ ⎩ uy = 0 ∀ y ∈ Γh .
(10.7)
The system (10.7) is just a linear system with unknown uy , y ∈ Ωh . To show that it possesses a unique solution we will need the following lemma
10.1. The finite difference method
131
Lemma 10.1 (Discrete Maximum Principle). Let uy , y ∈ Ωh satisfying ⎧ 1 ⎪ ⎨−Δh uy = − uz − 4uy ≤ 0 ∀ y ∈ Ωh , h2 z∈N (y) ⎪ ⎩ uy given on Γh .
(10.8)
Then we have uy ≤ Max uz Γh
∀ y ∈ Ωh ,
(10.9)
i.e., uy , y ∈ Ωh ∪ Γh achieves its maximum on Γh . Proof. Suppose that the maximum of uy , y ∈ Ωh ∪ Γh is achieved for some point y0 ∈ Ωh . Then from the inequality (10.8) we have 4uy0 ≤ uz ≤ 4uy0 z∈N (y0 )
since each uz is less or equal to uy0 . This forces ∀ z ∈ N (y0 ).
u z = u y0
Repeating the operation if z ∈ N (y0 ) ∩ Ωh we will have ∀ t ∈ N (z)
u t = u y0
else z ∈ Γh and we have a point of Γh reaching the maximum, i.e., uy0 ≤ Max uz ≤ uy0 . Γh
This implies (10.9). Note that one will always reach such a point z since the grid points are finite. Remark 10.1. If the maximum of uy is achieved in Ωh it does not imply that uy is constant in Ωh . Some connectedness of this set is necessary – see Figure 10.1 above. Remark 10.2. Similarly if uy satisfies −Δh uy ≥ 0
∀ y ∈ Ωh
(10.10)
min uz ≤ uy
∀ y ∈ Ωh .
(10.11)
then one has Γh
Indeed it is enough to apply Lemma 10.1 to −uy . It follows that when −Δh uy = 0
∀ y ∈ Ωh
(10.12)
one has min uz ≤ uy ≤ Max uz Γh
Γh
∀ y ∈ Ωh .
(10.13)
132
Chapter 10. Numerical Computations
Then one can prove: Theorem 10.2. The system (10.7) possesses a unique solution. Proof. This is a linear system. It is enough to show that the corresponding homogeneous system ⎧ 1 ⎪ ⎨− uz − 4uy = 0 ∀ y ∈ Ωh , h2 (10.14) z∈N (y) ⎪ ⎩ ∀ y ∈ Γh uy = 0 has only 0 as solution. This is an obvious consequence of (10.13).
Then we would like to show that uz − u(z) −→ 0 when h → 0 – i.e., the different values of uz are approximate values of u(z). We have indeed Theorem 10.3. Suppose that u ∈ C 2 (Ω) is solution to (10.1). Suppose that Ω is such that for h small enough one has ∀ y ∈ Ωh ,
the segments joining y to its neighbours are in Ω.
(10.15)
Then for any ε > 0 we have |uz − u(z)| ≤ ε
∀ z ∈ Ωh ,
(10.16)
for h small enough. Proof. • 1. We first check the “consistency” of the system (10.7). By consistency we mean how accurate u(z), z ∈ Ωh is the solution of a linear system close to (10.7). For a C 2 -function we have 1 d u(y + thei ) dt u(y + hei ) − u(y) = 0 dt 1 1 d2 d (t − 1) 2 u(y + thei ) dt = (t − 1) u(y + thei ) − dt dt 0 0 1 = ∂xi u(y)h − (t − 1)∂x2i u(y + thei )h2 dt. 0
(Note that this computation assumes that the segment (y, y + hei ) belongs to Ω.) Similarly if the segment (y − hei , y) is contained in Ω we get u(y − hei ) − u(y) = −∂xi u(y)h −
1 0
(t − 1)∂x2i u(y − thei )h2 dt.
10.1. The finite difference method
133
It follows under the conditions above that u(y + hei ) − 2u(y) + u(y − hei ) 1 (1 − t){∂x2i u(y + thei ) + ∂x2i u(y − thei )} dt. = h2
(10.17)
0
This implies clearly that u(y + hei ) − 2u(y) + u(y − hei ) = ∂x2i u(y) h2 1 + (1 − t){∂x2i u(y + thei ) + ∂x2i u(y − thei ) − 2∂x2i u(y)} dt.
(10.18)
0
We then set εi (y, h) = to get
0
1
(1 − t){∂x2i u(y + thei ) + ∂x2i u(y − thei ) − 2∂x2i u(y)} dt,
u(y + hei ) − 2u(y) + u(y − hei ) = ∂x2i u(y) + εi (y, h). h2
(10.19)
(10.20)
Remark 10.3. If u belongs to C 2 (Ω), ∂x2i u is in particular uniformly continuous in Ω and one has (10.21) |εi (y, h)| ≤ ε(h) i = 1, 2, ∀ y, where ε(h) → 0 when h → 0. In particular if ∂x2i u is uniformly Lipschitz continuous it follows from (10.19) that |εi (y, h)| ≤ Ch
i = 1, 2,
∀ y,
(10.22)
for some constant C. Due to our assumption (10.15), (10.20) holds for any y ∈ Ωh and for i = 1, 2. It follows then that u(y), y ∈ Ωh satisfies 1 −Δh u(y) = − 2 u(z) − 4u(y) h z∈N (y)
= f (y) − ε1 (y, h) − ε2 (y, h) ∀ y ∈ Ωh .
(10.23)
• 2. The convergence. We set Uy = uy − u(y). It follows from (10.7), (10.23) by subtraction that Uy satisfies −Δh Uy = ε1 (y, h) + ε2 (y, h) ∀ y ∈ Ωh , ∀ y ∈ Γh . Uy = −u(y)
(10.24)
134
Chapter 10. Numerical Computations
We then decompose Uy in two parts. More precisely we introduce Uy1 , Uy2 the solution to −Δh Uy1 = ε1 (y, h) + ε2 (y, h) ∀ y ∈ Ωh , (10.25) ∀ y ∈ Γh , Uy1 = 0 −Δh Uy2 = 0 ∀ y ∈ Ωh , (10.26) Uy2 = −u(y) ∀ y ∈ Γh . Clearly both systems admit a unique solution. This is clear by Theorem 10.2 for the first one. For (10.26) this follows from the fact that Uy2 + u(y) is solution of a system of the type (10.7). By uniqueness of the solution of such systems we have Uy = Uy1 + Uy2
∀ y ∈ Ωh ∪ Γ h .
(10.27)
We would like to estimate Uy1 , Uy2 . For y0 ∈ Ω set
|y − y0 |2 4 where | · | denotes the usual Euclidean norm in R2 – i.e., δ(y) = M −
(10.28)
1 δ(y) = M − {(y1 − y0,1 )2 + (y2 − y0,2 )2 } 4 2
0| if y = (y1 , y2 ), y0 = (y0,1 , y0,2 ), and M = Maxy∈Ω |y−y . Note that – see for 4 instance (10.18) (10.29) −Δδ(y) = −Δh δ(y) = 1.
By (10.21), (10.25) one has then −Δh {Uy1 − 2ε(h)δ(y)} = ε1 (y, h) + ε2 (y, h) − 2ε(h) ≤ 0 Uy1
− 2ε(h)δ(y) ≤ 0
∀ y ∈ Ωh ,
∀ y ∈ Γh ,
and −Δh {−Uy1 − 2ε(h)δ(y)} = −ε1 (y, h) − ε2 (y, h) − 2ε(h) ≤ 0 ∀ y ∈ Ωh , −Uy1 − 2ε(h)δ(y) ≤ 0
∀ y ∈ Γh .
From the discrete maximum principle we derive that |Uy1 | ≤ 2ε(h)δ(y) ∀ y ∈ Ωh ∪ Γh . If dΩ denotes the diameter of Ω we obtain (since 0 ≤ δ ≤ M ≤ |Uy1 | ≤
d2Ω ε(h) ∀ y ∈ Ωh ∪ Γh . 2
d2Ω 4 )
(10.30)
10.2. The finite element method
135
Since u is in C 2 (Ω) and thus in particular Lipschitz continuous we have for some constant C (10.31) |u(y)| ≤ Ch ∀ y ∈ Γh . (Note that dist(y, ∂Ω) ≤ h for y ∈ Γh .) It follows from the discrete maximum principle that (10.32) |Uy2 | ≤ Ch ∀ y ∈ Ωh ∪ Γh . Combining (10.30), (10.32) we derive |Uy | = |uy − u(y)| ≤ Ch +
d2Ω ε(h) ∀ y ∈ Ωh ∪ Γh . 2
This completes the proof of the convergence result.
(10.33)
Remark 10.4. In the case where in addition ∂x2i u are Lipschitz continuous we have (see (10.22)) for some constant C |uy − u(y)| ≤ Ch
∀ y ∈ Ωh ∪ Γh .
(10.34)
Remark 10.5. The assumption (10.15) holds for every domain such that its sections (in the directions x1 and x2 ) are convex.
10.2 The finite element method The method has been introduced in the middle of the last century and developed together with the functional analysis approach for solving elliptic equations. The idea – for instance for solving the Dirichlet problem – is to replace H01 (Ω) by a finite-dimensional space, that is to say, to apply the so-called Galerkin method. Now the possible choices for such a finite-dimensional subspace are infinite. One can build it by splitting the domain Ω in small elements and choosing a basis of functions with support in these elements. This introduces a “mesh” on the domain, but one can also develop “mesh free” techniques by choosing the basis of the finite-dimensional space without any decomposition of the domain Ω in different subsets. Let us first consider the abstract Lax–Milgram setting of the problem. That is to say let a = a(u, v) be a bilinear, continuous, coercive form on H – see (1.23), (1.24). For f ∈ H ∗ the dual of H there exists a unique u solution to u ∈ H, (10.35) a(u, v) = f, v ∀ v ∈ H, ( a closed subspace of H (at this point (see Theorem 1.5). Let us consider then H ( H could be finite dimensional or not). Then by the Lax–Milgram theorem there
136
Chapter 10. Numerical Computations
exists a unique u ˆ solution to
( u ˆ ∈ H, a(ˆ u, v) = f, v
(10.36)
( ∀ v ∈ H.
( is “approxiThen one can ask oneself how close uˆ is to u, in particular when H mating” H in a sense to be precised. To answer this question we have the following simple lemma. Theorem 10.4 (C´ea’s Lemma). Let a be a bilinear form on H satisfying (1.23), (1.24). Let u, u ˆ be the solutions to (10.35), (10.36) respectively. Then we have |u − u ˆ| ≤
Λ Inf |u − v|. λ v∈H(
(10.37)
( ⊂H Proof. From (10.35), (10.36) we derive by subtraction since H a(u − uˆ, v) = 0
( ∀ v ∈ H.
( we obtain Replacing v by v − uˆ which also belongs to H a(u − u ˆ, v − u ˆ) = 0 ⇐⇒
a(u − u ˆ, v − u + u − u ˆ) = 0
( ∀v ∈ H ( ∀ v ∈ H.
Using the bilinearity of a again we get a(u − u ˆ, u − u ˆ) = a(u − uˆ, u − v). From the continuity and the coerciveness of a this implies ˆ|2 ≤ Λ|u − u ˆ| |u − v| λ|u − u Λ i.e., |u − u ˆ| ≤ |u − v| λ
( ∀ v ∈ H, ( ∀v ∈ H
and (10.37) follows. This completes the proof of the theorem. Remark 10.6. One can write (10.37) as |u − u ˆ| ≤
Λ |u − PH( (u)| λ
(10.38)
( – see Theorem 1.1. where PH( u denotes the orthogonal projection of u on H
( More precisely for Let us now consider a “sequence” of such subspaces H. ( h a closed subspace of H and suppose that h ∈ R denote by H ( h is dense in H when h → 0 H
(10.39)
( h such that vˆh → v when h → 0. ∃ vˆh ∈ H
(10.40)
that is to say ∀ v ∈ H,
10.2. The finite element method
137
Then we have Theorem 10.5. Under the assumptions of Theorem 10.4 and (10.40), if uˆh denotes the solution to (h, u ˆh ∈ H (10.41) (h, a(ˆ uh , v) = f, v ∀ v ∈ H we have u ˆh −→ u
in H
(10.42)
where u is the solution to (10.35). ( h such that (h be the sequence of H Proof. Let U (h −→ u U
in H.
By (10.37) we have |u − u ˆh | ≤
Λ (h | |u − U λ
and the result follows.
Perhaps it is time to show how our abstract results above are fitting concrete situations. We will argue for the sake of simplicity in R2 , i.e., Ω will be a bounded ( a polygonal subdomain of domain in R2 – see Figure 10.2. Let us denote by Ω
(h Ω Ω
Figure 10.2. Ω (the “(” reminds of the polygonal character of its boundary). This domain is supposed to approximate Ω and to measure the degree of approximation we introduce h defined as (10.43) h = Sup dist(x, ∂Ω) ( x∈∂ Ω
( by Ω ( h . To fit our abstract framework we set H ( h = H01 (Ω ( h ). and we denote then Ω
138
Chapter 10. Numerical Computations
Let us then denote by u, u ˆh the weak solution of the Dirichlet problems ⎧ 1 ⎨u ∈ H0 (Ω), (10.44) ⎩ ∇u · ∇v dx = f, v ∀ v ∈ H01 (Ω), Ω
⎧ ( h ), ⎨ u ˆh ∈ H01 (Ω ⎩
(h Ω
∇ˆ uh · ∇v dx = f, v
( h ). ∀ v ∈ H01 (Ω
(10.45)
Then as a consequence of our abstract analysis we have Theorem 10.6. For f ∈ H −1 (Ω) let u, uˆh be the solutions to (10.44), (10.45). If ( h is such that h → 0 (see (10.43)) then we have Ω in H01 (Ω).
uh −→ u
(10.46)
( h .) (We assume uˆh extended by 0 outside Ω ( h ) is dense in H 1 (Ω) ( h = H 1 (Ω Proof. By Theorem 10.5 it is enough to show that H 0 0 1 ( h the when h → 0, i.e., we have (10.40). Let v ∈ H0 (Ω) and denote by vˆh ∈ H function defined as vˆh = PH( h (v). (10.47) We mean here the projection when H01 (Ω) is equipped with the scalar product (u, v) → ∇u · ∇v dx. Ω
By Corollary 1.2 we then have ∇(v − vˆh ) · ∇w dx = 0 Ω
( h ). ∀ w ∈ H01 (Ω
(10.48)
Taking w = vˆh we derive 2 |∇ˆ vh |2,Ω , vh | 2,Ω = ∇ˆ vh · ∇ˆ vh dx = ∇v · ∇ˆ vh ≤ |∇v|2,Ω |∇ˆ Ω
i.e.,
Ω
|∇ˆ vh |2,Ω ≤ |∇v|2,Ω
(10.49)
and vˆh is bounded in H01 (Ω). Thus, up to a subsequence, when h → 0 we have for some v0 ∈ H01 (Ω) vˆh v0 in H01 (Ω). (10.50) ( h) Let us choose w ∈ D(Ω). Then for h small enough we clearly have w ∈ H01 (Ω and from (10.48) ∇(v − vˆh ) · ∇w = 0. Ω
10.2. The finite element method
139
Passing to the limit when h → 0 we derive ∇(v − v0 ) · ∇w = 0 ∀ w ∈ D(Ω). Ω
By density of D(Ω) in H01 (Ω), the equality above holds for any w ∈ H01 (Ω) and thus v0 = v. Since the possible limit of vˆh is unique the whole sequence vˆh satisfies (10.50) with v0 = v. Finally from (10.48) we derive |∇(v − vˆh )|2 dx = ∇(v − vˆh ) · ∇v dx → 0 when h → 0. Ω
Ω
( h ) in H 1 (Ω) when h → 0 and the This completes the proof of the density of H01 (Ω 0 proof of the theorem. Remark 10.7. Due to Theorem 10.6 it is clear that when we wish to compute the solution u of (10.44) we can replace Ω by a polygonal domain included in Ω which is close to Ω. – see also Exercise 4. Taking into account the remark above we suppose now that Ω is a polygonal domain of R2 . We are going to present the simplest finite element method – the so-called P1 -method.
Figure 10.3. Definition 10.1. A triangulation τ of Ω is a set of triangles Ti , i = 1, . . . , n such that n ) Ω= Ti (10.51) i=1
Ti ∩ Tj = ∅, one point or one common side ∀ i = j. (We suppose that the triangles are closed.)
(10.52)
140
Chapter 10. Numerical Computations
Definition 10.2. For a triangulation τ of Ω, the mesh size h is the parameter defined by (10.53) h = Sup diam T T ∈τ
where diam T denotes the diameter of T – i.e., its largest side. One denotes then by τh a triangulation τ of Ω of mesh size h. Remark 10.8. By (10.52) the following situation is excluded for the triangles of a triangulation: i.e., two neighbours cannot have in common only part of a side. T2 T1
Figure 10.4.
If τh is a triangulation of Ω we define Vh as Vh = { v : Ω → R | v is continuous, v|T ∈ P1 ∀ T ∈ τh , v = 0 on ∂Ω }.
(10.54)
In this definition v|T denotes the restriction of v to T , P1 is the set of polynomials of degree less or equal to 1 – i.e., the set of affine functions on R2 . Let us denote by xi = (xi1 , xi2 ),
i = 1, . . . , N
(10.55)
the vertices of the triangles of τh which are inside Ω. Then we have Theorem 10.7. Vh is a finite-dimensional subspace of H01 (Ω) and dim Vh = N.
(10.56)
Proof. 1. A function v ∈ Vh is uniquely determined by v(xi ), i = 1, . . . , N . Indeed consider a triangle T with for instance vertices x1 = (x11 , x12 ),
x2 = (x21 , x22 ),
x3 = (x31 , x32 ).
(10.57)
We claim that if v is affine on T it is uniquely determined by v(xi ), i = 1, 2, 3. On T the function v is given by v(x1 , x2 ) = ax1 + bx2 + c
(10.58)
10.2. The finite element method
where a, b, c are such that
141
⎧ 1 1 1 ⎪ ⎨ax1 + bx2 + c = v(x ), 2 2 ax1 + bx2 + c = v(x2 ), ⎪ ⎩ 3 ax1 + bx32 + c = v(x3 ).
(10.59)
Clearly a, b, c are uniquely determined provided that the determinant 1 x1 x12 1 x11 x12 1 2 1 2 1 2 x1 x22 1 = x21 − x11 x22 − x12 0 = x13 − x11 x23 − x21 3 x1 − x1 x2 − x2 x1 x32 1 x31 − x11 x32 − x12 0 is different of 0. This is the case iff the vectors x2 − x1 ,
x3 − x1
are linearly independent. Since we do not allow flat triangles in our triangulation this is of course always the case for every triangle. This shows that v(xi ) i = 1, . . . , N determines uniquely v. Note that the event mentioned in Remark 10.8 would create problems at this stage. 2. Building of a basis on Vh . A function v ∈ Vh , vanishing on the boundary is uniquely determined by its values on the inside nodes of the triangulation. In particular there exists a unique function λi such that λi (xj ) = δij ∀ i, j = 1, . . . , N. (10.60) λi ∈ Vh , We claim that (λi )i=1,...,N is a basis of Vh . These functions are indeed linearly independent since N i=1
αi λi = 0
=⇒
N
αi λi (xj ) = αj = 0 ∀ j = 1, . . . , N.
i=1
Moreover if v ∈ Vh then the function N
v(xi )λi
i=1
is a function of Vh which coincides with v at the inside nodes of the triangulation and thus everywhere which shows that v=
N
v(xi )λi
i=1
and (λi )i=1,...,N is a basis of Vh . This established (10.56).
142
Chapter 10. Numerical Computations
3. Vh ⊂ H01 (Ω). First remark that if v ∈ Vh , then v is C 1 inside each triangle T of τh . Let us denote by ∂xi v|T i = 1, 2 the usual derivative (in the direction xi ) of v in T . Then we set (∂xi v|T )χT vi =
(10.61)
T ∈τh
where χT denotes the characteristic function of T . vi is piecewise constant and thus v, v1 , v2 ∈ L2 (Ω). Let us show that ∂xi v = vi
i = 1, 2
(10.62)
in the distributional sense. Let ϕ ∈ D(Ω). We have v∂xi ϕ dx = − v∂xi ϕ dx. ∂xi v, ϕ = −v, ∂xi ϕ = − Ω
T ∈τh
(10.63)
T
Now on each triangle T by the Green formula – or a simple integration we have v∂xi ϕ dx = ∂xi (vϕ) − (∂xi v)ϕ dx T T (10.64) vϕνi dσ(x) = − (∂xi v)ϕ + T
∂T
where ν = (ν1 , ν2 ) denotes the outward unit normal to ∂T . If one integrates on a side S of T which belongs to ∂Ω one has vϕνi dσ(x) = 0 S
(since ϕ ∈ D(Ω) vanishes on S). If one integrates on a side S belonging to T1 and T2 one has vϕνi dσ(x) + vϕνi dσ(x) = 0 S∩∂T1
S∩∂T2
(since the outward normals to T1 and T2 have opposite directions). Thus if we are summing up (10.64) the boundary integrals disappear and we get (see (10.63)) (∂xi v)ϕ = vi , ϕ (10.65) ∂xi v, ϕ = T ∈τh
T
which establishes (10.62). Note that since v ∈ H 1 (Ω) ∩ C(Ω) and v = 0 on ∂Ω one has v ∈ H01 (Ω) (cf. Exercise 11, Chapter 2). This completes the proof of the theorem.
10.2. The finite element method
143
Since Vh is finite dimensional, it is closed in H01 (Ω) and by the Lax–Milgram theorem there is a unique uh solution to ⎧ ⎨u h ∈ Vh , (10.66) ⎩ ∇uh · ∇v dx = f v dx ∀ v ∈ Vh . Ω
Ω
One expects now that • uh will converge toward u, the weak solution to (10.1) when h → 0. (10.67) • uh is computable – since only a finite number of values are needed.
(10.68)
Regarding the first point one has to avoid the situation where some of the triangles of τh get flat when h → 0. This will be done through the following assumption. First, for T a triangle, denote by hT the diameter of T – i.e., the length of the largest side of T and by ρT the diameter of the largest circle which can fit in T (see the Figure 10.5 below).
ρT
T
hT
Figure 10.5. Then we have Definition 10.3. A family of triangulations (τh ) is said regular or non-degenerate when h → 0 if there is a constant γ > 0 such that hT ≤γ ρT
∀ T ∈ (τh ) ∀ h,
(10.69)
namely the triangles cannot get too flat when h → 0. Then we have Theorem 10.8. Under the assumptions above, if uh denotes the solution to (10.66) and if (τh ) is regular we have uh −→ u where u is the solution to (10.1).
in H01 (Ω)
(10.70)
144
Chapter 10. Numerical Computations
Proof. By Theorem 10.5 applied for H = H01 (Ω),
a(u, v) =
Ω
∇u · ∇v dx,
( h = Vh H
it is enough to show that Vh is dense in H01 (Ω) when h → 0. Since D(Ω) is dense in H01 (Ω) it is enough to show that for every v ∈ D(Ω) there exists a sequence of functions vh such that vh ∈ Vh and vh → v in H01 (Ω). Of course a natural candidate for v is the interpolant vˆh of v given by vˆh =
N
v(xi )λi ,
(10.71)
i=1
i.e., the function of Vh which coincides with v at the inside nodes of the triangulation. We have indeed Lemma 10.9. Let v ∈ C 2 (Ω), v = 0 on ∂Ω. Let vˆh be the interpolant of v given by (10.71). If (10.69) holds we have for some constant C independent of h |∇(v − vˆh )| ≤ Ch (10.72) 2,Ω (recall the definition of h given by (10.53)). Assuming the lemma proved the proof of the theorem is complete.
Proof of the lemma. 1. Let T be an arbitrary triangle and v a continuous function of class C 2 on each triangle vanishing at the vertices of all T . Without loss of generality we can assume that 0 is a vertex of T . Let ν1 , ν2 be the unit vectors on the sides of T – see Figure 10.6. Suppose ai = αi νi . Then we have αi d v(tνi ) dt 0 = v(ai ) − v(0) = dt 0 αi ∇v(tνi ) · νi dt = 0 αi ∂νi v(tνi ) dt. = 0
By the mean value theorem, there is a point pi on the segment (0, ai ) such that ∂νi v(pi ) = 0. Let x be an arbitrary point of T . We have ∂νi v(x) = ∂νi v(x) − ∂νi v(pi ) 1 d ∂νi v(pi + t(x − pi )) dt = dt 0 1 = ∇∂νi v(pi + t(x − pi )) · (x − pi ) dt. 0
(10.73)
10.2. The finite element method
145
a2
ν2
a1
T ν1
0
Figure 10.6.
We denote by M a constant such that Sup |∂x2i xj v| ≤ M T
∀T
∀ i, j = 1, 2.
(10.74)
i = 1, 2.
(10.75)
Then from (10.73) we easily derive |∂νi v(x)| ≤ 2M hT
Without loss of generality we can assume ν1 = e1 . Then if θ denotes the angle of the vectors ν1 , ν2 we have |∂x1 v(x)| ≤ 2M hT ,
| cos θ∂x1 v(x) + sin θ∂x2 v(x)| ≤ 2M hT .
We then derive |∂x2 v(x)| ≤
4M hT . | sin θ|
Now it is easy to see that | sin θ| ≥
ρT hT
and we derive |∂x2 v(x)| ≤ 4M
h2T , ρT
|∂x1 v(x)| ≤ 2M hT ≤ 2M
h2T ρT
and thus for some constant K = K(v) we obtain |∇v(x)| ≤ K
h2T ρT
∀ x ∈ T, ∀ T.
(10.76)
146
Chapter 10. Numerical Computations
2. Applying step 1 to v − vˆh we get h2T ρT
|∇(v − vˆh )(x)| ≤ K
∀ x ∈ T, ∀ T.
(10.77)
(Note that in (10.74) the constant M depends on v only since the second derivatives of vˆ are vanishing on each triangle.) From (10.77) we derive |∇(v − vˆh )|2 = |∇(v − vˆh )(x)|2 dx 2,Ω T
T ∈τh
≤K
hT 2
2
T ∈τh 2 2 2
≤K γ h
ρT
h2T |T |
|T | = K 2 γ 2 h2 |Ω|
T ∈τh
where |T | denotes the measure of T (see (10.69), (10.53)). It follows then that |∇(v − vˆ)|
1
2,Ω
≤ Kγ|Ω| 2 h
(10.78)
which completes the proof of the lemma. To compute uh (see (10.68)) one should notice that uh =
N
ξi λi
(10.79)
i=1
for some uniquely determined ξi (see Theorem 10.7). Moreover by (10.66) uh is uniquely determined by ∇uh · ∇v dx = f v dx ∀ v ∈ Vh . (10.80) Ω
Ω
Of course since the λi ’s built a basis of Vh , (10.80) will be satisfied if and only if ∇uh · ∇λj dx = f λj dx ∀ j = 1, . . . , N. Ω
Ω
Thus, by (10.79), we see that the ξi are uniquely determined as the solution of the N × N linear system N i=1
ξi
Ω
∇λi · ∇λj dx =
Ω
f λj dx ∀ j = 1, . . . , N.
(10.81)
Exercises
147
Since we know that uh exists and is unique the system above is uniquely solvable and in particular the matrix ∇λi · ∇λj dx Ω
i,j
is invertible. Note that this matrix is sparse since ∇λi · ∇λj dx = 0 Ω
as soon as the support of λi , λj are disjoint. (By (10.60) the support of λi is the union of the triangles having xi for vertex.) Thus at the end – like for the finite difference method – the approximate solution of our problem is found by solving a linear system of equations.
Exercises 1. One considers the problem −u = f in Ω = (0, 1), u(0) = u(1) = 0. Set h = N1 . One denotes by ui , i = 1, . . . , N − 1 the approximation of u(ih). Find the system satisfied by ui using the finite difference method. One introduces Vh = {v ∈ H01 (Ω) | v is continuous, v is affine on each interval (ih, (i + 1)h) } and λi the function of Vh satisfying λi (jh) = δij
∀ i, j = 1, . . . , N − 1.
Show that there exists a unique uh ∈ Vh solution to ⎧ ⎨ uh ∈ Vh , ⎩ uh v dx = f v dx ∀ v ∈ Vh . Ω
Ω
Show that uh =
N −1
ξi λi .
i=1
Find the linear system satisfied by ξ = (ξi ).
148
Chapter 10. Numerical Computations
2. Let T be a triangle with vertices K1 , K2 , K3 and such that the angles θi satisfy for some positive δ π 0 < δ ≤ θi ≤ − δ. 2 Denote by λi the affine functions such that ∀ i, j = 1, 2, 3.
λi (Kj ) = δij
Show that for i, j = 1, 2, 3 there exist constants C1 , C2 such that ∇λi · ∇λj ≤ −
C1 <0 h2T
1 C2 ≤ |∇λi | ≤ hT hT
∀ i = j,
∀i
(hT is the diameter of T ). 3. Discretize with finite differences the Dirichlet problem − div(A(x)∇u) = f in Ω, u=0 on Γ. Show the convergence of this discretization. One will assume all the data smooth. ( h be a polygonal domain which is not necessarily included in Ω. Suppose 4. Let Ω that for every ε > 0 one has for h small enough ( h ⊂ { x ∈ R2 | dist(x, Ω) < ε }. { x ∈ Ω | dist(x, ∂Ω) > ε } ⊂ Ω If u, u ˆh are respectively the solutions to (10.44), (10.45) show that uˆh −→ u in H 1 (Ω). Extend this result in Rn . 5. Prove Lemma 10.9 in higher dimensions. 6. Extend the P1 -method in R3 and then in Rn . 7. Let Ω = (α, β) be an interval α = t0 < t1 < · · · < tn = β a subdivision of Ω. Set h =
Max
(ti+1 − ti ). If u ∈ C 2 (Ω) and u ˆ denotes
i=0,...,n−1
the interpolant of u – i.e., the piecewise affine function such that uˆ(ti ) = u(ti ) show that
∀i = 0, . . . , n
h |ˆ u − u |2,Ω ≤ √ |u |2,Ω . 2
Exercises
149
8. A rectangle in R2 is a set of the type Q = [a1 , b1 ] × [a2 , b2 ]. A “triangulation” τh of Ω is a decomposition of Ω in rectangles such that ) Ω= Q Q∈τh
and for any Q1 , Q2 ∈ τ = τh , Q1 ∩ Q2 = ∅, one point or a common side. Set Q1 = {P | P (x) = a0 + a1 x1 + a2 x2 + a3 x1 x2 , ai ∈ R} Vh = {v : Ω → R | v is continuous, v|Q ∈ Q1 ∀ Q ∈ τh , v = 0 on ∂Ω}. (i) Show that a function of Q1 is uniquely determined by its value on the vertices of Q. (ii) Find a basis of Vh . (iii) Develop a Q1 -finite element method (h = Sup diam Q). Q∈τ
Part II
More Advanced Theory
Chapter 11
Nonlinear Problems The goal of this chapter is to introduce nonlinear problems – i.e., problems whose solution does not depend linearly on the data. We will restrict ourselves to simple techniques of existence or uniqueness.
11.1 Monotone methods To simplify our exposition we will assume that Ω is a bounded domain in Rn . Suppose that for f ∈ H −1 (Ω) we want to solve
−Δu + F (u) = f u ∈ H01 (Ω).
in Ω,
(11.1)
In order for the first equation (11.1) to make sense we need to have F (u) ∈ H −1 (Ω) and thus some hypothesis on F . Let us assume that F is uniformly Lipschitz continuous that is to say for some constant L > 0 we have |F (z) − F (z )| ≤ L|z − z |
∀ z, z ∈ R.
(11.2)
Then, clearly, if v ∈ L2 (Ω) we have F (v) ∈ L2 (Ω). This follows from the continuity of F and from the inequality |F (v)| = |F (v) − F (0) + F (0)| ≤ |F (v) − F (0)| + |F (0)| ≤ L|v| + |F (0)|.
(11.3)
The last function above belongs to L2 (Ω) since we have assumed Ω bounded. Thus the first equation of (11.1) makes sense for F satisfying (11.2). Let us then introduce the following definition:
154
Chapter 11. Nonlinear Problems
Definition 11.1 (super- and sub-solutions). We say that u (respectively u) is a supersolution (respectively subsolution) to (11.1) iff u ∈ H 1 (Ω),
u ≥ 0 on ∂Ω (in the sense of Definition 4.2),
− Δu + F (u) ≥ f in a weak sense – i.e., ∇u · ∇v + F (u)v dx ≥ f, v ∀ v ∈ H01 (Ω), v ≥ 0, Ω
(resp.
(11.4)
1
u ∈ H (Ω), u ≤ 0 on ∂Ω, ∇u · ∇v + F (u)v dx ≤ f, v ∀ v ∈ H01 (Ω), v ≥ 0). Ω
Then we have (see [3]–[5]) Theorem 11.1. Let f ∈ H −1 (Ω). Suppose that there exist u, u respectively suband supersolutions to (11.1) such that u ≤ u.
(11.5)
Then there exist um , uM solutions to (11.1) such that u ≤ um ≤ uM ≤ u
(11.6)
and such that for any solution u of (11.1) satisfying u ≤ u ≤ u one has u ≤ um ≤ u ≤ uM ≤ u.
(11.7)
(um , uM are respectively the minimal and the maximal solution to (11.1) between u and u.) Proof. We consider for λ ≥ L the mapping S which assigns for every v ∈ L2 (Ω), u = S(v) ∈ L2 (Ω) weak solution to
−Δu + λu = λv − F (v) + f u ∈ H01 (Ω).
in Ω,
(11.8)
(This mapping goes in fact from L2 (Ω) into H01 (Ω).) A solution to (11.1) is a fixed point for S. As seen in (11.3) for v ∈ L2 (Ω), λv − F (v) + f ∈ H −1 (Ω) and the existence of u = S(v) follows from the Lax–Milgram theorem. 1. S is continuous from L2 (Ω) into itself. Indeed if ui = S(vi ) one derives easily from (11.8) that −Δ(u1 − u2 ) + λ(u1 − u2 ) = (λv1 − F (v1 )) − (λv2 − F (v2 )).
11.1. Monotone methods
155
Taking in the weak form of the equation above u1 − u2 as test function we get λ(v1 − v2 ) − (F (v1 ) − F (v2 ))(u1 − u2 ) dx λ|u1 − u2 |22,Ω ≤ Ω
≤ {λ|v1 − v2 |2,Ω + |F (v1 ) − F (v2 )|2,Ω }|u1 − u2 |2,Ω ≤ {λ|v1 − v2 |2,Ω + L|v1 − v2 |2,Ω }|u1 − u2 |2,Ω (we used here the Cauchy–Schwarz inequality and (11.2)). It follows that λ+L |u1 − u2 |2,Ω ≤ |v1 − v2 |2,Ω λ and the continuity of S follows. 2. S is monotone – i.e., v1 ≥ v2 ⇒ S(v1 ) ≥ S(v2 ) Indeed for v1 ≥ v2 one has by (11.2) (λv1 − F (v1 )) − (λv2 − F (v2 )) = λ(v1 − v2 ) − (F (v1 ) − F (v2 )) ≥ λ(v1 − v2 ) − L(v1 − v2 ) ≥ 0.
(11.9)
It follows from (11.8) that −Δu1 + λu1 ≥ −Δu2 + λu2 in a weak sense and by the weak maximum principle that u1 ≥ u2 . 3. We consider then the following sequences u0 = u, u0 = u, uk = S(uk−1 )
k≥1
uk = S(uk−1 ) k ≥ 1.
We claim that the following inequalities hold for every k ≥ 1 u ≤ uk−1 ≤ uk ≤ uk ≤ uk−1 ≤ u.
(11.10)
Indeed by definition of u1 we have −Δu1 + λu1 = λu0 − F (u0 ) + f ≥ −Δu0 + λu0 (since u0 is subsolution). By the maximum principle it follows that u0 ≤ u1 .
(11.11)
Applying S k−1 to both sides of (11.11) and using the monotonicity of S we get uk−1 ≤ uk .
156
Chapter 11. Nonlinear Problems
One can show the same way that uk ≤ uk−1 ≤ · · · ≤ u1 ≤ u0 = u. Finally the inequality uk ≤ uk follows from u0 ≤ u0 k
where we apply S . This completes the proof of (11.10). 4. We pass to the limit Since uk and uk are monotone sequences between u and u by the Lebesgue theorem there exist two functions um and uM such that uk −→ um ,
uk −→ uM
in L2 (Ω).
(11.12)
It is clear by the definitions of uk , uk and the continuity of S in L2 (Ω) that um and uM are fixed points for S and thus solutions to (11.1). Moreover if u is a solution to (11.1) such that u0 = u ≤ u ≤ u = u0 applying S k to these inequalities one gets uk ≤ u = S k (u) ≤ uk and passing to the limit um ≤ u ≤ uM which completes the proof of the theorem. Example 11.1. There exists a weak solution to −Δu = 1 − |u| in Ω, u ∈ H01 (Ω).
(11.13)
This follows from the fact that −1, 1 are respectively weak subsolution and supersolution to (11.13). Remark 11.1. We used only the Lipschitz continuity of F on [u, u]. Thus there also exists a weak solution to −Δu = 1 − |u|p in Ω, u ∈ H01 (Ω), which lies between −1 and 1 for every p > 1.
11.1. Monotone methods
157
As a corollary of Theorem 11.1 we have Theorem 11.2. Let f ∈ L2 (Ω) and F a Lipschitz continuous monotone function such that F (0) = 0. (11.14) Then there exists a unique solution to −Δu + F (u) = f u ∈ H01 (Ω).
in Ω,
(11.15)
Proof. We consider u and u the weak solutions to −Δu = −f − in Ω, −Δu = f + in Ω, 1 u ∈ H0 (Ω), u ∈ H01 (Ω), where f − and f + are respectively the negative and positive part of f . By the weak maximum principle we have u ≤ 0 ≤ u. It follows, by the monotonicity of F , that −Δu + F (u) ≤ −Δu = −f − ≤ f ≤ f + = −Δu ≤ −Δu + F (u). Since in addition u = 0 = u on ∂Ω the existence of a solution to (11.15) follows from Theorem 11.1. If now u1 , u2 are two solutions to (11.15) by subtraction we get −Δ(u1 − u2 ) = −(F (u1 ) − F (u2 )) i.e.,
Ω
in Ω,
∇(u1 − u2 ) · ∇v dx = −
Ω
(F (u1 ) − F (u2 ))v dx
∀ v ∈ H01 (Ω).
Taking v = u1 − u2 , by the monotonicity of F we get |∇(u1 − u2 )|2 dx ≤ 0 Ω
that is to say u1 = u2 . This completes the proof of the theorem.
Remark 11.2. One can easily remove the assumption (11.14) by writing the first equation of (11.15) as −Δu + F (u) − F (0) = f − F (0).
(11.16)
158
Chapter 11. Nonlinear Problems
Remark 11.3. Except in very peculiar situations – like for instance in Theorem 11.2 – uniqueness is lost for nonlinear problems. For instance if Ω = (0, π)2 the problem −Δu + 2|u| = 0 in Ω, (11.17) u ∈ H01 (Ω), admits u = k sin x sin y as solution for any k ≤ 0. To end this section we would like to consider an interesting example – the so-called logistic equation (see [16]). If a is a positive constant we are looking for u solution to −Δu = u(a − u) in Ω, (11.18) u > 0, u bounded. u ∈ H01 (Ω), Note that u = 0 is solution to (11.18) if we do not require u > 0. Then we have Theorem 11.3. Let us denote by λ1 the first eigenvalue for the Dirichlet problem in Ω. The problem (11.18) admits a solution iff a > λ1 . Moreover this solution is unique. Proof. Suppose first that a ≤ λ1 . Under its weak form the problem (11.18) can be written as ∇u · ∇v − auv dx = − u2 v dx ∀ v ∈ H01 (Ω). Ω
Ω
Taking v = u we obtain Ω
2
|∇u| dx − a
Ω
2
u dx = −
Ω
u3 dx.
Since a ≤ λ1 we have 2 2 2 |∇u| dx − a u dx ≥ |∇u| dx − λ1 u2 dx ≥ 0. Ω
It follows that
Ω
Ω
Ω
Ω
u3 dx ≤ 0
and u is necessarily equal to 0. Suppose now that λ1 < a. First we claim that if u is bounded then u≤a
in Ω.
11.1. Monotone methods
159
Indeed taking v = (u − a)+ in the weak formulation of (11.18) we get ∇u · ∇(u − a)+ dx = u(a − u)(u − a)+ dx Ω Ω + ∇(u − a) · ∇(u − a) dx = u(a − u)(u − a) dx ≤ 0, ⇐⇒ Ω
{u>a}
{u > a} denoting the set defined as { x | u(x) > a }. It follows that |∇(u − a)+ |2 dx ≤ 0, Ω
+
i.e., (u − a) = 0 and this shows that u ≤ a. Next we remark that u = a is a supersolution for (11.18). Then consider ϕ1 a positive first eigenfunction corresponding to λ1 . For any t > 0 we have −Δ(tϕ1 ) = λ1 tϕ1 ≤ (tϕ1 )(a − tϕ1 ) provided that λ1 ≤ a − tϕ1
⇐⇒
tϕ1 ≤ a − λ1 .
(We recall that ϕ1 can always be chosen positive. Moreover it is bounded.) Since a − λ1 > 0, for t small enough we have

−Δ(tϕ1) ≤ (tϕ1)(a − tϕ1)

and u = tϕ1 is a subsolution to (11.18). It then follows from Theorem 11.1 that there exists a maximal solution u to (11.18) such that tϕ1 ≤ u ≤ a. Of course u is everywhere positive.

Let us now turn to the issue of uniqueness. Suppose that u1, u2 are two solutions to (11.18). From the weak formulation we have

∫_Ω ∇u1 · ∇v dx = ∫_Ω u1(a − u1)v dx  ∀ v ∈ H01(Ω),
∫_Ω ∇u2 · ∇v dx = ∫_Ω u2(a − u2)v dx  ∀ v ∈ H01(Ω).

Taking v = u2 in the first equation, v = u1 in the second and subtracting the two equalities we get

0 = ∫_Ω u1 u2 (u2 − u1) dx.

Suppose then that u2 is the maximal solution built between 0 and a. One has of course u2 ≥ u1 > 0 and thus, by the integral equality above, u1 = u2. This completes the proof of the theorem.
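The monotone scheme behind Theorem 11.1 and the threshold a > λ1 of Theorem 11.3 can be checked numerically on Ω = (0, 1), where λ1 = π². The finite-difference sketch below is illustrative only: the grid, the shift M = a (which makes u → u(a − u) + Mu nondecreasing on [0, a]) and the iteration count are ad hoc choices, not part of the text.

```python
import numpy as np

def logistic_max_solution(a, n=100, iters=400):
    """Monotone iteration from the supersolution u = a for -u'' = u(a - u)
    on (0, 1) with Dirichlet boundary conditions (finite differences)."""
    h = 1.0 / (n + 1)
    # discrete Dirichlet Laplacian -u'' on the n interior nodes
    lap = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    M = a                      # shift: u -> u(a - u) + M*u is nondecreasing on [0, a]
    B = lap + M * np.eye(n)
    u = a * np.ones(n)         # supersolution u = a
    for _ in range(iters):     # the iterates decrease to the maximal solution
        u = np.linalg.solve(B, u * (a - u) + M * u)
    return u

u_sub = logistic_max_solution(a=5.0)    # a < pi^2: the iteration collapses to 0
u_sup = logistic_max_solution(a=15.0)   # a > pi^2: a positive solution survives
```

Consistently with Theorem 11.3, the iteration returns (numerically) zero for a = 5 < λ1 and a positive profile for a = 15 > λ1.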
Chapter 11. Nonlinear Problems
Remark 11.4. One can replace the requirement that u be bounded by u(a − u)v ∈ L1(Ω) for every v ∈ H01(Ω), since this property is all that is needed for the computations above to make sense. In fact one could also replace u > 0 in (11.18) by u ≥ 0, u ≢ 0, since one can show that a nonnegative nontrivial solution to (11.18) is automatically positive.
11.2 Quasilinear equations

A quasilinear elliptic equation in divergence form is an equation of the type

− div(A(x, u)∇u) + a(x, u)u = f,   (11.19)

i.e., an equation whose coefficients depend on u – or possibly on its derivatives. The motivation for introducing such equations, and nonlinear equations in general, is to take into account the reaction of a system to its own state. For instance the diffusion of heat in a material proceeds with a velocity given by the Fourier law

v = −A∇u,

where A is a constant proper to each material. However it is easy to imagine that this constant for a piece of metal is not the same at 0° and at 200°: it depends on the temperature of the material itself, and a more realistic Fourier law reads

v = −A(u)∇u.   (11.20)

This leads to an equation of the type (11.19). In the preceding section we addressed the case of a nonlinear lower order term. Here, for the sake of simplicity, we will restrict ourselves to the case where

A(x, u) = A(x, u) Id,  a(x, u) = 0,

where A(x, u) is a real number. Thus let us suppose that Ω is a bounded subset of Rⁿ and consider the following quasilinear Dirichlet problem: one wants to find u solution to

∫_Ω A(x, u)∇u · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω),  u ∈ H01(Ω).   (11.21)

In this equation f ∈ H−1(Ω). First let us assume that

A(x, u) = A(u)   (11.22)

where A is a continuous function satisfying, for some constants λ, Λ,

0 < λ ≤ A(u) ≤ Λ  ∀ u ∈ R.   (11.23)
Then under the assumptions above we have

Theorem 11.4. Let f ∈ H−1(Ω) and A satisfying (11.23). Then there exists a unique solution u of the problem

∫_Ω A(u)∇u · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω),  u ∈ H01(Ω).   (11.24)

Proof. For u ∈ H01(Ω) let us introduce

U = U(x) = ∫₀^{u(x)} A(s) ds.   (11.25)

By Proposition 2.12 we have

U ∈ H01(Ω),  ∂xi U = A(u) ∂xi u,

and thus U given by (11.25) is solution to

∫_Ω ∇U · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω),  U ∈ H01(Ω).   (11.26)

Of course (11.26) admits a unique solution. By (11.23) it is clear that the function

𝒜(t) = ∫₀^t A(s) ds

is a C¹, monotone increasing function which admits a C¹ monotone increasing inverse 𝒜⁻¹. It follows that if U is the solution to (11.26) then 𝒜⁻¹(U) is solution to (11.24). This completes the proof of the theorem, since 𝒜 realizes a one-to-one mapping between the solutions to (11.24) and (11.26).

The case where A depends on x is a little more involved. We will need the following definition.

Definition 11.2. A function A = A(x, u) is a Carathéodory function provided

(i) u → A(x, u) is continuous for a.e. x ∈ Ω,   (11.27)
(ii) x → A(x, u) is measurable for every u ∈ R.   (11.28)
Example 11.2. If A : Ω × R → R is continuous then A is a Carathéodory function. In a first reading one can consider only this case, which is already general enough!

In order to be able to solve (11.21) we will assume that

A is a Carathéodory function,   (11.29)
∃ λ, Λ such that 0 < λ ≤ A(x, u) ≤ Λ  a.e. x ∈ Ω, ∀ u ∈ R.   (11.30)
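Before turning to the general case, the change of unknown 𝒜(t) = ∫₀ᵗ A(s) ds used in the proof of Theorem 11.4 can be checked numerically in one dimension. In the sketch below A(u) = 1 + 1/(1 + u²) (so λ = 1, Λ = 2) and f ≡ 1 are sample choices, not taken from the text: one solves the linear problem for U and recovers u = 𝒜⁻¹(U).

```python
import numpy as np

x = np.linspace(0.0, 1.0, 401)
U = 0.5 * x * (1.0 - x)             # solves -U'' = 1, U(0) = U(1) = 0

def calA(t):
    # primitive of A(s) = 1 + 1/(1 + s^2)
    return t + np.arctan(t)

def calA_inv(y, iters=60):
    # calA is increasing, so invert it by (vectorized) bisection
    lo, hi = np.zeros_like(y), np.ones_like(y)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        below = calA(mid) < y
        lo = np.where(below, mid, lo)
        hi = np.where(below, hi, mid)
    return 0.5 * (lo + hi)

u = calA_inv(U)                      # solution of the quasilinear problem
A_of_u = 1.0 + 1.0 / (1.0 + u**2)
flux = A_of_u * np.gradient(u, x)    # should coincide with U' = A(u) u'
```

The identity ∂ₓU = A(u)∂ₓu of Proposition 2.12 then holds up to discretization error.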
Under the assumptions above we have

Theorem 11.5. Let A satisfy (11.29), (11.30). Then for every f ∈ H−1(Ω) there exists a solution to (11.21).

Proof. We use the Schauder fixed point theorem (see the appendix). For w ∈ L²(Ω) we introduce u = S(w) solution to

∫_Ω A(x, w)∇u · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω),  u ∈ H01(Ω).   (11.31)

The existence of u follows from the Lax–Milgram theorem. Note that by our assumptions on A we have A(x, w) ∈ L∞(Ω). We will be done if we can show that the mapping S, considered as a mapping from L²(Ω) into itself, has a fixed point.

1. A priori estimates. We take as test function in (11.31) v = u. It follows that

∫_Ω A(x, w)|∇u|² dx = ⟨f, u⟩ ≤ |f|* |∇u|2,Ω   (11.32)

if we denote by |f|* the strong dual norm of f when H01(Ω) is equipped with the norm |∇u|2,Ω. From (11.32) we deduce by (11.30)

λ|∇u|²2,Ω ≤ |f|* |∇u|2,Ω.

Thus if cp denotes the constant of the Poincaré inequality we have

|u|2,Ω ≤ cp |∇u|2,Ω ≤ cp |f|*/λ.   (11.33)

2. Fixed point argument. We introduce B the ball defined by

B = { v ∈ L²(Ω) | |v|2,Ω ≤ cp |f|*/λ }.   (11.34)
B is a closed convex set of L²(Ω). Moreover, by (11.33), S is a mapping from B into B and S(B) is relatively compact in B (by the compactness of the embedding of H01(Ω) into L²(Ω)). If in addition we can show that S is continuous we will be done – i.e., S will have a fixed point – thanks to the Schauder fixed point theorem (see the appendix).

3. Continuity of S. Let wn ∈ L²(Ω) be a sequence such that

wn −→ w  in L²(Ω).
Set un = S(wn). The estimate (11.33) applies to un – since it is independent of w – and we have

|∇un|2,Ω ≤ |f|*/λ,

i.e., un is bounded in H01(Ω) independently of n. Then, up to a subsequence, we have

un −→ u∞ in L²(Ω),  un ⇀ u∞ in H01(Ω),

for some u∞ ∈ H01(Ω). Moreover – up to a subsequence – we can assume that

wn −→ w  a.e. in Ω.

Due to the assumptions on A this implies that

A(x, wn) −→ A(x, w)  a.e. in Ω.

We are then in a position to pass to the limit in the equation

∫_Ω A(x, wn)∇un · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω).   (11.35)

Indeed by the Lebesgue theorem

A(x, wn)∇v −→ A(x, w)∇v  in L²(Ω).

Since ∇un ⇀ ∇u∞ in L²(Ω) we obtain from (11.35)

∫_Ω A(x, w)∇u∞ · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω),
i.e., u∞ = u = S(w). Since the possible limit of the sequence un is uniquely determined, the whole sequence un converges toward u in L²(Ω). This shows the continuity of S and completes the proof of the theorem.

Let us address now the issue of uniqueness (see also [6], [7]). For that let us suppose that there exists a function ω such that

ω is continuous, nondecreasing,  ω(t) > 0 ∀ t > 0,   (11.36)

and

|A(x, u) − A(x, v)| ≤ ω(|u − v|)  a.e. x ∈ Ω, ∀ u ≠ v ∈ R.   (11.37)

Moreover let us assume that

∫_{0+} dt/ω(t) = +∞.   (11.38)
Remark 11.5. The condition (11.38) implies that necessarily

ω(t) → 0  when t → 0.

If for instance we suppose that u → A(x, u) is uniformly Lipschitz continuous, which means that for some positive constant L we have

|A(x, u) − A(x, v)| ≤ L|u − v|  a.e. x ∈ Ω, ∀ u, v ∈ R,   (11.39)
then clearly ω(t) = Lt fulfills the conditions (11.36)–(11.38).

Under these conditions we have

Theorem 11.6. Under the assumptions of Theorem 11.5, if the “modulus of continuity” ω of A satisfies (11.38), then the problem (11.21) admits a unique solution.

Proof. Consider u1, u2 two solutions to (11.21). For ε > 0 let us define Fε as

Fε(x) = ∫_ε^x dt/ω²(t) if x ≥ ε,  Fε(x) = 0 if x ≤ ε.   (11.40)

Clearly Fε is a continuous piecewise C¹-function such that

|Fε′(x)| ≤ 1/ω²(ε)  ∀ x.

Thus it follows from Corollary 2.15 that Fε(u1 − u2) ∈ H01(Ω). From (11.21) we have for i = 1, 2

∫_Ω A(x, ui)∇ui · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω).   (11.41)

We deduce then that

∫_Ω A(x, u1)∇u1 · ∇v dx = ∫_Ω A(x, u2)∇u2 · ∇v dx
⇐⇒ ∫_Ω A(x, u1)∇(u1 − u2) · ∇v dx = ∫_Ω (A(x, u2) − A(x, u1))∇u2 · ∇v dx  ∀ v ∈ H01(Ω).   (11.42)
Taking v = Fε(u1 − u2) we derive that

∫_{u1−u2>ε} A(x, u1) |∇(u1 − u2)|²/ω²(u1 − u2) dx
  = ∫_{u1−u2>ε} (A(x, u2) − A(x, u1))∇u2 · ∇(u1 − u2)/ω²(u1 − u2) dx   (11.43)

where {u1 − u2 > ε} = { x ∈ Ω | (u1 − u2)(x) > ε }. From (11.43) we obtain by (11.30), (11.37) and using the Cauchy–Schwarz inequality

λ ∫_{u1−u2>ε} |∇(u1 − u2)|²/ω²(u1 − u2) dx
  ≤ ∫_{u1−u2>ε} |A(x, u2) − A(x, u1)| |∇u2| |∇(u1 − u2)|/ω²(u1 − u2) dx
  ≤ ∫_{u1−u2>ε} |∇u2| |∇(u1 − u2)|/ω(u1 − u2) dx
  ≤ ( ∫_{u1−u2>ε} |∇(u1 − u2)|²/ω²(u1 − u2) dx )^{1/2} ( ∫_{u1−u2>ε} |∇u2|² dx )^{1/2}.
From this it follows that

∫_{u1−u2>ε} |∇(u1 − u2)|²/ω²(u1 − u2) dx ≤ (1/λ²) ∫_{u1−u2>ε} |∇u2|² dx ≤ (1/λ²) ∫_Ω |∇u2|² dx.   (11.44)

Let us then set

Gε(x) = ∫_ε^x dt/ω(t) if x ≥ ε,  Gε(x) = 0 if x ≤ ε.   (11.45)

With this definition (11.44) becomes

∫_Ω |∇Gε(u1 − u2)|² dx ≤ (1/λ²) ∫_Ω |∇u2|² dx.   (11.46)
Clearly Gε is a continuous piecewise C¹-function satisfying the assumptions of Corollary 2.15 and we have Gε(u1 − u2) ∈ H01(Ω). Using the Poincaré inequality in (11.46) we derive that for some constant cp we have

∫_Ω |Gε(u1 − u2)|² dx ≤ (cp/λ²) ∫_Ω |∇u2|² dx.
Passing to the limit in ε, by Fatou's lemma we get

∫_Ω lim_{ε→0} |Gε(u1 − u2)|² dx ≤ (cp/λ²) ∫_Ω |∇u2|² dx.

This implies that

lim_{ε→0} |Gε(u1 − u2)|² < +∞  a.e. x.

By the definition of Gε and (11.38) this implies that

u1 − u2 ≤ 0  a.e. x.

Reversing the rôle of u1 and u2 we conclude that u1 = u2. This completes the proof of the theorem.
The problems that we have just addressed are of so-called local type: the nonlinear character depends on u at each point. We have just seen that uniqueness can somehow be preserved in this case; indeed, by Remark 11.5, it is clear that uniqueness holds whenever A is Lipschitz continuous in u. This is no longer true in the so-called nonlocal case.
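Before moving on, the map S from the proof of Theorem 11.5 can also be iterated numerically (a plain Picard iteration rather than the Schauder argument itself). Everything below is a one-dimensional sample: A(x, u) = 1 + ½ sin(πx)/(1 + u²) is a Carathéodory coefficient with 1 ≤ A ≤ 3/2, and f ≡ 1; neither appears in the text.

```python
import numpy as np

n = 200
h = 1.0 / (n + 1)
nodes = np.linspace(h, 1.0 - h, n)            # interior grid points

def A(x, u):
    # sample Caratheodory coefficient, 1 <= A <= 3/2
    return 1.0 + 0.5 * np.sin(np.pi * x) / (1.0 + u**2)

def S(w):
    """u = S(w): solve -(A(x, w) u')' = 1, u(0) = u(1) = 0, in flux form."""
    wf = np.concatenate(([0.0], w, [0.0]))
    xf = np.concatenate(([0.0], nodes, [1.0]))
    ai = A(0.5 * (xf[:-1] + xf[1:]), 0.5 * (wf[:-1] + wf[1:]))  # interface values
    M = (np.diag(ai[:-1] + ai[1:]) - np.diag(ai[1:-1], 1)
         - np.diag(ai[1:-1], -1)) / h**2
    return np.linalg.solve(M, np.ones(n))

u = np.zeros(n)
for _ in range(50):
    u = S(u)                                   # Picard iteration w -> S(w)
```

Here the dependence of A on u is mild, so the iteration converges to a fixed point u = S(u), i.e., to a solution of (11.21).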
11.3 Nonlocal problems

Let us consider the following model problem. We are looking for u solution to

∫_Ω A(∫_Ω u(x) dx) ∇u · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω),  u ∈ H01(Ω).   (11.47)

Here f is a distribution in H−1(Ω) and A a function which is assumed to satisfy only

A(z) ≠ 0  ∀ z ∈ R.   (11.48)

Let us introduce ϕ the solution of the Dirichlet problem

−Δϕ = f in Ω,  ϕ ∈ H01(Ω).   (11.49)

Moreover let us denote by E the set defined as

E = { μ ∈ R | A(μ)μ = ∫_Ω ϕ(x) dx }.   (11.50)
Finally for μ ∈ R denote by uμ the solution to

A(μ) ∫_Ω ∇uμ · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω),  uμ ∈ H01(Ω).   (11.51)

Then we have

Theorem 11.7. The mapping

μ → uμ   (11.52)

is a one-to-one mapping from the set E onto the set S of the solutions to (11.47).

Proof. 1. Let us show first that uμ ∈ S for every μ ∈ E. Indeed from (11.51) and by the uniqueness of the solution of the Dirichlet problem we have

A(μ)uμ = ϕ,

since A(μ)uμ is a weak solution to (11.49) (A(μ) is just a constant). Integrating over Ω we derive that

A(μ) ∫_Ω uμ dx = ∫_Ω ϕ(x) dx = A(μ)μ

since μ ∈ E. It follows that

μ = ∫_Ω uμ dx   (11.53)

and uμ ∈ S, since uμ satisfies (11.47).

2. Let us show that for any u ∈ S there exists a unique μ ∈ E such that u = uμ. Indeed let u ∈ S. Then from (11.47) and the uniqueness of the solution to (11.49) we have

A(∫_Ω u dx) u = ϕ.

Integrating on Ω we derive

A(∫_Ω u dx) ∫_Ω u dx = ∫_Ω ϕ dx,   (11.54)

i.e., μ = ∫_Ω u dx ∈ E and u = uμ. To show that μ is unique suppose that uμ1 = uμ2 for two μ's. Then by (11.53), μ1 = μ2. This completes the proof of the theorem.
As a corollary we have

Theorem 11.8. Suppose that A is a continuous function such that

λ ≤ A(z) ≤ Λ  ∀ z ∈ R   (11.55)

where λ, Λ are positive constants. Then there exists a solution to (11.47). In addition (11.47) might have several solutions – up to a continuum.

Proof. The solutions of (11.47) are determined by solving the equation

A(μ)μ = ∫_Ω ϕ dx.   (11.56)

If ∫_Ω ϕ dx = 0 then μ = 0 is the only solution to (11.56) and thus (11.47) possesses a unique solution. Otherwise μ is necessarily different from 0 and the equation (11.56) is equivalent to

A(μ) = (∫_Ω ϕ dx)/μ.   (11.57)

If ∫_Ω ϕ dx > 0 one has

lim_{μ→+∞} (∫_Ω ϕ dx)/μ = 0,  lim_{μ→0+} (∫_Ω ϕ dx)/μ = +∞,

and thus, A being continuous with values in [λ, Λ], there always exists a μ satisfying (11.57). The case where ∫_Ω ϕ dx < 0 can be treated the same way.

We represent in the figures below the different possibilities for existence of solutions to (11.57), i.e., to (11.47). For simplicity we consider only the case where ∫_Ω ϕ dx > 0. The solutions to (11.57) are obtained at the intersection of the graph of A and the hyperbola

μ −→ (∫_Ω ϕ dx)/μ = h(μ).
Figure 11.1: A case with a unique solution
Figure 11.2: A case with two solutions
Figure 11.3: A case with a continuum of solutions
Figure 11.4: A case with no solution
(Each figure shows the graph of A(μ) together with the hyperbola h(μ).)
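The intersection picture can be reproduced numerically. Below A(μ) = 0.2 + 2e^{−3μ²} and ∫_Ω ϕ dx = 1/2 are sample data (not from the text) chosen so that A is nonmonotone; counting sign changes of A(μ)μ − ∫_Ω ϕ dx locates the solutions of (11.56).

```python
import numpy as np

def A(mu):
    # sample continuous coefficient with 0.2 <= A <= 2.2
    return 0.2 + 2.0 * np.exp(-3.0 * mu**2)

int_phi = 0.5                       # stands for \int_Omega phi dx > 0
mu = np.linspace(0.05, 4.0, 4000)
g = A(mu) * mu - int_phi            # zeros of g solve (11.56)
crossings = int(np.sum(g[:-1] * g[1:] < 0.0))
```

For this A the equation A(μ)μ = ∫_Ω ϕ dx has three roots, each giving a distinct solution uμ = ϕ/A(μ) of (11.47) – an instance of the multiplicity announced in Theorem 11.8.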
11.4 Variational inequalities

In this section we are going to consider a class of nonlinear problems associated to linear operators. We encountered our first variational inequality in Chapter 1 when dealing with the projection on a convex set. Thus our first step in the theory is a generalization of the Lax–Milgram theorem when H – the whole space – is replaced by a closed convex set. The notation being that of Chapter 1, we have

Theorem 11.9. Let K ≠ ∅ be a closed convex set of a real Hilbert space H. Let a(u, v) be a bilinear form on H satisfying the assumptions of Theorem 1.5, that is to say (1.23), (1.24). Then for every f ∈ H* the dual of H there exists a unique u solution to

u ∈ K,  a(u, v − u) ≥ ⟨f, v − u⟩  ∀ v ∈ K.   (11.58)

Moreover if a is symmetric, u is the unique minimizer over K of the functional

J(v) = (1/2) a(v, v) − ⟨f, v⟩.
Proof. 1. An equivalent formulation. As in Chapter 1 we denote by (·, ·) the scalar product in H. Then by the Riesz representation theorem (see also Theorem 1.5) for every u ∈ H there exists a unique Au ∈ H such that

a(u, v) = (Au, v)  ∀ v ∈ H.   (11.59)

Similarly the linear form f can be written for some f̃ ∈ H as

⟨f, v⟩ = (f̃, v)  ∀ v ∈ H.

Then the second equation of (11.58) becomes

(Au − f̃, v − u) ≥ 0  ∀ v ∈ K
⇐⇒ (u, v − u) ≥ (u − ε(Au − f̃), v − u)  ∀ v ∈ K   (11.60)

for any ε > 0. We will fix ε later on. It is clear that u ∈ K solves (11.60) iff

u = PK(u − ε(Au − f̃)),

where PK denotes the projection from H onto K, that is to say iff u is a fixed point of the mapping from K into itself defined by

u → PK(u − ε(Au − f̃)).   (11.61)
2. A fixed point argument. First we claim that PK is a contraction on H, that is to say

|PK(u1) − PK(u2)| ≤ |u1 − u2|  ∀ u1, u2 ∈ H   (11.62)

if | · | denotes the norm in H. Indeed from the definition of PK – see Theorem 1.1 – we have

(PK(u1), v − PK(u1)) ≥ (u1, v − PK(u1))  ∀ v ∈ K,   (11.63)
(PK(u2), v − PK(u2)) ≥ (u2, v − PK(u2))  ∀ v ∈ K.   (11.64)

Taking v = PK(u2) in (11.63) and v = PK(u1) in (11.64), by adding the two inequalities we obtain

|PK(u1) − PK(u2)|² ≤ (PK(u1) − PK(u2), u1 − u2) ≤ |PK(u1) − PK(u2)| |u1 − u2|

which gives (11.62). If we denote by Tε(u) the mapping defined by (11.61) we then have

|Tε(u1) − Tε(u2)|² ≤ |(u1 − εAu1) − (u2 − εAu2)|²
  = |u1 − u2|² − 2ε(A(u1 − u2), u1 − u2) + ε²|A(u1 − u2)|².   (11.65)

(We recall that A is linear – see the proof of Theorem 1.5.) Taking v = u in (11.59) we get

(Au, u) = a(u, u) ≥ λ|u|²  ∀ u ∈ H.   (11.66)

Then taking v = Au we obtain |Au|² = a(u, Au) ≤ Λ|u| |Au|, i.e.,

|Au| ≤ Λ|u|.   (11.67)

Going back to (11.65) and using (11.66), (11.67) with u = u1 − u2 we derive that

|Tε(u1) − Tε(u2)|² ≤ |u1 − u2|² − 2ελ|u1 − u2|² + ε²Λ²|u1 − u2|² = {1 − ε(2λ − εΛ²)}|u1 − u2|².

Choosing ε < 2λ/Λ² it is clear that Tε is a contraction and, by the Banach fixed point theorem (applied on K), it possesses a unique fixed point, which is the solution to (11.58).
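The contraction Tε can be run as an algorithm. The finite-dimensional sketch below solves a discrete one-obstacle problem: H = R⁸ with a(u, v) = (Au, v) for a sample SPD stiffness-like matrix, K = { v | v ≥ ϕ }, f = 0, and ε = λ/Λ² < 2λ/Λ² as the proof requires; here PK is simply the componentwise maximum with ϕ. Matrix, obstacle and iteration count are illustrative choices only.

```python
import numpy as np

n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD, a(u, v) = (A u, v)
f = np.zeros(n)
xg = np.linspace(-1.0, 1.0, n + 2)[1:-1]
phi = 0.5 - xg**2                                         # obstacle, positive in the middle

ev = np.linalg.eigvalsh(A)
lam, Lam = ev[0], ev[-1]
eps = lam / Lam**2                                        # satisfies eps < 2*lam/Lam^2

u = np.maximum(phi, 0.0)
for _ in range(60000):
    u = np.maximum(phi, u - eps * (A @ u - f))            # u -> P_K(u - eps(Au - f))
r = A @ u - f
```

At the fixed point the discrete complementarity conditions hold: u ≥ ϕ, Au − f ≥ 0 and (Au − f) · (u − ϕ) = 0, as expected for the one-obstacle problem (11.70) below.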
3. The symmetric case. In this case one proceeds as in Theorem 1.5, that is to say one computes

J(v) = (1/2) a(v, v) − ⟨f, v⟩ = J(v − u + u)
  = (1/2) a(v − u + u, v − u + u) − ⟨f, v − u + u⟩
  = (1/2) a(u, u) − ⟨f, u⟩ + a(u, v − u) − ⟨f, v − u⟩ + (1/2) a(v − u, v − u)

since a is symmetric. We then derive – see (11.58) –

J(v) ≥ J(u) + (1/2) a(v − u, v − u) ≥ J(u) + (λ/2)|v − u|²  ∀ v ∈ K.

It follows that u is the unique minimizer of J on K. This completes the proof of the theorem.

Remark 11.6. In the case where K = V is a closed vector subspace of H one recovers the Lax–Milgram theorem. Indeed in (11.58) one can choose v = u ± w where w ∈ V, which leads to

a(u, w) = ⟨f, w⟩  ∀ w ∈ V.
To have applications of the result above one can consider again the framework of Chapter 4, that is to say consider

a(u, v) = ∫_Ω A(x)∇u · ∇v + a(x)uv dx  ∀ u, v ∈ H¹(Ω)   (11.68)

where A and a satisfy (4.2), (4.3) and (4.6). If ϕ is a measurable function in Ω and if

Kϕ = { v ∈ H01(Ω) | v(x) ≥ ϕ(x) a.e. x ∈ Ω },   (11.69)

we have

Theorem 11.10. Let Ω be a bounded open set of Rⁿ, f ∈ H−1(Ω) and a(u, v) given by (11.68). If (4.2), (4.3), (4.6) hold and if Kϕ ≠ ∅ then there exists a unique u solution to

u ∈ Kϕ,  a(u, v − u) ≥ ⟨f, v − u⟩  ∀ v ∈ Kϕ.   (11.70)

u is called the solution of a one obstacle problem.

Proof. It is enough to show that Kϕ is a closed convex set of H01(Ω). We leave the details of the proof to the reader.
Figure 11.5. (The graph of the obstacle ϕ on (−1, 1), with the tangency points x1 and x2.)
Example. Consider Ω = (−1, 1), f = 0, and ϕ given by Figure 11.5 above, i.e.,

ϕ ∈ C²(Ω̄),  ϕ″ ≤ 0 in Ω,  ϕ(−1), ϕ(1) ≤ 0,  ϕ not identically negative.   (11.71)

Then the solution to

u ∈ Kϕ,  ∫_Ω u′(v − u)′ dx ≥ 0  ∀ v ∈ Kϕ,   (11.72)

is the function u obtained by drawing from the end points of Ω the tangents to the graph of ϕ. If x1, x2 are the tangency points,

u(x) = ϕ′(x1)(x + 1) on (−1, x1),  u(x) = ϕ(x) on (x1, x2),  u(x) = ϕ′(x2)(x − 1) on (x2, 1).   (11.73)

To see that, let v ∈ Kϕ and vn ∈ D(Ω) such that

vn → v  in H01(Ω).   (11.74)

It is clear that u is a C¹-function on Ω. Thus we have

∫_Ω u′(vn − u)′ dx = ∫_{−1}^{x1} u′(vn − u)′ dx + ∫_{x1}^{x2} u′(vn − u)′ dx + ∫_{x2}^{1} u′(vn − u)′ dx
  = ϕ′(x1)[(vn − u)]_{−1}^{x1} + ∫_{x1}^{x2} ϕ′(vn − ϕ)′ dx + ϕ′(x2)[(vn − u)]_{x2}^{1}
Chapter 11. Nonlinear Problems
= ϕ (x1 )(vn − u)(x1 ) x2 {(ϕ (vn − ϕ)) − ϕ (vn − ϕ)} dx − ϕ (x2 )(vn − u)(x2 ) + x 1x2 ϕ (vn − ϕ) dx =−
(11.75)
x1
since u(xi ) = ϕ(xi ), i = 1, 2. Letting n → +∞ we obtain by (11.71) x2 u (v − u) dx ≥ − ϕ (v − ϕ) dx ≥ 0 ∀ v ∈ Kϕ Ω
x1
which implies that u is the solution to (11.72) since u ∈ Kϕ . Note that if ϕ ≤ 0 then u = 0 solves (11.72). For more results on variational inequalities we refer the interested reader to [10], [13], [18]–[20], [23], [24], [26]–[28], [62]–[66], [68], [84], [90], [95].
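For a concrete ϕ the tangent construction (11.73) can be carried out numerically. Below ϕ(x) = 1/2 − x² is a sample obstacle satisfying (11.71); the tangency point x1 solves ϕ′(x1)(x1 + 1) = ϕ(x1), which for this ϕ gives x1 = −1 + √(1/2), and x2 = −x1 by symmetry.

```python
import numpy as np

def g(s):
    # tangency condition phi'(s)(s + 1) - phi(s) for phi(x) = 1/2 - x^2
    return -2.0 * s * (s + 1.0) - (0.5 - s**2)

lo, hi = -0.9, 0.0                      # g(lo) > 0 > g(hi): one root inside
for _ in range(80):                     # plain bisection
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x1 = 0.5 * (lo + hi)
x2 = -x1

x = np.linspace(-1.0, 1.0, 2001)
phi = 0.5 - x**2
u = np.where(x < x1, -2.0 * x1 * (x + 1.0),
    np.where(x > x2, -2.0 * x2 * (x - 1.0), phi))   # formula (11.73)
```

The resulting u vanishes at ±1, stays above ϕ (so u ∈ Kϕ, since ϕ is concave and the linear pieces are its tangents), and is C¹ across the tangency points.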
Exercises

1. Let f ∈ H−1(Ω).
a) Show that there exists fn ∈ L²(Ω) such that fn −→ f in H−1(Ω).
b) Under the assumptions of Theorem 11.2 let un be the solution to

−Δun + F(un) = fn in Ω,  un ∈ H01(Ω).

Show that un is bounded independently of n in H01(Ω). Deduce that (11.15) admits a unique solution for any f ∈ H−1(Ω).

2. Let A be a matrix whose coefficients are Carathéodory functions satisfying the usual ellipticity condition, namely

λ|ξ|² ≤ A(x, u)ξ · ξ  a.e. x ∈ Ω, ∀ u ∈ R, ∀ ξ ∈ Rⁿ,
|A(x, u)ξ| ≤ Λ|ξ|  a.e. x ∈ Ω, ∀ u ∈ R, ∀ ξ ∈ Rⁿ.

Show the existence of a function u solution to

∫_Ω A(x, u)∇u · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω),  u ∈ H01(Ω).

(Ω is supposed to be bounded, f ∈ H−1(Ω).) Give conditions on A in order to have uniqueness.
3. Show that under the conditions of Theorem 11.6 the weak maximum principle holds for u solution to (11.21).

4. Let g ∈ L²(Ω). In the spirit of Theorem 11.7 solve the problem

∫_Ω A(∫_Ω gu dx) ∇u · ∇v dx = ⟨f, v⟩  ∀ v ∈ H01(Ω),  u ∈ H01(Ω).

5. Prove that if f ≢ 0 then we can drop the assumption (11.48) in Theorem 11.7 by replacing E by the set

E = { μ ∈ R | A(μ) ≠ 0, A(μ)μ = ∫_Ω ϕ(x) dx }.

6. Show that when A is a function satisfying

0 < λ ≤ A(z) ≤ Λ  ∀ z ∈ R,

existence of a solution might fail in Theorem 11.8 when A fails to be continuous.

7. Under the assumptions of Theorem 11.9 show that (11.58) is equivalent to

u ∈ K,  a(v, v − u) ≥ ⟨f, v − u⟩  ∀ v ∈ K.

8. Suppose that ϕ ∈ H01(Ω). Under the assumptions of Theorem 11.10, if u denotes the solution to (11.70), show that the mapping ϕ → u is continuous from H01(Ω) into itself.

9. Suppose that we are in the conditions of Theorem 11.10 and let u be the solution to (11.70). Suppose in addition that u − ϕ is continuous. Show that

− div(A(x)∇u) + a(x)u ≥ f in Ω,
− div(A(x)∇u) + a(x)u = f in {u − ϕ > 0}

({u − ϕ > 0} = { x ∈ Ω | (u − ϕ)(x) > 0 }).

10. Let A be a Carathéodory function satisfying (11.30). Let ϕ be a measurable function such that

Kϕ = { v ∈ H01(Ω) | v(x) ≥ ϕ(x) a.e. in Ω } ≠ ∅.
Show that there exists a u solution to

u ∈ Kϕ,  ∫_Ω A(x, u)∇u · ∇(v − u) dx ≥ ⟨f, v − u⟩  ∀ v ∈ Kϕ.

(Ω is supposed to be bounded, f ∈ H−1(Ω).) Show that when A satisfies (11.36)–(11.38) the solution u is unique.

11. Let Ω be a bounded open set of Rⁿ, α a positive constant. One sets

K1 = { v ∈ H01(Ω) | |∇v(x)| ≤ α a.e. x ∈ Ω },
K2 = { v ∈ H01(Ω) | ∫_Ω |v(x)| dx ≤ α },
K3 = { v ∈ H01(Ω) | |v(x)| ≤ α a.e. x ∈ Ω }.

Show that Ki is a closed convex set of H01(Ω) for i = 1, 2, 3 and that for f ∈ H−1(Ω) there exists a unique ui solution to

ui ∈ Ki,  ∫_Ω ∇ui · ∇(v − ui) dx ≥ ⟨f, v − ui⟩  ∀ v ∈ Ki.

Let u be the solution to

u ∈ H01(Ω),  −Δu = f in Ω.

Show that when |f|_{H−1} is small enough one has u2 = u.
Chapter 12
L∞-estimates

In general any quantity appearing in nature is finite. Thus a natural question is to derive conditions on our data which ensure that the solution of our elliptic equation is bounded.
12.1 Some simple cases

First, in dimension 1, let us consider u solution to

−(A(x)u′)′ = f0 − f1′ in Ω = (α, β),  u ∈ H01(Ω),   (12.1)
where f0, f1 ∈ L²(Ω), A ∈ L∞(Ω), 0 < λ ≤ A(x) ≤ Λ a.e. x ∈ Ω. It is clear that (12.1) possesses a unique weak solution u. Moreover, integrating the first equation in (12.1) we have

−A(x)u′ = ∫_α^x f0(s) ds − f1 + c0

where c0 is some constant. The constant is easily determined by noting that

∫_α^β u′(s) ds = 0.

Note that this equality holds for every u ∈ D(Ω) and by density for any u ∈ H01(Ω). From

−u′ = (1/A(x)) ∫_α^x f0(s) ds − f1/A(x) + c0/A(x)   (12.2)
β
α
dx = A(x)
β
α
f1 (x) dx − A(x)
β
α
1 A(x)
x
α
f0 (s) ds dx.
From this it follows easily, if |Ω| = β − α, that we have

(|Ω|/Λ)|c0| ≤ |c0| ∫_α^β dx/A(x) ≤ (1/λ)|f1|1,Ω + (|Ω|/λ)|f0|1,Ω.

Thus we have

|c0| ≤ (Λ/λ){|f1|1,Ω/|Ω| + |f0|1,Ω}.   (12.3)
From (12.2) we derive easily

|u′| ≤ (1/λ)|f0|1,Ω + (1/λ)|f1(x)| + (1/λ)|c0|

and finally

|u(x)| = | ∫_α^x u′(s) ds | ≤ ∫_α^x |u′(s)| ds
  ≤ (|Ω|/λ)|f0|1,Ω + (1/λ)|f1|1,Ω + (|Ω|/λ)|c0|
  ≤ (|Ω|/λ)|f0|1,Ω + (1/λ)|f1|1,Ω + (Λ/λ²){|Ω||f0|1,Ω + |f1|1,Ω}
  ≤ ((λ + Λ)/λ²){|Ω||f0|1,Ω + |f1|1,Ω}.
We have thus proved

Theorem 12.1. Under the assumptions above the unique solution to (12.1) satisfies

|u|∞,Ω ≤ ((λ + Λ)/λ²){|Ω||f0|1,Ω + |f1|1,Ω}.   (12.4)
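Estimate (12.4) can be checked on a sample problem, here A(x) = 1 + ½ sin(2πx) (so λ = 1/2, Λ = 3/2), f0 ≡ 1, f1 ≡ 0 on Ω = (0, 1), so that the right-hand side of (12.4) equals 8. The discretization below is an illustrative finite-difference sketch, not part of the text.

```python
import numpy as np

n = 500
h = 1.0 / (n + 1)
xi = (np.arange(n + 1) + 0.5) * h                 # interface points x_{i+1/2}
a = 1.0 + 0.5 * np.sin(2.0 * np.pi * xi)          # A evaluated at the interfaces
M = (np.diag(a[:-1] + a[1:]) - np.diag(a[1:-1], 1)
     - np.diag(a[1:-1], -1)) / h**2
u = np.linalg.solve(M, np.ones(n))                # -(A u')' = 1, u(0) = u(1) = 0

lam, Lam = 0.5, 1.5
bound = (lam + Lam) / lam**2 * (1.0 * 1.0 + 0.0)  # (12.4): |Omega||f0|_1 + |f1|_1
```

Here |u|∞ stays well below the bound 8: (12.4) is safe but, like most L∞-estimates, far from sharp.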
Let us now examine some simple cases in higher dimensions. Suppose that

Ω ⊂ Sν = { x | (x − x0) · ν ∈ (−a, a) },

ν being a unit vector. In other words Ω is included in a strip of width 2a and thus is bounded in one direction. For f ∈ L∞(Ω) ∩ H−1(Ω) there exists a unique solution u to

−Δu = f in Ω,  u ∈ H01(Ω).   (12.5)

Moreover we have

Theorem 12.2. Under the assumptions above the solution u to (12.5) satisfies

|u|∞,Ω ≤ (a²/2)|f|∞,Ω.   (12.6)
Proof. Let us denote by u1 the solution to

−Δu1 = 1 in Ω,  u1 ∈ H01(Ω).

We clearly have

−Δ(u1|f|∞,Ω) = |f|∞,Ω ≥ f = −Δu

and by the weak maximum principle u ≤ u1|f|∞,Ω. Since −u is the solution to (12.5) corresponding to −f, it also follows that −u ≤ u1|f|∞,Ω and thus

|u| ≤ u1|f|∞,Ω.   (12.7)

If g = (1/2){a² − ((x − x0) · ν)²} we have

g ≥ 0 on ∂Ω,  −Δg = |ν|² = 1 = −Δu1 ≥ −Δ0.

By the weak maximum principle again it follows that

0 ≤ u1 ≤ g ≤ a²/2.   (12.8)
The proof of the theorem follows by combining (12.7) and (12.8).

We consider now the problem

− div(A(x)∇u) + a(x)u = f in Ω,  u ∈ H01(Ω).   (12.9)

Ω is an arbitrary domain in Rⁿ, A is an elliptic matrix satisfying (4.2), (4.3) and a is a bounded function such that

0 < λ ≤ a(x)  a.e. x ∈ Ω.   (12.10)

For f ∈ L∞(Ω) ∩ H−1(Ω), by Theorem 4.1 there exists a unique solution u to (12.9). Moreover we have

Theorem 12.3. Under the assumptions above u ∈ L∞(Ω) and

|u|∞ ≤ |f|∞/λ.   (12.11)
180
Proof. One remarks setting L(u) = − div(A(x)∇u) + a(x)u that
|f |∞ |f |∞ ≥f = Lu in Ω, L =a λ λ −|f |∞ |f |∞ L ≤f = Lu in Ω, = −a λ λ
|f |∞ ≥ u on ∂Ω λ −|f |∞ ≤ u on ∂Ω. λ
Thus by the weak maximum principle we derive −
|f |∞ |f |∞ ≤u≤ λ λ
in Ω
which leads to (12.11). This completes the proof of the theorem.
12.2 A more involved estimate We would like to present in this section a L∞ -estimate due to Stampacchia ([64]). We consider u a weak solution to ⎧ n ⎪ ⎨− div(A(x)∇u) + a(x)u ≤ f + ∂xi fi in Ω, 0 (12.12) i=1 ⎪ ⎩ u≤0 on ∂Ω, where Ω is a domain of finite measure in Rn , n ≥ 2 (see also Definition 4.2). We will suppose that A satisfies (4.2), (4.3) and that 0 ≤ a(x) ≤ Λ
a.e. in Ω.
(12.13)
Moreover we will assume that f0 ∈ Lq (Ω),
1 1 1 = + , q p n
fi ∈ Lp (Ω)
∀ i = 1, . . . , n for some p > n ≥ 2. (12.14)
Then we have Theorem 12.4 (Stampacchia). Under the assumption above u+ ∈ L∞ (Ω) and 1 1 u+ ≤ C{|f0 |q,Ω + |f |p,Ω }|Ω| n − p where C = C(λ, n, p) is a constant independent of u, |f |2 = measure of Ω.
(12.15) n
i=1
fi2 , |Ω| is the
12.2. A more involved estimate
181
Proof. For k > 0 we denote by Fk the function Fk (z) = (z − k)+ .
(12.16)
One should remark that Fk (z) = 0 on z ≤ k and is equal to z − k for z > k. Note also that Fk (u) = Fk (u+ ) and therefore Fk (u) ∈ H01 (Ω). Thus taking as test function in the weak formulation of (12.12) Fk (u) we get A(x)∇Fk (u) · ∇Fk (u) + a(x)uFk (u) ≤ f0 Fk (u) − fi ∂xi Fk (u) dx. (12.17) Ω
Ω
Using the ellipticity of the matrix A and the H¨ older inequality it comes by (12.13) |∇Fk (u)|2 dx ≤ |f0 |q,Ω |Fk (u)|q ,Ω + |f |p,Ω |∇Fk (u)|p ,Ω λ Ω
where q , p denote the conjugate exponents of q, p respectively. Recall that p = p 1 1 1 1 1 1 1 p−1 and q = 1 − q = 1 − p − n = p − n = p∗ . For a real number 0 < r < n the ∗ Sobolev exponent for r, r is defined as 1 1 1 = − . r∗ r n
(12.18)
Thus we obtain 2 λ|∇Fk (u)|2,Ω ≤ |f0 |q,Ω |Fk (u)|p∗ ,Ω + |f |p,Ω |∇Fk (u)|p ,Ω where p∗ is the Sobolev exponent of p < 2 ≤ n. It follows – for a constant C = C(λ, n, p) independent of Ω – that we have by the Sobolev–Gagliardo–Nirenberg inequality – (see below the next section) |∇Fk (u)|2 ≤ C{|f0 |q,Ω + |f | }|∇Fk (u)| . (12.19) p,Ω
2,Ω
p ,Ω
Let us denote by A(k) the set defined as A(k) = { x ∈ Ω | u(x) > k }. By H¨older’s inequality we have |∇Fk (u)|p
p ,Ω
= A(k)
|∇u|p dx ≤
A(k)
p2 p |∇u|2 dx |A(k)|1− 2 ,
where | · | denotes the measure of sets. It follows that 2 |∇Fk (u)|2 ≤ |∇Fk (u)|2 |A(k)| p −1 p ,Ω 2,Ω and from (12.19) |∇Fk (u)|
p ,Ω
2 −1 ≤ C{|f0 |q,Ω + |f |p,Ω }|A(k)| p .
(12.20)
Chapter 12. L∞ -estimates
182
Using again the Sobolev embedding theorem we get 2 −1 |Fk (u)|p∗ ,Ω ≤ C{|f0 |q,Ω + |f | }|A(k)| p , p,Ω
(12.21)
where C = C(λ, p, n). For h > k we clearly have A(h) ⊂ A(k)
(12.22)
and from (12.21) we derive (h − k)|A(h)|
1 p∗
≤
+ p∗
A(h)
{(u − k) }
1∗ p dx
2 −1 ≤ |Fk (u)|p∗ ,Ω ≤ C{|f0 |q,Ω + |f |p,Ω }|A(k)| p which can be written as
p∗ p2 −1 p∗ . |A(h)| ≤ {C{|f0 |q,Ω + |f | p,Ω }/(h − k)} |A(k)|
(12.23)
Then the result will follow from the following lemma. Lemma 12.5. Let Φ : [0, +∞) → R+ be a nonincreasing function such that α C Φ(h) ≤ Φ(k)β h > k (12.24) h−k where C, α, β are positive constants. Then if β > 1 Φ(d) = 0
for d = CΦ(0)
β−1 α
β
· 2 β−1 .
Proof. Let hn = d − 2dn , n ∈ N. By (12.24) we have α α C C β Φ(hn+1 ) ≤ Φ(hn ) = 2(n+1)α Φ(hn )β . hn+1 − hn d
(12.25)
By induction we claim that nα
Φ(hn ) ≤ Φ(0)2 1−β .
(12.26)
If n = 0 (12.26) is clear. Assuming it holds for n we get from (12.25) α nαβ C 2(n+1)α Φ(0)β 2 1−β Φ(hn+1 ) ≤ d = (Φ(0)
1−β α
= Φ(0)2
(n+1)α 1−β
β
nαβ
2 1−β )α 2(n+1)α Φ(0)β 2 1−β
(by definition of d)
.
From (12.25) we then derive nα
0 ≤ Φ(d) ≤ Φ(hn ) ≤ Φ(0)2 1−β → 0 when n → 0. This completes the proof of the lemma.
12.3. The Sobolev–Gagliardo–Nirenberg inequality
183
End of the proof of Theorem 12.4. We have α = p∗ , β = p∗ 1 p∗
=
since
− 1 , i.e., since
1 p
= 1 − p1 − n1 " 1 1 1 2 2 " 1 1− − β = 2− −1 = 1− 1− − >1 p p n p p n
1 p
<
−
1 n
2 p
1 n.
Moreover β−1 β 1 2 1 1 1 = − = − 1 − ∗ = − , α α α p p n p 1 − 1p − 1 β−1 =1− =1− β β 1 − 2p
1 n
=
1 n
−
1−
1 p 2 p
.
From (12.23) and Lemma 12.5 we then derive that 1 1 u ≤ d ≤ C{|f0 |q,Ω + |f |p,Ω }Φ(0) n − p 1 1 ≤ C{|f0 |q,Ω + |f |p,Ω }|Ω| n − p .
This completes the proof of the theorem. Remark 12.1. In the estimate (12.15) one can replace f0 by the term − Ω f0− Fk (u) dx is nonpositive.
f0+
since in (12.17)
Under the assumptions of Theorem 12.4 as a corollary we have Corollary 12.6. Under the assumptions (12.13) and (12.14) if u is the weak solution to ⎧ n ⎪ ⎨− div(A(x)∇u) + a(x)u = f + ∂xi fi in Ω, 0 (12.27) i=1 ⎪ ⎩ 1 u ∈ H0 (Ω), one has u ∈ L∞ (Ω) and an estimate
1 1 |u|∞,Ω ≤ C{|f0 |q,Ω + |f |p,Ω }|Ω| n − p
(12.28)
for some constant C = C(λ, n, p). Proof. It is enough to apply Theorem 12.4 to u and −u.
12.3 The Sobolev–Gagliardo–Nirenberg inequality If p < n and if p∗ denotes the Sobolev exponent that we encountered earlier given by 1 1 1 (12.29) = − , p∗ p n we have
Chapter 12. L∞ -estimates
184
Theorem 12.7 (Sobolev–Gagliardo–Nirenberg). Let 1 ≤ p < n. There exists a constant C = C(p, n) such that (12.30) |v|p∗ ,Rn ≤ C |∇v|p,Rn ∀ v ∈ Cc1 (Rn ). (Cc1 (Rn ) denotes the space of C 1 -functions with compact support.) Proof. 1. The case p = 1. In this case the Sobolev exponent 1∗ is given by n . 1∗ = n−1 Let v ∈ Cc1 (Rn ). Since v has compact support we have xi v(x) = ∂xi v(x1 , . . . , xi−1 , t, xi+1 , . . . , xn ) dt −∞
and thus
|v(x)| ≤
+∞
−∞
|∂xi v(x1 , . . . , xi−1 , t, xi+1 , . . . , xn )| dt := fi (ˆ xi ).
(12.31)
We have denoted by xˆi the point (x1 , . . . , xi−1 , xi+1 , . . . , xn ) and fi the function above. From (12.31) we then derive n n 1 |v(x)| n−1 dx ≤ fi (ˆ xi ) n−1 dx. (12.32) Rn
Rn i=1
Let us suppose that we have established the following lemma: n xi ) ∈ Ln−1 (Rn−1 ) for i = 1, . . . , n then i=1 fi (ˆ xi ) ∈ L1 (Rn ) Lemma 12.8. Let fi (ˆ and we have n n fi (ˆ x ) ≤ |fi (ˆ xi )|n−1,Rn−1 . i 1,Rn
i=1
i=1
n−1
xi )|n−1,Rn−1 denotes the L (Rn−1 )-norm of fi , the measure of integration (|fi (ˆ * i . . . dxn where dx * i is dropped in the product of measures.) being dx1 . . . dx From (12.32) and this lemma we get 1 n−1 n n * i . . . dxn |v(x)| n−1 dx ≤ fi (ˆ xi ) dx1 . . . dx Rn
=
Rn−1 i=1 n Rn
i=1
≤
Rn
1 n−1 |∂xi v| dx
n n−1 |∇v| dx ,
which is exactly (12.30) for p = 1. The constant C being here 1.
12.3. The Sobolev–Gagliardo–Nirenberg inequality
185
2. The case of a general 1 < p < n. If p > 1 we have
1 1 1 1 1 = − < 1 − = ∗, p∗ p n n 1 p∗
i.e., p∗ > 1∗ . It follows that for v ∈ Cc1 (Rn ), |v| 1∗ belongs to Cc1 (Rn ) and from Part 1 above we have
p∗
Rn
|v|
11∗ dx ≤
Rn
p∗
|∇{|v| 1∗ }| dx.
(12.33)
An easy computation gives p∗
∇|v| 1∗ =
p∗ p∗∗ −1 |v| 1 sign v∇v 1∗
and thus from (12.33) and the H¨ older inequality we derive 11∗ p∗ ∗ p∗ |v|p dx ≤ ∗ |v| 1∗ −1 |∇v| dx 1 Rn Rn 1 p1
∗ p p p∗ −1 p p 1∗ ≤ ∗ |v| dx |∇v| dx . 1 Rn Rn
By an easy computation we obtain
1 − n1 − 1p − n1 p∗ p∗ − 1 = = , 1 1 1∗ p p − n
1 1 − = ∗ 1 p
(12.34)
1 1 1 1− − 1− = ∗. n p p
Thus (12.34) becomes Rn
p∗
|v|
11∗ 1 p1 p p∗ p∗ p dx ≤ ∗ |v| dx |∇v| dx 1 Rn Rn
and thus |v|p∗ ,Rn ≤
p∗ |∇v|p,Rn . ∗ 1
This completes the proof of (12.30) with C =
p∗ 1∗
=
p(n−1) n−p .
3. The proof of Lemma 12.8. We proceed by induction on n. For n = 2, n − 1 = 1 the inequality reduces to |f1 (ˆ x1 )f2 (ˆ x2 )|1,R2 = |f1 (x2 )||f2 (x1 )| dx1 dx2 = |f1 |1,R |f2 |1,R . R2
Chapter 12. L∞ -estimates
186
Thus let us assume that the lemma holds for n−1. We denote by xi in the integrals an integration in xi . Thus we have |f1 (ˆ x1 )| . . . |fn (ˆ xn )| dx Rn = |fn (ˆ xn )| |f1 (ˆ x1 )| . . . |fn−1 (ˆ xn−1 )| dxn dx1 . . . dxn−1 . Rn−1
xn
Using the generalized H¨older inequality (see the exercises in Chapter 2) we get |f1 (ˆ x1 )| . . . |fn (ˆ xn )| dx Rn
≤
Rn−1
|fn (ˆ xn )|
n−1 i=1
xn
|fi (ˆ xi )|n−1 dxn
1 n−1
dx1 . . . dxn−1 .
Using in this last integral the H¨ older inequality with power n − 1 and obtain |f1 (ˆ x1 )| . . . |fn (ˆ xn )| dx Rn
≤
n−1
Rn−1
|fn (ˆ xn )|
×
we
1 n−1
dx1 . . . dxn−1
n−1
Rn−1 i=1
n−1 n−2
n−1
|fi (ˆ xi )|
xn
1 n−2
dxn
n−2 n−1 dx1 . . . dxn−1
.
Applying the induction assumption in this last integral we derive |f1 (ˆ x1 )| . . . |fn (xn )| dx Rn
≤
n−1
|fn (ˆ xn )| Rn−1 n−1
×
i=1
Rn−1
1 n−1
dx1 . . . dxn−1 n−1
|fi (ˆ xi )|
which completes the proof of the lemma.
* i . . . dxn dx1 , . . . , dx
1 n−1
Remark 12.2. The constant C = C(p, n) = p(n−1)/(n−p) that we obtained is not the sharpest one. We refer to [91] for this kind of issue.

Remark 12.3. In Theorem 12.4 we used (12.30) with p satisfying p < 2 ≤ n and for a function v ∈ H₀¹(Ω). This is licit since C_c¹(Ω), like D(Ω), is dense in H₀¹(Ω), so that (12.30) extends to v ∈ H₀¹(Ω); moreover for p < 2 the inequality makes sense for such v since L²(Ω) ⊂ L^p(Ω) (recall that the measure of Ω is finite).
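The scaling in (12.30) can be probed numerically. The sketch below (an illustration only: NumPy on a finite grid over a truncated plane, with finite-difference gradients and parameters n = 2, p = 3/2 chosen for this example) checks that |v|_{p*} ≤ C |∇v|_p holds with C = p(n−1)/(n−p) for a Gaussian test function.

```python
import numpy as np

# Check |v|_{p*} <= C |grad v|_p with C = p(n-1)/(n-p) for a smooth,
# rapidly decaying test function on (a truncation of) R^2.
n, p = 2, 1.5
p_star = n * p / (n - p)            # p* = np/(n-p) = 6 here
C = p * (n - 1) / (n - p)           # C = 3 here

h = 0.02
x = np.arange(-5, 5, h)
X, Y = np.meshgrid(x, x, indexing="ij")
v = np.exp(-(X**2 + Y**2))          # numerically compactly supported

lhs = (np.sum(np.abs(v) ** p_star) * h**2) ** (1 / p_star)
gx, gy = np.gradient(v, h)          # finite-difference gradient
rhs = (np.sum(np.sqrt(gx**2 + gy**2) ** p) * h**2) ** (1 / p)
print(lhs, C * rhs)                 # lhs should not exceed C * rhs
```

The margin is comfortable here; the inequality is of course not sharp for this particular v.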
12.4 The maximum principle on small domains

First let us see how the estimates of Paragraph 12.2 can be combined with the maximum principle.

Definition 12.1. For u ∈ H¹(Ω) we denote by Max_{∂Ω} u the smallest constant k such that

  u ≤ k on ∂Ω        (12.35)

in the sense of Definition 4.2. (In the case where no such constant exists we set Max_{∂Ω} u = +∞.)

Remark 12.4. One can have k = −∞. If k := inf{ k̃ ; k̃ satisfies (12.35) } ∈ ℝ, then k also satisfies (12.35), see Exercise 4.

Then we have

Theorem 12.9. Under the assumptions of Theorem 12.4 let u ∈ H¹(Ω) satisfy

  − div(A(x)∇u) + a(x)u ≤ f₀ + Σ_{i=1}^n ∂_{x_i} f_i in Ω;        (12.36)

then for the constant C = C(λ, n, p) of Theorem 12.4 we have

  u⁺ ≤ Max_{∂Ω} u⁺ + C ( |f₀|_{q,Ω} + |f|_{p,Ω} ) |Ω|^{1/n − 1/p}.        (12.37)

Proof. If k = Max_{∂Ω} u⁺ = +∞ there is nothing to prove. If k = Max_{∂Ω} u⁺ < +∞, from (12.36) we derive

  − div(A(x)∇(u − k)) + a(x)(u − k) ≤ f₀ + Σ_{i=1}^n ∂_{x_i} f_i − a(x)k ≤ f₀ + Σ_{i=1}^n ∂_{x_i} f_i

(note that a(x) ≥ 0 and k ≥ 0). Moreover (u − k)⁺ = (u⁺ − k)⁺ ∈ H₀¹(Ω) (by definition of k). Applying Theorem 12.4 we derive

  u − k ≤ C ( |f₀|_{q,Ω} + |f|_{p,Ω} ) |Ω|^{1/n − 1/p}

which gives (12.37).
We also have

Theorem 12.10 (Maximum principle for small domains). Suppose that A = A(x) satisfies (4.2) and (4.3). Suppose that a ∈ L^∞(Ω) satisfies, for some positive constant a₀,

  −a₀ ≤ a a.e. in Ω.        (12.38)

Let u ∈ H¹(Ω) ∩ L^q(Ω), q > n/2, be a solution to

  − div(A(x)∇u) + a(x)u ≤ 0 in Ω,
  u ≤ 0 on ∂Ω.        (12.39)

Denote by C = C(λ, n, p), 1/p = 1/q − 1/n, the constant of Theorem 12.4. Then if

  C a₀ |Ω|^{2/n} < 1        (12.40)

one has

  u ≤ 0 in Ω.

(Note that a is not supposed to be nonnegative anymore.)

Proof. Writing a = a⁺ − a⁻, since a⁻ = Max{0, −a} one has 0 ≤ a⁻ ≤ a₀. Moreover from (12.39) we derive

  − div(A(x)∇u) + a⁺u ≤ a⁻u = a⁻u⁺ − a⁻u⁻ ≤ a⁻u⁺.

It follows from Theorem 12.9 applied with f₀ = a⁻u⁺ (note that Max_{∂Ω} u⁺ = 0) that

  |u⁺|_{∞,Ω} ≤ C |a⁻u⁺|_{q,Ω} |Ω|^{1/n − 1/p}

(|·|_{∞,Ω} denotes the L^∞(Ω)-norm). We then obtain

  |u⁺|_{∞,Ω} ≤ C a₀ |u⁺|_{∞,Ω} |Ω|^{1/q} |Ω|^{1/n − 1/p} = C a₀ |Ω|^{2/n} |u⁺|_{∞,Ω}.

From (12.40) it follows that u⁺ = 0. This completes the proof of the theorem.

Remark 12.5. By Theorem 12.7 we have

  H₀¹(Ω) ⊂ L^{2*}(Ω) with 1/2* = 1/2 − 1/n.

Thus when 2* > n/2 we just need to assume u⁺ ∈ H₀¹(Ω) in Theorem 12.10; this is the case when n ≤ 5. Also, in the case of equality in the equations (12.39), u belongs to L^q(Ω) – see [89]. To remove the assumption u ∈ L^q(Ω) we refer the reader to the exercises.
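The smallness condition (12.40) is genuinely needed. A one-dimensional finite-difference experiment (NumPy; the choice a₀ = 1, f = −1 and the interval lengths are mine, for illustration) shows the conclusion u ≤ 0 holding on a small interval and failing on a large one, the threshold being the first Dirichlet eigenvalue π²/ℓ² = a₀.

```python
import numpy as np

# Solve -u'' - a0*u = f (f = -1 <= 0) with u = 0 at both ends of (0, l).
def solve(l, a0=1.0, N=400):
    h = l / N
    main = np.full(N - 1, 2.0 / h**2 - a0)
    off = np.full(N - 2, -1.0 / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, -np.ones(N - 1))

u_small = solve(1.0)   # l = 1 < pi: maximum principle holds, u <= 0
u_large = solve(4.0)   # l = 4 > pi: u becomes positive inside
print(u_small.max(), u_large.max())
```

On the small interval every nodal value is nonpositive; on the large one the solution is strictly positive near the center even though the data are nonpositive.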
Exercises

1. Reproducing the proof of Theorem 12.1, estimate the L^∞-norm of the solution to

  −(A(x)u′)′ + a(x)u = f₀ − f₁′ in Ω, u ∈ H₀¹(Ω).

2. Let u ∈ H¹(Ω) be a solution to (12.39).
(i) Show that

  − div(A(x)∇u) + a⁺u ≤ a₀u⁺ in Ω.

(ii) Deduce that

  |∇u⁺|²_{2,Ω} ≤ (a₀/λ) |u⁺|²_{2,Ω}.

(iii) Using Exercise 7(iii) in Chapter 8 show that when

  ( |Ω|/ω_n )^{2/n} (a₀/λ) < 1

then the weak maximum principle holds – i.e., one has u ≤ 0 in Ω.

3. Let u ∈ H¹(Ω) be such that u ≥ 0 a.e. in Ω and let k > 0 be a constant. Prove that u + k ∉ H₀¹(Ω), hence Max_{∂Ω} u ≥ 0.

4. Let u ∈ H¹(Ω). Assume that k := inf{ k̃ ; k̃ satisfies (12.35) } ∈ ℝ. Prove that k also satisfies (12.35) (so that the infimum is in fact a minimum).

5. Prove Lemma 12.5 without assuming that the function φ is nonincreasing.
Chapter 13
Linear Elliptic Systems

13.1 The general framework

In many physical situations one has to look not for a scalar u but for a vector u = (u¹, …, u^m). (In what follows a bold letter will always denote a vector.) This could be a position in space, a displacement, a velocity… One is thus led to introduce systems of equations. The simplest one consists of m copies of the Dirichlet problem, that is to say

  −Δu¹ = f¹ in Ω,
  ⋯
  −Δu^m = f^m in Ω,        (13.1)
  u = (u¹, …, u^m) = 0 on ∂Ω.

This is a system of m equations with m unknowns. If for instance f ∈ L²(Ω) := (L²(Ω))^m then, since we have here a system of m independent Dirichlet problems, it clearly possesses a unique solution (just apply the Lax–Milgram theorem to each equation). However it is possible to write it down in a more condensed form. Indeed, choosing v = (v¹, …, v^m) ∈ H₀¹(Ω) := (H₀¹(Ω))^m, multiplying each equation by vⁱ, integrating in Ω (i.e., considering the weak form of each equation) and summing up in i, we see that u satisfies

  Σ_{i=1}^m ∫_Ω ∇uⁱ·∇vⁱ dx = Σ_{i=1}^m ∫_Ω fⁱ vⁱ dx ∀ v ∈ H₀¹(Ω),  u ∈ H₀¹(Ω).        (13.2)

With the convention of repeated indices, and writing the scalar product with a dot, we end up with

  ∫_Ω ∇uⁱ·∇vⁱ dx = ∫_Ω f·v dx ∀ v ∈ H₀¹(Ω),  u ∈ H₀¹(Ω).        (13.3)
We have in fact

  ∇uⁱ·∇vⁱ = Σ_{i=1}^m Σ_{α=1}^n ∂_{x_α}uⁱ ∂_{x_α}vⁱ        (13.4)

and if ∇u denotes the Jacobian matrix of u, i.e., the m × n matrix

  ∇u = ( ∂_{x_α}uⁱ )_{i=1,…,m, α=1,…,n},        (13.5)

whose ith row is the gradient of uⁱ, then (13.4) is nothing but the Euclidean scalar product of the matrices ∇u and ∇v considered as vectors in ℝ^{m×n}. Thus our initial system finally takes the condensed form

  ∫_Ω ∇u·∇v dx = ∫_Ω f·v dx ∀ v ∈ H₀¹(Ω),  u ∈ H₀¹(Ω).        (13.6)

We have just shown that the solution of (13.1) – that is to say u = (u¹, …, u^m) – satisfies (13.6). Conversely, if u = (u¹, …, u^m) is a solution to (13.6), choosing v = (0, …, 0, vⁱ, 0, …, 0) in (13.6) or (13.3) we obtain for every i, with no summation in i,

  ∫_Ω ∇uⁱ·∇vⁱ dx = ∫_Ω fⁱ vⁱ dx ∀ vⁱ ∈ H₀¹(Ω),  uⁱ ∈ H₀¹(Ω),  ∀ i = 1, …, m.        (13.7)

We see then that the different components of u are solutions to (13.1), and thus (13.1) and (13.6) are perfectly equivalent. If we introduce the bilinear form

  a(u, v) = ∫_Ω ∇u·∇v dx,        (13.8)
we see that we are again in the framework of the Lax–Milgram theorem, but this time with u, v vectors, i.e., belonging to a product of Hilbert spaces. One should now notice that (13.1) is a very peculiar system, in the sense that the equations are uncoupled – the kth one only depends on the kth component of u. Some more complicated systems can be investigated. We copy here our analysis of Chapter 4. Namely we replace A(x), a linear mapping from ℝⁿ into ℝⁿ (ℝⁿ was the space where ∇u was living), by A(x) a linear mapping from ℝ^{m×n} into ℝ^{m×n}, where we denote by ℝ^{m×n} the space of m × n matrices with real coefficients (∇u ∈ ℝ^{m×n}). Similarly we can replace a(x)uv, the lower order term of Chapter 4 – a bilinear form on ℝ – by a bilinear form on ℝ^m defined as

  A(x)u·v        (13.9)
(here A(x) is an m × m matrix and "·" denotes the scalar product in ℝ^m). Thus the bilinear form extending the one considered in Chapter 4 to the case of systems can be defined as

  a(u, v) = ∫_Ω A(x)∇u·∇v + A(x)u·v dx        (13.10)

where for a.e. x

  A(x) ∈ L(ℝ^{m×n}, ℝ^{m×n}),  ∇u(x) ∈ ℝ^{m×n},        (13.11)
  A(x) ∈ L(ℝ^m, ℝ^m),  u(x) ∈ ℝ^m.        (13.12)

L(V, V) denotes the space of linear mappings from V into itself. Now if V is of dimension k then L(V, V) is of dimension k² – i.e., is determined by k² coefficients. Thus it might be useful to introduce these coefficients, for instance to write

  A(x)∇u = ( A^{αβ}_{ij}(x) ∂_{x_β}uʲ )_{α=1,…,n, i=1,…,m},        (13.13)
  A(x)u = ( a_{ij}(x) uʲ )_{i=1,…,m}.        (13.14)

We see that A(x)∇u is an m × n matrix and A is equivalent to the data of (m × n)² coefficients. To distinguish the indices running from 1 to n from those running from 1 to m we denote the first ones by Greek letters and the second ones by Latin letters. Note also that in (13.13) we assume a summation in β, j and in (13.14) a summation in j. Thus with the summation convention, and with these coefficients explicitly written, our bilinear form (13.10) becomes

  a(u, v) = ∫_Ω A^{αβ}_{ij}(x) ∂_{x_β}uʲ ∂_{x_α}vⁱ + a_{ij}(x) uⁱ vʲ dx.        (13.15)

As before, if ∂_{x_α}vⁱ, ∂_{x_β}uʲ, uⁱ, vʲ ∈ L²(Ω) the integral (13.15) is perfectly defined as soon as A^{αβ}_{ij}, a_{ij} ∈ L^∞(Ω). We can formulate these latter assumptions as

  |A(x)M| ≤ Λ|M| ∀ M ∈ ℝ^{m×n},        (13.16)
  |A(x)ξ| ≤ Λ|ξ| ∀ ξ ∈ ℝ^m,        (13.17)

for some positive constant Λ, |·| denoting the Euclidean norm of vectors or matrices (for instance |M| = ( Σ_{ij} m_{ij}² )^{1/2} for M = (m_{ij}) ∈ ℝ^{m×n}).

Let us introduce more precisely the spaces that we will use in this chapter. As already mentioned we will set

  L²(Ω) = (L²(Ω))^m.        (13.18)

This space is a Hilbert space when equipped with the scalar product

  (u, v) = ∫_Ω u·v dx.

(This follows from the fact that u_n is a Cauchy sequence in L²(Ω) iff each component of u_n is a Cauchy sequence.)
Similarly we will set

  H¹(Ω) = (H¹(Ω))^m,  H₀¹(Ω) = (H₀¹(Ω))^m        (13.19)

and we will consider on these spaces the scalar products

  ∫_Ω (∇u·∇v + u·v) dx,  ∫_Ω ∇u·∇v dx,        (13.20)

where ∇u denotes the Jacobian matrix of u. Equipped with the first scalar product of (13.20), both H¹(Ω) and H₀¹(Ω) are Hilbert spaces. In the case where Ω is bounded in one direction the two scalar products of (13.20) are equivalent on H₀¹(Ω). With this definition, if A^{αβ}_{ij}, a_{ij} ∈ L^∞(Ω) and satisfy (13.16), (13.17), the bilinear form a(u, v) is continuous on H¹(Ω). Indeed this follows from

  |a(u, v)| = | ∫_Ω A(x)∇u·∇v + A(x)u·v dx |
            ≤ ∫_Ω |A(x)∇u·∇v| + |A(x)u·v| dx
            ≤ ∫_Ω |A(x)∇u| |∇v| + |A(x)u| |v| dx
            ≤ Λ ∫_Ω |∇u| |∇v| + |u| |v| dx        (13.21)
            ≤ Λ ( |∇u|_{2,Ω} |∇v|_{2,Ω} + |u|_{2,Ω} |v|_{2,Ω} ) ≤ Λ |u|_{1,2} |v|_{1,2}

where we have set

  |u|_{1,2} = ( ∫_Ω |∇u|² + |u|² dx )^{1/2}.        (13.22)

Recall that |·| denotes the Euclidean norm of vectors or matrices considered as elements of ℝ^{m×n}. The coerciveness of a requires some assumptions.

Definition 13.1 (Legendre condition). We say that A satisfies the Legendre condition if there exists λ > 0 such that

  A(x)M·M ≥ λ|M|² ∀ M ∈ ℝ^{m×n}, a.e. x ∈ Ω.        (13.23)

Definition 13.2 (Legendre–Hadamard condition). We say that A satisfies the Legendre–Hadamard condition if

  A(x)M·M ≥ λ|M|² ∀ M ∈ ℝ^{m×n} with Rank M = 1, a.e. x ∈ Ω,        (13.24)

i.e., the inequality (13.23) is only required for rank one matrices.
Example. Suppose that for n = m = 2

  A(x)M·M = |M|² + k det M        (13.25)

where k is some constant, det denotes the determinant of a matrix and |·| its Euclidean norm. Clearly if Rank M = 1 one has det M = 0, thus A(x)M·M = |M|² and (13.24) holds with λ = 1. Now for the matrix

  M = ( 1 0 ; 0 −1 )

one has A(x)M·M = 2 − k < 0 for k large enough, in such a way that (13.23) cannot hold.

Even though the Legendre–Hadamard condition is weaker than the Legendre one, it is enough to get coerciveness – at least on H₀¹(Ω) – for the bilinear form a when A is independent of x, i.e., in the case of constant coefficients. More precisely we have

Proposition 13.1. Suppose that A, independent of x, satisfies (13.24). Then we have

  ∫_Ω A∇u·∇u dx ≥ λ ∫_Ω |∇u|² dx ∀ u ∈ H₀¹(Ω).        (13.26)

Proof. One should remark that a rank one matrix can be expressed as

  c ⊗ d = ( cⁱ d_j )_{i=1,…,m, j=1,…,n}        (13.27)

with c ∈ ℝ^m, d ∈ ℝⁿ. It is enough to show (13.26) for u ∈ (D(Ω))^m. We then extend u by 0 outside Ω. We denote by f̂ the Fourier transform of f given by

  f̂(y) = ∫_{ℝⁿ} e^{−2πi y·x} f(x) dx.        (13.28)

Since the functions we are dealing with are real valued, writing the integrand as one half of the sum of the term and its complex conjugate and applying the Plancherel formula (see [85]) we get

  ∫_Ω A∇u·∇u dx = ½ ∫_{ℝⁿ} A^{αβ}_{ij} { (∂_{x_β}uʲ)^ conj((∂_{x_α}uⁱ)^) + conj((∂_{x_β}uʲ)^) (∂_{x_α}uⁱ)^ } dy.        (13.29)

If ûᵏ = aᵏ + i bᵏ we have

  Re( ûᵏ conj(û^ℓ) ) = ½ { ûᵏ conj(û^ℓ) + conj(ûᵏ) û^ℓ } = aᵏ a^ℓ + bᵏ b^ℓ.
Thus from (13.29) we derive, since (∂_{x_β}uʲ)^(y) = 2πi y_β ûʲ(y),

  ∫_Ω A∇u·∇u dx = (2π)² ∫_{ℝⁿ} A^{αβ}_{ij} y_α y_β { aⁱ aʲ + bⁱ bʲ } dy
               ≥ (2π)² ∫_{ℝⁿ} λ |y|² { |a|² + |b|² } dy,

the inequality following from (13.24) applied to the rank one matrices (aⁱ y_β) and (bⁱ y_β). (Note that when c ⊗ d is given by (13.27) one has |c ⊗ d|² = |c|²|d|².) Thus we have now

  ∫_Ω A∇u·∇u dx ≥ λ ∫_{ℝⁿ} (2π)² |y|² |û|² dy = λ ∫_{ℝⁿ} (∂_{x_α}uⁱ)^ conj((∂_{x_α}uⁱ)^) dy = λ ∫_Ω |∇u|² dx.
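The example (13.25) above is easy to check numerically. In the sketch below (NumPy, purely illustrative; the value k = 5 is my choice) the quadratic form is evaluated on random rank-one matrices, where it reduces to |M|² since det(c ⊗ d) = 0, and on the symmetric matrix diag(1, −1), where it is negative.

```python
import numpy as np

# q(M) = |M|^2 + k det(M) for n = m = 2: Legendre-Hadamard holds with
# lambda = 1 (det vanishes on rank-one matrices), Legendre fails for k = 5.
rng = np.random.default_rng(0)
k = 5.0

def q(M):
    return np.sum(M**2) + k * np.linalg.det(M)

ratios = []
for _ in range(100):                      # rank-one matrices M = c (x) d
    c, d = rng.standard_normal(2), rng.standard_normal(2)
    M = np.outer(c, d)
    ratios.append(q(M) / np.sum(M**2))

M0 = np.diag([1.0, -1.0])                 # symmetric, rank two
print(min(ratios), q(M0))                 # ratios ~ 1; q(M0) = 2 - k < 0
```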
This completes the proof of the proposition.

On the matrix A we can make the following assumptions:

  A(x)ζ·ζ ≥ 0 ∀ ζ ∈ ℝ^m, a.e. x ∈ Ω,        (13.30)
  A(x)ζ·ζ ≥ λ|ζ|² ∀ ζ ∈ ℝ^m, a.e. x ∈ Ω.        (13.31)

If A^{αβ}_{ij}, a_{ij} ∈ L^∞(Ω) then we have

Proposition 13.2. (i) Suppose that

  A is independent of x and satisfies (13.24), or A satisfies (13.23).        (13.32)

Then if A(x) satisfies (13.30) and Ω is bounded in one direction, a(u, v) is coercive on H₀¹(Ω).
(ii) Suppose that A satisfies (13.23) and A satisfies (13.31). Then a(u, v) is coercive on H¹(Ω).

Proof. (i) By (13.23) or Proposition 13.1 we have

  a(u, u) = ∫_Ω A∇u·∇u + A(x)u·u dx ≥ ∫_Ω A∇u·∇u dx ≥ λ ∫_Ω |∇u|² dx

and the result follows.
(ii) It follows immediately from

  a(u, u) = ∫_Ω A(x)∇u·∇u + A(x)u·u dx ≥ λ ∫_Ω |∇u|² + |u|² dx.

This completes the proof of the proposition.
As an obvious corollary we have

Theorem 13.3. (i) Suppose that A satisfies (13.32), A satisfies (13.30) and Ω is bounded in one direction. Then for every f ∈ H⁻¹(Ω) = (H⁻¹(Ω))^m there exists a unique u solution to

  u ∈ H₀¹(Ω),
  ∫_Ω A∇u·∇v + Au·v dx = ⟨f, v⟩ ∀ v ∈ H₀¹(Ω).        (13.33)

(ii) If A satisfies (13.23) and A satisfies (13.31), then for every f ∈ L²(Ω) and every closed subspace V of H¹(Ω) there exists a unique u solution to

  u ∈ V,
  ∫_Ω A∇u·∇v + Au·v dx = ∫_Ω f·v dx ∀ v ∈ V.        (13.34)

(In (13.33), ⟨f, v⟩ stands for ⟨fⁱ, vⁱ⟩ with the summation convention.)
13.2 Some examples

• Diffusion of populations with competition. In this example u = (u¹, …, u^m) is the vector of densities of populations of different species, i.e., uⁱ is the density of population of the ith species. If all these species live in an environment Ω their diffusion can be described by the solution u to

  μ_i ∫_Ω ∇uⁱ·∇vⁱ dx + ∫_Ω A(x)u·v dx = ∫_Ω f·v dx ∀ v ∈ H₀¹(Ω),  u ∈ H₀¹(Ω).        (13.35)

In strong form, if A = (a_{ij}), this system can be written (with no summation in i) as

  −μ_i Δuⁱ + a_{ij}uʲ = fⁱ in Ω,  uⁱ ∈ H₀¹(Ω),  ∀ i = 1, …, m.        (13.36)

The term in Au accounts for the competition between the different species. A particularly important case is when

  a_{ii} > 0 ∀ i = 1, …, m,  a_{ij} ≤ 0 ∀ j ≠ i, ∀ i = 1, …, m.

• The Stokes problem. If Ω is a bounded open set of ℝⁿ (n = 2 or 3 in the applications) the Stokes problem consists in finding the velocity u of a fluid and its pressure p satisfying

  −μΔu + ∇p = f in Ω,
  div u = 0 in Ω,        (13.37)
  u = 0 on ∂Ω.

(μ is the viscosity of the fluid, f the applied forces. One neglects in a first approximation the nonlinear effects. The velocity of the fluid is supposed to vanish on the boundary of the container Ω.)
A natural space adapted to the problem is

  H̃₀¹(Ω) = { v ∈ H₀¹(Ω) | div v = 0 in Ω }.        (13.38)

We have

Lemma 13.4. H̃₀¹(Ω) is a closed subspace of H₀¹(Ω).

Proof. Indeed if v_n is a sequence of H̃₀¹(Ω) converging toward v, then for every i

  ∂_{x_i}v_nⁱ → ∂_{x_i}vⁱ in L²(Ω)  ⟹  0 = div v_n → div v = 0 in L²(Ω)

and v ∈ H̃₀¹(Ω). Note that the equality div v = 0 in (13.38) can be taken in the distributional or in the L²(Ω)-sense. This completes the proof of the lemma.

If we multiply the first equation of (13.37) by a smooth function of H̃₀¹(Ω) and integrate on Ω, the pressure disappears and we are led to find u solution to

  u ∈ H̃₀¹(Ω),
  μ ∫_Ω ∇u·∇v dx = ∫_Ω f·v dx ∀ v ∈ H̃₀¹(Ω).        (13.39)

Having recast our problem in terms of the velocity only we have

Theorem 13.5. For f ∈ H⁻¹(Ω) there exists a unique solution to (13.39).

Proof. This is a trivial consequence of Theorem 13.3.

Remark 13.1. If Ω is a Lipschitz domain and u satisfies (13.39), one can show that there exists p ∈ L²(Ω) – unique up to an additive constant – such that

  −μΔu + ∇p = f in Ω

(we refer the reader to [56], [83], [93]).

• The system of elasticity. We state here the problem in a very simple framework. First we suppose m = n, and in the applications n will be equal to 2 or 3. The unknown of the problem is the displacement u(x) of a particle x of an elastic body occupying a domain Ω in space (see [29], [39]). We denote by A^{αβ}_{ij} = A^{αβ}_{ij}(x) coefficients in L^∞(Ω) enjoying in addition the symmetry property

  A^{αβ}_{ij} = A^{iβ}_{αj} = A^{αj}_{iβ} ∀ i, j, α, β = 1, …, n.        (13.40)
Moreover, we replace the Legendre or the Legendre–Hadamard condition by assuming that for some λ > 0 we have

  A(x)M·M ≥ λ|M|² ∀ M ∈ ℝ^{n×n}, M^T = M, a.e. x ∈ Ω.        (13.41)

(M^T denotes the transposed matrix of M – i.e., the matrix whose ith column is the ith row of M.) In other words we are assuming that the Legendre condition holds for symmetric matrices only. Under the assumptions above we have

Theorem 13.6. Let f ∈ H⁻¹(Ω). There exists a unique u solution to

  u ∈ H₀¹(Ω),
  ∫_Ω A(x)∇u·∇v dx = ⟨f, v⟩ ∀ v ∈ H₀¹(Ω).        (13.42)

Proof. We start first with the following remark, due to our symmetry conditions (13.40). Let M = (m_{iα}) be an n × n matrix. We have, with the summation convention,

  A(x) (M + M^T)/2 · (M + M^T)/2
    = A^{αβ}_{ij} ( (m_{iα} + m_{αi})/2 ) ( (m_{jβ} + m_{βj})/2 )
    = ¼ A^{αβ}_{ij} m_{iα}m_{jβ} + ¼ A^{αβ}_{ij} m_{iα}m_{βj} + ¼ A^{αβ}_{ij} m_{αi}m_{jβ} + ¼ A^{αβ}_{ij} m_{αi}m_{βj}        (13.43)
    = A^{αβ}_{ij} m_{iα}m_{jβ} = A(x)M·M.

Next, due to the Lax–Milgram theorem, the existence and uniqueness of a solution to (13.42) will follow if the bilinear form on the left-hand side of (13.42) is coercive. Using (13.43) we have, by (13.41),

  ∫_Ω A(x)∇u·∇u dx = ∫_Ω A(x) (∇u + ∇u^T)/2 · (∇u + ∇u^T)/2 dx
                   ≥ λ | (∇u + ∇u^T)/2 |²_{2,Ω} ∀ u ∈ H₀¹(Ω).        (13.44)

Recall that |·| denotes the Euclidean norm of matrices considered as vectors in ℝ^{n×n}. We adopt here the classical notation

  ε_{ij}(u) = ½ { ∂_{x_i}uʲ + ∂_{x_j}uⁱ },        (13.45)

i.e., we denote by ε_{ij}(u) the entries of the matrix ½(∇u + ∇u^T). Then Theorem 13.6 is a consequence of the following lemma.
Lemma 13.7 (Korn's inequality). We have

  | (∇u + ∇u^T)/2 |²_{2,Ω} ≥ ½ |∇u|²_{2,Ω} ∀ u ∈ H₀¹(Ω).        (13.46)

Proof of the lemma. By density of (D(Ω))ⁿ in H₀¹(Ω) it is enough to show (13.46) for u ∈ (D(Ω))ⁿ. Then we have by (13.45)

  |ε_{ij}(u)|²_{2,Ω} = ¼ ∫_Ω ( ∂_{x_i}uʲ + ∂_{x_j}uⁱ )² dx
                    = ¼ ∫_Ω ( ∂_{x_i}uʲ )² + 2 ∂_{x_i}uʲ ∂_{x_j}uⁱ + ( ∂_{x_j}uⁱ )² dx.        (13.47)

An integration by parts shows that

  ∫_Ω ∂_{x_i}uʲ ∂_{x_j}uⁱ dx = ∫_Ω ∂_{x_i}uⁱ ∂_{x_j}uʲ dx.

Thus, summing up the equalities (13.47) – recall that ε_{ii}(u) = ∂_{x_i}uⁱ – we obtain

  Σ_{i,j} |ε_{ij}(u)|²_{2,Ω} = ½ Σ_{i,j} |∂_{x_i}uʲ|²_{2,Ω} + ½ |div u|²_{2,Ω} ≥ ½ Σ_{i,j} |∂_{x_i}uʲ|²_{2,Ω}

which is (13.46). This completes the proof of the lemma and the theorem.
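The identity obtained in the last display, Σ_{i,j}|ε_{ij}(u)|² = ½Σ_{i,j}|∂_{x_i}uʲ|² + ½|div u|², can be observed numerically. The sketch below (NumPy; an illustration with a field of my choosing, not part of the proof) evaluates both sides for a smooth field vanishing on the boundary of the unit square, using exact derivatives and a plain Riemann sum for the integrals.

```python
import numpy as np

# u = (u1, u2) on (0,1)^2 with u1 = sin(pi x) sin(pi y),
# u2 = sin(pi x) sin(2 pi y); both vanish on the boundary.
h = 1.0 / 400
x = np.arange(0.0, 1.0 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing="ij")
s, c, pi = np.sin, np.cos, np.pi

d1u1 = pi * c(pi * X) * s(pi * Y);  d2u1 = pi * s(pi * X) * c(pi * Y)
d1u2 = pi * c(pi * X) * s(2 * pi * Y);  d2u2 = 2 * pi * s(pi * X) * c(2 * pi * Y)

def integ(F):                       # simple Riemann sum over the square
    return F.sum() * h * h

eps_sq = integ(d1u1**2 + d2u2**2 + 2 * ((d2u1 + d1u2) / 2) ** 2)
grad_sq = integ(d1u1**2 + d2u1**2 + d1u2**2 + d2u2**2)
div_sq = integ((d1u1 + d2u2) ** 2)
print(eps_sq, 0.5 * grad_sq + 0.5 * div_sq)   # should agree up to quadrature error
```

In particular eps_sq exceeds ½ grad_sq with margin ½ div_sq, as in Korn's inequality.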
Remark 13.2. We could have added some lower order terms in (13.42). Here u is the displacement of the particles inside Ω under the action of forces f (for instance gravity) when the boundary of the body is maintained fixed. At the expense of a more complex Korn inequality one could consider an elastic body maintained fixed only on a part of its boundary – see Figure 13.1 below. We refer the reader to [39], [29] for details.
Figure 13.1.

For a material which is homogeneous and isotropic one has

  A^{αβ}_{ij} = λ δ_{iα}δ_{jβ} + μ δ_{iβ}δ_{jα} + μ δ_{ij}δ_{αβ}        (13.48)
where δ_{ij} is the Kronecker symbol defined by

  δ_{ij} = 1 for i = j,  δ_{ij} = 0 for i ≠ j.        (13.49)

μ and λ are the so-called Lamé constants (see [39]). It is easy to see that the symmetry relations (13.40) hold. Moreover if M = (m_{iα}), M′ = (m′_{jβ}) ∈ ℝ^{n×n} we have

  A^{αβ}_{ij} m_{iα} m′_{jβ} = λ m_{ii} m′_{jj} + μ m_{ij} m′_{ji} + μ m_{ij} m′_{ij}
                            = λ tr M tr M′ + μ m_{ij}( m′_{ij} + m′_{ji} )        (13.50)

with the summation convention and with tr denoting the trace of a matrix – i.e., the sum of its diagonal coefficients. One can remark that

  m_{ij}( m′_{ij} + m′_{ji} ) = m_{ji}( m′_{ij} + m′_{ji} )        (13.51)

and (13.50) can take the more symmetric form

  A^{αβ}_{ij} m_{iα} m′_{jβ} = λ tr M tr M′ + 2μ (M + M^T)/2 · (M′ + M′^T)/2.        (13.52)

It is clear that for λ ≥ 0, μ > 0 the inequality (13.41) holds with the constant λ of (13.41) given by 2μ, and thus we have

Theorem 13.8. Let f ∈ H⁻¹(Ω), λ ≥ 0, μ > 0. There exists a unique solution to

  u ∈ H₀¹(Ω),
  λ ∫_Ω div u div v dx + 2μ ∫_Ω ε_{ij}(u) ε_{ij}(v) dx = ⟨f, v⟩ ∀ v ∈ H₀¹(Ω).        (13.53)

Moreover, setting

  σ_{ij}(u) = λ div u δ_{ij} + 2μ ε_{ij}(u),        (13.54)

u is a weak solution to the system

  −∂_{x_j} σ_{ij}(u) = fⁱ in Ω, i = 1, …, n,
  u = 0 on ∂Ω.        (13.55)

Proof. The existence and uniqueness of a solution to (13.53) follows from Theorem 13.6 and (13.52). To obtain (13.55) it is enough to notice that by (13.51) one has

  λ ∫_Ω div u div v dx + 2μ ∫_Ω ε_{ij}(u) ε_{ij}(v) dx
    = λ ∫_Ω div u ∂_{x_j}vʲ dx + 2μ ∫_Ω ε_{ij}(u) ∂_{x_j}vⁱ dx = ⟨f, v⟩

for any v ∈ (D(Ω))ⁿ.
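The algebraic identity (13.52) for the isotropic tensor (13.48) is easy to check by direct contraction; the following NumPy sketch (illustrative only, with Lamé constants chosen arbitrarily) does so for random 3 × 3 matrices.

```python
import numpy as np

# A[i, a, j, b] = lam d_ia d_jb + mu d_ib d_ja + mu d_ij d_ab, and the claim
# A^{ab}_{ij} m_{ia} m'_{jb} = lam tr M tr M' + 2 mu eps(M).eps(M').
rng = np.random.default_rng(1)
n, lam, mu = 3, 2.0, 1.5
d = np.eye(n)
A = (lam * np.einsum("ia,jb->iajb", d, d)
     + mu * np.einsum("ib,ja->iajb", d, d)
     + mu * np.einsum("ij,ab->iajb", d, d))

M, Mp = rng.standard_normal((n, n)), rng.standard_normal((n, n))
left = np.einsum("iajb,ia,jb->", A, M, Mp)

def eps(B):                        # symmetric part (B + B^T)/2
    return (B + B.T) / 2

right = lam * np.trace(M) * np.trace(Mp) + 2 * mu * np.sum(eps(M) * eps(Mp))
print(left, right)
```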
Exercises

1. In the spirit of Paragraph 13.1 introduce a general theory for higher order elliptic systems.

2. Show that if the A^{αβ}_{ij} are measurable, (13.16) is equivalent to A^{αβ}_{ij} ∈ L^∞(Ω) ∀ α, β, i, j.

3. Show that for f ∈ H⁻¹(Ω) and every ε > 0 there exists a unique u_ε solution to

  u_ε ∈ H₀¹(Ω),
  ∫_Ω ∇u_ε·∇v dx + (1/ε) ∫_Ω div u_ε div v dx = ⟨f, v⟩ ∀ v ∈ H₀¹(Ω).

Show that when ε → 0, u_ε → u in H₀¹(Ω), where u is the solution to the Stokes problem.

4. In ℝⁿ we consider a cylindrical domain of the form

  Ω_ℓ = ℓω₁ × ω₂,  ω₁ ⊂ ℝᵖ,  ω₂ ⊂ ℝ^{n−p}

(ω₁ satisfies the assumptions of Paragraph 6.3, to which we refer for the notation). Let A be the coefficients of a linear constant elliptic system satisfying the Legendre–Hadamard condition. Consider u_ℓ the solution to

  (P_ℓ)  u_ℓ ∈ H₀¹(Ω_ℓ),  ∫_{Ω_ℓ} A∇u_ℓ·∇v dx = ∫_{Ω_ℓ} f·v dx ∀ v ∈ H₀¹(Ω_ℓ),

where f = f(X₂) ∈ L²(ω₂) – (x = (X₁, X₂), X₁ ∈ ℝᵖ, X₂ ∈ ω₂). Let u_∞ be the solution to

  (P_∞)  u_∞ ∈ H₀¹(ω₂) = (H₀¹(ω₂))^m,  ∫_{ω₂} A∇u_∞·∇v dx = ∫_{ω₂} f·v dx ∀ v ∈ H₀¹(ω₂)

where f = (f¹, …, f^m). Note that v and u_∞ depend on X₂ only – see Section 6.3. Show that there exist two constants C, α > 0 independent of ℓ such that

  |∇(u_ℓ − u_∞)|_{2,Ω_{ℓ/2}} ≤ C e^{−αℓ} |f|_{2,ω₂}.

Extend this to A = A(x) where A satisfies the Legendre condition or is of the elasticity type.
Chapter 14
The Stationary Navier–Stokes System

14.1 Introduction

Let Ω be a bounded open set of ℝⁿ. As for the Stokes problem one is looking for a couple

  (u, p)        (14.1)

representing respectively the velocity of a fluid and its pressure, such that

  −μΔu + (u·∇)u + ∇p = f in Ω,
  div u = 0 in Ω,        (14.2)
  u = 0 on ∂Ω.

μ is the viscosity of the fluid. Note that here one is taking into account the nonlinear effect. The operator u·∇ is defined as

  uⁱ ∂_{x_i}        (14.3)

with the summation convention in i. We will restrict ourselves to the physically relevant cases – i.e., n = 2 or 3. We refer the reader to [45], [56] for a physical background on the problem (see also [51], [52], [68]). Eliminating the pressure as we did in the preceding chapter we are reduced to finding u such that

  u ∈ H̃₀¹(Ω),
  μ ∫_Ω ∇u·∇v dx + ∫_Ω (u·∇)u·v dx = ∫_Ω f·v dx ∀ v ∈ H̃₀¹(Ω),        (14.4)

where H̃₀¹(Ω) is defined by (13.38). In order to do that we introduce the trilinear form defined as

  t(w; u, v) = ∫_Ω (w·∇)u·v dx = ∫_Ω wⁱ ∂_{x_i}uʲ vʲ dx        (14.5)
(with the summation convention in i, j). One should notice that if wⁱ, ∂_{x_i}uʲ, vʲ ∈ L²(Ω) it is not clear that the product above is integrable. It is, however, when w, v ∈ H₀¹(Ω), as follows from the Sobolev embedding theorem. Indeed we have

Proposition 14.1. There exists a constant C = C(Ω) such that

  |t(w; u, v)| ≤ C |∇w|_{2,Ω} |∇u|_{2,Ω} |∇v|_{2,Ω} ∀ w, v ∈ H₀¹(Ω), ∀ u ∈ H¹(Ω).        (14.6)

Proof. From the formula (12.30) we have, for some constant C = C(n),

  |v|_{2*,Ω} ≤ C |∇v|_{2,Ω} ∀ v ∈ H₀¹(Ω),

where 1/2* = 1/2 − 1/n, i.e., 2* can be chosen arbitrarily large for n = 2 and equal to 6 for n = 3. Thus in both cases we have (for instance applying Hölder's inequality)

  |v|_{4,Ω} ≤ C(Ω)|v|_{6,Ω} ≤ C(Ω)|∇v|_{2,Ω} ∀ v ∈ H₀¹(Ω).

From Hölder's inequality again, since 1/4 + 1/2 + 1/4 = 1, we then have for every i, j

  ∫_Ω |wⁱ| |∂_{x_i}uʲ| |vʲ| dx ≤ |wⁱ|_{4,Ω} |∂_{x_i}uʲ|_{2,Ω} |vʲ|_{4,Ω}
                              ≤ C |∇wⁱ|_{2,Ω} |∇uʲ|_{2,Ω} |∇vʲ|_{2,Ω}
                              ≤ C |∇w|_{2,Ω} |∇u|_{2,Ω} |∇v|_{2,Ω}        (14.7)

and the result follows, due to the formula (14.5), after summation in i, j. This completes the proof of the proposition.

Proposition 14.2. Let w ∈ H¹(Ω) with div w = 0. Then we have

  t(w; v, v) = 0 ∀ v ∈ H₀¹(Ω),        (14.8)
  t(w; v, u) = −t(w; u, v) ∀ u, v ∈ H₀¹(Ω).        (14.9)

Proof. Let us first choose v ∈ (D(Ω))ⁿ, n = 2 or 3. Then by (14.5) we have, with the summation convention,

  t(w; v, v) = ∫_Ω wⁱ ∂_{x_i}vʲ vʲ dx = ½ ∫_Ω wⁱ ∂_{x_i}(vʲ)² dx = −½ ⟨∂_{x_i}wⁱ, (vʲ)²⟩ = 0.

(Note that (vʲ)² ∈ D(Ω).)
Thus (14.8) holds for v ∈ (D(Ω))ⁿ. By density of (D(Ω))ⁿ in H₀¹(Ω), (14.8) follows, since t – a trilinear form continuous at 0 for the H¹(Ω)-norm (see (14.6)) – is continuous at any point. To prove (14.9) it is enough to notice that by (14.8)

  0 = t(w; u + v, u + v) = t(w; u, u) + t(w; u, v) + t(w; v, u) + t(w; v, v)
    = t(w; u, v) + t(w; v, u) ∀ u, v ∈ H₀¹(Ω).

This completes the proof of the proposition.
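The cancellation (14.8) can be observed numerically. The sketch below (NumPy, illustrative only; the fields are my choice) builds a divergence-free w from a stream function on the unit square, takes v = (v¹, 0) vanishing on the boundary, and evaluates t(w; v, v) by a Riemann sum; the result is zero up to quadrature and rounding error.

```python
import numpy as np

h = 1.0 / 400
x = np.arange(0.0, 1.0 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing="ij")
s, c, pi = np.sin, np.cos, np.pi

# w = (d_y psi, -d_x psi) for psi = sin^2(pi x) sin^2(pi y): div w = 0 exactly.
w1 = 2 * pi * s(pi * X) ** 2 * s(pi * Y) * c(pi * Y)
w2 = -2 * pi * s(pi * X) * c(pi * X) * s(pi * Y) ** 2

v = s(pi * X) * s(pi * Y)           # v = (v1, 0), vanishing on the boundary
d1v = pi * c(pi * X) * s(pi * Y)
d2v = pi * s(pi * X) * c(pi * Y)

t_wvv = np.sum((w1 * d1v + w2 * d2v) * v) * h * h
print(t_wvv)
```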
14.2 Existence and uniqueness result

For f ∈ H⁻¹(Ω) we will denote by ⟨f, v⟩ the continuous linear form on H₀¹(Ω) defined as

  ⟨f, v⟩ = ⟨fⁱ, vⁱ⟩        (14.10)

with the summation convention. Clearly (14.10) defines a continuous linear form on H₀¹(Ω) and we denote by |f|_* the strong dual norm of this linear form when H₀¹(Ω) is equipped with the norm

  |∇v|_{2,Ω}.        (14.11)

Then we have

Theorem 14.3. For f ∈ H⁻¹(Ω), μ > 0, there exists a solution u to

  u ∈ H̃₀¹(Ω),
  μ ∫_Ω ∇u·∇v dx + ∫_Ω (u·∇)u·v dx = ⟨f, v⟩ ∀ v ∈ H̃₀¹(Ω)        (14.12)

such that

  |∇u|_{2,Ω} ≤ |f|_*/μ.        (14.13)

In order to prove Theorem 14.3 we will use a variant of the Brouwer fixed point theorem (see the appendix). More precisely:

Lemma 14.4. Let H be a finite-dimensional Hilbert space equipped with the scalar product (·, ·). Let T : H → H be a continuous mapping such that for some ν > 0 we have

  (T(v), v) ≥ 0 ∀ v ∈ H, |v| = ν.        (14.14)

Then there exists an element u ∈ H such that

  |u| ≤ ν,  T(u) = 0.        (14.15)
Proof of the lemma. Suppose not, i.e., T(u) ≠ 0 for all |u| ≤ ν. Then consider the mapping

  u ↦ F(u) = −ν T(u)/|T(u)|.

This is a continuous mapping from B_ν(0) into itself, and by the Brouwer fixed point theorem it admits a fixed point in B_ν(0), i.e., a point u such that

  −ν T(u)/|T(u)| = u.

Due to this equality one has |u| = ν and

  −(T(u), u) ν = |u|² |T(u)| > 0,

i.e., (T(u), u) < 0, which contradicts (14.14) and completes the proof of the lemma.

Proof of the theorem. H̃₀¹(Ω) is a closed subspace of H₀¹(Ω) and thus separable. Let us consider v_n a countable basis of H̃₀¹(Ω), and denote by V_k the finite-dimensional subspace of H̃₀¹(Ω) generated by v₁, …, v_k. We will assume V_k equipped with the scalar product of H₀¹(Ω), namely

  ∫_Ω ∇u·∇v dx.        (14.16)

By the Lax–Milgram theorem, for v ∈ V_k there exists a unique u = T(v) solution to

  u ∈ V_k,
  ∫_Ω ∇u·∇w dx = μ ∫_Ω ∇v·∇w dx + ∫_Ω (v·∇)v·w dx − ⟨f, w⟩ ∀ w ∈ V_k.        (14.17)

(Note that the right-hand side of the equation above is a continuous linear form in w.) Taking w = v and applying Proposition 14.2 we see that

  ∫_Ω ∇T(v)·∇v dx = μ |∇v|²_{2,Ω} − ⟨f, v⟩
                  ≥ μ |∇v|²_{2,Ω} − |f|_* |∇v|_{2,Ω}
                  = |∇v|_{2,Ω} ( μ |∇v|_{2,Ω} − |f|_* ) ≥ 0 for |∇v|_{2,Ω} = |f|_*/μ.
Applying Lemma 14.4 with ν = |f|_*/μ (note that T is continuous on V_k) we see that there exists u_k ∈ V_k such that

  |∇u_k|_{2,Ω} ≤ |f|_*/μ        (14.18)

and T(u_k) = 0, which reads

  μ ∫_Ω ∇u_k·∇w dx + ∫_Ω (u_k·∇)u_k·w dx = ⟨f, w⟩ ∀ w ∈ V_k.        (14.19)

From (14.18) it follows that the sequence u_k is bounded independently of k in H̃₀¹(Ω). Up to a subsequence we have

  u_k ⇀ u in H̃₀¹(Ω),  u_k → u in L⁴(Ω).

(We use here the fact that the injection from H₀¹(Ω) into L⁴(Ω) is compact. This follows easily from (12.30) and arguments given in Chapter 2.) Noticing by Proposition 14.2 that

  ∫_Ω (u_k·∇)u_k·w dx = −∫_Ω (u_k·∇)w·u_k dx

and using (14.7), we can pass to the limit in (14.19) to get

  μ ∫_Ω ∇u·∇w dx − ∫_Ω (u·∇)w·u dx = ⟨f, w⟩ ∀ w ∈ V_k, ∀ k.        (14.20)

(By (14.7), t(w; u, v) is continuous at 0 on L⁴(Ω) × H₀¹(Ω) × L⁴(Ω) and thus, by trilinearity, at every point; this leads to (14.20).) Applying Proposition 14.2 again we see that u then satisfies

  u ∈ H̃₀¹(Ω),
  μ ∫_Ω ∇u·∇w dx + ∫_Ω (u·∇)u·w dx = ⟨f, w⟩ ∀ w ∈ H̃₀¹(Ω)

and thus u is a solution to (14.12). (We have also used here the density of ∪_{k≥1} V_k in H̃₀¹(Ω).) The estimate (14.13) follows from the weak lower semicontinuity of the norm and (14.18). This completes the proof of the theorem.

The problem of uniqueness is a difficult issue for this nonlinear problem. However for small data f we can show uniqueness. More precisely let us set

  ‖t‖ = Sup_{w,u,v ≠ 0} |t(w; u, v)| / ( |∇w|₂ |∇u|₂ |∇v|₂ )        (14.21)

where for simplicity we dropped the index Ω in |·|_{2,Ω}. It is clear by (14.6) that the supremum (14.21) does exist.
Moreover we have

Theorem 14.5. Suppose that f ∈ H⁻¹(Ω) and

  |f|_* < μ²/‖t‖;        (14.22)

then the solution to (14.12) is unique.

Proof. Let u₁, u₂ be two solutions. By subtraction we get

  μ ∫_Ω ∇(u₁ − u₂)·∇v dx + t(u₁; u₁, v) − t(u₂; u₂, v) = 0 ∀ v ∈ H̃₀¹(Ω).

Taking v = u₁ − u₂ we derive

  μ |∇(u₁ − u₂)|²_{2,Ω} = t(u₂; u₂, u₁ − u₂) − t(u₁; u₁, u₁ − u₂)
    = t(u₂ − u₁; u₂, u₁ − u₂) + t(u₁; u₂, u₁ − u₂) − t(u₁; u₁, u₁ − u₂)
    = t(u₂ − u₁; u₂, u₁ − u₂) − t(u₁; u₁ − u₂, u₁ − u₂)
    = t(u₂ − u₁; u₂, u₁ − u₂)

by Proposition 14.2. It follows from the definition of ‖t‖ that

  μ |∇(u₁ − u₂)|²_{2,Ω} ≤ ‖t‖ |∇u₂|_{2,Ω} |∇(u₁ − u₂)|²_{2,Ω}.        (14.23)

Now, taking v = u in (14.12), one easily sees that

  μ |∇u|²_{2,Ω} = ⟨f, u⟩ ≤ |f|_* |∇u|_{2,Ω},

i.e., every solution to (14.12) satisfies (14.13). From (14.23) we then derive

  |∇(u₁ − u₂)|²_{2,Ω} ≤ ( ‖t‖ |f|_*/μ² ) |∇(u₁ − u₂)|²_{2,Ω}

which implies by (14.22) that u₁ = u₂. This completes the proof of the theorem.
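The small-data mechanism behind Theorem 14.5 can be caricatured by a scalar model: for μx + tx² = f (a linear part plus a quadratic one, as in (14.12)), the iteration x ↦ (f − tx²)/μ is a contraction on a small ball provided |f| < μ²/(4t), and it produces the unique small solution. The sketch below (plain Python; the constant μ²/(4t) is specific to this scalar toy, not to the theorem) is purely illustrative.

```python
# Fixed-point iteration for mu*x + t*x**2 = f with small data.
mu, t, f = 1.0, 1.0, 0.2            # |f| < mu**2 / (4*t) = 0.25
x = 0.0
for _ in range(60):
    x = (f - t * x * x) / mu        # contraction for small |f|

residual = abs(mu * x + t * x * x - f)
print(x, residual)
```

For f beyond the threshold the iteration may fail to converge, mirroring the loss of the uniqueness argument when (14.22) is violated.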
Exercises

1. Let t be a k-multilinear form on a normed space N. Show that continuity at 0 is equivalent to continuity of t at any point, and also to the existence of a constant C such that

  |t(a₁, …, a_k)| ≤ C |a₁| ⋯ |a_k| ∀ aᵢ ∈ N, i = 1, …, k.

(|·| denotes the norm in N and the absolute value in ℝ.)

2. Show that t(w; v, v) = 0 for all smooth functions v and all smooth divergence-free w such that w·ν = 0 on ∂Ω. (ν is the outward unit normal to ∂Ω.)

3. Let u ∈ H̃₀¹(Ω). For x ∈ Ω one denotes by Ω_{x_i} the section

  Ω_{x_i} = { (y₁, …, x_i, …, y_n) ∈ Ω }.

Show that

  ∫_{Ω_{x_i}} uⁱ(x) dx₁ ⋯ d̂x_i ⋯ dx_n = 0

(the measure above is the Lebesgue measure where dx_i has been dropped).

4. (Poiseuille flow) Let Ω = ℝ × ω where ω is a bounded open set of ℝ². One considers the problem

  −μΔu + (u·∇)u + ∇p = 0 in Ω,
  div u = 0 in Ω,
  u = 0 on ℝ × ∂ω,  ∫_ω u¹(x₁, x′) dx′ = F

(x = (x₁, x′), F is a given constant). Show that the problem admits a solution (u, p) of the type

  u = (u¹, 0, 0),  p = cx₁,

where u¹ is the solution of a Dirichlet problem in ω and c a constant which can be determined.
Chapter 15
Some More Spaces

15.1 Motivation

Let Ω be a bounded open set of ℝⁿ. Suppose that we would like to solve the following nonlinear Dirichlet problem:

  −∂_{x_i}( |∇u|^{p−2} ∂_{x_i}u ) = f in Ω,
  u = 0 on ∂Ω.        (15.1)

For p ∈ (1, +∞) the operator

  Δ_p = ∂_{x_i}( |∇·|^{p−2} ∂_{x_i}· )        (15.2)

is called the p-Laplace operator. Note that in the formula above we are making the summation convention, and for p = 2, Δ₂ is just the usual Laplace operator. A weak formulation would lead to replacing the first equation of (15.1) by

  ∫_Ω |∇u|^{p−2} ∇u·∇v dx = ∫_Ω f v dx        (15.3)

for any test function v. In the case where p = 2, (15.3) was making sense for

  f, v, ∂_{x_i}u, ∂_{x_i}v ∈ L²(Ω).        (15.4)

In the case where p ≠ 2, the integral on the right-hand side of (15.3) makes sense for (recall Hölder's inequality)

  f ∈ L^{p′}(Ω),  v ∈ L^p(Ω),

and the one on the left-hand side makes sense for ∂_{x_i}u, ∂_{x_i}v ∈ L^p(Ω). This suggests introducing Sobolev spaces on the model of H¹(Ω), but where L²(Ω) is replaced by L^p(Ω).
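The variational structure behind (15.1) can be made concrete in one dimension: the weak solution minimizes J(u) = (1/p)∫|u′|^p dx − ∫fu dx over functions vanishing at the endpoints. The sketch below (NumPy; plain gradient descent on a finite-difference grid with p = 3, f = 1, step size and iteration count chosen by hand) is a rough illustration of this viewpoint only, not an efficient p-Laplacian solver.

```python
import numpy as np

p, N = 3.0, 50
h = 1.0 / N
f = np.ones(N - 1)
u = np.zeros(N - 1)                     # interior nodal values; u = 0 at ends

def energy(u):
    du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
    return h * np.sum(np.abs(du) ** p) / p - h * np.sum(f * u)

J0 = energy(u)
for _ in range(50000):
    du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
    flux = np.abs(du) ** (p - 2) * du   # |u'|^{p-2} u' on the cell edges
    grad = flux[:-1] - flux[1:] - h * f # gradient of the discrete energy
    u -= 2e-3 * grad
J1 = energy(u)
print(J0, J1, u.max())
```

The energy decreases from 0 toward its minimum, and the computed profile approaches the exact solution u(x) = (2/3)[(1/2)^{3/2} − |1/2 − x|^{3/2}] of −(|u′|u′)′ = 1.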
Next suppose that we are interested in solving the problem

  Δ²u = f in Ω,
  u = ∂u/∂ν = 0 on ∂Ω.        (15.5)

Such a problem arises for instance in elasticity; ν denotes the outward unit normal to the boundary ∂Ω of Ω. A weak formulation would lead to replacing the first equation of (15.5) by

  ∫_Ω Δu Δv dx = ∫_Ω f v dx.        (15.6)

Here the first integral of (15.6) makes sense for Δu, Δv ∈ L²(Ω). Thus the L²(Ω)-framework is still suitable in this case; however one needs more derivatives in L²(Ω) than in the case of the Dirichlet problem. Guided by the two examples above we introduce, for k ∈ ℕ, 1 ≤ p ≤ +∞,

  W^{k,p}(Ω) = { v ∈ L^p(Ω) | D^α v ∈ L^p(Ω) ∀ α ∈ ℕⁿ, |α| ≤ k }.        (15.7)

(See Section 2.2 for the notation; the derivatives are taken in the distributional sense.) These spaces generalize the space H¹ constructed on L²(Ω) since we have

  W^{1,2}(Ω) = H¹(Ω).        (15.8)
15.2 Essential features of the Sobolev spaces W k,p We equip the spaces W k,p (Ω) with the norm uk,p =
|D
α
u|pp,Ω
p1 .
(15.9)
|α|≤k
Note that we will identify W 0,p (Ω) with Lp (Ω) and for p = ∞ the norm (15.9) is uk,∞ = Max |Dα u|∞,Ω , |α|≤k
(cf. the exercises). Theorem 15.1. Let Ω be an open subset of Rn . Equipped with the norm (15.9), W k,p (Ω) is a Banach space. Proof. We leave to the reader to show that W k,p (Ω) is a vector space and that (15.9) defines a norm. Let us show the completeness of these spaces for p < +∞
the case p = +∞ being analogous. Let u_n be a Cauchy sequence in W^{k,p}(Ω) – i.e., such that for n, m large enough we have

Σ_{|α|≤k} |D^α u_n − D^α u_m|_{p,Ω}^p ≤ ε^p   (15.10)

(note that for α = 0, D^α u = u). Then for any α, |α| ≤ k, D^α u_n is a Cauchy sequence in L^p(Ω) and there exists u_α ∈ L^p(Ω) such that

D^α u_n → u_α  in L^p(Ω)

(in particular u_n → u_0 = u in L^p(Ω)). By the continuity of the derivation in D′(Ω) we have D^α u_n → D^α u, so u_α = D^α u and thus u ∈ W^{k,p}(Ω). Moreover, letting m → +∞ in (15.10), we have for n large enough

Σ_{|α|≤k} |D^α u_n − D^α u|_{p,Ω}^p ≤ ε^p,

that is to say u_n → u in W^{k,p}(Ω). This completes the proof of the theorem.

Copying the definition of H_0^1(Ω), we set for 1 ≤ p < +∞

W_0^{k,p}(Ω) = \overline{D(Ω)} = the closure of D(Ω) in W^{k,p}(Ω).

Theorem 15.2. W_0^{k,p}(Ω) is a Banach space when endowed with the ‖·‖_{k,p}-norm.

Proof. This is a trivial consequence of the definition.

Remark 15.1. In the case p = 2, W^{k,2}(Ω), W_0^{k,2}(Ω) are Hilbert spaces for the scalar product

Σ_{|α|≤k} (D^α u, D^α v),

(·,·) being the canonical scalar product in L²(Ω). In this case we will prefer the notation H^k(Ω), H_0^k(Ω) for the spaces.

We now establish an L^p(Ω)-Poincaré inequality generalizing (2.49). We have

Theorem 15.3 (Poincaré Inequality). Let Ω be an open set bounded in one direction. More precisely suppose that we are in the situation of Theorem 2.8. Then we have

|v|_{p,Ω} ≤ (2a / p^{1/p}) | ∂v/∂ν |_{p,Ω}  ∀ v ∈ W_0^{1,p}(Ω).   (15.11)
Proof. We just sketch it, since it follows the lines of the proof of Theorem 2.8. We suppose as there that ν = e_1. We then have for v ∈ D(Ω)

|v|^p(x) = | ∫_{−a}^{x_1} ∂_{x_1}v(s, x_2, …, x_n) ds |^p ≤ ( ∫_{−a}^{x_1} |∂_{x_1}v(s, x_2, …, x_n)| ds )^p
≤ |x_1 + a|^{p/p′} ∫_{−a}^{a} |∂_{x_1}v(s, x_2, …, x_n)|^p ds   (by Hölder)
= |x_1 + a|^{p−1} ∫_{−a}^{a} |∂_{x_1}v(s, x_2, …, x_n)|^p ds.

Integrating in x_1 between (−a, a) and then in the other directions, the result follows.

As a consequence we have

Theorem 15.4. Suppose that Ω is bounded in one direction. Then on W_0^{k,p}(Ω) the norms

‖u‖_{k,p}  and  |u|_{k,p} = ( Σ_{|α|=k} |D^α u|_{p,Ω}^p )^{1/p}   (15.12)

are equivalent.

Proof. One has trivially

|u|_{k,p} ≤ ‖u‖_{k,p}  ∀ u ∈ W_0^{k,p}(Ω).

Now, since Ω is bounded in one direction, by the Poincaré inequality we have for u ∈ D(Ω) and for some constant C independent of u

|u|_{p,Ω} ≤ (2a / p^{1/p}) | ∂u/∂ν |_{p,Ω} ≤ C ( Σ_{i=1}^n |∂_{x_i}u|_{p,Ω}^p )^{1/p}.   (15.13)

(We used here the fact that in ℝⁿ all the norms are equivalent.) Taking u = ∂_{x_j}u in (15.13) and summing up in j, we get

Σ_{j=1}^n |∂_{x_j}u|_{p,Ω}^p ≤ C Σ_{i,j} |∂²_{x_i x_j}u|_{p,Ω}^p.

Continuing this process the result follows, i.e., we obtain for some other constant C

‖u‖_{k,p} ≤ C ( Σ_{|α|=k} |D^α u|_{p,Ω}^p )^{1/p} = C |u|_{k,p}  ∀ u ∈ D(Ω),

and by density (15.12) holds. This completes the proof of the theorem.

We also have

Theorem 15.5. W_0^{k,p}(Ω), W^{k,p}(Ω) are reflexive for any k, 1 < p < +∞.

(See for instance [1].)
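The book contains no code, but the Poincaré inequality (15.11) is easy to probe numerically. The following Python sketch (an added illustration, not part of the original text) checks |v|_{p,Ω} ≤ (2a/p^{1/p})|v′|_{p,Ω} in one dimension for the ad hoc test function v(x) = cos(πx/2a) on Ω = (−a, a), which vanishes at ±a; norms are approximated by midpoint Riemann sums.

```python
import math

def lp_norm(vals, dx, p):
    # discrete L^p norm: (sum |v|^p dx)^(1/p), a midpoint Riemann sum
    return (sum(abs(v) ** p for v in vals) * dx) ** (1.0 / p)

def poincare_sides(p, a=1.0, n=4000):
    # left- and right-hand sides of (15.11) for v(x) = cos(pi x / 2a)
    dx = 2.0 * a / n
    xs = [-a + (i + 0.5) * dx for i in range(n)]
    v = [math.cos(math.pi * x / (2.0 * a)) for x in xs]
    dv = [-math.pi / (2.0 * a) * math.sin(math.pi * x / (2.0 * a)) for x in xs]
    lhs = lp_norm(v, dx, p)
    rhs = (2.0 * a / p ** (1.0 / p)) * lp_norm(dv, dx, p)
    return lhs, rhs
```

For p = 2 and a = 1 the left side equals 1 and the right side is π/√2 ≈ 2.22, so the inequality holds with room to spare; the same is observed for other exponents.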
15.3 An application

As we were able to solve the Dirichlet problem weakly, we can now solve (15.5) in a weak sense. Indeed we have

Theorem 15.6. Suppose that f ∈ L²(Ω). Then, if Ω is bounded in one direction, there exists a unique u solution to

∫_Ω Δu Δv dx = ∫_Ω f v dx  ∀ v ∈ H_0^2(Ω),
u ∈ H_0^2(Ω).   (15.14)

Proof. We apply the Lax–Milgram theorem. Indeed v ↦ ∫_Ω f v dx is a continuous linear form on H_0^2(Ω). Moreover (u, v) ↦ ∫_Ω Δu Δv dx is a continuous bilinear form on H_0^2(Ω). We will be done if we can show that this bilinear form is coercive. This is a consequence of the following simple remark. If u ∈ D(Ω) then we have

∫_Ω (Δu)² dx = ∫_Ω ( Σ_i ∂²_{x_i}u )² dx
= ∫_Ω Σ_{i,j} ∂²_{x_i}u ∂²_{x_j}u dx   (15.15)
= Σ_{i,j} ∫_Ω ( ∂²_{x_i x_j}u )² dx.

Indeed the last equality follows by integration by parts – since u ∈ D(Ω) – or from the definition of the distributional derivative. From (15.15) and by density of D(Ω) in H_0^2(Ω) we derive

∫_Ω (Δu)² dx ≥ c ‖u‖²_{2,2}  ∀ u ∈ H_0^2(Ω)

for some constant c. The coerciveness of the bilinear form follows and this completes the proof of the theorem.

Remark 15.2. H_0^1(Ω) is the space of functions of H¹(Ω) "vanishing" on ∂Ω. Thus in this spirit H_0^2(Ω) is the space of functions of H²(Ω) such that u = 0, ∂_{x_i}u = 0 on ∂Ω for any i. This last condition seems to be stronger than u = 0, ∂u/∂ν = 0 on ∂Ω. This is not the case. Indeed, for a smooth function, u = 0 on ∂Ω implies that all tangential derivatives vanish. Since the normal derivative vanishes too, so does the gradient.

We introduced above the spaces built on L^p(Ω) with the idea of solving the problem (15.1). However we see that the left-hand side of the equality (15.3) is not linear in u and thus does not define a bilinear form. Some more work is then needed in order to be able to reach a solution for (15.1).
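The identity (15.15) has an exact discrete analogue: replace derivatives by second differences and integrals by sums; for a grid function vanishing near the border (the discrete counterpart of u ∈ D(Ω)), summation by parts on the grid holds exactly. The Python sketch below (an added illustration, not from the book; all names are ad hoc) checks it in two dimensions for a random compactly supported grid function.

```python
import random

def d11(u, i, j): return u[i + 1][j] - 2.0 * u[i][j] + u[i - 1][j]   # discrete u_xx
def d22(u, i, j): return u[i][j + 1] - 2.0 * u[i][j] + u[i][j - 1]   # discrete u_yy
def d12(u, i, j): return u[i + 1][j + 1] - u[i + 1][j] - u[i][j + 1] + u[i][j]  # mixed

def biharmonic_identity(n=12, seed=0):
    # random u supported away from the border of the grid ("compact support")
    rng = random.Random(seed)
    m = n + 6
    u = [[0.0] * m for _ in range(m)]
    for i in range(3, 3 + n):
        for j in range(3, 3 + n):
            u[i][j] = rng.uniform(-1.0, 1.0)
    pts = [(i, j) for i in range(1, m - 1) for j in range(1, m - 1)]
    left = sum((d11(u, i, j) + d22(u, i, j)) ** 2 for i, j in pts)            # sum (Lap u)^2
    right = sum(d11(u, i, j) ** 2 + 2.0 * d12(u, i, j) ** 2 + d22(u, i, j) ** 2
                for i, j in pts)                                              # sum of all seconds
    return left, right
```

The two sums agree to rounding error, mirroring (15.15).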
Exercises

1. Let p, q ∈ [1, +∞]. Set

W^{1,p,q}(Ω) = { v ∈ L^p(Ω) | ∂_{x_i}v ∈ L^q(Ω) ∀ i = 1, …, n }.

Show that W^{1,p,q}(Ω) is a Banach space when equipped with the norm

|v|_{p,Ω} + |∇v|_{q,Ω}.

2. Let Ω be a measurable subset of ℝⁿ and f : Ω → ℝ a measurable function.

(i) Show that

ess sup |f| ≤ lim inf_{p→+∞} |f|_{p,Ω}.

(ii) Assume that f ∈ L^∞(Ω) ∩ L^{p_0}(Ω) for some p_0 ≥ 1. Prove that f ∈ L^p(Ω) for all p ≥ p_0 and that

lim_{p→+∞} |f|_{p,Ω} = |f|_{∞,Ω}.
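Exercise 2 (ii) can be illustrated numerically. On Ω = (0, 1) one has, for f(x) = x, |f|_{p,Ω} = (1/(p+1))^{1/p} → 1 = |f|_{∞,Ω}. The Python sketch below (an added illustration, not from the book; the function name is ad hoc) approximates the L^p norms by midpoint Riemann sums.

```python
def lp_norm_unit(f, p, n=100000):
    # midpoint Riemann sum for the L^p norm of f on (0, 1)
    dx = 1.0 / n
    return (sum(abs(f((i + 0.5) * dx)) ** p for i in range(n)) * dx) ** (1.0 / p)

# |x|_{p,(0,1)} increases with p toward the essential sup, which is 1
norms = [lp_norm_unit(lambda x: x, p) for p in (10, 50, 200)]
```

On a set of measure one the L^p norms are nondecreasing in p, and already for p = 200 the value is within a few percent of the sup norm.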
3. Let T be a continuous linear form on W_0^{1,p}(Ω), 1 ≤ p < +∞. Show that there exist T_0, T_1, …, T_n ∈ L^{p′}(Ω) such that

T = T_0 + Σ_{i=1}^n ∂_{x_i} T_i

(compare to Theorem 2.10). Hint: consider the map u ↦ (u, ∇u) from W_0^{1,p}(Ω) into L^p(Ω)^{n+1}.
Chapter 16
Regularity Theory

16.1 Introduction

We have seen in Chapter 3 that when u is a regular or strong solution of the Dirichlet problem

−Δu = f  in Ω,
u = 0  on ∂Ω,   (16.1)

then u is a weak solution of it. This was an argument for introducing weak solutions, since strong solutions could not be produced by our existence theory. Having obtained a weak solution to (16.1), one can then ask oneself if this solution also satisfies (16.1) in the usual sense. This requires of course some assumptions on f: recall that we know how to solve (16.1) for very general f ∈ H^{−1}(Ω).

Let us try to get some insight through the one-dimensional problem in Ω = (0, 1),

−u″ = f,
u(0) = u(1) = 0.   (16.2)

When solved for f ∈ L²(Ω), the solution u belongs to H¹(Ω). However, since in the distributional sense

u″ = −f,   (16.3)

we have in fact u ∈ H²(Ω). Thus f ∈ L²(Ω) implies u ∈ H²(Ω). If we differentiate the equation (16.3) k times in the distributional sense we get

u^{(k+2)} = −f^{(k)}.   (16.4)

((k) denotes the k-th derivative.) Thus

f ∈ H^k(Ω)  ⟺  u ∈ H^{k+2}(Ω).   (16.5)

A natural question is to try to establish (16.5) for u solution to (16.1) in higher dimensions. The difficulty is due to the fact that a priori Δu ∈ L²(Ω) does not
imply that all second derivatives of u belong to L²(Ω). Said differently, Δu ∈ H^k(Ω) does not necessarily imply that u ∈ H^{k+2}(Ω).

When it is not clear that derivatives exist, one can work with difference quotients. Let us explain this briefly. For u ∈ W^{1,p}(Ω), 1 ≤ p, h ∈ ℝ, let us set

τ_{h,s}(u) = τ_h(u) = ( u(x + he_s) − u(x) ) / h.   (16.6)

In the expression above e_s denotes the s-th vector of the canonical basis of ℝⁿ, i.e., e_s = (0, …, 1, …, 0) where the 1 is located in the s-th slot. The quotient above is defined for h < dist(x, ∂Ω).

Remark 16.1. It is easy to show that

τ_{h,s}(u) → ∂_{x_s}u  in D′(Ω)   (16.7)

when h → 0.

We have

Proposition 16.1.
(i) Suppose that u, v ∈ L²(Ω) and one of them has compact support K ⊂ Ω. Then for h < dist(K, ∂Ω) we have

∫_Ω u τ_h(v) dx = −∫_Ω τ_{−h}(u) v dx.   (16.8)

(ii) If u ∈ W^{1,p}(Ω) and Ω′ ⊂⊂ Ω, for h < dist(Ω′, ∂Ω) we have τ_h(u) ∈ W^{1,p}(Ω′) and

∂_{x_k}(τ_h u) = τ_h(∂_{x_k}u).   (16.9)

(iii) For h < dist(x, ∂Ω) we have

τ_h(uv)(x) = u(x + he_s) τ_h v(x) + τ_h u(x) v(x).   (16.10)

Proof. (i) The functions are supposed to be extended by 0 when necessary. As above, we drop the index s when it is clear, for simplicity. We have

∫_Ω u τ_h(v) dx = ∫_Ω u(x) ( v(x + he_s) − v(x) ) / h dx
= ∫_Ω u(x) v(x + he_s) / h dx − ∫_Ω u(x) v(x) / h dx.
Changing x into x − he_s in the first integral, it comes

∫_Ω u τ_h(v) dx = ∫_Ω u(x − he_s) v(x) / h dx − ∫_Ω u(x) v(x) / h dx
= −∫_Ω ( u(x − he_s) − u(x) ) / (−h) · v(x) dx = −∫_Ω τ_{−h}(u) v dx.

This completes the proof of (i).

(ii) Let ϕ ∈ D(Ω′), h < dist(Ω′, ∂Ω). We have

⟨∂_{x_k}(τ_h u), ϕ⟩ = −⟨τ_h u, ∂_{x_k}ϕ⟩ = −∫_Ω τ_h(u) ∂_{x_k}ϕ dx
= ∫_Ω u τ_{−h}(∂_{x_k}ϕ) dx   by (i).

It follows (note that the assumptions on ϕ and h imply that τ_{−h}ϕ ∈ D(Ω)) that

⟨∂_{x_k}(τ_h u), ϕ⟩ = ∫_Ω u ∂_{x_k}(τ_{−h}ϕ) dx
= −∫_Ω ∂_{x_k}u (τ_{−h}ϕ) dx = ∫_Ω τ_h(∂_{x_k}u) ϕ dx
= ⟨τ_h(∂_{x_k}u), ϕ⟩.

This completes the proof of (ii).

(iii) One has by definition of τ_h

τ_h(uv)(x) = ( u(x + he_s) v(x + he_s) − u(x) v(x) ) / h
= u(x + he_s) ( v(x + he_s) − v(x) ) / h + ( u(x + he_s) − u(x) ) / h · v(x),

which completes the proof of the proposition.

The following theorem relates the difference quotients to the derivatives.
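The integration-by-parts formula (16.8) also holds exactly for sequences with finitely many nonzero terms, with the integral replaced by a sum: it is nothing but a shift of the summation index. The Python sketch below (an added illustration, not from the book; all parameters are ad hoc) confirms the discrete identity to rounding error.

```python
import random

def discrete_parts(n=50, m=3, dx=0.1, seed=1):
    # u, v vanish near the ends of the grid, mimicking compact support;
    # the quotient uses h = m * dx, i.e., a shift of m grid cells
    rng = random.Random(seed)
    size = n + 2 * m
    u = [0.0] * size
    v = [0.0] * size
    for i in range(m, m + n):
        u[i] = rng.uniform(-1.0, 1.0)
        v[i] = rng.uniform(-1.0, 1.0)
    h = m * dx
    lhs = sum(u[i] * (v[i + m] - v[i]) / h for i in range(size - m)) * dx   # int u tau_h(v)
    rhs = -sum((u[i] - u[i - m]) / h * v[i] for i in range(m, size)) * dx   # -int tau_{-h}(u) v
    return lhs, rhs
```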
Theorem 16.2.
(i) Assume that u ∈ W^{1,p}(Ω), 1 ≤ p ≤ +∞. Let Ω′ ⊂⊂ Ω. Then for |h| < dist(Ω′, ∂Ω) we have

|τ_{h,s}u|_{p,Ω′} ≤ |∂_{x_s}u|_{p,Ω}.   (16.11)

(ii) Assume that u ∈ L^p(Ω), 1 ≤ p ≤ +∞. If there exists a constant C such that for all Ω′ ⊂⊂ Ω

|τ_{h,s}u|_{p,Ω′} ≤ C  ∀ h, |h| < dist(Ω′, ∂Ω),   (16.12)

then ∂_{x_s}u ∈ L^p(Ω) and

|∂_{x_s}u|_{p,Ω} ≤ C.   (16.13)
Proof. (i) Suppose first that u is smooth. Then we have

τ_{h,s}u(x) = ( u(x + he_s) − u(x) ) / h = (1/h) ∫_0^h (d/dt) u(x + te_s) dt = −∫_{[0,h]} (d/dt) u(x + te_s) dt,

where −∫_A denotes the average on A. Thus we derive for p < +∞

|τ_{h,s}u|_{p,Ω′}^p = ∫_{Ω′} | −∫_{[0,h]} (d/dt) u(x + te_s) dt |^p dx
≤ ∫_{Ω′} −∫_{[0,h]} |∂_{x_s}u(x + te_s)|^p dt dx   (by Hölder's inequality)
≤ −∫_{[0,h]} ∫_{Ω′} |∂_{x_s}u(x + te_s)|^p dx dt   (by Fubini's theorem)
≤ −∫_{[0,h]} ∫_Ω |∂_{x_s}u(x)|^p dx dt
= |∂_{x_s}u|_{p,Ω}^p.

If u is not smooth, one replaces it by u_ε = ρ_ε ∗ u for ε small enough and passes to the limit in ε. To have the result for p = +∞ one can pass to the limit p → +∞, since one has

lim_{p→+∞} |u|_{p,Ω̃} = |u|_{∞,Ω̃}  ∀ u ∈ L^∞(Ω̃)

(if Ω̃ has finite measure; see Exercise 2 of the previous chapter).

(ii) By assumption, τ_{h,s}u is bounded in L^p(Ω′). So we can extract a "sequence" such that τ_{h,s}u ⇀ v in L^p(Ω′) when h → 0. We have already noticed that

τ_{h,s}u → ∂_{x_s}u  in D′(Ω′).

It follows that ∂_{x_s}u = v ∈ L^p(Ω′). Thus u ∈ W^{1,p}(Ω′). Moreover, passing to the lim inf in |τ_{h,s}u|_{p,Ω′} ≤ C, we obtain by the weak lower semi-continuity of the norm

|∂_{x_s}u|_{p,Ω′} ≤ C.

This completes the proof of the theorem, since this holds for any Ω′.
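As an added numerical illustration (not part of the book), the Python sketch below checks two discrete counterparts of the material above: that τ_h applied to sin approaches cos as h → 0 (Remark 16.1), and that for a grid function vanishing near the ends, the L^p norm of the quotient with step h = m·dx never exceeds that of the unit forward differences — a discrete analogue of (16.11), which follows from the convexity of t ↦ |t|^p since the quotient is a mean of m forward differences. All parameters are ad hoc.

```python
import math
import random

def max_quotient_error(h, n=200):
    # sup over sample points of |tau_h(sin) - cos|; decays like h/2
    xs = [2.0 * math.pi * i / n for i in range(n)]
    return max(abs((math.sin(x + h) - math.sin(x)) / h - math.cos(x)) for x in xs)

def disc_lp(vals, dx, p):
    return (sum(abs(x) ** p for x in vals) * dx) ** (1.0 / p)

def quotient_vs_derivative(p=3.0, n=40, m=4, dx=0.05, seed=2):
    # random u vanishing near the grid ends ("compact support")
    rng = random.Random(seed)
    size = n + 2 * m
    u = [0.0] * size
    for i in range(m, m + n):
        u[i] = rng.uniform(-1.0, 1.0)
    d = [(u[i + 1] - u[i]) / dx for i in range(size - 1)]          # "derivative"
    t = [(u[i + m] - u[i]) / (m * dx) for i in range(size - m)]    # quotient, h = m*dx
    return disc_lp(t, dx, p), disc_lp(d, dx, p)
```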
16.2 The translation method

This technique uses only the structure of ellipticity – i.e., the coerciveness and the continuity – so there is no additional difficulty in presenting it for systems. We thus consider a function u which satisfies

u ∈ H¹(Ω),
∫_Ω A(x)∇u · ∇v dx = ∫_Ω f · v dx  ∀ v ∈ H₀¹(Ω),   (16.14)

where – to simplify – we will assume

f ∈ L²(Ω).   (16.15)

(See Chapter 13 for the notation. Recall that u has m entries.) We assume that the operator A has its coefficients in L^∞(Ω) in such a way that, for some constant Λ,

|A(x)M| ≤ Λ|M|   (16.16)

(see (13.16)). We will consider the following coerciveness assumption:

∃ λ₁, λ₂ > 0 such that
∫_Ω A(x)∇u · ∇u dx ≥ λ₁|∇u|²_{2,Ω} − λ₂|u|²_{2,Ω}  ∀ u ∈ H₀¹(Ω).   (16.17)

Remark 16.2. Note that in (16.14) we did not specify any boundary condition – i.e., u could be in any subspace of H¹(Ω). The assumption (16.17), called the Gårding inequality, holds of course when A satisfies the Legendre condition, or when A is an operator with constant coefficients satisfying the Legendre–Hadamard condition. Having stated our coerciveness assumption under the form (16.17), we do not need any assumption on Ω.

We have then

Theorem 16.3. Suppose that u satisfies (16.14). If A ∈ C^{0,1}(Ω) – i.e., the coefficients A^{αβ}_{ij} are uniformly Lipschitz continuous in Ω – and (16.15)–(16.17) hold, then for every Ω′ ⊂⊂ Ω, u ∈ H²(Ω′) = (H²(Ω′))^m, and we have for some constant C independent of u

|D²u|_{2,Ω′} ≤ C ( |f|_{2,Ω} + |∇u|_{2,Ω} )   (16.18)

(D²u denotes the vector of all second derivatives of the components of u).

Proof. Let us choose η ∈ D(Ω) such that

0 ≤ η ≤ 1,  η = 1 on Ω′.   (16.19)
Then the function

v = −η τ_{−h,s}(η τ_{h,s}u) ∈ H₀¹(Ω)   (16.20)

for h small enough; see Proposition 16.1 (ii). Of course τ_{h,s}u is the vector with components τ_{h,s}u^i. We will drop the index s in what follows. Using (16.14) and with the summation convention we obtain

−∫_Ω A^{αβ}_{ij} ∂_{x_β}u^j { η ∂_{x_α}( τ_{−h}(η τ_h u^i) ) } dx
= ∫_Ω A^{αβ}_{ij} ∂_{x_β}u^j (∂_{x_α}η) τ_{−h}(η τ_h u^i) dx − ∫_Ω f · η τ_{−h}(η τ_h u) dx.   (16.21)

Let us denote by I the left-hand side of the equality above. Using Proposition 16.1 we derive

I = ∫_Ω τ_h( A^{αβ}_{ij} ∂_{x_β}u^j η ) ∂_{x_α}(η τ_h u^i) dx
= ∫_Ω { A^{αβ}_{ij} η τ_h(∂_{x_β}u^j) + ∂_{x_β}u^j(x + he_s) τ_h(A^{αβ}_{ij} η) } ∂_{x_α}(η τ_h u^i) dx
= ∫_Ω { A^{αβ}_{ij} η ∂_{x_β}(τ_h u^j) + ∂_{x_β}u^j(x + he_s) τ_h(A^{αβ}_{ij} η) } ∂_{x_α}(η τ_h u^i) dx
= ∫_Ω { A^{αβ}_{ij} ∂_{x_β}(η τ_h u^j) − A^{αβ}_{ij} ∂_{x_β}η τ_h u^j + ∂_{x_β}u^j(x + he_s) τ_h(A^{αβ}_{ij} η) } ∂_{x_α}(η τ_h u^i) dx.

Combining this computation with (16.21) we get

∫_Ω A^{αβ}_{ij} ∂_{x_β}(η τ_h u^j) ∂_{x_α}(η τ_h u^i) dx
= ∫_Ω A^{αβ}_{ij} ∂_{x_β}η τ_h u^j ∂_{x_α}(η τ_h u^i) dx
− ∫_Ω ∂_{x_β}u^j(x + he_s) τ_h(A^{αβ}_{ij} η) ∂_{x_α}(η τ_h u^i) dx
+ ∫_Ω A^{αβ}_{ij} ∂_{x_β}u^j (∂_{x_α}η) τ_{−h}(η τ_h u^i) dx
− ∫_Ω f · η τ_{−h}(η τ_h u) dx.   (16.22)

Since we assumed the coefficients A^{αβ}_{ij} uniformly Lipschitz continuous, there exists a constant L such that

|τ_h(A^{αβ}_{ij} η)| ≤ L  ∀ i, j, α, β.
Then from (16.22) we derive

∫_Ω A∇(η τ_h u) · ∇(η τ_h u) dx ≤ Λ ∫_Ω |∂_{x_β}η (τ_h u^j)| |∇(η τ_h u)| dx
+ L C ∫_Ω |∇u(x + he_s)| |∇(η τ_h u)| dx
+ Λ C ∫_Ω |∇u| |τ_{−h}(η τ_h u)| dx
+ |f|_{2,Ω} |τ_{−h}(η τ_h u)|_{2,Ω}

(C is a constant depending on η; |·| denotes the norm of matrices or vectors). Using now Theorem 16.2 (note that the first integral in the right-hand side is an integral on the support of η) we get

∫_Ω A∇(η τ_h u) · ∇(η τ_h u) dx
≤ |∇(η τ_h u)|_{2,Ω} { Λ C |∇u|_{2,Ω} + L C |∇u|_{2,Ω} + Λ C |∇u|_{2,Ω} + |f|_{2,Ω} }.

We used above the Cauchy–Schwarz inequality. Thus by (16.17) we obtain for some new constant C

λ₁ |∇(η τ_h u)|²_{2,Ω} − λ₂ |η τ_h u|²_{2,Ω} ≤ C |∇(η τ_h u)|_{2,Ω} { |∇u|_{2,Ω} + |f|_{2,Ω} }.

Using the Young inequality

ab ≤ ε a² + b² / (4ε)

we obtain

λ₁ |∇(η τ_h u)|²_{2,Ω} ≤ λ₂ |η τ_h u|²_{2,Ω} + ε |∇(η τ_h u)|²_{2,Ω} + (C² / 4ε) { |∇u|_{2,Ω} + |f|_{2,Ω} }².

Choosing ε = λ₁/2 and using (16.11) for the term in λ₂, we obtain for some new constant C = C(λ₁, λ₂, Λ, L, η)

|∇(η τ_h u)|²_{2,Ω} ≤ C² { |∇u|_{2,Ω} + |f|_{2,Ω} }².

In particular, since η = 1 on Ω′, we get for any Ω′ ⊂⊂ Ω

|∇(τ_{h,s}u)|²_{2,Ω′} ≤ C² { |∇u|_{2,Ω} + |f|_{2,Ω} }²   (16.23)

where C is independent of Ω′. Using Theorem 16.2 (ii) we obtain

|∇(∂_{x_s}u)|²_{2,Ω′} ≤ C² { |∇u|_{2,Ω} + |f|_{2,Ω} }².

Since this inequality holds for any s, this provides an estimate for all second derivatives of u. This completes the proof of the theorem.
Remark 16.3. As a consequence of the theorem above we have achieved part of the programme mentioned in the introduction of this chapter: namely, if u is a weak solution to

u ∈ H¹(Ω),  −Δu = f in Ω,   (16.24)

then f ∈ L²(Ω) implies that u ∈ H²(Ω′) for any subdomain Ω′ ⊂⊂ Ω, and we have for some constant C

‖u‖_{2,2,Ω′} ≤ C ( |f|_{2,Ω} + |u|_{2,Ω} + |∇u|_{2,Ω} )   (16.25)

(we denote by ‖u‖_{2,2,Ω′} the norm in H²(Ω′) defined in (15.9)). Indeed, Theorem 16.3 clearly applies here for m = 1 and the bilinear form

(u, v) ↦ ∫_Ω ∇u · ∇v dx.

This allows us to bound the second derivatives of u in L²(Ω′). The rest of the norm ‖u‖_{2,2,Ω′} is included in the right-hand side of (16.25). We would like to iterate this process, but first we would like to analyze the connection between higher Sobolev spaces and regularity.
16.3 Regularity of functions in Sobolev spaces

Definition 16.1. We say that a function u is Hölder continuous in Ω with order α ∈ (0, 1) if we have for some constant C

|u(x) − u(y)| ≤ C|x − y|^α  ∀ x, y ∈ Ω.   (16.26)

We will denote by C^α(Ω) the set of the (uniformly) Hölder continuous functions in Ω. Then we have

Theorem 16.4. Let Ω be an open set in ℝⁿ. We have for any p > 1

W_0^{1,p}(Ω) ↪ L^{p*}(Ω)  ∀ p < n,   (16.27)
W_0^{1,p}(Ω) ↪ C^α(Ω)  ∀ p > n.   (16.28)

p* is the Sobolev exponent given by 1/p* = 1/p − 1/n; ↪ means an algebraic inclusion together with the continuity of the canonical embedding – see Exercise 2 below for the metric of C^α – and α = 1 − n/p.

Proof. (16.27) follows directly from (12.30). (16.28) will be a consequence of the following estimate: namely, there exists a constant C = C(p, n) such that

|v(x) − v(y)| ≤ C |∇v|_{p,ℝⁿ} |x − y|^α  ∀ x, y ∈ ℝⁿ, ∀ v ∈ C_c^∞(ℝⁿ).   (16.29)
For x, y ∈ ℝⁿ, set r = |x − y| and denote by D the domain D = B_r(x) ∩ B_r(y). One has

|v(x) − v(y)| ≤ |v(x) − v(z)| + |v(z) − v(y)|  ∀ z ∈ D.

By integration on D in z it follows that

|D| |v(x) − v(y)| ≤ ∫_D |v(x) − v(z)| dz + ∫_D |v(z) − v(y)| dz.   (16.30)

(Recall that |D| is the measure of D.) Let us evaluate the first integral above – the second one can be treated the same way. From

v(z) − v(x) = ∫_0^1 (d/dt) v(x + t(z − x)) dt = ∫_0^1 ∇v(x + t(z − x)) · (z − x) dt

we derive

|v(x) − v(z)| ≤ ∫_0^1 |∇v(x + t(z − x))| |z − x| dt  ∀ z ∈ D.

It follows then that we have

∫_D |v(x) − v(z)| dz ≤ r ∫_D ∫_0^1 |∇v(x + t(z − x))| dt dz = r ∫_0^1 ∫_D |∇v(x + t(z − x))| dz dt.

Setting ξ = x + t(z − x), dξ = tⁿ dz, we get

∫_D |v(x) − v(z)| dz ≤ r ∫_0^1 ∫_{B_{tr}(x)} |∇v(ξ)| t^{−n} dξ dt.

Using Hölder's inequality, it comes

∫_D |v(x) − v(z)| dz ≤ r ∫_0^1 ( ∫_{B_{tr}(x)} |∇v(ξ)|^p dξ )^{1/p} |B_{tr}(x)|^{1−1/p} t^{−n} dt
≤ r |∇v|_{p,ℝⁿ} ∫_0^1 { (tr)ⁿ ω_n }^{1−1/p} t^{−n} dt
= r^{n+1−n/p} |∇v|_{p,ℝⁿ} ω_n^{1−1/p} ∫_0^1 t^{−n/p} dt
= (1 / (1 − n/p)) ω_n^{1−1/p} |∇v|_{p,ℝⁿ} r^{n+1−n/p},
where ω_n is the measure of the unit ball in ℝⁿ. Since the second integral of (16.30) can be estimated the same way, we get

|D| |v(x) − v(y)| ≤ (2 / (1 − n/p)) ω_n^{1−1/p} |∇v|_{p,ℝⁿ} |x − y|^{n+1−n/p}.

It is clear that

|D| = c(n)|x − y|ⁿ

for some constant c(n). It follows that for some constant C = C(n, p) we have

|v(x) − v(y)| ≤ C |∇v|_{p,ℝⁿ} |x − y|^{1−n/p}.   (16.31)

This completes the proof of (16.29). As a consequence of (16.29) we also have

|v(x)| ≤ |v(y)| + C |∇v|_{p,ℝⁿ} |x − y|^{1−n/p}

and thus, integrating this in y on B₁(x), we obtain

|v(x)| ≤ −∫_{B₁(x)} |v(y)| dy + C |∇v|_{p,ℝⁿ} −∫_{B₁(x)} |x − y|^{1−n/p} dy
≤ −∫_{B₁(x)} |v(y)| dy + C |∇v|_{p,ℝⁿ},

where −∫ denotes the average. Applying Hölder's inequality in the first integral, we deduce that

|v(x)| ≤ C ( |v|_{p,ℝⁿ} + |∇v|_{p,ℝⁿ} )   (16.32)

for some constant C = C(p, n). If now v_n ∈ D(Ω) is such that

v_n → v  in W_0^{1,p}(Ω),

then by (16.32) v_n is a Cauchy sequence for the uniform convergence and thus converges toward a continuous function which also satisfies (16.29) and is a continuous representative for v.

Remark 16.4. Note that even for Ω not necessarily bounded we have obtained

|v|_{∞,Ω} ≤ C(p, n) ‖v‖_{1,p}  ∀ v ∈ W_0^{1,p}(Ω).   (16.33)

As a consequence we have

Theorem 16.5. Let Ω be an open set in ℝⁿ.

If kp < n,  W_0^{k,p}(Ω) ↪ L^{np/(n−kp)}(Ω).   (16.34)
If kp > n and n/p ∉ ℕ,  W_0^{k,p}(Ω) ↪ C^{m,α}(Ω)  with  m = k − [n/p] − 1,  α = k − n/p − m.   (16.35)

(Recall that [·] denotes the integer part of a real number.) In (16.35), C^{m,α}(Ω) denotes the space of functions of class C^m in Ω such that the derivatives of order m belong to C^α(Ω).
Proof. One proceeds by iteration. In the case where k < n/p – arguing if necessary first for smooth functions – one has

W_0^{k,p}(Ω) ↪ W_0^{k−1,p*}(Ω) ↪ ⋯ ↪ L^{p^{*k}}(Ω),

where p^{*k} stands for the fact that we have applied the ∗ operation k times, i.e., since

1/p* = 1/p − 1/n,

we have

1/p^{*k} = 1/p − k/n = (n − kp)/(np),

and (16.34) follows.

In the case where k > n/p, n/p ∉ ℕ, u ∈ W_0^{k,p}(Ω), we have by the first part and the preceding theorem that all the derivatives of order m of u belong to C^α(Ω). Then u is differentiable in the usual sense m times and its partial derivatives of order m belong to C^α(Ω) – see also Exercise 3. This completes the proof of the theorem.

Remark 16.5. In the case where n/p ∈ ℕ one has of course

W_0^{k,p}(Ω) ↪ C^{m−1,α}(Ω)   (16.36)

for every α ∈ (0, 1).
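The exponent arithmetic in the proof above, 1/p^{*k} = 1/p − k/n, can be checked with exact rational arithmetic. A small Python sketch using the standard fractions module (an added illustration; function names are ad hoc):

```python
from fractions import Fraction

def star(p, n):
    # one Sobolev step: 1/p* = 1/p - 1/n (requires p < n)
    return 1 / (Fraction(1) / p - Fraction(1, n))

def iterated_star(p, n, k):
    # apply the * operation k times; should give n*p / (n - k*p)
    q = Fraction(p)
    for _ in range(k):
        q = star(q, n)
    return q
```

For instance, with p = 2, n = 10, three iterations give 5/2, 10/3, 5, matching np/(n − kp) = 20/4 = 5.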
16.4 The bootstrap technique

We would like to show now that the solutions we have defined by a weak formulation are in fact smooth – i.e., differentiable in the usual sense – provided our data are themselves smooth. Applying Theorem 16.3 successively, we will gain at each step some additional smoothness. Such reasoning – typical in partial differential equations – is called "bootstrapping". We have

Theorem 16.6. Under the assumptions of Theorem 16.3, and if

A^{αβ}_{ij} ∈ C^{k,1}(Ω),  f ∈ H^k(Ω),   (16.37)

then for every subdomain Ω′ ⊂⊂ Ω we have u ∈ H^{k+2}(Ω′) := (H^{k+2}(Ω′))^m and, for some constant C independent of u,

|D^{k+2}u|_{2,Ω′} ≤ C ( ‖f‖_{k,2} + |∇u|_{2,Ω} ).   (16.38)

(For a function v, D^k v denotes the vector of the k-th derivatives of each component of v.)
Proof. Let u be a solution to (16.14), which we can also write as

∫_Ω A^{αβ}_{ij} ∂_{x_β}u^j ∂_{x_α}v^i dx = ∫_Ω f · v dx  ∀ v ∈ H₀¹(Ω).   (16.39)

If v ∈ D(Ω)^m, we can use, for any s, −∂_{x_s}v as test function in (16.39). We get

∫_Ω A^{αβ}_{ij} ∂_{x_β}u^j ∂_{x_α}(−∂_{x_s}v^i) dx = ∫_Ω f · (−∂_{x_s}v) dx.   (16.40)

We know by Theorem 16.3 that

∇u ∈ H¹(Ω′)  ∀ Ω′ ⊂⊂ Ω,

i.e., each entry of this Jacobian matrix is in H¹(Ω′). If in addition A^{αβ}_{ij} ∈ C^{1,1}(Ω), then

A^{αβ}_{ij} ∂_{x_β}u^j ∈ H¹(Ω′),

and permuting first ∂_{x_α} and ∂_{x_s} we get from (16.40)

∫_Ω ∂_{x_s}( A^{αβ}_{ij} ∂_{x_β}u^j ) ∂_{x_α}v^i dx = ∫_Ω ∂_{x_s}f · v dx   (16.41)
⟺ ∫_Ω A^{αβ}_{ij} ∂_{x_β}(∂_{x_s}u^j) ∂_{x_α}v^i dx = ∫_Ω ∂_{x_s}f · v dx − ∫_Ω ∂_{x_s}(A^{αβ}_{ij}) ∂_{x_β}u^j ∂_{x_α}v^i dx
⟺ ∫_Ω A(x)∇∂_{x_s}u · ∇v dx = ∫_Ω ∂_{x_s}f · v dx + ∫_Ω ∂_{x_α}( ∂_{x_s}(A^{αβ}_{ij}) ∂_{x_β}u^j ) v^i dx.

Since for every α and for Ω″ ⊂⊂ Ω′ ⊂⊂ Ω

∂_{x_α}( ∂_{x_s}(A^{αβ}_{ij}) ∂_{x_β}u^j ) ∈ L²(Ω′),

we can apply Theorem 16.3 to ∂_{x_s}u ∈ H¹(Ω′) to get that all derivatives of order 3 of u are bounded in L²(Ω″) by a constant times the L²(Ω′)-norm of the first derivatives of f and the L²(Ω′)-norm of the second derivatives of u (note that (16.41) is satisfied for all v ∈ H₀¹(Ω′)). Since the second derivatives of u in Ω′ can be bounded by an estimate like (16.18), this provides (16.38) in the case k = 1 – repeating of course the estimate for every s. Continuing this process, or applying what we just did to ∂_{x_s}u, we obtain (16.38). This completes the proof of the theorem.

As a consequence we have

Theorem 16.7 (Regularity of the solution of elliptic systems). Under the assumptions of Theorem 16.3, if u is solution to (16.14) and if

A^{αβ}_{ij} ∈ C^∞(Ω),  f^i ∈ C^∞(Ω)  ∀ i, j, α, β,

then u ∈ ( C^∞(Ω) )^m.
Proof. By the previous theorem we have

u ∈ H^k(Ω′)  ∀ k, ∀ Ω′ ⊂⊂ Ω.

Take η a function of D(Ω) which is equal to 1 on Ω′. Then we have

η u ∈ H₀^k(Ω)  ∀ k.

Since k − n/2 can be made larger than any integer q, we get by Theorem 16.5

η u ∈ ( C^q(Ω) )^m  ∀ q;

in particular, since η = 1 on Ω′, we have

u ∈ ( C^∞(Ω′) )^m,

and the result follows since Ω′ is arbitrary.

Remark 16.6. If u is the weak solution of the Dirichlet problem

u ∈ H₀¹(Ω),  −Δu = f in Ω,

then f ∈ C^∞(Ω) implies that u ∈ C^∞(Ω) and the equation is satisfied in the usual sense in Ω. This is indeed a trivial consequence of the previous theorem in the case where m = 1, i.e., u has only one component.

Remark 16.7. We did not include – for the sake of simplicity – lower order terms in (16.14). This can be done, and we refer the reader to the exercises.
Exercises

1. Reproduce the proof of Theorem 16.3 in the simple case where u is a weak solution to −Δu = f in Ω.

2. Let Ω be a bounded open subset of ℝⁿ. Show that C^α(Ω) is a Banach space when equipped with the norm

‖u‖_α = |u|_{∞,Ω} + [u]_α  where  [u]_α = Sup_{x≠y} |u(x) − u(y)| / |x − y|^α.

3. If f is a continuous function on ℝ, show that

∫_0^x f(s) ds

is differentiable in the distributional sense and has for derivative f. Show that a distribution which admits for derivative (in the distributional sense) a continuous function is differentiable in the usual sense. Extend the preceding question to ℝⁿ.

4. State and prove Theorem 16.3 when some lower order terms are included.
Chapter 17
The p-Laplace Equation

In this chapter we would like to consider in more detail the Dirichlet problem for the p-Laplace operator, namely, if Ω is an open subset of ℝⁿ and 1 < p < +∞, the problem

−∂_{x_i}( |∇u|^{p−2} ∂_{x_i}u ) = f  in Ω,
u = 0  on ∂Ω.   (17.1)

As remarked before, the problem presents two interesting features: it requires working in L^p-spaces and one cannot rely on Lax–Milgram to solve it; thus it offers two directions of extension for our techniques.
17.1 A minimization technique

One way to show existence of a solution to (17.1) is to use a minimization technique. In the case where p = 2, we have seen that the solution to (17.1) is also the unique minimizer of the functional

J₂(v) = (1/2) ∫_Ω |∇v|² dx − ∫_Ω f v dx.   (17.2)

It is in particular a critical point of this functional. Thus a way to solve some partial differential equations is via the finding of critical points. For f ∈ L^{p′}(Ω) let us set

J_p(v) = (1/p) ∫_Ω |∇v|^p dx − ∫_Ω f v dx.   (17.3)

The functional J_p is well defined for v ∈ W_0^{1,p}(Ω). Moreover we have

Theorem 17.1. Let Ω be an open subset of ℝⁿ bounded in one direction. There exists u minimizing the functional J_p on W_0^{1,p}(Ω). Moreover u is a solution to (17.1).
Proof. We use here the so-called direct method of the calculus of variations (see [43], [53]). It consists in obtaining a minimizer as the limit of a minimizing sequence. Let us single out the different steps of the method. First, when the infimum is 0, then 0 is a minimizer, so we will assume the infimum negative.

• 1. The functional J_p is bounded from below. Indeed, by the Hölder inequality we have

∫_Ω f v dx ≤ |f|_{p′,Ω} |v|_{p,Ω} ≤ C |f|_{p′,Ω} |∇v|_{p,Ω}

(by the Poincaré inequality of Theorem 15.3). It follows then that

J_p(v) ≥ (1/p) |∇v|^p_{p,Ω} − C |f|_{p′,Ω} |∇v|_{p,Ω}
≥ (1/p) |∇v|^p_{p,Ω} − (1/p) |∇v|^p_{p,Ω} − (1/p′) { C |f|_{p′,Ω} }^{p′}   (17.4)
= −(1/p′) { C |f|_{p′,Ω} }^{p′}

(by the Young inequality ab ≤ a^p/p + b^{p′}/p′).
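The Young inequality ab ≤ a^p/p + b^{p′}/p′ used in this step can be probed numerically. The Python sketch below (an added illustration, not from the book; names are ad hoc) samples random nonnegative a, b and exponents p > 1 and verifies that the gap is never negative.

```python
import random

def young_gap(a, b, p):
    # a*b <= a^p/p + b^q/q with q = p/(p-1) the conjugate exponent of p
    q = p / (p - 1.0)
    return a ** p / p + b ** q / q - a * b

rng = random.Random(4)
gaps = [young_gap(rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0), rng.uniform(1.1, 5.0))
        for _ in range(5000)]
```

Equality occurs exactly when a^p = b^{p′}, so the smallest gaps observed are near zero but not below it.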
• 2. Minimizing sequences are bounded. Let u_n be a minimizing sequence – i.e., a sequence such that

J_p(u_n) → Inf_{W_0^{1,p}(Ω)} J_p(v).   (17.5)

One has then for n large enough

(1/p) |∇u_n|^p_{p,Ω} − ∫_Ω f u_n dx ≤ 0
⟹ |∇u_n|^p_{p,Ω} ≤ p ∫_Ω f u_n dx ≤ p |f|_{p′,Ω} |u_n|_{p,Ω} ≤ p |f|_{p′,Ω} C |∇u_n|_{p,Ω}.   (17.6)

From this we derive

|∇u_n|_{p,Ω} ≤ { p |f|_{p′,Ω} C }^{1/(p−1)}   (17.7)

and thus u_n is bounded in W_0^{1,p}(Ω).

• 3. u_n converges toward a minimizer. Since u_n is bounded and W_0^{1,p}(Ω) is reflexive – up to a subsequence – there exists a u ∈ W_0^{1,p}(Ω) such that

u_n ⇀ u  in W_0^{1,p}(Ω).   (17.8)
It is clear that J_p is continuous, since v ↦ ∫_Ω f v dx is continuous and v ↦ |∇v|_{p,Ω} also. Moreover, since p ≥ 1, J_p(v) is convex and thus lower semi-continuous for the weak convergence. It follows that

J_p(u) ≤ lim inf_n J_p(u_n) = Inf_{W_0^{1,p}(Ω)} J_p(v)   (17.9)

and thus u is a minimizer of J_p on W_0^{1,p}(Ω).

• 4. u is a critical point for J_p, or equivalently a solution to (17.1). Since u is a minimizer of J_p, for any v ∈ W_0^{1,p}(Ω) the function

λ ↦ (1/p) ∫_Ω |∇(u + λv)|^p dx − ∫_Ω f (u + λv) dx = J_p(u + λv)

possesses a minimum at 0. It follows that

(d/dλ) J_p(u + λv) |_{λ=0} = 0  ∀ v ∈ W_0^{1,p}(Ω).   (17.10)

By a simple computation one has

(d/dλ) |∇(u + λv)|^p = (d/dλ) { |∇(u + λv)|² }^{p/2}
= (p/2) { |∇(u + λv)|² }^{p/2 − 1} { 2∇u · ∇v + 2λ|∇v|² }
= p |∇(u + λv)|^{p−2} { ∇u · ∇v + λ|∇v|² }.

By the usual rule of derivation under the integral sign (note that the last expression above is integrable), (17.10) implies then that

∫_Ω |∇u|^{p−2} ∇u · ∇v dx = ∫_Ω f v dx  ∀ v ∈ W_0^{1,p}(Ω),   (17.11)
Remark 17.1. One could assume only f ∈ W0−1,p (Ω) = (W01,p (Ω))∗ and get the same existence result. Remark 17.2. Since the function x → |x|p is strictly convex, so is the functional Jp and thus it possesses a unique minimizer. However the uniqueness of the minimizer does not imply necessarily the uniqueness of a solution to (17.1) – i.e., the uniqueness of a critical point for Jp . In our case the strict convexity also implies uniqueness of a solution to (17.1). We will see it below thanks to useful elementary inequalities.
First we have

Proposition 17.2. Let 1 < p < +∞. Then there exists a constant C_p such that

| |ξ|^{p−2}ξ − |η|^{p−2}η | ≤ C_p |ξ − η| { |ξ| + |η| }^{p−2}  ∀ ξ, η ∈ ℝⁿ.   (17.12)

Proof. It is enough to show that for any ξ ≠ η

Φ(ξ, η) = | |ξ|^{p−2}ξ − |η|^{p−2}η | / ( |ξ − η| { |ξ| + |η| }^{p−2} ) ≤ C_p.

For ξ = 0 one has Φ(0, η) = 1, so with no loss of generality we can assume ξ ≠ 0. Then one notices that Φ is invariant under orthogonal transformations – i.e., Φ(Qξ, Qη) = Φ(ξ, η) for every orthogonal transformation Q. Thus, performing such a transformation, we can suppose ξ = |ξ|e₁. Using the homogeneity of Φ, i.e., the fact that

Φ(|ξ|e₁, η) = Φ( e₁, η/|ξ| ),

we can assume ξ = e₁, i.e., we are reduced to showing that

Φ(e₁, η) = | e₁ − |η|^{p−2}η | / ( |e₁ − η| {1 + |η|}^{p−2} ) ≤ C_p.

For η close to e₁ one has – since X ↦ X^{p−2} is differentiable near 1 –

| e₁ − |η|^{p−2}η | = | e₁ − η + (1 − |η|^{p−2})η | ≤ |e₁ − η| + C |1 − |η|| |η| ≤ |e₁ − η| { 1 + C|η| },

and thus

Φ(e₁, η) ≤ (1 + C|η|) / {1 + |η|}^{p−2} ≤ C(ε)

for |η − e₁| ≤ ε. Since clearly Φ(e₁, η) is continuous on ℝⁿ \ {e₁}, by a compactness argument it is enough to show that Φ(e₁, η) is bounded for large |η|. This follows from the inequality

Φ(e₁, η) ≤ (1 + |η|^{p−1}) / ( (|η| − 1) {1 + |η|}^{p−2} ).

This completes the proof of the proposition.
The next inequalities could be called coerciveness inequalities on account of their left-hand sides. Let us set

N_p(ξ, η) = { |ξ| + |η| }^{p−2} |ξ − η|²,   (17.13)

since this quantity will play an important rôle in what follows. Note that when p < 2 it can be extended by continuity at (0, 0). We have

Proposition 17.3. Let 1 < p < +∞. There exist two positive constants c_p, C_p such that for every ξ, η ∈ ℝⁿ

c_p N_p(ξ, η) ≤ ( |ξ|^{p−2}ξ − |η|^{p−2}η ) · (ξ − η) ≤ C_p N_p(ξ, η);   (17.14)

a dot denotes here the Euclidean scalar product in ℝⁿ.

Proof. The bound from above follows directly from the previous proposition. Next, by a simple computation we have

( |ξ|^{p−2}ξ − |η|^{p−2}η ) · (ξ − η) = |ξ|^p + |η|^p − { |ξ|^{p−2} + |η|^{p−2} } (ξ · η).   (17.15)

Noticing that

(ξ · η) = (1/2) { |ξ|² + |η|² − |ξ − η|² },

we obtain

( |ξ|^{p−2}ξ − |η|^{p−2}η ) · (ξ − η)
= (1/2) { |ξ|^{p−2} + |η|^{p−2} } |ξ − η|² + (1/2)|ξ|^p + (1/2)|η|^p − (1/2)|η|^{p−2}|ξ|² − (1/2)|ξ|^{p−2}|η|²
= (1/2) { |ξ|^{p−2} + |η|^{p−2} } |ξ − η|² + (1/2) ( |ξ|^{p−2} − |η|^{p−2} ) ( |ξ|² − |η|² ).

If p ≥ 2, the function X ↦ X^{p−2} is nondecreasing, and it comes

( |ξ|^{p−2}ξ − |η|^{p−2}η ) · (ξ − η) ≥ (1/2) { |ξ|^{p−2} + |η|^{p−2} } |ξ − η|².

By exchanging the rôles of ξ and η one can always assume |ξ| ≥ |η|. Then we have

|ξ| ≥ ( |ξ| + |η| ) / 2.

Using again the fact that X ↦ X^{p−2} is nondecreasing, we obtain

( |ξ|^{p−2}ξ − |η|^{p−2}η ) · (ξ − η) ≥ (1/2) ( (|ξ| + |η|)/2 )^{p−2} |ξ − η|² = (1/2^{p−1}) N_p(ξ, η).

This is the first inequality of (17.14).
To complete the proof in the case p < 2, we will need the following lemma.

Lemma 17.4. Let 1 < p < +∞. There exist two constants 0 < c_p ≤ 1 ≤ K_p such that for every a, b with 0 < b ≤ a

c_p {a + b}^{p−2} (a − b)² ≤ (a^{p−1} − b^{p−1})(a − b) ≤ K_p {a + b}^{p−2} (a − b)².   (17.16)

Proof of the lemma. Taking b = 0, we see that c_p ≤ 1 ≤ K_p. Dividing both sides of (17.16) by a^p, the inequalities reduce to

c_p {1 + z}^{p−2} (1 − z) ≤ 1 − z^{p−1} ≤ K_p {1 + z}^{p−2} (1 − z)  ∀ z ∈ (0, 1].

Considering the function

z ↦ (1 − z^{p−1}) / ( (1 + z)^{p−2} (1 − z) ),

since

lim_{z→1} (1 − z^{p−1}) / ( (1 − z)(1 + z)^{p−2} ) = (p − 1) / 2^{p−2},

one sees that this continuous function is bounded from above and from below by two positive constants. This completes the proof of the lemma.

End of the proof of the proposition. Going back to (17.15), we have

( |ξ|^{p−2}ξ − |η|^{p−2}η ) · (ξ − η) = ( |ξ|^{p−1} − |η|^{p−1} )( |ξ| − |η| ) + { |ξ|^{p−2} + |η|^{p−2} } ( |ξ||η| − (ξ · η) ).

If p ≤ 2, we have, since the function X ↦ X^{p−2} is nonincreasing,

|ξ|^{p−2} + |η|^{p−2} ≥ (|ξ| + |η|)^{p−2} + (|ξ| + |η|)^{p−2} = 2 (|ξ| + |η|)^{p−2}.

Thus it comes, using Lemma 17.4,

( |ξ|^{p−2}ξ − |η|^{p−2}η ) · (ξ − η) ≥ { c_p (|ξ| − |η|)² + 2 c_p ( |ξ||η| − (ξ · η) ) } { |ξ| + |η| }^{p−2}
= c_p { |ξ| + |η| }^{p−2} |ξ − η|².

This completes the proof of the proposition.
Remark 17.3. If $p \ge 2$ we derive from (17.16), and since $X\mapsto X^{p-2}$ is nondecreasing, that
$$(|\xi|^{p-2}\xi - |\eta|^{p-2}\eta)\cdot(\xi-\eta) \le \{K_p(|\xi|-|\eta|)^2 + 2K_p(|\xi||\eta| - (\xi\cdot\eta))\}\{|\xi|+|\eta|\}^{p-2} = K_p\{|\xi|+|\eta|\}^{p-2}|\xi-\eta|^2.$$
This gives us another proof of the right-hand side inequality of (17.14), with a precise constant.
As a trivial consequence of the formula above we have:

Theorem 17.5. Suppose that $\Omega$ is an open subset of $\mathbb{R}^n$ bounded in one direction. Then problem (17.1) admits a unique solution.

Proof. If $u_1, u_2$ are two weak solutions of (17.1) then we have
$$\int_\Omega \big(|\nabla u_1|^{p-2}\nabla u_1 - |\nabla u_2|^{p-2}\nabla u_2\big)\cdot\nabla v\,dx = 0 \quad \forall v\in W_0^{1,p}(\Omega).$$
Taking $v = u_1 - u_2$, the result follows from Proposition 17.3.
17.2 A weak maximum principle and its consequences

As a generalization of the uniqueness result above we have the following:

Theorem 17.6. Under the assumptions of Theorem 17.1, let $u_i$, $i = 1, 2$, be the weak solutions to
$$\int_\Omega |\nabla u_i|^{p-2}\nabla u_i\cdot\nabla v\,dx = \int_\Omega f_i v\,dx \quad \forall v\in W_0^{1,p}(\Omega). \tag{17.17}$$
If we have
$$f_1 \le f_2 \ \text{in }\Omega \quad\text{and}\quad u_1 \le u_2 \ \text{on }\partial\Omega \tag{17.18}$$
(i.e., $(u_1-u_2)^+ \in W_0^{1,p}(\Omega)$), then
$$u_1 \le u_2 \quad\text{in }\Omega. \tag{17.19}$$
Proof. By subtraction we have
$$\int_\Omega \big(|\nabla u_1|^{p-2}\nabla u_1 - |\nabla u_2|^{p-2}\nabla u_2\big)\cdot\nabla v\,dx = \int_\Omega (f_1-f_2)v\,dx$$
for every $v\in W_0^{1,p}(\Omega)$. Taking
$$v = (u_1-u_2)^+ \in W_0^{1,p}(\Omega)$$
we derive that
$$\int_\Omega \big(|\nabla u_1|^{p-2}\nabla u_1 - |\nabla u_2|^{p-2}\nabla u_2\big)\cdot\nabla(u_1-u_2)^+\,dx \le 0.$$
From Proposition 17.3 it follows that for some constant $c_p > 0$
$$c_p\int_{\{u_1>u_2\}} \{|\nabla u_1|^{p-2} + |\nabla u_2|^{p-2}\}\,|\nabla(u_1-u_2)|^2\,dx \le 0, \tag{17.20}$$
where $\{u_1>u_2\}$ denotes the set defined by
$$\{u_1>u_2\} = \{\,x\in\Omega \mid u_1(x) > u_2(x)\,\}.$$
It follows that
$$\nabla(u_1-u_2)^+ = 0 \quad\text{in }\Omega$$
and, by the Poincaré inequality, that $(u_1-u_2)^+ = 0$ in $\Omega$. This completes the proof of the theorem.

As a consequence we can show that the solution to (17.1) remains bounded when
$$f \in L^\infty(\Omega)\cap L^{p'}(\Omega). \tag{17.21}$$
We will suppose that for some $a > 0$, $\Omega$ is included in a strip of width $2a$; more precisely,
$$\Omega \subset S_\nu = \{\,x \mid (x-x_0)\cdot\nu \in (-a,a)\,\}, \tag{17.22}$$
where $\nu$ is a unit vector. Then we have:

Theorem 17.7. Under the assumptions (17.21) and (17.22), let $u$ be the unique weak solution to (17.1). We have $u\in L^\infty(\Omega)$ and
$$|u|_{\infty,\Omega} \le \frac{p-1}{p}\,a^{\frac{p}{p-1}}\,|f|_{\infty,\Omega}^{\frac{1}{p-1}}. \tag{17.23}$$
(Note that this generalizes (12.6).)

Proof. Set $\alpha = 1 + \frac{1}{p-1} = \frac{p}{p-1}$ and consider
$$g = a^\alpha - |(x-x_0)\cdot\nu|^\alpha \ge 0 \quad\text{on }\Omega.$$
(Recall that "$\cdot$" denotes the scalar product in $\mathbb{R}^n$.) One has
$$\nabla g = -\nabla\{((x-x_0)\cdot\nu)^2\}^{\frac{\alpha}{2}} = -\frac{\alpha}{2}\{((x-x_0)\cdot\nu)^2\}^{\frac{\alpha}{2}-1}\cdot 2((x-x_0)\cdot\nu)\nu = -\alpha|(x-x_0)\cdot\nu|^{\alpha-2}((x-x_0)\cdot\nu)\nu.$$
It follows that (recall that $|\nu| = 1$)
$$|\nabla g|^{p-2} = \alpha^{p-2}|(x-x_0)\cdot\nu|^{(\alpha-1)(p-2)}.$$
Thus we deduce
$$|\nabla g|^{p-2}\nabla g = -\alpha^{p-1}|(x-x_0)\cdot\nu|^{(\alpha-2)+(\alpha-1)(p-2)}((x-x_0)\cdot\nu)\nu.$$
Now from
$$\alpha - 2 + (\alpha-1)(p-2) = \frac{1}{p-1} - 1 + \frac{p-2}{p-1} = 0$$
it follows that $|\nabla g|^{p-2}\nabla g = -\alpha^{p-1}((x-x_0)\cdot\nu)\nu$ and thus
$$-\nabla\cdot(|\nabla g|^{p-2}\nabla g) = \alpha^{p-1}\nu_i\,\partial_{x_i}((x-x_0)\cdot\nu) = \alpha^{p-1}$$
(we used the summation convention and the notation $\nabla\cdot = \operatorname{div}$). It follows that we have, for instance in the $\mathcal{D}'(\Omega)$ sense (see also (15.2)),
$$-\Delta_p u = f \le |f|_{\infty,\Omega} = -\Delta_p\Big(\frac{|f|_{\infty,\Omega}^{\frac{1}{p-1}}}{\alpha}\,g\Big).$$
By the weak maximum principle we deduce that
$$u \le \frac{|f|_{\infty,\Omega}^{\frac{1}{p-1}}}{\alpha}\,g. \tag{17.24}$$
Since $-u$ is the solution to (17.1) corresponding to $-f$ we derive
$$|u| \le \frac{|f|_{\infty,\Omega}^{\frac{1}{p-1}}}{\alpha}\,g.$$
Since
$$\frac{1}{\alpha} = \frac{p-1}{p} \quad\text{and}\quad 0 \le g \le a^\alpha = a^{\frac{p}{p-1}},$$
the result follows.
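In one space dimension with constant $f$ the barrier of the proof is, up to the factor $|f|^{1/(p-1)}/\alpha$, an exact solution of (17.1), so the bound (17.23) appears to be attained in that setting. The following numerical sketch (the specific values of $p$, $a$, $f$ are arbitrary choices, not from the text) verifies $-(|u'|^{p-2}u')' = f$ by finite differences and compares the maximum of $u$ with (17.23).

```python
import math

p, a, fconst = 3.0, 1.5, 2.0
alpha = p / (p - 1)
scale = fconst**(1 / (p - 1)) / alpha

def u(x):
    # the scaled barrier: scale * (a^alpha - |x|^alpha) on (-a, a)
    return scale * (a**alpha - abs(x)**alpha)

h = 1e-5
def flux(x):
    # |u'|^{p-2} u' at x, via a centred difference
    du = (u(x + h) - u(x - h)) / (2 * h)
    return abs(du)**(p - 2) * du

for x in [-1.2, -0.7, 0.3, 0.9, 1.4]:   # sample points away from 0
    lhs = -(flux(x + h) - flux(x - h)) / (2 * h)
    assert abs(lhs - fconst) < 1e-3       # -(|u'|^{p-2} u')' = f

bound = (p - 1) / p * a**(p / (p - 1)) * fconst**(1 / (p - 1))
assert abs(u(0.0) - bound) < 1e-9         # max of u equals the bound (17.23)
```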
17.3 A generalization of the Lax–Milgram theorem

In Section 17.1 we solved problem (17.1) by generalizing the Dirichlet principle, which consists in minimizing the functional
$$\frac12\int_\Omega |\nabla v|^2\,dx - \int_\Omega f v\,dx$$
on $H_0^1(\Omega)$ in order to obtain the solution of the Dirichlet problem. Considering (17.2) we could as well have tried to extend the theory of existence of a solution to
$$a(u,v) = \langle f, v\rangle$$
to forms $a(u,v)$ which are linear in $v$ but not necessarily in $u$. This is what we would like to do now. (Recall that in this case one can write $a(u,v) = \langle Au, v\rangle$ for some possibly nonlinear operator $A$.) We will do that also with the goal of extending to nonlinear operators our theory of variational inequalities (see Chapter 11).
• 1. The finite-dimensional case. In this paragraph we suppose that $X$ is a finite-dimensional space with dual $X^*$. Then we have:

Theorem 17.8. Let $K$ be a nonempty compact convex subset of $X$ and $A$ a continuous mapping from $K$ into $X^*$. Then for every $f\in X^*$ there exists a solution $u$ to the problem
$$\begin{cases}\langle Au, v-u\rangle \ge \langle f, v-u\rangle \quad \forall v\in K,\\ u\in K.\end{cases} \tag{17.25}$$
($\langle\cdot,\cdot\rangle$ is the pairing between $X^*$ and $X$.)

Proof. We can define a Euclidean structure on $X$, that is to say a scalar product $(\cdot,\cdot)$. Denote by $j$ the linear mapping from $X^*$ into $X$ such that
$$\langle x', x\rangle = (j(x'), x) \quad \forall x\in X.$$
If $P_K$ denotes the projection on $K$ for the Euclidean structure above, then the mapping
$$x \longmapsto P_K(x - j(Ax - f))$$
is continuous from $K$ into itself. Then by the Brouwer fixed point theorem (see Corollary A.6) it has a fixed point $u$, which satisfies
$$\begin{cases}(u, v-u) \ge (u - j(Au-f), v-u) \quad \forall v\in K,\\ u\in K,\end{cases}$$
i.e., $u$ is a solution to (17.25). This completes the proof of the theorem.
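The proof is constructive enough to suggest an algorithm: iterate $x \mapsto P_K(x - \tau(Ax - f))$. The sketch below (the box $K$, the affine monotone map $A$, the step size $\tau$ and the iteration count are illustrative choices, not from the text) computes a solution of (17.25) in $\mathbb{R}^2$ and checks the variational inequality at the vertices of $K$, which suffices since the left-hand side is affine in $v$.

```python
def proj_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto K = [lo, hi]^2."""
    return [min(hi, max(lo, xi)) for xi in x]

def A(u):
    # monotone affine map A(u) = M u + q with M symmetric positive definite
    M = [[2.0, 0.5], [0.5, 1.0]]
    q = [-3.0, 1.0]
    return [sum(M[i][j] * u[j] for j in range(2)) + q[i] for i in range(2)]

f = [0.0, 0.0]
u = [0.5, 0.5]
tau = 0.2                       # small step keeps the iteration contractive
for _ in range(500):
    Au = A(u)
    u = proj_box([u[i] - tau * (Au[i] - f[i]) for i in range(2)])

# check <Au - f, v - u> >= 0 at the vertices of K
Au = A(u)
ok = all(sum((Au[i] - f[i]) * (v[i] - u[i]) for i in range(2)) >= -1e-8
         for v in ([0, 0], [0, 1], [1, 0], [1, 1]))
assert ok
```

Here the unconstrained solution of $Mu = -q$ lies outside the box, so the fixed point lands on the boundary of $K$; the iteration converges because $x \mapsto x - \tau(Mx + q)$ is a contraction for small $\tau$ and projections are nonexpansive.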
Remark 17.4. In the case where $K$ is unbounded, (17.25) does not necessarily have a solution. Indeed, choose for instance $X = X^* = \mathbb{R} = K$ with the pairing $\langle x', x\rangle = x'\cdot x$. For $A : \mathbb{R}\to(0,+\infty)$ the problem
$$\langle Au, v-u\rangle \ge 0 \quad \forall v\in\mathbb{R}$$
is equivalent to $Au = 0$, which does not have a solution. Thus in the case where $K$ is unbounded some additional assumptions are necessary. This is what we would like to consider now.

Definition 17.1. Let $K$ be an unbounded convex subset of $X$ and $A$ a mapping from $K$ into $X^*$. We say that $A$ is coercive on $K$ if there exists $v_0\in K$ such that
$$\frac{\langle Av - Av_0, v - v_0\rangle}{|v - v_0|} \longrightarrow +\infty \quad\text{when } v\in K,\ |v|\to+\infty \tag{17.26}$$
($|\cdot|$ is a norm on $X$).
Then we have:

Theorem 17.9. Let $K$ be a closed convex subset of $X$ and $A : K\to X^*$ a continuous coercive map. For every $f\in X^*$ there exists $u$ solution to
$$\begin{cases}\langle Au, v-u\rangle \ge \langle f, v-u\rangle \quad \forall v\in K,\\ u\in K.\end{cases} \tag{17.27}$$

Proof. Since the map $v\mapsto Av - f$ is continuous and coercive, if we can solve (17.27) for $f = 0$ the general case will follow. We suppose then $f = 0$ and denote by $B_R$ the ball $B_R = \{v\in X \mid |v|\le R\}$. From Theorem 17.8 there exists $u_R$ solution to
$$\begin{cases}\langle Au_R, v-u_R\rangle \ge 0 \quad \forall v\in K\cap B_R,\\ u_R\in K\cap B_R.\end{cases} \tag{17.28}$$
If $v_0\in K$ is a point such that (17.26) holds, let us choose $R > |v_0|$. By (17.28) we have
$$\langle Au_R, v_0 - u_R\rangle \ge 0. \tag{17.29}$$
On the other hand,
$$\langle Au_R, v_0-u_R\rangle = -\langle Au_R - Av_0, u_R-v_0\rangle + \langle Av_0, v_0-u_R\rangle \le -\langle Au_R - Av_0, u_R-v_0\rangle + |Av_0|_*\,|v_0-u_R|$$
$$= |u_R-v_0|\Big(-\frac{\langle Au_R - Av_0, u_R-v_0\rangle}{|u_R-v_0|} + |Av_0|_*\Big),$$
where $|\cdot|_*$ denotes the strong dual norm in $X^*$. If $|u_R| = R$ for every $R$, then taking $R$ large enough in the inequality above leads, by (17.26), to
$$\langle Au_R, v_0-u_R\rangle < 0,$$
which is in contradiction with (17.29). Thus there exists some $R$ large enough for which $|u_R| < R$. Then for every $v\in K$ one can find $\varepsilon > 0$ small enough such that
$$\varepsilon v + (1-\varepsilon)u_R = u_R + \varepsilon(v-u_R) \in K\cap B_R.$$
It follows by (17.28) that $\varepsilon\langle Au_R, v-u_R\rangle \ge 0$, i.e.,
$$\langle Au_R, v-u_R\rangle \ge 0 \quad \forall v\in K,$$
and $u_R$ is the solution we were looking for. This completes the proof of the theorem.
• 2. The infinite-dimensional case. We assume now that $X$ is a reflexive Banach space with norm $|\cdot|$ and dual $X^*$, $K$ is a nonempty closed convex subset of $X$, and $A$ is a mapping from $K$ into $X^*$.

Definition 17.2. We shall say that $A$ is monotone iff
$$\langle Au - Av, u - v\rangle \ge 0 \quad \forall u, v\in K. \tag{17.30}$$
$A$ is strictly monotone if equality in (17.30) holds only for $u = v$.

Definition 17.3. We shall say that $A : K\to X^*$ is continuous on finite-dimensional subspaces of $X$ if for every finite-dimensional subspace $M$ of $X$ the mapping $A : K\cap M\to X^*$ is weakly continuous, that is to say, for every $v\in X$, $u\mapsto\langle Au, v\rangle$ is continuous on $K\cap M$.

Under the assumptions above we have:

Theorem 17.10. Let $K$ be a nonempty closed convex subset of a reflexive Banach space $X$. Let $f\in X^*$ and $A : K\to X^*$ be a monotone mapping continuous on finite-dimensional subspaces of $X$. Then if
• $K$ is bounded, or
• $A$ is coercive on $K$ (see Definition 17.1),
there exists $u$ solution to the variational inequality
$$\begin{cases}\langle Au, v-u\rangle \ge \langle f, v-u\rangle \quad \forall v\in K,\\ u\in K.\end{cases} \tag{17.31}$$
Moreover, if $A$ is strictly monotone the solution of (17.31) is unique.

In the proof of Theorem 17.10 we will use the following lemma.

Lemma 17.11 (Minty's Lemma). Under the assumptions of Theorem 17.10 the variational inequality (17.31) is equivalent to
$$\begin{cases}\langle Av, v-u\rangle \ge \langle f, v-u\rangle \quad \forall v\in K,\\ u\in K.\end{cases} \tag{17.32}$$

Proof of the lemma. If $u$ is solution to (17.31) we have
$$\langle Av, v-u\rangle = \langle Au, v-u\rangle + \langle Av - Au, v-u\rangle \ge \langle f, v-u\rangle \quad \forall v\in K$$
by the monotonicity of $A$ and (17.31). Thus $u$ solution of (17.31) implies that $u$ is solution to (17.32). Conversely, let $u$ be solution to (17.32). Replacing $v$ in (17.32) by $u + t(v-u) = tv + (1-t)u\in K$ for $t\in(0,1)$ we get
$$\langle A(u+t(v-u)), t(v-u)\rangle \ge \langle f, t(v-u)\rangle \implies \langle A(u+t(v-u)), v-u\rangle \ge \langle f, v-u\rangle \quad \forall v\in K.$$
Using the continuity of $A$ on finite-dimensional subspaces, we get (17.31) by letting $t\to0$ in the inequality above. This completes the proof of the lemma.

Proof of the theorem. Without loss of generality we can assume $f = 0$. Suppose then first that $K$ is bounded. Consider
$$C(v) = \{\,u\in K \mid \langle Av, v-u\rangle \ge 0\,\}.$$
It is clear that $C(v)$ is a weakly closed subset of $K$ which is bounded. Thus $C(v)$ is weakly compact, and if $\bigcap_{v\in K}C(v) = \emptyset$ one can find $v_1,\dots,v_n$ in $K$ such that
$$C(v_1)\cap\dots\cap C(v_n) = \emptyset. \tag{17.33}$$
Consider then $M$, the subspace of $X$ spanned by $v_1,\dots,v_n$. Let us denote by $i$ the canonical injection from $M$ into $X$ and by $r$ the restriction mapping defined by
$$r(x') = x'|_M \quad \forall x'\in X^*.$$
By applying Theorem 17.8 and Minty's Lemma to the operator $v\mapsto r\circ A\circ i(v)$ we find that there exists a solution of the problem
$$\begin{cases}\langle Av, v-u\rangle \ge 0 \quad \forall v\in K\cap M,\\ u\in K\cap M.\end{cases}$$
Of course such a $u$ belongs to $C(v_1)\cap\dots\cap C(v_n)$, which gives a contradiction to (17.33), and thus $\bigcap_{v\in K}C(v)\neq\emptyset$, which proves the theorem in the case where $K$ is bounded. One deals with the case $K$ unbounded exactly as in Theorem 17.9.

To prove uniqueness in the case where $A$ is strictly monotone, it is enough to remark that if $u_1, u_2$ are two solutions to (17.31) then
$$\langle Au_1, v-u_1\rangle \ge \langle f, v-u_1\rangle \quad \forall v\in K, \qquad \langle Au_2, v-u_2\rangle \ge \langle f, v-u_2\rangle \quad \forall v\in K.$$
Taking $v = u_2$ in the first inequality and $v = u_1$ in the second and adding, we obtain
$$\langle Au_1 - Au_2, u_1 - u_2\rangle \le 0.$$
The strict monotonicity implies $u_1 = u_2$. This completes the proof of the theorem.

As a corollary we have:

Corollary 17.12. Let $A$ be a monotone, coercive operator from a reflexive Banach space $X$ into its dual $X^*$. If $A$ is continuous on finite-dimensional subspaces of $X$ then $A$ is onto, i.e., for every $f\in X^*$ there exists a solution to
$$Au = f. \tag{17.34}$$
If in addition $A$ is strictly monotone then the solution of (17.34) is unique.

Proof. Taking $K = X$ we derive from Theorem 17.10 that there exists $u\in X$ such that
$$\langle Au, v-u\rangle \ge \langle f, v-u\rangle \quad \forall v\in X.$$
Replacing $v$ by $u\pm v$ in this inequality we get
$$\langle Au, v\rangle = \langle f, v\rangle \quad \forall v\in X.$$
This completes the proof.
Remark 17.5. In view of this corollary the coerciveness assumption seems natural. Indeed, in order for a monotone mapping $A$ from $\mathbb{R}$ to $\mathbb{R}$ to be onto one needs
$$|Ax| \longrightarrow +\infty \quad\text{when } |x|\to+\infty,$$
i.e., assuming $A(0) = 0$ (so that $Ax$ and $x$ have the same sign),
$$\frac{Ax\cdot x}{|x|} = \frac{|Ax|\,|x|}{|x|} = |Ax| \longrightarrow +\infty \quad\text{when } |x|\to+\infty.$$

• 3. Applications. We go back to (17.1) and we define $A : W_0^{1,p}(\Omega)\to(W_0^{1,p}(\Omega))^*$ by setting
$$\langle Au, v\rangle = \int_\Omega |\nabla u|^{p-2}\nabla u\cdot\nabla v\,dx \quad \forall v\in W_0^{1,p}(\Omega).$$
It is easy to see that $A$ defined in this way satisfies all the assumptions of Theorem 17.10, and thus there exists a unique solution to (17.1).
Exercises

1. Detail the proof of (17.11).

2. Derive the Lax–Milgram theorem from Theorem 17.10.

3. Let $u\in W_0^{1,p}(\Omega)$ and $v\in W^{1,p}(\Omega)$ such that $v\ge0$ a.e. on $\Omega$. Prove that $(u-v)^+\in W_0^{1,p}(\Omega)$. Deduce rigorously (17.24).

4. Let $u, v\in W^{1,p}(\Omega)$ such that $u^+\in W_0^{1,p}(\Omega)$, $v^-\in W_0^{1,p}(\Omega)$. Prove that $(u-v)^+\in W_0^{1,p}(\Omega)$.

5. Let $\Omega$ be a bounded open set of $\mathbb{R}^n$, $\varepsilon > 0$, $f\in L^{p'}(\Omega)$.
(i) Show that there exists a unique weak solution $u_\varepsilon$ to
$$\begin{cases}-\varepsilon\,\partial_{x_i}(|\nabla u_\varepsilon|^{p-2}\partial_{x_i}u_\varepsilon) + |u_\varepsilon|^{p-2}u_\varepsilon = f &\text{in }\Omega,\\ u_\varepsilon\in W_0^{1,p}(\Omega).\end{cases}$$
(ii) Show that $u_\varepsilon\to\operatorname{sign}(f)\,|f|^{\frac{1}{p-1}}$ in $L^p(\Omega)$ when $\varepsilon\to0$ ($\operatorname{sign}f$ denotes the sign of $f$).
Chapter 18
The Strong Maximum Principle Since we have seen that weak solutions can be smooth (see Chapter 16) in this chapter we would like to introduce some tools and properties appropriate for smooth solutions of elliptic problems (see also [15], [17], [44], [48]–[50], [54], [55], [57], [80]–[82]).
18.1 A first version of the maximum principle Let Ω = (a, b). Let u ∈ C 2 (Ω) ∩ C(Ω) such that u > 0 on (a, b).
(18.1)
u(x) < u(a) ∨ u(b) ∀ x ∈ (a, b)
(18.2)
Then we claim that where ∨ denotes the maximum of two numbers. Indeed – if not – u achieves an interior maximum x0 . At this point we have u(x) ≤ u(x0 )
∀ x close to x0 .
Using the Taylor expansion it comes u(x) = u(x0 ) + u (x0 )(x − x0 ) +
u (ξx ) (x − x0 )2 2
where ξx lies between x0 and x. From (18.3) we derive # x − x $ 0 ≤0 (x − x0 ) u (x0 ) + u (ξx ) 2 for x close to x0 . This forces u (x0 ) = 0,
u (ξx ) ≤ 0
(18.3)
for $x$ close to $x_0$. Letting $x\to x_0$ we see that at a point $x_0$ where $u$ reaches a local maximum we have
$$u'(x_0) = 0, \qquad u''(x_0) \le 0.$$
By (18.1) such a point cannot exist and thus (18.2) holds.

A natural issue is to generalize this result in higher dimension, replacing $u''$ for instance by a linear combination of second derivatives of $u$. To this purpose let us consider $\Omega$ a bounded open subset of $\mathbb{R}^n$ and functions $a_{ij}, b_i$, $i,j = 1,\dots,n$, in $\Omega$ such that the $a_{ij}$ satisfy
$$a_{ij}(x)\xi_i\xi_j \ge 0 \quad \forall x\in\Omega,\ \forall\xi\in\mathbb{R}^n. \tag{18.4}$$
Then we have:

Theorem 18.1. Let $u\in C^2(\Omega)\cap C(\overline\Omega)$ satisfy
$$\sum_{i,j=1}^n a_{ij}(x)\partial^2_{x_ix_j}u + \sum_{i=1}^n b_i(x)\partial_{x_i}u > 0 \quad \forall x\in\Omega. \tag{18.5}$$
Then we have
$$u(x) < \operatorname{Max}_{\partial\Omega}u \quad \forall x\in\Omega. \tag{18.6}$$
($\partial\Omega$ denotes the boundary of $\Omega$.)

Proof. Since for a $C^2$-function we have $\partial^2_{x_ix_j}u = \partial^2_{x_jx_i}u$, there is no loss of generality in assuming that $A = (a_{ij})$ is a symmetric matrix; if not, one replaces $a_{ij}$ by $\frac12(a_{ij}+a_{ji})$ in (18.5). Since $u\in C(\overline\Omega)$ and $\overline\Omega$ is compact, $u$ achieves its maximum at some point in $\overline\Omega$. If (18.6) does not hold then $u$ achieves its maximum at a point $x_0\in\Omega$. Then for every $\xi\in\mathbb{R}^n$ the function
$$v(t) = u(x_0 + t\xi)$$
achieves a local maximum at $t = 0$. This implies (see above) that
$$v'(0) = 0, \qquad v''(0) \le 0,$$
i.e., by the chain rule,
$$\partial_{x_i}u(x_0)\xi_i = 0, \qquad \partial^2_{x_ix_j}u(x_0)\xi_i\xi_j \le 0 \quad \forall\xi\in\mathbb{R}^n. \tag{18.7}$$
Since $A = (a_{ij}(x_0))$ is symmetric, there exists an orthogonal matrix $Q$ and a diagonal matrix $D$ such that
$$A = QDQ^T.$$
Suppose that
$$D = \begin{pmatrix} d_1(x_0) & & 0\\ & \ddots & \\ 0 & & d_n(x_0)\end{pmatrix}.$$
If "$\cdot$" denotes the usual scalar product in $\mathbb{R}^n$ then (18.4) reads
$$A\xi\cdot\xi \ge 0 \quad \forall\xi\in\mathbb{R}^n$$
(for convenience we dropped the $x$). This implies that
$$D\xi\cdot\xi = Q^TAQ\xi\cdot\xi = AQ\xi\cdot Q\xi \ge 0 \quad \forall\xi\in\mathbb{R}^n,$$
and in particular, for $\xi = e_i$,
$$d_i(x_0) \ge 0 \quad \forall i = 1,\dots,n. \tag{18.8}$$
If $\partial^2u$ denotes the Hessian matrix of $u$ at $x_0$ we have
$$\sum_{i,j=1}^n a_{ij}(x_0)\partial^2_{x_ix_j}u(x_0) = \operatorname{tr}(A\,\partial^2u) = \operatorname{tr}(QDQ^T\partial^2u) = \operatorname{tr}(DQ^T\partial^2u\,Q) \tag{18.9}$$
(we used the fact that for two matrices $M_1, M_2$, $\operatorname{tr}(M_1M_2) = \operatorname{tr}(M_2M_1)$). If we set
$$C = Q^T\partial^2u(x_0)Q = (c_{ij}),$$
by (18.7) we have
$$C\xi\cdot\xi = \partial^2u(x_0)Q\xi\cdot Q\xi \le 0 \quad \forall\xi\in\mathbb{R}^n,$$
and thus, as above,
$$c_{ii}(x_0) \le 0 \quad \forall i = 1,\dots,n.$$
Combining (18.7), (18.8), (18.9) we get
$$\sum_{i,j=1}^n a_{ij}(x_0)\partial^2_{x_ix_j}u(x_0) + \sum_{i=1}^n b_i(x_0)\partial_{x_i}u(x_0) = \operatorname{tr}(DC) = \sum_{i=1}^n d_i(x_0)c_{ii}(x_0) \le 0,$$
which contradicts (18.5) and concludes the proof.

Remark 18.1. The result above holds true for $A = (a_{ij})\equiv0$.
The next issue is to see what happens when the inequality (18.5) is not strict. In this case we have:

Theorem 18.2. Suppose in addition to (18.4) that there exists a vector $\xi\neq0$ such that for some positive constants $\gamma, \delta$
$$a_{ij}(x)\xi_i\xi_j \ge \gamma > 0, \qquad b_i(x)\xi_i \ge -\delta \quad \forall x\in\Omega. \tag{18.10}$$
Let $u\in C^2(\Omega)\cap C(\overline\Omega)$ satisfy
$$\sum_{i,j=1}^n a_{ij}(x)\partial^2_{x_ix_j}u + \sum_{i=1}^n b_i(x)\partial_{x_i}u \ge 0 \quad \forall x\in\Omega. \tag{18.11}$$
Then
$$u(x) \le \operatorname{Max}_{\partial\Omega}u \quad \forall x\in\Omega. \tag{18.12}$$
Proof. Consider the function
$$u_\varepsilon(x) = u(x) + \varepsilon e^{\alpha(\xi\cdot x)}$$
($\alpha$ is a positive constant to be chosen later on). One has
$$\partial_{x_i}u_\varepsilon = \partial_{x_i}u + \varepsilon\alpha e^{\alpha(\xi\cdot x)}\xi_i, \qquad \partial^2_{x_ix_j}u_\varepsilon = \partial^2_{x_ix_j}u + \varepsilon\alpha^2 e^{\alpha(\xi\cdot x)}\xi_i\xi_j.$$
Thus it follows that (we drop the $x$ for convenience)
$$\sum_{i,j=1}^n a_{ij}\partial^2_{x_ix_j}u_\varepsilon + \sum_{i=1}^n b_i\partial_{x_i}u_\varepsilon = \sum_{i,j=1}^n a_{ij}\partial^2_{x_ix_j}u + \sum_{i=1}^n b_i\partial_{x_i}u + \varepsilon e^{\alpha(\xi\cdot x)}\{\alpha^2a_{ij}\xi_i\xi_j + \alpha b_i\xi_i\} \ge \varepsilon e^{\alpha(\xi\cdot x)}\{\alpha^2\gamma - \alpha\delta\} > 0$$
for $\alpha$ large enough. Applying Theorem 18.1 we deduce that
$$u_\varepsilon(x) < \operatorname{Max}_{\partial\Omega}u_\varepsilon \le \operatorname{Max}_{\partial\Omega}u + \varepsilon\operatorname{Max}_{\partial\Omega}e^{\alpha(\xi\cdot x)} \quad \forall x\in\Omega.$$
The result follows by letting $\varepsilon\to0$.
Remark 18.2.
1. In the case of (18.11) one cannot have (18.6), since for instance any constant function is a solution to (18.11).
2. Suppose that in $\Omega\subset\mathbb{R}^n\times\mathbb{R}$, $u = u(x,t)$ satisfies
$$\Delta u \pm u_t \ge 0 \quad\text{in }\Omega.$$
Then Theorem 18.2 applies: take $\xi = e_1$ in $\mathbb{R}^{n+1}$ and consider $t$ as $x_{n+1}$. Thus the maximum principle works the same way for elliptic problems, i.e., those involving an elliptic operator such as the Laplacian, and for parabolic problems involving operators of the type $\pm u_t - Au$, where $A$ is an elliptic operator in the $x$ variable.

As a corollary of Theorem 18.2 we have the following:

Theorem 18.3. Suppose that we are under the assumptions of Theorem 18.2, namely (18.4) and (18.10) hold. Let $c$ be a function such that
$$c(x) \ge 0 \quad \forall x\in\Omega. \tag{18.13}$$
Let $u\in C^2(\Omega)\cap C(\overline\Omega)$ satisfy
$$\sum_{i,j=1}^n a_{ij}(x)\partial^2_{x_ix_j}u + \sum_{i=1}^n b_i(x)\partial_{x_i}u - c(x)u \ge 0 \quad \forall x\in\Omega. \tag{18.14}$$
Then
$$u(x) \le \operatorname{Max}_{\partial\Omega}u^+ \quad \forall x\in\Omega. \tag{18.15}$$
Proof. Consider the open set
$$\Omega^+ = \{\,x\in\Omega \mid u(x) > 0\,\}.$$
If $\Omega^+ = \emptyset$ then (18.15) clearly holds. Otherwise, by (18.14) we have
$$\sum_{i,j=1}^n a_{ij}(x)\partial^2_{x_ix_j}u + \sum_{i=1}^n b_i(x)\partial_{x_i}u \ge c(x)u(x) \ge 0 \quad\text{on }\Omega^+,$$
and by Theorem 18.2
$$u(x) \le \operatorname{Max}_{\partial\Omega^+}u \le \operatorname{Max}_{\partial\Omega}u^+ \quad\text{on }\Omega^+,$$
while on $\Omega\setminus\Omega^+$ we have $u \le 0 \le \operatorname{Max}_{\partial\Omega}u^+$.
This completes the proof of the theorem.
Remark 18.3 (Minimum principle). Under the assumptions of Theorem 18.3, if $u$ is such that
$$Lu := \sum_{i,j=1}^n a_{ij}(x)\partial^2_{x_ix_j}u + \sum_{i=1}^n b_i(x)\partial_{x_i}u - c(x)u \le 0 \quad \forall x\in\Omega, \tag{18.16}$$
then $-u$ satisfies (18.14) and one has
$$-u(x) \le \operatorname{Max}_{\partial\Omega}(-u)^+ \iff u(x) \ge -\operatorname{Max}_{\partial\Omega}u^-.$$

Remark 18.4. Condition (18.13) cannot be dropped. Indeed, for instance $u = \sin nx$ satisfies
$$u'' + n^2u = 0 \quad\text{on }\Omega = (0,\pi),$$
and the maximum principle does not hold on $(0,\pi)$.

Remark 18.5. A consequence of the maximum principle is the uniqueness for the Dirichlet problem associated to the operator $L$ defined in (18.16). More generally, if $u, v\in C^2(\Omega)\cap C(\overline\Omega)$ satisfy
$$\begin{cases}-Lu \le -Lv &\text{in }\Omega,\\ u \le v &\text{on }\partial\Omega,\end{cases} \tag{18.17}$$
then we have
$$u \le v \quad\text{in }\Omega. \tag{18.18}$$
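The counterexample of Remark 18.4 is easy to verify numerically; the grid below is an arbitrary choice, not from the text. For $u = \sin x$ (the case $n = 1$, so $c = -1 < 0$ violates (18.13)) the maximum is interior, not on the boundary.

```python
import math

xs = [math.pi * k / 1000 for k in range(1001)]
vals = [math.sin(x) for x in xs]

boundary_max = max(vals[0], vals[-1])   # u(0) = u(pi) = 0
# the interior maximum is 1, attained at x = pi/2, so (18.15) fails here
assert max(vals) > boundary_max + 0.9
```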
18.2 The Hopf maximum principle

As a consequence of Theorem 18.2, in one dimension, if $u\in C^2(\Omega)\cap C(\overline\Omega)$ satisfies
$$u'' \ge 0 \quad\text{in }\Omega = (a,b), \tag{18.19}$$
then we have
$$u(x) \le M = \operatorname{Max}\{u(a), u(b)\} \quad \forall x\in\overline\Omega.$$
Suppose now that the maximum $M$ is achieved at some point $x_0\in\Omega$. Then we have $u'(x_0) = 0$ and, from (18.19), $u' \le 0$ on the left of $x_0$ and $u' \ge 0$ on the right. In other words, $u$ is nonincreasing on the left of $x_0$ and nondecreasing on the right. But since $M = u(x_0)$ is the maximum we have
$$u \equiv M \quad\text{in }\overline\Omega.$$
In other words, when (18.19) holds, $u$ cannot achieve its maximum inside $\Omega$ without being constant and equal to this maximum on $\overline\Omega$. This is what we would like to generalize now.

Theorem 18.4 (The Hopf maximum principle). Let $c$ be a bounded function. We assume in addition that for some positive $\lambda$
$$\sum_{ij} a_{ij}\xi_i\xi_j \ge \lambda|\xi|^2 \quad \forall\xi\in\mathbb{R}^n.$$
Let $u\in C^2(\Omega)$ be such that
$$\sum_{i,j=1}^n a_{ij}(x)\partial^2_{x_ix_j}u + \sum_{i=1}^n b_i(x)\partial_{x_i}u - c(x)u \ge 0 \quad\text{in }\Omega. \tag{18.20}$$
Consider a point $x_0\in\partial\Omega$ and suppose that
(i) $u$ is continuous at $x_0$; (18.21)
(ii) $u(x_0) > u(x)$ for all $x\in\Omega$; (18.22)
(iii) $\partial\Omega$ satisfies an interior sphere condition at $x_0$ (see below). (18.23)
Let $\nu$ be an exterior direction to the ball involved in (iii) (see below). Then, if $\frac{\partial u}{\partial\nu}(x_0)$ exists, we have
$$\frac{\partial u}{\partial\nu}(x_0) > 0 \tag{18.24}$$
for
$$\begin{cases}c = 0, & u(x_0)\ \text{arbitrary},\\ c \ge 0, & u(x_0) \ge 0,\\ c\ \text{arbitrary}, & u(x_0) = 0.\end{cases} \tag{18.25}$$
Proof. First let us explain what is meant by (iii). We assume that there exists a ball
$$B_R(y) = \{\,x \mid |x-y| < R\,\} \subset \Omega$$
such that $x_0$ is the only point of $\partial\Omega$ which belongs to its boundary (see Figure 18.1).

[Figure 18.1: the interior ball $B_R(y)$ touching $\partial\Omega$ only at $x_0$, together with the smaller concentric ball of radius $r$.]

Consider the function $v$ defined as
$$v(x) = e^{-\alpha\rho^2} - e^{-\alpha R^2} \ge 0, \tag{18.26}$$
where $\rho = |x-y|$, $r < \rho < R$, and $\alpha$ is a positive constant that we will choose later. We have
$$\partial_{x_i}v = \partial_{x_i}e^{-\alpha|x-y|^2} = -2\alpha e^{-\alpha\rho^2}(x_i-y_i),$$
$$\partial^2_{x_ix_j}v = 4\alpha^2e^{-\alpha\rho^2}(x_i-y_i)(x_j-y_j) - 2\alpha e^{-\alpha\rho^2}\delta_{ij},$$
where $\delta_{ij}$ denotes the Kronecker symbol. Thus, if $Lu$ is defined as the left-hand side of (18.20), let us denote by $L^+$ the operator defined as $L$ with $c$ replaced by $c^+$. We have, with the summation convention,
$$L^+v = a_{ij}\{4\alpha^2e^{-\alpha\rho^2}(x_i-y_i)(x_j-y_j) - 2\alpha e^{-\alpha\rho^2}\delta_{ij}\} + b_i\{-2\alpha e^{-\alpha\rho^2}(x_i-y_i)\} - c^+\{e^{-\alpha\rho^2} - e^{-\alpha R^2}\}$$
$$\ge e^{-\alpha\rho^2}\{4\alpha^2a_{ij}(x_i-y_i)(x_j-y_j) - 2\alpha(a_{ii} + b_i(x_i-y_i)) - c^+\}$$
$$\ge e^{-\alpha\rho^2}\{4\alpha^2\lambda|x-y|^2 - 2\alpha(|a_{ii}| + |b|\,|x-y|) - c^+\}$$
$$\ge e^{-\alpha\rho^2}\{4\alpha^2\lambda r^2 - 2\alpha(|a_{ii}| + |b|R) - c^+\} \ge 0$$
on $B_R(y)\setminus B_r(y)$ for $\alpha$ large enough. On the boundary of $B_r(y)$ we have, since $u$ is continuous,
$$u - u(x_0) \le -\gamma < 0$$
for some positive constant $\gamma$. Thus for $\varepsilon$ small enough we have
$$u - u(x_0) + \varepsilon v \le 0 \quad\text{on }\partial B_r(y). \tag{18.27}$$
Moreover on $\partial B_R(y)$ we have (see (18.22), (18.26))
$$u - u(x_0) + \varepsilon v \le 0 \quad\text{on }\partial B_R(y).$$
Finally we have
$$L^+(u - u(x_0) + \varepsilon v) = Lu - c^-u + c^+u(x_0) + \varepsilon L^+v \ge -c^-(u - u(x_0)) - c^-u(x_0) + c^+u(x_0) \ge 0$$
in all cases of (18.25). Applying Theorem 18.3 with $L$ replaced by $L^+$ we deduce
$$u - u(x_0) + \varepsilon v \le 0 \quad\text{on }B_R(y)\setminus B_r(y).$$
Since $v$ vanishes at $x_0$ we have for $h > 0$ small enough
$$u(x_0 - h\nu) - u(x_0) + \varepsilon(v(x_0 - h\nu) - v(x_0)) \le 0$$
(since $\nu$ is an exterior direction to $B_R(y)$, i.e., $(x_0-y)\cdot\nu > 0$). This implies
$$\frac{u(x_0 - h\nu) - u(x_0)}{-h} \ge -\varepsilon\,\frac{v(x_0 - h\nu) - v(x_0)}{-h}.$$
Letting $h\to0$ we obtain
$$\frac{\partial u}{\partial\nu}(x_0) \ge -\varepsilon\,\frac{\partial v}{\partial\nu}(x_0).$$
But
$$\frac{\partial v}{\partial\nu}(x_0) = \nabla v(x_0)\cdot\nu = -2\alpha e^{-\alpha|x_0-y|^2}(x_0-y)\cdot\nu < 0$$
and the result follows.
As a corollary we have:

Theorem 18.5. Suppose that we are under the assumptions of Theorem 18.4. Let $u\in C^2(\Omega)$ satisfy $Lu \ge 0$ in $\Omega$. If $\Omega$ is connected and
$$\begin{cases}c = 0 &\text{and $u$ achieves its maximum inside }\Omega,\ \text{or}\\ c \ge 0 &\text{and $u$ achieves a nonnegative maximum inside }\Omega,\ \text{or}\\ c\ \text{arbitrary} &\text{and $u$ achieves a zero maximum inside }\Omega,\end{cases} \tag{18.28}$$
then $u$ is constant in $\Omega$.

Proof. Suppose on the contrary $u$ nonconstant with a maximum $M$ achieved in $\Omega$. Set
$$\Omega^- = \{\,x \mid u(x) < M\,\}.$$
Since $u$ is not constant, $\Omega^-$ is a nonempty open subset of $\Omega$ such that $\partial\Omega^-\cap\Omega$ is not empty (if not, then $\Omega\subset\Omega^-\cup(\mathbb{R}^n\setminus\overline{\Omega^-})$, which contradicts the connectedness of $\Omega$). Let $y\in\Omega^-$ be such that $y$ is closer to $\partial\Omega^-$ than to $\partial\Omega$. Consider then $B_R(y)$, the largest ball centered at $y$ contained in $\Omega^-$. There are points $x_0\in\partial B_R(y)$ such that $u(x_0) = M$. At such a point, by Theorem 18.4, one has
$$\frac{\partial u}{\partial\nu}(x_0) > 0$$
($\nu$ is the outward normal of the ball $B_R(y)$). But this contradicts the fact that $x_0\in\Omega$ is a point where the maximum is achieved and hence where $\nabla u(x_0) = 0$. Thus $u$ is constant. This completes the proof of the theorem.

Remark 18.6. A consequence of Theorem 18.5 is that, if $\Omega$ is bounded, $M$ denotes the maximum of $u$ on $\overline\Omega$ and we are under the assumptions of Theorem 18.5, then either $u\equiv M$ in $\Omega$ or $u < M$ in $\Omega$.
18.3 Application: the moving plane technique

The maximum principle that we just saw allows one to obtain qualitative properties of solutions of elliptic equations. Let us see this on an example (cf. [15], [54]).

Definition 18.1. Let $\Omega\subset\mathbb{R}^n$. We say that $\Omega$ is convex in the $x_1$ direction if
$$(x_1, x'),\ (y_1, x')\in\Omega \implies (\alpha x_1 + (1-\alpha)y_1,\ x')\in\Omega \quad \forall\alpha\in(0,1). \tag{18.29}$$

Then we have:

Theorem 18.6. Let $\Omega$ be a bounded domain in $\mathbb{R}^n$, convex in the $x_1$ direction and symmetric with respect to the hyperplane $x_1 = 0$ (see Figure 18.2). Let $u\in C^2(\Omega)\cap C(\overline\Omega)$ be a positive solution to
$$\begin{cases}-\Delta u = f(u) &\text{in }\Omega,\\ u = 0 &\text{on }\partial\Omega,\end{cases} \tag{18.30}$$
where $f$ is a Lipschitz continuous function. Then $u$ is symmetric with respect to the hyperplane $x_1 = 0$ and
$$\partial_{x_1}u > 0 \quad\text{on }\ \Omega^- = \{\,x\in\Omega \mid x_1 < 0\,\}. \tag{18.31}$$
[Figure 18.2: the domain $\Omega$, the hyperplane $T_\lambda = \{x_1 = \lambda\}$, the cap $\Sigma_\lambda = \{x\in\Omega \mid x_1 < \lambda\}$, and the reflected point $(2\lambda - x_1, x')$ of a point $(x_1, x')\in\Sigma_\lambda$.]
Proof. One uses the so-called moving plane technique. Set
$$-a = \inf_{(x_1,x')\in\Omega} x_1. \tag{18.32}$$
For $-a < \lambda < 0$ we define $T_\lambda$, $\Sigma_\lambda$ as
$$T_\lambda = \{\,x \mid x_1 = \lambda\,\}, \qquad \Sigma_\lambda = \{\,x\in\Omega \mid x_1 < \lambda\,\}. \tag{18.33}$$
Define
$$w_\lambda(x) = u(2\lambda - x_1, x') - u(x_1, x'), \qquad x = (x_1, x')\in\Sigma_\lambda. \tag{18.34}$$
We have in $\Sigma_\lambda$
$$-\Delta w_\lambda = -\Delta u(2\lambda - x_1, x') + \Delta u(x_1, x') = \frac{f(u(2\lambda - x_1, x')) - f(u(x_1, x'))}{w_\lambda}\,w_\lambda =: c(x,\lambda)w_\lambda,$$
with the convention that $c(x,\lambda) = 0$ if $w_\lambda = 0$. Moreover, we have a bound uniform with respect to $\lambda$, since
$$|c(x,\lambda)| \le L \quad \forall x\in\Sigma_\lambda,\ \forall\lambda, \tag{18.35}$$
where $L$ is the Lipschitz constant of $f$. Since $u$ is positive we also have
$$w_\lambda \ge 0, \quad w_\lambda\not\equiv0 \quad\text{on }\partial\Sigma_\lambda.$$
(Note that $w_\lambda = 0$ on $T_\lambda\cap\Omega$.) For $0 < \lambda + a$ small enough, $\Sigma_\lambda$ is narrow in the $x_1$ direction and from the weak maximum principle we deduce
$$w_\lambda(x) \ge 0 \quad\text{in }\Sigma_\lambda. \tag{18.36}$$
(See (18.35) and Corollary 4.5.) By the Hopf maximum principle (Theorem 18.5) we have in fact
$$w_\lambda(x) > 0 \quad\text{in }\Sigma_\lambda. \tag{18.37}$$
Let $(-a,\mu)$ be the largest interval for which
$$w_\lambda(x) > 0 \ \text{in }\Sigma_\lambda \quad \forall\lambda\in(-a,\mu). \tag{18.38}$$
We claim that $\mu = 0$. If not, suppose $\mu < 0$. By continuity we have $w_\mu(x) \ge 0$ in $\Sigma_\mu$ and, by the Hopf maximum principle,
$$w_\mu(x) > 0 \quad\text{in }\Sigma_\mu.$$
Let us denote by $K$ a compact set in $\Sigma_\mu$ such that
$$|\Sigma_\mu\setminus K| \le \frac{\delta}{2},$$
where $\delta$ is a real number that we will fix later ($|\cdot|$ is the Lebesgue measure). On $K$ we have $w_\mu(x) \ge m > 0$. Thus by continuity, for $\varepsilon$ small enough, we have
$$w_{\mu+\varepsilon}(x) \ge \frac{m}{2} > 0 \quad\text{on }K.$$
We can also choose $\varepsilon$ small enough in such a way that
$$|\Sigma_{\mu+\varepsilon}\setminus K| \le \delta.$$
Now for $\delta$ small enough (see (18.35) and Theorem 12.10) the maximum principle applies to $\widetilde\Sigma_{\mu+\varepsilon} = \Sigma_{\mu+\varepsilon}\setminus K$. More precisely, since
$$\begin{cases}-\Delta w_{\mu+\varepsilon} - c\,w_{\mu+\varepsilon} = 0 &\text{in }\widetilde\Sigma_{\mu+\varepsilon},\\ w_{\mu+\varepsilon} \ge 0 &\text{on }\partial\widetilde\Sigma_{\mu+\varepsilon},\end{cases}$$
we derive $w_{\mu+\varepsilon} \ge 0$ in $\widetilde\Sigma_{\mu+\varepsilon}$, and finally, by the Hopf maximum principle,
$$w_{\mu+\varepsilon} > 0 \quad\text{in }\Sigma_{\mu+\varepsilon}.$$
This contradicts the maximality of $\mu$ and thus we have
$$w_\lambda(x) > 0 \ \text{in }\Sigma_\lambda \quad \forall\lambda\in(-a,0).$$
Chapter 18. The Strong Maximum Principle
Letting λ → 0 in the inequality above and recalling the definition of wλ we arrive to u(−x1 , x ) ≥ u(x1 , x ) ∀ (x1 , x ) ∈ Ω, x1 < 0. Since u(−x1 , x ) is also a positive solution to (18.30) we have u(x1 , x ) ≥ u(−x1 , x )
∀ (x1 , x ) ∈ Ω,
x1 < 0.
u(−x1 , x ) = u(x1 , x ) ∀ (x1 , x ) ∈ Ω,
x1 < 0
This shows that
and the symmetry of u with respect to the hyperplane x1 = 0 is proved. Now for any λ ∈ (−a, 0) we have wλ (x1 , x ) > 0 in Σλ ,
wλ (x1 , x ) = 0
on Tλ ∩ Ω.
By the Hopf maximum principle (see (18.25)) ∂x1 wλ (λ, x ) < 0 ∀ x such that (λ, x ) ∈ Ω. Going back to (18.34) we get −2∂x1 u(λ, x ) < 0
∀ x such that (λ, x ) ∈ Ω.
This is (18.31) and this completes the proof of the theorem.
Remark 18.7. One has of course due to the symmetry of u ∂x1 u < 0
on Ω+ = { x ∈ Ω | x1 > 0 }
and by the continuity of ∂xi u, ∂xi u = 0 on T0 ∩ Ω. As a corollary we have the following famous result due to Gidas–Ni–Nirenberg ([54]): Corollary 18.7. Let Ω = BR (0) ⊂ Rn . Let u ∈ C 2 (Ω) ∩ C(Ω) be a positive solution to −Δu = f (u) in BR (0), u=0 on ∂BR (0), where f is Lipschitz continuous. Then u is radially symmetric namely u(x) = u˜(|x|) for some function u ˜. Moreover u˜ < 0 on (0, R). Proof. This follows directly from Theorem 18.6 since u is symmetric with respect to any hyperplane passing through the origin. Remark 18.8. u ˜ is of course solution to −˜ urr − n−1 ˜r = f (˜ u) on (0, R), r u u ˜ (0) = 0, u(R) = 0.
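The radial equation of Remark 18.8 can be illustrated in the simplest case. For $f\equiv1$ and $n = 2$ the explicit profile $\tilde u(r) = (R^2 - r^2)/4$ (a standard closed form, chosen here for illustration; the values of $R$ and $n$ are arbitrary) satisfies the ODE together with $\tilde u'(0) = 0$, $\tilde u(R) = 0$, and $\tilde u' < 0$ on $(0,R)$, in line with Corollary 18.7.

```python
R, n = 1.0, 2

def ut(r):   # the radial profile (R^2 - r^2)/4
    return (R * R - r * r) / 4.0

def dut(r):  # its derivative
    return -r / 2.0

def ddut(r): # its second derivative
    return -0.5

for r in [0.1 * k for k in range(1, 10)]:
    # -u~'' - (n-1)/r u~' = 1  on (0, R)
    assert abs(-ddut(r) - (n - 1) / r * dut(r) - 1.0) < 1e-12
    assert dut(r) < 0          # u~' < 0 on (0, R)

assert dut(0.0) == 0.0 and ut(R) == 0.0
```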
Exercises

1. Show that Theorem 18.2 holds when (18.10) is replaced by the existence of a function $v$ such that
$$\sum_{i,j} a_{ij}(x)\partial^2_{x_ix_j}v + \sum_{i=1}^n b_i(x)\partial_{x_i}v > 0 \quad \forall x\in\Omega.$$

2. Let $\Omega$ be a bounded domain symmetric in $x_1$. Let $f\in L^2(\Omega)$ be a function symmetric in $x_1$. Show that the solution to
$$\begin{cases}-\Delta u = f &\text{in }\Omega,\\ u\in H_0^1(\Omega),\end{cases}$$
is symmetric in $x_1$.

3. (a) Justify carefully the fact that $w_\lambda \ge 0$ on $\partial\Sigma_\lambda$.
(b) Justify carefully the use of the maximum principle in the proof of Theorem 18.6.

4. Let $u\in C^2(\mathbb{R}^n)$ satisfy
$$-\Delta u = f \quad\text{in }\mathbb{R}^n.$$
Let $B_1$ denote the unit ball of $\mathbb{R}^n$ and $d\sigma$ the superficial measure on $\partial B_1$. Set
$$U(r) = \fint_{\partial B_1} u(r\sigma)\,d\sigma.$$
(i) Show that $U$ is a solution to
$$-(r^{n-1}U')' = r^{n-1}\fint_{\partial B_1} f(r\sigma)\,d\sigma, \quad\text{i.e.,}\quad -\Delta U = \fint_{\partial B_1} f(r\sigma)\,d\sigma.$$
(ii) If $u$ is harmonic, that is to say satisfies $-\Delta u = 0$ in $\mathbb{R}^n$, show that $U$ is constant. Deduce that
$$u(x_0) = \fint_{\partial B_R(x_0)} u(y)\,d\sigma(y) \quad \forall x_0\in\mathbb{R}^n,\ \forall R > 0.$$
Chapter 19
Problems in the Whole Space

We would like to consider here elliptic equations set in the whole space $\mathbb{R}^n$. For instance, if $A$ is a positive definite matrix and $a$ a nonnegative function, we are going to consider the equation
$$-\operatorname{div}(A(x)\nabla u) + a(x)u = 0 \quad\text{in }\mathbb{R}^n. \tag{19.1}$$
Let us first start by considering the harmonic functions, i.e., the functions satisfying (19.1) with $A(x)\equiv\mathrm{Id}$, $a = 0$.
19.1 The harmonic functions, Liouville theorem

Definition 19.1. We say that a function $u$ is harmonic if $u\in C^2(\mathbb{R}^n)$ and
$$\Delta u(x) = \sum_{i=1}^n \partial^2_{x_i}u = 0 \quad \forall x\in\mathbb{R}^n. \tag{19.2}$$

Then we have:

Theorem 19.1. Let $u$ be a harmonic function and $x_0\in\mathbb{R}^n$. We have
$$u(x_0) = \frac{1}{|\partial B_R(x_0)|}\int_{\partial B_R(x_0)} u(y)\,d\sigma(y) \quad \forall x_0\in\mathbb{R}^n,\ \forall R > 0 \tag{19.3}$$
($d\sigma$ denotes the superficial measure on $\partial B_R(x_0)$).

Proof. For any $r > 0$ we have
$$U(r) = \frac{1}{|\partial B_r(x_0)|}\int_{\partial B_r(x_0)} u(y)\,d\sigma(y) = \frac{1}{n\omega_nr^{n-1}}\int_{\partial B_1} u(x_0 + r\omega)\,r^{n-1}\,d\sigma(\omega) = \frac{1}{n\omega_n}\int_{\partial B_1} u(x_0 + r\omega)\,d\sigma(\omega),$$
where $\omega_n$ denotes the volume of the unit ball. By differentiation it follows that
$$U'(r) = \frac{1}{n\omega_n}\int_{\partial B_1} \frac{d}{dr}u(x_0 + r\omega)\,d\sigma(\omega) = \frac{1}{n\omega_n}\int_{\partial B_1} \nabla u(x_0 + r\omega)\cdot\omega\,d\sigma(\omega)$$
$$= \frac{1}{n\omega_nr^{n-1}}\int_{\partial B_r(x_0)} \nabla u(y)\cdot\frac{y-x_0}{r}\,d\sigma(y) = \frac{1}{|\partial B_r(x_0)|}\int_{\partial B_r(x_0)} \frac{\partial u}{\partial\nu}(y)\,d\sigma(y) = \frac{1}{|\partial B_r(x_0)|}\int_{B_r(x_0)} \Delta u\,dx = 0$$
(by the Green formula). Then $U$ is a constant function and, letting $r\to0$, we see that it is equal to $u(x_0)$. This completes the proof of the theorem.

Theorem 19.2. Let $u$ be a harmonic function and $x_0\in\mathbb{R}^n$. We have
$$u(x_0) = \frac{1}{|B_R(x_0)|}\int_{B_R(x_0)} u(x)\,dx \quad \forall x_0\in\mathbb{R}^n,\ \forall R > 0. \tag{19.4}$$
Proof. From the equality (19.3) we derive
$$n\omega_nr^{n-1}u(x_0) = \int_{\partial B_r(x_0)} u(y)\,d\sigma(y) \quad \forall\,0 < r < R.$$
Integrating in $r$ we get
$$u(x_0)\,\omega_nR^n = \int_0^R\!\!\int_{\partial B_r(x_0)} u(y)\,d\sigma(y)\,dr = \int_{B_R(x_0)} u(x)\,dx.$$
This completes the proof of the theorem.
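The mean value property (19.3) is easy to check numerically. The sketch below (a quick sanity check, not part of the original text; the harmonic function $u(x,y) = x^2 - y^2$, the centre and the radius are arbitrary choices) computes the average of $u$ over a circle by equispaced quadrature and compares it with the value at the centre.

```python
import math

def u(x, y):
    return x * x - y * y   # harmonic in the plane: u_xx + u_yy = 2 - 2 = 0

x0, y0, Rad = 0.7, -0.3, 2.0
M = 5000                   # equispaced points on the circle
avg = sum(u(x0 + Rad * math.cos(2 * math.pi * k / M),
            y0 + Rad * math.sin(2 * math.pi * k / M)) for k in range(M)) / M

# the circle average equals u at the centre, up to rounding
assert abs(avg - u(x0, y0)) < 1e-9
```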
As a consequence we can show:

Theorem 19.3. Let $u$ be a nonnegative harmonic function. Then $u$ is constant.

Proof. Let $x, y$ be two points of $\mathbb{R}^n$. Note that $B_R(x)\subset B_{R+|y-x|}(y)$ for any $R > 0$, due to the inequality
$$|z-y| \le |z-x| + |x-y| < R + |x-y| \quad \forall z\in B_R(x). \tag{19.5}$$
Then by Theorem 19.2 we have
$$u(x) = \frac{1}{|B_R(x)|}\int_{B_R(x)} u(z)\,dz \le \frac{1}{|B_R(x)|}\int_{B_{R+|y-x|}(y)} u(z)\,dz = \frac{|B_{R+|y-x|}(y)|}{|B_R(x)|}\,\frac{1}{|B_{R+|y-x|}(y)|}\int_{B_{R+|y-x|}(y)} u(z)\,dz = \Big(\frac{R+|y-x|}{R}\Big)^n u(y).$$
Letting $R\to+\infty$ we deduce that $u(x) \le u(y)$. Since the two points are arbitrary, we have similarly $u(y) \le u(x)$ and the result follows.

More generally we have:

Theorem 19.4 (Liouville). Let $u$ be a harmonic function, bounded from above or from below. Then $u$ is constant.

Proof. Without loss of generality (at the expense of changing $u$ into $-u$) one can assume that $u$ is bounded from below, i.e.,
$$u \ge c$$
for some constant $c$. By Theorem 19.3, $u - c$ is constant, since it is a nonnegative harmonic function, and so is $u$. This completes the proof of the theorem.

Remark 19.1. The result above gives us some information about harmonic functions: besides the constants, they have to be unbounded from below and from above. Note that in one dimension the harmonic functions, i.e., the functions such that $u'' = 0$, are the affine functions $x\mapsto Ax + B$. Unless they are constant they are unbounded from below and from above.

In our search for information on harmonic functions one can try to find out whether, besides the constants, there are some which are radially symmetric, that is to say of the type
$$u(x) = v(r), \qquad r = |x|. \tag{19.6}$$
By a simple application of the chain rule we have for $r\neq0$
$$\partial_{x_i}u(x) = v'(r)\cdot\frac{x_i}{r}, \qquad \partial^2_{x_i}u(x) = v''(r)\,\frac{x_i^2}{r^2} + v'(r)\Big(\frac{1}{r} - \frac{x_i^2}{r^3}\Big).$$
By summing up we see that
$$\Delta u(x) = v''(r) + \frac{n-1}{r}\,v'(r) \qquad (r\neq0). \tag{19.7}$$
We already feel some problem at $r = 0$… So suppose that we want to find the harmonic functions in $\mathbb{R}^n\setminus\{0\}$ which in addition are radially symmetric. We have to solve
$$v''(r) + \frac{n-1}{r}\,v'(r) = 0, \qquad r\neq0,$$
or equivalently
$$r^{n-1}v''(r) + (n-1)r^{n-2}v'(r) = 0, \quad\text{i.e.,}\quad (r^{n-1}v')' = 0, \qquad r\neq0. \tag{19.8}$$
This gives
$$v' = \frac{A}{r^{n-1}}$$
for some constant $A$, and by integration
$$v = \begin{cases} A\ln r + B &\text{if } n = 2,\\[1mm] \dfrac{A}{r^{n-2}} + B &\text{if } n > 2, \end{cases} \tag{19.9}$$
(absorbing constants into $A$ and $B$).
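The radial profiles (19.9) can be checked by finite differences. The sketch below (a quick numerical check, not part of the original text; the constants, the dimension $n = 3$, the step and the tolerance are arbitrary choices) verifies that $v(r) = A\,r^{2-n} + B$ satisfies $v'' + \frac{n-1}{r}v' = 0$ away from $r = 0$.

```python
A, B, n = 2.0, -1.0, 3

def v(r):
    # candidate radial harmonic profile A r^{2-n} + B
    return A * r**(2 - n) + B

h = 1e-4
for r in [0.5, 1.0, 2.0, 5.0]:
    d1 = (v(r + h) - v(r - h)) / (2 * h)          # v'(r)
    d2 = (v(r + h) - 2 * v(r) + v(r - h)) / (h * h)  # v''(r)
    # equation (19.8): v'' + (n-1)/r v' = 0 for r != 0
    assert abs(d2 + (n - 1) / r * d1) < 1e-4
```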
(The case n = 1 was considered in Remark 19.1.) Thus again besides the constants we cannot find a radially symmetric harmonic function. However the functions given by (19.9) are playing an important rôle.

Definition 19.2. We call "fundamental solution" of the Laplace operator the function defined by

F_n(x) = (1/(2π)) ln |x|   if n = 2,
F_n(x) = −1 / (n(n−2) ω_n |x|^{n−2})   if n > 2,   (19.10)

(ω_n denoting the volume of the unit ball of R^n). The theorem below justifies Definition 19.2. We have:

Theorem 19.5. We have in the distributional sense

ΔF_n = δ_0   (19.11)
where δ_0 denotes the Dirac mass at 0.

Proof. Let φ ∈ D(R^n). We have by definition of the derivative in the distributional sense

⟨ΔF_n, φ⟩ = ⟨F_n, Δφ⟩ = ∫_{R^n} F_n Δφ dx.   (19.12)
19.1. The harmonic functions, Liouville theorem
(Note that this integral makes sense since F_n ∈ L¹_loc(R^n).) Denote by B_R(0) a big ball containing the support of φ; then we have

∫_{R^n} F_n Δφ dx = ∫_{B_R(0)\B_ε(0)} F_n Δφ dx + ∫_{B_ε(0)} F_n Δφ dx.   (19.13)
Let us consider the first integral of the right-hand side of the equation above. By the Green formula we have

∫_{B_R(0)\B_ε(0)} ∇F_n · ∇φ dx = ∫_{B_R(0)\B_ε(0)} {div(F_n ∇φ) − F_n Δφ} dx
  = ∫_{∂{B_R(0)\B_ε(0)}} F_n (∂φ/∂ν) dσ − ∫_{B_R(0)\B_ε(0)} F_n Δφ dx
  = ∫_{∂{B_R(0)\B_ε(0)}} φ (∂F_n/∂ν) dσ − ∫_{B_R(0)\B_ε(0)} φ ΔF_n dx

(the last equality is obtained by exchanging the rôle of F_n and φ). Since F_n is harmonic and φ = 0 in a neighborhood of ∂B_R(0) we obtain

∫_{B_R(0)\B_ε(0)} F_n Δφ dx = −∫_{∂B_ε(0)} φ (∂F_n/∂ν) dσ + ∫_{∂B_ε(0)} F_n (∂φ/∂ν) dσ

(in the formula above ν denotes the inward normal to ∂B_ε(0)). Thus from (19.12) and (19.13) we derive

⟨ΔF_n, φ⟩ = −∫_{∂B_ε(0)} φ (∂F_n/∂ν) dσ + ∫_{∂B_ε(0)} F_n (∂φ/∂ν) dσ + ∫_{B_ε(0)} F_n Δφ dx.   (19.14)

Since F_n Δφ ∈ L¹(B_1(0)), the last integral in the right-hand side of (19.14) converges toward 0 when ε → 0. Denote by M a constant such that

|∇φ| ≤ M   on B_1(0).

Then for ε < 1 we have

|∫_{∂B_ε(0)} F_n (∂φ/∂ν) dσ| ≤ |F_n(ε)| M |∂B_ε(0)| = C |F_n(ε)| ε^{n−1}

for some constant C independent of ε. Then it is clear by the definition of F_n that the integral above converges toward 0 when ε → 0. We have now (with the summation in i)

∂F_n/∂ν = ∇F_n · ν = F_n'(r) (x_i/r) · (−x_i/r) = −F_n'(r)   on ∂B_ε(0)

where r = |x| (we use the same notation for F_n(x) = F_n(|x|)). Thus we have

∫_{∂B_ε(0)} φ (∂F_n/∂ν) dσ = −F_n'(ε) ∫_{∂B_ε(0)} φ dσ.
Since by (19.10)

F_n'(ε) = 1/|∂B_ε(0)|,

we get

∫_{∂B_ε(0)} φ (∂F_n/∂ν) dσ = −(1/|∂B_ε(0)|) ∫_{∂B_ε(0)} φ dσ → −φ(0)

when ε → 0. Passing to the limit in (19.14) we obtain

⟨ΔF_n, φ⟩ = φ(0) = ⟨δ_0, φ⟩.

This completes the proof of the theorem.

We can now prove
Theorem 19.6. Let u be a harmonic function in R^n. Then u is analytic and thus uniquely determined by its values on any open subset of R^n.

Proof. Arguing as above (see below (19.13)),

∫_{B_R(y)\B_ε(y)} ∇F_n(x−y) · ∇u(x) dx = ∫_{B_R(y)\B_ε(y)} {div(F_n(x−y)∇u) − F_n Δu} dx
  = ∫_{∂{B_R(y)\B_ε(y)}} F_n(x−y) (∂u/∂ν)(x) dσ(x)
  = ∫_{∂{B_R(y)\B_ε(y)}} u(x) (∂F_n/∂ν)(x−y) dσ(x)

(by exchanging the rôle of F_n and u). It follows that (for ν denoting the outward unit normal)

∫_{∂B_R(y)} F_n(x−y) (∂u/∂ν)(x) dσ(x) − ∫_{∂B_ε(y)} F_n(x−y) (∂u/∂ν)(x) dσ(x)
  = ∫_{∂B_R(y)} u(x) (∂F_n/∂ν)(x−y) dσ(x) − ∫_{∂B_ε(y)} u(x) (∂F_n/∂ν)(x−y) dσ(x).

Passing to the limit in ε as in (19.14) we obtain

u(y) = ∫_{∂B_R(y)} u(x) (∂F_n/∂ν)(x−y) dσ(x) − ∫_{∂B_R(y)} F_n(x−y) (∂u/∂ν)(x) dσ(x).

Clearly, since F_n is analytic in R^n \ {0}, the right-hand side of the equality above is analytic in y and the result follows.
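Identity (19.11) can also be sanity-checked numerically in R². For the radial test function φ(x) = e^{−|x|²} (not compactly supported, but decaying fast enough for the computation to make sense), Δφ = (4r² − 4)e^{−r²} and, in polar coordinates, ∫_{R²} F₂ Δφ dx = ∫₀^∞ (ln r / 2π)(4r² − 4)e^{−r²} · 2πr dr, which should equal φ(0) = 1. A midpoint-rule sketch (not from the text; the cutoff 12 and the number of nodes are arbitrary choices):

```python
import math

def integrand(r):
    # F_2(r) * (Laplacian of exp(-r^2)) * circumference 2*pi*r
    F2 = math.log(r) / (2.0 * math.pi)
    lap_phi = (4.0 * r * r - 4.0) * math.exp(-r * r)
    return F2 * lap_phi * 2.0 * math.pi * r

# composite midpoint rule on (0, 12]; the log singularity at 0 is integrable
N, R = 200_000, 12.0
h = R / N
integral = sum(integrand((j + 0.5) * h) for j in range(N)) * h
# integral should approximate phi(0) = 1
```

The tail beyond r = 12 is negligible because of the Gaussian factor.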
19.2 The Schrödinger equation

We would like to see now if the equation (19.1) can have bounded solutions when a ≢ 0. A particular case is the so-called stationary Schrödinger equation. It has the form

−Δu + a(x)u = 0.   (19.15)

This equation gave rise to numerous works during the last decades – see for instance [67], [88] and the references there. In order to obtain a nontrivial bounded solution to (19.15) one needs to impose some decay on a at infinity. This will be clarified later on. So, let us assume

a ∈ L^∞_loc(R^n),   a ≥ 0,   a ≢ 0,   (19.16)

∫_{R^n} a(x)|x|^{2−n} dx < +∞,   (19.17)

with

n ≥ 3.   (19.18)

Then we have

Theorem 19.7 (Grigor'yan [58] and [76]–[79], [8]). Let us assume that (19.16)–(19.18) hold. Then there exists a function u ≠ 0 such that

0 ≤ u ≤ 1   in R^n   (19.19)

which satisfies

−Δu + au = 0   in D'(R^n).   (19.20)
Proof. Denote by u_k the (weak) solution to

−Δu_k + a u_k = 0   in B_k(0),
u_k = 1   on ∂B_k(0).   (19.21)

(B_k(0) denotes the ball of center 0 and radius k. We refer to Chapter 4 for the existence of a solution.) By the maximum principle we have

0 ≤ u_k ≤ 1.   (19.22)
From this we derive that

−Δu_{k+1} + a u_{k+1} = −Δu_k + a u_k   in B_k(0),

and

u_{k+1} ≤ u_k   on ∂B_k(0).

By the maximum principle again

0 ≤ u_{k+1} ≤ u_k   in B_k(0).   (19.23)

Combining (19.22), (19.23) it follows from the Lebesgue convergence theorem that there exists u such that

u_k → u   in L²_loc(R^n).

Then clearly u satisfies

−Δu + au = 0   in D'(R^n).   (19.24)
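The monotone scheme (19.21) lends itself to a numerical experiment. For n = 3 and a radial coefficient, writing w = ru turns −Δu_k + au_k = 0 into −w'' + aw = 0 with w(0) = 0, w(k) = k, which a tridiagonal (Thomas) solve handles. The sketch below is not from the text; it uses the hypothetical choice a(r) = 1 for r < 1 and a = 0 otherwise, which satisfies (19.16)–(19.18), and exhibits (19.22), the monotonicity (19.23), and a nontrivial limit:

```python
def solve_uk(k, h=0.01):
    """Finite differences for -w'' + a w = 0, w(0) = 0, w(k) = k; u = w/r (n = 3)."""
    M = round(k / h)
    a = lambda r: 1.0 if r < 1.0 else 0.0      # hypothetical compactly supported a
    # tridiagonal system for interior nodes r_i = i*h, i = 1..M-1
    diag = [2.0 / h**2 + a(i * h) for i in range(1, M)]
    off = -1.0 / h**2
    rhs = [0.0] * (M - 1)
    rhs[-1] -= off * k                          # boundary value w(k) = k
    # Thomas algorithm: forward elimination ...
    for i in range(1, M - 1):
        m = off / diag[i - 1]
        diag[i] -= m * off
        rhs[i] -= m * rhs[i - 1]
    # ... and back substitution
    w = [0.0] * (M - 1)
    w[-1] = rhs[-1] / diag[-1]
    for i in range(M - 3, -1, -1):
        w[i] = (rhs[i] - off * w[i + 1]) / diag[i]
    # u(r_i) = w_i / r_i at the interior nodes
    return {round((i + 1) * h, 10): w[i] / ((i + 1) * h) for i in range(M - 1)}

u10 = solve_uk(10.0)   # u_k for k = 10
u20 = solve_uk(20.0)   # u_k for k = 20
```

On the shared grid one observes 0 ≤ u_20 ≤ u_10 ≤ 1, and values like u_20(5) stay well away from 0, in agreement with the theorem.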
We will be done if we can show that

u ≢ 0.   (19.25)

Let us assume by contradiction that

u ≡ 0.   (19.26)

Let ζ ∈ C^∞(R^n), 0 ≤ ζ ≤ 1, be a function such that

ζ(x) = 0 for |x| < R,   ζ(x) = 1 for |x| > R + 1.

R will be chosen later on. For k > R + 1, if

a_k = 1/k^{n−2},

we have

ζ/|x|^{n−2} − a_k ∈ H¹_0(B_k(0)).

From the weak formulation of (19.21) we get

∫_{B_k(0)} ∇u_k · ∇(ζ/|x|^{n−2} − a_k) dx + ∫_{B_k(0)} a(x) u_k (ζ/|x|^{n−2} − a_k) dx = 0.   (19.27)

The first integral above can also be written as

∫_{B_k(0)} ∇u_k · ∇(ζ/|x|^{n−2} − a_k) dx
  = ∫_{B_k(0)} ∇(u_k − 1) · ∇(ζ/|x|^{n−2} − a_k) dx
  = ∫_{B_k(0)} (1 − u_k) Δ(ζ/|x|^{n−2} − a_k) dx
  = ∫_{B_k(0)} Δ(ζ/|x|^{n−2}) dx − ∫_{B_k(0)} u_k Δ(ζ/|x|^{n−2}) dx
  = ∫_{∂B_k(0)} (∂/∂ν)(ζ/|x|^{n−2}) dσ(x) − ∫_{B_k(0)} u_k Δ(ζ/|x|^{n−2}) dx
  = ∫_{∂B_k(0)} (∂/∂ν)(1/|x|^{n−2}) dσ(x) − ∫_{B_k(0)} u_k Δ(ζ/|x|^{n−2}) dx
  = (2 − n) ∫_{∂B_k(0)} k^{1−n} dσ(x) − ∫_{B_k(0)} u_k Δ(ζ/|x|^{n−2}) dx
  = (2 − n) σ_n − ∫_{B_k(0)} u_k Δ(ζ/|x|^{n−2}) dx,
where σ_n is the area of the unit sphere in R^n. From (19.27) we then deduce easily

−∫_{B_k(0)} u_k Δ(ζ/|x|^{n−2}) dx + ∫_{B_k(0)} a(x) u_k (ζ/|x|^{n−2}) dx ≥ (n − 2) σ_n.   (19.28)

Now Δ(ζ/|x|^{n−2}) has compact support in B_{R+1}(0) \ B_R(0). Letting k → +∞, if u_k → 0 we derive then that

lim_{k→+∞} −∫_{B_k(0)} u_k Δ(ζ/|x|^{n−2}) dx = 0.

Since also

∫_{|x|>R} a(x)|x|^{2−n} dx ≥ ∫_{B_k(0)} a(x) u_k (ζ/|x|^{n−2}) dx,

we get passing to the limit in (19.28)

∫_{|x|>R} a(x)|x|^{2−n} dx ≥ (n − 2) σ_n.

Since R can be arbitrary this contradicts (19.17) and completes the proof of the theorem.

Remark 19.2. A function a with compact support provides a very simple example of a function for which (19.17) holds.

The example given in the remark above and (19.17) itself show that some decay of a seems to be needed in order to get a nontrivial bounded solution. Also we did not consider the case of dimension less than 3. This is what we would like to consider now. Suppose that u is a weak solution to

u ∈ H¹_loc(R^n),   −div(A(x)∇u) + a(x)u = 0 in R^n,   (19.29)
where A is a positive definite matrix satisfying for some constants λ and Λ

λ|ξ|² ≤ A(x)ξ · ξ   ∀ ξ ∈ R^n, a.e. x ∈ R^n,   (19.30)
|A(x)ξ| ≤ Λ|ξ|   ∀ ξ ∈ R^n, a.e. x ∈ R^n,   (19.31)

and a ∈ L^∞_loc(R^n) is a nonnegative function. Let us denote by ρ a smooth function such that

0 ≤ ρ ≤ 1,   ρ = 1 on B_{1/2},   ρ = 0 outside B_1,   (19.32)
|∇ρ| ≤ c_ρ.   (19.33)
Then we can show

Theorem 19.8. Set B_r = B_r(0). If u is a solution to (19.29) we have

∫_{B_r} {|∇u|² + au²} ρ²(x/r) dx ≤ (C/r) (∫_{B_r\B_{r/2}} |∇u|² ρ²(x/r) dx)^{1/2} (∫_{B_r\B_{r/2}} u² dx)^{1/2}   (19.34)

where C is a constant independent of r.

Proof. Consider then

u ρ²(x/r) ∈ H¹_0(B_r).

From (19.29) we derive

∫_{B_r} {A(x)∇u · ∇(u ρ²(x/r)) + a(x) u² ρ²(x/r)} dx = 0.

This implies

∫_{B_r} (A(x)∇u · ∇u) ρ²(x/r) dx + ∫_{B_r} a(x) u² ρ²(x/r) dx = −(1/r) ∫_{B_r} 2A(x)∇u · ∇ρ(x/r) u ρ(x/r) dx.

In this last integral one integrates in fact on B_r \ B_{r/2}. Thus it comes by (19.30), (19.31)

∫_{B_r} {λ|∇u|² + a(x)u²} ρ²(x/r) dx ≤ (2Λc_ρ/r) ∫_{B_r\B_{r/2}} |∇u| |u| ρ(x/r) dx.

By the Cauchy–Schwarz inequality we obtain

(λ ∧ 1) ∫_{B_r} {|∇u|² + au²} ρ²(x/r) dx ≤ (2Λc_ρ/r) (∫_{B_r\B_{r/2}} |∇u|² ρ²(x/r) dx)^{1/2} (∫_{B_r\B_{r/2}} u² dx)^{1/2}

where λ ∧ 1 denotes the minimum of λ and 1. This inequality is nothing but (19.34) and the proof of the theorem is complete.

As a first application of this inequality we have

Theorem 19.9. If n ≤ 2 there is no nontrivial bounded solution to (19.29), i.e., every bounded solution to (19.29) is a constant, which is equal to 0 when a ≢ 0.

Proof. From (19.34) we derive
∫_{B_r} |∇u|² ρ²(x/r) dx ≤ (C/r) (∫_{B_r\B_{r/2}} |∇u|² ρ²(x/r) dx)^{1/2} (∫_{B_r\B_{r/2}} u² dx)^{1/2}   (19.35)

from which it follows that

∫_{B_r} |∇u|² ρ²(x/r) dx ≤ (C²/r²) ∫_{B_r\B_{r/2}} u² dx.

Since ρ(x/r) = 1 on B_{r/2} we get

∫_{B_{r/2}} |∇u|² dx ≤ (C²/r²) ∫_{B_r\B_{r/2}} u² dx.

If u is bounded, we derive when n ≤ 2 that

∫_{B_{r/2}} |∇u|² dx ≤ (C²/r²) ∫_{B_r\B_{r/2}} u² dx ≤ C

for some constant C independent of r. Thus the mapping

r → ∫_{B_{r/2}} |∇u|² dx

is nondecreasing and bounded. It has a limit when r → +∞. From (19.35) we deduce then that

∫_{B_{r/2}} |∇u|² dx ≤ C (∫_{B_r} |∇u|² dx − ∫_{B_{r/2}} |∇u|² dx)^{1/2} → 0

when r → +∞. This shows that u = cst (and, when a ≢ 0, applying (19.34) to this constant forces it to vanish) and completes the proof of the theorem.

We now consider the case when a is not small at infinity. To simplify we will assume

a(x) ≥ λ > 0   a.e. x,   (19.36)

referring the reader to [22] for deeper results. We have
Theorem 19.10. Under the assumptions of Theorem 19.9, (19.36), and n arbitrary there is no nontrivial bounded solution to (19.29).

Proof. From (19.34) we derive easily

∫_{B_{r/2}} {|∇u|² + au²} dx ≤ (C/r) (∫_{B_r} {|∇u|² + au²} dx)^{1/2} (∫_{B_r} u² dx)^{1/2}
  ≤ (C/r) (∫_{B_r} {|∇u|² + au²} dx)^{1/2} (∫_{B_r} (a/λ) u² dx)^{1/2}
  ≤ (C'/r) ∫_{B_r} {|∇u|² + au²} dx

where C' = C/√λ is independent of r. Iterating this formula we get

∫_{B_{r/2^{k+1}}} {|∇u|² + au²} dx ≤ ((C')^k / r^k) ∫_{B_{r/2}} {|∇u|² + au²} dx.   (19.37)

Now from (19.34) we also have

∫_{B_r} {|∇u|² + au²} ρ²(x/r) dx ≤ (C/r) (∫_{B_r} {|∇u|² + au²} ρ²(x/r) dx)^{1/2} (∫_{B_r\B_{r/2}} u² dx)^{1/2}

which leads to

∫_{B_r} {|∇u|² + au²} ρ²(x/r) dx ≤ (C²/r²) ∫_{B_r\B_{r/2}} u² dx

and to

∫_{B_{r/2}} {|∇u|² + au²} dx ≤ (C²/r²) ∫_{B_r\B_{r/2}} u² dx.

Going back to (19.37) we obtain

∫_{B_{r/2^{k+1}}} {|∇u|² + au²} dx ≤ ((C')^k / r^k) · (C²/r²) ∫_{B_r\B_{r/2}} u² dx.

If u is bounded we obtain

∫_{B_{r/2^{k+1}}} {|∇u|² + au²} dx ≤ C_k / r^{k+2−n}.

Choosing k + 2 − n > 0 and letting r → +∞ completes the proof.
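In the simplest one-dimensional caricature of this regime, −u'' + u = 0 on (−k, k) with boundary value 1, the Dirichlet approximations u_k(x) = cosh x / cosh k collapse locally to 0 as k → +∞, in line with the theorem (the only bounded solution on R is u ≡ 0). A quick check (not from the text; the sample point x = 1 and the list of radii k are arbitrary choices):

```python
import math

def u_k(x, k):
    # solves -u'' + u = 0 on (-k, k) with u(±k) = 1
    return math.cosh(x) / math.cosh(k)

# local values at x = 1 for growing k: they decrease to 0
vals = [u_k(1.0, k) for k in (2, 5, 10, 20)]
```

Contrast this with the scheme of Theorem 19.7, where the lack of decay of a ≡ 1 kills the limit.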
Exercises

1. Using Theorem 19.2 show that a harmonic function cannot achieve a local maximum or a local minimum without being constant.

2. Show that in arbitrary dimension there is no nontrivial solution u to (19.29) such that

(1/r²) ∫_{B_r\B_{r/2}} u² dx ≤ C   ∀ r > 0.

3. A function u is called "subharmonic" if

u ∈ C²(R^n),   Δu ≥ 0.

Show that for a subharmonic function we have

u(x₀) ≤ (1/|∂B_R(x₀)|) ∫_{∂B_R(x₀)} u(y) dσ(y)   ∀ x₀ ∈ R^n, ∀ R > 0,
u(x₀) ≤ (1/|B_R(x₀)|) ∫_{B_R(x₀)} u(y) dy   ∀ x₀ ∈ R^n, ∀ R > 0.

4. Show that there exist nontrivial positive bounded subharmonic functions. (Hint: Use Theorem 19.7. Compare to [73].)

5. Rephrase Theorems 19.1, 19.2 for

u ∈ C²(Ω),   Δu = 0 in Ω,

where Ω is some open set of R^n.

6. Let b, c ∈ R, b ≠ 0. Show that for f ∈ L^∞(R) the equation

u'' + cu' + bu = f   in R

admits a unique solution u = Λ(f). Show that Λ is a bounded operator from L^∞(R) into itself.

7. Let Ω = R × (−1, 1). Let u be a weak solution to

−Δu = 0 in Ω,   u = 0 on ∂Ω = R × {−1, 1}.   (1)

The first equation of (1) means that for any bounded open set Ω' ⊂ Ω one has u ∈ H¹(Ω') and

∫_{Ω'} ∇u · ∇v dx = 0   ∀ v ∈ H¹_0(Ω').

The second condition means that for any function ρ = ρ(x₁) which is bounded, piecewise C¹, continuous and with support in (−ℓ, ℓ) one has ρu ∈ H¹_0(Ω_ℓ) where Ω_ℓ = (−ℓ, ℓ) × (−1, 1).

(i) Using the technique of Chapter 6 show that for any ℓ₂ < ℓ₁ one has

|∇u|_{2,Ω_{ℓ₂}} ≤ e^{−√λ₁ (ℓ₁ − ℓ₂)} |∇u|_{2,Ω_{ℓ₁}}

(λ₁ is defined by (6.11)).

(ii) (Liouville type theorem). Show that there is no nontrivial solution to (1) such that for some α < √λ₁, C > 0 one has

|∇u|_{2,Ω_{ℓ₁}} ≤ C e^{α ℓ₁}.

(iii) Show that this result is sharp since

u = e^{(π/2) x₁} cos((π/2) x₂)

is a nontrivial solution to (1).

8. Let u be such that

u ∈ L²_loc(Ω),   Δu = 0 in D'(Ω).

Show that u is harmonic.

9. We suppose that

∫₀^{+∞} r a(r) dr = +∞.

Show that there is no bounded nontrivial radially symmetric solution to

−Δu + a(r)u = 0   in D'(R^n).

(See [22].)

10. Let u be a harmonic function. Show that

|∇u(y)| ≤ (n/R) Sup_{∂B_R(y)} |u|.

Give another proof of the Liouville theorem for bounded harmonic functions.
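For Exercise 7(iii) one can confirm numerically that u = e^{(π/2)x₁} cos((π/2)x₂) is harmonic and vanishes on x₂ = ±1. A finite-difference sketch (not from the text; the sample points and step size are arbitrary choices):

```python
import math

def u(x1, x2):
    return math.exp(0.5 * math.pi * x1) * math.cos(0.5 * math.pi * x2)

def lap_u(x1, x2, h=1e-3):
    """Central-difference Laplacian of u."""
    return ((u(x1 + h, x2) - 2 * u(x1, x2) + u(x1 - h, x2)) / h**2
            + (u(x1, x2 + h) - 2 * u(x1, x2) + u(x1, x2 - h)) / h**2)

# Laplacian residual at a few interior points of the strip
residual = max(abs(lap_u(a, b)) for a in (-1.0, 0.0, 0.7) for b in (-0.5, 0.0, 0.5))
# boundary values on x2 = ±1 (should vanish)
boundary = max(abs(u(a, s)) for a in (-1.0, 0.0, 2.0) for s in (1.0, -1.0))
```

Note the growth rate e^{(π/2)ℓ₁} of |∇u|_{2,Ω_{ℓ₁}} matches √λ₁ = π/2, which is why (ii) is sharp.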
Appendix

Fixed Point Theorems

We present here for the reader's convenience some fixed point theorems that we used in the text. We follow some lines of [47].

A.1 The Brouwer fixed point theorem

In this section B_r(x) denotes the closed ball defined by

B_r(x) = { y ∈ R^n | |y − x| ≤ r }.

Theorem A.1. Let B₁(0) be the unit ball in R^n and let F : B₁(0) → B₁(0) be a continuous mapping. Then F admits a fixed point, that is to say a point for which

F(x) = x.   (A.1)

Proof. Let us give first a naive proof when n = 1. In this case F : [−1, 1] → [−1, 1]. If we set G(x) = F(x) − x we have G(−1) = F(−1) + 1 ≥ 0, G(1) = F(1) − 1 ≤ 0 since F(−1), F(1) ∈ [−1, 1]. Since G is continuous there exists a point x such that G(x) = 0, i.e., such that (A.1) holds. The proof in higher dimensions is more involved and requires some preliminaries.

Definition A.1. Let A be an n × n matrix. The matrix of the cofactors of A is the matrix given by

Cof A = ((−1)^{i+k} det A_{ik})_{i,k=1,...,n}   (A.2)

where A_{ik} denotes the matrix deduced from A by deleting the ith row and the kth column.
Appendix: Fixed Point Theorems
We then have

Proposition A.2. Let u : R^n → R^n be a C²-mapping. Denote by Cof(∇u)_{ik} the entries of Cof(∇u). Then we have, with the summation convention in k,

∂_{x_k} (Cof(∇u)_{ik}) = 0   ∀ i = 1, . . . , n,   (A.3)

i.e., the rows of the matrix Cof(∇u) are divergence free.

Proof. By definition of the matrix Cof A we have

(det A) Id = A^T Cof A.   (A.4)

This matricial equality can also be written as

(det A) δ_{ij} = a_{ki} (Cof A)_{kj}

(we set A = (a_{ij})). In particular since (Cof A)_{kj} contains no term in a_{ki} we have

∂_{a_{ki}} (det A) = (Cof A)_{ki}.   (A.5)

Replacing now A by ∇u in (A.4) we derive

(det ∇u) δ_{ij} = ∂_{x_i} u^k (Cof ∇u)_{kj}   ∀ i, j = 1, . . . , n,

where u = (u¹, . . . , uⁿ). Taking the derivative in the direction x_j and summing up we obtain

∂_{x_j} (det ∇u) δ_{ij} = ∂²_{x_i x_j} u^k (Cof ∇u)_{kj} + ∂_{x_i} u^k ∂_{x_j} (Cof ∇u)_{kj}.   (A.6)

Using the chain rule and (A.5) we also have

∂_{x_i} (det ∇u) = (Cof ∇u)_{kj} · ∂_{x_i} (∂_{x_j} u^k)

and (A.6) becomes

∂_{x_i} u^k ∂_{x_j} (Cof ∇u)_{kj} = 0   ∀ i = 1, . . . , n.   (A.7)

If ∇u(x) is invertible then we obtain ∂_{x_j} (Cof ∇u(x))_{kj} = 0. If det ∇u(x) = 0 then, replacing u by u + εx, the result follows by letting ε → 0. This completes the proof of the proposition.
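Identity (A.3) can be checked numerically for a concrete C² map. In the sketch below (not from the text) the Jacobian of a hypothetical map u : R³ → R³ is coded by hand, its cofactor matrix is formed exactly, and only the outer divergence in (A.3) is approximated by central differences:

```python
import math

def jac(x):
    """Jacobian of u(x) = (sin(x2*x3), x1**2 * x3, cos(x1) + x2**2)."""
    x1, x2, x3 = x
    return [[0.0, x3 * math.cos(x2 * x3), x2 * math.cos(x2 * x3)],
            [2.0 * x1 * x3, 0.0, x1 ** 2],
            [-math.sin(x1), 2.0 * x2, 0.0]]

def cof(A):
    """Cofactor matrix of a 3x3 matrix: Cof(A)[i][k] = (-1)**(i+k) det A_ik."""
    C = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for k in range(3):
            r = [p for p in range(3) if p != i]
            c = [q for q in range(3) if q != k]
            minor = (A[r[0]][c[0]] * A[r[1]][c[1]]
                     - A[r[0]][c[1]] * A[r[1]][c[0]])
            C[i][k] = (-1) ** (i + k) * minor
    return C

def div_cof_row(i, x, h=1e-4):
    """Central-difference approximation of sum_k d/dx_k Cof(grad u)_{ik}."""
    s = 0.0
    for k in range(3):
        xp = list(x); xp[k] += h
        xm = list(x); xm[k] -= h
        s += (cof(jac(xp))[i][k] - cof(jac(xm))[i][k]) / (2.0 * h)
    return s

x0 = [0.4, -0.7, 1.1]
residuals = [abs(div_cof_row(i, x0)) for i in range(3)]  # all ~ 0 by (A.3)
```

The residuals vanish (up to O(h²)) even though the cofactor entries themselves are far from constant.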
As a consequence we have

Proposition A.3. Let Ω be a smooth bounded open set of R^n and u, v : Ω̄ → R^n be C²(Ω̄)-mappings such that

u = v on ∂Ω.   (A.8)

Then we have

∫_Ω det ∇u dx = ∫_Ω det ∇v dx.   (A.9)

Proof. We set for t ∈ [0, 1]

f(t) = ∫_Ω det ∇{u + t(v − u)} dx.

By the chain rule we have (see (A.5))

f'(t) = ∫_Ω (d/dt) det{∇(u + t(v − u))} dx
  = ∫_Ω Cof{∇(u + t(v − u))}_{ik} ∂_{x_k} (v^i − u^i) dx
  = −∫_Ω ∂_{x_k} (Cof{∇(u + t(v − u))}_{ik}) (v^i − u^i) dx   (integrating by parts, using (A.8))
  = 0   (by (A.3)).

It follows that f is constant on (0, 1) and since it is continuous f(0) = f(1). This is (A.9) and the proof of the proposition is complete.

Proof of Brouwer's theorem.

1. One cannot find w : B₁(0) → ∂B₁(0) which is C² and satisfies w(x) = x on ∂B₁(0). Indeed, in case such a function exists, by the proposition above we have

∫_{B₁(0)} det ∇w dx = ∫_{B₁(0)} det ∇Id(x) dx = |B₁(0)|   (A.10)
where |B₁(0)| denotes the measure of the unit ball. On the other hand we have w · w ≡ 1 and differentiating we derive ∇w · w ≡ 0. It follows that det ∇w ≡ 0, a contradiction to (A.10).

2. One cannot find w : B₁(0) → ∂B₁(0) continuous and satisfying w(x) = x on ∂B₁(0). Suppose such a function exists. We extend it by x outside B₁(0). Then we consider

w_ε(x) = ρ_ε ∗ w(x),   (A.11)

i.e., we mollify each component of w. Then for ε small enough one has

w_ε(x) = x on ∂B₂(0).   (A.12)

Moreover – see Chapter 2 – w_ε → w uniformly on B₂(0) and for ε small enough we have |w_ε| > 1/2. It follows that the mapping

w̃_ε(x) = 2w_ε / |w_ε|

is a smooth mapping satisfying all the assumptions of part 1 with B₁(0) replaced by B₂(0). This completes the proof of this part.

3. Let F be a continuous map such that F(x) ≠ x ∀ x. Let w(x) be the point of ∂B₁(0) where the half-line issued from F(x) through x crosses ∂B₁(0). Then w is a mapping as in Part 2 and we get a contradiction. This completes the proof of the theorem.

Figure A.1. (The retraction w: the half-line from F(x) through x meets ∂B₁(0) at w(x).)

Corollary A.4. Let K be a compact convex subset of R^n and F : K → K be a continuous mapping. Then F admits a fixed point.

Proof. Since K is compact it is bounded. Let R > 0 be such that K ⊂ B_R(0). Set G = F ∘ P_K where P_K denotes the projection on the convex set K (see Theorem 1.1 and (11.62)). Then G is a continuous mapping from B_R(0) into itself and by Theorem A.1 it possesses a fixed point x, i.e., such that F(P_K(x)) = x. Since F maps into K we have x ∈ K, hence P_K(x) = x and x is also a fixed point for F. This completes the proof.
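In one dimension the naive argument at the start of the proof of Theorem A.1 is constructive: G = F − id changes sign on [−1, 1], so bisection locates a fixed point. A sketch (the map F below is an arbitrary example, not from the text):

```python
import math

def brouwer_1d(F, tol=1e-12):
    """Bisection on G(x) = F(x) - x for continuous F : [-1, 1] -> [-1, 1].

    Maintains G(a) >= 0 >= G(b), as in the proof of Theorem A.1.
    """
    a, b = -1.0, 1.0
    while b - a > tol:
        m = 0.5 * (a + b)
        if F(m) - m >= 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

F = lambda x: math.cos(x)   # maps [-1, 1] into [cos 1, 1], a subset of [-1, 1]
x_star = brouwer_1d(F)      # fixed point of cos
```

In higher dimensions no such elementary algorithm falls out of the proof, which is precisely why the cofactor machinery above is needed.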
A.2 The Schauder fixed point theorem

The Schauder fixed point theorem is a generalization of the Brouwer fixed point theorem in infinite dimensions. More precisely we have

Theorem A.5. Let K be a compact convex subset of a Banach space B. Let F be a continuous mapping from K into itself; then F admits a fixed point.

Proof. Since K is compact, for any ε > 0 one can cover K with N(ε) open balls

B_ε(x_k),   k = 1, . . . , N = N(ε).

Denote by K_ε the convex hull of the x_k. Consider the mapping I_ε from K into K_ε defined by

I_ε(x) = Σ_i dist(x, K \ B_ε(x_i)) x_i / Σ_i dist(x, K \ B_ε(x_i)).

Note that I_ε is well defined and continuous (see Exercise 3). By a simple computation one has for every x ∈ K

|I_ε(x) − x| = |Σ_i dist(x, K \ B_ε(x_i)) (x_i − x)| / Σ_i dist(x, K \ B_ε(x_i))
  ≤ Σ_i dist(x, K \ B_ε(x_i)) |x_i − x| / Σ_i dist(x, K \ B_ε(x_i)) ≤ ε.   (A.13)

Consider then the mapping F_ε = I_ε ∘ F. As a mapping from K_ε into itself it has a fixed point x_ε. Now up to a subsequence (K is compact) one can assume that

x_ε → x₀.

We have then by (A.13)

|F(x_ε) − x_ε| = |F(x_ε) − I_ε(F(x_ε))| ≤ ε.

Letting ε → 0 we see – since F is continuous – that x₀ is a fixed point. This completes the proof of the theorem.

As a corollary we have

Corollary A.6. Let K be a closed convex set of a Banach space B and F a continuous mapping from K into K such that F(K) is precompact. Then F admits a fixed point.

Proof. Apply the previous theorem for the compact convex set K̃ = the closure of the convex hull of F(K) (see Exercise 4).
Remark A.3. The compactness in the analysis above cannot be dropped. Consider for instance the space of real sequences

ℓ² = { (x_i)_i | Σ_{i=1}^{+∞} x_i² < +∞ }.

This is a Hilbert space for the norm

|x|₂ = (Σ_{i=1}^{+∞} x_i²)^{1/2}   ∀ x = (x_i).

Now the application defined by

F(x) = (√(1 − |x|₂²), x₁, x₂, . . . )   ∀ x = (x₁, x₂, . . . )

maps the unit ball of ℓ² into the unit sphere. A fixed point x would be such that

F^n(x) = x   ∀ n

(F^n = F ∘ F ∘ · · · ∘ F, n times) and thus, since |x|₂ = 1 makes the first component of F(x) vanish,

x₁ = x₂ = · · · = x_n = 0   ∀ n.

The only possibility would be x = 0, which clearly is not a fixed point since it does not belong to the unit sphere.
Exercises

1. Let u : Ω ⊂ R^n → R^n, u ∈ C¹(Ω). Prove that for all x ∈ Ω there exists ε_x > 0 such that

det ∇(u + ε Id)(x) ≠ 0   ∀ ε ∈ (−ε_x, ε_x) \ {0}.

2. Let ρ ∈ D(R^n) be such that ∫_{R^n} ρ dx = 1 and ρ(x) = ρ(−x) ∀ x ∈ R^n. Prove that Id ∗ ρ = Id (we mollify each component of the identity mapping) and justify (A.12) for ρ_ε well chosen.

3. (a) Prove that the mapping I_ε : K → K_ε defined in the proof of Theorem A.5 is well defined and continuous.
(b) Justify carefully the inequality (A.13).

4. Let A be a precompact subset of a Banach space X. Prove that the closure of the convex hull of A is a compact convex set.
Bibliography [1] R.A. Adams. Sobolev Spaces. Academic Press, New York, 1975. [2] H. Amann. Existence of multiple solutions for nonlinear elliptic boundary value problems. Indiana Univ. Math. J., 21:925–935, 1971/72. [3] H. Amann. On the existence of positive solutions of nonlinear elliptic boundary value problems. Indiana Univ. Math. J., 21:125–146, 1971/72. [4] H. Amann. Existence and multiplicity theorems for semi-linear elliptic boundary value problems. Math. Z., 150:281–295, 1976. [5] H. Amann. Supersolutions, monotone iterations, and stability. J. Differential Equations, 21:363–377, 1976. [6] N. Andr´e and M. Chipot. A remark on uniqueness for quasilinear elliptic equations. In Proceedings of the Banach Center, volume 33, pages 9–18, 1996. [7] N. Andr´e and M. Chipot. Uniqueness and non uniqueness for the approximation of quasilinear elliptic equation. SIAM J. of Numerical Analysis, 33, 5:1981–1994, 1996. [8] W. Arendt, C.J.K. Batty, and P. B´enilan. Asymptotic stability of Schr¨ odinger semigroups on L1 (RN ). Math. Z., 209:511–518, 1992. [9] C. Baiocchi. Sur un probl`eme `a fronti`ere libre traduisant le filtrage de liquides a travers des milieux poreux. C.R. Acad. Sc. Paris, S´erie A 273:1215–1217, ` 1971. [10] C. Baiocchi. Su un problema di frontiera libera connesso a questioni di idraulica. Ann. Mat. Pura Appl., 92:107–127, 1972. [11] C. Baiocchi and A. Capelo. Disequazioni Variazionali e Quasivariazionali, volume 1 and 2. Pitagora Editrice, Bologna, 1978. [12] C.J.K. Batty. Asymptotic stability of Schr¨ odinger semigroups: path integral methods. Math. Ann., 292:457–492, 1992. [13] A. Bensoussan and J.L. Lions. Application des In´equations Variationnelles en Contrˆ ole Stochastique. Dunod, Paris, 1978. [14] A. Bensoussan, J.L. Lions, and G. Papanicolaou. Asymptotic Analysis for Periodic Structures. North Holland, Amsterdam, 1978.
[15] H. Berestycki, L. Nirenberg, and S.R.S. Varadhan. The principal eigenvalue and maximum principle for second-order elliptic operators in general domains. Commun. Pure Appl. Math., 47:47–92, 1994. [16] J. Blat and K.J. Brown. Global bifurcation of positive solutions in some systems of elliptic equations. SIAM J. Math. Anal., 17:1339–1353, 1986. [17] J.M. Bony. Principe du maximum dans les espaces de Sobolev. C.R. Acad. Sc. Paris, 265:333–336, 1967. [18] H. Brezis. Equations et inéquations non linéaires dans les espaces vectoriels en dualité. Ann. Inst. Fourier, 18:115–175, 1968. [19] H. Brezis. Problèmes unilatéraux. J. Math. Pures Appl., 51:1–168, 1972. [20] H. Brezis. Operateurs maximaux monotones et semigroupes de contractions dans les espaces de Hilbert, volume 5 of Math. Studies. North Holland, Amsterdam, 1975. [21] H. Brezis. Analyse fonctionnelle. Masson, Paris, 1983. [22] H. Brezis, M. Chipot, and Y. Xie. Some remarks on Liouville type theorems. In proceedings of the international conference in nonlinear analysis, pages 43–65, Hsinchu, Taiwan 2006, World Scientific, 2008. [23] H. Brezis and D. Kinderlehrer. The smoothness of solutions to nonlinear variational inequalities. Indiana Univ. Math. J., 23:831–844, 1974. [24] H. Brezis and G. Stampacchia. Sur la régularité de la solution d'inéquations elliptiques. Bull. Soc. Math. France, 96:152–180, 1968. [25] N. Bruyère. Limit behaviour of a class of nonlinear elliptic problems in infinite cylinders. Advances in Diff. Equ., 10:1081–1114, 2007. [26] L.A. Caffarelli and A. Friedman. The free boundary for elastic-plastic torsion problems. Trans. Amer. Math. Soc., 252:65–97, 1979. [27] J. Carrillo-Menendez and M. Chipot. On the dam problem. J. Diff. Eqns., 45:234–271, 1982. [28] M. Chipot. Variational inequalities and flow in porous media. Springer Verlag, New York, 1984. [29] M. Chipot. Elements of Nonlinear Analysis. Birkhäuser, 2000. [30] M. Chipot. ℓ goes to plus infinity. Birkhäuser, 2002. [31] M.
Chipot and N.-H. Chang. On some mixed boundary Value problems with nonlocal diffusion. Advances Math. Sc. Appl., 14, 1:1–24, 2004. [32] M. Chipot and S. Mardare. Asymptotic behavior of the Stokes problem in cylinders becoming unbounded in one direction. J. Math. Pures Appl., 90:133– 159, 2008. [33] M. Chipot and J.F. Rodrigues. On a class of nonlocal nonlinear elliptic problems. M2 AN, 26, 3:447–468, 1992.
[34] M. Chipot and A. Rougirel. On the asymptotic behavior of the solution of parabolic problems in domains of large size in some directions. DCDS Series B, 1:319–338, 2001. [35] M. Chipot and A. Rougirel. On the asymptotic behavior of the solution of elliptic problems in cylindrical domains becoming unbounded. Communications in Contemporary Math., 4, 1:15–24, 2002. [36] M. Chipot and Y. Xie. Elliptic problems with periodic data: an asymptotic Analysis. Journ. Math. pures et Appl., 85:345–370, 2006. [37] M. Chipot and K. Yeressian. Exponential rates of convergence by an iteration technique. C.R. Acad. Sci. Paris, S´er. I 346:21–26, 2008. [38] P.G. Ciarlet. The finite element method for elliptic problems. North Holland, Amsterdam, 1978. [39] P.G. Ciarlet. Mathematical Elasticity. North Holland, Amsterdam, 1988. [40] D. Cioranescu and P. Donato. An Introduction to homogenization, volume #17 of Oxford Lecture Series in Mathematics and its Applications. Oxford Univ. Press, 1999. [41] D. Cioranescu and J. Saint Jean Paulin. Homogenization of Reticulated Structures, volume 139 of Applied Mathematical Sciences. Springer Verlag, New York, 1999. [42] E. Conway, R. Gardner, and J. Smoller. Stability and bifurcation of steadystate solutions for predator-prey equations. Adv. Appl. Math., 3:288–334, 1982. [43] B. Dacorogna. Direct Methods in the Calculus of Variations. Springer-Verlag, Berlin, 1989. [44] R. Dautray and J.L. Lions. Mathematical Analysis and Numerical Methods for Science and Technology. Springer-Verlag, 1988. [45] G. Duvaut and J.L. Lions. Les In´equations en M´ecanique et en Physique. Dunod, Paris, 1972. [46] L.C. Evans. Weak Convergence Methods for Nonlinear Partial Differential Equations, volume # 74 of CBMS. American Math. Society, 1990. [47] L.C. Evans. Partial Differential Equations, volume 19 of Graduate Studies in Mathematics. American Math. Society, Providence, 1998. [48] G. Folland. Introduction to Partial Differential Equations. 
Princeton University Press, 1976. [49] G. Folland. Real Analysis: Modern Techniques and their Applications. Wiley– Interscience, 1984. [50] A. Friedman. Variational principles and free-boundary problems. R.E. Krieger Publishing Company, Malabar, Florida, 1988. [51] G.P. Galdi. An Introduction to the Mathematical Theory of Navier–Stokes Equations. Volume I: Linearized steady problems. Springer, 1994.
[52] G.P. Galdi. An Introduction to the Mathematical Theory of Navier–Stokes Equations. Volume II: Nonlinear steady problems. Springer, 1994. [53] M. Giaquinta. Multiple integrals in the calculus of variations and nonlinear elliptic systems, volume 105 of Annals of math studies. Princeton University Press, 1983. [54] B. Gidas, W.-M. Ni, and L. Nirenberg. Symmetry and related properties via the maximum principle. Comm. Math. Phys., 68:209–243, 1979. [55] D. Gilbarg and N.S. Trudinger. Elliptic Partial Differential Equations of Second Order. Springer Verlag, 1983. [56] V. Girault and P.A. Raviart. Finite Element Methods for Navier–Stokes Equations, volume 749 of Lecture Notes in Mathematics. Springer Verlag, Berlin, 1981. [57] E. Giusti. Equazioni ellittiche del secondo ordine, volume #6 of Quaderni dell’ Unione Matematica Italiana. Pitagora Editrice, Bologna, 1978. [58] A. Grigor’yan. Bounded solutions of the Schr¨ odinger equation on noncompact Riemannian manifolds. J. Sov. Math., 51:2340–2349, 1990. [59] C. Gui and Y. Lou. Uniqueness and nonuniqueness of coexistence states in the Lotka-Volterra model. Comm. Pure Appl. Math., XLVII:1–24, 1994. [60] P. Hartman and G. Stampacchia. On some nonlinear elliptic differential functional equations. Acta Math., 115:153–188, 1966. [61] V.V. Jikov, S.M. Kozlov, and O.A. Oleinik. Homogenization of Differential Operators and Integral Functionals. Springer-Verlag, Berlin, Heidelberg, 1994. [62] D. Kinderlehrer. The coincidence set of solutions of certain variational inequalities. Arch. Rat. Mech. Anal., 40:231–250, 1971. [63] D. Kinderlehrer. Variational inequalities and free boundary problems. Bull. Amer. Math. Soc., 84:7–26, 1978. [64] D. Kinderlehrer and G. Stampacchia. An Introduction to Variational Inequalities and their Applications, volume 31 of Classic Appl. Math. SIAM, Philadelphia, 2000. [65] O. Ladyzhenskaya and N. Uraltseva. Linear and Quasilinear Elliptic Equations. Academic Press, New York, 1968. [66] H. Lewy and G. 
Stampacchia. On the regularity of the solution of a variational inequality. Comm. Pure Appl. Math., 22:153–188, 1969. [67] E.H. Lieb and M. Loss. Analysis, Graduate Studies in Mathematics, volume # 14. AMS, Providence, 2000. [68] J.L. Lions. Quelques m´ethodes de r´esolution des probl`emes aux limites non lin´eaires. Dunod–Gauthier–Villars, Paris, 1969. [69] J.L. Lions. Perturbations singuli`eres dans les probl`emes aux limites et en contrˆ ole optimal, volume # 323 of Lecture Notes in Mathematics. SpringerVerlag, 1973.
[70] J.L. Lions and E. Magenes. Nonhomogeneous boundary value problems and applications, volume I–III. Springer-Verlag, 1972. [71] J.L. Lions and G. Stampacchia. Variational inequalities. Comm. Pure Appl. Math., 20:493–519, 1967. [72] G. Dal Maso. An introduction to Γ-convergence. Birkh¨ auser, Boston, 1993. [73] M. Meier. Liouville theorem for nonlinear elliptic equations and systems. Manuscripta Mathematica, 29:207–228, 1979. [74] J. Moser. On Harnack’s theorem for elliptic differential equations. Comm. Pure Appl. Math., 14:577–591, 1961. [75] J. Ne˘cas. Les m´ethodes directes en th´eorie des ´equations elliptiques. Masson, Paris, 1967. [76] Y. Pinchover. On the equivalence of green functions of second-order elliptic equations in Rn . Diff. and Integral Equations, 5:481–493, 1992. [77] Y. Pinchover. Maximum and anti-maximum principles and eigenfunctions estimates via perturbation theory of positive solutions of elliptic equations. Math. Ann., 314:555–590, 1999. [78] R.G. Pinsky. Positive harmonic functions and diffusions: An integrated analytic and probabilistic approach. Cambridge studies in advanced mathematics, 45, 1995. [79] R.G. Pinsky. A probabilistic approach bounded positive solutions for Schr¨ odinger operators with certain classes of potential. Trans. Am. Math. Soc. 360:6545–6554, 2008. [80] M.H. Protter and H.F. Weinberger. Maximum Principles in Differential Equations. Prentice-Hall, Englewood Cliffs, NJ, 1967. [81] M.H. Protter and H.F. Weinberger. A maximum principle and gradient bounds for linear elliptic equations. Indiana U. Math. J., 23:239–249, 1973. [82] P. Pucci and J. Serrin. The Maximum Principle, volume #73 of Progress in Nonlinear Differential Equations and Their Applications. Birkh¨ auser, 2007. [83] P.A. Raviart and J.M. Thomas. Introduction a ` l’analyse num´erique des ´equations aux d´eriv´ees partielles. Masson, Paris, 1983. [84] J.F. Rodrigues. Obstacle problems in mathematical physics, volume 134 of Math. Studies. 
North Holland, Amsterdam, 1987. [85] W. Rudin. Real and complex analysis. McGraw Hil, 1966. [86] L. Schwartz. Th´eorie des distributions. Hermann, 1966. [87] A.S. Shamaev, O.A. Olenik, and G.A. Yosifian. Mathematical problems in elasticity and homogenization. Studies in mathematics and its applications, volume 26. North-Holland Publ., Amsterdam and New York, 1992. [88] B. Simon. Schr¨ odinger semigroups. Bull. Am. Math. Soc., 7:447–526, 1982. [89] G. Stampacchia. Equations elliptiques du second ordre a ` coefficients discontinus. Presses de l’Universit´e de Montr´eal, 1965.
[90] G. Stampacchia. Opere scelte. Edizioni Cremonese, 1997. [91] G. Talenti. Best constants in Sobolev inequality. Ann. Math. Pura Appl., 110:353–372, 1976. [92] L. Tartar. Estimations des coefficients homog´en´eis´es, volume 704 of Lectures Notes in Mathematics, pages 364–373. Springer-Verlag, Berlin, 1977b. English translation, Estimations of homogenized coefficients, in Topics in the Mathematical Modeling of Composite Materials, ed. A. Cherkaev and R. Kohn, Birkh¨auser, Boston, pp. 9–20. [93] R. Temam. Navier–Stokes Equations. North Holland, Amsterdam, 1979. [94] F. Treves. Topological vector spaces, distributions and kernels. Academic Press, 1967. [95] G.M. Troianiello. Elliptic differential equations and obstacle problems. Plenum, New York, 1987. [96] K. Yosida. Functional Analysis. Springer Verlag, Berlin, 1978.
Index Bε (x0 ), 14 C α (Ω), 224 D (Ω), 19 D(Ω), 14 H 1 (Ω), 22 1 Hper (T ), 94 −1 H (Ω), 26 H01 (Ω), 24 1 (T ), 112 H per L∞ (Ω), 13 Lp (Ω), 13 Lploc (Ω), 13 P1 , 140 P1 method, 139 p-Laplace equation, 231 anisotropic, 61 Arzel`a–Ascoli theorem, 17, 18 asymptotic analysis, 73 average, 107 −, 42, 220, 226 bootstrap, 227 boundary condition, 37 Brouwer fixed point, 275 Cauchy–Schwarz inequality, 3 C´ea Lemma, 136 coercive, 240 coercive operator, 244 compact mapping, 31 compact operator, 126 competition, 197 composite materials, 105 conductivity, 105
conjugate number, 13
consistency, 132
convex, 4
cylinders, 73

derivative in the ν-direction, 25
difference quotients, 218
diffusion, 35
diffusion of population, 197
Dirac mass, 19
Dirichlet problem, 38
Discrete Maximum Principle, 130
distribution, 18, 19
divergence, 47
  form, 43
  formula, 47

eigenfunction, 121
eigenvalue, 121
Einstein convention, 47
elasticity, 198
elliptic problems, 43
elliptic systems, 191
Euclidean norm, 14
Euclidean scalar product, 3

finite difference, 129
finite element, 135
fixed point, 275
fluid, 35
fundamental solution, 264

Galerkin method, 135
harmonic functions, 261
heat, 35
Hilbert basis, 122, 126
Hilbert space, 3
Hölder Inequality, 32
homogenization, 105
Hopf maximum principle, 252

inhomogeneous problems, 53

Jacobian matrix, 194

Korn inequality, 200

Lamé constants, 201
Laplace equation, 37
p-Laplace equation, 231
Legendre condition, 194
Legendre–Hadamard, 194
Liouville Theorem, 261

mesh, 135
  free, 135
  size, 140
Minty Lemma, 242
mixed problem, 54
mollifiers, 13
monotone, 242, 244
monotone methods, 153

Neumann problem, 38, 48
non-degenerate, 143
nonlinear problems, 153
nonlocal problems, 166
normalized eigenfunction, 122

obstacle problem, 172

partial derivative, 18
period, 93
periodic, 93
periodic problems, 93
Poincaré Inequality, 25
population, 35
positive, 126
projection on a convex set, 4
quasilinear equations, 160

rank one matrices, 194
Rayleigh quotient, 123
regular, 143
regularity, 217

scalar product, 3
scaling, 86
Schauder fixed point, 279
Schrödinger equation, 267
self-adjoint, 126
singular perturbation, 57
Sobolev Spaces, 22
Stokes problem, 197
strong maximum principle, 247
strong solution, 38

translation method, 221
triangulation, 139, 143

variational inequalities, 5, 170

weak formulation, 35
weak maximum principle, 49
weakly, 10
Birkhäuser Advanced Texts (BAT)

Edited by
Herbert Amann, Zürich University, Switzerland
Steven G. Krantz, Washington University, St. Louis, USA
Shrawan Kumar, University of North Carolina, Chapel Hill, USA
Jan Nekovář, Université Pierre et Marie Curie, Paris, France

This series presents, at an advanced level, introductions to some of the fields of current interest in mathematics. Starting with basic concepts, fundamental results and techniques are covered, and important applications and new developments discussed. The textbooks are suitable as an introduction for students and non-specialists, and they can also be used as background material for advanced courses and seminars.
Chipot, M. Elliptic Equations: An Introductory Course (2009). ISBN 978-3-7643-9981-8

Azarin, V. Growth Theory of Subharmonic Functions (2008). In this book an account of the growth theory of subharmonic functions is given, which is directed towards its applications to entire functions of one and several complex variables. The presentation aims at converting the noble art of constructing an entire function with prescribed asymptotic behaviour to a handicraft. For this one should only construct the limit set that describes the asymptotic behaviour of the entire function. All necessary material is developed within the book, hence it will be most useful as a reference book for the construction of entire functions. ISBN 978-3-7643-8885-0

Quittner, P. / Souplet, P. Superlinear Parabolic Problems: Blow-up, Global Existence and Steady States (2007). This book is devoted to the qualitative study of solutions of superlinear elliptic and parabolic partial differential equations and systems. This class of problems contains, in particular, a number of reaction-diffusion systems which arise in various mathematical models, especially in chemistry, physics and biology. The book is self-contained and up to date, and it has a high didactic quality. It is devoted to problems that are intensively studied but have not been treated so far in depth in the book literature. The intended audience includes graduate and postgraduate students and researchers working in the field of partial differential equations and applied mathematics. ISBN 978-3-7643-8441-8

Drábek, P. / Milota, J. Methods of Nonlinear Analysis: Applications to Differential Equations (2007). ISBN 978-3-7643-8146-2

Krantz, S.G. / Parks, H.R. A Primer of Real Analytic Functions (2002). ISBN 978-0-8176-4264-8

DiBenedetto, E. Real Analysis (2002). ISBN 978-0-8176-4231-0

Estrada, R. / Kanwal, R.P. A Distributional Approach to Asymptotics: Theory and Applications (2002). ISBN 978-0-8176-4142-9

Chipot, M. ℓ goes to plus Infinity (2001). ISBN 978-3-7643-6646-9

Sohr, H. The Navier–Stokes Equations: An Elementary Functional Analytic Approach (2001). ISBN 978-3-7643-6545-5

Conlon, L. Differentiable Manifolds (2001). ISBN 978-0-8176-4134-4

Chipot, M. Elements of Nonlinear Analysis (2000). ISBN 978-3-7643-6406-9

Gracia-Bondía, J.M. / Várilly, J.C. / Figueroa, H. Elements of Noncommutative Geometry (2000). ISBN 978-0-8176-4124-5